hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
90343a144c37fdb7158800a8ee58d2d9babd43eb | 32 | py | Python | budget/database/__init__.py | deep4788/budgetManager | f7a20c9458315cf608c3ab5fdb4f2fad998d73f8 | [
"MIT"
] | 1 | 2016-10-17T16:26:33.000Z | 2016-10-17T16:26:33.000Z | budget/database/__init__.py | deep4788/budgetManager | f7a20c9458315cf608c3ab5fdb4f2fad998d73f8 | [
"MIT"
] | null | null | null | budget/database/__init__.py | deep4788/budgetManager | f7a20c9458315cf608c3ab5fdb4f2fad998d73f8 | [
"MIT"
] | null | null | null | from .sqlitedb import Datastore
| 16 | 31 | 0.84375 | 4 | 32 | 6.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 32 | 1 | 32 | 32 | 0.964286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
90349b311a943f9eccdc1f1e653c79c0456eb515 | 86 | py | Python | tests/executors/myexecutor.py | xiongma/bert2tf | 105fd1524edb703bf68aec8fde289de5923e1f78 | [
"Apache-2.0"
] | 7 | 2021-08-05T16:35:08.000Z | 2022-01-04T03:26:10.000Z | tests/executors/myexecutor.py | xiongma/bert2tf | 105fd1524edb703bf68aec8fde289de5923e1f78 | [
"Apache-2.0"
] | 96 | 2021-08-06T08:32:09.000Z | 2022-01-21T11:07:25.000Z | tests/executors/myexecutor.py | xiongma/bert2tf | 105fd1524edb703bf68aec8fde289de5923e1f78 | [
"Apache-2.0"
] | null | null | null | from bert2tf.executors import BaseExecutor
class MyExecutor(BaseExecutor):
pass
| 14.333333 | 42 | 0.802326 | 9 | 86 | 7.666667 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013699 | 0.151163 | 86 | 5 | 43 | 17.2 | 0.931507 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
5f76b81ca3bda6de560ec2cdf4f7a178d7a773ec | 47 | py | Python | inferno/extensions/containers/__init__.py | 0h-n0/inferno | f466c84ed72ff92f9113891a96ce58e19eeeff1e | [
"Apache-2.0"
] | 204 | 2017-10-10T20:58:52.000Z | 2021-12-07T03:01:19.000Z | inferno/extensions/containers/__init__.py | 0h-n0/inferno | f466c84ed72ff92f9113891a96ce58e19eeeff1e | [
"Apache-2.0"
] | 86 | 2017-10-11T11:32:36.000Z | 2021-11-15T17:47:25.000Z | inferno/extensions/containers/__init__.py | 0h-n0/inferno | f466c84ed72ff92f9113891a96ce58e19eeeff1e | [
"Apache-2.0"
] | 30 | 2017-11-16T23:21:30.000Z | 2021-11-15T15:11:00.000Z | from .graph import *
from .sequential import *
| 15.666667 | 25 | 0.744681 | 6 | 47 | 5.833333 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.170213 | 47 | 2 | 26 | 23.5 | 0.897436 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
39d8825ca2b588256a0b3ba4651b6abffb1c9fc0 | 1,610 | py | Python | tests/cli/5_test_stack_services_restart.py | zorrobyte/easyengine | a37396d0c941ef363c6a297876582ddcc37ed55b | [
"MIT"
] | 2 | 2018-11-12T12:13:28.000Z | 2021-04-22T12:04:20.000Z | tests/cli/5_test_stack_services_restart.py | zorrobyte/easyengine | a37396d0c941ef363c6a297876582ddcc37ed55b | [
"MIT"
] | 1 | 2020-10-27T19:47:50.000Z | 2020-10-27T19:47:50.000Z | tests/cli/5_test_stack_services_restart.py | zorrobyte/easyengine | a37396d0c941ef363c6a297876582ddcc37ed55b | [
"MIT"
] | null | null | null | from ee.utils import test
from ee.cli.main import get_test_app
class CliTestCaseStack(test.EETestCase):
def test_ee_cli(self):
self.app.setup()
self.app.run()
self.app.close()
def test_ee_cli_stack_services_restart_nginx(self):
self.app = get_test_app(argv=['stack', 'restart', '--nginx'])
self.app.setup()
self.app.run()
self.app.close()
def test_ee_cli_stack_services_restart_php5_fpm(self):
self.app = get_test_app(argv=['stack', 'restart', '--php'])
self.app.setup()
self.app.run()
self.app.close()
def test_ee_cli_stack_services_restart_mysql(self):
self.app = get_test_app(argv=['stack', 'restart', '--mysql'])
self.app.setup()
self.app.run()
self.app.close()
def test_ee_cli_stack_services_restart_postfix(self):
self.app = get_test_app(argv=['stack', 'restart', '--postfix'])
self.app.setup()
self.app.run()
self.app.close()
def test_ee_cli_stack_services_restart_memcached(self):
self.app = get_test_app(argv=['stack', 'restart', '--memcache'])
self.app.setup()
self.app.run()
self.app.close()
def test_ee_cli_stack_services_restart_dovecot(self):
self.app = get_test_app(argv=['stack', 'restart', '--dovecot'])
self.app.setup()
self.app.run()
self.app.close()
def test_ee_cli_stack_services_restart_all(self):
self.app = get_test_app(argv=['stack', 'restart'])
self.app.setup()
self.app.run()
self.app.close()
| 30.377358 | 72 | 0.618012 | 220 | 1,610 | 4.245455 | 0.140909 | 0.232334 | 0.085653 | 0.102784 | 0.808351 | 0.808351 | 0.808351 | 0.808351 | 0.808351 | 0.494647 | 0 | 0.000809 | 0.232298 | 1,610 | 52 | 73 | 30.961538 | 0.754854 | 0 | 0 | 0.571429 | 0 | 0 | 0.081366 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.190476 | false | 0 | 0.047619 | 0 | 0.261905 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f2d50b3b585fe62ab0195e2bc60af914d31c9ebb | 33 | py | Python | pygrpfe/__init__.py | tlamadon/pygfe | d12ee279c02c9e32f8ca0ceb0d2132d832a8d819 | [
"MIT"
] | 1 | 2021-01-20T02:38:46.000Z | 2021-01-20T02:38:46.000Z | pygrpfe/__init__.py | tlamadon/pygrpfe | d12ee279c02c9e32f8ca0ceb0d2132d832a8d819 | [
"MIT"
] | null | null | null | pygrpfe/__init__.py | tlamadon/pygrpfe | d12ee279c02c9e32f8ca0ceb0d2132d832a8d819 | [
"MIT"
] | null | null | null | from .helpers import group,train
| 16.5 | 32 | 0.818182 | 5 | 33 | 5.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121212 | 33 | 1 | 33 | 33 | 0.931034 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ffcb01fcecb669d973d69fefa46101dc1514a9b3 | 87 | py | Python | RunFile.py | GitHubEmploy/DashboardWebsite | 1b84b2df36e6a360c6b91d8d7ba5fc4f64332698 | [
"Apache-2.0"
] | null | null | null | RunFile.py | GitHubEmploy/DashboardWebsite | 1b84b2df36e6a360c6b91d8d7ba5fc4f64332698 | [
"Apache-2.0"
] | null | null | null | RunFile.py | GitHubEmploy/DashboardWebsite | 1b84b2df36e6a360c6b91d8d7ba5fc4f64332698 | [
"Apache-2.0"
] | null | null | null | import os
os.system('export FLASK_APP=run.py && flask run --host=0.0.0.0 --port=5000')
| 29 | 76 | 0.689655 | 18 | 87 | 3.277778 | 0.666667 | 0.101695 | 0.101695 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102564 | 0.103448 | 87 | 2 | 77 | 43.5 | 0.653846 | 0 | 0 | 0 | 0 | 0.5 | 0.724138 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
ffdf8d4093208fd694e716cf410296df05b18303 | 37 | py | Python | grab/selector/__init__.py | subeax/grab | 55518263c543da214d1f0cb54622bbc4fda66349 | [
"MIT"
] | null | null | null | grab/selector/__init__.py | subeax/grab | 55518263c543da214d1f0cb54622bbc4fda66349 | [
"MIT"
] | null | null | null | grab/selector/__init__.py | subeax/grab | 55518263c543da214d1f0cb54622bbc4fda66349 | [
"MIT"
] | null | null | null | from grab.selector.selector import *
| 18.5 | 36 | 0.810811 | 5 | 37 | 6 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108108 | 37 | 1 | 37 | 37 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fff2b0851c3fd49ba27637a4168b7b8fb6e11878 | 108 | py | Python | pychemia/code/fireball/__init__.py | quanshengwu/PyChemia | 98e9f7a1118b694dbda3ee75411ff8f8d7b9688b | [
"MIT"
] | 1 | 2021-03-26T12:34:45.000Z | 2021-03-26T12:34:45.000Z | pychemia/code/fireball/__init__.py | quanshengwu/PyChemia | 98e9f7a1118b694dbda3ee75411ff8f8d7b9688b | [
"MIT"
] | null | null | null | pychemia/code/fireball/__init__.py | quanshengwu/PyChemia | 98e9f7a1118b694dbda3ee75411ff8f8d7b9688b | [
"MIT"
] | null | null | null | from .fireball import FireBall, read_fireball_stdout, read_geometry_bas, write_geometry_bas, get_fdata_info
| 54 | 107 | 0.87963 | 16 | 108 | 5.4375 | 0.6875 | 0.252874 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.074074 | 108 | 1 | 108 | 108 | 0.87 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
083a386ab7b1ae28039ce6c0552b5e6eb5ec306d | 48 | py | Python | pymysql_wrapper/__init__.py | RafaelGSS/pymysql_wrapper | 1efad085abeab842064945837a8163c595e66628 | [
"MIT"
] | null | null | null | pymysql_wrapper/__init__.py | RafaelGSS/pymysql_wrapper | 1efad085abeab842064945837a8163c595e66628 | [
"MIT"
] | null | null | null | pymysql_wrapper/__init__.py | RafaelGSS/pymysql_wrapper | 1efad085abeab842064945837a8163c595e66628 | [
"MIT"
] | null | null | null | from .connection import *
from .session import * | 24 | 25 | 0.770833 | 6 | 48 | 6.166667 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.145833 | 48 | 2 | 26 | 24 | 0.902439 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
085965938213e87d086fb8a3adafbda8beafbe58 | 189 | py | Python | dashaggregator/__init__.py | tspycher/python-dashaggregator | af1712096c95648d97dde4e3d9cac094f697d2b2 | [
"MIT"
] | null | null | null | dashaggregator/__init__.py | tspycher/python-dashaggregator | af1712096c95648d97dde4e3d9cac094f697d2b2 | [
"MIT"
] | null | null | null | dashaggregator/__init__.py | tspycher/python-dashaggregator | af1712096c95648d97dde4e3d9cac094f697d2b2 | [
"MIT"
] | null | null | null | from .modulemanager import Modulemanager
from .baseresource import BaseResource
from .dashboardresource import DashboardResource
from .dashboardconfigresource import DashboardConfigResource | 47.25 | 60 | 0.899471 | 16 | 189 | 10.625 | 0.375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.079365 | 189 | 4 | 60 | 47.25 | 0.977011 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f24d25b8400f90b0123f771668786c942446e004 | 199 | py | Python | submission.py | lypnol/adventofcode-2017 | 03ced3df3eb80e5c7965c4120e3932919067cb15 | [
"MIT"
] | 16 | 2017-12-02T11:56:25.000Z | 2018-02-10T15:09:23.000Z | submission.py | lypnol/adventofcode-2017 | 03ced3df3eb80e5c7965c4120e3932919067cb15 | [
"MIT"
] | 19 | 2017-12-01T07:54:22.000Z | 2017-12-19T17:41:02.000Z | submission.py | lypnol/adventofcode-2017 | 03ced3df3eb80e5c7965c4120e3932919067cb15 | [
"MIT"
] | 4 | 2017-12-04T23:58:12.000Z | 2018-02-01T08:53:16.000Z | """
Deprecated - See runners/python.py
Backward compatibilty with previous submissions up to day 5
"""
from runners.python import Submission as SubmissionPy
class Submission(SubmissionPy):
pass
| 22.111111 | 59 | 0.788945 | 25 | 199 | 6.28 | 0.84 | 0.165605 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005882 | 0.145729 | 199 | 8 | 60 | 24.875 | 0.917647 | 0.472362 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
f2502a274dd682d7555cb615525bfb8b8cda3856 | 330 | py | Python | bitmovin_api_sdk/encoding/configurations/video/h265/__init__.py | jaythecaesarean/bitmovin-api-sdk-python | 48166511fcb9082041c552ace55a9b66cc59b794 | [
"MIT"
] | 11 | 2019-07-03T10:41:16.000Z | 2022-02-25T21:48:06.000Z | bitmovin_api_sdk/encoding/configurations/video/h265/__init__.py | jaythecaesarean/bitmovin-api-sdk-python | 48166511fcb9082041c552ace55a9b66cc59b794 | [
"MIT"
] | 8 | 2019-11-23T00:01:25.000Z | 2021-04-29T12:30:31.000Z | bitmovin_api_sdk/encoding/configurations/video/h265/__init__.py | jaythecaesarean/bitmovin-api-sdk-python | 48166511fcb9082041c552ace55a9b66cc59b794 | [
"MIT"
] | 13 | 2020-01-02T14:58:18.000Z | 2022-03-26T12:10:30.000Z | from bitmovin_api_sdk.encoding.configurations.video.h265.h265_api import H265Api
from bitmovin_api_sdk.encoding.configurations.video.h265.customdata.customdata_api import CustomdataApi
from bitmovin_api_sdk.encoding.configurations.video.h265.h265_video_configuration_list_query_params import H265VideoConfigurationListQueryParams
| 82.5 | 144 | 0.915152 | 41 | 330 | 7.04878 | 0.414634 | 0.124567 | 0.155709 | 0.186851 | 0.536332 | 0.536332 | 0.536332 | 0.536332 | 0.366782 | 0 | 0 | 0.066038 | 0.036364 | 330 | 3 | 145 | 110 | 0.842767 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f25680042fcefbed020d300f8e9803215d4c912d | 178 | py | Python | maluforce/__init__.py | rmcferrao/maluforce | 12c776dc129c8d778086e22fd8ad9de996816081 | [
"MIT"
] | 4 | 2018-11-05T15:44:02.000Z | 2021-04-14T17:18:26.000Z | maluforce/__init__.py | rmcferrao/maluforce | 12c776dc129c8d778086e22fd8ad9de996816081 | [
"MIT"
] | null | null | null | maluforce/__init__.py | rmcferrao/maluforce | 12c776dc129c8d778086e22fd8ad9de996816081 | [
"MIT"
] | 4 | 2019-08-02T16:56:35.000Z | 2021-11-03T16:32:02.000Z | from maluforce.core import Maluforce
from maluforce.reportutils import adjust_report,lod_rename,to_lod
from maluforce.fileutils import save_lod_files,read_lod_file,read_lod_files | 59.333333 | 75 | 0.898876 | 28 | 178 | 5.392857 | 0.535714 | 0.258278 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.061798 | 178 | 3 | 75 | 59.333333 | 0.904192 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f28abf45de18650dd5db170d5944974d034063db | 236 | py | Python | needy/generators/__init__.py | carlbrown/needy | 5a70726c9846f86a88be896ec39740296d503835 | [
"MIT"
] | 65 | 2015-07-21T01:40:17.000Z | 2019-06-10T10:46:28.000Z | needy/generators/__init__.py | bittorrent/needy | 31e57ad09d5fc22126e10b735c586262a50139d7 | [
"MIT"
] | 110 | 2015-07-21T01:41:40.000Z | 2017-01-18T23:13:30.000Z | needy/generators/__init__.py | bittorrent/needy | 31e57ad09d5fc22126e10b735c586262a50139d7 | [
"MIT"
] | 4 | 2015-07-20T02:45:43.000Z | 2016-07-31T21:48:39.000Z | from .jamfile import JamfileGenerator
from .pkgconfig_jam import PkgConfigJamGenerator
from .xcconfig import XCConfigGenerator
def available_generators():
return [JamfileGenerator, PkgConfigJamGenerator, XCConfigGenerator]
| 29.5 | 72 | 0.826271 | 20 | 236 | 9.65 | 0.65 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.131356 | 236 | 7 | 73 | 33.714286 | 0.941463 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | true | 0 | 0.6 | 0.2 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
f2b1672073c1dc8e87c78e093bf66541829cd6aa | 96 | py | Python | venv/lib/python3.8/site-packages/requests/sessions.py | GiulianaPola/select_repeats | 17a0d053d4f874e42cf654dd142168c2ec8fbd11 | [
"MIT"
] | 1 | 2022-02-22T04:49:18.000Z | 2022-02-22T04:49:18.000Z | venv/lib/python3.8/site-packages/requests/sessions.py | GiulianaPola/select_repeats | 17a0d053d4f874e42cf654dd142168c2ec8fbd11 | [
"MIT"
] | null | null | null | venv/lib/python3.8/site-packages/requests/sessions.py | GiulianaPola/select_repeats | 17a0d053d4f874e42cf654dd142168c2ec8fbd11 | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/e7/b3/b8/b9df7244be9e2d887e76d15ba82d643b877f13065c0a0617124ba395fb | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.427083 | 0 | 96 | 1 | 96 | 96 | 0.46875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4b4a2fea5f18a08a02e7e19bd18720f545d5b2c5 | 27,162 | py | Python | tests.py | rivergillis/pytis | c84e792d00121d05e3c27762418c5d2586f06c76 | [
"MIT"
] | null | null | null | tests.py | rivergillis/pytis | c84e792d00121d05e3c27762418c5d2586f06c76 | [
"MIT"
] | null | null | null | tests.py | rivergillis/pytis | c84e792d00121d05e3c27762418c5d2586f06c76 | [
"MIT"
] | null | null | null | import unittest
from node import Node
import main
class TestNodes(unittest.TestCase):
def test_make_node(self):
n = Node(0, 0)
self.assertEqual(n.xpos, 0)
self.assertEqual(n.ypos, 0)
self.assertEqual(n.pc, 0)
self.assertEqual(n.acc, 0)
self.assertEqual(n.bak, 0)
self.assertTrue(n.is_valid)
n.lines = ["ADD 1"]
n.parse_lines()
self.assertTrue(n.is_valid)
self.assertEqual(n.code, {0: ("ADD", 1, None)})
def test_build_io_tables(self):
n1 = Node(0, 0) # upper left node
n2 = Node(0, 1) # lower left node
n3 = Node(1, 0) # upper right node
n4 = Node(1, 1) # lower right node
nodes = [n1, n2, n3, n4]
main.build_io_tables(nodes)
self.assertIsNone(n1.adjacency["LEFT"])
self.assertEqual(n1.adjacency["DOWN"], n2)
self.assertEqual(n1.adjacency["RIGHT"], n3)
self.assertIsNone(n1.adjacency["UP"])
self.assertIsNone(n2.adjacency["LEFT"])
self.assertIsNone(n2.adjacency["DOWN"])
self.assertEqual(n2.adjacency["RIGHT"], n4)
self.assertEqual(n2.adjacency["UP"], n1)
self.assertEqual(n3.adjacency["LEFT"], n1)
self.assertEqual(n3.adjacency["DOWN"], n4)
self.assertIsNone(n3.adjacency["RIGHT"])
self.assertIsNone(n3.adjacency["UP"])
self.assertEqual(n4.adjacency["LEFT"], n2)
self.assertIsNone(n4.adjacency["DOWN"])
self.assertIsNone(n4.adjacency["RIGHT"])
self.assertEqual(n4.adjacency["UP"], n3)
def test_add_i_and_pc(self):
n = Node(0, 0)
n.lines = ["ADD 1", "ADD 50", "ADD -3"]
n.parse_lines()
self.assertTrue(n.is_valid)
n.execute_next()
self.assertEqual(n.pc, 1)
self.assertEqual(n.acc, 1)
n.execute_next()
self.assertEqual(n.pc, 2)
self.assertEqual(n.acc, 51)
n.execute_next()
self.assertEqual(n.pc, 0)
self.assertEqual(n.acc, 48)
n.execute_next()
self.assertEqual(n.pc, 1)
self.assertEqual(n.acc, 49)
def test_sub_i(self):
n = Node(0, 0)
n.lines = ["SUB 1", "SUB 50", "SUB -3"]
n.parse_lines()
self.assertTrue(n.is_valid)
n.execute_next()
self.assertEqual(n.acc, -1)
n.execute_next()
self.assertEqual(n.acc, -51)
n.execute_next()
self.assertEqual(n.acc, -48)
def test_neg(self):
n = Node(0, 0)
n.lines = ["ADD 50", "NEG", "NEG"]
n.parse_lines()
self.assertTrue(n.is_valid)
n.execute_next()
n.execute_next()
self.assertEqual(n.acc, -50)
n.execute_next()
self.assertEqual(n.acc, 50)
def test_sav_swp(self):
n = Node(0, 0)
n.lines = ["ADD 20", "SAV", "ADD 50", "SWP"]
n.parse_lines()
self.assertTrue(n.is_valid)
n.execute_next()
n.execute_next()
self.assertEqual(n.bak, n.acc)
self.assertEqual(n.bak, 20)
n.execute_next()
self.assertNotEqual(n.bak, n.acc)
self.assertEqual(n.bak, 20)
n.execute_next()
self.assertEqual(n.bak, n.acc)
self.assertEqual(n.bak, 20)
def test_labels_full_line_tis_accurate(self):
n = Node(0, 0)
n.lines = ["ADD 5", "label:", "SUB 20", "labeltwo:"]
n.parse_lines()
self.assertTrue(n.is_valid)
n.execute_next()
# adds 5, pc now points to sub 20
self.assertEqual(n.acc, 5)
self.assertEqual(n.pc, 2)
n.execute_next()
# subs 20, pc now points to add 5
self.assertEqual(n.acc, -15)
self.assertEqual(n.pc, 0)
n.execute_next()
# adds 5, pc now points to sub 20
self.assertEqual(n.acc, -10)
self.assertEqual(n.pc, 2)
def test_jmp_tis_accurate(self):
n = Node(0, 0)
n.lines = ["ADD 5", "label:", "SUB 20",
"labeltwo:", "JMP label", "ADD 20"]
n.parse_lines()
self.assertTrue(n.is_valid)
# frame 1:
n.execute_next()
# n has added 5 and the pc now points to sub 20
self.assertEqual(n.acc, 5)
self.assertEqual(n.pc, 2)
# frame 2:
n.execute_next()
# n has subbed 20 and the pc now points to jmp label
self.assertEqual(n.acc, -15)
self.assertEqual(n.pc, 4)
# frame 3:
n.execute_next()
# n has jumped and the pc now points to sub 20
self.assertEqual(n.acc, -15)
self.assertEqual(n.pc, 2)
# frame 4:
n.execute_next()
# n has subbed 20 and the pc now points to jmp label
self.assertEqual(n.acc, -35)
self.assertEqual(n.pc, 4)
def test_jez_jnz_tis_accurate(self):
n = Node(0, 0)
n.lines = ["JNZ label", "JEZ label", "ADD 1",
"label:", "ADD 5", "JEZ label", "JNZ label"]
n.parse_lines()
self.assertTrue(n.is_valid)
n.execute_next()
n.execute_next()
self.assertEqual(n.pc, 4)
self.assertEqual(n.acc, 0)
for i in range(3):
n.execute_next()
self.assertEqual(n.pc, 4)
self.assertEqual(n.acc, 5)
def test_jgz_jlz_tis_accurate(self):
n = Node(0, 0)
n.lines = ["JGZ label", "JLZ label", "ADD 20", "label:", "NEG"]
n.parse_lines()
self.assertTrue(n.is_valid)
for i in range(4):
n.execute_next()
self.assertEqual(n.pc, 0)
self.assertLess(n.acc, 0)
n.execute_next()
self.assertEqual(n.pc, 1)
n.execute_next()
self.assertEqual(n.pc, 4)
n.execute_next()
self.assertEqual(n.pc, 0)
self.assertGreater(n.acc, 0)
n.execute_next()
self.assertEqual(n.pc, 4)
def test_nop(self):
n = Node(0, 0)
n.lines = ["ADD 20", "NOP", "NOP", "ADD 20"]
n.parse_lines()
self.assertTrue(n.is_valid)
for i in range(4):
n.execute_next()
self.assertEqual(n.acc, 40)
self.assertEqual(n.pc, 0)
def test_jro_tis_accurate(self):
n = Node(0, 0)
n.lines = ["ADD 2", "LABEL:", "JRO ACC", "NOP",
"LABEL2:", "JRO -3", "NEG", "JRO ACC"]
n.parse_lines()
self.assertTrue(n.is_valid)
n.execute_next()
self.assertEqual(n.pc, 2)
self.assertEqual(n.acc, 2)
n.execute_next()
# n jumps over the next (2-1=1) lines of code (LABEL: is not a LOC) to
# JRO
self.assertEqual(n.pc, 5)
self.assertEqual(n.acc, 2)
n.execute_next()
# n jumps back over (3-1=2) lines of code to ADD 2
self.assertEqual(n.pc, 0)
self.assertEqual(n.acc, 2)
n.execute_next()
# n adds 2 and skips over the next label
self.assertEqual(n.pc, 2)
self.assertEqual(n.acc, 4)
n.execute_next()
# n jumps over (4-1=3) LOC to JRO ACC
self.assertEqual(n.pc, 7)
self.assertEqual(n.acc, 4)
n.execute_next()
# n cannot jump another further! infinite loop
self.assertEqual(n.pc, 7)
self.assertEqual(n.acc, 4)
def test_jro_negbounds_zero(self):
n = Node(0, 0)
n0 = Node(0, 1)
n.lines = ["NOP", "JRO -3", "NOP"]
n0.lines = ["NOP", "JRO 0", "NOP"]
n.parse_lines()
n0.parse_lines()
self.assertTrue(n.is_valid)
self.assertTrue(n0.is_valid)
n.execute_next()
n0.execute_next()
n.execute_next()
# n tries to jump below bounds, gets to NOP
self.assertEqual(n.pc, 0)
n0.execute_next()
# n0 tried to jump over 0 instructs, lands back on JRO
self.assertEqual(n0.pc, 1)
def test_jro_labels_bounds(self):
# note: in reality jro should jump to itself when it cannot find a spot
# but I don't think this matters much, infinite loop either way
n = Node(0,0)
n.lines = ["label:", "label2:", "JRO -1", "NOP"]
n0 = Node(0,1)
n0.lines = ["JRO 1", "label:", "label2:"]
n.parse_lines()
self.assertTrue(n.is_valid)
n.execute_next()
#n moves up to the jro
self.assertEqual(n.pc, 2)
n.execute_next()
#n tries to move under the label and cannot
self.assertEqual(n.pc, 0)
n0.parse_lines()
self.assertTrue(n0.is_valid)
n0.execute_next()
#n0 tries to move beyond the label and cannot
self.assertEqual(n0.pc, 2)
"""
def test_send_receive(self):
# note: this test is no longer accurate?
n1 = Node(0, 0)
n2 = Node(0, 1)
n1.value_to_send = 52
n1.sending = n2
n2.receiving = n1
n2.receiving_into_acc = True
self.assertEqual(n2.acc, 0)
n2.receive_value()
self.assertEqual(n2.acc, 52)
self.assertIsNone(n1.value_to_send)
self.assertIsNone(n1.sending)
self.assertIsNone(n2.receiving)
self.assertFalse(n2.receiving_into_acc)
"""
def test_mov_tis_accurate(self):
# this test is based off a TIS-100 run, this is the only accurate test
# for IO
# upon mov completion, the pc of both nodes increase in tandem
n1 = Node(0, 0) # upper left node
n2 = Node(0, 1) # lower left node
n3 = Node(1, 0) # upper right node
n4 = Node(1, 1) # lower right node
nodes = [n1, n2, n3, n4]
main.build_io_tables(nodes)
n1.lines = ["ADD 4", "MOV ACC, DOWN", "MOV RIGHT, ACC"]
n2.lines = ["MOV UP, ACC", "ADD 32",
"JMP label", "label:", "MOV ACC, RIGHT"]
n4.lines = ["MOV LEFT, UP", "NOP"]
n3.lines = ["MOV DOWN, LEFT", "NOP"]
# after 1 frame:
# n1.acc = 4
# rest waiting
for n in nodes:
n.parse_lines()
self.assertTrue(n.is_valid)
n.execute_next()
# idea: have an end_frame() func in each node that sets the
# value_to_send, called after every node has execute_next()'d
self.assertEqual(n1.acc, 4)
self.assertIsNone(n1.sending) # n1 not sending anything
self.assertEqual(n2.receiving, n1) # n2 trying to receive from n1
self.assertTrue(n2.receiving_into_acc) # into its acc
self.assertIsNone(n2.sending) # n2 not trying to send to anyone
self.assertEqual(n4.receiving, n2) # n4 trying to receive from n2
self.assertEqual(n4.sending, n3) # n4 trying to send to n3
self.assertIsNone(n4.value_to_send) # n4 has nothing to send though
self.assertEqual(n3.receiving, n4) # n3 trying to receive from n4
self.assertEqual(n3.sending, n1) # n3 trying to send to n1
self.assertIsNone(n3.value_to_send) # but has no value to send
# after 2 frames:
# n1 is sending 4 to n2
# n1 pc is still pointing to mov, has yet to increase
# n2 has yet to pick up 4 from n1
# rest waiting
for n in nodes:
n.execute_next()
self.assertEqual(n1.sending, n2)
self.assertEqual(n1.value_to_send, n1.acc) # n1 sending 4 to n2
self.assertEqual(n1.pc, 1) # n1 still on mov statement
self.assertEqual(n2.acc, 0) # n2 yet to pick up 4 from n1
self.assertEqual(n2.pc, 0) # n2 still on mov statement
self.assertEqual(n2.receiving, n1) # n2 trying to receive from n1
self.assertTrue(n2.receiving_into_acc) # into its acc
self.assertIsNone(n2.sending) # n2 not trying to send to anyone
self.assertEqual(n4.receiving, n2) # n4 trying to receive from n2
self.assertEqual(n4.sending, n3) # n4 trying to send to n3
self.assertIsNone(n4.value_to_send) # n4 has nothing to send though
self.assertEqual(n3.receiving, n4) # n3 trying to receive from n4
self.assertEqual(n3.sending, n1) # n3 trying to send to n1
self.assertIsNone(n3.value_to_send) # but has no value to send
# after 3 frames:
# n1 has moved pc down to next mov BUT ONLY POINTS TO IT
# n2 has picked up 4 from n1 and pc points to add 32
# n2.acc = 4
# rest waiting
for n in nodes:
n.execute_next()
self.assertIsNone(n1.sending)
self.assertIsNone(n1.value_to_send) # n1 not sending
self.assertIsNone(n1.receiving) # n1 not yet trying to receive from n3
self.assertFalse(n1.receiving_into_acc) # into its acc
self.assertEqual(n1.pc, 2) # n1 points to next mov
self.assertEqual(n1.acc, 4) # n1 still has 4 in its acc
self.assertEqual(n2.acc, 4) # n2 picked up 4 from n1
self.assertEqual(n2.pc, 1) # n2 moved past mov
self.assertIsNone(n2.receiving) # n2 not receiving
self.assertFalse(n2.receiving_into_acc) # into its acc
self.assertIsNone(n2.sending) # n2 not trying to send to anyone
self.assertIsNone(n2.value_to_send) # with nothing
self.assertEqual(n4.receiving, n2) # n4 trying to receive from n2
self.assertEqual(n4.sending, n3) # n4 trying to send to n3
self.assertIsNone(n4.value_to_send) # n4 has nothing to send though
self.assertEqual(n3.receiving, n4) # n3 trying to receive from n4
self.assertEqual(n3.sending, n1) # n3 trying to send to n1
self.assertIsNone(n3.value_to_send) # but has no value to send
# after 4 frames:
# n2 has added, pc points to jmp, rest waiting
for n in nodes:
n.execute_next()
self.assertIsNone(n1.sending)
self.assertIsNone(n1.value_to_send) # n1 not sending
self.assertEqual(n1.receiving, n3) # n1 trying to receive from n3
self.assertTrue(n1.receiving_into_acc) # into its acc
self.assertEqual(n1.pc, 2) # n1 still on next mov statement
self.assertEqual(n1.acc, 4) # n1 still has 4 in its acc
self.assertEqual(n2.acc, 36) # n2 has added
self.assertEqual(n2.pc, 2) # n2 moved onto jmp
self.assertIsNone(n2.receiving) # n2 not receiving
self.assertFalse(n2.receiving_into_acc) # into its acc
self.assertIsNone(n2.sending) # n2 not trying to send to anyone
self.assertIsNone(n2.value_to_send) # with nothing
self.assertEqual(n4.receiving, n2) # n4 trying to receive from n2
self.assertEqual(n4.sending, n3) # n4 trying to send to n3
self.assertIsNone(n4.value_to_send) # n4 has nothing to send though
self.assertEqual(n3.receiving, n4) # n3 trying to receive from n4
self.assertEqual(n3.sending, n1) # n3 trying to send to n1
self.assertIsNone(n3.value_to_send) # but has no value to send
# after 5 frames:
# n2 has jumped, pc now points to the line AFTER the label
# rest waiting
for n in nodes:
n.execute_next()
self.assertIsNone(n1.sending)
self.assertIsNone(n1.value_to_send) # n1 not sending
self.assertEqual(n1.receiving, n3) # n1 trying to receive from n3
self.assertTrue(n1.receiving_into_acc) # into its acc
self.assertEqual(n1.pc, 2) # n1 still on next mov statement
self.assertEqual(n1.acc, 4) # n1 still has 4 in its acc
self.assertEqual(n2.acc, 36) # n2 has added
# n2 has jumped to after the label, points to mov (not executing yet)
self.assertEqual(n2.pc, 4)
self.assertIsNone(n2.receiving) # n2 not receiving
self.assertFalse(n2.receiving_into_acc) # into its acc
self.assertIsNone(n2.sending) # n2 not trying to send to anyone
self.assertIsNone(n2.value_to_send) # with nothing
self.assertEqual(n4.receiving, n2) # n4 trying to receive from n2
self.assertEqual(n4.sending, n3) # n4 trying to send to n3
self.assertIsNone(n4.value_to_send) # n4 has nothing to send though
self.assertEqual(n3.receiving, n4) # n3 trying to receive from n4
self.assertEqual(n3.sending, n1) # n3 trying to send to n1
self.assertIsNone(n3.value_to_send) # but has no value to send
# after 6 frames:
# n2 is now sending 36 to n4, n4 has yet to pick up
# n2 pc still points to mov, has yet to increase, rest waiting
for n in nodes:
n.execute_next()
self.assertIsNone(n1.sending)
self.assertIsNone(n1.value_to_send) # n1 not sending
self.assertEqual(n1.receiving, n3) # n1 trying to receive from n3
self.assertTrue(n1.receiving_into_acc) # into its acc
self.assertEqual(n1.pc, 2) # n1 still on next mov statement
self.assertEqual(n1.acc, 4) # n1 still has 4 in its acc
self.assertEqual(n2.acc, 36) # n2 has added
self.assertEqual(n2.pc, 4) # still points to mov
self.assertIsNone(n2.receiving) # n2 not receiving
self.assertFalse(n2.receiving_into_acc) # into its acc
self.assertEqual(n2.sending, n4) # n2 now sending to n4
self.assertEqual(n2.value_to_send, 36) # n2 sending 36 to n4
self.assertEqual(n4.receiving, n2) # n4 trying to receive from n2
self.assertEqual(n4.sending, n3) # n4 trying to send to n3
self.assertFalse(n4.receiving_into_acc) # n4 not receiving into acc
self.assertIsNone(n4.value_to_send) # n4 has nothing to send though
self.assertEqual(n3.receiving, n4) # n3 trying to receive from n4
self.assertEqual(n3.sending, n1) # n3 trying to send to n1
self.assertIsNone(n3.value_to_send) # but has no value to send
self.assertFalse(n3.receiving_into_acc) # n3 not receiving into acc
# after 7 frames:
# n2 has now sent 36 to n4, n4 has picked up 36
# the pc of n2 has increased beyond the mov, but n4 has NOT increased pc
# n2 points to the first mov, but has not executed it yet
# n4 is now sending 36 to n3, n3 has yet to pick up, rest waiting
for n in nodes:
n.execute_next()
self.assertIsNone(n1.sending)
self.assertIsNone(n1.value_to_send) # n1 not sending
self.assertEqual(n1.receiving, n3) # n1 trying to receive from n3
self.assertTrue(n1.receiving_into_acc) # into its acc
self.assertEqual(n1.pc, 2) # n1 still on next mov statement
self.assertEqual(n1.acc, 4) # n1 still has 4 in its acc
self.assertEqual(n2.acc, 36) # n2 has added
self.assertEqual(n2.pc, 0) # left the mov, now back to 0 (pointing)
self.assertIsNone(n2.receiving) # n2 not receiving
self.assertFalse(n2.receiving_into_acc) # into its acc
self.assertIsNone(n2.sending) # n2 not sending
self.assertIsNone(n2.value_to_send) # n2 has nothing to send
self.assertEqual(n4.acc, 0) # n4 got 36 from n2 but not into acc
self.assertEqual(n4.pc, 0) # n4 still on mov statement (sending)
self.assertIsNone(n4.receiving) # n4 received from n2
self.assertEqual(n4.sending, n3) # n4 trying to send to n3
self.assertFalse(n4.receiving_into_acc) # n4 not receiving into acc
self.assertEqual(n4.value_to_send, 36) # n4 now sending 36
self.assertEqual(n3.receiving, n4) # n3 trying to receive from n4
self.assertEqual(n3.sending, n1) # n3 trying to send to n1
self.assertIsNone(n3.value_to_send) # but has no value to send
self.assertFalse(n3.receiving_into_acc) # n3 not receiving into acc
# after 8 frames:
# n4 has sent 36 to n3, n3 has picked up and is sending 36 to n1
# n4 pc has increased, n3 is still on the mov (now sending)
for n in nodes:
n.execute_next()
self.assertIsNone(n1.sending)
self.assertIsNone(n1.value_to_send) # n1 not sending
self.assertEqual(n1.receiving, n3) # n1 trying to receive from n3
self.assertTrue(n1.receiving_into_acc) # into its acc
self.assertEqual(n1.pc, 2) # n1 still on next mov statement
self.assertEqual(n1.acc, 4) # n1 still has 4 in its acc
self.assertEqual(n2.acc, 36) # n2 has added
self.assertEqual(n2.pc, 0) # left the mov, now back to 0
self.assertEqual(n2.receiving, n1) # n2 receiving from n1
self.assertTrue(n2.receiving_into_acc) # into its acc
self.assertIsNone(n2.sending) # n2 not sending
self.assertIsNone(n2.value_to_send) # n2 has nothing to send
self.assertEqual(n4.acc, 0) # n4 got 36 from n2 but not into acc
self.assertEqual(n4.pc, 1) # n4 has left the mov
self.assertIsNone(n4.receiving) # n4 received from n2
self.assertIsNone(n4.sending) # n4 has sent
self.assertFalse(n4.receiving_into_acc) # n4 not receiving into acc
self.assertIsNone(n4.value_to_send) # n4 not sending anymore
self.assertEqual(n3.acc, 0) # n3 received, but not into its acc
self.assertEqual(n3.pc, 0) # n3 still on that mov (sending now)
self.assertIsNone(n3.receiving) # n3 not receiving
self.assertEqual(n3.sending, n1) # n3 trying to send to n1
self.assertEqual(n3.value_to_send, 36) # n3 sending 36 to n1
self.assertFalse(n3.receiving_into_acc) # n3 not receiving into acc
# after 9 frames:
# n3 has sent to n1, now n3 pc has increased since n1 has picked it up
# n1.acc is now 36 and it is about to add 4 to it
# differences between this frame and frame 1:
# n1 contains 36 in its acc instead of 0
# n2 contains 36 in its acc instead of 0
# n3 pc points to the NOP (about to increase past)
for n in nodes:
n.execute_next()
self.assertIsNone(n1.sending) # n1 not yet sending
self.assertIsNone(n1.value_to_send) # n1 not sending
self.assertIsNone(n1.receiving) # n1 has received
self.assertFalse(n1.receiving_into_acc) # into its acc
self.assertEqual(n1.pc, 0) # n1 points to start
self.assertEqual(n1.acc, 36) # n1 now has 36 in acc (received)
self.assertEqual(n2.acc, 36) # n2 has added
self.assertEqual(n2.pc, 0) # left the mov, now back to 0
self.assertEqual(n2.receiving, n1) # n2 receiving from n1
self.assertTrue(n2.receiving_into_acc) # into its acc
self.assertIsNone(n2.sending) # n2 not sending
self.assertIsNone(n2.value_to_send) # n2 has nothing to send
self.assertEqual(n4.acc, 0) # n4 got 36 from n2 but not into acc
self.assertEqual(n4.pc, 0) # n4 has left the NOP, back to start
self.assertEqual(n4.receiving, n2) # n4 receiving from n2
self.assertEqual(n4.sending, n3) # n4 sending to n3
self.assertFalse(n4.receiving_into_acc) # n4 not receiving into acc
self.assertIsNone(n4.value_to_send) # n4 has nothing to send yet
self.assertEqual(n3.acc, 0) # n3 received, but not into its acc
self.assertEqual(n3.pc, 1) # n3 left mov, points to NOP
self.assertIsNone(n3.receiving) # n3 not receiving
self.assertIsNone(n3.sending) # n3 no longer sending
self.assertIsNone(n3.value_to_send) # n3 no longer sending
self.assertFalse(n3.receiving_into_acc) # n3 not receiving into acc
"""
def test_mov_with_delay(self):
n1 = Node(0, 0) # upper left node
n2 = Node(0, 1) # lower left node
n3 = Node(1, 0) # upper right node
n4 = Node(1, 1) # lower right node
nodes = [n1, n2, n3, n4]
main.build_io_tables(nodes)
n1.lines = ["ADD 4", "MOV ACC, DOWN", "MOV RIGHT, ACC"]
n2.lines = ["MOV UP, ACC", "ADD 32", "MOV ACC, RIGHT"]
n4.lines = ["MOV LEFT, UP", "NOP"]
n3.lines = ["MOV DOWN, LEFT", "NOP"]
# frame 0:
for n in nodes: # set up and execute one frame
n.parse_lines()
self.assertTrue(n.is_valid)
n.execute_next()
# frame 1:
# n1 has 4 in ACC
self.assertEqual(n2.receiving, n1) # n2 trying to receive from n1
self.assertTrue(n2.receiving_into_acc) # into its acc
self.assertIsNone(n2.sending) # n2 not trying to send to anyone
self.assertEqual(n4.receiving, n2) # n4 trying to receive from n2
self.assertEqual(n4.sending, n3) # n4 trying to send to n3
self.assertIsNone(n4.value_to_send) # n4 has nothing to send though
self.assertEqual(n3.receiving, n4) # n3 trying to receive from n4
self.assertEqual(n3.sending, n1) # n3 trying to send to n1
self.assertIsNone(n3.value_to_send) # but has no value to send
for n in nodes[1:]:
self.assertEqual(n.pc, 0) # the later 3 nodes have not moved pc
self.assertEqual(n1.pc, 1) # but n1 has moved on
# frame 2:
# n1 should move 4 down, which n2 picks up and puts into acc
# n3 and n4 still waiting
for n in nodes:
n.execute_next()
self.assertIsNone(n2.receiving) # n2 no longer receiving
self.assertFalse(n2.receiving_into_acc) # into its acc
self.assertIsNone(n2.sending) # n2 not trying to send to anyone
self.assertEqual(n2.acc, 4) # n2 got 4 into its acc
self.assertEqual(n4.receiving, n2) # n4 trying to receive from n2
self.assertEqual(n4.sending, n3) # n4 trying to send to n3
self.assertIsNone(n4.value_to_send) # n4 has nothing to send though
self.assertEqual(n3.receiving, n4) # n3 trying to receive from n4
self.assertEqual(n3.sending, n1) # n3 trying to send to n1
self.assertIsNone(n3.value_to_send) # but has no value to send
self.assertEqual(n1.pc, 2)
self.assertEqual(n2.pc, 1)
self.assertEqual(n3.pc, 0)
self.assertEqual(n4.pc, 0)
# frame 3:
# n1 now stuck on its MOV RIGHT, ACC (waiting on n3); n2 adds, n3 and n4 wait
for n in nodes:
n.execute_next()
self.assertIsNone(n2.receiving) # n2 not receiving
self.assertFalse(n2.receiving_into_acc) # into its acc
self.assertIsNone(n2.sending) # n2 not trying to send to anyone
self.assertEqual(n2.acc, 36) # n2 added 32 to 4
self.assertEqual(n4.receiving, n2) # n4 trying to receive from n2
self.assertEqual(n4.sending, n3) # n4 trying to send to n3
self.assertIsNone(n4.value_to_send) # n4 has nothing to send though
self.assertEqual(n3.receiving, n4) # n3 trying to receive from n4
self.assertEqual(n3.sending, n1) # n3 trying to send to n1
self.assertIsNone(n3.value_to_send) # but has no value to send
self.assertEqual(n1.pc, 2)
self.assertEqual(n2.pc, 2)
self.assertEqual(n3.pc, 0)
self.assertEqual(n4.pc, 0)
# frame 4:
# n1 waiting, n2 moves 36 into n4, which moves that into n3, which will pass it on to n1
"""
if __name__ == '__main__':
unittest.main()
# ---- tests/TestHyperGraph.py — oberbichler/HyperGraph (MIT) ----
import unittest
import hypergraph as hg
import numpy as np
from numpy.testing import assert_equal, assert_array_equal, assert_almost_equal, assert_array_almost_equal
class TestHyperGraph(unittest.TestCase):
def test_init(self):
graph = hg.HyperGraph()
def test_new_variable(self):
graph = hg.HyperGraph()
variable = graph.new_variable(5)
assert_equal(variable.value, 5)
def test_new_variables(self):
graph = hg.HyperGraph()
a, b, c = graph.new_variables([5, 6, 7])
assert_equal(a.value, 5)
assert_equal(b.value, 6)
assert_equal(c.value, 7)
# addition
def test_addition_variable_variable(self):
graph = hg.HyperGraph()
a, b = graph.new_variables([5, 6])
result = a + b
assert_equal(result.value, 11)
graph.compute(result)
g = graph.g()
h = graph.h()
assert_array_equal(g, [1, 1])
assert_array_equal(h, [[0, 0], [0, 0]])
def test_addition_variable_constant(self):
graph = hg.HyperGraph()
a, b = graph.new_variables([5, 6])
result = a + 6
assert_equal(result.value, 11)
graph.compute(result)
g = graph.g()
h = graph.h()
assert_array_equal(g, [1, 0])
assert_array_equal(h, [[0, 0], [0, 0]])
def test_addition_constant_variable(self):
graph = hg.HyperGraph()
a, b = graph.new_variables([5, 6])
result = 5 + b
assert_equal(result.value, 11)
graph.compute(result)
g = graph.g()
h = graph.h()
assert_array_equal(g, [0, 1])
assert_array_equal(h, [[0, 0], [0, 0]])
# subtraction
def test_subtraction_variable_variable(self):
graph = hg.HyperGraph()
a, b = graph.new_variables([5, 6])
result = a - b
assert_equal(result.value, -1)
graph.compute(result)
g = graph.g()
h = graph.h()
assert_array_equal(g, [1, -1])
assert_array_equal(h, [[0, 0], [0, 0]])
def test_subtraction_variable_constant(self):
graph = hg.HyperGraph()
a, b = graph.new_variables([5, 6])
result = a - 6
assert_equal(result.value, -1)
graph.compute(result)
g = graph.g()
h = graph.h()
assert_array_equal(g, [1, 0])
assert_array_equal(h, [[0, 0], [0, 0]])
def test_subtraction_constant_variable(self):
graph = hg.HyperGraph()
a, b = graph.new_variables([5, 6])
result = 5 - b
assert_equal(result.value, -1)
graph.compute(result)
g = graph.g()
h = graph.h()
assert_array_equal(g, [0, -1])
assert_array_equal(h, [[0, 0], [0, 0]])
# multiplication
def test_multiplication_variable_variable(self):
graph = hg.HyperGraph()
a, b = graph.new_variables([5, 6])
result = a * b
assert_equal(result.value, 30)
graph.compute(result)
g = graph.g()
h = graph.h()
assert_array_equal(g, [6, 5])
assert_array_equal(h, [[0, 1], [0, 0]])
def test_multiplication_variable_constant(self):
graph = hg.HyperGraph()
a, b = graph.new_variables([5, 6])
result = a * 6
assert_equal(result.value, 30)
graph.compute(result)
g = graph.g()
h = graph.h()
assert_array_equal(g, [6, 0])
assert_array_equal(h, [[0, 0], [0, 0]])
def test_multiplication_constant_variable(self):
graph = hg.HyperGraph()
a, b = graph.new_variables([5, 6])
result = 5 * b
assert_equal(result.value, 30)
graph.compute(result)
g = graph.g()
h = graph.h()
assert_array_equal(g, [0, 5])
assert_array_equal(h, [[0, 0], [0, 0]])
# division
def test_division_variable_variable(self):
graph = hg.HyperGraph()
a, b = graph.new_variables([5, 6])
result = a / b
assert_almost_equal(result.value, 5 / 6)
graph.compute(result)
g = graph.g()
h = graph.h()
assert_array_almost_equal(g, [1/6, -5/36])
assert_array_almost_equal(h, [[0, -1/36], [0, 5/108]])
def test_division_variable_constant(self):
graph = hg.HyperGraph()
a, b = graph.new_variables([5, 6])
result = a / 6
assert_almost_equal(result.value, 5 / 6)
graph.compute(result)
g = graph.g()
h = graph.h()
assert_array_almost_equal(g, [1/6, 0])
assert_array_equal(h, [[0, 0], [0, 0]])
def test_division_constant_variable(self):
graph = hg.HyperGraph()
a, b = graph.new_variables([5, 6])
result = 5 / b
assert_almost_equal(result.value, 5 / 6)
graph.compute(result)
g = graph.g()
h = graph.h()
assert_array_almost_equal(g, [0, -5/36])
assert_array_almost_equal(h, [[0, 0], [0, 5/108]])
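The analytic values asserted in the division tests above — for f(a, b) = a/b at (5, 6): df/da = 1/b = 1/6, df/db = -a/b² = -5/36, d²f/dadb = -1/b² = -1/36, d²f/db² = 2a/b³ = 5/108 — can be cross-checked numerically. A stdlib-only sketch using central finite differences (helper names are made up for illustration):

```python
# Central-difference cross-check of the division gradients/Hessian entries
# asserted above, at (a, b) = (5, 6).

def f(a, b):
    return a / b

def d_da(a, b, eps=1e-6):
    # first derivative w.r.t. a
    return (f(a + eps, b) - f(a - eps, b)) / (2 * eps)

def d_db(a, b, eps=1e-6):
    # first derivative w.r.t. b
    return (f(a, b + eps) - f(a, b - eps)) / (2 * eps)

def d2_dadb(a, b, eps=1e-4):
    # mixed second derivative: central difference of d_da over b
    return (d_da(a, b + eps) - d_da(a, b - eps)) / (2 * eps)

def d2_db2(a, b, eps=1e-4):
    # pure second derivative w.r.t. b
    return (f(a, b + eps) - 2 * f(a, b) + f(a, b - eps)) / eps**2
```

These agree with the graph's g()/h() values to well within the assert tolerances used above.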
# trigonometry
def test_cos(self):
graph = hg.HyperGraph()
a, b = graph.new_variables([5, 6])
result = np.cos(a)
assert_almost_equal(result.value, np.cos(5))
graph.compute(result)
g = graph.g()
h = graph.h()
assert_array_almost_equal(g, [-np.sin(5), 0])
assert_array_almost_equal(h, [[-np.cos(5), 0], [0, 0]])
def test_sin(self):
graph = hg.HyperGraph()
a, b = graph.new_variables([5, 6])
result = np.sin(a)
assert_almost_equal(result.value, np.sin(5))
graph.compute(result)
g = graph.g()
h = graph.h()
assert_array_almost_equal(g, [np.cos(5), 0])
assert_array_almost_equal(h, [[-np.sin(5), 0], [0, 0]])
def test_tan(self):
graph = hg.HyperGraph()
a, b = graph.new_variables([5, 6])
result = np.tan(a)
assert_almost_equal(result.value, np.tan(5))
graph.compute(result)
g = graph.g()
h = graph.h()
assert_array_almost_equal(g, [1/np.cos(5)**2, 0])
assert_array_almost_equal(h, [[2/np.cos(5)**2*np.tan(5), 0], [0, 0]])
def test_acos(self):
graph = hg.HyperGraph()
a, b = graph.new_variables([3/4, 6])
result = np.arccos(a)
assert_almost_equal(result.value, np.arccos(3/4))
graph.compute(result)
g = graph.g()
h = graph.h()
assert_array_almost_equal(g, [-4/np.sqrt(7), 0])
assert_array_almost_equal(h, [[-48/(7*np.sqrt(7)), 0], [0, 0]])
def test_asin(self):
graph = hg.HyperGraph()
a, b = graph.new_variables([3/4, 6])
result = np.arcsin(a)
assert_almost_equal(result.value, np.arcsin(3/4))
graph.compute(result)
g = graph.g()
h = graph.h()
assert_array_almost_equal(g, [4/np.sqrt(7), 0])
assert_array_almost_equal(h, [[48/(7*np.sqrt(7)), 0], [0, 0]])
def test_atan(self):
graph = hg.HyperGraph()
a, b = graph.new_variables([3/4, 6])
result = np.arctan(a)
assert_almost_equal(result.value, np.arctan(3/4))
graph.compute(result)
g = graph.g()
h = graph.h()
assert_array_almost_equal(g, [16/25, 0])
assert_array_almost_equal(h, [[-384/625, 0], [0, 0]])
# other
def test_sqrt(self):
graph = hg.HyperGraph()
a, b = graph.new_variables([5, 6])
result = np.sqrt(a)
assert_almost_equal(result.value, np.sqrt(5))
graph.compute(result)
g = graph.g()
h = graph.h()
assert_array_almost_equal(g, [1/(2*np.sqrt(5)), 0])
assert_array_almost_equal(h, [[-1/(20*np.sqrt(5)), 0], [0, 0]])
# vector
def test_cross(self):
graph = hg.HyperGraph()
ax, ay, az, bx, by, bz = graph.new_variables([1, 2, 3, 4, 5, 6])
a = np.array([ax, ay, az])
b = np.array([bx, by, bz])
result = np.cross(a, b)
assert_almost_equal(result[0].value, -3)
assert_almost_equal(result[1].value, 6)
assert_almost_equal(result[2].value, -3)
graph.compute(result[0])
g = graph.g()
h = graph.h()
assert_array_almost_equal(g, [0, 6, -5, 0, -3, 2])
assert_array_almost_equal(h, [[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 1],
[0, 0, 0, 0, -1, 0],
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0]])
graph.compute(result[1])
g = graph.g()
h = graph.h()
assert_array_almost_equal(g, [-6, 0, 4, 3, 0, -1])
assert_array_almost_equal(h, [[0, 0, 0, 0, 0, -1],
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 1, 0, 0],
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0]])
graph.compute(result[2])
g = graph.g()
h = graph.h()
assert_array_almost_equal(g, [5, -4, 0, -2, 1, 0])
assert_array_almost_equal(h, [[0, 0, 0, 0, 1, 0],
[0, 0, 0, -1, 0, 0],
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0]])
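The three gradient rows asserted in test_cross follow from the closed form d(a×b)/da = -skew(b) and d(a×b)/db = skew(a), where skew(v) is the 3×3 cross-product matrix. A plain-Python sketch of that identity at a = (1, 2, 3), b = (4, 5, 6) (helper names are illustrative, not part of hypergraph):

```python
# Jacobian of the cross product c = a x b as a 3x6 matrix whose columns
# are (ax, ay, az, bx, by, bz): [-skew(b) | skew(a)].

def skew(v):
    # skew(v) @ w == v x w
    x, y, z = v
    return [[0, -z, y],
            [z, 0, -x],
            [-y, x, 0]]

def cross_jacobian(a, b):
    da = [[-e for e in row] for row in skew(b)]  # d(a x b)/da = -skew(b)
    db = skew(a)                                 # d(a x b)/db = +skew(a)
    return [da[i] + db[i] for i in range(3)]
```

Row i of this matrix matches the g() vector the test asserts after computing result[i].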
def test_dot(self):
graph = hg.HyperGraph()
ax, ay, az, bx, by, bz = graph.new_variables([1, 2, 3, 4, 5, 6])
a = np.array([ax, ay, az])
b = np.array([bx, by, bz])
result = np.dot(a, b)
assert_almost_equal(result.value, 32)
graph.compute(result)
g = graph.g()
h = graph.h()
assert_array_almost_equal(g, [4, 5, 6, 1, 2, 3])
assert_array_almost_equal(h, [[0, 0, 0, 1, 0, 0],
[0, 0, 0, 0, 1, 0],
[0, 0, 0, 0, 0, 1],
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0]])
def test_norm(self):
graph = hg.HyperGraph()
ax, ay, az, bx, by, bz = graph.new_variables([1, 2, 3, 4, 5, 6])
a = np.array([ax, ay, az])
b = np.array([bx, by, bz])
result = np.linalg.norm(np.cross(a, b))
assert_almost_equal(result.value, 3*np.sqrt(6))
graph.compute(result)
g = graph.g()
h = graph.h()
assert_array_almost_equal(g, [-17/np.sqrt(6), -np.sqrt(2/3), 13/np.sqrt(6), 4*np.sqrt(2/3), np.sqrt(2/3), -2*np.sqrt(2/3)])
assert_array_almost_equal(h, [[77/(18*np.sqrt(6)), -77/(9*np.sqrt(6)), 77/(18*np.sqrt(6)), -8*np.sqrt(2/3)/9, 23/(9*np.sqrt(6)), -17*np.sqrt(2/3)/9],
[0, 77*np.sqrt(2/3)/9, -77/(9*np.sqrt(6)), 41/(9*np.sqrt(6)), -32*np.sqrt(2/3)/9, 23/(9*np.sqrt(6))],
[0, 0, 77/(18*np.sqrt(6)), np.sqrt(2/3)/9, 41/(9*np.sqrt(6)), -8*np.sqrt(2/3)/9],
[0, 0, 0, 7/(9*np.sqrt(6)), -7*np.sqrt(2/3)/9, 7/(9*np.sqrt(6))],
[0, 0, 0, 0, 14*np.sqrt(2/3)/9, -7*np.sqrt(2/3)/9],
[0, 0, 0, 0, 0, 7/(9*np.sqrt(6))]])
def test_full_hessian(self):
graph = hg.HyperGraph()
ax, ay, az, bx, by, bz = graph.new_variables([1, 2, 3, 4, 5, 6])
a = np.array([ax, ay, az])
b = np.array([bx, by, bz])
result = np.linalg.norm(np.cross(a, b))
graph.compute(result)
h = graph.h(full=True)
assert_array_almost_equal(np.triu(h), [[77/(18*np.sqrt(6)), -77/(9*np.sqrt(6)), 77/(18*np.sqrt(6)), -8*np.sqrt(2/3)/9, 23/(9*np.sqrt(6)), -17*np.sqrt(2/3)/9],
[0, 77*np.sqrt(2/3)/9, -77/(9*np.sqrt(6)), 41/(9*np.sqrt(6)), -32*np.sqrt(2/3)/9, 23/(9*np.sqrt(6))],
[0, 0, 77/(18*np.sqrt(6)), np.sqrt(2/3)/9, 41/(9*np.sqrt(6)), -8*np.sqrt(2/3)/9],
[0, 0, 0, 7/(9*np.sqrt(6)), -7*np.sqrt(2/3)/9, 7/(9*np.sqrt(6))],
[0, 0, 0, 0, 14*np.sqrt(2/3)/9, -7*np.sqrt(2/3)/9],
[0, 0, 0, 0, 0, 7/(9*np.sqrt(6))]])
assert_array_almost_equal(np.tril(h).T, np.triu(h))
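test_full_hessian checks that the full Hessian is symmetric (np.tril(h).T equals np.triu(h)). When only the upper triangle is stored, the full symmetric matrix can be rebuilt generically — a small numpy sketch, not part of the hypergraph API:

```python
import numpy as np

def full_from_upper(u):
    """Mirror an upper-triangular matrix into a full symmetric one.

    Adding u and u.T double-counts the diagonal, so subtract it once.
    """
    return u + u.T - np.diag(np.diag(u))

u = np.triu(np.array([[1.0, 2.0, 3.0],
                      [0.0, 4.0, 5.0],
                      [0.0, 0.0, 6.0]]))
full = full_from_upper(u)
```

The reconstruction keeps the upper triangle intact and fills the lower triangle with its mirror image.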
def test_out(self):
graph = hg.HyperGraph()
ax, ay, az, bx, by, bz = graph.new_variables([1, 2, 3, 4, 5, 6])
a = np.array([ax, ay, az])
b = np.array([bx, by, bz])
result = np.linalg.norm(np.cross(a, b))
assert_almost_equal(result.value, 3*np.sqrt(6))
graph.compute(result)
g = np.empty(6)
h = np.empty((6, 6))
graph.g(out=g)
graph.h(out=h)
assert_array_almost_equal(g, [-17/np.sqrt(6), -np.sqrt(2/3), 13/np.sqrt(6), 4*np.sqrt(2/3), np.sqrt(2/3), -2*np.sqrt(2/3)])
assert_array_almost_equal(h, [[77/(18*np.sqrt(6)), -77/(9*np.sqrt(6)), 77/(18*np.sqrt(6)), -8*np.sqrt(2/3)/9, 23/(9*np.sqrt(6)), -17*np.sqrt(2/3)/9],
[0, 77*np.sqrt(2/3)/9, -77/(9*np.sqrt(6)), 41/(9*np.sqrt(6)), -32*np.sqrt(2/3)/9, 23/(9*np.sqrt(6))],
[0, 0, 77/(18*np.sqrt(6)), np.sqrt(2/3)/9, 41/(9*np.sqrt(6)), -8*np.sqrt(2/3)/9],
[0, 0, 0, 7/(9*np.sqrt(6)), -7*np.sqrt(2/3)/9, 7/(9*np.sqrt(6))],
[0, 0, 0, 0, 14*np.sqrt(2/3)/9, -7*np.sqrt(2/3)/9],
[0, 0, 0, 0, 0, 7/(9*np.sqrt(6))]])
if __name__ == '__main__':
unittest.main()
# ---- app/SuperPhy/models/sparql/__init__.py — superphy/semantic (Apache-2.0) ----
from general import *
from user import *
from genomes import *
from genes import *
from prefixes import *
# ---- instances/passenger_demand/pas-20210421-2109-int18e/35.py — LHcau/scheduling-shared-passenger-and-freight-transport-on-a-fixed-infrastructure (BSD-3-Clause) ----
"""
PASSENGERS
"""
numPassengers = 4222
passenger_arriving = (
(4, 9, 7, 8, 4, 0, 7, 9, 6, 5, 6, 0), # 0
(5, 18, 10, 3, 2, 0, 9, 14, 9, 8, 3, 0), # 1
(3, 15, 5, 6, 4, 0, 6, 8, 5, 6, 4, 0), # 2
(7, 8, 11, 5, 4, 0, 7, 13, 9, 4, 3, 0), # 3
(4, 8, 11, 5, 2, 0, 7, 14, 5, 5, 1, 0), # 4
(3, 7, 9, 4, 6, 0, 6, 8, 6, 7, 3, 0), # 5
(5, 6, 9, 5, 2, 0, 11, 12, 5, 9, 1, 0), # 6
(5, 14, 11, 4, 1, 0, 7, 11, 8, 5, 2, 0), # 7
(2, 10, 10, 10, 4, 0, 3, 18, 5, 6, 5, 0), # 8
(8, 10, 9, 5, 1, 0, 6, 10, 4, 6, 0, 0), # 9
(5, 11, 12, 4, 3, 0, 10, 10, 3, 7, 5, 0), # 10
(4, 16, 8, 5, 1, 0, 5, 15, 11, 9, 0, 0), # 11
(5, 15, 14, 6, 2, 0, 8, 22, 6, 8, 1, 0), # 12
(10, 13, 10, 6, 2, 0, 10, 9, 7, 4, 3, 0), # 13
(11, 11, 12, 5, 1, 0, 4, 14, 5, 11, 3, 0), # 14
(6, 11, 12, 8, 2, 0, 13, 12, 7, 7, 1, 0), # 15
(3, 16, 9, 5, 1, 0, 13, 14, 7, 7, 2, 0), # 16
(3, 11, 16, 7, 4, 0, 4, 11, 5, 8, 3, 0), # 17
(4, 13, 7, 3, 5, 0, 11, 9, 10, 3, 3, 0), # 18
(7, 13, 6, 7, 1, 0, 6, 10, 6, 8, 5, 0), # 19
(9, 9, 5, 5, 1, 0, 6, 7, 8, 8, 5, 0), # 20
(5, 18, 6, 2, 3, 0, 7, 12, 8, 4, 3, 0), # 21
(10, 11, 7, 8, 1, 0, 5, 8, 7, 10, 3, 0), # 22
(6, 19, 13, 1, 3, 0, 9, 8, 6, 4, 4, 0), # 23
(6, 14, 9, 3, 2, 0, 10, 16, 10, 5, 5, 0), # 24
(5, 6, 13, 12, 3, 0, 6, 14, 9, 4, 5, 0), # 25
(8, 12, 10, 4, 1, 0, 10, 13, 17, 7, 0, 0), # 26
(3, 16, 5, 8, 1, 0, 10, 9, 12, 8, 3, 0), # 27
(3, 8, 7, 8, 3, 0, 10, 11, 8, 8, 5, 0), # 28
(4, 17, 3, 5, 3, 0, 9, 20, 2, 6, 2, 0), # 29
(9, 9, 9, 4, 3, 0, 9, 8, 4, 3, 3, 0), # 30
(9, 10, 11, 6, 1, 0, 10, 12, 7, 11, 1, 0), # 31
(5, 13, 9, 5, 2, 0, 9, 17, 8, 3, 3, 0), # 32
(2, 7, 5, 10, 1, 0, 3, 13, 10, 7, 3, 0), # 33
(3, 16, 6, 3, 5, 0, 10, 8, 7, 5, 2, 0), # 34
(6, 13, 13, 6, 3, 0, 7, 13, 11, 5, 5, 0), # 35
(4, 12, 10, 3, 4, 0, 4, 17, 5, 5, 2, 0), # 36
(8, 9, 8, 3, 4, 0, 10, 11, 8, 4, 10, 0), # 37
(6, 6, 10, 3, 1, 0, 13, 7, 7, 8, 2, 0), # 38
(7, 17, 9, 5, 0, 0, 5, 9, 8, 5, 6, 0), # 39
(8, 10, 9, 5, 0, 0, 5, 11, 11, 6, 4, 0), # 40
(5, 12, 13, 6, 0, 0, 5, 13, 6, 10, 5, 0), # 41
(5, 7, 6, 7, 2, 0, 4, 12, 16, 5, 1, 0), # 42
(3, 13, 9, 2, 5, 0, 11, 13, 11, 5, 2, 0), # 43
(8, 13, 9, 9, 4, 0, 8, 12, 6, 7, 3, 0), # 44
(5, 10, 10, 7, 3, 0, 10, 11, 5, 5, 3, 0), # 45
(10, 16, 16, 3, 3, 0, 5, 16, 8, 5, 3, 0), # 46
(4, 14, 6, 6, 0, 0, 7, 17, 2, 4, 3, 0), # 47
(7, 7, 8, 4, 5, 0, 6, 16, 6, 4, 2, 0), # 48
(8, 13, 10, 6, 1, 0, 12, 10, 5, 4, 2, 0), # 49
(11, 6, 9, 9, 5, 0, 8, 11, 2, 9, 4, 0), # 50
(4, 11, 8, 3, 1, 0, 8, 16, 6, 7, 1, 0), # 51
(4, 19, 11, 3, 2, 0, 5, 11, 9, 7, 4, 0), # 52
(8, 23, 11, 7, 1, 0, 4, 15, 9, 6, 5, 0), # 53
(2, 15, 9, 6, 2, 0, 8, 16, 8, 12, 4, 0), # 54
(8, 12, 5, 3, 3, 0, 9, 15, 8, 6, 3, 0), # 55
(4, 17, 10, 10, 1, 0, 5, 10, 8, 6, 4, 0), # 56
(6, 10, 8, 9, 5, 0, 5, 13, 7, 6, 1, 0), # 57
(6, 10, 8, 0, 6, 0, 7, 9, 7, 12, 2, 0), # 58
(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), # 59
)
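In this instance file each row of passenger_arriving is a time slot and each column a station (the trailing `# 0`, `# 1`, … comments number the slots). Per-station and per-slot totals fall out of simple sums — sketched here on a tiny made-up table, not the full 60-slot data above:

```python
# Aggregating a passenger_arriving-style table (rows = time slots,
# columns = stations). demo_arriving is illustrative data only.

demo_arriving = (
    (4, 9, 7),    # slot 0
    (5, 18, 10),  # slot 1
    (3, 15, 5),   # slot 2
)

per_station = [sum(col) for col in zip(*demo_arriving)]  # column sums
per_slot = [sum(row) for row in demo_arriving]           # row sums
total = sum(per_slot)                                    # grand total
```

Applied to the real table, the grand total should account for the numPassengers figure declared above.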
station_arriving_intensity = (
(4.769372805092186, 12.233629261363635, 14.389624839331619, 11.405298913043477, 12.857451923076923, 8.562228260869567), # 0
(4.81413961808604, 12.369674877683082, 14.46734796754499, 11.46881589673913, 12.953819711538461, 8.559309850543478), # 1
(4.8583952589991215, 12.503702525252525, 14.54322622107969, 11.530934782608696, 13.048153846153847, 8.556302173913043), # 2
(4.902102161984196, 12.635567578125, 14.617204169344474, 11.591602581521737, 13.14036778846154, 8.553205638586958), # 3
(4.94522276119403, 12.765125410353535, 14.689226381748071, 11.650766304347826, 13.230375, 8.550020652173911), # 4
(4.987719490781387, 12.892231395991162, 14.759237427699228, 11.708372961956522, 13.318088942307691, 8.546747622282608), # 5
(5.029554784899035, 13.01674090909091, 14.827181876606687, 11.764369565217393, 13.403423076923078, 8.54338695652174), # 6
(5.0706910776997365, 13.138509323705808, 14.893004297879177, 11.818703125, 13.486290865384618, 8.5399390625), # 7
(5.1110908033362605, 13.257392013888888, 14.956649260925452, 11.871320652173912, 13.56660576923077, 8.536404347826087), # 8
(5.1507163959613695, 13.373244353693181, 15.018061335154243, 11.922169157608696, 13.644281249999999, 8.532783220108696), # 9
(5.1895302897278315, 13.485921717171717, 15.077185089974291, 11.971195652173915, 13.719230769230771, 8.529076086956522), # 10
(5.227494918788412, 13.595279478377526, 15.133965094794343, 12.018347146739131, 13.791367788461539, 8.525283355978262), # 11
(5.2645727172958745, 13.701173011363636, 15.188345919023137, 12.063570652173912, 13.860605769230768, 8.521405434782608), # 12
(5.3007261194029835, 13.803457690183082, 15.240272132069407, 12.106813179347826, 13.926858173076925, 8.51744273097826), # 13
(5.335917559262511, 13.90198888888889, 15.289688303341899, 12.148021739130433, 13.99003846153846, 8.513395652173912), # 14
(5.370109471027217, 13.996621981534089, 15.336539002249355, 12.187143342391304, 14.050060096153846, 8.509264605978261), # 15
(5.403264288849868, 14.087212342171718, 15.380768798200515, 12.224124999999999, 14.10683653846154, 8.50505), # 16
(5.4353444468832315, 14.173615344854797, 15.422322260604112, 12.258913722826087, 14.16028125, 8.500752241847827), # 17
(5.46631237928007, 14.255686363636363, 15.461143958868895, 12.291456521739132, 14.210307692307696, 8.496371739130435), # 18
(5.496130520193152, 14.333280772569443, 15.4971784624036, 12.321700407608695, 14.256829326923079, 8.491908899456522), # 19
(5.524761303775241, 14.40625394570707, 15.530370340616965, 12.349592391304348, 14.299759615384616, 8.487364130434782), # 20
(5.552167164179106, 14.47446125710227, 15.56066416291774, 12.375079483695652, 14.339012019230768, 8.482737839673913), # 21
(5.578310535557506, 14.537758080808082, 15.588004498714653, 12.398108695652175, 14.374499999999998, 8.47803043478261), # 22
(5.603153852063214, 14.595999790877526, 15.612335917416454, 12.418627038043478, 14.40613701923077, 8.473242323369567), # 23
(5.62665954784899, 14.649041761363636, 15.633602988431875, 12.43658152173913, 14.433836538461538, 8.468373913043479), # 24
(5.648790057067603, 14.696739366319445, 15.651750281169667, 12.451919157608696, 14.457512019230768, 8.463425611413044), # 25
(5.669507813871817, 14.738947979797977, 15.66672236503856, 12.464586956521739, 14.477076923076922, 8.458397826086957), # 26
(5.688775252414398, 14.77552297585227, 15.6784638094473, 12.474531929347828, 14.492444711538463, 8.453290964673915), # 27
(5.7065548068481124, 14.806319728535353, 15.68691918380463, 12.481701086956523, 14.503528846153845, 8.448105434782608), # 28
(5.722808911325724, 14.831193611900254, 15.69203305751928, 12.486041440217392, 14.510242788461538, 8.44284164402174), # 29
(5.7375, 14.85, 15.69375, 12.4875, 14.512500000000001, 8.4375), # 30
(5.751246651214834, 14.865621839488634, 15.692462907608693, 12.487236580882353, 14.511678590425532, 8.430077267616193), # 31
(5.7646965153452685, 14.881037215909092, 15.68863804347826, 12.486451470588234, 14.509231914893617, 8.418644565217393), # 32
(5.777855634590792, 14.896244211647728, 15.682330027173915, 12.485152389705883, 14.50518630319149, 8.403313830584706), # 33
(5.790730051150895, 14.91124090909091, 15.67359347826087, 12.483347058823531, 14.499568085106382, 8.38419700149925), # 34
(5.803325807225064, 14.926025390624996, 15.662483016304348, 12.481043198529411, 14.492403590425532, 8.361406015742128), # 35
(5.815648945012788, 14.940595738636366, 15.649053260869564, 12.478248529411767, 14.48371914893617, 8.335052811094453), # 36
(5.8277055067135555, 14.954950035511365, 15.63335883152174, 12.474970772058823, 14.47354109042553, 8.305249325337332), # 37
(5.839501534526853, 14.969086363636364, 15.615454347826088, 12.471217647058824, 14.461895744680852, 8.272107496251873), # 38
(5.851043070652174, 14.983002805397728, 15.595394429347825, 12.466996875000001, 14.44880944148936, 8.23573926161919), # 39
(5.862336157289003, 14.99669744318182, 15.573233695652176, 12.462316176470589, 14.434308510638296, 8.196256559220389), # 40
(5.873386836636828, 15.010168359374997, 15.549026766304348, 12.457183272058824, 14.418419281914893, 8.153771326836583), # 41
(5.88420115089514, 15.023413636363639, 15.522828260869566, 12.451605882352942, 14.401168085106384, 8.108395502248875), # 42
(5.894785142263428, 15.03643135653409, 15.494692798913043, 12.445591727941178, 14.38258125, 8.060241023238381), # 43
(5.905144852941176, 15.049219602272727, 15.464675, 12.439148529411764, 14.36268510638298, 8.009419827586207), # 44
(5.915286325127877, 15.061776455965909, 15.432829483695656, 12.43228400735294, 14.341505984042554, 7.956043853073464), # 45
(5.925215601023019, 15.074100000000003, 15.39921086956522, 12.425005882352941, 14.319070212765958, 7.90022503748126), # 46
(5.934938722826087, 15.086188316761364, 15.363873777173913, 12.417321874999999, 14.295404122340427, 7.842075318590705), # 47
(5.944461732736574, 15.098039488636365, 15.326872826086957, 12.409239705882353, 14.27053404255319, 7.7817066341829095), # 48
(5.953790672953963, 15.10965159801136, 15.288262635869566, 12.400767095588236, 14.24448630319149, 7.71923092203898), # 49
(5.96293158567775, 15.121022727272724, 15.248097826086958, 12.391911764705883, 14.217287234042553, 7.65476011994003), # 50
(5.971890513107417, 15.132150958806818, 15.206433016304347, 12.38268143382353, 14.188963164893616, 7.588406165667167), # 51
(5.980673497442456, 15.143034375, 15.163322826086954, 12.373083823529411, 14.159540425531915, 7.5202809970015), # 52
(5.989286580882353, 15.153671058238638, 15.118821875, 12.363126654411765, 14.129045345744682, 7.450496551724138), # 53
(5.9977358056266, 15.164059090909088, 15.072984782608694, 12.352817647058824, 14.09750425531915, 7.379164767616192), # 54
(6.00602721387468, 15.174196555397728, 15.02586616847826, 12.342164522058825, 14.064943484042553, 7.306397582458771), # 55
(6.014166847826087, 15.184081534090907, 14.977520652173913, 12.331175, 14.031389361702129, 7.232306934032984), # 56
(6.022160749680308, 15.193712109375003, 14.92800285326087, 12.319856801470587, 13.996868218085105, 7.15700476011994), # 57
(6.030014961636829, 15.203086363636363, 14.877367391304347, 12.308217647058825, 13.961406382978723, 7.0806029985007495), # 58
(0.0, 0.0, 0.0, 0.0, 0.0, 0.0), # 59
)
passenger_arriving_acc = (
(4, 9, 7, 8, 4, 0, 7, 9, 6, 5, 6, 0), # 0
(9, 27, 17, 11, 6, 0, 16, 23, 15, 13, 9, 0), # 1
(12, 42, 22, 17, 10, 0, 22, 31, 20, 19, 13, 0), # 2
(19, 50, 33, 22, 14, 0, 29, 44, 29, 23, 16, 0), # 3
(23, 58, 44, 27, 16, 0, 36, 58, 34, 28, 17, 0), # 4
(26, 65, 53, 31, 22, 0, 42, 66, 40, 35, 20, 0), # 5
(31, 71, 62, 36, 24, 0, 53, 78, 45, 44, 21, 0), # 6
(36, 85, 73, 40, 25, 0, 60, 89, 53, 49, 23, 0), # 7
(38, 95, 83, 50, 29, 0, 63, 107, 58, 55, 28, 0), # 8
(46, 105, 92, 55, 30, 0, 69, 117, 62, 61, 28, 0), # 9
(51, 116, 104, 59, 33, 0, 79, 127, 65, 68, 33, 0), # 10
(55, 132, 112, 64, 34, 0, 84, 142, 76, 77, 33, 0), # 11
(60, 147, 126, 70, 36, 0, 92, 164, 82, 85, 34, 0), # 12
(70, 160, 136, 76, 38, 0, 102, 173, 89, 89, 37, 0), # 13
(81, 171, 148, 81, 39, 0, 106, 187, 94, 100, 40, 0), # 14
(87, 182, 160, 89, 41, 0, 119, 199, 101, 107, 41, 0), # 15
(90, 198, 169, 94, 42, 0, 132, 213, 108, 114, 43, 0), # 16
(93, 209, 185, 101, 46, 0, 136, 224, 113, 122, 46, 0), # 17
(97, 222, 192, 104, 51, 0, 147, 233, 123, 125, 49, 0), # 18
(104, 235, 198, 111, 52, 0, 153, 243, 129, 133, 54, 0), # 19
(113, 244, 203, 116, 53, 0, 159, 250, 137, 141, 59, 0), # 20
(118, 262, 209, 118, 56, 0, 166, 262, 145, 145, 62, 0), # 21
(128, 273, 216, 126, 57, 0, 171, 270, 152, 155, 65, 0), # 22
(134, 292, 229, 127, 60, 0, 180, 278, 158, 159, 69, 0), # 23
(140, 306, 238, 130, 62, 0, 190, 294, 168, 164, 74, 0), # 24
(145, 312, 251, 142, 65, 0, 196, 308, 177, 168, 79, 0), # 25
(153, 324, 261, 146, 66, 0, 206, 321, 194, 175, 79, 0), # 26
(156, 340, 266, 154, 67, 0, 216, 330, 206, 183, 82, 0), # 27
(159, 348, 273, 162, 70, 0, 226, 341, 214, 191, 87, 0), # 28
(163, 365, 276, 167, 73, 0, 235, 361, 216, 197, 89, 0), # 29
(172, 374, 285, 171, 76, 0, 244, 369, 220, 200, 92, 0), # 30
(181, 384, 296, 177, 77, 0, 254, 381, 227, 211, 93, 0), # 31
(186, 397, 305, 182, 79, 0, 263, 398, 235, 214, 96, 0), # 32
(188, 404, 310, 192, 80, 0, 266, 411, 245, 221, 99, 0), # 33
(191, 420, 316, 195, 85, 0, 276, 419, 252, 226, 101, 0), # 34
(197, 433, 329, 201, 88, 0, 283, 432, 263, 231, 106, 0), # 35
(201, 445, 339, 204, 92, 0, 287, 449, 268, 236, 108, 0), # 36
(209, 454, 347, 207, 96, 0, 297, 460, 276, 240, 118, 0), # 37
(215, 460, 357, 210, 97, 0, 310, 467, 283, 248, 120, 0), # 38
(222, 477, 366, 215, 97, 0, 315, 476, 291, 253, 126, 0), # 39
(230, 487, 375, 220, 97, 0, 320, 487, 302, 259, 130, 0), # 40
(235, 499, 388, 226, 97, 0, 325, 500, 308, 269, 135, 0), # 41
(240, 506, 394, 233, 99, 0, 329, 512, 324, 274, 136, 0), # 42
(243, 519, 403, 235, 104, 0, 340, 525, 335, 279, 138, 0), # 43
(251, 532, 412, 244, 108, 0, 348, 537, 341, 286, 141, 0), # 44
(256, 542, 422, 251, 111, 0, 358, 548, 346, 291, 144, 0), # 45
(266, 558, 438, 254, 114, 0, 363, 564, 354, 296, 147, 0), # 46
(270, 572, 444, 260, 114, 0, 370, 581, 356, 300, 150, 0), # 47
(277, 579, 452, 264, 119, 0, 376, 597, 362, 304, 152, 0), # 48
(285, 592, 462, 270, 120, 0, 388, 607, 367, 308, 154, 0), # 49
(296, 598, 471, 279, 125, 0, 396, 618, 369, 317, 158, 0), # 50
(300, 609, 479, 282, 126, 0, 404, 634, 375, 324, 159, 0), # 51
(304, 628, 490, 285, 128, 0, 409, 645, 384, 331, 163, 0), # 52
(312, 651, 501, 292, 129, 0, 413, 660, 393, 337, 168, 0), # 53
(314, 666, 510, 298, 131, 0, 421, 676, 401, 349, 172, 0), # 54
(322, 678, 515, 301, 134, 0, 430, 691, 409, 355, 175, 0), # 55
(326, 695, 525, 311, 135, 0, 435, 701, 417, 361, 179, 0), # 56
(332, 705, 533, 320, 140, 0, 440, 714, 424, 367, 180, 0), # 57
(338, 715, 541, 320, 146, 0, 447, 723, 431, 379, 182, 0), # 58
(338, 715, 541, 320, 146, 0, 447, 723, 431, 379, 182, 0), # 59
)
passenger_arriving_rate = (
(4.769372805092186, 9.786903409090908, 8.63377490359897, 4.56211956521739, 2.5714903846153843, 0.0, 8.562228260869567, 10.285961538461537, 6.843179347826086, 5.755849935732647, 2.446725852272727, 0.0), # 0
(4.81413961808604, 9.895739902146465, 8.680408780526994, 4.587526358695651, 2.5907639423076922, 0.0, 8.559309850543478, 10.363055769230769, 6.881289538043478, 5.786939187017995, 2.4739349755366162, 0.0), # 1
(4.8583952589991215, 10.00296202020202, 8.725935732647814, 4.612373913043478, 2.609630769230769, 0.0, 8.556302173913043, 10.438523076923076, 6.918560869565217, 5.817290488431875, 2.500740505050505, 0.0), # 2
(4.902102161984196, 10.1084540625, 8.770322501606683, 4.636641032608694, 2.628073557692308, 0.0, 8.553205638586958, 10.512294230769232, 6.954961548913042, 5.846881667737789, 2.527113515625, 0.0), # 3
(4.94522276119403, 10.212100328282828, 8.813535829048842, 4.66030652173913, 2.6460749999999997, 0.0, 8.550020652173911, 10.584299999999999, 6.990459782608696, 5.875690552699228, 2.553025082070707, 0.0), # 4
(4.987719490781387, 10.313785116792928, 8.855542456619537, 4.6833491847826085, 2.663617788461538, 0.0, 8.546747622282608, 10.654471153846153, 7.025023777173913, 5.90369497107969, 2.578446279198232, 0.0), # 5
(5.029554784899035, 10.413392727272727, 8.896309125964011, 4.705747826086957, 2.680684615384615, 0.0, 8.54338695652174, 10.72273846153846, 7.058621739130436, 5.930872750642674, 2.603348181818182, 0.0), # 6
(5.0706910776997365, 10.510807458964646, 8.935802578727506, 4.72748125, 2.697258173076923, 0.0, 8.5399390625, 10.789032692307693, 7.0912218750000005, 5.95720171915167, 2.6277018647411614, 0.0), # 7
(5.1110908033362605, 10.60591361111111, 8.97398955655527, 4.7485282608695645, 2.7133211538461537, 0.0, 8.536404347826087, 10.853284615384615, 7.122792391304347, 5.982659704370181, 2.6514784027777774, 0.0), # 8
(5.1507163959613695, 10.698595482954543, 9.010836801092546, 4.768867663043478, 2.7288562499999993, 0.0, 8.532783220108696, 10.915424999999997, 7.153301494565217, 6.007224534061697, 2.6746488707386358, 0.0), # 9
(5.1895302897278315, 10.788737373737373, 9.046311053984574, 4.7884782608695655, 2.743846153846154, 0.0, 8.529076086956522, 10.975384615384616, 7.182717391304348, 6.030874035989716, 2.697184343434343, 0.0), # 10
(5.227494918788412, 10.87622358270202, 9.080379056876605, 4.807338858695652, 2.7582735576923074, 0.0, 8.525283355978262, 11.03309423076923, 7.2110082880434785, 6.053586037917737, 2.719055895675505, 0.0), # 11
(5.2645727172958745, 10.960938409090907, 9.113007551413881, 4.825428260869565, 2.7721211538461534, 0.0, 8.521405434782608, 11.088484615384614, 7.238142391304347, 6.0753383676092545, 2.740234602272727, 0.0), # 12
(5.3007261194029835, 11.042766152146465, 9.144163279241644, 4.8427252717391305, 2.7853716346153847, 0.0, 8.51744273097826, 11.141486538461539, 7.264087907608696, 6.096108852827762, 2.760691538036616, 0.0), # 13
(5.335917559262511, 11.121591111111112, 9.173812982005138, 4.859208695652173, 2.7980076923076918, 0.0, 8.513395652173912, 11.192030769230767, 7.288813043478259, 6.115875321336759, 2.780397777777778, 0.0), # 14
(5.370109471027217, 11.19729758522727, 9.201923401349612, 4.874857336956521, 2.810012019230769, 0.0, 8.509264605978261, 11.240048076923076, 7.312286005434782, 6.134615600899742, 2.7993243963068175, 0.0), # 15
(5.403264288849868, 11.269769873737372, 9.228461278920308, 4.88965, 2.8213673076923076, 0.0, 8.50505, 11.28546923076923, 7.334474999999999, 6.152307519280206, 2.817442468434343, 0.0), # 16
(5.4353444468832315, 11.338892275883836, 9.253393356362468, 4.903565489130434, 2.83205625, 0.0, 8.500752241847827, 11.328225, 7.3553482336956515, 6.168928904241644, 2.834723068970959, 0.0), # 17
(5.46631237928007, 11.40454909090909, 9.276686375321336, 4.916582608695652, 2.842061538461539, 0.0, 8.496371739130435, 11.368246153846156, 7.374873913043479, 6.184457583547558, 2.8511372727272724, 0.0), # 18
(5.496130520193152, 11.466624618055553, 9.298307077442159, 4.928680163043477, 2.8513658653846155, 0.0, 8.491908899456522, 11.405463461538462, 7.393020244565217, 6.198871384961439, 2.866656154513888, 0.0), # 19
(5.524761303775241, 11.525003156565655, 9.318222204370178, 4.939836956521739, 2.859951923076923, 0.0, 8.487364130434782, 11.439807692307692, 7.409755434782609, 6.212148136246785, 2.8812507891414136, 0.0), # 20
(5.552167164179106, 11.579569005681815, 9.336398497750643, 4.95003179347826, 2.8678024038461536, 0.0, 8.482737839673913, 11.471209615384614, 7.425047690217391, 6.224265665167096, 2.894892251420454, 0.0), # 21
(5.578310535557506, 11.630206464646465, 9.352802699228791, 4.95924347826087, 2.8748999999999993, 0.0, 8.47803043478261, 11.499599999999997, 7.438865217391305, 6.235201799485861, 2.907551616161616, 0.0), # 22
(5.603153852063214, 11.67679983270202, 9.367401550449872, 4.967450815217391, 2.8812274038461534, 0.0, 8.473242323369567, 11.524909615384614, 7.451176222826087, 6.244934366966581, 2.919199958175505, 0.0), # 23
(5.62665954784899, 11.719233409090908, 9.380161793059125, 4.974632608695652, 2.8867673076923075, 0.0, 8.468373913043479, 11.54706923076923, 7.461948913043478, 6.25344119537275, 2.929808352272727, 0.0), # 24
(5.648790057067603, 11.757391493055556, 9.391050168701799, 4.980767663043478, 2.8915024038461534, 0.0, 8.463425611413044, 11.566009615384614, 7.471151494565217, 6.260700112467866, 2.939347873263889, 0.0), # 25
(5.669507813871817, 11.79115838383838, 9.400033419023135, 4.985834782608695, 2.8954153846153843, 0.0, 8.458397826086957, 11.581661538461537, 7.478752173913043, 6.266688946015424, 2.947789595959595, 0.0), # 26
(5.688775252414398, 11.820418380681815, 9.40707828566838, 4.989812771739131, 2.8984889423076923, 0.0, 8.453290964673915, 11.593955769230769, 7.484719157608696, 6.271385523778919, 2.9551045951704538, 0.0), # 27
(5.7065548068481124, 11.84505578282828, 9.412151510282778, 4.992680434782609, 2.9007057692307687, 0.0, 8.448105434782608, 11.602823076923075, 7.489020652173913, 6.274767673521851, 2.96126394570707, 0.0), # 28
(5.722808911325724, 11.864954889520202, 9.415219834511568, 4.994416576086956, 2.902048557692307, 0.0, 8.44284164402174, 11.608194230769229, 7.491624864130435, 6.276813223007712, 2.9662387223800506, 0.0), # 29
(5.7375, 11.879999999999999, 9.41625, 4.995, 2.9025, 0.0, 8.4375, 11.61, 7.4925, 6.277499999999999, 2.9699999999999998, 0.0), # 30
(5.751246651214834, 11.892497471590906, 9.415477744565216, 4.994894632352941, 2.9023357180851064, 0.0, 8.430077267616193, 11.609342872340426, 7.492341948529411, 6.276985163043476, 2.9731243678977264, 0.0), # 31
(5.7646965153452685, 11.904829772727274, 9.413182826086956, 4.994580588235293, 2.901846382978723, 0.0, 8.418644565217393, 11.607385531914892, 7.49187088235294, 6.275455217391303, 2.9762074431818184, 0.0), # 32
(5.777855634590792, 11.916995369318181, 9.40939801630435, 4.994060955882353, 2.9010372606382977, 0.0, 8.403313830584706, 11.60414904255319, 7.491091433823529, 6.272932010869566, 2.9792488423295453, 0.0), # 33
(5.790730051150895, 11.928992727272727, 9.40415608695652, 4.993338823529412, 2.899913617021276, 0.0, 8.38419700149925, 11.599654468085104, 7.490008235294118, 6.269437391304347, 2.9822481818181816, 0.0), # 34
(5.803325807225064, 11.940820312499996, 9.39748980978261, 4.9924172794117645, 2.898480718085106, 0.0, 8.361406015742128, 11.593922872340425, 7.488625919117647, 6.264993206521739, 2.985205078124999, 0.0), # 35
(5.815648945012788, 11.952476590909091, 9.389431956521738, 4.9912994117647065, 2.896743829787234, 0.0, 8.335052811094453, 11.586975319148936, 7.486949117647059, 6.259621304347825, 2.988119147727273, 0.0), # 36
(5.8277055067135555, 11.96396002840909, 9.380015298913044, 4.989988308823529, 2.8947082180851056, 0.0, 8.305249325337332, 11.578832872340422, 7.484982463235293, 6.253343532608695, 2.9909900071022726, 0.0), # 37
(5.839501534526853, 11.97526909090909, 9.369272608695653, 4.988487058823529, 2.89237914893617, 0.0, 8.272107496251873, 11.56951659574468, 7.4827305882352935, 6.246181739130434, 2.9938172727272727, 0.0), # 38
(5.851043070652174, 11.986402244318182, 9.357236657608695, 4.98679875, 2.8897618882978717, 0.0, 8.23573926161919, 11.559047553191487, 7.480198125, 6.23815777173913, 2.9966005610795454, 0.0), # 39
(5.862336157289003, 11.997357954545455, 9.343940217391305, 4.984926470588235, 2.886861702127659, 0.0, 8.196256559220389, 11.547446808510635, 7.477389705882353, 6.22929347826087, 2.999339488636364, 0.0), # 40
(5.873386836636828, 12.008134687499997, 9.329416059782607, 4.982873308823529, 2.8836838563829783, 0.0, 8.153771326836583, 11.534735425531913, 7.474309963235294, 6.219610706521738, 3.002033671874999, 0.0), # 41
(5.88420115089514, 12.01873090909091, 9.31369695652174, 4.980642352941176, 2.880233617021277, 0.0, 8.108395502248875, 11.520934468085107, 7.4709635294117644, 6.209131304347826, 3.0046827272727277, 0.0), # 42
(5.894785142263428, 12.02914508522727, 9.296815679347825, 4.978236691176471, 2.8765162499999994, 0.0, 8.060241023238381, 11.506064999999998, 7.467355036764706, 6.1978771195652165, 3.0072862713068176, 0.0), # 43
(5.905144852941176, 12.03937568181818, 9.278805, 4.975659411764705, 2.8725370212765955, 0.0, 8.009419827586207, 11.490148085106382, 7.4634891176470575, 6.1858699999999995, 3.009843920454545, 0.0), # 44
(5.915286325127877, 12.049421164772726, 9.259697690217394, 4.972913602941176, 2.8683011968085106, 0.0, 7.956043853073464, 11.473204787234042, 7.459370404411764, 6.1731317934782615, 3.0123552911931815, 0.0), # 45
(5.925215601023019, 12.059280000000001, 9.239526521739132, 4.970002352941176, 2.8638140425531913, 0.0, 7.90022503748126, 11.455256170212765, 7.455003529411765, 6.159684347826087, 3.0148200000000003, 0.0), # 46
(5.934938722826087, 12.06895065340909, 9.218324266304347, 4.966928749999999, 2.859080824468085, 0.0, 7.842075318590705, 11.43632329787234, 7.450393124999999, 6.145549510869564, 3.0172376633522724, 0.0), # 47
(5.944461732736574, 12.07843159090909, 9.196123695652174, 4.9636958823529405, 2.854106808510638, 0.0, 7.7817066341829095, 11.416427234042551, 7.445543823529412, 6.130749130434782, 3.0196078977272727, 0.0), # 48
(5.953790672953963, 12.087721278409088, 9.17295758152174, 4.960306838235294, 2.8488972606382976, 0.0, 7.71923092203898, 11.39558904255319, 7.4404602573529415, 6.115305054347826, 3.021930319602272, 0.0), # 49
(5.96293158567775, 12.096818181818177, 9.148858695652175, 4.956764705882353, 2.8434574468085105, 0.0, 7.65476011994003, 11.373829787234042, 7.43514705882353, 6.099239130434783, 3.0242045454545443, 0.0), # 50
(5.971890513107417, 12.105720767045453, 9.123859809782608, 4.953072573529411, 2.837792632978723, 0.0, 7.588406165667167, 11.351170531914892, 7.429608860294118, 6.082573206521738, 3.026430191761363, 0.0), # 51
(5.980673497442456, 12.114427499999998, 9.097993695652173, 4.949233529411764, 2.8319080851063827, 0.0, 7.5202809970015, 11.32763234042553, 7.4238502941176465, 6.065329130434781, 3.0286068749999995, 0.0), # 52
(5.989286580882353, 12.122936846590909, 9.071293125, 4.945250661764706, 2.8258090691489364, 0.0, 7.450496551724138, 11.303236276595745, 7.417875992647058, 6.04752875, 3.030734211647727, 0.0), # 53
(5.9977358056266, 12.13124727272727, 9.043790869565216, 4.941127058823529, 2.8195008510638297, 0.0, 7.379164767616192, 11.278003404255319, 7.411690588235294, 6.0291939130434775, 3.0328118181818176, 0.0), # 54
(6.00602721387468, 12.139357244318182, 9.015519701086955, 4.93686580882353, 2.8129886968085103, 0.0, 7.306397582458771, 11.251954787234041, 7.405298713235295, 6.010346467391304, 3.0348393110795455, 0.0), # 55
(6.014166847826087, 12.147265227272724, 8.986512391304348, 4.9324699999999995, 2.8062778723404254, 0.0, 7.232306934032984, 11.225111489361701, 7.398705, 5.991008260869565, 3.036816306818181, 0.0), # 56
(6.022160749680308, 12.154969687500001, 8.95680171195652, 4.927942720588234, 2.7993736436170207, 0.0, 7.15700476011994, 11.197494574468083, 7.391914080882352, 5.9712011413043475, 3.0387424218750003, 0.0), # 57
(6.030014961636829, 12.16246909090909, 8.926420434782608, 4.923287058823529, 2.792281276595744, 0.0, 7.0806029985007495, 11.169125106382976, 7.384930588235295, 5.950946956521738, 3.0406172727272724, 0.0), # 58
(0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0), # 59
)
passenger_allighting_rate = (
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 0
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 1
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 2
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 3
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 4
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 5
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 6
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 7
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 8
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 9
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 10
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 11
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 12
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 13
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 14
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 15
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 16
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 17
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 18
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 19
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 20
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 21
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 22
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 23
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 24
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 25
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 26
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 27
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 28
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 29
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 30
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 31
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 32
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 33
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 34
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 35
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 36
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 37
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 38
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 39
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 40
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 41
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 42
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 43
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 44
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 45
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 46
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 47
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 48
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 49
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 50
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 51
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 52
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 53
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 54
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 55
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 56
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 57
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 58
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 59
)
"""
Parameters for reproducibility. More information: https://numpy.org/doc/stable/reference/random/parallel.html
"""
# initial entropy
entropy = 258194110137029475889902652135037600173
# index for seed sequence child
child_seed_index = (
1, # 0
34, # 1
)
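These values can be fed back into NumPy's SeedSequence machinery to rebuild each child stream deterministically. A minimal sketch, assuming numpy is available; the constant mirrors the entropy recorded above, and `rng_for_child` is an illustrative helper, not part of this file:

```python
import numpy as np

# The root entropy recorded above.
ENTROPY = 258194110137029475889902652135037600173

def rng_for_child(index):
    """Rebuild the RNG stream for one recorded child seed index."""
    # spawn_key addresses a specific child directly, without having to
    # spawn all of its siblings first (equivalent to SeedSequence(ENTROPY).spawn(n)[index])
    child_seq = np.random.SeedSequence(ENTROPY, spawn_key=(index,))
    return np.random.default_rng(child_seq)

# The same child index always reproduces the same stream:
a = rng_for_child(34).random(3)
b = rng_for_child(34).random(3)
```

Different indices yield statistically independent streams, which is the point of recording the child indices alongside the entropy.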

# --- 08-inheritance/multiple_inheritance_example.py (johnehunt/Python3Intro, Apache-2.0) ---
class A:
def __str__(self):
return 'A'
def print_info(self):
print('A')
class B:
def __str__(self):
return 'B'
class C:
def __str__(self):
return 'C'
def get_data(self):
return 'CData'
class D:
def __str__(self):
return 'D'
def print_info(self):
print('D')
class E:
def __str__(self):
return 'E'
def print_info(self):
print('E')
class F(C, D, E):
def __str__(self):
return super().__str__() + 'F'
def get_data(self):
return super().get_data() + 'FData'
def print_info(self):
print('F' + self.get_data())
class G(C, D, E):
def __str__(self):
return super().__str__() + 'G'
def get_data(self):
return super().get_data() + 'GData'
class H(F, G):
    # class H(G, F):  # swapping the base order changes the MRO and the super() chain
def __str__(self):
return super().__str__() + 'H'
def print_info(self):
print('H' + self.get_data())
class J(H):
def __str__(self):
return super().__str__() + 'J'
class I(A, J):
def __str__(self):
return super().__str__() + 'I'
class X(J, H, B):
def __str__(self):
return super().__str__() + 'X'
x = X()
print('print(x):', x)
print('-' * 25)
x.print_info()
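The printed output above is determined by Python's C3 linearization, which every class exposes via `__mro__`; cooperative `super()` calls walk that linearization rather than the static base class. A minimal sketch with a reduced class set (class names echo the example above; the `tag` method is illustrative):

```python
class C:
    def tag(self):
        return 'C'

class D:
    def tag(self):
        return 'D'

class F(C, D):
    def tag(self):
        # super() here resolves to C, the next class in F's MRO,
        # so D.tag is never reached for an F instance
        return super().tag() + 'F'

mro_names = [cls.__name__ for cls in F.__mro__]
result = F().tag()
# mro_names == ['F', 'C', 'D', 'object'], result == 'CF'
```

The same walk explains why `str(X())` in the example above threads through C, G, F, H, and J before reaching X.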

# --- algo/test/test_dijkstra_shortest_path.py (ssavinash1/Algorithm_stanford, MIT) ---
import unittest
from src.graph import Graph
from src.dijkstra_shortest_path import get_frontier, \
    shortest_path_naive, shortest_path_heap
class TestDijkstraShortestPath(unittest.TestCase):
def test_shortest_path_heap(self):
""" Compute a shortest path using a naive implementation.
        Given the following graph:

            (s)---1-->(v)
             |       /|
             4   2--/ 6
             v  v     v
            (w)---3-->(t)
        """
g = Graph.build(edges=[('s', 'v', 1), ('s', 'w', 4), ('v', 'w', 2),
('v', 't', 6), ('w', 't', 3)],
directed=True)
length_to = shortest_path_heap(g, 's')
self.assertEqual(length_to['s'], 0, 'shortest path to self is 0')
self.assertEqual(length_to['v'], 1)
self.assertEqual(length_to['w'], 3)
self.assertEqual(length_to['t'], 6)
def test_shortest_path_heap_case1(self):
""" Compute a shortest path using the heap implementation. """
g = Graph.build(edges=[
('a','c',3),('c','b',10),('a','b',15),('d','b',9),
('a','d',4),('d','f',7),('d','e',3),('e','g',1),
('e','f',5),('g','f',2),('f','b',1)],
directed=True)
shortest_path = shortest_path_heap(g, 'a')
self.assertEqual(shortest_path['g'], 8)
self.assertEqual(shortest_path['b'], 11)
def test_shortest_path_heap_case2(self):
""" Compute a shortest path using the heap implementation. """
g = Graph.build(edges=[
('a','b',2),('a','c',2),('a','d',4),('a','e',2),
('a','f',4),('b','g',5),('c','d',4),('e','d',1)],
directed=True)
shortest_path = shortest_path_heap(g, 'a')
self.assertEqual(shortest_path['d'], 3)
def test_shortest_path_heap_case3(self):
""" Compute a shortest path using the heap implementation. """
g = Graph.build(edges=[
('a','b',1),('a','c',4),('a','d',4),('b','c',1),('c','d',1)],
directed=True)
shortest_path = shortest_path_heap(g, 'a')
self.assertEqual(shortest_path['d'], 3)
def test_shortest_path_naive(self):
""" Compute a shortest path using a naive implementation.
        Given the following graph:

            (s)---1-->(v)
             |       /|
             4   2--/ 6
             v  v     v
            (w)---3-->(t)
        """
g = Graph.build(edges=[('s', 'v', 1), ('s', 'w', 4), ('v', 'w', 2),
('v', 't', 6), ('w', 't', 3)],
directed=True)
length_to = shortest_path_naive(g, 's')
self.assertEqual(length_to['s'], 0, 'shortest path to self is 0')
self.assertEqual(length_to['v'], 1)
self.assertEqual(length_to['w'], 3)
self.assertEqual(length_to['t'], 6)
def test_shortest_path_naive_case1(self):
""" Compute a shortest path using the heap implementation. """
g = Graph.build(edges=[
('a','c',3),('c','b',10),('a','b',15),('d','b',9),
('a','d',4),('d','f',7),('d','e',3),('e','g',1),
('e','f',5),('g','f',2),('f','b',1)],
directed=True)
shortest_path = shortest_path_naive(g, 'a')
self.assertEqual(shortest_path['g'], 8)
self.assertEqual(shortest_path['b'], 11)
def test_shortest_path_naive_case2(self):
""" Compute a shortest path using the heap implementation. """
g = Graph.build(edges=[
('a','b',2),('a','c',2),('a','d',4),('a','e',2),
('a','f',4),('b','g',5),('c','d',4),('e','d',1)],
directed=True)
shortest_path = shortest_path_naive(g, 'a')
self.assertEqual(shortest_path['d'], 3)
def test_shortest_path_naive_case3(self):
""" Compute a shortest path using the heap implementation. """
g = Graph.build(edges=[
('a','b',1),('a','c',4),('a','d',4),('b','c',1),('c','d',1)],
directed=True)
shortest_path = shortest_path_naive(g, 'a')
self.assertEqual(shortest_path['d'], 3)
def test_get_frontier(self):
""" Makes sure frontier edges are correctly picked.
(1)--->(2)
| |
V V
(3)<---(4)
"""
g = Graph.build(edges=[(1,2), (1,3), (2,4), (4,3)], directed=True)
explored_vertices = [1,3]
actual = get_frontier(g, explored_vertices)
expected = set([(1,2,True)])
        self.assertEqual(actual, expected, 'should return only the frontier '
                         'edges leading to vertices not yet visited')
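The `src.dijkstra_shortest_path` module under test is not shown in this chunk. Assuming `shortest_path_heap` is a conventional heap-based Dijkstra, a self-contained sketch that reproduces the expected distances from the first test might look like this (names here are illustrative, not the module's actual API):

```python
import heapq
from collections import defaultdict

def dijkstra(edges, source):
    """Heap-based Dijkstra over directed (tail, head, weight) edges."""
    graph = defaultdict(list)
    for tail, head, weight in edges:
        graph[tail].append((head, weight))
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue  # stale heap entry, already relaxed via a shorter path
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

dist = dijkstra([('s', 'v', 1), ('s', 'w', 4), ('v', 'w', 2),
                 ('v', 't', 6), ('w', 't', 3)], 's')
# dist == {'s': 0, 'v': 1, 'w': 3, 't': 6}
```

The lazy-deletion pattern (pushing duplicates and skipping stale pops) is the usual workaround for `heapq` having no decrease-key operation.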

# --- saleor/graphql/translations/tests/test_translations.py (genego-dev/saleor, CC-BY-4.0) ---
import json
import graphene
import pytest
from django.contrib.auth.models import Permission
from ....tests.utils import dummy_editorjs
from ...core.enums import LanguageCodeEnum
from ...tests.utils import assert_no_permission, get_graphql_content
from ..schema import TranslatableKinds
def test_product_translation(user_api_client, product, channel_USD):
    description = dummy_editorjs("test description")
    product.translations.create(
        language_code="pl", name="Produkt", description=description
    )

    query = """
    query productById($productId: ID!, $channel: String) {
        product(id: $productId, channel: $channel) {
            translation(languageCode: PL) {
                name
                description
                descriptionJson
                language {
                    code
                }
            }
        }
    }
    """

    product_id = graphene.Node.to_global_id("Product", product.id)
    response = user_api_client.post_graphql(
        query, {"productId": product_id, "channel": channel_USD.slug}
    )
    data = get_graphql_content(response)["data"]
    translation_data = data["product"]["translation"]
    assert translation_data["name"] == "Produkt"
    assert translation_data["language"]["code"] == "PL"
    assert (
        translation_data["description"]
        == translation_data["descriptionJson"]
        == dummy_editorjs("test description", json_format=True)
    )
def test_product_translation_without_description(user_api_client, product, channel_USD):
product.translations.create(language_code="pl", name="Produkt")
query = """
query productById($productId: ID!, $channel: String) {
product(id: $productId, channel: $channel) {
translation(languageCode: PL) {
name
description
descriptionJson
language {
code
}
}
}
}
"""
product_id = graphene.Node.to_global_id("Product", product.id)
response = user_api_client.post_graphql(
query, {"productId": product_id, "channel": channel_USD.slug}
)
data = get_graphql_content(response)["data"]
translation_data = data["product"]["translation"]
assert translation_data["name"] == "Produkt"
assert translation_data["language"]["code"] == "PL"
assert translation_data["description"] is None
assert translation_data["descriptionJson"] == "{}"
def test_product_translation_with_app(app_api_client, product, channel_USD):
product.translations.create(language_code="pl", name="Produkt")
query = """
query productById($productId: ID!, $channel: String) {
product(id: $productId, channel: $channel) {
translation(languageCode: PL) {
name
language {
code
}
}
}
}
"""
product_id = graphene.Node.to_global_id("Product", product.id)
response = app_api_client.post_graphql(
query, {"productId": product_id, "channel": channel_USD.slug}
)
data = get_graphql_content(response)["data"]
assert data["product"]["translation"]["name"] == "Produkt"
assert data["product"]["translation"]["language"]["code"] == "PL"
def test_product_variant_translation(user_api_client, variant, channel_USD):
variant.translations.create(language_code="pl", name="Wariant")
query = """
query productVariantById($productVariantId: ID!, $channel: String) {
productVariant(id: $productVariantId, channel: $channel) {
translation(languageCode: PL) {
name
language {
code
}
}
}
}
"""
product_variant_id = graphene.Node.to_global_id("ProductVariant", variant.id)
response = user_api_client.post_graphql(
query, {"productVariantId": product_variant_id, "channel": channel_USD.slug}
)
data = get_graphql_content(response)["data"]
assert data["productVariant"]["translation"]["name"] == "Wariant"
assert data["productVariant"]["translation"]["language"]["code"] == "PL"
def test_collection_translation(user_api_client, published_collection, channel_USD):
    description = dummy_editorjs("test description")
    published_collection.translations.create(
        language_code="pl", name="Kolekcja", description=description
    )

    query = """
    query collectionById($collectionId: ID!, $channel: String) {
        collection(id: $collectionId, channel: $channel) {
            translation(languageCode: PL) {
                name
                description
                descriptionJson
                language {
                    code
                }
            }
        }
    }
    """

    collection_id = graphene.Node.to_global_id("Collection", published_collection.id)
    variables = {"collectionId": collection_id, "channel": channel_USD.slug}
    response = user_api_client.post_graphql(query, variables)
    data = get_graphql_content(response)["data"]
    translation_data = data["collection"]["translation"]
    assert translation_data["name"] == "Kolekcja"
    assert translation_data["language"]["code"] == "PL"
    assert (
        translation_data["description"]
        == translation_data["descriptionJson"]
        == dummy_editorjs("test description", json_format=True)
    )
def test_collection_translation_without_description(
user_api_client, published_collection, channel_USD
):
published_collection.translations.create(language_code="pl", name="Kolekcja")
query = """
query collectionById($collectionId: ID!, $channel: String) {
collection(id: $collectionId, channel: $channel) {
translation(languageCode: PL) {
name
description
descriptionJson
language {
code
}
}
}
}
"""
collection_id = graphene.Node.to_global_id("Collection", published_collection.id)
variables = {"collectionId": collection_id, "channel": channel_USD.slug}
response = user_api_client.post_graphql(query, variables)
data = get_graphql_content(response)["data"]
translation_data = data["collection"]["translation"]
assert translation_data["name"] == "Kolekcja"
assert translation_data["language"]["code"] == "PL"
assert translation_data["description"] is None
assert translation_data["descriptionJson"] == "{}"
def test_category_translation(user_api_client, category):
description = dummy_editorjs("test description")
category.translations.create(
language_code="pl", name="Kategoria", description=description
)
query = """
query categoryById($categoryId: ID!) {
category(id: $categoryId) {
translation(languageCode: PL) {
name
description
descriptionJson
language {
code
}
}
}
}
"""
category_id = graphene.Node.to_global_id("Category", category.id)
response = user_api_client.post_graphql(query, {"categoryId": category_id})
data = get_graphql_content(response)["data"]
translation_data = data["category"]["translation"]
assert translation_data["name"] == "Kategoria"
assert translation_data["language"]["code"] == "PL"
assert (
translation_data["description"]
== translation_data["descriptionJson"]
== dummy_editorjs("test description", json_format=True)
)
def test_category_translation_without_description(user_api_client, category):
category.translations.create(language_code="pl", name="Kategoria")
query = """
query categoryById($categoryId: ID!) {
category(id: $categoryId) {
translation(languageCode: PL) {
name
description
descriptionJson
language {
code
}
}
}
}
"""
category_id = graphene.Node.to_global_id("Category", category.id)
response = user_api_client.post_graphql(query, {"categoryId": category_id})
data = get_graphql_content(response)["data"]
translation_data = data["category"]["translation"]
assert translation_data["name"] == "Kategoria"
assert translation_data["language"]["code"] == "PL"
assert translation_data["description"] is None
assert translation_data["descriptionJson"] == "{}"
def test_voucher_translation(staff_api_client, voucher, permission_manage_discounts):
voucher.translations.create(language_code="pl", name="Bon")
query = """
query voucherById($voucherId: ID!) {
voucher(id: $voucherId) {
translation(languageCode: PL) {
name
language {
code
}
}
}
}
"""
voucher_id = graphene.Node.to_global_id("Voucher", voucher.id)
response = staff_api_client.post_graphql(
query, {"voucherId": voucher_id}, permissions=[permission_manage_discounts]
)
data = get_graphql_content(response)["data"]
assert data["voucher"]["translation"]["name"] == "Bon"
assert data["voucher"]["translation"]["language"]["code"] == "PL"
def test_sale_translation(staff_api_client, sale, permission_manage_discounts):
sale.translations.create(language_code="pl", name="Wyprz")
query = """
query saleById($saleId: ID!) {
sale(id: $saleId) {
translation(languageCode: PL) {
name
language {
code
}
}
}
}
"""
sale_id = graphene.Node.to_global_id("Sale", sale.id)
response = staff_api_client.post_graphql(
query, {"saleId": sale_id}, permissions=[permission_manage_discounts]
)
data = get_graphql_content(response)["data"]
assert data["sale"]["translation"]["name"] == "Wyprz"
assert data["sale"]["translation"]["language"]["code"] == "PL"
def test_page_translation(user_api_client, page):
content = dummy_editorjs("test content")
page.translations.create(language_code="pl", title="Strona", content=content)
query = """
query pageById($pageId: ID!) {
page(id: $pageId) {
translation(languageCode: PL) {
title
content
contentJson
language {
code
}
}
}
}
"""
page_id = graphene.Node.to_global_id("Page", page.id)
response = user_api_client.post_graphql(query, {"pageId": page_id})
data = get_graphql_content(response)["data"]
translation_data = data["page"]["translation"]
assert translation_data["title"] == "Strona"
assert translation_data["language"]["code"] == "PL"
assert (
translation_data["content"]
== translation_data["contentJson"]
== dummy_editorjs("test content", json_format=True)
)
def test_page_translation_without_content(user_api_client, page):
page.translations.create(language_code="pl", title="Strona")
query = """
query pageById($pageId: ID!) {
page(id: $pageId) {
translation(languageCode: PL) {
title
content
contentJson
language {
code
}
}
}
}
"""
page_id = graphene.Node.to_global_id("Page", page.id)
response = user_api_client.post_graphql(query, {"pageId": page_id})
data = get_graphql_content(response)["data"]
translation_data = data["page"]["translation"]
assert translation_data["title"] == "Strona"
assert translation_data["language"]["code"] == "PL"
assert translation_data["content"] is None
assert translation_data["contentJson"] == "{}"
def test_attribute_translation(user_api_client, color_attribute):
color_attribute.translations.create(language_code="pl", name="Kolor")
query = """
query {
attributes(first: 1) {
edges {
node {
translation(languageCode: PL) {
name
language {
code
}
}
}
}
}
}
"""
response = user_api_client.post_graphql(query)
data = get_graphql_content(response)["data"]
attribute = data["attributes"]["edges"][0]["node"]
assert attribute["translation"]["name"] == "Kolor"
assert attribute["translation"]["language"]["code"] == "PL"
def test_attribute_value_translation(user_api_client, pink_attribute_value):
pink_attribute_value.translations.create(
language_code="pl", name="Różowy", rich_text=dummy_editorjs("Pink")
)
query = """
query {
attributes(first: 1) {
edges {
node {
choices(first: 10) {
edges {
node {
translation(languageCode: PL) {
name
richText
language {
code
}
}
}
}
}
}
}
}
}
"""
attribute_value_id = graphene.Node.to_global_id(
"AttributeValue", pink_attribute_value.id
)
response = user_api_client.post_graphql(
query, {"attributeValueId": attribute_value_id}
)
data = get_graphql_content(response)["data"]
attribute_value = data["attributes"]["edges"][0]["node"]["choices"]["edges"][-1][
"node"
]
assert attribute_value["translation"]["name"] == "Różowy"
assert attribute_value["translation"]["richText"] == json.dumps(
dummy_editorjs("Pink")
)
assert attribute_value["translation"]["language"]["code"] == "PL"
def test_shipping_method_translation(
staff_api_client, shipping_method, permission_manage_shipping
):
shipping_method.translations.create(language_code="pl", name="DHL Polska")
query = """
query shippingZoneById($shippingZoneId: ID!) {
shippingZone(id: $shippingZoneId) {
shippingMethods {
translation(languageCode: PL) {
name
language {
code
}
}
}
}
}
"""
shipping_zone_id = graphene.Node.to_global_id(
"ShippingZone", shipping_method.shipping_zone.id
)
response = staff_api_client.post_graphql(
query,
{"shippingZoneId": shipping_zone_id},
permissions=[permission_manage_shipping],
)
data = get_graphql_content(response)["data"]
shipping_method = data["shippingZone"]["shippingMethods"][-1]
assert shipping_method["translation"]["name"] == "DHL Polska"
assert shipping_method["translation"]["language"]["code"] == "PL"
def test_menu_item_translation(user_api_client, menu_item):
menu_item.translations.create(language_code="pl", name="Odnośnik 1")
query = """
query menuItemById($menuItemId: ID!) {
menuItem(id: $menuItemId) {
translation(languageCode: PL) {
name
language {
code
}
}
}
}
"""
menu_item_id = graphene.Node.to_global_id("MenuItem", menu_item.id)
response = user_api_client.post_graphql(query, {"menuItemId": menu_item_id})
data = get_graphql_content(response)["data"]
assert data["menuItem"]["translation"]["name"] == "Odnośnik 1"
assert data["menuItem"]["translation"]["language"]["code"] == "PL"
def test_shop_translation(user_api_client, site_settings):
site_settings.translations.create(language_code="pl", header_text="Nagłówek")
query = """
query {
shop {
translation(languageCode: PL) {
headerText
language {
code
}
}
}
}
"""
response = user_api_client.post_graphql(query)
data = get_graphql_content(response)["data"]
assert data["shop"]["translation"]["headerText"] == "Nagłówek"
assert data["shop"]["translation"]["language"]["code"] == "PL"
def test_product_no_translation(user_api_client, product, channel_USD):
query = """
query productById($productId: ID!, $channel: String) {
product(id: $productId, channel: $channel) {
translation(languageCode: PL) {
name
language {
code
}
}
}
}
"""
product_id = graphene.Node.to_global_id("Product", product.id)
response = user_api_client.post_graphql(
query, {"productId": product_id, "channel": channel_USD.slug}
)
data = get_graphql_content(response)["data"]
assert data["product"]["translation"] is None
def test_product_variant_no_translation(user_api_client, variant, channel_USD):
query = """
query productVariantById($productVariantId: ID!, $channel: String) {
productVariant(id: $productVariantId, channel: $channel) {
translation(languageCode: PL) {
name
language {
code
}
}
}
}
"""
product_variant_id = graphene.Node.to_global_id("ProductVariant", variant.id)
response = user_api_client.post_graphql(
query, {"productVariantId": product_variant_id, "channel": channel_USD.slug}
)
data = get_graphql_content(response)["data"]
assert data["productVariant"]["translation"] is None
def test_collection_no_translation(user_api_client, published_collection, channel_USD):
query = """
query collectionById($collectionId: ID!, $channel: String) {
collection(id: $collectionId, channel: $channel) {
translation(languageCode: PL) {
name
language {
code
}
}
}
}
"""
collection_id = graphene.Node.to_global_id("Collection", published_collection.id)
variables = {"collectionId": collection_id, "channel": channel_USD.slug}
response = user_api_client.post_graphql(query, variables)
data = get_graphql_content(response)["data"]
assert data["collection"]["translation"] is None
def test_category_no_translation(user_api_client, category):
query = """
query categoryById($categoryId: ID!) {
category(id: $categoryId) {
translation(languageCode: PL) {
name
language {
code
}
}
}
}
"""
category_id = graphene.Node.to_global_id("Category", category.id)
response = user_api_client.post_graphql(query, {"categoryId": category_id})
data = get_graphql_content(response)["data"]
assert data["category"]["translation"] is None
def test_voucher_no_translation(staff_api_client, voucher, permission_manage_discounts):
query = """
query voucherById($voucherId: ID!) {
voucher(id: $voucherId) {
translation(languageCode: PL) {
name
language {
code
}
}
}
}
"""
voucher_id = graphene.Node.to_global_id("Voucher", voucher.id)
response = staff_api_client.post_graphql(
query, {"voucherId": voucher_id}, permissions=[permission_manage_discounts]
)
data = get_graphql_content(response)["data"]
assert data["voucher"]["translation"] is None
def test_page_no_translation(user_api_client, page):
query = """
query pageById($pageId: ID!) {
page(id: $pageId) {
translation(languageCode: PL) {
title
language {
code
}
}
}
}
"""
page_id = graphene.Node.to_global_id("Page", page.id)
response = user_api_client.post_graphql(query, {"pageId": page_id})
data = get_graphql_content(response)["data"]
assert data["page"]["translation"] is None
def test_attribute_no_translation(user_api_client, color_attribute):
query = """
query {
attributes(first: 1) {
edges {
node {
translation(languageCode: PL) {
name
language {
code
}
}
}
}
}
}
"""
response = user_api_client.post_graphql(query)
data = get_graphql_content(response)["data"]
attribute = data["attributes"]["edges"][0]["node"]
assert attribute["translation"] is None
def test_attribute_value_no_translation(user_api_client, pink_attribute_value):
query = """
query {
attributes(first: 1) {
edges {
node {
choices(first: 10) {
edges {
node {
translation(languageCode: PL) {
name
language {
code
}
}
}
}
}
}
}
}
}
"""
attribute_value_id = graphene.Node.to_global_id(
"AttributeValue", pink_attribute_value.id
)
response = user_api_client.post_graphql(
query, {"attributeValueId": attribute_value_id}
)
data = get_graphql_content(response)["data"]
attribute_value = data["attributes"]["edges"][0]["node"]["choices"]["edges"][-1][
"node"
]
assert attribute_value["translation"] is None
def test_shipping_method_no_translation(
staff_api_client, shipping_method, permission_manage_shipping
):
query = """
query shippingZoneById($shippingZoneId: ID!) {
shippingZone(id: $shippingZoneId) {
shippingMethods {
translation(languageCode: PL) {
name
language {
code
}
}
}
}
}
"""
shipping_zone_id = graphene.Node.to_global_id(
"ShippingZone", shipping_method.shipping_zone.id
)
response = staff_api_client.post_graphql(
query,
{"shippingZoneId": shipping_zone_id},
permissions=[permission_manage_shipping],
)
data = get_graphql_content(response)["data"]
shipping_method = data["shippingZone"]["shippingMethods"][0]
assert shipping_method["translation"] is None
def test_menu_item_no_translation(user_api_client, menu_item):
query = """
query menuItemById($menuItemId: ID!) {
menuItem(id: $menuItemId) {
translation(languageCode: PL) {
name
language {
code
}
}
}
}
"""
menu_item_id = graphene.Node.to_global_id("MenuItem", menu_item.id)
response = user_api_client.post_graphql(query, {"menuItemId": menu_item_id})
data = get_graphql_content(response)["data"]
assert data["menuItem"]["translation"] is None
def test_shop_no_translation(user_api_client, site_settings):
query = """
query {
shop {
translation(languageCode: PL) {
headerText
language {
code
}
}
}
}
"""
response = user_api_client.post_graphql(query)
data = get_graphql_content(response)["data"]
assert data["shop"]["translation"] is None
PRODUCT_TRANSLATE_MUTATION = """
mutation productTranslate($productId: ID!, $input: TranslationInput!) {
productTranslate(
id: $productId, languageCode: PL,
input: $input) {
product {
translation(languageCode: PL) {
name
description
language {
code
}
}
}
}
}
"""
def test_product_create_translation(
staff_api_client, product, permission_manage_translations
):
query = PRODUCT_TRANSLATE_MUTATION
product_id = graphene.Node.to_global_id("Product", product.id)
response = staff_api_client.post_graphql(
query,
{"productId": product_id, "input": {"name": "Produkt PL"}},
permissions=[permission_manage_translations],
)
data = get_graphql_content(response)["data"]["productTranslate"]
assert data["product"]["translation"]["name"] == "Produkt PL"
assert data["product"]["translation"]["language"]["code"] == "PL"
def test_product_create_translation_for_description(
staff_api_client, product, permission_manage_translations
):
query = PRODUCT_TRANSLATE_MUTATION
product_id = graphene.Node.to_global_id("Product", product.id)
description = dummy_editorjs("description", True)
variables = {"productId": product_id, "input": {"description": description}}
response = staff_api_client.post_graphql(
query, variables, permissions=[permission_manage_translations]
)
data = get_graphql_content(response)["data"]["productTranslate"]
assert data["product"]["translation"]["name"] is None
assert data["product"]["translation"]["description"] == description
assert data["product"]["translation"]["language"]["code"] == "PL"
def test_product_create_translation_for_description_and_name_as_null(
staff_api_client, product, permission_manage_translations
):
query = PRODUCT_TRANSLATE_MUTATION
product_id = graphene.Node.to_global_id("Product", product.id)
description = dummy_editorjs("description", True)
variables = {
"productId": product_id,
"input": {"description": description, "name": None},
}
response = staff_api_client.post_graphql(
query, variables, permissions=[permission_manage_translations]
)
data = get_graphql_content(response)["data"]["productTranslate"]
assert data["product"]["translation"]["name"] is None
assert data["product"]["translation"]["description"] == description
assert data["product"]["translation"]["language"]["code"] == "PL"
def test_product_create_translation_with_app(
app_api_client, product, permission_manage_translations
):
query = PRODUCT_TRANSLATE_MUTATION
product_id = graphene.Node.to_global_id("Product", product.id)
response = app_api_client.post_graphql(
query,
{"productId": product_id, "input": {"name": "Produkt PL"}},
permissions=[permission_manage_translations],
)
data = get_graphql_content(response)["data"]["productTranslate"]
assert data["product"]["translation"]["name"] == "Produkt PL"
assert data["product"]["translation"]["language"]["code"] == "PL"
def test_product_update_translation(
staff_api_client, product, permission_manage_translations
):
product.translations.create(language_code="pl", name="Produkt")
query = PRODUCT_TRANSLATE_MUTATION
product_id = graphene.Node.to_global_id("Product", product.id)
response = staff_api_client.post_graphql(
query,
{"productId": product_id, "input": {"name": "Produkt PL"}},
permissions=[permission_manage_translations],
)
data = get_graphql_content(response)["data"]["productTranslate"]
assert data["product"]["translation"]["name"] == "Produkt PL"
assert data["product"]["translation"]["language"]["code"] == "PL"
PRODUCT_VARIANT_TRANSLATE_MUTATION = """
mutation productVariantTranslate(
$productVariantId: ID!, $input: NameTranslationInput!
) {
productVariantTranslate(
id: $productVariantId, languageCode: PL,
input: $input) {
productVariant {
translation(languageCode: PL) {
name
language {
code
}
}
}
}
}
"""
def test_product_variant_create_translation(
staff_api_client, variant, permission_manage_translations
):
query = PRODUCT_VARIANT_TRANSLATE_MUTATION
product_variant_id = graphene.Node.to_global_id("ProductVariant", variant.id)
response = staff_api_client.post_graphql(
query,
{"productVariantId": product_variant_id, "input": {"name": "Wariant PL"}},
permissions=[permission_manage_translations],
)
data = get_graphql_content(response)["data"]["productVariantTranslate"]
assert data["productVariant"]["translation"]["name"] == "Wariant PL"
assert data["productVariant"]["translation"]["language"]["code"] == "PL"
def test_product_variant_update_translation(
staff_api_client, variant, permission_manage_translations
):
variant.translations.create(language_code="pl", name="Wariant")
query = PRODUCT_VARIANT_TRANSLATE_MUTATION
product_variant_id = graphene.Node.to_global_id("ProductVariant", variant.id)
response = staff_api_client.post_graphql(
query,
{"productVariantId": product_variant_id, "input": {"name": "Wariant PL"}},
permissions=[permission_manage_translations],
)
data = get_graphql_content(response)["data"]["productVariantTranslate"]
assert data["productVariant"]["translation"]["name"] == "Wariant PL"
assert data["productVariant"]["translation"]["language"]["code"] == "PL"
COLLECTION_TRANSLATE_MUTATION = """
mutation collectionTranslate($collectionId: ID!, $input: TranslationInput!) {
collectionTranslate(
id: $collectionId, languageCode: PL,
input: $input) {
collection {
translation(languageCode: PL) {
name
description
language {
code
}
}
}
}
}
"""
def test_collection_create_translation(
staff_api_client, published_collection, permission_manage_translations
):
query = COLLECTION_TRANSLATE_MUTATION
collection_id = graphene.Node.to_global_id("Collection", published_collection.id)
response = staff_api_client.post_graphql(
query,
{"collectionId": collection_id, "input": {"name": "Kolekcja PL"}},
permissions=[permission_manage_translations],
)
data = get_graphql_content(response)["data"]["collectionTranslate"]
assert data["collection"]["translation"]["name"] == "Kolekcja PL"
assert data["collection"]["translation"]["language"]["code"] == "PL"
def test_collection_create_translation_for_description(
staff_api_client, published_collection, permission_manage_translations
):
query = COLLECTION_TRANSLATE_MUTATION
collection_id = graphene.Node.to_global_id("Collection", published_collection.id)
description = dummy_editorjs("description", True)
response = staff_api_client.post_graphql(
query,
{"collectionId": collection_id, "input": {"description": description}},
permissions=[permission_manage_translations],
)
data = get_graphql_content(response)["data"]["collectionTranslate"]
assert data["collection"]["translation"]["name"] is None
assert data["collection"]["translation"]["description"] == description
assert data["collection"]["translation"]["language"]["code"] == "PL"
def test_collection_create_translation_for_description_name_as_null(
staff_api_client, published_collection, permission_manage_translations
):
query = COLLECTION_TRANSLATE_MUTATION
collection_id = graphene.Node.to_global_id("Collection", published_collection.id)
description = dummy_editorjs("description", True)
response = staff_api_client.post_graphql(
query,
{
"collectionId": collection_id,
"input": {"description": description, "name": None},
},
permissions=[permission_manage_translations],
)
data = get_graphql_content(response)["data"]["collectionTranslate"]
assert data["collection"]["translation"]["name"] is None
assert data["collection"]["translation"]["description"] == description
assert data["collection"]["translation"]["language"]["code"] == "PL"
def test_collection_update_translation(
staff_api_client, published_collection, permission_manage_translations
):
published_collection.translations.create(language_code="pl", name="Kolekcja")
query = COLLECTION_TRANSLATE_MUTATION
collection_id = graphene.Node.to_global_id("Collection", published_collection.id)
response = staff_api_client.post_graphql(
query,
{"collectionId": collection_id, "input": {"name": "Kolekcja PL"}},
permissions=[permission_manage_translations],
)
data = get_graphql_content(response)["data"]["collectionTranslate"]
assert data["collection"]["translation"]["name"] == "Kolekcja PL"
assert data["collection"]["translation"]["language"]["code"] == "PL"
CATEGORY_TRANSLATE_MUTATION = """
mutation categoryTranslate($categoryId: ID!, $input: TranslationInput!) {
categoryTranslate(
id: $categoryId, languageCode: PL,
input: $input) {
category {
translation(languageCode: PL) {
name
description
language {
code
}
}
}
}
}
"""
def test_category_create_translation(
staff_api_client, category, permission_manage_translations
):
query = CATEGORY_TRANSLATE_MUTATION
category_id = graphene.Node.to_global_id("Category", category.id)
response = staff_api_client.post_graphql(
query,
{"categoryId": category_id, "input": {"name": "Kategoria PL"}},
permissions=[permission_manage_translations],
)
data = get_graphql_content(response)["data"]["categoryTranslate"]
assert data["category"]["translation"]["name"] == "Kategoria PL"
assert data["category"]["translation"]["language"]["code"] == "PL"
def test_category_create_translation_for_description(
staff_api_client, category, permission_manage_translations
):
query = CATEGORY_TRANSLATE_MUTATION
category_id = graphene.Node.to_global_id("Category", category.id)
description = dummy_editorjs("description", True)
response = staff_api_client.post_graphql(
query,
{"categoryId": category_id, "input": {"description": description}},
permissions=[permission_manage_translations],
)
data = get_graphql_content(response)["data"]["categoryTranslate"]
assert data["category"]["translation"]["name"] is None
assert data["category"]["translation"]["description"] == description
assert data["category"]["translation"]["language"]["code"] == "PL"
def test_category_create_translation_for_description_name_as_null(
staff_api_client, category, permission_manage_translations
):
query = CATEGORY_TRANSLATE_MUTATION
category_id = graphene.Node.to_global_id("Category", category.id)
description = dummy_editorjs("description", True)
response = staff_api_client.post_graphql(
query,
{
"categoryId": category_id,
"input": {"name": None, "description": description},
},
permissions=[permission_manage_translations],
)
data = get_graphql_content(response)["data"]["categoryTranslate"]
assert data["category"]["translation"]["name"] is None
assert data["category"]["translation"]["description"] == description
assert data["category"]["translation"]["language"]["code"] == "PL"
def test_category_update_translation(
staff_api_client, category, permission_manage_translations
):
category.translations.create(language_code="pl", name="Kategoria")
query = CATEGORY_TRANSLATE_MUTATION
category_id = graphene.Node.to_global_id("Category", category.id)
response = staff_api_client.post_graphql(
query,
{"categoryId": category_id, "input": {"name": "Kategoria PL"}},
permissions=[permission_manage_translations],
)
data = get_graphql_content(response)["data"]["categoryTranslate"]
assert data["category"]["translation"]["name"] == "Kategoria PL"
assert data["category"]["translation"]["language"]["code"] == "PL"
def test_voucher_create_translation(
staff_api_client, voucher, permission_manage_translations
):
query = """
mutation voucherTranslate($voucherId: ID!) {
voucherTranslate(
id: $voucherId, languageCode: PL,
input: {name: "Bon PL"}) {
voucher {
translation(languageCode: PL) {
name
language {
code
}
}
}
}
}
"""
voucher_id = graphene.Node.to_global_id("Voucher", voucher.id)
response = staff_api_client.post_graphql(
query, {"voucherId": voucher_id}, permissions=[permission_manage_translations]
)
data = get_graphql_content(response)["data"]["voucherTranslate"]
assert data["voucher"]["translation"]["name"] == "Bon PL"
assert data["voucher"]["translation"]["language"]["code"] == "PL"
def test_voucher_update_translation(
staff_api_client, voucher, permission_manage_translations
):
    voucher.translations.create(language_code="pl", name="Bon")
query = """
mutation voucherTranslate($voucherId: ID!) {
voucherTranslate(
id: $voucherId, languageCode: PL,
input: {name: "Bon PL"}) {
voucher {
translation(languageCode: PL) {
name
language {
code
}
}
}
}
}
"""
voucher_id = graphene.Node.to_global_id("Voucher", voucher.id)
response = staff_api_client.post_graphql(
query, {"voucherId": voucher_id}, permissions=[permission_manage_translations]
)
data = get_graphql_content(response)["data"]["voucherTranslate"]
assert data["voucher"]["translation"]["name"] == "Bon PL"
assert data["voucher"]["translation"]["language"]["code"] == "PL"
def test_sale_create_translation(
staff_api_client, sale, permission_manage_translations
):
query = """
mutation saleTranslate($saleId: ID!) {
saleTranslate(
id: $saleId, languageCode: PL,
input: {name: "Wyprz PL"}) {
sale {
translation(languageCode: PL) {
name
language {
code
}
}
}
}
}
"""
sale_id = graphene.Node.to_global_id("Sale", sale.id)
response = staff_api_client.post_graphql(
query, {"saleId": sale_id}, permissions=[permission_manage_translations]
)
data = get_graphql_content(response)["data"]["saleTranslate"]
assert data["sale"]["translation"]["name"] == "Wyprz PL"
assert data["sale"]["translation"]["language"]["code"] == "PL"
PAGE_TRANSLATE_MUTATION = """
mutation pageTranslate($pageId: ID!, $input: PageTranslationInput!) {
pageTranslate(
id: $pageId, languageCode: PL,
input: $input) {
page {
translation(languageCode: PL) {
title
content
language {
code
}
}
}
}
}
"""
def test_page_create_translation(
staff_api_client, page, permission_manage_translations
):
query = PAGE_TRANSLATE_MUTATION
page_id = graphene.Node.to_global_id("Page", page.id)
response = staff_api_client.post_graphql(
query,
{"pageId": page_id, "input": {"title": "Strona PL"}},
permissions=[permission_manage_translations],
)
data = get_graphql_content(response)["data"]["pageTranslate"]
assert data["page"]["translation"]["title"] == "Strona PL"
assert data["page"]["translation"]["language"]["code"] == "PL"
def test_page_create_translation_for_content(
staff_api_client, page, permission_manage_translations
):
query = PAGE_TRANSLATE_MUTATION
page_id = graphene.Node.to_global_id("Page", page.id)
content = dummy_editorjs("content", True)
response = staff_api_client.post_graphql(
query,
{"pageId": page_id, "input": {"content": content}},
permissions=[permission_manage_translations],
)
data = get_graphql_content(response)["data"]["pageTranslate"]
assert data["page"]["translation"]["title"] is None
assert data["page"]["translation"]["content"] == content
assert data["page"]["translation"]["language"]["code"] == "PL"
def test_page_create_translation_for_content_title_as_null(
staff_api_client, page, permission_manage_translations
):
query = PAGE_TRANSLATE_MUTATION
page_id = graphene.Node.to_global_id("Page", page.id)
content = dummy_editorjs("content", True)
response = staff_api_client.post_graphql(
query,
{"pageId": page_id, "input": {"title": None, "content": content}},
permissions=[permission_manage_translations],
)
data = get_graphql_content(response)["data"]["pageTranslate"]
assert data["page"]["translation"]["title"] is None
assert data["page"]["translation"]["content"] == content
assert data["page"]["translation"]["language"]["code"] == "PL"
def test_page_update_translation(
staff_api_client, page, permission_manage_translations
):
page.translations.create(language_code="pl", title="Strona")
query = PAGE_TRANSLATE_MUTATION
page_id = graphene.Node.to_global_id("Page", page.id)
response = staff_api_client.post_graphql(
query,
{"pageId": page_id, "input": {"title": "Strona PL"}},
permissions=[permission_manage_translations],
)
data = get_graphql_content(response)["data"]["pageTranslate"]
assert data["page"]["translation"]["title"] == "Strona PL"
assert data["page"]["translation"]["language"]["code"] == "PL"
def test_attribute_create_translation(
staff_api_client, color_attribute, permission_manage_translations
):
query = """
mutation attributeTranslate($attributeId: ID!) {
attributeTranslate(
id: $attributeId, languageCode: PL,
input: {name: "Kolor PL"}) {
attribute {
translation(languageCode: PL) {
name
language {
code
}
}
}
}
}
"""
attribute_id = graphene.Node.to_global_id("Attribute", color_attribute.id)
response = staff_api_client.post_graphql(
query,
{"attributeId": attribute_id},
permissions=[permission_manage_translations],
)
data = get_graphql_content(response)["data"]["attributeTranslate"]
assert data["attribute"]["translation"]["name"] == "Kolor PL"
assert data["attribute"]["translation"]["language"]["code"] == "PL"
def test_attribute_update_translation(
staff_api_client, color_attribute, permission_manage_translations
):
color_attribute.translations.create(language_code="pl", name="Kolor")
query = """
mutation attributeTranslate($attributeId: ID!) {
attributeTranslate(
id: $attributeId, languageCode: PL,
input: {name: "Kolor PL"}) {
attribute {
translation(languageCode: PL) {
name
language {
code
}
}
}
}
}
"""
attribute_id = graphene.Node.to_global_id("Attribute", color_attribute.id)
response = staff_api_client.post_graphql(
query,
{"attributeId": attribute_id},
permissions=[permission_manage_translations],
)
data = get_graphql_content(response)["data"]["attributeTranslate"]
assert data["attribute"]["translation"]["name"] == "Kolor PL"
assert data["attribute"]["translation"]["language"]["code"] == "PL"
def test_attribute_value_create_translation(
staff_api_client, pink_attribute_value, permission_manage_translations
):
query = """
mutation attributeValueTranslate($attributeValueId: ID!) {
attributeValueTranslate(
id: $attributeValueId, languageCode: PL,
input: {name: "Róż PL"}) {
attributeValue {
translation(languageCode: PL) {
name
language {
code
}
}
}
}
}
"""
attribute_value_id = graphene.Node.to_global_id(
"AttributeValue", pink_attribute_value.id
)
response = staff_api_client.post_graphql(
query,
{"attributeValueId": attribute_value_id},
permissions=[permission_manage_translations],
)
data = get_graphql_content(response)["data"]["attributeValueTranslate"]
assert data["attributeValue"]["translation"]["name"] == "Róż PL"
assert data["attributeValue"]["translation"]["language"]["code"] == "PL"
def test_attribute_value_update_translation(
staff_api_client, pink_attribute_value, permission_manage_translations
):
pink_attribute_value.translations.create(language_code="pl", name="Różowy")
query = """
mutation attributeValueTranslate($attributeValueId: ID!) {
attributeValueTranslate(
id: $attributeValueId, languageCode: PL,
input: {name: "Róż PL"}) {
attributeValue {
translation(languageCode: PL) {
name
language {
code
}
}
}
}
}
"""
attribute_value_id = graphene.Node.to_global_id(
"AttributeValue", pink_attribute_value.id
)
response = staff_api_client.post_graphql(
query,
{"attributeValueId": attribute_value_id},
permissions=[permission_manage_translations],
)
data = get_graphql_content(response)["data"]["attributeValueTranslate"]
assert data["attributeValue"]["translation"]["name"] == "Róż PL"
assert data["attributeValue"]["translation"]["language"]["code"] == "PL"
def test_shipping_method_create_translation(
staff_api_client, shipping_method, permission_manage_translations
):
query = """
mutation shippingPriceTranslate(
$shippingMethodId: ID!, $input: ShippingPriceTranslationInput!
) {
shippingPriceTranslate(
id: $shippingMethodId, languageCode: PL,
input: $input) {
shippingMethod {
translation(languageCode: PL) {
name
description
language {
code
}
}
}
}
}
"""
shipping_method_id = graphene.Node.to_global_id(
"ShippingMethod", shipping_method.id
)
description = dummy_editorjs("description", True)
variables = {
"shippingMethodId": shipping_method_id,
"input": {"name": "DHL PL", "description": description},
}
response = staff_api_client.post_graphql(
query,
variables,
permissions=[permission_manage_translations],
)
data = get_graphql_content(response)["data"]["shippingPriceTranslate"]
assert data["shippingMethod"]["translation"]["name"] == "DHL PL"
assert data["shippingMethod"]["translation"]["description"] == description
assert data["shippingMethod"]["translation"]["language"]["code"] == "PL"
def test_shipping_method_update_translation(
staff_api_client, shipping_method, permission_manage_translations
):
shipping_method.translations.create(language_code="pl", name="DHL")
query = """
mutation shippingPriceTranslate($shippingMethodId: ID!) {
shippingPriceTranslate(
id: $shippingMethodId, languageCode: PL,
input: {name: "DHL PL"}) {
shippingMethod {
translation(languageCode: PL) {
name
language {
code
}
}
}
}
}
"""
shipping_method_id = graphene.Node.to_global_id(
"ShippingMethod", shipping_method.id
)
response = staff_api_client.post_graphql(
query,
{"shippingMethodId": shipping_method_id},
permissions=[permission_manage_translations],
)
data = get_graphql_content(response)["data"]["shippingPriceTranslate"]
assert data["shippingMethod"]["translation"]["name"] == "DHL PL"
assert data["shippingMethod"]["translation"]["language"]["code"] == "PL"
def test_menu_item_update_translation(
staff_api_client, menu_item, permission_manage_translations
):
menu_item.translations.create(language_code="pl", name="Odnośnik")
query = """
mutation menuItemTranslate($menuItemId: ID!) {
menuItemTranslate(
id: $menuItemId, languageCode: PL,
input: {name: "Odnośnik PL"}) {
menuItem {
translation(languageCode: PL) {
name
language {
code
}
}
}
}
}
"""
menu_item_id = graphene.Node.to_global_id("MenuItem", menu_item.id)
response = staff_api_client.post_graphql(
query,
{"menuItemId": menu_item_id},
permissions=[permission_manage_translations],
)
data = get_graphql_content(response)["data"]["menuItemTranslate"]
assert data["menuItem"]["translation"]["name"] == "Odnośnik PL"
assert data["menuItem"]["translation"]["language"]["code"] == "PL"
def test_shop_create_translation(staff_api_client, permission_manage_translations):
query = """
mutation shopSettingsTranslate {
shopSettingsTranslate(
languageCode: PL, input: {headerText: "Nagłówek PL"}) {
shop {
translation(languageCode: PL) {
headerText
language {
code
}
}
}
}
}
"""
response = staff_api_client.post_graphql(
query, permissions=[permission_manage_translations]
)
data = get_graphql_content(response)["data"]["shopSettingsTranslate"]
assert data["shop"]["translation"]["headerText"] == "Nagłówek PL"
assert data["shop"]["translation"]["language"]["code"] == "PL"
def test_shop_update_translation(
staff_api_client, site_settings, permission_manage_translations
):
site_settings.translations.create(language_code="pl", header_text="Nagłówek")
query = """
mutation shopSettingsTranslate {
shopSettingsTranslate(
languageCode: PL, input: {headerText: "Nagłówek PL"}) {
shop {
translation(languageCode: PL) {
headerText
language {
code
}
}
}
}
}
"""
response = staff_api_client.post_graphql(
query, permissions=[permission_manage_translations]
)
data = get_graphql_content(response)["data"]["shopSettingsTranslate"]
assert data["shop"]["translation"]["headerText"] == "Nagłówek PL"
assert data["shop"]["translation"]["language"]["code"] == "PL"
@pytest.mark.parametrize(
"kind, expected_typename",
[
(TranslatableKinds.PRODUCT, "ProductTranslatableContent"),
(TranslatableKinds.COLLECTION, "CollectionTranslatableContent"),
(TranslatableKinds.CATEGORY, "CategoryTranslatableContent"),
(TranslatableKinds.PAGE, "PageTranslatableContent"),
(TranslatableKinds.SHIPPING_METHOD, "ShippingMethodTranslatableContent"),
(TranslatableKinds.VOUCHER, "VoucherTranslatableContent"),
(TranslatableKinds.SALE, "SaleTranslatableContent"),
(TranslatableKinds.ATTRIBUTE, "AttributeTranslatableContent"),
(TranslatableKinds.ATTRIBUTE_VALUE, "AttributeValueTranslatableContent"),
(TranslatableKinds.VARIANT, "ProductVariantTranslatableContent"),
(TranslatableKinds.MENU_ITEM, "MenuItemTranslatableContent"),
],
)
def test_translations_query(
staff_api_client,
permission_manage_translations,
product,
published_collection,
voucher,
sale,
shipping_method,
page,
menu_item,
kind,
expected_typename,
):
query = """
query TranslationsQuery($kind: TranslatableKinds!) {
translations(kind: $kind, first: 1) {
edges {
node {
__typename
}
}
}
}
"""
response = staff_api_client.post_graphql(
query, {"kind": kind.name}, permissions=[permission_manage_translations]
)
data = get_graphql_content(response)["data"]["translations"]
assert data["edges"][0]["node"]["__typename"] == expected_typename
def test_translations_query_inline_fragment(
staff_api_client, permission_manage_translations, product
):
product.translations.create(language_code="pl", name="Produkt testowy")
query = """
{
translations(kind: PRODUCT, first: 1) {
edges {
node {
... on ProductTranslatableContent {
name
translation(languageCode: PL) {
name
}
}
}
}
}
}
"""
response = staff_api_client.post_graphql(
query, permissions=[permission_manage_translations]
)
data = get_graphql_content(response)["data"]["translations"]["edges"][0]
assert data["node"]["name"] == "Test product"
assert data["node"]["translation"]["name"] == "Produkt testowy"
QUERY_TRANSLATION_PRODUCT = """
query translation(
$kind: TranslatableKinds!, $id: ID!, $languageCode: LanguageCodeEnum!
){
translation(kind: $kind, id: $id){
__typename
...on ProductTranslatableContent{
id
name
translation(languageCode: $languageCode){
name
}
product{
id
name
}
}
}
}
"""
def test_translation_query_product(
staff_api_client,
permission_manage_translations,
product,
product_translation_fr,
):
product_id = graphene.Node.to_global_id("Product", product.id)
variables = {
"id": product_id,
"kind": TranslatableKinds.PRODUCT.name,
"languageCode": LanguageCodeEnum.FR.name,
}
response = staff_api_client.post_graphql(
QUERY_TRANSLATION_PRODUCT,
variables,
permissions=[permission_manage_translations],
)
content = get_graphql_content(response)
data = content["data"]["translation"]
assert data["name"] == product.name
assert data["translation"]["name"] == product_translation_fr.name
assert data["product"]["name"] == product.name
QUERY_TRANSLATION_COLLECTION = """
query translation(
$kind: TranslatableKinds!, $id: ID!, $languageCode: LanguageCodeEnum!
){
translation(kind: $kind, id: $id){
__typename
...on CollectionTranslatableContent{
id
name
translation(languageCode: $languageCode){
name
}
collection{
id
name
}
}
}
}
"""
def test_translation_query_collection(
staff_api_client,
published_collection,
collection_translation_fr,
permission_manage_translations,
channel_USD,
):
channel_listing = published_collection.channel_listings.get()
channel_listing.save()
collection_id = graphene.Node.to_global_id("Collection", published_collection.id)
variables = {
"id": collection_id,
"kind": TranslatableKinds.COLLECTION.name,
"languageCode": LanguageCodeEnum.FR.name,
}
response = staff_api_client.post_graphql(
QUERY_TRANSLATION_COLLECTION,
variables,
permissions=[permission_manage_translations],
)
content = get_graphql_content(response)
data = content["data"]["translation"]
assert data["name"] == published_collection.name
assert data["translation"]["name"] == collection_translation_fr.name
assert data["collection"]["name"] == published_collection.name
QUERY_TRANSLATION_CATEGORY = """
query translation(
$kind: TranslatableKinds!, $id: ID!, $languageCode: LanguageCodeEnum!
){
translation(kind: $kind, id: $id){
__typename
...on CategoryTranslatableContent{
id
name
translation(languageCode: $languageCode){
name
}
category {
id
name
}
}
}
}
"""
def test_translation_query_category(
staff_api_client, category, category_translation_fr, permission_manage_translations
):
category_id = graphene.Node.to_global_id("Category", category.id)
variables = {
"id": category_id,
"kind": TranslatableKinds.CATEGORY.name,
"languageCode": LanguageCodeEnum.FR.name,
}
response = staff_api_client.post_graphql(
QUERY_TRANSLATION_CATEGORY,
variables,
permissions=[permission_manage_translations],
)
content = get_graphql_content(response)
data = content["data"]["translation"]
assert data["name"] == category.name
assert data["translation"]["name"] == category_translation_fr.name
assert data["category"]["name"] == category.name
QUERY_TRANSLATION_ATTRIBUTE = """
query translation(
$kind: TranslatableKinds!, $id: ID!, $languageCode: LanguageCodeEnum!
){
translation(kind: $kind, id: $id){
__typename
...on AttributeTranslatableContent{
id
name
translation(languageCode: $languageCode){
name
}
attribute {
id
name
}
}
}
}
"""
def test_translation_query_attribute(
staff_api_client, translated_attribute, permission_manage_translations
):
attribute = translated_attribute.attribute
attribute_id = graphene.Node.to_global_id("Attribute", attribute.id)
variables = {
"id": attribute_id,
"kind": TranslatableKinds.ATTRIBUTE.name,
"languageCode": LanguageCodeEnum.FR.name,
}
response = staff_api_client.post_graphql(
QUERY_TRANSLATION_ATTRIBUTE,
variables,
permissions=[permission_manage_translations],
)
content = get_graphql_content(response)
data = content["data"]["translation"]
assert data["name"] == attribute.name
assert data["translation"]["name"] == translated_attribute.name
assert data["attribute"]["name"] == attribute.name
QUERY_TRANSLATION_ATTRIBUTE_VALUE = """
query translation(
$kind: TranslatableKinds!, $id: ID!, $languageCode: LanguageCodeEnum!
){
translation(kind: $kind, id: $id){
__typename
...on AttributeValueTranslatableContent{
id
name
translation(languageCode: $languageCode){
name
}
attributeValue {
id
name
}
}
}
}
"""
def test_translation_query_attribute_value(
staff_api_client,
pink_attribute_value,
translated_attribute_value,
permission_manage_translations,
):
attribute_value_id = graphene.Node.to_global_id(
"AttributeValue", pink_attribute_value.id
)
variables = {
"id": attribute_value_id,
"kind": TranslatableKinds.ATTRIBUTE_VALUE.name,
"languageCode": LanguageCodeEnum.FR.name,
}
response = staff_api_client.post_graphql(
QUERY_TRANSLATION_ATTRIBUTE_VALUE,
variables,
permissions=[permission_manage_translations],
)
content = get_graphql_content(response)
data = content["data"]["translation"]
assert data["name"] == pink_attribute_value.name
assert data["translation"]["name"] == translated_attribute_value.name
assert data["attributeValue"]["name"] == pink_attribute_value.name
QUERY_TRANSLATION_VARIANT = """
query translation(
$kind: TranslatableKinds!, $id: ID!, $languageCode: LanguageCodeEnum!
){
translation(kind: $kind, id: $id){
__typename
...on ProductVariantTranslatableContent{
id
name
translation(languageCode: $languageCode){
name
}
productVariant {
id
name
}
}
}
}
"""
def test_translation_query_variant(
staff_api_client,
permission_manage_translations,
product,
variant,
variant_translation_fr,
):
variant_id = graphene.Node.to_global_id("ProductVariant", variant.id)
variables = {
"id": variant_id,
"kind": TranslatableKinds.VARIANT.name,
"languageCode": LanguageCodeEnum.FR.name,
}
response = staff_api_client.post_graphql(
QUERY_TRANSLATION_VARIANT,
variables,
permissions=[permission_manage_translations],
)
content = get_graphql_content(response)
data = content["data"]["translation"]
assert data["name"] == variant.name
assert data["translation"]["name"] == variant_translation_fr.name
assert data["productVariant"]["name"] == variant.name
QUERY_TRANSLATION_PAGE = """
query translation(
$kind: TranslatableKinds!, $id: ID!, $languageCode: LanguageCodeEnum!
){
translation(kind: $kind, id: $id){
__typename
...on PageTranslatableContent{
id
title
translation(languageCode: $languageCode){
title
}
page {
id
title
}
}
}
}
"""
@pytest.mark.parametrize(
"is_published, perm_codenames",
[
(True, ["manage_translations"]),
(False, ["manage_translations"]),
(False, ["manage_translations", "manage_pages"]),
],
)
def test_translation_query_page(
staff_api_client,
page,
page_translation_fr,
is_published,
perm_codenames,
):
page.is_published = is_published
page.save()
page_id = graphene.Node.to_global_id("Page", page.id)
perms = list(Permission.objects.filter(codename__in=perm_codenames))
variables = {
"id": page_id,
"kind": TranslatableKinds.PAGE.name,
"languageCode": LanguageCodeEnum.FR.name,
}
response = staff_api_client.post_graphql(
QUERY_TRANSLATION_PAGE, variables, permissions=perms
)
content = get_graphql_content(response)
data = content["data"]["translation"]
assert data["title"] == page.title
assert data["translation"]["title"] == page_translation_fr.title
assert data["page"]["title"] == page.title
QUERY_TRANSLATION_SHIPPING_METHOD = """
query translation(
$kind: TranslatableKinds!, $id: ID!, $languageCode: LanguageCodeEnum!
){
translation(kind: $kind, id: $id){
__typename
...on ShippingMethodTranslatableContent{
id
name
description
translation(languageCode: $languageCode){
name
}
shippingMethod {
id
name
}
}
}
}
"""
@pytest.mark.parametrize(
"perm_codenames, return_shipping_method",
[
(["manage_translations"], False),
(["manage_translations", "manage_shipping"], True),
],
)
def test_translation_query_shipping_method(
staff_api_client,
shipping_method,
shipping_method_translation_fr,
perm_codenames,
return_shipping_method,
):
shipping_method_id = graphene.Node.to_global_id(
"ShippingMethod", shipping_method.id
)
perms = list(Permission.objects.filter(codename__in=perm_codenames))
variables = {
"id": shipping_method_id,
"kind": TranslatableKinds.SHIPPING_METHOD.name,
"languageCode": LanguageCodeEnum.FR.name,
}
response = staff_api_client.post_graphql(
QUERY_TRANSLATION_SHIPPING_METHOD, variables, permissions=perms
)
content = get_graphql_content(response, ignore_errors=True)
data = content["data"]["translation"]
assert data["name"] == shipping_method.name
assert data["description"] == shipping_method.description
assert data["translation"]["name"] == shipping_method_translation_fr.name
if return_shipping_method:
assert data["shippingMethod"]["name"] == shipping_method.name
else:
assert not data["shippingMethod"]
QUERY_TRANSLATION_SALE = """
query translation(
$kind: TranslatableKinds!, $id: ID!, $languageCode: LanguageCodeEnum!
){
translation(kind: $kind, id: $id){
__typename
...on SaleTranslatableContent{
id
name
translation(languageCode: $languageCode){
name
}
sale {
id
name
}
}
}
}
"""
@pytest.mark.parametrize(
"perm_codenames, return_sale",
[
(["manage_translations"], False),
(["manage_translations", "manage_discounts"], True),
],
)
def test_translation_query_sale(
staff_api_client, sale, sale_translation_fr, perm_codenames, return_sale
):
sale_id = graphene.Node.to_global_id("Sale", sale.id)
perms = list(Permission.objects.filter(codename__in=perm_codenames))
variables = {
"id": sale_id,
"kind": TranslatableKinds.SALE.name,
"languageCode": LanguageCodeEnum.FR.name,
}
response = staff_api_client.post_graphql(
QUERY_TRANSLATION_SALE, variables, permissions=perms
)
content = get_graphql_content(response, ignore_errors=True)
data = content["data"]["translation"]
assert data["name"] == sale.name
assert data["translation"]["name"] == sale_translation_fr.name
if return_sale:
assert data["sale"]["name"] == sale.name
else:
assert not data["sale"]
QUERY_TRANSLATION_VOUCHER = """
query translation(
$kind: TranslatableKinds!, $id: ID!, $languageCode: LanguageCodeEnum!
){
translation(kind: $kind, id: $id){
__typename
...on VoucherTranslatableContent{
id
name
translation(languageCode: $languageCode){
name
}
voucher {
id
name
}
}
}
}
"""
@pytest.mark.parametrize(
"perm_codenames, return_voucher",
[
(["manage_translations"], False),
(["manage_translations", "manage_discounts"], True),
],
)
def test_translation_query_voucher(
staff_api_client, voucher, voucher_translation_fr, perm_codenames, return_voucher
):
voucher_id = graphene.Node.to_global_id("Voucher", voucher.id)
perms = list(Permission.objects.filter(codename__in=perm_codenames))
variables = {
"id": voucher_id,
"kind": TranslatableKinds.VOUCHER.name,
"languageCode": LanguageCodeEnum.FR.name,
}
response = staff_api_client.post_graphql(
QUERY_TRANSLATION_VOUCHER, variables, permissions=perms
)
content = get_graphql_content(response, ignore_errors=True)
data = content["data"]["translation"]
assert data["name"] == voucher.name
assert data["translation"]["name"] == voucher_translation_fr.name
if return_voucher:
assert data["voucher"]["name"] == voucher.name
else:
assert not data["voucher"]
QUERY_TRANSLATION_MENU_ITEM = """
query translation(
$kind: TranslatableKinds!, $id: ID!, $languageCode: LanguageCodeEnum!
){
translation(kind: $kind, id: $id){
__typename
...on MenuItemTranslatableContent{
id
name
translation(languageCode: $languageCode){
name
}
menuItem {
id
name
}
}
}
}
"""
def test_translation_query_menu_item(
staff_api_client,
menu_item,
menu_item_translation_fr,
permission_manage_translations,
):
menu_item_id = graphene.Node.to_global_id("MenuItem", menu_item.id)
variables = {
"id": menu_item_id,
"kind": TranslatableKinds.MENU_ITEM.name,
"languageCode": LanguageCodeEnum.FR.name,
}
response = staff_api_client.post_graphql(
QUERY_TRANSLATION_MENU_ITEM,
variables,
permissions=[permission_manage_translations],
)
content = get_graphql_content(response)
data = content["data"]["translation"]
assert data["name"] == menu_item.name
assert data["translation"]["name"] == menu_item_translation_fr.name
assert data["menuItem"]["name"] == menu_item.name
def test_translation_query_incorrect_kind(
staff_api_client, menu_item, permission_manage_translations
):
menu_item_id = graphene.Node.to_global_id("MenuItem", menu_item.id)
variables = {
"id": menu_item_id,
"kind": TranslatableKinds.PRODUCT.name,
"languageCode": LanguageCodeEnum.FR.name,
}
response = staff_api_client.post_graphql(
QUERY_TRANSLATION_MENU_ITEM,
variables,
permissions=[permission_manage_translations],
)
content = get_graphql_content(response)
assert not content["data"]["translation"]
def test_translation_query_no_permission(staff_api_client, menu_item):
menu_item_id = graphene.Node.to_global_id("MenuItem", menu_item.id)
variables = {
"id": menu_item_id,
"kind": TranslatableKinds.MENU_ITEM.name,
"languageCode": LanguageCodeEnum.FR.name,
}
response = staff_api_client.post_graphql(QUERY_TRANSLATION_MENU_ITEM, variables)
assert_no_permission(response)
def test_product_and_attribute_translation(user_api_client, product, channel_USD):
description = dummy_editorjs("test desription")
product.translations.create(
language_code="pl", name="Produkt", description=description
)
assigned_attribute = product.attributes.first()
attribute = assigned_attribute.attribute
attribute.translations.create(language_code="pl", name="Kolor")
query = """
query productById($productId: ID!, $channel: String) {
product(id: $productId, channel: $channel) {
translation(languageCode: PL) {
name
description
descriptionJson
language {
code
}
}
attributes{
attribute{
translation(languageCode: PL){
id
name
language{
code
}
}
}
}
}
}
"""
product_id = graphene.Node.to_global_id("Product", product.id)
response = user_api_client.post_graphql(
query, {"productId": product_id, "channel": channel_USD.slug}
)
data = get_graphql_content(response)["data"]
product_translation_data = data["product"]["translation"]
assert product_translation_data["name"] == "Produkt"
assert product_translation_data["language"]["code"] == "PL"
assert (
product_translation_data["description"]
== product_translation_data["descriptionJson"]
== dummy_editorjs("test desription", json_format=True)
)
attribute_translation_data = data["product"]["attributes"][0]["attribute"][
"translation"
]
assert attribute_translation_data["name"] == "Kolor"
assert attribute_translation_data["language"]["code"] == "PL"
| 31.048 | 88 | 0.600822 | 6,569 | 73,739 | 6.472522 | 0.031664 | 0.031751 | 0.032927 | 0.035279 | 0.855661 | 0.811986 | 0.775624 | 0.750506 | 0.717814 | 0.680535 | 0 | 0.000436 | 0.284598 | 73,739 | 2,374 | 89 | 31.061078 | 0.805547 | 0 | 0 | 0.625928 | 0 | 0 | 0.41731 | 0.048821 | 0 | 0 | 0 | 0 | 0.08758 | 1 | 0.03711 | false | 0 | 0.003958 | 0 | 0.041069 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
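The Saleor tests above build Relay node IDs with `graphene.Node.to_global_id`. By Relay convention such a global ID is the base64 encoding of `"TypeName:id"`; the sketch below mirrors that round trip with only the standard library (illustrative — graphene's real helpers live in `graphql_relay` and should be used in practice):

```python
import base64

def to_global_id(type_name: str, local_id) -> str:
    # Relay-style global ID: base64("TypeName:id")
    return base64.b64encode(f"{type_name}:{local_id}".encode()).decode()

def from_global_id(global_id: str) -> tuple:
    # Split on the first ":" only, so IDs containing colons survive the trip.
    type_name, _, local_id = base64.b64decode(global_id).decode().partition(":")
    return type_name, local_id

gid = to_global_id("MenuItem", 42)
assert from_global_id(gid) == ("MenuItem", "42")
```

Because the type name is part of the payload, the same primary key yields different global IDs for different types.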
d9d349391f462ad8ecf36b95bb99b48990647877 | 48 | py | Python | corefacility/core/synchronizations/__init__.py | serik1987/corefacility | 78d84e19403361e83ef562e738473849f9133bef | [
"RSA-MD"
] | null | null | null | corefacility/core/synchronizations/__init__.py | serik1987/corefacility | 78d84e19403361e83ef562e738473849f9133bef | [
"RSA-MD"
] | null | null | null | corefacility/core/synchronizations/__init__.py | serik1987/corefacility | 78d84e19403361e83ef562e738473849f9133bef | [
"RSA-MD"
] | null | null | null | from .ihna_employees import IhnaSynchronization
| 24 | 47 | 0.895833 | 5 | 48 | 8.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 48 | 1 | 48 | 48 | 0.954545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8a4c1445daaa93c09825607454809b5764d375d3 | 9,391 | py | Python | tests/test_cli.py | Dipeshtwis/gg-shield | 30c5dffc70f62e1f5d7eebca306966b2238cf495 | [
"MIT"
] | 3 | 2020-08-14T13:26:27.000Z | 2021-03-16T16:58:30.000Z | tests/test_cli.py | GG-HH/gg-shield | 8e39fd46d172ffdfe606502015307c915ff57fe7 | [
"MIT"
] | null | null | null | tests/test_cli.py | GG-HH/gg-shield | 8e39fd46d172ffdfe606502015307c915ff57fe7 | [
"MIT"
] | null | null | null | import os
from unittest import mock
import pytest
from click.testing import CliRunner
from ggshield.cmd import cli
from .conftest import _SIMPLE_SECRET, my_vcr
@pytest.fixture(scope="session")
def cli_runner():
os.environ["GITGUARDIAN_API_KEY"] = os.getenv("GITGUARDIAN_API_KEY", "1234567890")
os.environ["GITGUARDIAN_API_URL"] = "https://api.gitguardian.com/"
return CliRunner()
@pytest.fixture(scope="class")
def cli_fs_runner(cli_runner):
with cli_runner.isolated_filesystem():
yield cli_runner
@pytest.fixture(scope="class")
def mockHookDirPath():
with mock.patch(
"ggshield.install.get_global_hook_dir_path", return_value="global/hooks"
):
yield
@my_vcr.use_cassette()
def test_scan_file(cli_fs_runner):
os.system('echo "This is a file with no secrets." > file')
assert os.path.isfile("file")
result = cli_fs_runner.invoke(cli, ["-v", "scan", "path", "file"])
assert not result.exception
assert "No secrets have been found" in result.output
def test_scan_file_secret(cli_fs_runner):
os.system(f'echo "{_SIMPLE_SECRET}" > file_secret')
assert os.path.isfile("file_secret")
with my_vcr.use_cassette("test_scan_file_secret"):
result = cli_fs_runner.invoke(cli, ["-v", "scan", "path", "file_secret"])
assert result.exit_code == 1
assert result.exception
def test_scan_file_secret_exit_zero(cli_fs_runner):
os.system(f'echo "{_SIMPLE_SECRET}" > file_secret')
assert os.path.isfile("file_secret")
with my_vcr.use_cassette("test_scan_file_secret"):
result = cli_fs_runner.invoke(
cli, ["scan", "--exit-zero", "-v", "path", "file_secret"]
)
assert result.exit_code == 0
assert not result.exception
class TestScanFiles:
def create_files(self):
os.system('echo "This is a file with no secrets." > file1')
os.system('echo "This is a file with no secrets." > file2')
def test_files_abort(self, cli_fs_runner):
self.create_files()
result = cli_fs_runner.invoke(
cli, ["scan", "path", "file1", "file2"], input="n\n"
)
assert result.exit_code == 0
assert not result.exception
@my_vcr.use_cassette()
def test_files_yes(self, cli_fs_runner):
self.create_files()
result = cli_fs_runner.invoke(
cli, ["scan", "path", "file1", "file2", "-r", "-y"]
)
assert not result.exception
        assert result.exit_code == 0
@my_vcr.use_cassette()
def test_files_verbose(self, cli_fs_runner):
self.create_files()
result = cli_fs_runner.invoke(
cli, ["-v", "scan", "path", "file1", "file2", "-r"], input="y\n"
)
assert not result.exception
assert "file1\n" in result.output
assert "file2\n" in result.output
assert "No secrets have been found" in result.output
def test_files_verbose_abort(self, cli_fs_runner):
self.create_files()
result = cli_fs_runner.invoke(
cli, ["-v", "scan", "path", "file1", "file2", "-r"], input="n\n"
)
assert result.exit_code == 0
assert not result.exception
@my_vcr.use_cassette()
def test_files_verbose_yes(self, cli_fs_runner):
self.create_files()
result = cli_fs_runner.invoke(
cli, ["-v", "scan", "path", "file1", "file2", "-r", "-y"]
)
assert not result.exception
assert "file1\n" in result.output
assert "file2\n" in result.output
assert "No secrets have been found" in result.output
class TestScanDirectory:
def create_files(self):
os.makedirs("dir", exist_ok=True)
os.makedirs("dir/subdir", exist_ok=True)
os.system('echo "This is a file with no secrets." > file1')
os.system('echo "This is a file with no secrets." > dir/file2')
os.system('echo "This is a file with no secrets." > dir/subdir/file3')
def test_directory_error(self, cli_fs_runner):
result = cli_fs_runner.invoke(cli, ["scan", "path", "-r", "./ewe-failing-test"])
assert result.exit_code == 2
assert result.exception
assert "does not exist" in result.output
def test_directory_abort(self, cli_fs_runner):
self.create_files()
result = cli_fs_runner.invoke(cli, ["scan", "path", "./", "-r"], input="n\n")
assert result.exit_code == 0
assert not result.exception
@my_vcr.use_cassette()
def test_directory_yes(self, cli_fs_runner):
self.create_files()
result = cli_fs_runner.invoke(cli, ["scan", "path", "./", "-r", "-y"])
        assert result.exit_code == 0
assert not result.exception
@my_vcr.use_cassette()
def test_directory_verbose(self, cli_fs_runner):
self.create_files()
result = cli_fs_runner.invoke(
cli, ["-v", "scan", "path", "./", "-r"], input="y\n"
)
assert not result.exception
assert "file1\n" in result.output
assert "dir/file2\n" in result.output
assert "dir/subdir/file3\n" in result.output
assert "No secrets have been found" in result.output
def test_directory_verbose_abort(self, cli_fs_runner):
self.create_files()
result = cli_fs_runner.invoke(
cli, ["-v", "scan", "path", "./", "-r"], input="n\n"
)
assert result.exit_code == 0
assert not result.exception
@my_vcr.use_cassette()
def test_directory_verbose_yes(self, cli_fs_runner):
self.create_files()
result = cli_fs_runner.invoke(cli, ["-v", "scan", "path", "./", "-r", "-y"])
assert not result.exception
assert "file1\n" in result.output
assert "dir/file2\n" in result.output
assert "dir/subdir/file3\n" in result.output
assert "No secrets have been found" in result.output
class TestInstallLocal:
def test_local_exist_is_dir(self, cli_fs_runner):
os.system("git init")
os.makedirs(".git/hooks/pre-commit/")
assert os.path.isdir(".git/hooks/pre-commit")
result = cli_fs_runner.invoke(cli, ["install", "-m", "local"])
os.system("rm -R .git/hooks/pre-commit")
assert result.exit_code == 1
assert result.exception
assert "Error: .git/hooks/pre-commit is a directory" in result.output
def test_local_not_exist(self, cli_fs_runner):
assert not os.path.isfile(".git/hooks/pre-commit")
result = cli_fs_runner.invoke(cli, ["install", "-m", "local"])
assert os.path.isfile(".git/hooks/pre-commit")
assert result.exit_code == 0
assert "pre-commit successfully added in .git/hooks/pre-commit" in result.output
def test_local_exist_not_force(self, cli_fs_runner):
os.makedirs(".git/hooks", exist_ok=True)
os.system('echo "pre-commit file" > .git/hooks/pre-commit')
assert os.path.isfile(".git/hooks/pre-commit")
result = cli_fs_runner.invoke(cli, ["install", "-m", "local"])
assert result.exit_code == 1
assert result.exception
assert "Error: .git/hooks/pre-commit already exists." in result.output
def test_local_exist_force(self, cli_fs_runner):
os.makedirs(".git/hooks", exist_ok=True)
os.system('echo "pre-commit file" > .git/hooks/pre-commit')
assert os.path.isfile(".git/hooks/pre-commit")
result = cli_fs_runner.invoke(cli, ["install", "-f", "-m", "local"])
assert result.exit_code == 0
assert "pre-commit successfully added in .git/hooks/pre-commit" in result.output
class TestInstallGlobal:
def test_global_exist_is_dir(self, cli_fs_runner, mockHookDirPath):
os.makedirs("global/hooks/pre-commit/")
assert os.path.isdir("global/hooks/pre-commit")
result = cli_fs_runner.invoke(cli, ["install", "-m", "global"])
os.system("rm -R global/hooks/pre-commit")
assert result.exit_code == 1
assert result.exception
def test_global_not_exist(self, cli_fs_runner, mockHookDirPath):
assert not os.path.isfile("global/hooks/pre-commit")
result = cli_fs_runner.invoke(cli, ["install", "-m", "global"])
assert os.path.isfile("global/hooks/pre-commit")
assert result.exit_code == 0
assert (
"pre-commit successfully added in global/hooks/pre-commit" in result.output
)
def test_global_exist_not_force(self, cli_fs_runner, mockHookDirPath):
os.makedirs("global/hooks", exist_ok=True)
os.system('echo "pre-commit file" > global/hooks/pre-commit')
assert os.path.isfile("global/hooks/pre-commit")
result = cli_fs_runner.invoke(cli, ["install", "-m", "global"])
assert result.exit_code == 1
assert result.exception
assert "Error: global/hooks/pre-commit already exists." in result.output
def test_global_exist_force(self, cli_fs_runner, mockHookDirPath):
os.makedirs("global/hooks", exist_ok=True)
os.system('echo "pre-commit file" > global/hooks/pre-commit')
assert os.path.isfile("global/hooks/pre-commit")
result = cli_fs_runner.invoke(cli, ["install", "-m", "global", "-f"])
assert result.exit_code == 0
assert (
"pre-commit successfully added in global/hooks/pre-commit" in result.output
)
| 36.683594 | 88 | 0.638377 | 1,279 | 9,391 | 4.499609 | 0.093041 | 0.039096 | 0.086012 | 0.064987 | 0.851086 | 0.812337 | 0.786099 | 0.757081 | 0.742485 | 0.719722 | 0 | 0.006876 | 0.225642 | 9,391 | 255 | 89 | 36.827451 | 0.784516 | 0 | 0 | 0.54902 | 0 | 0 | 0.232137 | 0.067511 | 0 | 0 | 0 | 0 | 0.348039 | 1 | 0.132353 | false | 0 | 0.029412 | 0 | 0.186275 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
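Several fixtures above depend on click's `CliRunner.isolated_filesystem()` so the files each test writes never touch the real working tree. A rough stdlib equivalent of that idea — a hedged sketch, not click's actual implementation — is a context manager that enters a fresh temporary directory and restores the previous one on exit:

```python
import os
import shutil
import tempfile
from contextlib import contextmanager

@contextmanager
def isolated_filesystem():
    """Run the enclosed block inside a fresh temporary directory."""
    previous = os.getcwd()
    workdir = tempfile.mkdtemp()
    os.chdir(workdir)
    try:
        yield workdir
    finally:
        os.chdir(previous)                       # always restore the original cwd
        shutil.rmtree(workdir, ignore_errors=True)  # and clean up the temp dir

with isolated_filesystem():
    open("file1", "w").write("This is a file with no secrets.\n")
    assert os.path.isfile("file1")
# back in the original directory; "file1" is gone with the temp dir
```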
8a6d5272f0ac3c49a46fe6fa58b6038e23864382 | 49 | py | Python | src/test/java/com/anqiansong/Python.py | anqiansong/CommentShell | 689c33861a97b19b6db8e198ff372842628d0af3 | [
"MIT"
] | 8 | 2021-03-25T13:14:08.000Z | 2021-07-26T09:32:25.000Z | src/test/java/com/anqiansong/Python.py | anqiansong/CommentShell | 689c33861a97b19b6db8e198ff372842628d0af3 | [
"MIT"
] | 2 | 2021-04-20T02:39:02.000Z | 2021-11-17T17:02:03.000Z | src/test/java/com/anqiansong/Python.py | anqiansong/CommentShell | 689c33861a97b19b6db8e198ff372842628d0af3 | [
"MIT"
] | 1 | 2021-04-11T03:24:09.000Z | 2021-04-11T03:24:09.000Z | #!/usr/bin/python3
#x:generate echo hello python | 16.333333 | 29 | 0.755102 | 8 | 49 | 4.625 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022727 | 0.102041 | 49 | 3 | 29 | 16.333333 | 0.818182 | 0.918367 | 0 | null | 1 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
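The `#x:generate …` line above is a comment directive intended to be picked up by the CommentShell tool. A sketch of how such markers can be extracted from source text with a regular expression (the `x:generate` prefix comes from the file above; the parsing logic here is illustrative, not CommentShell's actual implementation):

```python
import re

# Matches lines like "#x:generate echo hello python" and captures the command.
DIRECTIVE = re.compile(r"^#x:generate\s+(?P<command>.+)$", re.MULTILINE)

def find_generate_commands(source: str) -> list:
    """Return every shell command declared via an #x:generate comment."""
    return [m.group("command").strip() for m in DIRECTIVE.finditer(source)]

sample = "#!/usr/bin/python3\n#x:generate echo hello python\n"
print(find_generate_commands(sample))  # ['echo hello python']
```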
8a74859fc6e52e1b2b32c2952a7ec2195997dca1 | 144 | py | Python | pokermon/ai/policies.py | ghl3/Pokermon | ef4434884ee294fe845c906ab0d2f871af44f406 | [
"MIT"
] | null | null | null | pokermon/ai/policies.py | ghl3/Pokermon | ef4434884ee294fe845c906ab0d2f871af44f406 | [
"MIT"
] | null | null | null | pokermon/ai/policies.py | ghl3/Pokermon | ef4434884ee294fe845c906ab0d2f871af44f406 | [
"MIT"
] | 1 | 2020-11-05T11:57:25.000Z | 2020-11-05T11:57:25.000Z | from pokermon.ai.human import Human
from pokermon.ai.random_policy import RandomPolicy
POLICIES = {"random": RandomPolicy(), "human": Human()}
| 28.8 | 55 | 0.770833 | 18 | 144 | 6.111111 | 0.5 | 0.218182 | 0.254545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.104167 | 144 | 4 | 56 | 36 | 0.852713 | 0 | 0 | 0 | 0 | 0 | 0.076389 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
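The `POLICIES` dict above maps string names to policy instances so a caller can pick a player by configuration. A self-contained sketch of the same registry pattern (the `act` method is a hypothetical interface for illustration; pokermon's real policy classes are not shown here):

```python
import random

class RandomPolicy:
    """Picks an action uniformly at random."""
    def act(self, actions):
        return random.choice(actions)

class FirstActionPolicy:
    """Always takes the first legal action (stand-in for a console/human policy)."""
    def act(self, actions):
        return actions[0]

POLICIES = {"random": RandomPolicy(), "first": FirstActionPolicy()}

def make_policy(name: str):
    """Look up a policy by name, with a helpful error for unknown names."""
    try:
        return POLICIES[name]
    except KeyError:
        raise ValueError(f"unknown policy {name!r}; choose from {sorted(POLICIES)}")

policy = make_policy("first")
print(policy.act(["fold", "call", "raise"]))  # fold
```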
8a7f14cc71d23aacf9f11d61bb45fcb44dcfd68d | 28 | py | Python | test/python/LIM2Metrics/py3/base/common/Python011/Python011.py | sagodiz/SonarQube-plug-in | 4f8e111baecc4c9f9eaa5cd3d7ebeb1e365ace2c | [
"BSD-4-Clause"
] | 20 | 2015-06-16T17:39:10.000Z | 2022-03-20T22:39:40.000Z | test/python/LIM2Metrics/py3/base/common/Python011/Python011.py | sagodiz/SonarQube-plug-in | 4f8e111baecc4c9f9eaa5cd3d7ebeb1e365ace2c | [
"BSD-4-Clause"
] | 29 | 2015-12-29T19:07:22.000Z | 2022-03-22T10:39:02.000Z | test/python/LIM2Metrics/py3/base/common/Python011/Python011.py | sagodiz/SonarQube-plug-in | 4f8e111baecc4c9f9eaa5cd3d7ebeb1e365ace2c | [
"BSD-4-Clause"
] | 12 | 2015-08-28T01:22:18.000Z | 2021-09-25T08:17:31.000Z | from . import fibo as valami | 28 | 28 | 0.785714 | 5 | 28 | 4.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.178571 | 28 | 1 | 28 | 28 | 0.956522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0a3095e76df601ed167e77f1eb59bab6e6fd3640 | 34 | py | Python | __init__.py | bonifield/IPv4Mutate | 8bf7ea830937fa8ee928a0c4c10e761ad94183c5 | [
"MIT"
] | null | null | null | __init__.py | bonifield/IPv4Mutate | 8bf7ea830937fa8ee928a0c4c10e761ad94183c5 | [
"MIT"
] | null | null | null | __init__.py | bonifield/IPv4Mutate | 8bf7ea830937fa8ee928a0c4c10e761ad94183c5 | [
"MIT"
] | null | null | null | from ipv4mutate import IPv4Mutate
| 17 | 33 | 0.882353 | 4 | 34 | 7.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066667 | 0.117647 | 34 | 1 | 34 | 34 | 0.933333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0a3138ae989da727bb2afcf89cf8c6e511c6f774 | 15,615 | py | Python | notifications/tests/tests.py | LegoStormtroopr/django-notifications | 1b85a996dfcdb2ffcb34edf5f7ea08f28e46f653 | [
"BSD-3-Clause"
] | null | null | null | notifications/tests/tests.py | LegoStormtroopr/django-notifications | 1b85a996dfcdb2ffcb34edf5f7ea08f28e46f653 | [
"BSD-3-Clause"
] | null | null | null | notifications/tests/tests.py | LegoStormtroopr/django-notifications | 1b85a996dfcdb2ffcb34edf5f7ea08f28e46f653 | [
"BSD-3-Clause"
] | null | null | null | """
Tests for django-notifications.

Covers timestamp handling with and without USE_TZ, the notification model
managers (unread/read/active/deleted), the bundled views, and the live
unread-count and unread-list JSON endpoints.
"""
from django.test import TestCase, RequestFactory
try:
# Django >= 1.7
from django.test import override_settings
except ImportError:
# Django <= 1.6
from django.test.utils import override_settings
from django.conf import settings
from django.contrib.auth.models import User, Group
from django.core.exceptions import ImproperlyConfigured
from django.core.urlresolvers import reverse
from django.utils.timezone import utc, localtime
from django.utils import timezone
import pytz
import json
from notifications import notify
from notifications.models import Notification
from notifications.utils import id2slug
class NotificationTest(TestCase):
@override_settings(USE_TZ=True)
@override_settings(TIME_ZONE='Asia/Shanghai')
def test_use_timezone(self):
from_user = User.objects.create(username="from", password="pwd", email="example@example.com")
to_user = User.objects.create(username="to", password="pwd", email="example@example.com")
notify.send(from_user, recipient=to_user, verb='commented', action_object=from_user)
notification = Notification.objects.get(recipient=to_user)
delta = timezone.now().replace(tzinfo=utc) - localtime(notification.timestamp, pytz.timezone(settings.TIME_ZONE))
self.assertTrue(delta.seconds < 60)
    # The delta between the two timestamps stays small despite the timezone
    # settings: now() and the notification's timestamp are captured back to
    # back, so the difference reflects elapsed time, not the 8-hour offset
    # that an earlier version of this test asserted against.
@override_settings(USE_TZ=False)
@override_settings(TIME_ZONE='Asia/Shanghai')
def test_disable_timezone(self):
from_user = User.objects.create(username="from2", password="pwd", email="example@example.com")
to_user = User.objects.create(username="to2", password="pwd", email="example@example.com")
notify.send(from_user, recipient=to_user, verb='commented', action_object=from_user)
notification = Notification.objects.get(recipient=to_user)
delta = timezone.now() - notification.timestamp
self.assertTrue(delta.seconds < 60)
class NotificationManagersTest(TestCase):
def setUp(self):
self.message_count = 10
self.from_user = User.objects.create(username="from2", password="pwd", email="example@example.com")
self.to_user = User.objects.create(username="to2", password="pwd", email="example@example.com")
self.to_group = Group.objects.create(name="to2_g")
self.to_group.user_set.add(self.to_user)
for i in range(self.message_count):
notify.send(self.from_user, recipient=self.to_user, verb='commented', action_object=self.from_user)
# Send notification to group
notify.send(self.from_user, recipient=self.to_group, verb='commented', action_object=self.from_user)
self.message_count += 1
def test_unread_manager(self):
self.assertEqual(Notification.objects.unread().count(), self.message_count)
n = Notification.objects.filter(recipient=self.to_user).first()
n.mark_as_read()
self.assertEqual(Notification.objects.unread().count(), self.message_count-1)
for n in Notification.objects.unread():
self.assertTrue(n.unread)
def test_read_manager(self):
self.assertEqual(Notification.objects.unread().count(), self.message_count)
n = Notification.objects.filter(recipient=self.to_user).first()
n.mark_as_read()
self.assertEqual(Notification.objects.read().count(), 1)
for n in Notification.objects.read():
self.assertFalse(n.unread)
def test_mark_all_as_read_manager(self):
self.assertEqual(Notification.objects.unread().count(), self.message_count)
Notification.objects.filter(recipient=self.to_user).mark_all_as_read()
self.assertEqual(Notification.objects.unread().count(), 0)
def test_mark_all_as_unread_manager(self):
self.assertEqual(Notification.objects.unread().count(), self.message_count)
Notification.objects.filter(recipient=self.to_user).mark_all_as_read()
self.assertEqual(Notification.objects.unread().count(), 0)
Notification.objects.filter(recipient=self.to_user).mark_all_as_unread()
self.assertEqual(Notification.objects.unread().count(), self.message_count)
def test_mark_all_deleted_manager_without_soft_delete(self):
self.assertRaises(ImproperlyConfigured, Notification.objects.active)
self.assertRaises(ImproperlyConfigured, Notification.objects.active)
self.assertRaises(ImproperlyConfigured, Notification.objects.mark_all_as_deleted)
self.assertRaises(ImproperlyConfigured, Notification.objects.mark_all_as_active)
@override_settings(NOTIFICATIONS_SOFT_DELETE=True)
def test_mark_all_deleted_manager(self):
n = Notification.objects.filter(recipient=self.to_user).first()
n.mark_as_read()
self.assertEqual(Notification.objects.read().count(), 1)
self.assertEqual(Notification.objects.unread().count(), self.message_count-1)
self.assertEqual(Notification.objects.active().count(), self.message_count)
self.assertEqual(Notification.objects.deleted().count(), 0)
Notification.objects.mark_all_as_deleted()
self.assertEqual(Notification.objects.read().count(), 0)
self.assertEqual(Notification.objects.unread().count(), 0)
self.assertEqual(Notification.objects.active().count(), 0)
self.assertEqual(Notification.objects.deleted().count(), self.message_count)
Notification.objects.mark_all_as_active()
self.assertEqual(Notification.objects.read().count(), 1)
self.assertEqual(Notification.objects.unread().count(), self.message_count-1)
self.assertEqual(Notification.objects.active().count(), self.message_count)
self.assertEqual(Notification.objects.deleted().count(), 0)
class NotificationTestPages(TestCase):
def setUp(self):
self.message_count = 10
self.from_user = User.objects.create_user(username="from", password="pwd", email="example@example.com")
self.to_user = User.objects.create_user(username="to", password="pwd", email="example@example.com")
self.to_user.is_staff = True
self.to_user.save()
for i in range(self.message_count):
notify.send(self.from_user, recipient=self.to_user, verb='commented', action_object=self.from_user)
def logout(self):
self.client.post(reverse('admin:logout')+'?next=/', {})
def login(self, username, password):
self.logout()
response = self.client.post(reverse('login'), {'username': username, 'password': password})
self.assertEqual(response.status_code, 302)
return response
def test_all_messages_page(self):
self.login('to', 'pwd')
response = self.client.get(reverse('notifications:all'))
self.assertEqual(response.status_code, 200)
self.assertEqual(len(response.context['notifications']), len(self.to_user.notifications.all()))
def test_unread_messages_pages(self):
self.login('to', 'pwd')
response = self.client.get(reverse('notifications:unread'))
self.assertEqual(response.status_code, 200)
self.assertEqual(len(response.context['notifications']), len(self.to_user.notifications.unread()))
self.assertEqual(len(response.context['notifications']), self.message_count)
for i, n in enumerate(self.to_user.notifications.all()):
if i % 3 == 0:
response = self.client.get(reverse('notifications:mark_as_read', args=[id2slug(n.id)]))
self.assertEqual(response.status_code, 302)
response = self.client.get(reverse('notifications:unread'))
self.assertEqual(response.status_code, 200)
self.assertEqual(len(response.context['notifications']), len(self.to_user.notifications.unread()))
self.assertTrue(len(response.context['notifications']) < self.message_count)
response = self.client.get(reverse('notifications:mark_all_as_read'))
self.assertRedirects(response, reverse('notifications:all'))
response = self.client.get(reverse('notifications:unread'))
self.assertEqual(len(response.context['notifications']), len(self.to_user.notifications.unread()))
self.assertEqual(len(response.context['notifications']), 0)
def test_next_pages(self):
self.login('to', 'pwd')
response = self.client.get(reverse('notifications:mark_all_as_read'), data={
"next": reverse('notifications:unread'),
})
self.assertRedirects(response, reverse('notifications:unread'))
slug = id2slug(self.to_user.notifications.first().id)
response = self.client.get(reverse('notifications:mark_as_read', args=[slug]), data={
"next": reverse('notifications:unread'),
})
self.assertRedirects(response, reverse('notifications:unread'))
slug = id2slug(self.to_user.notifications.first().id)
response = self.client.get(reverse('notifications:mark_as_unread', args=[slug]), {
"next": reverse('notifications:unread'),
})
self.assertRedirects(response, reverse('notifications:unread'))
def test_delete_messages_pages(self):
self.login('to', 'pwd')
slug = id2slug(self.to_user.notifications.first().id)
response = self.client.get(reverse('notifications:delete', args=[slug]))
self.assertRedirects(response, reverse('notifications:all'))
response = self.client.get(reverse('notifications:all'))
self.assertEqual(response.status_code, 200)
self.assertEqual(len(response.context['notifications']), len(self.to_user.notifications.all()))
self.assertEqual(len(response.context['notifications']), self.message_count-1)
response = self.client.get(reverse('notifications:unread'))
self.assertEqual(response.status_code, 200)
self.assertEqual(len(response.context['notifications']), len(self.to_user.notifications.unread()))
self.assertEqual(len(response.context['notifications']), self.message_count-1)
@override_settings(NOTIFICATIONS_SOFT_DELETE=True)
def test_soft_delete_messages_manager(self):
self.login('to', 'pwd')
slug = id2slug(self.to_user.notifications.first().id)
response = self.client.get(reverse('notifications:delete', args=[slug]))
self.assertRedirects(response, reverse('notifications:all'))
response = self.client.get(reverse('notifications:all'))
self.assertEqual(response.status_code, 200)
self.assertEqual(len(response.context['notifications']), len(self.to_user.notifications.active()))
self.assertEqual(len(response.context['notifications']), self.message_count-1)
response = self.client.get(reverse('notifications:unread'))
self.assertEqual(response.status_code, 200)
self.assertEqual(len(response.context['notifications']), len(self.to_user.notifications.unread()))
self.assertEqual(len(response.context['notifications']), self.message_count-1)
def test_unread_count_api(self):
self.login('to', 'pwd')
response = self.client.get(reverse('notifications:live_unread_notification_count'))
data = json.loads(response.content.decode('utf-8'))
self.assertEqual(list(data.keys()), ['unread_count'])
self.assertEqual(data['unread_count'], 10)
Notification.objects.filter(recipient=self.to_user).mark_all_as_read()
response = self.client.get(reverse('notifications:live_unread_notification_count'))
data = json.loads(response.content.decode('utf-8'))
self.assertEqual(list(data.keys()), ['unread_count'])
self.assertEqual(data['unread_count'], 0)
notify.send(self.from_user, recipient=self.to_user, verb='commented', action_object=self.from_user)
response = self.client.get(reverse('notifications:live_unread_notification_count'))
data = json.loads(response.content.decode('utf-8'))
self.assertEqual(list(data.keys()), ['unread_count'])
self.assertEqual(data['unread_count'], 1)
def test_unread_list_api(self):
self.login('to', 'pwd')
response = self.client.get(reverse('notifications:live_unread_notification_list'))
data = json.loads(response.content.decode('utf-8'))
self.assertEqual(sorted(list(data.keys())), ['unread_count', 'unread_list'])
self.assertEqual(data['unread_count'], 10)
self.assertEqual(len(data['unread_list']), 5)
response = self.client.get(reverse('notifications:live_unread_notification_list'), data={"max": "12"})
data = json.loads(response.content.decode('utf-8'))
self.assertEqual(sorted(list(data.keys())), ['unread_count', 'unread_list'])
self.assertEqual(data['unread_count'], 10)
self.assertEqual(len(data['unread_list']), 10)
# Test with a bad 'max' value
response = self.client.get(reverse('notifications:live_unread_notification_list'), data={
"max": "this_is_wrong",
})
data = json.loads(response.content.decode('utf-8'))
self.assertEqual(sorted(list(data.keys())), ['unread_count', 'unread_list'])
self.assertEqual(data['unread_count'], 10)
self.assertEqual(len(data['unread_list']), 5)
Notification.objects.filter(recipient=self.to_user).mark_all_as_read()
response = self.client.get(reverse('notifications:live_unread_notification_list'))
data = json.loads(response.content.decode('utf-8'))
self.assertEqual(sorted(list(data.keys())), ['unread_count', 'unread_list'])
self.assertEqual(data['unread_count'], 0)
self.assertEqual(len(data['unread_list']), 0)
notify.send(self.from_user, recipient=self.to_user, verb='commented', action_object=self.from_user)
response = self.client.get(reverse('notifications:live_unread_notification_list'))
data = json.loads(response.content.decode('utf-8'))
self.assertEqual(sorted(list(data.keys())), ['unread_count', 'unread_list'])
self.assertEqual(data['unread_count'], 1)
self.assertEqual(len(data['unread_list']), 1)
self.assertEqual(data['unread_list'][0]['verb'], 'commented')
def test_live_update_tags(self):
from django.shortcuts import render
self.login('to', 'pwd')
self.factory = RequestFactory()
request = self.factory.get('/notification/live_updater')
request.user = self.to_user
render(request, 'notifications/test_tags.html', {'request': request})
# TODO: Add more tests to check what is being output.
def test_anon_user_gets_nothing(self):
response = self.client.post(reverse('notifications:live_unread_notification_count'))
self.assertEqual(response.status_code, 200)
data = json.loads(response.content.decode('utf-8'))
        self.assertEqual(data['unread_count'], 0)
response = self.client.post(reverse('notifications:live_unread_notification_list'))
self.assertEqual(response.status_code, 200)
data = json.loads(response.content.decode('utf-8'))
        self.assertEqual(data['unread_count'], 0)
        self.assertEqual(data['unread_list'], [])
| 50.209003 | 121 | 0.700672 | 1,879 | 15,615 | 5.670569 | 0.113358 | 0.099953 | 0.029094 | 0.045331 | 0.820366 | 0.801689 | 0.766119 | 0.739465 | 0.70061 | 0.678273 | 0 | 0.008507 | 0.164393 | 15,615 | 310 | 122 | 50.370968 | 0.808093 | 0.03471 | 0 | 0.593361 | 0 | 0 | 0.13375 | 0.041705 | 0 | 0 | 0 | 0.003226 | 0.356846 | 1 | 0.087137 | false | 0.041494 | 0.06639 | 0 | 0.170124 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
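The view tests above convert primary keys with `id2slug` from `notifications.utils`. Such helpers are commonly written as a simple reversible offset that keeps raw database IDs out of URLs; the sketch below is illustrative (the offset constant is an assumption, not necessarily the library's actual value):

```python
OFFSET = 110909902  # illustrative constant; the library's real value may differ

def id2slug(notification_id: int) -> int:
    """Obfuscate a primary key for use in a URL slug."""
    return notification_id + OFFSET

def slug2id(slug: int) -> int:
    """Recover the primary key from a slug produced by id2slug."""
    return slug - OFFSET

# The pair must round-trip exactly for the views to resolve notifications.
assert slug2id(id2slug(7)) == 7
```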
0a322bbb0fc829464af4a4329c95dedf0d468990 | 121 | py | Python | seispy/__init__.py | obsmax/seispy | 1ed5064dbcfeb4ceebf20b59b97e588609c5b8ab | [
"MIT"
] | 1 | 2020-06-15T23:12:45.000Z | 2020-06-15T23:12:45.000Z | seispy/__init__.py | obsmax/seispy | 1ed5064dbcfeb4ceebf20b59b97e588609c5b8ab | [
"MIT"
] | null | null | null | seispy/__init__.py | obsmax/seispy | 1ed5064dbcfeb4ceebf20b59b97e588609c5b8ab | [
"MIT"
] | null | null | null | from seispy.version import __version__
from seispy.trace import Trace
from seispy.stream import Stream, readseispystream
# PositioningSolver/src/io_manager/__init__.py (rodrigo-moliveira/PositioningSolver, MIT)
from .import_rinex.RinexObsReader import RinexObsReader
from .import_rinex.RinexNavReaderGPS import RinexNavReaderGPS
# tests/test_case1.py (My-Novel-Management/refine-storybuilder, MIT)
from storybuilder.hello import helloworld
def test_hello():
    assert helloworld(), "should be True in the normal case"
# docs/panorama_xpaths_list.py (suzivp/iron-skillet, MIT)
# xpaths as a Python dictionary with name as a key
# %(text)s placeholders mark xpath variables, filled in via old-style Python %-substitution
xpaths_Panorama = {
"address": "/config/devices/entry[@name='localhost.localdomain']/device-group/entry[@name='%(devicegroup)s']/address",
"decryption_rules_post": "/config/devices/entry[@name='localhost.localdomain']/device-group/entry[@name='%(devicegroup)s']/post-rulebase/decryption",
"decryption_rules_pre": "/config/devices/entry[@name='localhost.localdomain']/device-group/entry[@name='%(devicegroup)s']/pre-rulebase/decryption",
"default_security_rules": "/config/devices/entry[@name='localhost.localdomain']/device-group/entry[@name='%(devicegroup)s']/post-rulebase/default-security-rules",
"device_group": "/config/devices/entry[@name='localhost.localdomain']/device-group",
"device_setting": "/config/devices/entry[@name='localhost.localdomain']/template/entry[@name='%(pan_template)s']/config/devices/entry[@name='localhost.localdomain']/deviceconfig/setting",
"device_system": "/config/devices/entry[@name='localhost.localdomain']/template/entry[@name='%(pan_template)s']/config/devices/entry[@name='localhost.localdomain']/deviceconfig/system",
"dhcp_server": "/config/devices/entry[@name='localhost.localdomain']/template/entry[@name='%(pan_template)s']/config/devices/entry[@name='localhost.localdomain']/network/dhcp/interface/entry[@name='%(port)s']/server",
"email_scheduler": "/config/devices/entry[@name='localhost.localdomain']/device-group/entry[@name='%(devicegroup)s']/email-scheduler",
"external_list": "/config/devices/entry[@name='localhost.localdomain']/device-group/entry[@name='%(devicegroup)s']/external-list",
"gpcs_service_connection": "/config/devices/entry[@name='localhost.localdomain']/plugins/cloud_services/service-connection",
"gpcs_trusted-zone": "/config/devices/entry[@name='localhost.localdomain']/plugins/cloud_services/remote-networks/trusted-zones",
"gpcs_onboarding": "/config/devices/entry[@name='localhost.localdomain']/plugins/cloud_services/remote-networks/onboarding",
"ike_gateway": "/config/devices/entry[@name='localhost.localdomain']/template/entry[@name='%(pan_template)s']/config/devices/entry[@name='localhost.localdomain']/network/ike",
"interface": "/config/devices/entry[@name='localhost.localdomain']/template/entry[@name='%(pan_template)s']/config/devices/entry[@name='localhost.localdomain']/network/interface",
"ipsec_tunnel": "/config/devices/entry[@name='localhost.localdomain']/template/entry[@name='%(pan_template)s']/config/devices/entry[@name='localhost.localdomain']/network/tunnel",
"log_collector_group": "/config/devices/entry[@name='localhost.localdomain']/log-collector-group",
"log_settings": "/config/devices/entry[@name='localhost.localdomain']/template/entry[@name='%(pan_template)s']/config/shared/log-settings",
"log_settings_profiles": "/config/devices/entry[@name='localhost.localdomain']/device-group/entry[@name='%(devicegroup)s']/log-settings/profiles",
"nat": "/config/devices/entry[@name='localhost.localdomain']/device-group/entry[@name='%(devicegroup)s']/post-rulebase/nat",
"network_profiles": "/config/devices/entry[@name='localhost.localdomain']/template/entry[@name='%(pan_template)s']/config/devices/entry[@name='localhost.localdomain']/network/profiles",
"ntp_servers": "/config/devices/entry[@name='localhost.localdomain']/template/entry[@name='%(pan_template)s']/config/devices/entry[@name='localhost.localdomain']/deviceconfig/system/ntp-servers",
"panorama_log_settings": "/config/panorama/log-settings",
"panorama_setting": "/config/devices/entry[@name='localhost.localdomain']/deviceconfig/setting",
"panorama_system": "/config/devices/entry[@name='localhost.localdomain']/deviceconfig/system",
"profile_group": "/config/devices/entry[@name='localhost.localdomain']/device-group/entry[@name='%(devicegroup)s']/profile-group",
"profiles_custom_url_category": "/config/devices/entry[@name='localhost.localdomain']/device-group/entry[@name='%(devicegroup)s']/profiles/custom-url-category",
"profiles_decryption": "/config/devices/entry[@name='localhost.localdomain']/device-group/entry[@name='%(devicegroup)s']/profiles/decryption",
"profiles_file_blocking": "/config/devices/entry[@name='localhost.localdomain']/device-group/entry[@name='%(devicegroup)s']/profiles/file-blocking",
"profiles_spyware": "/config/devices/entry[@name='localhost.localdomain']/device-group/entry[@name='%(devicegroup)s']/profiles/spyware",
"profiles_url_filtering": "/config/devices/entry[@name='localhost.localdomain']/device-group/entry[@name='%(devicegroup)s']/profiles/url-filtering",
"profiles_virus": "/config/devices/entry[@name='localhost.localdomain']/device-group/entry[@name='%(devicegroup)s']/profiles/virus",
"profiles_vulnerability": "/config/devices/entry[@name='localhost.localdomain']/device-group/entry[@name='%(devicegroup)s']/profiles/vulnerability",
"profiles_wildfire_analysis": "/config/devices/entry[@name='localhost.localdomain']/device-group/entry[@name='%(devicegroup)s']/profiles/wildfire-analysis",
"reports": "/config/devices/entry[@name='localhost.localdomain']/device-group/entry[@name='%(devicegroup)s']/reports",
"report_group": "/config/devices/entry[@name='localhost.localdomain']/device-group/entry[@name='%(devicegroup)s']/report-group",
"route_service": "/config/devices/entry[@name='localhost.localdomain']/template/entry[@name='%(pan_template)s']/config/devices/entry[@name='localhost.localdomain']/deviceconfig/system/route/service/entry[@name='%(route_service)s']",
"security_rules_post": "/config/devices/entry[@name='localhost.localdomain']/device-group/entry[@name='%(devicegroup)s']/post-rulebase/security",
"security_rules_pre": "/config/devices/entry[@name='localhost.localdomain']/device-group/entry[@name='%(devicegroup)s']/pre-rulebase/security",
"server_profile": "/config/devices/entry[@name='localhost.localdomain']/template/entry[@name='%(pan_template)s']/config/shared/server-profile",
"service": "/config/devices/entry[@name='localhost.localdomain']/device-group/entry[@name='%(devicegroup)s']/service",
"shared_address": "/config/shared/address",
"shared_external_list": "/config/shared/external-list",
"shared_log_settings": "/config/shared/log-settings",
"shared_log_settings_profiles": "/config/shared/log-settings/profiles",
"shared_post_rulebase_decryption": "/config/shared/post-rulebase/decryption",
"shared_post_rulebase_security": "/config/shared/post-rulebase/security",
"shared_post_rulebase_default_security_rules": "/config/shared/post-rulebase/default-security-rules",
"shared_pre_rulebase_decryption": "/config/shared/pre-rulebase/decryption",
"shared_pre_rulebase_security": "/config/shared/pre-rulebase/security",
"shared_profiles": "/config/shared/profiles",
"shared_profiles_custom_url_category": "/config/shared/profiles/custom-url-category",
"shared_profiles_decryption": "/config/shared/profiles/decryption",
"shared_profiles_file_blocking": "/config/shared/profiles/file-blocking",
"shared_profiles_spyware": "/config/shared/profiles/spyware",
"shared_profiles_url_filtering": "/config/shared/profiles/url-filtering",
"shared_profiles_virus": "/config/shared/profiles/virus",
"shared_profiles_vulnerability": "/config/shared/profiles/vulnerability",
"shared_profiles_wildfire_analysis": "/config/shared/profiles/wildfire-analysis",
"shared_profile_group": "/config/shared/profile-group",
"shared_report_group": "/config/shared/report-group",
"shared_tag": "/config/shared/tag",
"tag": "/config/devices/entry[@name='localhost.localdomain']/device-group/entry[@name='%(devicegroup)s']/tag",
"template": "/config/devices/entry[@name='localhost.localdomain']/template",
"template-stack": "/config/devices/entry[@name='localhost.localdomain']/template-stack",
"update_schedule": "/config/devices/entry[@name='localhost.localdomain']/template/entry[@name='%(pan_template)s']/config/devices/entry[@name='localhost.localdomain']/deviceconfig/system/update-schedule",
"userid_agent": "/config/devices/entry[@name='localhost.localdomain']/template/entry[@name='%(pan_template)s']/config/devices/entry[@name='localhost.localdomain']/vsys/entry[@name='vsys1']/user-id-agent",
"userid_group_mapping": "/config/devices/entry[@name='localhost.localdomain']/template/entry[@name='%(pan_template)s']/config/devices/entry[@name='localhost.localdomain']/vsys/entry[@name='vsys1']/group-mapping",
"virtual_router": "/config/devices/entry[@name='localhost.localdomain']/template/entry[@name='%(pan_template)s']/config/devices/entry[@name='localhost.localdomain']/network/virtual-router",
"wildfire": "/config/devices/entry[@name='localhost.localdomain']/template/entry[@name='%(pan_template)s']/config/devices/entry[@name='localhost.localdomain']/deviceconfig/setting/wildfire",
"zone": "/config/devices/entry[@name='localhost.localdomain']/template/entry[@name='%(pan_template)s']/config/devices/entry[@name='localhost.localdomain']/vsys/entry[@name='vsys1']/zone",
"zone_import_interface": "/config/devices/entry[@name='localhost.localdomain']/template/entry[@name='%(pan_template)s']/config/devices/entry[@name='localhost.localdomain']/vsys/entry[@name='vsys1']/import",
"zone_protection_profile": "/config/devices/entry[@name='localhost.localdomain']/template/entry[@name='%(pan_template)s']/config/devices/entry[@name='localhost.localdomain']/network/profiles/zone-protection-profile",
}
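The `%(...)s` placeholders in the templated xpaths above are meant to be filled with old-style string substitution, e.g. `xpaths_Panorama["address"] % {"devicegroup": ...}`. A self-contained sketch of that rendering step (the device-group name `branch-offices` is illustrative, not from the source):

```python
# Render a templated xpath via old-style %-substitution against a dict of values.
template = ("/config/devices/entry[@name='localhost.localdomain']"
            "/device-group/entry[@name='%(devicegroup)s']/address")

xpath = template % {"devicegroup": "branch-offices"}
# -> .../device-group/entry[@name='branch-offices']/address
assert xpath.endswith("/entry[@name='branch-offices']/address")
```

Keys that contain no placeholder (e.g. the `shared_*` entries) can be used directly without substitution.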
| 124.21519 | 238 | 0.742179 | 1,128 | 9,813 | 6.336879 | 0.087766 | 0.146055 | 0.171237 | 0.209289 | 0.704952 | 0.689144 | 0.678511 | 0.648993 | 0.632065 | 0.615277 | 0 | 0.000436 | 0.064099 | 9,813 | 78 | 239 | 125.807692 | 0.777875 | 0.011312 | 0 | 0 | 0 | 0.586667 | 0.889886 | 0.823281 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.013333 | 0 | 0.013333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0a8fe9598640a83f32e26a0f465712abef6fbea7 | 7,529 | py | Python | examples.py | jolsfd/rsa-algorithm | e79aaef638962bc385c4b795aff480698a95907e | [
"MIT"
] | 1 | 2022-03-15T13:46:46.000Z | 2022-03-15T13:46:46.000Z | examples.py | jolsfd/rsa-algorithm | e79aaef638962bc385c4b795aff480698a95907e | [
"MIT"
] | null | null | null | examples.py | jolsfd/rsa-algorithm | e79aaef638962bc385c4b795aff480698a95907e | [
"MIT"
] | null | null | null | # 4096 bit
e_4096 = 17
d_4096 = 517909010805862499209701226695648369818068822857665221429630562878991035451397955806871142994486372422726363786896257862775160717699248432072217999909390276021374122266208152522178360328449120534698665056958761301044621568568021055520308050795455493863234475909673308659049202223736332728095401729476554537446891434798413575147969659442188422536001035616436227593665834739263603113102295192344529382268108558059663370213641169715860318987294852972645868843743135941271365893290940283802353345232987172614945536985411623662736410498827340202388903158324705599414835063238566187669983883074959506578921768451458372190431542788848902393387911767472645636185158639113085157900338876521662548782917067985729312615882805030930024759482168951232122662678752841522096264937212695567812371659622865076305492724630714362324156847382485431108064180373136876434045349950055417174063242657150865359872731675245709709861884551745464364461558534541966865549940337556345209886438180207698740787816650262339179306802378076139176912670463240126050907382407827131554453646109569179854191136974574690894945017249967024893155363067997876331734367900018541992557418247605657795167452331514641236852493846127530181788825361890155133358323974881872009694673
n_4096 = 800404834881787498778629168529638389718833635325482614936701778994804327515796840792437220991478939198758925852476034878834339290989747576838882363496330426578487279865958053897912011416694095371807027815299903828887142424150577994895021533047522126879544190042222386109439676163956150579783802672827402466963377671961184616137771291865200289373819782316310533553847199142498295720249001660896090863505258680637661572148354535015420492980364772775907251849421210091055747289631453165876364260814616539495824920795636145660592634407278616676419213971956363199095654188641420471853611455661301055621970005788617484294360006219044174253968913025088171305182481061283822556811627703130471110572893904060572374572698541449650439898543835402142576936210049712016621689600098090201636678870489119493824822381570324161665895336639862977070197915918660628090377363870302831189834942073338854047749590887832483528154529266822588366328624656428270302782285174722348655741985953320798055794540714254919042365666858924554614813216495850095151356530940776786212494686003327700998577438801010821292003750459431997045776131484179706613310247866433598040498423895487328131939021569214771112973647487811375806598713790761880469233995022713791231516267
m_4096 = 289456684643222718946289951726222411527013375584731423566290240800529282053702932407704032906268986674718654046405951921743886367774857528658509907715494867772350144343989575174716092553041817136087916538711687148656698782688765414072081103046661445260607977369710911002620147268169020027053498042177636296549128877399750263880874198916785398965809560213398414083500551731201616532487674154493302025664550181427130817475370790306284824918161727870020074107657602929273773751441744963737238895834306538502778656635515354527092018425737615652669731546590642018380014576107039277398315590098075250953535855268915182425148054507092500811437762317585125616753837298640356512815704579570722790120160328896231432363873575729229605643958269146834007054826368673581643018010810537930466817764068764499619236801458661701265770872368250474667011112366305098430271116656403594721998030780989312592775333533455569138487882596687752857855742768121977321932915283648341268204915795998066577158351963559331469752764056296186331142520783081828380607364556642275202731059024953057458312885556221084743014608150142652689860809835527387291636042943173041394594165323673080304922807044309960746599787870624250303851544509081879353808880431006135872082847
c_4096 = 525046649724675635609078270694446834370621446718793212625688963036053241676050519427112042459172280839354919839922139686447477363383280220523421222572820490566684514389747375794408161865432217202437244688968373313961597472553521633028401034818383977042513687329556216668617657203310041337313013114561526914005157764681517167911456890062226039745160339518553821256943529963375556701346201039996067151420747108970099518574634187060764400301469863976776103251903405359563350724399880811394347552645303861931794152687973990563986403200508090132909197262925908490771662026156891119917213696403597533865511732968201478864757587120324845798631895810695969316705563645390851787231988503612072203122158142443921007603969391806011327854374939299478264313337199623317602702274786081172239208457810597535487555347601313407977589724132906874772167852427614320249533690965465892385083271996557380120348472049126769960207774931235353051647689940700568978681085355945508325433347266502543091665799589361941939266114896189043976560361175833138469597035276913989699326001021099477763020331689672506697020412059692743524402394628509300209465441019966281598259089440068092586554656816538937644372743838621905685096032242099015678940786917672258884196260
# e_4096 = 5
# d_4096 = 729286407921427879579983176573734389702385324359645844941705874240179133207995728963730450908473815736044445968791785749079722765720904984686093980270433933069143160954473067059844766146550255744666028428266638936227372570436092000426552680217714806073243517765894597287235447878761332092090169048109006610876884039677611175795109512054843224390606219140745654202684394633945030055376290840065948488702869818922918017976240283484719018995192901046895986701282676876616520091796593010013527792129713736900669026884371217075121294520915908639285094649357471706570580470318693159578979425844745044538460764295892943589183790811157234878135443590232666054611581996639368670352833966338027551708144328441982860232675947548564642081601820040654502861464023356645087974231854022845662680761525050582146586347035737198871659576844453267868429291766129404156213378455106818828313878557911299140435850214333945752216523644680891189061276121427304887213504207165523917209093777062623980698873244179454053232062286477826667668445463839131190710301692031968907387952975658653021457147225327828899997070758232719321725426200910882156537494519907419658158907567938613494020808123986056927716326690706501382194105883570382298044539470386772163623325
# n_4096 = 911608009901784849474978970717167987127981655449557306177132342800223916509994661204663063635592269670055557460989732186349653457151131230857617475338042416336428951193091333824805957683187819680832535535333298670284215713045115000533190850272143507591554397207368246609044309848451665115112711310136258263596105049597013969743886890068554030488257773925932067753355493292431287569220363550082435610878587273653647522470300354355898773743991126308619983376603346095770650114745741262516909740162142171125836283605464021343901618151144885799106368311696839633213225587898366449473724282305931305673075955369866179486540168031948834943678659724374408615437245647541848632445677693588403109429612853044747347924208343957395257517422911462553844905730924188140042627351016980484395070148572105446467557555414750622020883675935525921005477260850955243999929228233687293588816352280259004375362847595872213268802050187698432980295321304263736728237760513206370717347411572654041379491146795355813687118520551364919324526983547832487890362612134011944608425294070058330835577011254632640931807594922370928516175488800161508316292626288453838194576176082696655213935145244309463234853402862657318235567412857637401052334403081931365492295337
e = 5
d = 77
n = 119
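The small keypair above is valid textbook RSA: n = 119 = 7 * 17, lambda(n) = lcm(6, 16) = 48, and e * d = 5 * 77 = 385 = 8 * 48 + 1, so e * d ≡ 1 (mod lambda(n)). A minimal sketch of how these values round-trip a message (the plaintext `m = 10` is illustrative):

```python
# Textbook RSA encrypt/decrypt with the small example keypair above.
e, d, n = 5, 77, 119  # e*d = 385 ≡ 1 (mod lambda(119) = 48)

def encrypt(m, e, n):
    # c = m^e mod n, via Python's three-argument pow (modular exponentiation)
    return pow(m, e, n)

def decrypt(c, d, n):
    # m = c^d mod n
    return pow(c, d, n)

m = 10                        # plaintext must be < n
c = encrypt(m, e, n)          # 10^5 mod 119 == 40
assert decrypt(c, d, n) == m  # round-trips back to 10
```

The 4096-bit values above work the same way, just with numbers large enough that `pow(m, e, n)` is the only practical way to compute them.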
| 327.347826 | 1,244 | 0.9915 | 32 | 7,529 | 233.03125 | 0.53125 | 0.001341 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.995053 | 0.006508 | 7,529 | 22 | 1,245 | 342.227273 | 0.001872 | 0.332714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0a99affc2e3d5a339e001a18754ee74ee27b404c | 34 | py | Python | usttc/audio/__init__.py | kakakuoka/asr-bridger | 118dfb8c9705b4d2ae229fba021dd2b85f6c1c97 | [
"Apache-2.0"
] | 2 | 2021-11-24T06:17:51.000Z | 2021-12-16T22:45:28.000Z | usttc/audio/__init__.py | kakakuoka/asr-bridger | 118dfb8c9705b4d2ae229fba021dd2b85f6c1c97 | [
"Apache-2.0"
] | null | null | null | usttc/audio/__init__.py | kakakuoka/asr-bridger | 118dfb8c9705b4d2ae229fba021dd2b85f6c1c97 | [
"Apache-2.0"
] | 1 | 2022-01-20T16:30:42.000Z | 2022-01-20T16:30:42.000Z | from .audio_file import AudioFile
| 17 | 33 | 0.852941 | 5 | 34 | 5.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 34 | 1 | 34 | 34 | 0.933333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0abdac856be6daa42aaaae2bd91b693593169a54 | 18,311 | py | Python | consul/tests/test_unit.py | tzach/integrations-core | ac9daf60630bea4739947fe1d8df72c20bfcbc22 | [
"BSD-3-Clause"
] | null | null | null | consul/tests/test_unit.py | tzach/integrations-core | ac9daf60630bea4739947fe1d8df72c20bfcbc22 | [
"BSD-3-Clause"
] | null | null | null | consul/tests/test_unit.py | tzach/integrations-core | ac9daf60630bea4739947fe1d8df72c20bfcbc22 | [
"BSD-3-Clause"
] | null | null | null | # (C) Datadog, Inc. 2018-present
# All rights reserved
# Licensed under a 3-clause BSD style license (see LICENSE)
import logging
import mock
import pytest
from datadog_checks.consul import ConsulCheck
from datadog_checks.utils.containers import hash_mutable
from . import common, consul_mocks
pytestmark = pytest.mark.unit
log = logging.getLogger(__file__)
def test_get_nodes_with_service(aggregator):
consul_check = ConsulCheck(common.CHECK_NAME, {}, [{}])
consul_mocks.mock_check(consul_check, consul_mocks._get_consul_mocks())
consul_check.check(consul_mocks.MOCK_CONFIG)
expected_tags = [
'consul_datacenter:dc1',
'consul_service_id:service-1',
'consul_service-1_service_tag:az-us-east-1a',
]
aggregator.assert_metric('consul.catalog.nodes_up', value=1, tags=expected_tags)
aggregator.assert_metric('consul.catalog.nodes_passing', value=1, tags=expected_tags)
aggregator.assert_metric('consul.catalog.nodes_warning', value=0, tags=expected_tags)
aggregator.assert_metric('consul.catalog.nodes_critical', value=0, tags=expected_tags)
expected_tags = ['consul_datacenter:dc1', 'consul_node_id:node-1']
aggregator.assert_metric('consul.catalog.services_up', value=6, tags=expected_tags)
aggregator.assert_metric('consul.catalog.services_passing', value=6, tags=expected_tags)
aggregator.assert_metric('consul.catalog.services_warning', value=0, tags=expected_tags)
aggregator.assert_metric('consul.catalog.services_critical', value=0, tags=expected_tags)
def test_get_peers_in_cluster(aggregator):
my_mocks = consul_mocks._get_consul_mocks()
consul_check = ConsulCheck(common.CHECK_NAME, {}, [{}])
consul_mocks.mock_check(consul_check, my_mocks)
consul_check.check(consul_mocks.MOCK_CONFIG)
# When node is leader
aggregator.assert_metric('consul.peers', value=3, tags=['consul_datacenter:dc1', 'mode:leader'])
my_mocks['_get_cluster_leader'] = consul_mocks.mock_get_cluster_leader_B
consul_mocks.mock_check(consul_check, my_mocks)
consul_check.check(consul_mocks.MOCK_CONFIG)
aggregator.assert_metric('consul.peers', value=3, tags=['consul_datacenter:dc1', 'mode:follower'])
def test_count_all_nodes(aggregator):
my_mocks = consul_mocks._get_consul_mocks()
consul_check = ConsulCheck(common.CHECK_NAME, {}, [{}])
consul_mocks.mock_check(consul_check, my_mocks)
consul_check.check(consul_mocks.MOCK_CONFIG)
aggregator.assert_metric('consul.catalog.total_nodes', value=2, tags=['consul_datacenter:dc1'])
def test_get_nodes_with_service_warning(aggregator):
consul_check = ConsulCheck(common.CHECK_NAME, {}, [{}])
my_mocks = consul_mocks._get_consul_mocks()
my_mocks['get_nodes_with_service'] = consul_mocks.mock_get_nodes_with_service_warning
consul_mocks.mock_check(consul_check, my_mocks)
consul_check.check(consul_mocks.MOCK_CONFIG)
expected_tags = [
'consul_datacenter:dc1',
'consul_service_id:service-1',
'consul_service-1_service_tag:az-us-east-1a',
]
aggregator.assert_metric('consul.catalog.nodes_up', value=1, tags=expected_tags)
aggregator.assert_metric('consul.catalog.nodes_passing', value=0, tags=expected_tags)
aggregator.assert_metric('consul.catalog.nodes_warning', value=1, tags=expected_tags)
aggregator.assert_metric('consul.catalog.nodes_critical', value=0, tags=expected_tags)
expected_tags = ['consul_datacenter:dc1', 'consul_node_id:node-1']
aggregator.assert_metric('consul.catalog.services_up', value=6, tags=expected_tags)
aggregator.assert_metric('consul.catalog.services_passing', value=0, tags=expected_tags)
aggregator.assert_metric('consul.catalog.services_warning', value=6, tags=expected_tags)
aggregator.assert_metric('consul.catalog.services_critical', value=0, tags=expected_tags)
def test_get_nodes_with_service_critical(aggregator):
consul_check = ConsulCheck(common.CHECK_NAME, {}, [{}])
my_mocks = consul_mocks._get_consul_mocks()
my_mocks['get_nodes_with_service'] = consul_mocks.mock_get_nodes_with_service_critical
consul_mocks.mock_check(consul_check, my_mocks)
consul_check.check(consul_mocks.MOCK_CONFIG)
expected_tags = [
'consul_datacenter:dc1',
'consul_service_id:service-1',
'consul_service-1_service_tag:az-us-east-1a',
]
aggregator.assert_metric('consul.catalog.nodes_up', value=1, tags=expected_tags)
aggregator.assert_metric('consul.catalog.nodes_passing', value=0, tags=expected_tags)
aggregator.assert_metric('consul.catalog.nodes_warning', value=0, tags=expected_tags)
aggregator.assert_metric('consul.catalog.nodes_critical', value=1, tags=expected_tags)
expected_tags = ['consul_datacenter:dc1', 'consul_node_id:node-1']
aggregator.assert_metric('consul.catalog.services_up', value=6, tags=expected_tags)
aggregator.assert_metric('consul.catalog.services_passing', value=0, tags=expected_tags)
aggregator.assert_metric('consul.catalog.services_warning', value=0, tags=expected_tags)
aggregator.assert_metric('consul.catalog.services_critical', value=6, tags=expected_tags)
def test_consul_request(aggregator, instance):
consul_check = ConsulCheck(common.CHECK_NAME, {}, [{}])
with mock.patch("datadog_checks.consul.consul.requests.get") as mock_requests_get:
consul_check.consul_request(instance, "foo")
url = "{}/{}".format(instance["url"], "foo")
aggregator.assert_service_check("consul.can_connect", ConsulCheck.OK, tags=["url:{}".format(url)], count=1)
aggregator.reset()
mock_requests_get.side_effect = Exception("message")
with pytest.raises(Exception):
consul_check.consul_request(instance, "foo")
aggregator.assert_service_check(
"consul.can_connect",
ConsulCheck.CRITICAL,
tags=["url:{}".format(url)],
count=1,
message="Consul request to {} failed: message".format(url),
)
def test_service_checks(aggregator):
consul_check = ConsulCheck(common.CHECK_NAME, {}, [{}])
my_mocks = consul_mocks._get_consul_mocks()
my_mocks['consul_request'] = consul_mocks.mock_get_health_check
consul_mocks.mock_check(consul_check, my_mocks)
consul_check.check(consul_mocks.MOCK_CONFIG)
expected_tags = [
"consul_datacenter:dc1",
"check:server-loadbalancer",
"consul_service_id:server-loadbalancer",
"service:server-loadbalancer",
"consul_service:server-loadbalancer",
]
aggregator.assert_service_check('consul.check', status=ConsulCheck.CRITICAL, tags=expected_tags, count=1)
expected_tags = [
"consul_datacenter:dc1",
"check:server-api",
"consul_service_id:server-loadbalancer",
"service:server-loadbalancer",
"consul_service:server-loadbalancer",
]
aggregator.assert_service_check('consul.check', status=ConsulCheck.OK, tags=expected_tags, count=1)
expected_tags = [
"consul_datacenter:dc1",
"check:server-api",
"service:server-loadbalancer",
"consul_service:server-loadbalancer",
]
aggregator.assert_service_check('consul.check', status=ConsulCheck.OK, tags=expected_tags, count=1)
expected_tags = ["consul_datacenter:dc1", "check:server-api", "consul_service_id:server-loadbalancer"]
aggregator.assert_service_check('consul.check', status=ConsulCheck.OK, tags=expected_tags, count=1)
expected_tags = [
"consul_datacenter:dc1",
"check:server-status-empty",
"consul_service_id:server-empty",
"service:server-empty",
"consul_service:server-empty",
]
aggregator.assert_service_check('consul.check', status=ConsulCheck.UNKNOWN, tags=expected_tags, count=1)
aggregator.assert_service_check('consul.check', count=5)
def test_service_checks_disable_service_tag(aggregator):
consul_check = ConsulCheck(common.CHECK_NAME, {}, [{}])
my_mocks = consul_mocks._get_consul_mocks()
my_mocks['consul_request'] = consul_mocks.mock_get_health_check
consul_mocks.mock_check(consul_check, my_mocks)
consul_check.check(consul_mocks.MOCK_CONFIG_DISABLE_SERVICE_TAG)
expected_tags = [
'consul_datacenter:dc1',
'check:server-loadbalancer',
'consul_service_id:server-loadbalancer',
'consul_service:server-loadbalancer',
]
aggregator.assert_service_check('consul.check', status=ConsulCheck.CRITICAL, tags=expected_tags, count=1)
expected_tags = [
'consul_datacenter:dc1',
'check:server-api',
'consul_service_id:server-loadbalancer',
'consul_service:server-loadbalancer',
]
aggregator.assert_service_check('consul.check', status=ConsulCheck.OK, tags=expected_tags, count=1)
expected_tags = ['consul_datacenter:dc1', 'check:server-api', 'consul_service:server-loadbalancer']
aggregator.assert_service_check('consul.check', status=ConsulCheck.OK, tags=expected_tags, count=1)
expected_tags = ['consul_datacenter:dc1', 'check:server-api', 'consul_service_id:server-loadbalancer']
aggregator.assert_service_check('consul.check', status=ConsulCheck.OK, tags=expected_tags, count=1)
expected_tags = [
'consul_datacenter:dc1',
'check:server-status-empty',
'consul_service_id:server-empty',
'consul_service:server-empty',
]
aggregator.assert_service_check('consul.check', status=ConsulCheck.UNKNOWN, tags=expected_tags, count=1)
aggregator.assert_service_check('consul.check', count=5)
def test_cull_services_list():
consul_check = ConsulCheck(common.CHECK_NAME, {}, [{}])
my_mocks = consul_mocks._get_consul_mocks()
consul_mocks.mock_check(consul_check, my_mocks)
consul_check.check(consul_mocks.MOCK_CONFIG_LEADER_CHECK)
# Pad num_services to kick in truncation logic
num_services = consul_check.MAX_SERVICES + 20
# Max services parameter (from consul.yaml) set to be bigger than MAX_SERVICES and smaller than total of services
max_services = num_services - 10
# Big whitelist
services = consul_mocks.mock_get_n_services_in_cluster(num_services)
whitelist = ['service_{}'.format(k) for k in range(num_services)]
assert len(consul_check._cull_services_list(services, whitelist)) == consul_check.MAX_SERVICES
# Big whitelist with max_services
assert len(consul_check._cull_services_list(services, whitelist, max_services)) == max_services
# Whitelist < MAX_SERVICES should spit out the whitelist
whitelist = ['service_{}'.format(k) for k in range(consul_check.MAX_SERVICES - 1)]
assert set(consul_check._cull_services_list(services, whitelist)) == set(whitelist)
# Whitelist < max_services param should spit out the whitelist
whitelist = ['service_{}'.format(k) for k in range(max_services - 1)]
assert set(consul_check._cull_services_list(services, whitelist, max_services)) == set(whitelist)
# No whitelist, still triggers truncation
whitelist = []
assert len(consul_check._cull_services_list(services, whitelist)) == consul_check.MAX_SERVICES
# No whitelist with max_services set, also triggers truncation
whitelist = []
assert len(consul_check._cull_services_list(services, whitelist, max_services)) == max_services
# Num. services < MAX_SERVICES should be no-op in absence of whitelist
num_services = consul_check.MAX_SERVICES - 1
services = consul_mocks.mock_get_n_services_in_cluster(num_services)
assert len(consul_check._cull_services_list(services, whitelist)) == num_services
# Num. services < MAX_SERVICES should spit out only the whitelist when one is defined
whitelist = ['service_1', 'service_2', 'service_3']
assert set(consul_check._cull_services_list(services, whitelist)) == set(whitelist)
# Num. services < max_services (from consul.yaml) should be no-op in absence of whitelist
num_services = max_services - 1
whitelist = []
services = consul_mocks.mock_get_n_services_in_cluster(num_services)
assert len(consul_check._cull_services_list(services, whitelist, max_services)) == num_services
# Num. services < max_services should spit out only the whitelist when one is defined
whitelist = ['service_1', 'service_2', 'service_3']
assert set(consul_check._cull_services_list(services, whitelist, max_services)) == set(whitelist)
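# The whitelist-then-truncate behavior exercised above can be summarized in a
# small standalone sketch. This is a hypothetical re-implementation for
# illustration only: the real logic lives in ConsulCheck._cull_services_list,
# and the default cap of 50 used here is an assumption.

```python
DEFAULT_MAX_SERVICES = 50  # assumed default, standing in for ConsulCheck.MAX_SERVICES


def cull_services_list(services, whitelist, max_services=DEFAULT_MAX_SERVICES):
    """Keep only whitelisted services, then truncate to at most max_services."""
    if whitelist:
        allowed = set(whitelist)
        kept = {name: tags for name, tags in services.items() if name in allowed}
    else:
        kept = dict(services)
    if len(kept) > max_services:
        # Truncate deterministically (sorted names) so runs are reproducible.
        kept = {name: kept[name] for name in sorted(kept)[:max_services]}
    return kept
```

# As in the assertions above: an oversized whitelist truncates to the cap, a
# small whitelist passes through unchanged, and an empty whitelist still
# triggers truncation when the cluster has more services than the cap.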
def test_new_leader_event(aggregator):
consul_check = ConsulCheck(common.CHECK_NAME, {}, [{}])
my_mocks = consul_mocks._get_consul_mocks()
my_mocks['_get_cluster_leader'] = consul_mocks.mock_get_cluster_leader_B
consul_mocks.mock_check(consul_check, my_mocks)
instance_hash = hash_mutable(consul_mocks.MOCK_CONFIG_LEADER_CHECK)
consul_check._instance_states[instance_hash].last_known_leader = 'My Old Leader'
consul_check.check(consul_mocks.MOCK_CONFIG_LEADER_CHECK)
assert len(aggregator.events) == 1
event = aggregator.events[0]
assert event['event_type'] == 'consul.new_leader'
assert 'prev_consul_leader:My Old Leader' in event['tags']
assert 'curr_consul_leader:My New Leader' in event['tags']
def test_self_leader_event(aggregator):
consul_check = ConsulCheck(common.CHECK_NAME, {}, [consul_mocks.MOCK_CONFIG_SELF_LEADER_CHECK])
my_mocks = consul_mocks._get_consul_mocks()
instance_hash = hash_mutable(consul_mocks.MOCK_CONFIG_SELF_LEADER_CHECK)
consul_check._instance_states[instance_hash].last_known_leader = 'My Old Leader'
our_url = consul_mocks.mock_get_cluster_leader_A(None)
other_url = consul_mocks.mock_get_cluster_leader_B(None)
# We become the leader
my_mocks['_get_cluster_leader'] = consul_mocks.mock_get_cluster_leader_A
consul_mocks.mock_check(consul_check, my_mocks)
consul_check.check(consul_mocks.MOCK_CONFIG_SELF_LEADER_CHECK)
assert len(aggregator.events) == 1
assert our_url == consul_check._instance_states[instance_hash].last_known_leader
event = aggregator.events[0]
assert event['event_type'] == 'consul.new_leader'
assert 'prev_consul_leader:My Old Leader' in event['tags']
assert 'curr_consul_leader:{}'.format(our_url) in event['tags']
# We are already the leader, no new events
aggregator.reset()
consul_check.check(consul_mocks.MOCK_CONFIG_SELF_LEADER_CHECK)
assert len(aggregator.events) == 0
# We lose the leader, no new events
my_mocks['_get_cluster_leader'] = consul_mocks.mock_get_cluster_leader_B
consul_mocks.mock_check(consul_check, my_mocks)
aggregator.reset()
consul_check.check(consul_mocks.MOCK_CONFIG_SELF_LEADER_CHECK)
assert len(aggregator.events) == 0
assert other_url == consul_check._instance_states[instance_hash].last_known_leader
# We regain the leadership
my_mocks['_get_cluster_leader'] = consul_mocks.mock_get_cluster_leader_A
consul_mocks.mock_check(consul_check, my_mocks)
aggregator.reset()
consul_check.check(consul_mocks.MOCK_CONFIG_SELF_LEADER_CHECK)
assert len(aggregator.events) == 1
assert our_url == consul_check._instance_states[instance_hash].last_known_leader
event = aggregator.events[0]
assert event['event_type'] == 'consul.new_leader'
assert 'prev_consul_leader:{}'.format(other_url) in event['tags']
assert 'curr_consul_leader:{}'.format(our_url) in event['tags']
def test_network_latency_checks(aggregator):
consul_check = ConsulCheck(common.CHECK_NAME, {}, [{}])
my_mocks = consul_mocks._get_consul_mocks()
consul_mocks.mock_check(consul_check, my_mocks)
# We start out as the leader, and stay that way
instance_hash = hash_mutable(consul_mocks.MOCK_CONFIG_NETWORK_LATENCY_CHECKS)
consul_check._instance_states[instance_hash].last_known_leader = consul_mocks.mock_get_cluster_leader_A(None)
consul_check.check(consul_mocks.MOCK_CONFIG_NETWORK_LATENCY_CHECKS)
latency = []
for m_name, metrics in aggregator._metrics.items():
if m_name.startswith('consul.net.'):
latency.extend(metrics)
latency.sort()
# Make sure we have the expected number of metrics
assert 19 == len(latency)
# Only 3 dc-latency metrics since we only do source = self
dc = [m for m in latency if '.dc.latency.' in m[0]]
assert 3 == len(dc)
assert 1.6746410750238774 == dc[0][2]
# 16 latency metrics, 2 nodes * 8 metrics each
node = [m for m in latency if '.node.latency.' in m[0]]
assert 16 == len(node)
assert 0.26577747932995816 == node[0][2]
@pytest.mark.parametrize(
'test_case, extra_config, expected_http_kwargs',
[
(
"new config",
{
'tls_cert': 'certfile',
'tls_private_key': 'keyfile',
'tls_ca_cert': 'file/path',
'acl_token': 'token',
'headers': {'X-foo': 'bar'},
},
{
'cert': ('certfile', 'keyfile'),
'verify': 'file/path',
'headers': {'X-Consul-Token': 'token', 'X-foo': 'bar'},
},
),
("default config", {}, {'cert': None, 'verify': True, 'headers': {'User-Agent': 'Datadog Agent/0.0.0'}}),
(
"legacy config",
{
'client_cert_file': 'certfile',
'private_key_file': 'keyfile',
'ca_bundle_file': 'file/path',
'acl_token': 'token',
},
{
'cert': ('certfile', 'keyfile'),
'verify': 'file/path',
'headers': {'X-Consul-Token': 'token', 'User-Agent': 'Datadog Agent/0.0.0'},
},
),
],
)
def test_config(test_case, extra_config, expected_http_kwargs):
instance = extra_config
check = ConsulCheck(common.CHECK_NAME, {}, instances=[instance])
with mock.patch('datadog_checks.base.utils.http.requests') as r:
r.get.return_value = mock.MagicMock(status_code=200)
check.check(instance)
http_wargs = dict(
auth=mock.ANY, cert=mock.ANY, headers=mock.ANY, proxies=mock.ANY, timeout=mock.ANY, verify=mock.ANY
)
http_wargs.update(expected_http_kwargs)
r.get.assert_called_with('/v1/status/leader', **http_wargs)
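# The three parametrized cases above assert a mapping from instance config keys
# (new-style `tls_*`, legacy `*_file`, plus `acl_token`) to requests kwargs.
# Below is a hedged sketch of that mapping; `http_kwargs_from_config` is a
# hypothetical helper written to match the test data, not the agent's actual
# RequestsWrapper implementation.

```python
def http_kwargs_from_config(config):
    """Map Consul check config keys to requests-style HTTP kwargs."""
    # New-style keys take precedence; legacy keys are the fallback.
    cert_file = config.get('tls_cert') or config.get('client_cert_file')
    key_file = config.get('tls_private_key') or config.get('private_key_file')
    ca_file = config.get('tls_ca_cert') or config.get('ca_bundle_file')

    headers = dict(config.get('headers', {}))
    if config.get('acl_token'):
        # The ACL token travels as the X-Consul-Token header.
        headers['X-Consul-Token'] = config['acl_token']

    return {
        'cert': (cert_file, key_file) if cert_file and key_file else None,
        'verify': ca_file if ca_file else True,
        'headers': headers,
    }
```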
# --- example/__init__.py (repo: bolinette/bolinette, license: MIT) ---
import example.models
import example.mixins
import example.seeders
import example.middlewares
import example.controllers
from example.app import create_app
# --- openhab_creator/models/common/__init__.py (repo: DerOetzi/openhab_creator, license: MIT) ---
from openhab_creator.models.common.maptransformation import MapTransformation
from openhab_creator.models.common.scene import Scene
from openhab_creator.models.common.presence import Presence
from openhab_creator.models.common.heatcontrol import Heatcontrol
# --- print_schema/__init__.py (repo: suryashekharc/print_schema, license: MIT) ---
from .print_schema import *
# --- ccc_client/exec_engine/cli/__init__.py (repo: ohsu-comp-bio/ccc_client, license: MIT) ---
from ccc_client.exec_engine.cli import submit, metadata, query, status, outputs
__all__ = ['submit', 'metadata', 'query', 'status', 'outputs']
# --- gym_pcgrl/gym_pcgrl/envs/__init__.py (repo: JiangZehua/control-pcgrl3D, license: MIT) ---
from gym_pcgrl.envs.pcgrl_env import PcgrlEnv
# for player functionality
from gym_pcgrl.envs.play_pcgrl_env import PlayPcgrlEnv
# for controllable design
from gym_pcgrl.envs.pcgrl_ctrl_env import PcgrlCtrlEnv
# for 3D envs
from gym_pcgrl.envs.pcgrl_env_3D import PcgrlEnv3D
# --- causeinfer/__init__.py (repo: arita37/causeinfer, license: MIT) ---
from causeinfer.utils import *
from causeinfer.evaluation import *
# --- lingvodoc/views/v2/perspective/views.py (repo: EhrmannGit/lingvodoc, license: Apache-2.0) ---
__author__ = 'alexander'
from collections import deque
import base64
import datetime
import hashlib
import json
import logging
import multiprocessing
from sqlalchemy.orm.attributes import flag_modified
from pyramid.httpexceptions import (
HTTPBadRequest,
HTTPConflict,
HTTPForbidden,
HTTPFound,
HTTPInternalServerError,
HTTPNotFound,
HTTPOk
)
from sqlalchemy.sql.expression import case, true, false
from sqlalchemy.sql.functions import coalesce
from sqlalchemy.orm import aliased
from sqlalchemy import (
func,
or_,
and_,
tuple_
)
from pyramid.renderers import render_to_response
from pyramid.request import Request
from pyramid.response import Response
from pyramid.security import authenticated_userid
from pyramid.view import view_config
from sqlalchemy.exc import IntegrityError
from sqlalchemy.orm import joinedload
from sqlalchemy.orm.exc import NoResultFound
from lingvodoc.cache.caching import MEMOIZE
from lingvodoc.exceptions import CommonException
from lingvodoc.models import (
BaseGroup,
Client,
DBSession,
Dictionary,
DictionaryPerspective,
Entity,
PublishingEntity,
Group,
LexicalEntry,
Organization,
User,
TranslationAtom,
TranslationGist,
Field,
DictionaryPerspectiveToField,
UserBlobs,
ObjectTOC
)
from lingvodoc.views.v2.utils import (
cache_clients,
get_user_by_client_id,
user_counter,
view_perspective_from_object,
view_field_from_object
)
from lingvodoc.utils.creation import create_object, add_user_to_group
from lingvodoc.utils.verification import check_client_id
from lingvodoc.utils.search import fulfill_permissions_on_perspectives, FakeObject
from lingvodoc.views.v2.delete import real_delete_perspective
log = logging.getLogger(__name__)
@view_config(route_name='permissions_on_perspectives', renderer='json', request_method='GET')
def permissions_on_perspectives(request):
client_id = authenticated_userid(request)
subreq = Request.blank('/translation_service_search')
subreq.method = 'POST'
subreq.headers = request.headers
subreq.json = {'searchstring': 'Published'}
headers = dict()
if request.headers.get('Cookie'):
headers = {'Cookie': request.headers['Cookie']}
subreq.headers = headers
resp = request.invoke_subrequest(subreq)
if 'error' not in resp.json:
published_gist_object_id, published_gist_client_id = resp.json['object_id'], resp.json['client_id']
else:
raise KeyError("Something wrong with the base", resp.json['error'])
subreq = Request.blank('/translation_service_search')
subreq.method = 'POST'
subreq.headers = request.headers
subreq.json = {'searchstring': 'Limited access'} # todo: fix
headers = dict()
if request.headers.get('Cookie'):
headers = {'Cookie': request.headers['Cookie']}
subreq.headers = headers
resp = request.invoke_subrequest(subreq)
if 'error' not in resp.json:
limited_gist_object_id, limited_gist_client_id = resp.json['object_id'], resp.json['client_id']
else:
raise KeyError("Something wrong with the base", resp.json['error'])
intermediate = dict()
limited = DBSession.query(DictionaryPerspective.client_id, DictionaryPerspective.object_id, ).filter(
and_(DictionaryPerspective.state_translation_gist_client_id == limited_gist_client_id,
DictionaryPerspective.state_translation_gist_object_id == limited_gist_object_id)
)
limited_perms = [("limited", True), ("read", False), ("write", False), ("publish", False)]
for pers in limited.all():
fulfill_permissions_on_perspectives(intermediate, pers, limited_perms)
published = DBSession.query(DictionaryPerspective.client_id, DictionaryPerspective.object_id, ).filter(
and_(DictionaryPerspective.state_translation_gist_client_id == published_gist_client_id,
DictionaryPerspective.state_translation_gist_object_id == published_gist_object_id)
)
published_perms = [("read", True), ("write", False), ("publish", False)]
for pers in published.all():
fulfill_permissions_on_perspectives(intermediate, pers, published_perms)
if not client_id:
return intermediate
user_id = DBSession.query(Client).filter(client_id == Client.id).first().user_id
editor_basegroup = DBSession.query(BaseGroup).filter(and_(BaseGroup.subject == "lexical_entries_and_entities", BaseGroup.action == "create")).first()
editable_perspectives = DBSession.query(Group).join(Group.users).filter(and_(User.id == user_id, Group.base_group_id == editor_basegroup.id)).all()
pers = FakeObject()
editable_perms = [("write", True)]
for i in editable_perspectives:
pers.client_id = i.subject_client_id
pers.object_id = i.subject_object_id
fulfill_permissions_on_perspectives(intermediate, pers, editable_perms)
reader_basegroup = DBSession.query(BaseGroup).filter(and_(BaseGroup.subject == "approve_entities", BaseGroup.action == "view")).first()
readable_perspectives = DBSession.query(Group).join(Group.users).filter(and_(User.id == user_id, Group.base_group_id == reader_basegroup.id)).all()
pers = FakeObject()
readable_perms = [("read", True)]
for i in readable_perspectives:
pers.client_id = i.subject_client_id
pers.object_id = i.subject_object_id
fulfill_permissions_on_perspectives(intermediate, pers, readable_perms)
publisher_basegroup = DBSession.query(BaseGroup).filter(and_(BaseGroup.subject == "approve_entities", BaseGroup.action == "create")).first()
approvable_perspectives = DBSession.query(Group).join(Group.users).filter(and_(User.id == user_id, Group.base_group_id == publisher_basegroup.id)).all()
pers = FakeObject()
approvable_perms = [("publish", True)]
for i in approvable_perspectives:
pers.client_id = i.subject_client_id
pers.object_id = i.subject_object_id
fulfill_permissions_on_perspectives(intermediate, pers, approvable_perms)
return intermediate
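# A hedged sketch of the permission-accumulation pattern used above. The real
# fulfill_permissions_on_perspectives is imported from lingvodoc.utils.search;
# the key layout and flag names below are assumptions inferred from the
# limited/published/editable/readable/approvable perm lists in this view.

```python
def merge_perms(intermediate, pers, perms):
    """Merge (name, value) permission flags into intermediate, keyed by perspective id."""
    key = (pers.client_id, pers.object_id)
    entry = intermediate.setdefault(
        key, {"limited": False, "read": False, "write": False, "publish": False})
    for name, value in perms:
        # A True flag granted by any group wins over an earlier False.
        entry[name] = entry[name] or value
```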
@view_config(route_name='all_perspectives', renderer='json', request_method='GET')
def perspectives_list(request): # tested
response = list()
is_template = None
try:
is_template = request.GET.get('is_template')
except Exception:
pass
published = request.params.get('published', None)
visible = request.params.get('visible', None)
subreq = Request.blank('/translation_service_search')
subreq.method = 'POST'
subreq.headers = request.headers
subreq.json = {'searchstring': 'Published'}
headers = {'Cookie': request.headers['Cookie']}
subreq.headers = headers
resp = request.invoke_subrequest(subreq)
if 'error' not in resp.json:
state_translation_gist_object_id, state_translation_gist_client_id = resp.json['object_id'], resp.json[
'client_id']
published_gist = (state_translation_gist_client_id, state_translation_gist_object_id)
else:
raise KeyError("Something wrong with the base", resp.json['error'])
subreq = Request.blank('/translation_service_search')
subreq.method = 'POST'
subreq.headers = request.headers
subreq.json = {'searchstring': 'Limited access'}
headers = {'Cookie': request.headers['Cookie']}
subreq.headers = headers
resp = request.invoke_subrequest(subreq)
if 'error' not in resp.json:
state_translation_gist_object_id, state_translation_gist_client_id = resp.json['object_id'], resp.json[
'client_id']
limited_gist = (state_translation_gist_client_id, state_translation_gist_object_id)
else:
raise KeyError("Something wrong with the base", resp.json['error'])
atom_perspective_name_alias = aliased(TranslationAtom, name="PerspectiveName")
atom_perspective_name_fallback_alias = aliased(TranslationAtom, name="PerspectiveNameFallback")
persps = DBSession.query(DictionaryPerspective,
TranslationAtom,
coalesce(atom_perspective_name_alias.content,
atom_perspective_name_fallback_alias.content,
"No translation for your locale available").label("Translation")
).filter(DictionaryPerspective.marked_for_deletion == False)
if is_template is not None:
if isinstance(is_template, str):
if is_template.lower() == 'true':
is_template = True
elif is_template.lower() == 'false':
is_template = False
else:
request.response.status = HTTPBadRequest.code
return {'error': "is_template parameter must be 'true' or 'false'"}
persps = persps.filter(DictionaryPerspective.is_template == is_template)
visible_persps = None
if visible:
user = Client.get_user_by_client_id(authenticated_userid(request))
visible_persps = [(-1, -1)] #hack to avoid empty in_
if user:
for group in user.groups:
if group.base_group_id == 21 or group.base_group_id == 22:
visible_persps.append((group.subject_client_id, group.subject_object_id))
persps = persps.filter(or_(and_(DictionaryPerspective.state_translation_gist_client_id == published_gist[0],
DictionaryPerspective.state_translation_gist_object_id == published_gist[1]),
tuple_(DictionaryPerspective.client_id, DictionaryPerspective.object_id).in_(visible_persps)))
else:
if published:
persps = persps.filter(or_(and_(DictionaryPerspective.state_translation_gist_client_id == published_gist[0],
DictionaryPerspective.state_translation_gist_object_id == published_gist[1]),
and_(DictionaryPerspective.state_translation_gist_client_id == limited_gist[0],
DictionaryPerspective.state_translation_gist_object_id == limited_gist[1])))
# if user:
# visible_persps = DBSession.query(user)
persps = persps.join(TranslationAtom,
and_(
TranslationAtom.parent_client_id == DictionaryPerspective.state_translation_gist_client_id,
TranslationAtom.parent_object_id == DictionaryPerspective.state_translation_gist_object_id)).filter(
TranslationAtom.locale_id == int(request.cookies['locale_id'])).join(
atom_perspective_name_alias, and_(
atom_perspective_name_alias.parent_client_id == DictionaryPerspective.translation_gist_client_id,
atom_perspective_name_alias.parent_object_id == DictionaryPerspective.translation_gist_object_id,
atom_perspective_name_alias.locale_id == int(request.cookies['locale_id'])), isouter=True).join(
atom_perspective_name_fallback_alias, and_(
atom_perspective_name_fallback_alias.parent_client_id == DictionaryPerspective.translation_gist_client_id,
atom_perspective_name_fallback_alias.parent_object_id == DictionaryPerspective.translation_gist_object_id,
atom_perspective_name_fallback_alias.locale_id == 2), isouter=True)
blobs = DBSession.query(UserBlobs).filter(UserBlobs.data_type == 'pdf').all()
blobs_fast_dict = {}
for blob in blobs:
if blob.client_id not in blobs_fast_dict:
blobs_fast_dict[blob.client_id] = dict()
blobs_fast_dict[blob.client_id][blob.object_id] = {'name': blob.name,
'content': blob.content,
'data_type': blob.data_type,
'client_id': blob.client_id,
'object_id': blob.object_id,
'created_at': blob.created_at}
row2dict = lambda r: {c.name: getattr(r, c.name) for c in r.__table__.columns}
perspectives = []
for perspective in persps.all():
resp = row2dict(perspective.DictionaryPerspective)
resp['status'] = perspective.TranslationAtom.content or "Unknown state"
if perspective.DictionaryPerspective.additional_metadata:
resp['additional_metadata'] = list(perspective.DictionaryPerspective.additional_metadata.keys())
else:
resp['additional_metadata'] = []
resp['translation'] = perspective.Translation or "Unknown perspective name"
if perspective.DictionaryPerspective.additional_metadata:
if 'location' in perspective.DictionaryPerspective.additional_metadata:
resp['location'] = perspective.DictionaryPerspective.additional_metadata['location']
if 'info' in perspective.DictionaryPerspective.additional_metadata:
resp['info'] = perspective.DictionaryPerspective.additional_metadata['info']
info_list = resp['info'].get('content')
for info in info_list:
blob_client_id, blob_object_id = info['info']['content']['client_id'], info['info']['content']['object_id']
if blob_client_id in blobs_fast_dict and blob_object_id in blobs_fast_dict[blob_client_id]:
resp['info']['content'] = blobs_fast_dict[blob_client_id][blob_object_id]
perspectives.append(resp)
response = perspectives
request.response.status = HTTPOk.code
return response
@view_config(route_name='all_perspectives_meta', renderer='json', request_method='GET')
def perspectives_meta_list(request): # tested
response = list()
is_template = None
try:
is_template = request.GET.get('is_template')
except Exception:
pass
state_translation_gist_client_id = request.params.get('state_translation_gist_client_id', None)
state_translation_gist_object_id = request.params.get('state_translation_gist_object_id', None)
persps = DBSession.query(DictionaryPerspective).filter(DictionaryPerspective.marked_for_deletion == False,
DictionaryPerspective.additional_metadata != {})
if is_template is not None:
if isinstance(is_template, str):
if is_template.lower() == 'true':
is_template = True
elif is_template.lower() == 'false':
is_template = False
else:
request.response.status = HTTPBadRequest.code
return {'error': "is_template parameter must be 'true' or 'false'"}
persps = persps.filter(DictionaryPerspective.is_template == is_template)
if state_translation_gist_client_id and state_translation_gist_object_id:
persps = persps.filter(
DictionaryPerspective.state_translation_gist_client_id == state_translation_gist_client_id,
DictionaryPerspective.state_translation_gist_object_id == state_translation_gist_object_id)
perspectives = []
for perspective in persps:
# resp = view_perspective_from_object(request, perspective)
resp = perspective.additional_metadata
if resp:
resp.update({'client_id': perspective.client_id, 'object_id': perspective.object_id})
else:
resp = {'client_id': perspective.client_id, 'object_id': perspective.object_id}
if 'error' not in resp:
perspectives.append(resp)
response = perspectives
request.response.status = HTTPOk.code
return response
@view_config(route_name='perspective', renderer='json', request_method='GET')
@view_config(route_name='perspective_outside', renderer='json', request_method='GET')
def view_perspective(request): # tested & in docs
client_id = request.matchdict.get('perspective_client_id')
object_id = request.matchdict.get('perspective_object_id')
parent_client_id = request.matchdict.get('dictionary_client_id')
parent_object_id = request.matchdict.get('dictionary_object_id')
parent = None
if parent_client_id and parent_object_id:
parent = DBSession.query(Dictionary).filter_by(client_id=parent_client_id, object_id=parent_object_id).first()
if not parent:
request.response.status = HTTPNotFound.code
return {'error': str("No such dictionary in the system %s %s %s %s") % (
client_id, object_id, parent_client_id, parent_object_id)}
perspective = DBSession.query(DictionaryPerspective).filter_by(client_id=client_id, object_id=object_id).first()
response = view_perspective_from_object(request, perspective)
if 'error' in response:
request.response.status = HTTPNotFound.code
return {'error': str("No such perspective in the system")}
request.response.status = HTTPOk.code
return response
# TODO: completely broken!
@view_config(route_name='perspective_hash', renderer='json', request_method='PUT', permission='edit')
def edit_perspective_hash(request):
import requests
try:
response = dict()
client_id = request.matchdict.get('perspective_client_id')
object_id = request.matchdict.get('perspective_object_id')
client = DBSession.query(Client).filter_by(id=request.authenticated_userid).first()
if not client:
raise KeyError("Invalid client id (not registered on server). Try to logout and then login.")
parent_client_id = request.matchdict.get('dictionary_client_id')
parent_object_id = request.matchdict.get('dictionary_object_id')
parent = DBSession.query(Dictionary).filter_by(client_id=parent_client_id, object_id=parent_object_id).first()
if not parent:
request.response.status = HTTPNotFound.code
return {'error': str("No such dictionary in the system")}
perspective = DBSession.query(DictionaryPerspective).filter_by(client_id=client_id, object_id=object_id).first()
if perspective:
if not perspective.marked_for_deletion:
l1es = list()
# l1es = DBSession.query(LevelOneEntity)\
# .join(LexicalEntry,
# and_(LevelOneEntity.parent_client_id == LexicalEntry.client_id,
# LevelOneEntity.parent_object_id == LexicalEntry.object_id,
# LexicalEntry.parent_client_id == client_id,
# LexicalEntry.parent_object_id == object_id))\
# .filter(func.lower(LevelOneEntity.entity_type).like('%sound%'), or_(LevelOneEntity.additional_metadata == None,
# not_(LevelOneEntity.additional_metadata.like('%hash%'))))
count_l1e = len(l1es)  # l1es is a plain list here, so len(), not Query.count()
for l1e in l1es:
url = l1e.content
try:
r = requests.get(url)
hash = hashlib.sha224(r.content).hexdigest()
old_meta = l1e.additional_metadata
hash_dict = {'hash': hash}
if old_meta:
old_meta.update(hash_dict)
else:
old_meta = hash_dict
l1e.additional_metadata = old_meta
except Exception:
print('fail with sound', l1e.client_id, l1e.object_id)
# l2es = DBSession.query(LevelTwoEntity)\
# .join(LevelOneEntity,
# and_(LevelTwoEntity.parent_client_id == LevelOneEntity.client_id,
# LevelTwoEntity.parent_object_id == LevelOneEntity.object_id))\
# .join(LexicalEntry,
# and_(LevelOneEntity.parent_client_id == LexicalEntry.client_id,
# LevelOneEntity.parent_object_id == LexicalEntry.object_id,
# LexicalEntry.parent_client_id == client_id,
# LexicalEntry.parent_object_id == object_id))\
# .filter(func.lower(LevelTwoEntity.entity_type).like('%markup%'), or_(LevelTwoEntity.additional_metadata == None,
# not_(LevelTwoEntity.additional_metadata.like('%hash%'))))
l2es = list()
count_l2e = len(l2es)  # l2es is a plain list here, so len(), not Query.count()
for l2e in l2es:
url = l2e.content
try:
r = requests.get(url)
hash = hashlib.sha224(r.content).hexdigest()
old_meta = l2e.additional_metadata
hash_dict = {'hash': hash}
if old_meta:
old_meta.update(hash_dict)
else:
old_meta = hash_dict
l2e.additional_metadata = old_meta
except Exception:
print('fail with markup', l2e.client_id, l2e.object_id)
response['count_l1e'] = count_l1e
response['count_l2e'] = count_l2e
request.response.status = HTTPOk.code
return response
else:
request.response.status = HTTPNotFound.code
return {'error': str("No such perspective in the system")}
except KeyError as e:
request.response.status = HTTPBadRequest.code
return {'error': str(e)}
@view_config(route_name='dangerous_perspectives_hash', renderer='json', request_method='PUT', permission='edit')
def dangerous_perspectives_hash(request): # TODO: test?
response = dict()
perspectives = DBSession.query(DictionaryPerspective)
for perspective in perspectives:
path = request.route_url('perspective_hash',
dictionary_client_id=perspective.parent_client_id,
dictionary_object_id=perspective.parent_object_id,
perspective_client_id=perspective.client_id,
perspective_object_id=perspective.object_id)
subreq = Request.blank(path)
subreq.method = 'PUT'
subreq.headers = request.headers
resp = request.invoke_subrequest(subreq)
print('Perspective', perspective.client_id, perspective.object_id, 'ready')
request.response.status = HTTPOk.code
return response
@view_config(route_name='perspective_meta', renderer='json', request_method='PUT', permission='edit')
def edit_perspective_meta(request): # tested & in docs
response = dict()
client_id = request.matchdict.get('perspective_client_id')
object_id = request.matchdict.get('perspective_object_id')
client = DBSession.query(Client).filter_by(id=request.authenticated_userid).first()
if not client:
raise KeyError("Invalid client id (not registered on server). Try to logout and then login.")
parent_client_id = request.matchdict.get('dictionary_client_id')
parent_object_id = request.matchdict.get('dictionary_object_id')
parent = DBSession.query(Dictionary).filter_by(client_id=parent_client_id, object_id=parent_object_id).first()
if not parent:
request.response.status = HTTPNotFound.code
return {'error': str("No such dictionary in the system")}
perspective = DBSession.query(DictionaryPerspective).filter_by(client_id=client_id, object_id=object_id).first()
if perspective:
if not perspective.marked_for_deletion:
if perspective.parent != parent:
request.response.status = HTTPNotFound.code
return {'error': str("No such pair of dictionary/perspective in the system")}
try:
req = request.json_body
except ValueError:
request.response.status = HTTPBadRequest.code
return {'error': 'body is invalid json or empty'}
if perspective.additional_metadata:
old_meta = perspective.additional_metadata
new_meta = req
old_meta.update(new_meta)
flag_modified(perspective, 'additional_metadata')
else:
perspective.additional_metadata = req
if "location" in req:
if "content" in req["location"]:
if not parent.additional_metadata:
parent.additional_metadata = dict()
loc = req["location"]["content"]
parent.additional_metadata['location'] = loc
flag_modified(parent, 'additional_metadata')
if "authors" in req:
if "content" in req["authors"]:
if not parent.additional_metadata:
parent.additional_metadata = dict()
parent.additional_metadata['authors'] = req["authors"]["content"]
flag_modified(parent, 'additional_metadata')
if "info" in req:
if "content" in req["info"]:
blobs = list()
for info_dict in req["info"]["content"]:
if info_dict["info"].get("type") == "blob":
if not parent.additional_metadata:
parent.additional_metadata = dict()
if "blobs" not in parent.additional_metadata:
parent.additional_metadata["blobs"] = []
blobs.append({
"client_id": info_dict["info"]["content"]["client_id"],
"object_id": info_dict["info"]["content"]["object_id"]
}
)
parent.additional_metadata['blobs'] = blobs
flag_modified(parent, 'additional_metadata')
request.response.status = HTTPOk.code
return response
request.response.status = HTTPNotFound.code
return {'error': str("No such perspective in the system")}
@view_config(route_name='perspective_meta', renderer='json', request_method='DELETE', permission='edit')
def delete_perspective_meta(request): # tested & in docs
response = dict()
client_id = request.matchdict.get('perspective_client_id')
object_id = request.matchdict.get('perspective_object_id')
client = DBSession.query(Client).filter_by(id=request.authenticated_userid).first()
if not client:
raise KeyError("Invalid client id (not registered on server). Try to logout and then login.")
parent_client_id = request.matchdict.get('dictionary_client_id')
parent_object_id = request.matchdict.get('dictionary_object_id')
parent = DBSession.query(Dictionary).filter_by(client_id=parent_client_id, object_id=parent_object_id).first()
if not parent:
request.response.status = HTTPNotFound.code
return {'error': str("No such dictionary in the system")}
perspective = DBSession.query(DictionaryPerspective).filter_by(client_id=client_id, object_id=object_id).first()
if perspective:
if not perspective.marked_for_deletion:
if perspective.parent != parent:
request.response.status = HTTPNotFound.code
return {'error': str("No such pair of dictionary/perspective in the system")}
try:
req = request.json_body
except ValueError:
request.response.status = HTTPBadRequest.code
return {'error': 'body is invalid json or empty'}
old_meta = perspective.additional_metadata or {}
new_meta = req
for entry in new_meta:
if entry in old_meta:
del old_meta[entry]
perspective.additional_metadata = old_meta
flag_modified(perspective, 'additional_metadata')
request.response.status = HTTPOk.code
return response
request.response.status = HTTPNotFound.code
return {'error': str("No such perspective in the system")}
@view_config(route_name='perspective_meta', renderer='json', request_method='POST')
def view_perspective_meta(request): # tested & in docs
response = dict()
client_id = request.matchdict.get('perspective_client_id')
object_id = request.matchdict.get('perspective_object_id')
parent_client_id = request.matchdict.get('dictionary_client_id')
parent_object_id = request.matchdict.get('dictionary_object_id')
parent = DBSession.query(Dictionary).filter_by(client_id=parent_client_id, object_id=parent_object_id).first()
if not parent:
request.response.status = HTTPNotFound.code
return {'error': str("No such dictionary in the system")}
perspective = DBSession.query(DictionaryPerspective).filter_by(client_id=client_id, object_id=object_id).first()
if perspective and not perspective.marked_for_deletion:
if perspective.parent != parent:
request.response.status = HTTPNotFound.code
return {'error': str("No such pair of dictionary/perspective in the system")}
old_meta = perspective.additional_metadata
try:
req = request.json_body
except (AttributeError, ValueError):
request.response.status = HTTPBadRequest.code
return {'error': "invalid json"}
if req:
new_meta = dict()
for key in req:
if old_meta and key in old_meta:
new_meta[key] = old_meta[key]
else:
request.response.status = HTTPNotFound.code
return {'error': "No such key in metadata: %s" % key}
response = new_meta
request.response.status = HTTPOk.code
return response
else:
response = old_meta
request.response.status = HTTPOk.code
return response
request.response.status = HTTPNotFound.code
return {'error': str("No such perspective in the system")}
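The key-selection loop in `view_perspective_meta` can be summarized as a small pure function. A sketch, assuming metadata is a plain dict (the helper name is hypothetical):

```python
def select_meta_keys(stored, requested):
    # Return (subset, None) with the requested keys from the stored
    # metadata, or (None, missing_key) so the caller can report a 404,
    # mirroring the lookup loop in the view above.
    result = {}
    for key in requested:
        if stored and key in stored:
            result[key] = stored[key]
        else:
            return None, key
    return result, None
```

Returning the offending key instead of raising keeps the error-reporting decision in the view, which matches how the surrounding handlers build their JSON error bodies.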
@view_config(route_name='perspective_tree', renderer='json', request_method='GET')
@view_config(route_name='perspective_outside_tree', renderer='json', request_method='GET')
def view_perspective_tree(request): # tested & in docs
response = dict()
client_id = request.matchdict.get('perspective_client_id')
object_id = request.matchdict.get('perspective_object_id')
parent_client_id = request.matchdict.get('dictionary_client_id')
parent_object_id = request.matchdict.get('dictionary_object_id')
parent = None
if parent_client_id and parent_object_id:
parent = DBSession.query(Dictionary).filter_by(client_id=parent_client_id, object_id=parent_object_id).first()
if not parent:
request.response.status = HTTPNotFound.code
return {'error': str("No such dictionary in the system")}
perspective = DBSession.query(DictionaryPerspective).filter_by(client_id=client_id, object_id=object_id).first()
if perspective:
if not perspective.marked_for_deletion:
tree = []
resp = view_perspective_from_object(request, perspective)
resp.update({"type": "perspective"})
tree.append(resp)
dictionary = perspective.parent
path = request.route_url('dictionary',
client_id=dictionary.client_id,
object_id=dictionary.object_id)
subreq = Request.blank(path)
subreq.method = 'GET'
subreq.headers = request.headers
resp = request.invoke_subrequest(subreq)
if 'error' not in resp.json:
elem = resp.json.copy()
elem.update({'type': 'dictionary'})
tree.append(elem)
parent = dictionary.parent
while parent:
path = request.route_url('language',
client_id=parent.client_id,
object_id=parent.object_id)
subreq = Request.blank(path)
subreq.method = 'GET'
subreq.headers = request.headers
resp = request.invoke_subrequest(subreq)
parent = parent.parent
if 'error' not in resp.json:
elem = resp.json.copy()
elem.update({'type': 'language'})
tree.append(elem)
else:
parent = None
request.response.status = HTTPOk.code
return tree
request.response.status = HTTPNotFound.code
return {'error': str("No such perspective in the system")}
@view_config(route_name='perspective_info', renderer='json', request_method='GET', permission='view')
def perspective_info(request): # TODO: test
response = dict()
client_id = request.matchdict.get('perspective_client_id')
object_id = request.matchdict.get('perspective_object_id')
parent_client_id = request.matchdict.get('dictionary_client_id')
parent_object_id = request.matchdict.get('dictionary_object_id')
starting_date = request.GET.get('starting_date')
if starting_date:
starting_date = datetime.datetime.strptime(starting_date, "%d%m%Y").date()
ending_date = request.GET.get('ending_date')
if ending_date:
ending_date = datetime.datetime.strptime(ending_date, "%d%m%Y").date()
parent = DBSession.query(Dictionary).filter_by(client_id=parent_client_id, object_id=parent_object_id).first()
if not parent:
request.response.status = HTTPNotFound.code
return {'error': str("No such dictionary in the system")}
perspective = DBSession.query(DictionaryPerspective) \
.options(joinedload('lexicalentry').joinedload('leveloneentity').joinedload('leveltwoentity').joinedload(
'publishleveltwoentity')) \
.options(joinedload('lexicalentry').joinedload('leveloneentity').joinedload('publishleveloneentity')) \
.options(joinedload('lexicalentry').joinedload('groupingentity').joinedload('publishgroupingentity')) \
.options(joinedload('lexicalentry').joinedload('publishleveloneentity')) \
.options(joinedload('lexicalentry').joinedload('publishleveltwoentity')) \
.options(joinedload('lexicalentry').joinedload('publishgroupingentity')) \
.filter_by(client_id=client_id, object_id=object_id).first()
if perspective:
if not perspective.marked_for_deletion:
result = []
path = request.route_url('perspective_fields',
dictionary_client_id=perspective.parent_client_id,
dictionary_object_id=perspective.parent_object_id,
perspective_client_id=perspective.client_id,
perspective_object_id=perspective.object_id
)
subreq = Request.blank(path)
subreq.method = 'GET'
subreq.headers = request.headers
resp = request.invoke_subrequest(subreq)
fields = resp.json["fields"]
types = []
for field in fields:
entity_type = field['entity_type']
if entity_type not in types:
types.append(entity_type)
if 'contains' in field:
for field2 in field['contains']:
entity_type = field2['entity_type']
if entity_type not in types:
types.append(entity_type)
clients_to_users_dict = cache_clients()
for lex in perspective.lexicalentry:
result = user_counter(lex.track(True, int(request.cookies.get('locale_id') or 2)), result, starting_date, ending_date, types, clients_to_users_dict)
response['count'] = result
request.response.status = HTTPOk.code
return response
request.response.status = HTTPNotFound.code
return {'error': str("No such perspective in the system")}
@view_config(route_name='create_perspective', renderer='json', request_method='POST', permission='create')
def create_perspective(request): # tested & in docs
try:
variables = {'auth': authenticated_userid(request)}
parent_client_id = request.matchdict.get('dictionary_client_id')
parent_object_id = request.matchdict.get('dictionary_object_id')
try:
if isinstance(request.json_body, str):
req = json.loads(request.json_body)
else:
req = request.json_body
except AttributeError:
request.response.status = HTTPBadRequest.code
return {'error': "invalid json"}
translation_gist_client_id = req['translation_gist_client_id']
translation_gist_object_id = req['translation_gist_object_id']
is_template = req.get('is_template')
client = DBSession.query(Client).filter_by(id=variables['auth']).first()
object_id = req.get('object_id', None)
if not client:
raise KeyError("Invalid client id (not registered on server). Try to logout and then login.")
user = DBSession.query(User).filter_by(id=client.user_id).first()
if not user:
raise CommonException("This client id is orphaned. Try to logout and then login once more.")
client_id = variables['auth']
if 'client_id' in req:
if check_client_id(authenticated = client.id, client_id=req['client_id']) or user.id == 1:
client_id = req['client_id']
else:
request.response.status = HTTPBadRequest.code
return {'error': 'client_id from another user'}
parent = DBSession.query(Dictionary).filter_by(client_id=parent_client_id, object_id=parent_object_id).first()
if not parent:
request.response.status = HTTPNotFound.code
return {'error': str("No such dictionary in the system")}
coord = {}
latitude = req.get('latitude')
longitude = req.get('longitude')
if latitude:
coord['latitude'] = latitude
if longitude:
coord['longitude'] = longitude
additional_metadata = req.get('additional_metadata')
if additional_metadata:
additional_metadata.update(coord)
else:
additional_metadata = coord
subreq = Request.blank('/translation_service_search')
subreq.method = 'POST'
subreq.headers = request.headers
subreq.json = {'searchstring': 'WiP'}
headers = {'Cookie': request.headers['Cookie']}
subreq.headers = headers
resp = request.invoke_subrequest(subreq)
if 'error' not in resp.json:
state_translation_gist_object_id, state_translation_gist_client_id = resp.json['object_id'], resp.json[
'client_id']
else:
raise KeyError("Something wrong with the base", resp.json['error'])
perspective = DictionaryPerspective(client_id=client_id,
object_id=object_id,
state_translation_gist_object_id=state_translation_gist_object_id,
state_translation_gist_client_id=state_translation_gist_client_id,
parent=parent,
import_source=req.get('import_source'),
import_hash=req.get('import_hash'),
additional_metadata=additional_metadata,
translation_gist_client_id=translation_gist_client_id,
translation_gist_object_id=translation_gist_object_id
)
if is_template is not None:
perspective.is_template = is_template
DBSession.add(perspective)
DBSession.flush()
owner_client = DBSession.query(Client).filter_by(id=parent.client_id).first()
owner = owner_client.user
if not object_id:
for base in DBSession.query(BaseGroup).filter_by(perspective_default=True):
new_group = Group(parent=base,
subject_object_id=perspective.object_id, subject_client_id=perspective.client_id)
add_user_to_group(user, new_group)
add_user_to_group(owner, new_group)
DBSession.add(new_group)
DBSession.flush()
request.response.status = HTTPOk.code
return {'object_id': perspective.object_id,
'client_id': perspective.client_id}
except KeyError as e:
request.response.status = HTTPBadRequest.code
return {'error': str(e)}
except IntegrityError as e:
request.response.status = HTTPInternalServerError.code
return {'error': str(e)}
except CommonException as e:
request.response.status = HTTPConflict.code
return {'error': str(e)}
@view_config(route_name='complex_create', renderer='json', request_method='POST', permission='create')
def complex_create(request):
try:
parent_client_id = request.matchdict.get('dictionary_client_id')
parent_object_id = request.matchdict.get('dictionary_object_id')
result = list()
try:
if isinstance(request.json_body, str):
req = json.loads(request.json_body)
else:
req = request.json_body
except AttributeError:
request.response.status = HTTPBadRequest.code
return {'error': "invalid json"}
fake_ids = dict()
for perspective_json in req:
path = request.route_url('create_perspective',
dictionary_client_id=parent_client_id,
dictionary_object_id=parent_object_id)
subreq = Request.blank(path)
subreq.method = 'POST'
subreq.headers = request.headers
subreq.json = perspective_json
headers = {'Cookie': request.headers['Cookie']}
subreq.headers = headers
resp = request.invoke_subrequest(subreq)
perspective = resp.json
result.append({'object_id': perspective['object_id'],
'client_id': perspective['client_id']})
perspective_json['client_id'] = perspective['client_id']
perspective_json['object_id'] = perspective['object_id']
fake_ids[perspective_json['fake_id']] = perspective
for perspective_json in req:
path = request.route_url('perspective_fields',
dictionary_client_id=parent_client_id,
dictionary_object_id=parent_object_id,
perspective_client_id=perspective_json['client_id'],
perspective_object_id=perspective_json['object_id'])
subreq = Request.blank(path)
subreq.method = 'PUT'
subreq.headers = request.headers
for field in perspective_json['fields']:
if field.get('link') and field['link'].get('fake_id'):
field['link'] = fake_ids[field['link']['fake_id']]
subreq.json = perspective_json['fields']
headers = {'Cookie': request.headers['Cookie']}
subreq.headers = headers
resp = request.invoke_subrequest(subreq)
return result
except KeyError as e:
request.response.status = HTTPBadRequest.code
return {'error': str(e)}
except IntegrityError as e:
request.response.status = HTTPInternalServerError.code
return {'error': str(e)}
except CommonException as e:
request.response.status = HTTPConflict.code
return {'error': str(e)}
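`complex_create` resolves `fake_id` link placeholders into the real `client_id`/`object_id` pairs created in the first pass. A sketch of just that remapping step (the function name is illustrative):

```python
def resolve_links(fields, fake_ids):
    # Replace {'fake_id': ...} link placeholders with the real id pairs
    # recorded while creating the perspectives, as in the second pass of
    # complex_create above. Fields without a link are left untouched.
    for field in fields:
        link = field.get('link')
        if link and link.get('fake_id'):
            field['link'] = fake_ids[link['fake_id']]
    return fields
```

The two-pass design (create everything, then patch links) is what makes mutually linked perspectives possible in a single request.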
@view_config(route_name='perspectives', renderer='json', request_method='GET')
def view_perspectives(request):
parent_client_id = request.matchdict.get('dictionary_client_id')
parent_object_id = request.matchdict.get('dictionary_object_id')
published = request.params.get('published', None)
parent = DBSession.query(Dictionary).filter_by(client_id=parent_client_id, object_id=parent_object_id).first()
if not parent:
request.response.status = HTTPNotFound.code
return {'error': str("No such dictionary in the system")}
perspectives = list()
if published:
subreq = Request.blank('/translation_service_search')
subreq.method = 'POST'
subreq.headers = request.headers
subreq.json = {'searchstring': 'Published'}
headers = {'Cookie': request.headers['Cookie']}
subreq.headers = headers
resp = request.invoke_subrequest(subreq)
if 'error' not in resp.json:
state_translation_gist_object_id, state_translation_gist_client_id = resp.json['object_id'], resp.json[
'client_id']
published = (state_translation_gist_client_id, state_translation_gist_object_id)
else:
raise KeyError("Something wrong with the base", resp.json['error'])
subreq = Request.blank('/translation_service_search')
subreq.method = 'POST'
subreq.headers = request.headers
subreq.json = {'searchstring': 'Limited access'}
headers = {'Cookie': request.headers['Cookie']}
subreq.headers = headers
resp = request.invoke_subrequest(subreq)
if 'error' not in resp.json:
state_translation_gist_object_id, state_translation_gist_client_id = resp.json['object_id'], resp.json[
'client_id']
limited = (state_translation_gist_client_id, state_translation_gist_object_id)
else:
raise KeyError("Something wrong with the base", resp.json['error'])
for perspective in parent.dictionaryperspective:
path = request.route_url('perspective',
dictionary_client_id=parent_client_id,
dictionary_object_id=parent_object_id,
perspective_client_id=perspective.client_id,
perspective_object_id=perspective.object_id)
subreq = Request.blank(path)
subreq.method = 'GET'
subreq.headers = request.headers
resp = request.invoke_subrequest(subreq)
if published and not ((
published[0] == resp.json.get('state_translation_gist_client_id') and
published[1] == resp.json.get('state_translation_gist_object_id'))
or (
limited[0] == resp.json.get('state_translation_gist_client_id') and
limited[1] == resp.json.get('state_translation_gist_object_id'))
):
continue
if 'error' not in resp.json:
perspectives += [resp.json]
request.response.status = HTTPOk.code
return perspectives
@view_config(route_name='perspective_roles', renderer='json', request_method='GET')
def view_perspective_roles(request): # TODO: test
response = dict()
client_id = request.matchdict.get('perspective_client_id')
object_id = request.matchdict.get('perspective_object_id')
parent_client_id = request.matchdict.get('client_id')
parent_object_id = request.matchdict.get('object_id')
parent = DBSession.query(Dictionary).filter_by(client_id=parent_client_id, object_id=parent_object_id).first()
if not parent:
request.response.status = HTTPNotFound.code
return {'error': str("No such dictionary in the system")}
perspective = DBSession.query(DictionaryPerspective).filter_by(client_id=client_id, object_id=object_id).first()
if perspective:
if not perspective.marked_for_deletion:
bases = DBSession.query(BaseGroup).filter_by(perspective_default=True)
roles_users = dict()
roles_organizations = dict()
for base in bases:
group = DBSession.query(Group).filter_by(base_group_id=base.id,
subject_object_id=object_id,
subject_client_id=client_id).first()
if not group:
log.debug("No group for base: " + base.name)
continue
perm = base.name
users = []
for user in group.users:
users += [user.id]
organizations = []
for org in group.organizations:
organizations += [org.id]
roles_users[perm] = users
roles_organizations[perm] = organizations
response['roles_users'] = roles_users
response['roles_organizations'] = roles_organizations
request.response.status = HTTPOk.code
return response
request.response.status = HTTPNotFound.code
return {'error': str("No such perspective in the system")}
@view_config(route_name='perspective_roles', renderer='json', request_method='POST', permission='create')
def edit_perspective_roles(request):
DBSession.execute("LOCK TABLE user_to_group_association IN EXCLUSIVE MODE;")
DBSession.execute("LOCK TABLE organization_to_group_association IN EXCLUSIVE MODE;")
response = dict()
client_id = request.matchdict.get('perspective_client_id')
object_id = request.matchdict.get('perspective_object_id')
parent_client_id = request.matchdict.get('client_id')
parent_object_id = request.matchdict.get('object_id')
url = request.route_url('perspective_roles',
client_id=parent_client_id,
object_id=parent_object_id,
perspective_client_id=client_id,
perspective_object_id=object_id)
subreq = Request.blank(url)
subreq.method = 'GET'
headers = {'Cookie': request.headers['Cookie']}
subreq.headers = headers
previous = request.invoke_subrequest(subreq).json_body
if isinstance(request.json_body, str):
req = json.loads(request.json_body)
else:
req = request.json_body
for role_name in req['roles_users']:
remove_list = list()
for user in req['roles_users'][role_name]:
if user in previous['roles_users'][role_name]:
previous['roles_users'][role_name].remove(user)
remove_list.append(user)
for user in remove_list:
req['roles_users'][role_name].remove(user)
for role_name in req['roles_organizations']:
remove_list = list()
for org in req['roles_organizations'][role_name]:
if org in previous['roles_organizations'][role_name]:
previous['roles_organizations'][role_name].remove(org)
remove_list.append(org)
for org in remove_list:
req['roles_organizations'][role_name].remove(org)
delete_flag = False
for role_name in previous['roles_users']:
if previous['roles_users'][role_name]:
delete_flag = True
break
for role_name in previous['roles_organizations']:
if previous['roles_organizations'][role_name]:
delete_flag = True
break
if delete_flag:
subreq = Request.blank(url)
subreq.json = previous
subreq.method = 'PATCH'
headers = {'Cookie': request.headers['Cookie']}
subreq.headers = headers
request.invoke_subrequest(subreq)
roles_users = None
if 'roles_users' in req:
roles_users = req['roles_users']
roles_organizations = None
if 'roles_organizations' in req:
roles_organizations = req['roles_organizations']
parent = DBSession.query(Dictionary).filter_by(client_id=parent_client_id, object_id=parent_object_id).first()
if not parent:
request.response.status = HTTPNotFound.code
return {'error': str("No such dictionary in the system")}
perspective = DBSession.query(DictionaryPerspective).filter_by(client_id=client_id, object_id=object_id).first()
if perspective and not perspective.marked_for_deletion:
if roles_users:
for role_name in roles_users:
base = DBSession.query(BaseGroup).filter_by(name=role_name, perspective_default=True).first()
if not base:
log.debug("Not found role: " + role_name)
request.response.status = HTTPNotFound.code
return {'error': str("No such role in the system")}
group = DBSession.query(Group).filter_by(base_group_id=base.id,
subject_object_id=object_id,
subject_client_id=client_id).first()
client = DBSession.query(Client).filter_by(id=request.authenticated_userid).first()
userlogged = DBSession.query(User).filter_by(id=client.user_id).first()
permitted = False
if userlogged.is_active and userlogged in group.users:
permitted = True
if userlogged.is_active and not permitted:
for org in userlogged.organizations:
if org in group.organizations:
permitted = True
break
if userlogged.is_active and not permitted:
override_group = DBSession.query(Group).filter_by(base_group_id=base.id, subject_override=True).first()
if userlogged in override_group.users:
permitted = True
if permitted:
users = roles_users[role_name]
for userid in users:
user = DBSession.query(User).filter_by(id=userid).first()
if user:
if user not in group.users:
group.users.append(user)
else:
if roles_users[role_name]:
request.response.status = HTTPForbidden.code
return {'error': str("Not enough permission")}
if roles_organizations:
for role_name in roles_organizations:
base = DBSession.query(BaseGroup).filter_by(name=role_name, perspective_default=True).first()
if not base:
request.response.status = HTTPNotFound.code
return {'error': str("No such role in the system")}
group = DBSession.query(Group).filter_by(base_group_id=base.id,
subject_object_id=object_id,
subject_client_id=client_id).first()
client = DBSession.query(Client).filter_by(id=request.authenticated_userid).first()
userlogged = DBSession.query(User).filter_by(id=client.user_id).first()
permitted = False
if userlogged.is_active and userlogged in group.users:
permitted = True
if userlogged.is_active and not permitted:
for org in userlogged.organizations:
if org in group.organizations:
permitted = True
break
if userlogged.is_active and not permitted:
override_group = DBSession.query(Group).filter_by(base_group_id=base.id, subject_override=True).first()
if userlogged in override_group.users:
permitted = True
if permitted:
orgs = roles_organizations[role_name]
for orgid in orgs:
org = DBSession.query(Organization).filter_by(id=orgid).first()
if org:
if org not in group.organizations:
group.organizations.append(org)
else:
if roles_organizations[role_name]:
request.response.status = HTTPForbidden.code
return {'error': str("Not enough permission")}
request.response.status = HTTPOk.code
return response
request.response.status = HTTPNotFound.code
return {'error': str("No such perspective in the system")}
@view_config(route_name='perspective_roles', renderer='json', request_method='PATCH', permission='delete')
def delete_perspective_roles(request): # TODO: test
response = dict()
client_id = request.matchdict.get('perspective_client_id')
object_id = request.matchdict.get('perspective_object_id')
parent_client_id = request.matchdict.get('client_id')
parent_object_id = request.matchdict.get('object_id')
if isinstance(request.json_body, str):
req = json.loads(request.json_body)
else:
req = request.json_body
roles_users = None
if 'roles_users' in req:
roles_users = req['roles_users']
roles_organizations = None
if 'roles_organizations' in req:
roles_organizations = req['roles_organizations']
parent = DBSession.query(Dictionary).filter_by(client_id=parent_client_id, object_id=parent_object_id).first()
if not parent:
request.response.status = HTTPNotFound.code
return {'error': str("No such dictionary in the system")}
perspective = DBSession.query(DictionaryPerspective).filter_by(client_id=client_id, object_id=object_id).first()
if perspective:
if not perspective.marked_for_deletion:
if roles_users:
for role_name in roles_users:
base = DBSession.query(BaseGroup).filter_by(name=role_name, perspective_default=True).first()
if not base:
request.response.status = HTTPNotFound.code
return {'error': str("No such role in the system")}
group = DBSession.query(Group).filter_by(base_group_id=base.id,
subject_object_id=object_id,
subject_client_id=client_id).first()
client = DBSession.query(Client).filter_by(id=request.authenticated_userid).first()
userlogged = DBSession.query(User).filter_by(id=client.user_id).first()
permitted = False
if userlogged.is_active and userlogged in group.users:
permitted = True
if userlogged.is_active and not permitted:
for org in userlogged.organizations:
if org in group.organizations:
permitted = True
break
if userlogged.is_active and not permitted:
override_group = DBSession.query(Group).filter_by(base_group_id=base.id, subject_override=True).first()
if userlogged in override_group.users:
permitted = True
if permitted:
users = roles_users[role_name]
for userid in users:
user = DBSession.query(User).filter_by(id=userid).first()
if user:
if user.id == userlogged.id:
request.response.status = HTTPForbidden.code
return {'error': str("Cannot delete roles from self")}
if user in group.users:
group.users.remove(user)
else:
if roles_users[role_name]:
request.response.status = HTTPForbidden.code
return {'error': str("Not enough permission")}
if roles_organizations:
for role_name in roles_organizations:
base = DBSession.query(BaseGroup).filter_by(name=role_name, perspective_default=True).first()
if not base:
request.response.status = HTTPNotFound.code
return {'error': str("No such role in the system")}
group = DBSession.query(Group).filter_by(base_group_id=base.id,
subject_object_id=object_id,
subject_client_id=client_id).first()
client = DBSession.query(Client).filter_by(id=request.authenticated_userid).first()
userlogged = DBSession.query(User).filter_by(id=client.user_id).first()
permitted = False
if userlogged.is_active and userlogged in group.users:
permitted = True
if userlogged.is_active and not permitted:
for org in userlogged.organizations:
if org in group.organizations:
permitted = True
break
if userlogged.is_active and not permitted:
override_group = DBSession.query(Group).filter_by(base_group_id=base.id, subject_override=True).first()
if userlogged in override_group.users:
permitted = True
if permitted:
orgs = roles_organizations[role_name]
for orgid in orgs:
org = DBSession.query(Organization).filter_by(id=orgid).first()
if org:
if org in group.organizations:
group.organizations.remove(org)
else:
if roles_organizations[role_name]:
request.response.status = HTTPForbidden.code
return {'error': str("Not enough permission")}
request.response.status = HTTPOk.code
return response
request.response.status = HTTPNotFound.code
return {'error': str("No such perspective in the system")}
@view_config(route_name='perspective_status', renderer='json', request_method='GET')
def view_perspective_status(request): # tested & in docs
response = dict()
client_id = request.matchdict.get('perspective_client_id')
object_id = request.matchdict.get('perspective_object_id')
parent_client_id = request.matchdict.get('dictionary_client_id')
parent_object_id = request.matchdict.get('dictionary_object_id')
parent = DBSession.query(Dictionary).filter_by(client_id=parent_client_id, object_id=parent_object_id).first()
if not parent:
request.response.status = HTTPNotFound.code
return {'error': str("No such dictionary in the system")}
perspective = DBSession.query(DictionaryPerspective).filter_by(client_id=client_id, object_id=object_id).first()
if perspective and not perspective.marked_for_deletion:
if perspective.parent != parent:
request.response.status = HTTPNotFound.code
return {'error': str("No such pair of dictionary/perspective in the system")}
response['state_translation_gist_client_id'] = perspective.state_translation_gist_client_id
response['state_translation_gist_object_id'] = perspective.state_translation_gist_object_id
atom = DBSession.query(TranslationAtom).filter_by(parent_client_id=perspective.state_translation_gist_client_id,
parent_object_id=perspective.state_translation_gist_object_id,
locale_id=int(request.cookies.get('locale_id') or 2)).first()
response['status'] = atom.content if atom else None
request.response.status = HTTPOk.code
return response
request.response.status = HTTPNotFound.code
return {'error': str("No such perspective in the system")}
@view_config(route_name='perspective_status', renderer='json', request_method='PUT', permission='edit')
def edit_perspective_status(request): # tested & in docs
response = dict()
client_id = request.matchdict.get('perspective_client_id')
object_id = request.matchdict.get('perspective_object_id')
parent_client_id = request.matchdict.get('dictionary_client_id')
parent_object_id = request.matchdict.get('dictionary_object_id')
parent = DBSession.query(Dictionary).filter_by(client_id=parent_client_id, object_id=parent_object_id).first()
if not parent:
request.response.status = HTTPNotFound.code
return {'error': str("No such dictionary in the system")}
perspective = DBSession.query(DictionaryPerspective).filter_by(client_id=client_id, object_id=object_id).first()
if perspective and not perspective.marked_for_deletion:
if perspective.parent != parent:
request.response.status = HTTPNotFound.code
return {'error': str("No such pair of dictionary/perspective in the system")}
        if isinstance(request.json_body, str):
req = json.loads(request.json_body)
else:
req = request.json_body
perspective.state_translation_gist_client_id = req['state_translation_gist_client_id']
perspective.state_translation_gist_object_id = req['state_translation_gist_object_id']
atom = DBSession.query(TranslationAtom).filter_by(parent_client_id=req['state_translation_gist_client_id'],
parent_object_id=req['state_translation_gist_object_id'],
locale_id=int(request.cookies['locale_id'])).first()
response['status'] = atom.content
request.response.status = HTTPOk.code
return response
request.response.status = HTTPNotFound.code
return {'error': str("No such perspective in the system")}
@view_config(route_name='field', renderer='json', request_method='GET')
def view_field(request):
response = dict()
client_id = request.matchdict.get('client_id')
object_id = request.matchdict.get('object_id')
field = DBSession.query(Field).filter_by(client_id=client_id, object_id=object_id).first()
if field:
return view_field_from_object(request=request, field=field)
else:
request.response.status = HTTPNotFound.code
return {'error': str("No such field in the system")}
@view_config(route_name='fields', renderer='json', request_method='GET')
def all_fields(request):
fields = DBSession.query(Field).filter_by(marked_for_deletion=False).all() #todo: think about desktop and sync
response = list()
for field in fields:
response.append(view_field_from_object(request=request, field=field))
    request.response.status = HTTPOk.code
return response
@view_config(route_name='create_field', renderer='json', request_method='POST')
def create_field(request):
try:
variables = {'auth': authenticated_userid(request)}
try:
            if isinstance(request.json_body, str):
req = json.loads(request.json_body)
else:
req = request.json_body
except AttributeError:
print('invalid json')
request.response.status = HTTPBadRequest.code
return {'error': "invalid json"}
translation_gist_client_id = req['translation_gist_client_id']
translation_gist_object_id = req['translation_gist_object_id']
data_type_translation_gist_client_id = req['data_type_translation_gist_client_id']
data_type_translation_gist_object_id = req['data_type_translation_gist_object_id']
object_id = req.get('object_id', None)
        marked_for_deletion = req.get('marked_for_deletion', False)
client = DBSession.query(Client).filter_by(id=variables['auth']).first()
if not client:
print('invalid client id')
raise KeyError("Invalid client id (not registered on server). Try to logout and then login.")
user = DBSession.query(User).filter_by(id=client.user_id).first()
if not user:
raise CommonException("This client id is orphaned. Try to logout and then login once more.")
client_id = variables['auth']
if 'client_id' in req:
if check_client_id(authenticated = client.id, client_id=req['client_id']) or user.id == 1:
client_id = req['client_id']
else:
                request.response.status = HTTPBadRequest.code
return {'error': 'client_id from another user'}
field = Field(client_id=client_id,
object_id=object_id,
data_type_translation_gist_client_id=data_type_translation_gist_client_id,
data_type_translation_gist_object_id=data_type_translation_gist_object_id,
translation_gist_client_id=translation_gist_client_id,
translation_gist_object_id=translation_gist_object_id,
marked_for_deletion=marked_for_deletion
)
if req.get('is_translatable', None):
field.is_translatable = bool(req['is_translatable'])
DBSession.add(field)
DBSession.flush()
request.response.status = HTTPOk.code
return {'object_id': field.object_id,
'client_id': field.client_id}
except KeyError as e:
request.response.status = HTTPBadRequest.code
return {'error': str(e)}
except IntegrityError as e:
request.response.status = HTTPInternalServerError.code
return {'error': str(e)}
except CommonException as e:
request.response.status = HTTPConflict.code
return {'error': str(e)}
def create_nested_field(field, perspective, client_id, upper_level, link_ids, position):
field_object = DictionaryPerspectiveToField(client_id=client_id,
parent=perspective,
field_client_id=field['client_id'],
field_object_id=field['object_id'],
upper_level=upper_level,
position=position)
if field.get('link'):
        # if field_object.field.data_type_translation_gist_client_id != link_ids['client_id'] or field_object.field.data_type_translation_gist_object_id != link_ids['object_id']:
# return {'error':'wrong type for link'}
field_object.link_client_id = field['link']['client_id']
field_object.link_object_id = field['link']['object_id']
DBSession.flush()
contains = field.get('contains', None)
if contains:
inner_position = 1
for subfield in contains:
create_nested_field(subfield,
perspective,
client_id,
upper_level=field_object,
link_ids=link_ids,
position=inner_position)
inner_position += 1
return
def view_nested_field(request, field, link_ids):
    def row2dict(r):
        return {c.name: getattr(r, c.name) for c in r.__table__.columns}
field_object = field.field
field_json = view_field_from_object(request=request, field=field_object)
field_json['position'] = field.position
if 'error' in field_json:
return field_json
contains = list()
for subfield in field.dictionaryperspectivetofield: # todo: order subfields
if not subfield.marked_for_deletion:
subfield_json = view_nested_field(request, subfield, link_ids)
if 'error' in subfield_json:
return subfield_json
contains.append(subfield_json)
if contains:
field_json['contains'] = contains
if field_object.data_type_translation_gist_client_id == link_ids['client_id'] \
and field_object.data_type_translation_gist_object_id == link_ids['object_id']:
field_json['link'] = {'client_id': field.link_client_id, 'object_id': field.link_object_id}
upd_json = row2dict(field)
field_json.update(upd_json)
return field_json
@view_config(route_name='perspective_fields', renderer='json', request_method='GET')
def view_perspective_fields(request):
response = list()
client_id = request.matchdict.get('perspective_client_id')
object_id = request.matchdict.get('perspective_object_id')
perspective = DBSession.query(DictionaryPerspective).filter_by(client_id=client_id, object_id=object_id).first()
if perspective and not perspective.marked_for_deletion:
fields = DBSession.query(DictionaryPerspectiveToField) \
.filter_by(parent=perspective, upper_level=None, marked_for_deletion=False) \
.order_by(DictionaryPerspectiveToField.position) \
.all()
try:
link_gist = DBSession.query(TranslationGist) \
.join(TranslationAtom) \
.filter(TranslationGist.type == 'Service',
TranslationAtom.content == 'Link',
TranslationAtom.locale_id == 2).one()
link_ids = {'client_id': link_gist.client_id, 'object_id': link_gist.object_id}
except NoResultFound:
request.response.status = HTTPNotFound.code
return {'error': str("Something wrong with the base")}
for field in fields:
response.append(view_nested_field(request, field, link_ids))
request.response.status = HTTPOk.code
return response
else:
request.response.status = HTTPNotFound.code
return {'error': str("No such perspective in the system")}
@view_config(route_name='perspective_fields', renderer='json', request_method='PUT')
def update_perspective_fields(request):
response = dict()
client_id = request.matchdict.get('perspective_client_id')
object_id = request.matchdict.get('perspective_object_id')
perspective = DBSession.query(DictionaryPerspective).filter_by(client_id=client_id, object_id=object_id).first()
variables = {'auth': authenticated_userid(request)}
client = DBSession.query(Client).filter_by(id=variables['auth']).first()
if not client:
raise KeyError("Invalid client id (not registered on server). Try to logout and then login.")
user = DBSession.query(User).filter_by(id=client.user_id).first()
if not user:
raise CommonException("This client id is orphaned. Try to logout and then login once more.")
if perspective and not perspective.marked_for_deletion:
try:
            if isinstance(request.json_body, str):
req = json.loads(request.json_body)
else:
req = request.json_body
except AttributeError:
request.response.status = HTTPBadRequest.code
return {'error': "invalid json"}
try:
link_gist = DBSession.query(TranslationGist) \
.join(TranslationAtom) \
.filter(TranslationGist.type == 'Service',
TranslationAtom.content == 'Link',
TranslationAtom.locale_id == 2).one()
link_ids = {'client_id': link_gist.client_id, 'object_id': link_gist.object_id}
except NoResultFound:
request.response.status = HTTPNotFound.code
return {'error': str("Something wrong with the base")}
fields = DBSession.query(DictionaryPerspectiveToField) \
.filter_by(parent=perspective) \
.all()
DBSession.flush()
for field in fields:
field.marked_for_deletion = True
position = 1
for field in req:
create_nested_field(field=field,
perspective=perspective,
client_id=client.id,
upper_level=None,
link_ids=link_ids, position=position)
position += 1
request.response.status = HTTPOk.code
return response
else:
request.response.status = HTTPNotFound.code
return {'error': str("No such perspective in the system")}
@view_config(route_name='all_perspective_authors', renderer='json', request_method='GET')
def all_perspective_authors(request):
response = list()
client_id = request.matchdict.get('perspective_client_id')
object_id = request.matchdict.get('perspective_object_id')
parent = DBSession.query(DictionaryPerspective).filter_by(client_id=client_id, object_id=object_id).first()
if parent and not parent.marked_for_deletion:
authors = DBSession.query(User).join(User.clients).join(Entity, Entity.client_id == Client.id) \
.join(Entity.parent).join(Entity.publishingentity) \
.filter(LexicalEntry.parent_client_id == parent.client_id,
LexicalEntry.parent_object_id == parent.object_id,
LexicalEntry.marked_for_deletion == False,
Entity.marked_for_deletion == False)
response = [o.id for o in authors.all()]
request.response.status = HTTPOk.code
return response
else:
request.response.status = HTTPNotFound.code
return {'error': str("No such perspective in the system")}
@view_config(route_name='all_perspective_clients', renderer='json', request_method='GET')
def all_perspective_clients(request):
response = list()
client_id = request.matchdict.get('perspective_client_id')
object_id = request.matchdict.get('perspective_object_id')
parent = DBSession.query(DictionaryPerspective).filter_by(client_id=client_id, object_id=object_id).first()
if parent and not parent.marked_for_deletion:
clients = DBSession.query(Client).join(Entity, Entity.client_id == Client.id) \
.join(Entity.parent).join(Entity.publishingentity) \
.filter(LexicalEntry.parent_client_id == parent.client_id,
LexicalEntry.parent_object_id == parent.object_id,
LexicalEntry.marked_for_deletion == False,
Entity.marked_for_deletion == False)
response = [o.id for o in clients.all()]
request.response.status = HTTPOk.code
return response
else:
request.response.status = HTTPNotFound.code
return {'error': str("No such perspective in the system")}
# TODO: completely broken!
@view_config(route_name='lexical_entries_all', renderer='json', request_method='GET', permission='view')
def lexical_entries_all(request):
response = dict()
client_id = request.matchdict.get('perspective_client_id')
object_id = request.matchdict.get('perspective_object_id')
authors = request.params.getall('authors')
clients = request.params.getall('clients')
    start_date = request.params.get('start_date')
if start_date:
start_date = datetime.datetime.strptime(start_date, '%Y-%m-%d')
end_date = request.params.get('end_date')
if end_date:
end_date = datetime.datetime.strptime(end_date, '%Y-%m-%d')
field_client_id = int(request.params.get('field_client_id', 66))
field_object_id = int(request.params.get('field_object_id', 10))
start_from = request.params.get('start_from') or 0
count = request.params.get('count') or 20
parent = DBSession.query(DictionaryPerspective).filter_by(client_id=client_id, object_id=object_id).first()
if parent and not parent.marked_for_deletion:
lexes = DBSession.query(LexicalEntry).join(LexicalEntry.entity).join(Entity.publishingentity) \
.filter(LexicalEntry.parent == parent, LexicalEntry.marked_for_deletion == False,
Entity.marked_for_deletion == False)
if authors or clients:
lexes = lexes.join(Client, Entity.client_id == Client.id)
if authors:
lexes = lexes.join(Client.user).filter(User.id.in_(authors))
if clients:
lexes = lexes.filter(Client.id.in_(clients))
if start_date:
lexes = lexes.filter(Entity.created_at >= start_date)
if end_date:
lexes = lexes.filter(Entity.created_at <= end_date) # todo: check if field=field ever works
lexes = lexes \
.order_by(func.min(case(
[(or_(Entity.field_client_id != field_client_id,
Entity.field_object_id != field_object_id),
'яяяяяя')],
else_=Entity.content))) \
.group_by(LexicalEntry) \
.offset(start_from).limit(count)
result = deque()
lexes_composite_list = [(lex.created_at,
lex.client_id, lex.object_id, lex.parent_client_id, lex.parent_object_id,
lex.marked_for_deletion, lex.additional_metadata,
lex.additional_metadata.get('came_from')
if lex.additional_metadata and 'came_from' in lex.additional_metadata else None)
for lex in lexes.all()]
result = LexicalEntry.track_multiple(lexes_composite_list, int(request.cookies.get('locale_id') or 2), publish=None, accept=True)
response = list(result)
request.response.status = HTTPOk.code
return response
else:
request.response.status = HTTPNotFound.code
return {'error': str("No such perspective in the system")}
@view_config(route_name='lexical_entries_all_count', renderer='json', request_method='GET')
def lexical_entries_all_count(request): # tested
client_id = request.matchdict.get('perspective_client_id')
object_id = request.matchdict.get('perspective_object_id')
authors = request.params.getall('authors')
clients = request.params.getall('clients')
    start_date = request.params.get('start_date')
if start_date:
start_date = datetime.datetime.strptime(start_date, '%Y-%m-%d')
end_date = request.params.get('end_date')
if end_date:
end_date = datetime.datetime.strptime(end_date, '%Y-%m-%d')
parent = DBSession.query(DictionaryPerspective).filter_by(client_id=client_id, object_id=object_id).first()
if parent and not parent.marked_for_deletion:
lexical_entries_count = DBSession.query(LexicalEntry).join(LexicalEntry.entity) \
.join(Entity.publishingentity) \
.filter(LexicalEntry.parent == parent, LexicalEntry.marked_for_deletion == False,
Entity.marked_for_deletion == False)
if authors or clients:
lexical_entries_count = lexical_entries_count.join(Client, Entity.client_id == Client.id)
if authors:
lexical_entries_count = lexical_entries_count.join(Client.user).filter(User.id.in_(authors))
if clients:
lexical_entries_count = lexical_entries_count.filter(Client.id.in_(clients))
if start_date:
lexical_entries_count = lexical_entries_count.filter(Entity.created_at >= start_date)
if end_date:
lexical_entries_count = lexical_entries_count.filter(Entity.created_at <= end_date)
lexical_entries_count = lexical_entries_count.group_by(LexicalEntry).count()
return {"count": lexical_entries_count}
else:
request.response.status = HTTPNotFound.code
return {'error': str("No such perspective in the system")}
# TODO: completely broken!
@view_config(route_name='lexical_entries_published', renderer='json', request_method='GET')
def lexical_entries_published(request):
response = dict()
client_id = request.matchdict.get('perspective_client_id')
object_id = request.matchdict.get('perspective_object_id')
authors = request.params.getall('authors')
clients = request.params.getall('clients')
    start_date = request.params.get('start_date')
if start_date:
start_date = datetime.datetime.strptime(start_date, '%Y-%m-%d')
end_date = request.params.get('end_date')
if end_date:
end_date = datetime.datetime.strptime(end_date, '%Y-%m-%d')
field_client_id = int(request.params.get('field_client_id', 66))
field_object_id = int(request.params.get('field_object_id', 10))
start_from = request.params.get('start_from') or 0
count = request.params.get('count') or 20
preview_mode = False
parent = DBSession.query(DictionaryPerspective).filter_by(client_id=client_id, object_id=object_id).first()
if parent and not parent.marked_for_deletion:
if (parent.state == 'Limited access' or parent.parent.state == 'Limited access') and "view:lexical_entries_and_entities:" + client_id + ":" + object_id not in request.effective_principals:
log.debug("PREVIEW MODE")
preview_mode = True
lexes = DBSession.query(LexicalEntry) \
.join(LexicalEntry.entity).join(Entity.publishingentity) \
.filter(LexicalEntry.parent == parent, PublishingEntity.published == True,
Entity.marked_for_deletion == False, LexicalEntry.marked_for_deletion == False)
if authors or clients:
lexes = lexes.join(Client, Entity.client_id == Client.id)
if authors:
lexes = lexes.join(Client.user).filter(User.id.in_(authors))
if clients:
lexes = lexes.filter(Client.id.in_(clients))
if start_date:
lexes = lexes.filter(Entity.created_at >= start_date)
if end_date:
lexes = lexes.filter(Entity.created_at <= end_date)
        lexes = lexes \
            .order_by(func.min(case(
                [(or_(Entity.field_client_id != field_client_id,
                      Entity.field_object_id != field_object_id),
                  'яяяяяя')],
                else_=Entity.content))) \
            .group_by(LexicalEntry) \
            .offset(start_from).limit(count)
result = deque()
lexes_composite_list = [(lex.created_at,
lex.client_id, lex.object_id, lex.parent_client_id, lex.parent_object_id,
lex.marked_for_deletion, lex.additional_metadata,
lex.additional_metadata.get('came_from')
if lex.additional_metadata and 'came_from' in lex.additional_metadata else None)
for lex in lexes.all()]
result = LexicalEntry.track_multiple(lexes_composite_list, int(request.cookies.get('locale_id') or 2), publish=True, accept=True)
response = list(result)
if preview_mode:
if int(start_from) > 0 or int(count) > 20:
for i in response:
for j in i['contains']:
                        j['content'] = 'Entity hidden: you have only demo access'
j['contains'] = []
request.response.status = HTTPOk.code
return response
else:
request.response.status = HTTPNotFound.code
return {'error': str("No such perspective in the system")}
@view_config(route_name='lexical_entries_not_accepted', renderer='json', request_method='GET')
def lexical_entries_not_accepted(request):
response = dict()
client_id = request.matchdict.get('perspective_client_id')
object_id = request.matchdict.get('perspective_object_id')
authors = request.params.getall('authors')
clients = request.params.getall('clients')
    start_date = request.params.get('start_date')
if start_date:
start_date = datetime.datetime.strptime(start_date, '%Y-%m-%d')
end_date = request.params.get('end_date')
if end_date:
end_date = datetime.datetime.strptime(end_date, '%Y-%m-%d')
field_client_id = int(request.params.get('field_client_id', 66))
field_object_id = int(request.params.get('field_object_id', 10))
start_from = request.params.get('start_from') or 0
count = request.params.get('count') or 20
parent = DBSession.query(DictionaryPerspective).filter_by(client_id=client_id, object_id=object_id).first()
if parent and not parent.marked_for_deletion:
lexes = DBSession.query(LexicalEntry).filter_by(marked_for_deletion=False, parent_client_id=parent.client_id,
parent_object_id=parent.object_id) \
.join(LexicalEntry.entity).join(Entity.publishingentity) \
.filter(PublishingEntity.accepted == False)
if authors or clients:
lexes = lexes.join(Client, Entity.client_id == Client.id)
if authors:
lexes = lexes.join(Client.user).filter(User.id.in_(authors))
if clients:
lexes = lexes.filter(Client.id.in_(clients))
if start_date:
lexes = lexes.filter(Entity.created_at >= start_date)
if end_date:
lexes = lexes.filter(Entity.created_at <= end_date)
        lexes = lexes \
            .order_by(func.min(case(
                [(or_(Entity.field_client_id != field_client_id,
                      Entity.field_object_id != field_object_id),
                  'яяяяяя')],
                else_=Entity.content))) \
            .group_by(LexicalEntry) \
            .offset(start_from).limit(count)
result = deque()
lexes_composite_list = [(lex.created_at,
lex.client_id, lex.object_id, lex.parent_client_id, lex.parent_object_id,
lex.marked_for_deletion, lex.additional_metadata,
lex.additional_metadata.get('came_from')
if lex.additional_metadata and 'came_from' in lex.additional_metadata else None)
for lex in lexes.all()]
result = LexicalEntry.track_multiple(lexes_composite_list, int(request.cookies.get('locale_id') or 2), publish=None, accept=False)
response = list(result)
request.response.status = HTTPOk.code
return response
else:
request.response.status = HTTPNotFound.code
return {'error': str("No such perspective in the system")}
@view_config(route_name='lexical_entries_published_count', renderer='json', request_method='GET')
def lexical_entries_published_count(request):
client_id = request.matchdict.get('perspective_client_id')
object_id = request.matchdict.get('perspective_object_id')
authors = request.params.getall('authors')
clients = request.params.getall('clients')
    start_date = request.params.get('start_date')
if start_date:
start_date = datetime.datetime.strptime(start_date, '%Y-%m-%d')
end_date = request.params.get('end_date')
if end_date:
end_date = datetime.datetime.strptime(end_date, '%Y-%m-%d')
parent = DBSession.query(DictionaryPerspective).filter_by(client_id=client_id, object_id=object_id).first()
if parent:
if not parent.marked_for_deletion:
lexical_entries_count = DBSession.query(LexicalEntry) \
.join(LexicalEntry.entity).join(Entity.publishingentity) \
.filter(LexicalEntry.parent == parent, PublishingEntity.published == True,
Entity.marked_for_deletion == False, LexicalEntry.marked_for_deletion == False)
if authors or clients:
lexical_entries_count = lexical_entries_count.join(Client, Entity.client_id == Client.id)
if authors:
lexical_entries_count = lexical_entries_count.join(Client.user).filter(User.id.in_(authors))
if clients:
lexical_entries_count = lexical_entries_count.filter(Client.id.in_(clients))
if start_date:
lexical_entries_count = lexical_entries_count.filter(Entity.created_at >= start_date)
if end_date:
lexical_entries_count = lexical_entries_count.filter(Entity.created_at <= end_date)
lexical_entries_count = lexical_entries_count.group_by(LexicalEntry).count()
return {"count": lexical_entries_count}
else:
request.response.status = HTTPNotFound.code
return {'error': str("No such perspective in the system")}
@view_config(route_name='lexical_entries_not_accepted_count', renderer='json', request_method='GET')
def lexical_entries_not_accepted_count(request):
client_id = request.matchdict.get('perspective_client_id')
object_id = request.matchdict.get('perspective_object_id')
authors = request.params.getall('authors')
clients = request.params.getall('clients')
    start_date = request.params.get('start_date')
if start_date:
start_date = datetime.datetime.strptime(start_date, '%Y-%m-%d')
end_date = request.params.get('end_date')
if end_date:
end_date = datetime.datetime.strptime(end_date, '%Y-%m-%d')
parent = DBSession.query(DictionaryPerspective).filter_by(client_id=client_id, object_id=object_id).first()
if parent:
if not parent.marked_for_deletion:
lexical_entries_count = DBSession.query(LexicalEntry).filter_by(marked_for_deletion=False,
parent_client_id=parent.client_id,
parent_object_id=parent.object_id) \
.join(LexicalEntry.entity).join(Entity.publishingentity) \
.filter(PublishingEntity.accepted == False)
if authors or clients:
lexical_entries_count = lexical_entries_count.join(Client, Entity.client_id == Client.id)
if authors:
lexical_entries_count = lexical_entries_count.join(Client.user).filter(User.id.in_(authors))
if clients:
lexical_entries_count = lexical_entries_count.filter(Client.id.in_(clients))
if start_date:
lexical_entries_count = lexical_entries_count.filter(Entity.created_at >= start_date)
if end_date:
lexical_entries_count = lexical_entries_count.filter(Entity.created_at <= end_date)
lexical_entries_count = lexical_entries_count.group_by(LexicalEntry).count()
return {"count": lexical_entries_count}
else:
request.response.status = HTTPNotFound.code
return {'error': str("No such perspective in the system")}
@view_config(route_name='approve_entity', renderer='json', request_method='PATCH', permission='create')
def approve_entity(request):
try:
        if isinstance(request.json_body, str):
req = json.loads(request.json_body)
else:
req = request.json_body
variables = {'auth': request.authenticated_userid}
client = DBSession.query(Client).filter_by(id=variables['auth']).first()
if not client:
raise KeyError("Invalid client id (not registered on server). Try to logout and then login.",
variables['auth'])
user = DBSession.query(User).filter_by(id=client.user_id).first()
if not user:
raise CommonException("This client id is orphaned. Try to logout and then login once more.")
        for entry in req:
            entity = DBSession.query(Entity). \
                filter_by(client_id=entry['client_id'], object_id=entry['object_id']).first()
            # Check that the entity exists before dereferencing its parent perspective.
            if not entity:
                raise CommonException("no such entity in system")
            group = DBSession.query(Group).join(BaseGroup).filter(
                BaseGroup.subject == 'approve_entities',
                Group.subject_client_id == entity.parent.parent.client_id,
                Group.subject_object_id == entity.parent.parent.object_id,
                BaseGroup.action == 'create').one()
            if user.is_active and user in group.users:
                entity.publishingentity.published = True
            else:
                raise CommonException("Forbidden")
request.response.status = HTTPOk.code
return {}
except KeyError as e:
request.response.status = HTTPBadRequest.code
return {'error': str(e)}
except IntegrityError as e:
request.response.status = HTTPInternalServerError.code
return {'error': str(e)}
except CommonException as e:
request.response.status = HTTPConflict.code
return {'error': str(e)}
@view_config(route_name='accept_entity', renderer='json', request_method='PATCH', permission='create')
def accept_entity(request):
try:
        if isinstance(request.json_body, str):
req = json.loads(request.json_body)
else:
req = request.json_body
variables = {'auth': request.authenticated_userid}
client = DBSession.query(Client).filter_by(id=variables['auth']).first()
if not client:
raise KeyError("Invalid client id (not registered on server). Try to logout and then login.",
variables['auth'])
user = DBSession.query(User).filter_by(id=client.user_id).first()
if not user:
raise CommonException("This client id is orphaned. Try to logout and then login once more.")
        for entry in req:
            entity = DBSession.query(Entity). \
                filter_by(client_id=entry['client_id'], object_id=entry['object_id']).first()
            if not entity:
                raise CommonException("no such entity in system")
            group = DBSession.query(Group).join(BaseGroup).filter(
                BaseGroup.subject == 'lexical_entries_and_entities',
                Group.subject_client_id == entity.parent.parent.client_id,
                Group.subject_object_id == entity.parent.parent.object_id,
                BaseGroup.action == 'create').one()
            override_group = DBSession.query(Group).join(BaseGroup).filter(
                BaseGroup.subject == 'lexical_entries_and_entities',
                Group.subject_override == True,
                BaseGroup.action == 'create').one()
            if user.is_active and (user in group.users or user in override_group.users):
                entity.publishingentity.accepted = True
            else:
                raise CommonException("Forbidden")
request.response.status = HTTPOk.code
return {}
except KeyError as e:
request.response.status = HTTPBadRequest.code
return {'error': str(e)}
except IntegrityError as e:
request.response.status = HTTPInternalServerError.code
return {'error': str(e)}
except CommonException as e:
request.response.status = HTTPConflict.code
return {'error': str(e)}
@view_config(route_name='approve_entity', renderer='json', request_method='DELETE', permission='delete')
def disapprove_entity(request):
try:
        if isinstance(request.json_body, str):
req = json.loads(request.json_body)
else:
req = request.json_body
variables = {'auth': request.authenticated_userid}
client = DBSession.query(Client).filter_by(id=variables['auth']).first()
if not client:
raise KeyError("Invalid client id (not registered on server). Try to logout and then login.",
variables['auth'])
user = DBSession.query(User).filter_by(id=client.user_id).first()
if not user:
raise CommonException("This client id is orphaned. Try to logout and then login once more.")
        for entry in req:
            entity = DBSession.query(Entity). \
                filter_by(client_id=entry['client_id'], object_id=entry['object_id']).first()
            # Check that the entity exists before dereferencing its parent perspective.
            if not entity:
                raise CommonException("no such entity in system")
            group = DBSession.query(Group).join(BaseGroup).filter(
                BaseGroup.subject == 'approve_entities',
                Group.subject_client_id == entity.parent.parent.client_id,
                Group.subject_object_id == entity.parent.parent.object_id,
                BaseGroup.action == 'delete').one()
            if user.is_active and user in group.users:
                entity.publishingentity.published = False
            else:
                raise CommonException("Forbidden")
request.response.status = HTTPOk.code
return {}
except KeyError as e:
request.response.status = HTTPBadRequest.code
return {'error': str(e)}
except IntegrityError as e:
request.response.status = HTTPInternalServerError.code
return {'error': str(e)}
except CommonException as e:
request.response.status = HTTPConflict.code
return {'error': str(e)}
@view_config(route_name='approve_all', renderer='json', request_method='PATCH', permission='create')
def approve_all(request):
response = dict()
client_id = request.matchdict.get('perspective_client_id')
object_id = request.matchdict.get('perspective_object_id')
parent = DBSession.query(DictionaryPerspective).filter_by(client_id=client_id, object_id=object_id).first()
if parent:
if not parent.marked_for_deletion:
dictionary_client_id = parent.parent_client_id
dictionary_object_id = parent.parent_object_id
entities = DBSession.query(PublishingEntity).join(Entity,
and_(Entity.client_id == PublishingEntity.client_id,
Entity.object_id == PublishingEntity.object_id))\
.join(Entity.parent).filter(LexicalEntry.parent == parent).all()
for entity in entities:
entity.published = True
request.response.status = HTTPOk.code
return response
request.response.status = HTTPNotFound.code
return {'error': str("No such perspective in the system")}
@view_config(route_name='accept_all', renderer='json', request_method='PATCH', permission='create')
def accept_all(request):
    response = dict()
    client_id = request.matchdict.get('perspective_client_id')
    object_id = request.matchdict.get('perspective_object_id')
    parent = DBSession.query(DictionaryPerspective).filter_by(client_id=client_id, object_id=object_id).first()
    if parent:
        if not parent.marked_for_deletion:
            dictionary_client_id = parent.parent_client_id
            dictionary_object_id = parent.parent_object_id
            entities = DBSession.query(Entity).join(Entity.parent).filter(LexicalEntry.parent == parent).all()
            url = request.route_url('accept_entity',
                                    dictionary_client_id=dictionary_client_id,
                                    dictionary_object_id=dictionary_object_id,
                                    perspective_client_id=client_id,
                                    perspective_object_id=object_id)
            subreq = Request.blank(url)
            jsn = [{'client_id': o.client_id, 'object_id': o.object_id} for o in entities]
            subreq.json = jsn
            subreq.method = 'PATCH'
            headers = {'Cookie': request.headers['Cookie']}
            subreq.headers = headers
            request.invoke_subrequest(subreq)
            request.response.status = HTTPOk.code
            return response
    request.response.status = HTTPNotFound.code
    return {'error': str("No such perspective in the system")}
@view_config(route_name='approve_all_outer', renderer='json', request_method='PATCH', permission='create')
def approve_outer(request):  # TODO: create test.
    from lingvodoc.scripts.approve import approve_all_outer
    client_id = request.matchdict.get('perspective_client_id')
    object_id = request.matchdict.get('perspective_object_id')
    cli_id = request.matchdict.get('dictionary_client_id')
    obj_id = request.matchdict.get('dictionary_object_id')
    # convert_one(blob.real_storage_path,
    #             user.login,
    #             user.password.hash,
    #             parent_client_id,
    #             parent_object_id)
    # NOTE: doesn't work on Mac OS otherwise
    client = DBSession.query(Client).filter_by(id=authenticated_userid(request)).first()
    user = client.user
    p = multiprocessing.Process(target=approve_all_outer, args=(user.login,
                                                                user.password.hash,
                                                                cli_id,
                                                                obj_id,
                                                                client_id,
                                                                object_id))
    log.debug("Conversion started")
    p.start()
    request.response.status = HTTPOk.code
    return {"status": "Your dictionary is being approved."
                      " Wait 5-15 minutes."}
@view_config(route_name='create_entities_bulk', renderer='json', request_method='POST', permission='create')
def create_entities_bulk(request):
    try:
        variables = {'auth': authenticated_userid(request)}
        response = dict()
        req = request.json_body
        client = DBSession.query(Client).filter_by(id=variables['auth']).first()
        if not client:
            raise KeyError("Invalid client id (not registered on server). Try to logout and then login.")
        user = DBSession.query(User).filter_by(id=client.user_id).first()
        if not user:
            raise CommonException("This client id is orphaned. Try to logout and then login once more.")
        inserted_items = []
        for item in req:
            if item['level'] == 'leveloneentity':
                parent = DBSession.query(LexicalEntry).filter_by(client_id=item['parent_client_id'],
                                                                 object_id=item['parent_object_id']).first()
                client_id = variables['auth']
                if 'client_id' in item:
                    if check_client_id(authenticated=client.id, client_id=item['client_id']) or user.id == 1:
                        client_id = item['client_id']
                    else:
                        request.response.status = HTTPBadRequest.code
                        return {'error': 'client_id from another user'}
                entity = Entity(client_id=client_id,
                                object_id=item.get('object_id', None),
                                entity_type=item['entity_type'],
                                locale_id=item['locale_id'],
                                additional_metadata=item.get('additional_metadata'),
                                parent=parent)
                group = DBSession.query(Group).join(BaseGroup).filter(
                    BaseGroup.subject == 'lexical_entries_and_entities',
                    Group.subject_client_id == entity.parent.parent.client_id,
                    Group.subject_object_id == entity.parent.parent.object_id,
                    BaseGroup.action == 'create').one()
                if user.is_active and user in group.users:
                    entity.publishingentity.accepted = True
                upper_level = None
                if item.get('self_client_id') and item.get('self_object_id'):
                    upper_level = DBSession.query(Entity).filter_by(client_id=item['self_client_id'],
                                                                    object_id=item['self_object_id']).first()
                    if not upper_level:
                        return {'error': str("No such upper level in the system")}
                if upper_level:
                    entity.upper_level = upper_level
                filename = item.get('filename')
                real_location = None
                url = None
                tr_atom = DBSession.query(TranslationAtom).join(TranslationGist, and_(
                    TranslationAtom.locale_id == 2,
                    TranslationAtom.parent_client_id == TranslationGist.client_id,
                    TranslationAtom.parent_object_id == TranslationGist.object_id)).join(Field, and_(
                    TranslationGist.client_id == Field.data_type_translation_gist_client_id,
                    TranslationGist.object_id == Field.data_type_translation_gist_object_id)).filter(
                    Field.client_id == item['field_client_id'], Field.object_id == item['field_object_id']).first()
                data_type = tr_atom.content.lower()
                if data_type == 'image' or data_type == 'sound' or 'markup' in data_type:
                    real_location, url = create_object(request, item['content'], entity, data_type, filename)
                    entity.content = url
                    old_meta = entity.additional_metadata
                    need_hash = True
                    if old_meta:
                        if old_meta.get('hash'):
                            need_hash = False
                    if need_hash:
                        hash = hashlib.sha224(base64.urlsafe_b64decode(item['content'])).hexdigest()
                        hash_dict = {'hash': hash}
                        if old_meta:
                            old_meta.update(hash_dict)
                        else:
                            old_meta = hash_dict
                        entity.additional_metadata = old_meta
                    if 'markup' in data_type:
                        name = filename.split('.')
                        ext = name[len(name) - 1]
                        if ext.lower() == 'textgrid':
                            data_type = 'praat markup'
                        elif ext.lower() == 'eaf':
                            data_type = 'elan markup'
                        entity.additional_metadata['data_type'] = data_type
                elif data_type == 'link':
                    try:
                        entity.link_client_id = item['link_client_id']
                        entity.link_object_id = item['link_object_id']
                    except (KeyError, TypeError):
                        request.response.status = HTTPBadRequest.code
                        return {'Error': "The field is of link type. You should provide client_id and object id in the content"}
                else:
                    entity.content = item['content']
                # return None
                DBSession.add(entity)
                inserted_items.append({"client_id": entity.client_id, "object_id": entity.object_id})
        request.response.status = HTTPOk.code
        return inserted_items
    # except KeyError as e:
    #     request.response.status = HTTPBadRequest.code
    #     return {'error': str(e)}
    except IntegrityError as e:
        request.response.status = HTTPInternalServerError.code
        return {'error': str(e)}
    except CommonException as e:
        request.response.status = HTTPConflict.code
        return {'error': str(e)}
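The hashing branch of create_entities_bulk can be read in isolation: clients send file content as URL-safe base64, and the server records a SHA-224 digest of the decoded bytes in the entity's additional_metadata, skipping the work if a hash is already present. A standalone sketch of that logic (the helper name `content_hash_metadata` is hypothetical, not part of the codebase):

```python
import base64
import hashlib
from typing import Optional


def content_hash_metadata(content_b64: str, old_meta: Optional[dict]) -> dict:
    # If metadata already carries a hash, keep it untouched.
    if old_meta and old_meta.get('hash'):
        return old_meta
    # Otherwise hash the raw (decoded) bytes, as the view above does.
    digest = hashlib.sha224(base64.urlsafe_b64decode(content_b64)).hexdigest()
    hash_dict = {'hash': digest}
    if old_meta:
        old_meta.update(hash_dict)
        return old_meta
    return hash_dict
```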
@view_config(route_name='perspective', renderer='json', request_method='PUT', permission='edit')
def edit_perspective(request):  # tested & in docs
    try:
        response = dict()
        client_id = request.matchdict.get('perspective_client_id')
        object_id = request.matchdict.get('perspective_object_id')
        client = DBSession.query(Client).filter_by(id=request.authenticated_userid).first()
        if not client:
            raise KeyError("Invalid client id (not registered on server). Try to logout and then login.")
        parent_client_id = request.matchdict.get('dictionary_client_id')
        parent_object_id = request.matchdict.get('dictionary_object_id')
        parent = DBSession.query(Dictionary).filter_by(client_id=parent_client_id, object_id=parent_object_id).first()
        if not parent:
            request.response.status = HTTPNotFound.code
            return {'error': str("No such dictionary in the system")}
        perspective = DBSession.query(DictionaryPerspective).filter_by(client_id=client_id, object_id=object_id).first()
        if perspective:
            if not perspective.marked_for_deletion:
                if perspective.parent != parent:
                    request.response.status = HTTPNotFound.code
                    return {'error': str("No such pair of dictionary/perspective in the system")}
                req = request.json_body
                # TODO: Status 500 will be returned if arguments are invalid; add try/catch
                if 'translation_gist_client_id' in req:
                    perspective.translation_gist_client_id = req['translation_gist_client_id']
                if 'translation_gist_object_id' in req:
                    perspective.translation_gist_object_id = req['translation_gist_object_id']
                if 'parent_client_id' in req:
                    perspective.parent_client_id = req['parent_client_id']
                if 'parent_object_id' in req:
                    perspective.parent_object_id = req['parent_object_id']
                is_template = req.get('is_template')
                if is_template is not None:
                    perspective.is_template = is_template
                request.response.status = HTTPOk.code
                return response
        else:
            request.response.status = HTTPNotFound.code
            return {'error': str("No such perspective in the system")}
    except KeyError as e:
        request.response.status = HTTPBadRequest.code
        return {'error': str(e)}
@view_config(route_name='perspective', renderer='json', request_method='DELETE', permission='delete')
def delete_perspective(request):  # tested & in docs
    response = dict()
    client_id = request.matchdict.get('perspective_client_id')
    object_id = request.matchdict.get('perspective_object_id')
    parent_client_id = request.matchdict.get('dictionary_client_id')
    parent_object_id = request.matchdict.get('dictionary_object_id')
    parent = DBSession.query(Dictionary).filter_by(client_id=parent_client_id, object_id=parent_object_id).first()
    if not parent:
        request.response.status = HTTPNotFound.code
        return {'error': str("No such dictionary in the system")}
    perspective = DBSession.query(DictionaryPerspective).filter_by(client_id=client_id, object_id=object_id).first()
    if perspective:
        if not perspective.marked_for_deletion:
            if perspective.parent != parent:
                request.response.status = HTTPNotFound.code
                return {'error': str("No such pair of dictionary/perspective in the system")}
            if 'desktop' in request.registry.settings:
                real_delete_perspective(perspective, request.registry.settings)
            else:
                perspective.marked_for_deletion = True
                objecttoc = DBSession.query(ObjectTOC).filter_by(client_id=perspective.client_id,
                                                                 object_id=perspective.object_id).one()
                objecttoc.marked_for_deletion = True
            request.response.status = HTTPOk.code
            return response
    request.response.status = HTTPNotFound.code
    return {'error': str("No such perspective in the system")}
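delete_perspective above soft-deletes: instead of removing rows, it sets marked_for_deletion on both the perspective itself and its ObjectTOC entry, and readers filter on that flag. A minimal standalone sketch of the two-flag pattern (the `Record`/`soft_delete` names here are illustrative stand-ins, not the lingvodoc models):

```python
class Record:
    """Stand-in for any soft-deletable row keyed by (client_id, object_id)."""

    def __init__(self, client_id, object_id):
        self.client_id = client_id
        self.object_id = object_id
        self.marked_for_deletion = False


def soft_delete(perspective, toc_index):
    # Flag the object itself and its table-of-contents entry; both flags
    # must agree so that listings and lookups skip the object consistently.
    perspective.marked_for_deletion = True
    toc = toc_index[(perspective.client_id, perspective.object_id)]
    toc.marked_for_deletion = True
```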
# --- pymdown_env/__init__.py (repo: skirsten/pymdown-env, license: MIT) ---
from .envpreprocessor import *


# --- iqra/models/__init__.py (repo: nunenuh/crnn.pytorch, license: MIT) ---
from .crnn_v1 import OCRNet
from .crnn_v2 import TransformerOCRNet


# --- openrec/modules/interactions/__init__.py (repo: BoData-Bot/openrec, license: Apache-2.0) ---
from openrec.modules.interactions.interaction import Interaction
from openrec.modules.interactions.pairwise_log import PairwiseLog
from openrec.modules.interactions.pairwise_hinge import PairwiseHinge
from openrec.modules.interactions.pointwise_ge_ce import PointwiseGeCE
from openrec.modules.interactions.pointwise_mlp_ce import PointwiseMLPCE
from openrec.modules.interactions.pairwise_eu_dist import PairwiseEuDist
from openrec.modules.interactions.ns_eu_dist import NSEuDist
from openrec.modules.interactions.pointwise_ge_mlp_ce import PointwiseGeMLPCE
from openrec.modules.interactions.pointwise_mse import PointwiseMSE


# --- django_sphinx_sample/courselist/views.py (repo: takaharuy/django-sphinx-sample, license: MIT) ---
from django.http import HttpResponse


# Create your views here.
def index(request):
    return HttpResponse("Hello, World You're at the CourseList index")


# --- tests/blockchain/test_blockchain.py (repo: thinkboxx/bitchia-blockchain, license: Apache-2.0) ---
# flake8: noqa: F811, F401
import asyncio
import logging
import multiprocessing
import time
from dataclasses import replace
from secrets import token_bytes
import pytest
from blspy import AugSchemeMPL, G2Element
from clvm.casts import int_to_bytes
from bitchia.consensus.block_rewards import calculate_base_farmer_reward
from bitchia.consensus.blockchain import ReceiveBlockResult
from bitchia.consensus.coinbase import create_farmer_coin
from bitchia.consensus.pot_iterations import is_overflow_block
from bitchia.full_node.bundle_tools import detect_potential_template_generator
from bitchia.types.blockchain_format.classgroup import ClassgroupElement
from bitchia.types.blockchain_format.coin import Coin
from bitchia.types.blockchain_format.foliage import TransactionsInfo
from bitchia.types.blockchain_format.program import SerializedProgram
from bitchia.types.blockchain_format.sized_bytes import bytes32
from bitchia.types.blockchain_format.slots import InfusedChallengeChainSubSlot
from bitchia.types.blockchain_format.vdf import VDFInfo, VDFProof
from bitchia.types.condition_opcodes import ConditionOpcode
from bitchia.types.condition_with_args import ConditionWithArgs
from bitchia.types.end_of_slot_bundle import EndOfSubSlotBundle
from bitchia.types.full_block import FullBlock
from bitchia.types.spend_bundle import SpendBundle
from bitchia.types.unfinished_block import UnfinishedBlock
from tests.block_tools import BlockTools, get_vdf_info_and_proof
from bitchia.util.errors import Err
from bitchia.util.hash import std_hash
from bitchia.util.ints import uint8, uint64, uint32
from bitchia.util.merkle_set import MerkleSet
from bitchia.util.recursive_replace import recursive_replace
from tests.wallet_tools import WalletTool
from tests.core.fixtures import default_400_blocks # noqa: F401; noqa: F401
from tests.core.fixtures import default_1000_blocks # noqa: F401
from tests.core.fixtures import default_10000_blocks # noqa: F401
from tests.core.fixtures import default_10000_blocks_compact # noqa: F401
from tests.core.fixtures import empty_blockchain # noqa: F401
from tests.core.fixtures import create_blockchain
from tests.setup_nodes import bt, test_constants
log = logging.getLogger(__name__)
bad_element = ClassgroupElement.from_bytes(b"\x00")
@pytest.fixture(scope="session")
def event_loop():
    loop = asyncio.get_event_loop()
    yield loop
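Most negative tests below follow one pattern: corrupt exactly one nested field of an otherwise valid block with `recursive_replace` (from bitchia.util.recursive_replace) and assert that validation reports the matching `Err` code. A minimal illustrative version of that dotted-path replacement over frozen dataclasses (this sketch is not the library implementation and only handles attribute paths, not the streamable types used above):

```python
import dataclasses


def recursive_replace_sketch(root, path: str, new_value):
    # Split "a.b.c" into the head attribute and the remaining path, then
    # rebuild each immutable dataclass along the way with dataclasses.replace().
    head, _, rest = path.partition(".")
    if not rest:
        return dataclasses.replace(root, **{head: new_value})
    child = getattr(root, head)
    return dataclasses.replace(root, **{head: recursive_replace_sketch(child, rest, new_value)})


@dataclasses.dataclass(frozen=True)
class Inner:
    x: int


@dataclasses.dataclass(frozen=True)
class Outer:
    inner: Inner
    y: int
```

The original block stays untouched while a modified copy is produced, which is exactly what lets the tests reuse `block` after building each `block_bad` variant.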
class TestGenesisBlock:
    @pytest.mark.asyncio
    async def test_block_tools_proofs_400(self, default_400_blocks):
        vdf, proof = get_vdf_info_and_proof(
            test_constants, ClassgroupElement.get_default_element(), test_constants.GENESIS_CHALLENGE, uint64(231)
        )
        if proof.is_valid(test_constants, ClassgroupElement.get_default_element(), vdf) is False:
            raise Exception("invalid proof")

    @pytest.mark.asyncio
    async def test_block_tools_proofs_1000(self, default_1000_blocks):
        vdf, proof = get_vdf_info_and_proof(
            test_constants, ClassgroupElement.get_default_element(), test_constants.GENESIS_CHALLENGE, uint64(231)
        )
        if proof.is_valid(test_constants, ClassgroupElement.get_default_element(), vdf) is False:
            raise Exception("invalid proof")

    @pytest.mark.asyncio
    async def test_block_tools_proofs(self):
        vdf, proof = get_vdf_info_and_proof(
            test_constants, ClassgroupElement.get_default_element(), test_constants.GENESIS_CHALLENGE, uint64(231)
        )
        if proof.is_valid(test_constants, ClassgroupElement.get_default_element(), vdf) is False:
            raise Exception("invalid proof")

    @pytest.mark.asyncio
    async def test_non_overflow_genesis(self, empty_blockchain):
        assert empty_blockchain.get_peak() is None
        genesis = bt.get_consecutive_blocks(1, force_overflow=False)[0]
        result, err, _ = await empty_blockchain.receive_block(genesis)
        assert err is None
        assert result == ReceiveBlockResult.NEW_PEAK
        assert empty_blockchain.get_peak().height == 0

    @pytest.mark.asyncio
    async def test_overflow_genesis(self, empty_blockchain):
        genesis = bt.get_consecutive_blocks(1, force_overflow=True)[0]
        result, err, _ = await empty_blockchain.receive_block(genesis)
        assert err is None
        assert result == ReceiveBlockResult.NEW_PEAK

    @pytest.mark.asyncio
    async def test_genesis_empty_slots(self, empty_blockchain):
        genesis = bt.get_consecutive_blocks(1, force_overflow=False, skip_slots=30)[0]
        result, err, _ = await empty_blockchain.receive_block(genesis)
        assert err is None
        assert result == ReceiveBlockResult.NEW_PEAK

    @pytest.mark.asyncio
    async def test_overflow_genesis_empty_slots(self, empty_blockchain):
        genesis = bt.get_consecutive_blocks(1, force_overflow=True, skip_slots=3)[0]
        result, err, _ = await empty_blockchain.receive_block(genesis)
        assert err is None
        assert result == ReceiveBlockResult.NEW_PEAK

    @pytest.mark.asyncio
    async def test_genesis_validate_1(self, empty_blockchain):
        genesis = bt.get_consecutive_blocks(1, force_overflow=False)[0]
        bad_prev = bytes([1] * 32)
        genesis = recursive_replace(genesis, "foliage.prev_block_hash", bad_prev)
        result, err, _ = await empty_blockchain.receive_block(genesis)
        assert err == Err.INVALID_PREV_BLOCK_HASH
class TestBlockHeaderValidation:
    @pytest.mark.asyncio
    async def test_long_chain(self, empty_blockchain, default_1000_blocks):
        blocks = default_1000_blocks
        for block in blocks:
            if (
                len(block.finished_sub_slots) > 0
                and block.finished_sub_slots[0].challenge_chain.subepoch_summary_hash is not None
            ):
                # Sub/Epoch. Try using a bad ssi and difficulty to test 2m and 2n
                new_finished_ss = recursive_replace(
                    block.finished_sub_slots[0],
                    "challenge_chain.new_sub_slot_iters",
                    uint64(10000000),
                )
                block_bad = recursive_replace(
                    block, "finished_sub_slots", [new_finished_ss] + block.finished_sub_slots[1:]
                )
                result, err, _ = await empty_blockchain.receive_block(block_bad)
                assert err == Err.INVALID_NEW_SUB_SLOT_ITERS

                new_finished_ss_2 = recursive_replace(
                    block.finished_sub_slots[0],
                    "challenge_chain.new_difficulty",
                    uint64(10000000),
                )
                block_bad_2 = recursive_replace(
                    block, "finished_sub_slots", [new_finished_ss_2] + block.finished_sub_slots[1:]
                )
                result, err, _ = await empty_blockchain.receive_block(block_bad_2)
                assert err == Err.INVALID_NEW_DIFFICULTY

                # 3c
                new_finished_ss_3: EndOfSubSlotBundle = recursive_replace(
                    block.finished_sub_slots[0],
                    "challenge_chain.subepoch_summary_hash",
                    bytes([0] * 32),
                )
                new_finished_ss_3 = recursive_replace(
                    new_finished_ss_3,
                    "reward_chain.challenge_chain_sub_slot_hash",
                    new_finished_ss_3.challenge_chain.get_hash(),
                )
                block_bad_3 = recursive_replace(
                    block, "finished_sub_slots", [new_finished_ss_3] + block.finished_sub_slots[1:]
                )
                result, err, _ = await empty_blockchain.receive_block(block_bad_3)
                assert err == Err.INVALID_SUB_EPOCH_SUMMARY

                # 3d
                new_finished_ss_4 = recursive_replace(
                    block.finished_sub_slots[0],
                    "challenge_chain.subepoch_summary_hash",
                    None,
                )
                new_finished_ss_4 = recursive_replace(
                    new_finished_ss_4,
                    "reward_chain.challenge_chain_sub_slot_hash",
                    new_finished_ss_4.challenge_chain.get_hash(),
                )
                block_bad_4 = recursive_replace(
                    block, "finished_sub_slots", [new_finished_ss_4] + block.finished_sub_slots[1:]
                )
                result, err, _ = await empty_blockchain.receive_block(block_bad_4)
                assert err == Err.INVALID_SUB_EPOCH_SUMMARY or err == Err.INVALID_NEW_SUB_SLOT_ITERS

            result, err, _ = await empty_blockchain.receive_block(block)
            assert err is None
            assert result == ReceiveBlockResult.NEW_PEAK
            log.info(
                f"Added block {block.height} total iters {block.total_iters} "
                f"new slot? {len(block.finished_sub_slots)}"
            )
        assert empty_blockchain.get_peak().height == len(blocks) - 1
    @pytest.mark.asyncio
    async def test_unfinished_blocks(self, empty_blockchain):
        blockchain = empty_blockchain
        blocks = bt.get_consecutive_blocks(3)
        for block in blocks[:-1]:
            result, err, _ = await blockchain.receive_block(block)
            assert result == ReceiveBlockResult.NEW_PEAK
        block = blocks[-1]
        unf = UnfinishedBlock(
            block.finished_sub_slots,
            block.reward_chain_block.get_unfinished(),
            block.challenge_chain_sp_proof,
            block.reward_chain_sp_proof,
            block.foliage,
            block.foliage_transaction_block,
            block.transactions_info,
            block.transactions_generator,
            [],
        )
        validate_res = await blockchain.validate_unfinished_block(unf, False)
        err = validate_res.error
        assert err is None
        result, err, _ = await blockchain.receive_block(block)

        blocks = bt.get_consecutive_blocks(1, block_list_input=blocks, force_overflow=True)
        block = blocks[-1]
        unf = UnfinishedBlock(
            block.finished_sub_slots,
            block.reward_chain_block.get_unfinished(),
            block.challenge_chain_sp_proof,
            block.reward_chain_sp_proof,
            block.foliage,
            block.foliage_transaction_block,
            block.transactions_info,
            block.transactions_generator,
            [],
        )
        validate_res = await blockchain.validate_unfinished_block(unf, False)
        assert validate_res.error is None
    @pytest.mark.asyncio
    async def test_empty_genesis(self, empty_blockchain):
        blockchain = empty_blockchain
        for block in bt.get_consecutive_blocks(2, skip_slots=3):
            result, err, _ = await blockchain.receive_block(block)
            assert err is None
            assert result == ReceiveBlockResult.NEW_PEAK

    @pytest.mark.asyncio
    async def test_empty_slots_non_genesis(self, empty_blockchain):
        blockchain = empty_blockchain
        blocks = bt.get_consecutive_blocks(10)
        for block in blocks:
            result, err, _ = await blockchain.receive_block(block)
            assert err is None
            assert result == ReceiveBlockResult.NEW_PEAK

        blocks = bt.get_consecutive_blocks(10, skip_slots=2, block_list_input=blocks)
        for block in blocks[10:]:
            result, err, _ = await blockchain.receive_block(block)
            assert err is None
        assert blockchain.get_peak().height == 19
    @pytest.mark.asyncio
    async def test_one_sb_per_slot(self, empty_blockchain):
        blockchain = empty_blockchain
        num_blocks = 20
        blocks = []
        for i in range(num_blocks):
            blocks = bt.get_consecutive_blocks(1, block_list_input=blocks, skip_slots=1)
            result, err, _ = await blockchain.receive_block(blocks[-1])
            assert result == ReceiveBlockResult.NEW_PEAK
        assert blockchain.get_peak().height == num_blocks - 1

    @pytest.mark.asyncio
    async def test_all_overflow(self, empty_blockchain):
        blockchain = empty_blockchain
        num_rounds = 5
        blocks = []
        num_blocks = 0
        for i in range(1, num_rounds):
            num_blocks += i
            blocks = bt.get_consecutive_blocks(i, block_list_input=blocks, skip_slots=1, force_overflow=True)
            for block in blocks[-i:]:
                result, err, _ = await blockchain.receive_block(block)
                assert result == ReceiveBlockResult.NEW_PEAK
                assert err is None
        assert blockchain.get_peak().height == num_blocks - 1
    @pytest.mark.asyncio
    async def test_unf_block_overflow(self, empty_blockchain):
        blockchain = empty_blockchain

        blocks = []
        while True:
            # This creates an overflow block, then a normal block, and then an overflow in the next sub-slot
            # blocks = bt.get_consecutive_blocks(1, block_list_input=blocks, force_overflow=True)
            blocks = bt.get_consecutive_blocks(1, block_list_input=blocks)
            blocks = bt.get_consecutive_blocks(1, block_list_input=blocks, force_overflow=True)

            await blockchain.receive_block(blocks[-2])
            sb_1 = blockchain.block_record(blocks[-2].header_hash)

            sb_2_next_ss = blocks[-1].total_iters - blocks[-2].total_iters < sb_1.sub_slot_iters
            # We might not get a normal block for sb_2, and we might not get them in the right slots
            # So this while loop keeps trying
            if sb_1.overflow and sb_2_next_ss:
                block = blocks[-1]
                unf = UnfinishedBlock(
                    [],
                    block.reward_chain_block.get_unfinished(),
                    block.challenge_chain_sp_proof,
                    block.reward_chain_sp_proof,
                    block.foliage,
                    block.foliage_transaction_block,
                    block.transactions_info,
                    block.transactions_generator,
                    [],
                )
                validate_res = await blockchain.validate_unfinished_block(unf, skip_overflow_ss_validation=True)
                assert validate_res.error is None
                return None

            await blockchain.receive_block(blocks[-1])
    @pytest.mark.asyncio
    async def test_one_sb_per_two_slots(self, empty_blockchain):
        blockchain = empty_blockchain
        num_blocks = 20
        blocks = []
        for i in range(num_blocks):  # Same thing, but 2 sub-slots per block
            blocks = bt.get_consecutive_blocks(1, block_list_input=blocks, skip_slots=2)
            result, err, _ = await blockchain.receive_block(blocks[-1])
            assert result == ReceiveBlockResult.NEW_PEAK
        assert blockchain.get_peak().height == num_blocks - 1

    @pytest.mark.asyncio
    async def test_one_sb_per_five_slots(self, empty_blockchain):
        blockchain = empty_blockchain
        num_blocks = 10
        blocks = []
        for i in range(num_blocks):  # Same thing, but 5 sub-slots per block
            blocks = bt.get_consecutive_blocks(1, block_list_input=blocks, skip_slots=5)
            result, err, _ = await blockchain.receive_block(blocks[-1])
            assert result == ReceiveBlockResult.NEW_PEAK
        assert blockchain.get_peak().height == num_blocks - 1

    @pytest.mark.asyncio
    async def test_basic_chain_overflow(self, empty_blockchain):
        blocks = bt.get_consecutive_blocks(5, force_overflow=True)
        for block in blocks:
            result, err, _ = await empty_blockchain.receive_block(block)
            assert err is None
            assert result == ReceiveBlockResult.NEW_PEAK
        assert empty_blockchain.get_peak().height == len(blocks) - 1

    @pytest.mark.asyncio
    async def test_one_sb_per_two_slots_force_overflow(self, empty_blockchain):
        blockchain = empty_blockchain
        num_blocks = 10
        blocks = []
        for i in range(num_blocks):
            blocks = bt.get_consecutive_blocks(1, block_list_input=blocks, skip_slots=2, force_overflow=True)
            result, err, _ = await blockchain.receive_block(blocks[-1])
            assert err is None
            assert result == ReceiveBlockResult.NEW_PEAK
        assert blockchain.get_peak().height == num_blocks - 1
    @pytest.mark.asyncio
    async def test_invalid_prev(self, empty_blockchain):
        # 1
        blocks = bt.get_consecutive_blocks(2, force_overflow=False)
        assert (await empty_blockchain.receive_block(blocks[0]))[0] == ReceiveBlockResult.NEW_PEAK
        block_1_bad = recursive_replace(blocks[-1], "foliage.prev_block_hash", bytes([0] * 32))
        result, err, _ = await empty_blockchain.receive_block(block_1_bad)
        assert result == ReceiveBlockResult.DISCONNECTED_BLOCK

    @pytest.mark.asyncio
    async def test_invalid_pospace(self, empty_blockchain):
        # 2
        blocks = bt.get_consecutive_blocks(2, force_overflow=False)
        assert (await empty_blockchain.receive_block(blocks[0]))[0] == ReceiveBlockResult.NEW_PEAK
        block_1_bad = recursive_replace(blocks[-1], "reward_chain_block.proof_of_space.proof", bytes([0] * 32))
        result, err, _ = await empty_blockchain.receive_block(block_1_bad)
        assert result == ReceiveBlockResult.INVALID_BLOCK
        assert err == Err.INVALID_POSPACE

    @pytest.mark.asyncio
    async def test_invalid_sub_slot_challenge_hash_genesis(self, empty_blockchain):
        # 2a
        blocks = bt.get_consecutive_blocks(1, force_overflow=False, skip_slots=1)
        new_finished_ss = recursive_replace(
            blocks[0].finished_sub_slots[0],
            "challenge_chain.challenge_chain_end_of_slot_vdf.challenge",
            bytes([2] * 32),
        )
        block_0_bad = recursive_replace(
            blocks[0], "finished_sub_slots", [new_finished_ss] + blocks[0].finished_sub_slots[1:]
        )
        result, err, _ = await empty_blockchain.receive_block(block_0_bad)
        assert result == ReceiveBlockResult.INVALID_BLOCK
        assert err == Err.INVALID_PREV_CHALLENGE_SLOT_HASH

    @pytest.mark.asyncio
    async def test_invalid_sub_slot_challenge_hash_non_genesis(self, empty_blockchain):
        # 2b
        blocks = bt.get_consecutive_blocks(1, force_overflow=False, skip_slots=0)
        blocks = bt.get_consecutive_blocks(1, force_overflow=False, skip_slots=1, block_list_input=blocks)
        new_finished_ss = recursive_replace(
            blocks[1].finished_sub_slots[0],
            "challenge_chain.challenge_chain_end_of_slot_vdf.challenge",
            bytes([2] * 32),
        )
        block_1_bad = recursive_replace(
            blocks[1], "finished_sub_slots", [new_finished_ss] + blocks[1].finished_sub_slots[1:]
        )
        _, _, _ = await empty_blockchain.receive_block(blocks[0])
        result, err, _ = await empty_blockchain.receive_block(block_1_bad)
        assert result == ReceiveBlockResult.INVALID_BLOCK
        assert err == Err.INVALID_PREV_CHALLENGE_SLOT_HASH

    @pytest.mark.asyncio
    async def test_invalid_sub_slot_challenge_hash_empty_ss(self, empty_blockchain):
        # 2c
        blocks = bt.get_consecutive_blocks(1, force_overflow=False, skip_slots=0)
        blocks = bt.get_consecutive_blocks(1, force_overflow=False, skip_slots=2, block_list_input=blocks)
        new_finished_ss = recursive_replace(
            blocks[1].finished_sub_slots[-1],
            "challenge_chain.challenge_chain_end_of_slot_vdf.challenge",
            bytes([2] * 32),
        )
        block_1_bad = recursive_replace(
            blocks[1], "finished_sub_slots", blocks[1].finished_sub_slots[:-1] + [new_finished_ss]
        )
        _, _, _ = await empty_blockchain.receive_block(blocks[0])
        result, err, _ = await empty_blockchain.receive_block(block_1_bad)
        assert result == ReceiveBlockResult.INVALID_BLOCK
        assert err == Err.INVALID_PREV_CHALLENGE_SLOT_HASH
    @pytest.mark.asyncio
    async def test_genesis_no_icc(self, empty_blockchain):
        # 2d
        blocks = bt.get_consecutive_blocks(1, force_overflow=False, skip_slots=1)
        new_finished_ss = recursive_replace(
            blocks[0].finished_sub_slots[0],
            "infused_challenge_chain",
            InfusedChallengeChainSubSlot(
                VDFInfo(
                    bytes([0] * 32),
                    uint64(1200),
                    ClassgroupElement.get_default_element(),
                )
            ),
        )
        block_0_bad = recursive_replace(
            blocks[0], "finished_sub_slots", [new_finished_ss] + blocks[0].finished_sub_slots[1:]
        )
        result, err, _ = await empty_blockchain.receive_block(block_0_bad)
        assert result == ReceiveBlockResult.INVALID_BLOCK
        assert err == Err.SHOULD_NOT_HAVE_ICC

@pytest.mark.asyncio
async def test_invalid_icc_sub_slot_vdf(self):
bt_high_iters = BlockTools(
constants=test_constants.replace(SUB_SLOT_ITERS_STARTING=(2 ** 12), DIFFICULTY_STARTING=(2 ** 14))
)
bc1, connection, db_path = await create_blockchain(bt_high_iters.constants)
blocks = bt_high_iters.get_consecutive_blocks(10)
for block in blocks:
if len(block.finished_sub_slots) > 0 and block.finished_sub_slots[-1].infused_challenge_chain is not None:
# Bad iters
new_finished_ss = recursive_replace(
block.finished_sub_slots[-1],
"infused_challenge_chain",
InfusedChallengeChainSubSlot(
replace(
block.finished_sub_slots[
-1
].infused_challenge_chain.infused_challenge_chain_end_of_slot_vdf,
number_of_iterations=10000000,
)
),
)
block_bad = recursive_replace(
block, "finished_sub_slots", block.finished_sub_slots[:-1] + [new_finished_ss]
)
result, err, _ = await bc1.receive_block(block_bad)
assert err == Err.INVALID_ICC_EOS_VDF
# Bad output
new_finished_ss_2 = recursive_replace(
block.finished_sub_slots[-1],
"infused_challenge_chain",
InfusedChallengeChainSubSlot(
replace(
block.finished_sub_slots[
-1
].infused_challenge_chain.infused_challenge_chain_end_of_slot_vdf,
output=ClassgroupElement.get_default_element(),
)
),
)
log.warning(f"Proof: {block.finished_sub_slots[-1].proofs}")
block_bad_2 = recursive_replace(
block, "finished_sub_slots", block.finished_sub_slots[:-1] + [new_finished_ss_2]
)
result, err, _ = await bc1.receive_block(block_bad_2)
assert err == Err.INVALID_ICC_EOS_VDF
# Bad challenge hash
new_finished_ss_3 = recursive_replace(
block.finished_sub_slots[-1],
"infused_challenge_chain",
InfusedChallengeChainSubSlot(
replace(
block.finished_sub_slots[
-1
].infused_challenge_chain.infused_challenge_chain_end_of_slot_vdf,
challenge=bytes([0] * 32),
)
),
)
block_bad_3 = recursive_replace(
block, "finished_sub_slots", block.finished_sub_slots[:-1] + [new_finished_ss_3]
)
result, err, _ = await bc1.receive_block(block_bad_3)
assert err == Err.INVALID_ICC_EOS_VDF
# Bad proof
new_finished_ss_5 = recursive_replace(
block.finished_sub_slots[-1],
"proofs.infused_challenge_chain_slot_proof",
VDFProof(uint8(0), b"1239819023890", False),
)
block_bad_5 = recursive_replace(
block, "finished_sub_slots", block.finished_sub_slots[:-1] + [new_finished_ss_5]
)
result, err, _ = await bc1.receive_block(block_bad_5)
assert err == Err.INVALID_ICC_EOS_VDF
result, err, _ = await bc1.receive_block(block)
assert err is None
assert result == ReceiveBlockResult.NEW_PEAK
await connection.close()
bc1.shut_down()
db_path.unlink()

@pytest.mark.asyncio
async def test_invalid_icc_into_cc(self, empty_blockchain):
blockchain = empty_blockchain
blocks = bt.get_consecutive_blocks(1)
assert (await blockchain.receive_block(blocks[0]))[0] == ReceiveBlockResult.NEW_PEAK
case_1, case_2 = False, False
while not case_1 or not case_2:
blocks = bt.get_consecutive_blocks(1, block_list_input=blocks, skip_slots=1)
block = blocks[-1]
if len(block.finished_sub_slots) > 0 and block.finished_sub_slots[-1].infused_challenge_chain is not None:
if block.finished_sub_slots[-1].reward_chain.deficit == test_constants.MIN_BLOCKS_PER_CHALLENGE_BLOCK:
# 2g
case_1 = True
new_finished_ss = recursive_replace(
block.finished_sub_slots[-1],
"challenge_chain",
replace(
block.finished_sub_slots[-1].challenge_chain,
infused_challenge_chain_sub_slot_hash=bytes([1] * 32),
),
)
else:
# 2h
case_2 = True
new_finished_ss = recursive_replace(
block.finished_sub_slots[-1],
"challenge_chain",
replace(
block.finished_sub_slots[-1].challenge_chain,
infused_challenge_chain_sub_slot_hash=block.finished_sub_slots[
-1
].infused_challenge_chain.get_hash(),
),
)
block_bad = recursive_replace(
block, "finished_sub_slots", block.finished_sub_slots[:-1] + [new_finished_ss]
)
result, err, _ = await blockchain.receive_block(block_bad)
assert err == Err.INVALID_ICC_HASH_CC
# 2i
new_finished_ss_bad_rc = recursive_replace(
block.finished_sub_slots[-1],
"reward_chain",
replace(block.finished_sub_slots[-1].reward_chain, infused_challenge_chain_sub_slot_hash=None),
)
block_bad = recursive_replace(
block, "finished_sub_slots", block.finished_sub_slots[:-1] + [new_finished_ss_bad_rc]
)
result, err, _ = await blockchain.receive_block(block_bad)
assert err == Err.INVALID_ICC_HASH_RC
elif len(block.finished_sub_slots) > 0 and block.finished_sub_slots[-1].infused_challenge_chain is None:
# 2j
new_finished_ss_bad_cc = recursive_replace(
block.finished_sub_slots[-1],
"challenge_chain",
replace(
block.finished_sub_slots[-1].challenge_chain,
infused_challenge_chain_sub_slot_hash=bytes([1] * 32),
),
)
block_bad = recursive_replace(
block, "finished_sub_slots", block.finished_sub_slots[:-1] + [new_finished_ss_bad_cc]
)
result, err, _ = await blockchain.receive_block(block_bad)
assert err == Err.INVALID_ICC_HASH_CC
# 2k
new_finished_ss_bad_rc = recursive_replace(
block.finished_sub_slots[-1],
"reward_chain",
replace(
block.finished_sub_slots[-1].reward_chain, infused_challenge_chain_sub_slot_hash=bytes([1] * 32)
),
)
block_bad = recursive_replace(
block, "finished_sub_slots", block.finished_sub_slots[:-1] + [new_finished_ss_bad_rc]
)
result, err, _ = await blockchain.receive_block(block_bad)
assert err == Err.INVALID_ICC_HASH_RC
# Finally, add the block properly
result, err, _ = await blockchain.receive_block(block)
assert err is None
assert result == ReceiveBlockResult.NEW_PEAK

@pytest.mark.asyncio
async def test_empty_slot_no_ses(self, empty_blockchain):
# 2l
blockchain = empty_blockchain
blocks = bt.get_consecutive_blocks(1)
assert (await blockchain.receive_block(blocks[0]))[0] == ReceiveBlockResult.NEW_PEAK
blocks = bt.get_consecutive_blocks(1, block_list_input=blocks, skip_slots=4)
new_finished_ss = recursive_replace(
blocks[-1].finished_sub_slots[-1],
"challenge_chain",
replace(blocks[-1].finished_sub_slots[-1].challenge_chain, subepoch_summary_hash=std_hash(b"0")),
)
block_bad = recursive_replace(
blocks[-1], "finished_sub_slots", blocks[-1].finished_sub_slots[:-1] + [new_finished_ss]
)
result, err, _ = await blockchain.receive_block(block_bad)
assert err == Err.INVALID_SUB_EPOCH_SUMMARY_HASH

@pytest.mark.asyncio
async def test_empty_sub_slots_epoch(self, empty_blockchain):
# 2m
# Tests adding an empty sub-slot after the sub-epoch / epoch.
# Also tests an overflow block in the epoch
blocks_base = bt.get_consecutive_blocks(test_constants.EPOCH_BLOCKS)
blocks_1 = bt.get_consecutive_blocks(1, block_list_input=blocks_base, force_overflow=True)
blocks_2 = bt.get_consecutive_blocks(1, skip_slots=1, block_list_input=blocks_base, force_overflow=True)
blocks_3 = bt.get_consecutive_blocks(1, skip_slots=2, block_list_input=blocks_base, force_overflow=True)
blocks_4 = bt.get_consecutive_blocks(1, block_list_input=blocks_base)
for block in blocks_base:
result, err, _ = await empty_blockchain.receive_block(block)
assert err is None
assert result == ReceiveBlockResult.NEW_PEAK
for block in [blocks_1[-1], blocks_2[-1], blocks_3[-1], blocks_4[-1]]:
result, err, _ = await empty_blockchain.receive_block(block)
assert err is None

@pytest.mark.asyncio
async def test_wrong_cc_hash_rc(self, empty_blockchain):
# 2o
blockchain = empty_blockchain
blocks = bt.get_consecutive_blocks(1, skip_slots=1)
blocks = bt.get_consecutive_blocks(1, skip_slots=1, block_list_input=blocks)
assert (await blockchain.receive_block(blocks[0]))[0] == ReceiveBlockResult.NEW_PEAK
new_finished_ss = recursive_replace(
blocks[-1].finished_sub_slots[-1],
"reward_chain",
replace(blocks[-1].finished_sub_slots[-1].reward_chain, challenge_chain_sub_slot_hash=bytes([3] * 32)),
)
block_1_bad = recursive_replace(
blocks[-1], "finished_sub_slots", blocks[-1].finished_sub_slots[:-1] + [new_finished_ss]
)
result, err, _ = await blockchain.receive_block(block_1_bad)
assert result == ReceiveBlockResult.INVALID_BLOCK
assert err == Err.INVALID_CHALLENGE_SLOT_HASH_RC

@pytest.mark.asyncio
async def test_invalid_cc_sub_slot_vdf(self, empty_blockchain):
# 2q
blocks = bt.get_consecutive_blocks(10)
for block in blocks:
if len(block.finished_sub_slots):
# Bad iters
new_finished_ss = recursive_replace(
block.finished_sub_slots[-1],
"challenge_chain",
recursive_replace(
block.finished_sub_slots[-1].challenge_chain,
"challenge_chain_end_of_slot_vdf.number_of_iterations",
uint64(10000000),
),
)
new_finished_ss = recursive_replace(
new_finished_ss,
"reward_chain.challenge_chain_sub_slot_hash",
new_finished_ss.challenge_chain.get_hash(),
)
block_bad = recursive_replace(
block, "finished_sub_slots", block.finished_sub_slots[:-1] + [new_finished_ss]
)
result, err, _ = await empty_blockchain.receive_block(block_bad)
assert err == Err.INVALID_CC_EOS_VDF
# Bad output
new_finished_ss_2 = recursive_replace(
block.finished_sub_slots[-1],
"challenge_chain",
recursive_replace(
block.finished_sub_slots[-1].challenge_chain,
"challenge_chain_end_of_slot_vdf.output",
ClassgroupElement.get_default_element(),
),
)
new_finished_ss_2 = recursive_replace(
new_finished_ss_2,
"reward_chain.challenge_chain_sub_slot_hash",
new_finished_ss_2.challenge_chain.get_hash(),
)
block_bad_2 = recursive_replace(
block, "finished_sub_slots", block.finished_sub_slots[:-1] + [new_finished_ss_2]
)
result, err, _ = await empty_blockchain.receive_block(block_bad_2)
assert err == Err.INVALID_CC_EOS_VDF
# Bad challenge hash
new_finished_ss_3 = recursive_replace(
block.finished_sub_slots[-1],
"challenge_chain",
recursive_replace(
block.finished_sub_slots[-1].challenge_chain,
"challenge_chain_end_of_slot_vdf.challenge",
bytes([1] * 32),
),
)
new_finished_ss_3 = recursive_replace(
new_finished_ss_3,
"reward_chain.challenge_chain_sub_slot_hash",
new_finished_ss_3.challenge_chain.get_hash(),
)
block_bad_3 = recursive_replace(
block, "finished_sub_slots", block.finished_sub_slots[:-1] + [new_finished_ss_3]
)
result, err, _ = await empty_blockchain.receive_block(block_bad_3)
assert err in (Err.INVALID_CC_EOS_VDF, Err.INVALID_PREV_CHALLENGE_SLOT_HASH)
# Bad proof
new_finished_ss_5 = recursive_replace(
block.finished_sub_slots[-1],
"proofs.challenge_chain_slot_proof",
VDFProof(uint8(0), b"1239819023890", False),
)
block_bad_5 = recursive_replace(
block, "finished_sub_slots", block.finished_sub_slots[:-1] + [new_finished_ss_5]
)
result, err, _ = await empty_blockchain.receive_block(block_bad_5)
assert err == Err.INVALID_CC_EOS_VDF
result, err, _ = await empty_blockchain.receive_block(block)
assert err is None
assert result == ReceiveBlockResult.NEW_PEAK

@pytest.mark.asyncio
async def test_invalid_rc_sub_slot_vdf(self, empty_blockchain):
# 2p
blocks = bt.get_consecutive_blocks(10)
for block in blocks:
if len(block.finished_sub_slots):
# Bad iters
new_finished_ss = recursive_replace(
block.finished_sub_slots[-1],
"reward_chain",
recursive_replace(
block.finished_sub_slots[-1].reward_chain,
"end_of_slot_vdf.number_of_iterations",
uint64(10000000),
),
)
block_bad = recursive_replace(
block, "finished_sub_slots", block.finished_sub_slots[:-1] + [new_finished_ss]
)
result, err, _ = await empty_blockchain.receive_block(block_bad)
assert err == Err.INVALID_RC_EOS_VDF
# Bad output
new_finished_ss_2 = recursive_replace(
block.finished_sub_slots[-1],
"reward_chain",
recursive_replace(
block.finished_sub_slots[-1].reward_chain,
"end_of_slot_vdf.output",
ClassgroupElement.get_default_element(),
),
)
block_bad_2 = recursive_replace(
block, "finished_sub_slots", block.finished_sub_slots[:-1] + [new_finished_ss_2]
)
result, err, _ = await empty_blockchain.receive_block(block_bad_2)
assert err == Err.INVALID_RC_EOS_VDF
# Bad challenge hash
new_finished_ss_3 = recursive_replace(
block.finished_sub_slots[-1],
"reward_chain",
recursive_replace(
block.finished_sub_slots[-1].reward_chain,
"end_of_slot_vdf.challenge",
bytes([1] * 32),
),
)
block_bad_3 = recursive_replace(
block, "finished_sub_slots", block.finished_sub_slots[:-1] + [new_finished_ss_3]
)
result, err, _ = await empty_blockchain.receive_block(block_bad_3)
assert err == Err.INVALID_RC_EOS_VDF
# Bad proof
new_finished_ss_5 = recursive_replace(
block.finished_sub_slots[-1],
"proofs.reward_chain_slot_proof",
VDFProof(uint8(0), b"1239819023890", False),
)
block_bad_5 = recursive_replace(
block, "finished_sub_slots", block.finished_sub_slots[:-1] + [new_finished_ss_5]
)
result, err, _ = await empty_blockchain.receive_block(block_bad_5)
assert err == Err.INVALID_RC_EOS_VDF
result, err, _ = await empty_blockchain.receive_block(block)
assert err is None
assert result == ReceiveBlockResult.NEW_PEAK

@pytest.mark.asyncio
async def test_genesis_bad_deficit(self, empty_blockchain):
# 2r
block = bt.get_consecutive_blocks(1, skip_slots=2)[0]
new_finished_ss = recursive_replace(
block.finished_sub_slots[-1],
"reward_chain",
recursive_replace(
block.finished_sub_slots[-1].reward_chain,
"deficit",
test_constants.MIN_BLOCKS_PER_CHALLENGE_BLOCK - 1,
),
)
block_bad = recursive_replace(block, "finished_sub_slots", block.finished_sub_slots[:-1] + [new_finished_ss])
result, err, _ = await empty_blockchain.receive_block(block_bad)
assert err == Err.INVALID_DEFICIT

@pytest.mark.asyncio
async def test_reset_deficit(self, empty_blockchain):
# 2s, 2t
blockchain = empty_blockchain
blocks = bt.get_consecutive_blocks(2)
await empty_blockchain.receive_block(blocks[0])
await empty_blockchain.receive_block(blocks[1])
case_1, case_2 = False, False
while not case_1 or not case_2:
blocks = bt.get_consecutive_blocks(1, block_list_input=blocks, skip_slots=1)
if len(blocks[-1].finished_sub_slots) > 0:
new_finished_ss = recursive_replace(
blocks[-1].finished_sub_slots[-1],
"reward_chain",
recursive_replace(
blocks[-1].finished_sub_slots[-1].reward_chain,
"deficit",
uint8(0),
),
)
if blockchain.block_record(blocks[-2].header_hash).deficit == 0:
case_1 = True
else:
case_2 = True
block_bad = recursive_replace(
blocks[-1], "finished_sub_slots", blocks[-1].finished_sub_slots[:-1] + [new_finished_ss]
)
result, err, _ = await empty_blockchain.receive_block(block_bad)
assert err in (Err.INVALID_DEFICIT, Err.INVALID_ICC_HASH_CC)
result, err, _ = await empty_blockchain.receive_block(blocks[-1])
assert result == ReceiveBlockResult.NEW_PEAK

@pytest.mark.asyncio
async def test_genesis_has_ses(self, empty_blockchain):
# 3a
block = bt.get_consecutive_blocks(1, skip_slots=1)[0]
new_finished_ss = recursive_replace(
block.finished_sub_slots[0],
"challenge_chain",
recursive_replace(
block.finished_sub_slots[0].challenge_chain,
"subepoch_summary_hash",
bytes([0] * 32),
),
)
new_finished_ss = recursive_replace(
new_finished_ss,
"reward_chain",
replace(
new_finished_ss.reward_chain, challenge_chain_sub_slot_hash=new_finished_ss.challenge_chain.get_hash()
),
)
block_bad = recursive_replace(block, "finished_sub_slots", [new_finished_ss] + block.finished_sub_slots[1:])
result, err, _ = await empty_blockchain.receive_block(block_bad)
assert err == Err.INVALID_SUB_EPOCH_SUMMARY_HASH

@pytest.mark.asyncio
async def test_no_ses_if_no_se(self, empty_blockchain):
# 3b
blocks = bt.get_consecutive_blocks(1)
assert (await empty_blockchain.receive_block(blocks[0]))[0] == ReceiveBlockResult.NEW_PEAK
while True:
blocks = bt.get_consecutive_blocks(1, block_list_input=blocks)
if len(blocks[-1].finished_sub_slots) > 0:
new_finished_ss: EndOfSubSlotBundle = recursive_replace(
blocks[-1].finished_sub_slots[0],
"challenge_chain",
recursive_replace(
blocks[-1].finished_sub_slots[0].challenge_chain,
"subepoch_summary_hash",
bytes([0] * 32),
),
)
new_finished_ss = recursive_replace(
new_finished_ss,
"reward_chain",
replace(
new_finished_ss.reward_chain,
challenge_chain_sub_slot_hash=new_finished_ss.challenge_chain.get_hash(),
),
)
block_bad = recursive_replace(
blocks[-1], "finished_sub_slots", [new_finished_ss] + blocks[-1].finished_sub_slots[1:]
)
result, err, _ = await empty_blockchain.receive_block(block_bad)
assert err == Err.INVALID_SUB_EPOCH_SUMMARY_HASH
return None
await empty_blockchain.receive_block(blocks[-1])

@pytest.mark.asyncio
async def test_too_many_blocks(self, empty_blockchain):
# 4: TODO
pass

@pytest.mark.asyncio
async def test_bad_pos(self, empty_blockchain):
# 5
blocks = bt.get_consecutive_blocks(2)
assert (await empty_blockchain.receive_block(blocks[0]))[0] == ReceiveBlockResult.NEW_PEAK
block_bad = recursive_replace(blocks[-1], "reward_chain_block.proof_of_space.challenge", std_hash(b""))
assert (await empty_blockchain.receive_block(block_bad))[1] == Err.INVALID_POSPACE
block_bad = recursive_replace(
blocks[-1], "reward_chain_block.proof_of_space.pool_contract_puzzle_hash", std_hash(b"")
)
assert (await empty_blockchain.receive_block(block_bad))[1] == Err.INVALID_POSPACE
block_bad = recursive_replace(blocks[-1], "reward_chain_block.proof_of_space.size", 62)
assert (await empty_blockchain.receive_block(block_bad))[1] == Err.INVALID_POSPACE
block_bad = recursive_replace(
blocks[-1],
"reward_chain_block.proof_of_space.plot_public_key",
AugSchemeMPL.key_gen(std_hash(b"1231n")).get_g1(),
)
assert (await empty_blockchain.receive_block(block_bad))[1] == Err.INVALID_POSPACE
block_bad = recursive_replace(
blocks[-1],
"reward_chain_block.proof_of_space.size",
32,
)
assert (await empty_blockchain.receive_block(block_bad))[1] == Err.INVALID_POSPACE
block_bad = recursive_replace(
blocks[-1],
"reward_chain_block.proof_of_space.proof",
bytes([1] * int(blocks[-1].reward_chain_block.proof_of_space.size * 64 / 8)),
)
assert (await empty_blockchain.receive_block(block_bad))[1] == Err.INVALID_POSPACE
# TODO: test not passing the plot filter

@pytest.mark.asyncio
async def test_bad_signage_point_index(self, empty_blockchain):
# 6
blocks = bt.get_consecutive_blocks(2)
assert (await empty_blockchain.receive_block(blocks[0]))[0] == ReceiveBlockResult.NEW_PEAK
with pytest.raises(ValueError):
block_bad = recursive_replace(
blocks[-1], "reward_chain_block.signage_point_index", test_constants.NUM_SPS_SUB_SLOT
)
assert (await empty_blockchain.receive_block(block_bad))[1] == Err.INVALID_SP_INDEX
with pytest.raises(ValueError):
block_bad = recursive_replace(
blocks[-1], "reward_chain_block.signage_point_index", test_constants.NUM_SPS_SUB_SLOT + 1
)
assert (await empty_blockchain.receive_block(block_bad))[1] == Err.INVALID_SP_INDEX

@pytest.mark.asyncio
async def test_sp_0_no_sp(self, empty_blockchain):
# 7
blocks = []
case_1, case_2 = False, False
while not case_1 or not case_2:
blocks = bt.get_consecutive_blocks(1, block_list_input=blocks)
if blocks[-1].reward_chain_block.signage_point_index == 0:
case_1 = True
block_bad = recursive_replace(blocks[-1], "reward_chain_block.signage_point_index", uint8(1))
assert (await empty_blockchain.receive_block(block_bad))[1] == Err.INVALID_SP_INDEX
elif not is_overflow_block(test_constants, blocks[-1].reward_chain_block.signage_point_index):
case_2 = True
block_bad = recursive_replace(blocks[-1], "reward_chain_block.signage_point_index", uint8(0))
error_code = (await empty_blockchain.receive_block(block_bad))[1]
assert error_code in (Err.INVALID_SP_INDEX, Err.INVALID_POSPACE)
assert (await empty_blockchain.receive_block(blocks[-1]))[0] == ReceiveBlockResult.NEW_PEAK
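# Several tests above loop, extending the chain one block at a time, until every case
# of interest (here: signage point index 0 vs. a non-overflow index) has been hit at
# least once. That drive-until-all-cases pattern can be sketched generically
# (hypothetical helper and predicates, assuming nothing about the block tools):

```python
import random
from typing import Callable, List


def generate_until_all_cases(
    predicates: List[Callable[[int], bool]],
    make_item: Callable[[], int],
    limit: int = 1000,
) -> List[int]:
    """Produce items until every predicate has matched at least once
    (or `limit` items have been generated); return all items produced."""
    seen = [False] * len(predicates)
    items: List[int] = []
    while not all(seen) and len(items) < limit:
        item = make_item()
        items.append(item)
        for i, predicate in enumerate(predicates):
            if predicate(item):
                seen[i] = True
    return items


random.seed(0)
items = generate_until_all_cases(
    [lambda x: x % 2 == 0, lambda x: x % 2 == 1],  # the two "cases" to exercise
    lambda: random.randrange(100),
)
assert any(x % 2 == 0 for x in items) and any(x % 2 == 1 for x in items)
```

# The `limit` guard keeps the loop from running forever if a case is unreachable;
# the tests above rely instead on block generation eventually producing both cases.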

@pytest.mark.asyncio
async def test_epoch_overflows(self, empty_blockchain):
# 9. TODO. This is hard to test because it requires modifying the block tools to make these special blocks
pass

@pytest.mark.asyncio
async def test_bad_total_iters(self, empty_blockchain):
# 10
blocks = bt.get_consecutive_blocks(2)
assert (await empty_blockchain.receive_block(blocks[0]))[0] == ReceiveBlockResult.NEW_PEAK
block_bad = recursive_replace(
blocks[-1], "reward_chain_block.total_iters", blocks[-1].reward_chain_block.total_iters + 1
)
assert (await empty_blockchain.receive_block(block_bad))[1] == Err.INVALID_TOTAL_ITERS

@pytest.mark.asyncio
async def test_bad_rc_sp_vdf(self, empty_blockchain):
# 11
blocks = bt.get_consecutive_blocks(1)
assert (await empty_blockchain.receive_block(blocks[0]))[0] == ReceiveBlockResult.NEW_PEAK
while True:
blocks = bt.get_consecutive_blocks(1, block_list_input=blocks)
if blocks[-1].reward_chain_block.signage_point_index != 0:
block_bad = recursive_replace(
blocks[-1], "reward_chain_block.reward_chain_sp_vdf.challenge", std_hash(b"1")
)
assert (await empty_blockchain.receive_block(block_bad))[1] == Err.INVALID_RC_SP_VDF
block_bad = recursive_replace(
blocks[-1],
"reward_chain_block.reward_chain_sp_vdf.output",
bad_element,
)
assert (await empty_blockchain.receive_block(block_bad))[1] == Err.INVALID_RC_SP_VDF
block_bad = recursive_replace(
blocks[-1],
"reward_chain_block.reward_chain_sp_vdf.number_of_iterations",
uint64(1111111111111),
)
assert (await empty_blockchain.receive_block(block_bad))[1] == Err.INVALID_RC_SP_VDF
block_bad = recursive_replace(
blocks[-1],
"reward_chain_sp_proof",
VDFProof(uint8(0), std_hash(b""), False),
)
assert (await empty_blockchain.receive_block(block_bad))[1] == Err.INVALID_RC_SP_VDF
return None
assert (await empty_blockchain.receive_block(blocks[-1]))[0] == ReceiveBlockResult.NEW_PEAK

@pytest.mark.asyncio
async def test_bad_rc_sp_sig(self, empty_blockchain):
# 12
blocks = bt.get_consecutive_blocks(2)
assert (await empty_blockchain.receive_block(blocks[0]))[0] == ReceiveBlockResult.NEW_PEAK
block_bad = recursive_replace(blocks[-1], "reward_chain_block.reward_chain_sp_signature", G2Element.generator())
assert (await empty_blockchain.receive_block(block_bad))[1] == Err.INVALID_RC_SIGNATURE

@pytest.mark.asyncio
async def test_bad_cc_sp_vdf(self, empty_blockchain):
# 13. Note: does not validate fully due to proof of space being validated first
blocks = bt.get_consecutive_blocks(1)
assert (await empty_blockchain.receive_block(blocks[0]))[0] == ReceiveBlockResult.NEW_PEAK
while True:
blocks = bt.get_consecutive_blocks(1, block_list_input=blocks)
if blocks[-1].reward_chain_block.signage_point_index != 0:
block_bad = recursive_replace(
blocks[-1], "reward_chain_block.challenge_chain_sp_vdf.challenge", std_hash(b"1")
)
assert (await empty_blockchain.receive_block(block_bad))[0] == ReceiveBlockResult.INVALID_BLOCK
block_bad = recursive_replace(
blocks[-1],
"reward_chain_block.challenge_chain_sp_vdf.output",
bad_element,
)
assert (await empty_blockchain.receive_block(block_bad))[0] == ReceiveBlockResult.INVALID_BLOCK
block_bad = recursive_replace(
blocks[-1],
"reward_chain_block.challenge_chain_sp_vdf.number_of_iterations",
uint64(1111111111111),
)
assert (await empty_blockchain.receive_block(block_bad))[0] == ReceiveBlockResult.INVALID_BLOCK
block_bad = recursive_replace(
blocks[-1],
"challenge_chain_sp_proof",
VDFProof(uint8(0), std_hash(b""), False),
)
assert (await empty_blockchain.receive_block(block_bad))[1] == Err.INVALID_CC_SP_VDF
return None
assert (await empty_blockchain.receive_block(blocks[-1]))[0] == ReceiveBlockResult.NEW_PEAK

@pytest.mark.asyncio
async def test_bad_cc_sp_sig(self, empty_blockchain):
# 14
blocks = bt.get_consecutive_blocks(2)
assert (await empty_blockchain.receive_block(blocks[0]))[0] == ReceiveBlockResult.NEW_PEAK
block_bad = recursive_replace(
blocks[-1], "reward_chain_block.challenge_chain_sp_signature", G2Element.generator()
)
assert (await empty_blockchain.receive_block(block_bad))[1] == Err.INVALID_CC_SIGNATURE

@pytest.mark.asyncio
async def test_is_transaction_block(self, empty_blockchain):
# 15: TODO
pass

@pytest.mark.asyncio
async def test_bad_foliage_sb_sig(self, empty_blockchain):
# 16
blocks = bt.get_consecutive_blocks(2)
assert (await empty_blockchain.receive_block(blocks[0]))[0] == ReceiveBlockResult.NEW_PEAK
block_bad = recursive_replace(blocks[-1], "foliage.foliage_block_data_signature", G2Element.generator())
assert (await empty_blockchain.receive_block(block_bad))[1] == Err.INVALID_PLOT_SIGNATURE

@pytest.mark.asyncio
async def test_bad_foliage_transaction_block_sig(self, empty_blockchain):
# 17
blocks = bt.get_consecutive_blocks(1)
assert (await empty_blockchain.receive_block(blocks[0]))[0] == ReceiveBlockResult.NEW_PEAK
while True:
blocks = bt.get_consecutive_blocks(1, block_list_input=blocks)
if blocks[-1].foliage_transaction_block is not None:
block_bad = recursive_replace(
blocks[-1], "foliage.foliage_transaction_block_signature", G2Element.generator()
)
assert (await empty_blockchain.receive_block(block_bad))[1] == Err.INVALID_PLOT_SIGNATURE
return None
assert (await empty_blockchain.receive_block(blocks[-1]))[0] == ReceiveBlockResult.NEW_PEAK

@pytest.mark.asyncio
async def test_unfinished_reward_chain_sb_hash(self, empty_blockchain):
# 18
blocks = bt.get_consecutive_blocks(2)
assert (await empty_blockchain.receive_block(blocks[0]))[0] == ReceiveBlockResult.NEW_PEAK
block_bad: FullBlock = recursive_replace(
blocks[-1], "foliage.foliage_block_data.unfinished_reward_block_hash", std_hash(b"2")
)
new_m = block_bad.foliage.foliage_block_data.get_hash()
new_fsb_sig = bt.get_plot_signature(new_m, blocks[-1].reward_chain_block.proof_of_space.plot_public_key)
block_bad = recursive_replace(block_bad, "foliage.foliage_block_data_signature", new_fsb_sig)
assert (await empty_blockchain.receive_block(block_bad))[1] == Err.INVALID_URSB_HASH

@pytest.mark.asyncio
async def test_pool_target_height(self, empty_blockchain):
# 19
blocks = bt.get_consecutive_blocks(3)
assert (await empty_blockchain.receive_block(blocks[0]))[0] == ReceiveBlockResult.NEW_PEAK
assert (await empty_blockchain.receive_block(blocks[1]))[0] == ReceiveBlockResult.NEW_PEAK
block_bad: FullBlock = recursive_replace(blocks[-1], "foliage.foliage_block_data.pool_target.max_height", 1)
new_m = block_bad.foliage.foliage_block_data.get_hash()
new_fsb_sig = bt.get_plot_signature(new_m, blocks[-1].reward_chain_block.proof_of_space.plot_public_key)
block_bad = recursive_replace(block_bad, "foliage.foliage_block_data_signature", new_fsb_sig)
assert (await empty_blockchain.receive_block(block_bad))[1] == Err.OLD_POOL_TARGET

@pytest.mark.asyncio
async def test_pool_target_pre_farm(self, empty_blockchain):
# 20a
blocks = bt.get_consecutive_blocks(1)
block_bad: FullBlock = recursive_replace(
blocks[-1], "foliage.foliage_block_data.pool_target.puzzle_hash", std_hash(b"12")
)
new_m = block_bad.foliage.foliage_block_data.get_hash()
new_fsb_sig = bt.get_plot_signature(new_m, blocks[-1].reward_chain_block.proof_of_space.plot_public_key)
block_bad = recursive_replace(block_bad, "foliage.foliage_block_data_signature", new_fsb_sig)
assert (await empty_blockchain.receive_block(block_bad))[1] == Err.INVALID_PREFARM

@pytest.mark.asyncio
async def test_pool_target_signature(self, empty_blockchain):
# 20b
blocks_initial = bt.get_consecutive_blocks(2)
assert (await empty_blockchain.receive_block(blocks_initial[0]))[0] == ReceiveBlockResult.NEW_PEAK
assert (await empty_blockchain.receive_block(blocks_initial[1]))[0] == ReceiveBlockResult.NEW_PEAK
attempts = 0
while True:
# Go until we get a block that has a pool pk, as opposed to a pool contract
blocks = bt.get_consecutive_blocks(
1, blocks_initial, seed=std_hash(attempts.to_bytes(4, byteorder="big", signed=False))
)
if blocks[-1].foliage.foliage_block_data.pool_signature is not None:
block_bad: FullBlock = recursive_replace(
blocks[-1], "foliage.foliage_block_data.pool_signature", G2Element.generator()
)
new_m = block_bad.foliage.foliage_block_data.get_hash()
new_fsb_sig = bt.get_plot_signature(new_m, blocks[-1].reward_chain_block.proof_of_space.plot_public_key)
block_bad = recursive_replace(block_bad, "foliage.foliage_block_data_signature", new_fsb_sig)
assert (await empty_blockchain.receive_block(block_bad))[1] == Err.INVALID_POOL_SIGNATURE
return None
attempts += 1

@pytest.mark.asyncio
async def test_pool_target_contract(self, empty_blockchain):
# 20c invalid pool target with contract
blocks_initial = bt.get_consecutive_blocks(2)
assert (await empty_blockchain.receive_block(blocks_initial[0]))[0] == ReceiveBlockResult.NEW_PEAK
assert (await empty_blockchain.receive_block(blocks_initial[1]))[0] == ReceiveBlockResult.NEW_PEAK
attempts = 0
while True:
# Go until we get a block that has a pool contract, as opposed to a pool pk
blocks = bt.get_consecutive_blocks(
1, blocks_initial, seed=std_hash(attempts.to_bytes(4, byteorder="big", signed=False))
)
if blocks[-1].foliage.foliage_block_data.pool_signature is None:
block_bad: FullBlock = recursive_replace(
blocks[-1], "foliage.foliage_block_data.pool_target.puzzle_hash", bytes32(token_bytes(32))
)
new_m = block_bad.foliage.foliage_block_data.get_hash()
new_fsb_sig = bt.get_plot_signature(new_m, blocks[-1].reward_chain_block.proof_of_space.plot_public_key)
block_bad = recursive_replace(block_bad, "foliage.foliage_block_data_signature", new_fsb_sig)
assert (await empty_blockchain.receive_block(block_bad))[1] == Err.INVALID_POOL_TARGET
return None
attempts += 1

@pytest.mark.asyncio
async def test_foliage_data_presence(self, empty_blockchain):
# 22
blocks = bt.get_consecutive_blocks(1)
assert (await empty_blockchain.receive_block(blocks[0]))[0] == ReceiveBlockResult.NEW_PEAK
case_1, case_2 = False, False
while not case_1 or not case_2:
blocks = bt.get_consecutive_blocks(1, block_list_input=blocks)
if blocks[-1].foliage_transaction_block is not None:
case_1 = True
block_bad: FullBlock = recursive_replace(blocks[-1], "foliage.foliage_transaction_block_hash", None)
else:
case_2 = True
block_bad: FullBlock = recursive_replace(
blocks[-1], "foliage.foliage_transaction_block_hash", std_hash(b"")
)
err_code = (await empty_blockchain.receive_block(block_bad))[1]
assert err_code in (Err.INVALID_FOLIAGE_BLOCK_PRESENCE, Err.INVALID_IS_TRANSACTION_BLOCK)
await empty_blockchain.receive_block(blocks[-1])

@pytest.mark.asyncio
async def test_foliage_transaction_block_hash(self, empty_blockchain):
# 23
blocks = bt.get_consecutive_blocks(1)
assert (await empty_blockchain.receive_block(blocks[0]))[0] == ReceiveBlockResult.NEW_PEAK
case_1, case_2 = False, False
while not case_1 or not case_2:
blocks = bt.get_consecutive_blocks(1, block_list_input=blocks)
if blocks[-1].foliage_transaction_block is not None:
block_bad: FullBlock = recursive_replace(
blocks[-1], "foliage.foliage_transaction_block_hash", std_hash(b"2")
)
new_m = block_bad.foliage.foliage_transaction_block_hash
new_fbh_sig = bt.get_plot_signature(new_m, blocks[-1].reward_chain_block.proof_of_space.plot_public_key)
block_bad = recursive_replace(block_bad, "foliage.foliage_transaction_block_signature", new_fbh_sig)
assert (await empty_blockchain.receive_block(block_bad))[1] == Err.INVALID_FOLIAGE_BLOCK_HASH
return None
await empty_blockchain.receive_block(blocks[-1])

@pytest.mark.asyncio
async def test_genesis_bad_prev_block(self, empty_blockchain):
# 24a
blocks = bt.get_consecutive_blocks(1)
block_bad: FullBlock = recursive_replace(
blocks[-1], "foliage_transaction_block.prev_transaction_block_hash", std_hash(b"2")
)
block_bad: FullBlock = recursive_replace(
block_bad, "foliage.foliage_transaction_block_hash", block_bad.foliage_transaction_block.get_hash()
)
new_m = block_bad.foliage.foliage_transaction_block_hash
new_fbh_sig = bt.get_plot_signature(new_m, blocks[-1].reward_chain_block.proof_of_space.plot_public_key)
block_bad = recursive_replace(block_bad, "foliage.foliage_transaction_block_signature", new_fbh_sig)
assert (await empty_blockchain.receive_block(block_bad))[1] == Err.INVALID_PREV_BLOCK_HASH

@pytest.mark.asyncio
async def test_bad_prev_block_non_genesis(self, empty_blockchain):
# 24b
blocks = bt.get_consecutive_blocks(1)
assert (await empty_blockchain.receive_block(blocks[0]))[0] == ReceiveBlockResult.NEW_PEAK
while True:
blocks = bt.get_consecutive_blocks(1, block_list_input=blocks)
if blocks[-1].foliage_transaction_block is not None:
block_bad: FullBlock = recursive_replace(
blocks[-1], "foliage_transaction_block.prev_transaction_block_hash", std_hash(b"2")
)
block_bad: FullBlock = recursive_replace(
block_bad, "foliage.foliage_transaction_block_hash", block_bad.foliage_transaction_block.get_hash()
)
new_m = block_bad.foliage.foliage_transaction_block_hash
new_fbh_sig = bt.get_plot_signature(new_m, blocks[-1].reward_chain_block.proof_of_space.plot_public_key)
block_bad = recursive_replace(block_bad, "foliage.foliage_transaction_block_signature", new_fbh_sig)
assert (await empty_blockchain.receive_block(block_bad))[1] == Err.INVALID_PREV_BLOCK_HASH
return None
await empty_blockchain.receive_block(blocks[-1])
    @pytest.mark.asyncio
    async def test_bad_filter_hash(self, empty_blockchain):
        # 25
        blocks = bt.get_consecutive_blocks(1)
        assert (await empty_blockchain.receive_block(blocks[0]))[0] == ReceiveBlockResult.NEW_PEAK
        while True:
            blocks = bt.get_consecutive_blocks(1, block_list_input=blocks)
            if blocks[-1].foliage_transaction_block is not None:
                block_bad: FullBlock = recursive_replace(
                    blocks[-1], "foliage_transaction_block.filter_hash", std_hash(b"2")
                )
                block_bad: FullBlock = recursive_replace(
                    block_bad, "foliage.foliage_transaction_block_hash", block_bad.foliage_transaction_block.get_hash()
                )
                new_m = block_bad.foliage.foliage_transaction_block_hash
                new_fbh_sig = bt.get_plot_signature(new_m, blocks[-1].reward_chain_block.proof_of_space.plot_public_key)
                block_bad = recursive_replace(block_bad, "foliage.foliage_transaction_block_signature", new_fbh_sig)
                assert (await empty_blockchain.receive_block(block_bad))[1] == Err.INVALID_TRANSACTIONS_FILTER_HASH
                return None
            await empty_blockchain.receive_block(blocks[-1])

    @pytest.mark.asyncio
    async def test_bad_timestamp(self, empty_blockchain):
        # 26
        blocks = bt.get_consecutive_blocks(1)
        assert (await empty_blockchain.receive_block(blocks[0]))[0] == ReceiveBlockResult.NEW_PEAK
        while True:
            blocks = bt.get_consecutive_blocks(1, block_list_input=blocks)
            if blocks[-1].foliage_transaction_block is not None:
                block_bad: FullBlock = recursive_replace(
                    blocks[-1],
                    "foliage_transaction_block.timestamp",
                    blocks[0].foliage_transaction_block.timestamp - 10,
                )
                block_bad: FullBlock = recursive_replace(
                    block_bad, "foliage.foliage_transaction_block_hash", block_bad.foliage_transaction_block.get_hash()
                )
                new_m = block_bad.foliage.foliage_transaction_block_hash
                new_fbh_sig = bt.get_plot_signature(new_m, blocks[-1].reward_chain_block.proof_of_space.plot_public_key)
                block_bad = recursive_replace(block_bad, "foliage.foliage_transaction_block_signature", new_fbh_sig)
                assert (await empty_blockchain.receive_block(block_bad))[1] == Err.TIMESTAMP_TOO_FAR_IN_PAST

                block_bad: FullBlock = recursive_replace(
                    blocks[-1],
                    "foliage_transaction_block.timestamp",
                    blocks[0].foliage_transaction_block.timestamp,
                )
                block_bad: FullBlock = recursive_replace(
                    block_bad, "foliage.foliage_transaction_block_hash", block_bad.foliage_transaction_block.get_hash()
                )
                new_m = block_bad.foliage.foliage_transaction_block_hash
                new_fbh_sig = bt.get_plot_signature(new_m, blocks[-1].reward_chain_block.proof_of_space.plot_public_key)
                block_bad = recursive_replace(block_bad, "foliage.foliage_transaction_block_signature", new_fbh_sig)
                assert (await empty_blockchain.receive_block(block_bad))[1] == Err.TIMESTAMP_TOO_FAR_IN_PAST

                block_bad: FullBlock = recursive_replace(
                    blocks[-1],
                    "foliage_transaction_block.timestamp",
                    blocks[0].foliage_transaction_block.timestamp + 10000000,
                )
                block_bad: FullBlock = recursive_replace(
                    block_bad, "foliage.foliage_transaction_block_hash", block_bad.foliage_transaction_block.get_hash()
                )
                new_m = block_bad.foliage.foliage_transaction_block_hash
                new_fbh_sig = bt.get_plot_signature(new_m, blocks[-1].reward_chain_block.proof_of_space.plot_public_key)
                block_bad = recursive_replace(block_bad, "foliage.foliage_transaction_block_signature", new_fbh_sig)
                assert (await empty_blockchain.receive_block(block_bad))[1] == Err.TIMESTAMP_TOO_FAR_IN_FUTURE
                return None
            await empty_blockchain.receive_block(blocks[-1])
    @pytest.mark.asyncio
    async def test_height(self, empty_blockchain):
        # 27
        blocks = bt.get_consecutive_blocks(2)
        assert (await empty_blockchain.receive_block(blocks[0]))[0] == ReceiveBlockResult.NEW_PEAK
        block_bad: FullBlock = recursive_replace(blocks[-1], "reward_chain_block.height", 2)
        assert (await empty_blockchain.receive_block(block_bad))[1] == Err.INVALID_HEIGHT

    @pytest.mark.asyncio
    async def test_height_genesis(self, empty_blockchain):
        # 27
        blocks = bt.get_consecutive_blocks(1)
        block_bad: FullBlock = recursive_replace(blocks[-1], "reward_chain_block.height", 1)
        assert (await empty_blockchain.receive_block(block_bad))[1] == Err.INVALID_PREV_BLOCK_HASH

    @pytest.mark.asyncio
    async def test_weight(self, empty_blockchain):
        # 28
        blocks = bt.get_consecutive_blocks(2)
        assert (await empty_blockchain.receive_block(blocks[0]))[0] == ReceiveBlockResult.NEW_PEAK
        block_bad: FullBlock = recursive_replace(blocks[-1], "reward_chain_block.weight", 22131)
        assert (await empty_blockchain.receive_block(block_bad))[1] == Err.INVALID_WEIGHT

    @pytest.mark.asyncio
    async def test_weight_genesis(self, empty_blockchain):
        # 28
        blocks = bt.get_consecutive_blocks(1)
        block_bad: FullBlock = recursive_replace(blocks[-1], "reward_chain_block.weight", 0)
        assert (await empty_blockchain.receive_block(block_bad))[1] == Err.INVALID_WEIGHT

    @pytest.mark.asyncio
    async def test_bad_cc_ip_vdf(self, empty_blockchain):
        # 29
        blocks = bt.get_consecutive_blocks(1)
        assert (await empty_blockchain.receive_block(blocks[0]))[0] == ReceiveBlockResult.NEW_PEAK
        blocks = bt.get_consecutive_blocks(1, block_list_input=blocks)
        block_bad = recursive_replace(blocks[-1], "reward_chain_block.challenge_chain_ip_vdf.challenge", std_hash(b"1"))
        assert (await empty_blockchain.receive_block(block_bad))[1] == Err.INVALID_CC_IP_VDF
        block_bad = recursive_replace(
            blocks[-1],
            "reward_chain_block.challenge_chain_ip_vdf.output",
            bad_element,
        )
        assert (await empty_blockchain.receive_block(block_bad))[1] == Err.INVALID_CC_IP_VDF
        block_bad = recursive_replace(
            blocks[-1],
            "reward_chain_block.challenge_chain_ip_vdf.number_of_iterations",
            uint64(1111111111111),
        )
        assert (await empty_blockchain.receive_block(block_bad))[1] == Err.INVALID_CC_IP_VDF
        block_bad = recursive_replace(
            blocks[-1],
            "challenge_chain_ip_proof",
            VDFProof(uint8(0), std_hash(b""), False),
        )
        assert (await empty_blockchain.receive_block(block_bad))[1] == Err.INVALID_CC_IP_VDF
    @pytest.mark.asyncio
    async def test_bad_rc_ip_vdf(self, empty_blockchain):
        # 30
        blocks = bt.get_consecutive_blocks(1)
        assert (await empty_blockchain.receive_block(blocks[0]))[0] == ReceiveBlockResult.NEW_PEAK
        blocks = bt.get_consecutive_blocks(1, block_list_input=blocks)
        block_bad = recursive_replace(blocks[-1], "reward_chain_block.reward_chain_ip_vdf.challenge", std_hash(b"1"))
        assert (await empty_blockchain.receive_block(block_bad))[1] == Err.INVALID_RC_IP_VDF
        block_bad = recursive_replace(
            blocks[-1],
            "reward_chain_block.reward_chain_ip_vdf.output",
            bad_element,
        )
        assert (await empty_blockchain.receive_block(block_bad))[1] == Err.INVALID_RC_IP_VDF
        block_bad = recursive_replace(
            blocks[-1],
            "reward_chain_block.reward_chain_ip_vdf.number_of_iterations",
            uint64(1111111111111),
        )
        assert (await empty_blockchain.receive_block(block_bad))[1] == Err.INVALID_RC_IP_VDF
        block_bad = recursive_replace(
            blocks[-1],
            "reward_chain_ip_proof",
            VDFProof(uint8(0), std_hash(b""), False),
        )
        assert (await empty_blockchain.receive_block(block_bad))[1] == Err.INVALID_RC_IP_VDF

    @pytest.mark.asyncio
    async def test_bad_icc_ip_vdf(self, empty_blockchain):
        # 31
        blocks = bt.get_consecutive_blocks(1)
        assert (await empty_blockchain.receive_block(blocks[0]))[0] == ReceiveBlockResult.NEW_PEAK
        blocks = bt.get_consecutive_blocks(1, block_list_input=blocks)
        block_bad = recursive_replace(
            blocks[-1], "reward_chain_block.infused_challenge_chain_ip_vdf.challenge", std_hash(b"1")
        )
        assert (await empty_blockchain.receive_block(block_bad))[1] == Err.INVALID_ICC_VDF
        block_bad = recursive_replace(
            blocks[-1],
            "reward_chain_block.infused_challenge_chain_ip_vdf.output",
            bad_element,
        )
        assert (await empty_blockchain.receive_block(block_bad))[1] == Err.INVALID_ICC_VDF
        block_bad = recursive_replace(
            blocks[-1],
            "reward_chain_block.infused_challenge_chain_ip_vdf.number_of_iterations",
            uint64(1111111111111),
        )
        assert (await empty_blockchain.receive_block(block_bad))[1] == Err.INVALID_ICC_VDF
        block_bad = recursive_replace(
            blocks[-1],
            "infused_challenge_chain_ip_proof",
            VDFProof(uint8(0), std_hash(b""), False),
        )
        assert (await empty_blockchain.receive_block(block_bad))[1] == Err.INVALID_ICC_VDF

    @pytest.mark.asyncio
    async def test_reward_block_hash(self, empty_blockchain):
        # 32
        blocks = bt.get_consecutive_blocks(2)
        assert (await empty_blockchain.receive_block(blocks[0]))[0] == ReceiveBlockResult.NEW_PEAK
        block_bad: FullBlock = recursive_replace(blocks[-1], "foliage.reward_block_hash", std_hash(b""))
        assert (await empty_blockchain.receive_block(block_bad))[1] == Err.INVALID_REWARD_BLOCK_HASH

    @pytest.mark.asyncio
    async def test_reward_block_hash_2(self, empty_blockchain):
        # 33
        blocks = bt.get_consecutive_blocks(1)
        block_bad: FullBlock = recursive_replace(blocks[0], "reward_chain_block.is_transaction_block", False)
        block_bad: FullBlock = recursive_replace(
            block_bad, "foliage.reward_block_hash", block_bad.reward_chain_block.get_hash()
        )
        assert (await empty_blockchain.receive_block(block_bad))[1] == Err.INVALID_FOLIAGE_BLOCK_PRESENCE
        assert (await empty_blockchain.receive_block(blocks[0]))[0] == ReceiveBlockResult.NEW_PEAK

        # Test one which should not be a tx block
        while True:
            blocks = bt.get_consecutive_blocks(1, block_list_input=blocks)
            if not blocks[-1].is_transaction_block():
                block_bad: FullBlock = recursive_replace(blocks[-1], "reward_chain_block.is_transaction_block", True)
                block_bad: FullBlock = recursive_replace(
                    block_bad, "foliage.reward_block_hash", block_bad.reward_chain_block.get_hash()
                )
                assert (await empty_blockchain.receive_block(block_bad))[1] == Err.INVALID_FOLIAGE_BLOCK_PRESENCE
                return None
            assert (await empty_blockchain.receive_block(blocks[-1]))[0] == ReceiveBlockResult.NEW_PEAK
class TestPreValidation:
    @pytest.mark.asyncio
    async def test_pre_validation_fails_bad_blocks(self, empty_blockchain):
        blocks = bt.get_consecutive_blocks(2)
        assert (await empty_blockchain.receive_block(blocks[0]))[0] == ReceiveBlockResult.NEW_PEAK
        block_bad = recursive_replace(
            blocks[-1], "reward_chain_block.total_iters", blocks[-1].reward_chain_block.total_iters + 1
        )
        res = await empty_blockchain.pre_validate_blocks_multiprocessing([blocks[0], block_bad], {})
        assert res[0].error is None
        assert res[1].error is not None

    @pytest.mark.asyncio
    async def test_pre_validation(self, empty_blockchain, default_1000_blocks):
        blocks = default_1000_blocks[:100]
        start = time.time()
        n_at_a_time = min(multiprocessing.cpu_count(), 32)
        times_pv = []
        times_rb = []
        for i in range(0, len(blocks), n_at_a_time):
            end_i = min(i + n_at_a_time, len(blocks))
            blocks_to_validate = blocks[i:end_i]
            start_pv = time.time()
            res = await empty_blockchain.pre_validate_blocks_multiprocessing(blocks_to_validate, {})
            end_pv = time.time()
            times_pv.append(end_pv - start_pv)
            assert res is not None
            for n in range(end_i - i):
                assert res[n] is not None
                assert res[n].error is None
                block = blocks_to_validate[n]
                start_rb = time.time()
                result, err, _ = await empty_blockchain.receive_block(block, res[n])
                end_rb = time.time()
                times_rb.append(end_rb - start_rb)
                assert err is None
                assert result == ReceiveBlockResult.NEW_PEAK
                log.info(
                    f"Added block {block.height} total iters {block.total_iters} "
                    f"new slot? {len(block.finished_sub_slots)}, time {end_rb - start_rb}"
                )
        end = time.time()
        log.info(f"Total time: {end - start} seconds")
        log.info(f"Average pv: {sum(times_pv)/(len(blocks)/n_at_a_time)}")
        log.info(f"Average rb: {sum(times_rb)/(len(blocks))}")
class TestBodyValidation:
    @pytest.mark.asyncio
    async def test_not_tx_block_but_has_data(self, empty_blockchain):
        # 1
        b = empty_blockchain
        blocks = bt.get_consecutive_blocks(1)
        while blocks[-1].foliage_transaction_block is not None:
            assert (await b.receive_block(blocks[-1]))[0] == ReceiveBlockResult.NEW_PEAK
            blocks = bt.get_consecutive_blocks(1, block_list_input=blocks)
        original_block: FullBlock = blocks[-1]

        block = recursive_replace(original_block, "transactions_generator", SerializedProgram())
        assert (await b.receive_block(block))[1] == Err.NOT_BLOCK_BUT_HAS_DATA

        h = std_hash(b"")
        i = uint64(1)
        block = recursive_replace(
            original_block,
            "transactions_info",
            TransactionsInfo(h, h, G2Element(), uint64(1), uint64(1), []),
        )
        assert (await b.receive_block(block))[1] == Err.NOT_BLOCK_BUT_HAS_DATA

        block = recursive_replace(original_block, "transactions_generator_ref_list", [i])
        assert (await b.receive_block(block))[1] == Err.NOT_BLOCK_BUT_HAS_DATA

    @pytest.mark.asyncio
    async def test_tx_block_missing_data(self, empty_blockchain):
        # 2
        b = empty_blockchain
        blocks = bt.get_consecutive_blocks(2, guarantee_transaction_block=True)
        assert (await b.receive_block(blocks[0]))[0] == ReceiveBlockResult.NEW_PEAK
        block = recursive_replace(
            blocks[-1],
            "foliage_transaction_block",
            None,
        )
        err = (await b.receive_block(block))[1]
        assert err == Err.IS_TRANSACTION_BLOCK_BUT_NO_DATA or err == Err.INVALID_FOLIAGE_BLOCK_PRESENCE

        block = recursive_replace(
            blocks[-1],
            "transactions_info",
            None,
        )
        try:
            err = (await b.receive_block(block))[1]
        except AssertionError:
            return None
        assert err == Err.IS_TRANSACTION_BLOCK_BUT_NO_DATA or err == Err.INVALID_FOLIAGE_BLOCK_PRESENCE

    @pytest.mark.asyncio
    async def test_invalid_transactions_info_hash(self, empty_blockchain):
        # 3
        b = empty_blockchain
        blocks = bt.get_consecutive_blocks(2, guarantee_transaction_block=True)
        assert (await b.receive_block(blocks[0]))[0] == ReceiveBlockResult.NEW_PEAK
        h = std_hash(b"")
        block = recursive_replace(
            blocks[-1],
            "foliage_transaction_block.transactions_info_hash",
            h,
        )
        block = recursive_replace(
            block, "foliage.foliage_transaction_block_hash", std_hash(block.foliage_transaction_block)
        )
        new_m = block.foliage.foliage_transaction_block_hash
        new_fsb_sig = bt.get_plot_signature(new_m, blocks[-1].reward_chain_block.proof_of_space.plot_public_key)
        block = recursive_replace(block, "foliage.foliage_transaction_block_signature", new_fsb_sig)
        err = (await b.receive_block(block))[1]
        assert err == Err.INVALID_TRANSACTIONS_INFO_HASH

    @pytest.mark.asyncio
    async def test_invalid_transactions_block_hash(self, empty_blockchain):
        # 4
        b = empty_blockchain
        blocks = bt.get_consecutive_blocks(2, guarantee_transaction_block=True)
        assert (await b.receive_block(blocks[0]))[0] == ReceiveBlockResult.NEW_PEAK
        h = std_hash(b"")
        block = recursive_replace(blocks[-1], "foliage.foliage_transaction_block_hash", h)
        new_m = block.foliage.foliage_transaction_block_hash
        new_fsb_sig = bt.get_plot_signature(new_m, blocks[-1].reward_chain_block.proof_of_space.plot_public_key)
        block = recursive_replace(block, "foliage.foliage_transaction_block_signature", new_fsb_sig)
        err = (await b.receive_block(block))[1]
        assert err == Err.INVALID_FOLIAGE_BLOCK_HASH
    @pytest.mark.asyncio
    async def test_invalid_reward_claims(self, empty_blockchain):
        # 5
        b = empty_blockchain
        blocks = bt.get_consecutive_blocks(2, guarantee_transaction_block=True)
        assert (await b.receive_block(blocks[0]))[0] == ReceiveBlockResult.NEW_PEAK
        block: FullBlock = blocks[-1]

        # Too few
        too_few_reward_claims = block.transactions_info.reward_claims_incorporated[:-1]
        block_2: FullBlock = recursive_replace(
            block, "transactions_info.reward_claims_incorporated", too_few_reward_claims
        )
        block_2 = recursive_replace(
            block_2, "foliage_transaction_block.transactions_info_hash", block_2.transactions_info.get_hash()
        )
        block_2 = recursive_replace(
            block_2, "foliage.foliage_transaction_block_hash", block_2.foliage_transaction_block.get_hash()
        )
        new_m = block_2.foliage.foliage_transaction_block_hash
        new_fsb_sig = bt.get_plot_signature(new_m, block.reward_chain_block.proof_of_space.plot_public_key)
        block_2 = recursive_replace(block_2, "foliage.foliage_transaction_block_signature", new_fsb_sig)
        err = (await b.receive_block(block_2))[1]
        assert err == Err.INVALID_REWARD_COINS

        # Too many
        h = std_hash(b"")
        too_many_reward_claims = block.transactions_info.reward_claims_incorporated + [
            Coin(h, h, too_few_reward_claims[0].amount)
        ]
        block_2 = recursive_replace(block, "transactions_info.reward_claims_incorporated", too_many_reward_claims)
        block_2 = recursive_replace(
            block_2, "foliage_transaction_block.transactions_info_hash", block_2.transactions_info.get_hash()
        )
        block_2 = recursive_replace(
            block_2, "foliage.foliage_transaction_block_hash", block_2.foliage_transaction_block.get_hash()
        )
        new_m = block_2.foliage.foliage_transaction_block_hash
        new_fsb_sig = bt.get_plot_signature(new_m, block.reward_chain_block.proof_of_space.plot_public_key)
        block_2 = recursive_replace(block_2, "foliage.foliage_transaction_block_signature", new_fsb_sig)
        err = (await b.receive_block(block_2))[1]
        assert err == Err.INVALID_REWARD_COINS

        # Duplicates
        duplicate_reward_claims = block.transactions_info.reward_claims_incorporated + [
            block.transactions_info.reward_claims_incorporated[-1]
        ]
        block_2 = recursive_replace(block, "transactions_info.reward_claims_incorporated", duplicate_reward_claims)
        block_2 = recursive_replace(
            block_2, "foliage_transaction_block.transactions_info_hash", block_2.transactions_info.get_hash()
        )
        block_2 = recursive_replace(
            block_2, "foliage.foliage_transaction_block_hash", block_2.foliage_transaction_block.get_hash()
        )
        new_m = block_2.foliage.foliage_transaction_block_hash
        new_fsb_sig = bt.get_plot_signature(new_m, block.reward_chain_block.proof_of_space.plot_public_key)
        block_2 = recursive_replace(block_2, "foliage.foliage_transaction_block_signature", new_fsb_sig)
        err = (await b.receive_block(block_2))[1]
        assert err == Err.INVALID_REWARD_COINS

    @pytest.mark.asyncio
    async def test_initial_freeze(self, empty_blockchain):
        # 6
        b = empty_blockchain
        blocks = bt.get_consecutive_blocks(
            3,
            guarantee_transaction_block=True,
            pool_reward_puzzle_hash=bt.pool_ph,
            farmer_reward_puzzle_hash=bt.pool_ph,
            genesis_timestamp=time.time() - 1000,
        )
        assert (await b.receive_block(blocks[0]))[0] == ReceiveBlockResult.NEW_PEAK
        assert (await b.receive_block(blocks[1]))[0] == ReceiveBlockResult.NEW_PEAK
        assert (await b.receive_block(blocks[2]))[0] == ReceiveBlockResult.NEW_PEAK
        wt: WalletTool = bt.get_pool_wallet_tool()
        tx: SpendBundle = wt.generate_signed_transaction(
            10, wt.get_new_puzzlehash(), list(blocks[2].get_included_reward_coins())[0]
        )
        blocks = bt.get_consecutive_blocks(
            1,
            block_list_input=blocks,
            guarantee_transaction_block=True,
            transaction_data=tx,
        )
        err = (await b.receive_block(blocks[-1]))[1]
        assert err == Err.INITIAL_TRANSACTION_FREEZE
    @pytest.mark.asyncio
    async def test_invalid_transactions_generator_hash(self, empty_blockchain):
        # 7
        b = empty_blockchain
        blocks = bt.get_consecutive_blocks(2, guarantee_transaction_block=True)
        assert (await b.receive_block(blocks[0]))[0] == ReceiveBlockResult.NEW_PEAK

        # No tx should have all zeroes
        block: FullBlock = blocks[-1]
        block_2 = recursive_replace(block, "transactions_info.generator_root", bytes([1] * 32))
        block_2 = recursive_replace(
            block_2, "foliage_transaction_block.transactions_info_hash", block_2.transactions_info.get_hash()
        )
        block_2 = recursive_replace(
            block_2, "foliage.foliage_transaction_block_hash", block_2.foliage_transaction_block.get_hash()
        )
        new_m = block_2.foliage.foliage_transaction_block_hash
        new_fsb_sig = bt.get_plot_signature(new_m, block.reward_chain_block.proof_of_space.plot_public_key)
        block_2 = recursive_replace(block_2, "foliage.foliage_transaction_block_signature", new_fsb_sig)
        err = (await b.receive_block(block_2))[1]
        assert err == Err.INVALID_TRANSACTIONS_GENERATOR_HASH

        assert (await b.receive_block(blocks[1]))[0] == ReceiveBlockResult.NEW_PEAK
        blocks = bt.get_consecutive_blocks(
            2,
            block_list_input=blocks,
            guarantee_transaction_block=True,
            farmer_reward_puzzle_hash=bt.pool_ph,
            pool_reward_puzzle_hash=bt.pool_ph,
        )
        assert (await b.receive_block(blocks[2]))[0] == ReceiveBlockResult.NEW_PEAK
        assert (await b.receive_block(blocks[3]))[0] == ReceiveBlockResult.NEW_PEAK
        wt: WalletTool = bt.get_pool_wallet_tool()
        tx: SpendBundle = wt.generate_signed_transaction(
            10, wt.get_new_puzzlehash(), list(blocks[-1].get_included_reward_coins())[0]
        )
        blocks = bt.get_consecutive_blocks(
            1, block_list_input=blocks, guarantee_transaction_block=True, transaction_data=tx
        )

        # Non empty generator hash must be correct
        block = blocks[-1]
        block_2 = recursive_replace(block, "transactions_info.generator_root", bytes([0] * 32))
        block_2 = recursive_replace(
            block_2, "foliage_transaction_block.transactions_info_hash", block_2.transactions_info.get_hash()
        )
        block_2 = recursive_replace(
            block_2, "foliage.foliage_transaction_block_hash", block_2.foliage_transaction_block.get_hash()
        )
        new_m = block_2.foliage.foliage_transaction_block_hash
        new_fsb_sig = bt.get_plot_signature(new_m, block.reward_chain_block.proof_of_space.plot_public_key)
        block_2 = recursive_replace(block_2, "foliage.foliage_transaction_block_signature", new_fsb_sig)
        err = (await b.receive_block(block_2))[1]
        assert err == Err.INVALID_TRANSACTIONS_GENERATOR_HASH
    @pytest.mark.asyncio
    async def test_invalid_transactions_ref_list(self, empty_blockchain):
        # No generator should have [1]s for the root
        b = empty_blockchain
        blocks = bt.get_consecutive_blocks(
            3,
            guarantee_transaction_block=True,
            farmer_reward_puzzle_hash=bt.pool_ph,
            pool_reward_puzzle_hash=bt.pool_ph,
        )
        assert (await b.receive_block(blocks[0]))[0] == ReceiveBlockResult.NEW_PEAK
        assert (await b.receive_block(blocks[1]))[0] == ReceiveBlockResult.NEW_PEAK
        block: FullBlock = blocks[-1]
        block_2 = recursive_replace(block, "transactions_info.generator_refs_root", bytes([0] * 32))
        block_2 = recursive_replace(
            block_2, "foliage_transaction_block.transactions_info_hash", block_2.transactions_info.get_hash()
        )
        block_2 = recursive_replace(
            block_2, "foliage.foliage_transaction_block_hash", block_2.foliage_transaction_block.get_hash()
        )
        new_m = block_2.foliage.foliage_transaction_block_hash
        new_fsb_sig = bt.get_plot_signature(new_m, block.reward_chain_block.proof_of_space.plot_public_key)
        block_2 = recursive_replace(block_2, "foliage.foliage_transaction_block_signature", new_fsb_sig)
        err = (await b.receive_block(block_2))[1]
        assert err == Err.INVALID_TRANSACTIONS_GENERATOR_REFS_ROOT

        # No generator should have no refs list
        block_2 = recursive_replace(block, "transactions_generator_ref_list", [uint32(0)])
        err = (await b.receive_block(block_2))[1]
        assert err == Err.INVALID_TRANSACTIONS_GENERATOR_REFS_ROOT

        # Hash should be correct when there is a ref list
        assert (await b.receive_block(blocks[-1]))[0] == ReceiveBlockResult.NEW_PEAK
        wt: WalletTool = bt.get_pool_wallet_tool()
        tx: SpendBundle = wt.generate_signed_transaction(
            10, wt.get_new_puzzlehash(), list(blocks[-1].get_included_reward_coins())[0]
        )
        blocks = bt.get_consecutive_blocks(5, block_list_input=blocks, guarantee_transaction_block=False)
        for block in blocks[-5:]:
            assert (await b.receive_block(block))[0] == ReceiveBlockResult.NEW_PEAK
        blocks = bt.get_consecutive_blocks(
            1, block_list_input=blocks, guarantee_transaction_block=True, transaction_data=tx
        )
        assert (await b.receive_block(blocks[-1]))[0] == ReceiveBlockResult.NEW_PEAK
        generator_arg = detect_potential_template_generator(blocks[-1].height, blocks[-1].transactions_generator)
        assert generator_arg is not None
        blocks = bt.get_consecutive_blocks(
            1,
            block_list_input=blocks,
            guarantee_transaction_block=True,
            transaction_data=tx,
            previous_generator=generator_arg,
        )
        block = blocks[-1]
        assert len(block.transactions_generator_ref_list) > 0

        block_2 = recursive_replace(block, "transactions_info.generator_refs_root", bytes([1] * 32))
        block_2 = recursive_replace(
            block_2, "foliage_transaction_block.transactions_info_hash", block_2.transactions_info.get_hash()
        )
        block_2 = recursive_replace(
            block_2, "foliage.foliage_transaction_block_hash", block_2.foliage_transaction_block.get_hash()
        )
        new_m = block_2.foliage.foliage_transaction_block_hash
        new_fsb_sig = bt.get_plot_signature(new_m, block.reward_chain_block.proof_of_space.plot_public_key)
        block_2 = recursive_replace(block_2, "foliage.foliage_transaction_block_signature", new_fsb_sig)
        err = (await b.receive_block(block_2))[1]
        assert err == Err.INVALID_TRANSACTIONS_GENERATOR_REFS_ROOT

        # Too many heights
        block_2 = recursive_replace(block, "transactions_generator_ref_list", [block.height - 2, block.height - 1])
        err = (await b.receive_block(block_2))[1]
        assert err == Err.GENERATOR_REF_HAS_NO_GENERATOR
        assert (await b.pre_validate_blocks_multiprocessing([block_2], {})) is None

        # Not tx block
        for h in range(0, block.height - 1):
            block_2 = recursive_replace(block, "transactions_generator_ref_list", [h])
            err = (await b.receive_block(block_2))[1]
            assert err == Err.GENERATOR_REF_HAS_NO_GENERATOR or err == Err.INVALID_TRANSACTIONS_GENERATOR_REFS_ROOT
            assert (await b.pre_validate_blocks_multiprocessing([block_2], {})) is None
    @pytest.mark.asyncio
    async def test_cost_exceeds_max(self, empty_blockchain):
        # 7
        b = empty_blockchain
        blocks = bt.get_consecutive_blocks(
            3,
            guarantee_transaction_block=True,
            farmer_reward_puzzle_hash=bt.pool_ph,
            pool_reward_puzzle_hash=bt.pool_ph,
        )
        assert (await b.receive_block(blocks[0]))[0] == ReceiveBlockResult.NEW_PEAK
        assert (await b.receive_block(blocks[1]))[0] == ReceiveBlockResult.NEW_PEAK
        assert (await b.receive_block(blocks[2]))[0] == ReceiveBlockResult.NEW_PEAK

        wt: WalletTool = bt.get_pool_wallet_tool()
        condition_dict = {ConditionOpcode.CREATE_COIN: []}
        for i in range(7000):
            output = ConditionWithArgs(ConditionOpcode.CREATE_COIN, [bt.pool_ph, int_to_bytes(i)])
            condition_dict[ConditionOpcode.CREATE_COIN].append(output)
        tx: SpendBundle = wt.generate_signed_transaction(
            10, wt.get_new_puzzlehash(), list(blocks[-1].get_included_reward_coins())[0], condition_dic=condition_dict
        )
        blocks = bt.get_consecutive_blocks(
            1, block_list_input=blocks, guarantee_transaction_block=True, transaction_data=tx
        )
        assert (await b.receive_block(blocks[-1]))[1] == Err.INVALID_BLOCK_COST

    @pytest.mark.asyncio
    async def test_clvm_must_not_fail(self, empty_blockchain):
        # 8
        pass
    @pytest.mark.asyncio
    async def test_invalid_cost_in_block(self, empty_blockchain):
        # 9
        b = empty_blockchain
        blocks = bt.get_consecutive_blocks(
            3,
            guarantee_transaction_block=True,
            farmer_reward_puzzle_hash=bt.pool_ph,
            pool_reward_puzzle_hash=bt.pool_ph,
        )
        assert (await b.receive_block(blocks[0]))[0] == ReceiveBlockResult.NEW_PEAK
        assert (await b.receive_block(blocks[1]))[0] == ReceiveBlockResult.NEW_PEAK
        assert (await b.receive_block(blocks[2]))[0] == ReceiveBlockResult.NEW_PEAK

        wt: WalletTool = bt.get_pool_wallet_tool()
        tx: SpendBundle = wt.generate_signed_transaction(
            10, wt.get_new_puzzlehash(), list(blocks[-1].get_included_reward_coins())[0]
        )
        blocks = bt.get_consecutive_blocks(
            1, block_list_input=blocks, guarantee_transaction_block=True, transaction_data=tx
        )
        block: FullBlock = blocks[-1]

        # zero
        block_2: FullBlock = recursive_replace(block, "transactions_info.cost", uint64(0))
        block_2 = recursive_replace(
            block_2, "foliage_transaction_block.transactions_info_hash", block_2.transactions_info.get_hash()
        )
        block_2 = recursive_replace(
            block_2, "foliage.foliage_transaction_block_hash", block_2.foliage_transaction_block.get_hash()
        )
        new_m = block_2.foliage.foliage_transaction_block_hash
        new_fsb_sig = bt.get_plot_signature(new_m, block.reward_chain_block.proof_of_space.plot_public_key)
        block_2 = recursive_replace(block_2, "foliage.foliage_transaction_block_signature", new_fsb_sig)
        err = (await b.receive_block(block_2))[1]
        assert err == Err.INVALID_BLOCK_COST

        # too low
        block_2: FullBlock = recursive_replace(block, "transactions_info.cost", uint64(1))
        block_2 = recursive_replace(
            block_2, "foliage_transaction_block.transactions_info_hash", block_2.transactions_info.get_hash()
        )
        block_2 = recursive_replace(
            block_2, "foliage.foliage_transaction_block_hash", block_2.foliage_transaction_block.get_hash()
        )
        new_m = block_2.foliage.foliage_transaction_block_hash
        new_fsb_sig = bt.get_plot_signature(new_m, block.reward_chain_block.proof_of_space.plot_public_key)
        block_2 = recursive_replace(block_2, "foliage.foliage_transaction_block_signature", new_fsb_sig)
        err = (await b.receive_block(block_2))[1]
        assert err == Err.INVALID_BLOCK_COST

        # too high
        block_2: FullBlock = recursive_replace(block, "transactions_info.cost", uint64(1000000))
        block_2 = recursive_replace(
            block_2, "foliage_transaction_block.transactions_info_hash", block_2.transactions_info.get_hash()
        )
        block_2 = recursive_replace(
            block_2, "foliage.foliage_transaction_block_hash", block_2.foliage_transaction_block.get_hash()
        )
        new_m = block_2.foliage.foliage_transaction_block_hash
        new_fsb_sig = bt.get_plot_signature(new_m, block.reward_chain_block.proof_of_space.plot_public_key)
        block_2 = recursive_replace(block_2, "foliage.foliage_transaction_block_signature", new_fsb_sig)
        err = (await b.receive_block(block_2))[1]
        # when the CLVM program exceeds cost during execution, it will fail with
        # a general runtime error
        assert err == Err.GENERATOR_RUNTIME_ERROR

        err = (await b.receive_block(block))[1]
        assert err is None
    @pytest.mark.asyncio
    async def test_max_coin_amount(self):
        # 10
        # TODO: fix, this is not reaching validation. Because we can't create a block with such amounts due to uint64
        # limit in Coin
        pass
        #
        # new_test_constants = test_constants.replace(
        #     **{"GENESIS_PRE_FARM_POOL_PUZZLE_HASH": bt.pool_ph, "GENESIS_PRE_FARM_FARMER_PUZZLE_HASH": bt.pool_ph}
        # )
        # b, connection, db_path = await create_blockchain(new_test_constants)
        # bt_2 = BlockTools(new_test_constants)
        # bt_2.constants = bt_2.constants.replace(
        #     **{"GENESIS_PRE_FARM_POOL_PUZZLE_HASH": bt.pool_ph, "GENESIS_PRE_FARM_FARMER_PUZZLE_HASH": bt.pool_ph}
        # )
        # blocks = bt_2.get_consecutive_blocks(
        #     3,
        #     guarantee_transaction_block=True,
        #     farmer_reward_puzzle_hash=bt.pool_ph,
        #     pool_reward_puzzle_hash=bt.pool_ph,
        # )
        # assert (await b.receive_block(blocks[0]))[0] == ReceiveBlockResult.NEW_PEAK
        # assert (await b.receive_block(blocks[1]))[0] == ReceiveBlockResult.NEW_PEAK
        # assert (await b.receive_block(blocks[2]))[0] == ReceiveBlockResult.NEW_PEAK
        #
        # wt: WalletTool = bt_2.get_pool_wallet_tool()
        #
        # condition_dict = {ConditionOpcode.CREATE_COIN: []}
        # output = ConditionWithArgs(ConditionOpcode.CREATE_COIN, [bt_2.pool_ph, int_to_bytes(2 ** 64)])
        # condition_dict[ConditionOpcode.CREATE_COIN].append(output)
        #
        # tx: SpendBundle = wt.generate_signed_transaction_multiple_coins(
        #     10,
        #     wt.get_new_puzzlehash(),
        #     list(blocks[1].get_included_reward_coins()),
        #     condition_dic=condition_dict,
        # )
        # try:
        #     blocks = bt_2.get_consecutive_blocks(
        #         1, block_list_input=blocks, guarantee_transaction_block=True, transaction_data=tx
        #     )
        #     assert False
        # except Exception as e:
        #     pass
        # await connection.close()
        # b.shut_down()
        # db_path.unlink()
    @pytest.mark.asyncio
    async def test_invalid_merkle_roots(self, empty_blockchain):
        # 11
        b = empty_blockchain
        blocks = bt.get_consecutive_blocks(
            3,
            guarantee_transaction_block=True,
            farmer_reward_puzzle_hash=bt.pool_ph,
            pool_reward_puzzle_hash=bt.pool_ph,
        )
        assert (await b.receive_block(blocks[0]))[0] == ReceiveBlockResult.NEW_PEAK
        assert (await b.receive_block(blocks[1]))[0] == ReceiveBlockResult.NEW_PEAK
        assert (await b.receive_block(blocks[2]))[0] == ReceiveBlockResult.NEW_PEAK

        wt: WalletTool = bt.get_pool_wallet_tool()
        tx: SpendBundle = wt.generate_signed_transaction(
            10, wt.get_new_puzzlehash(), list(blocks[-1].get_included_reward_coins())[0]
        )
        blocks = bt.get_consecutive_blocks(
            1, block_list_input=blocks, guarantee_transaction_block=True, transaction_data=tx
        )
        block: FullBlock = blocks[-1]
        merkle_set = MerkleSet()

        # additions
        block_2 = recursive_replace(block, "foliage_transaction_block.additions_root", merkle_set.get_root())
        block_2 = recursive_replace(
            block_2, "foliage.foliage_transaction_block_hash", block_2.foliage_transaction_block.get_hash()
        )
        new_m = block_2.foliage.foliage_transaction_block_hash
        new_fsb_sig = bt.get_plot_signature(new_m, block.reward_chain_block.proof_of_space.plot_public_key)
        block_2 = recursive_replace(block_2, "foliage.foliage_transaction_block_signature", new_fsb_sig)
        err = (await b.receive_block(block_2))[1]
        assert err == Err.BAD_ADDITION_ROOT

        # removals
        merkle_set.add_already_hashed(std_hash(b"1"))
        block_2 = recursive_replace(block, "foliage_transaction_block.removals_root", merkle_set.get_root())
        block_2 = recursive_replace(
            block_2, "foliage.foliage_transaction_block_hash", block_2.foliage_transaction_block.get_hash()
        )
        new_m = block_2.foliage.foliage_transaction_block_hash
        new_fsb_sig = bt.get_plot_signature(new_m, block.reward_chain_block.proof_of_space.plot_public_key)
        block_2 = recursive_replace(block_2, "foliage.foliage_transaction_block_signature", new_fsb_sig)
        err = (await b.receive_block(block_2))[1]
        assert err == Err.BAD_REMOVAL_ROOT
@pytest.mark.asyncio
async def test_invalid_filter(self, empty_blockchain):
# 12
b = empty_blockchain
blocks = bt.get_consecutive_blocks(
3,
guarantee_transaction_block=True,
farmer_reward_puzzle_hash=bt.pool_ph,
pool_reward_puzzle_hash=bt.pool_ph,
)
assert (await b.receive_block(blocks[0]))[0] == ReceiveBlockResult.NEW_PEAK
assert (await b.receive_block(blocks[1]))[0] == ReceiveBlockResult.NEW_PEAK
assert (await b.receive_block(blocks[2]))[0] == ReceiveBlockResult.NEW_PEAK
wt: WalletTool = bt.get_pool_wallet_tool()
tx: SpendBundle = wt.generate_signed_transaction(
10, wt.get_new_puzzlehash(), list(blocks[-1].get_included_reward_coins())[0]
)
blocks = bt.get_consecutive_blocks(
1, block_list_input=blocks, guarantee_transaction_block=True, transaction_data=tx
)
block: FullBlock = blocks[-1]
block_2 = recursive_replace(block, "foliage_transaction_block.filter_hash", std_hash(b"3"))
block_2 = recursive_replace(
block_2, "foliage.foliage_transaction_block_hash", block_2.foliage_transaction_block.get_hash()
)
new_m = block_2.foliage.foliage_transaction_block_hash
new_fsb_sig = bt.get_plot_signature(new_m, block.reward_chain_block.proof_of_space.plot_public_key)
block_2 = recursive_replace(block_2, "foliage.foliage_transaction_block_signature", new_fsb_sig)
err = (await b.receive_block(block_2))[1]
assert err == Err.INVALID_TRANSACTIONS_FILTER_HASH
@pytest.mark.asyncio
async def test_duplicate_outputs(self, empty_blockchain):
# 13
b = empty_blockchain
blocks = bt.get_consecutive_blocks(
3,
guarantee_transaction_block=True,
farmer_reward_puzzle_hash=bt.pool_ph,
pool_reward_puzzle_hash=bt.pool_ph,
)
assert (await b.receive_block(blocks[0]))[0] == ReceiveBlockResult.NEW_PEAK
assert (await b.receive_block(blocks[1]))[0] == ReceiveBlockResult.NEW_PEAK
assert (await b.receive_block(blocks[2]))[0] == ReceiveBlockResult.NEW_PEAK
wt: WalletTool = bt.get_pool_wallet_tool()
condition_dict = {ConditionOpcode.CREATE_COIN: []}
for i in range(2):
output = ConditionWithArgs(ConditionOpcode.CREATE_COIN, [bt.pool_ph, int_to_bytes(1)])
condition_dict[ConditionOpcode.CREATE_COIN].append(output)
tx: SpendBundle = wt.generate_signed_transaction(
10, wt.get_new_puzzlehash(), list(blocks[-1].get_included_reward_coins())[0], condition_dic=condition_dict
)
blocks = bt.get_consecutive_blocks(
1, block_list_input=blocks, guarantee_transaction_block=True, transaction_data=tx
)
assert (await b.receive_block(blocks[-1]))[1] == Err.DUPLICATE_OUTPUT
@pytest.mark.asyncio
async def test_duplicate_removals(self, empty_blockchain):
# 14
b = empty_blockchain
blocks = bt.get_consecutive_blocks(
3,
guarantee_transaction_block=True,
farmer_reward_puzzle_hash=bt.pool_ph,
pool_reward_puzzle_hash=bt.pool_ph,
)
assert (await b.receive_block(blocks[0]))[0] == ReceiveBlockResult.NEW_PEAK
assert (await b.receive_block(blocks[1]))[0] == ReceiveBlockResult.NEW_PEAK
assert (await b.receive_block(blocks[2]))[0] == ReceiveBlockResult.NEW_PEAK
wt: WalletTool = bt.get_pool_wallet_tool()
tx: SpendBundle = wt.generate_signed_transaction(
10, wt.get_new_puzzlehash(), list(blocks[-1].get_included_reward_coins())[0]
)
tx_2: SpendBundle = wt.generate_signed_transaction(
11, wt.get_new_puzzlehash(), list(blocks[-1].get_included_reward_coins())[0]
)
agg = SpendBundle.aggregate([tx, tx_2])
blocks = bt.get_consecutive_blocks(
1, block_list_input=blocks, guarantee_transaction_block=True, transaction_data=agg
)
assert (await b.receive_block(blocks[-1]))[1] == Err.DOUBLE_SPEND
@pytest.mark.asyncio
async def test_double_spent_in_coin_store(self, empty_blockchain):
# 15
b = empty_blockchain
blocks = bt.get_consecutive_blocks(
3,
guarantee_transaction_block=True,
farmer_reward_puzzle_hash=bt.pool_ph,
pool_reward_puzzle_hash=bt.pool_ph,
)
assert (await b.receive_block(blocks[0]))[0] == ReceiveBlockResult.NEW_PEAK
assert (await b.receive_block(blocks[1]))[0] == ReceiveBlockResult.NEW_PEAK
assert (await b.receive_block(blocks[2]))[0] == ReceiveBlockResult.NEW_PEAK
wt: WalletTool = bt.get_pool_wallet_tool()
tx: SpendBundle = wt.generate_signed_transaction(
10, wt.get_new_puzzlehash(), list(blocks[-1].get_included_reward_coins())[0]
)
blocks = bt.get_consecutive_blocks(
1, block_list_input=blocks, guarantee_transaction_block=True, transaction_data=tx
)
assert (await b.receive_block(blocks[-1]))[0] == ReceiveBlockResult.NEW_PEAK
tx_2: SpendBundle = wt.generate_signed_transaction(
10, wt.get_new_puzzlehash(), list(blocks[-2].get_included_reward_coins())[0]
)
blocks = bt.get_consecutive_blocks(
1, block_list_input=blocks, guarantee_transaction_block=True, transaction_data=tx_2
)
assert (await b.receive_block(blocks[-1]))[1] == Err.DOUBLE_SPEND
@pytest.mark.asyncio
async def test_double_spent_in_reorg(self, empty_blockchain):
# 15
b = empty_blockchain
blocks = bt.get_consecutive_blocks(
3,
guarantee_transaction_block=True,
farmer_reward_puzzle_hash=bt.pool_ph,
pool_reward_puzzle_hash=bt.pool_ph,
)
assert (await b.receive_block(blocks[0]))[0] == ReceiveBlockResult.NEW_PEAK
assert (await b.receive_block(blocks[1]))[0] == ReceiveBlockResult.NEW_PEAK
assert (await b.receive_block(blocks[2]))[0] == ReceiveBlockResult.NEW_PEAK
wt: WalletTool = bt.get_pool_wallet_tool()
tx: SpendBundle = wt.generate_signed_transaction(
10, wt.get_new_puzzlehash(), list(blocks[-1].get_included_reward_coins())[0]
)
blocks = bt.get_consecutive_blocks(
1, block_list_input=blocks, guarantee_transaction_block=True, transaction_data=tx
)
assert (await b.receive_block(blocks[-1]))[0] == ReceiveBlockResult.NEW_PEAK
new_coin: Coin = tx.additions()[0]
tx_2: SpendBundle = wt.generate_signed_transaction(10, wt.get_new_puzzlehash(), new_coin)
# This is fine because coin exists
blocks = bt.get_consecutive_blocks(
1, block_list_input=blocks, guarantee_transaction_block=True, transaction_data=tx_2
)
assert (await b.receive_block(blocks[-1]))[0] == ReceiveBlockResult.NEW_PEAK
blocks = bt.get_consecutive_blocks(5, block_list_input=blocks, guarantee_transaction_block=True)
for block in blocks[-5:]:
assert (await b.receive_block(block))[0] == ReceiveBlockResult.NEW_PEAK
blocks_reorg = bt.get_consecutive_blocks(2, block_list_input=blocks[:-7], guarantee_transaction_block=True)
assert (await b.receive_block(blocks_reorg[-2]))[0] == ReceiveBlockResult.ADDED_AS_ORPHAN
assert (await b.receive_block(blocks_reorg[-1]))[0] == ReceiveBlockResult.ADDED_AS_ORPHAN
# Coin does not exist in reorg
blocks_reorg = bt.get_consecutive_blocks(
1, block_list_input=blocks_reorg, guarantee_transaction_block=True, transaction_data=tx_2
)
assert (await b.receive_block(blocks_reorg[-1]))[1] == Err.UNKNOWN_UNSPENT
# Finally add the block to the fork (spending both in same bundle, this is ephemeral)
agg = SpendBundle.aggregate([tx, tx_2])
blocks_reorg = bt.get_consecutive_blocks(
1, block_list_input=blocks_reorg[:-1], guarantee_transaction_block=True, transaction_data=agg
)
assert (await b.receive_block(blocks_reorg[-1]))[1] is None
blocks_reorg = bt.get_consecutive_blocks(
1, block_list_input=blocks_reorg, guarantee_transaction_block=True, transaction_data=tx_2
)
assert (await b.receive_block(blocks_reorg[-1]))[1] == Err.DOUBLE_SPEND_IN_FORK
rewards_ph = wt.get_new_puzzlehash()
blocks_reorg = bt.get_consecutive_blocks(
10,
block_list_input=blocks_reorg[:-1],
guarantee_transaction_block=True,
farmer_reward_puzzle_hash=rewards_ph,
)
for block in blocks_reorg[-10:]:
r, e, _ = await b.receive_block(block)
assert e is None
# ephemeral coin is spent
first_coin = await b.coin_store.get_coin_record(new_coin.name())
assert first_coin is not None and first_coin.spent
second_coin = await b.coin_store.get_coin_record(tx_2.additions()[0].name())
assert second_coin is not None and not second_coin.spent
farmer_coin = create_farmer_coin(
blocks_reorg[-1].height,
rewards_ph,
calculate_base_farmer_reward(blocks_reorg[-1].height),
bt.constants.GENESIS_CHALLENGE,
)
tx_3: SpendBundle = wt.generate_signed_transaction(10, wt.get_new_puzzlehash(), farmer_coin)
blocks_reorg = bt.get_consecutive_blocks(
1, block_list_input=blocks_reorg, guarantee_transaction_block=True, transaction_data=tx_3
)
assert (await b.receive_block(blocks_reorg[-1]))[1] is None
farmer_coin = await b.coin_store.get_coin_record(farmer_coin.name())
assert farmer_coin is not None and farmer_coin.spent
@pytest.mark.asyncio
async def test_minting_coin(self, empty_blockchain):
# 16 TODO
# 17 is tested in mempool tests
pass
@pytest.mark.asyncio
async def test_max_coin_amount_fee(self):
# 18 TODO: we can't create a block with such amounts due to uint64
pass
@pytest.mark.asyncio
async def test_invalid_fees_in_block(self, empty_blockchain):
# 19
b = empty_blockchain
blocks = bt.get_consecutive_blocks(
3,
guarantee_transaction_block=True,
farmer_reward_puzzle_hash=bt.pool_ph,
pool_reward_puzzle_hash=bt.pool_ph,
)
assert (await b.receive_block(blocks[0]))[0] == ReceiveBlockResult.NEW_PEAK
assert (await b.receive_block(blocks[1]))[0] == ReceiveBlockResult.NEW_PEAK
assert (await b.receive_block(blocks[2]))[0] == ReceiveBlockResult.NEW_PEAK
wt: WalletTool = bt.get_pool_wallet_tool()
tx: SpendBundle = wt.generate_signed_transaction(
10, wt.get_new_puzzlehash(), list(blocks[-1].get_included_reward_coins())[0]
)
blocks = bt.get_consecutive_blocks(
1, block_list_input=blocks, guarantee_transaction_block=True, transaction_data=tx
)
block: FullBlock = blocks[-1]
# wrong fees
block_2: FullBlock = recursive_replace(block, "transactions_info.fees", uint64(1239))
block_2 = recursive_replace(
block_2, "foliage_transaction_block.transactions_info_hash", block_2.transactions_info.get_hash()
)
block_2 = recursive_replace(
block_2, "foliage.foliage_transaction_block_hash", block_2.foliage_transaction_block.get_hash()
)
new_m = block_2.foliage.foliage_transaction_block_hash
new_fsb_sig = bt.get_plot_signature(new_m, block.reward_chain_block.proof_of_space.plot_public_key)
block_2 = recursive_replace(block_2, "foliage.foliage_transaction_block_signature", new_fsb_sig)
err = (await b.receive_block(block_2))[1]
assert err == Err.INVALID_BLOCK_FEE_AMOUNT
class TestReorgs:
@pytest.mark.asyncio
async def test_basic_reorg(self, empty_blockchain):
b = empty_blockchain
blocks = bt.get_consecutive_blocks(15)
for block in blocks:
assert (await b.receive_block(block))[0] == ReceiveBlockResult.NEW_PEAK
assert b.get_peak().height == 14
blocks_reorg_chain = bt.get_consecutive_blocks(7, blocks[:10], seed=b"2")
for reorg_block in blocks_reorg_chain:
result, error_code, fork_height = await b.receive_block(reorg_block)
if reorg_block.height < 10:
assert result == ReceiveBlockResult.ALREADY_HAVE_BLOCK
elif reorg_block.height < 14:
assert result == ReceiveBlockResult.ADDED_AS_ORPHAN
elif reorg_block.height >= 15:
assert result == ReceiveBlockResult.NEW_PEAK
assert error_code is None
assert b.get_peak().height == 16
@pytest.mark.asyncio
async def test_long_reorg(self, empty_blockchain, default_10000_blocks):
# Reorg longer than a difficulty adjustment
# Also tests higher weight chain but lower height
b = empty_blockchain
num_blocks_chain_1 = 3 * test_constants.EPOCH_BLOCKS + test_constants.MAX_SUB_SLOT_BLOCKS + 10
num_blocks_chain_2_start = test_constants.EPOCH_BLOCKS - 20
num_blocks_chain_2 = 3 * test_constants.EPOCH_BLOCKS + test_constants.MAX_SUB_SLOT_BLOCKS + 8
assert num_blocks_chain_1 < 10000
blocks = default_10000_blocks[:num_blocks_chain_1]
for block in blocks:
assert (await b.receive_block(block))[0] == ReceiveBlockResult.NEW_PEAK
chain_1_height = b.get_peak().height
chain_1_weight = b.get_peak().weight
assert chain_1_height == (num_blocks_chain_1 - 1)
# These blocks will have less time between them (timestamp) and therefore will make difficulty go up
# This means that the weight will grow faster, and we can get a heavier chain with lower height
blocks_reorg_chain = bt.get_consecutive_blocks(
num_blocks_chain_2 - num_blocks_chain_2_start,
blocks[:num_blocks_chain_2_start],
seed=b"2",
time_per_block=8,
)
found_orphan = False
for reorg_block in blocks_reorg_chain:
result, error_code, fork_height = await b.receive_block(reorg_block)
if reorg_block.height < num_blocks_chain_2_start:
assert result == ReceiveBlockResult.ALREADY_HAVE_BLOCK
if reorg_block.weight <= chain_1_weight:
if result == ReceiveBlockResult.ADDED_AS_ORPHAN:
found_orphan = True
assert error_code is None
assert result == ReceiveBlockResult.ADDED_AS_ORPHAN or result == ReceiveBlockResult.ALREADY_HAVE_BLOCK
elif reorg_block.weight > chain_1_weight:
assert reorg_block.height < chain_1_height
assert result == ReceiveBlockResult.NEW_PEAK
assert error_code is None
assert found_orphan
assert b.get_peak().weight > chain_1_weight
assert b.get_peak().height < chain_1_height
@pytest.mark.asyncio
async def test_long_compact_blockchain(self, empty_blockchain, default_10000_blocks_compact):
b = empty_blockchain
for block in default_10000_blocks_compact:
assert (await b.receive_block(block))[0] == ReceiveBlockResult.NEW_PEAK
assert b.get_peak().height == len(default_10000_blocks_compact) - 1
@pytest.mark.asyncio
async def test_reorg_from_genesis(self, empty_blockchain):
b = empty_blockchain
WALLET_A = WalletTool(b.constants)
WALLET_A_PUZZLE_HASHES = [WALLET_A.get_new_puzzlehash() for _ in range(5)]
blocks = bt.get_consecutive_blocks(15)
for block in blocks:
assert (await b.receive_block(block))[0] == ReceiveBlockResult.NEW_PEAK
assert b.get_peak().height == 14
# Reorg to alternate chain that is 1 height longer
found_orphan = False
blocks_reorg_chain = bt.get_consecutive_blocks(16, [], seed=b"2")
for reorg_block in blocks_reorg_chain:
result, error_code, fork_height = await b.receive_block(reorg_block)
if reorg_block.height < 14:
if result == ReceiveBlockResult.ADDED_AS_ORPHAN:
found_orphan = True
assert result == ReceiveBlockResult.ADDED_AS_ORPHAN or result == ReceiveBlockResult.ALREADY_HAVE_BLOCK
elif reorg_block.height >= 15:
assert result == ReceiveBlockResult.NEW_PEAK
assert error_code is None
# Back to original chain
blocks_reorg_chain_2 = bt.get_consecutive_blocks(3, blocks, seed=b"3")
result, error_code, fork_height = await b.receive_block(blocks_reorg_chain_2[-3])
assert result == ReceiveBlockResult.ADDED_AS_ORPHAN
result, error_code, fork_height = await b.receive_block(blocks_reorg_chain_2[-2])
assert result == ReceiveBlockResult.NEW_PEAK
result, error_code, fork_height = await b.receive_block(blocks_reorg_chain_2[-1])
assert result == ReceiveBlockResult.NEW_PEAK
assert found_orphan
assert b.get_peak().height == 17
@pytest.mark.asyncio
async def test_reorg_transaction(self, empty_blockchain):
b = empty_blockchain
wallet_a = WalletTool(b.constants)
WALLET_A_PUZZLE_HASHES = [wallet_a.get_new_puzzlehash() for _ in range(5)]
coinbase_puzzlehash = WALLET_A_PUZZLE_HASHES[0]
receiver_puzzlehash = WALLET_A_PUZZLE_HASHES[1]
blocks = bt.get_consecutive_blocks(10, farmer_reward_puzzle_hash=coinbase_puzzlehash)
blocks = bt.get_consecutive_blocks(
2, blocks, farmer_reward_puzzle_hash=coinbase_puzzlehash, guarantee_transaction_block=True
)
spend_block = blocks[10]
spend_coin = None
for coin in list(spend_block.get_included_reward_coins()):
if coin.puzzle_hash == coinbase_puzzlehash:
spend_coin = coin
spend_bundle = wallet_a.generate_signed_transaction(1000, receiver_puzzlehash, spend_coin)
blocks = bt.get_consecutive_blocks(
2,
blocks,
farmer_reward_puzzle_hash=coinbase_puzzlehash,
transaction_data=spend_bundle,
guarantee_transaction_block=True,
)
blocks_fork = bt.get_consecutive_blocks(
1, blocks[:12], farmer_reward_puzzle_hash=coinbase_puzzlehash, seed=b"123", guarantee_transaction_block=True
)
blocks_fork = bt.get_consecutive_blocks(
2,
blocks_fork,
farmer_reward_puzzle_hash=coinbase_puzzlehash,
transaction_data=spend_bundle,
guarantee_transaction_block=True,
seed=b"1245",
)
for block in blocks:
result, error_code, _ = await b.receive_block(block)
assert error_code is None and result == ReceiveBlockResult.NEW_PEAK
for block in blocks_fork:
result, error_code, _ = await b.receive_block(block)
assert error_code is None
@pytest.mark.asyncio
async def test_get_header_blocks_in_range_tx_filter(self, empty_blockchain):
b = empty_blockchain
blocks = bt.get_consecutive_blocks(
3,
guarantee_transaction_block=True,
pool_reward_puzzle_hash=bt.pool_ph,
farmer_reward_puzzle_hash=bt.pool_ph,
)
assert (await b.receive_block(blocks[0]))[0] == ReceiveBlockResult.NEW_PEAK
assert (await b.receive_block(blocks[1]))[0] == ReceiveBlockResult.NEW_PEAK
assert (await b.receive_block(blocks[2]))[0] == ReceiveBlockResult.NEW_PEAK
wt: WalletTool = bt.get_pool_wallet_tool()
tx: SpendBundle = wt.generate_signed_transaction(
10, wt.get_new_puzzlehash(), list(blocks[2].get_included_reward_coins())[0]
)
blocks = bt.get_consecutive_blocks(
1,
block_list_input=blocks,
guarantee_transaction_block=True,
transaction_data=tx,
)
err = (await b.receive_block(blocks[-1]))[1]
assert not err
blocks_with_filter = await b.get_header_blocks_in_range(0, 10, tx_filter=True)
blocks_without_filter = await b.get_header_blocks_in_range(0, 10, tx_filter=False)
header_hash = blocks[-1].header_hash
assert (
blocks_with_filter[header_hash].transactions_filter
!= blocks_without_filter[header_hash].transactions_filter
)
assert blocks_with_filter[header_hash].header_hash == blocks_without_filter[header_hash].header_hash
@pytest.mark.asyncio
async def test_get_blocks_at(self, empty_blockchain, default_1000_blocks):
b = empty_blockchain
heights = []
for block in default_1000_blocks[:200]:
heights.append(block.height)
result, error_code, _ = await b.receive_block(block)
assert error_code is None and result == ReceiveBlockResult.NEW_PEAK
blocks = await b.get_block_records_at(heights, batch_size=2)
assert blocks
assert len(blocks) == 200
assert blocks[-1].height == 199
| 47.80023 | 120 | 0.650626 | 15,003 | 124,663 | 5.028728 | 0.035526 | 0.027185 | 0.046656 | 0.042574 | 0.884368 | 0.861861 | 0.84186 | 0.817141 | 0.789598 | 0.761087 | 0 | 0.020666 | 0.264441 | 124,663 | 2,607 | 121 | 47.818565 | 0.802109 | 0.032969 | 0 | 0.64137 | 0 | 0 | 0.067199 | 0.057264 | 0 | 0 | 0 | 0.000384 | 0.151782 | 1 | 0.000463 | false | 0.003239 | 0.018973 | 0 | 0.027765 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8610c50c206537f7f3e7affac6765a3c17ee89f1 | 91 | py | Python | command/database/loader/__init__.py | bartick/roBOT | 95ed4f0d0f80eae900326d82d49891c9723c0118 | [
"MIT"
] | 15 | 2021-05-30T15:45:02.000Z | 2021-12-16T11:10:39.000Z | command/database/loader/__init__.py | bartick/roBOT | 95ed4f0d0f80eae900326d82d49891c9723c0118 | [
"MIT"
] | 46 | 2021-05-18T12:46:38.000Z | 2021-10-21T18:47:46.000Z | command/database/loader/__init__.py | bartick/roBOT | 95ed4f0d0f80eae900326d82d49891c9723c0118 | [
"MIT"
] | 32 | 2021-05-18T06:07:08.000Z | 2021-12-12T16:13:40.000Z | # init, importing everything from loader.py
from .loader import *
from .database import *
| 18.2 | 43 | 0.758242 | 12 | 91 | 5.75 | 0.666667 | 0.289855 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.164835 | 91 | 4 | 44 | 22.75 | 0.907895 | 0.450549 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
810a662318e01f6592f75fc2c6ee277bc267afaf | 38 | py | Python | uq360/datasets/__init__.py | Sclare87/UQ360 | 2378bfa4a8d61f813afbf6854341888434c9eb11 | [
"Apache-2.0"
] | 148 | 2021-05-27T20:52:51.000Z | 2022-03-16T22:49:48.000Z | uq360/datasets/__init__.py | Sclare87/UQ360 | 2378bfa4a8d61f813afbf6854341888434c9eb11 | [
"Apache-2.0"
] | 9 | 2021-06-21T18:45:07.000Z | 2021-11-08T14:42:30.000Z | uq360/datasets/__init__.py | Sclare87/UQ360 | 2378bfa4a8d61f813afbf6854341888434c9eb11 | [
"Apache-2.0"
] | 27 | 2021-06-01T18:29:02.000Z | 2022-03-02T06:56:03.000Z | from .meps_dataset import MEPSDataset
| 19 | 37 | 0.868421 | 5 | 38 | 6.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 38 | 1 | 38 | 38 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d48d4f5303d7789a61ac1167a6df52eedd120157 | 38 | py | Python | snappi_trex/__init__.py | open-traffic-generator/snappi-trex | b46bac94236fb0e6b2abd33f06bb5aee779487e8 | [
"MIT"
] | 17 | 2021-08-05T05:19:50.000Z | 2022-03-28T07:30:48.000Z | snappi_trex/__init__.py | fredpower44/snappi-trex | b6bb639a1fdb03318eaa7845239529e3c1be471a | [
"MIT"
] | null | null | null | snappi_trex/__init__.py | fredpower44/snappi-trex | b6bb639a1fdb03318eaa7845239529e3c1be471a | [
"MIT"
] | 4 | 2021-08-04T18:34:12.000Z | 2022-01-09T00:14:28.000Z | from snappi_trex.snappi_api import Api | 38 | 38 | 0.894737 | 7 | 38 | 4.571429 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.078947 | 38 | 1 | 38 | 38 | 0.914286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d4a6fc7ffbfd4e08c32059bee3639ade92366354 | 128 | py | Python | tests/conftest.py | BGmi/anime-episode-parser | 5b12ad7558748d1b1ee9bc27c13fac61c092ced7 | [
"MIT"
] | null | null | null | tests/conftest.py | BGmi/anime-episode-parser | 5b12ad7558748d1b1ee9bc27c13fac61c092ced7 | [
"MIT"
] | 27 | 2021-05-17T16:59:38.000Z | 2022-03-15T08:16:53.000Z | tests/conftest.py | BGmi/anime-episode-parser | 005bb54623c025c75721581b13bff79e520e5f3c | [
"MIT"
] | null | null | null | import logging
from anime_episode_parser import logger
def pytest_sessionstart() -> None:
logger.setLevel(logging.DEBUG)
| 16 | 39 | 0.789063 | 16 | 128 | 6.125 | 0.8125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.140625 | 128 | 7 | 40 | 18.285714 | 0.890909 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0.5 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d4ac64992c3dbc0fa11e6a892a09d3bad457f2ad | 22 | py | Python | src/gandalf/__init__.py | idleuncle/gandalf | 4ebedc3bc4a396414c5017d9df51e78c44566fa5 | [
"MIT"
] | null | null | null | src/gandalf/__init__.py | idleuncle/gandalf | 4ebedc3bc4a396414c5017d9df51e78c44566fa5 | [
"MIT"
] | null | null | null | src/gandalf/__init__.py | idleuncle/gandalf | 4ebedc3bc4a396414c5017d9df51e78c44566fa5 | [
"MIT"
] | null | null | null | from .Corpus import *
| 11 | 21 | 0.727273 | 3 | 22 | 5.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 22 | 1 | 22 | 22 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d4c676bbde6e2295026f085a94d6fd3333f5a268 | 1,055 | py | Python | GUI/printer/Pillow-2.7.0/Tests/test_image_thumbnail.py | y-gupta/rfid-auth-system | 44f3de884d05e1906757b97f0a1a140469a3290f | [
"Apache-2.0"
] | 5 | 2015-01-21T14:13:34.000Z | 2016-05-14T06:53:38.000Z | GUI/printer/Pillow-2.7.0/Tests/test_image_thumbnail.py | 1upon0/rfid-auth-system | 44f3de884d05e1906757b97f0a1a140469a3290f | [
"Apache-2.0"
] | null | null | null | GUI/printer/Pillow-2.7.0/Tests/test_image_thumbnail.py | 1upon0/rfid-auth-system | 44f3de884d05e1906757b97f0a1a140469a3290f | [
"Apache-2.0"
] | 3 | 2015-02-01T17:10:39.000Z | 2019-12-05T05:21:42.000Z | from helper import unittest, PillowTestCase, hopper
class TestImageThumbnail(PillowTestCase):
def test_sanity(self):
im = hopper()
im.thumbnail((100, 100))
self.assert_image(im, im.mode, (100, 100))
def test_aspect(self):
im = hopper()
im.thumbnail((100, 100))
self.assert_image(im, im.mode, (100, 100))
im = hopper().resize((128, 256))
im.thumbnail((100, 100))
self.assert_image(im, im.mode, (50, 100))
im = hopper().resize((128, 256))
im.thumbnail((50, 100))
self.assert_image(im, im.mode, (50, 100))
im = hopper().resize((256, 128))
im.thumbnail((100, 100))
self.assert_image(im, im.mode, (100, 50))
im = hopper().resize((256, 128))
im.thumbnail((100, 50))
self.assert_image(im, im.mode, (100, 50))
im = hopper().resize((128, 128))
im.thumbnail((100, 100))
self.assert_image(im, im.mode, (100, 100))
if __name__ == '__main__':
unittest.main()
# End of file
| 23.977273 | 51 | 0.563981 | 138 | 1,055 | 4.188406 | 0.210145 | 0.083045 | 0.181661 | 0.205882 | 0.749135 | 0.749135 | 0.749135 | 0.749135 | 0.645329 | 0.645329 | 0 | 0.140078 | 0.269194 | 1,055 | 43 | 52 | 24.534884 | 0.609598 | 0.010427 | 0 | 0.666667 | 0 | 0 | 0.007678 | 0 | 0 | 0 | 0 | 0 | 0.259259 | 1 | 0.074074 | false | 0 | 0.037037 | 0 | 0.148148 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
07a196c83c6a23e2d011a06e6d134382db47f70f | 172 | py | Python | test.py | mac389/scientific-consensus | 2799695b61ae0aa0ed3e1ee89557cb1eb4945502 | [
"MIT"
] | null | null | null | test.py | mac389/scientific-consensus | 2799695b61ae0aa0ed3e1ee89557cb1eb4945502 | [
"MIT"
] | null | null | null | test.py | mac389/scientific-consensus | 2799695b61ae0aa0ed3e1ee89557cb1eb4945502 | [
"MIT"
] | null | null | null | import numpy as np
from awesome_print import ap
from Scientist import Scientist
sci_guy = Scientist(label='Sci guy')
print sci_guy
print(sci_guy)
print(sci_guy.estimate_pi(repeats=10))
"MIT"
] | 1 | 2016-05-16T23:13:47.000Z | 2016-05-16T23:13:47.000Z | test/TestDem.py | nitehawck/DevEnvManager | 425b0d621be577fe73f22b4641f7099eac65669e | [
"MIT"
] | 41 | 2016-01-22T00:56:14.000Z | 2016-05-12T14:38:37.000Z | test/TestDem.py | nitehawck/DevEnvManager | 425b0d621be577fe73f22b4641f7099eac65669e | [
"MIT"
] | null | null | null | import io
import os
import unittest, mock
from tarfile import TarFile
from zipfile import ZipFile
try:
from mock import patch, MagicMock
except ImportError:
from unittest.mock import patch
try:
from StringIO import StringIO
except ImportError:
from io import StringIO
import pyfakefs.fake_filesystem_unittest as fake_filesystem_unittest
from dem import dem as go
class MyDem(fake_filesystem_unittest.TestCase):
def setUp(self):
self.setUpPyfakefs()
self.project = 'project'
# Fix for python 3+
io.open = open
# mock out virtual env since it still has a case sensitive bug in windows
self.mock_virtual_env_patcher = mock.patch('virtualenv.create_environment')
self.addCleanup(self.mock_virtual_env_patcher.stop)
self.mock_virtual_env = self.mock_virtual_env_patcher.start()
self.mock_virtual_env.side_effect = self.create_environment
def create_environment(self, path):
pass
@patch('sys.platform', "win32")
@mock.patch('subprocess.call', MagicMock())
def test_willCreateDependenciesFolder(self):
self.fs.CreateFile('devenv.yaml')
go.get_dem_packages(self.project)
self.assertTrue(os.path.exists('.devenv'))
@patch('sys.platform', "win32")
@mock.patch('subprocess.call', MagicMock())
def test_willUnzipDependencyIntoDependenciesDirectory(self):
remote_location = os.path.abspath(os.path.join(os.pathsep, 'opt'))
self.fs.CreateFile('devenv.yaml', contents='''
config:
remote-locations: ''' + remote_location + '''
packages:
json:
version: 1.8
type: archive''')
os.makedirs(remote_location)
self.fs.CreateFile('json/eggs.txt', contents='''
I like my eggs runny.''')
with ZipFile(os.path.join(remote_location, 'json-1.8.zip'), 'w') as myzip:
myzip.write('json/eggs.txt')
go.get_dem_packages(self.project)
self.assertTrue(os.path.exists(os.path.join('.devenv', self.project, 'dependencies', 'json', 'eggs.txt')))
@patch('sys.platform', "win32")
@patch('sys.stdout', new_callable=StringIO)
@mock.patch('subprocess.call', MagicMock())
def test_willPrintMessageWhenArchivedPackageCannotBeFound(self, mock_stdout):
remote_location = os.path.abspath(os.path.join(os.pathsep, 'opt'))
self.fs.CreateFile('devenv.yaml', contents='''
config:
remote-locations: ''' + remote_location + '''
packages:
json:
version: 1.8
type: archive''')
os.makedirs('not_opts')
self.fs.CreateFile('eggs.txt', contents='''
I like my eggs runny.''')
with ZipFile(os.path.join('not_opts', 'json-1.8.zip'), 'w') as myzip:
myzip.write('eggs.txt')
go.get_dem_packages(self.project)
self.assertTrue('Could not find package: json, version: 1.8\n' in mock_stdout.getvalue())
@patch('sys.platform', "win32")
@mock.patch('subprocess.call', MagicMock())
def test_willInstallFirstPackageFound(self):
remote_location1 = os.path.abspath(os.path.join(os.pathsep, 'opt1'))
remote_location2 = os.path.abspath(os.path.join(os.pathsep, 'opt2'))
self.fs.CreateFile('devenv.yaml', contents='''
config:
remote-locations: [\'''' + remote_location1 + '\', \'' + remote_location2 + '''\']
packages:
json:
version: 1.8
type: archive''')
os.makedirs(remote_location1)
os.makedirs(remote_location2)
self.fs.CreateFile('json/eggs.txt', contents='''
I like my eggs runny.''')
self.fs.CreateFile('json/not_my_eggs.txt', contents='''
I like my eggs runny.''')
with ZipFile(os.path.join(remote_location1, 'json-1.8.zip'), 'w') as myzip:
myzip.write('json/eggs.txt')
with ZipFile(os.path.join(remote_location2, 'json-1.8.zip'), 'w') as myzip:
myzip.write('json/not_my_eggs.txt')
go.get_dem_packages(self.project)
self.assertTrue(os.path.exists(os.path.join('.devenv', self.project, 'dependencies', 'json', 'eggs.txt')))
self.assertFalse(os.path.exists(os.path.join('.devenv', self.project, 'dependencies', 'json', 'not_my_eggs.txt')))
@unittest.skip("FakeFS does not support tar?")
@mock.patch('subprocess.call', MagicMock())
def test_willUntarDependencyIntoLibsDirectory(self):
remote_location = os.path.abspath(os.path.join(os.pathsep, 'opt'))
self.fs.CreateFile('devenv.yaml', contents='''
config:
remote-locations: ''' + remote_location + '''
packages:
json:
version: 1.8
type: archive''')
os.makedirs(remote_location)
self.fs.CreateFile('eggs.txt', contents='''
I like my eggs runny.''')
with TarFile.open(os.path.join(remote_location, 'json-1.8.tar.gz'), 'w:gz') as tar:
tar.add('eggs.txt')
go.get_dem_packages(self.project)
self.assertTrue(os.path.exists(os.path.join('.devenv', self.project, 'dependencies', 'json', 'eggs.txt')))
@patch('sys.platform', "win32")
@mock.patch('subprocess.call', MagicMock())
def test_willNotInstallLinuxPackagesForWindowsOS(self):
remote_location = os.path.abspath(os.path.join(os.pathsep, 'opt'))
self.fs.CreateFile('devenv.yaml', contents='''
config:
remote-locations: ''' + remote_location + '''
packages-linux:
json:
version: 1.8
type: archive''')
os.makedirs(remote_location)
self.fs.CreateFile('eggs.txt', contents='''
I like my eggs runny.''')
with ZipFile(os.path.join(remote_location, 'json-1.8.zip'), 'w') as myzip:
myzip.write('eggs.txt')
go.get_dem_packages(self.project)
self.assertFalse(os.path.exists(os.path.join('.devenv', 'libs', 'json', 'eggs.txt')))
@patch('sys.platform', "linux")
@patch('platform.linux_distribution', MagicMock(return_value=('centos', '7.34.21', 'core')))
@mock.patch('subprocess.call', MagicMock())
def test_willNotInstallWindowsPackagesForLinuxOS(self):
remote_location = os.path.abspath(os.path.join(os.pathsep, 'opt'))
self.fs.CreateFile('devenv.yaml', contents='''
config:
remote-locations: ''' + remote_location + '''
packages-win32:
json:
version: 1.8
type: archive''')
os.makedirs(remote_location)
self.fs.CreateFile('eggs.txt', contents='''
I like my eggs runny.''')
with ZipFile(os.path.join(remote_location, 'json-1.8.zip'), 'w') as myzip:
myzip.write('eggs.txt')
go.get_dem_packages(self.project)
self.assertFalse(os.path.exists(os.path.join('.devenv', 'libs', 'json', 'eggs.txt')))
@patch('sys.platform', "linux")
@patch('platform.linux_distribution', MagicMock(return_value=('centos', '7.34.21', 'core')))
@mock.patch('subprocess.call', MagicMock())
def test_willInstallLinuxPackagesForLinuxOS(self):
remote_location = os.path.abspath(os.path.join(os.pathsep, 'opt'))
self.fs.CreateFile('devenv.yaml', contents='''
config:
remote-locations: ''' + remote_location + '''
packages-linux:
json:
version: 1.8
type: archive''')
os.makedirs(remote_location)
self.fs.CreateFile('json/eggs.txt', contents='''
I like my eggs runny.''')
with ZipFile(os.path.join(remote_location, 'json-1.8.zip'), 'w') as myzip:
myzip.write('json/eggs.txt')
go.get_dem_packages(self.project)
self.assertTrue(os.path.exists(os.path.join('.devenv', self.project, 'dependencies', 'json', 'eggs.txt')))
@patch('sys.platform', "win32")
@mock.patch('subprocess.call', MagicMock())
def test_willInstallWindowsPackagesForWindowsOS(self):
remote_location = os.path.abspath(os.path.join(os.pathsep, 'opt'))
self.fs.CreateFile('devenv.yaml', contents='''
config:
remote-locations: ''' + remote_location + '''
packages-win32:
json:
version: msvc2015-1.8
type: archive''')
os.makedirs(remote_location)
self.fs.CreateFile('json/eggs.txt', contents='''
I like my eggs runny.''')
with ZipFile(os.path.join(remote_location, 'json-msvc2015-1.8.zip'), 'w') as myzip:
myzip.write('json/eggs.txt')
go.get_dem_packages(self.project)
self.assertTrue(os.path.exists(os.path.join('.devenv', self.project, 'dependencies', 'json', 'eggs.txt')))
@patch('sys.platform', "win32")
@mock.patch('subprocess.call', MagicMock())
def test_willUnzipToBinaryDestinationWindowsStrippingParentDirectory(self):
remote_location = os.path.abspath(os.path.join(os.pathsep, 'opt'))
self.fs.CreateFile('devenv.yaml', contents='''
config:
remote-locations: ''' + remote_location + '''
packages:
json:
version: 1.8
type: archive
destination: bin''')
os.makedirs(remote_location)
self.fs.CreateFile('eggs.exe', contents='''
I like my eggs runny.''')
with ZipFile(os.path.join(remote_location, 'json-1.8.zip'), 'w') as myzip:
myzip.write('eggs.exe')
go.get_dem_packages(self.project)
self.assertTrue(os.path.exists(os.path.join('.devenv', self.project, 'Scripts', 'eggs.exe')))
@patch('sys.platform', "linux")
@patch('platform.linux_distribution', MagicMock(return_value=('centos', '7.34.21', 'core')))
@mock.patch('subprocess.call', MagicMock())
def test_willUnzipToBinaryDestinationLinuxStrippingParentDirectory(self):
remote_location = os.path.abspath(os.path.join(os.pathsep, 'opt'))
self.fs.CreateFile('devenv.yaml', contents='''
config:
remote-locations: ''' + remote_location + '''
packages:
json:
version: 1.8
type: archive
destination: bin''')
os.makedirs(remote_location)
self.fs.CreateFile('eggs.exe', contents='''
I like my eggs runny.''')
with ZipFile(os.path.join(remote_location, 'json-1.8.zip'), 'w') as myzip:
myzip.write('eggs.exe')
go.get_dem_packages(self.project)
self.assertTrue(os.path.exists(os.path.join('.devenv', self.project, 'bin', 'eggs.exe')))
@patch('sys.platform', "win32")
@mock.patch('subprocess.call', MagicMock())
def test_willUnzipToPythonSitePackagesDestinationWindowsStrippingParentDirectory(self):
remote_location = os.path.abspath(os.path.join(os.pathsep, 'opt'))
self.fs.CreateFile('devenv.yaml', contents='''
config:
remote-locations: ''' + remote_location + '''
packages:
json:
version: 1.8
type: archive
destination: python-site-packages''')
os.makedirs(remote_location)
self.fs.CreateFile(os.path.join('json', 'eggs.exe'), contents='''
I like my eggs runny.''')
with ZipFile(os.path.join(remote_location, 'json-1.8.zip'), 'w') as myzip:
myzip.write(os.path.join('json', 'eggs.exe'))
go.get_dem_packages(self.project)
self.assertTrue(os.path.exists(os.path.join('.devenv', self.project, 'Lib', 'site-packages', 'json', 'eggs.exe')))
@patch('sys.platform', "linux2")
@patch('platform.linux_distribution', MagicMock(return_value=('centos', '7.34.21', 'core')))
@mock.patch('subprocess.call', MagicMock())
def test_willUnzipToPythonSitePackagesDestinationLinuxStrippingParentDirectory(self):
remote_location = os.path.abspath(os.path.join(os.pathsep, 'opt'))
self.fs.CreateFile('devenv.yaml', contents='''
config:
remote-locations: ''' + remote_location + '''
packages:
json:
version: 1.8
type: archive
destination: python-site-packages''')
os.makedirs(remote_location)
self.fs.CreateFile(os.path.join('json', 'eggs.exe'), contents='''
I like my eggs runny.''')
with ZipFile(os.path.join(remote_location, 'json-1.8.zip'), 'w') as myzip:
myzip.write(os.path.join('json', 'eggs.exe'))
go.get_dem_packages(self.project)
self.assertTrue(os.path.exists(os.path.join('.devenv', self.project, 'lib', 'python2.7', 'site-packages', 'json', 'eggs.exe')))
@patch('sys.platform', "win32")
@patch('wget.download')
@mock.patch('subprocess.call', MagicMock())
def test_willDownloadUrlToPythonSitePackagesDestinationWindowsStrippingParentDirectory(self, mock_wget):
self.fs.CreateFile('devenv.yaml', contents='''
packages:
qtcwatchdog:
version: 1.0.1
type: url
url: https://github.com/ismacaulay/qtcwatchdog/archive/v1.0.1.zip
destination: python-site-packages''')
def wget_side_effect(url, out):
remote_location = os.path.join('.devenv', self.project, 'downloads')
self.fs.CreateFile(os.path.join('qtcwatchdog', 'qtc.py'), contents='''
I like my eggs runny.''')
with ZipFile(os.path.join(remote_location, 'qtcwatchdog-1.0.1.zip'), 'w') as myzip:
myzip.write(os.path.join('qtcwatchdog', 'qtc.py'))
mock_wget.side_effect = wget_side_effect
go.get_dem_packages(self.project)
self.assertTrue(
os.path.exists(os.path.join('.devenv', self.project, 'Lib', 'site-packages', 'qtcwatchdog')))
self.assertTrue(
os.path.exists(os.path.join('.devenv', self.project, 'Lib', 'site-packages', 'qtcwatchdog', 'qtc.py')))
@patch('sys.platform', "win32")
@patch('sys.stdout', new_callable=StringIO)
@mock.patch('subprocess.call', MagicMock())
def test_will_not_extract_already_installed_archive(self, mock_stdout):
remote_location = os.path.abspath(os.path.join(os.pathsep, 'opt'))
self.fs.CreateFile('devenv.yaml', contents='''
config:
remote-locations: ''' + remote_location + '''
packages:
json:
version: 1.8
type: archive''')
os.makedirs(remote_location)
self.fs.CreateFile('eggs.txt', contents='''
I like my eggs runny.''')
with ZipFile(os.path.join(remote_location, 'json-1.8.zip'), 'w') as myzip:
myzip.write('eggs.txt')
go.get_dem_packages(self.project)
with open('devenv.yaml', 'a+') as f:
f.write('\n\n')
go.get_dem_packages(self.project)
self.assertTrue('json-1.8 already installed' in mock_stdout.getvalue())
@patch('sys.platform', "win32")
@patch('git.Repo.clone_from')
@mock.patch('subprocess.call', MagicMock())
def test_willCloneGitRepositoryAndCheckoutShaToARelativeDirectory(self, mock_clone):
self.fs.CreateFile('devenv.yaml', contents='''
config:
packages:
qtcwatchdog:
version: 72f3588eef1019bac8788fa58c52722dfa7c4d28
type: git
url: https://github.com/ismacaulay/qtcwatchdog
destination: code/python/''')
mock_repo = MagicMock()
def clone_side_effect(url, destination):
os.makedirs(destination)
self.fs.CreateFile(os.path.join(destination, 'qtc.py'), contents='''
I like my eggs runny.''')
return mock_repo
mock_clone.side_effect = clone_side_effect
go.get_dem_packages(self.project)
self.assertTrue(
os.path.exists(os.path.join('code/python/', 'qtcwatchdog')))
@patch('sys.platform', "win32")
@patch('subprocess.call')
@patch('sys.stdout', new_callable=StringIO)
def test_will_not_install_already_installed_rpm(self, mock_stdout, mock_subprocess):
remote_location = os.path.abspath(os.path.join(os.pathsep, 'opt'))
self.fs.CreateFile('devenv.yaml', contents='''
config:
remote-locations: ''' + remote_location + '''
packages:
json:
version: 1.8
type: rpm''')
os.makedirs(remote_location)
go.get_dem_packages(self.project)
with open('devenv.yaml', 'a+') as f:
f.write('\n\n')
go.get_dem_packages(self.project)
self.assertTrue('json-1.8 already installed' in mock_stdout.getvalue())
@patch('sys.platform', "win32")
@mock.patch('subprocess.call', MagicMock())
def test_will_replace_pkg_config_prefix_with_installing_archives(self):
remote_location = os.path.abspath(os.path.join(os.pathsep, 'opt'))
self.fs.CreateFile('devenv.yaml', contents='''
config:
remote-locations: ''' + remote_location + '''
packages:
json:
version: 1.8
type: archive
pkg-config: pkgconfig''')
os.makedirs(remote_location)
self.fs.CreateFile('json/eggs.txt', contents='''
I like my eggs runny.''')
self.fs.CreateFile('json/pkgconfig/eggs.pc', contents='''prefix=hello_world''')
@patch('sys.platform', "win32")
@mock.patch('subprocess.call', MagicMock())
@patch('dem.dependency.pip.PipRunner.install')
def test_willInstallLatestDem(self, mock_pip):
self.fs.CreateFile('devenv.yaml')
go.get_dem_packages(self.project)
mock_pip.assert_any_call('dem', 'latest')
if __name__ == '__main__':
unittest.main()
| 40.233261 | 135 | 0.588147 | 2,067 | 18,628 | 5.195452 | 0.096275 | 0.048049 | 0.050284 | 0.029798 | 0.771394 | 0.758357 | 0.731539 | 0.71692 | 0.69811 | 0.688891 | 0 | 0.012579 | 0.265944 | 18,628 | 462 | 136 | 40.320346 | 0.77278 | 0.004778 | 0 | 0.740642 | 0 | 0.002674 | 0.638703 | 0.339663 | 0 | 0 | 0 | 0 | 0.053476 | 1 | 0.061497 | false | 0.002674 | 0.034759 | 0 | 0.101604 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
07d88815153bf4429692c2b1881afdc84afc61bc | 6,508 | py | Python | tests/unit_tests/test_beatmap_link_parser.py | sibyl666/ronnia | f66f730ad87acff741a14d7030f2e9a1c0f2cdf6 | [
"Apache-2.0"
] | null | null | null | tests/unit_tests/test_beatmap_link_parser.py | sibyl666/ronnia | f66f730ad87acff741a14d7030f2e9a1c0f2cdf6 | [
"Apache-2.0"
] | null | null | null | tests/unit_tests/test_beatmap_link_parser.py | sibyl666/ronnia | f66f730ad87acff741a14d7030f2e9a1c0f2cdf6 | [
"Apache-2.0"
] | null | null | null | import unittest
from helpers.beatmap_link_parser import parse_single_beatmap, parse_beatmapset, get_mod_from_text
class TestBeatmapLinkParser(unittest.TestCase):
def setUp(self) -> None:
self.official_beatmap_link = 'https://osu.ppy.sh/beatmapsets/1341551#osu/2778999'
self.official_beatmap_link_alt = 'https://osu.ppy.sh/beatmaps/806017?mode=osu'
self.old_beatmap_link = 'https://osu.ppy.sh/b/2778999'
self.old_beatmap_link_alt = 'https://old.ppy.sh/p/beatmap?b=1955170&m=2'
self.official_beatmapset_link = 'https://osu.ppy.sh/beatmapsets/1341551'
self.old_beatmapset_link = 'https://osu.ppy.sh/s/1341551'
self.old_beatmapset_link_alt = 'https://old.ppy.sh/p/beatmap?s=1955170&m=2'
self.beatmap_links = [self.official_beatmap_link, self.official_beatmap_link_alt, self.old_beatmap_link,
self.old_beatmap_link_alt]
self.beatmapset_links = [self.official_beatmapset_link, self.old_beatmapset_link, self.old_beatmapset_link_alt]
def test_parse_single_beatmap_returns_beatmap_id_for_official_links(self):
expected_id = '2778999'
expected_mod = '0'
result_mod, result_id = parse_single_beatmap(self.official_beatmap_link)
self.assertEqual(expected_id, result_id)
self.assertEqual(expected_mod, result_mod)
def test_parse_single_beatmap_returns_beatmap_id_for_official_links_alternate(self):
expected_id = '806017'
expected_mod = '0'
result_mod, result_id = parse_single_beatmap(self.official_beatmap_link_alt)
self.assertEqual(expected_id, result_id)
self.assertEqual(expected_mod, result_mod)
def test_parse_single_beatmap_returns_beatmap_id_for_old_links(self):
expected_id = '2778999'
expected_mod = '0'
result_mod, result_id = parse_single_beatmap(self.old_beatmap_link)
self.assertEqual(expected_id, result_id)
self.assertEqual(expected_mod, result_mod)
def test_parse_single_beatmap_returns_beatmap_id_for_old_links_alternate(self):
expected_id = '1955170'
expected_mod = '2'
result_mod, result_id = parse_single_beatmap(self.old_beatmap_link_alt)
self.assertEqual(expected_id, result_id)
self.assertEqual(expected_mod, result_mod)
def test_parse_beatmapset_returns_beatmapset_id_for_official_links(self):
expected_id = '1341551'
expected_mod = '0'
result_mod, result_id = parse_beatmapset(self.official_beatmapset_link)
self.assertEqual(expected_id, result_id)
self.assertEqual(expected_mod, result_mod)
def test_parse_beatmapset_returns_beatmapset_id_for_old_links(self):
expected_id = '1341551'
expected_mod = '0'
result_mod, result_id = parse_beatmapset(self.old_beatmapset_link)
self.assertEqual(expected_id, result_id)
self.assertEqual(expected_mod, result_mod)
def test_parse_beatmapset_returns_beatmapset_id_for_old_links_alternate(self):
expected_id = '1955170'
expected_mod = '2'
result_mod, result_id = parse_beatmapset(self.old_beatmapset_link_alt)
self.assertEqual(expected_id, result_id)
self.assertEqual(expected_mod, result_mod)
def test_get_mod_from_text_returns_correct_mod_combination_when_no_spaces(self):
expected_mods_int = 8 + 64
expected_mods_text = "+HDDT"
for beatmap_link in self.beatmap_links:
content = f'{beatmap_link}+HDDT'
mods_int, mods_text = get_mod_from_text(content, beatmap_link)
self.assertEqual(expected_mods_int, mods_int)
self.assertEqual(expected_mods_text, mods_text)
def test_get_mod_from_text_returns_correct_mod_combination_when_spaced_with_beatmap_link(self):
expected_mods_int = 16 + 8
expected_mods_text = "+HRHD"
for beatmap_link in self.beatmap_links:
content = f'{beatmap_link} +HRHD'
mods_int, mods_text = get_mod_from_text(content, beatmap_link)
self.assertEqual(expected_mods_int, mods_int)
self.assertEqual(expected_mods_text, mods_text)
def test_get_mod_from_text_returns_correct_mod_combination_when_multiple_plus_mod_given(self):
expected_mods_int = 64 + 16 + 8
expected_mods_text = "+HDDTHR"
for beatmap_link in self.beatmap_links:
content = f'{beatmap_link} +HD +DT +HR'
mods_int, mods_text = get_mod_from_text(content, beatmap_link)
self.assertEqual(expected_mods_int, mods_int)
self.assertEqual(expected_mods_text, mods_text)
def test_get_mod_from_text_returns_correct_mod_combination_when_mod_is_given_with_space_after_plus_sign(self):
expected_mods_int = 64 + 8
expected_mods_text = "+HDDT"
for beatmap_link in self.beatmap_links:
content = f'{beatmap_link} + HD + DT'
mods_int, mods_text = get_mod_from_text(content, beatmap_link)
self.assertEqual(expected_mods_int, mods_int)
self.assertEqual(expected_mods_text, mods_text)
def test_get_mod_from_text_returns_correct_mod_combination_when_mod_is_given_without_plus_sign(self):
expected_mods_int = 64 + 8
expected_mods_text = "+HDDT"
for beatmap_link in self.beatmap_links:
content = f'{beatmap_link} HDDT'
mods_int, mods_text = get_mod_from_text(content, beatmap_link)
self.assertEqual(expected_mods_int, mods_int)
self.assertEqual(expected_mods_text, mods_text)
def test_get_mod_from_text_returns_correct_mod_combination_when_mod_is_case_insensitive(self):
expected_mods_int = 64 + 8
expected_mods_text = "+HDDT"
for beatmap_link in self.beatmap_links:
content = f'{beatmap_link} +HdDt'
mods_int, mods_text = get_mod_from_text(content, beatmap_link)
self.assertEqual(expected_mods_int, mods_int)
self.assertEqual(expected_mods_text, mods_text)
def test_get_mod_from_text_returns_nomod_when_no_mod_is_given(self):
expected_mods_int = 0
expected_mods_text = ""
for beatmap_link in self.beatmap_links:
content = f'{beatmap_link}'
mods_int, mods_text = get_mod_from_text(content, beatmap_link)
self.assertEqual(expected_mods_int, mods_int)
self.assertEqual(expected_mods_text, mods_text)
| 41.987097 | 119 | 0.719115 | 872 | 6,508 | 4.888761 | 0.094037 | 0.090312 | 0.151067 | 0.049261 | 0.888576 | 0.831809 | 0.789819 | 0.771522 | 0.758386 | 0.758386 | 0 | 0.025745 | 0.206208 | 6,508 | 154 | 120 | 42.25974 | 0.799458 | 0 | 0 | 0.548673 | 0 | 0 | 0.076829 | 0 | 0 | 0 | 0 | 0 | 0.247788 | 1 | 0.132743 | false | 0 | 0.017699 | 0 | 0.159292 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
07de757280c0cc060d4923e7a4f7ffd848d64dc8 | 5,416 | py | Python | data/york_urban.py | cumtchenchang/PPGNet | 9b280aacb887ec584e905b9f9ab006b4f4cb2cc3 | [
"MIT"
] | 171 | 2019-04-04T11:50:42.000Z | 2022-03-15T11:20:58.000Z | data/york_urban.py | cumtchenchang/PPGNet | 9b280aacb887ec584e905b9f9ab006b4f4cb2cc3 | [
"MIT"
] | 23 | 2019-06-28T07:13:09.000Z | 2021-09-08T07:06:19.000Z | data/york_urban.py | cumtchenchang/PPGNet | 9b280aacb887ec584e905b9f9ab006b4f4cb2cc3 | [
"MIT"
] | 33 | 2019-05-30T02:09:50.000Z | 2022-03-15T11:21:01.000Z | import os
import numpy as np
import torch as th
from torch.utils import data
from data.line_graph import LineGraph
from glob import glob
from PIL import Image
from data.utils import gen_gaussian_map
class YorkUrban(data.Dataset):
def __init__(self, data_root, transforms, phase="test", sigma_junction=3., max_junctions=800):
        print(f"Loading YorkUrban dataset, phase: {phase}")
assert phase == "eval"
self.data_root = data_root
self.img = [os.path.basename(f) for f in glob(os.path.join(data_root, phase, "*.jpg"))]
self.transforms = transforms
self.phase = phase
self.max_junctions = max_junctions
self.sigma_junction = sigma_junction
def __getitem__(self, item):
img = Image.open(os.path.join(self.data_root, self.phase, self.img[item]))
ori_w, ori_h = img.size
lg = LineGraph().load(os.path.join(self.data_root, self.phase, self.img[item][:-4] + ".lg"))
num_junc = lg.num_junctions
assert num_junc <= self.max_junctions, f"{(item, num_junc)}"
junc = np.zeros((self.max_junctions, 2))
# tic = time()
junc[:num_junc] = np.array([j if np.sum(j) > 0 else j + 1 for j in lg.junctions()])
# print(f"junc time: {time() - tic:.4f}")
assert np.sum(junc[:num_junc].sum(axis=1) <= 0) == 0, f"{item}"
# tic = time()
adj_mtx = np.zeros((self.max_junctions, self.max_junctions))
# print(f"mtx time: {time() - tic:.4f}")
adj_mtx[:num_junc, :num_junc] = lg.adj_mtx
if self.transforms is not None:
img, junc = self.transforms(img, junc)
cur_w, cur_h = img.size
junc[junc >= img.size[0]] = img.size[0] - 1
junc[junc < 0] = 0
# tic = time()
heatmap = gen_gaussian_map(junc[:num_junc], img.size[:2], self.sigma_junction)
assert cur_h == cur_w
line_map = lg.line_map(cur_h, cur_w / ori_w, cur_h / ori_h, line_width=self.sigma_junction)
# print(f"gaussian time: {time() - tic:.4f}")
img = np.array(np.asarray(img)[:, :, ::-1])
img = th.from_numpy(img).permute(2, 0, 1)
adj_mtx = th.from_numpy(adj_mtx)
junc = th.from_numpy(junc)
heatmap = th.from_numpy(heatmap)
line_map = th.from_numpy(line_map)
batch = dict(
image=img.float(),
adj_mtx=adj_mtx.float(),
heatmap=heatmap.float(),
junctions=junc.float(),
line_map=line_map.float()
)
return batch
def __call__(self, item):
return self.__getitem__(item)
def __len__(self):
return len(self.img)
# class YorkUrbanTrain(data.Dataset):
# def __init__(self, data_root, transforms, phase="train", sigma_junction=3., max_junctions=512):
# assert phase == "train"
# self.data_root = data_root
# self.img = [os.path.basename(f) for f in glob(os.path.join(data_root, "*.jpg"))]
# self.transforms = transforms
# self.phase = phase
# self.max_junctions = max_junctions
# self.sigma_junction = sigma_junction
#
# def __getitem__(self, item):
# img = Image.open(os.path.join(self.data_root, self.img[item]))
# ori_w, ori_h = img.size
#
# lg = LineGraph().load(os.path.join(self.data_root, self.img[item][:-4] + ".lg"))
# num_junc = lg.num_junctions
# # assert num_junc <= self.max_junctions, f"{(item, num_junc)}"
# junc = np.zeros((max(num_junc, self.max_junctions), 2))
# # tic = time()
# junc[:num_junc] = np.array([j if np.sum(j) > 0 else j + 1 for j in lg.junctions()])
# # print(f"junc time: {time() - tic:.4f}")
#
# assert np.sum(junc[:num_junc].sum(axis=1) <= 0) == 0, f"{item}"
# # tic = time()
# adj_mtx = np.zeros((max(num_junc, self.max_junctions), max(num_junc, self.max_junctions)))
# # print(f"mtx time: {time() - tic:.4f}")
# adj_mtx[:num_junc, :num_junc] = lg.adj_mtx
#
# if self.transforms is not None:
# img, junc = self.transforms(img, junc)
#
# cur_w, cur_h = img.size
#
# junc[junc >= img.size[0]] = img.size[0] - 1
# junc[junc < 0] = 0
# # tic = time()
# heatmap = gen_gaussian_map(junc[:num_junc], img.size[:2], self.sigma_junction)
# assert cur_h == cur_w
# line_map = lg.line_map(cur_h, cur_w / ori_w, cur_h / ori_h, line_width=self.sigma_junction)
# # print(f"gaussian time: {time() - tic:.4f}")
#
# if num_junc > self.max_junctions:
# choice_junc = np.random.choice(num_junc, self.max_junctions, replace=False)
# junc = np.array(junc[choice_junc])
# adj_mtx = np.array(adj_mtx[choice_junc][:, choice_junc])
#
# img = np.array(np.asarray(img)[:, :, ::-1])
# img = th.from_numpy(img).permute(2, 0, 1)
# adj_mtx = th.from_numpy(adj_mtx)
# junc = th.from_numpy(junc)
# heatmap = th.from_numpy(heatmap)
# line_map = th.from_numpy(line_map)
#
# batch = dict(
# image=img.float(),
# adj_mtx=adj_mtx.float(),
# heatmap=heatmap.float(),
# junctions=junc.float(),
# line_map=line_map.float()
# )
#
# return batch
#
# def __call__(self, item):
# return self.__getitem__(item)
#
# def __len__(self):
# return len(self.img) | 37.611111 | 101 | 0.580871 | 769 | 5,416 | 3.864759 | 0.122237 | 0.049462 | 0.064603 | 0.032974 | 0.878869 | 0.840511 | 0.837147 | 0.837147 | 0.825034 | 0.794751 | 0 | 0.012091 | 0.266987 | 5,416 | 144 | 102 | 37.611111 | 0.736524 | 0.518648 | 0 | 0 | 0 | 0 | 0.018606 | 0 | 0 | 0 | 0 | 0 | 0.072727 | 1 | 0.072727 | false | 0 | 0.145455 | 0.036364 | 0.290909 | 0.018182 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6af8129f958cf44522e777c5939e4a28a6dd214a | 124 | py | Python | Team_6_Project/third_party_implementation/phantom/network_simulation/__init__.py | cliffton/Fractal | 95dd9cd24494f0f668dcdfa6e734d360207f7435 | [
"MIT"
] | null | null | null | Team_6_Project/third_party_implementation/phantom/network_simulation/__init__.py | cliffton/Fractal | 95dd9cd24494f0f668dcdfa6e734d360207f7435 | [
"MIT"
] | null | null | null | Team_6_Project/third_party_implementation/phantom/network_simulation/__init__.py | cliffton/Fractal | 95dd9cd24494f0f668dcdfa6e734d360207f7435 | [
"MIT"
] | null | null | null | from .miner import Miner
from .miner import MaliciousMiner
from .network import Network
from .simulation import Simulation
| 20.666667 | 34 | 0.830645 | 16 | 124 | 6.4375 | 0.375 | 0.174757 | 0.291262 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137097 | 124 | 5 | 35 | 24.8 | 0.962617 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ed0004990ee7f6de580e131442cef425fa71cae8 | 8,097 | py | Python | talentmap_api/user_profile/tests/test_saved_searches.py | 18F/State-TalentMAP-API | 37043194b7f977a9b481567660f8caec698346c9 | [
"CC0-1.0"
] | 5 | 2017-06-22T13:53:34.000Z | 2018-05-14T13:44:06.000Z | talentmap_api/user_profile/tests/test_saved_searches.py | 18F/State-TalentMAP-API | 37043194b7f977a9b481567660f8caec698346c9 | [
"CC0-1.0"
] | 232 | 2017-06-16T02:09:54.000Z | 2018-05-10T16:15:48.000Z | talentmap_api/user_profile/tests/test_saved_searches.py | 18F/State-TalentMAP-API | 37043194b7f977a9b481567660f8caec698346c9 | [
"CC0-1.0"
] | 3 | 2017-07-14T01:48:49.000Z | 2021-02-14T10:38:26.000Z | import pytest
import json
from model_mommy import mommy
from rest_framework import status
from talentmap_api.user_profile.models import SavedSearch
from talentmap_api.messaging.models import Notification
@pytest.fixture()
def test_saved_search_fixture(authorized_user):
return mommy.make('user_profile.SavedSearch',
name="Test search",
owner=authorized_user.profile,
endpoint='/api/v1/position/',
filters={
"position_number__startswith": ["56"],
})
@pytest.mark.django_db()
def test_saved_search_create_no_endpoint(authorized_client, authorized_user):
# Test posting with no endpoint
response = authorized_client.post('/api/v1/searches/', data=json.dumps(
{
"name": "Banana search",
}
), content_type='application/json')
assert response.status_code == status.HTTP_400_BAD_REQUEST
@pytest.mark.django_db()
def test_saved_search_create_bad_endpoint(authorized_client, authorized_user):
# Test a bad endpoint
response = authorized_client.post('/api/v1/searches/', data=json.dumps(
{
"name": "Banana search",
"endpoint": "/api/v1/asdf/"
}
), content_type='application/json')
assert response.status_code == status.HTTP_400_BAD_REQUEST
@pytest.mark.django_db()
def test_saved_search_create_unfilterable_endpoint(authorized_client, authorized_user):
    # Test an endpoint that exists but does not support filtering
response = authorized_client.post('/api/v1/searches/', data=json.dumps(
{
"name": "Banana search",
"endpoint": "/api/v1/affff/"
}
), content_type='application/json')
assert response.status_code == status.HTTP_400_BAD_REQUEST
@pytest.mark.django_db()
def test_saved_search_create_bad_filters(authorized_client, authorized_user):
# Test a valid endpoint with bad filters
response = authorized_client.post('/api/v1/searches/', data=json.dumps(
{
"name": "Banana search",
"endpoint": "/api/v1/position/",
"filters": {
"asdf": ["05"]
}
}
), content_type='application/json')
assert response.status_code == status.HTTP_400_BAD_REQUEST
@pytest.mark.django_db()
def test_saved_search_create_in_array_filters(authorized_client, authorized_user):
    # Test a valid endpoint with an __in filter given as an array
response = authorized_client.post('/api/v1/searches/', data=json.dumps(
{
"name": "Banana search",
"endpoint": "/api/v1/position/",
"filters": {
"grade__code__in": ["05", "06"]
}
}
), content_type='application/json')
assert response.status_code == status.HTTP_201_CREATED
@pytest.mark.django_db()
def test_saved_search_create_in_string_filters(authorized_client, authorized_user):
    # Test a valid endpoint with an __in filter given as a comma-separated string
response = authorized_client.post('/api/v1/searches/', data=json.dumps(
{
"name": "Banana search",
"endpoint": "/api/v1/position/",
"filters": {
"post__in": "254,123"
}
}
), content_type='application/json')
assert response.status_code == status.HTTP_201_CREATED
@pytest.mark.django_db()
def test_saved_search_create_declared_filters(authorized_client, authorized_user):
# Test a valid endpoint with declared (i.e. manual) filters
response = authorized_client.post('/api/v1/searches/', data=json.dumps(
{
"name": "Banana search",
"endpoint": "/api/v1/position/",
"filters": {
"q": ["german security"]
}
}
), content_type='application/json')
assert response.status_code == status.HTTP_201_CREATED
@pytest.mark.django_db()
def test_saved_search_create_valid_filters(authorized_client, authorized_user):
# Test a valid endpoint with valid automatic filters
response = authorized_client.post('/api/v1/searches/', data=json.dumps(
{
"name": "Banana search",
"endpoint": "/api/v1/position/",
"filters": {
"position_number__startswith": ["56"],
"title__in": ["SPECIAL AGENT", "OFFICE MANAGER"],
"post__tour_of_duty__months__gt": ["6"]
}
}
), content_type='application/json')
assert response.status_code == status.HTTP_201_CREATED
@pytest.mark.django_db()
def test_saved_search_patch_bad_endpoint(authorized_client, authorized_user, test_saved_search_fixture):
# Test patching a bad endpoint
response = authorized_client.patch(f'/api/v1/searches/{test_saved_search_fixture.id}/', data=json.dumps(
{
"endpoint": "/api/v1/asdf/"
}
), content_type='application/json')
assert response.status_code == status.HTTP_400_BAD_REQUEST
@pytest.mark.django_db()
def test_saved_search_patch_bad_filters(authorized_client, authorized_user, test_saved_search_fixture):
# Test patching bad filters
response = authorized_client.patch(f'/api/v1/searches/{test_saved_search_fixture.id}/', data=json.dumps(
{
"filters": {
"asdf": ["05"]
}
}
), content_type='application/json')
assert response.status_code == status.HTTP_400_BAD_REQUEST
@pytest.mark.django_db()
def test_saved_search_patch_valid_filters(authorized_client, authorized_user, test_saved_search_fixture):
# Test a valid endpoint with valid filters and new endpoint
response = authorized_client.patch(f'/api/v1/searches/{test_saved_search_fixture.id}/', data=json.dumps(
{
"endpoint": "/api/v1/organization/",
"filters": {
"code__startswith": ["56"],
"long_description__in": ["OFF OF THE AMB-AT-LARGE FOR COUNTER-TERRORISM", "OFFICE MANAGER"],
"bureau_organization__code__contains": ["6"]
}
}
), content_type='application/json')
assert response.status_code == status.HTTP_200_OK
@pytest.mark.django_db(transaction=True)
def test_saved_search_delete(authorized_client, authorized_user, test_saved_search_fixture):
response = authorized_client.delete(f'/api/v1/searches/{test_saved_search_fixture.id}/')
assert response.status_code == status.HTTP_204_NO_CONTENT
@pytest.mark.django_db(transaction=True)
def test_saved_search_counts(authorized_client, authorized_user):
oms_contains = mommy.make('user_profile.SavedSearch',
name="Test search",
owner=authorized_user.profile,
endpoint='/api/v1/position/',
filters={
"title__contains": "OMS",
})
oms_exact = mommy.make('user_profile.SavedSearch',
name="Test search",
owner=authorized_user.profile,
endpoint='/api/v1/position/',
filters={
"title": "OMS",
})
mommy.make('position.Position', title="OMS", _quantity=5)
mommy.make('position.Position', title="OMS banana", _quantity=5)
assert oms_contains.count == 0
assert oms_exact.count == 0
SavedSearch.update_counts_for_endpoint("/api/v1/position/")
oms_contains.refresh_from_db()
oms_exact.refresh_from_db()
assert Notification.objects.filter(owner=authorized_user.profile).count() == 2
assert oms_contains.count == 10
assert oms_exact.count == 5
mommy.make('position.Position', title="OMS", _quantity=5)
SavedSearch.update_counts_for_endpoint()
oms_contains.refresh_from_db()
oms_exact.refresh_from_db()
assert Notification.objects.filter(owner=authorized_user.profile).count() == 4
assert oms_contains.count == 15
assert oms_exact.count == 10
| 34.751073 | 108 | 0.637397 | 911 | 8,097 | 5.36663 | 0.143798 | 0.025568 | 0.067498 | 0.051544 | 0.83289 | 0.816322 | 0.794232 | 0.783596 | 0.755778 | 0.728779 | 0 | 0.01592 | 0.247499 | 8,097 | 232 | 109 | 34.900862 | 0.786476 | 0.055082 | 0 | 0.538012 | 0 | 0 | 0.195444 | 0.052887 | 0 | 0 | 0 | 0 | 0.116959 | 1 | 0.081871 | false | 0 | 0.035088 | 0.005848 | 0.122807 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |


# Downloader/Utils/status.py (Sangeerththan/SangGDownloader, MIT license)
from moviepy.video.io.ffmpeg_tools import ffmpeg_extract_subclip

def status_video(video, s_time, e_time, status_name):
    return ffmpeg_extract_subclip(video, s_time, e_time, targetname=status_name)


# combined-compile-deploy/PyTorch_BERT_Inf1.py (C24IO/SageMaker-Inf1-Endpoints, MIT license)
import torch
import torch_neuron
import transformers
from transformers import BertTokenizer
from transformers import BertModel
import math
from transformers import AutoTokenizer, AutoModelForSequenceClassification

def main():
    sentence1 = "If you set your goals ridiculously high and it's a failure, you will fail above everyone else's success."
    sentence2 = "The greatest glory in living lies not in never falling, but in rising every time we fall."
    sentence3 = "If you set your goals ridiculously high and it's a failure, you will fail above everyone else's success. If you set your goals ridiculously high and it's a failure, you will fail above everyone else's success. If you set your goals ridiculously high and it's a failure, you will fail above everyone else's success."

    tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
    model = BertModel.from_pretrained('bert-base-uncased')
    cos = torch.nn.CosineSimilarity()

    encoded_sentence = tokenizer.encode_plus(sentence1, sentence3, max_length=128, padding='max_length',
                                             return_tensors="pt", truncation=True)
    outputs = model(encoded_sentence['input_ids'])
    s1 = outputs[1]  # The pooled output is the second element of the output tuple

    encoded_sentence = tokenizer.encode_plus(sentence2, sentence3, max_length=128, padding='max_length',
                                             return_tensors="pt", truncation=True)
    outputs = model(encoded_sentence['input_ids'])
    s2 = outputs[1]  # The pooled output is the second element of the output tuple

    cos_sim = cos(s1, s2)
    cosine_measure = cos_sim[0].item()
    angle_in_radians = math.acos(cosine_measure)
    print(math.degrees(angle_in_radians))

    # Trace and compile the model for AWS Inferentia, then reload the compiled artifact
    example_inputs = encoded_sentence['input_ids'], encoded_sentence['attention_mask'], encoded_sentence['token_type_ids']
    model_neuron = torch.neuron.trace(model, example_inputs, compiler_args=['-O2'], verbose=10, compiler_workdir='./compile')
    model_neuron.save('neuron_compiled_1_model.pt')

    model_again = torch.jit.load('neuron_compiled_1_model.pt')
    print(model_neuron)

    # Repeat the comparison with the compiled model
    encoded_sentence = tokenizer.encode_plus(sentence1, sentence3, max_length=128, padding='max_length',
                                             return_tensors="pt", truncation=True)
    input_statement = encoded_sentence['input_ids'], encoded_sentence['attention_mask'], encoded_sentence['token_type_ids']
    outputs = model_again(*input_statement)
    s1 = outputs[1]  # The pooled output is the second element of the output tuple

    encoded_sentence = tokenizer.encode_plus(sentence2, sentence3, max_length=128, padding='max_length',
                                             return_tensors="pt", truncation=True)
    input_statement = encoded_sentence['input_ids'], encoded_sentence['attention_mask'], encoded_sentence['token_type_ids']
    outputs = model_again(*input_statement)
    s2 = outputs[1]  # The pooled output is the second element of the output tuple

    cos_sim = cos(s1, s2)
    cosine_measure = cos_sim[0].item()
    angle_in_radians = math.acos(cosine_measure)
    print(math.degrees(angle_in_radians))


if __name__ == '__main__':
    main()

    import sys
    sys.exit(0)


# models/net_modules/__init__.py (ChenWang8750/WTAM_net, MIT license)
from .discrimators import *
from .losses import *
from .modules import *
from .WTAM_net import *
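Wildcard imports like the ones in this `__init__.py` re-export whatever each submodule exposes; defining `__all__` in a submodule controls exactly which names escape. A small runnable sketch (the `demo_losses` module and its functions are hypothetical, built in memory purely to illustrate the mechanism):

```python
import sys
import types

# Build a throwaway module at runtime so the example is self-contained.
mod = types.ModuleType("demo_losses")
exec(
    "__all__ = ['l1_loss']\n"
    "def l1_loss(a, b):\n"
    "    return abs(a - b)\n"
    "def l2_loss(a, b):\n"
    "    return (a - b) ** 2\n",
    mod.__dict__,
)
sys.modules["demo_losses"] = mod

ns = {}
exec("from demo_losses import *", ns)
print(sorted(k for k in ns if not k.startswith("__")))  # ['l1_loss']
```

Without `__all__`, every public (non-underscore) name would be pulled in, which is why star-importing several submodules in a package `__init__.py` can silently shadow names between them.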


# Python/problem0459.py (1050669722/LeetCode-Answers, MIT license)
class Solution:
    def repeatedSubstringPattern(self, s: str) -> bool:
        # First attempt: prune candidate prefixes whose character set differs from s
        # p = 1
        # length = len(s)
        # while p < length:
        #     tmp = s[:p]
        #     if set(tmp) == set(s):
        #         count = length // p
        #         if tmp * count == s:
        #             return True
        #     p += 1
        # return False
        p = 1
        length = len(s)
        while p < length:
            tmp = s[:p]
            count = length // p
            if tmp * count == s:
                return True
            p += 1
        return False


solu = Solution()
# s = "abab"
# s = "aba"
# s = "abcabcabcabc"
s = ("hppetqceqsbhqcrgrttmjygnibdorreygvfblhfcbiltmczdvuqgtytdayrrqxrytwagghkhsvdezeiuzacuyvxawqrmplmkjmrpwbzqzcuygevhexbfvafrqzfikrstgjlenkuooqmwvhebhhgciovanaiztbszmffbrzpfscenlkqsrzwznrcctkbnnvoaduduvtanxgckqtfhsbjhvllovobllqlomqjhjlvgrxthsyqmzztukgliumtgeguqwdygovofuhonffzhevdrbozwdschawawcyeqvvypeocmtctaxyrapswsmybmxbkzbrrwmrmqgqcbuxdtwuuloqfargoqkzrlqiiecwukozljwpeulyharmckvrafsrqibaodyinnjbygsccdbkfuyketdeavxtfyttcubphnqfvkhxokjvgihkdkqgfnzkmudqohfvuycrimoyyawfkdrpokvvzwglrlbfsjdojhftvwuuwqbgvuvlethepnriyvqtgtjrcrkypgulyvturqfwjmcbbtjcqzxwuinxzaxogrbfowbfnidyvhzybjctkzsfifejhbyqubxkyyrngvldclefwgbggtlqapziszaobxybvsodpzjtmnzitcpbvcrvutfosfdvcdwzvmfkmoeadfjwhaacetxymfnhkscnvborntdbjhcmonlvplxtgxstehaozedwhspvntyxccjrrumghmaolshpbjfcpjyxdouqjunlxxeqttxbhxpuryjsjqwyzuvckrvtmihlhnbbgycnxthqtskcjgakbypnrkhduqqcdsfksjzscjivbtzmbzxezosrabwurnywhdizmktqtcnuxmjyoidpwxgwyfsrsrpzuyajkubdypzxdivrqahmzpkxufqowgpsgqdqmfvmuujzdgrthaiirugozycxguqomteyazkwwvwzbpskpctgxbwyzzwgtoufjbfkcrgymcznruyiwtrvunutosbjgyopbvbdoieamfqgzqqwjhtdxnylhavnylfzjgexqkyfqqnridnrnhwkwuxeustugyvphcmxomegerymxndkwbwvwtzsouputklcozzdmglsxjfuzkgvmcqiyrcmorghcrjsskxesjzsueotovrczjmxdpjrgrakklddxajqjiiemwzdtsftesqhhcyptaaeldhidqapzivnhwqapyttsmaboaqhcqnnvuxznyqoilbphuqyulrmxtnnvfxxykmthkuuimiqxlihfyfzlxllsayoivngiabpkyluktmieurmuwlgvzrobrejprrxtvodtzzduonaigmfdalyzeocxsmmmflfablvckbwyoxjvloalbamfppehdrvieblgmgiyhhxygivtwvfzvtgmikwndryisjqeradzhczpmujirqjojpbuzxhdohnjqdpkdulnykekgnszddnpsojsnsxeaknspecuznjxzoifbcehguwykfsyzrezdtusxwpwmywnmgvqizxqvtrgajgzdmbgfvzctobhozvdfqtnrsgnlxvnidmlppsukryghbnxaiafyvvqnbfyyangfasurmqcfoimsxlsgmaghvwxydvyflgknaeemugrlqfdorxwfzcoubluejskubuhbbloxuhimnnagnynmbbjcndiwyssbpzcqmsniayvpnxxawknxlybadjybeqctrhlgzyobyjsmjpmfzbenzfndqmguelnwsyetzsxzeplnfasgdytddhitvqqzbfxvgbrfwogadspkujrxhkcmtkhobxqedncjrtqpjwroqgzkpqiwckkwxrkapaeuqidhdvymrpdkvcumuekwpuumlfmahsuxdzgguevotayocscyxmwogrcswqufkrdnqlwnqtbjtbaxvcvuprixikpgckondravcyiurlgkoghkkeebypzizqpccdrfwtbaslvjxbwljfxvmczkrassqjw"
     "vonakhdnbpkmolkbwqztcbumuugonqlieaipjoekdoxrbhszzrsduprqjyfyosgssrjcfnmidlbettdunyyjnpayphxdzfyrwjvdxilcvohqimlxklgzciyspxxqcvibfdeensgjgpzqcmnoxwoagouylroppyquevarnictyemaqzoqxesesmcffsxurnqvkqozztvxxhzpiphguzkonowtitnziewvunuvgpufytwhlgnffzvvproxmdzvhxqekmbsewzcryjeeyjlxhgmywmlalijiypvmrpqpptipcntdygafppgldrnobzybovnhlewcxhtbuoesuhajygxbzmralrbcnqjauietpxvllbffkfrilqlmccoqwpsjidlclpwcmtnzwtghaxropfaujpkfgeqohbtvqpzekndgikpkjhyzmbvxqfdyjtnsvinnznujczrmlhwvqxweyrbqyeohadbxlpkkegvignurusomrkqpdrfbywkyzmxndhzjqvrwilnefcsxoioubwxbsibtwyibiikydbunojtvllscvjwyftaxdbqbczckjokoredhnydbjxfggdelwgkckbfmciynyibqmexbccenalviozwnigrsjwengcafmbxyhwblziybttlkvhxdooxxkdlrhnytpvtyrwksektagfkdmjiczfalinepackrzdqrzcjemfgsmsxybfdckdnusslswvkwycpyeaeqhkltciufqxhaawxsqimnewlcaccecgxkskfwuzdkwmnyjksbufoydbdedhkiqhukrzhozmyxznwkxolutcszdxdjfntxxphqooepdfpesloszbmvdgwjgzunonkncresikklpzpkkfclgqimwevcfprwebjivnadykqplhzvmdjuttgsadwfsobyplgkajpavfqhoreavpxojdijhfqbtscifivhtkipsawgrcjosgfblnmuseylwawdirledttvtremtpblxgoitcfmhdxfdtjnmwrqrmnmdtyxibkhhbsddxpmaosdkdswbkosweecxcbielrnojqsghgiwanidggesvyqbcsahtinhaavltpsawaywogcwniokhenjznquyfbyizlboddkgcjwklszvilcmymnmeikklkskvvzbylhcwfpjxoffchtctjoarakcmepizolzbucyztjwjodlwyorheryfddrjubkkmkliolhjvfsjiehhubqyupfauzjqawapilxyzhhumzfvfpezquaklhmhgwxjuxaclzakghgtilqocwpsqrfezrlhplqlksnvsnhywntfbjvdfkwycdedwpkocbznvnincsobfhigtdkaniarneujwfxyizldowtqqhtvqbeleoouyollviwrpwpxvdcjbxbrgvozwskdiaxgpktksqdhmsgjxluakvtrsiqrccwldtrudngydjhrdocdbwfltzeojuhlzdwewqabdibirjbwzdbczhnugsipopcpsbvqrvuwdvgwehvfkwhldvhlpqcfhfxcgsuzqovtkbsqknwwjdjnaqaridzsiwuoqongfkcpnuhxhftslchluifdlevvcrjufydkkhbxblwkqrebtmppwuuhapcegnaonfaxmewprsbhjgleuatqwoxyfbeoogedmgaykwobqrlzxwdryyhwogwujaiziocuuevhalkscvratwttvdpljlfvnpuwdxsabnheyrwdpqdimyejbtvnhciwucuzbnzfcgldyjgpzlzojdzlzwyizievmbuoquvsagxapdprqrhaugntdnbevibhjvxzpstsarsswkjpdsrxyetdrwjogkxpgxqxrmpsfkmdwxszpjynnrtgoewupwmxteukqmevwqbsnttcdrssjnbzrzvivjfoqcbgofemwfglazodsiydvbemacvylcobepkuxqivxogxpwdieblzeqogsjeflvjskvojlxgi"
     "nnfdlknqlarrqfykoesczbwmwmvjjcrzryecjruwrmqkrowisomurignwdyihrhasldbczzvlpfffcpasbuklczhfypppwphjuknumjhbqmhsbjncdxphwxmwodoltvwnikjutrxjfgehprluqdbmaqlotzbowyeeknadgyomeuvwniqdlsslidcbcfsafwfpjhuqfjemfzithawtsqgatkexqwyxufndohvwsbiyastksrdnilpdytdqrdnnkarykoueqeeswxcrphezvtctphjikywuzptlfprxuwqstujkeplzjquaxfiidgeevzrdpjajfsbapnltcyuloqnmvywaeafccyfrhhamcdprqamtaigpywdvuzxabecddjwktwzvcomuqanqiwhiskdojconhtskcpwxnvsplgkbgzuoxbwpmbfxeumnnfzruvphthxeojiwiclgfjxtndrtzdgmiffccumvejcuukqeodktnkpcpgvoldawkfamcmigxmcrwswmwihluwnjeixslzoxhojjdtrcftudnsrjczwxxjgctgugfkdmanxdgqiolcrzwjkakhxhsglmmhstrwgulfztwhhjlbihmviwehfwntibadvubdomiphgxpsiscsexccbjhazakadnvxqanelemtbdlmvoezlgbprkpqlbtqpqphrcmcgyvkbhwyvcxikazbkquxsnpjdeqwicyrcwbfdzdabcklcmmpciouvedbiwxryyidulizkmblonwtzkkcvayqectpariyrqdldmmnynaoawjaivedwcwcgrrgibhbtkmwwyjwnjnohyqsuuxqwvufnmlxnszhfnfbmpabaprknhchdzzaxufkishxngeswkvkbvlbkdlamphqrhsodzylrhieqpymbuwcrhfemtezklpbuhrxgpkzzvgpkedlyzpqiwuvrywelnfguxfcosdpnjexohkoiberzaotymxmzeuvdbzutcjimhppetqceqsbhqcrgrttmjygnibdorreygvfblhfcbiltmczdvuqgtytdayrrqxrytwagghkhsvdezeiuzacuyvxawqrmplmkjmrpwbzqzcuygevhexbfvafrqzfikrstgjlenkuooqmwvhebhhgciovanaiztbszmffbrzpfscenlkqsrzwznrcctkbnnvoaduduvtanxgckqtfhsbjhvllovobllqlomqjhjlvgrxthsyqmzztukgliumtgeguqwdygovofuhonffzhevdrbozwdschawawcyeqvvypeocmtctaxyrapswsmybmxbkzbrrwmrmqgqcbuxdtwuuloqfargoqkzrlqiiecwukozljwpeulyharmckvrafsrqibaodyinnjbygsccdbkfuyketdeavxtfyttcubphnqfvkhxokjvgihkdkqgfnzkmudqohfvuycrimoyyawfkdrpokvvzwglrlbfsjdojhftvwuuwqbgvuvlethepnriyvqtgtjrcrkypgulyvturqfwjmcbbtjcqzxwuinxzaxogrbfowbfnidyvhzybjctkzsfifejhbyqubxkyyrngvldclefwgbggtlqapziszaobxybvsodpzjtmnzitcpbvcrvutfosfdvcdwzvmfkmoeadfjwhaacetxymfnhkscnvborntdbjhcmonlvplxtgxstehaozedwhspvntyxccjrrumghmaolshpbjfcpjyxdouqjunlxxeqttxbhxpuryjsjqwyzuvckrvtmihlhnbbgycnxthqtskcjgakbypnrkhduqqcdsfksjzscjivbtzmbzxezosrabwurnywhdizmktqtcnuxmjyoidpwxgwyfsrsrpzuyajkubdypzxdivrqahmzpkxufqowgpsgqdqmfvmuujzdgrthaiirugozycxguqomteyazkwwvwzbpskpctgxbwyzzwgtoufjbfkcrgymcznruyiw"
     "trvunutosbjgyopbvbdoieamfqgzqqwjhtdxnylhavnylfzjgexqkyfqqnridnrnhwkwuxeustugyvphcmxomegerymxndkwbwvwtzsouputklcozzdmglsxjfuzkgvmcqiyrcmorghcrjsskxesjzsueotovrczjmxdpjrgrakklddxajqjiiemwzdtsftesqhhcyptaaeldhidqapzivnhwqapyttsmaboaqhcqnnvuxznyqoilbphuqyulrmxtnnvfxxykmthkuuimiqxlihfyfzlxllsayoivngiabpkyluktmieurmuwlgvzrobrejprrxtvodtzzduonaigmfdalyzeocxsmmmflfablvckbwyoxjvloalbamfppehdrvieblgmgiyhhxygivtwvfzvtgmikwndryisjqeradzhczpmujirqjojpbuzxhdohnjqdpkdulnykekgnszddnpsojsnsxeaknspecuznjxzoifbcehguwykfsyzrezdtusxwpwmywnmgvqizxqvtrgajgzdmbgfvzctobhozvdfqtnrsgnlxvnidmlppsukryghbnxaiafyvvqnbfyyangfasurmqcfoimsxlsgmaghvwxydvyflgknaeemugrlqfdorxwfzcoubluejskubuhbbloxuhimnnagnynmbbjcndiwyssbpzcqmsniayvpnxxawknxlybadjybeqctrhlgzyobyjsmjpmfzbenzfndqmguelnwsyetzsxzeplnfasgdytddhitvqqzbfxvgbrfwogadspkujrxhkcmtkhobxqedncjrtqpjwroqgzkpqiwckkwxrkapaeuqidhdvymrpdkvcumuekwpuumlfmahsuxdzgguevotayocscyxmwogrcswqufkrdnqlwnqtbjtbaxvcvuprixikpgckondravcyiurlgkoghkkeebypzizqpccdrfwtbaslvjxbwljfxvmczkrassqjwvonakhdnbpkmolkbwqztcbumuugonqlieaipjoekdoxrbhszzrsduprqjyfyosgssrjcfnmidlbettdunyyjnpayphxdzfyrwjvdxilcvohqimlxklgzciyspxxqcvibfdeensgjgpzqcmnoxwoagouylroppyquevarnictyemaqzoqxesesmcffsxurnqvkqozztvxxhzpiphguzkonowtitnziewvunuvgpufytwhlgnffzvvproxmdzvhxqekmbsewzcryjeeyjlxhgmywmlalijiypvmrpqpptipcntdygafppgldrnobzybovnhlewcxhtbuoesuhajygxbzmralrbcnqjauietpxvllbffkfrilqlmccoqwpsjidlclpwcmtnzwtghaxropfaujpkfgeqohbtvqpzekndgikpkjhyzmbvxqfdyjtnsvinnznujczrmlhwvqxweyrbqyeohadbxlpkkegvignurusomrkqpdrfbywkyzmxndhzjqvrwilnefcsxoioubwxbsibtwyibiikydbunojtvllscvjwyftaxdbqbczckjokoredhnydbjxfggdelwgkckbfmciynyibqmexbccenalviozwnigrsjwengcafmbxyhwblziybttlkvhxdooxxkdlrhnytpvtyrwksektagfkdmjiczfalinepackrzdqrzcjemfgsmsxybfdckdnusslswvkwycpyeaeqhkltciufqxhaawxsqimnewlcaccecgxkskfwuzdkwmnyjksbufoydbdedhkiqhukrzhozmyxznwkxolutcszdxdjfntxxphqooepdfpesloszbmvdgwjgzunonkncresikklpzpkkfclgqimwevcfprwebjivnadykqplhzvmdjuttgsadwfsobyplgkajpavfqhoreavpxojdijhfqbtscifivhtkipsawgrcjosgfblnmuseylwawdirledttvtre"
     "mtpblxgoitcfmhdxfdtjnmwrqrmnmdtyxibkhhbsddxpmaosdkdswbkosweecxcbielrnojqsghgiwanidggesvyqbcsahtinhaavltpsawaywogcwniokhenjznquyfbyizlboddkgcjwklszvilcmymnmeikklkskvvzbylhcwfpjxoffchtctjoarakcmepizolzbucyztjwjodlwyorheryfddrjubkkmkliolhjvfsjiehhubqyupfauzjqawapilxyzhhumzfvfpezquaklhmhgwxjuxaclzakghgtilqocwpsqrfezrlhplqlksnvsnhywntfbjvdfkwycdedwpkocbznvnincsobfhigtdkaniarneujwfxyizldowtqqhtvqbeleoouyollviwrpwpxvdcjbxbrgvozwskdiaxgpktksqdhmsgjxluakvtrsiqrccwldtrudngydjhrdocdbwfltzeojuhlzdwewqabdibirjbwzdbczhnugsipopcpsbvqrvuwdvgwehvfkwhldvhlpqcfhfxcgsuzqovtkbsqknwwjdjnaqaridzsiwuoqongfkcpnuhxhftslchluifdlevvcrjufydkkhbxblwkqrebtmppwuuhapcegnaonfaxmewprsbhjgleuatqwoxyfbeoogedmgaykwobqrlzxwdryyhwogwujaiziocuuevhalkscvratwttvdpljlfvnpuwdxsabnheyrwdpqdimyejbtvnhciwucuzbnzfcgldyjgpzlzojdzlzwyizievmbuoquvsagxapdprqrhaugntdnbevibhjvxzpstsarsswkjpdsrxyetdrwjogkxpgxqxrmpsfkmdwxszpjynnrtgoewupwmxteukqmevwqbsnttcdrssjnbzrzvivjfoqcbgofemwfglazodsiydvbemacvylcobepkuxqivxogxpwdieblzeqogsjeflvjskvojlxginnfdlknqlarrqfykoesczbwmwmvjjcrzryecjruwrmqkrowisomurignwdyihrhasldbczzvlpfffcpasbuklczhfypppwphjuknumjhbqmhsbjncdxphwxmwodoltvwnikjutrxjfgehprluqdbmaqlotzbowyeeknadgyomeuvwniqdlsslidcbcfsafwfpjhuqfjemfzithawtsqgatkexqwyxufndohvwsbiyastksrdnilpdytdqrdnnkarykoueqeeswxcrphezvtctphjikywuzptlfprxuwqstujkeplzjquaxfiidgeevzrdpjajfsbapnltcyuloqnmvywaeafccyfrhhamcdprqamtaigpywdvuzxabecddjwktwzvcomuqanqiwhiskdojconhtskcpwxnvsplgkbgzuoxbwpmbfxeumnnfzruvphthxeojiwiclgfjxtndrtzdgmiffccumvejcuukqeodktnkpcpgvoldawkfamcmigxmcrwswmwihluwnjeixslzoxhojjdtrcftudnsrjczwxxjgctgugfkdmanxdgqiolcrzwjkakhxhsglmmhstrwgulfztwhhjlbihmviwehfwntibadvubdomiphgxpsiscsexccbjhazakadnvxqanelemtbdlmvoezlgbprkpqlbtqpqphrcmcgyvkbhwyvcxikazbkquxsnpjdeqwicyrcwbfdzdabcklcmmpciouvedbiwxryyidulizkmblonwtzkkcvayqectpariyrqdldmmnynaoawjaivedwcwcgrrgibhbtkmwwyjwnjnohyqsuuxqwvufnmlxnszhfnfbmpabaprknhchdzzaxufkishxngeswkvkbvlbkdlamphqrhsodzylrhieqpymbuwcrhfemtezklpbuhrxgpkzzvgpkedlyzpqiwuvrywelnfguxfcosdpnjexohkoiberzaotymxmzeuvdbzu"
     "tcjim")
print(solu.repeatedSubstringPattern(s))
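The prefix scan above is quadratic in the worst case (which is why the huge stress-test string matters). A well-known alternative checks whether `s` occurs inside `s + s` with the first and last characters removed; a sketch:

```python
def repeated_substring_pattern(s: str) -> bool:
    """True if s can be built by repeating one of its proper substrings."""
    # If s = t * k with k >= 2, then s reappears inside (s + s) at an offset
    # other than 0 and len(s); stripping the end characters rules out the
    # two trivial full-string matches.
    return s in (s + s)[1:-1]


print(repeated_substring_pattern("abab"))          # True
print(repeated_substring_pattern("aba"))           # False
print(repeated_substring_pattern("abcabcabcabc"))  # True
```

Python's substring search makes this effectively linear in practice, so it handles the long test string above without the character-set pruning the first attempt experimented with.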

'''
Observe the time complexity of comparing two sets for equality
'''
import time
import numpy as np
# a1 = [1,2,3]*100
# a2 = [1,2,3]*100
# b1 = [1,2,3]*100000
# b2 = [1,2,3]*100000
a1 = np.random.randint(10,100000,[100]).tolist()
a2 = np.random.randint(10,100000,[100]).tolist()
b1 = np.random.randint(10,100000,[100000]).tolist()
b2 = np.random.randint(10,100000,[100000]).tolist()
a1 = set(a1)
a2 = set(a2)
b1 = set(b1)
b2 = set(b2)
start = time.perf_counter()
print(a1 == a2)
print(time.perf_counter()-start)
start = time.perf_counter()
print(b1 == b2)
print(time.perf_counter()-start)
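Single `perf_counter` deltas like the ones above are noisy; `timeit` with repeats gives a steadier picture of the same experiment. A sketch (the sizes here are illustrative):

```python
import timeit

small1, small2 = set(range(100)), set(range(100))
big1, big2 = set(range(100_000)), set(range(100_000))

# Comparing equal sets must inspect every element, so cost grows with size.
t_small = min(timeit.repeat(lambda: small1 == small2, number=200, repeat=3))
t_big = min(timeit.repeat(lambda: big1 == big2, number=200, repeat=3))
print(t_small, t_big)
```

Note that comparing sets of *different* sizes short-circuits on the length check, so the expensive case is precisely the equal (or equal-sized) one being timed here.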


# FEMpy/Solvers.py (floydie7/FEMpy, MIT license)
"""
Solvers.py
Author: Benjamin Floyd
Contains the Finite Element Method Solvers.
"""
import numpy as np
from scipy.integrate import quad
from scipy.sparse import linalg
from scipy.special import roots_legendre
from .Assemblers import assemble_matrix, assemble_vector
from .helpers import dbquad_triangle, copy_docstring_from

class Poisson1D(object):
    r"""
    Solves a one-dimensional Poisson equation.

    Uses finite element methods to solve a Poisson differential equation of the form

    .. math::

        - \frac{{\rm d}}{{\rm d} x}\left(c(x) \frac{{\rm d}}{{\rm d} x} u(x) \right) = f(x); \quad a \leq x \leq b,

    with a combination of Dirichlet, Neumann, or Robin boundary conditions.

    .. warning:: The two endpoint boundary conditions cannot both be Neumann, as this may result in a loss of
        uniqueness of the solution.

    Parameters
    ----------
    mesh : :class:`FEMpy.Mesh.Interval1D`
        A :class:`Mesh` class defining the mesh and associated information matrices.
    fe_trial_basis, fe_test_basis : :class:`FEMpy.FEBasis.IntervalBasis1D`
        A :class:`FEBasis` class defining the finite element basis functions for the trial and test bases.
    boundary_conditions : :class:`FEMpy.Boundaries.BoundaryConditions`
        A :class:`BoundaryConditions` class defining the boundary conditions on the domain.

    Examples
    --------
    >>> import numpy as np
    >>> from FEMpy import Interval1D, IntervalBasis1D, BoundaryConditions, Poisson1D
    >>> mesh = Interval1D(0, 1, h=1/2, basis_type='linear')
    >>> basis = IntervalBasis1D('linear')
    >>> dirichlet_funct = lambda x: 0 if x == 0 else np.cos(1)
    >>> bcs = BoundaryConditions(mesh, ('dirichlet', 'dirichlet'), dirichlet_fun=dirichlet_funct)
    >>> coefficient_funct = lambda x: np.exp(x)
    >>> source_funct = lambda x: -np.exp(x) * (np.cos(x) - 2*np.sin(x) - x*np.cos(x) - x*np.sin(x))
    >>> poisson_eq = Poisson1D(mesh, basis, basis, bcs)
    >>> poisson_eq.solve(coefficient_funct, source_funct)
    array([0.        , 0.44814801, 0.54030231])
    """
    def __init__(self, mesh, fe_trial_basis, fe_test_basis, boundary_conditions):
        self._mesh = mesh
        self._fe_trial = fe_trial_basis
        self._fe_test = fe_test_basis
        self._boundary_conditions = boundary_conditions

        # This will be overwritten by the solver
        self._nodal_solution = None
    def solve(self, coeff_fun, source_fun):
        r"""
        Method that performs the finite element solution algorithm.

        Calls the assembly functions `FEMpy.Assemblers.assemble_matrix` and `FEMpy.Assemblers.assemble_vector` to
        create the stiffness matrix and load vector respectively. Then, applies the boundary condition treatments to
        the matrix and vector. Finally, solves the linear system :math:`A\mathbf{x} = \mathbf{b}`.

        Parameters
        ----------
        coeff_fun : function
            Function name of the coefficient function `c`(`x`) in the Poisson equation.
        source_fun : function
            The nonhomogeneous source function `f`(`x`) of the Poisson equation.

        Returns
        -------
        ndarray
            The nodal solution vector.
        """
        # Create our stiffness matrix
        stiffness_matrix = assemble_matrix(coeff_fun,
                                           mesh=self._mesh,
                                           trial_basis=self._fe_trial, test_basis=self._fe_test,
                                           derivative_order_trial=1, derivative_order_test=1)

        # Create our load vector
        load_vector = assemble_vector(source_fun,
                                      mesh=self._mesh,
                                      test_basis=self._fe_test,
                                      derivative_order_test=0)

        # Modify the stiffness matrix and the load vector to accommodate the boundary conditions specified.
        stiffness_matrix, load_vector = self._boundary_conditions.treat_robin(stiffness_matrix, load_vector)
        load_vector = self._boundary_conditions.treat_neumann(load_vector)
        stiffness_matrix, load_vector = self._boundary_conditions.treat_dirichlet(stiffness_matrix, load_vector)

        # Convert the stiffness matrix to a compressed sparse row matrix as it will be more efficient for
        # matrix-vector products
        stiffness_matrix = stiffness_matrix.tocsr()

        # Solve the system for our result
        self._nodal_solution = linalg.spsolve(stiffness_matrix, load_vector)

        return self._nodal_solution
    def fe_solution(self, x, local_sol, vertices, derivative_order):
        """
        Defines the functional solution piecewise on the finite elements.

        Uses the solution vector and the basis function to define a piecewise continuous solution over the element.

        Parameters
        ----------
        x : float or array_like
            A value or array of points to evaluate the function on.
        local_sol : array_like
            Finite element solution node vector local to the element `En`.
        vertices : array_like
            Global node coordinates for the mesh element `En`.
        derivative_order : int
            The derivative order to take the basis function to.

        Returns
        -------
        float
            Solution at all points in `x` in the element.
        """
        # Set the basis type from the test basis
        basis_type = self._fe_test.basis_type

        if basis_type == 101:  # linear basis
            num_local_basis = 2
        elif basis_type == 102:  # quadratic basis
            num_local_basis = 3
        else:
            raise ValueError('Unknown basis type')

        fun_value = np.sum([local_sol[k] * self._fe_test.fe_local_basis(x, vertices, basis_idx=k,
                                                                        derivative_order=derivative_order)
                            for k in range(num_local_basis)], axis=0)

        return fun_value
    def l_inf_error(self, exact_sol):
        """
        Computes the L-infinity norm error.

        Computes the L-infinity norm error using the exact solution and the finite element function `fe_solution`.

        Parameters
        ----------
        exact_sol : function
            The analytical solution to compare the finite element solution against.

        Returns
        -------
        float
            The L-infinity norm error of the finite element solution over the domain evaluated element-wise.
        """
        # Get the number of elements from the mesh
        num_elements = self._mesh.num_elements_x

        # Initialize the element maximum error vector
        element_max = np.empty(num_elements)

        for n in range(num_elements):
            # Extract the global node coordinates for the element E_n
            vertices = self._mesh.get_vertices(n)

            # Select for the solution at the local finite element nodes
            local_sol = self._nodal_solution[self._mesh.Tb[:, n]]

            # Generate a grid of points local to the element
            node_points = roots_legendre(4)[0]
            element_points = (vertices[1] - vertices[0]) / 2 * node_points + (vertices[0] + vertices[1]) / 2

            # Compute the error on each evaluation node point in the element
            element_error = np.abs(exact_sol(element_points) - self.fe_solution(element_points, local_sol, vertices, 0))

            # Find the maximum error in the element
            element_max[n] = np.max(element_error)

        # Return the maximum error over all elements
        return np.max(element_max)
    def __l2_hsemi_norm_error(self, exact_sol, derivative_order):
        """
        Computes either the L2 norm error or the H1 semi-norm error.

        Computes either the L2 norm error or the H1 semi-norm error depending on the derivative order specified. If
        ``derivative_order == 0``, the L2 norm error is computed; if ``derivative_order == 1``, the H1 semi-norm
        error is computed.

        .. note:: This is designed to be called via the `l2_error` and `h1_seminorm_error` methods which will provide
            the appropriate `derivative_order`.

        Parameters
        ----------
        exact_sol : function
            The analytical solution. If the H1 semi-norm error is desired, this must be the first derivative of the
            analytical solution.
        derivative_order : int
            The derivative order to take the basis function to.

        Returns
        -------
        float
            The L2 norm error or the H1 semi-norm error of the finite element solution over the domain evaluated
            element-wise.
        """
        # Get the number of elements from the mesh
        num_elements = self._mesh.num_elements_x

        # Initialize the element error vector
        element_error = np.empty(num_elements)

        for n in range(num_elements):
            # Extract the global node coordinates for the element E_n
            vertices = self._mesh.get_vertices(n)

            # Select for the solution at the local finite element nodes
            local_sol = self._nodal_solution[self._mesh.Tb[:, n]]

            # Define the integrand
            def integrand(x):
                return (exact_sol(x) - self.fe_solution(x, local_sol, vertices, derivative_order))**2

            # Integrate over the element
            element_error[n] = quad(integrand, a=vertices[0], b=vertices[1])[0]

        # Return the square root of the sum of the element errors
        return np.sqrt(np.sum(element_error))
    def l2_error(self, exact_sol):
        """
        The L2 norm error of the finite element solution compared against the given analytical solution.

        Parameters
        ----------
        exact_sol : function
            The analytical solution to the Poisson equation.

        Returns
        -------
        float
            The L2 norm error of the finite element solution over the domain evaluated element-wise.
        """
        return self.__l2_hsemi_norm_error(exact_sol, 0)

    def h1_seminorm_error(self, diff_exact_sol):
        """
        The H1 semi-norm error of the finite element solution compared against the given analytical solution.

        Parameters
        ----------
        diff_exact_sol : function
            The first derivative of the analytical solution to the Poisson equation.

        Returns
        -------
        float
            The H1 semi-norm error of the finite element solution over the domain evaluated element-wise.
        """
        return self.__l2_hsemi_norm_error(diff_exact_sol, 1)

class Poisson2D(Poisson1D):
    r"""
    Solves a two-dimensional Poisson equation.

    Uses finite element methods to solve a Poisson differential equation of the form

    .. math::

        -\nabla \cdot \left(c(\mathbf{x}) \nabla u(\mathbf{x}) \right) = f(\mathbf{x}); \quad \mathbf{x} \in \Omega,

    with a combination of Dirichlet, Neumann, or Robin boundary conditions.

    .. warning:: The edge boundary conditions cannot all be Neumann, as this may result in a loss of uniqueness of
        the solution.

    Parameters
    ----------
    mesh : :class:`FEMpy.Mesh.TriangularMesh2D`
        A :class:`Mesh` class defining the mesh and associated information matrices.
    fe_trial_basis, fe_test_basis : :class:`FEMpy.FEBasis.TriangularBasis2D`
        A :class:`FEBasis` class defining the finite element basis functions for the trial and test bases.
    boundary_conditions : :class:`FEMpy.Boundaries.BoundaryConditions2D`
        A :class:`BoundaryConditions` class defining the boundary conditions on the domain.

    Examples
    --------
    >>> import numpy as np
    >>> from FEMpy import TriangularMesh2D, TriangularBasis2D, BoundaryConditions2D, Poisson2D
    >>> left, right, bottom, top = -1, 1, -1, 1
    >>> h = 1
    >>> def dirichlet_funct(coord):
    ...     x, y = coord
    ...     if x == -1:
    ...         return np.exp(-1 + y)
    ...     elif x == 1:
    ...         return np.exp(1 + y)
    ...     elif y == 1:
    ...         return np.exp(x + 1)
    ...     elif y == -1:
    ...         return np.exp(x - 1)
    >>> coeff_funct = lambda coord: 1
    >>> source_funct = lambda coord: -2 * np.exp(coord[0] + coord[1])
    >>> mesh = TriangularMesh2D(left, right, bottom, top, h, h, 'linear')
    >>> basis = TriangularBasis2D('linear')
    >>> boundary_node_types = ['dirichlet'] * mesh.boundary_nodes.shape[1]
    >>> boundary_edge_types = ['dirichlet'] * (mesh.boundary_edges.shape[1] - 1)
    >>> bcs = BoundaryConditions2D(mesh, boundary_node_types, boundary_edge_types, dirichlet_fun=dirichlet_funct)
    >>> poisson_eq = Poisson2D(mesh, basis, basis, bcs)
    >>> poisson_eq.solve(coeff_funct, source_funct)
    array([0.13533528, 0.36787944, 1., 0.36787944, 1., 2.71828183, 1., 2.71828183, 7.3890561])
    """
    def __init__(self, mesh, fe_trial_basis, fe_test_basis, boundary_conditions):
        super().__init__(mesh, fe_trial_basis, fe_test_basis, boundary_conditions)
    def solve(self, coeff_fun, source_fun):
        r"""
        Method that performs the finite element solution algorithm.

        Calls the assembly functions `FEMpy.Assemblers.assemble_matrix` and `FEMpy.Assemblers.assemble_vector` to
        create the stiffness matrix and load vector respectively. Then, applies the boundary condition treatments to
        the matrix and vector. Finally, solves the linear system :math:`A\mathbf{x} = \mathbf{b}`.

        Parameters
        ----------
        coeff_fun : function
            Function name of the coefficient function `c`(`x`, `y`) in the Poisson equation.
        source_fun : function
            The nonhomogeneous source function `f`(`x`, `y`) of the Poisson equation.

        Returns
        -------
        ndarray
            The nodal solution vector.
        """
        # Create our stiffness matrix from the x- and y-derivative contributions
        stiffness_matrix1 = assemble_matrix(coeff_fun,
                                            mesh=self._mesh,
                                            trial_basis=self._fe_trial, test_basis=self._fe_test,
                                            derivative_order_trial=(1, 0), derivative_order_test=(1, 0))
        stiffness_matrix2 = assemble_matrix(coeff_fun,
                                            mesh=self._mesh,
                                            trial_basis=self._fe_trial, test_basis=self._fe_test,
                                            derivative_order_trial=(0, 1), derivative_order_test=(0, 1))
        stiffness_matrix = (stiffness_matrix1.tocsr() + stiffness_matrix2.tocsr()).tolil()

        # Create our load vector
        load_vector = assemble_vector(source_fun,
                                      mesh=self._mesh,
                                      test_basis=self._fe_test,
                                      derivative_order_test=(0, 0))

        # Modify the stiffness matrix and the load vector to accommodate the boundary conditions specified.
        stiffness_matrix, load_vector = self._boundary_conditions.treat_robin(stiffness_matrix, load_vector)
        load_vector = self._boundary_conditions.treat_neumann(load_vector)
        stiffness_matrix, load_vector = self._boundary_conditions.treat_dirichlet(stiffness_matrix, load_vector)

        # Convert the stiffness matrix to a compressed sparse row matrix as it will be more efficient for
        # matrix-vector products
        stiffness_matrix = stiffness_matrix.tocsr()

        # Solve the system for our result
        self._nodal_solution = linalg.spsolve(stiffness_matrix, load_vector)

        return self._nodal_solution
def fe_solution(self, coords, local_sol, vertices, derivative_order):
"""
Defines the functional solution piecewise on the finite on the finite elements.
Uses the solution vector and the basis function to define a piecewise continuous solution over the element.
Parameters
----------
coords : float or array_like
A value or array of points to evaluate the function on.
local_sol : array_like
Finite element solution node vector local to the element `En`.
vertices : array_like
Global node coordinates for the mesh element `En`.
derivative_order : tuple of int
The derivative orders in the x- and y-directions to take the basis function to.
Returns
-------
float
Solution at all points in `coords` in element.
"""
# Set the basis type from the test basis
basis_type = self._fe_test.basis_type
if basis_type == 201: # linear basis
num_local_basis = 3
elif basis_type == 202: # quadratic basis
num_local_basis = 6
else:
raise ValueError('Unknown basis type')
fun_value = np.sum([local_sol[k] * self._fe_test.fe_local_basis(coords, vertices, basis_idx=k,
derivative_order=derivative_order)
for k in range(num_local_basis)])
return fun_value
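The sum above is just :math:`u_h = \sum_k u_k \varphi_k`. A toy version with an explicit P1 basis on the reference triangle (`p1_reference_basis` is a hypothetical stand-in for `fe_local_basis` with `basis_type == 201`):

```python
import numpy as np

# P1 basis on the reference triangle: phi_0 = 1 - x - y, phi_1 = x, phi_2 = y.
def p1_reference_basis(coords, basis_idx):
    x, y = coords
    return [1.0 - x - y, x, y][basis_idx]

local_sol = np.array([0.0, 1.0, 2.0])  # made-up nodal values u_k
point = (0.25, 0.25)

# u_h(point) = sum_k u_k * phi_k(point)
u_h = sum(local_sol[k] * p1_reference_basis(point, k) for k in range(3))
# u_h == 0.0 * 0.5 + 1.0 * 0.25 + 2.0 * 0.25 == 0.75
```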
@copy_docstring_from(Poisson1D.l_inf_error)
def l_inf_error(self, exact_sol):
# Get the number of elements from the mesh
num_elements = self._mesh.num_elements_x
# Initialize the element maximum error vector
element_max = np.empty(num_elements)
for n in range(num_elements):
# Extract the global node coordinates for the element E_n
vertices = self._mesh.get_vertices(n).T
# Select for the solution at the local finite element nodes
local_sol = self._nodal_solution[self._mesh.Tb[:, n]]
# Generate grid of points local to the element
# grid_points = sample_points_in_triangle(50)
grid_points = np.array([[(1+0)/2, (1-0)*(1+0)/4],
[(1+np.sqrt(3/5))/2, (1-np.sqrt(3/5))*(1+np.sqrt(3/5))/4],
[(1+np.sqrt(3/5))/2, (1-np.sqrt(3/5))*(1-np.sqrt(3/5))/4],
[(1-np.sqrt(3/5))/2, (1+np.sqrt(3/5))*(1+np.sqrt(3/5))/4],
[(1-np.sqrt(3/5))/2, (1+np.sqrt(3/5))*(1-np.sqrt(3/5))/4],
[(1+0)/2, (1-0)*(1+np.sqrt(3/5))/4],
[(1+0)/2, (1-0)*(1-np.sqrt(3/5))/4],
[(1+np.sqrt(3/5))/2, (1-np.sqrt(3/5))*(1+0)/4],
[(1-np.sqrt(3/5))/2, (1+np.sqrt(3/5))*(1+0)/4]]).T
# Transform the sampled grid points into our triangle
x1, y1 = vertices[0]
x2, y2 = vertices[1]
x3, y3 = vertices[2]
# The affine transformation from the standard triangle to our element triangle is
new_x = (-x1 + x2) * grid_points[0] + (-x1 + x3) * grid_points[1] + x1
new_y = (-y1 + y2) * grid_points[0] + (-y1 + y3) * grid_points[1] + y1
element_points = np.vstack([new_x, new_y])
# Compute the error on each evaluation node point in the element
element_error = np.abs(exact_sol(element_points) - self.fe_solution(element_points, local_sol, vertices,
derivative_order=(0, 0)))
# Find the maximum error in the element
element_max[n] = np.max(element_error)
# Return the maximum error over all elements
return np.max(element_max)
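The affine transform used in the loop above maps the reference triangle with corners (0, 0), (1, 0), (0, 1) onto an element; a quick standalone check with made-up vertices:

```python
# Element vertices (made up for illustration).
x1, y1 = 1.0, 1.0
x2, y2 = 3.0, 1.0
x3, y3 = 1.0, 4.0

def affine(u, v):
    """Map reference coordinates (u, v) into the element triangle."""
    return ((x2 - x1) * u + (x3 - x1) * v + x1,
            (y2 - y1) * u + (y3 - y1) * v + y1)

# The reference corners land exactly on the element vertices.
assert affine(0, 0) == (1.0, 1.0)
assert affine(1, 0) == (3.0, 1.0)
assert affine(0, 1) == (1.0, 4.0)
```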
def __l2_hsemi_norm_error(self, exact_sol, derivative_order):
"""
Computes either the L2 norm error or the H1 semi-norm error.
Computes either the L2 norm error or the H1 semi-norm error depending on the derivative order specified. If
``derivative_order == (0, 0)`` then the L2 norm error is computed; if ``derivative_order == (1, 0)`` or
``derivative_order == (0, 1)`` then the corresponding H1 semi-norm error component is computed.
.. note:: This is designed to be called via the `l2_error` and `h1_seminorm_error` methods which will provide
the appropriate `derivative_order`.
Parameters
----------
exact_sol : function
The analytical solution. If the H1 semi-norm error is desired, this must be the first derivative of the
analytical solution.
derivative_order : tuple of int
The derivative orders in the x- and y-directions to take the basis function to.
Returns
-------
float
The L2 norm error or the H1 semi-norm error of the finite element solution over the domain evaluated
element-wise.
"""
# Get the number of elements from the mesh
num_elements = self._mesh.num_elements_x
# Initialize the element error vector
element_error = np.empty(num_elements)
for n in range(num_elements):
# Extract the global node coordinates for the element E_n
vertices = self._mesh.get_vertices(n).T
# Select for the solution at the local finite element nodes
local_sol = self._nodal_solution[self._mesh.Tb[:, n]]
# Define the integrand
def integrand(x):
return (exact_sol(x) - self.fe_solution(x, local_sol, vertices, derivative_order))**2
# Integrate
element_error[n] = dbquad_triangle(integrand, vertices)[0]
# Return the sqrt of the sum of the errors
return np.sqrt(np.sum(element_error))
@copy_docstring_from(Poisson1D.l2_error)
def l2_error(self, exact_sol):
return self.__l2_hsemi_norm_error(exact_sol, (0, 0))
def h1_seminorm_error(self, diff_exact_sol):
"""
The H1 semi-norm error of the finite element solution compared against the given analytical solution.
Parameters
----------
diff_exact_sol : tuple of function
A tuple of the first derivatives, in the x- and y-directions respectively, of the analytical solution to the
Poisson equation.
Returns
-------
float
The full H1 semi-norm error of the finite element solution over the domain evaluated element-wise.
"""
dx_exact_sol, dy_exact_sol = diff_exact_sol
h1_x_error = self.__l2_hsemi_norm_error(dx_exact_sol, (1, 0))
h1_y_error = self.__l2_hsemi_norm_error(dy_exact_sol, (0, 1))
return np.sqrt(h1_x_error**2 + h1_y_error**2)
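The directional components are combined Euclidean-style; with a made-up 3-4-5 pair of component errors the result is exact:

```python
import math

h1_x_error, h1_y_error = 3e-3, 4e-3  # made-up directional error components
h1_error = math.sqrt(h1_x_error**2 + h1_y_error**2)  # 5e-3
```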
# Source: QUANTAXIS/QAFetch/QAQuery_Advance.py (repo: liujiannong/QUANTAXIS, MIT License)
# coding: utf-8
#
# The MIT License (MIT)
#
# Copyright (c) 2016-2018 yutiansut/QUANTAXIS
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
import pandas as pd
from pandas import DataFrame
from QUANTAXIS.QAData import (QA_DataStruct_Index_day, QA_DataStruct_Index_min,
QA_DataStruct_Stock_block,
QA_DataStruct_Stock_day, QA_DataStruct_Stock_min,
QA_DataStruct_Stock_transaction)
from QUANTAXIS.QAFetch.QAQuery import (QA_fetch_indexlist_day,
QA_fetch_stocklist_day,
QA_fetch_stocklist_min)
from QUANTAXIS.QAUtil import (DATABASE, QA_Setting, QA_util_date_stamp,
QA_util_date_valid, QA_util_log_info,
QA_util_time_stamp)
"""
Fetch data from the database as requested and convert it into numpy structures.
"""
def QA_fetch_stock_day_adv(
code,
start, end=None,
if_drop_index=False,
collections=DATABASE.stock_day):
'Fetch stock daily bars'
end = start if end is None else end
start = str(start)[0:10]
end = str(end)[0:10]
if isinstance(code, str):
if QA_util_date_valid(end):
__data = []
for item in collections.find({
'code': str(code)[0:6], "date_stamp": {
"$lte": QA_util_date_stamp(end),
"$gte": QA_util_date_stamp(start)}}):
__data.append([str(item['code']), float(item['open']), float(item['high']), float(
item['low']), float(item['close']), float(item['vol']), float(item['amount']), item['date']])
__data = DataFrame(__data, columns=[
'code', 'open', 'high', 'low', 'close', 'volume', 'amount', 'date'])
__data['date'] = pd.to_datetime(__data['date'])
return QA_DataStruct_Stock_day(__data.query('volume>1').set_index(['date', 'code'], drop=if_drop_index))
else:
QA_util_log_info('something wrong with date')
elif isinstance(code, list):
return QA_DataStruct_Stock_day(pd.concat(QA_fetch_stocklist_day(code, [start, end])).query('volume>1').set_index(['date', 'code'], drop=if_drop_index))
#print([Greenlet.get(item).data for item in gevent.joinall([gevent.spawn(QA_fetch_stock_day_adv,_code,start,end,if_drop_index) for _code in code])])
# return QA_DataStruct_Stock_day(pd.concat([Greenlet.get(item).data for item in gevent.joinall([gevent.spawn(QA_fetch_stock_day_adv,_code,start,end,if_drop_index) for _code in code])]).set_index(['date', 'code'], drop=if_drop_index))
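The MongoDB filter these fetchers build has the shape below. `fake_date_stamp` is a hypothetical stand-in for `QA_util_date_stamp` (the real helper returns a numeric timestamp):

```python
def fake_date_stamp(day):  # hypothetical stand-in, NOT QUANTAXIS's helper
    return day.replace('-', '')

code, start, end = '000001', '2020-01-01', '2020-02-01'
query_filter = {
    'code': str(code)[0:6],
    'date_stamp': {
        '$gte': fake_date_stamp(start),
        '$lte': fake_date_stamp(end),
    },
}
# query_filter['date_stamp']['$gte'] == '20200101'
```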
def QA_fetch_stocklist_day_adv(
code,
start, end=None,
if_drop_index=False,
collections=DATABASE.stock_day):
'Fetch stock daily bars'
return QA_DataStruct_Stock_day(pd.concat(QA_fetch_stocklist_day(code, [start, end])).query('volume>1').set_index(['date', 'code'], drop=if_drop_index))
def QA_fetch_stock_min_adv(
code,
start, end=None,
frequence='1min',
if_drop_index=False,
collections=DATABASE.stock_min):
'Fetch stock minute bars'
if frequence in ['1min', '1m']:
frequence = '1min'
elif frequence in ['5min', '5m']:
frequence = '5min'
elif frequence in ['15min', '15m']:
frequence = '15min'
elif frequence in ['30min', '30m']:
frequence = '30min'
elif frequence in ['60min', '60m']:
frequence = '60min'
__data = []
end = start if end is None else end
if len(start) == 10:
start = '{} 09:30:00'.format(start)
if len(end) == 10:
end = '{} 15:00:00'.format(end)
if isinstance(code, str):
for item in collections.find({
'code': str(code), "time_stamp": {
"$gte": QA_util_time_stamp(start),
"$lte": QA_util_time_stamp(end)
}, 'type': frequence
}):
__data.append([str(item['code']), float(item['open']), float(item['high']), float(
item['low']), float(item['close']), float(item['vol']), item['datetime'], item['time_stamp'], item['date']])
__data = DataFrame(__data, columns=[
'code', 'open', 'high', 'low', 'close', 'volume', 'datetime', 'time_stamp', 'date'])
__data['datetime'] = pd.to_datetime(__data['datetime'])
return QA_DataStruct_Stock_min(__data.query('volume>1').set_index(['datetime', 'code'], drop=if_drop_index))
elif isinstance(code, list):
'Added: handle the case where a list of codes is passed'
return QA_DataStruct_Stock_min(pd.concat([QA_fetch_stock_min_adv(code_, start, end, frequence, if_drop_index).data for code_ in code]).set_index(['datetime', 'code'], drop=if_drop_index))
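The if/elif ladder that normalizes `frequence` aliases is equivalent to a lookup table; a compact sketch (not the library's API):

```python
FREQ_ALIASES = {'1m': '1min', '5m': '5min', '15m': '15min',
                '30m': '30min', '60m': '60min'}

def normalize(freq):
    """Return the canonical frequency string, passing canonical forms through."""
    return FREQ_ALIASES.get(freq, freq)

assert normalize('5m') == '5min'
assert normalize('1min') == '1min'
```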
def QA_fetch_stocklist_min_adv(
code,
start, end=None,
frequence='1min',
if_drop_index=False, collections=DATABASE.stock_min):
return QA_DataStruct_Stock_min(pd.concat(QA_fetch_stocklist_min(code, [start, end], frequence)).query('volume>1').set_index(['datetime', 'code'], drop=if_drop_index))
def QA_fetch_index_day_adv(
code,
start, end=None,
if_drop_index=False,
collections=DATABASE.index_day):
'Fetch index daily bars'
end = start if end is None else end
start = str(start)[0:10]
end = str(end)[0:10]
if isinstance(code, str):
if QA_util_date_valid(end):
__data = []
for item in collections.find({
'code': str(code)[0:6], "date_stamp": {
"$lte": QA_util_date_stamp(end),
"$gte": QA_util_date_stamp(start)}}):
__data.append([str(item['code']), float(item['open']), float(item['high']), float(
item['low']), float(item['close']), float(item['vol']), item['date']])
__data = DataFrame(__data, columns=[
'code', 'open', 'high', 'low', 'close', 'volume', 'date'])
__data['date'] = pd.to_datetime(__data['date'])
return QA_DataStruct_Index_day(__data.query('volume>1').set_index(['date', 'code'], drop=if_drop_index))
else:
QA_util_log_info('something wrong with date')
elif isinstance(code, list):
return QA_DataStruct_Index_day(pd.concat(QA_fetch_indexlist_day(code, [start, end])).query('volume>1').set_index(['date', 'code'], drop=if_drop_index))
def QA_fetch_index_min_adv(
code,
start, end=None,
frequence='1min',
if_drop_index=False,
collections=DATABASE.index_min):
'Fetch index minute bars'
if frequence in ['1min', '1m']:
frequence = '1min'
elif frequence in ['5min', '5m']:
frequence = '5min'
elif frequence in ['15min', '15m']:
frequence = '15min'
elif frequence in ['30min', '30m']:
frequence = '30min'
elif frequence in ['60min', '60m']:
frequence = '60min'
__data = []
end = start if end is None else end
if len(start) == 10:
start = '{} 09:30:00'.format(start)
if len(end) == 10:
end = '{} 15:00:00'.format(end)
if isinstance(code, str):
for item in collections.find({
'code': str(code), "time_stamp": {
"$gte": QA_util_time_stamp(start),
"$lte": QA_util_time_stamp(end)
}, 'type': frequence
}):
__data.append([str(item['code']), float(item['open']), float(item['high']), float(
item['low']), float(item['close']), float(item['vol']), item['datetime'], item['time_stamp'], item['date']])
__data = DataFrame(__data, columns=[
'code', 'open', 'high', 'low', 'close', 'volume', 'datetime', 'time_stamp', 'date'])
__data['datetime'] = pd.to_datetime(__data['datetime'])
return QA_DataStruct_Index_min(__data.query('volume>1').set_index(['datetime', 'code'], drop=if_drop_index))
elif isinstance(code, list):
return QA_DataStruct_Index_min(pd.concat([QA_fetch_index_min_adv(code_, start, end, frequence, if_drop_index).data for code_ in code]).set_index(['datetime', 'code'], drop=if_drop_index))
def QA_fetch_stock_transaction_adv(
code,
start, end=None,
if_drop_index=False,
collections=DATABASE.stock_transaction):
end = start if end is None else end
data = DataFrame([item for item in collections.find({
'code': str(code), "date": {
"$gte": start,
"$lte": end
}})])
data['datetime'] = pd.to_datetime(data['datetime'])
return QA_DataStruct_Stock_transaction(data.set_index('datetime', drop=if_drop_index))
def QA_fetch_security_list_adv(collections=DATABASE.stock_list):
'Fetch the security list'
return pd.DataFrame([item for item in collections.find()]).drop('_id', axis=1, inplace=False)
def QA_fetch_stock_list_adv(collections=DATABASE.stock_list):
'Fetch the stock list'
return pd.DataFrame([item for item in collections.find()]).drop('_id', axis=1, inplace=False)
def QA_fetch_stock_block_adv(code=None, collections=DATABASE.stock_block):
if code is not None:
data = pd.DataFrame([item for item in collections.find(
{'code': code})]).drop(['_id'], axis=1)
return QA_DataStruct_Stock_block(data.set_index('code', drop=False).drop_duplicates())
else:
data = pd.DataFrame(
[item for item in collections.find()]).drop(['_id'], axis=1)
return QA_DataStruct_Stock_block(data.set_index('code', drop=False).drop_duplicates())
# Source: boundio/sockets/__init__.py (repo: A1Liu/boundio, Apache-2.0 License)
from boundio.sockets.tasks import get_socket_task, get_socket_tasks, run_socket
from boundio.sockets.process import process_socket
from boundio.sockets.utils import process_frame
#
# __all__ = [
# 'get_socket_task', 'get_socket_tasks',
# 'run_socket','process_socket',
# 'process_frame','connect' ]
# Source: networkx/algorithms/centrality/tests/test_betweenness_centrality_subset.py (repo: jebogaert/networkx, BSD-3-Clause License)
import networkx as nx
from networkx.testing import almost_equal
class TestSubsetBetweennessCentrality:
def test_K5(self):
"""Betweenness Centrality Subset: K5"""
G = nx.complete_graph(5)
b = nx.betweenness_centrality_subset(
G, sources=[0], targets=[1, 3], weight=None
)
b_answer = {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0}
for n in sorted(G):
assert almost_equal(b[n], b_answer[n])
def test_P5_directed(self):
"""Betweenness Centrality Subset: P5 directed"""
G = nx.DiGraph()
nx.add_path(G, range(5))
b_answer = {0: 0, 1: 1, 2: 1, 3: 0, 4: 0, 5: 0}
b = nx.betweenness_centrality_subset(G, sources=[0], targets=[3], weight=None)
for n in sorted(G):
assert almost_equal(b[n], b_answer[n])
def test_P5(self):
"""Betweenness Centrality Subset: P5"""
G = nx.Graph()
nx.add_path(G, range(5))
b_answer = {0: 0, 1: 0.5, 2: 0.5, 3: 0, 4: 0, 5: 0}
b = nx.betweenness_centrality_subset(G, sources=[0], targets=[3], weight=None)
for n in sorted(G):
assert almost_equal(b[n], b_answer[n])
def test_P5_multiple_target(self):
"""Betweenness Centrality Subset: P5 multiple target"""
G = nx.Graph()
nx.add_path(G, range(5))
b_answer = {0: 0, 1: 1, 2: 1, 3: 0.5, 4: 0, 5: 0}
b = nx.betweenness_centrality_subset(
G, sources=[0], targets=[3, 4], weight=None
)
for n in sorted(G):
assert almost_equal(b[n], b_answer[n])
def test_box(self):
"""Betweenness Centrality Subset: box"""
G = nx.Graph()
G.add_edges_from([(0, 1), (0, 2), (1, 3), (2, 3)])
b_answer = {0: 0, 1: 0.25, 2: 0.25, 3: 0}
b = nx.betweenness_centrality_subset(G, sources=[0], targets=[3], weight=None)
for n in sorted(G):
assert almost_equal(b[n], b_answer[n])
def test_box_and_path(self):
"""Betweenness Centrality Subset: box and path"""
G = nx.Graph()
G.add_edges_from([(0, 1), (0, 2), (1, 3), (2, 3), (3, 4), (4, 5)])
b_answer = {0: 0, 1: 0.5, 2: 0.5, 3: 0.5, 4: 0, 5: 0}
b = nx.betweenness_centrality_subset(
G, sources=[0], targets=[3, 4], weight=None
)
for n in sorted(G):
assert almost_equal(b[n], b_answer[n])
def test_box_and_path2(self):
"""Betweenness Centrality Subset: box and path multiple target"""
G = nx.Graph()
G.add_edges_from([(0, 1), (1, 2), (2, 3), (1, 20), (20, 3), (3, 4)])
b_answer = {0: 0, 1: 1.0, 2: 0.5, 20: 0.5, 3: 0.5, 4: 0}
b = nx.betweenness_centrality_subset(
G, sources=[0], targets=[3, 4], weight=None
)
for n in sorted(G):
assert almost_equal(b[n], b_answer[n])
def test_diamond_multi_path(self):
"""Betweenness Centrality Subset: Diamond Multi Path"""
G = nx.Graph()
G.add_edges_from(
[
(1, 2),
(1, 3),
(1, 4),
(1, 5),
(1, 10),
(10, 11),
(11, 12),
(12, 9),
(2, 6),
(3, 6),
(4, 6),
(5, 7),
(7, 8),
(6, 8),
(8, 9),
]
)
b = nx.betweenness_centrality_subset(G, sources=[1], targets=[9], weight=None)
expected_b = {
1: 0,
2: 1.0 / 10,
3: 1.0 / 10,
4: 1.0 / 10,
5: 1.0 / 10,
6: 3.0 / 10,
7: 1.0 / 10,
8: 4.0 / 10,
9: 0,
10: 1.0 / 10,
11: 1.0 / 10,
12: 1.0 / 10,
}
for n in sorted(G):
assert almost_equal(b[n], expected_b[n])
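The expected values in `test_box` above can be reproduced by hand. On the box graph there are two shortest 0→3 paths, one through node 1 and one through node 2, so the pair dependency is 1/2 per interior node; the tests' 0.25 implies networkx halves this value for undirected graphs:

```python
# All shortest 0 -> 3 paths in the box graph with edges (0,1), (0,2), (1,3), (2,3).
paths = [(0, 1, 3), (0, 2, 3)]
sigma = len(paths)                            # sigma(0, 3) == 2
through_node_1 = sum(1 in p[1:-1] for p in paths)
dependency = through_node_1 / sigma           # sigma(0, 3 | 1) / sigma(0, 3)
# dependency == 0.5; halved for the undirected graph -> 0.25 as in test_box
```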
class TestBetweennessCentralitySources:
def test_K5(self):
"""Betweenness Centrality Sources: K5"""
G = nx.complete_graph(5)
b = nx.betweenness_centrality_source(G, weight=None, normalized=False)
b_answer = {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0}
for n in sorted(G):
assert almost_equal(b[n], b_answer[n])
def test_P3(self):
"""Betweenness Centrality Sources: P3"""
G = nx.path_graph(3)
b_answer = {0: 0.0, 1: 1.0, 2: 0.0}
b = nx.betweenness_centrality_source(G, weight=None, normalized=True)
for n in sorted(G):
assert almost_equal(b[n], b_answer[n])
class TestEdgeSubsetBetweennessCentrality:
def test_K5(self):
"""Edge betweenness subset centrality: K5"""
G = nx.complete_graph(5)
b = nx.edge_betweenness_centrality_subset(
G, sources=[0], targets=[1, 3], weight=None
)
b_answer = dict.fromkeys(G.edges(), 0)
b_answer[(0, 3)] = b_answer[(0, 1)] = 0.5
for n in sorted(G.edges()):
assert almost_equal(b[n], b_answer[n])
def test_P5_directed(self):
"""Edge betweenness subset centrality: P5 directed"""
G = nx.DiGraph()
nx.add_path(G, range(5))
b_answer = dict.fromkeys(G.edges(), 0)
b_answer[(0, 1)] = b_answer[(1, 2)] = b_answer[(2, 3)] = 1
b = nx.edge_betweenness_centrality_subset(
G, sources=[0], targets=[3], weight=None
)
for n in sorted(G.edges()):
assert almost_equal(b[n], b_answer[n])
def test_P5(self):
"""Edge betweenness subset centrality: P5"""
G = nx.Graph()
nx.add_path(G, range(5))
b_answer = dict.fromkeys(G.edges(), 0)
b_answer[(0, 1)] = b_answer[(1, 2)] = b_answer[(2, 3)] = 0.5
b = nx.edge_betweenness_centrality_subset(
G, sources=[0], targets=[3], weight=None
)
for n in sorted(G.edges()):
assert almost_equal(b[n], b_answer[n])
def test_P5_multiple_target(self):
"""Edge betweenness subset centrality: P5 multiple target"""
G = nx.Graph()
nx.add_path(G, range(5))
b_answer = dict.fromkeys(G.edges(), 0)
b_answer[(0, 1)] = b_answer[(1, 2)] = b_answer[(2, 3)] = 1
b_answer[(3, 4)] = 0.5
b = nx.edge_betweenness_centrality_subset(
G, sources=[0], targets=[3, 4], weight=None
)
for n in sorted(G.edges()):
assert almost_equal(b[n], b_answer[n])
def test_box(self):
"""Edge betweenness subset centrality: box"""
G = nx.Graph()
G.add_edges_from([(0, 1), (0, 2), (1, 3), (2, 3)])
b_answer = dict.fromkeys(G.edges(), 0)
b_answer[(0, 1)] = b_answer[(0, 2)] = 0.25
b_answer[(1, 3)] = b_answer[(2, 3)] = 0.25
b = nx.edge_betweenness_centrality_subset(
G, sources=[0], targets=[3], weight=None
)
for n in sorted(G.edges()):
assert almost_equal(b[n], b_answer[n])
def test_box_and_path(self):
"""Edge betweenness subset centrality: box and path"""
G = nx.Graph()
G.add_edges_from([(0, 1), (0, 2), (1, 3), (2, 3), (3, 4), (4, 5)])
b_answer = dict.fromkeys(G.edges(), 0)
b_answer[(0, 1)] = b_answer[(0, 2)] = 0.5
b_answer[(1, 3)] = b_answer[(2, 3)] = 0.5
b_answer[(3, 4)] = 0.5
b = nx.edge_betweenness_centrality_subset(
G, sources=[0], targets=[3, 4], weight=None
)
for n in sorted(G.edges()):
assert almost_equal(b[n], b_answer[n])
def test_box_and_path2(self):
"""Edge betweenness subset centrality: box and path multiple target"""
G = nx.Graph()
G.add_edges_from([(0, 1), (1, 2), (2, 3), (1, 20), (20, 3), (3, 4)])
b_answer = dict.fromkeys(G.edges(), 0)
b_answer[(0, 1)] = 1.0
b_answer[(1, 20)] = b_answer[(3, 20)] = 0.5
b_answer[(1, 2)] = b_answer[(2, 3)] = 0.5
b_answer[(3, 4)] = 0.5
b = nx.edge_betweenness_centrality_subset(
G, sources=[0], targets=[3, 4], weight=None
)
for n in sorted(G.edges()):
assert almost_equal(b[n], b_answer[n])
# Source: src/winnow/models/namespace.py (repo: opendesk/schema, Unlicense)
import winnow
from winnow.models.base import WinnowVersion
class WinnowNamespace(WinnowVersion):
pass
# Source: test_holidays.py (repo: leogregianin/stock-exchange-holidays, MIT License)
from datetime import date
from unittest import TestCase
from stock_exchange_holidays import Holidays, NYSE, CME, B3
class TestNYSE(TestCase):
def setUp(self):
self.holidays = Holidays(exchange=NYSE())
self.nyse_holidays = self.holidays.get_holidays()
self.all_holidays = {
date(2020, 1, 1): True,
date(2020, 1, 20): True,
date(2020, 2, 17): True,
date(2020, 4, 10): True,
date(2020, 5, 25): True,
date(2020, 7, 4): True,
date(2020, 9, 7): True,
date(2020, 11, 26): True,
date(2020, 12, 25): True,
date(2020, 12, 26): False,
date(2021, 1, 1): True,
date(2021, 1, 18): True,
date(2021, 2, 15): True,
date(2021, 4, 2): True,
date(2021, 5, 31): True,
date(2021, 7, 4): True,
date(2021, 9, 6): True,
date(2021, 11, 25): True,
date(2021, 12, 25): True,
date(2021, 12, 26): False,
date(2022, 1, 1): True,
date(2022, 1, 17): True,
date(2022, 2, 21): True,
date(2022, 4, 15): True,
date(2022, 5, 30): True,
date(2022, 7, 4): True,
date(2022, 9, 5): True,
date(2022, 11, 24): True,
date(2022, 12, 25): True,
date(2022, 12, 26): False,
date(2023, 1, 1): True,
date(2023, 1, 16): True,
date(2023, 2, 20): True,
date(2023, 4, 7): True,
date(2023, 5, 29): True,
date(2023, 7, 4): True,
date(2023, 9, 4): True,
date(2023, 11, 23): True,
date(2023, 12, 25): True,
date(2023, 12, 26): False,
}
def test_nyse_all_holidays(self):
for holiday in self.all_holidays.items():
if holiday[1]:
self.assertTrue(self.holidays.is_date_holiday(holiday[0]))
else:
self.assertFalse(self.holidays.is_date_holiday(holiday[0]))
def test_nyse_first_day_year_is_holiday(self):
get_date = date(2020, 1, 1)
self.assertTrue(self.holidays.is_date_holiday(get_date))
def test_nyse_independence_day_is_holiday(self):
get_date = date(2022, 7, 4)
self.assertTrue(self.holidays.is_date_holiday(get_date))
def test_nyse_random_date_is_not_holiday(self):
get_date = date(2020, 1, 10)
self.assertFalse(self.holidays.is_date_holiday(get_date))
def test_nyse_holidays_2020(self):
year = 2020
holidays_by_year = self.holidays.get_holidays_by_year(year)
self.assertEqual(len(holidays_by_year), 10)
def test_nyse_holidays_2021(self):
year = 2021
holidays_by_year = self.holidays.get_holidays_by_year(year)
self.assertEqual(len(holidays_by_year), 10)
def test_nyse_holidays_2022(self):
year = 2022
holidays_by_year = self.holidays.get_holidays_by_year(year)
self.assertEqual(len(holidays_by_year), 11)
def test_nyse_holidays_2023(self):
year = 2023
holidays_by_year = self.holidays.get_holidays_by_year(year)
self.assertEqual(len(holidays_by_year), 11)
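These tests reduce to date-membership checks; a minimal sketch of such a lookup (assumed internals, not the package's actual implementation), built from holidays the tests above assert:

```python
from datetime import date

# Tiny stand-in holiday calendar built from dates the tests assert.
HOLIDAYS_2020 = {date(2020, 1, 1), date(2020, 7, 4), date(2020, 12, 25)}

def is_date_holiday(d):
    return d in HOLIDAYS_2020

assert is_date_holiday(date(2020, 1, 1))        # New Year's Day
assert not is_date_holiday(date(2020, 1, 10))   # ordinary trading day
```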
class TestCME(TestCase):
def setUp(self):
self.holidays = Holidays(exchange=CME())
self.cme_holidays = self.holidays.get_holidays()
self.all_holidays = {
date(2020, 1, 1): True,
date(2020, 1, 20): True,
date(2020, 2, 17): True,
date(2020, 4, 10): True,
date(2020, 5, 25): True,
date(2020, 7, 4): True,
date(2020, 9, 7): True,
date(2020, 11, 26): True,
date(2020, 12, 25): True,
date(2020, 12, 26): False,
date(2021, 1, 1): True,
date(2021, 1, 18): True,
date(2021, 2, 15): True,
date(2021, 4, 2): True,
date(2021, 5, 31): True,
date(2021, 7, 4): True,
date(2021, 9, 6): True,
date(2021, 11, 25): True,
date(2021, 12, 25): True,
date(2021, 12, 26): False,
date(2022, 1, 1): True,
date(2022, 1, 17): True,
date(2022, 2, 21): True,
date(2022, 4, 15): True,
date(2022, 5, 30): True,
date(2022, 7, 4): True,
date(2022, 9, 5): True,
date(2022, 11, 24): True,
date(2022, 12, 25): True,
date(2022, 12, 26): False,
}
def test_cme_all_holidays(self):
for holiday in self.all_holidays.items():
if holiday[1]:
self.assertTrue(self.holidays.is_date_holiday(holiday[0]))
else:
self.assertFalse(self.holidays.is_date_holiday(holiday[0]))
def test_cme_first_day_year_is_holiday(self):
get_date = date(2020, 1, 1)
self.assertTrue(self.holidays.is_date_holiday(get_date))
def test_cme_independence_day_is_holiday(self):
get_date = date(2022, 7, 4)
self.assertTrue(self.holidays.is_date_holiday(get_date))
def test_cme_random_date_is_not_holiday(self):
get_date = date(2020, 1, 10)
self.assertFalse(self.holidays.is_date_holiday(get_date))
def test_cme_holidays_2020(self):
year = 2020
holidays_by_year = self.holidays.get_holidays_by_year(year)
self.assertEqual(len(holidays_by_year), 10)
def test_cme_holidays_2021(self):
year = 2021
holidays_by_year = self.holidays.get_holidays_by_year(year)
self.assertEqual(len(holidays_by_year), 10)
def test_cme_holidays_2022(self):
year = 2022
holidays_by_year = self.holidays.get_holidays_by_year(year)
self.assertEqual(len(holidays_by_year), 11)
class TestB3(TestCase):
def setUp(self):
self.holidays = Holidays(exchange=B3())
self.b3_holidays = self.holidays.get_holidays()
self.all_holidays = {
date(2020, 1, 1): True,
date(2020, 2, 24): True,
date(2020, 2, 25): True,
date(2020, 4, 10): True,
date(2020, 4, 21): True,
date(2020, 5, 1): True,
date(2020, 6, 11): True,
date(2020, 7, 9): True,
date(2020, 9, 7): True,
date(2020, 10, 12): True,
date(2020, 11, 2): True,
date(2020, 11, 15): True,
date(2020, 12, 25): True,
date(2020, 12, 26): False,
date(2020, 12, 31): True,
date(2021, 1, 1): True,
date(2021, 1, 25): True,
date(2021, 2, 15): True,
date(2021, 2, 16): True,
date(2021, 4, 2): True,
date(2021, 4, 21): True,
date(2021, 5, 1): True,
date(2021, 6, 3): True,
date(2021, 7, 9): True,
date(2021, 9, 7): True,
date(2021, 10, 12): True,
date(2021, 11, 2): True,
date(2021, 11, 15): True,
date(2021, 12, 25): True,
date(2021, 12, 31): True,
date(2021, 12, 26): False,
date(2022, 1, 1): True,
date(2022, 2, 28): True,
date(2022, 3, 1): True,
date(2022, 4, 15): True,
date(2022, 4, 21): True,
date(2022, 5, 1): True,
date(2022, 6, 16): True,
date(2022, 9, 7): True,
date(2022, 10, 12): True,
date(2022, 11, 2): True,
date(2022, 11, 15): True,
date(2022, 12, 25): True,
date(2022, 12, 31): True,
date(2022, 12, 26): False,
}
    def test_b3_all_holidays(self):
        for day, expected in self.all_holidays.items():
            if expected:
                self.assertTrue(self.holidays.is_date_holiday(day))
            else:
                self.assertFalse(self.holidays.is_date_holiday(day))

    def test_b3_first_day_year_is_holiday(self):
        get_date = date(2020, 1, 1)
        self.assertTrue(self.holidays.is_date_holiday(get_date))

    def test_b3_random_date_is_not_holiday(self):
        get_date = date(2020, 1, 10)
        self.assertFalse(self.holidays.is_date_holiday(get_date))

    def test_b3_holidays_2020(self):
        year = 2020
        holidays_by_year = self.holidays.get_holidays_by_year(year)
        self.assertEqual(len(holidays_by_year), 14)

    def test_b3_holidays_2021(self):
        year = 2021
        holidays_by_year = self.holidays.get_holidays_by_year(year)
        self.assertEqual(len(holidays_by_year), 15)

    def test_b3_holidays_2022(self):
        year = 2022
        holidays_by_year = self.holidays.get_holidays_by_year(year)
        self.assertEqual(len(holidays_by_year), 13)
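A detail worth noting about the table-driven `test_b3_all_holidays` loop: a plain assert inside a `for` stops the whole test at the first mismatching date. Wrapping each row in `unittest`'s `subTest` reports every failing date independently. A minimal, self-contained sketch of the idiom (`is_weekend` is a hypothetical stand-in for `is_date_holiday`):

```python
import unittest
from datetime import date


def is_weekend(day):
    # Hypothetical predicate standing in for Holidays.is_date_holiday.
    return day.weekday() >= 5


class TestTableDriven(unittest.TestCase):
    def test_weekend_table(self):
        cases = {
            date(2020, 1, 4): True,   # Saturday
            date(2020, 1, 6): False,  # Monday
        }
        for day, expected in cases.items():
            # Each date becomes its own reported sub-result on failure.
            with self.subTest(day=day):
                self.assertEqual(is_weekend(day), expected)


suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestTableDriven)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

With `subTest`, one bad date in the table no longer hides the rest of the mismatches.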
# rudder_airflow_provider/test/operators/test_rudderstack_operator.py | rudderlabs/rudder-airflow-provider | MIT
import unittest
from unittest import mock

from rudder_airflow_provider.operators.rudderstack import RudderstackOperator


class TestRudderstackOperator(unittest.TestCase):
    @mock.patch('rudder_airflow_provider.operators.rudderstack.RudderstackHook.poll_for_status')
    @mock.patch('rudder_airflow_provider.operators.rudderstack.RudderstackHook.trigger_sync')
    def test_operator_trigger_sync_without_wait(self, mock_hook_sync: mock.Mock,
                                                mock_poll_status: mock.Mock):
        mock_hook_sync.return_value = None
        operator = RudderstackOperator(source_id='some-source-id',
                                       wait_for_completion=False, task_id='some-task-id')
        operator.execute(context=None)
        mock_hook_sync.assert_called_once()
        mock_poll_status.assert_not_called()

    @mock.patch('rudder_airflow_provider.operators.rudderstack.RudderstackHook.poll_for_status')
    @mock.patch('rudder_airflow_provider.operators.rudderstack.RudderstackHook.trigger_sync')
    def test_operator_trigger_sync_with_wait(self, mock_hook_sync: mock.Mock,
                                             mock_poll_status: mock.Mock):
        mock_hook_sync.return_value = None
        mock_poll_status.return_value = None
        operator = RudderstackOperator(source_id='some-source-id',
                                       wait_for_completion=True, task_id='some-task-id')
        operator.execute(context=None)
        mock_hook_sync.assert_called_once()
        mock_poll_status.assert_called_once()


if __name__ == '__main__':
    unittest.main()
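One subtlety these tests rely on: stacked `@mock.patch` decorators are applied bottom-up, so the decorator closest to the function supplies the first injected mock argument (which is why `trigger_sync`, the lower decorator, maps to `mock_hook_sync`, the first parameter). A small self-contained illustration of the ordering, patching `os.path` purely for demonstration:

```python
import os.path
from unittest import mock


@mock.patch('os.path.exists')  # outermost patch -> last mock argument
@mock.patch('os.path.isdir')   # innermost patch -> first mock argument
def demo(mock_isdir, mock_exists):
    mock_isdir.return_value = True
    mock_exists.return_value = False
    # Both calls hit the mocks while the patches are active.
    return os.path.isdir('/anywhere'), os.path.exists('/anywhere')
```

Calling `demo()` returns `(True, False)`, confirming the bottom decorator bound to the first argument.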
# pocket/__init__.py | Seth-Park/pocket | BSD-3-Clause
from . import data
from . import core
from . import utils
from . import models
# src/riski/schema/__init__.py | GFDRR/RISKi | 0BSD
from .common import *
from .exposure import *
from .hazard import *
from .loss import *
from .vulnerability import *
# src/openpersonen/converters/api2stufbg/views/base/__init__.py | maykinmedia/open-personen | RSA-MD
from .nested_viewset import *
# CodeWars/Python/5 kyu/int32 to IPv4/main.py | opastushkov/codewars-solutions | MIT
def int32_to_ip(int32):
    num = bin(int32)[2:].zfill(32)
    return '{:d}.{:d}.{:d}.{:d}'.format(int(num[0:8], 2), int(num[8:16], 2), int(num[16:24], 2), int(num[24:32], 2))
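A quick cross-check of the kata solution against the standard library (this snippet is illustrative and not part of the original file): packing the integer as a big-endian unsigned 32-bit value and rendering it with `socket.inet_ntoa` must agree with the string-slicing approach.

```python
import socket
import struct


def int32_to_ip(int32):
    # Zero-pad the binary form to 32 bits, then read it in 8-bit octets.
    num = bin(int32)[2:].zfill(32)
    return '{:d}.{:d}.{:d}.{:d}'.format(
        int(num[0:8], 2), int(num[8:16], 2),
        int(num[16:24], 2), int(num[24:32], 2))


# inet_ntoa formats a big-endian 4-byte value in dotted-quad notation,
# so it serves as a reference implementation for the conversion.
for n in (0, 2154959208, 2 ** 32 - 1):
    assert int32_to_ip(n) == socket.inet_ntoa(struct.pack('!I', n))
```

For instance, `2154959208` is `128.114.17.104` under both implementations.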
# Python/empire/history/__init__.py | Tombmyst/Empire | Apache-2.0
from empire.history.history_stack import HistoryStack
from empire.history.memento import Memento
from empire.history.history_decorator import history
__all__ = ['HistoryStack', 'Memento', 'history']
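The explicit `__all__` above pins which names a star import re-exports from the package. A self-contained illustration of that mechanism, using a synthetic module rather than the `empire` package itself:

```python
import sys
import types

# Build a throwaway module with one listed and one unlisted name.
demo = types.ModuleType('demo_history')
demo.HistoryStack = type('HistoryStack', (), {})
demo.internal_helper = object()
demo.__all__ = ['HistoryStack']
sys.modules['demo_history'] = demo

namespace = {}
exec('from demo_history import *', namespace)
# Only names listed in __all__ survive the star import.
assert 'HistoryStack' in namespace
assert 'internal_helper' not in namespace
```

Without `__all__`, a star import would instead pull in every public (non-underscore) module-level name.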
# birdy/ipyleafletwfs/__init__.py | generic-ci-org/birdy | Apache-2.0
from .base import IpyleafletWFS  # noqa: F401
# modispds/products.py | matthewhanson/modis-ingestor | MIT
# MODIS product configuration
products = {
    'MCD43A4.006': {
        'day_offset': 8,
        'bandnames':
            ['B%sqa' % str(i).zfill(2) for i in range(1, 8)] +
            ['B%s' % str(i).zfill(2) for i in range(1, 8)],
        'overviews': ([False] * 7) + ([True] * 7)
    },
    'MOD09GA.006': {
        'day_offset': 0,
        'bandnames':
            ['numobs1km', 'state', 'senzen', 'senaz', 'range', 'solzen', 'solaz',
             'geoflags', 'orbit', 'granule', 'numobs500m'] +
            ['B%s' % str(i).zfill(2) for i in range(1, 8)] +
            ['qc500m', 'obscov', 'obsnum', 'qscan'],
        'overviews': ([False] * 11) + ([True] * 7) + ([False] * 4)
    },
    'MYD09GA.006': {
        'day_offset': 0,
        'bandnames':
            ['numobs1km', 'state', 'senzen', 'senaz', 'range', 'solzen', 'solaz',
             'geoflags', 'orbit', 'granule', 'numobs500m'] +
            ['B%s' % str(i).zfill(2) for i in range(1, 8)] +
            ['qc500m', 'obscov', 'obsnum', 'qscan'],
        'overviews': ([False] * 11) + ([True] * 7) + ([False] * 4)
    },
    'MOD09GQ.006': {
        'day_offset': 0,
        'bandnames': ['numobs', 'B01', 'B02', 'qc', 'obscov', 'obsnum', 'orbit', 'granule'],
        'overviews': [False, True, True, False, False, False, False, False]
    },
    'MYD09GQ.006': {
        'day_offset': 0,
        'bandnames': ['numobs', 'B01', 'B02', 'qc', 'obscov', 'obsnum', 'orbit', 'granule'],
        'overviews': [False, True, True, False, False, False, False, False]
    }
}
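Each product's `overviews` list is index-aligned with its `bandnames` list, so a consumer can zip them to learn which bands get overviews built. A minimal sketch over a standalone copy of the MCD43A4.006 entry (redeclared here only so the snippet is self-contained):

```python
# Standalone copy of the MCD43A4.006 entry for illustration.
mcd43a4 = {
    'day_offset': 8,
    'bandnames':
        ['B%sqa' % str(i).zfill(2) for i in range(1, 8)] +
        ['B%s' % str(i).zfill(2) for i in range(1, 8)],
    'overviews': ([False] * 7) + ([True] * 7),
}

# The flag at index i applies to the band at index i.
overview_by_band = dict(zip(mcd43a4['bandnames'], mcd43a4['overviews']))
assert len(mcd43a4['bandnames']) == len(mcd43a4['overviews'])
assert overview_by_band['B01'] is True      # reflectance bands get overviews
assert overview_by_band['B01qa'] is False   # QA bands do not
```

Keeping the two lists the same length is the implicit invariant every entry in the table must satisfy.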
# bonobo/util/collections.py | winsmith/bonobo | Apache-2.0
import bisect
class sortedlist(list):
    def insort(self, x):
        bisect.insort(self, x)
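`sortedlist` relies on `bisect.insort` keeping an already-sorted list sorted on every insertion: it binary-searches for the insertion point, then calls `list.insert`. A short usage sketch:

```python
import bisect


class sortedlist(list):
    def insort(self, x):
        # Binary search for the insertion point, then insert in place,
        # so the list stays sorted as long as every add goes via insort.
        bisect.insort(self, x)


values = sortedlist()
for n in (5, 1, 4, 2, 3):
    values.insort(n)
# values is now [1, 2, 3, 4, 5]
```

Note the invariant only holds if elements are added through `insort`; `append` would bypass it.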
# tiny_router/simple/__init__.py | nekonoshiri/tiny-router | MIT
from .router import SimpleRouter as SimpleRouter
# union_find/__init__.py | chanhosuh/algorithms | Apache-2.0
from .union_find import UnionFind
# project/RealEstateMarketPlace/ml/__init__.py | Mihaaai/RealEstateMarketplace | Apache-2.0
from .core import predict, retrain
# nighres/statistics/__init__.py | marcobarilari/nighres | Apache-2.0
from nighres.statistics.segmentation_statistics import segmentation_statistics
# Lib/test/test_compiler/testcorpus/92_qual_class_in_class.py | diogommartins/cinder | CNRI-Python-GPL-Compatible
class Bar:
    class Foo:
        pass
# tests/core/tests/__init__.py | infoxchange/django-haystack | BSD-3-Clause
import warnings
warnings.simplefilter('ignore', Warning)
from django.conf import settings
from core.tests.backends import *
from core.tests.fields import *
from core.tests.forms import *
from core.tests.indexes import *
from core.tests.inputs import *
from core.tests.loading import *
from core.tests.models import *
from core.tests.query import *
from core.tests.templatetags import *
from core.tests.views import *
from core.tests.utils import *
from core.tests.management_commands import *
from core.tests.managers import *
#!/usr/bin/env python
# code/pngsuite.py | timgates42/pypng | MIT
# pngsuite.py
# PngSuite Test PNGs.
# https://docs.python.org/3.2/library/argparse.html
"""
After you import this module with "import pngsuite" use
``pngsuite.basi0g01`` to get the bytes for a particular PNG image, or
use ``pngsuite.png`` to get a dict() of them all.

Also a delicious command line tool.
"""

import argparse
import sys
def _dehex(s):
    """Liberally convert from hex string to binary string."""
    import binascii
    return binascii.unhexlify(s.replace(b"\n", b""))
# Copies of PngSuite test files taken
# from http://www.schaik.com/pngsuite/pngsuite_bas_png.html
# on 2009-02-19 by drj and converted to hex.
# Some of these are not actually in PngSuite (but maybe they should
# be?), they use the same naming scheme, but start with a capital
# letter.
png = {
'basi0g01': _dehex(b"""
89504e470d0a1a0a0000000d49484452000000200000002001000000012c0677
cf0000000467414d41000186a031e8965f0000009049444154789c2d8d310ec2
300c45dfc682c415187a00a42e197ab81e83b127e00c5639001363a580d8582c
65c910357c4b78b0bfbfdf4f70168c19e7acb970a3f2d1ded9695ce5bf5963df
d92aaf4c9fd927ea449e6487df5b9c36e799b91bdf082b4d4bd4014fe4014b01
ab7a17aee694d28d328a2d63837a70451e1648702d9a9ff4a11d2f7a51aa21e5
a18c7ffd0094e3511d661822f20000000049454e44ae426082
"""),
'basi0g02': _dehex(b"""
89504e470d0a1a0a0000000d49484452000000200000002002000000016ba60d
1f0000000467414d41000186a031e8965f0000005149444154789c635062e860
00e17286bb609c93c370ec189494960631366e4467b3ae675dcf10f521ea0303
90c1ca006444e11643482064114a4852c710baea3f18c31918020c30410403a6
0ac1a09239009c52804d85b6d97d0000000049454e44ae426082
"""),
'basi0g04': _dehex(b"""
89504e470d0a1a0a0000000d4948445200000020000000200400000001e4e6f8
bf0000000467414d41000186a031e8965f000000ae49444154789c658e5111c2
301044171c141c141c041c843a287510ea20d441c041c141c141c04191102454
03994998cecd7edcecedbb9bdbc3b2c2b6457545fbc4bac1be437347f7c66a77
3c23d60db15e88f5c5627338a5416c2e691a9b475a89cd27eda12895ae8dfdab
43d61e590764f5c83a226b40d669bec307f93247701687723abf31ff83a2284b
a5b4ae6b63ac6520ad730ca4ed7b06d20e030369bd6720ed383290360406d24e
13811f2781eba9d34d07160000000049454e44ae426082
"""),
'basi0g08': _dehex(b"""
89504e470d0a1a0a0000000d4948445200000020000000200800000001211615
be0000000467414d41000186a031e8965f000000b549444154789cb5905d0ac2
3010849dbac81c42c47bf843cf253e8878b0aa17110f214bdca6be240f5d21a5
94ced3e49bcd322c1624115515154998aa424822a82a5624a1aa8a8b24c58f99
999908130989a04a00d76c2c09e76cf21adcb209393a6553577da17140a2c59e
70ecbfa388dff1f03b82fb82bd07f05f7cb13f80bb07ad2fd60c011c3c588eef
f1f4e03bbec7ce832dca927aea005e431b625796345307b019c845e6bfc3bb98
769d84f9efb02ea6c00f9bb9ff45e81f9f280000000049454e44ae426082
"""),
'basi0g16': _dehex(b"""
89504e470d0a1a0a0000000d49484452000000200000002010000000017186c9
fd0000000467414d41000186a031e8965f000000e249444154789cb5913b0ec2
301044c7490aa8f85d81c3e4301c8f53a4ca0da8902c8144b3920b4043111282
23bc4956681a6bf5fc3c5a3ba0448912d91a4de2c38dd8e380231eede4c4f7a1
4677700bec7bd9b1d344689315a3418d1a6efbe5b8305ba01f8ff4808c063e26
c60d5c81edcf6c58c535e252839e93801b15c0a70d810ae0d306b205dc32b187
272b64057e4720ff0502154034831520154034c3df81400510cdf0015c86e5cc
5c79c639fddba9dcb5456b51d7980eb52d8e7d7fa620a75120d6064641a05120
b606771a05626b401a05f1f589827cf0fe44c1f0bae0055698ee8914fffffe00
00000049454e44ae426082
"""),
'basi2c08': _dehex(b"""
89504e470d0a1a0a0000000d49484452000000200000002008020000018b1fdd
350000000467414d41000186a031e8965f000000f249444154789cd59341aa04
210c44abc07b78133d59d37333bd89d76868b566d10cf4675af8596431a11662
7c5688919280e312257dd6a0a4cf1a01008ee312a5f3c69c37e6fcc3f47e6776
a07f8bdaf5b40feed2d33e025e2ff4fe2d4a63e1a16d91180b736d8bc45854c5
6d951863f4a7e0b66dcf09a900f3ffa2948d4091e53ca86c048a64390f662b50
4a999660ced906182b9a01a8be00a56404a6ede182b1223b4025e32c4de34304
63457680c93aada6c99b73865aab2fc094920d901a203f5ddfe1970d28456783
26cffbafeffcd30654f46d119be4793f827387fc0d189d5bc4d69a3c23d45a7f
db803146578337df4d0a3121fc3d330000000049454e44ae426082
"""),
'basi2c16': _dehex(b"""
89504e470d0a1a0a0000000d4948445200000020000000201002000001db8f01
760000000467414d41000186a031e8965f0000020a49444154789cd5962173e3
3010853fcf1838cc61a1818185a53e56787fa13fa130852e3b5878b4b0b03081
b97f7030070b53e6b057a0a8912bbb9163b9f109ececbc59bd7dcf2b45492409
d66f00eb1dd83cb5497d65456aeb8e1040913b3b2c04504c936dd5a9c7e2c6eb
b1b8f17a58e8d043da56f06f0f9f62e5217b6ba3a1b76f6c9e99e8696a2a72e2
c4fb1e4d452e92ec9652b807486d12b6669be00db38d9114b0c1961e375461a5
5f76682a85c367ad6f682ff53a9c2a353191764b78bb07d8ddc3c97c1950f391
6745c7b9852c73c2f212605a466a502705c8338069c8b9e84efab941eb393a97
d4c9fd63148314209f1c1d3434e847ead6380de291d6f26a25c1ebb5047f5f24
d85c49f0f22cc1d34282c72709cab90477bf25b89d49f0f351822297e0ea9704
f34c82bc94002448ede51866e5656aef5d7c6a385cb4d80e6a538ceba04e6df2
480e9aa84ddedb413bb5c97b3838456df2d4fec2c7a706983e7474d085fae820
a841776a83073838973ac0413fea2f1dc4a06e71108fda73109bdae48954ad60
bf867aac3ce44c7c1589a711cf8a81df9b219679d96d1cec3d8bbbeaa2012626
df8c7802eda201b2d2e0239b409868171fc104ba8b76f10b4da09f6817ffc609
c413ede267fd1fbab46880c90f80eccf0013185eb48b47ba03df2bdaadef3181
cb8976f18e13188768170f98c0f844bb78cb04c62ddac59d09fc3fa25dfc1da4
14deb3df1344f70000000049454e44ae426082
"""),
'basi3p08': _dehex(b"""
89504e470d0a1a0a0000000d494844520000002000000020080300000133a3ba
500000000467414d41000186a031e8965f00000300504c5445224400f5ffed77
ff77cbffff110a003a77002222ffff11ff110000222200ffac5566ff66ff6666
ff01ff221200dcffffccff994444ff005555220000cbcbff44440055ff55cbcb
00331a00ffecdcedffffe4ffcbffdcdc44ff446666ff330000442200ededff66
6600ffa444ffffaaeded0000cbcbfefffffdfffeffff0133ff33552a000101ff
8888ff00aaaa010100440000888800ffe4cbba5b0022ff22663200ffff99aaaa
ff550000aaaa00cb630011ff11d4ffaa773a00ff4444dc6b0066000001ff0188
4200ecffdc6bdc00ffdcba00333300ed00ed7300ffff88994a0011ffff770000
ff8301ffbabafe7b00fffeff00cb00ff999922ffff880000ffff77008888ffdc
ff1a33000000aa33ffff009900990000000001326600ffbaff44ffffffaaff00
770000fefeaa00004a9900ffff66ff22220000998bff1155ffffff0101ff88ff
005500001111fffffefffdfea4ff4466ffffff66ff003300ffff55ff77770000
88ff44ff00110077ffff006666ffffed000100fff5ed1111ffffff44ff22ffff
eded11110088ffff00007793ff2200dcdc3333fffe00febabaff99ffff333300
63cb00baba00acff55ffffdcffff337bfe00ed00ed5555ffaaffffdcdcff5555
00000066dcdc00dc00dc83ff017777fffefeffffffcbff5555777700fefe00cb
00cb0000fe010200010000122200ffff220044449bff33ffd4aa0000559999ff
999900ba00ba2a5500ffcbcbb4ff66ff9b33ffffbaaa00aa42880053aa00ffaa
aa0000ed00babaffff1100fe00000044009999990099ffcc99ba000088008800
dc00ff93220000dcfefffeaa5300770077020100cb0000000033ffedff00ba00
ff3333edffedffc488bcff7700aa00660066002222dc0000ffcbffdcffdcff8b
110000cb00010155005500880000002201ffffcbffcbed0000ff88884400445b
ba00ffbc77ff99ff006600baffba00777773ed00fe00003300330000baff77ff
004400aaffaafffefe000011220022c4ff8800eded99ff99ff55ff002200ffb4
661100110a1100ff1111dcffbabaffff88ff88010001ff33ffb98ed362000002
a249444154789c65d0695c0b001806f03711a9904a94d24dac63292949e5a810
d244588a14ca5161d1a1323973252242d62157d12ae498c8124d25ca3a11398a
16e55a3cdffab0ffe7f77d7fcff3528645349b584c3187824d9d19d4ec2e3523
9eb0ae975cf8de02f2486d502191841b42967a1ad49e5ddc4265f69a899e26b5
e9e468181baae3a71a41b95669da8df2ea3594c1b31046d7b17bfb86592e4cbe
d89b23e8db0af6304d756e60a8f4ad378bdc2552ae5948df1d35b52143141533
33bbbbababebeb3b3bc9c9c9c6c6c0c0d7b7b535323225a5aa8a02024a4bedec
0a0a2a2bcdcd7d7cf2f3a9a9c9cdcdd8b8adcdd5b5ababa828298982824a4ab2
b21212acadbdbc1414e2e24859b9a72730302f4f49292c4c57373c9c0a0b7372
8c8c1c1c3a3a92936d6dfdfd293e3e26262a4a4eaea2424b4b5fbfbc9c323278
3c0b0ba1303abaae8ecdeeed950d6669a9a7a7a141d4de9e9d5d5cdcd2229b94
c572716132f97cb1d8db9bc3110864a39795d9db6b6a26267a7a9a98d4d6a6a7
cb76090ef6f030354d4d75766e686030545464cb393a1a1ac6c68686eae8f8f9
a9aa4644c8b66d6e1689dcdd2512a994cb35330b0991ad9f9b6b659596a6addd
d8282fafae5e5323fb8f41d01f76c22fd8061be01bfc041a0323e1002c81cd30
0b9ec027a0c930014ec035580fc3e112bc069a0b53e11c0c8095f00176c163a0
e5301baec06a580677600ddc05ba0f13e120bc81a770133ec355a017300d4ec2
0c7800bbe1219c02fa08f3e13c1c85dbb00a2ec05ea0dff00a6ec15a98027360
070c047a06d7e1085c84f1b014f6c03fa0b33018b6c0211801ebe018fc00da0a
6f61113c877eb01d4ec317a085700f26c130f80efbe132bc039a0733e106fc81
f7f017f6c10aa0d1300a0ec374780943e1382c06fa0a9b60238c83473016cec0
02f80f73fefe1072afc1e50000000049454e44ae426082
"""),
'basi6a08': _dehex(b"""
89504e470d0a1a0a0000000d4948445200000020000000200806000001047d4a
620000000467414d41000186a031e8965f0000012049444154789cc595414ec3
3010459fa541b8bbb26641b8069b861e8b4d12c1c112c1452a710a2a65d840d5
949041fc481ec98ae27c7f3f8d27e3e4648047600fec0d1f390fbbe2633a31e2
9389e4e4ea7bfdbf3d9a6b800ab89f1bd6b553cfcbb0679e960563d72e0a9293
b7337b9f988cc67f5f0e186d20e808042f1c97054e1309da40d02d7e27f92e03
6cbfc64df0fc3117a6210a1b6ad1a00df21c1abcf2a01944c7101b0cb568a001
909c9cf9e399cf3d8d9d4660a875405d9a60d000b05e2de55e25780b7a5268e0
622118e2399aab063a815808462f1ab86890fc2e03e48bb109ded7d26ce4bf59
0db91bac0050747fec5015ce80da0e5700281be533f0ce6d5900b59bcb00ea6d
200314cf801faab200ea752803a8d7a90c503a039f824a53f4694e7342000000
0049454e44ae426082
"""),
'basn0g01': _dehex(b"""
89504e470d0a1a0a0000000d49484452000000200000002001000000005b0147
590000000467414d41000186a031e8965f0000005b49444154789c2dccb10903
300c05d1ebd204b24a200b7a346f90153c82c18d0a61450751f1e08a2faaead2
a4846ccea9255306e753345712e211b221bf4b263d1b427325255e8bdab29e6f
6aca30692e9d29616ee96f3065f0bf1f1087492fd02f14c90000000049454e44
ae426082
"""),
'basn0g02': _dehex(b"""
89504e470d0a1a0a0000000d49484452000000200000002002000000001ca13d
890000000467414d41000186a031e8965f0000001f49444154789c6360085df5
1f8cf1308850c20053868f0133091f6390b90700bd497f818b0989a900000000
49454e44ae426082
"""),
# A version of basn0g04 dithered down to 3 bits.
'Basn0g03': _dehex(b"""
89504e470d0a1a0a0000000d494844520000002000000020040000000093e1c8
2900000001734249540371d88211000000fd49444154789c6d90d18906210c84
c356f22356b2889588604301b112112b11d94a96bb495cf7fe87f32d996f2689
44741cc658e39c0b118f883e1f63cc89dafbc04c0f619d7d898396c54b875517
83f3a2e7ac09a2074430e7f497f00f1138a5444f82839c5206b1f51053cca968
63258821e7f2b5438aac16fbecc052b646e709de45cf18996b29648508728612
952ca606a73566d44612b876845e9a347084ea4868d2907ff06be4436c4b41a3
a3e1774285614c5affb40dbd931a526619d9fa18e4c2be420858de1df0e69893
a0e3e5523461be448561001042b7d4a15309ce2c57aef2ba89d1c13794a109d7
b5880aa27744fc5c4aecb5e7bcef5fe528ec6293a930690000000049454e44ae
426082
"""),
'basn0g04': _dehex(b"""
89504e470d0a1a0a0000000d494844520000002000000020040000000093e1c8
290000000467414d41000186a031e8965f0000004849444154789c6360601014
545232367671090d4d4b2b2f6720430095dbd1418e002a77e64c720450b9ab56
912380caddbd9b1c0154ee9933e408a072efde25470095fbee1d1902001f14ee
01eaff41fa0000000049454e44ae426082
"""),
'basn0g08': _dehex(b"""
89504e470d0a1a0a0000000d4948445200000020000000200800000000561125
280000000467414d41000186a031e8965f0000004149444154789c6364602400
1408c8b30c05058c0f0829f8f71f3f6079301c1430ca11906764a2795c0c0605
8c8ff0cafeffcff887e67131181430cae0956564040050e5fe7135e2d8590000
000049454e44ae426082
"""),
'basn0g16': _dehex(b"""
89504e470d0a1a0a0000000d49484452000000200000002010000000000681f9
6b0000000467414d41000186a031e8965f0000005e49444154789cd5d2310ac0
300c4351395bef7fc6dca093c0287b32d52a04a3d98f3f3880a7b857131363a0
3a82601d089900dd82f640ca04e816dc06422640b7a03d903201ba05b7819009
d02d680fa44c603f6f07ec4ff41938cf7f0016d84bd85fae2b9fd70000000049
454e44ae426082
"""),
'basn2c08': _dehex(b"""
89504e470d0a1a0a0000000d4948445200000020000000200802000000fc18ed
a30000000467414d41000186a031e8965f0000004849444154789cedd5c10900
300c024085ec91fdb772133b442bf4a1f8cee12bb40d043b800a14f81ca0ede4
7d4c784081020f4a871fc284071428f0a0743823a94081bb7077a3c00182b1f9
5e0f40cf4b0000000049454e44ae426082
"""),
'basn2c16': _dehex(b"""
89504e470d0a1a0a0000000d4948445200000020000000201002000000ac8831
e00000000467414d41000186a031e8965f000000e549444154789cd596c10a83
301044a7e0417fcb7eb7fdadf6961e06039286266693cc7a188645e43dd6a08f
1042003e2fe09aef6472737e183d27335fcee2f35a77b702ebce742870a23397
f3edf2705dd10160f3b2815fe8ecf2027974a6b0c03f74a6e4192843e75c6c03
35e8ec3202f5e84c0181bbe8cca967a00d9df3491bb040671f2e6087ce1c2860
8d1e05f8c7ee0f1d00b667e70df44467ef26d01fbd9bc028f42860f71d188bce
fb8d3630039dbd59601e7ab3c06cf428507f0634d039afdc80123a7bb1801e7a
b1802a7a14c89f016d74ce331bf080ce9e08f8414f04bca133bfe642fe5e07bb
c4ec0000000049454e44ae426082
"""),
'basn3p04': _dehex(b"""
89504e470d0a1a0a0000000d4948445200000020000000200403000000815467
c70000000467414d41000186a031e8965f000000037342495404040477f8b5a3
0000002d504c54452200ff00ffff8800ff22ff000099ffff6600dd00ff77ff00
ff000000ff99ddff00ff00bbffbb000044ff00ff44d2b049bd00000047494441
54789c63e8e8080d3d7366d5aaf27263e377ef66ce64204300952b28488e002a
d7c5851c0154eeddbbe408a07119c81140e52a29912380ca4d4b23470095bb7b
37190200e0c4ead10f82057d0000000049454e44ae426082
"""),
'basn4a16': _dehex(b"""
89504e470d0a1a0a0000000d494844520000002000000020100400000089e36e
3c0000000467414d41000186a031e8965f0000085549444154789cc5975f685b
e719c67f968fa4a363ebf84875524754ae9d283885121aba42ba2d17b1bd8e50
d22e253412bbc8e4d042694b618977119d8b5d48be98938bd0f4a2c9901658b0
1a028366258524a68cd27a84d2e2956da169ea4aade219574791ed63fd399677
f17e19a174d73518994fc7d2fb3eeff33ecff30160656158873da760d48217ce
c2b10138fe47c80ec1d93fc3c55df0de65f8e809f8e75fe1ee5e58bf2ebf77f7
cad9474fc8331777c9ff6487e4338e0dc8678e5af21dc3ba7c27806665a1665b
b9ae19f015a1bb025a1102bb217008f42684de86e6756817c1d36063043acf02
6fc887749272e669d05e90679b29589f04f710ac5d825503ea15a8a7a056805a
0aac6c2dd335ac43ad60e59c54241b75e121171e5aff3faf3f7006f09d01df85
bef7fa4367eab56a4064c6b1ad742da35959e9bccb85aa61657d27a13b03fed3
10c8807e124219e8c9403303ed0c7827a19381cd8c4220075e0eda53d0cc4123
076e0ed672b03205f51cd472e0e4a03a0551b76647526066418b6405769f0bbe
93b03c15c9fae6401b03ff97a05f84d022f48c41c383d687e09d868dc3b0f988
14b07158ce5a1fca33ee53b0f63aacdc807bc7c0d902d583b03c0bfd271d3be2
42df0c9831c501ad08da2fa473df1c2ccd5a59dfa3a0ed83600cf4bd601c8170
1a1a67a13d011bfdb0f91355c03cb4af40230febbf83d502d4d7a0f62fa8b660
f9362ccdc2d6d19a1dcd805505f35de8bd8f406037f87f26b06b63e07b14160b
91acef0cf83f80e00a1825089f80f53a34df026f0536af4a01de889cadfb61f5
04d44be0bc00cb4761c984c5020ca41dbb3f01910c98af40b8083df30a81c021
089465e6fe2fa573df19a89856568b4370108c41080f8235088d4168ef81cea0
14d02e41a3046b25a8ff1d9c122c97e03f25a8942156afd95b3f836800fa7640
f85be89901e32f0a01bd09fa1e219c7e5160f77f005a1c4ae54856d340d7a1b7
172c0b5c175a2de874a480564bceea75a8566169092a1528956130eed80fd7a1
7f02ac0a847f0d3d69308a109a560884de86d02e617b6851661e5c91ce350dee
7c6565fdfbc1380ad6046c39068d51e8fc460a68e4616516aa0558cc43390f77
6ec0f6e19a1d8b41ff0a44d260cec936195f42a808c1fb1c685e07e35379b367
4c08679404765d07ff7eb8958f64838f415f0db66c037714bc5352803b0ad549
b85b83858fe1561e46261c3bfe356cdd0a913a9813d0db034606f42404672038
ae106817a115973d6f78c2f6f00999796faf741e7c0ce627adac5186fe323c6a
43fb7329a06643250e5f7c02f371d83d5db3879e86810b108d82b902bd6908fd
01f46720f80f0814c17f1f014f83f66b2232ad0f65d5d6eb4238cb12d8fb6a60
94612e1ec94612309c8046420a58bc06ffbe0d73b7616fd9b1773e09db2c88a6
c134a1a70ea134e86310f839f89f077f11344f21b031021bd744e1bcd3b2e7cd
b784edae2b33dfb24d3a8f24e0ea6d2b1bdf0f3f3d2a057c7e0eaebe0f071235
7b571962a7207a17c2436018a07f07c157c17f10b4e3a0dd84ee19e8bea510e8
3c0b1d43e475e3b0888cb722abd66a09e1dc51817d3801f1fd70ee7c243b3e2e
059c3b0f2fbfe4d88f9761a00cd63418b3a02f402000fe05d0d2d0bd5b89dd2f
45fe7def290478033693a2ed9b8f88c26d5e953def7484edde29997923219d8f
8fc38b47c4542fbd53b3b76f87be0ba07f03fe53a04d80ef4fe0f381af0e5d13
d0d5075d23d0f537e82a0267c0c78ffca3d56cf1f38e21aeb67158b4dd1b1185
6bb564cfdd5161fbe23599f9b9f3d239c08b47e0e597e0f1320cec03eb841ac1
1d350213b4bc1ac165358224f86cd01cfb0112e61409af28129684842bb3b2e7
95b8b0fdeafb32f3eddba58b975f92820e2460571c629310cd3f40c230040b8a
843945c2e7a07b8f42e07f6b38a5d6302fc6b25652f25a1091f9e21359b50389
9afd7859660ed2f981045cbd0d4e1c76feea7b6bb80d4279d05f834053ad614a
ada1634b8c6a855498f094a59e1063a956455e173e1691d95b76ec5d8aedfa37
52c0c03ee9dc89c35c1cdc69b8f7a0108d40ef2908dd005d53429404ff9042a0
791d9a9faa24f394f2f392b8dad29268fbadbc28dcce2765cfad69613bc8cc63
93d2b93b0df393d09c00f76b585d854818cc02f4be03c64d25c54925c58ead02
e4ef558c7a5dc284f382586aa522c63232e1d8434f2b68ef0ac9b40929c09895
996fb3a4f3e68414dc1e8646035c13dcbc32a3379519a520682b04d627c10da9
0c774392ccf251f1f352595c2dfeb582342de4d21764cf41d81e1e92f7062e48
e7ed61b8f315781e34c3d02c40a302e19cb2e32484ee6f817b08d6ca1220ef1d
9318b5644a98188c3b762c26ae168d0aa90c43d6cba75424109033d394675657
a573cf93063c13da796806a0b1031adf422309465021b0760956de94f4ea6c91
0cb7589024f3705df9795d5cada72edaee5f108503d9733d2c6c374764e6ae29
9d7b26544ce8a4c14b28e77d055a2968cd2b04560da81b129dab0725400ea41d
7beb6792642269e5e76971b5e0aba2ed5d8a035a5ef63c9417b69b059979b320
9d77d2d2506714bc2bd0ae423b09ade71402f50adc7325b72fabf4da9f900c67
55843cbd3dcacfc74450ba778bb683fced3f287b1eba216c37e764e6cd8074de
1995c63a39d8f82d7849f0620a817a0a9c19b934f49f74ec6846d26bdf0e95e1
322ac93c237eae1d1757eb1a51055c16850b3465cf8d9bc2f6704e66de2e4ae7
9d1c2c4f41c7864e0a366cf8f1af668e2d17c5c88c634752eac6f2aecaed332a
bd1625c3058a9264bad545b6ab2805f892a2edfe94285c30297b6e2485edad94
ccdc4b4ae79b33e0a46033ab3860656b192b2d7735332637969e79c9eda16949
afc17195e13c4932bef78033aa005b198b27f21a1c179109d9b26aad79219c17
13d83b69f9f29a0dff052002c70fc3e1ac750000000049454e44ae426082
"""),
'basn6a08': _dehex(b"""
89504e470d0a1a0a0000000d4948445200000020000000200806000000737a7a
f40000000467414d41000186a031e8965f0000006f49444154789cedd6310a80
300c46e12764684fa1f73f55048f21c4ddc545781d52e85028fc1f4d28d98a01
305e7b7e9cffba33831d75054703ca06a8f90d58a0074e351e227d805c8254e3
1bb0420f5cdc2e0079208892ffe2a00136a07b4007943c1004d900195036407f
011bf00052201a9c160fb84c0000000049454e44ae426082
"""),
'cs3n3p08': _dehex(b"""
89504e470d0a1a0a0000000d494844520000002000000020080300000044a48a
c60000000467414d41000186a031e8965f0000000373424954030303a392a042
00000054504c544592ff0000ff9200ffff00ff0000dbff00ff6dffb600006dff
b6ff00ff9200dbff000049ffff2400ff000024ff0049ff0000ffdb00ff4900ff
b6ffff0000ff2400b6ffffdb000092ffff6d000024ffff49006dff00df702b17
0000004b49444154789c85cac70182000000b1b3625754b0edbfa72324ef7486
184ed0177a437b680bcdd0031c0ed00ea21f74852ed00a1c9ed0086da0057487
6ed0121cd6d004bda0013a421ff803224033e177f4ae260000000049454e44ae
426082
"""),
'f02n0g08': _dehex(b"""
89504e470d0a1a0a0000000d4948445200000020000000200800000000561125
280000012a49444154789c85d12f4b83511805f0c3f938168b2088200882410c
03834dd807182c588749300c5604c30b0b03c360e14d826012c162b1182c8241
100441f47dee5fc3a6f7b9efc2bdf9c7e59cf370703a3caf26d3faeae6f6fee1
f1e9f9e5f5edfde3f3ebbb31d6f910227f1a6944448c31d65aebac77de7b1f42
883146444a41b029084a41500a825210340541d1e2607f777b733d13344a7401
00c8046d127da09a4ceb5cd024010c45446a40e5a04d029827055452da247ac7
f32e80ea42a7c4a20ba0dad22e892ea0f6a06b8b3e50a9c5e85ae264d1e54fd0
e762040cb2d5e93331067af95de8b4980147adcb3128710d74dab7a54fe20ec0
ec727c313a53822109fc3ff50743122bab6b1b5b3b7b9d439d834189e5d54518
0b82b120180b82b1208882200ae217e9e497bfbfccebfd0000000049454e44ae
426082
"""),
's09n3p02': _dehex(b"""
89504e470d0a1a0a0000000d49484452000000090000000902030000009dffee
830000000467414d41000186a031e8965f000000037342495404040477f8b5a3
0000000c504c544500ff000077ffff00ffff7700ff5600640000001f49444154
789c63600002fbff0c0c56ab19182ca381581a4283f82071200000696505c36a
437f230000000049454e44ae426082
"""),
'tbgn3p08': _dehex(b"""
89504e470d0a1a0a0000000d494844520000002000000020080300000044a48a
c60000000467414d41000186a031e8965f00000207504c54457f7f7fafafafab
abab110000222200737300999999510d00444400959500959595e6e600919191
8d8d8d620d00898989666600b7b700911600000000730d007373736f6f6faaaa
006b6b6b676767c41a00cccc0000f30000ef00d51e0055555567670000dd0051
515100d1004d4d4de61e0038380000b700160d0d00ab00560d00090900009500
009100008d003333332f2f2f2f2b2f2b2b000077007c7c001a05002b27000073
002b2b2b006f00bb1600272727780d002323230055004d4d00cc1e00004d00cc
1a000d00003c09006f6f00002f003811271111110d0d0d55554d090909001100
4d0900050505000d00e2e200000900000500626200a6a6a6a2a2a29e9e9e8484
00fb00fbd5d500801100800d00ea00ea555500a6a600e600e6f7f700e200e233
0500888888d900d9848484c01a007777003c3c05c8c8008080804409007c7c7c
bb00bbaa00aaa600a61e09056262629e009e9a009af322005e5e5e05050000ee
005a5a5adddd00a616008d008d00e20016050027270088110078780000c40078
00787300736f006f44444400aa00c81e004040406600663c3c3c090000550055
1a1a00343434d91e000084004d004d007c004500453c3c00ea1e00222222113c
113300331e1e1efb22001a1a1a004400afaf00270027003c001616161e001e0d
160d2f2f00808000001e00d1d1001100110d000db7b7b7090009050005b3b3b3
6d34c4230000000174524e530040e6d86600000001624b474402660b7c640000
01f249444154789c6360c0048c8c58049100575f215ee92e6161ef109cd2a15e
4b9645ce5d2c8f433aa4c24f3cbd4c98833b2314ab74a186f094b9c2c27571d2
6a2a58e4253c5cda8559057a392363854db4d9d0641973660b0b0bb76bb16656
06970997256877a07a95c75a1804b2fbcd128c80b482a0b0300f8a824276a9a8
ec6e61612b3e57ee06fbf0009619d5fac846ac5c60ed20e754921625a2daadc6
1967e29e97d2239c8aec7e61fdeca9cecebef54eb36c848517164514af16169e
866444b2b0b7b55534c815cc2ec22d89cd1353800a8473100a4485852d924a6a
412adc74e7ad1016ceed043267238c901716f633a812022998a4072267c4af02
92127005c0f811b62830054935ce017b38bf0948cc5c09955f030a24617d9d46
63371fd940b0827931cbfdf4956076ac018b592f72d45594a9b1f307f3261b1a
084bc2ad50018b1900719ba6ba4ca325d0427d3f6161449486f981144cf3100e
2a5f2a1ce8683e4ddf1b64275240c8438d98af0c729bbe07982b8a1c94201dc2
b3174c9820bcc06201585ad81b25b64a2146384e3798290c05ad280a18c0a62e
e898260c07fca80a24c076cc864b777131a00190cdfa3069035eccbc038c30e1
3e88b46d16b6acc5380d6ac202511c392f4b789aa7b0b08718765990111606c2
9e854c38e5191878fbe471e749b0112bb18902008dc473b2b2e8e72700000000
49454e44ae426082
"""),
'Tp2n3p08': _dehex(b"""
89504e470d0a1a0a0000000d494844520000002000000020080300000044a48a
c60000000467414d41000186a031e8965f00000300504c544502ffff80ff05ff
7f0703ff7f0180ff04ff00ffff06ff000880ff05ff7f07ffff06ff000804ff00
0180ff02ffff03ff7f02ffff80ff0503ff7f0180ffff0008ff7f0704ff00ffff
06ff000802ffffff7f0704ff0003ff7fffff0680ff050180ff04ff000180ffff
0008ffff0603ff7f80ff05ff7f0702ffffff000880ff05ffff0603ff7f02ffff
ff7f070180ff04ff00ffff06ff000880ff050180ffff7f0702ffff04ff0003ff
7fff7f0704ff0003ff7f0180ffffff06ff000880ff0502ffffffff0603ff7fff
7f0702ffff04ff000180ff80ff05ff0008ff7f07ffff0680ff0504ff00ff0008
0180ff03ff7f02ffff02ffffffff0604ff0003ff7f0180ffff000880ff05ff7f
0780ff05ff00080180ff02ffffff7f0703ff7fffff0604ff00ff7f07ff0008ff
ff0680ff0504ff0002ffff0180ff03ff7fff0008ffff0680ff0504ff000180ff
02ffff03ff7fff7f070180ff02ffff04ff00ffff06ff0008ff7f0780ff0503ff
7fffff06ff0008ff7f0780ff0502ffff03ff7f0180ff04ff0002ffffff7f07ff
ff0604ff0003ff7fff00080180ff80ff05ffff0603ff7f0180ffff000804ff00
80ff0502ffffff7f0780ff05ffff0604ff000180ffff000802ffffff7f0703ff
7fff0008ff7f070180ff03ff7f02ffff80ff05ffff0604ff00ff0008ffff0602
ffff0180ff04ff0003ff7f80ff05ff7f070180ff04ff00ff7f0780ff0502ffff
ff000803ff7fffff0602ffffff7f07ffff0680ff05ff000804ff0003ff7f0180
ff02ffff0180ffff7f0703ff7fff000804ff0080ff05ffff0602ffff04ff00ff
ff0603ff7fff7f070180ff80ff05ff000803ff7f0180ffff7f0702ffffff0008
04ff00ffff0680ff0503ff7f0180ff04ff0080ff05ffff06ff000802ffffff7f
0780ff05ff0008ff7f070180ff03ff7f04ff0002ffffffff0604ff00ff7f07ff
000880ff05ffff060180ff02ffff03ff7f80ff05ffff0602ffff0180ff03ff7f
04ff00ff7f07ff00080180ffff000880ff0502ffff04ff00ff7f0703ff7fffff
06ff0008ffff0604ff00ff7f0780ff0502ffff03ff7f0180ffdeb83387000000
f874524e53000000000000000008080808080808081010101010101010181818
1818181818202020202020202029292929292929293131313131313131393939
393939393941414141414141414a4a4a4a4a4a4a4a52525252525252525a5a5a
5a5a5a5a5a62626262626262626a6a6a6a6a6a6a6a73737373737373737b7b7b
7b7b7b7b7b83838383838383838b8b8b8b8b8b8b8b94949494949494949c9c9c
9c9c9c9c9ca4a4a4a4a4a4a4a4acacacacacacacacb4b4b4b4b4b4b4b4bdbdbd
bdbdbdbdbdc5c5c5c5c5c5c5c5cdcdcdcdcdcdcdcdd5d5d5d5d5d5d5d5dedede
dededededee6e6e6e6e6e6e6e6eeeeeeeeeeeeeeeef6f6f6f6f6f6f6f6b98ac5
ca0000012c49444154789c6360e7169150d230b475f7098d4ccc28a96ced9e32
63c1da2d7b8e9fb97af3d1fb8f3f18e8a0808953544a4dd7c4c2c9233c2621bf
b4aab17fdacce5ab36ee3a72eafaad87efbefea68702362e7159652d031b07cf
c0b8a4cce28aa68e89f316aedfb4ffd0b92bf79fbcfcfe931e0a183904e55435
8decdcbcc22292b3caaadb7b27cc5db67af3be63e72fdf78fce2d31f7a2860e5
119356d037b374f10e8a4fc92eaa6fee99347fc9caad7b0f9ebd74f7c1db2fbf
e8a180995f484645dbdccad12f38363dafbcb6a573faeca5ebb6ed3e7ce2c29d
e76fbefda38702063e0149751d537b67ff80e8d4dcc29a86bea97316add9b0e3
c0e96bf79ebdfafc971e0a587885e515f58cad5d7d43a2d2720aeadaba26cf5a
bc62fbcea3272fde7efafac37f3a28000087c0fe101bc2f85f0000000049454e
44ae426082
"""),
'tbbn1g04': _dehex(b"""
89504e470d0a1a0a0000000d494844520000002000000020040000000093e1c8
290000000467414d41000186a031e8965f0000000274524e530007e8f7589b00
000002624b47440000aa8d23320000013e49444154789c55d1cd4b024118c7f1
efbe6419045b6a48a72d352808b435284f9187ae9b098627a1573a19945beba5
e8129e8222af11d81e3a4545742de8ef6af6d5762e0fbf0fc33c33f36085cb76
bc4204778771b867260683ee57e13f0c922df5c719c2b3b6c6c25b2382cea4b9
9f7d4f244370746ac71f4ca88e0f173a6496749af47de8e44ba8f3bf9bdfa98a
0faf857a7dd95c7dc8d7c67c782c99727997f41eb2e3c1e554152465bb00fe8e
b692d190b718d159f4c0a45c4435915a243c58a7a4312a7a57913f05747594c6
46169866c57101e4d4ce4d511423119c419183a3530cc63db88559ae28e7342a
1e9c8122b71139b8872d6e913153224bc1f35b60e4445bd4004e20ed6682c759
1d9873b3da0fbf50137dc5c9bde84fdb2ec8bde1189e0448b63584735993c209
7a601bd2710caceba6158797285b7f2084a2f82c57c01a0000000049454e44ae
426082
"""),
'tbrn2c08': _dehex(b"""
89504e470d0a1a0a0000000d4948445200000020000000200802000000fc18ed
a30000000467414d41000186a031e8965f0000000674524e53007f007f007f8a
33334f00000006624b474400ff0000000033277cf3000004d649444154789cad
965f68537714c73fd912d640235e692f34d0406fa0c1663481045ab060065514
56660a295831607df0a1488715167060840a1614e6431e9cb34fd2c00a762c85
f6a10f816650c13b0cf40612e1822ddc4863bd628a8924d23d6464f9d3665dd9
f7e977ce3dbff3cd3939bfdfef6bb87dfb364782dbed065ebe7cd93acc78b4ec
a228debd7bb7bfbfbfbbbbfb7f261045311a8d261209405194274f9ea4d3e916
f15f1c3eb5dd6e4fa5fecce526239184a2b0b8486f6f617171b1f5ae4311381c
8e57af5e5dbd7a351088150a78bd389d44222c2f93cdfe66b7db8f4ee07038b6
b6b6bebf766d7e7e7e60a06432313b4ba984c3c1c4049a46b95c5a58583822c1
dbb76f27272733d1b9df853c3030c0f232562b9108cf9eb1b888d7cbf030abab
31abd5fa1f08dc6ef7e7cf9f1f3f7e1c8944745d4f1400c62c001313acad21cb
b8dd2c2c603271eb1640341aad4c6d331aa7e8c48913a150a861307ecc11e964
74899919bc5e14e56fffc404f1388502f178dceff7ef4bf0a5cfe7abb533998c
e5f9ea2f1dd88c180d64cb94412df3dd57e83a6b3b3c7a84c98420100c72fd3a
636348bae726379fe69e8e8d8dbd79f3a6558b0607079796965256479b918085
7b02db12712b6181950233023f3f647494ee6e2e5ea45864cce5b8a7fe3acffc
3aebb22c2bd5d20e22d0757d7b7bbbbdbd3d94a313bed1b0aa3cd069838b163a
8d4c59585f677292d0b84d9a995bd337def3fe6bbe5e6001989b9b6bfe27ea08
36373781542ab56573248b4c5bc843ac4048c7ab21aa24ca00534c25482828a3
8c9ee67475bbaaaab22cb722c8e57240a150301a8d219de94e44534d7d90e885
87acb0e2c4f9800731629b6c5ee14a35a6b9887d2a0032994cb9cf15dbe59650
ff7b46a04c9a749e7cc5112214266cc65c31354d5b5d5d3d90209bcd5616a552
a95c2e87f2a659bd9ee01c2cd73964e438f129a6aa9e582c363838b80f81d7eb
5555b56a2a8ad2d9d7affd0409f8015c208013fea00177b873831b0282c964f2
783c1e8fa7582cee5f81a669b5e6eeeeaee58e8559b0c233d8843c7c0b963a82
34e94b5cb2396d7d7d7db22c8ba258fb0afd43f0e2c58b919191ba9de9b4d425
118329b0c3323c8709d02041b52b4ea7f39de75d2a934a2693c0a953a76a93d4
5d157ebf7f6565a5542a553df97c5e10045dd731c130b86113cc300cbd489224
08422a952a140a95788fc763b1d41558d7a2d7af5f5fb870a1d6a3aaaacd6603
18802da84c59015bd2e6897b745d9765b99a1df0f97c0daf74e36deaf7fbcd66
73ad2797cb89a2c839880188a2e8743a8bc5a22ccbba5e376466b3b9bdbdbd21
6123413a9d0e0402b51e4dd3bababa788eb022b85caeb6b6364551b6b7b76942
43f7f727007a7a7a04a1ee8065b3595fde2768423299ac1ec6669c3973e65004
c0f8f878ad69341a33994ced2969c0d0d0502412f9f8f163f3a7fd654b474787
288ad53e74757535df6215b85cae60302849d2410aecc037f9f2e5cbd5b5c160
680eb0dbede170381c0e7ff8f0a185be3b906068684892a4ca7a6f6faff69328
8ad3d3d3f7efdfdfdbdbfb57e96868a14d0d0643381c96242997cbe5f3794010
84603078fcf8f1d6496bd14a3aba5c2ea7d369341a5555b5582c8140e0fcf9f3
1b1b1b87cf4eeb0a8063c78e45a3d19e9e1ebfdfdf5a831e844655d18093274f
9e3d7bf6d3a74f3b3b3b47c80efc05ff7af28fefb70d9b0000000049454e44ae
426082
"""),
'basn6a16': _dehex(b"""
89504e470d0a1a0a0000000d494844520000002000000020100600000023eaa6
b70000000467414d41000186a031e8965f00000d2249444154789cdd995f6c1c
d775c67ff38fb34b724d2ee55a8e4b04a0ac87049100cab4dbd8c6528902cb4d
10881620592e52d4325ac0905bc98a94025e71fd622cb5065ac98a0c283050c0
728a00b6e542a1d126885cd3298928891d9a0444037e904434951d4b90b84b2f
c9dde1fcebc33977a95555348f411e16dfce9d3b77ee77eebde77ce78c95a669
0ad07c17009a13edd898b87dfb1fcb7d2b4d1bff217f33df80deb1e6267df0ff
c1e6e6dfafdf1f5a7fd30f9aef66b6d546dd355bf02c40662e3307f9725a96c6
744c3031f83782f171c148dbc3bf1774f5dad1e79d6f095a3f54d4fbec5234ef
d9a2f8d73afe4f14f57ef4f42def7b44f19060f06b45bddf1c5534d77fd922be
2973a15a82e648661c6e3240aa3612ead952b604bde57458894f29deaf133bac
13d2766f5227a4a3b8cf08da7adfd6fbd6bd8a4fe9dbb43d35e3dfa3f844fbf8
9119bf4f7144094fb56333abf8a86063ca106f94b3a3b512343765e60082097f
1bb86ba72439a653519b09f5cee1ce61c897d37eedf5553580ae60f4af8af33a
b14fd400b6a0f34535c0434afc0b3a9f07147527a5fa7ca218ff56c74d74dc3f
155cfd3325fc278acf2ae1cb4a539f5f9937c457263b0bd51234c732a300cdd1
cc1840f0aaff54db0e4874ed5a9b5d6d27d4bb36746d80de72baa877ff4b275a
d7895ed1897ea4139b5143fcbb1a62560da1ed9662aaed895ec78a91c18795b8
5e07ab4af8ba128e95e682e0728bf8f2e5ae815a091a53d902ac1920d8e05f06
589de8d8d66680789f4e454fb9d9ec66cd857af796ee2d902fa73fd5bba775a2
153580ae44705ed0d37647d15697cb8f14bfa3e3e8fdf8031d47af571503357c
f30d25acedcbbf135c9a35c49766ba07ab255859e8ec03684e66860182dff8f7
0304bff6ff1c20fc81b7afdd00a71475539a536e36bb5973a19e3b923b02bde5
e4efd4003ac170eb2d13fe274157afedbd82d6fb3a9a1e85e4551d47cf7078f8
9671fe4289ebf5f2bf08d63f37c4eb4773c55a0996efeefa0ca011671d8060ca
2f0004c7fcc300e166ef0240f825efe3361f106d57d423d0723f7acacd66376b
2ed47b7a7a7a205f4ef4ac4691e0aad9aa0d41cf13741c3580a506487574ddca
61a8c403c1863ebfbcac3475168b2de28b8b3d77544bb05ce92a02aceced3c0d
d0cc65ea371b201cf1c601c24dde1c4078cedbdeb60322f50126a019bf6edc9b
39e566b39b3517eaf97c3e0fbde5e4491d45bd74537145d155b476aa0176e868
c6abebf30dbd5e525c54ac8e18e2d56abeb756827a3d970358a97416019a6f64
f60004fdfe1580d5c98e618070cc1b05887eee7e0d209a70db7d8063029889b4
c620ead78d7b33a7dc6c76b3e6427ddddbebde867c393aa7845e5403e8ca794a
d0d6fb897af5f03525fe5782f5e7046bdaef468bf88d1debc6ab25583cd17310
6079b9ab0ba059c914018245bf076075b5a303200c3c1f209a733701444fbbaf
00c4134ebb016c5d0b23614c243701cdf875e3decce9349bddacb9505fbf7dfd
76e82d87736a00f5d2b5ffd4b7dce2719a4d25ae717ee153c1abef18e257cfad
7fa45682da48ef38c052b53b0fd06864b300c151ff08c0ea431de701a287dd5f
004497dc7b01a253ee3e80b8c7f91c20f967fb6fdb7c80ada7d8683723614c24
3701cdf875e3decc29379bddacb950ef3fd47f08f2e5a61ea4aa2a3eb757cd55
13345efcfa59c12b2f19e2578ef77fb75a82854ffbee01a83f977b11a031931d
040802df07082b5e11207cc17b1e209a770700e2df0a83e409fb7580f827c230
99b06fd901fb058d6835dacd481813c94d40337eddb83773cacd66376b2ed437
bebcf165e82d2f4e4beb7f3fa6e652c2d7ee10bc78c010bfb87fe3c95a09ae9f
bd732740bd2fb700d0f865f64180e059ff044018ca0ca28a5b04883f701e0088
bfec7c0c909cb71f0448c6ec518074b375012079d9dedf66004bcfbc51eb2dd1
aadacd481813c94d40337eddb83773cacd66376b2ed487868686205fbe7c49ef
5605a73f34c4a7a787eeab96e0da81bb4e022c15ba27019a5b339300e16bf286
a8eae601e25866907cdf3e0890acb36f00245fb57f05904e59c300e92561946e
b2e600d209ab7d07f04d458dfb46ad1bd16ab49b913026929b8066fcba716fe6
949bcd6ed65ca8ef7e7cf7e3d05b7e7c8f217ee6cdddbb6a25a856f37980e0c7
fe4e80a82623c48193014846ec7180f4acf518409aca0cd28a5504e03b32c374
de1a00608a0240faaa327a4b19fe946fb6f90054dbb5f2333d022db56eb4966a
3723614c243701cdf8f556bea8a7dc6c76b3e66bd46584ddbbcebc0990cf4b0f
ff4070520c282338a7e26700ec725202b01e4bcf0258963c6f1d4d8f0030cb20
805549c520930c03584fa522b676f11600ffc03fde3e1b3489a9c9054c9aa23b
c08856a3dd8c843191dc0434e3d78d7b33a75c36fb993761f7ae5a69f72ef97f
e6ad336fed7e1c60e8bee96980bbdebbb60da07b7069062033d9dc0ae03d296f
70ab511ec071640676252902d833c916007b3e1900b0a6d2028035968e025861
ea01581369fb11488c34d18cbc95989afccca42baad65ba2d5683723614c24d7
8066fcbab8b7e96918baaf5aaa56219f975fb50a43f7c9bde90fa73f1c1a02d8
78f2e27e803b77ca08b90519315b6fe400fc1392097a9eccc0ad444500e70199
a1331f0f00d8934901c07e5d526ceb87c2d07e2579badd005a2b31a5089391b7
1253358049535a6add8856dd0146c298482e01ede27ed878b256ba7600ee3a09
c18fc1df09fe01084ec25defc1b56db0f1a4f4bd78e0e2818d2f0334e7330300
7df7c888b917e50dd9c1c60c80efcb0cbc63e1f700bce7c31700dccbd1060027
8add9b0de06c8e2f00d84962b7d7030e2a61538331b98051f92631bd253f336a
dd8856a3dd44c25c390efddfad96ae9f853b77c25201ba27c533b8bdf28b6ad0
3d084b33d2e7fa59099e9901b8f2d29597fa0f01848f78e70082117f1ca07b76
6910209b9519f895a008d031bbba05c09d8f06005c5b18b8fba25300cea6780e
c03e911c6ccf06d507b48a4fa606634a114609de929f9934c5a87511ad57cfc1
fa476aa5854fa1ef1e3910b905686e85cc24c40138198915f133d2d6dc2a7dea
7df2ccc2a752faf2cec1d577aebeb37e3b4034eeee0008dff3be0e6b923773b4
7904c0ef9119767cb4fa1500ef1361e08e452500f71561e84cc4ed3e20fab6a2
c905f40cb76a3026bf3319b91ac2e46792a6dcd801ebc6aba5da08f48ecb81c8
bd088d5f42f6417191de93908c803d0e76199292b485af41b60e8d9c3c537f0e
8211f0c7211a077707dc18b931b2ee6d80a4d7ae024491ebc24d4a708ff70680
7f25e807e8785f1878e322d6ddaf453f0770ff2dfa769b01423dbbad72a391b6
5a7c3235985629423372494cab55c8f7d64a8b27a0e7202c55a13b0f8d19c80e
4ae9ca3f015115dc3ca467c17a4c7ee95970ab10e5a54ff0ac3cd39881ee5958
1a84f03df0be0e492fd855a8d6aa35d10b4962dbb0a604a3d3ee5e80a8eee600
a24977f8660378bf0bbf00e01d0a8fb7f980f04b8aa6ce6aca8d5a7533c52753
839152c4e222f4dc512dd5eb90cbc981e8ea12cf90cd8a8bf47d89159e2741d3
7124f65b96fcd254dae258fa84a13c13043246a32129574787e49eae2b49b86d
c3e2e78b9ff7f4002415bb08907c66df0d103b4e0c104db90500ff70700c203a
ee1e82dba4c3e16e256c0acca6ceaae9afd1f612d7eb472157ac95962bd05594
7dd1598466053245088e827f44628657942a825b84e4fb601f84b4025611aca3
901e01bb024911dc0a4445f08e41f83df02b10142173149ab71baf027611ea95
7a257704201d14cd9af4d90b00f194530088cb4e09c0df1c5c0088f7393f6833
c0aa3ac156655de3bca9b34ab9716906ba07aba5e5bba1eb3358d90b9da7c533
64f6888bf47b60f521e8380fe10be03d2feac17900927560df40f4e48f805960
50328d648bf4893f9067c217a0631656b7c898c122847bc07b03a2d3e0ee85e4
33b0ef867450c4fad2ecd26cf7168074c0ba0c904cdac300c9cfec4701924df6
1cdca61e10685c6f7d52d0caba1498972f43d740adb4b2009d7d7220b20e3473
90a943d00ffe959bb6eac3e0fe42ea49ee00c45f06e76329b1dabf127d690d80
5581b408f63c2403e0cc433c00ee658836803b0fd100747c04ab5f917704fd10
d5c1cd41ec801343d207f602a403605d86e5f9e5f9ae0d00e994556833806685
c931fb709b0f08b4e869bea5c827859549e82c544b8d29c816a0390999613920
7e610d5727a16318c2003c1fa24be0de2b32caf92224e7c17e5004b6350c4c01
05601218066b0ad28224e149019c086257ca315102de2712903bde97b8144d82
3b2c6ac52d403c054e019249b087f53d0558995a99ea946c70cc927458b3c1ff
550f30050df988d4284376b4566a8e416654cc921985e037e0df0fc131f00f4b
acf0c6211c036f14a239703741740adc7da227edd7e56b833d0ae92549b4d357
25dfb49ed2ff63908e6adf27d6d0dda7638d4154d2778daca17f58e61297c129
41f233b01f5dc3740cac51688c35c6b22580f48224fee9b83502569a66b629f1
09f3713473413e2666e7fe6f6c6efefdfafda1f56f6e06f93496d9d67cb7366a
9964b6f92e64b689196ec6c604646fd3fe4771ff1bf03f65d8ecc3addbb5f300
00000049454e44ae426082
"""),
}
# Make each of the dict entries also be a module entry.
sys.modules[__name__].__dict__.update(png)
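The line above promotes every key of the `png` dict to a top-level attribute of this module, so callers can write `pngsuite.basn0g08` instead of `pngsuite.png['basn0g08']`. A minimal, self-contained sketch of the same trick, using a made-up module name and dummy payloads for illustration:

```python
import sys
import types

# Build a throwaway module object and copy dict entries into its namespace,
# mirroring the `sys.modules[__name__].__dict__.update(png)` pattern above.
# "demo_pngsuite" and the byte payloads are invented for this sketch.
demo = types.ModuleType("demo_pngsuite")
data = {"basn0g01": b"\x89PNGdemo1", "basn0g02": b"\x89PNGdemo2"}
demo.__dict__.update(data)
sys.modules["demo_pngsuite"] = demo

# Because the module is registered in sys.modules, a normal import finds it
# and each dict key is now an ordinary module attribute.
import demo_pngsuite
print(demo_pngsuite.basn0g01)
```

The same effect could be had with `setattr` in a loop; updating `__dict__` in bulk is simply the terser idiom.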
def binary_stdout():
    """
    A sys.stdout that accepts bytes.
    """
    stdout = sys.stdout.buffer
    # On Windows the C runtime file orientation needs changing.
    if sys.platform == "win32":
        import msvcrt
        import os
        msvcrt.setmode(sys.stdout.fileno(), os.O_BINARY)
    return stdout
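The reason `binary_stdout` reaches for `sys.stdout.buffer` is that a text stream refuses bytes outright; only its underlying binary buffer accepts them. A small sketch using in-memory streams as stand-ins for stdout (the streams here are illustrative, not the real console):

```python
import io

# A text wrapper over an in-memory byte stream, standing in for sys.stdout.
raw = io.BytesIO()
text = io.TextIOWrapper(raw, encoding="utf-8")

# Writing bytes to the text layer fails with TypeError...
try:
    text.write(b"\x89PNG")
except TypeError:
    pass

# ...while the .buffer attribute (the underlying binary stream) takes bytes.
text.buffer.write(b"\x89PNG\r\n")
text.buffer.flush()
print(raw.getvalue())
```

The extra `msvcrt.setmode` step in `binary_stdout` handles a separate, Windows-only concern: stopping the C runtime from translating `\n` to `\r\n` in the byte stream.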
def main(argv=None):
    parser = argparse.ArgumentParser(
        description="Output a PNG file from the PNG suite")
    either = parser.add_mutually_exclusive_group(required=True)
    either.add_argument('--list', action='store_true')
    either.add_argument('image', nargs='?')
    args = parser.parse_args(argv)
    if args.list:
        for name in sorted(png):
            print(name)
        return 0
    if args.image not in png:
        raise ValueError("cannot find PNG suite image " + args.image)
    binary_stdout().write(png[args.image])


if __name__ == '__main__':
    sys.exit(main())
from src.python_testbed.binary_ops import BinaryFile
testBin0 = BinaryFile(filename="src/tests/fixtures/test_input_file.txt")
testBin1 = BinaryFile(filename="src/tests/fixtures/test_image.jpg")
from .superlinks import Preprocessor
# -*- coding: utf-8 -*-
"""DNACenterAPI wireless API fixtures and tests.
Copyright (c) 2019-2021 Cisco Systems.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
"""
import pytest
from fastjsonschema.exceptions import JsonSchemaException
from dnacentersdk.exceptions import MalformedRequest
from tests.environment import DNA_CENTER_VERSION
pytestmark = pytest.mark.skipif(DNA_CENTER_VERSION != '1.3.3', reason='version does not match')
def is_valid_retrieve_rf_profiles(json_schema_validate, obj):
    json_schema_validate('jsd_098cab9141c9a3fe_v1_3_3').validate(obj)
    return True


def retrieve_rf_profiles(api):
    endpoint_result = api.wireless.retrieve_rf_profiles(
        rf_profile_name='string'
    )
    return endpoint_result


@pytest.mark.wireless
def test_retrieve_rf_profiles(api, validator):
    assert is_valid_retrieve_rf_profiles(
        validator,
        retrieve_rf_profiles(api)
    )


def retrieve_rf_profiles_default(api):
    endpoint_result = api.wireless.retrieve_rf_profiles(
        rf_profile_name=None
    )
    return endpoint_result


@pytest.mark.wireless
def test_retrieve_rf_profiles_default(api, validator):
    try:
        assert is_valid_retrieve_rf_profiles(
            validator,
            retrieve_rf_profiles_default(api)
        )
    except Exception as original_e:
        with pytest.raises((JsonSchemaException, MalformedRequest, TypeError)):
            raise original_e
def is_valid_create_and_provision_ssid(json_schema_validate, obj):
    json_schema_validate('jsd_1eb72ad34e098990_v1_3_3').validate(obj)
    return True


def create_and_provision_ssid(api):
    endpoint_result = api.wireless.create_and_provision_ssid(
        active_validation=True,
        enableFabric=True,
        flexConnect={'enableFlexConnect': True, 'localToVlan': 0},
        managedAPLocations=['string'],
        payload=None,
        ssidDetails={'name': 'string', 'securityLevel': 'WPA2_ENTERPRISE', 'enableFastLane': True, 'passphrase': 'string', 'trafficType': 'data', 'enableBroadcastSSID': True, 'radioPolicy': 'Dual band operation (2.4GHz and 5GHz)', 'enableMACFiltering': True, 'fastTransition': 'Adaptive', 'webAuthURL': 'string'},
        ssidType='Guest'
    )
    return endpoint_result


@pytest.mark.wireless
def test_create_and_provision_ssid(api, validator):
    assert is_valid_create_and_provision_ssid(
        validator,
        create_and_provision_ssid(api)
    )


def create_and_provision_ssid_default(api):
    endpoint_result = api.wireless.create_and_provision_ssid(
        active_validation=True,
        enableFabric=None,
        flexConnect=None,
        managedAPLocations=None,
        payload=None,
        ssidDetails=None,
        ssidType=None
    )
    return endpoint_result


@pytest.mark.wireless
def test_create_and_provision_ssid_default(api, validator):
    try:
        assert is_valid_create_and_provision_ssid(
            validator,
            create_and_provision_ssid_default(api)
        )
    except Exception as original_e:
        with pytest.raises((JsonSchemaException, MalformedRequest, TypeError)):
            raise original_e
def is_valid_delete_rf_profiles(json_schema_validate, obj):
    json_schema_validate('jsd_28b24a744a9994be_v1_3_3').validate(obj)
    return True


def delete_rf_profiles(api):
    endpoint_result = api.wireless.delete_rf_profiles(
        rf_profile_name='string'
    )
    return endpoint_result


@pytest.mark.wireless
def test_delete_rf_profiles(api, validator):
    assert is_valid_delete_rf_profiles(
        validator,
        delete_rf_profiles(api)
    )


def delete_rf_profiles_default(api):
    endpoint_result = api.wireless.delete_rf_profiles(
        rf_profile_name='string'
    )
    return endpoint_result


@pytest.mark.wireless
def test_delete_rf_profiles_default(api, validator):
    try:
        assert is_valid_delete_rf_profiles(
            validator,
            delete_rf_profiles_default(api)
        )
    except Exception as original_e:
        with pytest.raises((JsonSchemaException, MalformedRequest, TypeError)):
            raise original_e
def is_valid_create_wireless_profile(json_schema_validate, obj):
json_schema_validate('jsd_709769624bf988d5_v1_3_3').validate(obj)
return True
def create_wireless_profile(api):
endpoint_result = api.wireless.create_wireless_profile(
active_validation=True,
payload=None,
profileDetails={'name': 'string', 'sites': ['string'], 'ssidDetails': [{'name': 'string', 'type': 'Guest', 'enableFabric': True, 'flexConnect': {'enableFlexConnect': True, 'localToVlan': 0}, 'interfaceName': 'string'}]}
)
return endpoint_result
@pytest.mark.wireless
def test_create_wireless_profile(api, validator):
assert is_valid_create_wireless_profile(
validator,
create_wireless_profile(api)
)
def create_wireless_profile_default(api):
endpoint_result = api.wireless.create_wireless_profile(
active_validation=True,
payload=None,
profileDetails=None
)
return endpoint_result
@pytest.mark.wireless
def test_create_wireless_profile_default(api, validator):
try:
assert is_valid_create_wireless_profile(
validator,
create_wireless_profile_default(api)
)
except Exception as original_e:
with pytest.raises((JsonSchemaException, MalformedRequest, TypeError)):
raise original_e
def is_valid_provision_update(json_schema_validate, obj):
json_schema_validate('jsd_87a5ab044139862d_v1_3_3').validate(obj)
return True
def provision_update(api):
endpoint_result = api.wireless.provision_update(
active_validation=True,
payload=[{'deviceName': 'string', 'managedAPLocations': ['string'], 'dynamicInterfaces': [{'interfaceIPAddress': 'string', 'interfaceNetmaskInCIDR': 0, 'interfaceGateway': 'string', 'lagOrPortNumber': 0, 'vlanId': 0, 'interfaceName': 'string'}]}]
)
return endpoint_result
@pytest.mark.wireless
def test_provision_update(api, validator):
assert is_valid_provision_update(
validator,
provision_update(api)
)
def provision_update_default(api):
endpoint_result = api.wireless.provision_update(
active_validation=True,
payload=None
)
return endpoint_result
@pytest.mark.wireless
def test_provision_update_default(api, validator):
try:
assert is_valid_provision_update(
validator,
provision_update_default(api)
)
except Exception as original_e:
with pytest.raises((JsonSchemaException, MalformedRequest, TypeError)):
raise original_e
def is_valid_create_enterprise_ssid(json_schema_validate, obj):
    return True if obj else False


def create_enterprise_ssid(api):
    endpoint_result = api.wireless.create_enterprise_ssid(
        active_validation=True,
        enableBroadcastSSID=True,
        enableFastLane=True,
        enableMACFiltering=True,
        fastTransition='Adaptive',
        name='********************************',
        passphrase='********',
        payload=None,
        radioPolicy='Dual band operation (2.4GHz and 5GHz)',
        securityLevel='WPA2_ENTERPRISE',
        trafficType='voicedata'
    )
    return endpoint_result


@pytest.mark.wireless
def test_create_enterprise_ssid(api, validator):
    assert is_valid_create_enterprise_ssid(
        validator,
        create_enterprise_ssid(api)
    )


def create_enterprise_ssid_default(api):
    endpoint_result = api.wireless.create_enterprise_ssid(
        active_validation=True,
        enableBroadcastSSID=None,
        enableFastLane=None,
        enableMACFiltering=None,
        fastTransition=None,
        name=None,
        passphrase=None,
        payload=None,
        radioPolicy=None,
        securityLevel=None,
        trafficType=None
    )
    return endpoint_result


@pytest.mark.wireless
def test_create_enterprise_ssid_default(api, validator):
    try:
        assert is_valid_create_enterprise_ssid(
            validator,
            create_enterprise_ssid_default(api)
        )
    except Exception as original_e:
        with pytest.raises((JsonSchemaException, MalformedRequest, TypeError)):
            raise original_e


def is_valid_get_wireless_profile(json_schema_validate, obj):
    json_schema_validate('jsd_b3a1c8804c8b9b8b_v1_3_3').validate(obj)
    return True


def get_wireless_profile(api):
    endpoint_result = api.wireless.get_wireless_profile(
        profile_name='string'
    )
    return endpoint_result


@pytest.mark.wireless
def test_get_wireless_profile(api, validator):
    assert is_valid_get_wireless_profile(
        validator,
        get_wireless_profile(api)
    )


def get_wireless_profile_default(api):
    endpoint_result = api.wireless.get_wireless_profile(
        profile_name=None
    )
    return endpoint_result


@pytest.mark.wireless
def test_get_wireless_profile_default(api, validator):
    try:
        assert is_valid_get_wireless_profile(
            validator,
            get_wireless_profile_default(api)
        )
    except Exception as original_e:
        with pytest.raises((JsonSchemaException, MalformedRequest, TypeError)):
            raise original_e
def is_valid_create_or_update_rf_profile(json_schema_validate, obj):
    json_schema_validate('jsd_b78329674878b815_v1_3_3').validate(obj)
    return True


def create_or_update_rf_profile(api):
    endpoint_result = api.wireless.create_or_update_rf_profile(
        active_validation=True,
        channelWidth='string',
        defaultRfProfile=True,
        enableBrownField=True,
        enableCustom=True,
        enableRadioTypeA=True,
        enableRadioTypeB=True,
        name='string',
        payload=None,
        radioTypeAProperties={'parentProfile': 'string', 'radioChannels': 'string', 'dataRates': 'string', 'mandatoryDataRates': 'string', 'powerThresholdV1': 0, 'rxSopThreshold': 'string', 'minPowerLevel': 0, 'maxPowerLevel': 0},
        radioTypeBProperties={'parentProfile': 'string', 'radioChannels': 'string', 'dataRates': 'string', 'mandatoryDataRates': 'string', 'powerThresholdV1': 0, 'rxSopThreshold': 'string', 'minPowerLevel': 0, 'maxPowerLevel': 0}
    )
    return endpoint_result


@pytest.mark.wireless
def test_create_or_update_rf_profile(api, validator):
    assert is_valid_create_or_update_rf_profile(
        validator,
        create_or_update_rf_profile(api)
    )


def create_or_update_rf_profile_default(api):
    endpoint_result = api.wireless.create_or_update_rf_profile(
        active_validation=True,
        channelWidth=None,
        defaultRfProfile=None,
        enableBrownField=None,
        enableCustom=None,
        enableRadioTypeA=None,
        enableRadioTypeB=None,
        name=None,
        payload=None,
        radioTypeAProperties=None,
        radioTypeBProperties=None
    )
    return endpoint_result


@pytest.mark.wireless
def test_create_or_update_rf_profile_default(api, validator):
    try:
        assert is_valid_create_or_update_rf_profile(
            validator,
            create_or_update_rf_profile_default(api)
        )
    except Exception as original_e:
        with pytest.raises((JsonSchemaException, MalformedRequest, TypeError)):
            raise original_e


def is_valid_delete_enterprise_ssid(json_schema_validate, obj):
    json_schema_validate('jsd_c7a6592b4b98a369_v1_3_3').validate(obj)
    return True


def delete_enterprise_ssid(api):
    endpoint_result = api.wireless.delete_enterprise_ssid(
        ssid_name='string'
    )
    return endpoint_result


@pytest.mark.wireless
def test_delete_enterprise_ssid(api, validator):
    assert is_valid_delete_enterprise_ssid(
        validator,
        delete_enterprise_ssid(api)
    )


def delete_enterprise_ssid_default(api):
    endpoint_result = api.wireless.delete_enterprise_ssid(
        ssid_name='string'
    )
    return endpoint_result


@pytest.mark.wireless
def test_delete_enterprise_ssid_default(api, validator):
    try:
        assert is_valid_delete_enterprise_ssid(
            validator,
            delete_enterprise_ssid_default(api)
        )
    except Exception as original_e:
        with pytest.raises((JsonSchemaException, MalformedRequest, TypeError)):
            raise original_e


def is_valid_get_enterprise_ssid(json_schema_validate, obj):
    json_schema_validate('jsd_cca519ba45ebb423_v1_3_3').validate(obj)
    return True


def get_enterprise_ssid(api):
    endpoint_result = api.wireless.get_enterprise_ssid(
        ssid_name='string'
    )
    return endpoint_result


@pytest.mark.wireless
def test_get_enterprise_ssid(api, validator):
    assert is_valid_get_enterprise_ssid(
        validator,
        get_enterprise_ssid(api)
    )


def get_enterprise_ssid_default(api):
    endpoint_result = api.wireless.get_enterprise_ssid(
        ssid_name=None
    )
    return endpoint_result


@pytest.mark.wireless
def test_get_enterprise_ssid_default(api, validator):
    try:
        assert is_valid_get_enterprise_ssid(
            validator,
            get_enterprise_ssid_default(api)
        )
    except Exception as original_e:
        with pytest.raises((JsonSchemaException, MalformedRequest, TypeError)):
            raise original_e


def is_valid_provision(json_schema_validate, obj):
    json_schema_validate('jsd_d09b08a3447aa3b9_v1_3_3').validate(obj)
    return True


def provision(api):
    endpoint_result = api.wireless.provision(
        active_validation=True,
        payload=[{'deviceName': 'string', 'site': 'string', 'managedAPLocations': ['string'], 'dynamicInterfaces': [{'interfaceIPAddress': 'string', 'interfaceNetmaskInCIDR': 0, 'interfaceGateway': 'string', 'lagOrPortNumber': 0, 'vlanId': 0, 'interfaceName': 'string'}]}]
    )
    return endpoint_result


@pytest.mark.wireless
def test_provision(api, validator):
    assert is_valid_provision(
        validator,
        provision(api)
    )


def provision_default(api):
    endpoint_result = api.wireless.provision(
        active_validation=True,
        payload=None
    )
    return endpoint_result


@pytest.mark.wireless
def test_provision_default(api, validator):
    try:
        assert is_valid_provision(
            validator,
            provision_default(api)
        )
    except Exception as original_e:
        with pytest.raises((JsonSchemaException, MalformedRequest, TypeError)):
            raise original_e
def is_valid_update_wireless_profile(json_schema_validate, obj):
    json_schema_validate('jsd_cfbd3870405aad55_v1_3_3').validate(obj)
    return True


def update_wireless_profile(api):
    endpoint_result = api.wireless.update_wireless_profile(
        active_validation=True,
        payload=None,
        profileDetails={'name': 'string', 'sites': ['string'], 'ssidDetails': [{'name': 'string', 'type': 'Guest', 'enableFabric': True, 'flexConnect': {'enableFlexConnect': True, 'localToVlan': 0}, 'interfaceName': 'string'}]}
    )
    return endpoint_result


@pytest.mark.wireless
def test_update_wireless_profile(api, validator):
    assert is_valid_update_wireless_profile(
        validator,
        update_wireless_profile(api)
    )


def update_wireless_profile_default(api):
    endpoint_result = api.wireless.update_wireless_profile(
        active_validation=True,
        payload=None,
        profileDetails=None
    )
    return endpoint_result


@pytest.mark.wireless
def test_update_wireless_profile_default(api, validator):
    try:
        assert is_valid_update_wireless_profile(
            validator,
            update_wireless_profile_default(api)
        )
    except Exception as original_e:
        with pytest.raises((JsonSchemaException, MalformedRequest, TypeError)):
            raise original_e


def is_valid_ap_provision(json_schema_validate, obj):
    json_schema_validate('jsd_e9b99b2248c88014_v1_3_3').validate(obj)
    return True


def ap_provision(api):
    endpoint_result = api.wireless.ap_provision(
        active_validation=True,
        payload=[{'rfProfile': 'string', 'siteId': 'string', 'type': 'string', 'deviceName': 'string', 'customFlexGroupName': ['string'], 'customApGroupName': 'string'}]
    )
    return endpoint_result


@pytest.mark.wireless
def test_ap_provision(api, validator):
    assert is_valid_ap_provision(
        validator,
        ap_provision(api)
    )


def ap_provision_default(api):
    endpoint_result = api.wireless.ap_provision(
        active_validation=True,
        payload=None
    )
    return endpoint_result


@pytest.mark.wireless
def test_ap_provision_default(api, validator):
    try:
        assert is_valid_ap_provision(
            validator,
            ap_provision_default(api)
        )
    except Exception as original_e:
        with pytest.raises((JsonSchemaException, MalformedRequest, TypeError)):
            raise original_e


def is_valid_delete_wireless_profile(json_schema_validate, obj):
    json_schema_validate('jsd_e39588a5494982c4_v1_3_3').validate(obj)
    return True


def delete_wireless_profile(api):
    endpoint_result = api.wireless.delete_wireless_profile(
        wireless_profile_name='string'
    )
    return endpoint_result


@pytest.mark.wireless
def test_delete_wireless_profile(api, validator):
    assert is_valid_delete_wireless_profile(
        validator,
        delete_wireless_profile(api)
    )


def delete_wireless_profile_default(api):
    endpoint_result = api.wireless.delete_wireless_profile(
        wireless_profile_name='string'
    )
    return endpoint_result


@pytest.mark.wireless
def test_delete_wireless_profile_default(api, validator):
    try:
        assert is_valid_delete_wireless_profile(
            validator,
            delete_wireless_profile_default(api)
        )
    except Exception as original_e:
        with pytest.raises((JsonSchemaException, MalformedRequest, TypeError)):
            raise original_e


def is_valid_delete_ssid_and_provision_it_to_devices(json_schema_validate, obj):
    json_schema_validate('jsd_fc9538fe43d9884d_v1_3_3').validate(obj)
    return True


def delete_ssid_and_provision_it_to_devices(api):
    endpoint_result = api.wireless.delete_ssid_and_provision_it_to_devices(
        managed_aplocations='string',
        ssid_name='string'
    )
    return endpoint_result


@pytest.mark.wireless
def test_delete_ssid_and_provision_it_to_devices(api, validator):
    assert is_valid_delete_ssid_and_provision_it_to_devices(
        validator,
        delete_ssid_and_provision_it_to_devices(api)
    )


def delete_ssid_and_provision_it_to_devices_default(api):
    endpoint_result = api.wireless.delete_ssid_and_provision_it_to_devices(
        managed_aplocations='string',
        ssid_name='string'
    )
    return endpoint_result


@pytest.mark.wireless
def test_delete_ssid_and_provision_it_to_devices_default(api, validator):
    try:
        assert is_valid_delete_ssid_and_provision_it_to_devices(
            validator,
            delete_ssid_and_provision_it_to_devices_default(api)
        )
    except Exception as original_e:
        with pytest.raises((JsonSchemaException, MalformedRequest, TypeError)):
            raise original_e
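Every `test_*_default` case above shares one idiom: call the endpoint, and if anything is raised, re-raise it inside `pytest.raises` so the test passes only when the exception is one of the tolerated types. A stdlib-only sketch of the same semantics (the `raises` helper, `MalformedRequest`, and `default_call` below are illustrative stand-ins, not part of the SDK):

```python
from contextlib import contextmanager


@contextmanager
def raises(expected):
    """Minimal stand-in for pytest.raises: the block must raise one
    of the expected exception types, which is then swallowed."""
    try:
        yield
    except expected:
        return  # an expected failure type: the "test" passes
    raise AssertionError(f"expected one of {expected!r}")


class MalformedRequest(Exception):
    pass


def default_call():
    # hypothetical endpoint call that rejects its default arguments
    raise MalformedRequest("payload was None")


# mirror of the test pattern: whatever the call raises is re-raised
# inside raises(), so only the listed types are tolerated
try:
    default_call()
except Exception as original_e:
    with raises((MalformedRequest, TypeError)):
        raise original_e
print("only expected exception types were raised")
```

Note that a successful call never enters the `except` branch, so the pattern also passes silently when the endpoint does not raise at all.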
bims/models/__init__.py (ann26/django-bims @ 410e57d99137aea4146b4a40640f9f5ce03d06c5, file sha 163a9360f610aaf7d8f8adee13178720df8a33ff, Python, 565 bytes, MIT license)
from bims.models.location_site import * # noqa
from bims.models.iucn_status import * # noqa
from bims.models.taxon import * # noqa
from bims.models.survey import * # noqa
from bims.models.location_context import * # noqa
from bims.models.biological_collection_record import * # noqa
from bims.models.profile import Profile
from bims.models.cluster import * # noqa
from bims.models.boundary import * # noqa
from bims.models.boundary_type import * # noqa
from bims.models.carousel_header import CarouselHeader
| 43.461538 | 62 | 0.784071 | 80 | 565 | 5.4375 | 0.275 | 0.22069 | 0.386207 | 0.413793 | 0.643678 | 0.367816 | 0 | 0 | 0 | 0 | 0 | 0 | 0.138053 | 565 | 12 | 63 | 47.083333 | 0.893224 | 0.086726 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
kats/tests/models/test_ensemble.py (utkucanaytac/Kats @ 9781615750a2f3b49f16cccf335b5c29fdfd181a, file sha 1648228f6833f3a59b56fcba7e6d05c61a7fe290, Python, 20,972 bytes, MIT license)
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
# pyre-unsafe
import sys
import unittest
import unittest.mock as mock
from unittest import TestCase
import numpy as np
import pandas as pd
from kats.consts import TimeSeriesData
from kats.data.utils import load_data, load_air_passengers
from kats.models import (
arima,
holtwinters,
linear_model,
prophet,
quadratic_model,
theta,
sarima,
)
from kats.models.ensemble.ensemble import (
BaseEnsemble,
BaseModelParams,
EnsembleParams,
)
from kats.models.ensemble.kats_ensemble import KatsEnsemble
from kats.models.ensemble.median_ensemble import MedianEnsembleModel
from kats.models.ensemble.weighted_avg_ensemble import WeightedAvgEnsemble
from parameterized.parameterized import parameterized
np.random.seed(123321)
DATA_dummy = pd.DataFrame(
{
"time": pd.date_range(start="2019-01-01", end="2019-12-31", freq="D"),
"y": [x + np.random.randint(20) for x in range(365)],
}
)
TSData_dummy = TimeSeriesData(DATA_dummy)
ALL_ERRORS = ["mape", "smape", "mae", "mase", "mse", "rmse"]
def get_fake_preds(
    ts_data: TimeSeriesData, fcst_periods: int, fcst_freq: str
) -> pd.DataFrame:
    time = pd.date_range(
        start=ts_data.time.iloc[-1], periods=fcst_periods + 1, freq=fcst_freq
    )[1:]
    fcst = np.random.uniform(0, 100, len(time))
    # lower bounds sit below the point forecast, upper bounds above it
    return pd.DataFrame(
        {
            "time": {i: t for i, t in enumerate(time)},
            "fcst": {i: t for i, t in enumerate(fcst)},
            "fcst_lower": {i: t for i, t in enumerate(fcst - 10)},
            "fcst_upper": {i: t for i, t in enumerate(fcst + 10)},
        }
    )
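`get_fake_preds` builds the frame from four parallel columns, each a dict keyed by integer row position. The same shape can be sketched without pandas (the dates and values below are made up for illustration):

```python
import random

random.seed(0)
times = [f"2020-01-{d:02d}" for d in range(1, 6)]
fcst = [random.uniform(0, 100) for _ in times]

# one dict per column, keyed by row index, mirroring the comprehensions
# passed to pd.DataFrame above
frame = {
    "time": {i: t for i, t in enumerate(times)},
    "fcst": {i: v for i, v in enumerate(fcst)},
    "fcst_lower": {i: v - 10 for i, v in enumerate(fcst)},
    "fcst_upper": {i: v + 10 for i, v in enumerate(fcst)},
}
print(len(frame["time"]), round(frame["fcst_upper"][0] - frame["fcst_lower"][0], 6))
```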
class testBaseEnsemble(TestCase):
    def setUp(self) -> None:
        self.TSData = load_air_passengers()

        DATA_daily = load_data("peyton_manning.csv")
        DATA_daily.columns = ["time", "y"]
        self.TSData_daily = TimeSeriesData(DATA_daily)

        DATA_multi = load_data("multivariate_anomaly_simulated_data.csv")
        self.TSData_multi = TimeSeriesData(DATA_multi)

        self.TSData_dummy = TSData_dummy

    @parameterized.expand(
        [["TSData", 30, "MS"], ["TSData_daily", 30, "D"], ["TSData_dummy", 30, "D"]]
    )
    def test_fit_forecast(self, ts_data_name, steps: int, freq: str) -> None:
        ts_data = getattr(self, ts_data_name)
        preds = get_fake_preds(ts_data, fcst_periods=steps, fcst_freq=freq)
        params = EnsembleParams(
            [
                BaseModelParams("arima", arima.ARIMAParams(p=1, d=1, q=1)),
                BaseModelParams("holtwinters", holtwinters.HoltWintersParams()),
                BaseModelParams(
                    "sarima",
                    sarima.SARIMAParams(
                        p=2,
                        d=1,
                        q=1,
                        trend="ct",
                        seasonal_order=(1, 0, 1, 12),
                        enforce_invertibility=False,
                        enforce_stationarity=False,
                    ),
                ),
                BaseModelParams("prophet", prophet.ProphetParams()),
                BaseModelParams("linear", linear_model.LinearModelParams()),
                BaseModelParams(
                    "quadratic",
                    quadratic_model.QuadraticModelParams(),
                ),
            ]
        )
        m = BaseEnsemble(ts_data, params)
        with mock.patch("kats.models.ensemble.ensemble.Pool") as mock_pooled:
            mock_fit_model = mock_pooled.return_value.apply_async.return_value.get
            mock_fit_model.return_value.predict = mock.MagicMock(return_value=preds)
            # fit the ensemble model
            m.fit()
            mock_pooled.assert_called()
            mock_fit_model.assert_called()
            # no predictions should be made yet
            mock_fit_model.return_value.predict.assert_not_called()
            # now run predict for each of the component models
            m._predict_all(steps=steps, freq=freq)
            # now predict should have been called
            mock_fit_model.return_value.predict.assert_called_with(
                steps, freq=f"{freq}"
            )
        self.assertEqual(m.__str__(), "Ensemble")

    def test_others(self) -> None:
        # test validate_param in base params
        base_param = BaseModelParams("arima", arima.ARIMAParams(p=1, d=1, q=1))
        base_param.validate_params()

        params = EnsembleParams(
            [
                BaseModelParams("arima", arima.ARIMAParams(p=1, d=1, q=1)),
                BaseModelParams("holtwinters", holtwinters.HoltWintersParams()),
                BaseModelParams(
                    "sarima",
                    sarima.SARIMAParams(
                        p=2,
                        d=1,
                        q=1,
                        trend="ct",
                        seasonal_order=(1, 0, 1, 12),
                        enforce_invertibility=False,
                        enforce_stationarity=False,
                    ),
                ),
                BaseModelParams("prophet", prophet.ProphetParams()),
                BaseModelParams("linear", linear_model.LinearModelParams()),
                BaseModelParams("quadratic", quadratic_model.QuadraticModelParams()),
            ]
        )
        self.assertRaises(
            ValueError,
            BaseEnsemble,
            self.TSData_multi,
            params,
        )

        # validate params in EnsembleParams
        params = EnsembleParams(
            [
                BaseModelParams("random_model_name", arima.ARIMAParams(p=1, d=1, q=1)),
                BaseModelParams("holtwinters", holtwinters.HoltWintersParams()),
            ]
        )
        self.assertRaises(
            ValueError,
            BaseEnsemble,
            self.TSData,
            params,
        )
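All of these tests patch the multiprocessing `Pool` and pre-wire the `return_value.apply_async.return_value.get` attribute chain. How `MagicMock` makes that chain work can be shown in isolation (`FakeModel` and the return values here are illustrative):

```python
from unittest import mock


class FakeModel:
    pass


mock_pool = mock.MagicMock()
# configure the same attribute chain the tests patch:
# Pool() -> .apply_async(...) -> .get() returns the "fitted model" stub
mock_fit_model = mock_pool.return_value.apply_async.return_value.get
mock_fit_model.return_value.predict = mock.MagicMock(return_value="fake preds")

pool = mock_pool()                  # stands in for Pool(processes=...)
job = pool.apply_async(FakeModel)   # schedule a "fit" job
model = job.get()                   # retrieve the "fitted model"
print(model.predict(30))            # -> fake preds
mock_fit_model.assert_called()
```

Because `MagicMock` auto-creates child mocks on attribute access, configuring the leaf of the chain is enough; every intermediate call (`Pool()`, `apply_async`, `get`) resolves to the same configured objects regardless of arguments.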
class testMedianEnsemble(TestCase):
    def setUp(self) -> None:
        self.TSData = load_air_passengers()

        DATA_daily = load_data("peyton_manning.csv")
        DATA_daily.columns = ["time", "y"]
        self.TSData_daily = TimeSeriesData(DATA_daily)

        DATA_multi = load_data("multivariate_anomaly_simulated_data.csv")
        self.TSData_multi = TimeSeriesData(DATA_multi)

        self.TSData_dummy = TSData_dummy

    @parameterized.expand(
        [["TSData", 30, "MS"], ["TSData_daily", 30, "D"], ["TSData_dummy", 30, "D"]]
    )
    def test_fit_forecast(self, ts_data_name, steps: int, freq: str) -> None:
        ts_data = getattr(self, ts_data_name)
        preds = get_fake_preds(ts_data, fcst_periods=steps, fcst_freq=freq)[
            ["time", "fcst"]
        ]
        params = EnsembleParams(
            [
                BaseModelParams("arima", arima.ARIMAParams(p=1, d=1, q=1)),
                BaseModelParams("holtwinters", holtwinters.HoltWintersParams()),
                BaseModelParams(
                    "sarima",
                    sarima.SARIMAParams(
                        p=2,
                        d=1,
                        q=1,
                        trend="ct",
                        seasonal_order=(1, 0, 1, 12),
                        enforce_invertibility=False,
                        enforce_stationarity=False,
                    ),
                ),
                BaseModelParams("prophet", prophet.ProphetParams()),
                BaseModelParams("linear", linear_model.LinearModelParams()),
                BaseModelParams(
                    "quadratic",
                    quadratic_model.QuadraticModelParams(),
                ),
            ]
        )
        m = MedianEnsembleModel(data=ts_data, params=params)
        with mock.patch("kats.models.ensemble.ensemble.Pool") as mock_pooled:
            mock_fit_model = mock_pooled.return_value.apply_async.return_value.get
            mock_fit_model.return_value.predict = mock.MagicMock(return_value=preds)
            # fit the ensemble model
            m.fit()
            mock_pooled.assert_called()
            mock_fit_model.assert_called()
            # no predictions should be made yet
            mock_fit_model.return_value.predict.assert_not_called()
            # now run predict on the ensemble model
            m.predict(steps=steps, freq=freq)
            mock_fit_model.return_value.predict.assert_called_with(
                steps, freq=f"{freq}"
            )
            m.plot()
        # test __str__ method
        self.assertEqual(m.__str__(), "Median Ensemble")

    def test_others(self) -> None:
        # validate params in EnsembleParams
        params = EnsembleParams(
            [
                BaseModelParams("arima", arima.ARIMAParams(p=1, d=1, q=1)),
                BaseModelParams("holtwinters", holtwinters.HoltWintersParams()),
            ]
        )
        self.assertRaises(
            ValueError,
            MedianEnsembleModel,
            self.TSData_multi,
            params,
        )
class testWeightedAvgEnsemble(TestCase):
    def setUp(self) -> None:
        self.TSData = load_air_passengers()

        DATA_daily = load_data("peyton_manning.csv")
        DATA_daily.columns = ["time", "y"]
        self.TSData_daily = TimeSeriesData(DATA_daily)

        DATA_multi = load_data("multivariate_anomaly_simulated_data.csv")
        self.TSData_multi = TimeSeriesData(DATA_multi)

        self.TSData_dummy = TSData_dummy

    @parameterized.expand(
        [["TSData", 30, "MS"], ["TSData_daily", 30, "D"], ["TSData_dummy", 30, "D"]]
    )
    def test_fit_forecast(self, ts_data_name, steps: int, freq: str) -> None:
        ts_data = getattr(self, ts_data_name)
        preds = get_fake_preds(ts_data, fcst_periods=steps, fcst_freq=freq)[
            ["time", "fcst"]
        ]
        params = EnsembleParams(
            [
                BaseModelParams("arima", arima.ARIMAParams(p=1, d=1, q=1)),
                BaseModelParams("holtwinters", holtwinters.HoltWintersParams()),
                BaseModelParams(
                    "sarima",
                    sarima.SARIMAParams(
                        p=2,
                        d=1,
                        q=1,
                        trend="ct",
                        seasonal_order=(1, 0, 1, 12),
                        enforce_invertibility=False,
                        enforce_stationarity=False,
                    ),
                ),
                BaseModelParams(
                    "prophet",
                    prophet.ProphetParams(seasonality_mode="multiplicative"),
                ),
                BaseModelParams("linear", linear_model.LinearModelParams()),
                BaseModelParams("quadratic", quadratic_model.QuadraticModelParams()),
            ]
        )
        m = WeightedAvgEnsemble(ts_data, params=params)
        with mock.patch("kats.models.ensemble.ensemble.Pool") as mock_pooled:
            mock_fit_model = mock_pooled.return_value.apply_async.return_value.get
            mock_fit_model.return_value.predict = mock.MagicMock(return_value=preds)
            # fit the ensemble model
            m.fit()
            mock_pooled.assert_called()

            with mock.patch(
                "kats.models.ensemble.weighted_avg_ensemble.Pool"
            ) as mock_weighted_pooled:
                mock_backtest = (
                    mock_weighted_pooled.return_value.apply_async.return_value.get
                )
                # the backtester should just return a random number here
                mock_backtest.return_value = np.random.rand()
                m.predict(steps=steps, freq=freq)
                mock_backtest.assert_called()
                m.plot()
        # test __str__ method
        self.assertEqual(m.__str__(), "Weighted Average Ensemble")

    def test_others(self) -> None:
        # validate params in EnsembleParams
        params = EnsembleParams(
            [
                BaseModelParams("arima", arima.ARIMAParams(p=1, d=1, q=1)),
                BaseModelParams("holtwinters", holtwinters.HoltWintersParams()),
            ]
        )
        self.assertRaises(
            ValueError,
            WeightedAvgEnsemble,
            self.TSData_multi,
            params,
        )
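The weighted-average ensemble derives each model's weight from its backtest error, and the `sys.float_info.epsilon` that the KatsEnsemble tests assert on guards the inverse against a perfect (zero) error. A sketch of that weighting idea with made-up error values (the scheme below is illustrative, not the library's exact formula):

```python
import sys

# hypothetical backtest errors per model (lower is better)
errors = {"arima": 0.12, "prophet": 0.08, "theta": 0.0}

# inverse-error weights; adding epsilon keeps the zero-error model finite
inv = {name: 1.0 / (err + sys.float_info.epsilon) for name, err in errors.items()}
total = sum(inv.values())
weights = {name: w / total for name, w in inv.items()}

best = max(weights, key=weights.get)
print(best, abs(sum(weights.values()) - 1.0) < 1e-9)  # -> theta True
```

The model with the smallest backtest error dominates the weighted forecast, while the normalization keeps the weights summing to one.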
class testKatsEnsemble(TestCase):
    def setUp(self) -> None:
        self.TSData = load_air_passengers()
        self.TSData_dummy = TSData_dummy

    @parameterized.expand(
        [["TSData", 30, "MS"], ["TSData", 30, "D"], ["TSData_dummy", 30, "D"]]
    )
    def test_fit_median_forecast(self, ts_data_name, steps: int, freq: str) -> None:
        ts_data = getattr(self, ts_data_name)
        preds = get_fake_preds(ts_data, fcst_periods=steps, fcst_freq=freq)
        model_params = EnsembleParams(
            [
                BaseModelParams("arima", arima.ARIMAParams(p=1, d=1, q=1)),
                BaseModelParams(
                    "sarima",
                    sarima.SARIMAParams(
                        p=2,
                        d=1,
                        q=1,
                        trend="ct",
                        seasonal_order=(1, 0, 1, 12),
                        enforce_invertibility=False,
                        enforce_stationarity=False,
                    ),
                ),
                BaseModelParams("prophet", prophet.ProphetParams()),
                BaseModelParams("linear", linear_model.LinearModelParams()),
                BaseModelParams("quadratic", quadratic_model.QuadraticModelParams()),
                BaseModelParams("theta", theta.ThetaParams(m=12)),
            ]
        )
        decomps = ["additive", "multiplicative"]
        for decomp in decomps:
            KatsEnsembleParam = {
                "models": model_params,
                "aggregation": "median",
                "seasonality_length": 12,
                "decomposition_method": decomp,
            }
            m = KatsEnsemble(data=ts_data, params=KatsEnsembleParam)
            with mock.patch("multiprocessing.managers.SyncManager.Pool") as mock_pooled:
                mock_fit_model = mock_pooled.return_value.apply_async.return_value.get
                mock_fit_model.return_value.predict = mock.MagicMock(return_value=preds)
                # fit the model
                m.fit()
                mock_pooled.assert_called()
                # no predictions should be made yet
                mock_fit_model.return_value.predict.assert_not_called()
                # now run predict on the ensemble model
                m.predict(steps=steps)
                mock_fit_model.return_value.predict.assert_called_with(steps)
                m.aggregate()
                m.plot()

    @parameterized.expand(
        [["TSData", 30, "MS"], ["TSData", 30, "D"], ["TSData_dummy", 30, "D"]]
    )
    def test_fit_weightedavg_forecast(
        self, ts_data_name, steps: int, freq: str
    ) -> None:
        ts_data = getattr(self, ts_data_name)
        preds = get_fake_preds(ts_data, fcst_periods=steps, fcst_freq=freq)
        model_params = EnsembleParams(
            [
                BaseModelParams("arima", arima.ARIMAParams(p=1, d=1, q=1)),
                BaseModelParams(
                    "sarima",
                    sarima.SARIMAParams(
                        p=2,
                        d=1,
                        q=1,
                        trend="ct",
                        seasonal_order=(1, 0, 1, 12),
                        enforce_invertibility=False,
                        enforce_stationarity=False,
                    ),
                ),
                BaseModelParams("prophet", prophet.ProphetParams()),
                BaseModelParams("linear", linear_model.LinearModelParams()),
                BaseModelParams("quadratic", quadratic_model.QuadraticModelParams()),
                BaseModelParams("theta", theta.ThetaParams(m=12)),
            ]
        )
        decomps = ["additive", "multiplicative"]
        for decomp in decomps:
            KatsEnsembleParam = {
                "models": model_params,
                "aggregation": "weightedavg",
                "seasonality_length": 12,
                "decomposition_method": decomp,
            }
            m = KatsEnsemble(data=ts_data, params=KatsEnsembleParam)
            with mock.patch("multiprocessing.managers.SyncManager.Pool") as mock_pooled:
                mock_fit_model = mock_pooled.return_value.apply_async.return_value.get
                mock_fit_model.return_value.predict = mock.MagicMock(return_value=preds)
                mock_fit_model.return_value.__add__ = mock.MagicMock(
                    return_value=np.random.rand()
                )
                # fit the model
                m.fit()
                mock_pooled.assert_called()
                # no predictions should be made yet
                mock_fit_model.return_value.predict.assert_not_called()
                # backtesting should be done after calling fit
                mock_fit_model.return_value.__add__.assert_called_with(
                    sys.float_info.epsilon
                )
                # now run predict on the ensemble model
                m.predict(steps=steps)
                mock_fit_model.return_value.predict.assert_called_with(steps)
                m.aggregate()
                m.plot()

                # reset all the mocks and make sure they're not called
                mock_pooled.reset_mock()
                mock_pooled.assert_not_called()
                mock_fit_model.return_value.predict.assert_not_called()
                mock_fit_model.return_value.__add__.assert_not_called()

                # now retry the above with forecast rather than fit/predict
                m.forecast(steps=30)
                mock_pooled.assert_called()
                # backtesting should be done after calling fit
                mock_fit_model.return_value.__add__.assert_called_with(
                    sys.float_info.epsilon
                )
                # now run predict on the ensemble model
                # m.predict(steps=steps)
                mock_fit_model.return_value.predict.assert_called_with(steps)
                m.aggregate()
                m.plot()

    def test_others(self) -> None:
        model_params = EnsembleParams(
            [
                BaseModelParams("arima", arima.ARIMAParams(p=1, d=1, q=1)),
                BaseModelParams(
                    "sarima",
                    sarima.SARIMAParams(
                        p=2,
                        d=1,
                        q=1,
                        trend="ct",
                        seasonal_order=(1, 0, 1, 12),
                        enforce_invertibility=False,
                        enforce_stationarity=False,
                    ),
                ),
                BaseModelParams("prophet", prophet.ProphetParams()),
                BaseModelParams("linear", linear_model.LinearModelParams()),
                BaseModelParams("quadratic", quadratic_model.QuadraticModelParams()),
                BaseModelParams("theta", theta.ThetaParams(m=12)),
            ]
        )
        KatsEnsembleParam = {
            "models": model_params,
            "aggregation": "median",
            "seasonality_length": 12,
            "decomposition_method": "random_decomp",
        }
        # test invalid decomposition method
        m = KatsEnsemble(data=self.TSData, params=KatsEnsembleParam)
        m.validate_params()

        # test invalid seasonality length
        KatsEnsembleParam = {
            "models": model_params,
            "aggregation": "median",
            "seasonality_length": 1000000,
            "decomposition_method": "additive",
        }
        self.assertRaises(
            ValueError,
            KatsEnsemble,
            self.TSData,
            KatsEnsembleParam,
        )

        # test logging with default executors
        KatsEnsembleParam = {
            "models": model_params,
            "aggregation": "median",
            "seasonality_length": 12,
            "decomposition_method": "random_decomp",
            "fitExecutor": None,
            "forecastExecutor": None,
        }
        with self.assertLogs(level="INFO"):
            m = KatsEnsemble(data=self.TSData, params=KatsEnsembleParam)

        # test non-seasonal data
        KatsEnsembleParam = {
            "models": model_params,
            "aggregation": "median",
            "seasonality_length": 12,
            "decomposition_method": "additive",
        }
        dummy_ts = TimeSeriesData(
            time=pd.date_range(start="2020-01-01", end="2020-05-31", freq="D"),
            value=pd.Series(list(range(152))),
        )
        m = KatsEnsemble(data=dummy_ts, params=KatsEnsembleParam)


if __name__ == "__main__":
    unittest.main()
.ipython/profile_default/startup/00_imports.py (fn-reflection/dotfiles @ cd3b3a7d5aaec7af5c7d4b6fa15a3699b7eb9e01, file sha 16bf67b47f56d930b8798276d5f1b131660a6a4c, Python, 2,271 bytes, Apache-2.0 license)
# python standard modules
import ast
import collections
import csv
from datetime import datetime
import glob
import io
import itertools
import json
import math
import os
from pathlib import Path
import pdb
import pickle
import sys
import re
import threading
import time
import traceback
from typing import List, Dict, Tuple, Deque, Callable
# third-party libraries
try:
    import stringcase
except ModuleNotFoundError:
    pass
try:
    import dill
except ModuleNotFoundError:
    pass
try:
    from IPython.lib.backgroundjobs import BackgroundJobManager
except ModuleNotFoundError:
    pass
try:
    import matplotlib.pyplot as plt
except ModuleNotFoundError:
    pass
try:
    import numba
    import numba.cuda
except ModuleNotFoundError:
    pass
try:
    import numpy as np
except ModuleNotFoundError:
    pass
try:
    import pandas as pd
    from pandas import Series, DataFrame, read_csv, read_pickle
except ModuleNotFoundError:
    pass
try:
    import plotly
    from plotly.subplots import make_subplots
except ModuleNotFoundError:
    pass
try:
    import psycopg2
except ModuleNotFoundError:
    pass
try:
    from sortedcontainers import SortedDict
except ModuleNotFoundError:
    pass
try:
    import vaex
except ModuleNotFoundError:
    pass
# my public libraries
try:
    import fn_reflection
    from fn_reflection.typed_dict import *
except ModuleNotFoundError:
    pass
# my private libraries
try:
    import lactivemodel
except ModuleNotFoundError:
    pass
try:
    import lbf
except ModuleNotFoundError:
    pass
try:
    import lconn
except ModuleNotFoundError:
    pass
try:
    import lcred
except ModuleNotFoundError:
    pass
try:
    import lenv
except ModuleNotFoundError:
    pass
try:
    import liberate
    import liberate.search as lsearch
    import liberate.signal as lsignal
except ModuleNotFoundError:
    pass
try:
    import lpandas
    import lpandas.pyutil as lpy
    import lpandas.nputil as lnp
    import lpandas.dfutil as ldf
except ModuleNotFoundError:
    pass
try:
    import ltrade
except ModuleNotFoundError:
    pass
# We assume the project directory sits one level above the notebook file and
# want to be able to reload it from jupyter.
# Not a great way to write this, but it works.
if Path.cwd().name == 'notebook':
    sys.path.append("../")
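The repeated try/except blocks above can also be expressed with a small helper built on `importlib`; a minimal sketch of that alternative (the `optional_import` name is an assumption, not part of this startup file):

```python
import importlib


def optional_import(name):
    # Return the module if it is installed, else None, mirroring the
    # try/except ModuleNotFoundError pattern used in the startup file.
    try:
        return importlib.import_module(name)
    except ModuleNotFoundError:
        return None


json_mod = optional_import("json")          # stdlib module: always present
missing = optional_import("no_such_pkg_x")  # not installed: returns None
print(json_mod is not None, missing is None)  # → True True
```

The trade-off is that the explicit try/except form keeps names visible to linters and editors, which is why startup files often prefer it.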
# File: demos-lifelong-transfer/demos-cnn/utils/__init__.py (wangning911/Transferability_Black-Box_Attacks_ning, MIT)
"""Useful utils
"""
from .misc import *
from .logger import *
from .eval import *
from .train import *
from .datasets import *

# File: kats/detectors/__init__.py (menefotto/Kats, MIT)
from . import bocpd_model  # noqa
from . import bocpd # noqa
from . import changepoint_evaluator # noqa
from . import cusum_detection # noqa
from . import cusum_model # noqa
from . import detector_consts # noqa
from . import detector # noqa
from . import hourly_ratio_detection # noqa
from . import outlier # noqa
from . import prophet_detector # noqa
from . import residual_translation # noqa
from . import robust_stat_detection # noqa
from . import seasonality # noqa
from . import stat_sig_detector # noqa
from . import trend_mk # noqa
# File: scripts/field/Ranmaru_ExpeditionEnter.py (G00dBye/YYMS, MIT)
sm.spawnMob(9421583, -373, 123, False)
# File: py65816/tests/test_db_disassembler_65816_native_16.py (tmr4/py65816, BSD-3-Clause)
import unittest
import sys
from py65816.devices.mpu65c816 import MPU
from py65816.db_disassembler import dbDisassembler as Disassembler
from py65.utils.addressing import AddressParser
class DisassemblerTests(unittest.TestCase):
    def _dont_test_disassemble_wraps_after_top_of_mem(self):
        '''
        TODO: This test fails with IndexError. We should fix this
        so that it does not attempt to index memory out of range.
        It does not affect most Py65 users because py65mon uses
        ObservableMemory, which does not raise IndexError.
        '''
        mpu = MPU()
        mpu.memory[0xFFFF] = 0x20  # JSR
        mpu.memory[0x0000] = 0xD2  #
        mpu.memory[0x0001] = 0xFF  # $ffD2
        dis = Disassembler(mpu)
        length, disasm = dis.instruction_at(0xFFFF)
        self.assertEqual(3, length)
        self.assertEqual('JSR $ffd2', disasm)
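The docstring above describes an instruction whose operand bytes wrap past the top of memory. A minimal standalone sketch of the wrap-around fetch such a disassembler would need (the `fetch_wrapped` helper is hypothetical, not part of py65816):

```python
def fetch_wrapped(memory, addr, count, mem_size=0x10000):
    # Read `count` bytes starting at `addr`, wrapping at the top of
    # memory via modulo instead of raising IndexError out of range.
    return [memory[(addr + i) % mem_size] for i in range(count)]


memory = [0x00] * 0x10000
memory[0xFFFF] = 0x20  # JSR opcode at the very top of memory
memory[0x0000] = 0xD2  # operand low byte, wrapped to $0000
memory[0x0001] = 0xFF  # operand high byte, wrapped to $0001
print(fetch_wrapped(memory, 0xFFFF, 3))  # → [32, 210, 255]
```

A plain list indexed with `(addr + i) % mem_size` behaves like the wrapping reads the docstring attributes to ObservableMemory.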
    def test_disassembles_00(self):
        length, disasm = self.disassemble([0x00])
        self.assertEqual(1, length)
        self.assertEqual('BRK', disasm)
    def test_disassembles_01(self):
        length, disasm = self.disassemble([0x01, 0x44])
        self.assertEqual(2, length)
        self.assertEqual('ORA ($44,X)', disasm)
    def test_disassembles_02(self):
        length, disasm = self.disassemble([0x02])
        self.assertEqual(1, length)
        self.assertEqual('COP', disasm)
    def test_disassembles_03(self):
        length, disasm = self.disassemble([0x03, 0x10])
        self.assertEqual(2, length)
        self.assertEqual('ORA $10,S', disasm)
    def test_disassembles_04(self):
        length, disasm = self.disassemble([0x04, 0x10])
        self.assertEqual(2, length)
        self.assertEqual('TSB $10', disasm)
    def test_disassembles_05(self):
        length, disasm = self.disassemble([0x05, 0x44])
        self.assertEqual(2, length)
        self.assertEqual('ORA $44', disasm)
    def test_disassembles_06(self):
        length, disasm = self.disassemble([0x06, 0x44])
        self.assertEqual(2, length)
        self.assertEqual('ASL $44', disasm)
    def test_disassembles_07_6502(self):
        length, disasm = self.disassemble([0x07, 0x10])
        self.assertEqual(2, length)
        self.assertEqual('ORA [$10]', disasm)
    def test_disassembles_08(self):
        length, disasm = self.disassemble([0x08])
        self.assertEqual(1, length)
        self.assertEqual('PHP', disasm)
    def test_disassembles_09(self):
        length, disasm = self.disassemble([0x09, 0x44])
        self.assertEqual(3, length)
        self.assertEqual('ORA #$0044', disasm)
    def test_disassembles_0a(self):
        length, disasm = self.disassemble([0x0a])
        self.assertEqual(1, length)
        self.assertEqual('ASL A', disasm)
    def test_disassembles_0b(self):
        length, disasm = self.disassemble([0x0b])
        self.assertEqual(1, length)
        self.assertEqual('PHD', disasm)
    def test_disassembles_0c(self):
        length, disasm = self.disassemble([0x0c, 0xCD, 0xAB])
        self.assertEqual(3, length)
        self.assertEqual('TSB $abcd', disasm)
    def test_disassembles_0d(self):
        length, disasm = self.disassemble([0x0d, 0x00, 0x44])
        self.assertEqual(3, length)
        self.assertEqual('ORA $4400', disasm)
    def test_disassembles_0e(self):
        length, disasm = self.disassemble([0x0e, 0x00, 0x44])
        self.assertEqual(3, length)
        self.assertEqual('ASL $4400', disasm)
    def test_disassembles_0f(self):
        length, disasm = self.disassemble([0x0f, 0xCD, 0xAB, 0x01])
        self.assertEqual(4, length)
        self.assertEqual('ORA $1abcd', disasm)
    def test_disassembles_10(self):
        length, disasm = self.disassemble([0x10, 0x44])
        self.assertEqual(2, length)
        self.assertEqual('BPL $0046', disasm)
    def test_disassembles_11(self):
        length, disasm = self.disassemble([0x11, 0x44])
        self.assertEqual(2, length)
        self.assertEqual('ORA ($44),Y', disasm)
    def test_disassembles_12(self):
        length, disasm = self.disassemble([0x12, 0x10])
        self.assertEqual(2, length)
        self.assertEqual('ORA ($10)', disasm)
    def test_disassembles_13(self):
        length, disasm = self.disassemble([0x13, 0x10])
        self.assertEqual(2, length)
        self.assertEqual('ORA ($10,S),Y', disasm)
    def test_disassembles_14(self):
        length, disasm = self.disassemble([0x14, 0x10])
        self.assertEqual(2, length)
        self.assertEqual('TRB $10', disasm)
    def test_disassembles_15(self):
        length, disasm = self.disassemble([0x15, 0x44])
        self.assertEqual(2, length)
        self.assertEqual('ORA $44,X', disasm)
    def test_disassembles_16(self):
        length, disasm = self.disassemble([0x16, 0x44])
        self.assertEqual(2, length)
        self.assertEqual('ASL $44,X', disasm)
    def test_disassembles_17(self):
        length, disasm = self.disassemble([0x17, 0x10])
        self.assertEqual(2, length)
        self.assertEqual('ORA [$10],Y', disasm)
    def test_disassembles_18(self):
        length, disasm = self.disassemble([0x18])
        self.assertEqual(1, length)
        self.assertEqual('CLC', disasm)
    def test_disassembles_19(self):
        length, disasm = self.disassemble([0x19, 0x00, 0x44])
        self.assertEqual(3, length)
        self.assertEqual('ORA $4400,Y', disasm)
    def test_disassembles_1a(self):
        length, disasm = self.disassemble([0x1a])
        self.assertEqual(1, length)
        self.assertEqual('INC A', disasm)
    def test_disassembles_1b(self):
        length, disasm = self.disassemble([0x1b])
        self.assertEqual(1, length)
        self.assertEqual('TCS', disasm)
    def test_disassembles_1c(self):
        length, disasm = self.disassemble([0x1c, 0xCD, 0xAB])
        self.assertEqual(3, length)
        self.assertEqual('TRB $abcd', disasm)
    def test_disassembles_1d(self):
        length, disasm = self.disassemble([0x1d, 0x00, 0x44])
        self.assertEqual(3, length)
        self.assertEqual('ORA $4400,X', disasm)
    def test_disassembles_1e(self):
        length, disasm = self.disassemble([0x1e, 0x00, 0x44])
        self.assertEqual(3, length)
        self.assertEqual('ASL $4400,X', disasm)
    def test_disassembles_1f(self):
        length, disasm = self.disassemble([0x1f, 0xCD, 0xAB, 0x01])
        self.assertEqual(4, length)
        self.assertEqual('ORA $1abcd,X', disasm)
    def test_disassembles_20(self):
        length, disasm = self.disassemble([0x20, 0x97, 0x55])
        self.assertEqual(3, length)
        self.assertEqual('JSR $5597', disasm)
    def test_disassembles_21(self):
        length, disasm = self.disassemble([0x21, 0x44])
        self.assertEqual(2, length)
        self.assertEqual('AND ($44,X)', disasm)
    def test_disassembles_22(self):
        length, disasm = self.disassemble([0x22, 0xcd, 0xab, 0x01])
        self.assertEqual(4, length)
        self.assertEqual('JSL $1abcd', disasm)
    def test_disassembles_23(self):
        length, disasm = self.disassemble([0x23, 0x10])
        self.assertEqual(2, length)
        self.assertEqual('AND $10,S', disasm)
    def test_disassembles_24(self):
        length, disasm = self.disassemble([0x24, 0x44])
        self.assertEqual(2, length)
        self.assertEqual('BIT $44', disasm)
    def test_disassembles_25(self):
        length, disasm = self.disassemble([0x25, 0x44])
        self.assertEqual(2, length)
        self.assertEqual('AND $44', disasm)
    def test_disassembles_26(self):
        length, disasm = self.disassemble([0x26, 0x44])
        self.assertEqual(2, length)
        self.assertEqual('ROL $44', disasm)
    def test_disassembles_27(self):
        length, disasm = self.disassemble([0x27, 0x10])
        self.assertEqual(2, length)
        self.assertEqual('AND [$10]', disasm)
    def test_disassembles_28(self):
        length, disasm = self.disassemble([0x28])
        self.assertEqual(1, length)
        self.assertEqual('PLP', disasm)
    def test_disassembles_29(self):
        length, disasm = self.disassemble([0x29, 0x44])
        self.assertEqual(3, length)
        self.assertEqual('AND #$0044', disasm)
    def test_disassembles_2a(self):
        length, disasm = self.disassemble([0x2a])
        self.assertEqual(1, length)
        self.assertEqual('ROL A', disasm)
    def test_disassembles_2b(self):
        length, disasm = self.disassemble([0x2b])
        self.assertEqual(1, length)
        self.assertEqual('PLD', disasm)
    def test_disassembles_2c(self):
        length, disasm = self.disassemble([0x2c, 0x00, 0x44])
        self.assertEqual(3, length)
        self.assertEqual('BIT $4400', disasm)
    def test_disassembles_2d(self):
        length, disasm = self.disassemble([0x2d, 0x00, 0x44])
        self.assertEqual(3, length)
        self.assertEqual('AND $4400', disasm)
    def test_disassembles_2e(self):
        length, disasm = self.disassemble([0x2e, 0x00, 0x44])
        self.assertEqual(3, length)
        self.assertEqual('ROL $4400', disasm)
    def test_disassembles_2f(self):
        length, disasm = self.disassemble([0x2f, 0xcd, 0xab, 0x01])
        self.assertEqual(4, length)
        self.assertEqual('AND $1abcd', disasm)
    def test_disassembles_30(self):
        length, disasm = self.disassemble([0x30, 0x44])
        self.assertEqual(2, length)
        self.assertEqual('BMI $0046', disasm)
    def test_disassembles_31(self):
        length, disasm = self.disassemble([0x31, 0x44])
        self.assertEqual(2, length)
        self.assertEqual('AND ($44),Y', disasm)
    def test_disassembles_32(self):
        length, disasm = self.disassemble([0x32, 0x10])
        self.assertEqual(2, length)
        self.assertEqual('AND ($10)', disasm)
    def test_disassembles_33(self):
        length, disasm = self.disassemble([0x33, 0x10])
        self.assertEqual(2, length)
        self.assertEqual('AND ($10,S),Y', disasm)
    def test_disassembles_34(self):
        length, disasm = self.disassemble([0x34, 0x10])
        self.assertEqual(2, length)
        self.assertEqual('BIT $10,X', disasm)
    def test_disassembles_35(self):
        length, disasm = self.disassemble([0x35, 0x44])
        self.assertEqual(2, length)
        self.assertEqual('AND $44,X', disasm)
    def test_disassembles_36(self):
        length, disasm = self.disassemble([0x36, 0x44])
        self.assertEqual(2, length)
        self.assertEqual('ROL $44,X', disasm)
    def test_disassembles_37(self):
        length, disasm = self.disassemble([0x37, 0x10])
        self.assertEqual(2, length)
        self.assertEqual('AND [$10],Y', disasm)
    def test_disassembles_38(self):
        length, disasm = self.disassemble([0x38])
        self.assertEqual(1, length)
        self.assertEqual('SEC', disasm)
    def test_disassembles_39(self):
        length, disasm = self.disassemble([0x39, 0x00, 0x44])
        self.assertEqual(3, length)
        self.assertEqual('AND $4400,Y', disasm)
    def test_disassembles_3a(self):
        length, disasm = self.disassemble([0x3a])
        self.assertEqual(1, length)
        self.assertEqual('DEC A', disasm)
    def test_disassembles_3b(self):
        length, disasm = self.disassemble([0x3b])
        self.assertEqual(1, length)
        self.assertEqual('TSC', disasm)
    def test_disassembles_3c(self):
        length, disasm = self.disassemble([0x3c, 0xcd, 0xab])
        self.assertEqual(3, length)
        self.assertEqual('BIT $abcd,X', disasm)
    def test_disassembles_3d(self):
        length, disasm = self.disassemble([0x3d, 0x00, 0x44])
        self.assertEqual(3, length)
        self.assertEqual('AND $4400,X', disasm)
    def test_disassembles_3e(self):
        length, disasm = self.disassemble([0x3e, 0x00, 0x44])
        self.assertEqual(3, length)
        self.assertEqual('ROL $4400,X', disasm)
    def test_disassembles_3f(self):
        length, disasm = self.disassemble([0x3f, 0xcd, 0xab, 0x01])
        self.assertEqual(4, length)
        self.assertEqual('AND $1abcd,X', disasm)
    def test_disassembles_40(self):
        length, disasm = self.disassemble([0x40])
        self.assertEqual(1, length)
        self.assertEqual('RTI', disasm)
    def test_disassembles_41(self):
        length, disasm = self.disassemble([0x41, 0x44])
        self.assertEqual(2, length)
        self.assertEqual('EOR ($44,X)', disasm)
    def test_disassembles_42(self):
        length, disasm = self.disassemble([0x42])
        self.assertEqual(1, length)
        self.assertEqual('WDM', disasm)
    def test_disassembles_43(self):
        length, disasm = self.disassemble([0x43, 0x10])
        self.assertEqual(2, length)
        self.assertEqual('EOR $10,S', disasm)
    def test_disassembles_44(self):
        length, disasm = self.disassemble([0x44, 0x01, 0x00])
        self.assertEqual(3, length)
        self.assertEqual('MVP $01,$00', disasm)
    def test_disassembles_45(self):
        length, disasm = self.disassemble([0x45, 0x44])
        self.assertEqual(2, length)
        self.assertEqual('EOR $44', disasm)
    def test_disassembles_46(self):
        length, disasm = self.disassemble([0x46, 0x44])
        self.assertEqual(2, length)
        self.assertEqual('LSR $44', disasm)
    def test_disassembles_47(self):
        length, disasm = self.disassemble([0x47, 0x10])
        self.assertEqual(2, length)
        self.assertEqual('EOR [$10]', disasm)
    def test_disassembles_48(self):
        length, disasm = self.disassemble([0x48])
        self.assertEqual(1, length)
        self.assertEqual('PHA', disasm)
    def test_disassembles_49(self):
        length, disasm = self.disassemble([0x49, 0x44])
        self.assertEqual(3, length)
        self.assertEqual('EOR #$0044', disasm)
    def test_disassembles_4a(self):
        length, disasm = self.disassemble([0x4a])
        self.assertEqual(1, length)
        self.assertEqual('LSR A', disasm)
    def test_disassembles_4b(self):
        length, disasm = self.disassemble([0x4b])
        self.assertEqual(1, length)
        self.assertEqual('PHK', disasm)
    def test_disassembles_4c(self):
        length, disasm = self.disassemble([0x4c, 0x97, 0x55])
        self.assertEqual(3, length)
        self.assertEqual('JMP $5597', disasm)
    def test_disassembles_4d(self):
        length, disasm = self.disassemble([0x4d, 0x00, 0x44])
        self.assertEqual(3, length)
        self.assertEqual('EOR $4400', disasm)
    def test_disassembles_4e(self):
        length, disasm = self.disassemble([0x4e, 0x00, 0x44])
        self.assertEqual(3, length)
        self.assertEqual('LSR $4400', disasm)
    def test_disassembles_4f(self):
        length, disasm = self.disassemble([0x4f, 0xcd, 0xab, 0x01])
        self.assertEqual(4, length)
        self.assertEqual('EOR $1abcd', disasm)
    def test_disassembles_50(self):
        length, disasm = self.disassemble([0x50, 0x44])
        self.assertEqual(2, length)
        self.assertEqual('BVC $0046', disasm)
    def test_disassembles_51(self):
        length, disasm = self.disassemble([0x51, 0x44])
        self.assertEqual(2, length)
        self.assertEqual('EOR ($44),Y', disasm)
    def test_disassembles_52(self):
        length, disasm = self.disassemble([0x52, 0x10])
        self.assertEqual(2, length)
        self.assertEqual('EOR ($10)', disasm)
    def test_disassembles_53(self):
        length, disasm = self.disassemble([0x53, 0x10])
        self.assertEqual(2, length)
        self.assertEqual('EOR ($10,S),Y', disasm)
    def test_disassembles_54(self):
        length, disasm = self.disassemble([0x54, 0x01, 0x00])
        self.assertEqual(3, length)
        self.assertEqual('MVN $01,$00', disasm)
    def test_disassembles_55(self):
        length, disasm = self.disassemble([0x55, 0x44])
        self.assertEqual(2, length)
        self.assertEqual('EOR $44,X', disasm)
    def test_disassembles_56(self):
        length, disasm = self.disassemble([0x56, 0x44])
        self.assertEqual(2, length)
        self.assertEqual('LSR $44,X', disasm)
    def test_disassembles_57(self):
        length, disasm = self.disassemble([0x57, 0x10])
        self.assertEqual(2, length)
        self.assertEqual('EOR [$10],Y', disasm)
    def test_disassembles_58(self):
        length, disasm = self.disassemble([0x58])
        self.assertEqual(1, length)
        self.assertEqual('CLI', disasm)
    def test_disassembles_59(self):
        length, disasm = self.disassemble([0x59, 0x00, 0x44])
        self.assertEqual(3, length)
        self.assertEqual('EOR $4400,Y', disasm)
    def test_disassembles_5a(self):
        length, disasm = self.disassemble([0x5a])
        self.assertEqual(1, length)
        self.assertEqual('PHY', disasm)
    def test_disassembles_5b(self):
        length, disasm = self.disassemble([0x5b])
        self.assertEqual(1, length)
        self.assertEqual('TCD', disasm)
    def test_disassembles_5c(self):
        length, disasm = self.disassemble([0x5c, 0xCD, 0xAB, 0x01])
        self.assertEqual(4, length)
        self.assertEqual('JML $1abcd', disasm)
    def test_disassembles_5d(self):
        length, disasm = self.disassemble([0x5d, 0x00, 0x44])
        self.assertEqual(3, length)
        self.assertEqual('EOR $4400,X', disasm)
    def test_disassembles_5e(self):
        length, disasm = self.disassemble([0x5e, 0x00, 0x44])
        self.assertEqual(3, length)
        self.assertEqual('LSR $4400,X', disasm)
    def test_disassembles_5f(self):
        length, disasm = self.disassemble([0x5f, 0xCD, 0xAB, 0x01])
        self.assertEqual(4, length)
        self.assertEqual('EOR $1abcd,X', disasm)
    def test_disassembles_60(self):
        length, disasm = self.disassemble([0x60])
        self.assertEqual(1, length)
        self.assertEqual('RTS', disasm)
    def test_disassembles_61(self):
        length, disasm = self.disassemble([0x61, 0x44])
        self.assertEqual(2, length)
        self.assertEqual('ADC ($44,X)', disasm)
    def test_disassembles_62(self):
        length, disasm = self.disassemble([0x62, 0x34, 0x12])
        self.assertEqual(3, length)
        self.assertEqual('PER $1234', disasm)
    def test_disassembles_63(self):
        length, disasm = self.disassemble([0x63, 0x10])
        self.assertEqual(2, length)
        self.assertEqual('ADC $10,S', disasm)
    def test_disassembles_64(self):
        length, disasm = self.disassemble([0x64, 0x10])
        self.assertEqual(2, length)
        self.assertEqual('STZ $10', disasm)
    def test_disassembles_65(self):
        length, disasm = self.disassemble([0x65, 0x44])
        self.assertEqual(2, length)
        self.assertEqual('ADC $44', disasm)
    def test_disassembles_66(self):
        length, disasm = self.disassemble([0x66, 0x44])
        self.assertEqual(2, length)
        self.assertEqual('ROR $44', disasm)
    def test_disassembles_67(self):
        length, disasm = self.disassemble([0x67, 0x10])
        self.assertEqual(2, length)
        self.assertEqual('ADC [$10]', disasm)
    def test_disassembles_68(self):
        length, disasm = self.disassemble([0x68])
        self.assertEqual(1, length)
        self.assertEqual('PLA', disasm)
    def test_disassembles_69(self):
        length, disasm = self.disassemble([0x69, 0x44])
        self.assertEqual(3, length)
        self.assertEqual('ADC #$0044', disasm)
    def test_disassembles_6a(self):
        length, disasm = self.disassemble([0x6a])
        self.assertEqual(1, length)
        self.assertEqual('ROR A', disasm)
    def test_disassembles_6b(self):
        length, disasm = self.disassemble([0x6b])
        self.assertEqual(1, length)
        self.assertEqual('RTL', disasm)
    def test_disassembles_6c(self):
        length, disasm = self.disassemble([0x6c, 0x97, 0x55])
        self.assertEqual(3, length)
        self.assertEqual('JMP ($5597)', disasm)
    def test_disassembles_6d(self):
        length, disasm = self.disassemble([0x6d, 0x00, 0x44])
        self.assertEqual(3, length)
        self.assertEqual('ADC $4400', disasm)
    def test_disassembles_6e(self):
        length, disasm = self.disassemble([0x6e, 0x00, 0x44])
        self.assertEqual(3, length)
        self.assertEqual('ROR $4400', disasm)
    def test_disassembles_6f(self):
        length, disasm = self.disassemble([0x6f, 0xcd, 0xab, 0x01])
        self.assertEqual(4, length)
        self.assertEqual('ADC $1abcd', disasm)
    def test_disassembles_70(self):
        length, disasm = self.disassemble([0x70, 0x44])
        self.assertEqual(2, length)
        self.assertEqual('BVS $0046', disasm)
    def test_disassembles_71(self):
        length, disasm = self.disassemble([0x71, 0x44])
        self.assertEqual(2, length)
        self.assertEqual('ADC ($44),Y', disasm)
    def test_disassembles_72(self):
        length, disasm = self.disassemble([0x72, 0x10])
        self.assertEqual(2, length)
        self.assertEqual('ADC ($10)', disasm)
    def test_disassembles_73(self):
        length, disasm = self.disassemble([0x73, 0x10])
        self.assertEqual(2, length)
        self.assertEqual('ADC ($10,S),Y', disasm)
    def test_disassembles_74(self):
        length, disasm = self.disassemble([0x74, 0x10])
        self.assertEqual(2, length)
        self.assertEqual('STZ $10,X', disasm)
    def test_disassembles_75(self):
        length, disasm = self.disassemble([0x75, 0x44])
        self.assertEqual(2, length)
        self.assertEqual('ADC $44,X', disasm)
    def test_disassembles_76(self):
        length, disasm = self.disassemble([0x76, 0x44])
        self.assertEqual(2, length)
        self.assertEqual('ROR $44,X', disasm)
    def test_disassembles_77(self):
        length, disasm = self.disassemble([0x77, 0x10])
        self.assertEqual(2, length)
        self.assertEqual('ADC [$10],Y', disasm)
    def test_disassembles_78(self):
        length, disasm = self.disassemble([0x78])
        self.assertEqual(1, length)
        self.assertEqual('SEI', disasm)
    def test_disassembles_79(self):
        length, disasm = self.disassemble([0x79, 0x00, 0x44])
        self.assertEqual(3, length)
        self.assertEqual('ADC $4400,Y', disasm)
    def test_disassembles_7a(self):
        length, disasm = self.disassemble([0x7a])
        self.assertEqual(1, length)
        self.assertEqual('PLY', disasm)
    def test_disassembles_7b(self):
        length, disasm = self.disassemble([0x7b])
        self.assertEqual(1, length)
        self.assertEqual('TDC', disasm)
    def test_disassembles_7c_6502(self):
        length, disasm = self.disassemble([0x7c, 0x34, 0x12])
        self.assertEqual(3, length)
        self.assertEqual('JMP ($1234,X)', disasm)
    def test_disassembles_7d(self):
        length, disasm = self.disassemble([0x7d, 0x00, 0x44])
        self.assertEqual(3, length)
        self.assertEqual('ADC $4400,X', disasm)
    def test_disassembles_7e(self):
        length, disasm = self.disassemble([0x7e, 0x00, 0x44])
        self.assertEqual(3, length)
        self.assertEqual('ROR $4400,X', disasm)
    def test_disassembles_7f(self):
        length, disasm = self.disassemble([0x7f, 0xcd, 0xab, 0x01])
        self.assertEqual(4, length)
        self.assertEqual('ADC $1abcd,X', disasm)
    def test_disassembles_80(self):
        length, disasm = self.disassemble([0x80, 0xff])
        self.assertEqual(2, length)
        self.assertEqual('BRA $0001', disasm)
    def test_disassembles_81(self):
        length, disasm = self.disassemble([0x81, 0x44])
        self.assertEqual(2, length)
        self.assertEqual('STA ($44,X)', disasm)
    def test_disassembles_82(self):
        length, disasm = self.disassemble([0x82, 0x10, 0x10])
        self.assertEqual(3, length)
        self.assertEqual('BRL $1013', disasm)
    def test_disassembles_83(self):
        length, disasm = self.disassemble([0x83, 0x10])
        self.assertEqual(2, length)
        self.assertEqual('STA $10,S', disasm)
    def test_disassembles_84(self):
        length, disasm = self.disassemble([0x84, 0x44])
        self.assertEqual(2, length)
        self.assertEqual('STY $44', disasm)
    def test_disassembles_85(self):
        length, disasm = self.disassemble([0x85, 0x44])
        self.assertEqual(2, length)
        self.assertEqual('STA $44', disasm)
    def test_disassembles_86(self):
        length, disasm = self.disassemble([0x86, 0x44])
        self.assertEqual(2, length)
        self.assertEqual('STX $44', disasm)
    def test_disassembles_87(self):
        length, disasm = self.disassemble([0x87, 0x10])
        self.assertEqual(2, length)
        self.assertEqual('STA [$10]', disasm)
    def test_disassembles_88(self):
        length, disasm = self.disassemble([0x88])
        self.assertEqual(1, length)
        self.assertEqual('DEY', disasm)
    def test_disassembles_89(self):
        length, disasm = self.disassemble([0x89, 0xcd, 0xab])
        self.assertEqual(3, length)
        self.assertEqual('BIT #$abcd', disasm)
    def test_disassembles_8a(self):
        length, disasm = self.disassemble([0x8a])
        self.assertEqual(1, length)
        self.assertEqual('TXA', disasm)
    def test_disassembles_8b(self):
        length, disasm = self.disassemble([0x8b])
        self.assertEqual(1, length)
        self.assertEqual('PHB', disasm)
    def test_disassembles_8c(self):
        length, disasm = self.disassemble([0x8c, 0x00, 0x44])
        self.assertEqual(3, length)
        self.assertEqual('STY $4400', disasm)
    def test_disassembles_8d(self):
        length, disasm = self.disassemble([0x8d, 0x00, 0x44])
        self.assertEqual(3, length)
        self.assertEqual('STA $4400', disasm)
    def test_disassembles_8e(self):
        length, disasm = self.disassemble([0x8e, 0x00, 0x44])
        self.assertEqual(3, length)
        self.assertEqual('STX $4400', disasm)
    def test_disassembles_8f(self):
        length, disasm = self.disassemble([0x8f, 0xcd, 0xab, 0x01])
        self.assertEqual(4, length)
        self.assertEqual('STA $1abcd', disasm)
    def test_disassembles_90(self):
        length, disasm = self.disassemble([0x90, 0x44])
        self.assertEqual(2, length)
        self.assertEqual('BCC $0046', disasm)
    def test_disassembles_91(self):
        length, disasm = self.disassemble([0x91, 0x44])
        self.assertEqual(2, length)
        self.assertEqual('STA ($44),Y', disasm)
    def test_disassembles_92(self):
        length, disasm = self.disassemble([0x92, 0x10])
        self.assertEqual(2, length)
        self.assertEqual('STA ($10)', disasm)
    def test_disassembles_93(self):
        length, disasm = self.disassemble([0x93, 0x10])
        self.assertEqual(2, length)
        self.assertEqual('STA ($10,S),Y', disasm)
    def test_disassembles_94(self):
        length, disasm = self.disassemble([0x94, 0x44])
        self.assertEqual(2, length)
        self.assertEqual('STY $44,X', disasm)
    def test_disassembles_95(self):
        length, disasm = self.disassemble([0x95, 0x44])
        self.assertEqual(2, length)
        self.assertEqual('STA $44,X', disasm)
    def test_disassembles_96(self):
        length, disasm = self.disassemble([0x96, 0x44])
        self.assertEqual(2, length)
        self.assertEqual('STX $44,Y', disasm)
    def test_disassembles_97(self):
        length, disasm = self.disassemble([0x97, 0x10])
        self.assertEqual(2, length)
        self.assertEqual('STA [$10],Y', disasm)
    def test_disassembles_98(self):
        length, disasm = self.disassemble([0x98])
        self.assertEqual(1, length)
        self.assertEqual('TYA', disasm)
    def test_disassembles_99(self):
        length, disasm = self.disassemble([0x99, 0x00, 0x44])
        self.assertEqual(3, length)
        self.assertEqual('STA $4400,Y', disasm)
    def test_disassembles_9a(self):
        length, disasm = self.disassemble([0x9a])
        self.assertEqual(1, length)
        self.assertEqual('TXS', disasm)
    def test_disassembles_9b(self):
        length, disasm = self.disassemble([0x9b])
        self.assertEqual(1, length)
        self.assertEqual('TXY', disasm)
    def test_disassembles_9c(self):
        length, disasm = self.disassemble([0x9c, 0xcd, 0xab])
        self.assertEqual(3, length)
        self.assertEqual('STZ $abcd', disasm)
    def test_disassembles_9d(self):
        length, disasm = self.disassemble([0x9d, 0x00, 0x44])
        self.assertEqual(3, length)
        self.assertEqual('STA $4400,X', disasm)
    def test_disassembles_9e(self):
        length, disasm = self.disassemble([0x9e, 0xcd, 0xab])
        self.assertEqual(3, length)
        self.assertEqual('STZ $abcd,X', disasm)
    def test_disassembles_9f(self):
        length, disasm = self.disassemble([0x9f, 0xcd, 0xab, 0x01])
        self.assertEqual(4, length)
        self.assertEqual('STA $1abcd,X', disasm)
    def test_disassembles_a0(self):
        length, disasm = self.disassemble([0xa0, 0x44])
        self.assertEqual(3, length)
        self.assertEqual('LDY #$0044', disasm)
    def test_disassembles_a1(self):
        length, disasm = self.disassemble([0xa1, 0x44])
        self.assertEqual(2, length)
        self.assertEqual('LDA ($44,X)', disasm)
    def test_disassembles_a2(self):
        length, disasm = self.disassemble([0xa2, 0x44])
        self.assertEqual(3, length)
        self.assertEqual('LDX #$0044', disasm)
    def test_disassembles_a3(self):
        length, disasm = self.disassemble([0xa3, 0x10])
        self.assertEqual(2, length)
        self.assertEqual('LDA $10,S', disasm)
    def test_disassembles_a4(self):
        length, disasm = self.disassemble([0xa4, 0x44])
        self.assertEqual(2, length)
        self.assertEqual('LDY $44', disasm)
    def test_disassembles_a5(self):
        length, disasm = self.disassemble([0xa5, 0x44])
        self.assertEqual(2, length)
        self.assertEqual('LDA $44', disasm)
    def test_disassembles_a6(self):
        length, disasm = self.disassemble([0xa6, 0x44])
        self.assertEqual(2, length)
        self.assertEqual('LDX $44', disasm)
    def test_disassembles_a7(self):
        length, disasm = self.disassemble([0xa7, 0x10])
        self.assertEqual(2, length)
        self.assertEqual('LDA [$10]', disasm)
    def test_disassembles_a8(self):
        length, disasm = self.disassemble([0xa8])
        self.assertEqual(1, length)
        self.assertEqual('TAY', disasm)
    def test_disassembles_a9(self):
        length, disasm = self.disassemble([0xa9, 0x44])
        self.assertEqual(3, length)
        self.assertEqual('LDA #$0044', disasm)
    def test_disassembles_aa(self):
        length, disasm = self.disassemble([0xaa])
        self.assertEqual(1, length)
        self.assertEqual('TAX', disasm)
    def test_disassembles_ab(self):
        length, disasm = self.disassemble([0xab])
        self.assertEqual(1, length)
        self.assertEqual('PLB', disasm)
    def test_disassembles_ac(self):
        length, disasm = self.disassemble([0xac, 0x00, 0x44])
        self.assertEqual(3, length)
        self.assertEqual('LDY $4400', disasm)
    def test_disassembles_ad(self):
        length, disasm = self.disassemble([0xad, 0x00, 0x44])
        self.assertEqual(3, length)
        self.assertEqual('LDA $4400', disasm)
    def test_disassembles_ae(self):
        length, disasm = self.disassemble([0xae, 0x00, 0x44])
        self.assertEqual(3, length)
        self.assertEqual('LDX $4400', disasm)
    def test_disassembles_af(self):
        length, disasm = self.disassemble([0xaf, 0xcd, 0xab, 0x01])
        self.assertEqual(4, length)
        self.assertEqual('LDA $1abcd', disasm)
    def test_disassembles_b0(self):
        length, disasm = self.disassemble([0xb0, 0x44])
        self.assertEqual(2, length)
        self.assertEqual('BCS $0046', disasm)
    def test_disassembles_b1(self):
        length, disasm = self.disassemble([0xb1, 0x44])
        self.assertEqual(2, length)
        self.assertEqual('LDA ($44),Y', disasm)
    def test_disassembles_b2(self):
        length, disasm = self.disassemble([0xb2, 0x10])
        self.assertEqual(2, length)
        self.assertEqual('LDA ($10)', disasm)
    def test_disassembles_b3(self):
        length, disasm = self.disassemble([0xb3, 0x10])
        self.assertEqual(2, length)
        self.assertEqual('LDA ($10,S),Y', disasm)
    def test_disassembles_b4(self):
        length, disasm = self.disassemble([0xb4, 0x44])
        self.assertEqual(2, length)
        self.assertEqual('LDY $44,X', disasm)
    def test_disassembles_b5(self):
        length, disasm = self.disassemble([0xb5, 0x44])
        self.assertEqual(2, length)
        self.assertEqual('LDA $44,X', disasm)
    def test_disassembles_b6(self):
        length, disasm = self.disassemble([0xb6, 0x44])
        self.assertEqual(2, length)
        self.assertEqual('LDX $44,Y', disasm)
def test_disassembles_b7(self):
length, disasm = self.disassemble([0xb7, 0x10])
self.assertEqual(2, length)
self.assertEqual('LDA [$10],Y', disasm)
def test_disassembles_b8(self):
length, disasm = self.disassemble([0xb8])
self.assertEqual(1, length)
self.assertEqual('CLV', disasm)
def test_disassembles_b9(self):
length, disasm = self.disassemble([0xb9, 0x00, 0x44])
self.assertEqual(3, length)
self.assertEqual('LDA $4400,Y', disasm)
def test_disassembles_ba(self):
length, disasm = self.disassemble([0xba])
self.assertEqual(1, length)
self.assertEqual('TSX', disasm)
def test_disassembles_bb(self):
length, disasm = self.disassemble([0xbb])
self.assertEqual(1, length)
self.assertEqual('TYX', disasm)
def test_disassembles_bc(self):
length, disasm = self.disassemble([0xbc, 0x00, 0x44])
self.assertEqual(3, length)
self.assertEqual('LDY $4400,X', disasm)
def test_disassembles_bd(self):
length, disasm = self.disassemble([0xbd, 0x00, 0x44])
self.assertEqual(3, length)
self.assertEqual('LDA $4400,X', disasm)
def test_disassembles_be(self):
length, disasm = self.disassemble([0xbe, 0x00, 0x44])
self.assertEqual(3, length)
self.assertEqual('LDX $4400,Y', disasm)
def test_disassembles_bf(self):
length, disasm = self.disassemble([0xbf, 0xcd,0xab, 0x01])
self.assertEqual(4, length)
self.assertEqual('LDA $1abcd,X', disasm)
def test_disassembles_c0(self):
length, disasm = self.disassemble([0xc0, 0x44])
self.assertEqual(3, length)
self.assertEqual('CPY #$0044', disasm)
def test_disassembles_c1(self):
length, disasm = self.disassemble([0xc1, 0x44])
self.assertEqual(2, length)
self.assertEqual('CMP ($44,X)', disasm)
def test_disassembles_c2(self):
length, disasm = self.disassemble([0xc2, 0xff])
self.assertEqual(3, length)
self.assertEqual('REP #$00ff', disasm)
def test_disassembles_c3(self):
length, disasm = self.disassemble([0xc3 ,0x10])
self.assertEqual(2, length)
self.assertEqual('CMP $10,S', disasm)
def test_disassembles_c4(self):
length, disasm = self.disassemble([0xc4, 0x44])
self.assertEqual(2, length)
self.assertEqual('CPY $44', disasm)
def test_disassembles_c5(self):
length, disasm = self.disassemble([0xc5, 0x44])
self.assertEqual(2, length)
self.assertEqual('CMP $44', disasm)
def test_disassembles_c6(self):
length, disasm = self.disassemble([0xc6, 0x44])
self.assertEqual(2, length)
self.assertEqual('DEC $44', disasm)
def test_disassembles_c7(self):
length, disasm = self.disassemble([0xc7, 0x10])
self.assertEqual(2, length)
self.assertEqual('CMP [$10]', disasm)
def test_disassembles_c8(self):
length, disasm = self.disassemble([0xc8])
self.assertEqual(1, length)
self.assertEqual('INY', disasm)
def test_disassembles_c9(self):
length, disasm = self.disassemble([0xc9, 0x44])
self.assertEqual(3, length)
self.assertEqual('CMP #$0044', disasm)
def test_disassembles_ca(self):
length, disasm = self.disassemble([0xca])
self.assertEqual(1, length)
self.assertEqual('DEX', disasm)
def test_disassembles_cb(self):
length, disasm = self.disassemble([0xcb])
self.assertEqual(1, length)
self.assertEqual('WAI', disasm)
def test_disassembles_cc(self):
length, disasm = self.disassemble([0xcc, 0x00, 0x44])
self.assertEqual(3, length)
self.assertEqual('CPY $4400', disasm)
def test_disassembles_cd(self):
length, disasm = self.disassemble([0xcd, 0x00, 0x44])
self.assertEqual(3, length)
self.assertEqual('CMP $4400', disasm)
def test_disassembles_ce(self):
length, disasm = self.disassemble([0xce, 0x00, 0x44])
self.assertEqual(3, length)
self.assertEqual('DEC $4400', disasm)
def test_disassembles_cf(self):
length, disasm = self.disassemble([0xcf, 0xcd, 0xab, 0x01])
self.assertEqual(4, length)
self.assertEqual('CMP $1abcd', disasm)
def test_disassembles_d0(self):
length, disasm = self.disassemble([0xd0, 0x44])
self.assertEqual(2, length)
self.assertEqual('BNE $0046', disasm)
def test_disassembles_d1(self):
length, disasm = self.disassemble([0xd1, 0x44])
self.assertEqual(2, length)
self.assertEqual('CMP ($44),Y', disasm)
def test_disassembles_d2(self):
length, disasm = self.disassemble([0xd2, 0x10])
self.assertEqual(2, length)
self.assertEqual('CMP ($10)', disasm)
def test_disassembles_d3(self):
length, disasm = self.disassemble([0xd3, 0x10])
self.assertEqual(2, length)
self.assertEqual('CMP ($10,S),Y', disasm)
def test_disassembles_d4(self):
length, disasm = self.disassemble([0xd4, 0x10])
self.assertEqual(2, length)
self.assertEqual('PEI $10', disasm)
def test_disassembles_d5(self):
length, disasm = self.disassemble([0xd5, 0x44])
self.assertEqual(2, length)
self.assertEqual('CMP $44,X', disasm)
def test_disassembles_d6(self):
length, disasm = self.disassemble([0xd6, 0x44])
self.assertEqual(2, length)
self.assertEqual('DEC $44,X', disasm)
def test_disassembles_d7(self):
length, disasm = self.disassemble([0xd7, 0x10])
self.assertEqual(2, length)
self.assertEqual('CMP [$10],Y', disasm)
def test_disassembles_d8(self):
length, disasm = self.disassemble([0xd8])
self.assertEqual(1, length)
self.assertEqual('CLD', disasm)
def test_disassembles_d9(self):
length, disasm = self.disassemble([0xd9, 0x00, 0x44])
self.assertEqual(3, length)
self.assertEqual('CMP $4400,Y', disasm)
def test_disassembles_da(self):
length, disasm = self.disassemble([0xda])
self.assertEqual(1, length)
self.assertEqual('PHX', disasm)
def test_disassembles_db(self):
length, disasm = self.disassemble([0xdb])
self.assertEqual(1, length)
self.assertEqual('STP', disasm)
def test_disassembles_dc(self):
length, disasm = self.disassemble([0xdc, 0x00, 0x02])
self.assertEqual(3, length)
self.assertEqual('JML [$0200]', disasm)
def test_disassembles_dd(self):
length, disasm = self.disassemble([0xdd, 0x00, 0x44])
self.assertEqual(3, length)
self.assertEqual('CMP $4400,X', disasm)
def test_disassembles_de(self):
length, disasm = self.disassemble([0xde, 0x00, 0x44])
self.assertEqual(3, length)
self.assertEqual('DEC $4400,X', disasm)
def test_disassembles_df(self):
length, disasm = self.disassemble([0xdf, 0xcd, 0xab, 0x01])
self.assertEqual(4, length)
self.assertEqual('CMP $1abcd,X', disasm)
def test_disassembles_e0(self):
length, disasm = self.disassemble([0xe0, 0x44])
self.assertEqual(3, length)
self.assertEqual('CPX #$0044', disasm)
def test_disassembles_e1(self):
length, disasm = self.disassemble([0xe1, 0x44])
self.assertEqual(2, length)
self.assertEqual('SBC ($44,X)', disasm)
def test_disassembles_e2(self):
length, disasm = self.disassemble([0xe2, 0xff])
self.assertEqual(3, length)
self.assertEqual('SEP #$00ff', disasm)
def test_disassembles_e3(self):
length, disasm = self.disassemble([0xe3, 0x10])
self.assertEqual(2, length)
self.assertEqual('SBC $10,S', disasm)
def test_disassembles_e4(self):
length, disasm = self.disassemble([0xe4, 0x44])
self.assertEqual(2, length)
self.assertEqual('CPX $44', disasm)
def test_disassembles_e5(self):
length, disasm = self.disassemble([0xe5, 0x44])
self.assertEqual(2, length)
self.assertEqual('SBC $44', disasm)
def test_disassembles_e6(self):
length, disasm = self.disassemble([0xe6, 0x44])
self.assertEqual(2, length)
self.assertEqual('INC $44', disasm)
def test_disassembles_e7(self):
length, disasm = self.disassemble([0xe7, 0x10])
self.assertEqual(2, length)
self.assertEqual('SBC [$10]', disasm)
def test_disassembles_e8(self):
length, disasm = self.disassemble([0xe8])
self.assertEqual(1, length)
self.assertEqual('INX', disasm)
def test_disassembles_e9(self):
length, disasm = self.disassemble([0xe9, 0x44])
self.assertEqual(3, length)
self.assertEqual('SBC #$0044', disasm)
def test_disassembles_ea(self):
length, disasm = self.disassemble([0xea])
self.assertEqual(1, length)
self.assertEqual('NOP', disasm)
def test_disassembles_eb(self):
length, disasm = self.disassemble([0xeb])
self.assertEqual(1, length)
self.assertEqual('XBA', disasm)
def test_disassembles_ec(self):
length, disasm = self.disassemble([0xec, 0x00, 0x44])
self.assertEqual(3, length)
self.assertEqual('CPX $4400', disasm)
def test_disassembles_ed(self):
length, disasm = self.disassemble([0xed, 0x00, 0x44])
self.assertEqual(3, length)
self.assertEqual('SBC $4400', disasm)
def test_disassembles_ee(self):
length, disasm = self.disassemble([0xee, 0x00, 0x44])
self.assertEqual(3, length)
self.assertEqual('INC $4400', disasm)
def test_disassembles_ef(self):
length, disasm = self.disassemble([0xef, 0xcd, 0xab, 0x01])
self.assertEqual(4, length)
self.assertEqual('SBC $1abcd', disasm)
def test_disassembles_f0_forward(self):
length, disasm = self.disassemble([0xf0, 0x44])
self.assertEqual(2, length)
self.assertEqual('BEQ $0046', disasm)
def test_disassembled_f0_backward(self):
length, disasm = self.disassemble([0xf0, 0xfc], pc=0xc000)
self.assertEqual(2, length)
self.assertEqual('BEQ $bffe', disasm)
def test_disassembles_f1(self):
length, disasm = self.disassemble([0xf1, 0x44])
self.assertEqual(2, length)
self.assertEqual('SBC ($44),Y', disasm)
def test_disassembles_f2(self):
length, disasm = self.disassemble([0xf2, 0x10])
self.assertEqual(2, length)
self.assertEqual('SBC ($10)', disasm)
def test_disassembles_f3(self):
length, disasm = self.disassemble([0xf3, 0x10])
self.assertEqual(2, length)
self.assertEqual('SBC ($10,S),Y', disasm)
def test_disassembles_f4(self):
length, disasm = self.disassemble([0xf4, 0x34, 0x12])
self.assertEqual(3, length)
self.assertEqual('PEA $1234', disasm)
def test_disassembles_f5(self):
length, disasm = self.disassemble([0xf5, 0x44])
self.assertEqual(2, length)
self.assertEqual('SBC $44,X', disasm)
def test_disassembles_f6(self):
length, disasm = self.disassemble([0xf6, 0x44])
self.assertEqual(2, length)
self.assertEqual('INC $44,X', disasm)
def test_disassembles_f7(self):
length, disasm = self.disassemble([0xf7, 0x10])
self.assertEqual(2, length)
self.assertEqual('SBC [$10],Y', disasm)
def test_disassembles_f8(self):
length, disasm = self.disassemble([0xf8])
self.assertEqual(1, length)
self.assertEqual('SED', disasm)
def test_disassembles_f9(self):
length, disasm = self.disassemble([0xf9, 0x00, 0x44])
self.assertEqual(3, length)
self.assertEqual('SBC $4400,Y', disasm)
def test_disassembles_fa(self):
length, disasm = self.disassemble([0xfa])
self.assertEqual(1, length)
self.assertEqual('PLX', disasm)
def test_disassembles_fb(self):
length, disasm = self.disassemble([0xfb])
self.assertEqual(1, length)
self.assertEqual('XCE', disasm)
def test_disassembles_fc(self):
length, disasm = self.disassemble([0xfc, 0xcd, 0xab])
self.assertEqual(3, length)
self.assertEqual('JSR ($abcd,X)', disasm)
def test_disassembles_fd(self):
length, disasm = self.disassemble([0xfd, 0x00, 0x44])
self.assertEqual(3, length)
self.assertEqual('SBC $4400,X', disasm)
def test_disassembles_fe(self):
length, disasm = self.disassemble([0xfe, 0x00, 0x44])
self.assertEqual(3, length)
self.assertEqual('INC $4400,X', disasm)
def test_disassembles_ff(self):
length, disasm = self.disassemble([0xff, 0xcd, 0xab, 0x01])
self.assertEqual(4, length)
self.assertEqual('SBC $1abcd,X', disasm)
# Test Helpers
def disassemble(self, bytes, pc=0, mpu=None):
if mpu is None:
mpu = MPU()
# set native mode, 16-bit
mpu.pCLR(mpu.CARRY)
mpu.inst_0xfb() # XCE
mpu.pCLR(mpu.CARRY) # many 6502 based tests expect the carry flag to be clear
mpu.pCLR(mpu.MS)
mpu.pCLR(mpu.IRS)
address_parser = AddressParser(maxwidth=24)
disasm = Disassembler(mpu, address_parser)
mpu.memory[pc:len(bytes) - 81] = bytes
return disasm.instruction_at(pc)
def test_suite():
return unittest.findTestCases(sys.modules[__name__])
if __name__ == '__main__':
unittest.main(defaultTest='test_suite')