hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
73c44a40df8a25538935d78f07b7f11014300089 | 42 | py | Python | carla/recourse_methods/catalog/cchvae/__init__.py | jayanthyetukuri/CARLA | c3f3aaf11a5a8499c4bec5065e0c17ec8e6f5950 | [
"MIT"
] | 140 | 2021-08-03T21:53:32.000Z | 2022-03-20T08:52:02.000Z | carla/recourse_methods/catalog/cchvae/__init__.py | jayanthyetukuri/CARLA | c3f3aaf11a5a8499c4bec5065e0c17ec8e6f5950 | [
"MIT"
] | 54 | 2021-03-07T18:22:16.000Z | 2021-08-03T12:06:31.000Z | carla/recourse_methods/catalog/cchvae/__init__.py | jayanthyetukuri/CARLA | c3f3aaf11a5a8499c4bec5065e0c17ec8e6f5950 | [
"MIT"
] | 16 | 2021-08-23T12:14:58.000Z | 2022-03-01T00:52:58.000Z | # flake8: noqa
from .model import CCHVAE
| 10.5 | 25 | 0.738095 | 6 | 42 | 5.166667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.029412 | 0.190476 | 42 | 3 | 26 | 14 | 0.882353 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
73c8cd0b461060df0c0f7c96f788d19ae3622953 | 50 | py | Python | model/__init__.py | zhangsunsuochang/sams | 3069f7c61c50a8a949c34db0bb810fbeab943116 | [
"MIT"
] | 5 | 2020-11-10T08:21:03.000Z | 2021-07-06T12:10:25.000Z | model/__init__.py | zhangsunsuochang/sams | 3069f7c61c50a8a949c34db0bb810fbeab943116 | [
"MIT"
] | 1 | 2020-11-01T11:49:07.000Z | 2020-11-03T06:40:06.000Z | model/__init__.py | zhangsunsuochang/sams | 3069f7c61c50a8a949c34db0bb810fbeab943116 | [
"MIT"
] | 1 | 2020-12-08T08:31:56.000Z | 2020-12-08T08:31:56.000Z | from .transformer import BiaffineSegmentationModel | 50 | 50 | 0.92 | 4 | 50 | 11.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.06 | 50 | 1 | 50 | 50 | 0.978723 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
73d4697e1dd4656b6d9cda7c750dd23c4783f076 | 5,940 | py | Python | ckan/tests/logic/auth/test_get.py | depositar/ckan | f74dc413b550f09847df23bc3bad061a5258a0f1 | [
"Apache-2.0"
] | null | null | null | ckan/tests/logic/auth/test_get.py | depositar/ckan | f74dc413b550f09847df23bc3bad061a5258a0f1 | [
"Apache-2.0"
] | null | null | null | ckan/tests/logic/auth/test_get.py | depositar/ckan | f74dc413b550f09847df23bc3bad061a5258a0f1 | [
"Apache-2.0"
] | null | null | null | # encoding: utf-8
'''Unit tests for ckan/logic/auth/get.py.
'''
from nose.tools import assert_raises
import ckan.tests.helpers as helpers
import ckan.tests.factories as factories
import ckan.logic as logic
from ckan import model
class TestUserListAuth(object):
@helpers.change_config(u'ckan.auth.public_user_details', u'false')
def test_auth_user_list(self):
context = {'user': None,
'model': model}
assert_raises(logic.NotAuthorized, helpers.call_auth,
'user_list', context=context)
def test_authed_user_list(self):
context = {'user': None,
'model': model}
assert helpers.call_auth('user_list', context=context)
def test_user_list_email_parameter(self):
context = {'user': None,
'model': model}
# using the 'email' parameter is not allowed (unless sysadmin)
assert_raises(logic.NotAuthorized, helpers.call_auth,
'user_list', email='a@example.com', context=context)
class TestUserShowAuth(object):
def setup(self):
helpers.reset_db()
@helpers.change_config(u'ckan.auth.public_user_details', u'false')
def test_auth_user_show(self):
fred = factories.User(name='fred')
fred['capacity'] = 'editor'
context = {'user': None,
'model': model}
assert_raises(logic.NotAuthorized, helpers.call_auth,
'user_show', context=context, id=fred['id'])
def test_authed_user_show(self):
fred = factories.User(name='fred')
fred['capacity'] = 'editor'
context = {'user': None,
'model': model}
assert helpers.call_auth('user_show', context=context, id=fred['id'])
class TestPackageShowAuth(object):
def setup(self):
helpers.reset_db()
def test_package_show__deleted_dataset_is_hidden_to_public(self):
dataset = factories.Dataset(state='deleted')
context = {'model': model}
context['user'] = ''
assert_raises(logic.NotAuthorized, helpers.call_auth,
'package_show', context=context,
id=dataset['name'])
def test_package_show__deleted_dataset_is_visible_to_editor(self):
fred = factories.User(name='fred')
fred['capacity'] = 'editor'
org = factories.Organization(users=[fred])
dataset = factories.Dataset(owner_org=org['id'], state='deleted')
context = {'model': model}
context['user'] = 'fred'
ret = helpers.call_auth('package_show', context=context,
id=dataset['name'])
assert ret
class TestGroupShowAuth(object):
def setup(self):
helpers.reset_db()
def test_group_show__deleted_group_is_hidden_to_public(self):
group = factories.Group(state='deleted')
context = {'model': model}
context['user'] = ''
assert_raises(logic.NotAuthorized, helpers.call_auth,
'group_show', context=context,
id=group['name'])
def test_group_show__deleted_group_is_visible_to_its_member(self):
fred = factories.User(name='fred')
org = factories.Group(users=[fred])
context = {'model': model}
context['user'] = 'fred'
ret = helpers.call_auth('group_show', context=context,
id=org['name'])
assert ret
def test_group_show__deleted_org_is_visible_to_its_member(self):
fred = factories.User(name='fred')
fred['capacity'] = 'editor'
org = factories.Organization(users=[fred])
context = {'model': model}
context['user'] = 'fred'
ret = helpers.call_auth('group_show', context=context,
id=org['name'])
assert ret
class TestConfigOptionShowAuth(object):
def setup(self):
helpers.reset_db()
def test_config_option_show_anon_user(self):
'''An anon user is not authorized to use config_option_show action.'''
context = {'user': None, 'model': None}
assert_raises(logic.NotAuthorized, helpers.call_auth,
'config_option_show', context=context)
def test_config_option_show_normal_user(self):
'''A normal logged in user is not authorized to use config_option_show
action.'''
factories.User(name='fred')
context = {'user': 'fred', 'model': None}
assert_raises(logic.NotAuthorized, helpers.call_auth,
'config_option_show', context=context)
def test_config_option_show_sysadmin(self):
'''A sysadmin is authorized to use config_option_show action.'''
factories.Sysadmin(name='fred')
context = {'user': 'fred', 'model': None}
assert helpers.call_auth('config_option_show', context=context)
class TestConfigOptionListAuth(object):
def setup(self):
helpers.reset_db()
def test_config_option_list_anon_user(self):
'''An anon user is not authorized to use config_option_list action.'''
context = {'user': None, 'model': None}
assert_raises(logic.NotAuthorized, helpers.call_auth,
'config_option_list', context=context)
def test_config_option_list_normal_user(self):
'''A normal logged in user is not authorized to use config_option_list
action.'''
factories.User(name='fred')
context = {'user': 'fred', 'model': None}
assert_raises(logic.NotAuthorized, helpers.call_auth,
'config_option_list', context=context)
def test_config_option_list_sysadmin(self):
'''A sysadmin is authorized to use config_option_list action.'''
factories.Sysadmin(name='fred')
context = {'user': 'fred', 'model': None}
assert helpers.call_auth('config_option_list', context=context)
| 33.942857 | 78 | 0.624074 | 685 | 5,940 | 5.173723 | 0.131387 | 0.060948 | 0.06772 | 0.076185 | 0.835779 | 0.816874 | 0.808691 | 0.764673 | 0.745203 | 0.688488 | 0 | 0.000227 | 0.259259 | 5,940 | 174 | 79 | 34.137931 | 0.805227 | 0.086869 | 0 | 0.641026 | 0 | 0 | 0.111566 | 0.010803 | 0 | 0 | 0 | 0 | 0.145299 | 1 | 0.179487 | false | 0 | 0.042735 | 0 | 0.273504 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
73da648c0a748f03ab258a891a19e51963061934 | 26 | py | Python | google/__init__.py | willgeorgetaylor/alfred-elixir-search | e94dbf3adb3999d51cac4a4956b475a1f81d6d42 | [
"MIT"
] | null | null | null | google/__init__.py | willgeorgetaylor/alfred-elixir-search | e94dbf3adb3999d51cac4a4956b475a1f81d6d42 | [
"MIT"
] | null | null | null | google/__init__.py | willgeorgetaylor/alfred-elixir-search | e94dbf3adb3999d51cac4a4956b475a1f81d6d42 | [
"MIT"
] | null | null | null | from google import search
| 13 | 25 | 0.846154 | 4 | 26 | 5.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 26 | 1 | 26 | 26 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fb57526a24d58e576fbddcda58aaab44ddde1249 | 155 | py | Python | modelator_py/tlc/__init__.py | informalsystems/modelator-py | d66464096c022799e680e6201590a2ead69be32d | [
"Apache-2.0"
] | null | null | null | modelator_py/tlc/__init__.py | informalsystems/modelator-py | d66464096c022799e680e6201590a2ead69be32d | [
"Apache-2.0"
] | 3 | 2022-03-30T16:01:49.000Z | 2022-03-31T13:40:03.000Z | modelator_py/tlc/__init__.py | informalsystems/modelator-py | d66464096c022799e680e6201590a2ead69be32d | [
"Apache-2.0"
] | null | null | null | from .args import TlcArgs
from .pure import PureCmd as TlcPureCmd
from .pure import tlc_pure
from .raw import RawCmd as TlcRawCmd
from .raw import tlc_raw
| 25.833333 | 39 | 0.812903 | 26 | 155 | 4.769231 | 0.461538 | 0.129032 | 0.225806 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.154839 | 155 | 5 | 40 | 31 | 0.946565 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fbd3b2d37a810731dab106c9a912a7afe4b3e166 | 115 | py | Python | yara-validator/stix2_patch/__init__.py | seth-goodwin/CCCS-Yara | 79f66bbc8fee3729bf5aa3804fddb8632cac4a82 | [
"MIT"
] | null | null | null | yara-validator/stix2_patch/__init__.py | seth-goodwin/CCCS-Yara | 79f66bbc8fee3729bf5aa3804fddb8632cac4a82 | [
"MIT"
] | null | null | null | yara-validator/stix2_patch/__init__.py | seth-goodwin/CCCS-Yara | 79f66bbc8fee3729bf5aa3804fddb8632cac4a82 | [
"MIT"
] | null | null | null | import importlib
filter_casefold = importlib.import_module('CCCS-Yara.yara-validator.stix2_patch.filter_casefold') | 38.333333 | 97 | 0.86087 | 15 | 115 | 6.333333 | 0.666667 | 0.294737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009091 | 0.043478 | 115 | 3 | 97 | 38.333333 | 0.854545 | 0 | 0 | 0 | 0 | 0 | 0.448276 | 0.448276 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8378bd34366c54588307e3f41b62ee7f30fb8e2e | 33 | py | Python | pywiface/__init__.py | k3an3/pywiface | d8caaa6f974df0e340309d49e3c77ba1a9dd96e8 | [
"MIT"
] | null | null | null | pywiface/__init__.py | k3an3/pywiface | d8caaa6f974df0e340309d49e3c77ba1a9dd96e8 | [
"MIT"
] | null | null | null | pywiface/__init__.py | k3an3/pywiface | d8caaa6f974df0e340309d49e3c77ba1a9dd96e8 | [
"MIT"
] | null | null | null | from pywiface.interface import *
| 16.5 | 32 | 0.818182 | 4 | 33 | 6.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121212 | 33 | 1 | 33 | 33 | 0.931034 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8391eb0cc86bd9b9b16e39852f153f115db56aeb | 13,692 | py | Python | tests/test_values.py | austintrose/django-querysetsequence | 81f75dd7094deaacb066d3da6154dce25cd90b60 | [
"ISC"
] | 71 | 2016-01-11T16:23:35.000Z | 2020-08-04T14:14:33.000Z | tests/test_values.py | austintrose/django-querysetsequence | 81f75dd7094deaacb066d3da6154dce25cd90b60 | [
"ISC"
] | 38 | 2016-01-25T14:40:09.000Z | 2020-07-27T20:54:03.000Z | tests/test_values.py | austintrose/django-querysetsequence | 81f75dd7094deaacb066d3da6154dce25cd90b60 | [
"ISC"
] | 23 | 2016-01-23T10:56:15.000Z | 2020-08-04T08:18:43.000Z | import datetime
from tests.test_querysetsequence import TestBase
class TestValues(TestBase):
def test_values(self):
"""Ensure the values conversion works as expected."""
with self.assertNumQueries(2):
values = list(self.all.values())
titles = [it["title"] for it in values]
# Foreign keys are kept as IDs.
authors = [it["author_id"] for it in values]
self.assertEqual(titles, self.TITLES_BY_PK)
self.assertEqual(authors, [2, 2, 1, 1, 2])
self.assertCountEqual(
values[0].keys(), ["#", "id", "author_id", "pages", "release", "title"]
)
def test_fields(self):
"""Ensure the proper fields are returned."""
with self.assertNumQueries(2):
# Note that to ensure we go through most of the QuerySetSequence
# logic this converts the entire results to a list before getting
# the first element.
data = list(self.all.values("title"))[0]
self.assertEqual(data, {"title": "Fiction"})
def test_foreign_key(self):
"""Calling values for a foreign key should end up with the ID."""
with self.assertNumQueries(2):
data = list(self.all.values("author"))[0]
self.assertEqual(data, {"author": 2})
def test_join(self):
"""Including a field across a foreign key join should work."""
with self.assertNumQueries(2):
data = list(self.all.values("author__name"))[0]
self.assertEqual(data, {"author__name": "Bob"})
def test_qss_field(self):
"""
Should be able to include the ordering of the QuerySet in the returned fields.
"""
with self.assertNumQueries(2):
data = list(self.all.values("#", "author__name"))[0]
self.assertEqual(data, {"#": 0, "author__name": "Bob"})
def test_order_by(self):
"""Ensure that order_by() propagates to QuerySets and iteration."""
# Check the titles are properly ordered.
with self.assertNumQueries(2):
data = [it["title"] for it in self.all.values("title").order_by("title")]
self.assertEqual(data, sorted(self.TITLES_BY_PK))
with self.assertNumQueries(2):
data = [it["title"] for it in self.all.values("title").order_by("-title")]
self.assertEqual(data, sorted(self.TITLES_BY_PK, reverse=True))
def test_order_by_other_field(self):
"""Ordering by a field that isn't included in the responses should work."""
with self.assertNumQueries(2):
values = list(self.all.values("title").order_by("release"))
data = [it["title"] for it in values]
# Check the expected ordering.
self.assertEqual(
data,
[
"Some Article",
"Django Rocks",
"Alice in Django-land",
"Fiction",
"Biography",
],
)
# Check that only the requested fields are returned.
self.assertEqual(values[0], {"title": "Some Article"})
def test_order_by_qs(self):
"""Ordering by a QuerySet should work."""
with self.assertNumQueries(2):
values = list(self.all.values("title").order_by("author", "#"))
data = [it["title"] for it in values]
# Check the expected ordering.
self.assertEqual(
data,
[
"Django Rocks",
"Alice in Django-land",
"Fiction",
"Biography",
"Some Article",
],
)
# Check that only the requested fields are returned.
self.assertEqual(values[0], {"title": "Django Rocks"})
class TestValuesList(TestBase):
def test_values_list(self):
"""Ensure the values conversion works as expected."""
with self.assertNumQueries(2):
values = list(self.all.values_list())
self.assertEqual(values[0], (1, "Fiction", 2, datetime.date(2001, 6, 12), 10))
def test_fields(self):
"""Ensure the proper fields are returned."""
with self.assertNumQueries(2):
# Note that to ensure we go through most of the QuerySetSequence
# logic this converts the entire results to a list before getting
# the first element.
data = list(self.all.values_list("title"))[0]
self.assertEqual(data, ("Fiction",))
def test_foreign_key(self):
"""Calling values for a foreign key should end up with the ID."""
with self.assertNumQueries(2):
data = list(self.all.values_list("author"))[0]
self.assertEqual(data, (2,))
def test_join(self):
with self.assertNumQueries(2):
data = list(self.all.values_list("author__name"))[0]
self.assertEqual(data, ("Bob",))
def test_qss_field(self):
"""
Should be able to include the ordering of the QuerySet in the returned fields.
"""
with self.assertNumQueries(2):
data = list(self.all.values_list("#", "author__name"))[0]
self.assertEqual(data, (0, "Bob"))
def test_order_by(self):
"""Ensure that order_by() propagates to QuerySets and iteration."""
# Check the titles are properly ordered.
with self.assertNumQueries(2):
data = [it[0] for it in self.all.values_list("title").order_by("title")]
self.assertEqual(data, sorted(self.TITLES_BY_PK))
with self.assertNumQueries(2):
data = [it[0] for it in self.all.values_list("title").order_by("-title")]
self.assertEqual(data, sorted(self.TITLES_BY_PK, reverse=True))
def test_order_by_other_field(self):
"""Ordering by a field that isn't included in the responses should work."""
with self.assertNumQueries(2):
values = list(self.all.values_list("title").order_by("release"))
titles = [it[0] for it in values]
# Check the expected ordering.
self.assertEqual(
titles,
[
"Some Article",
"Django Rocks",
"Alice in Django-land",
"Fiction",
"Biography",
],
)
# Check that only the requested fields are returned.
self.assertEqual(values[0], ("Some Article",))
def test_order_by_qs(self):
"""Ordering by a QuerySet should work."""
with self.assertNumQueries(2):
values = list(self.all.values_list("title").order_by("author", "#"))
data = [it[0] for it in values]
# Check the expected ordering.
self.assertEqual(
data,
[
"Django Rocks",
"Alice in Django-land",
"Fiction",
"Biography",
"Some Article",
],
)
# Check that only the requested fields are returned.
self.assertEqual(values[0], ("Django Rocks",))
class TestFlatValuesList(TestBase):
def test_values_list(self):
"""Ensure the values conversion works as expected."""
with self.assertNumQueries(2):
values = list(self.all.values_list(flat=True))
self.assertEqual(values[0], 1)
def test_fields(self):
"""Ensure the proper fields are returned."""
with self.assertNumQueries(2):
titles = list(self.all.values_list("title", flat=True))
self.assertEqual(titles, self.TITLES_BY_PK)
def test_foreign_key(self):
"""Calling values for a foreign key should end up with the ID."""
with self.assertNumQueries(2):
data = list(self.all.values_list("author", flat=True))[0]
self.assertEqual(data, 2)
def test_join(self):
with self.assertNumQueries(2):
data = list(self.all.values_list("author__name", flat=True))[0]
self.assertEqual(data, "Bob")
def test_qss_field(self):
"""
Should be able to include the ordering of the QuerySet in the returned fields.
"""
with self.assertNumQueries(2):
data = list(self.all.values_list("#", flat=True))[0]
self.assertEqual(data, 0)
def test_order_by(self):
"""Ensure that order_by() propagates to QuerySets and iteration."""
# Check the titles are properly ordered.
with self.assertNumQueries(2):
data = list(self.all.values_list("title", flat=True).order_by("title"))
self.assertEqual(data, sorted(self.TITLES_BY_PK))
with self.assertNumQueries(2):
data = list(self.all.values_list("title", flat=True).order_by("-title"))
self.assertEqual(data, sorted(self.TITLES_BY_PK, reverse=True))
def test_order_by_other_field(self):
"""Ordering by a field that isn't included in the responses should work."""
with self.assertNumQueries(2):
titles = list(self.all.values_list("title", flat=True).order_by("release"))
# Check the expected ordering.
self.assertEqual(
titles,
[
"Some Article",
"Django Rocks",
"Alice in Django-land",
"Fiction",
"Biography",
],
)
# Check that only the requested fields are returned.
self.assertEqual(titles[0], "Some Article")
def test_order_by_qs(self):
"""Ordering by a QuerySet should work."""
with self.assertNumQueries(2):
data = list(
self.all.values_list("title", flat=True).order_by("author", "#")
)
# Check the expected ordering.
self.assertEqual(
data,
[
"Django Rocks",
"Alice in Django-land",
"Fiction",
"Biography",
"Some Article",
],
)
class TestNamedValuesList(TestBase):
def test_values_list(self):
"""Ensure the values conversion works as expected."""
with self.assertNumQueries(2):
values = list(self.all.values_list(named=True))
self.assertEqual(values[0], (1, "Fiction", 2, datetime.date(2001, 6, 12), 10))
self.assertEqual(
values[0]._fields, ("id", "title", "author_id", "release", "pages")
)
# Also check one of the other types.
self.assertEqual(
values[2]._fields, ("id", "title", "author_id", "publisher_id", "release")
)
def test_fields(self):
"""Ensure the proper fields are returned."""
with self.assertNumQueries(2):
values = list(self.all.values_list("title", named=True))
self.assertEqual([value.title for value in values], self.TITLES_BY_PK)
# There should only be a single field.
self.assertEqual(values[0]._fields, ("title",))
def test_foreign_key(self):
"""Calling values for a foreign key should end up with the ID."""
with self.assertNumQueries(2):
data = list(self.all.values_list("author", named=True))[0]
self.assertEqual(data, (2,))
def test_join(self):
with self.assertNumQueries(2):
data = list(self.all.values_list("author__name", named=True))[0]
self.assertEqual(data, ("Bob",))
def test_qss_field(self):
"""
Should be able to include the ordering of the QuerySet in the returned fields.
"""
with self.assertNumQueries(2):
data = list(self.all.values_list("#", "author__name", named=True))[0]
self.assertEqual(data, (0, "Bob"))
def test_order_by(self):
"""Ensure that order_by() propagates to QuerySets and iteration."""
# Check the titles are properly ordered.
with self.assertNumQueries(2):
data = [
it[0]
for it in self.all.values_list("title", named=True).order_by("title")
]
self.assertEqual(data, sorted(self.TITLES_BY_PK))
with self.assertNumQueries(2):
data = [
it[0]
for it in self.all.values_list("title", named=True).order_by("-title")
]
self.assertEqual(data, sorted(self.TITLES_BY_PK, reverse=True))
def test_order_by_other_field(self):
"""Ordering by a field that isn't included in the responses should work."""
with self.assertNumQueries(2):
values = list(self.all.values_list("title", named=True).order_by("release"))
titles = [it[0] for it in values]
# Check the expected ordering.
self.assertEqual(
titles,
[
"Some Article",
"Django Rocks",
"Alice in Django-land",
"Fiction",
"Biography",
],
)
# Check that only the requested fields are returned.
self.assertEqual(values[0], ("Some Article",))
def test_order_by_qs(self):
"""Ordering by a QuerySet should work."""
with self.assertNumQueries(2):
values = list(
self.all.values_list("title", named=True).order_by("author", "#")
)
data = [it[0] for it in values]
# Check the expected ordering.
self.assertEqual(
data,
[
"Django Rocks",
"Alice in Django-land",
"Fiction",
"Biography",
"Some Article",
],
)
# Check that only the requested fields are returned.
self.assertEqual(values[0], ("Django Rocks",))
| 37.719008 | 88 | 0.571429 | 1,601 | 13,692 | 4.785134 | 0.07995 | 0.092025 | 0.112779 | 0.117478 | 0.927686 | 0.90171 | 0.890354 | 0.877823 | 0.877823 | 0.877823 | 0 | 0.011364 | 0.305872 | 13,692 | 362 | 89 | 37.823204 | 0.794718 | 0.202454 | 0 | 0.649606 | 0 | 0 | 0.103232 | 0 | 0 | 0 | 0 | 0 | 0.330709 | 1 | 0.125984 | false | 0 | 0.007874 | 0 | 0.149606 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
83a7f816ed845f6809cd39a7205e800872c17bad | 1,356 | py | Python | test/test_edit_group.py | vitalrakach/python_test_training | 78b50548d79d76283b182f34186d82ff9f9f4e25 | [
"Apache-2.0"
] | null | null | null | test/test_edit_group.py | vitalrakach/python_test_training | 78b50548d79d76283b182f34186d82ff9f9f4e25 | [
"Apache-2.0"
] | null | null | null | test/test_edit_group.py | vitalrakach/python_test_training | 78b50548d79d76283b182f34186d82ff9f9f4e25 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
from model.group import Group
def test_edit_first_group_name(app):
if app.group.count() == 0:
app.group.create(Group(name="test"))
old_groups = app.group.get_group_list()
group_var = Group(name="edit")
group_var.id = old_groups[0].id
app.group.edit_first_group(group_var)
new_groups = app.group.get_group_list()
assert len(old_groups) == app.group.count()
old_groups[0] = group_var
assert sorted(old_groups, key=Group.id_or_max) == sorted(new_groups, key=Group.id_or_max)
'''
def test_edit_first_group_header(app):
if app.group.count() == 0:
app.group.create(Group(name="test"))
old_groups = app.group.get_group_list()
group_var = Group(header="edit")
group_var.id = old_groups[0].id
app.group.edit_first_group(group_var)
new_groups = app.group.get_group_list()
assert len(old_groups) == len(new_groups)
old_groups[0] = group_var
assert sorted(old_groups, key=Group.id_or_max) == sorted(new_groups, key=Group.id_or_max)
def test_edit_first_group_footer(app):
if app.group.count() == 0:
app.group.create(Group(name="test"))
old_groups = app.group.get_group_list()
app.group.edit_first_group(Group(footer="edit_group_FOOTER"))
new_groups = app.group.get_group_list()
assert len(old_groups) == len(new_groups)
''' | 34.769231 | 93 | 0.699115 | 217 | 1,356 | 4.059908 | 0.147465 | 0.145289 | 0.111237 | 0.115778 | 0.892168 | 0.868331 | 0.837684 | 0.837684 | 0.837684 | 0.837684 | 0 | 0.007018 | 0.159292 | 1,356 | 39 | 94 | 34.769231 | 0.765789 | 0.015487 | 0 | 0 | 0 | 0 | 0.015326 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 1 | 0.083333 | false | 0 | 0.083333 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
83bbac5aa591ffe4cd555ff4239391c36792a1ea | 32,401 | py | Python | tests/integration/test_secretsmanager.py | roguesupport/localstack | 087abb05fcb360297431ad8e5790c8014e0a80d7 | [
"Apache-2.0"
] | null | null | null | tests/integration/test_secretsmanager.py | roguesupport/localstack | 087abb05fcb360297431ad8e5790c8014e0a80d7 | [
"Apache-2.0"
] | null | null | null | tests/integration/test_secretsmanager.py | roguesupport/localstack | 087abb05fcb360297431ad8e5790c8014e0a80d7 | [
"Apache-2.0"
] | null | null | null | import json
import uuid
from datetime import datetime
from typing import Dict, List, Optional
import pytest
import requests
from localstack.constants import TEST_AWS_ACCOUNT_ID
from localstack.services.awslambda.lambda_utils import LAMBDA_RUNTIME_PYTHON36
from localstack.utils import testutil
from localstack.utils.aws import aws_stack
from localstack.utils.strings import short_uid
from tests.integration.awslambda.test_lambda import TEST_LAMBDA_PYTHON_VERSION
RESOURCE_POLICY = {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {"AWS": "arn:aws:iam::%s:root" % TEST_AWS_ACCOUNT_ID},
"Action": "secretsmanager:GetSecretValue",
"Resource": "*",
}
],
}
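The policy above is consumed later by `test_resource_policy`, which serializes it with `json.dumps` before calling `put_resource_policy`. A minimal, self-contained sketch of that round trip (using a stand-in policy dict with a placeholder account id, not the module-level constant):

```python
import json

# Stand-in for RESOURCE_POLICY above; the account id is a placeholder.
sample_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::000000000000:root"},
            "Action": "secretsmanager:GetSecretValue",
            "Resource": "*",
        }
    ],
}

# put_resource_policy receives the policy as a JSON string; the dumps/loads
# round trip must be lossless for the later equality asserts to hold.
serialized = json.dumps(sample_policy)
assert json.loads(serialized) == sample_policy
```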
class TestSecretsManager:
@pytest.fixture
def secretsmanager_client(self):
return aws_stack.create_external_boto_client("secretsmanager")
def test_create_and_update_secret(self, secretsmanager_client):
secret_name = "s-%s" % short_uid()
rs = secretsmanager_client.create_secret(
Name=secret_name,
SecretString="my_secret",
Description="testing creation of secrets",
)
secret_arn = rs["ARN"]
assert len(secret_arn.rpartition("-")[2]) == 6
rs = secretsmanager_client.get_secret_value(SecretId=secret_name)
assert rs["Name"] == secret_name
assert rs["SecretString"] == "my_secret"
assert rs["ARN"] == secret_arn
assert isinstance(rs["CreatedDate"], datetime)
rs = secretsmanager_client.get_secret_value(SecretId=secret_arn)
assert rs["Name"] == secret_name
assert rs["SecretString"] == "my_secret"
assert rs["ARN"] == secret_arn
rs = secretsmanager_client.get_secret_value(SecretId=secret_arn[: len(secret_arn) - 6])
assert rs["Name"] == secret_name
assert rs["SecretString"] == "my_secret"
assert rs["ARN"] == secret_arn
rs = secretsmanager_client.get_secret_value(SecretId=secret_arn[: len(secret_arn) - 7])
assert rs["Name"] == secret_name
assert rs["SecretString"] == "my_secret"
assert rs["ARN"] == secret_arn
secretsmanager_client.put_secret_value(SecretId=secret_name, SecretString="new_secret")
rs = secretsmanager_client.get_secret_value(SecretId=secret_name)
assert rs["Name"] == secret_name
assert rs["SecretString"] == "new_secret"
# update secret by ARN
rs = secretsmanager_client.update_secret(
SecretId=secret_arn, KmsKeyId="test123", Description="d1"
)
assert rs["ResponseMetadata"]["HTTPStatusCode"] == 200
assert rs["ARN"] == secret_arn
# clean up
secretsmanager_client.delete_secret(SecretId=secret_name, ForceDeleteWithoutRecovery=True)
def test_call_lists_secrets_multiple_time(self, secretsmanager_client):
secret_name = "s-%s" % short_uid()
secretsmanager_client.create_secret(
Name=secret_name,
SecretString="my_secret",
Description="testing creation of secrets",
)
# call list_secrets multiple times
for _ in range(3):
rs = secretsmanager_client.list_secrets()
secrets = [secret for secret in rs["SecretList"] if secret["Name"] == secret_name]
assert 1 == len(secrets)
# clean up
secretsmanager_client.delete_secret(SecretId=secret_name, ForceDeleteWithoutRecovery=True)
def test_create_multi_secrets(self, secretsmanager_client):
secret_names = [short_uid(), short_uid(), short_uid()]
arns = []
for secret_name in secret_names:
rs = secretsmanager_client.create_secret(
Name=secret_name,
SecretString="my_secret_{}".format(secret_name),
Description="testing creation of secrets",
)
arns.append(rs["ARN"])
rs = secretsmanager_client.list_secrets()
secrets = {
secret["Name"]: secret["ARN"]
for secret in rs["SecretList"]
if secret["Name"] in secret_names
}
assert len(secrets.keys()) == len(secret_names)
for arn in arns:
assert arn in secrets.values()
# clean up
for secret_name in secret_names:
secretsmanager_client.delete_secret(
SecretId=secret_name, ForceDeleteWithoutRecovery=True
)
def test_get_random_exclude_characters_and_symbols(self, secretsmanager_client):
random_password = secretsmanager_client.get_random_password(
PasswordLength=120, ExcludeCharacters="xyzDje@?!."
)
assert len(random_password["RandomPassword"]) == 120
assert all(c not in "xyzDje@?!." for c in random_password["RandomPassword"])
def test_resource_policy(self, secretsmanager_client):
secret_name = "s-%s" % short_uid()
secretsmanager_client.create_secret(
Name=secret_name,
SecretString="my_secret",
Description="testing creation of secrets",
)
secretsmanager_client.put_resource_policy(
SecretId=secret_name, ResourcePolicy=json.dumps(RESOURCE_POLICY)
)
rs = secretsmanager_client.get_resource_policy(SecretId=secret_name)
policy = json.loads(rs["ResourcePolicy"])
assert policy["Version"] == RESOURCE_POLICY["Version"]
assert policy["Statement"] == RESOURCE_POLICY["Statement"]
rs = secretsmanager_client.delete_resource_policy(SecretId=secret_name)
assert rs["ResponseMetadata"]["HTTPStatusCode"] == 200
# clean up
secretsmanager_client.delete_secret(SecretId=secret_name, ForceDeleteWithoutRecovery=True)
def test_rotate_secret_with_lambda(self, secretsmanager_client):
secret_name = f"s-{short_uid()}"
secretsmanager_client.create_secret(
Name=secret_name,
SecretString="my_secret",
Description="testing rotation of secrets",
)
function_name = f"s-{short_uid()}"
function_arn = testutil.create_lambda_function(
handler_file=TEST_LAMBDA_PYTHON_VERSION,
func_name=function_name,
runtime=LAMBDA_RUNTIME_PYTHON36,
)["CreateFunctionResponse"]["FunctionArn"]
response = secretsmanager_client.rotate_secret(
SecretId=secret_name,
RotationLambdaARN=function_arn,
RotationRules={
"AutomaticallyAfterDays": 1,
},
RotateImmediately=True,
)
assert response["ResponseMetadata"]["HTTPStatusCode"] == 200
# clean up
secretsmanager_client.delete_secret(SecretId=secret_name, ForceDeleteWithoutRecovery=True)
testutil.delete_lambda_function(function_name)
def test_put_secret_value_with_version_stages(self, secretsmanager_client):
secret_name: str = "s-%s" % short_uid()
secret_string_v0: str = "secret_string_v0"
cr_v0_res = secretsmanager_client.create_secret(
Name=secret_name, SecretString=secret_string_v0
)
pv_v0_vid: str = cr_v0_res["VersionId"]
rs_get_curr = secretsmanager_client.get_secret_value(SecretId=secret_name)
assert rs_get_curr["SecretString"] == secret_string_v0
assert rs_get_curr["VersionStages"] == ["AWSCURRENT"]
secret_string_v1: str = "secret_string_v1"
version_stages_v1: List[str] = ["SAMPLESTAGE1", "SAMPLESTAGE0"]
pv_v1_vid: str = str(uuid.uuid4())
pv_v1_res = secretsmanager_client.put_secret_value(
SecretId=secret_name,
SecretString=secret_string_v1,
VersionStages=version_stages_v1,
ClientRequestToken=pv_v1_vid,
)
assert pv_v1_res["VersionId"] == pv_v1_vid
assert pv_v1_res["VersionStages"] == version_stages_v1
rs_get_curr = secretsmanager_client.get_secret_value(SecretId=secret_name)
assert rs_get_curr["VersionId"] == pv_v0_vid
assert rs_get_curr["SecretString"] == secret_string_v0
assert rs_get_curr["VersionStages"] == ["AWSCURRENT"]
secret_string_v2: str = "secret_string_v2"
version_stages_v2: List[str] = version_stages_v1
pv_v2_vid: str = str(uuid.uuid4())
pv_v2_res = secretsmanager_client.put_secret_value(
SecretId=secret_name,
SecretString=secret_string_v2,
VersionStages=version_stages_v2,
ClientRequestToken=pv_v2_vid,
)
assert pv_v2_res["VersionId"] == pv_v2_vid
assert pv_v2_res["VersionStages"] == version_stages_v2
rs_get_curr = secretsmanager_client.get_secret_value(SecretId=secret_name)
assert rs_get_curr["VersionId"] == pv_v0_vid
assert rs_get_curr["SecretString"] == secret_string_v0
assert rs_get_curr["VersionStages"] == ["AWSCURRENT"]
secret_string_v3: str = "secret_string_v3"
version_stages_v3: List[str] = ["AWSPENDING"]
pv_v3_vid: str = str(uuid.uuid4())
pv_v3_res = secretsmanager_client.put_secret_value(
SecretId=secret_name,
SecretString=secret_string_v3,
VersionStages=version_stages_v3,
ClientRequestToken=pv_v3_vid,
)
assert pv_v3_res["VersionId"] == pv_v3_vid
assert pv_v3_res["VersionStages"] == version_stages_v3
rs_get_curr = secretsmanager_client.get_secret_value(SecretId=secret_name)
assert rs_get_curr["VersionId"] == pv_v0_vid
assert rs_get_curr["SecretString"] == secret_string_v0
assert rs_get_curr["VersionStages"] == ["AWSCURRENT"]
secret_string_v4: str = "secret_string_v4"
pv_v4_vid: str = str(uuid.uuid4())
pv_v4_res = secretsmanager_client.put_secret_value(
SecretId=secret_name, SecretString=secret_string_v4, ClientRequestToken=pv_v4_vid
)
assert pv_v4_res["VersionId"] == pv_v4_vid
assert pv_v4_res["VersionStages"] == ["AWSCURRENT"]
rs_get_curr = secretsmanager_client.get_secret_value(SecretId=secret_name)
assert rs_get_curr["VersionId"] == pv_v4_vid
assert rs_get_curr["SecretString"] == secret_string_v4
assert rs_get_curr["VersionStages"] == ["AWSCURRENT"]
secretsmanager_client.delete_secret(SecretId=secret_name, ForceDeleteWithoutRecovery=True)
@staticmethod
def secretsmanager_http_json_headers(amz_target: str) -> Dict:
headers = aws_stack.mock_aws_request_headers("secretsmanager")
headers["X-Amz-Target"] = amz_target
return headers
def secretsmanager_http_json_post(self, amz_target: str, http_body: json) -> requests.Response:
ep_url: str = aws_stack.get_local_service_url("secretsmanager")
http_headers: Dict = self.secretsmanager_http_json_headers(amz_target)
return requests.post(ep_url, headers=http_headers, data=json.dumps(http_body))
def secretsmanager_http_create_secret_string(
self, secret_name: str, secret_string: str
) -> requests.Response:
http_body: json = {"Name": secret_name, "SecretString": secret_string}
return self.secretsmanager_http_json_post("secretsmanager.CreateSecret", http_body)
@staticmethod
def secretsmanager_http_create_secret_string_val_res(
res: requests.Response, secret_name: str
) -> json:
assert res.status_code == 200
res_json: json = res.json()
assert res_json["Name"] == secret_name
return res_json
def secretsmanager_http_delete_secret(self, secret_id: str) -> requests.Response:
http_body: json = {"SecretId": secret_id}
return self.secretsmanager_http_json_post("secretsmanager.DeleteSecret", http_body)
@staticmethod
def secretsmanager_http_delete_secret_val_res(res: requests.Response, secret_id: str) -> json:
assert res.status_code == 200
res_json: json = res.json()
assert res_json["Name"] == secret_id
return res_json
def secretsmanager_http_get_secret_value(self, secret_id: str) -> requests.Response:
http_body: json = {"SecretId": secret_id}
return self.secretsmanager_http_json_post("secretsmanager.GetSecretValue", http_body)
@staticmethod
def secretsmanager_http_get_secret_value_val_res(
res: requests.Response, secret_name: str, secret_string: str, version_id: str
) -> json:
assert res.status_code == 200
res_json: json = res.json()
assert res_json["Name"] == secret_name
assert res_json["SecretString"] == secret_string
assert res_json["VersionId"] == version_id
return res_json
def secretsmanager_http_get_secret_value_with(
self, secret_id: str, version_stage: str
) -> requests.Response:
http_body: json = {"SecretId": secret_id, "VersionStage": version_stage}
return self.secretsmanager_http_json_post("secretsmanager.GetSecretValue", http_body)
@staticmethod
def secretsmanager_http_get_secret_value_with_val_res(
res: requests.Response,
secret_name: str,
secret_string: str,
version_id: str,
version_stage: str,
) -> json:
res_json = TestSecretsManager.secretsmanager_http_get_secret_value_val_res(
res, secret_name, secret_string, version_id
)
assert res_json["VersionStages"] == [version_stage]
return res_json
def secretsmanager_http_list_secret_version_ids(self, secret_id: str) -> requests.Response:
http_body: json = {"SecretId": secret_id}
return self.secretsmanager_http_json_post("secretsmanager.ListSecretVersionIds", http_body)
@staticmethod
def secretsmanager_http_list_secret_version_ids_val_res(
res: requests.Response, secret_name: str, versions: json
) -> json:
assert res.status_code == 200
res_json: json = res.json()
assert res_json["Name"] == secret_name
res_versions: List[Dict] = res_json["Versions"]
assert len(res_versions) == len(versions)
assert len(set([rv["VersionId"] for rv in res_versions])) == len(res_versions)
assert len(set([v["VersionId"] for v in versions])) == len(versions)
for version in versions:
vs_in_res: List[Dict] = list(
filter(lambda rv: rv["VersionId"] == version["VersionId"], res_versions)
)
assert len(vs_in_res) == 1
v_in_res = vs_in_res[0]
assert v_in_res["VersionStages"] == version["VersionStages"]
return res_json
def secretsmanager_http_put_secret_value(
self, secret_id: str, secret_string: str
) -> requests.Response:
http_body: json = {
"SecretId": secret_id,
"SecretString": secret_string,
}
return self.secretsmanager_http_json_post("secretsmanager.PutSecretValue", http_body)
@staticmethod
def secretsmanager_http_put_secret_value_val_res(
res: requests.Response, secret_name: str
) -> json:
assert res.status_code == 200
res_json: json = res.json()
assert res_json["Name"] == secret_name
return res_json
def secretsmanager_http_put_pending_secret_value(
self, secret_id: str, secret_string: str
) -> requests.Response:
http_body: json = {
"SecretId": secret_id,
"SecretString": secret_string,
"VersionStages": ["AWSPENDING"],
}
return self.secretsmanager_http_json_post("secretsmanager.PutSecretValue", http_body)
@staticmethod
def secretsmanager_http_put_pending_secret_value_val_res(
res: requests.Response, secret_name: str
) -> json:
return TestSecretsManager.secretsmanager_http_put_secret_value_val_res(res, secret_name)
def secretsmanager_http_put_secret_value_with(
self, secret_id: str, secret_string: str, client_request_token: Optional[str]
) -> requests.Response:
http_body: json = {
"SecretId": secret_id,
"SecretString": secret_string,
"ClientRequestToken": client_request_token,
}
return self.secretsmanager_http_json_post("secretsmanager.PutSecretValue", http_body)
@staticmethod
def secretsmanager_http_put_secret_value_with_val_res(
res: requests.Response, secret_name: str, client_request_token: str
) -> json:
assert res.status_code == 200
res_json: json = res.json()
assert res_json["Name"] == secret_name
assert res_json["VersionId"] == client_request_token
return res_json
def secretsmanager_http_put_secret_value_with_version(
self,
secret_id: str,
secret_string: str,
client_request_token: Optional[str],
version_stages: List[str],
) -> requests.Response:
http_body: json = {
"SecretId": secret_id,
"SecretString": secret_string,
"ClientRequestToken": client_request_token,
"VersionStages": version_stages,
}
return self.secretsmanager_http_json_post("secretsmanager.PutSecretValue", http_body)
@staticmethod
def secretsmanager_http_put_secret_value_with_version_val_res(
res: requests.Response,
secret_name: str,
client_request_token: Optional[str],
version_stages: List[str],
) -> json:
req_version_id: str
if client_request_token is None:
assert res.status_code == 200
req_version_id = res.json()["VersionId"]
else:
req_version_id = client_request_token
res_json = TestSecretsManager.secretsmanager_http_put_secret_value_with_val_res(
res, secret_name, req_version_id
)
assert res_json["VersionStages"] == version_stages
return res_json
def test_http_put_secret_value_with_new_custom_client_request_token(self):
secret_name: str = "s-%s" % short_uid()
# Create v0.
secret_string_v0: str = "MySecretString"
cr_v0_res_json: json = self.secretsmanager_http_create_secret_string_val_res(
self.secretsmanager_http_create_secret_string(secret_name, secret_string_v0),
secret_name,
)
#
# Check v0 base consistency.
self.secretsmanager_http_get_secret_value_val_res(
self.secretsmanager_http_get_secret_value(secret_name),
secret_name,
secret_string_v0,
cr_v0_res_json["VersionId"],
)
# Update v0 with predefined ClientRequestToken.
secret_string_v1: str = "MyNewSecretString"
#
crt_v1: str = str(uuid.uuid4())
while crt_v1 == cr_v0_res_json["VersionId"]:
crt_v1 = str(uuid.uuid4())
#
self.secretsmanager_http_put_secret_value_val_res(
self.secretsmanager_http_put_secret_value_with(secret_name, secret_string_v1, crt_v1),
secret_name,
)
#
# Check v1 base consistency.
self.secretsmanager_http_get_secret_value_val_res(
self.secretsmanager_http_get_secret_value(secret_name),
secret_name,
secret_string_v1,
crt_v1,
)
#
# Check versioning base consistency.
versions_v0_v1: json = [
{"VersionId": cr_v0_res_json["VersionId"], "VersionStages": ["AWSPREVIOUS"]},
{"VersionId": crt_v1, "VersionStages": ["AWSCURRENT"]},
]
self.secretsmanager_http_list_secret_version_ids_val_res(
self.secretsmanager_http_list_secret_version_ids(secret_name),
secret_name,
versions_v0_v1,
)
self.secretsmanager_http_delete_secret_val_res(
self.secretsmanager_http_delete_secret(secret_name), secret_name
)
def test_http_put_secret_value_with_duplicate_client_request_token(self):
secret_name: str = "s-%s" % short_uid()
# Create v0.
secret_string_v0: str = "MySecretString"
cr_v0_res_json: json = self.secretsmanager_http_create_secret_string_val_res(
self.secretsmanager_http_create_secret_string(secret_name, secret_string_v0),
secret_name,
)
#
# Check v0 base consistency.
self.secretsmanager_http_get_secret_value_val_res(
self.secretsmanager_http_get_secret_value(secret_name),
secret_name,
secret_string_v0,
cr_v0_res_json["VersionId"],
)
# Update v0 with duplicate ClientRequestToken.
secret_string_v1: str = "MyNewSecretString"
#
crt_v1: str = cr_v0_res_json["VersionId"]
#
self.secretsmanager_http_put_secret_value_val_res(
self.secretsmanager_http_put_secret_value_with(secret_name, secret_string_v1, crt_v1),
secret_name,
)
#
# Check v1 base consistency.
self.secretsmanager_http_get_secret_value_val_res(
self.secretsmanager_http_get_secret_value(secret_name),
secret_name,
secret_string_v1,
crt_v1,
)
#
# Check versioning base consistency.
versions_v0_v1: json = [{"VersionId": crt_v1, "VersionStages": ["AWSCURRENT"]}]
self.secretsmanager_http_list_secret_version_ids_val_res(
self.secretsmanager_http_list_secret_version_ids(secret_name),
secret_name,
versions_v0_v1,
)
self.secretsmanager_http_delete_secret_val_res(
self.secretsmanager_http_delete_secret(secret_name), secret_name
)
def test_http_put_secret_value_with_null_client_request_token(self):
secret_name: str = "s-%s" % short_uid()
# Create v0.
secret_string_v0: str = "MySecretString"
cr_v0_res_json: json = self.secretsmanager_http_create_secret_string_val_res(
self.secretsmanager_http_create_secret_string(secret_name, secret_string_v0),
secret_name,
)
#
# Check v0 base consistency.
self.secretsmanager_http_get_secret_value_val_res(
self.secretsmanager_http_get_secret_value(secret_name),
secret_name,
secret_string_v0,
cr_v0_res_json["VersionId"],
)
# Update v0 with null ClientRequestToken.
secret_string_v1: str = "MyNewSecretString"
#
pv_v1_res_json = self.secretsmanager_http_put_secret_value_val_res(
self.secretsmanager_http_put_secret_value_with(secret_name, secret_string_v1, None),
secret_name,
)
#
# Check v1 base consistency.
self.secretsmanager_http_get_secret_value_val_res(
self.secretsmanager_http_get_secret_value(secret_name),
secret_name,
secret_string_v1,
pv_v1_res_json["VersionId"],
)
#
# Check versioning base consistency.
versions_v0_v1: json = [
{"VersionId": cr_v0_res_json["VersionId"], "VersionStages": ["AWSPREVIOUS"]},
{"VersionId": pv_v1_res_json["VersionId"], "VersionStages": ["AWSCURRENT"]},
]
self.secretsmanager_http_list_secret_version_ids_val_res(
self.secretsmanager_http_list_secret_version_ids(secret_name),
secret_name,
versions_v0_v1,
)
self.secretsmanager_http_delete_secret_val_res(
self.secretsmanager_http_delete_secret(secret_name), secret_name
)
def test_http_put_secret_value_with_undefined_client_request_token(self):
secret_name: str = "s-%s" % short_uid()
# Create v0.
secret_string_v0: str = "MySecretString"
cr_v0_res_json: json = self.secretsmanager_http_create_secret_string_val_res(
self.secretsmanager_http_create_secret_string(secret_name, secret_string_v0),
secret_name,
)
#
# Check v0 base consistency.
self.secretsmanager_http_get_secret_value_val_res(
self.secretsmanager_http_get_secret_value(secret_name),
secret_name,
secret_string_v0,
cr_v0_res_json["VersionId"],
)
# Update v0 with undefined ClientRequestToken.
secret_string_v1: str = "MyNewSecretString"
#
pv_v1_res_json = self.secretsmanager_http_put_secret_value_val_res(
self.secretsmanager_http_put_secret_value(secret_name, secret_string_v1), secret_name
)
#
# Check v1 base consistency.
self.secretsmanager_http_get_secret_value_val_res(
self.secretsmanager_http_get_secret_value(secret_name),
secret_name,
secret_string_v1,
pv_v1_res_json["VersionId"],
)
#
# Check versioning base consistency.
versions_v0_v1: json = [
{"VersionId": cr_v0_res_json["VersionId"], "VersionStages": ["AWSPREVIOUS"]},
{"VersionId": pv_v1_res_json["VersionId"], "VersionStages": ["AWSCURRENT"]},
]
self.secretsmanager_http_list_secret_version_ids_val_res(
self.secretsmanager_http_list_secret_version_ids(secret_name),
secret_name,
versions_v0_v1,
)
self.secretsmanager_http_delete_secret_val_res(
self.secretsmanager_http_delete_secret(secret_name), secret_name
)
def test_http_put_secret_value_duplicate_req(self):
secret_name: str = "s-%s" % short_uid()
# Create v0.
secret_string_v0: str = "MySecretString"
cr_v0_res_json: json = self.secretsmanager_http_create_secret_string_val_res(
self.secretsmanager_http_create_secret_string(secret_name, secret_string_v0),
secret_name,
)
#
# Check v0 base consistency.
self.secretsmanager_http_get_secret_value_val_res(
self.secretsmanager_http_get_secret_value(secret_name),
secret_name,
secret_string_v0,
cr_v0_res_json["VersionId"],
)
# Duplicate update.
self.secretsmanager_http_put_secret_value_val_res(
self.secretsmanager_http_put_secret_value_with(
secret_name, secret_string_v0, cr_v0_res_json["VersionId"]
),
secret_name,
)
#
# Check v1 base consistency.
self.secretsmanager_http_get_secret_value_val_res(
self.secretsmanager_http_get_secret_value(secret_name),
secret_name,
secret_string_v0,
cr_v0_res_json["VersionId"],
)
#
# Check versioning base consistency.
versions_v0_v1: json = [
{"VersionId": cr_v0_res_json["VersionId"], "VersionStages": ["AWSCURRENT"]},
]
self.secretsmanager_http_list_secret_version_ids_val_res(
self.secretsmanager_http_list_secret_version_ids(secret_name),
secret_name,
versions_v0_v1,
)
self.secretsmanager_http_delete_secret_val_res(
self.secretsmanager_http_delete_secret(secret_name), secret_name
)
def test_http_put_secret_value_null_client_request_token_new_version_stages(self):
secret_name: str = "s-%s" % short_uid()
# Create v0.
secret_string_v0: str = "MySecretString"
cr_v0_res_json: json = self.secretsmanager_http_create_secret_string_val_res(
self.secretsmanager_http_create_secret_string(secret_name, secret_string_v0),
secret_name,
)
#
# Check v0 base consistency.
self.secretsmanager_http_get_secret_value_val_res(
self.secretsmanager_http_get_secret_value(secret_name),
secret_name,
secret_string_v0,
cr_v0_res_json["VersionId"],
)
# Update v0 with null ClientRequestToken.
secret_string_v1: str = "MyNewSecretString"
version_stages_v1: List[str] = ["AWSPENDING"]
#
pv_v1_res_json = self.secretsmanager_http_put_secret_value_with_version_val_res(
self.secretsmanager_http_put_secret_value_with_version(
secret_name, secret_string_v1, None, version_stages_v1
),
secret_name,
None,
version_stages_v1,
)
#
assert pv_v1_res_json["VersionId"] != cr_v0_res_json["VersionId"]
#
# Check v1 base consistency.
self.secretsmanager_http_get_secret_value_with_val_res(
self.secretsmanager_http_get_secret_value_with(secret_name, "AWSPENDING"),
secret_name,
secret_string_v1,
pv_v1_res_json["VersionId"],
"AWSPENDING",
)
#
# Check v0 base consistency.
self.secretsmanager_http_get_secret_value_val_res(
self.secretsmanager_http_get_secret_value(secret_name),
secret_name,
secret_string_v0,
cr_v0_res_json["VersionId"],
)
#
# Check versioning base consistency.
versions_v0_v1: json = [
{"VersionId": cr_v0_res_json["VersionId"], "VersionStages": ["AWSCURRENT"]},
{"VersionId": pv_v1_res_json["VersionId"], "VersionStages": ["AWSPENDING"]},
]
self.secretsmanager_http_list_secret_version_ids_val_res(
self.secretsmanager_http_list_secret_version_ids(secret_name),
secret_name,
versions_v0_v1,
)
self.secretsmanager_http_delete_secret_val_res(
self.secretsmanager_http_delete_secret(secret_name), secret_name
)
def test_http_put_secret_value_custom_client_request_token_new_version_stages(self):
secret_name: str = "s-%s" % short_uid()
# Create v0.
secret_string_v0: str = "MySecretString"
cr_v0_res_json: json = self.secretsmanager_http_create_secret_string_val_res(
self.secretsmanager_http_create_secret_string(secret_name, secret_string_v0),
secret_name,
)
#
# Check v0 base consistency.
self.secretsmanager_http_get_secret_value_val_res(
self.secretsmanager_http_get_secret_value(secret_name),
secret_name,
secret_string_v0,
cr_v0_res_json["VersionId"],
)
# Update v0 with null ClientRequestToken.
secret_string_v1: str = "MyNewSecretString"
version_stages_v1: List[str] = ["AWSPENDING"]
crt_v1: str = str(uuid.uuid4())
while crt_v1 == cr_v0_res_json["VersionId"]:
crt_v1 = str(uuid.uuid4())
#
self.secretsmanager_http_put_secret_value_with_version_val_res(
self.secretsmanager_http_put_secret_value_with_version(
secret_name, secret_string_v1, crt_v1, version_stages_v1
),
secret_name,
crt_v1,
version_stages_v1,
)
#
# Check v1 base consistency.
self.secretsmanager_http_get_secret_value_with_val_res(
self.secretsmanager_http_get_secret_value_with(secret_name, "AWSPENDING"),
secret_name,
secret_string_v1,
crt_v1,
"AWSPENDING",
)
#
# Check v0 base consistency.
self.secretsmanager_http_get_secret_value_val_res(
self.secretsmanager_http_get_secret_value(secret_name),
secret_name,
secret_string_v0,
cr_v0_res_json["VersionId"],
)
#
# Check versioning base consistency.
versions_v0_v1: json = [
{"VersionId": cr_v0_res_json["VersionId"], "VersionStages": ["AWSCURRENT"]},
{"VersionId": crt_v1, "VersionStages": ["AWSPENDING"]},
]
self.secretsmanager_http_list_secret_version_ids_val_res(
self.secretsmanager_http_list_secret_version_ids(secret_name),
secret_name,
versions_v0_v1,
)
self.secretsmanager_http_delete_secret_val_res(
self.secretsmanager_http_delete_secret(secret_name), secret_name
)
f7d5711766710fb13d90549bd70f8779596793e6 | 34 | py | Python | sales/__init__.py | oldrev/odoodev-demo-2014 | 77085df7856f8fe5fac0a6ad510fe13e19ed5a1a | ["MIT"] | 2 | 2015-06-16T07:18:30.000Z | 2016-03-27T01:58:52.000Z
#encoding: utf-8
import sales
79358d78cc5b8af60a2ed766c56fc550ec62f91f | 77,767 | py | Python | raiden/tests/integration/api/test_restapi.py | ExchangeUnion/raiden | 2217bcb698fcfce3499dc1f41ad919ed82e8e45f | ["MIT"] | 12 | 2019-08-09T19:12:17.000Z | 2019-12-05T15:49:29.000Z
import datetime
import json
from hashlib import sha256
from http import HTTPStatus
import gevent
import grequests
import pytest
from eth_utils import (
is_checksum_address,
to_bytes,
to_canonical_address,
to_checksum_address,
to_hex,
)
from flask import url_for
from raiden.api.v1.encoding import AddressField, HexAddressConverter
from raiden.constants import (
GENESIS_BLOCK_NUMBER,
RED_EYES_PER_CHANNEL_PARTICIPANT_LIMIT,
SECRET_LENGTH,
Environment,
)
from raiden.messages.transfers import LockedTransfer, Unlock
from raiden.tests.integration.api.utils import create_api_server
from raiden.tests.utils import factories
from raiden.tests.utils.client import burn_eth
from raiden.tests.utils.events import check_dict_nested_attrs, must_have_event, must_have_events
from raiden.tests.utils.network import CHAIN
from raiden.tests.utils.protocol import WaitForMessage
from raiden.tests.utils.smartcontracts import deploy_contract_web3
from raiden.transfer import views
from raiden.transfer.state import ChannelState
from raiden.utils import get_system_spec
from raiden.waiting import (
TransferWaitResult,
wait_for_received_transfer_result,
wait_for_token_network,
)
from raiden_contracts.constants import (
CONTRACT_CUSTOM_TOKEN,
CONTRACT_HUMAN_STANDARD_TOKEN,
TEST_SETTLE_TIMEOUT_MAX,
TEST_SETTLE_TIMEOUT_MIN,
)
# pylint: disable=too-many-locals,unused-argument,too-many-lines
class CustomException(Exception):
pass
def get_json_response(response):
"""
Utility function to deal with JSON responses.
requests's `.json` can fail when simplejson is installed. See
https://github.com/raiden-network/raiden/issues/4174
"""
return json.loads(response.content)
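As a small illustration of why the helper decodes `response.content` itself (a sketch, not part of the suite): the stdlib `json.loads` accepts raw bytes directly on Python 3.6+, so the result does not depend on which JSON backend `requests` happened to pick up at import time.

```python
import json

# Stand-in bytes for a response body (response.content is always bytes).
raw_body = b'{"errors": "", "status": 200}'

decoded = json.loads(raw_body)  # bytes accepted directly since Python 3.6
assert decoded["status"] == 200
```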
def assert_no_content_response(response):
assert (
response is not None
and response.text == ""
and response.status_code == HTTPStatus.NO_CONTENT
)
def assert_response_with_code(response, status_code):
assert response is not None and response.status_code == status_code
def assert_response_with_error(response, status_code):
json_response = get_json_response(response)
assert (
response is not None
and response.status_code == status_code
and "errors" in json_response
and json_response["errors"] != ""
)
def assert_proper_response(response, status_code=HTTPStatus.OK):
assert (
response is not None
and response.status_code == status_code
and response.headers["Content-Type"] == "application/json"
)
def api_url_for(api_server, endpoint, **kwargs):
# url_for() expects binary address so we have to convert here
for key, val in kwargs.items():
if isinstance(val, str) and val.startswith("0x"):
kwargs[key] = to_canonical_address(val)
with api_server.flask_app.app_context():
return url_for(f"v1_resources.{endpoint}", **kwargs)
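For illustration only, here is a simplified stand-in for the conversion done above (the real code uses `eth_utils.to_canonical_address`, which additionally validates the input): stripping the `0x` prefix and hex-decoding yields the 20-byte form that `url_for()` expects. `to_binary_address` is a hypothetical name, not part of this codebase.

```python
# Hypothetical helper mirroring the conversion inside api_url_for; the real
# implementation is eth_utils.to_canonical_address.
def to_binary_address(hex_address: str) -> bytes:
    assert hex_address.startswith("0x")
    return bytes.fromhex(hex_address[2:])  # fromhex accepts mixed-case hex

addr = to_binary_address("0x414D72a6f6E28F4950117696081450d63D56C354")
assert len(addr) == 20  # canonical Ethereum addresses are 20 bytes
```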
def test_hex_converter():
converter = HexAddressConverter(map=None)
# invalid hex data
with pytest.raises(Exception):
converter.to_python("-")
# invalid address, too short
with pytest.raises(Exception):
converter.to_python("0x1234")
# missing prefix 0x
with pytest.raises(Exception):
converter.to_python("414d72a6f6e28f4950117696081450d63d56c354")
address = b"AMr\xa6\xf6\xe2\x8fIP\x11v\x96\x08\x14P\xd6=V\xc3T"
assert converter.to_python("0x414D72a6f6E28F4950117696081450d63D56C354") == address
def test_address_field():
# pylint: disable=protected-access
field = AddressField()
attr = "test"
data = object()
# invalid hex data
with pytest.raises(Exception):
field._deserialize("-", attr, data)
# invalid address, too short
with pytest.raises(Exception):
field._deserialize("0x1234", attr, data)
# missing prefix 0x
with pytest.raises(Exception):
field._deserialize("414d72a6f6e28f4950117696081450d63d56c354", attr, data)
address = b"AMr\xa6\xf6\xe2\x8fIP\x11v\x96\x08\x14P\xd6=V\xc3T"
assert field._deserialize("0x414D72a6f6E28F4950117696081450d63D56C354", attr, data) == address


@pytest.mark.parametrize("number_of_nodes", [1])
@pytest.mark.parametrize("channels_per_node", [0])
def test_payload_with_invalid_addresses(api_server_test_instance, rest_api_port_number):
    """ Addresses require leading 0x in the payload. """
    invalid_address = "61c808d82a3ac53231750dadc13c777b59310bd9"
    channel_data_obj = {
        "partner_address": invalid_address,
        "token_address": "0xEA674fdDe714fd979de3EdF0F56AA9716B898ec8",
        "settle_timeout": 10,
    }
    request = grequests.put(
        api_url_for(api_server_test_instance, "channelsresource"), json=channel_data_obj
    )
    response = request.send().response
    assert_response_with_error(response, HTTPStatus.BAD_REQUEST)

    url_without_prefix = (
        "http://localhost:{port}/api/v1/channels/ea674fdde714fd979de3edf0f56aa9716b898ec8"
    ).format(port=rest_api_port_number)

    request = grequests.patch(
        url_without_prefix, json=dict(state=ChannelState.STATE_SETTLED.value)
    )
    response = request.send().response
    assert_response_with_code(response, HTTPStatus.NOT_FOUND)


@pytest.mark.xfail(
    strict=True, reason="Crashed app also crashes on teardown", raises=CustomException
)
@pytest.mark.parametrize("number_of_nodes", [1])
@pytest.mark.parametrize("channels_per_node", [0])
def test_crash_on_unhandled_exception(api_server_test_instance):
    """ Crash when an unhandled exception happens on APIServer. """

    # as we should not have unhandled exceptions in our endpoints, create one to test
    @api_server_test_instance.flask_app.route("/error_endpoint", methods=["GET"])
    def error_endpoint():  # pylint: disable=unused-variable
        raise CustomException("This is an unhandled error")

    with api_server_test_instance.flask_app.app_context():
        url = url_for("error_endpoint")

    request = grequests.get(url)
    request.send()
    api_server_test_instance.get(timeout=10)


@pytest.mark.parametrize("number_of_nodes", [1])
@pytest.mark.parametrize("channels_per_node", [0])
def test_payload_with_address_invalid_chars(api_server_test_instance):
    """ Addresses cannot contain invalid characters. """
    invalid_address = "0x61c808d82a3ac53231750dadc13c777b59310bdg"  # g at the end is invalid
    channel_data_obj = {
        "partner_address": invalid_address,
        "token_address": "0xEA674fdDe714fd979de3EdF0F56AA9716B898ec8",
        "settle_timeout": 10,
    }
    request = grequests.put(
        api_url_for(api_server_test_instance, "channelsresource"), json=channel_data_obj
    )
    response = request.send().response
    assert_response_with_error(response, HTTPStatus.BAD_REQUEST)


@pytest.mark.parametrize("number_of_nodes", [1])
@pytest.mark.parametrize("channels_per_node", [0])
def test_payload_with_address_invalid_length(api_server_test_instance):
    """ Encoded addresses must have the right length. """
    invalid_address = "0x61c808d82a3ac53231750dadc13c777b59310b"  # 2 chars short
    channel_data_obj = {
        "partner_address": invalid_address,
        "token_address": "0xEA674fdDe714fd979de3EdF0F56AA9716B898ec8",
        "settle_timeout": 10,
    }
    request = grequests.put(
        api_url_for(api_server_test_instance, "channelsresource"), json=channel_data_obj
    )
    response = request.send().response
    assert_response_with_error(response, HTTPStatus.BAD_REQUEST)


@pytest.mark.parametrize("number_of_nodes", [1])
@pytest.mark.parametrize("channels_per_node", [0])
def test_payload_with_address_not_eip55(api_server_test_instance):
    """ Provided addresses must be EIP55 encoded. """
    invalid_address = "0xf696209d2ca35e6c88e5b99b7cda3abf316bed69"
    channel_data_obj = {
        "partner_address": invalid_address,
        "token_address": "0xEA674fdDe714fd979de3EdF0F56AA9716B898ec8",
        "settle_timeout": 90,
    }
    request = grequests.put(
        api_url_for(api_server_test_instance, "channelsresource"), json=channel_data_obj
    )
    response = request.send().response
    assert_response_with_error(response, HTTPStatus.BAD_REQUEST)


@pytest.mark.parametrize("number_of_nodes", [1])
@pytest.mark.parametrize("channels_per_node", [0])
def test_api_query_our_address(api_server_test_instance):
    request = grequests.get(api_url_for(api_server_test_instance, "addressresource"))
    response = request.send().response
    assert_proper_response(response)

    our_address = api_server_test_instance.rest_api.raiden_api.address
    assert get_json_response(response) == {"our_address": to_checksum_address(our_address)}


def test_api_get_raiden_version(api_server_test_instance):
    request = grequests.get(api_url_for(api_server_test_instance, "versionresource"))
    response = request.send().response
    assert_proper_response(response)

    raiden_version = get_system_spec()["raiden"]
    assert get_json_response(response) == {"version": raiden_version}


@pytest.mark.parametrize("number_of_nodes", [1])
@pytest.mark.parametrize("channels_per_node", [0])
def test_api_get_channel_list(api_server_test_instance, token_addresses, reveal_timeout):
    partner_address = "0x61C808D82A3Ac53231750daDc13c777b59310bD9"

    request = grequests.get(api_url_for(api_server_test_instance, "channelsresource"))
    response = request.send().response
    assert_proper_response(response, HTTPStatus.OK)
    json_response = get_json_response(response)
    assert json_response == []

    # let's create a new channel
    token_address = token_addresses[0]
    settle_timeout = 1650
    channel_data_obj = {
        "partner_address": partner_address,
        "token_address": to_checksum_address(token_address),
        "settle_timeout": settle_timeout,
        "reveal_timeout": reveal_timeout,
    }
    request = grequests.put(
        api_url_for(api_server_test_instance, "channelsresource"), json=channel_data_obj
    )
    response = request.send().response
    assert_proper_response(response, HTTPStatus.CREATED)

    request = grequests.get(api_url_for(api_server_test_instance, "channelsresource"))
    response = request.send().response
    assert_proper_response(response, HTTPStatus.OK)
    json_response = get_json_response(response)
    channel_info = json_response[0]
    assert channel_info["partner_address"] == partner_address
    assert channel_info["token_address"] == to_checksum_address(token_address)
    assert channel_info["total_deposit"] == 0
    assert "token_network_address" in channel_info


@pytest.mark.parametrize("number_of_nodes", [1])
@pytest.mark.parametrize("channels_per_node", [0])
def test_api_channel_status_channel_nonexistant(api_server_test_instance, token_addresses):
    partner_address = "0x61C808D82A3Ac53231750daDc13c777b59310bD9"
    token_address = token_addresses[0]

    request = grequests.get(
        api_url_for(
            api_server_test_instance,
            "channelsresourcebytokenandpartneraddress",
            token_address=token_address,
            partner_address=partner_address,
        )
    )
    response = request.send().response
    assert_proper_response(response, HTTPStatus.NOT_FOUND)
    assert get_json_response(response)["errors"] == (
        "Channel with partner '{}' for token '{}' could not be found.".format(
            to_checksum_address(partner_address), to_checksum_address(token_address)
        )
    )


@pytest.mark.parametrize("number_of_nodes", [1])
@pytest.mark.parametrize("channels_per_node", [0])
def test_api_open_and_deposit_channel(api_server_test_instance, token_addresses, reveal_timeout):
    # let's create a new channel
    first_partner_address = "0x61C808D82A3Ac53231750daDc13c777b59310bD9"
    token_address = token_addresses[0]
    settle_timeout = 1650
    channel_data_obj = {
        "partner_address": first_partner_address,
        "token_address": to_checksum_address(token_address),
        "settle_timeout": settle_timeout,
        "reveal_timeout": reveal_timeout,
    }
    request = grequests.put(
        api_url_for(api_server_test_instance, "channelsresource"), json=channel_data_obj
    )
    response = request.send().response
    assert_proper_response(response, HTTPStatus.CREATED)

    first_channel_id = 1
    json_response = get_json_response(response)
    expected_response = channel_data_obj.copy()
    expected_response.update(
        {
            "balance": 0,
            "state": ChannelState.STATE_OPENED.value,
            "channel_identifier": first_channel_id,
            "total_deposit": 0,
        }
    )
    assert check_dict_nested_attrs(json_response, expected_response)
    token_network_address = json_response["token_network_address"]

    # now let's open a channel and make a deposit too
    second_partner_address = "0x29FA6cf0Cce24582a9B20DB94Be4B6E017896038"
    total_deposit = 100
    channel_data_obj = {
        "partner_address": second_partner_address,
        "token_address": to_checksum_address(token_address),
        "settle_timeout": settle_timeout,
        "reveal_timeout": reveal_timeout,
        "total_deposit": total_deposit,
    }
    request = grequests.put(
        api_url_for(api_server_test_instance, "channelsresource"), json=channel_data_obj
    )
    response = request.send().response
    assert_proper_response(response, HTTPStatus.CREATED)

    second_channel_id = 2
    json_response = get_json_response(response)
    expected_response = channel_data_obj.copy()
    expected_response.update(
        {
            "balance": total_deposit,
            "state": ChannelState.STATE_OPENED.value,
            "channel_identifier": second_channel_id,
            "token_network_address": token_network_address,
            "total_deposit": total_deposit,
        }
    )
    assert check_dict_nested_attrs(json_response, expected_response)

    # assert depositing again with less than the initial deposit returns 409
    request = grequests.patch(
        api_url_for(
            api_server_test_instance,
            "channelsresourcebytokenandpartneraddress",
            token_address=token_address,
            partner_address=second_partner_address,
        ),
        json={"total_deposit": 99},
    )
    response = request.send().response
    assert_proper_response(response, HTTPStatus.CONFLICT)

    # assert depositing a negative amount fails
    request = grequests.patch(
        api_url_for(
            api_server_test_instance,
            "channelsresourcebytokenandpartneraddress",
            token_address=token_address,
            partner_address=first_partner_address,
        ),
        json={"total_deposit": -1000},
    )
    response = request.send().response
    assert_proper_response(response, HTTPStatus.CONFLICT)

    # let's deposit on the first channel
    request = grequests.patch(
        api_url_for(
            api_server_test_instance,
            "channelsresourcebytokenandpartneraddress",
            token_address=token_address,
            partner_address=first_partner_address,
        ),
        json={"total_deposit": total_deposit},
    )
    response = request.send().response
    assert_proper_response(response)
    json_response = get_json_response(response)
    expected_response = {
        "channel_identifier": first_channel_id,
        "partner_address": first_partner_address,
        "token_address": to_checksum_address(token_address),
        "settle_timeout": settle_timeout,
        "reveal_timeout": reveal_timeout,
        "state": ChannelState.STATE_OPENED.value,
        "balance": total_deposit,
        "total_deposit": total_deposit,
        "token_network_address": token_network_address,
    }
    assert check_dict_nested_attrs(json_response, expected_response)

    # let's try querying for the second channel
    request = grequests.get(
        api_url_for(
            api_server_test_instance,
            "channelsresourcebytokenandpartneraddress",
            token_address=token_address,
            partner_address=second_partner_address,
        )
    )
    response = request.send().response
    assert_proper_response(response)
    json_response = get_json_response(response)
    expected_response = {
        "channel_identifier": second_channel_id,
        "partner_address": second_partner_address,
        "token_address": to_checksum_address(token_address),
        "settle_timeout": settle_timeout,
        "reveal_timeout": reveal_timeout,
        "state": ChannelState.STATE_OPENED.value,
        "balance": total_deposit,
        "total_deposit": total_deposit,
        "token_network_address": token_network_address,
    }
    assert check_dict_nested_attrs(json_response, expected_response)

    # finally let's burn all eth and try to open another channel
    burn_eth(api_server_test_instance.rest_api.raiden_api.raiden.chain.client)
    channel_data_obj = {
        "partner_address": "0xf3AF96F89b3d7CdcBE0C083690A28185Feb0b3CE",
        "token_address": to_checksum_address(token_address),
        "settle_timeout": settle_timeout,
        "reveal_timeout": reveal_timeout,
        "balance": 1,
    }
    request = grequests.put(
        api_url_for(api_server_test_instance, "channelsresource"), json=channel_data_obj
    )
    response = request.send().response
    assert_proper_response(response, HTTPStatus.PAYMENT_REQUIRED)
    json_response = get_json_response(response)
    assert "The account balance is below the estimated amount" in json_response["errors"]


@pytest.mark.parametrize("number_of_nodes", [1])
@pytest.mark.parametrize("channels_per_node", [0])
def test_api_open_close_and_settle_channel(
    api_server_test_instance, token_addresses, reveal_timeout
):
    # let's create a new channel
    partner_address = "0x61C808D82A3Ac53231750daDc13c777b59310bD9"
    token_address = token_addresses[0]
    settle_timeout = 1650
    channel_data_obj = {
        "partner_address": partner_address,
        "token_address": to_checksum_address(token_address),
        "settle_timeout": settle_timeout,
    }
    request = grequests.put(
        api_url_for(api_server_test_instance, "channelsresource"), json=channel_data_obj
    )
    response = request.send().response

    balance = 0
    assert_proper_response(response, status_code=HTTPStatus.CREATED)
    channel_identifier = 1
    json_response = get_json_response(response)
    expected_response = channel_data_obj.copy()
    expected_response.update(
        {
            "balance": balance,
            "state": ChannelState.STATE_OPENED.value,
            "reveal_timeout": reveal_timeout,
            "channel_identifier": channel_identifier,
            "total_deposit": 0,
        }
    )
    assert check_dict_nested_attrs(json_response, expected_response)
    token_network_address = json_response["token_network_address"]

    # let's close the channel
    request = grequests.patch(
        api_url_for(
            api_server_test_instance,
            "channelsresourcebytokenandpartneraddress",
            token_address=token_address,
            partner_address=partner_address,
        ),
        json={"state": ChannelState.STATE_CLOSED.value},
    )
    response = request.send().response
    assert_proper_response(response)

    expected_response = {
        "token_network_address": token_network_address,
        "channel_identifier": channel_identifier,
        "partner_address": partner_address,
        "token_address": to_checksum_address(token_address),
        "settle_timeout": settle_timeout,
        "reveal_timeout": reveal_timeout,
        "state": ChannelState.STATE_CLOSED.value,
        "balance": balance,
        "total_deposit": balance,
    }
    assert check_dict_nested_attrs(get_json_response(response), expected_response)


@pytest.mark.parametrize("number_of_nodes", [2])
@pytest.mark.parametrize("channels_per_node", [0])
def test_api_close_insufficient_eth(api_server_test_instance, token_addresses, reveal_timeout):
    # let's create a new channel
    partner_address = "0x61C808D82A3Ac53231750daDc13c777b59310bD9"
    token_address = token_addresses[0]
    settle_timeout = 1650
    channel_data_obj = {
        "partner_address": partner_address,
        "token_address": to_checksum_address(token_address),
        "settle_timeout": settle_timeout,
    }
    request = grequests.put(
        api_url_for(api_server_test_instance, "channelsresource"), json=channel_data_obj
    )
    response = request.send().response

    balance = 0
    assert_proper_response(response, status_code=HTTPStatus.CREATED)
    channel_identifier = 1
    json_response = get_json_response(response)
    expected_response = channel_data_obj.copy()
    expected_response.update(
        {
            "balance": balance,
            "state": ChannelState.STATE_OPENED.value,
            "reveal_timeout": reveal_timeout,
            "channel_identifier": channel_identifier,
            "total_deposit": 0,
        }
    )
    assert check_dict_nested_attrs(json_response, expected_response)

    # let's burn all eth and try to close the channel
    burn_eth(api_server_test_instance.rest_api.raiden_api.raiden.chain.client)
    request = grequests.patch(
        api_url_for(
            api_server_test_instance,
            "channelsresourcebytokenandpartneraddress",
            token_address=token_address,
            partner_address=partner_address,
        ),
        json={"state": ChannelState.STATE_CLOSED.value},
    )
    response = request.send().response
    assert_proper_response(response, HTTPStatus.PAYMENT_REQUIRED)
    json_response = get_json_response(response)
    assert "Insufficient ETH" in json_response["errors"]


@pytest.mark.parametrize("number_of_nodes", [1])
@pytest.mark.parametrize("channels_per_node", [0])
def test_api_open_channel_invalid_input(api_server_test_instance, token_addresses, reveal_timeout):
    partner_address = "0x61C808D82A3Ac53231750daDc13c777b59310bD9"
    token_address = token_addresses[0]
    settle_timeout = TEST_SETTLE_TIMEOUT_MIN - 1
    channel_data_obj = {
        "partner_address": partner_address,
        "token_address": to_checksum_address(token_address),
        "settle_timeout": settle_timeout,
        "reveal_timeout": reveal_timeout,
    }

    # a settle timeout below the allowed minimum must be rejected
    request = grequests.put(
        api_url_for(api_server_test_instance, "channelsresource"), json=channel_data_obj
    )
    response = request.send().response
    assert_response_with_error(response, status_code=HTTPStatus.CONFLICT)

    # a settle timeout above the allowed maximum must be rejected
    channel_data_obj["settle_timeout"] = TEST_SETTLE_TIMEOUT_MAX + 1
    request = grequests.put(
        api_url_for(api_server_test_instance, "channelsresource"), json=channel_data_obj
    )
    response = request.send().response
    assert_response_with_error(response, status_code=HTTPStatus.CONFLICT)

    # an unregistered token address must be rejected
    channel_data_obj["settle_timeout"] = TEST_SETTLE_TIMEOUT_MAX - 1
    channel_data_obj["token_address"] = to_checksum_address(factories.make_address())
    request = grequests.put(
        api_url_for(api_server_test_instance, "channelsresource"), json=channel_data_obj
    )
    response = request.send().response
    assert_response_with_error(response, status_code=HTTPStatus.CONFLICT)


@pytest.mark.parametrize("number_of_nodes", [1])
@pytest.mark.parametrize("channels_per_node", [0])
def test_api_channel_state_change_errors(
    api_server_test_instance, token_addresses, reveal_timeout
):
    partner_address = "0x61C808D82A3Ac53231750daDc13c777b59310bD9"
    token_address = token_addresses[0]
    settle_timeout = 1650
    channel_data_obj = {
        "partner_address": partner_address,
        "token_address": to_checksum_address(token_address),
        "settle_timeout": settle_timeout,
        "reveal_timeout": reveal_timeout,
    }
    request = grequests.put(
        api_url_for(api_server_test_instance, "channelsresource"), json=channel_data_obj
    )
    response = request.send().response
    assert_proper_response(response, HTTPStatus.CREATED)

    # let's try to set a random state
    request = grequests.patch(
        api_url_for(
            api_server_test_instance,
            "channelsresourcebytokenandpartneraddress",
            token_address=token_address,
            partner_address=partner_address,
        ),
        json=dict(state="inlimbo"),
    )
    response = request.send().response
    assert_response_with_error(response, HTTPStatus.BAD_REQUEST)

    # let's try to set both a new state and a deposit
    request = grequests.patch(
        api_url_for(
            api_server_test_instance,
            "channelsresourcebytokenandpartneraddress",
            token_address=token_address,
            partner_address=partner_address,
        ),
        json=dict(state=ChannelState.STATE_CLOSED.value, total_deposit=200),
    )
    response = request.send().response
    assert_response_with_error(response, HTTPStatus.CONFLICT)

    # let's try to patch with no arguments
    request = grequests.patch(
        api_url_for(
            api_server_test_instance,
            "channelsresourcebytokenandpartneraddress",
            token_address=token_address,
            partner_address=partner_address,
        )
    )
    response = request.send().response
    assert_response_with_error(response, HTTPStatus.BAD_REQUEST)

    # ok now let's close the channel for real
    request = grequests.patch(
        api_url_for(
            api_server_test_instance,
            "channelsresourcebytokenandpartneraddress",
            token_address=token_address,
            partner_address=partner_address,
        ),
        json=dict(state=ChannelState.STATE_CLOSED.value),
    )
    response = request.send().response
    assert_proper_response(response)

    # let's try to deposit to a closed channel
    request = grequests.patch(
        api_url_for(
            api_server_test_instance,
            "channelsresourcebytokenandpartneraddress",
            token_address=token_address,
            partner_address=partner_address,
        ),
        json=dict(total_deposit=500),
    )
    response = request.send().response
    assert_response_with_error(response, HTTPStatus.CONFLICT)


@pytest.mark.parametrize("number_of_nodes", [1])
@pytest.mark.parametrize("channels_per_node", [0])
@pytest.mark.parametrize("number_of_tokens", [2])
@pytest.mark.parametrize("environment_type", [Environment.DEVELOPMENT])
def test_api_tokens(api_server_test_instance, blockchain_services, token_addresses):
    partner_address = "0x61C808D82A3Ac53231750daDc13c777b59310bD9"
    token_address1 = token_addresses[0]
    token_address2 = token_addresses[1]
    settle_timeout = 1650

    # create a channel for the first token
    channel_data_obj = {
        "partner_address": partner_address,
        "token_address": to_checksum_address(token_address1),
        "settle_timeout": settle_timeout,
    }
    request = grequests.put(
        api_url_for(api_server_test_instance, "channelsresource"), json=channel_data_obj
    )
    response = request.send().response
    assert_proper_response(response, HTTPStatus.CREATED)

    # and a channel for the second token
    channel_data_obj = {
        "partner_address": partner_address,
        "token_address": to_checksum_address(token_address2),
        "settle_timeout": settle_timeout,
    }
    request = grequests.put(
        api_url_for(api_server_test_instance, "channelsresource"), json=channel_data_obj
    )
    response = request.send().response
    assert_proper_response(response, HTTPStatus.CREATED)

    # and now let's get the token list
    request = grequests.get(api_url_for(api_server_test_instance, "tokensresource"))
    response = request.send().response
    assert_proper_response(response)
    json_response = get_json_response(response)
    expected_response = [to_checksum_address(token_address1), to_checksum_address(token_address2)]
    assert set(json_response) == set(expected_response)


@pytest.mark.parametrize("number_of_nodes", [1])
@pytest.mark.parametrize("channels_per_node", [0])
def test_query_partners_by_token(api_server_test_instance, blockchain_services, token_addresses):
    first_partner_address = "0x61C808D82A3Ac53231750daDc13c777b59310bD9"
    second_partner_address = "0x29FA6cf0Cce24582a9B20DB94Be4B6E017896038"
    token_address = token_addresses[0]
    settle_timeout = 1650
    channel_data_obj = {
        "partner_address": first_partner_address,
        "token_address": to_checksum_address(token_address),
        "settle_timeout": settle_timeout,
    }
    request = grequests.put(
        api_url_for(api_server_test_instance, "channelsresource"), json=channel_data_obj
    )
    response = request.send().response
    assert_proper_response(response, HTTPStatus.CREATED)

    channel_data_obj["partner_address"] = second_partner_address
    request = grequests.put(
        api_url_for(api_server_test_instance, "channelsresource"), json=channel_data_obj
    )
    response = request.send().response
    assert_proper_response(response, HTTPStatus.CREATED)

    # and a third channel with another partner for the same token
    channel_data_obj["partner_address"] = "0xb07937AbA15304FBBB0Bf6454a9377a76E3dD39E"
    channel_data_obj["token_address"] = to_checksum_address(token_address)
    request = grequests.put(
        api_url_for(api_server_test_instance, "channelsresource"), json=channel_data_obj
    )
    response = request.send().response
    assert_proper_response(response, HTTPStatus.CREATED)

    # and now let's query our partners per token for the first token
    request = grequests.get(
        api_url_for(
            api_server_test_instance,
            "partnersresourcebytokenaddress",
            token_address=to_checksum_address(token_address),
        )
    )
    response = request.send().response
    assert_proper_response(response)
    json_response = get_json_response(response)
    expected_response = [
        {
            "partner_address": first_partner_address,
            "channel": "/api/v1/channels/{}/{}".format(
                to_checksum_address(token_address), to_checksum_address(first_partner_address)
            ),
        },
        {
            "partner_address": second_partner_address,
            "channel": "/api/v1/channels/{}/{}".format(
                to_checksum_address(token_address), to_checksum_address(second_partner_address)
            ),
        },
    ]
    assert all(r in json_response for r in expected_response)


@pytest.mark.parametrize("number_of_nodes", [2])
def test_api_payments_target_error(api_server_test_instance, raiden_network, token_addresses):
    _, app1 = raiden_network
    amount = 200
    identifier = 42
    token_address = token_addresses[0]
    target_address = app1.raiden.address

    # stop app1 to force an error
    app1.stop()

    request = grequests.post(
        api_url_for(
            api_server_test_instance,
            "token_target_paymentresource",
            token_address=to_checksum_address(token_address),
            target_address=to_checksum_address(target_address),
        ),
        json={"amount": amount, "identifier": identifier},
    )
    response = request.send().response
    assert_proper_response(response, status_code=HTTPStatus.CONFLICT)

    app1.start()


@pytest.mark.parametrize("number_of_nodes", [2])
def test_api_payments(api_server_test_instance, raiden_network, token_addresses):
    _, app1 = raiden_network
    amount = 200
    identifier = 42
    token_address = token_addresses[0]
    target_address = app1.raiden.address

    our_address = api_server_test_instance.rest_api.raiden_api.address
    payment = {
        "initiator_address": to_checksum_address(our_address),
        "target_address": to_checksum_address(target_address),
        "token_address": to_checksum_address(token_address),
        "amount": amount,
        "identifier": identifier,
    }

    request = grequests.post(
        api_url_for(
            api_server_test_instance,
            "token_target_paymentresource",
            token_address=to_checksum_address(token_address),
            target_address=to_checksum_address(target_address),
        ),
        json={"amount": amount, "identifier": identifier},
    )
    response = request.send().response
    assert_proper_response(response)
    json_response = get_json_response(response)
    assert_payment_secret_and_hash(json_response, payment)


@pytest.mark.parametrize("number_of_nodes", [2])
def test_api_timestamp_format(api_server_test_instance, raiden_network, token_addresses):
    _, app1 = raiden_network
    amount = 200
    identifier = 42
    token_address = token_addresses[0]
    target_address = app1.raiden.address

    payment_url = api_url_for(
        api_server_test_instance,
        "token_target_paymentresource",
        token_address=to_checksum_address(token_address),
        target_address=to_checksum_address(target_address),
    )

    # make a payment, then query its payment events
    grequests.post(payment_url, json={"amount": amount, "identifier": identifier}).send()
    json_response = get_json_response(grequests.get(payment_url).send().response)

    assert len(json_response) == 1, "payment response had no event record"
    event_data = json_response[0]
    assert "log_time" in event_data, "missing log_time attribute from event record"
    log_timestamp = event_data["log_time"]

    # python (and javascript) can parse strings with either a space or a T as the
    # separator between date and time and still treat them as ISO8601 strings
    log_date = datetime.datetime.fromisoformat(log_timestamp)
    log_timestamp_iso = log_date.isoformat()
    assert log_timestamp_iso == log_timestamp, "log_time is not a valid ISO8601 string"


@pytest.mark.parametrize("number_of_nodes", [2])
def test_api_payments_secret_hash_errors(
    api_server_test_instance, raiden_network, token_addresses
):
    _, app1 = raiden_network
    amount = 200
    identifier = 42
    token_address = token_addresses[0]
    target_address = app1.raiden.address
    secret = to_hex(factories.make_secret())
    bad_secret = "Not Hex String. 0x78c8d676e2f2399aa2a015f3433a2083c55003591a0f3f33"
    bad_secret_hash = "Not Hex String. 0x78c8d676e2f2399aa2a015f3433a2083c55003591a0f3f33"
    short_secret = "0x123"
    short_secret_hash = "Short secret hash"

    # a secret that is too short must be rejected
    request = grequests.post(
        api_url_for(
            api_server_test_instance,
            "token_target_paymentresource",
            token_address=to_checksum_address(token_address),
            target_address=to_checksum_address(target_address),
        ),
        json={"amount": amount, "identifier": identifier, "secret": short_secret},
    )
    response = request.send().response
    assert_proper_response(response, status_code=HTTPStatus.BAD_REQUEST)

    # a secret that is not a hex string must be rejected
    request = grequests.post(
        api_url_for(
            api_server_test_instance,
            "token_target_paymentresource",
            token_address=to_checksum_address(token_address),
            target_address=to_checksum_address(target_address),
        ),
        json={"amount": amount, "identifier": identifier, "secret": bad_secret},
    )
    response = request.send().response
    assert_proper_response(response, status_code=HTTPStatus.BAD_REQUEST)

    # a secret hash that is too short must be rejected
    request = grequests.post(
        api_url_for(
            api_server_test_instance,
            "token_target_paymentresource",
            token_address=to_checksum_address(token_address),
            target_address=to_checksum_address(target_address),
        ),
        json={"amount": amount, "identifier": identifier, "secret_hash": short_secret_hash},
    )
    response = request.send().response
    assert_proper_response(response, status_code=HTTPStatus.BAD_REQUEST)

    # a secret hash that is not a hex string must be rejected
    request = grequests.post(
        api_url_for(
            api_server_test_instance,
            "token_target_paymentresource",
            token_address=to_checksum_address(token_address),
            target_address=to_checksum_address(target_address),
        ),
        json={"amount": amount, "identifier": identifier, "secret_hash": bad_secret_hash},
    )
    response = request.send().response
    assert_proper_response(response, status_code=HTTPStatus.BAD_REQUEST)

    # using the secret itself as the secret hash is a conflict
    request = grequests.post(
        api_url_for(
            api_server_test_instance,
            "token_target_paymentresource",
            token_address=to_checksum_address(token_address),
            target_address=to_checksum_address(target_address),
        ),
        json={"amount": amount, "identifier": identifier, "secret": secret, "secret_hash": secret},
    )
    response = request.send().response
    assert_proper_response(response, status_code=HTTPStatus.CONFLICT)


@pytest.mark.parametrize("number_of_nodes", [2])
def test_api_payments_with_secret_no_hash(
    api_server_test_instance, raiden_network, token_addresses
):
    _, app1 = raiden_network
    amount = 200
    identifier = 42
    token_address = token_addresses[0]
    target_address = app1.raiden.address
    secret = to_hex(factories.make_secret())

    our_address = api_server_test_instance.rest_api.raiden_api.address
    payment = {
        "initiator_address": to_checksum_address(our_address),
        "target_address": to_checksum_address(target_address),
        "token_address": to_checksum_address(token_address),
        "amount": amount,
        "identifier": identifier,
    }

    request = grequests.post(
        api_url_for(
            api_server_test_instance,
            "token_target_paymentresource",
            token_address=to_checksum_address(token_address),
            target_address=to_checksum_address(target_address),
        ),
        json={"amount": amount, "identifier": identifier, "secret": secret},
    )
    response = request.send().response
    assert_proper_response(response)
    json_response = get_json_response(response)
    assert_payment_secret_and_hash(json_response, payment)
    assert secret == json_response["secret"]


@pytest.mark.parametrize("number_of_nodes", [2])
def test_api_payments_with_hash_no_secret(
    api_server_test_instance, raiden_network, token_addresses
):
    _, app1 = raiden_network
    amount = 200
    identifier = 42
    token_address = token_addresses[0]
    target_address = app1.raiden.address
    secret = to_hex(factories.make_secret())
    secret_hash = to_hex(sha256(to_bytes(hexstr=secret)).digest())

    # providing only a secret hash without the matching secret is a conflict
    request = grequests.post(
        api_url_for(
            api_server_test_instance,
            "token_target_paymentresource",
            token_address=to_checksum_address(token_address),
            target_address=to_checksum_address(target_address),
        ),
        json={"amount": amount, "identifier": identifier, "secret_hash": secret_hash},
    )
    response = request.send().response
    assert_proper_response(response, status_code=HTTPStatus.CONFLICT)


@pytest.mark.parametrize("number_of_nodes", [2])
def test_api_payments_with_secret_and_hash(
    api_server_test_instance, raiden_network, token_addresses
):
    _, app1 = raiden_network
    amount = 200
    identifier = 42
    token_address = token_addresses[0]
    target_address = app1.raiden.address
    secret = to_hex(factories.make_secret())
    secret_hash = to_hex(sha256(to_bytes(hexstr=secret)).digest())

    our_address = api_server_test_instance.rest_api.raiden_api.address
    payment = {
        "initiator_address": to_checksum_address(our_address),
        "target_address": to_checksum_address(target_address),
        "token_address": to_checksum_address(token_address),
        "amount": amount,
        "identifier": identifier,
    }

    request = grequests.post(
        api_url_for(
            api_server_test_instance,
            "token_target_paymentresource",
            token_address=to_checksum_address(token_address),
            target_address=to_checksum_address(target_address),
        ),
        json={
            "amount": amount,
            "identifier": identifier,
            "secret": secret,
            "secret_hash": secret_hash,
        },
    )
    response = request.send().response
    assert_proper_response(response)
    json_response = get_json_response(response)
    assert_payment_secret_and_hash(json_response, payment)
    assert secret == json_response["secret"]
    assert secret_hash == json_response["secret_hash"]


def assert_payment_secret_and_hash(response, payment):
    # make sure that the payment key/values are part of the response.
    assert response.items() >= payment.items()
    assert "secret" in response
    assert "secret_hash" in response

    secret = to_bytes(hexstr=response["secret"])
    assert len(secret) == SECRET_LENGTH
    assert to_bytes(hexstr=response["secret_hash"]) == sha256(secret).digest()


def assert_payment_conflict(responses):
    assert all(response is not None for response in responses)
    assert any(
        resp.status_code == HTTPStatus.CONFLICT
        and get_json_response(resp)["errors"] == "Another payment with the same id is in flight"
        for resp in responses
    )
@pytest.mark.parametrize("number_of_nodes", [2])
def test_api_payments_conflicts(api_server_test_instance, raiden_network, token_addresses):
_, app1 = raiden_network
token_address = token_addresses[0]
target_address = app1.raiden.address
payment_url = api_url_for(
api_server_test_instance,
"token_target_paymentresource",
token_address=to_checksum_address(token_address),
target_address=to_checksum_address(target_address),
)
# two different transfers (different amounts) with same identifier at the same time:
# payment conflict
responses = grequests.map(
[
grequests.post(payment_url, json={"amount": 10, "identifier": 11}),
grequests.post(payment_url, json={"amount": 11, "identifier": 11}),
]
)
assert_payment_conflict(responses)
    # same request sent twice, e.g. when it is retried: no conflict
responses = grequests.map(
[
grequests.post(payment_url, json={"amount": 10, "identifier": 73}),
grequests.post(payment_url, json={"amount": 10, "identifier": 73}),
]
)
    assert all(response.status_code == HTTPStatus.OK for response in responses)


@pytest.mark.parametrize("number_of_tokens", [0])
@pytest.mark.parametrize("number_of_nodes", [1])
@pytest.mark.parametrize("channels_per_node", [0])
@pytest.mark.parametrize("environment_type", [Environment.PRODUCTION])
def test_register_token_mainnet(
api_server_test_instance, token_amount, token_addresses, raiden_network, contract_manager
):
app0 = raiden_network[0]
new_token_address = deploy_contract_web3(
CONTRACT_HUMAN_STANDARD_TOKEN,
app0.raiden.chain.client,
contract_manager=contract_manager,
constructor_arguments=(token_amount, 2, "raiden", "Rd"),
)
register_request = grequests.put(
api_url_for(
api_server_test_instance,
"registertokenresource",
token_address=to_checksum_address(new_token_address),
)
)
response = register_request.send().response
    assert response is not None and response.status_code == HTTPStatus.NOT_IMPLEMENTED


@pytest.mark.parametrize("number_of_tokens", [0])
@pytest.mark.parametrize("number_of_nodes", [1])
@pytest.mark.parametrize("channels_per_node", [0])
@pytest.mark.parametrize("environment_type", [Environment.DEVELOPMENT])
def test_register_token(
api_server_test_instance, token_amount, token_addresses, raiden_network, contract_manager
):
app0 = raiden_network[0]
new_token_address = deploy_contract_web3(
CONTRACT_HUMAN_STANDARD_TOKEN,
app0.raiden.chain.client,
contract_manager=contract_manager,
constructor_arguments=(token_amount, 2, "raiden", "Rd"),
)
other_token_address = deploy_contract_web3(
CONTRACT_HUMAN_STANDARD_TOKEN,
app0.raiden.chain.client,
contract_manager=contract_manager,
constructor_arguments=(token_amount, 2, "raiden", "Rd"),
)
register_request = grequests.put(
api_url_for(
api_server_test_instance,
"registertokenresource",
token_address=to_checksum_address(new_token_address),
)
)
register_response = register_request.send().response
assert_proper_response(register_response, status_code=HTTPStatus.CREATED)
response_json = get_json_response(register_response)
assert "token_network_address" in response_json
assert is_checksum_address(response_json["token_network_address"])
# now try to reregister it and get the error
conflict_request = grequests.put(
api_url_for(
api_server_test_instance,
"registertokenresource",
token_address=to_checksum_address(new_token_address),
)
)
conflict_response = conflict_request.send().response
assert_response_with_error(conflict_response, HTTPStatus.CONFLICT)
# Burn all the eth and then make sure we get the appropriate API error
burn_eth(app0.raiden.chain.client)
poor_request = grequests.put(
api_url_for(
api_server_test_instance,
"registertokenresource",
token_address=to_checksum_address(other_token_address),
)
)
poor_response = poor_request.send().response
    assert_response_with_error(poor_response, HTTPStatus.PAYMENT_REQUIRED)


@pytest.mark.parametrize("number_of_tokens", [0])
@pytest.mark.parametrize("number_of_nodes", [1])
@pytest.mark.parametrize("channels_per_node", [0])
@pytest.mark.parametrize("environment_type", [Environment.DEVELOPMENT])
def test_get_token_network_for_token(
api_server_test_instance, token_amount, token_addresses, raiden_network, contract_manager
):
app0 = raiden_network[0]
new_token_address = deploy_contract_web3(
CONTRACT_HUMAN_STANDARD_TOKEN,
app0.raiden.chain.client,
contract_manager=contract_manager,
constructor_arguments=(token_amount, 2, "raiden", "Rd"),
)
# unregistered token returns 404
token_request = grequests.get(
api_url_for(
api_server_test_instance,
"registertokenresource",
token_address=to_checksum_address(new_token_address),
)
)
token_response = token_request.send().response
assert_proper_response(token_response, status_code=HTTPStatus.NOT_FOUND)
# register token
register_request = grequests.put(
api_url_for(
api_server_test_instance,
"registertokenresource",
token_address=to_checksum_address(new_token_address),
)
)
register_response = register_request.send().response
assert_proper_response(register_response, status_code=HTTPStatus.CREATED)
token_network_address = get_json_response(register_response)["token_network_address"]
wait_for_token_network(
app0.raiden, app0.raiden.default_registry.address, new_token_address, 0.1
)
# now it should return the token address
token_request = grequests.get(
api_url_for(
api_server_test_instance,
"registertokenresource",
token_address=to_checksum_address(new_token_address),
)
)
token_response = token_request.send().response
assert_proper_response(token_response, status_code=HTTPStatus.OK)
    assert token_network_address == get_json_response(token_response)


@pytest.mark.parametrize("number_of_nodes", [1])
@pytest.mark.parametrize("channels_per_node", [0])
@pytest.mark.parametrize("number_of_tokens", [1])
# For non-red-eyes mainnet code, set number_of_tokens to 2 and uncomment the code
# at the end of this test
def test_get_connection_managers_info(api_server_test_instance, token_addresses):
# check that there are no registered tokens
request = grequests.get(api_url_for(api_server_test_instance, "connectionsinforesource"))
response = request.send().response
result = get_json_response(response)
assert len(result) == 0
funds = 100
token_address1 = to_checksum_address(token_addresses[0])
connect_data_obj = {"funds": funds}
request = grequests.put(
api_url_for(api_server_test_instance, "connectionsresource", token_address=token_address1),
json=connect_data_obj,
)
response = request.send().response
assert_no_content_response(response)
# check that there now is one registered channel manager
request = grequests.get(api_url_for(api_server_test_instance, "connectionsinforesource"))
response = request.send().response
result = get_json_response(response)
assert isinstance(result, dict) and len(result.keys()) == 1
assert token_address1 in result
assert isinstance(result[token_address1], dict)
assert set(result[token_address1].keys()) == {"funds", "sum_deposits", "channels"}
# funds = 100
# token_address2 = to_checksum_address(token_addresses[1])
# connect_data_obj = {
# 'funds': funds,
# }
# request = grequests.put(
# api_url_for(
# api_server_test_instance,
# 'connectionsresource',
# token_address=token_address2,
# ),
# json=connect_data_obj,
# )
# response = request.send().response
# assert_no_content_response(response)
# # check that there now are two registered channel managers
# request = grequests.get(
# api_url_for(api_server_test_instance, 'connectionsinforesource'),
# )
# response = request.send().response
# result = response.json()
# assert isinstance(result, dict) and len(result.keys()) == 2
# assert token_address2 in result
# assert isinstance(result[token_address2], dict)
    # assert set(result[token_address2].keys()) == {'funds', 'sum_deposits', 'channels'}


@pytest.mark.parametrize("number_of_nodes", [1])
@pytest.mark.parametrize("channels_per_node", [0])
@pytest.mark.parametrize("number_of_tokens", [1])
def test_connect_insufficient_reserve(api_server_test_instance, token_addresses):
# Burn all eth and then try to connect to a token network
burn_eth(api_server_test_instance.rest_api.raiden_api.raiden.chain.client)
funds = 100
token_address1 = to_checksum_address(token_addresses[0])
connect_data_obj = {"funds": funds}
request = grequests.put(
api_url_for(api_server_test_instance, "connectionsresource", token_address=token_address1),
json=connect_data_obj,
)
response = request.send().response
assert_proper_response(response, HTTPStatus.PAYMENT_REQUIRED)
json_response = get_json_response(response)
    assert "The account balance is below the estimated amount" in json_response["errors"]


@pytest.mark.parametrize("number_of_nodes", [1])
@pytest.mark.parametrize("channels_per_node", [0])
def test_network_events(api_server_test_instance, token_addresses):
# let's create a new channel
partner_address = "0x61C808D82A3Ac53231750daDc13c777b59310bD9"
token_address = token_addresses[0]
settle_timeout = 1650
channel_data_obj = {
"partner_address": partner_address,
"token_address": to_checksum_address(token_address),
"settle_timeout": settle_timeout,
}
request = grequests.put(
api_url_for(api_server_test_instance, "channelsresource"), json=channel_data_obj
)
response = request.send().response
assert_proper_response(response, status_code=HTTPStatus.CREATED)
request = grequests.get(
api_url_for(
api_server_test_instance,
"blockchaineventsnetworkresource",
from_block=GENESIS_BLOCK_NUMBER,
)
)
response = request.send().response
assert_proper_response(response, status_code=HTTPStatus.OK)
    assert len(get_json_response(response)) > 0


@pytest.mark.parametrize("number_of_nodes", [1])
@pytest.mark.parametrize("channels_per_node", [0])
def test_token_events(api_server_test_instance, token_addresses):
# let's create a new channel
partner_address = "0x61C808D82A3Ac53231750daDc13c777b59310bD9"
token_address = token_addresses[0]
settle_timeout = 1650
channel_data_obj = {
"partner_address": partner_address,
"token_address": to_checksum_address(token_address),
"settle_timeout": settle_timeout,
}
request = grequests.put(
api_url_for(api_server_test_instance, "channelsresource"), json=channel_data_obj
)
response = request.send().response
assert_proper_response(response, status_code=HTTPStatus.CREATED)
request = grequests.get(
api_url_for(
api_server_test_instance,
"blockchaineventstokenresource",
token_address=token_address,
from_block=GENESIS_BLOCK_NUMBER,
)
)
response = request.send().response
assert_proper_response(response, status_code=HTTPStatus.OK)
    assert len(get_json_response(response)) > 0


@pytest.mark.parametrize("number_of_nodes", [1])
@pytest.mark.parametrize("channels_per_node", [0])
def test_channel_events(api_server_test_instance, token_addresses):
# let's create a new channel
partner_address = "0x61C808D82A3Ac53231750daDc13c777b59310bD9"
token_address = token_addresses[0]
settle_timeout = 1650
channel_data_obj = {
"partner_address": partner_address,
"token_address": to_checksum_address(token_address),
"settle_timeout": settle_timeout,
}
request = grequests.put(
api_url_for(api_server_test_instance, "channelsresource"), json=channel_data_obj
)
response = request.send().response
assert_proper_response(response, status_code=HTTPStatus.CREATED)
request = grequests.get(
api_url_for(
api_server_test_instance,
"tokenchanneleventsresourceblockchain",
partner_address=partner_address,
token_address=token_address,
from_block=GENESIS_BLOCK_NUMBER,
)
)
response = request.send().response
assert_proper_response(response, status_code=HTTPStatus.OK)
    assert len(get_json_response(response)) > 0


@pytest.mark.parametrize("number_of_nodes", [1])
@pytest.mark.parametrize("channels_per_node", [0])
def test_token_events_errors_for_unregistered_token(api_server_test_instance):
request = grequests.get(
api_url_for(
api_server_test_instance,
"tokenchanneleventsresourceblockchain",
token_address="0x61C808D82A3Ac53231750daDc13c777b59310bD9",
from_block=5,
to_block=20,
)
)
response = request.send().response
assert_response_with_error(response, status_code=HTTPStatus.NOT_FOUND)
request = grequests.get(
api_url_for(
api_server_test_instance,
"channelblockchaineventsresource",
token_address="0x61C808D82A3Ac53231750daDc13c777b59310bD9",
partner_address="0x61C808D82A3Ac53231750daDc13c777b59313bD9",
from_block=5,
to_block=20,
)
)
response = request.send().response
    assert_response_with_error(response, status_code=HTTPStatus.NOT_FOUND)


@pytest.mark.parametrize("number_of_nodes", [1])
@pytest.mark.parametrize("channels_per_node", [0])
@pytest.mark.parametrize("deposit", [50000])
def test_api_deposit_limit(api_server_test_instance, token_addresses, reveal_timeout):
# let's create a new channel and deposit exactly the limit amount
first_partner_address = "0x61C808D82A3Ac53231750daDc13c777b59310bD9"
token_address = token_addresses[0]
settle_timeout = 1650
balance_working = RED_EYES_PER_CHANNEL_PARTICIPANT_LIMIT
channel_data_obj = {
"partner_address": first_partner_address,
"token_address": to_checksum_address(token_address),
"settle_timeout": settle_timeout,
"reveal_timeout": reveal_timeout,
"total_deposit": balance_working,
}
request = grequests.put(
api_url_for(api_server_test_instance, "channelsresource"), json=channel_data_obj
)
response = request.send().response
assert_proper_response(response, HTTPStatus.CREATED)
first_channel_identifier = 1
json_response = get_json_response(response)
expected_response = channel_data_obj.copy()
expected_response.update(
{
"balance": balance_working,
"state": ChannelState.STATE_OPENED.value,
"channel_identifier": first_channel_identifier,
"total_deposit": balance_working,
}
)
assert check_dict_nested_attrs(json_response, expected_response)
# now let's open a channel and deposit a bit more than the limit
second_partner_address = "0x29FA6cf0Cce24582a9B20DB94Be4B6E017896038"
    balance_failing = balance_working + 1  # token has two decimal places
channel_data_obj = {
"partner_address": second_partner_address,
"token_address": to_checksum_address(token_address),
"settle_timeout": settle_timeout,
"reveal_timeout": reveal_timeout,
"total_deposit": balance_failing,
}
request = grequests.put(
api_url_for(api_server_test_instance, "channelsresource"), json=channel_data_obj
)
response = request.send().response
assert_proper_response(response, HTTPStatus.CONFLICT)
json_response = get_json_response(response)
assert (
json_response["errors"]
== "Deposit of 75000000000000001 is larger than the channel participant deposit limit"
    )


@pytest.mark.parametrize("number_of_nodes", [3])
def test_payment_events_endpoints(api_server_test_instance, raiden_network, token_addresses):
app0, app1, app2 = raiden_network
amount1 = 200
identifier1 = 42
secret1, secrethash1 = factories.make_secret_with_hash()
token_address = token_addresses[0]
app0_address = app0.raiden.address
target1_address = app1.raiden.address
target2_address = app2.raiden.address
app1_server = create_api_server(app1, 8575)
app2_server = create_api_server(app2, 8576)
# app0 is sending tokens to target 1
request = grequests.post(
api_url_for(
api_server_test_instance,
"token_target_paymentresource",
token_address=to_checksum_address(token_address),
target_address=to_checksum_address(target1_address),
),
json={"amount": amount1, "identifier": identifier1, "secret": to_hex(secret1)},
)
request.send()
# app0 is sending some tokens to target 2
identifier2 = 43
amount2 = 123
secret2, secrethash2 = factories.make_secret_with_hash()
request = grequests.post(
api_url_for(
api_server_test_instance,
"token_target_paymentresource",
token_address=to_checksum_address(token_address),
target_address=to_checksum_address(target2_address),
),
json={"amount": amount2, "identifier": identifier2, "secret": to_hex(secret2)},
)
request.send()
# target1 also sends some tokens to target 2
identifier3 = 44
amount3 = 5
secret3, secrethash3 = factories.make_secret_with_hash()
request = grequests.post(
api_url_for(
app1_server,
"token_target_paymentresource",
token_address=to_checksum_address(token_address),
target_address=to_checksum_address(target2_address),
),
json={"amount": amount3, "identifier": identifier3, "secret": to_hex(secret3)},
)
request.send()
exception = ValueError("Waiting for transfer received success in the WAL timed out")
with gevent.Timeout(seconds=60, exception=exception):
result = wait_for_received_transfer_result(
app1.raiden, identifier1, amount1, app1.raiden.alarm.sleep_time, secrethash1
)
msg = f"Unexpected transfer result: {str(result)}"
assert result == TransferWaitResult.UNLOCKED, msg
result = wait_for_received_transfer_result(
app2.raiden, identifier2, amount2, app2.raiden.alarm.sleep_time, secrethash2
)
msg = f"Unexpected transfer result: {str(result)}"
assert result == TransferWaitResult.UNLOCKED, msg
result = wait_for_received_transfer_result(
app2.raiden, identifier3, amount3, app2.raiden.alarm.sleep_time, secrethash3
)
msg = f"Unexpected transfer result: {str(result)}"
assert result == TransferWaitResult.UNLOCKED, msg
# test endpoint without (partner and token) for sender
request = grequests.get(api_url_for(api_server_test_instance, "paymentresource"))
response = request.send().response
assert_proper_response(response, HTTPStatus.OK)
json_response = get_json_response(response)
assert must_have_event(
json_response,
{
"event": "EventPaymentSentSuccess",
"identifier": identifier1,
"target": to_checksum_address(target1_address),
},
)
assert must_have_event(
json_response,
{
"event": "EventPaymentSentSuccess",
"identifier": identifier2,
"target": to_checksum_address(target2_address),
},
)
# test endpoint without (partner and token) for target1
request = grequests.get(api_url_for(app1_server, "paymentresource"))
response = request.send().response
assert_proper_response(response, HTTPStatus.OK)
json_response = get_json_response(response)
assert must_have_event(
json_response, {"event": "EventPaymentReceivedSuccess", "identifier": identifier1}
)
assert must_have_event(
json_response, {"event": "EventPaymentSentSuccess", "identifier": identifier3}
)
# test endpoint without (partner and token) for target2
request = grequests.get(api_url_for(app2_server, "paymentresource"))
response = request.send().response
assert_proper_response(response, HTTPStatus.OK)
json_response = get_json_response(response)
assert must_have_event(
json_response, {"event": "EventPaymentReceivedSuccess", "identifier": identifier2}
)
assert must_have_event(
json_response, {"event": "EventPaymentReceivedSuccess", "identifier": identifier3}
)
# test endpoint without partner for app0
request = grequests.get(
api_url_for(api_server_test_instance, "token_paymentresource", token_address=token_address)
)
response = request.send().response
assert_proper_response(response, HTTPStatus.OK)
json_response = get_json_response(response)
assert must_have_event(
json_response,
{
"event": "EventPaymentSentSuccess",
"identifier": identifier1,
"target": to_checksum_address(target1_address),
},
)
assert must_have_event(
json_response,
{
"event": "EventPaymentSentSuccess",
"identifier": identifier2,
"target": to_checksum_address(target2_address),
},
)
# test endpoint without partner for app0 but with limit/offset to get only first
request = grequests.get(
api_url_for(
api_server_test_instance,
"token_paymentresource",
token_address=token_address,
limit=1,
offset=0,
)
)
response = request.send().response
assert_proper_response(response, HTTPStatus.OK)
json_response = get_json_response(response)
assert must_have_event(
json_response,
{
"event": "EventPaymentSentSuccess",
"identifier": identifier1,
"target": to_checksum_address(target1_address),
},
)
assert len(json_response) == 1
# test endpoint without partner for app0 but with limit/offset to get only second
request = grequests.get(
api_url_for(
api_server_test_instance,
"token_paymentresource",
token_address=token_address,
limit=1,
offset=1,
)
)
response = request.send().response
assert_proper_response(response, HTTPStatus.OK)
json_response = get_json_response(response)
assert must_have_event(
json_response,
{
"event": "EventPaymentSentSuccess",
"identifier": identifier2,
"target": to_checksum_address(target2_address),
},
)
# test endpoint without partner for target1
request = grequests.get(
api_url_for(app1_server, "token_paymentresource", token_address=token_address)
)
response = request.send().response
assert_proper_response(response, HTTPStatus.OK)
json_response = get_json_response(response)
assert must_have_events(
json_response,
{"event": "EventPaymentReceivedSuccess", "identifier": identifier1},
{
"event": "EventPaymentSentSuccess",
"identifier": identifier3,
"target": to_checksum_address(target2_address),
},
)
# test endpoint without partner for target2
request = grequests.get(
api_url_for(app2_server, "token_paymentresource", token_address=token_address)
)
response = request.send().response
assert_proper_response(response, HTTPStatus.OK)
json_response = get_json_response(response)
assert must_have_events(
json_response,
{"event": "EventPaymentReceivedSuccess", "identifier": identifier2},
{"event": "EventPaymentReceivedSuccess", "identifier": identifier3},
)
# test endpoint for token and partner for app0
request = grequests.get(
api_url_for(
api_server_test_instance,
"token_target_paymentresource",
token_address=token_address,
target_address=target1_address,
)
)
response = request.send().response
assert_proper_response(response, HTTPStatus.OK)
json_response = get_json_response(response)
assert must_have_event(
json_response,
{
"event": "EventPaymentSentSuccess",
"identifier": identifier1,
"target": to_checksum_address(target1_address),
},
)
assert not must_have_event(
json_response,
{
"event": "EventPaymentSentSuccess",
"identifier": identifier2,
"target": to_checksum_address(target2_address),
},
)
# test endpoint for token and partner for target1. Check both partners
# to see that filtering works correctly
request = grequests.get(
api_url_for(
app1_server,
"token_target_paymentresource",
token_address=token_address,
target_address=target2_address,
)
)
response = request.send().response
assert_proper_response(response, HTTPStatus.OK)
json_response = get_json_response(response)
assert must_have_event(
json_response,
{
"event": "EventPaymentSentSuccess",
"identifier": identifier3,
"target": to_checksum_address(target2_address),
},
)
    assert not must_have_event(
        json_response, {"event": "EventPaymentReceivedSuccess", "identifier": identifier1}
    )
request = grequests.get(
api_url_for(
app1_server,
"token_target_paymentresource",
token_address=token_address,
target_address=target1_address,
)
)
response = request.send().response
assert_proper_response(response, HTTPStatus.OK)
json_response = get_json_response(response)
assert len(json_response) == 0
# test endpoint for token and partner for target2
request = grequests.get(
api_url_for(
app2_server,
"token_target_paymentresource",
token_address=token_address,
target_address=app0_address,
)
)
response = request.send().response
assert_proper_response(response, HTTPStatus.OK)
json_response = get_json_response(response)
assert must_have_events(
json_response, {"event": "EventPaymentReceivedSuccess", "identifier": identifier2}
)
assert not must_have_event(
json_response, {"event": "EventPaymentReceivedSuccess", "identifier": identifier1}
)
assert not must_have_event(
json_response, {"event": "EventPaymentReceivedSuccess", "identifier": identifier3}
)
request = grequests.get(
api_url_for(
app2_server,
"token_target_paymentresource",
token_address=token_address,
target_address=target1_address,
)
)
response = request.send().response
assert_proper_response(response, HTTPStatus.OK)
json_response = get_json_response(response)
assert must_have_events(
json_response, {"event": "EventPaymentReceivedSuccess", "identifier": identifier3}
)
assert not must_have_event(
json_response, {"event": "EventPaymentReceivedSuccess", "identifier": identifier2}
)
assert not must_have_event(
json_response, {"event": "EventPaymentReceivedSuccess", "identifier": identifier1}
)
app1_server.stop()
    app2_server.stop()


@pytest.mark.parametrize("number_of_nodes", [2])
def test_channel_events_raiden(api_server_test_instance, raiden_network, token_addresses):
_, app1 = raiden_network
amount = 200
identifier = 42
token_address = token_addresses[0]
target_address = app1.raiden.address
request = grequests.post(
api_url_for(
api_server_test_instance,
"token_target_paymentresource",
token_address=to_checksum_address(token_address),
target_address=to_checksum_address(target_address),
),
json={"amount": amount, "identifier": identifier},
)
response = request.send().response
    assert_proper_response(response)


@pytest.mark.parametrize("number_of_nodes", [3])
@pytest.mark.parametrize("channels_per_node", [CHAIN])
def test_pending_transfers_endpoint(raiden_network, token_addresses):
initiator, mediator, target = raiden_network
amount = 200
identifier = 42
token_address = token_addresses[0]
token_network_address = views.get_token_network_address_by_token_address(
views.state_from_app(mediator), mediator.raiden.default_registry.address, token_address
)
initiator_server = create_api_server(initiator, 8575)
mediator_server = create_api_server(mediator, 8576)
target_server = create_api_server(target, 8577)
target.raiden.message_handler = target_wait = WaitForMessage()
mediator.raiden.message_handler = mediator_wait = WaitForMessage()
secret = factories.make_secret()
secrethash = sha256(secret).digest()
request = grequests.get(
api_url_for(
mediator_server, "pending_transfers_resource_by_token", token_address=token_address
)
)
response = request.send().response
assert response.status_code == 200 and response.content == b"[]"
target_hold = target.raiden.raiden_event_handler
target_hold.hold_secretrequest_for(secrethash=secrethash)
initiator.raiden.start_mediated_transfer_with_secret(
token_network_address=token_network_address,
amount=amount,
fee=0,
target=target.raiden.address,
identifier=identifier,
secret=secret,
)
transfer_arrived = target_wait.wait_for_message(LockedTransfer, {"payment_identifier": 42})
transfer_arrived.wait()
for server in (initiator_server, mediator_server, target_server):
request = grequests.get(api_url_for(server, "pending_transfers_resource"))
response = request.send().response
assert response.status_code == 200
content = json.loads(response.content)
assert len(content) == 1
assert content[0]["payment_identifier"] == str(identifier)
assert content[0]["locked_amount"] == str(amount)
assert content[0]["token_address"] == to_checksum_address(token_address)
assert content[0]["token_network_address"] == to_checksum_address(token_network_address)
mediator_unlock = mediator_wait.wait_for_message(Unlock, {})
target_unlock = target_wait.wait_for_message(Unlock, {})
target_hold.release_secretrequest_for(target.raiden, secrethash)
gevent.wait((mediator_unlock, target_unlock))
for server in (initiator_server, mediator_server, target_server):
request = grequests.get(api_url_for(server, "pending_transfers_resource"))
response = request.send().response
assert response.status_code == 200 and response.content == b"[]"
request = grequests.get(
api_url_for(
initiator_server,
"pending_transfers_resource_by_token",
token_address=to_hex(b"notaregisteredtokenn"),
)
)
response = request.send().response
assert response.status_code == 404 and b"Token" in response.content
request = grequests.get(
api_url_for(
target_server,
"pending_transfers_resource_by_token_and_partner",
token_address=token_address,
partner_address=to_hex(b"~nonexistingchannel~"),
)
)
response = request.send().response
    assert response.status_code == 404 and b"Channel" in response.content


@pytest.mark.parametrize("number_of_nodes", [2])
@pytest.mark.parametrize("deposit", [1000])
def test_api_withdraw(api_server_test_instance, raiden_network, token_addresses):
_, app1 = raiden_network
token_address = token_addresses[0]
partner_address = app1.raiden.address
# Withdraw a 0 amount
request = grequests.patch(
api_url_for(
api_server_test_instance,
"channelsresourcebytokenandpartneraddress",
token_address=token_address,
partner_address=partner_address,
),
json=dict(total_withdraw=0),
)
response = request.send().response
assert_response_with_error(response, HTTPStatus.CONFLICT)
# Withdraw an amount larger than balance
request = grequests.patch(
api_url_for(
api_server_test_instance,
"channelsresourcebytokenandpartneraddress",
token_address=token_address,
partner_address=partner_address,
),
json=dict(total_withdraw=1500),
)
response = request.send().response
assert_response_with_error(response, HTTPStatus.CONFLICT)
# Withdraw a valid amount
request = grequests.patch(
api_url_for(
api_server_test_instance,
"channelsresourcebytokenandpartneraddress",
token_address=token_address,
partner_address=partner_address,
),
json=dict(total_withdraw=750),
)
response = request.send().response
assert_response_with_code(response, HTTPStatus.OK)
# Withdraw same amount as before
request = grequests.patch(
api_url_for(
api_server_test_instance,
"channelsresourcebytokenandpartneraddress",
token_address=token_address,
partner_address=partner_address,
),
json=dict(total_withdraw=750),
)
response = request.send().response
    assert_response_with_error(response, HTTPStatus.CONFLICT)


@pytest.mark.parametrize("number_of_nodes", [1])
@pytest.mark.parametrize("channels_per_node", [0])
@pytest.mark.parametrize("number_of_tokens", [1])
@pytest.mark.parametrize("token_contract_name", [CONTRACT_CUSTOM_TOKEN])
def test_api_testnet_token_mint(api_server_test_instance, token_addresses):
user_address = factories.make_checksum_address()
token_address = token_addresses[0]
url = api_url_for(api_server_test_instance, "tokensmintresource", token_address=token_address)
request = grequests.post(url, json=dict(to=user_address, value=1, contract_method="mintFor"))
response = request.send().response
assert_response_with_code(response, HTTPStatus.OK)
# mint method defaults to mintFor
request = grequests.post(url, json=dict(to=user_address, value=10))
response = request.send().response
assert_response_with_code(response, HTTPStatus.OK)
# fails because requested mint method is not there
request = grequests.post(url, json=dict(to=user_address, value=10, contract_method="mint"))
response = request.send().response
assert_response_with_error(response, HTTPStatus.BAD_REQUEST)
# fails because of invalid choice of mint method
request = grequests.post(
url, json=dict(to=user_address, value=10, contract_method="unknownMethod")
)
response = request.send().response
assert_response_with_error(response, HTTPStatus.BAD_REQUEST)
# invalid due to negative value
request = grequests.post(url, json=dict(to=user_address, value=-1))
response = request.send().response
assert_response_with_error(response, HTTPStatus.BAD_REQUEST)
# invalid due to invalid address
request = grequests.post(url, json=dict(to=user_address[:-2], value=10))
response = request.send().response
assert_response_with_error(response, HTTPStatus.BAD_REQUEST)


# hardware/drivers/__init__.py (17012/practicum)
from .i2c_dev import Lcd


# webdriver_test_tools/config/__init__.py (connordelacruz/webdriver-test-tools)
This module imports the following classes:
:class:`webdriver_test_tools.config.browser.BrowserConfig`
:class:`webdriver_test_tools.config.browser.BrowserStackConfig`
:class:`webdriver_test_tools.config.projectfiles.ProjectFilesConfig`
:class:`webdriver_test_tools.config.site.SiteConfig`
:class:`webdriver_test_tools.config.test.TestSuiteConfig`
:class:`webdriver_test_tools.config.webdriver.WebDriverConfig`
.. toctree::
webdriver_test_tools.config.browser
webdriver_test_tools.config.browserstack
webdriver_test_tools.config.projectfiles
webdriver_test_tools.config.site
webdriver_test_tools.config.test
webdriver_test_tools.config.webdriver
"""
from .projectfiles import ProjectFilesConfig
from .site import SiteConfig
from .test import TestSuiteConfig
from .webdriver import WebDriverConfig
from .browser import BrowserConfig, BrowserStackConfig
| 35.703704 | 72 | 0.823651 | 109 | 964 | 7.06422 | 0.284404 | 0.202597 | 0.280519 | 0.374026 | 0.484416 | 0.093506 | 0 | 0 | 0 | 0 | 0 | 0 | 0.10166 | 964 | 26 | 73 | 37.076923 | 0.889146 | 0.78112 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e3c0b1eef25f8a5e47dbfdaf5307d3423362b42b | 252 | py | Python | tests/unit/django/test_config.py | matthewgdv/sqlhandler | b82fd159195f6bb63175bb8a8d81fc421e7d5835 | [
"MIT"
] | null | null | null | tests/unit/django/test_config.py | matthewgdv/sqlhandler | b82fd159195f6bb63175bb8a8d81fc421e7d5835 | [
"MIT"
] | null | null | null | tests/unit/django/test_config.py | matthewgdv/sqlhandler | b82fd159195f6bb63175bb8a8d81fc421e7d5835 | [
"MIT"
] | null | null | null | # import pytest
class TestNullOp:
pass
class TestSqlConfig:
def test_ready(self): # synced
assert True
def test_setup(self): # synced
assert True
def test_initialize_database(self): # synced
assert True
| 14.823529 | 49 | 0.642857 | 29 | 252 | 5.448276 | 0.551724 | 0.132911 | 0.303797 | 0.379747 | 0.341772 | 0.341772 | 0 | 0 | 0 | 0 | 0 | 0 | 0.297619 | 252 | 16 | 50 | 15.75 | 0.892655 | 0.134921 | 0 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.333333 | false | 0.111111 | 0 | 0 | 0.555556 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
e3dfaff01ac1bbacc92067036c3367596405ba8f | 106 | py | Python | conftest.py | seperman/redisworks | e8b7c4be1baf3520d388b6065375c488942648c9 | [
"BSD-3-Clause"
] | 92 | 2016-08-23T00:48:04.000Z | 2022-03-10T21:06:17.000Z | conftest.py | DanInSpace104/redisworks | e8b7c4be1baf3520d388b6065375c488942648c9 | [
"BSD-3-Clause"
] | 15 | 2016-11-22T22:11:41.000Z | 2020-10-15T18:58:27.000Z | conftest.py | DanInSpace104/redisworks | e8b7c4be1baf3520d388b6065375c488942648c9 | [
"BSD-3-Clause"
] | 18 | 2016-09-27T09:49:16.000Z | 2021-02-18T06:51:34.000Z | import os
import sys
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), 'tests')))
| 17.666667 | 82 | 0.745283 | 18 | 106 | 4.166667 | 0.555556 | 0.24 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.075472 | 106 | 5 | 83 | 21.2 | 0.765306 | 0 | 0 | 0 | 0 | 0 | 0.04717 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e3e0c5f563dd749ce015c31c18f88f13d019cfbb | 164 | py | Python | wuphf/__init__.py | hvgab/Wuphf | 6a2a64676d9df8617860a88613fde10da299c0db | [
"Unlicense"
] | null | null | null | wuphf/__init__.py | hvgab/Wuphf | 6a2a64676d9df8617860a88613fde10da299c0db | [
"Unlicense"
] | 3 | 2018-06-29T14:16:30.000Z | 2021-06-01T22:32:47.000Z | wuphf/__init__.py | hvgab/Wuphf | 6a2a64676d9df8617860a88613fde10da299c0db | [
"Unlicense"
] | null | null | null |
from .wuphf import Wuphf
from .wuphf import Message
from . import clients
# from .clients.smtp import SMTP_client
# from .clients.sendgrid import Sendgrid_client
| 20.5 | 47 | 0.79878 | 23 | 164 | 5.608696 | 0.347826 | 0.139535 | 0.232558 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.146341 | 164 | 7 | 48 | 23.428571 | 0.921429 | 0.506098 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5442a3434b7f1b1d95d623bd46a402c6e1bb03a0 | 123 | py | Python | settings.py | Aixile/chainer-gan-experiments | 4371e8369d2805e8ace6d7aacc397aa6e62680a6 | [
"MIT"
] | 70 | 2017-06-24T10:55:57.000Z | 2021-11-23T22:52:37.000Z | settings.py | Aixile/chainer-gan-experiments | 4371e8369d2805e8ace6d7aacc397aa6e62680a6 | [
"MIT"
] | 1 | 2017-08-21T06:19:31.000Z | 2017-08-21T07:54:28.000Z | settings.py | Aixile/chainer-gan-experiments | 4371e8369d2805e8ace6d7aacc397aa6e62680a6 | [
"MIT"
] | 16 | 2017-08-22T07:00:16.000Z | 2018-11-18T16:15:21.000Z | CELEBA_PATH = "/home/aixile/Workspace/dataset/celeba/"
GAME_FACE_PATH = "/home/aixile/Workspace/dataset/game_face_170701/"
| 41 | 67 | 0.804878 | 17 | 123 | 5.529412 | 0.529412 | 0.170213 | 0.297872 | 0.489362 | 0.638298 | 0 | 0 | 0 | 0 | 0 | 0 | 0.051282 | 0.04878 | 123 | 2 | 68 | 61.5 | 0.752137 | 0 | 0 | 0 | 0 | 0 | 0.699187 | 0.699187 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5443ca1c99cded5cb93768dd78ccc614bb7b353e | 9,258 | py | Python | tests/datastore_sqlite/test_database.py | newrelic/newrelic-python-agen | 4f292ec1219c0daffc5721a7b3a245b97d0f83ba | [
"Apache-2.0"
] | 92 | 2020-06-12T17:53:23.000Z | 2022-03-01T11:13:21.000Z | tests/datastore_sqlite/test_database.py | newrelic/newrelic-python-agen | 4f292ec1219c0daffc5721a7b3a245b97d0f83ba | [
"Apache-2.0"
] | 347 | 2020-07-10T00:10:19.000Z | 2022-03-31T17:58:56.000Z | tests/datastore_sqlite/test_database.py | newrelic/newrelic-python-agen | 4f292ec1219c0daffc5721a7b3a245b97d0f83ba | [
"Apache-2.0"
] | 58 | 2020-06-17T13:51:57.000Z | 2022-03-06T14:26:53.000Z | # Copyright 2010 New Relic, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sqlite3 as database
import os
import sys
is_pypy = hasattr(sys, 'pypy_version_info')
from testing_support.fixtures import validate_transaction_metrics
from testing_support.validators.validate_database_trace_inputs import validate_database_trace_inputs
from newrelic.api.background_task import background_task
DATABASE_DIR = os.environ.get('TOX_ENVDIR', '.')
DATABASE_NAME = ':memory:'
_test_execute_via_cursor_scoped_metrics = [
('Function/_sqlite3:connect', 1),
('Datastore/statement/SQLite/datastore_sqlite/select', 1),
('Datastore/statement/SQLite/datastore_sqlite/insert', 2),
('Datastore/statement/SQLite/datastore_sqlite/update', 1),
('Datastore/statement/SQLite/datastore_sqlite/delete', 1),
('Datastore/operation/SQLite/drop', 1),
('Datastore/operation/SQLite/create', 1),
('Datastore/operation/SQLite/commit', 3),
('Datastore/operation/SQLite/rollback', 1)]
_test_execute_via_cursor_rollup_metrics = [
('Datastore/all', 12),
('Datastore/allOther', 12),
('Datastore/SQLite/all', 12),
('Datastore/SQLite/allOther', 12),
('Datastore/operation/SQLite/select', 1),
('Datastore/statement/SQLite/datastore_sqlite/select', 1),
('Datastore/operation/SQLite/insert', 2),
('Datastore/statement/SQLite/datastore_sqlite/insert', 2),
('Datastore/operation/SQLite/update', 1),
('Datastore/statement/SQLite/datastore_sqlite/update', 1),
('Datastore/operation/SQLite/delete', 1),
('Datastore/statement/SQLite/datastore_sqlite/delete', 1),
('Datastore/operation/SQLite/drop', 1),
('Datastore/operation/SQLite/create', 1),
('Datastore/operation/SQLite/commit', 3),
('Datastore/operation/SQLite/rollback', 1)]
if is_pypy:
_test_execute_via_cursor_scoped_metrics.extend([
('Function/_sqlite3:Connection.__exit__', 1)])
_test_execute_via_cursor_rollup_metrics.extend([
('Function/_sqlite3:Connection.__exit__', 1)])
else:
_test_execute_via_cursor_scoped_metrics.extend([
('Function/sqlite3:Connection.__exit__', 1)])
_test_execute_via_cursor_rollup_metrics.extend([
('Function/sqlite3:Connection.__exit__', 1)])
@validate_transaction_metrics('test_database:test_execute_via_cursor',
scoped_metrics=_test_execute_via_cursor_scoped_metrics,
rollup_metrics=_test_execute_via_cursor_rollup_metrics,
background_task=True)
@validate_database_trace_inputs(sql_parameters_type=tuple)
@background_task()
def test_execute_via_cursor():
with database.connect(DATABASE_NAME) as connection:
cursor = connection.cursor()
cursor.execute("""drop table if exists datastore_sqlite""")
cursor.execute("""create table datastore_sqlite (a, b, c)""")
cursor.executemany("""insert into datastore_sqlite values (?, ?, ?)""",
[(1, 1.0, '1.0'), (2, 2.2, '2.2'), (3, 3.3, '3.3')])
test_data = [(4, 4.0, '4.0'), (5, 5.5, '5.5'), (6, 6.6, '6.6')]
cursor.executemany("""insert into datastore_sqlite values (?, ?, ?)""",
((value) for value in test_data))
cursor.execute("""select * from datastore_sqlite""")
assert len(list(cursor)) == 6
cursor.execute("""update datastore_sqlite set a=?, b=?, """
"""c=? where a=?""", (4, 4.0, '4.0', 1))
script = """delete from datastore_sqlite where a = 2;"""
cursor.executescript(script)
connection.commit()
connection.rollback()
connection.commit()
_test_execute_via_connection_scoped_metrics = [
('Function/_sqlite3:connect', 1),
('Datastore/statement/SQLite/datastore_sqlite/select', 1),
('Datastore/statement/SQLite/datastore_sqlite/insert', 2),
('Datastore/statement/SQLite/datastore_sqlite/update', 1),
('Datastore/statement/SQLite/datastore_sqlite/delete', 1),
('Datastore/operation/SQLite/drop', 1),
('Datastore/operation/SQLite/create', 1),
('Datastore/operation/SQLite/commit', 3),
('Datastore/operation/SQLite/rollback', 1)]
_test_execute_via_connection_rollup_metrics = [
('Datastore/all', 12),
('Datastore/allOther', 12),
('Datastore/SQLite/all', 12),
('Datastore/SQLite/allOther', 12),
('Datastore/operation/SQLite/select', 1),
('Datastore/statement/SQLite/datastore_sqlite/select', 1),
('Datastore/operation/SQLite/insert', 2),
('Datastore/statement/SQLite/datastore_sqlite/insert', 2),
('Datastore/operation/SQLite/update', 1),
('Datastore/statement/SQLite/datastore_sqlite/update', 1),
('Datastore/operation/SQLite/delete', 1),
('Datastore/statement/SQLite/datastore_sqlite/delete', 1),
('Datastore/operation/SQLite/drop', 1),
('Datastore/operation/SQLite/create', 1),
('Datastore/operation/SQLite/commit', 3),
('Datastore/operation/SQLite/rollback', 1)]
if is_pypy:
_test_execute_via_connection_scoped_metrics.extend([
('Function/_sqlite3:Connection.__enter__', 1),
('Function/_sqlite3:Connection.__exit__', 1)])
_test_execute_via_connection_rollup_metrics.extend([
('Function/_sqlite3:Connection.__enter__', 1),
('Function/_sqlite3:Connection.__exit__', 1)])
else:
_test_execute_via_connection_scoped_metrics.extend([
('Function/sqlite3:Connection.__enter__', 1),
('Function/sqlite3:Connection.__exit__', 1)])
_test_execute_via_connection_rollup_metrics.extend([
('Function/sqlite3:Connection.__enter__', 1),
('Function/sqlite3:Connection.__exit__', 1)])
@validate_transaction_metrics('test_database:test_execute_via_connection',
scoped_metrics=_test_execute_via_connection_scoped_metrics,
rollup_metrics=_test_execute_via_connection_rollup_metrics,
background_task=True)
@validate_database_trace_inputs(sql_parameters_type=tuple)
@background_task()
def test_execute_via_connection():
with database.connect(DATABASE_NAME) as connection:
connection.execute("""drop table if exists datastore_sqlite""")
connection.execute("""create table datastore_sqlite (a, b, c)""")
connection.executemany("""insert into datastore_sqlite values """
"""(?, ?, ?)""", [(1, 1.0, '1.0'), (2, 2.2, '2.2'),
(3, 3.3, '3.3')])
test_data = [(4, 4.0, '4.0'), (5, 5.5, '5.5'), (6, 6.6, '6.6')]
connection.executemany("""insert into datastore_sqlite values (?, ?, ?)""",
((value) for value in test_data))
cursor = connection.execute("""select * from datastore_sqlite""")
assert len(list(cursor)) == 6
connection.execute("""update datastore_sqlite set a=?, b=?, """
"""c=? where a=?""", (4, 4.0, '4.0', 1))
script = """delete from datastore_sqlite where a = 2;"""
connection.executescript(script)
connection.commit()
connection.rollback()
connection.commit()
_test_rollback_on_exception_scoped_metrics = [
('Function/_sqlite3:connect', 1),
('Datastore/operation/SQLite/rollback', 1)]
_test_rollback_on_exception_rollup_metrics = [
('Datastore/all', 2),
('Datastore/allOther', 2),
('Datastore/SQLite/all', 2),
('Datastore/SQLite/allOther', 2),
('Datastore/operation/SQLite/rollback', 1)]
if is_pypy:
_test_rollback_on_exception_scoped_metrics.extend([
('Function/_sqlite3:Connection.__enter__', 1),
('Function/_sqlite3:Connection.__exit__', 1)])
_test_rollback_on_exception_rollup_metrics.extend([
('Function/_sqlite3:Connection.__enter__', 1),
('Function/_sqlite3:Connection.__exit__', 1)])
else:
_test_rollback_on_exception_scoped_metrics.extend([
('Function/sqlite3:Connection.__enter__', 1),
('Function/sqlite3:Connection.__exit__', 1)])
_test_rollback_on_exception_rollup_metrics.extend([
('Function/sqlite3:Connection.__enter__', 1),
('Function/sqlite3:Connection.__exit__', 1)])
@validate_transaction_metrics('test_database:test_rollback_on_exception',
scoped_metrics=_test_rollback_on_exception_scoped_metrics,
rollup_metrics=_test_rollback_on_exception_rollup_metrics,
background_task=True)
@validate_database_trace_inputs(sql_parameters_type=tuple)
@background_task()
def test_rollback_on_exception():
try:
with database.connect(DATABASE_NAME) as connection:
raise RuntimeError('error')
except RuntimeError:
pass
| 41.702703 | 100 | 0.675956 | 1,062 | 9,258 | 5.56403 | 0.145951 | 0.091386 | 0.105602 | 0.071924 | 0.829413 | 0.827213 | 0.780674 | 0.710103 | 0.697918 | 0.665595 | 0 | 0.02556 | 0.184381 | 9,258 | 221 | 101 | 41.891403 | 0.756986 | 0.059732 | 0 | 0.703488 | 0 | 0 | 0.398819 | 0.308868 | 0 | 0 | 0 | 0 | 0.011628 | 1 | 0.017442 | false | 0.005814 | 0.034884 | 0 | 0.052326 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
546c6d96e919eab6b47174d8a0deaedb95a94018 | 139 | py | Python | test3.py | Mespyr/Random-String-Generator | ae8c5286bee08be20dac4f6c9fc8e714422408bd | [
"MIT"
] | null | null | null | test3.py | Mespyr/Random-String-Generator | ae8c5286bee08be20dac4f6c9fc8e714422408bd | [
"MIT"
] | null | null | null | test3.py | Mespyr/Random-String-Generator | ae8c5286bee08be20dac4f6c9fc8e714422408bd | [
"MIT"
] | null | null | null | import random_string_gen
for i in range(10):
print(random_string_gen.gen.generate_string(5, random_string_gen.util.CONSONANT_CHANCE))
| 27.8 | 92 | 0.820144 | 23 | 139 | 4.608696 | 0.652174 | 0.339623 | 0.424528 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02381 | 0.093525 | 139 | 4 | 93 | 34.75 | 0.81746 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0.333333 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
547056e1b937ebd2c0f2f0b98b637ed3f6b182df | 37 | py | Python | vendor-local/lib/python/noseprogressive/__init__.py | Koenkk/popcorn_maker | 0978b9f98dacd4e8eb753404b24eb584f410aa11 | [
"BSD-3-Clause"
] | 15 | 2015-03-23T02:55:20.000Z | 2021-01-12T12:42:30.000Z | vendor-local/lib/python/noseprogressive/__init__.py | Koenkk/popcorn_maker | 0978b9f98dacd4e8eb753404b24eb584f410aa11 | [
"BSD-3-Clause"
] | null | null | null | vendor-local/lib/python/noseprogressive/__init__.py | Koenkk/popcorn_maker | 0978b9f98dacd4e8eb753404b24eb584f410aa11 | [
"BSD-3-Clause"
] | 16 | 2015-02-18T21:43:31.000Z | 2021-11-09T22:50:03.000Z | from plugin import ProgressivePlugin
| 18.5 | 36 | 0.891892 | 4 | 37 | 8.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108108 | 37 | 1 | 37 | 37 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
547c98e3a68f425a8f4f3e654b4f88e7c83e6170 | 22,218 | py | Python | src/python/variamos/libs/mx_graph/tests/mx_graph_tests.py | fpiedrah/SPL-Solver | 77832ea12b09cb5f423c31335e524923e7f078be | [
"MIT"
] | 2 | 2020-11-24T14:50:45.000Z | 2021-01-05T14:40:52.000Z | src/python/variamos/libs/mx_graph/tests/mx_graph_tests.py | fpiedrah/SPL-Solver | 77832ea12b09cb5f423c31335e524923e7f078be | [
"MIT"
] | 10 | 2021-02-09T11:41:24.000Z | 2021-02-09T11:41:25.000Z | src/python/variamos/libs/mx_graph/tests/mx_graph_tests.py | fpiedrah/SPL-Solver | 77832ea12b09cb5f423c31335e524923e7f078be | [
"MIT"
] | null | null | null | import io
from variamos.libs.mx_graph import MXGraph
def test_ministore_model():
mx_graph_str = """
<mxGraphModel>
<root>
<mxCell id="0"/>
<mxCell id="feature" parent="0"/>
<root label="MiniStores" type="root" id="27">
<mxCell style="strokeWidth=3" parent="feature" vertex="1">
<mxGeometry x="336" y="47.5" width="100" height="35" as="geometry"/>
</mxCell>
</root>
<concrete label="Index" type="concrete" selected="true" id="28">
<mxCell style="" parent="feature" vertex="1">
<mxGeometry x="192" y="154" width="100" height="35" as="geometry"/>
</mxCell>
</concrete>
<concrete label="Product" type="concrete" selected="true" id="29">
<mxCell style="" parent="feature" vertex="1">
<mxGeometry x="337.5" y="152" width="100" height="35" as="geometry"/>
</mxCell>
</concrete>
<concrete label="ProductStar" type="concrete" selected="true" id="30">
<mxCell style="" parent="feature" vertex="1">
<mxGeometry x="490" y="150" width="100" height="35" as="geometry"/>
</mxCell>
</concrete>
<rel_concrete_root type="relation" relType="mandatory" id="31">
<mxCell parent="feature" source="28" target="27" edge="1">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
</rel_concrete_root>
<rel_concrete_root type="relation" relType="mandatory" id="32">
<mxCell parent="feature" source="29" target="27" edge="1">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
</rel_concrete_root>
<rel_concrete_root type="relation" relType="mandatory" id="33">
<mxCell parent="feature" source="30" target="27" edge="1">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
</rel_concrete_root>
<mxCell id="component" parent="0" visible="0"/>
<component label="Index" type="component" id="8">
<mxCell style="shape=component" parent="component" vertex="1">
<mxGeometry x="240" y="50" width="100" height="40" as="geometry"/>
</mxCell>
</component>
<component label="Product" type="component" id="9">
<mxCell style="shape=component" parent="component" vertex="1">
<mxGeometry x="390" y="50" width="100" height="40" as="geometry"/>
</mxCell>
</component>
<component label="ProductStar" type="component" id="10">
<mxCell style="shape=component" parent="component" vertex="1">
<mxGeometry x="540" y="50" width="100" height="40" as="geometry"/>
</mxCell>
</component>
<file label="Product-Model" type="file" filename="Product.java" destination="src/Product.java" id="11">
<mxCell style="shape=file" parent="component" vertex="1">
<mxGeometry x="390" y="140" width="100" height="40" as="geometry"/>
</mxCell>
</file>
<file label="Index-Control" type="file" filename="Index.java" destination="src/Index.java" id="12">
<mxCell style="shape=file" parent="component" vertex="1">
<mxGeometry x="100" y="80" width="100" height="40" as="geometry"/>
</mxCell>
</file>
<file label="Index-Custom" type="file" filename="customization.json" destination="" id="13">
<mxCell style="shape=file" parent="component" vertex="1">
<mxGeometry x="230" y="140" width="100" height="40" as="geometry"/>
</mxCell>
</file>
<file label="ProductStar-AlterIndex" type="file" filename="alterIndex.frag" destination="" id="14">
<mxCell style="shape=file" parent="component" vertex="1">
<mxGeometry x="530" y="140" width="100" height="40" as="geometry"/>
</mxCell>
</file>
<file label="ProductStar-AlterProduct" type="file" filename="alterProduct.frag" destination="" id="15">
<mxCell style="shape=file" parent="component" vertex="1">
<mxGeometry x="672" y="139.5" width="100" height="40" as="geometry"/>
</mxCell>
</file>
<rel_file_component type="relation" id="16">
<mxCell parent="component" source="12" target="8" edge="1">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
</rel_file_component>
<rel_file_component type="relation" id="17">
<mxCell parent="component" source="13" target="8" edge="1">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
</rel_file_component>
<rel_file_component type="relation" id="18">
<mxCell parent="component" source="11" target="9" edge="1">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
</rel_file_component>
<rel_file_component type="relation" id="19">
<mxCell parent="component" source="14" target="10" edge="1">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
</rel_file_component>
<rel_file_component type="relation" id="20">
<mxCell parent="component" source="15" target="10" edge="1">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
</rel_file_component>
<mxCell id="binding_feature_component" parent="0" visible="0"/>
<component label="Index" type="component" id="clon8">
<mxCell style="shape=component;fillColor=#DCDCDC;" parent="binding_feature_component" vertex="1">
<mxGeometry x="240" y="50" width="100" height="40" as="geometry"/>
</mxCell>
</component>
<component label="Product" type="component" id="clon9">
<mxCell style="shape=component;fillColor=#DCDCDC;" parent="binding_feature_component" vertex="1">
<mxGeometry x="410" y="50" width="100" height="40" as="geometry"/>
</mxCell>
</component>
<component label="ProductStar" type="component" id="clon10">
<mxCell style="shape=component;fillColor=#DCDCDC;" parent="binding_feature_component" vertex="1">
<mxGeometry x="570" y="50" width="100" height="40" as="geometry"/>
</mxCell>
</component>
<concrete label="Index" type="concrete" selected="true" id="clon28">
<mxCell style="fillColor=#DCDCDC;" parent="binding_feature_component" vertex="1">
<mxGeometry x="242" y="168" width="100" height="35" as="geometry"/>
</mxCell>
</concrete>
<concrete label="Product" type="concrete" selected="true" id="clon29">
<mxCell style="fillColor=#DCDCDC;" parent="binding_feature_component" vertex="1">
<mxGeometry x="412" y="168" width="100" height="35" as="geometry"/>
</mxCell>
</concrete>
<concrete label="ProductStar" type="concrete" selected="true" id="clon30">
<mxCell style="fillColor=#DCDCDC;" parent="binding_feature_component" vertex="1">
<mxGeometry x="572" y="170" width="100" height="35" as="geometry"/>
</mxCell>
</concrete>
<rel_concrete_component type="relation" id="34">
<mxCell parent="binding_feature_component" source="clon28" target="clon8" edge="1">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
</rel_concrete_component>
<rel_concrete_component type="relation" id="35">
<mxCell parent="binding_feature_component" source="clon29" target="clon9" edge="1">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
</rel_concrete_component>
<rel_concrete_component type="relation" id="36">
<mxCell parent="binding_feature_component" source="clon30" target="clon10" edge="1">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
</rel_concrete_component>
<mxCell id="istar" parent="0" visible="0"/>
<mxCell id="classdiag" parent="0" visible="0"/>
<mxCell id="adap_architecture" parent="0" visible="0"/>
<mxCell id="adaptation_state" parent="0" visible="0"/>
<mxCell id="adaptation_hardware" parent="0" visible="0"/>
<mxCell id="adaptation_binding_state_hardware" parent="0" visible="0"/>
<mxCell id="control" parent="0" visible="0"/>
</root>
</mxGraphModel>
"""
mx_graph = MXGraph.parse_string(mx_graph_str)
expected_dict = {
"features": [
{"id": 28, "name": "Index"},
{"id": 29, "name": "Product"},
{"id": 30, "name": "ProductStar"},
{
"constraints": [
{"destination": 27, "constraint_type": "root"},
{"constraint_type": "mandatory", "destination": 28},
{"constraint_type": "mandatory", "destination": 29},
{"constraint_type": "mandatory", "destination": 30},
],
"id": 27,
"name": "MiniStores",
},
]
}
assert mx_graph == expected_dict
def test_cellphone_model():
mx_graph_str = """
<mxGraphModel>
<root>
<mxCell id="0"/>
<mxCell id="feature" parent="0"/>
<root label="Mobile Phone" type="root" id="30">
<mxCell style="strokeWidth=3" vertex="1" parent="feature">
<mxGeometry x="190" y="50" width="100" height="35" as="geometry"/>
</mxCell>
</root>
<abstract label="Calls" type="abstract" id="31">
<mxCell style="strokeWidth=2" vertex="1" parent="feature">
<mxGeometry x="10" y="110" width="100" height="35" as="geometry"/>
</mxCell>
</abstract>
<abstract label="GPS" type="abstract" id="34">
<mxCell style="strokeWidth=2" vertex="1" parent="feature">
<mxGeometry x="130" y="110" width="100" height="35" as="geometry"/>
</mxCell>
</abstract>
<abstract label="Screen" type="abstract" id="35">
<mxCell style="strokeWidth=2" vertex="1" parent="feature">
<mxGeometry x="250" y="110" width="100" height="35" as="geometry"/>
</mxCell>
</abstract>
<abstract label="Media" type="abstract" id="36">
<mxCell style="strokeWidth=2" vertex="1" parent="feature">
<mxGeometry x="370" y="110" width="100" height="35" as="geometry"/>
</mxCell>
</abstract>
<concrete label="Camera" type="concrete" selected="false" id="37">
<mxCell style="" vertex="1" parent="feature">
<mxGeometry x="380" y="230" width="100" height="35" as="geometry"/>
</mxCell>
</concrete>
<concrete label="MP3" type="concrete" selected="false" id="38">
<mxCell style="" vertex="1" parent="feature">
<mxGeometry x="500" y="230" width="100" height="35" as="geometry"/>
</mxCell>
</concrete>
<bundle label="bundle" type="bundle" bundleType="OR" lowRange="1" highRange="1" id="39">
<mxCell style="shape=ellipse" vertex="1" parent="feature">
<mxGeometry x="400" y="170" width="35" height="35" as="geometry"/>
</mxCell>
</bundle>
<concrete label="High Resolution" type="concrete" selected="false" id="40">
<mxCell style="" vertex="1" parent="feature">
<mxGeometry x="260" y="230" width="100" height="35" as="geometry"/>
</mxCell>
</concrete>
<concrete label="Cololur" type="concrete" selected="false" id="41">
<mxCell style="" vertex="1" parent="feature">
<mxGeometry x="140" y="230" width="100" height="35" as="geometry"/>
</mxCell>
</concrete>
<concrete label="Basic" type="concrete" selected="false" id="42">
<mxCell style="" vertex="1" parent="feature">
<mxGeometry x="20" y="230" width="100" height="35" as="geometry"/>
</mxCell>
</concrete>
<bundle label="bundle" type="bundle" bundleType="RANGE" lowRange="1" highRange="1" id="43">
<mxCell style="shape=ellipse" vertex="1" parent="feature">
<mxGeometry x="290" y="170" width="35" height="35" as="geometry"/>
</mxCell>
</bundle>
<rel_abstract_root type="relation" relType="mandatory" id="44">
<mxCell edge="1" parent="feature" source="31" target="30">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
</rel_abstract_root>
<rel_abstract_root type="relation" relType="optional" id="45">
<mxCell edge="1" parent="feature" source="34" target="30">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
</rel_abstract_root>
<rel_abstract_root type="relation" relType="mandatory" id="46">
<mxCell edge="1" parent="feature" source="35" target="30">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
</rel_abstract_root>
<rel_abstract_root type="relation" relType="optional" id="47">
<mxCell edge="1" parent="feature" source="36" target="30">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
</rel_abstract_root>
<rel_concrete_bundle type="relation" id="48">
<mxCell edge="1" parent="feature" source="37" target="39">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
</rel_concrete_bundle>
<rel_concrete_bundle type="relation" id="49">
<mxCell edge="1" parent="feature" source="38" target="39">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
</rel_concrete_bundle>
<rel_bundle_abstract type="relation" id="50">
<mxCell edge="1" parent="feature" source="39" target="36">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
</rel_bundle_abstract>
<rel_bundle_abstract type="relation" id="51">
<mxCell edge="1" parent="feature" source="43" target="35">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
</rel_bundle_abstract>
<rel_concrete_bundle type="relation" id="52">
<mxCell edge="1" parent="feature" source="40" target="43">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
</rel_concrete_bundle>
<rel_concrete_bundle type="relation" id="53">
<mxCell edge="1" parent="feature" source="41" target="43">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
</rel_concrete_bundle>
<rel_concrete_bundle type="relation" id="54">
<mxCell edge="1" parent="feature" source="42" target="43">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
</rel_concrete_bundle>
<rel_concrete_abstract type="relation" relType="excludes" id="55">
<mxCell edge="1" parent="feature" source="42" target="34">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
</rel_concrete_abstract>
<rel_concrete_concrete type="relation" relType="requires" id="56">
<mxCell edge="1" parent="feature" source="37" target="40">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
</rel_concrete_concrete>
<mxCell id="component" parent="0" visible="0"/>
<mxCell id="binding_feature_component" parent="0" visible="0"/>
<concrete label="Camera" type="concrete" selected="false" id="clon37">
<mxCell style="fillColor=#DCDCDC;" vertex="1" parent="binding_feature_component">
<mxGeometry x="380" y="230" width="100" height="35" as="geometry"/>
</mxCell>
</concrete>
<concrete label="MP3" type="concrete" selected="false" id="clon38">
<mxCell style="fillColor=#DCDCDC;" vertex="1" parent="binding_feature_component">
<mxGeometry x="540" y="230" width="100" height="35" as="geometry"/>
</mxCell>
</concrete>
<concrete label="High Resolution" type="concrete" selected="false" id="clon40">
<mxCell style="fillColor=#DCDCDC;" vertex="1" parent="binding_feature_component">
<mxGeometry x="310" y="220" width="100" height="35" as="geometry"/>
</mxCell>
</concrete>
<concrete label="Cololur" type="concrete" selected="false" id="clon41">
<mxCell style="fillColor=#DCDCDC;" vertex="1" parent="binding_feature_component">
<mxGeometry x="160" y="230" width="100" height="35" as="geometry"/>
</mxCell>
</concrete>
<concrete label="Basic" type="concrete" selected="false" id="clon42">
<mxCell style="fillColor=#DCDCDC;" vertex="1" parent="binding_feature_component">
<mxGeometry x="60" y="250" width="100" height="35" as="geometry"/>
</mxCell>
</concrete>
<mxCell id="istar" parent="0" visible="0"/>
<mxCell id="adap_architecture" parent="0" visible="0"/>
<mxCell id="adaptation_state" parent="0" visible="0"/>
<mxCell id="adaptation_hardware" parent="0" visible="0"/>
<mxCell id="adaptation_binding_state_hardware" parent="0" visible="0"/>
<mxCell id="control" parent="0" visible="0"/>
</root>
</mxGraphModel>
"""
mx_graph = MXGraph.parse_string(mx_graph_str)
expected_dict = {
"features": [
{"id": 31, "name": "Calls"},
{
"constraints": [{"constraint_type": "excludes", "destination": 42}],
"id": 34,
"name": "GPS",
},
{
"constraints": [
{
"constraint_type": "group_cardinality",
"destination": [40, 41, 42],
"high_threshold": "1",
"low_threshold": "1",
}
],
"id": 35,
"name": "Screen",
},
{
"constraints": [{"constraint_type": "or", "destination": [37, 38]}],
"id": 36,
"name": "Media",
},
{
"constraints": [{"constraint_type": "requires", "destination": 40}],
"id": 37,
"name": "Camera",
},
{"id": 38, "name": "MP3"},
{"id": 40, "name": "High Resolution"},
{"id": 41, "name": "Cololur"},
{"id": 42, "name": "Basic"},
{
"constraints": [
{"constraint_type": "root", "destination": 30},
{"constraint_type": "mandatory", "destination": 31},
{"constraint_type": "optional", "destination": 34},
{"constraint_type": "mandatory", "destination": 35},
{"constraint_type": "optional", "destination": 36},
],
"id": 30,
"name": "Mobile Phone",
},
]
}
assert mx_graph == expected_dict
def test_fake_optional():
mx_graph_str = """
<mxGraphModel>
<root>
<mxCell id="0"/>
<mxCell id="feature" parent="0"/>
<root label="root" type="root" id="1">
<mxCell style="strokeWidth=3" vertex="1" parent="feature">
<mxGeometry x="210" y="30" width="100" height="35" as="geometry"/>
</mxCell>
</root>
<concrete label="concrete" type="concrete" selected="false" id="2">
<mxCell style="" vertex="1" parent="feature">
<mxGeometry x="60" y="190" width="100" height="35" as="geometry"/>
</mxCell>
</concrete>
<concrete label="concrete" type="concrete" selected="false" id="3">
<mxCell style="" vertex="1" parent="feature">
<mxGeometry x="360" y="150" width="100" height="35" as="geometry"/>
</mxCell>
</concrete>
<rel_concrete_root type="relation" relType="mandatory" id="0.3">
<mxCell edge="1" parent="feature" source="2" target="1">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
</rel_concrete_root>
<rel_concrete_root type="relation" relType="optional" id="0.4">
<mxCell edge="1" parent="feature" source="3" target="1">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
</rel_concrete_root>
<rel_concrete_concrete type="relation" relType="requires" id="0.5">
<mxCell edge="1" parent="feature" source="2" target="3">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
</rel_concrete_concrete>
</root>
</mxGraphModel>
"""
mx_graph = MXGraph.parse_string(mx_graph_str)
expected_dict = {
"features": [
{
"id": 2,
"name": "concrete",
"constraints": [{"destination": 3, "constraint_type": "requires"}],
},
{"id": 3, "name": "concrete"},
{
"id": 1,
"name": "root",
"constraints": [
{"destination": 1, "constraint_type": "root"},
{"destination": 2, "constraint_type": "mandatory"},
{"destination": 3, "constraint_type": "optional"},
],
},
]
}
assert mx_graph == expected_dict
| 46.578616 | 111 | 0.530966 | 2,287 | 22,218 | 5.068212 | 0.091823 | 0.056078 | 0.089725 | 0.041929 | 0.812009 | 0.80597 | 0.751704 | 0.742731 | 0.64274 | 0.615132 | 0 | 0.053881 | 0.298317 | 22,218 | 476 | 112 | 46.676471 | 0.689609 | 0 | 0 | 0.558696 | 0 | 0.097826 | 0.887794 | 0.097804 | 0 | 0 | 0 | 0 | 0.006522 | 1 | 0.006522 | false | 0 | 0.004348 | 0 | 0.01087 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
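The tests above assert that `MXGraph.parse_string` turns relation elements such as `rel_concrete_concrete` into `requires` constraints between feature ids. As a hedged illustration only (not the project's actual parser), the same attributes can be pulled out of such XML with the standard library:

```python
import xml.etree.ElementTree as ET

# Minimal sketch: extract relType and the source/target feature ids from a
# relation element shaped like the ones in the fixtures above.
snippet = """
<root>
  <rel_concrete_concrete type="relation" relType="requires" id="56">
    <mxCell edge="1" parent="feature" source="37" target="40"/>
  </rel_concrete_concrete>
</root>
"""

root = ET.fromstring(snippet)
relations = [
    (rel.get("relType"), rel.find("mxCell").get("source"), rel.find("mxCell").get("target"))
    for rel in root.iter("rel_concrete_concrete")
]
print(relations)  # [('requires', '37', '40')]
```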
49b50a3d19561a58a34e109ddd5bffbc8a071b4a | 202 | py | Python | sparkplus/dependencies/__init__.py | SWM-SparkPlus/sparkplus | b7900d3ca4fbaef4ddb4fd7e0971370426af2ee2 | [
"MIT"
] | 11 | 2021-11-04T23:58:52.000Z | 2021-11-16T11:58:16.000Z | sparkplus/dependencies/__init__.py | SWM-SparkPlus/spark-plugin | b7900d3ca4fbaef4ddb4fd7e0971370426af2ee2 | [
"MIT"
] | null | null | null | sparkplus/dependencies/__init__.py | SWM-SparkPlus/spark-plugin | b7900d3ca4fbaef4ddb4fd7e0971370426af2ee2 | [
"MIT"
] | 2 | 2021-11-05T00:00:05.000Z | 2021-11-26T06:00:17.000Z | from .spark import *
# from .logging import *
from .tablename import ESido, EPrefix, get_tablename_by_prefix_and_sido
__all__ = ["start_spark", "ESido", "EPrefix", "get_tablename_by_prefix_and_sido"]
| 28.857143 | 81 | 0.777228 | 28 | 202 | 5.071429 | 0.5 | 0.140845 | 0.211268 | 0.338028 | 0.549296 | 0.549296 | 0.549296 | 0.549296 | 0 | 0 | 0 | 0 | 0.113861 | 202 | 6 | 82 | 33.666667 | 0.793296 | 0.108911 | 0 | 0 | 0 | 0 | 0.308989 | 0.179775 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
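The `__all__` list in this `__init__.py` is what controls which names a wildcard import re-exports. A small self-contained sketch, using a synthetic module since `sparkplus` itself is not assumed to be installed:

```python
import sys
import types

# Build a throwaway module with one public and one "private" name.
mod = types.ModuleType("fake_dependencies")
mod.start_spark = lambda: "session"
mod.internal_helper = lambda: "hidden"
mod.__all__ = ["start_spark"]  # only this name is star-exported
sys.modules["fake_dependencies"] = mod

ns = {}
exec("from fake_dependencies import *", ns)
print("start_spark" in ns)      # True
print("internal_helper" in ns)  # False
```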
49bdb5f41d73c2317b9ea0027892492403e81a55 | 149,199 | py | Python | openbb_terminal/portfolio/portfolio_optimization/optimizer_view.py | jmaslek/OpenBBTerminal | 919ca99f80809b2b9fe828dc3dd201c813d12d6d | [
"MIT"
] | null | null | null | openbb_terminal/portfolio/portfolio_optimization/optimizer_view.py | jmaslek/OpenBBTerminal | 919ca99f80809b2b9fe828dc3dd201c813d12d6d | [
"MIT"
] | null | null | null | openbb_terminal/portfolio/portfolio_optimization/optimizer_view.py | jmaslek/OpenBBTerminal | 919ca99f80809b2b9fe828dc3dd201c813d12d6d | [
"MIT"
] | null | null | null | """Optimization View"""
__docformat__ = "numpy"
# pylint: disable=R0913, R0914, C0302, too-many-branches, too-many-statements
import logging
import math
import warnings
from datetime import date
from typing import Dict, List, Optional
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import riskfolio as rp
from dateutil.relativedelta import relativedelta, FR
from matplotlib.gridspec import GridSpec
from matplotlib.lines import Line2D
from scipy.interpolate import interp1d
from openbb_terminal.config_plot import PLOT_DPI
from openbb_terminal.config_terminal import theme
from openbb_terminal.decorators import log_start_end
from openbb_terminal.helper_funcs import plot_autoscale, print_rich_table
from openbb_terminal.portfolio.portfolio_optimization import (
optimizer_model,
yahoo_finance_model,
)
from openbb_terminal.rich_config import console
warnings.filterwarnings("ignore")
logger = logging.getLogger(__name__)
objectives_choices = {
"minrisk": "MinRisk",
"sharpe": "Sharpe",
"utility": "Utility",
"maxret": "MaxRet",
"erc": "ERC",
}
risk_names = {
"mv": "volatility",
"mad": "mean absolute deviation",
"gmd": "gini mean difference",
"msv": "semi standard deviation",
"var": "value at risk (VaR)",
"cvar": "conditional value at risk (CVaR)",
"tg": "tail gini",
"evar": "entropic value at risk (EVaR)",
"rg": "range",
"cvrg": "CVaR range",
"tgrg": "tail gini range",
"wr": "worst realization",
"flpm": "first lower partial moment",
"slpm": "second lower partial moment",
"mdd": "maximum drawdown uncompounded",
"add": "average drawdown uncompounded",
"dar": "drawdown at risk (DaR) uncompounded",
"cdar": "conditional drawdown at risk (CDaR) uncompounded",
"edar": "entropic drawdown at risk (EDaR) uncompounded",
"uci": "ulcer index uncompounded",
"mdd_rel": "maximum drawdown compounded",
"add_rel": "average drawdown compounded",
"dar_rel": "drawdown at risk (DaR) compounded",
"cdar_rel": "conditional drawdown at risk (CDaR) compounded",
"edar_rel": "entropic drawdown at risk (EDaR) compounded",
"uci_rel": "ulcer index compounded",
}
risk_choices = {
"mv": "MV",
"mad": "MAD",
"gmd": "GMD",
"msv": "MSV",
"var": "VaR",
"cvar": "CVaR",
"tg": "TG",
"evar": "EVaR",
"rg": "RG",
"cvrg": "CVRG",
"tgrg": "TGRG",
"wr": "WR",
"flpm": "FLPM",
"slpm": "SLPM",
"mdd": "MDD",
"add": "ADD",
"dar": "DaR",
"cdar": "CDaR",
"edar": "EDaR",
"uci": "UCI",
"mdd_rel": "MDD_Rel",
"add_rel": "ADD_Rel",
"dar_rel": "DaR_Rel",
"cdar_rel": "CDaR_Rel",
"edar_rel": "EDaR_Rel",
"uci_rel": "UCI_Rel",
}
time_factor = {
"D": 252.0,
"W": 52.0,
"M": 12.0,
}
dict_conversion = {"period": "historic_period", "start": "start_period"}
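`time_factor` holds the number of return periods per year; throughout this module, mean returns are scaled by the factor itself and volatilities by its square root. A minimal sketch of that convention with toy numbers:

```python
import math

# Periods per year, mirroring the time_factor map above.
time_factor = {"D": 252.0, "W": 52.0, "M": 12.0}

def annualize(mean_ret: float, std_ret: float, freq: str = "D"):
    """Scale a per-period mean and standard deviation to annual figures."""
    factor = time_factor[freq]
    return mean_ret * factor, std_ret * math.sqrt(factor)

mu, sigma = annualize(0.001, 0.01, "D")
print(round(mu, 3))     # 0.252
print(round(sigma, 3))  # 0.159
```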
@log_start_end(log=logger)
def d_period(period: str, start: str, end: str):
"""
Builds a date range string
Parameters
----------
period : str
Period starting today
start: str
If not using period, start date string (YYYY-MM-DD)
end: str
If not using period, end date string (YYYY-MM-DD). If empty use last
weekday.
"""
extra_choices = {
"ytd": "[Year-to-Date]",
"max": "[All-time]",
}
if start == "":
if period in extra_choices:
p = extra_choices[period]
else:
if period[-1] == "d":
p = "[" + period[:-1] + " Days]"
elif period[-1] == "w":
p = "[" + period[:-1] + " Weeks]"
elif period[-1] == "o":
p = "[" + period[:-2] + " Months]"
elif period[-1] == "y":
p = "[" + period[:-1] + " Years]"
else:
p = "[" + period + "]"
if p[1:3] == "1 ":
p = p.replace("s", "")
else:
if end == "":
end_ = date.today()
if end_.weekday() >= 5:
end_ = end_ + relativedelta(weekday=FR(-1))
end = end_.strftime("%Y-%m-%d")
p = "[From " + start + " to " + end + "]"
return p
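`d_period` maps period strings such as "3y" or "6mo" onto human-readable ranges. A standalone sketch of just the suffix-formatting logic (simplified; it does not handle "ytd", "max", or explicit start/end dates):

```python
# Sketch of the period-suffix formatting in d_period:
# "3y" -> "[3 Years]", "1d" -> "[1 Day]", "6mo" -> "[6 Months]".
def format_period(period: str) -> str:
    suffixes = {"d": "Days", "w": "Weeks", "o": "Months", "y": "Years"}
    unit = suffixes[period[-1]]
    count = period[:-2] if period[-1] == "o" else period[:-1]
    label = f"[{count} {unit}]"
    if count == "1":
        label = label.replace("s", "")  # singularize, e.g. "[1 Day]"
    return label

print(format_period("3y"))   # [3 Years]
print(format_period("1d"))   # [1 Day]
print(format_period("6mo"))  # [6 Months]
```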
@log_start_end(log=logger)
def portfolio_performance(
weights: dict,
stock_returns: pd.DataFrame,
freq: str = "D",
risk_measure: str = "MV",
risk_free_rate: float = 0,
alpha: float = 0.05,
a_sim: float = 100,
beta: float = None,
b_sim: float = None,
):
"""
Prints portfolio performance indicators
Parameters
----------
weights: dict
Portfolio weights
stock_returns: pd.DataFrame
Stock returns dataframe
freq: str, optional
The frequency used to calculate returns. Default value is 'D'. Possible
values are:
- 'D' for daily returns.
- 'W' for weekly returns.
- 'M' for monthly returns.
risk_measure : str, optional
The risk measure used. The default is 'MV'. Possible values are:
- 'MV': Variance.
- 'MAD': Mean Absolute Deviation.
- 'MSV': Semi Standard Deviation.
- 'FLPM': First Lower Partial Moment (Omega Ratio).
- 'SLPM': Second Lower Partial Moment (Sortino Ratio).
- 'VaR': Value at Risk.
- 'CVaR': Conditional Value at Risk.
- 'TG': Tail Gini.
- 'EVaR': Entropic Value at Risk.
- 'WR': Worst Realization (Minimax).
- 'RG': Range of returns.
- 'CVRG': CVaR range of returns.
- 'TGRG': Tail Gini range of returns.
- 'MDD': Maximum Drawdown of uncompounded cumulative returns (Calmar Ratio).
- 'ADD': Average Drawdown of uncompounded cumulative returns.
- 'DaR': Drawdown at Risk of uncompounded cumulative returns.
- 'CDaR': Conditional Drawdown at Risk of uncompounded cumulative returns.
- 'EDaR': Entropic Drawdown at Risk of uncompounded cumulative returns.
- 'UCI': Ulcer Index of uncompounded cumulative returns.
- 'MDD_Rel': Maximum Drawdown of compounded cumulative returns (Calmar Ratio).
- 'ADD_Rel': Average Drawdown of compounded cumulative returns.
- 'DaR_Rel': Drawdown at Risk of compounded cumulative returns.
- 'CDaR_Rel': Conditional Drawdown at Risk of compounded cumulative returns.
- 'EDaR_Rel': Entropic Drawdown at Risk of compounded cumulative returns.
- 'UCI_Rel': Ulcer Index of compounded cumulative returns.
risk_free_rate : float, optional
Risk free rate, must be in the same period of assets returns. The default is 0.
alpha : float, optional
Significance level of VaR, CVaR, EDaR, DaR, CDaR, EDaR, Tail Gini of
losses. The default is 0.05.
a_sim : float, optional
Number of CVaRs used to approximate Tail Gini of losses. The default
is 100.
beta : float, optional
Significance level of CVaR and Tail Gini of gains. If None it
duplicates alpha value. The default is None.
b_sim : float, optional
Number of CVaRs used to approximate Tail Gini of gains. If None
it duplicates a_sim value. The default is None.
"""
freq = freq.upper()
weights = pd.Series(weights).to_frame()
returns = stock_returns @ weights
mu = returns.mean().item() * time_factor[freq]
sigma = returns.std().item() * time_factor[freq] ** 0.5
sharpe = (mu - risk_free_rate) / sigma
factor_1 = str(int(time_factor[freq])) + ") "
factor_2 = "√" + factor_1
print("Annual (by " + factor_1 + f"expected return: {100 * mu:.2f}%")
print("Annual (by " + factor_2 + f"volatility: {100 * sigma:.2f}%")
print(f"Sharpe ratio: {sharpe:.4f}")
if risk_measure != "MV":
risk = rp.Sharpe_Risk(
weights,
cov=stock_returns.cov(),
returns=stock_returns,
rm=risk_measure,
rf=risk_free_rate,
alpha=alpha,
a_sim=a_sim,
beta=beta,
b_sim=b_sim,
)
drawdowns = [
"MDD",
"ADD",
"DaR",
"CDaR",
"EDaR",
"UCI",
"MDD_Rel",
"ADD_Rel",
"DaR_Rel",
"CDaR_Rel",
"EDaR_Rel",
"UCI_Rel",
]
if risk_measure in drawdowns:
sharpe_2 = (mu - risk_free_rate) / risk
print(
risk_names[risk_measure.lower()].capitalize()
+ " : "
+ f"{100 * risk:.2f}%"
)
else:
risk = risk * time_factor[freq] ** 0.5
sharpe_2 = (mu - risk_free_rate) / risk
print(
"Annual (by "
+ factor_2
+ risk_names[risk_measure.lower()]
+ " : "
+ f"{100 * risk:.2f}%"
)
print(
"Return / " + risk_names[risk_measure.lower()] + f" ratio: {sharpe_2:.4f}"
)
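`portfolio_performance` computes the portfolio return series as the matrix product `stock_returns @ weights`, then annualizes mean and volatility before forming the Sharpe ratio. A hedged, self-contained sketch with toy data (tickers and numbers hypothetical):

```python
import pandas as pd

# Toy daily asset returns and portfolio weights.
stock_returns = pd.DataFrame(
    {"AAA": [0.010, -0.020, 0.015], "BBB": [0.005, 0.010, -0.005]}
)
weights = pd.Series({"AAA": 0.6, "BBB": 0.4}).to_frame()

returns = stock_returns @ weights          # per-period portfolio returns
mu = returns.mean().item() * 252           # annualized mean (daily data)
sigma = returns.std().item() * 252 ** 0.5  # annualized volatility
sharpe = mu / sigma                        # risk-free rate taken as 0
print([round(x, 3) for x in returns.iloc[:, 0]])  # [0.008, -0.008, 0.007]
```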
@log_start_end(log=logger)
def display_weights(weights: dict, market_neutral: bool = False):
"""
Prints weights in a nice format
Parameters
----------
weights: dict
weights to display. Keys are stocks. Values are either weights or values
market_neutral : bool
Flag indicating shorting allowed (negative weights)
"""
if not weights:
return
weight_df = pd.DataFrame.from_dict(data=weights, orient="index", columns=["value"])
if not market_neutral:
if math.isclose(weight_df.sum()["value"], 1, rel_tol=0.1):
weight_df["value"] = (weight_df["value"] * 100).apply(
lambda s: f"{s:.2f}"
) + " %"
weight_df["value"] = (
weight_df["value"]
.astype(str)
.apply(lambda s: " " * (8 - len(s)) + s if len(s) < 8 else "" + s)
)
else:
weight_df["value"] = (weight_df["value"] * 100).apply(
lambda s: f"{s:.0f}"
) + " $"
weight_df["value"] = (
weight_df["value"]
.astype(str)
.apply(lambda s: " " * (16 - len(s)) + s if len(s) < 16 else "" + s)
)
print_rich_table(weight_df, headers=["Value"], show_index=True, title="Weights")
else:
tot_value = weight_df["value"].abs().mean()
header = "Value ($)" if tot_value > 1.01 else "Value (%)"
print_rich_table(weight_df, headers=[header], show_index=True, title="Weights")
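The padding lambda used for the "value" column right-aligns each formatted string in a fixed-width field; it is equivalent to `str.rjust`, shown with a quick check:

```python
# The 8-character padding used for the "value" column.
pad = lambda s: " " * (8 - len(s)) + s if len(s) < 8 else "" + s

print(repr(pad("3.14 %")))    # '  3.14 %'
print(repr(pad("100.00 %")))  # '100.00 %'
assert all(pad(s) == s.rjust(8) for s in ["3.14 %", "100.00 %", "x"])
```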
@log_start_end(log=logger)
def display_weights_sa(weights: dict, weights_sa: dict):
"""
Prints weights and sensitivity analysis weights side by side in a nice format
Parameters
----------
weights: dict
weights to display. Keys are stocks. Values are either weights or values
weights_sa: dict
weights of sensitivity analysis to display. Keys are stocks. Values are either weights or values
"""
if not weights or not weights_sa:
return
weight_df = pd.DataFrame.from_dict(
data=weights, orient="index", columns=["value"], dtype=float
)
weight_sa_df = pd.DataFrame.from_dict(
data=weights_sa, orient="index", columns=["value s.a."], dtype=float
)
weight_df = weight_df.join(weight_sa_df, how="inner")
weight_df["value vs value s.a."] = weight_df["value"] - weight_df["value s.a."]
weight_df["value"] = (weight_df["value"] * 100).apply(lambda s: f"{s:.2f}") + " %"
weight_df["value"] = (
weight_df["value"]
.astype(str)
.apply(lambda s: " " * (8 - len(s)) + s if len(s) < 8 else "" + s)
)
weight_df["value s.a."] = (weight_df["value s.a."] * 100).apply(
lambda s: f"{s:.2f}"
) + " %"
weight_df["value s.a."] = (
weight_df["value s.a."]
.astype(str)
.apply(
lambda s: " " * (len("value s.a.") - len(s)) + s
if len(s) < len("value s.a.")
else "" + s
)
)
weight_df["value vs value s.a."] = (weight_df["value vs value s.a."] * 100).apply(
lambda s: f"{s:.2f}"
) + " %"
weight_df["value vs value s.a."] = (
weight_df["value vs value s.a."]
.astype(str)
.apply(
lambda s: " " * (len("value vs value s.a.") - len(s)) + s
if len(s) < len("value vs value s.a.")
else "" + s
)
)
headers = list(weight_df.columns)
headers = [s.title() for s in headers]
print_rich_table(
weight_df, headers=headers, show_index=True, title="Weights Comparison"
)
@log_start_end(log=logger)
def display_categories(weights: dict, categories: dict, column: str, title: str = ""):
"""
Prints categories in a nice format
Parameters
----------
weights: dict
weights to display. Keys are stocks. Values are either weights or values
categories: dict
categories to display. Keys are stocks. Values are either weights or values
column: str
column selected to show table
- ASSET_CLASS
- SECTOR
- INDUSTRY
- COUNTRY
title: str, optional
title of the table
"""
if not weights:
return
weight_df = pd.DataFrame.from_dict(
data=weights, orient="index", columns=["value"], dtype=float
)
categories_df = pd.DataFrame.from_dict(data=categories, dtype=float)
col = list(categories_df.columns).index(column)
categories_df = weight_df.join(categories_df.iloc[:, [col, 4, 5]], how="inner")
categories_df.set_index(column, inplace=True)
categories_df.groupby(level=0).sum()
table_df = pd.pivot_table(
categories_df,
values=["value", "CURRENT_INVESTED_AMOUNT"],
index=["CURRENCY", column],
aggfunc=np.sum,
)
table_df["CURRENT_WEIGHTS"] = (
table_df["CURRENT_INVESTED_AMOUNT"]
.groupby(level=0)
.transform(lambda x: x / sum(x))
)
table_df["value"] = (
table_df["value"].groupby(level=0).transform(lambda x: x / sum(x))
)
table_df = pd.concat(
[
d.append(d.sum().rename((k, "TOTAL " + k)))
for k, d in table_df.groupby(level=0)
]
)
table_df = table_df.iloc[:, [0, 2, 1]]
table_df["value"] = (table_df["value"] * 100).apply(lambda s: f"{s:.2f}") + " %"
table_df["value"] = (
table_df["value"]
.astype(str)
.apply(lambda s: " " * (8 - len(s)) + s if len(s) < 8 else "" + s)
)
table_df["CURRENT_WEIGHTS"] = (table_df["CURRENT_WEIGHTS"] * 100).apply(
lambda s: f"{s:.2f}"
) + " %"
table_df["CURRENT_WEIGHTS"] = (
table_df["CURRENT_WEIGHTS"]
.astype(str)
.apply(
lambda s: " " * (len("CURRENT_WEIGHTS") - len(s)) + s
if len(s) < len("CURRENT_WEIGHTS")
else "" + s
)
)
table_df["CURRENT_INVESTED_AMOUNT"] = (
table_df["CURRENT_INVESTED_AMOUNT"].apply(lambda s: f"{s:,.0f}") + " $"
)
table_df["CURRENT_INVESTED_AMOUNT"] = (
table_df["CURRENT_INVESTED_AMOUNT"]
.astype(str)
.apply(
lambda s: " " * (len("CURRENT_INVESTED_AMOUNT") - len(s)) + s
if len(s) < len("CURRENT_INVESTED_AMOUNT")
else "" + s
)
)
table_df.reset_index(inplace=True)
table_df.set_index("CURRENCY", inplace=True)
headers = list(table_df.columns)
headers = [s.title() for s in headers]
print_rich_table(table_df, headers=headers, show_index=True, title=title)
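The `pd.concat` expression above tacks a "TOTAL <currency>" row holding the group sums onto each currency group. A hedged sketch of the same idea on toy data, using `pd.concat` throughout since `DataFrame.append` is deprecated in recent pandas:

```python
import pandas as pd

df = pd.DataFrame(
    {"value": [0.6, 0.4, 1.0]},
    index=pd.MultiIndex.from_tuples(
        [("USD", "Tech"), ("USD", "Energy"), ("EUR", "Tech")],
        names=["CURRENCY", "SECTOR"],
    ),
)

parts = []
for k, d in df.groupby(level=0):
    # One subtotal row per currency group, labeled "TOTAL <currency>".
    total = d.sum().to_frame().T
    total.index = pd.MultiIndex.from_tuples(
        [(k, "TOTAL " + k)], names=["CURRENCY", "SECTOR"]
    )
    parts.append(pd.concat([d, total]))
out = pd.concat(parts)
print(out.loc[("USD", "TOTAL USD"), "value"])  # 1.0
```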
@log_start_end(log=logger)
def display_categories_sa(
weights: dict, weights_sa: dict, categories: dict, column: str, title: str = ""
):
"""
Prints categories in a nice format
Parameters
----------
weights: dict
weights to display. Keys are stocks. Values are either weights or values
weights_sa: dict
weights of sensitivity analysis to display. Keys are stocks. Values are either weights or values
categories: dict
categories to display. Keys are stocks. Values are either weights or values
column: str
column selected to show table
- ASSET_CLASS
- SECTOR
- INDUSTRY
- COUNTRY
title: str, optional
title of the table
"""
if not weights or not weights_sa:
return
weight_df = pd.DataFrame.from_dict(
data=weights, orient="index", columns=["value"], dtype=float
)
weight_sa_df = pd.DataFrame.from_dict(
data=weights_sa, orient="index", columns=["value s.a."], dtype=float
)
categories_df = pd.DataFrame.from_dict(data=categories, dtype=float)
col = list(categories_df.columns).index(column)
categories_df = weight_df.join(categories_df.iloc[:, [col, 4, 5]], how="inner")
categories_df = categories_df.join(weight_sa_df, how="inner")
categories_df.set_index(column, inplace=True)
categories_df.groupby(level=0).sum()
table_df = pd.pivot_table(
categories_df,
values=["value", "value s.a.", "CURRENT_INVESTED_AMOUNT"],
index=["CURRENCY", column],
aggfunc=np.sum,
)
table_df["CURRENT_WEIGHTS"] = (
table_df["CURRENT_INVESTED_AMOUNT"]
.groupby(level=0)
.transform(lambda x: x / sum(x))
)
table_df["value"] = (
table_df["value"].groupby(level=0).transform(lambda x: x / sum(x))
)
table_df["value s.a."] = (
table_df["value s.a."].groupby(level=0).transform(lambda x: x / sum(x))
)
table_df = pd.concat(
[
d.append(d.sum().rename((k, "TOTAL " + k)))
for k, d in table_df.groupby(level=0)
]
)
table_df["value vs value s.a."] = table_df["value"] - table_df["value s.a."]
table_df = table_df.iloc[:, [0, 3, 1, 2, 4]]
table_df["value"] = (table_df["value"] * 100).apply(lambda s: f"{s:.2f}") + " %"
table_df["value"] = (
table_df["value"]
.astype(str)
.apply(lambda s: " " * (8 - len(s)) + s if len(s) < 8 else "" + s)
)
table_df["value s.a."] = (table_df["value s.a."] * 100).apply(
lambda s: f"{s:.2f}"
) + " %"
table_df["value s.a."] = (
table_df["value s.a."]
.astype(str)
.apply(
lambda s: " " * (len("value s.a.") - len(s)) + s
if len(s) < len("value s.a.")
else "" + s
)
)
table_df["value vs value s.a."] = (table_df["value vs value s.a."] * 100).apply(
lambda s: f"{s:.2f}"
) + " %"
table_df["value vs value s.a."] = (
table_df["value vs value s.a."]
.astype(str)
.apply(
lambda s: " " * (len("value vs value s.a.") - len(s)) + s
if len(s) < len("value vs value s.a.")
else "" + s
)
)
table_df["CURRENT_WEIGHTS"] = (table_df["CURRENT_WEIGHTS"] * 100).apply(
lambda s: f"{s:.2f}"
) + " %"
table_df["CURRENT_WEIGHTS"] = (
table_df["CURRENT_WEIGHTS"]
.astype(str)
.apply(
lambda s: " " * (len("CURRENT_WEIGHTS") - len(s)) + s
if len(s) < len("CURRENT_WEIGHTS")
else "" + s
)
)
table_df["CURRENT_INVESTED_AMOUNT"] = (
table_df["CURRENT_INVESTED_AMOUNT"].apply(lambda s: f"{s:,.0f}") + " $"
)
table_df["CURRENT_INVESTED_AMOUNT"] = (
table_df["CURRENT_INVESTED_AMOUNT"]
.astype(str)
.apply(
lambda s: " " * (len("CURRENT_INVESTED_AMOUNT") - len(s)) + s
if len(s) < len("CURRENT_INVESTED_AMOUNT")
else "" + s
)
)
table_df.reset_index(inplace=True)
table_df.set_index("CURRENCY", inplace=True)
headers = list(table_df.columns)
headers = [s.title() for s in headers]
print_rich_table(table_df, headers=headers, show_index=True, title=title)
@log_start_end(log=logger)
def display_equal_weight(
stocks: List[str],
period: str = "3y",
start: str = "",
end: str = "",
log_returns: bool = False,
freq: str = "D",
maxnan: float = 0.05,
threshold: float = 0,
method: str = "time",
risk_measure: str = "mv",
risk_free_rate: float = 0,
alpha: float = 0.05,
value: float = 1,
table: bool = False,
) -> Dict:
"""
Equally weighted portfolio, where weight = 1/# of stocks
Parameters
----------
stocks : List[str]
List of portfolio tickers
period : str, optional
Period to look at returns from
start: str, optional
If not using period, start date string (YYYY-MM-DD)
end: str, optional
If not using period, end date string (YYYY-MM-DD). If empty use last
weekday.
log_returns: bool, optional
If True calculate log returns, else arithmetic returns. Default value
is False.
freq: str, optional
The frequency used to calculate returns. Default value is 'D'. Possible
values are:
- 'D' for daily returns.
- 'W' for weekly returns.
- 'M' for monthly returns.
maxnan: float, optional
Max percentage of nan values accepted per asset to be included in
returns.
threshold: float, optional
Value used to replace outliers that are higher than the threshold.
method: str
Method used to fill nan values. Default value is 'time'. For more information see
`interpolate <https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.interpolate.html>`_.
risk_measure: str, optional
The risk measure used to optimize the portfolio.
The default is 'MV'. Possible values are:
- 'MV': Standard Deviation.
- 'MAD': Mean Absolute Deviation.
- 'MSV': Semi Standard Deviation.
- 'FLPM': First Lower Partial Moment (Omega Ratio).
- 'SLPM': Second Lower Partial Moment (Sortino Ratio).
- 'CVaR': Conditional Value at Risk.
- 'EVaR': Entropic Value at Risk.
- 'WR': Worst Realization.
- 'ADD': Average Drawdown of uncompounded cumulative returns.
- 'UCI': Ulcer Index of uncompounded cumulative returns.
- 'CDaR': Conditional Drawdown at Risk of uncompounded cumulative returns.
- 'EDaR': Entropic Drawdown at Risk of uncompounded cumulative returns.
- 'MDD': Maximum Drawdown of uncompounded cumulative returns.
risk_free_rate: float, optional
Risk free rate, must be in the same period of assets returns. Used for
'FLPM' and 'SLPM' and Sharpe objective function. The default is 0.
alpha: float, optional
Significance level of CVaR, EVaR, CDaR and EDaR.
value : float, optional
Amount to allocate to portfolio, by default 1.0
table: bool, optional
True if plot table weights, by default False
"""
p = d_period(period, start, end)
s_title = f"{p} Equally Weighted Portfolio\n"
weights, stock_returns = optimizer_model.get_equal_weights(
stocks=stocks,
period=period,
start=start,
end=end,
log_returns=log_returns,
freq=freq,
maxnan=maxnan,
threshold=threshold,
method=method,
value=value,
)
if table:
console.print("\n", s_title)
display_weights(weights)
portfolio_performance(
weights=weights,
stock_returns=stock_returns,
risk_measure=risk_choices[risk_measure],
risk_free_rate=risk_free_rate,
alpha=alpha,
# a_sim=a_sim,
# beta=beta,
# b_sim=beta_sim,
freq=freq,
)
console.print("")
return weights
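As the docstring says, the equal-weight allocation is simply weight = 1/# of stocks, scaled by the allocated value. A minimal sketch (ticker names hypothetical):

```python
def equal_weights(tickers, value=1.0):
    """Split `value` evenly across the given tickers."""
    return {ticker: value / len(tickers) for ticker in tickers}

print(equal_weights(["AAA", "BBB", "CCC", "DDD"]))
# {'AAA': 0.25, 'BBB': 0.25, 'CCC': 0.25, 'DDD': 0.25}
```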
@log_start_end(log=logger)
def display_property_weighting(
stocks: List[str],
period: str = "3y",
start: str = "",
end: str = "",
log_returns: bool = False,
freq: str = "D",
maxnan: float = 0.05,
threshold: float = 0,
method: str = "time",
s_property: str = "marketCap",
risk_measure: str = "mv",
risk_free_rate: float = 0,
alpha: float = 0.05,
value: float = 1,
table: bool = False,
) -> Dict:
"""
Builds a portfolio weighted by selected property
Parameters
----------
stocks : List[str]
List of portfolio tickers
period : str, optional
Period to look at returns from
start: str, optional
If not using period, start date string (YYYY-MM-DD)
end: str, optional
If not using period, end date string (YYYY-MM-DD). If empty use last
weekday.
log_returns: bool, optional
If True calculate log returns, else arithmetic returns. Default value
is False
freq: str, optional
The frequency used to calculate returns. Default value is 'D'. Possible
values are:
- 'D' for daily returns.
- 'W' for weekly returns.
- 'M' for monthly returns.
maxnan: float, optional
Max percentage of nan values accepted per asset to be included in
returns.
threshold: float, optional
Value used to replace outliers that are higher than the threshold.
method: str
Method used to fill nan values. Default value is 'time'. For more information see
`interpolate <https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.interpolate.html>`_.
s_property : str
Property to get weighted portfolio of
risk_measure: str, optional
The risk measure used to compute indicators.
The default is 'MV'. Possible values are:
- 'MV': Standard Deviation.
- 'MAD': Mean Absolute Deviation.
- 'MSV': Semi Standard Deviation.
- 'FLPM': First Lower Partial Moment (Omega Ratio).
- 'SLPM': Second Lower Partial Moment (Sortino Ratio).
- 'CVaR': Conditional Value at Risk.
- 'EVaR': Entropic Value at Risk.
- 'WR': Worst Realization.
- 'ADD': Average Drawdown of uncompounded cumulative returns.
- 'UCI': Ulcer Index of uncompounded cumulative returns.
- 'CDaR': Conditional Drawdown at Risk of uncompounded cumulative returns.
- 'EDaR': Entropic Drawdown at Risk of uncompounded cumulative returns.
- 'MDD': Maximum Drawdown of uncompounded cumulative returns.
risk_free_rate: float, optional
Risk free rate, must be in the same period of assets returns. Used for
'FLPM' and 'SLPM' and Sharpe objective function. The default is 0.
alpha: float, optional
Significance level of CVaR, EVaR, CDaR and EDaR.
value : float, optional
Amount to allocate to portfolio, by default 1.0
table: bool, optional
True if plot table weights, by default False
"""
p = d_period(period, start, end)
s_title = f"{p} Weighted Portfolio based on " + s_property + "\n"
weights, stock_returns = optimizer_model.get_property_weights(
stocks=stocks,
period=period,
start=start,
end=end,
log_returns=log_returns,
freq=freq,
maxnan=maxnan,
threshold=threshold,
method=method,
s_property=s_property,
value=value,
)
if table:
console.print("\n", s_title)
display_weights(weights)
portfolio_performance(
weights=weights,
stock_returns=stock_returns,
risk_measure=risk_choices[risk_measure],
risk_free_rate=risk_free_rate,
alpha=alpha,
# a_sim=a_sim,
# beta=beta,
# b_sim=beta_sim,
freq=freq,
)
console.print("")
return weights
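Property weighting normalizes a per-ticker attribute (e.g. `marketCap`) so the weights sum to the allocated value. A hedged sketch of that normalization with toy numbers; the real `get_property_weights` looks the attribute up per ticker rather than taking it as input:

```python
def property_weights(properties: dict, value: float = 1.0) -> dict:
    """Weight each ticker proportionally to its property value."""
    total = sum(properties.values())
    return {ticker: value * p / total for ticker, p in properties.items()}

caps = {"AAA": 300.0, "BBB": 100.0}  # hypothetical market caps
print(property_weights(caps))  # {'AAA': 0.75, 'BBB': 0.25}
```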
@log_start_end(log=logger)
def display_mean_risk(
stocks: List[str],
period: str = "3y",
start: str = "",
end: str = "",
log_returns: bool = False,
freq: str = "D",
maxnan: float = 0.05,
threshold: float = 0,
method: str = "time",
risk_measure: str = "mv",
objective: str = "sharpe",
risk_free_rate: float = 0,
risk_aversion: float = 1,
alpha: float = 0.05,
target_return: float = -1,
target_risk: float = -1,
mean: str = "hist",
covariance: str = "hist",
d_ewma: float = 0.94,
value: float = 1.0,
value_short: float = 0.0,
table: bool = False,
) -> Dict:
"""
Builds a mean risk optimal portfolio
Parameters
----------
stocks : List[str]
List of portfolio tickers
period : str, optional
Period to look at returns from
start: str, optional
If not using period, start date string (YYYY-MM-DD)
end: str, optional
If not using period, end date string (YYYY-MM-DD). If empty use last
weekday.
log_returns: bool, optional
If True calculate log returns, else arithmetic returns. Default value
is False
freq: str, optional
The frequency used to calculate returns. Default value is 'D'. Possible
values are:
- 'D' for daily returns.
- 'W' for weekly returns.
- 'M' for monthly returns.
maxnan: float, optional
Max percentage of nan values accepted per asset to be included in
returns.
threshold: float, optional
Value used to replace outliers that are higher than the threshold.
method: str
Method used to fill nan values. Default value is 'time'. For more information see
`interpolate <https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.interpolate.html>`_.
risk_measure: str, optional
The risk measure used to optimize the portfolio.
The default is 'MV'. Possible values are:
- 'MV': Standard Deviation.
- 'MAD': Mean Absolute Deviation.
- 'MSV': Semi Standard Deviation.
- 'FLPM': First Lower Partial Moment (Omega Ratio).
- 'SLPM': Second Lower Partial Moment (Sortino Ratio).
- 'CVaR': Conditional Value at Risk.
- 'EVaR': Entropic Value at Risk.
- 'WR': Worst Realization.
- 'ADD': Average Drawdown of uncompounded cumulative returns.
- 'UCI': Ulcer Index of uncompounded cumulative returns.
- 'CDaR': Conditional Drawdown at Risk of uncompounded cumulative returns.
- 'EDaR': Entropic Drawdown at Risk of uncompounded cumulative returns.
- 'MDD': Maximum Drawdown of uncompounded cumulative returns.
objective: str
Objective function of the optimization model.
The default is 'Sharpe'. Possible values are:
- 'MinRisk': Minimize the selected risk measure.
- 'Utility': Maximize the risk averse utility function.
- 'Sharpe': Maximize the risk adjusted return ratio based on the selected risk measure.
- 'MaxRet': Maximize the expected return of the portfolio.
risk_free_rate: float, optional
Risk free rate, must be in the same period of assets returns. Used for
'FLPM' and 'SLPM' and Sharpe objective function. The default is 0.
risk_aversion: float, optional
Risk aversion factor of the 'Utility' objective function.
The default is 1.
alpha: float, optional
Significance level of CVaR, EVaR, CDaR and EDaR
target_return: float, optional
Constraint on minimum level of portfolio's return.
target_risk: float, optional
Constraint on maximum level of portfolio's risk.
mean: str, optional
The method used to estimate the expected returns.
The default value is 'hist'. Possible values are:
- 'hist': use historical estimates.
- 'ewma1': use ewma with adjust=True. For more information see
`EWM <https://pandas.pydata.org/pandas-docs/stable/user_guide/window.html#exponentially-weighted-window>`_.
- 'ewma2': use ewma with adjust=False. For more information see
`EWM <https://pandas.pydata.org/pandas-docs/stable/user_guide/window.html#exponentially-weighted-window>`_.
covariance: str, optional
The method used to estimate the covariance matrix:
The default is 'hist'. Possible values are:
- 'hist': use historical estimates.
- 'ewma1': use ewma with adjust=True. For more information see
`EWM <https://pandas.pydata.org/pandas-docs/stable/user_guide/window.html#exponentially-weighted-window>`_.
- 'ewma2': use ewma with adjust=False. For more information see
`EWM <https://pandas.pydata.org/pandas-docs/stable/user_guide/window.html#exponentially-weighted-window>`_.
- 'ledoit': use the Ledoit and Wolf Shrinkage method.
- 'oas': use the Oracle Approximation Shrinkage method.
- 'shrunk': use the basic Shrunk Covariance method.
- 'gl': use the basic Graphical Lasso Covariance method.
- 'jlogo': use the j-LoGo Covariance method. For more information see: :cite:`a-jLogo`.
- 'fixed': denoise using fixed method. For more information see chapter 2 of :cite:`a-MLforAM`.
- 'spectral': denoise using spectral method. For more information see chapter 2 of :cite:`a-MLforAM`.
- 'shrink': denoise using shrink method. For more information see chapter 2 of :cite:`a-MLforAM`.
d_ewma: float, optional
The smoothing factor of ewma methods.
The default is 0.94.
value : float, optional
Amount to allocate to portfolio in long positions, by default 1.0
value_short : float, optional
Amount to allocate to portfolio in short positions, by default 0.0
table: bool, optional
Whether to display a table with the portfolio weights, by default False
"""
p = d_period(period, start, end)
if objective == "sharpe":
s_title = f"{p} Maximal return/risk ratio portfolio using "
elif objective == "minrisk":
s_title = f"{p} Minimum risk portfolio using "
elif objective == "maxret":
s_title = f"{p} Maximal return portfolio using "
elif objective == "utility":
s_title = f"{p} Maximal risk averse utility function portfolio using "
else:
# fallback so s_title is always defined for unexpected objectives
s_title = f"{p} Optimized portfolio using "
s_title += risk_names[risk_measure] + " as risk measure\n"
weights, stock_returns = optimizer_model.get_mean_risk_portfolio(
stocks=stocks,
period=period,
start=start,
end=end,
log_returns=log_returns,
freq=freq,
maxnan=maxnan,
threshold=threshold,
method=method,
risk_measure=risk_choices[risk_measure],
objective=objectives_choices[objective],
risk_free_rate=risk_free_rate,
risk_aversion=risk_aversion,
alpha=alpha,
target_return=target_return,
target_risk=target_risk,
mean=mean,
covariance=covariance,
d_ewma=d_ewma,
value=value,
value_short=value_short,
)
if weights is None:
console.print("\n", "There is no solution with these parameters")
return {}
if table:
console.print("\n", s_title)
display_weights(weights)
portfolio_performance(
weights=weights,
stock_returns=stock_returns,
risk_measure=risk_choices[risk_measure],
risk_free_rate=risk_free_rate,
alpha=alpha,
# a_sim=a_sim,
# beta=beta,
# b_sim=beta_sim,
freq=freq,
)
console.print("")
return weights
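The 'ewma1' / 'ewma2' estimators referenced throughout these docstrings differ only in pandas' `adjust` flag, with `alpha = 1 - d_ewma`. A minimal sketch of that distinction (illustrative only, not the `optimizer_model` implementation; the tickers and numbers are made up):

```python
import numpy as np
import pandas as pd

# Toy daily returns for two assets (made-up data, fixed seed)
rets = pd.DataFrame(
    np.random.default_rng(0).normal(0, 0.01, size=(250, 2)),
    columns=["ASSET_A", "ASSET_B"],
)

d_ewma = 0.94  # smoothing factor, as in the d_ewma parameter above

# 'ewma1' corresponds to adjust=True, 'ewma2' to adjust=False
mu_ewma1 = rets.ewm(alpha=1 - d_ewma, adjust=True).mean().iloc[-1]
mu_ewma2 = rets.ewm(alpha=1 - d_ewma, adjust=False).mean().iloc[-1]

# EWMA covariance: take the last date's 2x2 block of the expanding estimate
cov_ewma = rets.ewm(alpha=1 - d_ewma, adjust=False).cov().iloc[-2:]
```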
@log_start_end(log=logger)
def display_max_sharpe(
stocks: List[str],
period: str = "3y",
start: str = "",
end: str = "",
log_returns: bool = False,
freq: str = "D",
maxnan: float = 0.05,
threshold: float = 0,
method: str = "time",
risk_measure: str = "MV",
risk_free_rate: float = 0,
alpha: float = 0.05,
target_return: float = -1,
target_risk: float = -1,
mean: str = "hist",
covariance: str = "hist",
d_ewma: float = 0.94,
value: float = 1.0,
value_short: float = 0.0,
table: bool = False,
) -> Dict:
"""
Builds a maximal return/risk ratio portfolio
Parameters
----------
stocks : List[str]
List of portfolio tickers
period : str, optional
Period to look at returns from
start: str, optional
If not using period, start date string (YYYY-MM-DD)
end: str, optional
If not using period, end date string (YYYY-MM-DD). If empty use last
weekday.
log_returns: bool, optional
If True calculate log returns, else arithmetic returns. Default value
is False
freq: str, optional
The frequency used to calculate returns. Default value is 'D'. Possible
values are:
- 'D' for daily returns.
- 'W' for weekly returns.
- 'M' for monthly returns.
maxnan: float, optional
Max percentage of nan values accepted per asset to be included in
returns.
threshold: float, optional
Value used to replace outliers that are higher than the threshold.
method: str
Method used to fill nan values. Default value is 'time'. For more information see
`interpolate <https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.interpolate.html>`_.
risk_measure: str, optional
The risk measure used to optimize the portfolio.
The default is 'MV'. Possible values are:
- 'MV': Standard Deviation.
- 'MAD': Mean Absolute Deviation.
- 'MSV': Semi Standard Deviation.
- 'FLPM': First Lower Partial Moment (Omega Ratio).
- 'SLPM': Second Lower Partial Moment (Sortino Ratio).
- 'CVaR': Conditional Value at Risk.
- 'EVaR': Entropic Value at Risk.
- 'WR': Worst Realization.
- 'ADD': Average Drawdown of uncompounded cumulative returns.
- 'UCI': Ulcer Index of uncompounded cumulative returns.
- 'CDaR': Conditional Drawdown at Risk of uncompounded cumulative returns.
- 'EDaR': Entropic Drawdown at Risk of uncompounded cumulative returns.
- 'MDD': Maximum Drawdown of uncompounded cumulative returns.
risk_free_rate: float, optional
Risk free rate, must be expressed in the same frequency as the assets' returns.
Used for 'FLPM', 'SLPM' and the Sharpe objective function. The default is 0.
alpha: float, optional
Significance level of CVaR, EVaR, CDaR and EDaR. The default is 0.05.
target_return: float, optional
Constraint on minimum level of portfolio's return.
target_risk: float, optional
Constraint on maximum level of portfolio's risk.
mean: str, optional
The method used to estimate the expected returns.
The default value is 'hist'. Possible values are:
- 'hist': use historical estimates.
- 'ewma1': use ewma with adjust=True. For more information see
`EWM <https://pandas.pydata.org/pandas-docs/stable/user_guide/window.html#exponentially-weighted-window>`_.
- 'ewma2': use ewma with adjust=False. For more information see
`EWM <https://pandas.pydata.org/pandas-docs/stable/user_guide/window.html#exponentially-weighted-window>`_.
covariance: str, optional
The method used to estimate the covariance matrix.
The default is 'hist'. Possible values are:
- 'hist': use historical estimates.
- 'ewma1': use ewma with adjust=True. For more information see
`EWM <https://pandas.pydata.org/pandas-docs/stable/user_guide/window.html#exponentially-weighted-window>`_.
- 'ewma2': use ewma with adjust=False. For more information see
`EWM <https://pandas.pydata.org/pandas-docs/stable/user_guide/window.html#exponentially-weighted-window>`_.
- 'ledoit': use the Ledoit and Wolf Shrinkage method.
- 'oas': use the Oracle Approximating Shrinkage method.
- 'shrunk': use the basic Shrunk Covariance method.
- 'gl': use the basic Graphical Lasso Covariance method.
- 'jlogo': use the j-LoGo Covariance method. For more information see: :cite:`a-jLogo`.
- 'fixed': denoise using fixed method. For more information see chapter 2 of :cite:`a-MLforAM`.
- 'spectral': denoise using spectral method. For more information see chapter 2 of :cite:`a-MLforAM`.
- 'shrink': denoise using shrink method. For more information see chapter 2 of :cite:`a-MLforAM`.
d_ewma: float, optional
The smoothing factor of ewma methods.
The default is 0.94.
value : float, optional
Amount to allocate to portfolio in long positions, by default 1.0
value_short : float, optional
Amount to allocate to portfolio in short positions, by default 0.0
table: bool, optional
Whether to display a table with the portfolio weights, by default False
"""
weights = display_mean_risk(
stocks=stocks,
period=period,
start=start,
end=end,
log_returns=log_returns,
freq=freq,
maxnan=maxnan,
threshold=threshold,
method=method,
risk_measure=risk_measure,
objective="sharpe",
risk_free_rate=risk_free_rate,
alpha=alpha,
target_return=target_return,
target_risk=target_risk,
mean=mean,
covariance=covariance,
d_ewma=d_ewma,
value=value,
value_short=value_short,
table=table,
)
return weights
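In the unconstrained mean-variance ('MV') case, the max return/risk objective used here has a closed form: the tangency portfolio is proportional to Σ⁻¹(μ − r_f·1). A numpy sketch with made-up inputs (the real routine delegates to `optimizer_model.get_mean_risk_portfolio` and supports many more risk measures and constraints):

```python
import numpy as np

# Made-up annualized expected returns and covariance for two assets
mu = np.array([0.08, 0.12])
sigma = np.array([[0.04, 0.006], [0.006, 0.09]])
risk_free_rate = 0.01

# Unconstrained tangency portfolio: w proportional to inv(Sigma) @ (mu - rf)
raw = np.linalg.solve(sigma, mu - risk_free_rate)
w = raw / raw.sum()  # normalize so the weights sum to 1

sharpe = (w @ mu - risk_free_rate) / np.sqrt(w @ sigma @ w)
```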
@log_start_end(log=logger)
def display_min_risk(
stocks: List[str],
period: str = "3y",
start: str = "",
end: str = "",
log_returns: bool = False,
freq: str = "D",
maxnan: float = 0.05,
threshold: float = 0,
method: str = "time",
risk_measure: str = "MV",
risk_free_rate: float = 0,
alpha: float = 0.05,
target_return: float = -1,
target_risk: float = -1,
mean: str = "hist",
covariance: str = "hist",
d_ewma: float = 0.94,
value: float = 1.0,
value_short: float = 0.0,
table: bool = False,
) -> Dict:
"""
Builds a minimum risk portfolio
Parameters
----------
stocks : List[str]
List of portfolio tickers
period : str, optional
Period to look at returns from
start: str, optional
If not using period, start date string (YYYY-MM-DD)
end: str, optional
If not using period, end date string (YYYY-MM-DD). If empty use last
weekday.
log_returns: bool, optional
If True calculate log returns, else arithmetic returns. Default value
is False
freq: str, optional
The frequency used to calculate returns. Default value is 'D'. Possible
values are:
- 'D' for daily returns.
- 'W' for weekly returns.
- 'M' for monthly returns.
maxnan: float, optional
Max percentage of nan values accepted per asset to be included in
returns.
threshold: float, optional
Value used to replace outliers that are higher than the threshold.
method: str
Method used to fill nan values. Default value is 'time'. For more information see
`interpolate <https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.interpolate.html>`_.
risk_measure: str, optional
The risk measure used to optimize the portfolio.
The default is 'MV'. Possible values are:
- 'MV': Standard Deviation.
- 'MAD': Mean Absolute Deviation.
- 'MSV': Semi Standard Deviation.
- 'FLPM': First Lower Partial Moment (Omega Ratio).
- 'SLPM': Second Lower Partial Moment (Sortino Ratio).
- 'CVaR': Conditional Value at Risk.
- 'EVaR': Entropic Value at Risk.
- 'WR': Worst Realization.
- 'ADD': Average Drawdown of uncompounded cumulative returns.
- 'UCI': Ulcer Index of uncompounded cumulative returns.
- 'CDaR': Conditional Drawdown at Risk of uncompounded cumulative returns.
- 'EDaR': Entropic Drawdown at Risk of uncompounded cumulative returns.
- 'MDD': Maximum Drawdown of uncompounded cumulative returns.
risk_free_rate: float, optional
Risk free rate, must be expressed in the same frequency as the assets' returns.
Used for 'FLPM', 'SLPM' and the Sharpe objective function. The default is 0.
alpha: float, optional
Significance level of CVaR, EVaR, CDaR and EDaR. The default is 0.05.
target_return: float, optional
Constraint on minimum level of portfolio's return.
target_risk: float, optional
Constraint on maximum level of portfolio's risk.
mean: str, optional
The method used to estimate the expected returns.
The default value is 'hist'. Possible values are:
- 'hist': use historical estimates.
- 'ewma1': use ewma with adjust=True. For more information see
`EWM <https://pandas.pydata.org/pandas-docs/stable/user_guide/window.html#exponentially-weighted-window>`_.
- 'ewma2': use ewma with adjust=False. For more information see
`EWM <https://pandas.pydata.org/pandas-docs/stable/user_guide/window.html#exponentially-weighted-window>`_.
covariance: str, optional
The method used to estimate the covariance matrix.
The default is 'hist'. Possible values are:
- 'hist': use historical estimates.
- 'ewma1': use ewma with adjust=True. For more information see
`EWM <https://pandas.pydata.org/pandas-docs/stable/user_guide/window.html#exponentially-weighted-window>`_.
- 'ewma2': use ewma with adjust=False. For more information see
`EWM <https://pandas.pydata.org/pandas-docs/stable/user_guide/window.html#exponentially-weighted-window>`_.
- 'ledoit': use the Ledoit and Wolf Shrinkage method.
- 'oas': use the Oracle Approximating Shrinkage method.
- 'shrunk': use the basic Shrunk Covariance method.
- 'gl': use the basic Graphical Lasso Covariance method.
- 'jlogo': use the j-LoGo Covariance method. For more information see: :cite:`a-jLogo`.
- 'fixed': denoise using fixed method. For more information see chapter 2 of :cite:`a-MLforAM`.
- 'spectral': denoise using spectral method. For more information see chapter 2 of :cite:`a-MLforAM`.
- 'shrink': denoise using shrink method. For more information see chapter 2 of :cite:`a-MLforAM`.
d_ewma: float, optional
The smoothing factor of ewma methods.
The default is 0.94.
value : float, optional
Amount to allocate to portfolio in long positions, by default 1.0
value_short : float, optional
Amount to allocate to portfolio in short positions, by default 0.0
table: bool, optional
Whether to display a table with the portfolio weights, by default False
"""
weights = display_mean_risk(
stocks=stocks,
period=period,
start=start,
end=end,
log_returns=log_returns,
freq=freq,
maxnan=maxnan,
threshold=threshold,
method=method,
risk_measure=risk_measure,
objective="minrisk",
risk_free_rate=risk_free_rate,
alpha=alpha,
target_return=target_return,
target_risk=target_risk,
mean=mean,
covariance=covariance,
d_ewma=d_ewma,
value=value,
value_short=value_short,
table=table,
)
return weights
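With the 'MV' risk measure and only a budget constraint, the 'minrisk' objective reduces to the classic global minimum-variance portfolio, w = Σ⁻¹1 / (1ᵀΣ⁻¹1). A sketch with made-up numbers (the actual solver in `optimizer_model` handles all the risk measures listed above):

```python
import numpy as np

# Made-up covariance matrix for two assets
sigma = np.array([[0.04, 0.006], [0.006, 0.09]])
ones = np.ones(2)

# Global minimum-variance weights: w = inv(Sigma) @ 1 / (1' inv(Sigma) 1)
raw = np.linalg.solve(sigma, ones)
w = raw / raw.sum()

variance = w @ sigma @ w  # lower than either single asset's variance
```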
@log_start_end(log=logger)
def display_max_util(
stocks: List[str],
period: str = "3y",
start: str = "",
end: str = "",
log_returns: bool = False,
freq: str = "D",
maxnan: float = 0.05,
threshold: float = 0,
method: str = "time",
risk_measure: str = "MV",
risk_free_rate: float = 0,
risk_aversion: float = 1,
alpha: float = 0.05,
target_return: float = -1,
target_risk: float = -1,
mean: str = "hist",
covariance: str = "hist",
d_ewma: float = 0.94,
value: float = 1.0,
value_short: float = 0.0,
table: bool = False,
) -> Dict:
"""
Builds a maximal risk averse utility portfolio
Parameters
----------
stocks : List[str]
List of portfolio tickers
period : str, optional
Period to look at returns from
start: str, optional
If not using period, start date string (YYYY-MM-DD)
end: str, optional
If not using period, end date string (YYYY-MM-DD). If empty use last
weekday.
log_returns: bool, optional
If True calculate log returns, else arithmetic returns. Default value
is False
freq: str, optional
The frequency used to calculate returns. Default value is 'D'. Possible
values are:
- 'D' for daily returns.
- 'W' for weekly returns.
- 'M' for monthly returns.
maxnan: float, optional
Max percentage of nan values accepted per asset to be included in
returns.
threshold: float, optional
Value used to replace outliers that are higher than the threshold.
method: str
Method used to fill nan values. Default value is 'time'. For more information see
`interpolate <https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.interpolate.html>`_.
risk_measure: str, optional
The risk measure used to optimize the portfolio.
The default is 'MV'. Possible values are:
- 'MV': Standard Deviation.
- 'MAD': Mean Absolute Deviation.
- 'MSV': Semi Standard Deviation.
- 'FLPM': First Lower Partial Moment (Omega Ratio).
- 'SLPM': Second Lower Partial Moment (Sortino Ratio).
- 'CVaR': Conditional Value at Risk.
- 'EVaR': Entropic Value at Risk.
- 'WR': Worst Realization.
- 'ADD': Average Drawdown of uncompounded cumulative returns.
- 'UCI': Ulcer Index of uncompounded cumulative returns.
- 'CDaR': Conditional Drawdown at Risk of uncompounded cumulative returns.
- 'EDaR': Entropic Drawdown at Risk of uncompounded cumulative returns.
- 'MDD': Maximum Drawdown of uncompounded cumulative returns.
risk_free_rate: float, optional
Risk free rate, must be expressed in the same frequency as the assets' returns.
Used for 'FLPM', 'SLPM' and the Sharpe objective function. The default is 0.
risk_aversion: float, optional
Risk aversion factor of the 'Utility' objective function.
The default is 1.
alpha: float, optional
Significance level of CVaR, EVaR, CDaR and EDaR. The default is 0.05.
target_return: float, optional
Constraint on minimum level of portfolio's return.
target_risk: float, optional
Constraint on maximum level of portfolio's risk.
mean: str, optional
The method used to estimate the expected returns.
The default value is 'hist'. Possible values are:
- 'hist': use historical estimates.
- 'ewma1': use ewma with adjust=True. For more information see
`EWM <https://pandas.pydata.org/pandas-docs/stable/user_guide/window.html#exponentially-weighted-window>`_.
- 'ewma2': use ewma with adjust=False. For more information see
`EWM <https://pandas.pydata.org/pandas-docs/stable/user_guide/window.html#exponentially-weighted-window>`_.
covariance: str, optional
The method used to estimate the covariance matrix.
The default is 'hist'. Possible values are:
- 'hist': use historical estimates.
- 'ewma1': use ewma with adjust=True. For more information see
`EWM <https://pandas.pydata.org/pandas-docs/stable/user_guide/window.html#exponentially-weighted-window>`_.
- 'ewma2': use ewma with adjust=False. For more information see
`EWM <https://pandas.pydata.org/pandas-docs/stable/user_guide/window.html#exponentially-weighted-window>`_.
- 'ledoit': use the Ledoit and Wolf Shrinkage method.
- 'oas': use the Oracle Approximating Shrinkage method.
- 'shrunk': use the basic Shrunk Covariance method.
- 'gl': use the basic Graphical Lasso Covariance method.
- 'jlogo': use the j-LoGo Covariance method. For more information see: :cite:`a-jLogo`.
- 'fixed': denoise using fixed method. For more information see chapter 2 of :cite:`a-MLforAM`.
- 'spectral': denoise using spectral method. For more information see chapter 2 of :cite:`a-MLforAM`.
- 'shrink': denoise using shrink method. For more information see chapter 2 of :cite:`a-MLforAM`.
d_ewma: float, optional
The smoothing factor of ewma methods.
The default is 0.94.
value : float, optional
Amount to allocate to portfolio in long positions, by default 1.0
value_short : float, optional
Amount to allocate to portfolio in short positions, by default 0.0
table: bool, optional
Whether to display a table with the portfolio weights, by default False
"""
weights = display_mean_risk(
stocks=stocks,
period=period,
start=start,
end=end,
log_returns=log_returns,
freq=freq,
maxnan=maxnan,
threshold=threshold,
method=method,
risk_measure=risk_measure,
objective="utility",
risk_free_rate=risk_free_rate,
risk_aversion=risk_aversion,
alpha=alpha,
target_return=target_return,
target_risk=target_risk,
mean=mean,
covariance=covariance,
d_ewma=d_ewma,
value=value,
value_short=value_short,
table=table,
)
return weights
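The role of `risk_aversion` can be seen in the quadratic-utility case: for U(w) = μᵀw − (λ/2)·wᵀΣw, the unconstrained maximizer is w = (1/λ)·Σ⁻¹μ, so a larger λ shrinks every position proportionally. A toy sketch (made-up inputs; the constrained problem solved by `optimizer_model` does not have this closed form):

```python
import numpy as np

# Made-up inputs
mu = np.array([0.08, 0.12])
sigma = np.array([[0.04, 0.006], [0.006, 0.09]])
risk_aversion = 1.0  # the risk_aversion parameter above

# Unconstrained maximizer of U(w) = mu'w - (lambda/2) w'Sigma w
w = np.linalg.solve(sigma, mu) / risk_aversion

# Quadrupling risk aversion shrinks every position to a quarter
w_cautious = np.linalg.solve(sigma, mu) / (4 * risk_aversion)
```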
@log_start_end(log=logger)
def display_max_ret(
stocks: List[str],
period: str = "3y",
start: str = "",
end: str = "",
log_returns: bool = False,
freq: str = "D",
maxnan: float = 0.05,
threshold: float = 0,
method: str = "time",
risk_measure: str = "MV",
risk_free_rate: float = 0,
alpha: float = 0.05,
target_return: float = -1,
target_risk: float = -1,
mean: str = "hist",
covariance: str = "hist",
d_ewma: float = 0.94,
value: float = 1.0,
value_short: float = 0.0,
table: bool = False,
) -> Dict:
"""
Builds a maximal return portfolio
Parameters
----------
stocks : List[str]
List of portfolio tickers
period : str, optional
Period to look at returns from
start: str, optional
If not using period, start date string (YYYY-MM-DD)
end: str, optional
If not using period, end date string (YYYY-MM-DD). If empty use last
weekday.
log_returns: bool, optional
If True calculate log returns, else arithmetic returns. Default value
is False
freq: str, optional
The frequency used to calculate returns. Default value is 'D'. Possible
values are:
- 'D' for daily returns.
- 'W' for weekly returns.
- 'M' for monthly returns.
maxnan: float, optional
Max percentage of nan values accepted per asset to be included in
returns.
threshold: float, optional
Value used to replace outliers that are higher than the threshold.
method: str
Method used to fill nan values. Default value is 'time'. For more information see
`interpolate <https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.interpolate.html>`_.
risk_measure: str, optional
The risk measure used to optimize the portfolio.
The default is 'MV'. Possible values are:
- 'MV': Standard Deviation.
- 'MAD': Mean Absolute Deviation.
- 'MSV': Semi Standard Deviation.
- 'FLPM': First Lower Partial Moment (Omega Ratio).
- 'SLPM': Second Lower Partial Moment (Sortino Ratio).
- 'CVaR': Conditional Value at Risk.
- 'EVaR': Entropic Value at Risk.
- 'WR': Worst Realization.
- 'ADD': Average Drawdown of uncompounded cumulative returns.
- 'UCI': Ulcer Index of uncompounded cumulative returns.
- 'CDaR': Conditional Drawdown at Risk of uncompounded cumulative returns.
- 'EDaR': Entropic Drawdown at Risk of uncompounded cumulative returns.
- 'MDD': Maximum Drawdown of uncompounded cumulative returns.
risk_free_rate: float, optional
Risk free rate, must be expressed in the same frequency as the assets' returns.
Used for 'FLPM', 'SLPM' and the Sharpe objective function. The default is 0.
alpha: float, optional
Significance level of CVaR, EVaR, CDaR and EDaR. The default is 0.05.
target_return: float, optional
Constraint on minimum level of portfolio's return.
target_risk: float, optional
Constraint on maximum level of portfolio's risk.
mean: str, optional
The method used to estimate the expected returns.
The default value is 'hist'. Possible values are:
- 'hist': use historical estimates.
- 'ewma1': use ewma with adjust=True. For more information see
`EWM <https://pandas.pydata.org/pandas-docs/stable/user_guide/window.html#exponentially-weighted-window>`_.
- 'ewma2': use ewma with adjust=False. For more information see
`EWM <https://pandas.pydata.org/pandas-docs/stable/user_guide/window.html#exponentially-weighted-window>`_.
covariance: str, optional
The method used to estimate the covariance matrix.
The default is 'hist'. Possible values are:
- 'hist': use historical estimates.
- 'ewma1': use ewma with adjust=True. For more information see
`EWM <https://pandas.pydata.org/pandas-docs/stable/user_guide/window.html#exponentially-weighted-window>`_.
- 'ewma2': use ewma with adjust=False. For more information see
`EWM <https://pandas.pydata.org/pandas-docs/stable/user_guide/window.html#exponentially-weighted-window>`_.
- 'ledoit': use the Ledoit and Wolf Shrinkage method.
- 'oas': use the Oracle Approximating Shrinkage method.
- 'shrunk': use the basic Shrunk Covariance method.
- 'gl': use the basic Graphical Lasso Covariance method.
- 'jlogo': use the j-LoGo Covariance method. For more information see: :cite:`a-jLogo`.
- 'fixed': denoise using fixed method. For more information see chapter 2 of :cite:`a-MLforAM`.
- 'spectral': denoise using spectral method. For more information see chapter 2 of :cite:`a-MLforAM`.
- 'shrink': denoise using shrink method. For more information see chapter 2 of :cite:`a-MLforAM`.
d_ewma: float, optional
The smoothing factor of ewma methods.
The default is 0.94.
value : float, optional
Amount to allocate to portfolio in long positions, by default 1.0
value_short : float, optional
Amount to allocate to portfolio in short positions, by default 0.0
table: bool, optional
Whether to display a table with the portfolio weights, by default False
"""
weights = display_mean_risk(
stocks=stocks,
period=period,
start=start,
end=end,
log_returns=log_returns,
freq=freq,
maxnan=maxnan,
threshold=threshold,
method=method,
risk_measure=risk_measure,
objective="maxret",
risk_free_rate=risk_free_rate,
alpha=alpha,
target_return=target_return,
target_risk=target_risk,
mean=mean,
covariance=covariance,
d_ewma=d_ewma,
value=value,
value_short=value_short,
table=table,
)
return weights
@log_start_end(log=logger)
def display_max_div(
stocks: List[str],
period: str = "3y",
start: str = "",
end: str = "",
log_returns: bool = False,
freq: str = "D",
maxnan: float = 0.05,
threshold: float = 0,
method: str = "time",
covariance: str = "hist",
d_ewma: float = 0.94,
value: float = 1.0,
value_short: float = 0.0,
table: bool = False,
) -> Dict:
"""
Builds a maximal diversification portfolio
Parameters
----------
stocks : List[str]
List of portfolio tickers
period : str, optional
Period to look at returns from
start: str, optional
If not using period, start date string (YYYY-MM-DD)
end: str, optional
If not using period, end date string (YYYY-MM-DD). If empty use last
weekday.
log_returns: bool, optional
If True calculate log returns, else arithmetic returns. Default value
is False
freq: str, optional
The frequency used to calculate returns. Default value is 'D'. Possible
values are:
- 'D' for daily returns.
- 'W' for weekly returns.
- 'M' for monthly returns.
maxnan: float, optional
Max percentage of nan values accepted per asset to be included in
returns.
threshold: float, optional
Value used to replace outliers that are higher than the threshold.
method: str
Method used to fill nan values. Default value is 'time'. For more information see
`interpolate <https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.interpolate.html>`_.
covariance: str, optional
The method used to estimate the covariance matrix.
The default is 'hist'. Possible values are:
- 'hist': use historical estimates.
- 'ewma1': use ewma with adjust=True. For more information see
`EWM <https://pandas.pydata.org/pandas-docs/stable/user_guide/window.html#exponentially-weighted-window>`_.
- 'ewma2': use ewma with adjust=False. For more information see
`EWM <https://pandas.pydata.org/pandas-docs/stable/user_guide/window.html#exponentially-weighted-window>`_.
- 'ledoit': use the Ledoit and Wolf Shrinkage method.
- 'oas': use the Oracle Approximating Shrinkage method.
- 'shrunk': use the basic Shrunk Covariance method.
- 'gl': use the basic Graphical Lasso Covariance method.
- 'jlogo': use the j-LoGo Covariance method. For more information see: :cite:`a-jLogo`.
- 'fixed': denoise using fixed method. For more information see chapter 2 of :cite:`a-MLforAM`.
- 'spectral': denoise using spectral method. For more information see chapter 2 of :cite:`a-MLforAM`.
- 'shrink': denoise using shrink method. For more information see chapter 2 of :cite:`a-MLforAM`.
d_ewma: float, optional
The smoothing factor of ewma methods.
The default is 0.94.
value : float, optional
Amount to allocate to portfolio in long positions, by default 1.0
value_short : float, optional
Amount to allocate to portfolio in short positions, by default 0.0
table: bool, optional
Whether to display a table with the portfolio weights, by default False
"""
p = d_period(period, start, end)
s_title = f"{p} Maximal diversification portfolio\n"
weights, stock_returns = optimizer_model.get_max_diversification_portfolio(
stocks=stocks,
period=period,
start=start,
end=end,
log_returns=log_returns,
freq=freq,
maxnan=maxnan,
threshold=threshold,
method=method,
covariance=covariance,
d_ewma=d_ewma,
value=value,
value_short=value_short,
)
if weights is None:
console.print("\n", "There is no solution with these parameters")
return {}
if table:
console.print("\n", s_title)
display_weights(weights)
portfolio_performance(
weights=weights,
stock_returns=stock_returns,
risk_measure="MV",
risk_free_rate=0,
# alpha=0.05,
# a_sim=100,
# beta=None,
# b_sim=None,
freq=freq,
)
console.print("")
return weights
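The quantity this portfolio maximizes is the diversification ratio, DR(w) = (wᵀσ) / √(wᵀΣw), where σ is the vector of individual volatilities; DR exceeds 1 whenever assets are imperfectly correlated. A short illustration with made-up numbers (the optimization itself happens in `optimizer_model.get_max_diversification_portfolio`):

```python
import numpy as np

# Made-up covariance matrix and candidate weights
sigma = np.array([[0.04, 0.006], [0.006, 0.09]])
vols = np.sqrt(np.diag(sigma))  # individual volatilities
w = np.array([0.6, 0.4])

# Diversification ratio: weighted average vol over portfolio vol;
# it exceeds 1 because the two assets are imperfectly correlated
dr = (w @ vols) / np.sqrt(w @ sigma @ w)
```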
@log_start_end(log=logger)
def display_max_decorr(
stocks: List[str],
period: str = "3y",
start: str = "",
end: str = "",
log_returns: bool = False,
freq: str = "D",
maxnan: float = 0.05,
threshold: float = 0,
method: str = "time",
covariance: str = "hist",
d_ewma: float = 0.94,
value: float = 1.0,
value_short: float = 0.0,
table: bool = False,
) -> Dict:
"""
Builds a maximal decorrelation portfolio
Parameters
----------
stocks : List[str]
List of portfolio tickers
period : str, optional
Period to look at returns from
start: str, optional
If not using period, start date string (YYYY-MM-DD)
end: str, optional
If not using period, end date string (YYYY-MM-DD). If empty use last
weekday.
log_returns: bool, optional
If True calculate log returns, else arithmetic returns. Default value
is False
freq: str, optional
The frequency used to calculate returns. Default value is 'D'. Possible
values are:
- 'D' for daily returns.
- 'W' for weekly returns.
- 'M' for monthly returns.
maxnan: float, optional
Max percentage of nan values accepted per asset to be included in
returns.
threshold: float, optional
Value used to replace outliers that are higher than the threshold.
method: str
Method used to fill nan values. Default value is 'time'. For more information see
`interpolate <https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.interpolate.html>`_.
covariance: str, optional
The method used to estimate the covariance matrix.
The default is 'hist'. Possible values are:
- 'hist': use historical estimates.
- 'ewma1': use ewma with adjust=True. For more information see
`EWM <https://pandas.pydata.org/pandas-docs/stable/user_guide/window.html#exponentially-weighted-window>`_.
- 'ewma2': use ewma with adjust=False. For more information see
`EWM <https://pandas.pydata.org/pandas-docs/stable/user_guide/window.html#exponentially-weighted-window>`_.
- 'ledoit': use the Ledoit and Wolf Shrinkage method.
- 'oas': use the Oracle Approximating Shrinkage method.
- 'shrunk': use the basic Shrunk Covariance method.
- 'gl': use the basic Graphical Lasso Covariance method.
- 'jlogo': use the j-LoGo Covariance method. For more information see: :cite:`a-jLogo`.
- 'fixed': denoise using fixed method. For more information see chapter 2 of :cite:`a-MLforAM`.
- 'spectral': denoise using spectral method. For more information see chapter 2 of :cite:`a-MLforAM`.
- 'shrink': denoise using shrink method. For more information see chapter 2 of :cite:`a-MLforAM`.
d_ewma: float, optional
The smoothing factor of ewma methods.
The default is 0.94.
value : float, optional
Amount to allocate to portfolio in long positions, by default 1.0
value_short : float, optional
Amount to allocate to portfolio in short positions, by default 0.0
table: bool, optional
Whether to display a table with the portfolio weights, by default False
"""
p = d_period(period, start, end)
s_title = f"{p} Maximal decorrelation portfolio\n"
weights, stock_returns = optimizer_model.get_max_decorrelation_portfolio(
stocks=stocks,
period=period,
start=start,
end=end,
log_returns=log_returns,
freq=freq,
maxnan=maxnan,
threshold=threshold,
method=method,
covariance=covariance,
d_ewma=d_ewma,
value=value,
value_short=value_short,
)
if weights is None:
console.print("\n", "There is no solution with these parameters")
return {}
if table:
console.print("\n", s_title)
display_weights(weights)
portfolio_performance(
weights=weights,
stock_returns=stock_returns,
risk_measure="MV",
risk_free_rate=0,
# alpha=alpha,
# a_sim=a_sim,
# beta=beta,
# b_sim=b_sim,
freq=freq,
)
console.print("")
return weights
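One common formulation of maximal decorrelation is minimum variance applied to the correlation matrix R (i.e. treating all assets as equal-volatility), which again has the closed form w = R⁻¹1 / (1ᵀR⁻¹1). A sketch under that assumption, with a made-up correlation matrix (the module's actual solver lives in `optimizer_model.get_max_decorrelation_portfolio`):

```python
import numpy as np

# Made-up correlation matrix; the asset at index 1 is least correlated
corr = np.array([
    [1.0, 0.2, 0.4],
    [0.2, 1.0, 0.1],
    [0.4, 0.1, 1.0],
])
ones = np.ones(3)

# Minimum variance on the correlation matrix (equal-volatility assets):
# w = inv(R) @ 1 / (1' inv(R) 1)
raw = np.linalg.solve(corr, ones)
w = raw / raw.sum()
```

The least-correlated asset ends up with the largest weight, which is the intuitive behavior of a decorrelation objective.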
@log_start_end(log=logger)
def display_black_litterman(
stocks: List[str],
p_views: List,
q_views: List,
period: str = "3y",
start: str = "",
end: str = "",
log_returns: bool = False,
freq: str = "D",
maxnan: float = 0.05,
threshold: float = 0,
method: str = "time",
benchmark: Dict = None,
objective: str = "Sharpe",
risk_free_rate: float = 0,
risk_aversion: float = 1,
delta: float = None,
equilibrium: bool = True,
optimize: bool = True,
value: float = 1.0,
value_short: float = 0,
table: bool = False,
) -> Dict:
"""
Builds a Black Litterman portfolio
Parameters
----------
stocks : List[str]
List of portfolio tickers
p_views: List
Matrix P of views that shows relationships among assets and returns.
Default value is None.
q_views: List
Matrix Q of expected returns of views. Default value is None.
period : str, optional
Period to look at returns from
start: str, optional
If not using period, start date string (YYYY-MM-DD)
end: str, optional
If not using period, end date string (YYYY-MM-DD). If empty use last
weekday.
log_returns: bool, optional
If True calculate log returns, else arithmetic returns. Default value
is False
freq: str, optional
The frequency used to calculate returns. Default value is 'D'. Possible
values are:
- 'D' for daily returns.
- 'W' for weekly returns.
- 'M' for monthly returns.
maxnan: float, optional
Max percentage of nan values accepted per asset to be included in
returns.
threshold: float, optional
Value used to replace outliers that are higher than the threshold.
method: str
Method used to fill nan values. Default value is 'time'. For more information see
`interpolate <https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.interpolate.html>`_.
benchmark : Dict
Dict of portfolio weights
objective: str
Objective function of the optimization model.
The default is 'Sharpe'. Possible values are:
- 'MinRisk': Minimize the selected risk measure.
- 'Utility': Maximize the risk averse utility function.
- 'Sharpe': Maximize the risk adjusted return ratio based on the selected risk measure.
- 'MaxRet': Maximize the expected return of the portfolio.
risk_free_rate: float, optional
Risk free rate, must be in annual frequency. The default is 0.
risk_aversion: float, optional
Risk aversion factor of the 'Utility' objective function.
The default is 1.
delta: float, optional
Risk aversion factor of Black Litterman model. Default value is None.
equilibrium: bool, optional
If True excess returns are based on equilibrium market portfolio, if False
excess returns are calculated as historical returns minus risk free rate.
Default value is True.
optimize: bool, optional
If True, Black Litterman estimates are used as inputs of the mean variance model;
if False, returns the equilibrium weights from the Black Litterman model.
Default value is True.
value : float, optional
Amount of money to allocate. The default is 1.
value_short : float, optional
Amount to allocate to portfolio in short positions. The default is 0.
table: bool, optional
If True, display a table of the portfolio weights, by default False
"""
p = d_period(period, start, end)
s_title = f"{p} Black Litterman portfolio\n"
weights, stock_returns = optimizer_model.get_black_litterman_portfolio(
stocks=stocks,
benchmark=benchmark,
p_views=p_views,
q_views=q_views,
period=period,
start=start,
end=end,
log_returns=log_returns,
freq=freq,
maxnan=maxnan,
threshold=threshold,
method=method,
objective=objectives_choices[objective],
risk_free_rate=risk_free_rate,
risk_aversion=risk_aversion,
delta=delta,
equilibrium=equilibrium,
optimize=optimize,
value=value,
value_short=value_short,
)
if weights is None:
console.print("\n", "There is no solution with this parameters")
return {}
if table:
console.print("\n", s_title)
display_weights(weights)
portfolio_performance(
weights=weights,
stock_returns=stock_returns,
risk_measure="MV",
risk_free_rate=0,
# alpha=alpha,
# a_sim=a_sim,
# beta=beta,
# b_sim=b_sim,
freq=freq,
)
console.print("")
return weights
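The Black-Litterman posterior expected returns that a model call like `get_black_litterman_portfolio` ultimately rests on can be sketched in plain numpy. Everything below is a hypothetical toy illustration (the covariance, market weights, the single relative view in `P`/`Q`, `tau`, and the diagonal `Omega`), not values taken from this module; `P`/`Q` mirror the roles of the `p_views`/`q_views` arguments above:

```python
import numpy as np

# Toy inputs (hypothetical): 3 assets
sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])   # covariance of excess returns
w_mkt = np.array([0.5, 0.3, 0.2])        # market-cap weights
delta = 2.5                              # risk aversion
pi = delta * sigma @ w_mkt               # implied equilibrium returns

tau = 0.05
P = np.array([[1.0, -1.0, 0.0]])         # one relative view: asset 0 outperforms asset 1
Q = np.array([0.02])                     # ... by 2%
omega = np.diag(np.diag(tau * P @ sigma @ P.T))  # view uncertainty

# Posterior mean: pi + tau*Sigma*P' (P*tau*Sigma*P' + Omega)^-1 (Q - P*pi)
middle = np.linalg.inv(P @ (tau * sigma) @ P.T + omega)
mu_bl = pi + (tau * sigma) @ P.T @ middle @ (Q - P @ pi)
```

With these toy numbers the view pulls asset 0's expected return above its equilibrium level and asset 1's below it, which is the tilt the optimizer then trades on.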
@log_start_end(log=logger)
def display_ef(
stocks: List[str],
period: str = "3y",
start: str = "",
end: str = "",
log_returns: bool = False,
freq: str = "D",
maxnan: float = 0.05,
threshold: float = 0,
method: str = "time",
risk_measure: str = "MV",
risk_free_rate: float = 0,
alpha: float = 0.05,
value: float = 1.0,
value_short: float = 0.0,
n_portfolios: int = 100,
seed: int = 123,
tangency: bool = False,
plot_tickers: bool = True,
external_axes: Optional[List[plt.Axes]] = None,
):
"""
Display efficient frontier
Parameters
----------
stocks : List[str]
List of portfolio tickers
period : str, optional
Period to look at returns from
start: str, optional
If not using period, start date string (YYYY-MM-DD)
end: str, optional
If not using period, end date string (YYYY-MM-DD). If empty use last
weekday.
log_returns: bool, optional
If True calculate log returns, else arithmetic returns. Default value
is False
freq: str, optional
The frequency used to calculate returns. Default value is 'D'. Possible
values are:
- 'D' for daily returns.
- 'W' for weekly returns.
- 'M' for monthly returns.
maxnan: float, optional
Max percentage of nan values accepted per asset to be included in
returns.
threshold: float, optional
Value used to replace outliers that are higher than the threshold.
method: str
Method used to fill nan values. Default value is 'time'. For more information see
`interpolate <https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.interpolate.html>`_.
risk_measure: str, optional
The risk measure used to optimize the portfolio.
The default is 'MV'. Possible values are:
- 'MV': Standard Deviation.
- 'MAD': Mean Absolute Deviation.
- 'MSV': Semi Standard Deviation.
- 'FLPM': First Lower Partial Moment (Omega Ratio).
- 'SLPM': Second Lower Partial Moment (Sortino Ratio).
- 'CVaR': Conditional Value at Risk.
- 'EVaR': Entropic Value at Risk.
- 'WR': Worst Realization.
- 'ADD': Average Drawdown of uncompounded cumulative returns.
- 'UCI': Ulcer Index of uncompounded cumulative returns.
- 'CDaR': Conditional Drawdown at Risk of uncompounded cumulative returns.
- 'EDaR': Entropic Drawdown at Risk of uncompounded cumulative returns.
- 'MDD': Maximum Drawdown of uncompounded cumulative returns.
risk_free_rate: float, optional
Risk free rate, must be expressed in the same frequency as the asset returns.
Used for 'FLPM', 'SLPM' and the Sharpe objective function. The default is 0.
alpha: float, optional
Significance level of CVaR, EVaR, CDaR and EDaR.
The default is 0.05.
value : float, optional
Amount to allocate to portfolio in long positions, by default 1.0
value_short : float, optional
Amount to allocate to portfolio in short positions, by default 0.0
n_portfolios: int, optional
"Number of portfolios to simulate. The default value is 100.
seed: int, optional
Seed used to generate random portfolios. The default value is 123.
tangency: bool, optional
If True, adds the capital allocation line through the risk-free asset.
The default is False.
plot_tickers: bool, optional
If True, annotates each asset's ticker on the plot. The default is True.
external_axes: Optional[List[plt.Axes]]
Optional axes to plot data on
"""
stock_prices = yahoo_finance_model.process_stocks(stocks, period, start, end)
stock_returns = yahoo_finance_model.process_returns(
stock_prices,
log_returns=log_returns,
freq=freq,
maxnan=maxnan,
threshold=threshold,
method=method,
)
risk_free_rate = risk_free_rate / time_factor[freq.upper()]
# Building the portfolio object
port = rp.Portfolio(returns=stock_returns, alpha=alpha)
# Estimate input parameters:
port.assets_stats(method_mu="hist", method_cov="hist")
# Budget constraints
port.upperlng = value
if value_short > 0:
port.sht = True
port.uppersht = value_short
port.budget = value - value_short
else:
port.budget = value
# Estimate tangency portfolio:
weights = port.optimization(
model="Classic",
rm=risk_choices[risk_measure],
obj="Sharpe",
rf=risk_free_rate,
hist=True,
)
points = 20 # Number of points of the frontier
frontier = port.efficient_frontier(
model="Classic",
rm=risk_choices[risk_measure],
points=points,
rf=risk_free_rate,
hist=True,
)
random_weights = optimizer_model.generate_random_portfolios(
stocks=stocks,
n_portfolios=n_portfolios,
seed=seed,
)
mu = stock_returns.mean().to_frame().T
cov = stock_returns.cov()
Y = (mu @ frontier).to_numpy() * time_factor[freq.upper()]
Y = np.ravel(Y)
X = np.zeros_like(Y)
for i in range(frontier.shape[1]):
w = np.array(frontier.iloc[:, i], ndmin=2).T
risk = rp.Sharpe_Risk(
w,
cov=cov,
returns=stock_returns,
rm=risk_choices[risk_measure],
rf=risk_free_rate,
alpha=alpha,
# a_sim=a_sim,
# beta=beta,
# b_sim=b_sim,
)
X[i] = risk
if risk_choices[risk_measure] not in ["ADD", "MDD", "CDaR", "EDaR", "UCI"]:
X = X * time_factor[freq.upper()] ** 0.5
f = interp1d(X, Y, kind="quadratic")
X1 = np.linspace(X[0], X[-1], num=100)
Y1 = f(X1)
if external_axes is None:
_, ax = plt.subplots(figsize=plot_autoscale(), dpi=PLOT_DPI)
else:
ax = external_axes[0]
frontier = pd.concat([frontier, random_weights], axis=1)
ax = rp.plot_frontier(
w_frontier=frontier,
mu=mu,
cov=cov,
returns=stock_returns,
rm=risk_choices[risk_measure],
rf=risk_free_rate,
alpha=alpha,
cmap="RdYlBu",
w=weights,
label="",
marker="*",
s=16,
c="r",
t_factor=time_factor[freq.upper()],
ax=ax,
)
# Add risk free line
if tangency:
ret_sharpe = (mu @ weights).to_numpy().item() * time_factor[freq.upper()]
risk_sharpe = rp.Sharpe_Risk(
weights,
cov=cov,
returns=stock_returns,
rm=risk_choices[risk_measure],
rf=risk_free_rate,
alpha=alpha,
# a_sim=a_sim,
# beta=beta,
# b_sim=b_sim,
)
if risk_choices[risk_measure] not in ["ADD", "MDD", "CDaR", "EDaR", "UCI"]:
risk_sharpe = risk_sharpe * time_factor[freq.upper()] ** 0.5
y = ret_sharpe * 1.5
b = risk_free_rate * time_factor[freq.upper()]
m = (ret_sharpe - b) / risk_sharpe
x2 = (y - b) / m
x = [0, x2]
y = [b, y]
line = Line2D(x, y, label="Capital Allocation Line")
ax.set_xlim(xmin=min(X1) * 0.8)
ax.add_line(line)
ax.plot(X1, Y1, color="b")
if plot_tickers:
ticker_plot = pd.DataFrame(columns=["ticker", "var"])
for ticker in port.cov.columns:
weight_df = pd.DataFrame({"weights": 1}, index=[ticker])
risk = rp.Sharpe_Risk(
weight_df,
cov=port.cov[ticker][ticker],
returns=stock_returns.loc[:, [ticker]],
rm=risk_choices[risk_measure],
rf=risk_free_rate,
alpha=alpha,
)
if risk_choices[risk_measure] not in ["MDD", "ADD", "CDaR", "EDaR", "UCI"]:
risk = risk * time_factor[freq.upper()] ** 0.5
ticker_plot = ticker_plot.append(
{"ticker": ticker, "var": risk},
ignore_index=True,
)
ticker_plot = ticker_plot.set_index("ticker")
ticker_plot = ticker_plot.merge(
port.mu.T * time_factor[freq.upper()], right_index=True, left_index=True
)
ticker_plot = ticker_plot.rename(columns={0: "ret"})
ax.scatter(ticker_plot["var"], ticker_plot["ret"])
for row in ticker_plot.iterrows():
ax.annotate(row[0], (row[1]["var"], row[1]["ret"]))
ax.set_title(f"Efficient Frontier simulating {n_portfolios} portfolios")
ax.legend(loc="best", scatterpoints=1)
theme.style_primary_axis(ax)
l, b, w, h = ax.get_position().bounds
ax.set_position([l, b, w * 0.9, h])
ax1 = ax.get_figure().axes
ll, bb, ww, hh = ax1[-1].get_position().bounds
ax1[-1].set_position([ll * 1.02, bb, ww, hh])
if external_axes is None:
theme.visualize_output(force_tight_layout=False)
@log_start_end(log=logger)
def display_risk_parity(
stocks: List[str],
period: str = "3y",
start: str = "",
end: str = "",
log_returns: bool = False,
freq: str = "D",
maxnan: float = 0.05,
threshold: float = 0,
method: str = "time",
risk_measure: str = "mv",
risk_cont: List[str] = None,
risk_free_rate: float = 0,
alpha: float = 0.05,
target_return: float = -1,
mean: str = "hist",
covariance: str = "hist",
d_ewma: float = 0.94,
value: float = 1.0,
table: bool = False,
) -> Dict:
"""
Builds a risk parity portfolio using the risk budgeting approach
Parameters
----------
stocks : List[str]
List of portfolio tickers
period : str
Period to look at returns from
start: str
If not using period, start date string (YYYY-MM-DD)
end: str
If not using period, end date string (YYYY-MM-DD). If empty use last
weekday.
log_returns: bool
If True calculate log returns, else arithmetic returns. Default value
is False
freq: str
The frequency used to calculate returns. Default value is 'D'. Possible
values are:
- 'D' for daily returns.
- 'W' for weekly returns.
- 'M' for monthly returns.
- X (integer days) for returns calculated every X days.
maxnan: float
Max percentage of nan values accepted per asset to be included in
returns.
threshold: float
Value used to replace outliers that are higher than the threshold.
method: str
Method used to fill nan values. Default value is 'time'. For more information see
`interpolate <https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.interpolate.html>`_.
risk_measure: str
The risk measure used to optimize the portfolio.
The default is 'MV'. Possible values are:
- 'MV': Standard Deviation.
- 'MAD': Mean Absolute Deviation.
- 'MSV': Semi Standard Deviation.
- 'FLPM': First Lower Partial Moment (Omega Ratio).
- 'SLPM': Second Lower Partial Moment (Sortino Ratio).
- 'CVaR': Conditional Value at Risk.
- 'EVaR': Entropic Value at Risk.
- 'CDaR': Conditional Drawdown at Risk of uncompounded cumulative returns.
- 'EDaR': Entropic Drawdown at Risk of uncompounded cumulative returns.
- 'UCI': Ulcer Index of uncompounded cumulative returns.
risk_cont: List[str], optional
The vector of risk contribution per asset. If empty, the default is
1/n (number of assets).
risk_free_rate: float, optional
Risk free rate, must be expressed in the same frequency as the asset returns.
Used for 'FLPM', 'SLPM' and the Sharpe objective function. The default is 0.
alpha: float, optional
Significance level of CVaR, EVaR, CDaR and EDaR. The default is 0.05.
target_return: float, optional
Constraint on minimum level of portfolio's return.
mean: str, optional
The method used to estimate the expected returns.
The default value is 'hist'. Possible values are:
- 'hist': use historical estimates.
- 'ewma1': use ewma with adjust=True. For more information see
`EWM <https://pandas.pydata.org/pandas-docs/stable/user_guide/window.html#exponentially-weighted-window>`_.
- 'ewma2': use ewma with adjust=False. For more information see
`EWM <https://pandas.pydata.org/pandas-docs/stable/user_guide/window.html#exponentially-weighted-window>`_.
covariance: str, optional
The method used to estimate the covariance matrix.
The default is 'hist'. Possible values are:
- 'hist': use historical estimates.
- 'ewma1': use ewma with adjust=True. For more information see
`EWM <https://pandas.pydata.org/pandas-docs/stable/user_guide/window.html#exponentially-weighted-window>`_.
- 'ewma2': use ewma with adjust=False. For more information see
`EWM <https://pandas.pydata.org/pandas-docs/stable/user_guide/window.html#exponentially-weighted-window>`_.
- 'ledoit': use the Ledoit and Wolf Shrinkage method.
- 'oas': use the Oracle Approximation Shrinkage method.
- 'shrunk': use the basic Shrunk Covariance method.
- 'gl': use the basic Graphical Lasso Covariance method.
- 'jlogo': use the j-LoGo Covariance method. For more information see: :cite:`a-jLogo`.
- 'fixed': denoise using fixed method. For more information see chapter 2 of :cite:`a-MLforAM`.
- 'spectral': denoise using spectral method. For more information see chapter 2 of :cite:`a-MLforAM`.
- 'shrink': denoise using shrink method. For more information see chapter 2 of :cite:`a-MLforAM`.
d_ewma: float, optional
The smoothing factor of ewma methods.
The default is 0.94.
value : float, optional
Amount to allocate to portfolio, by default 1.0
table: bool, optional
If True, display a table of the portfolio weights, by default False
"""
p = d_period(period, start, end)
s_title = f"{p} Risk parity portfolio based on risk budgeting approach\n"
s_title += "using " + risk_names[risk_measure] + " as risk measure\n"
weights, stock_returns = optimizer_model.get_risk_parity_portfolio(
stocks=stocks,
period=period,
start=start,
end=end,
log_returns=log_returns,
freq=freq,
maxnan=maxnan,
threshold=threshold,
method=method,
risk_measure=risk_choices[risk_measure],
risk_cont=risk_cont,
risk_free_rate=risk_free_rate,
alpha=alpha,
target_return=target_return,
mean=mean,
covariance=covariance,
d_ewma=d_ewma,
value=value,
)
if weights is None:
console.print("\n", "There is no solution with this parameters")
return {}
if table:
console.print("\n", s_title)
display_weights(weights)
portfolio_performance(
weights=weights,
stock_returns=stock_returns,
risk_measure=risk_choices[risk_measure],
risk_free_rate=risk_free_rate,
freq=freq,
)
console.print("")
return weights
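The risk-budgeting idea behind this function can be made concrete with a small numpy sketch: each asset's risk contribution is its weight times its marginal contribution to portfolio variance, and a risk-parity optimizer searches for weights that equalize those fractions. The covariance matrix below is a hypothetical toy, not data from this module:

```python
import numpy as np

# Toy covariance (hypothetical, 3 assets)
sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])

def risk_contributions(w, cov):
    """Fraction of total portfolio variance contributed by each asset."""
    port_var = w @ cov @ w
    marginal = cov @ w            # marginal contribution to variance (up to a factor of 2)
    return w * marginal / port_var  # fractions sum to 1

w_equal = np.ones(3) / 3
rc = risk_contributions(w_equal, sigma)
# Under equal weights the riskiest asset dominates the risk budget;
# a risk-parity solution instead finds w such that every entry of rc equals 1/n.
```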
@log_start_end(log=logger)
def display_rel_risk_parity(
stocks: List[str],
period: str = "3y",
start: str = "",
end: str = "",
log_returns: bool = False,
freq: str = "D",
maxnan: float = 0.05,
threshold: float = 0,
method: str = "time",
version: str = "A",
risk_cont: List[str] = None,
penal_factor: float = 1,
target_return: float = -1,
mean: str = "hist",
covariance: str = "hist",
d_ewma: float = 0.94,
value: float = 1.0,
table: bool = False,
) -> Dict:
"""
Builds a relaxed risk parity portfolio using the least squares approach
Parameters
----------
stocks : List[str]
List of portfolio tickers
period : str, optional
Period to look at returns from
start: str, optional
If not using period, start date string (YYYY-MM-DD)
end: str, optional
If not using period, end date string (YYYY-MM-DD). If empty use last
weekday.
log_returns: bool, optional
If True calculate log returns, else arithmetic returns. Default value
is False
freq: str, optional
The frequency used to calculate returns. Default value is 'D'. Possible
values are:
- 'D' for daily returns.
- 'W' for weekly returns.
- 'M' for monthly returns.
- X (integer days) for returns calculated every X days.
maxnan: float, optional
Max percentage of nan values accepted per asset to be included in
returns.
threshold: float, optional
Value used to replace outliers that are higher than the threshold.
method: str, optional
Method used to fill nan values. Default value is 'time'. For more information see
`interpolate <https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.interpolate.html>`_.
version : str, optional
Relaxed risk parity model version. The default is 'A'.
Possible values are:
- 'A': without regularization and penalization constraints.
- 'B': with regularization constraint but without penalization constraint.
- 'C': with regularization and penalization constraints.
risk_cont: List[str], optional
The vector of risk contribution per asset. If empty, the default is
1/n (number of assets).
penal_factor: float, optional
The penalization factor of penalization constraints. Only used with
version 'C'. The default is 1.
target_return: float, optional
Constraint on minimum level of portfolio's return.
mean: str, optional
The method used to estimate the expected returns.
The default value is 'hist'. Possible values are:
- 'hist': use historical estimates.
- 'ewma1': use ewma with adjust=True. For more information see
`EWM <https://pandas.pydata.org/pandas-docs/stable/user_guide/window.html#exponentially-weighted-window>`_.
- 'ewma2': use ewma with adjust=False. For more information see
`EWM <https://pandas.pydata.org/pandas-docs/stable/user_guide/window.html#exponentially-weighted-window>`_.
covariance: str, optional
The method used to estimate the covariance matrix.
The default is 'hist'. Possible values are:
- 'hist': use historical estimates.
- 'ewma1': use ewma with adjust=True. For more information see
`EWM <https://pandas.pydata.org/pandas-docs/stable/user_guide/window.html#exponentially-weighted-window>`_.
- 'ewma2': use ewma with adjust=False. For more information see
`EWM <https://pandas.pydata.org/pandas-docs/stable/user_guide/window.html#exponentially-weighted-window>`_.
- 'ledoit': use the Ledoit and Wolf Shrinkage method.
- 'oas': use the Oracle Approximation Shrinkage method.
- 'shrunk': use the basic Shrunk Covariance method.
- 'gl': use the basic Graphical Lasso Covariance method.
- 'jlogo': use the j-LoGo Covariance method. For more information see: :cite:`a-jLogo`.
- 'fixed': denoise using fixed method. For more information see chapter 2 of :cite:`a-MLforAM`.
- 'spectral': denoise using spectral method. For more information see chapter 2 of :cite:`a-MLforAM`.
- 'shrink': denoise using shrink method. For more information see chapter 2 of :cite:`a-MLforAM`.
d_ewma: float, optional
The smoothing factor of ewma methods.
The default is 0.94.
value : float, optional
Amount to allocate to portfolio, by default 1.0
table: bool, optional
If True, display a table of the portfolio weights, by default False
"""
p = d_period(period, start, end)
s_title = f"{p} Relaxed risk parity portfolio based on least squares approach\n"
weights, stock_returns = optimizer_model.get_rel_risk_parity_portfolio(
stocks=stocks,
period=period,
start=start,
end=end,
log_returns=log_returns,
freq=freq,
maxnan=maxnan,
threshold=threshold,
method=method,
version=version.upper(),
risk_cont=risk_cont,
penal_factor=penal_factor,
target_return=target_return,
mean=mean,
covariance=covariance,
d_ewma=d_ewma,
value=value,
)
if weights is None:
console.print("\n", "There is no solution with this parameters")
return {}
if table:
console.print("\n", s_title)
display_weights(weights)
portfolio_performance(
weights=weights,
stock_returns=stock_returns,
risk_measure=risk_choices["mv"],
freq=freq,
)
console.print("")
return weights
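Several functions in this module annualize statistics with the same convention: mean returns scale linearly with the time factor, dispersion-type risk measures scale with its square root, and drawdown-based measures (MDD, ADD, CDaR, EDaR, UCI) are left unscaled. A minimal sketch of that convention; the `time_factor` mapping here is a stand-in for the module-level one, and the daily figures are toy numbers:

```python
import math

# Stand-in for the module-level time_factor mapping (assumed values)
time_factor = {"D": 252, "W": 52, "M": 12}

daily_ret = 0.0004        # mean daily return (toy)
daily_vol = 0.01          # daily standard deviation (toy)

ann_ret = daily_ret * time_factor["D"]             # returns scale linearly with time
ann_vol = daily_vol * math.sqrt(time_factor["D"])  # volatility scales with sqrt(time)

# Drawdown-based measures are path statistics over the whole sample, not
# per-period dispersions, which is why the code skips the sqrt factor for them.
```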
@log_start_end(log=logger)
def display_hcp(
stocks: List[str],
period: str = "3y",
start: str = "",
end: str = "",
log_returns: bool = False,
freq: str = "D",
maxnan: float = 0.05,
threshold: float = 0,
method: str = "time",
model: str = "HRP",
codependence: str = "pearson",
covariance: str = "hist",
objective: str = "minrisk",
risk_measure: str = "mv",
risk_free_rate: float = 0.0,
risk_aversion: float = 1.0,
alpha: float = 0.05,
a_sim: int = 100,
beta: float = None,
b_sim: int = None,
linkage: str = "ward",
k: int = 0,
max_k: int = 10,
bins_info: str = "KN",
alpha_tail: float = 0.05,
leaf_order: bool = True,
d_ewma: float = 0.94,
value: float = 1.0,
table: bool = False,
) -> Dict:
"""
Builds a hierarchical clustering portfolio
Parameters
----------
stocks : List[str]
List of portfolio tickers
period : str
Period to look at returns from
start: str, optional
If not using period, start date string (YYYY-MM-DD)
end: str, optional
If not using period, end date string (YYYY-MM-DD). If empty use last
weekday.
log_returns: bool, optional
If True calculate log returns, else arithmetic returns. Default value
is False
freq: str, optional
The frequency used to calculate returns. Default value is 'D'. Possible
values are:
- 'D' for daily returns.
- 'W' for weekly returns.
- 'M' for monthly returns.
maxnan: float, optional
Max percentage of nan values accepted per asset to be included in
returns.
threshold: float, optional
Value used to replace outliers that are higher than the threshold.
method: str, optional
Method used to fill nan values. Default value is 'time'. For more information see
`interpolate <https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.interpolate.html>`_.
model: str, optional
The hierarchical cluster portfolio model used to optimize the
portfolio. The default is 'HRP'. Possible values are:
- 'HRP': Hierarchical Risk Parity.
- 'HERC': Hierarchical Equal Risk Contribution.
- 'NCO': Nested Clustered Optimization.
codependence: str, optional
The codependence or similarity matrix used to build the distance
metric and clusters. The default is 'pearson'. Possible values are:
- 'pearson': pearson correlation matrix. Distance formula:
:math:`D_{i,j} = \\sqrt{0.5(1-\rho^{pearson}_{i,j})}`.
- 'spearman': spearman correlation matrix. Distance formula:
:math:`D_{i,j} = \\sqrt{0.5(1-\rho^{spearman}_{i,j})}`.
- 'abs_pearson': absolute value pearson correlation matrix. Distance formula:
:math:`D_{i,j} = \\sqrt{(1-|\rho^{pearson}_{i,j}|)}`.
- 'abs_spearman': absolute value spearman correlation matrix. Distance formula:
:math:`D_{i,j} = \\sqrt{(1-|\rho^{spearman}_{i,j}|)}`.
- 'distance': distance correlation matrix. Distance formula:
:math:`D_{i,j} = \\sqrt{(1-\rho^{distance}_{i,j})}`.
- 'mutual_info': mutual information matrix. Distance used is variation information matrix.
- 'tail': lower tail dependence index matrix. Dissimilarity formula:
:math:`D_{i,j} = -\\log{\\lambda_{i,j}}`.
covariance: str, optional
The method used to estimate the covariance matrix.
The default is 'hist'. Possible values are:
- 'hist': use historical estimates.
- 'ewma1': use ewma with adjust=True. For more information see
`EWM <https://pandas.pydata.org/pandas-docs/stable/user_guide/window.html#exponentially-weighted-window>`_.
- 'ewma2': use ewma with adjust=False. For more information see
`EWM <https://pandas.pydata.org/pandas-docs/stable/user_guide/window.html#exponentially-weighted-window>`_.
- 'ledoit': use the Ledoit and Wolf Shrinkage method.
- 'oas': use the Oracle Approximation Shrinkage method.
- 'shrunk': use the basic Shrunk Covariance method.
- 'gl': use the basic Graphical Lasso Covariance method.
- 'jlogo': use the j-LoGo Covariance method. For more information see: :cite:`c-jLogo`.
- 'fixed': denoise using fixed method. For more information see chapter 2 of :cite:`c-MLforAM`.
- 'spectral': denoise using spectral method. For more information see chapter 2 of :cite:`c-MLforAM`.
- 'shrink': denoise using shrink method. For more information see chapter 2 of :cite:`c-MLforAM`.
objective: str, optional
Objective function used by the NCO model.
The default is 'MinRisk'. Possible values are:
- 'MinRisk': Minimize the selected risk measure.
- 'Utility': Maximize the risk averse utility function.
- 'Sharpe': Maximize the risk adjusted return ratio based on the selected risk measure.
- 'ERC': Equally risk contribution portfolio of the selected risk measure.
risk_measure: str, optional
The risk measure used to optimize the portfolio. If the model is 'NCO',
the available risk measures depend on the objective function.
The default is 'MV'. Possible values are:
- 'MV': Variance.
- 'MAD': Mean Absolute Deviation.
- 'MSV': Semi Standard Deviation.
- 'FLPM': First Lower Partial Moment (Omega Ratio).
- 'SLPM': Second Lower Partial Moment (Sortino Ratio).
- 'VaR': Value at Risk.
- 'CVaR': Conditional Value at Risk.
- 'TG': Tail Gini.
- 'EVaR': Entropic Value at Risk.
- 'WR': Worst Realization (Minimax).
- 'RG': Range of returns.
- 'CVRG': CVaR range of returns.
- 'TGRG': Tail Gini range of returns.
- 'MDD': Maximum Drawdown of uncompounded cumulative returns (Calmar Ratio).
- 'ADD': Average Drawdown of uncompounded cumulative returns.
- 'DaR': Drawdown at Risk of uncompounded cumulative returns.
- 'CDaR': Conditional Drawdown at Risk of uncompounded cumulative returns.
- 'EDaR': Entropic Drawdown at Risk of uncompounded cumulative returns.
- 'UCI': Ulcer Index of uncompounded cumulative returns.
- 'MDD_Rel': Maximum Drawdown of compounded cumulative returns (Calmar Ratio).
- 'ADD_Rel': Average Drawdown of compounded cumulative returns.
- 'DaR_Rel': Drawdown at Risk of compounded cumulative returns.
- 'CDaR_Rel': Conditional Drawdown at Risk of compounded cumulative returns.
- 'EDaR_Rel': Entropic Drawdown at Risk of compounded cumulative returns.
- 'UCI_Rel': Ulcer Index of compounded cumulative returns.
risk_free_rate: float, optional
Risk free rate, must be expressed in the same frequency as the asset returns.
Used for 'FLPM' and 'SLPM'. The default is 0.
risk_aversion: float, optional
Risk aversion factor of the 'Utility' objective function.
The default is 1.
alpha: float, optional
Significance level of VaR, CVaR, EVaR, DaR, CDaR, EDaR and Tail Gini of losses.
The default is 0.05.
a_sim: float, optional
Number of CVaRs used to approximate Tail Gini of losses. The default is 100.
beta: float, optional
Significance level of CVaR and Tail Gini of gains. If None it duplicates alpha value.
The default is None.
b_sim: float, optional
Number of CVaRs used to approximate Tail Gini of gains. If None it duplicates a_sim value.
The default is None.
linkage: str, optional
Linkage method of hierarchical clustering. For more information see
`linkage <https://docs.scipy.org/doc/scipy/reference/generated/scipy.cluster.hierarchy.linkage.html>`_.
The default is 'ward'. Possible values are:
- 'single'.
- 'complete'.
- 'average'.
- 'weighted'.
- 'centroid'.
- 'median'.
- 'ward'.
- 'dbht': Direct Bubble Hierarchical Tree.
k: int, optional
Number of clusters. If greater than 0, this value is used instead of the
optimal number of clusters calculated with the two difference gap statistic.
The default is 0.
max_k: int, optional
Max number of clusters used by the two difference gap statistic
to find the optimal number of clusters. The default is 10.
bins_info: str, optional
Number of bins used to calculate variation of information. The default
value is 'KN'. Possible values are:
- 'KN': Knuth's choice method. For more information see
`knuth_bin_width <https://docs.astropy.org/en/stable/api/astropy.stats.knuth_bin_width.html>`_.
- 'FD': Freedman–Diaconis' choice method. For more information see
`freedman_bin_width <https://docs.astropy.org/en/stable/api/astropy.stats.freedman_bin_width.html>`_.
- 'SC': Scott's choice method. For more information see
`scott_bin_width <https://docs.astropy.org/en/stable/api/astropy.stats.scott_bin_width.html>`_.
- 'HGR': Hacine-Gharbi and Ravier's choice method.
alpha_tail: float, optional
Significance level for lower tail dependence index. The default is 0.05.
leaf_order: bool, optional
Indicates whether the clusters are ordered so that the distance between
successive leaves is minimal. The default is True.
d_ewma: float, optional
The smoothing factor of ewma methods.
The default is 0.94.
value : float, optional
Amount to allocate to portfolio, by default 1.0
table: bool, optional
True if plot table weights, by default False
"""
p = d_period(period, start, end)
if model == "HRP":
s_title = f"{p} Hierarchical risk parity portfolio"
s_title += " using " + codependence + " codependence,\n" + linkage
elif model == "HERC":
s_title = f"{p} Hierarchical equal risk contribution portfolio"
s_title += " using " + codependence + "\ncodependence," + linkage
elif model == "NCO":
s_title = f"{p} Nested clustered optimization"
s_title += " using " + codependence + " codependence,\n" + linkage
s_title += " linkage and " + risk_names[risk_measure] + " as risk measure\n"
weights, stock_returns = optimizer_model.get_hcp_portfolio(
stocks=stocks,
period=period,
start=start,
end=end,
log_returns=log_returns,
freq=freq,
maxnan=maxnan,
threshold=threshold,
method=method,
model=model,
codependence=codependence,
covariance=covariance,
objective=objectives_choices[objective],
risk_measure=risk_choices[risk_measure],
risk_free_rate=risk_free_rate,
risk_aversion=risk_aversion,
alpha=alpha,
a_sim=a_sim,
beta=beta,
b_sim=b_sim,
linkage=linkage,
k=k,
max_k=max_k,
bins_info=bins_info,
alpha_tail=alpha_tail,
leaf_order=leaf_order,
d_ewma=d_ewma,
value=value,
)
if weights is None:
console.print("\n", "There is no solution with this parameters")
return {}
if table:
console.print("\n", s_title)
display_weights(weights)
portfolio_performance(
weights=weights,
stock_returns=stock_returns,
risk_measure=risk_choices[risk_measure],
risk_free_rate=risk_free_rate,
alpha=alpha,
a_sim=a_sim,
beta=beta,
b_sim=b_sim,
freq=freq,
)
console.print("")
return weights
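The 'pearson' codependence option documented above turns a correlation matrix into the distance D_ij = sqrt(0.5 * (1 - rho_ij)) before hierarchical clustering. A minimal sketch with a toy correlation matrix (the numbers are hypothetical):

```python
import numpy as np

# Toy correlation matrix (hypothetical, 3 assets)
corr = np.array([[ 1.0, 0.8, -0.2],
                 [ 0.8, 1.0,  0.1],
                 [-0.2, 0.1,  1.0]])

# 'pearson' codependence distance: D_ij = sqrt(0.5 * (1 - rho_ij))
dist = np.sqrt(0.5 * (1.0 - corr))
# Perfectly correlated assets sit at distance 0, rho = -1 maps to distance 1,
# so highly correlated assets end up in the same cluster.
```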
@log_start_end(log=logger)
def display_hrp(
stocks: List[str],
period: str = "3y",
start: str = "",
end: str = "",
log_returns: bool = False,
freq: str = "D",
maxnan: float = 0.05,
threshold: float = 0,
method: str = "time",
codependence: str = "pearson",
covariance: str = "hist",
risk_measure: str = "mv",
risk_free_rate: float = 0.0,
alpha: float = 0.05,
a_sim: int = 100,
beta: float = None,
b_sim: int = None,
linkage: str = "ward",
k: int = 0,
max_k: int = 10,
bins_info: str = "KN",
alpha_tail: float = 0.05,
leaf_order: bool = True,
d_ewma: float = 0.94,
value: float = 1.0,
table: bool = False,
) -> Dict:
"""
Builds a hierarchical risk parity portfolio
Parameters
----------
stocks : List[str]
List of portfolio tickers
period : str
Period to look at returns from
start: str, optional
If not using period, start date string (YYYY-MM-DD)
end: str, optional
If not using period, end date string (YYYY-MM-DD). If empty use last
weekday.
log_returns: bool, optional
If True calculate log returns, else arithmetic returns. Default value
is False
freq: str, optional
The frequency used to calculate returns. Default value is 'D'. Possible
values are:
- 'D' for daily returns.
- 'W' for weekly returns.
- 'M' for monthly returns.
maxnan: float, optional
Max percentage of nan values accepted per asset to be included in
returns.
threshold: float, optional
Value used to replace outliers that are higher than the threshold.
method: str, optional
Method used to fill nan values. Default value is 'time'. For more information see
`interpolate <https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.interpolate.html>`_.
codependence: str, optional
The codependence or similarity matrix used to build the distance
metric and clusters. The default is 'pearson'. Possible values are:
- 'pearson': pearson correlation matrix. Distance formula:
:math:`D_{i,j} = \\sqrt{0.5(1-\rho^{pearson}_{i,j})}`.
- 'spearman': spearman correlation matrix. Distance formula:
:math:`D_{i,j} = \\sqrt{0.5(1-\rho^{spearman}_{i,j})}`.
- 'abs_pearson': absolute value pearson correlation matrix. Distance formula:
:math:`D_{i,j} = \\sqrt{(1-|\rho^{pearson}_{i,j}|)}`.
- 'abs_spearman': absolute value spearman correlation matrix. Distance formula:
:math:`D_{i,j} = \\sqrt{(1-|\rho^{spearman}_{i,j}|)}`.
- 'distance': distance correlation matrix. Distance formula:
:math:`D_{i,j} = \\sqrt{(1-\rho^{distance}_{i,j})}`.
- 'mutual_info': mutual information matrix. Distance used is variation information matrix.
- 'tail': lower tail dependence index matrix. Dissimilarity formula:
:math:`D_{i,j} = -\\log{\\lambda_{i,j}}`.
covariance: str, optional
The method used to estimate the covariance matrix.
The default is 'hist'. Possible values are:
- 'hist': use historical estimates.
- 'ewma1': use ewma with adjust=True. For more information see
`EWM <https://pandas.pydata.org/pandas-docs/stable/user_guide/window.html#exponentially-weighted-window>`_.
- 'ewma2': use ewma with adjust=False. For more information see
`EWM <https://pandas.pydata.org/pandas-docs/stable/user_guide/window.html#exponentially-weighted-window>`_.
- 'ledoit': use the Ledoit and Wolf Shrinkage method.
- 'oas': use the Oracle Approximation Shrinkage method.
- 'shrunk': use the basic Shrunk Covariance method.
- 'gl': use the basic Graphical Lasso Covariance method.
- 'jlogo': use the j-LoGo Covariance method. For more information see: :cite:`c-jLogo`.
- 'fixed': denoise using fixed method. For more information see chapter 2 of :cite:`c-MLforAM`.
- 'spectral': denoise using spectral method. For more information see chapter 2 of :cite:`c-MLforAM`.
- 'shrink': denoise using shrink method. For more information see chapter 2 of :cite:`c-MLforAM`.
risk_measure: str, optional
The risk measure used to optimize the portfolio. If model is 'NCO',
the risk measures available depend on the objective function.
The default is 'MV'. Possible values are:
- 'MV': Variance.
- 'MAD': Mean Absolute Deviation.
- 'MSV': Semi Standard Deviation.
- 'FLPM': First Lower Partial Moment (Omega Ratio).
- 'SLPM': Second Lower Partial Moment (Sortino Ratio).
- 'VaR': Value at Risk.
- 'CVaR': Conditional Value at Risk.
- 'TG': Tail Gini.
- 'EVaR': Entropic Value at Risk.
- 'WR': Worst Realization (Minimax).
- 'RG': Range of returns.
- 'CVRG': CVaR range of returns.
- 'TGRG': Tail Gini range of returns.
- 'MDD': Maximum Drawdown of uncompounded cumulative returns (Calmar Ratio).
- 'ADD': Average Drawdown of uncompounded cumulative returns.
- 'DaR': Drawdown at Risk of uncompounded cumulative returns.
- 'CDaR': Conditional Drawdown at Risk of uncompounded cumulative returns.
- 'EDaR': Entropic Drawdown at Risk of uncompounded cumulative returns.
- 'UCI': Ulcer Index of uncompounded cumulative returns.
- 'MDD_Rel': Maximum Drawdown of compounded cumulative returns (Calmar Ratio).
- 'ADD_Rel': Average Drawdown of compounded cumulative returns.
- 'DaR_Rel': Drawdown at Risk of compounded cumulative returns.
- 'CDaR_Rel': Conditional Drawdown at Risk of compounded cumulative returns.
- 'EDaR_Rel': Entropic Drawdown at Risk of compounded cumulative returns.
- 'UCI_Rel': Ulcer Index of compounded cumulative returns.
risk_free_rate: float, optional
Risk free rate; must be expressed in the same frequency as the assets returns.
Used for 'FLPM' and 'SLPM'. The default is 0.
alpha: float, optional
Significance level of VaR, CVaR, EVaR, DaR, CDaR, EDaR and Tail Gini of losses.
The default is 0.05.
a_sim: int, optional
Number of CVaRs used to approximate Tail Gini of losses. The default is 100.
beta: float, optional
Significance level of CVaR and Tail Gini of gains. If None it duplicates alpha value.
The default is None.
b_sim: int, optional
Number of CVaRs used to approximate Tail Gini of gains. If None it duplicates a_sim value.
The default is None.
linkage: str, optional
Linkage method of hierarchical clustering. For more information see
`linkage <https://docs.scipy.org/doc/scipy/reference/generated/scipy.cluster.hierarchy.linkage.html>`_.
The default is 'single'. Possible values are:
- 'single'.
- 'complete'.
- 'average'.
- 'weighted'.
- 'centroid'.
- 'median'.
- 'ward'.
- 'dbht': Direct Bubble Hierarchical Tree.
k: int, optional
Number of clusters. If provided, this value is used instead of the optimal
number of clusters computed with the two difference gap statistic.
The default is None.
max_k: int, optional
Max number of clusters used by the two difference gap statistic
to find the optimal number of clusters. The default is 10.
bins_info: str, optional
Number of bins used to calculate variation of information. The default
value is 'KN'. Possible values are:
- 'KN': Knuth's choice method. For more information see
`knuth_bin_width <https://docs.astropy.org/en/stable/api/astropy.stats.knuth_bin_width.html>`_.
- 'FD': Freedman–Diaconis' choice method. For more information see
`freedman_bin_width <https://docs.astropy.org/en/stable/api/astropy.stats.freedman_bin_width.html>`_.
- 'SC': Scott's choice method. For more information see
`scott_bin_width <https://docs.astropy.org/en/stable/api/astropy.stats.scott_bin_width.html>`_.
- 'HGR': Hacine-Gharbi and Ravier's choice method.
alpha_tail: float, optional
Significance level for lower tail dependence index. The default is 0.05.
leaf_order: bool, optional
Indicates whether the clusters are ordered so that the distance between
successive leaves is minimal. The default is True.
d_ewma: float, optional
The smoothing factor of the EWMA methods.
The default is 0.94.
value : float, optional
Amount to allocate to portfolio in long positions, by default 1.0
table: bool, optional
Whether to display a table with the weights, by default False
"""
weights = display_hcp(
stocks=stocks,
period=period,
start=start,
end=end,
log_returns=log_returns,
freq=freq,
maxnan=maxnan,
threshold=threshold,
method=method,
model="HRP",
codependence=codependence,
covariance=covariance,
risk_measure=risk_measure,
risk_free_rate=risk_free_rate,
alpha=alpha,
a_sim=a_sim,
beta=beta,
b_sim=b_sim,
linkage=linkage,
k=k,
max_k=max_k,
bins_info=bins_info,
alpha_tail=alpha_tail,
leaf_order=leaf_order,
d_ewma=d_ewma,
value=value,
table=table,
)
return weights
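The codependence distance formulas listed in the docstring above can be sketched as standalone helpers (a minimal illustration of the documented formulas only; these helpers are not part of the module):

```python
import math


def pearson_distance(rho: float) -> float:
    # D_ij = sqrt(0.5 * (1 - rho)), the 'pearson'/'spearman' formula
    return math.sqrt(0.5 * (1.0 - rho))


def abs_pearson_distance(rho: float) -> float:
    # D_ij = sqrt(1 - |rho|), the 'abs_pearson'/'abs_spearman' formula
    return math.sqrt(1.0 - abs(rho))
```

Perfectly correlated assets get distance 0 under the first formula, while perfectly anti-correlated assets get distance 1.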
@log_start_end(log=logger)
def display_herc(
stocks: List[str],
period: str = "3y",
start: str = "",
end: str = "",
log_returns: bool = False,
freq: str = "D",
maxnan: float = 0.05,
threshold: float = 0,
method: str = "time",
codependence: str = "pearson",
covariance: str = "hist",
risk_measure: str = "mv",
risk_free_rate: float = 0.0,
alpha: float = 0.05,
a_sim: int = 100,
beta: float = None,
b_sim: int = None,
linkage: str = "ward",
k: int = 0,
max_k: int = 10,
bins_info: str = "KN",
alpha_tail: float = 0.05,
leaf_order: bool = True,
d_ewma: float = 0.94,
value: float = 1.0,
table: bool = False,
) -> Dict:
"""
Builds a hierarchical equal risk contribution portfolio
Parameters
----------
stocks : List[str]
List of portfolio tickers
period : str
Period to look at returns from
start: str, optional
If not using period, start date string (YYYY-MM-DD)
end: str, optional
If not using period, end date string (YYYY-MM-DD). If empty use last
weekday.
log_returns: bool, optional
If True calculate log returns, else arithmetic returns. Default value
is False
freq: str, optional
The frequency used to calculate returns. Default value is 'D'. Possible
values are:
- 'D' for daily returns.
- 'W' for weekly returns.
- 'M' for monthly returns.
maxnan: float, optional
Max percentage of nan values accepted per asset to be included in
returns.
threshold: float, optional
Value used to replace outliers that are higher than the threshold.
method: str, optional
Method used to fill nan values. Default value is 'time'. For more information see
`interpolate <https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.interpolate.html>`_.
codependence: str, optional
The codependence or similarity matrix used to build the distance
metric and clusters. The default is 'pearson'. Possible values are:
- 'pearson': pearson correlation matrix. Distance formula:
:math:`D_{i,j} = \\sqrt{0.5(1-\rho^{pearson}_{i,j})}`.
- 'spearman': spearman correlation matrix. Distance formula:
:math:`D_{i,j} = \\sqrt{0.5(1-\rho^{spearman}_{i,j})}`.
- 'abs_pearson': absolute value pearson correlation matrix. Distance formula:
:math:`D_{i,j} = \\sqrt{(1-|\rho^{pearson}_{i,j}|)}`.
- 'abs_spearman': absolute value spearman correlation matrix. Distance formula:
:math:`D_{i,j} = \\sqrt{(1-|\rho^{spearman}_{i,j}|)}`.
- 'distance': distance correlation matrix. Distance formula:
:math:`D_{i,j} = \\sqrt{(1-\rho^{distance}_{i,j})}`.
- 'mutual_info': mutual information matrix. Distance used is variation information matrix.
- 'tail': lower tail dependence index matrix. Dissimilarity formula:
:math:`D_{i,j} = -\\log{\\lambda_{i,j}}`.
covariance: str, optional
The method used to estimate the covariance matrix:
The default is 'hist'. Possible values are:
- 'hist': use historical estimates.
- 'ewma1': use ewma with adjust=True. For more information see
`EWM <https://pandas.pydata.org/pandas-docs/stable/user_guide/window.html#exponentially-weighted-window>`_.
- 'ewma2': use ewma with adjust=False. For more information see
`EWM <https://pandas.pydata.org/pandas-docs/stable/user_guide/window.html#exponentially-weighted-window>`_.
- 'ledoit': use the Ledoit and Wolf Shrinkage method.
- 'oas': use the Oracle Approximation Shrinkage method.
- 'shrunk': use the basic Shrunk Covariance method.
- 'gl': use the basic Graphical Lasso Covariance method.
- 'jlogo': use the j-LoGo Covariance method. For more information see: :cite:`c-jLogo`.
- 'fixed': denoise using fixed method. For more information see chapter 2 of :cite:`c-MLforAM`.
- 'spectral': denoise using spectral method. For more information see chapter 2 of :cite:`c-MLforAM`.
- 'shrink': denoise using shrink method. For more information see chapter 2 of :cite:`c-MLforAM`.
risk_measure: str, optional
The risk measure used to optimize the portfolio. If model is 'NCO',
the risk measures available depend on the objective function.
The default is 'MV'. Possible values are:
- 'MV': Variance.
- 'MAD': Mean Absolute Deviation.
- 'MSV': Semi Standard Deviation.
- 'FLPM': First Lower Partial Moment (Omega Ratio).
- 'SLPM': Second Lower Partial Moment (Sortino Ratio).
- 'VaR': Value at Risk.
- 'CVaR': Conditional Value at Risk.
- 'TG': Tail Gini.
- 'EVaR': Entropic Value at Risk.
- 'WR': Worst Realization (Minimax).
- 'RG': Range of returns.
- 'CVRG': CVaR range of returns.
- 'TGRG': Tail Gini range of returns.
- 'MDD': Maximum Drawdown of uncompounded cumulative returns (Calmar Ratio).
- 'ADD': Average Drawdown of uncompounded cumulative returns.
- 'DaR': Drawdown at Risk of uncompounded cumulative returns.
- 'CDaR': Conditional Drawdown at Risk of uncompounded cumulative returns.
- 'EDaR': Entropic Drawdown at Risk of uncompounded cumulative returns.
- 'UCI': Ulcer Index of uncompounded cumulative returns.
- 'MDD_Rel': Maximum Drawdown of compounded cumulative returns (Calmar Ratio).
- 'ADD_Rel': Average Drawdown of compounded cumulative returns.
- 'DaR_Rel': Drawdown at Risk of compounded cumulative returns.
- 'CDaR_Rel': Conditional Drawdown at Risk of compounded cumulative returns.
- 'EDaR_Rel': Entropic Drawdown at Risk of compounded cumulative returns.
- 'UCI_Rel': Ulcer Index of compounded cumulative returns.
risk_free_rate: float, optional
Risk free rate; must be expressed in the same frequency as the assets returns.
Used for 'FLPM' and 'SLPM'. The default is 0.
alpha: float, optional
Significance level of VaR, CVaR, EVaR, DaR, CDaR, EDaR and Tail Gini of losses.
The default is 0.05.
a_sim: int, optional
Number of CVaRs used to approximate Tail Gini of losses. The default is 100.
beta: float, optional
Significance level of CVaR and Tail Gini of gains. If None it duplicates alpha value.
The default is None.
b_sim: int, optional
Number of CVaRs used to approximate Tail Gini of gains. If None it duplicates a_sim value.
The default is None.
linkage: str, optional
Linkage method of hierarchical clustering. For more information see
`linkage <https://docs.scipy.org/doc/scipy/reference/generated/scipy.cluster.hierarchy.linkage.html>`_.
The default is 'ward'. Possible values are:
- 'single'.
- 'complete'.
- 'average'.
- 'weighted'.
- 'centroid'.
- 'median'.
- 'ward'.
- 'dbht': Direct Bubble Hierarchical Tree.
k: int, optional
Number of clusters. If provided, this value is used instead of the optimal
number of clusters computed with the two difference gap statistic.
The default is 0.
max_k: int, optional
Max number of clusters used by the two difference gap statistic
to find the optimal number of clusters. The default is 10.
bins_info: str, optional
Number of bins used to calculate variation of information. The default
value is 'KN'. Possible values are:
- 'KN': Knuth's choice method. For more information see
`knuth_bin_width <https://docs.astropy.org/en/stable/api/astropy.stats.knuth_bin_width.html>`_.
- 'FD': Freedman–Diaconis' choice method. For more information see
`freedman_bin_width <https://docs.astropy.org/en/stable/api/astropy.stats.freedman_bin_width.html>`_.
- 'SC': Scott's choice method. For more information see
`scott_bin_width <https://docs.astropy.org/en/stable/api/astropy.stats.scott_bin_width.html>`_.
- 'HGR': Hacine-Gharbi and Ravier's choice method.
alpha_tail: float, optional
Significance level for lower tail dependence index. The default is 0.05.
leaf_order: bool, optional
Indicates whether the clusters are ordered so that the distance between
successive leaves is minimal. The default is True.
d_ewma: float, optional
The smoothing factor of the EWMA methods.
The default is 0.94.
value : float, optional
Amount to allocate to portfolio in long positions, by default 1.0
table: bool, optional
Whether to display a table with the weights, by default False
"""
weights = display_hcp(
stocks=stocks,
period=period,
start=start,
end=end,
log_returns=log_returns,
freq=freq,
maxnan=maxnan,
threshold=threshold,
method=method,
model="HERC",
codependence=codependence,
covariance=covariance,
risk_measure=risk_measure,
risk_free_rate=risk_free_rate,
alpha=alpha,
a_sim=a_sim,
beta=beta,
b_sim=b_sim,
linkage=linkage,
k=k,
max_k=max_k,
bins_info=bins_info,
alpha_tail=alpha_tail,
leaf_order=leaf_order,
d_ewma=d_ewma,
value=value,
table=table,
)
return weights
@log_start_end(log=logger)
def display_nco(
stocks: List[str],
period: str = "3y",
start: str = "",
end: str = "",
log_returns: bool = False,
freq: str = "D",
maxnan: float = 0.05,
threshold: float = 0,
method: str = "time",
codependence: str = "pearson",
covariance: str = "hist",
objective: str = "MinRisk",
risk_measure: str = "mv",
risk_free_rate: float = 0.0,
risk_aversion: float = 2.0,
alpha: float = 0.05,
linkage: str = "ward",
k: int = 0,
max_k: int = 10,
bins_info: str = "KN",
alpha_tail: float = 0.05,
leaf_order: bool = True,
d_ewma: float = 0.94,
value: float = 1.0,
table: bool = False,
) -> Dict:
"""
Builds a nested clustered optimization portfolio
Parameters
----------
stocks : List[str]
List of portfolio tickers
period : str
Period to look at returns from
start: str, optional
If not using period, start date string (YYYY-MM-DD)
end: str, optional
If not using period, end date string (YYYY-MM-DD). If empty use last
weekday.
log_returns: bool, optional
If True calculate log returns, else arithmetic returns. Default value
is False
freq: str, optional
The frequency used to calculate returns. Default value is 'D'. Possible
values are:
- 'D' for daily returns.
- 'W' for weekly returns.
- 'M' for monthly returns.
maxnan: float, optional
Max percentage of nan values accepted per asset to be included in
returns.
threshold: float, optional
Value used to replace outliers that are higher than the threshold.
method: str, optional
Method used to fill nan values. Default value is 'time'. For more information see
`interpolate <https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.interpolate.html>`_.
codependence: str, optional
The codependence or similarity matrix used to build the distance
metric and clusters. The default is 'pearson'. Possible values are:
- 'pearson': pearson correlation matrix. Distance formula:
:math:`D_{i,j} = \\sqrt{0.5(1-\rho^{pearson}_{i,j})}`.
- 'spearman': spearman correlation matrix. Distance formula:
:math:`D_{i,j} = \\sqrt{0.5(1-\rho^{spearman}_{i,j})}`.
- 'abs_pearson': absolute value pearson correlation matrix. Distance formula:
:math:`D_{i,j} = \\sqrt{(1-|\rho^{pearson}_{i,j}|)}`.
- 'abs_spearman': absolute value spearman correlation matrix. Distance formula:
:math:`D_{i,j} = \\sqrt{(1-|\rho^{spearman}_{i,j}|)}`.
- 'distance': distance correlation matrix. Distance formula:
:math:`D_{i,j} = \\sqrt{(1-\rho^{distance}_{i,j})}`.
- 'mutual_info': mutual information matrix. Distance used is variation information matrix.
- 'tail': lower tail dependence index matrix. Dissimilarity formula:
:math:`D_{i,j} = -\\log{\\lambda_{i,j}}`.
covariance: str, optional
The method used to estimate the covariance matrix:
The default is 'hist'. Possible values are:
- 'hist': use historical estimates.
- 'ewma1': use ewma with adjust=True. For more information see
`EWM <https://pandas.pydata.org/pandas-docs/stable/user_guide/window.html#exponentially-weighted-window>`_.
- 'ewma2': use ewma with adjust=False. For more information see
`EWM <https://pandas.pydata.org/pandas-docs/stable/user_guide/window.html#exponentially-weighted-window>`_.
- 'ledoit': use the Ledoit and Wolf Shrinkage method.
- 'oas': use the Oracle Approximation Shrinkage method.
- 'shrunk': use the basic Shrunk Covariance method.
- 'gl': use the basic Graphical Lasso Covariance method.
- 'jlogo': use the j-LoGo Covariance method. For more information see: :cite:`c-jLogo`.
- 'fixed': denoise using fixed method. For more information see chapter 2 of :cite:`c-MLforAM`.
- 'spectral': denoise using spectral method. For more information see chapter 2 of :cite:`c-MLforAM`.
- 'shrink': denoise using shrink method. For more information see chapter 2 of :cite:`c-MLforAM`.
objective: str, optional
Objective function used by the NCO model.
The default is 'MinRisk'. Possible values are:
- 'MinRisk': Minimize the selected risk measure.
- 'Utility': Maximize the risk averse utility function.
- 'Sharpe': Maximize the risk adjusted return ratio based on the selected risk measure.
- 'ERC': Equally risk contribution portfolio of the selected risk measure.
risk_measure: str, optional
The risk measure used to optimize the portfolio. If model is 'NCO',
the risk measures available depend on the objective function.
The default is 'MV'. Possible values are:
- 'MV': Variance.
- 'MAD': Mean Absolute Deviation.
- 'MSV': Semi Standard Deviation.
- 'FLPM': First Lower Partial Moment (Omega Ratio).
- 'SLPM': Second Lower Partial Moment (Sortino Ratio).
- 'VaR': Value at Risk.
- 'CVaR': Conditional Value at Risk.
- 'TG': Tail Gini.
- 'EVaR': Entropic Value at Risk.
- 'WR': Worst Realization (Minimax).
- 'RG': Range of returns.
- 'CVRG': CVaR range of returns.
- 'TGRG': Tail Gini range of returns.
- 'MDD': Maximum Drawdown of uncompounded cumulative returns (Calmar Ratio).
- 'ADD': Average Drawdown of uncompounded cumulative returns.
- 'DaR': Drawdown at Risk of uncompounded cumulative returns.
- 'CDaR': Conditional Drawdown at Risk of uncompounded cumulative returns.
- 'EDaR': Entropic Drawdown at Risk of uncompounded cumulative returns.
- 'UCI': Ulcer Index of uncompounded cumulative returns.
- 'MDD_Rel': Maximum Drawdown of compounded cumulative returns (Calmar Ratio).
- 'ADD_Rel': Average Drawdown of compounded cumulative returns.
- 'DaR_Rel': Drawdown at Risk of compounded cumulative returns.
- 'CDaR_Rel': Conditional Drawdown at Risk of compounded cumulative returns.
- 'EDaR_Rel': Entropic Drawdown at Risk of compounded cumulative returns.
- 'UCI_Rel': Ulcer Index of compounded cumulative returns.
risk_free_rate: float, optional
Risk free rate; must be expressed in the same frequency as the assets returns.
Used for 'FLPM' and 'SLPM'. The default is 0.
risk_aversion: float, optional
Risk aversion factor of the 'Utility' objective function.
The default is 2.0.
alpha: float, optional
Significance level of VaR, CVaR, EVaR, DaR, CDaR, EDaR and Tail Gini of losses.
The default is 0.05.
linkage: str, optional
Linkage method of hierarchical clustering. For more information see
`linkage <https://docs.scipy.org/doc/scipy/reference/generated/scipy.cluster.hierarchy.linkage.html>`_.
The default is 'ward'. Possible values are:
- 'single'.
- 'complete'.
- 'average'.
- 'weighted'.
- 'centroid'.
- 'median'.
- 'ward'.
- 'dbht': Direct Bubble Hierarchical Tree.
k: int, optional
Number of clusters. If provided, this value is used instead of the optimal
number of clusters computed with the two difference gap statistic.
The default is 0.
max_k: int, optional
Max number of clusters used by the two difference gap statistic
to find the optimal number of clusters. The default is 10.
bins_info: str, optional
Number of bins used to calculate variation of information. The default
value is 'KN'. Possible values are:
- 'KN': Knuth's choice method. For more information see
`knuth_bin_width <https://docs.astropy.org/en/stable/api/astropy.stats.knuth_bin_width.html>`_.
- 'FD': Freedman–Diaconis' choice method. For more information see
`freedman_bin_width <https://docs.astropy.org/en/stable/api/astropy.stats.freedman_bin_width.html>`_.
- 'SC': Scott's choice method. For more information see
`scott_bin_width <https://docs.astropy.org/en/stable/api/astropy.stats.scott_bin_width.html>`_.
- 'HGR': Hacine-Gharbi and Ravier's choice method.
alpha_tail: float, optional
Significance level for lower tail dependence index. The default is 0.05.
leaf_order: bool, optional
Indicates whether the clusters are ordered so that the distance between
successive leaves is minimal. The default is True.
d_ewma: float, optional
The smoothing factor of the EWMA methods.
The default is 0.94.
value : float, optional
Amount to allocate to portfolio in long positions, by default 1.0
table: bool, optional
Whether to display a table with the weights, by default False
"""
weights = display_hcp(
stocks=stocks,
period=period,
start=start,
end=end,
log_returns=log_returns,
freq=freq,
maxnan=maxnan,
threshold=threshold,
method=method,
model="NCO",
codependence=codependence,
covariance=covariance,
objective=objective,
risk_measure=risk_measure,
risk_free_rate=risk_free_rate,
risk_aversion=risk_aversion,
alpha=alpha,
linkage=linkage,
k=k,
max_k=max_k,
bins_info=bins_info,
alpha_tail=alpha_tail,
leaf_order=leaf_order,
d_ewma=d_ewma,
value=value,
table=table,
)
return weights
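The `d_ewma` smoothing factor (default 0.94) drives a recursive exponentially weighted average. A minimal sketch of that recursion, assuming it matches the pandas `ewm(alpha=1 - d, adjust=False)` form (an assumption about how the library applies `d`):

```python
from typing import List


def ewma(series: List[float], d: float = 0.94) -> List[float]:
    # s_t = d * s_{t-1} + (1 - d) * x_t, seeded with the first observation
    out = [series[0]]
    for x in series[1:]:
        out.append(d * out[-1] + (1.0 - d) * x)
    return out
```

A constant series stays (numerically) constant, while a shock decays geometrically at rate `d`.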
@log_start_end(log=logger)
def my_autopct(x):
"""Function for autopct of plt.pie. This results in values not being printed in the pie if they are 'too small'"""
if x > 4:
return f"{x:.2f} %"
return ""
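`my_autopct` hard-codes the 4 % cut-off; a generalized variant with a configurable threshold (a hypothetical helper, shown only to illustrate the callback contract that `plt.pie(autopct=...)` expects):

```python
def autopct_above(threshold: float = 4.0):
    # Build a callback for plt.pie(autopct=...) that hides the label
    # of any slice whose percentage is at or below `threshold`.
    def fmt(pct: float) -> str:
        return f"{pct:.2f} %" if pct > threshold else ""
    return fmt
```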
@log_start_end(log=logger)
def pie_chart_weights(
weights: dict, title_opt: str, external_axes: Optional[List[plt.Axes]]
):
"""Show a pie chart of holdings
Parameters
----------
weights: dict
Weights to display, where keys are tickers, and values are either weights or values if -v specified
title_opt: str
Title to be used on the plot title
external_axes: Optional[List[plt.Axes]]
Optional external axes to plot data on
"""
if not weights:
return
init_stocks = list(weights.keys())
init_sizes = list(weights.values())
stocks = []
sizes = []
for stock, size in zip(init_stocks, init_sizes):
if size > 0:
stocks.append(stock)
sizes.append(size)
total_size = np.sum(sizes)
colors = theme.get_colors()
if external_axes is None:
_, ax = plt.subplots(figsize=plot_autoscale(), dpi=PLOT_DPI)
else:
ax = external_axes[0]
if math.isclose(sum(sizes), 1, rel_tol=0.1):
_, _, autotexts = ax.pie(
sizes,
labels=stocks,
autopct=my_autopct,
colors=colors,
textprops=dict(color="white"),
wedgeprops={"linewidth": 0.5, "edgecolor": "white"},
labeldistance=1.05,
startangle=45,
normalize=True,
)
plt.setp(autotexts, color="white", fontweight="bold")
else:
_, _, autotexts = ax.pie(
sizes,
labels=stocks,
autopct="",
colors=colors,
textprops=dict(color="white"),
wedgeprops={"linewidth": 0.5, "edgecolor": "white"},
labeldistance=1.05,
startangle=45,
normalize=True,
)
plt.setp(autotexts, color="white", fontweight="bold")
for i, a in enumerate(autotexts):
if sizes[i] / total_size > 0.05:
a.set_text(f"{sizes[i]:.2f}")
else:
a.set_text("")
ax.axis("equal")
# leg1 = ax.legend(
# wedges,
# [str(s) for s in stocks],
# title=" Ticker",
# loc="upper left",
# bbox_to_anchor=(0.80, 0, 0.5, 1),
# frameon=False,
# )
# leg2 = ax.legend(
# wedges,
# [
# f"{' ' if ((100*s/total_size) < 10) else ''}{100*s/total_size:.2f}%"
# for s in sizes
# ],
# title=" ",
# loc="upper left",
# handlelength=0,
# bbox_to_anchor=(0.91, 0, 0.5, 1),
# frameon=False,
# )
# ax.add_artist(leg1)
# ax.add_artist(leg2)
plt.setp(autotexts, size=8, weight="bold")
title = "Portfolio - " + title_opt + "\n"
title += "Portfolio Composition"
ax.set_title(title)
if external_axes is None:
theme.visualize_output()
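Before plotting, `pie_chart_weights` drops non-positive holdings; that filtering step can be isolated as follows (a sketch mirroring the loop above, not module code):

```python
from typing import Dict, List, Tuple


def positive_weights(weights: Dict[str, float]) -> Tuple[List[str], List[float]]:
    # Keep only strictly positive positions, preserving insertion order,
    # mirroring the stock/size filtering loop in pie_chart_weights.
    stocks = [s for s, w in weights.items() if w > 0]
    sizes = [w for w in weights.values() if w > 0]
    return stocks, sizes
```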
@log_start_end(log=logger)
def additional_plots(
weights,
stock_returns: pd.DataFrame,
category: Dict,
title_opt: str,
freq: str,
risk_measure: str,
risk_free_rate: float,
alpha: float,
a_sim: float,
beta: float,
b_sim: float,
pie: bool,
hist: bool,
dd: bool,
rc_chart: bool,
heat: bool,
external_axes: Optional[List[plt.Axes]],
):
"""
Plot additional charts
Parameters
----------
weights: Dict
Dict of portfolio weights
stock_returns: pd.DataFrame
DataFrame of stock returns
category: Dict
Dictionary mapping each ticker to a category used to aggregate the weights
title_opt: str
Title to be used on the pie chart
freq: str, optional
The frequency used to calculate returns. Default value is 'D'. Possible
values are:
- 'D' for daily returns.
- 'W' for weekly returns.
- 'M' for monthly returns.
risk_measure: str, optional
The risk measure used to optimize the portfolio. If model is 'NCO',
the risk measures available depend on the objective function.
The default is 'MV'. Possible values are:
- 'MV': Variance.
- 'MAD': Mean Absolute Deviation.
- 'MSV': Semi Standard Deviation.
- 'FLPM': First Lower Partial Moment (Omega Ratio).
- 'SLPM': Second Lower Partial Moment (Sortino Ratio).
- 'VaR': Value at Risk.
- 'CVaR': Conditional Value at Risk.
- 'TG': Tail Gini.
- 'EVaR': Entropic Value at Risk.
- 'WR': Worst Realization (Minimax).
- 'RG': Range of returns.
- 'CVRG': CVaR range of returns.
- 'TGRG': Tail Gini range of returns.
- 'MDD': Maximum Drawdown of uncompounded cumulative returns (Calmar Ratio).
- 'ADD': Average Drawdown of uncompounded cumulative returns.
- 'DaR': Drawdown at Risk of uncompounded cumulative returns.
- 'CDaR': Conditional Drawdown at Risk of uncompounded cumulative returns.
- 'EDaR': Entropic Drawdown at Risk of uncompounded cumulative returns.
- 'UCI': Ulcer Index of uncompounded cumulative returns.
- 'MDD_Rel': Maximum Drawdown of compounded cumulative returns (Calmar Ratio).
- 'ADD_Rel': Average Drawdown of compounded cumulative returns.
- 'DaR_Rel': Drawdown at Risk of compounded cumulative returns.
- 'CDaR_Rel': Conditional Drawdown at Risk of compounded cumulative returns.
- 'EDaR_Rel': Entropic Drawdown at Risk of compounded cumulative returns.
- 'UCI_Rel': Ulcer Index of compounded cumulative returns.
risk_free_rate: float, optional
Risk free rate; must be expressed in the same frequency as the assets returns.
Used for 'FLPM' and 'SLPM'. The default is 0.
alpha: float, optional
Significance level of VaR, CVaR, EVaR, DaR, CDaR, EDaR and Tail Gini of losses.
The default is 0.05.
a_sim: float, optional
Number of CVaRs used to approximate Tail Gini of losses. The default is 100.
beta: float, optional
Significance level of CVaR and Tail Gini of gains. If None it duplicates alpha value.
The default is None.
b_sim: float, optional
Number of CVaRs used to approximate Tail Gini of gains. If None it duplicates a_sim value.
The default is None.
pie : bool, optional
Display a pie chart of values, by default False
hist : bool, optional
Display a histogram with risk measures, by default False
dd : bool, optional
Display a drawdown chart with risk measures, by default False
rc_chart : bool, optional
Display a risk contribution chart for assets, by default False
heat : bool, optional
Display a heatmap of correlation matrix with dendrogram, by default False
external_axes: Optional[List[plt.Axes]]
Optional axes to plot data on
"""
if category is not None:
weights = pd.DataFrame.from_dict(
data=weights, orient="index", columns=["value"], dtype=float
)
category_df = pd.DataFrame.from_dict(
data=category, orient="index", columns=["category"]
)
weights = weights.join(category_df, how="inner")
weights.sort_index(inplace=True)
# Calculating classes returns
classes = list(set(weights["category"]))
weights_classes = weights.groupby(["category"]).sum()
matrix_classes = np.zeros((len(weights), len(classes)))
labels = weights["category"].tolist()
j_value = 0
for i in classes:
matrix_classes[:, j_value] = np.array(
[1 if x == i else 0 for x in labels], dtype=float
)
matrix_classes[:, j_value] = (
matrix_classes[:, j_value]
* weights["value"]
/ weights_classes.loc[i, "value"]
)
j_value += 1
matrix_classes = pd.DataFrame(
matrix_classes, columns=classes, index=weights.index
)
stock_returns = stock_returns @ matrix_classes
weights = weights_classes["value"].copy()
weights.replace(0, np.nan, inplace=True)
weights.dropna(inplace=True)
weights.sort_values(ascending=True, inplace=True)
stock_returns = stock_returns[weights.index.tolist()]
stock_returns.columns = [i.title() for i in stock_returns.columns]
weights.index = [i.title() for i in weights.index]
weights = weights.to_dict()
colors = theme.get_colors()
if pie:
pie_chart_weights(weights, title_opt, external_axes)
if hist:
if external_axes is None:
_, ax = plt.subplots(figsize=plot_autoscale(), dpi=PLOT_DPI)
else:
ax = external_axes[0]
ax = rp.plot_hist(
stock_returns, w=pd.Series(weights).to_frame(), alpha=alpha, ax=ax
)
ax.legend(fontsize="x-small", loc="best")
# Changing colors
for i in ax.get_children()[:-1]:
if isinstance(i, matplotlib.patches.Rectangle):
i.set_color(colors[0])
i.set_alpha(0.7)
k = 1
for i, j in zip(ax.get_legend().get_lines()[::-1], ax.get_lines()[::-1]):
i.set_color(colors[k])
j.set_color(colors[k])
k += 1
title = "Portfolio - " + title_opt + "\n"
title += ax.get_title(loc="left")
ax.set_title(title)
if external_axes is None:
theme.visualize_output()
if dd:
if external_axes is None:
_, ax = plt.subplots(figsize=plot_autoscale(), dpi=PLOT_DPI)
else:
ax = external_axes[0]
nav = stock_returns.cumsum()
ax = rp.plot_drawdown(
nav=nav, w=pd.Series(weights).to_frame(), alpha=alpha, ax=ax
)
ax[0].remove()
ax = ax[1]
fig = ax.get_figure()
gs = GridSpec(1, 1, figure=fig)
ax.set_position(gs[0].get_position(fig))
ax.set_subplotspec(gs[0])
# Changing colors
ax.get_lines()[0].set_color(colors[0])
k = 1
for i, j in zip(ax.get_legend().get_lines()[::-1], ax.get_lines()[1:][::-1]):
i.set_color(colors[k])
j.set_color(colors[k])
k += 1
ax.get_children()[1].set_facecolor(colors[0])
ax.get_children()[1].set_alpha(0.7)
title = "Portfolio - " + title_opt + "\n"
title += ax.get_title(loc="left")
ax.set_title(title)
if external_axes is None:
theme.visualize_output()
if rc_chart:
if external_axes is None:
_, ax = plt.subplots(figsize=plot_autoscale(), dpi=PLOT_DPI)
else:
ax = external_axes[0]
ax = rp.plot_risk_con(
w=pd.Series(weights).to_frame(),
cov=stock_returns.cov(),
returns=stock_returns,
rm=risk_choices[risk_measure],
rf=risk_free_rate,
alpha=alpha,
a_sim=a_sim,
beta=beta,
b_sim=b_sim,
color=colors[1],
t_factor=time_factor[freq.upper()],
ax=ax,
)
# Changing colors
for i in ax.get_children()[:-1]:
if isinstance(i, matplotlib.patches.Rectangle):
i.set_width(i.get_width())
i.set_color(colors[0])
title = "Portfolio - " + title_opt + "\n"
title += ax.get_title(loc="left")
ax.set_title(title)
if external_axes is None:
theme.visualize_output()
if heat:
if external_axes is None:
_, ax = plt.subplots(figsize=plot_autoscale(), dpi=PLOT_DPI)
else:
ax = external_axes[0]
if len(weights) <= 3:
number_of_clusters = len(weights)
else:
number_of_clusters = None
ax = rp.plot_clusters(
returns=stock_returns,
codependence="pearson",
linkage="ward",
k=number_of_clusters,
max_k=10,
leaf_order=True,
dendrogram=True,
cmap="RdYlBu",
# linecolor='tab:purple',
ax=ax,
)
ax = ax.get_figure().axes
ax[0].grid(False)
ax[0].axis("off")
if category is None:
# Vertical dendrogram
l, b, w, h = ax[4].get_position().bounds
l1 = l * 0.5
w1 = w * 0.2
b1 = h * 0.05
ax[4].set_position([l - l1, b + b1, w * 0.8, h * 0.95])
# Heatmap
l, b, w, h = ax[1].get_position().bounds
ax[1].set_position([l - l1 - w1, b + b1, w * 0.8, h * 0.95])
w2 = w * 0.2
# colorbar
l, b, w, h = ax[2].get_position().bounds
ax[2].set_position([l - l1 - w1 - w2, b, w, h])
# Horizontal dendrogram
l, b, w, h = ax[3].get_position().bounds
ax[3].set_position([l - l1 - w1, b, w * 0.8, h])
else:
# Vertical dendrogram
l, b, w, h = ax[4].get_position().bounds
l1 = l * 0.5
w1 = w * 0.4
b1 = h * 0.2
ax[4].set_position([l - l1, b + b1, w * 0.6, h * 0.8])
# Heatmap
l, b, w, h = ax[1].get_position().bounds
ax[1].set_position([l - l1 - w1, b + b1, w * 0.6, h * 0.8])
w2 = w * 0.05
# colorbar
l, b, w, h = ax[2].get_position().bounds
ax[2].set_position([l - l1 - w1 - w2, b, w, h])
# Horizontal dendrogram
l, b, w, h = ax[3].get_position().bounds
ax[3].set_position([l - l1 - w1, b, w * 0.6, h])
title = "Portfolio - " + title_opt + "\n"
title += ax[3].get_title(loc="left")
ax[3].set_title(title)
if external_axes is None:
theme.visualize_output(force_tight_layout=True)
| 37.629004 | 119 | 0.619481 | 18,877 | 149,199 | 4.804471 | 0.038936 | 0.021214 | 0.024809 | 0.028943 | 0.891239 | 0.878239 | 0.865691 | 0.858745 | 0.854092 | 0.849406 | 0 | 0.009224 | 0.278434 | 149,199 | 3,964 | 120 | 37.638496 | 0.833165 | 0.580212 | 0 | 0.660205 | 0 | 0 | 0.089321 | 0.006669 | 0 | 0 | 0 | 0 | 0 | 1 | 0.014823 | false | 0 | 0.011403 | 0 | 0.0439 | 0.021095 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
49fa0cf7d71b3489d894d3af84c53ca8fdef5152 | 2,711 | py | Python | calc.py | NagarBoomer123/Python | f0601f7c9ebca92a6564c64cbcfef9bede44aa9f | [
"MIT"
] | null | null | null | calc.py | NagarBoomer123/Python | f0601f7c9ebca92a6564c64cbcfef9bede44aa9f | [
"MIT"
] | 1 | 2020-10-01T05:10:10.000Z | 2020-10-01T05:11:55.000Z | calc.py | NagarBoomer123/Python | f0601f7c9ebca92a6564c64cbcfef9bede44aa9f | [
"MIT"
] | 1 | 2020-10-01T05:07:17.000Z | 2020-10-01T05:07:17.000Z | from tkinter import *
root = Tk()
root.geometry("333x470")
root.title("Calculator")
scvalue = StringVar()
scvalue.set("")
screen = Entry(root, textvar=scvalue, font="lucida 30 bold")
screen.pack(fill=X, ipadx=8, padx=3, pady=3)
def click(event):
    text = event.widget.cget("text")
    print(text)
    if text == "=":
        # The expression normally comes only from the calculator's own
        # buttons, so eval() is tolerable here; never use it on untrusted
        # input. Guard against malformed expressions like "3++" or "1/0".
        try:
            scvalue.set(eval(screen.get()))
        except Exception:
            scvalue.set("Error")
        screen.update()
    elif text == "c":
        scvalue.set("")
        screen.update()
    else:
        scvalue.set(scvalue.get() + text)
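# The eval()-based handler above is fine for button-only input, but for
# anything user-typed a restricted evaluator is safer. A minimal sketch of
# such a helper (hypothetical, not part of the original script): it walks
# the AST and allows only numeric literals, unary minus, and + - * /.

```python
import ast
import operator

# Whitelist of the binary operators the calculator exposes.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}

def safe_eval(expr):
    """Evaluate a basic arithmetic expression without using eval()."""
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
            return -_eval(node.operand)
        raise ValueError("unsupported expression")
    return _eval(ast.parse(expr, mode="eval"))
```

# safe_eval("1+2*3") returns 7, while safe_eval("__import__('os')") raises
# ValueError instead of executing arbitrary code.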
# creating the buttons row by row; each row gets its own grey Frame
rows = [
    ("1", "2", "3"),
    ("4", "5", "6"),
    ("7", "8", "9"),
    ("c", "-", "+"),
    ("*", "0", "="),
]
for row in rows:
    f = Frame(root, bg="grey")
    for label in row:
        b = Button(f, text=label, font="lucida 30 bold", padx=2, pady=2)
        b.pack(side=LEFT)
        b.bind("<Button-1>", click)
    f.pack()
root.mainloop() | 18.958042 | 63 | 0.587975 | 471 | 2,711 | 3.384289 | 0.140127 | 0.023839 | 0.120452 | 0.160602 | 0.736512 | 0.723965 | 0.723965 | 0.723965 | 0.723965 | 0.706399 | 0 | 0.04605 | 0.182958 | 2,711 | 143 | 64 | 18.958042 | 0.673589 | 0.010697 | 0 | 0.622222 | 0 | 0 | 0.170414 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.011111 | false | 0 | 0.011111 | 0 | 0.022222 | 0.011111 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3fad0c6f248abb28c5de778d7fa7ef2985167870 | 38 | py | Python | src/font_scanner/__init__.py | salieri/2d-enhance | 9ec2f3c63161d44ce0b25540eccf26e2c5cdccf0 | [
"MIT"
] | null | null | null | src/font_scanner/__init__.py | salieri/2d-enhance | 9ec2f3c63161d44ce0b25540eccf26e2c5cdccf0 | [
"MIT"
] | 3 | 2021-06-08T20:14:32.000Z | 2022-03-11T23:56:59.000Z | src/font_scanner/__init__.py | salieri/2d-enhance | 9ec2f3c63161d44ce0b25540eccf26e2c5cdccf0 | [
"MIT"
] | null | null | null | from .font_library import FontLibrary
| 19 | 37 | 0.868421 | 5 | 38 | 6.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 38 | 1 | 38 | 38 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3fb63a4050c74a92dd63729b73b55dd5a11975e7 | 32 | py | Python | shadowrun_prototype/defs/coll.py | holy-crust/reclaimer | 0aa693da3866ce7999c68d5f71f31a9c932cdb2c | [
"MIT"
] | null | null | null | shadowrun_prototype/defs/coll.py | holy-crust/reclaimer | 0aa693da3866ce7999c68d5f71f31a9c932cdb2c | [
"MIT"
] | null | null | null | shadowrun_prototype/defs/coll.py | holy-crust/reclaimer | 0aa693da3866ce7999c68d5f71f31a9c932cdb2c | [
"MIT"
] | null | null | null | from ...hek.defs.coll import *
| 16 | 31 | 0.65625 | 5 | 32 | 4.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15625 | 32 | 1 | 32 | 32 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3fbec7d8148d48163ba260fa6da625c62c20a613 | 8,484 | py | Python | imcsdk/mometa/memory/MemoryPersistentMemoryRegion.py | TetrationAnalytics/imcsdk | d86e47831f294dc9fa5e99b9a92abceac2502d76 | [
"Apache-2.0"
] | null | null | null | imcsdk/mometa/memory/MemoryPersistentMemoryRegion.py | TetrationAnalytics/imcsdk | d86e47831f294dc9fa5e99b9a92abceac2502d76 | [
"Apache-2.0"
] | null | null | null | imcsdk/mometa/memory/MemoryPersistentMemoryRegion.py | TetrationAnalytics/imcsdk | d86e47831f294dc9fa5e99b9a92abceac2502d76 | [
"Apache-2.0"
] | 2 | 2016-05-26T02:05:46.000Z | 2017-09-13T05:13:25.000Z | """This module contains the general information for MemoryPersistentMemoryRegion ManagedObject."""
from ...imcmo import ManagedObject
from ...imccoremeta import MoPropertyMeta, MoMeta
from ...imcmeta import VersionMeta
class MemoryPersistentMemoryRegionConsts:
    HEALTH_STATE_CRITICAL_FAILURE = "CriticalFailure"
    HEALTH_STATE_HEALTHY = "Healthy"
    HEALTH_STATE_MINOR_FAILURE = "MinorFailure"
    HEALTH_STATE_NON_FUNCTIONAL = "NonFunctional"
    HEALTH_STATE_UNKNOWN = "Unknown"
    HEALTH_STATE_UNMANAGABLE = "Unmanagable"
    HEALTH_STATE_UNRECOVERABLE_ERROR = "UnrecoverableError"
    ID_UNSPECIFIED = "unspecified"
    SOCKET_ID_1 = "1"
    SOCKET_ID_2 = "2"
    SOCKET_ID_3 = "3"
    SOCKET_ID_4 = "4"
    SOCKET_LOCAL_DIMM_NUMBER_10 = "10"
    SOCKET_LOCAL_DIMM_NUMBER_12 = "12"
    SOCKET_LOCAL_DIMM_NUMBER_2 = "2"
    SOCKET_LOCAL_DIMM_NUMBER_4 = "4"
    SOCKET_LOCAL_DIMM_NUMBER_6 = "6"
    SOCKET_LOCAL_DIMM_NUMBER_8 = "8"
    SOCKET_LOCAL_DIMM_NUMBER_NOT_APPLICABLE = "Not applicable"
class MemoryPersistentMemoryRegion(ManagedObject):
"""This is MemoryPersistentMemoryRegion class."""
consts = MemoryPersistentMemoryRegionConsts()
naming_props = set(['id'])
mo_meta = {
"classic": MoMeta("MemoryPersistentMemoryRegion", "memoryPersistentMemoryRegion", "region-[id]", VersionMeta.Version404b, "OutputOnly", 0xf, [], ["admin", "read-only", "user"], ['memoryPersistentMemoryConfiguration'], ['memoryPersistentMemoryNamespace'], [None]),
"modular": MoMeta("MemoryPersistentMemoryRegion", "memoryPersistentMemoryRegion", "region-[id]", VersionMeta.Version404b, "OutputOnly", 0xf, [], ["admin", "read-only", "user"], ['memoryPersistentMemoryConfiguration'], ['memoryPersistentMemoryNamespace'], [None])
    }
    prop_meta = {
        "classic": {
"child_action": MoPropertyMeta("child_action", "childAction", "string", VersionMeta.Version404b, MoPropertyMeta.INTERNAL, None, None, None, None, [], []),
"dimm_locator_ids": MoPropertyMeta("dimm_locator_ids", "dimmLocatorIds", "string", VersionMeta.Version404b, MoPropertyMeta.READ_ONLY, None, 0, 510, None, [], []),
"dn": MoPropertyMeta("dn", "dn", "string", VersionMeta.Version404b, MoPropertyMeta.READ_ONLY, 0x2, 0, 255, None, [], []),
"free_capacity": MoPropertyMeta("free_capacity", "freeCapacity", "long", VersionMeta.Version404b, MoPropertyMeta.READ_ONLY, None, None, None, None, [], []),
"health_state": MoPropertyMeta("health_state", "healthState", "string", VersionMeta.Version404b, MoPropertyMeta.READ_ONLY, None, None, None, None, ["CriticalFailure", "Healthy", "MinorFailure", "NonFunctional", "Unknown", "Unmanagable", "UnrecoverableError"], []),
"id": MoPropertyMeta("id", "id", "string", VersionMeta.Version404b, MoPropertyMeta.NAMING, None, None, None, None, ["unspecified"], ["0-4294967295"]),
"interleaved_set_id": MoPropertyMeta("interleaved_set_id", "interleavedSetId", "string", VersionMeta.Version404b, MoPropertyMeta.READ_ONLY, None, 0, 510, None, [], []),
"persistent_memory_type": MoPropertyMeta("persistent_memory_type", "persistentMemoryType", "string", VersionMeta.Version404b, MoPropertyMeta.READ_ONLY, None, 0, 510, None, [], []),
"rn": MoPropertyMeta("rn", "rn", "string", VersionMeta.Version404b, MoPropertyMeta.READ_ONLY, 0x4, 0, 255, None, [], []),
"socket_id": MoPropertyMeta("socket_id", "socketId", "string", VersionMeta.Version404b, MoPropertyMeta.READ_ONLY, None, None, None, None, ["1", "2", "3", "4"], []),
"socket_local_dimm_number": MoPropertyMeta("socket_local_dimm_number", "socketLocalDimmNumber", "string", VersionMeta.Version404b, MoPropertyMeta.READ_ONLY, None, None, None, None, ["10", "12", "2", "4", "6", "8", "Not applicable"], []),
"status": MoPropertyMeta("status", "status", "string", VersionMeta.Version404b, MoPropertyMeta.READ_ONLY, 0x8, None, None, r"""((removed|created|modified|deleted),){0,3}(removed|created|modified|deleted){0,1}""", [], []),
"total_capacity": MoPropertyMeta("total_capacity", "totalCapacity", "long", VersionMeta.Version404b, MoPropertyMeta.READ_ONLY, None, None, None, None, [], []),
        },
        "modular": {
"child_action": MoPropertyMeta("child_action", "childAction", "string", VersionMeta.Version404b, MoPropertyMeta.INTERNAL, None, None, None, None, [], []),
"dimm_locator_ids": MoPropertyMeta("dimm_locator_ids", "dimmLocatorIds", "string", VersionMeta.Version404b, MoPropertyMeta.READ_ONLY, None, 0, 510, None, [], []),
"dn": MoPropertyMeta("dn", "dn", "string", VersionMeta.Version404b, MoPropertyMeta.READ_ONLY, 0x2, 0, 255, None, [], []),
"free_capacity": MoPropertyMeta("free_capacity", "freeCapacity", "long", VersionMeta.Version404b, MoPropertyMeta.READ_ONLY, None, None, None, None, [], []),
"health_state": MoPropertyMeta("health_state", "healthState", "string", VersionMeta.Version404b, MoPropertyMeta.READ_ONLY, None, None, None, None, ["CriticalFailure", "Healthy", "MinorFailure", "NonFunctional", "Unknown", "Unmanagable", "UnrecoverableError"], []),
"id": MoPropertyMeta("id", "id", "string", VersionMeta.Version404b, MoPropertyMeta.NAMING, None, None, None, None, ["unspecified"], ["0-4294967295"]),
"interleaved_set_id": MoPropertyMeta("interleaved_set_id", "interleavedSetId", "string", VersionMeta.Version404b, MoPropertyMeta.READ_ONLY, None, 0, 510, None, [], []),
"persistent_memory_type": MoPropertyMeta("persistent_memory_type", "persistentMemoryType", "string", VersionMeta.Version404b, MoPropertyMeta.READ_ONLY, None, 0, 510, None, [], []),
"rn": MoPropertyMeta("rn", "rn", "string", VersionMeta.Version404b, MoPropertyMeta.READ_ONLY, 0x4, 0, 255, None, [], []),
"socket_id": MoPropertyMeta("socket_id", "socketId", "string", VersionMeta.Version404b, MoPropertyMeta.READ_ONLY, None, None, None, None, ["1", "2", "3", "4"], []),
"socket_local_dimm_number": MoPropertyMeta("socket_local_dimm_number", "socketLocalDimmNumber", "string", VersionMeta.Version404b, MoPropertyMeta.READ_ONLY, None, None, None, None, ["10", "12", "2", "4", "6", "8", "Not applicable"], []),
"status": MoPropertyMeta("status", "status", "string", VersionMeta.Version404b, MoPropertyMeta.READ_ONLY, 0x8, None, None, r"""((removed|created|modified|deleted),){0,3}(removed|created|modified|deleted){0,1}""", [], []),
"total_capacity": MoPropertyMeta("total_capacity", "totalCapacity", "long", VersionMeta.Version404b, MoPropertyMeta.READ_ONLY, None, None, None, None, [], []),
        },
    }
    prop_map = {
        "classic": {
"childAction": "child_action",
"dimmLocatorIds": "dimm_locator_ids",
"dn": "dn",
"freeCapacity": "free_capacity",
"healthState": "health_state",
"id": "id",
"interleavedSetId": "interleaved_set_id",
"persistentMemoryType": "persistent_memory_type",
"rn": "rn",
"socketId": "socket_id",
"socketLocalDimmNumber": "socket_local_dimm_number",
"status": "status",
"totalCapacity": "total_capacity",
        },
        "modular": {
"childAction": "child_action",
"dimmLocatorIds": "dimm_locator_ids",
"dn": "dn",
"freeCapacity": "free_capacity",
"healthState": "health_state",
"id": "id",
"interleavedSetId": "interleaved_set_id",
"persistentMemoryType": "persistent_memory_type",
"rn": "rn",
"socketId": "socket_id",
"socketLocalDimmNumber": "socket_local_dimm_number",
"status": "status",
"totalCapacity": "total_capacity",
        },
    }
    def __init__(self, parent_mo_or_dn, id, **kwargs):
        self._dirty_mask = 0
        self.id = id
        self.child_action = None
        self.dimm_locator_ids = None
        self.free_capacity = None
        self.health_state = None
        self.interleaved_set_id = None
        self.persistent_memory_type = None
        self.socket_id = None
        self.socket_local_dimm_number = None
        self.status = None
        self.total_capacity = None

        ManagedObject.__init__(self, "MemoryPersistentMemoryRegion", parent_mo_or_dn, **kwargs)
| 65.261538 | 276 | 0.661363 | 809 | 8,484 | 6.693449 | 0.147095 | 0.065005 | 0.06205 | 0.170637 | 0.77747 | 0.77747 | 0.768975 | 0.768975 | 0.768975 | 0.768975 | 0 | 0.031322 | 0.183404 | 8,484 | 129 | 277 | 65.767442 | 0.750289 | 0.01603 | 0 | 0.518519 | 0 | 0.018519 | 0.306142 | 0.09525 | 0 | 0 | 0.002879 | 0 | 0 | 1 | 0.009259 | false | 0 | 0.027778 | 0 | 0.277778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3fd6a0a3465ad87670cbd01be6ea62ffe1aff6d6 | 79 | py | Python | fpipe/meta/port.py | vkvam/fpipe | 2905095f46923c6c4c460c3d154544b654136df4 | [
"MIT"
] | 18 | 2019-12-16T17:55:57.000Z | 2020-10-21T23:25:40.000Z | fpipe/meta/port.py | vkvam/fpipe | 2905095f46923c6c4c460c3d154544b654136df4 | [
"MIT"
] | 23 | 2019-12-11T14:15:08.000Z | 2020-02-17T12:53:21.000Z | fpipe/meta/port.py | vkvam/fpipe | 2905095f46923c6c4c460c3d154544b654136df4 | [
"MIT"
] | null | null | null | from fpipe.meta.abstract import FileData
class Port(FileData[int]):
pass
| 13.166667 | 40 | 0.746835 | 11 | 79 | 5.363636 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.164557 | 79 | 5 | 41 | 15.8 | 0.893939 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
3fd861a11acb935e0718c956aa8ac489435f26df | 522 | py | Python | acp/exception.py | 6sibilings/daniel-Allen | db35e9ebc2b0057a32d8011336dc5fa948d88276 | [
"MIT"
] | 133 | 2016-03-19T21:18:27.000Z | 2022-03-23T21:23:08.000Z | acp/exception.py | 6sibilings/daniel-Allen | db35e9ebc2b0057a32d8011336dc5fa948d88276 | [
"MIT"
] | 6 | 2017-10-08T19:53:26.000Z | 2022-02-16T23:49:27.000Z | acp/exception.py | 6sibilings/daniel-Allen | db35e9ebc2b0057a32d8011336dc5fa948d88276 | [
"MIT"
] | 24 | 2016-04-01T17:30:40.000Z | 2022-03-23T21:23:19.000Z | #TODO: put other exceptions in here...
class ACPError(Exception):
"""Base class for exceptions in this module."""
pass
class ACPClientError(ACPError):
"""Exception raised for errors in the ACP client"""
pass
class ACPCommandLineError(ACPError):
"""Exception raised for command line invocation errors"""
pass
class ACPMessageError(ACPError):
"""Exception raised for errors processing ACP packets"""
pass
class ACPPropertyError(ACPError):
"""Exception raised for errors processing ACP properties"""
pass
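# A brief usage sketch (hypothetical caller, not part of this module)
# showing why a common base class is useful: catching ACPError covers
# every subclass at once. The relevant classes are repeated here so the
# example runs on its own.

```python
class ACPError(Exception):
    """Base class for exceptions in this module."""

class ACPMessageError(ACPError):
    """Exception raised for errors processing ACP packets"""

def parse_packet(data: bytes) -> bytes:
    # Hypothetical parser: reject empty input.
    if not data:
        raise ACPMessageError("empty packet")
    return data

def describe_failure(data: bytes) -> str:
    try:
        parse_packet(data)
        return "ok"
    except ACPError as exc:  # catches ACPMessageError and any sibling
        return type(exc).__name__

print(describe_failure(b""))  # ACPMessageError
```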
| 19.333333 | 60 | 0.754789 | 62 | 522 | 6.354839 | 0.467742 | 0.215736 | 0.233503 | 0.263959 | 0.309645 | 0.228426 | 0.228426 | 0 | 0 | 0 | 0 | 0 | 0.14751 | 522 | 26 | 61 | 20.076923 | 0.885393 | 0.54023 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.038462 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
3fea7cd95a03a139da3b5e69e5b223fc3df58bc0 | 18 | py | Python | pipenv/patched/pew/__init__.py | dschaller/pipenv | 0ec97edbf797d0d3d133dc773831c5e7fab92cd2 | [
"MIT"
] | 2 | 2021-10-01T17:23:49.000Z | 2021-10-01T17:26:19.000Z | pipenv/patched/pew/__init__.py | RL-TOP-DEV/pipenv | cf20894017b768ac7306189c5660833bd9197164 | [
"MIT"
] | 1 | 2017-09-15T19:01:09.000Z | 2017-09-15T23:42:43.000Z | pipenv/patched/pew/__init__.py | RL-TOP-DEV/pipenv | cf20894017b768ac7306189c5660833bd9197164 | [
"MIT"
] | 2 | 2018-04-06T05:36:25.000Z | 2018-12-30T22:58:58.000Z | from . import pew
| 9 | 17 | 0.722222 | 3 | 18 | 4.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.222222 | 18 | 1 | 18 | 18 | 0.928571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3ff173534468c0df313936bb63d27e1297ebccf3 | 46 | py | Python | Frame_Level_Speech_recognition/Frame_Level_Speech_Recognition/src/wer.py | MonitSharma/Data-Science-Projects | b78df36061a9877240763bf3e71ec797f53b4450 | [
"MIT"
] | null | null | null | Frame_Level_Speech_recognition/Frame_Level_Speech_Recognition/src/wer.py | MonitSharma/Data-Science-Projects | b78df36061a9877240763bf3e71ec797f53b4450 | [
"MIT"
] | null | null | null | Frame_Level_Speech_recognition/Frame_Level_Speech_Recognition/src/wer.py | MonitSharma/Data-Science-Projects | b78df36061a9877240763bf3e71ec797f53b4450 | [
"MIT"
] | null | null | null | import struct
print(struct.calcsize("P")*8) | 15.333333 | 29 | 0.717391 | 7 | 46 | 4.714286 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02439 | 0.108696 | 46 | 3 | 29 | 15.333333 | 0.780488 | 0 | 0 | 0 | 0 | 0 | 0.022222 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
b7552c70b4fa62bbc2b6ed12b13d394397bf5a08 | 17 | py | Python | examples/http_and_ssh/http_and_ssh/servers/__init__.py | Ovvovy/API-Hour | c9508245bf1472befcd2c51635ebb6ad994b63a0 | [
"Apache-2.0"
] | 571 | 2015-01-07T14:28:04.000Z | 2022-02-27T19:37:39.000Z | examples/http_and_ssh/http_and_ssh/servers/__init__.py | Ovvovy/API-Hour | c9508245bf1472befcd2c51635ebb6ad994b63a0 | [
"Apache-2.0"
] | 16 | 2015-02-26T12:06:15.000Z | 2021-06-10T17:42:34.000Z | examples/http_and_ssh/http_and_ssh/servers/__init__.py | Ovvovy/API-Hour | c9508245bf1472befcd2c51635ebb6ad994b63a0 | [
"Apache-2.0"
] | 27 | 2015-02-25T15:56:39.000Z | 2018-05-24T15:05:55.000Z | from . import ssh | 17 | 17 | 0.764706 | 3 | 17 | 4.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.176471 | 17 | 1 | 17 | 17 | 0.928571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4d0be07e52076840d84f64c9da3ff3921aea1fde | 103 | py | Python | src/app/models/order/__init__.py | SantaSpeen/web-merchandise-shop | 5febf4e88d0b7438b5d4d5b5529d2bf123d0ab28 | [
"MIT"
] | 5 | 2022-02-08T05:52:21.000Z | 2022-02-23T17:06:06.000Z | src/app/models/order/__init__.py | SantaSpeen/web-merchandise-shop | 5febf4e88d0b7438b5d4d5b5529d2bf123d0ab28 | [
"MIT"
] | 13 | 2022-02-09T07:18:20.000Z | 2022-03-03T08:29:43.000Z | src/app/models/order/__init__.py | SantaSpeen/web-merchandise-shop | 5febf4e88d0b7438b5d4d5b5529d2bf123d0ab28 | [
"MIT"
] | 1 | 2022-02-23T17:00:26.000Z | 2022-02-23T17:00:26.000Z | #!env/bin/python
"""
Order models.
"""
from .order import Order
from .order_item import OrderItem
| 12.875 | 33 | 0.699029 | 14 | 103 | 5.071429 | 0.642857 | 0.253521 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.174757 | 103 | 7 | 34 | 14.714286 | 0.835294 | 0.281553 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4d18670400b179bfbb115d3568645cbc5966625f | 166 | py | Python | Chapter 15/ch15_44.py | bpbpublications/TEST-YOUR-SKILLS-IN-PYTHON-LANGUAGE | f6a4194684515495d00aa38347a725dd08f39a0c | [
"MIT"
] | null | null | null | Chapter 15/ch15_44.py | bpbpublications/TEST-YOUR-SKILLS-IN-PYTHON-LANGUAGE | f6a4194684515495d00aa38347a725dd08f39a0c | [
"MIT"
] | null | null | null | Chapter 15/ch15_44.py | bpbpublications/TEST-YOUR-SKILLS-IN-PYTHON-LANGUAGE | f6a4194684515495d00aa38347a725dd08f39a0c | [
"MIT"
] | null | null | null | import numpy as np
n_arr = np.array([-75.4, 42.45, 60.0])
n_arr[n_arr < 0] = 0
print(n_arr)
a2 = np.delete(n_arr, [0])  # drop the first element
print(a2)
# [ 0.   42.45 60.  ]
#[42.45 60.0] | 18.444444 | 39 | 0.584337 | 40 | 166 | 2.3 | 0.4 | 0.217391 | 0.195652 | 0.228261 | 0.163043 | 0 | 0 | 0 | 0 | 0 | 0 | 0.220588 | 0.180723 | 166 | 9 | 40 | 18.444444 | 0.455882 | 0.168675 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.166667 | 0.333333 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4d2f31d7a3d1547b7674abfd1b9544a28215c5c1 | 114 | py | Python | app/styleguide/__init__.py | shots47s/conp-portal | 6ca45d4e3f6f40e16cb35a1d77827c2a48b13546 | [
"MIT"
] | null | null | null | app/styleguide/__init__.py | shots47s/conp-portal | 6ca45d4e3f6f40e16cb35a1d77827c2a48b13546 | [
"MIT"
] | 2 | 2020-04-14T21:41:55.000Z | 2020-12-02T16:59:52.000Z | app/styleguide/__init__.py | shots47s/conp-portal | 6ca45d4e3f6f40e16cb35a1d77827c2a48b13546 | [
"MIT"
] | null | null | null | from flask import Blueprint
styleguide_bp = Blueprint('styleguide', __name__)
from app.styleguide import routes
| 19 | 49 | 0.815789 | 14 | 114 | 6.285714 | 0.642857 | 0.431818 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.122807 | 114 | 5 | 50 | 22.8 | 0.88 | 0 | 0 | 0 | 0 | 0 | 0.087719 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0.666667 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
4d42bde19eca3bd240f96a8aaf2f76fb2a128960 | 117 | py | Python | lib/disco/schemes/scheme_https.py | amit-siddhu/disco | be65272d3eecca184a3c8f2fa911b86ac87a4e8a | [
"BSD-3-Clause"
] | 786 | 2015-01-01T12:35:40.000Z | 2022-03-19T04:39:22.000Z | lib/disco/schemes/scheme_https.py | DavidAlphaFox/disco | d550a4ef548991921f9521a59b057cd066c37290 | [
"BSD-3-Clause"
] | 51 | 2015-01-19T20:07:01.000Z | 2019-10-19T21:03:06.000Z | lib/disco/schemes/scheme_https.py | DavidAlphaFox/disco | d550a4ef548991921f9521a59b057cd066c37290 | [
"BSD-3-Clause"
] | 122 | 2015-01-05T18:16:03.000Z | 2021-07-10T12:35:22.000Z | from scheme_http import open, input_stream
# keep those unused import checkers quiet
assert open
assert input_stream
| 23.4 | 42 | 0.846154 | 18 | 117 | 5.333333 | 0.722222 | 0.229167 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.136752 | 117 | 4 | 43 | 29.25 | 0.950495 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.666667 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
4d51cc871c67b6639516b5f5128f0d93f09da6ad | 22,552 | py | Python | tests/test_zoneinfo.py | wacuuu/workload-collocation-agent | 9250ec2ab8def033e8546481eaed6aca2caad3d3 | [
"Apache-2.0"
] | 40 | 2019-05-16T16:42:33.000Z | 2021-11-18T06:33:03.000Z | tests/test_zoneinfo.py | wacuuu/workload-collocation-agent | 9250ec2ab8def033e8546481eaed6aca2caad3d3 | [
"Apache-2.0"
] | 72 | 2019-05-09T02:30:25.000Z | 2020-11-17T09:24:44.000Z | tests/test_zoneinfo.py | ppalucki/owca | 9316f92e2d67f6c37da2dec33e5f769a4c3a465b | [
"Apache-2.0"
] | 26 | 2019-05-20T09:13:38.000Z | 2021-12-15T17:57:21.000Z | # Copyright (c) 2020 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import re
from unittest.mock import patch
from tests.testing import create_open_mock, relative_module_path
from wca.metrics import MetricName
from wca.zoneinfo import get_zoneinfo_measurements, DEFAULT_REGEXP
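# The assertion below checks the nested node -> zone -> counter structure
# that wca.zoneinfo builds from /proc/zoneinfo. A toy parser (hypothetical,
# much simpler than WCA's real implementation) illustrating how such a file
# maps onto that structure:

```python
import re

# A tiny /proc/zoneinfo excerpt in the real file's layout.
SAMPLE = """\
Node 0, zone      DMA
  pages free     3867
        min      0
        low      3
Node 0, zone    DMA32
  pages free     423157
"""

def parse_zoneinfo(text):
    """Build {node: {zone: {counter: value}}} from zoneinfo-style text."""
    result = {}
    node = zone = None
    for line in text.splitlines():
        header = re.match(r"Node (\d+), zone\s+(\w+)", line)
        if header:
            node, zone = header.groups()
            result.setdefault(node, {})[zone] = {}
            continue
        counter = re.match(r"\s+(?:pages )?(\w+)\s+(\d+)$", line)
        if counter and zone is not None:
            key, value = counter.groups()
            result[node][zone][key] = float(value)
    return result

# parse_zoneinfo(SAMPLE)["0"]["DMA"] == {"free": 3867.0, "min": 0.0, "low": 3.0}
```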
@patch('builtins.open', new=create_open_mock({
"/proc/zoneinfo": open(relative_module_path(__file__, 'fixtures/proc-zoneinfo.txt')).read(),
}))
def test_parse_proc_zoneinfo(*mocks):
    zoneinfo_measurements = get_zoneinfo_measurements(
        re.compile(DEFAULT_REGEXP))[MetricName.PLATFORM_ZONEINFO]
    # To regenerate the expected value below, uncomment:
    # import pprint
    # pprint.pprint(zoneinfo_measurements)
    assert zoneinfo_measurements == {
'0': {
'DMA': {
'free': 3867.0,
'high': 6.0,
'hmem_autonuma_promote_dst': 0.0,
'hmem_autonuma_promote_src': 0.0,
'hmem_reclaim_demote_dst': 0.0,
'hmem_reclaim_demote_src': 0.0,
'hmem_reclaim_promote_dst': 0.0,
'hmem_reclaim_promote_src': 0.0,
'hmem_swapcache_promote_dst': 0.0,
'hmem_swapcache_promote_src': 0.0,
'hmem_unknown': 0.0,
'low': 3.0,
'managed': 3867.0,
'min': 0.0,
'nr_bounce': 0.0,
'nr_free_cma': 0.0,
'nr_free_pages': 3867.0,
'nr_kernel_stack': 0.0,
'nr_mlock': 0.0,
'nr_page_table_pages': 0.0,
'nr_zone_active_anon': 0.0,
'nr_zone_active_file': 0.0,
'nr_zone_inactive_anon': 0.0,
'nr_zone_inactive_file': 0.0,
'nr_zone_unevictable': 0.0,
'nr_zone_write_pending': 0.0,
'nr_zspages': 0.0,
'numa_foreign': 0.0,
'numa_hit': 0.0,
'numa_interleave': 0.0,
'numa_local': 0.0,
'numa_miss': 0.0,
'numa_other': 0.0,
'present': 3999.0,
'spanned': 4095.0,
'toptier': 773.0},
'DMA32': {
'free': 423157.0,
'high': 868.0,
'hmem_autonuma_promote_dst': 0.0,
'hmem_autonuma_promote_src': 0.0,
'hmem_reclaim_demote_dst': 0.0,
'hmem_reclaim_demote_src': 0.0,
'hmem_reclaim_promote_dst': 0.0,
'hmem_reclaim_promote_src': 0.0,
'hmem_swapcache_promote_dst': 0.0,
'hmem_swapcache_promote_src': 0.0,
'hmem_unknown': 0.0,
'low': 445.0,
'managed': 423884.0,
'min': 22.0,
'nr_bounce': 0.0,
'nr_free_cma': 0.0,
'nr_free_pages': 423157.0,
'nr_kernel_stack': 0.0,
'nr_mlock': 0.0,
'nr_page_table_pages': 0.0,
'nr_zone_active_anon': 0.0,
'nr_zone_active_file': 0.0,
'nr_zone_inactive_anon': 0.0,
'nr_zone_inactive_file': 0.0,
'nr_zone_unevictable': 0.0,
'nr_zone_write_pending': 0.0,
'nr_zspages': 0.0,
'numa_foreign': 0.0,
'numa_hit': 0.0,
'numa_interleave': 0.0,
'numa_local': 0.0,
'numa_miss': 0.0,
'numa_other': 0.0,
'present': 441081.0,
'spanned': 1044480.0,
'toptier': 84776.0},
'Device': {
'free': 0.0,
'high': 0.0,
'low': 0.0,
'managed': 0.0,
'min': 0.0,
'present': 0.0,
'spanned': 132120576.0,
'toptier': 0.0},
'Movable': {
'free': 0.0,
'high': 0.0,
'low': 0.0,
'managed': 0.0,
'min': 0.0,
'present': 0.0,
'spanned': 0.0,
'toptier': 0.0},
'Normal': {
'free': 21182724.0,
'high': 49232.0,
'hmem_autonuma_promote_dst': 0.0,
'hmem_autonuma_promote_src': 0.0,
'hmem_reclaim_demote_dst': 0.0,
'hmem_reclaim_demote_src': 0.0,
'hmem_reclaim_promote_dst': 0.0,
'hmem_reclaim_promote_src': 0.0,
'hmem_swapcache_promote_dst': 0.0,
'hmem_swapcache_promote_src': 0.0,
'hmem_unknown': 0.0,
'low': 25251.0,
'managed': 23981257.0,
'min': 1270.0,
'nr_bounce': 0.0,
'nr_free_cma': 0.0,
'nr_free_pages': 21182724.0,
'nr_kernel_stack': 15848.0,
'nr_mlock': 0.0,
'nr_page_table_pages': 1398.0,
'nr_zone_active_anon': 63788.0,
'nr_zone_active_file': 36524.0,
'nr_zone_inactive_anon': 8651.0,
'nr_zone_inactive_file': 154408.0,
'nr_zone_unevictable': 0.0,
'nr_zone_write_pending': 23.0,
'nr_zspages': 0.0,
'numa_foreign': 0.0,
'numa_hit': 10292075.0,
'numa_interleave': 36453.0,
'numa_local': 10282684.0,
'numa_miss': 0.0,
'numa_other': 9391.0,
'present': 24379392.0,
'spanned': 24379392.0,
'toptier': 4796251.0},
'per-node-stats': {
'nr_accessed': 52673413.0,
'nr_active_anon': 63788.0,
'nr_active_file': 36524.0,
'nr_anon_pages': 69794.0,
'nr_anon_transparent_hugepages': 0.0,
'nr_dirtied': 101138.0,
'nr_dirty': 23.0,
'nr_file_hugepages': 0.0,
'nr_file_pages': 193171.0,
'nr_file_pmdmapped': 0.0,
'nr_inactive_anon': 8651.0,
'nr_inactive_file': 154408.0,
'nr_isolated_anon': 0.0,
'nr_isolated_file': 0.0,
'nr_kernel_misc_reclaimable': 0.0,
'nr_mapped': 49112.0,
'nr_promote_fail': 0.0,
'nr_promote_isolate_fail': 0.0,
'nr_promote_ratelimit': 0.0,
'nr_promoted': 0.0,
'nr_shmem': 8817.0,
'nr_shmem_hugepages': 0.0,
'nr_shmem_pmdmapped': 0.0,
'nr_slab_reclaimable': 31387.0,
'nr_slab_unreclaimable': 93095.0,
'nr_unevictable': 0.0,
'nr_unstable': 0.0,
'nr_vmscan_immediate_reclaim': 0.0,
'nr_vmscan_write': 0.0,
'nr_writeback': 0.0,
'nr_writeback_temp': 0.0,
'nr_written': 96446.0,
'numa_try_migrate': 0.0,
'workingset_activate': 0.0,
'workingset_nodereclaim': 0.0,
'workingset_nodes': 0.0,
'workingset_refault': 0.0,
'workingset_restore': 0.0}},
'1': {
'DMA': {
'free': 0.0,
'high': 0.0,
'low': 0.0,
'managed': 0.0,
'min': 0.0,
'present': 0.0,
'spanned': 0.0,
'toptier': 0.0},
'DMA32': {
'free': 0.0,
'high': 0.0,
'low': 0.0,
'managed': 0.0,
'min': 0.0,
'present': 0.0,
'spanned': 0.0,
'toptier': 0.0},
'Device': {
'free': 0.0,
'high': 0.0,
'low': 0.0,
'managed': 0.0,
'min': 0.0,
'present': 0.0,
'spanned': 0.0,
'toptier': 0.0},
'Movable': {
'free': 0.0,
'high': 0.0,
'low': 0.0,
'managed': 0.0,
'min': 0.0,
'present': 0.0,
'spanned': 0.0,
'toptier': 0.0},
'Normal': {
'free': 3122809.0,
'high': 50850.0,
'hmem_autonuma_promote_dst': 0.0,
'hmem_autonuma_promote_src': 0.0,
'hmem_reclaim_demote_dst': 0.0,
'hmem_reclaim_demote_src': 0.0,
'hmem_reclaim_promote_dst': 0.0,
'hmem_reclaim_promote_src': 0.0,
'hmem_swapcache_promote_dst': 0.0,
'hmem_swapcache_promote_src': 0.0,
'hmem_unknown': 0.0,
'low': 26081.0,
'managed': 24769969.0,
'min': 1312.0,
'nr_bounce': 0.0,
'nr_free_cma': 0.0,
'nr_free_pages': 3122809.0,
'nr_kernel_stack': 13320.0,
'nr_mlock': 0.0,
'nr_page_table_pages': 39236.0,
'nr_zone_active_anon': 19239370.0,
'nr_zone_active_file': 10230.0,
'nr_zone_inactive_anon': 4154.0,
'nr_zone_inactive_file': 110142.0,
'nr_zone_unevictable': 0.0,
'nr_zone_write_pending': 1.0,
'nr_zspages': 0.0,
'numa_foreign': 0.0,
'numa_hit': 28865491.0,
'numa_interleave': 36459.0,
'numa_local': 28792852.0,
'numa_miss': 0.0,
'numa_other': 72639.0,
'present': 25165824.0,
'spanned': 25165824.0,
'toptier': 4953993.0},
'per-node-stats': {
'nr_accessed': 5245474.0,
'nr_active_anon': 19239370.0,
'nr_active_file': 10230.0,
'nr_anon_pages': 19242205.0,
'nr_anon_transparent_hugepages': 0.0,
'nr_dirtied': 80350.0,
'nr_dirty': 1.0,
'nr_file_hugepages': 0.0,
'nr_file_pages': 121210.0,
'nr_file_pmdmapped': 0.0,
'nr_inactive_anon': 4154.0,
'nr_inactive_file': 110142.0,
'nr_isolated_anon': 0.0,
'nr_isolated_file': 0.0,
'nr_kernel_misc_reclaimable': 0.0,
'nr_mapped': 34312.0,
'nr_promote_fail': 0.0,
'nr_promote_isolate_fail': 0.0,
'nr_promote_ratelimit': 0.0,
'nr_promoted': 0.0,
'nr_shmem': 4208.0,
'nr_shmem_hugepages': 0.0,
'nr_shmem_pmdmapped': 0.0,
'nr_slab_reclaimable': 22947.0,
'nr_slab_unreclaimable': 74730.0,
'nr_unevictable': 0.0,
'nr_unstable': 0.0,
'nr_vmscan_immediate_reclaim': 0.0,
'nr_vmscan_write': 0.0,
'nr_writeback': 0.0,
'nr_writeback_temp': 0.0,
'nr_written': 76640.0,
'numa_try_migrate': 0.0,
'workingset_activate': 0.0,
'workingset_nodereclaim': 0.0,
'workingset_nodes': 0.0,
'workingset_refault': 0.0,
'workingset_restore': 0.0}},
'2': {
'DMA': {
'free': 0.0,
'high': 0.0,
'low': 0.0,
'managed': 0.0,
'min': 0.0,
'present': 0.0,
'spanned': 0.0,
'toptier': 0.0},
'DMA32': {
'free': 0.0,
'high': 0.0,
'low': 0.0,
'managed': 0.0,
'min': 0.0,
'present': 0.0,
'spanned': 0.0,
'toptier': 0.0},
'Device': {
'free': 0.0,
'high': 0.0,
'low': 0.0,
'managed': 0.0,
'min': 0.0,
'present': 0.0,
'spanned': 0.0,
'toptier': 0.0},
'Movable': {
'free': 0.0,
'high': 0.0,
'low': 0.0,
'managed': 0.0,
'min': 0.0,
'present': 0.0,
'spanned': 0.0,
'toptier': 0.0},
'Normal': {
'free': 130023387.0,
'high': 266935.0,
'hmem_autonuma_promote_dst': 0.0,
'hmem_autonuma_promote_src': 0.0,
'hmem_reclaim_demote_dst': 0.0,
'hmem_reclaim_demote_src': 0.0,
'hmem_reclaim_promote_dst': 0.0,
'hmem_reclaim_promote_src': 0.0,
'hmem_swapcache_promote_dst': 0.0,
'hmem_swapcache_promote_src': 0.0,
'hmem_unknown': 0.0,
'low': 136912.0,
'managed': 130023424.0,
'min': 6889.0,
'nr_bounce': 0.0,
'nr_free_cma': 0.0,
'nr_free_pages': 130023387.0,
'nr_kernel_stack': 0.0,
'nr_mlock': 0.0,
'nr_page_table_pages': 0.0,
'nr_zone_active_anon': 0.0,
'nr_zone_active_file': 0.0,
'nr_zone_inactive_anon': 0.0,
'nr_zone_inactive_file': 0.0,
'nr_zone_unevictable': 0.0,
'nr_zone_write_pending': 0.0,
'nr_zspages': 0.0,
'numa_foreign': 0.0,
'numa_hit': 16.0,
'numa_interleave': 0.0,
'numa_local': 0.0,
'numa_miss': 0.0,
'numa_other': 16.0,
'present': 130023424.0,
'spanned': 130023424.0,
'toptier': 26004684.0},
'per-node-stats': {
'nr_accessed': 0.0,
'nr_active_anon': 0.0,
'nr_active_file': 0.0,
'nr_anon_pages': 0.0,
'nr_anon_transparent_hugepages': 0.0,
'nr_dirtied': 0.0,
'nr_dirty': 0.0,
'nr_file_hugepages': 0.0,
'nr_file_pages': 0.0,
'nr_file_pmdmapped': 0.0,
'nr_inactive_anon': 0.0,
'nr_inactive_file': 0.0,
'nr_isolated_anon': 0.0,
'nr_isolated_file': 0.0,
'nr_kernel_misc_reclaimable': 0.0,
'nr_mapped': 0.0,
'nr_promote_fail': 0.0,
'nr_promote_isolate_fail': 0.0,
'nr_promote_ratelimit': 0.0,
'nr_promoted': 0.0,
'nr_shmem': 0.0,
'nr_shmem_hugepages': 0.0,
'nr_shmem_pmdmapped': 0.0,
'nr_slab_reclaimable': 0.0,
'nr_slab_unreclaimable': 37.0,
'nr_unevictable': 0.0,
'nr_unstable': 0.0,
'nr_vmscan_immediate_reclaim': 0.0,
'nr_vmscan_write': 0.0,
'nr_writeback': 0.0,
'nr_writeback_temp': 0.0,
'nr_written': 0.0,
'numa_try_migrate': 0.0,
'workingset_activate': 0.0,
'workingset_nodereclaim': 0.0,
'workingset_nodes': 0.0,
'workingset_refault': 0.0,
'workingset_restore': 0.0}},
'3': {
'DMA': {
'free': 0.0,
'high': 0.0,
'low': 0.0,
'managed': 0.0,
'min': 0.0,
'present': 0.0,
'spanned': 0.0,
'toptier': 0.0},
'DMA32': {
'free': 0.0,
'high': 0.0,
'low': 0.0,
'managed': 0.0,
'min': 0.0,
'present': 0.0,
'spanned': 0.0,
'toptier': 0.0},
'Device': {
'free': 0.0,
'high': 0.0,
'low': 0.0,
'managed': 0.0,
'min': 0.0,
'present': 0.0,
'spanned': 0.0,
'toptier': 0.0},
'Movable': {
'free': 0.0,
'high': 0.0,
'low': 0.0,
'managed': 0.0,
'min': 0.0,
'present': 0.0,
'spanned': 0.0,
'toptier': 0.0},
'Normal': {
'free': 130023140.0,
'high': 266935.0,
'hmem_autonuma_promote_dst': 0.0,
'hmem_autonuma_promote_src': 0.0,
'hmem_reclaim_demote_dst': 0.0,
'hmem_reclaim_demote_src': 0.0,
'hmem_reclaim_promote_dst': 0.0,
'hmem_reclaim_promote_src': 0.0,
'hmem_swapcache_promote_dst': 0.0,
'hmem_swapcache_promote_src': 0.0,
'hmem_unknown': 0.0,
'low': 136912.0,
'managed': 130023424.0,
'min': 6889.0,
'nr_bounce': 0.0,
'nr_free_cma': 0.0,
'nr_free_pages': 130023140.0,
'nr_kernel_stack': 0.0,
'nr_mlock': 0.0,
'nr_page_table_pages': 0.0,
'nr_zone_active_anon': 0.0,
'nr_zone_active_file': 0.0,
'nr_zone_inactive_anon': 0.0,
'nr_zone_inactive_file': 0.0,
'nr_zone_unevictable': 0.0,
'nr_zone_write_pending': 0.0,
'nr_zspages': 0.0,
'numa_foreign': 0.0,
'numa_hit': 102.0,
'numa_interleave': 0.0,
'numa_local': 0.0,
'numa_miss': 0.0,
'numa_other': 102.0,
'present': 130023424.0,
'spanned': 130023424.0,
'toptier': 26004684.0},
'per-node-stats': {
'nr_accessed': 0.0,
'nr_active_anon': 0.0,
'nr_active_file': 0.0,
'nr_anon_pages': 0.0,
'nr_anon_transparent_hugepages': 0.0,
'nr_dirtied': 0.0,
'nr_dirty': 0.0,
'nr_file_hugepages': 0.0,
'nr_file_pages': 0.0,
'nr_file_pmdmapped': 0.0,
'nr_inactive_anon': 0.0,
'nr_inactive_file': 0.0,
'nr_isolated_anon': 0.0,
'nr_isolated_file': 0.0,
'nr_kernel_misc_reclaimable': 0.0,
'nr_mapped': 0.0,
'nr_promote_fail': 0.0,
'nr_promote_isolate_fail': 0.0,
'nr_promote_ratelimit': 0.0,
'nr_promoted': 0.0,
'nr_shmem': 0.0,
'nr_shmem_hugepages': 0.0,
'nr_shmem_pmdmapped': 0.0,
'nr_slab_reclaimable': 0.0,
'nr_slab_unreclaimable': 284.0,
'nr_unevictable': 0.0,
'nr_unstable': 0.0,
'nr_vmscan_immediate_reclaim': 0.0,
'nr_vmscan_write': 0.0,
'nr_writeback': 0.0,
'nr_writeback_temp': 0.0,
'nr_written': 0.0,
'numa_try_migrate': 0.0,
'workingset_activate': 0.0,
'workingset_nodereclaim': 0.0,
'workingset_nodes': 0.0,
'workingset_refault': 0.0,
'workingset_restore': 0.0}}}
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable, Function
import numpy as np
import sys
import os
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
ROOT_DIR = os.path.dirname(BASE_DIR)
sys.path.append(os.path.join(ROOT_DIR, 'utils'))
sys.path.append(os.path.join(ROOT_DIR, 'pointnet2'))
from nn_distance import nn_distance, huber_loss
FAR_THRESHOLD = 0.6
NEAR_THRESHOLD = 0.3
GT_VOTE_FACTOR = 3 # number of GT votes per point
OBJECTNESS_CLS_WEIGHTS = [0.2,0.8] # put larger weights on positive objectness
def compute_vote_loss(end_points):
""" Compute vote loss: Match predicted votes to GT votes.
Args:
end_points: dict (read-only)
Returns:
vote_loss: scalar Tensor
Overall idea:
If the seed point belongs to an object (votes_label_mask == 1),
then we require it to vote for the object center.
Each seed point may vote for multiple translations v1,v2,v3
A seed point may also be in the boxes of multiple objects:
o1,o2,o3 with corresponding GT votes c1,c2,c3
Then the loss for this seed point is:
min(d(v_i,c_j)) for i=1,2,3 and j=1,2,3
"""
# Load ground truth votes and assign them to seed points
batch_size = end_points['seed_xyz'].shape[0]
num_seed = end_points['seed_xyz'].shape[1] # B,num_seed,3
vote_xyz = end_points['vote_xyz'] # B,num_seed*vote_factor,3
seed_inds = end_points['seed_inds'].long() # B,num_seed in [0,num_points-1]
# Get groundtruth votes for the seed points
# vote_label_mask: Use gather to select B,num_seed from B,num_point
# non-object point has no GT vote mask = 0, object point has mask = 1
# vote_label: Use gather to select B,num_seed,9 from B,num_point,9
# with inds in shape B,num_seed,9 and 9 = GT_VOTE_FACTOR * 3
seed_gt_votes_mask = torch.gather(end_points['vote_label_mask'], 1, seed_inds)
seed_inds_expand = seed_inds.view(batch_size,num_seed,1).repeat(1,1,3*GT_VOTE_FACTOR)
seed_gt_votes = torch.gather(end_points['vote_label'], 1, seed_inds_expand)
seed_gt_votes += end_points['seed_xyz'].repeat(1,1,3)
# Compute the min of min of distance
vote_xyz_reshape = vote_xyz.view(batch_size*num_seed, -1, 3) # from B,num_seed*vote_factor,3 to B*num_seed,vote_factor,3
seed_gt_votes_reshape = seed_gt_votes.view(batch_size*num_seed, GT_VOTE_FACTOR, 3) # from B,num_seed,3*GT_VOTE_FACTOR to B*num_seed,GT_VOTE_FACTOR,3
# A predicted vote to no where is not penalized as long as there is a good vote near the GT vote.
dist1, _, dist2, _ = nn_distance(vote_xyz_reshape, seed_gt_votes_reshape, l1=True)
votes_dist, _ = torch.min(dist2, dim=1) # (B*num_seed,vote_factor) to (B*num_seed,)
votes_dist = votes_dist.view(batch_size, num_seed)
vote_loss = torch.sum(votes_dist*seed_gt_votes_mask.float())/(torch.sum(seed_gt_votes_mask.float())+1e-6)
return vote_loss
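The min-of-min matching above can be illustrated with a tiny standalone sketch. This uses `torch.cdist` as a stand-in for the repo's CUDA `nn_distance` kernel, with made-up shapes (one seed, `vote_factor=2`, one GT vote); it is an illustration, not the repo's implementation.

```python
import torch

# One seed's predicted votes (vote_factor=2) and its ground-truth vote.
votes = torch.tensor([[0.0, 0.0, 0.0],      # vote 1
                      [1.0, 1.0, 1.0]])     # vote 2
gt_votes = torch.tensor([[0.1, 0.0, 0.0]])  # GT vote(s) for this seed

# L1 distance between every (predicted vote, GT vote) pair.
d = torch.cdist(votes, gt_votes, p=1)       # shape (vote_factor, num_gt_votes)
per_vote = d.min(dim=1).values              # nearest GT vote for each prediction
seed_loss = per_vote.min()                  # only the seed's best vote is penalized
```

Here vote 1 is 0.1 away from the GT and vote 2 is 2.9 away, so the seed's loss is 0.1: a vote "to nowhere" costs nothing as long as some other vote from the same seed lands near the GT.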
def compute_weak_vote_loss(end_points):
""" Compute vote loss: Match predicted votes to GT votes.
Args:
end_points: dict (read-only)
Returns:
vote_loss: scalar Tensor
Overall idea:
If the seed point belongs to an object (votes_label_mask == 1),
then we require it to vote for the object center.
Each seed point may vote for multiple translations v1,v2,v3
A seed point may also be in the boxes of multiple objects:
o1,o2,o3 with corresponding GT votes c1,c2,c3
Then the loss for this seed point is:
min(d(v_i,c_j)) for i=1,2,3 and j=1,2,3
"""
# Load ground truth votes and assign them to seed points
batch_size = end_points['seed_xyz'].shape[0]
num_seed = end_points['seed_xyz'].shape[1] # B,num_seed,3
vote_xyz = end_points['vote_xyz'] # B,num_seed*vote_factor,3
gt_center = end_points['center_label'][:,:,0:3] # B,K2,3
# A predicted vote to no where is not penalized as long as there is a good vote near the GT vote.
dist1, _, dist2, _ = nn_distance(vote_xyz, gt_center, l1=True) # dist1: B,num_seed*vote_factor, dist2: B,K2
dist1 = dist1.view(batch_size, num_seed, -1) # dist1: B,num_seed,vote_factor
votes_dist, _ = torch.min(dist1, dim=2) # (B,num_seed,vote_factor) to (B,num_seed,)
box_label_mask = end_points['box_label_mask'] # B,K2
sem_cls_label = end_points['sem_cls_label'] # B,K2
object_weight = torch.ones_like(sem_cls_label).cuda()
#object_weight[(sem_cls_label == 4) + (sem_cls_label == 6) + (sem_cls_label == 11)] = 10
vote_loss = torch.mean(votes_dist) + torch.sum(dist2*object_weight*box_label_mask)/(torch.sum(box_label_mask)+1e-6)
return vote_loss
def compute_objectness_loss(end_points):
""" Compute objectness loss for the proposals.
Args:
end_points: dict (read-only)
Returns:
objectness_loss: scalar Tensor
objectness_label: (batch_size, num_seed) Tensor with value 0 or 1
objectness_mask: (batch_size, num_seed) Tensor with value 0 or 1
object_assignment: (batch_size, num_seed) Tensor with long int
within [0,num_gt_object-1]
"""
# Associate proposal and GT objects by point-to-point distances
aggregated_vote_xyz = end_points['aggregated_vote_xyz']
# aggregated_vote_xyz = end_points['center']
gt_center = end_points['center_label'][:,:,0:3]
B = gt_center.shape[0]
K = aggregated_vote_xyz.shape[1]
K2 = gt_center.shape[1]
dist1, ind1, dist2, _ = nn_distance(aggregated_vote_xyz, gt_center) # dist1: BxK, dist2: BxK2
# Generate objectness label and mask
# objectness_label: 1 if pred object center is within NEAR_THRESHOLD of any GT object
# objectness_mask: 0 if pred object center is in gray zone (DONOTCARE), 1 otherwise
euclidean_dist1 = torch.sqrt(dist1+1e-6)
objectness_label = torch.zeros((B,K), dtype=torch.long).cuda()
objectness_mask = torch.zeros((B,K)).cuda()
objectness_label[euclidean_dist1<NEAR_THRESHOLD] = 1
objectness_mask[euclidean_dist1<NEAR_THRESHOLD] = 1
objectness_mask[euclidean_dist1>FAR_THRESHOLD] = 1
# Compute objectness loss
objectness_scores = end_points['objectness_scores']
criterion = nn.CrossEntropyLoss(torch.Tensor(OBJECTNESS_CLS_WEIGHTS).cuda(), reduction='none')
objectness_loss = criterion(objectness_scores.transpose(2,1), objectness_label)
objectness_loss = torch.sum(objectness_loss * objectness_mask)/(torch.sum(objectness_mask)+1e-6)
# Set assignment
object_assignment = ind1 # (B,K) with values in 0,1,...,K2-1
return objectness_loss, objectness_label, objectness_mask, object_assignment
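The positive / gray-zone / negative split above can be sketched with the module's thresholds and a few made-up proposal-to-GT distances:

```python
import torch

NEAR_THRESHOLD, FAR_THRESHOLD = 0.3, 0.6
# Euclidean distance from each proposal center to its nearest GT center.
euclidean_dist1 = torch.tensor([[0.1, 0.45, 0.9]])  # B=1 batch, K=3 proposals
B, K = euclidean_dist1.shape
objectness_label = torch.zeros((B, K), dtype=torch.long)
objectness_mask = torch.zeros((B, K))
objectness_label[euclidean_dist1 < NEAR_THRESHOLD] = 1  # close enough: positive
objectness_mask[euclidean_dist1 < NEAR_THRESHOLD] = 1   # supervised as positive
objectness_mask[euclidean_dist1 > FAR_THRESHOLD] = 1    # supervised as negative
# The 0.45 proposal sits between the thresholds: its mask stays 0,
# so it is ignored ("DONOTCARE") by the masked objectness loss.
```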
def compute_box_and_sem_cls_loss(end_points, config):
""" Compute 3D bounding box and semantic classification loss.
Args:
end_points: dict (read-only)
Returns:
center_loss
heading_cls_loss
heading_reg_loss
size_cls_loss
size_reg_loss
sem_cls_loss
"""
num_heading_bin = config.num_heading_bin
num_size_cluster = config.num_size_cluster
num_class = config.num_class
mean_size_arr = config.mean_size_arr
object_assignment = end_points['object_assignment']
batch_size = object_assignment.shape[0]
# Compute center loss
pred_center = end_points['center']
gt_center = end_points['center_label'][:,:,0:3]
dist1, ind1, dist2, _ = nn_distance(pred_center, gt_center) # dist1: BxK, dist2: BxK2
box_label_mask = end_points['box_label_mask']
objectness_label = end_points['objectness_label'].float()
centroid_reg_loss1 = \
torch.sum(dist1*objectness_label)/(torch.sum(objectness_label)+1e-6)
centroid_reg_loss2 = \
torch.sum(dist2*box_label_mask)/(torch.sum(box_label_mask)+1e-6)
center_loss = centroid_reg_loss1 + centroid_reg_loss2
# Compute heading loss
heading_class_label = torch.gather(end_points['heading_class_label'], 1, object_assignment) # select (B,K) from (B,K2)
criterion_heading_class = nn.CrossEntropyLoss(reduction='none')
heading_class_loss = criterion_heading_class(end_points['heading_scores'].transpose(2,1), heading_class_label) # (B,K)
heading_class_loss = torch.sum(heading_class_loss * objectness_label)/(torch.sum(objectness_label)+1e-6)
heading_residual_label = torch.gather(end_points['heading_residual_label'], 1, object_assignment) # select (B,K) from (B,K2)
heading_residual_normalized_label = heading_residual_label / (np.pi/num_heading_bin)
# Ref: https://discuss.pytorch.org/t/convert-int-into-one-hot-format/507/3
heading_label_one_hot = torch.cuda.FloatTensor(batch_size, heading_class_label.shape[1], num_heading_bin).zero_()
heading_label_one_hot.scatter_(2, heading_class_label.unsqueeze(-1), 1) # src==1 so it's *one-hot* (B,K,num_heading_bin)
heading_residual_normalized_loss = huber_loss(torch.sum(end_points['heading_residuals_normalized']*heading_label_one_hot, -1) - heading_residual_normalized_label, delta=1.0) # (B,K)
heading_residual_normalized_loss = torch.sum(heading_residual_normalized_loss*objectness_label)/(torch.sum(objectness_label)+1e-6)
# Compute size loss
size_class_label = torch.gather(end_points['size_class_label'], 1, object_assignment) # select (B,K) from (B,K2)
criterion_size_class = nn.CrossEntropyLoss(reduction='none')
size_class_loss = criterion_size_class(end_points['size_scores'].transpose(2,1), size_class_label) # (B,K)
size_class_loss = torch.sum(size_class_loss * objectness_label)/(torch.sum(objectness_label)+1e-6)
size_residual_label = torch.gather(end_points['size_residual_label'], 1, object_assignment.unsqueeze(-1).repeat(1,1,3)) # select (B,K,3) from (B,K2,3)
size_label_one_hot = torch.cuda.FloatTensor(batch_size, size_class_label.shape[1], num_size_cluster).zero_()
size_label_one_hot.scatter_(2, size_class_label.unsqueeze(-1), 1) # src==1 so it's *one-hot* (B,K,num_size_cluster)
size_label_one_hot_tiled = size_label_one_hot.unsqueeze(-1).repeat(1,1,1,3) # (B,K,num_size_cluster,3)
predicted_size_residual_normalized = torch.sum(end_points['size_residuals_normalized']*size_label_one_hot_tiled, 2) # (B,K,3)
mean_size_arr_expanded = torch.from_numpy(mean_size_arr.astype(np.float32)).cuda().unsqueeze(0).unsqueeze(0) # (1,1,num_size_cluster,3)
mean_size_label = torch.sum(size_label_one_hot_tiled * mean_size_arr_expanded, 2) # (B,K,3)
size_residual_label_normalized = size_residual_label / mean_size_label # (B,K,3)
size_residual_normalized_loss = torch.mean(huber_loss(predicted_size_residual_normalized - size_residual_label_normalized, delta=1.0), -1) # (B,K,3) -> (B,K)
size_residual_normalized_loss = torch.sum(size_residual_normalized_loss*objectness_label)/(torch.sum(objectness_label)+1e-6)
# 3.4 Semantic cls loss
sem_cls_label = torch.gather(end_points['sem_cls_label'], 1, object_assignment) # select (B,K) from (B,K2)
criterion_sem_cls = nn.CrossEntropyLoss(reduction='none')
sem_cls_loss = criterion_sem_cls(end_points['sem_cls_scores'].transpose(2,1), sem_cls_label) # (B,K)
sem_cls_loss = torch.sum(sem_cls_loss * objectness_label)/(torch.sum(objectness_label)+1e-6)
return center_loss, heading_class_loss, heading_residual_normalized_loss, size_class_loss, size_residual_normalized_loss, sem_cls_loss
def smoothl1_loss(error, delta=1.0):
"""Smooth L1 loss.
x = error = pred - gt or dist(pred,gt)
0.5 * |x|^2 if |x|<=d
|x| - 0.5 * d if |x|>d
"""
diff = torch.abs(error)
loss = torch.where(diff < delta, 0.5 * diff * diff / delta, diff - 0.5 * delta)
return loss
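The piecewise form can be sanity-checked with a couple of values (a standalone copy of the function, so nothing here depends on the rest of the module):

```python
import torch

def smooth_l1(error, delta=1.0):
    # Quadratic inside |x| < delta, linear (slope 1, offset -0.5*delta) outside.
    diff = torch.abs(error)
    return torch.where(diff < delta, 0.5 * diff * diff / delta, diff - 0.5 * delta)

vals = smooth_l1(torch.tensor([0.5, 2.0]))  # [0.5^2/2, 2.0 - 0.5] = [0.125, 1.5]
```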
def compute_center_and_sem_cls_loss(end_points, config):
""" Compute 3D bounding box and semantic classification loss.
Args:
end_points: dict (read-only)
Returns:
center_loss
heading_cls_loss
heading_reg_loss
size_cls_loss
size_reg_loss
sem_cls_loss
"""
num_heading_bin = config.num_heading_bin
num_size_cluster = config.num_size_cluster
num_class = config.num_class
mean_size_arr = config.mean_size_arr
object_assignment = end_points['object_assignment']
batch_size = object_assignment.shape[0]
# Compute center loss
pred_center = end_points['center']
gt_center = end_points['center_label'][:,:,0:3]
dist1, ind1, dist2, _ = nn_distance(pred_center, gt_center) # dist1: BxK, dist2: BxK2
box_label_mask = end_points['box_label_mask']
objectness_label = end_points['objectness_label'].float()
centroid_reg_loss1 = \
torch.sum(dist1*objectness_label)/(torch.sum(objectness_label)+1e-6)
centroid_reg_loss2 = \
torch.sum(dist2*box_label_mask)/(torch.sum(box_label_mask)+1e-6)
center_loss = centroid_reg_loss1 + centroid_reg_loss2
'''
pred_center = end_points['center']
gt_center = end_points['center_label'][:, :, 0:3]
size_class_label = torch.gather(end_points['size_class_label'], 1, object_assignment) # select (B,K) from (B,K2)
center_margin = torch.from_numpy(0.05 * mean_size_arr[size_class_label.cpu(), :]).cuda() # (B,K,3)
objectness_label = end_points['objectness_label'].float()
object_assignment_expand = object_assignment.unsqueeze(2).repeat(1, 1, 3)
assigned_gt_center = torch.gather(gt_center, 1, object_assignment_expand) # (B, K, 3) from (B, K2, 3)
center_loss = smoothl1_loss(assigned_gt_center - pred_center) # (B,K)
center_loss -= center_margin
center_loss[center_loss < 0] = 0
center_loss = torch.sum(center_loss * objectness_label.unsqueeze(2)) / (torch.sum(objectness_label) + 1e-6)
'''
# Compute size loss
size_class_label = torch.gather(end_points['size_class_label'], 1, object_assignment) # select (B,K) from (B,K2)
criterion_size_class = nn.CrossEntropyLoss(reduction='none')
size_class_loss = criterion_size_class(end_points['size_scores'].transpose(2,1), size_class_label) # (B,K)
size_class_loss = torch.sum(size_class_loss * objectness_label)/(torch.sum(objectness_label)+1e-6)
# 3.4 Semantic cls loss
sem_cls_label = torch.gather(end_points['sem_cls_label'], 1, object_assignment) # select (B,K) from (B,K2)
criterion_sem_cls = nn.CrossEntropyLoss(reduction='none')
sem_cls_loss = criterion_sem_cls(end_points['sem_cls_scores'].transpose(2,1), sem_cls_label) # (B,K)
sem_cls_loss = torch.sum(sem_cls_loss * objectness_label)/(torch.sum(objectness_label)+1e-6)
return center_loss, size_class_loss, sem_cls_loss
def compute_sem_cls_loss(end_points, config):
""" Compute 3D bounding box and semantic classification loss.
Args:
end_points: dict (read-only)
Returns:
center_loss
heading_cls_loss
heading_reg_loss
size_cls_loss
size_reg_loss
sem_cls_loss
"""
num_heading_bin = config.num_heading_bin
num_size_cluster = config.num_size_cluster
num_class = config.num_class
mean_size_arr = config.mean_size_arr
cloud_label = end_points['cloud_label'] # Bxnum_class
batch_size = cloud_label.shape[0]
# 3.4 Semantic cls loss
cloud_pred = end_points['sem_cls_scores'].transpose(2,1) # Bxnum_classxK
cloud_pred_gap = torch.mean(cloud_pred, dim=2) # Bxnum_class
BCEWL = nn.BCEWithLogitsLoss()
sem_cls_loss = BCEWL(cloud_pred_gap.float(), cloud_label.float())
return sem_cls_loss
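A minimal sketch of this scene-level supervision, with assumed toy shapes: per-proposal class logits are global-average-pooled over the K proposals, then trained with BCE against a multi-hot "which classes appear in this scene" label.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
B, num_class, K = 2, 4, 8
cloud_pred = torch.randn(B, num_class, K)   # B x num_class x K per-proposal logits
cloud_label = torch.zeros(B, num_class)     # multi-hot scene labels
cloud_label[0, 1] = 1.0                     # class 1 is present in scene 0
cloud_pred_gap = cloud_pred.mean(dim=2)     # global average pool -> B x num_class
sem_cls_loss = nn.BCEWithLogitsLoss()(cloud_pred_gap, cloud_label)
```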
def get_loss(end_points, config):
""" Loss functions
Args:
end_points: dict
{
seed_xyz, seed_inds, vote_xyz,
center,
heading_scores, heading_residuals_normalized,
size_scores, size_residuals_normalized,
sem_cls_scores, #seed_logits,#
center_label,
heading_class_label, heading_residual_label,
size_class_label, size_residual_label,
sem_cls_label,
box_label_mask,
vote_label, vote_label_mask
}
config: dataset config instance
Returns:
loss: pytorch scalar tensor
end_points: dict
"""
# Vote loss
vote_loss = compute_vote_loss(end_points)
end_points['vote_loss'] = vote_loss
# Obj loss
objectness_loss, objectness_label, objectness_mask, object_assignment = \
compute_objectness_loss(end_points)
end_points['objectness_loss'] = objectness_loss
end_points['objectness_label'] = objectness_label
end_points['objectness_mask'] = objectness_mask
end_points['object_assignment'] = object_assignment
total_num_proposal = objectness_label.shape[0]*objectness_label.shape[1]
end_points['pos_ratio'] = \
torch.sum(objectness_label.float().cuda())/float(total_num_proposal)
end_points['neg_ratio'] = \
torch.sum(objectness_mask.float())/float(total_num_proposal) - end_points['pos_ratio']
# Box loss and sem cls loss
center_loss, heading_cls_loss, heading_reg_loss, size_cls_loss, size_reg_loss, sem_cls_loss = \
compute_box_and_sem_cls_loss(end_points, config)
end_points['center_loss'] = center_loss
end_points['heading_cls_loss'] = heading_cls_loss
end_points['heading_reg_loss'] = heading_reg_loss
end_points['size_cls_loss'] = size_cls_loss
end_points['size_reg_loss'] = size_reg_loss
end_points['sem_cls_loss'] = sem_cls_loss
box_loss = center_loss + 0.1*heading_cls_loss + heading_reg_loss + 0.1*size_cls_loss + size_reg_loss
end_points['box_loss'] = box_loss
# Final loss function
loss = vote_loss + 0.5*objectness_loss + box_loss + 0.1*sem_cls_loss
loss *= 10
end_points['loss'] = loss
# --------------------------------------------
# Some other statistics
obj_pred_val = torch.argmax(end_points['objectness_scores'], 2) # B,K
obj_acc = torch.sum((obj_pred_val==objectness_label.long()).float()*objectness_mask)/(torch.sum(objectness_mask)+1e-6)
end_points['obj_acc'] = obj_acc
return loss, end_points
def get_loss_weak(end_points, config):
""" Loss functions
Args:
end_points: dict
{
seed_xyz, seed_inds, vote_xyz,
center,
heading_scores, heading_residuals_normalized,
size_scores, size_residuals_normalized,
sem_cls_scores, #seed_logits,#
center_label,
heading_class_label, heading_residual_label,
size_class_label, size_residual_label,
sem_cls_label,
box_label_mask,
vote_label, vote_label_mask
}
config: dataset config instance
Returns:
loss: pytorch scalar tensor
end_points: dict
"""
# Vote loss
vote_loss = compute_weak_vote_loss(end_points)
end_points['vote_loss'] = vote_loss
# Obj loss
objectness_loss, objectness_label, objectness_mask, object_assignment = \
compute_objectness_loss(end_points)
end_points['objectness_loss'] = objectness_loss
end_points['objectness_label'] = objectness_label
end_points['objectness_mask'] = objectness_mask
end_points['object_assignment'] = object_assignment
total_num_proposal = objectness_label.shape[0]*objectness_label.shape[1]
end_points['pos_ratio'] = \
torch.sum(objectness_label.float().cuda())/float(total_num_proposal)
end_points['neg_ratio'] = \
torch.sum(objectness_mask.float())/float(total_num_proposal) - end_points['pos_ratio']
# Box loss and sem cls loss
center_loss, size_cls_loss, sem_cls_loss = compute_center_and_sem_cls_loss(end_points, config)
end_points['center_loss'] = center_loss
end_points['size_cls_loss'] = size_cls_loss
end_points['sem_cls_loss'] = sem_cls_loss
box_loss = center_loss + 0.1*size_cls_loss
# Final loss function
loss = vote_loss + 0.5*objectness_loss + box_loss + 0.1*sem_cls_loss
loss *= 10
end_points['loss'] = loss
# --------------------------------------------
# Some other statistics
obj_pred_val = torch.argmax(end_points['objectness_scores'], 2) # B,K
obj_acc = torch.sum((obj_pred_val==objectness_label.long()).float()*objectness_mask)/(torch.sum(objectness_mask)+1e-6)
end_points['obj_acc'] = obj_acc
return loss, end_points
class FocalLoss(nn.Module):
r"""
This criterion is a implemenation of Focal Loss, which is proposed in
Focal Loss for Dense Object Detection.
Loss(x, class) = - \alpha (1-softmax(x)[class])^gamma \log(softmax(x)[class])
The losses are averaged across observations for each minibatch.
Args:
alpha(1D Tensor, Variable) : the scalar factor for this criterion
gamma(float, double) : gamma > 0; reduces the relative loss for well-classi?ed examples (p > .5),
putting more focus on hard, misclassi?ed examples
size_average(bool): size_average(bool): By default, the losses are averaged over observations for each minibatch.
However, if the field size_average is set to False, the losses are
instead summed for each minibatch.
"""
def __init__(self, class_num, alpha=None, gamma=2, size_average=True,sigmoid=False,reduce=True):
super(FocalLoss, self).__init__()
if alpha is None:
self.alpha = Variable(torch.ones(class_num, 1) * 1.0)
else:
if isinstance(alpha, Variable):
self.alpha = alpha
else:
self.alpha = Variable(alpha)
self.gamma = gamma
self.class_num = class_num
self.size_average = size_average
self.sigmoid = sigmoid
self.reduce = reduce
def forward(self, inputs, targets, global_weight=None):
N = inputs.size(0)
# print(N)
C = inputs.size(1)
        if self.sigmoid:
            P = torch.sigmoid(inputs)  # F.sigmoid is deprecated
            # NOTE: this branch assumes `targets` is a scalar 0/1 label
            if targets == 0:
                probs = 1 - P
                log_p = probs.log()
                batch_loss = -(torch.pow((1 - probs), self.gamma)) * log_p
            elif targets == 1:
                probs = P
                log_p = probs.log()
                batch_loss = -(torch.pow((1 - probs), self.gamma)) * log_p
else:
#inputs = F.sigmoid(inputs)
P = F.softmax(inputs, dim=-1)
class_mask = inputs.data.new(N, C).fill_(0)
class_mask = Variable(class_mask)
ids = targets.view(-1, 1)
class_mask.scatter_(1, ids.data, 1.)
# print(class_mask)
if inputs.is_cuda and not self.alpha.is_cuda:
self.alpha = self.alpha.cuda()
alpha = self.alpha[ids.data.view(-1)]
probs = (P * class_mask).sum(1).view(-1, 1)
log_p = probs.log()
# print('probs size= {}'.format(probs.size()))
# print(probs)
batch_loss = -alpha * (torch.pow((1 - probs), self.gamma)) * log_p
# print('-----bacth_loss------')
# print(batch_loss)
if not self.reduce:
return batch_loss
if self.size_average:
if global_weight is not None:
global_weight = global_weight.view(-1, 1)
batch_loss = batch_loss * global_weight
loss = batch_loss.mean()
else:
loss = batch_loss.sum()
return loss
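A minimal check of the focal-loss idea (a hypothetical standalone helper with alpha omitted, not this class): with gamma = 0 the modulating factor (1 - p)^gamma is 1, so the loss reduces to ordinary cross-entropy.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    # -(1 - p_t)^gamma * log(p_t), averaged over the batch; alpha omitted.
    log_p = F.log_softmax(logits, dim=-1)
    p_t = log_p.exp().gather(1, targets.view(-1, 1)).squeeze(1)
    return (-((1 - p_t) ** gamma) * p_t.log()).mean()

logits = torch.tensor([[2.0, 0.0], [0.0, 2.0]])
targets = torch.tensor([0, 1])
fl0 = focal_loss(logits, targets, gamma=0.0)  # should match cross-entropy
ce = F.cross_entropy(logits, targets)
```

With gamma > 0 and confident predictions (p_t > .5), the loss is strictly smaller than cross-entropy, which is exactly the down-weighting of easy examples the docstring describes.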
def get_loss_DA(end_points_S, end_points_T, config):
""" Loss functions
Args:
end_points: dict
{
seed_xyz, seed_inds, global_d_pred, vote_xyz, local_d_pred,
center,
heading_scores, heading_residuals_normalized,
size_scores, size_residuals_normalized,
sem_cls_scores, #seed_logits,#
center_label,
heading_class_label, heading_residual_label,
size_class_label, size_residual_label,
sem_cls_label,
box_label_mask,
vote_label, vote_label_mask
}
config: dataset config instance
Returns:
loss: pytorch scalar tensor
end_points: dict
"""
source_coefficient = 0.1
# Vote loss
vote_loss_S = compute_weak_vote_loss(end_points_S)
vote_loss_T = compute_weak_vote_loss(end_points_T)
vote_loss = source_coefficient*vote_loss_S + vote_loss_T
end_points_S['vote_loss'] = vote_loss_S
end_points_T['vote_loss'] = vote_loss_T
# Obj loss
objectness_loss_S, objectness_label_S, objectness_mask_S, object_assignment = \
compute_objectness_loss(end_points_S)
end_points_S['objectness_loss'] = objectness_loss_S
end_points_S['objectness_label'] = objectness_label_S
end_points_S['objectness_mask'] = objectness_mask_S
end_points_S['object_assignment'] = object_assignment
total_num_proposal = objectness_label_S.shape[0]*objectness_label_S.shape[1]
end_points_S['pos_ratio'] = \
torch.sum(objectness_label_S.float().cuda())/float(total_num_proposal)
end_points_S['neg_ratio'] = \
torch.sum(objectness_mask_S.float())/float(total_num_proposal) - end_points_S['pos_ratio']
objectness_loss_T, objectness_label_T, objectness_mask_T, object_assignment = \
compute_objectness_loss(end_points_T)
end_points_T['objectness_loss'] = objectness_loss_T
end_points_T['objectness_label'] = objectness_label_T
end_points_T['objectness_mask'] = objectness_mask_T
end_points_T['object_assignment'] = object_assignment
total_num_proposal = objectness_label_T.shape[0]*objectness_label_T.shape[1]
end_points_T['pos_ratio'] = \
torch.sum(objectness_label_T.float().cuda())/float(total_num_proposal)
end_points_T['neg_ratio'] = \
torch.sum(objectness_mask_T.float())/float(total_num_proposal) - end_points_T['pos_ratio']
objectness_loss = source_coefficient*objectness_loss_S + objectness_loss_T
# Box loss and sem cls loss
center_loss_S, heading_cls_loss, heading_reg_loss, size_cls_loss_S, size_reg_loss, sem_cls_loss_S = \
compute_box_and_sem_cls_loss(end_points_S, config)
end_points_S['center_loss'] = center_loss_S
end_points_S['heading_cls_loss'] = heading_cls_loss
end_points_S['heading_reg_loss'] = heading_reg_loss
end_points_S['size_cls_loss'] = size_cls_loss_S
end_points_S['size_reg_loss'] = size_reg_loss
end_points_S['sem_cls_loss'] = sem_cls_loss_S
box_loss_S = center_loss_S + 0.1*heading_cls_loss + heading_reg_loss + 0.1*size_cls_loss_S + size_reg_loss
end_points_S['box_loss'] = box_loss_S
center_loss_T, size_cls_loss_T, sem_cls_loss_T = compute_center_and_sem_cls_loss(end_points_T, config)
end_points_T['center_loss'] = center_loss_T
end_points_T['size_cls_loss'] = size_cls_loss_T
end_points_T['sem_cls_loss'] = sem_cls_loss_T
box_loss_T = center_loss_T + 0.1*size_cls_loss_T
box_loss = source_coefficient*box_loss_S + box_loss_T
sem_cls_loss = source_coefficient*sem_cls_loss_S + sem_cls_loss_T
## Domain Align Loss
FL_global = FocalLoss(class_num=2, gamma=3)
#FL_vote = FocalLoss(class_num=2, gamma=3)
da_coefficient = 0.5
# Source domain
global_d_pred_S = end_points_S['global_d_pred']
local_d_pred_S = end_points_S['local_d_pred'].transpose(1,2).contiguous()
domain_S = Variable(torch.zeros(global_d_pred_S.size(0)).long().cuda())
#object_weight_local_S = F.softmax(end_points_S['objectness_scores'], dim=-1)[:,:,1:]
object_weight_local_S = end_points_S['objectness_label'].unsqueeze(-1)
source_dloss = da_coefficient * torch.mean(local_d_pred_S**2 * object_weight_local_S) + da_coefficient * FL_global(global_d_pred_S, domain_S)
# Target domain
global_d_pred_T = end_points_T['global_d_pred']
local_d_pred_T = end_points_T['local_d_pred'].transpose(1,2).contiguous()
domain_T = Variable(torch.ones(global_d_pred_T.size(0)).long().cuda())
#object_weight_local_T = F.softmax(end_points_T['objectness_scores'], dim=-1)[:,:,1:]
object_weight_local_T = end_points_T['objectness_label'].unsqueeze(-1)
target_dloss = da_coefficient * torch.mean((1-local_d_pred_T)**2 * object_weight_local_T) + da_coefficient * FL_global(global_d_pred_T, domain_T)
DA_loss = source_dloss + target_dloss
# Final loss function
loss = vote_loss + 0.5*objectness_loss + box_loss + 0.1*sem_cls_loss + DA_loss
loss *= 10
end_points_S['loss'] = loss
# --------------------------------------------
# Some other statistics
obj_pred_val = torch.argmax(end_points_S['objectness_scores'], 2) # B,K
obj_acc = torch.sum((obj_pred_val==objectness_label_S.long()).float()*objectness_mask_S)/(torch.sum(objectness_mask_S)+1e-6)
end_points_S['obj_acc'] = obj_acc
return loss, end_points_S, end_points_T
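# The pos_ratio / neg_ratio bookkeeping above reduces to two scalar fractions
# over the B x K proposal grid. A minimal pure-Python sketch (no torch; names
# and toy shapes are illustrative, not from this repo):

```python
def proposal_ratios(objectness_label, objectness_mask):
    """pos_ratio = positives / all proposals; neg_ratio = masked-in non-positives."""
    total = len(objectness_label) * len(objectness_label[0])
    pos = sum(sum(row) for row in objectness_label)
    masked = sum(sum(row) for row in objectness_mask)
    pos_ratio = pos / float(total)
    neg_ratio = masked / float(total) - pos_ratio
    return pos_ratio, neg_ratio

# 2 scenes x 4 proposals: 2 positives, 5 proposals inside the mask
label = [[1, 0, 0, 0], [0, 1, 0, 0]]
mask = [[1, 1, 0, 0], [1, 1, 1, 0]]
print(proposal_ratios(label, mask))  # (0.25, 0.375)
```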
def compute_jitter_loss(end_points):
# center_jitter: (B, 64, 3)
# jitter_pred: (B, 3, 64), transposed to (B, 64, 3) before the squared-error mean
jitter_loss = ((end_points['center_jitter']-end_points['jitter_pred'].transpose(1,2).contiguous())**2).mean()
end_points['jitter_loss'] = jitter_loss
return jitter_loss
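# A pure-Python sketch of the loss above: jitter_pred comes out channel-first
# (B, 3, K), so it is read transposed against the (B, K, 3) target before the
# squared-error mean. The toy sizes below are illustrative only.

```python
def mse_after_transpose(center_jitter, jitter_pred):
    diffs = []
    for b in range(len(center_jitter)):
        for k in range(len(center_jitter[b])):
            for c in range(3):
                # jitter_pred indexed [b][c][k]: the transpose of the target layout
                diffs.append((center_jitter[b][k][c] - jitter_pred[b][c][k]) ** 2)
    return sum(diffs) / len(diffs)

target = [[[1.0, 2.0, 3.0]]]    # (B=1, K=1, 3)
pred = [[[1.0], [2.0], [4.0]]]  # (B=1, 3, K=1), channel-first
print(mse_after_transpose(target, pred))  # 0.3333333333333333
```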
def get_loss_DA_jitter(end_points_S, end_points_T, epoch, config):
""" Loss functions
Args:
end_points: dict
{
seed_xyz, seed_inds, global_d_pred, vote_xyz, local_d_pred,
center,
heading_scores, heading_residuals_normalized,
size_scores, size_residuals_normalized,
sem_cls_scores, #seed_logits,#
center_label,
heading_class_label, heading_residual_label,
size_class_label, size_residual_label,
sem_cls_label,
box_label_mask,
vote_label, vote_label_mask
}
config: dataset config instance
Returns:
loss: pytorch scalar tensor
end_points: dict
"""
if epoch > -1:
end_points_S['center_label'] -= min(epoch/60.0, 1.0) * end_points_S['center_jitter']
end_points_T['center_label'] -= min(epoch/60.0, 1.0) * end_points_T['jitter_pred'].transpose(1,2) * end_points_T['box_label_mask'].unsqueeze(-1)
end_points_T['center_label'] = end_points_T['center_label'].detach()
source_coefficient = 0.1
# Jitter loss
jitter_loss_S = compute_jitter_loss(end_points_S)
end_points_S['jitter_loss'] = jitter_loss_S
# Vote loss
vote_loss_S = compute_weak_vote_loss(end_points_S)
vote_loss_T = compute_weak_vote_loss(end_points_T)
vote_loss = source_coefficient*vote_loss_S + vote_loss_T
end_points_S['vote_loss'] = vote_loss_S
end_points_T['vote_loss'] = vote_loss_T
# Obj loss
objectness_loss_S, objectness_label_S, objectness_mask_S, object_assignment = \
compute_objectness_loss(end_points_S)
end_points_S['objectness_loss'] = objectness_loss_S
end_points_S['objectness_label'] = objectness_label_S
end_points_S['objectness_mask'] = objectness_mask_S
end_points_S['object_assignment'] = object_assignment
total_num_proposal = objectness_label_S.shape[0]*objectness_label_S.shape[1]
end_points_S['pos_ratio'] = \
torch.sum(objectness_label_S.float().cuda())/float(total_num_proposal)
end_points_S['neg_ratio'] = \
torch.sum(objectness_mask_S.float())/float(total_num_proposal) - end_points_S['pos_ratio']
objectness_loss_T, objectness_label_T, objectness_mask_T, object_assignment = \
compute_objectness_loss(end_points_T)
end_points_T['objectness_loss'] = objectness_loss_T
end_points_T['objectness_label'] = objectness_label_T
end_points_T['objectness_mask'] = objectness_mask_T
end_points_T['object_assignment'] = object_assignment
total_num_proposal = objectness_label_T.shape[0]*objectness_label_T.shape[1]
end_points_T['pos_ratio'] = \
torch.sum(objectness_label_T.float().cuda())/float(total_num_proposal)
end_points_T['neg_ratio'] = \
torch.sum(objectness_mask_T.float())/float(total_num_proposal) - end_points_T['pos_ratio']
objectness_loss = source_coefficient*objectness_loss_S + objectness_loss_T
# Box loss and sem cls loss
center_loss_S, heading_cls_loss, heading_reg_loss, size_cls_loss_S, size_reg_loss, sem_cls_loss_S = \
compute_box_and_sem_cls_loss(end_points_S, config)
end_points_S['center_loss'] = center_loss_S
end_points_S['heading_cls_loss'] = heading_cls_loss
end_points_S['heading_reg_loss'] = heading_reg_loss
end_points_S['size_cls_loss'] = size_cls_loss_S
end_points_S['size_reg_loss'] = size_reg_loss
end_points_S['sem_cls_loss'] = sem_cls_loss_S
box_loss_S = center_loss_S + 0.1*heading_cls_loss + heading_reg_loss + 0.1*size_cls_loss_S + size_reg_loss
end_points_S['box_loss'] = box_loss_S
center_loss_T, size_cls_loss_T, sem_cls_loss_T = compute_center_and_sem_cls_loss(end_points_T, config)
end_points_T['center_loss'] = center_loss_T
end_points_T['size_cls_loss'] = size_cls_loss_T
end_points_T['sem_cls_loss'] = sem_cls_loss_T
box_loss_T = center_loss_T + 0.1*size_cls_loss_T
box_loss = source_coefficient*box_loss_S + box_loss_T
sem_cls_loss = source_coefficient*sem_cls_loss_S + sem_cls_loss_T
## Domain Align Loss
FL_global = FocalLoss(class_num=2, gamma=3)
#FL_vote = FocalLoss(class_num=2, gamma=3)
da_coefficient = 0.5
# Source domain
global_d_pred_S = end_points_S['global_d_pred']
local_d_pred_S = end_points_S['local_d_pred'].transpose(1,2).contiguous()
jitter_d_pred_S = end_points_S['jitter_d_pred'].transpose(1,2).contiguous()
domain_S = Variable(torch.zeros(global_d_pred_S.size(0)).long().cuda())
#object_weight_local_S = F.softmax(end_points_S['objectness_scores'], dim=-1)[:,:,1:]
jitter_weight_S = end_points_S['box_label_mask'].unsqueeze(-1)
object_weight_local_S = end_points_S['objectness_label'].unsqueeze(-1)
source_dloss = da_coefficient * torch.mean(local_d_pred_S**2 * object_weight_local_S) + da_coefficient * FL_global(global_d_pred_S, domain_S)# + da_coefficient * torch.mean(jitter_d_pred_S**2 * jitter_weight_S)
# Target domain
global_d_pred_T = end_points_T['global_d_pred']
local_d_pred_T = end_points_T['local_d_pred'].transpose(1,2).contiguous()
jitter_d_pred_T = end_points_T['jitter_d_pred'].transpose(1,2).contiguous()
domain_T = Variable(torch.ones(global_d_pred_T.size(0)).long().cuda())
#object_weight_local_T = F.softmax(end_points_T['objectness_scores'], dim=-1)[:,:,1:]
jitter_weight_T = end_points_T['box_label_mask'].unsqueeze(-1)
object_weight_local_T = end_points_T['objectness_label'].unsqueeze(-1)
target_dloss = da_coefficient * torch.mean((1-local_d_pred_T)**2 * object_weight_local_T) + da_coefficient * FL_global(global_d_pred_T, domain_T)# + da_coefficient * torch.mean((1-jitter_d_pred_T)**2 * jitter_weight_T)
DA_loss = source_dloss + target_dloss
# Final loss function
loss = vote_loss + 0.5*objectness_loss + box_loss + 0.1*sem_cls_loss + DA_loss + source_coefficient*jitter_loss_S
loss *= 10
end_points_S['loss'] = loss
# --------------------------------------------
# Some other statistics
obj_pred_val = torch.argmax(end_points_S['objectness_scores'], 2) # B,K
obj_acc = torch.sum((obj_pred_val==objectness_label_S.long()).float()*objectness_mask_S)/(torch.sum(objectness_mask_S)+1e-6)
end_points_S['obj_acc'] = obj_acc
return loss, end_points_S, end_points_T
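# FocalLoss is a project-specific class defined elsewhere in this repo; as a
# reference, a minimal sketch of the standard focal-loss formula it presumably
# follows, FL(p_t) = -(1 - p_t)**gamma * log(p_t), for a single example:

```python
import math

def focal_loss(p_true, gamma=3):
    # p_true: predicted probability of the correct class
    return -((1.0 - p_true) ** gamma) * math.log(p_true)

# easy (confident) examples are down-weighted far harder than hard ones
print(round(focal_loss(0.9), 6))  # 0.000105
print(round(focal_loss(0.2), 6))  # 0.824032
```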
def get_loss_DA_separate(end_points_S, end_points_T, config):
""" Loss functions
Args:
end_points: dict
{
seed_xyz, seed_inds, global_d_pred, vote_xyz, local_d_pred,
center,
heading_scores, heading_residuals_normalized,
size_scores, size_residuals_normalized,
sem_cls_scores, #seed_logits,#
center_label,
heading_class_label, heading_residual_label,
size_class_label, size_residual_label,
sem_cls_label,
box_label_mask,
vote_label, vote_label_mask
}
config: dataset config instance
Returns:
loss: pytorch scalar tensor
end_points: dict
"""
# Vote loss
vote_loss_S = compute_vote_loss(end_points_S)
vote_loss_T = compute_weak_vote_loss(end_points_T)
vote_loss = vote_loss_S + vote_loss_T
end_points_S['vote_loss'] = vote_loss_S
end_points_T['vote_loss'] = vote_loss_T
# Obj loss
objectness_loss_S, objectness_label_S, objectness_mask_S, object_assignment = \
compute_objectness_loss(end_points_S)
end_points_S['objectness_loss'] = objectness_loss_S
end_points_S['objectness_label'] = objectness_label_S
end_points_S['objectness_mask'] = objectness_mask_S
end_points_S['object_assignment'] = object_assignment
total_num_proposal = objectness_label_S.shape[0]*objectness_label_S.shape[1]
end_points_S['pos_ratio'] = \
torch.sum(objectness_label_S.float().cuda())/float(total_num_proposal)
end_points_S['neg_ratio'] = \
torch.sum(objectness_mask_S.float())/float(total_num_proposal) - end_points_S['pos_ratio']
objectness_loss_T, objectness_label_T, objectness_mask_T, object_assignment = \
compute_objectness_loss(end_points_T)
end_points_T['objectness_loss'] = objectness_loss_T
end_points_T['objectness_label'] = objectness_label_T
end_points_T['objectness_mask'] = objectness_mask_T
end_points_T['object_assignment'] = object_assignment
total_num_proposal = objectness_label_T.shape[0]*objectness_label_T.shape[1]
end_points_T['pos_ratio'] = \
torch.sum(objectness_label_T.float().cuda())/float(total_num_proposal)
end_points_T['neg_ratio'] = \
torch.sum(objectness_mask_T.float())/float(total_num_proposal) - end_points_T['pos_ratio']
objectness_loss = objectness_loss_S + objectness_loss_T
# Box loss and sem cls loss
center_loss_S, heading_cls_loss, heading_reg_loss, size_cls_loss_S, size_reg_loss, sem_cls_loss_S = \
compute_box_and_sem_cls_loss(end_points_S, config)
end_points_S['center_loss'] = center_loss_S
end_points_S['heading_cls_loss'] = heading_cls_loss
end_points_S['heading_reg_loss'] = heading_reg_loss
end_points_S['size_cls_loss'] = size_cls_loss_S
end_points_S['size_reg_loss'] = size_reg_loss
end_points_S['sem_cls_loss'] = sem_cls_loss_S
box_loss = center_loss_S + 0.1*heading_cls_loss + heading_reg_loss + 0.1*size_cls_loss_S + size_reg_loss
end_points_S['box_loss'] = box_loss
center_loss_T, size_cls_loss_T, sem_cls_loss_T = compute_center_and_sem_cls_loss(end_points_T, config)
end_points_T['center_loss'] = center_loss_T
end_points_T['size_cls_loss'] = size_cls_loss_T
end_points_T['sem_cls_loss'] = sem_cls_loss_T
box_loss += center_loss_T + 0.1*size_cls_loss_T
sem_cls_loss = sem_cls_loss_S + sem_cls_loss_T
# Source domain
local_d_pred_S = end_points_S['local_d_pred'].transpose(1,2).contiguous()
object_weight_S = F.softmax(end_points_S['objectness_scores'], dim=-1)[:,:,1:]
source_dloss = 1.0 * torch.mean(local_d_pred_S ** 2 * object_weight_S)
# Target domain
local_d_pred_T = end_points_T['local_d_pred'].transpose(1,2).contiguous()
object_weight_T = F.softmax(end_points_T['objectness_scores'], dim=-1)[:,:,1:]
target_dloss = 1.0 * torch.mean((1-local_d_pred_T) ** 2 * object_weight_T)
DA_loss = source_dloss + target_dloss
# Final loss function
loss = vote_loss + 0.5*objectness_loss + box_loss + 0.1*sem_cls_loss + DA_loss
loss *= 10
end_points_S['loss'] = loss
# --------------------------------------------
# Some other statistics
obj_pred_val = torch.argmax(end_points_S['objectness_scores'], 2) # B,K
obj_acc = torch.sum((obj_pred_val==objectness_label_S.long()).float()*objectness_mask_S)/(torch.sum(objectness_mask_S)+1e-6)
end_points_S['obj_acc'] = obj_acc
return loss, end_points_S, end_points_T
def get_loss_cam(end_points, config):
""" Loss functions
Args:
end_points: dict
{
seed_xyz, seed_inds, vote_xyz,
center,
heading_scores, heading_residuals_normalized,
size_scores, size_residuals_normalized,
sem_cls_scores, #seed_logits,#
center_label,
heading_class_label, heading_residual_label,
size_class_label, size_residual_label,
sem_cls_label,
box_label_mask,
vote_label, vote_label_mask
}
config: dataset config instance
Returns:
loss: pytorch scalar tensor
end_points: dict
"""
# Final loss function
pred_cam = end_points['cam'] # Bxnum_classx256
pred_cam_gap = torch.mean(pred_cam, dim=2) # Bxnum_class
cloud_label = end_points['cloud_label'] # Bxnum_class
BCEWL = nn.BCEWithLogitsLoss()
loss = BCEWL(pred_cam_gap.float(), cloud_label.float())
end_points['loss'] = loss
return loss, end_points
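# A pure-Python sketch of the CAM loss above (illustrative names, no torch):
# global average pooling over the point dimension, then the sigmoid binary
# cross-entropy that nn.BCEWithLogitsLoss computes, averaged over labels.

```python
import math

def bce_with_logits(logits, labels):
    total = 0.0
    for z, y in zip(logits, labels):
        p = 1.0 / (1.0 + math.exp(-z))  # sigmoid
        total += -(y * math.log(p) + (1 - y) * math.log(1.0 - p))
    return total / len(logits)

def cam_loss(pred_cam, cloud_label):
    # pred_cam: B x num_class x N activations; cloud_label: B x num_class in {0, 1}
    gap = [[sum(ch) / len(ch) for ch in sample] for sample in pred_cam]  # pool over N
    flat_logits = [z for row in gap for z in row]
    flat_labels = [y for row in cloud_label for y in row]
    return bce_with_logits(flat_logits, flat_labels)

cam = [[[2.0, 4.0], [-3.0, -1.0]]]  # B=1, 2 classes, N=2 points
print(round(cam_loss(cam, [[1, 0]]), 4))  # 0.0878
```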
def get_loss_DA_cam(end_points_S, end_points_T, config):
""" Loss functions
Args:
end_points: dict
{
seed_xyz, seed_inds, global_d_pred, vote_xyz, local_d_pred,
center,
heading_scores, heading_residuals_normalized,
size_scores, size_residuals_normalized,
sem_cls_scores, #seed_logits,#
center_label,
heading_class_label, heading_residual_label,
size_class_label, size_residual_label,
sem_cls_label,
box_label_mask,
vote_label, vote_label_mask
}
config: dataset config instance
Returns:
loss: pytorch scalar tensor
end_points: dict
"""
# Vote loss
vote_loss_S = compute_vote_loss(end_points_S)
vote_loss = vote_loss_S
end_points_S['vote_loss'] = vote_loss_S
# Obj loss
objectness_loss_S, objectness_label_S, objectness_mask_S, object_assignment = \
compute_objectness_loss(end_points_S)
end_points_S['objectness_loss'] = objectness_loss_S
end_points_S['objectness_label'] = objectness_label_S
end_points_S['objectness_mask'] = objectness_mask_S
end_points_S['object_assignment'] = object_assignment
total_num_proposal = objectness_label_S.shape[0]*objectness_label_S.shape[1]
end_points_S['pos_ratio'] = \
torch.sum(objectness_label_S.float().cuda())/float(total_num_proposal)
end_points_S['neg_ratio'] = \
torch.sum(objectness_mask_S.float())/float(total_num_proposal) - end_points_S['pos_ratio']
objectness_loss = objectness_loss_S
# Box loss and sem cls loss
center_loss_S, heading_cls_loss, heading_reg_loss, size_cls_loss_S, size_reg_loss, sem_cls_loss_S = \
compute_box_and_sem_cls_loss(end_points_S, config)
end_points_S['center_loss'] = center_loss_S
end_points_S['heading_cls_loss'] = heading_cls_loss
end_points_S['heading_reg_loss'] = heading_reg_loss
end_points_S['size_cls_loss'] = size_cls_loss_S
end_points_S['size_reg_loss'] = size_reg_loss
end_points_S['sem_cls_loss'] = sem_cls_loss_S
box_loss = center_loss_S + 0.1*heading_cls_loss + heading_reg_loss + 0.1*size_cls_loss_S + size_reg_loss
end_points_S['box_loss'] = box_loss
sem_cls_loss_T = compute_sem_cls_loss(end_points_T, config)
end_points_T['sem_cls_loss'] = sem_cls_loss_T
sem_cls_loss = sem_cls_loss_S + 2*sem_cls_loss_T
## Domain Align Loss
FL_global = FocalLoss(class_num=2, gamma=5)
FL_vote = FocalLoss(class_num=2, gamma=3)
# Source domain
global_d_pred_S = end_points_S['global_d_pred']
vote_feature_d_pred_S = end_points_S['vote_feature_d_pred']
local_d_pred_S = end_points_S['local_d_pred'].transpose(1,2).contiguous()
domain_S = Variable(torch.zeros(global_d_pred_S.size(0)).long().cuda())
object_weight_local_S = F.softmax(end_points_S['objectness_scores'], dim=-1)[:,:,1:]
source_dloss = 0.5 * torch.mean(local_d_pred_S ** 2 * object_weight_local_S) + 0.5 * FL_global(global_d_pred_S, domain_S) + 0.5 * FL_vote(vote_feature_d_pred_S, domain_S)
# Target domain
global_d_pred_T = end_points_T['global_d_pred']
vote_feature_d_pred_T = end_points_T['vote_feature_d_pred']
local_d_pred_T = end_points_T['local_d_pred'].transpose(1,2).contiguous()
domain_T = Variable(torch.ones(global_d_pred_T.size(0)).long().cuda())
object_weight_local_T = F.softmax(end_points_T['objectness_scores'], dim=-1)[:,:,1:]
target_dloss = 0.5 * torch.mean((1-local_d_pred_T) ** 2 * object_weight_local_T) + 0.5 * FL_global(global_d_pred_T, domain_T) + 0.5 * FL_vote(vote_feature_d_pred_T, domain_T)
DA_loss = source_dloss + target_dloss
# Final loss function
loss = vote_loss + 0.5*objectness_loss + box_loss + 0.1*sem_cls_loss + DA_loss
loss *= 10
end_points_S['loss'] = loss
# --------------------------------------------
# Some other statistics
obj_pred_val = torch.argmax(end_points_S['objectness_scores'], 2) # B,K
obj_acc = torch.sum((obj_pred_val==objectness_label_S.long()).float()*objectness_mask_S)/(torch.sum(objectness_mask_S)+1e-6)
end_points_S['obj_acc'] = obj_acc
return loss, end_points_S, end_points_T
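# The local alignment terms above are least-squares style: the per-proposal
# domain prediction d is pushed toward 0 on source and toward 1 on target,
# weighted by how object-like each proposal is. A hypothetical sketch:

```python
def local_align_losses(d_source, w_source, d_target, w_target):
    src = sum(d * d * w for d, w in zip(d_source, w_source)) / len(d_source)
    tgt = sum((1.0 - d) ** 2 * w for d, w in zip(d_target, w_target)) / len(d_target)
    return src, tgt

# two proposals per domain; the weight zeroes out non-object proposals
src, tgt = local_align_losses([0.2, 0.9], [1, 0], [0.8, 0.1], [1, 1])
print(round(src, 3), round(tgt, 3))  # 0.02 0.425
```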
# --- planex/cms_core/blocks/__init__.py (octue/planex-cms, MIT) ---
| 21.5 | 42 | 0.744186 | 6 | 43 | 5.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085714 | 0.186047 | 43 | 1 | 43 | 43 | 0.828571 | 0.232558 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
# --- skypy/astrometry/__init__.py (nickalaskreynolds/skypy, MIT) ---
from . import planetfinder
from . import radec
from . import skypos
| 18.6 | 26 | 0.784946 | 12 | 93 | 6.083333 | 0.5 | 0.547945 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.172043 | 93 | 4 | 27 | 23.25 | 0.948052 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
# --- alpha_viergewinnt/agent/alpha/__init__.py (wahtak/alpha_viergewinnt, MIT) ---
from .evaluator import Evaluator
from .generic_estimator import GenericEstimator
from .mlp_estimator import MlpEstimator
| 33 | 47 | 0.866667 | 19 | 165 | 7.421053 | 0.578947 | 0.212766 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.10303 | 165 | 4 | 48 | 41.25 | 0.952703 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
# --- buck_imports/profilo_path.bzl (simpleton/profilo, Apache-2.0) ---
] | null | null | null | def profilo_path(dep):
    return "//" + dep
# --- torch_sparse/mul.py (mdiephuis/pytorch_sparse, MIT) ---
import torch
from torch_scatter import gather_csr
from torch_sparse.tensor import SparseTensor
@torch.jit.script
def mul(src: SparseTensor, other: torch.Tensor) -> SparseTensor:
rowptr, col, value = src.csr()
if other.size(0) == src.size(0) and other.size(1) == 1: # Row-wise...
other = gather_csr(other.squeeze(1), rowptr)
elif other.size(0) == 1 and other.size(1) == src.size(1): # Col-wise...
other = other.squeeze(0)[col]
else:
raise ValueError(f'Size mismatch: Expected size ({src.size(0)}, 1,'
f' ...) or (1, {src.size(1)}, ...), but got size '
f'{other.size()}.')
if value is not None:
value = other.to(value.dtype).mul_(value)
else:
value = other
return src.set_value(value, layout='coo')
@torch.jit.script
def mul_(src: SparseTensor, other: torch.Tensor) -> SparseTensor:
rowptr, col, value = src.csr()
if other.size(0) == src.size(0) and other.size(1) == 1: # Row-wise...
other = gather_csr(other.squeeze(1), rowptr)
elif other.size(0) == 1 and other.size(1) == src.size(1): # Col-wise...
other = other.squeeze(0)[col]
else:
raise ValueError(f'Size mismatch: Expected size ({src.size(0)}, 1,'
f' ...) or (1, {src.size(1)}, ...), but got size '
f'{other.size()}.')
if value is not None:
value = value.mul_(other.to(value.dtype))
else:
value = other
return src.set_value_(value, layout='coo')
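# A pure-Python sketch (no torch) of the shape dispatch used by mul/mul_ above:
# an (N, 1) dense factor scales whole rows of an N x M CSR matrix, a (1, M)
# factor scales whole columns, and anything else is rejected.

```python
def csr_broadcast_mul(rowptr, col, value, other, other_shape, n_rows, n_cols):
    rows, cols = other_shape
    out = []
    if rows == n_rows and cols == 1:    # row-wise: one factor per row
        for r in range(n_rows):
            for i in range(rowptr[r], rowptr[r + 1]):
                out.append(value[i] * other[r])
    elif rows == 1 and cols == n_cols:  # col-wise: index factor by column id
        for i in range(len(value)):
            out.append(value[i] * other[col[i]])
    else:
        raise ValueError('Size mismatch')
    return out

# 2x3 matrix [[1, 0, 2], [0, 3, 0]] in CSR form
rowptr, col, value = [0, 2, 3], [0, 2, 1], [1.0, 2.0, 3.0]
print(csr_broadcast_mul(rowptr, col, value, [10.0, 100.0], (2, 1), 2, 3))    # [10.0, 20.0, 300.0]
print(csr_broadcast_mul(rowptr, col, value, [1.0, 2.0, 3.0], (1, 3), 2, 3))  # [1.0, 6.0, 6.0]
```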
@torch.jit.script
def mul_nnz(src: SparseTensor, other: torch.Tensor,
layout: Optional[str] = None) -> SparseTensor:
value = src.storage.value()
if value is not None:
value = value.mul(other.to(value.dtype))
else:
value = other
return src.set_value(value, layout=layout)
@torch.jit.script
def mul_nnz_(src: SparseTensor, other: torch.Tensor,
layout: Optional[str] = None) -> SparseTensor:
value = src.storage.value()
if value is not None:
value = value.mul_(other.to(value.dtype))
else:
value = other
return src.set_value_(value, layout=layout)
SparseTensor.mul = lambda self, other: mul(self, other)
SparseTensor.mul_ = lambda self, other: mul_(self, other)
SparseTensor.mul_nnz = lambda self, other, layout=None: mul_nnz(
self, other, layout)
SparseTensor.mul_nnz_ = lambda self, other, layout=None: mul_nnz_(
self, other, layout)
SparseTensor.__mul__ = SparseTensor.mul
SparseTensor.__rmul__ = SparseTensor.mul
SparseTensor.__imul__ = SparseTensor.mul_
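# The assignments above bolt this module's functions onto SparseTensor so that
# `a * b` dispatches to them. The same monkey-patching pattern on a toy class
# (illustrative, unrelated to torch_sparse):

```python
class Box:
    def __init__(self, v):
        self.v = v

def box_mul(self, factor):
    return Box(self.v * factor)

Box.mul = box_mul
Box.__mul__ = Box.mul  # `box * k` now dispatches to box_mul

print((Box(3) * 4).v)  # 12
```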
# --- test/test_json_client.py (xmedius/sendsecure-python, MIT) ---
] | null | null | null | import path
from sendsecure import *
import unittest
try:
from unittest.mock import Mock
except ImportError:
from mock import Mock
class TestJsonClient(unittest.TestCase):
client = JsonClient({ 'api_token': 'USER|489b3b1f-b411-428e-be5b-2abbace87689',
'user_id': '123456',
'enterprise_account': 'acme',
'endpoint': 'https://awesome.portal' })
client._get_sendsecure_endpoint = Mock(return_value='https://awesome.sendsecure.portal/')
safebox_guid = '7a3c51e00a004917a8f5db807180fcc5'
def test_new_safebox_success(self):
expected_response = json.dumps({ 'guid': '1234sa4sad87ew87t', 'public_encryption_key': 'key', 'upload_url': 'url'})
self.client._do_get = Mock(return_value=expected_response)
result = json.loads(self.client.new_safebox('user@email.com'))
self.client._do_get.assert_called_once_with('https://awesome.sendsecure.portal/api/v2/safeboxes/new.json?user_email=user@email.com',
'application/json')
self.assertEqual(result['guid'], '1234sa4sad87ew87t')
self.assertEqual(result['public_encryption_key'], 'key')
self.assertEqual(result['upload_url'], 'url')
def test_new_safebox_error(self):
self.client._do_get = Mock(side_effect=SendSecureException(403, 'Access denied', ''))
with self.assertRaises(SendSecureException) as context:
result = self.client.new_safebox('user@example.com')
self.client._do_get.assert_called_once_with('https://awesome.sendsecure.portal/api/v2/safeboxes/new.json?user_email=user@example.com',
'application/json')
self.assertEqual(context.exception.code, 403)
self.assertIn('Access denied', context.exception.message)
def test_new_file_success(self):
file_params = json.dumps({ 'temporary_document': { 'document_file_size': 17580 },
'multipart': False,
'public_encryption_key': 'AyOmyAawJXKepb9LuJAOyiJXvk'
})
expected_response = json.dumps({ 'temporary_document_guid': '1c820789a50747df8746aa5d71922a3f',
'upload_url': 'http://upload_url/' })
self.client._do_post = Mock(return_value=expected_response)
result = json.loads(self.client.new_file(self.safebox_guid, file_params))
self.client._do_post.assert_called_once_with('https://awesome.sendsecure.portal/api/v2/safeboxes/7a3c51e00a004917a8f5db807180fcc5/uploads.json',
'application/json',
file_params,
'application/json')
self.assertEqual(result['temporary_document_guid'], '1c820789a50747df8746aa5d71922a3f')
self.assertEqual(result['upload_url'], 'http://upload_url/')
def test_commit_safebox_success(self):
safebox_json = json.dumps({
'safebox': { 'guid': '1c820789a50747df8746aa5d71922a3f',
'recipients': [{ 'email': 'recipient@test.xmedius.com',
'contact_methods': [
{ 'destination_type': 'cell_phone',
'destination': '+15145550000'
}]
}],
'message': 'lorem ipsum...',
'security_profile_id': 10,
'public_encryption_key': 'AyOmyAawJXKepb9LuJAyCaciv7QBt5Dqoz',
'notification_language': ''}})
expected_response = json.dumps({ 'guid': '1c820789a50747df8746aa5d71922a3f',
'user_id': 3,
'enterprise_id': 1 })
self.client._do_post = Mock(return_value=expected_response)
result = json.loads(self.client.commit_safebox(safebox_json))
self.client._do_post.assert_called_once_with('https://awesome.sendsecure.portal/api/v2/safeboxes.json',
'application/json',
safebox_json,
'application/json')
self.assertEqual(result['guid'], '1c820789a50747df8746aa5d71922a3f')
self.assertEqual(result['user_id'], 3)
self.assertEqual(result['enterprise_id'], 1)
def test_commit_safebox_error(self):
safebox_json = json.dumps({
'safebox': { 'guid': '1c820789a50747df8746aa5d71922a3f',
'recipients': [{ 'email': 'recipient@test.xmedius.com',
'contact_methods': [
{ 'destination_type': 'cell_phone',
'destination': '+15145550000'
}]
}],
'message': 'lorem ipsum...',
'security_profile_id': 10,
'public_encryption_key': 'AyOmyAawJXKepb9LuJAyCaciv7QBt5Dqoz',
'notification_language': ''}})
expected_error = json.dumps({'error':'Some entered values are incorrect.', 'attributes':{'language':['cannot be blank']}})
self.client._do_post = Mock(side_effect=SendSecureException(400, expected_error, ''))
with self.assertRaises(SendSecureException) as context:
result = self.client.commit_safebox(safebox_json)
self.client._do_post.assert_called_once_with('https://awesome.sendsecure.portal/api/v2/safeboxes.json',
'application/json',
safebox_json,
'application/json')
self.assertEqual(context.exception.code, 400)
self.assertIn(expected_error, context.exception.message)
def test_get_security_profiles_success(self):
expected_response = json.dumps({ 'security_profiles': [{ 'id': 5 }, { 'id': 10 }] })
self.client._do_get = Mock(return_value=expected_response)
result = json.loads(self.client.get_security_profiles('user@example.com'))
self.client._do_get.assert_called_once_with('https://awesome.sendsecure.portal/api/v2/enterprises/acme/security_profiles.json?user_email=user@example.com',
'application/json')
self.assertEqual(len(result['security_profiles']), 2)
def test_get_security_profiles_error(self):
self.client._do_get = Mock(side_effect=SendSecureException(403, 'Access denied', ''))
with self.assertRaises(SendSecureException) as context:
result = self.client.get_security_profiles('user@example.com')
self.client._do_get.assert_called_once_with('https://awesome.sendsecure.portal/api/v2/enterprises/acme/security_profiles.json?user_email=user@example.com',
'application/json')
self.assertEqual(context.exception.code, 403)
self.assertIn('Access denied', context.exception.message)
def test_get_enterprise_settings_success(self):
expected_response = json.dumps({ 'created_at': '2016-03-15T19:58:11.588Z',
'updated_at': '2016-09-28T18:32:16.643Z',
'default_security_profile_id': 10 })
self.client._do_get = Mock(return_value=expected_response)
result = json.loads(self.client.get_enterprise_settings())
self.client._do_get.assert_called_once_with('https://awesome.sendsecure.portal/api/v2/enterprises/acme/settings.json',
'application/json')
self.assertEqual(result['created_at'], '2016-03-15T19:58:11.588Z')
self.assertEqual(result['updated_at'], '2016-09-28T18:32:16.643Z')
self.assertEqual(result['default_security_profile_id'], 10)
def test_get_enterprise_settings_error(self):
self.client._do_get = Mock(side_effect=SendSecureException(403, 'Access denied', ''))
with self.assertRaises(SendSecureException) as context:
result = self.client.get_enterprise_settings()
self.client._do_get.assert_called_once_with('https://awesome.sendsecure.portal/api/v2/enterprises/acme/settings.json',
'application/json')
self.assertEqual(context.exception.code, 403)
self.assertIn('Access denied', context.exception.message)
def test_get_user_settings_success(self):
expected_response = json.dumps({ 'created_at': '2016-03-15T19:58:11.588Z',
'updated_at': '2016-09-28T18:32:16.643Z',
'default_filter': 'unread' })
self.client._do_get = Mock(return_value=expected_response)
result = json.loads(self.client.get_user_settings())
self.client._do_get.assert_called_once_with('https://awesome.sendsecure.portal/api/v2/enterprises/acme/users/123456/settings.json',
'application/json')
self.assertEqual(result['created_at'], '2016-03-15T19:58:11.588Z')
self.assertEqual(result['updated_at'], '2016-09-28T18:32:16.643Z')
self.assertEqual(result['default_filter'], 'unread')
def test_get_user_settings_error(self):
self.client._do_get = Mock(side_effect=SendSecureException(403, 'Access denied', ''))
with self.assertRaises(SendSecureException) as context:
result = self.client.get_user_settings()
self.client._do_get.assert_called_once_with('https://awesome.sendsecure.portal/api/v2/enterprises/acme/users/123456/settings.json',
'application/json')
self.assertEqual(context.exception.code, 403)
self.assertIn('Access denied', context.exception.message)
    def test_get_favorites_success(self):
expected_response = json.dumps({ 'favorites': [
{ 'email': 'john.smith@example.com',
'id': 456 },
{ 'email': 'jane.doe@example.com',
'id': 789 } ] })
self.client._do_get = Mock(return_value=expected_response)
result = json.loads(self.client.get_favorites())
self.client._do_get.assert_called_once_with('https://awesome.sendsecure.portal/api/v2/enterprises/acme/users/123456/favorites.json',
'application/json')
self.assertEqual(len(result['favorites']), 2)
self.assertEqual(result['favorites'][0]['id'], 456)
self.assertEqual(result['favorites'][0]['email'], 'john.smith@example.com')
self.assertEqual(result['favorites'][1]['id'], 789)
self.assertEqual(result['favorites'][1]['email'], 'jane.doe@example.com')
def test_get_favorites_error(self):
self.client._do_get = Mock(side_effect=SendSecureException(403, 'Access denied', ''))
with self.assertRaises(SendSecureException) as context:
result = self.client.get_favorites()
self.client._do_get.assert_called_once_with('https://awesome.sendsecure.portal/api/v2/enterprises/acme/users/123456/favorites.json',
'application/json')
self.assertEqual(context.exception.code, 403)
self.assertIn('Access denied', context.exception.message)
def test_create_favorite_success(self):
favorite_json = json.dumps({ 'favorite': {
'first_name': 'John',
'last_name': 'Smith',
'email': 'john.smith@example.com',
'company_name': 'Acme',
'contact_methods':[
{ 'destination': '+15145550000',
'destination_type': 'office_phone' },
{ 'destination': '+15145550001',
'destination_type': 'cell_phone' }]}
})
expected_response = json.dumps({ 'id': 456,
'created_at': '2017-04-28T17:18:30.850Z',
'contact_methods': [
{ 'id': 1,
'created_at': '2017-04-28T17:14:55.304Z' },
{ 'id': 2,
'created_at': '2017-04-28T18:14:55.304Z' }]
})
self.client._do_post = Mock(return_value=expected_response)
result = json.loads(self.client.create_favorite(favorite_json))
self.client._do_post.assert_called_once_with('https://awesome.sendsecure.portal/api/v2/enterprises/acme/users/123456/favorites.json',
'application/json',
favorite_json,
'application/json')
self.assertEqual(result['id'], 456)
self.assertEqual(result['created_at'], '2017-04-28T17:18:30.850Z')
self.assertEqual(len(result['contact_methods']), 2)
self.assertEqual(result['contact_methods'][0]['id'], 1)
self.assertEqual(result['contact_methods'][0]['created_at'], '2017-04-28T17:14:55.304Z')
self.assertEqual(result['contact_methods'][1]['id'], 2)
self.assertEqual(result['contact_methods'][1]['created_at'], '2017-04-28T18:14:55.304Z')
def test_create_favorite_error(self):
favorite_json = json.dumps({ 'favorite': {
'first_name': 'John',
'last_name': 'Smith',
'email': '',
'company_name': 'Acme',
'contact_methods':[
{ 'destination': '+15145550000',
'destination_type': 'office_phone' },
{ 'destination': '+15145550001',
'destination_type': 'cell_phone' }]}
})
expected_error = json.dumps({'error':'Some entered values are incorrect.', 'attributes':{'email':['cannot be blank']}})
self.client._do_post = Mock(side_effect=SendSecureException(400, expected_error, ''))
with self.assertRaises(SendSecureException) as context:
result = self.client.create_favorite(favorite_json)
self.client._do_post.assert_called_once_with('https://awesome.sendsecure.portal/api/v2/enterprises/acme/users/123456/favorites.json',
'application/json',
favorite_json,
'application/json')
self.assertEqual(context.exception.code, 400)
self.assertIn(expected_error, context.exception.message)
def test_update_favorite_success(self):
favorite_json = json.dumps({ 'favorite': {
'id': 456,
'first_name': 'John',
'last_name': 'Smith',
'email': 'john.smith@example.com',
'order_number': 10,
'company_name': 'Acme',
'contact_methods':[
{ 'id': 1,
'destination': '+15145550000',
'destination_type': 'office_phone' },
{ 'destination': '+15145550001',
'destination_type': 'cell_phone' }]}
})
expected_response = json.dumps({ 'id': 456,
'created_at': '2017-04-28T17:18:30.850Z',
'updated_at': '2017-04-28T17:20:54.320Z',
'contact_methods': [
{ 'id': 1,
'created_at': '2017-04-28T17:14:55.304Z',
'updated_at': '2017-04-28T17:20:54.320Z' },
{ 'id': 2,
'created_at': '2017-04-28T18:14:55.304Z',
'updated_at': '2017-04-28T17:20:54.320Z' }]
})
self.client._do_patch = Mock(return_value=expected_response)
result = json.loads(self.client.update_favorite(456, favorite_json))
self.client._do_patch.assert_called_once_with('https://awesome.sendsecure.portal/api/v2/enterprises/acme/users/123456/favorites/456.json',
'application/json',
favorite_json,
'application/json')
self.assertEqual(result['id'], 456)
self.assertEqual(result['updated_at'], '2017-04-28T17:20:54.320Z')
self.assertEqual(len(result['contact_methods']), 2)
self.assertEqual(result['contact_methods'][0]['id'], 1)
self.assertEqual(result['contact_methods'][0]['updated_at'], '2017-04-28T17:20:54.320Z')
self.assertEqual(result['contact_methods'][1]['id'], 2)
self.assertEqual(result['contact_methods'][1]['created_at'], '2017-04-28T18:14:55.304Z')
def test_update_favorite_error(self):
favorite_json = json.dumps({ 'favorite': {
'id': 456,
'first_name': 'John',
'last_name': 'Smith',
'email': '',
'order_number': 10,
'company_name': 'Acme',
'contact_methods':[
{ 'id': 1,
'destination': '+15145550000',
'destination_type': 'office_phone' },
{ 'destination': '+15145550001',
'destination_type': 'cell_phone' }]}
})
expected_error = json.dumps({'error':'Some entered values are incorrect.', 'attributes':{'email':['cannot be blank']}})
self.client._do_patch = Mock(side_effect=SendSecureException(400, expected_error, ''))
with self.assertRaises(SendSecureException) as context:
result = self.client.update_favorite(456, favorite_json)
self.client._do_patch.assert_called_once_with('https://awesome.sendsecure.portal/api/v2/enterprises/acme/users/123456/favorites/456.json',
'application/json',
favorite_json,
'application/json')
self.assertEqual(context.exception.code, 400)
self.assertIn(expected_error, context.exception.message)
def test_delete_favorite_success(self):
self.client._do_delete = Mock(return_value=None)
result = self.client.delete_favorite(456)
self.client._do_delete.assert_called_once_with('https://awesome.sendsecure.portal/api/v2/enterprises/acme/users/123456/favorites/456.json',
'application/json')
def test_delete_favorite_error(self):
self.client._do_delete = Mock(side_effect=SendSecureException(403, 'Access denied', ''))
with self.assertRaises(SendSecureException) as context:
result = self.client.delete_favorite(456)
self.client._do_delete.assert_called_once_with('https://awesome.sendsecure.portal/api/v2/enterprises/acme/users/123456/favorites/456.json',
'application/json')
self.assertEqual(context.exception.code, 403)
self.assertIn('Access denied', context.exception.message)
def test_create_participant_success(self):
participant_json = json.dumps({ 'participant': {
'first_name': 'John',
'last_name': 'Smith',
'company_name': 'ACME',
'email': 'johny.smith@example.com',
'contact_methods': [
{ 'destination': '+15145550000',
'destination_type': 'office_phone' }] }
})
expected_response = json.dumps({ 'id': '23a3c8ec897548dc82f50a9a1550e52c',
'first_name': 'John',
'last_name': 'Smith',
'email': 'johny.smith@example.com',
'type': 'guest',
'role': 'guest',
'guest_options': {
'company_name': 'ACME',
'locked': False,
'bounced_email': False,
'failed_login_attempts': 0,
'verified': False,
'created_at': '2017-05-26T19:27:27.798Z',
'updated_at': '2017-05-26T19:27:27.798Z',
'contact_methods': [
{ 'id': 35105,
'destination': '+15145550000',
'destination_type': 'office_phone',
'verified': False,
'created_at': '2017-05-26T19:27:27.864Z',
'updated_at': '2017-05-26T19:27:27.864Z' } ] }
})
self.client._do_post = Mock(return_value=expected_response)
result = json.loads(self.client.create_participant(self.safebox_guid, participant_json))
self.client._do_post.assert_called_once_with('https://awesome.sendsecure.portal/api/v2/safeboxes/7a3c51e00a004917a8f5db807180fcc5/participants.json',
'application/json',
participant_json,
'application/json')
self.assertEqual(result['id'], '23a3c8ec897548dc82f50a9a1550e52c')
self.assertEqual(result['guest_options']['created_at'], '2017-05-26T19:27:27.798Z')
self.assertEqual(result['guest_options']['updated_at'], '2017-05-26T19:27:27.798Z')
self.assertEqual(len(result['guest_options']['contact_methods']), 1)
def test_create_participant_error(self):
participant_json = json.dumps({ 'participant': {
'first_name': 'John',
'last_name': 'Smith',
'company_name': 'ACME',
'email': '',
'contact_methods': [
{ 'destination': '+15145550000',
'destination_type': 'office_phone' }] }
})
expected_error = json.dumps({'error':'Some entered values are incorrect.', 'attributes':{'email':['cannot be blank']}})
self.client._do_post = Mock(side_effect=SendSecureException(400, expected_error, ''))
with self.assertRaises(SendSecureException) as context:
result = self.client.create_participant(self.safebox_guid, participant_json)
self.client._do_post.assert_called_once_with('https://awesome.sendsecure.portal/api/v2/safeboxes/7a3c51e00a004917a8f5db807180fcc5/participants.json',
'application/json',
participant_json,
'application/json')
self.assertEqual(context.exception.code, 400)
self.assertIn(expected_error, context.exception.message)
def test_update_participant_success(self):
participant_json = json.dumps({ 'participant': {
'first_name': 'John',
'last_name': 'Smith',
'company_name': 'XMedius',
'email': 'johny.smith@example.com',
'contact_methods': [
{ 'id': 32,
'destination': '+15145550000',
'destination_type': 'office_phone' }] }
})
expected_response = json.dumps({ 'id': '23a3c8ec897548dc82f50a9a1550e52c',
'first_name': 'John',
'last_name': 'Smith',
'email': 'johny.smith@example.com',
'type': 'guest',
'role': 'guest',
'guest_options': {
'company_name': 'XMedius',
'locked': False,
'bounced_email': False,
'failed_login_attempts': 0,
'verified': False,
'created_at': '2017-05-26T19:27:27.798Z',
'updated_at': '2017-05-26T19:27:27.798Z',
'contact_methods': [
{ 'id': 32,
'destination': '+15145550000',
'destination_type': 'office_phone',
'verified': False,
'created_at': '2017-05-26T19:27:27.864Z',
'updated_at': '2017-05-26T19:27:27.864Z' } ] }
})
self.client._do_patch = Mock(return_value=expected_response)
result = json.loads(self.client.update_participant(self.safebox_guid, '23a3c8ec897548dc82f50a9a1550e52c', participant_json))
self.client._do_patch.assert_called_once_with('https://awesome.sendsecure.portal/api/v2/safeboxes/7a3c51e00a004917a8f5db807180fcc5/participants/23a3c8ec897548dc82f50a9a1550e52c.json',
'application/json',
participant_json,
'application/json')
self.assertEqual(result['id'], '23a3c8ec897548dc82f50a9a1550e52c')
self.assertEqual(result['guest_options']['created_at'], '2017-05-26T19:27:27.798Z')
self.assertEqual(result['guest_options']['updated_at'], '2017-05-26T19:27:27.798Z')
self.assertEqual(len(result['guest_options']['contact_methods']), 1)
def test_update_participant_error(self):
participant_json = json.dumps({ 'participant': {
'first_name': 'John',
'last_name': 'Smith',
'company_name': 'XMedius',
'email': '',
'contact_methods': [
{ 'id': 32,
'destination': '+15145550000',
'destination_type': 'office_phone' }] }
})
expected_error = json.dumps({'error':'Some entered values are incorrect.', 'attributes':{'email':['cannot be blank']}})
self.client._do_patch = Mock(side_effect=SendSecureException(400, expected_error, ''))
with self.assertRaises(SendSecureException) as context:
result = self.client.update_participant(self.safebox_guid, '23a3c8ec897548dc82f50a9a1550e52c', participant_json)
self.client._do_patch.assert_called_once_with('https://awesome.sendsecure.portal/api/v2/safeboxes/7a3c51e00a004917a8f5db807180fcc5/participants/23a3c8ec897548dc82f50a9a1550e52c.json',
'application/json',
participant_json,
'application/json')
self.assertEqual(context.exception.code, 400)
self.assertIn(expected_error, context.exception.message)
def test_search_recipient_success(self):
expected_response = json.dumps({ 'results': [
{ 'id': 1,
'type': 'favorite',
'first_name': 'John',
'last_name': 'Doe',
'email': 'john@xmedius.com',
'company_name': ''
},
{
'id': 4,
'type': 'favorite',
'first_name': '',
'last_name': '',
'email': 'john@xmedius.com',
'company_name': ''
},
{
'id': 3,
'type': 'user',
'first_name': '',
'last_name': 'john',
'email': 'john.doe@sagemcom.com',
'company_name': ''
}
]
})
self.client._do_get = Mock(return_value=expected_response)
result = json.loads(self.client.search_recipient('john'))
self.client._do_get.assert_called_once_with('https://awesome.sendsecure.portal/api/v2/recipients/autocomplete?term=john',
'application/json')
self.assertEqual(len(result['results']), 3)
def test_reply_success(self):
reply_json = json.dumps({ 'safebox': {
'message': 'Test reply message',
'consent': True,
'document_ids': ['1234fdr5ewet5tew4wt'] } })
expected_response = json.dumps({ 'result': True,
'message': 'SafeBox successfully updated.' })
self.client._do_post = Mock(return_value=expected_response)
result = json.loads(self.client.reply(self.safebox_guid, reply_json))
self.client._do_post.assert_called_once_with('https://awesome.sendsecure.portal/api/v2/safeboxes/7a3c51e00a004917a8f5db807180fcc5/messages.json',
'application/json',
reply_json,
'application/json')
self.assertTrue(result['result'])
self.assertEqual(result['message'], 'SafeBox successfully updated.')
def test_add_time_success(self):
add_time_json = json.dumps({ 'safebox': { 'add_time_value': 8, 'add_time_unit': 'hours' }})
expected_response = json.dumps({ 'result': True,
'message': 'SafeBox duration successfully extended.',
'expiration': '2017-05-14T18:09:05.662Z' })
self.client._do_patch = Mock(return_value=expected_response)
result = json.loads(self.client.add_time(self.safebox_guid, add_time_json))
self.client._do_patch.assert_called_once_with('https://awesome.sendsecure.portal/api/v2/safeboxes/7a3c51e00a004917a8f5db807180fcc5/add_time.json',
'application/json',
add_time_json,
'application/json')
self.assertTrue(result['result'])
self.assertEqual(result['message'], 'SafeBox duration successfully extended.')
self.assertEqual(result['expiration'], '2017-05-14T18:09:05.662Z')
def test_add_time_error(self):
add_time_json = json.dumps({ 'safebox': { 'add_time_value': 8, 'add_time_unit': 'hours' }})
expected_error = json.dumps({ 'result': False, 'message': 'Unable to extend the SafeBox duration.' })
self.client._do_patch = Mock(side_effect=SendSecureException(400, expected_error, ''))
with self.assertRaises(SendSecureException) as context:
result = self.client.add_time(self.safebox_guid, add_time_json)
self.client._do_patch.assert_called_once_with('https://awesome.sendsecure.portal/api/v2/safeboxes/7a3c51e00a004917a8f5db807180fcc5/add_time.json',
'application/json',
add_time_json,
'application/json')
self.assertEqual(context.exception.code, 400)
self.assertIn(expected_error, context.exception.message)
def test_close_safebox_success(self):
expected_response = json.dumps({ 'result': True, 'message': 'SafeBox successfully closed.' })
self.client._do_patch = Mock(return_value=expected_response)
result = json.loads(self.client.close_safebox(self.safebox_guid))
self.client._do_patch.assert_called_once_with('https://awesome.sendsecure.portal/api/v2/safeboxes/7a3c51e00a004917a8f5db807180fcc5/close.json',
'application/json',
'',
'application/json')
self.assertTrue(result['result'])
self.assertEqual(result['message'], 'SafeBox successfully closed.')
def test_close_safebox_error(self):
expected_error = json.dumps({ 'result': False, 'message': 'Unable to close the SafeBox.' })
self.client._do_patch = Mock(side_effect=SendSecureException(400, expected_error, ''))
with self.assertRaises(SendSecureException) as context:
result = self.client.close_safebox(self.safebox_guid)
self.client._do_patch.assert_called_once_with('https://awesome.sendsecure.portal/api/v2/safeboxes/7a3c51e00a004917a8f5db807180fcc5/close.json',
'application/json',
'',
'application/json')
self.assertEqual(context.exception.code, 400)
self.assertIn(expected_error, context.exception.message)
def test_delete_safebox_content_success(self):
expected_response = json.dumps({ 'result': True, 'message': 'SafeBox content successfully deleted.' })
self.client._do_patch = Mock(return_value=expected_response)
result = json.loads(self.client.delete_safebox_content(self.safebox_guid))
self.client._do_patch.assert_called_once_with('https://awesome.sendsecure.portal/api/v2/safeboxes/7a3c51e00a004917a8f5db807180fcc5/delete_content.json',
'application/json',
'',
'application/json')
self.assertTrue(result['result'])
self.assertEqual(result['message'], 'SafeBox content successfully deleted.')
def test_delete_safebox_content_error(self):
expected_error = json.dumps({ 'result': False, 'message': 'Unable to delete the SafeBox content.' })
self.client._do_patch = Mock(side_effect=SendSecureException(400, expected_error, ''))
with self.assertRaises(SendSecureException) as context:
result = self.client.delete_safebox_content(self.safebox_guid)
self.client._do_patch.assert_called_once_with('https://awesome.sendsecure.portal/api/v2/safeboxes/7a3c51e00a004917a8f5db807180fcc5/delete_content.json',
'application/json',
'',
'application/json')
self.assertEqual(context.exception.code, 400)
self.assertIn(expected_error, context.exception.message)
def test_mark_as_read_success(self):
expected_response = json.dumps({ 'result': True })
self.client._do_patch = Mock(return_value=expected_response)
result = json.loads(self.client.mark_as_read(self.safebox_guid))
self.client._do_patch.assert_called_once_with('https://awesome.sendsecure.portal/api/v2/safeboxes/7a3c51e00a004917a8f5db807180fcc5/mark_as_read.json',
'application/json',
'',
'application/json')
self.assertTrue(result['result'])
def test_mark_as_read_error(self):
expected_error = json.dumps({ 'result': False, 'message': 'Unable to mark the SafeBox as Read.' })
self.client._do_patch = Mock(side_effect=SendSecureException(400, expected_error, ''))
with self.assertRaises(SendSecureException) as context:
result = self.client.mark_as_read(self.safebox_guid)
self.client._do_patch.assert_called_once_with('https://awesome.sendsecure.portal/api/v2/safeboxes/7a3c51e00a004917a8f5db807180fcc5/mark_as_read.json',
'application/json',
'',
'application/json')
self.assertEqual(context.exception.code, 400)
self.assertIn(expected_error, context.exception.message)
def test_mark_as_unread_success(self):
expected_response = json.dumps({ 'result': True })
self.client._do_patch = Mock(return_value=expected_response)
result = json.loads(self.client.mark_as_unread(self.safebox_guid))
self.client._do_patch.assert_called_once_with('https://awesome.sendsecure.portal/api/v2/safeboxes/7a3c51e00a004917a8f5db807180fcc5/mark_as_unread.json',
'application/json',
'',
'application/json')
self.assertTrue(result['result'])
def test_mark_as_unread_error(self):
expected_error = json.dumps({ 'result': False, 'message': 'Unable to mark the SafeBox as Unread.' })
self.client._do_patch = Mock(side_effect=SendSecureException(400, expected_error, ''))
with self.assertRaises(SendSecureException) as context:
result = self.client.mark_as_unread(self.safebox_guid)
self.client._do_patch.assert_called_once_with('https://awesome.sendsecure.portal/api/v2/safeboxes/7a3c51e00a004917a8f5db807180fcc5/mark_as_unread.json',
'application/json',
'',
'application/json')
self.assertEqual(context.exception.code, 400)
self.assertIn(expected_error, context.exception.message)
def test_mark_as_read_message_success(self):
expected_response = json.dumps({ 'result': True })
self.client._do_patch = Mock(return_value=expected_response)
result = json.loads(self.client.mark_as_read_message(self.safebox_guid, 1234))
self.client._do_patch.assert_called_once_with('https://awesome.sendsecure.portal/api/v2/safeboxes/7a3c51e00a004917a8f5db807180fcc5/messages/1234/read',
'application/json',
'',
'application/json')
self.assertTrue(result['result'])
def test_mark_as_read_message_error(self):
expected_error = json.dumps({ 'result': False, 'message': 'Unable to mark the message as read.' })
self.client._do_patch = Mock(side_effect=SendSecureException(400, expected_error, ''))
with self.assertRaises(SendSecureException) as context:
result = self.client.mark_as_read_message(self.safebox_guid, 1234)
self.client._do_patch.assert_called_once_with('https://awesome.sendsecure.portal/api/v2/safeboxes/7a3c51e00a004917a8f5db807180fcc5/messages/1234/read',
'application/json',
'',
'application/json')
self.assertEqual(context.exception.code, 400)
self.assertIn(expected_error, context.exception.message)
def test_mark_as_unread_message_success(self):
expected_response = json.dumps({ 'result': True })
self.client._do_patch = Mock(return_value=expected_response)
result = json.loads(self.client.mark_as_unread_message(self.safebox_guid, 1234))
self.client._do_patch.assert_called_once_with('https://awesome.sendsecure.portal/api/v2/safeboxes/7a3c51e00a004917a8f5db807180fcc5/messages/1234/unread',
'application/json',
'',
'application/json')
self.assertTrue(result['result'])
def test_mark_as_unread_message_error(self):
expected_error = json.dumps({ 'result': False, 'message': 'Unable to mark the message as Unread.' })
self.client._do_patch = Mock(side_effect=SendSecureException(400, expected_error, ''))
with self.assertRaises(SendSecureException) as context:
result = self.client.mark_as_unread_message(self.safebox_guid, 1234)
self.client._do_patch.assert_called_once_with('https://awesome.sendsecure.portal/api/v2/safeboxes/7a3c51e00a004917a8f5db807180fcc5/messages/1234/unread',
'application/json',
'',
'application/json')
self.assertEqual(context.exception.code, 400)
self.assertIn(expected_error, context.exception.message)
def test_get_file_url_success(self):
expected_response = json.dumps({ 'url': 'https://fileserver.integration.xmedius.com/xmss/DteeDmb-2zfN5WtCbgpJfSENaNjvbHi_nt' })
self.client._do_get = Mock(return_value=expected_response)
result = json.loads(self.client.get_file_url(self.safebox_guid, '154awe5qw4erq5', 'user@email.com'))
self.client._do_get.assert_called_once_with('https://awesome.sendsecure.portal/api/v2/safeboxes/7a3c51e00a004917a8f5db807180fcc5/documents/154awe5qw4erq5/url.json?user_email=user@email.com',
'application/json')
self.assertEqual(result['url'], 'https://fileserver.integration.xmedius.com/xmss/DteeDmb-2zfN5WtCbgpJfSENaNjvbHi_nt')
def test_get_file_url_error(self):
expected_error = json.dumps({'error': 'User email not found'})
self.client._do_get = Mock(side_effect=SendSecureException(400, expected_error, ''))
with self.assertRaises(SendSecureException) as context:
result = self.client.get_file_url(self.safebox_guid, '154awe5qw4erq5', 'user@email.com')
self.client._do_get.assert_called_once_with('https://awesome.sendsecure.portal/api/v2/safeboxes/7a3c51e00a004917a8f5db807180fcc5/documents/154awe5qw4erq5/url.json?user_email=user@email.com',
'application/json')
self.assertEqual(context.exception.code, 400)
self.assertIn(expected_error, context.exception.message)
def test_get_audit_record_url_success(self):
expected_response = json.dumps({ 'url': 'http://sendsecure.integration.xmedius.com/s/73af62f766ee459e81f46e4f533085a4.pdf' })
self.client._do_get = Mock(return_value=expected_response)
result = json.loads(self.client.get_audit_record_url(self.safebox_guid))
self.client._do_get.assert_called_once_with('https://awesome.sendsecure.portal/api/v2/safeboxes/7a3c51e00a004917a8f5db807180fcc5/audit_record_pdf.json',
'application/json')
self.assertEqual(result['url'], 'http://sendsecure.integration.xmedius.com/s/73af62f766ee459e81f46e4f533085a4.pdf')
def test_get_audit_record_url_error(self):
self.client._do_get = Mock(side_effect=SendSecureException(400, 'Access denied', ''))
with self.assertRaises(SendSecureException) as context:
result = self.client.get_audit_record_url(self.safebox_guid)
self.client._do_get.assert_called_once_with('https://awesome.sendsecure.portal/api/v2/safeboxes/7a3c51e00a004917a8f5db807180fcc5/audit_record_pdf.json',
'application/json')
self.assertEqual(context.exception.code, 400)
self.assertIn('Access denied', context.exception.message)
def test_get_safeboxes_success(self):
expected_response = json.dumps({ 'count': 1,
'previous_page_url': None,
'next_page_url': 'api/v2/safeboxes?status=unread&search=test&page=2',
'safeboxes': [
{
'guid': '73af62f766ee459e81f46e4f533085a4',
'user_id': 1,
'enterprise_id': 1
}]
})
self.client._do_get = Mock(return_value=expected_response)
result = json.loads(self.client.get_safeboxes(None, None))
self.client._do_get.assert_called_once_with('https://awesome.sendsecure.portal/api/v2/safeboxes.json',
'application/json')
self.assertEqual(result['count'], 1)
self.assertEqual(len(result['safeboxes']), 1)
self.assertEqual(result['safeboxes'][0]['guid'], '73af62f766ee459e81f46e4f533085a4')
def test_get_safeboxes_with_search_params_success(self):
expected_response = json.dumps({ 'count': 1,
'previous_page_url': None,
'next_page_url': 'api/v2/safeboxes?status=unread&search=test&page=2',
'safeboxes': [
{
'guid': '73af62f766ee459e81f46e4f533085a4',
'user_id': 1,
'enterprise_id': 1
}]
})
self.client._do_get = Mock(return_value=expected_response)
result = json.loads(self.client.get_safeboxes(None, {'status': 'unread', 'search_term': 'test', 'page': 1}))
self.client._do_get.assert_called_once_with('https://awesome.sendsecure.portal/api/v2/safeboxes.json?status=unread&search_term=test&page=1',
'application/json')
self.assertEqual(result['count'], 1)
self.assertEqual(len(result['safeboxes']), 1)
self.assertEqual(result['safeboxes'][0]['guid'], '73af62f766ee459e81f46e4f533085a4')
def test_get_safeboxes_with_url_success(self):
expected_response = json.dumps({ 'count': 1,
'previous_page_url': None,
'next_page_url': 'api/v2/safeboxes?status=unread&search=test&page=2',
'safeboxes': [
{
'guid': '73af62f766ee459e81f46e4f533085a4',
'user_id': 1,
'enterprise_id': 1
}]
})
self.client._do_get = Mock(return_value=expected_response)
result = json.loads(self.client.get_safeboxes('https://awesome.sendsecure.portal/api/v2/safeboxes.json?status=unread&search_term=test&page=1', None))
self.client._do_get.assert_called_once_with('https://awesome.sendsecure.portal/api/v2/safeboxes.json?status=unread&search_term=test&page=1',
'application/json')
self.assertEqual(result['count'], 1)
self.assertEqual(len(result['safeboxes']), 1)
self.assertEqual(result['safeboxes'][0]['guid'], '73af62f766ee459e81f46e4f533085a4')
def test_get_safeboxes_error(self):
expected_error = json.dumps({ 'error': 'Invalid per_page parameter value (1001)' })
self.client._do_get = Mock(side_effect=SendSecureException(400, expected_error, ''))
with self.assertRaises(SendSecureException) as context:
result = self.client.get_safeboxes(None, { 'status': 'unread', 'search_term': 'test', 'per_page': 1001, 'page': 2 })
self.client._do_get.assert_called_once_with('https://awesome.sendsecure.portal/api/v2/safeboxes.json?status=unread&search_term=test&per_page=1001&page=2',
'application/json')
self.assertEqual(context.exception.code, 400)
self.assertIn(expected_error, context.exception.message)
def test_get_safebox_info_success(self):
expected_response = json.dumps({ 'safebox': {
'guid': '73af62f766ee459e81f46e4f533085a4',
'security_options': {},
'participants': [],
'messages': [],
'download_activity': {},
'event_history': []
}
})
self.client._do_get = Mock(return_value=expected_response)
result = json.loads(self.client.get_safebox_info(self.safebox_guid, None))
self.client._do_get.assert_called_once_with('https://awesome.sendsecure.portal/api/v2/safeboxes/7a3c51e00a004917a8f5db807180fcc5.json',
'application/json')
self.assertEqual(result['safebox']['guid'], '73af62f766ee459e81f46e4f533085a4')
self.assertFalse(result['safebox']['security_options'])
self.assertFalse(result['safebox']['participants'])
self.assertFalse(result['safebox']['messages'])
self.assertFalse(result['safebox']['download_activity'])
self.assertFalse(result['safebox']['event_history'])
def test_get_safebox_info_error(self):
self.client._do_get = Mock(side_effect=SendSecureException(403, 'Access denied', ''))
with self.assertRaises(SendSecureException) as context:
result = self.client.get_safebox_info(self.safebox_guid, None)
self.client._do_get.assert_called_once_with('https://awesome.sendsecure.portal/api/v2/safeboxes/7a3c51e00a004917a8f5db807180fcc5.json',
'application/json')
self.assertEqual(context.exception.code, 403)
self.assertIn('Access denied', context.exception.message)
def test_get_safebox_participants_success(self):
expected_response = json.dumps({ 'participants': [{
'id': '7a3c51e00a004917a8f5db807180fcc5',
'first_name': '',
'last_name': '',
'email': 'john.smith@example.com',
'type': 'guest',
'role': 'guest',
'guest_options': {
'company_name': '',
'locked': False,
'bounced_email': False,
'failed_login_attempts': 0,
'verified': False,
'contact_methods': [{
'id': 35016,
'destination': '+15145550000',
'destination_type': 'cell_phone',
'verified': False,
'created_at': '2017-05-24T14:45:35.453Z',
'updated_at': '2017-05-24T14:45:35.453Z' }]
}},
{
'id': 34208,
'first_name': 'Jane',
'last_name': 'Doe',
'email': 'jane.doe@example.com',
'type': 'user',
'role': 'owner' }]
})
self.client._do_get = Mock(return_value=expected_response)
result = json.loads(self.client.get_safebox_participants(self.safebox_guid))
self.client._do_get.assert_called_once_with('https://awesome.sendsecure.portal/api/v2/safeboxes/7a3c51e00a004917a8f5db807180fcc5/participants.json',
'application/json')
self.assertEqual(len(result['participants']), 2)
def test_get_safebox_participants_error(self):
self.client._do_get = Mock(side_effect=SendSecureException(403, 'Access denied', ''))
with self.assertRaises(SendSecureException) as context:
result = self.client.get_safebox_participants(self.safebox_guid)
self.client._do_get.assert_called_once_with('https://awesome.sendsecure.portal/api/v2/safeboxes/7a3c51e00a004917a8f5db807180fcc5/participants.json',
'application/json')
self.assertEqual(context.exception.code, 403)
self.assertIn('Access denied', context.exception.message)
def test_get_safebox_messages_success(self):
expected_response = json.dumps({ 'messages': [{
'note': 'Lorem Ipsum...',
'note_size': 148,
'read': True,
'author_id': '3',
'author_type': 'guest',
'created_at': '2017-04-05T14:49:35.198Z',
'documents': [
{
'id': '5a3df276aaa24e43af5aca9b2204a535',
'name': 'Axient-soapui-project.xml',
'sha': '724ae04430315c60ca17f4dbee775a37f5b18c05aee99c9c',
'size': 129961,
'url': 'https://sendsecure.xmedius.com/api/v2/safeboxes/b4d898ada15f42f293e31905c514607f/documents/5a3df276aaa24e43af5aca9b2204a535/url'
}] }]
})
self.client._do_get = Mock(return_value=expected_response)
result = json.loads(self.client.get_safebox_messages(self.safebox_guid))
self.client._do_get.assert_called_once_with('https://awesome.sendsecure.portal/api/v2/safeboxes/7a3c51e00a004917a8f5db807180fcc5/messages.json',
'application/json')
self.assertEqual(len(result['messages']), 1)
self.assertEqual(len(result['messages'][0]['documents']), 1)
self.assertEqual(result['messages'][0]['documents'][0]['id'], '5a3df276aaa24e43af5aca9b2204a535')
def test_get_safebox_messages_error(self):
self.client._do_get = Mock(side_effect=SendSecureException(403, 'Access denied', ''))
with self.assertRaises(SendSecureException) as context:
result = self.client.get_safebox_messages(self.safebox_guid)
self.client._do_get.assert_called_once_with('https://awesome.sendsecure.portal/api/v2/safeboxes/7a3c51e00a004917a8f5db807180fcc5/messages.json',
'application/json')
self.assertEqual(context.exception.code, 403)
self.assertIn('Access denied', context.exception.message)
    def test_get_safebox_security_options_success(self):
        expected_response = json.dumps({'security_options': {
            'security_code_length': 4,
            'allowed_login_attempts': 3,
            'allow_remember_me': True,
            'allow_sms': True,
            'allow_voice': True,
            'allow_email': False,
            'reply_enabled': True,
            'group_replies': False,
            'code_time_limit': 5,
            'encrypt_message': True,
            'two_factor_required': True,
            'auto_extend_value': 3,
            'auto_extend_unit': 'days',
            'retention_period_type': 'do_not_discard',
            'retention_period_value': None,
            'retention_period_unit': 'hours',
            'delete_content_on': None,
            'allow_manual_delete': True,
            'allow_manual_close': False
        }})
        self.client._do_get = Mock(return_value=expected_response)
        result = json.loads(self.client.get_safebox_security_options(self.safebox_guid))
        self.client._do_get.assert_called_once_with(
            'https://awesome.sendsecure.portal/api/v2/safeboxes/7a3c51e00a004917a8f5db807180fcc5/security_options.json',
            'application/json')
        self.assertTrue(result['security_options'])

    def test_get_safebox_security_options_error(self):
        self.client._do_get = Mock(side_effect=SendSecureException(403, 'Access denied', ''))
        with self.assertRaises(SendSecureException) as context:
            result = self.client.get_safebox_security_options(self.safebox_guid)
        self.client._do_get.assert_called_once_with(
            'https://awesome.sendsecure.portal/api/v2/safeboxes/7a3c51e00a004917a8f5db807180fcc5/security_options.json',
            'application/json')
        self.assertEqual(context.exception.code, 403)
        self.assertIn('Access denied', context.exception.message)
    def test_get_safebox_download_activity_success(self):
        expected_response = json.dumps({'download_activity': {
            'guests': [{
                'id': '42220c777c30486e80cd3bbfa7f8e82f',
                'documents': [{
                    'id': '5a3df276aaa24e43af5aca9b2204a535',
                    'downloaded_bytes': 0,
                    'download_date': None
                }]
            }],
            'owner': {
                'id': 72,
                'documents': []
            }
        }})
        self.client._do_get = Mock(return_value=expected_response)
        result = json.loads(self.client.get_safebox_download_activity(self.safebox_guid))
        self.client._do_get.assert_called_once_with(
            'https://awesome.sendsecure.portal/api/v2/safeboxes/7a3c51e00a004917a8f5db807180fcc5/download_activity.json',
            'application/json')
        self.assertEqual(result['download_activity']['guests'][0]['id'], '42220c777c30486e80cd3bbfa7f8e82f')
        self.assertEqual(result['download_activity']['owner']['id'], 72)

    def test_get_safebox_download_activity_error(self):
        self.client._do_get = Mock(side_effect=SendSecureException(403, 'Access denied', ''))
        with self.assertRaises(SendSecureException) as context:
            result = self.client.get_safebox_download_activity(self.safebox_guid)
        self.client._do_get.assert_called_once_with(
            'https://awesome.sendsecure.portal/api/v2/safeboxes/7a3c51e00a004917a8f5db807180fcc5/download_activity.json',
            'application/json')
        self.assertEqual(context.exception.code, 403)
        self.assertIn('Access denied', context.exception.message)
    def test_get_safebox_event_history_success(self):
        expected_response = json.dumps({'event_history': [{
            'type': '42220c777c30486e80cd3bbfa7f8e82f',
            'date': [],
            'metadata': {},
            'message': 'SafeBox created by john.smith@example.com with 0 attachment(s) from 0.0.0.0 for john.smith@example.com'
        }]})
        self.client._do_get = Mock(return_value=expected_response)
        result = json.loads(self.client.get_safebox_event_history(self.safebox_guid))
        self.client._do_get.assert_called_once_with(
            'https://awesome.sendsecure.portal/api/v2/safeboxes/7a3c51e00a004917a8f5db807180fcc5/event_history.json',
            'application/json')
        self.assertEqual(result['event_history'][0]['message'], 'SafeBox created by john.smith@example.com with 0 attachment(s) from 0.0.0.0 for john.smith@example.com')

    def test_get_safebox_event_history_error(self):
        self.client._do_get = Mock(side_effect=SendSecureException(403, 'Access denied', ''))
        with self.assertRaises(SendSecureException) as context:
            result = self.client.get_safebox_event_history(self.safebox_guid)
        self.client._do_get.assert_called_once_with(
            'https://awesome.sendsecure.portal/api/v2/safeboxes/7a3c51e00a004917a8f5db807180fcc5/event_history.json',
            'application/json')
        self.assertEqual(context.exception.code, 403)
        self.assertIn('Access denied', context.exception.message)
    def test_archive_safebox_success(self):
        user_email_json = json.dumps({'user_email': 'user@example.com'})
        expected_response = json.dumps({'result': True, 'message': 'SafeBox successfully archived'})
        self.client._do_post = Mock(return_value=expected_response)
        result = json.loads(self.client.archive_safebox(self.safebox_guid, user_email_json))
        self.client._do_post.assert_called_once_with(
            'https://awesome.sendsecure.portal/api/v2/safeboxes/7a3c51e00a004917a8f5db807180fcc5/tag/archive',
            'application/json',
            user_email_json,
            'application/json')
        self.assertTrue(result['result'])
        self.assertEqual(result['message'], 'SafeBox successfully archived')

    def test_archive_safebox_error(self):
        user_email_json = json.dumps({'user_email': 'user@example.com'})
        expected_error = json.dumps({'result': False, 'message': 'Unable to add Safebox to Archives'})
        self.client._do_post = Mock(side_effect=SendSecureException(400, expected_error, ''))
        with self.assertRaises(SendSecureException) as context:
            result = self.client.archive_safebox(self.safebox_guid, user_email_json)
        self.client._do_post.assert_called_once_with(
            'https://awesome.sendsecure.portal/api/v2/safeboxes/7a3c51e00a004917a8f5db807180fcc5/tag/archive',
            'application/json',
            user_email_json,
            'application/json')
        self.assertEqual(context.exception.code, 400)
        self.assertIn(expected_error, context.exception.message)

    def test_unarchive_safebox_success(self):
        user_email_json = json.dumps({'user_email': 'user@example.com'})
        expected_response = json.dumps({'result': True, 'message': 'SafeBox successfully removed from Archives'})
        self.client._do_post = Mock(return_value=expected_response)
        result = json.loads(self.client.unarchive_safebox(self.safebox_guid, user_email_json))
        self.client._do_post.assert_called_once_with(
            'https://awesome.sendsecure.portal/api/v2/safeboxes/7a3c51e00a004917a8f5db807180fcc5/untag/archive',
            'application/json',
            user_email_json,
            'application/json')
        self.assertTrue(result['result'])
        self.assertEqual(result['message'], 'SafeBox successfully removed from Archives')

    def test_unarchive_safebox_error(self):
        user_email_json = json.dumps({'user_email': 'user@example.com'})
        expected_error = json.dumps({'result': False, 'message': 'Unable to remove Safebox from Archives'})
        self.client._do_post = Mock(side_effect=SendSecureException(400, expected_error, ''))
        with self.assertRaises(SendSecureException) as context:
            result = self.client.unarchive_safebox(self.safebox_guid, user_email_json)
        self.client._do_post.assert_called_once_with(
            'https://awesome.sendsecure.portal/api/v2/safeboxes/7a3c51e00a004917a8f5db807180fcc5/untag/archive',
            'application/json',
            user_email_json,
            'application/json')
        self.assertEqual(context.exception.code, 400)
        self.assertIn(expected_error, context.exception.message)
    def test_get_consent_group_messages_success(self):
        expected_response = json.dumps({'consent_message_group': {
            'id': 1,
            'name': 'Default',
            'created_at': '2016-08-29T14:52:26.085Z',
            'updated_at': '2016-08-29T14:52:26.085Z',
            'consent_messages': [{
                'locale': 'en',
                'value': 'Lorem ipsum',
                'created_at': '2016-08-29T14:52:26.085Z',
                'updated_at': '2016-08-29T14:52:26.085Z'
            }]
        }})
        self.client._do_get = Mock(return_value=expected_response)
        result = json.loads(self.client.get_consent_group_messages(1))
        self.client._do_get.assert_called_once_with(
            'https://awesome.sendsecure.portal/api/v2/enterprises/acme/consent_message_groups/1',
            'application/json')
        self.assertEqual(result['consent_message_group']['id'], 1)
        self.assertEqual(result['consent_message_group']['consent_messages'][0]['value'], 'Lorem ipsum')

    def test_get_consent_group_messages_error(self):
        self.client._do_get = Mock(side_effect=SendSecureException(404, 'The requested URL cannot be found.', ''))
        with self.assertRaises(SendSecureException) as context:
            result = self.client.get_consent_group_messages(42)
        self.client._do_get.assert_called_once_with(
            'https://awesome.sendsecure.portal/api/v2/enterprises/acme/consent_message_groups/42',
            'application/json')
        self.assertEqual(context.exception.code, 404)
        self.assertIn('The requested URL cannot be found.', context.exception.message)
    def test_unfollow(self):
        expected_response = json.dumps({'result': True, 'message': 'The SafeBox is now unfollowed.'})
        self.client._do_patch = Mock(return_value=expected_response)
        result = json.loads(self.client.unfollow('A-Safebox-Guid'))
        self.assertTrue(result['result'])
        self.assertEqual(result['message'], 'The SafeBox is now unfollowed.')

    def test_unfollow_invalid_safebox_id(self):
        self.client._do_patch = Mock(side_effect=SendSecureException(404, 'The requested URL cannot be found.', ''))
        with self.assertRaises(SendSecureException) as context:
            result = self.client.unfollow('this-safebox-does-not-exist')
        self.client._do_patch.assert_called_once_with(
            'https://awesome.sendsecure.portal/api/v2/safeboxes/this-safebox-does-not-exist/unfollow',
            'application/json', '', 'application/json')
        self.assertEqual(context.exception.code, 404)
        self.assertIn('The requested URL cannot be found.', context.exception.message)
    def test_follow(self):
        expected_response = json.dumps({'result': True, 'message': 'The SafeBox is now followed.'})
        self.client._do_patch = Mock(return_value=expected_response)
        # Bug fix: this test previously called `unfollow`, so the "now
        # followed" assertions never exercised the method under test.
        result = json.loads(self.client.follow('A-Safebox-Guid'))
        self.assertTrue(result['result'])
        self.assertEqual(result['message'], 'The SafeBox is now followed.')

    def test_follow_invalid_safebox_id(self):
        self.client._do_patch = Mock(side_effect=SendSecureException(404, 'The requested URL cannot be found.', ''))
        with self.assertRaises(SendSecureException) as context:
            result = self.client.follow('this-safebox-does-not-exist')
        self.client._do_patch.assert_called_once_with(
            'https://awesome.sendsecure.portal/api/v2/safeboxes/this-safebox-does-not-exist/follow',
            'application/json', '', 'application/json')
        self.assertEqual(context.exception.code, 404)
        self.assertIn('The requested URL cannot be found.', context.exception.message)
if __name__ == '__main__':
    unittest.main()
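The tests above all follow one pattern: stub the client's private transport method with a `Mock`, call the public API, then assert on both the outgoing request and the parsed JSON. A minimal, self-contained sketch of that pattern (the `Client` class here is a hypothetical stand-in, not the real SendSecure client):

```python
import json
import unittest
from unittest.mock import Mock


class Client:
    """Hypothetical HTTP API client used only to illustrate the pattern."""

    def _do_get(self, url, accept):
        raise NotImplementedError  # a real client would perform an HTTP GET here

    def get_safebox_messages(self, guid):
        return self._do_get('https://portal/api/v2/safeboxes/%s/messages.json' % guid,
                            'application/json')


class ClientTest(unittest.TestCase):
    def test_messages_are_fetched_and_returned_verbatim(self):
        client = Client()
        # Stub the transport layer so no network traffic happens.
        client._do_get = Mock(return_value=json.dumps({'messages': []}))
        result = json.loads(client.get_safebox_messages('abc'))
        # Verify both the request that would have gone out and the response.
        client._do_get.assert_called_once_with(
            'https://portal/api/v2/safeboxes/abc/messages.json', 'application/json')
        self.assertEqual(result['messages'], [])


suite = unittest.defaultTestLoader.loadTestsFromTestCase(ClientTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

Because the stub replaces only the lowest-level transport method, every layer of the client's own logic (URL construction, content-type handling) is still exercised.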
# ===============================================================
# File: synbioweaver/__init__.py
# Repo: PhilippBoeing/synbioweaver (MIT license)
# ===============================================================
from core import *
from parts import *
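This package `__init__.py` re-exports everything from its submodules with wildcard imports. `from m import *` pulls in every public name of `m`, or exactly the names listed in `__all__` when the module defines one; a small self-contained sketch (the `core_demo` module below is fabricated in memory for illustration, it is not part of synbioweaver):

```python
import sys
import types

# Build a throwaway module in memory to stand in for a package submodule.
core_demo = types.ModuleType('core_demo')
exec("__all__ = ['Part']\nclass Part: pass\n_internal = 42", core_demo.__dict__)
sys.modules['core_demo'] = core_demo

# Simulate `from core_demo import *` into a fresh namespace.
ns = {}
exec("from core_demo import *", ns)
print('Part' in ns)        # True: listed in __all__
print('_internal' in ns)   # False: not exported by the wildcard import
```

Defining `__all__` in `core` and `parts` would make a wildcard-based package API like this one explicit instead of implicit.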
# ===============================================================
# File: owner/employeeapi.py
# Repo: himasnhu1/example (MIT license)
# ===============================================================
from django.shortcuts import render, get_object_or_404
from . import models, serializers
from rest_framework import viewsets, status, permissions
from rest_framework.decorators import action
import datetime
from core import models as coremodels
from library import models as librarymodels
from student import models as studentmodels
from student import serializers as studentserializers
from django.db.models import Q
from django.db.models import Avg, Count, Min, Sum
from rest_framework.response import Response
from rest_framework.views import APIView
from django.core.mail import EmailMessage
import json
import ast
from rest_framework.generics import *
from django.core.exceptions import ObjectDoesNotExist
from django.db import IntegrityError
from django.core.mail import EmailMultiAlternatives
from django.template import Context
from django.template.loader import render_to_string
from django.utils.html import strip_tags
class EmployeeEnquiryApi(ListAPIView, CreateAPIView):
    queryset = models.Enquiry.objects.all()
    serializer_class = serializers.EnquirySerializer
    permission_classes = [permissions.IsAuthenticated, ]

    def get_queryset(self):
        queryset = self.queryset.filter(library_branch=self.kwargs["id"])
        # Bug fix: the query parameter was previously compared against the
        # imported rest_framework `status` module instead of the local value,
        # so the open/closed filter never matched.
        status_param = self.request.query_params.get('status', None)
        if status_param is not None:
            if status_param == "open":
                queryset = queryset.filter(status="Open")
            elif status_param == "closed":
                queryset = queryset.filter(Q(status="Registered") | Q(status="Withdrawed"))
        from_date = self.request.query_params.get('from_Date', None)
        if from_date is not None:
            from_date = datetime.datetime.strptime(from_date, "%Y-%m-%d").date()
            queryset = queryset.filter(add_date__gte=from_date)
        to_date = self.request.query_params.get('to_date', None)
        if to_date is not None:
            to_date = datetime.datetime.strptime(to_date, "%Y-%m-%d").date()
            queryset = queryset.filter(add_date__lte=to_date)
        search = self.request.query_params.get('search', None)
        if search is not None:
            queryset = queryset.filter(name__icontains=search)
        return queryset

    def create(self, request, *args, **kwargs):
        if not request.user.is_owner and not request.user.is_employee:
            return Response({"Error": "Access Denied"}, status=401)
        if request.user.is_owner:
            instance = models.Owner.objects.get(user=request.user)
        else:
            instance = models.Employee.objects.get(user=request.user)
        branchlist = instance.branches.all()
        if self.kwargs["id"] not in branchlist.values_list('id', flat=True):
            return Response({"error": "Branch does not belong to the owner/employee"}, status=403)
        serializer = self.get_serializer(data=request.data)
        serializer.is_valid(raise_exception=True)
        self.perform_create(serializer)
        headers = self.get_success_headers(serializer.data)
        return Response(serializer.data, status=status.HTTP_201_CREATED, headers=headers)

    def list(self, request, *args, **kwargs):
        if not request.user.is_owner and not request.user.is_employee:
            return Response({"Error": "Access Denied"}, status=401)
        if request.user.is_owner:
            instance = models.Owner.objects.get(user=request.user)
        else:
            instance = models.Employee.objects.get(user=request.user)
        branchlist = instance.branches.all()
        if self.kwargs["id"] not in branchlist.values_list('id', flat=True):
            return Response({"error": "Branch does not belong to the owner/employee"}, status=403)
        queryset = self.get_queryset()
        serializer = self.get_serializer(queryset, many=True)
        return Response(serializer.data, status=200)
class EmployeeEnquiryUpdateApi(UpdateAPIView):
    queryset = models.Enquiry.objects.all()
    serializer_class = serializers.EnquirySerializer
    permission_classes = [permissions.IsAuthenticated, ]

    def partial_update(self, request, *args, **kwargs):
        if not request.user.is_owner and not request.user.is_employee:
            return Response({"Error": "Access Denied"}, status=401)
        if request.user.is_owner:
            instance = models.Owner.objects.get(user=request.user)
        else:
            instance = models.Employee.objects.get(user=request.user)
        branchlist = instance.branches.all()
        if self.kwargs["id"] not in branchlist.values_list('id', flat=True):
            return Response({"error": "Branch does not belong to the owner/employee"}, status=403)
        enquiry = self.queryset.get(id=kwargs["pk"])
        serializer = self.serializer_class(enquiry, data=request.data, partial=True)
        serializer.is_valid(raise_exception=True)
        serializer.save()
        return Response(serializer.data, status=200)
class EmployeeEnquiryFollowApi(CreateAPIView):
    queryset = models.EnquiryFollowUp.objects.all()
    serializer_class = serializers.EnquiryFollowUpSerializer
    permission_classes = [permissions.IsAuthenticated, ]

    def create(self, request, *args, **kwargs):
        serializer = self.serializer_class(data=request.data)
        serializer.is_valid(raise_exception=True)
        self.perform_create(serializer)
        # Bug fix: `Enquiry` and `EnquirySerializer` are not imported directly
        # in this module; qualify them with their module namespaces.
        enquiry = models.Enquiry.objects.get(id=request.data["enquiry"])
        data = {
            "follow_up_date": request.data["next_follow_up_date"]
        }
        serializer2 = serializers.EnquirySerializer(enquiry, data=data, partial=True)
        serializer2.is_valid(raise_exception=True)
        # enquiry.follow_up_date = datetime.datetime.strptime(request.data["next_follow_up_date"], "%Y-%m-%d").date()
        serializer2.save()
        headers = self.get_success_headers(serializer.data)
        return Response(serializer.data, status=status.HTTP_201_CREATED, headers=headers)
class EmployeeFeedbackListAPI(ListAPIView, CreateAPIView):
    queryset = models.Feedback.objects.all()
    serializer_class = serializers.FeedbackMinSerializer
    permission_classes = [permissions.IsAuthenticated, ]

    def list(self, request, *args, **kwargs):
        if not request.user.is_owner and not request.user.is_employee:
            return Response({"Error": "Access Denied"}, status=401)
        if request.user.is_owner:
            instance = models.Owner.objects.get(user=request.user)
        else:
            instance = models.Employee.objects.get(user=request.user)
        branchlist = instance.branches.all()
        if kwargs["id"] not in branchlist.values_list('id', flat=True):
            return Response({"error": "Branch does not belong to the owner/employee"}, status=403)
        queryset = self.queryset.filter(library_branch=self.kwargs["id"])
        from_date = self.request.query_params.get('from_Date', None)
        if from_date is not None:
            from_date = datetime.datetime.strptime(from_date, "%Y-%m-%d").date()
            queryset = queryset.filter(date__gte=from_date)
        to_date = self.request.query_params.get('to_date', None)
        if to_date is not None:
            to_date = datetime.datetime.strptime(to_date, "%Y-%m-%d").date()
            queryset = queryset.filter(date__lte=to_date)
        search = self.request.query_params.get('search', None)
        if search is not None:
            queryset = queryset.filter(Q(title__icontains=search) | Q(details__icontains=search))
        active = self.request.query_params.get("active")
        if active is not None:
            queryset = queryset.filter(active=True)
        type = self.request.query_params.get("type")
        if type is not None:
            queryset = queryset.filter(type=type)
        queryset = queryset.order_by('date')
        serializer = self.serializer_class(queryset, many=True)
        return Response(serializer.data, status=200)

    def create(self, request, *args, **kwargs):
        if not request.user.is_owner and not request.user.is_employee:
            return Response({"Error": "Access Denied"}, status=401)
        if request.user.is_owner:
            instance = models.Owner.objects.get(user=request.user)
        else:
            instance = models.Employee.objects.get(user=request.user)
        branchlist = instance.branches.all()
        if kwargs["id"] not in branchlist.values_list('id', flat=True):
            return Response({"error": "Branch does not belong to the owner/employee"}, status=403)
        student = studentmodels.Student.objects.get(id=request.data["student"])
        if request.data["type"] == "Complaint":
            title = "Thank For Complaining. Your Complaint has been successfully registered"
            description = "Thank For Complaining. Your Complaint has been successfully registered \n We have started looking into the details. We will update your once the issue is ressolved.\n\nThank you"
            notiftype = "Complaint Registered"
            notif = coremodels.Notifications.objects.create(student=student, title=title, description=description, notifType=notiftype)
        request.data["library_branch"] = kwargs["id"]
        print(request.data)
        serializer = serializers.FeedbackSerializer(data=request.data)
        serializer.is_valid(raise_exception=True)
        self.perform_create(serializer)
        headers = self.get_success_headers(serializer.data)
        return Response(serializer.data, status=status.HTTP_201_CREATED, headers=headers)
class EmployeeFeedbackUpdateAPI(UpdateAPIView):
    queryset = models.Feedback.objects.all()
    serializer_class = serializers.FeedbackSerializer
    permission_classes = [permissions.IsAuthenticated, ]

    def partial_update(self, request, *args, **kwargs):
        if not request.user.is_owner and not request.user.is_employee:
            return Response({"Error": "Access Denied"}, status=401)
        if request.user.is_owner:
            instance = models.Owner.objects.get(user=request.user)
        else:
            instance = models.Employee.objects.get(user=request.user)
        branchlist = instance.branches.all()
        if kwargs["id"] not in branchlist.values_list('id', flat=True):
            return Response({"error": "Branch does not belong to the owner/employee"}, status=403)
        feedback = self.queryset.get(id=kwargs["pk"])
        if feedback.type == 'Complaint':
            title = "Your complaint registered on " + str(feedback.date) + " is ressolved"
            description = request.data["response"]
            notiftype = "Complaint Closed"
            notif = coremodels.Notifications.objects.create(student=feedback.student, title=title, description=description, notifType=notiftype)
        request.data["active"] = False
        serializer = self.serializer_class(feedback, data=request.data, partial=True)
        serializer.is_valid(raise_exception=True)
        serializer.save()
        # headers = self.get_success_headers(serializer.data)
        return Response(serializer.data, status=200)
class EmployeeExpenseAPI(APIView):
    queryset = models.Expense.objects.all()
    serializer_class = serializers.ExpenseSerializer
    permission_classes = [permissions.IsAuthenticated, ]

    def get(self, request, id):
        if not request.user.is_owner and not request.user.is_employee:
            return Response({"Error": "Access Denied"}, status=401)
        if request.user.is_owner:
            instance = models.Owner.objects.get(user=request.user)
        else:
            instance = models.Employee.objects.get(user=request.user)
        branchlist = instance.branches.all()
        if self.kwargs["id"] not in branchlist.values_list('id', flat=True):
            return Response({"error": "Branch does not belong to the owner/employee"}, status=403)
        today = datetime.datetime.today()
        data = []
        yearwise = []
        queryset = self.queryset.filter(library_branch=id)
        from_date = self.request.query_params.get('from_date', None)
        if from_date is not None:
            from_date = datetime.datetime.strptime(from_date, "%Y-%m-%d").date()
            queryset = queryset.filter(date__gte=from_date)
        to_date = self.request.query_params.get('to_date', None)
        if to_date is not None:
            to_date = datetime.datetime.strptime(to_date, "%Y-%m-%d").date()
            queryset = queryset.filter(date__lte=to_date)
        if from_date is None and to_date is None:
            # No date range given: aggregate every year from the first to the
            # last recorded expense, month by month.
            queryset = queryset.order_by('date')
            startyear = (queryset.first()).date.year
            lastyear = (queryset.last()).date.year
            for i in range(startyear, lastyear + 1):
                data = []
                if i == today.year:
                    startmonth = 1
                    endmonth = today.month
                else:
                    startmonth = 1
                    endmonth = 12
                for j in range(startmonth, endmonth + 1):
                    expense = 0
                    queryset = self.queryset.filter(library_branch=id)
                    queryset = queryset.filter(library_branch=id, date__month=j, date__year=i)
                    for k in queryset:
                        expense = expense + k.amount_paid
                    data.append({
                        "month": j,
                        "expense": expense
                    })
                yearwise.append({
                    "year": i,
                    "details": data
                })
            return Response(yearwise, status=200)
        # A date range was given: aggregate only within that range.
        queryset = queryset.order_by('date')
        if queryset.count() == 0:
            return Response(yearwise, status=200)
        startyear = (queryset.first()).date.year
        lastyear = (queryset.last()).date.year
        for i in range(startyear, lastyear + 1):
            data = []
            if i == today.year:
                startmonth = 1
                endmonth = today.month
            else:
                startmonth = 1
                endmonth = 12
            for j in range(startmonth, endmonth + 1):
                expense = 0
                queryset = self.queryset.filter(library_branch=id, date__gte=from_date)
                queryset = queryset.filter(date__lte=to_date)
                queryset = queryset.filter(library_branch=id, date__month=j, date__year=i)
                for x in queryset:
                    expense = expense + x.amount_paid
                data.append({
                    "month": j,
                    "expense": expense
                })
            yearwise.append({
                "year": i,
                "details": data
            })
        return Response(yearwise, status=200)
        # search = self.request.query_params.get('search', None)
        # if search is not None:
        #     queryset = queryset.filter(Q(title__icontains=search) | Q(note__icontains=search))
        # return queryset
class EmployeeMonthlyExpenseAPI(ListAPIView, CreateAPIView):
    queryset = models.Expense.objects.all()
    serializer_class = serializers.ExpenseSerializer
    permission_classes = [permissions.IsAuthenticated, ]

    def list(self, request, *args, **kwargs):
        if not request.user.is_owner and not request.user.is_employee:
            return Response({"Error": "Access Denied"}, status=401)
        if request.user.is_owner:
            instance = models.Owner.objects.get(user=request.user)
        else:
            instance = models.Employee.objects.get(user=request.user)
        branchlist = instance.branches.all()
        if kwargs["id"] not in branchlist.values_list('id', flat=True):
            return Response({"error": "Branch does not belong to the owner/employee"}, status=403)
        queryset = self.queryset.filter(library_branch=self.kwargs["id"])
        year = self.request.query_params.get('year', None)
        if year is not None:
            queryset = queryset.filter(date__year=year)
        month = self.request.query_params.get('month', None)
        if month is not None:
            queryset = queryset.filter(date__month=month)
        search = self.request.query_params.get('search', None)
        if search is not None:
            queryset = queryset.filter(Q(title__icontains=search) | Q(note__icontains=search))
        from_date = self.request.query_params.get('from_date', None)
        if from_date is not None:
            from_date = datetime.datetime.strptime(from_date, "%Y-%m-%d").date()
            queryset = queryset.filter(date__gte=from_date)
        to_date = self.request.query_params.get('to_date', None)
        if to_date is not None:
            to_date = datetime.datetime.strptime(to_date, "%Y-%m-%d").date()
            queryset = queryset.filter(date__lte=to_date)
        queryset = queryset.order_by('-date')
        serializer = self.serializer_class(queryset, many=True)
        return Response(serializer.data, status=200)
class StudentPendingsAPI(ListAPIView):
    queryset = studentmodels.PurchasedSubscription.objects.all()
    serializer_class = studentserializers.StudentManageSubSerializer
    permission_classes = [permissions.IsAuthenticated, ]

    def get_queryset(self):
        queryset = self.queryset.filter(student__library_branch=self.kwargs["id"])
        search = self.request.query_params.get('search', None)
        if search is not None:
            queryset = queryset.filter(Q(student__name__icontains=search) | Q(student__id__icontains=search))
        from_date = self.request.query_params.get('from_Date', None)
        if from_date is not None:
            from_date = datetime.datetime.strptime(from_date, "%Y-%m-%d").date()
            queryset = queryset.filter(date__gte=from_date)
        to_date = self.request.query_params.get('to_date', None)
        if to_date is not None:
            to_date = datetime.datetime.strptime(to_date, "%Y-%m-%d").date()
            queryset = queryset.filter(date__lte=to_date)
        queryset = queryset.order_by('student__name')
        return queryset

    def list(self, request, *args, **kwargs):
        if not request.user.is_owner and not request.user.is_employee:
            return Response({"Error": "Access Denied"}, status=401)
        if request.user.is_owner:
            instance = models.Owner.objects.get(user=request.user)
        else:
            instance = models.Employee.objects.get(user=request.user)
        branchlist = instance.branches.all()
        if self.kwargs["id"] not in branchlist.values_list('id', flat=True):
            return Response({"error": "Branch does not belong to the owner/employee"}, status=403)
        queryset = self.get_queryset()
        serializer = self.serializer_class(queryset, many=True)
        return Response(serializer.data, status=200)
class StudentPendingCountAPI(APIView):
    queryset = studentmodels.PurchasedSubscription.objects.all()
    permission_classes = [permissions.IsAuthenticated, ]

    def get(self, request, id):
        if not request.user.is_owner and not request.user.is_employee:
            return Response({"Error": "Access Denied"}, status=401)
        if request.user.is_owner:
            instance = models.Owner.objects.get(user=request.user)
        else:
            instance = models.Employee.objects.get(user=request.user)
        branchlist = instance.branches.all()
        if self.kwargs["id"] not in branchlist.values_list('id', flat=True):
            return Response({"error": "Branch does not belong to the owner/employee"}, status=403)
        queryset = self.queryset.filter(student__library_branch=self.kwargs["id"])
        return Response({"count": queryset.count()}, status=200)
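`EmployeeExpenseAPI.get` above builds a year-by-year, month-by-month expense rollup by re-querying the database once per month. The same aggregation can be sketched as a pure function over `(date, amount_paid)` records that needs only one pass over the data (the function name and record shape are illustrative, not part of the app):

```python
import datetime
from collections import defaultdict


def monthly_expense_rollup(records):
    """Group (date, amount_paid) records into the year/month structure
    that EmployeeExpenseAPI.get returns: one entry per year, with a
    month-by-month expense total. Pure-function sketch, not the view."""
    totals = defaultdict(float)
    for date, amount in records:
        totals[(date.year, date.month)] += amount
    if not totals:
        return []
    years = sorted({y for y, _ in totals})
    return [
        {"year": y,
         "details": [{"month": m, "expense": totals.get((y, m), 0)}
                     for m in range(1, 13)]}
        for y in years
    ]


records = [
    (datetime.date(2021, 1, 5), 100.0),
    (datetime.date(2021, 1, 20), 50.0),
    (datetime.date(2021, 3, 2), 75.0),
]
rollup = monthly_expense_rollup(records)
print(rollup[0]["details"][0])  # {'month': 1, 'expense': 150.0}
```

In the Django view, the equivalent single-pass shape could be obtained with one `values('date__year', 'date__month').annotate(Sum('amount_paid'))` query instead of a query per month, which avoids the N-queries cost of the loop.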
# ===============================================================
# File: advanced_classifier/advanced_classifiers.py
# Repo: jvanecek/tf-practices (MIT license)
# ===============================================================
import tensorflow as tf
from tensorflow import keras
class ClassifierBehavior:
    def train(self, normalized_training_data, training_labels, validation_data, input_size):
        self._buildSequentialModel(input_size)
        return self._model.fit(
            normalized_training_data,
            training_labels,
            epochs=20,
            batch_size=512,
            validation_data=validation_data,
            verbose=2)


class BaselineClassifier(ClassifierBehavior):
    def _buildSequentialModel(self, input_size):
        self._model = keras.Sequential([
            # `input_shape` is only required here so that `.summary` works.
            keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(input_size,)),
            keras.layers.Dense(16, activation=tf.nn.relu),
            keras.layers.Dense(1, activation=tf.nn.sigmoid)
        ])
        self._model.compile(optimizer='adam',
                            loss='binary_crossentropy',
                            metrics=['accuracy', 'binary_crossentropy'])


class SmallerModelClassifier(ClassifierBehavior):
    def _buildSequentialModel(self, input_size):
        self._model = keras.Sequential([
            keras.layers.Dense(4, activation=tf.nn.relu, input_shape=(input_size,)),
            keras.layers.Dense(4, activation=tf.nn.relu),
            keras.layers.Dense(1, activation=tf.nn.sigmoid)
        ])
        self._model.compile(optimizer='adam',
                            loss='binary_crossentropy',
                            metrics=['accuracy', 'binary_crossentropy'])


class BiggerModelClassifier(ClassifierBehavior):
    def _buildSequentialModel(self, input_size):
        self._model = keras.models.Sequential([
            keras.layers.Dense(512, activation=tf.nn.relu, input_shape=(input_size,)),
            keras.layers.Dense(512, activation=tf.nn.relu),
            keras.layers.Dense(1, activation=tf.nn.sigmoid)
        ])
        self._model.compile(optimizer='adam',
                            loss='binary_crossentropy',
                            metrics=['accuracy', 'binary_crossentropy'])


class L2RegularizedClassifier(ClassifierBehavior):
    def _buildSequentialModel(self, input_size):
        self._model = keras.models.Sequential([
            keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
                               activation=tf.nn.relu, input_shape=(input_size,)),
            keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
                               activation=tf.nn.relu),
            keras.layers.Dense(1, activation=tf.nn.sigmoid)
        ])
        self._model.compile(optimizer='adam',
                            loss='binary_crossentropy',
                            metrics=['accuracy', 'binary_crossentropy'])


class DropoutRegularizedClassifier(ClassifierBehavior):
    def _buildSequentialModel(self, input_size):
        self._model = keras.models.Sequential([
            keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(input_size,)),
            keras.layers.Dropout(0.5),
            keras.layers.Dense(16, activation=tf.nn.relu),
            keras.layers.Dropout(0.5),
            keras.layers.Dense(1, activation=tf.nn.sigmoid)
        ])
        self._model.compile(optimizer='adam',
                            loss='binary_crossentropy',
                            metrics=['accuracy', 'binary_crossentropy'])
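The classifier variants above differ mainly in layer width (4 vs 16 vs 512 units) and regularization. The capacity gap is visible from dense-layer parameter counts alone; a quick pure-Python sketch, assuming a two-hidden-layer stack like the ones above and an illustrative 10,000-feature input:

```python
def dense_params(n_in, n_out):
    """Parameter count of a fully connected layer: one weight per
    input/output pair plus one bias per output unit."""
    return n_in * n_out + n_out


def model_params(input_size, hidden):
    # Two hidden layers of equal width followed by a 1-unit sigmoid output,
    # mirroring the Sequential stacks defined above.
    return (dense_params(input_size, hidden)
            + dense_params(hidden, hidden)
            + dense_params(hidden, 1))


for name, width in [("smaller", 4), ("baseline", 16), ("bigger", 512)]:
    print(name, model_params(10000, width))
# smaller  40029
# baseline 160305
# bigger   5383681
```

The 512-unit model has roughly 34x the parameters of the baseline, which is why the L2 and Dropout variants exist: extra capacity without regularization tends to overfit a small training set.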
428955f099bf0ebb13fe716d76548d9c251b4dab | 86 | py | Python | src/analysis/analyser/__init__.py | rimij405/dsci-623_midterm | 6134d2472630c32379e15a458f0482dcdacd0472 | [
"MIT"
] | null | null | null | src/analysis/analyser/__init__.py | rimij405/dsci-623_midterm | 6134d2472630c32379e15a458f0482dcdacd0472 | [
"MIT"
] | null | null | null | src/analysis/analyser/__init__.py | rimij405/dsci-623_midterm | 6134d2472630c32379e15a458f0482dcdacd0472 | [
"MIT"
] | null | null | null | # analyser/__init__.py
# print(f"Imported {__name__} analysis/analyser/__init__.py")
| 21.5 | 61 | 0.767442 | 11 | 86 | 4.909091 | 0.727273 | 0.444444 | 0.518519 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081395 | 86 | 3 | 62 | 28.666667 | 0.683544 | 0.930233 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c44ed80e32b42da71e2a22bba3cbb697566b7473 | 115 | py | Python | gerritlib/tests/base.py | GabrielGanne/gerritlib | 7adadc3e51659259552b8146a573b88ab42c9e1b | [
"Apache-2.0"
] | null | null | null | gerritlib/tests/base.py | GabrielGanne/gerritlib | 7adadc3e51659259552b8146a573b88ab42c9e1b | [
"Apache-2.0"
] | null | null | null | gerritlib/tests/base.py | GabrielGanne/gerritlib | 7adadc3e51659259552b8146a573b88ab42c9e1b | [
"Apache-2.0"
] | null | null | null | import testtools
class TestCase(testtools.TestCase):
    "Placeholder wrapper for the testtools.TestCase class."
| 19.166667 | 59 | 0.791304 | 13 | 115 | 7 | 0.615385 | 0.373626 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.13913 | 115 | 5 | 60 | 23 | 0.919192 | 0.46087 | 0 | 0 | 0 | 0 | 0.46087 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
670d3ab304f7bdc7aee50a49de13c06b1b18a9a2 | 5,159 | py | Python | 最好大学网/daxue.py | 13060923171/xianmu | 3deb9cdcf4ed1d821043ba1d3947ff35697e4aae | [
"MIT"
] | 29 | 2020-08-02T12:06:10.000Z | 2022-03-07T17:51:54.000Z | 最好大学网/daxue.py | 13060923171/xianmu | 3deb9cdcf4ed1d821043ba1d3947ff35697e4aae | [
"MIT"
] | 3 | 2020-08-16T15:56:47.000Z | 2021-11-20T21:49:59.000Z | 最好大学网/daxue.py | 13060923171/xianmu | 3deb9cdcf4ed1d821043ba1d3947ff35697e4aae | [
"MIT"
] | 15 | 2020-08-16T08:28:08.000Z | 2021-09-29T07:17:38.000Z | import requests
import re
from bs4 import BeautifulSoup
import time
headers = {
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9",
    "Cookie": "Hm_lvt_2ce94714199fe618dcebb5872c6def14=1594741637; Hm_lpvt_2ce94714199fe618dcebb5872c6def14=1594741768",
    "Host": "www.zuihaodaxue.cn",
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.116 Safari/537.36"
}
session = requests.session()
session.headers = headers
def get_html9(url):
    html = session.get(url)
    # Decode with the page's own apparent encoding; this defeats the font-based anti-scraping
    html.encoding = html.apparent_encoding
    if html.status_code == 200:
        content = html.text
        # Use a regex to locate the rankings
        rankings = re.compile('"><td>(.*?)</td>', re.I | re.S)
        ranking = rankings.findall(content)
        soup = BeautifulSoup(content, 'lxml')
        list = []
        for i in range(len(ranking)):
            # Locate the university name
            daxues = soup.select("td.align-left a")[i].text
            list.append(daxues)
        print(list)
        # Locate the university's rank
        states = re.compile('title="查看(.*?)大学排名">', re.I | re.S)
        state = states.findall(content)
        state_ranks = re.compile('</a></td><td class="hidden-xs">(.*?)</td><td>', re.I | re.S)
        state_rank = state_ranks.findall(content)
        grades = re.compile(r'\d+</td><td>(.*?)</td><td', re.I | re.S)
        grade = grades.findall(content)
        indexs = re.compile('class="hidden-xs need-hidden alumni">(.*?)</td><td', re.I | re.S)
        index = indexs.findall(content)
        for j in range(len(ranking)):
            with open('2019.text', 'a+', encoding='utf-8') as f:
                f.write('{} {} {} {} {} {}'.format(ranking[j], list[j], state[j], state_rank[j], grade[j], index[j]))
                f.write('\n')
                print('write succeeded')
                print('{} {} {} {} {} {}'.format(ranking[j], list[j], state[j], state_rank[j], grade[j], index[j]))
    else:
        print(html.status_code)
def get_html8(url):
    html = session.get(url)
    html.encoding = html.apparent_encoding
    if html.status_code == 200:
        content = html.text
        rankings = re.compile('"><td>(.*?)</td>', re.I | re.S)
        ranking = rankings.findall(content)
        soup = BeautifulSoup(content, 'lxml')
        list = []
        for i in range(len(ranking)):
            daxues = soup.select("td.align-left a")[i].text
            list.append(daxues)
        print(list)
        states = re.compile('title="查看(.*?)大学排名">', re.I | re.S)
        state = states.findall(content)
        state_ranks = re.compile('</a></td><td class="hidden-xs">(.*?)</td><td>', re.I | re.S)
        state_rank = state_ranks.findall(content)
        grades = re.compile(r'\d+</td><td>(.*?)</td><td', re.I | re.S)
        grade = grades.findall(content)
        indexs = re.compile('class="hidden-xs need-hidden alumni">(.*?)</td><td', re.I | re.S)
        index = indexs.findall(content)
        for j in range(len(ranking)):
            with open('2018.text', 'a+', encoding='utf-8') as f:
                f.write('{} {} {} {} {} {}'.format(ranking[j], list[j], state[j], state_rank[j], grade[j], index[j]))
                f.write('\n')
                print('write succeeded')
                print('{} {} {} {} {} {}'.format(ranking[j], list[j], state[j], state_rank[j], grade[j], index[j]))
    else:
        print(html.status_code)
def get_html7(url):
    html = session.get(url)
    html.encoding = html.apparent_encoding
    if html.status_code == 200:
        content = html.text
        rankings = re.compile('"><td>(.*?)</td>', re.I | re.S)
        ranking = rankings.findall(content)
        soup = BeautifulSoup(content, 'lxml')
        list = []
        for i in range(len(ranking)):
            daxues = soup.select("td.align-left a")[i].text
            list.append(daxues)
        print(list)
        states = re.compile('title="查看(.*?)大学排名">', re.I | re.S)
        state = states.findall(content)
        state_ranks = re.compile('</a></td><td class="hidden-xs">(.*?)</td><td>', re.I | re.S)
        state_rank = state_ranks.findall(content)
        grades = re.compile(r'\d+</td><td>(.*?)</td><td', re.I | re.S)
        grade = grades.findall(content)
        indexs = re.compile('class="hidden-xs need-hidden alumni">(.*?)</td><td', re.I | re.S)
        index = indexs.findall(content)
        for j in range(len(ranking)):
            with open('2017.text', 'a+', encoding='utf-8') as f:
                f.write('{} {} {} {} {} {}'.format(ranking[j], list[j], state[j], state_rank[j], grade[j], index[j]))
                f.write('\n')
                print('write succeeded')
                print('{} {} {} {} {} {}'.format(ranking[j], list[j], state[j], state_rank[j], grade[j], index[j]))
    else:
        print(html.status_code)
if __name__ == '__main__':
    start = time.time()
    url = "http://www.zuihaodaxue.cn/ARWU2019.html"
    get_html9(url)
    time.sleep(90)
    url2 = "http://www.zuihaodaxue.cn/ARWU2018.html"
    get_html8(url2)
    time.sleep(90)
    url3 = "http://www.zuihaodaxue.cn/ARWU2017.html"
    get_html7(url3)
    print(time.time()-start) | 42.636364 | 141 | 0.565032 | 693 | 5,159 | 4.145743 | 0.199134 | 0.029238 | 0.026105 | 0.031326 | 0.752872 | 0.74591 | 0.74591 | 0.74591 | 0.74591 | 0.74591 | 0 | 0.03669 | 0.23396 | 5,159 | 121 | 142 | 42.636364 | 0.690283 | 0.00911 | 0 | 0.754545 | 0 | 0.018182 | 0.235663 | 0.090037 | 0 | 0 | 0 | 0 | 0 | 1 | 0.027273 | false | 0 | 0.036364 | 0 | 0.063636 | 0.118182 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
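get_html9, get_html8, and get_html7 above are copies of one another except for the year baked into the output filename, so the regex-extraction core could be shared. A hedged sketch of that core, exercised on a tiny hand-made fragment (the markup below is an assumed stand-in mimicking the ARWU table rows, not a captured page):

```python
import re

def extract_rankings(content):
    """Pull the rank column out of ARWU-style table markup."""
    rankings = re.compile('"><td>(.*?)</td>', re.I | re.S)  # same pattern as the scraper
    return rankings.findall(content)

# Hand-made fragment mimicking the row structure the scraper targets.
sample = '<tr class="r"><td>1</td><td>x</td></tr><tr class="r"><td>2</td><td>y</td></tr>'
print(extract_rankings(sample))  # ['1', '2']
```

Passing the year as a parameter (for the URL and the output filename) would collapse the three near-identical functions into one.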
673291e3c3865a89648c8cb1b9dab5d2cb4c5574 | 153 | py | Python | netdev/vendors/cisco/cisco_tcl.py | maliciousgroup/netdev | e2585ac24891cba172fc2056e9868e1d7c41ddc2 | [
"Apache-2.0"
] | null | null | null | netdev/vendors/cisco/cisco_tcl.py | maliciousgroup/netdev | e2585ac24891cba172fc2056e9868e1d7c41ddc2 | [
"Apache-2.0"
] | null | null | null | netdev/vendors/cisco/cisco_tcl.py | maliciousgroup/netdev | e2585ac24891cba172fc2056e9868e1d7c41ddc2 | [
"Apache-2.0"
] | null | null | null | from netdev.vendors.ios_like import IOSLikeDevice
class CiscoTCL(IOSLikeDevice):
    """Class for working with Cisco IOS with TCL support"""

    pass
| 19.125 | 59 | 0.751634 | 20 | 153 | 5.7 | 0.8 | 0.315789 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.176471 | 153 | 7 | 60 | 21.857143 | 0.904762 | 0.320261 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 6 |
6745b2c37d9ea4bd338c0edf66ffa649f3d89919 | 66 | py | Python | models/__init__.py | jc-audet/GOKU | 5627052a96bc95d9e893fe589bb51af447ff4f01 | [
"MIT"
] | 11 | 2020-04-27T12:55:40.000Z | 2022-03-15T08:51:45.000Z | models/__init__.py | jc-audet/GOKU | 5627052a96bc95d9e893fe589bb51af447ff4f01 | [
"MIT"
] | 1 | 2020-12-13T07:34:47.000Z | 2020-12-13T12:01:49.000Z | models/__init__.py | jc-audet/GOKU | 5627052a96bc95d9e893fe589bb51af447ff4f01 | [
"MIT"
] | 2 | 2020-06-29T19:20:38.000Z | 2021-05-03T17:49:53.000Z | from .LSTM import *
from .GOKU import *
from .Latent_ODE import *
| 16.5 | 25 | 0.727273 | 10 | 66 | 4.7 | 0.6 | 0.425532 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 66 | 3 | 26 | 22 | 0.87037 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
67493cf897e05da517f3bb78515ec0a485ed759a | 38 | py | Python | envs/__init__.py | leilayasmeen/Bandits | 701c0385e6536380240e7de2d509294e6d3e4fba | [
"MIT"
] | null | null | null | envs/__init__.py | leilayasmeen/Bandits | 701c0385e6536380240e7de2d509294e6d3e4fba | [
"MIT"
] | null | null | null | envs/__init__.py | leilayasmeen/Bandits | 701c0385e6536380240e7de2d509294e6d3e4fba | [
"MIT"
] | null | null | null | from .contextual import ContextualEnv
| 19 | 37 | 0.868421 | 4 | 38 | 8.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 38 | 1 | 38 | 38 | 0.970588 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6762bc59ef5dfb00952a24c5272a24c94eebd8f6 | 116 | py | Python | enthought/plugins/text_editor/text_editor_plugin.py | enthought/etsproxy | 4aafd628611ebf7fe8311c9d1a0abcf7f7bb5347 | [
"BSD-3-Clause"
] | 3 | 2016-12-09T06:05:18.000Z | 2018-03-01T13:00:29.000Z | enthought/plugins/text_editor/text_editor_plugin.py | enthought/etsproxy | 4aafd628611ebf7fe8311c9d1a0abcf7f7bb5347 | [
"BSD-3-Clause"
] | 1 | 2020-12-02T00:51:32.000Z | 2020-12-02T08:48:55.000Z | enthought/plugins/text_editor/text_editor_plugin.py | enthought/etsproxy | 4aafd628611ebf7fe8311c9d1a0abcf7f7bb5347 | [
"BSD-3-Clause"
] | null | null | null | # proxy module
from __future__ import absolute_import
from envisage.plugins.text_editor.text_editor_plugin import *
| 29 | 61 | 0.862069 | 16 | 116 | 5.75 | 0.6875 | 0.217391 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.094828 | 116 | 3 | 62 | 38.666667 | 0.87619 | 0.103448 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
67a46d65cc0933696a5bb6021af45910154e1e84 | 91 | py | Python | python/py_refresh/py_imports/mymodules.py | star-junk/references | 5bf8f4eb710ebf953131722efea55d998ea98ed2 | [
"MIT"
] | null | null | null | python/py_refresh/py_imports/mymodules.py | star-junk/references | 5bf8f4eb710ebf953131722efea55d998ea98ed2 | [
"MIT"
] | null | null | null | python/py_refresh/py_imports/mymodules.py | star-junk/references | 5bf8f4eb710ebf953131722efea55d998ea98ed2 | [
"MIT"
] | null | null | null | import lib.gui
def avg(*num):
    return sum(num) / len(num)
print("myModules", __name__) | 15.166667 | 30 | 0.67033 | 14 | 91 | 4.071429 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.164835 | 91 | 6 | 31 | 15.166667 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0.097826 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0.25 | 0.25 | 0.75 | 0.25 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
67c60983d58ff8c75fb3c1726dc49a869ea33f36 | 162 | py | Python | sutils/bin/assistbatch/__main__.py | t-mertz/slurm_utils | 6fc9709f62e2bca1387ea9c7a5975f0f0be5d0dd | [
"MIT"
] | null | null | null | sutils/bin/assistbatch/__main__.py | t-mertz/slurm_utils | 6fc9709f62e2bca1387ea9c7a5975f0f0be5d0dd | [
"MIT"
] | null | null | null | sutils/bin/assistbatch/__main__.py | t-mertz/slurm_utils | 6fc9709f62e2bca1387ea9c7a5975f0f0be5d0dd | [
"MIT"
] | null | null | null | import sys
from ...applications import runapp
def main():
    runapp.run_application("asbatch")


if __name__ == "__main__":
    runapp.run_application("asbatch") | 20.25 | 37 | 0.728395 | 19 | 162 | 5.684211 | 0.631579 | 0.185185 | 0.240741 | 0.444444 | 0.574074 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.141975 | 162 | 8 | 38 | 20.25 | 0.776978 | 0 | 0 | 0.333333 | 0 | 0 | 0.134969 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | true | 0 | 0.333333 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
67f803efa6343a9e3ba5ae37ab09584833b056a3 | 6,623 | py | Python | ark/segmentation/signal_extraction_test.py | ngreenwald/segmentation | 8bc87c2db96434a24194040f7ea754af2caf5e5f | [
"Apache-2.0"
] | 17 | 2020-10-15T20:50:12.000Z | 2022-01-27T19:24:40.000Z | ark/segmentation/signal_extraction_test.py | ngreenwald/segmentation | 8bc87c2db96434a24194040f7ea754af2caf5e5f | [
"Apache-2.0"
] | 309 | 2020-08-14T16:21:36.000Z | 2022-03-24T22:22:53.000Z | ark/segmentation/signal_extraction_test.py | ngreenwald/segmentation | 8bc87c2db96434a24194040f7ea754af2caf5e5f | [
"Apache-2.0"
] | 5 | 2020-02-21T14:00:20.000Z | 2020-07-02T07:41:33.000Z | import numpy as np
import xarray as xr
from ark.segmentation import signal_extraction
from ark.utils import synthetic_spatial_datagen
from skimage.measure import regionprops
def test_positive_pixels_extraction():
    # sample params
    size_img = (1024, 1024)
    cell_radius = 10
    nuc_radius = 3
    memb_thickness = 5
    nuc_signal_strength = 10
    memb_signal_strength = 100
    nuc_uncertainty_length = 0
    memb_uncertainty_length = 0

    # generate sample segmentation mask and channel data
    sample_segmentation_mask, sample_channel_data = \
        synthetic_spatial_datagen.generate_two_cell_chan_data(
            size_img=size_img,
            cell_radius=cell_radius,
            nuc_radius=nuc_radius,
            memb_thickness=memb_thickness,
            nuc_signal_strength=nuc_signal_strength,
            memb_signal_strength=memb_signal_strength,
            nuc_uncertainty_length=nuc_uncertainty_length,
            memb_uncertainty_length=memb_uncertainty_length
        )

    # extract the cell regions for cells 1 and 2
    coords_1 = np.argwhere(sample_segmentation_mask == 1)
    coords_2 = np.argwhere(sample_segmentation_mask == 2)

    # test default extraction (threshold == 0)
    channel_counts_1 = signal_extraction.positive_pixels_extraction(
        cell_coords=coords_1,
        image_data=xr.DataArray(sample_channel_data)
    )
    channel_counts_2 = signal_extraction.positive_pixels_extraction(
        cell_coords=coords_2,
        image_data=xr.DataArray(sample_channel_data)
    )

    # test signal counts for different channels
    assert np.all(channel_counts_1 == [25, 0])
    assert np.all(channel_counts_2 == [0, 236])

    # test with new threshold == 10
    kwargs = {'threshold': 10}
    channel_counts_1 = signal_extraction.positive_pixels_extraction(
        cell_coords=coords_1,
        image_data=xr.DataArray(sample_channel_data),
        **kwargs
    )
    channel_counts_2 = signal_extraction.positive_pixels_extraction(
        cell_coords=coords_2,
        image_data=xr.DataArray(sample_channel_data),
        **kwargs
    )

    assert np.all(channel_counts_1 == [0, 0])
    assert np.all(channel_counts_2 == [0, 236])

    # test for multichannel thresholds
    kwargs = {'threshold': np.array([0, 10])}
    channel_counts_1 = signal_extraction.positive_pixels_extraction(
        cell_coords=coords_1,
        image_data=xr.DataArray(sample_channel_data),
        **kwargs
    )
    channel_counts_2 = signal_extraction.positive_pixels_extraction(
        cell_coords=coords_2,
        image_data=xr.DataArray(sample_channel_data),
        **kwargs
    )

    assert np.all(channel_counts_1 == [25, 0])
    assert np.all(channel_counts_2 == [0, 236])
def test_center_weighting_extraction():
    # sample params
    size_img = (1024, 1024)
    cell_radius = 10
    nuc_radius = 3
    memb_thickness = 5
    nuc_signal_strength = 10
    memb_signal_strength = 10
    nuc_uncertainty_length = 1
    memb_uncertainty_length = 1

    # generate sample segmentation mask and channel data
    sample_segmentation_mask, sample_channel_data = \
        synthetic_spatial_datagen.generate_two_cell_chan_data(
            size_img=size_img,
            cell_radius=cell_radius,
            nuc_radius=nuc_radius,
            memb_thickness=memb_thickness,
            nuc_signal_strength=nuc_signal_strength,
            memb_signal_strength=memb_signal_strength,
            nuc_uncertainty_length=nuc_uncertainty_length,
            memb_uncertainty_length=memb_uncertainty_length
        )

    # extract the cell regions for cells 1 and 2
    coords_1 = np.argwhere(sample_segmentation_mask == 1)
    coords_2 = np.argwhere(sample_segmentation_mask == 2)

    # extract the centroids and coords
    region_info = regionprops(sample_segmentation_mask.astype(np.int16))
    kwarg_1 = {'centroid': region_info[0].centroid}
    kwarg_2 = {'centroid': region_info[1].centroid}
    coords_1 = region_info[0].coords
    coords_2 = region_info[1].coords

    channel_counts_1_center_weight = signal_extraction.center_weighting_extraction(
        cell_coords=coords_1,
        image_data=xr.DataArray(sample_channel_data),
        **kwarg_1
    )
    channel_counts_2_center_weight = signal_extraction.center_weighting_extraction(
        cell_coords=coords_2,
        image_data=xr.DataArray(sample_channel_data),
        **kwarg_2
    )
    channel_counts_1_base_weight = signal_extraction.total_intensity_extraction(
        cell_coords=coords_1,
        image_data=xr.DataArray(sample_channel_data)
    )
    channel_counts_2_base_weight = signal_extraction.total_intensity_extraction(
        cell_coords=coords_2,
        image_data=xr.DataArray(sample_channel_data)
    )

    # cell 1 and cell 2 nuclear signal should be lower for weighted than default
    assert channel_counts_1_center_weight[0] < channel_counts_1_base_weight[0]
    assert channel_counts_2_center_weight[1] < channel_counts_2_base_weight[1]

    # assert effect of "bleeding" membrane signal is less with weighted than default
    assert channel_counts_1_center_weight[1] < channel_counts_1_base_weight[1]
def test_total_intensity_extraction():
    # sample params
    size_img = (1024, 1024)
    cell_radius = 10
    nuc_radius = 3
    memb_thickness = 5
    nuc_signal_strength = 10
    memb_signal_strength = 10
    nuc_uncertainty_length = 0
    memb_uncertainty_length = 0

    # generate sample segmentation mask and channel data
    sample_segmentation_mask, sample_channel_data = \
        synthetic_spatial_datagen.generate_two_cell_chan_data(
            size_img=size_img,
            cell_radius=cell_radius,
            nuc_radius=nuc_radius,
            memb_thickness=memb_thickness,
            nuc_signal_strength=nuc_signal_strength,
            memb_signal_strength=memb_signal_strength,
            nuc_uncertainty_length=nuc_uncertainty_length,
            memb_uncertainty_length=memb_uncertainty_length
        )

    # extract the cell regions for cells 1 and 2
    coords_1 = np.argwhere(sample_segmentation_mask == 1)
    coords_2 = np.argwhere(sample_segmentation_mask == 2)

    channel_counts_1 = signal_extraction.total_intensity_extraction(
        cell_coords=coords_1,
        image_data=xr.DataArray(sample_channel_data)
    )
    channel_counts_2 = signal_extraction.total_intensity_extraction(
        cell_coords=coords_2,
        image_data=xr.DataArray(sample_channel_data)
    )

    # test signal counts for different channels
    assert np.all(channel_counts_1 == [250, 0])
    assert np.all(channel_counts_2 == [0, 2360])
| 33.619289 | 84 | 0.716745 | 840 | 6,623 | 5.235714 | 0.119048 | 0.076853 | 0.057981 | 0.070941 | 0.850841 | 0.815371 | 0.815371 | 0.815371 | 0.806958 | 0.783765 | 0 | 0.032407 | 0.217273 | 6,623 | 196 | 85 | 33.790816 | 0.815972 | 0.10539 | 0 | 0.671329 | 0 | 0 | 0.005756 | 0 | 0 | 0 | 0 | 0 | 0.076923 | 1 | 0.020979 | false | 0 | 0.034965 | 0 | 0.055944 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
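The assertions in test_positive_pixels_extraction above imply a simple contract: count, per channel, the pixels at the cell's coordinates whose signal exceeds a threshold. A minimal numpy sketch of that contract (an illustration of what the tests check, not ark's actual implementation):

```python
import numpy as np

def count_positive_pixels(cell_coords, image_data, threshold=0):
    """Per-channel count of pixels above threshold at the given coords."""
    # image_data: (rows, cols, channels); cell_coords: (n, 2) row/col pairs
    vals = image_data[cell_coords[:, 0], cell_coords[:, 1], :]
    return np.sum(vals > threshold, axis=0)

# Toy 2x2 image with two channels; the "cell" covers all four pixels.
img = np.zeros((2, 2, 2))
img[..., 0] = [[5, 0], [12, 0]]
img[..., 1] = [[0, 7], [0, 0]]
coords = np.argwhere(np.ones((2, 2)))
print(count_positive_pixels(coords, img))                # [2 1]
print(count_positive_pixels(coords, img, threshold=10))  # [1 0]
```

Raising the threshold drops sub-threshold pixels from the count, which is exactly the behavior the `threshold=10` assertions above exercise.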
db33c1869ef570bacd2f29e40ec1f64d62170015 | 128 | py | Python | django_site/views.py | tbaindur/personal-website-django | 2bd5185eb7f1434c61e9e13093f16f20b911f2fb | [
"MIT"
] | null | null | null | django_site/views.py | tbaindur/personal-website-django | 2bd5185eb7f1434c61e9e13093f16f20b911f2fb | [
"MIT"
] | null | null | null | django_site/views.py | tbaindur/personal-website-django | 2bd5185eb7f1434c61e9e13093f16f20b911f2fb | [
"MIT"
] | null | null | null | from django.http import HttpResponse
def ping(request):
    return HttpResponse("<h1>Ping Received for tejasbaindur.com</h1>") | 25.6 | 70 | 0.765625 | 17 | 128 | 5.764706 | 0.823529 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017857 | 0.125 | 128 | 5 | 70 | 25.6 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 0.162791 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
c01ec5c210f2f02047f7b1e67b6352891e4367ca | 145 | py | Python | pskgu_bot/db/services/__init__.py | mrgick/pskgu_bot | a6252c33b3ca18e6df6e79ed9e9721a766ed1e1f | [
"MIT"
] | 14 | 2021-02-26T14:33:35.000Z | 2021-12-27T09:36:12.000Z | pskgu_bot/db/services/__init__.py | mrgick/pskgu_bot | a6252c33b3ca18e6df6e79ed9e9721a766ed1e1f | [
"MIT"
] | 1 | 2022-02-05T12:37:21.000Z | 2022-02-05T12:37:24.000Z | pskgu_bot/db/services/__init__.py | mrgick/pskgu_bot | a6252c33b3ca18e6df6e79ed9e9721a766ed1e1f | [
"MIT"
] | 2 | 2021-03-05T18:07:39.000Z | 2021-12-03T00:12:29.000Z | """
Module with functions for interacting with the database.
"""
from .storage import *
from .group import *
from .main_page import *
from .vk_user import *
| 16.111111 | 43 | 0.696552 | 20 | 145 | 4.95 | 0.65 | 0.30303 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 145 | 8 | 44 | 18.125 | 0.853448 | 0.268966 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
22438f8b50914488c93be3e4eec65eea444d562c | 214 | py | Python | nevow/plugins/nevow_package.py | wthie/nevow | e630de8f640f27df85c38bc37ecdaf4e7b931afc | [
"MIT"
] | 49 | 2015-03-18T15:29:16.000Z | 2021-11-17T12:30:51.000Z | src/nevow/plugins/nevow_package.py | winjer/squeal | 20401986e0d1698776f5b482b28e14c57b11833c | [
"Apache-2.0"
] | 62 | 2015-01-21T08:48:08.000Z | 2021-04-02T17:31:29.000Z | src/nevow/plugins/nevow_package.py | winjer/squeal | 20401986e0d1698776f5b482b28e14c57b11833c | [
"Apache-2.0"
] | 30 | 2015-02-26T09:35:39.000Z | 2021-07-24T12:45:04.000Z | from twisted.python import util
from nevow import athena
import nevow
nevowCSSPkg = athena.AutoCSSPackage(util.sibpath(nevow.__file__, 'css'))
nevowPkg = athena.AutoJSPackage(util.sibpath(nevow.__file__, 'js'))
| 23.777778 | 72 | 0.794393 | 27 | 214 | 6 | 0.555556 | 0.135802 | 0.197531 | 0.246914 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.098131 | 214 | 8 | 73 | 26.75 | 0.839378 | 0 | 0 | 0 | 0 | 0 | 0.023364 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.6 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
225daf55404aebb91f2864d5e592eb3e575ec085 | 25,946 | py | Python | networkx/classes/tests/test_views.py | nadesai/networkx | ca2df82141cf6977c9d59af2d0bfbc990e2aabce | [
"BSD-3-Clause"
] | null | null | null | networkx/classes/tests/test_views.py | nadesai/networkx | ca2df82141cf6977c9d59af2d0bfbc990e2aabce | [
"BSD-3-Clause"
] | null | null | null | networkx/classes/tests/test_views.py | nadesai/networkx | ca2df82141cf6977c9d59af2d0bfbc990e2aabce | [
"BSD-3-Clause"
] | null | null | null | from nose.tools import assert_equal, assert_not_equal, \
    assert_true, assert_false, assert_raises
import networkx as nx
# Nodes
class test_nodeview(object):
    def setup(self):
        self.G = nx.path_graph(9)

    def test_pickle(self):
        import pickle
        nv = self.G.nodes()   # NodeView(self.G)
        pnv = pickle.loads(pickle.dumps(nv, -1))
        assert_equal(nv, pnv)
        assert_equal(nv.__slots__, pnv.__slots__)

    def test_repr(self):
        nv = self.G.nodes()
        assert_equal(str(nv), "NodeView((0, 1, 2, 3, 4, 5, 6, 7, 8))")

    def test_contains(self):
        nv = self.G.nodes()
        assert_true(7 in nv)
        assert_false(9 in nv)
        self.G.remove_node(7)
        self.G.add_node(9)
        assert_false(7 in nv)
        assert_true(9 in nv)

    def test_contains_data(self):
        nvd = self.G.nodes(data=True)
        self.G.node[3]['foo'] = 'bar'
        assert_true((7, {}) in nvd)
        assert_true((3, {'foo': 'bar'}) in nvd)
        nvdf = self.G.nodes(data='foo', default='biz')
        assert_true((7, 'biz') in nvdf)
        assert_true((3, 'bar') in nvdf)
        assert_true((3, nvdf[3]) in nvdf)

    def test_getitem(self):
        nv = self.G.nodes
        nvd = self.G.nodes(data=True)
        self.G.node[3]['foo'] = 'bar'
        assert_equal(nv[7], {})
        assert_equal(nv[3], {'foo': 'bar'})
        assert_equal(nvd[3], {'foo': 'bar'})
        nvdf = self.G.nodes(data='foo', default='biz')
        assert_true(nvdf[7], 'biz')
        assert_equal(nvdf[3], 'bar')

    def test_iter(self):
        nv = self.G.nodes()
        for i, n in enumerate(nv):
            assert_equal(i, n)
        inv = iter(nv)
        assert_equal(next(inv), 0)
        assert_not_equal(iter(nv), nv)
        assert_equal(iter(inv), inv)
        inv2 = iter(nv)
        next(inv2)
        assert_equal(list(inv), list(inv2))
        # odd case where NodeView calls NodeDataView with data=False
        nnv = nv(data=False)
        for i, n in enumerate(nnv):
            assert_equal(i, n)

    def test_iter_data(self):
        nv = self.G.nodes(data=True)
        for i, (n, d) in enumerate(nv):
            assert_equal(i, n)
            assert_equal(d, {})
        inv = iter(nv)
        assert_equal(next(inv), (0, {}))
        self.G.node[3]['foo'] = 'bar'
        for n, d in nv:
            if n == 3:
                assert_equal(d, {'foo': 'bar'})
                break

    def test_len(self):
        nv = self.G.nodes()
        assert_equal(len(nv), 9)
        self.G.remove_node(7)
        assert_equal(len(nv), 8)
        self.G.add_node(9)
        assert_equal(len(nv), 9)

    def test_and(self):
        # print("G & H nodes:", gnv & hnv)
        nv = self.G.nodes()
        some_nodes = {n for n in range(5, 12)}
        assert_equal(nv & some_nodes, {n for n in range(5, 9)})
        assert_equal(some_nodes & nv, {n for n in range(5, 9)})

    def test_or(self):
        # print("G | H nodes:", gnv | hnv)
        nv = self.G.nodes()
        some_nodes = {n for n in range(5, 12)}
        assert_equal(nv | some_nodes, {n for n in range(12)})
        assert_equal(some_nodes | nv, {n for n in range(12)})

    def test_xor(self):
        # print("G ^ H nodes:", gnv ^ hnv)
        nv = self.G.nodes()
        some_nodes = {n for n in range(5, 12)}
        assert_equal(nv ^ some_nodes, {0, 1, 2, 3, 4, 9, 10, 11})
        assert_equal(some_nodes ^ nv, {0, 1, 2, 3, 4, 9, 10, 11})

    def test_sub(self):
        # print("G - H nodes:", gnv - hnv)
        nv = self.G.nodes()
        some_nodes = {n for n in range(5, 12)}
        assert_equal(nv - some_nodes, {n for n in range(5)})
        assert_equal(some_nodes - nv, {n for n in range(9, 12)})
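The four set-operator tests above assert that a NodeView obeys ordinary set algebra against a plain set. The same identities, spelled out with stdlib sets standing in for `G.nodes()` on `path_graph(9)`:

```python
nodes = set(range(9))          # stand-in for the NodeView of path_graph(9)
some_nodes = set(range(5, 12))

assert nodes & some_nodes == set(range(5, 9))
assert nodes | some_nodes == set(range(12))
assert nodes ^ some_nodes == {0, 1, 2, 3, 4, 9, 10, 11}
assert nodes - some_nodes == set(range(5))
print("set algebra matches the NodeView assertions")
```

Because the operators work in both orders (`nv & s` and `s & nv`), NodeView can be mixed freely with ordinary sets in either operand position.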
# Edges Data View
class test_edgedataview(object):
    def setup(self):
        self.G = nx.path_graph(9)
        self.DG = nx.path_graph(9, create_using=nx.DiGraph())
        self.eview = nx.reportviews.EdgeView

        def modify_edge(G, e, **kwds):
            G._adj[e[0]][e[1]].update(kwds)
        self.modify_edge = modify_edge

    def test_iterdata(self):
        G = self.G.copy()
        evr = self.eview(G)
        ev = evr(data=True)
        for u, v, d in ev:
            pass
        assert_equal(d, {})
        ev = evr(data='foo', default=1)
        for u, v, wt in ev:
            pass
        assert_equal(wt, 1)
        self.modify_edge(G, (2, 3), foo='bar')
        ev = evr(data=True)
        for e in ev:
            if set(e[:2]) == {2, 3}:
                assert_equal(e[2], {'foo': 'bar'})
                assert_equal(len(e), 3)
                checked = True
                break
        assert_true(checked)
        ev = evr(data='foo', default=1)
        for e in ev:
            if set(e[:2]) == {2, 3}:
                assert_equal(e[2], 'bar')
                assert_equal(len(e), 3)
                checked_wt = True
                break
        assert_true(checked_wt)

    def test_iter(self):
        evr = self.eview(self.G)
        ev = evr()
        for u, v in ev:
            pass
        iev = iter(ev)
        assert_equal(next(iev), (0, 1))
        assert_not_equal(iter(ev), ev)
        assert_equal(iter(iev), iev)

    def test_contains(self):
        evr = self.eview(self.G)
        ev = evr()
        if self.G.is_directed():
            assert_true((1, 2) in ev and (2, 1) not in ev)
        else:
            assert_true((1, 2) in ev and (2, 1) in ev)
        assert_false((1, 4) in ev)
        assert_false((1, 90) in ev)
        assert_false((90, 1) in ev)

    def test_len(self):
        evr = self.eview(self.G)
        ev = evr(data='foo')
        assert_equal(len(ev), 8)
        assert_equal(len(evr(1)), 2)
        assert_equal(len(evr([1, 2, 3])), 4)

        evr = self.eview(self.DG)
        assert_equal(len(evr(1)), 1)
        assert_equal(len(evr([1, 2, 3])), 3)

        assert_equal(len(self.G.edges(1)), 2)
        assert_equal(len(self.G.edges()), 8)
        assert_equal(len(self.G.edges), 8)

        assert_equal(len(self.DG.edges(1)), 1)
        assert_equal(len(self.DG.edges()), 8)
        assert_equal(len(self.DG.edges), 8)
# Edges
class test_edgeview(object):
    def setup(self):
        self.G = nx.path_graph(9)
        self.eview = nx.reportviews.EdgeView

        def modify_edge(G, e, **kwds):
            G._adj[e[0]][e[1]].update(kwds)
        self.modify_edge = modify_edge

    def test_repr(self):
        ev = self.eview(self.G)
        rep = "EdgeView([(0, 1), (1, 2), (2, 3), (3, 4), " + \
            "(4, 5), (5, 6), (6, 7), (7, 8)])"
        assert_equal(repr(ev), rep)

    def test_call(self):
        ev = self.eview(self.G)
        assert_equal(id(ev), id(ev()))
        assert_not_equal(id(ev), id(ev(data=True)))
        assert_not_equal(id(ev), id(ev(nbunch=1)))

    def test_data(self):
        ev = self.eview(self.G)
        assert_equal(id(ev), id(ev.data()))
        assert_not_equal(id(ev), id(ev.data(data=True)))
        assert_not_equal(id(ev), id(ev.data(nbunch=1)))

    def test_iter(self):
        ev = self.eview(self.G)
        for u, v in ev:
            pass
        iev = iter(ev)
        assert_equal(next(iev), (0, 1))
        assert_not_equal(iter(ev), ev)
        assert_equal(iter(iev), iev)

    def test_contains(self):
        ev = self.eview(self.G)
        edv = ev()
        if self.G.is_directed():
            assert_true((1, 2) in ev and (2, 1) not in ev)
            assert_true((1, 2) in edv and (2, 1) not in edv)
        else:
            assert_true((1, 2) in ev and (2, 1) in ev)
            assert_true((1, 2) in edv and (2, 1) in edv)
        assert_false((1, 4) in ev)
        assert_false((1, 4) in edv)
        # edge not in graph
        assert_false((1, 90) in ev)
        assert_false((90, 1) in ev)
        assert_false((1, 90) in edv)
        assert_false((90, 1) in edv)

    def test_len(self):
        ev = self.eview(self.G)
        num_ed = 9 if self.G.is_multigraph() else 8
        assert_equal(len(ev), num_ed)

    def test_and(self):
        # print("G & H edges:", gnv & hnv)
        ev = self.eview(self.G)
        some_edges = {(0, 1), (1, 0), (0, 2)}
        if self.G.is_directed():
            assert_true(some_edges & ev, {(0, 1)})
            assert_true(ev & some_edges, {(0, 1)})
        else:
            assert_equal(ev & some_edges, {(0, 1), (1, 0)})
            assert_equal(some_edges & ev, {(0, 1), (1, 0)})
        return

    def test_or(self):
        # print("G | H edges:", gnv | hnv)
        ev = self.eview(self.G)
        some_edges = {(0, 1), (1, 0), (0, 2)}
        result1 = {(n, n + 1) for n in range(8)}
        result1.update(some_edges)
        result2 = {(n + 1, n) for n in range(8)}
        result2.update(some_edges)
        assert_true((ev | some_edges) in (result1, result2))
        assert_true((some_edges | ev) in (result1, result2))

    def test_xor(self):
        # print("G ^ H edges:", gnv ^ hnv)
        ev = self.eview(self.G)
        some_edges = {(0, 1), (1, 0), (0, 2)}
        if self.G.is_directed():
            result = {(n, n + 1) for n in range(1, 8)}
            result.update({(1, 0), (0, 2)})
            assert_equal(ev ^ some_edges, result)
        else:
            result = {(n, n + 1) for n in range(1, 8)}
            result.update({(0, 2)})
            assert_equal(ev ^ some_edges, result)
        return

    def test_sub(self):
        # print("G - H edges:", gnv - hnv)
        ev = self.eview(self.G)
        some_edges = {(0, 1), (1, 0), (0, 2)}
        result = {(n, n + 1) for n in range(8)}
        result.remove((0, 1))
        assert_true(ev - some_edges, result)
class test_directed_edges(test_edgeview):
    def setup(self):
        self.G = nx.path_graph(9, nx.DiGraph())
        self.eview = nx.reportviews.OutEdgeView

        def modify_edge(G, e, **kwds):
            G._adj[e[0]][e[1]].update(kwds)
        self.modify_edge = modify_edge

    def test_repr(self):
        ev = self.eview(self.G)
        rep = "OutEdgeView([(0, 1), (1, 2), (2, 3), (3, 4), " + \
            "(4, 5), (5, 6), (6, 7), (7, 8)])"
        assert_equal(repr(ev), rep)
class test_inedges(test_edgeview):
def setup(self):
self.G = nx.path_graph(9, nx.DiGraph())
self.eview = nx.reportviews.InEdgeView
def modify_edge(G, e, **kwds):
G._adj[e[0]][e[1]].update(kwds)
self.modify_edge = modify_edge
def test_repr(self):
ev = self.eview(self.G)
rep = "InEdgeView([(0, 1), (1, 2), (2, 3), (3, 4), " + \
"(4, 5), (5, 6), (6, 7), (7, 8)])"
assert_equal(repr(ev), rep)
class test_multiedges(test_edgeview):
def setup(self):
self.G = nx.path_graph(9, nx.MultiGraph())
self.G.add_edge(1, 2, key=3, foo='bar')
self.eview = nx.reportviews.MultiEdgeView
def modify_edge(G, e, **kwds):
if len(e) == 2:
e = e + (0,)
G._adj[e[0]][e[1]][e[2]].update(kwds)
self.modify_edge = modify_edge
def test_repr(self):
ev = self.eview(self.G)
rep = "MultiEdgeView([(0, 1, 0), (1, 2, 0), (1, 2, 3), (2, 3, 0), " + \
"(3, 4, 0), (4, 5, 0), (5, 6, 0), (6, 7, 0), (7, 8, 0)])"
assert_equal(repr(ev), rep)
def test_call(self):
ev = self.eview(self.G)
assert_equal(id(ev), id(ev(keys=True)))
assert_not_equal(id(ev), id(ev(data=True)))
assert_not_equal(id(ev), id(ev(nbunch=1)))
def test_data(self):
ev = self.eview(self.G)
assert_equal(id(ev), id(ev.data(keys=True)))
assert_not_equal(id(ev), id(ev.data(data=True)))
assert_not_equal(id(ev), id(ev.data(nbunch=1)))
def test_iter(self):
ev = self.eview(self.G)
for u, v, k in ev:
pass
iev = iter(ev)
assert_equal(next(iev), (0, 1, 0))
assert_not_equal(iter(ev), ev)
assert_equal(iter(iev), iev)
def test_iterkeys(self):
G = self.G.copy()
evr = self.eview(G)
ev = evr(keys=True)
for u, v, k in ev:
pass
assert_equal(k, 0)
ev = evr(keys=True, data="foo", default=1)
for u, v, k, wt in ev:
pass
assert_equal(wt, 1)
self.modify_edge(G, (2, 3, 0), foo='bar')
        ev = evr(keys=True, data=True)
        checked = False
        for e in ev:
if set(e[:2]) == {2, 3}:
assert_equal(e[2], 0)
assert_equal(e[3], {'foo': 'bar'})
assert_equal(len(e), 4)
checked = True
break
assert_true(checked)
        ev = evr(keys=True, data='foo', default=1)
        checked_wt = False
        for e in ev:
if set(e[:2]) == {1, 2} and e[2] == 3:
assert_equal(e[3], 'bar')
if set(e[:2]) == {1, 2} and e[2] == 0:
assert_equal(e[3], 1)
if set(e[:2]) == {2, 3}:
assert_equal(e[2], 0)
assert_equal(e[3], 'bar')
assert_equal(len(e), 4)
checked_wt = True
assert_true(checked_wt)
ev = evr(keys=True)
for e in ev:
assert_equal(len(e), 3)
elist = sorted([(i, i + 1, 0) for i in range(8)] + [(1, 2, 3)])
assert_equal(sorted(list(ev)), elist)
        # test order of arguments: graph, nbunch, data, keys, default
ev = evr((1, 2), 'foo', True, 1)
for e in ev:
if set(e[:2]) == {1, 2}:
assert_true(e[2] in {0, 3})
if e[2] == 3:
assert_equal(e[3], 'bar')
else: # e[2] == 0
assert_equal(e[3], 1)
if G.is_directed():
assert_equal(len(list(ev)), 3)
else:
assert_equal(len(list(ev)), 4)
def test_or(self):
# print("G | H edges:", gnv | hnv)
ev = self.eview(self.G)
some_edges = {(0, 1, 0), (1, 0, 0), (0, 2, 0)}
result = {(n, n + 1, 0) for n in range(8)}
result.update(some_edges)
result.update({(1, 2, 3)})
assert_equal(ev | some_edges, result)
assert_equal(some_edges | ev, result)
def test_sub(self):
# print("G - H edges:", gnv - hnv)
ev = self.eview(self.G)
some_edges = {(0, 1, 0), (1, 0, 0), (0, 2, 0)}
result = {(n, n + 1, 0) for n in range(8)}
result.remove((0, 1, 0))
result.update({(1, 2, 3)})
        assert_equal(ev - some_edges, result)
        if self.G.is_directed():
            assert_equal(some_edges - ev, {(1, 0, 0), (0, 2, 0)})
        else:
            assert_equal(some_edges - ev, {(0, 2, 0)})
def test_xor(self):
# print("G ^ H edges:", gnv ^ hnv)
ev = self.eview(self.G)
some_edges = {(0, 1, 0), (1, 0, 0), (0, 2, 0)}
if self.G.is_directed():
result = {(n, n + 1, 0) for n in range(1, 8)}
result.update({(1, 0, 0), (0, 2, 0), (1, 2, 3)})
assert_equal(ev ^ some_edges, result)
assert_equal(some_edges ^ ev, result)
else:
result = {(n, n + 1, 0) for n in range(1, 8)}
result.update({(0, 2, 0), (1, 2, 3)})
assert_equal(ev ^ some_edges, result)
assert_equal(some_edges ^ ev, result)
def test_and(self):
# print("G & H edges:", gnv & hnv)
ev = self.eview(self.G)
some_edges = {(0, 1, 0), (1, 0, 0), (0, 2, 0)}
if self.G.is_directed():
assert_equal(ev & some_edges, {(0, 1, 0)})
assert_equal(some_edges & ev, {(0, 1, 0)})
else:
assert_equal(ev & some_edges, {(0, 1, 0), (1, 0, 0)})
assert_equal(some_edges & ev, {(0, 1, 0), (1, 0, 0)})
class test_directed_multiedges(test_multiedges):
def setup(self):
self.G = nx.path_graph(9, nx.MultiDiGraph())
self.G.add_edge(1, 2, key=3, foo='bar')
self.eview = nx.reportviews.OutMultiEdgeView
def modify_edge(G, e, **kwds):
if len(e) == 2:
e = e + (0,)
G._adj[e[0]][e[1]][e[2]].update(kwds)
self.modify_edge = modify_edge
def test_repr(self):
ev = self.eview(self.G)
rep = "OutMultiEdgeView([(0, 1, 0), (1, 2, 0), (1, 2, 3), (2, 3, 0),"\
+ " (3, 4, 0), (4, 5, 0), (5, 6, 0), (6, 7, 0), (7, 8, 0)])"
assert_equal(repr(ev), rep)
class test_in_multiedges(test_multiedges):
def setup(self):
self.G = nx.path_graph(9, nx.MultiDiGraph())
self.G.add_edge(1, 2, key=3, foo='bar')
self.eview = nx.reportviews.InMultiEdgeView
def modify_edge(G, e, **kwds):
if len(e) == 2:
e = e + (0,)
G._adj[e[0]][e[1]][e[2]].update(kwds)
self.modify_edge = modify_edge
def test_repr(self):
ev = self.eview(self.G)
rep = "InMultiEdgeView([(0, 1, 0), (1, 2, 0), (1, 2, 3), (2, 3, 0), "\
+ "(3, 4, 0), (4, 5, 0), (5, 6, 0), (6, 7, 0), (7, 8, 0)])"
assert_equal(repr(ev), rep)
# Degrees
class test_degreeview(object):
GRAPH = nx.Graph
dview = nx.reportviews.DegreeView
def setup(self):
self.G = nx.path_graph(6, self.GRAPH())
self.G.add_edge(1, 3, foo=2)
self.G.add_edge(1, 3, foo=3)
def modify_edge(G, e, **kwds):
G._adj[e[0]][e[1]].update(kwds)
self.modify_edge = modify_edge
def test_repr(self):
dv = self.G.degree()
rep = "DegreeView({0: 1, 1: 3, 2: 2, 3: 3, 4: 2, 5: 1})"
assert_equal(repr(dv), rep)
def test_iter(self):
dv = self.dview(self.G)
for n, d in dv:
pass
idv = iter(dv)
assert_not_equal(iter(dv), dv)
assert_equal(iter(idv), idv)
assert_equal(next(idv), (0, dv[0]))
assert_equal(next(idv), (1, dv[1]))
# weighted
dv = self.dview(self.G, weight='foo')
for n, d in dv:
pass
idv = iter(dv)
assert_not_equal(iter(dv), dv)
assert_equal(iter(idv), idv)
assert_equal(next(idv), (0, dv[0]))
assert_equal(next(idv), (1, dv[1]))
def test_nbunch(self):
dv = self.dview(self.G)
dvn = dv(0)
assert_equal(dvn, 1)
dvn = dv([2, 3])
assert_equal(sorted(dvn), [(2, 2), (3, 3)])
def test_getitem(self):
dv = self.dview(self.G)
assert_equal(dv[0], 1)
assert_equal(dv[1], 3)
assert_equal(dv[2], 2)
assert_equal(dv[3], 3)
dv = self.dview(self.G, weight='foo')
assert_equal(dv[0], 1)
assert_equal(dv[1], 5)
assert_equal(dv[2], 2)
assert_equal(dv[3], 5)
def test_weight(self):
dv = self.dview(self.G)
dvw = dv(0, weight='foo')
assert_equal(dvw, 1)
dvw = dv(1, weight='foo')
assert_equal(dvw, 5)
dvw = dv([2, 3], weight='foo')
assert_equal(sorted(dvw), [(2, 2), (3, 5)])
dvd = dict(dv(weight='foo'))
assert_equal(dvd[0], 1)
assert_equal(dvd[1], 5)
assert_equal(dvd[2], 2)
assert_equal(dvd[3], 5)
def test_len(self):
dv = self.dview(self.G)
assert_equal(len(dv), 6)
class test_didegreeview(test_degreeview):
GRAPH = nx.DiGraph
dview = nx.reportviews.DiDegreeView
def test_repr(self):
dv = self.G.degree()
rep = "DiDegreeView({0: 1, 1: 3, 2: 2, 3: 3, 4: 2, 5: 1})"
assert_equal(repr(dv), rep)
class test_outdegreeview(test_degreeview):
GRAPH = nx.DiGraph
dview = nx.reportviews.OutDegreeView
def test_repr(self):
dv = self.G.out_degree()
rep = "OutDegreeView({0: 1, 1: 2, 2: 1, 3: 1, 4: 1, 5: 0})"
assert_equal(repr(dv), rep)
def test_nbunch(self):
dv = self.dview(self.G)
dvn = dv(0)
assert_equal(dvn, 1)
dvn = dv([2, 3])
assert_equal(sorted(dvn), [(2, 1), (3, 1)])
def test_getitem(self):
dv = self.dview(self.G)
assert_equal(dv[0], 1)
assert_equal(dv[1], 2)
assert_equal(dv[2], 1)
assert_equal(dv[3], 1)
dv = self.dview(self.G, weight='foo')
assert_equal(dv[0], 1)
assert_equal(dv[1], 4)
assert_equal(dv[2], 1)
assert_equal(dv[3], 1)
def test_weight(self):
dv = self.dview(self.G)
dvw = dv(0, weight='foo')
assert_equal(dvw, 1)
dvw = dv(1, weight='foo')
assert_equal(dvw, 4)
dvw = dv([2, 3], weight='foo')
assert_equal(sorted(dvw), [(2, 1), (3, 1)])
dvd = dict(dv(weight='foo'))
assert_equal(dvd[0], 1)
assert_equal(dvd[1], 4)
assert_equal(dvd[2], 1)
assert_equal(dvd[3], 1)
class test_indegreeview(test_degreeview):
GRAPH = nx.DiGraph
dview = nx.reportviews.InDegreeView
def test_repr(self):
dv = self.G.in_degree()
rep = "InDegreeView({0: 0, 1: 1, 2: 1, 3: 2, 4: 1, 5: 1})"
assert_equal(repr(dv), rep)
def test_nbunch(self):
dv = self.dview(self.G)
dvn = dv(0)
assert_equal(dvn, 0)
dvn = dv([2, 3])
assert_equal(sorted(dvn), [(2, 1), (3, 2)])
def test_getitem(self):
dv = self.dview(self.G)
assert_equal(dv[0], 0)
assert_equal(dv[1], 1)
assert_equal(dv[2], 1)
assert_equal(dv[3], 2)
dv = self.dview(self.G, weight='foo')
assert_equal(dv[0], 0)
assert_equal(dv[1], 1)
assert_equal(dv[2], 1)
assert_equal(dv[3], 4)
def test_weight(self):
dv = self.dview(self.G)
dvw = dv(0, weight='foo')
assert_equal(dvw, 0)
dvw = dv(1, weight='foo')
assert_equal(dvw, 1)
dvw = dv([2, 3], weight='foo')
assert_equal(sorted(dvw), [(2, 1), (3, 4)])
dvd = dict(dv(weight='foo'))
assert_equal(dvd[0], 0)
assert_equal(dvd[1], 1)
assert_equal(dvd[2], 1)
assert_equal(dvd[3], 4)
class test_multidegreeview(test_degreeview):
GRAPH = nx.MultiGraph
dview = nx.reportviews.MultiDegreeView
def test_repr(self):
dv = self.G.degree()
rep = "MultiDegreeView({0: 1, 1: 4, 2: 2, 3: 4, 4: 2, 5: 1})"
assert_equal(repr(dv), rep)
def test_nbunch(self):
dv = self.dview(self.G)
dvn = dv(0)
assert_equal(dvn, 1)
dvn = dv([2, 3])
assert_equal(sorted(dvn), [(2, 2), (3, 4)])
def test_getitem(self):
dv = self.dview(self.G)
assert_equal(dv[0], 1)
assert_equal(dv[1], 4)
assert_equal(dv[2], 2)
assert_equal(dv[3], 4)
dv = self.dview(self.G, weight='foo')
assert_equal(dv[0], 1)
assert_equal(dv[1], 7)
assert_equal(dv[2], 2)
assert_equal(dv[3], 7)
def test_weight(self):
dv = self.dview(self.G)
dvw = dv(0, weight='foo')
assert_equal(dvw, 1)
dvw = dv(1, weight='foo')
assert_equal(dvw, 7)
dvw = dv([2, 3], weight='foo')
assert_equal(sorted(dvw), [(2, 2), (3, 7)])
dvd = dict(dv(weight='foo'))
assert_equal(dvd[0], 1)
assert_equal(dvd[1], 7)
assert_equal(dvd[2], 2)
assert_equal(dvd[3], 7)
class test_dimultidegreeview(test_multidegreeview):
GRAPH = nx.MultiDiGraph
dview = nx.reportviews.DiMultiDegreeView
def test_repr(self):
dv = self.G.degree()
rep = "DiMultiDegreeView({0: 1, 1: 4, 2: 2, 3: 4, 4: 2, 5: 1})"
assert_equal(repr(dv), rep)
class test_outmultidegreeview(test_degreeview):
GRAPH = nx.MultiDiGraph
dview = nx.reportviews.OutMultiDegreeView
def test_repr(self):
dv = self.G.out_degree()
rep = "OutMultiDegreeView({0: 1, 1: 3, 2: 1, 3: 1, 4: 1, 5: 0})"
assert_equal(repr(dv), rep)
def test_nbunch(self):
dv = self.dview(self.G)
dvn = dv(0)
assert_equal(dvn, 1)
dvn = dv([2, 3])
assert_equal(sorted(dvn), [(2, 1), (3, 1)])
def test_getitem(self):
dv = self.dview(self.G)
assert_equal(dv[0], 1)
assert_equal(dv[1], 3)
assert_equal(dv[2], 1)
assert_equal(dv[3], 1)
dv = self.dview(self.G, weight='foo')
assert_equal(dv[0], 1)
assert_equal(dv[1], 6)
assert_equal(dv[2], 1)
assert_equal(dv[3], 1)
def test_weight(self):
dv = self.dview(self.G)
dvw = dv(0, weight='foo')
assert_equal(dvw, 1)
dvw = dv(1, weight='foo')
assert_equal(dvw, 6)
dvw = dv([2, 3], weight='foo')
assert_equal(sorted(dvw), [(2, 1), (3, 1)])
dvd = dict(dv(weight='foo'))
assert_equal(dvd[0], 1)
assert_equal(dvd[1], 6)
assert_equal(dvd[2], 1)
assert_equal(dvd[3], 1)
class test_inmultidegreeview(test_degreeview):
GRAPH = nx.MultiDiGraph
dview = nx.reportviews.InMultiDegreeView
def test_repr(self):
dv = self.G.in_degree()
rep = "InMultiDegreeView({0: 0, 1: 1, 2: 1, 3: 3, 4: 1, 5: 1})"
assert_equal(repr(dv), rep)
def test_nbunch(self):
dv = self.dview(self.G)
dvn = dv(0)
assert_equal(dvn, 0)
dvn = dv([2, 3])
assert_equal(sorted(dvn), [(2, 1), (3, 3)])
def test_getitem(self):
dv = self.dview(self.G)
assert_equal(dv[0], 0)
assert_equal(dv[1], 1)
assert_equal(dv[2], 1)
assert_equal(dv[3], 3)
dv = self.dview(self.G, weight='foo')
assert_equal(dv[0], 0)
assert_equal(dv[1], 1)
assert_equal(dv[2], 1)
assert_equal(dv[3], 6)
def test_weight(self):
dv = self.dview(self.G)
dvw = dv(0, weight='foo')
assert_equal(dvw, 0)
dvw = dv(1, weight='foo')
assert_equal(dvw, 1)
dvw = dv([2, 3], weight='foo')
assert_equal(sorted(dvw), [(2, 1), (3, 6)])
dvd = dict(dv(weight='foo'))
assert_equal(dvd[0], 0)
assert_equal(dvd[1], 1)
assert_equal(dvd[2], 1)
assert_equal(dvd[3], 6)
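The weighted expectations in the degree-view tests above all follow one rule: a node's weighted degree sums the named edge attribute over its incident edges, substituting 1 for edges that lack the attribute. A stand-alone sketch against the same fixture (path 0-1-2-3-4-5 with `foo=3` on edge (1, 3)); `weighted_degree` is an illustrative helper, not a networkx function:

```python
# Adjacency of nx.path_graph(6) after add_edge(1, 3, foo=2) followed by
# add_edge(1, 3, foo=3); on a plain Graph the second call overwrites the
# first, leaving foo=3.
adj = {
    0: {1: {}},
    1: {0: {}, 2: {}, 3: {'foo': 3}},
    2: {1: {}, 3: {}},
    3: {2: {}, 4: {}, 1: {'foo': 3}},
    4: {3: {}, 5: {}},
    5: {4: {}},
}


def weighted_degree(adj, node, weight=None):
    """Mirror DegreeView: neighbour count, or an attribute sum."""
    if weight is None:
        return len(adj[node])
    return sum(data.get(weight, 1) for data in adj[node].values())


# Matches test_getitem above: dv[1] == 3, and 5 with weight='foo'.
assert weighted_degree(adj, 1) == 3
assert weighted_degree(adj, 1, weight='foo') == 5
```

On the MultiGraph variant the two `add_edge` calls create parallel edges instead of overwriting, which is why the corresponding expected values there are 4 and 7.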
# ----------------------------------------------------------------------
# File: Latest/venv/Lib/site-packages/apptools/naming/adapter/api.py
# Repo: adamcvj/SatelliteTracker (Apache-2.0)
# ----------------------------------------------------------------------
from .dict_context_adapter import DictContextAdapter
from .dict_context_adapter_factory import DictContextAdapterFactory
from .instance_context_adapter import InstanceContextAdapter
from .instance_context_adapter_factory import InstanceContextAdapterFactory
from .list_context_adapter import ListContextAdapter
from .list_context_adapter_factory import ListContextAdapterFactory
from .trait_list_context_adapter import TraitListContextAdapter
from .trait_list_context_adapter_factory import TraitListContextAdapterFactory
from .tuple_context_adapter import TupleContextAdapter
from .tuple_context_adapter_factory import TupleContextAdapterFactory
from .trait_dict_context_adapter import TraitDictContextAdapter
# ----------------------------------------------------------------------
# File: test/test_12_filterbreakunlock.py
# Repo: growell/svnhook (Apache-2.0)
# ----------------------------------------------------------------------
#!/usr/bin/env python
######################################################################
# Test Break Unlock Filter
######################################################################
import os, re, sys, unittest
# Prefer local modules.
mylib = os.path.normpath(os.path.join(
os.path.dirname(__file__), '..'))
if os.path.isdir(mylib): sys.path.insert(0, mylib)
from test.base import HookTestCase
class TestFilterBreakUnlock(HookTestCase):
"""Break Unlock Filter Tests"""
def setUp(self):
super(TestFilterBreakUnlock, self).setUp(
re.sub(r'^test_?(.+)\.[^\.]+$', r'\1',
os.path.basename(__file__)))
def test_01_default_match(self):
"""Default sense with set flag."""
# Define the hook configuration.
self.writeConf('pre-unlock.xml', '''\
<?xml version="1.0"?>
<Actions>
<FilterBreakUnlock>
<SendError>Cannot remove other user's lock!</SendError>
</FilterBreakUnlock>
</Actions>
''')
# Call the script with the flag set.
p = self.callHook(
'pre-unlock', self.repopath, '/file1.txt',
self.username, 'mytoken', '1')
(stdoutdata, stderrdata) = p.communicate()
p.wait()
# Verify the proper error message is returned.
self.assertRegexpMatches(
stderrdata, r'Cannot remove other user',
'Expected error message not found')
# Verify a failure is indicated.
self.assertTrue(
p.returncode != 0,
'Unexpected success exit code found')
def test_02_default_mismatch(self):
"""Default sense with unset flag."""
# Define the hook configuration.
self.writeConf('pre-unlock.xml', '''\
<?xml version="1.0"?>
<Actions>
<FilterBreakUnlock>
<SendError>Cannot remove other user's lock!</SendError>
</FilterBreakUnlock>
</Actions>
''')
# Call the script with the flag unset.
p = self.callHook(
'pre-unlock', self.repopath, '/file1.txt',
self.username, 'mytoken', '0')
(stdoutdata, stderrdata) = p.communicate()
p.wait()
# Verify a failure isn't indicated.
self.assertTrue(
p.returncode == 0,
'Expected success exit code not found')
# Verify an error message isn't returned.
self.assertRegexpMatches(
stderrdata, r'(?s)^\s*$',
'Unexpected error message found')
def test_03_explicit_match(self):
"""Explicit sense with matching set flag."""
# Define the hook configuration.
self.writeConf('pre-unlock.xml', '''\
<?xml version="1.0"?>
<Actions>
<FilterBreakUnlock sense="true">
<SendError>Cannot remove other user's lock!</SendError>
</FilterBreakUnlock>
</Actions>
''')
# Call the script with the flag set.
p = self.callHook(
'pre-unlock', self.repopath, '/file2.txt',
self.username, 'mytoken', '1')
(stdoutdata, stderrdata) = p.communicate()
p.wait()
# Verify the proper error message is returned.
self.assertRegexpMatches(
stderrdata, r'Cannot remove other user',
'Expected error message not found')
# Verify a failure is indicated.
self.assertTrue(
p.returncode != 0,
'Unexpected success exit code found')
def test_04_explicit_match2(self):
"""Explicit sense with matching unset flag."""
# Define the hook configuration.
self.writeConf('pre-unlock.xml', '''\
<?xml version="1.0"?>
<Actions>
<FilterBreakUnlock sense="false">
<SendError>Cannot remove your own lock?!</SendError>
</FilterBreakUnlock>
</Actions>
''')
# Call the script with the flag unset.
p = self.callHook(
'pre-unlock', self.repopath, '/file2.txt',
self.username, 'mytoken', '0')
(stdoutdata, stderrdata) = p.communicate()
p.wait()
# Verify the proper error message is returned.
self.assertRegexpMatches(
stderrdata, r'Cannot remove your own',
'Expected error message not found')
        # Verify a failure is indicated.
self.assertTrue(
p.returncode != 0,
'Unexpected success exit code found')
def test_05_explicit_mismatch(self):
"""Explicit sense with mismatched set flag."""
# Define the hook configuration.
self.writeConf('pre-unlock.xml', '''\
<?xml version="1.0"?>
<Actions>
<FilterBreakUnlock sense="false">
<SendError>Cannot remove your own lock?!</SendError>
</FilterBreakUnlock>
</Actions>
''')
# Call the script with the flag set.
p = self.callHook(
'pre-unlock', self.repopath, '/file2.txt',
self.username, 'mytoken', '1')
(stdoutdata, stderrdata) = p.communicate()
p.wait()
# Verify a failure isn't indicated.
self.assertTrue(
p.returncode == 0,
'Unexpected error exit code found')
# Verify an error message isn't returned.
self.assertRegexpMatches(
stderrdata, r'(?s)^\s*$',
'Unexpected error message found')
# Allow manual execution of tests.
if __name__=='__main__':
for tclass in [TestFilterBreakUnlock]:
suite = unittest.TestLoader().loadTestsFromTestCase(tclass)
unittest.TextTestRunner(verbosity=2).run(suite)
########################### end of file ##############################
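Every test above follows the same spawn-and-inspect pattern: run the hook script, capture stderr, and check the exit status. A self-contained sketch of that pattern; the inline stand-in hook below is hypothetical — in the real tests the command line is built by the `HookTestCase.callHook` helper:

```python
import subprocess
import sys

# Hypothetical stand-in hook: fail with a message when the break-lock
# flag (last argument) is '1', succeed quietly when it is '0'.
FAKE_HOOK = (
    "import sys\n"
    "if sys.argv[-1] == '1':\n"
    "    sys.stderr.write('Cannot remove other user lock!')\n"
    "    sys.exit(1)\n"
)


def call_fake_hook(flag):
    # Same Popen/communicate/wait sequence the tests use.
    p = subprocess.Popen(
        [sys.executable, '-c', FAKE_HOOK, flag],
        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    stdoutdata, stderrdata = p.communicate()
    p.wait()
    return p.returncode, stderrdata.decode()


# Mirrors test_01 (flag set -> failure) and test_02 (flag unset -> success).
code, err = call_fake_hook('1')
assert code != 0 and 'Cannot remove' in err
code, err = call_fake_hook('0')
assert code == 0 and err == ''
```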
# ----------------------------------------------------------------------
# File: evoke/Reset/__init__.py
# Repo: howiemac/evoke5 (BSD-3-Clause)
# ----------------------------------------------------------------------
from .Reset import Reset
# ----------------------------------------------------------------------
# File: isy_homie/devices/base.py
# Repo: jspeckman/ISY-Homie-Bridge (MIT)
# ----------------------------------------------------------------------
#! /usr/bin/env python
import re
class Base(object):
def __init__(self, isy_device=None):
self.isy_device = isy_device
self.isy_device.add_property_event_handler(self.property_change)
    def get_homie_device_id(self):
        # return re.sub(r'\W+', '', self.isy_device.name.lower())
        return self.isy_device.get_identifier().replace(' ', '').lower()
    def property_change(self, property_, value):
pass
#print ('property change',self,property_,value)
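A minimal usage sketch for the class above. `StubDevice` is hypothetical and implements only the two calls `Base` makes on the device; the `Base` definition is repeated (condensed) so the sketch runs stand-alone:

```python
class Base(object):
    # Condensed copy of the Base class above, for a stand-alone sketch.
    def __init__(self, isy_device=None):
        self.isy_device = isy_device
        self.isy_device.add_property_event_handler(self.property_change)

    def get_homie_device_id(self):
        return self.isy_device.get_identifier().replace(' ', '').lower()

    def property_change(self, property_, value):
        pass


class StubDevice(object):
    """Hypothetical device double: just the two calls Base needs."""

    def __init__(self, identifier):
        self._identifier = identifier
        self.handlers = []

    def add_property_event_handler(self, handler):
        self.handlers.append(handler)

    def get_identifier(self):
        return self._identifier


device = StubDevice('Living Room Lamp')
node = Base(isy_device=device)

# The Homie id is the identifier lower-cased with spaces stripped, and
# construction registers property_change as the device event handler.
assert node.get_homie_device_id() == 'livingroomlamp'
assert device.handlers == [node.property_change]
```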
# ----------------------------------------------------------------------
# File: papermill/tests/test_exceptions.py
# Repo: pricebenjamin/papermill (BSD-3-Clause)
# ----------------------------------------------------------------------
import pytest  # noqa
# ----------------------------------------------------------------------
# File: labelbox/data/serialization/coco/__init__.py
# Repo: Cyniikal/labelbox-python (Apache-2.0)
# ----------------------------------------------------------------------
from .converter import COCOConverter
# ----------------------------------------------------------------------
# File: src/catcher/libs/responder/src/test.py
# Repo: gavin-anders/callback-catcher (Apache-2.0)
# ----------------------------------------------------------------------
from .packets import SMB2NegoAns
print("start")
# ----------------------------------------------------------------------
# File: test/test.py
# Repo: pmolinag/JamWifi (MIT)
# ----------------------------------------------------------------------
import unittest, sys, os
from scapy.all import *
class TestApp(unittest.TestCase):
    # Test that the calcule_time calculation from the Controller class works (0.0001 s interval)
def test_calcule_packets(self):
time = 1
packets = (60*time)/0.0001
self.assertEqual(packets, 600000)
    # Test that the calcule_time calculation from the Controller class works (0.03 s interval)
def test2_calcule_packets(self):
time = 1
packets = (60*time)/0.03
self.assertEqual(packets, 2000)
    # Test that the deauthentication jammer builds the packet correctly
def test_create_deauthentication(self):
packet = RadioTap()/Dot11(addr1='ff:ff:ff:ff:ff:ff',addr2='ff:ff:ff:ff:ff:ff',addr3='ff:ff:ff:ff:ff:ff')/Dot11Deauth()
self.assertEqual(packet.summary(), "RadioTap / 802.11 Management 12 ff:ff:ff:ff:ff:ff > ff:ff:ff:ff:ff:ff / Dot11Deauth")
    # Test that the RTS jammer builds the packet correctly
def test_create_rts(self):
packet = RadioTap()/Dot11(type=1, subtype=11, addr1='ff:ff:ff:ff:ff:ff',addr2='ff:ff:ff:ff:ff:ff', ID=0xFF7F)
self.assertEqual(packet.summary(), "RadioTap / 802.11 Control 11 ff:ff:ff:ff:ff:ff > ff:ff:ff:ff:ff:ff")
    # Test that the CTS jammer builds the packet correctly
def test_create_cts(self):
packet = RadioTap()/Dot11(type=1, subtype=12, addr1='ff:ff:ff:ff:ff:ff', ID=0xFF7F)
self.assertEqual(packet.summary(), "RadioTap / 802.11 Control 12 00:00:00:00:00:00 > ff:ff:ff:ff:ff:ff")
if __name__ == '__main__':
unittest.main()
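The two calculation tests above encode the same formula: packets sent = seconds / send interval, with the duration given in minutes. A stand-alone sketch; `packets_for` is an illustrative helper, not part of JamWifi, and it rounds to absorb floating-point fuzz:

```python
def packets_for(minutes, interval_s):
    """Frames sent in `minutes` at one frame every `interval_s` seconds."""
    return round((60 * minutes) / interval_s)


# Same expectations as test_calcule_packets and test2_calcule_packets.
assert packets_for(1, 0.0001) == 600000
assert packets_for(1, 0.03) == 2000
```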
# ----------------------------------------------------------------------
# File: theo/database.py
# Repo: TheodoreWon/python-theo-database (MIT)
# ----------------------------------------------------------------------
from theo.src.database.MongoDB import MongoDB
from theo.src.comp.MongoDBCtrl import MongoDBCtrl
# ----------------------------------------------------------------------
# File: test_autolens/unit/lens/util/test_lens_util.py
# Repo: PyJedi/PyAutoLens (MIT)
# ----------------------------------------------------------------------
import numpy as np
import pytest
import autolens as al
from autolens import exc
class TestPlaneImageFromGrid:
def test__3x3_grid__extracts_max_min_coordinates__creates_grid_including_half_pixel_offset_from_edge(
self
):
galaxy = al.Galaxy(redshift=0.5, light=al.lp.EllipticalSersic(intensity=1.0))
grid = np.array([[-1.5, -1.5], [1.5, 1.5]])
plane_image = al.util.lens.plane_image_of_galaxies_from_grid(
shape=(3, 3), grid=grid, galaxies=[galaxy], buffer=0.0
)
mask = al.Mask.manual(
mask_2d=np.full(shape=(3, 3), fill_value=False),
pixel_scales=1.0,
sub_size=1,
)
grid = al.MaskedGrid.manual_1d(
grid=np.array(
[
[-1.0, -1.0],
[-1.0, 0.0],
[-1.0, 1.0],
[0.0, -1.0],
[0.0, 0.0],
[0.0, 1.0],
[1.0, -1.0],
[1.0, 0.0],
[1.0, 1.0],
]
),
mask=mask,
)
plane_image_galaxy = galaxy.profile_image_from_grid(grid)
assert (plane_image.array == plane_image_galaxy).all()
def test__3x3_grid__extracts_max_min_coordinates__ignores_other_coordinates_more_central(
self
):
galaxy = al.Galaxy(redshift=0.5, light=al.lp.EllipticalSersic(intensity=1.0))
grid = np.array(
[
[-1.5, -1.5],
[1.5, 1.5],
[0.1, -0.1],
[-1.0, 0.6],
[1.4, -1.3],
[1.5, 1.5],
]
)
plane_image = al.util.lens.plane_image_of_galaxies_from_grid(
shape=(3, 3), grid=grid, galaxies=[galaxy], buffer=0.0
)
mask = al.Mask.manual(
mask_2d=np.full(shape=(3, 3), fill_value=False),
pixel_scales=1.0,
sub_size=1,
)
grid = al.MaskedGrid.manual_1d(
grid=np.array(
[
[-1.0, -1.0],
[-1.0, 0.0],
[-1.0, 1.0],
[0.0, -1.0],
[0.0, 0.0],
[0.0, 1.0],
[1.0, -1.0],
[1.0, 0.0],
[1.0, 1.0],
]
),
mask=mask,
)
plane_image_galaxy = galaxy.profile_image_from_grid(grid=grid)
assert (plane_image.array == plane_image_galaxy).all()
def test__2x3_grid__shape_change_correct_and_coordinates_shift(self):
galaxy = al.Galaxy(redshift=0.5, light=al.lp.EllipticalSersic(intensity=1.0))
grid = np.array([[-1.5, -1.5], [1.5, 1.5]])
plane_image = al.util.lens.plane_image_of_galaxies_from_grid(
shape=(2, 3), grid=grid, galaxies=[galaxy], buffer=0.0
)
mask = al.Mask.manual(
mask_2d=np.full(shape=(2, 3), fill_value=False),
pixel_scales=1.0,
sub_size=1,
)
grid = al.MaskedGrid.manual_1d(
grid=np.array(
[
[-0.75, -1.0],
[-0.75, 0.0],
[-0.75, 1.0],
[0.75, -1.0],
[0.75, 0.0],
[0.75, 1.0],
]
),
mask=mask,
)
plane_image_galaxy = galaxy.profile_image_from_grid(grid=grid)
assert (plane_image.array == plane_image_galaxy).all()
def test__3x2_grid__shape_change_correct_and_coordinates_shift(self):
galaxy = al.Galaxy(redshift=0.5, light=al.lp.EllipticalSersic(intensity=1.0))
grid = np.array([[-1.5, -1.5], [1.5, 1.5]])
plane_image = al.util.lens.plane_image_of_galaxies_from_grid(
shape=(3, 2), grid=grid, galaxies=[galaxy], buffer=0.0
)
mask = al.Mask.manual(
mask_2d=np.full(shape=(3, 2), fill_value=False),
pixel_scales=1.0,
sub_size=1,
)
grid = al.MaskedGrid.manual_1d(
grid=np.array(
[
[-1.0, -0.75],
[-1.0, 0.75],
[0.0, -0.75],
[0.0, 0.75],
[1.0, -0.75],
[1.0, 0.75],
]
),
mask=mask,
)
plane_image_galaxy = galaxy.profile_image_from_grid(grid=grid)
assert (plane_image.array == plane_image_galaxy).all()
def test__3x3_grid__buffer_aligns_two_grids(self):
galaxy = al.Galaxy(redshift=0.5, light=al.lp.EllipticalSersic(intensity=1.0))
grid_without_buffer = np.array([[-1.48, -1.48], [1.48, 1.48]])
plane_image = al.util.lens.plane_image_of_galaxies_from_grid(
shape=(3, 3), grid=grid_without_buffer, galaxies=[galaxy], buffer=0.02
)
mask = al.Mask.manual(
mask_2d=np.full(shape=(3, 3), fill_value=False),
pixel_scales=1.0,
sub_size=1,
)
grid = al.MaskedGrid.manual_1d(
grid=np.array(
[
[-1.0, -1.0],
[-1.0, 0.0],
[-1.0, 1.0],
[0.0, -1.0],
[0.0, 0.0],
[0.0, 1.0],
[1.0, -1.0],
[1.0, 0.0],
[1.0, 1.0],
]
),
mask=mask,
)
plane_image_galaxy = galaxy.profile_image_from_grid(grid=grid)
assert (plane_image.array == plane_image_galaxy).all()
class TestPlaneRedshifts:
def test__from_galaxies__3_galaxies_reordered_in_ascending_redshift(self):
galaxies = [
al.Galaxy(redshift=2.0),
al.Galaxy(redshift=1.0),
al.Galaxy(redshift=0.1),
]
ordered_plane_redshifts = al.util.lens.ordered_plane_redshifts_from_galaxies(
galaxies=galaxies
)
assert ordered_plane_redshifts == [0.1, 1.0, 2.0]
def test_from_galaxies__3_galaxies_two_same_redshift_planes_redshift_order_is_size_2_with_redshifts(
self
):
galaxies = [
al.Galaxy(redshift=1.0),
al.Galaxy(redshift=1.0),
al.Galaxy(redshift=0.1),
]
ordered_plane_redshifts = al.util.lens.ordered_plane_redshifts_from_galaxies(
galaxies=galaxies
)
assert ordered_plane_redshifts == [0.1, 1.0]
def test__from_galaxies__6_galaxies_producing_4_planes(self):
g0 = al.Galaxy(redshift=1.0)
g1 = al.Galaxy(redshift=1.0)
g2 = al.Galaxy(redshift=0.1)
g3 = al.Galaxy(redshift=1.05)
g4 = al.Galaxy(redshift=0.95)
g5 = al.Galaxy(redshift=1.05)
galaxies = [g0, g1, g2, g3, g4, g5]
ordered_plane_redshifts = al.util.lens.ordered_plane_redshifts_from_galaxies(
galaxies=galaxies
)
assert ordered_plane_redshifts == [0.1, 0.95, 1.0, 1.05]
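The ordering behaviour asserted above reduces to sorting the unique redshift values; a minimal stdlib sketch (a hypothetical helper, not the actual PyAutoLens implementation):

```python
def ordered_plane_redshifts(redshifts):
    """One plane per distinct redshift, in ascending order."""
    return sorted(set(redshifts))

# Duplicates collapse into a single plane, as the tests above assert.
assert ordered_plane_redshifts([1.0, 1.0, 0.1, 1.05, 0.95, 1.05]) == [0.1, 0.95, 1.0, 1.05]
```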
def test__from_main_plane_redshifts_and_slices(self):
ordered_plane_redshifts = al.util.lens.ordered_plane_redshifts_from_lens_source_plane_redshifts_and_slice_sizes(
lens_redshifts=[1.0],
source_plane_redshift=3.0,
planes_between_lenses=[1, 1],
)
assert ordered_plane_redshifts == [0.5, 1.0, 2.0]
def test__different_number_of_slices_between_planes(self):
ordered_plane_redshifts = al.util.lens.ordered_plane_redshifts_from_lens_source_plane_redshifts_and_slice_sizes(
lens_redshifts=[1.0],
source_plane_redshift=2.0,
planes_between_lenses=[2, 3],
)
assert ordered_plane_redshifts == [
(1.0 / 3.0),
(2.0 / 3.0),
1.0,
1.25,
1.5,
1.75,
]
def test__if_number_of_input_slices_is_not_equal_to_number_of_plane_intervals__raises_errror(
self
):
with pytest.raises(exc.RayTracingException):
al.util.lens.ordered_plane_redshifts_from_lens_source_plane_redshifts_and_slice_sizes(
lens_redshifts=[1.0],
source_plane_redshift=2.0,
planes_between_lenses=[2, 3, 1],
)
with pytest.raises(exc.RayTracingException):
al.util.lens.ordered_plane_redshifts_from_lens_source_plane_redshifts_and_slice_sizes(
lens_redshifts=[1.0],
source_plane_redshift=2.0,
planes_between_lenses=[2],
)
with pytest.raises(exc.RayTracingException):
al.util.lens.ordered_plane_redshifts_from_lens_source_plane_redshifts_and_slice_sizes(
lens_redshifts=[1.0, 3.0],
source_plane_redshift=2.0,
planes_between_lenses=[2],
)
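The slicing utility interpolates extra planes between redshift 0, each lens plane, and the source plane, which is why `planes_between_lenses` needs exactly one entry per interval. A stdlib sketch of that behaviour (a hypothetical helper, not the actual implementation):

```python
def slice_redshifts(lens_redshifts, source_plane_redshift, planes_between_lenses):
    """Evenly interpolate extra plane redshifts between 0, the lenses and the source."""
    boundaries = [0.0] + list(lens_redshifts) + [source_plane_redshift]
    if len(planes_between_lenses) != len(boundaries) - 1:
        raise ValueError("one slice count is needed per interval between planes")
    redshifts = []
    for (lo, hi), n in zip(zip(boundaries, boundaries[1:]), planes_between_lenses):
        redshifts += [lo + (hi - lo) * (i + 1) / (n + 1) for i in range(n)]
        redshifts.append(hi)
    return redshifts[:-1]  # the source-plane redshift itself is not an extra plane

assert slice_redshifts([1.0], 3.0, [1, 1]) == [0.5, 1.0, 2.0]
assert slice_redshifts([1.0], 2.0, [2, 3]) == [1.0 / 3.0, 2.0 / 3.0, 1.0, 1.25, 1.5, 1.75]
```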
class TestGalaxyOrdering:
def test__3_galaxies_reordered_in_ascending_redshift__planes_match_galaxy_redshifts(
self
):
galaxies = [
al.Galaxy(redshift=2.0),
al.Galaxy(redshift=1.0),
al.Galaxy(redshift=0.1),
]
ordered_plane_redshifts = [0.1, 1.0, 2.0]
galaxies_in_redshift_ordered_planes = al.util.lens.galaxies_in_redshift_ordered_planes_from_galaxies(
galaxies=galaxies, plane_redshifts=ordered_plane_redshifts
)
assert galaxies_in_redshift_ordered_planes[0][0].redshift == 0.1
assert galaxies_in_redshift_ordered_planes[1][0].redshift == 1.0
assert galaxies_in_redshift_ordered_planes[2][0].redshift == 2.0
def test_3_galaxies_x2_same_redshift__order_is_size_2_with_redshifts__plane_match_galaxy_redshifts(
self
):
galaxies = [
al.Galaxy(redshift=1.0),
al.Galaxy(redshift=1.0),
al.Galaxy(redshift=0.1),
]
ordered_plane_redshifts = [0.1, 1.0]
galaxies_in_redshift_ordered_planes = al.util.lens.galaxies_in_redshift_ordered_planes_from_galaxies(
galaxies=galaxies, plane_redshifts=ordered_plane_redshifts
)
assert galaxies_in_redshift_ordered_planes[0][0].redshift == 0.1
assert galaxies_in_redshift_ordered_planes[1][0].redshift == 1.0
assert galaxies_in_redshift_ordered_planes[1][1].redshift == 1.0
def test__6_galaxies_producing_4_planes__galaxy_redshift_match_planes(self):
g0 = al.Galaxy(redshift=1.0)
g1 = al.Galaxy(redshift=1.0)
g2 = al.Galaxy(redshift=0.1)
g3 = al.Galaxy(redshift=1.05)
g4 = al.Galaxy(redshift=0.95)
g5 = al.Galaxy(redshift=1.05)
galaxies = [g0, g1, g2, g3, g4, g5]
ordered_plane_redshifts = [0.1, 0.95, 1.0, 1.05]
galaxies_in_redshift_ordered_planes = al.util.lens.galaxies_in_redshift_ordered_planes_from_galaxies(
galaxies=galaxies, plane_redshifts=ordered_plane_redshifts
)
assert galaxies_in_redshift_ordered_planes[0][0].redshift == 0.1
assert galaxies_in_redshift_ordered_planes[1][0].redshift == 0.95
assert galaxies_in_redshift_ordered_planes[2][0].redshift == 1.0
assert galaxies_in_redshift_ordered_planes[2][1].redshift == 1.0
assert galaxies_in_redshift_ordered_planes[3][0].redshift == 1.05
assert galaxies_in_redshift_ordered_planes[3][1].redshift == 1.05
assert galaxies_in_redshift_ordered_planes[0] == [g2]
assert galaxies_in_redshift_ordered_planes[1] == [g4]
assert galaxies_in_redshift_ordered_planes[2] == [g0, g1]
assert galaxies_in_redshift_ordered_planes[3] == [g3, g5]
def test__galaxy_redshifts_dont_match_plane_redshifts__tied_to_nearest_plane(self):
ordered_plane_redshifts = [0.5, 1.0, 2.0, 3.0]
galaxies = [
al.Galaxy(redshift=0.2),
al.Galaxy(redshift=0.4),
al.Galaxy(redshift=0.8),
al.Galaxy(redshift=1.2),
al.Galaxy(redshift=2.9),
]
galaxies_in_redshift_ordered_planes = al.util.lens.galaxies_in_redshift_ordered_planes_from_galaxies(
galaxies=galaxies, plane_redshifts=ordered_plane_redshifts
)
assert galaxies_in_redshift_ordered_planes[0][0].redshift == 0.2
assert galaxies_in_redshift_ordered_planes[0][1].redshift == 0.4
assert galaxies_in_redshift_ordered_planes[1][0].redshift == 0.8
assert galaxies_in_redshift_ordered_planes[1][1].redshift == 1.2
assert galaxies_in_redshift_ordered_planes[2] == []
assert galaxies_in_redshift_ordered_planes[3][0].redshift == 2.9
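Galaxies whose redshift matches no plane exactly are tied to the nearest plane, and planes may be left empty; a stdlib sketch of the assignment rule (a hypothetical helper, not the actual implementation):

```python
def galaxies_in_planes(galaxy_redshifts, plane_redshifts):
    """Bin each galaxy redshift into the plane with the closest redshift."""
    planes = [[] for _ in plane_redshifts]
    for z in galaxy_redshifts:
        nearest = min(range(len(plane_redshifts)), key=lambda i: abs(plane_redshifts[i] - z))
        planes[nearest].append(z)
    return planes

# Matches the assertions above: plane 2.0 receives no galaxy at all.
assert galaxies_in_planes([0.2, 0.4, 0.8, 1.2, 2.9], [0.5, 1.0, 2.0, 3.0]) == [
    [0.2, 0.4], [0.8, 1.2], [], [2.9]
]
```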
def test__different_number_of_slices_between_planes(self):
ordered_plane_redshifts = [(1.0 / 3.0), (2.0 / 3.0), 1.0, 1.25, 1.5, 1.75, 2.0]
galaxies = [
al.Galaxy(redshift=0.1),
al.Galaxy(redshift=0.2),
al.Galaxy(redshift=1.25),
al.Galaxy(redshift=1.35),
al.Galaxy(redshift=1.45),
al.Galaxy(redshift=1.55),
al.Galaxy(redshift=1.9),
]
galaxies_in_redshift_ordered_planes = al.util.lens.galaxies_in_redshift_ordered_planes_from_galaxies(
galaxies=galaxies, plane_redshifts=ordered_plane_redshifts
)
assert galaxies_in_redshift_ordered_planes[0][0].redshift == 0.1
assert galaxies_in_redshift_ordered_planes[0][1].redshift == 0.2
assert galaxies_in_redshift_ordered_planes[1] == []
assert galaxies_in_redshift_ordered_planes[2] == []
assert galaxies_in_redshift_ordered_planes[3][0].redshift == 1.25
assert galaxies_in_redshift_ordered_planes[3][1].redshift == 1.35
assert galaxies_in_redshift_ordered_planes[4][0].redshift == 1.45
assert galaxies_in_redshift_ordered_planes[4][1].redshift == 1.55
assert galaxies_in_redshift_ordered_planes[6][0].redshift == 1.9
# File: tests/_site/loader.py (repo: QueoLda/django-oscar, license: BSD-3-Clause)
class DummyClass:
pass
def custom_class_loader(module_label, classnames, module_prefix):
# For testing purposes just return a dummy class
    return [DummyClass for _ in classnames]
# File: zookeeper_dashboard/common.py (repo: gamechanger/zookeeper_dashboard, license: Apache-2.0)
import os
from django.conf import settings
def get_zookeeper_servers():
return (
os.getenv('ZOOKEEPER_SERVERS')
        or getattr(settings, 'ZOOKEEPER_SERVERS')
)
def get_zookeeper_servers_as_list():
return get_zookeeper_servers().split(',')
def get_zookeeper_server(id):
return get_zookeeper_servers_as_list()[int(id)]
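The env-first, settings-second lookup above is a common twelve-factor configuration pattern; a stdlib-only sketch of the same fallback chain (the `FakeSettings` class is a stand-in for `django.conf.settings`, not part of the repo):

```python
import os

class FakeSettings:
    # Stand-in for django.conf.settings in this sketch.
    ZOOKEEPER_SERVERS = "zk1:2181,zk2:2181"

def get_servers(settings):
    # Environment variable wins; settings attribute is the fallback.
    return os.getenv("ZOOKEEPER_SERVERS") or getattr(settings, "ZOOKEEPER_SERVERS")

os.environ.pop("ZOOKEEPER_SERVERS", None)      # no env override set
assert get_servers(FakeSettings()).split(",") == ["zk1:2181", "zk2:2181"]

os.environ["ZOOKEEPER_SERVERS"] = "zk3:2181"   # env var takes precedence
assert get_servers(FakeSettings()) == "zk3:2181"
```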
# File: training/nnmodels.py (repo: elaloy/gan_for_gradient_based_inv, license: MIT)
# -*- coding: utf-8 -*-
"""
Created on Thu Aug 30 11:54:17 2018
@author: elaloy
"""
import torch.nn as nn
class netD(nn.Module):
def __init__(self, nc = 1, ndf = 64, dfs = 9, ngpu = 1):
super(netD, self).__init__()
self.ngpu = ngpu
main = nn.Sequential(
nn.Conv2d(nc, ndf, dfs, 2, dfs//2, bias=False),
nn.LeakyReLU(0.2, inplace=True),
nn.InstanceNorm2d(ndf),
nn.Conv2d(ndf, ndf*2, dfs, 2, dfs//2, bias=False),
nn.LeakyReLU(0.2, inplace=True),
nn.InstanceNorm2d(ndf*2),
nn.Conv2d(ndf*2, ndf*4, dfs, 2, dfs//2, bias=False),
nn.LeakyReLU(0.2, inplace=True),
nn.InstanceNorm2d(ndf*4),
nn.Conv2d(ndf*4, ndf*8, dfs, 2, dfs//2, bias=False),
nn.LeakyReLU(0.2, inplace=True),
nn.InstanceNorm2d(ndf*8),
nn.Conv2d(ndf * 8, 1, kernel_size=dfs, stride=2, padding=2, bias=False),
nn.Sigmoid()
)
self.main = main
def forward(self, input):
if input.is_cuda and self.ngpu > 1:
output = nn.parallel.data_parallel(self.main, input,
range(self.ngpu))
else:
output = self.main(input)
return output.view(-1, 1).squeeze(1)
#class netD(nn.Module):
# def __init__(self, nc = 1, ndf = 64, dfs = 9, ngpu = 1):
# super(netD, self).__init__()
# self.ngpu = ngpu
#
# main = nn.Sequential(
#
# nn.Conv2d(nc, ndf, dfs, 2, dfs//2, bias=False),
# nn.LeakyReLU(0.2, inplace=True),
# nn.BatchNorm2d(ndf),
#
# nn.Conv2d(ndf, ndf*2, dfs, 2, dfs//2, bias=False),
# nn.LeakyReLU(0.2, inplace=True),
# nn.BatchNorm2d(ndf*2),
#
# nn.Conv2d(ndf*2, ndf*4, dfs, 2, dfs//2, bias=False),
# nn.LeakyReLU(0.2, inplace=True),
# nn.BatchNorm2d(ndf*4),
#
# nn.Conv2d(ndf*4, ndf*8, dfs, 2, dfs//2, bias=False),
# nn.LeakyReLU(0.2, inplace=True),
# nn.BatchNorm2d(ndf*8),
#
# nn.Conv2d(ndf * 8, 1, kernel_size=dfs, stride=2, padding=2, bias=False),
# nn.Sigmoid()
# )
# self.main = main
#
# def forward(self, input):
# if input.is_cuda and self.ngpu > 1:
# output = nn.parallel.data_parallel(self.main, input,
# range(self.ngpu))
# else:
# output = self.main(input)
#
# return output.view(-1, 1).squeeze(1)
class netG(nn.Module):
def __init__(self, nc = 1, nz = 1, ngf = 64, gfs = 5, ngpu = 1):
super(netG, self).__init__()
self.ngpu = ngpu
self.main = nn.Sequential(
nn.ConvTranspose2d( nz, ngf * 8, gfs, 2, gfs//2, bias=False),
nn.ReLU(True),
nn.InstanceNorm2d(ngf * 8),
nn.ConvTranspose2d(ngf * 8, ngf * 4, gfs, 2, gfs//2, bias=False),
nn.ReLU(True),
nn.InstanceNorm2d(ngf * 4),
nn.ConvTranspose2d(ngf * 4, ngf * 2, gfs, 2, gfs//2, bias=False),
nn.ReLU(True),
nn.InstanceNorm2d(ngf * 2),
nn.ConvTranspose2d(ngf * 2, ngf, gfs, 2, gfs//2, bias=False),
nn.ReLU(True),
nn.InstanceNorm2d(ngf),
nn.ConvTranspose2d( ngf, nc, gfs, 2, 2, bias=False),
nn.ReLU(True),
### Start dilations ###
nn.ConvTranspose2d( nc,ngf, gfs, 1, 6, output_padding=0,bias=False,dilation=3),
nn.ReLU(True),
nn.InstanceNorm2d(ngf),
nn.ConvTranspose2d( ngf, nc, gfs, 1, 10, output_padding=0, bias=False,dilation=5),
nn.Tanh()
)
def forward(self, input):
if input.is_cuda and self.ngpu > 1:
output = nn.parallel.data_parallel(self.main, input,
range(self.ngpu))
else:
output = self.main(input)
return output
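Each strided `ConvTranspose2d` layer above roughly doubles the spatial size, while the two dilated layers at the end preserve it; the standard transposed-convolution output-size formula makes this checkable without PyTorch (stdlib sketch):

```python
def conv_transpose_out(size, kernel, stride=1, padding=0, output_padding=0, dilation=1):
    """Spatial output size of a transposed convolution (standard formula)."""
    return (size - 1) * stride - 2 * padding + dilation * (kernel - 1) + output_padding + 1

# A strided generator layer (gfs=5, stride=2, padding=gfs//2) maps 4 -> 7.
assert conv_transpose_out(4, 5, stride=2, padding=2) == 7
# The dilation-3 layer (stride 1, padding 6) is size-preserving: 7 -> 7.
assert conv_transpose_out(7, 5, stride=1, padding=6, dilation=3) == 7
# So is the final dilation-5 layer (stride 1, padding 10).
assert conv_transpose_out(7, 5, stride=1, padding=10, dilation=5) == 7
```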
#class netG(nn.Module):
# def __init__(self, nc = 1, nz = 1, ngf = 64, gfs = 5, ngpu = 1):
# super(netG, self).__init__()
# self.ngpu = ngpu
#
# self.main = nn.Sequential(
#
# nn.ConvTranspose2d( nz, ngf * 8, gfs, 2, gfs//2, bias=False),
# nn.ReLU(True),
# nn.BatchNorm2d(ngf * 8),
#
# nn.ConvTranspose2d(ngf * 8, ngf * 4, gfs, 2, gfs//2, bias=False),
# nn.ReLU(True),
# nn.BatchNorm2d(ngf * 4),
#
# nn.ConvTranspose2d(ngf * 4, ngf * 2, gfs, 2, gfs//2, bias=False),
# nn.ReLU(True),
# nn.BatchNorm2d(ngf * 2),
#
# nn.ConvTranspose2d(ngf * 2, ngf, gfs, 2, gfs//2, bias=False),
# nn.ReLU(True),
# nn.BatchNorm2d(ngf),
#
# nn.ConvTranspose2d( ngf, nc, gfs, 2, 2, bias=False),
# nn.ReLU(True),
#
# ### Start dilations ###
# nn.ConvTranspose2d( nc, 64, gfs, 1, 6, output_padding=0,bias=False,dilation=3),
# nn.ReLU(True),
# nn.BatchNorm2d(64),
#
# nn.ConvTranspose2d( 64, nc, gfs, 1, 10, output_padding=0, bias=False,dilation=5),
# nn.Tanh()
#
# )
#
# def forward(self, input):
# if input.is_cuda and self.ngpu > 1:
# output = nn.parallel.data_parallel(self.main, input,
# range(self.ngpu))
# else:
# output = self.main(input)
#        return output
# File: tests/test_dataframe_comparer.py (repo: alexott/chispa, license: MIT)
import pytest
import luigi
import random
import time
class A(luigi.Task):
serial = luigi.IntParameter(default=0)
def run(self):
""" Just `touch` file. """
with self.output().open('w') as output:
pass
time.sleep(random.randint(5, 15))
def output(self):
return luigi.LocalTarget(path='throwaway-a-%04d' % self.serial)
class B(luigi.Task):
serial = luigi.IntParameter(default=0)
def run(self):
""" Just `touch` file. """
with self.output().open('w') as output:
pass
time.sleep(random.randint(5, 15))
def output(self):
return luigi.LocalTarget(path='throwaway-b-%04d' % self.serial)
class C(luigi.Task):
def requires(self):
return [A(serial=i) for i in range(20)] + [B(serial=i) for i in range(20)]
def run(self):
""" Just `touch` file. """
with self.output().open('w') as output:
pass
time.sleep(random.randint(1, 10))
def output(self):
return luigi.LocalTarget(path='throwaway-c')
if __name__ == '__main__':
# Start central scheduler with `luigid` in a separate terminal or with
# `luigid --background`.
luigi.run()
| 21.857143 | 82 | 0.580882 | 159 | 1,224 | 4.421384 | 0.333333 | 0.056899 | 0.042674 | 0.059744 | 0.708393 | 0.708393 | 0.708393 | 0.651494 | 0.583215 | 0.583215 | 0 | 0.021348 | 0.272876 | 1,224 | 55 | 83 | 22.254545 | 0.768539 | 0.124183 | 0 | 0.533333 | 0 | 0 | 0.05138 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.233333 | false | 0.1 | 0.1 | 0.133333 | 0.633333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 6 |
befa80aaf4cd51b5403587a83330d7dcf69f6b52 | 4,728 | py | Python | tests/test_dataframe_comparer.py | alexott/chispa | 3c35c455ab4927074186ad38f8aa986c8beb0343 | [
"MIT"
] | null | null | null | tests/test_dataframe_comparer.py | alexott/chispa | 3c35c455ab4927074186ad38f8aa986c8beb0343 | [
"MIT"
] | null | null | null | tests/test_dataframe_comparer.py | alexott/chispa | 3c35c455ab4927074186ad38f8aa986c8beb0343 | [
"MIT"
] | 1 | 2020-12-21T00:02:15.000Z | 2020-12-21T00:02:15.000Z | import pytest
from spark import *
from chispa import *
from chispa.dataframe_comparer import are_dfs_equal
from chispa.schema_comparer import SchemasNotEqualError
def describe_assert_column_equality():
def it_throws_with_schema_mismatches():
data1 = [(1, "jose"), (2, "li"), (3, "laura")]
df1 = spark.createDataFrame(data1, ["num", "expected_name"])
data2 = [("bob", "jose"), ("li", "li"), ("luisa", "laura")]
df2 = spark.createDataFrame(data2, ["name", "expected_name"])
with pytest.raises(SchemasNotEqualError) as e_info:
assert_df_equality(df1, df2)
def it_throws_with_schema_column_order_mismatch():
data1 = [(1, "jose"), (2, "li")]
df1 = spark.createDataFrame(data1, ["num", "name"])
data2 = [("jose", 1), ("li", 1)]
df2 = spark.createDataFrame(data2, ["name", "num"])
with pytest.raises(SchemasNotEqualError) as e_info:
assert_df_equality(df1, df2)
    def it_does_not_throw_on_schema_column_order_mismatch_with_transforms():
        data1 = [(1, "jose"), (2, "li")]
        df1 = spark.createDataFrame(data1, ["num", "name"])
        data2 = [("jose", 1), ("li", 2)]
        df2 = spark.createDataFrame(data2, ["name", "num"])
        assert_df_equality(df1, df2, transforms=[
            lambda df: df.select(sorted(df.columns))
        ])
def it_throws_with_content_mismatches():
data1 = [("jose", "jose"), ("li", "li"), ("luisa", "laura")]
df1 = spark.createDataFrame(data1, ["name", "expected_name"])
data2 = [("bob", "jose"), ("li", "li"), ("luisa", "laura")]
df2 = spark.createDataFrame(data2, ["name", "expected_name"])
with pytest.raises(DataFramesNotEqualError) as e_info:
assert_df_equality(df1, df2)
def it_throws_with_length_mismatches():
data1 = [("jose", "jose"), ("li", "li"), ("laura", "laura")]
df1 = spark.createDataFrame(data1, ["name", "expected_name"])
data2 = [("jose", "jose"), ("li", "li")]
df2 = spark.createDataFrame(data2, ["name", "expected_name"])
with pytest.raises(DataFramesNotEqualError) as e_info:
assert_df_equality(df1, df2)
def describe_are_dfs_equal():
def it_returns_false_with_schema_mismatches():
data1 = [(1, "jose"), (2, "li"), (3, "laura")]
df1 = spark.createDataFrame(data1, ["num", "expected_name"])
data2 = [("bob", "jose"), ("li", "li"), ("luisa", "laura")]
df2 = spark.createDataFrame(data2, ["name", "expected_name"])
assert are_dfs_equal(df1, df2) == False
def it_returns_false_with_content_mismatches():
data1 = [("jose", "jose"), ("li", "li"), ("luisa", "laura")]
df1 = spark.createDataFrame(data1, ["name", "expected_name"])
data2 = [("bob", "jose"), ("li", "li"), ("luisa", "laura")]
df2 = spark.createDataFrame(data2, ["name", "expected_name"])
assert are_dfs_equal(df1, df2) == False
def it_returns_true_when_dfs_are_same():
data1 = [("bob", "jose"), ("li", "li"), ("luisa", "laura")]
df1 = spark.createDataFrame(data1, ["name", "expected_name"])
data2 = [("bob", "jose"), ("li", "li"), ("luisa", "laura")]
df2 = spark.createDataFrame(data2, ["name", "expected_name"])
assert are_dfs_equal(df1, df2) == True
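`are_dfs_equal` answers with a boolean instead of raising; its semantics boil down to comparing schemas first and rows second, sketched here with `(columns, rows)` pairs standing in for Spark DataFrames:

```python
def are_equal(df1, df2):
    """Equal when both the column schema and the row contents match."""
    cols1, rows1 = df1
    cols2, rows2 = df2
    return cols1 == cols2 and rows1 == rows2

df_a = (["name", "expected_name"], [("jose", "jose"), ("li", "li")])
df_b = (["name", "expected_name"], [("bob", "jose"), ("li", "li")])
assert are_equal(df_a, df_a) is True
assert are_equal(df_a, df_b) is False
```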
def describe_assert_approx_df_equality():
def it_throws_with_content_mismatch():
data1 = [(1.0, "jose"), (1.1, "li"), (1.2, "laura"), (1.0, None)]
df1 = spark.createDataFrame(data1, ["num", "expected_name"])
data2 = [(1.0, "jose"), (1.05, "li"), (1.0, "laura"), (None, "hi")]
df2 = spark.createDataFrame(data2, ["num", "expected_name"])
with pytest.raises(DataFramesNotEqualError) as e_info:
assert_approx_df_equality(df1, df2, 0.1)
def it_throws_with_with_length_mismatch():
data1 = [(1.0, "jose"), (1.1, "li"), (1.2, "laura"), (None, None)]
df1 = spark.createDataFrame(data1, ["num", "expected_name"])
data2 = [(1.0, "jose"), (1.05, "li")]
df2 = spark.createDataFrame(data2, ["num", "expected_name"])
with pytest.raises(DataFramesNotEqualError) as e_info:
assert_approx_df_equality(df1, df2, 0.1)
def it_does_not_throw_with_no_mismatch():
data1 = [(1.0, "jose"), (1.1, "li"), (1.2, "laura"), (None, None)]
df1 = spark.createDataFrame(data1, ["num", "expected_name"])
data2 = [(1.0, "jose"), (1.05, "li"), (1.2, "laura"), (None, None)]
df2 = spark.createDataFrame(data2, ["num", "expected_name"])
assert_approx_df_equality(df1, df2, 0.1)
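The approximate comparison treats floats as equal within the given tolerance, while `None` only ever matches `None`; a stdlib sketch of the per-row rule (a hypothetical helper, not chispa's implementation):

```python
import math

def approx_row_equal(row1, row2, precision):
    """Floats compare within `precision`; None only matches None; rest is exact."""
    if len(row1) != len(row2):
        return False
    for a, b in zip(row1, row2):
        if a is None or b is None:
            if a is not b:
                return False
        elif isinstance(a, float) and isinstance(b, float):
            if not math.isclose(a, b, abs_tol=precision):
                return False
        elif a != b:
            return False
    return True

assert approx_row_equal((1.0, "jose"), (1.05, "jose"), 0.1)
assert not approx_row_equal((1.1, "li"), (1.3, "li"), 0.1)
assert approx_row_equal((None, None), (None, None), 0.1)
```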
# File: conf/development/urls.py (repo: cybersturmer/pmdragon-core-api, license: MIT)
from conf.common.urls import *
# File: kafka-consumer/consumer/adapter/__init__.py (repo: shiv12095/realtimeviz, license: MIT)
from .kafka_adapter import KafkaAdapter
from .db_adapter import DBAdapter
# File: tests/test_pythonpath.py (repo: janezlapajne/python-project-template, license: MIT)
import os
def test_PYTHONPATH():
    # "XX" should be replaced with the path of the project root directory
print(os.environ.get('PYTHONPATH') == "XX")
assert(os.environ.get('PYTHONPATH') == "XX")
# File: src/report_generators/base_report_generator.py (repo: alphagov-mirror/govuk-accessibility-reports, license: MIT)
from abc import ABC, abstractmethod
class BaseReportGenerator(ABC):
@property
@abstractmethod
def filename(self):
return ''
@property
@abstractmethod
def headers(self):
return []
@abstractmethod
def process_page(self, content_item, html):
pass
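A concrete report generator fills in the three abstract members; a minimal stdlib sketch (the `SampleReport` class and its column names are illustrative, not part of the repo):

```python
from abc import ABC, abstractmethod

class BaseReportGenerator(ABC):
    @property
    @abstractmethod
    def filename(self):
        return ''

    @property
    @abstractmethod
    def headers(self):
        return []

    @abstractmethod
    def process_page(self, content_item, html):
        pass

class SampleReport(BaseReportGenerator):
    @property
    def filename(self):
        return "sample_report.csv"

    @property
    def headers(self):
        return ["base_path", "title"]

    def process_page(self, content_item, html):
        # Emit one row per page; a real report would inspect the HTML here.
        return [content_item.get("base_path"), content_item.get("title")]

row = SampleReport().process_page({"base_path": "/vat-rates", "title": "VAT rates"}, "<html></html>")
```

Because all three abstract members are overridden, `SampleReport` is instantiable, whereas instantiating `BaseReportGenerator` directly raises `TypeError`.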
# File: test/unit/space/test_segments.py (repo: pescap/bempp-cl, license: MIT)
"""Unit tests for Space objects."""
# pylint: disable=redefined-outer-name
# pylint: disable=C0103
import numpy as _np
import pytest
@pytest.mark.parametrize(
"space_info",
[
("DP", 0),
("DP", 1),
("P", 1),
("DUAL", 0),
("DUAL", 1),
("RWG", 0),
("SNC", 0),
("BC", 0),
("RBC", 0),
],
)
def test_segment_space(space_info, helpers, precision):
"""Test that a space on a face of a cube has fewer DOFs."""
import bempp.api
import math
grid = bempp.api.shapes.cube(h=0.4)
space0 = bempp.api.function_space(grid, space_info[0], space_info[1])
fun0 = bempp.api.GridFunction(
space0, coefficients=_np.ones(space0.global_dof_count)
)
space1 = bempp.api.function_space(
grid,
space_info[0],
space_info[1],
segments=[1],
include_boundary_dofs=False,
)
fun1 = bempp.api.GridFunction(
space1, coefficients=_np.ones(space1.global_dof_count)
)
assert space0.global_dof_count > space1.global_dof_count
assert fun0.l2_norm() > fun1.l2_norm()
space2 = bempp.api.function_space(
grid, space_info[0], space_info[1], segments=[1], truncate_at_segment_edge=True
)
fun2 = bempp.api.GridFunction(
space2, coefficients=_np.ones(space2.global_dof_count)
)
if space2.is_barycentric:
c = 6
else:
c = 1
evals = fun2.evaluate_on_element_centers()
for n, i in enumerate(grid.domain_indices):
if i != 1:
for cell in range(c * n, c * (n + 1)):
assert math.isclose(
_np.linalg.norm(evals[:, cell]),
0,
rel_tol=helpers.default_tolerance(precision),
)
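One subtlety in the assertion above: `math.isclose(x, 0, rel_tol=...)` is effectively an exact test, because the relative tolerance scales with `max(|a|, |b|)`, which is `|x|` itself when the reference value is zero; only `abs_tol` relaxes a comparison against zero:

```python
import math

# Relative tolerance alone can never accept a nonzero value against 0.
assert not math.isclose(1e-300, 0.0, rel_tol=0.1)
# An absolute tolerance is what makes "close to zero" meaningful.
assert math.isclose(1e-12, 0.0, abs_tol=1e-9)
```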
@pytest.mark.parametrize(
"space_info", [("P", 1), ("DUAL", 0), ("RWG", 0), ("SNC", 0), ("BC", 0), ("RBC", 0)]
)
def test_segments_space_with_boundary_dofs(space_info):
"""Test that space with boundary DOFs have more DOFs if these are included."""
import bempp.api
grid = bempp.api.shapes.cube(h=0.4)
space0 = bempp.api.function_space(
grid,
space_info[0],
space_info[1],
segments=[1],
include_boundary_dofs=False,
truncate_at_segment_edge=False,
)
fun0 = bempp.api.GridFunction(
space0, coefficients=_np.ones(space0.global_dof_count)
)
space1 = bempp.api.function_space(
grid,
space_info[0],
space_info[1],
segments=[1],
include_boundary_dofs=True,
truncate_at_segment_edge=False,
)
fun1 = bempp.api.GridFunction(
space1, coefficients=_np.ones(space1.global_dof_count)
)
assert space0.global_dof_count < space1.global_dof_count
assert fun0.l2_norm() < fun1.l2_norm()
@pytest.mark.parametrize("space_info", [("DP", 0), ("DP", 1), ("DUAL", 1)])
def test_segments_space_without_boundary_dofs(space_info, helpers, precision):
"""Test that including boundary DOFs has no effect on these spaces."""
import bempp.api
import math
grid = bempp.api.shapes.cube(h=0.4)
space0 = bempp.api.function_space(
grid,
space_info[0],
space_info[1],
segments=[1],
include_boundary_dofs=False,
truncate_at_segment_edge=False,
)
fun0 = bempp.api.GridFunction(
space0, coefficients=_np.ones(space0.global_dof_count)
)
space1 = bempp.api.function_space(
grid,
space_info[0],
space_info[1],
segments=[1],
include_boundary_dofs=True,
truncate_at_segment_edge=False,
)
fun1 = bempp.api.GridFunction(
space1, coefficients=_np.ones(space1.global_dof_count)
)
space2 = bempp.api.function_space(
grid,
space_info[0],
space_info[1],
segments=[1],
include_boundary_dofs=True,
truncate_at_segment_edge=False,
)
fun2 = bempp.api.GridFunction(
space2, coefficients=_np.ones(space2.global_dof_count)
)
assert space0.global_dof_count == space1.global_dof_count == space2.global_dof_count
assert math.isclose(
fun0.l2_norm(), fun1.l2_norm(), rel_tol=helpers.default_tolerance(precision)
)
assert math.isclose(
fun0.l2_norm(), fun2.l2_norm(), rel_tol=helpers.default_tolerance(precision)
)
@pytest.mark.parametrize(
    "space_info",
    [("P", 1), ("DUAL", 0), ("DUAL", 1), ("RWG", 0), ("SNC", 0), ("BC", 0), ("RBC", 0)],
)
def test_truncating(space_info):
    """Test that truncating these spaces at the boundary is correct."""
    import bempp.api

    grid = bempp.api.shapes.cube(h=0.4)
    space0 = bempp.api.function_space(
        grid,
        space_info[0],
        space_info[1],
        segments=[1],
        include_boundary_dofs=True,
        truncate_at_segment_edge=False,
    )
    fun0 = bempp.api.GridFunction(
        space0, coefficients=_np.ones(space0.global_dof_count)
    )
    space1 = bempp.api.function_space(
        grid,
        space_info[0],
        space_info[1],
        segments=[1],
        include_boundary_dofs=True,
        truncate_at_segment_edge=True,
    )
    fun1 = bempp.api.GridFunction(
        space1, coefficients=_np.ones(space1.global_dof_count)
    )
    assert space0.global_dof_count == space1.global_dof_count
    assert fun1.l2_norm() < fun0.l2_norm()
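The final assertion encodes the intuition that truncating basis functions at the segment edge shrinks their support: the DOF count stays the same, but a coefficient-one function integrates over less area, so its L2 norm drops. A bempp-free sketch of that effect on a discretized 1D function (the mesh and values are made up for illustration):

```python
import math

# Piecewise-constant values on a 1D mesh of equal cells; truncation at the
# segment edge zeroes the part of the support outside the segment.
values = [1.0] * 10                 # untruncated: support over 10 cells
truncated = [1.0] * 7 + [0.0] * 3   # truncated: last 3 cells cut off


def l2_norm(vals, cell_size=0.1):
    """Discrete L2 norm of piecewise-constant values."""
    return math.sqrt(sum(v * v * cell_size for v in vals))


print(l2_norm(truncated) < l2_norm(values))  # True
```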
@pytest.mark.parametrize("space_info", [("DUAL", 0)])
def test_truncating_node_dual_spaces(space_info, helpers, precision):
"""Test spaces on segments."""
import bempp.api
import math
grid = bempp.api.shapes.cube(h=0.4)
space0 = bempp.api.function_space(
grid,
space_info[0],
space_info[1],
segments=[1],
include_boundary_dofs=False,
truncate_at_segment_edge=True,
)
fun0 = bempp.api.GridFunction(
space0, coefficients=_np.ones(space0.global_dof_count)
)
space1 = bempp.api.function_space(
grid,
space_info[0],
space_info[1],
segments=[1],
include_boundary_dofs=False,
truncate_at_segment_edge=False,
)
fun1 = bempp.api.GridFunction(
space1, coefficients=_np.ones(space1.global_dof_count)
)
assert space0.global_dof_count == space1.global_dof_count
assert math.isclose(
fun0.l2_norm(), fun1.l2_norm(), rel_tol=helpers.default_tolerance(precision)
)
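The norm comparison above uses `math.isclose` with a precision-dependent relative tolerance rather than exact equality, since the two assemblies only agree up to floating-point error. How `rel_tol` behaves (the tolerance value here is a stand-in for `helpers.default_tolerance(precision)`):

```python
import math

# rel_tol bounds the difference relative to the larger magnitude:
# |a - b| <= rel_tol * max(|a|, |b|)
tol = 1e-5  # stand-in for helpers.default_tolerance(precision)

print(math.isclose(1.000001, 1.0, rel_tol=tol))  # True:  diff 1e-6 within tol
print(math.isclose(1.0001, 1.0, rel_tol=tol))    # False: diff 1e-4 exceeds tol
```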
@pytest.mark.parametrize("space_info", [("BC", 0), ("RBC", 0), ("DUAL", 1)])
def test_truncating_edge_and_face_dual_spaces(space_info):
"""Test spaces on segments."""
import bempp.api
grid = bempp.api.shapes.cube(h=0.4)
space0 = bempp.api.function_space(
grid,
space_info[0],
space_info[1],
segments=[1],
include_boundary_dofs=False,
truncate_at_segment_edge=True,
)
fun0 = bempp.api.GridFunction(
space0, coefficients=_np.ones(space0.global_dof_count)
)
space1 = bempp.api.function_space(
grid,
space_info[0],
space_info[1],
segments=[1],
include_boundary_dofs=False,
truncate_at_segment_edge=False,
)
fun1 = bempp.api.GridFunction(
space1, coefficients=_np.ones(space1.global_dof_count)
)
assert space0.global_dof_count == space1.global_dof_count
assert fun0.l2_norm() < fun1.l2_norm()

# File: UnitTests/test_MemberSetLoad_test.py
# Repo: DavidNaizheZhou/RFEM_Python_Client (MIT license)

import sys
import os
PROJECT_ROOT = os.path.abspath(
    os.path.join(os.path.dirname(__file__), os.pardir)
)
sys.path.append(PROJECT_ROOT)
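The lines above put the repository root on `sys.path` so the `RFEM` package resolves when this test runs from inside `UnitTests/`. The same parent-directory computation, shown on a hypothetical POSIX path:

```python
import os

# Hypothetical location of the test file's directory.
test_dir = "/home/user/RFEM_Python_Client/UnitTests"

# os.pardir ("..") steps up one level; abspath normalizes the result.
project_root = os.path.abspath(os.path.join(test_dir, os.pardir))
print(project_root)  # /home/user/RFEM_Python_Client
```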
from RFEM.Loads.membersetload import MemberSetLoad
from RFEM.LoadCasesAndCombinations.loadCase import LoadCase
from RFEM.LoadCasesAndCombinations.staticAnalysisSettings import StaticAnalysisSettings
from RFEM.TypesForNodes.nodalSupport import NodalSupport
from RFEM.BasicObjects.memberSet import MemberSet
from RFEM.BasicObjects.member import Member
from RFEM.BasicObjects.node import Node
from RFEM.BasicObjects.section import Section
from RFEM.BasicObjects.material import Material
from RFEM.initModel import Model, Calculate_all
from RFEM.enums import *
if Model.clientModel is None:
    Model()
def test_member_set_load():

    Model.clientModel.service.delete_all()
    Model.clientModel.service.begin_modification()

    # Create Material
    Material(1, 'S235')

    # Create Sections
    Section(1, 'IPE 300')
    Section(2, 'CHS 100x4')

    # Create Nodes
    Node(1, 0.0, 0.0, 0.0)
    Node(2, 2, 0.0, 0.0)
    Node(3, 4, 0, 0)
    Node(4, 0, 5, 0)
    Node(5, 2, 5, 0)
    Node(6, 4, 5, 0)

    # Create Members
    Member(1, 1, 2, 0, 1)
    Member(2, 2, 3, 0, 1)
    Member(3, 4, 6, 0, 2)
    Member(4, 6, 5, 0, 2)

    # Create Member Sets
    MemberSet(1, '1 2', SetType.SET_TYPE_CONTINUOUS)
    MemberSet(2, '3 4', SetType.SET_TYPE_CONTINUOUS)

    # Create Nodal Supports
    NodalSupport(1, '1 3 4 6', NodalSupportType.FIXED)

    # Create Static Analysis Settings
    StaticAnalysisSettings(1, '1. Order', StaticAnalysisType.GEOMETRICALLY_LINEAR)

    # Create Load Case
    LoadCase(1, 'DEAD', [True, 0.0, 0.0, 1.0])

    ## Initial Member Set Load ##
    MemberSetLoad(1, 1, '1', LoadDirectionType.LOAD_DIRECTION_LOCAL_Z, 5000)

    ## Force Type Member Set Load with LOAD_DISTRIBUTION_UNIFORM ##
    MemberSetLoad.Force(0, 2, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_UNIFORM, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[5000])

    ## Force Type Member Set Load with LOAD_DISTRIBUTION_UNIFORM with Eccentricity ##
    MemberSetLoad.Force(0, 3, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_UNIFORM, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[5000], force_eccentricity=True, params={'eccentricity_y_at_start': 0.01, 'eccentricity_z_at_start': 0.02})

    ## Force Type Member Set Load with LOAD_DISTRIBUTION_UNIFORM_TOTAL ##
    MemberSetLoad.Force(0, 4, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_UNIFORM_TOTAL, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[5000])

    ## Force Type Member Set Load with LOAD_DISTRIBUTION_CONCENTRATED_1 ##
    MemberSetLoad.Force(0, 5, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_CONCENTRATED_1, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[False, 5000, 1.2])

    ## Force Type Member Set Load with LOAD_DISTRIBUTION_CONCENTRATED_N ##
    MemberSetLoad.Force(0, 6, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_CONCENTRATED_N, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[False, False, 5000, 2, 1, 2])

    ## Force Type Member Set Load with LOAD_DISTRIBUTION_CONCENTRATED_2x2 ##
    MemberSetLoad.Force(0, 7, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_CONCENTRATED_2x2, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[False, False, False, 5000, 1, 2, 3])

    ## Force Type Member Set Load with LOAD_DISTRIBUTION_CONCENTRATED_2 ##
    MemberSetLoad.Force(0, 8, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_CONCENTRATED_2, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[False, False, 5000, 6000, 1, 2])

    ## Force Type Member Set Load with LOAD_DISTRIBUTION_CONCENTRATED_VARYING ##
    MemberSetLoad.Force(0, 9, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_CONCENTRATED_VARYING, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[[1, 1, 4000], [2, 1, 5000]])

    ## Force Type Member Set Load with LOAD_DISTRIBUTION_TRAPEZOIDAL ##
    MemberSetLoad.Force(0, 10, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_TRAPEZOIDAL, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[False, False, 4000, 8000, 1, 2])

    ## Force Type Member Set Load with LOAD_DISTRIBUTION_TAPERED ##
    MemberSetLoad.Force(0, 11, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_TAPERED, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[False, False, 4000, 8000, 1, 2])

    ## Force Type Member Set Load with LOAD_DISTRIBUTION_PARABOLIC ##
    MemberSetLoad.Force(0, 12, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_PARABOLIC, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[4000, 8000, 12000])

    ## Force Type Member Set Load with LOAD_DISTRIBUTION_VARYING ##
    MemberSetLoad.Force(0, 13, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_VARYING, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[[1, 1, 4000], [2, 1, 5000]])

    ## Force Type Member Set Load with LOAD_DISTRIBUTION_VARYING_IN_Z ##
    MemberSetLoad.Force(0, 14, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_VARYING_IN_Z, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[[1, 1, 4000], [2, 1, 5000]])

    ## Moment Type Member Set Load with LOAD_DISTRIBUTION_UNIFORM ##
    MemberSetLoad.Moment(0, 15, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_UNIFORM, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[5000])

    ## Moment Type Member Set Load with LOAD_DISTRIBUTION_CONCENTRATED_1 ##
    MemberSetLoad.Moment(0, 16, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_CONCENTRATED_1, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[False, 5000, 1.2])

    ## Moment Type Member Set Load with LOAD_DISTRIBUTION_CONCENTRATED_N ##
    MemberSetLoad.Moment(0, 17, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_CONCENTRATED_N, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[False, False, 5000, 2, 1, 2])

    ## Moment Type Member Set Load with LOAD_DISTRIBUTION_CONCENTRATED_2x2 ##
    MemberSetLoad.Moment(0, 18, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_CONCENTRATED_2x2, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[False, False, False, 5000, 1, 2, 3])

    ## Moment Type Member Set Load with LOAD_DISTRIBUTION_CONCENTRATED_2 ##
    MemberSetLoad.Moment(0, 19, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_CONCENTRATED_2, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[False, False, 5000, 6000, 1, 2])

    ## Moment Type Member Set Load with LOAD_DISTRIBUTION_CONCENTRATED_VARYING ##
    MemberSetLoad.Moment(0, 20, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_CONCENTRATED_VARYING, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[[1, 1, 4000], [2, 1, 5000]])

    ## Moment Type Member Set Load with LOAD_DISTRIBUTION_TRAPEZOIDAL ##
    MemberSetLoad.Moment(0, 21, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_TRAPEZOIDAL, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[False, False, 4000, 8000, 1, 2])

    ## Moment Type Member Set Load with LOAD_DISTRIBUTION_TAPERED ##
    MemberSetLoad.Moment(0, 22, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_TAPERED, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[False, False, 4000, 8000, 1, 2])

    ## Moment Type Member Set Load with LOAD_DISTRIBUTION_PARABOLIC ##
    MemberSetLoad.Moment(0, 23, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_PARABOLIC, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[4000, 8000, 12000])

    ## Moment Type Member Set Load with LOAD_DISTRIBUTION_VARYING ##
    MemberSetLoad.Moment(0, 24, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_VARYING, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[[1, 1, 4000], [2, 1, 5000]])

    ## Mass Type Member Set Load ##
    MemberSetLoad.Mass(0, 25, 1, mass_components=[1000])

    ## Temperature Type Member Set Load with LOAD_DISTRIBUTION_UNIFORM ##
    MemberSetLoad.Temperature(0, 26, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_UNIFORM, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[18, 2])

    ## Temperature Type Member Set Load with LOAD_DISTRIBUTION_TRAPEZOIDAL ##
    MemberSetLoad.Temperature(0, 27, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_TRAPEZOIDAL, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[12, 16, 18, 20, False, False, 1, 2])

    ## Temperature Type Member Set Load with LOAD_DISTRIBUTION_TAPERED ##
    MemberSetLoad.Temperature(0, 28, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_TAPERED, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[12, 16, 18, 20, False, False, 1, 2])

    ## Temperature Type Member Set Load with LOAD_DISTRIBUTION_PARABOLIC ##
    MemberSetLoad.Temperature(0, 29, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_PARABOLIC, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[1, 2, 3, 4, 5, 6])

    ## Temperature Type Member Set Load with LOAD_DISTRIBUTION_VARYING ##
    MemberSetLoad.Temperature(0, 30, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_VARYING, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[[1, 1, 285, 289], [2, 1, 293, 297]])

    ## TemperatureChange Type Member Set Load with LOAD_DISTRIBUTION_UNIFORM ##
    MemberSetLoad.TemperatureChange(0, 31, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_UNIFORM, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[18, 2])

    ## TemperatureChange Type Member Set Load with LOAD_DISTRIBUTION_TRAPEZOIDAL ##
    MemberSetLoad.TemperatureChange(0, 32, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_TRAPEZOIDAL, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[12, 16, 18, 20, False, False, 1, 2])

    ## TemperatureChange Type Member Set Load with LOAD_DISTRIBUTION_TAPERED ##
    MemberSetLoad.TemperatureChange(0, 33, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_TAPERED, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[12, 16, 18, 20, False, False, 1, 2])

    ## TemperatureChange Type Member Set Load with LOAD_DISTRIBUTION_PARABOLIC ##
    MemberSetLoad.TemperatureChange(0, 34, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_PARABOLIC, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[1, 2, 3, 4, 5, 6])

    ## TemperatureChange Type Member Set Load with LOAD_DISTRIBUTION_VARYING ##
    MemberSetLoad.TemperatureChange(0, 35, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_VARYING, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[[1, 1, 285, 289], [2, 1, 293, 297]])

    ## AxialStrain Type Member Set Load with LOAD_DISTRIBUTION_UNIFORM ##
    MemberSetLoad.AxialStrain(0, 36, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_UNIFORM, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_X, load_parameter=[0.005])

    ## AxialStrain Type Member Set Load with LOAD_DISTRIBUTION_TRAPEZOIDAL ##
    MemberSetLoad.AxialStrain(0, 37, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_TRAPEZOIDAL, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_X, load_parameter=[12, 16, False, False, 1, 2])

    ## AxialStrain Type Member Set Load with LOAD_DISTRIBUTION_TAPERED ##
    MemberSetLoad.AxialStrain(0, 38, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_TAPERED, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_X, load_parameter=[12, 16, False, False, 1, 2])

    ## AxialStrain Type Member Set Load with LOAD_DISTRIBUTION_PARABOLIC ##
    MemberSetLoad.AxialStrain(0, 39, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_PARABOLIC, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_X, load_parameter=[1, 2, 3])

    ## AxialStrain Type Member Set Load with LOAD_DISTRIBUTION_VARYING ##
    MemberSetLoad.AxialStrain(0, 40, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_VARYING, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_X, load_parameter=[[1, 1, 285, 289], [2, 1, 293, 297]])

    ## AxialDisplacement Type Member Set Load ##
    MemberSetLoad.AxialDisplacement(0, 41, 1, '1', MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_X, 0.05)

    ## Precamber Type Member Set Load with LOAD_DISTRIBUTION_UNIFORM ##
    MemberSetLoad.Precamber(0, 42, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_UNIFORM, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[0.005])

    ## Precamber Type Member Set Load with LOAD_DISTRIBUTION_TRAPEZOIDAL ##
    MemberSetLoad.Precamber(0, 43, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_TRAPEZOIDAL, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[12, 16, False, False, 1, 2])

    ## Precamber Type Member Set Load with LOAD_DISTRIBUTION_TAPERED ##
    MemberSetLoad.Precamber(0, 44, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_TAPERED, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[12, 16, False, False, 1, 2])

    ## Precamber Type Member Set Load with LOAD_DISTRIBUTION_PARABOLIC ##
    MemberSetLoad.Precamber(0, 45, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_PARABOLIC, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[1, 2, 3])

    ## Precamber Type Member Set Load with LOAD_DISTRIBUTION_VARYING ##
    MemberSetLoad.Precamber(0, 46, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_VARYING, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[[1, 1, 285], [2, 1, 293]])

    ## InitialPrestress Type Member Set Load ##
    MemberSetLoad.InitialPrestress(0, 47, 1, '1', MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_X, 50)

    ## Displacement Type Member Set Load with LOAD_DISTRIBUTION_UNIFORM ##
    MemberSetLoad.Displacement(0, 48, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_UNIFORM, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, [0.5])

    ## Displacement Type Member Set Load with LOAD_DISTRIBUTION_CONCENTRATED_1 ##
    MemberSetLoad.Displacement(0, 49, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_CONCENTRATED_1, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, [0.5, False, 1])

    ## Displacement Type Member Set Load with LOAD_DISTRIBUTION_CONCENTRATED_N ##
    MemberSetLoad.Displacement(0, 50, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_CONCENTRATED_N, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, [0.5, False, False, 1, 2])

    ## Displacement Type Member Set Load with LOAD_DISTRIBUTION_CONCENTRATED_2x2 ##
    MemberSetLoad.Displacement(0, 51, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_CONCENTRATED_2x2, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, [0.5, False, False, False, 1, 2, 3])

    ## Displacement Type Member Set Load with LOAD_DISTRIBUTION_CONCENTRATED_2 ##
    MemberSetLoad.Displacement(0, 52, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_CONCENTRATED_2, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, [0.5, 0.6, False, False, 1, 2])

    ## Displacement Type Member Set Load with LOAD_DISTRIBUTION_CONCENTRATED_VARYING ##
    MemberSetLoad.Displacement(0, 53, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_CONCENTRATED_VARYING, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, [[0.001, 1, 1], [0.002, 2, 1]])

    ## Displacement Type Member Set Load with LOAD_DISTRIBUTION_TRAPEZOIDAL ##
    MemberSetLoad.Displacement(0, 54, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_TRAPEZOIDAL, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[12, 16, False, False, 1, 2])

    ## Displacement Type Member Set Load with LOAD_DISTRIBUTION_TAPERED ##
    MemberSetLoad.Displacement(0, 55, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_TAPERED, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[12, 16, False, False, 1, 2])

    ## Displacement Type Member Set Load with LOAD_DISTRIBUTION_PARABOLIC ##
    MemberSetLoad.Displacement(0, 56, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_PARABOLIC, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[1, 2, 3])

    ## Displacement Type Member Set Load with LOAD_DISTRIBUTION_VARYING ##
    MemberSetLoad.Displacement(0, 57, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_VARYING, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[[1, 1, 285], [2, 1, 293]])

    ## Rotation Type Member Set Load with LOAD_DISTRIBUTION_UNIFORM ##
    MemberSetLoad.Rotation(0, 58, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_UNIFORM, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, [0.5])

    ## Rotation Type Member Set Load with LOAD_DISTRIBUTION_CONCENTRATED_1 ##
    MemberSetLoad.Rotation(0, 59, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_CONCENTRATED_1, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, [0.5, False, 1])

    ## Rotation Type Member Set Load with LOAD_DISTRIBUTION_CONCENTRATED_N ##
    MemberSetLoad.Rotation(0, 60, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_CONCENTRATED_N, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, [0.5, False, False, 1, 2])

    ## Rotation Type Member Set Load with LOAD_DISTRIBUTION_CONCENTRATED_2x2 ##
    MemberSetLoad.Rotation(0, 61, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_CONCENTRATED_2x2, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, [0.5, False, False, False, 1, 2, 3])

    ## Rotation Type Member Set Load with LOAD_DISTRIBUTION_CONCENTRATED_2 ##
    MemberSetLoad.Rotation(0, 62, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_CONCENTRATED_2, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, [0.5, 0.6, False, False, 1, 2])

    ## Rotation Type Member Set Load with LOAD_DISTRIBUTION_CONCENTRATED_VARYING ##
    MemberSetLoad.Rotation(0, 63, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_CONCENTRATED_VARYING, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, [[1, 1, 285], [2, 1, 293]])

    ## Rotation Type Member Set Load with LOAD_DISTRIBUTION_TRAPEZOIDAL ##
    MemberSetLoad.Rotation(0, 64, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_TRAPEZOIDAL, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[12, 16, False, False, 1, 2])

    ## Rotation Type Member Set Load with LOAD_DISTRIBUTION_TAPERED ##
    MemberSetLoad.Rotation(0, 65, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_TAPERED, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[12, 16, False, False, 1, 2])

    ## Rotation Type Member Set Load with LOAD_DISTRIBUTION_PARABOLIC ##
    MemberSetLoad.Rotation(0, 66, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_PARABOLIC, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[1, 2, 3])

    ## Rotation Type Member Set Load with LOAD_DISTRIBUTION_VARYING ##
    MemberSetLoad.Rotation(0, 67, 1, '1', MemberSetLoadDistribution.LOAD_DISTRIBUTION_VARYING, MemberSetLoadDirection.LOAD_DIRECTION_LOCAL_Z, load_parameter=[[1, 1, 285], [2, 1, 293]])

    ## Pipe Content Full Type Member Set Load ##
    MemberSetLoad.PipeContentFull(0, 68, 1, '2', specific_weight=5000)

    ## Pipe Content Partial Type Member Set Load ##
    MemberSetLoad.PipeContentPartial(0, 69, 1, '2', specific_weight=2000, filling_height=0.1)

    ## Pipe Internal Pressure Type Member Set Load ##
    MemberSetLoad.PipeInternalPressure(0, 70, 1, '2', 2000)

    ## Rotary Motion Type Member Set Load ##
    MemberSetLoad.RotaryMotion(0, 71, 1, '2', 3.5, 5,
                               MemberSetLoadAxisDefinitionType.AXIS_DEFINITION_TWO_POINTS,
                               MemberLoadAxisDefinitionAxisOrientation.AXIS_NEGATIVE,
                               MemberSetLoadAxisDefinition.AXIS_Y, [10, 11, 12], [0, 5, 6])

    # Calculate_all()  # Don't use in unit tests. See template for more info.

    Model.clientModel.service.finish_modification()
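The `load_parameter` lists above follow a per-distribution length convention: a single magnitude for `LOAD_DISTRIBUTION_UNIFORM`, a relative-distance flag plus magnitude and position for `LOAD_DISTRIBUTION_CONCENTRATED_1`, and so on. A hypothetical length check mirroring the force calls in this test — the table is read off from this file, not from the RFEM client API:

```python
# Expected load_parameter lengths for a few force distributions, read off
# from the test calls above (illustrative table, not part of RFEM).
EXPECTED_LENGTHS = {
    "LOAD_DISTRIBUTION_UNIFORM": 1,
    "LOAD_DISTRIBUTION_CONCENTRATED_1": 3,
    "LOAD_DISTRIBUTION_CONCENTRATED_N": 6,
    "LOAD_DISTRIBUTION_TRAPEZOIDAL": 6,
}


def check_load_parameter(distribution, load_parameter):
    """Raise ValueError when load_parameter has the wrong length."""
    expected = EXPECTED_LENGTHS.get(distribution)
    if expected is not None and len(load_parameter) != expected:
        raise ValueError(
            f"{distribution} expects {expected} entries, "
            f"got {len(load_parameter)}"
        )


check_load_parameter("LOAD_DISTRIBUTION_UNIFORM", [5000])                     # ok
check_load_parameter("LOAD_DISTRIBUTION_CONCENTRATED_1", [False, 5000, 1.2])  # ok
```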

# File: alexber/rpsgame/__init__.py
# Repo: alex-ber/RocketPaperScissorsGame (BSD-2-Clause license)

from alexber.rpsgame.app import conf

# File: test_cmd_maya_liquid_add.py
# Repo: droposhado/err-maya-plugin (MIT license)
pytest_plugins = ["errbot.backends.test"]
extra_plugin_dir = '.'
def test_command_coffee_add_missing_args(testbot):
testbot.push_message('!maya liquid add coffee')
assert "Please use '/maya liquid add <type> <quantity>'" == testbot.pop_message()
def test_command_water_add_missing_args(testbot):
testbot.push_message('!maya liquid add water')
assert "Please use '/maya liquid add <type> <quantity>'" == testbot.pop_message()
def test_command_coffee_add_invalid_amount(testbot):
testbot.push_message('!maya liquid add coffee xx')
assert "Please enter a valid quantity" == testbot.pop_message()
def test_command_water_add_invalid_amount(testbot):
testbot.push_message('!maya liquid add water xx')
assert "Please enter a valid quantity" == testbot.pop_message()
def test_command_notexistliquid_add_not_supported_type(testbot):
testbot.push_message('!maya liquid add notexistliquid 250')
assert "Not supported type" == testbot.pop_message()
def test_command_water_add_ok(testbot):
quantity = 250
testbot.push_message(f"!maya liquid add water {quantity}")
poped = testbot.pop_message()
assert str(quantity) in poped
assert "water" in poped
assert "was drunk" in poped
def test_command_coffee_add_ok(testbot):
quantity = 250
testbot.push_message(f"!maya liquid add coffee {quantity}")
poped = testbot.pop_message()
assert str(quantity) in poped
assert "coffee" in poped
assert "was drunk" in poped
def test_command_water_add_with_datetime_ok(testbot):
quantity = 250
testbot.push_message(f"!maya liquid add water {quantity} 2022-05-25T15:21:56Z")
poped = testbot.pop_message()
assert str(quantity) in poped
assert "water" in poped
assert "was drunk" in poped
def test_command_coffee_add_with_datetime_ok(testbot):
quantity = 250
testbot.push_message(f"!maya liquid add coffee {quantity} 2022-05-25T15:21:56Z")
poped = testbot.pop_message()
assert str(quantity) in poped
assert "coffee" in poped
assert "was drunk" in poped

# File: util/additional_ct_process.py
# Repo: qingshan412/pytorch-CycleGAN-and-pix2pix (BSD-3-Clause license)

import os
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.patches as patches
edge_color = 'r'
line_width = 2
def image_119_rec(image_numpy, image_path, orig_mean):
    # Match the image mean to the reference mean by rescaling (multiplicative).
    image_npy = image_numpy / (image_numpy.mean() / orig_mean)
    print(image_npy.mean())
    plt.clf()
    plt.imshow(np.squeeze(image_npy), cmap=plt.cm.bone)
    currentAxisA = plt.gca()
    # rectA0 = patches.Rectangle((15, 50), 20, 25, linewidth=1, edgecolor='r', facecolor='none')
    rectA1 = patches.Rectangle((25, 112), 18, 20, linewidth=line_width, edgecolor=edge_color, facecolor='none')
    rectA2 = patches.Rectangle((100, 8), 25, 25, linewidth=line_width, edgecolor=edge_color, facecolor='none')
    # currentAxisA.add_patch(rectA0)
    currentAxisA.add_patch(rectA1)
    currentAxisA.add_patch(rectA2)
    # plt.axis('off')
    currentAxisA.axes.get_xaxis().set_visible(False)
    currentAxisA.axes.get_yaxis().set_visible(False)
    currentAxisA.spines['left'].set_color('none')
    currentAxisA.spines['bottom'].set_color('none')
    # plt.show()
    plt.savefig(image_path, bbox_inches='tight', pad_inches=0.0)
    mean_str = [str(round(np.mean(image_npy[112:112+20, 25:25+18]), 2)), str(round(np.mean(image_npy[8:8+25, 100:100+25]), 2))]
    std_str = [str(round(np.std(image_npy[112:112+20, 25:25+18]), 2)), str(round(np.std(image_npy[8:8+25, 100:100+25]), 2))]
    return mean_str, std_str
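Each rectangle corresponds to a NumPy slice `image[y:y+h, x:x+w]` — note the (row, column) slice order versus the patch's `(x, y)` anchor. Computing the same ROI statistics on a small synthetic image:

```python
import numpy as np

# Synthetic "CT slice" with a constant patch in the rectA1 region above.
image = np.zeros((200, 200))
image[112:112 + 20, 25:25 + 18] = 3.0

roi = image[112:112 + 20, 25:25 + 18]  # rows = y range, columns = x range
print(round(roi.mean(), 2), round(roi.std(), 2))  # 3.0 0.0
```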
def image_201_rec(image_numpy, image_path, orig_mean):
    # Match the image mean to the reference mean by shifting (additive).
    image_npy = image_numpy - (image_numpy.mean() - orig_mean)
    print(image_npy.mean())
    plt.clf()
    plt.imshow(np.squeeze(image_npy), cmap=plt.cm.bone)
    currentAxisA = plt.gca()
    rectA1 = patches.Rectangle((25, 12), 30, 25, linewidth=line_width, edgecolor=edge_color, facecolor='none')
    # rectA2 = patches.Rectangle((138, 173), 20, 15, linewidth=1, edgecolor='r', facecolor='none')
    currentAxisA.add_patch(rectA1)
    # currentAxisA.add_patch(rectA2)
    # plt.axis('off')
    currentAxisA.axes.get_xaxis().set_visible(False)
    currentAxisA.axes.get_yaxis().set_visible(False)
    currentAxisA.spines['left'].set_color('none')
    currentAxisA.spines['bottom'].set_color('none')
    # plt.show()
    plt.savefig(image_path, bbox_inches='tight', pad_inches=0.0)
    mean_str = [str(round(np.mean(image_npy[12:12+25, 25:25+30]), 2)), str(round(np.mean(image_npy[173:173+15, 138:138+20]), 2))]
    std_str = [str(round(np.std(image_npy[12:12+25, 25:25+30]), 2)), str(round(np.std(image_npy[173:173+15, 138:138+20]), 2))]
    return mean_str, std_str
def image_506_rec(image_numpy, image_path, orig_mean):
    # Match the image mean to the reference mean by shifting (additive).
    image_npy = image_numpy - (image_numpy.mean() - orig_mean)
    print(image_npy.mean())
    plt.clf()
    plt.imshow(np.squeeze(image_npy), cmap=plt.cm.bone)
    currentAxisA = plt.gca()
    rectA1 = patches.Rectangle((20, 142), 25, 20, linewidth=line_width, edgecolor=edge_color, facecolor='none')
    rectA2 = patches.Rectangle((40, 105), 25, 15, linewidth=line_width, edgecolor=edge_color, facecolor='none')
    currentAxisA.add_patch(rectA1)
    currentAxisA.add_patch(rectA2)
    # plt.axis('off')
    currentAxisA.axes.get_xaxis().set_visible(False)
    currentAxisA.axes.get_yaxis().set_visible(False)
    currentAxisA.spines['left'].set_color('none')
    currentAxisA.spines['bottom'].set_color('none')
    # plt.show()
    plt.savefig(image_path, bbox_inches='tight', pad_inches=0.0)
    mean_str = [str(round(np.mean(image_npy[142:142+20, 20:20+25]), 2)), str(round(np.mean(image_npy[105:105+15, 40:40+25]), 2))]
    std_str = [str(round(np.std(image_npy[142:142+20, 20:20+25]), 2)), str(round(np.std(image_npy[105:105+15, 40:40+25]), 2))]
    return mean_str, std_str
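Note the two normalization styles: `image_119_rec` divides by the ratio of means (multiplicative), while `image_201_rec` and `image_506_rec` subtract the mean difference (additive). Both map the image mean onto `orig_mean`, but the multiplicative form also rescales contrast. A quick check on random data (the array and target are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.uniform(0.2, 0.8, size=(64, 64))
target = 0.6

ratio_adjusted = img / (img.mean() / target)    # multiplicative, as in image_119_rec
offset_adjusted = img - (img.mean() - target)   # additive, as in image_201/506_rec

print(np.isclose(ratio_adjusted.mean(), target))   # True
print(np.isclose(offset_adjusted.mean(), target))  # True
# The multiplicative form rescales the standard deviation; the additive one does not.
print(np.isclose(ratio_adjusted.std(), offset_adjusted.std()))  # False
```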
# if cycle-gan
# image_199_names = ["199_fbp_atf_real_A", "199_fbp_atf_fake_B", "200_fbp_atf_real_A", "200_fbp_atf_fake_B"]
# image_201_names = ["201_fbp_atf_real_A", "201_fbp_atf_fake_B"]
# image_506_names = ["506_fbp_atf_real_A", "506_fbp_atf_fake_B"]
# if multi-step(long distance) or multi-cycle
image_199_names = ["199_fbp_atf_real_A", "199_fbp_atf_fake_B_A", "200_fbp_atf_real_A", "200_fbp_atf_fake_B_A"]
image_201_names = ["201_fbp_atf_real_A", "201_fbp_atf_fake_B_A"]
image_506_names = ["506_fbp_atf_real_A", "506_fbp_atf_fake_B_A"]
# if decoupled, use _fake_B
# image_199_names = ["199_fbp_atf_real_A_fake_B", "199_fbp_atf_fake_B_fake_B", "200_fbp_atf_real_A_fake_B", "200_fbp_atf_fake_B_fake_B"]
# image_201_names = ["201_fbp_atf_real_A_fake_B", "201_fbp_atf_fake_B_fake_B"]
# image_506_names = ["506_fbp_atf_real_A_fake_B", "506_fbp_atf_fake_B_fake_B"]
npy_dir = "."
image_dir = "./miccai"
experiment_name = "twnp200c_cyclegan4c_batch2"
# "twnp200c_cyclegan4c_batch2"
# "decouple_cb200_cyclegan4_iter50_batch2"
# "twnp200_cyclegan_iter50_batch2"
# "twnp200c_cyclegan4cl_batch2"
# Get the mean pixel value of each whole reference (real_A) CT image
mean_199 = 0.
mean_200 = 0.
mean_201 = 0.
mean_506 = 0.
for image in image_199_names:
    if ("real_A" in image) and ("199" in image):
        npy_path = os.path.join(npy_dir, experiment_name, "test_latest/images", image + ".npy")
        mean_199 = np.load(npy_path).mean()
for image in image_199_names:
    if ("real_A" in image) and ("200" in image):
        npy_path = os.path.join(npy_dir, experiment_name, "test_latest/images", image + ".npy")
        mean_200 = np.load(npy_path).mean()
for image in image_201_names:
    if ("real_A" in image) and ("201" in image):
        npy_path = os.path.join(npy_dir, experiment_name, "test_latest/images", image + ".npy")
        mean_201 = np.load(npy_path).mean()
for image in image_506_names:
    if ("real_A" in image) and ("506" in image):
        npy_path = os.path.join(npy_dir, experiment_name, "test_latest/images", image + ".npy")
        mean_506 = np.load(npy_path).mean()
print(mean_199)
print(mean_200)
print(mean_201)
print(mean_506)
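The four loops above perform the same lookup: find the `real_A` image for a slice id and take its global mean. A hedged sketch of one helper that condenses the pattern (the name `reference_mean` and the injectable `load` hook are my own, not part of the original script):

```python
import numpy as np

def reference_mean(image_names, slice_id, load=np.load):
    # Mean pixel value of the real_A image whose name contains slice_id;
    # returns 0.0 when no match exists (mirrors the mean_XXX = 0. defaults).
    for name in image_names:
        if ("real_A" in name) and (slice_id in name):
            return float(load(name).mean())
    return 0.0
```

In the script above, the `os.path.join(npy_dir, experiment_name, "test_latest/images", name + ".npy")` path construction would go inside the `load` callable.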
for image in image_199_names:
    print(image + ": ")
    npy_path = os.path.join(npy_dir, experiment_name, "test_latest/images", image + ".npy")
    image_path = os.path.join(image_dir, experiment_name, "images", image + ".png")
    if "199" in image:
        [mean_str, std_str] = image_119_rec(np.load(npy_path), image_path, mean_199)
        print("mean: " + ",".join(mean_str))
        print("std: " + ",".join(std_str))
    elif "200" in image:
        [mean_str, std_str] = image_119_rec(np.load(npy_path), image_path, mean_200)
        print("mean: " + ",".join(mean_str))
        print("std: " + ",".join(std_str))
for image in image_201_names:
    print(image + ": ")
    npy_path = os.path.join(npy_dir, experiment_name, "test_latest/images", image + ".npy")
    image_path = os.path.join(image_dir, experiment_name, "images", image + ".png")
    [mean_str, std_str] = image_201_rec(np.load(npy_path), image_path, mean_201)
    print("mean: " + ",".join(mean_str))
    print("std: " + ",".join(std_str))
for image in image_506_names:
    print(image + ": ")
    npy_path = os.path.join(npy_dir, experiment_name, "test_latest/images", image + ".npy")
    image_path = os.path.join(image_dir, experiment_name, "images", image + ".png")
    [mean_str, std_str] = image_506_rec(np.load(npy_path), image_path, mean_506)
    print("mean: " + ",".join(mean_str))
    print("std: " + ",".join(std_str))
36ddd0877c42e3bb962656d2eda37cfdb2709c48 | 218 | py | Python | address/compat.py | OmenApps/django-uuid-address | 70d6b7101f7a99cb72d53424e4ce92e277aa90c3 | ["BSD-3-Clause"]

from django.db.models.fields.related import ForeignObject
def compat_contribute_to_class(self, cls, name, private_only=False):
    super(ForeignObject, self).contribute_to_class(cls, name, private_only=private_only)
36ec2b955a657d64b68bd310a32a6d7594383e3e | 19 | py | Python | utils/csrc/__init__.py | voldemortX/DeeplabV3_PyTorch1.3_Codebase | d22d23e74800fafb58eeb61d6649008745c1a287 | ["BSD-3-Clause"]

from . import apis
7fc9ddf274e944f5bed421dabdab41eba93630f9 | 51,161 | py | Python | 023-Merge-k-Sorted-Lists/solution01.py | Eroica-cpp/LeetCode | 07276bd11558f3d0e32bec768b09e886de145f9e | ["CC-BY-3.0", "MIT"]

#!/usr/bin/python
# ==============================================================================
# Author: Tao Li (taoli@ucsd.edu)
# Date: Jun 2, 2015
# Question: 023-Merge-k-Sorted-Lists
# Link: https://leetcode.com/problems/merge-k-sorted-lists/
# ==============================================================================
# Merge k sorted linked lists and return it as one sorted list. Analyze
# and describe its complexity.
# ==============================================================================
# Method: Naive Method
# Time complexity: O(kn)
# Space complexity: O(1)
# Note: TLE
# ==============================================================================
# Definition for singly-linked list.
class ListNode:
    def __init__(self, x):
        self.val = x
        self.next = None
class Solution:
    # @param {ListNode[]} lists
    # @return {ListNode}
    def mergeKLists(self, lists):
        lists = [i for i in lists if i is not None]
        new = head = ListNode(0)
        leftNum = len(lists)
        vals = [j.val if j is not None else float('inf') for j in lists]
        while leftNum > 0:
            idx = vals.index(min(vals))
            vals[idx] = lists[idx].next.val if lists[idx].next is not None else float('inf')
            head.next = ListNode(lists[idx].val)
            lists[idx] = lists[idx].next
            head = head.next
            if lists[idx] is None:
                leftNum -= 1
        return new.next
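The header notes this naive version scans all k current heads per output node, O(kN) overall, and times out. The standard improvement keeps the current head of each list in a min-heap, giving O(N log k). A hedged alternative sketch, not part of solution01.py (`merge_k_lists_heap` is my naming); the `(val, idx, node)` tuples break value ties on the list index so `ListNode` objects are never compared directly:

```python
import heapq

class ListNode:  # same node shape as defined above
    def __init__(self, x):
        self.val = x
        self.next = None

def merge_k_lists_heap(lists):
    # Seed the heap with the head of every non-empty list.
    heap = [(node.val, i, node) for i, node in enumerate(lists) if node is not None]
    heapq.heapify(heap)
    dummy = tail = ListNode(0)
    while heap:
        # Pop the globally smallest head, splice it in, advance that list.
        _, i, node = heapq.heappop(heap)
        tail.next = node
        tail = node
        if node.next is not None:
            heapq.heappush(heap, (node.next.val, i, node.next))
    return dummy.next
```

Unlike the naive version, this reuses the existing nodes instead of allocating new `ListNode` objects for the output.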
# TEST CODE
if __name__ == '__main__':
    lists = [[1, 2, 10], [4, 5], [3]]
lists = [[7],[49],[73],[58],[30],[72],[44],[78],[23],[9],[40],[65],[92],[42],[87],[3],[27],[29],[40],[12],[3],[69],[9],[57],[60],[33],[99],[78],[16],[35],[97],[26],[12],[67],[10],[33],[79],[49],[79],[21],[67],[72],[93],[36],[85],[45],[28],[91],[94],[57],[1],[53],[8],[44],[68],[90],[24],[96],[30],[3],[22],[66],[49],[24],[1],[53],[77],[8],[28],[33],[98],[81],[35],[13],[65],[14],[63],[36],[25],[69],[15],[94],[29],[1],[17],[95],[5],[4],[51],[98],[88],[23],[5],[82],[52],[66],[16],[37],[38],[44],[1],[97],[71],[28],[37],[58],[77],[97],[94],[4],[9],[31],[45],[75],[35],[98],[42],[99],[68],[12],[60],[57],[94],[8],[95],[68],[13],[30],[6],[62],[42],[65],[82],[52],[67],[21],[95],[12],[71],[1],[90],[31],[38],[57],[16],[90],[40],[79],[35],[6],[72],[98],[95],[19],[54],[23],[89],[60],[5],[26],[23],[6],[13],[70],[38],[94],[20],[44],[66],[34],[26],[94],[63],[38],[44],[90],[50],[59],[23],[47],[85],[17],[72],[39],[47],[85],[96],[85],[23],[20],[44],[68],[35],[15],[25],[34],[42],[11],[79],[52],[44],[95],[18],[96],[92],[15],[91],[33],[69],[97],[53],[47],[25],[10],[62],[11],[8],[77],[61],[25],[35],[68],[95],[76],[67],[39],[74],[31],[56],[1],[72],[60],[94],[84],[55],[89],[7],[15],[93],[69],[80],[55],[55],[6],[63],[2],[76],[8],[49],[31],[44],[38],[8],[97],[51],[49],[3],[31],[31],[14],[19],[75],[9],[80],[29],[23],[54],[60],[37],[45],[17],[25],[0],[56],[64],[97],[48],[4],[50],[50],[76],[12],[54],[97],[4],[81],[48],[65],[78],[99],[9],[29],[53],[83],[47],[7],[73],[22],[5],[76],[53],[24],[30],[66],[0],[44],[70],[85],[16],[98],[55],[33],[57],[76],[78],[66],[57],[11],[78],[14],[19],[37],[33],[91],[20],[62],[33],[97],[31],[88],[89],[73],[77],[4],[58],[0],[54],[60],[15],[47],[80],[30],[55],[46],[7],[38],[0],[26],[35],[57],[13],[14],[93],[60],[54],[18],[57],[85],[29],[15],[63],[2],[17],[43],[19],[67],[47],[69],[95],[3],[73],[3],[48],[85],[58],[59],[6],[30],[24],[32],[73],[3],[97],[20],[50],[31],[80],[3],[0],[20],[33],[58],[3],[76],[50],[34],[80],[79],[32],[74],[49],[42],[49],[71],[10],[79],[83],[70],[4
0],[23],[50],[71],[29],[18],[46],[99],[30],[21],[76],[24],[44],[58],[96],[71],[64],[60],[98],[51],[40],[3],[51],[1],[5],[80],[18],[74],[49],[13],[20],[25],[12],[83],[88],[17],[8],[50],[24],[95],[57],[11],[90],[66],[10],[93],[53],[65],[60],[42],[3],[52],[7],[41],[10],[0],[99],[27],[71],[87],[14],[25],[41],[17],[48],[42],[15],[74],[45],[73],[20],[11],[39],[54],[5],[29],[53],[89],[66],[56],[4],[60],[98],[92],[20],[16],[80],[67],[52],[39],[98],[1],[11],[16],[91],[71],[28],[71],[61],[45],[20],[40],[58],[53],[27],[50],[11],[63],[94],[33],[27],[27],[95],[31],[42],[16],[6],[15],[24],[1],[97],[61],[3],[24],[8],[36],[81],[15],[10],[16],[5],[73],[81],[20],[91],[69],[65],[27],[36],[28],[25],[84],[67],[49],[76],[46],[76],[66],[67],[20],[84],[91],[10],[58],[11],[44],[18],[18],[34],[25],[67],[89],[18],[14],[25],[18],[28],[29],[34],[27],[40],[54],[40],[96],[35],[83],[48],[65],[11],[52],[64],[76],[37],[75],[45],[10],[15],[92],[10],[47],[37],[99],[47],[15],[93],[79],[29],[64],[79],[1],[25],[89],[58],[33],[0],[89],[70],[17],[46],[38],[43],[38],[36],[21],[19],[96],[47],[88],[59],[87],[35],[35],[12],[84],[89],[84],[34],[67],[19],[15],[45],[76],[61],[25],[19],[31],[3],[98],[98],[39],[4],[3],[79],[65],[91],[7],[24],[65],[21],[54],[6],[50],[3],[32],[32],[41],[54],[77],[17],[0],[89],[65],[47],[22],[79],[17],[16],[48],[62],[29],[39],[8],[3],[57],[61],[52],[66],[21],[57],[96],[55],[30],[94],[55],[21],[12],[93],[27],[32],[44],[91],[98],[52],[56],[70],[3],[39],[14],[99],[66],[35],[21],[43],[52],[86],[19],[50],[10],[23],[69],[5],[43],[11],[31],[92],[16],[99],[71],[39],[70],[36],[91],[57],[33],[28],[77],[10],[83],[76],[89],[91],[34],[11],[4],[26],[91],[90],[22],[64],[90],[84],[13],[41],[27],[79],[84],[37],[70],[61],[81],[65],[2],[32],[32],[54],[59],[47],[77],[62],[10],[19],[50],[77],[41],[36],[20],[99],[12],[59],[56],[90],[52],[48],[14],[44],[18],[50],[1],[95],[21],[31],[1],[45],[61],[57],[10],[28],[5],[73],[37],[69],[96],[3],[21],[75],[18],[0],[89],[62],[54],[63],[43],[89],[50],[35],[15],[64],[9
4],[63],[58],[52],[92],[16],[14],[20],[60],[50],[68],[41],[47],[96],[87],[1],[34],[28],[71],[48],[75],[53],[71],[29],[93],[19],[71],[20],[64],[79],[30],[10],[80],[13],[42],[38],[82],[44],[28],[93],[75],[80],[0],[96],[47],[70],[87],[43],[33],[52],[61],[24],[0],[80],[78],[57],[23],[98],[14],[45],[62],[9],[10],[49],[18],[90],[55],[43],[55],[85],[34],[75],[21],[51],[26],[51],[59],[83],[14],[37],[79],[98],[0],[37],[85],[78],[84],[42],[15],[60],[67],[40],[7],[66],[28],[62],[63],[69],[90],[23],[78],[13],[61],[10],[40],[78],[0],[94],[7],[56],[51],[86],[31],[44],[39],[76],[84],[52],[8],[14],[54],[19],[28],[71],[70],[63],[47],[24],[43],[54],[8],[81],[52],[88],[63],[59],[19],[79],[56],[61],[87],[53],[99],[88],[44],[80],[66],[83],[74],[36],[9],[67],[34],[39],[84],[51],[49],[66],[67],[10],[89],[27],[73],[81],[95],[36],[74],[35],[31],[72],[28],[98],[70],[87],[97],[89],[46],[90],[11],[12],[63],[12],[81],[51],[30],[21],[13],[28],[50],[0],[59],[54],[92],[94],[30],[32],[59],[77],[79],[32],[72],[83],[81],[53],[22],[21],[56],[18],[91],[0],[96],[20],[4],[99],[29],[44],[75],[70],[16],[99],[80],[18],[88],[52],[28],[0],[62],[40],[49],[85],[83],[15],[59],[78],[59],[61],[82],[48],[54],[40],[81],[70],[28],[51],[44],[69],[95],[69],[10],[72],[23],[25],[19],[50],[31],[43],[46],[69],[70],[94],[10],[92],[64],[24],[61],[19],[20],[62],[61],[25],[34],[49],[90],[22],[60],[93],[28],[22],[81],[66],[68],[23],[22],[39],[17],[93],[64],[78],[56],[71],[41],[55],[36],[89],[28],[20],[2],[12],[16],[47],[46],[51],[72],[11],[23],[36],[5],[7],[33],[66],[53],[12],[25],[40],[53],[57],[33],[95],[39],[51],[58],[94],[60],[38],[29],[75],[98],[92],[33],[62],[76],[36],[46],[73],[64],[84],[92],[19],[42],[28],[59],[62],[45],[16],[27],[72],[48],[0],[70],[98],[92],[45],[28],[0],[43],[92],[63],[83],[72],[1],[9],[21],[86],[13],[69],[31],[57],[19],[86],[56],[16],[54],[54],[14],[15],[37],[66],[97],[77],[60],[12],[91],[31],[74],[63],[77],[24],[84],[33],[50],[27],[99],[29],[9],[44],[64],[51],[12],[79],[34],[7],[83],[0],[59],[10],[5
3],[91],[21],[25],[53],[29],[96],[53],[61],[58],[91],[63],[20],[68],[87],[26],[72],[19],[41],[4],[51],[92],[45],[70],[74],[62],[76],[17],[26],[13],[44],[71],[79],[35],[29],[88],[48],[78],[17],[23],[38],[8],[29],[26],[68],[6],[99],[55],[29],[76],[88],[96],[19],[64],[37],[16],[18],[91],[83],[98],[20],[48],[61],[51],[18],[77],[46],[61],[78],[28],[63],[35],[6],[8],[68],[79],[19],[23],[77],[92],[53],[78],[53],[95],[71],[75],[4],[62],[32],[19],[31],[35],[68],[68],[10],[34],[70],[78],[65],[45],[79],[75],[27],[87],[63],[37],[66],[34],[83],[75],[84],[55],[56],[99],[64],[93],[86],[33],[24],[47],[94],[30],[67],[65],[10],[82],[5],[14],[52],[41],[85],[91],[6],[60],[60],[12],[3],[30],[85],[0],[85],[67],[84],[88],[0],[86],[0],[95],[99],[2],[28],[83],[17],[36],[2],[89],[78],[25],[89],[22],[95],[5],[19],[91],[21],[74],[14],[62],[6],[83],[26],[19],[1],[30],[44],[75],[58],[62],[31],[36],[20],[8],[24],[47],[8],[11],[97],[0],[14],[96],[99],[45],[33],[62],[39],[54],[21],[86],[36],[10],[62],[73],[93],[35],[88],[71],[38],[21],[91],[13],[60],[57],[92],[43],[75],[68],[13],[15],[36],[91],[47],[69],[32],[34],[28],[43],[79],[6],[54],[7],[37],[3],[21],[47],[34],[49],[16],[59],[38],[50],[83],[86],[96],[85],[24],[22],[63],[81],[12],[51],[2],[95],[36],[54],[8],[80],[31],[30],[95],[14],[12],[36],[37],[9],[85],[19],[81],[10],[8],[9],[39],[67],[19],[66],[99],[20],[15],[77],[22],[42],[80],[81],[85],[30],[25],[36],[94],[45],[2],[33],[80],[56],[34],[17],[81],[81],[58],[5],[92],[60],[51],[1],[42],[44],[35],[55],[72],[43],[22],[13],[41],[81],[79],[18],[60],[29],[55],[82],[64],[32],[13],[95],[28],[4],[55],[11],[81],[10],[4],[51],[16],[61],[64],[5],[22],[13],[43],[36],[18],[94],[88],[43],[39],[2],[43],[70],[19],[12],[17],[32],[82],[97],[59],[73],[90],[19],[10],[11],[15],[78],[42],[45],[70],[62],[8],[83],[99],[9],[13],[50],[51],[65],[2],[17],[53],[99],[16],[84],[69],[42],[3],[15],[29],[57],[82],[64],[52],[54],[26],[27],[55],[59],[52],[86],[22],[29],[86],[49],[79],[98],[90],[3],[86],[43],[27],[85],[98],[87],[1
7],[31],[76],[61],[63],[14],[83],[32],[22],[11],[91],[71],[42],[33],[53],[30],[80],[4],[15],[87],[10],[81],[58],[86],[33],[40],[59],[36],[62],[38],[54],[26],[26],[12],[95],[21],[77],[41],[58],[54],[20],[97],[27],[51],[62],[32],[37],[4],[57],[5],[7],[43],[58],[39],[43],[97],[81],[65],[57],[5],[17],[12],[24],[84],[59],[99],[16],[40],[51],[46],[22],[93],[15],[49],[61],[49],[31],[96],[23],[96],[44],[50],[38],[31],[59],[35],[65],[1],[87],[37],[48],[20],[24],[94],[75],[3],[89],[86],[45],[43],[57],[32],[61],[78],[4],[2],[31],[20],[86],[30],[43],[67],[48],[44],[17],[49],[25],[73],[63],[16],[80],[48],[79],[57],[14],[5],[89],[7],[5],[27],[14],[15],[8],[97],[49],[95],[29],[26],[66],[98],[85],[62],[18],[53],[79],[3],[83],[23],[59],[21],[74],[34],[75],[2],[36],[79],[15],[50],[13],[50],[92],[48],[83],[81],[47],[8],[6],[78],[6],[67],[14],[10],[70],[58],[17],[46],[4],[56],[63],[37],[98],[34],[56],[8],[12],[13],[80],[24],[4],[13],[73],[62],[79],[77],[69],[47],[31],[53],[33],[39],[20],[94],[97],[24],[71],[58],[20],[48],[18],[1],[17],[92],[13],[80],[56],[40],[94],[68],[70],[63],[49],[34],[25],[71],[91],[50],[17],[10],[54],[25],[1],[94],[92],[34],[70],[46],[87],[21],[65],[89],[71],[65],[60],[10],[75],[9],[94],[23],[34],[74],[14],[94],[23],[75],[77],[57],[27],[22],[74],[72],[39],[39],[83],[17],[30],[90],[6],[96],[12],[95],[38],[16],[21],[84],[27],[78],[60],[72],[87],[35],[81],[26],[94],[30],[95],[58],[19],[12],[43],[12],[6],[28],[93],[26],[26],[26],[33],[81],[96],[92],[61],[9],[77],[24],[22],[40],[52],[15],[80],[92],[51],[52],[73],[77],[17],[26],[65],[61],[39],[49],[7],[19],[30],[76],[25],[81],[27],[59],[97],[93],[25],[29],[22],[48],[53],[79],[68],[27],[24],[31],[77],[6],[0],[72],[1],[30],[98],[77],[49],[94],[27],[52],[59],[77],[70],[51],[24],[84],[20],[15],[19],[77],[95],[95],[64],[0],[18],[17],[23],[17],[54],[68],[87],[75],[13],[22],[57],[16],[82],[42],[62],[8],[40],[5],[70],[6],[1],[19],[20],[62],[17],[35],[52],[94],[35],[10],[31],[63],[51],[47],[51],[10],[49],[47],[34],[48],[23],[88
],[71],[32],[59],[56],[45],[2],[23],[89],[19],[94],[86],[7],[92],[0],[46],[1],[65],[44],[56],[74],[17],[75],[95],[66],[64],[91],[91],[40],[56],[59],[97],[97],[9],[63],[22],[85],[99],[19],[20],[92],[48],[53],[76],[76],[33],[97],[93],[3],[8],[74],[66],[3],[30],[81],[86],[25],[69],[25],[58],[85],[99],[21],[14],[60],[56],[72],[53],[53],[75],[81],[55],[62],[48],[45],[52],[70],[31],[0],[63],[47],[19],[70],[1],[59],[82],[79],[96],[29],[1],[56],[30],[81],[20],[62],[51],[45],[70],[54],[57],[93],[15],[8],[61],[66],[73],[78],[86],[30],[52],[26],[79],[10],[32],[51],[72],[1],[87],[79],[8],[37],[36],[2],[78],[23],[88],[74],[9],[96],[23],[62],[23],[46],[79],[55],[74],[22],[31],[57],[71],[79],[94],[27],[56],[36],[16],[47],[13],[84],[57],[60],[57],[0],[36],[13],[33],[75],[10],[62],[66],[6],[8],[7],[73],[37],[96],[55],[24],[66],[8],[33],[61],[57],[71],[76],[14],[14],[89],[35],[59],[38],[94],[60],[27],[43],[42],[55],[21],[11],[80],[12],[9],[90],[83],[45],[38],[34],[38],[64],[81],[40],[50],[18],[75],[98],[74],[87],[42],[88],[20],[82],[5],[34],[20],[46],[3],[20],[50],[75],[62],[42],[70],[38],[42],[15],[35],[77],[34],[67],[74],[80],[90],[96],[6],[81],[43],[72],[55],[38],[75],[20],[75],[86],[97],[5],[37],[95],[83],[95],[76],[83],[82],[27],[51],[99],[5],[51],[77],[6],[90],[95],[96],[61],[31],[25],[6],[79],[70],[24],[57],[22],[53],[73],[80],[3],[83],[34],[82],[57],[63],[41],[37],[29],[87],[86],[51],[94],[22],[30],[30],[54],[6],[19],[17],[83],[39],[13],[92],[39],[44],[68],[83],[29],[65],[5],[87],[40],[87],[97],[79],[68],[63],[0],[99],[47],[1],[88],[5],[20],[44],[72],[76],[30],[19],[69],[88],[29],[52],[32],[58],[7],[81],[62],[16],[9],[45],[48],[85],[8],[13],[67],[5],[14],[54],[35],[70],[63],[79],[58],[39],[59],[45],[23],[48],[44],[77],[37],[91],[42],[98],[77],[4],[82],[56],[75],[33],[10],[92],[70],[70],[53],[53],[28],[28],[73],[82],[71],[0],[5],[55],[21],[84],[34],[47],[28],[88],[85],[8],[58],[74],[23],[61],[72],[24],[35],[89],[56],[44],[98],[47],[95],[67],[83],[76],[43],[36],[37],[46],[62],[
57],[42],[96],[3],[81],[81],[94],[88],[4],[55],[87],[83],[17],[65],[93],[56],[38],[51],[87],[90],[80],[32],[26],[29],[7],[14],[17],[25],[38],[26],[45],[57],[25],[7],[79],[51],[41],[59],[81],[65],[25],[94],[93],[64],[63],[29],[1],[72],[40],[53],[29],[32],[80],[59],[74],[40],[76],[89],[4],[18],[68],[86],[66],[62],[74],[98],[21],[27],[15],[27],[41],[9],[58],[35],[65],[61],[61],[32],[72],[19],[94],[3],[54],[31],[16],[4],[83],[74],[13],[11],[44],[8],[59],[5],[72],[92],[72],[15],[29],[13],[43],[80],[9],[5],[84],[49],[11],[9],[83],[48],[85],[7],[60],[80],[89],[63],[33],[66],[85],[0],[85],[42],[74],[91],[63],[93],[3],[98],[63],[61],[91],[77],[89],[97],[29],[12],[69],[60],[31],[65],[11],[13],[96],[50],[4],[23],[88],[22],[20],[16],[65],[77],[72],[12],[80],[76],[0],[41],[18],[75],[19],[43],[63],[81],[58],[82],[67],[52],[51],[61],[95],[17],[29],[27],[91],[86],[60],[56],[92],[1],[39],[11],[84],[70],[13],[70],[8],[85],[20],[57],[45],[97],[79],[99],[2],[83],[67],[19],[69],[37],[29],[99],[18],[18],[5],[15],[37],[15],[29],[24],[15],[66],[72],[96],[60],[78],[79],[71],[24],[45],[80],[77],[58],[54],[15],[8],[21],[8],[45],[52],[95],[11],[10],[60],[82],[10],[82],[10],[1],[60],[35],[20],[84],[6],[5],[8],[77],[24],[49],[93],[44],[16],[24],[81],[61],[85],[38],[3],[7],[22],[13],[95],[51],[83],[98],[57],[0],[56],[89],[1],[99],[10],[74],[95],[28],[91],[47],[81],[93],[20],[68],[15],[76],[24],[53],[64],[87],[95],[19],[93],[6],[5],[31],[78],[49],[30],[9],[87],[32],[74],[84],[70],[53],[11],[88],[64],[38],[88],[7],[62],[58],[16],[75],[77],[92],[77],[29],[90],[60],[51],[27],[21],[39],[40],[64],[77],[36],[5],[11],[84],[55],[75],[59],[39],[49],[99],[72],[74],[90],[66],[42],[22],[21],[60],[34],[47],[93],[17],[1],[65],[48],[74],[32],[88],[65],[58],[2],[10],[19],[10],[37],[92],[36],[48],[8],[50],[33],[19],[37],[32],[94],[93],[10],[11],[5],[58],[7],[53],[74],[64],[7],[2],[14],[69],[39],[14],[87],[95],[48],[67],[13],[67],[60],[53],[70],[83],[56],[68],[26],[0],[22],[37],[99],[71],[96],[17],[41],[58],[35],[8
8],[71],[73],[7],[20],[92],[93],[52],[89],[13],[28],[3],[98],[17],[50],[88],[46],[43],[65],[88],[11],[66],[43],[38],[4],[67],[57],[22],[46],[31],[3],[20],[27],[64],[51],[5],[29],[42],[16],[88],[50],[85],[2],[73],[45],[97],[15],[52],[88],[86],[28],[17],[22],[23],[26],[96],[61],[79],[84],[42],[90],[97],[28],[11],[43],[44],[52],[25],[38],[72],[87],[57],[15],[27],[61],[82],[77],[44],[28],[13],[35],[1],[39],[23],[23],[64],[91],[58],[59],[43],[48],[27],[52],[33],[8],[41],[21],[99],[69],[30],[81],[76],[13],[7],[54],[98],[69],[15],[80],[47],[27],[75],[88],[86],[64],[47],[6],[38],[27],[2],[31],[79],[57],[46],[27],[40],[38],[41],[42],[2],[65],[69],[11],[8],[2],[43],[73],[80],[36],[2],[64],[70],[1],[87],[40],[8],[55],[5],[87],[14],[1],[65],[99],[59],[57],[90],[43],[82],[82],[3],[39],[84],[61],[4],[32],[27],[84],[24],[56],[14],[7],[48],[8],[4],[87],[35],[32],[36],[57],[25],[64],[54],[28],[33],[23],[30],[44],[70],[32],[4],[85],[79],[64],[55],[26],[79],[10],[20],[90],[1],[24],[49],[67],[2],[10],[4],[63],[60],[64],[28],[27],[1],[0],[82],[44],[85],[69],[23],[55],[68],[78],[27],[69],[74],[21],[37],[57],[20],[89],[12],[42],[83],[2],[41],[52],[67],[87],[88],[7],[33],[32],[81],[3],[32],[27],[50],[85],[70],[19],[23],[65],[74],[17],[18],[93],[0],[34],[91],[57],[19],[45],[93],[12],[79],[69],[18],[30],[98],[49],[57],[90],[68],[8],[26],[72],[52],[17],[93],[11],[41],[31],[7],[68],[63],[1],[22],[37],[58],[78],[46],[94],[63],[83],[10],[84],[73],[87],[32],[69],[7],[62],[84],[48],[35],[30],[98],[94],[26],[50],[34],[15],[68],[91],[99],[71],[69],[1],[96],[10],[73],[92],[11],[28],[0],[12],[79],[39],[69],[51],[14],[64],[84],[11],[44],[91],[27],[41],[99],[97],[95],[6],[91],[11],[38],[82],[53],[18],[23],[3],[80],[8],[73],[37],[80],[99],[93],[49],[43],[18],[46],[75],[75],[81],[76],[30],[35],[28],[48],[97],[62],[61],[47],[37],[14],[95],[30],[7],[56],[25],[43],[66],[15],[50],[42],[75],[23],[65],[12],[29],[80],[73],[57],[56],[23],[53],[6],[52],[76],[62],[38],[11],[66],[94],[9],[82],[96],[9],[96],[44],[35]
,[65],[58],[3],[3],[4],[46],[55],[50],[75],[27],[63],[52],[11],[84],[87],[16],[77],[51],[90],[83],[20],[99],[90],[42],[11],[36],[62],[60],[69],[14],[78],[66],[57],[99],[50],[19],[98],[43],[68],[76],[1],[46],[88],[20],[81],[49],[11],[47],[44],[76],[96],[78],[56],[36],[62],[39],[39],[79],[87],[66],[0],[6],[91],[56],[98],[46],[70],[33],[1],[90],[84],[58],[27],[94],[86],[97],[89],[35],[27],[68],[0],[64],[40],[88],[10],[83],[23],[31],[23],[56],[14],[12],[69],[52],[62],[81],[35],[35],[71],[24],[70],[66],[93],[28],[26],[33],[99],[3],[26],[88],[41],[89],[19],[81],[15],[23],[98],[52],[68],[61],[89],[1],[83],[93],[20],[13],[42],[26],[50],[56],[32],[6],[92],[64],[2],[6],[90],[19],[2],[34],[75],[92],[32],[94],[69],[43],[47],[27],[95],[64],[93],[66],[94],[84],[58],[21],[66],[70],[42],[92],[54],[29],[45],[47],[72],[80],[96],[97],[46],[40],[6],[0],[43],[81],[14],[75],[17],[2],[92],[11],[68],[75],[37],[98],[32],[37],[66],[86],[92],[4],[60],[59],[15],[95],[85],[81],[63],[68],[64],[78],[25],[4],[15],[26],[86],[40],[23],[8],[50],[59],[20],[97],[79],[18],[17],[57],[30],[21],[51],[71],[28],[22],[99],[52],[56],[49],[76],[4],[48],[87],[33],[14],[57],[59],[10],[76],[0],[16],[70],[72],[55],[42],[86],[14],[86],[29],[83],[87],[87],[31],[59],[85],[99],[19],[71],[69],[3],[53],[96],[41],[98],[48],[37],[26],[21],[28],[94],[63],[27],[47],[98],[31],[90],[90],[30],[23],[17],[29],[7],[83],[39],[74],[33],[46],[89],[31],[71],[55],[36],[79],[46],[84],[76],[74],[84],[85],[87],[45],[49],[0],[86],[8],[13],[12],[27],[50],[42],[85],[71],[49],[54],[68],[60],[96],[59],[73],[98],[85],[48],[19],[67],[4],[33],[29],[90],[94],[66],[31],[11],[69],[58],[99],[57],[45],[23],[55],[75],[20],[76],[75],[85],[68],[92],[67],[24],[44],[86],[90],[13],[25],[38],[43],[3],[12],[11],[96],[72],[63],[96],[27],[65],[3],[19],[43],[71],[0],[49],[29],[38],[97],[65],[16],[43],[79],[47],[48],[77],[48],[80],[60],[38],[92],[61],[17],[76],[88],[42],[45],[4],[79],[69],[75],[65],[53],[15],[40],[69],[27],[1],[56],[10],[85],[93],[6],[93],[72],[1
5],[99],[5],[46],[53],[37],[49],[22],[20],[54],[22],[46],[59],[96],[28],[52],[16],[57],[76],[99],[93],[3],[36],[63],[35],[15],[19],[21],[93],[52],[44],[15],[76],[23],[80],[9],[37],[23],[85],[42],[41],[57],[2],[98],[43],[62],[85],[34],[92],[23],[79],[83],[94],[86],[52],[93],[77],[9],[54],[67],[68],[98],[50],[52],[54],[32],[8],[1],[90],[41],[44],[94],[87],[74],[31],[37],[82],[96],[81],[72],[82],[33],[56],[36],[30],[15],[70],[11],[33],[46],[45],[20],[97],[26],[12],[14],[33],[1],[77],[83],[35],[45],[66],[70],[69],[47],[84],[81],[32],[41],[81],[1],[47],[23],[15],[61],[61],[32],[26],[40],[62],[64],[59],[77],[35],[55],[23],[26],[59],[23],[65],[39],[46],[67],[21],[83],[51],[73],[43],[86],[99],[17],[61],[93],[55],[14],[94],[16],[73],[7],[1],[80],[58],[4],[91],[44],[81],[63],[98],[92],[90],[20],[54],[29],[40],[81],[60],[55],[82],[79],[65],[95],[15],[50],[88],[89],[25],[57],[86],[82],[45],[14],[93],[42],[81],[12],[96],[66],[66],[90],[61],[89],[59],[35],[19],[36],[92],[23],[81],[2],[5],[27],[18],[5],[63],[92],[38],[49],[33],[8],[36],[31],[96],[28],[9],[44],[17],[36],[36],[9],[39],[90],[14],[4],[68],[56],[84],[17],[75],[65],[12],[10],[59],[88],[48],[33],[32],[34],[6],[50],[10],[7],[75],[20],[8],[21],[7],[99],[57],[61],[7],[90],[10],[0],[36],[78],[38],[93],[15],[15],[72],[47],[43],[51],[39],[95],[26],[14],[85],[19],[80],[98],[56],[9],[89],[45],[65],[76],[51],[41],[73],[15],[44],[46],[3],[83],[40],[33],[53],[86],[52],[12],[68],[82],[87],[68],[62],[4],[96],[12],[64],[9],[45],[16],[9],[61],[78],[61],[31],[12],[11],[56],[60],[99],[97],[42],[6],[23],[44],[7],[84],[87],[80],[57],[78],[40],[9],[62],[66],[21],[36],[7],[26],[37],[36],[6],[52],[30],[67],[0],[36],[16],[15],[46],[99],[77],[12],[80],[36],[24],[10],[87],[54],[35],[23],[31],[73],[77],[57],[94],[16],[65],[71],[12],[76],[25],[7],[9],[75],[50],[92],[6],[63],[88],[29],[0],[36],[3],[15],[31],[75],[97],[8],[96],[9],[16],[77],[75],[24],[57],[19],[92],[44],[57],[15],[72],[54],[25],[67],[37],[48],[96],[70],[36],[13],[42],[41],[78],[3],[
81],[62],[19],[8],[11],[76],[82],[67],[23],[68],[20],[51],[20],[49],[62],[24],[91],[65],[71],[29],[69],[73],[83],[27],[68],[88],[11],[30],[24],[57],[39],[59],[60],[79],[24],[77],[86],[1],[65],[32],[47],[62],[91],[79],[91],[0],[87],[31],[23],[18],[35],[10],[24],[19],[57],[92],[64],[76],[5],[36],[31],[66],[92],[26],[20],[77],[4],[85],[7],[5],[12],[6],[71],[18],[94],[81],[69],[33],[71],[81],[86],[62],[0],[19],[41],[79],[9],[85],[79],[82],[80],[56],[19],[94],[86],[59],[22],[63],[83],[30],[28],[50],[99],[65],[97],[39],[83],[13],[53],[92],[24],[53],[13],[95],[11],[20],[69],[43],[68],[67],[51],[10],[15],[64],[91],[49],[40],[87],[23],[44],[38],[21],[67],[52],[52],[12],[56],[62],[51],[84],[0],[16],[27],[17],[20],[8],[10],[61],[90],[63],[90],[58],[4],[32],[42],[73],[10],[11],[57],[53],[14],[72],[0],[7],[21],[38],[60],[81],[34],[37],[82],[41],[10],[33],[37],[4],[9],[91],[49],[60],[71],[36],[59],[49],[75],[25],[87],[28],[80],[7],[55],[72],[7],[23],[97],[62],[12],[14],[38],[96],[19],[47],[63],[74],[58],[69],[32],[12],[67],[79],[22],[66],[48],[54],[63],[84],[47],[79],[0],[37],[23],[20],[11],[55],[38],[45],[19],[9],[77],[43],[8],[92],[76],[49],[2],[52],[13],[71],[74],[71],[4],[1],[86],[91],[0],[22],[15],[64],[57],[71],[13],[82],[77],[85],[50],[58],[70],[54],[9],[5],[10],[74],[91],[58],[66],[77],[83],[71],[15],[33],[91],[58],[21],[44],[83],[18],[73],[85],[40],[84],[27],[61],[28],[6],[15],[73],[14],[31],[54],[12],[32],[12],[39],[79],[12],[57],[18],[1],[47],[35],[87],[25],[93],[59],[33],[53],[7],[7],[52],[93],[90],[69],[33],[35],[23],[76],[61],[18],[15],[49],[79],[32],[2],[88],[35],[56],[5],[22],[34],[46],[4],[75],[93],[20],[48],[98],[87],[46],[97],[67],[20],[39],[73],[99],[16],[5],[56],[64],[34],[0],[17],[61],[56],[0],[29],[99],[4],[88],[27],[60],[96],[34],[82],[53],[48],[1],[4],[40],[36],[50],[22],[84],[81],[19],[72],[1],[90],[43],[69],[74],[79],[8],[85],[85],[22],[11],[78],[87],[2],[75],[21],[82],[16],[93],[70],[60],[81],[75],[12],[63],[51],[28],[48],[87],[57],[89],[90],[5],[89],[
5],[89],[85],[67],[36],[5],[23],[59],[36],[88],[34],[51],[32],[66],[88],[35],[15],[69],[97],[87],[86],[90],[90],[69],[52],[96],[55],[1],[0],[90],[34],[94],[0],[74],[88],[86],[25],[53],[91],[68],[75],[20],[81],[99],[85],[95],[19],[51],[95],[39],[95],[69],[45],[98],[94],[79],[47],[43],[96],[26],[63],[45],[26],[71],[19],[85],[19],[80],[79],[34],[26],[47],[58],[24],[96],[73],[98],[91],[53],[39],[0],[30],[6],[60],[28],[38],[0],[10],[28],[58],[87],[94],[52],[80],[56],[98],[15],[55],[22],[43],[59],[49],[28],[0],[70],[65],[92],[89],[66],[92],[52],[57],[24],[90],[87],[5],[31],[89],[56],[38],[90],[50],[67],[96],[91],[44],[16],[33],[58],[62],[66],[19],[32],[2],[18],[77],[81],[93],[89],[47],[71],[49],[47],[41],[65],[79],[14],[99],[60],[66],[27],[71],[94],[17],[41],[95],[15],[97],[74],[51],[74],[46],[17],[79],[66],[79],[1],[97],[52],[37],[50],[70],[31],[14],[68],[97],[91],[78],[13],[19],[29],[12],[3],[36],[98],[89],[86],[9],[43],[95],[99],[4],[75],[33],[4],[61],[63],[61],[22],[51],[61],[42],[62],[88],[85],[3],[74],[18],[56],[96],[78],[11],[37],[36],[89],[60],[20],[77],[98],[18],[97],[80],[38],[35],[62],[3],[40],[92],[76],[74],[26],[34],[64],[47],[43],[92],[76],[63],[68],[90],[16],[96],[53],[60],[57],[77],[58],[65],[94],[38],[30],[13],[27],[11],[51],[46],[48],[82],[26],[71],[8],[78],[73],[5],[15],[42],[22],[90],[18],[84],[50],[44],[69],[10],[97],[2],[53],[60],[42],[91],[56],[57],[36],[89],[26],[58],[57],[1],[40],[12],[67],[48],[13],[23],[32],[75],[36],[71],[97],[54],[41],[21],[37],[72],[81],[46],[36],[32],[45],[91],[24],[69],[41],[81],[18],[83],[77],[93],[68],[8],[90],[20],[47],[6],[15],[41],[39],[35],[33],[94],[16],[35],[70],[76],[96],[63],[40],[91],[91],[51],[89],[80],[85],[17],[17],[23],[2],[29],[8],[50],[67],[86],[32],[39],[88],[15],[29],[92],[0],[4],[12],[7],[39],[77],[7],[85],[76],[14],[60],[22],[35],[9],[97],[93],[86],[76],[13],[63],[42],[1],[77],[71],[92],[84],[95],[11],[93],[79],[77],[39],[19],[49],[40],[75],[55],[33],[34],[65],[82],[48],[21],[7],[67],[3],[44],[30],[78],
[50],[51],[95],[66],[95],[65],[58],[84],[14],[56],[10],[48],[43],[55],[62],[57],[44],[62],[86],[57],[85],[95],[57],[99],[77],[36],[30],[84],[47],[71],[96],[6],[93],[53],[83],[55],[38],[80],[5],[15],[33],[34],[47],[89],[94],[83],[3],[0],[19],[5],[14],[74],[64],[95],[38],[93],[20],[5],[35],[49],[14],[8],[29],[33],[83],[36],[40],[82],[36],[6],[78],[4],[46],[4],[7],[24],[83],[44],[19],[51],[6],[88],[19],[92],[17],[92],[52],[80],[20],[81],[22],[96],[1],[93],[38],[53],[74],[97],[69],[59],[61],[55],[14],[19],[7],[27],[58],[66],[1],[73],[1],[83],[12],[76],[88],[85],[69],[9],[66],[99],[64],[0],[95],[2],[24],[18],[11],[56],[34],[0],[73],[84],[94],[23],[69],[67],[71],[45],[76],[35],[97],[41],[76],[54],[88],[3],[95],[37],[45],[58],[89],[4],[47],[14],[83],[61],[13],[25],[54],[60],[76],[71],[79],[70],[99],[21],[92],[88],[89],[37],[79],[12],[97],[27],[15],[96],[23],[46],[59],[62],[68],[68],[61],[38],[68],[11],[82],[31],[95],[99],[23],[26],[97],[9],[67],[16],[90],[29],[69],[78],[9],[1],[89],[66],[40],[49],[18],[14],[75],[41],[63],[20],[64],[76],[9],[93],[34],[45],[20],[80],[54],[3],[78],[43],[88],[78],[15],[84],[95],[83],[7],[9],[33],[85],[96],[37],[13],[87],[60],[25],[45],[16],[74],[51],[68],[66],[30],[53],[4],[41],[63],[39],[90],[37],[70],[40],[51],[85],[71],[26],[22],[63],[24],[6],[91],[51],[5],[4],[61],[8],[51],[97],[48],[64],[26],[8],[24],[54],[88],[45],[91],[18],[22],[19],[68],[82],[89],[48],[82],[43],[74],[7],[26],[95],[46],[15],[4],[30],[4],[50],[14],[84],[13],[24],[93],[17],[62],[13],[59],[63],[95],[64],[17],[30],[62],[46],[13],[90],[19],[59],[34],[73],[98],[7],[82],[81],[72],[58],[87],[62],[84],[46],[42],[53],[84],[69],[87],[62],[49],[16],[41],[52],[37],[84],[87],[43],[0],[13],[49],[36],[53],[61],[97],[82],[76],[28],[83],[9],[68],[46],[75],[56],[80],[98],[54],[61],[69],[83],[78],[86],[17],[94],[61],[81],[31],[82],[89],[81],[16],[44],[8],[17],[48],[95],[59],[28],[81],[91],[81],[77],[8],[58],[56],[12],[61],[96],[12],[38],[51],[50],[69],[37],[10],[26],[91],[37],[20],[96],[21
],[3],[89],[29],[79],[44],[48],[94],[88],[3],[33],[29],[69],[33],[73],[14],[84],[21],[70],[55],[57],[82],[14],[28],[48],[14],[91],[97],[4],[47],[17],[74],[14],[36],[86],[20],[75],[79],[74],[16],[53],[36],[52],[72],[62],[3],[64],[98],[39],[18],[7],[60],[11],[52],[22],[2],[76],[46],[20],[42],[95],[1],[64],[82],[91],[92],[98],[85],[96],[62],[41],[6],[72],[76],[73],[3],[96],[31],[54],[50],[69],[57],[51],[62],[92],[64],[99],[58],[78],[78],[7],[67],[21],[85],[69],[56],[50],[78],[10],[5],[39],[57],[40],[17],[83],[85],[34],[40],[20],[95],[81],[53],[19],[71],[28],[92],[28],[6],[86],[26],[63],[76],[61],[56],[62],[53],[39],[67],[60],[82],[0],[30],[20],[34],[69],[40],[12],[59],[67],[23],[25],[34],[19],[67],[45],[10],[20],[30],[36],[33],[46],[70],[60],[59],[43],[71],[21],[97],[10],[1],[43],[85],[53],[53],[81],[63],[21],[20],[11],[10],[67],[9],[64],[65],[97],[6],[67],[8],[9],[61],[57],[35],[45],[53],[12],[40],[83],[34],[65],[31],[75],[35],[4],[83],[15],[0],[86],[64],[31],[59],[35],[79],[52],[19],[58],[29],[69],[90],[92],[32],[43],[65],[93],[14],[72],[56],[71],[45],[27],[98],[61],[15],[78],[12],[31],[71],[10],[12],[32],[79],[23],[79],[69],[28],[41],[45],[9],[91],[43],[53],[46],[86],[47],[26],[6],[12],[4],[58],[3],[41],[10],[57],[48],[91],[56],[94],[37],[13],[13],[32],[66],[54],[54],[66],[90],[72],[19],[69],[94],[31],[25],[9],[68],[6],[14],[62],[67],[38],[38],[92],[19],[43],[26],[34],[6],[50],[26],[60],[69],[91],[15],[71],[66],[41],[39],[84],[12],[25],[61],[40],[19],[6],[40],[96],[41],[59],[10],[72],[13],[67],[42],[14],[96],[42],[27],[93],[78],[44],[64],[8],[91],[16],[97],[73],[80],[88],[90],[12],[61],[53],[41],[71],[30],[33],[58],[39],[16],[3],[73],[73],[11],[69],[7],[23],[81],[35],[9],[39],[5],[63],[26],[52],[47],[88],[84],[86],[69],[99],[11],[34],[13],[40],[96],[78],[52],[42],[84],[1],[37],[40],[25],[43],[43],[56],[74],[34],[62],[58],[29],[22],[6],[92],[64],[63],[1],[75],[85],[40],[23],[88],[14],[95],[59],[99],[95],[10],[81],[33],[46],[49],[36],[91],[68],[81],[39],[80],[78],[58]
,[63],[7],[40],[75],[68],[60],[90],[35],[55],[53],[94],[44],[68],[46],[61],[26],[26],[29],[89],[84],[25],[10],[77],[14],[68],[50],[31],[93],[38],[4],[28],[72],[1],[65],[79],[26],[93],[62],[58],[32],[16],[41],[16],[90],[89],[45],[32],[51],[29],[33],[73],[95],[95],[73],[81],[48],[50],[15],[49],[22],[87],[65],[40],[67],[73],[65],[23],[36],[8],[52],[17],[14],[33],[39],[34],[60],[86],[67],[21],[49],[81],[79],[70],[10],[0],[7],[64],[81],[29],[34],[79],[14],[22],[23],[38],[5],[12],[52],[26],[79],[33],[94],[97],[72],[71],[31],[91],[52],[42],[12],[60],[94],[76],[39],[86],[84],[82],[7],[68],[2],[17],[34],[24],[76],[87],[77],[91],[32],[70],[24],[57],[96],[82],[96],[96],[31],[9],[27],[22],[56],[98],[66],[51],[37],[0],[62],[60],[1],[55],[28],[20],[65],[1],[53],[54],[57],[72],[26],[8],[0],[95],[96],[77],[51],[1],[84],[43],[64],[83],[63],[79],[60],[53],[39],[27],[65],[40],[60],[85],[2],[10],[63],[32],[10],[27],[84],[3],[22],[14],[87],[46],[76],[72],[86],[56],[58],[16],[4],[19],[74],[28],[94],[38],[22],[66],[62],[76],[84],[71],[79],[60],[96],[56],[73],[55],[24],[57],[63],[73],[43],[35],[43],[61],[35],[58],[82],[34],[81],[61],[73],[22],[94],[83],[31],[17],[19],[46],[96],[62],[75],[89],[43],[43],[83],[20],[17],[64],[20],[55],[38],[86],[55],[20],[0],[50],[95],[17],[41],[28],[85],[15],[37],[12],[28],[29],[52],[33],[71],[86],[78],[34],[30],[52],[65],[48],[39],[75],[23],[9],[83],[5],[42],[58],[26],[25],[57],[15],[38],[44],[2],[91],[61],[27],[21],[0],[88],[16],[51],[93],[47],[9],[5],[71],[24],[47],[91],[96],[66],[18],[59],[18],[29],[2],[25],[61],[42],[4],[80],[58],[44],[99],[14],[24],[75],[37],[39],[8],[83],[51],[84],[54],[99],[95],[79],[4],[23],[90],[18],[54],[18],[61],[13],[40],[38],[42],[11],[10],[36],[42],[73],[81],[40],[25],[80],[9],[77],[23],[75],[10],[65],[25],[65],[13],[12],[69],[75],[7],[83],[46],[47],[26],[64],[74],[28],[16],[6],[22],[11],[22],[81],[69],[40],[13],[32],[91],[40],[90],[13],[59],[46],[16],[7],[51],[58],[20],[9],[20],[56],[0],[68],[39],[95],[40],[36],[69],[24],[89],
[45],[17],[5],[40],[98],[97],[73],[46],[67],[85],[88],[98],[80],[74],[13],[22],[37],[83],[28],[14],[63],[58],[42],[84],[40],[45],[87],[19],[12],[47],[99],[50],[8],[76],[6],[76],[93],[92],[75],[23],[53],[50],[87],[25],[38],[23],[63],[64],[58],[48],[85],[63],[20],[24],[30],[0],[93],[93],[68],[4],[69],[3],[43],[49],[79],[38],[10],[43],[60],[46],[43],[54],[35],[29],[90],[1],[66],[78],[33],[87],[87],[94],[91],[97],[30],[59],[31],[83],[26],[0],[27],[47],[50],[27],[48],[22],[37],[75],[85],[79],[75],[63],[6],[50],[1],[47],[46],[14],[28],[73],[34],[80],[91],[3],[74],[61],[12],[65],[58],[49],[79],[22],[7],[81],[49],[49],[57],[11],[94],[34],[31],[48],[33],[22],[60],[22],[13],[8],[81],[46],[31],[95],[97],[87],[24],[27],[92],[17],[76],[67],[11],[97],[55],[2],[18],[68],[23],[48],[24],[51],[68],[92],[68],[63],[44],[48],[27],[78],[43],[80],[43],[58],[16],[14],[33],[96],[74],[42],[35],[81],[31],[45],[0],[91],[20],[7],[72],[53],[33],[29],[2],[81],[32],[94],[65],[38],[97],[48],[54],[40],[11],[18],[90],[9],[84],[63],[50],[40],[23],[67],[48],[19],[98],[17],[57],[97],[29],[95],[63],[34],[54],[44],[4],[70],[5],[14],[36],[9],[31],[70],[33],[97],[64],[69],[39],[28],[90],[42],[63],[17],[10],[17],[7],[24],[4],[36],[77],[57],[10],[87],[84],[19],[51],[10],[57],[14],[75],[20],[21],[55],[95],[54],[18],[70],[62],[86],[52],[91],[67],[71],[88],[23],[63],[22],[91],[68],[32],[93],[6],[74],[69],[41],[60],[21],[94],[67],[18],[36],[24],[19],[84],[34],[45],[99],[75],[86],[85],[2],[21],[78],[65],[19],[29],[68],[18],[59],[17],[26],[24],[95],[94],[16],[69],[74],[28],[20],[54],[54],[68],[21],[79],[30],[59],[61],[65],[34],[94],[29],[22],[18],[27],[80],[45],[55],[2],[75],[65],[36],[94],[98],[93],[77],[76],[42],[44],[33],[69],[46],[8],[81],[92],[8],[5],[56],[69],[86],[27],[30],[92],[95],[61],[16],[86],[77],[32],[72],[56],[44],[88],[88],[34],[37],[57],[11],[9],[33],[76],[70],[10],[91],[51],[21],[31],[30],[10],[23],[75],[88],[27],[43],[0],[19],[17],[51],[79],[3],[81],[48],[96],[10],[44],[3],[4],[87],[44],[35],[83]
,[81],[73],[61],[46],[88],[31],[34],[44],[91],[2],[62],[57],[81],[3],[65],[69],[19],[99],[62],[76],[96],[48],[8],[78],[9],[40],[27],[67],[39],[39],[94],[75],[96],[2],[40],[42],[16],[87],[28],[29],[71],[84],[93],[83],[88],[12],[6],[1],[60],[5],[8],[11],[93],[64],[27],[5],[26],[93],[3],[20],[25],[15],[51],[14],[46],[72],[34],[44],[9],[93],[75],[83],[71],[59],[15],[91],[93],[95],[74],[95],[12],[52],[56],[46],[45],[66],[78],[25],[67],[78],[66],[60],[41],[12],[31],[1],[90],[33],[75],[98],[95],[17],[64],[6],[35],[90],[69],[32],[42],[66],[71],[89],[32],[76],[19],[8],[53],[16],[67],[25],[66],[84],[27],[33],[10],[94],[65],[73],[46],[79],[41],[93],[86],[99],[63],[93],[2],[4],[59],[47],[8],[50],[40],[4],[7],[48],[84],[12],[94],[33],[93],[2],[53],[99],[10],[84],[73],[7],[98],[54],[90],[27],[18],[69],[15],[76],[35],[22],[4],[89],[5],[74],[45],[81],[87],[56],[63],[93],[54],[49],[45],[34],[69],[67],[87],[67],[9],[62],[97],[8],[2],[17],[54],[54],[83],[44],[61],[25],[3],[5],[27],[43],[69],[68],[44],[70],[18],[75],[78],[61],[1],[45],[39],[31],[57],[60],[0],[57],[64],[55],[36],[44],[37],[46],[1],[31],[83],[74],[64],[35],[50],[13],[29],[70],[40],[16],[99],[12],[57],[69],[57],[23],[72],[84],[62],[35],[83],[75],[72],[71],[64],[25],[63],[92],[8],[20],[62],[38],[33],[45],[3],[16],[23],[45],[19],[90],[58],[50],[58],[62],[4],[54],[4],[16],[18],[44],[20],[95],[23],[95],[86],[78],[6],[77],[17],[86],[63],[56],[88],[63],[79],[73],[92],[4],[75],[37],[26],[86],[17],[94],[7],[75],[26],[8],[69],[94],[69],[71],[26],[86],[41],[69],[10],[45],[13],[7],[78],[28],[78],[45],[78],[22],[74],[50],[62],[36],[51],[41],[52],[87],[44],[18],[73],[70],[31],[3],[55],[48],[92],[85],[83],[94],[89],[89],[7],[61],[86],[51],[40],[58],[34],[94],[23],[61],[39],[74],[32],[1],[80],[91],[32],[91],[5],[39],[11],[43],[56],[23],[89],[24],[13],[9],[81],[60],[5],[75],[47],[70],[38],[9],[95],[92],[30],[67],[52],[28],[55],[6],[51],[76],[78],[41],[23],[17],[56],[60],[20],[37],[7],[19],[16],[48],[39],[31],[25],[54],[56],[11],[23],[9],
[84],[3],[58],[6],[68],[56],[66],[22],[59],[9],[24],[79],[64],[20],[38],[60],[75],[97],[63],[28],[63],[11],[8],[9],[32],[31],[48],[17],[5],[87],[86],[71],[86],[46],[58],[28],[19],[19],[16],[57],[99],[93],[32],[60],[75],[20],[31],[49],[73],[37],[76],[34],[80],[28],[89],[52],[70],[55],[43],[43],[59],[16],[34],[61],[17],[2],[58],[45],[98],[90],[45],[9],[59],[76],[78],[37],[15],[89],[27],[71],[57],[76],[56],[70],[72],[73],[12],[11],[82],[93],[70],[17],[5],[20],[46],[63],[24],[2],[7],[17],[5],[67],[15],[68],[99],[79],[77],[42],[27],[91],[91],[29],[49],[19],[81],[68],[5],[57],[65],[53],[62],[0],[30],[69],[87],[6],[47],[42],[51],[32],[90],[53],[15],[44],[4],[20],[81],[79],[52],[40],[50],[65],[41],[38],[36],[33],[27],[2],[23],[48],[18],[16],[40],[89],[81],[83],[41],[88],[79],[54],[73],[7],[5],[71],[11],[69],[78],[92],[74],[13],[10],[84],[97],[87],[12],[14],[76],[6],[14],[48],[73],[14],[38],[46],[11],[18],[47],[34],[90],[5],[41],[36],[24],[47],[8],[21],[74],[27],[17],[25],[59],[5],[38],[69],[55],[74],[64],[51],[84],[9],[95],[19],[55],[79],[71],[49],[86],[48],[99],[91],[81],[13],[64],[71],[50],[26],[14],[61],[71],[87],[49],[8],[96],[9],[19],[89],[1],[90],[34],[21],[62],[1],[85],[21],[58],[72],[87],[93],[16],[49],[98],[18],[43],[18],[5],[68],[83],[47],[2],[82],[14],[0],[15],[47],[34],[44],[75],[77],[83],[8],[81],[5],[92],[40],[51],[47],[96],[47],[16],[99],[44],[74],[89],[38],[89],[3],[24],[75],[1],[34],[23],[95],[60],[72],[45],[9],[68],[8],[48],[88],[65],[44],[49],[6],[65],[24],[39],[52],[33],[35],[33],[36],[89],[15],[67],[19],[85],[3],[54],[68],[65],[46],[52],[35],[18],[80],[53],[56],[51],[46],[52],[74],[94],[52],[83],[16],[24],[90],[43],[12],[23],[26],[81],[19],[85],[62],[7],[13],[18],[30],[83],[15],[48],[58],[48],[55],[16],[33],[38],[66],[89],[15],[92],[1],[44],[47],[41],[67],[57],[7],[11],[23],[51],[97],[22],[53],[9],[58],[88],[10],[87],[75],[62],[55],[48],[77],[74],[14],[21],[7],[98],[24],[91],[13],[64],[94],[32],[0],[47],[3],[0],[30],[45],[44],[14],[66],[66],[92],[26],[6
6],[84],[19],[96],[89],[47],[26],[27],[16],[16],[16],[16],[19],[96],[88],[23],[59],[90],[85],[78],[58],[37],[72],[13],[24],[77],[25],[11],[80],[89],[16],[55],[20],[58],[98],[54],[73],[80],[68],[2],[65],[87],[81],[64],[64],[82],[32],[48],[13],[34],[96],[42],[89],[39],[47],[61],[71],[17],[7],[51],[52],[82],[55],[69],[56],[9],[54],[28],[71],[40],[68],[83],[58],[29],[75],[92],[64],[52],[94],[10],[49],[19],[78],[28],[87],[59],[31],[6],[95],[2],[68],[84],[36],[80],[60],[20],[16],[46],[27],[56],[80],[98],[67],[76],[99],[56],[79],[26],[12],[58],[44],[7],[83],[5],[48],[36],[61],[71],[61],[0],[17],[70],[39],[2],[95],[43],[32],[77],[57],[94],[2],[26],[70],[96],[49],[16],[40],[36],[52],[73],[84],[34],[6],[70],[0],[4],[7],[57],[96],[9],[15],[69],[19],[39],[63],[27],[88],[82],[2],[42],[55],[78],[41],[56],[56],[34],[48],[69],[17],[55],[97],[98],[1],[74],[83],[26],[5],[70],[58],[54],[4],[66],[85],[92],[60],[49],[29],[80],[60],[99],[62],[46],[87],[37],[16],[38],[33],[89],[74],[9],[56],[85],[58],[79],[57],[24],[0],[56],[38],[37],[33],[82],[59],[67],[41],[24],[80],[22],[80],[68],[40],[24],[72],[36],[26],[69],[88],[12],[88],[97],[91],[8],[33],[89],[48],[94],[50],[76],[68],[61],[70],[24],[78],[63],[1],[70],[58],[64],[53],[60],[44],[0],[91],[20],[61],[63],[32],[1],[8],[50],[5],[94],[12],[21],[88],[30],[59],[25],[80],[5],[81],[63],[25],[1],[54],[7],[87],[77],[71],[82],[56],[48],[79],[75],[93],[47],[75],[81],[77],[3],[88],[32],[53],[35],[21],[71],[79],[14],[46],[77],[5],[61],[0],[55],[68],[29],[69],[28],[74],[85],[3],[62],[50],[15],[64],[1],[10],[58],[36],[64],[87],[31],[45],[83],[4],[52],[89],[85],[63],[11],[6],[79],[46],[51],[44],[58],[38],[54],[38],[56],[23],[0],[12],[55],[13],[98],[33],[44],[2],[80],[81],[33],[2],[49],[0],[19],[37],[35],[99],[79],[42],[69],[32],[12],[6],[48],[51],[64],[73],[84],[96],[86],[27],[44],[24],[14],[67],[99],[68],[35],[94],[6],[91],[20],[10],[88],[60],[93],[57],[51],[29],[63],[31],[61],[61],[79],[6],[71],[88],[27],[9],[44],[30],[32],[37],[54],[20],[32],[11],[3
1],[79],[68],[6],[53],[87],[14],[11],[92],[78],[25],[5],[85],[77],[62],[93],[34],[8],[38],[55],[20],[75],[4],[60],[42],[29],[22],[13],[4],[76],[12],[73],[74],[97],[53],[59],[58],[70],[36],[63],[60],[56],[20],[35],[54],[26],[63],[72],[73],[34],[3],[24],[66],[70],[54],[75],[42],[3],[66],[88],[67],[86],[90],[70],[80],[70],[30],[69],[92],[87],[58],[46],[22],[42],[37],[64],[28],[35],[22],[47],[59],[16],[19],[80],[92],[40],[11],[26],[65],[24],[72],[5],[82],[43],[92],[62],[17],[0],[28],[59],[30],[95],[64],[88],[54],[34],[5],[59],[23],[38],[46],[62],[46],[18],[73],[63],[56],[9],[71],[0],[13],[99],[14],[40],[4],[88],[87],[5],[94],[56],[10],[45],[17],[39],[36],[69],[84],[16],[79],[16],[41],[40],[81],[54],[63],[7],[67],[5],[95],[73],[80],[67],[28],[50],[29],[41],[70],[54],[35],[28],[80],[46],[11],[51],[91],[45],[50],[0],[82],[82],[5],[19],[87],[38],[28],[46],[38],[96],[31],[45],[87],[60],[42],[2],[36],[30],[33],[18],[94],[98],[5],[26],[27],[88],[99],[11],[13],[7],[37],[85],[70],[84],[68],[74],[50],[67],[13],[32],[99],[22],[72],[55],[59],[69],[87],[81],[11],[90],[57],[1],[66],[71],[86],[13],[81],[47],[53],[70],[39],[50],[16],[45],[94],[23],[68],[72],[18],[44],[49],[1],[26],[25],[29],[79],[44],[96],[49],[7],[79],[73],[50],[13],[48],[56],[78],[87],[5],[3],[82],[7],[43],[23],[22],[56],[34],[11],[97],[97],[78],[97],[34],[66],[15],[11],[78],[0],[20],[88],[48],[63],[63],[34],[17],[63],[2],[12],[12],[77],[60],[28],[73],[69],[95],[68],[90],[82],[36],[23],[9],[97],[29],[94],[2],[14],[7],[96],[31],[68],[69],[18],[0],[23],[39],[7],[80],[65],[74],[17],[56],[34],[2],[12],[58],[60],[54],[87],[55],[66],[33],[62],[66],[63],[11],[81],[21],[86],[91],[38],[57],[35],[38],[73],[81],[93],[73],[22],[8],[68],[14],[20],[93],[86],[81],[53],[40],[35],[56],[3],[72],[65],[79],[64],[19],[12],[99],[33],[53],[10],[7],[27],[29],[96],[16],[0],[89],[22],[89],[57],[96],[70],[30],[59],[51],[25],[35],[27],[7],[95],[49],[43],[3],[85],[9],[22],[22],[37],[91],[35],[81],[98],[2],[84],[84],[89],[65],[63],[17],[87],[36],
[60],[72],[85],[79],[90],[49],[98],[3],[1],[86],[59],[2],[36],[18],[74],[84],[9],[52],[69],[71],[65],[99],[85],[84],[25],[95],[34],[94],[29],[52],[85],[6],[13],[51],[61],[77],[74],[93],[81],[51],[45],[48],[97],[98],[39],[58],[87],[45],[75],[74],[49],[60],[89],[52],[21],[22],[66],[6],[20],[71],[88],[74],[95],[52],[64],[0],[87],[21],[8],[49],[89],[69],[46],[0],[79],[90],[82],[73],[90],[55],[68],[72],[75],[55],[30],[4],[47],[32],[59],[13],[96],[97],[95],[79],[44],[33],[38],[56],[21],[16],[29],[84],[6],[78],[20],[40],[37],[91],[29],[92],[70],[45],[67],[44],[53],[71],[30],[31],[5],[10],[21],[35],[55],[63],[53],[19],[13],[71],[64],[98],[97],[55],[87],[83],[61],[31],[41],[62],[34],[15],[56],[49],[87],[95],[26],[42],[20],[5],[78],[28],[44],[70],[1],[89],[71],[58],[53],[74],[62],[89],[18],[58],[62],[43],[93],[42],[6],[89],[95],[79],[49],[40],[86],[88],[62],[80],[93],[15],[45],[3],[28],[31],[62],[54],[76],[5],[1],[81],[20],[86],[75],[19],[21],[2],[72],[99],[91],[37],[86],[66],[43],[20],[29],[99],[76],[13],[79],[44],[35],[96],[32],[97],[82],[46],[83],[35],[83],[15],[21],[26],[23],[47],[15],[28],[28],[87],[70],[44],[7],[99],[34],[62],[47],[16],[85],[61],[24],[34],[21],[36],[30],[59],[84],[65],[6],[70],[18],[45],[47],[77],[57],[72],[92],[68],[34],[73],[16],[54],[73],[48],[3],[11],[47],[43],[79],[17],[75],[2],[29],[69],[78],[28],[54],[43],[61],[93],[32],[40],[98],[3],[92],[64],[81],[41],[76],[77],[71],[75],[84],[22],[23],[1],[27],[74],[13],[30],[29],[65],[41],[85],[24],[83],[61],[44],[24],[40],[70],[86],[89],[21],[28],[92],[98],[25],[63],[77],[11],[72],[85],[31],[44],[66],[57],[52],[24],[43],[73],[84],[70],[99],[2],[24],[73],[97],[48],[73],[54],[93],[25],[21],[1],[92],[69],[37],[71],[99],[33],[97],[52],[14],[1],[63],[54],[47],[72],[46],[47],[10],[92],[51],[81],[62],[73],[6],[63],[3],[53],[39],[42],[69],[10],[94],[61],[85],[97],[35],[18],[97],[52],[70],[84],[21],[73],[60],[14],[34],[86],[39],[48],[42],[26],[73],[48],[10],[47],[0],[24],[13],[69],[25],[2],[9],[77],[66],[94],[16],[37
],[51],[39],[37],[67],[19],[70],[89],[50],[48],[66],[44],[64],[71],[89],[20],[78],[33],[53],[1],[2],[31],[50],[77],[54],[52],[51],[85],[83],[47],[38],[59],[37],[16],[42],[71],[87],[33],[22],[12],[74],[40],[21],[12],[85],[92],[82],[84],[60],[68],[72],[35],[65],[77],[20],[92],[86],[58],[9],[87],[18],[49],[96],[50],[61],[0],[41],[98],[3],[11],[2],[28],[87],[62],[38],[58],[55],[22],[87],[25],[50],[80],[16],[49],[75],[82],[78],[44],[97],[89],[77],[92],[30],[31],[42],[61],[12],[74],[85],[4],[40],[94],[74],[26],[54],[31],[25],[83],[69],[0],[37],[92],[85],[39],[9],[38],[93],[23],[75],[27],[49],[29],[99],[95],[61],[4],[88],[91],[85],[92],[5],[31],[86],[68],[36],[28],[99],[4],[22],[49],[20],[73],[83],[93],[58],[89],[2],[93],[52],[2],[0],[2],[13],[80],[40],[4],[86],[76],[48],[5],[97],[5],[50],[25],[55],[88],[84],[37],[30],[78],[26],[21],[56],[14],[3],[85],[95],[31],[9],[63],[96],[10],[29],[13],[55],[53],[34],[14],[1],[99],[37],[76],[84],[89],[3],[87],[71],[20],[13],[32],[85],[45],[24],[82],[10],[18],[20],[45],[32],[68],[34],[90],[35],[84],[32],[40],[21],[32],[50],[92],[28],[96],[98],[94],[11],[87],[49],[83],[77],[64],[31],[28],[18],[93],[56],[13],[57],[12],[46],[25],[75],[27],[33],[2],[10],[74],[55],[46],[16],[91],[12],[96],[65],[94],[82],[20],[28],[66],[94],[17],[39],[95],[42],[49],[61],[76],[24],[41],[96],[73],[89],[55],[19],[91],[43],[33],[80],[33],[89],[58],[64],[55],[60],[24],[56],[7],[62],[89],[29],[92],[84],[81],[11],[95],[79],[17],[2],[24],[8],[32],[30],[34],[7],[26],[14],[90],[57],[37],[16],[45],[3],[7],[53],[80],[62],[3],[29],[42],[40],[20],[37],[40],[85],[79],[28],[11],[49],[45],[26],[35],[45],[19],[72],[54],[33],[19],[26],[46],[47],[74],[68],[94],[62],[47],[46],[37],[71],[4],[13],[84],[76],[97],[14],[73],[52],[77],[61],[54],[82],[25],[47],[6],[87],[92],[39],[65],[3],[27],[3],[70],[42],[65],[67],[77],[94],[94],[61],[52],[37],[58],[6],[0],[44],[67],[74],[37],[32],[71],[18],[94],[79],[22],[71],[67],[59],[69],[39],[24],[76],[46],[39],[23],[39],[92],[79],[76],[98],[72],
[36],[16],[62],[66],[4],[98],[57],[38],[67],[49],[27],[52],[33],[16],[48],[13],[46],[78],[56],[72],[73],[78],[65],[94],[67],[8],[87],[73],[54],[15],[7],[33],[35],[1],[95],[49],[39],[90],[79],[65],[52],[97],[89],[71],[54],[56],[6],[55],[50],[62],[48],[35],[79],[92],[99],[72],[57],[52],[30],[83],[6],[17],[61],[75],[76],[22],[2],[50],[44],[91],[27],[39],[54],[88],[65],[23],[84],[50],[32],[23],[51],[53],[69],[49],[12],[95],[33],[43],[79],[87],[0],[44],[72],[43],[79],[53],[56],[16],[21],[22],[24],[96],[67],[48],[3],[57],[75],[4],[68],[21],[83],[24],[98],[34],[2],[99],[56],[78],[3],[34],[63],[79],[12],[90],[92],[32],[93],[8],[59],[41],[53],[87],[87],[39],[66],[51],[76],[50],[57],[74],[97],[74],[83],[64],[39],[74],[49],[12],[85],[82],[39],[56],[51],[72],[38],[48],[77],[95],[90],[73],[6],[75],[2],[2],[65],[36],[42],[39],[61],[21],[48],[82],[39],[29],[54],[6],[77],[42],[18],[66],[79],[18],[86],[96],[39],[56],[6],[64],[96],[21],[18],[9],[41],[87],[93],[87],[86],[75],[52],[57],[42],[73],[30],[66],[91],[77],[17],[61],[49],[42],[20],[45],[79],[16],[15],[84],[71],[5],[34],[61],[61],[70],[34],[75],[44],[57],[78],[38],[32],[56],[59],[83],[57],[4],[25],[16],[96],[63],[44],[1],[85],[9],[39],[79],[13],[7],[34],[41],[43],[39],[38],[35],[72],[98],[24],[87],[15],[54],[96],[83],[34],[22],[75],[34],[43],[54],[10],[38],[34],[20],[53],[17],[6],[92],[18],[74],[96],[60],[57],[78],[26],[36],[20],[76],[15],[17],[19],[83],[65],[50],[6],[7],[71],[39],[70],[77],[71],[57],[42],[56],[75],[40],[2],[62],[71],[18],[76],[96],[21],[62],[79],[39],[58],[24],[56],[23],[88],[79],[98],[2],[26],[80],[99],[6],[85],[9],[45],[92],[15],[66],[69],[78],[64],[11],[76],[75],[80],[43],[30],[78],[90],[94],[47],[85],[63],[23],[52],[11],[59],[8],[39],[27],[28],[2],[43],[0],[96],[75],[38],[91],[84],[63],[32],[39],[66],[88],[67],[64],[32],[44],[68],[58],[11],[31],[77],[29],[52],[45],[18],[13],[91],[55],[77],[5],[0],[22],[30],[0],[41],[3],[62],[90],[96],[11],[76],[5],[54],[46],[80],[0],[90],[32],[49],[57],[57],[36],[32],[22],
[1],[72],[14],[41],[44],[55],[8],[14],[54],[13],[98],[47],[96],[25],[80],[77],[61],[69],[71],[66],[77],[22],[92],[1],[66],[12],[1],[10],[24],[26],[70],[99],[66],[3],[32],[78],[86],[25],[16],[36],[50],[39],[44],[98],[93],[24],[10],[5],[50],[77],[12],[34],[93],[13],[53],[57],[28],[25],[75],[50],[25],[50],[80],[52],[59],[83],[29],[47],[78],[45],[90],[2],[91],[42],[0],[72],[2],[27],[60],[9],[48],[31],[0],[48],[5],[39],[12],[48],[71],[52],[74],[34],[44],[89],[6],[45],[34],[18],[0],[40],[34],[91],[59],[67],[7],[97],[80],[77],[99],[54],[45],[4],[85],[56],[89],[53],[48],[23],[98],[95],[91],[39],[83],[0],[40],[93],[45],[36],[71],[69],[87],[25],[80],[93],[87],[32],[61],[10],[39],[57],[48],[85],[97],[84],[6],[61],[40],[45],[85],[50],[12],[83],[46],[48],[98],[99],[57],[27],[92],[79],[66],[21],[0],[83],[51],[5],[99],[73],[48],[60],[85],[55],[55],[95],[20],[54],[51],[52],[66],[77],[94],[21],[51],[80],[26],[3],[82],[52],[44],[20],[2],[5],[96],[70],[88],[5],[71],[37],[79],[66],[62],[31],[7],[3],[70],[47],[91],[87],[87],[12],[20],[85],[38],[42],[5],[60],[22],[69],[58],[37],[47],[75],[1],[51],[33],[64],[52],[49],[35],[2],[34],[90],[69],[74],[2],[90],[74],[21],[50],[29],[15],[89],[9],[19],[7],[17],[72],[68],[84],[96],[66],[27],[29],[9],[17],[67],[87],[59],[47],[60],[86],[18],[32],[44],[12],[53],[66],[96],[55],[27],[3],[96],[47],[60],[14],[58],[63],[0],[87],[96],[14],[0],[91],[65],[28],[32],[50],[75],[81],[40],[58],[75],[6],[58],[15],[63],[50],[71],[39],[11],[53],[15],[36],[19],[79],[60],[60],[52],[67],[6],[95],[81],[48],[58],[69],[50],[37],[93],[5],[64],[55],[99],[61],[45],[82],[36],[14],[40],[6],[57],[94],[77],[97],[25],[98],[36],[52],[12],[86],[19],[1],[97],[75],[7],[43],[48],[51],[53],[28],[65],[7],[48],[33],[85],[96],[76],[44],[80],[72],[20],[46],[37],[66],[27],[49],[5],[33],[45],[46],[23],[43],[2],[79],[72],[19],[46],[16],[96],[6],[57],[37],[86],[83],[14],[37],[6],[42],[51],[95],[58],[95],[72],[47],[99],[48],[70],[15],[53],[22],[15],[22],[94],[94],[73],[93],[12],[36],[61],[35],[4
],[79],[12],[8],[74],[93],[21],[51],[68],[48],[48],[17],[52],[14],[2],[89],[15],[96],[70],[48],[39],[53],[28],[42],[66],[45],[48],[54],[48],[84],[35],[30],[96],[14],[89],[40],[48],[50],[33],[40],[86],[9],[25],[24],[63],[68],[40],[34],[39],[18],[72],[39],[71],[25],[91],[94],[22],[24],[45],[46],[94],[69],[59],[38],[14],[3],[15],[22],[27],[47],[91],[16],[63],[98],[58],[5],[51],[76],[92],[65],[9],[91],[16],[29],[47],[33],[56],[82],[82],[78],[48],[14],[58],[91],[37],[91],[8],[36],[35],[27],[96],[12],[69],[13],[92],[44],[26],[15],[82],[47],[53],[90],[62],[33],[67],[31],[32],[97],[88],[31],[60],[69],[32],[11],[53],[49],[73],[7],[65],[61],[60],[26],[46],[69],[78],[64],[85],[70],[83],[27],[33],[41],[32],[60],[57],[85],[96],[61],[15],[97],[81],[17],[88],[49],[93],[77],[68],[41],[92],[13],[66],[0],[34],[20],[30],[38],[15],[8],[55],[41],[91],[78],[55],[60],[14],[78],[30],[61],[56],[12],[15],[48],[50],[95],[88],[43],[27],[1],[88],[88],[41],[89],[5],[24],[31],[21],[31],[23],[62],[9],[14],[79],[19],[66],[53],[85],[15],[65]]
# lists = lists[:10000]
# lists = [[], []]
newList = []
for i in lists:
    new = head = ListNode(0)
    for j in i:
        head.next = ListNode(j)
        head = head.next
    newList.append(new.next)
lists = newList
for i in lists:
    head = i
    while head:
        print(head.val)
        head = head.next
    print()
head = Solution().mergeKLists(lists)
counter = 0
while head:
    print(head.val)
    head = head.next
    counter += 1
print("counter", counter)
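The harness above assumes a `ListNode` class and a `Solution.mergeKLists` method defined earlier in the file, which are not shown here. A minimal heap-based sketch of what they might look like (the names and signatures follow the harness; this is an assumption, not the original implementation):

```python
import heapq


class ListNode:
    """Minimal singly linked list node, as assumed by the harness above."""

    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next


class Solution:
    def mergeKLists(self, lists):
        # Seed the heap with the head of each non-empty list. The index i is a
        # tie-breaker so equal values never fall through to comparing nodes.
        heap = [(node.val, i, node) for i, node in enumerate(lists) if node]
        heapq.heapify(heap)
        dummy = tail = ListNode(0)
        while heap:
            _, i, node = heapq.heappop(heap)
            tail.next = node
            tail = node
            if node.next:
                heapq.heappush(heap, (node.next.val, i, node.next))
        return dummy.next
```

With k lists and n total nodes this runs in O(n log k), since the heap never holds more than one node per list.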
| 673.171053 | 49,028 | 0.391353 | 10,244 | 51,161 | 1.953339 | 0.017962 | 0.001399 | 0.002399 | 0.001649 | 0.007696 | 0.007696 | 0.003298 | 0.003298 | 0 | 0 | 0 | 0.376681 | 0.011904 | 51,161 | 75 | 49,029 | 682.146667 | 0.019149 | 0.015461 | 0 | 0.27907 | 0 | 0 | 0.000417 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.093023 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3d1ab2cea239d31af717fe941819098f4111a4f2 | 48 | py | Python | unidef/languages/common/type_inference/__init__.py | qiujiangkun/unidef | 6d3ca31a6b1d498f38f483d4174f79f7fe920f65 | [
"MIT"
] | 4 | 2021-11-08T10:01:19.000Z | 2022-03-17T06:27:14.000Z | unidef/languages/common/type_inference/__init__.py | qiujiangkun/unidef | 6d3ca31a6b1d498f38f483d4174f79f7fe920f65 | [
"MIT"
] | null | null | null | unidef/languages/common/type_inference/__init__.py | qiujiangkun/unidef | 6d3ca31a6b1d498f38f483d4174f79f7fe920f65 | [
"MIT"
] | null | null | null | from .node_type_processors import TypeInference
| 24 | 47 | 0.895833 | 6 | 48 | 6.833333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 48 | 1 | 48 | 48 | 0.931818 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3d1ccc4ff65850bcf2cb0ad3789544aaddcc3615 | 154 | py | Python | easy_crypto/lesson2/test_task5.py | PeteCoward/teach-python | 2a63ece83151631ab4dcf868c361acdfe4e6c85f | [
"MIT"
] | 1 | 2015-12-19T00:38:46.000Z | 2015-12-19T00:38:46.000Z | easy_crypto/lesson2/test_task5.py | PeteCoward/teach-python | 2a63ece83151631ab4dcf868c361acdfe4e6c85f | [
"MIT"
] | null | null | null | easy_crypto/lesson2/test_task5.py | PeteCoward/teach-python | 2a63ece83151631ab4dcf868c361acdfe4e6c85f | [
"MIT"
] | null | null | null | from .task5 import get_all_shifts
def test_get_all_shifts():
    assert get_all_shifts(bytearray([0])) == [bytearray([i % 256]) for i in range(1, 256)]
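The assertion above pins down the expected behaviour: every Caesar-style shift of the input by 1 through 255, wrapping modulo 256. A sketch of a `get_all_shifts` implementation that satisfies it (the real `task5` module is not shown, so this is an assumption):

```python
def get_all_shifts(data):
    """Return the input bytes shifted by every non-zero amount, modulo 256."""
    return [bytearray((b + i) % 256 for b in data) for i in range(1, 256)]
```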
| 25.666667 | 90 | 0.707792 | 26 | 154 | 3.923077 | 0.653846 | 0.176471 | 0.352941 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.068702 | 0.149351 | 154 | 5 | 91 | 30.8 | 0.709924 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
3d2b8a14ea13de0c391a261ec667b64741a830c1 | 27 | py | Python | app/celery_test_app/tasks/tasks.py | arjunbiyani/reptile | a87ed7be2345fa02b4de6ad10593dea1924892ec | [
"MIT"
] | null | null | null | app/celery_test_app/tasks/tasks.py | arjunbiyani/reptile | a87ed7be2345fa02b4de6ad10593dea1924892ec | [
"MIT"
] | null | null | null | app/celery_test_app/tasks/tasks.py | arjunbiyani/reptile | a87ed7be2345fa02b4de6ad10593dea1924892ec | [
"MIT"
] | null | null | null | from celery import Celery
| 9 | 25 | 0.814815 | 4 | 27 | 5.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.185185 | 27 | 2 | 26 | 13.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e9fcf805cb65205246bc3de8371b93f234e6a382 | 40 | py | Python | pyqlc/utils/__init__.py | realForbis/pyqlc | 4469b51c79ef875d55f8855a77ee8a2162a53d98 | [
"MIT"
] | null | null | null | pyqlc/utils/__init__.py | realForbis/pyqlc | 4469b51c79ef875d55f8855a77ee8a2162a53d98 | [
"MIT"
] | 1 | 2021-06-01T18:08:08.000Z | 2021-06-01T18:08:08.000Z | pyqlc/utils/__init__.py | realForbis/qlc-python-SDK | 4469b51c79ef875d55f8855a77ee8a2162a53d98 | [
"MIT"
] | null | null | null | from . import crypto, helper, exceptions | 40 | 40 | 0.8 | 5 | 40 | 6.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 40 | 1 | 40 | 40 | 0.914286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1816eca2e5d071055ddef07c6aa8f55eddbe62d7 | 32 | py | Python | serivces/user_mgt/rest/entity/__init__.py | oneInsect/magic-box | 0d2e9fe621558961d3e14b5492c7de2cd21d053e | [
"MIT"
] | 19 | 2020-08-28T15:55:57.000Z | 2020-12-08T11:45:46.000Z | serivces/user_mgt/rest/entity/__init__.py | oneInsect/magic-box | 0d2e9fe621558961d3e14b5492c7de2cd21d053e | [
"MIT"
] | 4 | 2021-03-15T15:08:04.000Z | 2022-02-09T22:29:45.000Z | serivces/user_mgt/rest/entity/__init__.py | oneInsect/magic-box | 0d2e9fe621558961d3e14b5492c7de2cd21d053e | [
"MIT"
] | null | null | null |
from .mapper import UserMapper | 10.666667 | 30 | 0.8125 | 4 | 32 | 6.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15625 | 32 | 3 | 30 | 10.666667 | 0.962963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1822a2d61a084d8709a9d8162c5af6eb117b04ea | 27 | py | Python | py-polars/legacy/pypolars/frame.py | Spirans/polars | 7774f419fdbf79bc4c4ec3bd6f0f72d87b32a70c | [
"MIT"
] | 3,395 | 2021-05-06T13:46:12.000Z | 2022-03-31T23:50:15.000Z | py-polars/legacy/pypolars/frame.py | Spirans/polars | 7774f419fdbf79bc4c4ec3bd6f0f72d87b32a70c | [
"MIT"
] | 1,253 | 2021-05-06T15:05:23.000Z | 2022-03-31T23:31:23.000Z | py-polars/legacy/pypolars/frame.py | Spirans/polars | 7774f419fdbf79bc4c4ec3bd6f0f72d87b32a70c | [
"MIT"
] | 263 | 2021-05-08T07:37:45.000Z | 2022-03-30T05:19:55.000Z | from polars.frame import *
| 13.5 | 26 | 0.777778 | 4 | 27 | 5.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 27 | 1 | 27 | 27 | 0.913043 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
43fa92f8bdda09db839e3bc1da895403c9a9c03a | 207 | py | Python | www.py | zhenxiyinger/FlaskOrderingMiniprogram | 070385d7a522a2d0c8c8c6449f5c6eb12e49a74c | [
"Apache-2.0"
] | 6 | 2020-04-30T08:05:51.000Z | 2021-12-23T02:49:01.000Z | www.py | zhenxiyinger/FlaskOrderingMiniprogram | 070385d7a522a2d0c8c8c6449f5c6eb12e49a74c | [
"Apache-2.0"
] | null | null | null | www.py | zhenxiyinger/FlaskOrderingMiniprogram | 070385d7a522a2d0c8c8c6449f5c6eb12e49a74c | [
"Apache-2.0"
] | 2 | 2020-06-15T03:30:45.000Z | 2020-08-02T11:21:03.000Z | # -*- coding: utf-8 -*-
from application import app
from web.controllers.index import route_index
app.register_blueprint(route_index, url_prefix="/")
def is_import():
    return "Blue Print Start Inject"
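The snippet above imports `route_index` from `web.controllers.index` and mounts it at the root prefix. A minimal sketch of what that controller module might contain (hypothetical; the real module is not shown — all that the snippet requires is that it export a Flask blueprint named `route_index`):

```python
from flask import Blueprint

# Hypothetical reconstruction of web/controllers/index.py: a blueprint
# exposing a single index route under whatever prefix the app mounts it at.
route_index = Blueprint("index", __name__)


@route_index.route("/")
def index():
    return "Hello, index"
```

Calling `app.register_blueprint(route_index, url_prefix="/")` then attaches every route defined on the blueprint under `/`, which is what `www.py` does above.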
| 20.7 | 51 | 0.743961 | 29 | 207 | 5.137931 | 0.758621 | 0.134228 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005618 | 0.140097 | 207 | 9 | 52 | 23 | 0.831461 | 0.101449 | 0 | 0 | 0 | 0 | 0.130435 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | true | 0 | 0.6 | 0.2 | 1 | 0.2 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
a120a57b8d6339e50f95a46d30baf5236c395058 | 4,719 | py | Python | authors/tests/data/test_password_reset.py | andela/ah-the-jedi-backend | ba429dfcec577bd6d52052673c1c413835f65988 | [
"BSD-3-Clause"
] | 1 | 2019-12-25T18:59:34.000Z | 2019-12-25T18:59:34.000Z | authors/tests/data/test_password_reset.py | katherine95/ah-the-jedi-backend | ba429dfcec577bd6d52052673c1c413835f65988 | [
"BSD-3-Clause"
] | 26 | 2019-04-23T11:20:35.000Z | 2022-03-11T23:45:54.000Z | authors/tests/data/test_password_reset.py | katherine95/ah-the-jedi-backend | ba429dfcec577bd6d52052673c1c413835f65988 | [
"BSD-3-Clause"
] | 8 | 2019-05-21T06:54:34.000Z | 2019-11-18T19:45:22.000Z | from rest_framework import status
from .base_test import BaseTest
class UserLoginTest(BaseTest):
    """
    Test suite for password reset
    """

    def setUp(self):
        """Define the test client and required test variables."""
        self.email = {
            "email": "testuser@email.com",
        }
        self.unregistered_email = {
            "email": "invalid@email.com"
        }
        self.new_pass = {
            "password": "NewTechyPass3"
        }
        self.wrong_pass = {
            "password": "wrong$#$"
        }
        self.invalid_email = {
            "email": "invalid_email"
        }
        BaseTest.setUp(self)
        signup = self.signup_user()
        uid = signup.data.get('data')['id']
        token = signup.data.get('data')['token']
        self.activate_user(uid=uid, token=token)

    def test_successful_email_reset_link_sending(self):
        """
        Test that a successfully signed up user can
        request a password reset link
        """
        data = self.email
        self.response = self.client.post(
            '/api/users/reset_password/',
            data,
            format="json"
        )
        self.assertEqual(self.response.status_code,
                         status.HTTP_200_OK)

    def test_successful_password_reset(self):
        """
        Test that a signed up user can reset their password
        with a valid uid and token
        """
        data = self.email
        self.result = self.client.post(
            '/api/users/reset_password/',
            data,
            format="json"
        )
        token = self.result.data.get('token')
        uid = self.result.data.get('uid')
        self.response = self.client.patch(
            "/api/users/reset_password_confirm/?uid={}&token={}".format(
                uid, token),
            self.new_pass,
            format="json"
        )
        self.assertEqual(self.response.status_code,
                         status.HTTP_200_OK)

    def test_cannot_send_reset_link_to_unregistered_user(self):
        """
        Test that a reset link cannot be requested for an
        unregistered email address
        """
        data = self.unregistered_email
        self.response = self.client.post(
            '/api/users/reset_password/',
            data,
            format="json"
        )
        self.assertEqual(self.response.status_code,
                         status.HTTP_404_NOT_FOUND)

    def test_throws_error_on_empty_email_field(self):
        """
        Test that a reset request without an email payload
        returns a 400 error
        """
        self.response = self.client.post(
            '/api/users/reset_password/',
            format="json"
        )
        self.assertEqual(self.response.status_code,
                         status.HTTP_400_BAD_REQUEST)

    def test_throws_error_on_invalid_email_format(self):
        """
        Test that a reset request with a malformed email
        address returns a 400 error
        """
        data = self.invalid_email
        self.response = self.client.post(
            '/api/users/reset_password/',
            data,
            format="json"
        )
        self.assertEqual(self.response.status_code,
                         status.HTTP_400_BAD_REQUEST)

    def test_throws_error_on_wrong_password_format(self):
        """
        Test that confirming the reset with an invalid new
        password returns a 400 error
        """
        data = self.email
        self.result = self.client.post(
            '/api/users/reset_password/',
            data,
            format="json"
        )
        token = self.result.data.get('token')
        uid = self.result.data.get('uid')
        self.response = self.client.patch(
            "/api/users/reset_password_confirm/?uid={}&token={}".format(
                uid, token),
            self.wrong_pass,
            format="json"
        )
        self.assertEqual(self.response.status_code,
                         status.HTTP_400_BAD_REQUEST)

    def test_throws_error_if_password_key_is_missing(self):
        """
        Test that confirming the reset without a password
        key returns a 400 error
        """
        data = self.email
        self.result = self.client.post(
            '/api/users/reset_password/',
            data,
            format="json"
        )
        token = self.result.data.get('token')
        uid = self.result.data.get('uid')
        self.response = self.client.patch(
            "/api/users/reset_password_confirm/?uid={}&token={}".format(
                uid, token),
            format="json"
        )
        self.assertEqual(self.response.status_code,
                         status.HTTP_400_BAD_REQUEST)
| 26.8125 | 75 | 0.549481 | 505 | 4,719 | 4.950495 | 0.170297 | 0.0672 | 0.052 | 0.084 | 0.7184 | 0.7104 | 0.7104 | 0.7104 | 0.7104 | 0.7104 | 0 | 0.007141 | 0.347107 | 4,719 | 175 | 76 | 26.965714 | 0.804284 | 0.119305 | 0 | 0.587156 | 0 | 0 | 0.128878 | 0.083733 | 0 | 0 | 0 | 0 | 0.06422 | 1 | 0.073395 | false | 0.174312 | 0.018349 | 0 | 0.100917 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
a1348bdd4386d191546057db2c5a80bb7cfe5544 | 163 | py | Python | rest_api/bookshop/admin.py | JimBob3000/django_rest_framework | 0ea2ef6348286ddc753c29eaa1c119a036fa0b9f | [
"MIT"
] | null | null | null | rest_api/bookshop/admin.py | JimBob3000/django_rest_framework | 0ea2ef6348286ddc753c29eaa1c119a036fa0b9f | [
"MIT"
] | null | null | null | rest_api/bookshop/admin.py | JimBob3000/django_rest_framework | 0ea2ef6348286ddc753c29eaa1c119a036fa0b9f | [
"MIT"
] | null | null | null | from django.contrib import admin
from .models import Book, Author, Publisher
admin.site.register(Book)
admin.site.register(Author)
admin.site.register(Publisher)
| 23.285714 | 43 | 0.815951 | 23 | 163 | 5.782609 | 0.478261 | 0.203008 | 0.383459 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.08589 | 163 | 6 | 44 | 27.166667 | 0.892617 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
a137d4f3da1b031badb80764569be2a8a72c45ec | 9,984 | py | Python | test/test_cmd_delete.py | jwodder/javaproperties-cli | e1bbe04bbb961bf0832a591bdb8edc6f240cc950 | [
"MIT"
] | 4 | 2017-04-27T02:11:05.000Z | 2021-12-14T14:53:30.000Z | test/test_cmd_delete.py | jwodder/javaproperties-cli | e1bbe04bbb961bf0832a591bdb8edc6f240cc950 | [
"MIT"
] | null | null | null | test/test_cmd_delete.py | jwodder/javaproperties-cli | e1bbe04bbb961bf0832a591bdb8edc6f240cc950 | [
"MIT"
] | 2 | 2017-06-11T02:13:53.000Z | 2019-02-07T22:15:54.000Z | from click.testing import CliRunner
import pytest
from javaproperties_cli.__main__ import javaproperties
INPUT = (
    b"foo: bar\n"
    b"key = value\n"
    b"zebra apple\n"
    b"e\\u00f0=escaped\n"
    b"e\\\\u00f0=not escaped\n"
    b"latin-1 = \xF0\n"
    b"bmp = \\u2603\n"
    b"astral = \\uD83D\\uDC10\n"
)
@pytest.mark.parametrize(
    "args,rc,output",
    [
        (
            ["delete", "--preserve-timestamp", "-", "key"],
            0,
            b"foo: bar\n"
            b"zebra apple\n"
            b"e\\u00f0=escaped\n"
            b"e\\\\u00f0=not escaped\n"
            b"latin-1 = \xF0\n"
            b"bmp = \\u2603\n"
            b"astral = \\uD83D\\uDC10\n",
        ),
        (
            ["delete", "--preserve-timestamp", "-", "nonexistent"],
            0,
            INPUT,
        ),
        (
            ["delete", "-", "key"],
            0,
            b"#Mon Nov 07 15:29:40 EST 2016\n"
            b"foo: bar\n"
            b"zebra apple\n"
            b"e\\u00f0=escaped\n"
            b"e\\\\u00f0=not escaped\n"
            b"latin-1 = \xF0\n"
            b"bmp = \\u2603\n"
            b"astral = \\uD83D\\uDC10\n",
        ),
        (
            ["delete", "-", "nonexistent"],
            0,
            b"#Mon Nov 07 15:29:40 EST 2016\n" + INPUT,
        ),
        (
            ["delete", "--preserve-timestamp", "-", "key", "nonexistent"],
            0,
            b"foo: bar\n"
            b"zebra apple\n"
            b"e\\u00f0=escaped\n"
            b"e\\\\u00f0=not escaped\n"
            b"latin-1 = \xF0\n"
            b"bmp = \\u2603\n"
            b"astral = \\uD83D\\uDC10\n",
        ),
        (
            ["delete", "--preserve-timestamp", "--escaped", "-", "e\\u00F0"],
            0,
            b"foo: bar\n"
            b"key = value\n"
            b"zebra apple\n"
            b"e\\\\u00f0=not escaped\n"
            b"latin-1 = \xF0\n"
            b"bmp = \\u2603\n"
            b"astral = \\uD83D\\uDC10\n",
        ),
        (
            ["delete", "--preserve-timestamp", "--escaped", "-", "x\\u00F0"],
            0,
            INPUT,
        ),
        (
            ["delete", "--preserve-timestamp", "-", "e\\u00f0"],
            0,
            b"foo: bar\n"
            b"key = value\n"
            b"zebra apple\n"
            b"e\\u00f0=escaped\n"
            b"latin-1 = \xF0\n"
            b"bmp = \\u2603\n"
            b"astral = \\uD83D\\uDC10\n",
        ),
        (
            ["delete", "--preserve-timestamp", "-", "x\\u00f0"],
            0,
            INPUT,
        ),
        (
            ["delete", "--preserve-timestamp", "-", b"e\xC3\xB0"],
            0,
            b"foo: bar\n"
            b"key = value\n"
            b"zebra apple\n"
            b"e\\\\u00f0=not escaped\n"
            b"latin-1 = \xF0\n"
            b"bmp = \\u2603\n"
            b"astral = \\uD83D\\uDC10\n",
        ),
        (
            ["delete", "--preserve-timestamp", "-", b"x\xC3\xB0"],
            0,
            INPUT,
        ),
        (
            ["delete", "--preserve-timestamp", "-", "key", "key"],
            0,
            b"foo: bar\n"
            b"zebra apple\n"
            b"e\\u00f0=escaped\n"
            b"e\\\\u00f0=not escaped\n"
            b"latin-1 = \xF0\n"
            b"bmp = \\u2603\n"
            b"astral = \\uD83D\\uDC10\n",
        ),
        (
            ["delete", "--preserve-timestamp", "-", "key", "bmp"],
            0,
            b"foo: bar\n"
            b"zebra apple\n"
            b"e\\u00f0=escaped\n"
            b"e\\\\u00f0=not escaped\n"
            b"latin-1 = \xF0\n"
            b"astral = \\uD83D\\uDC10\n",
        ),
        (
            ["delete", "--preserve-timestamp", "-", "bmp", "key"],
            0,
            b"foo: bar\n"
            b"zebra apple\n"
            b"e\\u00f0=escaped\n"
            b"e\\\\u00f0=not escaped\n"
            b"latin-1 = \xF0\n"
            b"astral = \\uD83D\\uDC10\n",
        ),
    ],
)
def test_cmd_delete(args, rc, output):
    r = CliRunner().invoke(javaproperties, args, input=INPUT)
    assert r.exit_code == rc, r.stdout_bytes
    assert r.stdout_bytes == output
def test_cmd_delete_del_bad_surrogate():
    r = CliRunner().invoke(
        javaproperties,
        ["delete", "--preserve-timestamp", "-", "bad-surrogate"],
        input=b"good-surrogate = \\uD83D\\uDC10\n" b"bad-surrogate = \\uDC10\\uD83D\n",
    )
    assert r.exit_code == 0
    assert r.stdout_bytes == b"good-surrogate = \\uD83D\\uDC10\n"


def test_cmd_delete_keep_bad_surrogate():
    r = CliRunner().invoke(
        javaproperties,
        ["delete", "--preserve-timestamp", "-", "good-surrogate"],
        input=b"good-surrogate = \\uD83D\\uDC10\n" b"bad-surrogate = \\uDC10\\uD83D\n",
    )
    assert r.exit_code == 0
    assert r.stdout_bytes == b"bad-surrogate = \\uDC10\\uD83D\n"
@pytest.mark.parametrize(
    "args,rc,output",
    [
        (
            ["delete", "--preserve-timestamp", "-", "key"],
            0,
            b"#Tue Feb 25 19:13:27 EST 2020\n"
            b"foo: bar\n"
            b"zebra apple\n"
            b"e\\u00f0=escaped\n"
            b"e\\\\u00f0=not escaped\n"
            b"latin-1 = \xF0\n"
            b"bmp = \\u2603\n"
            b"astral = \\uD83D\\uDC10\n",
        ),
        (
            ["delete", "--preserve-timestamp", "-", "nonexistent"],
            0,
            b"#Tue Feb 25 19:13:27 EST 2020\n" + INPUT,
        ),
        (
            ["delete", "-", "key"],
            0,
            b"#Mon Nov 07 15:29:40 EST 2016\n"
            b"foo: bar\n"
            b"zebra apple\n"
            b"e\\u00f0=escaped\n"
            b"e\\\\u00f0=not escaped\n"
            b"latin-1 = \xF0\n"
            b"bmp = \\u2603\n"
            b"astral = \\uD83D\\uDC10\n",
        ),
        (
            ["delete", "-", "nonexistent"],
            0,
            b"#Mon Nov 07 15:29:40 EST 2016\n" + INPUT,
        ),
    ],
)
def test_cmd_delete_with_timestamp(args, rc, output):
    r = CliRunner().invoke(
        javaproperties, args, input=b"#Tue Feb 25 19:13:27 EST 2020\n" + INPUT
    )
    assert r.exit_code == rc, r.stdout_bytes
    assert r.stdout_bytes == output
def test_cmd_delete_repeated():
    r = CliRunner().invoke(
        javaproperties,
        ["delete", "--preserve-timestamp", "-", "repeated"],
        input=(
            b"foo: bar\n"
            b"repeated = first\n"
            b"key = value\n"
            b"zebra apple\n"
            b"repeated = second\n"
        ),
    )
    assert r.exit_code == 0, r.stdout_bytes
    assert r.stdout_bytes == (b"foo: bar\n" b"key = value\n" b"zebra apple\n")
@pytest.mark.parametrize(
    "args,rc,output",
    [
        (
            ["delete", "--preserve-timestamp", "-", b"k\xC3\xABy"],
            0,
            b"foo: bar\n" b"zebra apple\n",
        ),
        (
            ["delete", "--preserve-timestamp", "--escaped", "-", "k\\u00EBy"],
            0,
            b"foo: bar\n" b"zebra apple\n",
        ),
    ],
)
def test_cmd_delete_raw_latin1_key(args, rc, output):
    r = CliRunner().invoke(
        javaproperties, args, input=(b"foo: bar\n" b"k\xEBy = value\n" b"zebra apple\n")
    )
    assert r.exit_code == rc, r.stdout_bytes
    assert r.stdout_bytes == output
@pytest.mark.parametrize(
    "args,rc,output",
    [
        (
            ["delete", "--preserve-timestamp", "-", b"k\xC3\xABy"],
            0,
            b"foo: bar\n" b"k\xC3\xABy = value\n" b"zebra apple\n",
        ),
        (
            ["delete", "--preserve-timestamp", "--escaped", "-", "k\\u00EBy"],
            0,
            b"foo: bar\n" b"k\xC3\xABy = value\n" b"zebra apple\n",
        ),
        (
            [
                "delete",
                "--preserve-timestamp",
                "--encoding",
                "utf-8",
                "-",
                b"k\xC3\xABy",
            ],
            0,
            b"foo: bar\n" b"zebra apple\n",
        ),
        (
            [
                "delete",
                "--preserve-timestamp",
                "-E",
                "utf-8",
                "--escaped",
                "-",
                "k\\u00EBy",
            ],
            0,
            b"foo: bar\n" b"zebra apple\n",
        ),
    ],
)
def test_cmd_delete_raw_utf8_key(args, rc, output):
    r = CliRunner().invoke(
        javaproperties,
        args,
        input=(b"foo: bar\n" b"k\xC3\xABy = value\n" b"zebra apple\n"),
    )
    assert r.exit_code == rc, r.stdout_bytes
    assert r.stdout_bytes == output
@pytest.mark.parametrize(
    "args,rc,output",
    [
        (["delete", "-T", "-", "key"], 0, b"foo: bar\n" b"zebra apple\n"),
        (["delete", "-T", "-", "zebra"], 0, b"foo: bar\n" b"key = value\n"),
        (
            ["delete", "-T", "-", "nonexistent"],
            0,
            b"foo: bar\n" b"key = value\n" b"zebra apple\n",
        ),
    ],
)
@pytest.mark.parametrize(
    "inp",
    [
        b"foo: bar\n" b"key = value\n" b"zebra apple\\\n",
        b"foo: bar\n" b"key = value\n" b"zebra apple\\",
        b"foo: bar\n" b"key = value\n" b"zebra apple",
    ],
)
def test_cmd_delete_fix_final_eol(args, rc, inp, output):
    r = CliRunner().invoke(javaproperties, args, input=inp)
    assert r.exit_code == rc, r.stdout_bytes
    assert r.stdout_bytes == output
def test_cmd_delete_header_comments():
    r = CliRunner().invoke(
        javaproperties,
        ["delete", "-", "key"],
        input=(
            b"#This is a comment.\n"
            b" ! So is this.\n"
            b"foo: bar\n"
            b"key = value\n"
            b"zebra apple\n"
        ),
    )
    assert r.exit_code == 0, r.stdout_bytes
    assert r.stdout_bytes == (
        b"#This is a comment.\n"
        b" ! So is this.\n"
        b"#Mon Nov 07 15:29:40 EST 2016\n"
        b"foo: bar\n"
        b"zebra apple\n"
    )
# --outfile
# universal newlines?
# reading from a file
# invalid \u escape
| 27.810585 | 88 | 0.438802 | 1,201 | 9,984 | 3.591174 | 0.089092 | 0.0524 | 0.04869 | 0.055646 | 0.916068 | 0.894273 | 0.881753 | 0.82773 | 0.82773 | 0.776026 | 0 | 0.057742 | 0.379006 | 9,984 | 358 | 89 | 27.888268 | 0.637903 | 0.006711 | 0 | 0.693694 | 0 | 0 | 0.365617 | 0 | 0 | 0 | 0 | 0 | 0.054054 | 1 | 0.027027 | false | 0 | 0.009009 | 0 | 0.036036 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a1433343913db5320f0911e09b8a5d3fa82a5610 | 105 | py | Python | script/pages/__init__.py | ybkuroki/selenium-e2e-sample | 18a7e92d9b338104ac8b418a6987cadfd1c12d39 | [
"MIT"
] | 1 | 2021-09-08T20:05:40.000Z | 2021-09-08T20:05:40.000Z | script/pages/__init__.py | ybkuroki/selenium-e2e-sample | 18a7e92d9b338104ac8b418a6987cadfd1c12d39 | [
"MIT"
] | null | null | null | script/pages/__init__.py | ybkuroki/selenium-e2e-sample | 18a7e92d9b338104ac8b418a6987cadfd1c12d39 | [
"MIT"
] | null | null | null | from .page_object import PageObject
from .login_page import LoginPage
from .regist_page import RegistPage | 35 | 35 | 0.866667 | 15 | 105 | 5.866667 | 0.6 | 0.227273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.104762 | 105 | 3 | 36 | 35 | 0.93617 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |