hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
79dab15a110466c096a01faaff87f384a1042ec9 | 84 | py | Python | xparse/regular/__init__.py | aiyogi01/xparse | e50f48493ec1835cc79195a8805a74e0d003860f | [
"MIT"
] | null | null | null | xparse/regular/__init__.py | aiyogi01/xparse | e50f48493ec1835cc79195a8805a74e0d003860f | [
"MIT"
] | null | null | null | xparse/regular/__init__.py | aiyogi01/xparse | e50f48493ec1835cc79195a8805a74e0d003860f | [
"MIT"
] | 1 | 2020-05-08T09:42:23.000Z | 2020-05-08T09:42:23.000Z | from xparse.regular.automata import Nfa, Dfa
from xparse.regular.regex import match
| 28 | 44 | 0.833333 | 13 | 84 | 5.384615 | 0.692308 | 0.285714 | 0.485714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.107143 | 84 | 2 | 45 | 42 | 0.933333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
8dd22baf207a6e1b117fa5db8359f57e4a052f7b | 7,279 | py | Python | test/test_synchronized_basic_math.py | mnpatil17/threading-tools | e8f4a63e4c5f54c802232c38e27936ffc74e2baf | [
"BSD-3-Clause"
] | null | null | null | test/test_synchronized_basic_math.py | mnpatil17/threading-tools | e8f4a63e4c5f54c802232c38e27936ffc74e2baf | [
"BSD-3-Clause"
] | null | null | null | test/test_synchronized_basic_math.py | mnpatil17/threading-tools | e8f4a63e4c5f54c802232c38e27936ffc74e2baf | [
"BSD-3-Clause"
] | null | null | null | import unittest
from threading_tools import SynchronizedNumber
NUM_TRIALS = 2500
class TestSynchronizedBasicMath(unittest.TestCase):

    #
    # testing __neg__()
    #
    def test_sync_neg(self):
        sync_num1 = SynchronizedNumber(50.0)
        res_sync_num = -sync_num1
        assert res_sync_num == -50, 'The negation should be -50. Instead it is {0}'.format(res_sync_num)
        assert res_sync_num is not sync_num1, 'The result obj should not be the original obj.'

    #
    # testing __add__() and __radd__()
    #
    def test_sync_add(self):
        sync_num1 = SynchronizedNumber(50.0)
        sync_num2 = SynchronizedNumber(50.0)
        res_sync_num = sync_num1 + sync_num2
        assert res_sync_num == 100, 'The sum should be 100. Instead it is {0}'.format(res_sync_num)
        assert res_sync_num is not sync_num1, 'The result obj should not be the addend obj.'
        assert res_sync_num is not sync_num2, 'The result obj should not be the addend obj.'

    def test_non_sync_add(self):
        sync_num1 = SynchronizedNumber(50.0)
        res_sync_num = sync_num1 + 50
        assert res_sync_num == 100, 'The sum should be 100. Instead it is {0}'.format(res_sync_num)
        assert res_sync_num is not sync_num1, 'The result obj should not be the addend obj.'

    def test_reverse_non_sync_add(self):
        sync_num1 = SynchronizedNumber(50.0)
        res_sync_num = 50 + sync_num1
        assert res_sync_num == 100, 'The sum should be 100. Instead it is {0}'.format(res_sync_num)
        assert res_sync_num is not sync_num1, 'The result obj should not be the addend obj.'

    #
    # testing __sub__() and __rsub__()
    #
    def test_sync_sub(self):
        sync_num1 = SynchronizedNumber(50.0)
        sync_num2 = SynchronizedNumber(50.0)
        res_sync_num = sync_num1 - sync_num2
        assert res_sync_num == 0, 'The difference should be 0. Instead it is {0}'.format(res_sync_num)
        assert res_sync_num is not sync_num1, 'The result obj should not be the minuend obj.'
        assert res_sync_num is not sync_num2, 'The result obj should not be the subtrahend obj.'

    def test_non_sync_sub(self):
        sync_num1 = SynchronizedNumber(50.0)
        res_sync_num = sync_num1 - 50
        assert res_sync_num == 0, 'The difference should be 0. Instead it is {0}'.format(res_sync_num)
        assert res_sync_num is not sync_num1, 'The result obj should not be the minuend obj.'

    def test_reverse_non_sync_sub(self):
        sync_num1 = SynchronizedNumber(50.0)
        res_sync_num = 60 - sync_num1
        assert res_sync_num == 10, 'The difference should be 10. Instead it is {0}'.format(res_sync_num)
        assert res_sync_num is not sync_num1, 'The result obj should not be the subtrahend obj.'

    #
    # testing __mul__() and __rmul__()
    #
    def test_sync_mul(self):
        sync_num1 = SynchronizedNumber(4.0)
        sync_num2 = SynchronizedNumber(2.0)
        res_sync_num = sync_num1 * sync_num2
        assert res_sync_num == 8.0, 'The product should be 8.0. Instead it is {0}'.format(res_sync_num)
        assert res_sync_num is not sync_num1, 'The result obj should not be the multiplicand obj.'
        assert res_sync_num is not sync_num2, 'The result obj should not be the multiplier obj.'

    def test_non_sync_mul(self):
        sync_num1 = SynchronizedNumber(4.0)
        res_sync_num = sync_num1 * 2
        assert res_sync_num == 8.0, 'The product should be 8.0. Instead it is {0}'.format(res_sync_num)
        assert res_sync_num is not sync_num1, 'The result obj should not be the multiplicand obj.'

    def test_reverse_non_sync_mul(self):
        sync_num1 = SynchronizedNumber(4.0)
        res_sync_num = 2 * sync_num1
        assert res_sync_num == 8.0, 'The product should be 8.0. Instead it is {0}'.format(res_sync_num)
        assert res_sync_num is not sync_num1, 'The result obj should not be the multiplicand obj.'

    #
    # testing __div__() and __rdiv__()
    #
    def test_sync_div(self):
        sync_num1 = SynchronizedNumber(4.0)
        sync_num2 = SynchronizedNumber(2.0)
        res_sync_num = sync_num1 / sync_num2
        assert res_sync_num == 2.0, 'The quotient should be 2.0. Instead it is {0}'.format(res_sync_num)
        assert res_sync_num is not sync_num1, 'The result obj should not be the dividend obj.'
        assert res_sync_num is not sync_num2, 'The result obj should not be the divisor obj.'

    def test_non_sync_div(self):
        sync_num1 = SynchronizedNumber(4.0)
        res_sync_num = sync_num1 / 2
        assert res_sync_num == 2.0, 'The quotient should be 2.0. Instead it is {0}'.format(res_sync_num)
        assert res_sync_num is not sync_num1, 'The result obj should not be the dividend obj.'

    def test_reverse_non_sync_div(self):
        sync_num1 = SynchronizedNumber(4.0)
        res_sync_num = 2 / sync_num1
        assert res_sync_num == 0.5, 'The quotient should be 0.5. Instead it is {0}'.format(res_sync_num)
        assert res_sync_num is not sync_num1, 'The result obj should not be the divisor obj.'

    #
    # testing __pow__() and __rpow__()
    #
    def test_sync_pow(self):
        sync_num1 = SynchronizedNumber(4.0)
        sync_num2 = SynchronizedNumber(2.0)
        res_sync_num = sync_num1 ** sync_num2
        assert res_sync_num == 16, 'The result should be 16. Instead it is {0}'.format(res_sync_num)
        assert res_sync_num is not sync_num1, 'The result obj should not be the base obj.'
        assert res_sync_num is not sync_num2, 'The result obj should not be the exponent obj.'

    def test_non_sync_pow(self):
        sync_num1 = SynchronizedNumber(4.0)
        res_sync_num = sync_num1 ** 2
        assert res_sync_num == 16, 'The result should be 16. Instead it is {0}'.format(res_sync_num)
        assert res_sync_num is not sync_num1, 'The result obj should not be the base obj.'

    def test_reverse_non_sync_pow(self):
        sync_num1 = SynchronizedNumber(4.0)
        res_sync_num = 2 ** sync_num1
        assert res_sync_num == 16, 'The result should be 16. Instead it is {0}'.format(res_sync_num)
        assert res_sync_num is not sync_num1, 'The result obj should not be the exponent obj.'

    #
    # testing __mod__() and __rmod__()
    #
    def test_sync_mod(self):
        sync_num1 = SynchronizedNumber(5.0)
        sync_num2 = SynchronizedNumber(2.0)
        res_sync_num = sync_num1 % sync_num2
        assert res_sync_num == 1, 'The remainder should be 1. Instead it is {0}'.format(res_sync_num)
        assert res_sync_num is not sync_num1, 'The result obj should not be the dividend obj.'
        assert res_sync_num is not sync_num2, 'The result obj should not be the divisor obj.'

    def test_non_sync_mod(self):
        sync_num1 = SynchronizedNumber(5.0)
        res_sync_num = sync_num1 % 2
        assert res_sync_num == 1, 'The remainder should be 1. Instead it is {0}'.format(res_sync_num)
        assert res_sync_num is not sync_num1, 'The result obj should not be the dividend obj.'

    def test_reverse_non_sync_mod(self):
        sync_num1 = SynchronizedNumber(2.0)
        res_sync_num = 5 % sync_num1
        assert res_sync_num == 1, 'The remainder should be 1. Instead it is {0}'.format(res_sync_num)
        assert res_sync_num is not sync_num1, 'The result obj should not be the divisor obj.'
| 40.21547 | 99 | 0.677703 | 1,176 | 7,279 | 3.897959 | 0.057823 | 0.125218 | 0.178883 | 0.153578 | 0.926483 | 0.919939 | 0.895942 | 0.888743 | 0.865401 | 0.85493 | 0 | 0.044497 | 0.243577 | 7,279 | 180 | 100 | 40.438889 | 0.788049 | 0.029537 | 0 | 0.522523 | 0 | 0 | 0.268958 | 0 | 0 | 0 | 0 | 0 | 0.396396 | 1 | 0.171171 | false | 0 | 0.018018 | 0 | 0.198198 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5c08f2073156c545ccb8bc0388978fcabdaace7a | 30 | py | Python | Python/hello_blueenvelope31.py | saurabhcommand/Hello-world | 647bad9da901a52d455f05ecc37c6823c22dc77e | [
"MIT"
] | 1,428 | 2018-10-03T15:15:17.000Z | 2019-03-31T18:38:36.000Z | Python/hello_blueenvelope31.py | saurabhcommand/Hello-world | 647bad9da901a52d455f05ecc37c6823c22dc77e | [
"MIT"
] | 1,162 | 2018-10-03T15:05:49.000Z | 2018-10-18T14:17:52.000Z | Python/hello_blueenvelope31.py | saurabhcommand/Hello-world | 647bad9da901a52d455f05ecc37c6823c22dc77e | [
"MIT"
] | 3,909 | 2018-10-03T15:07:19.000Z | 2019-03-31T18:39:08.000Z | print("Hello blueenvelope31")
| 15 | 29 | 0.8 | 3 | 30 | 8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.071429 | 0.066667 | 30 | 1 | 30 | 30 | 0.785714 | 0 | 0 | 0 | 0 | 0 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
3086c08234c434f8099d830de0532f476dc4c4b5 | 253 | py | Python | SCHChatBot/module/apple.py | Help-Us/SCH_University_Bot | e1a50843ad7ea496623eff9fe266408a10348fc1 | [
"MIT"
] | 1 | 2020-09-30T13:31:27.000Z | 2020-09-30T13:31:27.000Z | SCHChatBot/module/apple.py | Help-Us/SCH_University_Bot | e1a50843ad7ea496623eff9fe266408a10348fc1 | [
"MIT"
] | null | null | null | SCHChatBot/module/apple.py | Help-Us/SCH_University_Bot | e1a50843ad7ea496623eff9fe266408a10348fc1 | [
"MIT"
] | null | null | null | import message
print(message.health_room_msg)
print(message.developer_question_msg)
print(message.bus_to_sin_error_msg)
print(message.first_room_msg)
print(message.wifi_msg)
print(message.student_food_info_msg)
print('■ 신창역 지하철 출발 시간 ■\n\n• 이번 지하철은 ' )
| 28.111111 | 41 | 0.822134 | 46 | 253 | 4.282609 | 0.565217 | 0.365482 | 0.380711 | 0.192893 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.071146 | 253 | 8 | 42 | 31.625 | 0.825532 | 0 | 0 | 0 | 0 | 0 | 0.12253 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.125 | 0 | 0.125 | 0.875 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
30bc54ec0aa8d80eefbc0b91bc37730ca8650c5d | 786 | py | Python | user/vistas/widgets/nav-bar.py | ZerpaTechnology/occoa | a8c0bd2657bc058801a883109c0ec0d608d04ccc | [
"Apache-2.0"
] | null | null | null | user/vistas/widgets/nav-bar.py | ZerpaTechnology/occoa | a8c0bd2657bc058801a883109c0ec0d608d04ccc | [
"Apache-2.0"
] | null | null | null | user/vistas/widgets/nav-bar.py | ZerpaTechnology/occoa | a8c0bd2657bc058801a883109c0ec0d608d04ccc | [
"Apache-2.0"
] | null | null | null | doc+="""<p>Pills With Dropdown Example</p><ul class="nav nav-pills"><li class="active"><a href="#">Home</a></li><li><a href="#">SVN</a></li><li><a href="#">iOS</a></li><li><a href="#">VB.Net</a></li><li class="dropdown"><a class="dropdown-toggle" data-toggle="dropdown" href="#">Java <span class="caret"></span></a><ul class="dropdown-menu"><li><a href="#">Swing</a></li><li><a href="#">jMeter</a></li><li><a href="#">EJB</a></li><li class="divider"></li><li class="dropdown-submenu"><a class="dropdown-toggle" data-toggle="dropdown-menu" href="#">Java <span class="caret"></span><ul class="dropdown-menu"><li><a href="#">Swing</a></li><li><a href="#">jMeter</a></li><li><a href="#">EJB</a></li><li class="divider"></li><li><a href="#">Separated link</a></li></ul></li></ul></li><li>""" | 786 | 786 | 0.603053 | 138 | 786 | 3.434783 | 0.224638 | 0.109705 | 0.105485 | 0.151899 | 0.675105 | 0.611814 | 0.50211 | 0.341772 | 0.341772 | 0.341772 | 0 | 0 | 0.043257 | 786 | 1 | 786 | 786 | 0.630319 | 0 | 0 | 0 | 0 | 1 | 0.984752 | 0.753494 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
30e452215e8b253f871b1617b68adbd0d6d8c462 | 46 | py | Python | netbox/utilities/testing/__init__.py | aslafy-z/netbox | a5512dd4c46c005df8752fc330c1382ac22b31ea | [
"Apache-2.0"
] | 1 | 2021-09-23T00:06:51.000Z | 2021-09-23T00:06:51.000Z | netbox/utilities/testing/__init__.py | aslafy-z/netbox | a5512dd4c46c005df8752fc330c1382ac22b31ea | [
"Apache-2.0"
] | 4 | 2021-06-08T22:29:06.000Z | 2022-03-12T00:48:51.000Z | netbox/utilities/testing/__init__.py | aslafy-z/netbox | a5512dd4c46c005df8752fc330c1382ac22b31ea | [
"Apache-2.0"
] | null | null | null | from .testcases import *
from .utils import *
| 15.333333 | 24 | 0.73913 | 6 | 46 | 5.666667 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173913 | 46 | 2 | 25 | 23 | 0.894737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
30fd5384f7e7ba40dade6078323e64c3c16c143e | 49 | py | Python | hydrogels/theory/models/integrator/__init__.py | debeshmandal/brownian | bc5b2e00a04d11319c85e749f9c056b75b450ff7 | [
"MIT"
] | 3 | 2020-05-13T01:07:30.000Z | 2021-02-12T13:37:23.000Z | hydrogels/theory/models/integrator/__init__.py | debeshmandal/brownian | bc5b2e00a04d11319c85e749f9c056b75b450ff7 | [
"MIT"
] | 24 | 2020-06-04T13:48:57.000Z | 2021-12-31T18:46:52.000Z | hydrogels/theory/models/integrator/__init__.py | debeshmandal/brownian | bc5b2e00a04d11319c85e749f9c056b75b450ff7 | [
"MIT"
] | 1 | 2020-07-23T17:15:23.000Z | 2020-07-23T17:15:23.000Z | from .engine import Simulation, Equation, History | 49 | 49 | 0.836735 | 6 | 49 | 6.833333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102041 | 49 | 1 | 49 | 49 | 0.931818 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a513787d75284fd3da7d9b04876ff45801e8d94b | 32 | py | Python | __init__.py | hidura/sugelico | d3c76f358a788d5f3a891cf0a7dd7420ac3a7845 | [
"MIT"
] | null | null | null | __init__.py | hidura/sugelico | d3c76f358a788d5f3a891cf0a7dd7420ac3a7845 | [
"MIT"
] | null | null | null | __init__.py | hidura/sugelico | d3c76f358a788d5f3a891cf0a7dd7420ac3a7845 | [
"MIT"
] | null | null | null | from tools.loadKar import core
| 10.666667 | 30 | 0.8125 | 5 | 32 | 5.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15625 | 32 | 2 | 31 | 16 | 0.962963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
eb4e0df0a0807486f2a847426282f32ecaa72545 | 56 | py | Python | mavetools/models/base.py | VariantEffect/MaveTools | 1621814390ddb9801ea01d3dc6e0d5cc17441b09 | [
"BSD-3-Clause"
] | 3 | 2021-11-26T14:04:29.000Z | 2021-12-02T21:50:32.000Z | mavetools/models/base.py | VariantEffect/MaveTools | 1621814390ddb9801ea01d3dc6e0d5cc17441b09 | [
"BSD-3-Clause"
] | 5 | 2021-11-26T12:03:50.000Z | 2021-11-30T03:56:50.000Z | mavetools/models/base.py | VariantEffect/MaveTools | 1621814390ddb9801ea01d3dc6e0d5cc17441b09 | [
"BSD-3-Clause"
] | null | null | null | class APIObject:
    def api_url() -> str:
        pass
| 14 | 25 | 0.553571 | 7 | 56 | 4.285714 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.339286 | 56 | 3 | 26 | 18.666667 | 0.810811 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0.333333 | 0 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
eba95084b23cd86a399af29225111ea89227f118 | 45 | py | Python | agents/utils/__init__.py | maartenbuyl/memory-enhanced-maze-exploration | e897b14ac3678a6d9a80d1366eaec9ebaa13255e | [
"MIT"
] | null | null | null | agents/utils/__init__.py | maartenbuyl/memory-enhanced-maze-exploration | e897b14ac3678a6d9a80d1366eaec9ebaa13255e | [
"MIT"
] | null | null | null | agents/utils/__init__.py | maartenbuyl/memory-enhanced-maze-exploration | e897b14ac3678a6d9a80d1366eaec9ebaa13255e | [
"MIT"
] | null | null | null | from agents.utils.transition_memory import *
| 22.5 | 44 | 0.844444 | 6 | 45 | 6.166667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088889 | 45 | 1 | 45 | 45 | 0.902439 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
691c8737d84e462a63f6aabeba8c8b2f88ea7327 | 111,393 | py | Python | SAM_Result.py | Sameerpython/Bindingdata | e13d1c152339117ee33e6084da3f34aae222dbcd | [
"MIT"
] | null | null | null | SAM_Result.py | Sameerpython/Bindingdata | e13d1c152339117ee33e6084da3f34aae222dbcd | [
"MIT"
] | null | null | null | SAM_Result.py | Sameerpython/Bindingdata | e13d1c152339117ee33e6084da3f34aae222dbcd | [
"MIT"
] | null | null | null | #!/usr/bin/python
# Import modules for CGI handling
import cgi, cgitb
import webbrowser
import urllib
import urllib2
import re
import sys, os
from itertools import izip
import requests
from bs4 import BeautifulSoup
import numpy as np
import pandas as pd
from collections import Counter
import matplotlib
import matplotlib.pyplot as plt
import uuid
from Bio.Seq import Seq
from Bio import motifs
import string
from Bio.Alphabet import IUPAC
from zipfile import ZipFile
# Create instance of FieldStorage
form = cgi.FieldStorage()
print "Content-type:text/html\r\n\r\n"
print "<html>"
print "<head>"
print "<style>"
print """
* {
  -moz-box-sizing: border-box;
  -webkit-box-sizing: border-box;
  box-sizing: border-box;
}
.grid {
  background: white;
  margin: 0 0 20px 0;
}
.grid:after {
  /* Or @extend clearfix */
  content: "";
  display: table;
  clear: both;
}
[class*='col-'] {
  float: left;
  padding-right: 20px;
}
.grid [class*='col-']:last-of-type {
  padding-right: 0;
}
.col-2-3 {
  width: 33.33%;
  overflow: scroll;
}
.col-1-3 {
  width: 33.33%;
  overflow: scroll;
}
.module {
  padding: 20px;
  background: #eee;
}
body {
  padding: 10px 50px 200px;
  background-size: 300px 300px;
}
h1 {
  color: white;
}
h1 em {
  color: #666;
  font-size: 16px;
}
"""
#############
#style for printing image side by side
print """
.weblogo_column {
  float: left;
  width: 33.33%;
  padding: 2px;
}
/* Clearfix (clear floats) */
.weblogo_row::after {
  content: "";
  clear: both;
  display: table;
}
"""
############
######style for collapsible content###
print """
.collapsible {
  background-color: #777;
  color: white;
  cursor: pointer;
  padding: 18px;
  width: 100%;
  border: none;
  text-align: left;
  outline: none;
  font-size: 15px;
}
.active, .collapsible:hover {
  background-color: #555;
}
.contentsection {
  padding: 0 18px;
  display: none;
  overflow: scroll;
  background-color: #f1f1f1;
}
"""
############End of style for collapsible content###
#style for divindg into 2 columns
print "* {box-sizing: border-box;}"
print ".column {float: left;width: 50%;padding: 10px;height: 300px;}"
print '.row:after {content: "";display: table;clear: both;}'
#END of style for divindg into 2 columns
print "</style>"
print "<body>"
# Substructure Atom Information for PCHILIDE ligand
METHI=sorted(['SD','CE','CG','CB','CA','N','C','O','OXT'])
Ribose=sorted(["C1'","O4'","C4'","C3'","O3'","C2'","O2'", "C5'"])
Adenin=sorted(['N6','N1','C2','N3','C4','C5','C6','N7','C8','N9'])
#SUbstructure section ends here
# Information of the selected ligands and PDB ids from LigPage.py
variable = ""
value = ""
r = ""
value_dict={}
lig_sel=[]
for key in form.keys():
    variable = str(key)
    # print "The selected Ligand for PDBID:%s" %variable
    value = str(form.getvalue(variable))
    # print "is", value
    value_dict.setdefault('%s'%variable,[]).append(value)
    r += "<p>"+ variable +", "+ value +"</p>\n"
print "<p style='font-size:20px; color:blue'> Results for the selected PDBID's and Ligands: ",'\n'.join("{}:{}".format(k,v) for k,v in value_dict.items()),"</p>","<br/>"
pdbsum_URL="http://www.ebi.ac.uk/thornton-srv/databases/cgi-bin/pdbsum/GetPage.pl?pdbcode="
pdbsum_URL2="&template=links.html"
#DIctionary and List
pdbsum_dict={}
PDBID_LIST=[]
#Title for Page
Title="The Results are for the following selected PDBID's and Ligands:"
#print Title.center(100,' '),"<br/>"
#Preparing PdbSum Url with selected PDB ids
for ids,lig in value_dict.iteritems():
    pdbsumurl=pdbsum_URL+ids+pdbsum_URL2
    lig_sel.append(lig)
    pdbsum_dict.setdefault('%s'%ids,[]).append(pdbsumurl)
#print "PDBSUMDICT", pdbsum_dict,"<br/>","<br/>"
#creating a list for PDB ids
for id,url in pdbsum_dict.iteritems():
    PDBID_LIST.append(id)
#print "PDBID LIST", PDBID_LIST, "<br/>"
#Extracting the Href links from PDBSum home page for the selected PDB ids and Ligands using BeautifulSoup
litems=[]
new=[]
items2=[]
new1=[]
lig_link=[]
finalLIG_link=[]
liginte_set=set()
ligintelist=[]
link_set=set()
for id,url in pdbsum_dict.iteritems():
    #print id, url, "<br/>"
    for link in url:
        html_page=requests.get(link)
        soup = BeautifulSoup(html_page.text,'html.parser')
        ligand_name_items=soup.find_all('a')
        for items in ligand_name_items:
            name=items.contents[0]
            links='www.ebi.ac.uk' + items.get('href')
            text=str(name)+ " " + str(links)
            litems.append(text)
#Looping over the extracted URLs from PDBSum and appending them to a list called new
for x in litems:
    x=x.strip()
    new.append(x)
#print new
#Looping over the ligands and PDBSum URLs (from the step above) to extract the PDBSum URL for the selected Ligand page in PDBSum. The Ligand page URLs are collected in a SET.
ligand_urlLIST=[]
for lig in lig_sel:
    #print "LIGAND", lig, "<br/>"
    for y in new:
        lig= ''.join(lig)
        if y.startswith(lig):
            y=y.split()
            link=y[1]
            link1="http://"+link
            #print link1
            ligand_urlLIST.append(link1)
            link_set.add(link1)
link_setlist=list(link_set)
#print ligand_urlLIST
#print "SET",link_setlist, "<br/>"
#Using Beautifulsoup to extract all the links from PDBSUM Ligand interaction Page for each of the PDB ids.
#print PDBID_LIST
PDBID_URL_dict=zip(PDBID_LIST,ligand_urlLIST)
LiginteractPage=dict(PDBID_URL_dict)
#print "what1", LiginteractPage, "<br/>"
#print "what2", PDBID_URL_dict
for pdbid,pdbsumlink in LiginteractPage.iteritems():
    links=list(LiginteractPage.viewvalues())
    for y in links:
        html_page2=requests.get(y)
        soup2 = BeautifulSoup(html_page2.text,'html.parser')
        ligand_name_items1=soup2.find_all('a')
        #Looping over all the href links from the PDBSum ligand interaction page to extract the
        #URL (final page for extracting atom details) for the atom-based interactions of the PDB ids with ligands
        for items in ligand_name_items1:
            name=items.contents[0]
            links='www.ebi.ac.uk' + items.get('href')
            text=str(links)
            items2.append(text)
        for i in items2:
            # print i
            final=i.split()
            final=''.join(final)
            #print final, "<br/>"
            if 'thornton-srv/databases/cgi-bin/pdbsum/GetLigInt.pl?pdb=' in final:
                finalLIG_link.append(final)
                lastitem=finalLIG_link[-1]
                lastitem="http://"+lastitem
                # print lastitem,"<br/>"
                if lastitem not in ligintelist:
                    ligintelist.append(lastitem)
                    liginte_set.add(lastitem)
liginte_list=list(liginte_set)
#print "LIST", (ligintelist),"<br/>"
PDBID_INTURL_dict=zip(PDBID_LIST,ligintelist)
pdbsum_dict1=dict(PDBID_INTURL_dict)
for id,link in pdbsum_dict1.iteritems():
    links=list(pdbsum_dict1.viewvalues())
    PDBID=list(pdbsum_dict1.viewkeys())
#print "HI LINKS CHECK for SPACE", links,"<br/>"
#print "PDBIDs", PDBID,"<br/>"
Number_of_Ids=len(PDBID)
#Final dictionary with PDBID and LigPlot URL for extracting interaction details
mydictcheck={}
for ids,links in zip(PDBID,ligintelist):
    mydictcheck.setdefault('%s'%ids,[]).append(links)
###############################################################################
#SECTION OF FINDING COMMON LIGAND ATOMS
###############################################################################
#selecting common ligand atoms that are hydrogen bonded in selected PDB structures
H_printing = False
H_atoms_commoncomp={}
for H_pdbids,H_pdbidlinks in mydictcheck.iteritems():
    for H_links_sel in H_pdbidlinks:
        H_links_sel1=str(H_links_sel)
        weblink=requests.get(H_links_sel1, stream=True)
        for H_atomlines in weblink.iter_lines():
            H_atomlines1=H_atomlines.strip()
            if H_atomlines1.startswith('Hydrogen bonds'):
                H_printing = True
            elif H_atomlines1.startswith('Non-bonded contacts'):
                H_printing = False
            if H_printing:
                #print H_atomlines
                if H_atomlines1.startswith(('0', '1', '2', '3', '4', '5', '6', '7', '8', '9')):
                    H_atomlines2=H_atomlines1.split()
                    H_atm_sel=H_atomlines2[8]
                    H_atoms_commoncomp.setdefault('%s'%H_pdbids,[]).append(H_atm_sel)
H_atomsvalues_dict1=H_atoms_commoncomp.values()
H_common_intersectionfinal=sorted(list(set.intersection(*map(set,H_atomsvalues_dict1))))
#END of selecting common ligand atoms that are hydrogen bonded in selected PDB structures
#selecting common ligand atoms that are non-hydrogen bonded in selected PDB structures
NONH_printing = False
NONHatoms_commoncomp={}
for NONHpdbids,NONHpdbidlinks in mydictcheck.iteritems():
    for NONHlinks_sel in NONHpdbidlinks:
        NONHlinks_sel1=str(NONHlinks_sel)
        weblink=requests.get(NONHlinks_sel1, stream=True)
        for NONHatomlines in weblink.iter_lines():
            NONHatomlines1=NONHatomlines.strip()
            if NONHatomlines1.startswith('Non-bonded contacts'):
                NONH_printing = True
            elif NONHatomlines1.startswith('Hydrogen bonds'):
                NONH_printing = False
            if NONH_printing:
                #print atomlines1
                if NONHatomlines1.startswith(('0', '1', '2', '3', '4', '5', '6', '7', '8', '9')):
                    NONHatomlines2=NONHatomlines1.split()
                    NONHatm_sel=NONHatomlines2[8]
                    NONHatoms_commoncomp.setdefault('%s'%NONHpdbids,[]).append(NONHatm_sel)
NONHatomsvalues_dict1=NONHatoms_commoncomp.values()
NONHcommon_intersectionfinal=sorted(list(set.intersection(*map(set,NONHatomsvalues_dict1))))
###############################################################################
#END OF SECTION OF FINDING COMMON LIGAND ATOMS
###############################################################################
###############################################################################
#START OF SECTION OF LIGAND ATOMS IN EACH SUBGROUPS:
###############################################################################
###############################################################################
# 1.START OF SECTION OF NAD SUBGROUPS:
###############################################################################
#METHI
METHI_graphdicH={}
METHI_common_graphdicH={}
METHI_graphdicNH={}
METHI_common_graphdicNH={}
METHI_All_combine_Lig_Res_H={}
METHI_allH_Lig_Resdict={}
METHI_Common_combine_Lig_Res_H={}
METHI_CommonH_Lig_Resdict={}
METHI_All_combine_Lig_Res_NH={}
METHI_allNH_Lig_Resdict={}
METHI_Common_combine_Lig_Res_NH={}
METHI_CommonNH_Lig_Resdict={}
METHI_All_combine_Lig_Res_H_distance={}
METHI_allH_Lig_Resdict_distance={}
METHI_All_combine_Lig_Res_NH_distance={}
METHI_allNH_Lig_Resdict_distance={}
METHI_Common_combine_Lig_Res_H_distance={}
METHI_CommonH_Lig_Resdict_distance={}
METHI_Common_combine_Lig_Res_NH_distance={}
METHI_CommonNH_Lig_Resdict_distance={}
METHI_listdata_H=[]
METHI_listdata_NH=[]
METHI_lresidueH=[]
METHI_latomH=[]
METHI_lresidueNH=[]
METHI_latomNH=[]
METHI_common_listdata_H=[]
METHI_common_listdata_NH=[]
METHI_H_appended_lig_tabledic={}
METHI_H_Common_appended_lig_tabledic={}
METHI_NH_appended_lig_tabledic={}
METHI_NH_Common_appended_lig_tabledic={}
#End of METHI
#Ribose
Ribose_graphdicH={}
Ribose_common_graphdicH={}
Ribose_graphdicNH={}
Ribose_common_graphdicNH={}
Ribose_All_combine_Lig_Res_H={}
Ribose_allH_Lig_Resdict={}
Ribose_Common_combine_Lig_Res_H={}
Ribose_CommonH_Lig_Resdict={}
Ribose_All_combine_Lig_Res_NH={}
Ribose_allNH_Lig_Resdict={}
Ribose_Common_combine_Lig_Res_NH={}
Ribose_CommonNH_Lig_Resdict={}
Ribose_All_combine_Lig_Res_H_distance={}
Ribose_allH_Lig_Resdict_distance={}
Ribose_All_combine_Lig_Res_NH_distance={}
Ribose_allNH_Lig_Resdict_distance={}
Ribose_Common_combine_Lig_Res_H_distance={}
Ribose_CommonH_Lig_Resdict_distance={}
Ribose_Common_combine_Lig_Res_NH_distance={}
Ribose_CommonNH_Lig_Resdict_distance={}
Ribose_listdata_H=[]
Ribose_listdata_NH=[]
Ribose_common_listdata_H=[]
Ribose_common_listdata_NH=[]
Ribose_H_appended_lig_tabledic={}
Ribose_H_Common_appended_lig_tabledic={}
Ribose_NH_appended_lig_tabledic={}
Ribose_NH_Common_appended_lig_tabledic={}
#End of Ribose
#Adenin
Adenin_graphdicH={}
Adenin_common_graphdicH={}
Adenin_graphdicNH={}
Adenin_common_graphdicNH={}
Adenin_All_combine_Lig_Res_H={}
Adenin_allH_Lig_Resdict={}
Adenin_Common_combine_Lig_Res_H={}
Adenin_CommonH_Lig_Resdict={}
Adenin_All_combine_Lig_Res_NH={}
Adenin_allNH_Lig_Resdict={}
Adenin_Common_combine_Lig_Res_NH={}
Adenin_CommonNH_Lig_Resdict={}
Adenin_All_combine_Lig_Res_H_distance={}
Adenin_allH_Lig_Resdict_distance={}
Adenin_All_combine_Lig_Res_NH_distance={}
Adenin_allNH_Lig_Resdict_distance={}
Adenin_Common_combine_Lig_Res_H_distance={}
Adenin_CommonH_Lig_Resdict_distance={}
Adenin_Common_combine_Lig_Res_NH_distance={}
Adenin_CommonNH_Lig_Resdict_distance={}
Adenin_listdata_H=[]
Adenin_listdata_NH=[]
Adenin_lresidueH=[]
Adenin_latomH=[]
Adenin_lresidueNH=[]
Adenin_latomNH=[]
Adenin_common_listdata_H=[]
Adenin_common_listdata_NH=[]
Adenin_H_appended_lig_tabledic={}
Adenin_H_Common_appended_lig_tabledic={}
Adenin_NH_appended_lig_tabledic={}
Adenin_NH_Common_appended_lig_tabledic={}
#End of Adenin
lresidueH=[]
latomH=[]
ldistanceH=[]
residueH={}
atmnameH={}
dicresidue_unique={}
residue_seenH=set()
atom_seenH=set()
lresidueNH=[]
latomNH=[]
ldistanceNH=[]
residueNH={}
atmnameNH={}
dicresidue_unique={}
residue_seenNH=set()
atom_seenNH=set()
METHI_finalsetH=set()
METHI_finalsetNH=set()
Adenin_finalsetH=set()
Adenin_finalsetNH=set()
finalsetH=set()
finalsetNH=set()
combines_listdata=[]
#print "<table style=width:50%>"
#print "<tr>"
#print "<th colspan='%d'>Interaction List</th>"% Number_of_Ids
#print "</tr>"
#print "</div>"
H_appended_lig_tabledic={}
H_Common_appended_lig_tabledic={}
NH_Common_appended_lig_tabledic={}
NH_appended_lig_tabledic={}
graphdicH={}
graphdicNH={}
common_graphdicH={}
common_graphdicNH={}
printing = False
for id,link in mydictcheck.iteritems():
#print link, id
links_sel=link[0]
link1= ''.join(str(links_sel))
res2=urllib.urlopen(str(link1))
html=res2.read()
#print html
for l in link:
ll=str(l)
r = requests.get(ll, stream=True)
for line in r.iter_lines():
line=line.strip()
if line.startswith('Hydrogen bonds'):
printing = True
elif line.startswith('Non-bonded contacts'):
printing = False
if printing:
if line.startswith(('0', '1', '2', '3', '4', '5', '6', '7', '8', '9')):
#print "HB", line
lineH=line.split()
lignameH=lineH[9]
atmH=lineH[8]
resH=lineH[3]
residuenumH=lineH[4]
distanceH=lineH[12]
resnumH=resH+residuenumH
#appending each residue and its position to the list lresidueH
lresidueH.append(resnumH)
#appending each ligand atom to list called latom
latomH.append(atmH)
#appending distance of each interaction to ldistance
ldistanceH.append(distanceH)
#creating a set for residue with position
residue_seenH.add(resnumH)
#creating a set for each ligand atom
atom_seenH.add(atmH)
#making a dictionary with lists containing residue name and position
residueH.setdefault('%s'%id,[]).append(resnumH)
#making a dictionary with lists containing ligand atoms
atmnameH.setdefault('%s'%id,[]).append(atmH)
if atmH in METHI:
METHI_lresidueH.append(resnumH)
METHI_latomH.append(atmH)
METHI_graphdicH.setdefault('%s'%atmH,[]).append(resnumH)#creating dictionary with all lig atom and residues for physio and weblogo
METHI_All_combine_Lig_Res_H.setdefault('%s'%atmH,[]).append(resnumH)#creating dictionary with all lig atom and residues for table
METHI_All_combine_Lig_Res_H_uniquify= {k:list(set(j)) for k,j in METHI_All_combine_Lig_Res_H.items()}
METHI_allH_Lig_Resdict['%s'%id]=METHI_All_combine_Lig_Res_H_uniquify#final dic for table with pdb id , lig atom and residues for all group
METHI_All_combine_Lig_Res_H_distance.setdefault('%s'%atmH,[]).append(distanceH)#creating dictionary with all lig atom and distance for table
METHI_All_combine_Lig_Res_H_distance_uniquify= {k:list(set(j)) for k,j in METHI_All_combine_Lig_Res_H_distance.items()}
METHI_allH_Lig_Resdict_distance['%s'%id]=METHI_All_combine_Lig_Res_H_distance_uniquify#final dic for table with pdb id , lig atom and distance for all group
if atmH in H_common_intersectionfinal:
METHI_common_graphdicH.setdefault('%s'%atmH,[]).append(resnumH)
METHI_Common_combine_Lig_Res_H.setdefault('%s'%atmH,[]).append(resnumH)#creating dictionary with common lig atom and residues for table
METHI_Common_combine_Lig_Res_H_uniquify={k:list(set(j)) for k,j in METHI_Common_combine_Lig_Res_H.items()}
METHI_CommonH_Lig_Resdict['%s'%id]=METHI_Common_combine_Lig_Res_H_uniquify#final dic for table with pdb id , lig atom and residues for common group
METHI_Common_combine_Lig_Res_H_distance.setdefault('%s'%atmH,[]).append(distanceH)#creating dictionary with all lig atom and distance for table
METHI_Common_combine_Lig_Res_H_distance_uniquify={k:list(set(j)) for k,j in METHI_Common_combine_Lig_Res_H_distance.items()}
METHI_CommonH_Lig_Resdict_distance['%s'%id]=METHI_Common_combine_Lig_Res_H_distance_uniquify#final dic for table with pdb id , lig atom and distance for all group
if atmH in Ribose:
Ribose_graphdicH.setdefault('%s'%atmH,[]).append(resnumH)
Ribose_All_combine_Lig_Res_H.setdefault('%s'%atmH,[]).append(resnumH)#creating dictionary with all lig atom and residues for table
Ribose_All_combine_Lig_Res_H_uniquify={k:list(set(j)) for k,j in Ribose_All_combine_Lig_Res_H.items()}
Ribose_allH_Lig_Resdict['%s'%id]=Ribose_All_combine_Lig_Res_H_uniquify#final dic for table with pdb id , lig atom and residues for all group
Ribose_All_combine_Lig_Res_H_distance.setdefault('%s'%atmH,[]).append(distanceH)#creating dictionary with all lig atom and distance for table
Ribose_All_combine_Lig_Res_H_distance_uniquify= {k:list(set(j)) for k,j in Ribose_All_combine_Lig_Res_H_distance.items()}
Ribose_allH_Lig_Resdict_distance['%s'%id]=Ribose_All_combine_Lig_Res_H_distance_uniquify#final dic for table with pdb id , lig atom and distance for all group
if atmH in H_common_intersectionfinal:
Ribose_common_graphdicH.setdefault('%s'%atmH,[]).append(resnumH)
Ribose_Common_combine_Lig_Res_H.setdefault('%s'%atmH,[]).append(resnumH)#creating dictionary with common lig atom and residues for table
Ribose_Common_combine_Lig_Res_H_uniquify={k:list(set(j)) for k,j in Ribose_Common_combine_Lig_Res_H.items()}
Ribose_CommonH_Lig_Resdict['%s'%id]=Ribose_Common_combine_Lig_Res_H_uniquify#final dic for table with pdb id , lig atom and residues for common group
Ribose_Common_combine_Lig_Res_H_distance.setdefault('%s'%atmH,[]).append(distanceH)#creating dictionary with all lig atom and distance for table
Ribose_Common_combine_Lig_Res_H_distance_uniquify={k:list(set(j)) for k,j in Ribose_Common_combine_Lig_Res_H_distance.items()}
Ribose_CommonH_Lig_Resdict_distance['%s'%id]=Ribose_Common_combine_Lig_Res_H_distance_uniquify#final dic for table with pdb id , lig atom and distance for all group
if atmH in Adenin:
Adenin_lresidueH.append(resnumH)
Adenin_latomH.append(atmH)
Adenin_graphdicH.setdefault('%s'%atmH,[]).append(resnumH)
Adenin_All_combine_Lig_Res_H.setdefault('%s'%atmH,[]).append(resnumH)#creating dictionary with all lig atom and residues for table
Adenin_All_combine_Lig_Res_H_uniquify={k:list(set(j)) for k,j in Adenin_All_combine_Lig_Res_H.items()}
Adenin_allH_Lig_Resdict['%s'%id]=Adenin_All_combine_Lig_Res_H_uniquify#final dic for table with pdb id , lig atom and residues for all group
Adenin_All_combine_Lig_Res_H_distance.setdefault('%s'%atmH,[]).append(distanceH)#creating dictionary with all lig atom and distance for table
Adenin_All_combine_Lig_Res_H_distance_uniquify= {k:list(set(j)) for k,j in Adenin_All_combine_Lig_Res_H_distance.items()}
Adenin_allH_Lig_Resdict_distance['%s'%id]=Adenin_All_combine_Lig_Res_H_distance_uniquify#final dic for table with pdb id , lig atom and distance for all group
if atmH in H_common_intersectionfinal:
Adenin_common_graphdicH.setdefault('%s'%atmH,[]).append(resnumH)
Adenin_Common_combine_Lig_Res_H.setdefault('%s'%atmH,[]).append(resnumH)#creating dictionary with common lig atom and residues for table
Adenin_Common_combine_Lig_Res_H_uniquify={k:list(set(j)) for k,j in Adenin_Common_combine_Lig_Res_H.items()}
Adenin_CommonH_Lig_Resdict['%s'%id]=Adenin_Common_combine_Lig_Res_H_uniquify#final dic for table with pdb id , lig atom and residues for common group
Adenin_Common_combine_Lig_Res_H_distance.setdefault('%s'%atmH,[]).append(distanceH)#creating dictionary with all lig atom and distance for table
Adenin_Common_combine_Lig_Res_H_distance_uniquify={k:list(set(j)) for k,j in Adenin_Common_combine_Lig_Res_H_distance.items()}
Adenin_CommonH_Lig_Resdict_distance['%s'%id]=Adenin_Common_combine_Lig_Res_H_distance_uniquify#final dic for table with pdb id , lig atom and distance for all group
else:
if line.startswith(('0', '1', '2', '3', '4', '5', '6', '7', '8', '9')):
#print "NHB", line
lineNH=line.split()
lignameNH=lineNH[9]
atmNH=lineNH[8]
resNH=lineNH[3]
residuenumNH=lineNH[4]
distanceNH=lineNH[12]
resnumNH=resNH+residuenumNH
#appending each residue and its position to the list lresidueNH
lresidueNH.append(resnumNH)
#appending each ligand atom to list called latom
latomNH.append(atmNH)
#appending distance of each interaction to ldistance
ldistanceNH.append(distanceNH)
#creating a set for residue with position
residue_seenNH.add(resnumNH)
#creating a set for each ligand atom
atom_seenNH.add(atmNH)
#making a dictionary with lists containing residue name and position
residueNH.setdefault('%s'%id,[]).append(resnumNH)
#making a dictionary with lists containing ligand atoms
atmnameNH.setdefault('%s'%id,[]).append(atmNH)
if atmNH in METHI:
METHI_lresidueNH.append(resnumNH)
METHI_latomNH.append(atmNH)
METHI_graphdicNH.setdefault('%s'%atmNH,[]).append(resnumNH)
METHI_All_combine_Lig_Res_NH.setdefault('%s'%atmNH,[]).append(resnumNH)#creating dictionary with all lig atom and residues for table
METHI_All_combine_Lig_Res_NH_uniquify= {k:list(set(j)) for k,j in METHI_All_combine_Lig_Res_NH.items()}
METHI_allNH_Lig_Resdict['%s'%id]=METHI_All_combine_Lig_Res_NH_uniquify#final dic for table with pdb id , lig atom and residues for all group
METHI_All_combine_Lig_Res_NH_distance.setdefault('%s'%atmNH,[]).append(distanceNH)#creating dictionary with all lig atom and distance for table
METHI_All_combine_Lig_Res_NH_distance_uniquify= {k:list(set(j)) for k,j in METHI_All_combine_Lig_Res_NH_distance.items()}
METHI_allNH_Lig_Resdict_distance['%s'%id]=METHI_All_combine_Lig_Res_NH_distance_uniquify#final dic for table with pdb id , lig atom and distance for all group
if atmNH in NONHcommon_intersectionfinal:
METHI_common_graphdicNH.setdefault('%s'%atmNH,[]).append(resnumNH)
METHI_Common_combine_Lig_Res_NH.setdefault('%s'%atmNH,[]).append(resnumNH)#creating dictionary with common lig atom and residues for table
METHI_Common_combine_Lig_Res_NH_uniquify= {k:list(set(j)) for k,j in METHI_Common_combine_Lig_Res_NH.items()}
METHI_CommonNH_Lig_Resdict['%s'%id]=METHI_Common_combine_Lig_Res_NH_uniquify#final dic for table with pdb id , lig atom and residues for common group
METHI_Common_combine_Lig_Res_NH_distance.setdefault('%s'%atmNH,[]).append(distanceNH)#creating dictionary with all lig atom and distance for table
METHI_Common_combine_Lig_Res_NH_distance_uniquify={k:list(set(j)) for k,j in METHI_Common_combine_Lig_Res_NH_distance.items()}
METHI_CommonNH_Lig_Resdict_distance['%s'%id]=METHI_Common_combine_Lig_Res_NH_distance_uniquify#final dic for table with pdb id , lig atom and distance for all group
if atmNH in Ribose:
Ribose_graphdicNH.setdefault('%s'%atmNH,[]).append(resnumNH)
Ribose_All_combine_Lig_Res_NH.setdefault('%s'%atmNH,[]).append(resnumNH)#creating dictionary with all lig atom and residues for table
Ribose_All_combine_Lig_Res_NH_uniquify= {k:list(set(j)) for k,j in Ribose_All_combine_Lig_Res_NH.items()}
Ribose_allNH_Lig_Resdict['%s'%id]=Ribose_All_combine_Lig_Res_NH_uniquify#final dic for table with pdb id , lig atom and residues for all group
Ribose_All_combine_Lig_Res_NH_distance.setdefault('%s'%atmNH,[]).append(distanceNH)#creating dictionary with all lig atom and distance for table
Ribose_All_combine_Lig_Res_NH_distance_uniquify= {k:list(set(j)) for k,j in Ribose_All_combine_Lig_Res_NH_distance.items()}
Ribose_allNH_Lig_Resdict_distance['%s'%id]=Ribose_All_combine_Lig_Res_NH_distance_uniquify#final dic for table with pdb id , lig atom and distance for all group
if atmNH in NONHcommon_intersectionfinal:
Ribose_common_graphdicNH.setdefault('%s'%atmNH,[]).append(resnumNH)
Ribose_Common_combine_Lig_Res_NH.setdefault('%s'%atmNH,[]).append(resnumNH)#creating dictionary with common lig atom and residues for table
Ribose_Common_combine_Lig_Res_NH_uniquify= {k:list(set(j)) for k,j in Ribose_Common_combine_Lig_Res_NH.items()}
Ribose_CommonNH_Lig_Resdict['%s'%id]=Ribose_Common_combine_Lig_Res_NH_uniquify#final dic for table with pdb id , lig atom and residues for common group
Ribose_Common_combine_Lig_Res_NH_distance.setdefault('%s'%atmNH,[]).append(distanceNH)#creating dictionary with all lig atom and distance for table
Ribose_Common_combine_Lig_Res_NH_distance_uniquify={k:list(set(j)) for k,j in Ribose_Common_combine_Lig_Res_NH_distance.items()}
Ribose_CommonNH_Lig_Resdict_distance['%s'%id]=Ribose_Common_combine_Lig_Res_NH_distance_uniquify#final dic for table with pdb id , lig atom and distance for all group
if atmNH in Adenin:
Adenin_lresidueNH.append(resnumNH)
Adenin_latomNH.append(atmNH)
Adenin_graphdicNH.setdefault('%s'%atmNH,[]).append(resnumNH)
Adenin_All_combine_Lig_Res_NH.setdefault('%s'%atmNH,[]).append(resnumNH)#creating dictionary with all lig atom and residues for table
Adenin_All_combine_Lig_Res_NH_uniquify= {k:list(set(j)) for k,j in Adenin_All_combine_Lig_Res_NH.items()}
Adenin_allNH_Lig_Resdict['%s'%id]=Adenin_All_combine_Lig_Res_NH_uniquify#final dic for table with pdb id , lig atom and residues for all group
Adenin_All_combine_Lig_Res_NH_distance.setdefault('%s'%atmNH,[]).append(distanceNH)#creating dictionary with all lig atom and distance for table
Adenin_All_combine_Lig_Res_NH_distance_uniquify= {k:list(set(j)) for k,j in Adenin_All_combine_Lig_Res_NH_distance.items()}
Adenin_allNH_Lig_Resdict_distance['%s'%id]=Adenin_All_combine_Lig_Res_NH_distance_uniquify#final dic for table with pdb id , lig atom and distance for all group
if atmNH in NONHcommon_intersectionfinal:
Adenin_common_graphdicNH.setdefault('%s'%atmNH,[]).append(resnumNH)
Adenin_Common_combine_Lig_Res_NH.setdefault('%s'%atmNH,[]).append(resnumNH)#creating dictionary with common lig atom and residues for table
Adenin_Common_combine_Lig_Res_NH_uniquify= {k:list(set(j)) for k,j in Adenin_Common_combine_Lig_Res_NH.items()}
Adenin_CommonNH_Lig_Resdict['%s'%id]=Adenin_Common_combine_Lig_Res_NH_uniquify#final dic for table with pdb id , lig atom and residues for common group
Adenin_Common_combine_Lig_Res_NH_distance.setdefault('%s'%atmNH,[]).append(distanceNH)#creating dictionary with all lig atom and distance for table
Adenin_Common_combine_Lig_Res_NH_distance_uniquify={k:list(set(j)) for k,j in Adenin_Common_combine_Lig_Res_NH_distance.items()}
Adenin_CommonNH_Lig_Resdict_distance['%s'%id]=Adenin_Common_combine_Lig_Res_NH_distance_uniquify#final dic for table with pdb id , lig atom and distance for all group
METHI_listdata_H=[]
METHI_listdata_NH=[]
METHI_lresidueH=[]
METHI_latomH=[]
METHI_lresidueNH=[]
METHI_latomNH=[]
METHI_All_combine_Lig_Res_H={}
METHI_Common_combine_Lig_Res_H={}
METHI_All_combine_Lig_Res_NH={}
METHI_Common_combine_Lig_Res_NH={}
METHI_All_combine_Lig_Res_H_distance={}
METHI_All_combine_Lig_Res_NH_distance={}
METHI_Common_combine_Lig_Res_H_distance={}
METHI_Common_combine_Lig_Res_NH_distance={}
Ribose_listdata_H=[]
Ribose_listdata_NH=[]
Ribose_All_combine_Lig_Res_H={}
Ribose_Common_combine_Lig_Res_H={}
Ribose_All_combine_Lig_Res_NH={}
Ribose_Common_combine_Lig_Res_NH={}
Ribose_All_combine_Lig_Res_H_distance={}
Ribose_All_combine_Lig_Res_NH_distance={}
Ribose_Common_combine_Lig_Res_H_distance={}
Ribose_Common_combine_Lig_Res_NH_distance={}
Adenin_lresidueH=[]
Adenin_latomH=[]
Adenin_lresidueNH=[]
Adenin_latomNH=[]
Adenin_listdata_H=[]
Adenin_listdata_NH=[]
Adenin_All_combine_Lig_Res_H={}
Adenin_Common_combine_Lig_Res_H={}
Adenin_All_combine_Lig_Res_NH={}
Adenin_Common_combine_Lig_Res_NH={}
Adenin_All_combine_Lig_Res_H_distance={}
Adenin_All_combine_Lig_Res_NH_distance={}
Adenin_Common_combine_Lig_Res_H_distance={}
Adenin_Common_combine_Lig_Res_NH_distance={}
METHI_common_listdata_H=[]
METHI_common_listdata_NH=[]
Ribose_common_listdata_H=[]
Ribose_common_listdata_NH=[]
Adenin_common_listdata_H=[]
Adenin_common_listdata_NH=[]
lresidueH=[]
latomH=[]
ldistanceH=[]
lresidueNH=[]
latomNH=[]
ldistanceNH=[]
combines_listdata=[]
residue_seenH.clear()
METHI_finalsetH.clear()
METHI_finalsetNH.clear()
Adenin_finalsetH.clear()
Adenin_finalsetNH.clear()
finalsetH.clear()
finalsetNH.clear()
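The `{k:list(set(j)) for k,j in ...}` comprehension used throughout the loop above collapses each atom's residue list to its distinct members. A minimal sketch (sorted here for a deterministic result, unlike the unordered original; the data is hypothetical):

```python
# Each ligand atom maps to every residue contact recorded, with repeats.
combine = {'O2': ['SER64', 'SER64', 'HIS53'], 'N1': ['ASP12']}
# Keep only the distinct residues per atom (sorted for reproducibility).
uniquify = {k: sorted(set(v)) for k, v in combine.items()}
# uniquify -> {'O2': ['HIS53', 'SER64'], 'N1': ['ASP12']}
```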
####################Define function for Statistics ################################
def percentage(dictname,subgroup):
Count_Atom={}
percentage_Atom={}
atmlist=[]
if bool(dictname):
for key, value in dictname.iteritems():
for atom in subgroup:
for key1,value1 in value.iteritems():
#for i in dict1.keys():
if atom == key1:
Count_Atom[key1]=1
percentage_Atom['%s'%key]=Count_Atom
#print percent
Count_Atom={}
tabl=pd.DataFrame.from_dict(percentage_Atom).fillna(0)
Num_cols = len(PDBID_LIST)
for atms in percentage_Atom.values():
for atms_key in atms.keys():
atmlist.append(atms_key)
count_atmlist=list(set(atmlist))
tabl['Percentage of Interaction']= (tabl.sum(axis=1)/Num_cols)*100
tabl['Percentage of Interaction']=tabl['Percentage of Interaction'].round(2)
print "<br/>"," No. of Ligand atoms:", len(count_atmlist), "/",len(subgroup), "<br/>"
print tabl.T.to_html(justify='center'),"<br/>"
#print tabl.style.background_gradient(cmap='summer')
#sns.heatmap(tabl['Percentage of Interaction'], annot=True)
Highest_value= tabl['Percentage of Interaction'][tabl['Percentage of Interaction']==tabl['Percentage of Interaction'].max()]
Highest_value=Highest_value.to_dict()
print "Highest percentage of Interactions identified","<br/>"
Max_tabl=pd.DataFrame(Highest_value.items())
Max_tabl.columns = ['Ligand Atom', 'Percentage']
Max_tabl=Max_tabl.rename(index={0: 'Highest'})#rename returns a new frame, so keep the result
#Max_tabl=pd.Series(Highest_value).to_frame()
#Max_tabl.index.rename = 'index'
#Max_tabl.rename(index={0:'zero'}, inplace=True)
#df1.rename(index={0: 'a'})
print Max_tabl.T.to_html(justify='center')
else:
print "No Interactions Observed"
######End of Percentage section###
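The percentage() function above can be illustrated on hypothetical data: each observed (PDB id, ligand atom) pair becomes a 1, missing pairs become 0 via fillna, and row sums are scaled by the number of analysed ids:

```python
import pandas as pd

# Hypothetical presence/absence data for two PDB ids
presence = {'1AAA': {'N1': 1, 'O2': 1}, '2BBB': {'O2': 1}}
tabl = pd.DataFrame.from_dict(presence).fillna(0)
num_ids = len(presence)
# Fraction of structures in which each atom interacts, as a percentage
tabl['Percentage of Interaction'] = ((tabl.sum(axis=1) / num_ids) * 100).round(2)
# O2 is seen in both structures (100.0), N1 in only one (50.0)
```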
####Start of Distance section##
def distance_calc(dictnames):
DistMean_dict={}
DistFinal_pdb={}
if bool(dictnames):
for key,value in dictnames.iteritems():
for key1,value1 in value.iteritems():
results = map(float, value1)
#print value1, np.mean(results)
mean1=round(np.float64(np.mean(results)), 2)
DistMean_dict[key1]=mean1
DistFinal_pdb[key]=DistMean_dict
DistMean_dict={}
Distance_tabl=pd.DataFrame.from_dict(DistFinal_pdb)
print Distance_tabl.T.to_html(justify='center'),"<br/>"
print Distance_tabl.apply(pd.Series.describe, axis=1)[['count','mean','std']].dropna().round(2).T.to_html(justify='center'),"<br/>"
#Distance_tabl['Standard Deviation']=Distance_tabl.std(axis=1)
#Distance_tabl['Standard Deviation']=Distance_tabl['Standard Deviation'].round(2)
else:
print "No Interactions Observed","<br/>"
#End of distance section##
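The averaging inside distance_calc can be sketched the same way: per-atom distance strings are converted to floats, averaged per PDB id, and rounded to two decimals (the data below is hypothetical):

```python
import numpy as np

# Hypothetical distances, kept as strings as parsed from the web pages
dists = {'1AAA': {'O2': ['2.8', '3.1']}, '2BBB': {'O2': ['3.0']}}
means = {pdb: {atom: round(float(np.mean([float(x) for x in vals])), 2)
               for atom, vals in atoms.items()}
         for pdb, atoms in dists.items()}
# means -> {'1AAA': {'O2': 2.95}, '2BBB': {'O2': 3.0}}
```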
####################End of Define function for Statistics ################################
aminoacid_code={'CYS': 'C', 'ASP': 'D', 'SER': 'S', 'GLN': 'Q', 'LYS': 'K',
'ILE': 'I', 'PRO': 'P', 'THR': 'T', 'PHE': 'F', 'ASN': 'N',
'GLY': 'G', 'HIS': 'H', 'LEU': 'L', 'ARG': 'R', 'TRP': 'W',
'ALA': 'A', 'VAL':'V', 'GLU': 'E', 'TYR': 'Y', 'MET': 'M'}
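The three-to-one-letter table above is presumably used to condense residue labels such as 'ARG107' for logo-style output; a hypothetical helper (the `one_letter` name and the 'X' fallback are ours, not from the original) would be:

```python
# Subset of the aminoacid_code table above, enough to demonstrate the lookup
aminoacid_code = {'ARG': 'R', 'HIS': 'H', 'SER': 'S'}

def one_letter(resnum, table=aminoacid_code):
    # 'ARG107' -> 'R107'; unknown three-letter codes (e.g. 'MSE') fall back to 'X'
    return table.get(resnum[:3], 'X') + resnum[3:]
```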
### List of filenames for csv download ##########
CSVrandom_name= str(uuid.uuid4())
Adenin_allH='tmp/'+'Adenin_allH' +CSVrandom_name+'.csv'
Adenin_allNH='tmp/'+'Adenin_allNH' +CSVrandom_name+'.csv'
Adenin_CommonH='tmp/'+'Adenin_CommonH' +CSVrandom_name+'.csv'
Adenin_CommonNH='tmp/'+'Adenin_CommonNH' +CSVrandom_name+'.csv'
Ribose_allH='tmp/'+'Ribose_allH' +CSVrandom_name+'.csv'
Ribose_allNH='tmp/'+'Ribose_allNH' +CSVrandom_name+'.csv'
Ribose_CommonH='tmp/'+'Ribose_CommonH' +CSVrandom_name+'.csv'
Ribose_CommonNH='tmp/'+'Ribose_CommonNH' +CSVrandom_name+'.csv'
METHI_allH='tmp/'+'METHI_allH' +CSVrandom_name+'.csv'
METHI_allNH='tmp/'+'METHI_allNH' +CSVrandom_name+'.csv'
METHI_CommonH='tmp/'+'METHI_CommonH' +CSVrandom_name+'.csv'
METHI_CommonNH='tmp/'+'METHI_CommonNH' +CSVrandom_name+'.csv'
#### dict to csv ###
Adenin_allH_df=pd.DataFrame(Adenin_allH_Lig_Resdict)
Adenin_allH_df.to_csv(Adenin_allH)
Adenin_allNH_df=pd.DataFrame(Adenin_allNH_Lig_Resdict)
Adenin_allNH_df.to_csv(Adenin_allNH)
Adenin_CommonH_df=pd.DataFrame(Adenin_CommonH_Lig_Resdict)
Adenin_CommonH_df.to_csv(Adenin_CommonH)
Adenin_CommonNH_df=pd.DataFrame(Adenin_CommonNH_Lig_Resdict)
Adenin_CommonNH_df.to_csv(Adenin_CommonNH)
Ribose_allH_df=pd.DataFrame(Ribose_allH_Lig_Resdict)
Ribose_allH_df.to_csv(Ribose_allH)
Ribose_allNH_df=pd.DataFrame(Ribose_allNH_Lig_Resdict)
Ribose_allNH_df.to_csv(Ribose_allNH)
Ribose_CommonH_df=pd.DataFrame(Ribose_CommonH_Lig_Resdict)
Ribose_CommonH_df.to_csv(Ribose_CommonH)
Ribose_CommonNH_df=pd.DataFrame(Ribose_CommonNH_Lig_Resdict)
Ribose_CommonNH_df.to_csv(Ribose_CommonNH)
METHI_allH_df=pd.DataFrame(METHI_allH_Lig_Resdict)
METHI_allH_df.to_csv(METHI_allH)
METHI_allNH_df=pd.DataFrame(METHI_allNH_Lig_Resdict)
METHI_allNH_df.to_csv(METHI_allNH)
METHI_CommonH_df=pd.DataFrame(METHI_CommonH_Lig_Resdict)
METHI_CommonH_df.to_csv(METHI_CommonH)
METHI_CommonNH_df=pd.DataFrame(METHI_CommonNH_Lig_Resdict)
METHI_CommonNH_df.to_csv(METHI_CommonNH)
#############END of filenames for csv download ############
###Link to download file
#print '<p style=text-align:center >Download: <a href=%s download>Interaction Data</a>'% SubstructureExcel
print "<p align='center'>################################################################","</p>"
print "<p style='font-size:20px; color:blue' align='center'>Adenin subgroup structure","</p>"
print '<p style=text-align:center >Download: <a href=%s download>All bonded, </a>' % Adenin_allH
print ' <a href=%s download>All non-bonded, </a>' % Adenin_allNH
print ' <a href=%s download>Common bonded, </a>' % Adenin_CommonH
print ' <a href=%s download>Common non-bonded</a>' % Adenin_CommonNH,"</p>"
print "<p align='center'>################################################################" ,"</p>"
print "<button class='collapsible'>I. All bonded interactions - Click to read basic statistical information</button>"#Start of click drop down
print "<div class='contentsection'>"
print "<p style='font-size:20px; color:black' align='center'>"
print " Number of Ligand atoms:", len(Adenin), "<br/>"
print " Number of PDB IDs:", len(Adenin_allNH_Lig_Resdict.keys()), "<br/>"
print "<div class='row'>"# splitting into two columns
print "<div class='column'>"# splitting into two columns
if bool(Adenin_allH_Lig_Resdict):
print "Statistics of Bonded Interactions"
percentage(Adenin_allH_Lig_Resdict,Adenin)#the function prints its own HTML; wrapping it in print would emit a stray "None"
if bool(Adenin_allH_Lig_Resdict_distance):
distance_calc(Adenin_allH_Lig_Resdict_distance)
print "</div>"# closing of first columns
print "<div class='column'>"
if bool(Adenin_allNH_Lig_Resdict):
print "Statistics of Non-Bonded Interactions", "<br/>"
percentage(Adenin_allNH_Lig_Resdict,Adenin)#the function prints its own HTML; wrapping it in print would emit a stray "None"
if bool(Adenin_allNH_Lig_Resdict_distance):
distance_calc(Adenin_allNH_Lig_Resdict_distance)
print "</div>"# closing of second columns
print "</div>"#closing of row
print "</div>"#End of click drop down
print "<br/>"
print """
<div class="grid">
<div class="col-2-3">
<div class="module">
"""#Initialization of Adenin grid section
if bool(Adenin_allH_Lig_Resdict):
print "<p style='font-size:20px; color:brown'>List of residues: hydrogen bond contacts" ,"</p>"
df_Adenin_allH_Lig_Resdict=pd.DataFrame.from_dict(Adenin_allH_Lig_Resdict).fillna('NIL')
print (df_Adenin_allH_Lig_Resdict.to_html(justify='center'))
#print pd.DataFrame.from_dict(Adenin_allH_Lig_Resdict).to_html(justify='center')#for all ligand atoms - hydrogen bonded
else:
print "<p style='font-size:20px; color:brown'>List of residues: hydrogen bond contacts" ,"</p>"
print "No Interactions"
####################All Residues Colored Table for Adenin: H bonded################################
H_templist4graph=[]
H_graphdic1={}
if bool(Adenin_graphdicH):
for k,v in Adenin_graphdicH.iteritems():
#print k
for value in v:
H_templist4graph.append(value)
samp=sorted(list(set(H_templist4graph)))
H_graphdic1.setdefault('%s'%k,[]).append(', '.join(samp))
H_templist4graph=[]
length_listofcompiledresidues=[]
for key,value in H_graphdic1.iteritems():
for i in value:
valu=i.split(', ')
#print valu
#print len(valu)
length_listofcompiledresidues.append(len(valu))
length_ofcell=max(length_listofcompiledresidues)
print "<p style='font-size:20px; color:brown'> Physicochemical property based color-coding of amino acids: hydrogen bond contacts ","</p>"
print "<table border='1'>"
print "<tr>"
print "<th col width='60'>Ligand Atoms</th>"
print "<th colspan='%d'>List of residues from analysed protein structures</th>"% length_ofcell
print "</tr>"
for key in sorted(H_graphdic1.iterkeys()):
print "<tr>"#open the table row for this ligand atom; it is closed by the "</tr>" below
print "<td align='center'>%s</td>" %key
for g1 in H_graphdic1[key]:
dat1= g1.split(', ')
for H_k3 in dat1:
print "<td align='center'>"
#print k3
if H_k3.startswith(('ALA','ILE','LEU','MET','MSE','VAL')):
print "<b><font color='pink'>%s</font></b>"%H_k3
if H_k3.startswith(('PHE','TRP', 'TYR')):
print " <b><font color='orange'>%s</font></b>"%H_k3
if H_k3.startswith(('LYS','ARG', 'HIS')):
print " <b><font color='red'>%s</font></b>"%H_k3
if H_k3.startswith(('GLU','ASP')):
print " <b><font color='green'>%s</font></b>"%H_k3
if H_k3.startswith(('ASN','GLN','SER','THR')):
print " <b><font color='blue'>%s</font></b>"%H_k3
if H_k3.startswith(('GLY','PRO')):
print " <b><font color='magenta'>%s</font></b>"%H_k3
if H_k3.startswith(('CYS','CME')):
print " <b><font color='yellow'>%s</font></b>"%H_k3
print "</td>"
#print "<tr>"
print "</tr>"
print "</table>"
else:
print "<p style='font-size:20px; color:brown'> Physicochemical property based color-coding of amino acids: hydrogen bond contacts ","</p>"
print "No Interactions"
if bool(Adenin_allNH_Lig_Resdict):
print "<p style='font-size:20px; color:brown'>List of residues: non-bonded contacts","</p>"
df_Adenin_allNH_Lig_Resdict=pd.DataFrame.from_dict(Adenin_allNH_Lig_Resdict).fillna('NIL')
print (df_Adenin_allNH_Lig_Resdict.to_html(justify='center'))
#print pd.DataFrame.from_dict(Adenin_allNH_Lig_Resdict).to_html(justify='center')#for all ligand atoms - Non hydrogen bonded
else:
print "<p style='font-size:20px; color:brown'>List of residues: non-bonded contacts","</p>"
print "No Interactions"
####################All Residues Colored Table for NON bonded################################
NH_templist4graph=[]
NH_graphdic1={}
if bool(Adenin_graphdicNH):
for k,v in Adenin_graphdicNH.iteritems():
#print k
for value in v:
NH_templist4graph.append(value)
samp=sorted(list(set(NH_templist4graph)))
NH_graphdic1.setdefault('%s'%k,[]).append(', '.join(samp))
#print temlist
#print samp
NH_templist4graph=[]
length_listofcompiledresidues=[]
for key,value in NH_graphdic1.iteritems():
for i in value:
valu=i.split(', ')
#print valu
#print len(valu)
length_listofcompiledresidues.append(len(valu))
length_ofcell=max(length_listofcompiledresidues)
#print "<br/>"
print "<p style='font-size:20px; color:brown'> Physicochemical property based color-coding of amino acids: non-bonded contacts","</p>"
print "<table border='1'>"
print "<tr>"
print "<th col width='60'>Ligand Atoms</th>"
print "<th colspan='%d'>List of residues from analysed protein structures</th>"% length_ofcell
print "</tr>"
for key in sorted(NH_graphdic1.iterkeys()):
print "<tr>"#open the table row for this ligand atom; it is closed by the "</tr>" below
print "<td align='center'>%s</td>" %key
for g1 in NH_graphdic1[key]:
dat1= g1.split(', ')
for NH_k3 in dat1:
print "<td align='center'>"
#print k3
if NH_k3.startswith(('ALA','ILE','LEU','MET','MSE','VAL')):
print "<b><font color='pink'>%s</font></b>"%NH_k3
if NH_k3.startswith(('PHE','TRP', 'TYR')):
print " <b><font color='orange'>%s</font></b>"%NH_k3
if NH_k3.startswith(('LYS','ARG', 'HIS')):
print " <b><font color='red'>%s</font></b>"%NH_k3
if NH_k3.startswith(('GLU','ASP')):
print " <b><font color='green'>%s</font></b>"%NH_k3
if NH_k3.startswith(('ASN','GLN','SER','THR')):
print " <b><font color='blue'>%s</font></b>"%NH_k3
if NH_k3.startswith(('GLY','PRO')):
print " <b><font color='magenta'>%s</font></b>"%NH_k3
if NH_k3.startswith(('CYS','CME')):
print " <b><font color='yellow'>%s</font></b>"%NH_k3
print "</td>"
#print "<tr>"
print "</tr>"
print "</table>"
else:
print "<p style='font-size:20px; color:brown'> Physicochemical property based color-coding of amino acids: non-bonded contacts","</p>"
print "No Interactions"
print """
</div>
</div>
"""#closing of col-2-3 and module
print """
<div class="col-2-3">
<div class="module">
"""
if bool(Adenin_CommonH_Lig_Resdict):
print "<p style='font-size:20px; color:brown'>List of common residues: hydrogen bond contacts" ,"</p>"
df_Adenin_CommonH_Lig_Resdict=pd.DataFrame.from_dict(Adenin_CommonH_Lig_Resdict).fillna('NIL')
print (df_Adenin_CommonH_Lig_Resdict.to_html(justify='center'))
#print pd.DataFrame.from_dict(Adenin_CommonH_Lig_Resdict).to_html(justify='center')#for common ligand atoms - hydrogen bonded
else:
print "<p style='font-size:20px; color:brown'>List of common residues: hydrogen bond contacts" ,"</p>"
print "<p> No Common Interactions</p>"
####################Common Residues Colored Table for Adenin : H bonded################################
CommH_templist4graph=[]
CommH_graphdic1={}
if bool(Adenin_common_graphdicH):
for k,v in Adenin_common_graphdicH.iteritems():
for value in v:
CommH_templist4graph.append(value)
samp=sorted(list(set(CommH_templist4graph)))
CommH_graphdic1.setdefault('%s'%k,[]).append(', '.join(samp))
CommH_templist4graph=[]
length_listofcompiled_Common_residues=[]
for key,value in CommH_graphdic1.iteritems():
for i in value:
valu=i.split(', ')
length_listofcompiled_Common_residues.append(len(valu))
length_ofcell=max(length_listofcompiled_Common_residues)
#print "<br/>"
print "<p style='font-size:20px; color:brown'> Physicochemical property based color-coding of common amino acids: hydrogen bonds contacts ","</p>"
print "<table border='1'>"
print "<tr>"
print "<th width='60'>Ligand Atoms</th>"
print "<th colspan='%d'>List of common residues from analysed protein structures</th>"% length_ofcell
print "</tr>"
for key in sorted(CommH_graphdic1.iterkeys()):
print "<tr>"
print "<td align='center'>%s</td>" %key
for g1 in CommH_graphdic1[key]:
dat1= g1.split(', ')
for H_k3 in dat1:
print "<td align='center'>"
#print k3
if H_k3.startswith(('ALA','ILE','LEU','MET','MSE','VAL')):
print "<b><font color='pink'>%s</font></b>"%H_k3
if H_k3.startswith(('PHE','TRP', 'TYR')):
print " <b><font color='orange'>%s</font></b>"%H_k3
if H_k3.startswith(('LYS','ARG', 'HIS')):
print " <b><font color='red'>%s</font></b>"%H_k3
if H_k3.startswith(('GLU','ASP')):
print " <b><font color='green'>%s</font></b>"%H_k3
if H_k3.startswith(('ASN','GLN','SER','THR')):
print " <b><font color='blue'>%s</font></b>"%H_k3
if H_k3.startswith(('GLY','PRO')):
print " <b><font color='magenta'>%s</font></b>"%H_k3
if H_k3.startswith(('CYS','CME')):
print " <b><font color='yellow'>%s</font></b>"%H_k3
print "</td>"
#print "<tr>"
print "</tr>"
print "</table>"
else:
print "<p style='font-size:20px; color:brown'> Physicochemical property based color-coding of common amino acids: hydrogen bonds contacts ","</p>"
print "<p> No Common Atoms Identified</p>"
if bool(Adenin_CommonNH_Lig_Resdict):
print "<p style='font-size:20px; color:brown'>List of common residues: non-bonded contacts","</p>"
df_Adenin_CommonNH_Lig_Resdict=pd.DataFrame.from_dict(Adenin_CommonNH_Lig_Resdict).fillna('NIL')
print (df_Adenin_CommonNH_Lig_Resdict.to_html(justify='center'))
#print pd.DataFrame.from_dict(Adenin_CommonNH_Lig_Resdict).to_html(justify='center')#for Common ligand atoms - Non hydrogen bonded
else:
print "<p style='font-size:20px; color:brown'>List of common residues: non-bonded contacts","</p>"
print "No Interactions"
####################Common Residues Colored Table for Adenin: NON bonded################################
CommNH_templist4graph=[]
CommNH_graphdic1={}
if bool(Adenin_common_graphdicNH):
for k,v in Adenin_common_graphdicNH.iteritems():
#print k
for value in v:
CommNH_templist4graph.append(value)
samp=sorted(list(set(CommNH_templist4graph)))
CommNH_graphdic1.setdefault('%s'%k,[]).append(', '.join(samp))
CommNH_templist4graph=[]
length_listofcompiled_Common_residues=[]
for key,value in CommNH_graphdic1.iteritems():
for i in value:
valu=i.split(', ')
length_listofcompiled_Common_residues.append(len(valu))
length_ofcell=max(length_listofcompiled_Common_residues)
print "<p style='font-size:20px; color:brown'> Physicochemical property based color-coding of common amino acids: non-bonded contacts","</p>"
print "<table border='1'>"
print "<tr>"
print "<th width='60'>Ligand Atoms</th>"
print "<th colspan='%d'>List of common residues from analysed protein structures</th>"% length_ofcell
print "</tr>"
for key in sorted(CommNH_graphdic1.iterkeys()):
print "<tr>"
print "<td align='center'>%s</td>" %key
for g1 in CommNH_graphdic1[key]:
dat1= g1.split(', ')
for NH_k3 in dat1:
print "<td align='center'>"
#print k3
if NH_k3.startswith(('ALA','ILE','LEU','MET','MSE','VAL')):
print "<b><font color='pink'>%s</font></b>"%NH_k3
if NH_k3.startswith(('PHE','TRP', 'TYR')):
print " <b><font color='orange'>%s</font></b>"%NH_k3
if NH_k3.startswith(('LYS','ARG', 'HIS')):
print " <b><font color='red'>%s</font></b>"%NH_k3
if NH_k3.startswith(('GLU','ASP')):
print " <b><font color='green'>%s</font></b>"%NH_k3
if NH_k3.startswith(('ASN','GLN','SER','THR')):
print " <b><font color='blue'>%s</font></b>"%NH_k3
if NH_k3.startswith(('GLY','PRO')):
print " <b><font color='magenta'>%s</font></b>"%NH_k3
if NH_k3.startswith(('CYS','CME')):
print " <b><font color='yellow'>%s</font></b>"%NH_k3
print "</td>"
#print "<tr>"
print "</tr>"
print "</table>"
else:
print "<p style='font-size:20px; color:brown'> Physicochemical property based color-coding of common amino acids: non-bonded contacts","</p>"
print "No Interactions"
print """
</div>
</div>
"""# closing of column and module div
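The weblogo sections below strip the trailing residue number from labels such as 'SER123' with `re.split('([0-9])', ...)` and keep only the first piece. A minimal standalone sketch of that step (the helper name is hypothetical):

```python
import re

def residue_name(label):
    # Split at the first digit and keep element 0, the residue name:
    # 'SER123' -> 'SER'. Assumes labels are a residue code followed by
    # the residue number, as produced by the contact tables above.
    return re.split('([0-9])', label)[0]
```

This is the same operation as the inline `se = re.split('([0-9])', items); se[0]` used in each weblogo block.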
###############Web logo for Common Residues Section: H bonding#######################
print """
<div class="col-2-3">
<div class="module">
"""
Adenin_graph_filename = str(uuid.uuid4())
Weblogo_dict_H={}
Weblogo_dict_H1={}
if bool(CommH_graphdic1):
for key in sorted(CommH_graphdic1):
for i in CommH_graphdic1[key]:
tems=i.split(', ')
for items in tems:
se=re.split('([0-9])' , items)
Weblogo_dict_H.setdefault('%s'%key,[]).append(se[0])
for m,n in Weblogo_dict_H.iteritems():
counted=dict(Counter(n))
Weblogo_dict_H1.setdefault('%s'%m,{}).update(counted)
zipfilename='tmp/'+Adenin_graph_filename+'_Hbonding'+'.zip'
Adenin_aminoacid_singlecode={}
aminoacid_code={'CYS': 'C', 'ASP': 'D', 'SER': 'S', 'GLN': 'Q', 'LYS': 'K',
'ILE': 'I', 'PRO': 'P', 'THR': 'T', 'PHE': 'F', 'ASN': 'N',
'GLY': 'G', 'HIS': 'H', 'LEU': 'L', 'ARG': 'R', 'TRP': 'W',
'ALA': 'A', 'VAL':'V', 'GLU': 'E', 'TYR': 'Y', 'MET': 'M'}
recoded={}
for Adenin_ligand_key, Adenin_amino_frequency in Weblogo_dict_H1.iteritems():
#print ligand_key
for i in Adenin_ligand_key:
for Adenin_amino,Adenin_frequency in Adenin_amino_frequency.iteritems():
for Adenin_amino_3letter,Adenin_code_frequency in aminoacid_code.iteritems():
if Adenin_amino == Adenin_amino_3letter:
recoded[Adenin_code_frequency]=Adenin_frequency
Adenin_aminoacid_singlecode.setdefault('%s'%Adenin_ligand_key,{}).update(recoded)
recoded={}
Adenin_Frequency=1
instances=[]
Adenin_weblogo_collection=[]
for Adenin_ligand_key1, amino_frequency1 in Adenin_aminoacid_singlecode.iteritems():
for Adenin_Amino1, Adenin_number in amino_frequency1.iteritems():
Adenin_Frequency=1
while Adenin_Frequency <= Adenin_number:
instances.append(Seq(Adenin_Amino1, IUPAC.protein))
Adenin_Frequency=Adenin_Frequency+1
Adenin_motif = motifs.create(instances)
Adenin_mymotif ='tmp/'+ Adenin_graph_filename+ '_H_'+ Adenin_ligand_key1 +'.svg'
Adenin_motif.weblogo('%s'%Adenin_mymotif,format='SVG',xaxis_label= '%s' %Adenin_ligand_key1,show_errorbars= False, color_scheme= 'color_chemistry')
Adenin_weblogo_collection.append(Adenin_mymotif)
instances=[]
weblogo_images=' '.join(str(x) for x in Adenin_weblogo_collection)
print "<p style='font-size:20px; color:brown'> Weblogo showing the frequency of residues binding to ligand atoms for the selected structures:</p>"
print "<div class='weblogo_row'>"
for Adenin_image in sorted(Adenin_weblogo_collection):
print "<div class='weblogo_column'>"
print "<embed src='%s#page=1&view=FitH ' />" %Adenin_image
#print "<iframe src='%s#page=1&view=FitH ' width='200' height='100' border='0'></iframe>"%Adenin_image
print "</div>"
print "</div>"
####zip file
with ZipFile('%s'%zipfilename, 'w') as Adenin_myzip:
for Adenin_Images in Adenin_weblogo_collection:
Adenin_myzip.write(Adenin_Images)
else:
print "<p style='font-size:20px; color:brown'> Weblogo for Common Bonded Interactions:</p>"
print "No Interactions"
###############Web logo for Common Residues Section: NON bonding#######################
Weblogo_dict_NH={}
Weblogo_dict_NH1={}
if bool(CommNH_graphdic1):
for key in sorted(CommNH_graphdic1):
for i in CommNH_graphdic1[key]:
tems=i.split(', ')
for items in tems:
se=re.split('([0-9])' , items)
Weblogo_dict_NH.setdefault('%s'%key,[]).append(se[0])
for m,n in Weblogo_dict_NH.iteritems():
counted=dict(Counter(n))
Weblogo_dict_NH1.setdefault('%s'%m,{}).update(counted)
zipfilename='tmp/'+Adenin_graph_filename+'_NHbonding'+'.zip'
Adenin_aminoacid_singlecode={}
recoded={}
for Adenin_ligand_key, Adenin_amino_frequency in Weblogo_dict_NH1.iteritems():
#print ligand_key
for i in Adenin_ligand_key:
for Adenin_amino,Adenin_frequency in Adenin_amino_frequency.iteritems():
for Adenin_amino_3letter,Adenin_code_frequency in aminoacid_code.iteritems():
if Adenin_amino == Adenin_amino_3letter:
recoded[Adenin_code_frequency]=Adenin_frequency
Adenin_aminoacid_singlecode.setdefault('%s'%Adenin_ligand_key,{}).update(recoded)
recoded={}
Adenin_Frequency=1
instances=[]
Adenin_weblogo_collection=[]
for Adenin_ligand_key1, amino_frequency1 in Adenin_aminoacid_singlecode.iteritems():
for Adenin_Amino1, Adenin_number in amino_frequency1.iteritems():
Adenin_Frequency=1
while Adenin_Frequency <= Adenin_number:
instances.append(Seq(Adenin_Amino1, IUPAC.protein))
Adenin_Frequency=Adenin_Frequency+1
Adenin_motif = motifs.create(instances)
Adenin_mymotif ='tmp/'+ Adenin_graph_filename+ '_NH_'+ Adenin_ligand_key1 +'.svg'
Adenin_motif.weblogo('%s'%Adenin_mymotif,format='SVG',xaxis_label= '%s' %Adenin_ligand_key1,show_errorbars= False, color_scheme= 'color_chemistry')
Adenin_weblogo_collection.append(Adenin_mymotif)
instances=[]
weblogo_images=' '.join(str(x) for x in Adenin_weblogo_collection)
print "<p style='font-size:20px; color:brown'> Weblogo showing the frequency of residues binding to ligand atoms for the selected structures:</p>"
print "<div class='weblogo_row'>" #initiation of weblog_row
for Adenin_image in sorted(Adenin_weblogo_collection):
print "<div class='weblogo_column'>" #initiation of weblog_column
print "<embed src='%s#page=1&view=FitH ' />" %Adenin_image
#print "<iframe src='%s#page=1&view=FitH ' width='200' height='200' border='0'></iframe>"%Adenin_image
print "</div>"#closing of weblog_column
print "</div>"#closing of weblog_row
####zip file
with ZipFile('%s'%zipfilename, 'w') as Adenin_myzip:
for Adenin_Images in Adenin_weblogo_collection:
Adenin_myzip.write(Adenin_Images)
else:
print "<p style='font-size:20px; color:brown'> Weblogo for Common Nonbonded Interactions:</p>"
print "No Interactions"
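The SVG logos generated in this section are packaged for download with `ZipFile`, as in the `with ZipFile(...) as myzip` blocks above. A self-contained sketch of that packaging step (the helper name is hypothetical):

```python
from zipfile import ZipFile

def bundle_images(image_paths, zip_path):
    # Write each image file into a fresh zip archive, mirroring the
    # myzip.write(...) loop used for the weblogo collections above.
    with ZipFile(zip_path, 'w') as zf:
        for path in image_paths:
            zf.write(path)
    return zip_path
```

`ZipFile.write` stores each file under its given path inside the archive, so the 'tmp/' prefix of the generated SVGs is preserved in the download.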
print """
</div>
</div>
</div>
""" # closing of Adenin section
#####################################################
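The repeated `startswith` if-chains in every colored table map a residue's physicochemical class to a font color. A hypothetical refactor sketch of that mapping as a lookup table (the helper and constant names are not in the original script):

```python
# Physicochemical classes and the colors used by the tables above.
AMINO_COLOR_GROUPS = [
    (('ALA', 'ILE', 'LEU', 'MET', 'MSE', 'VAL'), 'pink'),
    (('PHE', 'TRP', 'TYR'), 'orange'),
    (('LYS', 'ARG', 'HIS'), 'red'),
    (('GLU', 'ASP'), 'green'),
    (('ASN', 'GLN', 'SER', 'THR'), 'blue'),
    (('GLY', 'PRO'), 'magenta'),
    (('CYS', 'CME'), 'yellow'),
]

def residue_color_html(residue):
    # Return the colored markup for a residue label, or the plain label
    # when the residue belongs to none of the listed classes.
    for prefixes, color in AMINO_COLOR_GROUPS:
        if residue.startswith(prefixes):
            return "<b><font color='%s'>%s</font></b>" % (color, residue)
    return residue
```

One call to `residue_color_html(H_k3)` could then replace each seven-branch if-chain, keeping the color scheme in a single place.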
print "<p align='center'>################################################################","</p>"
print "<p style='font-size:20px; color:blue' align='center'>Ribose sub group structure","</p>"
print '<p style=text-align:center>Download: <a href=%s download>All Bonded,</a>' % Ribose_allH
print ' <a href=%s download>All Non-bonded,</a>' % Ribose_allNH
print ' <a href=%s download>Common Bonded,</a>' % Ribose_CommonH
print ' <a href=%s download>Common Non-bonded,</a>' % Ribose_CommonNH ,'</p>'
print "<p align='center'>################################################################" ,"</p>"
print "<button class='collapsible'>I. All bonded interactions - Click to read basic statistical information</button>"#Start of click drop down
print "<div class='contentsection'>"
print "<p style='font-size:20px; color:black' align='center'>"
print " Number of Ligand atoms:", len(Ribose), "<br/>"
print " Number of PDB IDs:", len(Ribose_allNH_Lig_Resdict.keys()), "<br/>"
print "<div class='row'>"# splitting into two columns
print "<div class='column'>"# first column
if bool(Ribose_allH_Lig_Resdict):
print "Statistics of Bonded Interactions", "<br/>"
print percentage(Ribose_allH_Lig_Resdict,Ribose)
if bool(Ribose_allH_Lig_Resdict_distance):
print distance_calc(Ribose_allH_Lig_Resdict_distance)
print "</div>"# closing of first columns
print "<div class='column'>"
if bool(Ribose_allNH_Lig_Resdict):
print "Statistics of Non-Bonded Interactions", "<br/>"
print percentage(Ribose_allNH_Lig_Resdict,Ribose)
if bool(Ribose_allNH_Lig_Resdict_distance):
print distance_calc(Ribose_allNH_Lig_Resdict_distance)
print "</div>"# closing of second columns
print "</div>"#closing of row
print "</div>"#End of click drop down
print "<br/>"
print """
<div class="grid">
<div class="col-2-3">
<div class="module">
"""#start of Ribose grid section
if bool(Ribose_allH_Lig_Resdict):
print "<p style='font-size:20px; color:brown'>List of residues: hydrogen bonds contacts" ,"</p>"
df_Ribose_allH_Lig_Resdict=pd.DataFrame.from_dict(Ribose_allH_Lig_Resdict).fillna('NIL')
print (df_Ribose_allH_Lig_Resdict.to_html(justify='center'))
#print pd.DataFrame.from_dict(Ribose_allH_Lig_Resdict).to_html(justify='center')#for all ligand atoms - hydrogen bonded
else:
print "<p style='font-size:20px; color:brown'>List of residues: hydrogen bonds contacts" ,"</p>"
print "No Interactions"
####################All Residues Colored Table for Ribose: H bonded################################
H_templist4graph=[]
H_graphdic1={}
if bool(Ribose_graphdicH):
for k,v in Ribose_graphdicH.iteritems():
#print k
for value in v:
H_templist4graph.append(value)
samp=sorted(list(set(H_templist4graph)))
H_graphdic1.setdefault('%s'%k,[]).append(', '.join(samp))
H_templist4graph=[]
length_listofcompiledresidues=[]
for key,value in H_graphdic1.iteritems():
for i in value:
valu=i.split(', ')
#print valu
#print len(valu)
length_listofcompiledresidues.append(len(valu))
length_ofcell=max(length_listofcompiledresidues)
print "<p style='font-size:20px; color:brown'> Physicochemical property based color-coding of common amino acids: hydrogen bonds contacts ","</p>"
print "<table border='1'>"
print "<tr>"
print "<th width='60'>Ligand Atoms</th>"
print "<th colspan='%d'>List of residues from analysed protein structures</th>"% length_ofcell
print "</tr>"
for key in sorted(H_graphdic1.iterkeys()):
print "<tr>"
print "<td align='center'>%s</td>" %key
for g1 in H_graphdic1[key]:
dat1= g1.split(', ')
for H_k3 in dat1:
print "<td align='center'>"
#print k3
if H_k3.startswith(('ALA','ILE','LEU','MET','MSE','VAL')):
print "<b><font color='pink'>%s</font></b>"%H_k3
if H_k3.startswith(('PHE','TRP', 'TYR')):
print " <b><font color='orange'>%s</font></b>"%H_k3
if H_k3.startswith(('LYS','ARG', 'HIS')):
print " <b><font color='red'>%s</font></b>"%H_k3
if H_k3.startswith(('GLU','ASP')):
print " <b><font color='green'>%s</font></b>"%H_k3
if H_k3.startswith(('ASN','GLN','SER','THR')):
print " <b><font color='blue'>%s</font></b>"%H_k3
if H_k3.startswith(('GLY','PRO')):
print " <b><font color='magenta'>%s</font></b>"%H_k3
if H_k3.startswith(('CYS','CME')):
print " <b><font color='yellow'>%s</font></b>"%H_k3
print "</td>"
#print "<tr>"
print "</tr>"
print "</table>"
else:
print "<p style='font-size:20px; color:brown'> Physicochemical property based color-coding of common amino acids: hydrogen bonds contacts ","</p>"
print "No Interactions"
if bool(Ribose_allNH_Lig_Resdict):
print "<p style='font-size:20px; color:brown'>List of residues: non-bonded contacts","</p>"
df_Ribose_allNH_Lig_Resdict=pd.DataFrame.from_dict(Ribose_allNH_Lig_Resdict).fillna('NIL')
print (df_Ribose_allNH_Lig_Resdict.to_html(justify='center'))
#print pd.DataFrame.from_dict(Ribose_allNH_Lig_Resdict).to_html(justify='center')#for all ligand atoms - Non hydrogen bonded
else:
print "<p style='font-size:20px; color:brown'>List of residues: non-bonded contacts","</p>"
print "No Interactions"
####################All Residues Colored Table for NON bonded################################
NH_templist4graph=[]
NH_graphdic1={}
if bool(Ribose_graphdicNH):
for k,v in Ribose_graphdicNH.iteritems():
#print k
for value in v:
NH_templist4graph.append(value)
samp=sorted(list(set(NH_templist4graph)))
NH_graphdic1.setdefault('%s'%k,[]).append(', '.join(samp))
#print temlist
#print samp
NH_templist4graph=[]
length_listofcompiledresidues=[]
for key,value in NH_graphdic1.iteritems():
for i in value:
valu=i.split(', ')
#print valu
#print len(valu)
length_listofcompiledresidues.append(len(valu))
length_ofcell=max(length_listofcompiledresidues)
#print "<br/>"
print "<p style='font-size:20px; color:brown'> Physicochemical property based color-coding of amino acids: non-bonded contacts","</p>"
print "<table border='1'>"
print "<tr>"
print "<th width='60'>Ligand Atoms</th>"
print "<th colspan='%d'>List of residues from analysed protein structures</th>"% length_ofcell
print "</tr>"
for key in sorted(NH_graphdic1.iterkeys()):
print "<tr>"
print "<td align='center'>%s</td>" %key
for g1 in NH_graphdic1[key]:
dat1= g1.split(', ')
for NH_k3 in dat1:
print "<td align='center'>"
#print k3
if NH_k3.startswith(('ALA','ILE','LEU','MET','MSE','VAL')):
print "<b><font color='pink'>%s</font></b>"%NH_k3
if NH_k3.startswith(('PHE','TRP', 'TYR')):
print " <b><font color='orange'>%s</font></b>"%NH_k3
if NH_k3.startswith(('LYS','ARG', 'HIS')):
print " <b><font color='red'>%s</font></b>"%NH_k3
if NH_k3.startswith(('GLU','ASP')):
print " <b><font color='green'>%s</font></b>"%NH_k3
if NH_k3.startswith(('ASN','GLN','SER','THR')):
print " <b><font color='blue'>%s</font></b>"%NH_k3
if NH_k3.startswith(('GLY','PRO')):
print " <b><font color='magenta'>%s</font></b>"%NH_k3
if NH_k3.startswith(('CYS','CME')):
print " <b><font color='yellow'>%s</font></b>"%NH_k3
print "</td>"
#print "<tr>"
print "</tr>"
print "</table>"
else:
print "<p style='font-size:20px; color:brown'> Physicochemical property based color-coding of amino acids: non-bonded contacts","</p>"
print "No Interactions"
print """
</div>
</div>
"""#closing of first col-2-3 and module
print """
<div class="col-2-3">
<div class="module">
""" #initializing of second column
if bool(Ribose_CommonH_Lig_Resdict):
print "<p style='font-size:20px; color:brown'>List of common residues: hydrogen bonds contacts" ,"</p>"
df_Ribose_CommonH_Lig_Resdict=pd.DataFrame.from_dict(Ribose_CommonH_Lig_Resdict).fillna('NIL')
print (df_Ribose_CommonH_Lig_Resdict.to_html(justify='center'))
#print pd.DataFrame.from_dict(Ribose_CommonH_Lig_Resdict).to_html(justify='center')#for common ligand atoms - hydrogen bonded
else:
print "<p style='font-size:20px; color:brown'>List of common residues: hydrogen bonds contacts" ,"</p>"
print "No Interactions"
####################Common Residues Colored Table for Ribose : H bonded################################
CommH_templist4graph=[]
CommH_graphdic1={}
if bool(Ribose_common_graphdicH):
for k,v in Ribose_common_graphdicH.iteritems():
for value in v:
CommH_templist4graph.append(value)
samp=sorted(list(set(CommH_templist4graph)))
CommH_graphdic1.setdefault('%s'%k,[]).append(', '.join(samp))
CommH_templist4graph=[]
length_listofcompiled_Common_residues=[]
for key,value in CommH_graphdic1.iteritems():
for i in value:
valu=i.split(', ')
length_listofcompiled_Common_residues.append(len(valu))
length_ofcell=max(length_listofcompiled_Common_residues)
#print "<br/>"
print "<p style='font-size:20px; color:brown'> Physicochemical property based color-coding of common amino acids: hydrogen bonds contacts ","</p>"
print "<table border='1'>"
print "<tr>"
print "<th width='60'>Ligand Atoms</th>"
print "<th colspan='%d'>List of common residues from analysed protein structures</th>"% length_ofcell
print "</tr>"
for key in sorted(CommH_graphdic1.iterkeys()):
print "<tr>"
print "<td align='center'>%s</td>" %key
for g1 in CommH_graphdic1[key]:
dat1= g1.split(', ')
for H_k3 in dat1:
print "<td align='center'>"
#print k3
if H_k3.startswith(('ALA','ILE','LEU','MET','MSE','VAL')):
print "<b><font color='pink'>%s</font></b>"%H_k3
if H_k3.startswith(('PHE','TRP', 'TYR')):
print " <b><font color='orange'>%s</font></b>"%H_k3
if H_k3.startswith(('LYS','ARG', 'HIS')):
print " <b><font color='red'>%s</font></b>"%H_k3
if H_k3.startswith(('GLU','ASP')):
print " <b><font color='green'>%s</font></b>"%H_k3
if H_k3.startswith(('ASN','GLN','SER','THR')):
print " <b><font color='blue'>%s</font></b>"%H_k3
if H_k3.startswith(('GLY','PRO')):
print " <b><font color='magenta'>%s</font></b>"%H_k3
if H_k3.startswith(('CYS','CME')):
print " <b><font color='yellow'>%s</font></b>"%H_k3
print "</td>"
#print "<tr>"
print "</tr>"
print "</table>"
else:
print "<p style='font-size:20px; color:brown'> Physicochemical property based color-coding of common amino acids: hydrogen bonds contacts ","</p>"
print "No Interactions"
if bool(Ribose_CommonNH_Lig_Resdict):
print "<p style='font-size:20px; color:brown'>List of common residues: non-bonded contacts","</p>"
df_Ribose_CommonNH_Lig_Resdict=pd.DataFrame.from_dict(Ribose_CommonNH_Lig_Resdict).fillna('NIL')
print (df_Ribose_CommonNH_Lig_Resdict.to_html(justify='center'))
#print pd.DataFrame.from_dict(Ribose_CommonNH_Lig_Resdict).to_html(justify='center')#for Common ligand atoms - Non hydrogen bonded
else:
print "<p style='font-size:20px; color:brown'>List of common residues: non-bonded contacts","</p>"
print "No Interactions"
####################Common Residues Colored Table for Ribose: NON bonded################################
CommNH_templist4graph=[]
CommNH_graphdic1={}
if bool(Ribose_common_graphdicNH):
for k,v in Ribose_common_graphdicNH.iteritems():
#print k
for value in v:
CommNH_templist4graph.append(value)
samp=sorted(list(set(CommNH_templist4graph)))
CommNH_graphdic1.setdefault('%s'%k,[]).append(', '.join(samp))
CommNH_templist4graph=[]
length_listofcompiled_Common_residues=[]
for key,value in CommNH_graphdic1.iteritems():
for i in value:
valu=i.split(', ')
length_listofcompiled_Common_residues.append(len(valu))
length_ofcell=max(length_listofcompiled_Common_residues)
print "<p style='font-size:20px; color:brown'> Physicochemical property based color-coding of common amino acids: non-bonded contacts","</p>"
print "<table border='1'>"
print "<tr>"
print "<th width='60'>Ligand Atoms</th>"
print "<th colspan='%d'>List of common residues from analysed protein structures</th>"% length_ofcell
print "</tr>"
for key in sorted(CommNH_graphdic1.iterkeys()):
print "<tr>"
print "<td align='center'>%s</td>" %key
for g1 in CommNH_graphdic1[key]:
dat1= g1.split(', ')
for NH_k3 in dat1:
print "<td align='center'>"
#print k3
if NH_k3.startswith(('ALA','ILE','LEU','MET','MSE','VAL')):
print "<b><font color='pink'>%s</font></b>"%NH_k3
if NH_k3.startswith(('PHE','TRP', 'TYR')):
print " <b><font color='orange'>%s</font></b>"%NH_k3
if NH_k3.startswith(('LYS','ARG', 'HIS')):
print " <b><font color='red'>%s</font></b>"%NH_k3
if NH_k3.startswith(('GLU','ASP')):
print " <b><font color='green'>%s</font></b>"%NH_k3
if NH_k3.startswith(('ASN','GLN','SER','THR')):
print " <b><font color='blue'>%s</font></b>"%NH_k3
if NH_k3.startswith(('GLY','PRO')):
print " <b><font color='magenta'>%s</font></b>"%NH_k3
if NH_k3.startswith(('CYS','CME')):
print " <b><font color='yellow'>%s</font></b>"%NH_k3
print "</td>"
#print "<tr>"
print "</tr>"
print "</table>"
else:
print "<p style='font-size:20px; color:brown'> Physicochemical property based color-coding of common amino acids: non-bonded contacts","</p>"
print "No Interactions"
print """
</div>
</div>
"""# closing of second column and module div
###############Web logo for Common Residues Section: H bonding#######################
print """
<div class="col-2-3">
<div class="module">
"""
Ribose_graph_filename = str(uuid.uuid4())
Weblogo_dict_H={}
Weblogo_dict_H1={}
if bool(CommH_graphdic1):
for key in sorted(CommH_graphdic1):
for i in CommH_graphdic1[key]:
tems=i.split(', ')
for items in tems:
se=re.split('([0-9])' , items)
Weblogo_dict_H.setdefault('%s'%key,[]).append(se[0])
for m,n in Weblogo_dict_H.iteritems():
counted=dict(Counter(n))
Weblogo_dict_H1.setdefault('%s'%m,{}).update(counted)
zipfilename='tmp/'+Ribose_graph_filename+'_Hbonding'+'.zip'
Ribose_aminoacid_singlecode={}
aminoacid_code={'CYS': 'C', 'ASP': 'D', 'SER': 'S', 'GLN': 'Q', 'LYS': 'K',
'ILE': 'I', 'PRO': 'P', 'THR': 'T', 'PHE': 'F', 'ASN': 'N',
'GLY': 'G', 'HIS': 'H', 'LEU': 'L', 'ARG': 'R', 'TRP': 'W',
'ALA': 'A', 'VAL':'V', 'GLU': 'E', 'TYR': 'Y', 'MET': 'M'}
recoded={}
for Ribose_ligand_key, Ribose_amino_frequency in Weblogo_dict_H1.iteritems():
#print ligand_key
for i in Ribose_ligand_key:
for Ribose_amino,Ribose_frequency in Ribose_amino_frequency.iteritems():
for Ribose_amino_3letter,Ribose_code_frequency in aminoacid_code.iteritems():
if Ribose_amino == Ribose_amino_3letter:
recoded[Ribose_code_frequency]=Ribose_frequency
Ribose_aminoacid_singlecode.setdefault('%s'%Ribose_ligand_key,{}).update(recoded)
recoded={}
Ribose_Frequency=1
instances=[]
Ribose_weblogo_collection=[]
for Ribose_ligand_key1, amino_frequency1 in Ribose_aminoacid_singlecode.iteritems():
for Ribose_Amino1, Ribose_number in amino_frequency1.iteritems():
Ribose_Frequency=1
while Ribose_Frequency <= Ribose_number:
instances.append(Seq(Ribose_Amino1, IUPAC.protein))
Ribose_Frequency=Ribose_Frequency+1
Ribose_motif = motifs.create(instances)
Ribose_mymotif ='tmp/'+ Ribose_graph_filename+ '_H_'+ Ribose_ligand_key1 +'.svg'
Ribose_motif.weblogo('%s'%Ribose_mymotif,format='SVG',xaxis_label= '%s' %Ribose_ligand_key1,show_errorbars= False, color_scheme= 'color_chemistry')
Ribose_weblogo_collection.append(Ribose_mymotif)
instances=[]
weblogo_images=' '.join(str(x) for x in Ribose_weblogo_collection)
print "<p style='font-size:20px; color:brown'> Weblogo showing the frequency of residues binding to ligand atoms for the selected structures:</p>"
print "<div class='weblogo_row'>"
for Ribose_image in sorted(Ribose_weblogo_collection):
print "<div class='weblogo_column'>"
print "<embed src='%s#page=1&view=FitH ' />" %Ribose_image
#print "<iframe src='%s#page=1&view=FitH ' width='200' height='100' border='0'></iframe>"%Ribose_image
print "</div>"
print "</div>"
####zip file
with ZipFile('%s'%zipfilename, 'w') as Ribose_myzip:
for Ribose_Images in Ribose_weblogo_collection:
Ribose_myzip.write(Ribose_Images)
else:
print "<p style='font-size:20px; color:brown'> Weblogo for Common Bonded Interactions:</p>"
print "No Interactions"
###############Web logo for Common Residues Section: NON bonding#######################
Weblogo_dict_NH={}
Weblogo_dict_NH1={}
if bool(CommNH_graphdic1):
for key in sorted(CommNH_graphdic1):
for i in CommNH_graphdic1[key]:
tems=i.split(', ')
for items in tems:
se=re.split('([0-9])' , items)
Weblogo_dict_NH.setdefault('%s'%key,[]).append(se[0])
for m,n in Weblogo_dict_NH.iteritems():
counted=dict(Counter(n))
Weblogo_dict_NH1.setdefault('%s'%m,{}).update(counted)
zipfilename='tmp/'+Ribose_graph_filename+'_NHbonding'+'.zip'
Ribose_aminoacid_singlecode={}
recoded={}
for Ribose_ligand_key, Ribose_amino_frequency in Weblogo_dict_NH1.iteritems():
#print ligand_key
for i in Ribose_ligand_key:
for Ribose_amino,Ribose_frequency in Ribose_amino_frequency.iteritems():
for Ribose_amino_3letter,Ribose_code_frequency in aminoacid_code.iteritems():
if Ribose_amino == Ribose_amino_3letter:
recoded[Ribose_code_frequency]=Ribose_frequency
Ribose_aminoacid_singlecode.setdefault('%s'%Ribose_ligand_key,{}).update(recoded)
recoded={}
Ribose_Frequency=1
instances=[]
Ribose_weblogo_collection=[]
for Ribose_ligand_key1, amino_frequency1 in Ribose_aminoacid_singlecode.iteritems():
for Ribose_Amino1, Ribose_number in amino_frequency1.iteritems():
Ribose_Frequency=1
while Ribose_Frequency <= Ribose_number:
instances.append(Seq(Ribose_Amino1, IUPAC.protein))
Ribose_Frequency=Ribose_Frequency+1
Ribose_motif = motifs.create(instances)
Ribose_mymotif ='tmp/'+ Ribose_graph_filename+ '_NH_'+ Ribose_ligand_key1 +'.svg'
Ribose_motif.weblogo('%s'%Ribose_mymotif,format='SVG',xaxis_label= '%s' %Ribose_ligand_key1,show_errorbars= False, color_scheme= 'color_chemistry')
Ribose_weblogo_collection.append(Ribose_mymotif)
instances=[]
weblogo_images=' '.join(str(x) for x in Ribose_weblogo_collection)
print "<p style='font-size:20px; color:brown'> Weblogo showing the frequency of residues binding to ligand atoms for the selected structures:</p>"
print "<div class='weblogo_row'>" #initiation of weblog_row
for Ribose_image in sorted(Ribose_weblogo_collection):
print "<div class='weblogo_column'>" #initiation of weblog_column
print "<embed src='%s#page=1&view=FitH ' />" %Ribose_image
#print "<iframe src='%s#page=1&view=FitH ' width='200' height='200' border='0'></iframe>"%Ribose_image
print "</div>"#closing of weblog_column
print "</div>"#closing of weblog_row
####zip file
with ZipFile('%s'%zipfilename, 'w') as Ribose_myzip:
for Ribose_Images in Ribose_weblogo_collection:
Ribose_myzip.write(Ribose_Images)
else:
print "<p style='font-size:20px; color:brown'> Weblogo for Common Nonbonded Interactions:</p>"
print "No Interactions"
print """
</div>
</div>
</div>
""" # closing of Ribose section
##############################
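The METHI weblogo code below, like the Adenin and Ribose blocks above, expands a {three-letter residue: count} dictionary into a flat list of one-letter codes before calling `motifs.create`. A standalone sketch of that expansion (the helper name and the truncated alphabet are illustrative):

```python
# Subset of the script's three-letter -> one-letter table (illustrative).
AA_ONE_LETTER = {'ASP': 'D', 'LYS': 'K', 'SER': 'S', 'GLY': 'G'}

def expand_to_instances(frequencies):
    # {'ASP': 2, 'LYS': 1} -> ['D', 'D', 'K']: each residue is repeated
    # once per observed contact, the same expansion the while-loops
    # over Adenin_Frequency/Ribose_Frequency perform above.
    instances = []
    for res3, count in sorted(frequencies.items()):
        code = AA_ONE_LETTER.get(res3)
        if code:
            instances.extend([code] * count)
    return instances
```

Residues without a one-letter code (such as modified residues absent from the table) are skipped, matching the original behavior of only recoding exact three-letter matches.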
print "<p align='center'>################################################################","</p>"
print "<p style='font-size:20px; color:blue' align='center'>METHI sub group structure","</p>"
print '<p style=text-align:center>Download: <a href=%s download>All bonded,</a> '% METHI_allH
print ' <a href=%s download>All non-bonded,</a>'% METHI_allNH
print ' <a href=%s download>Common bonded,</a>'% METHI_CommonH
print ' <a href=%s download>Common non-bonded,</a>'% METHI_CommonNH ,'</p>'
print "<p align='center'>################################################################" ,"</p>"
print "<button class='collapsible'>I. All bonded interactions - Click to read basic statistical information</button>"#Start of click drop down
print "<div class='contentsection'>"
print "<p style='font-size:20px; color:black' align='center'>"
print " Number of Ligand atoms:", len(METHI), "<br/>"
print " Number of PDB IDs:", len(METHI_allNH_Lig_Resdict.keys()), "<br/>"
print "<div class='row'>"# splitting into two columns
print "<div class='column'>"# first column
if bool(METHI_allH_Lig_Resdict):
print "Statistics of Bonded Interactions", "<br/>"
print percentage(METHI_allH_Lig_Resdict,METHI)
if bool(METHI_allH_Lig_Resdict_distance):
print distance_calc(METHI_allH_Lig_Resdict_distance)
print "</div>"# closing of first columns
print "<div class='column'>"
if bool(METHI_allNH_Lig_Resdict):
print "Statistics of Non-Bonded Interactions", "<br/>"
print percentage(METHI_allNH_Lig_Resdict,METHI)
if bool(METHI_allNH_Lig_Resdict_distance):
print distance_calc(METHI_allNH_Lig_Resdict_distance)
print "</div>"# closing of second columns
print "</div>"#closing of row
print "</div>"#End of click drop down
print "<br/>"
print """
<div class="grid">
<div class="col-2-3">
<div class="module">
"""
if bool(METHI_allH_Lig_Resdict):
print "<p style='font-size:20px; color:brown'>List of residues: hydrogen bonds contacts" ,"</p>"
df_METHI_allH_Lig_Resdict=pd.DataFrame.from_dict(METHI_allH_Lig_Resdict).fillna('NIL')
print (df_METHI_allH_Lig_Resdict.to_html(justify='center'))
#print pd.DataFrame.from_dict(METHI_allH_Lig_Resdict).to_html(justify='center')#for all ligand atoms - hydrogen bonded
else:
print "<p style='font-size:20px; color:brown'>List of residues: hydrogen bonds contacts" ,"</p>"
print "No Interactions"
####################All Residues Colored Table for METHI: H bonded################################
H_templist4graph=[]
H_graphdic1={}
if bool(METHI_graphdicH):
for k,v in METHI_graphdicH.iteritems():
#print k
for value in v:
H_templist4graph.append(value)
samp=sorted(list(set(H_templist4graph)))
H_graphdic1.setdefault('%s'%k,[]).append(', '.join(samp))
H_templist4graph=[]
length_listofcompiledresidues=[]
for key,value in H_graphdic1.iteritems():
for i in value:
valu=i.split(', ')
#print valu
#print len(valu)
length_listofcompiledresidues.append(len(valu))
length_ofcell=max(length_listofcompiledresidues)
print "<p style='font-size:20px; color:brown'> Physicochemical property based color-coding of common amino acids: hydrogen bonds contacts ","</p>"
print "<table border='1'>"
print "<tr>"
print "<th width='60'>Ligand Atoms</th>"
print "<th colspan='%d'>List of residues from analysed protein structures</th>"% length_ofcell
print "</tr>"
for key in sorted(H_graphdic1.iterkeys()):
print "<tr>"
print "<td align='center'>%s</td>" %key
for g1 in H_graphdic1[key]:
dat1= g1.split(', ')
for H_k3 in dat1:
print "<td align='center'>"
#print k3
if H_k3.startswith(('ALA','ILE','LEU','MET','MSE','VAL')):
print "<b><font color='pink'>%s</font></b>"%H_k3
if H_k3.startswith(('PHE','TRP', 'TYR')):
print " <b><font color='orange'>%s</font></b>"%H_k3
if H_k3.startswith(('LYS','ARG', 'HIS')):
print " <b><font color='red'>%s</font></b>"%H_k3
if H_k3.startswith(('GLU','ASP')):
print " <b><font color='green'>%s</font></b>"%H_k3
if H_k3.startswith(('ASN','GLN','SER','THR')):
print " <b><font color='blue'>%s</font></b>"%H_k3
if H_k3.startswith(('GLY','PRO')):
print " <b><font color='magenta'>%s</font></b>"%H_k3
if H_k3.startswith(('CYS','CME')):
print " <b><font color='yellow'>%s</font></b>"%H_k3
print "</td>"
#print "<tr>"
print "</tr>"
print "</table>"
else:
print "<p style='font-size:20px; color:brown'> Physicochemical property based color-coding of common amino acids: hydrogen bonds contacts ","</p>"
print "No Interactions"
if bool(METHI_allNH_Lig_Resdict):
print "<p style='font-size:20px; color:brown'>List of residues: non-bonded contacts","</p>"
df_METHI_allNH_Lig_Resdict=pd.DataFrame.from_dict(METHI_allNH_Lig_Resdict).fillna('NIL')
print (df_METHI_allNH_Lig_Resdict.to_html(justify='center'))
#print pd.DataFrame.from_dict(METHI_allNH_Lig_Resdict).to_html(justify='center')#for all ligand atoms - Non hydrogen bonded
else:
print "<p style='font-size:20px; color:brown'>List of residues: non-bonded contacts","</p>"
print "No Interactions"
####################All Residues Colored Table for METHI: NON bonded################################
NH_templist4graph=[]
NH_graphdic1={}
if bool(METHI_graphdicNH):
for k,v in METHI_graphdicNH.iteritems():
#print k
for value in v:
NH_templist4graph.append(value)
samp=sorted(list(set(NH_templist4graph)))
NH_graphdic1.setdefault('%s'%k,[]).append(', '.join(samp))
#print temlist
#print samp
NH_templist4graph=[]
length_listofcompiledresidues=[]
for key,value in NH_graphdic1.iteritems():
for i in value:
valu=i.split(', ')
#print valu
#print len(valu)
length_listofcompiledresidues.append(len(valu))
length_ofcell=max(length_listofcompiledresidues)
#print "<br/>"
print "<p style='font-size:20px; color:brown'> Physicochemical property based color-coding of amino acids: non-bonded contacts","</p>"
print "<table border='1'>"
print "<tr>"
print "<th width='60'>Ligand Atoms</th>"
print "<th colspan='%d'>List of residues from analysed protein structures</th>"% length_ofcell
print "</tr>"
for key in sorted(NH_graphdic1.iterkeys()):
print "<tr>"
print "<td align='center'>%s</td>" %key
for g1 in NH_graphdic1[key]:
dat1= g1.split(', ')
for NH_k3 in dat1:
print "<td align='center'>"
#print k3
if NH_k3.startswith(('ALA','ILE','LEU','MET','MSE','VAL')):
print "<b><font color='pink'>%s</font></b>"%NH_k3
if NH_k3.startswith(('PHE','TRP', 'TYR')):
print " <b><font color='orange'>%s</font></b>"%NH_k3
if NH_k3.startswith(('LYS','ARG', 'HIS')):
print " <b><font color='red'>%s</font></b>"%NH_k3
if NH_k3.startswith(('GLU','ASP')):
print " <b><font color='green'>%s</font></b>"%NH_k3
if NH_k3.startswith(('ASN','GLN','SER','THR')):
print " <b><font color='blue'>%s</font></b>"%NH_k3
if NH_k3.startswith(('GLY','PRO')):
print " <b><font color='magenta'>%s</font></b>"%NH_k3
if NH_k3.startswith(('CYS','CME')):
print " <b><font color='yellow'>%s</font></b>"%NH_k3
print "</td>"
#print "<tr>"
print "</tr>"
print "</table>"
else:
print "<p style='font-size:20px; color:brown'> Physicochemical property based color-coding of amino acids: non-bonded contacts","</p>"
print "No Interactions"
print """
</div>
</div>
"""#closing of col-2-3 and module
print """
<div class="col-2-3">
<div class="module">
"""# initializing the middle column
if bool(METHI_CommonH_Lig_Resdict):
print "<p style='font-size:20px; color:brown'>List of common residues: hydrogen bonds contacts" ,"</p>"
df_METHI_CommonH_Lig_Resdict=pd.DataFrame.from_dict(METHI_CommonH_Lig_Resdict).fillna('NIL')
print (df_METHI_CommonH_Lig_Resdict.to_html(justify='center'))
#print pd.DataFrame.from_dict(METHI_CommonH_Lig_Resdict).to_html(justify='center')#for common ligand atoms - hydrogen bonded
else:
print "<p style='font-size:20px; color:brown'>List of common residues: hydrogen bonds contacts" ,"</p>"
print "No Interactions"
####################Common Residues Colored Table for METHI : H bonded################################
CommH_templist4graph=[]
CommH_graphdic1={}
if bool(METHI_common_graphdicH):
for k,v in METHI_common_graphdicH.iteritems():
for value in v:
CommH_templist4graph.append(value)
samp=sorted(list(set(CommH_templist4graph)))
CommH_graphdic1.setdefault('%s'%k,[]).append(', '.join(samp))
CommH_templist4graph=[]
length_listofcompiled_Common_residues=[]
for key,value in CommH_graphdic1.iteritems():
for i in value:
valu=i.split(', ')
length_listofcompiled_Common_residues.append(len(valu))
length_ofcell=max(length_listofcompiled_Common_residues)
#print "<br/>"
print "<p style='font-size:20px; color:brown'> Physicochemical property based color-coding of common amino acids: hydrogen bonds contacts ","</p>"
print "<table border='1'>"
print "<tr>"
print "<th width='60'>Ligand Atoms</th>"
print "<th colspan='%d'>List of common residues from analysed protein structures</th>"% length_ofcell
print "</tr>"
for key in sorted(CommH_graphdic1.iterkeys()):
print "<tr>"
print "<td align='center'>%s</td>" %key
for g1 in CommH_graphdic1[key]:
dat1= g1.split(', ')
for H_k3 in dat1:
print "<td align='center'>"
#print k3
if H_k3.startswith(('ALA','ILE','LEU','MET','MSE','VAL')):
print "<b><font color='pink'>%s</font></b>"%H_k3
if H_k3.startswith(('PHE','TRP', 'TYR')):
print " <b><font color='orange'>%s</font></b>"%H_k3
if H_k3.startswith(('LYS','ARG', 'HIS')):
print " <b><font color='red'>%s</font></b>"%H_k3
if H_k3.startswith(('GLU','ASP')):
print " <b><font color='green'>%s</font></b>"%H_k3
if H_k3.startswith(('ASN','GLN','SER','THR')):
print " <b><font color='blue'>%s</font></b>"%H_k3
if H_k3.startswith(('GLY','PRO')):
print " <b><font color='magenta'>%s</font></b>"%H_k3
if H_k3.startswith(('CYS','CME')):
print " <b><font color='yellow'>%s</font></b>"%H_k3
print "</td>"
#print "<tr>"
print "</tr>"
print "</table>"
else:
print "<p style='font-size:20px; color:brown'> Physicochemical property based color-coding of common amino acids: hydrogen bonds contacts ","</p>"
print "No Interactions"
if bool(METHI_CommonNH_Lig_Resdict):
print "<p style='font-size:20px; color:brown'>List of common residues: non-bonded contacts","</p>"
df_METHI_CommonNH_Lig_Resdict=pd.DataFrame.from_dict(METHI_CommonNH_Lig_Resdict).fillna('NIL')
print (df_METHI_CommonNH_Lig_Resdict.to_html(justify='center'))
#print pd.DataFrame.from_dict(METHI_CommonNH_Lig_Resdict).to_html(justify='center')#for Common ligand atoms - Non hydrogen bonded
else:
print "<p style='font-size:20px; color:brown'>List of common residues: non-bonded contacts","</p>"
print "No Interactions"
####################Common Residues Colored Table for METHI: NON bonded################################
CommNH_templist4graph=[]
CommNH_graphdic1={}
if bool(METHI_common_graphdicNH):
for k,v in METHI_common_graphdicNH.iteritems():
#print k
for value in v:
CommNH_templist4graph.append(value)
samp=sorted(list(set(CommNH_templist4graph)))
CommNH_graphdic1.setdefault('%s'%k,[]).append(', '.join(samp))
CommNH_templist4graph=[]
length_listofcompiled_Common_residues=[]
for key,value in CommNH_graphdic1.iteritems():
for i in value:
valu=i.split(', ')
length_listofcompiled_Common_residues.append(len(valu))
length_ofcell=max(length_listofcompiled_Common_residues)
print "<p style='font-size:20px; color:brown'> Physicochemical property based color-coding of common amino acids: non-bonded contacts","</p>"
print "<table border='1'>"
print "<tr>"
print "<th width='60'>Ligand Atoms</th>"
print "<th colspan='%d'>List of common residues from analysed protein structures</th>"% length_ofcell
print "</tr>"
for key in sorted(CommNH_graphdic1.iterkeys()):
print "<tr>"
print "<td align='center'>%s</td>" %key
for g1 in CommNH_graphdic1[key]:
dat1= g1.split(', ')
for NH_k3 in dat1:
print "<td align='center'>"
#print k3
if NH_k3.startswith(('ALA','ILE','LEU','MET','MSE','VAL')):
print "<b><font color='pink'>%s</font></b>"%NH_k3
if NH_k3.startswith(('PHE','TRP', 'TYR')):
print " <b><font color='orange'>%s</font></b>"%NH_k3
if NH_k3.startswith(('LYS','ARG', 'HIS')):
print " <b><font color='red'>%s</font></b>"%NH_k3
if NH_k3.startswith(('GLU','ASP')):
print " <b><font color='green'>%s</font></b>"%NH_k3
if NH_k3.startswith(('ASN','GLN','SER','THR')):
print " <b><font color='blue'>%s</font></b>"%NH_k3
if NH_k3.startswith(('GLY','PRO')):
print " <b><font color='magenta'>%s</font></b>"%NH_k3
if NH_k3.startswith(('CYS','CME')):
print " <b><font color='yellow'>%s</font></b>"%NH_k3
print "</td>"
#print "<tr>"
print "</tr>"
print "</table>"
else:
print "<p style='font-size:20px; color:brown'> Physicochemical property based color-coding of common amino acids: non-bonded contacts","</p>"
print "No Interactions"
print """
</div>
</div>
"""# closinf of column and module div
###############Web logo for Common Residues Section: H bonding#######################
print """
<div class="col-2-3">
<div class="module">
"""
METHI_graph_filename = str(uuid.uuid4())
Weblogo_dict_H={}
Weblogo_dict_H1={}
if bool (CommH_graphdic1):
for key in sorted(CommH_graphdic1):
for i in CommH_graphdic1[key]:
tems=i.split(', ')
for items in tems:
se=re.split('([0-9])' , items)
Weblogo_dict_H.setdefault('%s'%key,[]).append(se[0])
for m,n in Weblogo_dict_H.iteritems():
counted=dict(Counter(n))
Weblogo_dict_H1.setdefault('%s'%m,{}).update(counted)
zipfilename='tmp/'+METHI_graph_filename+'_Hbonding'+'.zip'
METHI_aminoacid_singlecode={}
aminoacid_code={'CYS': 'C', 'ASP': 'D', 'SER': 'S', 'GLN': 'Q', 'LYS': 'K',
'ILE': 'I', 'PRO': 'P', 'THR': 'T', 'PHE': 'F', 'ASN': 'N',
'GLY': 'G', 'HIS': 'H', 'LEU': 'L', 'ARG': 'R', 'TRP': 'W',
'ALA': 'A', 'VAL':'V', 'GLU': 'E', 'TYR': 'Y', 'MET': 'M'}
recoded={}
for METHI_ligand_key, METHI_amino_frequency in Weblogo_dict_H1.iteritems():
#print ligand_key
for i in METHI_ligand_key:
for METHI_amino,METHI_frequency in METHI_amino_frequency.iteritems():
for METHI_amino_3letter,METHI_code_frequency in aminoacid_code.iteritems():
if METHI_amino == METHI_amino_3letter:
recoded[METHI_code_frequency]=METHI_frequency
METHI_aminoacid_singlecode.setdefault('%s'%METHI_ligand_key,{}).update(recoded)
recoded={}
METHI_Frequency=1
instances=[]
METHI_weblogo_collection=[]
for METHI_ligand_key1, amino_frequency1 in METHI_aminoacid_singlecode.iteritems():
for METHI_Amino1, METHI_number in amino_frequency1.iteritems():
METHI_Frequency=1
while METHI_Frequency <= METHI_number:
instances.append(Seq(METHI_Amino1, IUPAC.protein))
METHI_Frequency=METHI_Frequency+1
METHI_motif = motifs.create(instances)
METHI_mymotif ='tmp/'+ METHI_graph_filename+ '_H_'+ METHI_ligand_key1 +'.svg'
METHI_motif.weblogo('%s'%METHI_mymotif,format='SVG',xaxis_label= '%s' %METHI_ligand_key1,show_errorbars= False, color_scheme= 'color_chemistry')
METHI_weblogo_collection.append(METHI_mymotif)
instances=[]
weblogo_images=' '.join(str(x) for x in METHI_weblogo_collection)
print "<p style='font-size:20px; color:brown'> Weblogo showing the frequency of residues binding to ligand atoms for the selected structures:</p>"
print "<div class='weblogo_row'>"
for METHI_image in sorted(METHI_weblogo_collection):
print "<div class='weblogo_column'>"
print "<embed src='%s#page=1&view=FitH ' />" %METHI_image
#print "<iframe src='%s#page=1&view=FitH ' width='200' height='100' border='0'></iframe>"%METHI_image
print "</div>"
print "</div>"
####zip file
with ZipFile('%s'%zipfilename, 'w') as METHI_myzip:
for METHI_Images in METHI_weblogo_collection:
METHI_myzip.write(METHI_Images)
else:
print "<p style='font-size:20px; color:brown'> Weblogo showing Common Bonded Interactions:</p>"
print "No Interactions"
###############Web logo for Common Residues Section: NON bonding#######################
Weblogo_dict_NH={}
Weblogo_dict_NH1={}
if bool(CommNH_graphdic1):
for key in sorted(CommNH_graphdic1):
for i in CommNH_graphdic1[key]:
tems=i.split(', ')
for items in tems:
se=re.split('([0-9])' , items)
Weblogo_dict_NH.setdefault('%s'%key,[]).append(se[0])
for m,n in Weblogo_dict_NH.iteritems():
counted=dict(Counter(n))
Weblogo_dict_NH1.setdefault('%s'%m,{}).update(counted)
zipfilename='tmp/'+METHI_graph_filename+'_NHbonding'+'.zip'
METHI_aminoacid_singlecode={}
recoded={}
for METHI_ligand_key, METHI_amino_frequency in Weblogo_dict_NH1.iteritems():
#print ligand_key
for i in METHI_ligand_key:
for METHI_amino,METHI_frequency in METHI_amino_frequency.iteritems():
for METHI_amino_3letter,METHI_code_frequency in aminoacid_code.iteritems():
if METHI_amino == METHI_amino_3letter:
recoded[METHI_code_frequency]=METHI_frequency
METHI_aminoacid_singlecode.setdefault('%s'%METHI_ligand_key,{}).update(recoded)
recoded={}
METHI_Frequency=1
instances=[]
METHI_weblogo_collection=[]
for METHI_ligand_key1, amino_frequency1 in METHI_aminoacid_singlecode.iteritems():
for METHI_Amino1, METHI_number in amino_frequency1.iteritems():
METHI_Frequency=1
while METHI_Frequency <= METHI_number:
instances.append(Seq(METHI_Amino1, IUPAC.protein))
METHI_Frequency=METHI_Frequency+1
METHI_motif = motifs.create(instances)
METHI_mymotif ='tmp/'+ METHI_graph_filename+ '_NH_'+ METHI_ligand_key1 +'.svg'
METHI_motif.weblogo('%s'%METHI_mymotif,format='SVG',xaxis_label= '%s' %METHI_ligand_key1,show_errorbars= False, color_scheme= 'color_chemistry')
METHI_weblogo_collection.append(METHI_mymotif)
instances=[]
weblogo_images=' '.join(str(x) for x in METHI_weblogo_collection)
print "<p style='font-size:20px; color:brown'> Weblogo showing the frequency of residues binding to ligand atoms for the selected structures:</p>"
print "<div class='weblogo_row'>" #initiation of weblog_row
for METHI_image in sorted(METHI_weblogo_collection):
print "<div class='weblogo_column'>" #initiation of weblog_column
print "<embed src='%s#page=1&view=FitH ' />" %METHI_image
#print "<iframe src='%s#page=1&view=FitH ' width='200' height='200' border='0'></iframe>"%METHI_image
print "</div>"#closing of weblog_column
print "</div>"#closing of weblog_row
####zip file
with ZipFile('%s'%zipfilename, 'w') as METHI_myzip:
for METHI_Images in METHI_weblogo_collection:
METHI_myzip.write(METHI_Images)
else:
print "<p style='font-size:20px; color:brown'> Weblogo showing Common Nonbonded Interactions:</p>"
print "No Interactions"
#####To write the dataframes to excel for download
# Adenin_allH=pd.DataFrame.from_dict(Adenin_allH_Lig_Resdict)
# Adenin_allH.to_excel(writer, sheet_name='Adenin_allH')
# Adenin_allNH=pd.DataFrame.from_dict(Adenin_allNH_Lig_Resdict)
# Adenin_allNH.to_excel(writer, sheet_name='Adenin_allNH')
# Adenin_CommonH=pd.DataFrame.from_dict(Adenin_CommonH_Lig_Resdict)
# Adenin_CommonH.to_excel(writer, sheet_name='Adenin_CommonH')
# Adenin_CommonNH=pd.DataFrame.from_dict(Adenin_CommonNH_Lig_Resdict)
# Adenin_CommonNH.to_excel(writer, sheet_name='Adenin_CommonNH')
# Ribose_allH=pd.DataFrame.from_dict(Ribose_allH_Lig_Resdict)
# Ribose_allH.to_excel(writer, sheet_name='Ribose_allH')
# Ribose_allNH=pd.DataFrame.from_dict(Ribose_allNH_Lig_Resdict)
# Ribose_allNH.to_excel(writer, sheet_name='Ribose_allNH')
# Ribose_CommonH=pd.DataFrame.from_dict(Ribose_CommonH_Lig_Resdict)
# Ribose_CommonH.to_excel(writer, sheet_name='Ribose_CommonH')
# Ribose_CommonNH=pd.DataFrame.from_dict(Ribose_CommonNH_Lig_Resdict)
# Ribose_CommonNH.to_excel(writer, sheet_name='Ribose_CommonNH')
# METHI_allH=pd.DataFrame.from_dict(METHI_allH_Lig_Resdict)
# METHI_allH.to_excel(writer, sheet_name='METHI_allH')
# METHI_allNH=pd.DataFrame.from_dict(METHI_allNH_Lig_Resdict)
# METHI_allNH.to_excel(writer, sheet_name='METHI_allNH')
# METHI_CommonH=pd.DataFrame.from_dict(METHI_CommonH_Lig_Resdict)
# METHI_CommonH.to_excel(writer, sheet_name='METHI_CommonH')
# METHI_CommonNH=pd.DataFrame.from_dict(METHI_CommonNH_Lig_Resdict)
# METHI_CommonNH.to_excel(writer, sheet_name='METHI_CommonNH')
# writer.save()
CSVrandom_name= str(uuid.uuid4())
dir = os.path.join("CSV",CSVrandom_name)
if not os.path.exists(dir):
oldmask = os.umask(000)
os.makedirs(dir,0777)
os.umask(oldmask)
def zipdir(path, ziph):
# ziph is zipfile handle
for root, dirs, files in os.walk(path):
for file in files:
ziph.write(os.path.join(root, file))
DownloadZipFilename = os.path.join('CSV', CSVrandom_name + '.zip')
folderToZip = os.path.join('CSV', CSVrandom_name)
zipf = zipfile.ZipFile(DownloadZipFilename, 'w', zipfile.ZIP_DEFLATED)
zipdir(folderToZip, zipf)
zipf.close()
print """
</div>
</div>
</div>
""" # closing of METHI section
####Java script####
print """
<script>
var coll = document.getElementsByClassName("collapsible");
var i;
for (i = 0; i < coll.length; i++) {
coll[i].addEventListener("click", function() {
this.classList.toggle("active");
var content = this.nextElementSibling;
if (content.style.display === "block") {
content.style.display = "none";
} else {
content.style.display = "block";
}
});
}
</script>
"""
###################
print "</body>"
print "</html>"
| 41.502608 | 202 | 0.597174 | 13,584 | 111,393 | 4.671304 | 0.048807 | 0.025057 | 0.029501 | 0.019857 | 0.823592 | 0.803845 | 0.769979 | 0.72434 | 0.680908 | 0.665558 | 0 | 0.011788 | 0.258248 | 111,393 | 2,683 | 203 | 41.518077 | 0.756191 | 0.124496 | 0 | 0.643559 | 0 | 0.038755 | 0.225269 | 0.056042 | 0.009825 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.010917 | null | null | 0.255459 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
69228ecd2e8a19b613de37cb852a32d613a1528a | 75 | py | Python | backend/storyful/__init__.py | enfiskutensykkel/hackathon2013 | 112419dd2cad5f61e7a118e4be8a860bd2c436ab | [
"0BSD"
] | 1 | 2015-11-22T20:10:47.000Z | 2015-11-22T20:10:47.000Z | backend/storyful/__init__.py | enfiskutensykkel/hackathon2013 | 112419dd2cad5f61e7a118e4be8a860bd2c436ab | [
"0BSD"
] | 4 | 2017-11-14T09:24:36.000Z | 2017-11-14T09:24:36.000Z | backend/storyful/__init__.py | enfiskutensykkel/hackathon2013 | 112419dd2cad5f61e7a118e4be8a860bd2c436ab | [
"0BSD"
] | null | null | null | from storyful import get_storyful_data
from storyful import search_storyful | 37.5 | 38 | 0.906667 | 11 | 75 | 5.909091 | 0.545455 | 0.369231 | 0.553846 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.093333 | 75 | 2 | 39 | 37.5 | 0.955882 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
69610ff3406aafc6d2a76cb2e7fab9155dc37d41 | 36 | py | Python | continual_learning/methods/__init__.py | jaryP/ContinualAI | 7d9b7614066d219ebd72049692da23ad6ec132b0 | [
"MIT"
] | null | null | null | continual_learning/methods/__init__.py | jaryP/ContinualAI | 7d9b7614066d219ebd72049692da23ad6ec132b0 | [
"MIT"
] | null | null | null | continual_learning/methods/__init__.py | jaryP/ContinualAI | 7d9b7614066d219ebd72049692da23ad6ec132b0 | [
"MIT"
] | null | null | null | from .base import BaseMethod, Naive
| 18 | 35 | 0.805556 | 5 | 36 | 5.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.138889 | 36 | 1 | 36 | 36 | 0.935484 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6970df2d43a3e0046638beb6115f0af26c209c92 | 36 | py | Python | power.py | learningandgrowing/Data-structures-problems | 444a6dad145c9571c0f0989f7049074cf4d1b17b | [
"MIT"
] | null | null | null | power.py | learningandgrowing/Data-structures-problems | 444a6dad145c9571c0f0989f7049074cf4d1b17b | [
"MIT"
] | null | null | null | power.py | learningandgrowing/Data-structures-problems | 444a6dad145c9571c0f0989f7049074cf4d1b17b | [
"MIT"
] | null | null | null | l = [1 , 2, 3, 4]
l = l[1:]
print(l) | 12 | 17 | 0.388889 | 10 | 36 | 1.4 | 0.6 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.192308 | 0.277778 | 36 | 3 | 18 | 12 | 0.346154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.333333 | 1 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
15f3249d8e92b790cb42b37a7e2189f10441dcf0 | 265 | py | Python | erica/domain/Repositories/EricaAuftragRepositoryInterface.py | punknoir101/erica-1 | 675a6280d38ca5b56946af6f3ed7e295ba896db0 | [
"MIT"
] | null | null | null | erica/domain/Repositories/EricaAuftragRepositoryInterface.py | punknoir101/erica-1 | 675a6280d38ca5b56946af6f3ed7e295ba896db0 | [
"MIT"
] | null | null | null | erica/domain/Repositories/EricaAuftragRepositoryInterface.py | punknoir101/erica-1 | 675a6280d38ca5b56946af6f3ed7e295ba896db0 | [
"MIT"
] | null | null | null | from abc import ABC
from erica.domain.EricaAuftrag.EricaAuftrag import EricaAuftrag
from erica.domain.Repositories.BaseRepositoryInterface import BaseRepositoryInterface
class EricaAuftragRepositoryInterface(BaseRepositoryInterface[EricaAuftrag], ABC):
pass
| 29.444444 | 85 | 0.867925 | 24 | 265 | 9.583333 | 0.458333 | 0.078261 | 0.130435 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.086792 | 265 | 8 | 86 | 33.125 | 0.950413 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.2 | 0.6 | 0 | 0.8 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
ba3ab2ce2fbb7a2080ac630a089c54933d5a81a4 | 28 | py | Python | torchlayers/_inferable/__init__.py | ghost2718/torchlayers | 2f0f44ab64115c0a14ac8a27cf0159c2119d3f8f | [
"MIT"
] | 3 | 2019-12-15T23:29:11.000Z | 2020-05-08T03:26:20.000Z | torchlayers/_inferable/__init__.py | devanshuDesai/torchlayers | 585e250c2a03d330841551f3612cfe9588985d13 | [
"MIT"
] | null | null | null | torchlayers/_inferable/__init__.py | devanshuDesai/torchlayers | 585e250c2a03d330841551f3612cfe9588985d13 | [
"MIT"
] | 3 | 2019-12-30T15:49:57.000Z | 2020-04-30T08:06:18.000Z | from . import custom, torch
| 14 | 27 | 0.75 | 4 | 28 | 5.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.178571 | 28 | 1 | 28 | 28 | 0.913043 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ba516a4a80129fc32a62fde000f4fb151c62b33d | 33,142 | py | Python | data.py | amonod/udvd | a1ccb777d205255ac68c40efb93dd3996f562c45 | [
"MIT"
] | null | null | null | data.py | amonod/udvd | a1ccb777d205255ac68c40efb93dd3996f562c45 | [
"MIT"
] | null | null | null | data.py | amonod/udvd | a1ccb777d205255ac68c40efb93dd3996f562c45 | [
"MIT"
] | null | null | null | import os
import os.path
import cv2
import glob
import h5py
from PIL import Image
import skimage
import skimage.io
import numpy as np
import pandas as pd
import torch
from torchvision import transforms
import torchvision.transforms.functional as TF
import utils
DATASET_REGISTRY = {}
def build_dataset(name, *args, **kwargs):
return DATASET_REGISTRY[name](*args, **kwargs)
def register_dataset(name):
def register_dataset_fn(fn):
if name in DATASET_REGISTRY:
raise ValueError("Cannot register duplicate dataset ({})".format(name))
DATASET_REGISTRY[name] = fn
return fn
return register_dataset_fn
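# The registry decorator above can be exercised standalone. A minimal
# self-contained sketch (the "toy" dataset name is illustrative, not from
# this repo):

```python
DATASET_REGISTRY = {}

def build_dataset(name, *args, **kwargs):
    # Look up a registered loader by name and call it.
    return DATASET_REGISTRY[name](*args, **kwargs)

def register_dataset(name):
    def register_dataset_fn(fn):
        if name in DATASET_REGISTRY:
            raise ValueError("Cannot register duplicate dataset ({})".format(name))
        DATASET_REGISTRY[name] = fn
        return fn
    return register_dataset_fn

@register_dataset("toy")
def load_toy(n):
    # Hypothetical loader used only to demonstrate the pattern.
    return list(range(n))

print(build_dataset("toy", 3))  # -> [0, 1, 2]
```

# Registering the same name twice raises ValueError, which catches
# copy-paste mistakes when new loaders are added.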
@register_dataset("DAVIS")
def load_DAVIS(data, batch_size=100, num_workers=0, image_size=None, stride=64, n_frames=5):
train_dataset = DAVIS(data, datatype="train", patch_size=image_size, stride=stride, n_frames=n_frames)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, num_workers=8, shuffle=True)
valid_dataset = DAVIS(data, datatype="val", n_frames=n_frames)
valid_loader = torch.utils.data.DataLoader(valid_dataset, batch_size=1, num_workers=8, shuffle=False)
test_dataset = DAVIS(data, datatype="test", n_frames=n_frames)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=1, num_workers=8, shuffle=False)
return train_loader, valid_loader, test_loader
@register_dataset("ImageDAVIS")
def load_ImageDAVIS(data, batch_size=100, num_workers=0, image_size=None, stride=64, n_frames=1):
train_dataset = ImageDAVIS(data, datatype="train", patch_size=image_size, stride=stride)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, num_workers=4, shuffle=True)
valid_dataset = ImageDAVIS(data, datatype="val")
valid_loader = torch.utils.data.DataLoader(valid_dataset, batch_size=1, num_workers=4, shuffle=False)
test_dataset = ImageDAVIS(data, datatype="test")
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=1, num_workers=4, shuffle=False)
return train_loader, valid_loader, test_loader
@register_dataset("Set8")
def load_Set8(data, batch_size=100, num_workers=0, n_frames=5):
test_dataset = Set8(data, n_frames=n_frames)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=1, num_workers=8, shuffle=False)
return test_loader
@register_dataset("CTC")
def load_CTC(data, batch_size=100, num_workers=0, image_size=None, stride=64, n_frames=5):
train_dataset = CTC(data, patch_size=image_size, stride=stride, n_frames=n_frames)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, num_workers=4, shuffle=True)
valid_dataset = CTC(data, n_frames=n_frames)
valid_loader = torch.utils.data.DataLoader(valid_dataset, batch_size=1, num_workers=4, shuffle=False)
return train_loader, valid_loader
@register_dataset("SingleVideo")
def load_SingleVideo(data, batch_size=8, dataset="DAVIS", video="giant-slalom",image_size=None, stride=64, n_frames=5,
aug=0, dist="G", mode="S", noise_std=30, min_noise=0, max_noise=100, sample=False, heldout=False):
train_dataset = SingleVideo(data, dataset=dataset, video=video, patch_size=image_size, stride=stride, n_frames=n_frames,
aug=aug, dist=dist, mode=mode, noise_std=noise_std, min_noise=min_noise, max_noise=max_noise,
sample=sample, heldout=heldout
)
test_dataset = SingleVideo(data, dataset=dataset, video=video, patch_size=None, stride=stride, n_frames=n_frames,
aug=0, dist=dist, mode=mode, noise_std=noise_std, min_noise=min_noise, max_noise=max_noise,
sample=False, heldout=False
)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, num_workers=2, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=1, num_workers=1, shuffle=False)
return train_loader, test_loader
@register_dataset("Nanoparticles")
def load_Nanoparticles(data, batch_size=8, image_size=None, stride=64, n_frames=5, aug=0):
train_dataset = Nanoparticles(data, datatype="train", patch_size=image_size, stride=stride, n_frames=n_frames, aug=aug)
test_dataset = Nanoparticles(data, datatype="test", patch_size=None, stride=200, n_frames=n_frames, aug=0)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, num_workers=2, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=1, num_workers=1, shuffle=False)
return train_loader, test_loader
@register_dataset("RawVideo")
def load_RawVideo(data, batch_size=8, image_size=None, stride=64, n_frames=5, aug=0, scenes=(7, 8, 9, 10, 11), isos=(1600, 3200, 6400, 12800, 25600)):
train_dataset = RawVideo(data, datatype="train", patch_size=image_size, stride=stride, n_frames=n_frames, aug=aug, scenes=scenes, isos=isos)
valid_dataset = RawVideo(data, datatype="val", patch_size=1080, stride=1920-1080, n_frames=n_frames, aug=0, scenes=scenes, isos=isos)
test_dataset = RawVideo(data, datatype="test", patch_size=None, stride=64, n_frames=n_frames, aug=0, scenes=scenes, isos=isos)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, num_workers=2, shuffle=True)
valid_loader = torch.utils.data.DataLoader(valid_dataset, batch_size=1, num_workers=1, shuffle=False)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=1, num_workers=1, shuffle=False)
return train_loader, valid_loader, test_loader
class DAVIS(torch.utils.data.Dataset):
def __init__(self, data_path, datatype="train", patch_size=None, stride=64, n_frames=5):
super().__init__()
self.data_path = data_path
self.datatype = datatype
self.size = patch_size
self.stride = stride
self.n_frames = n_frames
if self.datatype == "train":
self.folders = pd.read_csv(os.path.join(data_path, "ImageSets", "2017", "train.txt"), header=None)
elif self.datatype == "val":
self.folders = pd.read_csv(os.path.join(data_path, "ImageSets", "2017", "val.txt"), header=None)
else:
self.folders = pd.read_csv(os.path.join(data_path, "ImageSets", "2017", "test-dev.txt"), header=None)
self.len = 0
self.bounds = []
for folder in self.folders.values:
files = sorted(glob.glob(os.path.join(data_path, "JPEGImages", "480p", folder[0], "*.jpg")))
self.len += len(files)
self.bounds.append(self.len)
if self.size is not None:
self.n_H = (int((480-self.size)/self.stride)+1)
self.n_W = (int((854-self.size)/self.stride)+1)
self.n_patches = self.n_H * self.n_W
self.len *= self.n_patches
self.transform = transforms.Compose([transforms.ToTensor()])
def __len__(self):
return self.len
def __getitem__(self, index):
if self.size is not None:
patch = index % self.n_patches
index = index // self.n_patches
ends = 0
x = (self.n_frames-1) // 2
for i, bound in enumerate(self.bounds):
if index < bound:
folder = self.folders.values[i][0]
if i>0:
index -= self.bounds[i-1]
newbound = bound - self.bounds[i-1]
else:
newbound = bound
if(index < x):
ends = x-index
elif(newbound-1-index < x):
ends = -(x-(newbound-1-index))
break
files = sorted(glob.glob(os.path.join(self.data_path, "JPEGImages", "480p", folder, "*.jpg")))
Img = Image.open(files[index])
Img = np.array(Img)
for i in range(1,x+1):
end = max(0, ends)
off = max(0,i-x+end)
img = Image.open(files[index-i+off])
img = np.array(img)
Img = np.concatenate((img, Img), axis=2)
for i in range(1,x+1):
end = -min(0,ends)
off = max(0,i-x+end)
img = Image.open(files[index+i-off])
img = np.array(img)
Img = np.concatenate((Img, img), axis=2)
if self.size is not None:
nh = (patch // self.n_W)*self.stride
nw = (patch % self.n_W)*self.stride
Img = Img[nh:(nh+self.size), nw:(nw+self.size), :]
return self.transform(np.array(Img)).type(torch.FloatTensor)
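# The ends/off arithmetic in DAVIS.__getitem__ replicate-pads the centered
# temporal window so it never indexes outside the sequence. A minimal sketch
# of the same index logic (an illustrative helper, not part of the repo):

```python
def temporal_window(index, length, n_frames=5):
    # Return the n_frames frame indices centered on `index`, clamped to
    # [0, length-1] by repeating the boundary frame, mirroring the
    # ends/off bookkeeping used in DAVIS.__getitem__.
    x = (n_frames - 1) // 2
    ends = 0
    if index < x:
        ends = x - index
    elif length - 1 - index < x:
        ends = -(x - (length - 1 - index))
    left, right = [], []
    for i in range(1, x + 1):
        end = max(0, ends)               # positive near the start of the clip
        off = max(0, i - x + end)
        left.insert(0, index - i + off)  # prepend, matching np.concatenate order
        end = -min(0, ends)              # positive near the end of the clip
        off = max(0, i - x + end)
        right.append(index + i - off)
    return left + [index] + right

print(temporal_window(0, 10))  # -> [0, 0, 0, 1, 2]
```

# At the last frame of a 10-frame clip the window becomes [7, 8, 9, 9, 9].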
class ImageDAVIS(torch.utils.data.Dataset):
def __init__(self, data_path, datatype="train", patch_size=None, stride=40):
super().__init__()
self.data_path = data_path
self.datatype = datatype
self.size = patch_size
self.stride = stride
if self.datatype == "train":
self.folders = pd.read_csv(os.path.join(data_path, "ImageSets", "2017", "train.txt"), header=None)
elif self.datatype == "val":
self.folders = pd.read_csv(os.path.join(data_path, "ImageSets", "2017", "val.txt"), header=None)
else:
self.folders = pd.read_csv(os.path.join(data_path, "ImageSets", "2017", "test-dev.txt"), header=None)
self.len = 0
self.bounds = []
for folder in self.folders.values:
files = sorted(glob.glob(os.path.join(data_path, "JPEGImages", "480p", folder[0], "*.jpg")))
self.len += len(files)
self.bounds.append(self.len)
if self.size is not None:
self.n_H = (int((480-self.size)/self.stride)+1)
self.n_W = (int((854-self.size)/self.stride)+1)
self.n_patches = self.n_H * self.n_W
self.len *= self.n_patches
self.transform = transforms.Compose([transforms.ToTensor()])
def __len__(self):
return self.len
def __getitem__(self, index):
if self.size is not None:
patch = index % self.n_patches
index = index // self.n_patches
for i, bound in enumerate(self.bounds):
if index < bound:
folder = self.folders.values[i][0]
if i>0:
index -= self.bounds[i-1]
break
files = sorted(glob.glob(os.path.join(self.data_path, "JPEGImages", "480p", folder, "*.jpg")))
Img = np.array(Image.open(files[index]))
if self.size is not None:
nh = (patch // self.n_W)*self.stride
nw = (patch % self.n_W)*self.stride
Img = Img[nh:(nh+self.size), nw:(nw+self.size), :]
return self.transform(Img).type(torch.FloatTensor)
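# Both DAVIS datasets map a flat patch index onto an n_H x n_W crop grid.
# A small sketch of that arithmetic (hypothetical helper for illustration):

```python
def patch_coords(patch, height, width, size, stride):
    # Top-left (row, col) of crop `patch` on the sliding-window grid,
    # following the n_H/n_W computation in the dataset constructors.
    n_W = (width - size) // stride + 1
    nh = (patch // n_W) * stride
    nw = (patch % n_W) * stride
    return nh, nw

print(patch_coords(0, 480, 854, 128, 64))  # -> (0, 0)
```

# For 854-wide frames with 128-pixel crops and stride 64 there are 12 crops
# per row, so patch 13 starts at (64, 64).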
class Set8(torch.utils.data.Dataset):
    def __init__(self, data_path, n_frames=5, hop=1):
        super().__init__()
        self.data_path = data_path
        self.len = 0
        self.bounds = []
        self.hop = hop
        self.n_frames = n_frames
        self.folders = []
        for seq in ("GoPro/snowboard", "GoPro/hypersmooth", "GoPro/rafting", "GoPro/motorbike",
                    "Derfs/tractor", "Derfs/sunflower", "Derfs/touchdown", "Derfs/park_joy"):
            self.folders += sorted(glob.glob(os.path.join(data_path, seq)))
        for folder in self.folders:
            files = sorted(glob.glob(os.path.join(folder, "*.png")))
            self.len += len(files)
            self.bounds.append(self.len)
        self.transform = transforms.Compose([transforms.ToTensor()])

    def __len__(self):
        return self.len

    def __getitem__(self, index):
        ends = 0
        x = ((self.n_frames - 1) // 2) * self.hop
        for i, bound in enumerate(self.bounds):
            if index < bound:
                folder = self.folders[i]
                if i > 0:
                    index -= self.bounds[i - 1]
                    newbound = bound - self.bounds[i - 1]
                else:
                    newbound = bound
                if index < x:
                    ends = x - index
                elif newbound - 1 - index < x:
                    ends = -(x - (newbound - 1 - index))
                break
        files = sorted(glob.glob(os.path.join(folder, "*.png")))
        Img = Image.open(files[index])
        Img = np.array(Img)
        for i in range(self.hop, x + 1, self.hop):
            end = max(0, ends)
            off = max(0, i - x + end)
            img = Image.open(files[index - i + off])
            img = np.array(img)
            Img = np.concatenate((img, Img), axis=2)
        for i in range(self.hop, x + 1, self.hop):
            end = -min(0, ends)
            off = max(0, i - x + end)
            img = Image.open(files[index + i - off])
            img = np.array(img)
            Img = np.concatenate((Img, img), axis=2)
        return self.transform(Img).type(torch.FloatTensor)
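The two loops above assemble a symmetric temporal window around the centre frame, with the `ends`/`off` offsets collapsing out-of-range neighbours onto the boundary frames. A self-contained sketch of the resulting file indices (the function name is mine, not from the original):

```python
def window_indices(index, n_frames, hop, bound):
    # Indices of the temporal window centred on `index`, with the same
    # edge handling as the __getitem__ loops above: near a sequence
    # boundary, the boundary frame is repeated.
    x = ((n_frames - 1) // 2) * hop
    ends = 0
    if index < x:
        ends = x - index
    elif bound - 1 - index < x:
        ends = -(x - (bound - 1 - index))
    left, right = [], []
    for i in range(hop, x + 1, hop):
        off = max(0, i - x + max(0, ends))   # left-side offset
        left.append(index - i + off)
        off = max(0, i - x - min(0, ends))   # right-side offset
        right.append(index + i - off)
    return list(reversed(left)) + [index] + right
```

For a 5-frame window with hop 1 in a 10-frame sequence, an interior index yields its four neighbours, while indices at either end repeat the first or last frame.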
class CTC(torch.utils.data.Dataset):
    def __init__(self, data_path, patch_size=None, stride=64, n_frames=5):
        super().__init__()
        self.data_path = data_path
        self.size = patch_size
        self.stride = stride
        self.len = 0
        self.bounds = [0]
        self.nHs = []
        self.nWs = []
        self.n_frames = n_frames
        parent_folders = sorted([x for x in glob.glob(os.path.join(data_path, "*/*")) if os.path.isdir(x)])
        self.folders = []
        for folder in parent_folders:
            self.folders.append(os.path.join(folder, "01"))
            self.folders.append(os.path.join(folder, "02"))
        for folder in self.folders:
            files = sorted(glob.glob(os.path.join(folder, "*.tif")))
            if self.size is not None:
                (h, w) = np.array(cv2.imread(files[0], cv2.IMREAD_GRAYSCALE)).shape
                nH = int((h - self.size) / self.stride) + 1
                nW = int((w - self.size) / self.stride) + 1
                self.len += len(files) * nH * nW
                self.nHs.append(nH)
                self.nWs.append(nW)
            else:
                self.len += len(files)
            self.bounds.append(self.len)
        self.transform = transforms.Compose([transforms.ToTensor()])

    def __len__(self):
        return self.len

    def __getitem__(self, index):
        ends = 0
        x = (self.n_frames - 1) // 2
        for i, bound in enumerate(self.bounds):
            if index < bound:
                folder = self.folders[i - 1]
                index -= self.bounds[i - 1]
                newbound = bound - self.bounds[i - 1]
                if self.size is not None:
                    nH = self.nHs[i - 1]
                    nW = self.nWs[i - 1]
                    patch = index % (nH * nW)
                    index = index // (nH * nW)
                    newbound = newbound // (nH * nW)
                if index < x:
                    ends = x - index
                elif newbound - 1 - index < x:
                    ends = -(x - (newbound - 1 - index))
                break
        files = sorted(glob.glob(os.path.join(folder, "*.tif")))
        img = cv2.imread(files[index], cv2.IMREAD_GRAYSCALE)
        (h, w) = np.array(img).shape
        Img = np.reshape(np.array(img), (h, w, 1))
        for i in range(1, x + 1):
            end = max(0, ends)
            off = max(0, i - x + end)
            img = cv2.imread(files[index - i + off], cv2.IMREAD_GRAYSCALE)
            img = np.reshape(np.array(img), (h, w, 1))
            Img = np.concatenate((img, Img), axis=2)
        for i in range(1, x + 1):
            end = -min(0, ends)
            off = max(0, i - x + end)
            img = cv2.imread(files[index + i - off], cv2.IMREAD_GRAYSCALE)
            img = np.reshape(np.array(img), (h, w, 1))
            Img = np.concatenate((Img, img), axis=2)
        if self.size is not None:
            nh = (patch // nW) * self.stride
            nw = (patch % nW) * self.stride
            Img = Img[nh:(nh + self.size), nw:(nw + self.size), :]
        return self.transform(Img).type(torch.FloatTensor)
class SingleVideo(torch.utils.data.Dataset):
    def __init__(self, data_path, dataset="DAVIS", video="giant-slalom", patch_size=None, stride=64, n_frames=5,
                 aug=0, dist="G", mode="S", noise_std=30, min_noise=0, max_noise=100, sample=True, heldout=False):
        super().__init__()
        self.data_path = data_path
        self.dataset = dataset
        self.size = patch_size
        self.stride = stride
        self.n_frames = n_frames
        self.aug = aug
        self.heldout = heldout
        if dataset == "DAVIS":
            self.files = sorted(glob.glob(os.path.join(data_path, "JPEGImages", "480p", video, "*.jpg")))
        elif dataset == "GoPro" or dataset == "Derfs":
            self.files = sorted(glob.glob(os.path.join(data_path, video, "*.png")))
        elif dataset == "Vid3oC":
            self.files = sorted(glob.glob(os.path.join(data_path, "TrainingHR", video, "*.png")))
        elif dataset == "Nanoparticles":
            self.files = sorted(glob.glob(os.path.join(data_path, "*.png")))
            self.noisy_files = sorted(glob.glob(os.path.join(data_path, "*.npy")))
        self.len = self.bound = len(self.files)
        if self.heldout:
            self.len -= 5
        self.transform = transforms.Compose([transforms.ToTensor()])
        self.reverse = transforms.Compose([transforms.ToPILImage()])
        Img = Image.open(self.files[0])
        Img = np.array(Img)
        if dataset == "Nanoparticles":
            H, W = Img.shape
        else:
            H, W, C = Img.shape
        if dataset != "Nanoparticles":
            os.makedirs(os.path.join(data_path, f"Noisy_Videos_{int(noise_std)}"), exist_ok=True)
            os.makedirs(os.path.join(data_path, f"Noisy_Videos_{int(noise_std)}", video), exist_ok=True)
            self.noisy_folder = os.path.join(data_path, f"Noisy_Videos_{int(noise_std)}", video)
            if sample:
                for i in range(self.len):
                    Img = Image.open(self.files[i])
                    Img = self.transform(Img)
                    self.C, self.H, self.W = Img.shape
                    Noise = utils.get_noise(Img, dist=dist, mode=mode, min_noise=min_noise, max_noise=max_noise, noise_std=noise_std).numpy()
                    Img = Img + Noise
                    # Strip the 4-character extension (".jpg"/".png") before appending ".npy".
                    np.save(os.path.join(self.noisy_folder, os.path.basename(self.files[i])[:-4] + ".npy"), Img)
            self.noisy_files = sorted(glob.glob(os.path.join(self.noisy_folder, "*.npy")))
        if self.size is not None:
            self.n_H = int((H - self.size) / self.stride) + 1
            self.n_W = int((W - self.size) / self.stride) + 1
            self.n_patches = self.n_H * self.n_W
            self.len *= self.n_patches
        self.hflip = transforms.Compose([transforms.RandomHorizontalFlip(p=1)])
        self.vflip = transforms.Compose([transforms.RandomVerticalFlip(p=1)])
        if aug >= 1:  # horizontal and vertical flips
            self.len *= 4
        if aug >= 2:  # reverse the video
            self.len *= 2
        if aug >= 3:  # variable frame rate
            self.len *= 4

    def __len__(self):
        return self.len

    def __getitem__(self, index):
        hop = 1
        reverse = 0
        flip = 0
        if self.aug >= 3:  # variable frame rate
            hop = index % 4 + 1
            index = index // 4
        if self.aug >= 2:  # reverse the video
            reverse = index % 2
            index = index // 2
        if self.aug >= 1:  # horizontal and vertical flips
            flip = index % 4
            index = index // 4
        if self.size is not None:
            patch = index % self.n_patches
            index = index // self.n_patches
        ends = 0
        x = ((self.n_frames - 1) // 2) * hop
        if index < x:
            ends = x - index
        elif self.bound - 1 - index < x:
            ends = -(x - (self.bound - 1 - index))
        Img = Image.open(self.files[index])
        Img = np.array(Img)
        if self.dataset == "Nanoparticles":
            H, W = Img.shape
            Img = Img.reshape(H, W, 1)
        else:
            H, W, C = Img.shape
        noisy_Img = np.load(self.noisy_files[index])
        for i in range(hop, x + 1, hop):
            end = max(0, ends)
            off = max(0, i - x + end)
            img = Image.open(self.files[index - i + off])
            img = np.array(img)
            if self.dataset == "Nanoparticles":
                img = img.reshape(H, W, 1)
            noisy_img = np.load(self.noisy_files[index - i + off])
            if reverse == 0:
                Img = np.concatenate((img, Img), axis=2)
                noisy_Img = np.concatenate((noisy_img, noisy_Img), axis=0)
            else:
                Img = np.concatenate((Img, img), axis=2)
                noisy_Img = np.concatenate((noisy_Img, noisy_img), axis=0)
        for i in range(hop, x + 1, hop):
            end = -min(0, ends)
            off = max(0, i - x + end)
            img = Image.open(self.files[index + i - off])
            img = np.array(img)
            if self.dataset == "Nanoparticles":
                img = img.reshape(H, W, 1)
            noisy_img = np.load(self.noisy_files[index + i - off])
            if reverse == 0:
                Img = np.concatenate((Img, img), axis=2)
                noisy_Img = np.concatenate((noisy_Img, noisy_img), axis=0)
            else:
                Img = np.concatenate((img, Img), axis=2)
                noisy_Img = np.concatenate((noisy_img, noisy_Img), axis=0)
        if self.size is not None:
            nh = (patch // self.n_W) * self.stride
            nw = (patch % self.n_W) * self.stride
            Img = Img[nh:(nh + self.size), nw:(nw + self.size), :]
            noisy_Img = noisy_Img[:, nh:(nh + self.size), nw:(nw + self.size)]
        if flip == 1:
            Img = np.flip(Img, 1)
            noisy_Img = np.flip(noisy_Img, 2)
        elif flip == 2:
            Img = np.flip(Img, 0)
            noisy_Img = np.flip(noisy_Img, 1)
        elif flip == 3:
            Img = np.flip(Img, (1, 0))
            noisy_Img = np.flip(noisy_Img, (2, 1))
        return self.transform(np.array(Img)).type(torch.FloatTensor), torch.from_numpy(noisy_Img.copy())
class Nanoparticles(torch.utils.data.Dataset):
    def __init__(self, data_path, datatype="train", patch_size=None, stride=64, n_frames=5, aug=0):
        super().__init__()
        self.data_path = data_path
        self.size = patch_size
        self.stride = stride
        self.n_frames = n_frames
        self.datatype = datatype
        self.aug = aug
        self.files = sorted(glob.glob(os.path.join(data_path, "*.npy")))
        if datatype == "train":
            self.files = self.files[0:35]
        # "test" keeps all files
        self.len = self.bound = len(self.files)
        self.transform = transforms.Compose([transforms.ToTensor()])
        self.reverse = transforms.Compose([transforms.ToPILImage()])
        Img = np.load(self.files[0])
        C, H, W = Img.shape
        if self.size is not None:
            self.n_H = int((H - self.size) / self.stride) + 1
            self.n_W = int((W - self.size) / self.stride) + 1
            self.n_patches = self.n_H * self.n_W
            self.len *= self.n_patches
        self.hflip = transforms.Compose([transforms.RandomHorizontalFlip(p=1)])
        self.vflip = transforms.Compose([transforms.RandomVerticalFlip(p=1)])
        if aug >= 1:  # horizontal and vertical flips
            self.len *= 4
        if aug >= 2:  # reverse the video
            self.len *= 2
        if aug >= 3:  # variable frame rate
            self.len *= 4

    def __len__(self):
        return self.len

    def __getitem__(self, index):
        hop = 1
        reverse = 0
        flip = 0
        if self.aug >= 3:  # variable frame rate
            hop = index % 4 + 1
            index = index // 4
        if self.aug >= 2:  # reverse the video
            reverse = index % 2
            index = index // 2
        if self.aug >= 1:  # horizontal and vertical flips
            flip = index % 4
            index = index // 4
        if self.size is not None:
            patch = index % self.n_patches
            index = index // self.n_patches
        ends = 0
        x = ((self.n_frames - 1) // 2) * hop
        if index < x:
            ends = x - index
        Img = np.load(self.files[index])
        C, H, W = Img.shape
        for i in range(hop, x + 1, hop):
            end = max(0, ends)
            off = max(0, i - x + end)
            img = np.load(self.files[index - i + off])
            if reverse == 0:
                Img = np.concatenate((img, Img), axis=0)
            else:
                Img = np.concatenate((Img, img), axis=0)
        if self.bound - 1 - index < x:
            ends = -(x - (self.bound - 1 - index))
        for i in range(hop, x + 1, hop):
            end = -min(0, ends)
            off = max(0, i - x + end)
            img = np.load(self.files[index + i - off])
            if reverse == 0:
                Img = np.concatenate((Img, img), axis=0)
            else:
                Img = np.concatenate((img, Img), axis=0)
        if self.size is not None:
            nh = (patch // self.n_W) * self.stride
            nw = (patch % self.n_W) * self.stride
            Img = Img[:, nh:(nh + self.size), nw:(nw + self.size)]
        if flip == 1:
            Img = np.flip(Img, 2)
        elif flip == 2:
            Img = np.flip(Img, 1)
        elif flip == 3:
            Img = np.flip(Img, (2, 1))
        return torch.from_numpy(Img.copy()).type(torch.FloatTensor)
class RawVideo(torch.utils.data.Dataset):
    def __init__(self, data_path, datatype="train", patch_size=None, stride=64, n_frames=5, aug=0,
                 scenes=(7, 8, 9, 10, 11),
                 isos=(1600, 3200, 6400, 12800, 25600)):
        super().__init__()
        self.data_path = data_path
        self.datatype = datatype
        self.size = patch_size
        self.stride = stride
        self.n_frames = n_frames
        self.aug = aug
        self.noisy_path = os.path.join(self.data_path, "indoor_raw_noisy")
        self.gt_path = os.path.join(self.data_path, "indoor_raw_gt")
        self.scenes = scenes
        self.isos = isos
        if self.datatype == "train":
            self.nr = 9  # noise realisations
        elif self.datatype == "val":
            self.nr = 1  # only the 9th noise realisation is used for heldout
        elif self.datatype == "test":
            self.nr = 10
        self.fpv = self.bound = 7  # frames per video
        self.len = self.fpv * self.nr * len(self.isos) * len(self.scenes)
        self.transform = transforms.Compose([transforms.ToTensor()])
        self.reverse = transforms.Compose([transforms.ToPILImage()])
        Img = skimage.io.imread(os.path.join(self.noisy_path, f"scene{self.scenes[0]}",
                                             f"ISO{self.isos[0]}", "frame1_noisy0.tiff"))
        H, W = Img.shape
        if self.size is not None:
            self.n_H = int((H - self.size) / self.stride) + 1
            self.n_W = int((W - self.size) / self.stride) + 1
            self.n_patches = self.n_H * self.n_W
            self.len *= self.n_patches
        self.hflip = transforms.Compose([transforms.RandomHorizontalFlip(p=1)])
        self.vflip = transforms.Compose([transforms.RandomVerticalFlip(p=1)])
        if aug >= 1:  # horizontal and vertical flips
            self.len *= 4
        if aug >= 2:  # reverse the video
            self.len *= 2

    def __len__(self):
        return self.len

    def __getitem__(self, index):
        hop = 1
        reverse = 0
        flip = 0
        if self.aug >= 2:  # reverse the video
            reverse = index % 2
            index = index // 2
        if self.aug >= 1:  # horizontal and vertical flips
            flip = index % 4
            index = index // 4
        if self.size is not None:
            patch = index % self.n_patches
            index = index // self.n_patches
        scene = index % len(self.scenes)
        index = index // len(self.scenes)
        iso = index % len(self.isos)
        index = index // len(self.isos)
        if self.datatype == "val":
            nr = 9
        else:
            nr = index % self.nr
            index = index // self.nr
        ends = 0
        x = ((self.n_frames - 1) // 2) * hop
        if index < x:
            ends = x - index
        elif self.bound - 1 - index < x:
            ends = -(x - (self.bound - 1 - index))
        Img = skimage.io.imread(os.path.join(self.gt_path,
                                             f"scene{self.scenes[scene]}",
                                             f"ISO{self.isos[iso]}",
                                             f"frame{index + 1}_clean_and_slightly_denoised.tiff"))
        H, W = Img.shape
        Img = Img.reshape(H, W, 1)
        noisy_Img = skimage.io.imread(os.path.join(self.noisy_path,
                                                   f"scene{self.scenes[scene]}",
                                                   f"ISO{self.isos[iso]}",
                                                   f"frame{index + 1}_noisy{nr}.tiff"))
        noisy_Img = noisy_Img.reshape(H, W, 1)
        for i in range(hop, x + 1, hop):
            end = max(0, ends)
            off = max(0, i - x + end)
            img = skimage.io.imread(os.path.join(self.gt_path,
                                                 f"scene{self.scenes[scene]}",
                                                 f"ISO{self.isos[iso]}",
                                                 f"frame{index - i + off + 1}_clean_and_slightly_denoised.tiff"))
            img = img.reshape(H, W, 1)
            noisy_img = skimage.io.imread(os.path.join(self.noisy_path,
                                                       f"scene{self.scenes[scene]}",
                                                       f"ISO{self.isos[iso]}",
                                                       f"frame{index - i + off + 1}_noisy{nr}.tiff"))
            noisy_img = noisy_img.reshape(H, W, 1)
            if reverse == 0:
                Img = np.concatenate((img, Img), axis=2)
                noisy_Img = np.concatenate((noisy_img, noisy_Img), axis=2)
            else:
                Img = np.concatenate((Img, img), axis=2)
                noisy_Img = np.concatenate((noisy_Img, noisy_img), axis=2)
        for i in range(hop, x + 1, hop):
            end = -min(0, ends)
            off = max(0, i - x + end)
            img = skimage.io.imread(os.path.join(self.gt_path,
                                                 f"scene{self.scenes[scene]}",
                                                 f"ISO{self.isos[iso]}",
                                                 f"frame{index + i - off + 1}_clean_and_slightly_denoised.tiff"))
            img = img.reshape(H, W, 1)
            noisy_img = skimage.io.imread(os.path.join(self.noisy_path,
                                                       f"scene{self.scenes[scene]}",
                                                       f"ISO{self.isos[iso]}",
                                                       f"frame{index + i - off + 1}_noisy{nr}.tiff"))
            noisy_img = noisy_img.reshape(H, W, 1)
            if reverse == 0:
                Img = np.concatenate((Img, img), axis=2)
                noisy_Img = np.concatenate((noisy_Img, noisy_img), axis=2)
            else:
                Img = np.concatenate((img, Img), axis=2)
                noisy_Img = np.concatenate((noisy_img, noisy_Img), axis=2)
        if self.size is not None:
            nh = (patch // self.n_W) * self.stride
            nw = (patch % self.n_W) * self.stride
            Img = Img[nh:(nh + self.size), nw:(nw + self.size), :]
            noisy_Img = noisy_Img[nh:(nh + self.size), nw:(nw + self.size), :]
        if flip == 1:
            Img = np.flip(Img, 1)
            noisy_Img = np.flip(noisy_Img, 1)
        elif flip == 2:
            Img = np.flip(Img, 0)
            noisy_Img = np.flip(noisy_Img, 0)
        elif flip == 3:
            Img = np.flip(Img, (1, 0))
            noisy_Img = np.flip(noisy_Img, (1, 0))
        # Normalise 12-bit raw values: subtract the black level (240) and
        # divide by the usable dynamic range (2**12 - 1 - 240).
        Img = Img.astype(np.float32)
        noisy_Img = noisy_Img.astype(np.float32)
        Img = (Img - 240) / (2**12 - 1 - 240)
        noisy_Img = (noisy_Img - 240) / (2**12 - 1 - 240)
        return self.transform(np.array(Img)).type(torch.FloatTensor), self.transform(np.array(noisy_Img)).type(torch.FloatTensor)
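The final normalisation in `RawVideo.__getitem__` maps 12-bit raw sensor values into [0, 1] using a black level of 240. A standalone sketch (the constant and function names are mine):

```python
import numpy as np

BLACK_LEVEL = 240        # sensor black level assumed by the loader above
WHITE_LEVEL = 2**12 - 1  # 12-bit maximum, i.e. 4095

def normalize_raw(frame):
    # Map raw values to [0, 1]: the black level becomes 0, the 12-bit
    # maximum becomes 1, matching (x - 240) / (2**12 - 1 - 240) above.
    frame = frame.astype(np.float32)
    return (frame - BLACK_LEVEL) / (WHITE_LEVEL - BLACK_LEVEL)
```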
# lang/py/cookbook/v2/source/cb2_6_2_exm_3.py (ch1huizong/learning, MIT)
import ro, copy
# modules/analysis/mean.py (ansteh/multivariate, Apache-2.0)
import numpy as np


def mean(matrix):
    return np.mean(matrix, axis=1)
# tests/test_dummy.py (huoguoml/huoguoml, Apache-2.0)
def test_dummy_call():
    assert 42 == 42
# instances/passenger_demand/pas-20210421-2109-int18e/88.py
# (LHcau/scheduling-shared-passenger-and-freight-transport-on-a-fixed-infrastructure, BSD-3-Clause)
"""
PASSENGERS
"""
numPassengers = 4183
passenger_arriving = (
(4, 9, 5, 7, 0, 0, 9, 10, 3, 5, 4, 0), # 0
(5, 9, 11, 3, 3, 0, 5, 14, 9, 1, 1, 0), # 1
(3, 17, 9, 3, 1, 0, 11, 8, 12, 6, 1, 0), # 2
(4, 16, 14, 1, 3, 0, 15, 10, 8, 9, 2, 0), # 3
(5, 10, 8, 2, 5, 0, 12, 13, 4, 6, 1, 0), # 4
(7, 16, 6, 2, 2, 0, 15, 8, 11, 5, 1, 0), # 5
(3, 7, 8, 3, 1, 0, 11, 14, 8, 5, 0, 0), # 6
(5, 17, 18, 5, 1, 0, 7, 9, 7, 7, 3, 0), # 7
(3, 10, 10, 4, 1, 0, 8, 6, 5, 7, 1, 0), # 8
(4, 8, 13, 5, 4, 0, 5, 10, 7, 10, 1, 0), # 9
(5, 10, 11, 2, 3, 0, 7, 9, 7, 6, 1, 0), # 10
(9, 13, 10, 3, 1, 0, 11, 8, 11, 9, 4, 0), # 11
(3, 6, 11, 8, 3, 0, 11, 8, 6, 11, 1, 0), # 12
(7, 11, 13, 9, 7, 0, 8, 12, 5, 8, 3, 0), # 13
(6, 12, 5, 4, 0, 0, 8, 13, 6, 6, 0, 0), # 14
(7, 14, 10, 7, 5, 0, 7, 11, 7, 5, 2, 0), # 15
(8, 12, 16, 3, 1, 0, 12, 15, 12, 8, 2, 0), # 16
(3, 10, 8, 8, 3, 0, 6, 9, 5, 2, 2, 0), # 17
(6, 6, 14, 4, 4, 0, 11, 10, 12, 6, 3, 0), # 18
(2, 6, 10, 7, 4, 0, 6, 9, 12, 7, 1, 0), # 19
(2, 11, 9, 3, 4, 0, 10, 21, 6, 1, 2, 0), # 20
(5, 12, 12, 6, 2, 0, 11, 5, 9, 9, 0, 0), # 21
(4, 13, 7, 9, 4, 0, 5, 13, 5, 5, 0, 0), # 22
(10, 7, 11, 2, 1, 0, 10, 18, 7, 9, 2, 0), # 23
(6, 10, 8, 3, 5, 0, 10, 13, 4, 3, 2, 0), # 24
(1, 11, 10, 5, 4, 0, 4, 18, 6, 3, 4, 0), # 25
(4, 13, 18, 4, 1, 0, 9, 10, 9, 6, 5, 0), # 26
(8, 5, 6, 2, 4, 0, 10, 15, 3, 3, 2, 0), # 27
(6, 7, 13, 2, 4, 0, 8, 12, 6, 7, 7, 0), # 28
(5, 11, 9, 6, 3, 0, 16, 18, 6, 7, 5, 0), # 29
(8, 13, 6, 4, 1, 0, 10, 13, 7, 6, 2, 0), # 30
(5, 12, 11, 6, 5, 0, 6, 10, 5, 9, 3, 0), # 31
(8, 9, 8, 6, 4, 0, 8, 13, 6, 4, 8, 0), # 32
(9, 9, 16, 6, 5, 0, 3, 11, 5, 3, 3, 0), # 33
(4, 12, 12, 6, 2, 0, 12, 14, 6, 4, 3, 0), # 34
(9, 14, 8, 6, 3, 0, 5, 12, 6, 7, 3, 0), # 35
(4, 20, 9, 1, 5, 0, 10, 13, 4, 3, 2, 0), # 36
(10, 12, 11, 5, 1, 0, 12, 9, 6, 5, 7, 0), # 37
(8, 13, 9, 6, 5, 0, 7, 11, 5, 6, 5, 0), # 38
(1, 15, 9, 7, 1, 0, 10, 5, 10, 5, 5, 0), # 39
(2, 11, 5, 6, 2, 0, 6, 8, 9, 4, 2, 0), # 40
(9, 9, 11, 5, 6, 0, 7, 17, 12, 2, 5, 0), # 41
(1, 14, 9, 8, 3, 0, 7, 12, 8, 2, 5, 0), # 42
(3, 16, 11, 7, 3, 0, 10, 16, 9, 9, 2, 0), # 43
(6, 13, 11, 3, 1, 0, 7, 9, 13, 8, 2, 0), # 44
(9, 17, 11, 4, 4, 0, 10, 6, 9, 11, 0, 0), # 45
(6, 12, 6, 4, 4, 0, 9, 10, 8, 9, 3, 0), # 46
(4, 7, 7, 7, 4, 0, 7, 5, 6, 1, 6, 0), # 47
(9, 9, 7, 6, 5, 0, 11, 20, 4, 10, 4, 0), # 48
(7, 11, 15, 4, 7, 0, 8, 8, 7, 8, 6, 0), # 49
(5, 14, 9, 3, 7, 0, 6, 15, 4, 7, 4, 0), # 50
(2, 17, 14, 4, 1, 0, 12, 10, 8, 5, 5, 0), # 51
(5, 8, 7, 9, 3, 0, 8, 15, 10, 7, 1, 0), # 52
(2, 12, 7, 1, 3, 0, 5, 13, 10, 6, 2, 0), # 53
(3, 9, 11, 4, 4, 0, 5, 12, 10, 3, 2, 0), # 54
(3, 9, 5, 5, 2, 0, 3, 17, 7, 3, 6, 0), # 55
(5, 19, 9, 5, 5, 0, 6, 14, 11, 9, 4, 0), # 56
(4, 15, 6, 6, 1, 0, 4, 8, 8, 3, 3, 0), # 57
(7, 13, 11, 2, 1, 0, 6, 17, 10, 10, 0, 0), # 58
(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), # 59
)
station_arriving_intensity = (
(4.769372805092186, 12.233629261363635, 14.389624839331619, 11.405298913043477, 12.857451923076923, 8.562228260869567), # 0
(4.81413961808604, 12.369674877683082, 14.46734796754499, 11.46881589673913, 12.953819711538461, 8.559309850543478), # 1
(4.8583952589991215, 12.503702525252525, 14.54322622107969, 11.530934782608696, 13.048153846153847, 8.556302173913043), # 2
(4.902102161984196, 12.635567578125, 14.617204169344474, 11.591602581521737, 13.14036778846154, 8.553205638586958), # 3
(4.94522276119403, 12.765125410353535, 14.689226381748071, 11.650766304347826, 13.230375, 8.550020652173911), # 4
(4.987719490781387, 12.892231395991162, 14.759237427699228, 11.708372961956522, 13.318088942307691, 8.546747622282608), # 5
(5.029554784899035, 13.01674090909091, 14.827181876606687, 11.764369565217393, 13.403423076923078, 8.54338695652174), # 6
(5.0706910776997365, 13.138509323705808, 14.893004297879177, 11.818703125, 13.486290865384618, 8.5399390625), # 7
(5.1110908033362605, 13.257392013888888, 14.956649260925452, 11.871320652173912, 13.56660576923077, 8.536404347826087), # 8
(5.1507163959613695, 13.373244353693181, 15.018061335154243, 11.922169157608696, 13.644281249999999, 8.532783220108696), # 9
(5.1895302897278315, 13.485921717171717, 15.077185089974291, 11.971195652173915, 13.719230769230771, 8.529076086956522), # 10
(5.227494918788412, 13.595279478377526, 15.133965094794343, 12.018347146739131, 13.791367788461539, 8.525283355978262), # 11
(5.2645727172958745, 13.701173011363636, 15.188345919023137, 12.063570652173912, 13.860605769230768, 8.521405434782608), # 12
(5.3007261194029835, 13.803457690183082, 15.240272132069407, 12.106813179347826, 13.926858173076925, 8.51744273097826), # 13
(5.335917559262511, 13.90198888888889, 15.289688303341899, 12.148021739130433, 13.99003846153846, 8.513395652173912), # 14
(5.370109471027217, 13.996621981534089, 15.336539002249355, 12.187143342391304, 14.050060096153846, 8.509264605978261), # 15
(5.403264288849868, 14.087212342171718, 15.380768798200515, 12.224124999999999, 14.10683653846154, 8.50505), # 16
(5.4353444468832315, 14.173615344854797, 15.422322260604112, 12.258913722826087, 14.16028125, 8.500752241847827), # 17
(5.46631237928007, 14.255686363636363, 15.461143958868895, 12.291456521739132, 14.210307692307696, 8.496371739130435), # 18
(5.496130520193152, 14.333280772569443, 15.4971784624036, 12.321700407608695, 14.256829326923079, 8.491908899456522), # 19
(5.524761303775241, 14.40625394570707, 15.530370340616965, 12.349592391304348, 14.299759615384616, 8.487364130434782), # 20
(5.552167164179106, 14.47446125710227, 15.56066416291774, 12.375079483695652, 14.339012019230768, 8.482737839673913), # 21
(5.578310535557506, 14.537758080808082, 15.588004498714653, 12.398108695652175, 14.374499999999998, 8.47803043478261), # 22
(5.603153852063214, 14.595999790877526, 15.612335917416454, 12.418627038043478, 14.40613701923077, 8.473242323369567), # 23
(5.62665954784899, 14.649041761363636, 15.633602988431875, 12.43658152173913, 14.433836538461538, 8.468373913043479), # 24
(5.648790057067603, 14.696739366319445, 15.651750281169667, 12.451919157608696, 14.457512019230768, 8.463425611413044), # 25
(5.669507813871817, 14.738947979797977, 15.66672236503856, 12.464586956521739, 14.477076923076922, 8.458397826086957), # 26
(5.688775252414398, 14.77552297585227, 15.6784638094473, 12.474531929347828, 14.492444711538463, 8.453290964673915), # 27
(5.7065548068481124, 14.806319728535353, 15.68691918380463, 12.481701086956523, 14.503528846153845, 8.448105434782608), # 28
(5.722808911325724, 14.831193611900254, 15.69203305751928, 12.486041440217392, 14.510242788461538, 8.44284164402174), # 29
(5.7375, 14.85, 15.69375, 12.4875, 14.512500000000001, 8.4375), # 30
(5.751246651214834, 14.865621839488634, 15.692462907608693, 12.487236580882353, 14.511678590425532, 8.430077267616193), # 31
(5.7646965153452685, 14.881037215909092, 15.68863804347826, 12.486451470588234, 14.509231914893617, 8.418644565217393), # 32
(5.777855634590792, 14.896244211647728, 15.682330027173915, 12.485152389705883, 14.50518630319149, 8.403313830584706), # 33
(5.790730051150895, 14.91124090909091, 15.67359347826087, 12.483347058823531, 14.499568085106382, 8.38419700149925), # 34
(5.803325807225064, 14.926025390624996, 15.662483016304348, 12.481043198529411, 14.492403590425532, 8.361406015742128), # 35
(5.815648945012788, 14.940595738636366, 15.649053260869564, 12.478248529411767, 14.48371914893617, 8.335052811094453), # 36
(5.8277055067135555, 14.954950035511365, 15.63335883152174, 12.474970772058823, 14.47354109042553, 8.305249325337332), # 37
(5.839501534526853, 14.969086363636364, 15.615454347826088, 12.471217647058824, 14.461895744680852, 8.272107496251873), # 38
(5.851043070652174, 14.983002805397728, 15.595394429347825, 12.466996875000001, 14.44880944148936, 8.23573926161919), # 39
(5.862336157289003, 14.99669744318182, 15.573233695652176, 12.462316176470589, 14.434308510638296, 8.196256559220389), # 40
(5.873386836636828, 15.010168359374997, 15.549026766304348, 12.457183272058824, 14.418419281914893, 8.153771326836583), # 41
(5.88420115089514, 15.023413636363639, 15.522828260869566, 12.451605882352942, 14.401168085106384, 8.108395502248875), # 42
(5.894785142263428, 15.03643135653409, 15.494692798913043, 12.445591727941178, 14.38258125, 8.060241023238381), # 43
(5.905144852941176, 15.049219602272727, 15.464675, 12.439148529411764, 14.36268510638298, 8.009419827586207), # 44
(5.915286325127877, 15.061776455965909, 15.432829483695656, 12.43228400735294, 14.341505984042554, 7.956043853073464), # 45
(5.925215601023019, 15.074100000000003, 15.39921086956522, 12.425005882352941, 14.319070212765958, 7.90022503748126), # 46
(5.934938722826087, 15.086188316761364, 15.363873777173913, 12.417321874999999, 14.295404122340427, 7.842075318590705), # 47
(5.944461732736574, 15.098039488636365, 15.326872826086957, 12.409239705882353, 14.27053404255319, 7.7817066341829095), # 48
(5.953790672953963, 15.10965159801136, 15.288262635869566, 12.400767095588236, 14.24448630319149, 7.71923092203898), # 49
(5.96293158567775, 15.121022727272724, 15.248097826086958, 12.391911764705883, 14.217287234042553, 7.65476011994003), # 50
(5.971890513107417, 15.132150958806818, 15.206433016304347, 12.38268143382353, 14.188963164893616, 7.588406165667167), # 51
(5.980673497442456, 15.143034375, 15.163322826086954, 12.373083823529411, 14.159540425531915, 7.5202809970015), # 52
(5.989286580882353, 15.153671058238638, 15.118821875, 12.363126654411765, 14.129045345744682, 7.450496551724138), # 53
(5.9977358056266, 15.164059090909088, 15.072984782608694, 12.352817647058824, 14.09750425531915, 7.379164767616192), # 54
(6.00602721387468, 15.174196555397728, 15.02586616847826, 12.342164522058825, 14.064943484042553, 7.306397582458771), # 55
(6.014166847826087, 15.184081534090907, 14.977520652173913, 12.331175, 14.031389361702129, 7.232306934032984), # 56
(6.022160749680308, 15.193712109375003, 14.92800285326087, 12.319856801470587, 13.996868218085105, 7.15700476011994), # 57
(6.030014961636829, 15.203086363636363, 14.877367391304347, 12.308217647058825, 13.961406382978723, 7.0806029985007495), # 58
(0.0, 0.0, 0.0, 0.0, 0.0, 0.0), # 59
)
passenger_arriving_acc = (
(4, 9, 5, 7, 0, 0, 9, 10, 3, 5, 4, 0), # 0
(9, 18, 16, 10, 3, 0, 14, 24, 12, 6, 5, 0), # 1
(12, 35, 25, 13, 4, 0, 25, 32, 24, 12, 6, 0), # 2
(16, 51, 39, 14, 7, 0, 40, 42, 32, 21, 8, 0), # 3
(21, 61, 47, 16, 12, 0, 52, 55, 36, 27, 9, 0), # 4
(28, 77, 53, 18, 14, 0, 67, 63, 47, 32, 10, 0), # 5
(31, 84, 61, 21, 15, 0, 78, 77, 55, 37, 10, 0), # 6
(36, 101, 79, 26, 16, 0, 85, 86, 62, 44, 13, 0), # 7
(39, 111, 89, 30, 17, 0, 93, 92, 67, 51, 14, 0), # 8
(43, 119, 102, 35, 21, 0, 98, 102, 74, 61, 15, 0), # 9
(48, 129, 113, 37, 24, 0, 105, 111, 81, 67, 16, 0), # 10
(57, 142, 123, 40, 25, 0, 116, 119, 92, 76, 20, 0), # 11
(60, 148, 134, 48, 28, 0, 127, 127, 98, 87, 21, 0), # 12
(67, 159, 147, 57, 35, 0, 135, 139, 103, 95, 24, 0), # 13
(73, 171, 152, 61, 35, 0, 143, 152, 109, 101, 24, 0), # 14
(80, 185, 162, 68, 40, 0, 150, 163, 116, 106, 26, 0), # 15
(88, 197, 178, 71, 41, 0, 162, 178, 128, 114, 28, 0), # 16
(91, 207, 186, 79, 44, 0, 168, 187, 133, 116, 30, 0), # 17
(97, 213, 200, 83, 48, 0, 179, 197, 145, 122, 33, 0), # 18
(99, 219, 210, 90, 52, 0, 185, 206, 157, 129, 34, 0), # 19
(101, 230, 219, 93, 56, 0, 195, 227, 163, 130, 36, 0), # 20
(106, 242, 231, 99, 58, 0, 206, 232, 172, 139, 36, 0), # 21
(110, 255, 238, 108, 62, 0, 211, 245, 177, 144, 36, 0), # 22
(120, 262, 249, 110, 63, 0, 221, 263, 184, 153, 38, 0), # 23
(126, 272, 257, 113, 68, 0, 231, 276, 188, 156, 40, 0), # 24
(127, 283, 267, 118, 72, 0, 235, 294, 194, 159, 44, 0), # 25
(131, 296, 285, 122, 73, 0, 244, 304, 203, 165, 49, 0), # 26
(139, 301, 291, 124, 77, 0, 254, 319, 206, 168, 51, 0), # 27
(145, 308, 304, 126, 81, 0, 262, 331, 212, 175, 58, 0), # 28
(150, 319, 313, 132, 84, 0, 278, 349, 218, 182, 63, 0), # 29
(158, 332, 319, 136, 85, 0, 288, 362, 225, 188, 65, 0), # 30
(163, 344, 330, 142, 90, 0, 294, 372, 230, 197, 68, 0), # 31
(171, 353, 338, 148, 94, 0, 302, 385, 236, 201, 76, 0), # 32
(180, 362, 354, 154, 99, 0, 305, 396, 241, 204, 79, 0), # 33
(184, 374, 366, 160, 101, 0, 317, 410, 247, 208, 82, 0), # 34
(193, 388, 374, 166, 104, 0, 322, 422, 253, 215, 85, 0), # 35
(197, 408, 383, 167, 109, 0, 332, 435, 257, 218, 87, 0), # 36
(207, 420, 394, 172, 110, 0, 344, 444, 263, 223, 94, 0), # 37
(215, 433, 403, 178, 115, 0, 351, 455, 268, 229, 99, 0), # 38
(216, 448, 412, 185, 116, 0, 361, 460, 278, 234, 104, 0), # 39
(218, 459, 417, 191, 118, 0, 367, 468, 287, 238, 106, 0), # 40
(227, 468, 428, 196, 124, 0, 374, 485, 299, 240, 111, 0), # 41
(228, 482, 437, 204, 127, 0, 381, 497, 307, 242, 116, 0), # 42
(231, 498, 448, 211, 130, 0, 391, 513, 316, 251, 118, 0), # 43
(237, 511, 459, 214, 131, 0, 398, 522, 329, 259, 120, 0), # 44
(246, 528, 470, 218, 135, 0, 408, 528, 338, 270, 120, 0), # 45
(252, 540, 476, 222, 139, 0, 417, 538, 346, 279, 123, 0), # 46
(256, 547, 483, 229, 143, 0, 424, 543, 352, 280, 129, 0), # 47
(265, 556, 490, 235, 148, 0, 435, 563, 356, 290, 133, 0), # 48
(272, 567, 505, 239, 155, 0, 443, 571, 363, 298, 139, 0), # 49
(277, 581, 514, 242, 162, 0, 449, 586, 367, 305, 143, 0), # 50
(279, 598, 528, 246, 163, 0, 461, 596, 375, 310, 148, 0), # 51
(284, 606, 535, 255, 166, 0, 469, 611, 385, 317, 149, 0), # 52
(286, 618, 542, 256, 169, 0, 474, 624, 395, 323, 151, 0), # 53
(289, 627, 553, 260, 173, 0, 479, 636, 405, 326, 153, 0), # 54
(292, 636, 558, 265, 175, 0, 482, 653, 412, 329, 159, 0), # 55
(297, 655, 567, 270, 180, 0, 488, 667, 423, 338, 163, 0), # 56
(301, 670, 573, 276, 181, 0, 492, 675, 431, 341, 166, 0), # 57
(308, 683, 584, 278, 182, 0, 498, 692, 441, 351, 166, 0), # 58
(308, 683, 584, 278, 182, 0, 498, 692, 441, 351, 166, 0), # 59
)
passenger_arriving_rate = (
(4.769372805092186, 9.786903409090908, 8.63377490359897, 4.56211956521739, 2.5714903846153843, 0.0, 8.562228260869567, 10.285961538461537, 6.843179347826086, 5.755849935732647, 2.446725852272727, 0.0), # 0
(4.81413961808604, 9.895739902146465, 8.680408780526994, 4.587526358695651, 2.5907639423076922, 0.0, 8.559309850543478, 10.363055769230769, 6.881289538043478, 5.786939187017995, 2.4739349755366162, 0.0), # 1
(4.8583952589991215, 10.00296202020202, 8.725935732647814, 4.612373913043478, 2.609630769230769, 0.0, 8.556302173913043, 10.438523076923076, 6.918560869565217, 5.817290488431875, 2.500740505050505, 0.0), # 2
(4.902102161984196, 10.1084540625, 8.770322501606683, 4.636641032608694, 2.628073557692308, 0.0, 8.553205638586958, 10.512294230769232, 6.954961548913042, 5.846881667737789, 2.527113515625, 0.0), # 3
(4.94522276119403, 10.212100328282828, 8.813535829048842, 4.66030652173913, 2.6460749999999997, 0.0, 8.550020652173911, 10.584299999999999, 6.990459782608696, 5.875690552699228, 2.553025082070707, 0.0), # 4
(4.987719490781387, 10.313785116792928, 8.855542456619537, 4.6833491847826085, 2.663617788461538, 0.0, 8.546747622282608, 10.654471153846153, 7.025023777173913, 5.90369497107969, 2.578446279198232, 0.0), # 5
(5.029554784899035, 10.413392727272727, 8.896309125964011, 4.705747826086957, 2.680684615384615, 0.0, 8.54338695652174, 10.72273846153846, 7.058621739130436, 5.930872750642674, 2.603348181818182, 0.0), # 6
(5.0706910776997365, 10.510807458964646, 8.935802578727506, 4.72748125, 2.697258173076923, 0.0, 8.5399390625, 10.789032692307693, 7.0912218750000005, 5.95720171915167, 2.6277018647411614, 0.0), # 7
(5.1110908033362605, 10.60591361111111, 8.97398955655527, 4.7485282608695645, 2.7133211538461537, 0.0, 8.536404347826087, 10.853284615384615, 7.122792391304347, 5.982659704370181, 2.6514784027777774, 0.0), # 8
(5.1507163959613695, 10.698595482954543, 9.010836801092546, 4.768867663043478, 2.7288562499999993, 0.0, 8.532783220108696, 10.915424999999997, 7.153301494565217, 6.007224534061697, 2.6746488707386358, 0.0), # 9
(5.1895302897278315, 10.788737373737373, 9.046311053984574, 4.7884782608695655, 2.743846153846154, 0.0, 8.529076086956522, 10.975384615384616, 7.182717391304348, 6.030874035989716, 2.697184343434343, 0.0), # 10
(5.227494918788412, 10.87622358270202, 9.080379056876605, 4.807338858695652, 2.7582735576923074, 0.0, 8.525283355978262, 11.03309423076923, 7.2110082880434785, 6.053586037917737, 2.719055895675505, 0.0), # 11
(5.2645727172958745, 10.960938409090907, 9.113007551413881, 4.825428260869565, 2.7721211538461534, 0.0, 8.521405434782608, 11.088484615384614, 7.238142391304347, 6.0753383676092545, 2.740234602272727, 0.0), # 12
(5.3007261194029835, 11.042766152146465, 9.144163279241644, 4.8427252717391305, 2.7853716346153847, 0.0, 8.51744273097826, 11.141486538461539, 7.264087907608696, 6.096108852827762, 2.760691538036616, 0.0), # 13
(5.335917559262511, 11.121591111111112, 9.173812982005138, 4.859208695652173, 2.7980076923076918, 0.0, 8.513395652173912, 11.192030769230767, 7.288813043478259, 6.115875321336759, 2.780397777777778, 0.0), # 14
(5.370109471027217, 11.19729758522727, 9.201923401349612, 4.874857336956521, 2.810012019230769, 0.0, 8.509264605978261, 11.240048076923076, 7.312286005434782, 6.134615600899742, 2.7993243963068175, 0.0), # 15
(5.403264288849868, 11.269769873737372, 9.228461278920308, 4.88965, 2.8213673076923076, 0.0, 8.50505, 11.28546923076923, 7.334474999999999, 6.152307519280206, 2.817442468434343, 0.0), # 16
(5.4353444468832315, 11.338892275883836, 9.253393356362468, 4.903565489130434, 2.83205625, 0.0, 8.500752241847827, 11.328225, 7.3553482336956515, 6.168928904241644, 2.834723068970959, 0.0), # 17
(5.46631237928007, 11.40454909090909, 9.276686375321336, 4.916582608695652, 2.842061538461539, 0.0, 8.496371739130435, 11.368246153846156, 7.374873913043479, 6.184457583547558, 2.8511372727272724, 0.0), # 18
(5.496130520193152, 11.466624618055553, 9.298307077442159, 4.928680163043477, 2.8513658653846155, 0.0, 8.491908899456522, 11.405463461538462, 7.393020244565217, 6.198871384961439, 2.866656154513888, 0.0), # 19
(5.524761303775241, 11.525003156565655, 9.318222204370178, 4.939836956521739, 2.859951923076923, 0.0, 8.487364130434782, 11.439807692307692, 7.409755434782609, 6.212148136246785, 2.8812507891414136, 0.0), # 20
(5.552167164179106, 11.579569005681815, 9.336398497750643, 4.95003179347826, 2.8678024038461536, 0.0, 8.482737839673913, 11.471209615384614, 7.425047690217391, 6.224265665167096, 2.894892251420454, 0.0), # 21
(5.578310535557506, 11.630206464646465, 9.352802699228791, 4.95924347826087, 2.8748999999999993, 0.0, 8.47803043478261, 11.499599999999997, 7.438865217391305, 6.235201799485861, 2.907551616161616, 0.0), # 22
(5.603153852063214, 11.67679983270202, 9.367401550449872, 4.967450815217391, 2.8812274038461534, 0.0, 8.473242323369567, 11.524909615384614, 7.451176222826087, 6.244934366966581, 2.919199958175505, 0.0), # 23
(5.62665954784899, 11.719233409090908, 9.380161793059125, 4.974632608695652, 2.8867673076923075, 0.0, 8.468373913043479, 11.54706923076923, 7.461948913043478, 6.25344119537275, 2.929808352272727, 0.0), # 24
(5.648790057067603, 11.757391493055556, 9.391050168701799, 4.980767663043478, 2.8915024038461534, 0.0, 8.463425611413044, 11.566009615384614, 7.471151494565217, 6.260700112467866, 2.939347873263889, 0.0), # 25
(5.669507813871817, 11.79115838383838, 9.400033419023135, 4.985834782608695, 2.8954153846153843, 0.0, 8.458397826086957, 11.581661538461537, 7.478752173913043, 6.266688946015424, 2.947789595959595, 0.0), # 26
(5.688775252414398, 11.820418380681815, 9.40707828566838, 4.989812771739131, 2.8984889423076923, 0.0, 8.453290964673915, 11.593955769230769, 7.484719157608696, 6.271385523778919, 2.9551045951704538, 0.0), # 27
(5.7065548068481124, 11.84505578282828, 9.412151510282778, 4.992680434782609, 2.9007057692307687, 0.0, 8.448105434782608, 11.602823076923075, 7.489020652173913, 6.274767673521851, 2.96126394570707, 0.0), # 28
(5.722808911325724, 11.864954889520202, 9.415219834511568, 4.994416576086956, 2.902048557692307, 0.0, 8.44284164402174, 11.608194230769229, 7.491624864130435, 6.276813223007712, 2.9662387223800506, 0.0), # 29
(5.7375, 11.879999999999999, 9.41625, 4.995, 2.9025, 0.0, 8.4375, 11.61, 7.4925, 6.277499999999999, 2.9699999999999998, 0.0), # 30
(5.751246651214834, 11.892497471590906, 9.415477744565216, 4.994894632352941, 2.9023357180851064, 0.0, 8.430077267616193, 11.609342872340426, 7.492341948529411, 6.276985163043476, 2.9731243678977264, 0.0), # 31
(5.7646965153452685, 11.904829772727274, 9.413182826086956, 4.994580588235293, 2.901846382978723, 0.0, 8.418644565217393, 11.607385531914892, 7.49187088235294, 6.275455217391303, 2.9762074431818184, 0.0), # 32
(5.777855634590792, 11.916995369318181, 9.40939801630435, 4.994060955882353, 2.9010372606382977, 0.0, 8.403313830584706, 11.60414904255319, 7.491091433823529, 6.272932010869566, 2.9792488423295453, 0.0), # 33
(5.790730051150895, 11.928992727272727, 9.40415608695652, 4.993338823529412, 2.899913617021276, 0.0, 8.38419700149925, 11.599654468085104, 7.490008235294118, 6.269437391304347, 2.9822481818181816, 0.0), # 34
(5.803325807225064, 11.940820312499996, 9.39748980978261, 4.9924172794117645, 2.898480718085106, 0.0, 8.361406015742128, 11.593922872340425, 7.488625919117647, 6.264993206521739, 2.985205078124999, 0.0), # 35
(5.815648945012788, 11.952476590909091, 9.389431956521738, 4.9912994117647065, 2.896743829787234, 0.0, 8.335052811094453, 11.586975319148936, 7.486949117647059, 6.259621304347825, 2.988119147727273, 0.0), # 36
(5.8277055067135555, 11.96396002840909, 9.380015298913044, 4.989988308823529, 2.8947082180851056, 0.0, 8.305249325337332, 11.578832872340422, 7.484982463235293, 6.253343532608695, 2.9909900071022726, 0.0), # 37
(5.839501534526853, 11.97526909090909, 9.369272608695653, 4.988487058823529, 2.89237914893617, 0.0, 8.272107496251873, 11.56951659574468, 7.4827305882352935, 6.246181739130434, 2.9938172727272727, 0.0), # 38
(5.851043070652174, 11.986402244318182, 9.357236657608695, 4.98679875, 2.8897618882978717, 0.0, 8.23573926161919, 11.559047553191487, 7.480198125, 6.23815777173913, 2.9966005610795454, 0.0), # 39
(5.862336157289003, 11.997357954545455, 9.343940217391305, 4.984926470588235, 2.886861702127659, 0.0, 8.196256559220389, 11.547446808510635, 7.477389705882353, 6.22929347826087, 2.999339488636364, 0.0), # 40
(5.873386836636828, 12.008134687499997, 9.329416059782607, 4.982873308823529, 2.8836838563829783, 0.0, 8.153771326836583, 11.534735425531913, 7.474309963235294, 6.219610706521738, 3.002033671874999, 0.0), # 41
(5.88420115089514, 12.01873090909091, 9.31369695652174, 4.980642352941176, 2.880233617021277, 0.0, 8.108395502248875, 11.520934468085107, 7.4709635294117644, 6.209131304347826, 3.0046827272727277, 0.0), # 42
(5.894785142263428, 12.02914508522727, 9.296815679347825, 4.978236691176471, 2.8765162499999994, 0.0, 8.060241023238381, 11.506064999999998, 7.467355036764706, 6.1978771195652165, 3.0072862713068176, 0.0), # 43
(5.905144852941176, 12.03937568181818, 9.278805, 4.975659411764705, 2.8725370212765955, 0.0, 8.009419827586207, 11.490148085106382, 7.4634891176470575, 6.1858699999999995, 3.009843920454545, 0.0), # 44
(5.915286325127877, 12.049421164772726, 9.259697690217394, 4.972913602941176, 2.8683011968085106, 0.0, 7.956043853073464, 11.473204787234042, 7.459370404411764, 6.1731317934782615, 3.0123552911931815, 0.0), # 45
(5.925215601023019, 12.059280000000001, 9.239526521739132, 4.970002352941176, 2.8638140425531913, 0.0, 7.90022503748126, 11.455256170212765, 7.455003529411765, 6.159684347826087, 3.0148200000000003, 0.0), # 46
(5.934938722826087, 12.06895065340909, 9.218324266304347, 4.966928749999999, 2.859080824468085, 0.0, 7.842075318590705, 11.43632329787234, 7.450393124999999, 6.145549510869564, 3.0172376633522724, 0.0), # 47
(5.944461732736574, 12.07843159090909, 9.196123695652174, 4.9636958823529405, 2.854106808510638, 0.0, 7.7817066341829095, 11.416427234042551, 7.445543823529412, 6.130749130434782, 3.0196078977272727, 0.0), # 48
(5.953790672953963, 12.087721278409088, 9.17295758152174, 4.960306838235294, 2.8488972606382976, 0.0, 7.71923092203898, 11.39558904255319, 7.4404602573529415, 6.115305054347826, 3.021930319602272, 0.0), # 49
(5.96293158567775, 12.096818181818177, 9.148858695652175, 4.956764705882353, 2.8434574468085105, 0.0, 7.65476011994003, 11.373829787234042, 7.43514705882353, 6.099239130434783, 3.0242045454545443, 0.0), # 50
(5.971890513107417, 12.105720767045453, 9.123859809782608, 4.953072573529411, 2.837792632978723, 0.0, 7.588406165667167, 11.351170531914892, 7.429608860294118, 6.082573206521738, 3.026430191761363, 0.0), # 51
(5.980673497442456, 12.114427499999998, 9.097993695652173, 4.949233529411764, 2.8319080851063827, 0.0, 7.5202809970015, 11.32763234042553, 7.4238502941176465, 6.065329130434781, 3.0286068749999995, 0.0), # 52
(5.989286580882353, 12.122936846590909, 9.071293125, 4.945250661764706, 2.8258090691489364, 0.0, 7.450496551724138, 11.303236276595745, 7.417875992647058, 6.04752875, 3.030734211647727, 0.0), # 53
(5.9977358056266, 12.13124727272727, 9.043790869565216, 4.941127058823529, 2.8195008510638297, 0.0, 7.379164767616192, 11.278003404255319, 7.411690588235294, 6.0291939130434775, 3.0328118181818176, 0.0), # 54
(6.00602721387468, 12.139357244318182, 9.015519701086955, 4.93686580882353, 2.8129886968085103, 0.0, 7.306397582458771, 11.251954787234041, 7.405298713235295, 6.010346467391304, 3.0348393110795455, 0.0), # 55
(6.014166847826087, 12.147265227272724, 8.986512391304348, 4.9324699999999995, 2.8062778723404254, 0.0, 7.232306934032984, 11.225111489361701, 7.398705, 5.991008260869565, 3.036816306818181, 0.0), # 56
(6.022160749680308, 12.154969687500001, 8.95680171195652, 4.927942720588234, 2.7993736436170207, 0.0, 7.15700476011994, 11.197494574468083, 7.391914080882352, 5.9712011413043475, 3.0387424218750003, 0.0), # 57
(6.030014961636829, 12.16246909090909, 8.926420434782608, 4.923287058823529, 2.792281276595744, 0.0, 7.0806029985007495, 11.169125106382976, 7.384930588235295, 5.950946956521738, 3.0406172727272724, 0.0), # 58
(0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0), # 59
)
passenger_allighting_rate = (
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 0
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 1
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 2
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 3
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 4
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 5
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 6
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 7
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 8
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 9
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 10
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 11
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 12
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 13
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 14
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 15
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 16
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 17
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 18
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 19
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 20
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 21
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 22
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 23
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 24
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 25
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 26
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 27
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 28
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 29
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 30
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 31
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 32
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 33
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 34
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 35
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 36
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 37
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 38
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 39
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 40
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 41
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 42
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 43
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 44
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 45
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 46
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 47
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 48
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 49
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 50
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 51
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 52
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 53
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 54
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 55
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 56
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 57
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 58
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 59
)
"""
parameters for reproducibiliy. More information: https://numpy.org/doc/stable/reference/random/parallel.html
"""
# initial entropy
entropy = 258194110137029475889902652135037600173
# indices of the seed sequence children
child_seed_index = (
1, # 0
87, # 1
)
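The entropy and child indices above follow NumPy's parallel random-generation scheme (linked in the docstring). A minimal sketch — assuming each listed index is used as a spawn key of the root `SeedSequence` — of how the two values recreate reproducible, independent child streams:

```python
import numpy as np

# Values from the module above.
entropy = 258194110137029475889902652135037600173
child_seed_index = (1, 87)

# Each listed index is assumed to select one spawned child of the root
# SeedSequence, giving every consumer an independent, reproducible stream.
rngs = {
    i: np.random.default_rng(np.random.SeedSequence(entropy, spawn_key=(i,)))
    for i in child_seed_index
}

# Re-deriving a child with the same spawn key reproduces identical draws.
again = np.random.default_rng(np.random.SeedSequence(entropy, spawn_key=(87,)))
```

Because `SeedSequence(entropy).spawn(n)[i]` carries `spawn_key=(i,)`, storing only the entropy and the child indices is enough to rebuild any stream later.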
# # © 2021 Amazon Web Services, Inc. or its affiliates. All Rights Reserved.
# #
# # This AWS Content is provided subject to the terms of the AWS Customer Agreement
# # available at http://aws.amazon.com/agreement or other written agreement between
# # Customer and either Amazon Web Services, Inc. or Amazon Web Services EMEA SARL or both.
# import json, os, copy
# from boto3.dynamodb.types import TypeDeserializer
# import logging
# ############################################################################
# # Initialization activities
# ############################################################################
# logger = logging.getLogger()
# # When run in AWS Lambda, logging comes preconfigured with a handler.
# log_level = os.environ.get('LOG_LEVEL', 'INFO').upper()
# if logger.hasHandlers():
# logger.setLevel(log_level)
# # When run outside AWS, logging can still be configured manually.
# else:
# logging.basicConfig(level=log_level, format="%(asctime)s - %(name)s - %(levelname)s - %(message)s")
# from ddb_stream_interp_count.dynamodb.client import DynamoClient
# gvc_table = DynamoClient(os.environ['GENE_VARIANT_CURATION_TABLE'])
# vp_table = DynamoClient(os.environ['VP_TABLE'])
# status_map = {
# 'Provisioned': 'p',
# 'Approved': 'a',
# 'in_progress': 'i'
# }
# #############################################################################
# def handler(event, context):
# logger.info(json.dumps(event))
# # Parse the records to determine a set of actions (increment or decrement status counts)
# logger.info("Generating actions")
# actions = generate_actions(event.get('Records', []))
# logger.info("Actions: %s", json.dumps(actions))
# # Perform the actions
# logger.info("Performing Actions")
# perform_actions(actions)
# logger.info("Done")
# def generate_actions(records):
# """
# Description:
# Will cycle through input records and handle event types.
# Args:
# records [list]: Expects a list of records from a DynamoDB Stream.
# See https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_streams_StreamRecord.html
# Returns:
# actions (list): For each input record, will return a dict in the format:
# {
# 'variant_pk': "4903abcr..."
# 'affiliation': 10007,
# 'actions': {
# 'Provisioned': -1,
# 'Approved': 1,
# 'in_progress': 0
# }
# }
# where 1 indicates an increment action
# -1 indicates a decrement action
# 0 indicates no change
# """
# actions = []
# for record in records:
# result = None
# if record['eventName'] == 'MODIFY':
# new_raw_item = record['dynamodb']['NewImage']
# new_item = ddb_deserialize(new_raw_item)
# old_raw_item = record['dynamodb']['OldImage']
# old_item = ddb_deserialize(old_raw_item)
# result = handle_modify(new_item, old_item)
# elif record['eventName'] == 'INSERT':
# new_raw_item = record['dynamodb']['NewImage']
# new_item = ddb_deserialize(new_raw_item)
# result = handle_insert(new_item)
# elif record['eventName'] == 'REMOVE':
# old_raw_item = record['dynamodb']['OldImage']
# old_item = ddb_deserialize(old_raw_item)
# result = handle_remove(old_item)
# if result is not None:
# actions.append(result)
# return actions
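`generate_actions` calls a `ddb_deserialize` helper defined elsewhere in the package — presumably a thin wrapper around the imported boto3 `TypeDeserializer`. A simplified, stdlib-only stand-in that handles the common stream attribute types, for illustration:

```python
def ddb_deserialize(image):
    """Convert a DynamoDB stream image ({'id': {'S': 'x'}, ...}) to plain Python.

    Simplified stand-in for boto3.dynamodb.types.TypeDeserializer; it covers
    only the attribute types this handler is likely to encounter.
    """
    def convert(attr):
        # Each attribute is a single-key dict: {type_tag: value}.
        (kind, value), = attr.items()
        if kind == 'S':
            return value
        if kind == 'N':
            return float(value) if '.' in value else int(value)
        if kind == 'BOOL':
            return value
        if kind == 'NULL':
            return None
        if kind == 'L':
            return [convert(v) for v in value]
        if kind == 'M':
            return {k: convert(v) for k, v in value.items()}
        raise TypeError(f"Unsupported attribute type: {kind}")

    return {k: convert(v) for k, v in image.items()}
```

In production the boto3 deserializer should be preferred; it also handles sets, binary data, and `Decimal` numbers, which this sketch deliberately omits.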
# def perform_actions(actions):
# """
# Description:
# Will perform increment/decrement actions on VP aggregate object given
# a list of actions (returned from generate_actions())
# Args:
# actions (list): See return value from generate_actions()
# Returns:
# success (bool): True on success.
# """
# for action in actions:
# VERSION SENT FROM GLORIA 8/6/21
# © 2021 Amazon Web Services, Inc. or its affiliates. All Rights Reserved.
#
# This AWS Content is provided subject to the terms of the AWS Customer Agreement
# available at http://aws.amazon.com/agreement or other written agreement between
# Customer and either Amazon Web Services, Inc. or Amazon Web Services EMEA SARL or both.
import copy
import json
import logging
import os

from boto3.dynamodb.types import TypeDeserializer

############################################################################
# Initialization activities
############################################################################
logger = logging.getLogger()

# AWS Lambda preconfigures a handler on the root logger, so inside Lambda we
# only need to set the level.
log_level = os.environ.get('LOG_LEVEL', 'INFO').upper()
if logger.hasHandlers():
    logger.setLevel(log_level)
# When running outside AWS, configure logging ourselves.
else:
    logging.basicConfig(level=log_level, format="%(asctime)s - %(name)s - %(levelname)s - %(message)s")

from ddb_stream_interp_count.dynamodb.client import DynamoClient

gvc_table = DynamoClient(os.environ['GENE_VARIANT_CURATION_TABLE'])
vp_table = DynamoClient(os.environ['VP_TABLE'])

status_map = {
    'Provisioned': 'p',
    'Approved': 'a',
    'in_progress': 'i'
}
#############################################################################
def handler(event, context):
    logger.info(json.dumps(event))

    # Parse the records to determine a set of actions (increment or decrement status counts)
    logger.info("Generating actions")
    actions = generate_actions(event.get('Records', []))
    logger.info("Actions: %s", json.dumps(actions))

    # Perform the actions
    logger.info("Performing Actions")
    perform_actions(actions)
    logger.info("Done")


def generate_actions(records):
    """
    Description:
        Will cycle through input records and handle event types.

    Args:
        records (list): Expects a list of records from a DynamoDB Stream.
            See https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_streams_StreamRecord.html

    Returns:
        actions (list): For each input record, will return a dict in the format:
            {
                'variant_pk': "4903abcr...",
                'affiliation': 10007,
                'actions': {
                    'Provisioned': -1,
                    'Approved': 1,
                    'in_progress': 0
                }
            }
            where 1 indicates an increment action,
            -1 indicates a decrement action, and
            0 indicates no change.
    """
    actions = []
    for record in records:
        result = None
        if record['eventName'] == 'MODIFY':
            new_item = ddb_deserialize(record['dynamodb']['NewImage'])
            old_item = ddb_deserialize(record['dynamodb']['OldImage'])
            result = handle_modify(new_item, old_item)
        elif record['eventName'] == 'INSERT':
            new_item = ddb_deserialize(record['dynamodb']['NewImage'])
            result = handle_insert(new_item)
        elif record['eventName'] == 'REMOVE':
            old_item = ddb_deserialize(record['dynamodb']['OldImage'])
            result = handle_remove(old_item)
        if result is not None:
            actions.append(result)
    return actions
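# A minimal, self-contained sketch of the diff arithmetic behind the action
# dicts described above (the status values here are hypothetical, not taken
# from a real stream record):

```python
# Hypothetical before/after status dicts for one interpretation that moved
# from "Provisioned" to "Approved".
old_status = {'in_progress': 0, 'Provisioned': 1, 'Approved': 0}
new_status = {'in_progress': 0, 'Provisioned': 0, 'Approved': 1}

# Subtracting old counts from new ones yields the action dict:
# 1 = increment, -1 = decrement, 0 = no change.
actions = {k: v - old_status[k] for k, v in new_status.items()}
print(actions)  # {'in_progress': 0, 'Provisioned': -1, 'Approved': 1}
```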
def perform_actions(actions):
    """
    Description:
        Will perform increment/decrement actions on the VP aggregate object
        given a list of actions (returned from generate_actions()).

    Args:
        actions (list): See return value from generate_actions().

    Returns:
        success (bool): True on success.
    """
    for action in actions:
        # If we have all zeroes, skip.
        num_changes = sum(abs(v) for v in action['actions'].values())
        if num_changes == 0:
            logger.info("Skipping actions for %s because no actions required", action)
            continue

        # Grab the carId for the variant
        carId = get_car_id(action['variant_pk'])
        if carId is None:
            logger.info("Did not find carId in GVC table variant PK: %s", action['variant_pk'])
            continue

        # Grab the aggregation/count object from the VP table
        aggregate = None
        try:
            aggregate, vppk = get_variant_aggregate(carId)
        except Exception as e:
            logger.info("Exception when retrieving aggregate: %s: %s", type(e).__name__, e)
            logger.info("Could not find vciStatus for variant: PK: %s, carId: %s", action['variant_pk'], carId)
            continue
        if aggregate is None:
            logger.info("Could not find vciStatus for variant: PK: %s, carId: %s", action['variant_pk'], carId)
            continue

        # Update the aggregate object with the actions
        updated_aggregate = update_aggregate_object(aggregate, action['actions'], action['affiliation'])

        # And write it back to the database
        write_aggregate(updated_aggregate, vppk)
    return True
def get_variant_aggregate(carId):
    items = vp_table.get_items(
        pk=carId,
        keyname='carId',
        index_name='carId_index',
        projections=['PK']
    )
    pk = None
    if len(items) == 1:
        pk = items[0].get('PK', None)
    if pk is None:
        raise ValueError(f"Could not find PK for carId {carId}")

    items = vp_table.get_items(
        pk=pk,
        keyname='PK',
        projections=['vciStatus']
    )
    agg = None
    if len(items) == 1:
        agg = items[0].get('vciStatus', None)
    if agg is None:
        raise ValueError(f"Could not find vciStatus for PK {pk}")
    return agg, pk
def update_aggregate_object(aggregate, actions, affiliation=None):
    # Create a deep copy so we don't change the original.
    updated_aggregate = copy.deepcopy(aggregate)

    # If we have no affiliation, this is an 'individual' interpretation,
    # which is represented as 'd'.
    if affiliation is None or affiliation == '':
        affiliation = 'd'  # Individual

    # If we don't have this affiliation yet in the aggregate object.
    if affiliation not in aggregate:
        updated_aggregate[affiliation] = _populate_new_status_dict(actions)
    # If the interpretation is not associated with an affiliation (i.e. individual)
    elif affiliation == 'd':
        aff = updated_aggregate[affiliation]
        _update_status_dict(actions, aff)
    # Or if it's associated with an affiliation
    else:
        logger.info("Found aff in updated_aggregate")
        aff = updated_aggregate[affiliation]
        _update_status_dict(actions, aff)

    # Now, we need to update the 'a' portion of the dict (except for individuals).
    # This counts the number of affiliations which contain an interpretation for
    # this variant in each state.
    if affiliation != 'd':
        if 'a' not in updated_aggregate:
            updated_aggregate['a'] = _populate_new_status_dict(actions)
        else:
            for status, key in status_map.items():
                # If we don't need to do anything, skip it.
                if actions[status] == 0:
                    continue
                if key in updated_aggregate['a']:
                    updated_aggregate['a'][key] += actions[status]
                else:
                    if actions[status] == -1:
                        raise ValueError("Tried to decrement missing count.")
                    updated_aggregate['a'][key] = 1
    return updated_aggregate
def _update_status_dict(actions, status_dict):
    for status, key in status_map.items():
        logger.info(f"{status} {key}, {actions[status]}")
        if key in status_dict:
            status_dict[key] += actions[status]
            if status_dict[key] < 0:
                raise ValueError("Error: count is now less than zero.")
        elif actions[status] == 1:
            status_dict[key] = 1
        elif actions[status] == -1:
            raise ValueError("Tried to decrement a non-existing value.")


def _populate_new_status_dict(actions):
    tmp = {}
    for status, key in status_map.items():
        if actions[status] == 1:
            tmp[key] = 1
        elif actions[status] == -1:
            logger.info(f"Attempting to decrement {status} but did not find aggregate object.")
            raise ValueError("Did not find affiliation in vciStatus and tried decrement action. Invalid state.")
    return tmp
def write_aggregate(updated_aggregate, vppk):
    logger.info("Writing: %s", updated_aggregate)
    vp_table.update_attr(vppk, 'PK', 'vciStatus', updated_aggregate)
    return True


def get_car_id(variant_pk):
    items = gvc_table.get_items(variant_pk, "PK", projections=["carId"])
    retval = None
    if len(items) != 0:
        retval = items[0].get('carId', None)
    return retval
def handle_insert(new_item):
    if new_item.get('item_type', None) != 'interpretation':
        return None
    new_status = get_interpretation_status(new_item)
    return {
        'variant_pk': new_item['variant'],
        'affiliation': new_item['affiliation'],
        'actions': new_status
    }


def handle_modify(new_item, old_item):
    if new_item.get('item_type', None) != 'interpretation':
        return None
    new_status = get_interpretation_status(new_item)
    old_status = get_interpretation_status(old_item)
    # This will give us 1 for increment, 0 for stay the same, or -1 for decrement
    actions = {k: (v - old_status[k]) for k, v in new_status.items()}
    return {
        'variant_pk': new_item['variant'],
        'affiliation': new_item['affiliation'],
        'actions': actions
    }


def handle_remove(old_item):
    if old_item.get('item_type', None) != 'interpretation':
        return None
    old_status = get_interpretation_status(old_item)
    actions = {k: -v for k, v in old_status.items()}
    return {
        'variant_pk': old_item['variant'],
        'affiliation': old_item['affiliation'],
        'actions': actions
    }
def get_interpretation_status(interpretation):
    statuses = {
        'in_progress': 0,
        'Provisioned': 0,
        'Approved': 0
    }
    # A missing 'snapshots' key or an empty snapshot list both mean the
    # interpretation is still in progress.
    if interpretation.get('snapshots'):
        # Grab the unique set of statuses and update the statuses hash with
        # one for each status found.
        for snapshotPK in interpretation['snapshots']:
            items = gvc_table.get_items(snapshotPK, "PK", projections=["approvalStatus"])
            if len(items) == 1:
                ustatus = items[0].get('approvalStatus', None)
                logger.info("Found status %s", ustatus)
                if ustatus not in statuses:
                    raise KeyError(f"Unexpected status {ustatus} found in snapshot")
                statuses[ustatus] = 1
    else:
        # The interpretation record didn't have any related snapshots, which
        # indicates the variant is in the in-progress state.
        statuses['in_progress'] = 1
    return statuses
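# Stand-alone illustration of the dedupe-and-flag step used when computing an
# interpretation's statuses, with a made-up list of snapshot approvalStatus
# values (no table lookups involved):

```python
snapshot_statuses = ['Approved', 'Provisioned', 'Approved']

statuses = {'in_progress': 0, 'Provisioned': 0, 'Approved': 0}
# A set collapses duplicate statuses so each one is flagged at most once.
for ustatus in set(snapshot_statuses):
    if ustatus not in statuses:
        raise KeyError(f"Unexpected status {ustatus} found in snapshot")
    statuses[ustatus] = 1

print(statuses)  # {'in_progress': 0, 'Provisioned': 1, 'Approved': 1}
```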
def ddb_deserialize(r, type_deserializer=TypeDeserializer()):
    return type_deserializer.deserialize({"M": r})


if __name__ == "__main__":
    context = []
    with open("test_event.json", "r") as f:
        event = json.load(f)
    handler(event, context)
# lidar_pbl/utils/__init__.py (repo: jdlar1/lidar-pbl, license: MIT)
from .io import *
from .visualization import *
from .misc import *
from .rcs import *
# examples/list_basic.py (repo: igfish/toyvm, license: MIT)
l = [1, 2]
print(l)
# pruner/tests/fake_proj/fake_fail_proj.py (repo: mattjegan/pruner, license: Apache-2.0)
a = 1/0
# test/test_foo.py (repo: NWCalvank/react-python-starter, license: Apache-2.0)
import json
import unittest

from app import db
from test.base import BaseTestCase
from test.helpers import create_foo_string


class TestFooService(BaseTestCase):

    def test_get_all_foo(self):
        create_foo_string('foo_string_1')
        create_foo_string('foo_string_2')
        create_foo_string('foo_string_3')
        with self.client:
            response = self.client.get('/api/foo')
            data = json.loads(response.data.decode())
            self.assertEqual(response.status_code, 200)
            self.assertEqual(len(data['records']), 3)

    def test_get_foo(self):
        record = create_foo_string('foo_string_1')
        with self.client:
            response = self.client.get(f'/api/foo/{record.id}')
            data = json.loads(response.data.decode())
            self.assertEqual(response.status_code, 200)
            self.assertEqual(data['id'], record.id)

    def test_get_foo_not_found(self):
        with self.client:
            response = self.client.get('/api/foo/9999')
            self.assertEqual(response.status_code, 404)

    def test_post_foo(self):
        test_string = 'I am a test string'
        with self.client:
            response = self.client.post(
                '/api/foo',
                data=json.dumps({'string_field': test_string}),
                content_type='application/json',
            )
            self.assertEqual(response.status_code, 201)
            data = json.loads(response.data.decode())
            self.assertEqual(data['string_field'], test_string)

    def test_post_foo_exists(self):
        test_string = 'I am a test string'
        create_foo_string(test_string)
        with self.client:
            response = self.client.post(
                '/api/foo',
                data=json.dumps({'string_field': test_string}),
                content_type='application/json',
            )
            self.assertEqual(response.status_code, 400)

    def test_put_foo(self):
        record = create_foo_string('foo_string_1')
        with self.client:
            response = self.client.put(
                f'/api/foo/{record.id}',
                data=json.dumps({'string_field': 'I am a test string'}),
                content_type='application/json',
            )
            self.assertEqual(response.status_code, 200)
            data = json.loads(response.data.decode())
            self.assertEqual(data['string_field'], 'I am a test string')

    def test_put_foo_not_found(self):
        with self.client:
            response = self.client.put(
                '/api/foo/9999',
                data=json.dumps({'string_field': 'I am a test string'}),
                content_type='application/json',
            )
            self.assertEqual(response.status_code, 404)

    def test_put_foo_invalid_string(self):
        record = create_foo_string('string')
        with self.client:
            response = self.client.put(
                f'/api/foo/{record.id}',
                data=json.dumps({'string_field': None}),
                content_type='application/json',
            )
            self.assertEqual(response.status_code, 400)

    def test_delete_foo(self):
        record = create_foo_string('foo_string_1')
        with self.client:
            response = self.client.delete(f'/api/foo/{record.id}')
            self.assertEqual(response.status_code, 200)

    def test_delete_foo_not_found(self):
        with self.client:
            response = self.client.delete('/api/foo/9999')
            self.assertEqual(response.status_code, 404)


if __name__ == '__main__':
    unittest.main()
# combine_files.py (repo: navierula/language-in-real-and-fake-news, license: MIT)
import glob2
#######################################################################

# find all file names with a .txt extension
filenames = glob2.glob('data/political_news/fake_headlines/*.txt')

# concatenate all individual files into one file
with open("fake_headlines.txt", "w", encoding="ISO-8859-1") as f:
    for file in filenames:
        with open(file, encoding="ISO-8859-1") as infile:
            # append the 'fake' label at the end of each line
            f.write(infile.read() + "\t" + "fake" + "\n")

########################################################################

# find all file names with a .txt extension
filenames = glob2.glob('data/political_news/real_headlines/*.txt')

# concatenate all individual files into one file
with open("real_headlines.txt", "w", encoding="ISO-8859-1") as f:
    for file in filenames:
        with open(file, encoding="ISO-8859-1") as infile:
            # append the 'real' label at the end of each line
            f.write(infile.read() + "\t" + "real" + "\n")

########################################################################
0682b35c2284ae7dc65596d73f9055d7ba1e09cb | 8,446 | py | Python | commonkit/shell/feedback.py | develmaycare/python-commonkit | 329e723cdcc3591cf42ca5a02893c17ec28141c4 | [
"BSD-3-Clause"
] | null | null | null | commonkit/shell/feedback.py | develmaycare/python-commonkit | 329e723cdcc3591cf42ca5a02893c17ec28141c4 | [
"BSD-3-Clause"
] | 7 | 2020-10-19T17:44:25.000Z | 2021-05-27T22:44:51.000Z | commonkit/shell/feedback.py | develmaycare/python-commonkit | 329e723cdcc3591cf42ca5a02893c17ec28141c4 | [
"BSD-3-Clause"
] | 1 | 2021-06-10T10:42:06.000Z | 2021-06-10T10:42:06.000Z | # Imports
from colorama import init as colorama_init, Fore, Style

from ..context_managers import captured_output

colorama_init()

__version__ = "0.7.1-a"

# Exports

__all__ = (
    "BLUE",
    "GREEN",
    "RED",
    "YELLOW",
    "blue",
    "colorize",
    "green",
    "hr",
    "plain",
    "red",
    "yellow",
    "Feedback",
)

# Constants

BLUE = Fore.BLUE
GREEN = Fore.GREEN
RED = Fore.RED
YELLOW = Fore.YELLOW
# Functions
def colorize(color, message, prefix=None, suffix=None):
    """Return the given message in color.

    :param color: The color to use. A ``colorama.Fore`` constant.
    :type color: str

    :param message: The message to be colorized.
    :type message: str

    :param prefix: A string to include before the message. A space is automatically added to the end.
    :type prefix: str

    :param suffix: A string to include after the message. A space is automatically added to the beginning.
    :type suffix: str

    :rtype: str

    """
    a = list()
    a.append(color)

    if prefix is not None:
        a.append(prefix + " ")

    a.append(message)

    if suffix is not None:
        a.append(" " + suffix)

    a.append(Style.RESET_ALL)

    return "".join(a)
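# For readers without colorama installed: the string colorize() builds is just
# ANSI escape sequences wrapped around the message. A dependency-free sketch
# (the escape values below match what colorama exposes on ANSI terminals):

```python
BLUE = "\x1b[34m"      # what colorama exposes as Fore.BLUE
RESET_ALL = "\x1b[0m"  # what colorama exposes as Style.RESET_ALL

# colorize(BLUE, "hello", prefix=">>") effectively assembles:
message = BLUE + ">> " + "hello" + RESET_ALL
print(repr(message))  # '\x1b[34m>> hello\x1b[0m'
```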
# Colors
def blue(message, prefix=None, suffix=None):
    """Print the message in blue text.

    :param message: The message to be printed.
    :type message: str

    :param prefix: A string to print before the message. A space is automatically added to the end.
    :type prefix: str

    :param suffix: A string to print after the message. A space is automatically added to the beginning.
    :type suffix: str

    """
    print(colorize(BLUE, message, prefix=prefix, suffix=suffix))


def green(message, prefix=None, suffix=None):
    """Print the message in green text.

    :param message: The message to be printed.
    :type message: str

    :param prefix: A string to print before the message. A space is automatically added to the end.
    :type prefix: str

    :param suffix: A string to print after the message. A space is automatically added to the beginning.
    :type suffix: str

    """
    print(colorize(GREEN, message, prefix=prefix, suffix=suffix))


def hr(character="-", color=None, size=80):
    """Print a horizontal rule to feedback.

    :param character: The character to use for the line.
    :type character: str

    :param color: The color function to use.
    :type color: function

    :param size: The number of characters to print.
    :type size: int

    """
    message = character * size
    if callable(color):
        color(message)
        return

    print(message)


def plain(message, prefix=None, suffix=None):
    """Print the message in plain text.

    :param message: The message to be printed.
    :type message: str

    :param prefix: A string to print before the message. A space is automatically added to the end.
    :type prefix: str

    :param suffix: A string to print after the message. A space is automatically added to the beginning.
    :type suffix: str

    """
    a = list()

    if prefix is not None:
        a.append(prefix + " ")

    a.append(message)

    if suffix is not None:
        a.append(" " + suffix)

    print("".join(a))


def red(message, prefix=None, suffix=None):
    """Print the message in red text.

    :param message: The message to be printed.
    :type message: str

    :param prefix: A string to print before the message. A space is automatically added to the end.
    :type prefix: str

    :param suffix: A string to print after the message. A space is automatically added to the beginning.
    :type suffix: str

    """
    print(colorize(RED, message, prefix=prefix, suffix=suffix))


def yellow(message, prefix=None, suffix=None):
    """Print the message in yellow text.

    :param message: The message to be printed.
    :type message: str

    :param prefix: A string to print before the message. A space is automatically added to the end.
    :type prefix: str

    :param suffix: A string to print after the message. A space is automatically added to the beginning.
    :type suffix: str

    """
    print(colorize(YELLOW, message, prefix=prefix, suffix=suffix))
# Feedback Class
class Feedback(object):
    """Collects feedback in a single instance."""

    def __init__(self):
        self.messages = list()

    def __iter__(self):
        return iter(self.messages)

    def __len__(self):
        return len(self.messages)

    def __str__(self):
        return "\n".join(self.messages)

    def blue(self, message, prefix=None, suffix=None):
        """Add a message in blue text.

        :param message: The message to be printed.
        :type message: str

        :param prefix: A string to print before the message. A space is automatically added to the end.
        :type prefix: str

        :param suffix: A string to print after the message. A space is automatically added to the beginning.
        :type suffix: str

        """
        self.messages.append(colorize(BLUE, message, prefix=prefix, suffix=suffix))

    def cr(self):
        """Add a carriage return (line feed) to the feedback."""
        self.messages.append("")

    def green(self, message, prefix=None, suffix=None):
        """Add a message in green text.

        :param message: The message to be printed.
        :type message: str

        :param prefix: A string to print before the message. A space is automatically added to the end.
        :type prefix: str

        :param suffix: A string to print after the message. A space is automatically added to the beginning.
        :type suffix: str

        """
        self.messages.append(colorize(GREEN, message, prefix=prefix, suffix=suffix))

    def heading(self, label, divider="="):
        """Add a heading to the output.

        :param label: The label of the heading.
        :type label: str

        :param divider: The divider that goes under the heading.
        :type divider: str

        """
        self.messages.append(label)
        self.messages.append(divider * len(label))
        self.cr()

    def hr(self, character="-", color=None, size=80):
        """Add a horizontal rule to feedback.

        :param character: The character to use for the line.
        :type character: str

        :param color: The color function to use.
        :type color: function

        :param size: The number of characters to print.
        :type size: int

        """
        message = character * size
        if callable(color):
            with captured_output() as (output, error):
                color(message)
            message = output.getvalue()

        self.messages.append(message)

    def plain(self, message, prefix=None, suffix=None):
        """Add a plain text message.

        :param message: The message to be printed.
        :type message: str

        :param prefix: A string to print before the message. A space is automatically added to the end.
        :type prefix: str

        :param suffix: A string to print after the message. A space is automatically added to the beginning.
        :type suffix: str

        """
        a = list()

        if prefix is not None:
            a.append(prefix + " ")

        a.append(message)

        if suffix is not None:
            a.append(" " + suffix)

        self.messages.append("".join(a))

    def red(self, message, prefix=None, suffix=None):
        """Add a message in red text.

        :param message: The message to be printed.
        :type message: str

        :param prefix: A string to print before the message. A space is automatically added to the end.
        :type prefix: str

        :param suffix: A string to print after the message. A space is automatically added to the beginning.
        :type suffix: str

        """
        self.messages.append(colorize(RED, message, prefix=prefix, suffix=suffix))

    def yellow(self, message, prefix=None, suffix=None):
        """Add a message in yellow text.

        :param message: The message to be printed.
        :type message: str

        :param prefix: A string to print before the message. A space is automatically added to the end.
        :type prefix: str

        :param suffix: A string to print after the message. A space is automatically added to the beginning.
        :type suffix: str

        """
        self.messages.append(colorize(YELLOW, message, prefix=prefix, suffix=suffix))
068b875422650631db8e244cbebb8bfe6b0cda65 | 81 | py | Python | sender/src/utils/__init__.py | kaminski-pawel/send-emails | aff7b930754394ea72100ca9ec93362e198f0eda | [
"MIT"
] | null | null | null | sender/src/utils/__init__.py | kaminski-pawel/send-emails | aff7b930754394ea72100ca9ec93362e198f0eda | [
"MIT"
] | null | null | null | sender/src/utils/__init__.py | kaminski-pawel/send-emails | aff7b930754394ea72100ca9ec93362e198f0eda | [
"MIT"
] | null | null | null | from src.utils.utils import open_json_file, open_html_file, open_csv_file # noqa
# ==== code_search_web/code_search_app/shared.py (repo: novoselrok/codesnippetsearch, license: MIT) ====
from pygments.formatters import HtmlFormatter
def get_pygments_html_formatter():
return HtmlFormatter(linenos=False, style='xcode', cssclass='codesnippetsearch-highlight')
# ==== python/darknet/api2/__init__.py (repo: elsampsa/darknet-python, license: MIT) ====
from .predictor import *
from .trainer import *
from .tools import downloadYOLOv3, downloadYOLOv3Tiny
from .error import *
# ==== modules/ssds2018/scripts/ssds/__init__.py (repo: moChen0607/ssds, license: MIT) ====
from main import build
# ==== dbert/distill/model/__init__.py (repo: samsucik/d-bert, license: MIT) ====
from .base import *
from .bert import *
from .kim_cnn import *
from .conv_rnn import *
from .bi_rnn import *
from .siamese_rnn import *
# ==== dset_loaders/collect_ids_func.py (repo: Luodian/Learning-Invariant-Representations-and-Risks, license: MIT) ====
import os
import numpy as np
import random
from .label_parser_dict import *
portion = {
    1: "labeled",
    5: "labeled_5",
    10: "labeled_10",
    15: "labeled_15",
    20: "labeled_20",
    25: "labeled_25",
    30: "labeled_30",
    70: "labeled_70"
}
# /nfs/volume-92-5/wangyezhen_i/Projects/Theoretical_Projects/InstaPBM-V1/
shift_path_root_dict = {
'lds': 'datasets/LDS',
'ilds': 'datasets/ILDS',
'convention': 'datasets/convention'
}
def shuffling(tensor_list):
# shuffling
permutation = np.random.permutation(len(tensor_list[0]))
new_tensor_list = np.array(tensor_list)[:, permutation].tolist()
return new_tensor_list
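The helper above draws one permutation and applies it to every parallel list (ids, labels, masks) so that entries at the same position stay aligned. A minimal pure-Python sketch of the same invariant, without the numpy dependency (the name `shuffle_parallel` and the `seed` parameter are illustrative, not part of this codebase):

```python
import random

def shuffle_parallel(tensor_list, seed=None):
    # Draw a single permutation and apply it to each parallel list, so
    # items at the same index remain aligned after shuffling -- the same
    # invariant `shuffling` above maintains via numpy fancy indexing.
    rng = random.Random(seed)
    permutation = list(range(len(tensor_list[0])))
    rng.shuffle(permutation)
    return [[row[i] for i in permutation] for row in tensor_list]
```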
def collect_ids_cls(args):
    # Collect sample paths and class labels for each source/target partition
    # (classification datasets).
data_collection = {
'source':{
'train': {'ids':[], 'labels':[]},
'validation': {'ids':[], 'labels':[]}
},
'target':{
'labeled': {'ids':[], 'labels':[]},
'unlabeled': {'ids':[], 'labels':[]},
'validation': {'ids':[], 'labels':[]}
}
}
shift_type = args.domain_shift_type
general_domain = args.dataset
print('==> begin to load ids.')
shift_path_root = shift_path_root_dict[shift_type]
for dm in args.source:
domain_ls_path = os.path.join(
shift_path_root,
general_domain,
'source', dm + '.txt'
)
domain_reader = open(domain_ls_path, 'r')
for line in domain_reader:
if line == '\n':
continue
id, cls = line.replace('\n', '').split(' ')
data_collection['source']['train']['ids'].append(os.path.join(args.data_root, dm.split('_')[0] + '/' + id))
data_collection['source']['train']['labels'].append(label2index_parser[general_domain][cls])
domain_reader.close()
target_partitions = [portion.get(args.target_labeled_portion, "labeled"), 'unlabeled', 'validation']
for item in target_partitions:
t_p = item.split("_")[0]
domain_ls_path = os.path.join(
shift_path_root,
general_domain,
'target', args.target + '_' + item + '.txt'
)
domain_reader = open(domain_ls_path, 'r')
for line in domain_reader:
if line == '\n':
continue
id, cls = line.replace('\n', '').split(' ')
data_collection['target'][t_p]['ids'].append(os.path.join(args.data_root, args.target + '/' + id))
data_collection['target'][t_p]['labels'].append(label2index_parser[general_domain][cls])
domain_reader.close()
# shuffling
shuffled_src_data = shuffling(
[data_collection['source']['train']['ids'],
data_collection['source']['train']['labels']]
)
data_collection['source']['train']['ids'] = shuffled_src_data[0]
data_collection['source']['train']['labels'] = shuffled_src_data[1]
data_collection['source']['validation']['ids'] = shuffled_src_data[0][:5000]
data_collection['source']['validation']['labels'] = shuffled_src_data[1][:5000]
for item in target_partitions:
t_p = item.split("_")[0]
shuffled_src_data = shuffling(
[data_collection['target'][t_p]['ids'],
data_collection['target'][t_p]['labels']]
)
data_collection['target'][t_p]['ids'] = shuffled_src_data[0]
data_collection['target'][t_p]['labels'] = shuffled_src_data[1]
return data_collection
def collect_ids_reg(args):
    # Collect sample paths, regression targets, and masks for each
    # source/target partition (regression datasets).
data_collection = {
'source':{
'train': {'ids':[], 'labels':[], 'masks':[]},
'validation': {'ids':[], 'labels':[], 'masks':[]}
},
'target':{
'labeled': {'ids':[], 'labels':[], 'masks':[]},
'unlabeled': {'ids':[], 'labels':[], 'masks':[]},
'validation': {'ids':[], 'labels':[], 'masks':[]}
}
}
shift_type = args.domain_shift_type
general_domain = args.dataset
print('==> begin to load ids.')
shift_path_root = shift_path_root_dict[shift_type]
for dm in args.source:
domain_ls_path = os.path.join(
shift_path_root,
general_domain,
'source', dm + '.txt'
)
domain_reader = open(domain_ls_path, 'r')
for line in domain_reader:
if line == '\n':
continue
id, reg, mask = line.replace('\n', '').split(' ')
data_collection['source']['train']['ids'].append(os.path.join(args.data_root, dm.split('_')[0] + '/' + id))
data_collection['source']['train']['labels'].append(os.path.join(args.data_root, dm.split('_')[0] + '/' + reg))
data_collection['source']['train']['masks'].append(os.path.join(args.data_root, dm.split('_')[0] + '/' + mask))
domain_reader.close()
target_partitions = [portion.get(args.target_labeled_portion, "labeled"), 'unlabeled', 'validation']
for item in target_partitions:
t_p = item.split("_")[0]
domain_ls_path = os.path.join(
shift_path_root,
general_domain,
'target', args.target + '_' + item + '.txt'
)
domain_reader = open(domain_ls_path, 'r')
for line in domain_reader:
if line == '\n':
continue
id, reg, mask = line.replace('\n', '').split(' ')
data_collection['target'][t_p]['ids'].append(os.path.join(args.data_root, args.target + '/' + id))
data_collection['target'][t_p]['labels'].append(os.path.join(args.data_root, args.target + '/' + reg))
data_collection['target'][t_p]['masks'].append(os.path.join(args.data_root, args.target + '/' + mask))
domain_reader.close()
# shuffling
shuffled_src_data = shuffling(
[data_collection['source']['train']['ids'],
data_collection['source']['train']['labels'],
data_collection['source']['train']['masks']]
)
data_collection['source']['train']['ids'] = shuffled_src_data[0]
data_collection['source']['train']['labels'] = shuffled_src_data[1]
data_collection['source']['train']['masks'] = shuffled_src_data[2]
data_collection['source']['validation']['ids'] = shuffled_src_data[0][:5000]
data_collection['source']['validation']['labels'] = shuffled_src_data[1][:5000]
data_collection['source']['validation']['masks'] = shuffled_src_data[2][:5000]
for t_p in target_partitions:
if 'validation' not in t_p:
t_p = t_p.split("_")[0]
shuffled_src_data = shuffling(
[data_collection['target'][t_p]['ids'],
data_collection['target'][t_p]['labels'],
data_collection['target'][t_p]['masks']]
)
data_collection['target'][t_p]['ids'] = shuffled_src_data[0]
data_collection['target'][t_p]['labels'] = shuffled_src_data[1]
data_collection['target'][t_p]['masks'] = shuffled_src_data[2]
return data_collection
collect_ids = {'cls': collect_ids_cls, 'reg': collect_ids_reg} | 40.468571 | 123 | 0.572861 | 815 | 7,082 | 4.706748 | 0.121472 | 0.142336 | 0.114703 | 0.110792 | 0.825078 | 0.780761 | 0.770334 | 0.729406 | 0.726799 | 0.708551 | 0 | 0.014452 | 0.24767 | 7,082 | 175 | 124 | 40.468571 | 0.705518 | 0.014403 | 0 | 0.594937 | 0 | 0 | 0.150229 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.018987 | false | 0 | 0.025316 | 0 | 0.063291 | 0.012658 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4e3fa84f3875a3cd47c39cf0e3218a673213f05a | 49 | py | Python | scripts/qgis_fixes/fix_raise.py | dyna-mis/Hilabeling | cb7d5d4be29624a20c8a367162dbc6fd779b2b52 | [
"MIT"
] | null | null | null | scripts/qgis_fixes/fix_raise.py | dyna-mis/Hilabeling | cb7d5d4be29624a20c8a367162dbc6fd779b2b52 | [
"MIT"
] | null | null | null | scripts/qgis_fixes/fix_raise.py | dyna-mis/Hilabeling | cb7d5d4be29624a20c8a367162dbc6fd779b2b52 | [
"MIT"
] | 1 | 2021-12-25T08:40:30.000Z | 2021-12-25T08:40:30.000Z | from libfuturize.fixes.fix_raise import FixRaise
# ==== pyleecan/Methods/Slot/SlotW11/__init__.py (repo: IrakozeFD/pyleecan, license: Apache-2.0) ====
from ....Methods.Slot.Slot import SlotCheckError
class S11_W01CheckError(SlotCheckError):
""" """
pass
class S11_RWCheckError(SlotCheckError):
""" """
pass
class S11_RHCheckError(SlotCheckError):
""" """
pass
class S11_H1rCheckError(SlotCheckError):
""" """
pass
# ==== sks/sks/doctype/compliant/test_compliant.py (repo: Shankarv19bcr/SKSSSSS, license: MIT) ====
# Copyright (c) 2022, Thirvusoft and Contributors
# See license.txt
# import frappe
import unittest
class TestCompliant(unittest.TestCase):
pass
# ==== backend/beedare/hive/__init__.py (repo: gijs3ntius/BeeDare, license: Apache-2.0) ====
from flask import Blueprint
hive_blueprint = Blueprint('hive', __name__)
from . import views
# ==== csp/propagators/__init__.py (repo: abeccaro/csp-solver, license: MIT) ====
from csp.propagators.propagator import Propagator
from csp.propagators.dummy_propagator import DummyPropagator
from csp.propagators.forward_check_propagator import ForwardCheckPropagator
from csp.propagators.arc_consistency_propagator import ArcConsistencyPropagator
# ==== CRI_WeeklyMaps/Tracking_Map/mapperFieldscratch2.py (repo: adambreznicky/smudge_python, license: MIT) ====
# ---------------------------------------------------------------------------
# mapperFieldscratch2.py
# Created on: 2014-01-03 10:01:45.00000
# (generated by ArcGIS/ModelBuilder)
# Description:
# ---------------------------------------------------------------------------
# Import arcpy module
import arcpy
# Local variables:
owssvr_ = "C:\\TxDOT\\CountyRoadInventory\\Book1.xlsx\\owssvr$"
Shapefiles = "C:\\TxDOT\\CountyRoadInventory\\TRACKING\\Shapefiles"
# Process: Table to Table
arcpy.TableToTable_conversion(
    owssvr_, Shapefiles, "queriedtable.dbf", "",
    "ID \"ID\" true true false 8 Double 6 15 ,First,#;"
    "Update_Yea \"Update_Yea\" true true false 255 Text 0 0 ,First,#,C:\\TxDOT\\CountyRoadInventory\\Book1.xlsx\\owssvr$,Update Year,-1,-1;"
    "District \"District\" true true false 255 Text 0 0 ,First,#,C:\\TxDOT\\CountyRoadInventory\\Book1.xlsx\\owssvr$,District,-1,-1;"
    "County \"County\" true true false 255 Text 0 0 ,First,#,C:\\TxDOT\\CountyRoadInventory\\Book1.xlsx\\owssvr$,County,-1,-1;"
    "Status \"Status\" true true false 255 Text 0 0 ,First,#,C:\\TxDOT\\CountyRoadInventory\\Book1.xlsx\\owssvr$,Status,-1,-1",
    "")
# ==== src/classifier.py (repo: HKUST-KnowComp/DisCOC, license: MIT) ====
import torch as th
import torch.nn as nn
import torch.nn.functional as F
from dataset import CONTEXT, TEXT, CLS
from encoder import *
from function import map_activation_str_to_layer
from layer import *
from util import *
INF = 1e30
_INF = -1e30
def process_indices(sent_ids):
if sent_ids.dim() == 1:
indices = split_ids(sent_ids)
elif sent_ids.dim() == 2:
indices = split_ids(sent_ids[0])
else:
raise ValueError("Error: sent_ids.dim() != 1 or 2.")
return indices
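`process_indices` delegates to `split_ids` from `util`, whose exact implementation is outside this file. Assuming it turns per-token sentence ids into cumulative sentence end offsets (the shape `process_type_ids` below consumes), a hypothetical pure-Python stand-in looks like:

```python
def split_ids_sketch(sent_ids):
    # Given per-token sentence ids such as [0, 0, 1, 1, 1], return the
    # cumulative end offset of each sentence, e.g. [2, 5].  The last
    # entry equals the total token length.  This is only an assumed
    # reimplementation of `util.split_ids` for illustration.
    indices = []
    for i in range(1, len(sent_ids)):
        if sent_ids[i] != sent_ids[i - 1]:
            indices.append(i)
    indices.append(len(sent_ids))
    return indices
```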
def process_type_ids(x1_indices, x2_indices, method="bert"):
    """Build token type (segment) ids for the context (x1) and text (x2) tokens.

    x1_indices and x2_indices hold the cumulative end offset of each sentence
    (the last entry equals the total token length); `method` selects one of
    the segment-id schemes documented in the branches below.
    """
x1_len = x1_indices[-1].item()
x2_len = x2_indices[-1].item()
# BERT (Devlin et al., 2019)
# only two segment ids
if method == "bert" or method == "flat":
x1_type_ids = th.zeros((x1_len, ), dtype=th.long, device=x1_indices.device)
x2_type_ids = th.ones((x2_len, ), dtype=th.long, device=x2_indices.device)
# XLNet (Yang et al., 2019)
# each segment has an unique id
elif method == "xlnet" or method == "segmented":
x1_type_ids = th.zeros((x1_len, ), dtype=th.long, device=x1_indices.device)
for i in range(1, len(x1_indices)):
x1_type_ids[x1_indices[i-1]:x1_indices[i]].fill_(i)
bias = len(x1_indices)
x2_type_ids = th.empty((x2_len, ), dtype=th.long, device=x2_indices.device).fill_(bias)
for i in range(1, len(x2_indices)):
x2_type_ids[x2_indices[i-1]:x2_indices[i]].fill_(i + bias)
# BERTSum (Liu et al., 2019)
# use segment ids in turn
# 0/1: context
# 0/1: text
elif method == "bert-sum" or method == "naive-interval":
# make sure the last context as 1
bias = len(x1_indices) % 2
x1_type_ids = th.zeros((x1_len, ), dtype=th.long, device=x1_indices.device).fill_(bias)
for i in range(1, len(x1_indices)):
x1_type_ids[x1_indices[i-1]:x1_indices[i]].fill_((i + bias) % 2)
x2_type_ids = th.zeros((x2_len, ), dtype=th.long, device=x2_indices.device).fill_(2)
# make sure the first text as 2
for i in range(1, len(x2_indices)):
x2_type_ids[x2_indices[i-1]:x2_indices[i]].fill_(i % 2)
# BMGF-RoBERTa (Liu et al., 2020)
# use 0 for previous context
# 0/1: context
# 2/3: text
elif method == "bmgf-roberta" or method == "interval":
# make sure the last context as 1
bias = len(x1_indices) % 2
x1_type_ids = th.zeros((x1_len, ), dtype=th.long, device=x1_indices.device).fill_(bias)
for i in range(1, len(x1_indices)):
x1_type_ids[x1_indices[i-1]:x1_indices[i]].fill_((i + bias) % 2)
x2_type_ids = th.zeros((x2_len, ), dtype=th.long, device=x2_indices.device).fill_(2)
# make sure the first text as 2
for i in range(1, len(x2_indices)):
x2_type_ids[x2_indices[i-1]:x2_indices[i]].fill_(2 + i % 2)
else:
        raise NotImplementedError(
            "Error: the method of process_type_ids should be "
            "\"bert (flat)\", \"xlnet (segmented)\", \"bert-sum (naive-interval)\", "
            "or \"bmgf-roberta (interval)\"."
        )
return x1_type_ids, x2_type_ids
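The "bmgf-roberta (interval)" branch above assigns context sentence i the id (i + bias) % 2 with bias = len % 2, so the last context sentence is always 1, and text sentence i the id 2 + i % 2, so the first text sentence is always 2. A minimal sketch over per-sentence lengths, without the torch dependency (`interval_type_ids` is an illustrative name, not part of this codebase):

```python
def interval_type_ids(x1_sent_lens, x2_sent_lens):
    # Context sentence i -> (i + bias) % 2, bias = len(x1_sent_lens) % 2,
    # guaranteeing the last context sentence is 1; text sentence i ->
    # 2 + i % 2, guaranteeing the first text sentence is 2.  Mirrors the
    # "bmgf-roberta (interval)" branch of process_type_ids above.
    bias = len(x1_sent_lens) % 2
    x1_ids = []
    for i, n in enumerate(x1_sent_lens):
        x1_ids.extend([(i + bias) % 2] * n)
    x2_ids = []
    for i, n in enumerate(x2_sent_lens):
        x2_ids.extend([2 + i % 2] * n)
    return x1_ids, x2_ids
```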
#################################################################
########################## Flatten Model ########################
#################################################################
class FlatModel(nn.Module):
def __init__(self, **kw):
super(FlatModel, self).__init__()
max_num_text = kw.get("max_num_text", 1)
max_num_context = kw.get("max_num_context", 1)
encoder = kw.get("encoder", "roberta")
dropout = kw.get("dropout", 0.0)
self.max_num_context = max_num_context
self.max_num_text = max_num_text
self.drop = nn.Dropout(dropout, inplace=False)
if encoder == "bert":
self.encoder = BertEncoder(num_segments=max_num_text+max_num_context+2, **kw)
elif encoder == "albert":
self.encoder = AlbertEncoder(num_segments=max_num_text+max_num_context+2, **kw)
elif encoder == "roberta":
self.encoder = RobertaEncoder(num_segments=max_num_text+max_num_context+2, **kw)
elif encoder == "xlnet":
self.encoder = XLNetEncoder(num_segments=max_num_text+max_num_context+2, **kw)
elif encoder == "lstm":
self.encoder = LSTMEncoder(num_segments=max_num_text+max_num_context+2, **kw)
else:
raise NotImplementedError("Error: encoder=%s is not supported now." % (encoder))
self.create_layers(self.encoder.get_output_dim(), **kw)
def create_layers(self, input_dim, **kw):
max_num_text = kw.get("max_num_text", 1)
max_num_context = kw.get("max_num_context", 1)
max_len = kw.get("max_len", 512)
hidden_dim = kw.get("hidden_dim", 128)
add_matching = kw.get("add_matching", False)
add_fusion = kw.get("add_fusion", False)
add_conv = kw.get("add_conv", False)
add_trans = kw.get("add_trans", False)
add_gru = kw.get("add_gru", False)
conv_filters = kw.get("conv_filters", 64)
num_perspectives = kw.get("num_perspectives", 8)
num_labels = kw.get("num_labels", 3)
dropout = kw.get("dropout", 0.0)
activation = kw.get("activation", "relu")
dim = input_dim
if add_matching:
# bidirectional matching
self.matching_layer = BiMpmMatching(hidden_dim=dim, num_perspectives=num_perspectives)
dim = dim + self.matching_layer.get_output_dim() * 2
else:
self.register_parameter("matching_layer", None)
if add_fusion:
self.fusion_layer = DotAttention(
query_dim=dim,
key_dim=dim,
value_dim=dim,
hidden_dim=hidden_dim,
num_heads=num_perspectives,
scale=1 / hidden_dim**0.5,
score_func="softmax",
add_zero_attn=False,
add_residual=False,
add_gate=True,
pre_lnorm=True,
post_lnorm=False,
dropout=dropout
)
else:
self.register_parameter("fusion_layer", None)
if add_conv:
self.conv_layer = CnnHighway(
input_dim=dim,
filters=[(1, conv_filters), (2, conv_filters)],
num_highway=1,
activation=activation,
layer_norm=False
)
dim = conv_filters * 2
else:
self.register_parameter("conv_layer", None)
if add_trans:
self.pos_emb = PositionEmbedding(
input_dim=dim,
max_len=(max_num_text+max_num_context+2),
scale=INIT_EMB_STD
)
self.trans_layer = DotAttention(
query_dim=dim,
key_dim=dim,
value_dim=dim,
hidden_dim=hidden_dim,
num_heads=num_perspectives,
scale=1 / hidden_dim**0.5,
score_func="softmax",
add_zero_attn=False,
add_residual=False,
add_gate=True,
pre_lnorm=True,
post_lnorm=False,
dropout=dropout
)
else:
self.register_parameter("pos_emb", None)
self.register_parameter("trans_layer", None)
if add_gru:
self.gru_layer = nn.GRU(
input_size=dim,
hidden_size=dim//2,
num_layers=1,
bidirectional=True,
batch_first=True
)
else:
self.register_parameter("gru_layer", None)
self.fc_layer = MLP(
input_dim=dim,
hidden_dim=hidden_dim,
output_dim=num_labels,
num_mlp_layers=2,
activation="none",
norm_layer="batch_norm"
)
# init
if self.gru_layer is not None:
init_weight(self.gru_layer)
def set_finetune(self, finetune):
assert finetune in ["full", "layers", "last", "type", "none"]
for param in self.parameters():
param.requires_grad = True
self.encoder.set_finetune(finetune)
# fix word embeddings
# if isinstance(self.encoder, LSTMEncoder):
# self.encoder.word_embeddings.weight.requires_grad = False
# elif isinstance(self.encoder, XLNetEncoder):
# self.encoder.model.word_embedding.weight.requires_grad = False
# else:
# self.encoder.model.embeddings.word_embeddings.weight.requires_grad = False
# fix position embeddings
# if isinstance(self.encoder, LSTMEncoder):
# self.position_embeddings.weight.requires_grad = False
# elif isinstance(self.encoder, XLNetEncoder):
# pass
# else:
# self.encoder.model.embeddings.position_embeddings.weight.requires_grad = False
def load_pt(self, model_path, device=None):
if device is None:
device = th.device("cpu")
own_dict = self.state_dict()
state_dict = th.load(model_path, map_location=device)
try:
for name, param in state_dict.items():
if "fc_layer" in name:
print("skip the fully connected layer")
continue
if name not in own_dict:
continue
if isinstance(param, nn.Parameter):
param = param.data
if param.size() == own_dict[name].size():
own_dict[name].copy_(param)
# self.load_state_dict(state_dict, strict=False)
except BaseException as e:
print(e)
def encode(
self,
x1,
x2,
x1_mask=None,
x2_mask=None,
x1_sent_ids=None,
x2_sent_ids=None,
stance_logit=None,
disco_logit=None
):
if isinstance(self, nn.DataParallel):
encoder = self.module.encoder
else:
encoder = self.encoder
bsz, x1_len = x1.size()
x2_len = x2.size(1)
x_len = x1_len + x2_len
if x1_mask is None:
x1_mask = th.ones((bsz, x1_len), dtype=th.bool, device=x1.device)
if x2_mask is None:
x2_mask = th.ones((bsz, x2_len), dtype=th.bool, device=x2.device)
if x1_sent_ids is not None:
x1_indices = process_indices(x1_sent_ids)
else:
x1_sent_ids = th.zeros_like(x1)
x1_indices = th.tensor([x1_len], dtype=th.long, device=x1.device)
if x2_sent_ids is not None:
x2_indices = process_indices(x2_sent_ids)
else:
x2_sent_ids = th.zeros_like(x2)
x2_indices = th.tensor([x2_len], dtype=th.long, device=x2.device)
x1_type_ids, x2_type_ids = process_type_ids(x1_indices, x2_indices, method="flat")
x1_indices, x2_indices = x1_indices.tolist(), x2_indices.tolist()
x = th.cat([x1, x2], dim=1)
mask = th.cat([x1_mask, x2_mask], dim=1)
# pos_ids = th.cumsum(mask, dim=1).masked_fill(th.logical_not(mask), 0)
pos_ids = None
sent_ids = th.cat([x1_sent_ids, x2_sent_ids+x1_sent_ids[:, -1:]+1], dim=1)
type_ids = th.cat([x1_type_ids, x2_type_ids], dim=0).unsqueeze(0).expand(bsz, -1)
xs = encoder.forward(
x,
mask=mask,
sent_ids=sent_ids,
type_ids=type_ids,
pos_ids=pos_ids,
stance_logit=stance_logit,
disco_logit=disco_logit
)[0]
x1_split_sizes = [x1_indices[0]] + [x1_indices[i] - x1_indices[i-1] for i in range(1, len(x1_indices))]
x2_split_sizes = [x2_indices[0]] + [x2_indices[i] - x2_indices[i-1] for i in range(1, len(x2_indices))]
xs = th.split(xs, x1_split_sizes + x2_split_sizes, dim=1)
masks = th.split(x1_mask, x1_split_sizes, dim=1) + th.split(x2_mask, x2_split_sizes, dim=1)
return xs, masks
def forward(
self,
x1,
x2,
x1_mask=None,
x2_mask=None,
x1_sent_ids=None,
x2_sent_ids=None,
stance_logit=None,
disco_logit=None
):
if isinstance(self, nn.DataParallel):
encoder = self.module.encoder
matching_layer = self.module.matching_layer
fusion_layer = self.module.fusion_layer
conv_layer = self.module.conv_layer
pos_emb = self.module.pos_emb
trans_layer = self.module.trans_layer
gru_layer = self.module.gru_layer
fc_layer = self.module.fc_layer
drop = self.module.drop
else:
encoder = self.encoder
matching_layer = self.matching_layer
fusion_layer = self.fusion_layer
conv_layer = self.conv_layer
pos_emb = self.pos_emb
trans_layer = self.trans_layer
gru_layer = self.gru_layer
fc_layer = self.fc_layer
drop = self.drop
bsz, x1_len = x1.size()
x2_len = x2.size(1)
if x1_mask is None:
x1_mask = th.ones((bsz, x1_len), dtype=th.bool, device=x1.device)
if x2_mask is None:
x2_mask = th.ones((bsz, x2_len), dtype=th.bool, device=x2.device)
if x1_sent_ids is not None:
x1_indices = process_indices(x1_sent_ids).tolist()
else:
x1_sent_ids = th.zeros_like(x1)
x1_indices = [x1_len]
xs, masks = self.encode(
x1,
x2,
x1_mask=x1_mask,
x2_mask=x2_mask,
x1_sent_ids=x1_sent_ids,
x2_sent_ids=x2_sent_ids,
stance_logit=stance_logit,
disco_logit=disco_logit
)
if matching_layer is not None:
zeros = th.zeros((bsz, 1, matching_layer.get_output_dim()), dtype=xs[0].dtype, device=xs[0].device)
m_forwards = []
m_backwords = []
ms = []
for i in range(1, len(xs)):
if i == 1:
m_forwards.append(zeros.expand(-1, xs[0].size(1), -1))
m1, m2 = matching_layer(xs[i - 1], xs[i], masks[i - 1], masks[i])
m1, m2 = drop(th.cat(m1, dim=2)), drop(th.cat(m2, dim=2))
m_backwords.append(m1)
m_forwards.append(m2)
if i == len(xs) - 1:
m_backwords.append(zeros.expand(-1, xs[-1].size(1), -1))
for i in range(len(xs)):
ms.append(th.cat([xs[i], m_forwards[i], m_backwords[i]], dim=-1))
xs = tuple(ms)
if fusion_layer is not None:
fs = []
for i in range(len(xs)):
f = fusion_layer(xs[i], xs[i], xs[i], query_mask=masks[i], key_mask=masks[i])
f = drop(f)
fs.append(f)
xs = tuple(fs)
if conv_layer is not None:
# x1_feat = conv_layer(th.cat(xs[:len(x1_indices)], dim=1))
x2_feat = conv_layer(th.cat(xs[len(x1_indices):], dim=1))
feat = x2_feat
elif trans_layer is not None:
rep_idx = encoder.get_special_rep_idx(x2)
if rep_idx == x2_len-1:
rep_idx = -1
ts = []
tsm = []
for i in range(len(xs)):
ts.append(xs[i][:, rep_idx])
tsm.append(masks[i][:, rep_idx])
ts = th.stack(ts, dim=1)
tsm = th.stack(tsm, dim=1)
if pos_emb is not None:
pos_ids = th.arange(ts.size(1)-1, -1, -1, dtype=th.long, device=ts.device)
ts = ts + pos_emb(pos_ids).unsqueeze(0)
ts = trans_layer(ts, ts, ts, tsm, tsm)
start, end = batch_convert_mask_to_start_and_end(tsm)
if rep_idx == 0:
# x1_feat = th.gather(
# ts,
# dim=1,
# index=start.unsqueeze(1).unsqueeze(1).expand(-1, -1, ts.size(-1))
# ).squeeze(1)
x2_feat = ts[:, len(x1_indices)]
else:
# x1_feat = ts[:, len(x1_indices)-1]
x2_feat = th.gather(
ts,
dim=1,
index=end.unsqueeze(1).unsqueeze(1).expand(-1, -1, ts.size(-1))
).squeeze(1)
feat = x2_feat
elif gru_layer is not None:
rep_idx = encoder.get_special_rep_idx(x2)
if rep_idx == x2_len-1:
rep_idx = -1
g = []
gm = []
for i in range(len(xs)):
g.append(xs[i][:, rep_idx])
gm.append(masks[i][:, rep_idx])
g = th.stack(g, dim=1)
gm = th.stack(gm, dim=1)
g = gru_layer(g)[0]
start, end = batch_convert_mask_to_start_and_end(gm)
if rep_idx == 0:
# x1_feat = th.gather(
# g,
# dim=1,
# index=start.unsqueeze(1).unsqueeze(1).expand(-1, -1, g.size(-1))
# ).squeeze(1)
x2_feat = g[:, len(x1_indices)]
else:
# x1_feat = g[:, len(x1_indices)-1]
x2_feat = th.gather(
g,
dim=1,
index=end.unsqueeze(1).unsqueeze(1).expand(-1, -1, g.size(-1))
).squeeze(1)
feat = x2_feat
feat = drop(feat)
output = fc_layer(feat)
return output # unnormalized results
class IntervalModel(FlatModel):
def __init__(self, **kw):
super(IntervalModel, self).__init__(**kw)
def encode(
self,
x1,
x2,
x1_mask=None,
x2_mask=None,
x1_sent_ids=None,
x2_sent_ids=None,
stance_logit=None,
disco_logit=None
):
if isinstance(self, nn.DataParallel):
encoder = self.module.encoder
else:
encoder = self.encoder
bsz, x1_len = x1.size()
x2_len = x2.size(1)
x_len = x1_len + x2_len
if x1_mask is None:
x1_mask = th.ones((bsz, x1_len), dtype=th.bool, device=x1.device)
if x2_mask is None:
x2_mask = th.ones((bsz, x2_len), dtype=th.bool, device=x2.device)
if x1_sent_ids is not None:
x1_indices = process_indices(x1_sent_ids)
else:
x1_sent_ids = th.zeros_like(x1)
x1_indices = th.tensor([x1_len], dtype=th.long, device=x1.device)
if x2_sent_ids is not None:
x2_indices = process_indices(x2_sent_ids)
else:
x2_sent_ids = th.zeros_like(x2)
x2_indices = th.tensor([x2_len], dtype=th.long, device=x2.device)
x1_type_ids, x2_type_ids = process_type_ids(x1_indices, x2_indices, method="interval")
x1_indices, x2_indices = x1_indices.tolist(), x2_indices.tolist()
x = th.cat([x1, x2], dim=1)
mask = th.cat([x1_mask, x2_mask], dim=1)
# pos_ids = th.cumsum(mask, dim=1).masked_fill(th.logical_not(mask), 0)
pos_ids = None
sent_ids = th.cat([x1_sent_ids, x2_sent_ids+x1_sent_ids[:, -1:]+1], dim=1)
type_ids = th.cat([x1_type_ids, x2_type_ids], dim=0).unsqueeze(0).expand(bsz, -1)
xs = encoder.forward(
x,
mask=mask,
sent_ids=sent_ids,
type_ids=type_ids,
pos_ids=pos_ids,
stance_logit=stance_logit,
disco_logit=disco_logit
)[0]
x1_split_sizes = [x1_indices[0]] + [x1_indices[i] - x1_indices[i-1] for i in range(1, len(x1_indices))]
x2_split_sizes = [x2_indices[0]] + [x2_indices[i] - x2_indices[i-1] for i in range(1, len(x2_indices))]
xs = th.split(xs, x1_split_sizes + x2_split_sizes, dim=1)
masks = th.split(x1_mask, x1_split_sizes, dim=1) + th.split(x2_mask, x2_split_sizes, dim=1)
return xs, masks
class SegmentedModel(FlatModel):
def __init__(self, **kw):
super(SegmentedModel, self).__init__(**kw)
def encode(
self,
x1,
x2,
x1_mask=None,
x2_mask=None,
x1_sent_ids=None,
x2_sent_ids=None,
stance_logit=None,
disco_logit=None
):
if isinstance(self, nn.DataParallel):
encoder = self.module.encoder
else:
encoder = self.encoder
bsz, x1_len = x1.size()
x2_len = x2.size(1)
x_len = x1_len + x2_len
if x1_mask is None:
x1_mask = th.ones((bsz, x1_len), dtype=th.bool, device=x1.device)
if x2_mask is None:
x2_mask = th.ones((bsz, x2_len), dtype=th.bool, device=x2.device)
if x1_sent_ids is not None:
x1_indices = process_indices(x1_sent_ids)
else:
x1_sent_ids = th.zeros_like(x1)
x1_indices = th.tensor([x1_len], dtype=th.long, device=x1.device)
if x2_sent_ids is not None:
x2_indices = process_indices(x2_sent_ids)
else:
x2_sent_ids = th.zeros_like(x2)
x2_indices = th.tensor([x2_len], dtype=th.long, device=x2.device)
x1_type_ids, x2_type_ids = process_type_ids(x1_indices, x2_indices, method="segmented")
num_context = th.max(x1_type_ids, dim=0, keepdim=True)[0] + 1
dummy_type_ids = self.max_num_context - num_context
x1_type_ids = x1_type_ids + dummy_type_ids
x2_type_ids = x2_type_ids + dummy_type_ids
x1_indices, x2_indices = x1_indices.tolist(), x2_indices.tolist()
x = th.cat([x1, x2], dim=1)
mask = th.cat([x1_mask, x2_mask], dim=1)
# pos_ids = th.cumsum(mask, dim=1).masked_fill(th.logical_not(mask), 0)
pos_ids = None
sent_ids = th.cat([x1_sent_ids, x2_sent_ids+x1_sent_ids[:, -1:]+1], dim=1)
type_ids = th.cat([x1_type_ids, x2_type_ids], dim=0).unsqueeze(0).expand(bsz, -1)
indices = x1_indices + [x_len]
xs = []
# cls_pad = th.empty((bsz, 1), dtype=x.dtype, device=x.device).fill_(encoder.get_special_token_id(CLS))
for i in range(len(indices)):
if i == 0:
j = 0
k = indices[0]
mems = None
mmk = None
else:
j = indices[i-1]
k = indices[i]
mmk = mask[:, j-mems[0].size(1):j]
inp = x[:, j:k]
mk = mask[:, j:k]
sd = sent_ids[:, j:k]
td = type_ids[:, j:k] - type_ids[:, j:(j+1)] # zero-one
if i == len(indices) - 1:
td = td + 2
# pd = th.cumsum(mk, dim=1).masked_fill(th.logical_not(mk), 0)
pd = None
sl = stance_logit[:, j:k] if stance_logit is not None else None
dl = disco_logit[:, j:k] if disco_logit is not None else None
feat, mems = encoder.forward(
inp,
mask=mk,
sent_ids=sd,
type_ids=td,
pos_ids=pd,
mems=mems,
mems_mask=mmk,
stance_logit=sl,
disco_logit=dl
)
xs.append(feat)
x1_split_sizes = [x1_indices[0]] + [x1_indices[i] - x1_indices[i-1] for i in range(1, len(x1_indices))]
x2_split_sizes = [x2_indices[0]] + [x2_indices[i] - x2_indices[i-1] for i in range(1, len(x2_indices))]
xs = tuple(xs[:-1]) + th.split(xs[-1], x2_split_sizes, dim=1)
masks = th.split(x1_mask, x1_split_sizes, dim=1) + th.split(x2_mask, x2_split_sizes, dim=1)
return xs, masks
class ContextualizedModel(FlatModel):
def __init__(self, **kw):
assert kw.get("encoder", "roberta") != "lstm"
super(ContextualizedModel, self).__init__(**kw)
def encode(
self,
x1,
x2,
x1_mask=None,
x2_mask=None,
x1_sent_ids=None,
x2_sent_ids=None,
stance_logit=None,
disco_logit=None
):
if isinstance(self, nn.DataParallel):
encoder = self.module.encoder
else:
encoder = self.encoder
bsz, x1_len = x1.size()
x2_len = x2.size(1)
x_len = x1_len + x2_len
if x1_mask is None:
x1_mask = th.ones((bsz, x1_len), dtype=th.bool, device=x1.device)
if x2_mask is None:
x2_mask = th.ones((bsz, x2_len), dtype=th.bool, device=x2.device)
if x1_sent_ids is not None:
x1_indices = process_indices(x1_sent_ids)
else:
x1_sent_ids = th.zeros_like(x1)
x1_indices = th.tensor([x1_len], dtype=th.long, device=x1.device)
if x2_sent_ids is not None:
x2_indices = process_indices(x2_sent_ids)
else:
x2_sent_ids = th.zeros_like(x2)
x2_indices = th.tensor([x2_len], dtype=th.long, device=x2.device)
x1_type_ids, x2_type_ids = process_type_ids(x1_indices, x2_indices, method="interval")
x1_indices, x2_indices = x1_indices.tolist(), x2_indices.tolist()
x = th.cat([x1, x2], dim=1)
mask = th.cat([x1_mask, x2_mask], dim=1)
# pos_ids = th.cumsum(mask, dim=1).masked_fill(th.logical_not(mask), 0)
pos_ids = None
sent_ids = th.cat([x1_sent_ids, x2_sent_ids+x1_sent_ids[:, -1:]+1], dim=1)
type_ids = th.cat([x1_type_ids, x2_type_ids], dim=0).unsqueeze(0).expand(bsz, -1)
indices = x1_indices + [x_len]
context_mask = th.zeros((bsz, x_len, x_len), dtype=th.bool, device=x.device)
for i in range(len(indices)):
j = indices[i - 2] if i - 2 >= 0 else 0
k = indices[i + 1] if i + 1 < len(indices) else indices[i]
context_mask[:, (indices[i-1] if i - 1 >= 0 else 0):indices[i], j:k].data.copy_(mask[:, j:k].unsqueeze(1))
xs = encoder.forward(
x,
mask=context_mask,
sent_ids=sent_ids,
type_ids=type_ids,
pos_ids=pos_ids,
stance_logit=stance_logit,
disco_logit=disco_logit
)[0]
x1_split_sizes = [x1_indices[0]] + [x1_indices[i] - x1_indices[i-1] for i in range(1, len(x1_indices))]
x2_split_sizes = [x2_indices[0]] + [x2_indices[i] - x2_indices[i-1] for i in range(1, len(x2_indices))]
xs = th.split(xs, x1_split_sizes + x2_split_sizes, dim=1)
masks = th.split(x1_mask, x1_split_sizes, dim=1) + th.split(x2_mask, x2_split_sizes, dim=1)
return xs, masks
class ConcatCell(nn.Module):
def __init__(self, input_dim):
super(ConcatCell, self).__init__()
self.input_dim = input_dim
def forward(self, x1, x2):
return th.cat([x1, x2], dim=-1)
def get_output_dim(self):
return self.input_dim * 2
class GRUCell(nn.Module):
def __init__(self, input_dim):
super(GRUCell, self).__init__()
self.input_dim = input_dim
self.r_net = nn.Linear(input_dim * 2, input_dim)
self.z_net = nn.Linear(input_dim * 2, input_dim)
self.o_net = nn.Linear(input_dim * 2, input_dim)
# init
init_weight(self.r_net, activation="sigmoid", init="uniform")
init_weight(self.z_net, activation="sigmoid", init="uniform")
init_weight(self.o_net, activation="tanh", init="uniform")
def forward(self, x1, x2):
x = th.cat([x1, x2], dim=-1)
        r = th.sigmoid(self.r_net(x))
        z = th.sigmoid(self.z_net(x))
        o = th.tanh(self.o_net(th.cat([x1, r * x2], dim=-1)))
return (1 - z) * x1 + z * o
def get_output_dim(self):
return self.input_dim
class HighwayCell(nn.Module):
def __init__(self, input_dim):
super(HighwayCell, self).__init__()
self.input_dim = input_dim
self.highway = Highway(input_dim * 2, activation="tanh")
self.z_net = nn.Linear(input_dim * 2, input_dim)
# init
init_weight(self.z_net, activation="sigmoid", init="uniform")
def forward(self, x1, x2):
size = x1.size()
dim = x1.size(-1)
x = th.cat([x1, x2], dim=-1)
o = self.highway(x)
x = x.view(-1, 2, dim)
o = o.view(-1, 2, dim)
z = self.z_net(th.cat([x * o, th.abs(x - o)], dim=2))
z = F.softmax(z, dim=1)
o = th.sum(z * o, dim=1)
o = o.view(size)
return o
def get_output_dim(self):
return self.input_dim
class AttnCell(nn.Module):
def __init__(self, input_dim, hidden_dim, num_heads=1):
super(AttnCell, self).__init__()
self.input_dim = input_dim
self.attn = DotAttention(
query_dim=input_dim,
key_dim=input_dim,
value_dim=input_dim,
hidden_dim=hidden_dim,
num_heads=num_heads,
scale=1/input_dim**0.5,
score_func="softmax",
add_zero_attn=False,
add_residual=False,
add_gate=True,
pre_lnorm=True,
post_lnorm=False
)
def forward(self, x1, x2, mask=None):
o = self.attn(x2, x1, x1, mask, mask)
return o
def get_output_dim(self):
return self.input_dim
class DisCOCModel(FlatModel):
def __init__(self, **kw):
super(DisCOCModel, self).__init__(**kw)
num_perspectives = kw.get("num_perspectives", 8)
hidden_dim = kw.get("hidden_dim", 128)
encoder_dim = self.encoder.get_output_dim()
# self.cell = ConcatCell(encoder_dim)
# self.cell = GRUCell(encoder_dim)
# self.cell = HighwayCell(encoder_dim)
self.cell = AttnCell(encoder_dim, hidden_dim, num_perspectives)
if self.cell.get_output_dim() != encoder_dim:
if self.matching_layer is not None:
del self.matching_layer
if self.fusion_layer is not None:
del self.fusion_layer
if self.conv_layer is not None:
del self.conv_layer
if self.trans_layer is not None:
del self.trans_layer
self.create_layers(self.encoder.get_output_dim())
def encode(
self,
x1,
x2,
x1_mask=None,
x2_mask=None,
x1_sent_ids=None,
x2_sent_ids=None,
stance_logit=None,
disco_logit=None
):
if isinstance(self, nn.DataParallel):
encoder = self.module.encoder
cell = self.module.cell
else:
encoder = self.encoder
cell = self.cell
bsz, x1_len = x1.size()
x2_len = x2.size(1)
x_len = x1_len + x2_len
if x1_mask is None:
x1_mask = th.ones((bsz, x1_len), dtype=th.bool, device=x1.device)
if x2_mask is None:
x2_mask = th.ones((bsz, x2_len), dtype=th.bool, device=x2.device)
if x1_sent_ids is not None:
x1_indices = process_indices(x1_sent_ids)
else:
x1_sent_ids = th.zeros_like(x1)
x1_indices = th.tensor([x1_len], dtype=th.long, device=x1.device)
if x2_sent_ids is not None:
x2_indices = process_indices(x2_sent_ids)
else:
x2_sent_ids = th.zeros_like(x2)
x2_indices = th.tensor([x2_len], dtype=th.long, device=x2.device)
x1_type_ids, x2_type_ids = process_type_ids(x1_indices, x2_indices, method="segmented")
num_context = th.max(x1_type_ids, dim=0, keepdim=True)[0] + 1
dummy_type_ids = self.max_num_context - num_context
x1_type_ids = x1_type_ids + dummy_type_ids
x2_type_ids = x2_type_ids + dummy_type_ids
x1_indices, x2_indices = x1_indices.tolist(), x2_indices.tolist()
x = th.cat([x1, x2], dim=1)
mask = th.cat([x1_mask, x2_mask], dim=1)
sent_ids = th.cat([x1_sent_ids, x2_sent_ids+x1_sent_ids[:, -1:]+1], dim=1)
type_ids = th.cat([x1_type_ids, x2_type_ids], dim=0).unsqueeze(0).expand(bsz, -1)
indices = x1_indices + [x_len]
x_forwards = []
x_backwards = []
for i in range(len(indices)):
if i == 0:
j = 0
k = indices[0]
sd = th.zeros((bsz, k), dtype=th.long, device=x.device)
td = th.ones((bsz, k), dtype=th.long, device=x.device)
else:
if i == 1:
j = 0
else:
j = indices[i - 2]
k = indices[i]
sd = sent_ids[:, j:k]
td = type_ids[:, j:k] - type_ids[:, j:(j+1)] # zero-one
inp = x[:, j:k]
mk = mask[:, j:k]
# pd = th.cumsum(mk, dim=1).masked_fill(th.logical_not(mk), 0)
pd = None
if stance_logit is None or i == 0:
sl = None
else:
dummy_sl = th.zeros(
(bsz, indices[i-1]-j, stance_logit.size(-1)),
device=stance_logit.device,
dtype=stance_logit.dtype
)
dummy_sl[:, :, 0].fill_(INF)
sl = th.cat([dummy_sl, stance_logit[:, indices[i-1]:k]], dim=1)
# sl = stance_logit[:, j:k]
if disco_logit is None or i == 0:
dl = None
else:
dummy_dl = th.zeros(
(bsz, indices[i-1]-j, disco_logit.size(-1)),
device=disco_logit.device,
dtype=disco_logit.dtype
)
dummy_dl[:, :, 0].fill_(INF)
dl = th.cat([dummy_dl, disco_logit[:, indices[i-1]:k]], dim=1)
# dl = disco_logit[:, j:k]
feat = encoder.forward(
inp,
mask=mk,
sent_ids=sd,
type_ids=td,
pos_ids=pd,
stance_logit=sl,
disco_logit=dl
)[0]
if i == 0:
x_forwards.append(feat)
else:
x_backwards.append(feat[:, :indices[i-1]-j])
x_forwards.append(feat[:, indices[i-1]-j:])
if i == len(indices) - 1: # text
j = indices[i-1] if i > 0 else 0
k = indices[i]
inp = x[:, j:k]
mk = mask[:, j:k]
sd = sent_ids[:, j:k]
td = type_ids[:, j:k] - type_ids[:, j:(j+1)] + 2
# pd = th.cumsum(mk, dim=1).masked_fill(th.logical_not(mk), 0)
pd = None
if stance_logit is None:
sl = None
else:
dummy_sl = th.zeros(
(bsz, indices[i-1]-j, stance_logit.size(-1)),
device=stance_logit.device,
dtype=stance_logit.dtype
)
dummy_sl[:, :, 0].fill_(INF)
sl = th.cat([dummy_sl, stance_logit[:, indices[i-1]:k]], dim=1)
# sl = stance_logit[:, j:k]
if disco_logit is None:
dl = None
else:
dummy_dl = th.zeros(
(bsz, indices[i-1]-j, disco_logit.size(-1)),
device=disco_logit.device,
dtype=disco_logit.dtype
)
dummy_dl[:, :, 0].fill_(INF)
dl = th.cat([dummy_dl, disco_logit[:, indices[i-1]:k]], dim=1)
# dl = disco_logit[:, j:k]
feat = encoder.forward(
inp,
mask=mk,
sent_ids=sd,
type_ids=td,
pos_ids=pd,
stance_logit=sl,
disco_logit=dl
)[0]
x_backwards.append(feat)
xs = []
if isinstance(cell, AttnCell):
for i in range(len(indices)):
j = indices[i-1] if i > 0 else 0
k = indices[i]
xs.append(cell(x_forwards[i], x_backwards[i], mask[:, j:k]))
else:
for i in range(len(indices)):
xs.append(cell(x_forwards[i], x_backwards[i]))
x1_split_sizes = [x1_indices[0]] + [x1_indices[i] - x1_indices[i-1] for i in range(1, len(x1_indices))]
x2_split_sizes = [x2_indices[0]] + [x2_indices[i] - x2_indices[i-1] for i in range(1, len(x2_indices))]
xs = tuple(xs[:-1]) + th.split(xs[-1], x2_split_sizes, dim=1)
masks = th.split(x1_mask, x1_split_sizes, dim=1) + th.split(x2_mask, x2_split_sizes, dim=1)
return xs, masks
# def encode(
# self,
# x1,
# x2,
# x1_mask=None,
# x2_mask=None,
# x1_sent_ids=None,
# x2_sent_ids=None,
# stance_logit=None,
# disco_logit=None
# ):
# if isinstance(self, nn.DataParallel):
# encoder = self.module.encoder
# cell = self.module.cell
# else:
# encoder = self.encoder
# cell = self.cell
# bsz, x1_len = x1.size()
# x2_len = x2.size(1)
# x_len = x1_len + x2_len
# if x1_mask is None:
# x1_mask = th.ones((bsz, x1_len), dtype=th.bool, device=x1.device)
# if x2_mask is None:
# x2_mask = th.ones((bsz, x2_len), dtype=th.bool, device=x2.device)
# if x1_sent_ids is not None:
# x1_indices = process_indices(x1_sent_ids)
# else:
# x1_sent_ids = th.zeros_like(x1)
# x1_indices = th.tensor([x1_len], dtype=th.long, device=x1.device)
# if x2_sent_ids is not None:
# x2_indices = process_indices(x2_sent_ids)
# else:
# x2_sent_ids = th.zeros_like(x2)
# x2_indices = th.tensor([x2_len], dtype=th.long, device=x2.device)
# x1_type_ids, x2_type_ids = process_type_ids(x1_indices, x2_indices, method="segmented")
# num_context = th.max(x1_type_ids, dim=0, keepdim=True)[0] + 1
# dummy_type_ids = self.max_num_context - num_context
# x1_type_ids = x1_type_ids + dummy_type_ids
# x2_type_ids = x2_type_ids + dummy_type_ids
# x1_indices, x2_indices = x1_indices.tolist(), x2_indices.tolist()
# x = th.cat([x1, x2], dim=1)
# mask = th.cat([x1_mask, x2_mask], dim=1)
# # pos_ids = th.cumsum(mask, dim=1).masked_fill(th.logical_not(mask), 0)
# pos_ids = None
# sent_ids = th.cat([x1_sent_ids, x2_sent_ids+x1_sent_ids[:, -1:]+1], dim=1)
# type_ids = th.cat([x1_type_ids, x2_type_ids], dim=0).unsqueeze(0).expand(bsz, -1)
# indices = x1_indices + [x_len]
# dummy_ids = th.ones_like(sent_ids)
# clamped_ids = th.clamp(sent_ids, max=len(x1_indices)) # regard all x2 as a whole
# even_sent_ids = th.bitwise_or(clamped_ids, dummy_ids)
# even_mask = even_sent_ids.unsqueeze(1) == even_sent_ids.unsqueeze(2)
# odd_sent_ids = th.bitwise_or(clamped_ids + 1, dummy_ids)
# odd_mask = odd_sent_ids.unsqueeze(1) == odd_sent_ids.unsqueeze(2)
# even_mask.masked_fill_((mask == 0).unsqueeze(-1), 0)
# odd_mask.masked_fill_((mask == 0).unsqueeze(-1), 0)
# if len(indices) % 2 == 1:
# even_type_ids = th.cat(
# [
# type_ids[:, :x1_len] % 2,
# type_ids[:, x1_len:] - type_ids[:, x1_len:(x1_len+1)] + 2
# ],
# dim=1
# )
# odd_type_ids = (type_ids + 1) % 2
# else:
# even_type_ids = type_ids % 2
# odd_type_ids = th.cat(
# [
# (type_ids[:, :x1_len] + 1) % 2,
# type_ids[:, x1_len:] - type_ids[:, x1_len:(x1_len+1)] + 2
# ],
# dim=1
# )
# even_x = encoder.forward(
# x,
# mask=even_mask,
# sent_ids=sent_ids,
# type_ids=even_type_ids,
# pos_ids=pos_ids,
# stance_logit=stance_logit,
# disco_logit=disco_logit
# )[0]
# odd_x = encoder.forward(
# x,
# mask=odd_mask,
# sent_ids=sent_ids,
# type_ids=odd_type_ids,
# pos_ids=pos_ids,
# stance_logit=stance_logit,
# disco_logit=disco_logit
# )[0]
# x1_split_sizes = [x1_indices[0]] + [x1_indices[i] - x1_indices[i-1] for i in range(1, len(x1_indices))]
# x2_split_sizes = [x2_indices[0]] + [x2_indices[i] - x2_indices[i-1] for i in range(1, len(x2_indices))]
# even_xs = th.split(
# even_x,
# x1_split_sizes + x2_split_sizes,
# dim=1
# )
# odd_xs = th.split(
# odd_x,
# x1_split_sizes + x2_split_sizes,
# dim=1
# )
# masks = th.split(x1_mask, x1_split_sizes, dim=1) + th.split(x2_mask, x2_split_sizes, dim=1)
# xs = []
# if isinstance(cell, AttnCell):
# for i in range(len(even_xs)):
# if i % 2 == 0:
# xs.append(cell(odd_xs[i], even_xs[i], masks[i]))
# else:
# xs.append(cell(even_xs[i], odd_xs[i]))
# else:
# for i in range(len(even_xs)):
# if i % 2 == 0:
# xs.append(cell(odd_xs[i], even_xs[i]))
# else:
# xs.append(cell(even_xs[i], odd_xs[i]))
# return xs, masks
class HAN(nn.Module):
def __init__(self, **kw):
super(HAN, self).__init__()
max_num_text = kw.get("max_num_text", 1)
max_num_context = kw.get("max_num_context", 1)
encoder = kw.get("encoder", "roberta")
hidden_dim = kw.get("hidden_dim", 128)
num_perspectives = kw.get("num_perspectives", 8)
num_labels = kw.get("num_labels", 3)
dropout = kw.get("dropout", 0.0)
self.max_num_context = max_num_context
self.max_num_text = max_num_text
self.drop = nn.Dropout(dropout, inplace=False)
        if encoder in ("bert", "albert", "roberta", "xlnet"):
            # the four transformer variants share the same attention head; only the encoder class differs
            encoder_cls = {
                "bert": BertEncoder,
                "albert": AlbertEncoder,
                "roberta": RobertaEncoder,
                "xlnet": XLNetEncoder
            }[encoder]
            self.encoder = encoder_cls(num_segments=max_num_text+max_num_context+2, **kw)
            dim = self.encoder.get_output_dim()
            self.word_linear = nn.Linear(dim, dim)
            self.word_attn_vec = nn.Parameter(th.Tensor(dim))
            self.sent_encoder = TransformerLayer(
                input_dim=dim,
                hidden_dim=hidden_dim,
                num_heads=num_perspectives,
                add_residual=True,
                add_gate=False,
                pre_lnorm=True,
                post_lnorm=False,
                dropout=0.0
            )
            self.sent_linear = nn.Linear(dim, dim)
            self.sent_attn_vec = nn.Parameter(th.Tensor(dim))
elif encoder == "lstm":
self.encoder = LSTMEncoder(num_segments=max_num_text+max_num_context+2, **kw)
dim = self.encoder.get_output_dim()
self.word_linear = nn.Linear(dim, dim)
self.word_attn_vec = nn.Parameter(th.Tensor(dim))
self.sent_encoder = nn.LSTM(
input_size=dim,
hidden_size=dim//2,
num_layers=1,
bidirectional=True,
batch_first=True
)
self.sent_linear = nn.Linear(dim, dim)
self.sent_attn_vec = nn.Parameter(th.Tensor(dim))
else:
raise NotImplementedError("Error: encoder=%s is not supported now." % (encoder))
self.fc_layer = MLP(
input_dim=dim,
hidden_dim=hidden_dim,
output_dim=num_labels,
num_mlp_layers=2,
activation="none",
norm_layer="batch_norm"
)
# init
init_weight(self.word_attn_vec, init="uniform")
init_weight(self.word_linear, init="uniform")
init_weight(self.sent_attn_vec, init="uniform")
init_weight(self.sent_linear, init="uniform")
def set_finetune(self, finetune):
assert finetune in ["full", "layers", "last", "type", "none"]
for param in self.parameters():
param.requires_grad = True
self.encoder.set_finetune(finetune)
def forward(
self,
x1,
x2,
x1_mask=None,
x2_mask=None,
x1_sent_ids=None,
x2_sent_ids=None,
stance_logit=None,
disco_logit=None
):
if isinstance(self, nn.DataParallel):
encoder = self.module.encoder
word_attn_vec = self.module.word_attn_vec
word_linear = self.module.word_linear
sent_encoder = self.module.sent_encoder
sent_attn_vec = self.module.sent_attn_vec
sent_linear = self.module.sent_linear
fc_layer = self.module.fc_layer
drop = self.module.drop
else:
encoder = self.encoder
word_attn_vec = self.word_attn_vec
word_linear = self.word_linear
sent_encoder = self.sent_encoder
sent_attn_vec = self.sent_attn_vec
sent_linear = self.sent_linear
fc_layer = self.fc_layer
drop = self.drop
bsz, x1_len = x1.size()
x2_len = x2.size(1)
x_len = x1_len + x2_len
if x1_mask is None:
x1_mask = th.ones((bsz, x1_len), dtype=th.bool, device=x1.device)
if x2_mask is None:
x2_mask = th.ones((bsz, x2_len), dtype=th.bool, device=x2.device)
if x1_sent_ids is not None:
x1_indices = process_indices(x1_sent_ids)
else:
x1_sent_ids = th.zeros_like(x1)
x1_indices = th.tensor([x1_len], dtype=th.long, device=x1.device)
if x2_sent_ids is not None:
x2_indices = process_indices(x2_sent_ids)
else:
x2_sent_ids = th.zeros_like(x2)
x2_indices = th.tensor([x2_len], dtype=th.long, device=x2.device)
x1_type_ids, x2_type_ids = process_type_ids(x1_indices, x2_indices, method="segmented")
num_context = th.max(x1_type_ids, dim=0, keepdim=True)[0] + 1
dummy_type_ids = self.max_num_context - num_context
x1_type_ids = x1_type_ids + dummy_type_ids
x2_type_ids = x2_type_ids + dummy_type_ids
x1_indices, x2_indices = x1_indices.tolist(), x2_indices.tolist()
x = th.cat([x1, x2], dim=1)
mask = th.cat([x1_mask, x2_mask], dim=1)
sent_ids = th.cat([x1_sent_ids, x2_sent_ids+x1_sent_ids[:, -1:]+1], dim=1)
type_ids = th.cat([x1_type_ids, x2_type_ids], dim=0).unsqueeze(0).expand(bsz, -1)
indices = x1_indices + [x_len]
sent_feats = []
for i in range(len(indices)):
if i == 0:
j = 0
k = indices[0]
else:
j = indices[i-1]
k = indices[i]
sd = th.zeros((bsz, k-j), dtype=th.long, device=x.device)
td = th.zeros((bsz, k-j), dtype=th.long, device=x.device)
inp = x[:, j:k]
mk = mask[:, j:k]
# pd = th.cumsum(mk, dim=1).masked_fill(th.logical_not(mk), 0)
pd = None
if stance_logit is None or i == 0:
sl = None
else:
sl = stance_logit[:, j:k]
if disco_logit is None or i == 0:
dl = None
else:
dl = disco_logit[:, j:k]
word_feat = encoder.forward(
inp,
mask=mk,
sent_ids=sd,
type_ids=td,
pos_ids=pd,
stance_logit=sl,
disco_logit=dl
)[0]
attn_score = th.einsum(
"bid,bjd->bij",
(
word_linear(word_feat),
word_attn_vec.view(1, 1, -1).expand(bsz, -1, -1)
)
)
attn_score.masked_fill_((mk == 0).unsqueeze(-1), _INF)
attn_score = F.softmax(attn_score, dim=1)
sent_feats.append(th.sum(word_feat * attn_score, dim=1))
sent_feat = th.stack(sent_feats, dim=1)
sent_feat = sent_encoder(sent_feat)
if isinstance(sent_feat, tuple):
sent_feat = sent_feat[0]
attn_score = th.einsum(
"bid,bjd->bij",
(
sent_linear(sent_feat),
sent_attn_vec.view(1, 1, -1).expand(bsz, -1, -1)
)
)
attn_score = F.softmax(attn_score, dim=1)
feat = th.sum(sent_feat * attn_score, dim=1)
feat = drop(feat)
output = fc_layer(feat)
        return output
# node-embeddings/rdf2vec/walkers/__init__.py (kant/Multilingual-RDF-Verbalizer, MIT)
from .walker import *
from .random import *
from .weisfeiler_lehman import *
# app/juhannus/tests/test_views.py (T-101/juhannus, MIT)
import datetime
from unittest import mock
from django.contrib.auth import get_user_model
from django.test import TestCase, Client
from django.urls import reverse
from django.utils import timezone
from juhannus.models import Event, Participant, get_midsummer_saturday
from juhannus.forms import SubmitForm
class ViewsTests(TestCase):
fixtures = ['juhannus/tests/juhannus.json']
def setUp(self):
self.admin_user = get_user_model().objects.create_superuser(username="user", email="user@example.com",
password="test")
def tearDown(self):
del self.admin_user
    def test_empty_db(self):
        client = Client()
        endpoint = reverse("juhannus:event-latest")
        self.assertEqual(endpoint, "/")
        # Empty the table directly; the original mock.patch(..., new=Event.objects.all().delete())
        # evaluated the delete() eagerly anyway, so the patch itself did nothing useful.
        Event.objects.all().delete()
        response = client.get(endpoint)
        self.assertEqual(response.content, b"No event in db")
        self.assertEqual(response.status_code, 200)
def test_new_event_creation_prior_midsummer_week(self):
client = Client()
week_before_midsummer = get_midsummer_saturday(2019) - datetime.timedelta(days=7)
event_count = Event.objects.count()
with mock.patch('juhannus.models.timezone.now', return_value=week_before_midsummer):
response = client.get(reverse("juhannus:event-latest"))
self.assertEqual(response.status_code, 200)
self.assertEqual(Event.objects.count(), event_count)
def test_new_event_creation_during_midsummer_week(self):
client = Client()
two_days_before_midsummer = get_midsummer_saturday(2019) - datetime.timedelta(days=2)
event_count = Event.objects.count()
        with mock.patch('juhannus.models.timezone.now', return_value=two_days_before_midsummer):
            response = client.get(reverse("juhannus:event-latest"))
            self.assertEqual(response.status_code, 200)
            self.assertEqual(Event.objects.count(), event_count + 1)

    def test_new_event_creation_by_seconds(self):
        client = Client()
        sat = get_midsummer_saturday(2019)
        sun_evening = sat.replace(hour=23, minute=59, second=59) - datetime.timedelta(days=6)
        event_count = Event.objects.count()
        with mock.patch('juhannus.models.timezone.now', return_value=sun_evening):
            response = client.get(reverse("juhannus:event-latest"))
            self.assertEqual(response.status_code, 200)
            self.assertEqual(Event.objects.count(), event_count)
        with mock.patch('juhannus.models.timezone.now', return_value=sun_evening + datetime.timedelta(seconds=2)):
            response = client.get(reverse("juhannus:event-latest"))
            self.assertEqual(response.status_code, 200)
            self.assertEqual(Event.objects.count(), event_count + 1)

    def test_endpoints(self):
        client = Client()
        response = client.get(reverse("juhannus:event-latest"))
        self.assertEqual(response.status_code, 200)
        response = client.get("asdf")
        self.assertEqual(response.status_code, 404)

    def test_post_insert_record_past_deadline(self):
        client = Client()
        original_count = Participant.objects.count()
        form = SubmitForm(data={"name": "abc", "vote": 6, "event": 1})
        endpoint = reverse("juhannus:event-latest")
        response = client.post(endpoint, {**form.data, **{"action": "save"}})
        self.assertEqual(response.status_code, 302)
        self.assertEqual(Participant.objects.count(), original_count)

    def test_post_insert_record_prior_deadline(self):
        client = Client()
        original_count = Participant.objects.count()
        form = SubmitForm(data={"name": "abc", "vote": 6, "event": 1})
        endpoint = reverse("juhannus:event-latest")
        now_in_past = timezone.now().replace(year=Event.objects.first().year - 1, month=1, day=1)
        with mock.patch('juhannus.models.timezone.now', return_value=now_in_past):
            response = client.post(endpoint, {**form.data, **{"action": "save"}})
            self.assertEqual(response.status_code, 302)
            self.assertGreater(Participant.objects.count(), original_count)

    def test_post_insert_record_past_deadline_superuser(self):
        client = Client()
        client.login(username='user', password='test')
        original_count = Participant.objects.count()
        form = SubmitForm(data={"name": "abc", "vote": 6, "event": 1})
        endpoint = reverse("juhannus:event-latest")
        response = client.post(endpoint, {**form.data, **{"action": "save"}})
        self.assertEqual(response.status_code, 302)
        self.assertGreater(Participant.objects.count(), original_count)

    def test_post_modify_record_normal_user(self):
        original_count = Participant.objects.count()
        client = Client()
        client.login(username='user', password='test')
        form = SubmitForm(data={"name": "abc", "vote": 6, "event": 1})
        endpoint = reverse("juhannus:event-latest")
        response = client.post(endpoint, {**form.data, **{"action": "save"}})
        self.assertEqual(response.status_code, 302)
        self.assertGreater(Participant.objects.count(), original_count)
        client = Client()
        form.data["name"] = "abcd"
        response = client.post(endpoint, {**form.data, **{"action": "modify", "pk": 2}})
        self.assertEqual(response.status_code, 302)
        self.assertEqual(Participant.objects.last().name, "abc")

    def test_post_modify_record_superuser(self):
        original_count = Participant.objects.count()
        client = Client()
        client.login(username='user', password='test')
        form = SubmitForm(data={"name": "abc", "vote": 6, "event": 1})
        endpoint = reverse("juhannus:event-latest")
        response = client.post(endpoint, {**form.data, **{"action": "save"}})
        self.assertEqual(response.status_code, 302)
        self.assertGreater(Participant.objects.count(), original_count)
        form.data["name"] = "abcd"
        response = client.post(endpoint, {**form.data, **{"action": "modify", "pk": 2}})
        self.assertEqual(response.status_code, 302)
        self.assertEqual(Participant.objects.last().name, "abcd")

    def test_post_delete_record_normal_user(self):
        client = Client()
        client.login(username='user', password='test')
        form = SubmitForm(data={"name": "abc", "vote": 6, "event": 1})
        endpoint = reverse("juhannus:event-latest")
        response = client.post(endpoint, {**form.data, **{"action": "save"}})
        self.assertEqual(response.status_code, 302)
        new_count = Participant.objects.count()
        client = Client()
        response = client.post(endpoint, {**form.data, **{"action": "delete", "pk": 2}})
        self.assertEqual(response.status_code, 302)
        self.assertEqual(Participant.objects.count(), new_count)

    def test_post_delete_record_superuser(self):
        original_count = Participant.objects.count()
        client = Client()
        client.login(username='user', password='test')
        form = SubmitForm(data={"name": "abc", "vote": 6, "event": 1})
        endpoint = reverse("juhannus:event-latest")
        response = client.post(endpoint, {**form.data, **{"action": "save"}})
        self.assertEqual(response.status_code, 302)
        self.assertGreater(Participant.objects.count(), original_count)
        response = client.post(endpoint, {**form.data, **{"action": "delete", "pk": 2}})
        self.assertEqual(response.status_code, 302)
        self.assertEqual(Participant.objects.count(), original_count)
| 49.774194 | 114 | 0.656902 | 879 | 7,715 | 5.617747 | 0.136519 | 0.085055 | 0.088497 | 0.105711 | 0.808019 | 0.749089 | 0.725395 | 0.717497 | 0.717497 | 0.672134 | 0 | 0.016095 | 0.202722 | 7,715 | 154 | 115 | 50.097403 | 0.786701 | 0 | 0 | 0.639706 | 0 | 0 | 0.10499 | 0.059883 | 0 | 0 | 0 | 0 | 0.25 | 1 | 0.102941 | false | 0.044118 | 0.058824 | 0 | 0.176471 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
885b27689689e8860a7a857fbe9a8193b64e1b37 | 45 | py | Python | mlpipeline_analyzer/suggest/__init__.py | JasmineBhalla17/ml-pipeline-analyzer | 9beb94925b77ba4d50007d8f6fcde05d086bb361 | [
"MIT"
] | 5 | 2022-02-14T19:27:33.000Z | 2022-03-29T01:38:45.000Z | mlpipeline_analyzer/suggest/__init__.py | JasmineBhalla17/ml-pipeline-analyzer | 9beb94925b77ba4d50007d8f6fcde05d086bb361 | [
"MIT"
] | null | null | null | mlpipeline_analyzer/suggest/__init__.py | JasmineBhalla17/ml-pipeline-analyzer | 9beb94925b77ba4d50007d8f6fcde05d086bb361 | [
"MIT"
] | 3 | 2022-02-19T20:05:52.000Z | 2022-03-08T09:31:36.000Z | from .PipelineSuggest import PipelineSuggest
| 22.5 | 44 | 0.888889 | 4 | 45 | 10 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088889 | 45 | 1 | 45 | 45 | 0.97561 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
88adeec6142bb4cf6b376f7104a3139ed5f26776 | 1,612 | py | Python | test/test_modif_group.py | Jenerishka/python_training | 7e8d080b3c2fa6f271097b548247e30ffc04d532 | [
"Apache-2.0"
] | null | null | null | test/test_modif_group.py | Jenerishka/python_training | 7e8d080b3c2fa6f271097b548247e30ffc04d532 | [
"Apache-2.0"
] | null | null | null | test/test_modif_group.py | Jenerishka/python_training | 7e8d080b3c2fa6f271097b548247e30ffc04d532 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
from model.group import Group
def test_modification_first_group(app):
    if app.group.count() == 0:
        app.group.create(Group(name="Group for test modify group"))
    old_groups = app.group.get_group_list()
    group = Group(name="344fdf", header="dfdf",
                  footer="sfdfdfew")
    group.id = old_groups[0].id
    app.group.modif_first_group(group)
    new_groups = app.group.get_group_list()
    assert len(old_groups) == len(new_groups)
    old_groups[0] = group
    assert sorted(old_groups, key=Group.id_or_max) == sorted(new_groups,
                                                             key=Group.id_or_max)


def test_modification_first_group_name(app):
    if app.group.count() == 0:
        app.group.create(Group(name="Group for test modify group2"))
    old_groups = app.group.get_group_list()
    group = Group(name="New name")
    group.id = old_groups[0].id
    app.group.modif_first_group(group)
    new_groups = app.group.get_group_list()
    assert len(old_groups) == len(new_groups)
    old_groups[0] = group
    assert sorted(old_groups, key=Group.id_or_max) == sorted(new_groups,
                                                             key=Group.id_or_max)

# def test_modification_first_group_header(app):
#     if app.group.count() == 0:
#         app.group.create(Group(name="Group for test modify group3"))
#     old_groups = app.group.get_group_list()
#     app.group.modif_first_group(Group(header="New header"))
#     new_groups = app.group.get_group_list()
#     assert len(old_groups) == len(new_groups)
| 38.380952 | 81 | 0.629032 | 224 | 1,612 | 4.272321 | 0.174107 | 0.125392 | 0.087774 | 0.106583 | 0.880878 | 0.850575 | 0.821317 | 0.791014 | 0.791014 | 0.791014 | 0 | 0.010717 | 0.247519 | 1,612 | 41 | 82 | 39.317073 | 0.778236 | 0.207196 | 0 | 0.692308 | 0 | 0 | 0.06388 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 1 | 0.076923 | false | 0 | 0.038462 | 0 | 0.115385 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ee095a6ec9d8f019949a362113e65898600b3a6b | 193 | py | Python | run_bot.py | mu22le/vaccine-progress-bot | 984a87f0deac50405d6bbc39a85bdb731b5770e9 | [
"MIT"
] | null | null | null | run_bot.py | mu22le/vaccine-progress-bot | 984a87f0deac50405d6bbc39a85bdb731b5770e9 | [
"MIT"
] | null | null | null | run_bot.py | mu22le/vaccine-progress-bot | 984a87f0deac50405d6bbc39a85bdb731b5770e9 | [
"MIT"
] | null | null | null | # from tweetbot.tweetbot import VaxTweetBot
from tweetbot.tweetbot_it import VaxTweetBotIt
DRY_RUN = False
# VaxTweetBot(dry_run=DRY_RUN).runAll()
VaxTweetBotIt(dry_run=DRY_RUN).runAll()
| 24.125 | 46 | 0.803109 | 26 | 193 | 5.730769 | 0.384615 | 0.201342 | 0.268456 | 0.161074 | 0.241611 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108808 | 193 | 7 | 47 | 27.571429 | 0.866279 | 0.419689 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
ee15131562eedfd7ad2c9a8c5eae154c8ca4e67e | 49 | py | Python | kestrel/__init__.py | Ometria/pykestrel | e3441e9529cce7b383acc3d889bab38b68610645 | [
"BSD-3-Clause"
] | null | null | null | kestrel/__init__.py | Ometria/pykestrel | e3441e9529cce7b383acc3d889bab38b68610645 | [
"BSD-3-Clause"
] | null | null | null | kestrel/__init__.py | Ometria/pykestrel | e3441e9529cce7b383acc3d889bab38b68610645 | [
"BSD-3-Clause"
] | null | null | null | """Kestrel Client"""
from .client import Client
| 12.25 | 26 | 0.714286 | 6 | 49 | 5.833333 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 49 | 3 | 27 | 16.333333 | 0.833333 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ee75326cf0ee5362d53226d17911afe71e4802fd | 2,415 | py | Python | tests/template_prefixes/loader_test.py | jwminton/voila | b003a7fc62023e5b4c8dab7dd64b94a920610c15 | [
"BSD-3-Clause"
] | 2,977 | 2019-09-27T04:51:38.000Z | 2022-03-31T12:02:41.000Z | tests/template_prefixes/loader_test.py | sthagen/voila-dashboards-voila | 7613fbb95f39a93f874ea57a8ab4a31140ace394 | [
"BSD-3-Clause"
] | 735 | 2019-09-27T08:02:34.000Z | 2022-03-31T19:58:01.000Z | tests/template_prefixes/loader_test.py | sthagen/voila-dashboards-voila | 7613fbb95f39a93f874ea57a8ab4a31140ace394 | [
"BSD-3-Clause"
] | 335 | 2019-10-06T05:23:29.000Z | 2022-03-23T21:35:00.000Z | """Tests loading template of jinja2 templates"""
import os
from jinja2 import Environment, FileSystemLoader
from voila.paths import collect_paths
HERE = os.path.dirname(__file__)
ROOT_DIRS = [os.path.join(HERE, 'user'), os.path.join(HERE, 'system')]
def test_loader_default_nbconvert():
    paths = collect_paths(['nbconvert'], 'default', root_dirs=ROOT_DIRS)
    loader = FileSystemLoader(paths)
    env = Environment(loader=loader)
    template = env.get_template('index.tpl')
    output = template.render()
    assert 'this is block base:nested in nbconvert/default/index.tpl' in output


def test_loader_foo():
    paths = collect_paths(['voila', 'nbconvert'], 'foo', root_dirs=ROOT_DIRS)
    loader = FileSystemLoader(paths)
    env = Environment(loader=loader)
    template = env.get_template('index.tpl')
    output = template.render()
    assert 'this is block base:nested in voila/default/index.tpl' in output
    assert 'this is block base:nested in voila/foo/index.tpl' in output
    assert 'this is block base:nested in nbconvert/foo/index.tpl' in output
    assert 'this is block base:nested in nbconvert/default/index.tpl' not in output


def test_loader_bar_voila():
    paths = collect_paths(['voila', 'nbconvert'], 'bar', root_dirs=ROOT_DIRS)
    loader = FileSystemLoader(paths)
    env = Environment(loader=loader)
    template = env.get_template('index.tpl')
    output = template.render()
    assert 'this is block base in nbconvert/bar/index.tpl' in output
    assert 'this is block base in nbconvert/default/index.tpl' in output
    assert 'this is block base:nested in voila/default/index.tpl' in output
    assert 'this is block base:nested2 in nbconvert/default/index.tpl' in output
    assert 'this is block common in nbconvert/bar/parent.tpl' in output


def test_loader_bar_nbconvert():
    paths = collect_paths(['nbconvert'], 'bar', root_dirs=ROOT_DIRS)
    loader = FileSystemLoader(paths)
    env = Environment(loader=loader)
    template = env.get_template('index.tpl')
    output = template.render()
    assert 'this is block base in nbconvert/bar/index.tpl' in output
    assert 'this is block base in nbconvert/default/index.tpl' in output
    assert 'this is block base:nested in nbconvert/default/index.tpl' in output
    assert 'this is block base:nested2 in nbconvert/default/index.tpl' in output
    assert 'this is block common in nbconvert/bar/parent.tpl' in output
| 42.368421 | 83 | 0.733333 | 347 | 2,415 | 5.008646 | 0.135447 | 0.078251 | 0.103567 | 0.14672 | 0.868815 | 0.803222 | 0.772727 | 0.772727 | 0.772727 | 0.772727 | 0 | 0.001976 | 0.161905 | 2,415 | 56 | 84 | 43.125 | 0.856719 | 0.017391 | 0 | 0.636364 | 0 | 0 | 0.370934 | 0.148711 | 0 | 0 | 0 | 0 | 0.340909 | 1 | 0.090909 | false | 0 | 0.068182 | 0 | 0.159091 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ee7eaa84c859327e0491843283ff188ef9c42086 | 53 | py | Python | flask/project/blueprints/payment/__init__.py | schmidni/app_ActiveCampaign | 2fc2e1e5472a26e5c250f067e3e7d184ae536b0d | [
"MIT"
] | null | null | null | flask/project/blueprints/payment/__init__.py | schmidni/app_ActiveCampaign | 2fc2e1e5472a26e5c250f067e3e7d184ae536b0d | [
"MIT"
] | null | null | null | flask/project/blueprints/payment/__init__.py | schmidni/app_ActiveCampaign | 2fc2e1e5472a26e5c250f067e3e7d184ae536b0d | [
"MIT"
] | null | null | null | from project.blueprints.payment.views import payment
| 26.5 | 52 | 0.867925 | 7 | 53 | 6.571429 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.075472 | 53 | 1 | 53 | 53 | 0.938776 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
c9b33d37aa46204a05764b723ec80e2c555c0b28 | 245 | py | Python | l10n_br_eletronic_document/models/__init__.py | kaoecoito/odoo-brasil | 6e019efc4e03b2e7be6ca51d08ace095240e0f07 | [
"MIT"
] | 181 | 2016-11-11T04:39:43.000Z | 2022-03-14T21:17:19.000Z | l10n_br_eletronic_document/models/__init__.py | kaoecoito/odoo-brasil | 6e019efc4e03b2e7be6ca51d08ace095240e0f07 | [
"MIT"
] | 899 | 2016-11-14T02:42:56.000Z | 2022-03-29T20:47:39.000Z | l10n_br_eletronic_document/models/__init__.py | kaoecoito/odoo-brasil | 6e019efc4e03b2e7be6ca51d08ace095240e0f07 | [
"MIT"
] | 227 | 2016-11-10T17:16:59.000Z | 2022-03-26T16:46:38.000Z | from . import res_company
from . import base_account
from . import account_move
from . import eletronic_document
from . import nfe_models
from . import nfe
from . import fiscal_position
from . import res_config_settings
from . import res_partner | 27.222222 | 33 | 0.820408 | 36 | 245 | 5.333333 | 0.444444 | 0.46875 | 0.203125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 245 | 9 | 34 | 27.222222 | 0.914286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c9df50063d59973c9b77b273632f357b002bec33 | 235 | py | Python | 2016/Day1/day1.py | dh256/adventofcode | 428eec13f4cbf153333a0e359bcff23070ef6d27 | [
"MIT"
] | null | null | null | 2016/Day1/day1.py | dh256/adventofcode | 428eec13f4cbf153333a0e359bcff23070ef6d27 | [
"MIT"
] | null | null | null | 2016/Day1/day1.py | dh256/adventofcode | 428eec13f4cbf153333a0e359bcff23070ef6d27 | [
"MIT"
] | null | null | null | from Directions import Directions
directions = Directions("input.txt")
print(f'Part 1 distance to Easter Bunny HQ: {directions.easter_bunny_hq()}')
print(f'Part 2 distance to Easter Bunny HQ: {directions.easter_bunny_hq(part2=True)}') | 47 | 86 | 0.787234 | 36 | 235 | 5.027778 | 0.472222 | 0.243094 | 0.287293 | 0.232044 | 0.508287 | 0.508287 | 0.508287 | 0.508287 | 0.508287 | 0 | 0 | 0.014151 | 0.097872 | 235 | 5 | 86 | 47 | 0.839623 | 0 | 0 | 0 | 0 | 0 | 0.639831 | 0.29661 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0.5 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
c9fe9e35df93041aa5bb22f0ada8dd9192d6cd1c | 36 | py | Python | tests/test.py | JacopoDeAngelis/TPS-dice-roller-bot | b296966ccde1078eb6d67e71dfb09130e1811035 | [
"MIT"
] | 4 | 2020-10-06T14:47:17.000Z | 2022-02-24T17:24:26.000Z | tests/test.py | JacopoDeAngelis/TPS-dice-roller-bot | b296966ccde1078eb6d67e71dfb09130e1811035 | [
"MIT"
] | null | null | null | tests/test.py | JacopoDeAngelis/TPS-dice-roller-bot | b296966ccde1078eb6d67e71dfb09130e1811035 | [
"MIT"
] | 1 | 2020-10-06T14:47:18.000Z | 2020-10-06T14:47:18.000Z | from .context import pumas_rollbot
| 12 | 34 | 0.833333 | 5 | 36 | 5.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.138889 | 36 | 2 | 35 | 18 | 0.935484 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4e496c4eabda9216f5f9aefb148e59f1b9893885 | 137 | py | Python | apps/locations/serializers/__init__.py | jorgesaw/kmarket | bffdced85c55585a664622b346e272af60b67c33 | [
"MIT"
] | null | null | null | apps/locations/serializers/__init__.py | jorgesaw/kmarket | bffdced85c55585a664622b346e272af60b67c33 | [
"MIT"
] | 1 | 2019-09-20T01:33:45.000Z | 2019-09-20T01:33:45.000Z | apps/locations/serializers/__init__.py | jorgesaw/kmarket | bffdced85c55585a664622b346e272af60b67c33 | [
"MIT"
] | null | null | null | from .states import StateModelSerializer
from .cities import CityModelSerializer, CityWithStateModelSerializer, UpdateCityModelSerializer | 68.5 | 96 | 0.905109 | 10 | 137 | 12.4 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.065693 | 137 | 2 | 96 | 68.5 | 0.96875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4e90fcd711f82fbaa16ec00078242c775994da26 | 2,689 | py | Python | tests/api/test_config.py | isu-avista/base-server | 266f74becfb19083125c40f3d15bc7c67ebff243 | [
"MIT"
] | null | null | null | tests/api/test_config.py | isu-avista/base-server | 266f74becfb19083125c40f3d15bc7c67ebff243 | [
"MIT"
] | null | null | null | tests/api/test_config.py | isu-avista/base-server | 266f74becfb19083125c40f3d15bc7c67ebff243 | [
"MIT"
] | null | null | null | import unittest
from tests.base_api_test import BaseApiTest
class ConfigTest(BaseApiTest):
    def test_read_dbconfig(self):
        rv = BaseApiTest.auth_get(self.client, "admin", "admin", "/api/config/dbdata")
        self.assertEqual(4, len(rv.get_json()))

    def test_read_dbconfig_noauth(self):
        rv = self.client.get("/api/config/dbdata")
        self.assertEqual("Missing Authorization Header", rv.get_json().get("msg"))

    # def test_update_dbconfig(self):
    #     json = dict()
    #     rv = BaseApiTest.auth_put(self.client, "admin", "admin", route="/api/config/dbdata", json=json)
    #     print(rv.get_json())
    #     self.fail()

    def test_update_dbconfig_noauth(self):
        rv = self.client.put("/api/config/dbdata", json=dict())
        self.assertEqual("Missing Authorization Header", rv.get_json().get("msg"))

    def test_update_dbconfig_nojson(self):
        json = dict()
        headers = BaseApiTest._create_auth_header(self.client, "admin", "admin")
        rv = self.client.put("/api/config/dbdata", headers=headers, data=json)
        self.assertEqual("data cannot be None", rv.get_json().get("msg"))

    def test_read_sysconfig(self):
        rv = BaseApiTest.auth_get(self.client, "admin", "admin", "/api/config/sysdata")
        self.assertEqual(5, len(rv.get_json()))

    def test_read_sysconfig_noauth(self):
        rv = self.client.get("/api/config/sysdata")
        self.assertEqual("Missing Authorization Header", rv.get_json().get("msg"))

    # def test_update_sysconfig(self):
    #     json = dict()
    #     rv = BaseApiTest.auth_put(self.client, "admin", "admin", route="/api/config/sysdata", json=json)
    #     print(rv.get_json())
    #     self.fail()

    def test_update_sysconfig_noauth(self):
        rv = self.client.put("/api/config/sysdata", json=dict())
        self.assertEqual("Missing Authorization Header", rv.get_json().get("msg"))

    def test_update_sysconfig_nojson(self):
        json = dict()
        headers = BaseApiTest._create_auth_header(self.client, "admin", "admin")
        rv = self.client.put("/api/config/sysdata", headers=headers, data=json)
        self.assertEqual("data cannot be None", rv.get_json().get("msg"))

    def test_read_unknown_section(self):
        rv = BaseApiTest.auth_get(self.client, "admin", "admin", "/api/config/unknown")
        self.assertEqual("section cannot be None or empty and must be in the config", rv.get_json().get('msg'))

    def test_update_unknown_section(self):
        rv = self.client.get("/api/config/unknown")
        self.assertEqual("Missing Authorization Header", rv.get_json().get("msg"))


if __name__ == '__main__':
    unittest.main()
14c8107d7532e02e341937284931e8df0fb74c81 | 87 | py | Python | netbuffer/abm/models/__init__.py | bstabler/netbuffer | 25fb44804f160a92c8bee80f9f6b44b8f97b2b16 | [
"BSD-3-Clause"
] | null | null | null | netbuffer/abm/models/__init__.py | bstabler/netbuffer | 25fb44804f160a92c8bee80f9f6b44b8f97b2b16 | [
"BSD-3-Clause"
] | 15 | 2018-03-08T19:06:01.000Z | 2020-05-07T23:44:48.000Z | netbuffer/abm/models/__init__.py | bstabler/netbuffer | 25fb44804f160a92c8bee80f9f6b44b8f97b2b16 | [
"BSD-3-Clause"
] | 3 | 2018-03-19T19:32:52.000Z | 2019-10-31T17:47:12.000Z | from . import nearby_zones
from . import buffer_zones
from . import write_daysim_files
| 21.75 | 32 | 0.827586 | 13 | 87 | 5.230769 | 0.615385 | 0.441176 | 0.441176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137931 | 87 | 3 | 33 | 29 | 0.906667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
0909c538a0ed9bf74c827f1a3d94fdef2a9a0990 | 124 | py | Python | Python/Tests/TestData/DebuggerProject/EGGceptionOnCall.py | nanshuiyu/pytools | 9f9271fe8cf564b4f94e9456d400f4306ea77c23 | [
"Apache-2.0"
] | null | null | null | Python/Tests/TestData/DebuggerProject/EGGceptionOnCall.py | nanshuiyu/pytools | 9f9271fe8cf564b4f94e9456d400f4306ea77c23 | [
"Apache-2.0"
] | null | null | null | Python/Tests/TestData/DebuggerProject/EGGceptionOnCall.py | nanshuiyu/pytools | 9f9271fe8cf564b4f94e9456d400f4306ea77c23 | [
"Apache-2.0"
] | null | null | null | import os, sys
sys.path.append(os.path.abspath('EGG.egg'))
import EGG.function_exception
EGG.function_exception.f()
| 17.714286 | 44 | 0.741935 | 19 | 124 | 4.736842 | 0.526316 | 0.244444 | 0.444444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.120968 | 124 | 6 | 45 | 20.666667 | 0.825688 | 0 | 0 | 0 | 0 | 0 | 0.059322 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
090fc909d4e1adbcc8bd709fbde814742a2fe180 | 39 | py | Python | symnet/__init__.py | SurajAralihalli/GeneExpression | b2e53b2ccf7beece1925d1749e317efc32045486 | [
"MIT"
] | null | null | null | symnet/__init__.py | SurajAralihalli/GeneExpression | b2e53b2ccf7beece1925d1749e317efc32045486 | [
"MIT"
] | null | null | null | symnet/__init__.py | SurajAralihalli/GeneExpression | b2e53b2ccf7beece1925d1749e317efc32045486 | [
"MIT"
] | null | null | null | from symnet.model import AbstractModel
| 19.5 | 38 | 0.871795 | 5 | 39 | 6.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102564 | 39 | 1 | 39 | 39 | 0.971429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
092e92f1be45e9afaba9902dbd1d7ac58a4ed063 | 661 | py | Python | iocsv.py | mairieli/botSE-2019 | bfcda1197fccd05650db1e37c85c43db9e28b26d | [
"MIT"
] | null | null | null | iocsv.py | mairieli/botSE-2019 | bfcda1197fccd05650db1e37c85c43db9e28b26d | [
"MIT"
] | 1 | 2020-11-06T18:47:10.000Z | 2020-11-19T18:51:29.000Z | iocsv.py | mairieli/botSE-2019 | bfcda1197fccd05650db1e37c85c43db9e28b26d | [
"MIT"
] | null | null | null | import csv
def read_csv(file):
input_csv = open('{}'.format(file), 'r')
reader_csv = csv.reader(input_csv, delimiter=',')
repos = [{'url': r[1], 'owner': r[2], 'repo': r[3]} for r in reader_csv]
input_csv.close()
return repos
def read_csv_repos(file):
input_csv = open('{}'.format(file), 'r')
reader_csv = csv.reader(input_csv, delimiter=',')
repos = [r[1] for r in reader_csv]
input_csv.close()
return repos
def read_csv_repos_fil(file):
input_csv = open('{}'.format(file), 'r')
reader_csv = csv.reader(input_csv, delimiter=',')
repos = [r[2] for r in reader_csv]
input_csv.close()
return repos | 26.44 | 76 | 0.624811 | 102 | 661 | 3.843137 | 0.22549 | 0.183673 | 0.076531 | 0.122449 | 0.892857 | 0.892857 | 0.892857 | 0.892857 | 0.892857 | 0.892857 | 0 | 0.009452 | 0.199697 | 661 | 25 | 77 | 26.44 | 0.731569 | 0 | 0 | 0.631579 | 0 | 0 | 0.036254 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.157895 | false | 0 | 0.052632 | 0 | 0.368421 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0958fbc59e98102e4af9afb91a752a98812ab55a | 70 | py | Python | bayes_by_backprop/FrequentistModels/__init__.py | AlbertoCastelo/bayesian-dl-medical-diagnosis | 82b0efc7147d88663b81cc066d5cd41189860e43 | [
"MIT"
] | 1 | 2021-07-12T02:54:57.000Z | 2021-07-12T02:54:57.000Z | bayes_by_backprop/FrequentistModels/__init__.py | AlbertoCastelo/bayesian-dl-medical-diagnosis | 82b0efc7147d88663b81cc066d5cd41189860e43 | [
"MIT"
] | 20 | 2020-01-28T22:18:55.000Z | 2021-09-08T01:21:52.000Z | experiment_Bayesian_CNN/utils/FrequentistModels/__init__.py | slds-lmu/paper_2019_variationalResampleDistributionShift | 3664eea4d243eb828d13ba69112308630d80d244 | [
"Apache-2.0",
"MIT"
] | 2 | 2019-12-14T09:17:47.000Z | 2020-02-24T16:55:07.000Z | from .LeNet import *
from .AlexNet import *
from .F3Conv3FC import *
| 14 | 24 | 0.728571 | 9 | 70 | 5.666667 | 0.555556 | 0.392157 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.035088 | 0.185714 | 70 | 4 | 25 | 17.5 | 0.859649 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
096042e01a4f7f6caa85d99c61ab8f78bc8a16c0 | 18,538 | py | Python | utilities/python/saint_specificity/test_main.py | knightjdr/prohits-viz-containers | a696e8f2a3c9fca398aa2141f64c6b2003cff8d0 | [
"MIT"
] | null | null | null | utilities/python/saint_specificity/test_main.py | knightjdr/prohits-viz-containers | a696e8f2a3c9fca398aa2141f64c6b2003cff8d0 | [
"MIT"
] | null | null | null | utilities/python/saint_specificity/test_main.py | knightjdr/prohits-viz-containers | a696e8f2a3c9fca398aa2141f64c6b2003cff8d0 | [
"MIT"
] | null | null | null | import math
import pandas as pd
import pandas.testing as pd_testing
import pyfakefs.fake_filesystem_unittest
import unittest
from .main import (
    add_specificity_to_saint,
    get_specificty_calculator,
    read_saint,
)
class ReadSaint(pyfakefs.fake_filesystem_unittest.TestCase):
def assertDataframeEqual(self, a, b, msg):
try:
pd_testing.assert_frame_equal(a, b)
except AssertionError as e:
raise self.failureException(msg) from e
def setUp(self):
self.addTypeEqualityFunc(pd.DataFrame, self.assertDataframeEqual)
self.setUpPyfakefs()
def test(self):
file_contents = (
'Bait\tPrey\tPreyGene\tAvgSpec\tSpec\tctrlCounts\n'
'AAA\tP11111\tprey1\t10\t10|10\t0|0\n'
'AAA\tP22222\tprey2\t20\t20|20\t5|4\n'
'AAA\tP33333\tprey3\t30\t30|30\t0|3\n'
'AAA\tP44444\tprey4\t15\t15|15\t7|8\n'
'AAA\tP55555\tprey5\t25\t25|25\t0|0\n'
'AAA\tP66666\tprey6\t40\t40|40\t1|1\n'
'BBB\tP11111\tprey1\t10\t10|10\t0|0\n'
'BBB\tP22222\tprey2\t20\t20|20\t5|4\n'
'BBB\tP33333\tprey3\t30\t30|30\t0|3\n'
)
filepath = '/test/saint.txt'
self.fs.create_file(filepath, contents=file_contents)
control_subtract = False
expected = pd.DataFrame([
{ 'Bait': 'AAA', 'Prey': 'P11111', 'PreyGene': 'prey1', 'AvgSpec': 10, 'Spec': '10|10', 'ctrlCounts': '0|0', 'Abundance': 10, 'Replicates': '10|10' },
{ 'Bait': 'AAA', 'Prey': 'P22222', 'PreyGene': 'prey2', 'AvgSpec': 20, 'Spec': '20|20', 'ctrlCounts': '5|4', 'Abundance': 20, 'Replicates': '20|20' },
{ 'Bait': 'AAA', 'Prey': 'P33333', 'PreyGene': 'prey3', 'AvgSpec': 30, 'Spec': '30|30', 'ctrlCounts': '0|3', 'Abundance': 30, 'Replicates': '30|30' },
{ 'Bait': 'AAA', 'Prey': 'P44444', 'PreyGene': 'prey4', 'AvgSpec': 15, 'Spec': '15|15', 'ctrlCounts': '7|8', 'Abundance': 15, 'Replicates': '15|15' },
{ 'Bait': 'AAA', 'Prey': 'P55555', 'PreyGene': 'prey5', 'AvgSpec': 25, 'Spec': '25|25', 'ctrlCounts': '0|0', 'Abundance': 25, 'Replicates': '25|25' },
{ 'Bait': 'AAA', 'Prey': 'P66666', 'PreyGene': 'prey6', 'AvgSpec': 40, 'Spec': '40|40', 'ctrlCounts': '1|1', 'Abundance': 40, 'Replicates': '40|40' },
{ 'Bait': 'BBB', 'Prey': 'P11111', 'PreyGene': 'prey1', 'AvgSpec': 10, 'Spec': '10|10', 'ctrlCounts': '0|0', 'Abundance': 10, 'Replicates': '10|10' },
{ 'Bait': 'BBB', 'Prey': 'P22222', 'PreyGene': 'prey2', 'AvgSpec': 20, 'Spec': '20|20', 'ctrlCounts': '5|4', 'Abundance': 20, 'Replicates': '20|20' },
{ 'Bait': 'BBB', 'Prey': 'P33333', 'PreyGene': 'prey3', 'AvgSpec': 30, 'Spec': '30|30', 'ctrlCounts': '0|3', 'Abundance': 30, 'Replicates': '30|30' },
])
        self.assertEqual(read_saint(filepath, control_subtract), expected)

    def test_control_subtract(self):
        file_contents = (
            'Bait\tPrey\tPreyGene\tAvgSpec\tSpec\tctrlCounts\n'
            'AAA\tP11111\tprey1\t10\t10|10\t0|0\n'
            'AAA\tP22222\tprey2\t20\t20|20\t5|4\n'
            'AAA\tP33333\tprey3\t30\t30|30\t0|3\n'
            'AAA\tP44444\tprey4\t15\t15|15\t7|8\n'
            'AAA\tP55555\tprey5\t25\t25|25\t0|0\n'
            'AAA\tP66666\tprey6\t40\t40|40\t1|1\n'
            'BBB\tP11111\tprey1\t10\t10|10\t0|0\n'
            'BBB\tP22222\tprey2\t20\t20|20\t5|4\n'
            'BBB\tP33333\tprey3\t30\t30|30\t0|3\n'
        )
        filepath = '/test/saint.txt'
        self.fs.create_file(filepath, contents=file_contents)
        control_subtract = True
        expected = pd.DataFrame([
            { 'Bait': 'AAA', 'Prey': 'P11111', 'PreyGene': 'prey1', 'AvgSpec': 10, 'Spec': '10|10', 'ctrlCounts': '0|0', 'Abundance': 10, 'Replicates': '10.0|10.0' },
            { 'Bait': 'AAA', 'Prey': 'P22222', 'PreyGene': 'prey2', 'AvgSpec': 20, 'Spec': '20|20', 'ctrlCounts': '5|4', 'Abundance': 15.5, 'Replicates': '15.5|15.5' },
            { 'Bait': 'AAA', 'Prey': 'P33333', 'PreyGene': 'prey3', 'AvgSpec': 30, 'Spec': '30|30', 'ctrlCounts': '0|3', 'Abundance': 28.5, 'Replicates': '28.5|28.5' },
            { 'Bait': 'AAA', 'Prey': 'P44444', 'PreyGene': 'prey4', 'AvgSpec': 15, 'Spec': '15|15', 'ctrlCounts': '7|8', 'Abundance': 7.5, 'Replicates': '7.5|7.5' },
            { 'Bait': 'AAA', 'Prey': 'P55555', 'PreyGene': 'prey5', 'AvgSpec': 25, 'Spec': '25|25', 'ctrlCounts': '0|0', 'Abundance': 25, 'Replicates': '25.0|25.0' },
            { 'Bait': 'AAA', 'Prey': 'P66666', 'PreyGene': 'prey6', 'AvgSpec': 40, 'Spec': '40|40', 'ctrlCounts': '1|1', 'Abundance': 39, 'Replicates': '39.0|39.0' },
            { 'Bait': 'BBB', 'Prey': 'P11111', 'PreyGene': 'prey1', 'AvgSpec': 10, 'Spec': '10|10', 'ctrlCounts': '0|0', 'Abundance': 10, 'Replicates': '10.0|10.0' },
            { 'Bait': 'BBB', 'Prey': 'P22222', 'PreyGene': 'prey2', 'AvgSpec': 20, 'Spec': '20|20', 'ctrlCounts': '5|4', 'Abundance': 15.5, 'Replicates': '15.5|15.5' },
            { 'Bait': 'BBB', 'Prey': 'P33333', 'PreyGene': 'prey3', 'AvgSpec': 30, 'Spec': '30|30', 'ctrlCounts': '0|3', 'Abundance': 28.5, 'Replicates': '28.5|28.5' },
        ])
class CalculateSpecificity(unittest.TestCase):
    def test_dscore(self):
        metric = 'dscore'
        df = pd.DataFrame([
            { 'Bait': 'AAA', 'Prey': 'P11111', 'PreyGene': 'prey1', 'Abundance': 10, 'Replicates': '10|10', 'ctrlCounts': '0|0' },
            { 'Bait': 'AAA', 'Prey': 'P22222', 'PreyGene': 'prey2', 'Abundance': 20, 'Replicates': '20|20', 'ctrlCounts': '5|4' },
            { 'Bait': 'AAA', 'Prey': 'P33333', 'PreyGene': 'prey3', 'Abundance': 30, 'Replicates': '30|30', 'ctrlCounts': '0|3' },
            { 'Bait': 'AAA', 'Prey': 'P44444', 'PreyGene': 'prey4', 'Abundance': 15, 'Replicates': '15|15', 'ctrlCounts': '7|8' },
            { 'Bait': 'AAA', 'Prey': 'P55555', 'PreyGene': 'prey5', 'Abundance': 25, 'Replicates': '25|25', 'ctrlCounts': '0|0' },
            { 'Bait': 'AAA', 'Prey': 'P66666', 'PreyGene': 'prey6', 'Abundance': 40, 'Replicates': '40|40', 'ctrlCounts': '1|1' },
            { 'Bait': 'BBB', 'Prey': 'P11111', 'PreyGene': 'prey1', 'Abundance': 10, 'Replicates': '10|10', 'ctrlCounts': '0|0' },
            { 'Bait': 'BBB', 'Prey': 'P22222', 'PreyGene': 'prey2', 'Abundance': 20, 'Replicates': '20|20', 'ctrlCounts': '5|4' },
            { 'Bait': 'BBB', 'Prey': 'P33333', 'PreyGene': 'prey3', 'Abundance': 30, 'Replicates': '30|30', 'ctrlCounts': '0|3' },
            { 'Bait': 'CCC', 'Prey': 'P11111', 'PreyGene': 'prey1', 'Abundance': 15, 'Replicates': '15|15', 'ctrlCounts': '0|0' },
            { 'Bait': 'CCC', 'Prey': 'P22222', 'PreyGene': 'prey2', 'Abundance': 15, 'Replicates': '15|15', 'ctrlCounts': '5|4' },
            { 'Bait': 'CCC', 'Prey': 'P33333', 'PreyGene': 'prey3', 'Abundance': 15, 'Replicates': '15|15', 'ctrlCounts': '0|3' },
        ])
        calculate_specificity = get_specificty_calculator(df, metric)
        param_list = [
            ('P11111', 10, '10|10', 3.16),
            ('P33333', 30, '30|30', 5.48),
            ('P44444', 15, '15|15', 11.62),
            ('P22222', 20, '20|20', 4.47),
        ]
        for prey, spec, reps, expected in param_list:
            with self.subTest():
                self.assertEqual(calculate_specificity(prey, spec, reps), expected)

    def test_fc(self):
        metric = 'fe'
        df = pd.DataFrame([
            { 'Bait': 'AAA', 'Prey': 'P11111', 'PreyGene': 'prey1', 'Abundance': 10, 'Replicates': '10|10', 'ctrlCounts': '0|0' },
            { 'Bait': 'AAA', 'Prey': 'P22222', 'PreyGene': 'prey2', 'Abundance': 20, 'Replicates': '20|20', 'ctrlCounts': '5|4' },
            { 'Bait': 'AAA', 'Prey': 'P33333', 'PreyGene': 'prey3', 'Abundance': 30, 'Replicates': '30|30', 'ctrlCounts': '0|3' },
            { 'Bait': 'AAA', 'Prey': 'P44444', 'PreyGene': 'prey4', 'Abundance': 15, 'Replicates': '15|15', 'ctrlCounts': '7|8' },
            { 'Bait': 'AAA', 'Prey': 'P55555', 'PreyGene': 'prey5', 'Abundance': 25, 'Replicates': '25|25', 'ctrlCounts': '0|0' },
            { 'Bait': 'AAA', 'Prey': 'P66666', 'PreyGene': 'prey6', 'Abundance': 40, 'Replicates': '40|40', 'ctrlCounts': '1|1' },
            { 'Bait': 'BBB', 'Prey': 'P11111', 'PreyGene': 'prey1', 'Abundance': 10, 'Replicates': '10|10', 'ctrlCounts': '0|0' },
            { 'Bait': 'BBB', 'Prey': 'P22222', 'PreyGene': 'prey2', 'Abundance': 20, 'Replicates': '20|20', 'ctrlCounts': '5|4' },
            { 'Bait': 'BBB', 'Prey': 'P33333', 'PreyGene': 'prey3', 'Abundance': 30, 'Replicates': '30|30', 'ctrlCounts': '0|3' },
            { 'Bait': 'CCC', 'Prey': 'P11111', 'PreyGene': 'prey1', 'Abundance': 15, 'Replicates': '15|15', 'ctrlCounts': '0|0' },
            { 'Bait': 'CCC', 'Prey': 'P22222', 'PreyGene': 'prey2', 'Abundance': 15, 'Replicates': '15|15', 'ctrlCounts': '5|4' },
            { 'Bait': 'CCC', 'Prey': 'P33333', 'PreyGene': 'prey3', 'Abundance': 15, 'Replicates': '15|15', 'ctrlCounts': '0|3' },
        ])
        calculate_specificity = get_specificty_calculator(df, metric)
        param_list = [
            ('P11111', 10, 0.8),
            ('P33333', 30, 1.33),
            ('P44444', 15, math.inf),
            ('P22222', 20, 1.14),
        ]
        for prey, spec, expected in param_list:
            with self.subTest():
                self.assertEqual(calculate_specificity(prey, spec), expected)

    def test_sscore(self):
        metric = 'sscore'
        df = pd.DataFrame([
            { 'Bait': 'AAA', 'Prey': 'P11111', 'PreyGene': 'prey1', 'Abundance': 10, 'Replicates': '10|10', 'ctrlCounts': '0|0' },
            { 'Bait': 'AAA', 'Prey': 'P22222', 'PreyGene': 'prey2', 'Abundance': 20, 'Replicates': '20|20', 'ctrlCounts': '5|4' },
            { 'Bait': 'AAA', 'Prey': 'P33333', 'PreyGene': 'prey3', 'Abundance': 30, 'Replicates': '30|30', 'ctrlCounts': '0|3' },
            { 'Bait': 'AAA', 'Prey': 'P44444', 'PreyGene': 'prey4', 'Abundance': 15, 'Replicates': '15|15', 'ctrlCounts': '7|8' },
            { 'Bait': 'AAA', 'Prey': 'P55555', 'PreyGene': 'prey5', 'Abundance': 25, 'Replicates': '25|25', 'ctrlCounts': '0|0' },
            { 'Bait': 'AAA', 'Prey': 'P66666', 'PreyGene': 'prey6', 'Abundance': 40, 'Replicates': '40|40', 'ctrlCounts': '1|1' },
            { 'Bait': 'BBB', 'Prey': 'P11111', 'PreyGene': 'prey1', 'Abundance': 10, 'Replicates': '10|10', 'ctrlCounts': '0|0' },
            { 'Bait': 'BBB', 'Prey': 'P22222', 'PreyGene': 'prey2', 'Abundance': 20, 'Replicates': '20|20', 'ctrlCounts': '5|4' },
            { 'Bait': 'BBB', 'Prey': 'P33333', 'PreyGene': 'prey3', 'Abundance': 30, 'Replicates': '30|30', 'ctrlCounts': '0|3' },
            { 'Bait': 'CCC', 'Prey': 'P11111', 'PreyGene': 'prey1', 'Abundance': 15, 'Replicates': '15|15', 'ctrlCounts': '0|0' },
            { 'Bait': 'CCC', 'Prey': 'P22222', 'PreyGene': 'prey2', 'Abundance': 15, 'Replicates': '15|15', 'ctrlCounts': '5|4' },
            { 'Bait': 'CCC', 'Prey': 'P33333', 'PreyGene': 'prey3', 'Abundance': 15, 'Replicates': '15|15', 'ctrlCounts': '0|3' },
        ])
        calculate_specificity = get_specificty_calculator(df, metric)
        param_list = [
            ('P11111', 10, 3.16),
            ('P33333', 30, 5.48),
            ('P44444', 15, 6.71),
            ('P22222', 20, 4.47),
        ]
        for prey, spec, expected in param_list:
            with self.subTest():
                self.assertEqual(calculate_specificity(prey, spec), expected)

    def test_wdscore(self):
        metric = 'wdscore'
        df = pd.DataFrame([
            { 'Bait': 'AAA', 'Prey': 'P11111', 'PreyGene': 'prey1', 'Abundance': 10, 'Replicates': '10|10', 'ctrlCounts': '0|0' },
            { 'Bait': 'AAA', 'Prey': 'P22222', 'PreyGene': 'prey2', 'Abundance': 20, 'Replicates': '20|20', 'ctrlCounts': '5|4' },
            { 'Bait': 'AAA', 'Prey': 'P33333', 'PreyGene': 'prey3', 'Abundance': 30, 'Replicates': '30|30', 'ctrlCounts': '0|3' },
            { 'Bait': 'AAA', 'Prey': 'P44444', 'PreyGene': 'prey4', 'Abundance': 15, 'Replicates': '15|15', 'ctrlCounts': '7|8' },
            { 'Bait': 'AAA', 'Prey': 'P55555', 'PreyGene': 'prey5', 'Abundance': 25, 'Replicates': '25|25', 'ctrlCounts': '0|0' },
            { 'Bait': 'AAA', 'Prey': 'P66666', 'PreyGene': 'prey6', 'Abundance': 40, 'Replicates': '40|40', 'ctrlCounts': '1|1' },
            { 'Bait': 'BBB', 'Prey': 'P11111', 'PreyGene': 'prey1', 'Abundance': 10, 'Replicates': '10|10', 'ctrlCounts': '0|0' },
            { 'Bait': 'BBB', 'Prey': 'P22222', 'PreyGene': 'prey2', 'Abundance': 20, 'Replicates': '20|20', 'ctrlCounts': '5|4' },
            { 'Bait': 'BBB', 'Prey': 'P33333', 'PreyGene': 'prey3', 'Abundance': 30, 'Replicates': '30|30', 'ctrlCounts': '0|3' },
            { 'Bait': 'CCC', 'Prey': 'P11111', 'PreyGene': 'prey1', 'Abundance': 15, 'Replicates': '15|15', 'ctrlCounts': '0|0' },
            { 'Bait': 'CCC', 'Prey': 'P22222', 'PreyGene': 'prey2', 'Abundance': 15, 'Replicates': '15|15', 'ctrlCounts': '5|4' },
            { 'Bait': 'CCC', 'Prey': 'P33333', 'PreyGene': 'prey3', 'Abundance': 15, 'Replicates': '15|15', 'ctrlCounts': '0|3' },
        ])
        calculate_specificity = get_specificty_calculator(df, metric)
        param_list = [
            ('P11111', 10, '10|10', 3.16),
            ('P33333', 30, '30|30', 5.48),
            ('P44444', 15, '15|15', 20.12),
            ('P22222', 20, '20|20', 4.47),
        ]
        for prey, spec, reps, expected in param_list:
            with self.subTest():
                self.assertEqual(calculate_specificity(prey, spec, reps), expected)

    def test_zscore(self):
        metric = 'zscore'
        df = pd.DataFrame([
            { 'Bait': 'AAA', 'Prey': 'P11111', 'PreyGene': 'prey1', 'Abundance': 10, 'Replicates': '10|10', 'ctrlCounts': '0|0' },
            { 'Bait': 'AAA', 'Prey': 'P22222', 'PreyGene': 'prey2', 'Abundance': 20, 'Replicates': '20|20', 'ctrlCounts': '5|4' },
            { 'Bait': 'AAA', 'Prey': 'P33333', 'PreyGene': 'prey3', 'Abundance': 30, 'Replicates': '30|30', 'ctrlCounts': '0|3' },
            { 'Bait': 'AAA', 'Prey': 'P44444', 'PreyGene': 'prey4', 'Abundance': 15, 'Replicates': '15|15', 'ctrlCounts': '7|8' },
            { 'Bait': 'AAA', 'Prey': 'P55555', 'PreyGene': 'prey5', 'Abundance': 25, 'Replicates': '25|25', 'ctrlCounts': '0|0' },
            { 'Bait': 'AAA', 'Prey': 'P66666', 'PreyGene': 'prey6', 'Abundance': 40, 'Replicates': '40|40', 'ctrlCounts': '1|1' },
            { 'Bait': 'BBB', 'Prey': 'P11111', 'PreyGene': 'prey1', 'Abundance': 10, 'Replicates': '10|10', 'ctrlCounts': '0|0' },
            { 'Bait': 'BBB', 'Prey': 'P22222', 'PreyGene': 'prey2', 'Abundance': 20, 'Replicates': '20|20', 'ctrlCounts': '5|4' },
            { 'Bait': 'BBB', 'Prey': 'P33333', 'PreyGene': 'prey3', 'Abundance': 30, 'Replicates': '30|30', 'ctrlCounts': '0|3' },
            { 'Bait': 'CCC', 'Prey': 'P11111', 'PreyGene': 'prey1', 'Abundance': 15, 'Replicates': '15|15', 'ctrlCounts': '0|0' },
            { 'Bait': 'CCC', 'Prey': 'P22222', 'PreyGene': 'prey2', 'Abundance': 15, 'Replicates': '15|15', 'ctrlCounts': '5|4' },
            { 'Bait': 'CCC', 'Prey': 'P33333', 'PreyGene': 'prey3', 'Abundance': 15, 'Replicates': '15|15', 'ctrlCounts': '0|3' },
        ])
        calculate_specificity = get_specificty_calculator(df, metric)
        param_list = [
            ('P11111', 10, -0.58),
            ('P33333', 30, 0.58),
            ('P44444', 15, 1.15),
            ('P22222', 20, 0.58),
        ]
        for prey, spec, expected in param_list:
            with self.subTest():
                self.assertEqual(calculate_specificity(prey, spec), expected)


class AddSpecificityToSaint(unittest.TestCase):
    def assertDataframeEqual(self, a, b, msg):
        try:
            pd_testing.assert_frame_equal(a, b)
        except AssertionError as e:
            raise self.failureException(msg) from e

    def setUp(self):
        self.addTypeEqualityFunc(pd.DataFrame, self.assertDataframeEqual)

    def test(self):
        metric = 'fc'
        df = pd.DataFrame([
            { 'Bait': 'AAA', 'Prey': 'P11111', 'PreyGene': 'prey1', 'Abundance': 10, 'Replicates': '10|10', 'ctrlCounts': '0|0' },
            { 'Bait': 'AAA', 'Prey': 'P22222', 'PreyGene': 'prey2', 'Abundance': 20, 'Replicates': '20|20', 'ctrlCounts': '5|4' },
            { 'Bait': 'AAA', 'Prey': 'P33333', 'PreyGene': 'prey3', 'Abundance': 30, 'Replicates': '30|30', 'ctrlCounts': '0|3' },
            { 'Bait': 'AAA', 'Prey': 'P44444', 'PreyGene': 'prey4', 'Abundance': 15, 'Replicates': '15|15', 'ctrlCounts': '7|8' },
            { 'Bait': 'AAA', 'Prey': 'P55555', 'PreyGene': 'prey5', 'Abundance': 25, 'Replicates': '25|25', 'ctrlCounts': '0|0' },
            { 'Bait': 'AAA', 'Prey': 'P66666', 'PreyGene': 'prey6', 'Abundance': 40, 'Replicates': '40|40', 'ctrlCounts': '1|1' },
            { 'Bait': 'BBB', 'Prey': 'P11111', 'PreyGene': 'prey1', 'Abundance': 10, 'Replicates': '10|10', 'ctrlCounts': '0|0' },
            { 'Bait': 'BBB', 'Prey': 'P22222', 'PreyGene': 'prey2', 'Abundance': 20, 'Replicates': '20|20', 'ctrlCounts': '5|4' },
            { 'Bait': 'BBB', 'Prey': 'P33333', 'PreyGene': 'prey3', 'Abundance': 30, 'Replicates': '30|30', 'ctrlCounts': '0|3' },
            { 'Bait': 'CCC', 'Prey': 'P11111', 'PreyGene': 'prey1', 'Abundance': 15, 'Replicates': '15|15', 'ctrlCounts': '0|0' },
            { 'Bait': 'CCC', 'Prey': 'P22222', 'PreyGene': 'prey2', 'Abundance': 15, 'Replicates': '15|15', 'ctrlCounts': '5|4' },
            { 'Bait': 'CCC', 'Prey': 'P33333', 'PreyGene': 'prey3', 'Abundance': 15, 'Replicates': '15|15', 'ctrlCounts': '0|3' },
        ])
        calculate_specificity = get_specificty_calculator(df, metric)
        expected = pd.DataFrame([
            { 'Bait': 'AAA', 'Prey': 'P11111', 'PreyGene': 'prey1', 'Abundance': 10, 'Replicates': '10|10', 'ctrlCounts': '0|0', 'Specificity': 0.8 },
            { 'Bait': 'AAA', 'Prey': 'P22222', 'PreyGene': 'prey2', 'Abundance': 20, 'Replicates': '20|20', 'ctrlCounts': '5|4', 'Specificity': 1.14 },
            { 'Bait': 'AAA', 'Prey': 'P33333', 'PreyGene': 'prey3', 'Abundance': 30, 'Replicates': '30|30', 'ctrlCounts': '0|3', 'Specificity': 1.33 },
            { 'Bait': 'AAA', 'Prey': 'P44444', 'PreyGene': 'prey4', 'Abundance': 15, 'Replicates': '15|15', 'ctrlCounts': '7|8', 'Specificity': math.inf },
            { 'Bait': 'AAA', 'Prey': 'P55555', 'PreyGene': 'prey5', 'Abundance': 25, 'Replicates': '25|25', 'ctrlCounts': '0|0', 'Specificity': math.inf },
            { 'Bait': 'AAA', 'Prey': 'P66666', 'PreyGene': 'prey6', 'Abundance': 40, 'Replicates': '40|40', 'ctrlCounts': '1|1', 'Specificity': math.inf },
            { 'Bait': 'BBB', 'Prey': 'P11111', 'PreyGene': 'prey1', 'Abundance': 10, 'Replicates': '10|10', 'ctrlCounts': '0|0', 'Specificity': 0.8 },
            { 'Bait': 'BBB', 'Prey': 'P22222', 'PreyGene': 'prey2', 'Abundance': 20, 'Replicates': '20|20', 'ctrlCounts': '5|4', 'Specificity': 1.14 },
            { 'Bait': 'BBB', 'Prey': 'P33333', 'PreyGene': 'prey3', 'Abundance': 30, 'Replicates': '30|30', 'ctrlCounts': '0|3', 'Specificity': 1.33 },
            { 'Bait': 'CCC', 'Prey': 'P11111', 'PreyGene': 'prey1', 'Abundance': 15, 'Replicates': '15|15', 'ctrlCounts': '0|0', 'Specificity': 1.5 },
            { 'Bait': 'CCC', 'Prey': 'P22222', 'PreyGene': 'prey2', 'Abundance': 15, 'Replicates': '15|15', 'ctrlCounts': '5|4', 'Specificity': 0.75 },
            { 'Bait': 'CCC', 'Prey': 'P33333', 'PreyGene': 'prey3', 'Abundance': 15, 'Replicates': '15|15', 'ctrlCounts': '0|3', 'Specificity': 0.5 },
        ])
        self.assertEqual(add_specificity_to_saint(df, calculate_specificity), expected)
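The `test_zscore` expectations above pin down the scoring convention fairly tightly: scores are computed per prey across all baits, rounded to two decimals, and a prey absent from a bait must count as an abundance of 0 (otherwise `P44444`, seen in only one bait, could not yield 1.15). A minimal, hypothetical sketch of a z-score calculator factory consistent with those values — not the project's actual `get_specificty_calculator` — could look like this:

```python
import math

def get_zscore_calculator(rows):
    """Return a closure computing a per-prey z-score.

    ``rows`` is a list of (bait, prey, abundance) tuples — a hypothetical
    stand-in for the DataFrame-based factory exercised by the tests.
    """
    n_baits = len({bait for bait, _prey, _abundance in rows})
    by_prey = {}
    for _bait, prey, abundance in rows:
        by_prey.setdefault(prey, []).append(abundance)

    def calculate(prey, abundance):
        # A prey missing from a bait's pulldown counts as abundance 0 there;
        # this is what reproduces the 1.15 expectation for P44444.
        values = by_prey[prey] + [0] * (n_baits - len(by_prey[prey]))
        mean = sum(values) / len(values)
        # Sample standard deviation (ddof=1), matching the expected values.
        std = math.sqrt(sum((v - mean) ** 2 for v in values) / (len(values) - 1))
        return round((abundance - mean) / std, 2)

    return calculate
```

Fed the same three-bait abundances as the test fixture, this sketch reproduces the asserted values (-0.58, 0.58, 1.15, 0.58).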
# ---- tests/unit/test_contract.py (hellmage/pacte, MIT) ----
# Copyright (c) 2018 App Annie Inc. All rights reserved.
import unittest as ut

from pacte import VERSION
from pacte.contract import Contract
from pacte.interaction import Interaction


class TestContract(ut.TestCase):
    def test_mock_service_serialize_json(self):
        contract = Contract('provider', 'consumer')
        contract.given("Test").upon_receiving("a request").with_request(
            method="get",
            path="/path",
            headers={"Custom-Header": "value"},
        ).will_respond_with(
            status=200,
            headers={"Content-Type": "text/html"},
            body={"key": "value"}
        )
        expected_contract = {
            'provider': {'name': 'provider'},
            'consumer': {'name': 'consumer'},
            "metadata": {
                "pacte": {
                    "version": VERSION
                }
            },
            "interactions": [
                {
                    "providerState": "Test",
                    "description": "a request",
                    "request": {
                        "method": "GET",
                        "path": "/path",
                        "headers": {"Custom-Header": "value"},
                    },
                    "response": {
                        "status": 200,
                        "headers": {
                            "Content-Type": "text/html"
                        },
                        "body": {"key": "value"}
                    }
                }
            ],
        }
        actual_contract = contract.to_dict()
        self.assertDictEqual(actual_contract, expected_contract)

    def test_mock_service_serialize_text(self):
        contract = Contract('provider', 'consumer')
        contract.given("Test").upon_receiving("a request").with_request(
            method="get",
            path="/path",
            headers={"Custom-Header": "value"},
        ).will_respond_with(
            status=200,
            headers={"Content-Type": "text/html"},
            body="Test String Response"
        )
        expected_contract = {
            'provider': {'name': 'provider'},
            'consumer': {'name': 'consumer'},
            "metadata": {
                "pacte": {
                    "version": VERSION
                }
            },
            "interactions": [
                {
                    "providerState": "Test",
                    "description": "a request",
                    "request": {
                        "method": "GET",
                        "path": "/path",
                        "headers": {"Custom-Header": "value"},
                    },
                    "response": {
                        "status": 200,
                        "headers": {
                            "Content-Type": "text/html"
                        },
                        "body": "Test String Response"
                    }
                }
            ],
        }
        actual_contract = contract.to_dict()
        self.assertDictEqual(actual_contract, expected_contract)

    def test_mock_service_multi_interactions_serialize(self):
        contract = Contract('provider', 'consumer')
        contract.given("Test").upon_receiving("a request").with_request(
            method="get",
            path="/path",
            headers={"Custom-Header": "value"},
        ).will_respond_with(
            status=200,
            headers={"Content-Type": "text/html"},
            body="Test String Response"
        )
        contract.given("Test2").upon_receiving("a request2").with_request(
            method="post",
            path="/path",
            query="name=ron&status=good",
            headers={"Custom-Header": "value"},
        ).will_respond_with(
            status=200,
            headers={"Content-Type": "text/html"},
            body={"key": "value"}
        )
        expected_contract = {
            'provider': {'name': 'provider'},
            'consumer': {'name': 'consumer'},
            "metadata": {
                "pacte": {
                    "version": VERSION
                }
            },
            "interactions": [
                {
                    "providerState": "Test",
                    "description": "a request",
                    "request": {
                        "method": "GET",
                        "path": "/path",
                        "headers": {"Custom-Header": "value"},
                    },
                    "response": {
                        "status": 200,
                        "headers": {
                            "Content-Type": "text/html"
                        },
                        "body": "Test String Response"
                    }
                },
                {
                    "providerState": "Test2",
                    "description": "a request2",
                    "request": {
                        "method": "POST",
                        "path": "/path",
                        "query": "name=ron&status=good",
                        "headers": {"Custom-Header": "value"},
                    },
                    "response": {
                        "status": 200,
                        "headers": {
                            "Content-Type": "text/html"
                        },
                        "body": {"key": "value"}
                    }
                }
            ],
        }
        actual_contract = contract.to_dict()
        self.assertDictEqual(actual_contract, expected_contract)

    def test_from_dict(self):
        contract = Contract.from_dict({
            'provider': {'name': 'provider'},
            'consumer': {'name': 'consumer'},
            "metadata": {
                "pacte": {
                    "version": VERSION
                }
            },
            "interactions": [
                {
                    "providerState": "Test",
                    "description": "a request",
                    "request": {
                        "method": "GET",
                        "path": "/path",
                        "headers": {"Custom-Header": "value"},
                    },
                    "response": {
                        "status": 200,
                        "headers": {
                            "Content-Type": "text/html"
                        },
                        "body": "Test String Response"
                    }
                },
                {
                    "providerState": "Test2",
                    "description": "a request2",
                    "request": {
                        "method": "POST",
                        "path": "/path",
                        "query": "name=ron&status=good",
                        "headers": {"Custom-Header": "value"},
                    },
                    "response": {
                        "status": 200,
                        "headers": {
                            "Content-Type": "text/html"
                        },
                        "body": {"key": "value"}
                    }
                }
            ],
        })
        self.assertEqual('consumer', contract.consumer)
        self.assertEqual('provider', contract.provider)
        self.assertEqual(2, len(contract.interactions))

    def test_add_interaction(self):
        contract = Contract('provider', 'consumer')
        interaction = Interaction()
        interaction.given("Test").upon_receiving("a request").with_request(
            method="get",
            path="/path",
            headers={"Custom-Header": "value"},
        ).will_respond_with(
            status=200,
            headers={"Content-Type": "text/html"},
            body={"key": "value"}
        )
        contract.add_interaction(interaction)
        contract.add_interaction(interaction)
        self.assertEqual(1, len(contract.interactions))
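Two behaviors stand out in the tests above: `add_interaction` ignores a duplicate interaction (two adds, one stored), and `to_dict` produces the pact-style envelope with `provider`/`consumer`/`metadata`/`interactions` keys. A tiny hypothetical mirror of that interface — not the real pacte implementation — makes the shape concrete:

```python
class MiniContract:
    """Hypothetical, minimal mirror of the Contract interface under test."""

    def __init__(self, provider, consumer, version='0.0.0'):
        self.provider = provider
        self.consumer = consumer
        self.version = version
        self.interactions = []

    def add_interaction(self, interaction):
        # Duplicate interactions are ignored, matching test_add_interaction.
        if interaction not in self.interactions:
            self.interactions.append(interaction)

    def to_dict(self):
        # Pact-style envelope asserted by the serialization tests.
        return {
            'provider': {'name': self.provider},
            'consumer': {'name': self.consumer},
            'metadata': {'pacte': {'version': self.version}},
            'interactions': list(self.interactions),
        }
```

Adding the same interaction dict twice leaves a single entry, and `to_dict()` yields the four-key envelope the assertions compare against.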
# ---- tests/__init__.py (khurrumsaleem/dassh, BSD-3-Clause) ----
from . import conftest
# ---- theano/tests/__init__.py (ganguli-lab/Theano, BSD-3-Clause) ----
try:
    from main import main, TheanoNoseTester
except ImportError:
    pass

import unittest_tools
eeadf20d892b9824c5b44d8f261f3a23a08dbee0 | 20 | py | Python | lean/__init__.py | CBMM/lean-python-bindings | 812781aa12af18bb9662f78274b005310860a758 | [
"Apache-2.0"
] | 8 | 2018-04-18T23:59:59.000Z | 2021-07-29T14:06:21.000Z | lean/__init__.py | CBMM/lean-python-bindings | 812781aa12af18bb9662f78274b005310860a758 | [
"Apache-2.0"
] | 3 | 2017-08-25T15:26:32.000Z | 2019-10-26T15:13:28.000Z | lean/__init__.py | CBMM/lean-python-bindings | 812781aa12af18bb9662f78274b005310860a758 | [
"Apache-2.0"
] | 4 | 2017-08-24T22:01:35.000Z | 2021-02-18T12:00:16.000Z | from .lean import *
# ---- tests/conftest.py (jaustinpage/silver-spork, MIT) ----
"""Setup flask fixture."""
import pytest

import silver_spork


@pytest.fixture(scope="session")
def app():
    return silver_spork.create_app()
# ---- comment/templatetags/comment.py (fajardm/django-comment, MIT) ----
from django import template
from comment.forms import CommentForm
from comment import models

register = template.Library()


@register.simple_tag
def comment_form(content_type, object_id):
    return CommentForm(initial={'content_type': content_type, 'object_id': object_id})


@register.simple_tag
def comment_list(content_type, object_id):
    return models.get_all_by_content_type_and_object_id(content_type, object_id)


@register.simple_tag
def total_comment(content_type, object_id):
    return models.get_total_comment(content_type, object_id)
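For intuition, Django's `simple_tag` decorator in the module above essentially registers a plain function under its name so the template engine can resolve and call it when the tag appears in a template. A hypothetical, simplified stand-in for `django.template.Library` (not Django's real implementation) illustrates the mechanic:

```python
class MiniLibrary:
    """Hypothetical, simplified stand-in for django.template.Library."""

    def __init__(self):
        self.tags = {}

    def simple_tag(self, func):
        # Register the function under its own name and return it unchanged,
        # so it can still be imported and called directly.
        self.tags[func.__name__] = func
        return func

register = MiniLibrary()

@register.simple_tag
def total_comment(content_type, object_id):
    # Placeholder body standing in for models.get_total_comment(...).
    return f'{content_type}:{object_id}'

# The template engine would look the tag up by name and call it:
print(register.tags['total_comment']('post', 7))  # → post:7
```

The real `Library.simple_tag` additionally wraps the function to parse template arguments and resolve context variables; the registration-by-name idea is the same.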
# ---- tests/unit/api/test_subscriptions.py (ets-labs/newsfeed, BSD-3-Clause) ----
"""Subscription handler tests."""
import uuid
import datetime


async def test_get_subscriptions(web_client, container):
    """Check subscriptions getting handler."""
    newsfeed_id = '123'
    subscription_storage = container.subscription_storage()
    await subscription_storage.add(
        {
            'id': str(uuid.uuid4()),
            'newsfeed_id': newsfeed_id,
            'to_newsfeed_id': '124',
            'subscribed_at': datetime.datetime.utcnow().timestamp(),
        },
    )
    await subscription_storage.add(
        {
            'id': str(uuid.uuid4()),
            'newsfeed_id': newsfeed_id,
            'to_newsfeed_id': '125',
            'subscribed_at': datetime.datetime.utcnow().timestamp(),
        },
    )
    await subscription_storage.add(
        {
            'id': str(uuid.uuid4()),
            'newsfeed_id': '125',
            'to_newsfeed_id': '126',
            'subscribed_at': datetime.datetime.utcnow().timestamp(),
        },
    )

    response = await web_client.get(f'/newsfeed/{newsfeed_id}/subscriptions/')

    assert response.status == 200
    data = await response.json()
    subscription_1, subscription_2 = data['results']
    assert uuid.UUID(subscription_1['id'])
    assert subscription_1['newsfeed_id'] == newsfeed_id
    assert subscription_1['to_newsfeed_id'] == '125'
    assert int(subscription_1['subscribed_at'])
    assert uuid.UUID(subscription_2['id'])
    assert subscription_2['newsfeed_id'] == newsfeed_id
    assert subscription_2['to_newsfeed_id'] == '124'
    assert int(subscription_2['subscribed_at'])


async def test_post_subscriptions(web_client, container):
    """Check subscriptions posting handler."""
    newsfeed_id = '124'

    response = await web_client.post(
        f'/newsfeed/{newsfeed_id}/subscriptions/',
        json={
            'to_newsfeed_id': '123',
        },
    )

    assert response.status == 200
    data = await response.json()
    assert uuid.UUID(data['id'])

    subscription_storage = container.subscription_storage()
    subscriptions = await subscription_storage.get_by_to_newsfeed_id(newsfeed_id='123')
    assert len(subscriptions) == 1
    assert subscriptions[0]['newsfeed_id'] == '124'
    assert subscriptions[0]['to_newsfeed_id'] == '123'


async def test_post_subscription_to_self(web_client, container):
    """Check subscriptions posting handler."""
    newsfeed_id = '124'

    response = await web_client.post(
        f'/newsfeed/{newsfeed_id}/subscriptions/',
        json={
            'to_newsfeed_id': newsfeed_id,
        },
    )

    assert response.status == 400
    data = await response.json()
    assert data['message'] == f'Subscription of newsfeed "{newsfeed_id}" to itself is restricted'

    subscription_storage = container.subscription_storage()
    subscriptions = await subscription_storage.get_by_newsfeed_id(newsfeed_id=newsfeed_id)
    assert len(subscriptions) == 0


async def test_post_subscription_with_abnormally_long_newsfeed_id(web_client, container):
    """Check subscriptions posting handler."""
    newsfeed_id_max_length = container.newsfeed_id_specification().max_length
    newsfeed_id = 'x' * (newsfeed_id_max_length + 1)

    response = await web_client.post(
        f'/newsfeed/{newsfeed_id}/subscriptions/',
        json={
            'to_newsfeed_id': newsfeed_id,
        },
    )

    assert response.status == 400
    data = await response.json()
    assert data['message'] == (
        f'Newsfeed id "{newsfeed_id[:newsfeed_id_max_length]}..." is too long'
    )

    subscription_storage = container.subscription_storage()
    subscriptions = await subscription_storage.get_by_newsfeed_id(newsfeed_id=newsfeed_id)
    assert len(subscriptions) == 0


async def test_post_subscription_with_abnormally_long_to_newsfeed_id(web_client, container):
    """Check subscriptions posting handler."""
    newsfeed_id = '124'
    newsfeed_id_max_length = container.newsfeed_id_specification().max_length
    to_newsfeed_id = 'x' * (newsfeed_id_max_length + 1)

    response = await web_client.post(
        f'/newsfeed/{newsfeed_id}/subscriptions/',
        json={
            'to_newsfeed_id': to_newsfeed_id,
        },
    )

    assert response.status == 400
    data = await response.json()
    assert data['message'] == (
        f'Newsfeed id "{to_newsfeed_id[:newsfeed_id_max_length]}..." is too long'
    )

    subscription_storage = container.subscription_storage()
    subscriptions = await subscription_storage.get_by_newsfeed_id(newsfeed_id=newsfeed_id)
    assert len(subscriptions) == 0


async def test_post_multiple_subscriptions_to_the_same_feed(web_client, container):
    """Check subscriptions posting handler."""
    newsfeed_id = '123'
    to_newsfeed_id = '124'
    subscription_storage = container.subscription_storage()
    await subscription_storage.add(
        {
            'id': str(uuid.uuid4()),
            'newsfeed_id': newsfeed_id,
            'to_newsfeed_id': to_newsfeed_id,
            'subscribed_at': datetime.datetime.utcnow().timestamp(),
        },
    )

    response = await web_client.post(
        f'/newsfeed/{newsfeed_id}/subscriptions/',
        json={
            'to_newsfeed_id': to_newsfeed_id,
        },
    )

    assert response.status == 400
    data = await response.json()
    assert data['message'] == (
        f'Subscription from newsfeed "{newsfeed_id}" to "{to_newsfeed_id}" already exists'
    )

    subscription_storage = container.subscription_storage()
    subscriptions = await subscription_storage.get_by_to_newsfeed_id(newsfeed_id=to_newsfeed_id)
    assert len(subscriptions) == 1
    assert subscriptions[0]['newsfeed_id'] == newsfeed_id
    assert subscriptions[0]['to_newsfeed_id'] == to_newsfeed_id


async def test_delete_subscriptions(web_client, container):
    """Check subscriptions deleting handler."""
    newsfeed_id = '123'
    subscription_id_1 = uuid.uuid4()
    subscription_id_2 = uuid.uuid4()
    subscription_id_3 = uuid.uuid4()
    subscription_storage = container.subscription_storage()
    await subscription_storage.add(
        {
            'id': str(subscription_id_1),
            'newsfeed_id': newsfeed_id,
            'to_newsfeed_id': '124',
            'subscribed_at': datetime.datetime.utcnow().timestamp(),
        },
    )
    await subscription_storage.add(
        {
            'id': str(subscription_id_2),
            'newsfeed_id': newsfeed_id,
            'to_newsfeed_id': '125',
            'subscribed_at': datetime.datetime.utcnow().timestamp(),
        },
    )
    await subscription_storage.add(
        {
            'id': str(subscription_id_3),
            'newsfeed_id': '125',
            'to_newsfeed_id': '126',
            'subscribed_at': datetime.datetime.utcnow().timestamp(),
        },
    )

    response = await web_client.delete(
        f'/newsfeed/{newsfeed_id}/subscriptions/{subscription_id_1}/',
    )

    assert response.status == 204
    subscription_2, = await subscription_storage.get_by_newsfeed_id(newsfeed_id)
    assert uuid.UUID(subscription_2['id']) == subscription_id_2
    assert len(await subscription_storage.get_by_to_newsfeed_id('124')) == 0
    assert len(await subscription_storage.get_by_to_newsfeed_id('125')) == 1
    assert len(await subscription_storage.get_by_to_newsfeed_id('126')) == 1


async def test_get_subscriber_subscriptions(web_client, container):
    """Check subscriber subscriptions getting handler."""
    newsfeed_id = '123'
    subscription_storage = container.subscription_storage()
    await subscription_storage.add(
        {
            'id': str(uuid.uuid4()),
            'newsfeed_id': '124',
            'to_newsfeed_id': newsfeed_id,
            'subscribed_at': datetime.datetime.utcnow().timestamp(),
        },
    )
    await subscription_storage.add(
        {
            'id': str(uuid.uuid4()),
            'newsfeed_id': '125',
            'to_newsfeed_id': newsfeed_id,
            'subscribed_at': datetime.datetime.utcnow().timestamp(),
        },
    )
    await subscription_storage.add(
        {
            'id': str(uuid.uuid4()),
            'newsfeed_id': '125',
            'to_newsfeed_id': '126',
            'subscribed_at': datetime.datetime.utcnow().timestamp(),
        },
    )

    response = await web_client.get(f'/newsfeed/{newsfeed_id}/subscribers/subscriptions/')

    assert response.status == 200
    data = await response.json()
    subscription_1, subscription_2 = data['results']
    assert uuid.UUID(subscription_1['id'])
    assert subscription_1['newsfeed_id'] == '125'
    assert subscription_1['to_newsfeed_id'] == newsfeed_id
    assert int(subscription_1['subscribed_at'])
    assert uuid.UUID(subscription_2['id'])
    assert subscription_2['newsfeed_id'] == '124'
    assert subscription_2['to_newsfeed_id'] == newsfeed_id
    assert int(subscription_2['subscribed_at'])
| 32.731618 | 97 | 0.656071 | 989 | 8,903 | 5.588473 | 0.076845 | 0.197214 | 0.078162 | 0.094084 | 0.911706 | 0.884929 | 0.817623 | 0.815451 | 0.791388 | 0.752488 | 0 | 0.024754 | 0.224082 | 8,903 | 271 | 98 | 32.852399 | 0.775333 | 0.003033 | 0 | 0.556604 | 0 | 0 | 0.168622 | 0.049871 | 0 | 0 | 0 | 0 | 0.198113 | 1 | 0 | false | 0 | 0.009434 | 0 | 0.009434 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
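The subscription tests above await a storage object exposing `add`, `get_by_newsfeed_id`, and `get_by_to_newsfeed_id`. As a rough, hypothetical sketch (not the project's real storage adapter), an in-memory implementation satisfying those calls could look like:

```python
import asyncio
import uuid


class InMemorySubscriptionStorage:
    """Hypothetical in-memory stand-in for the storage awaited in the tests."""

    def __init__(self):
        self._rows = []

    async def add(self, row):
        self._rows.append(dict(row))

    async def get_by_newsfeed_id(self, newsfeed_id):
        return [r for r in self._rows if r['newsfeed_id'] == newsfeed_id]

    async def get_by_to_newsfeed_id(self, to_newsfeed_id):
        return [r for r in self._rows if r['to_newsfeed_id'] == to_newsfeed_id]


async def demo():
    storage = InMemorySubscriptionStorage()
    await storage.add({'id': str(uuid.uuid4()),
                       'newsfeed_id': '123', 'to_newsfeed_id': '124'})
    return len(await storage.get_by_newsfeed_id('123'))


print(asyncio.run(demo()))  # -> 1
```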
6da5703b3aa0ec48583b9621b3059f80e3a41a56 | 109 | py | Python | ExploriPy/__init__.py | wolframalpha/exploripy | 5b15ae1dddd2b797a98cbd2e0b3cf3308e11cd58 | [
"MIT"
] | 24 | 2019-12-17T11:13:03.000Z | 2022-03-19T01:11:21.000Z | ExploriPy/__init__.py | wolframalpha/exploripy | 5b15ae1dddd2b797a98cbd2e0b3cf3308e11cd58 | [
"MIT"
] | 2 | 2019-05-03T21:16:16.000Z | 2019-08-06T04:32:20.000Z | ExploriPy/__init__.py | wolframalpha/exploripy | 5b15ae1dddd2b797a98cbd2e0b3cf3308e11cd58 | [
"MIT"
] | 11 | 2018-12-29T18:31:49.000Z | 2019-10-10T08:50:01.000Z | from ExploriPy.EDA import EDA
from ExploriPy.WOE_IV import WOE
from ExploriPy.FeatureType import FeatureType
| 27.25 | 45 | 0.862385 | 16 | 109 | 5.8125 | 0.4375 | 0.419355 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.110092 | 109 | 3 | 46 | 36.333333 | 0.958763 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
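ExploriPy's `__init__.py` re-exports `EDA`, `WOE`, and `FeatureType` so callers can import them from the package root instead of the submodules. The mechanics of that re-export pattern can be demonstrated with an in-memory module (the names below are illustrative only):

```python
import sys
import types

# Build a tiny fake package in memory to illustrate the re-export
# pattern used by ExploriPy/__init__.py.
pkg = types.ModuleType("fakepkg")
sub = types.ModuleType("fakepkg.core")


def EDA(data):  # stand-in for a class a submodule would export
    return f"EDA({data})"


sub.EDA = EDA
sys.modules["fakepkg"] = pkg
sys.modules["fakepkg.core"] = sub

# Re-export at the package level, mirroring `from ExploriPy.EDA import EDA`:
pkg.EDA = sub.EDA

from fakepkg import EDA as TopLevelEDA  # users now import from the root

print(TopLevelEDA("df"))  # -> EDA(df)
```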
6df527a6a7911755d4e44a4f8c0699431ebf7a1a | 2,932 | py | Python | tests/utils/test_deprecated_kwargs.py | aliavni/pyjanitor | 245012443d01247a591fd0e931b154c7a12a9753 | [
"MIT"
] | null | null | null | tests/utils/test_deprecated_kwargs.py | aliavni/pyjanitor | 245012443d01247a591fd0e931b154c7a12a9753 | [
"MIT"
] | null | null | null | tests/utils/test_deprecated_kwargs.py | aliavni/pyjanitor | 245012443d01247a591fd0e931b154c7a12a9753 | [
"MIT"
] | null | null | null | import pytest
from janitor.utils import deprecated_kwargs
@pytest.mark.utils
@pytest.mark.parametrize(
"arguments, message, func_kwargs, msg_expected",
[
(
["a"],
"The keyword argument '{argument}' of '{func_name}' is deprecated",
dict(a=1),
"The keyword argument 'a' of 'simple_sum' is deprecated",
),
(
["b"],
"The keyword argument '{argument}' of '{func_name}' is deprecated",
dict(b=2),
"The keyword argument 'b' of 'simple_sum' is deprecated",
),
(
["a", "b"],
"The option '{argument}' of '{func_name}' is deprecated.",
dict(a=1, b=2),
"The option 'a' of 'simple_sum' is deprecated.",
),
(
["b", "a"],
"The keyword of function is deprecated.",
dict(a=1, b=2),
"The keyword of function is deprecated.",
),
],
)
def test_error(arguments, message, func_kwargs, msg_expected):
@deprecated_kwargs(*arguments, message=message)
def simple_sum(alpha, beta, a=0, b=0):
return alpha + beta
with pytest.raises(ValueError, match=msg_expected):
simple_sum(1, 2, **func_kwargs)
@pytest.mark.utils
@pytest.mark.parametrize(
"arguments, message, func_kwargs, msg_expected",
[
(
["a"],
"The keyword argument '{argument}' of '{func_name}' is deprecated",
dict(a=1),
"The keyword argument 'a' of 'simple_sum' is deprecated",
),
(
["b"],
"The keyword argument '{argument}' of '{func_name}' is deprecated",
dict(b=2),
"The keyword argument 'b' of 'simple_sum' is deprecated",
),
(
["a", "b"],
"The option '{argument}' of '{func_name}' is deprecated.",
dict(a=1, b=2),
"The option 'a' of 'simple_sum' is deprecated.",
),
(
["b", "a"],
"The keyword of function is deprecated.",
dict(a=1, b=2),
"The keyword of function is deprecated.",
),
],
)
def test_warning(arguments, message, func_kwargs, msg_expected):
@deprecated_kwargs(*arguments, message=message, error=False)
def simple_sum(alpha, beta, a=0, b=0):
return alpha + beta
with pytest.warns(DeprecationWarning, match=msg_expected):
simple_sum(1, 2, **func_kwargs)
@pytest.mark.utils
@pytest.mark.parametrize(
"arguments, func_args, expected",
[
(["a"], [0, 0], 0),
(["b"], [1, 1], 2),
(["a", "b"], [0, 1], 1),
(["b", "a"], [0, 1], 1),
],
)
def test_without_error(arguments, func_args, expected):
@deprecated_kwargs(*arguments)
def simple_sum(alpha, beta, a=0, b=0):
return alpha + beta
assert simple_sum(*func_args) == expected
| 29.32 | 79 | 0.531037 | 341 | 2,932 | 4.445748 | 0.13783 | 0.126649 | 0.094987 | 0.07124 | 0.842348 | 0.842348 | 0.842348 | 0.842348 | 0.842348 | 0.842348 | 0 | 0.017026 | 0.318895 | 2,932 | 99 | 80 | 29.616162 | 0.742113 | 0 | 0 | 0.662921 | 0 | 0 | 0.328104 | 0 | 0 | 0 | 0 | 0 | 0.011236 | 1 | 0.067416 | false | 0 | 0.022472 | 0.033708 | 0.123596 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
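The parametrized cases above pin down the decorator's contract: `{argument}` and `{func_name}` are formatted into the message, a `ValueError` is raised by default, and a `DeprecationWarning` is emitted instead when `error=False`. A minimal sketch matching that contract (assumed behaviour, not janitor's actual implementation):

```python
import functools
import warnings


def deprecated_kwargs(*names,
                      message="'{argument}' of '{func_name}' is deprecated",
                      error=True):
    """Sketch of a deprecated-kwargs decorator matching the tested contract."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for name in names:
                if name in kwargs:
                    msg = message.format(argument=name,
                                         func_name=func.__name__)
                    if error:
                        raise ValueError(msg)
                    warnings.warn(msg, DeprecationWarning)
            return func(*args, **kwargs)
        return wrapper
    return decorator


@deprecated_kwargs("a", error=False)
def simple_sum(alpha, beta, a=0, b=0):
    return alpha + beta


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    print(simple_sum(1, 2, a=5))  # -> 3
print(caught[0].category.__name__)  # -> DeprecationWarning
```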
098be521e368bad4cfd9e8b41c6ee1a31ac62156 | 9,200 | py | Python | tests/test_bib.py | gvwilson/mccole-old | 5d724a64e7e91d39d72947798f5ee38bfdf96a23 | [
"MIT"
] | 1 | 2022-01-08T04:10:46.000Z | 2022-01-08T04:10:46.000Z | tests/test_bib.py | gvwilson/mccole | 5d724a64e7e91d39d72947798f5ee38bfdf96a23 | [
"MIT"
] | 43 | 2022-01-21T11:04:39.000Z | 2022-02-11T21:11:54.000Z | tests/test_bib.py | gvwilson/mccole-old | 5d724a64e7e91d39d72947798f5ee38bfdf96a23 | [
"MIT"
] | 1 | 2022-01-23T18:52:23.000Z | 2022-01-23T18:52:23.000Z | """Test bibliography."""
import logging
from textwrap import dedent
import pytest
from mccole.accounting import Config
from mccole.bib import bib_to_html, load_bib
from mccole.util import McColeExc
def test_bib_empty_when_not_specified(fs):
config = Config()
load_bib(config)
assert config.bib_data == []
assert config.bib_keys == set()
def test_bib_fail_with_nonexistent_file(fs):
config = Config(bib="test.bib")
with pytest.raises(McColeExc):
load_bib(config)
def test_bib_load_empty_file_when_present(fs):
fs.create_file("test.bib", contents="")
config = Config(bib="test.bib")
load_bib(config)
assert config.bib_data == []
assert config.bib_keys == set()
def test_bib_load_file_containing_data(fs):
fs.create_file(
"test.bib",
contents=dedent(
"""\
@book{Key1234,
author = {Some Key},
title = {Some Title},
publisher = {Some Publisher},
year = {1234},
isbn = {978-1234567890},
}
"""
),
)
config = Config(bib="test.bib")
load_bib(config)
assert len(config.bib_data) == 1
assert config.bib_data[0]["ID"] == "Key1234"
assert config.bib_keys == {"Key1234"}
def test_bib_convert_article_to_html(fs):
fs.create_file(
"test.bib",
contents=dedent(
"""\
@article{Key1234,
author = {A B and C D},
title = {Some paper},
journal = {Journal},
month = {1},
year = {1234},
publisher = {Some Publisher},
doi = {12.34/56-78-90}
}
"""
),
)
config = Config(bib="test.bib")
load_bib(config)
html = bib_to_html(config)
assert '<p id="Key1234" class="bib">' in html
assert '<span class="bibkey">Key1234</span>' in html
assert (
'<span class="bibentry">A B and C D: '
'"Some paper". <em>Journal</em>, Jan 1234, Some Publisher, '
'<a href="https://doi.org/12.34/56-78-90">12.34/56-78-90</a>.</span>' in html
)
def test_bib_convert_article_without_doi_to_html(fs):
fs.create_file(
"test.bib",
contents=dedent(
"""\
@article{Key1234,
author = {A B and C D},
title = {Some paper},
journal = {Journal},
month = {1},
year = {1234},
publisher = {Some Publisher}
}
"""
),
)
config = Config(bib="test.bib")
load_bib(config)
html = bib_to_html(config)
assert '<p id="Key1234" class="bib">' in html
assert '<span class="bibkey">Key1234</span>' in html
assert (
'<span class="bibentry">A B and C D: '
'"Some paper". <em>Journal</em>, Jan 1234, Some Publisher.</span>' in html
)
def test_bib_convert_article_volume_number_to_html(fs):
fs.create_file(
"test.bib",
contents=dedent(
"""\
@article{Key1234,
author = {A B and C D},
title = {Some paper},
journal = {Journal},
month = {1},
year = {1234},
number = {7},
volume = {3},
publisher = {Some Publisher},
doi = {12.34/56-78-90}
}
"""
),
)
config = Config(bib="test.bib")
load_bib(config)
html = bib_to_html(config)
assert '<p id="Key1234" class="bib">' in html
assert '<span class="bibkey">Key1234</span>' in html
assert (
'<span class="bibentry">A B and C D: '
'"Some paper". <em>Journal</em>, 3(7), Jan 1234, Some Publisher, '
'<a href="https://doi.org/12.34/56-78-90">12.34/56-78-90</a>.</span>' in html
)
def test_bib_convert_article_volume_without_number_to_html(fs):
fs.create_file(
"test.bib",
contents=dedent(
"""\
@article{Key1234,
author = {A B and C D},
title = {Some paper},
journal = {Journal},
month = {1},
year = {1234},
volume = {3},
publisher = {Some Publisher},
doi = {12.34/56-78-90}
}
"""
),
)
config = Config(bib="test.bib")
load_bib(config)
html = bib_to_html(config)
assert '<p id="Key1234" class="bib">' in html
assert '<span class="bibkey">Key1234</span>' in html
assert (
'<span class="bibentry">A B and C D: '
'"Some paper". <em>Journal</em>, 3, Jan 1234, Some Publisher, '
'<a href="https://doi.org/12.34/56-78-90">12.34/56-78-90</a>.</span>' in html
)
def test_bib_convert_book_to_html(fs):
fs.create_file(
"test.bib",
contents=dedent(
"""\
@book{Key1234,
author = {Some Author},
title = {Some Title},
publisher = {Some Publisher},
year = {1234},
isbn = {978-1234567890},
}
"""
),
)
config = Config(bib="test.bib")
load_bib(config)
html = bib_to_html(config)
assert '<p id="Key1234" class="bib">' in html
assert '<span class="bibkey">Key1234</span>' in html
assert (
'<span class="bibentry">Some Author: '
"<em>Some Title</em> Some Publisher, 1234, 978-1234567890.</span>" in html
)
def test_bib_convert_edited_book_to_html(fs):
fs.create_file(
"test.bib",
contents=dedent(
"""\
@book{Key1234,
editor = {A. N. Editor},
title = {Some Title},
publisher = {Some Publisher},
year = {1234},
isbn = {978-1234567890},
}
"""
),
)
config = Config(bib="test.bib")
load_bib(config)
html = bib_to_html(config)
assert '<p id="Key1234" class="bib">' in html
assert '<span class="bibkey">Key1234</span>' in html
assert (
'<span class="bibentry">A. N. Editor (ed.): '
"<em>Some Title</em> Some Publisher, 1234, 978-1234567890.</span>" in html
)
def test_bib_convert_incollection_to_html(fs):
fs.create_file(
"test.bib",
contents=dedent(
"""\
@incollection{Key1234,
author = {Some Author},
title = {Some Article},
editor = {A B and C D and E F},
publisher = {Some Publisher},
booktitle = {Some Book},
year = {1234}
}
"""
),
)
config = Config(bib="test.bib")
load_bib(config)
html = bib_to_html(config)
assert '<p id="Key1234" class="bib">' in html
assert '<span class="bibkey">Key1234</span>' in html
assert (
'<span class="bibentry">Some Author: "Some Article". '
"In A B, C D, and E F (ed.): <em>Some Book</em>, Some Publisher, 1234." in html
)
def test_bib_convert_inproceedings_to_html(fs):
fs.create_file(
"test.bib",
contents=dedent(
"""\
@inproceedings{Key1234,
author = {Some Author},
title = {Some Article},
booktitle = {Some Book},
year = {1234},
doi = {12.3456/78.90},
}
"""
),
)
config = Config(bib="test.bib")
load_bib(config)
html = bib_to_html(config)
assert '<p id="Key1234" class="bib">' in html
assert '<span class="bibkey">Key1234</span>' in html
assert (
'<span class="bibentry">Some Author: "Some Article". '
"In <em>Some Book</em>, 1234, "
'<a href="https://doi.org/12.3456/78.90">12.3456/78.90</a>.</span>' in html
)
def test_bib_convert_misc_to_html(fs):
fs.create_file(
"test.bib",
contents=dedent(
"""\
@misc{Key1234,
author = {Some Author},
title = {Some Article},
year = {1234},
url = {http://some.where}
}
"""
),
)
config = Config(bib="test.bib")
load_bib(config)
html = bib_to_html(config)
assert '<p id="Key1234" class="bib">' in html
assert '<span class="bibkey">Key1234</span>' in html
assert (
'<span class="bibentry">Some Author: "Some Article" '
'<a href="http://some.where">http://some.where</a>, '
"viewed 1234.</span>" in html
)
def test_bib_convert_missing_year(fs, caplog):
fs.create_file(
"test.bib",
contents=dedent(
"""\
@misc{Key1234,
author = {Some Author},
title = {Some Article},
url = {http://some.where}
}
"""
),
)
config = Config(bib="test.bib")
load_bib(config)
with caplog.at_level(logging.DEBUG):
html = bib_to_html(config)
assert '<p id="Key1234" class="bib">' in html
assert '<span class="bibkey">Key1234</span>' in html
assert (
'<span class="bibentry">Some Author: "Some Article" '
'<a href="http://some.where">http://some.where</a>.' in html
)
assert len(caplog.record_tuples) == 1
assert "Bibliography entry missing year" in caplog.record_tuples[0][2]
def test_bib_convert_missing_url(fs):
fs.create_file(
"test.bib",
contents=dedent(
"""\
@misc{Key1234,
author = {Some Author},
title = {Some Article},
year = {1234}
}
"""
),
)
config = Config(bib="test.bib")
load_bib(config)
html = bib_to_html(config)
assert '<p id="Key1234" class="bib">' in html
assert '<span class="bibkey">Key1234</span>' in html
assert (
'<span class="bibentry">Some Author: "Some Article", viewed 1234.</span>'
in html
)
| 25.988701 | 87 | 0.560217 | 1,174 | 9,200 | 4.26661 | 0.091993 | 0.058694 | 0.055101 | 0.070274 | 0.851867 | 0.818327 | 0.808944 | 0.782192 | 0.782192 | 0.767419 | 0 | 0.064741 | 0.281413 | 9,200 | 353 | 88 | 26.062323 | 0.692936 | 0.001957 | 0 | 0.673171 | 0 | 0.029268 | 0.337169 | 0.071221 | 0 | 0 | 0 | 0 | 0.204878 | 1 | 0.073171 | false | 0 | 0.029268 | 0 | 0.102439 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
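Each test above feeds a BibTeX entry through `load_bib` and checks substrings of the generated HTML. The shape of that pipeline can be sketched with a deliberately tiny parser and a book formatter (a toy, not mccole's real `bib_to_html`):

```python
import re


def parse_entry(text):
    """Very small BibTeX reader for a single entry (sketch only)."""
    m = re.match(r"@(\w+)\{(\w+),", text)
    entry = {"ENTRYTYPE": m.group(1), "ID": m.group(2)}
    for key, value in re.findall(r"(\w+)\s*=\s*\{([^}]*)\}", text):
        entry[key] = value
    return entry


def book_to_html(entry):
    """Format a @book entry in the same style the tests assert on."""
    return (
        f'<p id="{entry["ID"]}" class="bib">'
        f'<span class="bibkey">{entry["ID"]}</span> '
        f'<span class="bibentry">{entry["author"]}: '
        f'<em>{entry["title"]}</em> {entry["publisher"]}, '
        f'{entry["year"]}, {entry["isbn"]}.</span></p>'
    )


src = """@book{Key1234,
author = {Some Author},
title = {Some Title},
publisher = {Some Publisher},
year = {1234},
isbn = {978-1234567890},
}"""
html = book_to_html(parse_entry(src))
print('<span class="bibkey">Key1234</span>' in html)  # -> True
```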
111a4915d1bee1ba177fd81392a632c1b68a469f | 3,879 | py | Python | tests/test_cflow_line_parser.py | andymeneely/attack-surface-metrics | 9cef791a79771ee29f18a0da2159f36c3df32755 | [
"MIT"
] | 16 | 2015-12-25T10:53:10.000Z | 2022-02-26T08:27:55.000Z | tests/test_cflow_line_parser.py | andymeneely/attack-surface-metrics | 9cef791a79771ee29f18a0da2159f36c3df32755 | [
"MIT"
] | 30 | 2015-01-29T19:34:31.000Z | 2021-06-10T17:22:57.000Z | tests/test_cflow_line_parser.py | andymeneely/attack-surface-metrics | 9cef791a79771ee29f18a0da2159f36c3df32755 | [
"MIT"
] | 4 | 2016-11-03T15:59:42.000Z | 2020-10-29T17:56:59.000Z | __author__ = 'kevin'
import unittest
from attacksurfacemeter.loaders.cflow_line_parser import CflowLineParser
class CflowLineParserTestCase(unittest.TestCase):
def test_get_function_name(self):
# Arrange
test_line_parser = CflowLineParser.get_instance("GreeterSayHi() <void GreeterSayHi () at ./src/helloworld.c:48>:")
# Act
test_function_name = test_line_parser.get_function_name()
# Assert
self.assertEqual("GreeterSayHi", test_function_name)
def test_get_function_signature(self):
# Arrange
test_line_parser = CflowLineParser.get_instance("GreeterSayHi() <void GreeterSayHi () at ./src/helloworld.c:48>:")
# Act
test_function_signature = test_line_parser.get_function_signature()
# Assert
self.assertEqual("./src/helloworld.c", test_function_signature)
def test_get_function_name_name_only(self):
# Arrange
test_line_parser = CflowLineParser.get_instance(" printf()")
# Act
test_function_name = test_line_parser.get_function_name()
# Assert
self.assertEqual("printf", test_function_name)
def test_get_function_signature_name_only(self):
# Arrange
test_line_parser = CflowLineParser.get_instance(" printf()")
# Act
test_function_signature = test_line_parser.get_function_signature()
# Assert
self.assertEqual("", test_function_signature)
def test_get_level_0(self):
# Arrange
test_line_parser = CflowLineParser.get_instance("GreeterSayHi() <void GreeterSayHi () at ./src/helloworld.c:48>:")
# Act
test_level = test_line_parser.get_level()
# Assert
self.assertEqual(0, test_level)
def test_get_level_1(self):
# Arrange
test_line_parser = CflowLineParser.get_instance(" recursive_a() <void recursive_a (int i) at ./src/greetings.c:26> (R):")
# Act
test_level = test_line_parser.get_level()
# Assert
self.assertEqual(1, test_level)
def test_get_level_2(self):
# Arrange
test_line_parser = CflowLineParser.get_instance(" recursive_b() <void recursive_b (int i) at ./src/greetings.c:32> (R):")
# Act
test_level = test_line_parser.get_level()
# Assert
self.assertEqual(2, test_level)
def test_issue_41(self):
'''Regression test for the fix to issue #41.
Specifics:
https://github.com/andymeneely/attack-surface-metrics/issues/41
'''
# Arrange
test_line_parser = CflowLineParser.get_instance(
"mp_msg() <void mp_msg (int mod, int lev, const char *format, "
"...) at ./libavfilter/vf_mp.c:353>: [see 20795]"
)
# Act
test_function_signature = test_line_parser.get_function_signature()
# Assert
self.assertEqual("./libavfilter/vf_mp.c", test_function_signature)
# Arrange
test_line_parser = CflowLineParser.get_instance(
" mp_msg() <void mp_msg (int mod, int lev, const char "
"*format, ...) at ./libavfilter/vf_mp.c:353>:"
)
# Act
test_function_signature = test_line_parser.get_function_signature()
# Assert
self.assertEqual("./libavfilter/vf_mp.c", test_function_signature)
# Arrange
test_line_parser = CflowLineParser.get_instance(
" mp_msg() <void mp_msg (int mod, int lev, const char "
"*format, ...) at ./libavfilter/vf_mp.c:353>: [see 20795]"
)
# Act
test_function_signature = test_line_parser.get_function_signature()
# Assert
self.assertEqual("./libavfilter/vf_mp.c", test_function_signature)
if __name__ == '__main__':
unittest.main()
| 31.032 | 136 | 0.637535 | 442 | 3,879 | 5.237557 | 0.176471 | 0.090713 | 0.12095 | 0.090713 | 0.847948 | 0.833261 | 0.76933 | 0.76933 | 0.732181 | 0.680346 | 0 | 0.014296 | 0.260634 | 3,879 | 124 | 137 | 31.282258 | 0.792887 | 0.080175 | 0 | 0.433962 | 0 | 0.037736 | 0.231714 | 0.072857 | 0 | 0 | 0 | 0 | 0.188679 | 1 | 0.150943 | false | 0 | 0.037736 | 0 | 0.207547 | 0.056604 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
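The assertions above fix the cflow line format: the nesting level comes from 4-space indentation, the function name precedes `()`, and the signature is the path in `at <file>:<line>`, with trailing `(R)` or `[see N]` markers ignored. A small parser sketch consistent with those cases (assumed indent width; not the library's actual `CflowLineParser`):

```python
import re


class CflowLine:
    """Sketch of parsing one line of GNU cflow output (assumes 4-space indents)."""

    INDENT = 4

    def __init__(self, line):
        stripped = line.lstrip(" ")
        self.level = (len(line) - len(stripped)) // self.INDENT
        self.name = stripped.split("(", 1)[0]
        # Pull the file path out of "at <path>:<line>"; empty if absent.
        m = re.search(r" at (.+?):\d+", line)
        self.signature = m.group(1) if m else ""


line = "    recursive_a() <void recursive_a (int i) at ./src/greetings.c:26> (R):"
parsed = CflowLine(line)
print(parsed.level, parsed.name, parsed.signature)  # -> 1 recursive_a ./src/greetings.c
```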
11349d8365aa7050614f8d115d97b0559b165f21 | 48 | py | Python | NAS/AngleNAS/utils/__init__.py | naviocean/SimpleCVReproduction | 61b43e3583977f42e6f91ef176ec5e1701e98d33 | [
"Apache-2.0"
] | 923 | 2020-01-11T06:36:53.000Z | 2022-03-31T00:26:57.000Z | NAS/AngleNAS/utils/__init__.py | Twenty3hree/SimpleCVReproduction | 9939f8340c54dbd69b0017cecad875dccf428f26 | [
"Apache-2.0"
] | 25 | 2020-02-27T08:35:46.000Z | 2022-01-25T08:54:19.000Z | NAS/AngleNAS/utils/__init__.py | Twenty3hree/SimpleCVReproduction | 9939f8340c54dbd69b0017cecad875dccf428f26 | [
"Apache-2.0"
] | 262 | 2020-01-02T02:19:40.000Z | 2022-03-23T04:56:16.000Z | from .imagenet import *
from .nas_utils import * | 24 | 24 | 0.770833 | 7 | 48 | 5.142857 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.145833 | 48 | 2 | 24 | 24 | 0.878049 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
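Wildcard imports like these re-export every public name from the submodules, and a module's `__all__` controls exactly what `import *` pulls in. A self-contained demonstration using an in-memory module:

```python
import sys
import types

# Create a throwaway module whose __all__ exposes only `accuracy`.
mod = types.ModuleType("fake_utils")
exec(
    "__all__ = ['accuracy']\n"
    "def accuracy(correct, total):\n"
    "    return correct / total\n"
    "def _helper():\n"
    "    return 'private'\n",
    mod.__dict__,
)
sys.modules["fake_utils"] = mod

namespace = {}
exec("from fake_utils import *", namespace)
print("accuracy" in namespace, "_helper" in namespace)  # -> True False
```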
11657b9b732fc8388b10983c41ea014ed40b489a | 93 | py | Python | app/players/__init__.py | rookiebulls/scala | 504efd5187b8f15a54086590e3e5572d9eda8f16 | [
"MIT"
] | null | null | null | app/players/__init__.py | rookiebulls/scala | 504efd5187b8f15a54086590e3e5572d9eda8f16 | [
"MIT"
] | null | null | null | app/players/__init__.py | rookiebulls/scala | 504efd5187b8f15a54086590e3e5572d9eda8f16 | [
"MIT"
] | null | null | null | from flask import Blueprint
players = Blueprint('players', __name__)
# Imported at the bottom so the blueprint object exists before the route
# modules (which import it back) are loaded, avoiding a circular import.
from . import routes
| 13.285714 | 40 | 0.763441 | 11 | 93 | 6.090909 | 0.636364 | 0.477612 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16129 | 93 | 6 | 41 | 15.5 | 0.858974 | 0 | 0 | 0 | 0 | 0 | 0.076087 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0.666667 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
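The `from . import routes` at the bottom is the usual Flask blueprint idiom: view functions attach to the blueprint after it exists, and the app registers the whole bundle later. The deferred-registration idea can be sketched without Flask (a hypothetical mini-framework with illustrative names):

```python
class Blueprint:
    """Collects routes now; an app wires them in later."""

    def __init__(self, name):
        self.name = name
        self.routes = {}

    def route(self, path):
        def deco(fn):
            self.routes[path] = fn
            return fn
        return deco


class App:
    def __init__(self):
        self.routes = {}

    def register_blueprint(self, bp, prefix=""):
        for path, fn in bp.routes.items():
            self.routes[prefix + path] = fn


players = Blueprint("players")


@players.route("/list")
def list_players():
    return ["alice", "bob"]


app = App()
app.register_blueprint(players, prefix="/players")
print(app.routes["/players/list"]())  # -> ['alice', 'bob']
```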
fec4b9a451b23e6eaff926df00f8ac62689ccc27 | 4,960 | py | Python | python_modules/libraries/dagster-aws/dagster_aws_tests/ecs_tests/launcher_tests/test_secrets.py | asamoal/dagster | 08fad28e4b608608ce090ce2e8a52c2cf9dd1b64 | [
"Apache-2.0"
] | null | null | null | python_modules/libraries/dagster-aws/dagster_aws_tests/ecs_tests/launcher_tests/test_secrets.py | asamoal/dagster | 08fad28e4b608608ce090ce2e8a52c2cf9dd1b64 | [
"Apache-2.0"
] | null | null | null | python_modules/libraries/dagster-aws/dagster_aws_tests/ecs_tests/launcher_tests/test_secrets.py | asamoal/dagster | 08fad28e4b608608ce090ce2e8a52c2cf9dd1b64 | [
"Apache-2.0"
] | null | null | null | # pylint: disable=redefined-outer-name
# pylint: disable=unused-argument
# pylint: disable=unused-variable
from unittest.mock import MagicMock, patch
import pytest
def test_secrets(
ecs,
secrets_manager,
instance_cm,
launch_run,
tagged_secret,
other_secret,
configured_secret,
):
initial_task_definitions = ecs.list_task_definitions()["taskDefinitionArns"]
config = {
"secrets": [
{
"name": "HELLO",
"valueFrom": configured_secret.arn + "/hello",
}
],
}
with instance_cm(config) as instance:
launch_run(instance)
# A new task definition is created
task_definitions = ecs.list_task_definitions()["taskDefinitionArns"]
assert len(task_definitions) == len(initial_task_definitions) + 1
task_definition_arn = list(set(task_definitions).difference(initial_task_definitions))[0]
task_definition = ecs.describe_task_definition(taskDefinition=task_definition_arn)
task_definition = task_definition["taskDefinition"]
# It includes tagged secrets
secrets = task_definition["containerDefinitions"][0]["secrets"]
assert {"name": tagged_secret.name, "valueFrom": tagged_secret.arn} in secrets
# And configured secrets
assert {
"name": "HELLO",
"valueFrom": configured_secret.arn + "/hello",
} in secrets
# But no other secrets
assert len(secrets) == 2
def test_secrets_with_container_context(
ecs,
secrets_manager,
instance_cm,
launch_run_with_container_context,
tagged_secret,
other_secret,
configured_secret,
):
initial_task_definitions = ecs.list_task_definitions()["taskDefinitionArns"]
# Secrets config is pulled from container context on the run, rather than run launcher config
config = {"secrets_tag": None, "secrets": []}
with instance_cm(config) as instance:
launch_run_with_container_context(instance)
# A new task definition is created
task_definitions = ecs.list_task_definitions()["taskDefinitionArns"]
assert len(task_definitions) == len(initial_task_definitions) + 1
task_definition_arn = list(set(task_definitions).difference(initial_task_definitions))[0]
task_definition = ecs.describe_task_definition(taskDefinition=task_definition_arn)
task_definition = task_definition["taskDefinition"]
# It includes tagged secrets
secrets = task_definition["containerDefinitions"][0]["secrets"]
assert {"name": tagged_secret.name, "valueFrom": tagged_secret.arn} in secrets
# And configured secrets
assert {
"name": "HELLO",
"valueFrom": configured_secret.arn + "/hello",
} in secrets
# But no other secrets
assert len(secrets) == 2
def test_secrets_backcompat(
ecs,
secrets_manager,
instance_cm,
launch_run,
tagged_secret,
other_secret,
configured_secret,
):
initial_task_definitions = ecs.list_task_definitions()["taskDefinitionArns"]
with pytest.warns(DeprecationWarning, match="Setting secrets as a list of ARNs is deprecated"):
with instance_cm({"secrets": [configured_secret.arn]}) as instance:
launch_run(instance)
# A new task definition is created
task_definitions = ecs.list_task_definitions()["taskDefinitionArns"]
assert len(task_definitions) == len(initial_task_definitions) + 1
task_definition_arn = list(set(task_definitions).difference(initial_task_definitions))[0]
task_definition = ecs.describe_task_definition(taskDefinition=task_definition_arn)
task_definition = task_definition["taskDefinition"]
# It includes tagged secrets
secrets = task_definition["containerDefinitions"][0]["secrets"]
assert {"name": tagged_secret.name, "valueFrom": tagged_secret.arn} in secrets
# And configured secrets
assert {"name": configured_secret.name, "valueFrom": configured_secret.arn} in secrets
# But no other secrets
assert len(secrets) == 2
def test_empty_secrets(
ecs,
secrets_manager,
instance_cm,
launch_run,
):
initial_task_definitions = ecs.list_task_definitions()["taskDefinitionArns"]
with instance_cm({"secrets_tag": None}) as instance:
m = MagicMock()
with patch.object(instance.run_launcher, "secrets_manager", new=m):
launch_run(instance)
m.get_paginator.assert_not_called()
m.describe_secret.assert_not_called()
# A new task definition is created
task_definitions = ecs.list_task_definitions()["taskDefinitionArns"]
assert len(task_definitions) == len(initial_task_definitions) + 1
task_definition_arn = list(set(task_definitions).difference(initial_task_definitions))[0]
task_definition = ecs.describe_task_definition(taskDefinition=task_definition_arn)
task_definition = task_definition["taskDefinition"]
# No secrets
assert not task_definition["containerDefinitions"][0].get("secrets")
| 33.288591 | 99 | 0.71875 | 563 | 4,960 | 6.05151 | 0.150977 | 0.140886 | 0.077488 | 0.051658 | 0.79865 | 0.786909 | 0.786909 | 0.764015 | 0.732022 | 0.711476 | 0 | 0.003714 | 0.185685 | 4,960 | 148 | 100 | 33.513514 | 0.839812 | 0.110484 | 0 | 0.76 | 0 | 0 | 0.12224 | 0 | 0 | 0 | 0 | 0 | 0.16 | 1 | 0.04 | false | 0 | 0.02 | 0 | 0.06 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
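Every test above ultimately checks the same invariant: tagged and explicitly configured secrets are merged, deduplicated, and written into the container definition as `{"name", "valueFrom"}` pairs. A hypothetical merge helper capturing that invariant (the real launcher resolves these through Secrets Manager and the task definition API):

```python
def merge_secrets(tagged, configured):
    """Merge tagged and configured secrets into the shape ECS container
    definitions expect. Hypothetical helper; first occurrence of a name wins."""
    merged = []
    seen = set()
    for name, value_from in list(tagged.items()) + list(configured.items()):
        if name not in seen:
            merged.append({"name": name, "valueFrom": value_from})
            seen.add(name)
    return merged


secrets = merge_secrets(
    tagged={"TAGGED": "arn:aws:secretsmanager:region:acct:secret:tagged"},
    configured={"HELLO": "arn:aws:secretsmanager:region:acct:secret:cfg/hello"},
)
print(len(secrets))  # -> 2
```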
fee691e5686ff8dacf36f37235a7041cd5429150 | 670 | py | Python | tests/epyccel/modules/types.py | dina-fouad/pyccel | f4d919e673b400442b9c7b81212b6fbef749c7b7 | [
"MIT"
] | 206 | 2018-06-28T00:28:47.000Z | 2022-03-29T05:17:03.000Z | tests/epyccel/modules/types.py | dina-fouad/pyccel | f4d919e673b400442b9c7b81212b6fbef749c7b7 | [
"MIT"
] | 670 | 2018-07-23T11:02:24.000Z | 2022-03-30T07:28:05.000Z | tests/epyccel/modules/types.py | dina-fouad/pyccel | f4d919e673b400442b9c7b81212b6fbef749c7b7 | [
"MIT"
] | 19 | 2019-09-19T06:01:00.000Z | 2022-03-29T05:17:06.000Z | # pylint: disable=missing-function-docstring, missing-module-docstring/
def test_int_default(x : 'int'):
return x
def test_int64(x : 'int64'):
return x
def test_int32(x : 'int32'):
return x
def test_int16(x : 'int16'):
return x
def test_int8(x : 'int8'):
return x
def test_real_default(x : 'float'):
return x
def test_float32(x : 'float32'):
return x
def test_float64(x : 'float64'):
return x
def test_complex_default(x : 'complex'):
return x
def test_complex64(x : 'complex64'):
return x
def test_complex128(x : 'complex128'):
return x
def test_bool(x : 'bool'):
return x
| 17.631579 | 72 | 0.620896 | 95 | 670 | 4.221053 | 0.252632 | 0.209476 | 0.274314 | 0.38404 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.064386 | 0.258209 | 670 | 37 | 73 | 18.108108 | 0.742455 | 0.102985 | 0 | 0.5 | 0 | 0 | 0.126335 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
3a0c50461768493a0dd03f03132a76c592645d78 | 15,314 | py | Python | tests/commands/test__vi_k.py | uri/Vintageous | d5662872bcf1e7439875fe1c5133010db2ace8fd | [
"MIT"
] | null | null | null | tests/commands/test__vi_k.py | uri/Vintageous | d5662872bcf1e7439875fe1c5133010db2ace8fd | [
"MIT"
] | null | null | null | tests/commands/test__vi_k.py | uri/Vintageous | d5662872bcf1e7439875fe1c5133010db2ace8fd | [
"MIT"
] | null | null | null | import unittest
from Vintageous.vi.constants import _MODE_INTERNAL_NORMAL
from Vintageous.vi.constants import MODE_NORMAL
from Vintageous.vi.constants import MODE_VISUAL
from Vintageous.vi.constants import MODE_VISUAL_LINE
from Vintageous.tests.commands import set_text
from Vintageous.tests.commands import add_selection
from Vintageous.tests.commands import get_sel
from Vintageous.tests.commands import first_sel
from Vintageous.tests.commands import make_region_at_row
from Vintageous.tests.commands import BufferTest
# TODO: Test against folded regions.
# TODO: Ensure that we only create empty selections while testing. Add assert_all_sels_empty()?
# TODO: Test different values for xpos in combination with the starting col.
class Test_vi_k_InNormalMode(BufferTest):
def testMoveOne(self):
set_text(self.view, 'abc\nabc\nabc')
add_selection(self.view, make_region_at_row(self.view, row=1, col=1, size=0))
self.view.run_command('_vi_k', {'mode': MODE_NORMAL, 'count': 1, 'xpos': 1})
expected = make_region_at_row(self.view, row=0, col=1, size=0)
self.assertEqual(expected, first_sel(self.view))
def testMoveMany(self):
set_text(self.view, 'abc\nabc\nabc')
add_selection(self.view, make_region_at_row(self.view, row=2, col=1, size=0))
self.view.run_command('_vi_k', {'mode': MODE_NORMAL, 'count': 2, 'xpos': 1})
expected = make_region_at_row(self.view, row=0, col=1, size=0)
self.assertEqual(expected, first_sel(self.view))
def testMoveOntoLongerLine(self):
set_text(self.view, 'foo bar\nfoo')
add_selection(self.view, make_region_at_row(self.view, row=1, col=1, size=0))
self.view.run_command('_vi_k', {'mode': MODE_NORMAL, 'count': 1, 'xpos': 1})
expected = make_region_at_row(self.view, row=0, col=1, size=0)
self.assertEqual(expected, first_sel(self.view))
def testMoveOntoShorterLine(self):
set_text(self.view, 'foo\nfoo bar')
add_selection(self.view, make_region_at_row(self.view, row=1, col=5, size=0))
self.view.run_command('_vi_k', {'mode': MODE_NORMAL, 'count': 1, 'xpos': 5})
expected = make_region_at_row(self.view, row=0, col=2, size=0)
self.assertEqual(expected, first_sel(self.view))
def testMoveFromEmptyLine(self):
set_text(self.view, 'foo\n\n')
add_selection(self.view, make_region_at_row(self.view, row=1, col=0, size=0))
self.view.run_command('_vi_k', {'mode': MODE_NORMAL, 'count': 1, 'xpos': 1})
expected = make_region_at_row(self.view, row=0, col=1, size=0)
self.assertEqual(expected, first_sel(self.view))
def testMoveFromEmptyLineToEmptyLine(self):
set_text(self.view, '\n\n\n')
add_selection(self.view, make_region_at_row(self.view, row=1, col=0, size=0))
self.view.run_command('_vi_k', {'mode': MODE_NORMAL, 'count': 1, 'xpos': 0})
expected = make_region_at_row(self.view, row=0, col=0, size=0)
self.assertEqual(expected, first_sel(self.view))
def testMoveTooFar(self):
set_text(self.view, 'foo\nbar\nbaz\n')
add_selection(self.view, make_region_at_row(self.view, row=2, col=1, size=0))
self.view.run_command('_vi_k', {'mode': MODE_NORMAL, 'count': 100, 'xpos': 1})
expected = make_region_at_row(self.view, row=0, col=1, size=0)
self.assertEqual(expected, first_sel(self.view))

class Test_vi_k_InVisualMode(BufferTest):
    def testMoveOne(self):
        set_text(self.view, 'foo\nbar\nbaz\n')
        add_selection(self.view, self.R((1, 1), (1, 2)))
        self.view.run_command('_vi_k', {'mode': MODE_VISUAL, 'count': 1, 'xpos': 2})
        expected = self.R((1, 2), (0, 2))
        self.assertEqual(expected, first_sel(self.view))

    def testMoveOppositeEndGreaterWithSelOfSize1(self):
        set_text(self.view, 'foo\nbar\nbaz\n')
        add_selection(self.view, self.R((2, 1), (2, 2)))
        self.view.run_command('_vi_k', {'mode': MODE_VISUAL, 'count': 1, 'xpos': 2})
        expected = self.R((2, 2), (1, 2))
        self.assertEqual(expected, first_sel(self.view))

    def testMoveOppositeEndSmallerWithSelOfSize2(self):
        set_text(self.view, 'foo\nbar\nbaz\n')
        add_selection(self.view, self.R((1, 1), (1, 3)))
        self.view.run_command('_vi_k', {'mode': MODE_VISUAL, 'count': 1, 'xpos': 3})
        expected = self.R((1, 2), (0, 3))
        self.assertEqual(expected, first_sel(self.view))

    def testMoveOppositeEndSmallerWithSelOfSize3(self):
        set_text(self.view, 'foobar\nbarfoo\nbuzzfizz\n')
        add_selection(self.view, self.R((1, 1), (1, 4)))
        self.view.run_command('_vi_k', {'mode': MODE_VISUAL, 'count': 1, 'xpos': 3})
        expected = self.R((1, 2), (0, 3))
        self.assertEqual(expected, first_sel(self.view))

    def testMove_OppositeEndSmaller_DifferentLines_NoCrossOver(self):
        set_text(self.view, 'foo\nbar\nbaz\n')
        add_selection(self.view, self.R((0, 1), (2, 1)))
        self.view.run_command('_vi_k', {'mode': MODE_VISUAL, 'count': 1, 'xpos': 1})
        expected = self.R((0, 1), (1, 2))
        self.assertEqual(expected, first_sel(self.view))

    def testMove_OppositeEndSmaller_DifferentLines_CrossOver_XposAt0(self):
        set_text(self.view, 'foo\nbar\nbaz\n')
        add_selection(self.view, self.R((1, 0), (2, 1)))
        self.view.run_command('_vi_k', {'mode': MODE_VISUAL, 'count': 2, 'xpos': 0})
        expected = self.R((1, 1), (0, 0))
        self.assertEqual(expected, first_sel(self.view))

    def testMove_OppositeEndSmaller_DifferentLines_CrossOver_Non0Xpos(self):
        set_text(self.view, 'foo bar\nfoo bar\nfoo bar\n')
        add_selection(self.view, self.R((1, 4), (2, 4)))
        self.view.run_command('_vi_k', {'mode': MODE_VISUAL, 'count': 2, 'xpos': 4})
        expected = self.R((1, 5), (0, 4))
        self.assertEqual(expected, first_sel(self.view))

    def testMoveBackToSameLineSameXpos(self):
        set_text(self.view, 'foo\nbar\nbaz\n')
        add_selection(self.view, self.R((0, 1), (1, 1)))
        self.view.run_command('_vi_k', {'mode': MODE_VISUAL, 'count': 1, 'xpos': 1})
        expected = self.R((0, 2), (0, 1))
        self.assertEqual(expected, first_sel(self.view))

    def testMoveBackToSameLine_OppositeEndHasGreaterXpos(self):
        set_text(self.view, 'foo\nbar\nbaz\n')
        add_selection(self.view, self.R((0, 2), (1, 0)))
        self.view.run_command('_vi_k', {'mode': MODE_VISUAL, 'count': 1, 'xpos': 0})
        expected = self.R((0, 3), (0, 0))
        self.assertEqual(expected, first_sel(self.view))

    def testMoveMany_OppositeEndGreater_FromSameLine(self):
        set_text(self.view, ''.join(('foo\n',) * 50))
        add_selection(self.view, self.R((20, 2), (20, 1)))
        self.view.run_command('_vi_k', {'mode': MODE_VISUAL, 'count': 10, 'xpos': 1})
        expected = self.R((20, 2), (10, 1))
        self.assertEqual(expected, first_sel(self.view))

    def testMoveMany_OppositeEndGreater_DifferentLines(self):
        set_text(self.view, ''.join(('foo\n',) * 50))
        add_selection(self.view, self.R((21, 2), (20, 1)))
        self.view.run_command('_vi_k', {'mode': MODE_VISUAL, 'count': 10, 'xpos': 1})
        expected = self.R((21, 2), (10, 1))
        self.assertEqual(expected, first_sel(self.view))
    # def testMoveMany(self):
    #     set_text(self.view, ''.join(('abc\n',) * 60))
    #     add_selection(self.view, a=1, b=2)
    #     self.view.run_command('_vi_k', {'mode': MODE_VISUAL, 'count': 50, 'xpos': 1})
    #     target = self.view.text_point(50, 2)
    #     expected = self.R(1, target)
    #     self.assertEqual(expected, first_sel(self.view))

    # def testMoveOntoLongerLine(self):
    #     set_text(self.view, 'foo\nfoo bar\nfoo bar')
    #     add_selection(self.view, a=1, b=2)
    #     self.view.run_command('_vi_k', {'mode': MODE_VISUAL, 'count': 1, 'xpos': 1})
    #     target = self.view.text_point(1, 2)
    #     expected = self.R(1, target)
    #     self.assertEqual(expected, first_sel(self.view))

    # def testMoveOntoShorterLine(self):
    #     set_text(self.view, 'foo bar\nfoo\nbar')
    #     add_selection(self.view, a=5, b=6)
    #     self.view.run_command('_vi_k', {'mode': MODE_VISUAL, 'count': 1, 'xpos': 5})
    #     target = self.view.text_point(1, 0)
    #     target = self.view.full_line(target).b
    #     expected = self.R(5, target)
    #     self.assertEqual(expected, first_sel(self.view))

    # def testMoveFromEmptyLine(self):
    #     set_text(self.view, '\nfoo\nbar')
    #     add_selection(self.view, a=0, b=1)
    #     self.view.run_command('_vi_k', {'mode': MODE_VISUAL, 'count': 1, 'xpos': 0})
    #     target = self.view.text_point(1, 1)
    #     expected = self.R(0, target)
    #     self.assertEqual(expected, first_sel(self.view))

    # def testMoveFromEmptyLineToEmptyLine(self):
    #     set_text(self.view, '\n\nbar')
    #     add_selection(self.view, a=0, b=1)
    #     self.view.run_command('_vi_k', {'mode': MODE_VISUAL, 'count': 1, 'xpos': 0})
    #     target = self.view.text_point(1, 1)
    #     expected = self.R(0, target)
    #     self.assertEqual(expected, first_sel(self.view))

    # def testMoveTooFar(self):
    #     set_text(self.view, 'foo\nbar\nbaz')
    #     add_selection(self.view, a=1, b=2)
    #     self.view.run_command('_vi_k', {'mode': MODE_VISUAL, 'count': 10000, 'xpos': 1})
    #     target = self.view.text_point(2, 2)
    #     expected = self.R(1, target)
    #     self.assertEqual(expected, first_sel(self.view))


# TODO: Ensure that we only create empty selections while testing. Add assert_all_sels_empty()?
# class Test_vi_k_InInternalNormalMode(BufferTest):
#     def testMoveOne(self):
#         set_text(self.view, 'abc\nabc\nabc')
#         add_selection(self.view, a=1, b=1)
#         self.view.run_command('_vi_k', {'mode': _MODE_INTERNAL_NORMAL, 'count': 1, 'xpos': 1})
#         target = self.view.text_point(1, 0)
#         target = self.view.full_line(target).b
#         expected = self.R(0, target)
#         self.assertEqual(expected, first_sel(self.view))

#     def testMoveMany(self):
#         set_text(self.view, ''.join(('abc\n',) * 60))
#         add_selection(self.view, a=1, b=1)
#         self.view.run_command('_vi_k', {'mode': _MODE_INTERNAL_NORMAL, 'count': 50, 'xpos': 1})
#         target = self.view.text_point(50, 2)
#         target = self.view.full_line(target).b
#         expected = self.R(0, target)
#         self.assertEqual(expected, first_sel(self.view))

#     def testMoveOntoLongerLine(self):
#         set_text(self.view, 'foo\nfoo bar\nfoo bar')
#         add_selection(self.view, a=1, b=1)
#         self.view.run_command('_vi_k', {'mode': _MODE_INTERNAL_NORMAL, 'count': 1, 'xpos': 1})
#         target = self.view.text_point(1, 0)
#         target = self.view.full_line(target).b
#         expected = self.R(0, target)
#         self.assertEqual(expected, first_sel(self.view))

#     def testMoveOntoShorterLine(self):
#         set_text(self.view, 'foo bar\nfoo\nbar')
#         add_selection(self.view, a=5, b=5)
#         self.view.run_command('_vi_k', {'mode': _MODE_INTERNAL_NORMAL, 'count': 1, 'xpos': 5})
#         target = self.view.text_point(1, 0)
#         target = self.view.full_line(target).b
#         expected = self.R(0, target)
#         self.assertEqual(expected, first_sel(self.view))

#     def testMoveFromEmptyLine(self):
#         set_text(self.view, '\nfoo\nbar')
#         add_selection(self.view, a=0, b=0)
#         self.view.run_command('_vi_k', {'mode': _MODE_INTERNAL_NORMAL, 'count': 1, 'xpos': 0})
#         target = self.view.text_point(1, 0)
#         target = self.view.full_line(target).b
#         expected = self.R(0, target)
#         self.assertEqual(expected, first_sel(self.view))

#     def testMoveFromEmptyLineToEmptyLine(self):
#         set_text(self.view, '\n\nbar')
#         add_selection(self.view, a=0, b=0)
#         self.view.run_command('_vi_k', {'mode': _MODE_INTERNAL_NORMAL, 'count': 1, 'xpos': 0})
#         target = self.view.text_point(1, 0)
#         target = self.view.full_line(target).b
#         expected = self.R(0, target)
#         self.assertEqual(expected, first_sel(self.view))

#     def testMoveTooFar(self):
#         set_text(self.view, 'foo\nbar\nbaz')
#         add_selection(self.view, a=1, b=1)
#         self.view.run_command('_vi_k', {'mode': _MODE_INTERNAL_NORMAL, 'count': 10000, 'xpos': 1})
#         target = self.view.text_point(2, 0)
#         target = self.view.full_line(target).b
#         expected = self.R(0, target)
#         self.assertEqual(expected, first_sel(self.view))


# class Test_vi_k_InVisualLineMode(BufferTest):
#     def testMoveOne(self):
#         set_text(self.view, 'abc\nabc\nabc')
#         add_selection(self.view, a=0, b=4)
#         self.view.run_command('_vi_k', {'mode': MODE_VISUAL_LINE, 'count': 1, 'xpos': 1})
#         target = self.view.text_point(1, 0)
#         target = self.view.full_line(target).b
#         expected = self.R(0, target)
#         self.assertEqual(expected, first_sel(self.view))

#     def testMoveMany(self):
#         set_text(self.view, ''.join(('abc\n',) * 60))
#         add_selection(self.view, a=0, b=4)
#         self.view.run_command('_vi_k', {'mode': MODE_VISUAL_LINE, 'count': 50, 'xpos': 1})
#         target = self.view.text_point(50, 0)
#         target = self.view.full_line(target).b
#         expected = self.R(0, target)
#         self.assertEqual(expected, first_sel(self.view))

#     def testMoveFromEmptyLine(self):
#         set_text(self.view, '\nfoo\nbar')
#         add_selection(self.view, a=0, b=1)
#         self.view.run_command('_vi_k', {'mode': MODE_VISUAL_LINE, 'count': 1, 'xpos': 0})
#         target = self.view.text_point(1, 0)
#         target = self.view.full_line(target).b
#         expected = self.R(0, target)
#         self.assertEqual(expected, first_sel(self.view))

#     def testMoveFromEmptyLineToEmptyLine(self):
#         set_text(self.view, '\n\nbar')
#         add_selection(self.view, a=0, b=1)
#         self.view.run_command('_vi_k', {'mode': MODE_VISUAL_LINE, 'count': 1, 'xpos': 0})
#         target = self.view.text_point(1, 0)
#         target = self.view.full_line(target).b
#         expected = self.R(0, target)
#         self.assertEqual(expected, first_sel(self.view))

#     def testMoveTooFar(self):
#         set_text(self.view, 'foo\nbar\nbaz')
#         add_selection(self.view, a=0, b=4)
#         self.view.run_command('_vi_k', {'mode': MODE_VISUAL_LINE, 'count': 10000, 'xpos': 1})
#         target = self.view.text_point(2, 0)
#         target = self.view.full_line(target).b
#         expected = self.R(0, target)
#         self.assertEqual(expected, first_sel(self.view))
3a10b4c58e22491f8d34d9fc72cd45a30d5f565a | 3,503 | py | Python | Grain growth/largeScaleGG.py | zwang586/MICNN | 3d27a7f624ed03502fd500628b8e5136cb3f0730 | ["MIT"]

# -*- coding: utf-8 -*-
##Use the trained yNet to perform large-scale grain growth simulation
import numpy as np
import matplotlib.pyplot as plt
from model import *
nx = 1600
ny = 1600
deltaT = [1,3,4,5,7,9,11,12,13,15,17,18,19,21,22,24,25,27,29,30,2,6,14,20,28,8,10,16,23,26]
deltaT = deltaT/np.max(deltaT)
MICNN = yNet(nx,ny)
MICNN.summary()
MICNN.load_weights("weights_yNet.h5")
##########delta_T = 1#############################################
eta = np.zeros((nx,ny),dtype = np.float32)
eta = np.load("data_seeding_1600x1600\\eta_initial_1.npy")
x_test_0 = eta[:,:]
x_test_0 = np.reshape(x_test_0, (1, nx, ny, 1))
deltaT_test_0 = deltaT[0] #[0] = 1, [3] = 5; [19] = 30;
deltaT_test_0 = np.reshape(deltaT_test_0, (1, 1))
###Recurrent prediction
for istep in range(0, 65):
    print(istep)
    ax = plt.imshow(x_test_0.reshape(nx, ny), cmap='coolwarm', vmin=0, vmax=1)
    ax.axes.get_xaxis().set_visible(False)
    ax.axes.get_yaxis().set_visible(False)
    ax.axes.set_title('$\mathit{\Delta}t$' + ' = 1', loc='right', fontsize=10)
    ax.axes.set_title('$\mathit{t}$' + '_' + '$\mathit{step}$' + ' = ' + str(istep), loc='left', fontsize=10)
    plt.savefig('large_dt1_' + 'eta_' + str(istep) + '.jpg', dpi=200, bbox_inches="tight")
    plt.close()
    eta2D = x_test_0.reshape(nx, ny)
    x_test_1 = MICNN.predict([x_test_0, deltaT_test_0])
    x_test_0 = x_test_1
##########delta_T = 5#############################################
eta = np.zeros((nx,ny),dtype = np.float32)
eta = np.load("data_seeding_1600x1600\\eta_initial_5.npy")
x_test_0 = eta[:,:]
x_test_0 = np.reshape(x_test_0, (1, nx, ny, 1))
deltaT_test_0 = deltaT[3] #[0] = 1, [3] = 5; [19] = 30;
deltaT_test_0 = np.reshape(deltaT_test_0, (1, 1))
###Recurrent prediction
for istep in range(0, 65):
    print(istep)
    ax = plt.imshow(x_test_0.reshape(nx, ny), cmap='coolwarm', vmin=0, vmax=1)
    ax.axes.get_xaxis().set_visible(False)
    ax.axes.get_yaxis().set_visible(False)
    ax.axes.set_title('$\mathit{\Delta}t$' + ' = 5', loc='right', fontsize=10)
    ax.axes.set_title('$\mathit{t}$' + '_' + '$\mathit{step}$' + ' = ' + str(istep), loc='left', fontsize=10)
    plt.savefig('large_dt5_' + 'eta_' + str(istep) + '.jpg', dpi=200, bbox_inches="tight")
    plt.close()
    eta2D = x_test_0.reshape(nx, ny)
    x_test_1 = MICNN.predict([x_test_0, deltaT_test_0])
    x_test_0 = x_test_1
##########delta_T = 30#############################################
eta = np.zeros((nx,ny),dtype = np.float32)
eta = np.load("data_seeding_1600x1600\\eta_initial_30.npy")
nx2 = 128
ny2 = 128
x_test_0 = eta[:,:]
x_test_0 = np.reshape(x_test_0, (1, nx, ny, 1))
deltaT_test_0 = deltaT[19] #[0] = 1, [3] = 5; [19] = 30;
deltaT_test_0 = np.reshape(deltaT_test_0, (1, 1))
###Recurrent prediction
for istep in range(0, 65):
    print(istep)
    ax = plt.imshow(x_test_0.reshape(nx, ny), cmap='coolwarm', vmin=0, vmax=1)
    ax.axes.get_xaxis().set_visible(False)
    ax.axes.get_yaxis().set_visible(False)
    ax.axes.set_title('$\mathit{\Delta}t$' + ' = 30', loc='right', fontsize=10)
    ax.axes.set_title('$\mathit{t}$' + '_' + '$\mathit{step}$' + ' = ' + str(istep), loc='left', fontsize=10)
    plt.savefig('large_dt30_' + 'eta_' + str(istep) + '.jpg', dpi=200, bbox_inches="tight")
    plt.close()
    eta2D = x_test_0.reshape(nx, ny)
    x_test_1 = MICNN.predict([x_test_0, deltaT_test_0])
    x_test_0 = x_test_1
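The three simulation blocks above differ only in the initial condition, the Δt value, and the filename prefix, so the recurrent feed-back pattern can be factored into one function. A minimal runnable sketch, using a stub in place of the trained yNet (`run_recurrent_sim` and the stub are illustrative names, not part of the original code):

```python
import numpy as np


def run_recurrent_sim(predict, eta0, dt_value, steps):
    """Repeatedly feed the network's output back in as its next input."""
    nx, ny = eta0.shape
    x = eta0.reshape(1, nx, ny, 1).astype(np.float32)
    dt = np.reshape(np.float32(dt_value), (1, 1))
    frames = []
    for _ in range(steps):
        frames.append(x.reshape(nx, ny).copy())
        x = predict([x, dt])  # same call shape as MICNN.predict above
    return frames


# Stub standing in for the trained yNet: relaxes eta toward 0.5 each step.
stub = lambda inputs: (0.5 + 0.9 * (inputs[0] - 0.5)).astype(np.float32)

frames = run_recurrent_sim(stub, np.ones((4, 4), dtype=np.float32), 1 / 30, 3)
```

With the real model, `MICNN.predict` would be passed as `predict`, and the plotting/saving done per returned frame instead of inside each loop.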
28bba74c3018b9da234cad46629df46d96741cac | 203 | py | Python | app/settings/admin/places.py | mandarhan/mandarhan | 9ce38d10e536e0d3e2f907c3b5c560d66ccf8e40 | ["MIT"]

from django.contrib import admin
from adminsortable2.admin import SortableAdminMixin
from ..models import Place
@admin.register(Place)
class PlaceAdmin(SortableAdminMixin, admin.ModelAdmin):
    pass
28e27ec7b1eb7d9a4b8cab20a7b19626a1dc5aac | 23 | py | Python | tinyBT/__init__.py | EternityForest/tinyBT | e823a10129044b6480d93398dc6546742454632c | ["MIT"]

from . dht import DHT
e9309d8f56fc628a18480da3a998fae7c2b7eacb | 72 | py | Python | src/Hoja.py | victorlujan/Dise-odeSoftwarePatrones | b9845cc1c4abdc44867c90b9e9784246e57f16b3 | ["MIT"]

from ElementoMapa import ElementoMapa

class Hoja(ElementoMapa):
    pass
e93ffa3b3cef712b27d0c3f09007729157230347 | 20,211 | py | Python | tasks/blind-robot/flask/tokens.py | irdkwmnsb/lkshl-ctf | e5c0200ddc8ba73df5f321b87b9763fb1bbaba57 | ["MIT"]

flags = {
'8ff45698d0615fdd2fcc67edee8c3aea': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_6u9CC}',
'e03355a7e86fedade74ffb8bb5e5067a': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_TQh62}',
'e4d29071f40e3ec4a85ee0d090bfa135': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_4Jmh6}',
'8af761c650744dc73788b55063a450c6': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_8zWqd}',
'dc9e390dc0ba2d517623863fac0e62c7': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_p2uPm}',
'22253ed65165a600de8fa8e85103231b': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_Xypv2}',
'f9340c7d9377e34c1a8f21e875a3e5d3': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_PTjac}',
'7070b1e06ed7a5fae96f828032c02e33': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_ZDxRG}',
'db1ae92b11c6732182bd461f80290546': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_DYUAu}',
'c55214e36f845d10bd09733fe458a459': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_WrSN8}',
'f791cc4e975d9e9f2d08b0c90f93658a': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_GMASc}',
'd4cc6e74f5061298802d6247036a2a08': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_E5bML}',
'7579bf9d853f7eeed21c4496f1108d93': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_sTtbB}',
'6a532206ab2f6a6393b35e7fd3bfcc00': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_U9HBi}',
'134ce6cda1b61ff448d7cee485ba3a05': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_GEtPC}',
'57488bb1e8260da6ab4ac37fd1794e44': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_v2cti}',
'5ea1e8645c53b5d829acf41658c5137b': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_S9CE4}',
'2c7c27be609b83afe408825c5c08b973': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_djO4E}',
'74deda113dfe851cc6546f95cf25582d': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_lsY59}',
'9337ecfb0678398af2ec7ecdcdfad07e': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_IPEQh}',
'a88c14dabc42350f4ad36a5d6e861d15': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_NnBVZ}',
'9d053312a647c6793fa9c65694ab202c': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_mQFsM}',
'10b69c4c3c42cde1b248c8620aaa7d37': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_069nt}',
'b16968cd5a2a410dbfb8593e379eb663': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_YTNtg}',
'df3afe40cedf7c9421bec73cf6b1bd28': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_SgILa}',
'c34389e50a00489f2578660951b8a26b': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_quzgu}',
'd7664b4c59a8c1954ea73b843fe0118e': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_5HrWd}',
'd416f2478cc1186d49fdeea19cdb0743': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_CsVzy}',
'f31bc9a48faadc9c4e88da91a55bd0e4': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_EFqPW}',
'050eb773c4ad3728cc128d686e13634d': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_BHwyX}',
'fafe8ef5844e576356ebe0ec00f4c979': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_iaK7Z}',
'2a86539abbec63dbf2a0437b83aabb69': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_sZUQr}',
'1d680dd625c846366f68099d03b8d857': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_QXjU5}',
'fa8bb91fb2ddd313ce2d5c3d7e9b7dd5': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_TFHYp}',
'bdbdddf38a299f33c02e20297f7fda39': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_ZvAnN}',
'40e891ebfd87d02b048ea61cd4312607': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_wWF3w}',
'46cb7b470ac31905a61f91fde7947426': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_AOvkP}',
'a49117f6fbf7a50bc440068a597c1a99': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_brOeV}',
'817e8cf9df981a9407f4e6d5e95cef01': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_ME1Eq}',
'f316a754494cb3ff4dde8def4b3c5168': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_zyRL6}',
'4bbeeb1c7062b185a97a146896c1d9b7': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_nHLW9}',
'e7c43153e79097d4536516a4e4c30ffa': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_2tpKT}',
'06a33a45c7f1e4ff39baa266330c959f': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_N6KaT}',
'5c5be767a7a9362001479e0bbbbdf270': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_0mJSH}',
'919b5de6698f43feccd631a3d9b0658e': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_u88kv}',
'3e651f2a715dccbc8da137f6878d9c78': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_iFJDl}',
'b341b9082706ca2c91ce05e98bf5d977': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_RWUCm}',
'3ccbb3083710080f5e4a903feda54156': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_B2ctn}',
'b6578471734df868dfd27151db81ea86': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_6sj7N}',
'ec5fdc7ded8589240a17a7d1785465d2': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_1wdi0}',
'1e5f1a74746d71c1f0b8defeb882b574': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_N2ehP}',
'bce6b5226daf95a5f3e9560852f28e3a': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_vREQE}',
'4b200313b198a9e65e8086c446741c9b': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_PlOWU}',
'9f55d023e70a7a81a4adcfdd72b1f2fa': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_16igE}',
'466e21dc30fc6567bf1f7f1cca81e62b': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_UL0iw}',
'15c194a5028a5e3c56809a0b06851b7a': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_t6gVL}',
'd5be0edd9e3080a7e1e96a241dceeff7': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_n5JIB}',
'b4a4c011b4fbceb0f6cf786a88405ebf': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_CYuso}',
'44293c0c3751798bdc7facdef743bdf3': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_AMeuR}',
'3f80f7df70fce272a6d5c041330c06f8': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_LQbE7}',
'20b1d39d79f2307f09277cb04a6cbdd6': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_JgS2M}',
'eab104f1460637657342d7a80c43fda3': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_mSj0V}',
'4992d3d721822f4fc2414c313f0c61ea': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_wTk34}',
'b24dfde64e8bd65223b3b2e51260f4c0': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_defg1}',
'3f2e30aee2dd555de36428615ede4682': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_eTcCn}',
'f906555b13f1d23cb0879e0a1fcdd6e7': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_ldPmZ}',
'28c8395e67c995fab1e63b072ef7adff': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_yK1GJ}',
'4030f9d8f0839128a5e55bd9d79e5176': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_k8SJt}',
'd1329e099b8cee5707d6869c9184263d': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_gMCVd}',
'88e1af9bdcf38c60e4715b0c4cf16a11': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_So5gO}',
'ef93c928256d9e0111f14d518bce4396': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_cIng9}',
'd9b0cfe504078e78eed51cb170e46dcf': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_f7ol5}',
'f2383e4109904bb2b41c81fa5405cb3f': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_Km9QX}',
'3afc0048b5769d93bf56a88506137dbb': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_maW1s}',
'1e83c3ea322c5605b65d3ca5cb66aecf': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_CSKak}',
'6563611ba486579d2919732ac7794d71': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_0rAMb}',
'981bcb6a94f8cf03758d6d272b0729ee': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_GUXSp}',
'e4cf5996134952f5f2fb037ab2f1979f': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_5kOsd}',
'754532881616fa382aab7d116ce2d7db': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_npz5i}',
'bfd9a72408d53a6b594b7f1b605b188e': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_e8l1O}',
'f186ba6872eedf6739cdfad3f6546290': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_rCJJt}',
'429371204eb8598ba2935238f8e4f453': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_5btJx}',
'387df4246918be56b1ba3edcf0fc0490': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_Younw}',
'6895b7437279f96c5ec3ebf3a9790d86': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_d2TTU}',
'760fef2d50d7478641aa4c30fc518d85': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_xQhDu}',
'd986479db415952d7d8e892914e31086': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_gtYvF}',
'22e7e24ef5130d5844a862be591a5abc': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_ECe75}',
'409e4475958ef1173b78de64a936080d': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_cp69M}',
'd25eb3d55e250c0dd36bc0712aaf7821': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_rZAsB}',
'03e4a54c87541e56acc606e90538242a': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_sONkE}',
'97ef9e2fd53c8e2d4484e648ca99c095': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_fyJ1O}',
'efbf225437714623fc1e5962db684c56': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_Wm980}',
'8dcbf488ad6a423a5d738640e3e5cfe4': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_v7jrD}',
'd20bf7f366b8086783719486a0cae5de': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_SWNY8}',
'65f6f8440176673367afbbb3047cfda7': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_AhAox}',
'f58e275b131ea332c98cef05761abd5e': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_q4URh}',
'4e816e5a2067de6ff61316aa11e8293b': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_ez56X}',
'5bdb94a71b0ec068a39e7a7651e72d8b': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_vOlnr}',
'e576b893d1a65e628691b0c186e5d8a2': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_t9G7w}',
'bb71201e289db04bf5177850749ecd62': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_MPgdt}',
'7b5bbaa48061d4b08398c3628615f0b8': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_0YMlD}',
'72b050d93a4e552187982482a72832c4': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_diwig}',
'3f66d0d112bda80b8e6966cc66981236': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_VS6m7}',
'1a44da54f31553b786d4897a1c33000d': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_T9fLN}',
'6123fa6635e6b453ac537aa2e69a8402': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_n5FqX}',
'd103b10cd997599ad4afbb6753651004': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_s0hfU}',
'27f159fc0cb42a7088084e44c87e1158': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_wgnjR}',
'7a8beab105a8ce1ae31a8f82d55feabe': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_QZ6NQ}',
'31be3705878fe95fc77a31a28b7aa1a8': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_RQbRg}',
'b499208fc74e237c440f0e22d1c1baf1': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_EDpwT}',
'68173e35a09c5d78de0eea60f353d1ab': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_VYaqo}',
'2b93ecde298b5c1cf6277298e53426df': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_HurMZ}',
'5e1119cbbee0155e6442494d57cc5fea': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_GuIWB}',
'736c6f4190d386d4ee373e1afc33c42e': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_5EREM}',
'3b295d0ed9a146d7683aaa694a4df8d8': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_leHLp}',
'c48dd62842a01953482eed05e5201ffb': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_OBfam}',
'f7252949d8b56feef9641628e9764527': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_brhVX}',
'31b978edb5b3266781ac4e3729316443': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_pw2Fe}',
'2147bb2e3e7ea014735941af131a39cb': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_FSJzN}',
'8396f38568f69d8a3454d1120e28063d': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_YpkFW}',
'3ca55046f9c3a34de255241835e368cf': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_5tgL2}',
'8793da48456bd8fce8965ac7de538f69': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_NN8HF}',
'018982c7b16edcfe9aed0482bd482715': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_ENVar}',
'6ad0fad179b757c027aff5c994d673d1': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_uT4xO}',
'e84487e447bb16b675aaf6f5cec8cfec': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_668uN}',
'1a92196f9fa26fca288ac72e9380ee9d': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_dpCSz}',
'778a3a08b32b0fa67f817c2365ed5011': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_dVpd0}',
'ba0af1c0d67575c667c66405e4e4c25c': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_8EcmB}',
'135fe10aa233ed756f5af384ab2175f2': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_TsHcF}',
'b5fb5d9e1669b358f0ff6650fd027d66': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_ZWr2P}',
'06ae02f32217b95a6b7cda50b76a17ea': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_hKeOa}',
'751b5cede8a9a94016823067cf440b3f': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_L1ocO}',
'dc9432f0f2918358339643fa96f8a592': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_an2DE}',
'88e679e6847a875788c2d7b7268f0a92': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_KqdEe}',
'1304418ba9bb30a894fac67e78246941': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_3WCOr}',
'96a251ad009513ec508921c3c23f2d9b': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_Tq22Q}',
'85aedbcabc9051695dbcedb8115a4e2d': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_EnaR8}',
'38f54d53d40b24474dff67625c8f2da0': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_3nTPJ}',
'7e6b506688cebcf439b3a792cf78331c': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_DA50k}',
'31aac68e16bdda45d939d43336d99df3': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_C2MZk}',
'b094ce3b4e144bf482ecac1b554a8029': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_TXXw1}',
'3bfb607650e870746ebe466132849e3b': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_1NC2A}',
'ae670091f08c6390b234fb89011c3ab0': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_kBjKa}',
'18b43181fefaa0cf27c501322e24959c': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_9mgN5}',
'68bd369b95da4e564ec5ef298061cfcf': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_wZjxI}',
'ba8398855233a82ae1c837bf28432b37': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_7vFZd}',
'c14917ff8a2a3f51d72e29efff60ec8a': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_xlYbr}',
'3ae123c6e48c81a9a63737dda150e2aa': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_4WEQZ}',
'68f9a955bf43fd760b349070e7b0292c': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_bnVWM}',
'4e39f8b926f6538fbd2a7b713206a8aa': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_PzXyn}',
'51198283100a2cabd7280062934d3015': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_ugmFf}',
'21c26db0079b51fe08df566e3220e46c': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_rJjla}',
'5781c0ab34ecf0faf58be27a09b3823d': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_MtLWP}',
'6075a8ba3931ae33620cce255ecffc03': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_IVpzt}',
'bcc57508ca88ef6d76ce4692505545b4': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_fhvFA}',
'39fa5d58c4bef0324e2020c58feb30f9': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_ImNxU}',
'6eda43352b13b1a320d7cb51d6af2361': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_1sPw5}',
'421e8188bc2f82cfbf582021847fbfeb': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_nr8J2}',
'af8f3d2e1f22ec87c025a007b390b95d': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_pRzDt}',
'322593c5448025d2954ab3cb79a9f85e': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_vKpWj}',
'b0109837cbdaad90e0955559bf45c6ae': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_Q7PTN}',
'35f668d03457804335e6c1090b4d2c1f': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_CCX7F}',
'cc8d103856a0876bda696eeb9446538c': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_suG0f}',
'be6968f5c53aa54ded107559aaf012f0': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_KkZZx}',
'862d9aadbf1afa5a33e5e93ab44bbbd4': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_fEjjX}',
'0779bd1bece75d1065490a0e1e0eba35': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_Pel9s}',
'644db62951ce877c5434af1a0ca1fc7c': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_NyOM9}',
'd0311e787bb5d52b83c13fcbb1c82e68': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_E0Txl}',
'579e6a2b087a8a3e4dd232b726d0f2a2': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_T8uJT}',
'1ab75e57389e4d089cef0661d9f51efe': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_pEjch}',
'd8d4dfa044e28fbb45ea53bac121d7d6': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_NZP7L}',
'a2b497caafa1130feecacf2cf07419d0': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_Fkhcl}',
'1fe5c4ebd1170ac6eb4a94ce6b330c41': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_dn8hj}',
'725586cab7a8e5b28c128852f8ab8555': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_jwlA7}',
'cb1be298e4e1931ab75553aa76e80f65': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_qnDdL}',
'35af8c5b33884246ef2d1060ba93db88': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_5HtDR}',
'25dd4a5ce19b39c075c2527f84ed0056': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_KMFHr}',
'7ae9800d05fce97d1f6e5fa5ca0a58a9': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_WNePr}',
'f465719e1b35b1e540f8ab58b2d98a93': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_HpF8R}',
'99d9603662f7747571bd9c236254e18d': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_c99IV}',
'718216a24b9295d06bb964bd2954fc37': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_qfDZI}',
'c3ffba0610b9d677dbcce185858aef9e': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_2w7iM}',
'a6c884853881abd33fb2b89656fefa78': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_0ivuG}',
'dfd009c9ec75825da9f7dbb132939b2f': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_rt37y}',
'c0c6886c2d25b7268ea2bdf29ddc9ac2': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_JUlyv}',
'06e197b9d2c3c3848c47f09241d38dd9': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_eIftL}',
'd44196f2a9f6ae9f96d797c33023810e': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_RxN4m}',
'df1d024ea9a01f354f9d00ee852bff1e': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_LDF1W}',
'5ea54c667712dffb409d9cf2c49e6cd1': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_3gdqN}',
'6875656aaf1c5585a5da313e324cf0db': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_UbDr2}',
'345a21429df6d0f64d55b4b52fbb7143': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_dkU6w}',
'a53bf0babc8e306107964c0942f14761': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_kVLUg}',
'73833c1e975558bb265b655e0cbebb77': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_7JWuo}',
'765acca6988e9d91721bd5a1de717641': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_lgOKw}',
'c55873ba1fb1d8464a9b9ce7c979d3c5': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_jFmqS}',
'cec5072a621c1a9972ae583494f59a0a': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_2URjY}',
'f3861044dcf6bef3a5f024d85d9fc0b1': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_6zsOI}',
'12ec205cc2baffbecc5fdf1df0897df1': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_PVIE9}',
'388fb4a5695f1ca85e12607122a2bda3': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_7c4yW}',
'a78e504f70852af2a1e8f80204f61108': 'LKL{m1gh7_b3_th3_loo0ng35t_fLag_y0u_hav3_Ever_s33N_pXopS}'
}
| 99.561576 | 100 | 0.861165 | 2,401 | 20,211 | 6.499375 | 0.171179 | 0.102531 | 0.128164 | 0.166613 | 0.525473 | 0.525473 | 0.525473 | 0.525473 | 0.525473 | 0.525473 | 0 | 0.335929 | 0.059572 | 20,211 | 202 | 101 | 100.054455 | 0.485084 | 0 | 0 | 0 | 0 | 0 | 0.880709 | 0.880709 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3aa96a078ab33d64379685002667af65dd5f043c | 9,121 | py | Python | resources.py | gatewayheart/TCM | fe0fa0ab1e6c98adfd6a16cb6eb58de0dc4b1f01 | [
"Unlicense"
] | null | null | null | resources.py | gatewayheart/TCM | fe0fa0ab1e6c98adfd6a16cb6eb58de0dc4b1f01 | [
"Unlicense"
] | null | null | null | resources.py | gatewayheart/TCM | fe0fa0ab1e6c98adfd6a16cb6eb58de0dc4b1f01 | [
"Unlicense"
] | null | null | null | from flask_restful import Resource, reqparse
import pandas as pd
import json
# Epidemiological zoning request parser
Epidemiological = reqparse.RequestParser()
Epidemiological.add_argument('type', help='This field cannot be blank', required=True)
# Water-source request parser (used by layNguonNuoc below)
nn = reqparse.RequestParser()
nn.add_argument('type', help='This field cannot be blank', required=True)
def epidemiology(value):
    """Count cases per district for the given case type ("LOAI2")."""
    df = pd.read_csv(r"data.csv", encoding='utf-8')
    with open(r'datajson.json', 'r') as myfile:
        datajson = myfile.read()
    data = df.query("LOAI2=='%s'" % value)  # filter by type
    statesdata = json.loads(datajson)
    districts = statesdata['features']
    for i, obj in enumerate(districts):
        district_name = obj['properties']['name']
        data_district = data.query("Huyen=='%s'" % district_name)
        statesdata['features'][i]['properties']['density'] = len(data_district)
        # 2020-05-24: the water-source counters (tap water, drilled well,
        # natural water) were disabled.
        # Count by case type: newly arisen ("Tan phat")
        statesdata['features'][i]['properties']['tanPhat'] = len(data_district.query("Loaicabenh=='Tan phat'"))
        # Count positive cases by strain
        statesdata['features'][i]['properties']['EV71'] = len(data_district.query("LOAI=='EV71'"))
        statesdata['features'][i]['properties']['CA16'] = len(data_district.query("LOAI=='CA16'"))
        # Other strains (A6, A10, A2)
        statesdata['features'][i]['properties']['CHUNGKHAC'] = len(data_district.query("LOAI=='A6' or LOAI=='A10' or LOAI=='A2'"))
        statesdata['features'][i]['properties']['A10'] = len(data_district.query("LOAI=='A10'"))
        statesdata['features'][i]['properties']['A2'] = len(data_district.query("LOAI=='A2'"))
        statesdata['features'][i]['properties']['A6'] = len(data_district.query("LOAI=='A6'"))
        # Added 2020-04-11: total number of positive test results
        statesdata['features'][i]['properties']['DuongTinh'] = len(data_district.query("LOAI=='EV71' or LOAI=='CA16' or LOAI=='Non EV71 va CA16' or LOAI=='A6' or LOAI=='A10' or LOAI=='A2'"))
    return {'status': 'SUCCESS', 'message': 'The number of positive cases is located in the area of Binh Dinh province', 'data': statesdata}
class EpidemiologicalZoning(Resource):
    def post(self):
        args = Epidemiological.parse_args()
        if args['type'] == 'Positive':
            df = pd.read_csv(r"data.csv", encoding='utf-8')
            with open(r'datajson.json', 'r') as myfile:
                datajson = myfile.read()
            data = df
            statesdata = json.loads(datajson)
            districts = statesdata['features']
            for i, obj in enumerate(districts):
                district_name = obj['properties']['name']
                data_district = data.query("Huyen=='%s'" % district_name)
                statesdata['features'][i]['properties']['density'] = len(data_district)
                # 2020-05-24: the water-source counters (tap water, drilled
                # well, natural water) were disabled.
                # Count positive cases by strain
                statesdata['features'][i]['properties']['EV71'] = len(data_district.query("LOAI=='EV71'"))
                statesdata['features'][i]['properties']['CA16'] = len(data_district.query("LOAI=='CA16'"))
                statesdata['features'][i]['properties']['CHUNGKHAC'] = len(data_district.query("LOAI=='Non EV71 va CA16'"))
                statesdata['features'][i]['properties']['A10'] = len(data_district.query("LOAI=='A10'"))
                statesdata['features'][i]['properties']['A2'] = len(data_district.query("LOAI=='A2'"))
                statesdata['features'][i]['properties']['A6'] = len(data_district.query("LOAI=='A6'"))
                # Added 2020-04-11: total number of positive test results
                statesdata['features'][i]['properties']['DuongTinh'] = len(data_district.query("LOAI=='EV71' or LOAI=='CA16' or LOAI=='Non EV71 va CA16' or LOAI=='A6' or LOAI=='A10' or LOAI=='A2'"))
            return {'status': 'SUCCESS', 'message': 'The number of positive cases is located in the area of Binh Dinh province', 'data': statesdata}
        elif args['type'] != '':
            return epidemiology(args['type'])
        else:
            return {'message': 'Something went wrong'}
def nguonnuoc(source_type):
    """Count cases per district for the given water source ("Nguonnuoc")."""
    df = pd.read_csv(r"data.csv", encoding='utf-8')
    with open(r'datajson.json', 'r') as myfile:
        datajson = myfile.read()
    data = df.query("Nguonnuoc=='%s'" % source_type)
    statesdata = json.loads(datajson)
    districts = statesdata['features']
    for i, obj in enumerate(districts):
        district_name = obj['properties']['name']
        statesdata['features'][i]['properties']['density'] = len(data.query("Huyen=='%s'" % district_name))
    return {'status': 'SUCCESS', 'message': 'The number of positive cases is located in the area of Binh Dinh province', 'data': statesdata}
class layNguonNuoc(Resource):
    def post(self):
        args = nn.parse_args()
        if args['type'] != '':
            return nguonnuoc(args['type'])
        else:
            return {'message': 'Something went wrong'}
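# The handlers above build pandas query strings with %-formatting. Below is a
# minimal, illustrative sketch (the sample rows are hypothetical, not from
# data.csv) of the same per-district count using DataFrame.query's @-variable
# syntax, which avoids quoting problems if a name contains quote characters:
if __name__ == '__main__':
    demo = pd.DataFrame({'Huyen': ['Quy Nhon', 'An Nhon', 'Quy Nhon'],
                         'LOAI': ['EV71', 'CA16', 'A6']})
    district_name = 'Quy Nhon'
    # @district_name references the local Python variable inside the query
    count = len(demo.query("Huyen == @district_name"))
    print(count)  # 2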
| 57.36478 | 202 | 0.622519 | 1,062 | 9,121 | 5.292844 | 0.1742 | 0.096068 | 0.091265 | 0.139299 | 0.882583 | 0.859456 | 0.859456 | 0.859456 | 0.834905 | 0.822096 | 0 | 0.023283 | 0.246574 | 9,121 | 158 | 203 | 57.727848 | 0.794674 | 0.300406 | 0 | 0.677419 | 0 | 0.021505 | 0.232615 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.043011 | false | 0 | 0.11828 | 0 | 0.258065 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3ae938a9a2d450930696ac4757c179657b5ee8b8 | 695 | py | Python | vendor/penguin_client/penguin_client/api/__init__.py | Sait0Yuuki/ArknightsAutoHelper | 5ecec0d120482c930181346cfdb8542090e169c1 | [
"MIT"
] | 1,035 | 2019-05-14T11:58:32.000Z | 2022-03-16T15:09:53.000Z | vendor/penguin_client/penguin_client/api/__init__.py | Sait0Yuuki/ArknightsAutoHelper | 5ecec0d120482c930181346cfdb8542090e169c1 | [
"MIT"
] | 209 | 2019-05-11T13:19:57.000Z | 2022-03-12T01:42:11.000Z | vendor/penguin_client/penguin_client/api/__init__.py | Sait0Yuuki/ArknightsAutoHelper | 5ecec0d120482c930181346cfdb8542090e169c1 | [
"MIT"
] | 254 | 2019-05-13T09:06:54.000Z | 2022-03-16T09:47:44.000Z | from __future__ import absolute_import
# flake8: noqa
# import apis into api package
from penguin_client.api.account_api import AccountApi
from penguin_client.api.formula_api import FormulaApi
from penguin_client.api.item_api import ItemApi
from penguin_client.api.notice_api import NoticeApi
from penguin_client.api.period_api import PeriodApi
from penguin_client.api.report_api import ReportApi
from penguin_client.api.result_api import ResultApi
from penguin_client.api.stage_api import StageApi
from penguin_client.api.website_statistics_api import WebsiteStatisticsApi
from penguin_client.api.zone_api import ZoneApi
from penguin_client.api._deprecated_ap_is_api import DeprecatedAPIsApi
| 40.882353 | 74 | 0.879137 | 103 | 695 | 5.631068 | 0.359223 | 0.208621 | 0.322414 | 0.37931 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001572 | 0.084892 | 695 | 16 | 75 | 43.4375 | 0.910377 | 0.058993 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3af6295d9bfb579e26509a64b445a1569e5d970c | 43 | py | Python | spotifyLocalExport/interfaz/__init__.py | ValdrST/spotify-local-export | 6fb8db9f20a8cd815b4cd85c1904cb580c82650e | [
"MIT"
] | 1 | 2020-07-23T18:55:36.000Z | 2020-07-23T18:55:36.000Z | spotifyLocalExport/interfaz/__init__.py | ValdrST/spotify-local-export | 6fb8db9f20a8cd815b4cd85c1904cb580c82650e | [
"MIT"
] | null | null | null | spotifyLocalExport/interfaz/__init__.py | ValdrST/spotify-local-export | 6fb8db9f20a8cd815b4cd85c1904cb580c82650e | [
"MIT"
] | null | null | null | #!/bin/python
from .Console import Console
| 14.333333 | 28 | 0.767442 | 6 | 43 | 5.5 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.116279 | 43 | 2 | 29 | 21.5 | 0.868421 | 0.27907 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a30bb6316f14ba1894594e5f92008714dfa55707 | 50,327 | py | Python | tests/test_wow_game_data_api.py | trevorphillipscoding/python-blizzardapi | e98e1ee38f4b336bc99baa668691c842a090109c | [
"MIT"
] | 10 | 2020-12-03T14:23:56.000Z | 2022-02-01T10:48:42.000Z | tests/test_wow_game_data_api.py | trevorphillipscoding/python-blizzardapi | e98e1ee38f4b336bc99baa668691c842a090109c | [
"MIT"
] | 65 | 2020-12-24T02:09:56.000Z | 2022-03-28T20:09:01.000Z | tests/test_wow_game_data_api.py | trevorphillips/python-blizzardapi | 92921abd44dbf684ff8b8c06c8dc74539d2e4721 | [
"MIT"
] | 6 | 2021-06-24T17:37:55.000Z | 2022-02-17T20:36:23.000Z | from blizzardapi import BlizzardApi
class TestWowGameDataApi:
def setup(self):
self.api = BlizzardApi("client_id", "client_secret")
self.api.wow.game_data._access_token = "access_token"
# Achievement API
def test_get_achievement_categories_index(self, success_response_mock):
self.api.wow.game_data.get_achievement_categories_index("us", "en_US")
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/achievement-category/index",
params=params,
)
def test_get_achievement_category(self, success_response_mock):
self.api.wow.game_data.get_achievement_category("us", "en_US", 81)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/achievement-category/81",
params=params,
)
def test_get_achievements_index(self, success_response_mock):
self.api.wow.game_data.get_achievements_index("us", "en_US")
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/achievement/index",
params=params,
)
def test_get_achievement(self, success_response_mock):
self.api.wow.game_data.get_achievement("us", "en_US", 6)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/achievement/6", params=params
)
def test_get_achievement_media(self, success_response_mock):
self.api.wow.game_data.get_achievement_media("us", "en_US", 6)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/media/achievement/6",
params=params,
)
# Auction House API
def test_get_auction_house_index(self, success_response_mock):
self.api.wow.game_data.get_auction_house_index("us", "en_US", 4372)
params = {
"namespace": "dynamic-classic-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/connected-realm/4372/auctions/index",
params=params,
)
def test_get_auctions_for_auction_house(self, success_response_mock):
self.api.wow.game_data.get_auctions_for_auction_house("us", "en_US", 4372, 2)
params = {
"namespace": "dynamic-classic-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/connected-realm/4372/auctions/2",
params=params,
)
def test_get_auctions(self, success_response_mock):
self.api.wow.game_data.get_auctions("us", "en_US", 1146)
params = {
"namespace": "dynamic-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/connected-realm/1146/auctions",
params=params,
)
# Azerite Essence API
def test_get_azerite_essences_index(self, success_response_mock):
self.api.wow.game_data.get_azerite_essences_index("us", "en_US")
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/azerite-essence/index",
params=params,
)
def test_get_azerite_essence(self, success_response_mock):
self.api.wow.game_data.get_azerite_essence("us", "en_US", 2)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/azerite-essence/2",
params=params,
)
def test_get_azerite_essence_media(self, success_response_mock):
self.api.wow.game_data.get_azerite_essence_media("us", "en_US", 2)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/media/azerite-essence/2",
params=params,
)
# Connected Realm API
def test_get_connected_realms_index(self, success_response_mock):
self.api.wow.game_data.get_connected_realms_index("us", "en_US")
params = {
"namespace": "dynamic-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/connected-realm/index",
params=params,
)
def test_get_connected_realm(self, success_response_mock):
self.api.wow.game_data.get_connected_realm("us", "en_US", 1)
params = {
"namespace": "dynamic-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/connected-realm/1",
params=params,
)
# Creature API
def test_get_creature_families_index(self, success_response_mock):
self.api.wow.game_data.get_creature_families_index("us", "en_US")
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/creature-family/index",
params=params,
)
def test_get_creature_family(self, success_response_mock):
self.api.wow.game_data.get_creature_family("us", "en_US", 1)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/creature-family/1",
params=params,
)
def test_get_creature_types_index(self, success_response_mock):
self.api.wow.game_data.get_creature_types_index("us", "en_US")
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/creature-type/index",
params=params,
)
def test_get_creature_type(self, success_response_mock):
self.api.wow.game_data.get_creature_type("us", "en_US", 1)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/creature-type/1",
params=params,
)
def test_get_creature(self, success_response_mock):
self.api.wow.game_data.get_creature("us", "en_US", 1)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/creature/1", params=params
)
def test_get_creature_display_media(self, success_response_mock):
self.api.wow.game_data.get_creature_display_media("us", "en_US", 1)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/media/creature-display/1",
params=params,
)
def test_get_creature_family_media(self, success_response_mock):
self.api.wow.game_data.get_creature_family_media("us", "en_US", 1)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/media/creature-family/1",
params=params,
)
# Guild Crest API
def test_get_guild_crest_components_index(self, success_response_mock):
self.api.wow.game_data.get_guild_crest_components_index("us", "en_US")
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/guild-crest/index",
params=params,
)
def test_get_guild_crest_border_media(self, success_response_mock):
self.api.wow.game_data.get_guild_crest_border_media("us", "en_US", 0)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/media/guild-crest/border/0",
params=params,
)
def test_get_guild_crest_emblem_media(self, success_response_mock):
self.api.wow.game_data.get_guild_crest_emblem_media("us", "en_US", 0)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/media/guild-crest/emblem/0",
params=params,
)
# Item API
def test_get_item_classes_index(self, success_response_mock):
self.api.wow.game_data.get_item_classes_index("us", "en_US")
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/item-class/index",
params=params,
)
def test_get_item_class(self, success_response_mock):
self.api.wow.game_data.get_item_class("us", "en_US", 2)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/item-class/2", params=params
)
def test_get_item_sets_index(self, success_response_mock):
self.api.wow.game_data.get_item_sets_index("us", "en_US")
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/item-set/index",
params=params,
)
def test_get_item_set(self, success_response_mock):
self.api.wow.game_data.get_item_set("us", "en_US", 1)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/item-set/1", params=params
)
def test_get_item_subclass(self, success_response_mock):
self.api.wow.game_data.get_item_subclass("us", "en_US", 2, 1)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/item-class/2/item-subclass/1",
params=params,
)
def test_get_item(self, success_response_mock):
self.api.wow.game_data.get_item("us", "en_US", 9999)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/item/9999", params=params
)
def test_get_item_media(self, success_response_mock):
self.api.wow.game_data.get_item_media("us", "en_US", 9999)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/media/item/9999",
params=params,
)
# Journal API
def test_get_journal_expansions_index(self, success_response_mock):
self.api.wow.game_data.get_journal_expansions_index("us", "en_US")
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/journal-expansion/index",
params=params,
)
def test_get_journal_expansion(self, success_response_mock):
self.api.wow.game_data.get_journal_expansion("us", "en_US", 68)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/journal-expansion/68",
params=params,
)
def test_get_journal_encounters_index(self, success_response_mock):
self.api.wow.game_data.get_journal_encounters_index("us", "en_US")
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/journal-encounter/index",
params=params,
)
def test_get_journal_encounter(self, success_response_mock):
self.api.wow.game_data.get_journal_encounter("us", "en_US", 89)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/journal-encounter/89",
params=params,
)
def test_get_journal_instances_index(self, success_response_mock):
self.api.wow.game_data.get_journal_instances_index("us", "en_US")
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/journal-instance/index",
params=params,
)
def test_get_journal_instance(self, success_response_mock):
self.api.wow.game_data.get_journal_instance("us", "en_US", 63)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/journal-instance/63",
params=params,
)
def test_get_journal_instance_media(self, success_response_mock):
self.api.wow.game_data.get_journal_instance_media("us", "en_US", 63)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/media/journal-instance/63",
params=params,
)
# Modified Crafting API
def test_get_modified_crafting_index(self, success_response_mock):
self.api.wow.game_data.get_modified_crafting_index("us", "en_US")
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/modified-crafting/index",
params=params,
)
def test_get_modified_crafting_category_index(self, success_response_mock):
self.api.wow.game_data.get_modified_crafting_category_index("us", "en_US")
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/modified-crafting/category/index",
params=params,
)
def test_get_modified_crafting_category(self, success_response_mock):
self.api.wow.game_data.get_modified_crafting_category("us", "en_US", 1)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/modified-crafting/category/1",
params=params,
)
def test_get_modified_crafting_reagent_slot_type_index(self, success_response_mock):
self.api.wow.game_data.get_modified_crafting_reagent_slot_type_index(
"us", "en_US"
)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/modified-crafting/reagent-slot-type/index",
params=params,
)
def test_get_modified_crafting_reagent_slot_type(self, success_response_mock):
self.api.wow.game_data.get_modified_crafting_reagent_slot_type(
"us", "en_US", 16
)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/modified-crafting/reagent-slot-type/16",
params=params,
)
# Mount API
def test_get_mounts_index(self, success_response_mock):
self.api.wow.game_data.get_mounts_index("us", "en_US")
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/mount/index", params=params
)
def test_get_mount(self, success_response_mock):
self.api.wow.game_data.get_mount("us", "en_US", 6)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/mount/6", params=params
)
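These request tests all hinge on `Mock.assert_called_with`, which compares only the most recent call — positional args and kwargs — exactly, and raises `AssertionError` on any mismatch. A minimal stdlib sketch (the URL below is illustrative, not a real endpoint):

```python
from unittest.mock import MagicMock

request = MagicMock()
request("https://example.test/data/wow/mount/index", params={"locale": "en_US"})

# Verifies the *most recent* call's args and kwargs exactly; a mismatch
# (different URL, missing kwarg, extra kwarg) raises AssertionError.
request.assert_called_with(
    "https://example.test/data/wow/mount/index", params={"locale": "en_US"}
)
print("call verified")
```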
# Mythic Keystone Affix API
def test_get_mythic_keystone_affixes_index(self, success_response_mock):
self.api.wow.game_data.get_mythic_keystone_affixes_index("us", "en_US")
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/keystone-affix/index",
params=params,
)
def test_get_mythic_keystone_affix(self, success_response_mock):
self.api.wow.game_data.get_mythic_keystone_affix("us", "en_US", 3)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/keystone-affix/3",
params=params,
)
def test_get_mythic_keystone_affix_media(self, success_response_mock):
self.api.wow.game_data.get_mythic_keystone_affix_media("us", "en_US", 1)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/media/keystone-affix/1",
params=params,
)
# Mythic Keystone Dungeon API
def test_get_mythic_keystone_dungeons_index(self, success_response_mock):
self.api.wow.game_data.get_mythic_keystone_dungeons_index("us", "en_US")
params = {
"namespace": "dynamic-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/mythic-keystone/dungeon/index",
params=params,
)
def test_get_mythic_keystone_dungeon(self, success_response_mock):
self.api.wow.game_data.get_mythic_keystone_dungeon("us", "en_US", 5)
params = {
"namespace": "dynamic-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/mythic-keystone/dungeon/5",
params=params,
)
def test_get_mythic_keystone_index(self, success_response_mock):
self.api.wow.game_data.get_mythic_keystone_index("us", "en_US")
params = {
"namespace": "dynamic-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/mythic-keystone/index",
params=params,
)
def test_get_mythic_keystone_periods_index(self, success_response_mock):
self.api.wow.game_data.get_mythic_keystone_periods_index("us", "en_US")
params = {
"namespace": "dynamic-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/mythic-keystone/period/index",
params=params,
)
def test_get_mythic_keystone_period(self, success_response_mock):
self.api.wow.game_data.get_mythic_keystone_period("us", "en_US", 641)
params = {
"namespace": "dynamic-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/mythic-keystone/period/641",
params=params,
)
def test_get_mythic_keystone_seasons_index(self, success_response_mock):
self.api.wow.game_data.get_mythic_keystone_seasons_index("us", "en_US")
params = {
"namespace": "dynamic-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/mythic-keystone/season/index",
params=params,
)
def test_get_mythic_keystone_season(self, success_response_mock):
self.api.wow.game_data.get_mythic_keystone_season("us", "en_US", 1)
params = {
"namespace": "dynamic-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/mythic-keystone/season/1",
params=params,
)
# Mythic Keystone Leaderboard API
def test_get_mythic_keystone_leaderboards_index(self, success_response_mock):
self.api.wow.game_data.get_mythic_keystone_leaderboards_index("us", "en_US", 1)
params = {
"namespace": "dynamic-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/connected-realm/1/mythic-leaderboard/index",
params=params,
)
def test_get_mythic_keystone_leaderboard(self, success_response_mock):
self.api.wow.game_data.get_mythic_keystone_leaderboard("us", "en_US", 1, 2, 3)
params = {
"namespace": "dynamic-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/connected-realm/1/mythic-leaderboard/2/period/3",
params=params,
)
# Mythic Raid Leaderboard API
def test_get_mythic_raid_leaderboard(self, success_response_mock):
self.api.wow.game_data.get_mythic_raid_leaderboard(
"us", "en_US", "uldir", "horde"
)
params = {
"namespace": "dynamic-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/leaderboard/hall-of-fame/uldir/horde",
params=params,
)
# Pet API
def test_get_pets_index(self, success_response_mock):
self.api.wow.game_data.get_pets_index("us", "en_US")
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/pet/index", params=params
)
def test_get_pet(self, success_response_mock):
self.api.wow.game_data.get_pet("us", "en_US", 39)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/pet/39", params=params
)
def test_get_pet_media(self, success_response_mock):
self.api.wow.game_data.get_pet_media("us", "en_US", 39)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/media/pet/39", params=params
)
def test_get_pet_abilities_index(self, success_response_mock):
self.api.wow.game_data.get_pet_abilities_index("us", "en_US")
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/pet-ability/index",
params=params,
)
def test_get_pet_ability(self, success_response_mock):
self.api.wow.game_data.get_pet_ability("us", "en_US", 110)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/pet-ability/110",
params=params,
)
def test_get_pet_ability_media(self, success_response_mock):
self.api.wow.game_data.get_pet_ability_media("us", "en_US", 110)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/media/pet-ability/110",
params=params,
)
# Playable Class API
def test_get_playable_classes_index(self, success_response_mock):
self.api.wow.game_data.get_playable_classes_index("us", "en_US")
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/playable-class/index",
params=params,
)
def test_get_playable_class(self, success_response_mock):
self.api.wow.game_data.get_playable_class("us", "en_US", 7)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/playable-class/7",
params=params,
)
def test_get_playable_class_media(self, success_response_mock):
self.api.wow.game_data.get_playable_class_media("us", "en_US", 7)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/media/playable-class/7",
params=params,
)
def test_get_pvp_talent_slots(self, success_response_mock):
self.api.wow.game_data.get_pvp_talent_slots("us", "en_US", 7)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/playable-class/7/pvp-talent-slots",
params=params,
)
# Playable Race API
def test_get_playable_races_index(self, success_response_mock):
self.api.wow.game_data.get_playable_races_index("us", "en_US")
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/playable-race/index",
params=params,
)
def test_get_playable_race(self, success_response_mock):
self.api.wow.game_data.get_playable_race("us", "en_US", 2)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/playable-race/2",
params=params,
)
# Playable Specialization API
def test_get_playable_specializations_index(self, success_response_mock):
self.api.wow.game_data.get_playable_specializations_index("us", "en_US")
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/playable-specialization/index",
params=params,
)
def test_get_playable_specialization(self, success_response_mock):
self.api.wow.game_data.get_playable_specialization("us", "en_US", 262)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/playable-specialization/262",
params=params,
)
def test_get_playable_specialization_media(self, success_response_mock):
self.api.wow.game_data.get_playable_specialization_media("us", "en_US", 262)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/media/playable-specialization/262",
params=params,
)
# Power Type API
def test_get_power_types_index(self, success_response_mock):
self.api.wow.game_data.get_power_types_index("us", "en_US")
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/power-type/index",
params=params,
)
def test_get_power_type(self, success_response_mock):
self.api.wow.game_data.get_power_type("us", "en_US", 0)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/power-type/0", params=params
)
# Profession API
def test_get_professions_index(self, success_response_mock):
self.api.wow.game_data.get_professions_index("us", "en_US")
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/profession/index",
params=params,
)
def test_get_profession(self, success_response_mock):
self.api.wow.game_data.get_profession("us", "en_US", 164)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/profession/164",
params=params,
)
def test_get_profession_media(self, success_response_mock):
self.api.wow.game_data.get_profession_media("us", "en_US", 164)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/media/profession/164",
params=params,
)
def test_get_profession_skill_tier(self, success_response_mock):
self.api.wow.game_data.get_profession_skill_tier("us", "en_US", 164, 2477)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/profession/164/skill-tier/2477",
params=params,
)
def test_get_recipe(self, success_response_mock):
self.api.wow.game_data.get_recipe("us", "en_US", 1631)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/recipe/1631", params=params
)
def test_get_recipe_media(self, success_response_mock):
self.api.wow.game_data.get_recipe_media("us", "en_US", 1631)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/media/recipe/1631",
params=params,
)
    # PvP Season API
def test_get_pvp_seasons_index(self, success_response_mock):
self.api.wow.game_data.get_pvp_seasons_index("us", "en_US")
params = {
"namespace": "dynamic-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/pvp-season/index",
params=params,
)
def test_get_pvp_season(self, success_response_mock):
self.api.wow.game_data.get_pvp_season("us", "en_US", 27)
params = {
"namespace": "dynamic-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/pvp-season/27", params=params
)
def test_get_pvp_leaderboards_index(self, success_response_mock):
self.api.wow.game_data.get_pvp_leaderboards_index("us", "en_US", 27)
params = {
"namespace": "dynamic-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/pvp-season/27/pvp-leaderboard/index",
params=params,
)
def test_get_pvp_leaderboard(self, success_response_mock):
self.api.wow.game_data.get_pvp_leaderboard("us", "en_US", 27, "3v3")
params = {
"namespace": "dynamic-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/pvp-season/27/pvp-leaderboard/3v3",
params=params,
)
def test_get_pvp_rewards_index(self, success_response_mock):
self.api.wow.game_data.get_pvp_rewards_index("us", "en_US", 27)
params = {
"namespace": "dynamic-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/pvp-season/27/pvp-reward/index",
params=params,
)
    # PvP Tier API
def test_get_pvp_tier_media(self, success_response_mock):
self.api.wow.game_data.get_pvp_tier_media("us", "en_US", 1)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/media/pvp-tier/1",
params=params,
)
def test_get_pvp_tiers_index(self, success_response_mock):
self.api.wow.game_data.get_pvp_tiers_index("us", "en_US")
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/pvp-tier/index",
params=params,
)
def test_get_pvp_tier(self, success_response_mock):
self.api.wow.game_data.get_pvp_tier("us", "en_US", 1)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/pvp-tier/1", params=params
)
# Quest API
def test_get_quests_index(self, success_response_mock):
self.api.wow.game_data.get_quests_index("us", "en_US")
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/quest/index", params=params
)
def test_get_quest(self, success_response_mock):
self.api.wow.game_data.get_quest("us", "en_US", 2)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/quest/2", params=params
)
def test_get_quest_categories_index(self, success_response_mock):
self.api.wow.game_data.get_quest_categories_index("us", "en_US")
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/quest/category/index",
params=params,
)
def test_get_quest_category(self, success_response_mock):
self.api.wow.game_data.get_quest_category("us", "en_US", 1)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/quest/category/1",
params=params,
)
def test_get_quest_areas_index(self, success_response_mock):
self.api.wow.game_data.get_quest_areas_index("us", "en_US")
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/quest/area/index",
params=params,
)
def test_get_quest_area(self, success_response_mock):
self.api.wow.game_data.get_quest_area("us", "en_US", 1)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/quest/area/1", params=params
)
def test_get_quest_types_index(self, success_response_mock):
self.api.wow.game_data.get_quest_types_index("us", "en_US")
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/quest/type/index",
params=params,
)
def test_get_quest_type(self, success_response_mock):
self.api.wow.game_data.get_quest_type("us", "en_US", 1)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/quest/type/1", params=params
)
# Realm API
def test_get_realms_index(self, success_response_mock):
self.api.wow.game_data.get_realms_index("us", "en_US")
params = {
"namespace": "dynamic-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/realm/index", params=params
)
def test_get_realm(self, success_response_mock):
self.api.wow.game_data.get_realm("us", "en_US", "tichondrius")
params = {
"namespace": "dynamic-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/realm/tichondrius",
params=params,
)
# Region API
def test_get_regions_index(self, success_response_mock):
self.api.wow.game_data.get_regions_index("us", "en_US")
params = {
"namespace": "dynamic-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/region/index", params=params
)
def test_get_region(self, success_response_mock):
self.api.wow.game_data.get_region("us", "en_US", 1)
params = {
"namespace": "dynamic-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/region/1", params=params
)
# Reputations API
def test_get_reputation_factions_index(self, success_response_mock):
self.api.wow.game_data.get_reputation_factions_index("us", "en_US")
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/reputation-faction/index",
params=params,
)
def test_get_reputation_faction(self, success_response_mock):
self.api.wow.game_data.get_reputation_faction("us", "en_US", 21)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/reputation-faction/21",
params=params,
)
def test_get_reputation_tiers_index(self, success_response_mock):
self.api.wow.game_data.get_reputation_tiers_index("us", "en_US")
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/reputation-tiers/index",
params=params,
)
def test_get_reputation_tier(self, success_response_mock):
self.api.wow.game_data.get_reputation_tier("us", "en_US", 2)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/reputation-tiers/2",
params=params,
)
# Spell API
def test_get_spell(self, success_response_mock):
self.api.wow.game_data.get_spell("us", "en_US", 196607)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/spell/196607", params=params
)
def test_get_spell_media(self, success_response_mock):
self.api.wow.game_data.get_spell_media("us", "en_US", 196607)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/media/spell/196607",
params=params,
)
# Talent API
def test_get_talents_index(self, success_response_mock):
self.api.wow.game_data.get_talents_index("us", "en_US")
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/talent/index", params=params
)
def test_get_talent(self, success_response_mock):
self.api.wow.game_data.get_talent("us", "en_US", 23106)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/talent/23106", params=params
)
def test_get_pvp_talents_index(self, success_response_mock):
self.api.wow.game_data.get_pvp_talents_index("us", "en_US")
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/pvp-talent/index",
params=params,
)
def test_get_pvp_talent(self, success_response_mock):
self.api.wow.game_data.get_pvp_talent("us", "en_US", 3)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/pvp-talent/3", params=params
)
# Title API
def test_get_titles_index(self, success_response_mock):
self.api.wow.game_data.get_titles_index("us", "en_US")
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/title/index", params=params
)
def test_get_title(self, success_response_mock):
self.api.wow.game_data.get_title("us", "en_US", 1)
params = {
"namespace": "static-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/title/1", params=params
)
    # WoW Token API
def test_get_tokens_index(self, success_response_mock):
self.api.wow.game_data.get_token_index("us", "en_US")
params = {
"namespace": "dynamic-us",
"locale": "en_US",
"access_token": "access_token",
}
success_response_mock.assert_called_with(
"https://us.api.blizzard.com/data/wow/token/index", params=params
)
from . import noteapi
import csv

# Append (image path, steering value rounded to 7 decimal places) rows from
# each driving log into a single combined file. Using context managers ensures
# the output file is flushed and closed; the original left it open and
# repeated the same read loop three times.
log_files = ['driving_log.csv', 'driving_log2.csv', 'driving_log3.csv']
with open('dataNormalized.csv', 'a', newline='') as out_file:
    writer = csv.writer(out_file)
    for log_name in log_files:
        with open(log_name, 'r', newline='') as csv_file:
            csv_reader = csv.reader(csv_file, delimiter=',')
            for row in csv_reader:
                normVal = float(row[3])
                writer.writerow([row[0], round(normVal, 7)])
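The same read-transform-write loop can be exercised in memory with `io.StringIO`; the row values below are made up for illustration:

```python
import csv
import io

# In-memory version of the loop above: keep the image path (column 0) and the
# steering value (column 3) rounded to 7 decimal places.
src = io.StringIO('center_01.jpg,left_01.jpg,right_01.jpg,0.123456789\n')
out = io.StringIO()
writer = csv.writer(out)
for row in csv.reader(src, delimiter=','):
    writer.writerow([row[0], round(float(row[3]), 7)])
print(out.getvalue().strip())  # center_01.jpg,0.1234568
```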
import pytest
import typing
import mimetypes
import json
import http.client
from hypothesis import given
from hypothesis import strategies as st
from unittest.mock import patch
from functools import wraps
from markusapi import Markus
def strategies_from_signature(method):
    """Build a fixed_dictionaries strategy with one entry per annotated parameter."""
    mapping = {k: st.from_type(v) for k, v in typing.get_type_hints(method).items() if k != 'return'}
    return st.fixed_dictionaries(mapping)
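A small stdlib-only sketch of the hint filtering this helper relies on (`greet` is a hypothetical function, not part of the API under test):

```python
import typing

def greet(name: str, times: int) -> str:
    return name * times

# get_type_hints includes the return annotation under the key 'return';
# the helper filters it out before building one strategy per parameter.
hints = typing.get_type_hints(greet)
params = {k: v for k, v in hints.items() if k != 'return'}
print(sorted(params))          # ['name', 'times']
print(hints['return'] is str)  # True
```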
def dummy_markus(scheme='http'):
    """Return a Markus client with a blank API key pointed at a local test URL."""
    return Markus('', f'{scheme}://localhost:8080')
DUMMY_RETURNS = {
"submit_request": b'{"f": "foo"}',
"decode_json_response": {'f': 'foo'},
"decode_text_response": '{"f": "foo"}',
"path": '/fake/path'
}
def file_name_strategy():
    """Strategy generating file names that end in a known mimetypes extension."""
    exts = '|'.join([f'\\{ext}' for ext in mimetypes.types_map.keys()])
    return st.from_regex(fr'\w+({exts})', fullmatch=True)
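For reference, the pattern handed to `st.from_regex` has this shape — a word followed by one registered extension; `submission.txt` is just an illustrative name:

```python
import mimetypes
import re

# Each mimetypes extension starts with '.', so prefixing a backslash escapes
# the dot, e.g. '\.txt'. The alternation therefore only matches real extensions.
exts = '|'.join(f'\\{ext}' for ext in mimetypes.types_map.keys())
pattern = re.compile(fr'\w+({exts})')
print(bool(pattern.fullmatch('submission.txt')))  # True
```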
class TestMarkusAPICalls:
def test_init_set_attributes(self):
obj = dummy_markus()
assert isinstance(obj, Markus)
    def test_init_bad_scheme(self):
        with pytest.raises(AssertionError):
            dummy_markus('ftp')
def test_init_parse_url(self):
api_key = ''
url = 'https://markus.com/api/users?id=1'
obj = Markus(api_key, url)
assert obj.parsed_url.scheme == 'https'
assert obj.parsed_url.netloc == 'markus.com'
assert obj.parsed_url.path == '/api/users'
assert obj.parsed_url.query == 'id=1'
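The assertions above mirror how `urllib.parse.urlparse` splits a URL; shown directly here with the same example URL:

```python
from urllib.parse import urlparse

# urlparse decomposes a URL into scheme, network location, path, and query.
parsed = urlparse('https://markus.com/api/users?id=1')
print(parsed.scheme)  # https
print(parsed.netloc)  # markus.com
print(parsed.path)    # /api/users
print(parsed.query)   # id=1
```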
@given(kwargs=strategies_from_signature(Markus.get_all_users))
@patch.object(Markus, 'submit_request', return_value=DUMMY_RETURNS['submit_request'])
@patch.object(Markus, 'decode_json_response')
def test_get_all_users(self, decode_json_response, submit_request, kwargs):
dummy_markus().get_all_users(**kwargs)
submit_request.assert_called_with(None, '/api/users.json', 'GET')
decode_json_response.assert_called_with(submit_request.return_value)
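The stacked `@patch.object` decorators used throughout this class are applied bottom-up, so the innermost patch's mock arrives first in the test's parameter list. A self-contained sketch with a hypothetical `Service` class:

```python
from unittest.mock import patch

class Service:
    def ping(self):
        return 'real ping'

    def pong(self):
        return 'real pong'

@patch.object(Service, 'ping', return_value='mock ping')
@patch.object(Service, 'pong', return_value='mock pong')
def exercise(mock_pong, mock_ping):
    # Decorators apply bottom-up: 'pong' (innermost) is patched first, so its
    # mock is the first positional argument.
    return Service().pong(), Service().ping()

print(exercise())  # ('mock pong', 'mock ping')
```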
@given(kwargs=strategies_from_signature(Markus.new_user))
@patch.object(Markus, 'submit_request', return_value=DUMMY_RETURNS['submit_request'])
def test_new_user(self, submit_request, kwargs):
dummy_markus().new_user(**kwargs)
submit_request.assert_called_once()
call_args = submit_request.call_args[0][0].values()
assert all(v in call_args for k,v in kwargs.items() if v is not None)
@given(kwargs=strategies_from_signature(Markus.get_assignments))
@patch.object(Markus, 'submit_request', return_value=DUMMY_RETURNS['submit_request'])
@patch.object(Markus, 'decode_json_response')
def test_get_assignments(self, decode_json_response, submit_request, kwargs):
dummy_markus().get_assignments(**kwargs)
submit_request.assert_called_with(None, '/api/assignments.json', 'GET')
decode_json_response.assert_called_with(submit_request.return_value)
@given(kwargs=strategies_from_signature(Markus.get_groups))
@patch.object(Markus, 'submit_request', return_value=DUMMY_RETURNS['submit_request'])
@patch.object(Markus, 'decode_json_response')
@patch.object(Markus, 'get_path', return_value=DUMMY_RETURNS['path'])
def test_get_groups(self, get_path, decode_json_response, submit_request, kwargs):
dummy_markus().get_groups(**kwargs)
        get_path.assert_called_with(assignments=kwargs['assignment_id'], groups=None)
        submit_request.assert_called_with(None, f'{get_path.return_value}.json', 'GET')
        decode_json_response.assert_called_with(submit_request.return_value)

    @given(kwargs=strategies_from_signature(Markus.get_groups_by_name))
    @patch.object(Markus, 'submit_request', return_value=DUMMY_RETURNS['submit_request'])
    @patch.object(Markus, 'decode_json_response')
    @patch.object(Markus, 'get_path', return_value=DUMMY_RETURNS['path'])
    def test_get_groups_by_name(self, get_path, decode_json_response, submit_request, kwargs):
        dummy_markus().get_groups_by_name(**kwargs)
        get_path.assert_called_with(assignments=kwargs['assignment_id'], groups=None, group_ids_by_name=None)
        submit_request.assert_called_with(None, f'{get_path.return_value}.json', 'GET')
        decode_json_response.assert_called_with(submit_request.return_value)

    @given(kwargs=strategies_from_signature(Markus.get_group))
    @patch.object(Markus, 'submit_request', return_value=DUMMY_RETURNS['submit_request'])
    @patch.object(Markus, 'decode_json_response')
    @patch.object(Markus, 'get_path', return_value=DUMMY_RETURNS['path'])
    def test_get_group(self, get_path, decode_json_response, submit_request, kwargs):
        dummy_markus().get_group(**kwargs)
        get_path.assert_called_with(assignments=kwargs['assignment_id'], groups=kwargs['group_id'])
        submit_request.assert_called_with(None, f'{get_path.return_value}.json', 'GET')
        decode_json_response.assert_called_with(submit_request.return_value)

    @given(kwargs=strategies_from_signature(Markus.get_feedback_files))
    @patch.object(Markus, 'submit_request', return_value=DUMMY_RETURNS['submit_request'])
    @patch.object(Markus, 'decode_json_response')
    @patch.object(Markus, 'get_path', return_value=DUMMY_RETURNS['path'])
    def test_get_feedback_files(self, get_path, decode_json_response, submit_request, kwargs):
        dummy_markus().get_feedback_files(**kwargs)
        get_path.assert_called_with(assignments=kwargs['assignment_id'], groups=kwargs['group_id'], feedback_files=None)
        submit_request.assert_called_with({}, f'{get_path.return_value}.json', 'GET')
        decode_json_response.assert_called_with(submit_request.return_value)

    @given(kwargs=strategies_from_signature(Markus.get_feedback_file))
    @patch.object(Markus, 'submit_request', return_value=DUMMY_RETURNS['submit_request'])
    @patch.object(Markus, 'decode_text_response')
    @patch.object(Markus, 'get_path', return_value=DUMMY_RETURNS['path'])
    def test_get_feedback_file(self, get_path, decode_text_response, submit_request, kwargs):
        dummy_markus().get_feedback_file(**kwargs)
        get_path.assert_called_with(assignments=kwargs['assignment_id'], groups=kwargs['group_id'], feedback_files=kwargs['feedback_file_id'])
        submit_request.assert_called_with({}, f'{get_path.return_value}.json', 'GET')
        decode_text_response.assert_called_with(submit_request.return_value)

    @given(kwargs=strategies_from_signature(Markus.get_grades_summary))
    @patch.object(Markus, 'submit_request', return_value=DUMMY_RETURNS['submit_request'])
    @patch.object(Markus, 'decode_text_response')
    @patch.object(Markus, 'get_path', return_value=DUMMY_RETURNS['path'])
    def test_get_grades_summary(self, get_path, decode_text_response, submit_request, kwargs):
        dummy_markus().get_grades_summary(**kwargs)
        # Was `get_path.get_grades_summary(...)`, which merely recorded a call on the
        # mock instead of asserting anything; assert on the mock's call args instead.
        get_path.assert_called_with(assignments=kwargs['assignment_id'], grades_summary=None)
        submit_request.assert_called_with({}, f'{get_path.return_value}.json', 'GET')
        decode_text_response.assert_called_with(submit_request.return_value)

    @given(kwargs=strategies_from_signature(Markus.new_marks_spreadsheet))
    @patch.object(Markus, 'submit_request', return_value=DUMMY_RETURNS['submit_request'])
    @patch.object(Markus, 'get_path', return_value=DUMMY_RETURNS['path'])
    def test_new_marks_spreadsheet(self, get_path, submit_request, kwargs):
        dummy_markus().new_marks_spreadsheet(**kwargs)
        get_path.assert_called_with(grade_entry_forms=None)
        params = {'short_identifier': kwargs['short_identifier'],
                  'description': kwargs['description'],
                  'date': kwargs['date'],
                  'is_hidden': kwargs['is_hidden'],
                  'show_total': kwargs['show_total'],
                  'grade_entry_items': kwargs['grade_entry_items']}
        submit_request.assert_called_with(params, get_path.return_value + '.json', 'POST', content_type='application/json')

    @given(kwargs=strategies_from_signature(Markus.update_marks_spreadsheet))
    @patch.object(Markus, 'submit_request', return_value=DUMMY_RETURNS['submit_request'])
    @patch.object(Markus, 'get_path', return_value=DUMMY_RETURNS['path'])
    def test_update_marks_spreadsheet(self, get_path, submit_request, kwargs):
        dummy_markus().update_marks_spreadsheet(**kwargs)
        get_path.assert_called_with(grade_entry_forms=kwargs['spreadsheet_id'])
        params = {'short_identifier': kwargs['short_identifier'],
                  'description': kwargs['description'],
                  'date': kwargs['date'],
                  'is_hidden': kwargs['is_hidden'],
                  'show_total': kwargs['show_total'],
                  'grade_entry_items': kwargs['grade_entry_items']}
        # Only non-None fields are sent in an update request
        for name in list(params):
            if params[name] is None:
                params.pop(name)
        submit_request.assert_called_with(params, get_path.return_value + '.json', 'PUT', content_type='application/json')

    @given(kwargs=strategies_from_signature(Markus.update_marks_spreadsheets_grades))
    @patch.object(Markus, 'submit_request', return_value=DUMMY_RETURNS['submit_request'])
    @patch.object(Markus, 'get_path', return_value=DUMMY_RETURNS['path'])
    def test_update_marks_spreadsheets_grades(self, get_path, submit_request, kwargs):
        dummy_markus().update_marks_spreadsheets_grades(**kwargs)
        get_path.assert_called_with(grade_entry_forms=kwargs['spreadsheet_id'], update_grades=None)
        params = {'user_name': kwargs['user_name'], 'grade_entry_items': kwargs['grades_per_column']}
        submit_request.assert_called_with(params, get_path.return_value + '.json', 'PUT', content_type='application/json')

    @given(kwargs=strategies_from_signature(Markus.get_marks_spreadsheets))
    @patch.object(Markus, 'submit_request', return_value=DUMMY_RETURNS['submit_request'])
    @patch.object(Markus, 'decode_json_response', return_value=[DUMMY_RETURNS['decode_json_response']])
    @patch.object(Markus, 'get_path', return_value=DUMMY_RETURNS['path'])
    def test_get_marks_spreadsheets(self, get_path, decode_json_response, submit_request, kwargs):
        dummy_markus().get_marks_spreadsheets(**kwargs)
        get_path.assert_called_with(grade_entry_forms=None)
        submit_request.assert_called_with({}, f'{get_path.return_value}.json', 'GET')
        decode_json_response.assert_called_with(submit_request.return_value)

    @given(kwargs=strategies_from_signature(Markus.get_marks_spreadsheet))
    @patch.object(Markus, 'submit_request', return_value=DUMMY_RETURNS['submit_request'])
    @patch.object(Markus, 'decode_text_response')
    @patch.object(Markus, 'get_path', return_value=DUMMY_RETURNS['path'])
    def test_get_marks_spreadsheet(self, get_path, decode_text_response, submit_request, kwargs):
        dummy_markus().get_marks_spreadsheet(**kwargs)
        get_path.assert_called_with(grade_entry_forms=kwargs['spreadsheet_id'])
        submit_request.assert_called_with({}, f'{get_path.return_value}.json', 'GET')
        decode_text_response.assert_called_with(submit_request.return_value)

    @given(kwargs=strategies_from_signature(Markus.upload_feedback_file), filename=file_name_strategy())
    @patch.object(Markus, 'submit_request', return_value=DUMMY_RETURNS['submit_request'])
    @patch.object(Markus, 'decode_json_response', return_value=[DUMMY_RETURNS['decode_json_response']])
    @patch.object(Markus, 'get_path', return_value=DUMMY_RETURNS['path'])
    def test_upload_feedback_file_good_title(self, get_path, decode_json_response, submit_request, kwargs, filename):
        dummy_markus().upload_feedback_file(**{**kwargs, 'title': filename})
        get_path.assert_called_with(assignments=kwargs['assignment_id'], groups=kwargs['group_id'], feedback_files=None)
        params, path, _request_type, _content_type = submit_request.call_args[0]
        assert path == get_path.return_value
        assert params.keys() == {'filename', 'file_content', 'mime_type'}

    @given(kwargs=strategies_from_signature(Markus.upload_feedback_file), filename=file_name_strategy())
    @patch.object(Markus, 'submit_request', return_value=DUMMY_RETURNS['submit_request'])
    @patch.object(Markus, 'get_path', return_value=DUMMY_RETURNS['path'])
    def test_upload_feedback_file_overwrite(self, get_path, submit_request, kwargs, filename):
        with patch.object(Markus, 'decode_json_response', return_value=[{'id': 1, 'filename': filename}]):
            dummy_markus().upload_feedback_file(**{**kwargs, 'title': filename})
        _params, _path, request_type, _content_type = submit_request.call_args[0]
        assert request_type == ('PUT' if kwargs['overwrite'] else 'POST')

    @given(kwargs=strategies_from_signature(Markus.upload_feedback_file), filename=st.from_regex(r'\w+', fullmatch=True))
    @patch.object(Markus, 'submit_request', return_value=DUMMY_RETURNS['submit_request'])
    @patch.object(Markus, 'decode_json_response', return_value=[DUMMY_RETURNS['decode_json_response']])
    @patch.object(Markus, 'get_path', return_value=DUMMY_RETURNS['path'])
    def test_upload_feedback_file_bad_title(self, get_path, decode_json_response, submit_request, kwargs, filename):
        try:
            dummy_markus().upload_feedback_file(**{**kwargs, 'title': filename, 'mime_type': None})
        except ValueError:
            return
        pytest.fail('expected a ValueError for a title without a file extension')

    @given(kwargs=strategies_from_signature(Markus.upload_test_group_results))
    @patch.object(Markus, 'submit_request', return_value=DUMMY_RETURNS['submit_request'])
    @patch.object(Markus, 'get_path', return_value=DUMMY_RETURNS['path'])
    def test_upload_test_group_results(self, get_path, submit_request, kwargs):
        dummy_markus().upload_test_group_results(**kwargs)
        params = {
            'test_run_id': kwargs['test_run_id'],
            'test_output': kwargs['test_output']
        }
        get_path.assert_called_with(assignments=kwargs['assignment_id'], groups=kwargs['group_id'], test_group_results=None)
        submit_request.assert_called_with(params, get_path.return_value, 'POST')

    @given(kwargs=strategies_from_signature(Markus.upload_annotations))
    @patch.object(Markus, 'submit_request', return_value=DUMMY_RETURNS['submit_request'])
    @patch.object(Markus, 'get_path', return_value=DUMMY_RETURNS['path'])
    def test_upload_annotations(self, get_path, submit_request, kwargs):
        dummy_markus().upload_annotations(**kwargs)
        params = {
            'annotations': kwargs['annotations'],
            'force_complete': kwargs['force_complete']
        }
        get_path.assert_called_with(assignments=kwargs['assignment_id'], groups=kwargs['group_id'], add_annotations=None)
        submit_request.assert_called_with(params, get_path.return_value, 'POST', 'application/json')

    @given(kwargs=strategies_from_signature(Markus.get_annotations))
    @patch.object(Markus, 'submit_request', return_value=DUMMY_RETURNS['submit_request'])
    @patch.object(Markus, 'decode_json_response')
    @patch.object(Markus, 'get_path', return_value=DUMMY_RETURNS['path'])
    def test_get_annotations(self, get_path, decode_json_response, submit_request, kwargs):
        dummy_markus().get_annotations(**kwargs)
        get_path.assert_called_with(assignments=kwargs['assignment_id'], groups=kwargs['group_id'], annotations=None)
        submit_request.assert_called_with(None, f'{get_path.return_value}.json', 'GET')
        decode_json_response.assert_called_with(submit_request.return_value)

    @given(kwargs=strategies_from_signature(Markus.update_marks_single_group))
    @patch.object(Markus, 'submit_request', return_value=DUMMY_RETURNS['submit_request'])
    @patch.object(Markus, 'get_path', return_value=DUMMY_RETURNS['path'])
    def test_update_marks_single_group(self, get_path, submit_request, kwargs):
        dummy_markus().update_marks_single_group(**kwargs)
        get_path.assert_called_with(assignments=kwargs['assignment_id'], groups=kwargs['group_id'], update_marks=None)
        submit_request.assert_called_with(kwargs['criteria_mark_map'], get_path.return_value, 'PUT')

    @given(kwargs=strategies_from_signature(Markus.upload_file_to_repo), filename=file_name_strategy())
    @patch.object(Markus, 'submit_request', return_value=DUMMY_RETURNS['submit_request'])
    @patch.object(Markus, 'decode_json_response', return_value=[DUMMY_RETURNS['decode_json_response']])
    @patch.object(Markus, 'get_path', return_value=DUMMY_RETURNS['path'])
    def test_upload_file_to_repo(self, get_path, decode_json_response, submit_request, kwargs, filename):
        dummy_markus().upload_file_to_repo(**{**kwargs, 'file_path': filename})
        get_path.assert_called_with(assignments=kwargs['assignment_id'], groups=kwargs['group_id'], submission_files=None)
        params, path, _request_type, _content_type = submit_request.call_args[0]
        assert path == get_path.return_value
        assert params.keys() == {'filename', 'file_content', 'mime_type'}

    @given(kwargs=strategies_from_signature(Markus.remove_file_from_repo), filename=file_name_strategy())
    @patch.object(Markus, 'submit_request', return_value=DUMMY_RETURNS['submit_request'])
    @patch.object(Markus, 'decode_json_response', return_value=[DUMMY_RETURNS['decode_json_response']])
    @patch.object(Markus, 'get_path', return_value=DUMMY_RETURNS['path'])
    def test_remove_file_from_repo(self, get_path, decode_json_response, submit_request, kwargs, filename):
        dummy_markus().remove_file_from_repo(**{**kwargs, 'file_path': filename})
        get_path.assert_called_with(assignments=kwargs['assignment_id'], groups=kwargs['group_id'], submission_files=None, remove_file=None)
        params, path, _request_type = submit_request.call_args[0]
        assert path == get_path.return_value
        assert params.keys() == {'filename'}

    @given(kwargs=strategies_from_signature(Markus.get_files_from_repo))
    @patch.object(Markus, 'submit_request', return_value=DUMMY_RETURNS['submit_request'])
    @patch.object(Markus, 'get_path', return_value=DUMMY_RETURNS['path'])
    def test_get_files_from_repo(self, get_path, submit_request, kwargs):
        dummy_markus().get_files_from_repo(**kwargs)
        get_path.assert_called_with(assignments=kwargs['assignment_id'], groups=kwargs['group_id'], submission_files=None)
        params, path, _request_type = submit_request.call_args[0]
        assert path == get_path.return_value + '.json'
        if kwargs.get('filename'):
            assert 'filename' in params.keys()
        if kwargs.get('collected'):
            assert 'collected' in params.keys()


class TestMarkusAPIHelpers:
    @given(kwargs=strategies_from_signature(Markus.submit_request),
           content_type=st.sampled_from(['application/x-www-form-urlencoded', 'application/json']))
    @patch.object(Markus, '_do_submit_request')
    def test_submit_request_check_types(self, do_submit_request, kwargs, content_type):
        dummy_markus().submit_request(**{**kwargs, 'content_type': content_type})
        params, _path, _request_type, headers = do_submit_request.call_args[0]
        assert isinstance(params, (str, type(None)))
        assert isinstance(headers, dict)

    @given(kwargs=strategies_from_signature(Markus.submit_request),
           content_type=st.sampled_from(['multipart/form-data', 'bad/content/type']))
    @patch.object(Markus, '_do_submit_request')
    def test_submit_request_check_catches_invalid(self, do_submit_request, kwargs, content_type):
        try:
            dummy_markus().submit_request(**{**kwargs, 'content_type': content_type})
        except ValueError:
            return
        params, _path, _request_type, _headers = do_submit_request.call_args[0]
        assert isinstance(params, (str, type(None)))

    @given(kwargs=strategies_from_signature(Markus._do_submit_request))
    @patch.object(http.client.HTTPConnection, 'request')
    @patch.object(http.client.HTTPConnection, 'getresponse')
    @patch.object(http.client.HTTPConnection, 'close')
    def test__do_submit_request_http(self, close, getresponse, request, kwargs):
        # Stacked @patch.object decorators inject mocks bottom-up, so the
        # bottommost patch (close) must be the first mock parameter.
        dummy_markus('http')._do_submit_request(**kwargs)
        request.assert_called_once()
        getresponse.assert_called_once()
        close.assert_called_once()

    @given(kwargs=strategies_from_signature(Markus._do_submit_request))
    @patch.object(http.client.HTTPSConnection, 'request')
    @patch.object(http.client.HTTPSConnection, 'getresponse')
    @patch.object(http.client.HTTPSConnection, 'close')
    def test__do_submit_request_https(self, close, getresponse, request, kwargs):
        dummy_markus('https')._do_submit_request(**kwargs)
        request.assert_called_once()
        getresponse.assert_called_once()
        close.assert_called_once()

    @given(kwargs=st.dictionaries(st.text(), st.text()))
    def test_get_path(self, kwargs):
        path = Markus.get_path(**kwargs)
        for k, v in kwargs.items():
            assert k + (f'/{v}' if v is not None else '') in path

    @given(kwargs=strategies_from_signature(Markus.decode_text_response))
    def test_decode_text_response(self, kwargs):
        result = Markus.decode_text_response(**kwargs)
        assert isinstance(result, str)

    # Renamed from test_decode_text_response: the duplicate name shadowed the
    # test above, so only one of the two ever ran.
    @given(in_dict=st.dictionaries(st.text(), st.text()))
    def test_decode_text_response_json(self, in_dict):
        res = json.dumps(in_dict).encode()
        result = Markus.decode_text_response(['', '', res])
        assert isinstance(result, str)