hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ea9b9d3950a94a2670d471c1cf96a3215301b02f | 118 | py | Python | app/core/tests/__init__.py | Jihyun-Choi/recipe-app-api | 3b054112582c03a5e0c43e223f275f2d211e254e | [
"MIT"
] | null | null | null | app/core/tests/__init__.py | Jihyun-Choi/recipe-app-api | 3b054112582c03a5e0c43e223f275f2d211e254e | [
"MIT"
] | null | null | null | app/core/tests/__init__.py | Jihyun-Choi/recipe-app-api | 3b054112582c03a5e0c43e223f275f2d211e254e | [
"MIT"
] | null | null | null | from .test_admin import AdminSiteTests
from .test_commands import CommandsTestCase
from .test_models import ModelTests | 39.333333 | 43 | 0.881356 | 15 | 118 | 6.733333 | 0.6 | 0.237624 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.09322 | 118 | 3 | 44 | 39.333333 | 0.943925 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
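The per-row statistics can be sanity-checked against the `content` field. The sketch below recomputes three of them for the row above; the formulas (byte size including joining newlines, avg_line_length = size / number of lines) are assumptions inferred from the listed values, not a documented definition.

```python
# Recompute size, avg_line_length and max_line_length for the first row's
# `content` field. Formulas inferred from the listed values: size counts
# bytes including the joining newlines, avg_line_length is size divided
# by the number of lines.
content = (
    "from .test_admin import AdminSiteTests\n"
    "from .test_commands import CommandsTestCase\n"
    "from .test_models import ModelTests"
)
lines = content.split("\n")
size = len(content.encode("utf-8"))                 # 118
avg_line_length = size / len(lines)                 # 39.333...
max_line_length = max(len(line) for line in lines)  # 43
print(size, round(avg_line_length, 6), max_line_length)
```

The three printed values match the row's `size`, `avg_line_length`, and `max_line_length` cells.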
5762b63e1b4555aee2f131e92a3608c9553d0891 | 8,886 | py | Python | rooms/tests/test_views.py | brianseidl/koda | b6e2e703afd6f3b8ccb16740824eac222c28be6e | [
"MIT"
] | null | null | null | rooms/tests/test_views.py | brianseidl/koda | b6e2e703afd6f3b8ccb16740824eac222c28be6e | [
"MIT"
] | null | null | null | rooms/tests/test_views.py | brianseidl/koda | b6e2e703afd6f3b8ccb16740824eac222c28be6e | [
"MIT"
] | null | null | null | from django.test import TestCase, RequestFactory
from django.contrib.auth.models import AnonymousUser, User
from rooms.models import Message, Room
from rooms.views import BaseView, BaseRoomView, DetailRoomView, BaseChatView, DetailChatView
from datetime import datetime, timedelta
from django.core.exceptions import PermissionDenied
class TestBaseView(TestCase):
def setUp(self):
self.request = RequestFactory().get('/')
self.view = BaseView.as_view()
def test_get_context_data_no_login(self):
""" Test the get_context_data for a user who is not logged in """
        self.request.user = AnonymousUser()
        response = self.view(self.request)
        self.assertEqual(response.context_data["user"], self.request.user)
self.assertEqual(response.context_data["logged_in"], False)
def test_get_context_data_yes_login(self):
""" Test get_context_data for a user who is logged in """
test_user = User.objects.create(username="test_user")
self.request.user = test_user
response = self.view(self.request)
self.assertEqual(response.context_data["user"], test_user)
self.assertEqual(response.context_data["logged_in"], True)
class TestBaseRoomView(TestCase):
def setUp(self):
self.rf = RequestFactory()
self.view = BaseRoomView.as_view()
self.room1 = Room.objects.create(name="test_room_1", id=1)
self.room2 = Room.objects.create(name="test_room_2", id=2)
self.room3 = Room.objects.create(name="test_room_3", id=3)
self.user = User.objects.create(username="test_user")
self.room1.add_user(self.user)
self.room2.add_user(self.user)
def test_get_no_login(self):
""" Test a request for a user who is not logged in """
response = self.client.get('/rooms/')
self.assertRedirects(response, '/accounts/login/?next=/rooms/')
def test_get_yes_login(self):
""" Test a request for a user who is logged in """
request = self.rf.get("room/")
request.user = self.user
response = self.view(request)
self.assertEqual(response.status_code, 200)
def test_get_context_data(self):
""" Test get_context_data for user who is logged in """
request = self.rf.get("room/")
request.user = self.user
not_user_room = self.room3
response = self.view(request)
# make sure room type is room
self.assertEqual(response.context_data["type"], "room")
# make sure not_user_room is not in the result set
self.assertNotIn(not_user_room, response.context_data["rooms"])
# make sure the correct rooms are loaded
self.assertEqual(response.context_data["rooms"], [self.room1, self.room2])
class TestDetailRoomView(TestCase):
def setUp(self):
self.rf = RequestFactory()
self.view = DetailRoomView.as_view()
self.room1 = Room.objects.create(name="test_room_1", id=1)
self.room2 = Room.objects.create(name="test_room_2", id=2)
self.room3 = Room.objects.create(name="test_room_3", id=3)
self.user = User.objects.create(username="test_user")
self.room1.add_user(self.user)
self.room2.add_user(self.user)
def test_get_no_login(self):
""" Test a request for a user who is not logged in """
request = self.rf.get("rooms/1/")
        request.user = AnonymousUser()
kwargs = {"room_id": 1}
with self.assertRaises(PermissionDenied):
response = self.view(request, **kwargs)
def test_get_yes_login(self):
""" Test a request for a user who is logged in """
request = self.rf.get("rooms/1/")
request.user = self.user
kwargs = {"room_id": 1}
response = self.view(request, **kwargs)
self.assertEqual(response.status_code, 200)
def test_room_not_authorized(self):
""" Test user tries to view room where he/she is not authorized """
request = self.rf.get("rooms/3/")
request.user = self.user
kwargs = {"room_id": 3}
with self.assertRaises(PermissionDenied):
response = self.view(request, **kwargs)
def test_room_is_actually_chat(self):
""" Test user is authorized to view room but room is actually a chat """
new_chat_room = Room.objects.create(name="test_room_4", id=4, rtype="chat")
new_chat_room.add_user(self.user)
request = self.rf.get("rooms/4/")
request.user = self.user
kwargs = {"room_id": 4}
with self.assertRaises(PermissionDenied):
response = self.view(request, **kwargs)
class TestBaseChatView(TestCase):
"""
I just want to point out that there must be 2 users in a chat room.
So before you get all triggered and what not that there are no test cases
for this, it's because I know that It will break and I dont have time
to make chats more robust. Only the admin can configure rooms and chats.
"""
def setUp(self):
self.rf = RequestFactory()
self.view = BaseChatView.as_view()
self.room1 = Room.objects.create(name="test_room_1", id=1, rtype="chat")
self.room2 = Room.objects.create(name="test_room_2", id=2, rtype="chat")
self.room3 = Room.objects.create(name="test_room_3", id=3, rtype="chat")
self.user = User.objects.create(username="test_user")
self.room1.add_user(self.user)
self.room2.add_user(self.user)
def test_get_no_login(self):
""" Test a request for a user who is not logged in """
response = self.client.get('/chats/')
self.assertRedirects(response, '/accounts/login/?next=/chats/')
def test_get_yes_login(self):
""" Test a request for a user who is logged in """
request = self.rf.get("chats/")
request.user = self.user
response = self.view(request)
self.assertEqual(response.status_code, 200)
def test_get_context_data(self):
""" Test get_context_data for user who is logged in """
request = self.rf.get("chat/")
request.user = self.user
not_user_room = self.room3
response = self.view(request)
        # make sure room type is chat
self.assertEqual(response.context_data["type"], "chat")
# make sure not_user_room is not in the result set
self.assertNotIn(not_user_room, response.context_data["rooms"])
# make sure the correct rooms are loaded
self.assertEqual(response.context_data["rooms"], [self.room1, self.room2])
def test_get_other_member(self):
""" Test get_other_member returns the other member in the group """
test_user2 = User.objects.create(username="test_user2")
self.room1.add_user(test_user2)
self.assertEqual(BaseChatView.get_other_member(self.room1, self.user), test_user2)
self.assertEqual(BaseChatView.get_other_member(self.room1, test_user2), self.user)
class TestDetailChatView(TestCase):
def setUp(self):
self.rf = RequestFactory()
self.view = DetailChatView.as_view()
self.room1 = Room.objects.create(name="test_room_1", id=1, rtype="chat")
self.room2 = Room.objects.create(name="test_room_2", id=2, rtype="chat")
self.room3 = Room.objects.create(name="test_room_3", id=3, rtype="chat")
self.user = User.objects.create(username="test_user")
self.room1.add_user(self.user)
self.room2.add_user(self.user)
def test_get_no_login(self):
""" Test a request for a user who is not logged in """
request = self.rf.get("chats/1/")
        request.user = AnonymousUser()
kwargs = {"chat_id": 1}
with self.assertRaises(PermissionDenied):
response = self.view(request, **kwargs)
def test_get_yes_login(self):
""" Test a request for a user who is logged in """
request = self.rf.get("chats/1/")
request.user = self.user
kwargs = {"chat_id": 1}
response = self.view(request, **kwargs)
self.assertEqual(response.status_code, 200)
def test_room_not_authorized(self):
""" Test user tries to view room where he/she is not authorized """
request = self.rf.get("chats/3/")
request.user = self.user
kwargs = {"chat_id": 3}
with self.assertRaises(PermissionDenied):
response = self.view(request, **kwargs)
    def test_chat_is_actually_room(self):
        """ Test user is authorized to view the chat but it is actually a room """
new_chat_room = Room.objects.create(name="test_room_4", id=4, rtype="room")
new_chat_room.add_user(self.user)
request = self.rf.get("chats/4/")
request.user = self.user
kwargs = {"chat_id": 4}
with self.assertRaises(PermissionDenied):
response = self.view(request, **kwargs) | 40.208145 | 90 | 0.645622 | 1,209 | 8,886 | 4.602978 | 0.114971 | 0.043127 | 0.043127 | 0.05283 | 0.832345 | 0.804852 | 0.786882 | 0.743037 | 0.716801 | 0.690925 | 0 | 0.01342 | 0.236889 | 8,886 | 221 | 91 | 40.208145 | 0.807256 | 0.158564 | 0 | 0.673333 | 0 | 0 | 0.070407 | 0.007899 | 0 | 0 | 0 | 0 | 0.16 | 1 | 0.146667 | false | 0 | 0.04 | 0 | 0.22 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
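Rows like the two above are typically consumed by filtering on the quality signals. A minimal sketch using plain dicts, with a small subset of the schema's columns and values copied from these two rows; the 0.7 cutoff is an arbitrary illustration, not part of the data:

```python
# Keep only rows whose alphanum_fraction clears a threshold. Column names
# follow the schema header; hexsha/size/fraction values are copied from
# the two rows above. The 0.7 cutoff is an example, not part of the data.
rows = [
    {"hexsha": "ea9b9d3950a94a2670d471c1cf96a3215301b02f",
     "size": 118, "lang": "Python", "alphanum_fraction": 0.881356},
    {"hexsha": "5762b63e1b4555aee2f131e92a3608c9553d0891",
     "size": 8886, "lang": "Python", "alphanum_fraction": 0.645622},
]
kept = [r["hexsha"] for r in rows if r["alphanum_fraction"] > 0.7]
print(kept)  # only the first row's hash survives the filter
```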
57b8441b2ab1f324ccb8a383bf72a599d6adaf7e | 1,295 | py | Python | Config.py | DotaLab/DotalabCatcher | 683c654e209ad5782dba2004d81c2775fd40278c | [
"MIT"
] | null | null | null | Config.py | DotaLab/DotalabCatcher | 683c654e209ad5782dba2004d81c2775fd40278c | [
"MIT"
] | null | null | null | Config.py | DotaLab/DotalabCatcher | 683c654e209ad5782dba2004d81c2775fd40278c | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# encoding=utf-8
__author__ = 'Vietronic'
__date__ = '$2018-7-23$'
import json
class DatabaseConfig:
__name__ = 'DatabaseConfig'
def __init__(self):
        # Initialize configuration settings
        self.CONFIG_PATH = './config/database.json'
        # Load the configuration file
        with open(self.CONFIG_PATH, 'r', encoding='utf-8') as f:
            self.CONFIG = json.load(f)
def database(self):
return self.CONFIG["database"]
def user(self):
return self.CONFIG["user"]
def password(self):
return self.CONFIG["password"]
def table(self):
return self.CONFIG["table"]
def host(self):
return self.CONFIG["host"]
def port(self):
return self.CONFIG["port"]
def init_tables_path(self):
return self.CONFIG["init_tables_path"]
class ApiConfig:
__name__ = 'ApiConfig'
def __init__(self):
        # Initialize configuration settings
        self.CONFIG_PATH = './config/api.json'
        # Load the configuration file
        with open(self.CONFIG_PATH, 'r', encoding='utf-8') as f:
            self.CONFIG = json.load(f)
def api_key(self):
return "?api_key=" + self.CONFIG["api_key"]
def api_url(self):
return self.CONFIG["api_url"] | 20.887097 | 57 | 0.580695 | 156 | 1,295 | 4.583333 | 0.269231 | 0.20979 | 0.156643 | 0.223776 | 0.352448 | 0.352448 | 0.352448 | 0.352448 | 0.246154 | 0.246154 | 0 | 0.010834 | 0.287259 | 1,295 | 62 | 58 | 20.887097 | 0.763814 | 0.050193 | 0 | 0.307692 | 0 | 0 | 0.13551 | 0.017959 | 0 | 0 | 0 | 0 | 0 | 1 | 0.282051 | false | 0.051282 | 0.025641 | 0.230769 | 0.692308 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 6 |
17c40ece2706fe2fcee0a03fc4d1ba27909aa5f4 | 46 | py | Python | snowshu/adapters/target_adapters/postgres_adapter/__init__.py | norton120/snowshu | 3595972a2ab28350f0283c3703adc1ca4b26bec2 | [
"Apache-2.0"
] | 11 | 2020-02-27T23:09:43.000Z | 2022-03-30T08:19:49.000Z | snowshu/adapters/target_adapters/postgres_adapter/__init__.py | ankur112358/snowshu | f4f596d08e6441a96fe0adcbc699cf7613fc59e0 | [
"Apache-2.0"
] | 66 | 2020-02-20T17:07:20.000Z | 2022-03-18T19:53:20.000Z | snowshu/adapters/target_adapters/postgres_adapter/__init__.py | ankur112358/snowshu | f4f596d08e6441a96fe0adcbc699cf7613fc59e0 | [
"Apache-2.0"
] | 8 | 2020-02-20T00:29:26.000Z | 2022-03-29T14:59:41.000Z | from .postgres_adapter import PostgresAdapter
| 23 | 45 | 0.891304 | 5 | 46 | 8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.086957 | 46 | 1 | 46 | 46 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
17ee7945cdccd2a4d9d9bee9c68ad51f7c64d23b | 140 | py | Python | pyzipcin/db/constants.py | ravigoel08/Zipcin | 870c7a9e65084800fa0a63a2c2082447505b9247 | [
"MIT"
] | 4 | 2020-12-22T19:13:30.000Z | 2020-12-23T08:42:56.000Z | pyzipcin/db/constants.py | ravigoel08/Zipcin | 870c7a9e65084800fa0a63a2c2082447505b9247 | [
"MIT"
] | null | null | null | pyzipcin/db/constants.py | ravigoel08/Zipcin | 870c7a9e65084800fa0a63a2c2082447505b9247 | [
"MIT"
] | null | null | null | DB_URI = "sqlite:///E:\\learning\\studies\\pyzipcin\\pyzipcin\\modules\\database.db"
FILE_PATH = "E:\\learning\studies\pyzipcin\pyzipcin\db" | 70 | 84 | 0.742857 | 19 | 140 | 5.368421 | 0.578947 | 0.176471 | 0.313725 | 0.470588 | 0.627451 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.035714 | 140 | 2 | 85 | 70 | 0.755556 | 0 | 0 | 0 | 0 | 0 | 0.808511 | 0.808511 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
aa4678eec8ffb8dbcccb61d460a49e1435bd2e53 | 61 | py | Python | AtC_Beg_Con_141-150/ABC143/B.py | yosho-18/AtCoder | 50f6d5c92a01792552c31ac912ce1cd557b06fb0 | [
"MIT"
] | null | null | null | AtC_Beg_Con_141-150/ABC143/B.py | yosho-18/AtCoder | 50f6d5c92a01792552c31ac912ce1cd557b06fb0 | [
"MIT"
] | null | null | null | AtC_Beg_Con_141-150/ABC143/B.py | yosho-18/AtCoder | 50f6d5c92a01792552c31ac912ce1cd557b06fb0 | [
"MIT"
] | null | null | null | import numpy as np
y_train = np.array([0, 3, 1, 0])
print((y_train == 0) | (y_train == 1)) | 30.5 | 38 | 0.52459 | 13 | 61 | 2.230769 | 0.461538 | 0.62069 | 0.482759 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.122449 | 0.196721 | 61 | 2 | 38 | 30.5 | 0.469388 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
a4bba8729eafdd40a1d0578b576b058135ee6d78 | 1,882 | py | Python | test/test_func.py | RENCI/tx-autht | 5027cd0060daa14a0e02a700337c6441ceaad177 | [
"MIT"
] | null | null | null | test/test_func.py | RENCI/tx-autht | 5027cd0060daa14a0e02a700337c6441ceaad177 | [
"MIT"
] | 1 | 2021-05-11T18:11:02.000Z | 2021-05-11T18:11:02.000Z | test/test_func.py | RENCI/tx-autht | 5027cd0060daa14a0e02a700337c6441ceaad177 | [
"MIT"
] | null | null | null | import requests
def test_authorize():
# test apikey authorization
resp = requests.get("http://txautht:8080/authorize?apikey=wrongkey&provider=venderbilt&code=testAuth1234&return_url=http://tx-autht:8080")
assert resp.status_code == 401
# test provider parameter
resp = requests.get("http://txautht:8080/authorize?apikey=TEST123&code=testAuth1234&return_url=http://tx-autht:8080")
assert resp.status_code == 500
# test not supported provider parameter
resp = requests.get("http://txautht:8080/authorize?apikey=TEST123&provider=doesnotexist&code=testAuth1234&return_url=http://tx-autht:8080")
assert resp.status_code == 500
# test a valid code has to be provided for venderbilt provider user authentication
resp = requests.get("http://txautht:8080/authorize?apikey=TEST123&provider=venderbilt&return_url=http://tx-autht:8080")
assert resp.status_code == 400
resp = requests.get("http://txautht:8080/authorize?apikey=TEST123&provider=venderbilt&code=notvalidcode&return_url=http://tx-autht:8080")
assert resp.status_code != 200
# test a valid code for venderbilt provider succeeds in user authentication
resp = requests.get("http://txautht:8080/authorize?apikey=TEST123&provider=venderbilt&code=testAuth1234&return_url=http://tx-autht:8080")
assert resp.status_code == 200
# test a valid code for venderbilt provider succeeds in user authentication while redirect set to false
resp = requests.get(
"http://txautht:8080/authorize?apikey=TEST123&provider=venderbilt&code=testAuth1234&return_url=http://tx-autht:8080&redirect=false")
assert resp.status_code == 200
assert resp.json() == {
"access_level": "6",
"email": "kyle.mcguffin@vumc.org",
"first_name": "Test",
"last_name": "User",
"organization": "Test Org",
"username": "test-user"
}
| 55.352941 | 143 | 0.725292 | 245 | 1,882 | 5.497959 | 0.253061 | 0.059391 | 0.077951 | 0.098738 | 0.778025 | 0.76095 | 0.76095 | 0.76095 | 0.727543 | 0.727543 | 0 | 0.072184 | 0.146121 | 1,882 | 33 | 144 | 57.030303 | 0.766024 | 0.182784 | 0 | 0.16 | 0 | 0.28 | 0.576094 | 0.01437 | 0 | 0 | 0 | 0 | 0.32 | 1 | 0.04 | false | 0 | 0.04 | 0 | 0.08 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a4f6a51275ed2f74293d5916f8844fbd9243bc88 | 8,431 | py | Python | venv3/lib/python3.8/site-packages/tests/script/test_simulate_verbose.py | paul-romeo/pytest-in-60-minutes | a4817312081347737f87801c0623054eba599418 | [
"MIT"
] | 94 | 2016-05-23T17:13:11.000Z | 2021-12-03T23:06:45.000Z | venv3/lib/python3.8/site-packages/tests/script/test_simulate_verbose.py | paul-romeo/pytest-in-60-minutes | a4817312081347737f87801c0623054eba599418 | [
"MIT"
] | 39 | 2016-05-19T17:57:53.000Z | 2020-12-26T09:57:21.000Z | venv3/lib/python3.8/site-packages/tests/script/test_simulate_verbose.py | paul-romeo/pytest-in-60-minutes | a4817312081347737f87801c0623054eba599418 | [
"MIT"
] | 15 | 2016-05-23T20:22:37.000Z | 2019-12-27T21:13:04.000Z | import pytest
import punch
pytestmark = pytest.mark.slow
version_file_content = """
major = 1
minor = 0
patch = 0
"""
config_file_content_without_vcs = """
__config_version__ = 1
GLOBALS = {
'serializer': '{{major}}.{{minor}}.{{patch}}',
}
FILES = ["README.md"]
VERSION = ['major', 'minor', 'patch']
"""
config_file_content_with_git = """
__config_version__ = 1
GLOBALS = {
'serializer': '{{major}}.{{minor}}.{{patch}}',
}
FILES = ["README.md"]
VERSION = ['major', 'minor', 'patch']
VCS = {
'name': 'git'
}
"""
config_file_vcs_with_named_serializers = """
__config_version__ = 1
GLOBALS = {
'serializer': {
'full': '{{major}}.{{minor}}.{{patch}}'
}
}
FILES = ["README.md"]
VERSION = ['major', 'minor', 'patch']
VCS_SERIALIZER = 'full'
VCS = {
'name': 'git'
}
"""
config_file_vcs_with_named_serializers_no_vcs_serializer = """
__config_version__ = 1
GLOBALS = {
'serializer': {
'full': '{{major}}.{{minor}}.{{patch}}'
}
}
FILES = ["README.md"]
VERSION = ['major', 'minor', 'patch']
VCS = {
'name': 'git'
}
"""
config_file_content_with_git_flow = """
__config_version__ = 1
GLOBALS = {
'serializer': '{{major}}.{{minor}}.{{patch}}',
}
FILES = ["README.md"]
VERSION = ['major', 'minor', 'patch']
VCS = {
'name': 'git-flow'
}
"""
config_file_content_with_hg = """
__config_version__ = 1
GLOBALS = {
'serializer': '{{major}}.{{minor}}.{{patch}}',
}
FILES = ["README.md"]
VERSION = ['major', 'minor', 'patch']
VCS = {
'name': 'hg'
}
"""
@pytest.fixture
def verbose_output_without_vcs():
return """## Punch version {version}
# Current version
major=1
minor=0
patch=0
# New version
major=2
minor=0
patch=0
# Global version updates
- 1.0.0 -> 2.0.0
# Configured files
+ README.md:
- 1.0.0 -> 2.0.0
"""
@pytest.fixture
def verbose_output_with_git(verbose_output_without_vcs):
return verbose_output_without_vcs + """
# VCS
+ Commit message: {commit_message}
+ Create release branch: yes
+ Release branch: 2.0.0
+ Annotate tags: no
+ Annotation message:
"""
@pytest.fixture
def verbose_output_with_git_flow(verbose_output_without_vcs):
return verbose_output_without_vcs + """
# VCS
+ Commit message: {commit_message}
+ Release branch: release/2.0.0
"""
@pytest.fixture
def verbose_output_with_hg(verbose_output_without_vcs):
return verbose_output_without_vcs + """
# VCS
+ Commit message: {commit_message}
"""
def test_verbose(test_environment, verbose_output_without_vcs):
test_environment.ensure_file_is_present("README.md", "Version 1.0.0")
test_environment.ensure_file_is_present(
"punch_version.py",
version_file_content
)
test_environment.ensure_file_is_present(
"punch_config.py",
config_file_content_without_vcs
)
ret = test_environment.call(["punch", "--verbose", "--part", "major"])
assert not ret.stderr
assert ret.stdout == verbose_output_without_vcs.format(
version=punch.__version__
)
assert test_environment.get_file_content("README.md") == "Version 2.0.0"
def test_simulate(test_environment, verbose_output_without_vcs):
test_environment.ensure_file_is_present("README.md", "Version 1.0.0")
test_environment.ensure_file_is_present(
"punch_version.py",
version_file_content
)
test_environment.ensure_file_is_present(
"punch_config.py",
config_file_content_without_vcs
)
ret = test_environment.call([
"punch",
"--simulate",
"--part",
"major"
])
assert not ret.stderr
assert ret.stdout == verbose_output_without_vcs.format(
version=punch.__version__
)
assert test_environment.get_file_content("README.md") == "Version 1.0.0"
def test_simulate_and_verbose(test_environment, verbose_output_without_vcs):
test_environment.ensure_file_is_present("README.md", "Version 1.0.0")
test_environment.ensure_file_is_present(
"punch_version.py",
version_file_content
)
test_environment.ensure_file_is_present(
"punch_config.py",
config_file_content_without_vcs
)
ret = test_environment.call([
"punch",
"--simulate",
"--verbose",
"--part",
"major"
])
assert not ret.stderr
assert ret.stdout == verbose_output_without_vcs.format(
version=punch.__version__
)
assert test_environment.get_file_content("README.md") == "Version 1.0.0"
def test_simulate_with_git(test_environment, verbose_output_with_git):
test_environment.ensure_file_is_present("README.md", "Version 1.0.0")
test_environment.ensure_file_is_present(
"punch_version.py",
version_file_content
)
test_environment.ensure_file_is_present(
"punch_config.py",
config_file_content_with_git
)
test_environment.output(["git", "init"])
ret = test_environment.call([
"punch",
"--simulate",
"--part",
"major"
])
assert not ret.stderr
assert ret.stdout == verbose_output_with_git.format(
version=punch.__version__,
commit_message="Version updated 1.0.0 -> 2.0.0"
)
assert test_environment.get_file_content("README.md") == "Version 1.0.0"
def test_simulate_named_serializers(test_environment, verbose_output_with_git):
test_environment.ensure_file_is_present("README.md", "Version 1.0.0")
test_environment.ensure_file_is_present(
"punch_version.py",
version_file_content
)
test_environment.ensure_file_is_present(
"punch_config.py",
config_file_vcs_with_named_serializers
)
test_environment.output(["git", "init"])
ret = test_environment.call([
"punch",
"--simulate",
"--part",
"major"
])
assert not ret.stderr
assert ret.stdout == verbose_output_with_git.format(
version=punch.__version__,
commit_message="Version updated 1.0.0 -> 2.0.0"
)
assert test_environment.get_file_content("README.md") == "Version 1.0.0"
def test_simulate_named_serializers_no_vcs_serializer(
test_environment):
test_environment.ensure_file_is_present("README.md", "Version 1.0.0")
test_environment.ensure_file_is_present(
"punch_version.py",
version_file_content
)
test_environment.ensure_file_is_present(
"punch_config.py",
config_file_vcs_with_named_serializers_no_vcs_serializer
)
test_environment.output(["git", "init"])
ret = test_environment.call([
"punch",
"--simulate",
"--part",
"major"
])
assert ret.returncode == 1
assert test_environment.get_file_content("README.md") == "Version 1.0.0"
def test_simulate_with_git_flow(test_environment, verbose_output_with_git_flow):
test_environment.ensure_file_is_present("README.md", "Version 1.0.0")
test_environment.ensure_file_is_present(
"punch_version.py",
version_file_content
)
test_environment.ensure_file_is_present(
"punch_config.py",
config_file_content_with_git_flow
)
test_environment.output(["git", "init"])
test_environment.output(["git", "flow", "init", "-d"])
ret = test_environment.call([
"punch",
"--simulate",
"--part",
"major"
])
assert not ret.stderr
assert ret.stdout == verbose_output_with_git_flow.format(
version=punch.__version__,
commit_message="Version updated 1.0.0 -> 2.0.0"
)
assert test_environment.get_file_content("README.md") == "Version 1.0.0"
def test_simulate_with_hg(test_environment, verbose_output_with_hg):
test_environment.ensure_file_is_present("README.md", "Version 1.0.0")
test_environment.ensure_file_is_present(
"punch_version.py",
version_file_content
)
test_environment.ensure_file_is_present(
"punch_config.py",
config_file_content_with_hg
)
test_environment.output(["hg", "init"])
ret = test_environment.call([
"punch",
"--simulate",
"--part",
"major"
])
assert not ret.stderr
assert ret.stdout == verbose_output_with_hg.format(
version=punch.__version__,
commit_message="Version updated 1.0.0 -> 2.0.0"
)
assert test_environment.get_file_content("README.md") == "Version 1.0.0"
| 21.56266 | 80 | 0.659115 | 1,032 | 8,431 | 4.988372 | 0.069767 | 0.157343 | 0.097902 | 0.11655 | 0.92871 | 0.887141 | 0.867327 | 0.853924 | 0.853924 | 0.82129 | 0 | 0.015842 | 0.206381 | 8,431 | 390 | 81 | 21.617949 | 0.75355 | 0 | 0 | 0.697917 | 0 | 0 | 0.309097 | 0.022536 | 0 | 0 | 0 | 0 | 0.079861 | 1 | 0.041667 | false | 0 | 0.006944 | 0.013889 | 0.0625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
10304b98df0b68425384c718b5327ce3e4d21bca | 96 | py | Python | venv/lib/python3.8/site-packages/clikit/api/args/exceptions.py | Retraces/UkraineBot | 3d5d7f8aaa58fa0cb8b98733b8808e5dfbdb8b71 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/clikit/api/args/exceptions.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/clikit/api/args/exceptions.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/15/24/63/b194091dfd848430cb4191b77a1cbb2a70d56271631a00cd4ef8a56590 | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.46875 | 0 | 96 | 1 | 96 | 96 | 0.427083 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |


# catboost/spark/catboost4j-spark/core/src/test/generate_canonical_results/catboost_classifier_test.py
# (repo: timgates42/catboost, license: Apache-2.0)
import json
import os
import tempfile
import catboost as cb
import numpy as np
import utils
from config import OUTPUT_DIR
def binary_classification_simple_on_dataframe():
learn_set_path = tempfile.mkstemp(prefix='catboost_learn_set_')[1]
cd_path = tempfile.mkstemp(prefix='catboost_cd_')[1]
try:
utils.object_list_to_tsv(
[
(0.1, 0.2, 0.11, 1),
(0.97, 0.82, 0.33, 2),
(0.13, 0.22, 0.23, 2),
(0.14, 0.18, 0.1, 1),
(0.9, 0.67, 0.17, 2),
(0.66, 0.1, 0.31, 1)
],
learn_set_path
)
with open(cd_path, 'w') as cd:
cd.write('3\tTarget')
model = utils.run_dist_train(
['--iterations', '20',
'--loss-function', 'Logloss',
'--learn-set', learn_set_path,
'--cd', cd_path
],
model_class=cb.CatBoostClassifier
)
train_pool = cb.Pool(learn_set_path, column_description=cd_path)
result = {}
raw_predictions = np.array(model.predict(train_pool, prediction_type='RawFormulaVal'), ndmin=2).transpose()
result['raw_prediction'] = np.hstack((np.negative(raw_predictions / 2), raw_predictions / 2)).tolist()
result['probability'] = model.predict_proba(train_pool).tolist()
result['prediction'] = model.predict(train_pool).tolist()
json.dump(
result,
fp=open(os.path.join(OUTPUT_DIR, 'binary_classification_simple_on_dataframe_predictions.json'), 'w'),
allow_nan=True,
indent=2
)
finally:
os.remove(learn_set_path)
os.remove(cd_path)
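The `raw_prediction` post-processing above converts CatBoost's single raw Logloss score `s` per object into the two-class form `(-s/2, s/2)` that the canonical results store. A standalone sketch of that transform (numpy only; `symmetrize_raw` is an illustrative helper, not part of the catboost API):

```python
import numpy as np

def symmetrize_raw(raw_scores):
    # Reshape 1-D raw scores s into a column vector, then build the
    # two-class matrix whose rows are (-s/2, s/2), mirroring the
    # hstack/negative logic used above.
    raw = np.array(raw_scores, ndmin=2).transpose()
    return np.hstack((np.negative(raw / 2), raw / 2))

two_class = symmetrize_raw([0.4, -1.2])
# rows sum to zero; softmax over (-s/2, s/2) equals sigmoid(s) for class 1
```

Because each row sums to zero, applying softmax to these columns recovers the same class-1 probability as the sigmoid of the raw score, which is why this form is comparable across the binary and multiclass result files.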
def simple_binary_classification():
learn_set_path = tempfile.mkstemp(prefix='catboost_learn_set_')[1]
cd_path = tempfile.mkstemp(prefix='catboost_cd_')[1]
try:
utils.object_list_to_tsv(
[
(0.1, 0.2, 0.11, "0", "query0", 1.0, "site1", 0.12),
(0.97, 0.82, 0.33, "0", "query0", 1.0, "site22", 0.18),
(0.13, 0.22, 0.23, "1", "query1", 0.0, "Site9", 1.0),
(0.14, 0.18, 0.1, "1", "Query 2", 0.5, "site12", 0.45),
(0.9, 0.67, 0.17, "0", "Query 2", 0.5, "site22", 1.0),
(0.66, 0.1, 0.31, "1", "Query 2", 0.5, "Site45", 2.0)
],
learn_set_path
)
with open(cd_path, 'w') as cd:
cd.write(
"3\tTarget\n"
+ "4\tGroupId\n"
+ "5\tGroupWeight\n"
+ "6\tSubgroupId\n"
+ "7\tWeight\n"
)
model = utils.run_dist_train(
['--iterations', '20',
'--loss-function', 'Logloss',
'--learn-set', learn_set_path,
'--cd', cd_path
],
model_class=cb.CatBoostClassifier
)
train_pool = cb.Pool(learn_set_path, column_description=cd_path)
result = {}
raw_predictions = np.array(model.predict(train_pool, prediction_type='RawFormulaVal'), ndmin=2).transpose()
result['raw_prediction'] = np.hstack((np.negative(raw_predictions / 2), raw_predictions / 2)).tolist()
result['probability'] = model.predict_proba(train_pool).tolist()
result['prediction'] = model.predict(train_pool).tolist()
json.dump(
result,
fp=open(os.path.join(OUTPUT_DIR, 'simple_binary_classification.json'), 'w'),
allow_nan=True,
indent=2
)
finally:
os.remove(learn_set_path)
os.remove(cd_path)
def binary_classification_with_target_border():
learn_set_path = tempfile.mkstemp(prefix='catboost_learn_set_')[1]
cd_path = tempfile.mkstemp(prefix='catboost_cd_')[1]
try:
utils.object_list_to_tsv(
[
(0.1, 0.2, 0.11, 0.12),
(0.97, 0.82, 0.33, 0.1),
(0.13, 0.22, 0.23, 0.7),
(0.14, 0.18, 0.1, 0.33),
(0.9, 0.67, 0.17, 0.82),
(0.66, 0.1, 0.31, 0.93)
],
learn_set_path
)
with open(cd_path, 'w') as cd:
cd.write('3\tTarget')
model = utils.run_dist_train(
['--iterations', '20',
'--target-border', '0.5',
'--loss-function', 'Logloss',
'--learn-set', learn_set_path,
'--cd', cd_path
],
model_class=cb.CatBoostClassifier
)
train_pool = cb.Pool(learn_set_path, column_description=cd_path)
result = {}
raw_predictions = np.array(model.predict(train_pool, prediction_type='RawFormulaVal'), ndmin=2).transpose()
result['raw_prediction'] = np.hstack((np.negative(raw_predictions / 2), raw_predictions / 2)).tolist()
result['probability'] = model.predict_proba(train_pool).tolist()
result['prediction'] = model.predict(train_pool).tolist()
json.dump(
result,
fp=open(os.path.join(OUTPUT_DIR, 'binary_classification_with_target_border.json'), 'w'),
allow_nan=True,
indent=2
)
finally:
os.remove(learn_set_path)
os.remove(cd_path)
def binary_classification_with_class_weights_map():
learn_set_path = tempfile.mkstemp(prefix='catboost_learn_set_')[1]
cd_path = tempfile.mkstemp(prefix='catboost_cd_')[1]
try:
utils.object_list_to_tsv(
[
(0.1, 0.2, 0.11, 0),
(0.97, 0.82, 0.33, 1),
(0.13, 0.22, 0.23, 1),
(0.14, 0.18, 0.1, 0),
(0.9, 0.67, 0.17, 0),
(0.66, 0.1, 0.31, 0)
],
learn_set_path
)
with open(cd_path, 'w') as cd:
cd.write('3\tTarget')
model = utils.run_dist_train(
['--iterations', '20',
'--class-weights', '1,2',
'--loss-function', 'Logloss',
'--learn-set', learn_set_path,
'--cd', cd_path,
],
model_class=cb.CatBoostClassifier
)
train_pool = cb.Pool(learn_set_path, column_description=cd_path)
result = {}
raw_predictions = np.array(model.predict(train_pool, prediction_type='RawFormulaVal'), ndmin=2).transpose()
result['raw_prediction'] = np.hstack((np.negative(raw_predictions / 2), raw_predictions / 2)).tolist()
result['probability'] = model.predict_proba(train_pool).tolist()
result['prediction'] = model.predict(train_pool).tolist()
json.dump(
result,
fp=open(os.path.join(OUTPUT_DIR, 'binary_classification_with_class_weights_map.json'), 'w'),
allow_nan=True,
indent=2
)
finally:
os.remove(learn_set_path)
os.remove(cd_path)
def binary_classification_with_weights():
learn_set_path = tempfile.mkstemp(prefix='catboost_learn_set_')[1]
cd_path = tempfile.mkstemp(prefix='catboost_cd_')[1]
try:
utils.object_list_to_tsv(
[
(0.1, 0.2, 0.11, 0, 1.0),
(0.97, 0.82, 0.33, 1, 2.0),
(0.13, 0.22, 0.23, 1, 2.0),
(0.14, 0.18, 0.1, 0, 1.0),
(0.9, 0.67, 0.17, 0, 1.0),
(0.66, 0.1, 0.31, 0, 1.0)
],
learn_set_path
)
with open(cd_path, 'w') as cd:
cd.write(
'3\tTarget'
+ '\n4\tWeight'
)
model = utils.run_dist_train(
['--iterations', '20',
'--loss-function', 'Logloss',
'--learn-set', learn_set_path,
'--cd', cd_path,
],
model_class=cb.CatBoostClassifier
)
train_pool = cb.Pool(learn_set_path, column_description=cd_path)
result = {}
raw_predictions = np.array(model.predict(train_pool, prediction_type='RawFormulaVal'), ndmin=2).transpose()
result['raw_prediction'] = np.hstack((np.negative(raw_predictions / 2), raw_predictions / 2)).tolist()
result['probability'] = model.predict_proba(train_pool).tolist()
result['prediction'] = model.predict(train_pool).tolist()
json.dump(
result,
fp=open(os.path.join(OUTPUT_DIR, 'binary_classification_with_weights.json'), 'w'),
allow_nan=True,
indent=2
)
finally:
os.remove(learn_set_path)
os.remove(cd_path)
def simple_multi_classification():
learn_set_path = tempfile.mkstemp(prefix='catboost_learn_set_')[1]
cd_path = tempfile.mkstemp(prefix='catboost_cd_')[1]
try:
utils.object_list_to_tsv(
[
(0.13, 0.22, 0.23, "1", "query1", 0.0, "Site9", 1.0),
(0.1, 0.2, 0.11, "2", "query0", 1.0, "site1", 0.12),
(0.97, 0.82, 0.33, "0", "query0", 1.0, "site22", 0.18),
(0.9, 0.67, 0.17, "0", "Query 2", 0.5, "site22", 1.0),
(0.66, 0.1, 0.31, "2", "Query 2", 0.5, "Site45", 2.0),
(0.14, 0.18, 0.1, "1", "Query 2", 0.5, "site12", 0.45)
],
learn_set_path
)
with open(cd_path, 'w') as cd:
cd.write(
"3\tTarget\n"
+ "4\tGroupId\n"
+ "5\tGroupWeight\n"
+ "6\tSubgroupId\n"
+ "7\tWeight\n"
)
model = utils.run_dist_train(
['--iterations', '20',
'--loss-function', 'MultiClass',
'--learn-set', learn_set_path,
'--cd', cd_path
],
model_class=cb.CatBoostClassifier
)
train_pool = cb.Pool(learn_set_path, column_description=cd_path)
result = {}
result['raw_prediction'] = model.predict(train_pool, prediction_type='RawFormulaVal').tolist()
result['probability'] = model.predict_proba(train_pool).tolist()
result['prediction'] = model.predict(train_pool).tolist()
json.dump(
result,
fp=open(os.path.join(OUTPUT_DIR, 'simple_multi_classification.json'), 'w'),
allow_nan=True,
indent=2
)
finally:
os.remove(learn_set_path)
os.remove(cd_path)
def main():
binary_classification_simple_on_dataframe()
simple_binary_classification()
binary_classification_with_target_border()
binary_classification_with_class_weights_map()
binary_classification_with_weights()
simple_multi_classification()


if __name__ == '__main__':
    main()


# platform/hwconf_data/efr32fg14v/modules/PIN/PIN_Snippets.py
# (repo: lenloe1/v2.7, license: Zlib)
"""
Generated from a template
"""
import efr32fg14v.PythonSnippet.RuntimeModel as RuntimeModel
from efr32fg14v.modules.PIN.PIN_Defs import PORT_PINS
def activate_runtime():
pass


# ironic/tests/unit/conductor/test_steps.py
# (repo: vexxhost/ironic, license: Apache-2.0)
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from unittest import mock
from oslo_config import cfg
from oslo_utils import uuidutils
from ironic.common import exception
from ironic.common import states
from ironic.conductor import steps as conductor_steps
from ironic.conductor import task_manager
from ironic import objects
from ironic.tests.unit.db import base as db_base
from ironic.tests.unit.db import utils as db_utils
from ironic.tests.unit.objects import utils as obj_utils
CONF = cfg.CONF
class NodeDeployStepsTestCase(db_base.DbTestCase):
def setUp(self):
super(NodeDeployStepsTestCase, self).setUp()
self.deploy_start = {
'step': 'deploy_start', 'priority': 50, 'interface': 'deploy'}
self.power_one = {
'step': 'power_one', 'priority': 40, 'interface': 'power'}
self.deploy_middle = {
'step': 'deploy_middle', 'priority': 40, 'interface': 'deploy'}
self.deploy_end = {
'step': 'deploy_end', 'priority': 20, 'interface': 'deploy'}
self.power_disable = {
'step': 'power_disable', 'priority': 0, 'interface': 'power'}
self.deploy_core = {
'step': 'deploy', 'priority': 100, 'interface': 'deploy'}
# enabled steps
self.deploy_steps = [self.deploy_start, self.power_one,
self.deploy_middle, self.deploy_end]
# Deploy step with argsinfo.
self.deploy_raid = {
'step': 'build_raid', 'priority': 0, 'interface': 'deploy',
'argsinfo': {'arg1': {'description': 'desc1', 'required': True},
'arg2': {'description': 'desc2'}}}
self.node = obj_utils.create_test_node(
self.context, driver='fake-hardware')
@mock.patch('ironic.drivers.modules.fake.FakeDeploy.get_deploy_steps',
autospec=True)
@mock.patch('ironic.drivers.modules.fake.FakePower.get_deploy_steps',
autospec=True)
@mock.patch('ironic.drivers.modules.fake.FakeManagement.get_deploy_steps',
autospec=True)
def test__get_deployment_steps(self, mock_mgt_steps, mock_power_steps,
mock_deploy_steps):
# Test getting deploy steps, with one driver returning None, two
# conflicting priorities, and asserting they are ordered properly.
mock_power_steps.return_value = [self.power_disable, self.power_one]
mock_deploy_steps.return_value = [
self.deploy_start, self.deploy_middle, self.deploy_end]
expected = self.deploy_steps + [self.power_disable]
with task_manager.acquire(
self.context, self.node.uuid, shared=False) as task:
steps = conductor_steps._get_deployment_steps(task, enabled=False)
self.assertEqual(expected, steps)
mock_mgt_steps.assert_called_once_with(mock.ANY, task)
mock_power_steps.assert_called_once_with(mock.ANY, task)
mock_deploy_steps.assert_called_once_with(mock.ANY, task)
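The assertion above encodes the ordering rule these tests rely on: steps run in descending priority across all interfaces, and zero-priority steps are only filtered out when `enabled=True`. A minimal sketch of the priority ordering (`order_steps` is an illustrative helper; ironic's real `_get_deployment_steps` also applies a fixed interface order to break priority ties):

```python
def order_steps(steps):
    # Stable sort by descending priority; equal-priority steps keep
    # their incoming relative order.
    return sorted(steps, key=lambda s: s['priority'], reverse=True)

steps = [
    {'step': 'deploy_end', 'priority': 20, 'interface': 'deploy'},
    {'step': 'deploy_start', 'priority': 50, 'interface': 'deploy'},
    {'step': 'power_one', 'priority': 40, 'interface': 'power'},
]
ordered = [s['step'] for s in order_steps(steps)]
# ['deploy_start', 'power_one', 'deploy_end']
```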
@mock.patch('ironic.drivers.modules.fake.FakeDeploy.get_deploy_steps',
autospec=True)
@mock.patch('ironic.drivers.modules.fake.FakePower.get_deploy_steps',
autospec=True)
@mock.patch('ironic.drivers.modules.fake.FakeManagement.get_deploy_steps',
autospec=True)
def test__get_deploy_steps_unsorted(self, mock_mgt_steps, mock_power_steps,
mock_deploy_steps):
mock_deploy_steps.return_value = [self.deploy_end,
self.deploy_start,
self.deploy_middle]
with task_manager.acquire(
self.context, self.node.uuid, shared=False) as task:
steps = conductor_steps._get_deployment_steps(task, enabled=False,
sort=False)
self.assertEqual(mock_deploy_steps.return_value, steps)
mock_mgt_steps.assert_called_once_with(mock.ANY, task)
mock_power_steps.assert_called_once_with(mock.ANY, task)
mock_deploy_steps.assert_called_once_with(mock.ANY, task)
@mock.patch('ironic.drivers.modules.fake.FakeDeploy.get_deploy_steps',
autospec=True)
@mock.patch('ironic.drivers.modules.fake.FakePower.get_deploy_steps',
autospec=True)
@mock.patch('ironic.drivers.modules.fake.FakeManagement.get_deploy_steps',
autospec=True)
def test__get_deployment_steps_only_enabled(
self, mock_mgt_steps, mock_power_steps, mock_deploy_steps):
# Test getting only deploy steps, with one driver returning None, two
# conflicting priorities, and asserting they are ordered properly.
# Should discard zero-priority deploy step.
mock_power_steps.return_value = [self.power_one, self.power_disable]
mock_deploy_steps.return_value = [self.deploy_end,
self.deploy_middle,
self.deploy_start]
with task_manager.acquire(
self.context, self.node.uuid, shared=True) as task:
steps = conductor_steps._get_deployment_steps(task, enabled=True)
self.assertEqual(self.deploy_steps, steps)
mock_mgt_steps.assert_called_once_with(mock.ANY, task)
mock_power_steps.assert_called_once_with(mock.ANY, task)
mock_deploy_steps.assert_called_once_with(mock.ANY, task)
@mock.patch.object(objects.DeployTemplate, 'list_by_names', autospec=True)
def test__get_deployment_templates_no_traits(self, mock_list):
with task_manager.acquire(
self.context, self.node.uuid, shared=False) as task:
templates = conductor_steps._get_deployment_templates(task)
self.assertEqual([], templates)
self.assertFalse(mock_list.called)
@mock.patch.object(objects.DeployTemplate, 'list_by_names',
autospec=True)
def test__get_deployment_templates(self, mock_list):
traits = ['CUSTOM_DT1', 'CUSTOM_DT2']
node = obj_utils.create_test_node(
self.context, uuid=uuidutils.generate_uuid(),
instance_info={'traits': traits})
template1 = obj_utils.get_test_deploy_template(self.context)
template2 = obj_utils.get_test_deploy_template(
self.context, name='CUSTOM_DT2', uuid=uuidutils.generate_uuid(),
steps=[{'interface': 'bios', 'step': 'apply_configuration',
'args': {}, 'priority': 1}])
mock_list.return_value = [template1, template2]
expected = [template1, template2]
with task_manager.acquire(
self.context, node.uuid, shared=False) as task:
templates = conductor_steps._get_deployment_templates(task)
self.assertEqual(expected, templates)
mock_list.assert_called_once_with(task.context, traits)
def test__get_steps_from_deployment_templates(self):
template1 = obj_utils.get_test_deploy_template(self.context)
template2 = obj_utils.get_test_deploy_template(
self.context, name='CUSTOM_DT2', uuid=uuidutils.generate_uuid(),
steps=[{'interface': 'bios', 'step': 'apply_configuration',
'args': {}, 'priority': 1}])
step1 = template1.steps[0]
step2 = template2.steps[0]
expected = [
{
'interface': step1['interface'],
'step': step1['step'],
'args': step1['args'],
'priority': step1['priority'],
},
{
'interface': step2['interface'],
'step': step2['step'],
'args': step2['args'],
'priority': step2['priority'],
}
]
with task_manager.acquire(
self.context, self.node.uuid, shared=False) as task:
steps = conductor_steps._get_steps_from_deployment_templates(
task, [template1, template2])
self.assertEqual(expected, steps)
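The expected list built above is just each template step reduced to its `interface`/`step`/`args`/`priority` fields. A sketch of that flattening (`steps_from_templates` is an illustrative stand-in for `_get_steps_from_deployment_templates`, assuming templates expose a `steps` attribute of dicts as in the test objects):

```python
from types import SimpleNamespace

def steps_from_templates(templates):
    # Flatten every template's steps into plain step dicts, keeping only
    # the four fields the deployment machinery consumes.
    return [{'interface': s['interface'], 'step': s['step'],
             'args': s['args'], 'priority': s['priority']}
            for t in templates for s in t.steps]

template = SimpleNamespace(steps=[{'interface': 'bios',
                                   'step': 'apply_configuration',
                                   'args': {}, 'priority': 1}])
flat = steps_from_templates([template])
```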
@mock.patch.object(conductor_steps, '_get_validated_user_deploy_steps',
autospec=True)
@mock.patch.object(conductor_steps, '_get_validated_steps_from_templates',
autospec=True)
@mock.patch.object(conductor_steps, '_get_deployment_steps', autospec=True)
def _test__get_all_deployment_steps(self, user_steps, template_steps,
driver_steps, expected_steps,
mock_steps, mock_validated_template,
mock_validated_user):
returned_user_steps = user_steps.copy()
mock_validated_user.return_value = returned_user_steps
mock_validated_template.return_value = template_steps
mock_steps.return_value = driver_steps
with task_manager.acquire(
self.context, self.node.uuid, shared=False) as task:
steps = conductor_steps._get_all_deployment_steps(task)
self.assertEqual(expected_steps, steps)
mock_validated_template.assert_called_once_with(task,
skip_missing=False)
mock_steps.assert_called_once_with(task, enabled=True, sort=False)
mock_validated_user.assert_called_once_with(
task, skip_missing=False)
def test__get_all_deployment_steps_no_steps(self):
# Nothing in -> nothing out.
user_steps = []
template_steps = []
driver_steps = []
expected_steps = []
self._test__get_all_deployment_steps(user_steps, template_steps,
driver_steps, expected_steps)
def test__get_all_deployment_steps_no_template_and_user_steps(self):
# Only driver steps in -> only driver steps out.
user_steps = []
template_steps = []
driver_steps = self.deploy_steps
expected_steps = self.deploy_steps
self._test__get_all_deployment_steps(user_steps, template_steps,
driver_steps, expected_steps)
def test__get_all_deployment_steps_no_user_and_driver_steps(self):
# Only template steps in -> only template steps out.
user_steps = []
template_steps = self.deploy_steps
driver_steps = []
expected_steps = self.deploy_steps
self._test__get_all_deployment_steps(user_steps, template_steps,
driver_steps, expected_steps)
def test__get_all_deployment_steps_no_template_and_driver_steps(self):
# Only template steps in -> only template steps out.
user_steps = self.deploy_steps
template_steps = []
driver_steps = []
expected_steps = self.deploy_steps
self._test__get_all_deployment_steps(user_steps, template_steps,
driver_steps, expected_steps)
def test__get_all_deployment_steps_template_and_driver_steps(self):
# Driver and template steps in -> driver and template steps out.
user_steps = []
template_steps = self.deploy_steps[:2]
driver_steps = self.deploy_steps[2:]
expected_steps = self.deploy_steps
self._test__get_all_deployment_steps(user_steps, template_steps,
driver_steps, expected_steps)
def test__get_all_deployment_steps_user_and_driver_steps(self):
# Driver and user steps in -> driver and user steps out.
user_steps = self.deploy_steps[:2]
template_steps = []
driver_steps = self.deploy_steps[2:]
expected_steps = self.deploy_steps
self._test__get_all_deployment_steps(user_steps, template_steps,
driver_steps, expected_steps)
def test__get_all_deployment_steps_user_and_template_steps(self):
# Template and user steps in -> template and user steps out.
user_steps = self.deploy_steps[:2]
template_steps = self.deploy_steps[2:]
driver_steps = []
expected_steps = self.deploy_steps
self._test__get_all_deployment_steps(user_steps, template_steps,
driver_steps, expected_steps)
def test__get_all_deployment_steps_all_steps(self):
# All steps in -> all steps out.
user_steps = self.deploy_steps[:1]
template_steps = self.deploy_steps[1:3]
driver_steps = self.deploy_steps[3:]
expected_steps = self.deploy_steps
self._test__get_all_deployment_steps(user_steps, template_steps,
driver_steps, expected_steps)
@mock.patch.object(conductor_steps, '_get_validated_steps_from_templates',
autospec=True)
@mock.patch.object(conductor_steps, '_get_deployment_steps', autospec=True)
def test__get_all_deployment_steps_skip_missing(self, mock_steps,
mock_validated):
template_steps = self.deploy_steps[:2]
driver_steps = self.deploy_steps[2:]
expected_steps = self.deploy_steps
mock_validated.return_value = template_steps
mock_steps.return_value = driver_steps
with task_manager.acquire(
self.context, self.node.uuid, shared=False) as task:
steps = conductor_steps._get_all_deployment_steps(
task, skip_missing=True)
self.assertEqual(expected_steps, steps)
mock_validated.assert_called_once_with(task, skip_missing=True)
mock_steps.assert_called_once_with(task, enabled=True, sort=False)
def test__get_all_deployment_steps_disable_core_steps(self):
# User steps can disable core driver steps.
template_steps = [self.deploy_core.copy()]
template_steps[0].update({'priority': 0})
driver_steps = [self.deploy_core]
expected_steps = []
self._test__get_all_deployment_steps([], template_steps, driver_steps,
expected_steps)
def test__get_all_deployment_steps_override_driver_steps(self):
# User steps override non-core driver steps.
template_steps = [step.copy() for step in self.deploy_steps[:2]]
template_steps[0].update({'priority': 200})
template_steps[1].update({'priority': 100})
driver_steps = self.deploy_steps
expected_steps = template_steps + self.deploy_steps[2:]
self._test__get_all_deployment_steps([], template_steps, driver_steps,
expected_steps)
def test__get_all_deployment_steps_override_template_steps(self):
# User steps override template steps.
user_steps = [step.copy() for step in self.deploy_steps[:1]]
user_steps[0].update({'priority': 300})
template_steps = [step.copy() for step in self.deploy_steps[:2]]
template_steps[0].update({'priority': 200})
template_steps[1].update({'priority': 100})
driver_steps = self.deploy_steps
expected_steps = (user_steps[:1]
+ template_steps[1:2]
+ self.deploy_steps[2:])
self._test__get_all_deployment_steps(user_steps, template_steps,
driver_steps, expected_steps)
def test__get_all_deployment_steps_duplicate_template_steps(self):
# Duplicate template steps override non-core driver steps.
# NOTE(mgoddard): This case is currently prevented by the API and
# conductor - the interface/step must be unique across all enabled
# steps. This test ensures that we can support this case, in case we
# choose to allow it in future.
template_steps = [self.deploy_start.copy(), self.deploy_start.copy()]
template_steps[0].update({'priority': 200})
template_steps[1].update({'priority': 100})
driver_steps = self.deploy_steps
# Each user invocation of the deploy_start step should be included, but
# not the default deploy_start from the driver.
expected_steps = template_steps + self.deploy_steps[1:]
self._test__get_all_deployment_steps([], template_steps, driver_steps,
expected_steps)
def test__get_all_deployment_steps_duplicate_template_and_user_steps(self):
# Duplicate user steps override non-core driver steps.
# NOTE(ajya):
# See also test__get_all_deployment_steps_duplicate_template_steps.
# As user steps provided via API arguments take over template steps,
# currently it will override all duplicated steps as it cannot know
        # which to keep. If duplicates become supported, then
        # _get_all_deployment_steps will need to be updated. Until then,
        # this case tests the currently desired outcome.
user_steps = [self.deploy_start.copy()]
user_steps[0].update({'priority': 300})
template_steps = [self.deploy_start.copy(), self.deploy_start.copy()]
template_steps[0].update({'priority': 200})
template_steps[1].update({'priority': 100})
driver_steps = self.deploy_steps
# Each user invocation of the deploy_start step should be included, but
# not the default deploy_start from the driver.
expected_steps = user_steps + self.deploy_steps[1:]
self._test__get_all_deployment_steps(user_steps, template_steps,
driver_steps, expected_steps)
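Taken together, these cases pin down a merge rule: user steps shadow template steps, both shadow driver steps with the same `(interface, step)` key, and any step left at priority 0 is treated as disabled. A simplified sketch of that rule (`merge_steps` is illustrative only; it omits the core-step validation and the final priority ordering):

```python
def merge_steps(driver_steps, override_steps):
    # Overrides replace driver steps sharing an (interface, step) key;
    # steps whose resulting priority is 0 are dropped as disabled.
    overridden = {(s['interface'], s['step']) for s in override_steps}
    merged = list(override_steps) + [
        s for s in driver_steps
        if (s['interface'], s['step']) not in overridden]
    return [s for s in merged if s['priority'] > 0]

driver = [{'step': 'deploy_start', 'interface': 'deploy', 'priority': 50},
          {'step': 'power_one', 'interface': 'power', 'priority': 40}]
disabled = merge_steps(driver, [{'step': 'deploy_start',
                                 'interface': 'deploy', 'priority': 0}])
boosted = merge_steps(driver, [{'step': 'deploy_start',
                                'interface': 'deploy', 'priority': 200}])
```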
@mock.patch.object(conductor_steps, '_get_validated_steps_from_templates',
autospec=True)
@mock.patch.object(conductor_steps, '_get_deployment_steps', autospec=True)
def test__get_all_deployment_steps_error(self, mock_steps, mock_validated):
mock_validated.side_effect = exception.InvalidParameterValue('foo')
with task_manager.acquire(
self.context, self.node.uuid, shared=False) as task:
self.assertRaises(exception.InvalidParameterValue,
conductor_steps._get_all_deployment_steps, task)
mock_validated.assert_called_once_with(task, skip_missing=False)
self.assertFalse(mock_steps.called)
@mock.patch.object(conductor_steps, '_get_all_deployment_steps',
autospec=True)
def test_set_node_deployment_steps(self, mock_steps):
mock_steps.return_value = self.deploy_steps
with task_manager.acquire(
self.context, self.node.uuid, shared=False) as task:
conductor_steps.set_node_deployment_steps(task)
self.node.refresh()
self.assertEqual(self.deploy_steps,
self.node.driver_internal_info['deploy_steps'])
self.assertEqual({}, self.node.deploy_step)
self.assertIsNone(
self.node.driver_internal_info['deploy_step_index'])
mock_steps.assert_called_once_with(task, skip_missing=False)
@mock.patch.object(conductor_steps, '_get_all_deployment_steps',
autospec=True)
def test_set_node_deployment_steps_skip_missing(self, mock_steps):
mock_steps.return_value = self.deploy_steps
with task_manager.acquire(
self.context, self.node.uuid, shared=False) as task:
conductor_steps.set_node_deployment_steps(task, skip_missing=True)
self.node.refresh()
self.assertEqual(self.deploy_steps,
self.node.driver_internal_info['deploy_steps'])
self.assertEqual({}, self.node.deploy_step)
self.assertIsNone(
self.node.driver_internal_info['deploy_step_index'])
mock_steps.assert_called_once_with(task, skip_missing=True)
@mock.patch.object(conductor_steps, '_get_deployment_steps', autospec=True)
def test__validate_user_deploy_steps(self, mock_steps):
mock_steps.return_value = self.deploy_steps
user_steps = [{'step': 'deploy_start', 'interface': 'deploy',
'priority': 100},
{'step': 'power_one', 'interface': 'power',
'priority': 200}]
with task_manager.acquire(self.context, self.node.uuid) as task:
result = conductor_steps._validate_user_deploy_steps(task,
user_steps)
mock_steps.assert_called_once_with(task, enabled=False, sort=False)
self.assertEqual(user_steps, result)
@mock.patch.object(conductor_steps, '_get_deployment_steps', autospec=True)
def test__validate_user_deploy_steps_no_steps(self, mock_steps):
mock_steps.return_value = self.deploy_steps
with task_manager.acquire(self.context, self.node.uuid) as task:
conductor_steps._validate_user_deploy_steps(task, [])
mock_steps.assert_called_once_with(task, enabled=False, sort=False)
@mock.patch.object(conductor_steps, '_get_deployment_steps', autospec=True)
def test__validate_user_deploy_steps_get_steps_exception(self, mock_steps):
mock_steps.side_effect = exception.InstanceDeployFailure('bad')
with task_manager.acquire(self.context, self.node.uuid) as task:
self.assertRaises(exception.InstanceDeployFailure,
conductor_steps._validate_user_deploy_steps,
task, [])
mock_steps.assert_called_once_with(task, enabled=False, sort=False)
@mock.patch.object(conductor_steps, '_get_deployment_steps', autospec=True)
def test__validate_user_deploy_steps_not_supported(self, mock_steps):
mock_steps.return_value = self.deploy_steps
user_steps = [{'step': 'power_one', 'interface': 'power',
'priority': 200},
{'step': 'bad_step', 'interface': 'deploy',
'priority': 100}]
with task_manager.acquire(self.context, self.node.uuid) as task:
self.assertRaisesRegex(exception.InvalidParameterValue,
"does not support.*bad_step",
conductor_steps._validate_user_deploy_steps,
task, user_steps)
mock_steps.assert_called_once_with(task, enabled=False, sort=False)
@mock.patch.object(conductor_steps, '_get_deployment_steps', autospec=True)
def test__validate_user_deploy_steps_skip_missing(self, mock_steps):
mock_steps.return_value = self.deploy_steps
user_steps = [{'step': 'power_one', 'interface': 'power',
'priority': 200},
{'step': 'bad_step', 'interface': 'deploy',
'priority': 100}]
with task_manager.acquire(self.context, self.node.uuid) as task:
result = conductor_steps._validate_user_deploy_steps(
task, user_steps, skip_missing=True)
self.assertEqual(user_steps[:1], result)
@mock.patch.object(conductor_steps, '_get_deployment_steps', autospec=True)
def test__validate_user_deploy_steps_invalid_arg(self, mock_steps):
mock_steps.return_value = self.deploy_steps
user_steps = [{'step': 'power_one', 'interface': 'power',
'args': {'arg1': 'val1', 'arg2': 'val2'},
'priority': 200}]
with task_manager.acquire(self.context, self.node.uuid) as task:
self.assertRaisesRegex(exception.InvalidParameterValue,
"power_one.*unexpected.*arg1",
conductor_steps._validate_user_deploy_steps,
task, user_steps)
mock_steps.assert_called_once_with(task, enabled=False, sort=False)
@mock.patch.object(conductor_steps, '_get_deployment_steps', autospec=True)
def test__validate_user_deploy_steps_missing_required_arg(self,
mock_steps):
mock_steps.return_value = [self.power_one, self.deploy_raid]
user_steps = [{'step': 'power_one', 'interface': 'power',
'priority': 200},
{'step': 'build_raid', 'interface': 'deploy',
'priority': 100}]
with task_manager.acquire(self.context, self.node.uuid) as task:
self.assertRaisesRegex(exception.InvalidParameterValue,
"build_raid.*missing.*arg1",
conductor_steps._validate_user_deploy_steps,
task, user_steps)
mock_steps.assert_called_once_with(task, enabled=False, sort=False)
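The two failure modes exercised here and in the previous test (unexpected args, missing required args) follow directly from a step's `argsinfo`. A sketch of that check (`check_step_args` is an illustrative helper, not ironic's `_validate_user_deploy_steps`):

```python
def check_step_args(step_args, argsinfo):
    # Compare user-supplied args against the step's declared argsinfo:
    # anything undeclared is unexpected; declared required args must appear.
    argsinfo = argsinfo or {}
    unexpected = set(step_args) - set(argsinfo)
    missing = {name for name, info in argsinfo.items()
               if info.get('required') and name not in step_args}
    return unexpected, missing

raid_argsinfo = {'arg1': {'description': 'desc1', 'required': True},
                 'arg2': {'description': 'desc2'}}
```

With `raid_argsinfo`, passing no args reports `arg1` as missing, while passing an undeclared `arg3` reports it as unexpected, matching the two `InvalidParameterValue` messages asserted above.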
@mock.patch.object(conductor_steps, '_get_deployment_steps', autospec=True)
def test__validate_user_deploy_steps_disable_non_core(self, mock_steps):
# Required arguments don't apply to disabled steps.
mock_steps.return_value = [self.power_one, self.deploy_raid]
user_steps = [{'step': 'power_one', 'interface': 'power',
'priority': 200},
{'step': 'build_raid', 'interface': 'deploy',
'priority': 0}]
with task_manager.acquire(self.context, self.node.uuid) as task:
result = conductor_steps._validate_user_deploy_steps(task,
user_steps)
mock_steps.assert_called_once_with(task, enabled=False, sort=False)
self.assertEqual(user_steps, result)
@mock.patch.object(conductor_steps, '_get_deployment_steps', autospec=True)
def test__validate_user_deploy_steps_disable_core(self, mock_steps):
mock_steps.return_value = [self.power_one, self.deploy_core]
user_steps = [{'step': 'power_one', 'interface': 'power',
'priority': 200},
{'step': 'deploy', 'interface': 'deploy', 'priority': 0}]
with task_manager.acquire(self.context, self.node.uuid) as task:
result = conductor_steps._validate_user_deploy_steps(task,
user_steps)
mock_steps.assert_called_once_with(task, enabled=False, sort=False)
self.assertEqual(user_steps, result)
@mock.patch.object(conductor_steps, '_get_deployment_steps', autospec=True)
def test__validate_user_deploy_steps_override_core(self, mock_steps):
mock_steps.return_value = [self.power_one, self.deploy_core]
user_steps = [{'step': 'power_one', 'interface': 'power',
'priority': 200},
{'step': 'deploy', 'interface': 'deploy',
'priority': 200}]
with task_manager.acquire(self.context, self.node.uuid) as task:
self.assertRaisesRegex(exception.InvalidParameterValue,
"deploy.*is a core step",
conductor_steps._validate_user_deploy_steps,
task, user_steps)
mock_steps.assert_called_once_with(task, enabled=False, sort=False)
@mock.patch.object(conductor_steps, '_get_deployment_steps', autospec=True)
def test__validate_user_deploy_steps_duplicates(self, mock_steps):
mock_steps.return_value = [self.power_one, self.deploy_core]
user_steps = [{'step': 'power_one', 'interface': 'power',
'priority': 200},
{'step': 'power_one', 'interface': 'power',
'priority': 100}]
with task_manager.acquire(self.context, self.node.uuid) as task:
self.assertRaisesRegex(exception.InvalidParameterValue,
"Duplicate deploy steps for "
"power.power_one",
conductor_steps._validate_user_deploy_steps,
task, user_steps)
mock_steps.assert_called_once_with(task, enabled=False, sort=False)
class NodeCleaningStepsTestCase(db_base.DbTestCase):
def setUp(self):
super(NodeCleaningStepsTestCase, self).setUp()
self.power_update = {
'step': 'update_firmware', 'priority': 10, 'interface': 'power'}
self.deploy_update = {
'step': 'update_firmware', 'priority': 10, 'interface': 'deploy'}
self.deploy_erase = {
'step': 'erase_disks', 'priority': 20, 'interface': 'deploy',
'abortable': True}
# Automated cleaning should be executed in this order
self.clean_steps = [self.deploy_erase, self.power_update,
self.deploy_update]
# Manual clean step
self.deploy_raid = {
'step': 'build_raid', 'priority': 0, 'interface': 'deploy',
'argsinfo': {'arg1': {'description': 'desc1', 'required': True},
'arg2': {'description': 'desc2'}}}
@mock.patch('ironic.drivers.modules.fake.FakeBIOS.get_clean_steps',
lambda self, task: [])
@mock.patch('ironic.drivers.modules.fake.FakeDeploy.get_clean_steps',
autospec=True)
@mock.patch('ironic.drivers.modules.fake.FakePower.get_clean_steps',
autospec=True)
def test__get_cleaning_steps(self, mock_power_steps, mock_deploy_steps):
# Test getting cleaning steps, with one driver returning None, two
# conflicting priorities, and asserting they are ordered properly.
node = obj_utils.create_test_node(
self.context, driver='fake-hardware',
provision_state=states.CLEANING,
target_provision_state=states.AVAILABLE)
mock_power_steps.return_value = [self.power_update]
mock_deploy_steps.return_value = [self.deploy_erase,
self.deploy_update]
with task_manager.acquire(
self.context, node.uuid, shared=False) as task:
steps = conductor_steps._get_cleaning_steps(task, enabled=False)
self.assertEqual(self.clean_steps, steps)
@mock.patch('ironic.drivers.modules.fake.FakeBIOS.get_clean_steps',
lambda self, task: [])
@mock.patch('ironic.drivers.modules.fake.FakeDeploy.get_clean_steps',
autospec=True)
@mock.patch('ironic.drivers.modules.fake.FakePower.get_clean_steps',
autospec=True)
def test__get_cleaning_steps_unsorted(self, mock_power_steps,
mock_deploy_steps):
node = obj_utils.create_test_node(
self.context, driver='fake-hardware',
provision_state=states.CLEANING,
target_provision_state=states.MANAGEABLE)
mock_deploy_steps.return_value = [self.deploy_raid,
self.deploy_update,
self.deploy_erase]
with task_manager.acquire(
self.context, node.uuid, shared=False) as task:
steps = conductor_steps._get_cleaning_steps(task, enabled=False,
sort=False)
self.assertEqual(mock_deploy_steps.return_value, steps)
@mock.patch('ironic.drivers.modules.fake.FakeDeploy.get_clean_steps',
autospec=True)
@mock.patch('ironic.drivers.modules.fake.FakePower.get_clean_steps',
autospec=True)
def test__get_cleaning_steps_only_enabled(self, mock_power_steps,
mock_deploy_steps):
# Test getting only cleaning steps, with one driver returning None, two
# conflicting priorities, and asserting they are ordered properly.
# Should discard zero-priority (manual) clean step
node = obj_utils.create_test_node(
self.context, driver='fake-hardware',
provision_state=states.CLEANING,
target_provision_state=states.AVAILABLE)
mock_power_steps.return_value = [self.power_update]
mock_deploy_steps.return_value = [self.deploy_erase,
self.deploy_update,
self.deploy_raid]
with task_manager.acquire(
self.context, node.uuid, shared=True) as task:
steps = conductor_steps._get_cleaning_steps(task, enabled=True)
self.assertEqual(self.clean_steps, steps)
@mock.patch.object(conductor_steps, '_validate_user_clean_steps',
autospec=True)
@mock.patch.object(conductor_steps, '_get_cleaning_steps', autospec=True)
def test_set_node_cleaning_steps_automated(self, mock_steps,
mock_validate_user_steps):
mock_steps.return_value = self.clean_steps
node = obj_utils.create_test_node(
self.context, driver='fake-hardware',
provision_state=states.CLEANING,
target_provision_state=states.AVAILABLE,
last_error=None,
clean_step=None)
with task_manager.acquire(
self.context, node.uuid, shared=False) as task:
conductor_steps.set_node_cleaning_steps(task)
node.refresh()
self.assertEqual(self.clean_steps,
node.driver_internal_info['clean_steps'])
self.assertEqual({}, node.clean_step)
mock_steps.assert_called_once_with(task, enabled=True)
self.assertFalse(mock_validate_user_steps.called)
@mock.patch.object(conductor_steps, '_validate_user_clean_steps',
autospec=True)
@mock.patch.object(conductor_steps, '_get_cleaning_steps', autospec=True)
def test_set_node_cleaning_steps_manual(self, mock_steps,
mock_validate_user_steps):
clean_steps = [self.deploy_raid]
mock_steps.return_value = self.clean_steps
mock_validate_user_steps.return_value = clean_steps
node = obj_utils.create_test_node(
self.context, driver='fake-hardware',
provision_state=states.CLEANING,
target_provision_state=states.MANAGEABLE,
last_error=None,
clean_step=None,
driver_internal_info={'clean_steps': clean_steps})
with task_manager.acquire(
self.context, node.uuid, shared=False) as task:
conductor_steps.set_node_cleaning_steps(task)
node.refresh()
self.assertEqual(clean_steps,
node.driver_internal_info['clean_steps'])
self.assertEqual({}, node.clean_step)
self.assertFalse(mock_steps.called)
mock_validate_user_steps.assert_called_once_with(task, clean_steps)
@mock.patch.object(conductor_steps, '_get_cleaning_steps', autospec=True)
def test__validate_user_clean_steps(self, mock_steps):
node = obj_utils.create_test_node(self.context)
mock_steps.return_value = self.clean_steps
user_steps = [{'step': 'update_firmware', 'interface': 'power'},
{'step': 'erase_disks', 'interface': 'deploy'}]
with task_manager.acquire(self.context, node.uuid) as task:
result = conductor_steps._validate_user_clean_steps(task,
user_steps)
mock_steps.assert_called_once_with(task, enabled=False, sort=False)
expected = [{'step': 'update_firmware', 'interface': 'power',
'priority': 10, 'abortable': False},
{'step': 'erase_disks', 'interface': 'deploy',
'priority': 20, 'abortable': True}]
self.assertEqual(expected, result)
@mock.patch.object(conductor_steps, '_get_cleaning_steps', autospec=True)
def test__validate_user_clean_steps_no_steps(self, mock_steps):
node = obj_utils.create_test_node(self.context)
mock_steps.return_value = self.clean_steps
with task_manager.acquire(self.context, node.uuid) as task:
conductor_steps._validate_user_clean_steps(task, [])
mock_steps.assert_called_once_with(task, enabled=False, sort=False)
@mock.patch.object(conductor_steps, '_get_cleaning_steps', autospec=True)
def test__validate_user_clean_steps_get_steps_exception(self, mock_steps):
node = obj_utils.create_test_node(self.context)
mock_steps.side_effect = exception.NodeCleaningFailure('bad')
with task_manager.acquire(self.context, node.uuid) as task:
self.assertRaises(exception.NodeCleaningFailure,
conductor_steps._validate_user_clean_steps,
task, [])
mock_steps.assert_called_once_with(task, enabled=False, sort=False)
@mock.patch.object(conductor_steps, '_get_cleaning_steps', autospec=True)
def test__validate_user_clean_steps_not_supported(self, mock_steps):
node = obj_utils.create_test_node(self.context)
mock_steps.return_value = [self.power_update, self.deploy_raid]
user_steps = [{'step': 'update_firmware', 'interface': 'power'},
{'step': 'bad_step', 'interface': 'deploy'}]
with task_manager.acquire(self.context, node.uuid) as task:
self.assertRaisesRegex(exception.InvalidParameterValue,
"does not support.*bad_step",
conductor_steps._validate_user_clean_steps,
task, user_steps)
mock_steps.assert_called_once_with(task, enabled=False, sort=False)
@mock.patch.object(conductor_steps, '_get_cleaning_steps', autospec=True)
def test__validate_user_clean_steps_invalid_arg(self, mock_steps):
node = obj_utils.create_test_node(self.context)
mock_steps.return_value = self.clean_steps
user_steps = [{'step': 'update_firmware', 'interface': 'power',
'args': {'arg1': 'val1', 'arg2': 'val2'}},
{'step': 'erase_disks', 'interface': 'deploy'}]
with task_manager.acquire(self.context, node.uuid) as task:
self.assertRaisesRegex(exception.InvalidParameterValue,
"update_firmware.*unexpected.*arg1",
conductor_steps._validate_user_clean_steps,
task, user_steps)
mock_steps.assert_called_once_with(task, enabled=False, sort=False)
@mock.patch.object(conductor_steps, '_get_cleaning_steps', autospec=True)
def test__validate_user_clean_steps_missing_required_arg(self, mock_steps):
node = obj_utils.create_test_node(self.context)
mock_steps.return_value = [self.power_update, self.deploy_raid]
user_steps = [{'step': 'update_firmware', 'interface': 'power'},
{'step': 'build_raid', 'interface': 'deploy'}]
with task_manager.acquire(self.context, node.uuid) as task:
self.assertRaisesRegex(exception.InvalidParameterValue,
"build_raid.*missing.*arg1",
conductor_steps._validate_user_clean_steps,
task, user_steps)
mock_steps.assert_called_once_with(task, enabled=False, sort=False)
@mock.patch.object(conductor_steps, '_get_deployment_templates',
autospec=True)
@mock.patch.object(conductor_steps, '_get_steps_from_deployment_templates',
autospec=True)
@mock.patch.object(conductor_steps, '_validate_user_deploy_steps',
autospec=True)
class GetValidatedStepsFromTemplatesTestCase(db_base.DbTestCase):
def setUp(self):
super(GetValidatedStepsFromTemplatesTestCase, self).setUp()
self.node = obj_utils.create_test_node(self.context,
driver='fake-hardware')
self.template = obj_utils.get_test_deploy_template(self.context)
def test_ok(self, mock_validate, mock_steps, mock_templates):
mock_templates.return_value = [self.template]
steps = [db_utils.get_test_deploy_template_step()]
mock_steps.return_value = steps
mock_validate.return_value = steps
with task_manager.acquire(
self.context, self.node.uuid, shared=False) as task:
result = conductor_steps._get_validated_steps_from_templates(task)
self.assertEqual(steps, result)
mock_templates.assert_called_once_with(task)
mock_steps.assert_called_once_with(task, [self.template])
mock_validate.assert_called_once_with(task, steps, mock.ANY,
skip_missing=False)
def test_skip_missing(self, mock_validate, mock_steps, mock_templates):
mock_templates.return_value = [self.template]
steps = [db_utils.get_test_deploy_template_step()]
mock_steps.return_value = steps
mock_validate.return_value = steps
with task_manager.acquire(
self.context, self.node.uuid, shared=False) as task:
result = conductor_steps._get_validated_steps_from_templates(
task, skip_missing=True)
self.assertEqual(steps, result)
mock_templates.assert_called_once_with(task)
mock_steps.assert_called_once_with(task, [self.template])
mock_validate.assert_called_once_with(task, steps, mock.ANY,
skip_missing=True)
def test_invalid_parameter_value(self, mock_validate, mock_steps,
mock_templates):
mock_templates.return_value = [self.template]
mock_validate.side_effect = exception.InvalidParameterValue('fake')
with task_manager.acquire(
self.context, self.node.uuid, shared=False) as task:
self.assertRaises(
exception.InvalidParameterValue,
conductor_steps._get_validated_steps_from_templates, task)
def test_instance_deploy_failure(self, mock_validate, mock_steps,
mock_templates):
mock_templates.return_value = [self.template]
mock_validate.side_effect = exception.InstanceDeployFailure('foo')
with task_manager.acquire(
self.context, self.node.uuid, shared=False) as task:
self.assertRaises(
exception.InstanceDeployFailure,
conductor_steps._get_validated_steps_from_templates, task)
@mock.patch.object(conductor_steps, '_get_validated_steps_from_templates',
autospec=True)
@mock.patch.object(conductor_steps, '_get_validated_user_deploy_steps',
autospec=True)
class ValidateUserDeployStepsAndTemplatesTestCase(db_base.DbTestCase):
def setUp(self):
super(ValidateUserDeployStepsAndTemplatesTestCase, self).setUp()
self.node = obj_utils.create_test_node(self.context,
driver='fake-hardware')
def test_ok(self, mock_validated_steps, mock_validated_template):
with task_manager.acquire(
self.context, self.node.uuid, shared=False) as task:
result = conductor_steps.validate_user_deploy_steps_and_templates(
task, {'key': 'value'})
self.assertIsNone(result)
mock_validated_template.assert_called_once_with(
task, skip_missing=False)
mock_validated_steps.assert_called_once_with(
task, {'key': 'value'}, skip_missing=False)
def test_skip_missing(self, mock_validated_steps, mock_validated_template):
with task_manager.acquire(
self.context, self.node.uuid, shared=False) as task:
result = conductor_steps.validate_user_deploy_steps_and_templates(
task, {'key': 'value'}, skip_missing=True)
self.assertIsNone(result)
mock_validated_template.assert_called_once_with(
task, skip_missing=True)
mock_validated_steps.assert_called_once_with(
task, {'key': 'value'}, skip_missing=True)
def test_error_on_template(
self, mock_validated_steps, mock_validated_template):
with task_manager.acquire(
self.context, self.node.uuid, shared=False) as task:
mock_validated_template.side_effect =\
exception.InvalidParameterValue('foo')
self.assertRaises(
exception.InvalidParameterValue,
conductor_steps.validate_user_deploy_steps_and_templates,
task,
{'key': 'value'})
mock_validated_template.assert_called_once_with(
task, skip_missing=False)
def test_error_on_usersteps(
self, mock_validated_steps, mock_validated_template):
with task_manager.acquire(
self.context, self.node.uuid, shared=False) as task:
mock_validated_steps.side_effect =\
exception.InvalidParameterValue('foo')
self.assertRaises(
exception.InvalidParameterValue,
conductor_steps.validate_user_deploy_steps_and_templates,
task,
{'key': 'value'})
mock_validated_template.assert_called_once_with(
task, skip_missing=False)
mock_validated_steps.assert_called_once_with(
task, {'key': 'value'}, skip_missing=False)
@mock.patch.object(conductor_steps, '_validate_user_deploy_steps',
autospec=True)
class ValidateUserDeployStepsTestCase(db_base.DbTestCase):
def setUp(self):
super(ValidateUserDeployStepsTestCase, self).setUp()
self.node = obj_utils.create_test_node(self.context,
driver='fake-hardware')
def test__get_validate_user_deploy_steps(self, mock_validated):
deploy_steps = [{"interface": "bios", "step": "factory_reset",
"priority": 95}]
with task_manager.acquire(
self.context, self.node.uuid, shared=False) as task:
result = conductor_steps._get_validated_user_deploy_steps(
task, deploy_steps)
self.assertIsNotNone(result)
mock_validated.assert_called_once_with(task, deploy_steps,
mock.ANY,
skip_missing=False)
def test__get_validate_user_deploy_steps_on_node(self, mock_validated):
deploy_steps = [{"interface": "bios", "step": "factory_reset",
"priority": 95}]
with task_manager.acquire(
self.context, self.node.uuid, shared=False) as task:
task.node.driver_internal_info['user_deploy_steps'] = deploy_steps
result = conductor_steps._get_validated_user_deploy_steps(task)
self.assertIsNotNone(result)
mock_validated.assert_called_once_with(task, deploy_steps,
mock.ANY,
skip_missing=False)
def test__get_validate_user_deploy_steps_no_steps(self, mock_validated):
with task_manager.acquire(
self.context, self.node.uuid, shared=False) as task:
result = conductor_steps._get_validated_user_deploy_steps(task)
self.assertEqual([], result)
mock_validated.assert_not_called()
# Prolog/PrologFact.py (josericardojr/BinG, MIT)
class PrologFact:
def __init__(self, fact):
self.fact = fact
def get_fact(self):
return self.fact
# test/tests/sys_stdout.py (aisk/pyston, BSD-2-Clause/Apache-2.0)
import sys
sys.stdout.write("hello world\n")
print >>sys.stdout, "hello world"
print sys.stdout.fileno()
# modules/2.79/bpy/types/Window.py (cmbasnett/fake-bpy-module, MIT)
def cursor_modal_restore():
pass
# bot.py (ibrag8998/workout, MIT)
import out_workout
from telebot import TeleBot, types
from time import sleep
from os import environ
bot = TeleBot(environ.get('TOKEN'))
states = dict() # user state -----------------------------------------------------------------------
main_kb = types.ReplyKeyboardMarkup(resize_keyboard = True)
main_kb.row('/teach', '/train')
ds_kb = types.ReplyKeyboardMarkup(resize_keyboard = True)
ds_kb.row('Динамика', 'Статика')
ds_kb.row('⬅️Назад')
statics_kb = types.ReplyKeyboardMarkup(resize_keyboard = True)
statics_kb.row('1', '2', '3', '4', '5')
statics_kb.row('6', '7', '8', '9', '10')
statics_kb.row('⬅️Назад')
dynamics_kb = types.ReplyKeyboardMarkup(resize_keyboard = True)
dynamics_kb.row('1', '2', '3', '4', '5')
dynamics_kb.row('6', '7', '8', '9', '10')
dynamics_kb.row('⬅️Назад')
trains_kb = types.ReplyKeyboardMarkup(resize_keyboard = True)
trains_kb.row('Икхван', 'Ганнибал')
trains_kb.row('⬅️Назад')
go_no_kb = types.ReplyKeyboardMarkup(resize_keyboard = True)
go_no_kb.row('Начинаем', 'Не сейчас')
done_kb = types.ReplyKeyboardMarkup(resize_keyboard = True, one_time_keyboard = True)
done_kb.row('Сделано!', 'Не справляюсь!')
done_kb.row('Отмена тренировки')
# start ------------------------------------------------------------------------------------------
@bot.message_handler(commands = ['start'])
def welcome(message):
bot.send_message(message.chat.id, out_workout.welcome, reply_markup = main_kb)
# command list -----------------------------------------------------------------------------------
@bot.message_handler(commands = ['help'])
def commands_list(message):
bot.send_message(message.chat.id, out_workout.supported_commands, reply_markup = main_kb)
# teach ------------------------------------------------------------------------------------------
@bot.message_handler(commands = ['teach'])
def teach_way_select(message):
uid = message.from_user.id
states[uid] = 'teach_select'
bot.send_message(message.chat.id, 'Выбери направление:', reply_markup = ds_kb)
# training ---------------------------------------------------------------------------------------
@bot.message_handler(commands = ['train'])
def train_select(message):
uid = message.from_user.id
states[uid] = 'train'
bot.send_message(message.chat.id, out_workout.train_select, reply_markup = trains_kb)
###################################################################################################
# LAMBDAS ----------------------------------------------------------------------------------------
# Item selection ---------------------------------------------------------------------------------
@bot.message_handler(func = lambda message: message.from_user.id in states
and states[message.from_user.id] == 'teach_select')
def teach(message):
uid = message.from_user.id
if message.text == '⬅️Назад':
bot.send_message(message.chat.id, 'Чем могу быть полезен?', reply_markup = main_kb)
elif message.text == 'Динамика':
bot.send_message(message.chat.id, out_workout.dynamics, reply_markup = dynamics_kb)
states[uid] = 'teach_dynamics'
elif message.text == 'Статика':
bot.send_message(message.chat.id, out_workout.statics, reply_markup = statics_kb)
states[uid] = 'teach_statics'
else:
bot.send_message(message.chat.id, out_workout.supported_commands, reply_markup = main_kb)
# train ------------------------------------------------------------------------------------------
@bot.message_handler(func = lambda message: message.from_user.id in states
and states[message.from_user.id] == 'train')
def train(message):
uid = message.from_user.id
if message.text == '⬅️Назад':
bot.send_message(message.chat.id, 'Чем могу быть полезен?', reply_markup = main_kb)
elif message.text == 'Икхван':
bot.send_message(message.chat.id, out_workout.ikhwan_train, reply_markup = go_no_kb)
states[uid] = 'ikhwan_train'
elif message.text == 'Ганнибал':
bot.send_message(message.chat.id, out_workout.hannibal_train, reply_markup = go_no_kb)
states[uid] = 'hannibal_train'
else:
bot.send_message(message.chat.id, out_workout.supported_commands, reply_markup = main_kb)
###################################################################################################
######################################### PLAIN COPY-PASTE ########################################
###################################################################################################
# tutorial step handlers -------------------------------------------------------------------------
@bot.message_handler(func = lambda message: message.from_user.id in states
and states[message.from_user.id] == 'teach_statics')
def teach_statics1(message):
uid = message.from_user.id
if message.text == '⬅️Назад':
bot.send_message(message.chat.id, 'Чем могу быть полезен?', reply_markup = main_kb)
elif message.text == '1':
bot.send_message(message.chat.id, out_workout.t_skills['flag'][0], reply_markup = done_kb)
states[uid] = 'flag_t_skills1'
elif message.text == '2':
bot.send_message(message.chat.id, out_workout.t_skills['dragon_flag'][0], reply_markup = done_kb)
states[uid] = 'dragon_flag_t_skills1'
elif message.text == '3':
f = open('front_lever.jpg', 'rb')
bot.send_photo(message.chat.id, f, reply_markup = done_kb)
f.close()
bot.send_message(message.chat.id, out_workout.t_skills['front_lever'][0], reply_markup = done_kb)
states[uid] = 'front_lever_t_skills1'
elif message.text == '4':
f = open('back_lever.jpg', 'rb')
bot.send_photo(message.chat.id, f, reply_markup = done_kb)
f.close()
bot.send_message(message.chat.id, out_workout.t_skills['back_lever'][0], reply_markup = done_kb)
states[uid] = 'back_lever_t_skills1'
elif message.text == '5':
bot.send_message(message.chat.id, out_workout.t_skills['planche'][0], reply_markup = done_kb)
states[uid] = 'planche_t_skills1'
elif message.text == '6':
bot.send_message(message.chat.id, out_workout.t_skills['maltese'][0], reply_markup = done_kb)
states[uid] = 'maltese_t_skills1'
elif message.text == '7':
bot.send_message(message.chat.id, out_workout.t_skills['hefesto'][0], reply_markup = done_kb)
states[uid] = 'hefesto_t_skills1'
elif message.text == '8':
bot.send_message(message.chat.id, out_workout.t_skills['angel'][0], reply_markup = done_kb)
states[uid] = 'angel_t_skills1'
elif message.text == '9':
bot.send_message(message.chat.id, out_workout.t_skills['one_arm_pu'][0], reply_markup = done_kb)
states[uid] = 'one_arm_pu_t_skills1'
elif message.text == '10':
bot.send_message(message.chat.id, out_workout.t_skills['handstand'][0], reply_markup = done_kb)
states[uid] = 'handstand_t_skills1'
else:
bot.send_message(message.chat.id, out_workout.supported_commands, reply_markup = main_kb)
@bot.message_handler(func = lambda message: message.from_user.id in states
and 't_skills1' in states[message.from_user.id])
def teach_statics2(message):
uid = message.from_user.id
chatid = message.chat.id
if message.text == 'Не справляюсь!':
bot.send_message(message.chat.id, 'Поработай над базой и все получится', reply_markup = main_kb)
elif message.text == 'Отмена тренировки':
bot.send_message(message.chat.id, 'Надеюсь, у тебя уважительная причина', reply_markup = main_kb)
elif message.text == 'Сделано!':
if states[uid] == 'flag_t_skills1':
bot.send_message(chatid, out_workout.t_skills['flag'][1], reply_markup = done_kb)
states[uid] = 'flag_t_skills2'
elif states[uid] == 'dragon_flag_t_skills1':
bot.send_message(chatid, out_workout.t_skills['dragon_flag'][1], reply_markup = done_kb)
states[uid] = 'dragon_flag_t_skills2'
elif states[uid] == 'front_lever_t_skills1':
bot.send_message(chatid, out_workout.t_skills['front_lever'][1], reply_markup = done_kb)
states[uid] = 'front_lever_t_skills2'
elif states[uid] == 'back_lever_t_skills1':
bot.send_message(chatid, out_workout.t_skills['back_lever'][1], reply_markup = done_kb)
states[uid] = 'back_lever_t_skills2'
elif states[uid] == 'planche_t_skills1':
bot.send_message(chatid, out_workout.t_skills['planche'][1], reply_markup = done_kb)
states[uid] = 'planche_t_skills2'
elif states[uid] == 'maltese_t_skills1':
bot.send_message(chatid, 'Отлично!', reply_markup = main_kb)
elif states[uid] == 'hefesto_t_skills1':
bot.send_message(chatid, out_workout.t_skills['hefesto'][1], reply_markup = done_kb)
states[uid] = 'hefesto_t_skills2'
elif states[uid] == 'angel_t_skills1':
bot.send_message(chatid, out_workout.t_skills['angel'][1], reply_markup = done_kb)
states[uid] = 'angel_t_skills2'
elif states[uid] == 'one_arm_pu_t_skills1':
bot.send_message(chatid, out_workout.t_skills['one_arm_pu'][1], reply_markup = done_kb)
states[uid] = 'one_arm_pu_t_skills2'
elif states[uid] == 'handstand_t_skills1':
bot.send_message(chatid, out_workout.t_skills['handstand'][1], reply_markup = done_kb)
states[uid] = 'handstand_t_skills2'
else:
bot.send_message(message.chat.id, out_workout.supported_commands, reply_markup = main_kb)
@bot.message_handler(func = lambda message: message.from_user.id in states
and 't_skills2' in states[message.from_user.id])
def teach_statics3(message):
uid = message.from_user.id
chatid = message.chat.id
if message.text == 'Не справляюсь!':
bot.send_message(message.chat.id, 'Поработай над базой и все получится', reply_markup = main_kb)
elif message.text == 'Отмена тренировки':
bot.send_message(message.chat.id, 'Надеюсь, у тебя уважительная причина', reply_markup = main_kb)
elif message.text == 'Сделано!':
if states[uid] == 'flag_t_skills2':
bot.send_message(chatid, out_workout.t_skills['flag'][2], reply_markup = done_kb)
states[uid] = 'flag_t_skills3'
elif states[uid] == 'dragon_flag_t_skills2':
bot.send_message(chatid, out_workout.t_skills['dragon_flag'][2], reply_markup = done_kb)
states[uid] = 'dragon_flag_t_skills3'
elif states[uid] == 'front_lever_t_skills2':
bot.send_message(chatid, out_workout.t_skills['front_lever'][2], reply_markup = done_kb)
states[uid] = 'front_lever_t_skills3'
elif states[uid] == 'back_lever_t_skills2':
bot.send_message(chatid, out_workout.t_skills['back_lever'][2], reply_markup = done_kb)
states[uid] = 'back_lever_t_skills3'
elif states[uid] == 'planche_t_skills2':
bot.send_message(chatid, out_workout.t_skills['planche'][2], reply_markup = done_kb)
states[uid] = 'planche_t_skills3'
elif states[uid] == 'hefesto_t_skills2':
bot.send_message(chatid, out_workout.t_skills['hefesto'][2], reply_markup = done_kb)
states[uid] = 'hefesto_t_skills3'
elif states[uid] == 'angel_t_skills2':
bot.send_message(chatid, out_workout.t_skills['angel'][2], reply_markup = done_kb)
states[uid] = 'angel_t_skills3'
elif states[uid] == 'one_arm_pu_t_skills2':
bot.send_message(chatid, out_workout.t_skills['one_arm_pu'][2], reply_markup = done_kb)
states[uid] = 'one_arm_pu_t_skills3'
elif states[uid] == 'handstand_t_skills2':
bot.send_message(chatid, out_workout.t_skills['handstand'][2], reply_markup = done_kb)
states[uid] = 'handstand_t_skills3'
else:
bot.send_message(message.chat.id, out_workout.supported_commands, reply_markup = main_kb)
@bot.message_handler(func = lambda message: message.from_user.id in states
and 't_skills3' in states[message.from_user.id])
def teach_statics4(message):
uid = message.from_user.id
chatid = message.chat.id
if message.text == 'Не справляюсь!':
bot.send_message(message.chat.id, 'Поработай над базой и все получится', reply_markup = main_kb)
elif message.text == 'Отмена тренировки':
bot.send_message(message.chat.id, 'Надеюсь, у тебя уважительная причина', reply_markup = main_kb)
elif message.text == 'Сделано!':
bot.send_message(chatid, 'Молодца', reply_markup = main_kb)
else:
bot.send_message(message.chat.id, out_workout.supported_commands, reply_markup = main_kb)
###################################################################################################
@bot.message_handler(func=lambda message: message.from_user.id in states
                     and states[message.from_user.id] == 'teach_dynamics')
def teach_dynamics1(message):
    uid = message.from_user.id
    if message.text == '⬅️Назад':
        bot.send_message(message.chat.id, 'Чем могу быть полезен?', reply_markup=done_kb)
    elif message.text == '1':
        bot.send_message(message.chat.id, out_workout.t_skills['sklepka'][0], reply_markup=done_kb)
        states[uid] = 'sklepka_t_dyn1'
    elif message.text == '2':
        bot.send_message(message.chat.id, out_workout.t_skills['chair'][0], reply_markup=done_kb)
        states[uid] = 'chair_t_dyn1'
    elif message.text == '3':
        bot.send_message(message.chat.id, out_workout.t_skills['under_bar'][0], reply_markup=done_kb)
        states[uid] = 'under_bar_t_dyn1'
    elif message.text == '4':
        bot.send_message(message.chat.id, out_workout.t_skills['sun'][0], reply_markup=done_kb)
        states[uid] = 'sun_t_dyn1'
    elif message.text == '5':
        bot.send_message(message.chat.id, out_workout.t_skills['ganger'][0], reply_markup=done_kb)
        states[uid] = 'ganger_t_dyn1'
    elif message.text == '6':
        bot.send_message(message.chat.id, out_workout.t_skills['360'][0], reply_markup=done_kb)
        states[uid] = '360_t_dyn1'
    elif message.text == '7':
        bot.send_message(message.chat.id, out_workout.t_skills['540'][0], reply_markup=done_kb)
        states[uid] = '540_t_dyn1'
    elif message.text == '8':
        bot.send_message(message.chat.id, out_workout.t_skills['shrimpflip'][0], reply_markup=done_kb)
        states[uid] = 'shrimpflip_t_dyn1'
    elif message.text == '9':
        bot.send_message(message.chat.id, out_workout.t_skills['korbut'][0], reply_markup=done_kb)
        states[uid] = 'korbut_t_dyn1'
    elif message.text == '10':
        with open('lach_gainer.mp4', 'rb') as f:
            bot.send_video(message.chat.id, f, reply_markup=done_kb)
        bot.send_message(message.chat.id, out_workout.t_skills['lach_gainer'][0], reply_markup=done_kb)
        states[uid] = 'lach_gainer_t_dyn1'
    else:
        bot.send_message(message.chat.id, out_workout.supported_commands, reply_markup=main_kb)
@bot.message_handler(func=lambda message: message.from_user.id in states
                     and 't_dyn1' in states[message.from_user.id])
def teach_dynamics2(message):
    uid = message.from_user.id
    chatid = message.chat.id
    if message.text == 'Не справляюсь!':
        bot.send_message(message.chat.id, 'Поработай над базой и все получится', reply_markup=main_kb)
    elif message.text == 'Отмена тренировки':
        bot.send_message(message.chat.id, 'Надеюсь, у тебя уважительная причина', reply_markup=main_kb)
    elif message.text == 'Сделано!':
        if states[uid] == 'sklepka_t_dyn1':
            bot.send_message(chatid, out_workout.t_skills['sklepka'][1], reply_markup=done_kb)
            states[uid] = 'sklepka_t_dyn2'
        elif states[uid] == 'chair_t_dyn1':
            bot.send_message(chatid, out_workout.t_skills['chair'][1], reply_markup=done_kb)
            states[uid] = 'chair_t_dyn2'
        elif states[uid] == 'under_bar_t_dyn1':
            bot.send_message(chatid, out_workout.t_skills['under_bar'][1], reply_markup=done_kb)
            states[uid] = 'under_bar_t_dyn2'
        elif states[uid] == 'sun_t_dyn1':
            bot.send_message(chatid, 'Отлично!', reply_markup=main_kb)
        elif states[uid] == 'ganger_t_dyn1':
            bot.send_message(chatid, out_workout.t_skills['ganger'][1], reply_markup=done_kb)
            states[uid] = 'ganger_t_dyn2'
        elif states[uid] == '360_t_dyn1':
            bot.send_message(chatid, out_workout.t_skills['360'][1], reply_markup=done_kb)
            states[uid] = '360_t_dyn2'
        elif states[uid] == '540_t_dyn1':
            bot.send_message(chatid, out_workout.t_skills['540'][1], reply_markup=done_kb)
            states[uid] = '540_t_dyn2'
        elif states[uid] == 'korbut_t_dyn1':
            bot.send_message(chatid, out_workout.t_skills['korbut'][1], reply_markup=done_kb)
            states[uid] = 'korbut_t_dyn2'
        elif states[uid] == 'lach_gainer_t_dyn1':
            bot.send_message(chatid, out_workout.t_skills['lach_gainer'][1], reply_markup=done_kb)
            states[uid] = 'lach_gainer_t_dyn2'
    else:
        bot.send_message(message.chat.id, out_workout.supported_commands, reply_markup=main_kb)
@bot.message_handler(func=lambda message: message.from_user.id in states
                     and 't_dyn2' in states[message.from_user.id])
def teach_dynamics3(message):
    uid = message.from_user.id
    chatid = message.chat.id
    if message.text == 'Не справляюсь!':
        bot.send_message(message.chat.id, 'Поработай над базой и все получится', reply_markup=main_kb)
    elif message.text == 'Отмена тренировки':
        bot.send_message(message.chat.id, 'Надеюсь, у тебя уважительная причина', reply_markup=main_kb)
    elif message.text == 'Сделано!':
        if states[uid] == 'sklepka_t_dyn2':
            bot.send_message(chatid, out_workout.t_skills['sklepka'][2], reply_markup=done_kb)
            states[uid] = 'sklepka_t_dyn3'
        elif states[uid] == 'chair_t_dyn2':
            bot.send_message(chatid, out_workout.t_skills['chair'][2], reply_markup=done_kb)
            states[uid] = 'chair_t_dyn3'
        elif states[uid] == 'under_bar_t_dyn2':
            bot.send_message(chatid, out_workout.t_skills['under_bar'][2], reply_markup=done_kb)
            states[uid] = 'under_bar_t_dyn3'
        elif states[uid] == 'ganger_t_dyn2':
            bot.send_message(chatid, out_workout.t_skills['ganger'][2], reply_markup=done_kb)
            states[uid] = 'ganger_t_dyn3'
        elif states[uid] == '360_t_dyn2':
            bot.send_message(chatid, out_workout.t_skills['360'][2], reply_markup=done_kb)
            states[uid] = '360_t_dyn3'
        elif states[uid] == '540_t_dyn2':
            bot.send_message(chatid, out_workout.t_skills['540'][2], reply_markup=done_kb)
            states[uid] = '540_t_dyn3'
        elif states[uid] == 'korbut_t_dyn2':
            bot.send_message(chatid, out_workout.t_skills['korbut'][2], reply_markup=done_kb)
            states[uid] = 'korbut_t_dyn3'
        elif states[uid] == 'lach_gainer_t_dyn2':
            bot.send_message(chatid, out_workout.t_skills['lach_gainer'][2], reply_markup=done_kb)
            states[uid] = 'lach_gainer_t_dyn3'
    else:
        bot.send_message(message.chat.id, out_workout.supported_commands, reply_markup=main_kb)
@bot.message_handler(func=lambda message: message.from_user.id in states
                     and 't_dyn3' in states[message.from_user.id])
def teach_dynamics4(message):
    uid = message.from_user.id
    chatid = message.chat.id
    if message.text == 'Не справляюсь!':
        bot.send_message(message.chat.id, 'Поработай над базой и все получится', reply_markup=main_kb)
    elif message.text == 'Отмена тренировки':
        bot.send_message(message.chat.id, 'Надеюсь, у тебя уважительная причина', reply_markup=main_kb)
    elif message.text == 'Сделано!':
        bot.send_message(chatid, 'Поздравляю!', reply_markup=main_kb)
    else:
        bot.send_message(message.chat.id, out_workout.supported_commands, reply_markup=main_kb)
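Every branch of the `teach_*` chains above repeats the same three lines per skill, and every state follows the `'<skill>_t_dyn<N>'` naming convention. A small parser (our name, not part of the original bot) could let one generic handler replace each per-skill `elif`:

```python
def parse_skill_state(state, marker='_t_dyn'):
    """Split a state like '360_t_dyn2' into ('360', 2).

    Returns None when the state does not follow the
    '<skill><marker><step>' convention.
    """
    skill, sep, step = state.rpartition(marker)
    if not sep or not step.isdigit():
        return None
    return skill, int(step)
```

A generic `'Сделано!'` branch could then look up `out_workout.t_skills[skill][step]` and advance the state to `skill + marker + str(step + 1)`, instead of enumerating every skill by hand.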
###################################################################################################
# икхван ------------------------------------------------------------------------------------------
@bot.message_handler(func=lambda message: message.from_user.id in states
                     and states[message.from_user.id] == 'ikhwan_train')
def ikhwan_train1(message):
    uid = message.from_user.id
    if message.text == 'Начинаем':
        bot.send_message(message.chat.id, 'Итак, 20 алмазных отжиманий!', reply_markup=done_kb)
        states[uid] = 'ikhwan_train1'
    elif message.text == 'Не сейчас':
        bot.send_message(message.chat.id, 'Жаль...', reply_markup=main_kb)
    else:
        bot.send_message(message.chat.id, out_workout.supported_commands, reply_markup=main_kb)
@bot.message_handler(func=lambda message: message.from_user.id in states
                     and states[message.from_user.id] == 'ikhwan_train1')
def ikhwan_train2(message):
    uid = message.from_user.id
    if message.text == 'Сделано!':
        bot.send_message(message.chat.id, 'Отдохни полторы минутки')
        sleep(90)
        bot.send_message(message.chat.id, '20 отжиманий с руками у пояса!', reply_markup=done_kb)
        states[uid] = 'ikhwan_train2'
    elif message.text == 'Не справляюсь!':
        bot.send_message(message.chat.id, 'Пробуй пока что-то полегче', reply_markup=main_kb)
    elif message.text == 'Отмена тренировки':
        bot.send_message(message.chat.id, 'Надеюсь, у тебя уважительная причина', reply_markup=main_kb)
    else:
        bot.send_message(message.chat.id, out_workout.supported_commands, reply_markup=main_kb)


@bot.message_handler(func=lambda message: message.from_user.id in states
                     and states[message.from_user.id] == 'ikhwan_train2')
def ikhwan_train3(message):
    uid = message.from_user.id
    if message.text == 'Сделано!':
        bot.send_message(message.chat.id, 'Отдохни полторы минутки')
        sleep(90)
        bot.send_message(message.chat.id, '60 сек лягушка! Погнали!', reply_markup=done_kb)
        states[uid] = 'ikhwan_train3'
    elif message.text == 'Не справляюсь!':
        bot.send_message(message.chat.id, 'Пробуй пока что-то полегче', reply_markup=main_kb)
    elif message.text == 'Отмена тренировки':
        bot.send_message(message.chat.id, 'Надеюсь, у тебя уважительная причина', reply_markup=main_kb)
    else:
        bot.send_message(message.chat.id, out_workout.supported_commands, reply_markup=main_kb)


@bot.message_handler(func=lambda message: message.from_user.id in states
                     and states[message.from_user.id] == 'ikhwan_train3')
def ikhwan_train4(message):
    uid = message.from_user.id
    if message.text == 'Сделано!':
        bot.send_message(message.chat.id, 'Отдохни полторы минутки')
        sleep(90)
        bot.send_message(message.chat.id, '20 отжиманий с руками у пояса!', reply_markup=done_kb)
        states[uid] = 'ikhwan_train4'
    elif message.text == 'Не справляюсь!':
        bot.send_message(message.chat.id, 'Учи лягушку, развивает баланс', reply_markup=main_kb)
    elif message.text == 'Отмена тренировки':
        bot.send_message(message.chat.id, 'Надеюсь, у тебя уважительная причина', reply_markup=main_kb)
    else:
        bot.send_message(message.chat.id, out_workout.supported_commands, reply_markup=main_kb)


@bot.message_handler(func=lambda message: message.from_user.id in states
                     and states[message.from_user.id] == 'ikhwan_train4')
def ikhwan_train5(message):
    uid = message.from_user.id
    if message.text == 'Сделано!':
        bot.send_message(message.chat.id, 'Отдохни полторы минутки')
        sleep(90)
        bot.send_message(message.chat.id, '15 суперменов', reply_markup=done_kb)
        states[uid] = 'ikhwan_train5'
    elif message.text == 'Не справляюсь!':
        bot.send_message(message.chat.id, 'Попробуй пока что-то полегче', reply_markup=main_kb)
    elif message.text == 'Отмена тренировки':
        bot.send_message(message.chat.id, 'Надеюсь, у тебя уважительная причина', reply_markup=main_kb)
    else:
        bot.send_message(message.chat.id, out_workout.supported_commands, reply_markup=main_kb)


@bot.message_handler(func=lambda message: message.from_user.id in states
                     and states[message.from_user.id] == 'ikhwan_train5')
def ikhwan_train6(message):
    uid = message.from_user.id
    if message.text == 'Сделано!':
        bot.send_message(message.chat.id, 'Отдохни полторы минутки')
        sleep(90)
        bot.send_message(message.chat.id, '15 отжиманий в стойке у стены', reply_markup=done_kb)
        states[uid] = 'ikhwan_train6'
    elif message.text == 'Не справляюсь!':
        bot.send_message(message.chat.id, 'Понимаю, это трудно. Подкачайся 💪 :D', reply_markup=main_kb)
    elif message.text == 'Отмена тренировки':
        bot.send_message(message.chat.id, 'Надеюсь, у тебя уважительная причина', reply_markup=main_kb)
    else:
        bot.send_message(message.chat.id, out_workout.supported_commands, reply_markup=main_kb)


@bot.message_handler(func=lambda message: message.from_user.id in states
                     and states[message.from_user.id] == 'ikhwan_train6')
def ikhwan_train7(message):
    uid = message.from_user.id
    if message.text == 'Сделано!':
        bot.send_message(message.chat.id, 'Отдохни полторы минутки')
        sleep(90)
        bot.send_message(message.chat.id, '15 хинду отжиманий', reply_markup=done_kb)
        states[uid] = 'ikhwan_train7'
    elif message.text == 'Не справляюсь!':
        bot.send_message(message.chat.id, 'Тренируй плечи и все будет ОК 💪', reply_markup=main_kb)
    elif message.text == 'Отмена тренировки':
        bot.send_message(message.chat.id, 'Надеюсь, у тебя уважительная причина', reply_markup=main_kb)
    else:
        bot.send_message(message.chat.id, out_workout.supported_commands, reply_markup=main_kb)


@bot.message_handler(func=lambda message: message.from_user.id in states
                     and states[message.from_user.id] == 'ikhwan_train7')
def ikhwan_train8(message):
    uid = message.from_user.id
    if message.text == 'Сделано!':
        bot.send_message(message.chat.id, 'Отдохни полторы минутки')
        sleep(90)
        bot.send_message(message.chat.id, '15 армейских отжиманий', reply_markup=done_kb)
        states[uid] = 'ikhwan_train8'
    elif message.text == 'Не справляюсь!':
        bot.send_message(message.chat.id, 'Жаль... Ты почти дошел до финиша 😢', reply_markup=main_kb)
    elif message.text == 'Отмена тренировки':
        bot.send_message(message.chat.id, 'Надеюсь, у тебя уважительная причина', reply_markup=main_kb)
    else:
        bot.send_message(message.chat.id, out_workout.supported_commands, reply_markup=main_kb)


@bot.message_handler(func=lambda message: message.from_user.id in states
                     and states[message.from_user.id] == 'ikhwan_train8')
def ikhwan_train9(message):
    uid = message.from_user.id
    if message.text == 'Сделано!':
        bot.send_message(message.chat.id, 'Отдохни полторы минутки')
        sleep(90)
        bot.send_message(message.chat.id, '60 сек планка! Погнали!', reply_markup=done_kb)
        states[uid] = 'ikhwan_train9'
    elif message.text == 'Не справляюсь!':
        bot.send_message(message.chat.id, 'Улучши плечи и пробуй еще раз 💪', reply_markup=main_kb)
    elif message.text == 'Отмена тренировки':
        bot.send_message(message.chat.id, 'Надеюсь, у тебя уважительная причина', reply_markup=main_kb)
    else:
        bot.send_message(message.chat.id, out_workout.supported_commands, reply_markup=main_kb)


@bot.message_handler(func=lambda message: message.from_user.id in states
                     and states[message.from_user.id] == 'ikhwan_train9')
def ikhwan_train10(message):
    uid = message.from_user.id
    if message.text == 'Сделано!':
        bot.send_message(message.chat.id, 'Отдохни полторы минутки')
        sleep(90)
        bot.send_message(message.chat.id, '30 отжиманий на стойках', reply_markup=done_kb)
        states[uid] = 'ikhwan_train10'
    elif message.text == 'Не справляюсь!':
        bot.send_message(message.chat.id, 'Почти дошел до финиша, эх 😢', reply_markup=main_kb)
    elif message.text == 'Отмена тренировки':
        bot.send_message(message.chat.id, 'Надеюсь, у тебя уважительная причина', reply_markup=main_kb)
    else:
        bot.send_message(message.chat.id, out_workout.supported_commands, reply_markup=main_kb)


@bot.message_handler(func=lambda message: message.from_user.id in states
                     and states[message.from_user.id] == 'ikhwan_train10')
def ikhwan_train11(message):
    uid = message.from_user.id
    if message.text == 'Сделано!':
        bot.send_message(message.chat.id, 'Отлично! 💪💪', reply_markup=main_kb)
    elif message.text == 'Не справляюсь!':
        bot.send_message(message.chat.id, 'Это было последнее упражнение... 😢', reply_markup=main_kb)
    elif message.text == 'Отмена тренировки':
        bot.send_message(message.chat.id, 'Надеюсь, у тебя уважительная причина', reply_markup=main_kb)
    else:
        bot.send_message(message.chat.id, out_workout.supported_commands, reply_markup=main_kb)
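The ten `ikhwan_train*` handlers differ only in the exercise text they announce; the session itself is "done → rest 90 s → next exercise" over a fixed plan. A data-driven sketch (the list and helper names are ours; the exercise strings are copied from the handlers above):

```python
# Exercise announcements, in workout order, copied from the handlers above.
IKHWAN_PLAN = [
    'Итак, 20 алмазных отжиманий!',
    '20 отжиманий с руками у пояса!',
    '60 сек лягушка! Погнали!',
    '20 отжиманий с руками у пояса!',
    '15 суперменов',
    '15 отжиманий в стойке у стены',
    '15 хинду отжиманий',
    '15 армейских отжиманий',
    '60 сек планка! Погнали!',
    '30 отжиманий на стойках',
]


def advance(step):
    """Return (announcement, next_step) after finishing exercise `step`.

    When the plan is complete, returns the congratulation and None so a
    single handler can cover all ten states instead of one handler each.
    """
    nxt = step + 1
    if nxt < len(IKHWAN_PLAN):
        return IKHWAN_PLAN[nxt], nxt
    return 'Отлично! 💪💪', None
```

One handler would then keep the current index in `states[uid]`, call `sleep(90)` for the rest, and send whatever `advance` returns.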
###################################################################################################
# ганнибал ----------------------------------------------------------------------------------------
@bot.message_handler(func=lambda message: message.from_user.id in states
                     and states[message.from_user.id] == 'hannibal_train')
def hannibal_train1(message):
    uid = message.from_user.id
    if message.text == 'Начинаем':
        bot.send_message(message.chat.id, 'Отжимания! Погнали! Отдых между подходами на твой выбор')
        bot.send_message(message.chat.id, '30/29/28/27/26/25/24/23/22/21', reply_markup=done_kb)
        states[uid] = 'hannibal_train1'
    elif message.text == 'Не сейчас':
        bot.send_message(message.chat.id, 'Жаль...', reply_markup=main_kb)
    else:
        bot.send_message(message.chat.id, out_workout.supported_commands, reply_markup=main_kb)


@bot.message_handler(func=lambda message: message.from_user.id in states
                     and states[message.from_user.id] == 'hannibal_train1')
def hannibal_train2(message):
    uid = message.from_user.id
    sleep(0.5)
    if message.text == 'Сделано!':
        bot.send_message(message.chat.id, 'Подтягивания прямым хватом! Не торопись, следи за качеством!')
        bot.send_message(message.chat.id, '10/9/8/7/6/5/5/5/5/5', reply_markup=done_kb)
        states[uid] = 'hannibal_train2'
    elif message.text == 'Не справляюсь!':
        bot.send_message(message.chat.id, 'Потренируйся еще, попробуй позже', reply_markup=main_kb)
    elif message.text == 'Отмена тренировки':
        bot.send_message(message.chat.id, 'Надеюсь, у тебя уважительная причина', reply_markup=main_kb)
    else:
        bot.send_message(message.chat.id, out_workout.supported_commands, reply_markup=main_kb)


@bot.message_handler(func=lambda message: message.from_user.id in states
                     and states[message.from_user.id] == 'hannibal_train2')
def hannibal_train3(message):
    uid = message.from_user.id
    sleep(0.5)
    if message.text == 'Сделано!':
        bot.send_message(message.chat.id, 'Отжимания на брусьях! Следя за качеством, не думай о времени!')
        bot.send_message(message.chat.id, '20/19/18/17/16/15/14/13/12/11', reply_markup=done_kb)
        states[uid] = 'hannibal_train3'
    elif message.text == 'Не справляюсь!':
        bot.send_message(message.chat.id, 'Тренируйся больше, все получится', reply_markup=main_kb)
    elif message.text == 'Отмена тренировки':
        bot.send_message(message.chat.id, 'Надеюсь, у тебя уважительная причина', reply_markup=main_kb)
    else:
        bot.send_message(message.chat.id, out_workout.supported_commands, reply_markup=main_kb)


@bot.message_handler(func=lambda message: message.from_user.id in states
                     and states[message.from_user.id] == 'hannibal_train3')
def hannibal_train4(message):
    uid = message.from_user.id
    sleep(0.5)
    if message.text == 'Сделано!':
        bot.send_message(message.chat.id, 'Подтягивания обратным хватом! Это последнее! 💪')
        bot.send_message(message.chat.id, '10/9/8/7/6/5/5/5/5/5', reply_markup=done_kb)
        states[uid] = 'hannibal_train4'
    elif message.text == 'Не справляюсь!':
        bot.send_message(message.chat.id, 'Нужно подкачать трицепс', reply_markup=main_kb)
    elif message.text == 'Отмена тренировки':
        bot.send_message(message.chat.id, 'Надеюсь, у тебя уважительная причина', reply_markup=main_kb)
    else:
        bot.send_message(message.chat.id, out_workout.supported_commands, reply_markup=main_kb)


@bot.message_handler(func=lambda message: message.from_user.id in states
                     and states[message.from_user.id] == 'hannibal_train4')
def hannibal_train5(message):
    sleep(0.5)
    if message.text == 'Сделано!':
        bot.send_message(message.chat.id, 'Отлично! 💪', reply_markup=main_kb)
    elif message.text == 'Не справляюсь!':
        bot.send_message(message.chat.id, 'Эхх, это было последнее упражнение 😢', reply_markup=main_kb)
    elif message.text == 'Отмена тренировки':
        bot.send_message(message.chat.id, 'Надеюсь, у тебя уважительная причина', reply_markup=main_kb)
    else:
        bot.send_message(message.chat.id, out_workout.supported_commands, reply_markup=main_kb)
if __name__ == '__main__':
    bot.polling()

###################################################################################################
# quests/training/helping_the_farm.py (repo: 0rtis/dfk, license: MIT)
'''
Vitality
'''
QUEST_CONTRACT_ADDRESS = '0x2174bBeFbEFBD766326a7C7538f93a78Db3eD449'

###################################################################################################
# integration_tests/gdk/components/test_integ_BuildCommand.py
# (repo: timmattison/aws-greengrass-gdk-cli, license: Apache-2.0)
import json
from pathlib import Path
from shutil import Error
from unittest.mock import mock_open, patch
import gdk.CLIParser as CLIParser
import gdk.common.consts as consts
import gdk.common.exceptions.error_messages as error_messages
import gdk.common.parse_args_actions as parse_args_actions
import gdk.common.utils as utils
import pytest
from gdk.commands.component.BuildCommand import BuildCommand
@pytest.fixture()
def supported_build_system(mocker):
    builds_file = utils.get_static_file_path(consts.project_build_system_file)
    with open(builds_file, "r") as f:
        data = json.loads(f.read())
    mock_get_supported_component_builds = mocker.patch(
        "gdk.commands.component.project_utils.get_supported_component_builds", return_value=data
    )
    return mock_get_supported_component_builds
@pytest.fixture()
def rglob_build_file(mocker):
    def search(*args, **kwargs):
        if "build.gradle" in args[0] or "pom.xml" in args[0]:
            return [Path(utils.current_directory).joinpath("build_file")]
        return []

    mock_rglob = mocker.patch("pathlib.Path.rglob", side_effect=search)
    return mock_rglob
def test_build_command_instantiation(mocker):
    mock_get_supported_component_builds = mocker.patch(
        "gdk.commands.component.project_utils.get_supported_component_builds", return_value={}
    )
    mock_check_if_arguments_conflict = mocker.patch.object(BuildCommand, "check_if_arguments_conflict", return_value=None)
    mock_run = mocker.patch.object(BuildCommand, "run", return_value=None)
    mock_get_proj_config = mocker.patch(
        "gdk.commands.component.project_utils.get_project_config_values",
        return_value={},
    )
    parse_args_actions.run_command(CLIParser.cli_parser.parse_args(["component", "build"]))
    assert mock_get_proj_config.call_count == 1
    assert mock_get_supported_component_builds.call_count == 1
    assert mock_check_if_arguments_conflict.call_count == 1
    assert mock_run.call_count == 1
def test_build_command_instantiation_failed_fetching_config(mocker):
    mock_get_proj_config = mocker.patch(
        "gdk.commands.component.project_utils.get_project_config_values",
        side_effect=Exception("exception fetching proj values"),
    )
    mock_get_supported_component_builds = mocker.patch(
        "gdk.commands.component.project_utils.get_supported_component_builds", return_value={}
    )
    mock_check_if_arguments_conflict = mocker.patch.object(BuildCommand, "check_if_arguments_conflict", return_value=None)
    mock_run = mocker.patch.object(BuildCommand, "run", return_value=None)
    with pytest.raises(Exception) as e:
        parse_args_actions.run_command(CLIParser.cli_parser.parse_args(["component", "build"]))
    assert "exception fetching proj values" in e.value.args[0]
    assert mock_get_proj_config.call_count == 1
    assert mock_get_supported_component_builds.call_count == 0
    assert mock_check_if_arguments_conflict.call_count == 1
    assert mock_run.call_count == 0
def test_build_command_instantiation_failed_fetching_build_config(mocker):
    mock_get_supported_component_builds = mocker.patch(
        "gdk.commands.component.project_utils.get_supported_component_builds",
        side_effect=Exception("exception fetching build"),
    )
    mock_get_proj_config = mocker.patch(
        "gdk.commands.component.project_utils.get_project_config_values",
        return_value={},
    )
    mock_check_if_arguments_conflict = mocker.patch.object(BuildCommand, "check_if_arguments_conflict", return_value=None)
    mock_run = mocker.patch.object(BuildCommand, "run", return_value=None)
    with pytest.raises(Exception) as e:
        parse_args_actions.run_command(CLIParser.cli_parser.parse_args(["component", "build"]))
    assert "exception fetching build" in e.value.args[0]
    assert mock_get_proj_config.call_count == 1
    assert mock_get_supported_component_builds.call_count == 1
    assert mock_check_if_arguments_conflict.call_count == 1
    assert mock_run.call_count == 0
def test_build_command_instantiation_failed_conflicting_args(mocker):
    mock_get_supported_component_builds = mocker.patch(
        "gdk.commands.component.project_utils.get_supported_component_builds", return_value={}
    )
    mock_get_proj_config = mocker.patch(
        "gdk.commands.component.project_utils.get_project_config_values",
        side_effect=Exception("exception fetching proj values"),
    )
    mock_check_if_arguments_conflict = mocker.patch.object(
        BuildCommand,
        "check_if_arguments_conflict",
        side_effect=Exception("exception due to conflicting args"),
    )
    mock_run = mocker.patch.object(BuildCommand, "run", return_value=None)
    with pytest.raises(Exception) as e:
        parse_args_actions.run_command(CLIParser.cli_parser.parse_args(["component", "build"]))
    assert "exception due to conflicting args" in e.value.args[0]
    assert mock_get_proj_config.call_count == 0
    assert mock_get_supported_component_builds.call_count == 0
    assert mock_check_if_arguments_conflict.call_count == 1
    assert mock_run.call_count == 0
def test_build_run():
    with pytest.raises(Exception) as e:
        parse_args_actions.run_command(CLIParser.cli_parser.parse_args(["component", "build"]))
    assert "Could not build the project due to the following error." in e.value.args[0]
def test_build_run_default_zip_json(mocker, supported_build_system, rglob_build_file):
    mock_clean_dir = mocker.patch("gdk.common.utils.clean_dir", return_value=None)
    mock_create_dir = mocker.patch("pathlib.Path.mkdir", return_value=None)
    mock_copy_dir = mocker.patch("shutil.copytree", return_value=None)
    mock_archive_dir = mocker.patch("shutil.make_archive", return_value=None)
    mock_get_proj_config = mocker.patch(
        "gdk.commands.component.project_utils.get_project_config_values",
        return_value=project_config(),
    )
    mock_is_artifact_in_build = mocker.patch.object(BuildCommand, "is_artifact_in_build", return_value=True)
    mock_subprocess_run = mocker.patch("subprocess.run")
    mock_json_dump = mocker.patch("json.dumps")
    pc = mock_get_proj_config.return_value
    file_name = Path(pc["gg_build_recipes_dir"]).joinpath(pc["component_recipe_file"].name).resolve()
    with patch("builtins.open", mock_open()) as mock_file:
        parse_args_actions.run_command(CLIParser.cli_parser.parse_args(["component", "build"]))
    mock_file.assert_any_call(file_name, "w")
    assert mock_json_dump.call_count == 1
    mock_get_proj_config.assert_called_once()
    assert not mock_subprocess_run.called
    assert mock_copy_dir.call_count == 1  # copy files to zip-build to create a zip
    assert mock_archive_dir.call_count == 1  # archiving directory
    assert mock_is_artifact_in_build.call_count == 1  # only one artifact in project_config. Available in build
    assert mock_clean_dir.call_count == 2  # clean zip-build, clean greengrass-build
    assert mock_create_dir.call_count == 2  # create gg directories
def test_build_run_default_maven_yaml(mocker, supported_build_system, rglob_build_file):
    mock_clean_dir = mocker.patch("gdk.common.utils.clean_dir", return_value=None)
    mock_create_dir = mocker.patch("pathlib.Path.mkdir", return_value=None)
    mock_copy_dir = mocker.patch("shutil.copytree", return_value=None)
    mock_archive_dir = mocker.patch("shutil.make_archive", return_value=None)
    pc = project_config()
    pc["component_build_config"] = {"build_system": "maven"}
    mock_get_proj_config = mocker.patch(
        "gdk.commands.component.project_utils.get_project_config_values",
        return_value=pc,
    )
    mock_platform = mocker.patch("platform.system", return_value="not-windows")
    pc["component_recipe_file"] = Path("/src/GDK-CLI-Internal/tests/gdk/static/build_command/recipe.yaml")
    mock_is_artifact_in_build = mocker.patch.object(BuildCommand, "is_artifact_in_build", return_value=True)
    mock_subprocess_run = mocker.patch("subprocess.run")
    file_name = Path(pc["gg_build_recipes_dir"]).joinpath(pc["component_recipe_file"].name).resolve()
    with patch("builtins.open", mock_open()) as mock_file:
        parse_args_actions.run_command(CLIParser.cli_parser.parse_args(["component", "build"]))
    mock_file.assert_any_call(file_name, "w")
    mock_get_proj_config.assert_called_once()
    mock_subprocess_run.assert_called_with(["mvn", "clean", "package"])  # called maven build command
    assert mock_copy_dir.call_count == 0  # No copying directories
    assert supported_build_system.call_count == 1
    assert mock_archive_dir.call_count == 0  # Archive never called in maven
    assert mock_is_artifact_in_build.call_count == 1  # only one artifact in project_config. Available in build
    assert mock_clean_dir.call_count == 1  # clean greengrass-build
    assert mock_create_dir.call_count == 2  # create gg directories
    assert mock_platform.call_count == 1
def test_build_run_default_maven_yaml_windows(mocker, supported_build_system, rglob_build_file):
    mock_clean_dir = mocker.patch("gdk.common.utils.clean_dir", return_value=None)
    mock_create_dir = mocker.patch("pathlib.Path.mkdir", return_value=None)
    mock_copy_dir = mocker.patch("shutil.copytree", return_value=None)
    mock_archive_dir = mocker.patch("shutil.make_archive", return_value=None)
    mock_platform = mocker.patch("platform.system", return_value="Windows")
    pc = project_config()
    pc["component_build_config"] = {"build_system": "maven"}
    mock_get_proj_config = mocker.patch(
        "gdk.commands.component.project_utils.get_project_config_values",
        return_value=pc,
    )
    mock_is_artifact_in_build = mocker.patch.object(BuildCommand, "is_artifact_in_build", return_value=True)
    mock_subprocess_run = mocker.patch("subprocess.run")
    mock_yaml_dump = mocker.patch("yaml.dump")
    file_name = Path(pc["gg_build_recipes_dir"]).joinpath(pc["component_recipe_file"].name).resolve()
    with patch("builtins.open", mock_open()) as mock_file:
        parse_args_actions.run_command(CLIParser.cli_parser.parse_args(["component", "build"]))
    mock_file.assert_any_call(file_name, "w")
    assert mock_yaml_dump.call_count == 1
    mock_get_proj_config.assert_called_once()
    mock_subprocess_run.assert_called_with(["mvn.cmd", "clean", "package"])  # called maven build command
    assert mock_copy_dir.call_count == 0  # No copying directories
    assert supported_build_system.call_count == 1
    assert mock_archive_dir.call_count == 0  # Archive never called in maven
    assert mock_is_artifact_in_build.call_count == 1  # only one artifact in project_config. Available in build
    assert mock_clean_dir.call_count == 1  # clean greengrass-build
    assert mock_create_dir.call_count == 2  # create gg directories
    assert mock_platform.call_count == 1
def test_build_run_default_maven_yaml_error(mocker, supported_build_system, rglob_build_file):
mock_clean_dir = mocker.patch("gdk.common.utils.clean_dir", return_value=None)
mock_create_dir = mocker.patch("pathlib.Path.mkdir", return_value=None)
mock_copy_dir = mocker.patch("shutil.copytree", return_value=None)
mock_archive_dir = mocker.patch("shutil.make_archive", return_value=None)
mock_platform = mocker.patch("platform.system", return_value="Windows")
pc = project_config()
pc["component_build_config"] = {"build_system": "maven"}
pc["component_recipe_file"] = Path("/src/GDK-CLI-Internal/tests/gdk/static/build_command/recipe.yaml")
mock_get_proj_config = mocker.patch(
"gdk.commands.component.project_utils.get_project_config_values",
return_value=pc,
)
mock_is_artifact_in_build = mocker.patch.object(BuildCommand, "is_artifact_in_build", return_value=True)
mock_subprocess_run = mocker.patch("subprocess.run", side_effect=Exception("error with maven build cmd"))
pc = mock_get_proj_config.return_value
with pytest.raises(Exception) as e:
parse_args_actions.run_command(CLIParser.cli_parser.parse_args(["component", "build", "-d"]))
assert "error with maven build cmd" in e.value.args[0]
mock_get_proj_config.assert_called_once()
mock_subprocess_run.assert_called_with(["mvn.cmd", "clean", "package"]) # called maven build command
assert mock_copy_dir.call_count == 0 # No copying directories
assert supported_build_system.call_count == 1
assert mock_archive_dir.call_count == 0 # Archive never called in maven
assert mock_is_artifact_in_build.call_count == 0 # build failed before the artifact check was reached
assert mock_clean_dir.call_count == 1 # clean greengrass-build
assert mock_create_dir.call_count == 2 # create gg directories
assert mock_platform.called
def test_build_run_default_gradle_yaml_artifact_not_found(mocker, supported_build_system, rglob_build_file):
mock_clean_dir = mocker.patch("gdk.common.utils.clean_dir", return_value=None)
mock_create_dir = mocker.patch("pathlib.Path.mkdir", return_value=None)
mock_copy_dir = mocker.patch("shutil.copytree", return_value=None)
mock_archive_dir = mocker.patch("shutil.make_archive", return_value=None)
pc = project_config()
pc["component_build_config"] = {"build_system": "gradle"}
pc["component_recipe_file"] = Path("/src/GDK-CLI-Internal/tests/gdk/static/build_command/recipe.yaml")
mock_get_proj_config = mocker.patch(
"gdk.commands.component.project_utils.get_project_config_values",
return_value=pc,
)
mock_boto3_client = mocker.patch("boto3.client")
mock_subprocess_run = mocker.patch("subprocess.run")
mock_yaml_dump = mocker.patch("yaml.dump")
pc = mock_get_proj_config.return_value
with patch("builtins.open", mock_open()) as mock_file:
with pytest.raises(Exception) as e:
parse_args_actions.run_command(CLIParser.cli_parser.parse_args(["component", "build"]))
assert (
"Could not find artifact with URI"
" 's3://DOC-EXAMPLE-BUCKET/artifacts/com.example.HelloWorld/1.0.0/hello_world.py' on s3 or inside"
" the build folders."
in e.value.args[0]
)
assert not mock_file.called
assert mock_yaml_dump.call_count == 0
mock_get_proj_config.assert_called_once()
mock_subprocess_run.assert_called_with(["gradle", "build"]) # called gradle build command
assert mock_copy_dir.call_count == 0 # No copying directories
assert supported_build_system.call_count == 1
assert mock_archive_dir.call_count == 0 # Archive never called in gradle
assert mock_boto3_client.call_count == 1
assert mock_clean_dir.call_count == 1 # clean greengrass-build
assert mock_create_dir.call_count == 2 # create gg directories
def test_build_run_default_exception(mocker, rglob_build_file):
mock_create_gg_build_directories = mocker.patch.object(BuildCommand, "create_gg_build_directories")
mock_default_build_component = mocker.patch.object(
BuildCommand, "default_build_component", side_effect=Exception("error in default_build_component")
)
mock_get_proj_config = mocker.patch(
"gdk.commands.component.project_utils.get_project_config_values",
return_value=project_config(),
)
mock_get_supported_component_builds = mocker.patch(
"gdk.commands.component.project_utils.get_supported_component_builds", return_value={}
)
mock_subprocess_run = mocker.patch("subprocess.run")
with pytest.raises(Exception) as e:
parse_args_actions.run_command(CLIParser.cli_parser.parse_args(["component", "build"]))
assert "error in default_build_component" in e.value.args[0]
assert mock_get_proj_config.called
assert mock_get_supported_component_builds.called
mock_create_gg_build_directories.assert_called_once()
mock_default_build_component.assert_called_once()
assert not mock_subprocess_run.called
def test_default_build_component_error_run_build_command(mocker, rglob_build_file):
mock_clean_dir = mocker.patch("gdk.common.utils.clean_dir", return_value=None)
mock_create_dir = mocker.patch("pathlib.Path.mkdir", return_value=None)
mock_run_build_command = mocker.patch.object(
BuildCommand, "run_build_command", side_effect=Error("err in run_build_command")
)
mock_find_artifacts_and_update_uri = mocker.patch.object(BuildCommand, "find_artifacts_and_update_uri")
mock_create_build_recipe_file = mocker.patch.object(BuildCommand, "create_build_recipe_file")
mock_get_proj_config = mocker.patch(
"gdk.commands.component.project_utils.get_project_config_values",
return_value=project_config(),
)
mock_get_supported_component_builds = mocker.patch(
"gdk.commands.component.project_utils.get_supported_component_builds", return_value={}
)
with pytest.raises(Exception) as e:
parse_args_actions.run_command(CLIParser.cli_parser.parse_args(["component", "build"]))
assert error_messages.BUILD_FAILED in e.value.args[0]
mock_run_build_command.assert_called_once()
assert not mock_find_artifacts_and_update_uri.called
assert not mock_create_build_recipe_file.called
assert mock_get_supported_component_builds.called
assert mock_clean_dir.call_count == 1
assert mock_create_dir.call_count == 2
assert mock_get_proj_config.call_count == 1
def test_build_run_custom(mocker, supported_build_system, rglob_build_file):
mock_clean_dir = mocker.patch("gdk.common.utils.clean_dir", return_value=None)
mock_create_dir = mocker.patch("pathlib.Path.mkdir", return_value=None)
mock_copy_dir = mocker.patch("shutil.copytree", return_value=None)
pc = project_config()
pc["component_build_config"] = {"build_system": "custom", "custom_build_command": ["some-command"]}
mock_get_proj_config = mocker.patch(
"gdk.commands.component.project_utils.get_project_config_values",
return_value=pc,
)
mock_is_artifact_in_build = mocker.patch.object(BuildCommand, "is_artifact_in_build", return_value=False)
mock_is_artifact_in_s3 = mocker.patch.object(BuildCommand, "is_artifact_in_s3", return_value=True)
mock_boto3_client = mocker.patch("boto3.client")
mock_subprocess_run = mocker.patch("subprocess.run")
mock_yaml_dump = mocker.patch("yaml.dump")
pc = mock_get_proj_config.return_value
with patch("builtins.open", mock_open()) as mock_file:
parse_args_actions.run_command(CLIParser.cli_parser.parse_args(["component", "build"]))
assert not mock_file.called
assert mock_yaml_dump.call_count == 0
mock_get_proj_config.assert_called_once()
mock_subprocess_run.assert_called_with(["some-command"]) # called custom build command
assert mock_copy_dir.call_count == 0 # No copying directories
assert supported_build_system.call_count == 1
assert mock_is_artifact_in_build.call_count == 0 # custom build does not check the build folders for artifacts
assert mock_is_artifact_in_s3.call_count == 0 # custom build does not check s3 for artifacts
assert mock_boto3_client.call_count == 0
assert mock_clean_dir.call_count == 1 # clean greengrass-build
assert mock_create_dir.call_count == 2 # create gg directories
def test_build_run_default_gradle_yaml_artifact_found_build(mocker, supported_build_system, rglob_build_file):
mock_clean_dir = mocker.patch("gdk.common.utils.clean_dir", return_value=None)
mock_create_dir = mocker.patch("pathlib.Path.mkdir", return_value=None)
mock_copy_dir = mocker.patch("shutil.copytree", return_value=None)
mock_archive_dir = mocker.patch("shutil.make_archive", return_value=None)
pc = project_config()
pc["component_build_config"] = {"build_system": "gradle"}
pc["component_recipe_file"] = Path("/src/GDK-CLI-Internal/tests/gdk/static/build_command/recipe.yaml")
mock_get_proj_config = mocker.patch(
"gdk.commands.component.project_utils.get_project_config_values",
return_value=pc,
)
mock_boto3_client = mocker.patch("boto3.client")
mock_subprocess_run = mocker.patch("subprocess.run")
mock_yaml_dump = mocker.patch("yaml.dump")
pc = mock_get_proj_config.return_value
mocker.patch("pathlib.Path.is_file", return_value=True)
mock_copy_file = mocker.patch("shutil.copy", return_value=None)
mock_exists = mocker.patch("pathlib.Path.exists", return_value=True)
file_name = Path(pc["gg_build_recipes_dir"]).joinpath(pc["component_recipe_file"].name).resolve()
with patch("builtins.open", mock_open()) as mock_file:
parse_args_actions.run_command(CLIParser.cli_parser.parse_args(["component", "build"]))
mock_file.assert_any_call(file_name, "w")
assert mock_yaml_dump.call_count == 1 # recipe written to the build folder
mock_get_proj_config.assert_called_once()
mock_subprocess_run.assert_called_with(["gradle", "build"]) # called gradle build command
assert mock_copy_dir.call_count == 0 # No copying directories
assert supported_build_system.call_count == 1
assert mock_archive_dir.call_count == 0 # Archive never called in gradle
assert mock_boto3_client.call_count == 0 # artifact found in build folder, so s3 never checked
assert mock_clean_dir.call_count == 1 # clean greengrass-build
assert mock_create_dir.call_count == 2 # create gg directories
assert mock_copy_file.call_count == 1
assert mock_exists.called
def test_build_run_default_gradle_yaml_error_creating_recipe(mocker, supported_build_system, rglob_build_file):
mock_clean_dir = mocker.patch("gdk.common.utils.clean_dir", return_value=None)
mock_create_dir = mocker.patch("pathlib.Path.mkdir", return_value=None)
mock_copy_dir = mocker.patch("shutil.copytree", return_value=None)
mock_archive_dir = mocker.patch("shutil.make_archive", return_value=None)
pc = project_config()
pc["component_build_config"] = {"build_system": "gradle"}
pc["component_recipe_file"] = Path("/src/GDK-CLI-Internal/tests/gdk/static/build_command/recipe.yaml")
mock_get_proj_config = mocker.patch(
"gdk.commands.component.project_utils.get_project_config_values",
return_value=pc,
)
mock_boto3_client = mocker.patch("boto3.client")
mock_subprocess_run = mocker.patch("subprocess.run")
mock_yaml_dump = mocker.patch("yaml.dump", side_effect=Exception("writing failed"))
pc = mock_get_proj_config.return_value
mock_is_artifact_in_build = mocker.patch.object(BuildCommand, "is_artifact_in_build", return_value=True)
file_name = Path(pc["gg_build_recipes_dir"]).joinpath(pc["component_recipe_file"].name).resolve()
with patch("builtins.open", mock_open()) as mock_file:
with pytest.raises(Exception) as e:
parse_args_actions.run_command(CLIParser.cli_parser.parse_args(["component", "build"]))
mock_file.assert_any_call(file_name, "w")
assert mock_yaml_dump.call_count == 1
assert "Failed to create build recipe file at" in e.value.args[0]
mock_get_proj_config.assert_called_once()
mock_subprocess_run.assert_called_with(["gradle", "build"]) # called gradle build command
assert mock_copy_dir.call_count == 0 # No copying directories
assert supported_build_system.call_count == 1
assert mock_is_artifact_in_build.call_count == 1
assert mock_archive_dir.call_count == 0 # Archive never called in gradle
assert mock_boto3_client.call_count == 0 # artifact found in build folder, so s3 never checked
assert mock_clean_dir.call_count == 1 # clean greengrass-build
assert mock_create_dir.call_count == 2 # create gg directories
def project_config():
return {
"component_name": "component_name",
"component_build_config": {"build_system": "zip"},
"component_version": "1.0.0",
"component_author": "abc",
"bucket": "default",
"region": "us-east-1",
"gg_build_directory": Path("/src/GDK-CLI-Internal/greengrass-build"),
"gg_build_artifacts_dir": Path("/src/GDK-CLI-Internal/greengrass-build/artifacts"),
"gg_build_recipes_dir": Path("/src/GDK-CLI-Internal/greengrass-build/recipes"),
"gg_build_component_artifacts_dir": Path("/src/GDK-CLI-Internal/greengrass-build/artifacts/component_name/1.0.0"),
"component_recipe_file": Path("/src/GDK-CLI-Internal/tests/gdk/static/build_command/valid_component_recipe.json"),
"parsed_component_recipe": {
"RecipeFormatVersion": "2020-01-25",
"ComponentName": "com.example.HelloWorld",
"ComponentVersion": "1.0.0",
"ComponentDescription": "My first Greengrass component.",
"ComponentPublisher": "Amazon",
"ComponentConfiguration": {"DefaultConfiguration": {"Message": "world"}},
"Manifests": [
{
"Platform": {"os": "linux"},
"Lifecycle": {"Run": "python3 -u {artifacts:path}/hello_world.py '{configuration:/Message}'"},
"Artifacts": [{"URI": "s3://DOC-EXAMPLE-BUCKET/artifacts/com.example.HelloWorld/1.0.0/hello_world.py"}],
}
],
},
}
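Several of the assertions above rely on `unittest.mock` call verification; a common pitfall (and the reason `assert_called_once` must be *called*, not merely referenced) is that an un-called bound method is itself a truthy object. A minimal, self-contained sketch, separate from the test suite:

```python
from unittest.mock import MagicMock

m = MagicMock()

# Referencing the verification method without calling it yields a bound
# method object, which is always truthy, so this assert can never fail:
assert m.assert_called_once

# The correct form actually verifies the call count and raises on a mismatch.
# `m` was never called, so this raises AssertionError:
try:
    m.assert_called_once()
except AssertionError:
    print("assert_called_once() correctly failed")
```

Calling the method (with parentheses) lets `Mock` raise `AssertionError` on a violated expectation, which is what the surrounding tests intend.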
# ---------------------------------------------------------------------------
# File: keras_contrib/metrics/__init__.py
# Repo: jeonghwaYoo/high-res-mapping (MIT license)
# ---------------------------------------------------------------------------
from __future__ import absolute_import
from . import segmentation_metrics
# Globally-importable metrics
from .segmentation_metrics import categorical_pixel_accuracy
from .segmentation_metrics import mean_accuracy
from .segmentation_metrics import mean_intersection_over_union
from .segmentation_metrics import binary_accuracy
from .segmentation_metrics import categorical_accuracy
from .segmentation_metrics import top_k_categorical_accuracy
from .segmentation_metrics import sparse_top_k_categorical_accuracy
# ---------------------------------------------------------------------------
# File: test/oivae_cmu_comparison.py
# Repo: AndrewRLawrence/dp_gp_lvm (MIT license)
# ---------------------------------------------------------------------------
"""This module tests our model against the results from the oi-VAE paper on CMU subject 7."""
from src.models.dp_gp_lvm import dp_gp_lvm
from src.utils.constants import RESULTS_FILE_NAME, DATA_PATH
from src.utils.types import NP_DTYPE
import src.visualisation.plotters as vis
import matplotlib.cm as color_map
import matplotlib.pyplot as plot
import numpy as np
from os.path import isfile
from sklearn.preprocessing import StandardScaler
import tensorflow as tf
from time import time
if __name__ == '__main__':
# Optimisation variables.
learning_rate = 0.025
num_iter_train = 2500
num_iter_predict = 2000
# Model hyperparameters.
num_training_samples = 150
num_inducing_points = 40
num_latent_dimensions = 16 # oi-VAE uses 4, 8, and 16.
truncation_level = 15
# Read data.
training_data = None
for i in range(10):
sequence = i + 1
np_file = '07_0{}_joint_angles.npy'.format(sequence) if sequence < 10 \
else '07_{}_joint_angles.npy'.format(sequence)
cmu_data = np.load(DATA_PATH + 'cmu_mocap/' + np_file)
if training_data is None:
training_data = cmu_data
else:
training_data = np.vstack((training_data, cmu_data))
total_num_frames = training_data.shape[0]
num_output_dimensions = training_data.shape[1]
# Randomly sample num_training_samples frames and normalise data to zero mean and unit variance.
np.random.seed(seed=1) # Set seed.
training_indices = np.random.choice(training_data.shape[0], size=num_training_samples, replace=False)
scaler = StandardScaler()
y_train = scaler.fit_transform(training_data[training_indices, 6:]) # Remove first 6 dimensions to ignore root.
# Print info.
print('\nCMU Subject 7 - Sequences 1-10:')
print(' Total number of observations (N): {}'.format(num_training_samples))
print(' Total number of output dimensions (D): {}'.format(num_output_dimensions))
print(' Total number of inducing points (M): {}'.format(num_inducing_points))
print(' Total number of latent dimensions (Q): {}'.format(num_latent_dimensions))
# Define file path for results.
dataset_str = 'cmu_subject7_joint_angles'
dp_gp_lvm_results_file = RESULTS_FILE_NAME.format(model='dp_gp_lvm', dataset=dataset_str)
# Define instance of necessary model.
if not isfile(dp_gp_lvm_results_file):
# Reset default graph before building new model graph. This speeds up script.
tf.reset_default_graph()
np.random.seed(1) # Random seed.
# Define instance of DP-GP-LVM.
model = dp_gp_lvm(y_train=y_train,
num_inducing_points=num_inducing_points,
num_latent_dims=num_latent_dimensions,
truncation_level=truncation_level,
mask_size=1) # Treat each observed dimension as independent.
model_training_objective = model.objective
# Optimisation.
model_opt_train = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(
loss=model_training_objective)
with tf.Session() as s:
# Initialise variables.
s.run(tf.global_variables_initializer())
# Training optimisation loop.
start_time = time()
print('\nTraining DP-GP-LVM:')
for c in range(num_iter_train):
s.run(model_opt_train)
if (c % 100) == 0:
print(' DP-GP-LVM opt iter {:5}: {}'.format(c, s.run(model_training_objective)))
end_time = time()
train_opt_time = end_time - start_time
final_cost = s.run(model_training_objective)
print('Final iter {:5}:'.format(c))
print(' DP-GP-LVM: {}'.format(s.run(model_training_objective)))
print('Time to optimise: {} s'.format(train_opt_time))
# Get converged values as numpy arrays.
ard_weights, noise_precision, signal_variance, inducing_input, assignments = \
s.run((model.ard_weights, model.noise_precision, model.signal_variance, model.inducing_input,
model.assignments))
x_mean, x_covar = s.run(model.q_x)
w_1, w_2 = s.run(model.dp.q_alpha)
gamma_atoms, alpha_atoms, beta_atoms = s.run(model.dp_atoms)
# Save results.
print('\nSaving results to .npz file.')
np.savez(dp_gp_lvm_results_file, original_data=training_data, y_train=y_train,
ard_weights=ard_weights, noise_precision=noise_precision, signal_variance=signal_variance,
x_u=inducing_input, assignments=assignments, x_mean=x_mean, x_covar=x_covar,
gamma_atoms=gamma_atoms, alpha_atoms=alpha_atoms, beta_atoms=beta_atoms,
q_alpha_w1=w_1, q_alpha_w2=w_2, train_opt_time=train_opt_time, final_cost=final_cost)
else:
# Load results.
results = np.load(dp_gp_lvm_results_file)
# Load number of dimensions per joint.
joint_dim_dict = np.load(DATA_PATH + 'cmu_mocap/' + '07_joint_dims.npy').item()
labels = list(joint_dim_dict.keys())
ticks = np.array(list(joint_dim_dict.values()), dtype=int)
# labels = list(joint_dim_dict.keys())[1:] # Remove root joint.
# ticks = np.array(list(joint_dim_dict.values()), dtype=int)[1:] # Remove root joint.
# Plot latent spaces.
dp_gp_lvm_ard = results['ard_weights']
# dp_gp_lvm_ard[dp_gp_lvm_ard < 0.1] = 0.0
dp_gp_lvm_ard = np.sqrt(dp_gp_lvm_ard)
# plot.figure()
# # plot.imshow(dp_gp_lvm_ard, interpolation='nearest', aspect='auto',
# # extent=(0, num_latent_dimensions, num_output_dimensions, 0), origin='upper')
# plot.imshow(dp_gp_lvm_ard, interpolation='nearest', aspect='auto',
# extent=(0, num_latent_dimensions, num_output_dimensions, 0), origin='upper', cmap=color_map.Blues)
# # plot.colorbar()
# plot.title('Latent factorization for each joint')
# plot.xlabel('X-Dimension')
# plot.ylabel('')
# ax = plot.gca()
# # ax.set_xticks(np.arange(0.5, num_latent_dimensions, 1))
# ax.set_xticks(np.arange(num_latent_dimensions))
# ax.set_xticklabels([])
# ax.set_yticks(np.cumsum(ticks), minor=False)
# ax.set_yticklabels([], minor=False)
# ax.set_yticks(np.cumsum(ticks) - 0.5 * ticks, minor=True)
# ax.set_yticklabels(labels, minor=True)
# plot.show()
# Sum sort.
index = np.argsort(np.sum(dp_gp_lvm_ard, axis=0))
plot.figure(figsize=(10,5))
plot.imshow(np.transpose(dp_gp_lvm_ard[:, index[::-1]]), interpolation='nearest', aspect='auto',
extent=(0, num_output_dimensions, num_latent_dimensions, 0), origin='upper', cmap=color_map.Blues)
plot.ylabel('X', rotation='horizontal')
plot.xlabel('')
ax = plot.gca()
ax.set_yticks(np.arange(num_latent_dimensions))
ax.set_yticklabels([])
ax.set_xticks(np.cumsum(ticks), minor=False)
ax.set_xticklabels([], minor=False)
ax.set_xticks(np.cumsum(ticks) - 0.5 * ticks, minor=True)
ax.set_xticklabels(labels, minor=True, rotation='vertical', fontweight='bold')
plot.savefig('cmu_7_sum_sort.pdf', bbox_inches='tight')
plot.show()
# # Largest sum.
# index = np.argsort(np.sum(dp_gp_lvm_ard, axis=1))[::-1]
# index = np.argsort(dp_gp_lvm_ard[index[0], :])[::-1]
# plot.figure(figsize=(7,10))
# plot.imshow(dp_gp_lvm_ard[:, index], interpolation='nearest', aspect='auto',
# extent=(0, num_latent_dimensions, num_output_dimensions, 0), origin='upper', cmap=color_map.Blues)
# plot.title('Latent factorization for each joint')
# plot.xlabel('X-Dimension')
# plot.ylabel('')
# ax = plot.gca()
# ax.set_xticks(np.arange(num_latent_dimensions))
# ax.set_xticklabels([])
# ax.set_yticks(np.cumsum(ticks), minor=False)
# ax.set_yticklabels([], minor=False)
# ax.set_yticks(np.cumsum(ticks) - 0.5 * ticks, minor=True)
# ax.set_yticklabels(labels, minor=True)
# plot.savefig('cmu_7_largest_sum.pdf', bbox_inches='tight')
# # plot.show()
#
# # Variance
# index = np.argsort(np.var(dp_gp_lvm_ard, axis=0))[::-1]
# plot.figure(figsize=(7,10))
# plot.imshow(dp_gp_lvm_ard[:, index], interpolation='nearest', aspect='auto',
# extent=(0, num_latent_dimensions, num_output_dimensions, 0), origin='upper', cmap=color_map.Blues)
# plot.title('Latent factorization for each joint')
# plot.xlabel('X-Dimension')
# plot.ylabel('')
# ax = plot.gca()
# ax.set_xticks(np.arange(num_latent_dimensions))
# ax.set_xticklabels([])
# ax.set_yticks(np.cumsum(ticks), minor=False)
# ax.set_yticklabels([], minor=False)
# ax.set_yticks(np.cumsum(ticks) - 0.5 * ticks, minor=True)
# ax.set_yticklabels(labels, minor=True)
# plot.savefig('cmu_7_variance_sort.pdf', bbox_inches='tight')
# plot.show()
# Using cartesian coordinates.
# # Read data.
# training_data = None
# for i in range(10):
# sequence = i + 1
# np_file = '07_0{}.npy'.format(sequence) if sequence < 10 else '07_{}.npy'.format(sequence)
# cmu_data = np.load(DATA_PATH + 'cmu_mocap/' + np_file)
# if training_data is None:
# training_data = cmu_data
# else:
# training_data = np.vstack((training_data, cmu_data))
# total_num_frames = training_data.shape[0]
# num_output_dimensions = training_data.shape[1]
#
# # Randomly sample 200 frames and normalise data to zero mean and unit variance.
# np.random.seed(seed=1) # Set seed.
# training_indices = np.random.choice(training_data.shape[0], size=num_training_samples, replace=False)
# scaler = StandardScaler()
# y_train = scaler.fit_transform(training_data[training_indices, :])
#
# # Print info.
# print('\nCMU Subject 7 - Sequences 1-10:')
# print(' Total number of observations (N): {}'.format(num_training_samples))
# print(' Total number of output dimensions (D): {}'.format(num_output_dimensions))
# print(' Total number of inducing points (M): {}'.format(num_inducing_points))
# print(' Total number of latent dimensions (Q): {}'.format(num_latent_dimensions))
#
# # Define file path for results.
# dataset_str = 'cmu_subject7'
# dp_gp_lvm_results_file = RESULTS_FILE_NAME.format(model='dp_gp_lvm', dataset=dataset_str) # Keep 3d points together
#
# # Define instance of necessary model.
# if not isfile(dp_gp_lvm_results_file):
# # Reset default graph before building new model graph. This speeds up script.
# tf.reset_default_graph()
# np.random.seed(1) # Random seed.
# # Define instance of DP-GP-LVM.
# model = dp_gp_lvm(y_train=y_train,
# num_inducing_points=num_inducing_points,
# num_latent_dims=num_latent_dimensions,
# truncation_level=truncation_level,
# mask_size=3)
#
# model_training_objective = model.objective
# # Optimisation.
# model_opt_train = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(
# loss=model_training_objective)
#
# with tf.Session() as s:
# # Initialise variables.
# s.run(tf.global_variables_initializer())
#
# # Training optimisation loop.
# start_time = time()
# print('\nTraining DP-GP-LVM:')
# for c in range(num_iter_train):
# s.run(model_opt_train)
# if (c % 100) == 0:
# print(' DP-GP-LVM opt iter {:5}: {}'.format(c, s.run(model_training_objective)))
# end_time = time()
# train_opt_time = end_time - start_time
# final_cost = s.run(model_training_objective)
# print('Final iter {:5}:'.format(c))
# print(' DP-GP-LVM: {}'.format(s.run(model_training_objective)))
# print('Time to optimise: {} s'.format(train_opt_time))
#
# # Get converged values as numpy arrays.
# ard_weights, noise_precision, signal_variance, inducing_input, assignments = \
# s.run((model.ard_weights, model.noise_precision, model.signal_variance, model.inducing_input,
# model.assignments))
# x_mean, x_covar = s.run(model.q_x)
# w_1, w_2 = s.run(model.dp.q_alpha)
# gamma_atoms, alpha_atoms, beta_atoms = s.run(model.dp_atoms)
#
# # Save results.
# print('\nSaving results to .npz file.')
# np.savez(dp_gp_lvm_results_file, original_data=training_data, y_train=y_train,
# ard_weights=ard_weights, noise_precision=noise_precision, signal_variance=signal_variance,
# x_u=inducing_input, assignments=assignments, x_mean=x_mean, x_covar=x_covar,
# gamma_atoms=gamma_atoms, alpha_atoms=alpha_atoms, beta_atoms=beta_atoms,
# q_alpha_w1=w_1, q_alpha_w2=w_2, train_opt_time=train_opt_time, final_cost=final_cost)
#
# else:
# # Load results.
# results = np.load(dp_gp_lvm_results_file)
#
# # Plot latent spaces.
# dp_gp_lvm_ard = results['ard_weights']
#
# plot.figure()
# # plot.imshow(np.sqrt(dp_gp_lvm_ard).T, interpolation='nearest', aspect='auto',
# # extent=(0, num_output_dimensions, num_latent_dimensions, 0), origin='upper')
# plot.imshow(np.sqrt(dp_gp_lvm_ard), interpolation='nearest', aspect='auto',
# extent=(0, num_latent_dimensions, num_output_dimensions, 0), origin='upper')
# plot.colorbar()
# plot.title('ARD Weights')
# plot.show()
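The script above caches expensive optimisation results with `np.savez` and reloads them via `np.load` when the `.npz` file already exists. A minimal sketch of that compute-once/reload pattern, with hypothetical path and toy arrays standing in for the real results:

```python
import os
import tempfile

import numpy as np

# Hypothetical cache location and toy "results" (stand-ins for the real arrays).
path = os.path.join(tempfile.gettempdir(), "demo_results.npz")
if not os.path.isfile(path):
    # Expensive computation would go here; save its outputs as named arrays.
    np.savez(path, ard_weights=np.ones((3, 2)), final_cost=1.5)

# Later runs skip the computation and just reload the named arrays lazily.
loaded = np.load(path)
assert loaded["ard_weights"].shape == (3, 2)
```

The file-existence guard is what lets the plotting half of the script run repeatedly without re-optimising the model.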
# ---------------------------------------------------------------------------
# File: api/plangtool/__init__.py
# Repo: mtlsn/PLangTool (Apache-2.0 license)
# ---------------------------------------------------------------------------
# here entry point
import http.server
# ---------------------------------------------------------------------------
# File: myGym/vae/model.py
# Repo: incognite-lab/myGym (MIT license)
# ---------------------------------------------------------------------------
import numpy as np
import torch
import torch.nn as nn
IMSIZE = 64
class VAE(nn.Module):
"""Multimodal Variational Autoencoder.
@param n_latents: integer
number of latent dimensions
"""
def __init__(self, n_latents, batch_size, training, imsize, use_cuda):
super(VAE, self).__init__()
self.batch_size = batch_size
self.device = "cuda" if use_cuda else "cpu"
self.use_cuda = self.device == "cuda"
self.n_latents = n_latents
self.training = training
self.bidirectional = False
self.imsize = imsize
if imsize == 128:
self.image_encoder = ImageEncoder128(self.n_latents)
self.image_decoder = ImageDecoder128(self.n_latents)
elif imsize == 64:
self.image_encoder = ImageEncoder64(self.n_latents)
self.image_decoder = ImageDecoder64(self.n_latents)
def reparametrize(self, mu, logvar):
if self.training:
std = torch.exp(0.5 * logvar)
eps = torch.randn_like(std)
return mu + eps*std
else: # return mean during inference
return mu
def forward(self, image=None):
mu, logvar = self.infer(image)
# reparametrization trick to sample
z = self.reparametrize(mu, logvar)
# reconstruct inputs based on that gaussian
image_recon = self.image_decoder(z)
return image_recon, mu, logvar
def infer(self, image=None):
# encode the image into the parameters of q(z|x)
try:
img_mu, img_logvar = self.image_encoder(image.to(self.device))
except Exception:
# fall back if `image` cannot be moved to the device (e.g. it is not a tensor)
img_mu, img_logvar = self.image_encoder(image)
return img_mu, img_logvar
class ImageEncoder128(nn.Module):
"""Parametrizes q(z|x).
This is the standard DCGAN architecture.
@param n_latents: integer
number of latent variable dimensions.
"""
def __init__(self, n_latents):
super(ImageEncoder128, self).__init__()
hid_channels = 32
kernel_size = 4
hidden_dim = 256
self.latent_dim = n_latents
# Shape of the feature map after the conv stack (flattened into the fully connected layers)
self.reshape = (hid_channels, kernel_size, kernel_size)
n_chan = 3
# Convolutional layers
cnn_kwargs = dict(stride=2, padding=1)
self.conv1 = nn.Conv2d(n_chan, hid_channels, kernel_size, **cnn_kwargs)
self.conv2 = nn.Conv2d(hid_channels, hid_channels, kernel_size, **cnn_kwargs)
self.conv3 = nn.Conv2d(hid_channels, hid_channels, kernel_size, **cnn_kwargs)
self.conv_128 = nn.Conv2d(hid_channels, hid_channels, kernel_size, **cnn_kwargs)
# Fully connected layers
self.lin1 = nn.Linear(np.product(self.reshape), hidden_dim)
self.lin2 = nn.Linear(hidden_dim, hidden_dim)
# Fully connected layers for mean and variance
self.mu_logvar_gen = nn.Linear(hidden_dim, self.latent_dim * 2)
def forward(self, x):
batch_size = x.size(0)
if len(x.shape) < 4:
x = x.unsqueeze(0)
# Convolutional layers with ReLu activations
x = torch.relu(self.conv1(x))
x = torch.relu(self.conv2(x))
x = torch.relu(self.conv3(x))
x = torch.relu(self.conv_128(x))
x = torch.relu(self.conv_128(x))
# Fully connected layers with ReLu activations
x = x.view((batch_size, -1 ))
x = torch.relu(self.lin1(x))
x = torch.relu(self.lin2(x))
# Fully connected layer for log variance and mean
# Log std-dev in paper (bear in mind)
mu_logvar = self.mu_logvar_gen(x)
mu, logvar = mu_logvar.view(-1, self.latent_dim, 2).unbind(-1)
return mu, logvar
class ImageDecoder128(nn.Module):
"""Parametrizes p(x|z).
This is the standard DCGAN architecture.
@param n_latents: integer
number of latent variable dimensions.
"""
def __init__(self, n_latents):
super(ImageDecoder128, self).__init__()
latent_dim = n_latents
# Layer parameters
hid_channels = 32
kernel_size = 4
hidden_dim = 256
# Shape required to start transpose convs
self.reshape = (hid_channels, kernel_size, kernel_size)
n_chan = 3
# Fully connected layers
self.lin1 = nn.Linear(latent_dim, hidden_dim)
self.lin2 = nn.Linear(hidden_dim, hidden_dim)
self.lin3 = nn.Linear(hidden_dim, np.prod(self.reshape))
# Convolutional layers
cnn_kwargs = dict(stride=2, padding=1)
# Extra transpose conv so the output reaches 128x128 (applied twice in forward)
self.convT_128 = nn.ConvTranspose2d(hid_channels, hid_channels, kernel_size, **cnn_kwargs)
self.convT1 = nn.ConvTranspose2d(hid_channels, hid_channels, kernel_size, **cnn_kwargs)
self.convT2 = nn.ConvTranspose2d(hid_channels, hid_channels, kernel_size, **cnn_kwargs)
self.convT3 = nn.ConvTranspose2d(hid_channels, n_chan, kernel_size, **cnn_kwargs)
def forward(self, z):
batch_size = z.size(0)
# Fully connected layers with ReLU activations
x = torch.relu(self.lin1(z))
x = torch.relu(self.lin2(x))
x = torch.relu(self.lin3(x))
x = x.view(batch_size, *self.reshape)
# Convolutional layers with ReLU activations
# convT_128 is applied twice (sharing one set of weights) to upsample 4x4 to 16x16
x = torch.relu(self.convT_128(x))
x = torch.relu(self.convT_128(x))
x = torch.relu(self.convT1(x))
x = torch.relu(self.convT2(x))
# Sigmoid activation for final conv layer
x = torch.sigmoid(self.convT3(x))
return x
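The decoder mirrors the encoder's arithmetic with the `ConvTranspose2d` output-size rule: each layer here doubles the spatial size, so the five transpose convs in `forward` take the 4x4 reshaped map back up to 128x128. A framework-free check:

```python
def convT_out(n, k=4, s=2, p=1):
    """Spatial size after one ConvTranspose2d(kernel_size=k, stride=s, padding=p)."""
    return (n - 1) * s - 2 * p + k

size = 4
for _ in range(5):  # convT_128 twice, then convT1, convT2, convT3
    size = convT_out(size)
print(size)  # 128
```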
class ImageEncoder64(nn.Module):
"""Parametrizes q(z|x).
This is the standard DCGAN architecture.
@param n_latents: integer
number of latent variable dimensions.
"""
def __init__(self, n_latents):
super(ImageEncoder64, self).__init__()
hid_channels = 32
kernel_size = 4
hidden_dim = 256
self.latent_dim = n_latents
# Shape of the feature map after the conv stack (used to size lin1)
self.reshape = (hid_channels, kernel_size, kernel_size)
n_chan = 3
# Convolutional layers
cnn_kwargs = dict(stride=2, padding=1)
self.conv1 = nn.Conv2d(n_chan, hid_channels, kernel_size, **cnn_kwargs)
self.conv2 = nn.Conv2d(hid_channels, hid_channels, kernel_size, **cnn_kwargs)
self.conv3 = nn.Conv2d(hid_channels, hid_channels, kernel_size, **cnn_kwargs)
# If input image is 64x64 do fourth convolution
self.conv_64 = nn.Conv2d(hid_channels, hid_channels, kernel_size, **cnn_kwargs)
# Fully connected layers
self.lin1 = nn.Linear(np.prod(self.reshape), hidden_dim)
self.lin2 = nn.Linear(hidden_dim, hidden_dim)
# Fully connected layers for mean and variance
self.mu_logvar_gen = nn.Linear(hidden_dim, self.latent_dim * 2)
def forward(self, x):
if x.dim() < 4:
x = x.unsqueeze(0) # promote a single image to a batch of one
batch_size = x.size(0) # read the batch size after adding any missing batch dim
# Convolutional layers with ReLU activations
x = torch.relu(self.conv1(x))
x = torch.relu(self.conv2(x))
x = torch.relu(self.conv3(x))
x = torch.relu(self.conv_64(x))
# Fully connected layers with ReLU activations
x = x.view(batch_size, -1)
x = torch.relu(self.lin1(x))
x = torch.relu(self.lin2(x))
# Fully connected layer producing the mean and log variance
# (NB: the reference paper parameterizes log std-dev, bear in mind)
mu_logvar = self.mu_logvar_gen(x)
mu, logvar = mu_logvar.view(-1, self.latent_dim, 2).unbind(-1)
return mu, logvar
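One detail of `mu_logvar.view(-1, self.latent_dim, 2).unbind(-1)` worth spelling out: because `view` is row-major, the linear layer's `2 * latent_dim` outputs are consumed as interleaved `(mu_i, logvar_i)` pairs, not as two contiguous halves. A plain-Python mimic for a single sample:

```python
def split_mu_logvar(flat, latent_dim):
    """Mimic view(-1, latent_dim, 2).unbind(-1) on a flat output list:
    unit 2*i pairs with unit 2*i + 1, so mu and logvar are interleaved."""
    mu = [flat[2 * i] for i in range(latent_dim)]
    logvar = [flat[2 * i + 1] for i in range(latent_dim)]
    return mu, logvar

print(split_mu_logvar([1, 10, 2, 20, 3, 30], 3))  # ([1, 2, 3], [10, 20, 30])
```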
class ImageDecoder64(nn.Module):
"""Parametrizes p(x|z).
This is the standard DCGAN architecture.
@param n_latents: integer
number of latent variable dimensions.
"""
def __init__(self, n_latents):
super(ImageDecoder64, self).__init__()
latent_dim = n_latents
# Layer parameters
hid_channels = 32
kernel_size = 4
hidden_dim = 256
# Shape required to start transpose convs
self.reshape = (hid_channels, kernel_size, kernel_size)
n_chan = 3
# Fully connected layers
self.lin1 = nn.Linear(latent_dim, hidden_dim)
self.lin2 = nn.Linear(hidden_dim, hidden_dim)
self.lin3 = nn.Linear(hidden_dim, np.prod(self.reshape))
# Convolutional layers
cnn_kwargs = dict(stride=2, padding=1)
# First transpose conv (name kept from the 128x128 variant)
self.convT_128 = nn.ConvTranspose2d(hid_channels, hid_channels, kernel_size, **cnn_kwargs)
self.convT1 = nn.ConvTranspose2d(hid_channels, hid_channels, kernel_size, **cnn_kwargs)
self.convT2 = nn.ConvTranspose2d(hid_channels, hid_channels, kernel_size, **cnn_kwargs)
self.convT3 = nn.ConvTranspose2d(hid_channels, n_chan, kernel_size, **cnn_kwargs)
def forward(self, z):
batch_size = z.size(0)
# Fully connected layers with ReLU activations
x = torch.relu(self.lin1(z))
x = torch.relu(self.lin2(x))
x = torch.relu(self.lin3(x))
x = x.view(batch_size, *self.reshape)
# Convolutional layers with ReLU activations
x = torch.relu(self.convT_128(x))
x = torch.relu(self.convT1(x))
x = torch.relu(self.convT2(x))
# Sigmoid activation for final conv layer
x = torch.sigmoid(self.convT3(x))
return x
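Between these encoders and decoders, `z` is normally produced by the standard VAE reparameterization trick, which is not shown in this chunk. A scalar sketch of that step, assuming the usual `z = mu + exp(0.5 * logvar) * eps` convention implied by the encoder's `(mu, logvar)` outputs:

```python
import math
import random

def reparameterize(mu, logvar, rng=None):
    """Sample z = mu + sigma * eps with eps ~ N(0, 1), where
    sigma = exp(0.5 * logvar). Scalar sketch; the real model uses tensors."""
    rng = rng if rng is not None else random.Random(0)
    eps = rng.gauss(0.0, 1.0)
    return mu + math.exp(0.5 * logvar) * eps
```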
class Swish(nn.Module):
"""https://arxiv.org/abs/1710.05941"""
def forward(self, x):
return x * torch.sigmoid(x)
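A torch-free scalar reference for the activation above, handy for spot-checking values of `x * sigmoid(x)`:

```python
import math

def swish(x):
    """Scalar Swish (x * sigmoid(x)), matching the nn.Module variant."""
    return x * (1.0 / (1.0 + math.exp(-x)))

print(swish(0.0))  # 0.0, since any x times sigmoid is scaled by x itself
```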
| 34.996364 | 98 | 0.628948 | 1,268 | 9,624 | 4.589117 | 0.134069 | 0.068053 | 0.044681 | 0.062554 | 0.81337 | 0.81337 | 0.798763 | 0.79292 | 0.776594 | 0.762846 | 0 | 0.028249 | 0.271717 | 9,624 | 274 | 99 | 35.124088 | 0.801969 | 0.209996 | 0 | 0.68323 | 0 | 0 | 0.001476 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.080745 | false | 0 | 0.018634 | 0.006211 | 0.192547 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f4c7c21ebc1a5eefa64a76a0dac8ba8bef0613df | 3,760 | py | Python | server/lib/api/routes/gameAdmin.py | ryanshi42/ARG-Event | e7446a659483b539198dd923bf5a57c0f83d85e9 | [
"MIT"
] | 4 | 2020-04-28T02:42:35.000Z | 2020-05-20T05:42:41.000Z | server/lib/api/routes/gameAdmin.py | ryanshi42/ARG-Event | e7446a659483b539198dd923bf5a57c0f83d85e9 | [
"MIT"
] | 1 | 2021-08-07T12:19:28.000Z | 2021-08-07T12:19:28.000Z | server/lib/api/routes/gameAdmin.py | ryanshi42/ARG-Event | e7446a659483b539198dd923bf5a57c0f83d85e9 | [
"MIT"
] | 1 | 2020-09-11T11:02:17.000Z | 2020-09-11T11:02:17.000Z | from .. import routing, JSON
from tornado.web import authenticated, RequestHandler
from lib.questions import SQLMethod as questionsSQLMethod
from lib.auth import SQLMethod as authSQLMethod
@routing.POST("/questions/question/submit")
@authenticated
def questionSubmit(self: RequestHandler, args: dict):
if not self.current_user.isAdmin:
return self.finish(JSON.error("access denied"))
result = questionsSQLMethod.questions.createQuestion(**args)
if result:
return self.finish(JSON.OK())
return self.finish(JSON.FALSE())
@routing.POST("/questions/question/edit")
@authenticated
def questionEdit(self: RequestHandler, args: dict):
if not self.current_user.isAdmin:
return self.finish(JSON.error("access denied"))
result = questionsSQLMethod.questions.editQuestion(**args)
if result:
return self.finish(JSON.OK())
return self.finish(JSON.FALSE())
@routing.POST("/questions/question/editAnswer")
@authenticated
def questionEditAnswer(self: RequestHandler, args: dict):
if not self.current_user.isAdmin:
return self.finish(JSON.error("access denied"))
result = questionsSQLMethod.questions.editQuestionAnswer(**args)
if result:
return self.finish(JSON.OK())
return self.finish(JSON.FALSE())
@routing.POST("/questions/question/getAnswer")
@authenticated
def questionGetAnswer(self: RequestHandler, args: dict):
if not self.current_user.isAdmin:
return self.finish(JSON.error("access denied"))
result = questionsSQLMethod.questions.getAnswer(**args)
if result:
return self.finish(JSON.data(result))
return self.finish(JSON.FALSE())
@routing.POST("/questions/question/delete")
@authenticated
def questionDelete(self: RequestHandler, args: dict):
if not self.current_user.isAdmin:
return self.finish(JSON.error("access denied"))
result = questionsSQLMethod.questions.deleteQuestion(**args)
if result:
return self.finish(JSON.OK())
return self.finish(JSON.FALSE())
@routing.POST("/questions/category/submit")
@authenticated
def categorySubmit(self: RequestHandler, args: dict):
if not self.current_user.isAdmin:
return self.finish(JSON.error("access denied"))
result = questionsSQLMethod.categories.createCategory(**args)
if result:
return self.finish(JSON.OK())
return self.finish(JSON.FALSE())
@routing.POST("/questions/category/edit")
@authenticated
def categoryEdit(self: RequestHandler, args: dict):
if not self.current_user.isAdmin:
return self.finish(JSON.error("access denied"))
result = questionsSQLMethod.categories.editCategory(**args)
if result:
return self.finish(JSON.OK())
return self.finish(JSON.FALSE())
@routing.POST("/questions/category/delete")
@authenticated
def categoryDelete(self: RequestHandler, args: dict):
if not self.current_user.isAdmin:
return self.finish(JSON.error("access denied"))
result = questionsSQLMethod.categories.deleteCategory(**args)
if result:
return self.finish(JSON.OK())
return self.finish(JSON.FALSE())
@routing.POST("/questions/users/getAll")
@authenticated
def usersGetAll(self: RequestHandler, args: dict):
if not self.current_user.isAdmin:
return self.finish(JSON.error("access denied"))
return self.finish(JSON.data(questionsSQLMethod.users.getAllUsers()))
@routing.POST("/questions/users/delete")
@authenticated
def usersDelete(self: RequestHandler, args: dict):
if not self.current_user.isAdmin:
return self.finish(JSON.error("access denied"))
result = questionsSQLMethod.users.deleteUser(**args)
if result:
return self.finish(JSON.OK())
return self.finish(JSON.FALSE())
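Every handler above repeats the same `isAdmin` guard before doing its work. One way to factor that out is a small decorator. This is a hypothetical sketch: `admin_required` is not part of the existing routing module, and the plain-dict error payload here stands in for `JSON.error`:

```python
import functools

def admin_required(handler):
    """Wrap a route handler so non-admin users get an error response
    before the handler body runs (hypothetical refactor of the guard)."""
    @functools.wraps(handler)
    def wrapper(self, args):
        if not self.current_user.isAdmin:
            return self.finish({"error": "access denied"})
        return handler(self, args)
    return wrapper
```

Applied under `@routing.POST(...)` and `@authenticated`, each handler body would then shrink to just its SQL call and response.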
| 31.333333 | 73 | 0.721809 | 436 | 3,760 | 6.201835 | 0.144495 | 0.107249 | 0.171598 | 0.214497 | 0.72855 | 0.715976 | 0.715976 | 0.704142 | 0.704142 | 0.684541 | 0 | 0 | 0.153989 | 3,760 | 119 | 74 | 31.596639 | 0.850047 | 0 | 0 | 0.615385 | 0 | 0 | 0.102926 | 0.068351 | 0 | 0 | 0 | 0 | 0 | 1 | 0.10989 | false | 0 | 0.043956 | 0 | 0.472527 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f4cd8cba2987519e96f909a416997a5393144f0f | 32 | py | Python | discord_styled/utils/__init__.py | discord-styled/discord-interactions-styled | ccc31197c20badab4bb2bc4ef415ee1749404fd5 | [
"MIT"
] | 2 | 2021-09-04T16:01:36.000Z | 2022-01-27T02:03:42.000Z | discord_styled/utils/__init__.py | discord-styled/discord-interactions-styled | ccc31197c20badab4bb2bc4ef415ee1749404fd5 | [
"MIT"
] | null | null | null | discord_styled/utils/__init__.py | discord-styled/discord-interactions-styled | ccc31197c20badab4bb2bc4ef415ee1749404fd5 | [
"MIT"
] | null | null | null | from . import permissions, slash | 32 | 32 | 0.8125 | 4 | 32 | 6.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 32 | 1 | 32 | 32 | 0.928571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
52097b05b47f30d0b0bc7cd71dd313941ff8f6ac | 7,752 | py | Python | src/nucleotide/component/windows/msvc/atom/package.py | dmilos/nucleotide | aad5d60508c9e4baf4888069284f2cb5c9fd7c55 | [
"Apache-2.0"
] | 1 | 2020-09-04T13:00:04.000Z | 2020-09-04T13:00:04.000Z | src/nucleotide/component/windows/msvc/atom/package.py | dmilos/nucleotide | aad5d60508c9e4baf4888069284f2cb5c9fd7c55 | [
"Apache-2.0"
] | 1 | 2020-04-10T01:52:32.000Z | 2020-04-10T09:11:29.000Z | src/nucleotide/component/windows/msvc/atom/package.py | dmilos/nucleotide | aad5d60508c9e4baf4888069284f2cb5c9fd7c55 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python2
# Copyright 2015 Dejan D. M. Milosavljevic
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import platform
import nucleotide
import nucleotide.component
import nucleotide.component.function
import nucleotide.component.windows.msvc.atom.module.boost
import nucleotide.component.windows.msvc.atom.module.opencv
import nucleotide.component.windows.msvc.atom.module.zlib
import nucleotide.component.windows.msvc.atom.module.tbb
import nucleotide.component.windows.msvc.atom.module.protobuf
import nucleotide.component.windows.msvc.atom.module.python
import nucleotide.component.windows.msvc.atom.module.bzip2
Is_list={
'boost': {
'CPPDEFINES':nucleotide.component.windows.msvc.atom.module.boost._windows_msvc_atom_module_boost_CPPDEFINES,
'CPPPATH' :nucleotide.component.windows.msvc.atom.module.boost._windows_msvc_atom_module_boost_CPPPATH,
'LINKFLAGS' :nucleotide.component.windows.msvc.atom.module.boost._windows_msvc_atom_module_boost_LINKFLAGS,
'LIBPATH' :nucleotide.component.windows.msvc.atom.module.boost._windows_msvc_atom_module_boost_LIBPATH,
'LIBS' :nucleotide.component.windows.msvc.atom.module.boost._windows_msvc_atom_module_boost_LIBS,
},
'opencv': {
'CPPDEFINES':nucleotide.component.windows.msvc.atom.module.opencv._windows_msvc_atom_module_opencv_CPPDEFINES,
'CPPPATH' :nucleotide.component.windows.msvc.atom.module.opencv._windows_msvc_atom_module_opencv_CPPPATH,
'LINKFLAGS' :nucleotide.component.windows.msvc.atom.module.opencv._windows_msvc_atom_module_opencv_LINKFLAGS,
'LIBPATH' :nucleotide.component.windows.msvc.atom.module.opencv._windows_msvc_atom_module_opencv_LIBPATH,
'LIBS' :nucleotide.component.windows.msvc.atom.module.opencv._windows_msvc_atom_module_opencv_LIBS,
},
'zlib': {
'CPPDEFINES':nucleotide.component.windows.msvc.atom.module.zlib._windows_msvc_atom_module_zlib_CPPDEFINES,
'CPPPATH' :nucleotide.component.windows.msvc.atom.module.zlib._windows_msvc_atom_module_zlib_CPPPATH,
'LINKFLAGS' :nucleotide.component.windows.msvc.atom.module.zlib._windows_msvc_atom_module_zlib_LINKFLAGS,
'LIBPATH' :nucleotide.component.windows.msvc.atom.module.zlib._windows_msvc_atom_module_zlib_LIBPATH,
'LIBS' :nucleotide.component.windows.msvc.atom.module.zlib._windows_msvc_atom_module_zlib_LIBS,
},
'tbb': {
'CPPDEFINES':nucleotide.component.windows.msvc.atom.module.tbb._windows_msvc_atom_module_tbb_CPPDEFINES,
'CPPPATH' :nucleotide.component.windows.msvc.atom.module.tbb._windows_msvc_atom_module_tbb_CPPPATH,
'LINKFLAGS' :nucleotide.component.windows.msvc.atom.module.tbb._windows_msvc_atom_module_tbb_LINKFLAGS,
'LIBPATH' :nucleotide.component.windows.msvc.atom.module.tbb._windows_msvc_atom_module_tbb_LIBPATH,
'LIBS' :nucleotide.component.windows.msvc.atom.module.tbb._windows_msvc_atom_module_tbb_LIBS,
},
'protobuf': {
'CPPDEFINES':nucleotide.component.windows.msvc.atom.module.protobuf._windows_msvc_atom_module_protobuf_CPPDEFINES,
'CPPPATH' :nucleotide.component.windows.msvc.atom.module.protobuf._windows_msvc_atom_module_protobuf_CPPPATH,
'LINKFLAGS' :nucleotide.component.windows.msvc.atom.module.protobuf._windows_msvc_atom_module_protobuf_LINKFLAGS,
'LIBPATH' :nucleotide.component.windows.msvc.atom.module.protobuf._windows_msvc_atom_module_protobuf_LIBPATH,
'LIBS' :nucleotide.component.windows.msvc.atom.module.protobuf._windows_msvc_atom_module_protobuf_LIBS,
},
'python': {
'CPPDEFINES':nucleotide.component.windows.msvc.atom.module.python._windows_msvc_atom_module_python_CPPDEFINES,
'CPPPATH' :nucleotide.component.windows.msvc.atom.module.python._windows_msvc_atom_module_python_CPPPATH,
'LINKFLAGS' :nucleotide.component.windows.msvc.atom.module.python._windows_msvc_atom_module_python_LINKFLAGS,
'LIBPATH' :nucleotide.component.windows.msvc.atom.module.python._windows_msvc_atom_module_python_LIBPATH,
'LIBS' :nucleotide.component.windows.msvc.atom.module.python._windows_msvc_atom_module_python_LIBS,
},
'bzip2': {
'CPPDEFINES':nucleotide.component.windows.msvc.atom.module.bzip2._windows_msvc_atom_module_bzip2_CPPDEFINES,
'CPPPATH' :nucleotide.component.windows.msvc.atom.module.bzip2._windows_msvc_atom_module_bzip2_CPPPATH,
'LINKFLAGS' :nucleotide.component.windows.msvc.atom.module.bzip2._windows_msvc_atom_module_bzip2_LINKFLAGS,
'LIBPATH' :nucleotide.component.windows.msvc.atom.module.bzip2._windows_msvc_atom_module_bzip2_LIBPATH,
'LIBS' :nucleotide.component.windows.msvc.atom.module.bzip2._windows_msvc_atom_module_bzip2_LIBS,
}
}
def _windows_msvc_atom_package_CPPDEFINES( P_list ):
#print( P_list )
for key in P_list:
if( key in Is_list ):
return Is_list[key]['CPPDEFINES']( P_list[key] )
else:
print( 'Pakage: \'' + key + '\' Not found.' )
return []
def _windows_msvc_atom_package_CPPPATH( P_list ):
#print( P_list )
for key in P_list:
if( key in Is_list ):
return Is_list[key]['CPPPATH']( P_list[key] )
else:
print( 'Pakage: \'' + key + '\' Not found.' )
return []
def _windows_msvc_atom_package_LINKFLAGS( P_list ):
#print( P_list )
for key in P_list:
if( key in Is_list ):
return Is_list[key]['LINKFLAGS']( P_list[key] )
else:
print( 'Pakage: \'' + key + '\' Not found.' )
return []
def _windows_msvc_atom_package_LIBPATH( P_list ):
#print( P_list )
for key in P_list:
if( key in Is_list ):
return Is_list[key]['LIBPATH']( P_list[key] )
else:
print( 'Pakage: \'' + key + '\' Not found.' )
return []
def _windows_msvc_atom_package_LIBS( P_list ):
#print( P_list )
for key in P_list:
if( key in Is_list ):
return Is_list[key]['LIBS']( P_list[key] )
else:
print( 'Pakage: \'' + key + '\' Not found.' )
return []
atom__common_package = {
'platform' : {
'host' : 'Windows',
'guest' : 'Windows'
},
'cc' : {
'vendor' : 'Microsoft',
'name' : 'msvc',
'version': 'X'
},
'config' : {
'CPPDEFINES' : _windows_msvc_atom_package_CPPDEFINES,
'CPPPATH' : _windows_msvc_atom_package_CPPPATH,
'LINKFLAGS' : _windows_msvc_atom_package_LINKFLAGS,
'LIBPATH' : _windows_msvc_atom_package_LIBPATH,
'LIBS' : _windows_msvc_atom_package_LIBS,
},
'name' :'package',
'class': [ 'package', 'windows:package' ]
}
class Package:
def __init( self ) :
pass
@staticmethod
def extend( P_option ) :
nucleotide.component.function.extend( P_option, 'package', atom__common_package )
@staticmethod
def check():
pass | 43.066667 | 123 | 0.698787 | 921 | 7,752 | 5.554832 | 0.129207 | 0.18706 | 0.255082 | 0.316067 | 0.809812 | 0.76896 | 0.76896 | 0.685106 | 0.60301 | 0.60301 | 0 | 0.003372 | 0.196723 | 7,752 | 180 | 124 | 43.066667 | 0.818211 | 0.08759 | 0 | 0.228346 | 0 | 0 | 0.091623 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.062992 | false | 0.015748 | 0.094488 | 0 | 0.244094 | 0.03937 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
52836c8099533c5505ae2ba76091406b34d59009 | 1,179 | py | Python | tests/test_manager.py | Izeren/dr_tg | 862c996200177e033149152e33985bfb114c758d | [
"Apache-2.0"
] | 1 | 2021-11-11T15:05:46.000Z | 2021-11-11T15:05:46.000Z | tests/test_manager.py | Izeren/dr_tg | 862c996200177e033149152e33985bfb114c758d | [
"Apache-2.0"
] | 40 | 2020-10-09T21:13:54.000Z | 2021-12-02T00:54:31.000Z | tests/test_manager.py | Izeren/pewpewbot | 862c996200177e033149152e33985bfb114c758d | [
"Apache-2.0"
] | null | null | null | import pytest
from pytest_mock import MockerFixture
from tests.mock_utils import mock_manager_with_future_koline
@pytest.mark.asyncio
async def test_get_or_load_and_parse_koline():
# given
koline, manager = mock_manager_with_future_koline()
# when
returned_koline = await manager.get_or_load_and_parse_koline()
# then
assert manager.state.koline == koline
assert returned_koline == koline
manager.http_client.status.assert_called_once_with()
@pytest.mark.asyncio
async def test_load_and_parse_koline():
# given
koline, manager = mock_manager_with_future_koline()
# when
returned_koline = await manager.load_and_parse_koline()
# then
assert manager.state.koline == koline
assert returned_koline == koline
manager.http_client.status.assert_called_once_with()
@pytest.mark.asyncio
async def test_get_or_load_and_parse_koline_not_empty():
# given
koline, manager = mock_manager_with_future_koline()
manager.state.koline = koline
# when
await manager.get_or_load_and_parse_koline()
# then
assert manager.state.koline == koline
manager.http_client.status.assert_not_called()
| 24.5625 | 66 | 0.759966 | 159 | 1,179 | 5.232704 | 0.220126 | 0.109375 | 0.086538 | 0.129808 | 0.862981 | 0.830529 | 0.830529 | 0.795673 | 0.741587 | 0.741587 | 0 | 0 | 0.167939 | 1,179 | 47 | 67 | 25.085106 | 0.848114 | 0.039864 | 0 | 0.541667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0 | false | 0 | 0.125 | 0 | 0.125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
529011b3f92e05a288885533f4ca0ab2565c02ef | 176 | py | Python | treeseg/__init__.py | brycefrank/treeseg | fee448712ee9bb4f582d9ea933bac37aaa7dc42f | [
"MIT"
] | 13 | 2019-06-09T16:07:19.000Z | 2021-09-23T07:44:16.000Z | treeseg/__init__.py | brycefrank/treeseg | fee448712ee9bb4f582d9ea933bac37aaa7dc42f | [
"MIT"
] | null | null | null | treeseg/__init__.py | brycefrank/treeseg | fee448712ee9bb4f582d9ea933bac37aaa7dc42f | [
"MIT"
] | 5 | 2019-04-27T01:23:56.000Z | 2020-09-21T08:32:23.000Z | from __future__ import absolute_import
__version__ = "0.0.1"
from treeseg import base
from treeseg import detection
from treeseg import segmentation
from treeseg import plot
| 19.555556 | 38 | 0.829545 | 25 | 176 | 5.48 | 0.48 | 0.321168 | 0.49635 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019868 | 0.142045 | 176 | 8 | 39 | 22 | 0.887417 | 0 | 0 | 0 | 0 | 0 | 0.028409 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.833333 | 0 | 0.833333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
875314da14f75523374a948e7b116ffebfee2672 | 147 | py | Python | pypro/base/views.py | felipeapellegrini/curso-django | 9a552ae3cb6c100055814cc8530c8c9b531c95dd | [
"MIT"
] | null | null | null | pypro/base/views.py | felipeapellegrini/curso-django | 9a552ae3cb6c100055814cc8530c8c9b531c95dd | [
"MIT"
] | 9 | 2021-08-14T14:38:23.000Z | 2021-08-17T03:11:53.000Z | pypro/base/views.py | felipeapellegrini/curso-django | 9a552ae3cb6c100055814cc8530c8c9b531c95dd | [
"MIT"
] | null | null | null | from django.http import HttpResponse
def home(request):
return HttpResponse('<html><body>Olá mundo</body></html>', content_type='text/html')
| 24.5 | 88 | 0.734694 | 20 | 147 | 5.35 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108844 | 147 | 5 | 89 | 29.4 | 0.816794 | 0 | 0 | 0 | 0 | 0 | 0.29932 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
5e6dc330f51fa97444f6cb8c230d75537f4cb0b5 | 41 | py | Python | 01RNN_Seq2Seq_Translation/config/__init__.py | DunZhang/NLPGeneration | 6edae15cac4d0ee633264a2dab0abcd6bd540cfa | [
"MIT"
] | 1 | 2021-06-25T02:21:27.000Z | 2021-06-25T02:21:27.000Z | 01RNN_Seq2Seq_Translation/config/__init__.py | DunZhang/NLPGeneration | 6edae15cac4d0ee633264a2dab0abcd6bd540cfa | [
"MIT"
] | null | null | null | 01RNN_Seq2Seq_Translation/config/__init__.py | DunZhang/NLPGeneration | 6edae15cac4d0ee633264a2dab0abcd6bd540cfa | [
"MIT"
] | null | null | null | from .Seq2SeqConfig import Seq2SeqConfig
| 20.5 | 40 | 0.878049 | 4 | 41 | 9 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.054054 | 0.097561 | 41 | 1 | 41 | 41 | 0.918919 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5e72912c800d496310f25ffa4ffac1857ed4ec5a | 126 | py | Python | search.py | kawseribn/SimoBot | 2d91077eb152635b50fa215a077f0871788c7cda | [
"MIT"
] | null | null | null | search.py | kawseribn/SimoBot | 2d91077eb152635b50fa215a077f0871788c7cda | [
"MIT"
] | null | null | null | search.py | kawseribn/SimoBot | 2d91077eb152635b50fa215a077f0871788c7cda | [
"MIT"
] | null | null | null | import os
from parallelHillClimber import PARALLEL_HILL_CLIMBER
phc = PARALLEL_HILL_CLIMBER()
phc.Evolve()
phc.Show_Best()
| 14 | 53 | 0.81746 | 17 | 126 | 5.764706 | 0.647059 | 0.244898 | 0.387755 | 0.44898 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 126 | 8 | 54 | 15.75 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
5eac83d9f809bdbfec862933a452a73ecd1aa21e | 92 | py | Python | evoflow/callbacks/__init__.py | joaogui1/evoflow | 4824c70443498c5ceb7311f235a7fe8274c69a23 | [
"Apache-2.0"
] | 33 | 2020-05-16T22:45:52.000Z | 2021-12-11T14:31:38.000Z | evoflow/callbacks/__init__.py | joaogui1/evoflow | 4824c70443498c5ceb7311f235a7fe8274c69a23 | [
"Apache-2.0"
] | 54 | 2020-05-17T05:09:38.000Z | 2020-12-02T05:26:58.000Z | evoflow/callbacks/__init__.py | isabella232/evoflow | b2649137e77b237416e7022b0eb9cf7cf03c0d62 | [
"Apache-2.0"
] | 6 | 2020-05-30T13:23:47.000Z | 2022-01-16T11:39:19.000Z | from .callback import Callback # noqa: F401
from .dummy import DummyCallback # noqa: F401
| 30.666667 | 46 | 0.76087 | 12 | 92 | 5.833333 | 0.583333 | 0.228571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.078947 | 0.173913 | 92 | 2 | 47 | 46 | 0.842105 | 0.228261 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0d607a8eb4acd2a5a09d9a213c5420121927b088 | 47 | py | Python | Models/Convolution/__init__.py | lupoglaz/DeepProteinDocking2D | f63a0bf3abbb3be25720fa6984a7fff72307897c | [
"MIT"
] | 1 | 2021-12-14T02:15:02.000Z | 2021-12-14T02:15:02.000Z | Models/Convolution/__init__.py | lupoglaz/DeepProteinDocking2D | f63a0bf3abbb3be25720fa6984a7fff72307897c | [
"MIT"
] | null | null | null | Models/Convolution/__init__.py | lupoglaz/DeepProteinDocking2D | f63a0bf3abbb3be25720fa6984a7fff72307897c | [
"MIT"
] | 1 | 2021-12-14T02:14:55.000Z | 2021-12-14T02:14:55.000Z | from .ProteinConvolution2D import ProteinConv2D | 47 | 47 | 0.914894 | 4 | 47 | 10.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.045455 | 0.06383 | 47 | 1 | 47 | 47 | 0.931818 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
0d892983f9366d80bd2f918be498b357ae6249ba | 165 | py | Python | food_finder/settings.py | broadsinatlanta/atsushi-rf | 76a5e74056dfb5df694474e1e6f9e5b99eeb3be0 | [
"MIT"
] | null | null | null | food_finder/settings.py | broadsinatlanta/atsushi-rf | 76a5e74056dfb5df694474e1e6f9e5b99eeb3be0 | [
"MIT"
] | 10 | 2020-02-12T00:09:41.000Z | 2022-02-10T11:34:13.000Z | food_finder/settings.py | broadsinatlanta/atsushi-rf | 76a5e74056dfb5df694474e1e6f9e5b99eeb3be0 | [
"MIT"
] | null | null | null | import socket
# Ensure proper usage (SSL etc.)
if socket.gethostname() == 'AT.local':
from .local_settings import *
else:
from .production_settings import * | 23.571429 | 38 | 0.715152 | 21 | 165 | 5.52381 | 0.714286 | 0.241379 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.175758 | 165 | 7 | 39 | 23.571429 | 0.852941 | 0.181818 | 0 | 0 | 0 | 0 | 0.059701 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.6 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0da9835619dd517e17933ad98de1821556c4c608 | 36 | py | Python | builder/agent/lib/__init__.py | attackgithub/Keylogger-3 | e410a72f6857dc803c837a0cf621b15deb5bf07e | [
"MIT"
] | 1 | 2020-03-30T04:22:45.000Z | 2020-03-30T04:22:45.000Z | builder/agent/lib/__init__.py | attackgithub/Keylogger-3 | e410a72f6857dc803c837a0cf621b15deb5bf07e | [
"MIT"
] | null | null | null | builder/agent/lib/__init__.py | attackgithub/Keylogger-3 | e410a72f6857dc803c837a0cf621b15deb5bf07e | [
"MIT"
] | 1 | 2020-03-30T04:22:46.000Z | 2020-03-30T04:22:46.000Z | # Date: 01/28/2019
# Author: Mohamed | 18 | 18 | 0.694444 | 6 | 36 | 4.166667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.258065 | 0.138889 | 36 | 2 | 19 | 18 | 0.548387 | 0.888889 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0dcce6e34e1ec96c8ee21af77458b01476548e08 | 234 | py | Python | tests_src/temp_tests.py | Video-Lab/pyifx | 9b9aaa690059f3148833041eebdc4de7cc8d5459 | [
"MIT"
] | null | null | null | tests_src/temp_tests.py | Video-Lab/pyifx | 9b9aaa690059f3148833041eebdc4de7cc8d5459 | [
"MIT"
] | null | null | null | tests_src/temp_tests.py | Video-Lab/pyifx | 9b9aaa690059f3148833041eebdc4de7cc8d5459 | [
"MIT"
] | null | null | null | """This file contains tests for new features/changes that haven't been designated to a file yet. Feel free to use this function as a playground to test your changes & additions."""
#TODO: Depth of ImageVolume
from test_vars import *
| 46.8 | 180 | 0.773504 | 39 | 234 | 4.615385 | 0.820513 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 234 | 4 | 181 | 58.5 | 0.923077 | 0.858974 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.25 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0de6cc8ef0bc724b858048ae91983b7b5441c012 | 11,683 | py | Python | src/pentadClass.py | DantheJack/SliCer | 9cb93dcf0cb9fa5174bf7482fb5f0d02bbce20a9 | [
"MIT"
] | null | null | null | src/pentadClass.py | DantheJack/SliCer | 9cb93dcf0cb9fa5174bf7482fb5f0d02bbce20a9 | [
"MIT"
] | null | null | null | src/pentadClass.py | DantheJack/SliCer | 9cb93dcf0cb9fa5174bf7482fb5f0d02bbce20a9 | [
"MIT"
] | null | null | null | from globalVariables import PENTADprinter
class pentadStruct:
"""Class representing a line of the subject, containing :
- its number
- its full text
- its list of roles & variables
- its significance"""
def __init__(self, line, text): # Constructor
self.id = 0
if (len(line) != 2):
line.append(line[0])
self.lines = line
self.text = text
self.roles = [roleStruct()]
self.useful = False
if(self.text != "" and PENTADprinter):
print("PENTAD printing --> ", "New Pentad created : (", self.lines, ", \"" + str(self.text) + "\" )")
def addRole(self, roleName, mainVar=None, otherVars=None):
if otherVars is None: # avoid sharing one mutable default list across calls
otherVars = []
if(self.roles[0].type == "unknown"):
del self.roles[:]
cancelAdding = False
for h in self.roles :
if(h.type == roleName and h.mainVar == mainVar and h.otherVars == otherVars):
cancelAdding = True
break
if(cancelAdding) :
if PENTADprinter and roleName != "unknown":
print("PENTAD printing --> ", "Role : ", roleName, " for ", mainVar, "was already given to this statement.")
else :
self.roles.append(roleStruct(roleName, mainVar, otherVars))
if PENTADprinter and roleName != "unknown":
print("PENTAD printing --> ", "Role added : ", roleName, " for ", mainVar)
if(mainVar): return roleName + " of " + mainVar
else: return roleName
def printing(self):
if self.lines[0] == self.lines[1] :
return str("[ " + str(self.lines[0]) + " ]" + " \t— " + self.text)
else :
return str(str(self.lines) + " \t— " + self.text)
def searchForRole(self, roleToSearch):
for role in self.roles:
if role.type == roleToSearch : return role
return False
class roleStruct:
"""Class representing a role fulfilled by a line of the subject, containing :
- its type (defVar, initVar, beginIf, etc...)
- its variable (if one is needed)
"""
    def __init__(self, type="unknown", mainVar=None, otherVars=None):  # Constructor
        self.type = type
        self.mainVar = mainVar
        self.otherVars = otherVars if otherVars is not None else []
def printAll(listOfEveryPentads = None):
print("")
for o in listOfEveryPentads:
#print(o.printing() + space + o.roles[0].type)
print(o.printing())
print("")
def printNothingButRoles(listOfEveryPentads = None):
print("")
max_size = 0
for o in listOfEveryPentads:
if(len(o.printing()) > max_size) :
max_size = len(o.printing())
for o in listOfEveryPentads:
space = ""
length = max_size + 5 -len(o.printing())
if(o.lines[0] == o.lines[1] and o.lines[0] < 10): length = length - 1
if(o.lines[0] != o.lines[1] and o.lines[0] < 10): length = length - 1
if(o.lines[0] != o.lines[1] and o.lines[1] < 10): length = length - 1
for char in range (length):
space += " "
if (len(o.roles) >= 3):
if(not o.roles[2].mainVar and not o.roles[0].mainVar and not o.roles[1].mainVar):
print(o.roles[0].type + ", " + o.roles[1].type + ", " + o.roles[2].type)
if(not o.roles[2].mainVar and o.roles[0].mainVar and not o.roles[1].mainVar):
print(o.roles[0].type + " (" + o.roles[0].mainVar + ")" + ", " + o.roles[1].type + ", " + o.roles[2].type)
if(not o.roles[2].mainVar and not o.roles[0].mainVar and o.roles[1].mainVar):
print(o.roles[0].type + ", " + o.roles[1].type + " (" + o.roles[1].mainVar + ")" + ", " + o.roles[2].type)
if(not o.roles[2].mainVar and o.roles[0].mainVar and o.roles[1].mainVar):
print(o.roles[0].type + " (" + o.roles[0].mainVar + ")" ", " + o.roles[1].type + " (" + o.roles[1].mainVar + ")" + ", " + o.roles[2].type)
if(o.roles[2].mainVar and not o.roles[0].mainVar and not o.roles[1].mainVar):
print(o.roles[0].type + ", " + o.roles[1].type + ", " + o.roles[2].type + " (" + o.roles[2].mainVar + ")")
if(o.roles[2].mainVar and o.roles[0].mainVar and not o.roles[1].mainVar):
print(o.roles[0].type + " (" + o.roles[0].mainVar + ")" + ", " + o.roles[1].type + ", " + o.roles[2].type + " (" + o.roles[2].mainVar + ")")
if(o.roles[2].mainVar and not o.roles[0].mainVar and o.roles[1].mainVar):
print(o.roles[0].type + ", " + o.roles[1].type + " (" + o.roles[1].mainVar + ")" ", " + o.roles[2].type + " (" + o.roles[2].mainVar + ")")
if(o.roles[2].mainVar and o.roles[0].mainVar and o.roles[1].mainVar):
print(o.roles[0].type + " (" + o.roles[0].mainVar + ")" ", " + o.roles[1].type + " (" + o.roles[1].mainVar + ")" ", " + o.roles[2].type + " (" + o.roles[2].mainVar + ")")
if (len(o.roles) == 2):
if(not o.roles[0].mainVar and not o.roles[1].mainVar):
print(o.roles[0].type + ", " + o.roles[1].type)
if(o.roles[0].mainVar and not o.roles[1].mainVar):
print(o.roles[0].type + " (" + o.roles[0].mainVar + ")" + ", " + o.roles[1].type)
if(not o.roles[0].mainVar and o.roles[1].mainVar):
print(o.roles[0].type + ", " + o.roles[1].type + " (" + o.roles[1].mainVar + ")")
if(o.roles[0].mainVar and o.roles[1].mainVar):
print(o.roles[0].type + " (" + o.roles[0].mainVar + ")" ", " + o.roles[1].type + " (" + o.roles[1].mainVar + ")")
if (len(o.roles) == 1):
if(o.roles[0].mainVar):
print(o.roles[0].type + " (" + o.roles[0].mainVar + ")")
else:
print(o.roles[0].type)
print("")
def printAllWithRoles(listOfEveryPentads = None):
print("")
max_size = 0
for o in listOfEveryPentads:
if(len(o.printing()) > max_size) :
max_size = len(o.printing())
for o in listOfEveryPentads:
space = ""
length = max_size + 5 -len(o.printing())
if(o.lines[0] == o.lines[1] and o.lines[0] < 10): length = length - 1
if(o.lines[0] != o.lines[1] and o.lines[0] < 10): length = length - 1
if(o.lines[0] != o.lines[1] and o.lines[1] < 10): length = length - 1
for char in range (length):
space += " "
if (len(o.roles) >= 3):
if(not o.roles[2].mainVar and not o.roles[0].mainVar and not o.roles[1].mainVar):
print(o.printing() + space + o.roles[0].type + ", " + o.roles[1].type + ", " + o.roles[2].type)
if(not o.roles[2].mainVar and o.roles[0].mainVar and not o.roles[1].mainVar):
print(o.printing() + space + o.roles[0].type + " (" + o.roles[0].mainVar + ")" + ", " + o.roles[1].type + ", " + o.roles[2].type)
if(not o.roles[2].mainVar and not o.roles[0].mainVar and o.roles[1].mainVar):
print(o.printing() + space + o.roles[0].type + ", " + o.roles[1].type + " (" + o.roles[1].mainVar + ")" + ", " + o.roles[2].type)
if(not o.roles[2].mainVar and o.roles[0].mainVar and o.roles[1].mainVar):
print(o.printing() + space + o.roles[0].type + " (" + o.roles[0].mainVar + ")" ", " + o.roles[1].type + " (" + o.roles[1].mainVar + ")" + ", " + o.roles[2].type)
if(o.roles[2].mainVar and not o.roles[0].mainVar and not o.roles[1].mainVar):
print(o.printing() + space + o.roles[0].type + ", " + o.roles[1].type + ", " + o.roles[2].type + " (" + o.roles[2].mainVar + ")")
if(o.roles[2].mainVar and o.roles[0].mainVar and not o.roles[1].mainVar):
print(o.printing() + space + o.roles[0].type + " (" + o.roles[0].mainVar + ")" + ", " + o.roles[1].type + ", " + o.roles[2].type + " (" + o.roles[2].mainVar + ")")
if(o.roles[2].mainVar and not o.roles[0].mainVar and o.roles[1].mainVar):
print(o.printing() + space + o.roles[0].type + ", " + o.roles[1].type + " (" + o.roles[1].mainVar + ")" ", " + o.roles[2].type + " (" + o.roles[2].mainVar + ")")
if(o.roles[2].mainVar and o.roles[0].mainVar and o.roles[1].mainVar):
print(o.printing() + space + o.roles[0].type + " (" + o.roles[0].mainVar + ")" ", " + o.roles[1].type + " (" + o.roles[1].mainVar + ")" ", " + o.roles[2].type + " (" + o.roles[2].mainVar + ")")
if (len(o.roles) == 2):
if(not o.roles[0].mainVar and not o.roles[1].mainVar):
print(o.printing() + space + o.roles[0].type + ", " + o.roles[1].type)
if(o.roles[0].mainVar and not o.roles[1].mainVar):
print(o.printing() + space + o.roles[0].type + " (" + o.roles[0].mainVar + ")" + ", " + o.roles[1].type)
if(not o.roles[0].mainVar and o.roles[1].mainVar):
print(o.printing() + space + o.roles[0].type + ", " + o.roles[1].type + " (" + o.roles[1].mainVar + ")")
if(o.roles[0].mainVar and o.roles[1].mainVar):
print(o.printing() + space + o.roles[0].type + " (" + o.roles[0].mainVar + ")" ", " + o.roles[1].type + " (" + o.roles[1].mainVar + ")")
if (len(o.roles) == 1):
if(o.roles[0].mainVar):
print(o.printing() + space + o.roles[0].type + " (" + o.roles[0].mainVar + ")")
else:
print(o.printing() + space + o.roles[0].type)
print("")
def printAllLoopCondVariables(listOfEveryPentads = None):
print("")
for o in listOfEveryPentads:
if (len(o.roles) > 1):
if(o.roles[0].type == "loopCondition"):
print(o.printing() + " --- LoopCondition :")
for otherVar in o.roles[0].otherVars :
print(" --> " + otherVar )
print("")
elif(o.roles[1].type == "loopCondition"):
print(o.printing() + " --- LoopCondition :")
for otherVar in o.roles[1].otherVars :
print(" --> " + otherVar )
print("")
else :
if(o.roles[0].type == "loopCondition"):
print(o.printing() + " --- LoopCondition :")
for otherVar in o.roles[0].otherVars :
print(" --> " + otherVar )
print("")
print("")
def printAllIfCondVariables(listOfEveryPentads = None):
print("")
for o in listOfEveryPentads:
if (len(o.roles) > 1):
if(o.roles[0].type == "ifCondition"):
print(o.printing() + " --- IfCondition :")
for otherVar in o.roles[0].otherVars :
print(" --> " + otherVar )
print("")
elif(o.roles[1].type == "ifCondition"):
print(o.printing() + " --- IfCondition :")
for otherVar in o.roles[1].otherVars :
print(" --> " + otherVar )
print("")
else :
if(o.roles[0].type == "ifCondition"):
print(o.printing() + " --- IfCondition :")
for otherVar in o.roles[0].otherVars :
print(" --> " + otherVar )
print("")
print("")
def printAllVarDefVariables(listOfEveryPentads = None):
print("")
for o in listOfEveryPentads:
for role in o.roles:
if(role.type == "varDefine"):
print(o.printing() + " --- varDefine of " + role.mainVar)
print("")
for otherVar in role.otherVars :
print(" \t\t\t--> " + otherVar )
print("")
print("")
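Both `addRole` and `roleStruct.__init__` above take `otherVars = []` as a default; in Python, that default list is created once at definition time and shared across every call. A minimal standalone sketch of the pitfall and the usual `None` guard (function names here are illustrative, not part of the module):

```python
def add_role(role_name, other_vars=[]):  # BUG: one shared default list
    other_vars.append(role_name)
    return other_vars

def add_role_fixed(role_name, other_vars=None):
    if other_vars is None:
        other_vars = []  # fresh list on every call
    other_vars.append(role_name)
    return other_vars

first = add_role("defVar")
second = add_role("initVar")     # silently reuses the same list object
print(second)                    # ['defVar', 'initVar']
print(add_role_fixed("defVar"))  # ['defVar']
```

Because of this, two pentads built with the buggy signature would end up sharing one `otherVars` list, so mutating one would corrupt the other.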
# simplemooc/core/views.py (leorzz/simplemooc, MIT)
from django.shortcuts import render
def home(request):
return render(request, 'home.html')
def contact(request):
return render(request, 'contact.html')
def about(request):
return render(request, 'about.html')
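These views would typically be exposed through a URLconf. A hypothetical sketch of how that wiring might look; the route paths and names below are assumptions, not taken from the repository:

```python
# Hypothetical simplemooc/urls.py sketch; paths and names are assumed.
from django.urls import path

from simplemooc.core import views

urlpatterns = [
    path('', views.home, name='home'),
    path('contact/', views.contact, name='contact'),
    path('about/', views.about, name='about'),
]
```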
# test/new_tests/test_new_list_operations.py (aerospike-client-python, Apache-2.0)
# -*- coding: utf-8 -*-
import pytest
from aerospike import exception as e

# importorskip already skips this whole module when the client is not
# installed, so no manual try/except fallback is needed.
aerospike = pytest.importorskip("aerospike")
def get_list_result_from_operation(client, key, operation, bin):
_, _, result_bins = client.operate(key, [operation])
return result_bins[bin]
class TestNewListOperations(object):
@pytest.fixture(autouse=True)
def setup(self, request, as_connection):
"""
Setup Method
"""
self.keys = []
# INDEXES 0, 1, 2, 3, 4, 05
# RINDEX 5, 4, 3, 2, 1, 0
# RANK 2, 1, 0, 3, 4, 5
# RRANK -4,-5,-6,-3,-2,-1
self.test_list = [7, 6, 5, 8, 9, 10]
self.test_key = 'test', 'demo', 'new_list_op'
self.test_bin = 'list'
self.as_connection.put(self.test_key, {self.test_bin: self.test_list})
self.keys.append(self.test_key)
yield
for key in self.keys:
try:
self.as_connection.remove(key)
except e.AerospikeError:
pass
@pytest.mark.parametrize(
"index, expected",
(
(2, 5),
(-2, 9)
))
def test_get_by_index(self, index, expected):
'''
Without a return type this should return the value
'''
operation = {
'op': aerospike.OP_LIST_GET_BY_INDEX,
'bin': 'list',
'index': index,
'return_type': aerospike.LIST_RETURN_VALUE
}
result = get_list_result_from_operation(
self.as_connection, self.test_key, operation, self.test_bin)
assert result == expected
@pytest.mark.parametrize(
"return_type, expected",
(
(aerospike.LIST_RETURN_VALUE, 5),
(aerospike.LIST_RETURN_INDEX, 2),
(aerospike.LIST_RETURN_REVERSE_INDEX, 3),
(aerospike.LIST_RETURN_RANK, 0),
(aerospike.LIST_RETURN_REVERSE_RANK, 5),
))
def test_list_get_by_index_return_types(self, return_type, expected):
'''
Without a return type this should return the value
'''
operation = {
'op': aerospike.OP_LIST_GET_BY_INDEX,
'bin': 'list',
'index': 2,
'return_type': return_type
}
result = get_list_result_from_operation(
self.as_connection, self.test_key, operation, self.test_bin)
assert result == expected
@pytest.mark.parametrize(
"index, count",
(
(0, 3),
(-2, 1),
(0, 100)
))
def test_get_by_index_range_both_present(self, index, count):
expected = self.test_list[index: index + count]
operation = {
'op': aerospike.OP_LIST_GET_BY_INDEX_RANGE,
'bin': self.test_bin,
'index': index,
'count': count,
'return_type': aerospike.LIST_RETURN_VALUE
}
result = get_list_result_from_operation(
self.as_connection, self.test_key, operation, self.test_bin)
assert result == expected
def test_get_by_index_range_no_count(self):
operation = {
'op': aerospike.OP_LIST_GET_BY_INDEX_RANGE,
'bin': self.test_bin,
'index': 2,
'return_type': aerospike.LIST_RETURN_VALUE
}
result = get_list_result_from_operation(
self.as_connection, self.test_key, operation, self.test_bin)
assert result == self.test_list[2:]
def test_get_by_index_range_inverted(self):
start = 0
count = 3
expected = self.test_list[start + count:]
operation = {
'op': aerospike.OP_LIST_GET_BY_INDEX_RANGE,
'bin': self.test_bin,
'index': start,
'count': count,
'return_type': aerospike.LIST_RETURN_VALUE,
'inverted': True
}
result = get_list_result_from_operation(
self.as_connection, self.test_key, operation, self.test_bin)
assert result == expected
@pytest.mark.parametrize(
"rank, expected",
(
(0, 5),
(-1, 10)
))
def test_get_by_rank(self, rank, expected):
'''
Without a return type this should return the value
'''
operation = {
'op': aerospike.OP_LIST_GET_BY_RANK,
'bin': 'list',
'rank': rank,
'return_type': aerospike.LIST_RETURN_VALUE
}
result = get_list_result_from_operation(
self.as_connection, self.test_key, operation, self.test_bin)
assert result == expected
@pytest.mark.parametrize(
"return_type, expected",
(
(aerospike.LIST_RETURN_VALUE, 5),
(aerospike.LIST_RETURN_INDEX, 2),
(aerospike.LIST_RETURN_REVERSE_INDEX, 3),
(aerospike.LIST_RETURN_RANK, 0),
(aerospike.LIST_RETURN_REVERSE_RANK, 5),
))
def test_list_get_by_rank_return_types(self, return_type, expected):
'''
Without a return type this should return the value
'''
operation = {
'op': aerospike.OP_LIST_GET_BY_RANK,
'bin': 'list',
'rank': 0,
'return_type': return_type
}
result = get_list_result_from_operation(
self.as_connection, self.test_key, operation, self.test_bin)
assert result == expected
@pytest.mark.parametrize(
"rank, count",
(
(0, 3),
(-2, 1),
(0, 100)
))
def test_get_by_rank_range_both_present(self, rank, count):
expected = sorted(self.test_list)[rank: rank + count]
operation = {
'op': aerospike.OP_LIST_GET_BY_RANK_RANGE,
'bin': self.test_bin,
'rank': rank,
'count': count,
'return_type': aerospike.LIST_RETURN_VALUE
}
result = get_list_result_from_operation(
self.as_connection, self.test_key, operation, self.test_bin)
assert set(result) == set(expected)
def test_get_by_rank_range_no_count(self):
operation = {
'op': aerospike.OP_LIST_GET_BY_RANK_RANGE,
'bin': self.test_bin,
'rank': 2,
'return_type': aerospike.LIST_RETURN_VALUE
}
result = get_list_result_from_operation(
self.as_connection, self.test_key, operation, self.test_bin)
assert result == sorted(self.test_list)[2:]
def test_get_by_rank_range_inverted(self):
rank_start = 0
rank_count = 3
# All of the values except for those in the specified rank range
expected = sorted(self.test_list)[rank_start + rank_count:]
operation = {
'op': aerospike.OP_LIST_GET_BY_RANK_RANGE,
'bin': self.test_bin,
'rank': rank_start,
'count': rank_count,
'return_type': aerospike.LIST_RETURN_VALUE,
'inverted': True
}
result = get_list_result_from_operation(
self.as_connection, self.test_key, operation, self.test_bin)
assert set(result) == set(expected)
def test_get_by_value_no_duplicates(self):
'''
7 is in the 0th position, so we expect [0] as the result
'''
operation = {
'op': aerospike.OP_LIST_GET_BY_VALUE,
'bin': self.test_bin,
'val': 7,
'return_type': aerospike.LIST_RETURN_INDEX
}
result = get_list_result_from_operation(
self.as_connection, self.test_key, operation, self.test_bin)
assert result == [0]
def test_get_by_value_no_duplicates_inverted(self):
'''
7 is in the 0th position, so we expect [0] as the result
'''
operation = {
'op': aerospike.OP_LIST_GET_BY_VALUE,
'bin': self.test_bin,
'val': 7,
'return_type': aerospike.LIST_RETURN_VALUE,
'inverted': True
}
result = get_list_result_from_operation(
self.as_connection, self.test_key, operation, self.test_bin)
# Every value except for 7
assert result == [6, 5, 8, 9, 10]
def test_get_by_value_with_duplicates(self):
'''
Add a list [0, 1, 0, 2, 0] with 3 0's
We expect [0, 2, 4] as the results
'''
dup_list = [0, 1, 0, 2, 0]
dup_key = 'test', 'list', 'dup'
self.keys.append(dup_key)
self.as_connection.put(dup_key, {self.test_bin: dup_list})
operation = {
'op': aerospike.OP_LIST_GET_BY_VALUE,
'bin': self.test_bin,
'val': 0,
'return_type': aerospike.LIST_RETURN_INDEX
}
result = get_list_result_from_operation(
self.as_connection, dup_key, operation, self.test_bin)
assert result == [0, 2, 4]
def test_get_by_value_list(self):
values = [7, 5, 9]
operation = {
'op': aerospike.OP_LIST_GET_BY_VALUE_LIST,
'bin': self.test_bin,
'value_list': values,
'return_type': aerospike.LIST_RETURN_INDEX
}
result = get_list_result_from_operation(
self.as_connection, self.test_key, operation, self.test_bin)
assert result == [0, 2, 4]
def test_get_by_value_list_inverted(self):
values = [7, 5, 9]
operation = {
'op': aerospike.OP_LIST_GET_BY_VALUE_LIST,
'bin': self.test_bin,
'value_list': values,
'return_type': aerospike.LIST_RETURN_VALUE,
'inverted': True
}
result = get_list_result_from_operation(
self.as_connection, self.test_key, operation, self.test_bin)
assert set(result) == set([6, 8, 10])
def test_get_by_value_range(self):
operation = {
'op': aerospike.OP_LIST_GET_BY_VALUE_RANGE,
'bin': self.test_bin,
'value_begin': 5,
'value_end': 8,
'return_type': aerospike.LIST_RETURN_INDEX
}
result = get_list_result_from_operation(
self.as_connection, self.test_key, operation, self.test_bin)
assert len(result) == 3 and set(result) == set([0, 1, 2])
def test_get_by_value_range_inverted(self):
operation = {
'op': aerospike.OP_LIST_GET_BY_VALUE_RANGE,
'bin': self.test_bin,
'value_begin': 6,
'value_end': 8,
'return_type': aerospike.LIST_RETURN_VALUE,
'inverted': True
}
result = get_list_result_from_operation(
self.as_connection, self.test_key, operation, self.test_bin)
assert len(result) == 4 and set(result) == set([5, 8, 9, 10])
# REMOVE Family of operations
def test_remove_by_index(self):
'''
Remove the 3rd item, a 5
'''
operation = {
'op': aerospike.OP_LIST_REMOVE_BY_INDEX,
'bin': 'list',
'index': 2,
'return_type': aerospike.LIST_RETURN_VALUE
}
result = get_list_result_from_operation(
self.as_connection, self.test_key, operation, self.test_bin)
assert result == 5
_, _, bins = self.as_connection.get(self.test_key)
assert bins[self.test_bin] == self.test_list[:2] + self.test_list[3:]
def test_remove_by_index_range(self):
'''
Remove the 3rd item, a 5
'''
operation = {
'op': aerospike.OP_LIST_REMOVE_BY_INDEX_RANGE,
'bin': 'list',
'index': 2,
'count': 2,
'return_type': aerospike.LIST_RETURN_VALUE
}
result = get_list_result_from_operation(
self.as_connection, self.test_key, operation, self.test_bin)
assert result == [5, 8]
_, _, bins = self.as_connection.get(self.test_key)
assert bins[self.test_bin] == self.test_list[:2] + self.test_list[4:]
def test_remove_by_index_range_inverted(self):
'''
Remove the 3rd item, a 5
'''
operation = {
'op': aerospike.OP_LIST_REMOVE_BY_INDEX_RANGE,
'bin': 'list',
'index': 2,
'count': 2,
'return_type': aerospike.LIST_RETURN_VALUE,
'inverted': True
}
result = get_list_result_from_operation(
self.as_connection, self.test_key, operation, self.test_bin)
assert set(result) == set([7, 6, 9, 10])
_, _, bins = self.as_connection.get(self.test_key)
assert bins[self.test_bin] == [5, 8]
def test_remove_by_rank(self):
'''
Remove the 3rd smallest item, a 7 at index 0
'''
operation = {
'op': aerospike.OP_LIST_REMOVE_BY_RANK,
'bin': 'list',
'rank': 2,
'return_type': aerospike.LIST_RETURN_VALUE
}
result = get_list_result_from_operation(
self.as_connection, self.test_key, operation, self.test_bin)
assert result == 7
_, _, bins = self.as_connection.get(self.test_key)
assert bins[self.test_bin] == self.test_list[1:]
def test_remove_by_rank_range(self):
'''
Remove the 3rd smallest item, a 7 at index 0
'''
operation = {
'op': aerospike.OP_LIST_REMOVE_BY_RANK_RANGE,
'bin': 'list',
'rank': 0,
'count': 3,
'return_type': aerospike.LIST_RETURN_VALUE
}
result = get_list_result_from_operation(
self.as_connection, self.test_key, operation, self.test_bin)
assert result == [7, 6, 5]
_, _, bins = self.as_connection.get(self.test_key)
assert bins[self.test_bin] == self.test_list[3:]
def test_remove_by_rank_range_inverted(self):
'''
Remove the 3rd smallest item, a 7 at index 0
'''
operation = {
'op': aerospike.OP_LIST_REMOVE_BY_RANK_RANGE,
'bin': 'list',
'rank': 0,
'count': 3,
'return_type': aerospike.LIST_RETURN_VALUE,
'inverted': True
}
result = get_list_result_from_operation(
self.as_connection, self.test_key, operation, self.test_bin)
assert set(result) == set([8, 9, 10])
_, _, bins = self.as_connection.get(self.test_key)
assert bins[self.test_bin] == self.test_list[0:3]
def test_remove_by_value_no_duplicates(self):
'''
7 is in the 0th position, so we expect [0] as the result
'''
operation = {
'op': aerospike.OP_LIST_REMOVE_BY_VALUE,
'bin': self.test_bin,
'val': 7,
'return_type': aerospike.LIST_RETURN_INDEX
}
result = get_list_result_from_operation(
self.as_connection, self.test_key, operation, self.test_bin)
assert result == [0]
_, _, bins = self.as_connection.get(self.test_key)
assert bins[self.test_bin] == self.test_list[1:]
def test_remove_by_value_inverted(self):
'''
7 is in the 0th position, so we expect [0] as the result
'''
operation = {
'op': aerospike.OP_LIST_REMOVE_BY_VALUE,
'bin': self.test_bin,
'val': 7,
'return_type': aerospike.LIST_RETURN_VALUE,
'inverted': True
}
result = get_list_result_from_operation(
self.as_connection, self.test_key, operation, self.test_bin)
assert result == [6, 5, 8, 9, 10]
_, _, bins = self.as_connection.get(self.test_key)
assert bins[self.test_bin] == self.test_list[0:1]
def test_remove_by_value_with_duplicates(self):
'''
Add a list [0, 1, 0, 2, 0] with 3 0's
We expect [0, 2, 4] as the results
'''
dup_list = [0, 1, 0, 2, 0]
dup_key = 'test', 'list', 'dup'
self.keys.append(dup_key)
self.as_connection.put(dup_key, {self.test_bin: dup_list})
operation = {
'op': aerospike.OP_LIST_REMOVE_BY_VALUE,
'bin': self.test_bin,
'val': 0,
'return_type': aerospike.LIST_RETURN_INDEX
}
result = get_list_result_from_operation(
self.as_connection, dup_key, operation, self.test_bin)
assert result == [0, 2, 4]
_, _, bins = self.as_connection.get(dup_key)
assert bins[self.test_bin] == [1, 2]
def test_remove_by_value_list(self):
values = [7, 5, 9]
operation = {
'op': aerospike.OP_LIST_REMOVE_BY_VALUE_LIST,
'bin': self.test_bin,
'value_list': values,
'return_type': aerospike.LIST_RETURN_INDEX
}
result = get_list_result_from_operation(
self.as_connection, self.test_key, operation, self.test_bin)
assert result == [0, 2, 4]
_, _, bins = self.as_connection.get(self.test_key)
assert bins[self.test_bin] == [6, 8, 10]
def test_remove_by_value_list_inverted(self):
values = [7, 5, 9]
operation = {
'op': aerospike.OP_LIST_REMOVE_BY_VALUE_LIST,
'bin': self.test_bin,
'value_list': values,
'return_type': aerospike.LIST_RETURN_VALUE,
'inverted': True
}
result = get_list_result_from_operation(
self.as_connection, self.test_key, operation, self.test_bin)
assert set(result) == set([6, 8, 10])
_, _, bins = self.as_connection.get(self.test_key)
assert bins[self.test_bin] == [7, 5, 9]
def test_remove_by_value_range(self):
operation = {
'op': aerospike.OP_LIST_REMOVE_BY_VALUE_RANGE,
'bin': self.test_bin,
'value_begin': 5,
'value_end': 8,
'return_type': aerospike.LIST_RETURN_INDEX
}
result = get_list_result_from_operation(
self.as_connection, self.test_key, operation, self.test_bin)
assert len(result) == 3 and set(result) == set([0, 1, 2])
_, _, bins = self.as_connection.get(self.test_key)
assert bins[self.test_bin] == [8, 9, 10]
def test_remove_by_value_range_inverted(self):
operation = {
'op': aerospike.OP_LIST_REMOVE_BY_VALUE_RANGE,
'bin': self.test_bin,
'value_begin': 6,
'value_end': 8,
'return_type': aerospike.LIST_RETURN_VALUE,
'inverted': True
}
result = get_list_result_from_operation(
self.as_connection, self.test_key, operation, self.test_bin)
assert len(result) == 4 and set(result) == set([5, 8, 9, 10])
_, _, bins = self.as_connection.get(self.test_key)
assert bins[self.test_bin] == [7, 6]
def test_remove_by_value_range_no_begin(self):
operation = {
'op': aerospike.OP_LIST_REMOVE_BY_VALUE_RANGE,
'bin': self.test_bin,
'value_end': 8,
'return_type': aerospike.LIST_RETURN_INDEX
}
result = get_list_result_from_operation(
self.as_connection, self.test_key, operation, self.test_bin)
assert len(result) == 3 and set(result) == set([0, 1, 2])
_, _, bins = self.as_connection.get(self.test_key)
assert bins[self.test_bin] == [8, 9, 10]
def test_remove_by_value_range_no_end(self):
operation = {
'op': aerospike.OP_LIST_REMOVE_BY_VALUE_RANGE,
'bin': self.test_bin,
'value_begin': 7,
'return_type': aerospike.LIST_RETURN_INDEX
}
result = get_list_result_from_operation(
self.as_connection, self.test_key, operation, self.test_bin)
assert len(result) == 4 and set(result) == set([0, 3, 4, 5])
_, _, bins = self.as_connection.get(self.test_key)
assert bins[self.test_bin] == [6, 5]
def test_list_set_order(self):
operation = {
'op': aerospike.OP_LIST_SET_ORDER,
'list_order': aerospike.LIST_ORDERED,
'bin': self.test_bin
}
self.as_connection.operate(self.test_key, [operation])
_, _, bins = self.as_connection.get(self.test_key)
assert bins[self.test_bin] == sorted(self.test_list)
def test_list_sort(self):
unsorted_dups = [2, 5, 2, 5]
sort_key = 'test', 'demo', 'dup_list'
self.keys.append(sort_key)
self.as_connection.put(sort_key, {self.test_bin: unsorted_dups})
operation = {
'op': aerospike.OP_LIST_SORT,
'sort_flags': aerospike.LIST_SORT_DROP_DUPLICATES,
'bin': self.test_bin
}
self.as_connection.operate(sort_key, [operation])
_, _, bins = self.as_connection.get(sort_key)
        assert bins[self.test_bin] == [2, 5]
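The index/rank bookkeeping in the fixture comments (INDEXES, RINDEX, RANK, RRANK) can be checked in plain Python, with no Aerospike server; this mirrors the expectations in the return-type tests above for the value 5:

```python
test_list = [7, 6, 5, 8, 9, 10]

value = 5
index = test_list.index(value)               # 2
reverse_index = len(test_list) - 1 - index   # 3, counted from the end
rank = sorted(test_list).index(value)        # 0, position when sorted
reverse_rank = len(test_list) - 1 - rank     # 5

print(index, reverse_index, rank, reverse_rank)  # 2 3 0 5
```

These four numbers are exactly the expected results parametrized in `test_list_get_by_index_return_types` and `test_list_get_by_rank_return_types`.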
# tests/molecular/molecules/molecule/fixtures/cage/two_plus_four/__init__.py (stevenbennett96/stk, MIT)
from .eight_plus_sixteen import * # noqa
from .five_plus_ten import * # noqa
from .four_plus_eight import * # noqa
from .six_plus_twelve import * # noqa
from .ten_plus_twenty import * # noqa
from .three_plus_six import * # noqa
from .two_plus_four import * # noqa
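The star imports above make every public name from each fixture submodule available at package level. A small self-contained illustration of that mechanism, using an in-memory stand-in module rather than the real fixture files:

```python
import types

# Stand-in for a fixture submodule; the real ones live next to __init__.py.
three_plus_six = types.ModuleType("three_plus_six")
three_plus_six.cage_fixture = lambda: "3+6 cage"
three_plus_six._private = "not re-exported"

# Roughly what `from .three_plus_six import *` does when no __all__
# is defined: copy every non-underscore name into the importer.
package_namespace = {}
for name in dir(three_plus_six):
    if not name.startswith("_"):
        package_namespace[name] = getattr(three_plus_six, name)

print(sorted(package_namespace))             # ['cage_fixture']
print(package_namespace["cage_fixture"]())   # 3+6 cage
```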
# pkg/rules_purescript/tests/rules_tests.bzl (joneshf/purs-tools, BSD-3-Clause)
"""
Tests for validating rules behavior.
"""
load(
"@bazel_skylib//lib:paths.bzl",
"paths",
)
load(
"@bazel_skylib//lib:unittest.bzl",
"analysistest",
"asserts",
)
load(
"//internal:rules.bzl",
"purescript_binary",
"purescript_library",
"purescript_package",
)
load(
":list_helpers.bzl",
"contains",
"find_action",
)
def _purescript_binary_works_with_only_purescript_implementation_test(ctx):
"""
Test to verify that compiled PureScript files generate the correct actions.
"""
env = analysistest.begin(ctx)
actions = analysistest.target_actions(env)
purs_compile_module_action = find_action(env, actions, "PursCompileModule")
inputs = [input.basename for input in purs_compile_module_action.inputs.to_list()]
asserts.equals(env, 2, len(inputs))
contains(env, inputs, "purs-compile-module", "Expected purs-compile-module to be an input")
contains(env, inputs, "PureScriptOnly.purs", "Expected PureScriptOnly.purs to be an input")
outputs = [output.basename for output in purs_compile_module_action.outputs.to_list()]
asserts.equals(env, 1, len(outputs))
contains(env, outputs, "index.js", "Expected index.js to be an output")
argv = purs_compile_module_action.argv
contains(env, argv, "--output-javascript-file", "Expected --output-javascript-file to be an argument")
contains(env, argv, "--purs-file", "Expected --purs-file to be an argument")
purs_bundle_action = find_action(env, actions, "PursBundle")
inputs = [input.basename for input in purs_bundle_action.inputs.to_list()]
asserts.equals(env, 2, len(inputs))
contains(env, inputs, "purs", "Expected purs to be an input")
contains(env, inputs, "index.js", "Expected index.js to be an input")
outputs = [output.basename for output in purs_bundle_action.outputs.to_list()]
asserts.equals(env, 1, len(outputs))
contains(env, outputs, "purescript_binary_works_with_only_purescript_fake_target.js", "Expected purescript_binary_works_with_only_purescript_fake_target.js to be an output")
argv = purs_bundle_action.argv
contains(env, argv, "--main", "Expected --main to be an argument")
contains(env, argv, "--module", "Expected --module to be an argument")
contains(env, argv, "--output", "Expected --output to be an argument")
return analysistest.end(env)
_purescript_binary_works_with_only_purescript_test = analysistest.make(
_purescript_binary_works_with_only_purescript_implementation_test,
)
def _purescript_binary_works_with_purescript_and_ffi_implementation_test(ctx):
"""
Test to verify that both compiled PureScript and FFI files generate the correct actions.
"""
env = analysistest.begin(ctx)
actions = analysistest.target_actions(env)
purs_compile_module_action = find_action(env, actions, "PursCompileModule")
inputs = [input.basename for input in purs_compile_module_action.inputs.to_list()]
asserts.equals(env, 3, len(inputs))
contains(env, inputs, "purs-compile-module", "Expected purs-compile-module to be an input")
contains(env, inputs, "PureScriptAndFFI.js", "Expected PureScriptAndFFI.js to be an input")
contains(env, inputs, "PureScriptAndFFI.purs", "Expected PureScriptAndFFI.purs to be an input")
outputs = [output.basename for output in purs_compile_module_action.outputs.to_list()]
asserts.equals(env, 2, len(outputs))
contains(env, outputs, "foreign.js", "Expected foreign.js to be an output")
contains(env, outputs, "index.js", "Expected index.js to be an output")
argv = purs_compile_module_action.argv
contains(env, argv, "--input-ffi-file", "Expected --input-ffi-file to be an argument")
contains(env, argv, "--output-ffi-file", "Expected --output-ffi-file to be an argument")
contains(env, argv, "--output-javascript-file", "Expected --output-javascript-file to be an argument")
contains(env, argv, "--purs-file", "Expected --purs-file to be an argument")
purs_bundle_action = find_action(env, actions, "PursBundle")
inputs = [input.basename for input in purs_bundle_action.inputs.to_list()]
asserts.equals(env, 3, len(inputs))
contains(env, inputs, "purs", "Expected purs to be an input")
contains(env, inputs, "foreign.js", "Expected foreign.js to be an input")
contains(env, inputs, "index.js", "Expected index.js to be an input")
outputs = [output.basename for output in purs_bundle_action.outputs.to_list()]
asserts.equals(env, 1, len(outputs))
contains(env, outputs, "purescript_binary_works_with_purescript_and_ffi_fake_target.js", "Expected purescript_binary_works_with_purescript_and_ffi_fake_target.js to be an output")
argv = purs_bundle_action.argv
contains(env, argv, "--main", "Expected --main to be an argument")
contains(env, argv, "--module", "Expected --module to be an argument")
contains(env, argv, "--output", "Expected --output to be an argument")
return analysistest.end(env)
_purescript_binary_works_with_purescript_and_ffi_test = analysistest.make(
_purescript_binary_works_with_purescript_and_ffi_implementation_test,
)
def _purescript_binary_works_with_dependencies_implementation_test(ctx):
"""
    Test to verify that PureScript files compiled with dependencies generate the correct actions.
"""
env = analysistest.begin(ctx)
actions = analysistest.target_actions(env)
purs_compile_module_action = find_action(env, actions, "PursCompileModule")
inputs = [input.basename for input in purs_compile_module_action.inputs.to_list()]
asserts.equals(env, 3, len(inputs))
contains(env, inputs, "purs-compile-module", "Expected purs-compile-module to be an input")
contains(env, inputs, "Foo.purs", "Expected Foo.purs to be an input")
contains(env, inputs, "signature-externs.cbor", "Expected signature-externs.cbor to be an input")
outputs = [output.basename for output in purs_compile_module_action.outputs.to_list()]
asserts.equals(env, 1, len(outputs))
contains(env, outputs, "index.js", "Expected index.js to be an output")
argv = purs_compile_module_action.argv
contains(env, argv, "--output-javascript-file", "Expected --output-javascript-file to be an argument")
contains(env, argv, "--purs-file", "Expected --purs-file to be an argument")
purs_bundle_action = find_action(env, actions, "PursBundle")
inputs = []
for input in purs_bundle_action.inputs.to_list():
inputs.append(paths.join(paths.basename(input.dirname), input.basename))
asserts.equals(env, 3, len(inputs))
# The repository can change depending on where the tests are run.
# Only check the binary name.
    contains(env, [input.basename for input in purs_bundle_action.inputs.to_list()], "purs", "Expected purs to be an input")
contains(env, inputs, "Foo/index.js", "Expected Foo/index.js to be an input")
contains(env, inputs, "Bar/index.js", "Expected Bar/index.js to be an input")
outputs = [output.basename for output in purs_bundle_action.outputs.to_list()]
asserts.equals(env, 1, len(outputs))
contains(env, outputs, "purescript_binary_works_with_dependencies_foo_fake_target.js", "Expected purescript_binary_works_with_dependencies_foo_fake_target.js to be an output")
argv = purs_bundle_action.argv
contains(env, argv, "--main", "Expected --main to be an argument")
contains(env, argv, "--module", "Expected --module to be an argument")
contains(env, argv, "--output", "Expected --output to be an argument")
return analysistest.end(env)
_purescript_binary_works_with_dependencies_test = analysistest.make(
_purescript_binary_works_with_dependencies_implementation_test,
)
def _purescript_library_works_with_only_purescript_implementation_test(ctx):
"""
Test to verify that compiled PureScript files generate the correct actions.
"""
env = analysistest.begin(ctx)
actions = analysistest.target_actions(env)
purs_compile_module_action = find_action(env, actions, "PursCompileModule")
inputs = [input.basename for input in purs_compile_module_action.inputs.to_list()]
asserts.equals(env, 2, len(inputs))
contains(env, inputs, "purs-compile-module", "Expected purs-compile-module to be an input")
contains(env, inputs, "PureScriptOnly.purs", "Expected PureScriptOnly.purs to be an input")
outputs = [output.basename for output in purs_compile_module_action.outputs.to_list()]
asserts.equals(env, 3, len(outputs))
contains(env, outputs, "index.js", "Expected index.js to be an output")
contains(env, outputs, "signature-externs.cbor", "Expected signature-externs.cbor to be an output")
contains(env, outputs, "standard-externs.cbor", "Expected standard-externs.cbor to be an output")
argv = purs_compile_module_action.argv
contains(env, argv, "--output-javascript-file", "Expected --output-javascript-file to be an argument")
contains(env, argv, "--output-signature-externs-file", "Expected --output-signature-externs-file to be an argument")
contains(env, argv, "--output-standard-externs-file", "Expected --output-standard-externs-file to be an argument")
contains(env, argv, "--purs-file", "Expected --purs-file to be an argument")
return analysistest.end(env)
_purescript_library_works_with_only_purescript_test = analysistest.make(
_purescript_library_works_with_only_purescript_implementation_test,
)
def _purescript_library_works_with_purescript_and_ffi_implementation_test(ctx):
"""
Test to verify that both compiled PureScript and FFI files generate the correct actions.
"""
env = analysistest.begin(ctx)
actions = analysistest.target_actions(env)
purs_compile_module_action = find_action(env, actions, "PursCompileModule")
inputs = [input.basename for input in purs_compile_module_action.inputs.to_list()]
asserts.equals(env, 3, len(inputs))
contains(env, inputs, "purs-compile-module", "Expected purs-compile-module to be an input")
contains(env, inputs, "PureScriptAndFFI.js", "Expected PureScriptAndFFI.js to be an input")
contains(env, inputs, "PureScriptAndFFI.purs", "Expected PureScriptAndFFI.purs to be an input")
outputs = [output.basename for output in purs_compile_module_action.outputs.to_list()]
asserts.equals(env, 4, len(outputs))
contains(env, outputs, "foreign.js", "Expected foreign.js to be an output")
contains(env, outputs, "index.js", "Expected index.js to be an output")
contains(env, outputs, "signature-externs.cbor", "Expected signature-externs.cbor to be an output")
contains(env, outputs, "standard-externs.cbor", "Expected standard-externs.cbor to be an output")
argv = purs_compile_module_action.argv
contains(env, argv, "--input-ffi-file", "Expected --input-ffi-file to be an argument")
contains(env, argv, "--output-ffi-file", "Expected --output-ffi-file to be an argument")
contains(env, argv, "--output-javascript-file", "Expected --output-javascript-file to be an argument")
contains(env, argv, "--output-signature-externs-file", "Expected --output-signature-externs-file to be an argument")
contains(env, argv, "--output-standard-externs-file", "Expected --output-standard-externs-file to be an argument")
contains(env, argv, "--purs-file", "Expected --purs-file to be an argument")
return analysistest.end(env)
_purescript_library_works_with_purescript_and_ffi_test = analysistest.make(
_purescript_library_works_with_purescript_and_ffi_implementation_test,
)
def _purescript_library_works_with_dependencies_implementation_test(ctx):
"""
    Test to verify that PureScript files compiled with dependencies generate the correct actions.
"""
env = analysistest.begin(ctx)
actions = analysistest.target_actions(env)
purs_compile_module_action = find_action(env, actions, "PursCompileModule")
inputs = [input.basename for input in purs_compile_module_action.inputs.to_list()]
asserts.equals(env, 3, len(inputs))
contains(env, inputs, "purs-compile-module", "Expected purs-compile-module to be an input")
contains(env, inputs, "Foo.purs", "Expected Foo.purs to be an input")
contains(env, inputs, "signature-externs.cbor", "Expected signature-externs.cbor to be an input")
outputs = [output.basename for output in purs_compile_module_action.outputs.to_list()]
asserts.equals(env, 3, len(outputs))
contains(env, outputs, "index.js", "Expected index.js to be an output")
contains(env, outputs, "signature-externs.cbor", "Expected signature-externs.cbor to be an output")
contains(env, outputs, "standard-externs.cbor", "Expected standard-externs.cbor to be an output")
argv = purs_compile_module_action.argv
contains(env, argv, "--output-javascript-file", "Expected --output-javascript-file to be an argument")
contains(env, argv, "--output-signature-externs-file", "Expected --output-signature-externs-file to be an argument")
contains(env, argv, "--output-standard-externs-file", "Expected --output-standard-externs-file to be an argument")
contains(env, argv, "--purs-file", "Expected --purs-file to be an argument")
return analysistest.end(env)
_purescript_library_works_with_dependencies_test = analysistest.make(
_purescript_library_works_with_dependencies_implementation_test,
)
def _purescript_package_works_with_only_purescript_implementation_test(ctx):
"""
Test to verify that compiled PureScript files generate the correct actions.
"""
env = analysistest.begin(ctx)
actions = analysistest.target_actions(env)
purs_compile_action = find_action(env, actions, "PursCompile")
inputs = [input.basename for input in purs_compile_action.inputs.to_list()]
asserts.equals(env, 2, len(inputs))
contains(env, inputs, "purs-compile", "Expected purs-compile to be an input")
contains(env, inputs, "PureScriptOnly.purs", "Expected PureScriptOnly.purs to be an input")
outputs = [output.basename for output in purs_compile_action.outputs.to_list()]
asserts.equals(env, 1, len(outputs))
contains(env, outputs, "output-purescript_package_works_with_only_purescript_fake_target", "Expected output-purescript_package_works_with_only_purescript_fake_target to be an output")
argv = purs_compile_action.argv
contains(env, argv, "--output", "Expected --output to be an argument")
return analysistest.end(env)
_purescript_package_works_with_only_purescript_test = analysistest.make(
_purescript_package_works_with_only_purescript_implementation_test,
)
def _purescript_package_works_with_purescript_and_ffi_implementation_test(ctx):
"""
Test to verify that both compiled PureScript and FFI files generate the correct actions.
"""
env = analysistest.begin(ctx)
actions = analysistest.target_actions(env)
purs_compile_action = find_action(env, actions, "PursCompile")
inputs = [input.basename for input in purs_compile_action.inputs.to_list()]
asserts.equals(env, 3, len(inputs))
contains(env, inputs, "purs-compile", "Expected purs-compile to be an input")
contains(env, inputs, "PureScriptAndFFI.js", "Expected PureScriptAndFFI.js to be an input")
contains(env, inputs, "PureScriptAndFFI.purs", "Expected PureScriptAndFFI.purs to be an input")
outputs = [output.basename for output in purs_compile_action.outputs.to_list()]
asserts.equals(env, 1, len(outputs))
contains(env, outputs, "output-purescript_package_works_with_purescript_and_ffi_fake_target", "Expected output-purescript_package_works_with_purescript_and_ffi_fake_target to be an output")
argv = purs_compile_action.argv
contains(env, argv, "--output", "Expected --output to be an argument")
return analysistest.end(env)
_purescript_package_works_with_purescript_and_ffi_test = analysistest.make(
_purescript_package_works_with_purescript_and_ffi_implementation_test,
)
def _purescript_package_works_with_dependencies_implementation_test(ctx):
"""
    Test to verify that PureScript files compiled with dependencies generate the correct actions.
"""
env = analysistest.begin(ctx)
actions = analysistest.target_actions(env)
purs_compile_action = find_action(env, actions, "PursCompile")
inputs = [input.basename for input in purs_compile_action.inputs.to_list()]
asserts.equals(env, 3, len(inputs))
contains(env, inputs, "purs-compile", "Expected purs-compile to be an input")
contains(env, inputs, "Foo.purs", "Expected Foo.purs to be an input")
contains(env, inputs, "output-purescript_package_works_with_dependencies_bar_fake_target", "Expected output-purescript_package_works_with_dependencies_bar_fake_target to be an input")
outputs = [output.basename for output in purs_compile_action.outputs.to_list()]
asserts.equals(env, 1, len(outputs))
contains(env, outputs, "output-purescript_package_works_with_dependencies_foo_fake_target", "Expected output-purescript_package_works_with_dependencies_foo_fake_target to be an output")
argv = purs_compile_action.argv
contains(env, argv, "--output", "Expected --output to be an argument")
contains(env, argv, "--include", "Expected --include to be an argument")
return analysistest.end(env)
_purescript_package_works_with_dependencies_test = analysistest.make(
_purescript_package_works_with_dependencies_implementation_test,
)
def purescript_binary_tests_suite(name):
"""
A suite of tests around purescript_binary.
Args:
name: A unique name for this target.
"""
_purescript_binary_works_with_only_purescript_test(
name = "purescript_binary_works_with_only_purescript_test",
target_under_test = ":purescript_binary_works_with_only_purescript_fake_target",
)
purescript_binary(
name = "purescript_binary_works_with_only_purescript_fake_target",
module = "PureScriptOnly",
src = "PureScriptOnly.purs",
tags = [
"manual",
],
)
_purescript_binary_works_with_purescript_and_ffi_test(
name = "purescript_binary_works_with_purescript_and_ffi_test",
target_under_test = ":purescript_binary_works_with_purescript_and_ffi_fake_target",
)
purescript_binary(
name = "purescript_binary_works_with_purescript_and_ffi_fake_target",
ffi = "PureScriptAndFFI.js",
module = "PureScriptAndFFI",
src = "PureScriptAndFFI.purs",
tags = [
"manual",
],
)
_purescript_binary_works_with_dependencies_test(
name = "purescript_binary_works_with_dependencies_test",
target_under_test = ":purescript_binary_works_with_dependencies_foo_fake_target",
)
purescript_binary(
name = "purescript_binary_works_with_dependencies_foo_fake_target",
module = "Foo",
src = "Foo.purs",
deps = [
":purescript_binary_works_with_dependencies_bar_fake_target",
],
tags = [
"manual",
],
)
purescript_library(
name = "purescript_binary_works_with_dependencies_bar_fake_target",
module = "Bar",
src = "Bar.purs",
tags = [
"manual",
],
)
def purescript_library_tests_suite(name):
"""
A suite of tests around purescript_library.
Args:
name: A unique name for this target.
"""
_purescript_library_works_with_only_purescript_test(
name = "purescript_library_works_with_only_purescript_test",
target_under_test = ":purescript_library_works_with_only_purescript_fake_target",
)
purescript_library(
name = "purescript_library_works_with_only_purescript_fake_target",
module = "PureScriptOnly",
src = "PureScriptOnly.purs",
tags = [
"manual",
],
)
_purescript_library_works_with_purescript_and_ffi_test(
name = "purescript_library_works_with_purescript_and_ffi_test",
target_under_test = ":purescript_library_works_with_purescript_and_ffi_fake_target",
)
purescript_library(
name = "purescript_library_works_with_purescript_and_ffi_fake_target",
ffi = "PureScriptAndFFI.js",
module = "PureScriptAndFFI",
src = "PureScriptAndFFI.purs",
tags = [
"manual",
],
)
_purescript_library_works_with_dependencies_test(
name = "purescript_library_works_with_dependencies_test",
target_under_test = ":purescript_library_works_with_dependencies_foo_fake_target",
)
purescript_library(
name = "purescript_library_works_with_dependencies_foo_fake_target",
module = "Foo",
src = "Foo.purs",
deps = [
":purescript_library_works_with_dependencies_bar_fake_target",
],
tags = [
"manual",
],
)
purescript_library(
name = "purescript_library_works_with_dependencies_bar_fake_target",
module = "Bar",
src = "Bar.purs",
tags = [
"manual",
],
)
def purescript_package_tests_suite(name):
"""
A suite of tests around purescript_package.
Args:
name: A unique name for this target.
"""
_purescript_package_works_with_only_purescript_test(
name = "purescript_package_works_with_only_purescript_test",
target_under_test = ":purescript_package_works_with_only_purescript_fake_target",
)
purescript_package(
name = "purescript_package_works_with_only_purescript_fake_target",
srcs = [
"PureScriptOnly.purs",
],
tags = [
"manual",
],
)
_purescript_package_works_with_purescript_and_ffi_test(
name = "purescript_package_works_with_purescript_and_ffi_test",
target_under_test = ":purescript_package_works_with_purescript_and_ffi_fake_target",
)
purescript_package(
name = "purescript_package_works_with_purescript_and_ffi_fake_target",
ffis = [
"PureScriptAndFFI.js",
],
srcs = [
"PureScriptAndFFI.purs",
],
tags = [
"manual",
],
)
_purescript_package_works_with_dependencies_test(
name = "purescript_package_works_with_dependencies_test",
target_under_test = ":purescript_package_works_with_dependencies_foo_fake_target",
)
purescript_package(
name = "purescript_package_works_with_dependencies_foo_fake_target",
srcs = [
"Foo.purs",
],
deps = [
":purescript_package_works_with_dependencies_bar_fake_target",
],
tags = [
"manual",
],
)
purescript_package(
name = "purescript_package_works_with_dependencies_bar_fake_target",
srcs = [
"Bar.purs",
],
tags = [
"manual",
],
)
# tumblelog/admin.py (matthewbdaly/djumblr, MIT)
from django.contrib import admin
from . import models
# Register your models here.
admin.site.register(models.TextPost)
admin.site.register(models.ImagePost)
admin.site.register(models.LinkPost)
# src/api/tests/test_deviceprofile.py (massenergize/api, MIT)
This is the test file for device profiles
"""
from django.test import TestCase, Client
from six import print_
from database.models import DeviceProfile, Community, CommunityAdminGroup
import json
from urllib.parse import urlencode
from api.tests.common import signinAs, setupCC, createUsers
class DeviceHandlerTest(TestCase):
@classmethod
def setUpClass(self):
print("\n---> Testing Devices <---\n")
self.client = Client()
self.USER, self.CADMIN, self.SADMIN = createUsers()
signinAs(self.client, self.SADMIN)
setupCC(self.client)
signinAs(self.client, None)
        devices = []
        for _ in range(4):
            create_response = self.client.post('/api/device.create', urlencode({}), content_type="application/x-www-form-urlencoded").toDict()
            devices.append(DeviceProfile.objects.filter(id=create_response["data"]["id"]).first())
        self.device, self.device1, self.device2, self.device3 = devices
@classmethod
def tearDownClass(self):
pass
def setUp(self):
# this gets run on every test case
pass
def test_create(self):
# test create not logged in
signinAs(self.client, None)
create_response = self.client.post('/api/device.create', urlencode({}), content_type="application/x-www-form-urlencoded").toDict()
self.assertTrue(create_response["success"])
delete_response = self.client.post('/api/device.delete', urlencode({"device_id": create_response["data"]["id"]}), content_type="application/x-www-form-urlencoded").toDict()
# test create logged as user
signinAs(self.client, self.USER)
create_response = self.client.post('/api/device.create', urlencode({}), content_type="application/x-www-form-urlencoded").toDict()
self.assertTrue(create_response["success"])
delete_response = self.client.post('/api/device.delete', urlencode({"device_id": create_response["data"]["id"]}), content_type="application/x-www-form-urlencoded").toDict()
# test create logged as cadmin
signinAs(self.client, self.CADMIN)
create_response = self.client.post('/api/device.create', urlencode({}), content_type="application/x-www-form-urlencoded").toDict()
self.assertTrue(create_response["success"])
delete_response = self.client.post('/api/device.delete', urlencode({"device_id": create_response["data"]["id"]}), content_type="application/x-www-form-urlencoded").toDict()
# test create logged as sadmin
signinAs(self.client, self.SADMIN)
create_response = self.client.post('/api/device.create', urlencode({}), content_type="application/x-www-form-urlencoded").toDict()
self.assertTrue(create_response["success"])
delete_response = self.client.post('/api/device.delete', urlencode({"device_id": create_response["data"]["id"]}), content_type="application/x-www-form-urlencoded").toDict()
def test_info(self):
# test info not logged in
signinAs(self.client, None)
info_response = self.client.post('/api/device.info', urlencode({"id": self.device.id}), content_type="application/x-www-form-urlencoded").toDict()
self.assertTrue(info_response["success"])
# test info logged as user
signinAs(self.client, self.USER)
info_response = self.client.post('/api/device.info', urlencode({"id": self.device.id}), content_type="application/x-www-form-urlencoded").toDict()
self.assertTrue(info_response["success"])
# test info logged as cadmin
signinAs(self.client, self.CADMIN)
info_response = self.client.post('/api/device.info', urlencode({"id": self.device.id}), content_type="application/x-www-form-urlencoded").toDict()
self.assertTrue(info_response["success"])
# test info logged as sadmin
signinAs(self.client, self.SADMIN)
info_response = self.client.post('/api/device.info', urlencode({"id": self.device.id}), content_type="application/x-www-form-urlencoded").toDict()
self.assertTrue(info_response["success"])
# def test_update(self): # Currently update does nothing since data is grabbed from the back end
# # test update not signed in
# signinAs(self.client, None)
# update_response = self.client.post('/api/device.update', urlencode({"device_id": self.device.id}), content_type="application/x-www-form-urlencoded").toDict()
# self.assertTrue(update_response["success"])
#
# # test update signed as user
# signinAs(self.client, self.USER)
# update_response = self.client.post('/api/device.update', urlencode({"device_id": self.device.id}), content_type="application/x-www-form-urlencoded").toDict()
# self.assertTrue(update_response["success"])
#
# # test update as cadmin
# signinAs(self.client, self.CADMIN)
# update_response = self.client.post('/api/device.update', urlencode({"device_id": self.device.id}), content_type="application/x-www-form-urlencoded").toDict()
# self.assertTrue(update_response["success"])
# self.assertEquals(update_response["data"]["title"], "cadmin_title")
#
# # test update as sadmin
# signinAs(self.client, self.SADMIN)
# update_response = self.client.post('/api/device.update', urlencode({"device_id": self.device.id}), content_type="application/x-www-form-urlencoded").toDict()
# self.assertTrue(update_response["success"])
# self.assertEquals(update_response["data"]["title"], "sadmin_title")
def test_device_log(self):
visit_log = self.device.visit_log
visit_logs = len(visit_log)
# test update not signed in
signinAs(self.client, None)
log_response = self.client.post('/api/device.log', urlencode({"device_id": self.device.id}), content_type="application/x-www-form-urlencoded").toDict()
visit_logs += 1
self.assertTrue(log_response["success"])
# self.assertEquals(len(log_response["data"]["visit_log"]), visit_logs)
def test_user_log(self):
visit_log = self.device1.visit_log
visit_logs = len(visit_log)
# test log signed as user
signinAs(self.client, self.USER)
log_response = self.client.post('/api/device.log', urlencode({"device_id": self.device1.id}), content_type="application/x-www-form-urlencoded").toDict()
visit_logs += 1
self.assertTrue(log_response["success"])
# self.assertEquals(len(log_response["data"]["visit_log"]), visit_logs)
visit_log = self.device2.visit_log
visit_logs = len(visit_log)
# test log as cadmin
signinAs(self.client, self.CADMIN)
log_response = self.client.post('/api/device.log', urlencode({"device_id": self.device2.id}), content_type="application/x-www-form-urlencoded").toDict()
visit_logs += 1
self.assertTrue(log_response["success"])
# self.assertEquals(len(log_response["data"]["visit_log"]), visit_logs)
visit_log = self.device3.visit_log
visit_logs = len(visit_log)
# test log as sadmin
signinAs(self.client, self.SADMIN)
log_response = self.client.post('/api/device.log', urlencode({"device_id": self.device3.id}), content_type="application/x-www-form-urlencoded").toDict()
visit_logs += 1
self.assertTrue(log_response["success"])
# self.assertEquals(len(log_response["data"]["visit_log"]), visit_logs)
# device object has no attribute first?
def test_delete(self):
# test not signed in
signinAs(self.client, None)
delete_response = self.client.post('/api/device.delete', urlencode({"device_id": self.device.id}), content_type="application/x-www-form-urlencoded").toDict()
self.assertTrue(delete_response["success"])
self.assertTrue(delete_response["data"]["is_deleted"])
# test as user
signinAs(self.client, self.USER)
delete_response = self.client.post('/api/device.delete', urlencode({"device_id": self.device1.id}), content_type="application/x-www-form-urlencoded").toDict()
self.assertTrue(delete_response["success"])
self.assertTrue(delete_response["data"]["is_deleted"])
# test as cadmin
signinAs(self.client, self.CADMIN)
delete_response = self.client.post('/api/device.delete', urlencode({"device_id": self.device2.id}), content_type="application/x-www-form-urlencoded").toDict()
self.assertTrue(delete_response["success"])
self.assertTrue(delete_response["data"]["is_deleted"])
# test as sadmin
signinAs(self.client, self.SADMIN)
delete_response = self.client.post('/api/device.delete', urlencode({"device_id": self.device3.id}), content_type="application/x-www-form-urlencoded").toDict()
self.assertTrue(delete_response["success"])
self.assertTrue(delete_response["data"]["is_deleted"])
# test no device_id
delete_response = self.client.post('/api/device.delete', urlencode({}), content_type="application/x-www-form-urlencoded").toDict()
self.assertFalse(delete_response["success"])
# python/node_store.py (holtzermann17/FloWrTester, CC0-1.0)
| 775 | 1,549 | 0.740645 | 178 | 1,550 | 6.438202 | 0.516854 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002053 | 0.057419 | 1,550 | 1 | 1,550 | 1,550 | 0.782341 | 0 | 0 | 0 | 0 | 0 | 0.677419 | 0.097419 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
# src/test/resources/rocks/juergen/maven/jythonplugin/package-test/main.py (edelbluth/jython-maven-plugin, Apache-2.0)
if __name__ == '__main__':
sub_package_func('Hello')
# networks/PyTorch/jojo.py
from PyTorchLayers.maxout_dynamic import *
from PyTorchLayers.octoconv import *
from PyTorchLayers.CorrelationImages import *
from networks.PyTorch.attentionModule import *
from networks.PyTorch.normActive import *
from scipy import stats
import math
import torch
import torch.nn as nn
def calculateMaxPoolingSize(inputsize, padding, dilation, kernel, stride):
    return math.floor(((inputsize + (2 * padding) - dilation * (kernel - 1) - 1) / stride) + 1)
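Hedged sanity check: a standalone copy of the same output-size formula (pure Python, example shapes chosen from layer configurations that appear later in this file):

```python
import math

def out_size(inputsize, padding, dilation, kernel, stride):
    # floor((n + 2p - d*(k-1) - 1) / s) + 1, the same formula as calculateMaxPoolingSize
    return math.floor(((inputsize + (2 * padding) - dilation * (kernel - 1) - 1) / stride) + 1)

# Conv2d(kernel_size=8, stride=4, padding=2) on a 100x100 input -> 25x25
print(out_size(100, 2, 1, 8, 4))  # 25
# MaxPool2d(kernel_size=3, stride=2) on a 25x25 map -> 12x12
print(out_size(25, 0, 1, 3, 2))   # 12
```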
class FusingNetwork(nn.Module):
def __init__(self,featureSize,classes):
super(FusingNetwork,self).__init__()
self.classifier = nn.Sequential(
MaxoutDynamic(featureSize, featureSize),
nn.Dropout(),
nn.Linear(featureSize, 2048),
)
self.softmax = nn.Sequential(
nn.ReLU(inplace=True),
MaxoutDynamic(1024, 2048),
nn.Linear(2048, classes)
)
self.avgFuse = AverageFusion(featureSize)
#self.maxFuse = MaxFusion(featureSize)
#self.catFuse = ConcatFusion(featureSize)
def forward(self, x):
#outFeatures = self.catFuse(self.avgFuse(x[0].view(x[0].size(0), -1),x[1].view(x[1].size(0), -1)), self.maxFuse(x[0].view(x[0].size(0), -1),x[1].view(x[1].size(0), -1)))
outFeatures = self.avgFuse(x[0].view(x[0].size(0), -1), x[1].view(x[1].size(0), -1))
outFeatures = self.classifier(outFeatures)
return self.softmax(outFeatures), outFeatures
def conv8x8(in_planes, out_planes, stride=4):
return nn.Conv2d(in_planes, out_planes, kernel_size=8, stride=stride,padding=1, bias=False)
def conv5x5(in_planes, out_planes, stride=1):
return nn.Conv2d(in_planes, out_planes, kernel_size=5, stride=stride,padding=1, bias=False)
def conv3x3(in_planes, out_planes, stride=1):
return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,padding=1, bias=False)
class BasicBlock(nn.Module):
def __init__(self, inplanes):
super(BasicBlock, self).__init__()
self.conv1 = conv8x8(inplanes,64)
self.bn1 = nn.BatchNorm2d(64)
self.relu = nn.ReLU(inplace=True)
self.conv2 = conv5x5(64, 128)
self.bn2 = nn.BatchNorm2d(128)
def addResidualInformation(self,out,residualOv,residualUn,resize=True):
        residualSum = None
if resize:
max_redux = nn.MaxPool2d(4,stride=4)
identityOv = max_redux(residualOv)[:,1:23,1:23]
identityUnd = max_redux(residualUn)[:,1:23,1:23]
residualSum = identityOv+identityUnd
else:
residualSum = residualOv + residualUn
for i in range(out.shape[1]):
out[:,i,:,:] += residualSum
return out
def forward(self, x):
identityOv = x[:, 4,:,:]
identityUnd = x[:, 5, :, :]
data = x[:, 0:4, :, :]
        data = self.addResidualInformation(data, identityOv, identityUnd, False)
out = self.conv1(data)
out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.bn2(out)
out = self.relu(out)
out = nn.AvgPool2d(kernel_size=3, stride=2)(out)
return out
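The residual bookkeeping in BasicBlock.addResidualInformation amounts to summing the two residual maps and adding that sum to every channel. A stdlib-only sketch with hypothetical 2x2 maps (plain nested lists standing in for tensors):

```python
def add_residual(out, residual_ov, residual_un):
    # residualSum = residualOv + residualUn, then broadcast the 2D sum over channels
    residual_sum = [[o + u for o, u in zip(row_o, row_u)]
                    for row_o, row_u in zip(residual_ov, residual_un)]
    return [[[v + residual_sum[i][j] for j, v in enumerate(row)]
             for i, row in enumerate(channel)]
            for channel in out]

out = [[[1, 1], [1, 1]], [[2, 2], [2, 2]]]  # 2 channels of 2x2
ov = [[1, 0], [0, 1]]
un = [[0, 1], [1, 0]]
print(add_residual(out, ov, un))  # [[[2, 2], [2, 2]], [[3, 3], [3, 3]]]
```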
class Joseph(nn.Module):
def __init__(self,classes):
super(Joseph, self).__init__()
self.features = BasicBlock(4)
self.classifier = nn.Sequential(
nn.Dropout(),
nn.Linear(128*10*10, classes)
)
def forward(self, x):
out = self.features(x)
out = out.view(out.size(0), 128*10*10)
out = self.classifier(out)
return out
class OctJolyne(nn.Module):
def __init__(self,classes,imageInput=(100,100),in_channels=4):
self.imageInput = imageInput
super(OctJolyne,self).__init__()
self.features = nn.Sequential(
OctConv(in_channels, 256, kernel_size=8, stride=4,alphas=[0,0.5]),
nn.ReLU(inplace=True),
OctConv(256, 512, kernel_size=4,stride=2),
nn.ReLU(inplace=True),
#nn.MaxPool2d(kernel_size=3, stride=2),
OctConv(512, 1024, kernel_size=2, stride=1,alphas=[0.5,0]),
nn.ReLU(inplace=True),
#nn.MaxPool2d(kernel_size=3, stride=2)
)
self.classifier = nn.Sequential(
nn.Dropout(),
nn.Linear(1024*10*10, 2048),
nn.ReLU(inplace=True),
MaxoutDynamic(1024, 2048),
nn.Dropout(),
nn.Linear(2048, 2048),
)
self.softmax = nn.Sequential(
nn.ReLU(inplace=True),
MaxoutDynamic(1024, 2048),
nn.Linear(2048, classes)
)
def forward(self, x):
x = self.features(x)
x = x.view(x.size(0), -1)
x = self.classifier(x)
return self.softmax(x), x
class SyameseJolyne(nn.Module):
def normedCrossCorrelation(self, a, b):
correlationBetData = ((a - torch.mean(a)) * (b - torch.mean(b))) / (torch.sqrt(torch.var(a) * torch.var(b)))
return correlationBetData
def __init__(self,classes,imageInput=(100,100)):
self.imageInput = imageInput
super(SyameseJolyne,self).__init__()
#self.input3 = nn.Conv2d(3, 256, kernel_size=8, stride=3)
self.features = nn.Sequential(
nn.Conv2d(4, 256, kernel_size=8, stride=3),
nn.BatchNorm2d(256),
nn.ReLU(inplace=True),
nn.Conv2d(256, 512, kernel_size=2,stride=2),
nn.BatchNorm2d(512),
nn.ReLU(inplace=True),
#nn.MaxPool2d(kernel_size=3, stride=2),
nn.Conv2d(512, 1024, kernel_size=2, stride=1),
nn.BatchNorm2d(1024),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=3, stride=3)
)
self.classifier = nn.Sequential(
MaxoutDynamic(int(16384 / 2), 16384),
nn.Dropout(),
nn.Linear(16384, 2048),
nn.ReLU(inplace=True),
MaxoutDynamic(1024, 2048),
nn.Dropout(),
nn.Linear(2048, 2048),
)
self.softmax = nn.Sequential(
nn.ReLU(inplace=True),
MaxoutDynamic(1024, 2048),
nn.Linear(2048, classes)
)
self.avgFuse = AverageFusion(16384)
#self.maxFuse = MaxFusion(16384)
#self.catFuse = ConcatFusion(9216)
def forward(self, x):
'''
outFeat = self.input4(x[0])
outFeat = self.features(outFeat)
outFeat = outFeat.view(outFeat.size(0), -1)
outFeat = self.classifier(outFeat)
'''
outFeatures = []
for i in x:
outFeat = self.features(i)
outFeat = outFeat.view(outFeat.size(0), -1)
#outFeat = self.classifier(outFeat)
outFeatures.append(outFeat)
#outFeatures = self.activationJoin(self.joinmaps(outFeatures[0],outFeatures[1]))
#outFeatures = self.avgFuse(outFeatures[0],outFeatures[1])
#outFeatures = self.maxFuse(outFeatures[0], outFeatures[1])
#outFeatures = self.catFuse(self.avgFuse(outFeatures[0],outFeatures[1]), self.maxFuse(outFeatures[0],outFeatures[1]))
outFeatures = self.avgFuse(outFeatures[0], outFeatures[1])
outFeatures = self.classifier(outFeatures)
return self.softmax(outFeatures), outFeatures
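SyameseJolyne.normedCrossCorrelation above returns an element-wise normalised cross-correlation. A stdlib-only analogue (note one hedge: population variance is used here, whereas torch.var defaults to the unbiased sample variance, so values differ by a factor of (n-1)/n):

```python
from statistics import mean, pvariance

def normed_cross_correlation(a, b):
    # element-wise analogue of SyameseJolyne.normedCrossCorrelation,
    # using population variance
    ma, mb = mean(a), mean(b)
    denom = (pvariance(a) * pvariance(b)) ** 0.5
    return [(x - ma) * (y - mb) / denom for x, y in zip(a, b)]

a = [1.0, 2.0, 3.0, 4.0]
terms = normed_cross_correlation(a, a)
print(mean(terms))  # 1.0 -- averaging the terms recovers the Pearson correlation
```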
class Jolyne(nn.Module):
def __init__(self,classes,imageInput=(100,100),in_channels=4):
self.imageInput = imageInput
super(Jolyne,self).__init__()
'''
self.block0 = nn.Sequential(
nn.Conv2d(in_channels, 128, kernel_size=8, stride=4),
nn.BatchNorm2d(128),
nn.ReLU(inplace=True)
)
self.block1 = nn.Sequential(
nn.Conv2d(128, 256, kernel_size=4, stride=2),
nn.BatchNorm2d(256),
nn.ReLU(inplace=True)
)
self.block2 = nn.Sequential(
nn.Conv2d(256, 512, kernel_size=4, stride=2),
nn.BatchNorm2d(512),
nn.ReLU(inplace=True)
)
self.block3 = nn.Sequential(
nn.Conv2d(896+in_channels, (896+in_channels)*2, kernel_size=1, stride=1),
nn.BatchNorm2d((896+in_channels)*2),
nn.ReLU(inplace=True)
)
self.maxpool = nn.MaxPool2d(kernel_size=10, stride=5, padding=2)
self.maxpoolb2 = nn.MaxPool2d(kernel_size=4, stride=2)
self.maxInput = nn.MaxPool2d(kernel_size=25, stride=20)
self.features = nn.Sequential(
nn.Dropout(),
nn.Linear(((896+in_channels)*2)*4*4, 2048),
nn.ReLU(inplace=True),
MaxoutDynamic(int(2048 / 2), 2048),
nn.Dropout(),
nn.Linear(2048, 2048),
)
self.convNet = nn.Sequential(
nn.Conv2d(in_channels, 128, kernel_size=8, stride=4),
nn.BatchNorm2d(128),
nn.ReLU(inplace=True),
nn.Conv2d(128, 256, kernel_size=4, stride=2),
nn.BatchNorm2d(256),
nn.ReLU(inplace=True),
nn.Conv2d(256, 512, kernel_size=4,stride=2),
nn.BatchNorm2d(512),
nn.ReLU(inplace=True),
nn.Conv2d(512, 1024, kernel_size=2, stride=1),
nn.BatchNorm2d(1024),
nn.ReLU(inplace=True),
)
'''
self.cv1 = nn.Conv2d(in_channels, 128, kernel_size=8, stride=4)
self.bn1 = nn.BatchNorm2d(128)
self.rl1 = nn.ReLU(inplace=True)
self.cv2 = nn.Conv2d(128, 256, kernel_size=4, stride=2)
self.bn2 = nn.BatchNorm2d(256)
self.rl2 = nn.ReLU(inplace=True)
self.cv3 = nn.Conv2d(256, 512, kernel_size=4,stride=2)
self.bn3 = nn.BatchNorm2d(512)
self.rl3 = nn.ReLU(inplace=True)
self.cv4 = nn.Conv2d(512, 1024, kernel_size=2, stride=1)
self.bn4 = nn.BatchNorm2d(1024)
self.rl4 = nn.ReLU(inplace=True)
self.cv5 = nn.Conv2d(1920, 3840, kernel_size=1, stride=1)
self.bn5 = nn.BatchNorm2d(3840)
self.rl5 = nn.ReLU(inplace=True)
self.features = nn.Sequential(
nn.Dropout(),
nn.Linear(34560, 2048),
nn.ReLU(inplace=True),
MaxoutDynamic(int(2048 / 2), 2048),
nn.Dropout(),
nn.Linear(2048, 2048),
)
self.softmax = nn.Sequential(
nn.ReLU(inplace=True),
MaxoutDynamic(int(2048 / 2), 2048),
nn.Linear(2048, classes, bias=False)
)
self.maxpool = nn.MaxPool2d(kernel_size=10, stride=7, padding=2)
self.maxpoolb2 = nn.MaxPool2d(kernel_size=4, stride=3)
self.maxpoolb3 = nn.MaxPool2d(kernel_size=2, stride=1)
def forward(self, x):
'''
inputImage = self.maxInput(x)
x = self.block0(x)
ft1 = self.maxpool(x)
x = self.block1(x)
ft2 = self.maxpoolb2(x)
x = self.block2(x)
x = torch.cat((inputImage,ft1,ft2,x),dim=1)
x = self.block3(x)
x = x.view(x.size(0),-1)
x = self.features(x)
'''
#x = self.convNet(x)
x = self.cv1(x)
ft1 = self.maxpool(x)
x = self.bn1(x)
x = self.rl1(x)
x = self.cv2(x)
ft2 = self.maxpoolb2(x)
x = self.bn2(x)
x = self.rl2(x)
x = self.cv3(x)
ft3 = self.maxpoolb3(x)
x = self.bn3(x)
x = self.rl3(x)
x = self.cv4(x)
x = self.bn4(x)
x = self.rl4(x)
x = torch.cat((ft1, ft2, ft3, x), dim=1)
x = self.cv5(x)
x = self.bn5(x)
x = self.rl5(x)
x = x.view(x.size(0),-1)
x = self.features(x)
return self.softmax(x), x
class GioGio(nn.Module):
    def calculateSize(self, dim, layer, inputSize):
        # Conv/pool layers store these attributes as either ints or per-dimension
        # tuples; the original `is not list` check never matched PyTorch's tuples
        padding = layer.padding[dim] if isinstance(layer.padding, (tuple, list)) else layer.padding
        dilation = layer.dilation[dim] if isinstance(layer.dilation, (tuple, list)) else layer.dilation
        kernel_size = layer.kernel_size[dim] if isinstance(layer.kernel_size, (tuple, list)) else layer.kernel_size
        stride = layer.stride[dim] if isinstance(layer.stride, (tuple, list)) else layer.stride
        return int(((inputSize + (padding * 2) - dilation * (kernel_size - 1) - 1) / stride) + 1)
def __init__(self,classes,imageInput=(100,100),in_channels=4):
self.imageInput = imageInput
super(GioGio,self).__init__()
self.features = nn.Sequential(
nn.Conv2d(in_channels, 64, kernel_size=8, stride=4, padding=2),
nn.BatchNorm2d(64),
nn.ReLU(inplace=True),
nn.Conv2d(64, 128, kernel_size=5, padding=2),
nn.BatchNorm2d(128),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=3, stride=2),
nn.Conv2d(128, 256, kernel_size=3, padding=1),
nn.BatchNorm2d(256),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=3, stride=2)
)
self.classifier = nn.Sequential(
nn.Dropout(),
nn.Linear(6400, 4096),
nn.ReLU(inplace=True),
nn.Dropout(),
#nn.Linear(4096, 4096),
)
self.softmax = nn.Sequential(
nn.ReLU(inplace=True),
nn.Linear(4096, classes,bias=False)
)
def forward(self, x):
x = self.features(x)
x = x.view(x.size(0), -1)
x = self.classifier(x)
return self.softmax(x), x
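The 6400 input size of GioGio's classifier can be derived by chaining the standard output-size formula through the feature stack (assuming the default 100x100 input):

```python
import math

def out_size(n, k, s, p):
    # floor((n + 2p - k) / s) + 1 with dilation 1
    return math.floor((n + 2 * p - k) / s) + 1

n = 100
n = out_size(n, 8, 4, 2)  # Conv2d(in, 64, kernel_size=8, stride=4, padding=2) -> 25
n = out_size(n, 5, 1, 2)  # Conv2d(64, 128, kernel_size=5, padding=2)          -> 25
n = out_size(n, 3, 2, 0)  # MaxPool2d(kernel_size=3, stride=2)                 -> 12
n = out_size(n, 3, 1, 1)  # Conv2d(128, 256, kernel_size=3, padding=1)         -> 12
n = out_size(n, 3, 2, 0)  # MaxPool2d(kernel_size=3, stride=2)                 -> 5
print(256 * n * n)         # 6400, the in_features of the first Linear layer
```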
def getBlock(in_channels,ks1,ks2,ks3):
return nn.Sequential(
nn.Conv2d(in_channels, 64, kernel_size=ks1, stride=int(ks1 / 2), padding=int(int(ks1 / 2)/2)),
nn.BatchNorm2d(64),
nn.ReLU(inplace=True),
nn.Conv2d(64, 128, kernel_size=ks2, padding=int(ks2 / 2)),
nn.BatchNorm2d(128),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=3, stride=2),
nn.Conv2d(128, 256, kernel_size=ks3, padding=int(ks3 / 2)),
nn.BatchNorm2d(256),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=3, stride=2)
)
class GioGioModulateKernel(nn.Module):
    def calculateSize(self, dim, layer, inputSize):
        # Conv/pool layers store these attributes as either ints or per-dimension
        # tuples; the original `is not list` check never matched PyTorch's tuples
        padding = layer.padding[dim] if isinstance(layer.padding, (tuple, list)) else layer.padding
        dilation = layer.dilation[dim] if isinstance(layer.dilation, (tuple, list)) else layer.dilation
        kernel_size = layer.kernel_size[dim] if isinstance(layer.kernel_size, (tuple, list)) else layer.kernel_size
        stride = layer.stride[dim] if isinstance(layer.stride, (tuple, list)) else layer.stride
        return int(((inputSize + (padding * 2) - dilation * (kernel_size - 1) - 1) / stride) + 1)
def __init__(self,classes,imageInput=(100,100),in_channels=4):
self.imageInput = imageInput
super(GioGioModulateKernel,self).__init__()
self.features1 = getBlock(1, 8, 5, 3)
self.features2 = getBlock(1, 6, 3, 2)
self.features3 = getBlock(1, 6, 3, 2)
self.features4 = getBlock(1, 3, 2, 1)
self.classifier = nn.Sequential(
nn.Dropout(),
nn.Linear(25600, 4096),
nn.ReLU(inplace=True),
nn.Dropout(),
nn.Linear(4096, 4096),
)
self.softmax = nn.Sequential(
nn.ReLU(inplace=True),
nn.Dropout(),
nn.Linear(4096, classes,bias=False)
)
    def forward(self, x):
        # NOTE: every channel slice is routed through features1; features2-features4
        # are built in __init__ with different kernel sizes but are not used here
        # (switching them in would change the spatial size fed to the classifier).
        x1 = x[:,0,:,:].reshape((-1,1,100,100))
        x1 = self.features1(x1)
        x2 = x[:,1,:,:].reshape((-1,1,100,100))
        x2 = self.features1(x2)
        x3 = x[:,2,:,:].reshape((-1,1,100,100))
        x3 = self.features1(x3)
        x4 = x[:,3,:,:].reshape((-1,1,100,100))
        x4 = self.features1(x4)
x = torch.cat((x1,x2,x3,x4),axis=1)
x = x.view(x.size(0), -1)
x = self.classifier(x)
return self.softmax(x), x
class GioGioModulateKernelInput(nn.Module):
    def calculateSize(self, dim, layer, inputSize):
        # Conv/pool layers store these attributes as either ints or per-dimension
        # tuples; the original `is not list` check never matched PyTorch's tuples
        padding = layer.padding[dim] if isinstance(layer.padding, (tuple, list)) else layer.padding
        dilation = layer.dilation[dim] if isinstance(layer.dilation, (tuple, list)) else layer.dilation
        kernel_size = layer.kernel_size[dim] if isinstance(layer.kernel_size, (tuple, list)) else layer.kernel_size
        stride = layer.stride[dim] if isinstance(layer.stride, (tuple, list)) else layer.stride
        return int(((inputSize + (padding * 2) - dilation * (kernel_size - 1) - 1) / stride) + 1)
def __init__(self,classes,imageInput=(100,100)):
self.imageInput = imageInput
super(GioGioModulateKernelInput,self).__init__()
self.input1 = nn.Sequential(
nn.Conv2d(1, 64, kernel_size=5, stride=2,padding=2),
nn.InstanceNorm2d(64),
nn.ReLU(inplace=True)
)
self.input2 = nn.Sequential(
nn.Conv2d(1, 64, kernel_size=4, stride=2,padding=1),
nn.InstanceNorm2d(64),
nn.ReLU(inplace=True)
)
self.input3 = nn.Sequential(
nn.Conv2d(1, 64, kernel_size=3, stride=2,padding=1),
nn.InstanceNorm2d(64),
nn.ReLU(inplace=True)
)
self.input4 = nn.Sequential(
nn.Conv2d(1, 64, kernel_size=2, stride=2),
nn.InstanceNorm2d(64),
nn.ReLU(inplace=True)
)
self.enFeat = FeatureEnhance(in_channels=64,out_channels=64)
self.normInput = nn.Sequential(
nn.LayerNorm((256,50,50)),
nn.Conv2d(256,64,kernel_size=1),
nn.InstanceNorm2d(64),
nn.ReLU(inplace=True)
)
self.features = nn.Sequential(
nn.Conv2d(64, 128, kernel_size=5, stride=2),
nn.InstanceNorm2d(128),
nn.ReLU(inplace=True),
nn.Conv2d(128, 256, kernel_size=3, stride=2),
nn.InstanceNorm2d(256),
nn.ReLU(inplace=True),
nn.Conv2d(256, 256, kernel_size=3, stride=2),
nn.InstanceNorm2d(256),
nn.ReLU(inplace=True)
)
self.classifier = nn.Sequential(
nn.Dropout(),
nn.Linear(6400, 2048),
nn.ReLU(inplace=True),
nn.Dropout(),
#nn.Linear(4096, 4096),
)
self.softmax = nn.Sequential(
nn.ReLU(inplace=True),
nn.Dropout(),
nn.Linear(2048, classes,bias=False)
)
def forward(self, x):
x1 = x[:,0,:,:].reshape((-1,1,100,100))
x1 = self.input1(x1)
x2 = x[:,1,:,:].reshape((-1,1,100,100))
x2 = self.input2(x2)
x3 = x[:,2,:,:].reshape((-1,1,100,100))
x3 = self.input3(x3)
x4 = x[:,3,:,:].reshape((-1,1,100,100))
x4 = self.input4(x4)
x1, x2, x3, x4 = self.enFeat(x1,x2,x3,x4)
x = torch.cat((x1,x2,x3,x4),axis=1)
x = self.normInput(x)
x = self.features(x)
x = x.view(x.size(0), -1)
x = self.classifier(x)
return self.softmax(x), x
class GioGioModulateKernelInputDepth(nn.Module):
def __init__(self,classes,imageInput=(100,100)):
self.imageInput = imageInput
super(GioGioModulateKernelInputDepth,self).__init__()
self.input1 = nn.Sequential(
nn.Conv2d(1, 64, kernel_size=5, stride=2,padding=2),
nn.InstanceNorm2d(64),
nn.ReLU(inplace=True)
)
self.enFeat = FeatureEnhanceNoCross(in_channels=64,out_channels=64)
self.features = nn.Sequential(
nn.Conv2d(64, 128, kernel_size=5, stride=2),
nn.InstanceNorm2d(128),
nn.ReLU(inplace=True),
nn.Conv2d(128, 256, kernel_size=3, stride=2),
nn.InstanceNorm2d(256),
nn.ReLU(inplace=True),
nn.Conv2d(256, 256, kernel_size=3, stride=2),
nn.InstanceNorm2d(256),
nn.ReLU(inplace=True)
)
self.classifier = nn.Sequential(
nn.Dropout(),
nn.Linear(6400, 2048),
nn.ReLU(inplace=True),
nn.Dropout(),
#nn.Linear(4096, 4096),
)
self.softmax = nn.Sequential(
nn.ReLU(inplace=True),
nn.Dropout(),
nn.Linear(2048, classes,bias=False)
)
def forward(self, x):
x = x[:,0,:,:].reshape((-1,1,100,100))
x = self.input1(x)
x = self.enFeat(x)
x = self.features(x)
x = x.view(x.size(0), -1)
x = self.classifier(x)
return self.softmax(x), x
class Bottleneck(nn.Module):
    # Bottleneck in torchvision places the stride for downsampling at the 3x3
    # convolution (self.conv2), while the original implementation places it at
    # the first 1x1 convolution (self.conv1), according to "Deep Residual
    # Learning for Image Recognition" (https://arxiv.org/abs/1512.03385).
    # This variant is also known as ResNet V1.5 and improves accuracy according to
    # https://ngc.nvidia.com/catalog/model-scripts/nvidia:resnet_50_v1_5_for_pytorch.
def __init__(self,inplanes,planes,stride=1,downsample=None,groups=1,norm_layer=None):
super(Bottleneck, self).__init__()
if norm_layer is None:
norm_layer = nn.BatchNorm2d
width = int(planes / 2)
# Both self.conv2 and self.downsample layers downsample the input when stride != 1
self.conv1 = nn.Conv2d(inplanes, width, kernel_size=1, stride=1, bias=False)
self.bn1 = norm_layer(width)
self.conv2 = nn.Conv2d(width,width,kernel_size=3,stride=stride,groups=groups)
self.bn2 = norm_layer(width)
self.conv3 = nn.Conv2d(width, planes, kernel_size=1, stride=1, bias=False)
self.bn3 = norm_layer(planes)
self.relu = nn.ReLU(inplace=True)
self.downsample = nn.Sequential(
nn.Conv2d(inplanes, planes, kernel_size=3,stride=stride),
norm_layer(planes),
)
def forward(self, x):
identity = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.bn2(out)
out = self.relu(out)
out = self.conv3(out)
out = self.bn3(out)
if self.downsample is not None:
identity = self.downsample(x)
out += identity
out = self.relu(out)
return out
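Because conv2 and the downsample conv share the same kernel size (3), the same stride, and no padding, while conv1 and conv3 are 1x1 stride-1 layers that leave the spatial size unchanged, the main path and the shortcut always land on the same spatial size, making `out += identity` shape-safe. A quick stdlib-only arithmetic check over a few hypothetical input sizes:

```python
import math

def conv_out(n, k, s, p=0):
    # standard conv output size with dilation 1
    return math.floor((n + 2 * p - k) / s) + 1

for n in (100, 50, 25, 13):
    for stride in (1, 2):
        # conv1 (1x1, stride 1) -> conv2 (3x3, stride) -> conv3 (1x1, stride 1)
        main = conv_out(conv_out(conv_out(n, 1, 1), 3, stride), 1, 1)
        shortcut = conv_out(n, 3, stride)  # downsample: 3x3, same stride
        assert main == shortcut
print("main path and shortcut sizes always match")
```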
class MaestroNetwork(nn.Module):
def __init__(self,classes,imageInput=(100,100)):
self.imageInput = imageInput
super(MaestroNetwork,self).__init__()
self.input1 = nn.Sequential(
Bottleneck(1,64,2),
Bottleneck(64, 128, 2),
Bottleneck(128, 256, 2)
)
self.input2 = nn.Sequential(
Bottleneck(1,64,2),
Bottleneck(64, 128, 2),
Bottleneck(128, 256, 2)
)
self.input3 = nn.Sequential(
Bottleneck(1,64,2),
Bottleneck(64, 128, 2),
Bottleneck(128, 256, 2)
)
self.input4 = nn.Sequential(
Bottleneck(1,64,2),
Bottleneck(64, 128, 2),
Bottleneck(128, 256, 2)
)
self.normLayer = nn.Sequential(
nn.LayerNorm((1024,11,11)),
nn.Conv2d(1024,512,stride=2,kernel_size=3),
nn.ReLU(inplace=True),
)
self.feature = nn.Sequential(
nn.Dropout(),
nn.Linear(12800,1024),
nn.ReLU(inplace=True)
)
self.softmax = nn.Sequential(
nn.Dropout(),
nn.Linear(1024, classes,bias=False)
)
def forward(self, x):
x1 = x[:,0,:,:].reshape((-1,1,100,100))
x1 = self.input1(x1)
x2 = x[:,1,:,:].reshape((-1,1,100,100))
x2 = self.input2(x2)
x3 = x[:,2,:,:].reshape((-1,1,100,100))
x3 = self.input3(x3)
x4 = x[:,3,:,:].reshape((-1,1,100,100))
x4 = self.input4(x4)
x = torch.cat((x1,x2,x3,x4),axis=1)
x = self.normLayer(x)
x = x.view(x.size(0),-1)
x = self.feature(x)
return self.softmax(x), x
class GioGioModulateKernelInputDepthDI(nn.Module):
def __init__(self,classes,imageInput=(100,100)):
self.imageInput = imageInput
super(GioGioModulateKernelInputDepthDI,self).__init__()
self.input1 = nn.Sequential(
nn.Conv2d(1, 64, kernel_size=5, stride=2,padding=2),
nn.InstanceNorm2d(64),
nn.ReLU(inplace=True)
)
self.input2 = nn.Sequential(
nn.Conv2d(1, 64, kernel_size=4, stride=2,padding=1),
nn.InstanceNorm2d(64),
nn.ReLU(inplace=True)
)
self.input3 = nn.Sequential(
nn.Conv2d(1, 64, kernel_size=3, stride=2,padding=1),
nn.InstanceNorm2d(64),
nn.ReLU(inplace=True)
)
self.input4 = nn.Sequential(
nn.Conv2d(1, 64, kernel_size=2, stride=2),
nn.InstanceNorm2d(64),
nn.ReLU(inplace=True)
)
self.input5 = nn.Sequential(
nn.Conv2d(1, 64, kernel_size=5, stride=2,padding=2),
nn.InstanceNorm2d(64),
nn.ReLU(inplace=True)
)
self.enFeat = FeatureEnhanceDepthDI(in_channels=64,out_channels=64)
'''
self.normInput = nn.Sequential(
nn.LayerNorm((320,50,50)),
nn.Conv2d(320,64,kernel_size=1),
nn.InstanceNorm2d(64),
nn.ReLU(inplace=True)
)
'''
self.features = nn.Sequential(
nn.Conv2d(320, 128, kernel_size=5, stride=2),
nn.InstanceNorm2d(128),
#nn.BatchNorm2d(128),
nn.ReLU(inplace=True),
nn.Conv2d(128, 256, kernel_size=3, stride=2),
nn.InstanceNorm2d(256),
#nn.BatchNorm2d(256),
nn.ReLU(inplace=True),
nn.Conv2d(256, 256, kernel_size=3, stride=2),
nn.InstanceNorm2d(256),
#nn.BatchNorm2d(256),
nn.ReLU(inplace=True)
)
self.classifier = nn.Sequential(
nn.Dropout(),
nn.Linear(6400, 2048),
nn.ReLU(inplace=True),
nn.Dropout()
#nn.Linear(4096, 4096),
)
self.softmax = nn.Sequential(
nn.ReLU(inplace=True),
nn.Dropout(),
nn.Linear(2048, classes,bias=False)
)
def forward(self, x, xDepth):
#ccalc = x.clone().cpu()
x1 = x[:,0,:,:].reshape((-1,1,100,100))
x2 = x[:,1,:,:].reshape((-1,1,100,100))
x3 = x[:,2,:,:].reshape((-1,1,100,100))
x4 = x[:,3,:,:].reshape((-1,1,100,100))
#print('Inicial')
#print(stats.pearsonr(ccalc[0,0,:,:].flatten(),ccalc[0,1,:,:].flatten()))
#print(stats.pearsonr(ccalc[0,0,:,:].flatten(),ccalc[0,2,:,:].flatten()))
#print(stats.pearsonr(ccalc[0,0,:,:].flatten(),ccalc[0,3,:,:].flatten()))
x1 = self.input1(x1)
x2 = self.input2(x2)
x3 = self.input3(x3)
x4 = self.input4(x4)
x5 = self.input5(xDepth[:,0,:,:].reshape((-1,1,100,100)))
x1, x2, x3, x4,x5 = self.enFeat(x1,x2,x3,x4,x5)
#print('Att Maps')
#print(stats.pearsonr(x1[0,:,:,:].clone().cpu().flatten(),x2[0,:,:,:].clone().cpu().flatten()))
#print(stats.pearsonr(x1[0,:,:,:].clone().cpu().flatten(), x3[0,:,:,:].clone().cpu().flatten()))
#print(stats.pearsonr(x1[0,:,:,:].clone().cpu().flatten(), x4[0,:,:,:].clone().cpu().flatten()))
x = torch.cat((x1,x2,x3,x4,x5),axis=1)
#x = self.normInput(x)
#print('After Norm')
#xNormed = x.clone().cpu()
#for idxChan in range(1,xNormed.shape[1]):
# print(stats.pearsonr(xNormed[0,0,:,:].flatten(),xNormed[0,idxChan,:,:].flatten()))
x = self.features(x)
x = x.view(x.size(0), -1)
x = self.classifier(x)
return self.softmax(x), x
class VanillaNetworkPaper(nn.Module):
def __init__(self,classes,imageInput=(100,100)):
self.imageInput = imageInput
super(VanillaNetworkPaper,self).__init__()
self.input1 = nn.Sequential(
nn.Conv2d(1, 64, kernel_size=5, stride=2,padding=2),
nn.InstanceNorm2d(64),
nn.ReLU(inplace=True)
)
self.input2 = nn.Sequential(
nn.Conv2d(1, 64, kernel_size=4, stride=2,padding=1),
nn.InstanceNorm2d(64),
nn.ReLU(inplace=True)
)
self.input3 = nn.Sequential(
nn.Conv2d(1, 64, kernel_size=3, stride=2,padding=1),
nn.InstanceNorm2d(64),
nn.ReLU(inplace=True)
)
self.input4 = nn.Sequential(
nn.Conv2d(1, 64, kernel_size=2, stride=2),
nn.InstanceNorm2d(64),
nn.ReLU(inplace=True)
)
#self.enFeat = FeatureEnhanceDepthDI(in_channels=64,out_channels=64)
'''
self.normInput = nn.Sequential(
nn.LayerNorm((256,50,50)),
nn.Conv2d(256,64,kernel_size=1),
nn.InstanceNorm2d(64),
nn.ReLU(inplace=True)
)
'''
self.features = nn.Sequential(
nn.Conv2d(256, 128, kernel_size=5, stride=2),
nn.InstanceNorm2d(128),
#nn.BatchNorm2d(128),
nn.ReLU(inplace=True),
nn.Conv2d(128, 256, kernel_size=3, stride=2),
nn.InstanceNorm2d(256),
#nn.BatchNorm2d(256),
nn.ReLU(inplace=True),
nn.Conv2d(256, 256, kernel_size=3, stride=2),
nn.InstanceNorm2d(256),
#nn.BatchNorm2d(256),
nn.ReLU(inplace=True)
)
self.classifier = nn.Sequential(
nn.Dropout(),
nn.Linear(6400, 2048),
nn.ReLU(inplace=True),
nn.Dropout()
#nn.Linear(4096, 4096),
)
self.softmax = nn.Sequential(
nn.ReLU(inplace=True),
nn.Dropout(),
nn.Linear(2048, classes,bias=False)
)
def forward(self, x):
x1 = x[:,0,:,:].reshape((-1,1,100,100))
x2 = x[:,1,:,:].reshape((-1,1,100,100))
x3 = x[:,2,:,:].reshape((-1,1,100,100))
x4 = x[:,3,:,:].reshape((-1,1,100,100))
x1 = self.input1(x1)
x2 = self.input2(x2)
x3 = self.input3(x3)
x4 = self.input4(x4)
x = torch.cat((x1,x2,x3,x4),axis=1)
#x = self.normInput(x)
x = self.features(x)
x = x.view(x.size(0), -1)
x = self.classifier(x)
return self.softmax(x), x
class AttentionDINet(nn.Module):
def __init__(self,classes,imageInput=(100,100)):
self.imageInput = imageInput
super(AttentionDINet,self).__init__()
self.input1 = nn.Sequential(
nn.Conv2d(1, 64, kernel_size=5, stride=2,padding=2),
nn.InstanceNorm2d(64),
nn.ReLU(inplace=True)
)
self.input2 = nn.Sequential(
nn.Conv2d(1, 64, kernel_size=4, stride=2,padding=1),
nn.InstanceNorm2d(64),
nn.ReLU(inplace=True)
)
self.input3 = nn.Sequential(
nn.Conv2d(1, 64, kernel_size=3, stride=2,padding=1),
nn.InstanceNorm2d(64),
nn.ReLU(inplace=True)
)
self.input4 = nn.Sequential(
nn.Conv2d(1, 64, kernel_size=2, stride=2),
nn.InstanceNorm2d(64),
nn.ReLU(inplace=True)
)
self.enFeat = FeatureEnhanceDINoCross(in_channels=64,out_channels=64)
self.features = nn.Sequential(
nn.Conv2d(256, 128, kernel_size=5, stride=2),
nn.InstanceNorm2d(128),
nn.ReLU(inplace=True),
nn.Conv2d(128, 256, kernel_size=3, stride=2),
nn.InstanceNorm2d(256),
nn.ReLU(inplace=True),
nn.Conv2d(256, 256, kernel_size=3, stride=2),
nn.InstanceNorm2d(256),
nn.ReLU(inplace=True)
)
self.classifier = nn.Sequential(
nn.Dropout(),
nn.Linear(6400, 2048),
nn.ReLU(inplace=True),
nn.Dropout()
)
self.softmax = nn.Sequential(
nn.ReLU(inplace=True),
nn.Dropout(),
nn.Linear(2048, classes,bias=False)
)
def forward(self, x):
x1 = x[:,0,:,:].reshape((-1,1,100,100))
x2 = x[:,1,:,:].reshape((-1,1,100,100))
x3 = x[:,2,:,:].reshape((-1,1,100,100))
x4 = x[:,3,:,:].reshape((-1,1,100,100))
x1 = self.input1(x1)
x2 = self.input2(x2)
x3 = self.input3(x3)
x4 = self.input4(x4)
x1, x2, x3, x4 = self.enFeat(x1,x2,x3,x4)
x = torch.cat((x1,x2,x3,x4),axis=1)
x = self.features(x)
x = x.view(x.size(0), -1)
x = self.classifier(x)
return self.softmax(x), x
class AttentionDICrossNet(nn.Module):
def __init__(self,classes,imageInput=(100,100)):
self.imageInput = imageInput
super(AttentionDICrossNet,self).__init__()
self.input1 = nn.Sequential(
nn.Conv2d(1, 64, kernel_size=5, stride=2,padding=2),
nn.InstanceNorm2d(64),
nn.ReLU(inplace=True)
)
self.input2 = nn.Sequential(
nn.Conv2d(1, 64, kernel_size=4, stride=2,padding=1),
nn.InstanceNorm2d(64),
nn.ReLU(inplace=True)
)
self.input3 = nn.Sequential(
nn.Conv2d(1, 64, kernel_size=3, stride=2,padding=1),
nn.InstanceNorm2d(64),
nn.ReLU(inplace=True)
)
self.input4 = nn.Sequential(
nn.Conv2d(1, 64, kernel_size=2, stride=2),
nn.InstanceNorm2d(64),
nn.ReLU(inplace=True)
)
self.enFeat = FeatureEnhanceDI(in_channels=64,out_channels=64)
'''
self.normInput = nn.Sequential(
nn.LayerNorm((256,50,50)),
nn.Conv2d(256,64,kernel_size=1),
nn.InstanceNorm2d(64),
nn.ReLU(inplace=True)
)
'''
self.features = nn.Sequential(
nn.Conv2d(256, 128, kernel_size=5, stride=2),
nn.InstanceNorm2d(128),
nn.ReLU(inplace=True),
nn.Conv2d(128, 256, kernel_size=3, stride=2),
nn.InstanceNorm2d(256),
nn.ReLU(inplace=True),
nn.Conv2d(256, 256, kernel_size=3, stride=2),
nn.InstanceNorm2d(256),
nn.ReLU(inplace=True)
)
self.classifier = nn.Sequential(
nn.Dropout(),
nn.Linear(6400, 2048),
nn.ReLU(inplace=True),
nn.Dropout()
)
self.softmax = nn.Sequential(
nn.ReLU(inplace=True),
nn.Dropout(),
nn.Linear(2048, classes,bias=False)
)
def forward(self, x):
x1 = x[:,0,:,:].reshape((-1,1,100,100))
x2 = x[:,1,:,:].reshape((-1,1,100,100))
x3 = x[:,2,:,:].reshape((-1,1,100,100))
x4 = x[:,3,:,:].reshape((-1,1,100,100))
x1 = self.input1(x1)
x2 = self.input2(x2)
x3 = self.input3(x3)
x4 = self.input4(x4)
x1, x2, x3, x4 = self.enFeat(x1,x2,x3,x4)
x = torch.cat((x1,x2,x3,x4),axis=1)
x = self.features(x)
x = x.view(x.size(0), -1)
x = self.classifier(x)
return self.softmax(x), x
class DepthAM(nn.Module):
def __init__(self,classes,imageInput=(100,100)):
self.imageInput = imageInput
super(DepthAM,self).__init__()
self.input1 = nn.Sequential(
nn.Conv2d(1, 64, kernel_size=5, stride=2,padding=2),
nn.InstanceNorm2d(64),
nn.ReLU(inplace=True)
)
self.enFeat = FeatureEnhanceDepth(in_channels=64,out_channels=64)
self.features = nn.Sequential(
nn.Conv2d(64, 128, kernel_size=5, stride=2),
nn.InstanceNorm2d(128),
nn.ReLU(inplace=True),
nn.Conv2d(128, 256, kernel_size=3, stride=2),
nn.InstanceNorm2d(256),
nn.ReLU(inplace=True),
nn.Conv2d(256, 256, kernel_size=3, stride=2),
nn.InstanceNorm2d(256),
nn.ReLU(inplace=True)
)
self.classifier = nn.Sequential(
nn.Dropout(),
nn.Linear(6400, 2048),
nn.ReLU(inplace=True),
nn.Dropout()
)
self.softmax = nn.Sequential(
nn.ReLU(inplace=True),
nn.Dropout(),
nn.Linear(2048, classes,bias=False)
)
def forward(self, x):
x = x[:,0,:,:].reshape((-1,1,100,100))
x = self.input1(x)
x = self.enFeat(x)
x = self.features(x)
x = x.view(x.size(0), -1)
x = self.classifier(x)
return self.softmax(x), x
class GioGioModulateKernelInputDepthDINoCross(nn.Module):
def __init__(self,classes,imageInput=(100,100)):
self.imageInput = imageInput
super(GioGioModulateKernelInputDepthDINoCross,self).__init__()
self.input1 = nn.Sequential(
nn.Conv2d(1, 64, kernel_size=5, stride=2,padding=2),
nn.InstanceNorm2d(64),
nn.ReLU(inplace=True)
)
self.input2 = nn.Sequential(
nn.Conv2d(1, 64, kernel_size=4, stride=2,padding=1),
nn.InstanceNorm2d(64),
nn.ReLU(inplace=True)
)
self.input3 = nn.Sequential(
nn.Conv2d(1, 64, kernel_size=3, stride=2,padding=1),
nn.InstanceNorm2d(64),
nn.ReLU(inplace=True)
)
self.input4 = nn.Sequential(
nn.Conv2d(1, 64, kernel_size=2, stride=2),
nn.InstanceNorm2d(64),
nn.ReLU(inplace=True)
)
self.input5 = nn.Sequential(
nn.Conv2d(1, 64, kernel_size=5, stride=2,padding=2),
nn.InstanceNorm2d(64),
nn.ReLU(inplace=True)
)
self.enFeat = FeatureEnhanceDepthDIOnlyCross(in_channels=64,out_channels=64)
self.features = nn.Sequential(
nn.Conv2d(320, 128, kernel_size=5, stride=2),
nn.InstanceNorm2d(128),
nn.ReLU(inplace=True),
nn.Conv2d(128, 256, kernel_size=3, stride=2),
nn.InstanceNorm2d(256),
nn.ReLU(inplace=True),
nn.Conv2d(256, 256, kernel_size=3, stride=2),
nn.InstanceNorm2d(256),
nn.ReLU(inplace=True)
)
self.classifier = nn.Sequential(
nn.Dropout(),
nn.Linear(6400, 2048),
nn.ReLU(inplace=True),
nn.Dropout()
)
self.softmax = nn.Sequential(
nn.ReLU(inplace=True),
nn.Dropout(),
nn.Linear(2048, classes,bias=False)
)
def forward(self, x, xDepth):
x1 = x[:,0,:,:].reshape((-1,1,100,100))
x2 = x[:,1,:,:].reshape((-1,1,100,100))
x3 = x[:,2,:,:].reshape((-1,1,100,100))
x4 = x[:,3,:,:].reshape((-1,1,100,100))
x1 = self.input1(x1)
x2 = self.input2(x2)
x3 = self.input3(x3)
x4 = self.input4(x4)
x5 = self.input5(xDepth[:,0,:,:].reshape((-1,1,100,100)))
x1, x2, x3, x4,x5 = self.enFeat(x1,x2,x3,x4,x5)
x = torch.cat((x1,x2,x3,x4,x5),axis=1)
x = self.features(x)
x = x.view(x.size(0), -1)
x = self.classifier(x)
return self.softmax(x), x


class AttentionDIDepthNet(nn.Module):
    def __init__(self, classes, imageInput=(100, 100)):
        self.imageInput = imageInput
        super(AttentionDIDepthNet, self).__init__()
        self.input1 = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=5, stride=2, padding=2),
            nn.InstanceNorm2d(64),
            nn.ReLU(inplace=True)
        )
        self.input2 = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=4, stride=2, padding=1),
            nn.InstanceNorm2d(64),
            nn.ReLU(inplace=True)
        )
        self.input3 = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=3, stride=2, padding=1),
            nn.InstanceNorm2d(64),
            nn.ReLU(inplace=True)
        )
        self.input4 = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=2, stride=2),
            nn.InstanceNorm2d(64),
            nn.ReLU(inplace=True)
        )
        self.input5 = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=5, stride=2, padding=2),
            nn.InstanceNorm2d(64),
            nn.ReLU(inplace=True)
        )
        self.enFeat = FeatureEnhanceDIDepthNoCross(in_channels=64, out_channels=64)
        self.features = nn.Sequential(
            nn.Conv2d(320, 128, kernel_size=5, stride=2),
            nn.InstanceNorm2d(128),
            nn.ReLU(inplace=True),
            nn.Conv2d(128, 256, kernel_size=3, stride=2),
            nn.InstanceNorm2d(256),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, kernel_size=3, stride=2),
            nn.InstanceNorm2d(256),
            nn.ReLU(inplace=True)
        )
        self.classifier = nn.Sequential(
            nn.Dropout(),
            nn.Linear(6400, 2048),
            nn.ReLU(inplace=True),
            nn.Dropout()
        )
        self.softmax = nn.Sequential(
            nn.ReLU(inplace=True),
            nn.Dropout(),
            nn.Linear(2048, classes, bias=False)
        )

    def forward(self, x, xDepth):
        x1 = x[:, 0, :, :].reshape((-1, 1, 100, 100))
        x2 = x[:, 1, :, :].reshape((-1, 1, 100, 100))
        x3 = x[:, 2, :, :].reshape((-1, 1, 100, 100))
        x4 = x[:, 3, :, :].reshape((-1, 1, 100, 100))
        x1 = self.input1(x1)
        x2 = self.input2(x2)
        x3 = self.input3(x3)
        x4 = self.input4(x4)
        x5 = self.input5(xDepth[:, 0, :, :].reshape((-1, 1, 100, 100)))
        x1, x2, x3, x4, x5 = self.enFeat(x1, x2, x3, x4, x5)
        x = torch.cat((x1, x2, x3, x4, x5), axis=1)
        x = self.features(x)
        x = x.view(x.size(0), -1)
        x = self.classifier(x)
        return self.softmax(x), x


class VanillaDepthDINetworkPaper(nn.Module):
    def __init__(self, classes, imageInput=(100, 100)):
        self.imageInput = imageInput
        super(VanillaDepthDINetworkPaper, self).__init__()
        self.input1 = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=5, stride=2, padding=2),
            nn.InstanceNorm2d(64),
            nn.ReLU(inplace=True)
        )
        self.input2 = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=4, stride=2, padding=1),
            nn.InstanceNorm2d(64),
            nn.ReLU(inplace=True)
        )
        self.input3 = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=3, stride=2, padding=1),
            nn.InstanceNorm2d(64),
            nn.ReLU(inplace=True)
        )
        self.input4 = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=2, stride=2),
            nn.InstanceNorm2d(64),
            nn.ReLU(inplace=True)
        )
        self.input5 = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=5, stride=2, padding=2),
            nn.InstanceNorm2d(64),
            nn.ReLU(inplace=True)
        )
        self.features = nn.Sequential(
            nn.Conv2d(320, 128, kernel_size=5, stride=2),
            nn.InstanceNorm2d(128),
            nn.ReLU(inplace=True),
            nn.Conv2d(128, 256, kernel_size=3, stride=2),
            nn.InstanceNorm2d(256),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, kernel_size=3, stride=2),
            nn.InstanceNorm2d(256),
            nn.ReLU(inplace=True)
        )
        self.classifier = nn.Sequential(
            nn.Dropout(),
            nn.Linear(6400, 2048),
            nn.ReLU(inplace=True),
            nn.Dropout()
        )
        self.softmax = nn.Sequential(
            nn.ReLU(inplace=True),
            nn.Dropout(),
            nn.Linear(2048, classes, bias=False)
        )

    def forward(self, x, xDepth):
        x1 = x[:, 0, :, :].reshape((-1, 1, 100, 100))
        x2 = x[:, 1, :, :].reshape((-1, 1, 100, 100))
        x3 = x[:, 2, :, :].reshape((-1, 1, 100, 100))
        x4 = x[:, 3, :, :].reshape((-1, 1, 100, 100))
        x1 = self.input1(x1)
        x2 = self.input2(x2)
        x3 = self.input3(x3)
        x4 = self.input4(x4)
        x5 = self.input5(xDepth[:, 0, :, :].reshape((-1, 1, 100, 100)))
        x = torch.cat((x1, x2, x3, x4, x5), axis=1)
        x = self.features(x)
        x = x.view(x.size(0), -1)
        x = self.classifier(x)
        return self.softmax(x), x
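A quick way to sanity-check the hard-coded `nn.Linear(6400, 2048)` input size in the classes above is to trace the convolution output shapes by hand. The sketch below is plain Python (no torch required); the layer hyperparameters are copied from the networks above, and it confirms that every input branch maps a 100×100 frame to 50×50, so the concatenated 320-channel map flattens to 256·5·5 = 6400 after the three unpadded stride-2 convolutions.

```python
def conv_out(size, kernel, stride, padding=0):
    # Standard Conv2d output-size formula (dilation = 1).
    return (size + 2 * padding - kernel) // stride + 1

# (kernel, stride, padding) of the five input branches; they must all
# agree on the spatial size so torch.cat along the channel axis is valid.
branches = [(5, 2, 2), (4, 2, 1), (3, 2, 1), (2, 2, 0), (5, 2, 2)]
sizes = [conv_out(100, k, s, p) for k, s, p in branches]
assert sizes == [50] * 5

# Shared trunk: 5x5/2, 3x3/2, 3x3/2, all unpadded.
s = sizes[0]
for kernel in (5, 3, 3):
    s = conv_out(s, kernel, 2)

flat = 256 * s * s  # 256 channels out of the last conv
print(s, flat)  # -> 5 6400, matching nn.Linear(6400, 2048)
```

The same arithmetic explains why `input4` uses kernel 2 with no padding: it is the only kernel/padding pair with stride 2 that also lands exactly on 50×50.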
| 32.821614 | 177 | 0.546254 | 5,537 | 44,342 | 4.310096 | 0.052194 | 0.056987 | 0.072449 | 0.094741 | 0.805447 | 0.785167 | 0.768196 | 0.754494 | 0.737524 | 0.724618 | 0 | 0.093662 | 0.303662 | 44,342 | 1,350 | 178 | 32.845926 | 0.679243 | 0.063168 | 0 | 0.697183 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.050302 | false | 0 | 0.007042 | 0.00503 | 0.107646 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
33d6985fa401a74d41bad6b2a4b9a794d6bd8e71 | 14,011 | py | Python | tests/test_extension/test_gettext.py | timothyqiu/Flask-Xuanzang | 227bace18e69c990c623a444ec99cc8db104e26c | [
"MIT"
] | 2 | 2021-11-08T02:55:24.000Z | 2021-11-08T09:40:40.000Z | tests/test_extension/test_gettext.py | timothyqiu/Flask-Xuanzang | 227bace18e69c990c623a444ec99cc8db104e26c | [
"MIT"
] | null | null | null | tests/test_extension/test_gettext.py | timothyqiu/Flask-Xuanzang | 227bace18e69c990c623a444ec99cc8db104e26c | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from __future__ import absolute_import
from __future__ import unicode_literals

import sys

from babel.support import Locale
from mock import Mock, patch

from flask_xuanzang import Xuanzang
from flask_xuanzang import gettext, ngettext
from flask_xuanzang import ugettext, ungettext
from flask_xuanzang import pgettext, npgettext
from flask_xuanzang import lazy_gettext, lazy_ngettext
from flask_xuanzang import lazy_ugettext, lazy_ungettext
from flask_xuanzang import lazy_pgettext, lazy_npgettext

from tests import XuanzangTestCase


PY2 = (sys.version_info[0] == 2)


class GettextTestCase(XuanzangTestCase):
    DEFAULT_LOCALE = None

    def setUp(self):
        self.app = self.create_app(self.DEFAULT_LOCALE)
        self.locale_selector = Mock(name='locale_selector', return_value=None)
        self.xuanzang = Xuanzang(self.app,
                                 locale_selector=self.locale_selector)


class GettextFunctionTestCase(GettextTestCase):
    DEFAULT_LOCALE = 'de'

    def test_gettext(self):
        with self.app.test_request_context():
            message = gettext('Large')
            if PY2:
                self.assertEqual(message, 'Groß'.encode('utf-8'))
            else:
                self.assertEqual(message, 'Groß')

    def test_ngettext(self):
        with self.app.test_request_context():
            singular = ngettext('%(num)s apple', '%(num)s apples', 1)
            plural = ngettext('%(num)s apple', '%(num)s apples', 2)
            if PY2:
                self.assertEqual(singular, '1 Apfel'.encode('utf-8'))
                self.assertEqual(plural, '2 Äpfel'.encode('utf-8'))
            else:
                self.assertEqual(singular, '1 Apfel')
                self.assertEqual(plural, '2 Äpfel')

    def test_ugettext(self):
        with self.app.test_request_context():
            self.assertEqual(ugettext('Large'), 'Groß')

    def test_ungettext(self):
        with self.app.test_request_context():
            singular = ungettext('%(num)s apple', '%(num)s apples', 1)
            plural = ungettext('%(num)s apple', '%(num)s apples', 2)
            self.assertEqual(singular, '1 Apfel')
            self.assertEqual(plural, '2 Äpfel')

    def test_pgettext(self):
        with self.app.test_request_context():
            self.assertEqual(pgettext('month name', 'May'), 'Mai')

    def test_npgettext(self):
        with self.app.test_request_context():
            singular = npgettext('fruits', 'apple', 'apples', 1)
            plural = npgettext('fruits', 'apple', 'apples', 2)
            self.assertEqual(singular, 'Apfel')
            self.assertEqual(plural, 'Äpfel')


class LocaleSelectorTestCase(GettextTestCase):
    DEFAULT_LOCALE = 'de'

    def test_return_none(self):
        # Returning None selects the default locale
        self.locale_selector.return_value = None
        with self.app.test_request_context():
            self.assertEqual(ugettext('Large'), 'Groß')
        self.locale_selector.assert_any_call()

    def test_return_locale(self):
        # Returning Locale selects that locale
        self.locale_selector.return_value = Locale.parse('zh_CN')
        with self.app.test_request_context():
            self.assertEqual(ugettext('Large'), '大型')
        self.locale_selector.assert_any_call()

    def test_return_text(self):
        # Returning text selects locale denoted by the text
        self.locale_selector.return_value = 'zh_CN'
        with self.app.test_request_context():
            self.assertEqual(ugettext('Large'), '大型')
        self.locale_selector.assert_any_call()

    def test_cache_in_request(self):
        # Result of locale selector is cached within the request by default
        with self.app.test_request_context():
            ugettext('Large')
            self.assertEqual(self.locale_selector.call_count, 1)
            ungettext('%(num)s apple', '%(num)s apples', 1)
            self.assertEqual(self.locale_selector.call_count, 1)
            ugettext('Large')
            self.assertEqual(self.locale_selector.call_count, 1)

    def test_no_cache_between_requests(self):
        # Result of locale selector is not cached between requests
        with self.app.test_request_context():
            ugettext('Large')
            self.assertEqual(self.locale_selector.call_count, 1)
        with self.app.test_request_context():
            ungettext('%(num)s apple', '%(num)s apples', 1)
            self.assertEqual(self.locale_selector.call_count, 2)
        with self.app.test_request_context():
            ugettext('Large')
            self.assertEqual(self.locale_selector.call_count, 3)

    def test_refresh(self):
        # Calling refresh() clears the cache
        with self.app.test_request_context():
            ugettext('Large')
            self.assertEqual(self.locale_selector.call_count, 1)
            self.xuanzang.refresh()
            ungettext('%(num)s apple', '%(num)s apples', 1)
            self.assertEqual(self.locale_selector.call_count, 2)
            self.xuanzang.refresh()
            ugettext('Large')
            self.assertEqual(self.locale_selector.call_count, 3)
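The caching behaviour exercised by these tests reduces to one small pattern: memoise the selector's result on request-local storage, and drop it on `refresh()`. The sketch below is only an illustration of that pattern, not Flask-Xuanzang's actual implementation; `RequestScope`, `select_locale`, and `refresh` are invented names standing in for the extension's internals.

```python
class RequestScope:
    """Stand-in for a per-request storage object (like Flask's `g`)."""
    pass


def select_locale(scope, selector):
    # Call the (possibly expensive) selector once per request,
    # caching its result on the request-scoped object.
    if not hasattr(scope, "_locale"):
        scope._locale = selector()
    return scope._locale


def refresh(scope):
    # Mirror of Xuanzang.refresh(): forget the cached selection.
    if hasattr(scope, "_locale"):
        del scope._locale


calls = []
selector = lambda: calls.append(1) or "de"  # records each invocation

scope = RequestScope()
assert select_locale(scope, selector) == "de"
assert select_locale(scope, selector) == "de"
assert len(calls) == 1   # cached within the "request"
refresh(scope)
select_locale(scope, selector)
assert len(calls) == 2   # refresh() forces re-selection
```

A fresh `RequestScope` per request also reproduces `test_no_cache_between_requests`: the cache lives and dies with the request object.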


@patch('babel.support.Translations.load')
class GetTranslationCacheTestCase(GettextTestCase):
    DEFAULT_LOCALE = 'de'

    def test_cache(self, mock_load):
        # Translation loading is cached for each locale
        with self.app.test_request_context():
            # The cache is empty, always load from disk
            ugettext('Large')
            self.assertEqual(mock_load.call_count, 1)
            # Locale is not changed, load from cache
            ugettext('Large')
            self.assertEqual(mock_load.call_count, 1)
            # The new locale is not cached
            self.xuanzang.refresh()  # Clears locale selector cache
            self.locale_selector.return_value = 'zh_CN'
            ugettext('Large')
            self.assertEqual(mock_load.call_count, 2)
            # The previous locale is still cached
            self.xuanzang.refresh()  # Clears locale selector cache
            self.locale_selector.return_value = 'de'
            ugettext('Large')
            self.assertEqual(mock_load.call_count, 2)
            # The new locale is still cached
            self.xuanzang.refresh()  # Clears locale selector cache
            self.locale_selector.return_value = 'zh_CN'
            ugettext('Large')
            self.assertEqual(mock_load.call_count, 2)

    def test_cache_between_requests(self, mock_load):
        # Translation loading is cached across requests;
        # requests have no means to affect translations for a specified locale
        # The cache is empty, always load from disk
        with self.app.test_request_context():
            ugettext('Large')
            self.assertEqual(mock_load.call_count, 1)
        # Locale is not changed, load from cache
        with self.app.test_request_context():
            ugettext('Large')
            self.assertEqual(mock_load.call_count, 1)
        # The new locale is not cached
        with self.app.test_request_context():
            self.locale_selector.return_value = 'zh_CN'
            ugettext('Large')
            self.assertEqual(mock_load.call_count, 2)
        # The previous locale is still cached
        with self.app.test_request_context():
            self.locale_selector.return_value = 'de'
            ugettext('Large')
            self.assertEqual(mock_load.call_count, 2)
        # The new locale is still cached
        with self.app.test_request_context():
            self.locale_selector.return_value = 'zh_CN'
            ugettext('Large')
            self.assertEqual(mock_load.call_count, 2)

    def test_clear_cache_in_request(self, mock_load):
        # refresh_translations() clears the translation cache in a request
        with self.app.test_request_context():
            ugettext('Large')
            self.assertEqual(mock_load.call_count, 1)
            self.xuanzang.refresh_translations()
            ugettext('Large')
            self.assertEqual(mock_load.call_count, 2)

    def test_clear_cache_between_requests(self, mock_load):
        # refresh_translations() clears the translation cache between requests
        with self.app.test_request_context():
            ugettext('Large')
            self.assertEqual(mock_load.call_count, 1)
        with self.app.test_request_context():
            self.xuanzang.refresh_translations()  # Application context needed
            ugettext('Large')
            self.assertEqual(mock_load.call_count, 2)


class LazyGettextTestCase(GettextTestCase):
    DEFAULT_LOCALE = 'de'

    def test_not_lazy(self):
        # non-lazy gettext should be called inside an application context
        self.assertRaises(RuntimeError, gettext, 'Large')

    def test_lazy_gettext(self):
        message = lazy_gettext('Large')
        with self.app.test_request_context():
            if PY2:
                self.assertEqual(message, 'Groß'.encode('utf-8'))
            else:
                self.assertEqual(message, 'Groß')

    def test_lazy_ngettext(self):
        singular = lazy_ngettext('%(num)s apple', '%(num)s apples', 1)
        plural = lazy_ngettext('%(num)s apple', '%(num)s apples', 2)
        with self.app.test_request_context():
            if PY2:
                self.assertEqual(singular, '1 Apfel'.encode('utf-8'))
                self.assertEqual(plural, '2 Äpfel'.encode('utf-8'))
            else:
                self.assertEqual(singular, '1 Apfel')
                self.assertEqual(plural, '2 Äpfel')

    def test_lazy_ugettext(self):
        message = lazy_ugettext('Large')
        with self.app.test_request_context():
            self.assertEqual(message, 'Groß')

    def test_lazy_ungettext(self):
        singular = lazy_ungettext('%(num)s apple', '%(num)s apples', 1)
        plural = lazy_ungettext('%(num)s apple', '%(num)s apples', 2)
        with self.app.test_request_context():
            self.assertEqual(singular, '1 Apfel')
            self.assertEqual(plural, '2 Äpfel')

    def test_lazy_pgettext(self):
        message = lazy_pgettext('month name', 'May')
        with self.app.test_request_context():
            self.assertEqual(message, 'Mai')

    def test_lazy_npgettext(self):
        singular = lazy_npgettext('fruits', 'apple', 'apples', 1)
        plural = lazy_npgettext('fruits', 'apple', 'apples', 2)
        with self.app.test_request_context():
            self.assertEqual(singular, 'Apfel')
            self.assertEqual(plural, 'Äpfel')


class LazyGettextLocaleCacheTestCase(GettextTestCase):
    DEFAULT_LOCALE = 'de'

    def test_cache(self):
        message = lazy_ugettext('Large')
        with self.app.test_request_context():
            len(message)  # Triggers ugettext
            self.assertEqual(self.locale_selector.call_count, 1)
            len(message)  # Triggers ugettext
            self.assertEqual(self.locale_selector.call_count, 1)

    def test_refresh_cache(self):
        message = lazy_ugettext('Large')
        with self.app.test_request_context():
            len(message)  # Triggers ugettext
            self.assertEqual(self.locale_selector.call_count, 1)
            self.xuanzang.refresh()
            len(message)  # Triggers ugettext
            self.assertEqual(self.locale_selector.call_count, 2)
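The lazy variants tested above work because the returned object postpones the actual translation until it is used, which is why `lazy_gettext` succeeds outside an application context while plain `gettext` raises `RuntimeError`. A minimal, self-contained proxy sketch (`LazyString` and `fake_ugettext` are invented for illustration, not Flask-Xuanzang's classes) shows the core idea:

```python
class LazyString:
    """Minimal lazy-evaluation proxy: the translation function runs
    only when the value is actually used, not when it is created."""

    def __init__(self, func, *args):
        self._func = func
        self._args = args

    def _value(self):
        return self._func(*self._args)

    def __str__(self):
        return self._value()

    def __len__(self):
        return len(self._value())


log = []

def fake_ugettext(message):
    # Stand-in for ugettext(); records that a lookup actually happened.
    log.append(message)
    return {"Large": "Groß"}[message]


msg = LazyString(fake_ugettext, "Large")
assert log == []            # nothing translated at creation time
assert str(msg) == "Groß"   # first use triggers the lookup
assert len(msg) == 4        # each use re-evaluates (no memoisation here)
assert log == ["Large", "Large"]
```

Because every use re-evaluates, the `LazyGettextLocaleCacheTestCase` above observes the locale selector being called again after `refresh()` even for a message object created earlier.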


class MethodTestCase(GettextTestCase):
    DEFAULT_LOCALE = 'de'

    def test_gettext(self):
        self.locale_selector.return_value = None
        with self.app.test_request_context():
            self.assertEqual(self.xuanzang.ugettext('Large'), 'Groß')

    def test_locale_selector(self):
        self.locale_selector.return_value = 'zh_CN'
        with self.app.test_request_context():
            self.assertEqual(self.xuanzang.ugettext('Large'), '大型')
        self.locale_selector.assert_any_call()

    def test_not_lazy(self):
        self.assertRaises(RuntimeError, self.xuanzang.gettext, 'Large')

    def test_lazy_gettext(self):
        message = self.xuanzang.lazy_gettext('Large')
        with self.app.test_request_context():
            if PY2:
                self.assertEqual(message, 'Groß'.encode('utf-8'))
            else:
                self.assertEqual(message, 'Groß')

    def test_lazy_ngettext(self):
        singular = self.xuanzang.lazy_ngettext('%(num)s apple',
                                               '%(num)s apples', 1)
        plural = self.xuanzang.lazy_ngettext('%(num)s apple',
                                             '%(num)s apples', 2)
        with self.app.test_request_context():
            if PY2:
                self.assertEqual(singular, '1 Apfel'.encode('utf-8'))
                self.assertEqual(plural, '2 Äpfel'.encode('utf-8'))
            else:
                self.assertEqual(singular, '1 Apfel')
                self.assertEqual(plural, '2 Äpfel')

    def test_lazy_ugettext(self):
        message = self.xuanzang.lazy_ugettext('Large')
        with self.app.test_request_context():
            self.assertEqual(message, 'Groß')

    def test_lazy_ungettext(self):
        singular = self.xuanzang.lazy_ungettext('%(num)s apple',
                                                '%(num)s apples', 1)
        plural = self.xuanzang.lazy_ungettext('%(num)s apple',
                                              '%(num)s apples', 2)
        with self.app.test_request_context():
            self.assertEqual(singular, '1 Apfel')
            self.assertEqual(plural, '2 Äpfel')

    def test_lazy_pgettext(self):
        message = self.xuanzang.lazy_pgettext('month name', 'May')
        with self.app.test_request_context():
            self.assertEqual(message, 'Mai')

    def test_lazy_npgettext(self):
        singular = self.xuanzang.lazy_npgettext('fruits', 'apple', 'apples', 1)
        plural = self.xuanzang.lazy_npgettext('fruits', 'apple', 'apples', 2)
        with self.app.test_request_context():
            self.assertEqual(singular, 'Apfel')
            self.assertEqual(plural, 'Äpfel')
| 37.663978 | 79 | 0.62665 | 1,615 | 14,011 | 5.247678 | 0.08483 | 0.120354 | 0.050619 | 0.069027 | 0.842006 | 0.80295 | 0.768614 | 0.744661 | 0.672094 | 0.632094 | 0 | 0.008264 | 0.265934 | 14,011 | 371 | 80 | 37.765499 | 0.815751 | 0.085504 | 0 | 0.718519 | 0 | 0 | 0.084344 | 0.002425 | 0 | 0 | 0 | 0 | 0.274074 | 1 | 0.12963 | false | 0 | 0.048148 | 0 | 0.22963 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
33dff7222e80d992202b4a0da169be22ff03db86 | 152 | py | Python | freelance/invoices/admin.py | halfnibble/django-intro | 0564e85fdcd4bbfdeed41dcea0740b51dd276b2d | [
"MIT"
] | null | null | null | freelance/invoices/admin.py | halfnibble/django-intro | 0564e85fdcd4bbfdeed41dcea0740b51dd276b2d | [
"MIT"
] | null | null | null | freelance/invoices/admin.py | halfnibble/django-intro | 0564e85fdcd4bbfdeed41dcea0740b51dd276b2d | [
"MIT"
] | null | null | null | from django.contrib import admin
from .models import Invoice


class InvoiceAdmin(admin.ModelAdmin):
    pass


admin.site.register(Invoice, InvoiceAdmin)
1d561be3e33694502f795a1b9edb5226c14aa43f | 83 | py | Python | sircel/__init__.py | axtambe/DropseqBarcodeSplitting | 19c78591533e03cf1addeb691db929761752cc40 | [
"MIT"
] | 44 | 2017-04-29T16:04:45.000Z | 2021-05-18T20:59:38.000Z | sircel/__init__.py | axtambe/DropseqBarcodeSplitting | 19c78591533e03cf1addeb691db929761752cc40 | [
"MIT"
] | 19 | 2017-05-12T16:49:55.000Z | 2021-12-14T16:17:21.000Z | sircel/__init__.py | axtambe/DropseqBarcodeSplitting | 19c78591533e03cf1addeb691db929761752cc40 | [
"MIT"
] | 15 | 2017-05-12T19:19:45.000Z | 2021-12-13T12:59:36.000Z | from sircel.Sircel_master import get_args, run_all
from sircel.Split_reads import * | 41.5 | 50 | 0.855422 | 14 | 83 | 4.785714 | 0.714286 | 0.298507 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.096386 | 83 | 2 | 51 | 41.5 | 0.893333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1d7f580f8161bb9b10ad6dc989ea013d89f5e83b | 23 | py | Python | h_softmax/__init__.py | NekuSakuraba/embeddings | 28fce20f24d5e8a287b05c875e28d76c86e1e2d4 | [
"MIT"
] | null | null | null | h_softmax/__init__.py | NekuSakuraba/embeddings | 28fce20f24d5e8a287b05c875e28d76c86e1e2d4 | [
"MIT"
] | null | null | null | h_softmax/__init__.py | NekuSakuraba/embeddings | 28fce20f24d5e8a287b05c875e28d76c86e1e2d4 | [
"MIT"
] | null | null | null | from .utils import Tree | 23 | 23 | 0.826087 | 4 | 23 | 4.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.130435 | 23 | 1 | 23 | 23 | 0.95 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1d8a3ebf9618d7fd802b22b22495c413a494e81e | 98 | py | Python | Example_assignments/ps3dan.py | Dannnno/PyGrader | d72b0038a34e734d006eb50a6fe1faeeb84bccf9 | [
"MIT"
] | 2 | 2017-09-06T00:37:08.000Z | 2020-02-10T19:59:10.000Z | Example_assignments/ps3dan.py | Dannnno/PyGrader | d72b0038a34e734d006eb50a6fe1faeeb84bccf9 | [
"MIT"
] | null | null | null | Example_assignments/ps3dan.py | Dannnno/PyGrader | d72b0038a34e734d006eb50a6fe1faeeb84bccf9 | [
"MIT"
] | null | null | null | ## Dan, a CS whiz
def func1(a): return a + 1
def func2(a): return a[::-1]
def printer(a): print(a)
1d9f4af3ec590a9bbe92a4406e9a91d334cd9f0b | 167 | py | Python | kolibri/core/content/errors.py | MBKayro/kolibri | 0a38a5fb665503cf8f848b2f65938e73bfaa5989 | [
"MIT"
] | 545 | 2016-01-19T19:26:55.000Z | 2022-03-20T00:13:04.000Z | kolibri/core/content/errors.py | MBKayro/kolibri | 0a38a5fb665503cf8f848b2f65938e73bfaa5989 | [
"MIT"
] | 8,329 | 2016-01-19T19:32:02.000Z | 2022-03-31T21:23:12.000Z | kolibri/core/content/errors.py | MBKayro/kolibri | 0a38a5fb665503cf8f848b2f65938e73bfaa5989 | [
"MIT"
] | 493 | 2016-01-19T19:26:48.000Z | 2022-03-28T14:35:05.000Z | from kolibri.core.errors import KolibriError
class InvalidStorageFilenameError(KolibriError):
    pass


class InsufficientStorageSpaceError(KolibriError):
    pass
| 16.7 | 50 | 0.820359 | 14 | 167 | 9.785714 | 0.714286 | 0.233577 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.131737 | 167 | 9 | 51 | 18.555556 | 0.944828 | 0 | 0 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.4 | 0.2 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
1d9fa07f406e3156093fcf3a76dc2679b7ce3114 | 10,201 | py | Python | tests/test_validation.py | J-81/dp_tools | a9e0401e4da0dc8002ea153184d5590bc41feedc | [
"MIT"
] | null | null | null | tests/test_validation.py | J-81/dp_tools | a9e0401e4da0dc8002ea153184d5590bc41feedc | [
"MIT"
] | null | null | null | tests/test_validation.py | J-81/dp_tools | a9e0401e4da0dc8002ea153184d5590bc41feedc | [
"MIT"
] | null | null | null | """ Tests for validation report results, relies on test for loaders passing """
from decimal import DivisionByZero
from pathlib import Path
import os

from pytest import MonkeyPatch
import pytest

from dp_tools.bulkRNASeq.entity import BulkRNASeqSample
from dp_tools.bulkRNASeq.loaders import (
    load_BulkRNASeq_STAGE_00,
    load_BulkRNASeq_STAGE_01,
)
from dp_tools.bulkRNASeq.vv_protocols import STAGE, BulkRNASeq_VVProtocol


@pytest.fixture(autouse=True)
def mock_dev_exceptions(monkeypatch):
    monkeypatch.setattr(
        "dp_tools.core.check_model.ALLOWED_DEV_EXCEPTIONS", (DivisionByZero)
    )  # ensure unhandled developer exceptions are raised


def test_bulkRNASeq_STAGE00_validation_paired(caplog, glds194_dataSystem_STAGE00):
    """This tests validation as it would be run on dataset after demultiplexing"""
    CAPLEVEL = 20
    caplog.set_level(CAPLEVEL)
    ds = glds194_dataSystem_STAGE00
    vv_protocol = BulkRNASeq_VVProtocol(
        dataset=ds.dataset, dry_run=True, protocol_name="only raw"
    )

    with caplog.at_level(CAPLEVEL):
        vv_protocol.validate_all()

    assert isinstance(vv_protocol.flags["dataset"], dict)
    assert isinstance(vv_protocol.flags["sample"], dict)
    assert isinstance(vv_protocol.flags["component"], dict)

    # second, run with full validation
    with caplog.at_level(CAPLEVEL):
        caplog.clear()
        with MonkeyPatch.context() as m:
            vv_protocol.validate_all()
            df = vv_protocol.flags_to_df()
            df_verbose = vv_protocol.flags_to_df(schema="verbose")

    # assert that no failing flags were raised
    # assert df["flag_code"].max() == 20  # not needed as this tests the truncated data rather than the logic

    # check if appropriate number of flags are raised
    # Currently:
    #   Dataset check : 2
    #   Sample check : 1 per sample
    #   Component checks :
    #     Reads : 1 per component
    assert len(df) == 41
    assert [0] == list(
        df["flag_code"].unique()
    )  # only the dry run code should be returned


def test_bulkRNASeq_STAGE00_validation_paired_no_dry_run(
    caplog, glds194_dataSystem_STAGE00
):
    """This tests validation as it would be run on dataset after demultiplexing"""
    CAPLEVEL = 20
    caplog.set_level(CAPLEVEL)
    ds = glds194_dataSystem_STAGE00
    vv_protocol = BulkRNASeq_VVProtocol(dataset=ds.dataset, protocol_name="only raw")

    with caplog.at_level(CAPLEVEL):
        vv_protocol.validate_all()

    assert isinstance(vv_protocol.flags["dataset"], dict)
    assert isinstance(vv_protocol.flags["sample"], dict)
    assert isinstance(vv_protocol.flags["component"], dict)

    # second, run with full validation
    with caplog.at_level(CAPLEVEL):
        caplog.clear()
        with MonkeyPatch.context() as m:
            vv_protocol.validate_all()
            df = vv_protocol.flags_to_df()
            df_verbose = vv_protocol.flags_to_df(schema="verbose")

    # assert that no failing flags were raised
    # assert df["flag_code"].max() == 20  # not needed as this tests the truncated data rather than the logic

    # check if appropriate number of flags are raised
    # Currently:
    #   Dataset check : 2
    #   Sample check : 1 per sample
    #   Component checks :
    #     Reads : 1 per component
    assert len(df) == 41


def test_bulkRNASeq_STAGE00_validation_paired_with_skips(
    caplog, glds194_dataSystem_STAGE00
):
    """This tests validation as it would be run on dataset after demultiplexing"""
    CAPLEVEL = 20
    caplog.set_level(CAPLEVEL)
    ds = glds194_dataSystem_STAGE00
    vv_protocol = BulkRNASeq_VVProtocol(
        dataset=ds.dataset,
        protocol_name="only raw",
        dry_run=True,
        skip_these_checks={"DATASET_RAWREADS_0001"},
    )

    with caplog.at_level(CAPLEVEL):
        vv_protocol.validate_all()

    assert isinstance(vv_protocol.flags["dataset"], dict)
    assert isinstance(vv_protocol.flags["sample"], dict)
    assert isinstance(vv_protocol.flags["component"], dict)

    # second, run with full validation
    with caplog.at_level(CAPLEVEL):
        caplog.clear()
        with MonkeyPatch.context() as m:
            vv_protocol.validate_all()
            df = vv_protocol.flags_to_df()
            df_verbose = vv_protocol.flags_to_df(schema="verbose")

    # assert that no failing flags were raised
    # assert df["flag_code"].max() == 20  # not needed as this tests the truncated data rather than the logic

    # check if appropriate number of flags are raised
    # Currently:
    #   Dataset check : 2
    #   Sample check : 1 per sample
    #   Component checks :
    #     Reads : 1 per component
    assert len(df) == 41
    assert 0 in df["flag_code"].values  # ensure dry run flag codes returned
    assert 1 in df["flag_code"].values  # ensure skip flag codes returned


def test_bulkRNASeq_STAGE00_validation_paired_with_config(
    caplog, glds194_dataSystem_STAGE00
):
    """This tests validation as it would be run on dataset after demultiplexing"""
    CAPLEVEL = 20
    caplog.set_level(CAPLEVEL)
    ds = glds194_dataSystem_STAGE00
    vv_protocol = BulkRNASeq_VVProtocol(
        dataset=ds.dataset, config=("bulkRNASeq", "0"), protocol_name="only raw"
    )

    with caplog.at_level(CAPLEVEL):
        vv_protocol.validate_all()

    assert isinstance(vv_protocol.flags["dataset"], dict)
    assert isinstance(vv_protocol.flags["sample"], dict)
    assert isinstance(vv_protocol.flags["component"], dict)

    # second, run with full validation
    with caplog.at_level(CAPLEVEL):
        caplog.clear()
        with MonkeyPatch.context() as m:
            vv_protocol.validate_all()
            df = vv_protocol.flags_to_df()
            df_verbose = vv_protocol.flags_to_df(schema="verbose")

    # assert that no failing flags were raised
    # assert df["flag_code"].max() == 20  # not needed as this tests the truncated data rather than the logic

    # check if appropriate number of flags are raised
    # Currently:
    #   Dataset check : 2
    #   Sample check : 1 per sample
    #   Component checks :
    #     Reads : 1 per component
    assert len(df) == 41
    assert 0 not in df["flag_code"].values  # no dry run flag codes returned
    assert 1 not in df["flag_code"].values  # no skip flag codes returned


def test_bulkRNASeq_STAGE00_validation_single(caplog, glds48_dataSystem_STAGE00):
    """This tests validation as it would be run on dataset after demultiplexing"""
    CAPLEVEL = 20
    caplog.set_level(CAPLEVEL)
    ds = glds48_dataSystem_STAGE00
    vv_protocol = BulkRNASeq_VVProtocol(
        dataset=ds.dataset, protocol_name="only raw", dry_run=True
    )

    with MonkeyPatch.context() as m:
        vv_protocol.validate_all()
        df = vv_protocol.flags_to_df()
        df_verbose = vv_protocol.flags_to_df(schema="verbose")

    # check if appropriate number of flags are raised
    # Currently:
    #   Dataset check : 2
    #   Sample check : 1 per sample
    #   Component checks
    #     Reads : 1 per component (1 per sample)
    assert len(df) == 30
"""
def test_bulkRNASeq_STAGE01_validation_paired(glds194_dataSystem_STAGE01):
ds = glds194_dataSystem_STAGE01
vv_protocol = BulkRNASeq_VVProtocol(
dataset=ds.dataset, stage_names=STAGE.Reads_PreProcessed, dry_run=True
)
vv_protocol.validate_all()
df = vv_protocol.flags_to_df()
assert len(df) == 81
def test_bulkRNASeq_STAGE01_validation_single(glds48_dataSystem_STAGE01):
ds = glds48_dataSystem_STAGE01
vv_protocol = BulkRNASeq_VVProtocol(
dataset=ds.dataset, stage_names=STAGE.Reads_PreProcessed, dry_run=True
)
vv_protocol.validate_all()
df = vv_protocol.flags_to_df()
assert len(df) == 59
def test_bulkRNASeq_STAGE02_validation_paired(glds194_dataSystem_STAGE02):
ds = glds194_dataSystem_STAGE02
vv_protocol = BulkRNASeq_VVProtocol(
dataset=ds.dataset, stage_names=STAGE.GenomeAligned, dry_run=True
)
vv_protocol.validate_all()
df = vv_protocol.flags_to_df()
assert len(df) == 95
def test_bulkRNASeq_STAGE02_validation_single(glds48_dataSystem_STAGE02):
ds = glds48_dataSystem_STAGE02
vv_protocol = BulkRNASeq_VVProtocol(
dataset=ds.dataset, stage_names=STAGE.GenomeAligned, dry_run=True
)
vv_protocol.validate_all()
df = vv_protocol.flags_to_df()
assert len(df) == 74
def test_bulkRNASeq_STAGE03_validation_paired(glds194_dataSystem_STAGE03):
ds = glds194_dataSystem_STAGE03
vv_protocol = BulkRNASeq_VVProtocol(
dataset=ds.dataset, stage_names=STAGE.GeneCounted, dry_run=True
)
vv_protocol.validate_all()
df = vv_protocol.flags_to_df()
assert len(df) == 97
def test_bulkRNASeq_STAGE03_validation_single(glds48_dataSystem_STAGE03):
ds = glds48_dataSystem_STAGE03
vv_protocol = BulkRNASeq_VVProtocol(
dataset=ds.dataset, stage_names=STAGE.GeneCounted, dry_run=True
)
vv_protocol.validate_all()
df = vv_protocol.flags_to_df()
assert len(df) == 76
""" # DISABLED PENDING REWORK OF ARGS + CONFIG APPROACH


def test_bulkRNASeq_STAGE04_validation_paired(glds194_dataSystem_STAGE04):
    ds = glds194_dataSystem_STAGE04
    vv_protocol = BulkRNASeq_VVProtocol(
        dataset=ds.dataset, protocol_name="full", dry_run=True
    )
    vv_protocol.validate_all()
    df = vv_protocol.flags_to_df()
    assert len(df) == 98
    assert df["flag_code"].max() < 90


def test_bulkRNASeq_STAGE04_validation_single(glds48_dataSystem_STAGE04):
    ds = glds48_dataSystem_STAGE04
    vv_protocol = BulkRNASeq_VVProtocol(
        dataset=ds.dataset, protocol_name="full", dry_run=True
    )
    vv_protocol.validate_all()
    df = vv_protocol.flags_to_df()
    assert len(df) == 77
    assert df["flag_code"].max() < 90
| 32.906452 | 116 | 0.681012 | 1,264 | 10,201 | 5.248418 | 0.124209 | 0.090443 | 0.067832 | 0.046126 | 0.841574 | 0.800573 | 0.788212 | 0.771932 | 0.771932 | 0.771932 | 0 | 0.028847 | 0.2388 | 10,201 | 309 | 117 | 33.012945 | 0.825499 | 0.218508 | 0 | 0.632353 | 0 | 0 | 0.053041 | 0.011655 | 0 | 0 | 0 | 0 | 0.191176 | 1 | 0.058824 | false | 0 | 0.058824 | 0 | 0.117647 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1da809271fe996fc9a0dd49b264abc97918af988 | 1,572 | py | Python | generate.py | mj-will/bbh-ccnn | bb81b30024aaae5d51b1c3dd41ee2dc0efede1a2 | [
"MIT"
] | null | null | null | generate.py | mj-will/bbh-ccnn | bb81b30024aaae5d51b1c3dd41ee2dc0efede1a2 | [
"MIT"
] | null | null | null | generate.py | mj-will/bbh-ccnn | bb81b30024aaae5d51b1c3dd41ee2dc0efede1a2 | [
"MIT"
] | null | null | null | import random
def generate_sequence(submap_count, method="naive", sampler_count=2):
if method == "naive":
return [0] * submap_count
elif method == "lattice":
# Gets log2(submap_count)
index = submap_count.bit_length() - 1
return lattice_generator[index]
elif method == "random":
return [random.randint(0, sampler_count-1) for _ in range(submap_count)]
else:
raise ValueError("The method for generating samplers should be either naive, lattice, or random.")
# Defines on what submaps to apply the standard checkered sampler (0) or the complementary sampler (1)
# in order to generate a low-discrepancy lattice sampling, up to 10 subsampling steps.
lattice_generator = [
[0],
[0, 0],
[0, 1, 0, 1],
[0, 1, 1, 0, 0, 0, 1, 1],
[0,0,1,0,1,0,0,1,0,1,0,0,1,0,1,0],
[0,0,0,1,1,0,0,0,1,1,0,0,0,1,1,0,0,0,1,1,0,0,0,1,1,0,0,0,1,1,0,0],
[0,0,0,0,0,1,1,1,1,1,0,0,0,0,0,1,1,1,1,1,0,0,0,0,0,1,1,1,1,1,0,0,0,0,
0,1,1,1,1,1,0,0,0,0,0,1,1,1,1,1,0,0,0,0,0,1,1,1,1,1,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,
0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,0,0,
0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0],
[0]*20 + [1]*20 + [0]*20 + [1]*19 + [0]*20 + [1]*20 + [0]*19 + [1]*20 + [0]*20 + [1]*19 + [0]*20 + [1]*20 + [0]*19,
    # Alternating 0s and 1s
[i % 2 for i in range(512)],
] | 44.914286 | 123 | 0.537532 | 390 | 1,572 | 2.135897 | 0.161538 | 0.271309 | 0.320528 | 0.326531 | 0.352941 | 0.348139 | 0.342137 | 0.342137 | 0.342137 | 0.342137 | 0 | 0.249799 | 0.21056 | 1,572 | 35 | 124 | 44.914286 | 0.421434 | 0.146947 | 0 | 0 | 1 | 0 | 0.075542 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.038462 | false | 0 | 0.038462 | 0 | 0.192308 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
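A hypothetical sanity check (not part of generate.py) of the bit-length indexing used by the "lattice" branch: for a power-of-two `submap_count`, `bit_length() - 1` recovers the exponent, and the table above is expected to hold one pattern of length `2**index` per index.

```python
# For submap_count == 2**k, int.bit_length() gives k + 1, so the
# "lattice" branch indexes lattice_generator with k.
for k in range(10):
    assert (2 ** k).bit_length() - 1 == k

# Expected pattern lengths for the 10 entries in the table above:
# 1, 2, 4, ..., 512 — one sampler choice per submap.
expected_lengths = [2 ** k for k in range(10)]
assert expected_lengths == [1, 2, 4, 8, 16, 32, 64, 128, 256, 512]
```

This only verifies the indexing arithmetic; whether every entry in `lattice_generator` actually has the matching length is an assumption based on the table's construction.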
63647c387655a670815ac8a9e4b33b74ea03c6bf | 212 | py | Python | journalism/__init__.py | higs4281/journalism | d0de8371544c15ffb70cec884c6a851f4faaa955 | [
"MIT"
] | 1 | 2015-07-07T01:46:31.000Z | 2015-07-07T01:46:31.000Z | journalism/__init__.py | higs4281/journalism | d0de8371544c15ffb70cec884c6a851f4faaa955 | [
"MIT"
] | null | null | null | journalism/__init__.py | higs4281/journalism | d0de8371544c15ffb70cec884c6a851f4faaa955 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
from journalism.columns import TextType, BooleanType, NumberType, DateType
from journalism.exceptions import *
from journalism.table import Table
def save():
raise NotImplementedError
| 23.555556 | 74 | 0.801887 | 25 | 212 | 6.8 | 0.72 | 0.247059 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.127358 | 212 | 8 | 75 | 26.5 | 0.918919 | 0.09434 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | true | 0 | 0.6 | 0 | 0.8 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
891e7509a01c5664bbeedb5848e41681397538f1 | 104 | py | Python | landlab/values/__init__.py | saraahsimon/landlab | 1cf809b685efbccaaa149b5899a600c3ccedf30f | [
"MIT"
] | null | null | null | landlab/values/__init__.py | saraahsimon/landlab | 1cf809b685efbccaaa149b5899a600c3ccedf30f | [
"MIT"
] | null | null | null | landlab/values/__init__.py | saraahsimon/landlab | 1cf809b685efbccaaa149b5899a600c3ccedf30f | [
"MIT"
] | null | null | null | from .synthetic import random, plane, constant, sine
__all__ = ["random", "plane", "constant", "sine"]
| 26 | 52 | 0.692308 | 12 | 104 | 5.666667 | 0.666667 | 0.323529 | 0.558824 | 0.676471 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.134615 | 104 | 3 | 53 | 34.666667 | 0.755556 | 0 | 0 | 0 | 0 | 0 | 0.221154 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
896e6c000ee32de12694907a60cabaa41c9d7b03 | 47 | py | Python | photonqat/Gaussianformula/__init__.py | ryosukehata/Photonqat | d5e320d3cc9ed94f6d63b1721f6871f13a0e6ea7 | [
"Apache-2.0"
] | 25 | 2018-09-16T22:54:48.000Z | 2019-02-22T01:21:30.000Z | blueqat/photonqat/Gaussianformula/__init__.py | mdrft/blueqat | 6c5f26b377bc3ce0d02adec8b9132d70870b3d95 | [
"Apache-2.0"
] | 22 | 2018-09-20T02:47:56.000Z | 2019-02-08T05:25:30.000Z | blueqat/photonqat/Gaussianformula/__init__.py | mdrft/blueqat | 6c5f26b377bc3ce0d02adec8b9132d70870b3d95 | [
"Apache-2.0"
] | 5 | 2019-12-14T08:39:03.000Z | 2021-06-30T06:51:24.000Z | from .baseFunc import *
from .ordering import * | 23.5 | 23 | 0.765957 | 6 | 47 | 6 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148936 | 47 | 2 | 24 | 23.5 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
897c0ddb8c06a25fb00bb8a79b068f8b101d48e1 | 95 | py | Python | test_project/test_lib/dummy.py | depop/python-flexisettings | f8e0b35fe061b781cff8d7b2b4a9875ab1106e06 | [
"Apache-2.0"
] | 1 | 2018-03-15T11:14:35.000Z | 2018-03-15T11:14:35.000Z | test_project/test_lib/dummy.py | depop/python-flexisettings | f8e0b35fe061b781cff8d7b2b4a9875ab1106e06 | [
"Apache-2.0"
] | null | null | null | test_project/test_lib/dummy.py | depop/python-flexisettings | f8e0b35fe061b781cff8d7b2b4a9875ab1106e06 | [
"Apache-2.0"
] | null | null | null | from test_lib.conf import settings
def get_setting(name):
return getattr(settings, name)
| 15.833333 | 34 | 0.768421 | 14 | 95 | 5.071429 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.157895 | 95 | 5 | 35 | 19 | 0.8875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
89b3f8091e7ba5043bc9ffc4828447f291870593 | 44,729 | py | Python | genomel/slurm/utils/workflow.py | uc-cdis/cwl | 7f01768479e6a77a5caf6b3382174aa038ba05fc | [
"Apache-2.0"
] | 1 | 2020-02-19T15:53:03.000Z | 2020-02-19T15:53:03.000Z | genomel/slurm/utils/workflow.py | uc-cdis/genomel_pipelines | c661469505c606e1353f23c21a6654724a9d8d63 | [
"Apache-2.0"
] | 10 | 2018-10-16T00:56:01.000Z | 2019-02-05T20:53:28.000Z | genomel/slurm/utils/workflow.py | uc-cdis/genomel_pipelines | c661469505c606e1353f23c21a6654724a9d8d63 | [
"Apache-2.0"
] | 1 | 2019-07-16T15:41:12.000Z | 2019-07-16T15:41:12.000Z | '''pipeline runner'''
import os
import json
import time
import glob
import logging
import tempfile
import datetime
import yaml
import utils.pipeline
import postgres.metrics
import postgres.utils
def filter_list(alist, blist):
'''remove blist from alist'''
return list(set(alist)-set(blist))
def get_cwl_steps(cwlwf):
'''get cwl steps names'''
cwl = dict()
with open(cwlwf, 'r') as fhandle:
        cwl = yaml.safe_load(fhandle)
cwl_steps = cwl['steps'].keys()
return cwl_steps
def dict_to_string(list_of_dict):
'''convert list of dict to list of strings'''
list_of_string = []
if list_of_dict:
for i in list_of_dict:
list_of_string.append(str(i))
else:
return None
return list_of_string
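A small, self-contained illustration of the helper semantics above (input values are invented). Note that filter_list goes through set(), so it drops duplicates and does not preserve order, and dict_to_string returns None for an empty input.

```python
# filter_list semantics: set difference, so duplicates collapse and
# ordering is not preserved.
alist = ['align', 'genotype', 'upload', 'upload']
blist = ['genotype']
remaining = list(set(alist) - set(blist))
assert sorted(remaining) == ['align', 'upload']

# dict_to_string semantics: stringify each dict, or None for a falsy list.
failures = [{'message': 'disk full'}]
as_strings = [str(i) for i in failures] if failures else None
assert as_strings == ["{'message': 'disk full'}"]
```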
def run_alignment(args):
'''run alignment'''
input_data = utils.pipeline.load_template_json()['alignment_template']
sample_list = [args.aliquot_id] * len(args.readgroup_names)
pu_list = [args.job_uuid] * len(args.readgroup_names)
rgl_list = ['@RG\tCN:CGR\tPL:ILLUMINA\tID:{RG}\tSM:{SM}\tPU:{PU}\tLB:Library'\
.format(RG=rg, SM=sm, PU=pu) \
for rg, sm, pu in zip(args.readgroup_names, sample_list, pu_list)]
input_data['job_uuid'] = args.job_uuid
input_data['fastq_read1_uri'] = args.fastq_read1_uri
input_data['fastq_read2_uri'] = args.fastq_read2_uri
input_data['fastq_read1_md5'] = args.fastq_read1_md5
input_data['fastq_read2_md5'] = args.fastq_read2_md5
input_data['readgroup_lines'] = rgl_list
input_data['readgroup_names'] = args.readgroup_names
workflow_meta = {
'basedir': args.basedir,
'pipeline': args.choice,
'project': args.project,
'job_uuid': args.job_uuid,
'aliquot_id': args.aliquot_id,
'input_table': args.input_table,
'cwlwf': args.cwlwf
}
genomel = GenomelIndiv(
workflow_meta=workflow_meta,
input_data=input_data,
psql_conf=args.psql_conf
)
genomel.run()
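The readgroup construction in run_alignment can be sketched standalone: one SAM @RG header line per read group, with the aliquot id as SM and the job uuid as PU (the sample values below are invented).

```python
readgroup_names = ['RG1', 'RG2']
sample_list = ['aliquot-A'] * len(readgroup_names)  # aliquot_id repeated
pu_list = ['job-uuid-1'] * len(readgroup_names)     # job_uuid repeated

# Same format string as run_alignment above.
rgl_list = ['@RG\tCN:CGR\tPL:ILLUMINA\tID:{RG}\tSM:{SM}\tPU:{PU}\tLB:Library'
            .format(RG=rg, SM=sm, PU=pu)
            for rg, sm, pu in zip(readgroup_names, sample_list, pu_list)]

assert rgl_list[0] == \
    '@RG\tCN:CGR\tPL:ILLUMINA\tID:RG1\tSM:aliquot-A\tPU:job-uuid-1\tLB:Library'
assert len(rgl_list) == len(readgroup_names)
```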
def run_harmonization(args):
'''run harmonization'''
input_data = utils.pipeline.load_template_json()['harmonization_template']
input_data['job_uuid'] = args.job_uuid
input_data['bam_uri'] = args.bam_uri
input_data['bam_md5'] = args.bam_md5
workflow_meta = {
'basedir': args.basedir,
'pipeline': args.choice,
'project': args.project,
'job_uuid': args.job_uuid,
'aliquot_id': args.aliquot_id,
'input_table': args.input_table,
'cwlwf': args.cwlwf
}
genomel = GenomelIndiv(
workflow_meta=workflow_meta,
input_data=input_data,
psql_conf=args.psql_conf
)
genomel.run()
def run_cohort_genotyping(args):
'''run cohort genotyping'''
cohort_template_json = os.path.join(
os.path.dirname(os.path.dirname(os.path.realpath(__file__))),
"etc/cohort_genotyping.json"
)
input_data = utils.pipeline.load_json(cohort_template_json)
input_data['job_uuid'] = args.job_uuid
input_data['gvcf_files'] = utils.pipeline.create_cwl_array_input(args.gvcf_files_manifest)
input_data['gatk4_genotyping_thread_count'] = args.gatk4_genotyping_thread_count
input_data['number_of_chunks_for_gatk'] = args.number_of_chunks_for_gatk
input_data['bam_files'] = utils.pipeline.create_cwl_array_input(args.bam_files_manifest)
input_data['freebayes_thread_count'] = args.freebayes_thread_count
input_data['number_of_chunks_for_freebayes'] = args.number_of_chunks_for_freebayes
input_data['upload_s3_bucket'] = os.path.join(
args.upload_s3_bucket,
args.project,
args.batch_id,
args.job_uuid
)
workflow_meta = {
'basedir': args.basedir,
'project': args.project,
'batch_id': args.batch_id,
'job_uuid': args.job_uuid,
'input_table': args.input_table,
'cromwell_config': os.path.join(
os.path.dirname(
os.path.dirname(
os.path.dirname(
os.path.realpath(__file__)
)
)
),
"cromwell/cromwell.conf"
),
'cromwell_jar_path': args.cromwell_jar_path,
'cwlwf': os.path.join(
os.path.dirname(
os.path.dirname(
os.path.dirname(
os.path.realpath(__file__)
)
)
),
"genomel_cohort_genotyping.cwl"
),
'cwl_pack': os.path.join(
os.path.dirname(
os.path.dirname(
os.path.dirname(
os.path.realpath(__file__)
)
)
),
"cwl.zip"
)
}
genomel = GenomelCohort(
workflow_meta=workflow_meta,
input_data=input_data,
psql_conf=args.psql_conf
)
genomel.run()
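The nested os.path.dirname calls above climb three directory levels from this module's `__file__` to reach the repo root before joining the cromwell config, CWL workflow, and cwl.zip paths. A compact illustration with an invented path (posixpath is used so the result is platform-independent):

```python
import posixpath

# Climbing three levels with nested dirname calls, as the workflow_meta
# construction does from __file__ (the example path is made up).
module_path = '/repo/genomel/slurm/utils/workflow.py'
repo_root = posixpath.dirname(
    posixpath.dirname(
        posixpath.dirname(module_path)))
assert repo_root == '/repo/genomel'
assert posixpath.join(repo_root, 'cromwell/cromwell.conf') == \
    '/repo/genomel/cromwell/cromwell.conf'
```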
def run_cohort_gatk(args):
'''run cohort gatk4'''
cohort_template_json = os.path.join(
os.path.dirname(os.path.dirname(os.path.realpath(__file__))),
"etc/cohort_gatk.prod.json"
)
input_data = utils.pipeline.load_json(cohort_template_json)
input_data['job_uuid'] = args.job_uuid
input_data['gvcf_files'] = utils.pipeline.create_cwl_array_input(args.gvcf_files_manifest)
input_data['gatk4_genotyping_thread_count'] = args.gatk4_genotyping_thread_count
input_data['number_of_chunks_for_gatk'] = args.number_of_chunks_for_gatk
input_data['upload_s3_bucket'] = os.path.join(
args.upload_s3_bucket,
args.project,
args.batch_id,
args.job_uuid
)
workflow_meta = {
'basedir': args.basedir,
'project': args.project,
'batch_id': args.batch_id,
'job_uuid': args.job_uuid,
'input_table': args.input_table,
'cromwell_config': os.path.join(
os.path.dirname(
os.path.dirname(
os.path.dirname(
os.path.realpath(__file__)
)
)
),
"cromwell/cromwell.conf"
),
'cromwell_jar_path': args.cromwell_jar_path,
'cwlwf': os.path.join(
os.path.dirname(
os.path.dirname(
os.path.dirname(
os.path.realpath(__file__)
)
)
),
"genomel_cohort_gatk4.cwl"
),
'cwl_pack': os.path.join(
os.path.dirname(
os.path.dirname(
os.path.dirname(
os.path.realpath(__file__)
)
)
),
"cwl.zip"
)
}
genomel = GenomelCohort(
workflow_meta=workflow_meta,
input_data=input_data,
psql_conf=args.psql_conf
)
genomel.run()
def run_cohort_freebayes(args):
'''run cohort genotyping'''
cohort_template_json = os.path.join(
os.path.dirname(os.path.dirname(os.path.realpath(__file__))),
"etc/cohort_freebayes.json"
)
input_data = utils.pipeline.load_json(cohort_template_json)
input_data['job_uuid'] = args.job_uuid
input_data['bed_files'] = utils.pipeline.create_cwl_array_input(args.bed_files_manifest)
input_data['freebayes_thread_count'] = args.freebayes_thread_count
input_data['number_of_chunks_for_freebayes'] = args.number_of_chunks_for_freebayes
input_data['upload_s3_bucket'] = os.path.join(
args.upload_s3_bucket,
args.project,
args.batch_id,
args.job_uuid
)
workflow_meta = {
'basedir': args.basedir,
'project': args.project,
'batch_id': args.batch_id,
'job_uuid': args.job_uuid,
'input_table': args.input_table,
'cromwell_config': os.path.join(
os.path.dirname(
os.path.dirname(
os.path.dirname(
os.path.realpath(__file__)
)
)
),
"cromwell/cromwell.conf"
),
'cromwell_jar_path': args.cromwell_jar_path,
'cwlwf': os.path.join(
os.path.dirname(
os.path.dirname(
os.path.dirname(
os.path.realpath(__file__)
)
)
),
"genomel_cohort_freebayes.cwl"
),
'cwl_pack': os.path.join(
os.path.dirname(
os.path.dirname(
os.path.dirname(
os.path.realpath(__file__)
)
)
),
"cwl.zip"
)
}
genomel = GenomelCohort(
workflow_meta=workflow_meta,
input_data=input_data,
psql_conf=args.psql_conf
)
genomel.run()
def run_fbc(args):
'''run post-qc on freebayes chunks'''
input_data = utils.pipeline.load_template_json()['post_freebayes_template']
input_data['job_uuid'] = args.job_uuid
input_data['vcf']['path'] = args.vcf
workflow_meta = {
'basedir': args.basedir,
'job_uuid': args.job_uuid,
'bed': args.bed,
'cwlwf': args.cwlwf
}
genomel = PostFreebayes(
workflow_meta=workflow_meta,
input_data=input_data,
psql_conf=args.psql_conf
)
genomel.run()
class GenomelIndiv(object):
'''this class describes GenoMEL-Bionimbus Protected Data Cloud pipelines'''
def __init__(self, workflow_meta, input_data, psql_conf):
'''
workflow_meta.keys() = [
'basedir',
'pipeline',
'project',
'job_uuid',
'aliquot_id',
'input_table',
'cwlwf'
]
'''
self.input_data = input_data
self.pg_data = utils.pipeline.pg_data_template()
self.psql_conf = psql_conf
self.psql_class = postgres.metrics.GenomelIndividualMetrics
# setup workflow metadata
self.workflow_meta = workflow_meta
self.workflow_meta['base_s3_loc'] = os.path.join(
self.input_data['upload_s3_bucket'],
self.workflow_meta['job_uuid']
)
self.workflow_meta['log_file'] = None
self.workflow_meta['cwl_input_json'] = self._cwl_input_json()
self.workflow_meta['cwl_output_json'] = self._cwl_output_json()
self.workflow_meta['log_dir'] = None
self.workflow_meta['cwl_log_tar'] = None
self.workflow_meta['cwl_start'] = None
self.workflow_meta['cwl_end'] = None
self.workflow_meta['cwl_failure'] = False
self.workflow_meta['runner_failure'] = False
self.workflow_meta['pipeline_time'] = 0.0
self.workflow_meta['pipeline_avg_cpu_percentage'] = 0
self.workflow_meta['haplotypecaller_time'] = 0.0
self.workflow_meta['haplotypecaller_avg_cpu_percentage'] = 0
def run(self):
'''main pipeline'''
# setup start-time
self.workflow_meta['cwl_start'] = time.time()
self.workflow_meta['datetime_start'] = str(datetime.datetime.now())
# setup work env
os.chdir(self.workflow_meta['basedir'])
tmpdir = self.create_tmp_dir('tmpdir_')
logger = self._log()
# cwl cmd
cmd = [
'/home/ubuntu/.virtualenvs/p2/bin/cwltool',
'--debug',
'--relax-path-checks',
'--outdir', self.workflow_meta['basedir'],
'--tmpdir-prefix', tmpdir,
'--tmp-outdir-prefix', tmpdir,
self.workflow_meta['cwlwf'],
self.workflow_meta['cwl_input_json']
]
# run cwl
cwl_exit = utils.pipeline.run_command(cmd, logger, self.workflow_meta['cwl_output_json'])
# cwl status
if cwl_exit:
self.workflow_meta['cwl_failure'] = True
# calculate cpu percentage
self._calculate_cpu_percentage()
# tar all logs
tar_exit = self._tar_log(logger)
if tar_exit:
self.workflow_meta['runner_failure'] = True
# upload ancillary files
upload_exit = self._upload_ancillary_files(logger)
if upload_exit:
self.workflow_meta['runner_failure'] = True
# update psql
if not self.workflow_meta['cwl_failure'] and not self.workflow_meta['runner_failure']:
self._process_cwl_success()
else:
self._process_cwl_fail()
engine = postgres.utils.get_db_engine(self.psql_conf)
postgres.metrics.add_metrics(engine, self.psql_class, self.pg_data)
# clean up
utils.pipeline.remove_dir(self.workflow_meta['basedir'])
def _cwl_input_json(self):
'''prepare cwl input json'''
cwl_input_json = os.path.join(
self.workflow_meta['basedir'], 'genomel_individual.{0}.{1}.{2}.{3}.json'.format(
self.workflow_meta['pipeline'],
self.workflow_meta['project'],
self.workflow_meta['job_uuid'],
self.workflow_meta['aliquot_id']
)
)
with open(cwl_input_json, 'wt') as ohandle:
json.dump(self.input_data, ohandle, indent=4)
return cwl_input_json
def _cwl_output_json(self):
'''prepare cwl output json'''
cwl_output_json = os.path.join(
self.workflow_meta['basedir'], 'genomel_individual.{0}.{1}.{2}.{3}.output'.format(
self.workflow_meta['pipeline'],
self.workflow_meta['project'],
self.workflow_meta['job_uuid'],
self.workflow_meta['aliquot_id']
)
)
return cwl_output_json
def create_tmp_dir(self, prefix):
'''create cwl tmp directory'''
tmpdir = tempfile.mkdtemp(prefix="{}".format(prefix), dir=self.workflow_meta['basedir'])
return tmpdir
def _log(self):
'''setup log file'''
log_file = os.path.join(
os.path.dirname(self.workflow_meta['basedir']),
'genomel_individual.{0}.{1}.{2}.{3}.log'.format(
self.workflow_meta['pipeline'],
self.workflow_meta['project'],
self.workflow_meta['job_uuid'],
self.workflow_meta['aliquot_id']
)
)
self.workflow_meta['log_file'] = log_file
logger = utils.pipeline.setup_logging(
logging.INFO,
self.workflow_meta['job_uuid'],
log_file
)
return logger
def _calculate_cpu_percentage(self):
'''calculate average cpu percentage'''
cwl_logs = glob.glob(
'{}/{}*time.json'.format(
self.workflow_meta['basedir'],
self.workflow_meta['job_uuid']
)
)
pipeline_cpu_usage = []
pipeline_cpu_time = []
haplotypecaller_cpu_usage = []
haplotypecaller_cpu_time = []
if cwl_logs:
for log in cwl_logs:
dic = utils.pipeline.load_json(log)
cpu_percent = float(dic['percent_of_cpu'][:-1])
step_weight = float(dic['wall_clock'])
if 'gatk3' in log or 'picard' in log:
haplotypecaller_cpu_usage.append(cpu_percent * step_weight)
haplotypecaller_cpu_time.append(step_weight)
else:
pipeline_cpu_usage.append(cpu_percent * step_weight)
pipeline_cpu_time.append(step_weight)
            pipeline_time = sum(pipeline_cpu_time)
            haplotypecaller_time = sum(haplotypecaller_cpu_time)
            # Guard the divisors: either bucket can be empty depending on
            # which step logs matched the gatk3/picard pattern.
            pipeline_avg_cpu_usage = None
            if pipeline_time:
                pipeline_avg_cpu_usage = str(
                    int(sum(pipeline_cpu_usage)/pipeline_time)) + '%'
            haplotypecaller_avg_cpu_usage = None
            if haplotypecaller_time:
                haplotypecaller_avg_cpu_usage = str(
                    int(sum(haplotypecaller_cpu_usage)/haplotypecaller_time)) + '%'
else:
pipeline_time = None
pipeline_avg_cpu_usage = None
haplotypecaller_time = None
haplotypecaller_avg_cpu_usage = None
self.workflow_meta['pipeline_time'] = pipeline_time
self.workflow_meta['pipeline_avg_cpu_percentage'] = pipeline_avg_cpu_usage
self.workflow_meta['haplotypecaller_time'] = haplotypecaller_time
self.workflow_meta['haplotypecaller_avg_cpu_percentage'] = haplotypecaller_avg_cpu_usage
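The average above is a wall-clock-weighted mean of per-step CPU percentages, parsed from GNU-time-style log dicts. A standalone illustration with invented numbers:

```python
# Two fake step logs in the same shape _calculate_cpu_percentage reads:
# percent_of_cpu ends in '%', wall_clock is seconds as a string.
steps = [
    {'percent_of_cpu': '200%', 'wall_clock': '100.0'},
    {'percent_of_cpu': '100%', 'wall_clock': '300.0'},
]
usage = []
times = []
for dic in steps:
    cpu_percent = float(dic['percent_of_cpu'][:-1])  # strip trailing '%'
    step_weight = float(dic['wall_clock'])
    usage.append(cpu_percent * step_weight)
    times.append(step_weight)

total_time = sum(times)
avg = str(int(sum(usage) / sum(times))) + '%'
# Weighted mean: (200*100 + 100*300) / 400 = 125, not the plain mean 150.
assert total_time == 400.0
assert avg == '125%'
```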
def _tar_log(self, logger):
'''make tar for all cwl time logs'''
cwl_logs = glob.glob('{}/*time.json'.format(self.workflow_meta['basedir']))
if cwl_logs:
self.workflow_meta['log_dir'] = self.create_tmp_dir('cwl_logs_')
for log in cwl_logs:
utils.pipeline.move_file(log, self.workflow_meta['log_dir'])
self.workflow_meta['cwl_log_tar'] = os.path.join(
self.workflow_meta['basedir'], \
'genomel_individual.{0}.{1}.{2}.{3}.cwl_logs.tar.bz2'.format(
self.workflow_meta['pipeline'],
self.workflow_meta['project'],
self.workflow_meta['job_uuid'],
self.workflow_meta['aliquot_id']
)
)
exit_code = utils.pipeline.targz_compress(
logger,
self.workflow_meta['cwl_log_tar'],
self.workflow_meta['log_dir']
)
        else:
            exit_code = 1
return exit_code
def _upload_ancillary_files(self, logger):
'''upload tar file of all cwl logs'''
to_upload_dir = self.create_tmp_dir('to_upload_')
utils.pipeline.move_file(self.workflow_meta['cwl_input_json'], to_upload_dir)
if self.workflow_meta['cwl_log_tar']:
utils.pipeline.move_file(self.workflow_meta['cwl_log_tar'], to_upload_dir)
remote_loc = os.path.join(
self.input_data['upload_s3_bucket'],
self.workflow_meta['job_uuid']
)
exit_code = utils.pipeline.aws_s3_put(
logger=logger,
remote_output=remote_loc,
local_input=to_upload_dir,
profile=self.input_data['upload_s3_profile'],
endpoint_url=self.input_data['upload_s3_endpoint'],
recursive=True
)
return exit_code
def _time(self, handle):
'''extract time from cwl logs'''
logs = glob.glob('{}/{}'.format(self.workflow_meta['log_dir'], handle))
time_list = []
if logs:
for log in logs:
dic = utils.pipeline.load_json(log)
_time = float(dic['wall_clock'])
time_list.append(_time)
total_time = sum(time_list)
        else:
            total_time = None
return total_time
def _stage_local(self, indiv):
'''stage cwl output to local gluster'''
indiv_dir = os.path.join(
'/mnt/glusterfs',
'genomel_individual.{0}.{1}.{2}.{3}'.format(
self.workflow_meta['pipeline'],
self.workflow_meta['project'],
self.workflow_meta['job_uuid'],
self.workflow_meta['aliquot_id']
)
)
if not os.path.isdir(indiv_dir):
os.mkdir(indiv_dir)
utils.pipeline.move_file(indiv, indiv_dir)
return os.path.join(indiv_dir, os.path.basename(indiv))
def _process_cwl_success(self):
        '''process when cwl succeeds'''
download_time = self._time('aws_download*')
bam_upload_time = self._time('aws_upload*duplicates_marked.sorted*')
gvcf_upload_time = self._time('aws_upload*haplotypecaller*')
cwl_output = utils.pipeline.load_json(self.workflow_meta['cwl_output_json'])
bam_local_path = self._stage_local(
cwl_output['genomel_bam']['path']
)
self._stage_local(cwl_output['genomel_bam']['secondaryFiles'][0]['path'])
gvcf_local_path = self._stage_local(
cwl_output['genomel_gvcf']['path']
)
self._stage_local(cwl_output['genomel_gvcf']['secondaryFiles'][0]['path'])
self.workflow_meta['cwl_end'] = time.time()
self.pg_data['job_uuid'] = self.workflow_meta['job_uuid']
self.pg_data['aliquot_id'] = self.workflow_meta['aliquot_id']
self.pg_data['input_table'] = self.workflow_meta['input_table']
self.pg_data['project'] = self.workflow_meta['project']
self.pg_data['status'] = "COMPLETED"
self.pg_data['datetime_start'] = self.workflow_meta['datetime_start']
self.pg_data['datetime_end'] = str(datetime.datetime.now())
self.pg_data['download_time'] = download_time
self.pg_data['bam_upload_time'] = bam_upload_time
self.pg_data['gvcf_upload_time'] = gvcf_upload_time
self.pg_data['bam_url'] = os.path.join(
self.workflow_meta['base_s3_loc'],
cwl_output['genomel_bam']['basename']
)
self.pg_data['gvcf_url'] = os.path.join(
self.workflow_meta['base_s3_loc'],
cwl_output['genomel_gvcf']['basename']
)
self.pg_data['bam_local_path'] = bam_local_path
self.pg_data['gvcf_local_path'] = gvcf_local_path
self.pg_data['bam_md5sum'] = utils.pipeline.get_md5(bam_local_path)
self.pg_data['gvcf_md5sum'] = utils.pipeline.get_md5(gvcf_local_path)
self.pg_data['bam_filesize'] = utils.pipeline.get_file_size(bam_local_path)
self.pg_data['gvcf_filesize'] = utils.pipeline.get_file_size(gvcf_local_path)
if self.workflow_meta['pipeline'] == 'alignment':
self.pg_data['alignment_cwl_walltime'] = self.workflow_meta['pipeline_time']
self.pg_data['alignment_cwl_cpu_percentage'] = self.workflow_meta\
['pipeline_avg_cpu_percentage']
else:
self.pg_data['harmonization_cwl_walltime'] = self.workflow_meta['pipeline_time']
self.pg_data['harmonization_cwl_cpu_percentage'] = self.workflow_meta\
['pipeline_avg_cpu_percentage']
self.pg_data['haplotypecaller_cwl_walltime'] = self.workflow_meta['haplotypecaller_time']
self.pg_data['haplotypecaller_cwl_cpu_percentage'] = self.workflow_meta\
['haplotypecaller_avg_cpu_percentage']
self.pg_data['whole_workflow_elapsed'] = float(
self.workflow_meta['cwl_end'] - self.workflow_meta['cwl_start']
)
self.pg_data['cwl_input_json'] = os.path.join(
self.workflow_meta['base_s3_loc'],
os.path.basename(self.workflow_meta['cwl_input_json'])
)
self.pg_data['time_metrics_json'] = os.path.join(
self.workflow_meta['base_s3_loc'],
os.path.basename(self.workflow_meta['cwl_log_tar'])
)
self.pg_data['debug_path'] = self.workflow_meta['log_file']
def _process_cwl_fail(self):
'''process when cwl fails'''
if self.workflow_meta['cwl_failure']:
cwl_output = utils.pipeline.load_json(self.workflow_meta['cwl_output_json'])
if cwl_output:
try:
if cwl_output['genomel_gvcf']:
status = "FAILED_WHEN_UPLOAD"
elif cwl_output['genomel_bam']:
status = "FAILED_IN_VARIANT_CALLING"
else:
status = "FAILED_IN_EARLY_STAGE"
except ValueError:
status = "FAILED"
            else:
                status = "FAILED_IN_CWL"
        else:
            status = "FAILED_IN_PYTHON_RUNNER"
self.workflow_meta['cwl_end'] = time.time()
self.pg_data['job_uuid'] = self.workflow_meta['job_uuid']
self.pg_data['aliquot_id'] = self.workflow_meta['aliquot_id']
self.pg_data['input_table'] = self.workflow_meta['input_table']
self.pg_data['project'] = self.workflow_meta['project']
self.pg_data['status'] = status
self.pg_data['datetime_start'] = self.workflow_meta['datetime_start']
self.pg_data['datetime_end'] = str(datetime.datetime.now())
self.pg_data['whole_workflow_elapsed'] = float(
self.workflow_meta['cwl_end'] - self.workflow_meta['cwl_start']
)
self.pg_data['cwl_input_json'] = os.path.join(
self.workflow_meta['base_s3_loc'],
os.path.basename(self.workflow_meta['cwl_input_json'])
)
self.pg_data['debug_path'] = self.workflow_meta['log_file']
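The failure-status ladder in _process_cwl_fail can be exercised in isolation. This is a hypothetical re-statement of the branch logic as a pure function, not an import from this module:

```python
def classify(cwl_failure, cwl_output):
    # Mirrors the status ladder above: runner failures without a CWL
    # failure, missing CWL output, then progressively later stages.
    if not cwl_failure:
        return 'FAILED_IN_PYTHON_RUNNER'
    if not cwl_output:
        return 'FAILED_IN_CWL'
    if cwl_output.get('genomel_gvcf'):
        return 'FAILED_WHEN_UPLOAD'
    if cwl_output.get('genomel_bam'):
        return 'FAILED_IN_VARIANT_CALLING'
    return 'FAILED_IN_EARLY_STAGE'

assert classify(False, None) == 'FAILED_IN_PYTHON_RUNNER'
assert classify(True, {}) == 'FAILED_IN_CWL'
assert classify(True, {'genomel_gvcf': {'path': 'x'}}) == 'FAILED_WHEN_UPLOAD'
assert classify(True, {'genomel_bam': {'path': 'x'}}) == 'FAILED_IN_VARIANT_CALLING'
assert classify(True, {'genomel_bam': None, 'genomel_gvcf': None}) == 'FAILED_IN_EARLY_STAGE'
```

The real method additionally catches ValueError from the output lookup and falls back to a plain "FAILED" status; that branch is omitted here for brevity.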
class GenomelCohort(object):
'''this class describes GenoMEL-Bionimbus Protected Data Cloud cohort genotyping pipeline'''
def __init__(self, workflow_meta, input_data, psql_conf):
'''
workflow_meta.keys() = [
'basedir',
'project',
'batch_id',
'job_uuid',
'input_table',
'cromwell_config',
'cromwell_jar_path',
'cwlwf',
'cwl_pack'
]
'''
self.input_data = input_data
self.pg_data = utils.pipeline.cohort_data_template()
self.psql_conf = psql_conf
self.psql_class = postgres.metrics.GenomelCohortGenotypingMetrics
# setup workflow metadata
self.workflow_meta = workflow_meta
self.workflow_meta['base_s3_loc'] = self.input_data['upload_s3_bucket']
self.workflow_meta['log_file'] = None
self.workflow_meta['cwl_input_json'] = self._cwl_input_json()
self.workflow_meta['cromwell_metadata_output'] = self._cromwell_metadata_output()
self.workflow_meta['log_dir'] = None
self.workflow_meta['cwl_log_tar'] = None
self.workflow_meta['cromwell_start'] = None
self.workflow_meta['cromwell_end'] = None
self.workflow_meta['cromwell_status'] = None
self.workflow_meta['cromwell_failures'] = None
self.workflow_meta['cromwell_cwl_outputs'] = None
self.workflow_meta['cromwell_cwl_logs'] = None
self.workflow_meta['cromwell_finished_steps'] = None
self.workflow_meta['cromwell_todo_steps'] = None
self.workflow_meta['runner_failure'] = None
self.workflow_meta['output_vcf'] = list()
self.workflow_meta['output_vcf_url'] = list()
self.workflow_meta['output_vcf_local'] = list()
self.workflow_meta['output_vcf_md5sum'] = list()
self.workflow_meta['output_vcf_filesize'] = list()
self.workflow_meta['pipeline_time'] = 0.0
self.workflow_meta['pipeline_avg_cpu_percentage'] = 0
self.cwl_steps = get_cwl_steps(self.workflow_meta['cwlwf'])
def run(self):
'''main pipeline'''
# setup work env
os.chdir(self.workflow_meta['basedir'])
logger = self._log()
# make sure cwltool in the path
# os.environ['PATH'] = "/home/ubuntu/.virtualenvs/p2/bin/:$PATH"
# cromwell cmd
cmd = [
'/usr/bin/java',
'-Dconfig.file={}'.format(self.workflow_meta['cromwell_config']),
'-jar', self.workflow_meta['cromwell_jar_path'],
'run', self.workflow_meta['cwlwf'],
'-i', self.workflow_meta['cwl_input_json'],
'--imports', self.workflow_meta['cwl_pack'],
'--metadata-output', self.workflow_meta['cromwell_metadata_output']
]
logger.info('%s', cmd)
try:
# run cromwell
utils.pipeline.run_command(cmd, logger)
# cromwell status
self._extract_cromwell_output_metadata()
if not self.workflow_meta['cromwell_failures'] \
and not self.workflow_meta['runner_failure']:
# calculate cpu percentage
self._calculate_cwl_metadata()
tar_exit = self._tar_log(logger)
if tar_exit:
self.workflow_meta['runner_failure'] = 'tar_logs_fails'
# upload log files
upload_exit = self._upload_log_files(logger)
if upload_exit:
self.workflow_meta['runner_failure'] = 'upload_logs_fails'
else:
self._process_job_success()
        except BaseException as error:
logger.error('Failed: %s', error)
self.workflow_meta['runner_failure'] = '{}'.format(error)
if self.workflow_meta['cromwell_failures'] or self.workflow_meta['runner_failure']:
self._process_job_fail()
engine = postgres.utils.get_db_engine(self.psql_conf)
postgres.metrics.add_cohort_metrics(engine, self.psql_class, self.pg_data)
def create_tmp_dir(self, prefix):
'''create cwl tmp directory'''
tmpdir = tempfile.mkdtemp(prefix="{}".format(prefix), dir=self.workflow_meta['basedir'])
return tmpdir
def _cwl_input_json(self):
'''prepare cwl input json'''
cwl_input_json = os.path.join(
self.workflow_meta['basedir'], 'genomel_cohort_genotyping.{0}.{1}.{2}.json'.format(
self.workflow_meta['project'],
self.workflow_meta['batch_id'],
self.workflow_meta['job_uuid']
)
)
with open(cwl_input_json, 'wt') as ohandle:
json.dump(self.input_data, ohandle, indent=4)
return cwl_input_json
def _cromwell_metadata_output(self):
'''prepare cromwell metadata output'''
output_json = os.path.join(
self.workflow_meta['basedir'], 'genomel_cohort_genotyping.{0}.{1}.{2}.output'.format(
self.workflow_meta['project'],
self.workflow_meta['batch_id'],
self.workflow_meta['job_uuid']
)
)
return output_json
def _log(self):
'''setup log file'''
log_file = os.path.join(
self.workflow_meta['basedir'], 'genomel_cohort_genotyping.{0}.{1}.{2}.log'.format(
self.workflow_meta['project'],
self.workflow_meta['batch_id'],
self.workflow_meta['job_uuid']
)
)
self.workflow_meta['log_file'] = log_file
logger = utils.pipeline.setup_logging(
logging.INFO,
self.workflow_meta['job_uuid'],
log_file
)
return logger
def _extract_cromwell_output_metadata(self):
'''extract metadata from cromwell output'''
if not os.path.isfile(self._cromwell_metadata_output()):
self.workflow_meta['runner_failure'] = 'no_cromwell_metadata_output'
else:
metadata_json = utils.pipeline.load_json(self._cromwell_metadata_output())
self.workflow_meta['cromwell_failures'] = dict_to_string(
metadata_json.get('failures')
)
self.workflow_meta['cromwell_start'] = metadata_json['start']
self.workflow_meta['cromwell_end'] = metadata_json['end']
self.workflow_meta['cromwell_status'] = metadata_json['status']
self.workflow_meta['cromwell_cwl_outputs'] = metadata_json['outputs']
self.workflow_meta['cromwell_finished_steps'] = metadata_json['calls'].keys()
self.workflow_meta['cromwell_todo_steps'] = filter_list(
self.cwl_steps, self.workflow_meta['cromwell_finished_steps']
)
def _calculate_cwl_metadata(self):
'''gather and calculate cwl metadata'''
self.workflow_meta['cromwell_cwl_logs'] = []
for key, value in self.workflow_meta['cromwell_cwl_outputs'].items():
if key.endswith('time_logs'):
for log in value:
self.workflow_meta['cromwell_cwl_logs'].append(log['location'])
pipeline_cpu_usage = []
pipeline_cpu_time = []
for log in self.workflow_meta['cromwell_cwl_logs']:
dic = utils.pipeline.load_json(log)
cpu_percent = float(dic['percent_of_cpu'][:-1])
step_weight = float(dic['wall_clock'])
pipeline_cpu_usage.append(cpu_percent * step_weight)
pipeline_cpu_time.append(step_weight)
        pipeline_time = sum(pipeline_cpu_time)
        # Guard against an empty log list producing a zero divisor
        pipeline_avg_cpu_usage = None
        if pipeline_time:
            pipeline_avg_cpu_usage = str(
                int(sum(pipeline_cpu_usage)/pipeline_time)) + '%'
self.workflow_meta['pipeline_time'] = pipeline_time
self.workflow_meta['pipeline_avg_cpu_percentage'] = pipeline_avg_cpu_usage
def _tar_log(self, logger):
'''make tar for all cwl time logs'''
self.workflow_meta['log_dir'] = self.create_tmp_dir('cwl_logs_')
for log in self.workflow_meta['cromwell_cwl_logs']:
utils.pipeline.move_file(log, self.workflow_meta['log_dir'])
self.workflow_meta['cwl_log_tar'] = os.path.join(
self.workflow_meta['basedir'], \
'genomel_cohort_genotyping.{0}.{1}.{2}.cwl_logs.tar.bz2'.format(
self.workflow_meta['project'],
self.workflow_meta['batch_id'],
self.workflow_meta['job_uuid']
)
)
exit_code = utils.pipeline.targz_compress(
logger,
self.workflow_meta['cwl_log_tar'],
self.workflow_meta['log_dir']
)
return exit_code
def _upload_log_files(self, logger):
'''upload tar file of all cwl logs'''
to_upload_dir = self.create_tmp_dir('to_upload_')
utils.pipeline.move_file(self.workflow_meta['cwl_input_json'], to_upload_dir)
utils.pipeline.move_file(self.workflow_meta['cwl_log_tar'], to_upload_dir)
exit_code = utils.pipeline.aws_s3_put(
logger=logger,
remote_output=self.workflow_meta['base_s3_loc'],
local_input=to_upload_dir,
profile=self.input_data['upload_s3_profile'],
endpoint_url=self.input_data['upload_s3_endpoint'],
recursive=True
)
return exit_code
    def _get_output_meta(self):
        '''collect output vcf metadata'''
        for key, value in self.workflow_meta['cromwell_cwl_outputs'].items():
            if key.endswith('vcf'):
                self.workflow_meta['output_vcf_local'].append(value['location'])
                self.workflow_meta['output_vcf_filesize'].append(value['size'])
                self.workflow_meta['output_vcf'].append(
                    os.path.basename(value['location'])
                )
                self.workflow_meta['output_vcf_md5sum'].append(
                    utils.pipeline.get_md5(value['location'])
                )
                self.workflow_meta['output_vcf_url'].append(
                    os.path.join(
                        self.workflow_meta['base_s3_loc'],
                        os.path.basename(value['location'])
                    )
                )
    def _process_job_success(self):
        '''process when the job succeeds'''
        self.pg_data['job_uuid'] = self.workflow_meta['job_uuid']
        self.pg_data['batch_id'] = self.workflow_meta['batch_id']
        self.pg_data['input_table'] = self.workflow_meta['input_table']
        self.pg_data['project'] = self.workflow_meta['project']
        self.pg_data['cromwell_status'] = self.workflow_meta['cromwell_status']
        self.pg_data['cromwell_failures'] = self.workflow_meta['cromwell_failures']
        self.pg_data['cromwell_finished_steps'] = self.workflow_meta['cromwell_finished_steps']
        self.pg_data['cromwell_todo_steps'] = self.workflow_meta['cromwell_todo_steps']
        self.pg_data['datetime_start'] = self.workflow_meta['cromwell_start']
        self.pg_data['datetime_end'] = self.workflow_meta['cromwell_end']
        self._get_output_meta()
        self.pg_data['vcf_url'] = self.workflow_meta['output_vcf_url']
        self.pg_data['vcf_local_path'] = self.workflow_meta['output_vcf_local']
        self.pg_data['vcf_md5sum'] = self.workflow_meta['output_vcf_md5sum']
        self.pg_data['vcf_filesize'] = self.workflow_meta['output_vcf_filesize']
        self.pg_data['cwl_walltime'] = self.workflow_meta['pipeline_time']
        self.pg_data['cwl_cpu_percentage'] = self.workflow_meta['pipeline_avg_cpu_percentage']
        self.pg_data['cwl_input_json'] = os.path.join(
            self.workflow_meta['base_s3_loc'],
            os.path.basename(self.workflow_meta['cwl_input_json'])
        )
        self.pg_data['time_metrics_json'] = os.path.join(
            self.workflow_meta['base_s3_loc'],
            os.path.basename(self.workflow_meta['cwl_log_tar'])
        )
        self.pg_data['cromwell_version'] = os.path.basename(
            self.workflow_meta['cromwell_jar_path']
        )
        self.pg_data['debug_path'] = self.workflow_meta['log_file']
    def _process_job_fail(self):
        '''process when the job fails'''
        self.pg_data['job_uuid'] = self.workflow_meta['job_uuid']
        self.pg_data['batch_id'] = self.workflow_meta['batch_id']
        self.pg_data['input_table'] = self.workflow_meta['input_table']
        self.pg_data['project'] = self.workflow_meta['project']
        self.pg_data['runner_failures'] = self.workflow_meta['runner_failure']
        self.pg_data['cromwell_status'] = self.workflow_meta['cromwell_status']
        self.pg_data['cromwell_failures'] = self.workflow_meta['cromwell_failures']
        self.pg_data['cromwell_finished_steps'] = self.workflow_meta['cromwell_finished_steps']
        self.pg_data['cromwell_todo_steps'] = self.workflow_meta['cromwell_todo_steps']
        self.pg_data['datetime_start'] = self.workflow_meta['cromwell_start']
        self.pg_data['datetime_end'] = self.workflow_meta['cromwell_end']
        self.pg_data['cromwell_version'] = os.path.basename(
            self.workflow_meta['cromwell_jar_path']
        )
        self.pg_data['debug_path'] = self.workflow_meta['log_file']
class PostFreebayes(object):
    '''this class describes GenoMEL-Bionimbus Protected Data Cloud pipelines'''
    def __init__(self, workflow_meta, input_data, psql_conf):
        '''
        workflow_meta.keys() = [
            'basedir',
            'job_uuid',
            'bed',
            'cwlwf'
        ]
        '''
        self.input_data = input_data
        self.pg_data = utils.pipeline.pfc_data_template()
        self.psql_conf = psql_conf
        self.psql_class = postgres.metrics.PFCMetrics
        # setup workflow metadata
        self.workflow_meta = workflow_meta
        self.workflow_meta['cwl_start'] = None
        self.workflow_meta['cwl_end'] = None
        self.workflow_meta['cwl_failure'] = False
        (self.workflow_meta['chrom'],
         self.workflow_meta['chrom_start'],
         self.workflow_meta['chrom_end']) = self._read_bed()
        self.workflow_meta['cwl_input_json'] = self._cwl_input_json()
        self.workflow_meta['cwl_output_json'] = self._cwl_output_json()
    def run(self):
        '''main pipeline'''
        # setup start-time
        self.workflow_meta['cwl_start'] = time.time()
        # setup work env
        os.chdir(self.workflow_meta['basedir'])
        tmpdir = self.create_tmp_dir('tmpdir_')
        logger = self._log()
        # cwl cmd
        cmd = [
            '/home/ubuntu/.virtualenvs/p2/bin/cwltool',
            '--debug',
            '--relax-path-checks',
            '--outdir', self.workflow_meta['basedir'],
            '--tmpdir-prefix', tmpdir,
            '--tmp-outdir-prefix', tmpdir,
            self.workflow_meta['cwlwf'],
            self.workflow_meta['cwl_input_json']
        ]
        # run cwl
        cwl_exit = utils.pipeline.run_command(cmd, logger, self.workflow_meta['cwl_output_json'])
        # cwl status
        if cwl_exit:
            self.workflow_meta['cwl_failure'] = True
        # update psql
        if not self.workflow_meta['cwl_failure']:
            self._process_cwl_success()
        else:
            self._process_cwl_fail()
        engine = postgres.utils.get_db_engine(self.psql_conf)
        postgres.metrics.add_fbc_metrics(engine, self.psql_class, self.pg_data)
        # clean up
        utils.pipeline.remove_dir(self.workflow_meta['basedir'])
    def _read_bed(self):
        '''read bed file to collect metadata'''
        bed = self.workflow_meta['bed']
        with open(bed, 'r') as f:
            chrom, start, end = f.readline().rstrip().split('\t')
        return chrom, start, end
    def _cwl_input_json(self):
        '''prepare cwl input json'''
        cwl_input_json = os.path.join(
            self.workflow_meta['basedir'], 'post_freebayes_qc.{0}.{1}.{2}.json'.format(
                self.workflow_meta['chrom'],
                self.workflow_meta['chrom_start'],
                self.workflow_meta['chrom_end']
            )
        )
        self.input_data['output_prefix'] = '{}_{}_{}'.format(
            self.workflow_meta['chrom'],
            self.workflow_meta['chrom_start'],
            self.workflow_meta['chrom_end']
        )
        with open(cwl_input_json, 'wt') as ohandle:
            json.dump(self.input_data, ohandle, indent=4)
        return cwl_input_json
    def _cwl_output_json(self):
        '''prepare cwl output json'''
        cwl_output_json = os.path.join(
            self.workflow_meta['basedir'], 'post_freebayes_qc.{0}.{1}.{2}.output'.format(
                self.workflow_meta['chrom'],
                self.workflow_meta['chrom_start'],
                self.workflow_meta['chrom_end']
            )
        )
        return cwl_output_json
    def create_tmp_dir(self, prefix):
        '''create cwl tmp directory'''
        tmpdir = tempfile.mkdtemp(prefix=prefix, dir=self.workflow_meta['basedir'])
        return tmpdir
    def _log(self):
        '''setup log file'''
        log_file = os.path.join(
            os.path.dirname(self.workflow_meta['basedir']),
            'post_freebayes_qc.{0}.{1}.{2}.log'.format(
                self.workflow_meta['chrom'],
                self.workflow_meta['chrom_start'],
                self.workflow_meta['chrom_end']
            )
        )
        self.workflow_meta['log_file'] = log_file
        logger = utils.pipeline.setup_logging(
            logging.INFO,
            self.workflow_meta['job_uuid'],
            log_file
        )
        return logger
    def _stage_local(self, chrom, indiv):
        '''stage cwl output to local storage'''
        chrom_dir = os.path.join('/mnt/nfs/post_freebayes_qc', chrom)
        if not os.path.isdir(chrom_dir):
            os.mkdir(chrom_dir)
        utils.pipeline.move_file(indiv, chrom_dir)
        return os.path.join(chrom_dir, os.path.basename(indiv))
    def _process_cwl_success(self):
        '''process when cwl succeeds'''
        cwl_output = utils.pipeline.load_json(self.workflow_meta['cwl_output_json'])
        filtered_vcf = self._stage_local(
            self.workflow_meta['chrom'], cwl_output['filtered_vcf']['path']
        )
        self._stage_local(
            self.workflow_meta['chrom'],
            cwl_output['filtered_vcf']['secondaryFiles'][0]['path']
        )
        self.workflow_meta['cwl_end'] = time.time()
        self.pg_data['job_uuid'] = self.workflow_meta['job_uuid']
        self.pg_data['chrom'] = self.workflow_meta['chrom']
        self.pg_data['start'] = int(self.workflow_meta['chrom_start'])
        self.pg_data['end'] = int(self.workflow_meta['chrom_end'])
        self.pg_data['inputCount'] = int(cwl_output['raw_counts'])
        self.pg_data['outputCount'] = int(cwl_output['filtered_counts'])
        self.pg_data['inputFS'] = utils.pipeline.get_file_size(self.input_data['vcf']['path'])
        self.pg_data['outputFS'] = utils.pipeline.get_file_size(filtered_vcf)
        self.pg_data['inputMD5'] = utils.pipeline.get_md5(self.input_data['vcf']['path'])
        self.pg_data['outputMD5'] = utils.pipeline.get_md5(filtered_vcf)
        self.pg_data['inputBed'] = self.workflow_meta['bed']
        self.pg_data['inputVCF'] = self.input_data['vcf']['path']
        self.pg_data['outputVCF'] = filtered_vcf
        self.pg_data['status'] = "COMPLETED"
        self.pg_data['runtime'] = int(self.workflow_meta['cwl_end'] - self.workflow_meta['cwl_start'])
    def _process_cwl_fail(self):
        '''process when cwl fails'''
        self.workflow_meta['cwl_end'] = time.time()
        self.pg_data['job_uuid'] = self.workflow_meta['job_uuid']
        self.pg_data['chrom'] = self.workflow_meta['chrom']
        self.pg_data['start'] = int(self.workflow_meta['chrom_start'])
        self.pg_data['end'] = int(self.workflow_meta['chrom_end'])
        self.pg_data['inputBed'] = self.workflow_meta['bed']
        self.pg_data['inputVCF'] = self.input_data['vcf']['path']
        self.pg_data['status'] = "FAILED"
        self.pg_data['runtime'] = int(self.workflow_meta['cwl_end'] - self.workflow_meta['cwl_start'])
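The `cwltool` invocation assembled inside `run()` can be factored into a small builder for unit testing. This is a hedged sketch: the builder name is hypothetical, and the binary path (hard-coded in the pipeline) is passed in as a parameter here:

```python
def build_cwltool_cmd(cwltool_bin, basedir, tmpdir, cwlwf, input_json):
    """Assemble the cwltool argv list used to launch a CWL workflow run."""
    return [
        cwltool_bin,
        '--debug',
        '--relax-path-checks',
        '--outdir', basedir,          # final outputs land in the job's base dir
        '--tmpdir-prefix', tmpdir,    # scratch space for intermediate files
        '--tmp-outdir-prefix', tmpdir,
        cwlwf,                        # the .cwl workflow document
        input_json,                   # the job-specific input JSON
    ]
```

Keeping command construction separate from execution makes the argv list testable without actually shelling out to cwltool.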
# test-registry/install4/f.py (ZikeWang/open-lambda, Apache-2.0)
import chardet

# ol-install: chardet


def f(event):
    return 'imported'
# __init__.py (sean-mackenzie/curlypiv, MIT)
from curlypiv import *
# implicitresnet/__init__.py (vreshniak/ImplicitResNet, MIT)
from .models.ode import theta_solver, regularized_ode_solver
from .models.rhs import rhs_mlp, rhs_conv2d
# Florence/FunctionSpace/ThreeDimensional/Tet/hpModal.py (jdlaubrie/florence, MIT)
import os
import numpy as np
from Florence.FunctionSpace.JacobiPolynomials import *
def hpBases(C, r0, s, t):
    # The input argument r is changed to r0, because r is used as the
    # polynomial degree in the 3rd (z) direction.
    # Coordinate transformation for tetrahedrals
    a = 2.0*(1.+r0)/(-s-t) - 1.
    b = 2.0*(1.+s)/(1.-t) - 1.
    c = t

    P1 = C+1
    P2 = C+1
    P3 = C+1
    # Size of bases is (for equal order interpolation)
    nsize = int((P1+1.)*(P1+2.)*(P1+3.)/6.)
    # Vertex based bases size
    vsize = 4
    # Edge based bases size
    esize = 6*C
    # Face based bases size
    fsize = 2*C*(C-1)
    # Interior bases size
    isize = int(C*(C-1)*(C-2)/6.)

    # Allocate
    Bases = np.zeros(nsize)

    # Vertices
    va = ((1.-a)/2.)*((1.-b)/2.)*((1.-c)/2.)
    vb = ((1.+a)/2.)*((1.-b)/2.)*((1.-c)/2.)
    vc = ((1.-a)/2.)*((1.+b)/2.)*((1.-c)/2.)  # vc = ((1.+b)/2.)*((1.-c)/2.)
    vd = (1.+c)/2.
    Bases[:4] = np.array([va, vb, vc, vd])

    if C > 0:
        p = P1-1; q = P2-1; r = P3-1
        # Edges
        e1 = ((1.-a)/2.)*((1.+a)/2.)*JacobiPolynomials(p-1,a,1.,1.)[:,0]*((1.-b)/2.)**(p+1)*((1.-c)/2.)**(p+1)
        e2 = ((1.-a)/2.)*((1.-b)/2.)*((1.+b)/2.)*JacobiPolynomials(q-1,b,1.,1.)[:,0]*((1.-c)/2.)**(q+1)
        e3 = ((1.+a)/2.)*((1.-b)/2.)*((1.+b)/2.)*JacobiPolynomials(q-1,b,1.,1.)[:,0]*((1.-c)/2.)**(q+1)
        e4 = ((1.-a)/2.)*((1.-b)/2.)*((1.-c)/2.)*((1.+c)/2.)*JacobiPolynomials(r-1,c,1.,1.)[:,0]
        e5 = ((1.+a)/2.)*((1.-b)/2.)*((1.-c)/2.)*((1.+c)/2.)*JacobiPolynomials(r-1,c,1.,1.)[:,0]
        e6 = ((1.+b)/2.)*((1.-c)/2.)*((1.+c)/2.)*JacobiPolynomials(r-1,c,1.,1.)[:,0]
        Bases[4:4+C] = e1
        Bases[4+C:4+2*C] = e2
        Bases[4+2*C:4+3*C] = e3
        Bases[4+3*C:4+4*C] = e4
        Bases[4+4*C:4+5*C] = e5
        Bases[4+5*C:4+6*C] = e6

        # Faces
        f1 = []; f2 = []; f3 = []; f4 = []
        for p in range(1, P1):
            for q in range(1, P2):
                if p+q < P2:
                    f1 = np.append(f1, ((1.-a)/2.)*((1.+a)/2.)*JacobiPolynomials(p-1,a,1.,1.)[-1]*((1.-b)/2.)**(p+1)*((1.+b)/2.)*JacobiPolynomials(q-1,b,2.*p+1.,1.)[-1]*((1.-c)/2.)**(p+q+1))
        for p in range(1, P1):
            for r in range(1, P3):
                if p+r < P3:
                    f2 = np.append(f2, ((1.-a)/2.)*((1.+a)/2.)*JacobiPolynomials(p-1,a,1.,1.)[-1]*((1.-b)/2.)**(p+1)*((1.-c)/2.)**(p+1)*((1.+c)/2.)*JacobiPolynomials(r-1,c,2.*p+1.,1.)[-1])
        for q in range(1, P2):
            for r in range(1, P3):
                if q+r < P3:
                    f3 = np.append(f3, ((1.-a)/2.)*((1.-b)/2.)*((1.+b)/2.)*JacobiPolynomials(q-1,b,1.,1.)[-1]*((1.-c)/2.)**(q+1)*((1.+c)/2.)*JacobiPolynomials(r-1,c,2.*q+1.,1.)[-1])
                    f4 = np.append(f4, ((1.+a)/2.)*((1.-b)/2.)*((1.+b)/2.)*JacobiPolynomials(q-1,b,1.,1.)[-1]*((1.-c)/2.)**(q+1)*((1.+c)/2.)*JacobiPolynomials(r-1,c,2.*q+1.,1.)[-1])
        # 2*C*(C-1) is the total number of bases on the faces (fsize)
        Bases[4+6*C:4+6*C+2*C*(C-1)] = np.append(np.append(np.append(f1, f2), f3), f4)

        # Interior
        interior = []
        for p in range(1, P1):
            for q in range(1, P2):
                for r in range(1, P3):
                    if p+q+r < P3:
                        interior = np.append(interior, ((1.-a)/2.)*((1.+a)/2.)*JacobiPolynomials(p-1,a,1.,1.)[-1]*((1.-b)/2.)**(p+1)*((1.+b)/2.)*JacobiPolynomials(q-1,b,2.*p+1.,1.)[-1]*((1.-c)/2.)**(p+q+1)*((1.+c)/2.)*JacobiPolynomials(r-1,c,2.*p+2.*q+1.,1.)[-1])
        Bases[4+6*C+2*C*(C-1):4+6*C+2*C*(C-1)+isize] = interior

    return Bases, np.array([nsize, vsize, esize, fsize, isize])
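As a quick sanity check of the vertex modes above, a standalone copy of just those four expressions (no `JacobiPolynomials` needed) can verify the Kronecker-delta property at the first two vertices of the reference tetrahedron, where the collapsed-coordinate map is well defined; the helper name is illustrative:

```python
def tet_vertex_modes(r0, s, t):
    """Standalone copy of the four vertex modes of hpBases, for checking only."""
    # Collapsed-coordinate transformation for tetrahedrals
    a = 2.0*(1. + r0)/(-s - t) - 1.
    b = 2.0*(1. + s)/(1. - t) - 1.
    c = t
    va = ((1. - a)/2.)*((1. - b)/2.)*((1. - c)/2.)
    vb = ((1. + a)/2.)*((1. - b)/2.)*((1. - c)/2.)
    vc = ((1. - a)/2.)*((1. + b)/2.)*((1. - c)/2.)
    vd = (1. + c)/2.
    return va, vb, vc, vd
```

At the vertex (r0, s, t) = (-1, -1, -1) only `va` is 1 and the rest vanish; at (1, -1, -1) only `vb` is 1. The top vertex (t = 1) cannot be checked this way because the map degenerates there (division by 1 - t).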
def GradhpBases(C, r0, s, t):
    # The input argument r is changed to r0, because r is used as the
    # polynomial degree in the 3rd (z) direction.
    # Coordinate transformation for tetrahedrals
    a = 2.0*(1.+r0)/(-s-t) - 1.
    b = 2.0*(1.+s)/(1.-t) - 1.
    c = t

    P1 = C+1
    P2 = C+1
    P3 = C+1
    # Size of bases is (for equal order interpolation)
    nsize = int((P1+1.)*(P1+2.)*(P1+3.)/6.)
    vsize = 4; esize = 6*C; fsize = 2*C*(C-1); isize = int(C*(C-1)*(C-2)/6.)

    # Allocate
    GradBases = np.zeros((nsize, 3))

    # Vertices
    # dN/dx = dN/da (a being the tetrahedral coordinate)
    dvadx = (-0.5)*((1.-b)/2.)*((1.-c)/2.)
    dvbdx = (0.5)*((1.-b)/2.)*((1.-c)/2.)
    dvcdx = (-0.5)*((1.+b)/2.)*((1.-c)/2.)  # dvcdx = 0. if we follow Sherwin's 95 paper
    dvddx = 0.
    # dN/dy = dN/db (b being the tetrahedral coordinate)
    dvady = ((1.-a)/2.)*(-0.5)*((1.-c)/2.)
    dvbdy = ((1.+a)/2.)*(-0.5)*((1.-c)/2.)
    dvcdy = ((1.-a)/2.)*(0.5)*((1.-c)/2.)  # dvcdy = (0.5)*((1.-c)/2.)
    dvddy = 0.
    # dN/dz = dN/dc (c being the tetrahedral coordinate)
    dvadz = ((1.-a)/2.)*((1.-b)/2.)*(-0.5)
    dvbdz = ((1.+a)/2.)*((1.-b)/2.)*(-0.5)
    dvcdz = ((1.-a)/2.)*((1.+b)/2.)*(-0.5)  # dvcdz = ((1.+b)/2.)*(-0.5)
    dvddz = 0.5
    GradBases[:4, :] = np.array([
        [dvadx, dvbdx, dvcdx, dvddx],
        [dvady, dvbdy, dvcdy, dvddy],
        [dvadz, dvbdz, dvcdz, dvddz]
        ]).T

    if C > 0:
        p = P1-1; q = P2-1; r = P3-1
        # Edges
        # dN/dx = dN/da (a being the tetrahedral coordinate)
        de1dx = (-0.5)*((1.+a)/2.)*JacobiPolynomials(p-1,a,1.,1.)[:,0]*((1.-b)/2.)**(p+1)*((1.-c)/2.)**(p+1) +\
            ((1.-a)/2.)*(0.5)*JacobiPolynomials(p-1,a,1.,1.)[:,0]*((1.-b)/2.)**(p+1)*((1.-c)/2.)**(p+1) +\
            ((1.-a)/2.)*((1.+a)/2.)*DiffJacobiPolynomials(p-1,a,1.,1.,1)[:,0]*((1.-b)/2.)**(p+1)*((1.-c)/2.)**(p+1)
        de2dx = (-0.5)*((1.-b)/2.)*((1.+b)/2.)*JacobiPolynomials(q-1,b,1.,1.)[:,0]*((1.-c)/2.)**(q+1)
        de3dx = (0.5)*((1.-b)/2.)*((1.+b)/2.)*JacobiPolynomials(q-1,b,1.,1.)[:,0]*((1.-c)/2.)**(q+1)
        de4dx = (-0.5)*((1.-b)/2.)*((1.-c)/2.)*((1.+c)/2.)*JacobiPolynomials(r-1,c,1.,1.)[:,0]
        de5dx = (0.5)*((1.-b)/2.)*((1.-c)/2.)*((1.+c)/2.)*JacobiPolynomials(r-1,c,1.,1.)[:,0]
        de6dx = 0.
        # dN/dy = dN/db (b being the tetrahedral coordinate)
        de1dy = ((1.-a)/2.)*((1.+a)/2.)*JacobiPolynomials(p-1,a,1.,1.)[:,0]*(p+1)*((1.-b)/2.)**(p)*(-0.5)*((1.-c)/2.)**(p+1)
        de2dy = ((1.-a)/2.)*(-0.5)*((1.+b)/2.)*JacobiPolynomials(q-1,b,1.,1.)[:,0]*((1.-c)/2.)**(q+1) +\
            ((1.-a)/2.)*((1.-b)/2.)*(0.5)*JacobiPolynomials(q-1,b,1.,1.)[:,0]*((1.-c)/2.)**(q+1) +\
            ((1.-a)/2.)*((1.-b)/2.)*((1.+b)/2.)*DiffJacobiPolynomials(q-1,b,1.,1.,1)[:,0]*((1.-c)/2.)**(q+1)
        de3dy = ((1.+a)/2.)*(-0.5)*((1.+b)/2.)*JacobiPolynomials(q-1,b,1.,1.)[:,0]*((1.-c)/2.)**(q+1) +\
            ((1.+a)/2.)*((1.-b)/2.)*(0.5)*JacobiPolynomials(q-1,b,1.,1.)[:,0]*((1.-c)/2.)**(q+1) +\
            ((1.+a)/2.)*((1.-b)/2.)*((1.+b)/2.)*DiffJacobiPolynomials(q-1,b,1.,1.,1)[:,0]*((1.-c)/2.)**(q+1)
        de4dy = ((1.-a)/2.)*(-0.5)*((1.-c)/2.)*((1.+c)/2.)*JacobiPolynomials(r-1,c,1.,1.)[:,0]
        de5dy = ((1.+a)/2.)*(-0.5)*((1.-c)/2.)*((1.+c)/2.)*JacobiPolynomials(r-1,c,1.,1.)[:,0]
        de6dy = (0.5)*((1.-c)/2.)*((1.+c)/2.)*JacobiPolynomials(r-1,c,1.,1.)[:,0]
        # dN/dz = dN/dc (c being the tetrahedral coordinate)
        de1dz = ((1.-a)/2.)*((1.+a)/2.)*JacobiPolynomials(p-1,a,1.,1.)[:,0]*((1.-b)/2.)**(p+1)*(p+1)*((1.-c)/2.)**(p)*(-0.5)
        de2dz = ((1.-a)/2.)*((1.-b)/2.)*((1.+b)/2.)*JacobiPolynomials(q-1,b,1.,1.)[:,0]*(q+1)*((1.-c)/2.)**(q)*(-0.5)
        de3dz = ((1.+a)/2.)*((1.-b)/2.)*((1.+b)/2.)*JacobiPolynomials(q-1,b,1.,1.)[:,0]*(q+1)*((1.-c)/2.)**(q)*(-0.5)
        de4dz = ((1.-a)/2.)*((1.-b)/2.)*(-0.5)*((1.+c)/2.)*JacobiPolynomials(r-1,c,1.,1.)[:,0] +\
            ((1.-a)/2.)*((1.-b)/2.)*((1.-c)/2.)*(0.5)*JacobiPolynomials(r-1,c,1.,1.)[:,0] +\
            ((1.-a)/2.)*((1.-b)/2.)*((1.-c)/2.)*((1.+c)/2.)*DiffJacobiPolynomials(r-1,c,1.,1.,1)[:,0]
        de5dz = ((1.+a)/2.)*((1.-b)/2.)*(-0.5)*((1.+c)/2.)*JacobiPolynomials(r-1,c,1.,1.)[:,0] +\
            ((1.+a)/2.)*((1.-b)/2.)*((1.-c)/2.)*(0.5)*JacobiPolynomials(r-1,c,1.,1.)[:,0] +\
            ((1.+a)/2.)*((1.-b)/2.)*((1.-c)/2.)*((1.+c)/2.)*DiffJacobiPolynomials(r-1,c,1.,1.,1)[:,0]
        de6dz = ((1.+b)/2.)*(-0.5)*((1.+c)/2.)*JacobiPolynomials(r-1,c,1.,1.)[:,0] +\
            ((1.+b)/2.)*((1.-c)/2.)*(0.5)*JacobiPolynomials(r-1,c,1.,1.)[:,0] +\
            ((1.+b)/2.)*((1.-c)/2.)*((1.+c)/2.)*DiffJacobiPolynomials(r-1,c,1.,1.,1)[:,0]

        GradBases[4:4+C, 0] = de1dx
        GradBases[4+C:4+2*C, 0] = de2dx
        GradBases[4+2*C:4+3*C, 0] = de3dx
        GradBases[4+3*C:4+4*C, 0] = de4dx
        GradBases[4+4*C:4+5*C, 0] = de5dx
        GradBases[4+5*C:4+6*C, 0] = de6dx
        GradBases[4:4+C, 1] = de1dy
        GradBases[4+C:4+2*C, 1] = de2dy
        GradBases[4+2*C:4+3*C, 1] = de3dy
        GradBases[4+3*C:4+4*C, 1] = de4dy
        GradBases[4+4*C:4+5*C, 1] = de5dy
        GradBases[4+5*C:4+6*C, 1] = de6dy
        GradBases[4:4+C, 2] = de1dz
        GradBases[4+C:4+2*C, 2] = de2dz
        GradBases[4+2*C:4+3*C, 2] = de3dz
        GradBases[4+3*C:4+4*C, 2] = de4dz
        GradBases[4+4*C:4+5*C, 2] = de5dz
        GradBases[4+5*C:4+6*C, 2] = de6dz

        # Faces
        dface1dx = []; dface1dy = []; dface1dz = []
        for p in range(1, P1):
            for q in range(1, P2):
                if p+q < P2:
                    df1dx = (-0.5)*((1.+a)/2.)*JacobiPolynomials(p-1,a,1.,1.)[-1]*((1.-b)/2.)**(p+1)*((1.+b)/2.)*JacobiPolynomials(q-1,b,2.*p+1.,1.)[-1]*((1.-c)/2.)**(p+q+1) +\
                        ((1.-a)/2.)*(0.5)*JacobiPolynomials(p-1,a,1.,1.)[-1]*((1.-b)/2.)**(p+1)*((1.+b)/2.)*JacobiPolynomials(q-1,b,2.*p+1.,1.)[-1]*((1.-c)/2.)**(p+q+1) +\
                        ((1.-a)/2.)*((1.+a)/2.)*DiffJacobiPolynomials(p-1,a,1.,1.,1)[-1]*((1.-b)/2.)**(p+1)*((1.+b)/2.)*JacobiPolynomials(q-1,b,2.*p+1.,1.)[-1]*((1.-c)/2.)**(p+q+1)
                    dface1dx = np.append(dface1dx, df1dx)
                    df1dy = ((1.-a)/2.)*((1.+a)/2.)*JacobiPolynomials(p-1,a,1.,1.)[-1]*(p+1)*((1.-b)/2.)**(p)*(0.5)*((1.+b)/2.)*JacobiPolynomials(q-1,b,2.*p+1.,1.)[-1]*((1.-c)/2.)**(p+q+1) +\
                        ((1.-a)/2.)*((1.+a)/2.)*JacobiPolynomials(p-1,a,1.,1.)[-1]*((1.-b)/2.)**(p+1)*(0.5)*JacobiPolynomials(q-1,b,2.*p+1.,1.)[-1]*((1.-c)/2.)**(p+q+1) +\
                        ((1.-a)/2.)*((1.+a)/2.)*JacobiPolynomials(p-1,a,1.,1.)[-1]*((1.-b)/2.)**(p+1)*((1.+b)/2.)*DiffJacobiPolynomials(q-1,b,2.*p+1.,1.,1)[-1]*((1.-c)/2.)**(p+q+1)
                    dface1dy = np.append(dface1dy, df1dy)
                    df1dz = ((1.-a)/2.)*((1.+a)/2.)*JacobiPolynomials(p-1,a,1.,1.)[-1]*((1.-b)/2.)**(p+1)*((1.+b)/2.)*JacobiPolynomials(q-1,b,2.*p+1.,1.)[-1]*(p+q+1)*((1.-c)/2.)**(p+q)*(-0.5)
                    dface1dz = np.append(dface1dz, df1dz)
        dface2dx = []; dface2dy = []; dface2dz = []
        for p in range(1, P1):
            for r in range(1, P3):
                if p+r < P3:
                    df2dx = (-0.5)*((1.+a)/2.)*JacobiPolynomials(p-1,a,1.,1.)[-1]*((1.-b)/2.)**(p+1)*((1.-c)/2.)**(p+1)*((1.+c)/2.)*JacobiPolynomials(r-1,c,2.*p+1.,1.)[-1] +\
                        ((1.-a)/2.)*(0.5)*JacobiPolynomials(p-1,a,1.,1.)[-1]*((1.-b)/2.)**(p+1)*((1.-c)/2.)**(p+1)*((1.+c)/2.)*JacobiPolynomials(r-1,c,2.*p+1.,1.)[-1] +\
                        ((1.-a)/2.)*((1.+a)/2.)*DiffJacobiPolynomials(p-1,a,1.,1.,1)[-1]*((1.-b)/2.)**(p+1)*((1.-c)/2.)**(p+1)*((1.+c)/2.)*JacobiPolynomials(r-1,c,2.*p+1.,1.)[-1]
                    dface2dx = np.append(dface2dx, df2dx)
                    df2dy = ((1.-a)/2.)*((1.+a)/2.)*JacobiPolynomials(p-1,a,1.,1.)[-1]*(p+1)*((1.-b)/2.)**(p)*(-0.5)*((1.-c)/2.)**(p+1)*((1.+c)/2.)*JacobiPolynomials(r-1,c,2.*p+1.,1.)[-1]
                    dface2dy = np.append(dface2dy, df2dy)
                    df2dz = ((1.-a)/2.)*((1.+a)/2.)*JacobiPolynomials(p-1,a,1.,1.)[-1]*((1.-b)/2.)**(p+1)*(p+1)*((1.-c)/2.)**(p)*(-0.5)*((1.+c)/2.)*JacobiPolynomials(r-1,c,2.*p+1.,1.)[-1] +\
                        ((1.-a)/2.)*((1.+a)/2.)*JacobiPolynomials(p-1,a,1.,1.)[-1]*((1.-b)/2.)**(p+1)*((1.-c)/2.)**(p+1)*(0.5)*JacobiPolynomials(r-1,c,2.*p+1.,1.)[-1] +\
                        ((1.-a)/2.)*((1.+a)/2.)*JacobiPolynomials(p-1,a,1.,1.)[-1]*((1.-b)/2.)**(p+1)*((1.-c)/2.)**(p+1)*((1.+c)/2.)*DiffJacobiPolynomials(r-1,c,2.*p+1.,1.,1)[-1]
                    dface2dz = np.append(dface2dz, df2dz)
        dface3dx = []; dface3dy = []; dface3dz = []
        dface4dx = []; dface4dy = []; dface4dz = []
        for q in range(1, P2):
            for r in range(1, P3):
                if q+r < P3:
                    df3dx = (-0.5)*((1.-b)/2.)*((1.+b)/2.)*JacobiPolynomials(q-1,b,1.,1.)[-1]*((1.-c)/2.)**(q+1)*((1.+c)/2.)*JacobiPolynomials(r-1,c,2.*q+1.,1.)[-1]
                    dface3dx = np.append(dface3dx, df3dx)
                    df3dy = ((1.-a)/2.)*(-0.5)*((1.+b)/2.)*JacobiPolynomials(q-1,b,1.,1.)[-1]*((1.-c)/2.)**(q+1)*((1.+c)/2.)*JacobiPolynomials(r-1,c,2.*q+1.,1.)[-1] +\
                        ((1.-a)/2.)*((1.-b)/2.)*(0.5)*JacobiPolynomials(q-1,b,1.,1.)[-1]*((1.-c)/2.)**(q+1)*((1.+c)/2.)*JacobiPolynomials(r-1,c,2.*q+1.,1.)[-1] +\
                        ((1.-a)/2.)*((1.-b)/2.)*((1.+b)/2.)*DiffJacobiPolynomials(q-1,b,1.,1.,1)[-1]*((1.-c)/2.)**(q+1)*((1.+c)/2.)*JacobiPolynomials(r-1,c,2.*q+1.,1.)[-1]
                    dface3dy = np.append(dface3dy, df3dy)
                    df3dz = ((1.-a)/2.)*((1.-b)/2.)*((1.+b)/2.)*JacobiPolynomials(q-1,b,1.,1.)[-1]*(q+1)*((1.-c)/2.)**(q)*(-0.5)*((1.+c)/2.)*JacobiPolynomials(r-1,c,2.*q+1.,1.)[-1] +\
                        ((1.-a)/2.)*((1.-b)/2.)*((1.+b)/2.)*JacobiPolynomials(q-1,b,1.,1.)[-1]*((1.-c)/2.)**(q+1)*(0.5)*JacobiPolynomials(r-1,c,2.*q+1.,1.)[-1] +\
                        ((1.-a)/2.)*((1.-b)/2.)*((1.+b)/2.)*JacobiPolynomials(q-1,b,1.,1.)[-1]*((1.-c)/2.)**(q+1)*((1.+c)/2.)*DiffJacobiPolynomials(r-1,c,2.*q+1.,1.,1)[-1]
                    dface3dz = np.append(dface3dz, df3dz)
                    df4dx = (0.5)*((1.-b)/2.)*((1.+b)/2.)*JacobiPolynomials(q-1,b,1.,1.)[-1]*((1.-c)/2.)**(q+1)*((1.+c)/2.)*JacobiPolynomials(r-1,c,2.*q+1.,1.)[-1]
                    dface4dx = np.append(dface4dx, df4dx)
                    df4dy = ((1.+a)/2.)*(-0.5)*((1.+b)/2.)*JacobiPolynomials(q-1,b,1.,1.)[-1]*((1.-c)/2.)**(q+1)*((1.+c)/2.)*JacobiPolynomials(r-1,c,2.*q+1.,1.)[-1] +\
                        ((1.+a)/2.)*((1.-b)/2.)*(0.5)*JacobiPolynomials(q-1,b,1.,1.)[-1]*((1.-c)/2.)**(q+1)*((1.+c)/2.)*JacobiPolynomials(r-1,c,2.*q+1.,1.)[-1] +\
                        ((1.+a)/2.)*((1.-b)/2.)*((1.+b)/2.)*DiffJacobiPolynomials(q-1,b,1.,1.,1)[-1]*((1.-c)/2.)**(q+1)*((1.+c)/2.)*JacobiPolynomials(r-1,c,2.*q+1.,1.)[-1]
                    dface4dy = np.append(dface4dy, df4dy)
                    df4dz = ((1.+a)/2.)*((1.-b)/2.)*((1.+b)/2.)*JacobiPolynomials(q-1,b,1.,1.)[-1]*(q+1)*((1.-c)/2.)**(q)*(-0.5)*((1.+c)/2.)*JacobiPolynomials(r-1,c,2.*q+1.,1.)[-1] +\
                        ((1.+a)/2.)*((1.-b)/2.)*((1.+b)/2.)*JacobiPolynomials(q-1,b,1.,1.)[-1]*((1.-c)/2.)**(q+1)*(0.5)*JacobiPolynomials(r-1,c,2.*q+1.,1.)[-1] +\
                        ((1.+a)/2.)*((1.-b)/2.)*((1.+b)/2.)*JacobiPolynomials(q-1,b,1.,1.)[-1]*((1.-c)/2.)**(q+1)*((1.+c)/2.)*DiffJacobiPolynomials(r-1,c,2.*q+1.,1.,1)[-1]
                    dface4dz = np.append(dface4dz, df4dz)
        GradBases[4+6*C:4+6*C+2*C*(C-1), 0] = np.append(np.append(np.append(dface1dx, dface2dx), dface3dx), dface4dx)
        GradBases[4+6*C:4+6*C+2*C*(C-1), 1] = np.append(np.append(np.append(dface1dy, dface2dy), dface3dy), dface4dy)
        GradBases[4+6*C:4+6*C+2*C*(C-1), 2] = np.append(np.append(np.append(dface1dz, dface2dz), dface3dz), dface4dz)

        # Interior
        dinteriordx = []; dinteriordy = []; dinteriordz = []
        for p in range(1, P1):
            for q in range(1, P2):
                for r in range(1, P3):
                    if p+q+r < P3:
                        didx = (-0.5)*((1.+a)/2.)*JacobiPolynomials(p-1,a,1.,1.)[-1]*((1.-b)/2.)**(p+1)*((1.+b)/2.)*JacobiPolynomials(q-1,b,2.*p+1.,1.)[-1]*((1.-c)/2.)**(p+q+1)*((1.+c)/2.)*JacobiPolynomials(r-1,c,2.*p+2.*q+1.,1.)[-1] +\
                            ((1.-a)/2.)*(0.5)*JacobiPolynomials(p-1,a,1.,1.)[-1]*((1.-b)/2.)**(p+1)*((1.+b)/2.)*JacobiPolynomials(q-1,b,2.*p+1.,1.)[-1]*((1.-c)/2.)**(p+q+1)*((1.+c)/2.)*JacobiPolynomials(r-1,c,2.*p+2.*q+1.,1.)[-1] +\
                            ((1.-a)/2.)*((1.+a)/2.)*DiffJacobiPolynomials(p-1,a,1.,1.,1)[-1]*((1.-b)/2.)**(p+1)*((1.+b)/2.)*JacobiPolynomials(q-1,b,2.*p+1.,1.)[-1]*((1.-c)/2.)**(p+q+1)*((1.+c)/2.)*JacobiPolynomials(r-1,c,2.*p+2.*q+1.,1.)[-1]
                        dinteriordx = np.append(dinteriordx, didx)
                        didy = ((1.-a)/2.)*((1.+a)/2.)*JacobiPolynomials(p-1,a,1.,1.)[-1]*(p+1)*((1.-b)/2.)**(p)*(-0.5)*((1.+b)/2.)*JacobiPolynomials(q-1,b,2.*p+1.,1.)[-1]*((1.-c)/2.)**(p+q+1)*((1.+c)/2.)*JacobiPolynomials(r-1,c,2.*p+2.*q+1.,1.)[-1] +\
                            ((1.-a)/2.)*((1.+a)/2.)*JacobiPolynomials(p-1,a,1.,1.)[-1]*((1.-b)/2.)**(p+1)*(0.5)*JacobiPolynomials(q-1,b,2.*p+1.,1.)[-1]*((1.-c)/2.)**(p+q+1)*((1.+c)/2.)*JacobiPolynomials(r-1,c,2.*p+2.*q+1.,1.)[-1] +\
                            ((1.-a)/2.)*((1.+a)/2.)*JacobiPolynomials(p-1,a,1.,1.)[-1]*((1.-b)/2.)**(p+1)*((1.+b)/2.)*DiffJacobiPolynomials(q-1,b,2.*p+1.,1.,1)[-1]*((1.-c)/2.)**(p+q+1)*((1.+c)/2.)*JacobiPolynomials(r-1,c,2.*p+2.*q+1.,1.)[-1]
                        dinteriordy = np.append(dinteriordy, didy)
                        didz = ((1.-a)/2.)*((1.+a)/2.)*JacobiPolynomials(p-1,a,1.,1.)[-1]*((1.-b)/2.)**(p+1)*((1.+b)/2.)*JacobiPolynomials(q-1,b,2.*p+1.,1.)[-1]*(p+q+1)*((1.-c)/2.)**(p+q)*(-0.5)*((1.+c)/2.)*JacobiPolynomials(r-1,c,2.*p+2.*q+1.,1.)[-1] +\
                            ((1.-a)/2.)*((1.+a)/2.)*JacobiPolynomials(p-1,a,1.,1.)[-1]*((1.-b)/2.)**(p+1)*((1.+b)/2.)*JacobiPolynomials(q-1,b,2.*p+1.,1.)[-1]*((1.-c)/2.)**(p+q+1)*(0.5)*JacobiPolynomials(r-1,c,2.*p+2.*q+1.,1.)[-1] +\
                            ((1.-a)/2.)*((1.+a)/2.)*JacobiPolynomials(p-1,a,1.,1.)[-1]*((1.-b)/2.)**(p+1)*((1.+b)/2.)*JacobiPolynomials(q-1,b,2.*p+1.,1.)[-1]*((1.-c)/2.)**(p+q+1)*((1.+c)/2.)*DiffJacobiPolynomials(r-1,c,2.*p+2.*q+1.,1.,1)[-1]
                        dinteriordz = np.append(dinteriordz, didz)
        GradBases[4+6*C+2*C*(C-1):4+6*C+2*C*(C-1)+isize, 0] = dinteriordx
        GradBases[4+6*C+2*C*(C-1):4+6*C+2*C*(C-1)+isize, 1] = dinteriordy
        GradBases[4+6*C+2*C*(C-1):4+6*C+2*C*(C-1)+isize, 2] = dinteriordz

    # Build the Jacobian to take you from (a, b, c) to (r0, s, t)
    Jacobian = np.array([
        [-2./(s+t), 2.*(1.+r0)/(s+t)**2, 2.*(1.+r0)/(s+t)**2],
        [0., 2.0/(1.-t), 2.*(1.+s)/(1.-t)**2],
        [0., 0., 1.]
        ])

    return GradBases, Jacobian
# testapp/models/__init__.py (jrice128/permission_manager, MIT)
from .models import User, Role, Permission
7f804652e63d61442a54594f2db654c5e88c30bf | 58 | py | Python | src/encryption_api/Config.py | jocon15/Voice_Assistant-JARVIS | 4d4e5afb1752f8390f6956bb499098286b7c1d9c | [
"MIT"
] | null | null | null | src/encryption_api/Config.py | jocon15/Voice_Assistant-JARVIS | 4d4e5afb1752f8390f6956bb499098286b7c1d9c | [
"MIT"
] | null | null | null | src/encryption_api/Config.py | jocon15/Voice_Assistant-JARVIS | 4d4e5afb1752f8390f6956bb499098286b7c1d9c | [
"MIT"
] | null | null | null | import keys
PRIVATE_FOLDER_KEY = keys.PRIVATE_FOLDER_KEY
| 14.5 | 44 | 0.862069 | 9 | 58 | 5.111111 | 0.555556 | 0.478261 | 0.73913 | 0.869565 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.103448 | 58 | 3 | 45 | 19.333333 | 0.884615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
# backend/model/__init__.py (guchengxi1994/code-find, MIT)
'''
Description:
version:
Author: xiaoshuyui
email: guchengxi1994@qq.com
Date: 2022-04-10 12:51:26
LastEditors: xiaoshuyui
LastEditTime: 2022-04-10 12:51:26
'''
# backend/infrastructure/controller/storage/__init__.py (uesleicarvalhoo/Ecommerce, MIT)
from .none import NoneStorage
7f9209c9ff261d4381c24fac62a72a2e7db1293c | 111 | py | Python | src/__init__.py | quirkyweasel/antitranslator | 9fcd22adf40b0e0b787f77192fa4b5e1885f7c6d | [
"MIT"
] | 1 | 2020-08-25T15:48:38.000Z | 2020-08-25T15:48:38.000Z | src/__init__.py | quirkyweasel/Antitranslator | 9fcd22adf40b0e0b787f77192fa4b5e1885f7c6d | [
"MIT"
] | null | null | null | src/__init__.py | quirkyweasel/Antitranslator | 9fcd22adf40b0e0b787f77192fa4b5e1885f7c6d | [
"MIT"
] | null | null | null | from src.translation import Translation
from src.file_controller import FileController
from src.logic import *
| 27.75 | 46 | 0.855856 | 15 | 111 | 6.266667 | 0.533333 | 0.223404 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108108 | 111 | 3 | 47 | 37 | 0.949495 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7fbc09785becef9053c1d0092d7c0b21a3c481a9 | 17,302 | py | Python | BFS/Leetcode174.py | Rylie-W/LeetRecord | 623c4efe88b3af54b8a65f6ec23db850b8c6f46f | [
"Apache-2.0"
] | null | null | null | BFS/Leetcode174.py | Rylie-W/LeetRecord | 623c4efe88b3af54b8a65f6ec23db850b8c6f46f | [
"Apache-2.0"
] | null | null | null | BFS/Leetcode174.py | Rylie-W/LeetRecord | 623c4efe88b3af54b8a65f6ec23db850b8c6f46f | [
"Apache-2.0"
] | null | null | null | class Solution:
def calculateMinimumHP(self, dungeon) -> int:
'''
#bfs. Time limit exceeded.
q=[[0,0,1-dungeon[0][0] if dungeon[0][0] <=0 else 1, 1 if dungeon[0][0]<=0 else 1+dungeon[0][0]]]
res=float('inf')
direction=[[0,1],[1,0]]
while q:
size=len(q)
for s in range(size):
c=q[0]
q.pop(0)
if c[0]==len(dungeon)-1 and c[1]==len(dungeon[0])-1:
res=min(res,c[2])
else:
for ax,ay in direction:
nx=c[0]+ax
ny=c[1]+ay
if nx>-1 and nx <len(dungeon) and ny>-1 and ny<len(dungeon[0]):
if c[-1]+dungeon[nx][ny]>0:
q.append([nx,ny,c[2],c[-1]+dungeon[nx][ny]])
else:
q.append([nx,ny,c[2]-(c[-1]+dungeon[nx][ny])+1,1])
return res
'''
dp = [[None for i in range(len(dungeon[0]))]
for i in range(len(dungeon))]
dp[0][0] = [1 - dungeon[0][0] if dungeon[0][0] <= 0 else 1,
1 if dungeon[0][0] <= 0 else 1 + dungeon[0][0]]
direction = [[-1, 0], [0, -1]]
for i in range(len(dungeon)):
for j in range(len(dungeon[0])):
for ax, ay in direction:
nx = i + ax
ny = j + ay
                    if -1 < nx < len(dungeon) and -1 < ny < len(dungeon[0]):
if dp[nx][ny][-1] + dungeon[i][j] > 0:
dp[i][j] = [dp[nx][ny][0], dp[nx][ny][-1] + dungeon[i][j]
] if dp[i][j] is None or dp[i][j][0] > dp[nx][ny][0] else dp[i][j]
else:
dp[i][j] = [dp[nx][ny][0] - (dp[nx][ny][-1] + dungeon[i][j]) + 1, 1] if dp[i][j] is None or dp[i][j][
0] > dp[nx][ny][0] - (dp[nx][ny][-1] + dungeon[i][j]) + 1 else dp[i][j]
return dp[-1][-1][0]
if __name__ == '__main__':
sol = Solution()
dungeon = [[1, -3, 3], [0, -2, 0], [-3, -3, -3]]
# dungeon = [[-2, -3, 3], [-5, -10, 1], [10, 30, -5]]
# dungeon=[[2,-8,-79,-88,-12,-87,-5,-56,-55,-42,18,-91,1,-30,-36,42,-96,-26,-17,-69,38,18,44,-58,-33,20,-45,-11,11,15,-40,-92,-62,-51,-23,20,-86,-2,-90,-64,-100,-42,-16,-55,29,-62,-81,-60,7,-5,31,-7,40,19,-53,-81,-77,42,-87,37,-43,37,-50,-21,-86,-28,13,-18,-65,-76],[-67,-23,-62,45,-94,-1,-95,-66,-41,37,33,-96,-95,-17,12,30,-4,40,-40,-89,-89,-25,-62,10,-19,-53,-36,38,-21,1,-41,-81,-62,3,-96,-17,-75,-81,37,32,-9,-80,-41,-13,-58,1,40,-13,-85,-78,-67,-36,-7,48,-16,2,-69,-85,9,15,-91,-32,-16,-84,-9,-31,-62,35,-11,28],[39,-28,1,-31,-4,-39,-64,-86,-68,-72,-68,21,-33,-73,37,-39,2,-59,-71,-17,-60,4,-16,-92,-15,10,-99,-37,21,-70,31,-10,-9,-45,6,26,8,30,13,-72,5,37,-94,35,9,36,-96,47,-61,15,-22,-60,-96,-94,-60,43,-48,-79,19,24,-40,33,-18,-33,50,42,-42,-6,-59,-17],[-95,-40,-96,42,-49,-3,6,-47,-38,31,-25,-61,-18,-52,-80,-55,29,27,22,6,29,-89,-9,14,-77,-26,-2,-7,-2,-64,-100,40,-52,-15,-76,13,-27,-83,-70,13,-62,-54,-92,-71,-65,-18,26,37,0,-58,4,43,-5,-33,-47,-21,-65,-58,21,2,-67,-62,-32,30,-4,-46,18,21,2,-5],[-5,34,41,11,45,-46,-86,31,-57,42,-92,43,-37,-9,42,-29,-3,41,-71,13,-8,37,-36,23,17,-74,-12,-55,-18,-17,-13,-76,-18,-90,-5,14,7,-82,-19,-16,44,-96,-88,37,-98,8,17,9,-2,-29,11,-39,-49,-95,20,-33,-37,-42,42,26,-28,-21,-44,-9,17,-26,-27,24,-60,-19],[-95,-73,-88,-4,32,7,20,19,-17,36,-81,-91,-6,-74,20,47,-24,15,40,-5,-28,-5,23,-30,6,-97,49,-12,-57,21,1,11,-64,-32,-95,-33,10,50,47,41,-11,-51,22,-84,39,10,-36,-72,-27,-60,-19,-51,-11,37,2,-62,22,-66,-61,29,-50,-94,48,-23,18,-37,-92,-92,-4,-97],[46,4,-96,-31,14,-25,-74,-73,-40,-46,38,-31,23,-34,12,25,34,42,-43,-91,3,34,-17,-64,21,-98,11,-70,-36,-66,2,-19,30,-88,43,-62,33,-75,-11,45,-95,-65,-25,27,-35,-57,-81,2,10,33,-46,-40,-4,-15,-94,48,-95,24,-87,-12,12,-70,35,42,-69,19,-74,-14,-81,2],[32,17,-79,-89,25,19,-98,-60,-81,-47,-8,-61,-41,-33,-33,-81,15,-75,8,-99,-29,-75,-23,-54,27,41,-23,-70,19,-60,-91,-60,34,42,-35,25,-61,-46,41,44,-27,-57,-24,-87,4,-55,-40,-74,-20,44,37,14,-48,-89,-26,-15,-40,-38,14,22,24,47,-62,-65,-73,-86,50,-62,24,-53
],[-24,10,32,7,-21,44,-23,5,-98,40,-21,-4,-63,38,-89,26,-62,-98,-87,-67,7,-50,-84,6,-12,-10,-25,7,-96,0,-78,-36,6,22,-83,39,-94,-15,-63,-70,-4,-91,-96,-73,-11,33,11,-24,-11,-64,7,49,7,-55,-91,-88,-100,45,-48,31,28,-97,-88,-96,-14,-22,-41,-97,6,31],[-12,-9,-71,-55,-40,-66,-21,48,30,38,-64,-13,-22,9,-61,-41,12,-26,-92,-55,-33,-67,-59,-31,-77,19,13,-28,8,-13,-13,-19,-31,-87,2,-93,-95,14,6,34,-38,-88,48,38,-52,-92,-62,-76,-55,45,50,24,-76,-54,-70,-35,35,-90,-99,16,12,39,21,-88,-68,6,6,-2,-72,26],[-58,5,-60,16,-84,-67,-28,-11,-63,-49,-4,-41,-99,3,-1,47,12,-69,-69,-20,-78,-21,20,-67,6,14,-86,-35,7,7,7,3,10,-18,33,-14,36,-75,-89,-90,-29,-3,-89,18,48,-61,48,25,-17,-28,-46,44,5,48,-10,-21,-4,49,-57,37,-16,28,22,-95,39,7,-82,13,-68,23],[-77,-64,-34,-54,-25,-19,-63,-57,-33,-76,-89,-21,-70,-62,-97,50,-14,-55,-27,-16,-17,7,29,9,-95,-77,45,-71,-12,27,-68,-62,-66,-100,-60,-86,-15,38,19,-50,43,-9,47,-97,-96,-31,-51,48,-24,2,13,-25,-8,-44,-34,-100,-14,-5,-4,-69,25,-63,-70,-99,-9,-68,-88,-24,5,-69],[25,-88,-91,-26,-73,-52,-81,-96,-53,2,-35,-77,-1,2,16,10,-14,15,42,-52,-42,-48,-68,21,3,-26,-75,45,12,-82,25,-29,15,18,-37,4,41,-26,-98,-94,-74,-5,-15,11,11,-68,-94,-100,-100,29,-83,-99,-75,-9,-62,-31,23,-47,-68,-8,8,-100,-1,-76,-17,-58,36,-84,-32,-93],[-81,37,19,-5,18,15,-34,-29,-84,-35,-67,-64,-57,-99,46,26,-38,-37,-79,0,-6,32,50,37,-63,27,-56,11,-3,-66,-72,34,13,-4,30,-82,32,-71,-35,-30,10,-48,-93,-94,-92,-4,-21,-34,-67,-80,-98,-28,32,-86,-22,6,46,-75,-40,-100,-7,11,-44,-12,-36,-24,46,-1,26,-20],[-81,-20,-54,-75,-78,23,-76,-90,46,-20,-5,30,-59,-37,-82,-68,36,8,50,-72,48,-97,-93,-10,-31,-91,-81,-59,27,-47,-65,-75,-19,26,12,41,16,-30,33,9,-28,-23,45,-32,-68,39,-24,-89,-47,-41,-71,-96,-45,-34,-43,-23,39,-32,-99,-19,46,30,-89,-8,-65,-94,42,8,-14,-50],[-49,-64,20,-6,-82,-30,-99,-79,-27,-2,-17,30,-14,47,-3,-47,31,2,-28,-7,-83,-56,-43,11,-46,-85,-17,-30,-47,-84,-88,-36,42,-71,-39,-28,8,-78,-39,49,8,11,37,6,-38,33,-36,-36,-37,-44,-77,41,45,-47,-63,-58,-28,-31,-97,23,-94,-72,35,-39,-2,27,-87,3,1
3,-66],[49,27,-43,-84,27,-93,38,25,18,43,48,49,-35,-51,28,27,2,29,0,-47,-30,-25,-90,-92,-93,0,-73,-73,0,-62,-73,12,-15,-40,-50,11,-97,-74,42,-55,-27,-7,49,-32,-30,-6,-54,-88,-73,43,-46,-29,-50,-50,-11,5,23,6,12,20,-59,47,-98,-77,47,-5,-17,23,-31,-14],[-28,23,-90,-47,41,32,-38,32,-56,-29,-20,-34,-79,41,6,-52,41,-43,-24,-55,-81,3,-42,6,26,31,-64,35,7,-85,-38,14,-29,8,-88,26,-100,9,-11,-79,-59,2,38,47,23,39,46,-65,-62,-61,1,43,-58,-90,33,36,-30,38,-11,-37,19,41,-2,-94,-8,20,-38,-82,-73,28],[-54,-64,24,-40,-33,34,36,3,-56,-36,23,-91,50,16,27,-26,25,41,-15,-64,-30,-53,50,-46,11,-89,-98,11,-81,-66,-76,-40,12,4,-21,-67,48,-86,-43,22,-70,46,48,-82,-94,-13,0,29,29,36,-19,-1,-55,48,-63,47,-30,-81,1,-43,-83,16,12,48,47,-51,-4,44,16,-86],[-63,31,1,-65,-78,42,-14,-99,-77,-62,27,-51,46,-58,40,15,-19,45,10,0,-67,42,-86,-100,-40,30,-75,0,-82,10,-14,-43,26,-26,-97,-31,-68,-76,13,-85,35,-60,-82,-67,-99,12,-22,8,21,-93,-70,48,36,-57,-57,-91,8,-32,-60,28,-73,-59,-72,-69,-32,33,6,-8,-45,35],[18,-51,-21,48,-18,22,-46,-52,-55,-30,27,-88,-29,-64,22,-34,-62,-23,-60,-76,-41,-64,-82,15,18,-10,-37,-97,38,-50,12,-28,-48,-57,-53,-48,-81,-75,-85,-12,38,-87,-72,-82,-59,-3,25,-46,9,-95,-28,-79,-67,14,-71,-18,14,0,-65,-37,26,30,35,-33,-37,1,33,-70,-28,47],[-78,-43,-64,-91,-78,35,-8,38,-79,-15,-11,17,11,-78,-32,-14,-15,-64,18,-90,32,-48,38,33,-19,32,-23,-29,-68,-23,-53,-58,-20,1,-22,-51,-29,1,-60,-91,8,-15,-12,-71,-16,-23,3,-84,-89,15,14,18,-99,-27,-70,-18,29,-14,15,0,-49,0,17,-97,-23,-30,-49,-46,-52,41],[-34,-39,42,34,-54,43,32,49,46,-23,24,24,-1,-21,-12,-15,-3,-72,-61,-75,-53,-49,-47,-64,40,-56,-63,-95,-52,3,-26,-12,-58,49,5,21,-47,32,-92,-46,-10,28,-90,-83,46,9,-49,-19,32,-41,34,29,0,-98,-40,26,-30,-10,-51,20,-54,33,16,-24,-52,44,49,15,-58,-56],[31,-24,-28,-5,40,32,-74,-1,28,-81,8,-96,8,-22,-27,32,22,-24,16,40,-68,-50,-49,-84,25,-55,-7,-100,-77,25,-40,-6,-68,-90,-88,-74,-77,-24,-17,6,-72,46,23,-52,40,-71,-41,21,28,-29,31,-71,16,-83,42,25,-3,-84,33,-66,-33,-24,-65,-39,-5,21,-36,-87,3,38],[-98,-64,3
8,45,-90,44,-63,20,-44,32,-4,0,-22,-39,-8,19,-20,-58,5,1,32,-92,-36,25,-82,45,-55,23,-50,-89,-81,-33,16,-18,-7,21,40,-62,-16,-84,-19,-37,-66,48,-30,30,-8,1,-70,-6,-65,15,-11,-66,-71,-26,-15,48,-80,-6,-41,29,-80,-82,-6,-73,-56,-38,44,23],[14,-29,19,-13,-93,-33,-29,-16,21,-51,8,6,-63,24,-23,14,47,-84,-27,-11,-56,-65,0,-11,-28,-55,11,-67,18,-6,-87,-34,-44,-24,-77,-34,-22,-8,36,-11,-89,-75,-43,-92,-94,-73,41,-27,-63,30,-37,-82,22,-60,-92,-58,11,-27,-98,-68,45,-3,50,-83,-45,6,12,-48,-16,27],[47,-61,2,7,-84,26,-67,-32,33,-100,-66,-64,-25,25,-4,-51,-69,-92,39,6,-82,-36,-14,-59,-40,-48,-30,-42,5,-72,33,-85,-65,9,45,-11,-28,32,-19,-71,-51,-73,-34,-21,-35,-52,-27,-76,40,44,12,-18,-14,28,-31,19,-20,-84,-28,-32,-75,-84,21,44,3,-70,-59,-6,-27,-39],[-65,-41,-22,29,-31,-62,-26,-40,-100,-40,48,-34,44,-20,20,-29,-65,2,-81,0,-29,-64,-59,38,30,-58,31,-87,7,37,-95,28,48,40,-81,25,-81,-96,4,15,15,-96,14,24,20,-35,-37,-98,5,-82,34,-39,-43,-63,35,1,-10,-83,34,-21,-78,-65,-44,48,-32,-5,-1,-18,-97,28],[11,34,47,21,-69,17,-38,-4,-25,-89,-64,50,7,4,44,44,-5,-16,-99,31,9,-57,11,-100,41,-93,21,-20,28,-39,-23,-6,-34,-61,-24,31,-76,-66,-54,34,2,-85,-2,45,28,-51,-48,-67,-20,-83,-12,-55,-83,-2,-86,-44,-80,-9,4,43,-98,-24,-82,34,-27,-63,-9,-12,-94,-15],[20,-49,-91,-71,37,39,-50,-93,-93,-94,13,32,-99,-26,-25,-79,-10,-18,27,38,13,18,-55,-37,-66,10,-61,32,49,-27,-60,-69,-92,-54,-96,-89,-40,-5,-29,5,-85,20,-22,-5,-49,-31,-83,28,-62,-92,-2,28,-24,20,-12,-92,-30,-92,-76,46,22,-55,28,-81,-59,23,-22,-3,36,20],[31,-79,31,-22,-47,-47,-42,-10,-32,-38,27,2,-39,-28,-33,4,-54,-80,-89,24,23,-1,30,-60,-31,-35,0,-37,-46,-33,33,44,7,10,-27,-98,6,22,-48,14,-72,-69,42,23,6,-50,-84,-62,-15,-69,4,32,16,-61,48,42,-41,-8,-82,-62,-79,-15,-88,-11,-61,-96,-63,-95,24,-49],[5,9,-93,-50,-42,-54,50,17,22,12,-11,-5,-35,-7,-65,-71,-4,27,42,-13,-100,-21,-7,14,-86,16,-42,33,-54,24,-44,-1,-89,42,-34,-99,-74,-15,-70,-40,-69,-41,-17,47,48,33,-100,-64,-39,47,-75,-1,-97,-45,-71,-48,-80,-86,14,6,-97,-80,-75,-59,-3,-1,-66,-93,-65,-14],[-67,
-57,-41,41,-54,10,-25,-78,7,21,-2,27,32,-68,35,-61,46,-80,-89,-20,-87,-1,-97,-81,4,11,-56,-31,-38,15,-70,-11,-15,-9,-76,1,40,-72,-66,-52,30,-97,15,22,41,-57,15,13,-53,-71,-50,-39,18,-18,6,-20,-41,32,-16,22,-1,-47,-82,-2,-92,-93,-6,28,-60,-100],[-88,16,-14,-3,-12,16,-94,-49,3,-6,8,-45,-36,-81,21,-37,38,-53,-54,-78,-99,38,-60,10,22,10,20,43,-27,-10,24,-15,-3,28,-51,-93,32,-9,-68,-22,34,-91,-34,-7,-48,-6,-49,-13,-54,-7,-56,-79,-5,36,-58,-36,-8,10,-20,-72,-82,13,-70,-98,39,-88,-62,-25,19,-62],[-16,50,-27,-22,32,-30,50,-88,4,-71,-14,-39,-52,-16,-43,-62,20,11,-73,-10,13,31,-71,-68,-79,21,5,-55,48,33,-36,35,-14,-65,37,29,-54,15,40,-3,-68,-11,-48,-25,33,-78,-41,-81,-48,-63,-100,-56,-70,3,31,-20,-80,17,-95,14,-80,11,-31,-38,-54,-17,0,-93,-2,1],[-16,-16,-36,0,-78,-8,22,-90,21,-46,26,42,16,-54,-72,-27,-69,-33,-82,-41,29,39,-46,-100,-78,-80,-85,-81,15,-66,-65,45,-21,9,16,-42,12,-62,50,-71,-44,-44,-85,-68,8,-52,-77,4,-19,-29,50,39,-73,-92,2,-88,31,-47,-12,-31,-33,-6,-75,-71,18,-61,-43,25,-73,33],[-40,-74,-34,-81,8,1,-52,-65,13,-10,48,-31,38,-26,-33,7,31,38,-70,-93,-34,-85,-4,7,-9,-23,-97,-18,-13,-12,-11,-63,-25,-6,-42,-44,38,-26,-28,-52,-7,-43,-85,-76,31,37,29,47,-83,-55,-8,-32,8,49,5,-49,-18,5,-59,-27,-37,-60,36,-27,-45,-87,-8,-74,-92,-49],[-18,-35,-84,28,15,-99,-37,-96,-29,44,11,-50,-78,-56,41,-42,-13,14,8,14,-96,8,-52,-33,41,-4,-65,-65,-79,-18,-73,1,-69,1,-82,33,-33,-10,19,2,-51,20,-19,-30,-28,-40,38,35,10,-3,44,-29,22,-95,-2,-97,-1,-16,29,-2,-7,-92,-38,-19,-88,-16,28,-54,-94,-65],[-75,-38,-79,-54,11,-45,11,23,15,24,-39,1,50,-61,-44,-83,13,7,-2,0,38,16,-90,40,-42,34,46,-62,37,-90,34,-87,-2,-86,-53,7,-33,17,-34,-93,31,-52,-63,30,49,-71,-84,-82,23,39,45,22,-90,24,-99,-75,-32,-31,-73,10,-59,-34,-40,26,-1,-17,-1,31,43,-32],[-74,-98,-55,-88,-76,-62,-48,2,-50,39,-6,-34,39,-37,-52,-8,-23,-66,-23,-31,-38,44,38,-75,-57,21,15,-25,-68,-20,-16,-15,-78,36,-69,-21,-72,8,-63,-92,-75,-5,-98,-96,-67,-88,-91,1,-20,-43,-81,-68,49,-83,35,-3,-2,3,-84,-63,46,30,-63,-36,14,28,-68,-33,8,-33],[7,-89,
34,-58,-15,27,-37,-1,-83,-28,-95,13,-50,-100,34,41,-88,-61,-37,2,-38,-97,-88,-14,-39,-58,17,-95,40,-93,-33,15,-83,-13,-32,-91,-77,-44,-93,1,3,26,-73,41,-63,-24,1,-30,-84,18,-5,-3,13,30,45,42,-81,-91,-75,-63,-66,24,-22,40,30,37,-2,-97,-43,-4],[50,3,17,-6,-11,-67,-48,46,-22,-74,-17,-41,-24,-60,35,10,-85,-48,14,-36,4,-42,-88,-10,38,-67,-98,-70,-52,-36,-72,-66,-77,-51,-98,-19,-50,-77,47,27,-80,-28,-4,31,39,-28,27,-16,11,-7,-13,41,-15,-60,16,1,-66,-34,30,45,-41,29,-23,-89,-96,9,-75,-57,-17,19],[20,-18,-68,-61,-93,48,-31,-29,9,-100,-27,-9,-81,-45,-44,-2,7,38,-27,15,41,-66,25,-49,1,36,-66,-30,-53,-67,-75,12,5,-2,-96,13,-27,-36,3,1,-65,-47,-86,-65,40,19,36,-16,15,-49,-30,-28,-56,-57,1,-28,-82,-64,8,40,-87,-56,45,-37,-15,-72,-42,44,-66,-34],[-75,40,-87,-89,-27,50,-20,-1,27,-41,-90,-72,38,-26,43,-84,-4,-1,38,39,-8,-82,-99,7,-56,-55,-70,10,-27,15,-56,50,-69,-48,-44,7,-83,-68,-27,-24,-22,-34,20,26,-66,-58,-7,-87,29,33,-15,18,-47,-70,-22,35,3,-87,-14,8,-75,41,26,38,-83,42,24,20,42,8],[16,-11,-22,45,13,-9,43,15,-100,-16,-65,-81,-36,42,26,-13,-64,28,16,-52,4,-87,50,1,-9,-62,27,-42,-65,-21,-53,-53,-28,-78,-6,-95,-18,-76,27,-65,33,33,-99,-27,28,-2,-80,-63,-9,-89,-20,-49,-38,43,11,-27,-63,-62,-42,-33,-96,-39,16,-79,-53,18,-8,-84,-55,-2],[-57,4,-74,1,-94,-72,-58,-81,-97,24,-53,33,-66,-17,-12,-61,-89,-30,7,-56,11,-20,26,30,2,-92,40,11,30,-32,-23,-31,-90,-61,-13,33,-59,-74,6,-76,23,31,-63,16,-9,-22,29,17,18,-22,-55,-25,-45,-98,-86,-81,-32,-100,-83,-40,-1,-99,-51,-55,-87,-8,44,-94,9,-50],[40,-19,-54,-78,-12,-20,13,3,-36,-98,19,49,-69,29,-63,-41,-81,-84,-83,-32,-25,30,-56,-96,-48,50,-87,-100,33,-15,-66,47,45,-26,-60,-1,27,-64,-4,-77,-57,-52,9,2,-72,-66,-73,-23,-39,-66,43,34,-26,17,-98,-68,-39,49,-14,-20,-32,-50,18,-37,-61,10,16,-75,2,-32],[-29,-13,-75,-3,-9,49,-77,10,-65,18,-97,-63,-14,-10,-65,-26,17,-53,-49,-68,-46,-65,-47,-5,15,22,-26,-40,24,-89,-12,27,-60,11,20,8,-4,32,-47,-62,2,9,-85,-40,-18,-80,-56,-12,42,-59,-97,-68,2,-4,48,-12,50,-17,-45,-100,20,7,-13,-22,20,26,48,-52,13,16],[-40,-
95,-48,-83,-69,-46,-81,-19,-47,-50,42,-53,32,-39,-51,-54,-95,8,-44,-67,-42,-38,40,19,43,18,12,-64,-59,13,31,17,24,-63,-44,21,-36,-16,-38,-20,-83,-28,36,-14,23,-71,29,49,-65,1,37,49,38,31,-30,-34,32,14,-85,3,-41,-80,-29,-53,-88,45,39,-39,-1,36],[-85,-62,-8,-76,-89,-74,-36,36,-86,5,3,-55,-44,-98,48,47,-3,-75,-31,-11,26,-66,-82,46,-73,-96,10,-95,-19,30,-79,0,32,-22,-43,-51,-22,-49,5,48,-30,-20,8,-20,-44,20,15,3,-56,28,-17,-87,19,-22,-71,-72,-57,-91,-62,-1,-99,36,-57,19,47,-62,-52,-42,7,4],[-24,-8,47,-83,-3,29,25,-28,9,-13,-57,3,-5,40,-16,-81,39,-89,-56,-56,-85,-34,-17,-95,-63,-20,1,-93,31,12,15,-70,-46,43,16,-3,26,-50,-23,-77,18,10,33,30,-8,-6,37,-97,-73,32,21,19,-30,-28,14,5,36,-78,-85,36,21,15,22,-32,-11,-45,21,-58,24,-68],[-87,-90,0,-38,18,22,-52,28,-83,50,-81,-76,28,-46,16,-6,-84,-100,-58,-34,25,5,19,-41,-12,-20,-42,-44,-70,-6,-9,-6,25,-91,-43,-40,-39,-13,-89,-11,33,12,-72,-6,-60,-96,-90,6,34,34,-88,26,6,-98,-99,-47,-3,10,-88,-80,-97,25,8,-63,21,33,26,12,-67,16],[3,-32,-39,46,-14,-100,30,27,-22,-48,-11,8,-97,36,-33,47,42,-78,46,26,-50,5,-81,1,-58,18,8,-36,-63,-100,-47,-25,-42,-97,25,-30,-73,-84,18,11,47,-33,-27,-16,-82,20,-4,-30,-62,47,-90,2,41,-3,-91,-26,-5,24,-48,-75,-53,16,-7,-42,-52,-73,45,-30,-78,-98],[-80,12,-48,-80,-64,-42,-70,28,46,16,11,-10,20,24,30,-37,2,-3,-41,4,-47,6,-33,28,35,27,33,-4,-91,-87,-13,-62,-72,-43,48,9,20,21,24,36,27,-10,34,42,13,-61,-54,-30,-15,10,-99,45,39,-13,-18,25,-49,23,47,-79,20,-28,-94,16,19,43,-94,4,-35,-40],[-30,-96,-46,-81,-73,18,-50,37,29,-68,-12,-85,28,-90,43,-22,-46,-22,6,23,-43,-46,-32,-54,14,28,-20,25,-87,-97,24,-55,25,-70,-27,40,-77,22,16,-85,-7,-63,-59,-99,-97,8,50,-62,11,-70,-5,-28,-11,-8,41,-4,34,-31,-17,24,-10,23,-3,-61,-51,-89,42,18,50,-94],[-63,32,36,-75,-79,-31,-12,10,-87,20,-85,-10,-61,-81,4,-78,-71,-41,44,-70,25,-92,44,-52,-59,-29,-57,-77,-54,-10,-92,-40,-60,-3,-24,-47,24,35,24,-27,26,-10,-64,15,-41,25,-44,16,41,-82,18,8,43,-55,31,30,13,-83,12,-22,29,-97,-2,-46,-23,30,-32,25,-3,44],[50,11,-49,-5,-25,4,12,27,-90,-27,
48,16,-67,-58,-22,48,-8,-85,-39,29,8,-7,-22,18,41,20,30,12,-37,-18,7,-89,-5,-66,23,5,-87,6,-44,-6,-63,-42,-21,18,-60,-98,-77,-56,-2,-21,35,2,5,29,36,-49,-82,5,24,49,11,-37,-80,-51,-8,-21,11,21,-70,-79],[-59,-6,-91,3,-83,29,1,-44,-13,-60,-19,-91,-13,-31,-72,19,-24,-68,0,-19,-86,-40,49,-83,-3,47,35,-67,26,-63,35,-95,-10,-7,-55,47,-49,-17,-10,39,-8,-93,34,-86,37,-31,-33,-49,-1,-7,-37,-77,20,-24,-72,38,-5,-31,-40,-69,-93,-4,-63,-73,-71,-11,-47,42,17,-73],[-12,-56,-25,-7,-91,-6,-62,-34,-27,-88,-58,-90,-12,-3,23,-6,-42,36,27,39,-48,-1,-92,-8,23,-91,-50,-84,5,19,-63,32,30,31,-67,36,-15,44,-60,-57,-33,18,49,11,-55,-54,-27,-31,-96,-20,1,36,-14,-15,44,-72,-88,-41,24,-7,-4,48,34,33,19,-41,-11,-91,39,-9],[-98,-47,-87,-97,-82,15,9,13,-56,35,-7,50,-70,34,27,3,-24,-72,-63,-56,-95,-60,-59,32,-29,31,49,43,40,-75,-54,3,-87,-84,-37,46,30,-5,-74,-36,-76,-91,-40,28,-20,26,-78,-27,2,44,-24,14,-15,29,-86,-55,24,-80,-24,10,-53,-98,50,-63,-61,25,-90,24,24,-92]]
print(sol.calculateMinimumHP(dungeon))
| 320.407407 | 14,954 | 0.508381 | 4,546 | 17,302 | 1.933128 | 0.032116 | 0.003414 | 0.008193 | 0.003983 | 0.060423 | 0.058944 | 0.045061 | 0.043241 | 0.043241 | 0.043241 | 0 | 0.490858 | 0.058028 | 17,302 | 53 | 14,955 | 326.45283 | 0.048349 | 0.910588 | 0 | 0 | 0 | 0 | 0.005874 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04 | false | 0 | 0 | 0 | 0.12 | 0.04 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f6892b2f5125adfe5ae1611ded21756b71a3668d | 173 | py | Python | jira_comment/base.py | xandox/jira_comment | c47adf4d3a1a5d7be82281b32cea95a85bee06d4 | [
"MIT"
] | 1 | 2020-05-28T18:22:52.000Z | 2020-05-28T18:22:52.000Z | jira_comment/base.py | xandox/jira_comment | c47adf4d3a1a5d7be82281b32cea95a85bee06d4 | [
"MIT"
] | null | null | null | jira_comment/base.py | xandox/jira_comment | c47adf4d3a1a5d7be82281b32cea95a85bee06d4 | [
"MIT"
] | null | null | null | from abc import ABC, abstractmethod
class JiraBase(ABC):
@abstractmethod
def render(self) -> str:
pass
def __str__(self):
return self.render() | 17.3 | 35 | 0.635838 | 20 | 173 | 5.3 | 0.6 | 0.320755 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.271676 | 173 | 10 | 36 | 17.3 | 0.84127 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0.142857 | 0.142857 | 0.142857 | 0.714286 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 6 |
f6d8b7abd0d8550f8ff2d72ec9694e3064db4941 | 43 | py | Python | qazaq_transliterator/__init__.py | altynbek07/python-qazaq-transliterator | 5a1298ed5749d79d605ff4ae4f681ef706c480d3 | [
"MIT"
] | 1 | 2022-03-26T22:41:32.000Z | 2022-03-26T22:41:32.000Z | qazaq_transliterator/__init__.py | altynbek07/python-qazaq-transliterator | 5a1298ed5749d79d605ff4ae4f681ef706c480d3 | [
"MIT"
] | null | null | null | qazaq_transliterator/__init__.py | altynbek07/python-qazaq-transliterator | 5a1298ed5749d79d605ff4ae4f681ef706c480d3 | [
"MIT"
] | null | null | null | from .qazaq_transliterator import translit
| 21.5 | 42 | 0.883721 | 5 | 43 | 7.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.093023 | 43 | 1 | 43 | 43 | 0.948718 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
63eb84a13d2e09b24889915b2e2c3add94b81ed7 | 24 | py | Python | test/performance-regression/full-apps/qmcpack/nexus/library/nexus.py | JKChenFZ/hclib | 50970656ac133477c0fbe80bb674fe88a19d7177 | [
"BSD-3-Clause"
] | 55 | 2015-07-28T01:32:58.000Z | 2022-02-27T16:27:46.000Z | test/performance-regression/full-apps/qmcpack/nexus/library/nexus.py | JKChenFZ/hclib | 50970656ac133477c0fbe80bb674fe88a19d7177 | [
"BSD-3-Clause"
] | 66 | 2015-06-15T20:38:19.000Z | 2020-08-26T00:11:43.000Z | test/performance-regression/full-apps/qmcpack/nexus/library/nexus.py | JKChenFZ/hclib | 50970656ac133477c0fbe80bb674fe88a19d7177 | [
"BSD-3-Clause"
] | 26 | 2015-10-26T22:11:51.000Z | 2021-03-02T22:09:15.000Z |
from project import *
| 6 | 21 | 0.708333 | 3 | 24 | 5.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.25 | 24 | 3 | 22 | 8 | 0.944444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
63f81c111b4be00be4db9d878f0c59fa959f9c14 | 43 | py | Python | enthought/traits/ui/view.py | enthought/etsproxy | 4aafd628611ebf7fe8311c9d1a0abcf7f7bb5347 | [
"BSD-3-Clause"
] | 3 | 2016-12-09T06:05:18.000Z | 2018-03-01T13:00:29.000Z | enthought/traits/ui/view.py | enthought/etsproxy | 4aafd628611ebf7fe8311c9d1a0abcf7f7bb5347 | [
"BSD-3-Clause"
] | 1 | 2020-12-02T00:51:32.000Z | 2020-12-02T08:48:55.000Z | enthought/traits/ui/view.py | enthought/etsproxy | 4aafd628611ebf7fe8311c9d1a0abcf7f7bb5347 | [
"BSD-3-Clause"
] | null | null | null | # proxy module
from traitsui.view import *
| 14.333333 | 27 | 0.767442 | 6 | 43 | 5.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.162791 | 43 | 2 | 28 | 21.5 | 0.916667 | 0.27907 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
121c34c5f02055e790d86aa03dd453361997aac0 | 4,462 | py | Python | AngularJS Projects/TODO/server/databas.py | ADHIL-MOHAMMED-P-N/Dev-Geeks | 980ea053d0c62735ca05ba24684994c412612215 | [
"MIT"
] | 5 | 2021-10-18T14:37:37.000Z | 2022-02-26T14:08:11.000Z | AngularJS Projects/TODO/server/databas.py | ADHIL-MOHAMMED-P-N/Dev-Geeks | 980ea053d0c62735ca05ba24684994c412612215 | [
"MIT"
] | 30 | 2021-10-18T14:52:39.000Z | 2022-01-07T08:03:18.000Z | AngularJS Projects/TODO/server/databas.py | ADHIL-MOHAMMED-P-N/Dev-Geeks | 980ea053d0c62735ca05ba24684994c412612215 | [
"MIT"
] | 14 | 2021-10-18T15:20:48.000Z | 2021-10-30T19:56:16.000Z | import mysql.connector
def test():
db = mysql.connector.connect(
host="us-cdbr-east-03.cleardb.com",
user="b785fbb89c1d88",
password="5872c3bf",
database="heroku_d100d1934e0d01d"
)
print(db.is_connected())
db.close()
def insert(todo):
db = mysql.connector.connect(
host="us-cdbr-east-03.cleardb.com",
user="b785fbb89c1d88",
password="5872c3bf",
database="heroku_d100d1934e0d01d"
)
print(todo)
cur = db.cursor()
sql="insert into todo (task,dueby,status) values (%s, %s, %s)"
val=[todo["task"],todo["dueby"][:10],todo["status"]]
cur.execute(sql,val)
db.commit()
cur.close()
db.close()
def getTodos():
db = mysql.connector.connect(
host="us-cdbr-east-03.cleardb.com",
user="b785fbb89c1d88",
password="5872c3bf",
database="heroku_d100d1934e0d01d"
)
cur = db.cursor()
cur.execute("select * from todo")
headers=['id','task','dueby','status']
res=[]
for x in cur:
res.append(dict(zip(headers,x)))
cur.close()
db.close()
return res
def update(todo):
db = mysql.connector.connect(
host="us-cdbr-east-03.cleardb.com",
user="b785fbb89c1d88",
password="5872c3bf",
database="heroku_d100d1934e0d01d"
)
if "task" in todo.keys():
cur = db.cursor()
val=[todo["task"],todo["id"]]
cur.execute("update todo set task=%s where id=%s",val)
cur.close()
if "dueby" in todo.keys():
cur = db.cursor()
val=[todo["dueby"],todo["id"]]
cur.execute("update todo set dueby=%s where id=%s",val)
cur.close()
if "status" in todo.keys():
cur = db.cursor()
val=[todo["status"],todo["id"]]
cur.execute("update todo set status=%s where id=%s",val)
cur.close()
db.commit()
db.close()
def delete(todo):
db = mysql.connector.connect(
host="us-cdbr-east-03.cleardb.com",
user="b785fbb89c1d88",
password="5872c3bf",
database="heroku_d100d1934e0d01d"
)
cur = db.cursor()
val=[todo["id"]]
cur.execute("delete from todo where id=%s",val)
db.commit()
cur.close()
db.close()
def due(due_date):
db = mysql.connector.connect(
host="us-cdbr-east-03.cleardb.com",
user="b785fbb89c1d88",
password="5872c3bf",
database="heroku_d100d1934e0d01d"
)
cur = db.cursor()
cur.execute("select * from todo where dueby=%s",[due_date])
res=[]
headers=['id','task','dueby','status']
for x in cur:
res.append(dict(zip(headers,x)))
cur.close()
db.close()
return res
def overdue():
db = mysql.connector.connect(
host="us-cdbr-east-03.cleardb.com",
user="b785fbb89c1d88",
password="5872c3bf",
database="heroku_d100d1934e0d01d"
)
cur = db.cursor()
cur.execute("select * from todo where dueby<curdate()")
res=[]
headers=['id','task','dueby','status']
for x in cur:
res.append(dict(zip(headers,x)))
cur.close()
db.close()
return res
def finished():
db = mysql.connector.connect(
host="us-cdbr-east-03.cleardb.com",
user="b785fbb89c1d88",
password="5872c3bf",
database="heroku_d100d1934e0d01d"
)
cur = db.cursor()
cur.execute("select * from todo where status='finished'")
res=[]
headers=['id','task','dueby','status']
for x in cur:
res.append(dict(zip(headers,x)))
cur.close()
db.close()
return res
def login(creds):
db = mysql.connector.connect(
host="us-cdbr-east-03.cleardb.com",
user="b785fbb89c1d88",
password="5872c3bf",
database="heroku_d100d1934e0d01d"
)
cur = db.cursor()
cur.execute("select count(*) from users where username=%s",[creds["username"]])
for x in cur:
count=x
if count[0]==0:
cur.close()
db.close()
return '',202
else:
cur.execute("select * from users where username=%s",[creds["username"]])
for x in cur:
pas=x
print(pas)
if pas[2]==creds["password"]:
cur.close()
db.close()
headers=['username','password','access']
return dict(zip(headers,pas[1:])),201
else:
cur.close()
db.close()
return '',202
| 26.247059 | 83 | 0.564097 | 539 | 4,462 | 4.647495 | 0.142857 | 0.038323 | 0.043912 | 0.082635 | 0.80479 | 0.792415 | 0.773253 | 0.730539 | 0.656287 | 0.656287 | 0 | 0.073823 | 0.271403 | 4,462 | 170 | 84 | 26.247059 | 0.696709 | 0 | 0 | 0.690323 | 0 | 0 | 0.271566 | 0.098812 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058065 | false | 0.070968 | 0.006452 | 0 | 0.109677 | 0.019355 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
122605386fc62b5e8949ce38a707ec35f4590848 | 219 | py | Python | tests/__init__.py | TigerDX/dj-stripe | 2fd4897abaedf2d9faa3dd5af86402dae3ab86a3 | [
"BSD-3-Clause"
] | null | null | null | tests/__init__.py | TigerDX/dj-stripe | 2fd4897abaedf2d9faa3dd5af86402dae3ab86a3 | [
"BSD-3-Clause"
] | null | null | null | tests/__init__.py | TigerDX/dj-stripe | 2fd4897abaedf2d9faa3dd5af86402dae3ab86a3 | [
"BSD-3-Clause"
] | 1 | 2021-08-30T10:51:49.000Z | 2021-08-30T10:51:49.000Z | from stripe import api_key
from stripe.resource import convert_to_stripe_object
def convert_to_fake_stripe_object(response):
return convert_to_stripe_object(resp=response, api_key=api_key, account="test_account")
| 31.285714 | 91 | 0.849315 | 34 | 219 | 5.058824 | 0.470588 | 0.104651 | 0.174419 | 0.244186 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.091324 | 219 | 6 | 92 | 36.5 | 0.864322 | 0 | 0 | 0 | 0 | 0 | 0.054795 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.5 | 0.25 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
12382157d5b2b774188b0c6f74f22469addeba34 | 44 | py | Python | tests/test_backslash.py | yotamr/backslash-python | 03bdbcfadad9a7add6ad295ecba8686ab871e03d | [
"BSD-3-Clause"
] | null | null | null | tests/test_backslash.py | yotamr/backslash-python | 03bdbcfadad9a7add6ad295ecba8686ab871e03d | [
"BSD-3-Clause"
] | null | null | null | tests/test_backslash.py | yotamr/backslash-python | 03bdbcfadad9a7add6ad295ecba8686ab871e03d | [
"BSD-3-Clause"
] | null | null | null | import backslash
# py.test style tests here | 14.666667 | 26 | 0.795455 | 7 | 44 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.159091 | 44 | 3 | 26 | 14.666667 | 0.945946 | 0.545455 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
89dc9b9cd07abc2a752a72c42f9e331d24b8943c | 167 | py | Python | plotly/graph_objs/scatterpolar/marker/colorbar/__init__.py | mprostock/plotly.py | 3471c3dfbf783927c203c676422260586514b341 | [
"MIT"
] | 12 | 2020-04-18T18:10:22.000Z | 2021-12-06T10:11:15.000Z | plotly/graph_objs/scatterpolar/marker/colorbar/__init__.py | Vesauza/plotly.py | e53e626d59495d440341751f60aeff73ff365c28 | [
"MIT"
] | 27 | 2020-04-28T21:23:12.000Z | 2021-06-25T15:36:38.000Z | plotly/graph_objs/scatterpolar/marker/colorbar/__init__.py | Vesauza/plotly.py | e53e626d59495d440341751f60aeff73ff365c28 | [
"MIT"
] | 6 | 2020-04-18T23:07:08.000Z | 2021-11-18T07:53:06.000Z | from ._title import Title
from plotly.graph_objs.scatterpolar.marker.colorbar import title
from ._tickformatstop import Tickformatstop
from ._tickfont import Tickfont
| 33.4 | 64 | 0.862275 | 21 | 167 | 6.666667 | 0.52381 | 0.157143 | 0.214286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.095808 | 167 | 4 | 65 | 41.75 | 0.927152 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
89deb15152fd8f98f87a83934fb93e833cb4eaf0 | 125 | py | Python | tests/python-reference/modules/test_in_modules.py | jpolitz/lambda-py-paper | 746ef63fc1123714b4adaf78119028afbea7bd76 | [
"Apache-2.0"
] | 25 | 2015-04-16T04:31:49.000Z | 2022-03-10T15:53:28.000Z | tests/python-reference/modules/test_in_modules.py | jpolitz/lambda-py-paper | 746ef63fc1123714b4adaf78119028afbea7bd76 | [
"Apache-2.0"
] | 1 | 2018-11-21T22:40:02.000Z | 2018-11-26T17:53:11.000Z | tests/python-reference/modules/test_in_modules.py | jpolitz/lambda-py-paper | 746ef63fc1123714b4adaf78119028afbea7bd76 | [
"Apache-2.0"
] | 1 | 2021-03-26T03:36:19.000Z | 2021-03-26T03:36:19.000Z | # When importing a module,
# the module is added to sys.modules
import sys
import support
___assertIn("support", sys.modules)
| 17.857143 | 38 | 0.768 | 20 | 125 | 4.65 | 0.65 | 0.215054 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 125 | 6 | 39 | 20.833333 | 0.885714 | 0.456 | 0 | 0 | 0 | 0 | 0.107692 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d64dea55fcc376f81fdfd4b4f8774758b7caa503 | 80 | py | Python | snippets/py/array/clear remove/clear.py | snippetfinder/The-Quick-Snippet-Reference | 4d2c38cb3687f31428539b6c9cdb11abdd4c6682 | [
"BSL-1.0"
] | 10 | 2022-01-13T15:56:14.000Z | 2022-01-21T20:43:29.000Z | snippets/py/array/clear remove/clear.py | snippetfinder/The-Quick-Snippet-Reference | 4d2c38cb3687f31428539b6c9cdb11abdd4c6682 | [
"BSL-1.0"
] | 1 | 2022-01-21T20:33:13.000Z | 2022-01-22T20:26:57.000Z | snippets/py/array/clear remove/clear.py | snippetfinder/The-Quick-Snippet-Reference | 4d2c38cb3687f31428539b6c9cdb11abdd4c6682 | [
"BSL-1.0"
] | null | null | null | array = [1, 2, 3]
print(array) # [1, 2, 3]
del array[:] # clears the list in place (equivalent to array.clear())
print(array) # [] | 20 | 25 | 0.4875 | 14 | 80 | 2.857143 | 0.5 | 0.3 | 0.35 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.098361 | 0.2375 | 80 | 4 | 26 | 20 | 0.540984 | 0.175 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
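The snippet above clears a list in place with slice deletion. A short sketch contrasting the in-place approaches with rebinding, which creates a new list object instead of clearing the shared one:

```python
# Two in-place ways to empty a list, plus rebinding for contrast.
a = [1, 2, 3]
alias = a           # second reference to the same list object
del a[:]            # in-place clear via slice deletion
print(alias)        # [] -- the alias sees the change

b = [1, 2, 3]
b.clear()           # in-place clear (Python 3.3+), equivalent to del b[:]
print(b)            # []

c = [1, 2, 3]
alias_c = c
c = []              # rebinding: alias_c still holds the old contents
print(alias_c)      # [1, 2, 3]
```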
d659c14728b898c9b64d10fff082f7ade6a427a1 | 35,469 | py | Python | pipecaster/transform_wrappers.py | ajcallegari/pipecaster | dc283db67662385a54179310e3dbede04ec3db84 | [
"MIT"
] | null | null | null | pipecaster/transform_wrappers.py | ajcallegari/pipecaster | dc283db67662385a54179310e3dbede04ec3db84 | [
"MIT"
] | 1 | 2021-03-26T21:06:06.000Z | 2021-03-26T21:06:06.000Z | pipecaster/transform_wrappers.py | ajcallegari/pipecaster | dc283db67662385a54179310e3dbede04ec3db84 | [
"MIT"
] | null | null | null | """
Wrapper classes for internal ML models.
MultichannelPipelines treat all internal components as transformers (i.e.
invoking fit/transform/fit_transform). As a consequence, when predictors are
used internally (e.g. for voting or stacking) a transformer interface must be
added to the internal predictors. In practice, this means choosing a
prediction method to use when transforming, converting 1D outputs to 2D
outputs, and applying internal cross validation training when required.
:class:`SingleChannel` and :class:`Multichannel` classes add a transformer
interface to single channel and multichannel predictors respectively.
:class:`SingleChannelCV` and :class:`MultichannelCV` classes add a transformer
interface and internal cross validation training to single channel and
multichannel predictors respectively. Internal cross validation (internal cv)
training is typically used when outputs of a base predictor will be used to
train a meta-predictor. It guarantees that base predictors do not make
inferences on their own training samples (1). Internal cv training can improve
meta-predictor accuracy if overfitting is a limiting problem, or it can reduce
meta-predictor accuracy if the number of training samples is limiting.
(1) Wolpert, David H. "Stacked generalization." Neural networks 5.2
(1992): 241-259.
"""
import functools
import numpy as np
from sklearn.metrics import log_loss
import pipecaster.utils as utils
import pipecaster.config as config
from pipecaster.utils import Cloneable, Saveable
from pipecaster.cross_validation import cross_val_predict, score_predictions
__all__ = ['make_transformer', 'make_cv_transformer', 'unwrap_predictor',
'unwrap_model']
def make_transformer(predictor, transform_method='auto'):
"""
Add transform methods to a predictor.
Parameters
----------
predictor : scikit-learn predictor or multichannel predictor
Predictor to wrap.
transform_method : str, default='auto'
- Name of the prediction method to call when transforming (e.g. when
outputting meta-features).
- If 'auto' :
- If classifier : method picked using
config.transform_method_precedence order (default:
predict_proba->predict_log_proba->decision_function->predict).
- If regressor : 'predict'
Returns
-------
Predictor/transformer
A wrapped predictor with both predictor and transformer interfaces.
Examples
--------
::
from sklearn.ensemble import GradientBoostingClassifier
import pipecaster as pc
Xs, y, X_types = pc.make_multi_input_classification(n_informative_Xs=3,
n_random_Xs=2)
clf = pc.MultichannelPipeline(n_channels=5)
base_clf = GradientBoostingClassifier()
base_clf = pc.make_transformer(base_clf)
clf.add_layer(base_clf)
clf.add_layer(pc.SoftVotingClassifier())
pc.cross_val_score(clf, Xs, y, cv=3)
# output: [0.8529411764705882, 0.9411764705882353, 0.96875]
"""
if utils.is_multichannel(predictor):
return Multichannel(predictor, transform_method)
else:
return SingleChannel(predictor, transform_method)
def make_cv_transformer(predictor, transform_method='auto', internal_cv=5,
score_method='auto', scorer='auto', cv_processes=1):
"""
Add internal cross validation training and transform methods to a
predictor.
Parameters
----------
predictor : scikit-learn predictor or multichannel predictor
Predictor to wrap.
transform_method : str, default='auto'
- Name of the prediction method to call when transforming (e.g. when
outputting meta-features).
- If 'auto' :
- If classifier : method picked using
config.transform_method_precedence order (default:
predict_proba->predict_log_proba->decision_function->predict).
- If regressor : 'predict'
internal_cv : int, None, or callable, default=5
- Function for train/test subdivision of the training data. Used to
estimate performance of base classifiers and ensure they do not
generate predictions from their training samples during
meta-predictor training.
- If int > 1: StratifiedKfold(n_splits=internal_cv) if classifier or
KFold(n_splits=internal_cv) if regressor.
- If {None, 1}: disable internal cv.
- If callable: Assumed to be split generator like scikit-learn KFold.
score_method : str, default='auto'
- Name of prediction method used when scoring predictor performance.
- If 'auto' :
- If classifier : method picked using
config.score_method_precedence order (default:
predict_proba->predict_log_proba->decision_function->predict).
- If regressor : 'predict'
scorer : callable, default='auto'
Callable that computes a figure of merit score for the internal_cv run.
The score is exposed as score_ attribute during fit_transform().
- If 'auto':
- explained_variance_score for regressors with predict()
- roc_auc_score for classifiers with {predict_proba,
predict_log_proba, decision_function}
- balanced_accuracy_score for classifiers with only predict()
- If callable: A scorer with signature: score = scorer(y_true, y_pred).
cv_processes : int or 'max', default=1
- The number of parallel processes to run for internal cross
validation.
- If int : Use up to cv_processes number of processes.
- If 'max' : Use all available CPUs.
Returns
-------
Predictor/transformer
A wrapped predictor with both predictor and transformer interfaces.
Internal cross_validation training occurs during calls to
fit_transform().
Examples
--------
::
from sklearn.ensemble import GradientBoostingClassifier
import pipecaster as pc
Xs, y, X_types = pc.make_multi_input_classification(n_informative_Xs=3,
n_random_Xs=2)
clf = pc.MultichannelPipeline(n_channels=5)
base_clf = GradientBoostingClassifier()
base_clf = pc.make_cv_transformer(base_clf)
clf.add_layer(base_clf)
clf.add_layer(pc.MultichannelPredictor(GradientBoostingClassifier()))
pc.cross_val_score(clf, Xs, y, cv=3)
# output: [0.8529411764705882, 0.9080882352941176, 1.0]
"""
if utils.is_multichannel(predictor):
return MultichannelCV(predictor, transform_method, internal_cv,
score_method, scorer, cv_processes)
else:
return SingleChannelCV(predictor, transform_method, internal_cv,
score_method, scorer, cv_processes)
class SingleChannel(Cloneable, Saveable):
"""
Add transformer interface to a scikit-learn predictor.
Wrapper class that provides scikit-learn conformant predictors with
transform() and fit_transform() methods.
Parameters
----------
predictor : predictor instance
The scikit-learn conformant estimator/predictor to wrap.
transform_method : str, default='auto'
- Name of the prediction method to call when transforming (e.g. when
outputting meta-features).
- If 'auto' :
- If classifier : method picked using
config.transform_method_precedence order (default:
predict_proba->predict_log_proba->decision_function->predict).
- If regressor : 'predict'
Examples
--------
Model stacking, classification:
::
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.svm import SVC
import pipecaster as pc
Xs, y, _ = pc.make_multi_input_classification(n_informative_Xs=3,
n_random_Xs=7)
clf = pc.MultichannelPipeline(n_channels=10)
base_clf = GradientBoostingClassifier()
base_clf = pc.transform_wrappers.SingleChannel(base_clf)
clf.add_layer(base_clf, pipe_processes='max')
clf.add_layer(pc.MultichannelPredictor(SVC()))
pc.cross_val_score(clf, Xs, y, cv=3)
# output: [0.8529411764705882, 0.8216911764705883, 0.9099264705882353]
Model stacking, regression:
::
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.svm import SVR
import pipecaster as pc
Xs, y, _ = pc.make_multi_input_regression(n_informative_Xs=7,
n_random_Xs=3)
clf = pc.MultichannelPipeline(n_channels=10)
base_clf = GradientBoostingRegressor()
base_clf = pc.transform_wrappers.SingleChannel(base_clf)
clf.add_layer(base_clf, pipe_processes=1)
clf.add_layer(pc.MultichannelPredictor(SVR()))
pc.cross_val_score(clf, Xs, y, cv=3)
# output: [0.077183453, 0.067682880449, 0.07849665]
Notes
-----
This class uses reflection to expose the predictor methods found in the
object that it wraps, so the method attributes in a SingleChannel instance
are not usually identical to the method attributes of the SingleChannel
class.
If a sample_weight parameter is sent to the fit() method but the wrapped
predictor doesn't accept this argument, fit() will be called without the
sample_weight parameter and no warning will be given.
"""
def __init__(self, predictor, transform_method='auto'):
self._params_to_attributes(SingleChannel.__init__, locals())
utils.enforce_fit(predictor)
utils.enforce_predict(predictor)
self._add_predictor_interface(predictor)
self._set_estimator_type(predictor)
def _set_estimator_type(self, predictor):
if hasattr(predictor, '_estimator_type') is True:
self._estimator_type = predictor._estimator_type
def _add_predictor_interface(self, predictor):
for method_name in config.recognized_pred_methods:
if hasattr(predictor, method_name):
prediction_method = functools.partial(self.predict_with_method,
method_name=method_name)
setattr(self, method_name, prediction_method)
def _add_model_interface(self, model, X):
detected_methods = utils.detect_predict_methods(model, X)
for method_name in detected_methods:
prediction_method = functools.partial(self.predict_with_method,
method_name=method_name)
setattr(self, method_name, prediction_method)
def _remove_predictor_interface(self):
for method_name in config.recognized_pred_methods:
if hasattr(self, method_name):
delattr(self, method_name)
def set_transform_method(self, method_name):
self.transform_method = method_name
return self
def get_transform_method(self):
if self.transform_method == 'auto':
method_name = utils.get_transform_method(self)
if method_name is None:
raise NameError('model lacks a recognized method for '
'conversion to transformer')
else:
method_name = self.transform_method
return method_name
def fit(self, X, y=None, **fit_params):
self.model = utils.get_clone(self.predictor)
is_classifier = utils.is_classifier(self.predictor)
if y is None:
try:
self.model.fit(X, **fit_params)
except TypeError:  # wrapped predictor rejects extra fit params
self.model.fit(X)
else:
if is_classifier:
self.classes_, y = np.unique(y, return_inverse=True)
try:
self.model.fit(X, y, **fit_params)
except TypeError:  # wrapped predictor rejects extra fit params
self.model.fit(X, y)
self._set_estimator_type(self.model)
self._remove_predictor_interface()
self._add_model_interface(self.model, X)
return self
def predict_with_method(self, X, method_name):
if hasattr(self, 'model') is False:
raise utils.FitError('prediction attempted before model fitting')
if hasattr(self.model, method_name):
predict_method = getattr(self.model, method_name)
predictions = predict_method(X)
else:
raise NameError('prediction method {} not found in {} attributes'
.format(method_name, self.model))
if utils.is_classifier(self) and method_name == 'predict':
predictions = self.classes_[predictions]
return predictions
def transform(self, X):
if hasattr(self, 'model'):
transformer = getattr(self.model, self.get_transform_method())
X_t = transformer(X)
# convert output array to output matrix:
if len(X_t.shape) == 1:
X_t = X_t.reshape(-1, 1)
# drop redundant prob output from binary classifiers:
elif (len(X_t.shape) == 2 and X_t.shape[1] == 2 and
utils.is_classifier(self.model)):
X_t = X_t[:, 1].reshape(-1, 1)
return X_t
else:
raise utils.FitError('transform called before model fitting')
def fit_transform(self, X, y=None, **fit_params):
self.fit(X, y, **fit_params)
return self.transform(X)
def _more_tags(self):
return {'multichannel': False}
def get_clone(self):
"""
Get a stateful clone.
"""
clone = super().get_clone()
if hasattr(self, 'classes_'):
clone.classes_ = self.classes_.copy()
if hasattr(self, 'model'):
clone.model = utils.get_clone(self.model)
clone._set_estimator_type(self.model)
clone._remove_predictor_interface()
clone._add_predictor_interface(self)
return clone
def get_descriptor(self, verbose=1):
return '{' + utils.get_descriptor(self.predictor, verbose,
self.get_params()) + '}tr'
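The output-shaping convention used by transform() above (1D prediction arrays become single-column matrices; binary classifiers keep only the positive-class probability column) can be sketched in plain numpy. The function name is illustrative, not part of pipecaster's API.

```python
import numpy as np

def shape_meta_features(preds, is_binary_classifier=False):
    """Mimic the output-shaping rules: always return a 2D matrix and
    drop the redundant first column of binary predict_proba output."""
    preds = np.asarray(preds)
    if preds.ndim == 1:
        # convert output array to output matrix
        return preds.reshape(-1, 1)
    if preds.ndim == 2 and preds.shape[1] == 2 and is_binary_classifier:
        # drop redundant prob output from binary classifiers
        return preds[:, 1].reshape(-1, 1)
    return preds

print(shape_meta_features([0.2, 0.8]).shape)                 # (2, 1)
print(shape_meta_features([[0.3, 0.7], [0.9, 0.1]],
                          is_binary_classifier=True).shape)  # (2, 1)
```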
class SingleChannelCV(SingleChannel):
"""
Add transformer interface and internal cross validation training to
scikit-learn predictor.
Wrapper class that provides predictors with transform() and fit_transform()
methods, and internal cross validation training with performance scoring.
Parameters
----------
predictor : predictor instance
The scikit-learn conformant predictor to wrap.
transform_method : str, default='auto'
- Name of the prediction method to call when transforming (e.g. when
outputting meta-features).
- If 'auto' :
- If classifier : method picked using
config.transform_method_precedence order (default:
predict_proba->predict_log_proba->decision_function->predict).
- If regressor : 'predict'
internal_cv : int, None, or callable, default=5
- Function for train/test subdivision of the training data. Used to
estimate performance of base classifiers and ensure they do not
generate predictions from their training samples during
meta-predictor training.
- If int > 1: StratifiedKfold(n_splits=internal_cv) if classifier or
KFold(n_splits=internal_cv) if regressor.
- If {None, 1}: disable internal cv.
- If callable: Assumed to be split generator like scikit-learn KFold.
score_method : str, default='auto'
- Name of prediction method used when scoring predictor performance.
- If 'auto' :
- If classifier : method picked using
config.score_method_precedence order (default:
predict_proba->predict_log_proba->decision_function->predict).
- If regressor : 'predict'
scorer : callable, default='auto'
Callable that computes a figure of merit score for the internal_cv run.
The score is exposed as score_ attribute during fit_transform().
- If 'auto':
- explained_variance_score for regressors with predict()
- roc_auc_score for classifiers with {predict_proba,
predict_log_proba, decision_function}
- balanced_accuracy_score for classifiers with only predict()
- If callable: A scorer with signature: score = scorer(y_true, y_pred).
cv_processes : int or 'max', default=1
- The number of parallel processes to run for internal cross
validation.
- If int : Use up to cv_processes number of processes.
- If 'max' : Use all available CPUs.
Examples
--------
Model stacking:
::
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.svm import SVC
import pipecaster as pc
Xs, y, _ = pc.make_multi_input_classification(n_informative_Xs=3,
n_random_Xs=7)
clf = pc.MultichannelPipeline(n_channels=10)
base_clf = GradientBoostingClassifier()
base_clf = pc.transform_wrappers.SingleChannelCV(base_clf)
clf.add_layer(base_clf, pipe_processes='max')
clf.add_layer(pc.MultichannelPredictor(SVC()))
pc.cross_val_score(clf, Xs, y, cv=3)
# output: [0.9411764705882353, 0.8897058823529411, 0.9963235294117647]
Notes
-----
fit().transform() is not the same as fit_transform() because only the latter
uses internal cv training and inference.
On calls to fit_transform() the model is fit on both the entire training
set and cv splits of the training set. The model fit on the entire dataset
is stored for future inference. The models fit on cv splits are used
to make the outputs of fit_transform() but are not stored for future use.
This class uses reflection to expose the predictor methods found in the
object that it wraps, so the method attributes in a SingleChannelCV
instance are usually not identical to the method attributes of the
SingleChannelCV class.
If a sample_weight parameter is sent to the fit() method but the wrapped
predictor doesn't accept this argument, fit() will be called without the
sample_weight parameter and no warning will be given.
"""
def __init__(self, predictor, transform_method='auto', internal_cv=5,
score_method='auto', scorer='auto', cv_processes=1):
self._params_to_attributes(SingleChannelCV.__init__, locals())
super().__init__(predictor, transform_method)
def fit_transform(self, X, y=None, groups=None, **fit_params):
is_classifier = utils.is_classifier(self.predictor)
if y is not None and is_classifier:
self.classes_, y = np.unique(y, return_inverse=True)
self.fit(X, y, **fit_params)
transform_method = self.get_transform_method()
# if internal cv training is disabled
if (self.internal_cv is None or
(type(self.internal_cv) == int and self.internal_cv < 2)):
y_transform = self.transform(X)
# internal cv training is enabled
else:
split_results = cross_val_predict(self.predictor, X, y,
groups=groups,
predict_method=None,
transform_method=transform_method,
score_method=self.score_method,
cv=self.internal_cv, combine_splits=True,
n_processes=self.cv_processes,
fit_params=fit_params)
y_transform = split_results['transform']['y_pred']
y_score = split_results['score']['y_pred']
is_binary = (True if is_classifier and len(self.classes_) == 2
else False)
score_method = split_results['score']['method']
self.score_ = score_predictions(y, y_score, score_method,
self.scorer, is_classifier,
is_binary)
# convert output array to output matrix:
X_t = y_transform
if len(X_t.shape) == 1:
X_t = X_t.reshape(-1, 1)
# drop the redundant prob output from binary classifiers:
elif (len(X_t.shape) == 2 and X_t.shape[1] == 2 and
utils.is_classifier(self.model)):
X_t = X_t[:, 1].reshape(-1, 1)
return X_t
def _more_tags(self):
return {'multichannel': False}
def get_descriptor(self, verbose=1):
return '{' + utils.get_descriptor(self.predictor, verbose,
self.get_params()) + '}cvtr'
def get_clone(self):
clone = super().get_clone()
if hasattr(self, 'score_'):
clone.score_ = self.score_
return clone
class Multichannel(Cloneable, Saveable):
"""
Add transformer interface to a multichannel predictor.
Wrapper class that provides pipecaster's multichannel predictors with
transform() and fit_transform() methods.
Parameters
----------
multichannel_predictor : multichannel predictor instance
The predictor to wrap.
transform_method : str, default='auto'
- Name of the prediction method to call when transforming (e.g. when
outputting meta-features).
- If 'auto' :
- If classifier : method picked using
config.transform_method_precedence order (default:
predict_proba->predict_log_proba->decision_function->predict).
- If regressor : 'predict'
Examples
--------
model stacking:
::
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.svm import SVC
import pipecaster as pc
Xs, y, _ = pc.make_multi_input_classification(n_informative_Xs=3,
n_random_Xs=7)
clf = pc.MultichannelPipeline(n_channels=10)
base_clf = pc.MultichannelPredictor(GradientBoostingClassifier())
base_clf = pc.transform_wrappers.Multichannel(base_clf)
clf.add_layer(5, base_clf, 5, base_clf, pipe_processes=1)
clf.add_layer(pc.MultichannelPredictor(SVC()))
pc.cross_val_score(clf, Xs, y, cv=3)
# output: [0.9411764705882353, 0.9411764705882353, 0.8768382352941176]
Notes
-----
This class uses reflection to expose the predictor methods found in the
object that it wraps, so the method attributes in a Multichannel instance
are not usually identical to the method attributes of the Multichannel
class.
"""
def __init__(self, multichannel_predictor, transform_method='auto'):
self._params_to_attributes(Multichannel.__init__, locals())
utils.enforce_fit(multichannel_predictor)
utils.enforce_predict(multichannel_predictor)
self._add_predictor_interface(multichannel_predictor)
self._set_estimator_type(multichannel_predictor)
def _set_estimator_type(self, predictor):
if hasattr(predictor, '_estimator_type') is True:
self._estimator_type = predictor._estimator_type
def _add_predictor_interface(self, predictor):
for method_name in config.recognized_pred_methods:
if hasattr(predictor, method_name):
prediction_method = functools.partial(self.predict_with_method,
method_name=method_name)
setattr(self, method_name, prediction_method)
def _add_model_interface(self, model, Xs):
detected_methods = utils.detect_predict_methods(model, Xs)
for method_name in detected_methods:
prediction_method = functools.partial(self.predict_with_method,
method_name=method_name)
setattr(self, method_name, prediction_method)
def _remove_predictor_interface(self):
for method_name in config.recognized_pred_methods:
if hasattr(self, method_name):
delattr(self, method_name)
def get_transform_method(self):
if self.transform_method == 'auto':
if hasattr(self, 'model'):
method_name = utils.get_transform_method(self.model)
else:
method_name = utils.get_transform_method(
self.multichannel_predictor)
if method_name is None:
raise NameError('model lacks a recognized method for '
'conversion to transformer')
else:
method_name = self.transform_method
return method_name
def fit(self, Xs, y=None, **fit_params):
self.model = utils.get_clone(self.multichannel_predictor)
if y is None:
self.model.fit(Xs, **fit_params)
else:
if utils.is_classifier(self.model):
self.classes_, y = np.unique(y, return_inverse=True)
self.model.fit(Xs, y, **fit_params)
self._set_estimator_type(self.model)
self._remove_predictor_interface()
self._add_model_interface(self.model, Xs)
return self
def predict_with_method(self, Xs, method_name):
if hasattr(self, 'model') is False:
raise utils.FitError('prediction attempted before call to fit()')
prediction_method = getattr(self.model, method_name)
predictions = prediction_method(Xs)
if utils.is_classifier(self) and method_name == 'predict':
predictions = self.classes_[predictions]
return predictions
def transform(self, Xs):
if hasattr(self, 'model') is False:
raise utils.FitError('transform attempted before call to fit()')
transformer = getattr(self.model, self.get_transform_method())
predictions = np.array(transformer(Xs))
# convert output array to output matrix:
if len(predictions.shape) == 1:
predictions = predictions.reshape(-1, 1)
# drop the redundant prob output from binary classifiers:
elif (len(predictions.shape) == 2 and predictions.shape[1] == 2
and utils.is_classifier(self.model)):
predictions = predictions[:, 1].reshape(-1, 1)
Xs_t = [predictions if i == 0 else None for i, X in enumerate(Xs)]
return Xs_t
def fit_transform(self, Xs, y=None, **fit_params):
self.fit(Xs, y, **fit_params)
return self.transform(Xs)
def get_descriptor(self, verbose=1):
return '{' + utils.get_descriptor(self.multichannel_predictor, verbose,
self.get_params()) + '}tr'
def get_clone(self):
"""
Get a stateful clone.
"""
clone = super().get_clone()
if hasattr(self, 'classes_'):
clone.classes_ = self.classes_.copy()
if hasattr(self, 'model'):
clone.model = utils.get_clone(self.model)
clone._set_estimator_type(self.model)
clone._remove_predictor_interface()
clone._add_predictor_interface(self)
return clone
class MultichannelCV(Multichannel):
"""
Add transformer interface and internal cross validation training to
multichannel predictor.
Wrapper class that provides pipecaster's multichannel predictors with
transform() and fit_transform() methods, and internal cross validation
training with performance scoring.
Parameters
----------
multichannel_predictor : multichannel_predictor instance
The pipecaster predictor to wrap.
transform_method : str, default='auto'
- Name of the prediction method to call when transforming (e.g. when
outputting meta-features).
- If 'auto' :
- If classifier : method picked using
config.transform_method_precedence order (default:
predict_proba->predict_log_proba->decision_function->predict).
- If regressor : 'predict'
internal_cv : int, None, or callable, default=5
- Function for train/test subdivision of the training data. Used to
estimate performance of base classifiers and ensure they do not
generate predictions from their training samples during
meta-predictor training.
- If int > 1: StratifiedKfold(n_splits=internal_cv) if classifier or
KFold(n_splits=internal_cv) if regressor.
- If {None, 1}: disable internal cv.
- If callable: Assumed to be split generator like scikit-learn KFold.
score_method : str, default='auto'
- Name of prediction method used when scoring predictor performance.
- If 'auto' :
- If classifier : method picked using
config.score_method_precedence order (default:
predict_proba->predict_log_proba->decision_function->predict).
- If regressor : 'predict'
scorer : callable, default='auto'
Callable that computes a figure of merit score for the internal_cv run.
The score is exposed as score_ attribute during fit_transform().
- If 'auto':
- explained_variance_score for regressors with predict()
- roc_auc_score for classifiers with {predict_proba,
predict_log_proba, decision_function}
- balanced_accuracy_score for classifiers with only predict()
- If callable: A scorer with signature: score = scorer(y_true, y_pred).
cv_processes : int or 'max', default=1
- The number of parallel processes to run for internal cross
validation.
- If int : Use up to cv_processes number of processes.
- If 'max' : Use all available CPUs.
Examples
--------
model stacking:
::
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.svm import SVC
import pipecaster as pc
Xs, y, _ = pc.make_multi_input_classification(n_informative_Xs=3,
n_random_Xs=7)
clf = pc.MultichannelPipeline(n_channels=10)
base_clf = pc.MultichannelPredictor(GradientBoostingClassifier())
base_clf = pc.transform_wrappers.MultichannelCV(base_clf)
clf.add_layer(5, base_clf, 5, base_clf, pipe_processes=1)
clf.add_layer(pc.MultichannelPredictor(SVC()))
pc.cross_val_score(clf, Xs, y, cv=3)
# output: [0.8823529411764706, 0.9393382352941176, 0.9080882352941176]
Notes
-----
fit().transform() is not the same as fit_transform() because only the latter
uses internal cv training and inference.
On calls to fit_transform() the model is fit on both the entire training
set and cv splits of the training set. The model fit on the entire dataset
is stored for future inferences. The models fit on cv splits are used
to make the outputs of fit_transform() but are not stored for future use.
This class uses reflection to expose the predictor methods found in the
object that it wraps, so the method attributes in a MultichannelCV
instance are usually not identical to the method attributes of the
MultichannelCV class.
If a sample_weight parameter is sent to the fit() method but the wrapped
predictor doesn't accept this argument, fit() will be called without the
sample_weight parameter and no warning will be given.
"""
def __init__(self, multichannel_predictor, transform_method='auto',
internal_cv=5, score_method='auto',
scorer='auto', cv_processes=1):
self._params_to_attributes(MultichannelCV.__init__, locals())
super().__init__(multichannel_predictor, transform_method)
def fit_transform(self, Xs, y=None, groups=None, **fit_params):
is_classifier = utils.is_classifier(self.multichannel_predictor)
if y is not None and is_classifier:
self.classes_, y = np.unique(y, return_inverse=True)
self.fit(Xs, y, **fit_params)
transform_method = self.get_transform_method()
# if internal cv training is disabled
if (self.internal_cv is None or
(type(self.internal_cv) == int and self.internal_cv < 2)):
y_transform = self.transform(Xs)
# internal cv training is enabled
else:
split_results = cross_val_predict(self.multichannel_predictor,
Xs, y,
groups=groups,
predict_method=None,
transform_method=transform_method,
score_method=self.score_method,
cv=self.internal_cv, combine_splits=True,
n_processes=self.cv_processes,
fit_params=fit_params)
y_transform = split_results['transform']['y_pred']
y_score = split_results['score']['y_pred']
is_binary = (True if is_classifier and len(self.classes_) == 2
else False)
score_method = split_results['score']['method']
self.score_ = score_predictions(y, y_score, score_method,
self.scorer, is_classifier,
is_binary)
# convert predictions to transformed matrix:
X_t = y_transform
if len(X_t.shape) == 1:
X_t = X_t.reshape(-1, 1)
# drop the redundant prob output from binary classifiers:
elif (len(X_t.shape) == 2 and X_t.shape[1] == 2 and
utils.is_classifier(self.model)):
X_t = X_t[:, 1].reshape(-1, 1)
Xs_t = [None for X in Xs]
Xs_t[0] = X_t
return Xs_t
def get_descriptor(self, verbose=1):
return '{' + utils.get_descriptor(self.multichannel_predictor, verbose,
self.get_params()) + '}cvtr'
def get_clone(self):
"""
Get a stateful clone.
"""
clone = super().get_clone()
if hasattr(self, 'score_'):
clone.score_ = self.score_
return clone
def unwrap_predictor(pipe):
"""
Return a predictor that is wrapped in a transform wrapper.
"""
if type(pipe) not in [SingleChannel, SingleChannelCV, Multichannel,
MultichannelCV]:
return pipe
if type(pipe) in [Multichannel, MultichannelCV]:
return pipe.multichannel_predictor
else:
return pipe.predictor
def unwrap_model(pipe):
"""
Return a model that is wrapped in a transform wrapper.
"""
if type(pipe) not in [SingleChannel, SingleChannelCV, Multichannel,
MultichannelCV]:
return pipe
if hasattr(pipe, 'model') is True:
return pipe.model
else:
raise utils.FitError('no model found')
| 41.975148 | 79 | 0.636838 | 4,144 | 35,469 | 5.260859 | 0.084701 | 0.030962 | 0.007064 | 0.011009 | 0.848034 | 0.826109 | 0.812532 | 0.758773 | 0.739049 | 0.724829 | 0 | 0.017053 | 0.285771 | 35,469 | 844 | 80 | 42.024882 | 0.843524 | 0.505681 | 0 | 0.656347 | 0 | 0 | 0.043301 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.120743 | false | 0 | 0.021672 | 0.018576 | 0.25387 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c39d3be99a3b458c0495ba917aa8fa441db3b6ea | 317 | py | Python | src/local/config.py | RLogik/phpytex | 4e422a07ec23b4ade5263db499318b3e2c75f1f9 | [
"MIT"
] | null | null | null | src/local/config.py | RLogik/phpytex | 4e422a07ec23b4ade5263db499318b3e2c75f1f9 | [
"MIT"
] | 8 | 2021-08-24T12:27:02.000Z | 2021-10-14T07:50:12.000Z | src/local/config.py | RLogik/phpytex | 4e422a07ec23b4ade5263db499318b3e2c75f1f9 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# EXPORTS
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
import json
from yaml import add_constructor
from yaml import load
from yaml import Loader
from yaml import FullLoader
| 24.384615 | 66 | 0.422713 | 27 | 317 | 4.925926 | 0.62963 | 0.240602 | 0.421053 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007042 | 0.104101 | 317 | 12 | 67 | 26.416667 | 0.461268 | 0.570978 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
# ------------------------------------------------------------------
# test/smoke_test.py  (hartb/vision, BSD-3-Clause license)
# ------------------------------------------------------------------
import torch
import torchvision
import torchvision.datasets as dset
import torchvision.transforms
# ------------------------------------------------------------------
# cctbx/covariance/__init__.py  (rimmartin/cctbx_project, BSD-3-Clause-LBNL license)
# ------------------------------------------------------------------
from __future__ import division
import boost.python
boost.python.import_ext("cctbx_covariance_ext")
from cctbx_covariance_ext import *
# ------------------------------------------------------------------
# test/test_package_downloader.py  (jayvdb/pypi_librarian, MIT license)
# ------------------------------------------------------------------
# coding=utf-8
"""
"""
from pypi_librarian.pip_endpoints import download
from pypi_librarian.qypi_endpoints import files
def test_download_em():
print(files("jiggle_version"))
# download("jiggle_version", "tmp")
# ------------------------------------------------------------------
# tests/test_scripts.py  (jas14/khmer, BSD-3-Clause license)
# ------------------------------------------------------------------
from __future__ import print_function
from __future__ import absolute_import
from __future__ import unicode_literals
#
# This file is part of khmer, https://github.com/dib-lab/khmer/, and is
# Copyright (C) Michigan State University, 2009-2015. It is licensed under
# the three-clause BSD license; see LICENSE.
# Contact: khmer-project@idyll.org
#
# pylint: disable=C0111,C0103,E1103,W0612
import json
import sys
import os
import stat
import shutil
from io import StringIO
import traceback
from nose.plugins.attrib import attr
import subprocess
import threading
import bz2
import io
from . import khmer_tst_utils as utils
import khmer
import khmer.kfile
import screed
def teardown():
utils.cleanup()
def test_check_space():
# @CTB this probably belongs in a new test file, along with other
# tests of the file.py module.
khmer.kfile.check_space(
['', utils.get_test_data('test-abund-read-2.fa')], False)
def test_load_into_counting():
script = 'load-into-counting.py'
args = ['-x', '1e3', '-N', '2', '-k', '20', '-t']
outfile = utils.get_temp_filename('out.ct')
infile = utils.get_test_data('test-abund-read-2.fa')
args.extend([outfile, infile])
(status, out, err) = utils.runscript(script, args)
assert 'Total number of unique k-mers: 83' in err, err
assert os.path.exists(outfile)
def test_load_into_counting_max_memory_usage_parameter():
script = 'load-into-counting.py'
args = ['-M', '2e3', '-k', '20', '-t']
outfile = utils.get_temp_filename('out.ct')
infile = utils.get_test_data('test-abund-read-2.fa')
args.extend([outfile, infile])
(status, out, err) = utils.runscript(script, args)
assert os.path.exists(outfile)
kh = khmer.load_counting_hash(outfile)
assert sum(kh.hashsizes()) < 3e8
def test_load_into_counting_abundance_dist_nobig():
script = 'load-into-counting.py'
args = ['-x', '1e3', '-N', '2', '-k', '20', '-t', '-b']
outfile = utils.get_temp_filename('out.ct')
infile = utils.get_test_data('test-abund-read-2.fa')
args.extend([outfile, infile])
(status, out, err) = utils.runscript(script, args)
assert 'Total number of unique k-mers: 83' in err, err
assert os.path.exists(outfile)
htfile = outfile
outfile = utils.get_temp_filename('out')
script2 = 'abundance-dist.py'
args = ['-z', htfile, infile, outfile]
(status, out, err) = utils.runscript(script2, args)
assert 'WARNING: The loaded graph has bigcount' in err, err
assert 'bigcount' in err, err
def test_load_into_counting_nonwritable():
script = 'load-into-counting.py'
args = ['-x', '1e3', '-N', '2', '-k', '20', '-t']
outfile = utils.get_temp_filename('test-nonwritable')
with open(outfile, 'w') as fout:
fout.write("This file is non-writable (after this)")
os.chmod(outfile, stat.S_IWOTH | stat.S_IRUSR)
infile = utils.get_test_data('test-abund-read-2.fa')
args.extend([outfile, infile])
(status, out, err) = utils.runscript(script, args, fail_ok=True)
assert 'does not have write permission; exiting' in err, err
assert status == 1, status
@attr('huge')
def test_load_into_counting_toobig():
script = 'load-into-counting.py'
args = ['-x', '1e12', '-N', '2', '-k', '20', '-t', '--force']
outfile = utils.get_temp_filename('out.kh')
infile = utils.get_test_data('test-abund-read-2.fa')
args.extend([outfile, infile])
(status, out, err) = utils.runscript(script, args, fail_ok=True)
assert status == -1, status
assert "MemoryError" in err, err
def test_load_into_counting_fail():
script = 'load-into-counting.py'
args = ['-x', '1e2', '-N', '2', '-k', '20'] # use small HT
outfile = utils.get_temp_filename('out.ct')
infile = utils.get_test_data('test-abund-read-2.fa')
args.extend([outfile, infile])
(status, out, err) = utils.runscript(script, args, fail_ok=True)
assert status == 1, status
print(err)
assert "** ERROR: the graph structure is too small" in err
def test_load_into_counting_multifile():
script = 'load-into-counting.py'
args = ['-x', '1e7', '-N', '2', '-k', '20', '-t']
outfile = utils.get_temp_filename('out.kh')
infile = utils.get_test_data('test-abund-read-2.fa')
args.extend([outfile, infile, infile, infile, infile, infile,
infile, infile, infile, infile, infile, infile])
(status, out, err) = utils.runscript(script, args)
assert 'Total number of unique k-mers: 95' in err, err
assert os.path.exists(outfile)
def test_load_into_counting_tsv():
script = 'load-into-counting.py'
args = ['-x', '1e7', '-N', '2', '-k', '20', '-t', '-s', 'tsv']
outfile = utils.get_temp_filename('out.ct')
tabfile = outfile + '.info.tsv'
infile = utils.get_test_data('test-abund-read-2.fa')
args.extend([outfile, infile])
(status, out, err) = utils.runscript(script, args)
assert 'Total number of unique k-mers: 95' in err, err
assert os.path.exists(outfile)
assert os.path.exists(tabfile)
with open(tabfile) as tabfh:
tabfile_lines = tabfh.readlines()
assert len(tabfile_lines) == 2
outbase = os.path.basename(outfile)
tsv = [outbase, '0.000', '95', '1001', infile]
expected_tsv_line = '\t'.join(tsv) + '\n'
assert tabfile_lines[1] == expected_tsv_line, tabfile_lines
def test_load_into_counting_json():
script = 'load-into-counting.py'
args = ['-x', '1e7', '-N', '2', '-k', '20', '-t', '-s', 'json']
outfile = utils.get_temp_filename('out.ct')
jsonfile = outfile + '.info.json'
infile = utils.get_test_data('test-abund-read-2.fa')
args.extend([outfile, infile])
(status, out, err) = utils.runscript(script, args)
assert 'Total number of unique k-mers: 95' in err, err
assert os.path.exists(outfile)
assert os.path.exists(jsonfile)
with open(jsonfile) as jsonfh:
got_json = json.load(jsonfh)
outbase = os.path.basename(outfile)
expected_json = {
u"files": [infile],
u"ht_name": outbase,
u"num_kmers": 95,
u"num_reads": 1001,
u"fpr": 9.025048735197377e-11,
u"mrinfo_version": "0.2.0",
}
assert got_json == expected_json, got_json
def test_load_into_counting_bad_summary_fmt():
script = 'load-into-counting.py'
args = ['-x', '1e7', '-N', '2', '-k', '20', '-s', 'badfmt']
outfile = utils.get_temp_filename('out.ct')
infile = utils.get_test_data('test-abund-read-2.fa')
args.extend([outfile, infile])
(status, out, err) = utils.runscript(script, args, fail_ok=True)
assert status != 0, status
assert "invalid choice: 'badfmt'" in err, err
def _make_counting(infilename, SIZE=1e7, N=2, K=20, BIGCOUNT=True):
script = 'load-into-counting.py'
args = ['-x', str(SIZE), '-N', str(N), '-k', str(K)]
if not BIGCOUNT:
args.append('-b')
outfile = utils.get_temp_filename('out.ct')
args.extend([outfile, infilename])
utils.runscript(script, args)
assert os.path.exists(outfile)
return outfile
def test_filter_abund_1():
script = 'filter-abund.py'
infile = utils.get_temp_filename('test.fa')
n_infile = utils.get_temp_filename('test-fastq-n-reads.fq')
in_dir = os.path.dirname(infile)
n_in_dir = os.path.dirname(n_infile)
shutil.copyfile(utils.get_test_data('test-abund-read-2.fa'), infile)
shutil.copyfile(utils.get_test_data('test-fastq-n-reads.fq'), n_infile)
counting_ht = _make_counting(infile, K=17)
n_counting_ht = _make_counting(n_infile, K=17)
args = [counting_ht, infile]
utils.runscript(script, args, in_dir)
outfile = infile + '.abundfilt'
n_outfile = n_infile + '.abundfilt'
n_outfile2 = n_infile + '2.abundfilt'
assert os.path.exists(outfile), outfile
seqs = set([r.sequence for r in screed.open(outfile)])
assert len(seqs) == 1, seqs
assert 'GGTTGACGGGGCTCAGGG' in seqs
args = [n_counting_ht, n_infile]
utils.runscript(script, args, n_in_dir)
seqs = set([r.sequence for r in screed.open(n_infile)])
assert os.path.exists(n_outfile), n_outfile
args = [n_counting_ht, n_infile, '-o', n_outfile2]
utils.runscript(script, args, in_dir)
assert os.path.exists(n_outfile2), n_outfile2
def test_filter_abund_2():
infile = utils.get_temp_filename('test.fa')
in_dir = os.path.dirname(infile)
shutil.copyfile(utils.get_test_data('test-abund-read-2.fa'), infile)
counting_ht = _make_counting(infile, K=17)
script = 'filter-abund.py'
args = ['-C', '1', counting_ht, infile, infile]
utils.runscript(script, args, in_dir)
outfile = infile + '.abundfilt'
assert os.path.exists(outfile), outfile
seqs = set([r.sequence for r in screed.open(outfile)])
assert len(seqs) == 2, seqs
assert 'GGTTGACGGGGCTCAGGG' in seqs
# make sure that FASTQ records are retained.
def test_filter_abund_3_fq_retained():
infile = utils.get_temp_filename('test.fq')
in_dir = os.path.dirname(infile)
shutil.copyfile(utils.get_test_data('test-abund-read-2.fq'), infile)
counting_ht = _make_counting(infile, K=17)
script = 'filter-abund.py'
args = ['-C', '1', counting_ht, infile, infile]
utils.runscript(script, args, in_dir)
outfile = infile + '.abundfilt'
assert os.path.exists(outfile), outfile
seqs = set([r.sequence for r in screed.open(outfile)])
assert len(seqs) == 2, seqs
assert 'GGTTGACGGGGCTCAGGG' in seqs
# check for 'quality' string.
quals = set([r.quality for r in screed.open(outfile)])
assert len(quals) == 2, quals
assert '##################' in quals
# make sure that FASTQ names are properly parsed, both formats.
def test_filter_abund_4_fq_casava_18():
infile = utils.get_temp_filename('test.fq')
in_dir = os.path.dirname(infile)
shutil.copyfile(utils.get_test_data('test-abund-read-2.paired2.fq'),
infile)
counting_ht = _make_counting(infile, K=17)
script = 'filter-abund.py'
args = [counting_ht, infile, infile]
utils.runscript(script, args, in_dir)
outfile = infile + '.abundfilt'
assert os.path.exists(outfile), outfile
seqs = set([r.name for r in screed.open(outfile, parse_description=False)])
assert 'pair:foo 1::N' in seqs, seqs
def test_filter_abund_1_singlefile():
infile = utils.get_temp_filename('test.fa')
in_dir = os.path.dirname(infile)
shutil.copyfile(utils.get_test_data('test-abund-read-2.fa'), infile)
script = 'filter-abund-single.py'
args = ['-x', '1e7', '-N', '2', '-k', '17', '-t', infile]
(status, out, err) = utils.runscript(script, args, in_dir)
assert 'Total number of unique k-mers: 98' in err, err
outfile = infile + '.abundfilt'
assert os.path.exists(outfile), outfile
seqs = set([r.sequence for r in screed.open(outfile)])
assert len(seqs) == 1, seqs
assert 'GGTTGACGGGGCTCAGGG' in seqs
def test_filter_abund_2_singlefile():
infile = utils.get_temp_filename('test.fa')
in_dir = os.path.dirname(infile)
tabfile = utils.get_temp_filename('test-savetable.ct')
shutil.copyfile(utils.get_test_data('test-abund-read-2.fa'), infile)
script = 'filter-abund-single.py'
args = ['-x', '1e7', '-N', '2', '-k', '17', '-t', '--savetable',
tabfile, infile]
(status, out, err) = utils.runscript(script, args, in_dir)
assert 'Total number of unique k-mers: 98' in err, err
outfile = infile + '.abundfilt'
assert os.path.exists(outfile), outfile
seqs = set([r.sequence for r in screed.open(outfile)])
assert len(seqs) == 1, seqs
assert 'GGTTGACGGGGCTCAGGG' in seqs
def test_filter_abund_2_singlefile_fq_casava_18():
infile = utils.get_temp_filename('test.fa')
in_dir = os.path.dirname(infile)
shutil.copyfile(utils.get_test_data('test-abund-read-2.paired2.fq'),
infile)
script = 'filter-abund-single.py'
args = ['-x', '1e7', '-N', '2', '-k', '17', infile]
(status, out, err) = utils.runscript(script, args, in_dir)
outfile = infile + '.abundfilt'
assert os.path.exists(outfile), outfile
seqs = set([r.name for r in screed.open(outfile, parse_description=False)])
assert 'pair:foo 1::N' in seqs, seqs
def test_filter_abund_4_retain_low_abund():
# test that the -V option does not trim sequences that are low abundance
infile = utils.get_temp_filename('test.fa')
in_dir = os.path.dirname(infile)
shutil.copyfile(utils.get_test_data('test-abund-read-2.fa'), infile)
counting_ht = _make_counting(infile, K=17)
script = 'filter-abund.py'
args = ['-V', counting_ht, infile]
utils.runscript(script, args, in_dir)
outfile = infile + '.abundfilt'
assert os.path.exists(outfile), outfile
seqs = set([r.sequence for r in screed.open(outfile)])
assert len(seqs) == 2, seqs
assert 'GGTTGACGGGGCTCAGGG' in seqs
def test_filter_abund_5_trim_high_abund():
# test that the -V option *does* trim sequences that are high abundance
infile = utils.get_temp_filename('test.fa')
in_dir = os.path.dirname(infile)
shutil.copyfile(utils.get_test_data('test-abund-read-3.fa'), infile)
counting_ht = _make_counting(infile, K=17)
script = 'filter-abund.py'
args = ['-V', counting_ht, infile]
utils.runscript(script, args, in_dir)
outfile = infile + '.abundfilt'
assert os.path.exists(outfile), outfile
seqs = set([r.sequence for r in screed.open(outfile)])
assert len(seqs) == 2, seqs
# trimmed sequence @ error
assert 'GGTTGACGGGGCTCAGGGGGCGGCTGACTCCGAGAGACAGC' in seqs
def test_filter_abund_6_trim_high_abund_Z():
# test that -V/-Z settings interact properly -
# trimming should not happen if -Z is set high enough.
infile = utils.get_temp_filename('test.fa')
in_dir = os.path.dirname(infile)
shutil.copyfile(utils.get_test_data('test-abund-read-3.fa'), infile)
counting_ht = _make_counting(infile, K=17)
script = 'filter-abund.py'
args = ['-V', '-Z', '25', counting_ht, infile]
utils.runscript(script, args, in_dir)
outfile = infile + '.abundfilt'
assert os.path.exists(outfile), outfile
seqs = set([r.sequence for r in screed.open(outfile)])
assert len(seqs) == 2, seqs
# untrimmed seq.
badseq = 'GGTTGACGGGGCTCAGGGGGCGGCTGACTCCGAGAGACAGCgtgCCGCAGCTGTCGTCAGGG' \
'GATTTCCGGGCGG'
assert badseq in seqs # should be there, untrimmed
def test_filter_abund_7_retain_Ns():
# check that filter-abund retains sequences with Ns, and treats them as As.
infile = utils.get_temp_filename('test.fq')
in_dir = os.path.dirname(infile)
# copy test file over to test.fq & load into counting table
shutil.copyfile(utils.get_test_data('test-filter-abund-Ns.fq'), infile)
counting_ht = _make_counting(infile, K=17)
script = 'filter-abund.py'
args = ['-C', '3', counting_ht, infile]
utils.runscript(script, args, in_dir)
outfile = infile + '.abundfilt'
assert os.path.exists(outfile), outfile
# test for a sequence with an 'N' in it --
names = set([r.name for r in screed.open(outfile, parse_description=0)])
assert '895:1:37:17593:9954 1::FOO_withN' in names, names
# check to see if that 'N' was properly changed to an 'A'
seqs = set([r.sequence for r in screed.open(outfile)])
assert 'GGTTGACGGGGCTCAGGGGGCGGCTGACTCCGAG' not in seqs, seqs
# ...and that an 'N' remains in the output sequences
found_N = False
for s in seqs:
if 'N' in s:
found_N = True
assert found_N, seqs
def test_filter_abund_single_8_retain_Ns():
# check that filter-abund-single retains
# sequences with Ns, and treats them as As.
infile = utils.get_temp_filename('test.fq')
in_dir = os.path.dirname(infile)
# copy test file over to test.fq & load into counting table
shutil.copyfile(utils.get_test_data('test-filter-abund-Ns.fq'), infile)
script = 'filter-abund-single.py'
args = ['-k', '17', '-x', '1e7', '-N', '2', '-C', '3', infile]
utils.runscript(script, args, in_dir)
outfile = infile + '.abundfilt'
assert os.path.exists(outfile), outfile
# test for a sequence with an 'N' in it --
names = set([r.name for r in screed.open(outfile, parse_description=0)])
assert '895:1:37:17593:9954 1::FOO_withN' in names, names
# check to see if that 'N' was properly changed to an 'A'
seqs = set([r.sequence for r in screed.open(outfile)])
assert 'GGTTGACGGGGCTCAGGGGGCGGCTGACTCCGAG' not in seqs, seqs
# ...and that an 'N' remains in the output sequences
found_N = False
for s in seqs:
if 'N' in s:
found_N = True
assert found_N, seqs
def test_filter_stoptags():
infile = utils.get_temp_filename('test.fa')
in_dir = os.path.dirname(infile)
stopfile = utils.get_temp_filename('stoptags', in_dir)
# first, copy test-abund-read-2.fa to 'test.fa' in the temp dir.
shutil.copyfile(utils.get_test_data('test-abund-read-2.fa'), infile)
# now, create a file with some stop tags in it --
K = 18
kh = khmer._Hashbits(K, [1])
kh.add_stop_tag('GTTGACGGGGCTCAGGGG')
kh.save_stop_tags(stopfile)
del kh
# finally, run filter-stoptags.
script = 'filter-stoptags.py'
args = ['-k', str(K), stopfile, infile, infile]
utils.runscript(script, args, in_dir)
# verify that the basic output file exists
outfile = infile + '.stopfilt'
assert os.path.exists(outfile), outfile
# it should contain only one unique sequence, because we've trimmed
# off everything after the beginning of the only long sequence in there.
seqs = set([r.sequence for r in screed.open(outfile)])
assert len(seqs) == 1, seqs
assert 'GGTTGACGGGGCTCAGGG' in seqs, seqs
def test_filter_stoptags_fq():
infile = utils.get_temp_filename('test.fa')
in_dir = os.path.dirname(infile)
stopfile = utils.get_temp_filename('stoptags', in_dir)
# first, copy test-abund-read-2.fa to 'test.fa' in the temp dir.
shutil.copyfile(utils.get_test_data('test-abund-read-2.fq'), infile)
# now, create a file with some stop tags in it --
K = 18
kh = khmer._Hashbits(K, [1])
kh.add_stop_tag('GTTGACGGGGCTCAGGGG')
kh.save_stop_tags(stopfile)
del kh
# finally, run filter-stoptags.
script = 'filter-stoptags.py'
args = ['-k', str(K), stopfile, infile, infile]
utils.runscript(script, args, in_dir)
# verify that the basic output file exists
outfile = infile + '.stopfilt'
assert os.path.exists(outfile), outfile
# it should contain only one unique sequence, because we've trimmed
# off everything after the beginning of the only long sequence in there.
seqs = set([r.sequence for r in screed.open(outfile)])
assert len(seqs) == 1, seqs
assert 'GGTTGACGGGGCTCAGGG' in seqs, seqs
# make sure that record names are carried through unparsed
names = [r.name for r in screed.open(outfile, parse_description=False)]
names = set(names)
assert 'seq 1::BAR' in names
def test_count_median():
infile = utils.get_temp_filename('test.fa')
outfile = infile + '.counts'
shutil.copyfile(utils.get_test_data('test-abund-read-2.fa'), infile)
counting_ht = _make_counting(infile, K=8)
script = 'count-median.py'
args = [counting_ht, infile, outfile]
utils.runscript(script, args)
assert os.path.exists(outfile), outfile
data = [x.strip() for x in open(outfile)]
data = set(data)
assert len(data) == 2, data
assert 'seq 1001 1001.0 0.0 18' in data
assert '895:1:37:17593:9954/1 1 103.803741455 303.702941895 114' in data
def test_count_median_fq():
infile = utils.get_temp_filename('test.fa')
outfile = infile + '.counts'
shutil.copyfile(utils.get_test_data('test-abund-read-2.fq'), infile)
counting_ht = _make_counting(infile, K=8)
script = 'count-median.py'
args = [counting_ht, infile, outfile]
utils.runscript(script, args)
assert os.path.exists(outfile), outfile
data = [x.strip() for x in open(outfile)]
data = set(data)
assert len(data) == 2, data
assert 'seq 1001 1001.0 0.0 18' in data
assert '895:1:37:17593:9954 1 103.803741455 303.702941895 114' in data
def test_count_median_fq_csv():
infile = utils.get_temp_filename('test.fa')
outfile = infile + '.counts'
shutil.copyfile(utils.get_test_data('test-abund-read-2.fq'), infile)
counting_ht = _make_counting(infile, K=8)
script = 'count-median.py'
args = ['--csv', counting_ht, infile, outfile]
utils.runscript(script, args)
assert os.path.exists(outfile), outfile
data = [x.strip() for x in open(outfile)]
data = set(data)
assert len(data) == 4, data
assert 'name,median,average,stddev,seqlen' in data
assert 'seq,1001,1001.0,0.0,18' in data
# verify that sequence names remain unparsed with '--csv'
names = set([line.split(',')[0] for line in data])
assert '895:1:37:17593:9954 1::FOO' in names, names
def test_load_graph():
script = 'load-graph.py'
args = ['-x', '1e7', '-N', '2', '-k', '20']
outfile = utils.get_temp_filename('out')
infile = utils.get_test_data('random-20-a.fa')
args.extend([outfile, infile])
(status, out, err) = utils.runscript(script, args)
assert 'Total number of unique k-mers: 3960' in err, err
ht_file = outfile + '.pt'
assert os.path.exists(ht_file), ht_file
tagset_file = outfile + '.tagset'
assert os.path.exists(tagset_file), tagset_file
try:
ht = khmer.load_hashbits(ht_file)
except IOError as err:
assert 0, str(err)
ht.load_tagset(tagset_file)
# check to make sure we get the expected result for this data set
# upon partitioning (all in one partition). This is kind of a
# roundabout way of checking that load-graph worked :)
subset = ht.do_subset_partition(0, 0)
x = ht.subset_count_partitions(subset)
assert x == (1, 0), x
def test_oxli_build_graph():
script = 'oxli'
args = ['build-graph', '-x', '1e7', '-N', '2', '-k', '20']
outfile = utils.get_temp_filename('out')
infile = utils.get_test_data('random-20-a.fa')
args.extend([outfile, infile])
(status, out, err) = utils.runscript(script, args)
assert 'Total number of unique k-mers: 3960' in err, err
ht_file = outfile + '.pt'
assert os.path.exists(ht_file), ht_file
tagset_file = outfile + '.tagset'
assert os.path.exists(tagset_file), tagset_file
ht = khmer.load_hashbits(ht_file)
ht.load_tagset(tagset_file)
# check to make sure we get the expected result for this data set
# upon partitioning (all in one partition). This is kind of a
# roundabout way of checking that load-graph worked :)
subset = ht.do_subset_partition(0, 0)
x = ht.subset_count_partitions(subset)
assert x == (1, 0), x
def test_load_graph_no_tags():
script = 'load-graph.py'
args = ['-x', '1e7', '-N', '2', '-k', '20', '-n']
outfile = utils.get_temp_filename('out')
infile = utils.get_test_data('random-20-a.fa')
args.extend([outfile, infile])
utils.runscript(script, args)
ht_file = outfile + '.pt'
assert os.path.exists(ht_file), ht_file
tagset_file = outfile + '.tagset'
assert not os.path.exists(tagset_file), tagset_file
assert khmer.load_hashbits(ht_file)
# can't think of a good way to make sure this worked, beyond just
# loading the ht file...
def test_oxli_build_graph_no_tags():
script = 'oxli'
args = ['build-graph', '-x', '1e7', '-N', '2', '-k', '20', '-n']
outfile = utils.get_temp_filename('out')
infile = utils.get_test_data('random-20-a.fa')
args.extend([outfile, infile])
utils.runscript(script, args)
ht_file = outfile + '.pt'
assert os.path.exists(ht_file), ht_file
tagset_file = outfile + '.tagset'
assert not os.path.exists(tagset_file), tagset_file
assert khmer.load_hashbits(ht_file)
# can't think of a good way to make sure this worked, beyond just
# loading the ht file...
def test_load_graph_fail():
script = 'load-graph.py'
args = ['-x', '1e3', '-N', '2', '-k', '20'] # use small HT
outfile = utils.get_temp_filename('out')
infile = utils.get_test_data('random-20-a.fa')
args.extend([outfile, infile])
(status, out, err) = utils.runscript(script, args, fail_ok=True)
assert status == 1, status
assert "** ERROR: the graph structure is too small" in err
def test_oxli_build_graph_fail():
script = 'oxli'
args = ['build-graph', '-x', '1e3', '-N', '2', '-k', '20'] # use small HT
outfile = utils.get_temp_filename('out')
infile = utils.get_test_data('random-20-a.fa')
args.extend([outfile, infile])
(status, out, err) = utils.runscript(script, args, fail_ok=True)
assert status == 1, status
assert "** ERROR: the graph structure is too small" in err
def test_load_graph_write_fp():
script = 'load-graph.py'
args = ['-x', '1e5', '-N', '2', '-k', '20'] # use small HT
outfile = utils.get_temp_filename('out')
infile = utils.get_test_data('random-20-a.fa')
args.extend([outfile, infile])
(status, out, err) = utils.runscript(script, args)
ht_file = outfile + '.pt'
assert os.path.exists(ht_file), ht_file
info_file = outfile + '.info'
assert os.path.exists(info_file), info_file
data = [x.strip() for x in open(info_file)]
data = set(data)
assert '3959 unique k-mers' in data, data
assert 'false positive rate estimated to be 0.002' in data
def test_oxli_build_graph_write_fp():
script = 'oxli'
# use small HT
args = ['build-graph', '-x', '1e5', '-N', '2', '-k', '20']
outfile = utils.get_temp_filename('out')
infile = utils.get_test_data('random-20-a.fa')
args.extend([outfile, infile])
(status, out, err) = utils.runscript(script, args)
ht_file = outfile + '.pt'
assert os.path.exists(ht_file), ht_file
info_file = outfile + '.info'
assert os.path.exists(info_file), info_file
data = [x.strip() for x in open(info_file)]
data = set(data)
assert '3959 unique k-mers' in data
assert 'false positive rate estimated to be 0.002' in data
def test_load_graph_multithread():
script = 'load-graph.py'
outfile = utils.get_temp_filename('test')
infile = utils.get_test_data('test-reads.fa')
args = ['-N', '4', '-x', '1e7', '-T', '8', outfile, infile]
(status, out, err) = utils.runscript(script, args)
def test_oxli_build_graph_multithread():
script = 'oxli'
outfile = utils.get_temp_filename('test')
infile = utils.get_test_data('test-reads.fa')
args = ['build-graph', '-N', '4', '-x', '1e7', '-T', '8', outfile, infile]
(status, out, err) = utils.runscript(script, args)
def test_load_graph_max_memory_usage_parameter():
script = 'load-graph.py'
args = ['-M', '2e7', '-k', '20', '-n']
outfile = utils.get_temp_filename('out')
infile = utils.get_test_data('random-20-a.fa')
args.extend([outfile, infile])
(status, out, err) = utils.runscript(script, args)
assert 'Total number of unique k-mers: 3960' in err, err
ht_file = outfile + '.pt'
assert os.path.exists(ht_file), ht_file
try:
ht = khmer.load_hashbits(ht_file)
except IOError as err:
assert 0, str(err)
assert (sum(ht.hashsizes()) / 8.) < 2e7, ht.hashsizes()
def _make_graph(infilename, min_hashsize=1e7, n_hashes=2, ksize=20,
do_partition=False,
annotate_partitions=False,
stop_big_traverse=False):
script = 'load-graph.py'
args = ['-x', str(min_hashsize), '-N', str(n_hashes), '-k', str(ksize)]
outfile = utils.get_temp_filename('out')
infile = infilename
args.extend([outfile, infile])
utils.runscript(script, args)
ht_file = outfile + '.pt'
assert os.path.exists(ht_file), ht_file
tagset_file = outfile + '.tagset'
assert os.path.exists(tagset_file), tagset_file
if do_partition:
script = 'partition-graph.py'
args = [outfile]
if stop_big_traverse:
args.insert(0, '--no-big-traverse')
utils.runscript(script, args)
script = 'merge-partitions.py'
args = [outfile, '-k', str(ksize)]
utils.runscript(script, args)
final_pmap_file = outfile + '.pmap.merged'
assert os.path.exists(final_pmap_file)
if annotate_partitions:
script = 'annotate-partitions.py'
args = ["-k", str(ksize), outfile, infilename]
in_dir = os.path.dirname(outfile)
utils.runscript(script, args, in_dir)
baseinfile = os.path.basename(infilename)
assert os.path.exists(os.path.join(in_dir, baseinfile + '.part'))
return outfile
def _DEBUG_make_graph(infilename, min_hashsize=1e7, n_hashes=2, ksize=20,
do_partition=False,
annotate_partitions=False,
stop_big_traverse=False):
script = 'load-graph.py'
args = ['-x', str(min_hashsize), '-N', str(n_hashes), '-k', str(ksize)]
outfile = utils.get_temp_filename('out')
infile = utils.get_test_data(infilename)
args.extend([outfile, infile])
utils.runscript(script, args)
ht_file = outfile + '.ct'
assert os.path.exists(ht_file), ht_file
tagset_file = outfile + '.tagset'
assert os.path.exists(tagset_file), tagset_file
if do_partition:
print(">>>> DEBUG: Partitioning <<<")
script = 'partition-graph.py'
args = [outfile]
if stop_big_traverse:
args.insert(0, '--no-big-traverse')
utils.runscript(script, args)
print(">>>> DEBUG: Merging Partitions <<<")
script = 'merge-partitions.py'
args = [outfile, '-k', str(ksize)]
utils.runscript(script, args)
final_pmap_file = outfile + '.pmap.merged'
assert os.path.exists(final_pmap_file)
if annotate_partitions:
print(">>>> DEBUG: Annotating Partitions <<<")
script = 'annotate-partitions.py'
args = ["-k", str(ksize), outfile, infilename]
in_dir = os.path.dirname(outfile)
utils.runscript(script, args, in_dir)
baseinfile = os.path.basename(infilename)
assert os.path.exists(os.path.join(in_dir, baseinfile + '.part'))
return outfile


def test_partition_graph_1():
    graphbase = _make_graph(utils.get_test_data('random-20-a.fa'))

    script = 'partition-graph.py'
    args = [graphbase]
    utils.runscript(script, args)

    script = 'merge-partitions.py'
    args = [graphbase, '-k', str(20)]
    utils.runscript(script, args)

    final_pmap_file = graphbase + '.pmap.merged'
    assert os.path.exists(final_pmap_file)

    ht = khmer.load_hashbits(graphbase + '.pt')
    ht.load_tagset(graphbase + '.tagset')
    ht.load_partitionmap(final_pmap_file)

    x = ht.count_partitions()
    assert x == (1, 0), x  # should be exactly one partition.


def test_partition_graph_nojoin_k21():
    # test with K=21
    graphbase = _make_graph(utils.get_test_data('random-20-a.fa'), ksize=21)

    script = 'partition-graph.py'
    args = [graphbase]
    utils.runscript(script, args)

    script = 'merge-partitions.py'
    args = [graphbase, '-k', str(21)]
    utils.runscript(script, args)

    final_pmap_file = graphbase + '.pmap.merged'
    assert os.path.exists(final_pmap_file)

    ht = khmer.load_hashbits(graphbase + '.pt')
    ht.load_tagset(graphbase + '.tagset')
    ht.load_partitionmap(final_pmap_file)

    x = ht.count_partitions()
    assert x == (99, 0), x  # should be 99 partitions at K=21


def test_partition_graph_nojoin_stoptags():
    # test with stoptags
    graphbase = _make_graph(utils.get_test_data('random-20-a.fa'))

    # add in some stop tags
    ht = khmer.load_hashbits(graphbase + '.pt')
    ht.add_stop_tag('TTGCATACGTTGAGCCAGCG')
    stoptags_file = graphbase + '.stoptags'
    ht.save_stop_tags(stoptags_file)
    del ht

    # run script with stoptags option
    script = 'partition-graph.py'
    args = ['--stoptags', stoptags_file, graphbase]
    utils.runscript(script, args)

    script = 'merge-partitions.py'
    args = [graphbase, '-k', str(20)]
    utils.runscript(script, args)

    final_pmap_file = graphbase + '.pmap.merged'
    assert os.path.exists(final_pmap_file)

    ht = khmer.load_hashbits(graphbase + '.pt')
    ht.load_tagset(graphbase + '.tagset')
    ht.load_partitionmap(final_pmap_file)

    x = ht.count_partitions()
    assert x == (2, 0), x  # should be 2 partitions


def test_partition_graph_big_traverse():
    graphbase = _make_graph(utils.get_test_data('biglump-random-20-a.fa'),
                            do_partition=True, stop_big_traverse=False)
    final_pmap_file = graphbase + '.pmap.merged'
    assert os.path.exists(final_pmap_file)

    ht = khmer.load_hashbits(graphbase + '.pt')
    ht.load_tagset(graphbase + '.tagset')
    ht.load_partitionmap(final_pmap_file)

    x = ht.count_partitions()
    assert x == (1, 0), x  # should be exactly one partition.


def test_partition_graph_no_big_traverse():
    # do NOT exhaustively traverse
    graphbase = _make_graph(utils.get_test_data('biglump-random-20-a.fa'),
                            do_partition=True, stop_big_traverse=True)
    final_pmap_file = graphbase + '.pmap.merged'
    assert os.path.exists(final_pmap_file)

    ht = khmer.load_hashbits(graphbase + '.pt')
    ht.load_tagset(graphbase + '.tagset')
    ht.load_partitionmap(final_pmap_file)

    x = ht.count_partitions()
    assert x[0] == 4, x  # should be four partitions, broken at knot.


def test_partition_find_knots_execute():
    graphbase = _make_graph(utils.get_test_data('random-20-a.fa'))

    script = 'partition-graph.py'
    args = [graphbase]
    utils.runscript(script, args)

    script = 'find-knots.py'
    args = [graphbase]
    utils.runscript(script, args)

    stoptags_file = graphbase + '.stoptags'
    assert os.path.exists(stoptags_file)


def test_annotate_partitions():
    seqfile = utils.get_test_data('random-20-a.fa')
    graphbase = _make_graph(seqfile, do_partition=True)
    in_dir = os.path.dirname(graphbase)

    # get the final pmap file
    final_pmap_file = graphbase + '.pmap.merged'
    assert os.path.exists(final_pmap_file)

    script = 'annotate-partitions.py'
    args = ["-k", "20", graphbase, seqfile]
    utils.runscript(script, args, in_dir)

    partfile = os.path.join(in_dir, 'random-20-a.fa.part')

    parts = [r.name.split('\t')[1] for r in screed.open(partfile)]
    parts = set(parts)
    assert '2' in parts
    assert len(parts) == 1


def test_annotate_partitions_2():
    # test with K=21 (no joining of sequences)
    seqfile = utils.get_test_data('random-20-a.fa')
    graphbase = _make_graph(seqfile, do_partition=True,
                            ksize=21)
    in_dir = os.path.dirname(graphbase)

    # get the final pmap file
    final_pmap_file = graphbase + '.pmap.merged'
    assert os.path.exists(final_pmap_file)

    script = 'annotate-partitions.py'
    args = ["-k", "21", graphbase, seqfile]
    utils.runscript(script, args, in_dir)

    partfile = os.path.join(in_dir, 'random-20-a.fa.part')

    parts = [r.name.split('\t')[1] for r in screed.open(partfile)]
    parts = set(parts)
    print(parts)
    assert len(parts) == 99, len(parts)


def test_extract_partitions():
    seqfile = utils.get_test_data('random-20-a.fa')
    graphbase = _make_graph(
        seqfile, do_partition=True, annotate_partitions=True)
    in_dir = os.path.dirname(graphbase)

    # get the final part file
    partfile = os.path.join(in_dir, 'random-20-a.fa.part')

    # ok, now run extract-partitions.
    script = 'extract-partitions.py'
    args = ['extracted', partfile]
    utils.runscript(script, args, in_dir)

    distfile = os.path.join(in_dir, 'extracted.dist')
    groupfile = os.path.join(in_dir, 'extracted.group0000.fa')
    assert os.path.exists(distfile)
    assert os.path.exists(groupfile)

    dist = open(distfile).readline()
    assert dist.strip() == '99 1 1 99'

    parts = [r.name.split('\t')[1] for r in screed.open(partfile)]
    assert len(parts) == 99, len(parts)
    parts = set(parts)
    assert len(parts) == 1, len(parts)
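

# The annotate-partitions output used in the tests above carries the
# partition ID as a tab-separated suffix on each sequence name, i.e.
# "<name>\t<partition_id>". A minimal, self-contained sketch of that
# parsing pattern; the record name in the example is hypothetical and
# not taken from any test data file.
def _partition_id(record_name):
    # everything after the tab is the partition ID
    return record_name.split('\t')[1]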


def test_extract_partitions_header_whitespace():
    seqfile = utils.get_test_data('test-overlap2.fa')
    graphbase = _make_graph(
        seqfile, do_partition=True, annotate_partitions=True)
    in_dir = os.path.dirname(graphbase)

    # get the final part file
    partfile = os.path.join(in_dir, 'test-overlap2.fa.part')

    # ok, now run extract-partitions.
    script = 'extract-partitions.py'
    args = ['extracted', partfile]
    utils.runscript(script, args, in_dir)

    distfile = os.path.join(in_dir, 'extracted.dist')
    groupfile = os.path.join(in_dir, 'extracted.group0000.fa')
    assert os.path.exists(distfile)
    assert os.path.exists(groupfile)

    dist = open(distfile).readline()
    assert dist.strip() == '1 11960 11960 11960', dist.strip()

    parts = [r.name.split('\t')[1]
             for r in screed.open(partfile, parse_description=False)]
    assert len(parts) == 13538, len(parts)
    parts = set(parts)
    assert len(parts) == 12602, len(parts)


def test_extract_partitions_fq():
    seqfile = utils.get_test_data('random-20-a.fq')
    graphbase = _make_graph(
        seqfile, do_partition=True, annotate_partitions=True)
    in_dir = os.path.dirname(graphbase)

    # get the final part file
    partfile = os.path.join(in_dir, 'random-20-a.fq.part')

    # ok, now run extract-partitions.
    script = 'extract-partitions.py'
    args = ['extracted', partfile]
    utils.runscript(script, args, in_dir)

    distfile = os.path.join(in_dir, 'extracted.dist')
    groupfile = os.path.join(in_dir, 'extracted.group0000.fq')
    assert os.path.exists(distfile)
    assert os.path.exists(groupfile)

    dist = open(distfile).readline()
    assert dist.strip() == '99 1 1 99'

    screed_iter = screed.open(partfile, parse_description=False)
    names = [r.name.split('\t')[0] for r in screed_iter]
    assert '35 1::FOO' in names
    assert '46 1::FIZ' in names

    screed_iter = screed.open(partfile, parse_description=False)
    parts = [r.name.split('\t')[1] for r in screed_iter]

    assert len(parts) == 99, len(parts)
    parts = set(parts)
    assert len(parts) == 1, len(parts)

    quals = set([r.quality for r in screed.open(partfile)])
    quals = list(quals)
    assert quals[0], quals


def test_extract_partitions_output_unassigned():
    seqfile = utils.get_test_data('random-20-a.fa')
    graphbase = _make_graph(
        seqfile, do_partition=True, annotate_partitions=True)
    in_dir = os.path.dirname(graphbase)

    # get the final part file
    partfile = os.path.join(in_dir, 'random-20-a.fa.part')

    # ok, now run extract-partitions.
    script = 'extract-partitions.py'
    args = ['-U', 'extracted', partfile]
    utils.runscript(script, args, in_dir)

    distfile = os.path.join(in_dir, 'extracted.dist')
    groupfile = os.path.join(in_dir, 'extracted.group0000.fa')
    unassigned_file = os.path.join(in_dir, 'extracted.unassigned.fa')
    assert os.path.exists(distfile)
    assert os.path.exists(groupfile)
    assert os.path.exists(unassigned_file)

    dist = open(distfile).readline()
    assert dist.strip() == '99 1 1 99'

    parts = [r.name.split('\t')[1] for r in screed.open(partfile)]
    assert len(parts) == 99, len(parts)
    parts = set(parts)
    assert len(parts) == 1, len(parts)


def test_extract_partitions_no_output_groups():
    seqfile = utils.get_test_data('random-20-a.fq')
    graphbase = _make_graph(
        seqfile, do_partition=True, annotate_partitions=True)
    in_dir = os.path.dirname(graphbase)

    # get the final part file
    partfile = os.path.join(in_dir, 'random-20-a.fq.part')

    # ok, now run extract-partitions.
    script = 'extract-partitions.py'
    args = ['-n', 'extracted', partfile]

    # We expect a sys.exit -> we need the test to be tolerant
    _, out, err = utils.runscript(script, args, in_dir, fail_ok=True)
    assert "NOT outputting groups! Beware!" in err

    # Group files are created after output_groups is
    # checked. They should not exist in this scenario
    groupfile = os.path.join(in_dir, 'extracted.group0000.fa')
    assert not os.path.exists(groupfile)


def test_extract_partitions_pid_0():
    basefile = utils.get_test_data('random-20-a.fa.part')
    partfile = utils.get_temp_filename('random-20-a.fa.part')
    shutil.copyfile(basefile, partfile)

    in_dir = os.path.dirname(partfile)

    # ok, now run extract-partitions.
    script = 'extract-partitions.py'
    args = ['-U', 'extracted', partfile]
    utils.runscript(script, args, in_dir)

    distfile = os.path.join(in_dir, 'extracted.dist')
    groupfile = os.path.join(in_dir, 'extracted.group0000.fa')
    unassigned_file = os.path.join(in_dir, 'extracted.unassigned.fa')
    assert os.path.exists(distfile)
    assert os.path.exists(groupfile)
    assert os.path.exists(unassigned_file)

    # Assert unassigned file not empty
    unassigned_content = open(unassigned_file).readline()
    assert unassigned_content.strip().split('\t')[0] != ''


def test_extract_partitions_multi_groups():
    basefile = utils.get_test_data('random-20-a.fa.part')
    partfile = utils.get_temp_filename('random-20-a.fa.part')
    shutil.copyfile(basefile, partfile)

    in_dir = os.path.dirname(partfile)

    # ok, now run extract-partitions.
    script = 'extract-partitions.py'
    args = ['-m', '1', '-X', '1', 'extracted', partfile]
    utils.runscript(script, args, in_dir)

    # Multiple group files should be created
    groupfile1 = os.path.join(in_dir, 'extracted.group0000.fa')
    groupfile2 = os.path.join(in_dir, 'extracted.group0001.fa')
    groupfile3 = os.path.join(in_dir, 'extracted.group0002.fa')
    assert os.path.exists(groupfile1)
    assert os.path.exists(groupfile2)
    assert os.path.exists(groupfile3)


def test_extract_partitions_no_groups():
    empty_file = utils.get_temp_filename('empty-file')
    basefile = utils.get_test_data('empty-file')
    shutil.copyfile(basefile, empty_file)

    in_dir = os.path.dirname(empty_file)

    # ok, now run extract-partitions.
    script = 'extract-partitions.py'
    args = ['extracted', empty_file]
    _, _, err = utils.runscript(script, args, in_dir, fail_ok=True)

    assert "ERROR: Input file" in err
    assert "is empty; Exiting." in err

    # No group files should be created
    groupfile = os.path.join(in_dir, 'extracted.group0000.fa')
    assert not os.path.exists(groupfile)


def test_abundance_dist():
    infile = utils.get_temp_filename('test.fa')
    outfile = utils.get_temp_filename('test.dist')
    in_dir = os.path.dirname(infile)

    shutil.copyfile(utils.get_test_data('test-abund-read-2.fa'), infile)

    htfile = _make_counting(infile, K=17)

    script = 'abundance-dist.py'
    args = ['-z', htfile, infile, outfile]
    utils.runscript(script, args, in_dir)

    with open(outfile) as fp:
        line = fp.readline().strip()
        assert line == '1 96 96 0.98', line
        line = fp.readline().strip()
        assert line == '1001 2 98 1.0', line

    os.remove(outfile)
    args = ['-z', '--csv', htfile, infile, outfile]
    utils.runscript(script, args, in_dir)

    with open(outfile) as fp:
        line = fp.readline().strip()
        assert (line == 'abundance,count,cumulative,cumulative_fraction'), line
        line = fp.readline().strip()
        assert line == '1,96,96,0.98', line
        line = fp.readline().strip()
        assert line == '1001,2,98,1.0', line


def test_abundance_dist_nobigcount():
    infile = utils.get_temp_filename('test.fa')
    outfile = utils.get_temp_filename('test.dist')
    in_dir = os.path.dirname(infile)

    shutil.copyfile(utils.get_test_data('test-abund-read-2.fa'), infile)

    htfile = _make_counting(infile, K=17)

    script = 'abundance-dist.py'
    args = ['-b', '-z', htfile, infile, outfile]
    utils.runscript(script, args, in_dir)

    with open(outfile) as fp:
        line = fp.readline().strip()
        assert line == '1 96 96 0.98', line
        line = fp.readline().strip()
        assert line == '255 2 98 1.0', line


def test_abundance_dist_single():
    infile = utils.get_temp_filename('test.fa')
    outfile = utils.get_temp_filename('test.dist')
    in_dir = os.path.dirname(infile)

    shutil.copyfile(utils.get_test_data('test-abund-read-2.fa'), infile)

    script = 'abundance-dist-single.py'
    args = ['-x', '1e7', '-N', '2', '-k', '17', '-z', '-t', infile,
            outfile]
    (status, out, err) = utils.runscript(script, args, in_dir)

    assert 'Total number of unique k-mers: 98' in err, err

    with open(outfile) as fp:
        line = fp.readline().strip()
        assert line == '1 96 96 0.98', line
        line = fp.readline().strip()
        assert line == '1001 2 98 1.0', line


def test_abundance_dist_threaded():
    infile = utils.get_temp_filename('test.fa')
    outfile = utils.get_temp_filename('test.dist')
    in_dir = os.path.dirname(infile)

    shutil.copyfile(utils.get_test_data('test-abund-read-2.fa'), infile)

    script = 'abundance-dist-single.py'
    args = ['-x', '1e7', '-N', '2', '-k', '17', '-z', '-t', '--threads', '18',
            infile, outfile]
    (status, out, err) = utils.runscript(script, args, in_dir)

    assert 'Total number of unique k-mers: 98' in err, err

    with open(outfile) as fp:
        line = fp.readline().strip()
        assert line == '1 96 96 0.98', line
        line = fp.readline().strip()
        assert line == '1001 2 98 1.0', line


def test_abundance_dist_single_csv():
    infile = utils.get_temp_filename('test.fa')
    outfile = utils.get_temp_filename('test.dist')
    in_dir = os.path.dirname(infile)

    shutil.copyfile(utils.get_test_data('test-abund-read-2.fa'), infile)

    script = 'abundance-dist-single.py'
    args = ['-x', '1e7', '-N', '2', '-k', '17', '-z', '--csv', infile,
            outfile]
    (status, out, err) = utils.runscript(script, args, in_dir)

    with open(outfile) as fp:
        line = fp.readline().strip()
        assert (line == 'abundance,count,cumulative,cumulative_fraction'), line
        line = fp.readline().strip()
        assert line == '1,96,96,0.98', line
        line = fp.readline().strip()
        assert line == '1001,2,98,1.0', line


def test_abundance_dist_single_nobigcount():
    infile = utils.get_temp_filename('test.fa')
    outfile = utils.get_temp_filename('test.dist')
    in_dir = os.path.dirname(infile)

    shutil.copyfile(utils.get_test_data('test-abund-read-2.fa'), infile)

    script = 'abundance-dist-single.py'
    args = ['-x', '1e7', '-N', '2', '-k', '17', '-z', '-b', infile, outfile]
    utils.runscript(script, args, in_dir)

    with open(outfile) as fp:
        line = fp.readline().strip()
        assert line == '1 96 96 0.98', line
        line = fp.readline().strip()
        assert line == '255 2 98 1.0', line


def test_abundance_dist_single_nosquash():
    infile = utils.get_temp_filename('test.fa')
    outfile = utils.get_temp_filename('test-abund-read-2.fa')
    in_dir = os.path.dirname(infile)

    shutil.copyfile(utils.get_test_data('test-abund-read-2.fa'), infile)

    script = 'abundance-dist-single.py'
    args = ['-x', '1e7', '-N', '2', '-k', '17', '-z', '-t', infile, outfile]
    utils.runscript(script, args, in_dir)

    with open(outfile) as fp:
        line = fp.readline().strip()
        assert line == '1 96 96 0.98', line
        line = fp.readline().strip()
        assert line == '1001 2 98 1.0', line


def test_abundance_dist_single_savetable():
    infile = utils.get_temp_filename('test.fa')
    outfile = utils.get_temp_filename('test.dist')
    tabfile = utils.get_temp_filename('test-savetable.ct')
    in_dir = os.path.dirname(infile)

    shutil.copyfile(utils.get_test_data('test-abund-read-2.fa'), infile)

    script = 'abundance-dist-single.py'
    args = ['-x', '1e7', '-N', '2', '-k', '17', '-z', '-t', '--savetable',
            tabfile, infile, outfile]
    utils.runscript(script, args, in_dir)

    with open(outfile) as fp:
        line = fp.readline().strip()
        assert line == '1 96 96 0.98', line
        line = fp.readline().strip()
        assert line == '1001 2 98 1.0', line


def test_do_partition():
    seqfile = utils.get_test_data('random-20-a.fa')
    graphbase = utils.get_temp_filename('out')
    in_dir = os.path.dirname(graphbase)

    script = 'do-partition.py'
    args = ["-k", "20", graphbase, seqfile]
    utils.runscript(script, args, in_dir)

    partfile = os.path.join(in_dir, 'random-20-a.fa.part')

    parts = [r.name.split('\t')[1] for r in screed.open(partfile)]
    parts = set(parts)
    assert '2' in parts
    assert len(parts) == 1


def test_do_partition_2():
    # test with K=21 (no joining of sequences)
    seqfile = utils.get_test_data('random-20-a.fa')
    graphbase = utils.get_temp_filename('out')
    in_dir = os.path.dirname(graphbase)

    script = 'do-partition.py'
    args = ["-k", "21", graphbase, seqfile]
    utils.runscript(script, args, in_dir)

    partfile = os.path.join(in_dir, 'random-20-a.fa.part')

    parts = [r.name.split('\t')[1] for r in screed.open(partfile)]
    parts = set(parts)
    assert len(parts) == 99, len(parts)


def test_do_partition_2_fq():
    # test with K=21 (no joining of sequences)
    seqfile = utils.get_test_data('random-20-a.fq')
    graphbase = utils.get_temp_filename('out')
    in_dir = os.path.dirname(graphbase)

    script = 'do-partition.py'
    args = ["-k", "21", graphbase, seqfile]
    utils.runscript(script, args, in_dir)

    partfile = os.path.join(in_dir, 'random-20-a.fq.part')

    screed_iter = screed.open(partfile, parse_description=False)
    names = [r.name.split('\t')[0] for r in screed_iter]
    assert '35 1::FOO' in names
    assert '46 1::FIZ' in names


def test_interleave_reads_1_fq():
    # test input files
    infile1 = utils.get_test_data('paired.fq.1')
    infile2 = utils.get_test_data('paired.fq.2')

    # correct output
    ex_outfile = utils.get_test_data('paired.fq')

    # actual output file
    outfile = utils.get_temp_filename('out.fq')

    script = 'interleave-reads.py'
    args = [infile1, infile2, '-o', outfile]
    utils.runscript(script, args)

    r = open(ex_outfile).read()
    q = open(outfile).read()

    assert r == q, (r, q)


def test_interleave_reads_broken_fq():
    # test input files
    infile1 = utils.get_test_data('paired-broken.fq.1')
    infile2 = utils.get_test_data('paired-broken.fq.2')

    # actual output file
    outfile = utils.get_temp_filename('out.fq')

    script = 'interleave-reads.py'
    args = [infile1, infile2, '-o', outfile]
    status, out, err = utils.runscript(script, args, fail_ok=True)
    assert status == 1
    assert 'ERROR: Input files contain different number of records.' in err


def test_interleave_reads_broken_fq_2():
    # test input files
    infile1 = utils.get_test_data('paired-broken2.fq.1')
    infile2 = utils.get_test_data('paired-broken2.fq.2')

    # actual output file
    outfile = utils.get_temp_filename('out.fq')

    script = 'interleave-reads.py'
    args = [infile1, infile2, '-o', outfile]
    status, out, err = utils.runscript(script, args, fail_ok=True)
    assert status == 1
    assert "ERROR: This doesn't look like paired data!" in err


def test_interleave_reads_broken_fq_3():
    # test input files
    infile1 = utils.get_test_data('paired-broken3.fq.1')
    infile2 = utils.get_test_data('paired-broken3.fq.2')

    # actual output file
    outfile = utils.get_temp_filename('out.fq')

    script = 'interleave-reads.py'
    args = [infile1, infile2, '-o', outfile]
    status, out, err = utils.runscript(script, args, fail_ok=True)
    assert status == 1
    assert "ERROR: This doesn't look like paired data!" in err


def test_interleave_reads_broken_fq_4():
    # test input files
    infile1 = utils.get_test_data('paired-mixed-broken.fq')

    # actual output file
    outfile = utils.get_temp_filename('out.fq')

    script = 'interleave-reads.py'
    args = [infile1, '-o', outfile]
    status, out, err = utils.runscript(script, args, fail_ok=True)
    assert status == 1
    assert "ERROR: given only one filename, that doesn't contain _R1_" in err


def test_interleave_reads_2_fa():
    # test input files
    infile1 = utils.get_test_data('paired.fa.1')
    infile2 = utils.get_test_data('paired.fa.2')

    # correct output
    ex_outfile = utils.get_test_data('paired.fa')

    # actual output file
    outfile = utils.get_temp_filename('out.fa')

    script = 'interleave-reads.py'
    args = [infile1, infile2, '-o', outfile]
    utils.runscript(script, args)

    n = 0
    for r, q in zip(screed.open(ex_outfile), screed.open(outfile)):
        n += 1
        assert r.name == q.name
        assert r.sequence == q.sequence
    assert n > 0


def test_make_initial_stoptags():
    # gen input files using load-graph.py -t
    # should keep test_data directory size down
    # or something like that
    # this assumes (obv.) load-graph works properly
    bzinfile = utils.get_temp_filename('test-reads.fq.bz2')
    shutil.copyfile(utils.get_test_data('test-reads.fq.bz2'), bzinfile)
    in_dir = os.path.dirname(bzinfile)

    genscript = 'load-graph.py'
    genscriptargs = ['test-reads', 'test-reads.fq.bz2']
    utils.runscript(genscript, genscriptargs, in_dir)

    # test input files generated by load-graph
    infile = utils.get_temp_filename('test-reads.pt')
    infile2 = utils.get_temp_filename('test-reads.tagset', in_dir)

    # get file to compare against
    ex_outfile = utils.get_test_data('test-reads.stoptags')

    # actual output file
    outfile1 = utils.get_temp_filename('test-reads.stoptags', in_dir)

    script = 'make-initial-stoptags.py'
    # make-initial-stoptags has weird file argument syntax
    # read the code before modifying
    args = ['test-reads']
    utils.runscript(script, args, in_dir)
    assert os.path.exists(outfile1), outfile1


def test_extract_paired_reads_1_fa():
    # test input file
    infile = utils.get_test_data('paired-mixed.fa')

    ex_outfile1 = utils.get_test_data('paired-mixed.fa.pe')
    ex_outfile2 = utils.get_test_data('paired-mixed.fa.se')

    # actual output files...
    outfile1 = utils.get_temp_filename('paired-mixed.fa.pe')
    in_dir = os.path.dirname(outfile1)
    outfile2 = utils.get_temp_filename('paired-mixed.fa.se', in_dir)

    script = 'extract-paired-reads.py'
    args = [infile]
    utils.runscript(script, args, in_dir)

    assert os.path.exists(outfile1), outfile1
    assert os.path.exists(outfile2), outfile2

    n = 0
    for r, q in zip(screed.open(ex_outfile1), screed.open(outfile1)):
        n += 1
        assert r.name == q.name
        assert r.sequence == q.sequence
    assert n > 0

    n = 0
    for r, q in zip(screed.open(ex_outfile2), screed.open(outfile2)):
        n += 1
        assert r.name == q.name
        assert r.sequence == q.sequence
    assert n > 0


def test_extract_paired_reads_2_fq():
    # test input file
    infile = utils.get_test_data('paired-mixed.fq')

    ex_outfile1 = utils.get_test_data('paired-mixed.fq.pe')
    ex_outfile2 = utils.get_test_data('paired-mixed.fq.se')

    # actual output files...
    outfile1 = utils.get_temp_filename('paired-mixed.fq.pe')
    in_dir = os.path.dirname(outfile1)
    outfile2 = utils.get_temp_filename('paired-mixed.fq.se', in_dir)

    script = 'extract-paired-reads.py'
    args = [infile]
    utils.runscript(script, args, in_dir)

    assert os.path.exists(outfile1), outfile1
    assert os.path.exists(outfile2), outfile2

    n = 0
    for r, q in zip(screed.open(ex_outfile1, parse_description=False),
                    screed.open(outfile1, parse_description=False)):
        n += 1
        assert r.name == q.name, (r.name, q.name, n)
        assert r.sequence == q.sequence
        assert r.quality == q.quality
    assert n > 0

    n = 0
    for r, q in zip(screed.open(ex_outfile2, parse_description=False),
                    screed.open(outfile2, parse_description=False)):
        n += 1
        assert r.name == q.name
        assert r.sequence == q.sequence
        assert r.quality == q.quality
    assert n > 0


def execute_split_paired_streaming(ifilename):
    fifo = utils.get_temp_filename('fifo')
    in_dir = os.path.dirname(fifo)
    outfile1 = utils.get_temp_filename('paired-1.fa')
    outfile2 = utils.get_temp_filename('paired-2.fa')
    script = 'split-paired-reads.py'
    args = [fifo, '-1', outfile1, '-2', outfile2]

    # make a fifo to simulate streaming
    os.mkfifo(fifo)

    thread = threading.Thread(target=utils.runscript,
                              args=(script, args, in_dir))
    thread.start()

    ifile = open(ifilename, 'r')
    fifofile = open(fifo, 'w')
    chunk = ifile.read(4)
    while len(chunk) > 0:
        fifofile.write(chunk)
        chunk = ifile.read(4)
    fifofile.close()

    thread.join()

    assert os.path.exists(outfile1), outfile1
    assert os.path.exists(outfile2), outfile2
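

# The FIFO trick above, in isolation: a reader thread consumes a named
# pipe while the main thread writes into it in small chunks, simulating
# a streaming producer. This is a self-contained sketch (POSIX-only;
# os.mkfifo is unavailable on Windows) and does not depend on khmer or
# the test utilities; all names below are illustrative only.
def _stream_through_fifo(data, chunk_size=4):
    import tempfile

    tmpdir = tempfile.mkdtemp()
    fifo_path = os.path.join(tmpdir, 'fifo')
    os.mkfifo(fifo_path)
    received = []

    def reader():
        # opening a FIFO for reading blocks until a writer connects
        with open(fifo_path, 'r') as fp:
            received.append(fp.read())

    reader_thread = threading.Thread(target=reader)
    reader_thread.start()

    # feed the pipe in small chunks, as the test above does
    with open(fifo_path, 'w') as fp:
        for i in range(0, len(data), chunk_size):
            fp.write(data[i:i + chunk_size])
    reader_thread.join()

    os.remove(fifo_path)
    os.rmdir(tmpdir)
    return received[0]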


def test_split_paired_streaming():
    execute_split_paired_streaming(utils.get_test_data('paired.fa'))


def test_split_paired_reads_1_fa():
    # test input file
    infile = utils.get_test_data('paired.fa')

    ex_outfile1 = utils.get_test_data('paired.fa.1')
    ex_outfile2 = utils.get_test_data('paired.fa.2')

    # actual output files...
    outfile1 = utils.get_temp_filename('paired.fa.1')
    in_dir = os.path.dirname(outfile1)
    outfile2 = utils.get_temp_filename('paired.fa.2', in_dir)

    script = 'split-paired-reads.py'
    args = [infile]
    utils.runscript(script, args, in_dir)

    assert os.path.exists(outfile1), outfile1
    assert os.path.exists(outfile2), outfile2

    n = 0
    for r, q in zip(screed.open(ex_outfile1), screed.open(outfile1)):
        n += 1
        assert r.name == q.name
        assert r.sequence == q.sequence
    assert n > 0

    n = 0
    for r, q in zip(screed.open(ex_outfile2), screed.open(outfile2)):
        n += 1
        assert r.name == q.name
        assert r.sequence == q.sequence
    assert n > 0


def test_split_paired_reads_2_fq():
    # test input file
    infile = utils.get_test_data('paired.fq')

    ex_outfile1 = utils.get_test_data('paired.fq.1')
    ex_outfile2 = utils.get_test_data('paired.fq.2')

    # actual output files...
    outfile1 = utils.get_temp_filename('paired.fq.1')
    in_dir = os.path.dirname(outfile1)
    outfile2 = utils.get_temp_filename('paired.fq.2', in_dir)

    script = 'split-paired-reads.py'
    args = [infile]
    utils.runscript(script, args, in_dir)

    assert os.path.exists(outfile1), outfile1
    assert os.path.exists(outfile2), outfile2

    n = 0
    for r, q in zip(screed.open(ex_outfile1), screed.open(outfile1)):
        n += 1
        assert r.name == q.name
        assert r.sequence == q.sequence
        assert r.quality == q.quality
    assert n > 0

    n = 0
    for r, q in zip(screed.open(ex_outfile2), screed.open(outfile2)):
        n += 1
        assert r.name == q.name
        assert r.sequence == q.sequence
        assert r.quality == q.quality
    assert n > 0


def test_split_paired_reads_2_mixed_fq_require_pair():
    # test input file
    infile = utils.get_temp_filename('test.fq')
    shutil.copyfile(utils.get_test_data('paired-mixed.fq'), infile)
    in_dir = os.path.dirname(infile)

    script = 'split-paired-reads.py'
    args = ['-p', infile]
    status, out, err = utils.runscript(script, args, in_dir, fail_ok=True)
    assert status == 1
    assert "is not part of a pair" in err


def test_split_paired_reads_2_mixed_fq():
    # test input file
    infile = utils.get_temp_filename('test.fq')
    shutil.copyfile(utils.get_test_data('paired-mixed-2.fq'), infile)
    in_dir = os.path.dirname(infile)

    script = 'split-paired-reads.py'
    args = [infile]
    status, out, err = utils.runscript(script, args, in_dir)
    assert status == 0
    assert "split 11 sequences (7 left, 4 right)" in err, err


def test_split_paired_reads_2_mixed_fq_broken_pairing_format():
    # test input file
    infile = utils.get_temp_filename('test.fq')
    shutil.copyfile(utils.get_test_data('paired-mixed-broken.fq'), infile)
    in_dir = os.path.dirname(infile)

    script = 'split-paired-reads.py'
    args = [infile]
    status, out, err = utils.runscript(script, args, in_dir, fail_ok=True)
    assert status == 1
    assert "Unrecognized format" in err


def test_split_paired_reads_3_output_dir():
    # test input file
    infile = utils.get_test_data('paired.fq')

    ex_outfile1 = utils.get_test_data('paired.fq.1')
    ex_outfile2 = utils.get_test_data('paired.fq.2')

    # actual output files...
    outfile1 = utils.get_temp_filename('paired.fq.1')
    output_dir = os.path.dirname(outfile1)
    outfile2 = utils.get_temp_filename('paired.fq.2', output_dir)

    script = 'split-paired-reads.py'
    args = ['--output-dir', output_dir, infile]
    utils.runscript(script, args)

    assert os.path.exists(outfile1), outfile1
    assert os.path.exists(outfile2), outfile2

    n = 0
    for r, q in zip(screed.open(ex_outfile1), screed.open(outfile1)):
        n += 1
        assert r.name == q.name
        assert r.sequence == q.sequence
        assert r.quality == q.quality
    assert n > 0

    n = 0
    for r, q in zip(screed.open(ex_outfile2), screed.open(outfile2)):
        n += 1
        assert r.name == q.name
        assert r.sequence == q.sequence
        assert r.quality == q.quality
    assert n > 0


def test_split_paired_reads_3_output_files():
    # test input file
    infile = utils.get_test_data('paired.fq')

    ex_outfile1 = utils.get_test_data('paired.fq.1')
    ex_outfile2 = utils.get_test_data('paired.fq.2')

    # actual output files...
    outfile1 = utils.get_temp_filename('xxx')
    output_dir = os.path.dirname(outfile1)
    outfile2 = utils.get_temp_filename('yyy', output_dir)

    script = 'split-paired-reads.py'
    args = ['-1', outfile1, '-2', outfile2, infile]
    utils.runscript(script, args)

    assert os.path.exists(outfile1), outfile1
    assert os.path.exists(outfile2), outfile2

    n = 0
    for r, q in zip(screed.open(ex_outfile1), screed.open(outfile1)):
        n += 1
        assert r.name == q.name
        assert r.sequence == q.sequence
        assert r.quality == q.quality
    assert n > 0

    n = 0
    for r, q in zip(screed.open(ex_outfile2), screed.open(outfile2)):
        n += 1
        assert r.name == q.name
        assert r.sequence == q.sequence
        assert r.quality == q.quality
    assert n > 0


def test_split_paired_reads_3_output_files_left():
    # test input file
    infile = utils.get_test_data('paired.fq')

    ex_outfile1 = utils.get_test_data('paired.fq.1')
    ex_outfile2 = utils.get_test_data('paired.fq.2')

    # actual output files...
    outfile1 = utils.get_temp_filename('xxx')
    output_dir = os.path.dirname(outfile1)
    outfile2 = utils.get_temp_filename('paired.fq.2', output_dir)

    script = 'split-paired-reads.py'
    args = ['-o', output_dir, '-1', outfile1, infile]
    utils.runscript(script, args)

    assert os.path.exists(outfile1), outfile1
    assert os.path.exists(outfile2), outfile2

    n = 0
    for r, q in zip(screed.open(ex_outfile1), screed.open(outfile1)):
        n += 1
        assert r.name == q.name
        assert r.sequence == q.sequence
        assert r.quality == q.quality
    assert n > 0

    n = 0
    for r, q in zip(screed.open(ex_outfile2), screed.open(outfile2)):
        n += 1
        assert r.name == q.name
        assert r.sequence == q.sequence
        assert r.quality == q.quality
    assert n > 0


def test_split_paired_reads_3_output_files_right():
    # test input file
    infile = utils.get_test_data('paired.fq')

    ex_outfile1 = utils.get_test_data('paired.fq.1')
    ex_outfile2 = utils.get_test_data('paired.fq.2')

    # actual output files...
    outfile1 = utils.get_temp_filename('paired.fq.1')
    output_dir = os.path.dirname(outfile1)
    outfile2 = utils.get_temp_filename('yyy', output_dir)

    script = 'split-paired-reads.py'
    args = ['-2', outfile2, '-o', output_dir, infile]
    utils.runscript(script, args)

    assert os.path.exists(outfile1), outfile1
    assert os.path.exists(outfile2), outfile2

    n = 0
    for r, q in zip(screed.open(ex_outfile1), screed.open(outfile1)):
        n += 1
        assert r.name == q.name
        assert r.sequence == q.sequence
        assert r.quality == q.quality
    assert n > 0

    n = 0
    for r, q in zip(screed.open(ex_outfile2), screed.open(outfile2)):
        n += 1
        assert r.name == q.name
        assert r.sequence == q.sequence
        assert r.quality == q.quality
    assert n > 0
def test_sample_reads_randomly():
infile = utils.get_temp_filename('test.fa')
in_dir = os.path.dirname(infile)
shutil.copyfile(utils.get_test_data('test-reads.fa'), infile)
script = 'sample-reads-randomly.py'
# fix random number seed for reproducibility
args = ['-N', '10', '-M', '12000', '-R', '1']
args.append(infile)
utils.runscript(script, args, in_dir)
outfile = infile + '.subset'
assert os.path.exists(outfile), outfile
seqs = set([r.name for r in screed.open(outfile)])
print(list(sorted(seqs)))
if sys.version_info.major == 2:
answer = {'850:2:1:1859:11742/1', '850:2:1:1859:11742/2',
'850:2:1:2131:17360/1', '850:2:1:2131:17360/2',
'850:2:1:2416:7565/1', '850:2:1:2416:7565/2',
'850:2:1:2490:13491/1', '850:2:1:2490:13491/2',
'850:2:1:2962:3999/1', '850:2:1:2962:3999/2',
'850:2:1:3096:20321/1', '850:2:1:3096:20321/2',
'850:2:1:3164:6414/1', '850:2:1:3164:6414/2',
'850:2:1:3206:13876/1', '850:2:1:3206:13876/2',
'850:2:1:3631:20919/1', '850:2:1:3631:20919/2',
'850:2:1:3655:15581/1', '850:2:1:3655:15581/2'}
else:
answer = {'850:2:1:1257:3404/1', '850:2:1:1257:3404/2',
'850:2:1:1362:19357/1', '850:2:1:1362:19357/2',
'850:2:1:1396:5659/1', '850:2:1:1396:5659/2',
'850:2:1:2063:11124/1', '850:2:1:2063:11124/2',
'850:2:1:2121:12070/1', '850:2:1:2121:12070/2',
'850:2:1:2528:15779/1', '850:2:1:2528:15779/2',
'850:2:1:2581:12886/1', '850:2:1:2581:12886/2',
'850:2:1:2864:8505/1', '850:2:1:2864:8505/2',
'850:2:1:3000:2015/1', '850:2:1:3000:2015/2',
'850:2:1:3302:5025/1', '850:2:1:3302:5025/2'}
assert seqs == answer
def test_sample_reads_randomly_force_single():
infile = utils.get_temp_filename('test.fa')
in_dir = os.path.dirname(infile)
shutil.copyfile(utils.get_test_data('test-reads.fa'), infile)
script = 'sample-reads-randomly.py'
# fix random number seed for reproducibility
args = ['-N', '10', '-M', '12000', '-R', '1', '--force_single']
args.append(infile)
utils.runscript(script, args, in_dir)
outfile = infile + '.subset'
assert os.path.exists(outfile), outfile
seqs = set([r.name for r in screed.open(outfile)])
print(list(sorted(seqs)))
if sys.version_info.major == 2:
answer = {'850:2:1:2399:20086/2',
'850:2:1:2273:13309/1',
'850:2:1:2065:16816/1',
'850:2:1:1984:7162/2',
'850:2:1:2691:14602/1',
'850:2:1:1762:5439/1',
'850:2:1:2503:4494/2',
'850:2:1:2263:11143/2',
'850:2:1:1792:15774/2',
'850:2:1:2084:17145/1'}
else:
answer = {'850:2:1:1199:4197/1',
'850:2:1:1251:16575/2',
'850:2:1:1267:6790/2',
'850:2:1:1601:4443/1',
'850:2:1:1625:19325/1',
'850:2:1:1832:14607/2',
'850:2:1:1946:20852/2',
'850:2:1:2401:4896/2',
'850:2:1:2562:1308/1',
'850:2:1:3123:15968/2'}
assert seqs == answer
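The two expected-answer branches above exist because `-R 1` pins the RNG seed, but Python 2 and Python 3 produce different sequences from the same seed when sampling, so each interpreter gets its own answer set. A minimal sketch of seed-pinned sampling (illustrative only; `sample_reads` is not the script's code):

```python
import random

def sample_reads(reads, n, seed):
    # With a fixed seed the chosen subset is reproducible run-to-run,
    # though it can still differ across Python versions because
    # random.sample's internals changed between Python 2 and 3 --
    # hence the two hard-coded answer sets in the tests above.
    rng = random.Random(seed)
    return rng.sample(reads, n)

subset_a = sample_reads(list(range(1000)), 10, seed=1)
subset_b = sample_reads(list(range(1000)), 10, seed=1)
assert subset_a == subset_b  # same seed -> same subset
```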
def test_sample_reads_randomly_fq():
infile = utils.get_temp_filename('test.fq.gz')
in_dir = os.path.dirname(infile)
shutil.copyfile(utils.get_test_data('test-reads.fq.gz'), infile)
script = 'sample-reads-randomly.py'
# fix random number seed for reproducibility
args = ['-N', '10', '-M', '12000', '-R', '1']
args.append(infile)
utils.runscript(script, args, in_dir)
outfile = infile + '.subset'
assert os.path.exists(outfile), outfile
if sys.version_info.major == 2:
answer = {'850:2:1:2399:20086/2',
'850:2:1:1762:5439 1::FOO',
'850:2:1:2065:16816/1',
'850:2:1:2263:11143/2',
'850:2:1:1792:15774/2',
'850:2:1:2691:14602/1',
'850:2:1:2503:4494 1::FOO',
'850:2:1:2084:17145/1',
'850:2:1:1984:7162 1::FOO',
'850:2:1:2273:13309 1::FOO'}
else:
answer = {'850:2:1:1199:4197 1::FOO',
'850:2:1:1251:16575/2',
'850:2:1:1267:6790/2',
'850:2:1:1601:4443 1::FOO',
'850:2:1:1625:19325 1::FOO',
'850:2:1:1832:14607 1::FOO',
'850:2:1:1946:20852 1::FOO',
'850:2:1:2401:4896/2',
'850:2:1:2562:1308/1',
'850:2:1:3123:15968/2'}
seqs = set([r.name for r in screed.open(outfile,
parse_description=False)])
print(list(sorted(seqs)))
assert seqs == answer
def test_fastq_to_fasta():
script = 'fastq-to-fasta.py'
clean_infile = utils.get_temp_filename('test-clean.fq')
n_infile = utils.get_temp_filename('test-n.fq')
shutil.copyfile(utils.get_test_data('test-fastq-reads.fq'), clean_infile)
shutil.copyfile(utils.get_test_data('test-fastq-n-reads.fq'), n_infile)
clean_outfile = clean_infile + '.keep.fa'
n_outfile = n_infile + '.keep.fa'
in_dir = os.path.dirname(clean_infile)
in_dir_n = os.path.dirname(n_infile)
args = [clean_infile, '-n', '-o', clean_outfile]
(status, out, err) = utils.runscript(script, args, in_dir)
assert len(out.splitlines()) == 2, len(out.splitlines())
assert "No lines dropped" in err
names = [r.name for r in screed.open(clean_outfile,
parse_description=False)]
assert '895:1:1:1246:14654 1:N:0:NNNNN' in names, names
args = [n_infile, '-n', '-o', n_outfile]
(status, out, err) = utils.runscript(script, args, in_dir_n)
assert len(out.splitlines()) == 2
assert "No lines dropped" in err
args = [clean_infile, '-o', clean_outfile]
(status, out, err) = utils.runscript(script, args, in_dir)
assert len(out.splitlines()) == 2
assert "0 lines dropped" in err
args = [n_infile, '-o', n_outfile]
(status, out, err) = utils.runscript(script, args, in_dir_n)
assert len(out.splitlines()) == 2, out
assert "4 lines dropped" in err, err
args = [clean_infile]
(status, out, err) = utils.runscript(script, args, in_dir)
assert len(out.splitlines()) > 2
assert "0 lines dropped" in err
args = [n_infile]
(status, out, err) = utils.runscript(script, args, in_dir_n)
assert len(out.splitlines()) > 2
assert "4 lines dropped" in err
def test_extract_long_sequences_fa():
script = 'extract-long-sequences.py'
fa_infile = utils.get_temp_filename('test.fa')
shutil.copyfile(utils.get_test_data('paired-mixed.fa'), fa_infile)
fa_outfile = fa_infile + '.keep.fa'
in_dir_fa = os.path.dirname(fa_infile)
args = [fa_infile, '-l', '10', '-o', fa_outfile]
(status, out, err) = utils.runscript(script, args, in_dir_fa)
countlines = sum(1 for line in open(fa_outfile))
assert countlines == 22, countlines
names = [r.name for r in screed.open(fa_outfile, parse_description=False)]
assert "895:1:37:17593:9954/1" in names
assert "895:1:37:17593:9954/2" in names
def test_extract_long_sequences_fq():
script = 'extract-long-sequences.py'
fq_infile = utils.get_temp_filename('test.fq')
shutil.copyfile(utils.get_test_data('paired-mixed.fq'), fq_infile)
fq_outfile = fq_infile + '.keep.fq'
in_dir_fq = os.path.dirname(fq_infile)
args = [fq_infile, '-l', '10', '-o', fq_outfile]
(status, out, err) = utils.runscript(script, args, in_dir_fq)
countlines = sum(1 for line in open(fq_outfile))
assert countlines == 44, countlines
names = [r.name for r in screed.open(fq_outfile, parse_description=False)]
assert "895:1:37:17593:9954 1::foo" in names
assert "895:1:37:17593:9954 2::foo" in names
def test_sample_reads_randomly_S():
infile = utils.get_temp_filename('test.fq')
in_dir = os.path.dirname(infile)
shutil.copyfile(utils.get_test_data('test-fastq-reads.fq'), infile)
script = 'sample-reads-randomly.py'
# fix random number seed for reproducibility
args = ['-N', '10', '-R', '1', '-S', '3']
badargs = list(args)
badargs.extend(['-o', 'test', infile, infile])
(status, out, err) = utils.runscript(script, badargs, in_dir, fail_ok=True)
assert status == 1, (status, out, err)
assert "Error: cannot specify -o with more than one sample" in err
args.append(infile)
utils.runscript(script, args, in_dir)
outfile = infile + '.subset.0'
assert os.path.exists(outfile), outfile
seqs = set([r.name for r in screed.open(outfile, parse_description=True)])
print(list(sorted(seqs)))
print(seqs)
if sys.version_info.major == 2:
answer = {'895:1:1:1303:14389', '895:1:1:1347:3237',
'895:1:1:1295:6189', '895:1:1:1308:20421',
'895:1:1:1320:11648', '895:1:1:1352:5369',
'895:1:1:1318:10532', '895:1:1:1363:11839',
'895:1:1:1355:13535', '895:1:1:1349:15165'}
else:
answer = {'895:1:1:1290:11501', '895:1:1:1303:14389',
'895:1:1:1307:4308', '895:1:1:1308:2539',
'895:1:1:1331:1766', '895:1:1:1333:2512',
'895:1:1:1347:3237', '895:1:1:1363:11839',
'895:1:1:1378:18986', '895:1:1:1383:3089'}
assert seqs == answer
outfile = infile + '.subset.1'
assert os.path.exists(outfile), outfile
seqs = set([r.name for r in screed.open(outfile, parse_description=True)])
print(list(sorted(seqs)))
if sys.version_info.major == 2:
answer = set(['895:1:1:1303:14389', '895:1:1:1373:4848',
'895:1:1:1357:19736', '895:1:1:1347:3237',
'895:1:1:1338:7557', '895:1:1:1388:11093',
'895:1:1:1296:1784', '895:1:1:1290:11501',
'895:1:1:1355:13535', '895:1:1:1303:6251'])
else:
answer = {'895:1:1:1255:18861', '895:1:1:1276:16426',
'895:1:1:1303:6251', '895:1:1:1308:20421',
'895:1:1:1314:10430', '895:1:1:1351:14718',
'895:1:1:1355:13535', '895:1:1:1358:4953',
'895:1:1:1362:3983', '895:1:1:1363:9988'}
assert seqs == answer
seqs = set([r.name for r in screed.open(outfile, parse_description=True)])
print(list(sorted(seqs)))
if sys.version_info.major == 2:
answer = {'895:1:1:1303:14389', '895:1:1:1373:4848',
'895:1:1:1357:19736', '895:1:1:1347:3237',
'895:1:1:1338:7557', '895:1:1:1388:11093',
'895:1:1:1296:1784', '895:1:1:1290:11501',
'895:1:1:1355:13535', '895:1:1:1303:6251'}
else:
answer = {'895:1:1:1362:3983', '895:1:1:1363:9988',
'895:1:1:1314:10430', '895:1:1:1255:18861',
'895:1:1:1308:20421', '895:1:1:1358:4953',
'895:1:1:1351:14718', '895:1:1:1303:6251',
'895:1:1:1276:16426', '895:1:1:1355:13535'}
assert seqs == answer
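Drawing a fixed-size subset in a single pass over a stream — as `-S 3` does three times here, writing `.subset.0`, `.subset.1`, and so on — is classically done with reservoir sampling. The sketch below shows the textbook algorithm; whether sample-reads-randomly.py uses exactly this variant is an assumption:

```python
import random

def reservoir_sample(stream, n, rng):
    # Single-pass uniform sample of n items from a stream of unknown length:
    # fill the reservoir with the first n items, then replace a random slot
    # with decreasing probability as more items arrive.
    reservoir = []
    for i, item in enumerate(stream):
        if i < n:
            reservoir.append(item)
        else:
            j = rng.randint(0, i)  # inclusive upper bound
            if j < n:
                reservoir[j] = item
    return reservoir

picked = reservoir_sample(iter(range(10000)), 10, random.Random(1))
assert len(picked) == 10
assert len(set(picked)) == 10  # all distinct
```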
def test_count_overlap_invalid_datafile():
seqfile1 = utils.get_temp_filename('test-overlap1.fa')
in_dir = os.path.dirname(seqfile1)
shutil.copy(utils.get_test_data('test-overlap1.fa'), seqfile1)
htfile = _make_graph(seqfile1, ksize=20)
outfile = utils.get_temp_filename('overlap.out', in_dir)
script = 'count-overlap.py'
args = ['--ksize', '20', '--n_tables', '2', '--max-tablesize', '10000000',
htfile + '.pt', htfile + '.pt', outfile]
(status, out, err) = utils.runscript(script, args, in_dir, fail_ok=True)
if sys.version_info.major == 2:
assert "IOError" in err
else:
assert "OSError" in err
def test_count_overlap():
seqfile1 = utils.get_temp_filename('test-overlap1.fa')
in_dir = os.path.dirname(seqfile1)
seqfile2 = utils.get_temp_filename('test-overlap2.fa', in_dir)
outfile = utils.get_temp_filename('overlap.out', in_dir)
curvefile = utils.get_temp_filename('overlap.out.curve', in_dir)
shutil.copy(utils.get_test_data('test-overlap1.fa'), seqfile1)
shutil.copy(utils.get_test_data('test-overlap2.fa'), seqfile2)
htfile = _make_graph(seqfile1, ksize=20)
script = 'count-overlap.py'
args = ['--ksize', '20', '--n_tables', '2', '--max-tablesize', '10000000',
htfile + '.pt', seqfile2, outfile]
(status, out, err) = utils.runscript(script, args, in_dir)
assert status == 0
assert os.path.exists(outfile), outfile
data = [x.strip() for x in open(outfile)]
data = set(data)
assert '# of unique k-mers in dataset2: 759020' in data, data
assert '# of overlap unique k-mers: 245547' in data
assert os.path.exists(curvefile), curvefile
data = [x.strip() for x in open(curvefile)]
data = set(data)
assert '178630 1134' in data, data
assert '496280 2904' in data
assert '752031 238558' in data
def test_count_overlap_csv():
seqfile1 = utils.get_temp_filename('test-overlap1.fa')
in_dir = os.path.dirname(seqfile1)
seqfile2 = utils.get_temp_filename('test-overlap2.fa', in_dir)
outfile = utils.get_temp_filename('overlap.out', in_dir)
curvefile = utils.get_temp_filename('overlap.out.curve', in_dir)
shutil.copy(utils.get_test_data('test-overlap1.fa'), seqfile1)
shutil.copy(utils.get_test_data('test-overlap2.fa'), seqfile2)
htfile = _make_graph(seqfile1, ksize=20)
script = 'count-overlap.py'
args = ['--ksize', '20', '--n_tables', '2', '--max-tablesize',
'10000000', '--csv', htfile + '.pt', seqfile2, outfile]
(status, out, err) = utils.runscript(script, args, in_dir)
assert status == 0
assert os.path.exists(outfile), outfile
data = [x.strip() for x in open(outfile)]
data = set(data)
assert '# of unique k-mers in dataset2: 759020' in data
assert '# of overlap unique k-mers: 245547' in data
assert os.path.exists(curvefile), curvefile
data = [x.strip() for x in open(curvefile)]
data = set(data)
assert '178630,1134' in data, data
assert '496280,2904' in data
assert '752031,238558' in data
def execute_streaming_diginorm(ifilename):
    '''Helper function for the matrix of streaming tests for read_parser
    using diginorm, i.e. uncompressed fasta, gzip fasta, bz2 fasta,
    uncompressed fastq, etc.

    This is not directly executed but is run by the tests themselves.
    '''
    # Get temp filenames, etc.
    fifo = utils.get_temp_filename('fifo')
    in_dir = os.path.dirname(fifo)
    script = 'normalize-by-median.py'
    args = ['-C', '1', '-k', '17', '-o', 'outfile', fifo]

    # make a fifo to simulate streaming
    os.mkfifo(fifo)

    # FIFOs MUST BE OPENED FOR READING BEFORE THEY ARE WRITTEN TO.
    # If this isn't done, they will BLOCK and things will hang.
    thread = threading.Thread(target=utils.runscript,
                              args=(script, args, in_dir))
    thread.start()

    ifile = io.open(ifilename, 'rb')
    fifofile = io.open(fifo, 'wb')

    # read binary to handle compressed files
    chunk = ifile.read(8192)
    while len(chunk) > 0:
        fifofile.write(chunk)
        chunk = ifile.read(8192)

    fifofile.close()
    thread.join()

    return os.path.join(in_dir, 'outfile')
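The open-the-reader-first rule stressed in the helper's comments is worth seeing in isolation: opening a FIFO for writing blocks until some process has it open for reading, so the reader must be started (here, in a thread) before the writer's open() call. A self-contained demonstration, independent of the test harness (Unix-only, since it uses os.mkfifo):

```python
import io
import os
import tempfile
import threading

def stream_through_fifo(payload):
    # Start the reader in a thread *before* opening the FIFO for writing;
    # the two open() calls rendezvous, so neither side hangs.
    fifo = os.path.join(tempfile.mkdtemp(), 'fifo')
    os.mkfifo(fifo)
    received = []

    def reader():
        with io.open(fifo, 'rb') as handle:
            received.append(handle.read())

    thread = threading.Thread(target=reader)
    thread.start()
    with io.open(fifo, 'wb') as handle:
        handle.write(payload)
    thread.join()
    return received[0]

assert stream_through_fifo(b'@read1\nACGT\n') == b'@read1\nACGT\n'
```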
def execute_load_graph_streaming(filename):
    '''Helper function for the matrix of streaming tests using screed via
    filter-abund-single, i.e. uncompressed fasta, gzip fasta, bz2 fasta,
    uncompressed fastq, etc.

    This is not directly executed but is run by the tests themselves.
    '''
    script = 'load-graph.py'
    args = '-x 1e7 -N 2 -k 20 out -'

    infile = utils.get_temp_filename('temp')
    in_dir = os.path.dirname(infile)
    shutil.copyfile(utils.get_test_data(filename), infile)
    (status, out, err) = utils.runscriptredirect(script, args, infile, in_dir)

    if status != 0:
        for line in out:
            print(line)
        for line in err:
            print(line)
        assert status == 0, status

    err.seek(0)
    err = err.read()
    assert 'Total number of unique k-mers: 3960' in err, err

    ht_file = os.path.join(in_dir, 'out.pt')
    assert os.path.exists(ht_file), ht_file

    tagset_file = os.path.join(in_dir, 'out.tagset')
    assert os.path.exists(tagset_file), tagset_file

    ht = khmer.load_hashbits(ht_file)
    ht.load_tagset(tagset_file)

    # check to make sure we get the expected result for this data set
    # upon partitioning (all in one partition). This is kind of a
    # roundabout way of checking that load-graph worked :)
    subset = ht.do_subset_partition(0, 0)
    x = ht.subset_count_partitions(subset)
    assert x == (1, 0), x
def test_screed_streaming_ufa():
# uncompressed fa
o = execute_streaming_diginorm(utils.get_test_data('test-abund-read-2.fa'))
pathstat = os.stat(o)
seqs = [r.sequence for r in screed.open(o)]
assert len(seqs) == 1, seqs
assert seqs[0].startswith('GGTTGACGGGGCTCAGGGGG')
def test_screed_streaming_ufq():
# uncompressed fq
o = execute_streaming_diginorm(utils.get_test_data('test-fastq-reads.fq'))
seqs = [r.sequence for r in screed.open(o)]
assert seqs[0].startswith('CAGGCGCCCACCACCGTGCCCTCCAACCTGATGGT')
def test_screed_streaming_bzipfq():
# bzip compressed fq
o = execute_streaming_diginorm(utils.get_test_data('100-reads.fq.bz2'))
seqs = [r.sequence for r in screed.open(o)]
assert len(seqs) == 100, seqs
assert seqs[0].startswith('CAGGCGCCCACCACCGTGCCCTCCAACCTGATGGT'), seqs
def test_screed_streaming_bzipfa():
# bzip compressed fa
o = execute_streaming_diginorm(
utils.get_test_data('test-abund-read-2.fa.bz2'))
seqs = [r.sequence for r in screed.open(o)]
assert len(seqs) == 1, seqs
assert seqs[0].startswith('GGTTGACGGGGCTCAGGGGG')
@attr('known_failing')
def test_screed_streaming_gzipfq():
# gzip compressed fq
o = execute_streaming_diginorm(utils.get_test_data('100-reads.fq.gz'))
assert os.path.exists(o)
seqs = [r.sequence for r in screed.open(o)]
assert seqs[0].startswith('CAGGCGCCCACCACCGTGCCCTCCAACCTG')
@attr('known_failing')
def test_screed_streaming_gzipfa():
o = execute_streaming_diginorm(
utils.get_test_data('test-abund-read-2.fa.gz'))
assert os.path.exists(o)
seqs = [r.sequence for r in screed.open(o)]
assert seqs[0].startswith('GGTTGACGGGGCTCAGGGG')
def test_read_parser_streaming_ufa():
# uncompressed FASTA
execute_load_graph_streaming(utils.get_test_data('random-20-a.fa'))
def test_read_parser_streaming_ufq():
# uncompressed FASTQ
execute_load_graph_streaming(utils.get_test_data('random-20-a.fq'))
@attr('known_failing')
def test_read_parser_streaming_bzfq():
# bzip compressed FASTQ
execute_load_graph_streaming(utils.get_test_data('random-20-a.fq.bz2'))
def test_read_parser_streaming_gzfq():
# gzip compressed FASTQ
execute_load_graph_streaming(utils.get_test_data('random-20-a.fq.gz'))
@attr('known_failing')
def test_read_parser_streaming_bzfa():
# bzip compressed FASTA
execute_load_graph_streaming(utils.get_test_data('random-20-a.fa.bz2'))
def test_read_parser_streaming_gzfa():
# gzip compressed FASTA
execute_load_graph_streaming(utils.get_test_data('random-20-a.fa.gz'))
def test_readstats():
readstats_output = ("358 bp / 5 seqs; 71.6 average length",
"916 bp / 11 seqs; 83.3 average length")
args = [utils.get_test_data("test-sweep-reads.fq"),
utils.get_test_data("paired-mixed.fq")]
status, out, err = utils.runscript('readstats.py', args)
assert status == 0
for k in readstats_output:
assert k in out, (k, out)
def test_readstats_csv():
readstats_output = ("358,5,71.6," +
utils.get_test_data("test-sweep-reads.fq"),
"916,11,83.3," +
utils.get_test_data("paired-mixed.fq"))
args = [utils.get_test_data("test-sweep-reads.fq"),
utils.get_test_data("paired-mixed.fq"),
'--csv']
status, out, err = utils.runscript('readstats.py', args)
assert status == 0
for k in readstats_output:
assert k in out, (k, out)
def test_readstats_output():
readstats_output = ("358 bp / 5 seqs; 71.6 average length",
"916 bp / 11 seqs; 83.3 average length")
outfile = utils.get_temp_filename('output.txt')
args = ["-o", outfile,
utils.get_test_data("test-sweep-reads.fq"),
utils.get_test_data("paired-mixed.fq")]
status, _, _ = utils.runscript('readstats.py', args)
assert status == 0
out = open(outfile).read()
for k in readstats_output:
assert k in out, (k, out)
def test_readstats_empty():
expected_output = "No sequences found in 2 files"
args = [utils.get_test_data("test-empty.fa"),
utils.get_test_data("test-empty.fa.bz2")]
status, out, err = utils.runscript('readstats.py', args)
assert status == 0
assert expected_output in out
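The strings asserted across the readstats tests all follow one piece of arithmetic: total base pairs divided by sequence count, printed to one decimal place (358 / 5 = 71.6; 916 / 11 rounds to 83.3). A sketch of both output shapes (formats inferred from the assertions, not the script's literal code):

```python
def format_readstats(bp, n_seqs, filename='', csv=False):
    # Average read length to one decimal place, in the plain and
    # --csv shapes the tests check for.
    avg = bp / n_seqs
    if csv:
        return '%d,%d,%.1f,%s' % (bp, n_seqs, avg, filename)
    return '%d bp / %d seqs; %.1f average length' % (bp, n_seqs, avg)

assert format_readstats(358, 5) == '358 bp / 5 seqs; 71.6 average length'
assert format_readstats(916, 11) == '916 bp / 11 seqs; 83.3 average length'
```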
def test_trim_low_abund_1():
infile = utils.get_temp_filename('test.fa')
in_dir = os.path.dirname(infile)
shutil.copyfile(utils.get_test_data('test-abund-read-2.fa'), infile)
args = ["-k", "17", "-x", "1e7", "-N", "2", infile]
utils.runscript('trim-low-abund.py', args, in_dir)
outfile = infile + '.abundtrim'
assert os.path.exists(outfile), outfile
seqs = set([r.sequence for r in screed.open(outfile)])
assert len(seqs) == 1, seqs
assert 'GGTTGACGGGGCTCAGGG' in seqs
def test_trim_low_abund_1_duplicate_filename_err():
infile = utils.get_temp_filename('test.fa')
in_dir = os.path.dirname(infile)
shutil.copyfile(utils.get_test_data('test-abund-read-2.fa'), infile)
args = ["-k", "17", "-x", "1e7", "-N", "2", '-C', '1', infile, infile]
try:
utils.runscript('trim-low-abund.py', args, in_dir)
raise Exception("should not reach this")
except AssertionError:
# an error should be raised by passing 'infile' twice.
pass
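The try/except-plus-manual-raise pattern above works, but unittest's assertRaises context manager states the same intent more directly. Both the `runscript` stand-in and the duplicate-filename check below are hypothetical illustrations, not the harness's real code:

```python
import unittest

class DuplicateInputTest(unittest.TestCase):

    @staticmethod
    def runscript(filenames):
        # Stand-in for utils.runscript: reject duplicate input files,
        # as trim-low-abund.py does for 'infile, infile'.
        assert len(set(filenames)) == len(filenames), 'duplicate input file'

    def test_duplicate_filename_err(self):
        with self.assertRaises(AssertionError):
            self.runscript(['test.fa', 'test.fa'])

suite = unittest.defaultTestLoader.loadTestsFromTestCase(DuplicateInputTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```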
def test_trim_low_abund_2():
infile = utils.get_temp_filename('test.fa')
infile2 = utils.get_temp_filename('test2.fa')
in_dir = os.path.dirname(infile)
shutil.copyfile(utils.get_test_data('test-abund-read-2.fa'), infile)
shutil.copyfile(utils.get_test_data('test-abund-read-2.fa'), infile2)
args = ["-k", "17", "-x", "1e7", "-N", "2", '-C', '1', infile, infile2]
utils.runscript('trim-low-abund.py', args, in_dir)
outfile = infile + '.abundtrim'
assert os.path.exists(outfile), outfile
seqs = set([r.sequence for r in screed.open(outfile)])
assert len(seqs) == 2, seqs
assert 'GGTTGACGGGGCTCAGGG' in seqs
# make sure that FASTQ records are retained.
def test_trim_low_abund_3_fq_retained():
infile = utils.get_temp_filename('test.fq')
infile2 = utils.get_temp_filename('test2.fq')
in_dir = os.path.dirname(infile)
shutil.copyfile(utils.get_test_data('test-abund-read-2.fq'), infile)
shutil.copyfile(utils.get_test_data('test-abund-read-2.fq'), infile2)
args = ["-k", "17", "-x", "1e7", "-N", "2", '-C', '1', infile, infile2]
utils.runscript('trim-low-abund.py', args, in_dir)
outfile = infile + '.abundtrim'
assert os.path.exists(outfile), outfile
seqs = set([r.sequence for r in screed.open(outfile)])
assert len(seqs) == 2, seqs
assert 'GGTTGACGGGGCTCAGGG' in seqs
# check for 'quality' string.
seqs = set([r.quality for r in screed.open(outfile)])
assert len(seqs) == 2, seqs
assert '##################' in seqs
# test that the -V option does not trim sequences that are low abundance
def test_trim_low_abund_4_retain_low_abund():
infile = utils.get_temp_filename('test.fa')
in_dir = os.path.dirname(infile)
shutil.copyfile(utils.get_test_data('test-abund-read-2.fa'), infile)
args = ["-k", "17", "-x", "1e7", "-N", "2", '-V', infile]
utils.runscript('trim-low-abund.py', args, in_dir)
outfile = infile + '.abundtrim'
assert os.path.exists(outfile), outfile
seqs = set([r.sequence for r in screed.open(outfile)])
assert len(seqs) == 2, seqs
assert 'GGTTGACGGGGCTCAGGG' in seqs
# test that the -V option *does* trim sequences that are high abundance
def test_trim_low_abund_5_trim_high_abund():
infile = utils.get_temp_filename('test.fa')
in_dir = os.path.dirname(infile)
shutil.copyfile(utils.get_test_data('test-abund-read-3.fa'), infile)
args = ["-k", "17", "-x", "1e7", "-N", "2", '-V', infile]
utils.runscript('trim-low-abund.py', args, in_dir)
outfile = infile + '.abundtrim'
assert os.path.exists(outfile), outfile
seqs = set([r.sequence for r in screed.open(outfile)])
assert len(seqs) == 2, seqs
# trimmed sequence @ error
assert 'GGTTGACGGGGCTCAGGGGGCGGCTGACTCCGAGAGACAGC' in seqs
# test the -V/-Z combination: trimming should not trigger if -Z is set high enough.
def test_trim_low_abund_6_trim_high_abund_Z():
infile = utils.get_temp_filename('test.fa')
in_dir = os.path.dirname(infile)
shutil.copyfile(utils.get_test_data('test-abund-read-3.fa'), infile)
args = ["-k", "17", "-x", "1e7", "-N", "2", '-V', '-Z', '25', infile]
utils.runscript('trim-low-abund.py', args, in_dir)
outfile = infile + '.abundtrim'
assert os.path.exists(outfile), outfile
seqs = set([r.sequence for r in screed.open(outfile)])
assert len(seqs) == 2, seqs
# untrimmed seq.
badseq = 'GGTTGACGGGGCTCAGGGGGCGGCTGACTCCGAGAGACAGCgtgCCGCAGCTGTCGTCAGGG' \
'GATTTCCGGGCGG'
assert badseq in seqs # should be there, untrimmed
def test_trim_low_abund_keep_paired():
infile = utils.get_temp_filename('test.fa')
in_dir = os.path.dirname(infile)
shutil.copyfile(utils.get_test_data('test-abund-read-2.paired.fq'), infile)
args = ["-k", "17", "-x", "1e7", "-N", "2", "-V", infile]
utils.runscript('trim-low-abund.py', args, in_dir)
outfile = infile + '.abundtrim'
assert os.path.exists(outfile), outfile
seqs = [r.name for r in screed.open(outfile)]
assert seqs[-2:] == ['pair/1', 'pair/2'], seqs
def test_trim_low_abund_keep_paired_casava18():
infile = utils.get_temp_filename('test.fa')
in_dir = os.path.dirname(infile)
shutil.copyfile(utils.get_test_data('test-abund-read-2.paired2.fq'),
infile)
args = ["-k", "17", "-x", "1e7", "-N", "2", "-V", infile]
utils.runscript('trim-low-abund.py', args, in_dir)
outfile = infile + '.abundtrim'
assert os.path.exists(outfile), outfile
seqs = [r.name for r in screed.open(outfile, parse_description=False)]
assert seqs[-2:] == ['pair:foo 1::N', 'pair:foo 2::N'], seqs
def test_trim_low_abund_highfpr():
infile = utils.get_temp_filename('test.fa')
in_dir = os.path.dirname(infile)
shutil.copyfile(utils.get_test_data('test-abund-read-2.paired.fq'), infile)
args = ["-k", "17", "-x", "1", "-N", "1", "-V", infile]
code, out, err = utils.runscript('trim-low-abund.py', args, in_dir,
fail_ok=True)
assert code == 1
assert '** ERROR: the graph structure is too small' in err, err
def test_trim_low_abund_trimtest():
infile = utils.get_temp_filename('test.fa')
in_dir = os.path.dirname(infile)
shutil.copyfile(utils.get_test_data('test-abund-read-2.paired.fq'), infile)
args = ["-k", "17", "-x", "1e7", "-N", "2", "-Z", "2", "-C", "1",
"-V", infile]
utils.runscript('trim-low-abund.py', args, in_dir)
outfile = infile + '.abundtrim'
assert os.path.exists(outfile), outfile
for record in screed.open(outfile):
if record.name == 'seqtrim/1':
print(record.name, record.sequence)
assert record.sequence == \
'GGTTGACGGGGCTCAGGGGGCGGCTGACTCCGAGAGACAGCAGCC'
elif record.name == 'seqtrim/2':
print(record.name, record.sequence)
assert record.sequence == \
'GGTTGACGGGGCTCAGGGGGCGGCTGACTCCGAGAGACAGCAGCCGC'
elif record.name == 'seqtrim2/1':
print(record.name, record.sequence)
assert record.sequence == \
'GGTTGACGGGGCTCAGGGGGCGGCTGACTCCGAGAGACAGCA'
def test_trim_low_abund_trimtest_after_load():
infile = utils.get_temp_filename('test.fa')
in_dir = os.path.dirname(infile)
saved_table = utils.get_temp_filename('save.ct')
shutil.copyfile(utils.get_test_data('test-abund-read-2.paired.fq'), infile)
args = ["-k", "17", "-x", "1e7", "-N", "2", saved_table, infile]
utils.runscript('load-into-counting.py', args, in_dir)
args = ["-Z", "2", "-C", "2", "-V", '--loadtable', saved_table, infile]
utils.runscript('trim-low-abund.py', args, in_dir)
outfile = infile + '.abundtrim'
assert os.path.exists(outfile), outfile
for record in screed.open(outfile):
if record.name == 'seqtrim/1':
print(record.name, record.sequence)
assert record.sequence == \
'GGTTGACGGGGCTCAGGGGGCGGCTGACTCCGAGAGACAGCAGCC'
elif record.name == 'seqtrim/2':
print(record.name, record.sequence)
assert record.sequence == \
'GGTTGACGGGGCTCAGGGGGCGGCTGACTCCGAGAGACAGCAGCCGC'
elif record.name == 'seqtrim2/1':
print(record.name, record.sequence)
assert record.sequence == \
'GGTTGACGGGGCTCAGGGGGCGGCTGACTCCGAGAGACAGCA'
def test_trim_low_abund_trimtest_savetable():
infile = utils.get_temp_filename('test.fa')
in_dir = os.path.dirname(infile)
saved_table = utils.get_temp_filename('save.ct')
shutil.copyfile(utils.get_test_data('test-abund-read-2.paired.fq'), infile)
args = ["-k", "17", "-x", "1e7", "-N", "2",
"-Z", "2", "-C", "2", "-V", '--savetable', saved_table, infile]
utils.runscript('trim-low-abund.py', args, in_dir)
outfile = infile + '.abundtrim'
assert os.path.exists(outfile), outfile
assert os.path.exists(saved_table)
for record in screed.open(outfile):
if record.name == 'seqtrim/1':
print(record.name, record.sequence)
assert record.sequence == \
'GGTTGACGGGGCTCAGGGGGCGGCTGACTCCGAGAGACAGCAGCC'
elif record.name == 'seqtrim/2':
print(record.name, record.sequence)
assert record.sequence == \
'GGTTGACGGGGCTCAGGGGGCGGCTGACTCCGAGAGACAGCAGCCGC'
elif record.name == 'seqtrim2/1':
print(record.name, record.sequence)
assert record.sequence == \
'GGTTGACGGGGCTCAGGGGGCGGCTGACTCCGAGAGACAGCA'
# test that -o/--out option outputs to STDOUT
def test_trim_low_abund_stdout():
infile = utils.get_temp_filename('test.fa')
in_dir = os.path.dirname(infile)
shutil.copyfile(utils.get_test_data('test-abund-read-2.fa'), infile)
args = ["-k", "17", "-x", "1e7", "-N", "2", infile, "-o", "-"]
_, out, err = utils.runscript('trim-low-abund.py', args, in_dir)
assert 'GGTTGACGGGGCTCAGGG' in out
def test_roundtrip_casava_format_1():
# check to make sure that extract-paired-reads produces a file identical
# to the input file when only paired data is given.
infile = utils.get_temp_filename('test.fq')
in_dir = os.path.dirname(infile)
shutil.copyfile(utils.get_test_data('casava_18-pe.fq'), infile)
_, out, err = utils.runscript('extract-paired-reads.py', [infile], in_dir)
r = open(infile).read()
outfile = infile + '.pe'
r2 = open(outfile).read()
assert r == r2, (r, r2)
def test_roundtrip_casava_format_2():
# check that split-paired-reads -> interleave-reads produces a file
# identical to input, when only paired reads are given.
infile = utils.get_temp_filename('test.fq')
outfile = utils.get_temp_filename('test2.fq')
in_dir = os.path.dirname(infile)
shutil.copyfile(utils.get_test_data('casava_18-pe.fq'), infile)
_, out, err = utils.runscript('split-paired-reads.py', [infile], in_dir)
utils.runscript('interleave-reads.py', [infile + '.1',
infile + '.2',
'-o', outfile], in_dir)
r = open(infile).read()
r2 = open(outfile).read()
assert r == r2, (r, r2)
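The split/interleave roundtrip being tested reduces to a simple invariant: de-interleaving a paired stream into /1 and /2 halves and re-interleaving must reproduce the input byte-for-byte when every read is paired. A sketch of that invariant on record lists (hypothetical helpers, not the scripts' code):

```python
def split_pairs(records):
    # De-interleave: even indices are the /1 reads, odd indices the /2 reads.
    return records[0::2], records[1::2]

def interleave(r1, r2):
    # Re-interleave two equal-length streams back into pair order.
    out = []
    for a, b in zip(r1, r2):
        out.extend((a, b))
    return out

reads = ['p1/1', 'p1/2', 'p2/1', 'p2/2']
assert interleave(*split_pairs(reads)) == reads  # lossless roundtrip
```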
def test_existence_failure():
expected_output = 'ERROR: Input file'
args = [utils.get_temp_filename('thisfiledoesnotexistatall')]
status, out, err = utils.runscript(
'extract-paired-reads.py', args, fail_ok=True)
assert status == 1
assert expected_output in err
def test_roundtrip_commented_format():
"""Split/interleave roundtrip for old style format with comments (#873).
This should produce a file identical to the input when only paired
reads are given.
"""
infile = utils.get_temp_filename('test.fq')
outfile = utils.get_temp_filename('test2.fq')
in_dir = os.path.dirname(infile)
shutil.copyfile(utils.get_test_data('old-style-format-w-comments.fq'),
infile)
_, out, err = utils.runscript('split-paired-reads.py', [infile], in_dir)
utils.runscript('interleave-reads.py', [infile + '.1',
infile + '.2',
'-o', outfile], in_dir)
r = open(infile).read()
r2 = open(outfile).read()
assert r == r2, (r, r2)

# ===========================================================================
# test/brainwit/test_info.py -- harpribot/Qbeak (MIT license)
# ===========================================================================
import unittest
from src.utils.handlers import Handler
handle = Handler()
brainwit_handle = handle.wit().brain()
class TestInfo(unittest.TestCase):
def test_about(self):
self.assertEqual(brainwit_handle.process_query("Who is Ubik?")[1], "about")
self.assertEqual(brainwit_handle.process_query("What is Ubik?")[1], "about")
self.assertEqual(brainwit_handle.process_query("Who are you?")[1], "about")
self.assertEqual(brainwit_handle.process_query("What are you?")[1], "about")
self.assertEqual(brainwit_handle.process_query("Who is Ubik")[1], "about")
self.assertEqual(brainwit_handle.process_query("What is Ubik")[1], "about")
self.assertEqual(brainwit_handle.process_query("Who are you")[1], "about")
self.assertEqual(brainwit_handle.process_query("What do you do?")[1], "about")
def test_not_about(self):
self.assertNotEqual(brainwit_handle.process_query("Who is Barack Obama?")[1], "about")
self.assertNotEqual(brainwit_handle.process_query("Who is Donald Trump?")[1], "about")
self.assertNotEqual(brainwit_handle.process_query("What is Phenol?")[1], "about")
self.assertNotEqual(brainwit_handle.process_query("What is NCAA?")[1], "about")
def test_help(self):
self.assertEqual(brainwit_handle.process_query("help me please")[1], "help")
self.assertEqual(brainwit_handle.process_query("help")[1], "help")
self.assertEqual(brainwit_handle.process_query("this is so confusing")[1], "help")
self.assertEqual(brainwit_handle.process_query("i am confused")[1], "help")
self.assertEqual(brainwit_handle.process_query("how to use Ubik?")[1], "help")
self.assertEqual(brainwit_handle.process_query("how to use")[1], "help")
def test_not_help(self):
self.assertNotEqual(brainwit_handle.process_query("Help required. What is 2^20 ?")[1], "help")
self.assertNotEqual(brainwit_handle.process_query("I need some help understanding string theory?")[1],
"help")
self.assertNotEqual(brainwit_handle.process_query("Can anyone help me with boolean circuits?")[1], "help")
self.assertNotEqual(brainwit_handle.process_query("I will be more than happy to help you. "
"So Boolean circuits is yet another "
"theoretical concept.")[1], "help")
def test_statistics(self):
self.assertEqual(brainwit_handle.process_query("ranking")[1], "ranking")
self.assertEqual(brainwit_handle.process_query("my rank")[1], "ranking")
        self.assertEqual(brainwit_handle.process_query("my standing")[1], "ranking")
        self.assertEqual(brainwit_handle.process_query("what is my karma score?")[1], "ranking")
        self.assertEqual(brainwit_handle.process_query("karma score")[1], "ranking")
        self.assertEqual(brainwit_handle.process_query("my karma")[1], "ranking")
        self.assertEqual(brainwit_handle.process_query("karma points")[1], "ranking")
        self.assertEqual(brainwit_handle.process_query("statistics")[1], "ranking")
        self.assertEqual(brainwit_handle.process_query("my karma score please")[1], "ranking")

    def test_not_statistics(self):
        self.assertNotEqual(brainwit_handle.process_query("Help me")[1], "ranking")
        self.assertNotEqual(brainwit_handle.process_query("Define karma?")[1], "ranking")
        self.assertNotEqual(brainwit_handle.process_query("IPL statistics?")[1], "ranking")
        self.assertNotEqual(brainwit_handle.process_query("What is karma?")[1], "ranking")

    def test_greeting(self):
        self.assertEqual(brainwit_handle.process_query("hi")[1], "greeting")
        self.assertEqual(brainwit_handle.process_query("hello")[1], "greeting")
        self.assertEqual(brainwit_handle.process_query("hi ubik")[1], "greeting")
        self.assertEqual(brainwit_handle.process_query("hello ubik")[1], "greeting")
        self.assertEqual(brainwit_handle.process_query("hola")[1], "greeting")
        self.assertEqual(brainwit_handle.process_query("namaste")[1], "greeting")
        self.assertEqual(brainwit_handle.process_query("bonjour")[1], "greeting")
        self.assertEqual(brainwit_handle.process_query("salaam")[1], "greeting")

    def test_not_greeting(self):
        self.assertNotEqual(brainwit_handle.process_query("Hello! Thanks for the question. "
                                                          "I will try my best to answer it. So")[1], "greeting")
        self.assertNotEqual(brainwit_handle.process_query("hi ubik! Can you give me my ranking?")[1], "greeting")
        self.assertNotEqual(brainwit_handle.process_query("hello ubik. My score please")[1], "greeting")

    def test_joke(self):
        self.assertEqual(brainwit_handle.process_query("joke please")[1], "joke")
        self.assertEqual(brainwit_handle.process_query("I am getting bored")[1], "joke")
        self.assertEqual(brainwit_handle.process_query("humor me")[1], "joke")
        self.assertEqual(brainwit_handle.process_query("make me laugh")[1], "joke")

    def test_not_joke(self):
        self.assertNotEqual(brainwit_handle.process_query("this is not a joke")[1], "joke")
        self.assertNotEqual(brainwit_handle.process_query("we should have good humor")[1], "joke")
        self.assertNotEqual(brainwit_handle.process_query("humor takes you long way")[1], "joke")
        self.assertNotEqual(brainwit_handle.process_query("laughter is the best medicine")[1], "joke")
        self.assertNotEqual(brainwit_handle.process_query("she makes me laugh")[1], "joke")
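These assertions pin down a keyword-routing behaviour without showing the implementation. Below is a minimal sketch of a `process_query` that satisfies the cases above; it is a hypothetical stand-in (the real `brainwit_handle` module is outside this chunk), and the rule set is an assumption inferred from the tests.

```python
import re

# Hypothetical keyword router inferred from the assertions above;
# not the actual brainwit_handle implementation.
GREETINGS = {"hi", "hello", "hola", "namaste", "bonjour", "salaam"}

def process_query(query):
    words = re.findall(r"[a-z]+", query.lower())
    wset = set(words)
    # greeting: the entire query is a salutation, optionally addressed to "ubik"
    if words and words[0] in GREETINGS and wset - {words[0]} <= {"ubik"}:
        return query, "greeting"
    # ranking: "karma" plus a possessive/score word, "my standing", or bare "statistics"
    if ("karma" in wset and wset & {"my", "score", "points"}) \
            or ("standing" in wset and "my" in wset) \
            or words == ["statistics"]:
        return query, "ranking"
    # joke: an explicit request rather than a passing mention
    if ("joke" in wset and "not" not in wset) or "bored" in wset \
            or wset >= {"humor", "me"} or wset >= {"make", "me", "laugh"}:
        return query, "joke"
    return query, "unknown"
```

Returning the original query alongside the label mirrors the `[1]` indexing used by the tests.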
| 65.356322 | 115 | 0.686071 | 700 | 5,686 | 5.392857 | 0.164286 | 0.207682 | 0.30596 | 0.378808 | 0.771921 | 0.769272 | 0.762384 | 0.681325 | 0.335099 | 0.222781 | 0 | 0.012354 | 0.174288 | 5,686 | 86 | 116 | 66.116279 | 0.791693 | 0 | 0 | 0 | 0 | 0 | 0.22318 | 0 | 0 | 0 | 0 | 0 | 0.743243 | 1 | 0.135135 | false | 0 | 0.027027 | 0 | 0.175676 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
61ca6b53fdf76c502c7449862c4d5d56d61248f5 | 15,026 | py | Python | corehq/apps/api/odata/tests/test_service.py | kkrampa/commcare-hq | d64d7cad98b240325ad669ccc7effb07721b4d44 | [
"BSD-3-Clause"
] | null | null | null | corehq/apps/api/odata/tests/test_service.py | kkrampa/commcare-hq | d64d7cad98b240325ad669ccc7effb07721b4d44 | [
"BSD-3-Clause"
] | null | null | null | corehq/apps/api/odata/tests/test_service.py | kkrampa/commcare-hq | d64d7cad98b240325ad669ccc7effb07721b4d44 | [
"BSD-3-Clause"
] | null | null | null | from __future__ import absolute_import
from __future__ import unicode_literals
import json
from django.test import TestCase
from django.urls import reverse
from mock import patch
from corehq.apps.api.odata.tests.utils import (
    CaseOdataTestMixin,
    DeprecatedCaseOdataTestMixin,
    DeprecatedFormOdataTestMixin,
    FormOdataTestMixin,
    generate_api_key_from_web_user,
)
from corehq.apps.api.odata.views import (
    ODataCaseServiceView,
    DeprecatedODataCaseServiceView,
    DeprecatedODataFormServiceView,
    ODataFormServiceView,
)
from corehq.apps.domain.models import Domain
from corehq.apps.export.models import CaseExportInstance, FormExportInstance
from corehq.util.test_utils import flag_enabled


class TestDeprecatedCaseServiceDocument(TestCase, DeprecatedCaseOdataTestMixin):
    view_urlname = DeprecatedODataCaseServiceView.urlname

    @classmethod
    def setUpClass(cls):
        super(TestDeprecatedCaseServiceDocument, cls).setUpClass()
        cls._set_up_class()

    @classmethod
    def tearDownClass(cls):
        cls._teardownclass()
        super(TestDeprecatedCaseServiceDocument, cls).tearDownClass()

    def test_no_credentials(self):
        with flag_enabled('ODATA'):
            response = self.client.get(self.view_url)
        self.assertEqual(response.status_code, 401)

    def test_wrong_password(self):
        wrong_credentials = self._get_basic_credentials(self.web_user.username, 'wrong_password')
        with flag_enabled('ODATA'):
            response = self._execute_query(wrong_credentials)
        self.assertEqual(response.status_code, 401)

    def test_wrong_domain(self):
        other_domain = Domain(name='other_domain')
        other_domain.save()
        self.addCleanup(other_domain.delete)
        correct_credentials = self._get_correct_credentials()
        with flag_enabled('ODATA'):
            response = self.client.get(
                reverse(self.view_urlname, kwargs={'domain': other_domain.name}),
                HTTP_AUTHORIZATION='Basic ' + correct_credentials,
            )
        self.assertEqual(response.status_code, 403)

    def test_user_permissions(self):
        self.web_user.set_role(self.domain.name, 'none')
        self.web_user.save()
        self.addCleanup(self._setup_user_permissions)
        correct_credentials = self._get_correct_credentials()
        with flag_enabled('ODATA'):
            response = self._execute_query(correct_credentials)
        self.assertEqual(response.status_code, 403)

    def test_missing_feature_flag(self):
        correct_credentials = self._get_correct_credentials()
        response = self._execute_query(correct_credentials)
        self.assertEqual(response.status_code, 404)

    def test_no_case_types(self):
        correct_credentials = self._get_correct_credentials()
        with flag_enabled('ODATA'):
            with patch('corehq.apps.api.odata.views.get_case_types_for_domain_es', return_value=set()):
                response = self._execute_query(correct_credentials)
        self.assertEqual(response.status_code, 200)
        self.assertEqual(
            json.loads(response.content.decode('utf-8')),
            {"@odata.context": "http://localhost:8000/a/test_domain/api/v0.5/odata/Cases/$metadata", "value": []}
        )

    def test_with_case_types(self):
        correct_credentials = self._get_correct_credentials()
        with flag_enabled('ODATA'):
            with patch(
                'corehq.apps.api.odata.views.get_case_types_for_domain_es',
                return_value=['case_type_1', 'case_type_2'],  # return ordered iterable for deterministic test
            ):
                response = self._execute_query(correct_credentials)
        self.assertEqual(response.status_code, 200)
        self.assertEqual(
            json.loads(response.content.decode('utf-8')),
            {
                "@odata.context": "http://localhost:8000/a/test_domain/api/v0.5/odata/Cases/$metadata",
                "value": [
                    {'kind': 'EntitySet', 'name': 'case_type_1', 'url': 'case_type_1'},
                    {'kind': 'EntitySet', 'name': 'case_type_2', 'url': 'case_type_2'},
                ],
            }
        )


class TestDeprecatedCaseServiceDocumentUsingApiKey(TestDeprecatedCaseServiceDocument):

    @classmethod
    def setUpClass(cls):
        super(TestDeprecatedCaseServiceDocumentUsingApiKey, cls).setUpClass()
        cls.api_key = generate_api_key_from_web_user(cls.web_user)

    @classmethod
    def _get_correct_credentials(cls):
        return TestDeprecatedCaseServiceDocumentUsingApiKey._get_basic_credentials(cls.web_user.username, cls.api_key.key)


@flag_enabled('TWO_FACTOR_SUPERUSER_ROLLOUT')
class TestDeprecatedCaseServiceDocumentWithTwoFactorUsingApiKey(TestDeprecatedCaseServiceDocumentUsingApiKey):
    pass


class TestDeprecatedFormServiceDocument(TestCase, DeprecatedFormOdataTestMixin):
    view_urlname = DeprecatedODataFormServiceView.urlname

    @classmethod
    def setUpClass(cls):
        super(TestDeprecatedFormServiceDocument, cls).setUpClass()
        cls._set_up_class()

    @classmethod
    def tearDownClass(cls):
        cls._teardownclass()
        super(TestDeprecatedFormServiceDocument, cls).tearDownClass()

    def test_no_credentials(self):
        with flag_enabled('ODATA'):
            response = self.client.get(self.view_url)
        self.assertEqual(response.status_code, 401)

    def test_wrong_password(self):
        wrong_credentials = self._get_basic_credentials(self.web_user.username, 'wrong_password')
        with flag_enabled('ODATA'):
            response = self._execute_query(wrong_credentials)
        self.assertEqual(response.status_code, 401)

    def test_wrong_domain(self):
        other_domain = Domain(name='other_domain')
        other_domain.save()
        self.addCleanup(other_domain.delete)
        correct_credentials = self._get_correct_credentials()
        with flag_enabled('ODATA'):
            response = self.client.get(
                reverse(self.view_urlname, kwargs={'domain': other_domain.name, 'app_id': 'my_app_id'}),
                HTTP_AUTHORIZATION='Basic ' + correct_credentials,
            )
        self.assertEqual(response.status_code, 403)

    def test_user_permissions(self):
        self.web_user.set_role(self.domain.name, 'none')
        self.web_user.save()
        self.addCleanup(self._setup_user_permissions)
        correct_credentials = self._get_correct_credentials()
        with flag_enabled('ODATA'):
            response = self._execute_query(correct_credentials)
        self.assertEqual(response.status_code, 403)

    def test_missing_feature_flag(self):
        correct_credentials = self._get_correct_credentials()
        response = self._execute_query(correct_credentials)
        self.assertEqual(response.status_code, 404)

    def test_no_xmlnss(self):
        correct_credentials = self._get_correct_credentials()
        with flag_enabled('ODATA'):
            with patch('corehq.apps.api.odata.views.get_xmlns_by_app', return_value=[]):
                response = self._execute_query(correct_credentials)
        self.assertEqual(response.status_code, 200)
        self.assertEqual(
            json.loads(response.content.decode('utf-8')),
            {
                "@odata.context": "http://localhost:8000/a/test_domain/api/v0.5/odata/Forms/my_app_id/$metadata",
                "value": [],
            }
        )

    def test_with_xmlnss(self):
        correct_credentials = self._get_correct_credentials()
        with flag_enabled('ODATA'):
            with patch('corehq.apps.api.odata.views.get_xmlns_by_app', return_value=['xmlns_1', 'xmlns_2']):
                response = self._execute_query(correct_credentials)
        self.assertEqual(response.status_code, 200)
        self.assertEqual(
            json.loads(response.content.decode('utf-8')),
            {
                "@odata.context": "http://localhost:8000/a/test_domain/api/v0.5/odata/Forms/my_app_id/$metadata",
                "value": [
                    {'kind': 'EntitySet', 'name': 'xmlns_1', 'url': 'xmlns_1'},
                    {'kind': 'EntitySet', 'name': 'xmlns_2', 'url': 'xmlns_2'},
                ],
            }
        )


class TestDeprecatedFormServiceDocumentUsingApiKey(TestDeprecatedFormServiceDocument):

    @classmethod
    def setUpClass(cls):
        super(TestDeprecatedFormServiceDocumentUsingApiKey, cls).setUpClass()
        cls.api_key = generate_api_key_from_web_user(cls.web_user)

    @classmethod
    def _get_correct_credentials(cls):
        return TestDeprecatedFormServiceDocumentUsingApiKey._get_basic_credentials(cls.web_user.username, cls.api_key.key)


@flag_enabled('TWO_FACTOR_SUPERUSER_ROLLOUT')
class TestDeprecatedFormServiceDocumentWithTwoFactorUsingApiKey(TestDeprecatedFormServiceDocumentUsingApiKey):
    pass


class TestCaseServiceDocument(TestCase, CaseOdataTestMixin):
    view_urlname = ODataCaseServiceView.urlname

    @classmethod
    def setUpClass(cls):
        super(TestCaseServiceDocument, cls).setUpClass()
        cls._set_up_class()

    @classmethod
    def tearDownClass(cls):
        cls._teardownclass()
        super(TestCaseServiceDocument, cls).tearDownClass()

    def test_no_credentials(self):
        response = self.client.get(self.view_url)
        self.assertEqual(response.status_code, 401)

    def test_wrong_password(self):
        wrong_credentials = self._get_basic_credentials(self.web_user.username, 'wrong_password')
        response = self._execute_query(wrong_credentials)
        self.assertEqual(response.status_code, 401)

    def test_wrong_domain(self):
        other_domain = Domain(name='other_domain')
        other_domain.save()
        self.addCleanup(other_domain.delete)
        correct_credentials = self._get_correct_credentials()
        response = self.client.get(
            reverse(self.view_urlname, kwargs={'domain': other_domain.name, 'config_id': 'my_config_id'}),
            HTTP_AUTHORIZATION='Basic ' + correct_credentials,
        )
        self.assertEqual(response.status_code, 403)

    def test_user_permissions(self):
        self.web_user.set_role(self.domain.name, 'none')
        self.web_user.save()
        self.addCleanup(self._setup_user_permissions)
        correct_credentials = self._get_correct_credentials()
        with flag_enabled('ODATA'):
            response = self._execute_query(correct_credentials)
        self.assertEqual(response.status_code, 403)

    def test_missing_feature_flag(self):
        correct_credentials = self._get_correct_credentials()
        response = self._execute_query(correct_credentials)
        self.assertEqual(response.status_code, 404)

    def test_successful_request(self):
        correct_credentials = self._get_correct_credentials()
        with flag_enabled('ODATA'):
            response = self._execute_query(correct_credentials)
        self.assertEqual(response.status_code, 200)
        self.assertEqual(response['OData-Version'], '4.0')
        self.assertEqual(json.loads(response.content.decode('utf-8')), {
            '@odata.context': 'http://localhost:8000/a/test_domain/api/v0.5/odata/cases/my_config_id/$metadata',
            'value': [{
                'name': 'feed',
                'kind': 'EntitySet',
                'url': 'feed',
            }],
        })


class TestCaseServiceDocumentUsingApiKey(TestCaseServiceDocument):

    @classmethod
    def setUpClass(cls):
        super(TestCaseServiceDocumentUsingApiKey, cls).setUpClass()
        cls.api_key = generate_api_key_from_web_user(cls.web_user)

    @classmethod
    def _get_correct_credentials(cls):
        return cls._get_basic_credentials(cls.web_user.username, cls.api_key.key)


@flag_enabled('TWO_FACTOR_SUPERUSER_ROLLOUT')
class TestCaseServiceDocumentWithTwoFactorUsingApiKey(TestCaseServiceDocumentUsingApiKey):
    pass


class TestFormServiceDocument(TestCase, FormOdataTestMixin):
    view_urlname = ODataFormServiceView.urlname

    @classmethod
    def setUpClass(cls):
        super(TestFormServiceDocument, cls).setUpClass()
        cls._set_up_class()

    @classmethod
    def tearDownClass(cls):
        cls._teardownclass()
        super(TestFormServiceDocument, cls).tearDownClass()

    def test_no_credentials(self):
        response = self.client.get(self.view_url)
        self.assertEqual(response.status_code, 401)

    def test_wrong_password(self):
        wrong_credentials = self._get_basic_credentials(self.web_user.username, 'wrong_password')
        response = self._execute_query(wrong_credentials)
        self.assertEqual(response.status_code, 401)

    def test_wrong_domain(self):
        other_domain = Domain(name='other_domain')
        other_domain.save()
        self.addCleanup(other_domain.delete)
        correct_credentials = self._get_correct_credentials()
        response = self.client.get(
            reverse(self.view_urlname, kwargs={'domain': other_domain.name, 'config_id': 'my_config_id'}),
            HTTP_AUTHORIZATION='Basic ' + correct_credentials,
        )
        self.assertEqual(response.status_code, 403)

    def test_user_permissions(self):
        self.web_user.set_role(self.domain.name, 'none')
        self.web_user.save()
        self.addCleanup(self._setup_user_permissions)
        correct_credentials = self._get_correct_credentials()
        with flag_enabled('ODATA'):
            response = self._execute_query(correct_credentials)
        self.assertEqual(response.status_code, 403)

    def test_missing_feature_flag(self):
        correct_credentials = self._get_correct_credentials()
        response = self._execute_query(correct_credentials)
        self.assertEqual(response.status_code, 404)

    def test_successful_request(self):
        correct_credentials = self._get_correct_credentials()
        with flag_enabled('ODATA'):
            response = self._execute_query(correct_credentials)
        self.assertEqual(response.status_code, 200)
        self.assertEqual(response['OData-Version'], '4.0')
        self.assertEqual(json.loads(response.content.decode('utf-8')), {
            '@odata.context': 'http://localhost:8000/a/test_domain/api/v0.5/odata/forms/my_config_id/$metadata',
            'value': [{
                'name': 'feed',
                'kind': 'EntitySet',
                'url': 'feed',
            }],
        })


class TestFormServiceDocumentUsingApiKey(TestFormServiceDocument):

    @classmethod
    def setUpClass(cls):
        super(TestFormServiceDocumentUsingApiKey, cls).setUpClass()
        cls.api_key = generate_api_key_from_web_user(cls.web_user)

    @classmethod
    def _get_correct_credentials(cls):
        return cls._get_basic_credentials(cls.web_user.username, cls.api_key.key)


@flag_enabled('TWO_FACTOR_SUPERUSER_ROLLOUT')
class TestFormServiceDocumentWithTwoFactorUsingApiKey(TestFormServiceDocumentUsingApiKey):
    pass
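Every request in these tests authenticates with an `HTTP_AUTHORIZATION='Basic ' + credentials` header built by the mixins' `_get_basic_credentials`. That helper is defined outside this file; the sketch below shows the standard way such a token is produced (an assumption about the helper's behaviour, not its actual source).

```python
import base64

# Standard HTTP Basic token construction: base64("username:secret").
# make_basic_credentials is a hypothetical stand-in for the mixin helper.
def make_basic_credentials(username, secret):
    token = base64.b64encode("{}:{}".format(username, secret).encode("utf-8"))
    return token.decode("utf-8")

# Used as in the tests above:
header = "Basic " + make_basic_credentials("test_user", "wrong_password")
```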
| 38.234097 | 122 | 0.691069 | 1,564 | 15,026 | 6.329923 | 0.09335 | 0.105455 | 0.08 | 0.076162 | 0.783131 | 0.756263 | 0.73798 | 0.73798 | 0.73798 | 0.73798 | 0 | 0.01144 | 0.208838 | 15,026 | 392 | 123 | 38.331633 | 0.821332 | 0.003061 | 0 | 0.719745 | 0 | 0.019108 | 0.098745 | 0.020831 | 0 | 0 | 0 | 0 | 0.10828 | 1 | 0.133758 | false | 0.038217 | 0.035032 | 0.012739 | 0.232484 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4ef3f1408faf1ce222b65176e95a6fd61941f3fc | 30 | py | Python | src/Parser/SLR1/__init__.py | ChalkCode/cool-compiler-2021 | 9bab662676c3281b496aad63228583db4a7244db | [
"MIT"
] | null | null | null | src/Parser/SLR1/__init__.py | ChalkCode/cool-compiler-2021 | 9bab662676c3281b496aad63228583db4a7244db | [
"MIT"
] | null | null | null | src/Parser/SLR1/__init__.py | ChalkCode/cool-compiler-2021 | 9bab662676c3281b496aad63228583db4a7244db | [
"MIT"
] | 1 | 2022-02-24T17:16:42.000Z | 2022-02-24T17:16:42.000Z | from .parser import SLR1Parser | 30 | 30 | 0.866667 | 4 | 30 | 6.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.037037 | 0.1 | 30 | 1 | 30 | 30 | 0.925926 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f6156411e200176ab05d5df5618cd39b5e1b46c2 | 47 | py | Python | cellbase/helper/__init__.py | imjp94/cellbase | 80353caee3ca0878f0f1ee8580209f5064dbdf1d | [
"MIT"
] | null | null | null | cellbase/helper/__init__.py | imjp94/cellbase | 80353caee3ca0878f0f1ee8580209f5064dbdf1d | [
"MIT"
] | null | null | null | cellbase/helper/__init__.py | imjp94/cellbase | 80353caee3ca0878f0f1ee8580209f5064dbdf1d | [
"MIT"
] | null | null | null | from .helper import DAO, Entity, CellFormatter
| 23.5 | 46 | 0.808511 | 6 | 47 | 6.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12766 | 47 | 1 | 47 | 47 | 0.926829 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f640ec784854c65381879cf612dff63252badd84 | 23 | py | Python | third_party/__init__.py | qdmy/detectron2 | 0a74634d804f64409770bab082b6501f2ac57641 | [
"Apache-2.0"
] | 3 | 2020-07-15T07:46:06.000Z | 2022-03-06T08:03:59.000Z | third_party/__init__.py | qdmy/detectron2 | 0a74634d804f64409770bab082b6501f2ac57641 | [
"Apache-2.0"
] | 1 | 2021-06-16T07:01:43.000Z | 2021-11-09T03:48:47.000Z | third_party/__init__.py | blueardour/detectron2 | 424383ad1c08f1399ea2cc5b28b13ed9bfc55dfc | [
"Apache-2.0"
] | null | null | null |
from . import layers
| 5.75 | 20 | 0.695652 | 3 | 23 | 5.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.26087 | 23 | 3 | 21 | 7.666667 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f671c2c0001a5346b5dad0d05eaeafc5ab333719 | 187 | py | Python | environment/lib/python3.8/site-packages/numba/targets/__init__.py | 123972/PCA-nutricion | aff3c51a71c887c3fa367dbf9d599be5915c80cc | [
"MIT"
] | null | null | null | environment/lib/python3.8/site-packages/numba/targets/__init__.py | 123972/PCA-nutricion | aff3c51a71c887c3fa367dbf9d599be5915c80cc | [
"MIT"
] | 2 | 2021-05-11T16:00:55.000Z | 2021-08-23T20:45:22.000Z | environment/lib/python3.8/site-packages/numba/targets/__init__.py | 123972/PCA-nutricion | aff3c51a71c887c3fa367dbf9d599be5915c80cc | [
"MIT"
] | null | null | null | import sys
from numba.core.errors import _MovedModule
from numba.misc import quicksort
sys.modules[__name__] = _MovedModule(locals(), None)
sys.modules[__name__].quicksort = quicksort
| 20.777778 | 52 | 0.802139 | 24 | 187 | 5.833333 | 0.541667 | 0.128571 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.106952 | 187 | 8 | 53 | 23.375 | 0.838323 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.6 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9caa4f7f49a397fe32e80b14f2afb1ab51c5da42 | 83 | py | Python | ledfx_frontend/__init__.py | broccoliboy/LedFx | 1c90d5c3ddaf993a072eab92d3e373dd3b0fb45c | [
"MIT"
] | 524 | 2020-12-18T19:34:55.000Z | 2022-03-31T14:52:25.000Z | ledfx_frontend/__init__.py | broccoliboy/LedFx | 1c90d5c3ddaf993a072eab92d3e373dd3b0fb45c | [
"MIT"
] | 119 | 2020-12-18T21:28:12.000Z | 2022-03-31T14:44:02.000Z | ledfx_frontend/__init__.py | broccoliboy/LedFx | 1c90d5c3ddaf993a072eab92d3e373dd3b0fb45c | [
"MIT"
] | 85 | 2020-12-18T18:23:16.000Z | 2022-03-29T16:37:52.000Z | """ledfx_frontend"""
import os
def where():
return os.path.dirname(__file__)
| 11.857143 | 36 | 0.686747 | 11 | 83 | 4.727273 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.156627 | 83 | 6 | 37 | 13.833333 | 0.742857 | 0.168675 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 6 |
9cb518351052d48a21ce78e79153ee7c245aaf57 | 5,901 | py | Python | tables/Table3/table_creation.py | OmnesRes/onco_lnc | e8d20e43026ffe4651bd25783db36cabc2c1519f | [
"MIT"
] | 33 | 2016-06-03T17:19:58.000Z | 2021-07-08T03:09:40.000Z | tables/Table3/table_creation.py | OmnesRes/onco_lnc | e8d20e43026ffe4651bd25783db36cabc2c1519f | [
"MIT"
] | 3 | 2016-07-13T23:12:18.000Z | 2016-09-15T19:35:22.000Z | tables/Table3/table_creation.py | OmnesRes/onco_lnc | e8d20e43026ffe4651bd25783db36cabc2c1519f | [
"MIT"
] | 19 | 2016-04-13T15:12:29.000Z | 2021-07-08T03:11:19.000Z | ##A script for creating a table
## Load necessary modules
import os
BASE_DIR = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
##load data for each cancer, find total genes in oncolnc, get patient info
names=['BLCA','BRCA','CESC','COAD','GBM','HNSC','KIRC','KIRP','LAML','LGG','LIHC','LUAD','LUSC','OV',
       'READ','SKCM','STAD','UCEC']
all_cancers=[]
for name in names:
    ##total genes in oncolnc is the number of lines in coeffs_pvalues.txt
    with open(os.path.join(BASE_DIR,'lncrna','cox',name,'coeffs_pvalues.txt')) as f:
        genes_in_oncolnc=len(f.readlines())
    ##the second line of patient_info.txt holds the patient summary values
    with open(os.path.join(BASE_DIR,'lncrna','cox',name,'patient_info.txt')) as f:
        f.readline()
        data=f.readline().strip().split()
    all_cancers.append([genes_in_oncolnc]+data)
f=open('table_3.txt','w')
for i,j in zip(all_cancers,names):
f.write(j)
f.write('\t')
##write total patients (add males and females)
f.write(str(int(i[2])+int(i[3])))
f.write('\t')
##write male/female
f.write(i[2]+'/'+i[3])
f.write('\t')
##write average age at diagnosis
f.write(i[1])
f.write('\t')
##write events
f.write(i[4])
f.write('\t')
##write median survival
f.write(i[5])
f.write('\t')
##write genes in oncolnc
f.write(str(i[0]))
f.write('\t')
f.write('\n')
f.close()
| 29.505 | 102 | 0.699712 | 1,059 | 5,901 | 3.75543 | 0.084986 | 0.066633 | 0.133769 | 0.099573 | 0.874277 | 0.874277 | 0.867237 | 0.867237 | 0.855922 | 0.855922 | 0 | 0.001638 | 0.068632 | 5,901 | 199 | 103 | 29.653266 | 0.72198 | 0.045755 | 0 | 0.530201 | 0 | 0 | 0.208527 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.006711 | 0 | 0.006711 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9cb6929328b3eabf4e48ed2fd937d565465178d2 | 99 | py | Python | DCNN-Pytorch/deepracing_models/endtoend_controls/__init__.py | linklab-uva/deepracing | fc25c47658277df029e7399d295d97a75fe85216 | [
"Apache-2.0"
] | 11 | 2020-06-29T15:21:37.000Z | 2021-04-12T00:42:26.000Z | DCNN-Pytorch/deepracing_models/endtoend_controls/__init__.py | linklab-uva/deepracing | fc25c47658277df029e7399d295d97a75fe85216 | [
"Apache-2.0"
] | null | null | null | DCNN-Pytorch/deepracing_models/endtoend_controls/__init__.py | linklab-uva/deepracing | fc25c47658277df029e7399d295d97a75fe85216 | [
"Apache-2.0"
] | 4 | 2019-01-23T23:36:57.000Z | 2021-07-02T00:18:37.000Z | from .EndToEndPurePursuit import AdmiralNetPurePursuitController as AdmiralNetPurePursuitController | 99 | 99 | 0.939394 | 6 | 99 | 15.5 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.050505 | 99 | 1 | 99 | 99 | 0.989362 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
140a585d99613c98e6e2bfb44b243bd829a5d873 | 34 | py | Python | src/devcenter/azext_devcenter/generated/_help.py | tbyfield/azure-cli-extensions | e7e5f37fdcea3afb5c4aecb61fa72eac72c2128e | [
"MIT"
] | null | null | null | src/devcenter/azext_devcenter/generated/_help.py | tbyfield/azure-cli-extensions | e7e5f37fdcea3afb5c4aecb61fa72eac72c2128e | [
"MIT"
] | null | null | null | src/devcenter/azext_devcenter/generated/_help.py | tbyfield/azure-cli-extensions | e7e5f37fdcea3afb5c4aecb61fa72eac72c2128e | [
"MIT"
] | 1 | 2022-02-14T21:43:29.000Z | 2022-02-14T21:43:29.000Z | from knack.help_files import helps | 34 | 34 | 0.882353 | 6 | 34 | 4.833333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088235 | 34 | 1 | 34 | 34 | 0.935484 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
144b494fb19d11988b98fbefd34dfcd40e17be8b | 4,082 | py | Python | ja_timex/pattern/duration.py | po3rin/ja-timex | 98c855690577cfd997ec32416dff50e75b1c6c27 | [
"MIT"
] | 2 | 2021-08-08T14:03:49.000Z | 2021-08-08T14:14:31.000Z | ja_timex/pattern/duration.py | po3rin/ja-timex | 98c855690577cfd997ec32416dff50e75b1c6c27 | [
"MIT"
] | null | null | null | ja_timex/pattern/duration.py | po3rin/ja-timex | 98c855690577cfd997ec32416dff50e75b1c6c27 | [
"MIT"
] | null | null | null | import re
from ja_timex.pattern.place import Pattern, Place
from ja_timex.tag import TIMEX
def parse_p(re_match: re.Match, pattern: Pattern) -> TIMEX:
args = re_match.groupdict()
span = re_match.span()
if args.get("half_suffix"):
value_suffix = ".5"
else:
value_suffix = ""
    # Case of a duration expression in date units (years/months/weeks/days)
value = "P"
if "year" in args:
value += args["year"] + value_suffix + "Y"
if "month" in args:
value += args["month"] + value_suffix + "M"
if "week" in args:
value += args["week"] + value_suffix + "W"
if "day" in args:
value += args["day"] + value_suffix + "D"
return TIMEX(
type="DURATION",
value=value,
text=re_match.group(),
parsed=args,
span=span,
pattern=pattern,
)
def parse_pt(re_match: re.Match, pattern: Pattern) -> TIMEX:
args = re_match.groupdict()
span = re_match.span()
    # Case of a duration expression in time units (hours/minutes/seconds)
if args.get("half_suffix"):
value_suffix = ".5"
else:
value_suffix = ""
value = "PT"
if "hour" in args:
value += args["hour"] + value_suffix + "H"
if "minute" in args:
value += args["minute"] + value_suffix + "M"
if "second" in args:
value += args["second"] + value_suffix + "S"
if "second_with_ms" in args:
if "秒" in args["second_with_ms"]:
value += args["second_with_ms"].replace("秒", ".") + "S"
else:
value += args["second_with_ms"] + value_suffix + "S"
return TIMEX(
type="DURATION",
value=value,
text=re_match.group(),
parsed=args,
span=span,
pattern=pattern,
)
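Both parsers assemble an ISO 8601 duration designator ("P1Y2M", "PT1H30M") from the regex's named groups. Here is a standalone sketch of that assembly step using plain `re`, independent of ja_timex's `Pattern`/`Place` machinery (the regex below is a simplified assumption, not the library's actual pattern):

```python
import re

# Simplified time-duration regex: optional hour/minute/second named groups.
DURATION_RE = re.compile(r"((?P<hour>\d+)時間)?((?P<minute>\d+)分)?((?P<second>\d+)秒)?")

def to_iso8601_time(text):
    m = DURATION_RE.match(text)
    # keep only the groups that actually matched
    args = {k: v for k, v in m.groupdict().items() if v is not None}
    value = "PT"  # "PT" prefixes the time portion of an ISO 8601 duration
    for key, unit in (("hour", "H"), ("minute", "M"), ("second", "S")):
        if key in args:
            value += args[key] + unit
    return value
```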
p = Place()
patterns = []
patterns += [
    # P: calendar durations (years/months/weeks/days)
    Pattern(
        re_pattern=f"{p.year}年(間)?",
        parse_func=parse_p,
        option={},
    ),
    Pattern(
        re_pattern=f"{p.month}[ヶかカケ箇]?月(間)?",
        parse_func=parse_p,
        option={},
    ),
    Pattern(
        re_pattern=f"{p.week}週(間)?",
        parse_func=parse_p,
        option={},
    ),
    Pattern(
        re_pattern=f"{p.day}日(間)?",
        parse_func=parse_p,
        option={},
    ),
    Pattern(
        re_pattern=f"{p.year}年{p.month}[ヶかカケ箇]月(間)?",
        parse_func=parse_p,
        option={},
    ),
    Pattern(
        re_pattern=f"{p.year}年{p.month}[ヶかカケ箇]月{p.day}日(間)?",
        parse_func=parse_p,
        option={},
    ),
    # PT: clock-time durations (hours/minutes/seconds)
    Pattern(
        re_pattern=f"{p.hour}時間",
        parse_func=parse_pt,
        option={},
    ),
    Pattern(
        re_pattern=f"{p.minute}分(間)?",
        parse_func=parse_pt,
        option={},
    ),
    Pattern(
        re_pattern=f"{p.second}秒(間)?",
        parse_func=parse_pt,
        option={},
    ),
    Pattern(
        re_pattern=f"{p.second_with_ms}",
        parse_func=parse_pt,
        option={},
    ),
    Pattern(
        re_pattern=f"{p.hour}時間{p.minute}分(間)?",
        parse_func=parse_pt,
        option={},
    ),
    Pattern(
        re_pattern=f"{p.hour}時間{p.minute}分{p.second}秒(間)?",
        parse_func=parse_pt,
        option={},
    ),
    Pattern(
        re_pattern=f"{p.minute}分{p.second}秒(間)?",
        parse_func=parse_pt,
        option={},
    ),
]
patterns += [
    # P: calendar durations with the 半 ("half") suffix
    Pattern(
        re_pattern=f"{p.year}年{p.half_suffix}",
        parse_func=parse_p,
        option={},
    ),
    Pattern(
        re_pattern=f"{p.month}[ヶかカケ箇]?月{p.half_suffix}",
        parse_func=parse_p,
        option={},
    ),
    Pattern(
        re_pattern=f"{p.week}週(間)?{p.half_suffix}",
        parse_func=parse_p,
        option={},
    ),
    Pattern(
        re_pattern=f"{p.day}日{p.half_suffix}",
        parse_func=parse_p,
        option={},
    ),
    # PT: clock-time durations with the 半 suffix
    Pattern(
        re_pattern=f"{p.hour}時間{p.half_suffix}",
        parse_func=parse_pt,
        option={},
    ),
    Pattern(
        re_pattern=f"{p.minute}分{p.half_suffix}",
        parse_func=parse_pt,
        option={},
    ),
    Pattern(
        re_pattern=f"{p.second}秒{p.half_suffix}",
        parse_func=parse_pt,
        option={},
    ),
]
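The tables above pair regexes built from `Place` fields with `parse_p`/`parse_pt` so that each match can be assembled into an ISO 8601 duration value. As a self-contained sketch of the same mechanism (the `DURATION_RE` regex and `to_iso8601_duration` helper below are illustrative stand-ins, not the library's API):

```python
import re

# Hypothetical stand-in for a Place-built pattern: hours, optional minutes,
# and an optional trailing 半 ("half") suffix.
DURATION_RE = re.compile(r"(?P<hour>\d+)時間((?P<minute>\d+)分)?(?P<half_suffix>半)?")


def to_iso8601_duration(text: str) -> str:
    """Map a Japanese clock-time duration phrase to an ISO 8601 value."""
    m = DURATION_RE.fullmatch(text)
    if m is None:
        raise ValueError(f"not a recognized duration: {text!r}")
    args = m.groupdict()
    suffix = ".5" if args.get("half_suffix") else ""  # 半 adds half a unit
    value = "PT"
    if args.get("hour"):
        value += args["hour"] + suffix + "H"
    if args.get("minute"):
        value += args["minute"] + suffix + "M"
    return value


print(to_iso8601_duration("1時間30分"))  # PT1H30M
print(to_iso8601_duration("2時間半"))    # PT2.5H
```

Like `parse_pt`, the sketch appends one designator per named group that matched, so the same function covers both the plain and the 半-suffixed pattern variants.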
| 22.184783 | 67 | 0.516169 | 535 | 4,082 | 3.745794 | 0.121495 | 0.08982 | 0.159681 | 0.169661 | 0.766467 | 0.73503 | 0.73503 | 0.723054 | 0.706587 | 0.658184 | 0 | 0.000717 | 0.316512 | 4,082 | 183 | 68 | 22.306011 | 0.717563 | 0.008819 | 0 | 0.68125 | 0 | 0.025 | 0.161139 | 0.093564 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0125 | false | 0 | 0.01875 | 0 | 0.04375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
14543bd15530e91439d02a93dc4887e5d77a52cf | 4,684 | py | Python | bigsi/tests/graph/test_index.py | Phelimb/bfg | bf34abbb9d6f72a9f0c64c40eefc44d810a2502e | ["MIT"] | 109 | 2017-12-13T12:25:40.000Z | 2021-08-18T08:35:44.000Z | bigsi/tests/graph/test_index.py | Phelimb/bfg | bf34abbb9d6f72a9f0c64c40eefc44d810a2502e | ["MIT"] | 25 | 2017-12-14T04:03:46.000Z | 2021-11-04T11:50:34.000Z | bigsi/tests/graph/test_index.py | Phelimb/bfg | bf34abbb9d6f72a9f0c64c40eefc44d810a2502e | ["MIT"] | 20 | 2017-12-22T02:14:13.000Z | 2021-02-01T02:49:02.000Z |

from bigsi.matrix import BitMatrix
from bigsi.bloom import BloomFilter
from bigsi.graph.index import KmerSignatureIndex
from bitarray import bitarray
import pytest
from bigsi.utils import convert_query_kmers
from bigsi.tests.base import get_test_storages


def get_storages():
    return get_test_storages()


def test_lookup1():
    bloomfilter_size = 250
    number_hash_functions = 3
    kmers1 = ["ATC", "ATG", "ATA", "ATT"]
    kmers2 = ["ATC", "ATG", "ATA", "TTT"]
    bloomfilter1 = BloomFilter(bloomfilter_size, number_hash_functions).update(
        convert_query_kmers(kmers1)
    )  # canonical
    bloomfilter2 = BloomFilter(bloomfilter_size, number_hash_functions).update(
        convert_query_kmers(kmers2)
    )
    bloomfilters = [bloomfilter1.bitarray, bloomfilter2.bitarray]
    for storage in get_storages():
        storage.delete_all()
        KmerSignatureIndex.create(
            storage, bloomfilters, bloomfilter_size, number_hash_functions
        )
        ksi = KmerSignatureIndex(storage)
        assert ksi.lookup(["ATC"]) == {"ATC": bitarray("11")}
        assert ksi.lookup(["ATC", "ATC", "ATT"]) == {
            "ATC": bitarray("11"),
            "ATT": bitarray("10"),
        }
        assert ksi.lookup(["ATC", "ATC", "ATT", "TTT"]) == {
            "ATC": bitarray("11"),
            "ATT": bitarray("10"),
            "TTT": bitarray("01"),
        }


def test_lookup2():
    bloomfilter_size = 2500
    number_hash_functions = 2
    kmers1 = ["ATC", "ATG", "ATA", "ATT"]
    kmers2 = ["ATC", "ATG", "ATA", "TTT"]
    bloomfilter1 = BloomFilter(bloomfilter_size, number_hash_functions).update(
        convert_query_kmers(kmers1)
    )
    bloomfilter2 = BloomFilter(bloomfilter_size, number_hash_functions).update(
        convert_query_kmers(kmers2)
    )
    bloomfilters = [bloomfilter1, bloomfilter2]
    for storage in get_storages():
        storage.delete_all()
        ksi = KmerSignatureIndex.create(
            storage, bloomfilters, bloomfilter_size, number_hash_functions
        )
        assert ksi.lookup(["ATC"]) == {"ATC": bitarray("11")}
        assert ksi.lookup(["ATC", "ATC", "ATT"]) == {
            "ATC": bitarray("11"),
            "ATT": bitarray("10"),
        }
        assert ksi.lookup(["ATC", "ATC", "ATT", "TTT"]) == {
            "ATC": bitarray("11"),
            "ATT": bitarray("10"),
            "TTT": bitarray("01"),
        }


def test_lookup3():
    bloomfilter_size = 250
    number_hash_functions = 1
    kmers1 = ["ATC", "ATG", "ATA", "ATT"]
    kmers2 = ["ATC", "ATG", "ATA", "TTT"]
    bloomfilter1 = BloomFilter(bloomfilter_size, number_hash_functions).update(
        convert_query_kmers(kmers1)
    )
    bloomfilter2 = BloomFilter(bloomfilter_size, number_hash_functions).update(
        convert_query_kmers(kmers2)
    )
    bloomfilters = [bloomfilter1, bloomfilter2]
    for storage in get_storages():
        storage.delete_all()
        ksi = KmerSignatureIndex.create(
            storage, bloomfilters, bloomfilter_size, number_hash_functions
        )
        assert ksi.lookup(["ATC"]) == {"ATC": bitarray("11")}
        assert ksi.lookup(["ATC", "ATC", "ATT"]) == {
            "ATC": bitarray("11"),
            "ATT": bitarray("10"),
        }
        assert ksi.lookup(["ATC", "ATC", "ATT", "TTT"]) == {
            "ATC": bitarray("11"),
            "ATT": bitarray("10"),
            "TTT": bitarray("01"),
        }


def test_merge():
    bloomfilter_size = 250
    number_hash_functions = 1
    kmers1 = ["ATC", "ATG", "ATA", "ATT"]
    kmers2 = ["ATC", "ATG", "ATA", "TTT"]
    bloomfilter1 = BloomFilter(bloomfilter_size, number_hash_functions).update(
        convert_query_kmers(kmers1)
    )
    bloomfilter2 = BloomFilter(bloomfilter_size, number_hash_functions).update(
        convert_query_kmers(kmers2)
    )
    bloomfilters = [bloomfilter1, bloomfilter2]
    for storage in get_storages():
        storage.delete_all()
        ksi1 = KmerSignatureIndex.create(
            storage, bloomfilters, bloomfilter_size, number_hash_functions
        )
        ksi2 = KmerSignatureIndex.create(
            storage, bloomfilters, bloomfilter_size, number_hash_functions
        )
        ksi1.merge_indexes(ksi2)
        assert ksi1.lookup(["ATC"]) == {"ATC": bitarray("11" * 2)}
        assert ksi1.lookup(["ATC", "ATC", "ATT"]) == {
            "ATC": bitarray("11" * 2),
            "ATT": bitarray("10" * 2),
        }
        assert ksi1.lookup(["ATC", "ATC", "ATT", "TTT"]) == {
            "ATC": bitarray("11" * 2),
            "ATT": bitarray("10" * 2),
            "TTT": bitarray("01" * 2),
        }
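These tests rely on `BloomFilter(size, num_hashes).update(items)` behaving as a probabilistic set: each item sets `num_hashes` bit positions, so membership lookups can false-positive but never false-negative. A minimal stdlib-only sketch of that structure (bigsi's real `BloomFilter` is backed by `bitarray` and uses its own hash scheme; `TinyBloomFilter` here is purely illustrative):

```python
import hashlib


class TinyBloomFilter:
    def __init__(self, size: int, num_hashes: int) -> None:
        self.size = size
        self.num_hashes = num_hashes
        self.bits = [False] * size

    def _positions(self, item: str):
        # Derive num_hashes positions by seeding a cryptographic hash.
        for seed in range(self.num_hashes):
            digest = hashlib.sha256(f"{seed}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def update(self, items):
        for item in items:
            for pos in self._positions(item):
                self.bits[pos] = True
        return self  # mirrors the chained .update(...) style used above

    def __contains__(self, item: str) -> bool:
        return all(self.bits[pos] for pos in self._positions(item))


bf = TinyBloomFilter(250, 3).update(["ATC", "ATG", "ATA", "ATT"])
assert "ATC" in bf  # inserted items are always reported present
# "TTT" was not inserted; with 250 bits it is very likely reported absent,
# but a Bloom filter only guarantees the absence of false negatives.
```

The `KmerSignatureIndex` built in the tests stores one such filter per sample column, which is why each lookup returns a per-k-mer bit vector like `bitarray("10")` (present in sample 1, absent in sample 2).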
| 33.697842 | 79 | 0.591588 | 475 | 4,684 | 5.650526 | 0.134737 | 0.095007 | 0.120343 | 0.121088 | 0.836811 | 0.818182 | 0.804396 | 0.779434 | 0.737332 | 0.651267 | 0 | 0.032555 | 0.258967 | 4,684 | 138 | 80 | 33.942029 | 0.740709 | 0.001921 | 0 | 0.609756 | 0 | 0 | 0.068692 | 0 | 0 | 0 | 0 | 0 | 0.097561 | 1 | 0.04065 | false | 0 | 0.056911 | 0.00813 | 0.105691 | 0.00813 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
148b6c49d3c40acc94f9f0b3425365714fcc66fc | 92 | py | Python | gateflow/externals/curv/__init__.py | MasiCal354/gateflow | 7a8ae2ff53f08b50727af7dc543054dac606f020 | ["Apache-2.0"] | 1 | 2021-01-06T11:29:04.000Z | 2021-01-06T11:29:04.000Z | gateflow/externals/curv/__init__.py | MasiCal354/gateflow | 7a8ae2ff53f08b50727af7dc543054dac606f020 | ["Apache-2.0"] | null | null | null | gateflow/externals/curv/__init__.py | MasiCal354/gateflow | 7a8ae2ff53f08b50727af7dc543054dac606f020 | ["Apache-2.0"] | null | null | null |

from .client import *
from .dataframe import *
from .exceptions import *
from .auth import *
 | 23 | 25 | 0.75 | 12 | 92 | 5.75 | 0.5 | 0.434783 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.163043 | 92 | 4 | 26 | 23 | 0.896104 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
148db97ce8ca2bcbcb811a6aa4ae7d3109fb3a98 | 22 | py | Python | genderdecoder/__init__.py | tws4793/gender-decoder | 617329cdd87e7df20a1650327d184d744635733f | ["MIT"] | null | null | null | genderdecoder/__init__.py | tws4793/gender-decoder | 617329cdd87e7df20a1650327d184d744635733f | ["MIT"] | null | null | null | genderdecoder/__init__.py | tws4793/gender-decoder | 617329cdd87e7df20a1650327d184d744635733f | ["MIT"] | null | null | null |

from .combine import *
 | 22 | 22 | 0.772727 | 3 | 22 | 5.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.136364 | 22 | 1 | 22 | 22 | 0.894737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
149ac5ab9e6db6d008097c38ab9700c735c3944d | 6,886 | py | Python | tests/test_parsers/test_google_parser.py | jepperaskdk/doctest | b084223b971c4f52e8a97f7623729d8c256c10e4 | ["MIT"] | 1 | 2022-02-25T14:30:57.000Z | 2022-02-25T14:30:57.000Z | tests/test_parsers/test_google_parser.py | jepperaskdk/doctest | b084223b971c4f52e8a97f7623729d8c256c10e4 | ["MIT"] | 13 | 2021-05-23T06:59:45.000Z | 2021-10-14T06:10:10.000Z | tests/test_parsers/test_google_parser.py | jepperaskdk/pydoctest | b084223b971c4f52e8a97f7623729d8c256c10e4 | ["MIT"] | null | null | null |

import pydoc
import pytest

from pydoctest.parsers.google_parser import GoogleParser
from pydoctest.exceptions import ParseException

import tests.test_class.incorrect_class


class TestGoogleParser():
    def test_parse_exception_get_parameters(self) -> None:
        parser = GoogleParser()
        doc = pydoc.getdoc(tests.test_class.incorrect_class.IncorrectTestClass.func_parse_exception)
        with pytest.raises(ParseException) as exc_info:
            parser.get_parameters(doc, tests.test_class.incorrect_class)

    def test_parse_exception_get_return_type(self) -> None:
        parser = GoogleParser()
        doc = pydoc.getdoc(tests.test_class.incorrect_class.IncorrectTestClass.func_parse_exception)
        with pytest.raises(ParseException) as exc_info:
            parser.get_return_type(doc, tests.test_class.incorrect_class)

    def test_get_exceptions_raised(self) -> None:
        parser = GoogleParser()
        doc = pydoc.getdoc(tests.test_class.incorrect_class.IncorrectTestClass.func_parse_exception)
        with pytest.raises(ParseException) as exc_info:
            parser.get_exceptions_raised(doc)

    def test_empty_func(self) -> None:
        parser = GoogleParser()
        doc = pydoc.getdoc(tests.test_class.correct_class.CorrectTestClass.empty_func)
        arguments = parser.get_parameters(doc, tests.test_class.correct_class)
        assert len(arguments) == 0, "GoogleParser failed assertion"
        return_type = parser.get_return_type(doc, tests.test_class.correct_class)
        assert return_type == type(None), "GoogleParser failed assertion"

    def test_func_returns_none(self) -> None:
        parser = GoogleParser()
        doc = pydoc.getdoc(tests.test_class.correct_class.CorrectTestClass.func_returns_none)
        arguments = parser.get_parameters(doc, tests.test_class.correct_class)
        assert len(arguments) == 1, "GoogleParser failed assertion"
        assert arguments[0].type == int, "GoogleParser failed assertion"
        return_type = parser.get_return_type(doc, tests.test_class.correct_class)
        assert return_type == type(None), "GoogleParser failed assertion"

    def test_func_returns_int(self) -> None:
        parser = GoogleParser()
        doc = pydoc.getdoc(tests.test_class.correct_class.CorrectTestClass.func_returns_int)
        arguments = parser.get_parameters(doc, tests.test_class.correct_class)
        assert len(arguments) == 0, "GoogleParser failed assertion"
        return_type = parser.get_return_type(doc, tests.test_class.correct_class)
        assert return_type == int, "GoogleParser failed assertion"

    def test_func_has_arg_returns_arg(self) -> None:
        parser = GoogleParser()
        doc = pydoc.getdoc(tests.test_class.correct_class.CorrectTestClass.func_has_arg_returns_arg)
        arguments = parser.get_parameters(doc, tests.test_class.correct_class)
        assert len(arguments) == 1, "GoogleParser failed assertion"
        assert arguments[0].type == int, "GoogleParser failed assertion"
        return_type = parser.get_return_type(doc, tests.test_class.correct_class)
        assert return_type == float, "GoogleParser failed assertion"

    def test_func_has_raises_doc(self) -> None:
        parser = GoogleParser()
        doc = pydoc.getdoc(tests.test_class.correct_class.CorrectTestClass.func_has_raises_doc)
        arguments = parser.get_parameters(doc, tests.test_class.correct_class)
        assert len(arguments) == 1, "GoogleParser failed assertion"
        assert arguments[0].type == int, "GoogleParser failed assertion"
        return_type = parser.get_return_type(doc, tests.test_class.correct_class)
        assert return_type == int, "GoogleParser failed assertion"

    def test_func_with_multiline_summary(self) -> None:
        parser = GoogleParser()
        doc = pydoc.getdoc(tests.test_class.correct_class.CorrectTestClass.func_with_multiline_summary)
        arguments = parser.get_parameters(doc, tests.test_class.correct_class)
        assert len(arguments) == 1, "GoogleParser failed assertion"
        assert arguments[0].type == int, "GoogleParser failed assertion"
        return_type = parser.get_return_type(doc, tests.test_class.correct_class)
        assert return_type == int, "GoogleParser failed assertion"

    def test_get_summary_multiline_summary(self) -> None:
        parser = GoogleParser()
        doc = pydoc.getdoc(tests.test_class.correct_class.CorrectTestClass.func_with_multiline_summary)
        summary = parser.get_summary(doc, tests.test_class.correct_class)
        assert summary is not None
        assert len(summary) > 0, "GoogleParser failed assertion"
        assert len([x for x in summary if x == '\n']) > 1, "GoogleParser failed assertion"

    def test_get_summary_empty_summary(self) -> None:
        parser = GoogleParser()
        doc = pydoc.getdoc(tests.test_class.correct_class.CorrectTestClass.func_no_summary)
        arguments = parser.get_parameters(doc, tests.test_class.correct_class)
        assert len(arguments) == 0, "GoogleParser failed assertion"
        return_type = parser.get_return_type(doc, tests.test_class.correct_class)
        assert return_type == type(None), "GoogleParser failed assertion"
        summary = parser.get_summary(doc, tests.test_class.correct_class)
        assert summary is None, "GoogleParser failed assertion"

    def test_func_with_raise_and_args_and_return(self) -> None:
        parser = GoogleParser()
        doc = pydoc.getdoc(tests.test_class.raises_class.RaisesClass.func_with_raise_and_args_and_return)
        actual_exceptions = parser.get_exceptions_raised(doc)
        expected_exceptions = ['RuntimeError', 'ValueError', 'IndexError']
        assert len(expected_exceptions) == len(actual_exceptions)
        intersection = set(expected_exceptions) - set(actual_exceptions)
        assert len(intersection) == 0

    def test_func_with_raise_and_args(self) -> None:
        parser = GoogleParser()
        doc = pydoc.getdoc(tests.test_class.raises_class.RaisesClass.func_with_raise_and_args)
        actual_exceptions = parser.get_exceptions_raised(doc)
        expected_exceptions = ['RuntimeError', 'ValueError', 'IndexError']
        assert len(expected_exceptions) == len(actual_exceptions)
        intersection = set(expected_exceptions) - set(actual_exceptions)
        assert len(intersection) == 0

    def test_func_with_raise(self) -> None:
        parser = GoogleParser()
        doc = pydoc.getdoc(tests.test_class.raises_class.RaisesClass.func_with_raise)
        actual_exceptions = parser.get_exceptions_raised(doc)
        expected_exceptions = ['RuntimeError', 'ValueError', 'IndexError']
        assert len(expected_exceptions) == len(actual_exceptions)
        intersection = set(expected_exceptions) - set(actual_exceptions)
        assert len(intersection) == 0
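These tests exercise `GoogleParser`'s extraction of parameter types, a return type, and raised exception names from a Google-style docstring. A rough stdlib-only sketch of that kind of extraction (the regexes and the `parse_google_docstring` helper are illustrative, not pydoctest's actual implementation):

```python
import re

DOC = """Divides a by b.

Args:
    a (int): numerator
    b (int): denominator

Raises:
    ZeroDivisionError: if b is zero

Returns:
    float: the quotient
"""


def parse_google_docstring(doc: str) -> dict:
    # "name (type):" lines under Args:
    args = dict(re.findall(r"^\s+(\w+) \(([\w\.\[\] ,]+)\):", doc, re.M))
    # "type:" line directly under Returns:
    returns = re.search(r"Returns:\n\s+([\w\.\[\] ,]+):", doc)
    # Exception names under Raises:
    raises = re.findall(r"^\s+(\w+Error|\w+Exception):", doc, re.M)
    return {
        "args": args,
        "returns": returns.group(1) if returns else None,
        "raises": raises,
    }


print(parse_google_docstring(DOC))
# {'args': {'a': 'int', 'b': 'int'}, 'returns': 'float', 'raises': ['ZeroDivisionError']}
```

A parser built this way fails loudly on malformed sections, which is what the `ParseException` tests at the top of the class check for in pydoctest.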
| 50.262774 | 105 | 0.728725 | 838 | 6,886 | 5.713604 | 0.085919 | 0.06203 | 0.096491 | 0.105263 | 0.938388 | 0.907059 | 0.907059 | 0.897243 | 0.847744 | 0.847744 | 0 | 0.002843 | 0.18269 | 6,886 | 136 | 106 | 50.632353 | 0.847903 | 0 | 0 | 0.654206 | 0 | 0 | 0.102672 | 0 | 0 | 0 | 0 | 0 | 0.261682 | 1 | 0.130841 | false | 0 | 0.046729 | 0 | 0.186916 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
14a2d52be1c2a43fbbb6fbf59f92b1c97a0205bc | 185 | py | Python | swerve/database/dbpassword.py | Swerved/Hulkster | a7b5fd9cefe17032d3a738247cb2633d3ecddb31 | ["blessing"] | null | null | null | swerve/database/dbpassword.py | Swerved/Hulkster | a7b5fd9cefe17032d3a738247cb2633d3ecddb31 | ["blessing"] | null | null | null | swerve/database/dbpassword.py | Swerved/Hulkster | a7b5fd9cefe17032d3a738247cb2633d3ecddb31 | ["blessing"] | null | null | null |

"""
Database Passwords
Requires PWPPY.dll for PWP Databases
"""
from swerve.core import commandline


def pwp():
    return commandline.cli_get_value_for_argument("pwp")
| 14.230769 | 57 | 0.708108 | 23 | 185 | 5.521739 | 0.826087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.205405 | 185 | 12 | 58 | 15.416667 | 0.863946 | 0.302703 | 0 | 0 | 0 | 0 | 0.027778 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |