hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
be511c24332a24f0e68bea70c9ae61478b823261 | 30 | py | Python | larning/setup.py | tasigabi97/larning | 6489bb47c9d6bc08e58349d4cff621d108a3cb0a | [
"MIT"
] | null | null | null | larning/setup.py | tasigabi97/larning | 6489bb47c9d6bc08e58349d4cff621d108a3cb0a | [
"MIT"
] | null | null | null | larning/setup.py | tasigabi97/larning | 6489bb47c9d6bc08e58349d4cff621d108a3cb0a | [
"MIT"
] | null | null | null | from larning.setup_i import *
| 15 | 29 | 0.8 | 5 | 30 | 4.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 30 | 1 | 30 | 30 | 0.884615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
be954e72d3b10c97cfbf28cdae7ef832d5aefc1b | 186 | py | Python | web/frontend/views.py | dzionek/todo-list | cc71cdf01f4fc0339bbe7ebd490c6527a4e86919 | [
"MIT"
] | 1 | 2020-10-09T21:06:43.000Z | 2020-10-09T21:06:43.000Z | web/frontend/views.py | dzionek/todo-list | cc71cdf01f4fc0339bbe7ebd490c6527a4e86919 | [
"MIT"
] | 2 | 2020-09-18T10:10:55.000Z | 2020-09-25T20:02:05.000Z | frontend/views.py | dzionek/social-media-predict | 5e20f0285d423fc4e46a9612eb96617646713cee | [
"MIT"
] | null | null | null | from django.http import HttpResponse, HttpRequest
from django.shortcuts import render
def index(request: HttpRequest) -> HttpResponse:
return render(request, 'frontend/index.html') | 31 | 49 | 0.795699 | 22 | 186 | 6.727273 | 0.636364 | 0.135135 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.11828 | 186 | 6 | 50 | 31 | 0.902439 | 0 | 0 | 0 | 0 | 0 | 0.101604 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.5 | 0.25 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
be9c142bfb316fea0001f03def3a615a73c1147d | 19,735 | py | Python | tests/api/v2_2_2_3/test_site_design.py | oboehmer/dnacentersdk | 25c4e99900640deee91a56aa886874d9cb0ca960 | [
"MIT"
] | 32 | 2019-09-05T05:16:56.000Z | 2022-03-22T09:50:38.000Z | tests/api/v2_2_2_3/test_site_design.py | oboehmer/dnacentersdk | 25c4e99900640deee91a56aa886874d9cb0ca960 | [
"MIT"
] | 35 | 2019-09-07T18:58:54.000Z | 2022-03-24T19:29:36.000Z | tests/api/v2_2_2_3/test_site_design.py | oboehmer/dnacentersdk | 25c4e99900640deee91a56aa886874d9cb0ca960 | [
"MIT"
] | 18 | 2019-09-09T11:07:21.000Z | 2022-03-25T08:49:59.000Z | # -*- coding: utf-8 -*-
"""DNACenterAPI site_design API fixtures and tests.
Copyright (c) 2019-2021 Cisco Systems.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
"""
import pytest
from fastjsonschema.exceptions import JsonSchemaException
from dnacentersdk.exceptions import MalformedRequest
from tests.environment import DNA_CENTER_VERSION
pytestmark = pytest.mark.skipif(DNA_CENTER_VERSION != '2.2.2.3', reason='version does not match')
def is_valid_provision_nfv(json_schema_validate, obj):
json_schema_validate('jsd_cc72e307e5df50c48ce57370f27395a0_v2_2_2_3').validate(obj)
return True
def provision_nfv(api):
endpoint_result = api.site_design.provision_nfv(
active_validation=True,
payload=None,
provisioning=[{'site': {'siteProfileName': 'string', 'area': {'name': 'string', 'parentName': 'string'}, 'building': {'name': 'string', 'address': 'string', 'latitude': 0, 'longitude': 0, 'parentName': 'string'}, 'floor': {'name': 'string', 'parentName': 'string', 'rfModel': 'string', 'width': 0, 'length': 0, 'height': 0}}, 'device': [{'ip': 'string', 'deviceSerialNumber': 'string', 'tagName': 'string', 'serviceProviders': [{'serviceProvider': 'string', 'wanInterface': {'ipAddress': 'string', 'interfaceName': 'string', 'subnetmask': 'string', 'bandwidth': 'string', 'gateway': 'string'}}], 'services': [{'type': 'string', 'mode': 'string', 'systemIp': 'string', 'centralManagerIP': 'string', 'centralRegistrationKey': 'string', 'commonKey': 'string', 'adminPasswordHash': 'string', 'disk': 'string'}], 'vlan': [{'type': 'string', 'id': 'string', 'interfaces': 'string', 'network': 'string'}], 'subPools': [{'type': 'string', 'name': 'string', 'ipSubnet': 'string', 'gateway': 'string', 'parentPoolName': 'string'}], 'customNetworks': [{'name': 'string', 'port': 'string', 'ipAddressPool': 'string'}], 'templateParam': {'nfvis': {'var1': 'string'}, 'asav': {'var1': 'string'}}}]}],
siteProfile=[{'siteProfileName': 'string', 'device': [{'deviceType': 'string', 'tagName': 'string', 'serviceProviders': [{'serviceProvider': 'string', 'linkType': 'string', 'connect': True, 'defaultGateway': True}], 'dia': True, 'services': [{'type': 'string', 'profile': 'string', 'mode': 'string', 'name': 'string', 'imageName': 'string', 'topology': {'type': 'string', 'name': 'string', 'assignIp': 'string'}}], 'customServices': [{'name': 'string', 'applicationType': 'string', 'profile': 'string', 'topology': {'type': 'string', 'name': 'string', 'assignIp': 'string'}, 'imageName': 'string'}], 'customNetworks': [{'name': 'string', 'servicesToConnect': [{'service': 'string'}], 'connectionType': 'string', 'networkMode': 'string', 'vlan': 'string'}], 'vlan': [{'type': 'string', 'id': 'string'}], 'customTemplate': [{'deviceType': 'string', 'template': 'string'}]}]}]
)
return endpoint_result
@pytest.mark.site_design
def test_provision_nfv(api, validator):
try:
assert is_valid_provision_nfv(
validator,
provision_nfv(api)
)
except Exception as original_e:
with pytest.raises((JsonSchemaException, MalformedRequest)):
print(original_e)
raise original_e
def provision_nfv_default(api):
endpoint_result = api.site_design.provision_nfv(
active_validation=True,
payload=None,
provisioning=None,
siteProfile=None
)
return endpoint_result
@pytest.mark.site_design
def test_provision_nfv_default(api, validator):
try:
assert is_valid_provision_nfv(
validator,
provision_nfv_default(api)
)
except Exception as original_e:
with pytest.raises((JsonSchemaException, MalformedRequest, TypeError)):
raise original_e
def is_valid_get_device_details_by_ip(json_schema_validate, obj):
json_schema_validate('jsd_2bfde206eb445821a5722511f138814a_v2_2_2_3').validate(obj)
return True
def get_device_details_by_ip(api):
endpoint_result = api.site_design.get_device_details_by_ip(
device_ip='string'
)
return endpoint_result
@pytest.mark.site_design
def test_get_device_details_by_ip(api, validator):
try:
assert is_valid_get_device_details_by_ip(
validator,
get_device_details_by_ip(api)
)
except Exception as original_e:
with pytest.raises((JsonSchemaException, MalformedRequest)):
print(original_e)
raise original_e
def get_device_details_by_ip_default(api):
endpoint_result = api.site_design.get_device_details_by_ip(
device_ip=None
)
return endpoint_result
@pytest.mark.site_design
def test_get_device_details_by_ip_default(api, validator):
try:
assert is_valid_get_device_details_by_ip(
validator,
get_device_details_by_ip_default(api)
)
except Exception as original_e:
with pytest.raises((JsonSchemaException, MalformedRequest, TypeError)):
raise original_e
def is_valid_nfv_provisioning_detail(json_schema_validate, obj):
json_schema_validate('jsd_497d9ccfce8451809129ec5de42c5048_v2_2_2_3').validate(obj)
return True
def nfv_provisioning_detail(api):
endpoint_result = api.site_design.nfv_provisioning_detail(
active_validation=True,
device_ip='string',
payload=None
)
return endpoint_result
@pytest.mark.site_design
def test_nfv_provisioning_detail(api, validator):
try:
assert is_valid_nfv_provisioning_detail(
validator,
nfv_provisioning_detail(api)
)
except Exception as original_e:
with pytest.raises((JsonSchemaException, MalformedRequest)):
print(original_e)
raise original_e
def nfv_provisioning_detail_default(api):
endpoint_result = api.site_design.nfv_provisioning_detail(
active_validation=True,
device_ip=None,
payload=None
)
return endpoint_result
@pytest.mark.site_design
def test_nfv_provisioning_detail_default(api, validator):
try:
assert is_valid_nfv_provisioning_detail(
validator,
nfv_provisioning_detail_default(api)
)
except Exception as original_e:
with pytest.raises((JsonSchemaException, MalformedRequest, TypeError)):
raise original_e
def is_valid_create_nfv_profile(json_schema_validate, obj):
json_schema_validate('jsd_d2a712eb315650618d475db5de0aabec_v2_2_2_3').validate(obj)
return True
def create_nfv_profile(api):
endpoint_result = api.site_design.create_nfv_profile(
active_validation=True,
device=[{'deviceType': 'string', 'deviceTag': 'string', 'serviceProviderProfile': [{'serviceProvider': 'string', 'linkType': 'string', 'connect': True, 'connectDefaultGatewayOnWan': True}], 'directInternetAccessForFirewall': True, 'services': [{'serviceType': 'string', 'profileType': 'string', 'serviceName': 'string', 'imageName': 'string', 'vNicMapping': [{'networkType': 'string', 'assignIpAddressToNetwork': 'string'}], 'firewallMode': 'string'}], 'customNetworks': [{'networkName': 'string', 'servicesToConnect': [{'serviceName': 'string'}], 'connectionType': 'string', 'vlanMode': 'string', 'vlanId': 0}], 'vlanForL2': [{'vlanType': 'string', 'vlanId': 0, 'vlanDescription': 'string'}], 'customTemplate': [{'deviceType': 'string', 'template': 'string', 'templateType': 'string'}]}],
payload=None,
profileName='string'
)
return endpoint_result
@pytest.mark.site_design
def test_create_nfv_profile(api, validator):
try:
assert is_valid_create_nfv_profile(
validator,
create_nfv_profile(api)
)
except Exception as original_e:
with pytest.raises((JsonSchemaException, MalformedRequest)):
print(original_e)
raise original_e
def create_nfv_profile_default(api):
endpoint_result = api.site_design.create_nfv_profile(
active_validation=True,
device=None,
payload=None,
profileName=None
)
return endpoint_result
@pytest.mark.site_design
def test_create_nfv_profile_default(api, validator):
try:
assert is_valid_create_nfv_profile(
validator,
create_nfv_profile_default(api)
)
except Exception as original_e:
with pytest.raises((JsonSchemaException, MalformedRequest, TypeError)):
raise original_e
def is_valid_update_nfv_profile(json_schema_validate, obj):
json_schema_validate('jsd_159612e2202e5f7586e68778ed7772b1_v2_2_2_3').validate(obj)
return True
def update_nfv_profile(api):
endpoint_result = api.site_design.update_nfv_profile(
active_validation=True,
device=[{'deviceTag': 'string', 'directInternetAccessForFirewall': True, 'services': [{'serviceType': 'string', 'profileType': 'string', 'serviceName': 'string', 'imageName': 'string', 'vNicMapping': [{'networkType': 'string', 'assignIpAddressToNetwork': 'string'}], 'firewallMode': 'string'}], 'customNetworks': [{'networkName': 'string', 'servicesToConnect': [{'serviceName': 'string'}], 'connectionType': 'string', 'vlanMode': 'string', 'vlanId': 0}], 'vlanForL2': [{'vlanType': 'string', 'vlanId': 0, 'vlanDescription': 'string'}], 'customTemplate': [{'deviceType': 'string', 'template': 'string', 'templateType': 'string'}], 'currentDeviceTag': 'string'}],
id='string',
name='string',
payload=None
)
return endpoint_result
@pytest.mark.site_design
def test_update_nfv_profile(api, validator):
try:
assert is_valid_update_nfv_profile(
validator,
update_nfv_profile(api)
)
except Exception as original_e:
with pytest.raises((JsonSchemaException, MalformedRequest)):
print(original_e)
raise original_e
def update_nfv_profile_default(api):
endpoint_result = api.site_design.update_nfv_profile(
active_validation=True,
device=None,
id='string',
name=None,
payload=None
)
return endpoint_result
@pytest.mark.site_design
def test_update_nfv_profile_default(api, validator):
try:
assert is_valid_update_nfv_profile(
validator,
update_nfv_profile_default(api)
)
except Exception as original_e:
with pytest.raises((JsonSchemaException, MalformedRequest, TypeError)):
raise original_e
def is_valid_get_nfv_profile(json_schema_validate, obj):
json_schema_validate('jsd_f50579d855255df89ab3545de9745545_v2_2_2_3').validate(obj)
return True
def get_nfv_profile(api):
endpoint_result = api.site_design.get_nfv_profile(
id='string',
limit='string',
name='string',
offset='string'
)
return endpoint_result
@pytest.mark.site_design
def test_get_nfv_profile(api, validator):
try:
assert is_valid_get_nfv_profile(
validator,
get_nfv_profile(api)
)
except Exception as original_e:
with pytest.raises((JsonSchemaException, MalformedRequest)):
print(original_e)
raise original_e
def get_nfv_profile_default(api):
endpoint_result = api.site_design.get_nfv_profile(
id='string',
limit=None,
name=None,
offset=None
)
return endpoint_result
@pytest.mark.site_design
def test_get_nfv_profile_default(api, validator):
try:
assert is_valid_get_nfv_profile(
validator,
get_nfv_profile_default(api)
)
except Exception as original_e:
with pytest.raises((JsonSchemaException, MalformedRequest, TypeError)):
raise original_e
def is_valid_delete_nfv_profile(json_schema_validate, obj):
json_schema_validate('jsd_89252bcefb205d26b9aced6dc6d8c269_v2_2_2_3').validate(obj)
return True
def delete_nfv_profile(api):
endpoint_result = api.site_design.delete_nfv_profile(
id='string',
name='string'
)
return endpoint_result
@pytest.mark.site_design
def test_delete_nfv_profile(api, validator):
try:
assert is_valid_delete_nfv_profile(
validator,
delete_nfv_profile(api)
)
except Exception as original_e:
with pytest.raises((JsonSchemaException, MalformedRequest)):
print(original_e)
raise original_e
def delete_nfv_profile_default(api):
endpoint_result = api.site_design.delete_nfv_profile(
id='string',
name=None
)
return endpoint_result
@pytest.mark.site_design
def test_delete_nfv_profile_default(api, validator):
try:
assert is_valid_delete_nfv_profile(
validator,
delete_nfv_profile_default(api)
)
except Exception as original_e:
with pytest.raises((JsonSchemaException, MalformedRequest, TypeError)):
raise original_e
def is_valid_create_floormap(json_schema_validate, obj):
json_schema_validate('jsd_311c1c51662f583485311df0a0c29a3f_v2_2_2_3').validate(obj)
return True
def create_floormap(api):
endpoint_result = api.site_design.create_floormap(
active_validation=True,
payload=None
)
return endpoint_result
@pytest.mark.site_design
def test_create_floormap(api, validator):
try:
assert is_valid_create_floormap(
validator,
create_floormap(api)
)
except Exception as original_e:
with pytest.raises((JsonSchemaException, MalformedRequest)):
print(original_e)
raise original_e
def create_floormap_default(api):
endpoint_result = api.site_design.create_floormap(
active_validation=True,
payload=None
)
return endpoint_result
@pytest.mark.site_design
def test_create_floormap_default(api, validator):
try:
assert is_valid_create_floormap(
validator,
create_floormap_default(api)
)
except Exception as original_e:
with pytest.raises((JsonSchemaException, MalformedRequest, TypeError)):
raise original_e
def is_valid_get_floormaps(json_schema_validate, obj):
json_schema_validate('jsd_7c78410e9dcf52e4a1e686811904597e_v2_2_2_3').validate(obj)
return True
def get_floormaps(api):
endpoint_result = api.site_design.get_floormaps(
)
return endpoint_result
@pytest.mark.site_design
def test_get_floormaps(api, validator):
try:
assert is_valid_get_floormaps(
validator,
get_floormaps(api)
)
except Exception as original_e:
with pytest.raises((JsonSchemaException, MalformedRequest)):
print(original_e)
raise original_e
def get_floormaps_default(api):
endpoint_result = api.site_design.get_floormaps(
)
return endpoint_result
@pytest.mark.site_design
def test_get_floormaps_default(api, validator):
try:
assert is_valid_get_floormaps(
validator,
get_floormaps_default(api)
)
except Exception as original_e:
with pytest.raises((JsonSchemaException, MalformedRequest, TypeError)):
raise original_e
def is_valid_delete_floormap(json_schema_validate, obj):
json_schema_validate('jsd_96a80b69435c55e480c18fa89cab061a_v2_2_2_3').validate(obj)
return True
def delete_floormap(api):
endpoint_result = api.site_design.delete_floormap(
floor_id='string'
)
return endpoint_result
@pytest.mark.site_design
def test_delete_floormap(api, validator):
try:
assert is_valid_delete_floormap(
validator,
delete_floormap(api)
)
except Exception as original_e:
with pytest.raises((JsonSchemaException, MalformedRequest)):
print(original_e)
raise original_e
def delete_floormap_default(api):
endpoint_result = api.site_design.delete_floormap(
floor_id='string'
)
return endpoint_result
@pytest.mark.site_design
def test_delete_floormap_default(api, validator):
try:
assert is_valid_delete_floormap(
validator,
delete_floormap_default(api)
)
except Exception as original_e:
with pytest.raises((JsonSchemaException, MalformedRequest, TypeError)):
raise original_e
def is_valid_update_floormap(json_schema_validate, obj):
json_schema_validate('jsd_49c73f51add559448beae2345a8c924a_v2_2_2_3').validate(obj)
return True
def update_floormap(api):
endpoint_result = api.site_design.update_floormap(
active_validation=True,
floor_id='string',
payload=None
)
return endpoint_result
@pytest.mark.site_design
def test_update_floormap(api, validator):
try:
assert is_valid_update_floormap(
validator,
update_floormap(api)
)
except Exception as original_e:
with pytest.raises((JsonSchemaException, MalformedRequest)):
print(original_e)
raise original_e
def update_floormap_default(api):
endpoint_result = api.site_design.update_floormap(
active_validation=True,
floor_id='string',
payload=None
)
return endpoint_result
@pytest.mark.site_design
def test_update_floormap_default(api, validator):
try:
assert is_valid_update_floormap(
validator,
update_floormap_default(api)
)
except Exception as original_e:
with pytest.raises((JsonSchemaException, MalformedRequest, TypeError)):
raise original_e
def is_valid_get_floormap(json_schema_validate, obj):
json_schema_validate('jsd_06ecdfc4068850a89a3f6b3da16d95b4_v2_2_2_3').validate(obj)
return True
def get_floormap(api):
endpoint_result = api.site_design.get_floormap(
floor_id='string'
)
return endpoint_result
@pytest.mark.site_design
def test_get_floormap(api, validator):
try:
assert is_valid_get_floormap(
validator,
get_floormap(api)
)
except Exception as original_e:
with pytest.raises((JsonSchemaException, MalformedRequest)):
print(original_e)
raise original_e
def get_floormap_default(api):
endpoint_result = api.site_design.get_floormap(
floor_id='string'
)
return endpoint_result
@pytest.mark.site_design
def test_get_floormap_default(api, validator):
try:
assert is_valid_get_floormap(
validator,
get_floormap_default(api)
)
except Exception as original_e:
with pytest.raises((JsonSchemaException, MalformedRequest, TypeError)):
raise original_e
| 33.112416 | 1,194 | 0.691766 | 2,199 | 19,735 | 5.915871 | 0.121419 | 0.04151 | 0.033208 | 0.036898 | 0.78561 | 0.782381 | 0.759782 | 0.751941 | 0.719656 | 0.692444 | 0 | 0.020978 | 0.205321 | 19,735 | 595 | 1,195 | 33.168067 | 0.808519 | 0.057512 | 0 | 0.628062 | 0 | 0 | 0.155058 | 0.038724 | 0 | 0 | 0 | 0 | 0.053452 | 1 | 0.13363 | false | 0.002227 | 0.008909 | 0 | 0.222717 | 0.026726 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
bea391fc7d80193bf1e48db0e08b7d6303b8cadf | 97 | py | Python | mpischedule/__init__.py | ckoerber/mpi-schedule | ef304ddb335bd893bdc7fc4a8a21134371a95786 | [
"MIT"
] | null | null | null | mpischedule/__init__.py | ckoerber/mpi-schedule | ef304ddb335bd893bdc7fc4a8a21134371a95786 | [
"MIT"
] | null | null | null | mpischedule/__init__.py | ckoerber/mpi-schedule | ef304ddb335bd893bdc7fc4a8a21134371a95786 | [
"MIT"
] | null | null | null | """Collects parallel application functions
"""
from mpischedule.parallel_map import parallel_map
| 24.25 | 49 | 0.835052 | 11 | 97 | 7.181818 | 0.727273 | 0.278481 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.092784 | 97 | 3 | 50 | 32.333333 | 0.897727 | 0.402062 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fe296f808d12e22e35ffce67bee36657ae45a70f | 121 | py | Python | netgear_scrapers/__init__.py | iandees/netgear-scrapers | f61b08512e5eec0219cf234a2d749ae5eae2418e | [
"MIT"
] | 5 | 2019-01-31T02:07:08.000Z | 2021-05-22T01:33:15.000Z | netgear_scrapers/__init__.py | iandees/netgear-scrapers | f61b08512e5eec0219cf234a2d749ae5eae2418e | [
"MIT"
] | 4 | 2019-10-18T18:01:47.000Z | 2021-07-31T15:48:38.000Z | netgear_scrapers/__init__.py | iandees/netgear-scrapers | f61b08512e5eec0219cf234a2d749ae5eae2418e | [
"MIT"
] | 3 | 2020-08-09T03:49:01.000Z | 2021-07-31T14:01:55.000Z | from .netgear_cm1000 import CM1000Parser
from .netgear_r7000 import R7000Parser
from .nest_fetcher import NestThermostat
| 30.25 | 40 | 0.876033 | 15 | 121 | 6.866667 | 0.666667 | 0.213592 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.146789 | 0.099174 | 121 | 3 | 41 | 40.333333 | 0.798165 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fe74753bbec76fd2896247df9415ef80f0d30d18 | 97 | py | Python | src/heat_control/settings/__init__.py | Radmor/heat_control | 595bd347237f7d769983c47dd6dec1c00ba19635 | [
"MIT"
] | null | null | null | src/heat_control/settings/__init__.py | Radmor/heat_control | 595bd347237f7d769983c47dd6dec1c00ba19635 | [
"MIT"
] | null | null | null | src/heat_control/settings/__init__.py | Radmor/heat_control | 595bd347237f7d769983c47dd6dec1c00ba19635 | [
"MIT"
] | null | null | null | try:
from .local import * # noqa
except ImportError:
from .default import * # noqa
| 19.4 | 35 | 0.618557 | 11 | 97 | 5.454545 | 0.727273 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.298969 | 97 | 4 | 36 | 24.25 | 0.882353 | 0.092784 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.75 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fea81ffb72b070619dff192c02dd6625e2d5d60a | 4,611 | py | Python | huaweicloud-sdk-cloudpipeline/huaweicloudsdkcloudpipeline/v2/__init__.py | huaweicloud/huaweicloud-sdk-python-v3 | 7a6270390fcbf192b3882bf763e7016e6026ef78 | [
"Apache-2.0"
] | 64 | 2020-06-12T07:05:07.000Z | 2022-03-30T03:32:50.000Z | huaweicloud-sdk-cloudpipeline/huaweicloudsdkcloudpipeline/v2/__init__.py | huaweicloud/huaweicloud-sdk-python-v3 | 7a6270390fcbf192b3882bf763e7016e6026ef78 | [
"Apache-2.0"
] | 11 | 2020-07-06T07:56:54.000Z | 2022-01-11T11:14:40.000Z | huaweicloud-sdk-cloudpipeline/huaweicloudsdkcloudpipeline/v2/__init__.py | huaweicloud/huaweicloud-sdk-python-v3 | 7a6270390fcbf192b3882bf763e7016e6026ef78 | [
"Apache-2.0"
] | 24 | 2020-06-08T11:42:13.000Z | 2022-03-04T06:44:08.000Z | # coding: utf-8
from __future__ import absolute_import
# import CloudPipelineClient
from huaweicloudsdkcloudpipeline.v2.cloudpipeline_client import CloudPipelineClient
from huaweicloudsdkcloudpipeline.v2.cloudpipeline_async_client import CloudPipelineAsyncClient
# import models into sdk package
from huaweicloudsdkcloudpipeline.v2.model.batch_show_pipelines_status_request import BatchShowPipelinesStatusRequest
from huaweicloudsdkcloudpipeline.v2.model.batch_show_pipelines_status_response import BatchShowPipelinesStatusResponse
from huaweicloudsdkcloudpipeline.v2.model.constraint import Constraint
from huaweicloudsdkcloudpipeline.v2.model.create_pipeline_by_template_request import CreatePipelineByTemplateRequest
from huaweicloudsdkcloudpipeline.v2.model.create_pipeline_by_template_response import CreatePipelineByTemplateResponse
from huaweicloudsdkcloudpipeline.v2.model.list_pipeline_simple_info_request import ListPipelineSimpleInfoRequest
from huaweicloudsdkcloudpipeline.v2.model.list_pipeline_simple_info_request_body import ListPipelineSimpleInfoRequestBody
from huaweicloudsdkcloudpipeline.v2.model.list_pipeline_simple_info_response import ListPipelineSimpleInfoResponse
from huaweicloudsdkcloudpipeline.v2.model.list_pipleine_build_result_request import ListPipleineBuildResultRequest
from huaweicloudsdkcloudpipeline.v2.model.list_pipleine_build_result_response import ListPipleineBuildResultResponse
from huaweicloudsdkcloudpipeline.v2.model.list_templates_request import ListTemplatesRequest
from huaweicloudsdkcloudpipeline.v2.model.list_templates_response import ListTemplatesResponse
from huaweicloudsdkcloudpipeline.v2.model.param_type_limits import ParamTypeLimits
from huaweicloudsdkcloudpipeline.v2.model.pipeline_basic_info import PipelineBasicInfo
from huaweicloudsdkcloudpipeline.v2.model.pipeline_build_result import PipelineBuildResult
from huaweicloudsdkcloudpipeline.v2.model.pipeline_execute_states import PipelineExecuteStates
from huaweicloudsdkcloudpipeline.v2.model.pipeline_param import PipelineParam
from huaweicloudsdkcloudpipeline.v2.model.pipeline_parameter import PipelineParameter
from huaweicloudsdkcloudpipeline.v2.model.pipeline_state_status import PipelineStateStatus
from huaweicloudsdkcloudpipeline.v2.model.register_agent_request import RegisterAgentRequest
from huaweicloudsdkcloudpipeline.v2.model.register_agent_response import RegisterAgentResponse
from huaweicloudsdkcloudpipeline.v2.model.remove_pipeline_request import RemovePipelineRequest
from huaweicloudsdkcloudpipeline.v2.model.remove_pipeline_response import RemovePipelineResponse
from huaweicloudsdkcloudpipeline.v2.model.show_agent_status_request import ShowAgentStatusRequest
from huaweicloudsdkcloudpipeline.v2.model.show_agent_status_response import ShowAgentStatusResponse
from huaweicloudsdkcloudpipeline.v2.model.show_instance_status_request import ShowInstanceStatusRequest
from huaweicloudsdkcloudpipeline.v2.model.show_instance_status_response import ShowInstanceStatusResponse
from huaweicloudsdkcloudpipeline.v2.model.show_pipleine_status_request import ShowPipleineStatusRequest
from huaweicloudsdkcloudpipeline.v2.model.show_pipleine_status_response import ShowPipleineStatusResponse
from huaweicloudsdkcloudpipeline.v2.model.show_template_detail_request import ShowTemplateDetailRequest
from huaweicloudsdkcloudpipeline.v2.model.show_template_detail_response import ShowTemplateDetailResponse
from huaweicloudsdkcloudpipeline.v2.model.slave_register import SlaveRegister
from huaweicloudsdkcloudpipeline.v2.model.source import Source
from huaweicloudsdkcloudpipeline.v2.model.stages import Stages
from huaweicloudsdkcloudpipeline.v2.model.start_new_pipeline_request import StartNewPipelineRequest
from huaweicloudsdkcloudpipeline.v2.model.start_new_pipeline_response import StartNewPipelineResponse
from huaweicloudsdkcloudpipeline.v2.model.start_pipeline_build_params import StartPipelineBuildParams
from huaweicloudsdkcloudpipeline.v2.model.start_pipeline_parameters import StartPipelineParameters
from huaweicloudsdkcloudpipeline.v2.model.stop_pipeline_new_request import StopPipelineNewRequest
from huaweicloudsdkcloudpipeline.v2.model.stop_pipeline_new_response import StopPipelineNewResponse
from huaweicloudsdkcloudpipeline.v2.model.template_cddl import TemplateCddl
from huaweicloudsdkcloudpipeline.v2.model.template_param import TemplateParam
from huaweicloudsdkcloudpipeline.v2.model.template_state import TemplateState
from huaweicloudsdkcloudpipeline.v2.model.template_view import TemplateView
from huaweicloudsdkcloudpipeline.v2.model.workflow import Workflow
| 83.836364 | 121 | 0.923878 | 452 | 4,611 | 9.170354 | 0.227876 | 0.351508 | 0.374186 | 0.412545 | 0.53848 | 0.427503 | 0.294331 | 0.136068 | 0.032328 | 0 | 0 | 0.010899 | 0.044893 | 4,611 | 54 | 122 | 85.388889 | 0.930291 | 0.015398 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
feac72d9c9040cac6b640049d38ff959b2be9745 | 40 | py | Python | app/db/__init__.py | joestarhu/cross | b607db2099a178904f41633b0c7137c12fa02af4 | [
"MIT"
] | 1 | 2021-11-17T09:43:37.000Z | 2021-11-17T09:43:37.000Z | app/db/__init__.py | joestarhu/cross | b607db2099a178904f41633b0c7137c12fa02af4 | [
"MIT"
] | null | null | null | app/db/__init__.py | joestarhu/cross | b607db2099a178904f41633b0c7137c12fa02af4 | [
"MIT"
] | null | null | null | from .database import Base,LocalSession
| 20 | 39 | 0.85 | 5 | 40 | 6.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 40 | 1 | 40 | 40 | 0.944444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
feb1a4d4b97780b745df7ff2a951a61aa3816f73 | 751 | py | Python | story/forms.py | luisgaboardi/2020.2-Projeto-Kokama-Usuario | 5d6a384708959f7a37c214496ea55e4bfe860d66 | [
"MIT"
] | null | null | null | story/forms.py | luisgaboardi/2020.2-Projeto-Kokama-Usuario | 5d6a384708959f7a37c214496ea55e4bfe860d66 | [
"MIT"
] | null | null | null | story/forms.py | luisgaboardi/2020.2-Projeto-Kokama-Usuario | 5d6a384708959f7a37c214496ea55e4bfe860d66 | [
"MIT"
] | null | null | null | from django import forms
REQUIRED_MESSAGE = 'Preencha este campo.'
class StoryForm(forms.Form):
title_portuguese = forms.CharField(
label='title',
required=False,
error_messages={'required': REQUIRED_MESSAGE}
)
text_portuguese = forms.CharField(
label='text',
required=False,
widget=forms.Textarea,
error_messages={'required': REQUIRED_MESSAGE}
)
title_kokama = forms.CharField(
label='title',
required=False,
error_messages={'required': REQUIRED_MESSAGE}
)
text_kokama = forms.CharField(
label='text',
required=False,
widget=forms.Textarea,
error_messages={'required': REQUIRED_MESSAGE}
)
| 23.46875 | 53 | 0.621838 | 71 | 751 | 6.394366 | 0.323944 | 0.165198 | 0.167401 | 0.255507 | 0.740088 | 0.740088 | 0.740088 | 0.740088 | 0.740088 | 0.740088 | 0 | 0 | 0.274301 | 751 | 32 | 54 | 23.46875 | 0.833028 | 0 | 0 | 0.56 | 0 | 0 | 0.093085 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.04 | 0 | 0.24 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
22982d5cbda105359f7baba312c8d576e5273b59 | 159 | py | Python | rasa_addons/core/nlg/__init__.py | engahmed1190/rasa-for-botfront | 9f712ab89ddf850c248877b3b77a160200df7d04 | [
"Apache-2.0"
] | 90 | 2018-04-11T11:54:57.000Z | 2019-05-26T09:52:40.000Z | rasa_addons/core/nlg/__init__.py | engahmed1190/rasa-for-botfront | 9f712ab89ddf850c248877b3b77a160200df7d04 | [
"Apache-2.0"
] | 79 | 2021-08-19T09:49:24.000Z | 2022-03-14T12:10:54.000Z | rasa_addons/core/nlg/__init__.py | engahmed1190/rasa-for-botfront | 9f712ab89ddf850c248877b3b77a160200df7d04 | [
"Apache-2.0"
] | 65 | 2019-05-21T12:16:53.000Z | 2022-02-23T10:54:15.000Z | from rasa_addons.core.nlg.graphql import GraphQLNaturalLanguageGenerator
from rasa_addons.core.nlg.bftemplate import BotfrontTemplatedNaturalLanguageGenerator
| 53 | 85 | 0.91195 | 16 | 159 | 8.9375 | 0.625 | 0.111888 | 0.195804 | 0.251748 | 0.293706 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.050314 | 159 | 2 | 86 | 79.5 | 0.94702 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
22b27b8553c90e8fc52723c6cab13d5d078c6a9e | 90 | py | Python | examples/python/mypackage/test_module.py | ech0-de/popper | 58b994660c954ab267407820e30d76a739a4d2df | [
"MIT"
] | 179 | 2016-11-19T22:38:07.000Z | 2020-05-24T10:42:30.000Z | examples/python/mypackage/test_module.py | ech0-de/popper | 58b994660c954ab267407820e30d76a739a4d2df | [
"MIT"
] | 739 | 2016-10-05T21:31:13.000Z | 2020-05-22T20:42:55.000Z | examples/python/mypackage/test_module.py | ech0-de/popper | 58b994660c954ab267407820e30d76a739a4d2df | [
"MIT"
] | 51 | 2016-10-14T05:42:10.000Z | 2020-05-15T19:05:33.000Z | import pytest
from . import module
def test_myfunc():
assert module.myfunc(1) == 2
| 11.25 | 32 | 0.688889 | 13 | 90 | 4.692308 | 0.769231 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.028169 | 0.211111 | 90 | 7 | 33 | 12.857143 | 0.830986 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.25 | 1 | 0.25 | true | 0 | 0.5 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
22fe0fab3bdad3a49ef682cc7ac306303a1f57f5 | 24 | py | Python | msgtopdf/__init__.py | ushills/msgtopdf | 25c5cf60158220ab840c0fb2fd46bae1cf3d5e22 | [
"MIT"
] | 6 | 2020-03-14T11:21:30.000Z | 2021-11-14T15:37:29.000Z | msgtopdf/__init__.py | ushills/msgtopdf | 25c5cf60158220ab840c0fb2fd46bae1cf3d5e22 | [
"MIT"
] | 2 | 2020-06-11T20:44:09.000Z | 2021-03-02T10:13:40.000Z | msgtopdf/__init__.py | ushills/msgtopdf | 25c5cf60158220ab840c0fb2fd46bae1cf3d5e22 | [
"MIT"
] | 1 | 2020-03-13T04:18:53.000Z | 2020-03-13T04:18:53.000Z | from .msgtopdf import *
| 12 | 23 | 0.75 | 3 | 24 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 24 | 1 | 24 | 24 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
22fe5d329739f9b8b34b18530513d85c3e88c7c8 | 719 | py | Python | tests/test_double_height.py | d21d3q/thermalprinter | a502fe8a7b7ab5a0773e92a37e6539f73b34b950 | [
"MIT"
] | 28 | 2016-08-31T15:50:38.000Z | 2022-03-24T15:19:17.000Z | tests/test_double_height.py | d21d3q/thermalprinter | a502fe8a7b7ab5a0773e92a37e6539f73b34b950 | [
"MIT"
] | 11 | 2016-09-28T15:46:46.000Z | 2021-03-09T16:37:13.000Z | tests/test_double_height.py | d21d3q/thermalprinter | a502fe8a7b7ab5a0773e92a37e6539f73b34b950 | [
"MIT"
] | 10 | 2017-03-02T19:08:15.000Z | 2021-02-19T16:11:06.000Z | # coding: utf-8
def test_default_value(printer):
assert printer._double_height is False
assert printer._char_height == 24
def test_changing_no_value(printer):
printer.double_height()
assert printer._double_height is False
assert printer._char_height == 24
def test_changing_state_on(printer):
printer.double_height(True)
assert printer._double_height is True
assert printer._char_height == 48
def test_changing_state_off(printer):
printer.double_height(False)
assert printer._double_height is False
assert printer._char_height == 24
def test_reset_value(printer):
printer.reset()
assert printer._double_height is False
assert printer._char_height == 24
| 23.193548 | 42 | 0.759388 | 99 | 719 | 5.151515 | 0.232323 | 0.254902 | 0.298039 | 0.245098 | 0.572549 | 0.519608 | 0.519608 | 0.519608 | 0.519608 | 0.519608 | 0 | 0.018456 | 0.171071 | 719 | 30 | 43 | 23.966667 | 0.837248 | 0.018081 | 0 | 0.421053 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.526316 | 1 | 0.263158 | false | 0 | 0 | 0 | 0.263158 | 1 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
a3c6688b8e9e078e7551c457c295682b73d21504 | 25 | py | Python | models/__init__.py | ethanfetaya/pytorch_template | 81da6166aef8a9330d7a7fe528d26bb9764276f7 | [
"MIT"
] | 15 | 2020-08-24T07:11:20.000Z | 2021-09-13T08:03:42.000Z | models/__init__.py | ethanfetaya/pytorch_template | 81da6166aef8a9330d7a7fe528d26bb9764276f7 | [
"MIT"
] | 5 | 2021-02-28T17:30:26.000Z | 2021-06-15T09:33:00.000Z | models/__init__.py | ethanfetaya/pytorch_template | 81da6166aef8a9330d7a7fe528d26bb9764276f7 | [
"MIT"
] | 5 | 2018-06-15T15:34:23.000Z | 2022-03-19T22:52:47.000Z | from .lenet import LeNet
| 12.5 | 24 | 0.8 | 4 | 25 | 5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 25 | 1 | 25 | 25 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a3e720880d492fb15742ea6395b51ff7b99eb0a1 | 190 | py | Python | tcga/utils/__init__.py | numpde/tcga | a7df66530a0249b82788f6367b9642b68eaf6ec5 | [
"MIT"
] | 2 | 2020-06-30T13:15:14.000Z | 2021-08-04T07:46:02.000Z | tcga/utils/__init__.py | numpde/tcga | a7df66530a0249b82788f6367b9642b68eaf6ec5 | [
"MIT"
] | null | null | null | tcga/utils/__init__.py | numpde/tcga | a7df66530a0249b82788f6367b9642b68eaf6ec5 | [
"MIT"
] | null | null | null | from .first import First, join
from .circular import Circular, laola
from .download import download
from .containers import *
from .files import *
from .meta import *
from .peek import Peek
| 23.75 | 37 | 0.778947 | 27 | 190 | 5.481481 | 0.407407 | 0.202703 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.157895 | 190 | 7 | 38 | 27.142857 | 0.925 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
43150e6b2e06e3809768f7b9f978e103893ae423 | 3,110 | py | Python | test/doc.quick-start.py | cnangel/HyperDex | b272e85b08d232993baf6105a4beba833deadfe3 | [
"BSD-3-Clause"
] | 1 | 2016-08-10T07:53:58.000Z | 2016-08-10T07:53:58.000Z | test/doc.quick-start.py | cnangel/HyperDex | b272e85b08d232993baf6105a4beba833deadfe3 | [
"BSD-3-Clause"
] | null | null | null | test/doc.quick-start.py | cnangel/HyperDex | b272e85b08d232993baf6105a4beba833deadfe3 | [
"BSD-3-Clause"
] | null | null | null | # File generated from python blocks in "doc/quick-start.tex"
>>> import sys
>>> HOST = sys.argv[2]
>>> PORT = int(sys.argv[3])
>>> import hyperdex.admin
>>> a = hyperdex.admin.Admin(HOST, PORT)
>>> a.add_space('''
... space phonebook
... key username
... attributes first, last, int phone
... subspace first, last, phone
... create 8 partitions
... tolerate 2 failures
... ''')
True
>>> import hyperdex.client
>>> c = hyperdex.client.Client(HOST, PORT)
>>> c.put('phonebook', 'jsmith1', {'first': 'John', 'last': 'Smith',
... 'phone': 6075551024})
True
>>> c.get('phonebook', 'jsmith1')
{'first': 'John', 'last': 'Smith', 'phone': 6075551024}
>>> [x for x in c.search('phonebook', {'first': 'John'})]
[{'first': 'John',
'last': 'Smith',
'phone': 6075551024,
'username': 'jsmith1'}]
>>> [x for x in c.search('phonebook', {'last': 'Smith'})]
[{'first': 'John',
'last': 'Smith',
'phone': 6075551024,
'username': 'jsmith1'}]
>>> [x for x in c.search('phonebook', {'phone': 6075551024})]
[{'first': 'John',
'last': 'Smith',
'phone': 6075551024,
'username': 'jsmith1'}]
>>> [x for x in c.search('phonebook',
... {'first': 'John', 'last': 'Smith', 'phone': 6075551024})]
[{'first': 'John',
'last': 'Smith',
'phone': 6075551024,
'username': 'jsmith1'}]
>>> c.put('phonebook', 'jd', {'first': 'John', 'last': 'Doe', 'phone': 6075557878})
True
>>> [x for x in c.search('phonebook',
... {'first': 'John', 'last': 'Smith', 'phone': 6075551024})]
[{'first': 'John',
'last': 'Smith',
'phone': 6075551024,
'username': 'jsmith1'}]
>>> [x for x in c.search('phonebook', {'first': 'John'})]
[{'first': 'John',
'last': 'Smith',
'phone': 6075551024,
'username': 'jsmith1'},
{'first': 'John',
'last': 'Doe',
'phone': 6075557878,
'username': 'jd'}]
>>> [x for x in c.search('phonebook', {'last': 'Smith'})]
[{'first': 'John',
'last': 'Smith',
'phone': 6075551024,
'username': 'jsmith1'}]
>>> [x for x in c.search('phonebook', {'last': 'Doe'})]
[{'first': 'John',
'last': 'Doe',
'phone': 6075557878,
'username': 'jd'}]
>>> c.delete('phonebook', 'jd')
True
>>> [x for x in c.search('phonebook', {'first': 'John'})]
[{'first': 'John',
'last': 'Smith',
'phone': 6075551024,
'username': 'jsmith1'}]
>>> c.put('phonebook', 'jsmith1', {'phone': 6075552048})
True
>>> c.get('phonebook', 'jsmith1')
{'first': 'John',
'last': 'Smith',
'phone': 6075552048}
>>> c.put('phonebook', 'jsmith2',
... {'first': 'John', 'last': 'Smith', 'phone': 5855552048})
True
>>> c.get('phonebook', 'jsmith2')
{'first': 'John',
'last': 'Smith',
'phone': 5855552048}
>>> [x for x in c.search('phonebook',
... {'last': 'Smith', 'phone': (6070000000, 6080000000)})]
[{'first': 'John',
'last': 'Smith',
'phone': 6075552048,
'username': 'jsmith1'}]
>>> [x for x in c.search('phonebook',
... {'first': ('Jack', 'Joseph')})]
[{'first': 'John',
'last': 'Smith',
'phone': 6075552048,
'username': 'jsmith1'},
{'first': 'John',
'last': 'Smith',
'phone': 5855552048,
'username': 'jsmith2'}]
>>> a.rm_space('phonebook')
True
| 28.53211 | 83 | 0.563023 | 357 | 3,110 | 4.89916 | 0.168067 | 0.123499 | 0.156089 | 0.185249 | 0.753573 | 0.753573 | 0.711264 | 0.711264 | 0.528302 | 0.510006 | 0 | 0.103633 | 0.159164 | 3,110 | 108 | 84 | 28.796296 | 0.565201 | 0.01865 | 0 | 0.691589 | 1 | 0 | 0.373443 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.028037 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
432ccd04e9e9f985eaa1f9d5605cbf79c013ea1b | 199 | py | Python | codewars/7kyu/amrlotfy77/Square Every Digit/test_bench.py | ictcubeMENA/Training_one | dff6bee96ba42babe4888e5cf9a9448a6fd93fc3 | [
"MIT"
] | null | null | null | codewars/7kyu/amrlotfy77/Square Every Digit/test_bench.py | ictcubeMENA/Training_one | dff6bee96ba42babe4888e5cf9a9448a6fd93fc3 | [
"MIT"
] | 2 | 2019-01-22T10:53:42.000Z | 2019-01-31T08:02:48.000Z | codewars/7kyu/amrlotfy77/Square Every Digit/test_bench.py | ictcubeMENA/Training_one | dff6bee96ba42babe4888e5cf9a9448a6fd93fc3 | [
"MIT"
] | 13 | 2019-01-22T10:37:42.000Z | 2019-01-25T13:30:43.000Z | from main import square_digits, square_digits1
def test1(benchmark):
assert benchmark(square_digits, 9119) == 811181
def test(benchmark):
assert benchmark(square_digits1, 9119) == 811181
| 19.9 | 52 | 0.758794 | 25 | 199 | 5.88 | 0.52 | 0.163265 | 0.326531 | 0.408163 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.136905 | 0.155779 | 199 | 9 | 53 | 22.111111 | 0.738095 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.4 | 1 | 0.4 | false | 0 | 0.2 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
4a625b5bef13af92a6158639a30c111be8646400 | 1,719 | py | Python | tests/get_top_tests.py | mesenev/top_bot_lyceum | 35d92be35fe6b2bed6a38791aea00fee98bea872 | [
"MIT"
] | 1 | 2019-09-09T23:55:32.000Z | 2019-09-09T23:55:32.000Z | tests/get_top_tests.py | mesenev/top_bot_lyceum | 35d92be35fe6b2bed6a38791aea00fee98bea872 | [
"MIT"
] | 8 | 2017-10-03T16:35:45.000Z | 2018-01-15T10:29:09.000Z | tests/get_top_tests.py | mesenev/top_bot_lyceum | 35d92be35fe6b2bed6a38791aea00fee98bea872 | [
"MIT"
] | 3 | 2017-10-08T09:33:45.000Z | 2019-09-26T23:35:54.000Z | import unittest
import methods
class TestGetTopMethods(unittest.TestCase):
def test_simple_order(self):
var = [('first', 6), ('second', 5), ('third', 4), ('fourth', 3), ('fifth', 2), ('sixth', 1),
('seventh', 0)]
msg = methods.get_top._create_top(var)
s = [x for x in msg.split('\n') if x]
self.assertEqual(len(s), 7)
print(msg)
def test_duplicate_last(self):
var = [('first', 6), ('second', 5), ('third', 4), ('fourth', 3), ('fifth', 3), ('sixth', 1),
('seventh', 0)]
msg = methods.get_top._create_top(var)
s = [x for x in msg.split('\n') if x]
self.assertEqual(len(s), 7)
print(msg)
def test_duplicate_fourth(self):
var = [('first', 6), ('second', 5), ('third', 4), ('fourth', 4), ('fifth', 3), ('sixth', 1),
('seventh', 0)]
msg = methods.get_top._create_top(var)
s = [x for x in msg.split('\n') if x]
self.assertEqual(len(s), 7)
print(msg)
def test_duplicate_several(self):
var = [('first', 6), ('second', 6), ('third', 5), ('fourth', 5), ('fifth', 3), ('sixth', 1),
('seventh', 0)]
msg = methods.get_top._create_top(var)
s = [x for x in msg.split('\n') if x]
self.assertEqual(len(s), 7)
print(msg)
def test_duplicate_first_and_last(self):
var = [('first', 6), ('second', 6), ('third', 5), ('fourth', 4), ('fifth', 4), ('sixth', 1),
('seventh', 0)]
msg = methods.get_top._create_top(var)
s = [x for x in msg.split('\n') if x]
self.assertEqual(len(s), 7)
print(msg)
if __name__ == '__main__':
unittest.main()
| 34.38 | 100 | 0.510762 | 235 | 1,719 | 3.587234 | 0.2 | 0.041518 | 0.071174 | 0.077106 | 0.838671 | 0.838671 | 0.829181 | 0.829181 | 0.829181 | 0.715302 | 0 | 0.032389 | 0.281559 | 1,719 | 49 | 101 | 35.081633 | 0.650202 | 0 | 0 | 0.625 | 0 | 0 | 0.123909 | 0 | 0 | 0 | 0 | 0 | 0.125 | 1 | 0.125 | false | 0 | 0.05 | 0 | 0.2 | 0.125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4a8950f436d70e5e2b799c7e3b49992be542389a | 28 | py | Python | data_structure_lru_cache.py | macio-matheus/algorithms-and-data-structure-practices | 64d013ec04a0489401e8b2110f578fbf3893dca1 | [
"Apache-2.0"
] | null | null | null | data_structure_lru_cache.py | macio-matheus/algorithms-and-data-structure-practices | 64d013ec04a0489401e8b2110f578fbf3893dca1 | [
"Apache-2.0"
] | null | null | null | data_structure_lru_cache.py | macio-matheus/algorithms-and-data-structure-practices | 64d013ec04a0489401e8b2110f578fbf3893dca1 | [
"Apache-2.0"
] | null | null | null | # TODO IMPLEMENT all methods | 28 | 28 | 0.821429 | 4 | 28 | 5.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 28 | 1 | 28 | 28 | 0.958333 | 0.928571 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 1 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4a9361700cbe71d05ad4b59994758780039201fe | 150 | py | Python | src/bio2bel_ddr/constants.py | bio2bel/ddr | 6e4cae605d798c15049970a65d5a6fcf69255b36 | [
"MIT"
] | 1 | 2020-09-26T12:20:06.000Z | 2020-09-26T12:20:06.000Z | src/bio2bel_ddr/constants.py | bio2bel/ddr | 6e4cae605d798c15049970a65d5a6fcf69255b36 | [
"MIT"
] | 5 | 2019-01-22T14:13:10.000Z | 2019-02-08T14:43:01.000Z | src/bio2bel_ddr/constants.py | bio2bel/ddr | 6e4cae605d798c15049970a65d5a6fcf69255b36 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""Constants for Bio2BEL DDR."""
from bio2bel import get_data_dir
MODULE_NAME = 'ddr'
DATA_DIR = get_data_dir(MODULE_NAME)
| 16.666667 | 36 | 0.706667 | 23 | 150 | 4.304348 | 0.608696 | 0.212121 | 0.20202 | 0.323232 | 0.40404 | 0 | 0 | 0 | 0 | 0 | 0 | 0.023438 | 0.146667 | 150 | 8 | 37 | 18.75 | 0.75 | 0.326667 | 0 | 0 | 0 | 0 | 0.031579 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
43686f7e315e83c00a00954dd79391a224fc1872 | 134 | py | Python | ckanext/example_theme/v01_empty_extension/plugin.py | florianm/ckan | 1cfd98d591ac70b4eb81048bcd227b6c1354b1bf | [
"Apache-2.0"
] | 12 | 2015-08-28T16:59:07.000Z | 2020-03-08T01:39:30.000Z | ckanext/example_theme/v01_empty_extension/plugin.py | florianm/ckan | 1cfd98d591ac70b4eb81048bcd227b6c1354b1bf | [
"Apache-2.0"
] | 13 | 2019-05-02T21:01:28.000Z | 2020-10-20T23:34:48.000Z | ckanext/example_theme/v01_empty_extension/plugin.py | florianm/ckan | 1cfd98d591ac70b4eb81048bcd227b6c1354b1bf | [
"Apache-2.0"
] | 10 | 2015-05-08T04:33:20.000Z | 2020-03-03T15:17:58.000Z | import ckan.plugins as plugins
class ExampleThemePlugin(plugins.SingletonPlugin):
'''An example theme plugin.
'''
pass
| 14.888889 | 50 | 0.708955 | 14 | 134 | 6.785714 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.201493 | 134 | 8 | 51 | 16.75 | 0.88785 | 0.179104 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
43ab2ae426837b41e6b2c483e51eb3750b3b0b4e | 168 | py | Python | dashboard/admin.py | Miranshox/Oquvportali | ac5439d766f53c3a381968cb4b63d2b3ae7ffe48 | [
"MIT"
] | null | null | null | dashboard/admin.py | Miranshox/Oquvportali | ac5439d766f53c3a381968cb4b63d2b3ae7ffe48 | [
"MIT"
] | null | null | null | dashboard/admin.py | Miranshox/Oquvportali | ac5439d766f53c3a381968cb4b63d2b3ae7ffe48 | [
"MIT"
] | null | null | null | from django.contrib import admin
from . models import *
# Register your models here.
admin.site.register(Notes)
admin.site.register(Homework)
admin.site.register(Todo) | 24 | 32 | 0.797619 | 24 | 168 | 5.583333 | 0.541667 | 0.201493 | 0.380597 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.10119 | 168 | 7 | 33 | 24 | 0.887417 | 0.154762 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
43ca18d1deac1b210541a2d3a9be3c8fcd5ef41d | 91 | py | Python | stocklab_twse/error.py | syoukore/stocklab-twse | de5c81083b3dffff4d85f1e3312588ce5d65eca2 | [
"MIT"
] | 1 | 2020-06-16T16:22:01.000Z | 2020-06-16T16:22:01.000Z | stocklab_twse/error.py | syoukore/stocklab-twse | de5c81083b3dffff4d85f1e3312588ce5d65eca2 | [
"MIT"
] | null | null | null | stocklab_twse/error.py | syoukore/stocklab-twse | de5c81083b3dffff4d85f1e3312588ce5d65eca2 | [
"MIT"
] | 1 | 2020-06-16T16:55:40.000Z | 2020-06-16T16:55:40.000Z | from stocklab.core.error import *
class InvalidDateRequested(ExceptionWithInfo):
pass
| 18.2 | 46 | 0.802198 | 9 | 91 | 8.111111 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.131868 | 91 | 4 | 47 | 22.75 | 0.924051 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
602ad3d3d05bd33a17d034aafe06b0b6631b8f4b | 127 | py | Python | Decorators/decorators.py | sv549/calculator | 4b61234a5f3e628ec4adbc5e61b3d1b1ba831c3d | [
"MIT"
] | null | null | null | Decorators/decorators.py | sv549/calculator | 4b61234a5f3e628ec4adbc5e61b3d1b1ba831c3d | [
"MIT"
] | null | null | null | Decorators/decorators.py | sv549/calculator | 4b61234a5f3e628ec4adbc5e61b3d1b1ba831c3d | [
"MIT"
] | null | null | null | def decorator(func):
def decorator_do_twice(a,b):
func()
func()
return decorator_do_twice()
| 18.142857 | 36 | 0.559055 | 15 | 127 | 4.466667 | 0.533333 | 0.358209 | 0.477612 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.338583 | 127 | 6 | 37 | 21.166667 | 0.797619 | 0 | 0 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
604a6963e95ccc4763ae09fce4c180845573802c | 156 | py | Python | scripts/rewards/boost/boost_arbitrum.py | gosuto-ai/badger-rewards | 45a7cefce2035bc385bebf5f103780c7ff614304 | [
"MIT"
] | 3 | 2022-01-05T20:33:35.000Z | 2022-02-09T16:07:30.000Z | scripts/rewards/boost/boost_arbitrum.py | gosuto-ai/badger-rewards | 45a7cefce2035bc385bebf5f103780c7ff614304 | [
"MIT"
] | 341 | 2021-08-04T13:01:21.000Z | 2022-03-31T19:46:30.000Z | scripts/rewards/boost/boost_arbitrum.py | gosuto-ai/badger-rewards | 45a7cefce2035bc385bebf5f103780c7ff614304 | [
"MIT"
] | 3 | 2021-09-07T12:54:27.000Z | 2021-12-22T13:27:23.000Z | from helpers.enums import Network
from scripts.rewards.utils.boost import generate_boosts
if __name__ == "__main__":
generate_boosts(Network.Arbitrum)
| 26 | 55 | 0.807692 | 20 | 156 | 5.8 | 0.75 | 0.241379 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115385 | 156 | 5 | 56 | 31.2 | 0.84058 | 0 | 0 | 0 | 1 | 0 | 0.051282 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
60507b3de65d7e941c885ac259b1a5e41989ee37 | 170 | py | Python | app/src/schemas/__init__.py | fpaludi/StreamVideo | 72c14a13f27c96731d35f927b380494d6ea2b9a8 | [
"MIT"
] | null | null | null | app/src/schemas/__init__.py | fpaludi/StreamVideo | 72c14a13f27c96731d35f927b380494d6ea2b9a8 | [
"MIT"
] | null | null | null | app/src/schemas/__init__.py | fpaludi/StreamVideo | 72c14a13f27c96731d35f927b380494d6ea2b9a8 | [
"MIT"
] | null | null | null | from schemas.user import User, UserCreate, UserUpdate
from schemas.book import Book, BookCreate, BookUpdate
from schemas.review import Review, ReviewCreate, ReviewUpdate
| 42.5 | 61 | 0.841176 | 21 | 170 | 6.809524 | 0.571429 | 0.230769 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105882 | 170 | 3 | 62 | 56.666667 | 0.940789 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
605320fdc84d56d7695eb34d5eb595818fbe79b6 | 21 | py | Python | src/quick_pypi/__init__.py | dhchenx/quick-pypi | 6f41793fd1729bdc4daf05f3fc5836c4e037ab99 | [
"MIT"
] | null | null | null | src/quick_pypi/__init__.py | dhchenx/quick-pypi | 6f41793fd1729bdc4daf05f3fc5836c4e037ab99 | [
"MIT"
] | null | null | null | src/quick_pypi/__init__.py | dhchenx/quick-pypi | 6f41793fd1729bdc4daf05f3fc5836c4e037ab99 | [
"MIT"
] | null | null | null | from .deploy import * | 21 | 21 | 0.761905 | 3 | 21 | 5.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 21 | 1 | 21 | 21 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
606892cb95ef2f0a3b0922586e1ac1dbf057b44a | 61,003 | py | Python | tests/test_packages/test_skills/test_generic_buyer/test_handlers.py | valory-xyz/open-aea | 80ac202c6b5413709be1b7a71fd712d97b587fef | [
"Apache-2.0"
] | 28 | 2021-10-31T18:54:14.000Z | 2022-03-17T13:10:43.000Z | tests/test_packages/test_skills/test_generic_buyer/test_handlers.py | valory-xyz/open-aea | 80ac202c6b5413709be1b7a71fd712d97b587fef | [
"Apache-2.0"
] | 66 | 2021-10-31T11:55:48.000Z | 2022-03-31T06:26:23.000Z | tests/test_packages/test_skills/test_generic_buyer/test_handlers.py | valory-xyz/open-aea | 80ac202c6b5413709be1b7a71fd712d97b587fef | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
# ------------------------------------------------------------------------------
#
# Copyright 2021 Valory AG
# Copyright 2018-2019 Fetch.AI Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# ------------------------------------------------------------------------------
"""This module contains the tests of the handler classes of the generic buyer skill."""
import logging
from pathlib import Path
from typing import cast
from unittest.mock import patch
import pytest
from aea.crypto.ledger_apis import LedgerApis
from aea.helpers.search.models import Description
from aea.helpers.transaction.base import (
RawTransaction,
SignedTransaction,
Terms,
TransactionDigest,
TransactionReceipt,
)
from aea.protocols.dialogue.base import DialogueMessage
from aea.test_tools.test_skill import BaseSkillTestCase, COUNTERPARTY_AGENT_ADDRESS
from packages.fetchai.protocols.default.message import DefaultMessage
from packages.fetchai.protocols.fipa.message import FipaMessage
from packages.fetchai.protocols.ledger_api.message import LedgerApiMessage
from packages.fetchai.protocols.oef_search.message import OefSearchMessage
from packages.fetchai.skills.generic_buyer.behaviours import GenericTransactionBehaviour
from packages.fetchai.skills.generic_buyer.dialogues import (
FipaDialogue,
FipaDialogues,
LedgerApiDialogue,
LedgerApiDialogues,
OefSearchDialogues,
SigningDialogue,
SigningDialogues,
)
from packages.fetchai.skills.generic_buyer.handlers import (
GenericFipaHandler,
GenericLedgerApiHandler,
GenericOefSearchHandler,
GenericSigningHandler,
LEDGER_API_ADDRESS,
)
from packages.fetchai.skills.generic_buyer.strategy import GenericStrategy
from packages.open_aea.protocols.signing.message import SigningMessage
from tests.conftest import ROOT_DIR
class TestGenericFipaHandler(BaseSkillTestCase):
"""Test fipa handler of generic buyer."""
path_to_skill = Path(ROOT_DIR, "packages", "fetchai", "skills", "generic_buyer")
@classmethod
def setup(cls):
"""Setup the test class."""
super().setup()
cls.fipa_handler = cast(
GenericFipaHandler, cls._skill.skill_context.handlers.fipa
)
cls.strategy = cast(GenericStrategy, cls._skill.skill_context.strategy)
cls.fipa_dialogues = cast(
FipaDialogues, cls._skill.skill_context.fipa_dialogues
)
cls.list_of_messages = (
DialogueMessage(FipaMessage.Performative.CFP, {"query": "some_query"}),
DialogueMessage(
FipaMessage.Performative.PROPOSE, {"proposal": "some_proposal"}
),
DialogueMessage(FipaMessage.Performative.ACCEPT),
DialogueMessage(
FipaMessage.Performative.MATCH_ACCEPT_W_INFORM,
{"info": {"address": "some_term_sender_address"}},
),
DialogueMessage(
FipaMessage.Performative.INFORM,
{"info": {"transaction_digest": "some_transaction_digest_body"}},
),
)
def test_setup(self):
"""Test the setup method of the fipa handler."""
assert self.fipa_handler.setup() is None
self.assert_quantity_in_outbox(0)
def test_handle_unidentified_dialogue(self):
"""Test the _handle_unidentified_dialogue method of the fipa handler."""
# setup
incorrect_dialogue_reference = ("", "")
incoming_message = self.build_incoming_message(
message_type=FipaMessage,
dialogue_reference=incorrect_dialogue_reference,
performative=FipaMessage.Performative.ACCEPT,
)
# operation
with patch.object(self.fipa_handler.context.logger, "log") as mock_logger:
self.fipa_handler.handle(incoming_message)
# after
mock_logger.assert_any_call(
logging.INFO,
f"received invalid fipa message={incoming_message}, unidentified dialogue.",
)
self.assert_quantity_in_outbox(1)
has_attributes, error_str = self.message_has_attributes(
actual_message=self.get_message_from_outbox(),
message_type=DefaultMessage,
performative=DefaultMessage.Performative.ERROR,
to=incoming_message.sender,
sender=self.skill.skill_context.agent_address,
error_code=DefaultMessage.ErrorCode.INVALID_DIALOGUE,
error_msg="Invalid dialogue.",
error_data={"fipa_message": incoming_message.encode()},
)
assert has_attributes, error_str
def test_handle_propose_is_affordable_and_is_acceptable(self):
"""Test the _handle_propose method of the fipa handler."""
# setup
proposal = Description(
{
"ledger_id": self.strategy.ledger_id,
"price": 100,
"currency_id": "FET",
"service_id": "some_service_id",
"quantity": 1,
"tx_nonce": "some_tx_nonce",
}
)
fipa_dialogue = self.prepare_skill_dialogue(
dialogues=self.fipa_dialogues, messages=self.list_of_messages[:1],
)
incoming_message = self.build_incoming_message_for_skill_dialogue(
dialogue=fipa_dialogue,
performative=FipaMessage.Performative.PROPOSE,
proposal=proposal,
)
# operation
with patch.object(
self.strategy, "is_acceptable_proposal", return_value=True,
):
with patch.object(
self.strategy, "is_affordable_proposal", return_value=True,
):
with patch.object(
self.fipa_handler.context.logger, "log"
) as mock_logger:
self.fipa_handler.handle(incoming_message)
# after
incoming_message = cast(FipaMessage, incoming_message)
mock_logger.assert_any_call(
logging.INFO,
f"received proposal={incoming_message.proposal.values} from sender={COUNTERPARTY_AGENT_ADDRESS[-5:]}",
)
mock_logger.assert_any_call(
logging.INFO,
f"accepting the proposal from sender={COUNTERPARTY_AGENT_ADDRESS[-5:]}",
)
self.assert_quantity_in_outbox(1)
has_attributes, error_str = self.message_has_attributes(
actual_message=self.get_message_from_outbox(),
message_type=FipaMessage,
performative=FipaMessage.Performative.ACCEPT,
to=incoming_message.sender,
sender=self.skill.skill_context.agent_address,
target=incoming_message.message_id,
)
assert has_attributes, error_str
def test_handle_propose_not_is_affordable_or_not_is_acceptable(self):
"""Test the _handle_propose method of the fipa handler."""
# setup
proposal = Description(
{
"ledger_id": self.strategy.ledger_id,
"price": 100,
"currency_id": "FET",
"service_id": "some_service_id",
"quantity": 1,
"tx_nonce": "some_tx_nonce",
}
)
fipa_dialogue = self.prepare_skill_dialogue(
dialogues=self.fipa_dialogues, messages=self.list_of_messages[:1],
)
incoming_message = self.build_incoming_message_for_skill_dialogue(
dialogue=fipa_dialogue,
performative=FipaMessage.Performative.PROPOSE,
proposal=proposal,
)
# operation
with patch.object(
self.strategy, "is_acceptable_proposal", return_value=False,
):
with patch.object(
self.strategy, "is_affordable_proposal", return_value=False,
):
with patch.object(
self.fipa_handler.context.logger, "log"
) as mock_logger:
self.fipa_handler.handle(incoming_message)
# after
incoming_message = cast(FipaMessage, incoming_message)
mock_logger.assert_any_call(
logging.INFO,
f"received proposal={incoming_message.proposal.values} from sender={COUNTERPARTY_AGENT_ADDRESS[-5:]}",
)
mock_logger.assert_any_call(
logging.INFO,
f"declining the proposal from sender={COUNTERPARTY_AGENT_ADDRESS[-5:]}",
)
self.assert_quantity_in_outbox(1)
has_attributes, error_str = self.message_has_attributes(
actual_message=self.get_message_from_outbox(),
message_type=FipaMessage,
performative=FipaMessage.Performative.DECLINE,
to=incoming_message.sender,
sender=self.skill.skill_context.agent_address,
target=incoming_message.message_id,
)
assert has_attributes, error_str
def test_handle_decline_decline_cfp(self):
"""Test the _handle_decline method of the fipa handler where the end state is decline_cfp."""
# setup
fipa_dialogue = self.prepare_skill_dialogue(
dialogues=self.fipa_dialogues, messages=self.list_of_messages[:1],
)
incoming_message = self.build_incoming_message_for_skill_dialogue(
dialogue=fipa_dialogue, performative=FipaMessage.Performative.DECLINE,
)
# before
for (
end_state_numbers
) in self.fipa_dialogues.dialogue_stats.self_initiated.values():
assert end_state_numbers == 0
for (
end_state_numbers
) in self.fipa_dialogues.dialogue_stats.other_initiated.values():
assert end_state_numbers == 0
# operation
with patch.object(self.fipa_handler.context.logger, "log") as mock_logger:
self.fipa_handler.handle(incoming_message)
# after
mock_logger.assert_any_call(
logging.INFO,
f"received DECLINE from sender={COUNTERPARTY_AGENT_ADDRESS[-5:]}",
)
for (
end_state_numbers
) in self.fipa_dialogues.dialogue_stats.other_initiated.values():
assert end_state_numbers == 0
for (
end_state,
end_state_numbers,
) in self.fipa_dialogues.dialogue_stats.self_initiated.items():
if end_state == FipaDialogue.EndState.DECLINED_CFP:
assert end_state_numbers == 1
else:
assert end_state_numbers == 0
def test_handle_decline_decline_accept(self):
"""Test the _handle_decline method of the fipa handler where the end state is decline_accept."""
# setup
fipa_dialogue = self.prepare_skill_dialogue(
dialogues=self.fipa_dialogues, messages=self.list_of_messages[:3],
)
incoming_message = self.build_incoming_message_for_skill_dialogue(
dialogue=fipa_dialogue, performative=FipaMessage.Performative.DECLINE,
)
# before
for end_state_numbers in list(
self.fipa_dialogues.dialogue_stats.self_initiated.values()
) + list(self.fipa_dialogues.dialogue_stats.other_initiated.values()):
assert end_state_numbers == 0
# operation
with patch.object(self.fipa_handler.context.logger, "log") as mock_logger:
self.fipa_handler.handle(incoming_message)
# after
mock_logger.assert_any_call(
logging.INFO,
f"received DECLINE from sender={COUNTERPARTY_AGENT_ADDRESS[-5:]}",
)
for (
end_state_numbers
) in self.fipa_dialogues.dialogue_stats.other_initiated.values():
assert end_state_numbers == 0
for (
end_state,
end_state_numbers,
) in self.fipa_dialogues.dialogue_stats.self_initiated.items():
if end_state == FipaDialogue.EndState.DECLINED_ACCEPT:
assert end_state_numbers == 1
else:
assert end_state_numbers == 0
def test_handle_match_accept_is_ledger_tx(self):
"""Test the _handle_match_accept method of the fipa handler where is_ledger_tx is True."""
# setup
self.strategy._is_ledger_tx = True
fipa_dialogue = self.prepare_skill_dialogue(
dialogues=self.fipa_dialogues, messages=self.list_of_messages[:3],
)
fipa_dialogue.terms = Terms(
"some_ledger_id",
self.skill.skill_context.agent_address,
"counterprty",
{"currency_id": 50},
{"good_id": -10},
"some_nonce",
)
incoming_message = cast(
FipaMessage,
self.build_incoming_message_for_skill_dialogue(
dialogue=fipa_dialogue,
performative=FipaMessage.Performative.MATCH_ACCEPT_W_INFORM,
info={"info": {"address": "some_term_sender_address"}},
),
)
# operation
with patch.object(
self.fipa_handler.context.logger, "log"
) as mock_logger_handler:
self.fipa_handler.handle(incoming_message)
# after
mock_logger_handler.assert_any_call(
logging.INFO,
f"received MATCH_ACCEPT_W_INFORM from sender={COUNTERPARTY_AGENT_ADDRESS[-5:]} with info={incoming_message.info}",
)
# operation
with patch.object(
self.fipa_handler.context.behaviours.transaction.context.logger, "log"
) as _:
self.fipa_handler.context.behaviours.transaction.act()
self.assert_quantity_in_outbox(1)
has_attributes, error_str = self.message_has_attributes(
actual_message=self.get_message_from_outbox(),
message_type=LedgerApiMessage,
performative=LedgerApiMessage.Performative.GET_RAW_TRANSACTION,
to=LEDGER_API_ADDRESS,
sender=str(self.skill.skill_context.skill_id),
terms=fipa_dialogue.terms,
)
assert has_attributes, error_str
def test_handle_match_accept_not_is_ledger_tx(self):
"""Test the _handle_match_accept method of the fipa handler where is_ledger_tx is False."""
# setup
self.strategy._is_ledger_tx = False
fipa_dialogue = self.prepare_skill_dialogue(
dialogues=self.fipa_dialogues, messages=self.list_of_messages[:3],
)
incoming_message = cast(
FipaMessage,
self.build_incoming_message_for_skill_dialogue(
dialogue=fipa_dialogue,
performative=FipaMessage.Performative.MATCH_ACCEPT_W_INFORM,
info={"info": {"address": "some_term_sender_address"}},
),
)
# operation
with patch.object(self.fipa_handler.context.logger, "log") as mock_logger:
self.fipa_handler.handle(incoming_message)
# after
mock_logger.assert_any_call(
logging.INFO,
f"received MATCH_ACCEPT_W_INFORM from sender={COUNTERPARTY_AGENT_ADDRESS[-5:]} with info={incoming_message.info}",
)
self.assert_quantity_in_outbox(1)
has_attributes, error_str = self.message_has_attributes(
actual_message=self.get_message_from_outbox(),
message_type=FipaMessage,
performative=FipaMessage.Performative.INFORM,
to=incoming_message.sender,
sender=self.skill.skill_context.agent_address,
target=incoming_message.message_id,
info={"Done": "Sending payment via bank transfer"},
)
assert has_attributes, error_str
mock_logger.assert_any_call(
logging.INFO,
f"informing counterparty={COUNTERPARTY_AGENT_ADDRESS[-5:]} of payment.",
)
def test_handle_inform_with_data(self):
"""Test the _handle_inform method of the fipa handler where info has data."""
# setup
fipa_dialogue = self.prepare_skill_dialogue(
dialogues=self.fipa_dialogues, messages=self.list_of_messages[:4],
)
incoming_message = self.build_incoming_message_for_skill_dialogue(
dialogue=fipa_dialogue,
performative=FipaMessage.Performative.INFORM,
info={"data_name": "data"},
)
# before
for end_state_numbers in list(
self.fipa_dialogues.dialogue_stats.self_initiated.values()
) + list(self.fipa_dialogues.dialogue_stats.other_initiated.values()):
assert end_state_numbers == 0
# operation
with patch.object(self.fipa_handler.context.logger, "log") as mock_logger:
self.fipa_handler.handle(incoming_message)
# after
mock_logger.assert_any_call(
logging.INFO,
f"received INFORM from sender={COUNTERPARTY_AGENT_ADDRESS[-5:]}",
)
mock_logger.assert_any_call(
logging.INFO, "received the following data={'data_name': 'data'}"
)
for (
end_state_numbers
) in self.fipa_dialogues.dialogue_stats.other_initiated.values():
assert end_state_numbers == 0
for (
end_state,
end_state_numbers,
) in self.fipa_dialogues.dialogue_stats.self_initiated.items():
if end_state == FipaDialogue.EndState.SUCCESSFUL:
assert end_state_numbers == 1
else:
assert end_state_numbers == 0
def test_handle_inform_without_data(self):
"""Test the _handle_inform method of the fipa handler where info has NO data."""
# setup
fipa_dialogue = self.prepare_skill_dialogue(
dialogues=self.fipa_dialogues, messages=self.list_of_messages[:4],
)
incoming_message = self.build_incoming_message_for_skill_dialogue(
dialogue=fipa_dialogue,
performative=FipaMessage.Performative.INFORM,
info={},
)
# operation
with patch.object(self.fipa_handler.context.logger, "log") as mock_logger:
self.fipa_handler.handle(incoming_message)
# after
mock_logger.assert_any_call(
logging.INFO,
f"received INFORM from sender={COUNTERPARTY_AGENT_ADDRESS[-5:]}",
)
mock_logger.assert_any_call(
logging.INFO,
f"received no data from sender={COUNTERPARTY_AGENT_ADDRESS[-5:]}",
)
def test_handle_invalid(self):
"""Test the _handle_invalid method of the fipa handler."""
# setup
fipa_dialogue = self.prepare_skill_dialogue(
dialogues=self.fipa_dialogues, messages=self.list_of_messages[:2],
)
incoming_message = self.build_incoming_message_for_skill_dialogue(
dialogue=fipa_dialogue, performative=FipaMessage.Performative.ACCEPT,
)
# operation
with patch.object(self.fipa_handler.context.logger, "log") as mock_logger:
self.fipa_handler.handle(incoming_message)
# after
mock_logger.assert_any_call(
logging.WARNING,
f"cannot handle fipa message of performative={incoming_message.performative} in dialogue={fipa_dialogue}.",
)
def test_teardown(self):
"""Test the teardown method of the fipa handler."""
assert self.fipa_handler.teardown() is None
self.assert_quantity_in_outbox(0)
class TestGenericOefSearchHandler(BaseSkillTestCase):
"""Test oef search handler of generic buyer."""
path_to_skill = Path(ROOT_DIR, "packages", "fetchai", "skills", "generic_buyer")
is_agent_to_agent_messages = False
@classmethod
def setup(cls):
"""Setup the test class."""
super().setup()
cls.oef_search_handler = cast(
GenericOefSearchHandler, cls._skill.skill_context.handlers.oef_search
)
cls.strategy = cast(GenericStrategy, cls._skill.skill_context.strategy)
cls.oef_dialogues = cast(
OefSearchDialogues, cls._skill.skill_context.oef_search_dialogues
)
cls.list_of_messages = (
DialogueMessage(
OefSearchMessage.Performative.SEARCH_SERVICES, {"query": "some_query"}
),
)
def test_setup(self):
"""Test the setup method of the oef_search handler."""
assert self.oef_search_handler.setup() is None
self.assert_quantity_in_outbox(0)
def test_handle_unidentified_dialogue(self):
"""Test the _handle_unidentified_dialogue method of the oef_search handler."""
# setup
incorrect_dialogue_reference = ("", "")
incoming_message = self.build_incoming_message(
message_type=OefSearchMessage,
dialogue_reference=incorrect_dialogue_reference,
performative=OefSearchMessage.Performative.SEARCH_SERVICES,
)
# operation
with patch.object(self.oef_search_handler.context.logger, "log") as mock_logger:
self.oef_search_handler.handle(incoming_message)
# after
mock_logger.assert_any_call(
logging.INFO,
f"received invalid oef_search message={incoming_message}, unidentified dialogue.",
)
def test_handle_error(self):
"""Test the _handle_error method of the oef_search handler."""
# setup
oef_dialogue = self.prepare_skill_dialogue(
dialogues=self.oef_dialogues, messages=self.list_of_messages[:1],
)
incoming_message = self.build_incoming_message_for_skill_dialogue(
dialogue=oef_dialogue,
performative=OefSearchMessage.Performative.OEF_ERROR,
oef_error_operation=OefSearchMessage.OefErrorOperation.SEARCH_SERVICES,
)
# operation
with patch.object(self.oef_search_handler.context.logger, "log") as mock_logger:
self.oef_search_handler.handle(incoming_message)
# after
mock_logger.assert_any_call(
logging.INFO,
f"received oef_search error message={incoming_message} in dialogue={oef_dialogue}.",
)
def test_handle_search_zero_agents(self):
"""Test the _handle_search method of the oef_search handler."""
# setup
oef_dialogue = self.prepare_skill_dialogue(
dialogues=self.oef_dialogues, messages=self.list_of_messages[:1],
)
incoming_message = self.build_incoming_message_for_skill_dialogue(
dialogue=oef_dialogue,
performative=OefSearchMessage.Performative.SEARCH_RESULT,
agents=tuple(),
agents_info=OefSearchMessage.AgentsInfo({}),
)
# operation
with patch.object(self.oef_search_handler.context.logger, "log") as mock_logger:
self.oef_search_handler.handle(incoming_message)
# after
mock_logger.assert_any_call(
logging.INFO,
f"found no agents in dialogue={oef_dialogue}, continue searching.",
)
def test_handle_search_i(self):
"""Test the _handle_search method of the oef_search handler where is_stop_searching_on_result is True."""
# setup
self.strategy._max_negotiations = 3
self.strategy._is_stop_searching_on_result = True
self.strategy._is_searching = True
oef_dialogue = self.prepare_skill_dialogue(
dialogues=self.oef_dialogues, messages=self.list_of_messages[:1],
)
agents = ("agnt1", "agnt2")
incoming_message = self.build_incoming_message_for_skill_dialogue(
dialogue=oef_dialogue,
performative=OefSearchMessage.Performative.SEARCH_RESULT,
agents=agents,
agents_info=OefSearchMessage.AgentsInfo(
{"agent_1": {"key_1": "value_1"}, "agent_2": {"key_2": "value_2"}}
),
)
# operation
with patch.object(self.oef_search_handler.context.logger, "log") as mock_logger:
self.oef_search_handler.handle(incoming_message)
# after
mock_logger.assert_any_call(
logging.INFO, f"found agents={list(agents)}, stopping search."
)
assert self.strategy.is_searching is False
self.assert_quantity_in_outbox(len(agents))
for agent in agents:
has_attributes, error_str = self.message_has_attributes(
actual_message=self.get_message_from_outbox(),
message_type=FipaMessage,
performative=FipaMessage.Performative.CFP,
to=agent,
sender=self.skill.skill_context.agent_address,
target=0,
query=self.strategy.get_service_query(),
)
assert has_attributes, error_str
mock_logger.assert_any_call(logging.INFO, f"sending CFP to agent={agent}")
def test_handle_search_ii(self):
"""Test the _handle_search method of the oef_search handler where is_stop_searching_on_result is False."""
# setup
self.strategy._max_negotiations = 3
self.strategy._is_stop_searching_on_result = False
self.strategy._is_searching = True
oef_dialogue = self.prepare_skill_dialogue(
dialogues=self.oef_dialogues, messages=self.list_of_messages[:1],
)
agents = ("agnt1", "agnt2")
incoming_message = self.build_incoming_message_for_skill_dialogue(
dialogue=oef_dialogue,
performative=OefSearchMessage.Performative.SEARCH_RESULT,
agents=agents,
agents_info=OefSearchMessage.AgentsInfo(
{"agent_1": {"key_1": "value_1"}, "agent_2": {"key_2": "value_2"}}
),
)
# operation
with patch.object(self.oef_search_handler.context.logger, "log") as mock_logger:
self.oef_search_handler.handle(incoming_message)
# after
mock_logger.assert_any_call(logging.INFO, f"found agents={list(agents)}.")
assert self.strategy.is_searching is True
self.assert_quantity_in_outbox(len(agents))
for agent in agents:
has_attributes, error_str = self.message_has_attributes(
actual_message=self.get_message_from_outbox(),
message_type=FipaMessage,
performative=FipaMessage.Performative.CFP,
to=agent,
sender=self.skill.skill_context.agent_address,
target=0,
query=self.strategy.get_service_query(),
)
assert has_attributes, error_str
mock_logger.assert_any_call(logging.INFO, f"sending CFP to agent={agent}")
def test_handle_search_more_than_max_negotiation(self):
"""Test the _handle_search method of the oef_search handler where number of agents is more than max_negotiation."""
# setup
self.strategy._max_negotiations = 1
oef_dialogue = self.prepare_skill_dialogue(
dialogues=self.oef_dialogues, messages=self.list_of_messages[:1],
)
agents = ("agnt1", "agnt2")
incoming_message = self.build_incoming_message_for_skill_dialogue(
dialogue=oef_dialogue,
performative=OefSearchMessage.Performative.SEARCH_RESULT,
agents=agents,
agents_info=OefSearchMessage.AgentsInfo(
{"agent_1": {"key_1": "value_1"}, "agent_2": {"key_2": "value_2"}}
),
)
# operation
with patch.object(self.oef_search_handler.context.logger, "log") as mock_logger:
self.oef_search_handler.handle(incoming_message)
# after
mock_logger.assert_any_call(
logging.INFO, f"found agents={list(agents)}, stopping search."
)
assert not self.strategy.is_searching
self.assert_quantity_in_outbox(self.strategy._max_negotiations)
for idx in range(0, self.strategy._max_negotiations):
has_attributes, error_str = self.message_has_attributes(
actual_message=self.get_message_from_outbox(),
message_type=FipaMessage,
performative=FipaMessage.Performative.CFP,
to=agents[idx],
sender=self.skill.skill_context.agent_address,
target=0,
query=self.strategy.get_service_query(),
)
assert has_attributes, error_str
mock_logger.assert_any_call(
logging.INFO, f"sending CFP to agent={agents[idx]}"
)
def test_handle_invalid(self):
"""Test the _handle_invalid method of the oef_search handler."""
# setup
invalid_performative = OefSearchMessage.Performative.UNREGISTER_SERVICE
incoming_message = self.build_incoming_message(
message_type=OefSearchMessage,
dialogue_reference=("1", ""),
performative=invalid_performative,
service_description="some_service_description",
)
# operation
with patch.object(self.oef_search_handler.context.logger, "log") as mock_logger:
self.oef_search_handler.handle(incoming_message)
# after
mock_logger.assert_any_call(
logging.WARNING,
f"cannot handle oef_search message of performative={invalid_performative} in dialogue={self.oef_dialogues.get_dialogue(incoming_message)}.",
)
def test_teardown(self):
"""Test the teardown method of the oef_search handler."""
assert self.oef_search_handler.teardown() is None
self.assert_quantity_in_outbox(0)
class TestGenericSigningHandler(BaseSkillTestCase):
"""Test signing handler of generic buyer."""
path_to_skill = Path(ROOT_DIR, "packages", "fetchai", "skills", "generic_buyer")
is_agent_to_agent_messages = False
@classmethod
def setup(cls):
"""Setup the test class."""
super().setup()
cls.signing_handler = cast(
GenericSigningHandler, cls._skill.skill_context.handlers.signing
)
cls.strategy = cast(GenericStrategy, cls._skill.skill_context.strategy)
cls.fipa_dialogues = cast(
FipaDialogues, cls._skill.skill_context.fipa_dialogues
)
cls.ledger_api_dialogues = cast(
LedgerApiDialogues, cls._skill.skill_context.ledger_api_dialogues
)
cls.signing_dialogues = cast(
SigningDialogues, cls._skill.skill_context.signing_dialogues
)
cls.terms = Terms(
"some_ledger_id",
cls._skill.skill_context.agent_address,
"counterprty",
{"currency_id": 50},
{"good_id": -10},
"some_nonce",
)
cls.list_of_fipa_messages = (
DialogueMessage(FipaMessage.Performative.CFP, {"query": "some_query"}),
DialogueMessage(
FipaMessage.Performative.PROPOSE, {"proposal": "some_proposal"}
),
DialogueMessage(FipaMessage.Performative.ACCEPT),
DialogueMessage(
FipaMessage.Performative.MATCH_ACCEPT_W_INFORM,
{"info": {"address": "some_term_sender_address"}},
),
DialogueMessage(
FipaMessage.Performative.INFORM,
{"info": {"transaction_digest": "some_transaction_digest_body"}},
),
)
cls.list_of_signing_messages = (
DialogueMessage(
SigningMessage.Performative.SIGN_TRANSACTION,
{
"terms": cls.terms,
"raw_transaction": SigningMessage.RawTransaction(
"some_ledger_id", {"some_key": "some_value"}
),
},
),
)
cls.list_of_ledger_api_messages = (
DialogueMessage(LedgerApiMessage.Performative.GET_RAW_TRANSACTION, {}),
DialogueMessage(LedgerApiMessage.Performative.RAW_TRANSACTION, {}),
DialogueMessage(LedgerApiMessage.Performative.SEND_SIGNED_TRANSACTION, {}),
DialogueMessage(LedgerApiMessage.Performative.TRANSACTION_DIGEST, {}),
)
def test_setup(self):
"""Test the setup method of the signing handler."""
assert self.signing_handler.setup() is None
self.assert_quantity_in_outbox(0)
def test_handle_unidentified_dialogue(self):
"""Test the _handle_unidentified_dialogue method of the signing handler."""
# setup
incorrect_dialogue_reference = ("", "")
incoming_message = self.build_incoming_message(
message_type=SigningMessage,
dialogue_reference=incorrect_dialogue_reference,
performative=SigningMessage.Performative.ERROR,
error_code=SigningMessage.ErrorCode.UNSUCCESSFUL_MESSAGE_SIGNING,
to=str(self.skill.skill_context.skill_id),
)
# operation
with patch.object(self.signing_handler.context.logger, "log") as mock_logger:
self.signing_handler.handle(incoming_message)
# after
mock_logger.assert_any_call(
logging.INFO,
f"received invalid signing message={incoming_message}, unidentified dialogue.",
)
    def test_handle_signed_transaction_last_ledger_api_message_is_none(self):
        """Test the _handle_signed_transaction method of the signing handler where the last ledger_api message is None."""
# setup
signing_dialogue = cast(
SigningDialogue,
self.prepare_skill_dialogue(
dialogues=self.signing_dialogues,
messages=self.list_of_signing_messages[:1],
),
)
ledger_api_dialogue = cast(
LedgerApiDialogue,
self.prepare_skill_dialogue(
dialogues=self.ledger_api_dialogues,
messages=self.list_of_ledger_api_messages[:2],
),
)
signing_dialogue.associated_ledger_api_dialogue = ledger_api_dialogue
signing_dialogue.associated_ledger_api_dialogue._incoming_messages = []
incoming_message = self.build_incoming_message_for_skill_dialogue(
dialogue=signing_dialogue,
performative=SigningMessage.Performative.SIGNED_TRANSACTION,
signed_transaction=SigningMessage.SignedTransaction(
"some_ledger_id", {"some_key": "some_value"}
),
)
# operation
with pytest.raises(
ValueError, match="Could not retrieve last message in ledger api dialogue"
):
with patch.object(
self.signing_handler.context.logger, "log"
) as mock_logger:
self.signing_handler.handle(incoming_message)
# after
mock_logger.assert_any_call(logging.INFO, "transaction signing was successful.")
    def test_handle_signed_transaction_last_ledger_api_message_is_not_none(self):
"""Test the _handle_signed_transaction method of the signing handler where the last ledger_api message is not None."""
# setup
signing_counterparty = self.skill.skill_context.decision_maker_address
signing_dialogue = cast(
SigningDialogue,
self.prepare_skill_dialogue(
dialogues=self.signing_dialogues,
messages=self.list_of_signing_messages[:1],
counterparty=signing_counterparty,
),
)
ledger_api_dialogue = cast(
LedgerApiDialogue,
self.prepare_skill_dialogue(
dialogues=self.ledger_api_dialogues,
messages=self.list_of_ledger_api_messages[:2],
counterparty=LEDGER_API_ADDRESS,
),
)
signing_dialogue.associated_ledger_api_dialogue = ledger_api_dialogue
incoming_message = cast(
SigningMessage,
self.build_incoming_message_for_skill_dialogue(
dialogue=signing_dialogue,
performative=SigningMessage.Performative.SIGNED_TRANSACTION,
signed_transaction=SigningMessage.SignedTransaction(
"some_ledger_id", {"some_key": "some_value"}
),
),
)
# operation
with patch.object(self.signing_handler.context.logger, "log") as mock_logger:
self.signing_handler.handle(incoming_message)
# after
mock_logger.assert_any_call(logging.INFO, "transaction signing was successful.")
self.assert_quantity_in_outbox(1)
has_attributes, error_str = self.message_has_attributes(
actual_message=self.get_message_from_outbox(),
message_type=LedgerApiMessage,
performative=LedgerApiMessage.Performative.SEND_SIGNED_TRANSACTION,
to=LEDGER_API_ADDRESS,
sender=str(self.skill.skill_context.skill_id),
signed_transaction=incoming_message.signed_transaction,
)
assert has_attributes, error_str
mock_logger.assert_any_call(logging.INFO, "sending transaction to ledger.")
def test_handle_error(self):
"""Test the _handle_error method of the signing handler."""
# setup
fipa_dialogue = cast(
FipaDialogue,
self.prepare_skill_dialogue(
dialogues=self.fipa_dialogues,
messages=self.list_of_fipa_messages[:4],
counterparty=COUNTERPARTY_AGENT_ADDRESS,
is_agent_to_agent_messages=True,
),
)
ledger_api_dialogue = cast(
LedgerApiDialogue,
self.prepare_skill_dialogue(
dialogues=self.ledger_api_dialogues,
messages=self.list_of_ledger_api_messages[:4],
counterparty=LEDGER_API_ADDRESS,
),
)
ledger_api_dialogue.associated_fipa_dialogue = fipa_dialogue
signing_counterparty = self.skill.skill_context.decision_maker_address
signing_dialogue = cast(
SigningDialogue,
self.prepare_skill_dialogue(
dialogues=self.signing_dialogues,
messages=self.list_of_signing_messages[:1],
counterparty=signing_counterparty,
),
)
signing_dialogue.associated_ledger_api_dialogue = ledger_api_dialogue
incoming_message = cast(
SigningMessage,
self.build_incoming_message_for_skill_dialogue(
dialogue=signing_dialogue,
performative=SigningMessage.Performative.ERROR,
error_code=SigningMessage.ErrorCode.UNSUCCESSFUL_TRANSACTION_SIGNING,
),
)
# operation
with patch.object(
self.signing_handler.context.behaviours.transaction, "failed_processing"
):
with patch.object(
self.signing_handler.context.logger, "log"
) as mock_logger:
self.signing_handler.handle(incoming_message)
# after
mock_logger.assert_any_call(
logging.INFO,
f"transaction signing was not successful. Error_code={incoming_message.error_code} in dialogue={signing_dialogue}",
)
behaviour = cast(
GenericTransactionBehaviour, self.skill.skill_context.behaviours.transaction
)
# finish_processing
assert behaviour.processing_time == 0.0
assert behaviour.processing is None
def test_handle_invalid(self):
"""Test the _handle_invalid method of the signing handler."""
# setup
invalid_performative = SigningMessage.Performative.SIGN_TRANSACTION
incoming_message = self.build_incoming_message(
message_type=SigningMessage,
dialogue_reference=("1", ""),
performative=invalid_performative,
terms=self.terms,
raw_transaction=SigningMessage.RawTransaction(
"some_ledger_id", {"some_key": "some_value"}
),
to=str(self.skill.skill_context.skill_id),
)
# operation
with patch.object(self.signing_handler.context.logger, "log") as mock_logger:
self.signing_handler.handle(incoming_message)
# after
mock_logger.assert_any_call(
logging.WARNING,
f"cannot handle signing message of performative={invalid_performative} in dialogue={self.signing_dialogues.get_dialogue(incoming_message)}.",
)
def test_teardown(self):
"""Test the teardown method of the signing handler."""
assert self.signing_handler.teardown() is None
self.assert_quantity_in_outbox(0)
class TestGenericLedgerApiHandler(BaseSkillTestCase):
"""Test ledger_api handler of generic buyer."""
path_to_skill = Path(ROOT_DIR, "packages", "fetchai", "skills", "generic_buyer")
is_agent_to_agent_messages = False
@classmethod
def setup(cls):
"""Setup the test class."""
super().setup()
cls.ledger_api_handler = cast(
GenericLedgerApiHandler, cls._skill.skill_context.handlers.ledger_api
)
cls.transaction_behaviour = cast(
GenericTransactionBehaviour, cls._skill.skill_context.behaviours.transaction
)
cls.strategy = cast(GenericStrategy, cls._skill.skill_context.strategy)
cls.logger = cls._skill.skill_context.logger
cls.fipa_dialogues = cast(
FipaDialogues, cls._skill.skill_context.fipa_dialogues
)
cls.ledger_api_dialogues = cast(
LedgerApiDialogues, cls._skill.skill_context.ledger_api_dialogues
)
cls.terms = Terms(
"some_ledger_id",
cls._skill.skill_context.agent_address,
"counterprty",
{"currency_id": 50},
{"good_id": -10},
"some_nonce",
)
cls.list_of_fipa_messages = (
DialogueMessage(FipaMessage.Performative.CFP, {"query": "some_query"}),
DialogueMessage(
FipaMessage.Performative.PROPOSE, {"proposal": "some_proposal"}
),
DialogueMessage(FipaMessage.Performative.ACCEPT),
DialogueMessage(
FipaMessage.Performative.MATCH_ACCEPT_W_INFORM,
{"info": {"address": "some_term_sender_address"}},
),
DialogueMessage(
FipaMessage.Performative.INFORM,
{"info": {"transaction_digest": "some_transaction_digest_body"}},
),
)
cls.raw_transaction = RawTransaction(
"some_ledger_id", {"some_key": "some_value"}
)
cls.signed_transaction = SignedTransaction(
"some_ledger_id", {"some_key": "some_value"}
)
cls.transaction_digest = TransactionDigest("some_ledger_id", "some_body")
cls.transaction_receipt = TransactionReceipt(
"some_ledger_id",
{"receipt_key": "receipt_value"},
{"transaction_key": "transaction_value"},
)
cls.list_of_ledger_api_messages = (
DialogueMessage(
LedgerApiMessage.Performative.GET_RAW_TRANSACTION, {"terms": cls.terms}
),
DialogueMessage(
LedgerApiMessage.Performative.RAW_TRANSACTION,
{"raw_transaction": cls.raw_transaction},
),
DialogueMessage(
LedgerApiMessage.Performative.SEND_SIGNED_TRANSACTION,
{"signed_transaction": cls.signed_transaction},
),
DialogueMessage(
LedgerApiMessage.Performative.TRANSACTION_DIGEST,
{"transaction_digest": cls.transaction_digest},
),
DialogueMessage(
LedgerApiMessage.Performative.GET_TRANSACTION_RECEIPT,
{"transaction_digest": cls.transaction_digest},
),
DialogueMessage(
LedgerApiMessage.Performative.TRANSACTION_RECEIPT,
{"transaction_receipt": cls.transaction_receipt},
),
)
def test_setup(self):
"""Test the setup method of the ledger_api handler."""
assert self.ledger_api_handler.setup() is None
self.assert_quantity_in_outbox(0)
def test_handle_unidentified_dialogue(self):
"""Test the _handle_unidentified_dialogue method of the ledger_api handler."""
# setup
incorrect_dialogue_reference = ("", "")
incoming_message = self.build_incoming_message(
message_type=LedgerApiMessage,
dialogue_reference=incorrect_dialogue_reference,
performative=LedgerApiMessage.Performative.GET_BALANCE,
ledger_id="some_ledger_id",
address="some_address",
)
# operation
with patch.object(self.logger, "log") as mock_logger:
self.ledger_api_handler.handle(incoming_message)
# after
mock_logger.assert_any_call(
logging.INFO,
f"received invalid ledger_api message={incoming_message}, unidentified dialogue.",
)
def test_handle_balance_positive_balance(self):
"""Test the _handle_balance method of the ledger_api handler where balance is positive."""
# setup
balance = 10
ledger_api_dialogue = cast(
LedgerApiDialogue,
self.prepare_skill_dialogue(
dialogues=self.ledger_api_dialogues,
messages=(
DialogueMessage(
LedgerApiMessage.Performative.GET_BALANCE,
{"ledger_id": "some_ledger_id", "address": "some_address"},
),
),
counterparty=LEDGER_API_ADDRESS,
),
)
incoming_message = cast(
LedgerApiMessage,
self.build_incoming_message_for_skill_dialogue(
dialogue=ledger_api_dialogue,
performative=LedgerApiMessage.Performative.BALANCE,
ledger_id="some-Ledger_id",
balance=balance,
),
)
# operation
with patch.object(self.logger, "log") as mock_logger:
self.ledger_api_handler.handle(incoming_message)
# after
mock_logger.assert_any_call(
logging.INFO,
f"starting balance on {self.strategy.ledger_id} ledger={incoming_message.balance}.",
)
assert self.strategy.balance == balance
assert self.strategy.is_searching
def test_handle_balance_zero_balance(self):
"""Test the _handle_balance method of the ledger_api handler where balance is zero."""
# setup
balance = 0
ledger_api_dialogue = cast(
LedgerApiDialogue,
self.prepare_skill_dialogue(
dialogues=self.ledger_api_dialogues,
messages=(
DialogueMessage(
LedgerApiMessage.Performative.GET_BALANCE,
{"ledger_id": "some_ledger_id", "address": "some_address"},
),
),
counterparty=LEDGER_API_ADDRESS,
),
)
incoming_message = cast(
LedgerApiMessage,
self.build_incoming_message_for_skill_dialogue(
dialogue=ledger_api_dialogue,
performative=LedgerApiMessage.Performative.BALANCE,
ledger_id="some-Ledger_id",
balance=balance,
),
)
# operation
with patch.object(self.logger, "log") as mock_logger:
self.ledger_api_handler.handle(incoming_message)
# after
mock_logger.assert_any_call(
logging.WARNING,
f"you have no starting balance on {self.strategy.ledger_id} ledger! Stopping skill {self.strategy.context.skill_id}.",
)
assert not self.skill.skill_context.is_active
def test_handle_raw_transaction(self):
"""Test the _handle_raw_transaction method of the ledger_api handler."""
# setup
ledger_api_dialogue = cast(
LedgerApiDialogue,
self.prepare_skill_dialogue(
dialogues=self.ledger_api_dialogues,
messages=self.list_of_ledger_api_messages[:1],
counterparty=LEDGER_API_ADDRESS,
),
)
fipa_dialogue = cast(
FipaDialogue,
self.prepare_skill_dialogue(
dialogues=self.fipa_dialogues,
messages=self.list_of_fipa_messages[:4],
is_agent_to_agent_messages=True,
),
)
ledger_api_dialogue.associated_fipa_dialogue = fipa_dialogue
fipa_dialogue.terms = self.terms
incoming_message = cast(
LedgerApiMessage,
self.build_incoming_message_for_skill_dialogue(
dialogue=ledger_api_dialogue,
performative=LedgerApiMessage.Performative.RAW_TRANSACTION,
raw_transaction=self.raw_transaction,
),
)
# operation
with patch.object(self.logger, "log") as mock_logger:
self.ledger_api_handler.handle(incoming_message)
# after
mock_logger.assert_any_call(
logging.INFO, f"received raw transaction={incoming_message}"
)
message_quantity = self.get_quantity_in_decision_maker_inbox()
assert (
message_quantity == 1
), f"Invalid number of messages in decision maker queue. Expected {1}. Found {message_quantity}."
has_attributes, error_str = self.message_has_attributes(
actual_message=self.get_message_from_decision_maker_inbox(),
message_type=SigningMessage,
performative=SigningMessage.Performative.SIGN_TRANSACTION,
to=self.skill.skill_context.decision_maker_address,
sender=str(self.skill.skill_context.skill_id),
terms=self.terms,
)
assert has_attributes, error_str
mock_logger.assert_any_call(
logging.INFO,
"proposing the transaction to the decision maker. Waiting for confirmation ...",
)
def test_handle_transaction_digest(self):
"""Test the _handle_transaction_digest method of the ledger_api handler."""
# setup
ledger_api_dialogue = cast(
LedgerApiDialogue,
self.prepare_skill_dialogue(
dialogues=self.ledger_api_dialogues,
messages=self.list_of_ledger_api_messages[:3],
counterparty=LEDGER_API_ADDRESS,
),
)
incoming_message = cast(
LedgerApiMessage,
self.build_incoming_message_for_skill_dialogue(
dialogue=ledger_api_dialogue,
performative=LedgerApiMessage.Performative.TRANSACTION_DIGEST,
transaction_digest=self.transaction_digest,
),
)
# operation
with patch.object(self.logger, "log") as mock_logger:
self.ledger_api_handler.handle(incoming_message)
# after
mock_logger.assert_any_call(
logging.INFO,
f"transaction was successfully submitted. Transaction digest={incoming_message.transaction_digest}",
)
self.assert_quantity_in_outbox(1)
has_attributes, error_str = self.message_has_attributes(
actual_message=self.get_message_from_outbox(),
message_type=LedgerApiMessage,
performative=LedgerApiMessage.Performative.GET_TRANSACTION_RECEIPT,
to=incoming_message.sender,
sender=str(self.skill.skill_context.skill_id),
transaction_digest=self.transaction_digest,
)
assert has_attributes, error_str
mock_logger.assert_any_call(
logging.INFO, "checking transaction is settled.",
)
def test_handle_transaction_receipt_i(self):
"""Test the _handle_transaction_receipt method of the ledger_api handler."""
# setup
ledger_api_dialogue = cast(
LedgerApiDialogue,
self.prepare_skill_dialogue(
dialogues=self.ledger_api_dialogues,
messages=self.list_of_ledger_api_messages[:5],
counterparty=LEDGER_API_ADDRESS,
),
)
fipa_dialogue = cast(
FipaDialogue,
self.prepare_skill_dialogue(
dialogues=self.fipa_dialogues,
messages=self.list_of_fipa_messages[:4],
is_agent_to_agent_messages=True,
),
)
ledger_api_dialogue.associated_fipa_dialogue = fipa_dialogue
fipa_dialogue.terms = self.terms
incoming_message = cast(
LedgerApiMessage,
self.build_incoming_message_for_skill_dialogue(
dialogue=ledger_api_dialogue,
performative=LedgerApiMessage.Performative.TRANSACTION_RECEIPT,
transaction_receipt=self.transaction_receipt,
),
)
# operation
with patch.object(
self.ledger_api_handler.context.behaviours.transaction, "finish_processing"
):
with patch.object(LedgerApis, "is_transaction_settled", return_value=True):
with patch.object(self.logger, "log") as mock_logger:
self.ledger_api_handler.handle(incoming_message)
# after
mock_logger.assert_any_call(
logging.INFO,
f"transaction confirmed, informing counterparty={fipa_dialogue.dialogue_label.dialogue_opponent_addr[-5:]} of transaction digest.",
)
self.assert_quantity_in_outbox(1)
has_attributes, error_str = self.message_has_attributes(
actual_message=self.get_message_from_outbox(),
message_type=FipaMessage,
performative=FipaMessage.Performative.INFORM,
to=COUNTERPARTY_AGENT_ADDRESS,
sender=self.skill.skill_context.agent_address,
info={"transaction_digest": self.transaction_digest.body},
)
assert has_attributes, error_str
def test_handle_transaction_receipt_ii(self):
"""Test the _handle_transaction_receipt method of the ledger_api handler where fipa dialogue's last_incoming_message is None."""
# setup
ledger_api_dialogue = cast(
LedgerApiDialogue,
self.prepare_skill_dialogue(
dialogues=self.ledger_api_dialogues,
messages=self.list_of_ledger_api_messages[:5],
counterparty=LEDGER_API_ADDRESS,
),
)
fipa_dialogue = cast(
FipaDialogue,
self.prepare_skill_dialogue(
dialogues=self.fipa_dialogues,
messages=self.list_of_fipa_messages[:4],
is_agent_to_agent_messages=True,
),
)
ledger_api_dialogue.associated_fipa_dialogue = fipa_dialogue
fipa_dialogue._incoming_messages = []
fipa_dialogue.terms = self.terms
incoming_message = cast(
LedgerApiMessage,
self.build_incoming_message_for_skill_dialogue(
dialogue=ledger_api_dialogue,
performative=LedgerApiMessage.Performative.TRANSACTION_RECEIPT,
transaction_receipt=self.transaction_receipt,
),
)
# operation
with patch.object(
self.ledger_api_handler.context.behaviours.transaction, "finish_processing"
):
with patch.object(LedgerApis, "is_transaction_settled", return_value=True):
with patch.object(self.logger, "log"):
with pytest.raises(
ValueError, match="Could not retrieve last fipa message"
):
self.ledger_api_handler.handle(incoming_message)
# after
self.assert_quantity_in_outbox(0)
def test_handle_transaction_receipt_iii(self):
"""Test the _handle_transaction_receipt method of the ledger_api handler where tx is NOT settled."""
# setup
ledger_api_dialogue = cast(
LedgerApiDialogue,
self.prepare_skill_dialogue(
dialogues=self.ledger_api_dialogues,
messages=self.list_of_ledger_api_messages[:5],
counterparty=LEDGER_API_ADDRESS,
),
)
fipa_dialogue = cast(
FipaDialogue,
self.prepare_skill_dialogue(
dialogues=self.fipa_dialogues,
messages=self.list_of_fipa_messages[:4],
is_agent_to_agent_messages=True,
),
)
ledger_api_dialogue.associated_fipa_dialogue = fipa_dialogue
fipa_dialogue.terms = self.terms
incoming_message = cast(
LedgerApiMessage,
self.build_incoming_message_for_skill_dialogue(
dialogue=ledger_api_dialogue,
performative=LedgerApiMessage.Performative.TRANSACTION_RECEIPT,
transaction_receipt=self.transaction_receipt,
),
)
# operation
with patch.object(
self.ledger_api_handler.context.behaviours.transaction, "failed_processing"
):
with patch.object(LedgerApis, "is_transaction_settled", return_value=False):
with patch.object(self.logger, "log") as mock_logger:
self.ledger_api_handler.handle(incoming_message)
# after
self.assert_quantity_in_outbox(0)
assert self.transaction_behaviour.processing is None
assert self.transaction_behaviour.processing_time == 0.0
mock_logger.assert_any_call(
logging.INFO,
f"transaction_receipt={self.transaction_receipt} not settled or not valid, aborting",
)
def test_handle_error(self):
"""Test the _handle_error method of the ledger_api handler."""
# setup
ledger_api_dialogue = self.prepare_skill_dialogue(
dialogues=self.ledger_api_dialogues,
messages=self.list_of_ledger_api_messages[:1],
)
incoming_message = cast(
LedgerApiMessage,
self.build_incoming_message_for_skill_dialogue(
dialogue=ledger_api_dialogue,
performative=LedgerApiMessage.Performative.ERROR,
code=1,
),
)
ledger_api_dialogue.associated_fipa_dialogue = "mock"
# operation
with patch.object(
self.ledger_api_handler.context.behaviours.transaction, "failed_processing"
):
with patch.object(self.logger, "log") as mock_logger:
self.ledger_api_handler.handle(incoming_message)
# after
mock_logger.assert_any_call(
logging.INFO,
f"received ledger_api error message={incoming_message} in dialogue={ledger_api_dialogue}.",
)
def test_handle_invalid(self):
"""Test the _handle_invalid method of the ledger_api handler."""
# setup
invalid_performative = LedgerApiMessage.Performative.GET_BALANCE
incoming_message = self.build_incoming_message(
message_type=LedgerApiMessage,
dialogue_reference=("1", ""),
performative=invalid_performative,
ledger_id="some_ledger_id",
address="some_address",
to=str(self.skill.public_id),
)
# operation
with patch.object(self.logger, "log") as mock_logger:
self.ledger_api_handler.handle(incoming_message)
# after
mock_logger.assert_any_call(
logging.WARNING,
f"cannot handle ledger_api message of performative={invalid_performative} in dialogue={self.ledger_api_dialogues.get_dialogue(incoming_message)}.",
)
def test_teardown(self):
"""Test the teardown method of the ledger_api handler."""
assert self.ledger_api_handler.teardown() is None
self.assert_quantity_in_outbox(0)
| 39.179833 | 159 | 0.631739 | 6,184 | 61,003 | 5.90249 | 0.054657 | 0.053012 | 0.018493 | 0.021862 | 0.856168 | 0.827402 | 0.799348 | 0.790937 | 0.763267 | 0.744254 | 0 | 0.003582 | 0.286084 | 61,003 | 1,556 | 160 | 39.205013 | 0.834539 | 0.076652 | 0 | 0.691742 | 0 | 0.000818 | 0.095744 | 0.037525 | 0 | 0 | 0 | 0 | 0.08749 | 1 | 0.035977 | false | 0 | 0.016353 | 0 | 0.061325 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
71665bea64bd67f458aa304fbd03696fff5bceee | 44 | py | Python | testcontainers/google/__init__.py | FrancisLfg/testcontainers-python | d9bd61a32194b821dfdb432e77f70b7ba7e8b9d3 | [
"Apache-2.0"
] | null | null | null | testcontainers/google/__init__.py | FrancisLfg/testcontainers-python | d9bd61a32194b821dfdb432e77f70b7ba7e8b9d3 | [
"Apache-2.0"
] | null | null | null | testcontainers/google/__init__.py | FrancisLfg/testcontainers-python | d9bd61a32194b821dfdb432e77f70b7ba7e8b9d3 | [
"Apache-2.0"
] | null | null | null | from .pubsub import PubSubContainer # noqa
| 22 | 43 | 0.795455 | 5 | 44 | 7 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.159091 | 44 | 1 | 44 | 44 | 0.945946 | 0.090909 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7172456d8479c3d74d8b85d601e9972ffde1b942 | 142 | py | Python | menu/__init__.py | grecoe/Python-Command-Line-Utility | e98efb51aa4900e6afca086f1a7886ec3889de40 | [
"MIT"
] | null | null | null | menu/__init__.py | grecoe/Python-Command-Line-Utility | e98efb51aa4900e6afca086f1a7886ec3889de40 | [
"MIT"
] | null | null | null | menu/__init__.py | grecoe/Python-Command-Line-Utility | e98efb51aa4900e6afca086f1a7886ec3889de40 | [
"MIT"
] | null | null | null | from menu.menuaction import MenuAction
from menu.menuitem import MenuItem
from menu.appmenu import Menu
from menu.loader import FunctionLoader | 35.5 | 38 | 0.866197 | 20 | 142 | 6.15 | 0.4 | 0.260163 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105634 | 142 | 4 | 39 | 35.5 | 0.968504 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
718b93b9868f0ac90fc79234f91a76d0f7670a55 | 51 | py | Python | bindings/pydairlib/lcm/__init__.py | HaoxiangYou/dairlib | 30eb15ec0bc62ec0dbddd5c30d5c286a9306b567 | [
"BSD-3-Clause"
] | null | null | null | bindings/pydairlib/lcm/__init__.py | HaoxiangYou/dairlib | 30eb15ec0bc62ec0dbddd5c30d5c286a9306b567 | [
"BSD-3-Clause"
] | null | null | null | bindings/pydairlib/lcm/__init__.py | HaoxiangYou/dairlib | 30eb15ec0bc62ec0dbddd5c30d5c286a9306b567 | [
"BSD-3-Clause"
] | null | null | null | from .lcm_trajectory import *
from .lcm_py import * | 25.5 | 29 | 0.784314 | 8 | 51 | 4.75 | 0.625 | 0.368421 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137255 | 51 | 2 | 30 | 25.5 | 0.863636 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
71a36531fda6adb2ac1f5ab719cf6d5bdb0b5ac0 | 13,680 | py | Python | tf_slim/ops/sparse_ops_test.py | ShanuDey/tf-slim | 19c840abfa6de567d760254c42ea68760cf5d9f0 | [
"Apache-2.0"
] | 1 | 2020-10-01T23:37:41.000Z | 2020-10-01T23:37:41.000Z | tf_slim/ops/sparse_ops_test.py | ShanuDey/tf-slim | 19c840abfa6de567d760254c42ea68760cf5d9f0 | [
"Apache-2.0"
] | null | null | null | tf_slim/ops/sparse_ops_test.py | ShanuDey/tf-slim | 19c840abfa6de567d760254c42ea68760cf5d9f0 | [
"Apache-2.0"
] | null | null | null | # Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for tf_slim.ops.sparse_ops."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
from tf_slim.ops import sparse_ops
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import sparse_tensor
from tensorflow.python.ops import array_ops
from tensorflow.python.platform import test
def _assert_sparse_tensor_value(test_case, expected, actual):
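  # Helper: assert that `actual` matches `expected` in indices, values, and
  # dense_shape, comparing both contents and dtypes (indices/shape are int64).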
test_case.assertEqual(np.int64, np.array(actual.indices).dtype)
test_case.assertAllEqual(expected.indices, actual.indices)
test_case.assertEqual(
np.array(expected.values).dtype, np.array(actual.values).dtype)
test_case.assertAllEqual(expected.values, actual.values)
test_case.assertEqual(np.int64, np.array(actual.dense_shape).dtype)
test_case.assertAllEqual(expected.dense_shape, actual.dense_shape)
class DenseToSparseTensorTest(test.TestCase):
def test_dense_to_sparse_tensor_1d(self):
with self.cached_session() as sess:
st = sparse_ops.dense_to_sparse_tensor([1, 0, 2, 0])
result = sess.run(st)
self.assertEqual(result.indices.dtype, np.int64)
self.assertEqual(result.values.dtype, np.int32)
self.assertEqual(result.dense_shape.dtype, np.int64)
self.assertAllEqual([[0], [2]], result.indices)
self.assertAllEqual([1, 2], result.values)
self.assertAllEqual([4], result.dense_shape)
def test_dense_to_sparse_tensor_1d_float(self):
with self.cached_session() as sess:
st = sparse_ops.dense_to_sparse_tensor([1.5, 0.0, 2.3, 0.0])
result = sess.run(st)
self.assertEqual(result.indices.dtype, np.int64)
self.assertEqual(result.values.dtype, np.float32)
self.assertEqual(result.dense_shape.dtype, np.int64)
self.assertAllEqual([[0], [2]], result.indices)
self.assertAllClose([1.5, 2.3], result.values)
self.assertAllEqual([4], result.dense_shape)
def test_dense_to_sparse_tensor_1d_bool(self):
with self.cached_session() as sess:
st = sparse_ops.dense_to_sparse_tensor([True, False, True, False])
result = sess.run(st)
self.assertEqual(result.indices.dtype, np.int64)
      self.assertEqual(result.values.dtype, np.bool_)
self.assertEqual(result.dense_shape.dtype, np.int64)
self.assertAllEqual([[0], [2]], result.indices)
self.assertAllEqual([True, True], result.values)
self.assertAllEqual([4], result.dense_shape)
def test_dense_to_sparse_tensor_1d_str(self):
with self.cached_session() as sess:
st = sparse_ops.dense_to_sparse_tensor([b'qwe', b'', b'ewq', b''])
result = sess.run(st)
self.assertEqual(result.indices.dtype, np.int64)
      self.assertEqual(result.values.dtype, np.object_)
self.assertEqual(result.dense_shape.dtype, np.int64)
self.assertAllEqual([[0], [2]], result.indices)
self.assertAllEqual([b'qwe', b'ewq'], result.values)
self.assertAllEqual([4], result.dense_shape)
def test_dense_to_sparse_tensor_1d_str_special_ignore(self):
with self.cached_session() as sess:
st = sparse_ops.dense_to_sparse_tensor(
[b'qwe', b'', b'ewq', b''], ignore_value=b'qwe')
result = sess.run(st)
self.assertEqual(result.indices.dtype, np.int64)
      self.assertEqual(result.values.dtype, np.object_)
self.assertEqual(result.dense_shape.dtype, np.int64)
self.assertAllEqual([[1], [2], [3]], result.indices)
self.assertAllEqual([b'', b'ewq', b''], result.values)
self.assertAllEqual([4], result.dense_shape)
def test_dense_to_sparse_tensor_2d(self):
with self.cached_session() as sess:
st = sparse_ops.dense_to_sparse_tensor([[1, 2, 0, 0], [3, 4, 5, 0]])
result = sess.run(st)
self.assertAllEqual([[0, 0], [0, 1], [1, 0], [1, 1], [1, 2]],
result.indices)
self.assertAllEqual([1, 2, 3, 4, 5], result.values)
self.assertAllEqual([2, 4], result.dense_shape)
def test_dense_to_sparse_tensor_3d(self):
with self.cached_session() as sess:
st = sparse_ops.dense_to_sparse_tensor([[[1, 2, 0, 0], [3, 4, 5, 0]],
[[7, 8, 0, 0], [9, 0, 0, 0]]])
result = sess.run(st)
self.assertAllEqual([[0, 0, 0], [0, 0, 1], [0, 1, 0], [0, 1, 1], [0, 1, 2],
[1, 0, 0], [1, 0, 1], [1, 1, 0]], result.indices)
self.assertAllEqual([1, 2, 3, 4, 5, 7, 8, 9], result.values)
self.assertAllEqual([2, 2, 4], result.dense_shape)
def test_dense_to_sparse_tensor_unknown_1d_shape(self):
with self.cached_session() as sess:
tensor = array_ops.placeholder(shape=[None], dtype=dtypes.int32)
st = sparse_ops.dense_to_sparse_tensor(tensor)
result = sess.run(st, feed_dict={tensor: [0, 100, 0, 3]})
self.assertAllEqual([[1], [3]], result.indices)
self.assertAllEqual([100, 3], result.values)
self.assertAllEqual([4], result.dense_shape)
def test_dense_to_sparse_tensor_unknown_3d_shape(self):
with self.cached_session() as sess:
tensor = array_ops.placeholder(
shape=[None, None, None], dtype=dtypes.int32)
st = sparse_ops.dense_to_sparse_tensor(tensor)
result = sess.run(st,
feed_dict={
tensor: [[[1, 2, 0, 0], [3, 4, 5, 0]],
[[7, 8, 0, 0], [9, 0, 0, 0]]]
})
self.assertAllEqual([[0, 0, 0], [0, 0, 1], [0, 1, 0], [0, 1, 1], [0, 1, 2],
[1, 0, 0], [1, 0, 1], [1, 1, 0]], result.indices)
self.assertAllEqual([1, 2, 3, 4, 5, 7, 8, 9], result.values)
self.assertAllEqual([2, 2, 4], result.dense_shape)
def test_dense_to_sparse_unknown_rank(self):
ph = array_ops.placeholder(dtype=dtypes.int32)
with self.cached_session() as sess:
st = sparse_ops.dense_to_sparse_tensor(ph)
result = sess.run(st, feed_dict={ph: [[1, 2, 0, 0], [3, 4, 5, 0]]})
self.assertAllEqual([[0, 0], [0, 1], [1, 0], [1, 1], [1, 2]],
result.indices)
self.assertAllEqual([1, 2, 3, 4, 5], result.values)
self.assertAllEqual([2, 4], result.dense_shape)
class SparseRowEnvelopeTest(test.TestCase):
def test_sparse_row_envelope(self):
expected_sparse_row_envelope = [1, 0, 3]
with self.cached_session() as sess:
sparse_input = sparse_tensor.SparseTensor(
indices=[[0, 0], [2, 0], [2, 1], [2, 2]],
values=[0, 1, 2, 3],
dense_shape=[3, 3])
sparse_row_envelope = sess.run(
sparse_ops.sparse_row_envelope(sparse_input))
self.assertAllEqual(expected_sparse_row_envelope,
sparse_row_envelope)
def test_sparse_row_envelope_unsorted_indices(self):
expected_sparse_row_envelope = [1, 0, 3]
with self.cached_session() as sess:
sparse_input = sparse_tensor.SparseTensor(
indices=[[2, 0], [2, 2], [2, 1], [0, 0]],
values=[0, 1, 2, 3],
dense_shape=[3, 3])
sparse_row_envelope = sess.run(
sparse_ops.sparse_row_envelope(sparse_input))
self.assertAllEqual(expected_sparse_row_envelope,
sparse_row_envelope)
def test_sparse_row_envelope_empty_in_the_end(self):
expected_sparse_row_envelope = [1, 0, 3, 0, 0]
with self.cached_session() as sess:
sparse_input = sparse_tensor.SparseTensor(
indices=[[0, 0], [2, 0], [2, 1], [2, 2]],
values=[0, 1, 2, 3],
dense_shape=[5, 3])
sparse_row_envelope = sess.run(
sparse_ops.sparse_row_envelope(sparse_input))
self.assertAllEqual(expected_sparse_row_envelope,
sparse_row_envelope)
def test_sparse_row_envelope_empty_3d(self):
expected_sparse_row_envelope = [1, 0, 3, 0, 0]
with self.cached_session() as sess:
sparse_input = sparse_tensor.SparseTensor(
indices=[[0, 0, 0], [0, 2, 0], [0, 2, 1], [0, 2, 2]],
values=[0, 1, 2, 3],
dense_shape=[1, 5, 3])
sparse_row_envelope = sess.run(
sparse_ops.sparse_row_envelope(sparse_input, 1, 2))
self.assertAllEqual(expected_sparse_row_envelope,
sparse_row_envelope)
class IndicatorToSparseIdsTest(test.TestCase):
def test_indicators_to_sparse_ids_1d(self):
indicators = (0, 0, 1, 0)
sparse_ids = sparse_ops.indicators_to_sparse_ids(indicators)
with self.cached_session():
_assert_sparse_tensor_value(self, sparse_tensor.SparseTensorValue(
indices=((0,),),
values=(2,),
dense_shape=(1,),
), sparse_ids.eval())
def test_indicators_to_sparse_ids_2d(self):
indicators = (
(0, 0, 1, 0),
(1, 0, 0, 1),
)
sparse_ids = sparse_ops.indicators_to_sparse_ids(indicators)
with self.cached_session():
_assert_sparse_tensor_value(self, sparse_tensor.SparseTensorValue(
indices=((0, 0), (1, 0), (1, 1)),
values=(2, 0, 3),
dense_shape=(2, 2),
), sparse_ids.eval())
def test_indicators_to_sparse_ids_3d(self):
indicators = (
((0, 0, 1, 0, 0), (0, 0, 0, 0, 0)),
((1, 0, 0, 1, 0), (0, 0, 1, 0, 0)),
((0, 0, 0, 0, 0), (0, 0, 0, 0, 0)),
((1, 0, 0, 1, 1), (0, 0, 1, 0, 0)),
)
sparse_ids = sparse_ops.indicators_to_sparse_ids(indicators)
with self.cached_session():
_assert_sparse_tensor_value(self, sparse_tensor.SparseTensorValue(
indices=(
(0, 0, 0),
(1, 0, 0), (1, 0, 1), (1, 1, 0),
(3, 0, 0), (3, 0, 1), (3, 0, 2), (3, 1, 0)
), values=(
2,
0, 3, 2,
0, 3, 4, 2
), dense_shape=(4, 2, 3),
), sparse_ids.eval())
def test_int16_to_sparse_ids_2d(self):
indicators = (
(0, 0, 1, 0),
(1, 0, 0, 1),
)
sparse_ids = sparse_ops.indicators_to_sparse_ids(
indicators, dtype=dtypes.int16)
with self.cached_session():
_assert_sparse_tensor_value(self, sparse_tensor.SparseTensorValue(
indices=((0, 0), (1, 0), (1, 1)),
values=np.array((2, 0, 3), dtype=np.int16),
dense_shape=(2, 2),
), sparse_ids.eval())
def test_indicators_to_sparse_ids_ignore_value(self):
indicators = (
((-1, -1, 10, -1), (-1, -1, -1, -1)),
((11, -1, -1, 12), (-1, -1, 13, -1)),
)
sparse_ids = sparse_ops.indicators_to_sparse_ids(
indicators, ignore_value=-1)
with self.cached_session():
_assert_sparse_tensor_value(self, sparse_tensor.SparseTensorValue(
indices=((0, 0, 0), (1, 0, 0), (1, 0, 1), (1, 1, 0)),
values=(2, 0, 3, 2),
dense_shape=(2, 2, 2),
), sparse_ids.eval())
def test_string_indicators_to_sparse_ids(self):
indicators = (
(('', '', 'A', ''), ('', '', '', '')),
(('B', '', '', 'C'), ('', '', 'D', '')),
)
sparse_ids = sparse_ops.indicators_to_sparse_ids(indicators)
with self.cached_session():
_assert_sparse_tensor_value(self, sparse_tensor.SparseTensorValue(
indices=((0, 0, 0), (1, 0, 0), (1, 0, 1), (1, 1, 0)),
values=(2, 0, 3, 2),
dense_shape=(2, 2, 2),
), sparse_ids.eval())
def test_string_indicators_to_sparse_ids_ignore_value(self):
indicators = (
(('x', 'x', 'A', 'x'), ('x', 'x', 'x', 'x')),
(('B', 'x', 'x', 'C'), ('x', 'x', 'D', 'x')),
)
sparse_ids = sparse_ops.indicators_to_sparse_ids(
indicators, ignore_value='x')
with self.cached_session():
_assert_sparse_tensor_value(self, sparse_tensor.SparseTensorValue(
indices=((0, 0, 0), (1, 0, 0), (1, 0, 1), (1, 1, 0)),
values=(2, 0, 3, 2),
dense_shape=(2, 2, 2),
), sparse_ids.eval())
def test_indicators_to_sparse_ids_unknown_3d_shape(self):
indicators_values = (
((0, 0, 1, 0), (0, 0, 0, 0)),
((1, 0, 0, 1), (0, 0, 1, 0)),
)
indicators = array_ops.placeholder(
dtype=dtypes.int32, shape=(None, None, None))
sparse_ids = sparse_ops.indicators_to_sparse_ids(indicators)
with self.cached_session():
_assert_sparse_tensor_value(self, sparse_tensor.SparseTensorValue(
indices=((0, 0, 0), (1, 0, 0), (1, 0, 1), (1, 1, 0)),
values=(2, 0, 3, 2),
dense_shape=(2, 2, 2),
), sparse_ids.eval(feed_dict={indicators: indicators_values}))
def test_indicators_to_sparse_ids_unknown_rank(self):
indicators_values = (
((0, 0, 1, 0), (0, 0, 0, 0)),
((1, 0, 0, 1), (0, 0, 1, 0)),
)
indicators = array_ops.placeholder(dtype=dtypes.int32)
sparse_ids = sparse_ops.indicators_to_sparse_ids(indicators)
with self.cached_session():
_assert_sparse_tensor_value(self, sparse_tensor.SparseTensorValue(
indices=((0, 0, 0), (1, 0, 0), (1, 0, 1), (1, 1, 0)),
values=(2, 0, 3, 2),
dense_shape=(2, 2, 2),
), sparse_ids.eval(feed_dict={indicators: indicators_values}))
if __name__ == '__main__':
test.main()
| 40.473373 | 80 | 0.617544 | 1,936 | 13,680 | 4.134298 | 0.086777 | 0.025237 | 0.015742 | 0.017491 | 0.821214 | 0.784358 | 0.769365 | 0.758496 | 0.741504 | 0.727511 | 0 | 0.057738 | 0.223904 | 13,680 | 337 | 81 | 40.593472 | 0.696148 | 0.050804 | 0 | 0.573944 | 0 | 0 | 0.004088 | 0 | 0 | 0 | 0 | 0 | 0.228873 | 1 | 0.084507 | false | 0 | 0.03169 | 0 | 0.126761 | 0.003521 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
71d47193641202ca6550fbc7ba09aaf8b8e1d07f | 11,669 | py | Python | tests/test_relative_ordering.py | Joacchim/pytest-order | 1e873d5c2b67211ed3bc578ee72bb3d986906532 | [
"MIT"
] | 41 | 2021-03-16T07:57:00.000Z | 2022-03-01T10:02:10.000Z | tests/test_relative_ordering.py | Joacchim/pytest-order | 1e873d5c2b67211ed3bc578ee72bb3d986906532 | [
"MIT"
] | 39 | 2021-03-04T16:50:04.000Z | 2022-02-18T18:51:14.000Z | tests/test_relative_ordering.py | Joacchim/pytest-order | 1e873d5c2b67211ed3bc578ee72bb3d986906532 | [
"MIT"
] | 9 | 2021-03-04T18:27:12.000Z | 2021-12-16T06:46:13.000Z | # -*- coding: utf-8 -*-
from textwrap import dedent
import pytest
def test_relative(item_names_for):
test_content = (
"""
import pytest
@pytest.mark.order(after="test_second")
def test_third():
pass
def test_second():
pass
@pytest.mark.order(before="test_second")
def test_first():
pass
"""
)
assert item_names_for(test_content) == [
"test_first", "test_second", "test_third"
]
def test_relative2(item_names_for):
test_content = (
"""
import pytest
@pytest.mark.order(after="test_second")
def test_third():
pass
def test_second():
pass
@pytest.mark.order(before="test_second")
def test_first():
pass
def test_five():
pass
@pytest.mark.order(before="test_five")
def test_four():
pass
"""
)
assert item_names_for(test_content) == [
"test_first", "test_second", "test_third", "test_four", "test_five"
]
def test_relative3(item_names_for):
test_content = (
"""
import pytest
@pytest.mark.order(after="test_second")
def test_third():
pass
def test_second():
pass
@pytest.mark.order(before="test_second")
def test_first():
pass
def test_five():
pass
@pytest.mark.order(before="test_five")
def test_four():
pass
"""
)
assert item_names_for(test_content) == [
"test_first", "test_second", "test_third", "test_four", "test_five"
]
def test_relative_in_class(item_names_for):
tests_content = (
"""
import pytest
class Test:
@pytest.mark.order(after="test_b")
def test_a(self):
pass
def test_b(self):
pass
def test_c(self):
pass
"""
)
assert item_names_for(tests_content) == [
"Test::test_b", "Test::test_a", "Test::test_c"
]
def test_relative_in_classes(item_names_for):
tests_content = (
"""
import pytest
class TestA:
@pytest.mark.order(after="TestB::test_b")
def test_a(self):
pass
@pytest.mark.order(after="test_c")
def test_b(self):
pass
def test_c(self):
pass
class TestB:
@pytest.mark.order(before="TestA::test_c")
def test_a(self):
pass
def test_b(self):
pass
def test_c(self):
pass
"""
)
assert item_names_for(tests_content) == [
"TestB::test_a",
"TestA::test_c",
"TestA::test_b",
"TestB::test_b",
"TestA::test_a",
"TestB::test_c",
]
@pytest.fixture
def fixture_path(test_path):
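    # Builds a temporary test tree: mod1_test.py and mod2_test.py at the root plus
    # sub/mod3_test.py, with order marks that reference tests across modules.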
test_path.makepyfile(
mod1_test=(
"""
import pytest
class TestA:
@pytest.mark.order(after="mod2_test.py::TestB::test_b")
def test_a(self):
pass
@pytest.mark.order(after="sub/mod3_test.py::test_b")
def test_b(self):
pass
def test_c(self):
pass
"""
),
mod2_test=(
"""
import pytest
class TestB:
@pytest.mark.order(before="mod1_test.py::TestA::test_c")
def test_a(self):
pass
def test_b(self):
pass
def test_c(self):
pass
"""
),
)
test_path.mkpydir("sub")
path = test_path.tmpdir.join("sub", "mod3_test.py")
path.write(dedent(
"""
import pytest
@pytest.mark.order(before="mod2_test.py::TestB::test_c")
def test_a():
pass
def test_b():
pass
def test_c():
pass
"""
))
yield test_path
def test_relative_in_modules(fixture_path):
result = fixture_path.runpytest("-v")
result.assert_outcomes(passed=9, failed=0)
result.stdout.fnmatch_lines([
"mod2_test.py::TestB::test_a PASSED",
"mod1_test.py::TestA::test_c PASSED",
"mod2_test.py::TestB::test_b PASSED",
"mod1_test.py::TestA::test_a PASSED",
"sub/mod3_test.py::test_a PASSED",
"mod2_test.py::TestB::test_c PASSED",
"sub/mod3_test.py::test_b PASSED",
"mod1_test.py::TestA::test_b PASSED",
"sub/mod3_test.py::test_c PASSED",
])
def test_false_insert(item_names_for):
test_content = (
"""
import pytest
@pytest.mark.order(after="test_a")
def test_third():
pass
def test_second():
pass
@pytest.mark.order(before="test_b")
def test_first():
pass
"""
)
assert item_names_for(test_content) == [
"test_third", "test_second", "test_first"
]
def test_mixed_markers1(item_names_for):
test_content = (
"""
import pytest
@pytest.mark.order(2)
def test_1():
pass
@pytest.mark.order(after="test_1")
def test_2():
pass
@pytest.mark.order(1)
def test_3():
pass
"""
)
assert item_names_for(test_content) == ["test_3", "test_1", "test_2"]
def test_mixed_markers2(item_names_for):
test_content = (
"""
import pytest
@pytest.mark.order(2)
def test_1():
pass
@pytest.mark.order(1)
def test_2():
pass
@pytest.mark.order(before="test_2")
def test_3():
pass
"""
)
assert item_names_for(test_content) == ["test_3", "test_2", "test_1"]
def test_combined_markers1(item_names_for):
test_content = (
"""
import pytest
@pytest.mark.order(2)
def test_1():
pass
def test_2():
pass
@pytest.mark.order(index=1, before="test_2")
def test_3():
pass
"""
)
assert item_names_for(test_content) == ["test_3", "test_1", "test_2"]
def test_combined_markers2(item_names_for):
test_content = (
"""
import pytest
def test_1():
pass
@pytest.mark.order(index=2, before="test_1")
def test_2():
pass
@pytest.mark.order(index=1, before="test_1")
def test_3():
pass
"""
)
assert item_names_for(test_content) == ["test_3", "test_2", "test_1"]
def test_multiple_markers(item_names_for):
test_content = (
"""
import pytest
def test_1():
pass
@pytest.mark.order(before="test_1")
@pytest.mark.order(2)
def test_2():
pass
@pytest.mark.order(1)
@pytest.mark.order(before="test_1")
def test_3():
pass
"""
)
assert item_names_for(test_content) == ["test_3", "test_2", "test_1"]
def test_combined_markers3(item_names_for):
test_content = (
"""
import pytest
def test_1():
pass
@pytest.mark.order(index=2, before="test_3")
def test_2():
pass
@pytest.mark.order(index=1, before="test_1")
def test_3():
pass
"""
)
assert item_names_for(test_content) == ["test_2", "test_3", "test_1"]
def test_mixed_markers4(item_names_for):
test_content = (
"""
import pytest
@pytest.mark.order(2)
def test_1():
pass
@pytest.mark.order(index=1, after="test_3")
def test_2():
pass
def test_3():
pass
"""
)
assert item_names_for(test_content) == ["test_3", "test_2", "test_1"]
def test_multiple_markers_in_same_test(item_names_for):
test_content = (
"""
import pytest
@pytest.mark.order(after=["test_3", "test_4", "test_5"])
def test_1():
pass
def test_2():
pass
def test_3():
pass
@pytest.mark.order(before=["test_3", "test_2"])
def test_4():
pass
def test_5():
pass
"""
)
assert item_names_for(test_content) == [
"test_4", "test_2", "test_3", "test_5", "test_1"
]
def test_dependency_after_unknown_test(item_names_for, capsys):
test_content = (
"""
import pytest
@pytest.mark.order(after="some_module.py::test_2")
def test_1():
pass
def test_2():
pass
"""
)
assert item_names_for(test_content) == ["test_1", "test_2"]
out, err = capsys.readouterr()
warning = (
"cannot execute 'test_1' relative to others: 'some_module.py::test_2' "
"- ignoring the marker"
)
assert warning in out
def test_dependency_before_unknown_test(item_names_for, capsys):
test_content = (
"""
import pytest
def test_1():
pass
@pytest.mark.order(before="test_4")
def test_2():
pass
def test_3():
pass
"""
)
assert item_names_for(test_content) == ["test_1", "test_2", "test_3"]
out, err = capsys.readouterr()
warning = (
"cannot execute 'test_2' relative to others: 'test_4' "
"- ignoring the marker"
)
assert warning in out
def test_dependency_in_class_before_unknown_test(item_names_for, capsys):
test_content = (
"""
import pytest
class Test:
def test_1(self):
pass
@pytest.mark.order(before="test_4")
def test_2(self):
pass
def test_3(self):
pass
"""
)
assert item_names_for(test_content) == [
"Test::test_1", "Test::test_2", "Test::test_3"
]
out, err = capsys.readouterr()
warning = (
"cannot execute 'test_2' relative to others: 'test_4' "
"- ignoring the marker"
)
assert warning in out
def test_dependency_loop(item_names_for, capsys):
test_content = (
"""
import pytest
@pytest.mark.order(after="test_3")
def test_1():
pass
@pytest.mark.order(1)
def test_2():
pass
@pytest.mark.order(before="test_1")
def test_3():
pass
"""
)
assert item_names_for(test_content) == ["test_2", "test_1", "test_3"]
out, err = capsys.readouterr()
warning = (
"cannot execute test relative to others: "
"test_dependency_loop.py::test_3"
)
assert warning in out
def test_dependency_on_parametrized_test(item_names_for):
test_content = (
"""
import pytest
@pytest.mark.order(after="test_2")
def test_1():
pass
@pytest.mark.parametrize("arg", ["aaaaa", "bbbbb", "ccccc", "ddddd"])
def test_2(arg):
pass
@pytest.mark.order(before="test_2")
def test_3():
pass
"""
)
assert item_names_for(test_content) == [
"test_3", "test_2[aaaaa]", "test_2[bbbbb]",
"test_2[ccccc]", "test_2[ddddd]", "test_1"
]
| 21.649351 | 79 | 0.511441 | 1,344 | 11,669 | 4.146577 | 0.078125 | 0.118069 | 0.123811 | 0.08613 | 0.834918 | 0.809079 | 0.751121 | 0.739638 | 0.665351 | 0.655661 | 0 | 0.020808 | 0.365755 | 11,669 | 538 | 80 | 21.689591 | 0.732198 | 0.0018 | 0 | 0.359477 | 0 | 0 | 0.23365 | 0.056084 | 0 | 0 | 0 | 0 | 0.156863 | 1 | 0.137255 | false | 0.065359 | 0.013072 | 0 | 0.150327 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
e07f17a37f07a5b9ae42815b42d0b94db6c28215 | 27,959 | py | Python | main/fluxes.py | crewsdw/Vlasov1D2V | 29d65efe68d3ea027e1067d433122cc4acd410c0 | [
"MIT"
] | 2 | 2021-11-14T20:18:30.000Z | 2021-11-27T02:22:44.000Z | main/fluxes.py | crewsdw/Vlasov1D2V | 29d65efe68d3ea027e1067d433122cc4acd410c0 | [
"MIT"
] | null | null | null | main/fluxes.py | crewsdw/Vlasov1D2V | 29d65efe68d3ea027e1067d433122cc4acd410c0 | [
"MIT"
] | null | null | null | import numpy as np
import cupy as cp
import time as timer
import matplotlib.pyplot as plt
def basis_product(flux, basis_arr, axis, permutation):
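    # Contract the flux with the basis array along the given sub-element axis,
    # then permute the result back to the original index ordering.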
return cp.transpose(cp.tensordot(flux, basis_arr, axes=([axis], [1])),
axes=permutation)
class DGFlux:
def __init__(self, resolutions, orders, flux_coefficients):
self.resolutions = resolutions
self.orders = orders
# Permutations (list of tuples)
self.permutations = [(0, 5, 1, 2, 3, 4), # For contraction with x nodes
(0, 1, 2, 5, 3, 4), # For contraction with u nodes
(0, 1, 2, 3, 4, 5)] # For contraction with v nodes
# Boundary slices (list of lists of tuples)
self.boundary_slices = [
# x-directed face slices [(left), (right)]
[(slice(resolutions[0]), 0,
slice(resolutions[1]), slice(orders[1]),
slice(resolutions[2]), slice(orders[2])),
(slice(resolutions[0]), -1,
slice(resolutions[1]), slice(orders[1]),
slice(resolutions[2]), slice(orders[2]))],
# u-directed face slices [(left), (right)]
[(slice(resolutions[0]), slice(orders[0]),
slice(resolutions[1]), 0,
slice(resolutions[2]), slice(orders[2])),
(slice(resolutions[0]), slice(orders[0]),
slice(resolutions[1]), -1,
slice(resolutions[2]), slice(orders[2]))],
# v-directed face slices [(left), (right)]
[(slice(resolutions[0]), slice(orders[0]),
slice(resolutions[1]), slice(orders[1]),
slice(resolutions[2]), 0),
(slice(resolutions[0]), slice(orders[0]),
slice(resolutions[1]), slice(orders[1]),
slice(resolutions[2]), -1)]]
# Speed slices
self.speed_slices = [(None, slice(resolutions[1]), slice(orders[1]), None, None),
(slice(resolutions[0]), slice(orders[0]), None, slice(resolutions[2]), slice(orders[2])),
(None, None, slice(resolutions[1]), slice(orders[1]), None)]
# Grid and sub-element axes
self.grid_axis = np.array([0, 2, 4])
self.sub_element_axis = np.array([1, 3, 5])
# Acceleration coefficients
self.acceleration_coefficient = flux_coefficients
# Numerical flux allocation size arrays
self.num_flux_sizes = [(resolutions[0], 2, resolutions[1], orders[1], resolutions[2], orders[2]),
(resolutions[0], orders[0], resolutions[1], 2, resolutions[2], orders[2]),
(resolutions[0], orders[0], resolutions[1], orders[1], resolutions[2], 2)]
def semi_discrete_rhs(self, function, elliptic, basis, grids):
"""
        Calculate the right-hand side of the semi-discrete equation.
"""
# Debug
# df_dt_x = (self.x_flux(function=function, basis=basis.b1, grid_u=grids.u) * grids.x.J)
# df_dt_u = (self.u_flux(function=function, basis=basis.b2, elliptic=elliptic, grid_v=grids.v) * grids.u.J)
# df_dt_v = (self.v_flux(function=function, basis=basis.b3, elliptic=elliptic, grid_u=grids.u) * grids.v.J)
# df_dt_x_f = df_dt_x.reshape(self.resolutions[0] * self.orders[0], self.resolutions[1] * self.orders[1],
# self.resolutions[2] * self.orders[2])
# df_dt_u_f = df_dt_u.reshape(self.resolutions[0] * self.orders[0], self.resolutions[1] * self.orders[1],
# self.resolutions[2] * self.orders[2])
# df_dt_v_f = df_dt_v.reshape(self.resolutions[0] * self.orders[0], self.resolutions[1] * self.orders[1],
# self.resolutions[2] * self.orders[2])
# plt.figure()
# plt.imshow(df_dt_x_f[11, :, :].get())
# plt.colorbar()
# plt.figure()
# plt.imshow(df_dt_u_f[11, :, :].get())
# plt.colorbar()
# plt.figure()
# plt.imshow(df_dt_v_f[11, :, :].get())
# plt.colorbar()
# plt.show()
# # Time it
# t0 = timer.time()
# self.x_flux(function=function, basis=basis.b1, grid_u=grids.u) * grids.x.J
# t1 = timer.time()
# # Compute the flux
# self.u_flux(function=function, basis=basis.b2, elliptic=elliptic, grid_v=grids.v) * grids.u.J
# t2 = timer.time()
# self.v_flux(function=function, basis=basis.b3, elliptic=elliptic, grid_u=grids.u) * grids.v.J
# t3 = timer.time()
# # b = grid.v.J * self.v_flux_lgl(distribution=distribution, grid=grid)
# # t4 = timer.time()
# # c = self.source_term_lgl(distribution=distribution, grid=grid)
# # t5 = timer.time()
# print('x-flux time {:0.3e}'.format(t1 - t0))
# print('u-flux time {:0.3e}'.format(t2 - t1))
# print('v-flux time {:0.3e}'.format(t3 - t2))
# # print('source term time {:0.3e}'.format(t5 - t4))
# quit()
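        # Sum the x-, u-, and v-directed flux contributions, each scaled by its grid Jacobian.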
return ((self.x_flux(function=function, basis=basis.b1, grid_u=grids.u) * grids.x.J) +
(self.u_flux(function=function, basis=basis.b2, elliptic=elliptic, grid_v=grids.v) * grids.u.J) +
(self.v_flux(function=function, basis=basis.b3, elliptic=elliptic, grid_u=grids.u) * grids.v.J))
def x_flux(self, function, basis, grid_u):
dim = 0
# Advection: compute x-directed flux as element-wise multiply
flux = cp.multiply(function, grid_u.arr_cp[None, None, :, :, None, None])
# Compute internal and numerical fluxes
return (basis_product(flux=flux, basis_arr=basis.up,
axis=self.sub_element_axis[dim],
permutation=self.permutations[dim])
- self.spatial_flux(flux=flux, speed=grid_u, basis=basis, dim=dim))
def u_flux(self, function, basis, elliptic, grid_v):
dim = 1
# Lorentz force: compute u-directed speed as 4-index array
speed = self.acceleration_coefficient * (elliptic.electric_field[:, :, None, None] +
elliptic.magnetic_field * grid_v.arr_cp[None, None, :, :])
        # then flux as element-wise multiply, broadcasting the speed over the u sub-element axes
flux = cp.multiply(function, speed[:, :, None, None, :, :])
# Debug
# ff = flux.reshape(self.resolutions[0]*self.orders[0], self.resolutions[1]*self.orders[1],
# self.resolutions[2]*self.orders[2])
# plt.figure()
# plt.imshow(ff[11, :, :].get())
# plt.show()
# flux = (self.acceleration_coefficient *
# (cp.multiply(function, elliptic.electric_field[:, :, None, None, None, None]) +
# cp.multiply(function, elliptic.magnetic_field * grid_v.arr_cp[None, None, None, None, :, :]))
# )
# Compute internal and numerical fluxes
return (basis_product(flux=flux, basis_arr=basis.up,
axis=self.sub_element_axis[dim],
permutation=self.permutations[dim])
- self.velocity_flux(flux=flux, speed=speed, basis=basis, dim=1))
def v_flux(self, function, basis, elliptic, grid_u):
dim = 2
# Lorentz force: compute v-directed speed as 2-index array
speed = self.acceleration_coefficient * (-elliptic.magnetic_field * grid_u.arr_cp)
# then flux as element-wise multiply
flux = cp.multiply(function, speed[None, None, :, :, None, None])
# flux = (self.acceleration_coefficient *
# cp.multiply(function, -elliptic.magnetic_field * grid_u.arr_cp[None, None, :, :, None, None])
# )
# Compute internal and numerical fluxes
return (basis_product(flux=flux, basis_arr=basis.up,
axis=self.sub_element_axis[dim],
permutation=self.permutations[dim])
- self.velocity_flux(flux=flux, speed=speed, basis=basis, dim=dim))
# noinspection PyTypeChecker
def spatial_flux(self, flux, speed, basis, dim):
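        # Numerical flux through x-directed element faces; upwinding uses the
        # precomputed speed masks (speed.one_positives / speed.one_negatives)
        # carried by the velocity grid object.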
# Allocate
num_flux = cp.zeros(self.num_flux_sizes[dim])
# Debug
# print(speed.one_positives.shape)
# print(flux[self.boundary_slices[dim][0]].shape)
# quit()
# Upwind flux, left face
num_flux[self.boundary_slices[dim][0]] = -1.0 * (cp.multiply(cp.roll(flux[self.boundary_slices[dim][1]],
shift=1, axis=self.grid_axis[dim]),
speed.one_positives[self.speed_slices[dim]]) +
cp.multiply(flux[self.boundary_slices[dim][0]],
speed.one_negatives[self.speed_slices[dim]]))
# Upwind flux, right face
num_flux[self.boundary_slices[dim][1]] = (cp.multiply(flux[self.boundary_slices[dim][1]],
speed.one_positives[self.speed_slices[dim]]) +
cp.multiply(cp.roll(flux[self.boundary_slices[dim][0]], shift=-1,
axis=self.grid_axis[dim]),
speed.one_negatives[self.speed_slices[dim]]))
return basis_product(flux=num_flux, basis_arr=basis.xi,
axis=self.sub_element_axis[dim],
permutation=self.permutations[dim])
# noinspection PyTypeChecker
def velocity_flux(self, flux, speed, basis, dim):
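        # Numerical flux through velocity-space faces; here the upwind masks are
        # computed on the fly, since the acceleration field changes each step.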
# Allocate
num_flux = cp.zeros(self.num_flux_sizes[dim])
# Measure upwind directions
one_negatives = cp.where(condition=speed < 0, x=1, y=0)
one_positives = cp.where(condition=speed >= 0, x=1, y=0)
# Debug
# if dim == 2:
# print(one_negatives.shape)
# print(flux[self.boundary_slices[dim][0]].shape)
# quit()
# Upwind flux, left face
num_flux[self.boundary_slices[dim][0]] = -1.0 * (cp.multiply(cp.roll(flux[self.boundary_slices[dim][1]],
shift=1, axis=self.grid_axis[dim]),
one_positives[self.speed_slices[dim]]) +
cp.multiply(flux[self.boundary_slices[dim][0]],
one_negatives[self.speed_slices[dim]]))
# Upwind flux, right face
num_flux[self.boundary_slices[dim][1]] = (cp.multiply(flux[self.boundary_slices[dim][1]],
one_positives[self.speed_slices[dim]]) +
cp.multiply(cp.roll(flux[self.boundary_slices[dim][0]], shift=-1,
axis=self.grid_axis[dim]),
one_negatives[self.speed_slices[dim]]))
return basis_product(flux=num_flux, basis_arr=basis.xi,
axis=self.sub_element_axis[dim],
permutation=self.permutations[dim])
# def stabilized_flux(self, flux, basis, dim):
# # Stabilization parameter
# alpha = 1.0
# param = (1.0 - alpha) / 2.0
# # Allocate
# num_flux = cp.zeros(self.num_flux_sizes[dim])
# # Left face, average (central flux part)
# num_flux[self.boundary_slices[dim][0]] = -0.5 * (cp.add(flux[self.boundary_slices[dim][0]],
# cp.roll(flux[self.boundary_slices[dim][1]],
# shift=1, axis=self.grid_axis[dim])))
# # Left face, jump
# # num_flux[self.boundary_slices[dim][0]] += -1.0 * param *
# # (-1.0 * cp.absolute(flux[self.boundary_slices[dim][0]]) +
# # cp.absolute(cp.roll(flux[self.boundary_slices[dim][1]],
# # shift=1, axis=self.grid_axis[dim])))
# # cp.cuda.runtime.deviceSynchronize()
# # Right face, average
# num_flux[self.boundary_slices[dim][1]] = 0.5 * (cp.add(flux[self.boundary_slices[dim][1]],
# cp.roll(flux[self.boundary_slices[dim][0]],
# shift=-1, axis=self.grid_axis[dim])))
# # Right face, jump
# # num_flux[self.boundary_slices[dim][1]] += param * (cp.absolute(flux[self.boundary_slices[dim][1]]) -
# # cp.absolute(cp.roll(flux[self.boundary_slices[dim][0]],
# # shift=-1, axis=self.grid_axis[dim])))
# # Compute product
# # cp.cuda.runtime.deviceSynchronize()
# return numerical_flux_product(flux=num_flux, basis_arr=basis.xi,
# axis=self.sub_element_axis[dim], permutation=self.permutations[dim])
#
# # noinspection PyTypeChecker
# def upwind_flux2(self, flux, basis, dim):
# # Allocate
# num_flux = cp.zeros(self.num_flux_sizes[dim])
# # Left face
# num_flux[self.boundary_slices[dim][0]] = -1.0 * (cp.where(condition=flux[self.boundary_slices[dim][0]] >= 0,
# # Where the flux on left face (0) is positive
# x=cp.roll(flux[self.boundary_slices[dim][1]],
# shift=1, axis=self.grid_axis[dim]),
# # Then use the left neighbor (-1) right face (1)
# y=0.0) + # else zero
# cp.where(condition=flux[self.boundary_slices[dim][0]] < 0,
# # Where the flux on left face (0) is negative
# x=flux[self.boundary_slices[dim][0]],
# # Then keep local values, else zero
# y=0.0))
# # Right face
# num_flux[self.boundary_slices[dim][1]] = (cp.where(condition=flux[self.boundary_slices[dim][1]] >= 0,
# # Where the flux on right face (1) is positive
# x=flux[self.boundary_slices[dim][1]],
# # Then use the local value, else zero
# y=0.0) +
# cp.where(condition=flux[self.boundary_slices[dim][1]] < 0,
# # Where the flux on right face (1) is negative
# x=cp.roll(flux[self.boundary_slices[dim][0]],
# shift=-1, axis=self.grid_axis[dim]),
# # Then use the right neighbor (-1) left face (0)
# y=0.0))
# # if dim ==
# # flat = num_flux[self.boundary_slices[dim][0]].reshape(self.resolutions[0]*self.orders[0],
# # self.resolutions[1],
# # self.resolutions[2]*self.orders[2])
# # plt.figure()
# # plt.imshow(flat[11, :, :].get())
# # plt.show()
# # print(num_flux.shape)
# # quit()
# return numerical_flux_product(flux=num_flux, basis_arr=basis.xi,
# axis=self.sub_element_axis[dim], permutation=self.permutations[dim])
#
# # noinspection PyTypeChecker
# def upwind_flux(self, flux, basis, dim):
# # Using upwind flux scheme, check sign and keep local values or use neighbors
# return (numerical_flux_product(flux=(cp.where(condition=flux[self.boundary_slices[dim][1]] >= 0,
# # Where the flux on right face (1) is positive
# x=flux[self.boundary_slices[dim][1]],
# # Then use the local value, else zero
# y=0.0) +
# cp.where(condition=flux[self.boundary_slices[dim][1]] < 0,
# # Where the flux on right face (1) is negative
# x=cp.roll(flux[self.boundary_slices[dim][0]],
# shift=-1, axis=self.grid_axis[dim]),
# # Then use the right neighbor (-1) left face (0)
# y=0.0)), # else zero
# basis_arr=basis.xi,
# face=1, # right
# permutation=self.permutations[dim]) -
#
# numerical_flux_product(flux=(cp.where(condition=flux[self.boundary_slices[dim][0]] >= 0,
# # Where the flux on left face (0) is positive
# x=cp.roll(flux[self.boundary_slices[dim][1]],
# shift=1, axis=self.grid_axis[dim]),
# # Then use the left neighbor (-1) right face (1)
# y=0.0) + # else zero
# cp.where(condition=flux[self.boundary_slices[dim][0]] < 0,
# # Where the flux on left face (0) is negative
# x=flux[self.boundary_slices[dim][0]],
# # Then keep local values, else zero
# y=0.0)),
# basis_arr=basis.xi,
# face=0, # left face
# permutation=self.permutations[dim]))
# Stuff I might want later
# flux_left_positive = cp.where(flux[:, 0, :, :, :, :] > 0,
# cp.roll(flux[:, -1, :, :, :, :], shift=1, axis=0), 0)
# flux_left_negative = cp.where(flux[:, 0, :, :, :, :] < 0,
# flux[:, 0, :, :, :, :], 0)
# flux_right_positive = cp.where(flux[:, -1, :, :, :, :] > 0,
# flux[:, -1, :, :, :, :], 0)
# flux_right_negative = cp.where(flux[:, -1, :, :, :, :] < 0,
# cp.roll(flux[:, 0, :, :, :, :], shift=-1, axis=0), 0)
# flux_left = flux_left_negative + flux_left_positive
# flux_right = flux_right_negative + flux_right_positive
#
# num_flux = cp.zeros_like(flux[:, [0, -1], :, :, :, :])
# num_flux[:, 0, :, :, :, :] = -1.0 * flux_left
# num_flux[:, -1, :, :, :, :] = flux_right
# x flux
# flux = cp.multiply(function, grid_u.arr_cp[None, None, :, :, None, None])
# Compute internal and numerical fluxes
# internal = internal_flux_product(flux, basis3.b1.up, axis=1, permutation=(0, 5, 1, 2, 3, 4))
# Compute x-directed numerical flux
# sides = np.array([0, -1])
# flux_left_positive = cp.where(flux[:, 0, :, :, :, :] > 0,
# cp.roll(flux[:, -1, :, :, :, :], shift=1, axis=0), 0)
# flux_left_negative = cp.where(flux[:, 0, :, :, :, :] < 0,
# flux[:, 0, :, :, :, :], 0)
# flux_right_positive = cp.where(flux[:, -1, :, :, :, :] > 0,
# flux[:, -1, :, :, :, :], 0)
# flux_right_negative = cp.where(flux[:, -1, :, :, :, :] < 0,
# cp.roll(flux[:, 0, :, :, :, :], shift=-1, axis=0), 0)
#
# flux_left = flux_left_negative + flux_left_positive
# flux_right = flux_right_negative + flux_right_positive
#
# num_flux = cp.zeros_like(flux[:, [0, -1], :, :, :, :])
# num_flux[:, 0, :, :, :, :] = -1.0 * flux_left
# num_flux[:, -1, :, :, :, :] = flux_right
# print(num_flux.shape)
# boundary_flux = cp.transpose(cp.tensordot(num_flux,
# basis3.b1.xi,
# axes=([1], [1])),
# (0, 5, 1, 2, 3, 4))
# num_flux_left = -1.0 * cp.transpose(cp.tensordot(flux_left,
# basis3.b1.xi[:, 0], axes=0),
# (0, 5, 1, 2, 3, 4))
# num_flux_right = cp.transpose(cp.tensordot(flux_right,
# basis3.b1.xi[:, 1], axes=0),
# (0, 5, 1, 2, 3, 4))
# boundary_flux = num_flux_right + num_flux_left
# # # # Debug
# a = (cp.where(condition=flux[self.boundary_slices[dim][1]] >= 0,
# # Where the flux on right face (1) is positive
# x=flux[self.boundary_slices[dim][1]],
# # Then use the local value, else zero
# y=0.0) +
# cp.where(condition=flux[self.boundary_slices[dim][1]] < 0,
# # Where the flux on right face (1) is negative
# x=cp.roll(flux[self.boundary_slices[dim][0]],
# shift=-1, axis=self.grid_axis[dim]),
# # Then use the neighbor left face (0), else zero
# y=0.0))
# b = (cp.where(condition=flux[self.boundary_slices[dim][0]] >= 0,
# # Where the flux on left face (0) is positive
# x=cp.roll(flux[self.boundary_slices[dim][1]],
# shift=1, axis=self.grid_axis[dim]),
# # Then use the neighbor's right face (1), else zero
# y=0.0) +
# cp.where(condition=flux[self.boundary_slices[dim][0]] < 0,
# # Where the flux on left face (0) is negative
# x=flux[self.boundary_slices[dim][0]],
# # Then keep local values, else zero
# y=0.0))
# a1 = cp.zeros_like(flux)
# a2 = cp.zeros_like(flux)
# if dim == 1:
# a1[self.boundary_slices[dim][1]] = (cp.where(flux[self.boundary_slices[dim][1]] > 0,
# # Where the flux on right face (1) is positive
# flux[self.boundary_slices[dim][1]],
# # Then use the local value, else zero
# 0.0) +
# cp.where(flux[self.boundary_slices[dim][1]] < 0,
# # Where the flux on right face (1) is negative
# cp.roll(flux[self.boundary_slices[dim][0]],
# shift=-1, axis=self.grid_axis[dim]),
# # Then use the neighbor left face (0), else zero
# 0.0))
#
# a2[self.boundary_slices[dim][0]] = (cp.where(flux[self.boundary_slices[dim][0]] >= 0,
# # Where the flux on left face (0) is positive
# cp.roll(flux[self.boundary_slices[dim][1]],
# shift=1, axis=self.grid_axis[dim]),
# # Then use the neighbor's right face (1), else zero
# 0.0) +
# cp.where(flux[self.boundary_slices[dim][0]] < 0,
# # Where the flux on left face (0) is negative
# flux[self.boundary_slices[dim][0]],
# # Then keep local values, else zero
# 0.0))
#
# plt.figure()
# plt.contourf(a1.reshape(self.resolutions[0] * self.orders[0],
# self.resolutions[1] * self.orders[1],
# self.resolutions[2] * self.orders[2])[20, :, :], levels=200)
# plt.colorbar()
#
# plt.figure()
# plt.contourf(a2.reshape(self.resolutions[0] * self.orders[0],
# self.resolutions[1] * self.orders[1],
# self.resolutions[2] * self.orders[2])[20, :, :], levels=200)
# plt.colorbar() # [20, 10:14, 90:110], levels=200)[20, 95:105, 95:105]
# plt.show()
# flux = (self.acceleration_coefficient *
# (cp.multiply(function, elliptic.electric_field[:, :, None, None, None, None]) +
# cp.multiply(function, elliptic.magnetic_field * grid_v.arr_cp[None, None, None, None, :, :]))
# )
# corners = cp.zeros_like(flux)
# corners[:, :, :, 0, :, :] = -1
# corners[:, :, :, -1, :, :] = +1
# corners[:, :, :, :, :, 0] = -1
# corners[:, :, :, :, :, -1] = +1
# flux_f = cp.asnumpy(flux[1:-1, :, 1:-1, :, 1:-1, :].reshape(6 * 8, 24 * 8, 24 * 8))
# corners_f = cp.asnumpy(corners[1:-1, :, 1:-1, :, 1:-1, :].reshape(
# (6 * 8, 24 * 8, 24 * 8)))
# plt.figure()
# plt.imshow(flux_f[11, 90:102, 90:102])
# plt.colorbar()
# plt.figure()
# plt.imshow(corners_f[11, 90:102, 90:102])
# plt.colorbar()
# plt.show()
# cp.cuda.Stream.null.synchronize()
# Bin
#
# def internal_flux_product(flux, basis_arr, axis, permutation):
# # Steps: contract flux with basis array, then permute indices back
# return cp.transpose(cp.tensordot(flux, basis_arr,
# axes=([axis], [1])),
# axes=permutation)
# def internal_flux_product(flux, basis_arr, axis, permutation):
# # Steps: contract flux with basis array, then permute indices back
# return cp.transpose(cp.tensordot(flux, basis_arr,
# axes=([axis], [1])),
# axes=permutation)
# def numerical_flux_product(flux, basis_arr, face, permutation):
# # Contract flux with basis array, then permute indices back
# return cp.transpose(cp.tensordot(flux, basis_arr[:, face],
# axes=0),
# axes=permutation)
# def numerical_flux_product(flux, basis_arr, axis, permutation):
# # Contract flux with basis array and permute indices back
# # print(flux.shape)
# # print(basis_arr.shape)
# # print(axis)
# # quit()
# return cp.transpose(cp.tensordot(flux, basis_arr, axes=([axis], [1])),
# axes=permutation)
| 58.36952 | 120 | 0.465646 | 3,030 | 27,959 | 4.176568 | 0.073597 | 0.045516 | 0.091031 | 0.104544 | 0.821968 | 0.793915 | 0.764362 | 0.731331 | 0.710707 | 0.669538 | 0 | 0.03348 | 0.399621 | 27,959 | 478 | 121 | 58.491632 | 0.720422 | 0.677134 | 0 | 0.480392 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.078431 | false | 0 | 0.039216 | 0.009804 | 0.196078 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e09e94bd29e7c62776bfd33470a009ef5871c172 | 25 | py | Python | devlog/__init__.py | kachaMukabe/devlog | 40c34460598c2414e80b4dc0b6288de2b59208bc | [
"MIT"
] | null | null | null | devlog/__init__.py | kachaMukabe/devlog | 40c34460598c2414e80b4dc0b6288de2b59208bc | [
"MIT"
] | null | null | null | devlog/__init__.py | kachaMukabe/devlog | 40c34460598c2414e80b4dc0b6288de2b59208bc | [
"MIT"
] | null | null | null | from .devlog import main
| 12.5 | 24 | 0.8 | 4 | 25 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 25 | 1 | 25 | 25 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e0c30871871f707c21fcdc9ad2916d1f07afaf3a | 27 | py | Python | Eldorado_Actor-Critic/source/networks/__init__.py | jbr-ai-labs/BAROCCO | 799341cd76be88745086289583fc95ff9a2bc72e | [
"Apache-2.0"
] | 3 | 2021-04-14T17:01:49.000Z | 2021-07-09T11:24:25.000Z | Eldorado_Actor-Critic/source/networks/__init__.py | jbr-ai-labs/BAROCCO | 799341cd76be88745086289583fc95ff9a2bc72e | [
"Apache-2.0"
] | null | null | null | Eldorado_Actor-Critic/source/networks/__init__.py | jbr-ai-labs/BAROCCO | 799341cd76be88745086289583fc95ff9a2bc72e | [
"Apache-2.0"
] | 2 | 2021-09-15T09:36:49.000Z | 2021-10-18T08:49:01.000Z | from .lawmaker import COMA
| 13.5 | 26 | 0.814815 | 4 | 27 | 5.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 27 | 1 | 27 | 27 | 0.956522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e0fe7a97f3ea29e5b8a1f777ce3091dc1c0954b8 | 1,523 | py | Python | model_build.py | kkhtun/Pneumonia_CXR_classifer | 17b2177c9f6e0d8236f0805743810bc34a1c807f | [
"MIT"
] | null | null | null | model_build.py | kkhtun/Pneumonia_CXR_classifer | 17b2177c9f6e0d8236f0805743810bc34a1c807f | [
"MIT"
] | null | null | null | model_build.py | kkhtun/Pneumonia_CXR_classifer | 17b2177c9f6e0d8236f0805743810bc34a1c807f | [
"MIT"
] | null | null | null | from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv2D, MaxPooling2D, Flatten, Dropout
def build_model(input_size):
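    # Binary image classifier: five paired-Conv2D blocks (8 -> 128 filters) with
    # max pooling, followed by a dense head with dropout and a sigmoid output.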
model = Sequential()
    model.add(Conv2D(filters=8, kernel_size=(7, 7), padding='same', activation='relu', input_shape=input_size))
    model.add(Conv2D(filters=8, kernel_size=(7, 7), padding='same', activation='relu'))
    model.add(MaxPooling2D(pool_size=(3, 3)))
    model.add(Conv2D(filters=16, kernel_size=(5, 5), padding='same', activation='relu'))
    model.add(Conv2D(filters=16, kernel_size=(5, 5), padding='same', activation='relu'))
    model.add(MaxPooling2D(pool_size=(3, 3)))
    model.add(Conv2D(filters=32, kernel_size=(3, 3), padding='same', activation='relu'))
    model.add(Conv2D(filters=32, kernel_size=(3, 3), padding='same', activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Conv2D(filters=64, kernel_size=(3, 3), padding='same', activation='relu'))
    model.add(Conv2D(filters=64, kernel_size=(3, 3), padding='same', activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Conv2D(filters=128, kernel_size=(3, 3), padding='same', activation='relu'))
    model.add(Conv2D(filters=128, kernel_size=(3, 3), padding='same', activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(1, activation='sigmoid'))
return model | 44.794118 | 111 | 0.68155 | 214 | 1,523 | 4.761682 | 0.182243 | 0.149166 | 0.13739 | 0.206084 | 0.743867 | 0.743867 | 0.743867 | 0.743867 | 0.743867 | 0.743867 | 0 | 0.055387 | 0.134603 | 1,523 | 34 | 112 | 44.794118 | 0.717754 | 0 | 0 | 0.541667 | 0 | 0 | 0.061033 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.041667 | false | 0 | 0.083333 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
46173e192bbfad78f8b000a16be7ba222ae95227 | 138 | py | Python | textbeat/run.py | idbrii/textbeat | 0f34afc9f72e04942a77c923f1bbfe5d5f7632a8 | [
"MIT"
] | 231 | 2018-08-18T08:10:09.000Z | 2022-03-21T02:35:19.000Z | textbeat/run.py | idbrii/textbeat | 0f34afc9f72e04942a77c923f1bbfe5d5f7632a8 | [
"MIT"
] | 9 | 2018-09-15T20:53:42.000Z | 2022-02-04T20:16:18.000Z | textbeat/run.py | idbrii/textbeat | 0f34afc9f72e04942a77c923f1bbfe5d5f7632a8 | [
"MIT"
] | 7 | 2019-02-21T00:56:35.000Z | 2022-02-20T16:32:10.000Z | from __future__ import absolute_import, unicode_literals, print_function, generators
# import textbeat
# def run():
# textbeat.main()
| 27.6 | 84 | 0.775362 | 16 | 138 | 6.25 | 0.8125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137681 | 138 | 4 | 85 | 34.5 | 0.840336 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
1c9d96efe061579248b80ee18b73474ab885e95a | 42 | py | Python | modules/__init__.py | franccesco/spi-work | efbaaf682687684a96f93ddd298367aa7b8cbc60 | [
"Apache-2.0"
] | 1 | 2017-08-24T20:50:22.000Z | 2017-08-24T20:50:22.000Z | modules/__init__.py | franccesco/spi-work | efbaaf682687684a96f93ddd298367aa7b8cbc60 | [
"Apache-2.0"
] | 3 | 2017-08-24T19:25:00.000Z | 2017-11-10T23:28:24.000Z | modules/__init__.py | franccesco/spi-work | efbaaf682687684a96f93ddd298367aa7b8cbc60 | [
"Apache-2.0"
] | null | null | null | from modules import file_operations as io
| 21 | 41 | 0.857143 | 7 | 42 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 42 | 1 | 42 | 42 | 0.972222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1ca163619862d79a83c96263d7c8984427396fbd | 350 | py | Python | src/nlu/extractors/__init__.py | phamnam-mta/know-life | f7c226c41e315f21b5d7fe2ccbc9ec4f9961ed1d | [
"MIT"
] | null | null | null | src/nlu/extractors/__init__.py | phamnam-mta/know-life | f7c226c41e315f21b5d7fe2ccbc9ec4f9961ed1d | [
"MIT"
] | null | null | null | src/nlu/extractors/__init__.py | phamnam-mta/know-life | f7c226c41e315f21b5d7fe2ccbc9ec4f9961ed1d | [
"MIT"
] | null | null | null | from src.nlu.extractors.model.xlmr import XLMR
from src.nlu.extractors.model.mbert import mBERT
from src.nlu.extractors.model.mdapt import mDAPT
from src.nlu.extractors.model.biobert import BioBERT
from src.nlu.extractors.model.hnbertvn import HnBERTvn
from src.nlu.extractors.model.phobert import phoBERT
from src.nlu.extractors.model.our import OUR | 50 | 54 | 0.842857 | 56 | 350 | 5.267857 | 0.232143 | 0.166102 | 0.237288 | 0.474576 | 0.59322 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.077143 | 350 | 7 | 55 | 50 | 0.913313 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1ce6a6cbf203a39735e5ef36d5733ac78b840993 | 150 | py | Python | open_data/badge/admin.py | balfroim/OpenData | f0334dae16c2806e81f7d2d53adeabc72403ecce | [
"MIT"
] | null | null | null | open_data/badge/admin.py | balfroim/OpenData | f0334dae16c2806e81f7d2d53adeabc72403ecce | [
"MIT"
] | null | null | null | open_data/badge/admin.py | balfroim/OpenData | f0334dae16c2806e81f7d2d53adeabc72403ecce | [
"MIT"
] | null | null | null | from django.contrib import admin
from badge.models import BadgeAward
@admin.register(BadgeAward)
class BadgeAwardAdmin(admin.ModelAdmin):
pass
| 16.666667 | 40 | 0.806667 | 18 | 150 | 6.722222 | 0.722222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.126667 | 150 | 8 | 41 | 18.75 | 0.923664 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.2 | 0.4 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
e80d25c5355ba912680a62a44f24dbc85d4a047e | 180 | py | Python | tests/integration/roots/test-kitchensink/kaybee_plugins/kitchensink_toctree.py | pauleveritt/kaybee | a00a718aaaa23b2d12db30dfacb6b2b6ec84459c | [
"Apache-2.0"
] | 2 | 2017-11-08T19:55:57.000Z | 2018-12-21T12:41:41.000Z | tests/integration/roots/test-kitchensink/kaybee_plugins/kitchensink_toctree.py | pauleveritt/kaybee | a00a718aaaa23b2d12db30dfacb6b2b6ec84459c | [
"Apache-2.0"
] | null | null | null | tests/integration/roots/test-kitchensink/kaybee_plugins/kitchensink_toctree.py | pauleveritt/kaybee | a00a718aaaa23b2d12db30dfacb6b2b6ec84459c | [
"Apache-2.0"
] | 1 | 2018-10-13T08:59:29.000Z | 2018-10-13T08:59:29.000Z | from kaybee.app import kb
from kaybee.plugins.articles.base_toctree import BaseToctree
@kb.toctree(context='kitchensink', system_order=40)
class MyToctree(BaseToctree):
pass
| 22.5 | 60 | 0.805556 | 24 | 180 | 5.958333 | 0.75 | 0.13986 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012422 | 0.105556 | 180 | 7 | 61 | 25.714286 | 0.875776 | 0 | 0 | 0 | 0 | 0 | 0.061111 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.2 | 0.4 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
e81949fa6b024f0f8e29111c08292e1b8bcf20e2 | 287 | py | Python | src/keras/keras/engine/__init__.py | lu791019/iii_HA_Image_Recognition_DL | d5f56d62af6d3aac1c216ca4ff309db08a8c9072 | [
"Apache-2.0"
] | null | null | null | src/keras/keras/engine/__init__.py | lu791019/iii_HA_Image_Recognition_DL | d5f56d62af6d3aac1c216ca4ff309db08a8c9072 | [
"Apache-2.0"
] | null | null | null | src/keras/keras/engine/__init__.py | lu791019/iii_HA_Image_Recognition_DL | d5f56d62af6d3aac1c216ca4ff309db08a8c9072 | [
"Apache-2.0"
] | null | null | null | # note: `Node` is an internal class,
# it isn't meant to be used by Keras users.
from .input_layer import Input
from .input_layer import InputLayer
from .base_layer import InputSpec
from .base_layer import Layer
from .network import get_source_inputs
from .training import Model
| 31.888889 | 44 | 0.780488 | 46 | 287 | 4.73913 | 0.630435 | 0.201835 | 0.12844 | 0.183486 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.170732 | 287 | 8 | 45 | 35.875 | 0.915966 | 0.264808 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1c4f204d98285ca30a0b34fc4bcf1c7e61a47cfe | 171 | py | Python | 0x03-python-data_structures/2-replace_in_list.py | C-distin/alx-higher_level_programming | ee018135b24ac07d40f2309a4febf21b8a25aee4 | [
"MIT"
] | null | null | null | 0x03-python-data_structures/2-replace_in_list.py | C-distin/alx-higher_level_programming | ee018135b24ac07d40f2309a4febf21b8a25aee4 | [
"MIT"
] | null | null | null | 0x03-python-data_structures/2-replace_in_list.py | C-distin/alx-higher_level_programming | ee018135b24ac07d40f2309a4febf21b8a25aee4 | [
"MIT"
] | null | null | null | #!/usr/bin/python3
def replace_in_list(my_list, idx, element):
if idx < 0 or idx >= len(my_list):
return my_list
my_list[idx] = element
return my_list
| 24.428571 | 43 | 0.654971 | 29 | 171 | 3.62069 | 0.517241 | 0.285714 | 0.190476 | 0.247619 | 0.380952 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015267 | 0.233918 | 171 | 6 | 44 | 28.5 | 0.78626 | 0.099415 | 0 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
1c803f705c4805bd1080ecceb505dfc248025bd8 | 191 | py | Python | condense/optimizer/layer_operations/__init__.py | SirBubbls/condense | e28f008477fe75c24b43cc853b2dc6d923f01813 | [
"MIT"
] | null | null | null | condense/optimizer/layer_operations/__init__.py | SirBubbls/condense | e28f008477fe75c24b43cc853b2dc6d923f01813 | [
"MIT"
] | null | null | null | condense/optimizer/layer_operations/__init__.py | SirBubbls/condense | e28f008477fe75c24b43cc853b2dc6d923f01813 | [
"MIT"
] | null | null | null | """This module define base classes and implementations for layer operatinos."""
import condense.optimizer.layer_operations.unit_prune
import condense.optimizer.layer_operations.weight_prune
| 38.2 | 79 | 0.853403 | 24 | 191 | 6.625 | 0.708333 | 0.176101 | 0.289308 | 0.352201 | 0.477987 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.078534 | 191 | 4 | 80 | 47.75 | 0.903409 | 0.382199 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
1c90941d5dc39e44b487e1c9f49dcfd1e6eb0502 | 46 | py | Python | helloworld.py | ewonsok/docker-starter | 73dce2d31aedaa2fd3c889294141343afcd46171 | [
"Apache-2.0"
] | null | null | null | helloworld.py | ewonsok/docker-starter | 73dce2d31aedaa2fd3c889294141343afcd46171 | [
"Apache-2.0"
] | null | null | null | helloworld.py | ewonsok/docker-starter | 73dce2d31aedaa2fd3c889294141343afcd46171 | [
"Apache-2.0"
] | null | null | null | print 'hello world'
print ("hello wolrdworld") | 23 | 26 | 0.76087 | 6 | 46 | 5.833333 | 0.666667 | 0.571429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108696 | 46 | 2 | 26 | 23 | 0.853659 | 0 | 0 | 0 | 0 | 0 | 0.574468 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 1 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
98df712019c0d285538b787236ce41456cf431fb | 9,471 | py | Python | tests/importer/test_eitem_priorities.py | zzacharo/cds-ils | 6816c348e209607b97583acc40fb37dea0c62418 | [
"MIT"
] | 6 | 2020-09-18T00:13:38.000Z | 2021-11-14T17:12:19.000Z | tests/importer/test_eitem_priorities.py | zzacharo/cds-ils | 6816c348e209607b97583acc40fb37dea0c62418 | [
"MIT"
] | 321 | 2020-08-28T15:42:25.000Z | 2022-03-14T15:11:50.000Z | tests/importer/test_eitem_priorities.py | zzacharo/cds-ils | 6816c348e209607b97583acc40fb37dea0c62418 | [
"MIT"
] | 8 | 2019-07-10T07:02:08.000Z | 2020-08-10T14:07:25.000Z | import pytest
from invenio_app_ils.proxies import current_app_ils
from invenio_pidstore.errors import PIDDeletedError
from invenio_search import current_search
from cds_ils.importer.eitems.importer import EItemImporter
def test_replace_lower_priority(importer_test_data):
document_cls = current_app_ils.document_record_cls
eitem_cls = current_app_ils.eitem_record_cls
eitem_search_cls = current_app_ils.eitem_search_cls
# setup
matched_document = document_cls.get_record_by_pid("docid-6")
current_import_eitem = {
"urls": [
{
"description": "Protected URL",
"value": "http://protected-cds-ils.ch/",
"login_required": True
},
{
"description": "Another open URL",
"value": "http://cds-ils.ch/",
"login_required": True
}
]
}
metadata_provider = "springer"
IS_PROVIDER_PRIORITY_SENSITIVE = True
EITEM_OPEN_ACCESS = False
EITEM_URLS_LOGIN_REQUIRED = True
eitem_importer_preview = EItemImporter(matched_document,
current_import_eitem,
metadata_provider,
IS_PROVIDER_PRIORITY_SENSITIVE,
EITEM_OPEN_ACCESS,
EITEM_URLS_LOGIN_REQUIRED
)
eitem_importer = EItemImporter(matched_document,
current_import_eitem,
metadata_provider,
IS_PROVIDER_PRIORITY_SENSITIVE,
EITEM_OPEN_ACCESS,
EITEM_URLS_LOGIN_REQUIRED
)
preview_summary = eitem_importer_preview.preview_import(matched_document)
# make sure ebl item exists
eitem_cls.get_record_by_pid("eitemid-6")
eitem_importer.update_eitems(matched_document)
summary = eitem_importer.summary()
current_search.flush_and_refresh(index="*")
assert len(summary["deleted_eitems"]) == 1
# check if replaced in the import summary
assert summary["deleted_eitems"][0]["pid"] == "eitemid-6"
assert summary["eitem"]["document_pid"] == "docid-6"
# check if deleted
with pytest.raises(PIDDeletedError):
eitem_cls.get_record_by_pid("eitemid-6")
# check if deleted from the index
search = eitem_search_cls().search_by_document_pid(
"docid-6"
)
assert search.count() == 0
# check if preview equals report
# this should be the only differing item
summary["output_pid"] = "preview-doc-pid"
assert preview_summary == summary
def test_import_equal_priority(importer_test_data):
document_cls = current_app_ils.document_record_cls
eitem_cls = current_app_ils.eitem_record_cls
# setup
matched_document = document_cls.get_record_by_pid("docid-6A")
current_import_eitem = {
"urls": [
{
"description": "Protected URL",
"value": "http://protected-cds-ils.ch/",
"login_required": True
},
{
"description": "Another open URL",
"value": "http://cds-ils.ch/",
"login_required": True
}
]
}
metadata_provider = "ebl"
IS_PROVIDER_PRIORITY_SENSITIVE = False
EITEM_OPEN_ACCESS = False
EITEM_URLS_LOGIN_REQUIRED = True
eitem_importer = EItemImporter(matched_document,
current_import_eitem,
metadata_provider,
IS_PROVIDER_PRIORITY_SENSITIVE,
EITEM_OPEN_ACCESS,
EITEM_URLS_LOGIN_REQUIRED
)
preview_eitem_importer = EItemImporter(matched_document,
current_import_eitem,
metadata_provider,
IS_PROVIDER_PRIORITY_SENSITIVE,
EITEM_OPEN_ACCESS,
EITEM_URLS_LOGIN_REQUIRED
)
preview_summary = preview_eitem_importer.preview_import(matched_document)
eitem_importer.update_eitems(matched_document)
summary = eitem_importer.summary()
assert len(summary["deleted_eitems"]) == 0
# check if replaced in the import summary
assert summary["eitem"]["document_pid"] == "docid-6A"
# check if safari not deleted
eitem_cls.get_record_by_pid("eitemid-6A")
# check if new record added
eitem_cls.get_record_by_pid(summary["eitem"]["pid"])
# check if preview equals report
summary["output_pid"] = "preview-doc-pid"
assert preview_summary == summary
def test_do_not_import_lower_priority(importer_test_data):
document_cls = current_app_ils.document_record_cls
eitem_cls = current_app_ils.eitem_record_cls
# setup
matched_document = document_cls.get_record_by_pid("docid-7")
current_import_eitem = {
"urls": [
{
"description": "Protected URL",
"value": "http://protected-cds-ils.ch/",
"login_required": True
},
{
"description": "Another open URL",
"value": "http://cds-ils.ch/",
"login_required": True
}
]
}
metadata_provider = "ebl"
IS_PROVIDER_PRIORITY_SENSITIVE = False
EITEM_OPEN_ACCESS = False
EITEM_URLS_LOGIN_REQUIRED = True
eitem_importer = EItemImporter(matched_document,
current_import_eitem,
metadata_provider,
IS_PROVIDER_PRIORITY_SENSITIVE,
EITEM_OPEN_ACCESS,
EITEM_URLS_LOGIN_REQUIRED
)
preview_eitem_importer = EItemImporter(matched_document,
current_import_eitem,
metadata_provider,
IS_PROVIDER_PRIORITY_SENSITIVE,
EITEM_OPEN_ACCESS,
EITEM_URLS_LOGIN_REQUIRED
)
preview_summary = preview_eitem_importer.preview_import(matched_document)
eitem_importer.update_eitems(matched_document)
current_search.flush_and_refresh(index="*")
summary = eitem_importer.summary()
assert len(summary["deleted_eitems"]) == 0
# check if doing nothing
assert summary["eitem"] is None
assert summary["action"] == "none"
# check if higher priority record not deleted
eitem_cls.get_record_by_pid("eitemid-7")
# check if preview equals report
summary["output_pid"] = "preview-doc-pid"
assert preview_summary == summary
def test_ignore_if_existing_item_not_imported(importer_test_data):
document_cls = current_app_ils.document_record_cls
eitem_cls = current_app_ils.eitem_record_cls
# setup
matched_document = document_cls.get_record_by_pid("docid-8")
current_import_eitem = {
"urls": [
{
"description": "Protected URL",
"value": "http://protected-cds-ils.ch/",
"login_required": True
},
{
"description": "Another open URL",
"value": "http://cds-ils.ch/",
"login_required": True
}
]
}
metadata_provider = "springer"
IS_PROVIDER_PRIORITY_SENSITIVE = False
EITEM_OPEN_ACCESS = False
EITEM_URLS_LOGIN_REQUIRED = True
eitem_importer = EItemImporter(matched_document,
current_import_eitem,
metadata_provider,
IS_PROVIDER_PRIORITY_SENSITIVE,
EITEM_OPEN_ACCESS,
EITEM_URLS_LOGIN_REQUIRED
)
preview_eitem_importer = EItemImporter(matched_document,
current_import_eitem,
metadata_provider,
IS_PROVIDER_PRIORITY_SENSITIVE,
EITEM_OPEN_ACCESS,
EITEM_URLS_LOGIN_REQUIRED
)
preview_summary = preview_eitem_importer.preview_import(matched_document)
eitem_importer.update_eitems(matched_document)
summary = eitem_importer.summary()
assert len(summary["deleted_eitems"]) == 0
# check if new eitem assigned to doc
assert summary["eitem"]["document_pid"] == "docid-8"
# check if user created eitem not deleted (ignored)
eitem_cls.get_record_by_pid("eitemid-8")
# check if new record added
eitem_cls.get_record_by_pid(summary["eitem"]["pid"])
# check if preview equals report
summary["output_pid"] = "preview-doc-pid"
assert preview_summary == summary
| 38.5 | 77 | 0.55823 | 899 | 9,471 | 5.491657 | 0.121246 | 0.060766 | 0.043751 | 0.065627 | 0.85477 | 0.839376 | 0.807981 | 0.802107 | 0.789953 | 0.756937 | 0 | 0.003359 | 0.371239 | 9,471 | 245 | 78 | 38.657143 | 0.825693 | 0.0605 | 0 | 0.670103 | 0 | 0 | 0.106805 | 0 | 0 | 0 | 0 | 0 | 0.07732 | 1 | 0.020619 | false | 0 | 0.21134 | 0 | 0.231959 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
98e425e7a0ea49c5409c1ddc01e5786177247549 | 26 | py | Python | subsync/__init__.py | 0xflotus/subsync | 244afbb58e2bdeba8cc833ff330d7d27773b9667 | [
"MIT"
] | 1 | 2019-02-27T02:07:24.000Z | 2019-02-27T02:07:24.000Z | subsync/__init__.py | shaunstanislauslau/subsync | 42f95e652451ce29f24c9d75ddc78ad37f26359d | [
"MIT"
] | null | null | null | subsync/__init__.py | shaunstanislauslau/subsync | 42f95e652451ce29f24c9d75ddc78ad37f26359d | [
"MIT"
] | null | null | null | from .subsync import main
| 13 | 25 | 0.807692 | 4 | 26 | 5.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 26 | 1 | 26 | 26 | 0.954545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
98f29040404605bf7678c5a1051fd94275fea713 | 346 | py | Python | tests/test_phenotyper.py | Varstation/pypgx | e81ac2dd16aaf54806630a2548a3b86d230eccd9 | [
"MIT"
] | null | null | null | tests/test_phenotyper.py | Varstation/pypgx | e81ac2dd16aaf54806630a2548a3b86d230eccd9 | [
"MIT"
] | null | null | null | tests/test_phenotyper.py | Varstation/pypgx | e81ac2dd16aaf54806630a2548a3b86d230eccd9 | [
"MIT"
] | null | null | null | from pypgx.phenotyper import phenotyper
def test_phenotyper():
assert phenotyper("cyp2d6", "*1", "*1") == "normal_metabolizer"
assert phenotyper("cyp2d6", "*1", "*4") == "intermediate_metabolizer"
assert phenotyper("cyp2d6", "*1", "*2x2") == "ultrarapid_metabolizer"
assert phenotyper("cyp2d6", "*4", "*5") == "poor_metabolizer"
| 43.25 | 73 | 0.66763 | 36 | 346 | 6.277778 | 0.472222 | 0.283186 | 0.389381 | 0.30531 | 0.300885 | 0 | 0 | 0 | 0 | 0 | 0 | 0.056856 | 0.135838 | 346 | 7 | 74 | 49.428571 | 0.698997 | 0 | 0 | 0 | 0 | 0 | 0.352601 | 0.132948 | 0 | 0 | 0 | 0 | 0.666667 | 1 | 0.166667 | true | 0 | 0.166667 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c70e183f0e711167e7196cafe2bd015d010f4ad3 | 29 | py | Python | tests/regressiontests/templates/templatetags/broken_tag.py | Smarsh/django | ffb738e0f56027e16564a79b709cbf44596c2335 | [
"BSD-3-Clause"
] | 19 | 2015-05-01T19:59:03.000Z | 2021-12-09T08:03:16.000Z | tests/regressiontests/templates/templatetags/broken_tag.py | alex/django-old | 6f964c8f03e5d25c9e36898a001c8463f82fbb81 | [
"BSD-3-Clause"
] | 1 | 2018-01-03T15:26:49.000Z | 2018-01-03T15:26:49.000Z | tests/regressiontests/templates/templatetags/broken_tag.py | alex/django-old | 6f964c8f03e5d25c9e36898a001c8463f82fbb81 | [
"BSD-3-Clause"
] | 30 | 2015-03-25T19:40:07.000Z | 2021-05-28T22:59:26.000Z | from django import Xtemplate
| 14.5 | 28 | 0.862069 | 4 | 29 | 6.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137931 | 29 | 1 | 29 | 29 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c7167662f8fc2e8cfe340da80864e6fdc09c8df8 | 13,513 | py | Python | tests/test_task_autosql.py | robin-173/sayn | d1cf36b92fad6a1798b57ad80abb22e8386e0e86 | [
"Apache-2.0"
] | 105 | 2020-04-23T17:04:34.000Z | 2022-03-18T15:47:52.000Z | tests/test_task_autosql.py | robin-173/sayn | d1cf36b92fad6a1798b57ad80abb22e8386e0e86 | [
"Apache-2.0"
] | 53 | 2020-06-12T14:41:12.000Z | 2022-01-24T13:04:58.000Z | tests/test_task_autosql.py | robin-173/sayn | d1cf36b92fad6a1798b57ad80abb22e8386e0e86 | [
"Apache-2.0"
] | 9 | 2020-04-23T16:56:23.000Z | 2021-08-16T10:54:48.000Z | from contextlib import contextmanager
import pytest
from sayn.tasks.autosql import AutoSqlTask
from . import inside_dir, simulate_task, tables_with_data, validate_table, clear_tables
@contextmanager
def autosql_task(tmp_path, target_db, sql, data=None, **kwargs):
"""Creates an autosql task and drops the tables/views created after it's done"""
fs = {"sql/test.sql": sql} if sql is not None else dict()
with inside_dir(tmp_path, fs):
task = AutoSqlTask()
simulate_task(task, target_db=target_db, **kwargs)
if data is not None:
with tables_with_data(task.connections["target_db"], data):
yield task
else:
yield task
if hasattr(task, "table"):
clear_tables(
task.connections["target_db"],
[
f"{task.schema +'.' if task.schema else ''}{task.table}",
f"{task.tmp_schema +'.' if task.tmp_schema else ''}sayn_tmp_{task.table}",
],
)
def test_autosql_task_table(tmp_path, target_db):
with autosql_task(tmp_path, target_db, "SELECT 1 AS x") as task:
assert task.setup(
file_name="test.sql",
materialisation="table",
destination={"table": "test_autosql_task"},
).is_ok
task.target_db._introspect()
assert task.run().is_ok
assert validate_table(task.default_db, "test_autosql_task", [{"x": 1}])
def test_autosql_task_view(tmp_path, target_db):
with autosql_task(tmp_path, target_db, "SELECT 1 AS x") as task:
assert task.setup(
file_name="test.sql",
materialisation="view",
destination={"table": "test_autosql_task"},
).is_ok
task.target_db._introspect()
assert task.run().is_ok
assert validate_table(
task.default_db,
"test_autosql_task",
[{"x": 1}],
)
def test_autosql_task_incremental(tmp_path, target_db):
with autosql_task(
tmp_path,
target_db,
"SELECT * FROM source_table WHERE updated_at >= 2 OR updated_at IS NULL",
{
"source_table": [
{"id": 1, "updated_at": 1, "name": "x"},
{"id": 2, "updated_at": 2, "name": "y1"},
{"id": 3, "updated_at": None, "name": "z"},
],
"test_autosql_task": [
{"id": 1, "updated_at": 1, "name": "x"},
{"id": 2, "updated_at": None, "name": "y"},
],
},
) as task:
assert task.setup(
file_name="test.sql",
materialisation="incremental",
destination={"table": "test_autosql_task"},
delete_key="id",
).is_ok
task.target_db._introspect()
assert task.run().is_ok
assert validate_table(
task.default_db,
"test_autosql_task",
[
{"id": 1, "updated_at": 1, "name": "x"},
{"id": 2, "updated_at": 2, "name": "y1"},
{"id": 3, "updated_at": None, "name": "z"},
],
)
def test_autosql_task_compile(tmp_path, target_db):
with autosql_task(
tmp_path,
target_db,
"SELECT 1 AS x",
run_arguments={"command": "compile"},
) as task:
assert task.setup(
file_name="test.sql",
materialisation="table",
destination={"table": "test_autosql_task"},
).is_ok
task.target_db._introspect()
assert task.compile().is_ok
def test_autosql_task_param(tmp_path, target_db):
with autosql_task(
tmp_path,
target_db,
"SELECT {{number}} AS x",
task_params={"number": 1},
) as task:
assert task.setup(
file_name="test.sql",
materialisation="table",
destination={"table": "test_autosql_task"},
).is_ok
task.target_db._introspect()
assert task.run().is_ok
assert validate_table(
task.default_db,
"test_autosql_task",
[{"x": 1}],
)
def test_autosql_task_config_error1(tmp_path, target_db):
with autosql_task(tmp_path, target_db, "SELECT 1 AS x") as task:
assert task.setup(
file_nam="test.sql",
materialisation="table",
destination={"table": "test_autosql_task"},
).is_err
def test_autosql_task_config_error2(tmp_path, target_db):
with autosql_task(tmp_path, target_db, "SELECT 1 AS x") as task:
assert task.setup(
file_name="test.sql",
materialisation="wrong",
destination={"table": "test_autosql_task"},
).is_err
def test_autosql_task_config_error3(tmp_path, target_db):
"""Tests missing parameters for jinja compilation"""
with autosql_task(
tmp_path,
target_db,
"SELECT {{number}} AS x",
) as task:
assert task.setup(
file_name="test.sql",
materialisation="table",
destination={"table": "test_autosql_task"},
).is_err
def test_autosql_task_run_error(tmp_path, target_db):
"""Tests failure with erratic sql"""
with autosql_task(
tmp_path,
target_db,
"SELECT * FROM non_existing_table",
) as task:
assert task.setup(
file_name="test.sql",
materialisation="table",
destination={"table": "test_autosql_task"},
).is_ok
task.target_db._introspect()
assert task.run().is_err
# Destination tests
def test_autosql_task_table_db_dst(tmp_path, target_db):
"""Test autosql with db destination set"""
with autosql_task(tmp_path, target_db, "SELECT 1 AS x") as task:
assert task.setup(
file_name="test.sql",
materialisation="table",
destination={"db": "target_db", "table": "test_autosql_task"},
).is_ok
task.target_db._introspect()
assert task.run().is_ok
assert validate_table(
task.target_db,
"test_autosql_task",
[{"x": 1}],
)
def test_autosql_task_table_wrong_db_dst(tmp_path, target_db):
"""Test autosql with db destination set but does not exist in connections"""
with autosql_task(tmp_path, target_db, "SELECT 1 AS x") as task:
assert task.setup(
file_name="test.sql",
materialisation="table",
destination={"db": "wrong_dst", "table": "test_autosql_task"},
).is_err
# DDL tests
@pytest.mark.target_dbs(["sqlite"])
def test_autosql_task_run_ddl_columns(tmp_path, target_db):
with autosql_task(tmp_path, target_db, "SELECT 1 AS x") as task:
assert task.setup(
file_name="test.sql",
materialisation="table",
destination={"table": "test_autosql_task"},
ddl={"columns": [{"name": "x", "type": "integer", "primary": True}]},
).is_ok
task.target_db._introspect()
assert task.run().is_ok
# test the pk has indeed been set
pk_info = task.default_db.read_data("PRAGMA table_info(test_autosql_task)")
assert pk_info[0]["pk"] == 1
@pytest.mark.target_dbs(["sqlite", "mysql", "postgresql"])
def test_autosql_task_run_indexes_pk01(tmp_path, target_db):
"""Test indexes with the primary key only returns error on SQLite
this is because SQLite requires primary keys to be defined in create table statement
so columns definition is needed
"""
with autosql_task(tmp_path, target_db, "SELECT 1 AS x") as task:
assert task.setup(
file_name="test.sql",
materialisation="table",
destination={"table": "test_autosql_task"},
ddl={"indexes": [{"primary_key": "x"}]},
).is_err
@pytest.mark.target_dbs(["sqlite", "mysql", "postgresql"])
def test_autosql_task_run_indexes_pk02(tmp_path, target_db):
with autosql_task(tmp_path, target_db, "SELECT 1 AS x") as task:
assert task.setup(
file_name="test.sql",
materialisation="table",
destination={"table": "test_autosql_task"},
ddl={"columns": ["x"], "indexes": [{"primary_key": "x"}]},
).is_err
@pytest.mark.target_dbs(["sqlite", "mysql", "postgresql"])
def test_autosql_task_ddl_diff_pk_err(tmp_path, target_db):
"""Test autosql task set with different pks in indexes and columns setup error"""
with autosql_task(
tmp_path,
target_db,
"SELECT CAST(1 AS INTEGER) AS y, CAST(1 AS TEXT) AS x",
) as task:
assert task.setup(
file_name="test.sql",
materialisation="table",
destination={"table": "test_autosql_task"},
ddl={
"columns": [
{"name": "y", "type": "int"},
{"name": "x", "type": "text", "primary": True},
],
"indexes": {"primary_key": {"columns": ["y"]}},
},
).is_err
@pytest.mark.target_dbs(["sqlite", "postgresql", "mysql", "redshift"])
def test_autosql_task_run_ddl_diff_col_order(tmp_path, target_db):
"""Test that autosql with ddl columns creates a table with order similar to ddl definition"""
with autosql_task(
tmp_path,
target_db,
"SELECT 1 AS y, '1' AS x",
) as task:
assert task.setup(
file_name="test.sql",
materialisation="table",
destination={"table": "test_autosql_task"},
ddl={
"columns": [
{"name": "x", "type": "text"},
{"name": "y", "type": "int"},
]
},
).is_ok
task.target_db._introspect()
assert task.run().is_ok
assert validate_table(
task.default_db,
"test_autosql_task",
[{"x": "1", "y": 1}],
)
@pytest.mark.target_dbs(["bigquery"])
def test_autosql_task_run_ddl_diff_col_order_bq(tmp_path, target_db):
"""Test that autosql with ddl columns creates a table with order similar to ddl definition"""
with autosql_task(
tmp_path,
target_db,
"SELECT 1 AS y, '1' AS x",
) as task:
assert task.setup(
file_name="test.sql",
materialisation="table",
destination={"table": "test_autosql_task"},
ddl={
"columns": [
{"name": "x", "type": "string"},
{"name": "y", "type": "int64"},
]
},
).is_ok
task.target_db._introspect()
assert task.run().is_ok
assert validate_table(
task.default_db,
"test_autosql_task",
[{"x": "1", "y": 1}],
)
# Testing schemas: this code expects 2 schemas in the database: test and test2
@pytest.mark.target_dbs(["bigquery", "mysql", "postgresql", "redshift", "snowflake"])
def test_autosql_schemas01(tmp_path, target_db):
"""Autosql task with schema specified"""
with autosql_task(
tmp_path,
target_db,
"SELECT 1 AS y, '1' AS x",
) as task:
assert task.setup(
file_name="test.sql",
materialisation="table",
destination={"schema": "test2", "table": "test_autosql_task"},
).is_ok
task.target_db._introspect()
assert task.run().is_ok
assert validate_table(
task.default_db,
"test2.test_autosql_task",
[{"x": "1", "y": 1}],
)
@pytest.mark.target_dbs(["sqlite"])
def test_autosql_schemas_error01(tmp_path, target_db):
"""Autosql task with schema specified with failure as sqlite doesn't support schemas"""
with autosql_task(
tmp_path,
target_db,
"SELECT 1 AS y, '1' AS x",
) as task:
assert task.setup(
file_name="test.sql",
materialisation="table",
destination={"schema": "test2", "table": "test_autosql_task"},
).is_err
@pytest.mark.target_dbs(["bigquery", "mysql", "postgresql", "redshift", "snowflake"])
def test_autosql_schemas02(tmp_path, target_db):
"""Autosql task with temporary schema and schema specified"""
with autosql_task(
tmp_path,
target_db,
"SELECT 1 AS y, '1' AS x",
) as task:
assert task.setup(
file_name="test.sql",
materialisation="table",
destination={
"tmp_schema": "test2",
"schema": "test",
"table": "test_autosql_task",
},
).is_ok
task.target_db._introspect()
assert task.run().is_ok
assert validate_table(
task.default_db,
"test.test_autosql_task",
[{"x": "1", "y": 1}],
)
@pytest.mark.target_dbs(["sqlite"])
def test_autosql_schemas_error02(tmp_path, target_db):
"""Autosql task with temporary schema and schema specified with failure"""
with autosql_task(
tmp_path,
target_db,
"SELECT 1 AS y, '1' AS x",
) as task:
assert task.setup(
file_name="test.sql",
materialisation="table",
destination={
"tmp_schema": "test2",
"schema": "test",
"table": "test_autosql_task",
},
).is_err
| 31.57243 | 97 | 0.562199 | 1,602 | 13,513 | 4.483146 | 0.111735 | 0.117934 | 0.104428 | 0.089808 | 0.778892 | 0.7555 | 0.742412 | 0.738234 | 0.727792 | 0.71164 | 0 | 0.008451 | 0.308222 | 13,513 | 427 | 98 | 31.64637 | 0.759842 | 0.079331 | 0 | 0.728863 | 0 | 0 | 0.183834 | 0.007929 | 0 | 0 | 0 | 0 | 0.125364 | 1 | 0.06414 | false | 0 | 0.011662 | 0 | 0.075802 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c7d8cddc3455f32fa770e609e499fde86e234ad1 | 8,103 | py | Python | py_ml_utils/test/test_missingValueInferer.py | goldentom42/py_ml_utils | 95a2788dd78b38d13f2c7c0e311319aac48f028a | [
"Apache-2.0"
] | 29 | 2017-10-26T01:20:07.000Z | 2021-09-28T08:53:29.000Z | py_ml_utils/test/test_missingValueInferer.py | goldentom42/py_ml_utils | 95a2788dd78b38d13f2c7c0e311319aac48f028a | [
"Apache-2.0"
] | null | null | null | py_ml_utils/test/test_missingValueInferer.py | goldentom42/py_ml_utils | 95a2788dd78b38d13f2c7c0e311319aac48f028a | [
"Apache-2.0"
] | 12 | 2017-12-08T13:14:58.000Z | 2022-03-04T14:06:32.000Z | from unittest import TestCase
import pandas as pd
import numpy as np
from py_ml_utils.missing_value_inferer import *
class TestMissingValueInferer(TestCase):
def test_infer_missing_value_constant(self):
""" Test MeanMissingValueInferer, replacing np.nan by mean """
idx = np.arange(12)
np.random.shuffle(idx)
series = pd.Series([1, 0, 1, 0, 1, 0, 1, 1, 0, 1, np.nan, np.nan], index=idx)
mvi = ConstantMissingValueInferer(feature_name="test", missing_value=np.nan, replacement=-1)
ft_series = mvi.infer(series.to_frame(name="test"))
self.assertEqual(ft_series.values[10], -1)
self.assertEqual(ft_series.values[11], -1)
self.assertAlmostEqual(0, np.sum(np.abs(np.array(series.index) - np.array(ft_series.index))))
def test_infer_missing_value_mean(self):
""" Test MeanMissingValueInferer, replacing np.nan by mean """
idx = np.arange(12)
np.random.shuffle(idx)
series = pd.Series([1, 0, 1, 0, 1, 0, 1, 1, 0, 1, np.nan, np.nan], index=idx)
mvi = MeanMissingValueInferer(feature_name="test", missing_value=np.nan)
ft_series = mvi.infer(series.to_frame(name="test"))
self.assertEqual(ft_series.values[10], 0.6)
self.assertEqual(ft_series.values[11], 0.6)
self.assertAlmostEqual(0, np.sum(np.abs(np.array(series.index) - np.array(ft_series.index))))
def test_infer_missing_value_mean_index(self):
""" Test MeanMissingValueInferer, check returned series has same index as input Series """
idx = np.arange(12)
np.random.shuffle(idx)
series = pd.Series([1, 0, 1, 0, 1, 0, 1, 1, 0, 1, np.nan, np.nan], index=idx)
idx = np.arange(12)
np.random.shuffle(idx)
series.index = idx
mvi = MeanMissingValueInferer(feature_name="test", missing_value=np.nan)
ft_series = mvi.infer(series.to_frame(name="test"))
self.assertEqual(0, np.mean(np.abs((ft_series.index - series.index))))
self.assertAlmostEqual(0, np.sum(np.abs(np.array(series.index) - np.array(ft_series.index))))
def test_infer_missing_value_median(self):
""" Test MedianMissingValueInferer, replacing np.nan by median """
idx = np.arange(12)
np.random.shuffle(idx)
series = pd.Series([1, 0, 1, 0, 1, 0, 1, 1, 0, 1, np.nan, np.nan], index=idx)
mvi = MedianMissingValueInferer(feature_name="test", missing_value=np.nan)
ft_series = mvi.infer(series.to_frame(name="test"))
self.assertEqual(ft_series.values[10], 1.0)
self.assertEqual(ft_series.values[11], 1.0)
self.assertAlmostEqual(0, np.sum(np.abs(np.array(series.index) - np.array(ft_series.index))))
def test_infer_missing_value_most_frequent(self):
""" Test MostFrequentMissingValueInferer, replacing np.nan by most frequent value """
idx = np.arange(12)
np.random.shuffle(idx)
series = pd.Series([1, 0, 1, 0, 1, 0, 1, 1, 0, 1, np.nan, np.nan], index=idx)
mvi = MostFrequentMissingValueInferer(feature_name="test", missing_value=np.nan)
ft_series = mvi.infer(series.to_frame(name="test"))
self.assertEqual(ft_series.values[10], 1)
self.assertEqual(ft_series.values[11], 1)
self.assertAlmostEqual(0, np.sum(np.abs(np.array(series.index) - np.array(ft_series.index))))
def test_infer_missing_value_most_frequent_missing_value_arg(self):
""" Test MostFrequentMissingValueInferer, replacing -1 by most frequent value """
idx = np.arange(12)
np.random.shuffle(idx)
series = pd.Series([1, 0, 1, 0, 1, 0, 1, 1, 0, 1, -1, -1.0], index=idx)
mvi = MostFrequentMissingValueInferer(feature_name="test", missing_value=-1)
ft_series = mvi.infer(series.to_frame(name="test"))
self.assertEqual(ft_series.values[10], 1)
self.assertEqual(ft_series.values[11], 1)
self.assertAlmostEqual(0, np.sum(np.abs(np.array(series.index) - np.array(ft_series.index))))
def test_infer_missing_value_no_series(self):
""" Test exception when no Series provided """
mvi = MissingValueInferer()
self.assertRaises(ValueError, mvi.infer, None)
def test_infer_missing_value_empty_series(self):
""" Test exception when empty Series provided """
mvi = MissingValueInferer()
self.assertRaises(ValueError, mvi.infer, dataset=pd.DataFrame())
def test_infer_missing_value_using_groupby(self):
""" Test GroupByMissingValueInferer, where missing value is infered using other features in the dataset """
len_series = 30
idx = np.arange(len_series)
np.random.shuffle(idx)
np.random.seed(18)
f1_series = pd.Series(np.random.choice(['A', 'B', 'C'], len_series, p=[0.40, 0.30, 0.30]), name='f1', index=idx)
f2_series = pd.Series(np.random.choice(['D', 'E'], len_series, p=[0.40, 0.60]), name='f2', index=idx)
target = pd.Series(np.random.choice([0, 1, np.nan], len_series, p=[0.6, 0.3, 0.1]), name='target', index=idx)
mvi = GroupByMissingValueInferer(feature_name="target",
missing_value=np.nan,
groupby=["f1", "f2"],
average_type="MEAN")
series1 = mvi.infer(dataset=pd.concat([f1_series, f2_series, target], axis=1))
series2 = mvi.infer(dataset=[pd.concat([f1_series, f2_series, target], axis=1)])
expected_result = pd.Series([1.0, 1.0, 0.0, 0.5, 0.0, 1.0, 0.0, 0.0, 0.5, 1.0, 1.0, 0.0,
0.25, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0,
0.0, 1.0, 0.0, 0.0, 0.0, 0.0], index=idx)
self.assertAlmostEqual(0, (series1 - expected_result).abs().mean())
self.assertAlmostEqual(0, np.sum(np.abs(np.array(series1.index) - np.array(expected_result.index))))
self.assertAlmostEqual(0, (series2 - expected_result).abs().mean())
self.assertAlmostEqual(0, np.sum(np.abs(np.array(series2.index) - np.array(expected_result.index))))
series1 = mvi.infer(dataset=[pd.concat([f1_series[:20], f2_series[:20], target[:20]], axis=1),
pd.concat([f1_series[20:], f2_series[20:], target[20:]], axis=1)])
expected0 = [1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0]
expected1 = [0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0]
self.assertAlmostEqual(0, (series1[0] - expected0).abs().mean())
self.assertAlmostEqual(0, (series1[1] - expected1).abs().mean())
def test_infer_missing_value_using_groupby_index(self):
""" Test GroupByMissingValueInferer, where missing value is infered using other features in the dataset
Here we check that output Serie index is equal to inpu Series index """
len_series = 30
np.random.seed(18)
f1_series = pd.Series(np.random.choice(['A', 'B', 'C'], len_series, p=[0.40, 0.30, 0.30]), name='f1')
f2_series = pd.Series(np.random.choice(['D', 'E'], len_series, p=[0.40, 0.60]), name='f2')
target = pd.Series(np.random.choice([0, 1, np.nan], len_series, p=[0.6, 0.3, 0.1]), name='target')
idx = np.arange(len_series)
np.random.shuffle(idx)
f1_series.index = idx
f2_series.index = idx
target.index = idx
mvi = GroupByMissingValueInferer(feature_name="target", missing_value=np.nan, groupby=["f1", "f2"],
average_type="MEAN")
series1 = mvi.infer(dataset=pd.concat([f1_series, f2_series, target], axis=1))
expected_result = [1.0, 1.0, 0.0, 0.5, 0.0, 1.0, 0.0, 0.0, 0.5, 1.0, 1.0, 0.0,
0.25, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0,
0.0, 1.0, 0.0, 0.0, 0.0, 0.0]
self.assertAlmostEqual(0, (series1 - expected_result).abs().mean())
self.assertAlmostEqual(0, np.mean(np.abs((series1.index - target.index))))
| 55.5 | 120 | 0.617055 | 1,224 | 8,103 | 3.974673 | 0.099673 | 0.043165 | 0.050565 | 0.050976 | 0.837205 | 0.805961 | 0.7852 | 0.757246 | 0.752724 | 0.680164 | 0 | 0.067366 | 0.223251 | 8,103 | 145 | 121 | 55.882759 | 0.705593 | 0.093916 | 0 | 0.486726 | 0 | 0 | 0.014584 | 0 | 0 | 0 | 0 | 0 | 0.238938 | 1 | 0.088496 | false | 0 | 0.035398 | 0 | 0.132743 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c7e3da80234ab4344c571bc28635eab3656b037e | 44 | py | Python | iopipe/contrib/eventinfo/__init__.py | skeptycal/iopipe-python | f6afba36663751779cba55ce53c0e1f2042df0d7 | [
"Apache-2.0"
] | 74 | 2016-08-18T14:26:50.000Z | 2021-11-21T10:58:32.000Z | iopipe/contrib/eventinfo/__init__.py | vemel/iopipe-python | 46c277f9447ddb00e544437ceaa7ba263a759c1d | [
"Apache-2.0"
] | 198 | 2016-08-18T18:52:43.000Z | 2021-05-09T10:01:14.000Z | iopipe/contrib/eventinfo/__init__.py | vemel/iopipe-python | 46c277f9447ddb00e544437ceaa7ba263a759c1d | [
"Apache-2.0"
] | 23 | 2016-08-04T23:22:21.000Z | 2020-01-20T13:54:27.000Z | from .plugin import EventInfoPlugin # noqa
| 22 | 43 | 0.795455 | 5 | 44 | 7 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.159091 | 44 | 1 | 44 | 44 | 0.945946 | 0.090909 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1be05a4c0c8940e843fa5cc5a5605dacc3928a16 | 122 | py | Python | lfd/datasets/__init__.py | Syhen/learn-from-datasets | 18b5104f1ea9f6b4e950e03958f27d7a12fdadbd | [
"MIT"
] | null | null | null | lfd/datasets/__init__.py | Syhen/learn-from-datasets | 18b5104f1ea9f6b4e950e03958f27d7a12fdadbd | [
"MIT"
] | null | null | null | lfd/datasets/__init__.py | Syhen/learn-from-datasets | 18b5104f1ea9f6b4e950e03958f27d7a12fdadbd | [
"MIT"
] | null | null | null | """
@created by: heyao
@created at: 2021-12-08 14:29:59
"""
from lfd.datasets.disaster_tweets import load_disaster_tweets
| 20.333333 | 61 | 0.762295 | 20 | 122 | 4.5 | 0.85 | 0.311111 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12844 | 0.106557 | 122 | 5 | 62 | 24.4 | 0.697248 | 0.418033 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
40288ecd8bda7bf80913d4eb7d294b9d17c9f5c7 | 141 | py | Python | app/crossref/__init__.py | ETspielberg/sdg_query_execution | 6e95d4d6e158f72e2aa61c64ba3aac5980b14e5b | [
"MIT"
] | null | null | null | app/crossref/__init__.py | ETspielberg/sdg_query_execution | 6e95d4d6e158f72e2aa61c64ba3aac5980b14e5b | [
"MIT"
] | 2 | 2021-03-31T18:47:27.000Z | 2021-12-13T19:50:45.000Z | app/crossref/__init__.py | Aurora-Network-Global/sdg_query_execution | 74375faa41656adef13ab472c2f4f4b2097a955a | [
"MIT"
] | 2 | 2018-09-21T07:42:17.000Z | 2021-08-02T15:58:45.000Z | from flask import Blueprint
crossref_blueprint = Blueprint('crossref', __name__, template_folder='templates')
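# crossref_routes is imported only after the Blueprint exists so the view
# functions it defines can attach to `crossref_blueprint`. A registration
# sketch (assumes a Flask `app` object created elsewhere):
#     app.register_blueprint(crossref_blueprint)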
from . import crossref_routes | 28.2 | 81 | 0.822695 | 16 | 141 | 6.8125 | 0.625 | 0.311927 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.099291 | 141 | 5 | 82 | 28.2 | 0.858268 | 0 | 0 | 0 | 0 | 0 | 0.119718 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0.666667 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
4064606009848858d0f02ec201eb30c9ec4026c9 | 26 | py | Python | terrascript/dnsimple/__init__.py | GarnerCorp/python-terrascript | ec6c2d9114dcd3cb955dd46069f8ba487e320a8c | [
"BSD-2-Clause"
] | null | null | null | terrascript/dnsimple/__init__.py | GarnerCorp/python-terrascript | ec6c2d9114dcd3cb955dd46069f8ba487e320a8c | [
"BSD-2-Clause"
] | null | null | null | terrascript/dnsimple/__init__.py | GarnerCorp/python-terrascript | ec6c2d9114dcd3cb955dd46069f8ba487e320a8c | [
"BSD-2-Clause"
] | 1 | 2018-11-15T16:23:05.000Z | 2018-11-15T16:23:05.000Z | """2019-05-28 10:49:27"""
| 13 | 25 | 0.538462 | 6 | 26 | 2.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.583333 | 0.076923 | 26 | 1 | 26 | 26 | 0 | 0.730769 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
40b9e672b206235e875a9efeb6c6d340d8d76656 | 48 | py | Python | scripts/qgis_fixes/fix_has_key.py | dyna-mis/Hilabeling | cb7d5d4be29624a20c8a367162dbc6fd779b2b52 | [
"MIT"
] | null | null | null | scripts/qgis_fixes/fix_has_key.py | dyna-mis/Hilabeling | cb7d5d4be29624a20c8a367162dbc6fd779b2b52 | [
"MIT"
] | null | null | null | scripts/qgis_fixes/fix_has_key.py | dyna-mis/Hilabeling | cb7d5d4be29624a20c8a367162dbc6fd779b2b52 | [
"MIT"
] | 1 | 2021-12-25T08:40:30.000Z | 2021-12-25T08:40:30.000Z | from lib2to3.fixes.fix_has_key import FixHasKey
| 24 | 47 | 0.875 | 8 | 48 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.045455 | 0.083333 | 48 | 1 | 48 | 48 | 0.863636 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
40d43b3a580507d1430eede16d1977c3566e8642 | 29 | py | Python | envs/wrappers/gym_wrapper/__init__.py | kmakeev/RLs | c47e9b504db157731c26d7c881719a4fb54cc355 | [
"Apache-2.0"
] | 1 | 2021-01-05T12:08:56.000Z | 2021-01-05T12:08:56.000Z | envs/wrappers/gym_wrapper/__init__.py | kmakeev/RLs | c47e9b504db157731c26d7c881719a4fb54cc355 | [
"Apache-2.0"
] | null | null | null | envs/wrappers/gym_wrapper/__init__.py | kmakeev/RLs | c47e9b504db157731c26d7c881719a4fb54cc355 | [
"Apache-2.0"
] | 1 | 2021-01-24T13:29:16.000Z | 2021-01-24T13:29:16.000Z | from .gym_env import gym_envs | 29 | 29 | 0.862069 | 6 | 29 | 3.833333 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.103448 | 29 | 1 | 29 | 29 | 0.884615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
40ffaa95e126a6a2bc57ac54a0676079d98a1487 | 50 | py | Python | pympg/__init__.py | DeveloperNeon/pympg | c9c15319cdbafe6fae1ed4491548f32099f40de4 | [
"MIT"
] | null | null | null | pympg/__init__.py | DeveloperNeon/pympg | c9c15319cdbafe6fae1ed4491548f32099f40de4 | [
"MIT"
] | null | null | null | pympg/__init__.py | DeveloperNeon/pympg | c9c15319cdbafe6fae1ed4491548f32099f40de4 | [
"MIT"
] | null | null | null | from .gen.main import *
from .gen.apache import *
| 16.666667 | 25 | 0.72 | 8 | 50 | 4.5 | 0.625 | 0.388889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 50 | 2 | 26 | 25 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
dc09c1ccae4f4294d21f6579d6ed5d711370d6d4 | 22,451 | py | Python | getGeneRegions.py | TaliaferroLab/AnalysisScripts | 3df37d2f8fca9bc402afe5ea870c42200fca1ed3 | [
"MIT"
] | null | null | null | getGeneRegions.py | TaliaferroLab/AnalysisScripts | 3df37d2f8fca9bc402afe5ea870c42200fca1ed3 | [
"MIT"
] | null | null | null | getGeneRegions.py | TaliaferroLab/AnalysisScripts | 3df37d2f8fca9bc402afe5ea870c42200fca1ed3 | [
"MIT"
] | 1 | 2021-10-30T07:37:19.000Z | 2021-10-30T07:37:19.000Z | #Given a genome annotation (ensembl preferably) and a genome sequence, and a list of transcript
#IDs that you are interested in, get coords and sequences for the 5' UTR, 3' UTR, and CDS
#regions of those transcripts
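#Example invocation (a sketch; the input/output file names are hypothetical,
#but the flags match the argparse definitions at the bottom of this script):
#  python getGeneRegions.py --transcripts txids.txt --gff annotation.gff3 \
#    --ens2short Ensembl_to_genename.txt --genomefasta genome.fa.gz \
#    --outputgff utr3regions.gff --outputfasta utr3regions.fa --region UTR3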
import gffutils
import os
from Bio import SeqIO
import argparse
import gzip
from operator import itemgetter
from itertools import groupby
def getCDScoords(gff, ens2short, txs, outputgff):
txCDScoords = {} #{ENSMUST_chrm_strand : [[cdsexon1start, cdsexon1stop], [cdsexon2start, cdsexon2stop]]}
tx2gene = {} # {ENSMUST : ENSMUSG}
txchrmstrand = {} #{ENSMUST: [chrm, strand]}
geneboundaries = {} # {ensid : [genestart, genestop]}
genecount = 0
geneswithcodingtranscript = 0
e2sdict = {} #{ENSGene : shortname}
infh = open(ens2short, 'r')
for line in infh:
line = line.strip().split('\t')
if line[0].startswith('ENSMUSG'):
e2sdict[line[0]] = line[2]
infh.close()
#Make gff database
print 'Indexing gff...'
gff_fn = gff
db_fn = os.path.abspath(gff_fn) + '.db'
if os.path.isfile(db_fn) == False:
gffutils.create_db(gff_fn, db_fn, merge_strategy = 'merge', verbose = True)
db = gffutils.FeatureDB(db_fn)
print 'Done indexing!'
genes = db.features_of_type('gene')
for gene in genes:
genecount +=1
if genecount % 10000 == 0:
print 'Gene {0}...'.format(genecount)
geneID = str(gene.id).replace('gene:', '')
geneboundaries[geneID] = [gene.start, gene.end]
chrm = str(gene.chrom)
strand = gene.strand
for transcript in db.children(gene, featuretype = 'transcript', order_by = 'start'):
#If this transcript has no coding exons, skip it:
if len(list(db.children(transcript, featuretype = 'CDS', level = 1))) == 0:
continue
transcriptID = str(transcript.id).replace('transcript:', '').split('.')[0]
if transcriptID in txs:
tx2gene[transcriptID] = geneID
#txCDScoords[transcriptID + '_' + chrm + '_' + strand] = []
txCDScoords[transcriptID] = []
txchrmstrand[transcriptID] = [transcript.chrom, transcript.strand]
for codingexon in db.children(transcript, featuretype = 'CDS', order_by = 'start'):
#txCDScoords[transcriptID + '_' + chrm + '_' + strand].append([codingexon.start, codingexon.end])
txCDScoords[transcriptID].append([codingexon.start, codingexon.end])
print 'Looked through {0} genes for {1} transcripts. Found coding regions for {2} of them.'.format(genecount, len(txs), len(txCDScoords))
with open(outputgff, 'w') as f:
for transcript in txCDScoords:
exoncounter = 0
transcriptID = transcript.split('_')[0]
chrm = txchrmstrand[transcriptID][0]
strand = txchrmstrand[transcriptID][1]
geneID = tx2gene[transcriptID]
geneshortname = e2sdict[geneID.split('.')[0]]
genestart, geneend = geneboundaries[geneID][0], geneboundaries[geneID][1]
IDline = 'ID=gene:{0};Name={1};gene_id={2}'.format(geneID, geneshortname, geneID)
f.write(('\t').join([chrm, 'gene', 'gene', str(genestart), str(geneend), '.', strand, '.', IDline]) + '\n')
CDSstart = txCDScoords[transcript][0][0]
CDSstop = txCDScoords[transcript][-1][1]
IDline = 'ID=CDS:{0};Parent={1};gene_id={2}'.format(transcriptID, geneID, geneID)
f.write(('\t').join([chrm, 'CDS', 'CDS', str(CDSstart), str(CDSstop), '.', strand, '.', IDline]) + '\n')
for CDSexon in txCDScoords[transcript]:
exoncounter +=1
exonstart = CDSexon[0]
exonstop = CDSexon[1]
IDline = 'ID=exon:{0}.cdsexon{1};Parent=CDS:{2}'.format(transcriptID, exoncounter, transcriptID)
f.write(('\t').join([chrm, 'exon', 'exon', str(exonstart), str(exonstop), '.', strand, '.', IDline]) + '\n')
return txCDScoords, tx2gene, txchrmstrand
def get3UTRcoords(gff, ens2short, txs, outputgff):
txUTRcoords = {} #{ENSMUST_chrm_strand : [[utrexon1start, utrexon1stop], [utrexon2start, utrexon2stop]]}
txchrmstrand = {} #{ENSMUST: [chrm, strand]}
genechrmstrand = {} #{ENSMUSG : [chrm , strand]}
gene2tx = {} # {ENSMUSG : [ENSMUST]}
tx2gene = {} # {ENSMUST : ENSMSUG}
geneboundaries = {} # {ensid : [genestart, genestop]}
genecount = 0
e2sdict = {} #{ENSGene : shortname}
infh = open(ens2short, 'r')
for line in infh:
line = line.strip().split('\t')
if line[0].startswith('ENSMUSG'):
e2sdict[line[0]] = line[2]
infh.close()
a = []
for tx in txs:
a.append(tx.split('.')[0])
txs = a
#Make gff database
print 'Indexing gff...'
gff_fn = gff
db_fn = os.path.abspath(gff_fn) + '.db'
if os.path.isfile(db_fn) == False:
gffutils.create_db(gff_fn, db_fn, merge_strategy = 'merge', verbose = True)
db = gffutils.FeatureDB(db_fn)
print 'Done indexing!'
genes = db.features_of_type('gene')
for gene in genes:
genecount +=1
if genecount % 5000 == 0:
print 'Gene {0}...'.format(genecount)
geneID = str(gene.id).replace('gene:', '')
geneboundaries[geneID] = [gene.start, gene.end]
chrm = str(gene.chrom)
strand = gene.strand
for transcript in db.children(gene, featuretype = 'transcript', order_by = 'start'):
#If this transcript has no coding exons, skip it:
if len(list(db.children(transcript, featuretype = 'CDS', level = 1))) == 0:
continue
transcriptID = str(transcript.id).replace('transcript:', '').split('.')[0]
if transcriptID in txs:
if geneID not in genechrmstrand:
genechrmstrand[geneID] = [gene.chrom, gene.strand]
if transcriptID not in txchrmstrand:
txchrmstrand[transcriptID] = [transcript.chrom, transcript.strand]
exoncoords = [] #[[exon1start, exon1stop], [exon2start, exon2stop]]
CDScoords = []
UTRcoords = [] #[UTRstart, UTRstop]
for exon in db.children(transcript, featuretype = 'exon', order_by = 'start'):
exoncoords.append([exon.start, exon.end])
for CDSexon in db.children(transcript, featuretype = 'CDS', order_by = 'start'):
CDScoords.append([CDSexon.start, CDSexon.end])
#3' UTR start is directly after CDS end
if transcript.strand == '+':
CDSend = max(CDScoords, key = itemgetter(1))[1]
#If the transcript ends right where the CDS ends, then there's no UTR
if CDSend == transcript.end:
continue
UTR3start = CDSend + 1
UTRcoords = [UTR3start, transcript.end]
elif transcript.strand == '-':
CDSend = min(CDScoords, key = itemgetter(0))[0]
#If the transcript ends right where the CDS ends, then there's no UTR
if CDSend == transcript.start:
continue
UTR3start = CDSend - 1
UTRcoords = [transcript.start, UTR3start]
#Check to see if the UTR is fully contained within the coordinates of one exon
singleexonUTR = False
for exoncoord in exoncoords:
exonstart, exonend = exoncoord[0], exoncoord[1]
if exonstart <= UTRcoords[0] and exonend >= UTRcoords[1] and len(UTRcoords) > 0:
singleexonUTR = True
txUTRcoords[transcriptID] = [UTRcoords]
if geneID not in gene2tx:
gene2tx[geneID] = []
gene2tx[geneID].append(transcriptID)
tx2gene[transcriptID] = geneID
if singleexonUTR == False:
#Get all positions that are both exonic and in the 3' UTR
overlappingbp = [] #sorted exonic positions in UTR
UTR3range = range(UTRcoords[0], UTRcoords[1] + 1)
for exoncoord in exoncoords:
exonrange = range(exoncoord[0], exoncoord[1] + 1)
overlap = set(UTR3range).intersection(exonrange)
for nt in sorted(list(overlap)):
overlappingbp.append(nt)
#Now get breaks in consecutive exonic positions
#http://stackoverflow.com/questions/2361945/detecting-consecutive-integers-in-a-list
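					#e.g. overlappingbp [101, 102, 103, 250, 251] splits into the consecutive
					#runs 101-103 and 250-251, yielding UTRexoncoords [[101, 103], [250, 251]]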
UTRexoncoords = []
for k, g in groupby(enumerate(overlappingbp), lambda (index, item): index-item):
exonbp = map(itemgetter(1), g)
if len(exonbp) > 1:
UTRexoncoords.append([exonbp[0], exonbp[-1]])
txUTRcoords[transcriptID] = UTRexoncoords
if geneID not in gene2tx:
gene2tx[geneID] = []
gene2tx[geneID].append(transcriptID)
tx2gene[transcriptID] = geneID
print 'Looked through {0} genes for {1} transcripts. Found 3\' UTRs for {2} of them.'.format(genecount, len(txs), len(txUTRcoords))
#Output to gff
with open(outputgff, 'w') as f:
#txUTRcoords = {} #{ENSMUST : [[utrexon1start, utrexon1stop], [utrexon2start, utrexon2stop]]}
#txchrmstrand = {} #{ENSMUST: [chrm, strand]}
#genechrmstrand = {} #{ENSMUSG : [chrm , strand]}
#gene2tx = {} # {ENSMUSG : ENSMUST}
for gene in gene2tx:
#Check to see if this gene has at least one transcript with a 3' UTR
has3UTR = False
for transcript in gene2tx[gene]:
if len(txUTRcoords[transcript]) > 0:
has3UTR = True
if not has3UTR:
continue
genechrm = genechrmstrand[gene][0]
genestrand = genechrmstrand[gene][1]
genestart = geneboundaries[gene][0]
genestop = geneboundaries[gene][1]
if gene in e2sdict:
geneshortname = e2sdict[gene]
else:
geneshortname = gene
IDline = 'ID=gene:{0};Name={1};gene_id={2}'.format(gene, geneshortname, gene)
f.write(('\t').join([genechrm, 'gene', 'gene', str(genestart), str(genestop), '.', genestrand, '.', IDline]) + '\n')
for transcript in gene2tx[gene]:
#If it doesn't have a UTR, skip it
if len(txUTRcoords[transcript]) == 0:
continue
exoncounter = 0
txchrm = txchrmstrand[transcript][0]
txstrand = txchrmstrand[transcript][1]
UTRstart = txUTRcoords[transcript][0][0]
UTRstop = txUTRcoords[transcript][-1][1]
IDline = 'ID=UTR3:{0};Parent=gene:{1};gene_id={2}'.format(transcript, gene, gene)
f.write(('\t').join([txchrm, 'UTR3', 'UTR3', str(UTRstart), str(UTRstop), '.', txstrand, '.', IDline]) + '\n')
for UTRexon in txUTRcoords[transcript]:
exoncounter +=1
exonstart = UTRexon[0]
exonstop = UTRexon[1]
IDline = 'ID=exon:{0}.utr3exon{1};Parent=UTR3:{2}'.format(transcript, exoncounter, transcript)
f.write(('\t').join([txchrm, 'exon', 'exon', str(exonstart), str(exonstop), '.', txstrand, '.', IDline]) + '\n')
'''
for transcript in txUTRcoords:
#If there is no UTR here, skip it
if len(txUTRcoords[transcript]) == 0:
continue
exoncounter = 0
transcriptID = transcript.split('_')[0]
chrm = transcript.split('_')[1]
strand = transcript.split('_')[2]
#transcriptID = ('_').join([transcript.split('_')[0], transcript.split('_')[1]])
#chrm = transcript.split('_')[2]
#strand = transcript.split('_')[3]
geneID = tx2gene[transcriptID]
if geneID in e2sdict:
geneshortname = e2sdict[geneID]
else:
geneshortname = geneID
genestart, geneend = geneboundaries[geneID][0], geneboundaries[geneID][1]
IDline = 'ID=gene:{0};Name={1};gene_id={2}'.format(geneID, geneshortname, geneID)
f.write(('\t').join([chrm, 'gene', 'gene', str(genestart), str(geneend), '.', strand, '.', IDline]) + '\n')
UTRstart = txUTRcoords[transcript][0][0]
UTRstop = txUTRcoords[transcript][-1][1]
IDline = 'ID=UTR3:{0};Parent=gene:{1};gene_id={2}'.format(transcriptID, geneID, geneID)
f.write(('\t').join([chrm, 'UTR3', 'UTR3', str(UTRstart), str(UTRstop), '.', strand, '.', IDline]) + '\n')
for UTRexon in txUTRcoords[transcript]:
exoncounter +=1
exonstart = UTRexon[0]
exonstop = UTRexon[1]
IDline = 'ID=exon:{0}.utr3exon{1};Parent=UTR3:{2}'.format(transcriptID, exoncounter, transcriptID)
f.write(('\t').join([chrm, 'exon', 'exon', str(exonstart), str(exonstop), '.', strand, '.', IDline]) + '\n')
'''
return txUTRcoords, tx2gene, txchrmstrand
def get5UTRcoords(gff, ens2short, txs, outputgff):
txUTRcoords = {} #{ENSMUST_chrm_strand : [[utrexon1start, utrexon1stop], [utrexon2start, utrexon2stop]]}
txchrmstrand = {} #{ENSMUST: [chrm, strand]}
genechrmstrand = {} #{ENSMUSG : [chrm , strand]}
gene2tx = {} # {ENSMUSG : [ENSMUST]}
tx2gene = {} # {ENSMUST : ENSMSUG}
geneboundaries = {} # {ensid : [genestart, genestop]}
genecount = 0
e2sdict = {} #{ENSGene : shortname}
infh = open(ens2short, 'r')
for line in infh:
line = line.strip().split('\t')
if line[0].startswith('ENSMUSG'):
e2sdict[line[0]] = line[2]
infh.close()
#Make gff database
print 'Indexing gff...'
gff_fn = gff
db_fn = os.path.abspath(gff_fn) + '.db'
if os.path.isfile(db_fn) == False:
gffutils.create_db(gff_fn, db_fn, merge_strategy = 'merge', verbose = True)
db = gffutils.FeatureDB(db_fn)
print 'Done indexing!'
a = []
for tx in txs:
a.append(tx.split('.')[0])
txs = a
genes = db.features_of_type('gene')
for gene in genes:
genecount +=1
if genecount % 10000 == 0:
print 'Gene {0}...'.format(genecount)
geneID = str(gene.id).replace('gene:', '')
geneboundaries[geneID] = [gene.start, gene.end]
chrm = str(gene.chrom)
strand = gene.strand
for transcript in db.children(gene, featuretype = 'transcript', order_by = 'start'):
#If this transcript has no coding exons, skip it:
if len(list(db.children(transcript, featuretype = 'CDS', level = 1))) == 0:
continue
			transcriptID = str(transcript.id).replace('transcript:', '').split('.')[0]
if transcriptID in txs:
if geneID not in genechrmstrand:
genechrmstrand[geneID] = [gene.chrom, gene.strand]
if transcriptID not in txchrmstrand:
txchrmstrand[transcriptID] = [transcript.chrom, transcript.strand]
tx2gene[transcriptID] = geneID
exoncoords = [] #[[exon1start, exon1stop], [exon2start, exon2stop]]
CDScoords = []
UTRcoords = [] #[UTRstart, UTRstop]
for exon in db.children(transcript, featuretype = 'exon', order_by = 'start'):
exoncoords.append([exon.start, exon.end])
for CDSexon in db.children(transcript, featuretype = 'CDS', order_by = 'start'):
CDScoords.append([CDSexon.start, CDSexon.end])
#5' UTR end is directly before CDS start
if transcript.strand == '+':
CDSstart = min(CDScoords, key = itemgetter(0))[0]
#If the transcript starts right where the CDS starts, then there's no UTR
if CDSstart == transcript.start:
continue
UTR5end = CDSstart - 1
UTRcoords = [transcript.start, UTR5end]
elif transcript.strand == '-':
CDSstart = max(CDScoords, key = itemgetter(1))[1]
#If the transcript starts right where the CDS starts, then there's no UTR
if CDSstart == transcript.end:
continue
UTR5end = CDSstart + 1
UTRcoords = [UTR5end, transcript.end]
#Check to see if the UTR is fully contained within the coordinates of one exon
singleexonUTR = False
for exoncoord in exoncoords:
exonstart, exonend = exoncoord[0], exoncoord[1]
if exonstart <= UTRcoords[0] and exonend >= UTRcoords[1]:
singleexonUTR = True
txUTRcoords[transcriptID] = [UTRcoords]
if geneID not in gene2tx:
gene2tx[geneID] = []
gene2tx[geneID].append(transcriptID)
tx2gene[transcriptID] = geneID
if singleexonUTR == False:
#Get all positions that are both exonic and in the 3' UTR
overlappingbp = [] #sorted exonic positions in UTR
UTR5range = range(UTRcoords[0], UTRcoords[1] + 1)
for exoncoord in exoncoords:
exonrange = range(exoncoord[0], exoncoord[1] + 1)
overlap = set(UTR5range).intersection(exonrange)
for nt in sorted(list(overlap)):
overlappingbp.append(nt)
#Now get breaks in consecutive exonic positions
#http://stackoverflow.com/questions/2361945/detecting-consecutive-integers-in-a-list
UTRexoncoords = []
for k, g in groupby(enumerate(overlappingbp), lambda (index, item): index-item):
exonbp = map(itemgetter(1), g)
if len(exonbp) > 1:
UTRexoncoords.append([exonbp[0], exonbp[-1]])
txUTRcoords[transcriptID] = UTRexoncoords
if geneID not in gene2tx:
gene2tx[geneID] = []
gene2tx[geneID].append(transcriptID)
tx2gene[transcriptID] = geneID
print 'Looked through {0} genes for {1} transcripts. Found 5\' UTRs for {2} of them.'.format(genecount, len(txs), len(txUTRcoords))
#Output to gff
#txUTRcoords = {} #{ENSMUST : [[utrexon1start, utrexon1stop], [utrexon2start, utrexon2stop]]}
#txchrmstrand = {} #{ENSMUST: [chrm, strand]}
#genechrmstrand = {} #{ENSMUSG : [chrm , strand]}
#gene2tx = {} # {ENSMUSG : ENSMUST}
with open(outputgff, 'w') as f:
for gene in gene2tx:
#Check to see if this gene has at least one transcript with a 3' UTR
has5UTR = False
for transcript in gene2tx[gene]:
if len(txUTRcoords[transcript]) > 0:
has5UTR = True
if not has5UTR:
continue
genechrm = genechrmstrand[gene][0]
genestrand = genechrmstrand[gene][1]
genestart = geneboundaries[gene][0]
genestop = geneboundaries[gene][1]
if gene in e2sdict:
geneshortname = e2sdict[gene]
else:
geneshortname = gene
IDline = 'ID=gene:{0};Name={1};gene_id={2}'.format(gene, geneshortname, gene)
f.write(('\t').join([genechrm, 'gene', 'gene', str(genestart), str(genestop), '.', genestrand, '.', IDline]) + '\n')
for transcript in gene2tx[gene]:
#If it doesn't have a UTR, skip it
if len(txUTRcoords[transcript]) == 0:
continue
exoncounter = 0
txchrm = txchrmstrand[transcript][0]
txstrand = txchrmstrand[transcript][1]
UTRstart = txUTRcoords[transcript][0][0]
UTRstop = txUTRcoords[transcript][-1][1]
IDline = 'ID=UTR5:{0};Parent=gene:{1};gene_id={2}'.format(transcript, gene, gene)
f.write(('\t').join([txchrm, 'UTR5', 'UTR5', str(UTRstart), str(UTRstop), '.', txstrand, '.', IDline]) + '\n')
for UTRexon in txUTRcoords[transcript]:
exoncounter +=1
exonstart = UTRexon[0]
exonstop = UTRexon[1]
IDline = 'ID=exon:{0}.utr5exon{1};Parent=UTR5:{2}'.format(transcript, exoncounter, transcript)
f.write(('\t').join([txchrm, 'exon', 'exon', str(exonstart), str(exonstop), '.', txstrand, '.', IDline]) + '\n')
'''
with open(outputgff, 'w') as f:
#txUTRcoords = {} #{ENSMUST_chrm_strand : [[utrexon1start, utrexon1stop], [utrexon2start, utrexon2stop]]}
for transcript in txUTRcoords:
exoncounter = 0
#transcriptID = transcript.split('_')[0]
#chrm = transcript.split('_')[1]
#strand = transcript.split('_')[2]
geneID = tx2gene[transcriptID]
geneshortname = e2sdict[geneID]
genestart, geneend = geneboundaries[geneID][0], geneboundaries[geneID][1]
IDline = 'ID=gene:{0};Name={1};gene_id={2}'.format(geneID, geneshortname, geneID)
f.write(('\t').join([chrm, 'gene', 'gene', str(genestart), str(geneend), '.', strand, '.', IDline]) + '\n')
UTRstart = txUTRcoords[transcript][0][0]
UTRstop = txUTRcoords[transcript][-1][1]
IDline = 'ID=UTR5:{0};Parent={1};gene_id={2}'.format(transcriptID, geneID, geneID)
f.write(('\t').join([chrm, 'UTR5', 'UTR5', str(UTRstart), str(UTRstop), '.', strand, '.', IDline]) + '\n')
for UTRexon in txUTRcoords[transcript]:
exoncounter +=1
exonstart = UTRexon[0]
exonstop = UTRexon[1]
IDline = 'ID=exon:{0}.utr5exon{1};Parent=UTR5:{2}'.format(transcriptID, exoncounter, transcriptID)
f.write(('\t').join([chrm, 'exon', 'exon', str(exonstart), str(exonstop), '.', strand, '.', IDline]) + '\n')
'''
return txUTRcoords, tx2gene, txchrmstrand
def getSequences(regioncoords, genomefasta, ens2short, tx2gene, txchrmstrand, outfasta):
#txUTRcoords = {} #{ENSMUST : [[utrexon1start, utrexon1stop], [utrexon2start, utrexon2stop]]}
#txchrmstrand = {} #{ENSMUST: [chrm, strand]}
#tx2gene = {} #{ENSMUST : ENSMUSG}
#ens2short for mm10 is Ensembl_to_genename.txt
print 'Indexing genome sequence...'
seq_dict = SeqIO.to_dict(SeqIO.parse(gzip.open(genomefasta), 'fasta'))
print 'Done indexing!'
seqs = {} #{txname : CDSseq}
e2sdict = {} #{ENSGene : shortname}
chrmswithoutseq = [] #Chromsome names that are in coords but that don't have a fasta entry in genomefasta
infh = open(ens2short, 'r')
for line in infh:
line = line.strip().split('\t')
if line[0].startswith('ENSMUSG'):
e2sdict[line[0]] = line[2]
infh.close()
for tx in regioncoords:
seq = ''
txname = tx
chrm = txchrmstrand[tx][0]
strand = txchrmstrand[tx][1]
#txname = ('_').join([tx.split('_')[0], tx.split('_')[1]])
#chrm = tx.split('_')[2]
#strand = tx.split('_')[3]
#Is this chromosome in genomefasta?
if chrm not in seq_dict:
if chrm not in chrmswithoutseq:
print 'WARNING: No entry for chromosome {0} in genomefasta.'.format(chrm)
chrmswithoutseq.append(chrm)
continue
for coords in regioncoords[tx]:
start = coords[0]
end = coords[1]
if strand == '+':
exonseq = seq_dict[chrm].seq[start-1:end].upper()
seq += exonseq
elif strand == '-':
exonseq = seq_dict[chrm].seq[start-1:end].reverse_complement().upper()
newseq = exonseq + seq
seq = newseq
genename = tx2gene[txname].split('.')[0]
if genename in e2sdict:
shortname = e2sdict[genename]
seqs[txname + '_' + genename + '_' + shortname] = str(seq)
else:
shortname = genename
seqs[txname + '_' + genename + '_' + shortname] = str(seq)
with open(outfasta, 'w') as f:
for seq in seqs:
f.write('>' + seq + '\n' + str(seqs[seq]) + '\n')
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--transcripts', type = str, help = 'List of ensembl transcript IDs.')
parser.add_argument('--gff', type = str, help = 'Genome annotation containing transcript IDs.')
parser.add_argument('--ens2short', type = str, help = 'File containing ensembl IDs to gene short name relations. Usually Ensembl_to_genename.txt')
parser.add_argument('--genomefasta', type = str, help = 'Genome sequence in fasta format.')
parser.add_argument('--outputgff', type = str, help = 'Output file of gene regions in gff format.')
parser.add_argument('--outputfasta', type = str, help = 'Output file of gene regions in fasta format.')
parser.add_argument('--region', type = str, choices = ['UTR5', 'CDS', 'UTR3'])
args = parser.parse_args()
txs = []
with open(args.transcripts, 'r') as f:
for line in f:
line = line.strip()
txs.append(line)
if args.region == 'CDS':
coords, tx2gene, txchrmstrand = getCDScoords(args.gff, args.ens2short, txs, args.outputgff)
getSequences(coords, args.genomefasta, args.ens2short, tx2gene, txchrmstrand, args.outputfasta)
elif args.region == 'UTR3':
coords, tx2gene, txchrmstrand = get3UTRcoords(args.gff, args.ens2short, txs, args.outputgff)
getSequences(coords, args.genomefasta, args.ens2short, tx2gene, txchrmstrand, args.outputfasta)
elif args.region == 'UTR5':
coords, tx2gene, txchrmstrand = get5UTRcoords(args.gff, args.ens2short, txs, args.outputgff)
getSequences(coords, args.genomefasta, args.ens2short, tx2gene, txchrmstrand, args.outputfasta)
| 38.181973 | 147 | 0.665405 | 2,760 | 22,451 | 5.372826 | 0.107609 | 0.024074 | 0.007081 | 0.011127 | 0.792029 | 0.771192 | 0.751838 | 0.740306 | 0.732214 | 0.706723 | 0 | 0.022843 | 0.177141 | 22,451 | 587 | 148 | 38.247019 | 0.779853 | 0.15202 | 0 | 0.681122 | 0 | 0.005102 | 0.091744 | 0.021271 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.017857 | null | null | 0.038265 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
dc0fff3fd8a148b2ef0409d43849e72b55221fc5 | 756 | py | Python | src/homicide_exploration/explore_helpers.py | ras9841/UP-STAT-2018 | cad06bfac3c12b4cb14c3b703e23c52cc391383a | [
"MIT"
] | null | null | null | src/homicide_exploration/explore_helpers.py | ras9841/UP-STAT-2018 | cad06bfac3c12b4cb14c3b703e23c52cc391383a | [
"MIT"
] | 1 | 2018-05-08T12:16:50.000Z | 2018-05-08T21:28:40.000Z | src/homicide_exploration/explore_helpers.py | ras9841/UP-STAT-2018 | cad06bfac3c12b4cb14c3b703e23c52cc391383a | [
"MIT"
] | null | null | null | import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
def yPerX_numeric(xlabel, x, ylabel, y):
    """Bar chart of the summed numeric values of y for each unique value of x."""
    uniqueX = x.unique()
    data = {ux: 0 for ux in uniqueX}
    for ux in uniqueX:
        # Sum y over the rows where x equals this category.
        xLocs = x.isin([ux])
        data[ux] += y[xLocs].sum()
    plt.figure()
    plt.bar(uniqueX, list(data.values()), edgecolor="k")
    plt.xlabel(xlabel)
    plt.ylabel(ylabel)
    plt.show()
def yPerX_cat(xlabel, x, ylabel, y):
    """Bar chart of the number of y entries (row count) for each unique value of x."""
    uniqueX = x.unique()
    data = {ux: 0 for ux in uniqueX}
    for ux in uniqueX:
        # Count the rows where x equals this category.
        xLocs = x.isin([ux])
        data[ux] += y[xLocs].size
    plt.figure()
    plt.bar(uniqueX, list(data.values()), edgecolor="k")
    plt.xlabel(xlabel)
    plt.ylabel(ylabel)
    plt.show()
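
# Minimal usage sketch (assumes a pandas DataFrame `df`; the column names
# "State" and "Victims" are purely illustrative):
#
#   yPerX_numeric("State", df["State"], "Total victims", df["Victims"])
#   yPerX_cat("State", df["State"], "Number of records", df["Victims"])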
| 25.2 | 56 | 0.583333 | 113 | 756 | 3.884956 | 0.318584 | 0.05467 | 0.063781 | 0.127563 | 0.792711 | 0.792711 | 0.792711 | 0.792711 | 0.792711 | 0.792711 | 0 | 0.003623 | 0.269841 | 756 | 29 | 57 | 26.068966 | 0.791667 | 0 | 0 | 0.740741 | 0 | 0 | 0.002646 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.074074 | false | 0 | 0.111111 | 0 | 0.185185 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
905a0d8dbe9570c99cc5ad4052e1f062cae4c2f5 | 426 | py | Python | tatsu/model.py | bookofproofs/TatSu | 501875416c8c802bb518f35f1ae08d9ebf437af2 | [
"BSD-2-Clause"
] | 259 | 2017-05-22T04:33:21.000Z | 2022-03-29T00:20:35.000Z | tatsu/model.py | bookofproofs/TatSu | 501875416c8c802bb518f35f1ae08d9ebf437af2 | [
"BSD-2-Clause"
] | 160 | 2017-05-30T01:28:58.000Z | 2022-03-31T02:45:52.000Z | tatsu/model.py | bookofproofs/TatSu | 501875416c8c802bb518f35f1ae08d9ebf437af2 | [
"BSD-2-Clause"
] | 53 | 2017-05-22T05:00:58.000Z | 2022-01-04T16:06:17.000Z | from __future__ import annotations
from tatsu.ast import AST # noqa: F401
from tatsu.objectmodel import Node # noqa: F401
from tatsu.objectmodel import Node as ParseModel # noqa: F401
from tatsu.walkers import NodeWalker # noqa: F401
from tatsu.walkers import DepthFirstWalker # noqa: F401
from tatsu.semantics import ModelBuilderSemantics # noqa: F401
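
# A minimal sketch of using the re-exported walker classes (not part of this
# module). TatSu dispatches NodeWalker.walk() to walk_<ClassName> methods; the
# node class name "Add" below is hypothetical and depends on your grammar:
#
#   class AddWalker(NodeWalker):
#       def walk_Add(self, node):
#           return self.walk(node.left) + self.walk(node.right)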
| 47.333333 | 64 | 0.673709 | 48 | 426 | 5.895833 | 0.354167 | 0.190813 | 0.212014 | 0.300353 | 0.480565 | 0.480565 | 0.268551 | 0 | 0 | 0 | 0 | 0.059016 | 0.284038 | 426 | 8 | 65 | 53.25 | 0.868852 | 0.152582 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
908dd7de2dc6338c0c4434138d8a9aef5427b801 | 16,112 | py | Python | tests/terraform/module_loading/test_registry.py | pmalkki/checkov | b6cdf386dd976fe27c16fed6d550756a678a5d7b | [
"Apache-2.0"
] | 1 | 2022-02-20T21:20:39.000Z | 2022-02-20T21:20:39.000Z | tests/terraform/module_loading/test_registry.py | pmalkki/checkov | b6cdf386dd976fe27c16fed6d550756a678a5d7b | [
"Apache-2.0"
] | 3 | 2022-03-07T20:37:31.000Z | 2022-03-21T20:20:14.000Z | tests/terraform/module_loading/test_registry.py | pmalkki/checkov | b6cdf386dd976fe27c16fed6d550756a678a5d7b | [
"Apache-2.0"
] | null | null | null | import os
import shutil
import unittest
from contextlib import ExitStack as does_not_raise
from pathlib import Path
from unittest import mock
import pytest
from checkov.common.util.consts import DEFAULT_EXTERNAL_MODULES_DIR
from checkov.terraform.module_loading.loaders.bitbucket_loader import BitbucketLoader
from checkov.terraform.module_loading.loaders.git_loader import GenericGitLoader
from checkov.terraform.module_loading.loaders.github_loader import GithubLoader
from checkov.terraform.module_loading.registry import ModuleLoaderRegistry
class TestModuleLoaderRegistry(unittest.TestCase):
def setUp(self) -> None:
self.current_dir = str(Path(__file__).parent / "tmp")
def tearDown(self) -> None:
if os.path.exists(self.current_dir):
shutil.rmtree(self.current_dir)
def test_load_terraform_registry(self):
registry = ModuleLoaderRegistry(True, DEFAULT_EXTERNAL_MODULES_DIR)
registry.root_dir = self.current_dir
source = "terraform-aws-modules/security-group/aws"
content = registry.load(current_dir=self.current_dir, source=source, source_version="~> 3.0")
assert content.loaded()
expected_content_path = os.path.join(
self.current_dir,
DEFAULT_EXTERNAL_MODULES_DIR,
"github.com/terraform-aws-modules/terraform-aws-security-group",
)
self.assertRegex(content.path(), f"^{expected_content_path}/v3.*")
def test_load_terraform_registry_check_cache(self):
registry = ModuleLoaderRegistry(download_external_modules=True)
registry.root_dir = self.current_dir
source1 = "git::https://github.com/bridgecrewio/checkov_not_working1.git"
registry.load(current_dir=self.current_dir, source=source1, source_version="latest")
self.assertIn(source1, registry.failed_urls_cache)
source2 = "git::https://github.com/bridgecrewio/checkov_not_working2.git"
registry.load(current_dir=self.current_dir, source=source2, source_version="latest")
self.assertIn(source1, registry.failed_urls_cache)
self.assertIn(source2, registry.failed_urls_cache)
@pytest.mark.parametrize(
"source, source_version, expected_content_path, expected_git_url, expected_dest_dir, expected_module_source, expected_inner_module",
[
(
"terraform-aws-modules/security-group/aws",
"4.0.0",
"github.com/terraform-aws-modules/terraform-aws-security-group/v4.0.0",
"https://github.com/terraform-aws-modules/terraform-aws-security-group?ref=v4.0.0",
"github.com/terraform-aws-modules/terraform-aws-security-group/v4.0.0",
"git::https://github.com/terraform-aws-modules/terraform-aws-security-group?ref=v4.0.0",
"",
),
(
"terraform-aws-modules/security-group/aws//modules/http-80",
"4.0.0",
"github.com/terraform-aws-modules/terraform-aws-security-group/v4.0.0/modules/http-80",
"https://github.com/terraform-aws-modules/terraform-aws-security-group?ref=v4.0.0",
"github.com/terraform-aws-modules/terraform-aws-security-group/v4.0.0",
"git::https://github.com/terraform-aws-modules/terraform-aws-security-group?ref=v4.0.0",
"modules/http-80",
),
],
ids=["module_with_version", "inner_module_with_version"],
)
@mock.patch("checkov.terraform.module_loading.loaders.git_loader.GitGetter", autospec=True)
def test_load_terraform_registry(
git_getter,
source,
source_version,
expected_content_path,
expected_git_url,
expected_dest_dir,
expected_module_source,
expected_inner_module,
):
# given
current_dir = Path(__file__).parent / "tmp"
registry = ModuleLoaderRegistry(download_external_modules=True)
# when
content = registry.load(current_dir=str(current_dir), source=source, source_version=source_version)
# then
assert content.loaded()
assert content.path() == str(Path(DEFAULT_EXTERNAL_MODULES_DIR) / expected_content_path)
git_getter.assert_called_once_with(expected_git_url, mock.ANY)
git_loader = next(loader for loader in registry.loaders if isinstance(loader, GenericGitLoader))
assert git_loader.dest_dir == str(Path(DEFAULT_EXTERNAL_MODULES_DIR) / expected_dest_dir)
assert git_loader.module_source == expected_module_source
assert git_loader.inner_module == expected_inner_module
@pytest.mark.parametrize(
"source, expected_content_path, expected_git_url, expected_dest_dir, expected_module_source, expected_inner_module",
[
(
"git::https://example.com/network.git",
"example.com/network/HEAD",
"https://example.com/network.git",
"example.com/network/HEAD",
"git::https://example.com/network.git",
"",
),
(
"git::https://example.com/network.git?ref=v1.2.0",
"example.com/network/v1.2.0",
"https://example.com/network.git?ref=v1.2.0",
"example.com/network/v1.2.0",
"git::https://example.com/network.git?ref=v1.2.0",
"",
),
(
"git::https://example.com/network.git//modules/vpc",
"example.com/network/HEAD/modules/vpc",
"https://example.com/network",
"example.com/network/HEAD",
"git::https://example.com/network",
"modules/vpc",
),
(
"git::https://example.com/network.git//modules/vpc?ref=v1.2.0",
"example.com/network/v1.2.0/modules/vpc",
"https://example.com/network?ref=v1.2.0",
"example.com/network/v1.2.0",
"git::https://example.com/network?ref=v1.2.0",
"modules/vpc",
),
(
"git::ssh://username@example.com/network.git",
"example.com/network/HEAD",
"ssh://username@example.com/network.git",
"example.com/network/HEAD",
"git::ssh://username@example.com/network.git",
"",
),
(
"git::ssh://username@example.com/network.git?ref=v1.2.0",
"example.com/network/v1.2.0",
"ssh://username@example.com/network.git?ref=v1.2.0",
"example.com/network/v1.2.0",
"git::ssh://username@example.com/network.git?ref=v1.2.0",
"",
),
(
"git::username@example.com/network.git",
"example.com/network/HEAD",
"username@example.com/network.git",
"example.com/network/HEAD",
"git::username@example.com/network.git",
"",
),
(
"git::username@example.com/network.git?ref=v1.2.0",
"example.com/network/v1.2.0",
"username@example.com/network.git?ref=v1.2.0",
"example.com/network/v1.2.0",
"git::username@example.com/network.git?ref=v1.2.0",
"",
),
(
"git::ssh://git@github.com/bridgecrewio/terragoat//modules/s3-encrypted",
"git@github.com/bridgecrewio/terragoat/HEAD/modules/s3-encrypted",
"ssh://git@github.com/bridgecrewio/terragoat",
"git@github.com/bridgecrewio/terragoat/HEAD",
"git::ssh://git@github.com/bridgecrewio/terragoat",
"modules/s3-encrypted",
),
],
ids=[
"module",
"module_with_version",
"inner_module",
"inner_module_with_version",
"module_over_ssh",
"module_over_ssh_with_version",
"module_over_ssh_without_protocol",
"module_over_ssh_without_protocol_with_version",
"git_username",
],
)
@mock.patch("checkov.terraform.module_loading.loaders.git_loader.GitGetter", autospec=True)
def test_load_generic_git(
git_getter,
source,
expected_content_path,
expected_git_url,
expected_dest_dir,
expected_module_source,
expected_inner_module,
):
# given
current_dir = Path(__file__).parent / "tmp"
registry = ModuleLoaderRegistry(download_external_modules=True)
# when
content = registry.load(current_dir=str(current_dir), source=source, source_version="latest")
# then
assert content.loaded()
assert content.path() == str(Path(DEFAULT_EXTERNAL_MODULES_DIR) / expected_content_path)
git_getter.assert_called_once_with(expected_git_url, mock.ANY)
git_loader = next(loader for loader in registry.loaders if isinstance(loader, GenericGitLoader))
assert git_loader.dest_dir == str(Path(DEFAULT_EXTERNAL_MODULES_DIR) / expected_dest_dir)
assert git_loader.module_source == expected_module_source
assert git_loader.inner_module == expected_inner_module
@pytest.mark.parametrize(
"source, expected_content_path, expected_git_url, expected_dest_dir, expected_module_source, expected_inner_module",
[
(
"github.com/terraform-aws-modules/terraform-aws-security-group",
"github.com/terraform-aws-modules/terraform-aws-security-group/HEAD",
"https://github.com/terraform-aws-modules/terraform-aws-security-group",
"github.com/terraform-aws-modules/terraform-aws-security-group/HEAD",
"git::https://github.com/terraform-aws-modules/terraform-aws-security-group",
"",
),
(
"github.com/terraform-aws-modules/terraform-aws-security-group?ref=v4.0.0",
"github.com/terraform-aws-modules/terraform-aws-security-group/v4.0.0",
"https://github.com/terraform-aws-modules/terraform-aws-security-group?ref=v4.0.0",
"github.com/terraform-aws-modules/terraform-aws-security-group/v4.0.0",
"git::https://github.com/terraform-aws-modules/terraform-aws-security-group?ref=v4.0.0",
"",
),
(
"github.com/terraform-aws-modules/terraform-aws-security-group//modules/http-80",
"github.com/terraform-aws-modules/terraform-aws-security-group/HEAD/modules/http-80",
"https://github.com/terraform-aws-modules/terraform-aws-security-group",
"github.com/terraform-aws-modules/terraform-aws-security-group/HEAD",
"git::https://github.com/terraform-aws-modules/terraform-aws-security-group",
"modules/http-80",
),
(
"github.com/terraform-aws-modules/terraform-aws-security-group//modules/http-80?ref=v4.0.0",
"github.com/terraform-aws-modules/terraform-aws-security-group/v4.0.0/modules/http-80",
"https://github.com/terraform-aws-modules/terraform-aws-security-group?ref=v4.0.0",
"github.com/terraform-aws-modules/terraform-aws-security-group/v4.0.0",
"git::https://github.com/terraform-aws-modules/terraform-aws-security-group?ref=v4.0.0",
"modules/http-80",
),
],
ids=["module", "module_with_version", "inner_module", "inner_module_with_version"],
)
@mock.patch("checkov.terraform.module_loading.loaders.git_loader.GitGetter", autospec=True)
def test_load_github(
git_getter,
source,
expected_content_path,
expected_git_url,
expected_dest_dir,
expected_module_source,
expected_inner_module,
):
# given
current_dir = Path(__file__).parent / "tmp"
registry = ModuleLoaderRegistry(download_external_modules=True)
# when
content = registry.load(current_dir=str(current_dir), source=source, source_version="latest")
# then
assert content.loaded()
assert content.path() == str(Path(DEFAULT_EXTERNAL_MODULES_DIR) / expected_content_path)
git_getter.assert_called_once_with(expected_git_url, mock.ANY)
git_loader = next(loader for loader in registry.loaders if isinstance(loader, GithubLoader))
assert git_loader.dest_dir == str(Path(DEFAULT_EXTERNAL_MODULES_DIR) / expected_dest_dir)
assert git_loader.module_source == expected_module_source
assert git_loader.inner_module == expected_inner_module
# TODO: create a dummy repo in bitbucket for more consistent tests
@pytest.mark.parametrize(
"source, expected_content_path, expected_git_url, expected_dest_dir, expected_module_source, expected_inner_module",
[
(
"bitbucket.org/nuarch/terraform-aws-rancher-server-ha",
"bitbucket.org/nuarch/terraform-aws-rancher-server-ha/HEAD",
"https://bitbucket.org/nuarch/terraform-aws-rancher-server-ha",
"bitbucket.org/nuarch/terraform-aws-rancher-server-ha/HEAD",
"git::https://bitbucket.org/nuarch/terraform-aws-rancher-server-ha",
"",
),
(
"bitbucket.org/nuarch/terraform-aws-rancher-server-ha?ref=v0.1.0",
"bitbucket.org/nuarch/terraform-aws-rancher-server-ha/v0.1.0",
"https://bitbucket.org/nuarch/terraform-aws-rancher-server-ha?ref=v0.1.0",
"bitbucket.org/nuarch/terraform-aws-rancher-server-ha/v0.1.0",
"git::https://bitbucket.org/nuarch/terraform-aws-rancher-server-ha?ref=v0.1.0",
"",
),
(
"bitbucket.org/nuarch/terraform-aws-rancher-server-ha//rancher2-ha",
"bitbucket.org/nuarch/terraform-aws-rancher-server-ha/HEAD/rancher2-ha",
"https://bitbucket.org/nuarch/terraform-aws-rancher-server-ha",
"bitbucket.org/nuarch/terraform-aws-rancher-server-ha/HEAD",
"git::https://bitbucket.org/nuarch/terraform-aws-rancher-server-ha",
"rancher2-ha",
),
(
"bitbucket.org/nuarch/terraform-aws-rancher-server-ha//rancher2-ha?ref=v0.1.0",
"bitbucket.org/nuarch/terraform-aws-rancher-server-ha/v0.1.0/rancher2-ha",
"https://bitbucket.org/nuarch/terraform-aws-rancher-server-ha?ref=v0.1.0",
"bitbucket.org/nuarch/terraform-aws-rancher-server-ha/v0.1.0",
"git::https://bitbucket.org/nuarch/terraform-aws-rancher-server-ha?ref=v0.1.0",
"rancher2-ha",
),
],
ids=["module", "module_with_version", "inner_module", "inner_module_with_version"],
)
@mock.patch("checkov.terraform.module_loading.loaders.git_loader.GitGetter", autospec=True)
def test_load_bitbucket(
git_getter,
source,
expected_content_path,
expected_git_url,
expected_dest_dir,
expected_module_source,
expected_inner_module,
):
# given
current_dir = Path(__file__).parent / "tmp"
registry = ModuleLoaderRegistry(download_external_modules=True)
# when
content = registry.load(current_dir=str(current_dir), source=source, source_version="latest")
# then
assert content.loaded()
assert content.path() == str(Path(DEFAULT_EXTERNAL_MODULES_DIR) / expected_content_path)
git_getter.assert_called_once_with(expected_git_url, mock.ANY)
git_loader = next(loader for loader in registry.loaders if isinstance(loader, BitbucketLoader))
assert git_loader.dest_dir == str(Path(DEFAULT_EXTERNAL_MODULES_DIR) / expected_dest_dir)
assert git_loader.module_source == expected_module_source
assert git_loader.inner_module == expected_inner_module
@pytest.mark.parametrize(
"source, expected_content_path, expected_exception",
[
("./loaders/resources", "loaders/resources", does_not_raise()),
("../module_loading/loaders/resources", "loaders/resources", does_not_raise()),
("./does_not_exist", "", pytest.raises(FileNotFoundError)),
],
ids=["current_dir", "parent_dir", "not_exists"],
)
@mock.patch("checkov.terraform.module_loading.loaders.git_loader.GitGetter", autospec=True)
def test_load_local_path(git_getter, source, expected_content_path, expected_exception):
# given
current_dir = Path(__file__).parent
registry = ModuleLoaderRegistry()
# when
with expected_exception:
content = registry.load(current_dir=str(current_dir), source=source, source_version="latest")
# then
assert content.loaded()
assert content.path() == str(current_dir / expected_content_path)
git_getter.assert_not_called()
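
# For reference, a minimal sketch of driving the registry the same way these
# tests do (the path and module source below are illustrative):
#
#   registry = ModuleLoaderRegistry(download_external_modules=True)
#   content = registry.load(current_dir="/tmp", source="git::https://example.com/network.git", source_version="latest")
#   if content.loaded():
#       print(content.path())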
| 42.511873 | 136 | 0.669998 | 1,976 | 16,112 | 5.26417 | 0.075405 | 0.093444 | 0.065372 | 0.058546 | 0.88233 | 0.860219 | 0.799654 | 0.768506 | 0.757066 | 0.720246 | 0 | 0.014597 | 0.196375 | 16,112 | 378 | 137 | 42.624339 | 0.78877 | 0.008875 | 0 | 0.613497 | 0 | 0.110429 | 0.43493 | 0.26166 | 0 | 0 | 0 | 0.002646 | 0.095092 | 1 | 0.027607 | false | 0 | 0.03681 | 0 | 0.067485 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
90d1b1fe6209747c667789edc824243d9fcad89a | 25 | py | Python | robitcontrol/views/__init__.py | ToxicFrazzles/django-robitcontrol | e2e2ec287fb938354df54b09ed8e1bb061268208 | [
"MIT"
] | null | null | null | robitcontrol/views/__init__.py | ToxicFrazzles/django-robitcontrol | e2e2ec287fb938354df54b09ed8e1bb061268208 | [
"MIT"
] | null | null | null | robitcontrol/views/__init__.py | ToxicFrazzles/django-robitcontrol | e2e2ec287fb938354df54b09ed8e1bb061268208 | [
"MIT"
] | null | null | null | from .index import Index
| 12.5 | 24 | 0.8 | 4 | 25 | 5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 25 | 1 | 25 | 25 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
90de381ccd6da9221cde2b9681efc4c7ad019152 | 138 | py | Python | language_demos/input_output.py | t4d-classes/python_03222021_afternoon | c954702cf55c0f9d919ac32404a29830601883ba | [
"MIT"
] | null | null | null | language_demos/input_output.py | t4d-classes/python_03222021_afternoon | c954702cf55c0f9d919ac32404a29830601883ba | [
"MIT"
] | null | null | null | language_demos/input_output.py | t4d-classes/python_03222021_afternoon | c954702cf55c0f9d919ac32404a29830601883ba | [
"MIT"
] | null | null | null |
first_name = input("Please enter your name: ")
print(f"Your first name is: {first_name}")
print("Your first name is: " + first_name)
| 15.333333 | 46 | 0.688406 | 22 | 138 | 4.181818 | 0.409091 | 0.48913 | 0.282609 | 0.326087 | 0.521739 | 0.521739 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173913 | 138 | 8 | 47 | 17.25 | 0.807018 | 0 | 0 | 0 | 0 | 0 | 0.562963 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.666667 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
90fec3d65f8dcaed10674913c78f4670544cba11 | 3,932 | py | Python | cfgov/privacy/tests/test_views.py | lfatty/consumerfinance.gov | 4716b298dd92e21b280f10f113ccb39dbbaf3561 | [
"CC0-1.0"
] | null | null | null | cfgov/privacy/tests/test_views.py | lfatty/consumerfinance.gov | 4716b298dd92e21b280f10f113ccb39dbbaf3561 | [
"CC0-1.0"
] | null | null | null | cfgov/privacy/tests/test_views.py | lfatty/consumerfinance.gov | 4716b298dd92e21b280f10f113ccb39dbbaf3561 | [
"CC0-1.0"
] | null | null | null | from django.core import mail
from django.test import TestCase, override_settings
from django.urls import reverse
@override_settings(
FLAGS={'PRIVACY_FORMS': [('boolean', True)]},
PRIVACY_EMAIL_TARGET='email@foia.gov',
)
class TestRecordsAccessForm(TestCase):
def test_get_the_form(self):
response = self.client.get(reverse('privacy:records_access'))
self.assertContains(
response,
'Request for individual access to records protected under the Privacy Act' # noqa: E501
)
def test_invalid_form_post_does_not_send_email(self):
self.client.post(
reverse('privacy:records_access'),
{
'description': '',
'system_of_record': '',
'requestor_name': '',
'requestor_email': '',
'contact_channel': 'mail',
},
)
self.assertEqual(len(mail.outbox), 0)
def test_valid_form_post_sends_email_and_redirects(self):
response = self.client.post(
reverse('privacy:records_access'),
{
'description': 'This is a description of the desired records',
'requestor_name': 'Example Person',
'requestor_email': 'person@example.com',
'contact_channel': 'email',
'full_name': 'Example Q. Person',
'consent': True,
'supporting_documentation': []
},
)
email = mail.outbox[0]
self.assertEqual(
email.subject,
'Records request from consumerfinance.gov: Example Person',
)
self.assertIn('Example Q. Person', email.body)
self.assertEqual(email.to, ['email@foia.gov'])
self.assertEqual(email.reply_to, ['person@example.com'])
self.assertRedirects(response, reverse('privacy:form_submitted'))
@override_settings(
FLAGS={'PRIVACY_FORMS': [('boolean', True)]},
PRIVACY_EMAIL_TARGET='email@foia.gov',
)
class TestDisclosureConsentForm(TestCase):
def test_get_the_form(self):
response = self.client.get(reverse('privacy:disclosure_consent'))
self.assertContains(
response,
'Consent for disclosure of records protected under the Privacy Act'
)
def test_invalid_form_post_does_not_send_email(self):
self.client.post(
reverse('privacy:disclosure_consent'),
{
'description': '',
'system_of_record': '',
'requestor_name': '',
'requestor_email': '',
'recipient_name': 'Recipient Person',
'recipient_email': 'recipient@example.com',
'contact_channel': 'mail',
},
)
self.assertEqual(len(mail.outbox), 0)
def test_valid_form_post_sends_email_and_redirects(self):
response = self.client.post(
reverse('privacy:disclosure_consent'),
{
'description': 'This is a description of the desired records',
'requestor_name': 'Example Person',
'requestor_email': 'person@example.com',
'recipient_name': 'Recipient Person',
'recipient_email': 'recipient@example.com',
'contact_channel': 'email',
'full_name': 'Example Q. Person',
'consent': True,
'supporting_documentation': []
},
)
email = mail.outbox[0]
self.assertEqual(
email.subject,
'Disclosure request from consumerfinance.gov: Example Person',
)
self.assertIn('Recipient Person', email.body)
self.assertEqual(email.to, ['email@foia.gov'])
self.assertEqual(email.reply_to, ['person@example.com'])
self.assertRedirects(response, reverse('privacy:form_submitted'))
| 36.407407 | 100 | 0.578332 | 375 | 3,932 | 5.858667 | 0.224 | 0.050979 | 0.05462 | 0.040055 | 0.856623 | 0.856623 | 0.825671 | 0.825671 | 0.701866 | 0.701866 | 0 | 0.002567 | 0.30646 | 3,932 | 107 | 101 | 36.747664 | 0.80308 | 0.002543 | 0 | 0.680412 | 0 | 0 | 0.324745 | 0.070918 | 0 | 0 | 0 | 0 | 0.14433 | 1 | 0.061856 | false | 0 | 0.030928 | 0 | 0.113402 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
2975d2422034af9a37099b11f9f759b94aa2cfbc | 108 | py | Python | app/main/__init__.py | edwinkipchumba/Blog-website | 4a287e5f1f9618f5cd9108c953c470f358d7355a | [
"MIT"
] | null | null | null | app/main/__init__.py | edwinkipchumba/Blog-website | 4a287e5f1f9618f5cd9108c953c470f358d7355a | [
"MIT"
] | null | null | null | app/main/__init__.py | edwinkipchumba/Blog-website | 4a287e5f1f9618f5cd9108c953c470f358d7355a | [
"MIT"
] | null | null | null | from flask import Blueprint
main = Blueprint('main', __name__)

# Import the view and error-handler modules after `main` exists so their route
# registrations attach to this blueprint (and to avoid a circular import).
from . import views, error
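
# Typical registration in the application factory (a sketch; `create_app` and
# the `app` object are assumed here, not defined in this package):
#
#   from .main import main as main_blueprint
#   app.register_blueprint(main_blueprint)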
| 15.428571 | 33 | 0.768519 | 14 | 108 | 5.642857 | 0.642857 | 0.329114 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 108 | 6 | 34 | 18 | 0.858696 | 0.138889 | 0 | 0 | 0 | 0 | 0.043956 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0.666667 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
4661edc82bcf53c96e7c94c10ed3ed97092033d9 | 27 | py | Python | install/__init__.py | lucasvg/Satyrus3-FinalProject-EspTopsOTM | 024785752abdc46e3463d8c94df7c3da873c354d | [
"MIT"
] | null | null | null | install/__init__.py | lucasvg/Satyrus3-FinalProject-EspTopsOTM | 024785752abdc46e3463d8c94df7c3da873c354d | [
"MIT"
] | null | null | null | install/__init__.py | lucasvg/Satyrus3-FinalProject-EspTopsOTM | 024785752abdc46e3463d8c94df7c3da873c354d | [
"MIT"
] | null | null | null | from .main import installer | 27 | 27 | 0.851852 | 4 | 27 | 5.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 27 | 1 | 27 | 27 | 0.958333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
468696a816961bd4e1c32ac7dd2e50119c438d80 | 32 | py | Python | kaysync/algos/random.py | AmrMKayid/sync1 | 4e68e22a165d18cab8cf1e0158e4c4bf6cf400e8 | [
"MIT"
] | null | null | null | kaysync/algos/random.py | AmrMKayid/sync1 | 4e68e22a165d18cab8cf1e0158e4c4bf6cf400e8 | [
"MIT"
] | null | null | null | kaysync/algos/random.py | AmrMKayid/sync1 | 4e68e22a165d18cab8cf1e0158e4c4bf6cf400e8 | [
"MIT"
] | null | null | null | print("Better Random Agent! :D") | 32 | 32 | 0.71875 | 5 | 32 | 4.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.09375 | 32 | 1 | 32 | 32 | 0.793103 | 0 | 0 | 0 | 0 | 0 | 0.69697 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
469db8dca40d6c1094bde4bc81dd368b742a7c60 | 155 | py | Python | icekit/utils/strings.py | ic-labs/django-icekit | c507ea5b1864303732c53ad7c5800571fca5fa94 | [
"MIT"
] | 52 | 2016-09-13T03:50:58.000Z | 2022-02-23T16:25:08.000Z | icekit/utils/strings.py | ic-labs/django-icekit | c507ea5b1864303732c53ad7c5800571fca5fa94 | [
"MIT"
] | 304 | 2016-08-11T14:17:30.000Z | 2020-07-22T13:35:18.000Z | icekit/utils/strings.py | ic-labs/django-icekit | c507ea5b1864303732c53ad7c5800571fca5fa94 | [
"MIT"
] | 12 | 2016-09-21T18:46:35.000Z | 2021-02-15T19:37:50.000Z | def is_empty(value):
"""
Return `True` if the given value is `None` or empty after `strip()`
"""
return value is None or not value.strip()
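
# Usage sketch:
#   is_empty(None)    # True
#   is_empty("   ")   # True
#   is_empty(" x ")   # False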
| 25.833333 | 71 | 0.619355 | 24 | 155 | 3.958333 | 0.583333 | 0.147368 | 0.231579 | 0.273684 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.251613 | 155 | 5 | 72 | 31 | 0.818966 | 0.432258 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
310ebeed67c25d69e2b28a69211e2c4c7778abff | 66 | py | Python | test_hello.py | ivtransgruasortiz/ManageAccountsAndPasswords | 599a28b4fb824924c66e169557808f31774a010e | [
"MIT"
] | null | null | null | test_hello.py | ivtransgruasortiz/ManageAccountsAndPasswords | 599a28b4fb824924c66e169557808f31774a010e | [
"MIT"
] | null | null | null | test_hello.py | ivtransgruasortiz/ManageAccountsAndPasswords | 599a28b4fb824924c66e169557808f31774a010e | [
"MIT"
] | null | null | null | from hello import add
def test_add():
    assert add(2, 3) == 5
| 11 | 25 | 0.621212 | 12 | 66 | 3.333333 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.061224 | 0.257576 | 66 | 5 | 26 | 13.2 | 0.755102 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3152021b52b75e9749ec2fd7d6b51040ddeab4a0 | 15,227 | py | Python | skule_vote/backend/test_admin.py | albogdan/skvtemp | d792c27184ab5dfcea660f1e293048aa929f3493 | [
"MIT"
] | null | null | null | skule_vote/backend/test_admin.py | albogdan/skvtemp | d792c27184ab5dfcea660f1e293048aa929f3493 | [
"MIT"
] | null | null | null | skule_vote/backend/test_admin.py | albogdan/skvtemp | d792c27184ab5dfcea660f1e293048aa929f3493 | [
"MIT"
] | null | null | null | from datetime import timedelta
import random
from unittest.mock import patch
from django.conf import settings
from django.test import TestCase
from django.urls import reverse
from rest_framework import status
from backend.models import ElectionSession, Election, Candidate, Eligibility
from skule_vote.tests import SetupMixin
class ElectionSessionAdminTestCase(SetupMixin, TestCase):
"""
Tests the changelist view for ElectionSessions, which shows the standard fields for
ElectionSessions and a few custom fields.
"""
def setUp(self):
super().setUp()
self._set_election_session_data()
self._login_admin()
def test_election_session_name_start_and_end_time_can_be_changed_before_session_start(
self,
):
# Get an ElectionSession that has not started
self._set_election_session_data(start_time_offset_days=1)
election_session = self._create_election_session()
list_view = reverse(
"admin:backend_electionsession_change",
kwargs={"object_id": election_session.id},
)
new_start = self._now() + timedelta(days=5)
new_end = self._now() + timedelta(days=10)
new_data = {
"election_session_name": "ElectionSession2021Part2",
"start_time_0": new_start.date(),
"start_time_1": new_start.time(),
"end_time_0": new_end.date(),
"end_time_1": new_end.time(),
}
response = self.client.post(list_view, data=new_data)
self.assertRedirects(
response, reverse("admin:backend_electionsession_changelist")
)
election_session.refresh_from_db()
self.assertEqual(
election_session.election_session_name, new_data["election_session_name"]
)
self.assertEqual(election_session.start_time, new_start)
self.assertEqual(election_session.end_time, new_end)
def test_end_time_can_be_changed_after_session_start(
self,
):
# Get an ElectionSession that has started
election_session = self._create_election_session()
list_view = reverse(
"admin:backend_electionsession_change",
kwargs={"object_id": election_session.id},
)
new_end = self._now() + timedelta(days=10)
new_data = {
"election_session_name": self.data["election_session_name"],
"start_time_0": self.data["start_time"].date(),
"start_time_1": self.data["start_time"].time(),
"end_time_0": new_end.date(),
"end_time_1": new_end.time(),
}
response = self.client.post(list_view, data=new_data)
self.assertRedirects(
response, reverse("admin:backend_electionsession_changelist")
)
election_session.refresh_from_db()
self.assertEqual(
election_session.election_session_name, self.data["election_session_name"]
)
self.assertEqual(
election_session.start_time.astimezone(settings.TZ_INFO),
self.data["start_time"],
)
self.assertEqual(
election_session.end_time.astimezone(settings.TZ_INFO), new_end
)
def test_start_time_and_name_cannot_be_changed_after_session_start(
self,
):
# Get an ElectionSession that has started
election_session = self._create_election_session()
list_view = reverse(
"admin:backend_electionsession_change",
kwargs={"object_id": election_session.id},
)
# Try changing the start_time
new_start = self._now() + timedelta(days=1)
new_data = {
"election_session_name": self.data["election_session_name"],
"start_time_0": new_start.date(),
"start_time_1": new_start.time(),
"end_time_0": self.data["end_time"].date(),
"end_time_1": self.data["end_time"].time(),
}
response = self.client.post(list_view, data=new_data)
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertContains(
response,
f"start_time cannot be changed once the "
f"election session has begun. Revert changes, or leave and return to this page to "
f"reset all fields.",
)
# Try changing the ElectionSessionName
new_data = {
"election_session_name": "NewElectionSessionName2021",
"start_time_0": self.data["start_time"].date(),
"start_time_1": self.data["start_time"].time(),
"end_time_0": self.data["end_time"].date(),
"end_time_1": self.data["end_time"].time(),
}
response = self.client.post(list_view, data=new_data)
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertContains(
response,
f"election_session_name cannot be changed once the "
f"election session has begun. Revert changes, or leave and return to this page to "
f"reset all fields.",
)
election_session.refresh_from_db()
self.assertEqual(
election_session.election_session_name, self.data["election_session_name"]
)
self.assertEqual(
election_session.start_time.astimezone(settings.TZ_INFO),
self.data["start_time"],
)
self.assertEqual(
election_session.end_time.astimezone(settings.TZ_INFO),
self.data["end_time"],
)
def test_csv_uploads_can_be_changed_before_session_start(
self,
):
# Get an ElectionSession that has not started
self._set_election_session_data(start_time_offset_days=1)
election_session = self._create_election_session()
list_view = reverse(
"admin:backend_electionsession_change",
kwargs={"object_id": election_session.id},
)
files = self._build_admin_csv_files()
new_data = {
"election_session_name": self.data["election_session_name"],
"start_time_0": self.data["start_time"].date(),
"start_time_1": self.data["start_time"].time(),
"end_time_0": self.data["end_time"].date(),
"end_time_1": self.data["end_time"].time(),
"upload_elections": files["upload_elections"],
"upload_candidates": files["upload_candidates"],
"upload_eligibilities": files["upload_eligibilities"],
}
response = self.client.post(list_view, data=new_data)
self.assertRedirects(
response, reverse("admin:backend_electionsession_changelist")
)
election_session.refresh_from_db()
self.assertEqual(
election_session.election_session_name, new_data["election_session_name"]
)
self.assertEqual(ElectionSession.objects.count(), 1)
self.assertEqual(
Election.objects.count(), len(self.body_definitions["elections"])
)
self.assertEqual(
Candidate.objects.count(),
len(self.body_definitions["candidates"])
+ Election.objects.count(), # Factor in RON Candidate
)
self.assertEqual(
Eligibility.objects.count(), len(self.body_definitions["eligibilities"])
)
def test_csv_uploads_cannot_be_changed_after_session_start(
self,
):
# Get an ElectionSession that has started
self._set_election_session_data(start_time_offset_days=-1)
election_session = self._create_election_session()
list_view = reverse(
"admin:backend_electionsession_change",
kwargs={"object_id": election_session.id},
)
files = self._build_admin_csv_files()
new_data = {
"election_session_name": self.data["election_session_name"],
"start_time_0": self.data["start_time"].date(),
"start_time_1": self.data["start_time"].time(),
"end_time_0": self.data["end_time"].date(),
"end_time_1": self.data["end_time"].time(),
"upload_elections": files["upload_elections"],
"upload_candidates": files["upload_candidates"],
"upload_eligibilities": files["upload_eligibilities"],
}
response = self.client.post(list_view, data=new_data)
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertContains(
response,
f"upload_candidates, upload_elections, upload_eligibilities cannot be changed once the "
f"election session has begun. Revert changes, or leave and return to this page to "
f"reset all fields.",
)
election_session.refresh_from_db()
self.assertEqual(ElectionSession.objects.count(), 1)
self.assertEqual(Election.objects.count(), 0)
self.assertEqual(Candidate.objects.count(), 0)
self.assertEqual(Eligibility.objects.count(), 0)
def test_uploading_new_csvs_removes_the_old_objects(self):
# Create an ElectionSession that has not started
self._set_election_session_data(start_time_offset_days=1)
form = self._build_election_session_form(files=self._build_admin_csv_files())
self.assertTrue(form.is_valid())
election_session = form.save()
list_view = reverse(
"admin:backend_electionsession_change",
kwargs={"object_id": election_session.id},
)
modified_body_definitions = {
"elections": [
[
"2nd Year EngSci Officer",
"2",
"Officer",
],
],
"candidates": [
[
"Bobby Draper",
"2nd Year EngSci Officer",
"Lorem ipsum dolor sit amet, consectetur adipiscing elit.",
],
],
"eligibilities": [
[
"2nd Year EngSci Officer",
"0",
"0",
"0",
"0",
"0",
"1",
"0",
"0",
"0",
"0",
"0",
"1",
"0",
"0",
"0",
"Full and Part Time",
],
],
}
modified_csv_files = self._build_admin_csv_files(body=modified_body_definitions)
new_data = {
"election_session_name": self.data["election_session_name"],
"start_time_0": self.data["start_time"].date(),
"start_time_1": self.data["start_time"].time(),
"end_time_0": self.data["end_time"].date(),
"end_time_1": self.data["end_time"].time(),
"upload_elections": modified_csv_files["upload_elections"],
"upload_candidates": modified_csv_files["upload_candidates"],
"upload_eligibilities": modified_csv_files["upload_eligibilities"],
}
response = self.client.post(list_view, data=new_data)
self.assertRedirects(
response, reverse("admin:backend_electionsession_changelist")
)
election_session.refresh_from_db()
self.assertEqual(ElectionSession.objects.count(), 1)
self.assertEqual(ElectionSession.objects.all()[0], election_session)
self.assertEqual(Election.objects.count(), 1)
self.assertEqual(
Candidate.objects.count(), 1 + Election.objects.count()
) # Factor in RON Candidate
self.assertEqual(Eligibility.objects.count(), 1)
class CandidateAdminTestCase(SetupMixin, TestCase):
"""
Tests the changelist view for Candidates.
"""
def setUp(self):
super().setUp()
self._set_election_session_data()
election_session = self._create_election_session(self.data)
self.setUpElections(election_session)
self._login_admin()
def test_ron_candidates_do_not_appear_in_changelist_in_production_mode(self):
self.changelist_view = reverse("admin:backend_candidate_changelist")
response = self.client.get(self.changelist_view)
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertNotContains(response, "Reopen Nominations")
# setUpElections() function doesn't create any Candidates.
self.assertContains(response, "0 candidates")
@patch("django.conf.settings.DEBUG")
def test_ron_candidates_appear_in_changelist_in_debug_mode(self, mock_debug):
mock_debug.return_value = True
self.changelist_view = reverse("admin:backend_candidate_changelist")
response = self.client.get(self.changelist_view)
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertContains(response, "Reopen Nominations")
# The only Candidates that were created were the RON ones.
self.assertContains(response, f"0 of {Candidate.objects.count()} selected")
def test_delete_button_does_not_appear_for_ron_candidates_in_production_mode(self):
# Create random Candidates for each Election that are not RON
for election in Election.objects.all():
data = {
"name": f"Candidate{str(random.randint(100, 1000))}",
"election": election,
"statement": "Insert voter statement here.",
}
candidate = Candidate(**data)
candidate.save()
for candidate in Candidate.objects.all():
change_view = reverse(
"admin:backend_candidate_change", kwargs={"object_id": candidate.id}
)
response = self.client.get(change_view)
if candidate.name == "Reopen Nominations":
self.assertRedirects(response, reverse("admin:index"))
else:
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertContains(response, "Delete")
self.assertNotContains(response, "Reopen Nominations")
@patch("django.conf.settings.DEBUG")
def test_delete_button_appears_for_ron_candidates_in_debug_mode(self, mock_debug):
mock_debug.return_value = True
# Create random Candidates for each Election that are not RON
for election in Election.objects.all():
data = {
"name": f"Candidate{str(random.randint(100, 1000))}",
"election": election,
"statement": "Insert voter statement here.",
}
candidate = Candidate(**data)
candidate.save()
for candidate in Candidate.objects.all():
change_view = reverse(
"admin:backend_candidate_change", kwargs={"object_id": candidate.id}
)
response = self.client.get(change_view)
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertContains(response, "Delete")
| 36.780193 | 100 | 0.614172 | 1,626 | 15,227 | 5.441574 | 0.121771 | 0.116976 | 0.047242 | 0.041591 | 0.811257 | 0.757346 | 0.727735 | 0.7092 | 0.7092 | 0.707505 | 0 | 0.010384 | 0.285348 | 15,227 | 413 | 101 | 36.869249 | 0.802702 | 0.050568 | 0 | 0.661538 | 0 | 0 | 0.199944 | 0.073288 | 0 | 0 | 0 | 0 | 0.141538 | 1 | 0.036923 | false | 0 | 0.027692 | 0 | 0.070769 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
31aaef0f39623947c41b4530dcbafe1b64801f1f | 11,128 | py | Python | src/cr/nimble/_src/distance.py | carnotresearch/cr-nimble | 6e067610a04fe110514bb42355803418cc98f27f | [
"Apache-2.0"
] | 2 | 2021-12-10T04:22:54.000Z | 2022-03-09T20:09:28.000Z | src/cr/nimble/_src/distance.py | carnotresearch/cr-nimble | 6e067610a04fe110514bb42355803418cc98f27f | [
"Apache-2.0"
] | null | null | null | src/cr/nimble/_src/distance.py | carnotresearch/cr-nimble | 6e067610a04fe110514bb42355803418cc98f27f | [
"Apache-2.0"
] | null | null | null | # Copyright 2021 CR-Nimble Development Team
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Pairwise distances between a set of points
"""
from jax import jit
import jax.numpy as jnp
@jit
def pairwise_sqr_l2_distances_rw(A, B):
r"""Computes the pairwise squared distances between points in A and points in B where each point is a row vector
Args:
A (jax.numpy.ndarray): A set of M K-dimensional points (row-wise)
B (jax.numpy.ndarray): A set of N K-dimensional points (row-wise)
Returns:
(jax.numpy.ndarray): An MxN matrix D of squared distances
between points in A and points in B
* Let the ambient space of points be :math:`\mathbb{F}^K`.
* :math:`A` contains the points :math:`a_i` with :math:`1 \leq i \leq M`
and each point maps to a row of :math:`A`.
* :math:`B` contains the points :math:`b_j` with :math:`1 \leq j \leq N`
and each point maps to a row of :math:`B`.
Then the distance matrix :math:`D` is of size :math:`M \times N` and consists of:
.. math::
d_{i, j} = \| a_i - b_j \|_2^2 = \langle a_i - b_j , a_i - b_j \rangle
"""
M = A.shape[0]
N = B.shape[0]
# compute squared sums for each row vector
a_sums = jnp.sum(A*A, axis=1)
# reshape to Mx1 column vector
a_sums = jnp.reshape(a_sums, (M, 1))
# broadcast to MxN matrix
a_sums = a_sums * jnp.ones((1,N))
# compute squared sums for each row vector
b_sums = jnp.sum(B*B, axis=1)
# broadcast to MxN matrix
b_sums = b_sums * jnp.ones((M, 1))
# multiply A (M x p) and B.T (p x N)
prods = A @ B.T
return a_sums + b_sums - 2 * prods
@jit
def pairwise_sqr_l2_distances_cw(A, B):
r"""Computes the pairwise squared distances between points in A and points in B where each point is a column vector
Args:
A (jax.numpy.ndarray): A set of M K-dimensional points (column-wise)
B (jax.numpy.ndarray): A set of N K-dimensional points (column-wise)
Returns:
(jax.numpy.ndarray): An MxN matrix D of squared distances
between points in A and points in B
* Let the ambient space of points be :math:`\mathbb{F}^K`.
* :math:`A` contains the points :math:`a_i` with :math:`1 \leq i \leq M`
and each point maps to a column of :math:`A`.
* :math:`B` contains the points :math:`b_j` with :math:`1 \leq j \leq N`
and each point maps to a column of :math:`B`.
Then the distance matrix :math:`D` is of size :math:`M \times N` and consists of:
.. math::
d_{i, j} = \| a_i - b_j \|_2^2 = \langle a_i - b_j , a_i - b_j \rangle
"""
M = A.shape[1]
N = B.shape[1]
# compute squared sums for each column vector
a_sums = jnp.sum(A*A, axis=0)
# reshape to Mx1 column vector
a_sums = jnp.reshape(a_sums, (M, 1))
# broadcast to MxN matrix
a_sums = a_sums * jnp.ones((1,N))
# compute squared sums for each column vector
b_sums = jnp.sum(B*B, axis=0)
# broadcast to MxN matrix
b_sums = b_sums * jnp.ones((M, 1))
# multiply A.T (M x p) and B (p x N)
prods = A.T @ B
return a_sums + b_sums - 2 * prods
@jit
def pairwise_l2_distances_rw(A, B):
r"""Computes the pairwise distances between points in A and points in B where each point is a row vector
Args:
A (jax.numpy.ndarray): A set of M K-dimensional points (row-wise)
B (jax.numpy.ndarray): A set of N K-dimensional points (row-wise)
Returns:
(jax.numpy.ndarray): An MxN matrix D of euclidean distances
between points in A and points in B
"""
return jnp.sqrt(pairwise_sqr_l2_distances_rw(A, B))
@jit
def pairwise_l2_distances_cw(A, B):
r"""Computes the pairwise distances between points in A and points in B where each point is a column vector
Args:
A (jax.numpy.ndarray): A set of M K-dimensional points (column-wise)
B (jax.numpy.ndarray): A set of N K-dimensional points (column-wise)
Returns:
(jax.numpy.ndarray): An MxN matrix D of euclidean distances
between points in A and points in B
"""
return jnp.sqrt(pairwise_sqr_l2_distances_cw(A, B))
@jit
def pdist_sqr_l2_rw(A):
r"""Computes the pairwise squared distances between points in A where each point is a row vector
Args:
A (jax.numpy.ndarray): A set of N K-dimensional points (row-wise)
Returns:
(jax.numpy.ndarray): An NxN matrix D of squared euclidean distances
between points in A
* Let the ambient space of points be :math:`\mathbb{F}^K`.
* :math:`A` contains the points :math:`a_i` with :math:`1 \leq i \leq N`
and each point maps to a row of :math:`A`.
Then the distance matrix :math:`D` is of size :math:`N \times N` and consists of:
.. math::
d_{i, j} = \| a_i - a_j \|_2^2 = \langle a_i - a_j , a_i - a_j \rangle
"""
M = A.shape[0]
# compute squared sums for each row vector
sums = jnp.sum(A*A, axis=1)
# broadcast to MxM matrix
a_sums = jnp.reshape(sums, (M,1)) * jnp.ones((1, M))
b_sums = sums * jnp.ones((M, 1))
# multiply A (M x p) and A.T (p x M)
prods = A @ A.T
return a_sums + b_sums - 2*prods
@jit
def pdist_sqr_l2_cw(A):
r"""Computes the pairwise squared distances between points in A where each point is a column vector
Args:
A (jax.numpy.ndarray): A set of N K-dimensional points (column-wise)
Returns:
(jax.numpy.ndarray): An NxN matrix D of squared euclidean distances
between points in A
* Let the ambient space of points be :math:`\mathbb{F}^K`.
* :math:`A` contains the points :math:`a_i` with :math:`1 \leq i \leq N`
and each point maps to a column of :math:`A`.
Then the distance matrix :math:`D` is of size :math:`N \times N` and consists of:
.. math::
d_{i, j} = \| a_i - a_j \|_2^2 = \langle a_i - a_j , a_i - a_j \rangle
"""
M = A.shape[1]
# compute squared sums for each col vector
sums = jnp.sum(A*A, axis=0)
# broadcast to MxN matrix
a_sums = jnp.reshape(sums, (M, 1)) * jnp.ones((1,M))
b_sums = sums * jnp.ones((M, 1))
# multiply A.T (M x p) and A (p x M)
prods = A.T @ A
return a_sums + b_sums - 2 * prods
@jit
def pdist_l2_rw(A):
r"""Computes the pairwise distances between points in A where ach point is a row vector
Args:
A (jax.numpy.ndarray): A set of N K-dimensional points (row-wise)
Returns:
(jax.numpy.ndarray): An NxN matrix D of euclidean distances
between points in A
"""
return jnp.sqrt(pdist_sqr_l2_rw(A))
@jit
def pdist_l2_cw(A):
r"""Computes the pairwise distances between points in A where each point is a column vector
Args:
A (jax.numpy.ndarray): A set of N K-dimensional points (column-wise)
Returns:
(jax.numpy.ndarray): An NxN matrix D of euclidean distances
between points in A
"""
return jnp.sqrt(pdist_sqr_l2_cw(A))
@jit
def pairwise_l1_distances_rw(A, B):
r"""Computes the pairwise city-block distances between points in A and points in B where each point is a row vector
Args:
A (jax.numpy.ndarray): A set of M K-dimensional points (row-wise)
B (jax.numpy.ndarray): A set of N K-dimensional points (row-wise)
Returns:
(jax.numpy.ndarray): An MxN matrix D of city-block distances
between points in A and points in B
"""
return jnp.sum(jnp.abs(A[:, None, :] - B[None, :, :]), axis=-1)
@jit
def pairwise_l1_distances_cw(A, B):
r"""Computes the pairwise city-block distances between points in A and points in B where each point is a column vector
Args:
A (jax.numpy.ndarray): A set of M K-dimensional points (column-wise)
B (jax.numpy.ndarray): A set of N K-dimensional points (column-wise)
Returns:
(jax.numpy.ndarray): An MxN matrix D of city-block distances
between points in A and points in B
"""
return jnp.sum(jnp.abs(A[:, :, None] - B[:, None, :]), axis=0)
@jit
def pdist_l1_rw(A):
r"""Computes the pairwise city-block distances between points in A where each point is a row vector
Args:
A (jax.numpy.ndarray): A set of M K-dimensional points (row-wise)
Returns:
(jax.numpy.ndarray): An MxM matrix D of city-block distances
between points in A
"""
return pairwise_l1_distances_rw(A, A)
@jit
def pdist_l1_cw(A):
r"""Computes the pairwise city-block distances between points in A where each point is a column vector
Args:
A (jax.numpy.ndarray): A set of M K-dimensional points (column-wise)
Returns:
(jax.numpy.ndarray): An MxM matrix D of city-block distances
between points in A
"""
return pairwise_l1_distances_cw(A, A)
@jit
def pairwise_linf_distances_rw(A, B):
r"""Computes the pairwise Chebyshev distances between points in A and points in B where each point is a row vector
Args:
A (jax.numpy.ndarray): A set of M K-dimensional points (row-wise)
B (jax.numpy.ndarray): A set of N K-dimensional points (row-wise)
Returns:
(jax.numpy.ndarray): An MxN matrix D of Chebyshev distances
between points in A and points in B
"""
return jnp.max(jnp.abs(A[:, None, :] - B[None, :, :]), axis=-1)
@jit
def pairwise_linf_distances_cw(A, B):
r"""Computes the pairwise Chebyshev distances between points in A and points in B where each point is a column vector
Args:
A (jax.numpy.ndarray): A set of M K-dimensional points (column-wise)
B (jax.numpy.ndarray): A set of N K-dimensional points (column-wise)
Returns:
(jax.numpy.ndarray): An MxN matrix D of Chebyshev distances
between points in A and points in B
"""
return jnp.max(jnp.abs(A[:, :, None] - B[:, None, :]), axis=0)
@jit
def pdist_linf_rw(A):
r"""Computes the pairwise Chebyshev distances between points in A where each point is a row vector
Args:
A (jax.numpy.ndarray): A set of M K-dimensional points (row-wise)
Returns:
(jax.numpy.ndarray): An MxM matrix D of Chebyshev distances
between points in A
"""
return pairwise_linf_distances_rw(A, A)
@jit
def pdist_linf_cw(A):
r"""Computes the pairwise Chebyshev distances between points in A where each point is a column vector
Args:
A (jax.numpy.ndarray): A set of M K-dimensional points (column-wise)
Returns:
(jax.numpy.ndarray): An MxM matrix D of Chebyshev distances
between points in A
"""
return pairwise_linf_distances_cw(A, A)
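
# Minimal usage sketch (shapes are illustrative):
#
#   import jax.numpy as jnp
#   A = jnp.arange(6.0).reshape(3, 2)    # 3 points in R^2, one per row
#   B = jnp.ones((4, 2))                 # 4 points in R^2, one per row
#   D = pairwise_l2_distances_rw(A, B)   # D.shape == (3, 4)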
| 33.21791 | 122 | 0.645219 | 1,898 | 11,128 | 3.711275 | 0.080084 | 0.054514 | 0.085179 | 0.109029 | 0.917518 | 0.899915 | 0.895514 | 0.869676 | 0.851221 | 0.833191 | 0 | 0.008861 | 0.249551 | 11,128 | 334 | 123 | 33.317365 | 0.834631 | 0.721783 | 0 | 0.369565 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.173913 | false | 0 | 0.021739 | 0 | 0.369565 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7316c99af53048d1ad8208fca0e3c4ae162caced | 41 | py | Python | keras_retinanet/callbacks/__init__.py | Accioy/keras-retinanet | 01dce4547f78588185fa6a138c45279609bfa1c9 | [
"Apache-2.0"
] | 7,141 | 2018-03-22T16:27:31.000Z | 2022-03-31T07:18:34.000Z | keras_retinanet/callbacks/__init__.py | Accioy/keras-retinanet | 01dce4547f78588185fa6a138c45279609bfa1c9 | [
"Apache-2.0"
] | 1,472 | 2017-11-11T23:10:27.000Z | 2022-03-25T11:04:22.000Z | keras_retinanet/callbacks/__init__.py | Accioy/keras-retinanet | 01dce4547f78588185fa6a138c45279609bfa1c9 | [
"Apache-2.0"
] | 2,580 | 2017-05-14T14:33:41.000Z | 2022-03-31T15:04:14.000Z | from .common import * # noqa: F401,F403
| 20.5 | 40 | 0.682927 | 6 | 41 | 4.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 0.195122 | 41 | 1 | 41 | 41 | 0.666667 | 0.365854 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
731aa22c45c4f0e0947aede102cb75d069ec8060 | 18,319 | py | Python | api_tests/nodes/views/test_node_relationship_institutions.py | alexschiller/osf.io | 4122d4be152c6189142c2ebb19cfdee09c77035d | [
"Apache-2.0"
] | 1 | 2015-10-02T18:35:53.000Z | 2015-10-02T18:35:53.000Z | api_tests/nodes/views/test_node_relationship_institutions.py | alexschiller/osf.io | 4122d4be152c6189142c2ebb19cfdee09c77035d | [
"Apache-2.0"
] | 4 | 2016-05-13T14:24:16.000Z | 2017-03-30T15:28:31.000Z | api_tests/nodes/views/test_node_relationship_institutions.py | alexschiller/osf.io | 4122d4be152c6189142c2ebb19cfdee09c77035d | [
"Apache-2.0"
] | null | null | null | from nose.tools import * # flake8: noqa
from tests.base import ApiTestCase
from osf_tests.factories import InstitutionFactory, AuthUserFactory, NodeFactory
from api.base.settings.defaults import API_BASE
from website.util import permissions
class TestNodeRelationshipInstitutions(ApiTestCase):
def setUp(self):
super(TestNodeRelationshipInstitutions, self).setUp()
self.institution2 = InstitutionFactory()
self.institution1 = InstitutionFactory()
self.user = AuthUserFactory()
self.user.affiliated_institutions.add(self.institution1)
self.user.affiliated_institutions.add(self.institution2)
self.user.save()
self.read_write_contributor = AuthUserFactory()
self.read_write_contributor_institution = InstitutionFactory()
self.read_write_contributor.affiliated_institutions.add(self.read_write_contributor_institution)
self.read_write_contributor.save()
self.read_only_contributor = AuthUserFactory()
self.read_only_contributor_institution = InstitutionFactory()
self.read_only_contributor.affiliated_institutions.add(self.read_only_contributor_institution)
self.read_only_contributor.save()
self.node = NodeFactory(creator=self.user)
self.node.add_contributor(self.read_write_contributor, permissions=[permissions.WRITE])
self.node.add_contributor(self.read_only_contributor, permissions=[permissions.READ])
self.node.save()
self.node_institutions_url = '/{0}nodes/{1}/relationships/institutions/'.format(API_BASE, self.node._id)
def create_payload(self, *institution_ids):
data = []
for id_ in institution_ids:
data.append({'type': 'institutions', 'id': id_})
return {'data': data}
def test_node_with_no_permissions(self):
user = AuthUserFactory()
user.affiliated_institutions.add(self.institution1)
user.save()
res = self.app.put_json_api(
self.node_institutions_url,
self.create_payload([self.institution1._id]),
auth=user.auth,
expect_errors=True,
)
assert_equal(res.status_code, 403)

    def test_user_with_no_institution(self):
        user = AuthUserFactory()
        node = NodeFactory(creator=user)
        res = self.app.put_json_api(
            '/{0}nodes/{1}/relationships/institutions/'.format(API_BASE, node._id),
            self.create_payload(self.institution1._id),
            expect_errors=True,
            auth=user.auth
        )
        assert_equal(res.status_code, 403)

    def test_get_public_node(self):
        self.node.is_public = True
        self.node.save()
        res = self.app.get(
            self.node_institutions_url
        )
        assert_equal(res.status_code, 200)
        assert_equal(res.json['data'], [])

    def test_institution_does_not_exist(self):
        res = self.app.put_json_api(
            self.node_institutions_url,
            self.create_payload('not_an_id'),
            expect_errors=True,
            auth=self.user.auth
        )
        assert_equal(res.status_code, 404)

    def test_wrong_type(self):
        res = self.app.put_json_api(
            self.node_institutions_url,
            {'data': [{'type': 'not_institution', 'id': self.institution1._id}]},
            expect_errors=True,
            auth=self.user.auth
        )
        assert_equal(res.status_code, 409)

    def test_user_with_institution_and_permissions(self):
        assert_not_in(self.institution1, self.node.affiliated_institutions.all())
        assert_not_in(self.institution2, self.node.affiliated_institutions.all())
        res = self.app.post_json_api(
            self.node_institutions_url,
            self.create_payload(self.institution1._id, self.institution2._id),
            auth=self.user.auth
        )
        assert_equal(res.status_code, 201)
        data = res.json['data']
        ret_institutions = [inst['id'] for inst in data]
        assert_in(self.institution1._id, ret_institutions)
        assert_in(self.institution2._id, ret_institutions)
        self.node.reload()
        assert_in(self.institution1, self.node.affiliated_institutions.all())
        assert_in(self.institution2, self.node.affiliated_institutions.all())

    def test_user_with_institution_and_permissions_through_patch(self):
        assert_not_in(self.institution1, self.node.affiliated_institutions.all())
        assert_not_in(self.institution2, self.node.affiliated_institutions.all())
        res = self.app.put_json_api(
            self.node_institutions_url,
            self.create_payload(self.institution1._id, self.institution2._id),
            auth=self.user.auth
        )
        assert_equal(res.status_code, 200)
        data = res.json['data']
        ret_institutions = [inst['id'] for inst in data]
        assert_in(self.institution1._id, ret_institutions)
        assert_in(self.institution2._id, ret_institutions)
        self.node.reload()
        assert_in(self.institution1, self.node.affiliated_institutions.all())
        assert_in(self.institution2, self.node.affiliated_institutions.all())

    def test_remove_institutions_with_no_permissions(self):
        res = self.app.put_json_api(
            self.node_institutions_url,
            self.create_payload(),
            expect_errors=True
        )
        assert_equal(res.status_code, 401)

    def test_remove_institutions_with_affiliated_user(self):
        self.node.affiliated_institutions.add(self.institution1)
        self.node.save()
        assert_in(self.institution1, self.node.affiliated_institutions.all())
        res = self.app.put_json_api(
            self.node_institutions_url,
            {'data': []},
            auth=self.user.auth
        )
        assert_equal(res.status_code, 200)
        self.node.reload()
        assert_equal(self.node.affiliated_institutions.count(), 0)

    def test_using_post_making_no_changes_returns_204(self):
        self.node.affiliated_institutions.add(self.institution1)
        self.node.save()
        assert_in(self.institution1, self.node.affiliated_institutions.all())
        res = self.app.post_json_api(
            self.node_institutions_url,
            self.create_payload(self.institution1._id),
            auth=self.user.auth
        )
        assert_equal(res.status_code, 204)
        self.node.reload()
        assert_in(self.institution1, self.node.affiliated_institutions.all())

    def test_put_not_admin_but_affiliated(self):
        user = AuthUserFactory()
        user.affiliated_institutions.add(self.institution1)
        user.save()
        self.node.add_contributor(user)
        self.node.save()
        res = self.app.put_json_api(
            self.node_institutions_url,
            self.create_payload(self.institution1._id),
            auth=user.auth
        )
        self.node.reload()
        assert_equal(res.status_code, 200)
        assert_in(self.institution1, self.node.affiliated_institutions.all())

    def test_retrieve_private_node_no_auth(self):
        res = self.app.get(self.node_institutions_url, expect_errors=True)
        assert_equal(res.status_code, 401)

    def test_add_through_patch_one_inst_to_node_with_inst(self):
        self.node.affiliated_institutions.add(self.institution1)
        self.node.save()
        assert_in(self.institution1, self.node.affiliated_institutions.all())
        assert_not_in(self.institution2, self.node.affiliated_institutions.all())
        res = self.app.patch_json_api(
            self.node_institutions_url,
            self.create_payload(self.institution1._id, self.institution2._id),
            auth=self.user.auth
        )
        assert_equal(res.status_code, 200)
        self.node.reload()
        assert_in(self.institution1, self.node.affiliated_institutions.all())
        assert_in(self.institution2, self.node.affiliated_institutions.all())

    def test_add_through_patch_one_inst_while_removing_other(self):
        self.node.affiliated_institutions.add(self.institution1)
        self.node.save()
        assert_in(self.institution1, self.node.affiliated_institutions.all())
        assert_not_in(self.institution2, self.node.affiliated_institutions.all())
        res = self.app.patch_json_api(
            self.node_institutions_url,
            self.create_payload(self.institution2._id),
            auth=self.user.auth
        )
        assert_equal(res.status_code, 200)
        self.node.reload()
        assert_not_in(self.institution1, self.node.affiliated_institutions.all())
        assert_in(self.institution2, self.node.affiliated_institutions.all())

    def test_add_one_inst_with_post_to_node_with_inst(self):
        self.node.affiliated_institutions.add(self.institution1)
        self.node.save()
        assert_in(self.institution1, self.node.affiliated_institutions.all())
        assert_not_in(self.institution2, self.node.affiliated_institutions.all())
        res = self.app.post_json_api(
            self.node_institutions_url,
            self.create_payload(self.institution2._id),
            auth=self.user.auth
        )
        assert_equal(res.status_code, 201)
        self.node.reload()
        assert_in(self.institution1, self.node.affiliated_institutions.all())
        assert_in(self.institution2, self.node.affiliated_institutions.all())

    def test_delete_nothing(self):
        res = self.app.delete_json_api(
            self.node_institutions_url,
            self.create_payload(),
            auth=self.user.auth
        )
        assert_equal(res.status_code, 204)

    def test_delete_existing_inst(self):
        self.node.affiliated_institutions.add(self.institution1)
        self.node.save()
        assert_in(self.institution1, self.node.affiliated_institutions.all())
        res = self.app.delete_json_api(
            self.node_institutions_url,
            self.create_payload(self.institution1._id),
            auth=self.user.auth
        )
        assert_equal(res.status_code, 204)
        self.node.reload()
        assert_not_in(self.institution1, self.node.affiliated_institutions.all())

    def test_delete_not_affiliated_and_affiliated_insts(self):
        self.node.affiliated_institutions.add(self.institution1)
        self.node.save()
        assert_in(self.institution1, self.node.affiliated_institutions.all())
        assert_not_in(self.institution2, self.node.affiliated_institutions.all())
        res = self.app.delete_json_api(
            self.node_institutions_url,
            self.create_payload(self.institution1._id, self.institution2._id),
            auth=self.user.auth,
        )
        assert_equal(res.status_code, 204)
        self.node.reload()
        assert_not_in(self.institution1, self.node.affiliated_institutions.all())
        assert_not_in(self.institution2, self.node.affiliated_institutions.all())

    def test_delete_user_is_admin(self):
        self.node.affiliated_institutions.add(self.institution1)
        self.node.save()
        res = self.app.delete_json_api(
            self.node_institutions_url,
            self.create_payload(self.institution1._id),
            auth=self.user.auth
        )
        assert_equal(res.status_code, 204)

    def test_delete_user_is_read_write(self):
        user = AuthUserFactory()
        user.affiliated_institutions.add(self.institution1)
        user.save()
        self.node.add_contributor(user)
        self.node.affiliated_institutions.add(self.institution1)
        self.node.save()
        res = self.app.delete_json_api(
            self.node_institutions_url,
            self.create_payload(self.institution1._id),
            auth=user.auth
        )
        assert_equal(res.status_code, 204)

    def test_delete_user_is_read_only(self):
        user = AuthUserFactory()
        user.affiliated_institutions.add(self.institution1)
        user.save()
        self.node.add_contributor(user, permissions=[permissions.READ])
        self.node.affiliated_institutions.add(self.institution1)
        self.node.save()
        res = self.app.delete_json_api(
            self.node_institutions_url,
            self.create_payload(self.institution1._id),
            auth=user.auth,
            expect_errors=True
        )
        assert_equal(res.status_code, 403)

    def test_delete_user_is_admin_but_not_affiliated_with_inst(self):
        user = AuthUserFactory()
        node = NodeFactory(creator=user)
        node.affiliated_institutions.add(self.institution1)
        node.save()
        assert_in(self.institution1, node.affiliated_institutions.all())
        res = self.app.delete_json_api(
            '/{0}nodes/{1}/relationships/institutions/'.format(API_BASE, node._id),
            self.create_payload(self.institution1._id),
            auth=user.auth,
        )
        assert_equal(res.status_code, 204)
        node.reload()
        assert_not_in(self.institution1, node.affiliated_institutions.all())

    def test_admin_can_add_affiliated_institution(self):
        payload = {
            'data': [{
                'type': 'institutions',
                'id': self.institution1._id
            }]
        }
        res = self.app.post_json_api(self.node_institutions_url, payload, auth=self.user.auth)
        self.node.reload()
        assert_equal(res.status_code, 201)
        assert_in(self.institution1, self.node.affiliated_institutions.all())

    def test_admin_can_remove_admin_affiliated_institution(self):
        self.node.affiliated_institutions.add(self.institution1)
        payload = {
            'data': [{
                'type': 'institutions',
                'id': self.institution1._id
            }]
        }
        res = self.app.delete_json_api(self.node_institutions_url, payload, auth=self.user.auth)
        self.node.reload()
        assert_equal(res.status_code, 204)
        assert_not_in(self.institution1, self.node.affiliated_institutions.all())

    def test_admin_can_remove_read_write_contributor_affiliated_institution(self):
        self.node.affiliated_institutions.add(self.read_write_contributor_institution)
        self.node.save()
        payload = {
            'data': [{
                'type': 'institutions',
                'id': self.read_write_contributor_institution._id
            }]
        }
        res = self.app.delete_json_api(self.node_institutions_url, payload, auth=self.user.auth)
        self.node.reload()
        assert_equal(res.status_code, 204)
        assert_not_in(self.read_write_contributor_institution, self.node.affiliated_institutions.all())

    def test_read_write_contributor_can_add_affiliated_institution(self):
        payload = {
            'data': [{
                'type': 'institutions',
                'id': self.read_write_contributor_institution._id
            }]
        }
        res = self.app.post_json_api(self.node_institutions_url, payload, auth=self.read_write_contributor.auth)
        self.node.reload()
        assert_equal(res.status_code, 201)
        assert_in(self.read_write_contributor_institution, self.node.affiliated_institutions.all())

    def test_read_write_contributor_can_remove_affiliated_institution(self):
        self.node.affiliated_institutions.add(self.read_write_contributor_institution)
        self.node.save()
        payload = {
            'data': [{
                'type': 'institutions',
                'id': self.read_write_contributor_institution._id
            }]
        }
        res = self.app.delete_json_api(self.node_institutions_url, payload, auth=self.read_write_contributor.auth)
        self.node.reload()
        assert_equal(res.status_code, 204)
        assert_not_in(self.read_write_contributor_institution, self.node.affiliated_institutions.all())

    def test_read_write_contributor_cannot_remove_admin_affiliated_institution(self):
        self.node.affiliated_institutions.add(self.institution1)
        self.node.save()
        payload = {
            'data': [{
                'type': 'institutions',
                'id': self.institution1._id
            }]
        }
        res = self.app.delete_json_api(self.node_institutions_url, payload, auth=self.read_write_contributor.auth, expect_errors=True)
        self.node.reload()
        assert_equal(res.status_code, 403)
        assert_in(self.institution1, self.node.affiliated_institutions.all())

    def test_read_only_contributor_cannot_remove_admin_affiliated_institution(self):
        self.node.affiliated_institutions.add(self.institution1)
        self.node.save()
        payload = {
            'data': [{
                'type': 'institutions',
                'id': self.institution1._id
            }]
        }
        res = self.app.delete_json_api(self.node_institutions_url, payload, auth=self.read_only_contributor.auth, expect_errors=True)
        self.node.reload()
        assert_equal(res.status_code, 403)
        assert_in(self.institution1, self.node.affiliated_institutions.all())

    def test_read_only_contributor_cannot_add_affiliated_institution(self):
        payload = {
            'data': [{
                'type': 'institutions',
                'id': self.read_only_contributor_institution._id
            }]
        }
        res = self.app.post_json_api(self.node_institutions_url, payload, auth=self.read_only_contributor.auth, expect_errors=True)
        self.node.reload()
        assert_equal(res.status_code, 403)
        # The payload above used the read-only contributor's institution, so
        # assert on that one (the original line checked the read-write one).
        assert_not_in(self.read_only_contributor_institution, self.node.affiliated_institutions.all())

    def test_read_only_contributor_cannot_remove_affiliated_institution(self):
        self.node.affiliated_institutions.add(self.read_only_contributor_institution)
        self.node.save()
        payload = {
            'data': [{
                'type': 'institutions',
                'id': self.read_only_contributor_institution._id
            }]
        }
        res = self.app.delete_json_api(self.node_institutions_url, payload, auth=self.read_only_contributor.auth, expect_errors=True)
        self.node.reload()
        assert_equal(res.status_code, 403)
        assert_in(self.read_only_contributor_institution, self.node.affiliated_institutions.all())
| 38.729387 | 134 | 0.667667 | 2,115 | 18,319 | 5.471395 | 0.054374 | 0.090563 | 0.132561 | 0.145178 | 0.89008 | 0.870204 | 0.846008 | 0.811614 | 0.793467 | 0.782147 | 0 | 0.013509 | 0.232218 | 18,319 | 472 | 135 | 38.811441 | 0.809243 | 0.000655 | 0 | 0.696429 | 0 | 0 | 0.021688 | 0.006719 | 0 | 0 | 0 | 0 | 0.19898 | 1 | 0.084184 | false | 0 | 0.012755 | 0 | 0.102041 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
732e227af88a60370afa6a94fc9d9fe12c833737 | 44 | py | Python | API/Generators/Tickets/__init__.py | Slavkata/MAC | 1ba8f830367b550922af87b1acf8d22caf93dc23 | [
"MIT"
] | 3 | 2019-02-19T11:53:39.000Z | 2019-05-26T15:36:52.000Z | API/Generators/Tickets/__init__.py | Slavkata/MAC-website | 1ba8f830367b550922af87b1acf8d22caf93dc23 | [
"MIT"
] | 24 | 2019-02-26T12:26:34.000Z | 2022-03-11T23:49:43.000Z | API/Generators/Tickets/__init__.py | Slavkata/MAC | 1ba8f830367b550922af87b1acf8d22caf93dc23 | [
"MIT"
] | 1 | 2019-02-19T08:04:48.000Z | 2019-02-19T08:04:48.000Z | from .pdf_generator import generate_tickets
| 22 | 43 | 0.886364 | 6 | 44 | 6.166667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 44 | 1 | 44 | 44 | 0.925 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
733cb3bea418e39494d1891d82bbdae47b255af1 | 79 | py | Python | src/vae/models/__init__.py | universuen/VGAN-BL | fea487bfea8fd449c4f97c5c4b07a81b4be76d3a | [
"MIT"
] | null | null | null | src/vae/models/__init__.py | universuen/VGAN-BL | fea487bfea8fd449c4f97c5c4b07a81b4be76d3a | [
"MIT"
] | null | null | null | src/vae/models/__init__.py | universuen/VGAN-BL | fea487bfea8fd449c4f97c5c4b07a81b4be76d3a | [
"MIT"
] | null | null | null | from .decoder_model import DecoderModel
from .encoder_model import EncoderModel | 39.5 | 39 | 0.886076 | 10 | 79 | 6.8 | 0.7 | 0.323529 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088608 | 79 | 2 | 40 | 39.5 | 0.944444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7340ffb1e92d2b4591b1694424abba069815c775 | 3,876 | py | Python | coba/tests/test_environments_simulated_serialized.py | mrucker/coba | 4f679fb5c6e39e2d0bf3e609c77a2a6865168795 | [
"BSD-3-Clause"
] | 1 | 2020-07-22T13:43:14.000Z | 2020-07-22T13:43:14.000Z | coba/tests/test_environments_simulated_serialized.py | mrucker/coba | 4f679fb5c6e39e2d0bf3e609c77a2a6865168795 | [
"BSD-3-Clause"
] | null | null | null | coba/tests/test_environments_simulated_serialized.py | mrucker/coba | 4f679fb5c6e39e2d0bf3e609c77a2a6865168795 | [
"BSD-3-Clause"
] | null | null | null | import unittest
from coba.pipes import ListSink, ListSource, HttpSource, DiskSource
from coba.contexts import CobaContext, NullLogger
from coba.environments import SimulatedInteraction, MemorySimulation, SerializedSimulation
CobaContext.logger = NullLogger()

class SerializedSimulation_Tests(unittest.TestCase):

    def test_sim_source(self):
        expected_env = MemorySimulation(params={}, interactions=[SimulatedInteraction(1, [1, 2], rewards=[2, 3])])
        actual_env = SerializedSimulation(expected_env)
        self.assertEqual(expected_env.params, actual_env.params)
        self.assertEqual(len(list(expected_env.read())), len(list(actual_env.read())))
        for e_interaction, a_interaction in zip(expected_env.read(), actual_env.read()):
            self.assertEqual(e_interaction.context, a_interaction.context)
            self.assertEqual(e_interaction.actions, a_interaction.actions)
            self.assertEqual(e_interaction.kwargs, a_interaction.kwargs)

    def test_http_url(self):
        env = SerializedSimulation("https://github.com")
        self.assertIsInstance(env._source._source, HttpSource)
        self.assertEqual("https://github.com", env._source._url)

    def test_filepath(self):
        env = SerializedSimulation("C:/test")
        self.assertIsInstance(env._source._source, DiskSource)
        self.assertEqual("C:/test", env._source._source._filename)

    def test_sim_write_read_simple(self):
        sink = ListSink()
        expected_env = MemorySimulation(params={}, interactions=[SimulatedInteraction(1, [1, 2], rewards=[2, 3])])
        SerializedSimulation(expected_env).write(sink)
        actual_env = SerializedSimulation(ListSource(sink.items))
        self.assertEqual(expected_env.params, actual_env.params)
        self.assertEqual(len(list(expected_env.read())), len(list(actual_env.read())))
        for e_interaction, a_interaction in zip(expected_env.read(), actual_env.read()):
            self.assertEqual(e_interaction.context, a_interaction.context)
            self.assertEqual(e_interaction.actions, a_interaction.actions)
            self.assertEqual(e_interaction.kwargs, a_interaction.kwargs)

    def test_sim_write_read_with_params_and_none_context(self):
        sink = ListSink()
        expected_env = MemorySimulation(params={'a': 1}, interactions=[SimulatedInteraction(None, [1, 2], rewards=[2, 3])])
        SerializedSimulation(expected_env).write(sink)
        actual_env = SerializedSimulation(ListSource(sink.items))
        self.assertEqual(expected_env.params, actual_env.params)
        self.assertEqual(len(list(expected_env.read())), len(list(actual_env.read())))
        for e_interaction, a_interaction in zip(expected_env.read(), actual_env.read()):
            self.assertEqual(e_interaction.context, a_interaction.context)
            self.assertEqual(e_interaction.actions, a_interaction.actions)
            self.assertEqual(e_interaction.kwargs, a_interaction.kwargs)

    def test_sim_write_read_with_params_and_action_tuple(self):
        sink = ListSink()
        expected_env = MemorySimulation(params={'a': 1}, interactions=[SimulatedInteraction(None, [(1, 0), (0, 1)], rewards=[2, 3])])
        SerializedSimulation(expected_env).write(sink)
        actual_env = SerializedSimulation(ListSource(sink.items))
        self.assertEqual(expected_env.params, actual_env.params)
        self.assertEqual(len(list(expected_env.read())), len(list(actual_env.read())))
        for e_interaction, a_interaction in zip(expected_env.read(), actual_env.read()):
            self.assertEqual(e_interaction.context, a_interaction.context)
            self.assertEqual(e_interaction.actions, a_interaction.actions)
            self.assertEqual(e_interaction.kwargs, a_interaction.kwargs)


if __name__ == '__main__':
    unittest.main()
| 51 | 126 | 0.713364 | 442 | 3,876 | 6.004525 | 0.156109 | 0.124341 | 0.072344 | 0.12208 | 0.781085 | 0.747551 | 0.747551 | 0.741522 | 0.741522 | 0.741522 | 0 | 0.006849 | 0.171311 | 3,876 | 75 | 127 | 51.68 | 0.819427 | 0 | 0 | 0.603448 | 0 | 0 | 0.01548 | 0 | 0 | 0 | 0 | 0 | 0.413793 | 1 | 0.103448 | false | 0 | 0.068966 | 0 | 0.189655 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c33c8b38af1f112bba3af5768a323582c9d09e73 | 8,794 | py | Python | jyotisha/panchaanga/spatio_temporal/annual.py | Prabhakaran-cbe/jyotisha | 689327c5944c6cc84b7e58af4deae2a4ebe94d7b | [
"MIT"
] | 15 | 2021-01-23T12:13:27.000Z | 2022-02-15T08:30:20.000Z | jyotisha/panchaanga/spatio_temporal/annual.py | Prabhakaran-cbe/jyotisha | 689327c5944c6cc84b7e58af4deae2a4ebe94d7b | [
"MIT"
] | 51 | 2020-12-11T01:46:00.000Z | 2022-03-13T11:28:30.000Z | jyotisha/panchaanga/spatio_temporal/annual.py | Prabhakaran-cbe/jyotisha | 689327c5944c6cc84b7e58af4deae2a4ebe94d7b | [
"MIT"
] | 10 | 2020-12-16T22:58:07.000Z | 2022-01-15T08:34:12.000Z | import logging
import os
import sys
import traceback
from jyotisha.panchaanga.spatio_temporal import periodical
from jyotisha.panchaanga.spatio_temporal.periodical import Panchaanga
from jyotisha.panchaanga.temporal import ComputationSystem, set_constants, time, era
from jyotisha.panchaanga.temporal.festival.rules import RulesRepo
from jyotisha.panchaanga.temporal.time import Date, Timezone
from jyotisha.panchaanga.temporal.body import Graha
from jyotisha.panchaanga.temporal.zodiac import AngaSpanFinder, Ayanamsha
from sanskrit_data.schema import common
from jyotisha.panchaanga.temporal.zodiac.angas import AngaType
common.update_json_class_index(sys.modules[__name__])
set_constants()


def load_panchaanga(fname, fallback_fn):
    logging.info('Loaded pre-computed panchaanga from %s.\n' % fname)
    panchaanga = Panchaanga.read_from_file(filename=fname, name_to_json_class_index_extra={"Panchangam": periodical.Panchaanga})
    if getattr(panchaanga, 'version', None) is None or panchaanga.version != periodical.Panchaanga.LATEST_VERSION:
        logging.warning("Precomputed Panchanga obsolete.")
        return fallback_fn()
    else:
        panchaanga.dump_to_file(filename=fname)
        return panchaanga
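
# Note on the pattern above: `fallback_fn` is expected to recompute and return
# a fresh Panchaanga whenever the cached file on disk was written by an older
# code version; otherwise the cached object is re-saved and returned as-is.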


def get_panchaanga_for_kali_year(city, year, precomputed_json_dir="~/Documents/jyotisha", computation_system: ComputationSystem = None, allow_precomputed=True, recompute_festivals=True):
    year = int(year)
    fname = os.path.expanduser('%s/%s__kali_%s__%s.json' % (precomputed_json_dir, city.name, year, computation_system))
    if os.path.isfile(fname) and allow_precomputed:
        fn = lambda: get_panchaanga_for_kali_year(city=city, year=year, precomputed_json_dir=precomputed_json_dir,
                                                  computation_system=computation_system, allow_precomputed=False)
        panchaanga = load_panchaanga(fname=fname, fallback_fn=fn)
        # Fest repos to be used might have changed in this call.
        panchaanga.computation_system = computation_system
        if recompute_festivals:
            panchaanga.update_festival_details()
        return panchaanga
    else:
        logging.info('No precomputed data available or allowed. Computing panchaanga...\n')
        start_year_civil = year - era.get_year_0_offset(era_id=era.ERA_KALI)
        anga_span_finder = AngaSpanFinder.get_cached(ayanaamsha_id=Ayanamsha.CHITRA_AT_180, anga_type=AngaType.SIDEREAL_MONTH)
        start_mesha = anga_span_finder.find(jd1=time.utc_gregorian_to_jd(Date(year=start_year_civil, month=3, day=1)), jd2=time.utc_gregorian_to_jd(Date(year=start_year_civil, month=5, day=1)), target_anga_id=1)
        jd_next_sunset_start_mesha = city.get_setting_time(julian_day_start=start_mesha.jd_start, body=Graha.SUN)
        end_mina = anga_span_finder.find(jd1=time.utc_gregorian_to_jd(Date(year=start_year_civil + 1, month=3, day=1)), jd2=time.utc_gregorian_to_jd(Date(year=start_year_civil + 1, month=5, day=1)), target_anga_id=1)
        jd_preceding_sunset_end_mina = city.get_setting_time(julian_day_start=end_mina.jd_start - 1, body=Graha.SUN)
        tz = Timezone(city.timezone)
        panchaanga = periodical.Panchaanga(city=city, start_date=tz.julian_day_to_local_time(julian_day=jd_next_sunset_start_mesha), end_date=tz.julian_day_to_local_time(julian_day=jd_preceding_sunset_end_mina), computation_system=computation_system)
        panchaanga.year = year
        # Festival data may be updated more frequently and a precomputed panchaanga may go out of sync. Hence we keep this method separate.
        logging.info('Writing computed panchaanga to %s...\n' % fname)
        try:
            panchaanga.dump_to_file(filename=fname)
        except EnvironmentError:
            logging.warning("Not able to save.")
            logging.error(traceback.format_exc())
        return panchaanga


def get_panchaanga_for_shaka_year(city, year, precomputed_json_dir="~/Documents/jyotisha", computation_system: ComputationSystem = None, allow_precomputed=True):
    fname = os.path.expanduser('%s/%s__shaka_%s__%s.json' % (precomputed_json_dir, city.name, year, computation_system))
    if os.path.isfile(fname) and allow_precomputed:
        fn = lambda: get_panchaanga_for_shaka_year(city=city, year=year, precomputed_json_dir=precomputed_json_dir,
                                                   computation_system=computation_system, allow_precomputed=False)
        panchaanga = load_panchaanga(fname=fname, fallback_fn=fn)
        # Fest repos to be used might have changed in this call.
        panchaanga.computation_system = computation_system
        panchaanga.update_festival_details()
        return panchaanga
    else:
        logging.info('No precomputed data available. Computing panchaanga...\n')
        SHAKA_CIVIL_ERA_DIFF = 78
        start_year_civil = year + era.get_year_0_offset(era_id=era.ERA_SHAKA)
        anga_span_finder = AngaSpanFinder.get_cached(ayanaamsha_id=Ayanamsha.ASHVINI_STARTING_0, anga_type=AngaType.SIDEREAL_MONTH)
        start_equinox = anga_span_finder.find(jd1=time.utc_gregorian_to_jd(Date(year=start_year_civil, month=3, day=1)), jd2=time.utc_gregorian_to_jd(Date(year=start_year_civil, month=5, day=1)), target_anga_id=1)
        end_equinox = anga_span_finder.find(jd1=time.utc_gregorian_to_jd(Date(year=start_year_civil + 1, month=3, day=1)), jd2=time.utc_gregorian_to_jd(Date(year=start_year_civil + 1, month=5, day=1)), target_anga_id=1)
        tz = Timezone(city.timezone)
        panchaanga = periodical.Panchaanga(city=city, start_date=tz.julian_day_to_local_time(julian_day=start_equinox.jd_start), end_date=tz.julian_day_to_local_time(julian_day=end_equinox.jd_start), computation_system=computation_system)
        panchaanga.year = year
        # Festival data may be updated more frequently and a precomputed panchaanga may go out of sync. Hence we keep this method separate.
        logging.info('Writing computed panchaanga to %s...\n' % fname)
        try:
            panchaanga.dump_to_file(filename=fname)
        except EnvironmentError:
            logging.warning("Not able to save.")
            logging.error(traceback.format_exc())
        return panchaanga


def get_panchaanga_for_civil_year(city, year, precomputed_json_dir="~/Documents/jyotisha",
                                  computation_system: ComputationSystem = None, allow_precomputed=True):
    fname = os.path.expanduser('%s/%s__gregorian_%s__%s.json' % (precomputed_json_dir, city.name, year, computation_system))
    if os.path.isfile(fname) and allow_precomputed:
        fn = lambda: get_panchaanga_for_civil_year(city=city, year=year, precomputed_json_dir=precomputed_json_dir,
                                                   computation_system=computation_system, allow_precomputed=False)
        panchaanga = load_panchaanga(fname=fname, fallback_fn=fn)
        return panchaanga
    else:
        logging.info('No precomputed data available or allowed. Computing panchaanga...\n')
        panchaanga = periodical.Panchaanga(city=city, start_date='%d-01-01' % year, end_date='%d-12-31' % year, computation_system=computation_system)
        panchaanga.year = year
        logging.info('Writing computed panchaanga to %s...\n' % fname)
        panchaanga.dump_to_file(filename=fname)
        return panchaanga


def get_panchaanga_for_year(city, year, year_type, computation_system, allow_precomputed=True):
    if year_type == era.ERA_GREGORIAN:
        return get_panchaanga_for_civil_year(city=city, year=year, computation_system=computation_system, allow_precomputed=allow_precomputed)
    elif year_type == era.ERA_KALI:
        return get_panchaanga_for_kali_year(city=city, year=year, computation_system=computation_system, allow_precomputed=allow_precomputed)
    elif year_type == era.ERA_SHAKA:
        return get_panchaanga_for_shaka_year(city=city, year=year, computation_system=computation_system, allow_precomputed=allow_precomputed)
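
# Note: a `year_type` other than the three era constants above falls through
# every branch, so the function implicitly returns None; callers are assumed
# to pass one of era.ERA_GREGORIAN, era.ERA_KALI or era.ERA_SHAKA.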


def get_panchaanga_for_given_dates(city, start_date, end_date, precomputed_json_dir="~/Documents/jyotisha",
                                   computation_system: ComputationSystem = None, allow_precomputed=True):
    fname = os.path.expanduser('%s/%s__%s-%s__%s.json' % (precomputed_json_dir, city.name, start_date, end_date, computation_system))
    if os.path.isfile(fname) and allow_precomputed:
        fn = lambda: get_panchaanga_for_given_dates(city=city, start_date=start_date, end_date=end_date,
                                                    precomputed_json_dir=precomputed_json_dir,
                                                    computation_system=computation_system, allow_precomputed=False)
        panchaanga = load_panchaanga(fname=fname, fallback_fn=fn)
        return panchaanga
    else:
        logging.info('No precomputed data available or allowed. Computing panchaanga...\n')
        panchaanga = periodical.Panchaanga(city=city, start_date=start_date, end_date=end_date, computation_system=computation_system)
        logging.info('Writing computed panchaanga to %s...\n' % fname)
        panchaanga.dump_to_file(filename=fname)
        return panchaanga
| 61.496503 | 246 | 0.769843 | 1,205 | 8,794 | 5.310373 | 0.148548 | 0.092983 | 0.045007 | 0.069073 | 0.813565 | 0.772933 | 0.748554 | 0.730427 | 0.725426 | 0.694796 | 0 | 0.006472 | 0.139072 | 8,794 | 142 | 247 | 61.929577 | 0.838727 | 0.04196 | 0 | 0.491379 | 0 | 0 | 0.086006 | 0.011404 | 0 | 0 | 0 | 0 | 0 | 1 | 0.051724 | false | 0 | 0.112069 | 0 | 0.275862 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6f17d6d1ded9b0f963dc900f0af4ecd288fa83fc | 129 | py | Python | students/K33402/Velts Andrey/lab0304/backend/events/admin.py | ShubhamKunal/ITMO_ICT_WebDevelopment_2020-2021 | bb91c91a56d21cec2b12ae4cc722eaa652a88420 | [
"MIT"
] | 4 | 2020-09-03T15:41:42.000Z | 2021-12-24T15:28:20.000Z | students/K33402/Velts Andrey/lab0304/backend/events/admin.py | ShubhamKunal/ITMO_ICT_WebDevelopment_2020-2021 | bb91c91a56d21cec2b12ae4cc722eaa652a88420 | [
"MIT"
] | 48 | 2020-09-13T20:22:42.000Z | 2021-04-30T11:13:30.000Z | students/K33402/Velts Andrey/lab0304/backend/events/admin.py | ShubhamKunal/ITMO_ICT_WebDevelopment_2020-2021 | bb91c91a56d21cec2b12ae4cc722eaa652a88420 | [
"MIT"
] | 69 | 2020-09-06T10:32:37.000Z | 2021-11-28T18:13:17.000Z | from django.contrib import admin
from .models import Event

@admin.register(Event)
class EventAdmin(admin.ModelAdmin):
    pass
| 16.125 | 35 | 0.782946 | 17 | 129 | 5.941176 | 0.705882 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.139535 | 129 | 7 | 36 | 18.428571 | 0.90991 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.2 | 0.4 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
6f2056bbb867d1cb5716bbb94773a28db3929acf | 6,822 | py | Python | oxe-api/test/resource/taxonomy/test_add_taxonomy_value_hierarchy.py | CybersecurityLuxembourg/openxeco | 8d4e5578bde6a07f5d6d569b16b4de224abf7bf0 | [
"BSD-2-Clause"
] | null | null | null | oxe-api/test/resource/taxonomy/test_add_taxonomy_value_hierarchy.py | CybersecurityLuxembourg/openxeco | 8d4e5578bde6a07f5d6d569b16b4de224abf7bf0 | [
"BSD-2-Clause"
] | null | null | null | oxe-api/test/resource/taxonomy/test_add_taxonomy_value_hierarchy.py | CybersecurityLuxembourg/openxeco | 8d4e5578bde6a07f5d6d569b16b4de224abf7bf0 | [
"BSD-2-Clause"
] | null | null | null | from unittest.mock import patch
from sqlalchemy.exc import IntegrityError
from test.BaseCase import BaseCase

class TestAddTaxonomyValueHierarchy(BaseCase):

    @BaseCase.login
    @BaseCase.grant_access("/taxonomy/add_taxonomy_value_hierarchy")
    def test_ok(self, token):
        self.db.insert({"name": "CAT1"}, self.db.tables["TaxonomyCategory"])
        self.db.insert({"name": "CAT2"}, self.db.tables["TaxonomyCategory"])
        self.db.insert({"parent_category": "CAT1", "child_category": "CAT2"},
                       self.db.tables["TaxonomyCategoryHierarchy"])
        self.db.insert({"id": 1, "name": "VAL1", "category": "CAT1"}, self.db.tables["TaxonomyValue"])
        self.db.insert({"id": 2, "name": "VAL2", "category": "CAT2"}, self.db.tables["TaxonomyValue"])
        payload = {
            "parent_value": 1,
            "child_value": 2,
        }
        response = self.application.post('/taxonomy/add_taxonomy_value_hierarchy',
                                         headers=self.get_standard_post_header(token),
                                         json=payload)
        self.assertEqual(200, response.status_code)
        self.assertEqual(self.db.get_count(self.db.tables["TaxonomyValueHierarchy"]), 1)

    @BaseCase.login
    @BaseCase.grant_access("/taxonomy/add_taxonomy_value_hierarchy")
    def test_ko_same_values(self, token):
        payload = {
            "parent_value": 1,
            "child_value": 1,
        }
        response = self.application.post('/taxonomy/add_taxonomy_value_hierarchy',
                                         headers=self.get_standard_post_header(token),
                                         json=payload)
        self.assertEqual(422, response.status_code)
        self.assertEqual("422 The provided values cannot be the same one", response.status)

    @BaseCase.login
    @BaseCase.grant_access("/taxonomy/add_taxonomy_value_hierarchy")
    def test_ko_parent_value_not_existing(self, token):
        self.db.insert({"name": "CAT2"}, self.db.tables["TaxonomyCategory"])
        self.db.insert({"id": 2, "name": "VAL2", "category": "CAT2"}, self.db.tables["TaxonomyValue"])
        payload = {
            "parent_value": 1,
            "child_value": 2,
        }
        response = self.application.post('/taxonomy/add_taxonomy_value_hierarchy',
                                         headers=self.get_standard_post_header(token),
                                         json=payload)
        self.assertEqual(422, response.status_code)
        self.assertEqual("422 Provided parent value not existing", response.status)

    @BaseCase.login
    @BaseCase.grant_access("/taxonomy/add_taxonomy_value_hierarchy")
    def test_ko_child_value_not_existing(self, token):
        self.db.insert({"name": "CAT1"}, self.db.tables["TaxonomyCategory"])
        self.db.insert({"id": 1, "name": "VAL1", "category": "CAT1"}, self.db.tables["TaxonomyValue"])
        payload = {
            "parent_value": 1,
            "child_value": 2,
        }
        response = self.application.post('/taxonomy/add_taxonomy_value_hierarchy',
                                         headers=self.get_standard_post_header(token),
                                         json=payload)
        self.assertEqual(422, response.status_code)
        self.assertEqual("422 Provided child value not existing", response.status)

    @BaseCase.login
    @BaseCase.grant_access("/taxonomy/add_taxonomy_value_hierarchy")
    def test_ko_category_hierarchy_not_existing(self, token):
        self.db.insert({"name": "CAT1"}, self.db.tables["TaxonomyCategory"])
        self.db.insert({"name": "CAT2"}, self.db.tables["TaxonomyCategory"])
        self.db.insert({"id": 1, "name": "VAL1", "category": "CAT1"}, self.db.tables["TaxonomyValue"])
        self.db.insert({"id": 2, "name": "VAL2", "category": "CAT2"}, self.db.tables["TaxonomyValue"])
        payload = {
            "parent_value": 1,
            "child_value": 2,
        }
        response = self.application.post('/taxonomy/add_taxonomy_value_hierarchy',
                                         headers=self.get_standard_post_header(token),
                                         json=payload)
        self.assertEqual("422 Hierarchy between the categories of the values does not exist", response.status)

    @BaseCase.login
    @BaseCase.grant_access("/taxonomy/add_taxonomy_value_hierarchy")
    def test_ko_duplicate_entry(self, token):
        self.db.insert({"name": "CAT1"}, self.db.tables["TaxonomyCategory"])
        self.db.insert({"name": "CAT2"}, self.db.tables["TaxonomyCategory"])
        self.db.insert({"parent_category": "CAT1", "child_category": "CAT2"},
                       self.db.tables["TaxonomyCategoryHierarchy"])
        self.db.insert({"id": 1, "name": "VAL1", "category": "CAT1"}, self.db.tables["TaxonomyValue"])
        self.db.insert({"id": 2, "name": "VAL2", "category": "CAT2"}, self.db.tables["TaxonomyValue"])
        self.db.insert({"parent_value": 1, "child_value": 2}, self.db.tables["TaxonomyValueHierarchy"])
        payload = {
            "parent_value": 1,
            "child_value": 2,
        }
        response = self.application.post('/taxonomy/add_taxonomy_value_hierarchy',
                                         headers=self.get_standard_post_header(token),
                                         json=payload)
        self.assertEqual("422 This relation is already existing", response.status)

    @BaseCase.login
    @BaseCase.grant_access("/taxonomy/add_taxonomy_value_hierarchy")
    @patch('db.db.DB.insert')
    def test_ko_force_integrity_error_out_of_duplicate(self, mock_db_insert, token):
        self.db.session.add(self.db.tables["TaxonomyCategory"](**{"name": "CAT1"}))
        self.db.session.add(self.db.tables["TaxonomyCategory"](**{"name": "CAT2"}))
        self.db.session.commit()
        self.db.session.add(self.db.tables["TaxonomyCategoryHierarchy"]
                            (**{"parent_category": "CAT1", "child_category": "CAT2"}))
        self.db.session.add(self.db.tables["TaxonomyValue"](**{"id": 1, "name": "My Value", "category": "CAT1"}))
        self.db.session.add(self.db.tables["TaxonomyValue"](**{"id": 2, "name": "My Value2", "category": "CAT2"}))
        self.db.session.commit()
        # The first DB.insert call raises a generic IntegrityError (i.e. not a
        # duplicate-key situation); a second call would succeed. This lets the
        # test check that non-duplicate integrity errors surface as HTTP 500.
        mock_db_insert.side_effect = [IntegrityError(None, None, None), None]
        payload = {
            "parent_value": 1,
            "child_value": 2,
        }
        response = self.application.post('/taxonomy/add_taxonomy_value_hierarchy',
                                         headers=self.get_standard_post_header(token),
                                         json=payload)
        self.assertEqual(500, response.status_code)
        self.assertEqual(mock_db_insert.call_count, 2)
| 45.48 | 114 | 0.602316 | 732 | 6,822 | 5.428962 | 0.122951 | 0.07851 | 0.075491 | 0.08455 | 0.843986 | 0.81379 | 0.800705 | 0.790639 | 0.779819 | 0.726472 | 0 | 0.018303 | 0.255204 | 6,822 | 149 | 115 | 45.785235 | 0.763826 | 0 | 0 | 0.683761 | 0 | 0 | 0.26136 | 0.095427 | 0 | 0 | 0 | 0 | 0.102564 | 1 | 0.059829 | false | 0 | 0.025641 | 0 | 0.094017 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6f41852cde61822ec2e0ece959878a7294ecd08a | 60 | py | Python | 001113StepikPyGEK/StepikPyGEK001113сh02p04st02T02_20200407.py | SafonovMikhail/python_000577 | 739f764e80f1ca354386f00b8e9db1df8c96531d | [
"Apache-2.0"
] | null | null | null | 001113StepikPyGEK/StepikPyGEK001113сh02p04st02T02_20200407.py | SafonovMikhail/python_000577 | 739f764e80f1ca354386f00b8e9db1df8c96531d | [
"Apache-2.0"
] | null | null | null | 001113StepikPyGEK/StepikPyGEK001113сh02p04st02T02_20200407.py | SafonovMikhail/python_000577 | 739f764e80f1ca354386f00b8e9db1df8c96531d | [
"Apache-2.0"
] | null | null | null | print(float(input()))
print(float(input()) + float(input())) | 30 | 38 | 0.666667 | 8 | 60 | 5 | 0.375 | 0.75 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.05 | 60 | 2 | 38 | 30 | 0.701754 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
489bb3e6bacd2c792117c37d97f81df793ce64b7 | 193 | py | Python | tests/test_A070939.py | TimothyDJones/oeis | d9d608bc32ee31c73c139e1b68e4eb6315205e8d | [
"MIT"
] | 21 | 2020-03-21T17:50:13.000Z | 2022-01-18T01:52:47.000Z | tests/test_A070939.py | TimothyDJones/oeis | d9d608bc32ee31c73c139e1b68e4eb6315205e8d | [
"MIT"
] | 296 | 2019-11-18T14:04:36.000Z | 2022-03-27T21:59:24.000Z | tests/test_A070939.py | TimothyDJones/oeis | d9d608bc32ee31c73c139e1b68e4eb6315205e8d | [
"MIT"
] | 29 | 2019-11-18T11:56:22.000Z | 2022-03-26T22:31:57.000Z | from oeis import A070939

def test_sequence():
    assert A070939[:10] == [
        1,
        1,
        2,
        2,
        3,
        3,
        3,
        3,
        4,
        4,
    ]
| 11.352941 | 28 | 0.321244 | 20 | 193 | 3.05 | 0.65 | 0.098361 | 0.098361 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.292683 | 0.57513 | 193 | 16 | 29 | 12.0625 | 0.45122 | 0 | 0 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.071429 | 1 | 0.071429 | true | 0 | 0.071429 | 0 | 0.142857 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
48ae67024e474c663e03b9db24a9f9d3d1a32ad6 | 41 | py | Python | spatial_utils/io/__init__.py | kevinyamauchi/spatial-utils | 239aed21e4c45375baf328a5d9c9e6401b94f386 | [
"BSD-3-Clause"
] | 1 | 2021-09-07T09:58:18.000Z | 2021-09-07T09:58:18.000Z | spatial_utils/io/__init__.py | kevinyamauchi/squidpy-utils | 239aed21e4c45375baf328a5d9c9e6401b94f386 | [
"BSD-3-Clause"
] | null | null | null | spatial_utils/io/__init__.py | kevinyamauchi/squidpy-utils | 239aed21e4c45375baf328a5d9c9e6401b94f386 | [
"BSD-3-Clause"
] | null | null | null | from .visium import load_visium_kallisto
| 20.5 | 40 | 0.878049 | 6 | 41 | 5.666667 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.097561 | 41 | 1 | 41 | 41 | 0.918919 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
48b71a00bea7ee1d9deb2a2df09c9527e8549cb1 | 129 | py | Python | example/core/views.py | COEXCZ/django-adminfilter | d66f6a3c5294156c01db1cf1927942f7cb31119a | [
"0BSD"
] | 3 | 2015-11-17T15:32:02.000Z | 2021-08-06T16:16:04.000Z | example/core/views.py | COEXCZ/django-adminfilter | d66f6a3c5294156c01db1cf1927942f7cb31119a | [
"0BSD"
] | null | null | null | example/core/views.py | COEXCZ/django-adminfilter | d66f6a3c5294156c01db1cf1927942f7cb31119a | [
"0BSD"
] | null | null | null | # -*- coding: utf-8 -*-
from django.shortcuts import render

def homepage(request):
    return render(request, 'homepage.html')
| 18.428571 | 43 | 0.697674 | 16 | 129 | 5.625 | 0.8125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009174 | 0.155039 | 129 | 6 | 44 | 21.5 | 0.816514 | 0.162791 | 0 | 0 | 0 | 0 | 0.122642 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
48c68c9b5a4ad5fc100ec37295a87089eeef343c | 186 | py | Python | src/compas_rv2/singular/rhino/artists/__init__.py | selinabitting/compas-RV2 | 0884cc00d09c8f4a75eb2b97614105e4c8bfd818 | [
"MIT"
] | 4 | 2022-01-17T19:17:22.000Z | 2022-01-21T18:06:02.000Z | src/compas_rv2/singular/rhino/artists/__init__.py | selinabitting/compas-RV2 | 0884cc00d09c8f4a75eb2b97614105e4c8bfd818 | [
"MIT"
] | null | null | null | src/compas_rv2/singular/rhino/artists/__init__.py | selinabitting/compas-RV2 | 0884cc00d09c8f4a75eb2b97614105e4c8bfd818 | [
"MIT"
] | null | null | null | from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from .patternartist import PatternArtist

__all__ = [
    'PatternArtist'
]
| 18.6 | 40 | 0.822581 | 20 | 186 | 6.75 | 0.45 | 0.222222 | 0.355556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.145161 | 186 | 9 | 41 | 20.666667 | 0.849057 | 0 | 0 | 0 | 0 | 0 | 0.069892 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.571429 | 0 | 0.571429 | 0.142857 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
48d5547ddeca0ad2a9bbef606821f16855629790 | 121 | py | Python | CodeWars/2016/2+2Problem-8k.py | JLJTECH/TutorialTesting | f2dbbd49a86b3b086d0fc156ac3369fb74727f86 | [
"MIT"
] | null | null | null | CodeWars/2016/2+2Problem-8k.py | JLJTECH/TutorialTesting | f2dbbd49a86b3b086d0fc156ac3369fb74727f86 | [
"MIT"
] | null | null | null | CodeWars/2016/2+2Problem-8k.py | JLJTECH/TutorialTesting | f2dbbd49a86b3b086d0fc156ac3369fb74727f86 | [
"MIT"
] | null | null | null | # Help Steve fix the program so the result is identical to the MS Windows calculator.
def calculate():
    return (2 + 2) * 2
5b07ce460a8e0b892aa112f34e529ba8fec56b85 | 49,414 | py | Python | mindspore/ops/operations/_quant_ops.py | fufunoyu/mindspore | 704e367ada35653e8144eb0528c714f4b0231508 | [
"Apache-2.0"
] | 2 | 2021-04-22T07:00:59.000Z | 2021-11-08T02:49:09.000Z | mindspore/ops/operations/_quant_ops.py | fufunoyu/mindspore | 704e367ada35653e8144eb0528c714f4b0231508 | [
"Apache-2.0"
] | 1 | 2020-12-29T06:46:38.000Z | 2020-12-29T06:46:38.000Z | mindspore/ops/operations/_quant_ops.py | kungfu-ml/mindspore | 3fa5dd4495f4071b701e7ff490b7085b8824aaaa | [
"Apache-2.0"
] | 1 | 2021-05-10T03:30:36.000Z | 2021-05-10T03:30:36.000Z | # Copyright 2020 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""Operators for quantization."""
import mindspore.context as context
from ..._checkparam import Validator as validator
from ..._checkparam import Rel
from ..primitive import PrimitiveWithInfer, prim_attr_register
from ...common import dtype as mstype
__all__ = ["MinMaxUpdatePerLayer",
           "MinMaxUpdatePerChannel",
           "FakeQuantPerLayer",
           "FakeQuantPerLayerGrad",
           "FakeQuantPerChannel",
           "FakeQuantPerChannelGrad",
           "BatchNormFold",
           "BatchNormFoldGrad",
           "CorrectionMul",
           "CorrectionMulGrad",
           "CorrectionMulGradReduce",
           "BatchNormFold2",
           "BatchNormFold2Grad",
           "BatchNormFoldD",
           "BatchNormFoldGradD",
           "BatchNormFold2_D",
           "BatchNormFold2GradD",
           "BatchNormFold2GradReduce"
           ]
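
# Background sketch (added note, not part of the backend kernels): "fake"
# quantization typically maps a float x onto a num_bits integer grid and back:
#     scale = (max - min) / (2 ** num_bits - 1)   # one code fewer when
#                                                 # narrow_range is True
#     q = round((clip(x, min, max) - min) / scale)
#     out = q * scale + min
# The primitives below expose min/max tracking plus this quantize/dequantize
# pair; the exact arithmetic lives in the per-device implementations imported
# lazily inside each __init__.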


class MinMaxUpdatePerLayer(PrimitiveWithInfer):
    r"""
    Updates min and max per layer.

    Args:
        ema (bool): Whether to use the EMA algorithm to update min and max. Default: False.
        ema_decay (float): EMA algorithm decay parameter. Default: 0.999.

    Inputs:
        - **x** (Tensor) : float32 input tensor whose running min and max are tracked.
        - **min** (Tensor) : Value of the min range of the input data x.
        - **max** (Tensor) : Value of the max range of the input data x.

    Outputs:
        - **min_up** (Tensor) : Updated value of the min range.
        - **max_up** (Tensor) : Updated value of the max range.

    Examples:
        >>> input_tensor = Tensor(np.random.rand(3, 16, 5, 5), mstype.float32)
        >>> min_tensor = Tensor(np.array([-6]), mstype.float32)
        >>> max_tensor = Tensor(np.array([6]), mstype.float32)
        >>> output_tensor = MinMaxUpdatePerLayer(num_bits=8)(input_tensor, min_tensor, max_tensor)
    """
    support_quant_bit = [4, 7, 8]

    @prim_attr_register
    def __init__(self, ema=False, ema_decay=0.999):
        """Initialize FakeQuantMinMaxPerLayerUpdate OP"""
        if context.get_context('device_target') == "Ascend":
            from mindspore.ops._op_impl._custom_op import minmax_update_perlayer
        if ema and not ema_decay:
            raise ValueError(
                f"For '{self.name}' attr \'ema\' and \'ema_decay\' should set together.")

        self.ema = validator.check_value_type('ema', ema, (bool,), self.name)
        self.ema_decay = validator.check_number_range(
            'ema_decay', ema_decay, 0, 1, Rel.INC_BOTH, self.name)
        self.init_prim_io_names(inputs=['x', 'min', 'max'],
                                outputs=['min_up', 'max_up'])

    def infer_shape(self, x_shape, min_shape, max_shape):
        validator.check_integer("x rank", len(x_shape), 1, Rel.GE, self.name)
        validator.check("min shape", min_shape, "max shape",
                        max_shape, Rel.EQ, self.name)
        validator.check_integer("min shape", len(
            min_shape), 1, Rel.EQ, self.name)
        return min_shape, max_shape

    def infer_dtype(self, x_type, min_type, max_type):
        valid_types = (mstype.float16, mstype.float32)
        validator.check_tensor_type_same({"x": x_type}, valid_types, self.name)
        validator.check_tensor_type_same(
            {"min": min_type}, valid_types, self.name)
        validator.check_tensor_type_same(
            {"max": max_type}, valid_types, self.name)
        return min_type, max_type
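
# Sketch of the standard EMA form (the concrete update is in the backend
# kernel, so treat this as an assumption about its behaviour):
#     min_up = ema_decay * min + (1 - ema_decay) * min(x)
#     max_up = ema_decay * max + (1 - ema_decay) * max(x)
# With ema=False the running values are typically replaced by the batch
# min/max directly.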


class MinMaxUpdatePerChannel(PrimitiveWithInfer):
    r"""
    Updates min and max per channel.

    Args:
        ema (bool): Whether to use the EMA algorithm to update min and max. Default: False.
        ema_decay (float): EMA algorithm decay parameter. Default: 0.999.
        channel_axis (int): Quantization by channel axis. Ascend backend only supports 0 or 1. Default: 1.

    Inputs:
        - **x** (Tensor) : float32 input tensor whose per-channel min and max are tracked.
        - **min** (Tensor) : Values of the min range of the input data x, one per channel.
        - **max** (Tensor) : Values of the max range of the input data x, one per channel.

    Outputs:
        - **min_up** (Tensor) : Updated values of the min range.
        - **max_up** (Tensor) : Updated values of the max range.

    Examples:
        >>> x = Tensor(np.random.rand(3, 16, 5, 5), mstype.float32)
        >>> min = Tensor(np.random.uniform(-1, 1, size=16), mstype.float32)
        >>> max = Tensor(np.random.uniform(-1, 1, size=16), mstype.float32)
        >>> output_tensor = MinMaxUpdatePerChannel(num_bits=8)(x, min, max)
    """
    support_quant_bit = [4, 7, 8]
    ascend_support_x_rank = [2, 4]

    @prim_attr_register
    def __init__(self, ema=False, ema_decay=0.999, channel_axis=1):
        """Initialize FakeQuantPerChannelUpdate OP for Ascend"""
        self.is_ascend = context.get_context('device_target') == "Ascend"
        if self.is_ascend:
            from mindspore.ops._op_impl._custom_op import minmax_update_perchannel
        if ema and not ema_decay:
            raise ValueError(
                f"For '{self.name}' attr \'ema\' and \'ema_decay\' should set together.")

        self.ema = validator.check_value_type('ema', ema, (bool,), self.name)
        self.ema_decay = validator.check_number_range(
            'ema_decay', ema_decay, 0, 1, Rel.INC_BOTH, self.name)
        if self.is_ascend:
            self.channel_axis = validator.check_int_range('channel_axis', channel_axis, 0, 1, Rel.INC_BOTH, self.name)
        else:
            self.channel_axis = validator.check_integer('channel_axis', channel_axis, 0, Rel.GE, self.name)
        self.init_prim_io_names(
            inputs=['x', 'min', 'max'], outputs=['min_up', 'max_up'])

    def infer_shape(self, x_shape, min_shape, max_shape):
        if self.is_ascend and len(x_shape) not in self.ascend_support_x_rank:
            raise ValueError(f"For '{self.name}' x rank should be in '{self.ascend_support_x_rank}'")
        if not self.is_ascend:
            validator.check_integer("x rank", len(x_shape), 1, Rel.GE, self.name)
        validator.check("min shape", min_shape, "max shape",
                        max_shape, Rel.EQ, self.name)
        validator.check_integer("min shape", len(
            min_shape), 1, Rel.EQ, self.name)
        return min_shape, max_shape

    def infer_dtype(self, x_type, min_type, max_type):
        valid_types = (mstype.float16, mstype.float32)
        validator.check_tensor_type_same(
            {"x": x_type}, valid_types, self.name)
        validator.check_tensor_type_same(
            {"min": min_type}, valid_types, self.name)
        validator.check_tensor_type_same(
            {"max": max_type}, valid_types, self.name)
        return min_type, max_type


class FakeQuantPerLayer(PrimitiveWithInfer):
    r"""
    Simulates the quantize and dequantize operations at training time.

    Args:
        num_bits (int): Number of bits for quantization. Default: 8.
        ema (bool): Whether to use the EMA algorithm to update min and max. Default: False.
        ema_decay (float): EMA algorithm decay parameter. Default: 0.999.
        quant_delay (int): Quantization delay parameter. Simulated quantization is skipped for
            the first `quant_delay` training steps and applied afterwards. Default: 0.
        symmetric (bool): Whether the quantization algorithm is symmetric or not. Default: False.
        narrow_range (bool): Whether the quantization algorithm uses narrow range or not. Default: False.
        training (bool): Training the network or not. Default: True.

    Inputs:
        - **x** (Tensor) : float32 tensor to be fake-quantized.
        - **min** (Tensor) : Value of the min range of the input data x.
        - **max** (Tensor) : Value of the max range of the input data x.

    Outputs:
        - Tensor: Simulated quantized tensor of x, with the same shape and type as the input.

    Examples:
        >>> input_tensor = Tensor(np.random.rand(3, 16, 5, 5), mstype.float32)
        >>> min_tensor = Tensor(np.array([-6]), mstype.float32)
        >>> max_tensor = Tensor(np.array([6]), mstype.float32)
        >>> output_tensor = FakeQuantPerLayer(num_bits=8)(input_tensor, min_tensor, max_tensor)
    """
    support_quant_bit = [4, 7, 8]

    @prim_attr_register
    def __init__(self,
                 num_bits=8,
                 ema=False,
                 ema_decay=0.999,
                 quant_delay=0,
                 symmetric=False,
                 narrow_range=False,
                 training=True):
        """Initialize FakeQuantPerLayer OP"""
        if context.get_context('device_target') == "Ascend":
            from mindspore.ops._op_impl._custom_op import fake_quant_perlayer
        if num_bits not in self.support_quant_bit:
            raise ValueError(
                f"For '{self.name}' attr \'num_bits\' is not support.")
        if ema and not ema_decay:
            raise ValueError(
                f"For '{self.name}' attr \'ema\' and \'ema_decay\' should set together.")

        self.ema = validator.check_value_type('ema', ema, (bool,), self.name)
        self.symmetric = validator.check_value_type(
            'symmetric', symmetric, (bool,), self.name)
        self.narrow_range = validator.check_value_type(
            'narrow_range', narrow_range, (bool,), self.name)
        self.training = validator.check_value_type(
            'training', training, (bool,), self.name)
        self.ema_decay = validator.check_number_range(
            'ema_decay', ema_decay, 0, 1, Rel.INC_BOTH, self.name)
        self.num_bits = validator.check_integer(
            'num_bits', num_bits, 0, Rel.GT, self.name)
        self.quant_delay = validator.check_integer(
            'quant_delay', quant_delay, 0, Rel.GE, self.name)
        self.init_prim_io_names(inputs=['x', 'min', 'max'],
                                outputs=['out'])

    def infer_shape(self, x_shape, min_shape, max_shape):
        validator.check_integer("x rank", len(x_shape), 1, Rel.GE, self.name)
        validator.check("min shape", min_shape, "max shape", max_shape, Rel.EQ, self.name)
        validator.check_integer("min shape", len(min_shape), 1, Rel.EQ, self.name)
        return x_shape

    def infer_dtype(self, x_type, min_type, max_type):
        valid_types = (mstype.float16, mstype.float32)
        validator.check_tensor_type_same({"x": x_type}, valid_types, self.name)
        validator.check_tensor_type_same(
            {"min": min_type}, valid_types, self.name)
        validator.check_tensor_type_same(
            {"max": max_type}, valid_types, self.name)
        return x_type
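
# Note on the two flags (common convention in fake-quant implementations;
# not asserted by this file):
#     symmetric=True    usually constrains the range to be symmetric about
#                       zero, i.e. max = -min;
#     narrow_range=True drops the lowest code so the integer grid becomes
#                       [1, 2 ** num_bits - 1] instead of [0, 2 ** num_bits - 1].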


class FakeQuantPerLayerGrad(PrimitiveWithInfer):
    r"""
    Computes the gradient of the FakeQuantPerLayer operation.

    Examples:
        >>> fake_min_max_grad = FakeQuantPerLayerGrad()
        >>> dout = Tensor(np.array([[-2.3, 1.2], [5.7, 0.2]]), mindspore.float32)
        >>> input_x = Tensor(np.array([[18, -23], [0.2, 6]]), mindspore.float32)
        >>> _min = Tensor(np.array([-4]), mindspore.float32)
        >>> _max = Tensor(np.array([2]), mindspore.float32)
        >>> result = fake_min_max_grad(dout, input_x, _min, _max)
    """
    support_quant_bit = [4, 7, 8]

    @prim_attr_register
    def __init__(self,
                 num_bits=8,
                 quant_delay=0,
                 symmetric=False,
                 narrow_range=False):
        if context.get_context('device_target') == "Ascend":
            from mindspore.ops._op_impl._custom_op import fake_quant_perlayer_grad
        if num_bits not in self.support_quant_bit:
            raise ValueError(
                f"For '{self.name}' attr \'num_bits\' is not support.")

        self.num_bits = validator.check_integer(
            'num_bits', num_bits, 0, Rel.GT, self.name)
        self.quant_delay = validator.check_value_type(
            'quant_delay', quant_delay, (int,), self.name)
        self.symmetric = validator.check_value_type(
            'symmetric', symmetric, (bool,), self.name)
        self.narrow_range = validator.check_value_type(
            'narrow_range', narrow_range, (bool,), self.name)
        self.init_prim_io_names(
            inputs=['dout', 'x', 'min', 'max'], outputs=['dx'])

    def infer_shape(self, dout_shape, x_shape, min_shape, max_shape):
        validator.check("dout shape", dout_shape, "x shape",
                        x_shape, Rel.EQ, self.name)
        validator.check("min shape", min_shape, "max shape",
                        max_shape, Rel.EQ, self.name)
        validator.check_integer("min shape", len(
            min_shape), 1, Rel.EQ, self.name)
        return dout_shape

    def infer_dtype(self, dout_type, x_type, min_type, max_type):
        valid_types = (mstype.float16, mstype.float32)
        validator.check_tensor_type_same(
            {"dout": dout_type}, valid_types, self.name)
        validator.check_tensor_type_same({"x": x_type}, valid_types, self.name)
        validator.check_tensor_type_same(
            {"min": min_type}, valid_types, self.name)
        validator.check_tensor_type_same(
            {"max": max_type}, valid_types, self.name)
        return dout_type
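
# Sketch of the usual straight-through estimator for fake-quant backward
# (an assumption about the backend kernel, written here for orientation):
#     dx = dout * (x >= min) * (x <= max)
# i.e. gradients pass through unchanged inside the clipping range and are
# zeroed outside it.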
class FakeQuantPerChannel(PrimitiveWithInfer):
r"""
Simulates the quantize and dequantize operations in training time base on per channel.
Args:
num_bits (int) : Number bits to quantilization. Default: 8.
ema (bool): Uses EMA algorithm update tensor min and tensor max. Default: False.
ema_decay (int) : EMA algorithm decay parameter. Default: 0.999.
quant_delay (int): Quantilization delay parameter. Before delay step in training time not
update the weight data to simulate quantize operation. After delay step in training time
begin simulate the quantize operation. Default: 0.
symmetric (bool): Whether the quantization algorithm is symmetric or not. Default: False.
narrow_range (bool): Whether the quantization algorithm uses narrow range or not. Default: False.
training (bool): Training the network or not. Default: True.
channel_axis (int): Quantization by channel axis. Ascend backend only supports 0 or 1. Default: 1.
Inputs:
- **x** (Tensor) : 4-D float32 Tensor representing the shape of the output tensor.
- **min** (int, float) : Value of the min range of the input data.
- **max** (int, float) : Value of the max range of the input data.
Outputs:
- Tensor, has the same type as input.
Examples:
>>> fake_quant = FakeQuantPerChannel()
>>> input_x = Tensor(np.array([3, 4, 5, -2, -3, -1]).reshape(3, 2), mindspore.float32)
>>> _min = Tensor(np.linspace(-2, 2, 12).reshape(3, 2, 2), mindspore.float32)
>>> _max = Tensor(np.linspace(8, 12, 12).reshape(3, 2, 2), mindspore.float32)
>>> result = fake_quant(input_x, _min, _max)
"""
support_quant_bit = [4, 7, 8]
ascend_support_x_rank = [2, 4]
@prim_attr_register
def __init__(self,
num_bits=8,
ema=False,
ema_decay=0.999,
quant_delay=0,
symmetric=False,
narrow_range=False,
training=True,
channel_axis=1):
"""Initialize FakeQuantPerChannel OP"""
self.is_ascend = context.get_context('device_target') == "Ascend"
if self.is_ascend:
from mindspore.ops._op_impl._custom_op import fake_quant_perchannel
if num_bits not in self.support_quant_bit:
raise ValueError(
f"For '{self.name}' Attr \'num_bits\' is not support.")
if ema and not ema_decay:
raise ValueError(
f"For '{self.name}' attr \'ema\' and \'ema_decay\' should set together.")
self.ema = validator.check_value_type('ema', ema, (bool,), self.name)
self.symmetric = validator.check_value_type(
'symmetric', symmetric, (bool,), self.name)
self.narrow_range = validator.check_value_type(
'narrow_range', narrow_range, (bool,), self.name)
self.training = validator.check_value_type(
'training', training, (bool,), self.name)
self.ema_decay = validator.check_number_range(
'ema_decay', ema_decay, 0, 1, Rel.INC_BOTH, self.name)
self.num_bits = validator.check_integer(
'num_bits', num_bits, 0, Rel.GT, self.name)
self.quant_delay = validator.check_integer(
'quant_delay', quant_delay, 0, Rel.GE, self.name)
if self.is_ascend:
self.channel_axis = validator.check_int_range('channel_axis', channel_axis, 0, 1, Rel.INC_BOTH, self.name)
else:
self.channel_axis = validator.check_integer('channel_axis', channel_axis, 0, Rel.GE, self.name)
self.init_prim_io_names(inputs=['x', 'min', 'max'], outputs=['out'])
def infer_shape(self, x_shape, min_shape, max_shape):
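        # Shape contract (mirrors the checks below): Ascend accepts only rank-2/4
        # inputs; other backends accept any rank >= 1, and a 1-D input falls back
        # to channel_axis 0 so min/max remain per-channel vectors.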
if self.is_ascend and len(x_shape) not in self.ascend_support_x_rank:
raise ValueError(f"For '{self.name}' x rank should be in '{self.ascend_support_x_rank}'")
if not self.is_ascend:
validator.check_integer("x rank", len(x_shape), 1, Rel.GE, self.name)
if len(x_shape) == 1:
self.channel_axis = 0
validator.check("min shape", min_shape, "max shape", max_shape, Rel.EQ, self.name)
validator.check_integer(
"min shape", min_shape[0], x_shape[self.channel_axis], Rel.EQ, self.name)
validator.check_integer(
"max shape", max_shape[0], x_shape[self.channel_axis], Rel.EQ, self.name)
return x_shape
def infer_dtype(self, x_type, min_type, max_type):
valid_types = (mstype.float16, mstype.float32)
validator.check_tensor_type_same({"x": x_type}, valid_types, self.name)
validator.check_tensor_type_same(
{"min": min_type}, valid_types, self.name)
validator.check_tensor_type_same(
{"max": max_type}, valid_types, self.name)
return x_type
class FakeQuantPerChannelGrad(PrimitiveWithInfer):
r"""
    Performs grad of FakeQuantPerChannel operation.
Examples:
>>> fqmmpc_grad = FakeQuantPerChannelGrad()
>>> input_x = Tensor(np.random.randint(-4, 4, (2, 3, 4)), mindspore.float32)
>>> dout = Tensor(np.random.randint(-2, 2, (2, 3, 4)), mindspore.float32)
>>> _min = Tensor(np.random.randint(-8, 2, (2, 3, 4)), mindspore.float32)
>>> _max = Tensor(np.random.randint(-2, 8, (2, 3, 4)), mindspore.float32)
>>> result = fqmmpc_grad(dout, input_x, _min, _max)
"""
support_quant_bit = [4, 7, 8]
@prim_attr_register
def __init__(self,
num_bits=8,
quant_delay=0,
symmetric=False,
narrow_range=False,
channel_axis=1):
"""Initialize FakeQuantPerChannelGrad Fill"""
if context.get_context('device_target') == "Ascend":
from mindspore.ops._op_impl._custom_op import fake_quant_perchannel_grad
if num_bits not in self.support_quant_bit:
            raise ValueError(
                f"For '{self.name}', attr 'num_bits' is not supported.")
self.num_bits = validator.check_integer(
'num_bits', num_bits, 0, Rel.GT, self.name)
self.quant_delay = validator.check_value_type(
'quant_delay', quant_delay, (int,), self.name)
self.symmetric = validator.check_value_type(
'symmetric', symmetric, (bool,), self.name)
self.narrow_range = validator.check_value_type(
'narrow_range', narrow_range, (bool,), self.name)
        self.channel_axis = validator.check_integer(
            'channel_axis', channel_axis, 0, Rel.GE, self.name)
self.init_prim_io_names(
inputs=['dout', 'x', 'min', 'max'], outputs=['dx'])
def infer_shape(self, dout_shape, x_shape, min_shape, max_shape):
validator.check("dout shape", dout_shape, "x shape", x_shape)
validator.check("min shape", min_shape, "max shape", max_shape)
return dout_shape
def infer_dtype(self, dout_type, x_type, min_type, max_type):
valid_types = (mstype.float16, mstype.float32)
validator.check_tensor_type_same(
{"dout": dout_type}, valid_types, self.name)
validator.check_tensor_type_same({"x": x_type}, valid_types, self.name)
validator.check_tensor_type_same(
{"min": min_type}, valid_types, self.name)
validator.check_tensor_type_same(
{"max": max_type}, valid_types, self.name)
return dout_type
class BatchNormFold(PrimitiveWithInfer):
"""
Batch normalization folded.
Args:
momentum (float): Momentum value must be [0, 1]. Default: 0.9.
        epsilon (float): A small float added to the variance to avoid division by zero:
            use 1e-5 for float32 inputs, otherwise 1e-3. Default: 1e-5.
is_training (bool): In training mode set True, else set False. Default: True.
freeze_bn (int): Delay in steps at which computation switches from regular batch
norm to frozen mean and std. Default: 0.
Inputs:
- **x** (Tensor) - Tensor of shape :math:`(N, C)`.
- **mean** (Tensor) - Tensor of shape :math:`(C,)`.
- **variance** (Tensor) - Tensor of shape :math:`(C,)`.
- **global_step** (Tensor) - Tensor to record current global step.
Outputs:
Tuple of 4 Tensor, the normalized input and the updated parameters.
- **batch_mean** (Tensor) - Tensor of shape :math:`(C,)`.
- **batch_std** (Tensor) - Tensor of shape :math:`(C,)`.
- **running_mean** (Tensor) - Tensor of shape :math:`(C,)`.
- **running_std** (Tensor) - Tensor of shape :math:`(C,)`.
Examples:
>>> batch_norm_fold = P.BatchNormFold()
>>> input_x = Tensor(np.array([1, 2, -1, -2, -2, 1]).reshape(2, 3), mindspore.float32)
>>> mean = Tensor(np.array([0.5, -1, 1,]), mindspore.float32)
>>> variance = Tensor(np.array([0.36, 0.4, 0.49]), mindspore.float32)
>>> global_step = Tensor(np.arange(6), mindspore.int32)
>>> batch_mean, batch_std, running_mean, running_std = batch_norm_fold(input_x, mean, variance, global_step)
"""
channel_axis = 1
@prim_attr_register
def __init__(self, momentum=0.9, epsilon=1e-5, is_training=True, freeze_bn=0):
"""Initialize batch norm fold layer"""
self.momentum = validator.check_number_range('momentum', momentum, 0, 1, Rel.INC_BOTH, self.name)
self.epsilon = validator.check_float_positive('epsilon', epsilon, self.name)
self.is_training = validator.check_value_type('is_training', is_training, (bool,), self.name)
self.freeze_bn = validator.check_value_type('freeze_bn', freeze_bn, (int,), self.name)
self.init_prim_io_names(inputs=['x', 'mean', 'variance', 'global_step'],
outputs=['batch_mean', 'batch_std', 'running_mean', 'running_std'])
def infer_shape(self, x_shape, mean_shape, variance_shape, global_step_shape):
validator.check("mean shape", mean_shape, "gamma_shape", variance_shape, Rel.EQ, self.name)
validator.check("mean_shape[0]", mean_shape[0], "input channel", x_shape[self.channel_axis], Rel.EQ, self.name)
validator.check_integer("global step shape len", len(global_step_shape), 1, Rel.EQ, self.name)
return mean_shape, mean_shape, mean_shape, mean_shape
def infer_dtype(self, x_type, mean_type, variance_type, global_step_type):
validator.check("input type", x_type, "mean type", mean_type)
validator.check("input type", x_type, "variance type", variance_type)
args = {"x": x_type, "mean": mean_type, "variance": variance_type}
validator.check_tensor_type_same(args, (mstype.float16, mstype.float32), self.name)
validator.check_tensor_type_same({"global_step": global_step_type}, (mstype.int32,), self.name)
return x_type, x_type, x_type, x_type
class BatchNormFoldGrad(PrimitiveWithInfer):
r"""
Performs grad of BatchNormFold operation.
Examples:
>>> batch_norm_fold_grad = P.BatchNormFoldGrad()
>>> d_batch_mean = Tensor(np.random.randint(-2., 2., (1, 2, 2, 3)), mindspore.float32)
>>> d_batch_std = Tensor(np.random.randn(1, 2, 2, 3), mindspore.float32)
>>> input_x = Tensor(np.random.randint(0, 256, (4, 1, 4, 6)), mindspore.float32)
>>> batch_mean = Tensor(np.random.randint(-8., 8., (1, 2, 2, 3)), mindspore.float32)
>>> batch_std = Tensor(np.random.randint(0, 12, (1, 2, 2, 3)), mindspore.float32)
>>> global_step = Tensor([2], mindspore.int32)
>>> result = batch_norm_fold_grad(d_batch_mean, d_batch_std, input_x, batch_mean, batch_std, global_step)
"""
channel_axis = 1
@prim_attr_register
def __init__(self, epsilon=1e-5, is_training=True, freeze_bn=0):
"""Initialize BatchNormGrad layer"""
self.is_training = validator.check_value_type('is_training', is_training, (bool,), self.name)
self.freeze_bn = validator.check_value_type('freeze_bn', freeze_bn, (int,), self.name)
self.epsilon = validator.check_float_positive('epsilon', epsilon, self.name)
self.init_prim_io_names(inputs=['d_batch_mean', 'd_batch_std', 'x', 'batch_mean', 'batch_std', 'global_step'],
outputs=['dx'])
def infer_shape(self, d_batch_mean_shape, d_batch_std_shape, x_shape, batch_mean_shape, batch_std_shape,
global_step_shape):
validator.check("d_batch_mean shape", d_batch_mean_shape,
"d_batch_std shape", d_batch_std_shape, Rel.EQ, self.name)
validator.check("d_batch_mean shape", d_batch_mean_shape,
"batch_mean shape", batch_mean_shape, Rel.EQ, self.name)
validator.check("d_batch_mean shape", d_batch_mean_shape,
"batch_std shape", batch_std_shape, Rel.EQ, self.name)
validator.check("d_batch_mean_shape[0]", d_batch_mean_shape[0],
"input channel", x_shape[self.channel_axis], Rel.EQ, self.name)
validator.check_integer("global step shape len", len(global_step_shape), 1, Rel.EQ, self.name)
return x_shape
def infer_dtype(self, d_batch_mean_type, d_batch_std_type, x_type, batch_mean_type, batch_std_type,
global_step_type):
args = {"input": x_type, "d_batch_mean": d_batch_mean_type, "d_batch_std": d_batch_std_type,
"batch_mean": batch_mean_type, "batch_std": batch_std_type}
validator.check_tensor_type_same(args, (mstype.float16, mstype.float32), self.name)
validator.check_tensor_type_same({"global_step": global_step_type}, (mstype.int32,), self.name)
return x_type
class CorrectionMul(PrimitiveWithInfer):
"""
Scales the weights with a correction factor to the long term statistics
prior to quantization. This ensures that there is no jitter in the quantized weights
due to batch to batch variation.
Inputs:
- **x** (Tensor) - Tensor of shape :math:`(N, C)`.
- **batch_std** (Tensor) - Tensor of shape :math:`(C,)`.
- **running_std** (Tensor) - Tensor of shape :math:`(C,)`.
Outputs:
- **out** (Tensor) - Tensor has the same shape as x.
Examples:
>>> correction_mul = P.CorrectionMul()
>>> input_x = Tensor(np.random.randint(-8, 12, (3, 4)), mindspore.float32)
>>> batch_std = Tensor(np.array([1.5, 3, 2]), mindspore.float32)
>>> running_std = Tensor(np.array([2, 1.2, 0.5]), mindspore.float32)
>>> out = correction_mul(input_x, batch_std, running_std)
"""
@prim_attr_register
def __init__(self, channel_axis=0):
"""Initialize correction mul layer"""
if context.get_context('device_target') == "Ascend":
from mindspore.ops._op_impl._custom_op import correction_mul
self.channel_axis = channel_axis
self.init_prim_io_names(inputs=['x', 'batch_std', 'running_std'],
outputs=['out'])
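    # Semantics sketch (an assumption for readability, not taken from the backend
    # kernel): roughly out = x * batch_std / running_std broadcast along
    # channel_axis, which is why the shape checks below tie batch_std and
    # running_std to the channel dimension of x.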
def infer_shape(self, x_shape, batch_std_shape, running_std_shape):
validator.check("batch_std shape", batch_std_shape, "running_std shape", running_std_shape, Rel.EQ, self.name)
validator.check("batch_std_shape[0]", batch_std_shape[0], "x_shape channel size", x_shape[self.channel_axis],
Rel.EQ, self.name)
return x_shape
def infer_dtype(self, x_type, batch_std_type, running_std_type):
args = {"x": x_type, "batch_std": batch_std_type, "running_std": running_std_type}
validator.check_tensor_type_same(args, (mstype.float16, mstype.float32), self.name)
return x_type
class CorrectionMulGrad(PrimitiveWithInfer):
r"""
Performs grad of CorrectionMul operation.
Examples:
>>> correction_mul_grad = P.CorrectionMulGrad()
>>> dout = Tensor(np.array([1.5, -2.2, 0.7, -3, 1.6, 2.8]).reshape(2, 1, 1, 3), mindspore.float32)
>>> input_x = Tensor(np.random.randint(0, 256, (2, 1, 1, 3)), mindspore.float32)
>>> gamma = Tensor(np.array([0.2, -0.2, 2.5, -1.]).reshape(2, 1, 2), mindspore.float32)
>>> running_std = Tensor(np.array([1.2, 0.1, 0.7, 2.3]).reshape(2, 1, 2), mindspore.float32)
>>> result = correction_mul_grad(dout, input_x, gamma, running_std)
"""
@prim_attr_register
def __init__(self, channel_axis=0):
"""Initialize correction mul layer"""
if context.get_context('device_target') == "Ascend":
from mindspore.ops._op_impl._custom_op import correction_mul_grad
self.channel_axis = channel_axis
self.init_prim_io_names(inputs=['dout', 'x', 'gamma', 'running_std'],
outputs=['dx', 'mul_dx'])
def infer_shape(self, dout_shape, x_shape, gamma_shape, running_std_shape):
validator.check("dout shape", dout_shape, "x_shape x", x_shape, Rel.EQ, self.name)
validator.check("gamma_shape[0]", gamma_shape[0], "dout channel size", dout_shape[self.channel_axis],
Rel.EQ, self.name)
validator.check("running_std_shape[0]", running_std_shape[0],
"dout channel size", dout_shape[self.channel_axis], Rel.EQ, self.name)
if context.get_context('device_target') == "Ascend":
return x_shape, x_shape
return x_shape, gamma_shape
def infer_dtype(self, dout_type, x_type, gamma_type, running_std_type):
args = {"dout": dout_type, "x": x_type, "gamma": gamma_type, "running_std": running_std_type}
validator.check_tensor_type_same(args, (mstype.float16, mstype.float32), self.name)
if context.get_context('device_target') == "Ascend":
return x_type, x_type
return x_type, gamma_type
class CorrectionMulGradReduce(PrimitiveWithInfer):
r"""
Performs grad reduce of CorrectionMul operation.
Examples:
>>> correction_mul_grad_rd = P.CorrectionMulGradReduce()
>>> dout = Tensor(np.array([1.5, -2.2, 0.7, -3, 1.6, 2.8]).reshape(2, 1, 1, 3), mindspore.float32)
>>> input_x = Tensor(np.random.randint(0, 256, (2, 1, 1, 3)), mindspore.float32)
>>> gamma = Tensor(np.array([0.2, -0.2, 2.5, -1.]).reshape(2, 1, 2), mindspore.float32)
>>> running_std = Tensor(np.array([1.2, 0.1, 0.7, 2.3]).reshape(2, 1, 2), mindspore.float32)
>>> result = correction_mul_grad_rd(dout, input_x, gamma, running_std)
"""
@prim_attr_register
def __init__(self, channel_axis=0):
"""Initialize correction mul reduce layer"""
if context.get_context('device_target') == "Ascend":
from mindspore.ops._op_impl._custom_op import correction_mul_grad
self.channel_axis = channel_axis
self.init_prim_io_names(inputs=['mul_dx'],
outputs=['d_gamma'])
def infer_shape(self, mul_dx_shape):
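        # d_gamma is a per-channel reduction of mul_dx: only the channel
        # dimension of the input survives in the output shape.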
return [mul_dx_shape[self.channel_axis]]
def infer_dtype(self, mul_dx_type):
return mul_dx_type
class BatchNormFold2(PrimitiveWithInfer):
"""
Scales the bias with a correction factor to the long term statistics
prior to quantization. This ensures that there is no jitter in the quantized bias
due to batch to batch variation.
Inputs:
- **x** (Tensor) - Tensor of shape :math:`(N, C)`.
- **beta** (Tensor) - Tensor of shape :math:`(C,)`.
- **gamma** (Tensor) - Tensor of shape :math:`(C,)`.
- **batch_std** (Tensor) - Tensor of shape :math:`(C,)`.
- **batch_mean** (Tensor) - Tensor of shape :math:`(C,)`.
- **running_std** (Tensor) - Tensor of shape :math:`(C,)`.
- **running_mean** (Tensor) - Tensor of shape :math:`(C,)`.
- **global_step** (Tensor) - Tensor to record current global step.
Outputs:
- **y** (Tensor) - Tensor has the same shape as x.
Examples:
>>> batch_norm_fold2 = P.BatchNormFold2()
>>> input_x = Tensor(np.random.randint(-6, 6, (4, 3)), mindspore.float32)
>>> beta = Tensor(np.array([0.2, -0.1, 0.25]), mindspore.float32)
>>> gamma = Tensor(np.array([-0.1, -0.25, 0.1]), mindspore.float32)
>>> batch_std = Tensor(np.array([0.1, 0.2, 0.1]), mindspore.float32)
>>> batch_mean = Tensor(np.array([0, 0.05, 0.2]), mindspore.float32)
>>> running_std = Tensor(np.array([0.1, 0.1, 0.3]), mindspore.float32)
>>> running_mean = Tensor(np.array([-0.1, 0, -0.1]), mindspore.float32)
>>> global_step = Tensor(np.random.randint(1, 8, (8, )), mindspore.int32)
>>> result = batch_norm_fold2(input_x, beta, gamma, batch_std, batch_mean,
>>> running_std, running_mean, global_step)
"""
channel_axis = 1
@prim_attr_register
def __init__(self, freeze_bn=0):
"""Initialize conv2d fold layer"""
self.freeze_bn = validator.check_value_type('freeze_bn', freeze_bn, (int,), self.name)
self.init_prim_io_names(inputs=['x', 'beta', 'gamma', 'batch_std', 'batch_mean',
'running_std', 'running_mean', 'global_step'],
outputs=['y'])
def infer_shape(self, x_shape, beta_shape, gamma_shape, batch_std_shape, running_std_shape, batch_mean_shape,
running_mean_shape, global_step_shape):
validator.check("batch_std shape", batch_std_shape, "running_std shape", running_std_shape, Rel.EQ, self.name)
validator.check("batch_std shape", batch_std_shape, "batch_mean shape", batch_mean_shape, Rel.EQ, self.name)
validator.check("batch_std shape", batch_std_shape, "beta shape", beta_shape, Rel.EQ, self.name)
validator.check("batch_std shape", batch_std_shape, "running_mean shape", running_mean_shape, Rel.EQ, self.name)
validator.check("batch_std shape", batch_std_shape, "batch_mean shape", gamma_shape, Rel.EQ, self.name)
validator.check("batch_std_shape[0]", batch_std_shape[0], "x_shape channel size", x_shape[self.channel_axis],
Rel.EQ, self.name)
validator.check_integer("global step shape len", len(global_step_shape), 1, Rel.EQ, self.name)
return x_shape
def infer_dtype(self, x_type, beta_type, gamma_type, batch_std_type, running_std_type, batch_mean_type,
running_mean_type, global_step_type):
args = {"batch_std": batch_std_type, "running_std": running_std_type, "batch_mean": batch_mean_type,
"beta": beta_type, "running_mean": running_mean_type, "gamma": gamma_type, "x": x_type}
validator.check_tensor_type_same(args, (mstype.float16, mstype.float32), self.name)
validator.check_tensor_type_same({"global_step": global_step_type}, (mstype.int32,), self.name)
return x_type
class BatchNormFold2Grad(PrimitiveWithInfer):
r"""
    Performs grad of BatchNormFold2 operation.
Examples:
>>> bnf2_grad = P.BatchNormFold2Grad()
>>> input_x = Tensor(np.arange(3*3*12*12).reshape(6, 3, 6, 12), mindspore.float32)
>>> dout = Tensor(np.random.randint(-32, 32, (6, 3, 6, 12)), mindspore.float32)
>>> gamma = Tensor(np.random.randint(-4, 4, (3, 1, 1, 2)), mindspore.float32)
>>> batch_std = Tensor(np.random.randint(0, 8, (3, 1, 1, 2)), mindspore.float32)
>>> batch_mean = Tensor(np.random.randint(-6, 6, (3, 1, 1, 2)), mindspore.float32)
>>> running_std = Tensor(np.linspace(0, 2, 6).reshape(3, 1, 1, 2), mindspore.float32)
>>> running_mean = Tensor(np.random.randint(-3, 3, (3, 1, 1, 2)), mindspore.float32)
>>> global_step = Tensor(np.array([-2]), mindspore.int32)
>>> result = bnf2_grad(dout, input_x, gamma, batch_std, batch_mean, running_std, running_mean, global_step)
"""
channel_axis = 1
@prim_attr_register
def __init__(self, freeze_bn=0):
"""Initialize MulFold layer"""
self.freeze_bn = freeze_bn
self.init_prim_io_names(inputs=['dout', 'x', 'gamma',
'batch_std', 'batch_mean',
'running_std', 'running_mean', 'global_step'],
outputs=['d_batch_std', 'd_batch_mean', 'd_beta', 'd_gamma', 'dx'])
def infer_shape(self, dout_shape, x_shape, gamma_shape,
batch_std_shape, batch_mean_shape,
running_std_shape, running_mean_shape, global_step_shape):
validator.check("batch_std shape", batch_std_shape, "batch_mean shape", batch_mean_shape, Rel.EQ, self.name)
validator.check("batch_std shape", batch_std_shape, "running_std shape", running_std_shape, Rel.EQ, self.name)
validator.check("batch_std shape", batch_std_shape, "running_mean shape", running_mean_shape, Rel.EQ, self.name)
validator.check("batch_std shape", batch_std_shape, "gamma shape", gamma_shape, Rel.EQ, self.name)
validator.check("batch_std size", batch_std_shape[0], "dout channel size", dout_shape[self.channel_axis],
Rel.EQ, self.name)
validator.check_integer("global step shape len", len(global_step_shape), 1, Rel.EQ, self.name)
return gamma_shape, gamma_shape, gamma_shape, gamma_shape, x_shape
def infer_dtype(self, dout_type, x_type, gamma_type,
batch_std_type, batch_mean_type,
running_std_type, running_mean_type, global_step_type):
validator.check("batch_std type", batch_std_type,
"batch_mean type", batch_mean_type)
validator.check("batch_std type", batch_std_type,
"gamma type", gamma_type)
validator.check("batch_std type", batch_std_type,
"running_std type", running_std_type)
validator.check("batch_std type", batch_std_type,
"running_mean type", running_mean_type)
validator.check("batch_std_type", batch_std_type,
"dout type", dout_type)
args = {"batch_std": batch_std_type, "batch_mean": batch_mean_type, "gamma": gamma_type,
"running_std": running_std_type, "running_mean": running_mean_type, "dout": dout_type}
validator.check_tensor_type_same(args, (mstype.float16, mstype.float32), self.name)
validator.check_tensor_type_same({"global_step": global_step_type}, (mstype.int32,), self.name)
return gamma_type, gamma_type, gamma_type, gamma_type, gamma_type
class BatchNormFoldD(PrimitiveWithInfer):
"""Performs grad of _BatchNormFold operation."""
@prim_attr_register
def __init__(self, momentum=0.9, epsilon=1e-5, is_training=True, freeze_bn=0):
"""Initialize _BatchNormFold layer"""
from mindspore.ops._op_impl._custom_op import batchnorm_fold
self.momentum = validator.check_number_range('momentum', momentum, 0, 1, Rel.INC_BOTH, self.name)
self.epsilon = validator.check_float_positive('epsilon', epsilon, self.name)
self.is_training = validator.check_value_type('is_training', is_training, (bool,), self.name)
self.freeze_bn = validator.check_value_type('freeze_bn', freeze_bn, (int,), self.name)
self.data_format = "NCHW"
        self.init_prim_io_names(inputs=['x', 'x_sum', 'x_square_sum', 'mean', 'variance'],
                                outputs=['y', 'batch_mean', 'batch_std', 'running_mean', 'running_std',
                                         'mean_updated', 'variance_updated'])
def infer_shape(self, x_shape, x_sum_shape, x_square_sum_shape, mean_shape, variance_shape):
validator.check("mean shape", mean_shape, "gamma_shape", variance_shape, Rel.EQ, self.name)
validator.check("mean_shape[0]", mean_shape[0], "input channel", x_shape[1], Rel.EQ, self.name)
return x_shape, mean_shape, mean_shape, mean_shape, mean_shape, mean_shape, mean_shape
def infer_dtype(self, x_type, x_sum_type, x_square_sum_type, mean_type, variance_type):
validator.check("input type", x_type, "mean type", mean_type)
validator.check("input type", x_type, "variance type", variance_type)
args = {"x": x_type, "mean": mean_type, "variance": variance_type}
validator.check_tensor_type_same(args, (mstype.float16, mstype.float32), self.name)
return x_type, x_type, x_type, x_type, x_type, x_type, x_type
class BatchNormFoldGradD(PrimitiveWithInfer):
"""Performs grad of _BatchNormFoldGrad operation."""
@prim_attr_register
def __init__(self, epsilon=1e-5, is_training=True, freeze_bn=0):
"""Initialize _BatchNormFoldGrad layer"""
from mindspore.ops._op_impl._custom_op import batchnorm_fold_grad
self.epsilon = validator.check_float_positive('epsilon', epsilon, self.name)
self.is_training = validator.check_value_type('is_training', is_training, (bool,), self.name)
self.freeze_bn = validator.check_value_type('freeze_bn', freeze_bn, (int,), self.name)
self.init_prim_io_names(inputs=['d_batch_mean', 'd_batch_std', 'x', 'batch_mean', 'batch_std'],
outputs=['dx'])
def infer_shape(self, d_batch_mean_shape, d_batch_std_shape, x_shape, batch_mean_shape, batch_std_shape):
validator.check("d_batch_mean shape", d_batch_mean_shape, "d_batch_std shape", d_batch_std_shape)
validator.check("d_batch_mean shape", d_batch_mean_shape, "batch_mean shape", batch_mean_shape)
validator.check("d_batch_mean shape", d_batch_mean_shape, "batch_std shape", batch_std_shape)
validator.check("x_shape shape", d_batch_mean_shape[0], "input channel", x_shape[1])
return x_shape
def infer_dtype(self, d_batch_mean_type, d_batch_std_type, x_type, batch_mean_type, batch_std_type):
validator.check("input type", x_type, "d_batch_mean type", d_batch_mean_type)
validator.check("input type", x_type, "d_batch_std type", d_batch_std_type)
validator.check("input type", x_type, "batch_mean type", batch_mean_type)
validator.check("input type", x_type, "batch_std type", batch_std_type)
args = {"input type": x_type}
validator.check_tensor_type_same(args, (mstype.float16, mstype.float32), self.name)
return x_type
class BatchNormFold2_D(PrimitiveWithInfer):
"""
Scales the bias with a correction factor to the long term statistics
prior to quantization. This ensures that there is no jitter in the quantized bias
due to batch to batch variation.
Inputs:
- **x** (Tensor) - Tensor of shape :math:`(N, C)`.
- **beta** (Tensor) - Tensor of shape :math:`(C,)`.
- **gamma** (Tensor) - Tensor of shape :math:`(C,)`.
- **batch_std** (Tensor) - Tensor of shape :math:`(C,)`.
- **batch_mean** (Tensor) - Tensor of shape :math:`(C,)`.
- **running_std** (Tensor) - Tensor of shape :math:`(C,)`.
- **running_mean** (Tensor) - Tensor of shape :math:`(C,)`.
- **global_step** (Tensor) - Tensor to record current global step.
Outputs:
- **y** (Tensor) - Tensor has the same shape as x.
"""
channel_axis = 1
@prim_attr_register
def __init__(self, freeze_bn=0):
"""Initialize conv2d fold layer"""
from mindspore.ops._op_impl._custom_op import batchnorm_fold2
self.init_prim_io_names(inputs=['x', 'beta', 'gamma', 'batch_std', 'batch_mean', 'running_std'],
outputs=['y'])
def infer_shape(self, x_shape, beta_shape, gamma_shape, batch_std_shape, running_std_shape, batch_mean_shape):
validator.check("batch_std shape", batch_std_shape, "running_std shape", running_std_shape, Rel.EQ, self.name)
validator.check("batch_std shape", batch_std_shape, "batch_mean shape", batch_mean_shape, Rel.EQ, self.name)
validator.check("batch_std shape", batch_std_shape, "beta shape", beta_shape, Rel.EQ, self.name)
validator.check("batch_std shape", batch_std_shape, "batch_mean shape", gamma_shape, Rel.EQ, self.name)
validator.check("batch_std_shape[0]", batch_std_shape[0], "x_shape channel size", x_shape[self.channel_axis],
Rel.EQ, self.name)
return x_shape
def infer_dtype(self, x_type, beta_type, gamma_type, batch_std_type, running_std_type, batch_mean_type):
args = {"batch_std": batch_std_type, "running_std": running_std_type, "batch_mean": batch_mean_type,
"beta": beta_type, "gamma": gamma_type, "x": x_type}
validator.check_tensor_type_same(args, (mstype.float16, mstype.float32), self.name)
return x_type
class BatchNormFold2GradD(PrimitiveWithInfer):
"""Performs grad of CorrectionAddGrad operation."""
channel_axis = 1
@prim_attr_register
def __init__(self, freeze_bn=False):
"""Initialize MulFold layer"""
from mindspore.ops._op_impl._custom_op import batchnorm_fold2_grad
self.freeze_bn = freeze_bn
self.init_prim_io_names(
inputs=['dout', 'dout_reduce', 'dout_x_reduce', 'gamma', 'batch_std', 'batch_mean', 'running_std'],
outputs=['d_batch_std', 'd_batch_mean', 'd_gamma', 'dx'])
def infer_shape(self, dout_shape, dout_reduce_shape, dout_x_reduce_shape, gamma_shape, batch_std_shape,
batch_mean_shape, running_std_shape):
validator.check("batch_std shape", batch_std_shape, "batch_mean shape", batch_mean_shape, Rel.EQ, self.name)
validator.check("batch_std shape", batch_std_shape, "running_std shape", running_std_shape, Rel.EQ, self.name)
validator.check("batch_std shape", batch_std_shape, "gamma shape", gamma_shape, Rel.EQ, self.name)
validator.check("batch_std size", batch_std_shape[0], "dout channel size", dout_shape[self.channel_axis],
Rel.EQ, self.name)
return gamma_shape, gamma_shape, gamma_shape, dout_shape
def infer_dtype(self, dout_type, dout_reduce_type, dout_x_reduce_type, gamma_type, batch_std_type,
batch_mean_type, running_std_type):
validator.check("batch_std type", batch_std_type,
"batch_mean type", batch_mean_type)
validator.check("batch_std type", batch_std_type,
"gamma type", gamma_type)
validator.check("batch_std type", batch_std_type,
"running_std type", running_std_type)
validator.check("batch_std_type", batch_std_type,
"dout type", dout_type)
args = {"batch_std": batch_std_type, "batch_mean": batch_mean_type, "gamma": gamma_type,
"running_std": running_std_type, "dout": dout_type}
validator.check_tensor_type_same(args, (mstype.float16, mstype.float32), self.name)
return gamma_type, gamma_type, gamma_type, gamma_type
class BatchNormFold2GradReduce(PrimitiveWithInfer):
"""Performs grad of CorrectionAddGrad operation."""
channel_axis = 1
@prim_attr_register
def __init__(self, freeze_bn=False):
"""Initialize MulFold layer"""
from mindspore.ops._op_impl._custom_op import batchnorm_fold2_grad_reduce
self.freeze_bn = freeze_bn
self.init_prim_io_names(inputs=['dout', 'x'],
outputs=['dout_reduce', 'dout_x_reduce'])
def infer_shape(self, dout_shape, x_shape):
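        # Both outputs are per-channel vectors: dout is reduced over every axis
        # except channel_axis, leaving shape (C,).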
validator.check("dout shape", dout_shape, "x shape", x_shape, Rel.EQ, self.name)
return (dout_shape[self.channel_axis],), (dout_shape[self.channel_axis],)
def infer_dtype(self, dout_type, x_type):
validator.check("dout type", dout_type, "x type", x_type)
return dout_type, dout_type
| 50.942268 | 120 | 0.642146 | 6,585 | 49,414 | 4.548975 | 0.049355 | 0.073844 | 0.025605 | 0.041128 | 0.873911 | 0.841062 | 0.813921 | 0.790986 | 0.769888 | 0.745151 | 0 | 0.019654 | 0.231878 | 49,414 | 969 | 121 | 50.99484 | 0.769549 | 0.287408 | 0 | 0.66843 | 0 | 0 | 0.135602 | 0.005692 | 0 | 0 | 0 | 0 | 0 | 1 | 0.095238 | false | 0 | 0.03351 | 0.003527 | 0.253968 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5b1578b7ae92b501eea1f2d895bfc33d3bf8208d | 168 | py | Python | tests/unit/helpers/helpers.py | alexandrahably/imdb_scraper | e364c9cdccb42369fcc84de54c15621cfced9b5a | [
"Apache-2.0"
] | null | null | null | tests/unit/helpers/helpers.py | alexandrahably/imdb_scraper | e364c9cdccb42369fcc84de54c15621cfced9b5a | [
"Apache-2.0"
] | null | null | null | tests/unit/helpers/helpers.py | alexandrahably/imdb_scraper | e364c9cdccb42369fcc84de54c15621cfced9b5a | [
"Apache-2.0"
] | null | null | null | import os
from pathlib import Path
def path_for_resource(filename: str):
return Path(os.path.dirname(os.path.realpath(__file__))).parent / 'resources' / filename
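
# Illustrative usage (the resource file name below is hypothetical):
#
#     html = path_for_resource('sample.html').read_text()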
| 24 | 92 | 0.761905 | 24 | 168 | 5.083333 | 0.666667 | 0.098361 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 168 | 6 | 93 | 28 | 0.829932 | 0 | 0 | 0 | 0 | 0 | 0.053571 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.5 | 0.25 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
d288d4c4679c4dab8ea2e4451aafc57ac8a19bd4 | 16,323 | py | Python | plugins/dm/callBack/text.py | InHameDev/tele-pdf-v2 | 8744f78c7e4a6ce0132b1ebfb10c5f614f0f6ede | [
"Apache-2.0"
] | 1 | 2022-03-06T04:19:09.000Z | 2022-03-06T04:19:09.000Z | plugins/dm/callBack/text.py | InHameDev/tele-pdf-v2 | 8744f78c7e4a6ce0132b1ebfb10c5f614f0f6ede | [
"Apache-2.0"
] | null | null | null | plugins/dm/callBack/text.py | InHameDev/tele-pdf-v2 | 8744f78c7e4a6ce0132b1ebfb10c5f614f0f6ede | [
"Apache-2.0"
] | null | null | null | '''
█ █▄ █ █▄█ ▄▀▄ █▄ ▄█ ██▀ █▀▄ █▀▄ █▀
█ █ ▀█ █ █ █▀█ █ ▀ █ █▄▄ █▀ █▄▀ █▀
Dev : IlhamGUD
'''
import time
import fitz
import shutil
from pdf import PROCESS
from pyrogram import filters
from Configs.dm import Config
from plugins.checkPdf import checkPdf
from plugins.progress import progress
from pyrogram import Client as InHamePDF
from plugins.fileSize import get_size_format as gSF
from pyrogram.types import InlineKeyboardButton, InlineKeyboardMarkup
#--------------->
#--------> LOCAL VARIABLES
#------------------->
pdfInfoMsg = """`Apa yang ingin saya lakukan dengan file ini?`
Nama FIle: `{}`
Ukuran File: `{}`
`Jumlah Halaman: {}`✌️
"""
PDF_THUMBNAIL = Config.PDF_THUMBNAIL
#--------------->
#--------> VARIABLES
#------------------->
"""
______VARIABLES______
M = text message
T = text file
H = html file
J = Json file
'K' prefix marks pdfs with a known page count
"""
#--------------->
#--------> PDF TO TEXT
#------------------->
M = filters.create(lambda _, __, query: query.data in ["M", "KM"])
T = filters.create(lambda _, __, query: query.data in ["T", "KT"])
J = filters.create(lambda _, __, query: query.data in ["J", "KJ"])
H = filters.create(lambda _, __, query: query.data in ["H", "KH"])
toText = filters.create(lambda _, __, query: query.data == "toText")
KtoText = filters.create(lambda _, __, query: query.data.startswith("KtoText|"))
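# Callback data prefixed with 'K' carries the page count after a '|' separator,
# e.g. "KtoText|12", which the handlers below recover via data.split("|").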
# pdf to text (with unknown pdf page count)
@InHamePDF.on_callback_query(toText)
async def _toText(bot, callbackQuery):
try:
await callbackQuery.edit_message_text(
"__Pdf » Text\nTotal halaman: Tidak diketahui \nNow, Specify the format:__",
reply_markup=InlineKeyboardMarkup(
[
[
InlineKeyboardButton(
"Messages 📜",
callback_data="M"
),
InlineKeyboardButton(
"Txt file 🧾",
callback_data="T"
)
],
[
InlineKeyboardButton(
"Html 🌐",
callback_data="H"
),
InlineKeyboardButton(
"Json 🎀",
callback_data="J"
)
],
[
InlineKeyboardButton(
"« Back «",
callback_data="BTPM"
)
]
]
)
)
except Exception:
pass
# pdf to text (with known page count)
@InHamePDF.on_callback_query(KtoText)
async def _KtoText(bot, callbackQuery):
try:
_, number_of_pages = callbackQuery.data.split("|")
await callbackQuery.edit_message_text(
f"__Pdf » Text\nTotal halaman: {number_of_pages} 🌟 \nNow, Specify the format:__",
reply_markup=InlineKeyboardMarkup(
[
[
InlineKeyboardButton(
"Messages 📜",
callback_data="KM"
),
InlineKeyboardButton(
"Txt file 🧾",
callback_data="KT"
)
],
[
InlineKeyboardButton(
"Html 🌐",
callback_data="KH"
),
InlineKeyboardButton(
"Json 🎀",
callback_data="KJ"
)
],
[
InlineKeyboardButton(
"« Back «",
callback_data=f"KBTPM|{number_of_pages}"
)
]
]
)
)
except Exception:
pass
# to Text file (with unknown pdf page count)
@InHamePDF.on_callback_query(T)
async def _T(bot, callbackQuery):
try:
# CHECH USER PROCESS
if callbackQuery.message.chat.id in PROCESS:
await callbackQuery.answer(
"⏳ - Sedang dalam proses"
)
return
# ADD TO PROCESS
PROCESS.append(callbackQuery.message.chat.id)
data = callbackQuery.data
# DOWNLOAD MESSAGE
downloadMessage = await callbackQuery.message.reply_text(
"`📥 - Mendownload PDF`", quote=True
)
# DOWNLOAD PROGRESS
file_id = callbackQuery.message.reply_to_message.document.file_id
fileSize = callbackQuery.message.reply_to_message.document.file_size
c_time = time.time()
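        # Pyrogram calls the progress callback as progress(current, total, *progress_args);
        # the extras below pass along the total size, the status message to edit,
        # and the transfer start time.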
downloadLoc = await bot.download_media(
message = file_id,
file_name = f"{callbackQuery.message.message_id}/pdf.pdf",
progress = progress,
progress_args = (
fileSize,
downloadMessage,
c_time
)
)
if downloadLoc is None:
PROCESS.remove(callbackQuery.message.chat.id)
return
await downloadMessage.edit(
"`Downloading Completed..` 🥱"
)
if data == "T":
checked = await checkPdf(f'{callbackQuery.message.message_id}/pdf.pdf', callbackQuery)
if not(checked == "pass"):
await bot.delete_messages(
chat_id = callbackQuery.message.chat.id,
message_ids = downloadMessage.message.message_id
)
return
with fitz.open(f'{callbackQuery.message.message_id}/pdf.pdf') as doc:
number_of_pages = doc.pageCount
with open(f'{callbackQuery.message.message_id}/pdf.txt', "wb") as out: # open text output
for page in doc: # iterate the document pages
text = page.get_text().encode("utf8") # get plain text (is in UTF-8)
out.write(text) # write text of page()
out.write(bytes((12,))) # write page delimiter (form feed 0x0C)
await bot.send_chat_action(
callbackQuery.message.chat.id,
"upload_document"
)
await bot.send_document(
chat_id = callbackQuery.message.chat.id,
reply_to_message_id = callbackQuery.message.reply_to_message.message_id,
thumb = PDF_THUMBNAIL,
document = f"{callbackQuery.message.message_id}/pdf.txt",
caption = "__Txt file__"
)
await downloadMessage.delete()
PROCESS.remove(callbackQuery.message.chat.id)
shutil.rmtree(f"{callbackQuery.message.message_id}")
except Exception as e:
try:
print("Text/T: ", e)
PROCESS.remove(callbackQuery.message.chat.id)
shutil.rmtree(f"{callbackQuery.message.message_id}")
except Exception:
pass
# to Text message (with unknown pdf page count)
@InHamePDF.on_callback_query(M)
async def _M(bot, callbackQuery):
try:
if callbackQuery.message.chat.id in PROCESS:
await callbackQuery.answer(
"⏳ - Sedang dalam proses"
)
return
PROCESS.append(callbackQuery.message.chat.id)
data = callbackQuery.data
downloadMessage = await bot.send_message(
chat_id = callbackQuery.message.chat.id,
reply_to_message_id = callbackQuery.message.reply_to_message.message_id,
text = "`📥 - Mendownload PDF`"
)
file_id = callbackQuery.message.reply_to_message.document.file_id
fileSize = callbackQuery.message.reply_to_message.document.file_size
c_time = time.time()
downloadLoc = await bot.download_media(
message = file_id,
file_name = f"{callbackQuery.message.message_id}/pdf.pdf",
progress = progress,
progress_args = (
fileSize,
downloadMessage,
c_time
)
)
if downloadLoc is None:
PROCESS.remove(callbackQuery.message.chat.id)
return
await downloadMessage.edit(
"`Downloading Completed..` 🥱"
)
if data == "M":
checked = await checkPdf(f'{callbackQuery.message.message_id}/pdf.pdf', callbackQuery)
if not(checked == "pass"):
await bot.delete_messages(
chat_id = callbackQuery.message.chat.id,
message_ids = downloadMessage.message.message_id
)
return
with fitz.open(f'{callbackQuery.message.message_id}/pdf.pdf') as doc:
number_of_pages = doc.pageCount
for page in doc: # iterate the document pages
                pdfText = page.get_text()  # plain text for this page, kept as str so it can be sent as a message
if 1 <= len(pdfText) <= 1048:
await bot.send_chat_action(
callbackQuery.message.chat.id, "typing"
)
await bot.send_message(
callbackQuery.message.chat.id, pdfText
)
PROCESS.remove(callbackQuery.message.chat.id)
shutil.rmtree(f"{callbackQuery.message.message_id}")
except Exception as e:
try:
print("Text/M: ", e)
PROCESS.remove(callbackQuery.message.chat.id)
shutil.rmtree(f"{callbackQuery.message.message_id}")
except Exception:
pass
# to Html file (with unknown pdf page count)
@InHamePDF.on_callback_query(H)
async def _H(bot, callbackQuery):
try:
if callbackQuery.message.chat.id in PROCESS:
await callbackQuery.answer(
"⏳ - Sedang dalam proses"
)
return
PROCESS.append(callbackQuery.message.chat.id)
data = callbackQuery.data
downloadMessage = await bot.send_message(
chat_id = callbackQuery.message.chat.id,
reply_to_message_id = callbackQuery.message.reply_to_message.message_id,
text = "`📥 - Mendownload PDF`"
)
file_id = callbackQuery.message.reply_to_message.document.file_id
fileSize = callbackQuery.message.reply_to_message.document.file_size
c_time = time.time()
downloadLoc = await bot.download_media(
message = file_id,
file_name = f"{callbackQuery.message.message_id}/pdf.pdf",
progress = progress,
progress_args = (
fileSize,
downloadMessage,
c_time
)
)
if downloadLoc is None:
PROCESS.remove(callbackQuery.message.chat.id)
return
await downloadMessage.edit(
"`Downloading Completed..` 🥱"
)
if data == "H":
checked = await checkPdf(f'{callbackQuery.message.message_id}/pdf.pdf', callbackQuery)
if not(checked == "pass"):
await bot.delete_messages(
chat_id = callbackQuery.message.chat.id,
message_ids = downloadMessage.message.message_id
)
return
with fitz.open(f'{callbackQuery.message.message_id}/pdf.pdf') as doc:
number_of_pages = doc.pageCount
with open(f'{callbackQuery.message.message_id}/pdf.html', "wb") as out: # open text output
for page in doc: # iterate the document pages
text = page.get_text("html").encode("utf8") # get plain text (is in UTF-8)
out.write(text) # write text of page()
out.write(bytes((12,))) # write page delimiter (form feed 0x0C)
await bot.send_chat_action(
callbackQuery.message.chat.id,
"upload_document"
)
await bot.send_document(
chat_id = callbackQuery.message.chat.id,
reply_to_message_id = callbackQuery.message.reply_to_message.message_id,
thumb = PDF_THUMBNAIL,
document = f"{callbackQuery.message.message_id}/pdf.html",
caption = "__Html file : helps to view pdf on any browser..__ 😉"
)
await downloadMessage.delete()
PROCESS.remove(callbackQuery.message.chat.id)
shutil.rmtree(f"{callbackQuery.message.message_id}")
    except Exception as e:
        try:
            print("Text/H: ", e)
PROCESS.remove(callbackQuery.message.chat.id)
shutil.rmtree(f"{callbackQuery.message.message_id}")
except Exception:
pass
# to Json file (with unknown pdf page count)
@InHamePDF.on_callback_query(J)
async def _J(bot, callbackQuery):
try:
if callbackQuery.message.chat.id in PROCESS:
await callbackQuery.answer(
"⏳ - Sedang dalam proses"
)
return
PROCESS.append(callbackQuery.message.chat.id)
data = callbackQuery.data
downloadMessage = await bot.send_message(
chat_id = callbackQuery.message.chat.id,
reply_to_message_id = callbackQuery.message.reply_to_message.message_id,
text = "`📥 - Mendownload PDF`"
)
file_id = callbackQuery.message.reply_to_message.document.file_id
fileSize = callbackQuery.message.reply_to_message.document.file_size
c_time = time.time()
downloadLoc = await bot.download_media(
message = file_id,
file_name = f"{callbackQuery.message.message_id}/pdf.pdf",
progress = progress,
progress_args = (
fileSize,
downloadMessage,
c_time
)
)
if downloadLoc is None:
PROCESS.remove(callbackQuery.message.chat.id)
return
await downloadMessage.edit(
"`Downloading Completed..` 🥱"
)
if data == "J":
checked = await checkPdf(f'{callbackQuery.message.message_id}/pdf.pdf', callbackQuery)
if not(checked == "pass"):
await bot.delete_messages(
chat_id = callbackQuery.message.chat.id,
message_ids = downloadMessage.message.message_id
)
return
with fitz.open(f'{callbackQuery.message.message_id}/pdf.pdf') as doc:
number_of_pages = doc.pageCount
with open(f'{callbackQuery.message.message_id}/pdf.json', "wb") as out: # open text output
for page in doc: # iterate the document pages
text = page.get_text("json").encode("utf8") # get plain text (is in UTF-8)
out.write(text) # write text of page()
out.write(bytes((12,))) # write page delimiter (form feed 0x0C)
await bot.send_chat_action(
callbackQuery.message.chat.id,
"upload_document"
)
await bot.send_document(
chat_id = callbackQuery.message.chat.id,
reply_to_message_id = callbackQuery.message.reply_to_message.message_id,
thumb = PDF_THUMBNAIL,
document = f"{callbackQuery.message.message_id}/pdf.json",
caption = "__Json File__"
)
await downloadMessage.delete()
PROCESS.remove(callbackQuery.message.chat.id)
shutil.rmtree(f"{callbackQuery.message.message_id}")
    except Exception as e:
        try:
            print("Text/J: ", e)
PROCESS.remove(callbackQuery.message.chat.id)
shutil.rmtree(f"{callbackQuery.message.message_id}")
except Exception:
pass
# Copyright InHame Dev
| 37.352403 | 103 | 0.530478 | 1,593 | 16,323 | 5.31764 | 0.124922 | 0.179436 | 0.058317 | 0.107425 | 0.854208 | 0.823988 | 0.793767 | 0.774643 | 0.770511 | 0.757762 | 0 | 0.002436 | 0.371378 | 16,323 | 436 | 104 | 37.438073 | 0.815613 | 0.070024 | 0 | 0.651351 | 0 | 0 | 0.129436 | 0.070244 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.027027 | 0.02973 | 0 | 0.062162 | 0.010811 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d2bf8a942b657cfdee33c353df0dafb722fc552d | 36 | py | Python | livemark/plugins/reference/__init__.py | AyrtonB/livemark | f8c49d449ea6242c674cf345823468aaabea6e6b | [
"MIT"
] | 73 | 2021-06-07T13:28:36.000Z | 2022-03-26T05:37:59.000Z | livemark/plugins/reference/__init__.py | AyrtonB/livemark | f8c49d449ea6242c674cf345823468aaabea6e6b | [
"MIT"
] | 120 | 2021-06-04T12:51:01.000Z | 2022-03-21T11:11:36.000Z | livemark/plugins/reference/__init__.py | AyrtonB/livemark | f8c49d449ea6242c674cf345823468aaabea6e6b | [
"MIT"
] | 7 | 2021-09-22T11:38:26.000Z | 2022-03-26T05:35:58.000Z | from .plugin import ReferencePlugin
| 18 | 35 | 0.861111 | 4 | 36 | 7.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 36 | 1 | 36 | 36 | 0.96875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
961221e40f474fbab39129a3b606e16e29ac9ec6 | 61,098 | py | Python | tests/unit/test_service_steps.py | nicolaei/aws-step-functions-data-science-sdk-python | aafdb76d2f1a1a4aa6e863047f9d9ba4138e37e2 | [
"Apache-2.0"
] | null | null | null | tests/unit/test_service_steps.py | nicolaei/aws-step-functions-data-science-sdk-python | aafdb76d2f1a1a4aa6e863047f9d9ba4138e37e2 | [
"Apache-2.0"
] | null | null | null | tests/unit/test_service_steps.py | nicolaei/aws-step-functions-data-science-sdk-python | aafdb76d2f1a1a4aa6e863047f9d9ba4138e37e2 | [
"Apache-2.0"
] | null | null | null | # Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License.
# A copy of the License is located at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# or in the "license" file accompanying this file. This file is distributed
# on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# express or implied. See the License for the specific language governing
# permissions and limitations under the License.
from __future__ import absolute_import
import boto3
import pytest
import re
from unittest.mock import patch
from stepfunctions.steps.service import DynamoDBGetItemStep, DynamoDBPutItemStep, DynamoDBUpdateItemStep, DynamoDBDeleteItemStep
from stepfunctions.steps.service import (
EksCallStep,
EksCreateClusterStep,
EksCreateFargateProfileStep,
EksCreateNodegroupStep,
EksDeleteClusterStep,
EksDeleteFargateProfileStep,
EksDeleteNodegroupStep,
EksRunJobStep,
)
from stepfunctions.steps.service import EmrCreateClusterStep, EmrTerminateClusterStep, EmrAddStepStep, EmrCancelStepStep, EmrSetClusterTerminationProtectionStep, EmrModifyInstanceFleetByNameStep, EmrModifyInstanceGroupByNameStep
from stepfunctions.steps.service import EventBridgePutEventsStep
from stepfunctions.steps.service import SnsPublishStep, SqsSendMessageStep
from stepfunctions.steps.service import GlueDataBrewStartJobRunStep
from stepfunctions.steps.service import StepFunctionsStartExecutionStep
from stepfunctions.steps.integration_resources import IntegrationPattern
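# Each test below follows the same pattern: build a service integration step with
# sample parameters, then assert that to_dict() compiles to the expected Amazon
# States Language task (resource ARN, parameters, and terminal 'End' flag).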
@patch.object(boto3.session.Session, 'region_name', 'us-east-1')
def test_sns_publish_step_creation():
step = SnsPublishStep('Publish to SNS', parameters={
'TopicArn': 'arn:aws:sns:us-east-1:123456789012:myTopic',
'Message': 'message',
})
assert step.to_dict() == {
'Type': 'Task',
'Resource': 'arn:aws:states:::sns:publish',
'Parameters': {
'TopicArn': 'arn:aws:sns:us-east-1:123456789012:myTopic',
'Message': 'message',
},
'End': True
}
step = SnsPublishStep('Publish to SNS', wait_for_callback=True, parameters={
'TopicArn': 'arn:aws:sns:us-east-1:123456789012:myTopic',
'Message': {
'Input.$': '$',
'TaskToken.$': '$$.Task.Token'
}
})
assert step.to_dict() == {
'Type': 'Task',
'Resource': 'arn:aws:states:::sns:publish.waitForTaskToken',
'Parameters': {
'TopicArn': 'arn:aws:sns:us-east-1:123456789012:myTopic',
'Message': {
'Input.$': '$',
'TaskToken.$': '$$.Task.Token'
}
},
'End': True
}
@patch.object(boto3.session.Session, 'region_name', 'us-east-1')
def test_sqs_send_message_step_creation():
step = SqsSendMessageStep('Send to SQS', parameters={
'QueueUrl': 'https://sqs.us-east-1.amazonaws.com/123456789012/myQueue',
'MessageBody': 'Hello'
})
assert step.to_dict() == {
'Type': 'Task',
'Resource': 'arn:aws:states:::sqs:sendMessage',
'Parameters': {
'QueueUrl': 'https://sqs.us-east-1.amazonaws.com/123456789012/myQueue',
'MessageBody': 'Hello'
},
'End': True
}
step = SqsSendMessageStep('Send to SQS', wait_for_callback=True, parameters={
'QueueUrl': 'https://sqs.us-east-1.amazonaws.com/123456789012/myQueue',
'MessageBody': {
'Input.$': '$',
'TaskToken.$': '$$.Task.Token'
}
})
assert step.to_dict() == {
'Type': 'Task',
'Resource': 'arn:aws:states:::sqs:sendMessage.waitForTaskToken',
'Parameters': {
'QueueUrl': 'https://sqs.us-east-1.amazonaws.com/123456789012/myQueue',
'MessageBody': {
'Input.$': '$',
'TaskToken.$': '$$.Task.Token'
}
},
'End': True
}
@patch.object(boto3.session.Session, 'region_name', 'us-east-1')
def test_eventbridge_put_events_step_creation():
step = EventBridgePutEventsStep('Send to EventBridge', parameters={
"Entries": [
{
"Detail": {
"Message": "MyMessage"
},
"DetailType": "MyDetailType",
"EventBusName": "MyEventBus",
"Source": "my.source"
}
]
})
assert step.to_dict() == {
"Type": "Task",
"Resource": 'arn:aws:states:::events:putEvents',
"Parameters": {
"Entries": [
{
"Detail": {
"Message": "MyMessage"
},
"DetailType": "MyDetailType",
"EventBusName": "MyEventBus",
"Source": "my.source"
}
]
},
"End": True
}
step = EventBridgePutEventsStep('Send to EventBridge', wait_for_callback=True, parameters={
"Entries": [
{
"Detail": {
"Message.$": "$.MyMessage"
},
"DetailType": "MyDetailType",
"EventBusName": "MyEventBus",
"Source": "my.source"
}
]
})
assert step.to_dict() == {
"Type": "Task",
"Resource": "arn:aws:states:::events:putEvents.waitForTaskToken",
"Parameters": {
"Entries": [
{
"Detail": {
"Message.$": "$.MyMessage"
},
"DetailType": "MyDetailType",
"EventBusName": "MyEventBus",
"Source": "my.source"
}
]
},
"End": True
}
@patch.object(boto3.session.Session, 'region_name', 'us-east-1')
def test_dynamodb_get_item_step_creation():
step = DynamoDBGetItemStep('Read Message From DynamoDB', parameters={
'TableName': 'TransferDataRecords-DDBTable-3I41R5L5EAGT',
'Key': {
'MessageId': {
'S.$': '$.List[0]'
}
}
})
assert step.to_dict() == {
'Type': 'Task',
'Resource': 'arn:aws:states:::dynamodb:getItem',
'Parameters': {
'TableName': 'TransferDataRecords-DDBTable-3I41R5L5EAGT',
'Key': {
'MessageId': {
'S.$': '$.List[0]'
}
}
},
'End': True
}
@patch.object(boto3.session.Session, 'region_name', 'us-east-1')
def test_dynamodb_put_item_step_creation():
step = DynamoDBPutItemStep('Add Message From DynamoDB', parameters={
'TableName': 'TransferDataRecords-DDBTable-3I41R5L5EAGT',
'Item': {
'MessageId': {
'S': '123456789'
}
}
})
assert step.to_dict() == {
'Type': 'Task',
'Resource': 'arn:aws:states:::dynamodb:putItem',
'Parameters': {
'TableName': 'TransferDataRecords-DDBTable-3I41R5L5EAGT',
'Item': {
'MessageId': {
'S': '123456789'
}
}
},
'End': True
}
@patch.object(boto3.session.Session, 'region_name', 'us-east-1')
def test_dynamodb_delete_item_step_creation():
step = DynamoDBDeleteItemStep('Delete Message From DynamoDB', parameters={
'TableName': 'TransferDataRecords-DDBTable-3I41R5L5EAGT',
'Key': {
'MessageId': {
'S': 'MyMessage'
}
}
})
assert step.to_dict() == {
'Type': 'Task',
'Resource': 'arn:aws:states:::dynamodb:deleteItem',
'Parameters': {
'TableName': 'TransferDataRecords-DDBTable-3I41R5L5EAGT',
'Key': {
'MessageId': {
'S': 'MyMessage'
}
}
},
'End': True
}
@patch.object(boto3.session.Session, 'region_name', 'us-east-1')
def test_dynamodb_update_item_step_creation():
step = DynamoDBUpdateItemStep('Update Message From DynamoDB', parameters={
'TableName': 'TransferDataRecords-DDBTable-3I41R5L5EAGT',
'Key': {
'RecordId': {
'S': 'RecordId'
}
},
'UpdateExpression': 'set Revision = :val1',
'ExpressionAttributeValues': {
':val1': { 'S': '2' }
}
})
assert step.to_dict() == {
'Type': 'Task',
'Resource': 'arn:aws:states:::dynamodb:updateItem',
'Parameters': {
'TableName': 'TransferDataRecords-DDBTable-3I41R5L5EAGT',
'Key': {
'RecordId': {
'S': 'RecordId'
}
},
'UpdateExpression': 'set Revision = :val1',
'ExpressionAttributeValues': {
':val1': { 'S': '2' }
}
},
'End': True
}
@patch.object(boto3.session.Session, 'region_name', 'us-east-1')
def test_emr_create_cluster_step_creation():
step = EmrCreateClusterStep('Create EMR cluster', parameters={
'Name': 'MyWorkflowCluster',
'VisibleToAllUsers': True,
'ReleaseLabel': 'emr-5.28.0',
'Applications': [
{
'Name': 'Hive'
}
],
'ServiceRole': 'EMR_DefaultRole',
'JobFlowRole': 'EMR_EC2_DefaultRole',
'LogUri': 's3n://aws-logs-123456789012-us-east-1/elasticmapreduce/',
'Instances': {
'KeepJobFlowAliveWhenNoSteps': True,
'InstanceFleets': [
{
'InstanceFleetType': 'MASTER',
'Name': 'MASTER',
'TargetOnDemandCapacity': 1,
'InstanceTypeConfigs': [
{
'InstanceType': 'm4.xlarge'
}
]
},
{
'InstanceFleetType': 'CORE',
'Name': 'CORE',
'TargetOnDemandCapacity': 1,
'InstanceTypeConfigs': [
{
'InstanceType': 'm4.xlarge'
}
]
}
]
}
})
assert step.to_dict() == {
'Type': 'Task',
'Resource': 'arn:aws:states:::elasticmapreduce:createCluster.sync',
'Parameters': {
'Name': 'MyWorkflowCluster',
'VisibleToAllUsers': True,
'ReleaseLabel': 'emr-5.28.0',
'Applications': [
{
'Name': 'Hive'
}
],
'ServiceRole': 'EMR_DefaultRole',
'JobFlowRole': 'EMR_EC2_DefaultRole',
'LogUri': 's3n://aws-logs-123456789012-us-east-1/elasticmapreduce/',
'Instances': {
'KeepJobFlowAliveWhenNoSteps': True,
'InstanceFleets': [
{
'InstanceFleetType': 'MASTER',
'Name': 'MASTER',
'TargetOnDemandCapacity': 1,
'InstanceTypeConfigs': [
{
'InstanceType': 'm4.xlarge'
}
]
},
{
'InstanceFleetType': 'CORE',
'Name': 'CORE',
'TargetOnDemandCapacity': 1,
'InstanceTypeConfigs': [
{
'InstanceType': 'm4.xlarge'
}
]
}
]
}
},
'End': True
}
step = EmrCreateClusterStep('Create EMR cluster', wait_for_completion=False, parameters={
'Name': 'MyWorkflowCluster',
'VisibleToAllUsers': True,
'ReleaseLabel': 'emr-5.28.0',
'Applications': [
{
'Name': 'Hive'
}
],
'ServiceRole': 'EMR_DefaultRole',
'JobFlowRole': 'EMR_EC2_DefaultRole',
'LogUri': 's3n://aws-logs-123456789012-us-east-1/elasticmapreduce/',
'Instances': {
'KeepJobFlowAliveWhenNoSteps': True,
'InstanceFleets': [
{
'InstanceFleetType': 'MASTER',
'Name': 'MASTER',
'TargetOnDemandCapacity': 1,
'InstanceTypeConfigs': [
{
'InstanceType': 'm4.xlarge'
}
]
},
{
'InstanceFleetType': 'CORE',
'Name': 'CORE',
'TargetOnDemandCapacity': 1,
'InstanceTypeConfigs': [
{
'InstanceType': 'm4.xlarge'
}
]
}
]
}
})
assert step.to_dict() == {
'Type': 'Task',
'Resource': 'arn:aws:states:::elasticmapreduce:createCluster',
'Parameters': {
'Name': 'MyWorkflowCluster',
'VisibleToAllUsers': True,
'ReleaseLabel': 'emr-5.28.0',
'Applications': [
{
'Name': 'Hive'
}
],
'ServiceRole': 'EMR_DefaultRole',
'JobFlowRole': 'EMR_EC2_DefaultRole',
'LogUri': 's3n://aws-logs-123456789012-us-east-1/elasticmapreduce/',
'Instances': {
'KeepJobFlowAliveWhenNoSteps': True,
'InstanceFleets': [
{
'InstanceFleetType': 'MASTER',
'Name': 'MASTER',
'TargetOnDemandCapacity': 1,
'InstanceTypeConfigs': [
{
'InstanceType': 'm4.xlarge'
}
]
},
{
'InstanceFleetType': 'CORE',
'Name': 'CORE',
'TargetOnDemandCapacity': 1,
'InstanceTypeConfigs': [
{
'InstanceType': 'm4.xlarge'
}
]
}
]
}
},
'End': True
}
@patch.object(boto3.session.Session, 'region_name', 'us-east-1')
def test_emr_terminate_cluster_step_creation():
step = EmrTerminateClusterStep('Terminate EMR cluster', parameters={
'ClusterId': 'MyWorkflowClusterId'
})
assert step.to_dict() == {
'Type': 'Task',
'Resource': 'arn:aws:states:::elasticmapreduce:terminateCluster.sync',
'Parameters': {
'ClusterId': 'MyWorkflowClusterId',
},
'End': True
}
step = EmrTerminateClusterStep('Terminate EMR cluster', wait_for_completion=False, parameters={
'ClusterId': 'MyWorkflowClusterId'
})
assert step.to_dict() == {
'Type': 'Task',
'Resource': 'arn:aws:states:::elasticmapreduce:terminateCluster',
'Parameters': {
'ClusterId': 'MyWorkflowClusterId',
},
'End': True
}
@patch.object(boto3.session.Session, 'region_name', 'us-east-1')
def test_emr_add_step_step_creation():
step = EmrAddStepStep('Add step to EMR cluster', parameters={
'ClusterId': 'MyWorkflowClusterId',
'Step': {
'Name': 'The first step',
'ActionOnFailure': 'CONTINUE',
'HadoopJarStep': {
'Jar': 'command-runner.jar',
'Args': [
'hive-script',
'--run-hive-script',
'--args',
'-f',
's3://<region>.elasticmapreduce.samples/cloudfront/code/Hive_CloudFront.q',
'-d',
'INPUT=s3://<region>.elasticmapreduce.samples',
'-d',
'OUTPUT=s3://<mybucket>/MyHiveQueryResults/'
]
}
}
})
assert step.to_dict() == {
'Type': 'Task',
'Resource': 'arn:aws:states:::elasticmapreduce:addStep.sync',
'Parameters': {
'ClusterId': 'MyWorkflowClusterId',
'Step': {
'Name': 'The first step',
'ActionOnFailure': 'CONTINUE',
'HadoopJarStep': {
'Jar': 'command-runner.jar',
'Args': [
'hive-script',
'--run-hive-script',
'--args',
'-f',
's3://<region>.elasticmapreduce.samples/cloudfront/code/Hive_CloudFront.q',
'-d',
'INPUT=s3://<region>.elasticmapreduce.samples',
'-d',
'OUTPUT=s3://<mybucket>/MyHiveQueryResults/'
]
}
}
},
'End': True
}
step = EmrAddStepStep('Add step to EMR cluster', wait_for_completion=False, parameters={
'ClusterId': 'MyWorkflowClusterId',
'Step': {
'Name': 'The first step',
'ActionOnFailure': 'CONTINUE',
'HadoopJarStep': {
'Jar': 'command-runner.jar',
'Args': [
'hive-script',
'--run-hive-script',
'--args',
'-f',
's3://<region>.elasticmapreduce.samples/cloudfront/code/Hive_CloudFront.q',
'-d',
'INPUT=s3://<region>.elasticmapreduce.samples',
'-d',
'OUTPUT=s3://<mybucket>/MyHiveQueryResults/'
]
}
}
})
assert step.to_dict() == {
'Type': 'Task',
'Resource': 'arn:aws:states:::elasticmapreduce:addStep',
'Parameters': {
'ClusterId': 'MyWorkflowClusterId',
'Step': {
'Name': 'The first step',
'ActionOnFailure': 'CONTINUE',
'HadoopJarStep': {
'Jar': 'command-runner.jar',
'Args': [
'hive-script',
'--run-hive-script',
'--args',
'-f',
's3://<region>.elasticmapreduce.samples/cloudfront/code/Hive_CloudFront.q',
'-d',
'INPUT=s3://<region>.elasticmapreduce.samples',
'-d',
'OUTPUT=s3://<mybucket>/MyHiveQueryResults/'
]
}
}
},
'End': True
}
@patch.object(boto3.session.Session, 'region_name', 'us-east-1')
def test_emr_cancel_step_step_creation():
step = EmrCancelStepStep('Cancel step from EMR cluster', parameters={
'ClusterId': 'MyWorkflowClusterId',
'StepId': 'MyWorkflowStepId'
})
assert step.to_dict() == {
'Type': 'Task',
'Resource': 'arn:aws:states:::elasticmapreduce:cancelStep',
'Parameters': {
'ClusterId': 'MyWorkflowClusterId',
'StepId': 'MyWorkflowStepId'
},
'End': True
}
@patch.object(boto3.session.Session, 'region_name', 'us-east-1')
def test_emr_set_cluster_termination_protection_step_creation():
step = EmrSetClusterTerminationProtectionStep('Set termination protection for EMR cluster', parameters={
'ClusterId': 'MyWorkflowClusterId',
'TerminationProtected': True
})
assert step.to_dict() == {
'Type': 'Task',
'Resource': 'arn:aws:states:::elasticmapreduce:setClusterTerminationProtection',
'Parameters': {
'ClusterId': 'MyWorkflowClusterId',
'TerminationProtected': True
},
'End': True
}
@patch.object(boto3.session.Session, 'region_name', 'us-east-1')
def test_emr_modify_instance_fleet_by_name_step_creation():
step = EmrModifyInstanceFleetByNameStep('Modify Instance Fleet by name for EMR cluster', parameters={
'ClusterId': 'MyWorkflowClusterId',
'InstanceFleetName': 'MyCoreFleet',
'InstanceFleet': {
'TargetOnDemandCapacity': 8,
'TargetSpotCapacity': 0
}
})
assert step.to_dict() == {
'Type': 'Task',
'Resource': 'arn:aws:states:::elasticmapreduce:modifyInstanceFleetByName',
'Parameters': {
'ClusterId': 'MyWorkflowClusterId',
'InstanceFleetName': 'MyCoreFleet',
'InstanceFleet': {
'TargetOnDemandCapacity': 8,
'TargetSpotCapacity': 0
}
},
'End': True
}
@patch.object(boto3.session.Session, 'region_name', 'us-east-1')
def test_emr_modify_instance_group_by_name_step_creation():
step = EmrModifyInstanceGroupByNameStep('Modify Instance Group by name for EMR cluster', parameters={
'ClusterId': 'MyWorkflowClusterId',
'InstanceGroupName': 'MyCoreGroup',
'InstanceGroup': {
'InstanceCount': 8
}
})
assert step.to_dict() == {
'Type': 'Task',
'Resource': 'arn:aws:states:::elasticmapreduce:modifyInstanceGroupByName',
'Parameters': {
'ClusterId': 'MyWorkflowClusterId',
'InstanceGroupName': 'MyCoreGroup',
'InstanceGroup': {
'InstanceCount': 8
}
},
'End': True
}
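# Glue DataBrew follows the same wait_for_completion convention:
# 'databrew:startJobRun.sync' by default, 'databrew:startJobRun' otherwise.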
@patch.object(boto3.session.Session, 'region_name', 'us-east-1')
def test_databrew_start_job_run_step_creation_sync():
step = GlueDataBrewStartJobRunStep('Start Glue DataBrew Job Run - Sync', parameters={
"Name": "MyWorkflowJobRun"
})
assert step.to_dict() == {
'Type': 'Task',
'Resource': 'arn:aws:states:::databrew:startJobRun.sync',
'Parameters': {
'Name': 'MyWorkflowJobRun'
},
'End': True
}
@patch.object(boto3.session.Session, 'region_name', 'us-east-1')
def test_databrew_start_job_run_step_creation():
step = GlueDataBrewStartJobRunStep('Start Glue DataBrew Job Run', wait_for_completion=False, parameters={
"Name": "MyWorkflowJobRun"
})
assert step.to_dict() == {
'Type': 'Task',
'Resource': 'arn:aws:states:::databrew:startJobRun',
'Parameters': {
'Name': 'MyWorkflowJobRun'
},
'End': True
}
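# The EKS steps switch from the wait_for_completion flag to an explicit
# IntegrationPattern enum: WaitForCompletion (the default) maps to the '.sync'
# ARN, CallAndContinue to the plain ARN, and WaitForTaskToken raises ValueError.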
@patch.object(boto3.session.Session, 'region_name', 'us-east-1')
def test_eks_create_cluster_step_creation_call_and_continue():
step = EksCreateClusterStep("Create Eks cluster - CallAndContinue", integration_pattern=IntegrationPattern.CallAndContinue,
parameters={
'Name': 'MyCluster',
'ResourcesVpcConfig': {
'SubnetIds': [
'subnet-00000000000000000',
'subnet-00000000000000001'
]
},
'RoleArn': 'arn:aws:iam::123456789012:role/MyEKSClusterRole'
})
assert step.to_dict() == {
'Type': 'Task',
'Resource': 'arn:aws:states:::eks:createCluster',
'Parameters': {
'Name': 'MyCluster',
'ResourcesVpcConfig': {
'SubnetIds': [
'subnet-00000000000000000',
'subnet-00000000000000001'
]
},
'RoleArn': 'arn:aws:iam::123456789012:role/MyEKSClusterRole'
},
'End': True
}
@patch.object(boto3.session.Session, 'region_name', 'us-east-1')
def test_eks_create_cluster_step_creation_default():
step = EksCreateClusterStep("Create Eks cluster - default", parameters={
'Name': 'MyCluster',
'ResourcesVpcConfig': {
'SubnetIds': [
'subnet-00000000000000000',
'subnet-00000000000000001'
]
},
'RoleArn': 'arn:aws:iam::123456789012:role/MyEKSClusterRole'
})
assert step.to_dict() == {
'Type': 'Task',
'Resource': 'arn:aws:states:::eks:createCluster.sync',
'Parameters': {
'Name': 'MyCluster',
'ResourcesVpcConfig': {
'SubnetIds': [
'subnet-00000000000000000',
'subnet-00000000000000001'
]
},
'RoleArn': 'arn:aws:iam::123456789012:role/MyEKSClusterRole'
},
'End': True
}
@patch.object(boto3.session.Session, 'region_name', 'us-east-1')
def test_eks_create_cluster_step_creation_wait_for_completion():
step = EksCreateClusterStep("Create Eks cluster - WaitForCompletion",
integration_pattern=IntegrationPattern.WaitForCompletion,
parameters={
'Name': 'MyCluster',
'ResourcesVpcConfig': {
'SubnetIds': [
'subnet-00000000000000000',
'subnet-00000000000000001'
]
},
'RoleArn': 'arn:aws:iam::123456789012:role/MyEKSClusterRole'
})
assert step.to_dict() == {
'Type': 'Task',
'Resource': 'arn:aws:states:::eks:createCluster.sync',
'Parameters': {
'Name': 'MyCluster',
'ResourcesVpcConfig': {
'SubnetIds': [
'subnet-00000000000000000',
'subnet-00000000000000001'
]
},
'RoleArn': 'arn:aws:iam::123456789012:role/MyEKSClusterRole'
},
'End': True
}
@patch.object(boto3.session.Session, 'region_name', 'us-east-1')
def test_eks_create_cluster_step_creation_wait_for_task_token_raises_error():
error_message = re.escape(f"Integration Pattern ({IntegrationPattern.WaitForTaskToken.name}) is not supported for this step - "
f"Please use one of the following: "
f"{[IntegrationPattern.WaitForCompletion.name, IntegrationPattern.CallAndContinue.name]}")
with pytest.raises(ValueError, match=error_message):
EksCreateClusterStep("Create Eks cluster - WaitForTaskToken",
integration_pattern=IntegrationPattern.WaitForTaskToken)
@patch.object(boto3.session.Session, 'region_name', 'us-east-1')
def test_eks_delete_cluster_step_creation_call_and_continue():
step = EksDeleteClusterStep("Delete Eks cluster - CallAndContinue",
integration_pattern=IntegrationPattern.CallAndContinue,
parameters={
'Name': 'MyCluster'
})
assert step.to_dict() == {
'Type': 'Task',
'Resource': 'arn:aws:states:::eks:deleteCluster',
'Parameters': {
'Name': 'MyCluster'
},
'End': True
}
@patch.object(boto3.session.Session, 'region_name', 'us-east-1')
def test_eks_delete_cluster_step_creation_default():
step = EksDeleteClusterStep("Delete Eks cluster - default", parameters={
'Name': 'MyCluster'
})
assert step.to_dict() == {
'Type': 'Task',
'Resource': 'arn:aws:states:::eks:deleteCluster.sync',
'Parameters': {
'Name': 'MyCluster'
},
'End': True
}
@patch.object(boto3.session.Session, 'region_name', 'us-east-1')
def test_eks_delete_cluster_step_creation_wait_for_completion():
step = EksDeleteClusterStep("Delete Eks cluster - WaitForCompletion",
integration_pattern=IntegrationPattern.WaitForCompletion,
parameters={
'Name': 'MyCluster'
})
assert step.to_dict() == {
'Type': 'Task',
'Resource': 'arn:aws:states:::eks:deleteCluster.sync',
'Parameters': {
'Name': 'MyCluster'
},
'End': True
}
@patch.object(boto3.session.Session, 'region_name', 'us-east-1')
def test_eks_delete_cluster_step_creation_wait_for_task_token_raises_error():
error_message = re.escape(f"Integration Pattern ({IntegrationPattern.WaitForTaskToken.name}) is not supported for this step - "
f"Please use one of the following: "
f"{[IntegrationPattern.WaitForCompletion.name, IntegrationPattern.CallAndContinue.name]}")
with pytest.raises(ValueError, match=error_message):
EksDeleteClusterStep("Delete Eks cluster - WaitForTaskToken",
integration_pattern=IntegrationPattern.WaitForTaskToken)
@patch.object(boto3.session.Session, 'region_name', 'us-east-1')
def test_eks_create_fargate_profile_step_creation_call_and_continue():
step = EksCreateFargateProfileStep("Create Fargate profile - CallAndContinue",
integration_pattern=IntegrationPattern.CallAndContinue,
parameters={
'ClusterName': 'MyCluster',
'FargateProfileName': 'MyFargateProfile',
'PodExecutionRoleArn': 'arn:aws:iam::123456789012:role/MyFargatePodExecutionRole',
'Selectors': [{
'Namespace': 'my-namespace',
'Labels': {'my-label': 'my-value'}
}],
'subnets': ['subnet-00000000000000000']
})
assert step.to_dict() == {
'Type': 'Task',
'Resource': 'arn:aws:states:::eks:createFargateProfile',
'Parameters': {
'ClusterName': 'MyCluster',
'FargateProfileName': 'MyFargateProfile',
'PodExecutionRoleArn': 'arn:aws:iam::123456789012:role/MyFargatePodExecutionRole',
'Selectors': [{
'Namespace': 'my-namespace',
'Labels': {'my-label': 'my-value'}
}],
'subnets': ['subnet-00000000000000000']
},
'End': True
}
@patch.object(boto3.session.Session, 'region_name', 'us-east-1')
def test_eks_create_fargate_profile_step_creation_default():
step = EksCreateFargateProfileStep("Create Fargate profile - default", parameters={
'ClusterName': 'MyCluster',
'FargateProfileName': 'MyFargateProfile',
'PodExecutionRoleArn': 'arn:aws:iam::123456789012:role/MyFargatePodExecutionRole',
'Selectors': [{
'Namespace': 'my-namespace',
'Labels': {'my-label': 'my-value'}
}],
'subnets': ['subnet-00000000000000000']
})
assert step.to_dict() == {
'Type': 'Task',
'Resource': 'arn:aws:states:::eks:createFargateProfile.sync',
'Parameters': {
'ClusterName': 'MyCluster',
'FargateProfileName': 'MyFargateProfile',
'PodExecutionRoleArn': 'arn:aws:iam::123456789012:role/MyFargatePodExecutionRole',
'Selectors': [{
'Namespace': 'my-namespace',
'Labels': {'my-label': 'my-value'}
}],
'subnets': ['subnet-00000000000000000']
},
'End': True
}
@patch.object(boto3.session.Session, 'region_name', 'us-east-1')
def test_eks_create_fargate_profile_step_creation_wait_for_completion():
step = EksCreateFargateProfileStep("Create Fargate profile - WaitForCompletion",
integration_pattern=IntegrationPattern.WaitForCompletion,
parameters={
'ClusterName': 'MyCluster',
'FargateProfileName': 'MyFargateProfile',
'PodExecutionRoleArn': 'arn:aws:iam::123456789012:role/MyFargatePodExecutionRole',
'Selectors': [{
'Namespace': 'my-namespace',
'Labels': {'my-label': 'my-value'}
}],
'subnets': ['subnet-00000000000000000']
})
assert step.to_dict() == {
'Type': 'Task',
'Resource': 'arn:aws:states:::eks:createFargateProfile.sync',
'Parameters': {
'ClusterName': 'MyCluster',
'FargateProfileName': 'MyFargateProfile',
'PodExecutionRoleArn': 'arn:aws:iam::123456789012:role/MyFargatePodExecutionRole',
'Selectors': [{
'Namespace': 'my-namespace',
'Labels': {'my-label': 'my-value'}
}],
'subnets': ['subnet-00000000000000000']
},
'End': True
}
@patch.object(boto3.session.Session, 'region_name', 'us-east-1')
def test_eks_create_fargate_profile_step_creation_wait_for_task_token_raises_error():
error_message = re.escape(f"Integration Pattern ({IntegrationPattern.WaitForTaskToken.name}) is not supported for this step - "
f"Please use one of the following: "
f"{[IntegrationPattern.WaitForCompletion.name, IntegrationPattern.CallAndContinue.name]}")
with pytest.raises(ValueError, match=error_message):
EksCreateFargateProfileStep("Create Fargate profile - WaitForTaskToken",
integration_pattern=IntegrationPattern.WaitForTaskToken)
@patch.object(boto3.session.Session, 'region_name', 'us-east-1')
def test_eks_delete_fargate_profile_step_creation_call_and_continue():
step = EksDeleteFargateProfileStep("Delete Fargate profile - CallAndContinue",
integration_pattern=IntegrationPattern.CallAndContinue,
parameters={
'ClusterName': 'MyCluster',
'FargateProfileName': 'MyFargateProfile'
})
assert step.to_dict() == {
'Type': 'Task',
'Resource': 'arn:aws:states:::eks:deleteFargateProfile',
'Parameters': {
'ClusterName': 'MyCluster',
'FargateProfileName': 'MyFargateProfile'
},
'End': True
}
@patch.object(boto3.session.Session, 'region_name', 'us-east-1')
def test_eks_delete_fargate_profile_step_creation_default():
step = EksDeleteFargateProfileStep("Delete Fargate profile - default", parameters={
'ClusterName': 'MyCluster',
'FargateProfileName': 'MyFargateProfile'
})
assert step.to_dict() == {
'Type': 'Task',
'Resource': 'arn:aws:states:::eks:deleteFargateProfile.sync',
'Parameters': {
'ClusterName': 'MyCluster',
'FargateProfileName': 'MyFargateProfile'
},
'End': True
}
@patch.object(boto3.session.Session, 'region_name', 'us-east-1')
def test_eks_delete_fargate_profile_step_creation_wait_for_completion():
step = EksDeleteFargateProfileStep("Delete Fargate profile - WaitForCompletion",
integration_pattern=IntegrationPattern.WaitForCompletion,
parameters={
'ClusterName': 'MyCluster',
'FargateProfileName': 'MyFargateProfile'
})
assert step.to_dict() == {
'Type': 'Task',
'Resource': 'arn:aws:states:::eks:deleteFargateProfile.sync',
'Parameters': {
'ClusterName': 'MyCluster',
'FargateProfileName': 'MyFargateProfile'
},
'End': True
}
@patch.object(boto3.session.Session, 'region_name', 'us-east-1')
def test_eks_delete_fargate_profile_step_creation_wait_for_task_token_raises_error():
error_message = re.escape(f"Integration Pattern ({IntegrationPattern.WaitForTaskToken.name}) is not supported for this step - "
f"Please use one of the following: "
f"{[IntegrationPattern.WaitForCompletion.name, IntegrationPattern.CallAndContinue.name]}")
with pytest.raises(ValueError, match=error_message):
EksDeleteFargateProfileStep("Delete Fargate profile - WaitForTaskToken",
integration_pattern=IntegrationPattern.WaitForTaskToken)
@patch.object(boto3.session.Session, 'region_name', 'us-east-1')
def test_eks_create_nodegroup_step_creation_call_and_continue():
step = EksCreateNodegroupStep("Create Nodegroup - CallAndContinue",
integration_pattern=IntegrationPattern.CallAndContinue,
parameters={
'ClusterName': 'MyCluster',
'NodegroupName': 'MyNodegroup',
'NodeRole': 'arn:aws:iam::123456789012:role/MyNodeInstanceRole',
'Subnets': [
'subnet-00000000000000000',
'subnet-00000000000000001'
]
})
assert step.to_dict() == {
'Type': 'Task',
'Resource': 'arn:aws:states:::eks:createNodegroup',
'Parameters': {
'ClusterName': 'MyCluster',
'NodegroupName': 'MyNodegroup',
'NodeRole': 'arn:aws:iam::123456789012:role/MyNodeInstanceRole',
'Subnets': [
'subnet-00000000000000000',
'subnet-00000000000000001'
],
},
'End': True
}
@patch.object(boto3.session.Session, 'region_name', 'us-east-1')
def test_eks_create_nodegroup_step_creation_wait_for_completion():
step = EksCreateNodegroupStep("Create Nodegroup - WaitForCompletion",
integration_pattern=IntegrationPattern.WaitForCompletion,
parameters={
'ClusterName': 'MyCluster',
'NodegroupName': 'MyNodegroup',
'NodeRole': 'arn:aws:iam::123456789012:role/MyNodeInstanceRole',
'Subnets': [
'subnet-00000000000000000',
'subnet-00000000000000001'
]
})
assert step.to_dict() == {
'Type': 'Task',
'Resource': 'arn:aws:states:::eks:createNodegroup.sync',
'Parameters': {
'ClusterName': 'MyCluster',
'NodegroupName': 'MyNodegroup',
'NodeRole': 'arn:aws:iam::123456789012:role/MyNodeInstanceRole',
'Subnets': [
'subnet-00000000000000000',
'subnet-00000000000000001'
],
},
'End': True
}
@patch.object(boto3.session.Session, 'region_name', 'us-east-1')
def test_eks_create_nodegroup_step_creation_default():
step = EksCreateNodegroupStep("Create Nodegroup - default", parameters={
'ClusterName': 'MyCluster',
'NodegroupName': 'MyNodegroup',
'NodeRole': 'arn:aws:iam::123456789012:role/MyNodeInstanceRole',
'Subnets': [
'subnet-00000000000000000',
'subnet-00000000000000001'
]
})
assert step.to_dict() == {
'Type': 'Task',
'Resource': 'arn:aws:states:::eks:createNodegroup.sync',
'Parameters': {
'ClusterName': 'MyCluster',
'NodegroupName': 'MyNodegroup',
'NodeRole': 'arn:aws:iam::123456789012:role/MyNodeInstanceRole',
'Subnets': [
'subnet-00000000000000000',
'subnet-00000000000000001'
],
},
'End': True
}
@patch.object(boto3.session.Session, 'region_name', 'us-east-1')
def test_eks_create_nodegroup_step_creation_wait_for_task_token_raises_error():
error_message = re.escape(f"Integration Pattern ({IntegrationPattern.WaitForTaskToken.name}) is not supported for this step - "
f"Please use one of the following: "
f"{[IntegrationPattern.WaitForCompletion.name, IntegrationPattern.CallAndContinue.name]}")
with pytest.raises(ValueError, match=error_message):
EksCreateNodegroupStep("Create Nodegroup - WaitForTaskToken",
integration_pattern=IntegrationPattern.WaitForTaskToken)
@patch.object(boto3.session.Session, 'region_name', 'us-east-1')
def test_eks_delete_nodegroup_step_creation_call_and_continue():
step = EksDeleteNodegroupStep("Delete Nodegroup - CallAndContinue",
integration_pattern=IntegrationPattern.CallAndContinue,
parameters={
'ClusterName': 'MyCluster',
'NodegroupName': 'MyNodegroup'
})
assert step.to_dict() == {
'Type': 'Task',
'Resource': 'arn:aws:states:::eks:deleteNodegroup',
'Parameters': {
'ClusterName': 'MyCluster',
'NodegroupName': 'MyNodegroup'
},
'End': True
}
@patch.object(boto3.session.Session, 'region_name', 'us-east-1')
def test_eks_delete_nodegroup_step_creation_default():
step = EksDeleteNodegroupStep("Delete Nodegroup - default", parameters={
'ClusterName': 'MyCluster',
'NodegroupName': 'MyNodegroup'
})
assert step.to_dict() == {
'Type': 'Task',
'Resource': 'arn:aws:states:::eks:deleteNodegroup.sync',
'Parameters': {
'ClusterName': 'MyCluster',
'NodegroupName': 'MyNodegroup'
},
'End': True
}
@patch.object(boto3.session.Session, 'region_name', 'us-east-1')
def test_eks_delete_nodegroup_step_creation_wait_for_completion():
step = EksDeleteNodegroupStep("Delete Nodegroup - WaitForCompletion",
integration_pattern=IntegrationPattern.WaitForCompletion,
parameters={
'ClusterName': 'MyCluster',
'NodegroupName': 'MyNodegroup'
})
assert step.to_dict() == {
'Type': 'Task',
'Resource': 'arn:aws:states:::eks:deleteNodegroup.sync',
'Parameters': {
'ClusterName': 'MyCluster',
'NodegroupName': 'MyNodegroup'
},
'End': True
}
@patch.object(boto3.session.Session, 'region_name', 'us-east-1')
def test_eks_delete_nodegroup_step_creation_wait_for_task_token_raises_error():
error_message = re.escape(f"Integration Pattern ({IntegrationPattern.WaitForTaskToken.name}) is not supported for this step - "
f"Please use one of the following: "
f"{[IntegrationPattern.WaitForCompletion.name, IntegrationPattern.CallAndContinue.name]}")
with pytest.raises(ValueError, match=error_message):
EksDeleteNodegroupStep("Delete Nodegroup - WaitForTaskToken",
integration_pattern=IntegrationPattern.WaitForTaskToken)
@patch.object(boto3.session.Session, 'region_name', 'us-east-1')
def test_eks_run_job_step_creation_call_and_continue():
step = EksRunJobStep("Run Job - CallAndContinue", integration_pattern=IntegrationPattern.CallAndContinue, parameters={
'ClusterName': 'MyCluster',
'CertificateAuthority': 'ANPAJ2UCCR6DPCEXAMPLE',
'Endpoint': 'https://AKIAIOSFODNN7EXAMPLE.yl4.us-east-1.eks.amazonaws.com',
'Job': {
'apiVersion': 'batch/v1',
'kind': 'Job',
'metadata': {
'name': 'example-job'
},
'spec': {
'backoffLimit': 0,
'template': {
'metadata': {
'name': 'example-job'
},
'spec': {
'containers': [
{
'name': 'pi-2000',
'image': 'perl',
'command': ['perl'],
'args': [
'-Mbignum=bpi',
'-wle',
'print bpi(2000)'
]
}
],
'restartPolicy': 'Never'
}
}
}
}
})
assert step.to_dict() == {
'Type': 'Task',
'Resource': 'arn:aws:states:::eks:runJob',
'Parameters': {
'CertificateAuthority': 'ANPAJ2UCCR6DPCEXAMPLE',
'ClusterName': 'MyCluster',
'Endpoint': 'https://AKIAIOSFODNN7EXAMPLE.yl4.us-east-1.eks.amazonaws.com',
'Job': {
'apiVersion': 'batch/v1',
'kind': 'Job',
'metadata': {'name': 'example-job'},
'spec': {
'backoffLimit': 0,
'template': {
'metadata': {'name': 'example-job'},
'spec': {
'containers': [{
'args': ['-Mbignum=bpi',
'-wle',
'print '
'bpi(2000)'],
'command': ['perl'],
'image': 'perl',
'name': 'pi-2000'}],
'restartPolicy': 'Never'}
}
}
}
},
'End': True
}
@patch.object(boto3.session.Session, 'region_name', 'us-east-1')
def test_eks_run_job_step_creation_default():
step = EksRunJobStep("Run Job - default", parameters={
'ClusterName': 'MyCluster',
'CertificateAuthority': 'ANPAJ2UCCR6DPCEXAMPLE',
'Endpoint': 'https://AKIAIOSFODNN7EXAMPLE.yl4.us-east-1.eks.amazonaws.com',
'LogOptions': {
'RetrieveLogs': True
},
'Job': {
'apiVersion': 'batch/v1',
'kind': 'Job',
'metadata': {
'name': 'example-job'
},
'spec': {
'backoffLimit': 0,
'template': {
'metadata': {
'name': 'example-job'
},
'spec': {
'containers': [
{
'name': 'pi-2000',
'image': 'perl',
'command': ['perl'],
'args': [
'-Mbignum=bpi',
'-wle',
'print bpi(2000)'
]
}
],
'restartPolicy': 'Never'
}
}
}
}
})
assert step.to_dict() == {
'Type': 'Task',
'Resource': 'arn:aws:states:::eks:runJob.sync',
'Parameters': {
'CertificateAuthority': 'ANPAJ2UCCR6DPCEXAMPLE',
'ClusterName': 'MyCluster',
'Endpoint': 'https://AKIAIOSFODNN7EXAMPLE.yl4.us-east-1.eks.amazonaws.com',
'LogOptions': {
'RetrieveLogs': True
},
'Job': {
'apiVersion': 'batch/v1',
'kind': 'Job',
'metadata': {'name': 'example-job'},
'spec': {
'backoffLimit': 0,
'template': {
'metadata': {'name': 'example-job'},
'spec': {
'containers': [{
'args': ['-Mbignum=bpi',
'-wle',
'print '
'bpi(2000)'],
'command': ['perl'],
'image': 'perl',
'name': 'pi-2000'}],
'restartPolicy': 'Never'}
}
}
}
},
'End': True
}
@patch.object(boto3.session.Session, 'region_name', 'us-east-1')
def test_eks_run_job_step_creation_wait_for_completion():
step = EksRunJobStep("Run Job - WaitForCompletion", integration_pattern=IntegrationPattern.WaitForCompletion,
parameters={
'ClusterName': 'MyCluster',
'CertificateAuthority': 'ANPAJ2UCCR6DPCEXAMPLE',
'Endpoint': 'https://AKIAIOSFODNN7EXAMPLE.yl4.us-east-1.eks.amazonaws.com',
'LogOptions': {
'RetrieveLogs': True
},
'Job': {
'apiVersion': 'batch/v1',
'kind': 'Job',
'metadata': {
'name': 'example-job'
},
'spec': {
'backoffLimit': 0,
'template': {
'metadata': {
'name': 'example-job'
},
'spec': {
'containers': [
{
'name': 'pi-2000',
'image': 'perl',
'command': ['perl'],
'args': [
'-Mbignum=bpi',
'-wle',
'print bpi(2000)'
]
}
],
'restartPolicy': 'Never'
}
}
}
}
})
assert step.to_dict() == {
'Type': 'Task',
'Resource': 'arn:aws:states:::eks:runJob.sync',
'Parameters': {
'CertificateAuthority': 'ANPAJ2UCCR6DPCEXAMPLE',
'ClusterName': 'MyCluster',
'Endpoint': 'https://AKIAIOSFODNN7EXAMPLE.yl4.us-east-1.eks.amazonaws.com',
'LogOptions': {
'RetrieveLogs': True
},
'Job': {
'apiVersion': 'batch/v1',
'kind': 'Job',
'metadata': {'name': 'example-job'},
'spec': {
'backoffLimit': 0,
'template': {
'metadata': {'name': 'example-job'},
'spec': {
'containers': [{
'args': ['-Mbignum=bpi',
'-wle',
'print '
'bpi(2000)'],
'command': ['perl'],
'image': 'perl',
'name': 'pi-2000'}],
'restartPolicy': 'Never'}
}
}
}
},
'End': True
}
@patch.object(boto3.session.Session, 'region_name', 'us-east-1')
def test_eks_run_job_step_creation_wait_for_task_token_raises_error():
error_message = re.escape(f"Integration Pattern ({IntegrationPattern.WaitForTaskToken.name}) is not supported for this step - "
f"Please use one of the following: "
f"{[IntegrationPattern.WaitForCompletion.name, IntegrationPattern.CallAndContinue.name]}")
with pytest.raises(ValueError, match=error_message):
EksRunJobStep("Run Job - WaitForTaskToken", integration_pattern=IntegrationPattern.WaitForTaskToken)
@patch.object(boto3.session.Session, 'region_name', 'us-east-1')
def test_eks_call_step_creation_default():
step = EksCallStep("Call - default", parameters={
'ClusterName': 'MyCluster',
'CertificateAuthority': 'ANPAJ2UCCR6DPCEXAMPLE',
'Endpoint': 'https://444455556666.yl4.us-east-1.eks.amazonaws.com',
'Method': 'GET',
'Path': '/api/v1/namespaces/default/pods',
'QueryParameters': {
'labelSelector': [
'job-name=example-job'
]
}
})
assert step.to_dict() == {
'Type': 'Task',
'Resource': 'arn:aws:states:::eks:call',
'Parameters': {
'ClusterName': 'MyCluster',
'CertificateAuthority': 'ANPAJ2UCCR6DPCEXAMPLE',
'Endpoint': 'https://444455556666.yl4.us-east-1.eks.amazonaws.com',
'Method': 'GET',
'Path': '/api/v1/namespaces/default/pods',
'QueryParameters': {
'labelSelector': [
'job-name=example-job'
]
}
},
'End': True
}
@patch.object(boto3.session.Session, 'region_name', 'us-east-1')
def test_eks_call_step_creation_call_and_continue():
step = EksCallStep("Call - default", integration_pattern=IntegrationPattern.CallAndContinue, parameters={
'ClusterName': 'MyCluster',
'CertificateAuthority': 'ANPAJ2UCCR6DPCEXAMPLE',
'Endpoint': 'https://444455556666.yl4.us-east-1.eks.amazonaws.com',
'Method': 'GET',
'Path': '/api/v1/namespaces/default/pods',
'QueryParameters': {
'labelSelector': [
'job-name=example-job'
]
}
})
assert step.to_dict() == {
'Type': 'Task',
'Resource': 'arn:aws:states:::eks:call',
'Parameters': {
'ClusterName': 'MyCluster',
'CertificateAuthority': 'ANPAJ2UCCR6DPCEXAMPLE',
'Endpoint': 'https://444455556666.yl4.us-east-1.eks.amazonaws.com',
'Method': 'GET',
'Path': '/api/v1/namespaces/default/pods',
'QueryParameters': {
'labelSelector': [
'job-name=example-job'
]
}
},
'End': True
}
@pytest.mark.parametrize("integration_pattern", [
IntegrationPattern.WaitForTaskToken,
IntegrationPattern.WaitForCompletion
])
@patch.object(boto3.session.Session, 'region_name', 'us-east-1')
def test_eks_call_step_creation_with_unsupported_integration_pattern_raises_error(integration_pattern):
error_message = re.escape(f"Integration Pattern ({integration_pattern.name}) is not supported for this step - "
f"Please use one of the following: "
f"{[IntegrationPattern.CallAndContinue.name]}")
with pytest.raises(ValueError, match=error_message):
EksCallStep("Call with unsupported integration pattern", integration_pattern=integration_pattern)
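# Nested workflow executions: WaitForCompletion (the default) maps to
# 'states:startExecution.sync:2' (the ':2' variant returns the execution output
# as parsed JSON), CallAndContinue to 'states:startExecution', and
# WaitForTaskToken to 'states:startExecution.waitForTaskToken'.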
@patch.object(boto3.session.Session, 'region_name', 'us-east-1')
def test_step_functions_start_execution_step_creation_default():
step = StepFunctionsStartExecutionStep(
"SFN Start Execution", parameters={
"StateMachineArn": "arn:aws:states:us-east-1:123456789012:stateMachine:HelloWorld",
"Name": "ExecutionName"
})
assert step.to_dict() == {
"Type": "Task",
"Resource": "arn:aws:states:::states:startExecution.sync:2",
"Parameters": {
"StateMachineArn": "arn:aws:states:us-east-1:123456789012:stateMachine:HelloWorld",
"Name": "ExecutionName"
},
"End": True
}
@patch.object(boto3.session.Session, 'region_name', 'us-east-1')
def test_step_functions_start_execution_step_creation_call_and_continue():
step = StepFunctionsStartExecutionStep(
"SFN Start Execution", integration_pattern=IntegrationPattern.CallAndContinue, parameters={
"StateMachineArn": "arn:aws:states:us-east-1:123456789012:stateMachine:HelloWorld",
"Name": "ExecutionName"
})
assert step.to_dict() == {
"Type": "Task",
"Resource": "arn:aws:states:::states:startExecution",
"Parameters": {
"StateMachineArn": "arn:aws:states:us-east-1:123456789012:stateMachine:HelloWorld",
"Name": "ExecutionName"
},
"End": True
}
@patch.object(boto3.session.Session, 'region_name', 'us-east-1')
def test_step_functions_start_execution_step_creation_wait_for_completion():
step = StepFunctionsStartExecutionStep(
"SFN Start Execution - Sync", integration_pattern=IntegrationPattern.WaitForCompletion, parameters={
"StateMachineArn": "arn:aws:states:us-east-1:123456789012:stateMachine:HelloWorld",
"Name": "ExecutionName"
})
assert step.to_dict() == {
"Type": "Task",
"Resource": "arn:aws:states:::states:startExecution.sync:2",
"Parameters": {
"StateMachineArn": "arn:aws:states:us-east-1:123456789012:stateMachine:HelloWorld",
"Name": "ExecutionName"
},
"End": True
}
@patch.object(boto3.session.Session, 'region_name', 'us-east-1')
def test_step_functions_start_execution_step_creation_wait_for_task_token():
step = StepFunctionsStartExecutionStep(
"SFN Start Execution - Wait for Callback", integration_pattern=IntegrationPattern.WaitForTaskToken,
parameters={
"Input": {
"token.$": "$$.Task.Token"
},
"StateMachineArn": "arn:aws:states:us-east-1:123456789012:stateMachine:HelloWorld",
"Name": "ExecutionName"
})
assert step.to_dict() == {
"Type": "Task",
"Resource": "arn:aws:states:::states:startExecution.waitForTaskToken",
"Parameters": {
"Input": {
"token.$": "$$.Task.Token"
},
"StateMachineArn": "arn:aws:states:us-east-1:123456789012:stateMachine:HelloWorld",
"Name": "ExecutionName"
},
"End": True
}
@pytest.mark.parametrize("integration_pattern", [
None,
"ServiceIntegrationTypeStr",
0
])
@patch.object(boto3.session.Session, 'region_name', 'us-east-1')
def test_step_functions_start_execution_step_creation_invalid_integration_pattern_raises_type_error(integration_pattern):
with pytest.raises(TypeError):
StepFunctionsStartExecutionStep("SFN Start Execution - invalid ServiceType", integration_pattern=integration_pattern)
| 37.645102 | 228 | 0.496416 | 4,372 | 61,098 | 6.803522 | 0.084858 | 0.016541 | 0.019297 | 0.040208 | 0.902471 | 0.843335 | 0.835838 | 0.819533 | 0.804404 | 0.793175 | 0 | 0.036508 | 0.379521 | 61,098 | 1,622 | 229 | 37.668311 | 0.748114 | 0.009051 | 0 | 0.69337 | 0 | 0 | 0.344 | 0.118962 | 0 | 0 | 0 | 0 | 0.03384 | 1 | 0.035912 | false | 0 | 0.008978 | 0 | 0.04489 | 0.004144 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
96212372e529ab189b119a210691b84330948ff5 | 38 | py | Python | Demo_XRD_patterns_from_dpp/ds_cake/__init__.py | SHDShim/PMatRes | 92440c11f2723861dbb82cecdc321fcef9de4443 | [
"Apache-2.0"
] | 15 | 2017-09-02T13:55:35.000Z | 2022-03-26T08:20:16.000Z | Demo_XRD_patterns_from_dpp/ds_cake/__init__.py | SHDShim/PMatRes | 92440c11f2723861dbb82cecdc321fcef9de4443 | [
"Apache-2.0"
] | null | null | null | Demo_XRD_patterns_from_dpp/ds_cake/__init__.py | SHDShim/PMatRes | 92440c11f2723861dbb82cecdc321fcef9de4443 | [
"Apache-2.0"
] | 2 | 2018-05-16T13:32:08.000Z | 2019-06-16T08:09:38.000Z | from .DiffractionImage import DiffImg
| 19 | 37 | 0.868421 | 4 | 38 | 8.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 38 | 1 | 38 | 38 | 0.970588 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
828235384dd11d44fa4ebfce4575eeb51d9ccdcd | 256 | py | Python | gaqqie_rainbow/rest/api/__init__.py | gaqqie/gaqqie-rainbow | 307086a89d91aaf911f3094415ca2447f3c190bd | [
"Apache-2.0"
] | 1 | 2021-08-31T05:18:12.000Z | 2021-08-31T05:18:12.000Z | gaqqie_rainbow/rest/api/__init__.py | gaqqie/gaqqie-rainbow | 307086a89d91aaf911f3094415ca2447f3c190bd | [
"Apache-2.0"
] | null | null | null | gaqqie_rainbow/rest/api/__init__.py | gaqqie/gaqqie-rainbow | 307086a89d91aaf911f3094415ca2447f3c190bd | [
"Apache-2.0"
] | null | null | null | from __future__ import absolute_import
# flake8: noqa
# import apis into api package
from gaqqie_rainbow.rest.api.device_api import DeviceApi
from gaqqie_rainbow.rest.api.job_api import JobApi
from gaqqie_rainbow.rest.api.provider_api import ProviderApi
| 28.444444 | 60 | 0.847656 | 39 | 256 | 5.282051 | 0.487179 | 0.145631 | 0.247573 | 0.305825 | 0.349515 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004367 | 0.105469 | 256 | 8 | 61 | 32 | 0.895197 | 0.160156 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8291441abbe17cf2b4768973cea02579c525d5c5 | 28 | py | Python | biopymlff/calculators/__init__.py | saandre15/biopymlff | ec90370a8c03c51426bd24477034c9413bdcdb04 | [
"MIT"
] | null | null | null | biopymlff/calculators/__init__.py | saandre15/biopymlff | ec90370a8c03c51426bd24477034c9413bdcdb04 | [
"MIT"
] | null | null | null | biopymlff/calculators/__init__.py | saandre15/biopymlff | ec90370a8c03c51426bd24477034c9413bdcdb04 | [
"MIT"
] | null | null | null | " Interface to calculators " | 28 | 28 | 0.785714 | 3 | 28 | 7.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 28 | 1 | 28 | 28 | 0.916667 | 0.857143 | 0 | 0 | 0 | 0 | 0.896552 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
82c61360be9be242c3e2b40165b317ead80f854e | 24 | py | Python | models/__init__.py | vsecoder/iplogger | 42571cf72f61107ed4a70ba5c1ab7055a5146583 | [
"MIT"
] | null | null | null | models/__init__.py | vsecoder/iplogger | 42571cf72f61107ed4a70ba5c1ab7055a5146583 | [
"MIT"
] | null | null | null | models/__init__.py | vsecoder/iplogger | 42571cf72f61107ed4a70ba5c1ab7055a5146583 | [
"MIT"
] | null | null | null | from .logs import Logs
| 12 | 23 | 0.75 | 4 | 24 | 4.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.208333 | 24 | 1 | 24 | 24 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
82e6d3c96a195305976e2ac8f4ec39ea9a1790b9 | 18,982 | py | Python | tests.py | fictorial/python-object-persistence | 93331e6e94ef356fe51ad377a67ed85225406943 | [
"MIT"
] | 1 | 2020-01-18T01:56:47.000Z | 2020-01-18T01:56:47.000Z | tests.py | fictorial/python-object-persistence | 93331e6e94ef356fe51ad377a67ed85225406943 | [
"MIT"
] | null | null | null | tests.py | fictorial/python-object-persistence | 93331e6e94ef356fe51ad377a67ed85225406943 | [
"MIT"
] | 1 | 2021-03-28T05:23:17.000Z | 2021-03-28T05:23:17.000Z | import os
import sqlite3
from datetime import datetime, timedelta
import pytest
import persistent
class A(persistent.Persistent):
pass
class B(persistent.Persistent):
references = [ 'ref0' ]
class C(persistent.Persistent):
references = [ 'ref0', 'ref1' ]
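# Minimal round-trip sketch of the API under test (a usage note, not an extra
# test; connect() with no arguments appears to use an in-memory database, and
# the file-backed variant is exercised in test_connect_file):
#   persistent.connect()
#   a = A(); a.foo = 1; a.save()
#   assert persistent.get(a.id).foo == 1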
def test_connect():
persistent.connect()
def test_connect_debug():
persistent.connect(debug=True)
def test_connect_no_cache():
persistent.connect(cache_size=0)
def test_connect_file():
try:
persistent.connect(db_path='.test.sqlite3')
finally:
try:
os.remove('.test.sqlite3')
except:
pass
def test_subclass_create():
persistent.connect(debug=True)
a = A()
def test_dirty():
persistent.connect(debug=True)
a = A()
assert a.is_dirty
def test_mark_clean():
persistent.connect(debug=True)
a = A()
assert a.is_dirty
a.mark_clean()
assert not a.is_dirty
a.mark_clean()
assert not a.is_dirty
def test_save_empty_object():
persistent.connect(debug=True)
a = A()
a.save()
def test_save_empty_object_not_dirty():
persistent.connect(debug=True)
a = A()
a.save()
assert not a.is_dirty
assert a.save() == a
def test_save_and_load_empty():
persistent.connect(debug=True)
a = A()
a.save()
b = persistent.get(a.id)
assert b.id == a.id
def test_save_and_load_with_attribs():
persistent.connect(debug=True)
a = A()
a.foo = 1
a.save()
b = persistent.get(a.id)
assert b.id == a.id
assert b.foo == a.foo
def test_save_and_load_with_attribs_multi():
persistent.connect(debug=True)
a = A()
a.foo = 1
a.save()
a.bar = 2
a.save() # not new so update
b = persistent.get(a.id)
assert b.id == a.id
assert b.foo == a.foo
def test_with_references():
persistent.connect(debug=True)
b = B()
c = A()
c.foo = 1
b.foo = 2
b.ref0 = c
bp = b._with_references()
assert type(bp.ref0) is str
assert bp.ref0 == c.id
assert b.save() # c will be saved too
assert not b.is_dirty
assert not c.is_dirty
def test_refs_save_reload():
persistent.connect(debug=True)
b = B()
c = A()
c.foo = 1
b.foo = 2
b.ref0 = c
b.save()
bp = persistent.get(b.id)
assert isinstance(bp.ref0, A)
assert bp.ref0.id == c.id
def test_refs_save_reload_multi():
persistent.connect(debug=True)
b = B()
c = A()
c.foo = 1
b.foo = 2
b.ref0 = c
b.save()
b.bar = 1
b.save() # not new so trigger update
bp = persistent.get(b.id)
assert isinstance(bp.ref0, A)
assert bp.ref0.id == c.id
def test_multiple_refs_save_reload():
persistent.connect(debug=True)
c = C()
a0 = A()
a0.foo = 1
a1 = A()
a1.foo = 2
c.foo = 3
c.ref0 = a0
c.ref1 = a1
c.save()
cp = persistent.get(c.id)
assert isinstance(cp.ref0, A)
assert isinstance(cp.ref1, A)
assert cp.ref0.id == a0.id
assert cp.ref1.id == a1.id
assert cp.ref0.foo == a0.foo
assert cp.ref1.foo == a1.foo
def test_get_unknown():
persistent.connect(debug=True)
with pytest.raises(persistent.NotFoundError):
persistent.get('whatever')
def test_unique_index():
persistent.connect(debug=True)
persistent.add_index(['a', 'b.c'], unique=True)
x = A()
x.a = 1
x.b = dict(c=1)
x.save() # OK
y = A()
y.a = 1
y.b = dict(c=1)
with pytest.raises(persistent.UniquenessError):
y.save()
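# Query API: persistent.Query(SomeClass) scopes matches to one Persistent
# subclass, while persistent.Query() searches every stored object. Conditions
# chained on a single Query are ANDed; persistent.OrQuery(q1, q2) combines
# whole queries disjunctively (see test_or_query).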
def test_query_can_create():
persistent.Query(A)
def test_query_can_create_without_class():
persistent.Query()
def test_query_all_objects_one():
persistent.connect(debug=True)
a = A()
a.foo = 1
a.save()
objects = persistent.Query().find()
assert objects[0].id == a.id
def test_query_all_objects_many():
persistent.connect(debug=True)
a0 = A()
a0.foo = 1
a0.save()
a1 = A()
a1.foo = 2
a1.save()
objects = persistent.Query().find()
assert objects[0].id == a0.id
assert objects[1].id == a1.id
def test_query_all_A_objects_one():
persistent.connect(debug=True)
a = A()
a.foo = 1
a.save()
objects = persistent.Query(A).find() # scoped
assert objects[0].id == a.id
def test_query_all_A_objects_many():
persistent.connect(debug=True)
a0 = A()
a0.foo = 1
a0.save()
a1 = A()
a1.foo = 2
a1.save()
objects = persistent.Query(A).find()
assert objects[0].id == a0.id
assert objects[1].id == a1.id
def test_query_first_A_objects_many():
persistent.connect(debug=True)
a0 = A()
a0.foo = 1
a0.save()
a1 = A()
a1.foo = 2
a1.save()
obj = persistent.Query(A).first()
assert obj.id == a0.id
def test_exists():
persistent.connect(debug=True)
a0 = A()
a0.foo = 1
a0.save()
objs = persistent.Query(A).exists('foo').find()
assert len(objs) == 1
assert objs[0].id == a0.id
def test_does_not_exist():
persistent.connect(debug=True)
a0 = A()
a0.foo = 1
a0.save()
a1 = A()
a1.bar = 1
a1.save()
objs = persistent.Query(A).does_not_exist('foo').find()
assert len(objs) == 1
assert objs[0].id == a1.id
def test_equal_to():
persistent.connect(debug=True)
a0 = A()
a0.foo = 1
a0.save()
a1 = A()
a1.foo = 1
a1.save()
objs = persistent.Query(A).equal_to('foo', 1).find()
assert len(objs) == 2
ids = [obj.id for obj in objs]
assert a0.id in ids
assert a1.id in ids
def test_equal_to_list():
persistent.connect(debug=True)
a0 = A()
a0.foo = [1,2,3]
a0.save()
a1 = A()
a1.foo = 1
a1.save()
objs = persistent.Query(A).equal_to('foo', [1,2,3]).find()
assert len(objs) == 1
assert objs[0].id == a0.id
def test_not_equal_to():
persistent.connect(debug=True)
a0 = A()
a0.foo = 1
a0.save()
a1 = A()
a1.foo = 2
a1.save()
objs = persistent.Query(A).not_equal_to('foo', 1).find()
assert len(objs) == 1
assert objs[0].id == a1.id
def test_not_equal_to_list():
persistent.connect(debug=True)
a0 = A()
a0.foo = [1,2,3]
a0.save()
a1 = A()
a1.foo = 2
a1.save()
objs = persistent.Query(A).not_equal_to('foo', [1,2,3]).find()
assert len(objs) == 1
assert objs[0].id == a1.id
def test_greater_than():
persistent.connect(debug=True)
a0 = A()
a0.foo = 1
a0.save()
a1 = A()
a1.foo = 2
a1.save()
objs = persistent.Query(A).greater_than('foo', 1).find()
assert len(objs) == 1
assert objs[0].id == a1.id
def test_greater_than_with_list():
"""With a list, the operand tests against the *length* of the list"""
persistent.connect(debug=True)
a0 = A()
a0.foo = [1,2,3]
a0.save()
a1 = A()
a1.foo = 2
a1.save()
objs = persistent.Query(A).greater_than('foo', 2, is_list=True).find()
assert len(objs) == 1
assert objs[0].id == a0.id
def test_greater_than_or_equal_to():
persistent.connect(debug=True)
a0 = A()
a0.foo = 1
a0.save()
a1 = A()
a1.foo = 2
a1.save()
a2 = A()
a2.foo = 0
a2.save()
objs = persistent.Query(A).greater_than_or_equal_to('foo', 1).find()
assert len(objs) == 2
ids = [obj.id for obj in objs]
assert a0.id in ids
assert a1.id in ids
def test_greater_than_or_equal_to_with_list():
persistent.connect(debug=True)
a0 = A()
a0.foo = [1]
a0.save()
a1 = A()
a1.foo = [1,2,3]
a1.save()
a2 = A()
a2.foo = [1,2]
a2.save()
q = persistent.Query(A)
q.greater_than_or_equal_to('foo', 2, is_list=True)
objs = q.find()
assert len(objs) == 2
ids = [obj.id for obj in objs]
assert a1.id in ids
assert a2.id in ids
def test_less_than():
persistent.connect(debug=True)
a0 = A()
a0.foo = 1
a0.save()
a1 = A()
a1.foo = 2
a1.save()
objs = persistent.Query(A).less_than('foo', 2).find()
assert len(objs) == 1
assert objs[0].id == a0.id
def test_less_than_with_list():
persistent.connect(debug=True)
a0 = A()
a0.foo = [1,2,3]
a0.save()
a1 = A()
a1.foo = [1,2]
a1.save()
objs = persistent.Query(A).less_than('foo', 3, is_list=True).find()
assert len(objs) == 1
assert objs[0].id == a1.id
def test_less_than_or_equal_to():
persistent.connect(debug=True)
a0 = A()
a0.foo = 1
a0.save()
a1 = A()
a1.foo = 2
a1.save()
objs = persistent.Query(A).less_than_or_equal_to('foo', 2).find()
assert len(objs) == 2
ids = [obj.id for obj in objs]
assert a0.id in ids
assert a1.id in ids
def test_less_than_or_equal_to_with_list():
persistent.connect(debug=True)
a0 = A()
a0.foo = [1,2,3]
a0.save()
a1 = A()
a1.foo = [1,2]
a1.save()
a2 = A()
a2.foo = [1,2,3,4]
a2.save()
objs = persistent.Query(A).less_than_or_equal_to('foo', 3, is_list=True).find()
assert len(objs) == 2
ids = [obj.id for obj in objs]
assert a0.id in ids
assert a1.id in ids
def test_contained_in():
persistent.connect(debug=True)
a0 = A()
a0.foo = 1
a0.save()
a1 = A()
a1.foo = 2
a1.save()
objs = persistent.Query(A).contained_in('foo', (1,3,5)).find()
assert len(objs) == 1
assert objs[0].id == a0.id
def test_not_contained_in():
persistent.connect(debug=True)
a0 = A()
a0.foo = 1
a0.save()
a1 = A()
a1.foo = 2
a1.save()
objs = persistent.Query(A).not_contained_in('foo', (1,3,5)).find()
assert len(objs) == 1
assert objs[0].id == a1.id
def test_contains_str():
persistent.connect(debug=True)
a0 = A()
a0.foo = 'abc'
a0.save()
a1 = A()
a1.foo = 'cde'
a1.save()
objs = persistent.Query(A).contains('foo', 'c').find()
assert len(objs) == 2
ids = [obj.id for obj in objs]
assert a0.id in ids
assert a1.id in ids
def test_starts_with_str():
persistent.connect(debug=True)
a0 = A()
a0.foo = 'abc'
a0.save()
a1 = A()
a1.foo = 'cde'
a1.save()
objs = persistent.Query(A).starts_with('foo', 'ab').find()
assert len(objs) == 1
assert objs[0].id == a0.id
def test_ends_with_str():
persistent.connect(debug=True)
a0 = A()
a0.foo = 'abc'
a0.save()
a1 = A()
a1.foo = 'cde'
a1.save()
objs = persistent.Query(A).ends_with('foo', 'de').find()
assert len(objs) == 1
assert objs[0].id == a1.id
def test_contains_str_case_insensitive():
persistent.connect(debug=True)
a0 = A()
a0.foo = 'abc'
a0.save()
a1 = A()
a1.foo = 'cde'
a1.save()
objs = persistent.Query(A).contains('foo', 'C', case_insensitive=True).find()
assert len(objs) == 2
ids = [obj.id for obj in objs]
assert a0.id in ids
assert a1.id in ids
def test_starts_with_str_case_insensitive():
persistent.connect(debug=True)
a0 = A()
a0.foo = 'abc'
a0.save()
a1 = A()
a1.foo = 'cde'
a1.save()
objs = persistent.Query(A).starts_with('foo', 'aB', case_insensitive=True).find()
assert len(objs) == 1
assert objs[0].id == a0.id
def test_ends_with_str_case_insensitive():
persistent.connect(debug=True)
a0 = A()
a0.foo = 'abc'
a0.save()
a1 = A()
a1.foo = 'cde'
a1.save()
objs = persistent.Query(A).ends_with('foo', 'DE', case_insensitive=True).find()
assert len(objs) == 1
assert objs[0].id == a1.id
def test_matches_regex():
persistent.connect(debug=True)
a0 = A()
a0.foo = 'abc'
a0.save()
a1 = A()
a1.foo = 'cde'
a1.save()
objs = persistent.Query(A).matches('foo', r'^a').find()
assert len(objs) == 1
assert objs[0].id == a0.id
def test_matches_regex_case_insensitive():
persistent.connect(debug=True)
a0 = A()
a0.foo = 'abc'
a0.save()
a1 = A()
a1.foo = 'cde'
a1.save()
objs = persistent.Query(A).matches('foo', r'^CDE$', case_insensitive=True).find()
assert len(objs) == 1
assert objs[0].id == a1.id
def test_sort_ascending_one():
persistent.connect(debug=True)
a0 = A()
a0.foo = 1
a0.save()
a1 = A()
a1.foo = 2
a1.save()
q = persistent.Query(A)
q.exists('foo')
q.ascending('foo')
objs = q.find()
assert len(objs) == 2
assert objs[0].id == a0.id
assert objs[1].id == a1.id
def test_sort_descending_one():
persistent.connect(debug=True)
a0 = A()
a0.foo = 1
a0.save()
a1 = A()
a1.foo = 2
a1.save()
q = persistent.Query(A)
q.exists('foo')
q.descending('foo')
objs = q.find()
assert len(objs) == 2
assert objs[0].id == a1.id
assert objs[1].id == a0.id
def test_sort_ascending_multi():
persistent.connect(debug=True)
a0 = A()
a0.foo = 1
a0.bar = 0
a0.save()
a1 = A()
a1.foo = 1
a1.bar = 2
a1.save()
q = persistent.Query(A)
q.exists('foo')
q.ascending('foo')
q.ascending('bar')
objs = q.find()
assert len(objs) == 2
assert objs[0].id == a0.id
assert objs[1].id == a1.id
def test_sort_descending_multi():
persistent.connect(debug=True)
a0 = A()
a0.foo = 1
a0.bar = 0
a0.save()
a1 = A()
a1.foo = 1
a1.bar = 2
a1.save()
q = persistent.Query(A)
q.exists('foo')
q.descending('foo')
q.descending('bar')
objs = q.find()
assert len(objs) == 2
assert objs[0].id == a1.id
assert objs[1].id == a0.id
def test_query_skip():
persistent.connect(debug=True)
a0 = A()
a0.foo = 1
a0.save()
a1 = A()
a1.foo = 2
a1.save()
q = persistent.Query(A)
q.exists('foo')
q.descending('foo')
q.skip(1)
objs = q.find()
assert len(objs) == 1
assert objs[0].id == a0.id
def test_query_count():
persistent.connect(debug=True)
a0 = A()
a0.foo = 1
a0.save()
a1 = A()
a1.foo = 2
a1.save()
q = persistent.Query(A)
q.exists('foo')
assert q.count() == 2
def test_query_count_nothing():
persistent.connect(debug=True)
a0 = A()
a0.foo = 1
a0.save()
a1 = A()
a1.foo = 2
a1.save()
q = persistent.Query(A)
q.exists('bar')
assert q.count() == 0
def test_query_find_nothing():
persistent.connect(debug=True)
a0 = A()
a0.foo = 1
a0.save()
a1 = A()
a1.foo = 2
a1.save()
q = persistent.Query(A)
q.exists('bar')
    assert q.find() is None
def test_query_invalid_limit():
persistent.connect(debug=True)
q = persistent.Query(A)
with pytest.raises(ValueError):
q.limit(-1)
def test_query_invalid_skip():
persistent.connect(debug=True)
q = persistent.Query(A)
with pytest.raises(ValueError):
q.skip(-1)
def test_or_query():
persistent.connect(debug=True)
a0 = A()
a0.foo = 1
a0.save()
a1 = A()
a1.foo = 2
a1.save()
a2 = A()
a2.bar = 2
a2.save()
q1 = persistent.Query(A).exists('bar')
q2 = persistent.Query(A).equal_to('foo', 1)
q = persistent.OrQuery(q1, q2)
objs = q.find()
assert len(objs) == 2
ids = [obj.id for obj in objs]
assert a0.id in ids
assert a2.id in ids
def test_query_multiple_AND_conditions():
persistent.connect(debug=True)
a0 = A()
a0.foo = 1
a0.bar = 2
a0.save()
a1 = A()
a1.foo = 1
a1.bar = 3
a1.save()
q = persistent.Query(A)
q.equal_to('foo', 1)
q.greater_than('bar', 2)
objs = q.find()
assert len(objs) == 1
assert objs[0].id == a1.id
def test_should_fail_when_dimwit_alters_table():
import persistent.database
persistent.connect(debug=True)
a0 = A()
a0.foo = 1
persistent.database.connection.execute("""
ALTER TABLE objects RENAME TO whatever
""")
with pytest.raises(sqlite3.OperationalError):
a0.save()
def test_delete():
persistent.connect(debug=True)
a0 = A()
a0.foo = 1
a0.save()
a1 = persistent.get(a0.id)
assert a1.id == a0.id
a0.delete()
with pytest.raises(persistent.NotFoundError):
persistent.get(a0.id)
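# Transactions: persistent.transaction() is a context manager, and the
# save()/delete() calls inside it pass use_transaction=False so they join the
# enclosing transaction rather than opening their own.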
def test_transaction():
persistent.connect(debug=True)
a0 = A()
a0.foo = 1
a1 = A()
a1.foo = 1
with persistent.transaction():
a0.save(use_transaction=False)
a1.save(use_transaction=False)
assert persistent.get(a0.id).id == a0.id
assert persistent.get(a1.id).id == a1.id
def test_transaction_delete():
persistent.connect(debug=True)
a0 = A()
a0.foo = 1
a1 = A()
a1.foo = 1
with persistent.transaction():
a0.delete(use_transaction=False)
a1.save(use_transaction=False)
assert persistent.get(a1.id).id == a1.id
with pytest.raises(persistent.NotFoundError):
persistent.get(a0.id)
def test_query_ref():
persistent.connect(debug=True)
a = A()
a.foo = 1
a.save()
b = B()
b.ref0 = a
b.save()
q = persistent.Query(B)
q.equal_to('ref0', a)
assert q.count() == 1
def test_query_matches_query():
persistent.connect(debug=True)
a = A()
a.foo = 1
a.save()
# b0 references a
b0 = B()
b0.foo = 2
b0.ref0 = a
b0.save()
# No reference for b1
b1 = B()
b1.foo = 2
b1.save()
# Find all As where foo==1
qa = persistent.Query(A)
qa.equal_to('foo', 1)
# Find all Bs that have a reference to any
# of the objects matched by query qa
qb = persistent.Query(B)
qb.matches_query('ref0', qa)
objs = qb.find()
assert len(objs) == 1
assert objs[0].id == b0.id
def test_query_does_not_match_query():
persistent.connect(debug=True)
a = A()
a.foo = 1
a.save()
a1 = A()
a1.foo = 2
a1.save()
# b0 references a
b0 = B()
b0.foo = 2
b0.ref0 = a
b0.save()
# No reference for b1
b1 = B()
b1.foo = 2
b1.save()
# Find all As where foo==1
qa = persistent.Query(A)
qa.equal_to('foo', 1)
# Find all Bs that do not have a reference to any
# of the objects matched by query qa
no_ref = persistent.Query(B)
no_ref.does_not_exist('ref0')
in_qa = persistent.Query(B)
in_qa.does_not_match_query('ref0', qa)
q = persistent.OrQuery(no_ref, in_qa)
objs = q.find()
assert len(objs) == 1
assert objs[0].id == b1.id
def test_query_dates():
# isodatetimehandler stores datetime objects
# as ISO-8601 formatted text. Thus, we can use
# regular SQL comparisons on key paths storing
# datetimes in our queries.
persistent.connect(debug=True)
a0 = A()
a0.foo = 1
a0.a_date = datetime.utcnow() - timedelta(hours=1)
a0.save()
a1 = A()
a1.foo = 2
a1.a_date = datetime.utcnow() - timedelta(hours=1, minutes=1)
a1.save()
assert persistent.Query(A).less_than('a_date', datetime.utcnow()).count() == 2
| 20.882288 | 85 | 0.581972 | 2,961 | 18,982 | 3.626815 | 0.068558 | 0.026446 | 0.131111 | 0.154949 | 0.79486 | 0.763479 | 0.753701 | 0.71692 | 0.700996 | 0.688612 | 0 | 0.044686 | 0.264356 | 18,982 | 908 | 86 | 20.905286 | 0.724363 | 0.03045 | 0 | 0.739653 | 0 | 0 | 0.017629 | 0 | 0 | 0 | 0 | 0 | 0.160214 | 1 | 0.092123 | false | 0.00267 | 0.008011 | 0 | 0.106809 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7d67f55ccc7db1898003b8770835e7d66047da5b | 50,393 | py | Python | examples/pybullet/gym/pybullet_envs/minitaur/robots/laikago_interface_pb2.py | felipeek/bullet3 | 6a59241074720e9df119f2f86bc01765917feb1e | [
"Zlib"
] | 9,136 | 2015-01-02T00:41:45.000Z | 2022-03-31T15:30:02.000Z | examples/pybullet/gym/pybullet_envs/minitaur/robots/laikago_interface_pb2.py | felipeek/bullet3 | 6a59241074720e9df119f2f86bc01765917feb1e | [
"Zlib"
] | 2,424 | 2015-01-05T08:55:58.000Z | 2022-03-30T19:34:55.000Z | examples/pybullet/gym/pybullet_envs/minitaur/robots/laikago_interface_pb2.py | felipeek/bullet3 | 6a59241074720e9df119f2f86bc01765917feb1e | [
"Zlib"
] | 2,921 | 2015-01-02T10:19:30.000Z | 2022-03-31T02:48:42.000Z | # -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: laikago_interface.proto
"""Generated protocol buffer code."""
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
from pybullet_envs.minitaur.robots import timestamp_pb2 as timestamp__pb2
from pybullet_envs.minitaur.robots import vector_pb2 as vector__pb2
DESCRIPTOR = _descriptor.FileDescriptor(
name='laikago_interface.proto',
package='minitaur_fluxworks.control',
syntax='proto3',
serialized_options=None,
create_key=_descriptor._internal_create_key,
  serialized_pb=b'\n\x17laikago_interface.proto\x12\x1aminitaur_fluxworks.control\x1a\x0ftimestamp.proto\x1a\x0cvector.proto\"\x82\x01\n\x0cMotorCommand\x12\x10\n\x08motor_id\x18\x01 \x01(\r\x12\x10\n\x08position\x18\x02 \x01(\x02\x12\x15\n\rposition_gain\x18\x03 \x01(\x02\x12\x10\n\x08velocity\x18\x04 \x01(\x02\x12\x15\n\rvelocity_gain\x18\x05 \x01(\x02\x12\x0e\n\x06torque\x18\x06 \x01(\x02\"6\n\x03Led\x12\x0e\n\x06leg_id\x18\x01 \x01(\r\x12\t\n\x01r\x18\x02 \x01(\r\x12\t\n\x01g\x18\x03 \x01(\r\x12\t\n\x01\x62\x18\x04 \x01(\r\"\xf6\x02\n\x0eLaikagoCommand\x12-\n\ttimestamp\x18\x01 \x01(\x0b\x32\x1a.google.protobuf.Timestamp\x12L\n\x0c\x63ontrol_mode\x18\x02 \x01(\x0e\x32\x36.minitaur_fluxworks.control.LaikagoCommand.ControlMode\x12?\n\rmotor_command\x18\x03 \x03(\x0b\x32(.minitaur_fluxworks.control.MotorCommand\x12,\n\x03led\x18\x04 \x03(\x0b\x32\x1f.minitaur_fluxworks.control.Led\"x\n\x0b\x43ontrolMode\x12\x1c\n\x18\x43ONTROL_MODE_UNSPECIFIED\x10\x00\x12\x19\n\x15\x43ONTROL_MODE_POSITION\x10\x01\x12\x17\n\x13\x43ONTROL_MODE_TORQUE\x10\x02\x12\x17\n\x13\x43ONTROL_MODE_HYBRID\x10\x03\"\x15\n\x13LaikagoStateRequest\"\xd8\x01\n\x03Imu\x12/\n\nquaternion\x18\x01 \x01(\x0b\x32\x1b.robotics.messages.Vector4f\x12.\n\tgyroscope\x18\x02 \x01(\x0b\x32\x1b.robotics.messages.Vector3f\x12\x31\n\x0c\x61\x63\x63\x65leration\x18\x03 \x01(\x0b\x32\x1b.robotics.messages.Vector3f\x12(\n\x03rpy\x18\x04 \x01(\x0b\x32\x1b.robotics.messages.Vector3f\x12\x13\n\x0btemperature\x18\x05 \x01(\x02\"\xa3\x01\n\nMotorState\x12\x10\n\x08motor_id\x18\x01 \x01(\r\x12\x0c\n\x04mode\x18\x02 \x01(\r\x12\x10\n\x08position\x18\x03 \x01(\x02\x12\x15\n\rposition_gain\x18\x04 \x01(\x02\x12\x10\n\x08velocity\x18\x05 \x01(\x02\x12\x15\n\rvelocity_gain\x18\x06 \x01(\x02\x12\x0e\n\x06torque\x18\x07 \x01(\x02\x12\x13\n\x0btemperature\x18\x08 \x01(\x02\"X\n\x0c\x43ontactState\x12\x0e\n\x06leg_id\x18\x01 \x01(\r\x12\r\n\x05\x66orce\x18\x02 \x01(\x02\x12)\n\x04\x61xis\x18\x03 \x01(\x0b\x32\x1b.robotics.messages.Vector3f\"\xcb\x02\n\x0cLaikagoState\x12-\n\ttimestamp\x18\x01 \x01(\x0b\x32\x1a.google.protobuf.Timestamp\x12\x15\n\rcontrol_level\x18\x02 \x01(\r\x12,\n\x03imu\x18\x03 \x01(\x0b\x32\x1f.minitaur_fluxworks.control.Imu\x12;\n\x0bmotor_state\x18\x04 \x03(\x0b\x32&.minitaur_fluxworks.control.MotorState\x12?\n\rcontact_state\x18\x05 \x03(\x0b\x32(.minitaur_fluxworks.control.ContactState\x12#\n\x1bmicrocontroller_time_millis\x18\x06 \x01(\r\x12\x17\n\x0fwireless_remote\x18\x07 \x01(\x0c\x12\x0b\n\x03\x63rc\x18\x08 \x01(\r\"\x8b\x01\n\x13LaikagoCommandState\x12;\n\x07\x63ommand\x18\x01 \x01(\x0b\x32*.minitaur_fluxworks.control.LaikagoCommand\x12\x37\n\x05state\x18\x02 \x01(\x0b\x32(.minitaur_fluxworks.control.LaikagoState\"\x84\x02\n\x17LaikagoHighLevelCommand\x12-\n\ttimestamp\x18\x01 \x01(\x0b\x32\x1a.google.protobuf.Timestamp\x12\x15\n\rcontrol_level\x18\x02 \x01(\r\x12\x14\n\x0c\x63ontrol_mode\x18\x03 \x01(\r\x12/\n\nwalk_speed\x18\x04 \x01(\x0b\x32\x1b.robotics.messages.Vector3f\x12\x13\n\x0b\x62ody_height\x18\x05 \x01(\x02\x12\x1d\n\x15\x66oot_clearance_height\x18\x06 \x01(\x02\x12(\n\x03rpy\x18\x07 \x01(\x0b\x32\x1b.robotics.messages.Vector3f\"\xb3\x04\n\x15LaikagoHighLevelState\x12-\n\ttimestamp\x18\x01 \x01(\x0b\x32\x1a.google.protobuf.Timestamp\x12\x15\n\rcontrol_level\x18\x02 \x01(\r\x12\x14\n\x0c\x63ontrol_mode\x18\x03 \x01(\r\x12,\n\x03imu\x18\x04 \x01(\x0b\x32\x1f.minitaur_fluxworks.control.Imu\x12/\n\nwalk_speed\x18\x05 \x01(\x0b\x32\x1b.robotics.messages.Vector3f\x12\x13\n\x0b\x62ody_height\x18\x08 \x01(\x02\x12\x15\n\rup_down_speed\x18\t \x01(\x02\x12\x31\n\x0c\x63om_position\x18\n \x01(\x0b\x32\x1b.robotics.messages.Vector3f\x12\x39\n\x14\x66oot_position_to_com\x18\x0b \x03(\x0b\x32\x1b.robotics.messages.Vector3f\x12\x39\n\x14\x66oot_velocity_to_com\x18\x0c \x03(\x0b\x32\x1b.robotics.messages.Vector3f\x12?\n\rcontact_state\x18\r \x03(\x0b\x32(.minitaur_fluxworks.control.ContactState\x12#\n\x1bmicrocontroller_time_millis\x18\x0e \x01(\r\x12\x17\n\x0fwireless_remote\x18\x0f \x01(\x0c\x12\x0b\n\x03\x63rc\x18\x10 \x01(\r\"\x1e\n\x1cLaikagoHighLevelStateRequest\"\xa6\x01\n\x1cLaikagoHighLevelCommandState\x12\x44\n\x07\x63ommand\x18\x01 \x01(\x0b\x32\x33.minitaur_fluxworks.control.LaikagoHighLevelCommand\x12@\n\x05state\x18\x02 \x01(\x0b\x32\x31.minitaur_fluxworks.control.LaikagoHighLevelState2\xed\x01\n\x1bLaikagoControlGrpcInterface\x12\x65\n\x0bSendCommand\x12*.minitaur_fluxworks.control.LaikagoCommand\x1a(.minitaur_fluxworks.control.LaikagoState\"\x00\x12g\n\x08GetState\x12/.minitaur_fluxworks.control.LaikagoStateRequest\x1a(.minitaur_fluxworks.control.LaikagoState\"\x00\x32\x9a\x02\n$LaikagoHighLevelControlGrpcInterface\x12w\n\x0bSendCommand\x12\x33.minitaur_fluxworks.control.LaikagoHighLevelCommand\x1a\x31.minitaur_fluxworks.control.LaikagoHighLevelState\"\x00\x12y\n\x08GetState\x12\x38.minitaur_fluxworks.control.LaikagoHighLevelStateRequest\x1a\x31.minitaur_fluxworks.control.LaikagoHighLevelState\"\x00\x62\x06proto3'
,
dependencies=[timestamp__pb2.DESCRIPTOR,vector__pb2.DESCRIPTOR,])
_LAIKAGOCOMMAND_CONTROLMODE = _descriptor.EnumDescriptor(
name='ControlMode',
full_name='minitaur_fluxworks.control.LaikagoCommand.ControlMode',
filename=None,
file=DESCRIPTOR,
create_key=_descriptor._internal_create_key,
values=[
_descriptor.EnumValueDescriptor(
name='CONTROL_MODE_UNSPECIFIED', index=0, number=0,
serialized_options=None,
type=None,
create_key=_descriptor._internal_create_key),
_descriptor.EnumValueDescriptor(
name='CONTROL_MODE_POSITION', index=1, number=1,
serialized_options=None,
type=None,
create_key=_descriptor._internal_create_key),
_descriptor.EnumValueDescriptor(
name='CONTROL_MODE_TORQUE', index=2, number=2,
serialized_options=None,
type=None,
create_key=_descriptor._internal_create_key),
_descriptor.EnumValueDescriptor(
name='CONTROL_MODE_HYBRID', index=3, number=3,
serialized_options=None,
type=None,
create_key=_descriptor._internal_create_key),
],
containing_type=None,
serialized_options=None,
serialized_start=530,
serialized_end=650,
)
_sym_db.RegisterEnumDescriptor(_LAIKAGOCOMMAND_CONTROLMODE)
_MOTORCOMMAND = _descriptor.Descriptor(
name='MotorCommand',
full_name='minitaur_fluxworks.control.MotorCommand',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='motor_id', full_name='minitaur_fluxworks.control.MotorCommand.motor_id', index=0,
number=1, type=13, cpp_type=3, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='position', full_name='minitaur_fluxworks.control.MotorCommand.position', index=1,
number=2, type=2, cpp_type=6, label=1,
has_default_value=False, default_value=float(0),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='position_gain', full_name='minitaur_fluxworks.control.MotorCommand.position_gain', index=2,
number=3, type=2, cpp_type=6, label=1,
has_default_value=False, default_value=float(0),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='velocity', full_name='minitaur_fluxworks.control.MotorCommand.velocity', index=3,
number=4, type=2, cpp_type=6, label=1,
has_default_value=False, default_value=float(0),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='velocity_gain', full_name='minitaur_fluxworks.control.MotorCommand.velocity_gain', index=4,
number=5, type=2, cpp_type=6, label=1,
has_default_value=False, default_value=float(0),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='torque', full_name='minitaur_fluxworks.control.MotorCommand.torque', index=5,
number=6, type=2, cpp_type=6, label=1,
has_default_value=False, default_value=float(0),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=87,
serialized_end=217,
)
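# For reference, _MOTORCOMMAND above corresponds to this .proto definition
# (reconstructed from the field numbers and type codes; type=13 is uint32,
# type=2 is float):
#
#   message MotorCommand {
#     uint32 motor_id      = 1;
#     float  position      = 2;
#     float  position_gain = 3;
#     float  velocity      = 4;
#     float  velocity_gain = 5;
#     float  torque        = 6;
#   }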
_LED = _descriptor.Descriptor(
name='Led',
full_name='minitaur_fluxworks.control.Led',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='leg_id', full_name='minitaur_fluxworks.control.Led.leg_id', index=0,
number=1, type=13, cpp_type=3, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='r', full_name='minitaur_fluxworks.control.Led.r', index=1,
number=2, type=13, cpp_type=3, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='g', full_name='minitaur_fluxworks.control.Led.g', index=2,
number=3, type=13, cpp_type=3, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='b', full_name='minitaur_fluxworks.control.Led.b', index=3,
number=4, type=13, cpp_type=3, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=219,
serialized_end=273,
)
_LAIKAGOCOMMAND = _descriptor.Descriptor(
name='LaikagoCommand',
full_name='minitaur_fluxworks.control.LaikagoCommand',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='timestamp', full_name='minitaur_fluxworks.control.LaikagoCommand.timestamp', index=0,
number=1, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='control_mode', full_name='minitaur_fluxworks.control.LaikagoCommand.control_mode', index=1,
number=2, type=14, cpp_type=8, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='motor_command', full_name='minitaur_fluxworks.control.LaikagoCommand.motor_command', index=2,
number=3, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='led', full_name='minitaur_fluxworks.control.LaikagoCommand.led', index=3,
number=4, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
_LAIKAGOCOMMAND_CONTROLMODE,
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=276,
serialized_end=650,
)
_LAIKAGOSTATEREQUEST = _descriptor.Descriptor(
name='LaikagoStateRequest',
full_name='minitaur_fluxworks.control.LaikagoStateRequest',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=652,
serialized_end=673,
)
_IMU = _descriptor.Descriptor(
name='Imu',
full_name='minitaur_fluxworks.control.Imu',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='quaternion', full_name='minitaur_fluxworks.control.Imu.quaternion', index=0,
number=1, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='gyroscope', full_name='minitaur_fluxworks.control.Imu.gyroscope', index=1,
number=2, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='acceleration', full_name='minitaur_fluxworks.control.Imu.acceleration', index=2,
number=3, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='rpy', full_name='minitaur_fluxworks.control.Imu.rpy', index=3,
number=4, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='temperature', full_name='minitaur_fluxworks.control.Imu.temperature', index=4,
number=5, type=2, cpp_type=6, label=1,
has_default_value=False, default_value=float(0),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=676,
serialized_end=892,
)
_MOTORSTATE = _descriptor.Descriptor(
name='MotorState',
full_name='minitaur_fluxworks.control.MotorState',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='motor_id', full_name='minitaur_fluxworks.control.MotorState.motor_id', index=0,
number=1, type=13, cpp_type=3, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='mode', full_name='minitaur_fluxworks.control.MotorState.mode', index=1,
number=2, type=13, cpp_type=3, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='position', full_name='minitaur_fluxworks.control.MotorState.position', index=2,
number=3, type=2, cpp_type=6, label=1,
has_default_value=False, default_value=float(0),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='position_gain', full_name='minitaur_fluxworks.control.MotorState.position_gain', index=3,
number=4, type=2, cpp_type=6, label=1,
has_default_value=False, default_value=float(0),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='velocity', full_name='minitaur_fluxworks.control.MotorState.velocity', index=4,
number=5, type=2, cpp_type=6, label=1,
has_default_value=False, default_value=float(0),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='velocity_gain', full_name='minitaur_fluxworks.control.MotorState.velocity_gain', index=5,
number=6, type=2, cpp_type=6, label=1,
has_default_value=False, default_value=float(0),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='torque', full_name='minitaur_fluxworks.control.MotorState.torque', index=6,
number=7, type=2, cpp_type=6, label=1,
has_default_value=False, default_value=float(0),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='temperature', full_name='minitaur_fluxworks.control.MotorState.temperature', index=7,
number=8, type=2, cpp_type=6, label=1,
has_default_value=False, default_value=float(0),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=895,
serialized_end=1058,
)
_CONTACTSTATE = _descriptor.Descriptor(
name='ContactState',
full_name='minitaur_fluxworks.control.ContactState',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='leg_id', full_name='minitaur_fluxworks.control.ContactState.leg_id', index=0,
number=1, type=13, cpp_type=3, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='force', full_name='minitaur_fluxworks.control.ContactState.force', index=1,
number=2, type=2, cpp_type=6, label=1,
has_default_value=False, default_value=float(0),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='axis', full_name='minitaur_fluxworks.control.ContactState.axis', index=2,
number=3, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=1060,
serialized_end=1148,
)
_LAIKAGOSTATE = _descriptor.Descriptor(
name='LaikagoState',
full_name='minitaur_fluxworks.control.LaikagoState',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='timestamp', full_name='minitaur_fluxworks.control.LaikagoState.timestamp', index=0,
number=1, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='control_level', full_name='minitaur_fluxworks.control.LaikagoState.control_level', index=1,
number=2, type=13, cpp_type=3, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='imu', full_name='minitaur_fluxworks.control.LaikagoState.imu', index=2,
number=3, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='motor_state', full_name='minitaur_fluxworks.control.LaikagoState.motor_state', index=3,
number=4, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='contact_state', full_name='minitaur_fluxworks.control.LaikagoState.contact_state', index=4,
number=5, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='microcontroller_time_millis', full_name='minitaur_fluxworks.control.LaikagoState.microcontroller_time_millis', index=5,
number=6, type=13, cpp_type=3, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='wireless_remote', full_name='minitaur_fluxworks.control.LaikagoState.wireless_remote', index=6,
number=7, type=12, cpp_type=9, label=1,
has_default_value=False, default_value=b"",
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='crc', full_name='minitaur_fluxworks.control.LaikagoState.crc', index=7,
number=8, type=13, cpp_type=3, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=1151,
serialized_end=1482,
)
_LAIKAGOCOMMANDSTATE = _descriptor.Descriptor(
name='LaikagoCommandState',
full_name='minitaur_fluxworks.control.LaikagoCommandState',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='command', full_name='minitaur_fluxworks.control.LaikagoCommandState.command', index=0,
number=1, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='state', full_name='minitaur_fluxworks.control.LaikagoCommandState.state', index=1,
number=2, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=1485,
serialized_end=1624,
)
_LAIKAGOHIGHLEVELCOMMAND = _descriptor.Descriptor(
name='LaikagoHighLevelCommand',
full_name='minitaur_fluxworks.control.LaikagoHighLevelCommand',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='timestamp', full_name='minitaur_fluxworks.control.LaikagoHighLevelCommand.timestamp', index=0,
number=1, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='control_level', full_name='minitaur_fluxworks.control.LaikagoHighLevelCommand.control_level', index=1,
number=2, type=13, cpp_type=3, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='control_mode', full_name='minitaur_fluxworks.control.LaikagoHighLevelCommand.control_mode', index=2,
number=3, type=13, cpp_type=3, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='walk_speed', full_name='minitaur_fluxworks.control.LaikagoHighLevelCommand.walk_speed', index=3,
number=4, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='body_height', full_name='minitaur_fluxworks.control.LaikagoHighLevelCommand.body_height', index=4,
number=5, type=2, cpp_type=6, label=1,
has_default_value=False, default_value=float(0),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='foot_clearance_height', full_name='minitaur_fluxworks.control.LaikagoHighLevelCommand.foot_clearance_height', index=5,
number=6, type=2, cpp_type=6, label=1,
has_default_value=False, default_value=float(0),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='rpy', full_name='minitaur_fluxworks.control.LaikagoHighLevelCommand.rpy', index=6,
number=7, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=1627,
serialized_end=1887,
)
_LAIKAGOHIGHLEVELSTATE = _descriptor.Descriptor(
name='LaikagoHighLevelState',
full_name='minitaur_fluxworks.control.LaikagoHighLevelState',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='timestamp', full_name='minitaur_fluxworks.control.LaikagoHighLevelState.timestamp', index=0,
number=1, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='control_level', full_name='minitaur_fluxworks.control.LaikagoHighLevelState.control_level', index=1,
number=2, type=13, cpp_type=3, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='control_mode', full_name='minitaur_fluxworks.control.LaikagoHighLevelState.control_mode', index=2,
number=3, type=13, cpp_type=3, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='imu', full_name='minitaur_fluxworks.control.LaikagoHighLevelState.imu', index=3,
number=4, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='walk_speed', full_name='minitaur_fluxworks.control.LaikagoHighLevelState.walk_speed', index=4,
number=5, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='body_height', full_name='minitaur_fluxworks.control.LaikagoHighLevelState.body_height', index=5,
number=8, type=2, cpp_type=6, label=1,
has_default_value=False, default_value=float(0),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='up_down_speed', full_name='minitaur_fluxworks.control.LaikagoHighLevelState.up_down_speed', index=6,
number=9, type=2, cpp_type=6, label=1,
has_default_value=False, default_value=float(0),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='com_position', full_name='minitaur_fluxworks.control.LaikagoHighLevelState.com_position', index=7,
number=10, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='foot_position_to_com', full_name='minitaur_fluxworks.control.LaikagoHighLevelState.foot_position_to_com', index=8,
number=11, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='foot_velocity_to_com', full_name='minitaur_fluxworks.control.LaikagoHighLevelState.foot_velocity_to_com', index=9,
number=12, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='contact_state', full_name='minitaur_fluxworks.control.LaikagoHighLevelState.contact_state', index=10,
number=13, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='microcontroller_time_millis', full_name='minitaur_fluxworks.control.LaikagoHighLevelState.microcontroller_time_millis', index=11,
number=14, type=13, cpp_type=3, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='wireless_remote', full_name='minitaur_fluxworks.control.LaikagoHighLevelState.wireless_remote', index=12,
number=15, type=12, cpp_type=9, label=1,
has_default_value=False, default_value=b"",
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='crc', full_name='minitaur_fluxworks.control.LaikagoHighLevelState.crc', index=13,
number=16, type=13, cpp_type=3, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=1890,
serialized_end=2453,
)
_LAIKAGOHIGHLEVELSTATEREQUEST = _descriptor.Descriptor(
name='LaikagoHighLevelStateRequest',
full_name='minitaur_fluxworks.control.LaikagoHighLevelStateRequest',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=2455,
serialized_end=2485,
)
_LAIKAGOHIGHLEVELCOMMANDSTATE = _descriptor.Descriptor(
name='LaikagoHighLevelCommandState',
full_name='minitaur_fluxworks.control.LaikagoHighLevelCommandState',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='command', full_name='minitaur_fluxworks.control.LaikagoHighLevelCommandState.command', index=0,
number=1, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='state', full_name='minitaur_fluxworks.control.LaikagoHighLevelCommandState.state', index=1,
number=2, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=2488,
serialized_end=2654,
)
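# Second pass over the descriptors: each message-typed and enum-typed field
# is linked to its target descriptor here, after all descriptors exist,
# because the references can be mutually recursive and so cannot be supplied
# at construction time above.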
_LAIKAGOCOMMAND.fields_by_name['timestamp'].message_type = timestamp__pb2._TIMESTAMP
_LAIKAGOCOMMAND.fields_by_name['control_mode'].enum_type = _LAIKAGOCOMMAND_CONTROLMODE
_LAIKAGOCOMMAND.fields_by_name['motor_command'].message_type = _MOTORCOMMAND
_LAIKAGOCOMMAND.fields_by_name['led'].message_type = _LED
_LAIKAGOCOMMAND_CONTROLMODE.containing_type = _LAIKAGOCOMMAND
_IMU.fields_by_name['quaternion'].message_type = vector__pb2._VECTOR4F
_IMU.fields_by_name['gyroscope'].message_type = vector__pb2._VECTOR3F
_IMU.fields_by_name['acceleration'].message_type = vector__pb2._VECTOR3F
_IMU.fields_by_name['rpy'].message_type = vector__pb2._VECTOR3F
_CONTACTSTATE.fields_by_name['axis'].message_type = vector__pb2._VECTOR3F
_LAIKAGOSTATE.fields_by_name['timestamp'].message_type = timestamp__pb2._TIMESTAMP
_LAIKAGOSTATE.fields_by_name['imu'].message_type = _IMU
_LAIKAGOSTATE.fields_by_name['motor_state'].message_type = _MOTORSTATE
_LAIKAGOSTATE.fields_by_name['contact_state'].message_type = _CONTACTSTATE
_LAIKAGOCOMMANDSTATE.fields_by_name['command'].message_type = _LAIKAGOCOMMAND
_LAIKAGOCOMMANDSTATE.fields_by_name['state'].message_type = _LAIKAGOSTATE
_LAIKAGOHIGHLEVELCOMMAND.fields_by_name['timestamp'].message_type = timestamp__pb2._TIMESTAMP
_LAIKAGOHIGHLEVELCOMMAND.fields_by_name['walk_speed'].message_type = vector__pb2._VECTOR3F
_LAIKAGOHIGHLEVELCOMMAND.fields_by_name['rpy'].message_type = vector__pb2._VECTOR3F
_LAIKAGOHIGHLEVELSTATE.fields_by_name['timestamp'].message_type = timestamp__pb2._TIMESTAMP
_LAIKAGOHIGHLEVELSTATE.fields_by_name['imu'].message_type = _IMU
_LAIKAGOHIGHLEVELSTATE.fields_by_name['walk_speed'].message_type = vector__pb2._VECTOR3F
_LAIKAGOHIGHLEVELSTATE.fields_by_name['com_position'].message_type = vector__pb2._VECTOR3F
_LAIKAGOHIGHLEVELSTATE.fields_by_name['foot_position_to_com'].message_type = vector__pb2._VECTOR3F
_LAIKAGOHIGHLEVELSTATE.fields_by_name['foot_velocity_to_com'].message_type = vector__pb2._VECTOR3F
_LAIKAGOHIGHLEVELSTATE.fields_by_name['contact_state'].message_type = _CONTACTSTATE
_LAIKAGOHIGHLEVELCOMMANDSTATE.fields_by_name['command'].message_type = _LAIKAGOHIGHLEVELCOMMAND
_LAIKAGOHIGHLEVELCOMMANDSTATE.fields_by_name['state'].message_type = _LAIKAGOHIGHLEVELSTATE
DESCRIPTOR.message_types_by_name['MotorCommand'] = _MOTORCOMMAND
DESCRIPTOR.message_types_by_name['Led'] = _LED
DESCRIPTOR.message_types_by_name['LaikagoCommand'] = _LAIKAGOCOMMAND
DESCRIPTOR.message_types_by_name['LaikagoStateRequest'] = _LAIKAGOSTATEREQUEST
DESCRIPTOR.message_types_by_name['Imu'] = _IMU
DESCRIPTOR.message_types_by_name['MotorState'] = _MOTORSTATE
DESCRIPTOR.message_types_by_name['ContactState'] = _CONTACTSTATE
DESCRIPTOR.message_types_by_name['LaikagoState'] = _LAIKAGOSTATE
DESCRIPTOR.message_types_by_name['LaikagoCommandState'] = _LAIKAGOCOMMANDSTATE
DESCRIPTOR.message_types_by_name['LaikagoHighLevelCommand'] = _LAIKAGOHIGHLEVELCOMMAND
DESCRIPTOR.message_types_by_name['LaikagoHighLevelState'] = _LAIKAGOHIGHLEVELSTATE
DESCRIPTOR.message_types_by_name['LaikagoHighLevelStateRequest'] = _LAIKAGOHIGHLEVELSTATEREQUEST
DESCRIPTOR.message_types_by_name['LaikagoHighLevelCommandState'] = _LAIKAGOHIGHLEVELCOMMANDSTATE
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
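# Generate the concrete Python message classes from their descriptors via the
# reflection metaclass. The @@protoc_insertion_point comments are hooks where
# protoc plugins may splice extra per-class code.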
MotorCommand = _reflection.GeneratedProtocolMessageType('MotorCommand', (_message.Message,), {
'DESCRIPTOR' : _MOTORCOMMAND,
'__module__' : 'laikago_interface_pb2'
# @@protoc_insertion_point(class_scope:minitaur_fluxworks.control.MotorCommand)
})
_sym_db.RegisterMessage(MotorCommand)
Led = _reflection.GeneratedProtocolMessageType('Led', (_message.Message,), {
'DESCRIPTOR' : _LED,
'__module__' : 'laikago_interface_pb2'
# @@protoc_insertion_point(class_scope:minitaur_fluxworks.control.Led)
})
_sym_db.RegisterMessage(Led)
LaikagoCommand = _reflection.GeneratedProtocolMessageType('LaikagoCommand', (_message.Message,), {
'DESCRIPTOR' : _LAIKAGOCOMMAND,
'__module__' : 'laikago_interface_pb2'
# @@protoc_insertion_point(class_scope:minitaur_fluxworks.control.LaikagoCommand)
})
_sym_db.RegisterMessage(LaikagoCommand)
LaikagoStateRequest = _reflection.GeneratedProtocolMessageType('LaikagoStateRequest', (_message.Message,), {
'DESCRIPTOR' : _LAIKAGOSTATEREQUEST,
'__module__' : 'laikago_interface_pb2'
# @@protoc_insertion_point(class_scope:minitaur_fluxworks.control.LaikagoStateRequest)
})
_sym_db.RegisterMessage(LaikagoStateRequest)
Imu = _reflection.GeneratedProtocolMessageType('Imu', (_message.Message,), {
'DESCRIPTOR' : _IMU,
'__module__' : 'laikago_interface_pb2'
# @@protoc_insertion_point(class_scope:minitaur_fluxworks.control.Imu)
})
_sym_db.RegisterMessage(Imu)
MotorState = _reflection.GeneratedProtocolMessageType('MotorState', (_message.Message,), {
'DESCRIPTOR' : _MOTORSTATE,
'__module__' : 'laikago_interface_pb2'
# @@protoc_insertion_point(class_scope:minitaur_fluxworks.control.MotorState)
})
_sym_db.RegisterMessage(MotorState)
ContactState = _reflection.GeneratedProtocolMessageType('ContactState', (_message.Message,), {
'DESCRIPTOR' : _CONTACTSTATE,
'__module__' : 'laikago_interface_pb2'
# @@protoc_insertion_point(class_scope:minitaur_fluxworks.control.ContactState)
})
_sym_db.RegisterMessage(ContactState)
LaikagoState = _reflection.GeneratedProtocolMessageType('LaikagoState', (_message.Message,), {
'DESCRIPTOR' : _LAIKAGOSTATE,
'__module__' : 'laikago_interface_pb2'
# @@protoc_insertion_point(class_scope:minitaur_fluxworks.control.LaikagoState)
})
_sym_db.RegisterMessage(LaikagoState)
LaikagoCommandState = _reflection.GeneratedProtocolMessageType('LaikagoCommandState', (_message.Message,), {
'DESCRIPTOR' : _LAIKAGOCOMMANDSTATE,
'__module__' : 'laikago_interface_pb2'
# @@protoc_insertion_point(class_scope:minitaur_fluxworks.control.LaikagoCommandState)
})
_sym_db.RegisterMessage(LaikagoCommandState)
LaikagoHighLevelCommand = _reflection.GeneratedProtocolMessageType('LaikagoHighLevelCommand', (_message.Message,), {
'DESCRIPTOR' : _LAIKAGOHIGHLEVELCOMMAND,
'__module__' : 'laikago_interface_pb2'
# @@protoc_insertion_point(class_scope:minitaur_fluxworks.control.LaikagoHighLevelCommand)
})
_sym_db.RegisterMessage(LaikagoHighLevelCommand)
LaikagoHighLevelState = _reflection.GeneratedProtocolMessageType('LaikagoHighLevelState', (_message.Message,), {
'DESCRIPTOR' : _LAIKAGOHIGHLEVELSTATE,
'__module__' : 'laikago_interface_pb2'
# @@protoc_insertion_point(class_scope:minitaur_fluxworks.control.LaikagoHighLevelState)
})
_sym_db.RegisterMessage(LaikagoHighLevelState)
LaikagoHighLevelStateRequest = _reflection.GeneratedProtocolMessageType('LaikagoHighLevelStateRequest', (_message.Message,), {
'DESCRIPTOR' : _LAIKAGOHIGHLEVELSTATEREQUEST,
'__module__' : 'laikago_interface_pb2'
# @@protoc_insertion_point(class_scope:minitaur_fluxworks.control.LaikagoHighLevelStateRequest)
})
_sym_db.RegisterMessage(LaikagoHighLevelStateRequest)
LaikagoHighLevelCommandState = _reflection.GeneratedProtocolMessageType('LaikagoHighLevelCommandState', (_message.Message,), {
'DESCRIPTOR' : _LAIKAGOHIGHLEVELCOMMANDSTATE,
'__module__' : 'laikago_interface_pb2'
# @@protoc_insertion_point(class_scope:minitaur_fluxworks.control.LaikagoHighLevelCommandState)
})
_sym_db.RegisterMessage(LaikagoHighLevelCommandState)
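# Descriptors for the two gRPC services declared in laikago_interface.proto.
# Reconstructed from the method descriptors below, their definitions are:
#
#   service LaikagoControlGrpcInterface {
#     rpc SendCommand(LaikagoCommand) returns (LaikagoState) {}
#     rpc GetState(LaikagoStateRequest) returns (LaikagoState) {}
#   }
#
#   service LaikagoHighLevelControlGrpcInterface {
#     rpc SendCommand(LaikagoHighLevelCommand) returns (LaikagoHighLevelState) {}
#     rpc GetState(LaikagoHighLevelStateRequest) returns (LaikagoHighLevelState) {}
#   }
#
# The client stubs and server base classes for these services would typically
# be emitted into a companion *_pb2_grpc module by the gRPC protoc plugin.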
_LAIKAGOCONTROLGRPCINTERFACE = _descriptor.ServiceDescriptor(
name='LaikagoControlGrpcInterface',
full_name='minitaur_fluxworks.control.LaikagoControlGrpcInterface',
file=DESCRIPTOR,
index=0,
serialized_options=None,
create_key=_descriptor._internal_create_key,
serialized_start=2657,
serialized_end=2894,
methods=[
_descriptor.MethodDescriptor(
name='SendCommand',
full_name='minitaur_fluxworks.control.LaikagoControlGrpcInterface.SendCommand',
index=0,
containing_service=None,
input_type=_LAIKAGOCOMMAND,
output_type=_LAIKAGOSTATE,
serialized_options=None,
create_key=_descriptor._internal_create_key,
),
_descriptor.MethodDescriptor(
name='GetState',
full_name='minitaur_fluxworks.control.LaikagoControlGrpcInterface.GetState',
index=1,
containing_service=None,
input_type=_LAIKAGOSTATEREQUEST,
output_type=_LAIKAGOSTATE,
serialized_options=None,
create_key=_descriptor._internal_create_key,
),
])
_sym_db.RegisterServiceDescriptor(_LAIKAGOCONTROLGRPCINTERFACE)
DESCRIPTOR.services_by_name['LaikagoControlGrpcInterface'] = _LAIKAGOCONTROLGRPCINTERFACE
_LAIKAGOHIGHLEVELCONTROLGRPCINTERFACE = _descriptor.ServiceDescriptor(
name='LaikagoHighLevelControlGrpcInterface',
full_name='minitaur_fluxworks.control.LaikagoHighLevelControlGrpcInterface',
file=DESCRIPTOR,
index=1,
serialized_options=None,
create_key=_descriptor._internal_create_key,
serialized_start=2897,
serialized_end=3179,
methods=[
_descriptor.MethodDescriptor(
name='SendCommand',
full_name='minitaur_fluxworks.control.LaikagoHighLevelControlGrpcInterface.SendCommand',
index=0,
containing_service=None,
input_type=_LAIKAGOHIGHLEVELCOMMAND,
output_type=_LAIKAGOHIGHLEVELSTATE,
serialized_options=None,
create_key=_descriptor._internal_create_key,
),
_descriptor.MethodDescriptor(
name='GetState',
full_name='minitaur_fluxworks.control.LaikagoHighLevelControlGrpcInterface.GetState',
index=1,
containing_service=None,
input_type=_LAIKAGOHIGHLEVELSTATEREQUEST,
output_type=_LAIKAGOHIGHLEVELSTATE,
serialized_options=None,
create_key=_descriptor._internal_create_key,
),
])
_sym_db.RegisterServiceDescriptor(_LAIKAGOHIGHLEVELCONTROLGRPCINTERFACE)
DESCRIPTOR.services_by_name['LaikagoHighLevelControlGrpcInterface'] = _LAIKAGOHIGHLEVELCONTROLGRPCINTERFACE
# @@protoc_insertion_point(module_scope)
| 48.408261 | 4,982 | 0.775008 | 6,158 | 50,393 | 5.997727 | 0.052452 | 0.044837 | 0.074593 | 0.064331 | 0.793713 | 0.766909 | 0.703661 | 0.677289 | 0.660096 | 0.641469 | 0 | 0.036082 | 0.112337 | 50,393 | 1,040 | 4,983 | 48.454808 | 0.789591 | 0.025361 | 0 | 0.70332 | 1 | 0.001037 | 0.237228 | 0.204921 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.006224 | 0 | 0.006224 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7d6f5d6b30b02cbb5e31fe7a1f2416c02d1acbab | 4,032 | py | Python | cltk/tag/pos/pos_tagger.py | fractaledmind/cltk | 78c7259c1845a4ae8bbd33935ffbae34da23234b | ["MIT"] | 1 | 2020-08-02T19:35:06.000Z | 2020-08-02T19:35:06.000Z | cltk/tag/pos/pos_tagger.py | fractaledmind/cltk | 78c7259c1845a4ae8bbd33935ffbae34da23234b | ["MIT"] | null | null | null | cltk/tag/pos/pos_tagger.py | fractaledmind/cltk | 78c7259c1845a4ae8bbd33935ffbae34da23234b | ["MIT"] | null | null | null | """Tags part of speech (POS)."""
__author__ = 'Kyle P. Johnson <kyle@kyle-p-johnson.com>'
__license__ = 'MIT License. See LICENSE.'
import os
import pickle

from nltk.tokenize import wordpunct_tokenize
class POSTag(object):
"""Picks up taggers made with UnigramTagger"""
def __init__(self):
"""Initializer. Should it do anything?"""
pass
def unigram_tagger(self, untagged_string, language):
"""Reads language .pickle for right language"""
if language == 'greek':
pickle_path = os.path.expanduser('~/cltk_data/greek/cltk_linguistic_data/taggers/pos/unigram.pickle')
elif language == 'latin':
pickle_path = os.path.expanduser('~/cltk_data/latin/cltk_linguistic_data/taggers/pos/unigram.pickle')
else:
print('No unigram tagger for this language available.')
with open(pickle_path, 'rb') as open_pickle:
tagger = pickle.load(open_pickle)
untagged_tokens = wordpunct_tokenize(untagged_string)
tagged_text = tagger.tag(untagged_tokens)
return tagged_text
def bigram_tagger(self, untagged_string, language):
"""Reads language .pickle for right language"""
if language == 'greek':
pickle_path = os.path.expanduser('~/cltk_data/greek/cltk_linguistic_data/taggers/pos/bigram.pickle')
elif language == 'latin':
pickle_path = os.path.expanduser('~/cltk_data/latin/cltk_linguistic_data/taggers/pos/bigram.pickle')
else:
print('No bigram tagger for this language available.')
with open(pickle_path, 'rb') as open_pickle:
tagger = pickle.load(open_pickle)
untagged_tokens = wordpunct_tokenize(untagged_string)
tagged_text = tagger.tag(untagged_tokens)
return tagged_text
def trigram_tagger(self, untagged_string, language):
"""Reads language .pickle for right language"""
if language == 'greek':
pickle_path = os.path.expanduser('~/cltk_data/greek/cltk_linguistic_data/taggers/pos/trigram.pickle')
elif language == 'latin':
pickle_path = os.path.expanduser('~/cltk_data/latin/cltk_linguistic_data/taggers/pos/trigram.pickle')
else:
print('No trigram tagger for this language available.')
with open(pickle_path, 'rb') as open_pickle:
tagger = pickle.load(open_pickle)
untagged_tokens = wordpunct_tokenize(untagged_string)
tagged_text = tagger.tag(untagged_tokens)
return tagged_text
def ngram_123_backoff_tagger(self, untagged_string, language):
"""Reads language .pickle for right language"""
if language == 'greek':
pickle_path = os.path.expanduser('~/cltk_data/greek/cltk_linguistic_data/taggers/pos/123grambackoff.pickle')
elif language == 'latin':
pickle_path = os.path.expanduser('~/cltk_data/latin/cltk_linguistic_data/taggers/pos/123grambackoff.pickle')
else:
print('No n–gram backoff tagger for this language available.')
with open(pickle_path, 'rb') as open_pickle:
tagger = pickle.load(open_pickle)
untagged_tokens = wordpunct_tokenize(untagged_string)
tagged_text = tagger.tag(untagged_tokens)
return tagged_text
def tnt_tagger(self, untagged_string, language):
"""Reads language .pickle for right language"""
if language == 'greek':
pickle_path = os.path.expanduser('~/cltk_data/greek/cltk_linguistic_data/taggers/pos/tnt.pickle')
elif language == 'latin':
pickle_path = os.path.expanduser('~/cltk_data/latin/cltk_linguistic_data/taggers/pos/tnt.pickle')
else:
print('No n–gram backoff tagger for this language available.')
with open(pickle_path, 'rb') as open_pickle:
tagger = pickle.load(open_pickle)
untagged_tokens = wordpunct_tokenize(untagged_string)
tagged_text = tagger.tag(untagged_tokens)
return tagged_text
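# Illustrative usage (assumes the relevant pickle has already been downloaded
# into ~/cltk_data; the sample sentence is arbitrary):
#
#   tagger = POSTag()
#   tagged = tagger.unigram_tagger('Gallia est omnis divisa in partes tres', 'latin')
#   # tagged is a list of (token, tag) tuples, one per word-punct token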
| 44.307692 | 120 | 0.671875 | 485 | 4,032 | 5.362887 | 0.162887 | 0.05767 | 0.046136 | 0.061515 | 0.872357 | 0.872357 | 0.872357 | 0.836986 | 0.836986 | 0.836986 | 0 | 0.002874 | 0.223462 | 4,032 | 90 | 121 | 44.8 | 0.827212 | 0.077629 | 0 | 0.617647 | 0 | 0 | 0.27814 | 0.184611 | 0 | 0 | 0 | 0 | 0 | 1 | 0.088235 | false | 0.014706 | 0.044118 | 0 | 0.220588 | 0.073529 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7d71717a76303b96652be547a8cb727b59a12925 | 65 | py | Python | kscipy/app/job/__init__.py | lbn/ksci | 53b30d2e5f0937d4040fcfd635c0642150e74388 | ["MIT"] | 5 | 2021-06-22T03:39:01.000Z | 2021-12-15T08:02:51.000Z | kscipy/app/job/__init__.py | lbn/ksci | 53b30d2e5f0937d4040fcfd635c0642150e74388 | ["MIT"] | null | null | null | kscipy/app/job/__init__.py | lbn/ksci | 53b30d2e5f0937d4040fcfd635c0642150e74388 | ["MIT"] | 1 | 2021-06-30T14:40:06.000Z | 2021-06-30T14:40:06.000Z | from . import routes
from . import resources
from . import tasks
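# The imports above are for their side effects: a bare `import kscipy.app.job`
# thereby loads the package's route handlers, resource definitions, and task
# registrations (which frameworks those hook into is not visible from this
# file alone).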
| 16.25 | 23 | 0.769231 | 9 | 65 | 5.555556 | 0.555556 | 0.6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.184615 | 65 | 3 | 24 | 21.666667 | 0.943396 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |