hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
8711414078d7b08aea3e8447b2ff97960980203f | 147 | py | Python | tests/modules/contrib/test_traffic.py | spxtr/bumblebee-status | 45125f39af8323775aeabf809ae5ae80cfe3ccd9 | [
"MIT"
] | 1,089 | 2016-11-06T10:02:53.000Z | 2022-03-26T12:53:30.000Z | tests/modules/contrib/test_traffic.py | spxtr/bumblebee-status | 45125f39af8323775aeabf809ae5ae80cfe3ccd9 | [
"MIT"
] | 817 | 2016-11-05T05:42:39.000Z | 2022-03-25T19:43:52.000Z | tests/modules/contrib/test_traffic.py | spxtr/bumblebee-status | 45125f39af8323775aeabf809ae5ae80cfe3ccd9 | [
"MIT"
] | 317 | 2016-11-05T00:35:06.000Z | 2022-03-24T13:35:03.000Z | import pytest
pytest.importorskip("psutil")
pytest.importorskip("netifaces")
def test_load_module():
    __import__("modules.contrib.traffic")
| 14.7 | 41 | 0.768707 | 16 | 147 | 6.6875 | 0.75 | 0.336449 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102041 | 147 | 9 | 42 | 16.333333 | 0.810606 | 0 | 0 | 0 | 0 | 0 | 0.260274 | 0.157534 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | true | 0 | 0.8 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
87550096d72cf44392e9d9b9ed44401e244e882b | 572 | py | Python | test/01_util/util/test_loss_factors.py | OfficialCodexplosive/RESKit | e006e8c9923ddb044dab6951c95a15fa43489398 | [
"MIT"
] | 16 | 2020-01-08T09:44:37.000Z | 2022-03-24T15:56:02.000Z | test/01_util/util/test_loss_factors.py | OfficialCodexplosive/RESKit | e006e8c9923ddb044dab6951c95a15fa43489398 | [
"MIT"
] | 22 | 2020-04-25T18:01:40.000Z | 2020-10-07T14:11:57.000Z | test/01_util/util/test_loss_factors.py | OfficialCodexplosive/RESKit | e006e8c9923ddb044dab6951c95a15fa43489398 | [
"MIT"
] | 16 | 2020-02-26T14:31:26.000Z | 2021-04-28T10:34:51.000Z | import numpy as np
from reskit.util.loss_factors import low_generation_loss
def test_low_generation_loss():
    assert np.isclose(low_generation_loss(0.05, base=0, sharpness=5), 0.7788007830714049)
    assert np.isclose(low_generation_loss(0.05, base=0.5, sharpness=5), 0.38940039153570244)
    assert np.isclose(low_generation_loss(0.05, base=0.3, sharpness=5), 0.5451605481499834)
    assert np.isclose(low_generation_loss(0.25, base=0.3, sharpness=20), 0.004716562899359827)
    assert np.isclose(low_generation_loss(0.50, base=0.5, sharpness=1), 0.3032653298563167)
| 52 | 94 | 0.77972 | 92 | 572 | 4.673913 | 0.326087 | 0.211628 | 0.276744 | 0.209302 | 0.432558 | 0.432558 | 0.432558 | 0.27907 | 0.27907 | 0.27907 | 0 | 0.229126 | 0.09965 | 572 | 10 | 95 | 57.2 | 0.605825 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.625 | 1 | 0.125 | true | 0 | 0.25 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5e3eb7c3fa35de56a51cad31b09370b16646260b | 3,138 | py | Python | rdmo/conditions/tests/test_views.py | Raspeanut/rdmo | 9f785010a499c372a2f8368ccf76d2ea4150adcb | [
"Apache-2.0"
] | null | null | null | rdmo/conditions/tests/test_views.py | Raspeanut/rdmo | 9f785010a499c372a2f8368ccf76d2ea4150adcb | [
"Apache-2.0"
] | null | null | null | rdmo/conditions/tests/test_views.py | Raspeanut/rdmo | 9f785010a499c372a2f8368ccf76d2ea4150adcb | [
"Apache-2.0"
] | null | null | null | import os
import pytest
from django.urls import reverse
users = (
    ('editor', 'editor'),
    ('reviewer', 'reviewer'),
    ('user', 'user'),
    ('api', 'api'),
    ('anonymous', None),
)
status_map = {
    'conditions': {
        'editor': 200, 'reviewer': 200, 'api': 200, 'user': 403, 'anonymous': 302
    },
    'conditions_export': {
        'editor': 200, 'reviewer': 200, 'api': 200, 'user': 403, 'anonymous': 302
    },
    'conditions_import': {
        'editor': 302, 'reviewer': 403, 'api': 302, 'user': 403, 'anonymous': 302
    },
    'conditions_import_error': {
        'editor': 400, 'reviewer': 403, 'api': 400, 'user': 403, 'anonymous': 302
    }
}
export_formats = ('xml', 'rtf', 'odt', 'docx', 'html', 'markdown', 'tex', 'pdf')
@pytest.mark.parametrize('username,password', users)
def test_conditions(db, client, username, password):
    client.login(username=username, password=password)
    url = reverse('conditions')
    response = client.get(url)
    assert response.status_code == status_map['conditions'][username]
@pytest.mark.parametrize('username,password', users)
@pytest.mark.parametrize('export_format', export_formats)
def test_conditions_export(db, client, username, password, export_format):
    client.login(username=username, password=password)
    url = reverse('conditions_export', args=[export_format])
    response = client.get(url)
    assert response.status_code == status_map['conditions_export'][username]
@pytest.mark.parametrize('username,password', users)
def test_conditions_import_get(db, client, username, password):
    client.login(username=username, password=password)
    url = reverse('conditions_import', args=['xml'])
    response = client.get(url)
    assert response.status_code == status_map['conditions_import'][username]
@pytest.mark.parametrize('username,password', users)
def test_conditions_import_post(db, settings, client, username, password):
    client.login(username=username, password=password)
    url = reverse('conditions_import', args=['xml'])
    xml_file = os.path.join(settings.BASE_DIR, 'xml', 'conditions.xml')
    with open(xml_file, encoding='utf8') as f:
        response = client.post(url, {'uploaded_file': f})
    assert response.status_code == status_map['conditions_import'][username]
@pytest.mark.parametrize('username,password', users)
def test_conditions_import_empty_post(db, client, username, password):
    client.login(username=username, password=password)
    url = reverse('conditions_import', args=['xml'])
    response = client.post(url)
    assert response.status_code == status_map['conditions_import'][username]
@pytest.mark.parametrize('username,password', users)
def test_conditions_import_error_post(db, settings, client, username, password):
    client.login(username=username, password=password)
    url = reverse('conditions_import', args=['xml'])
    xml_file = os.path.join(settings.BASE_DIR, 'xml', 'error.xml')
    with open(xml_file, encoding='utf8') as f:
        response = client.post(url, {'uploaded_file': f})
    assert response.status_code == status_map['conditions_import_error'][username]
| 35.258427 | 82 | 0.695029 | 372 | 3,138 | 5.706989 | 0.169355 | 0.135657 | 0.062647 | 0.081959 | 0.795572 | 0.795572 | 0.77626 | 0.752708 | 0.752708 | 0.695243 | 0 | 0.023169 | 0.147228 | 3,138 | 88 | 83 | 35.659091 | 0.770179 | 0 | 0 | 0.424242 | 0 | 0 | 0.208732 | 0.014659 | 0 | 0 | 0 | 0 | 0.090909 | 1 | 0.090909 | false | 0.272727 | 0.257576 | 0 | 0.348485 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
5e40f1e6b1910b3e392787b2e8bc3d394e8fa868 | 43 | py | Python | python/random/ran.py | mifomen/codepuzzles | 430ffcc2d55a91746ce55c2881582f9db5a5b051 | [
"MIT"
] | null | null | null | python/random/ran.py | mifomen/codepuzzles | 430ffcc2d55a91746ce55c2881582f9db5a5b051 | [
"MIT"
] | null | null | null | python/random/ran.py | mifomen/codepuzzles | 430ffcc2d55a91746ce55c2881582f9db5a5b051 | [
"MIT"
] | null | null | null | import random
print(random.randint(1,100))
| 14.333333 | 28 | 0.790698 | 7 | 43 | 4.857143 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 0.069767 | 43 | 2 | 29 | 21.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
5e4bcaaa482aa9d96ba11c7486707be9e253dc8a | 48 | py | Python | pygglz/memory/__init__.py | cbuschka/pygglz | 01f362024d6a5fd89d46b3b7da2cb5970ec43ed9 | [
"Apache-2.0"
] | 1 | 2020-05-16T14:38:10.000Z | 2020-05-16T14:38:10.000Z | pygglz/memory/__init__.py | cbuschka/pygglz | 01f362024d6a5fd89d46b3b7da2cb5970ec43ed9 | [
"Apache-2.0"
] | 1 | 2020-06-02T18:43:56.000Z | 2020-06-02T18:43:56.000Z | pygglz/memory/__init__.py | cbuschka/pygglz | 01f362024d6a5fd89d46b3b7da2cb5970ec43ed9 | [
"Apache-2.0"
] | 2 | 2020-05-16T14:25:46.000Z | 2020-05-16T14:55:46.000Z | from .memory_repository import MemoryRepository
| 24 | 47 | 0.895833 | 5 | 48 | 8.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 48 | 1 | 48 | 48 | 0.954545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5ecd9756929b2b6a188e603b98ee58b03a1f52c0 | 69,540 | py | Python | pybind/slxos/v16r_1_00b/isis_state/router_isis_config/is_address_family_v4/__init__.py | shivharis/pybind | 4e1c6d54b9fd722ccec25546ba2413d79ce337e6 | [
"Apache-2.0"
] | null | null | null | pybind/slxos/v16r_1_00b/isis_state/router_isis_config/is_address_family_v4/__init__.py | shivharis/pybind | 4e1c6d54b9fd722ccec25546ba2413d79ce337e6 | [
"Apache-2.0"
] | null | null | null | pybind/slxos/v16r_1_00b/isis_state/router_isis_config/is_address_family_v4/__init__.py | shivharis/pybind | 4e1c6d54b9fd722ccec25546ba2413d79ce337e6 | [
"Apache-2.0"
] | 1 | 2021-11-05T22:15:42.000Z | 2021-11-05T22:15:42.000Z |
from operator import attrgetter
import pyangbind.lib.xpathhelper as xpathhelper
from pyangbind.lib.yangtypes import RestrictedPrecisionDecimalType, RestrictedClassType, TypedListType
from pyangbind.lib.yangtypes import YANGBool, YANGListType, YANGDynClass, ReferenceType
from pyangbind.lib.base import PybindBase
from decimal import Decimal
from bitarray import bitarray
import __builtin__
import redist_isis
import redist_ospf
import redist_static
import redist_connected
import redist_rip
import redist_bgp
import summary_address_v4
class is_address_family_v4(PybindBase):
"""
This class was auto-generated by the PythonClass plugin for PYANG
from YANG module brocade-isis-operational - based on the path /isis-state/router-isis-config/is-address-family-v4. Each member element of
the container is represented as a class variable - with a specific
YANG type.
YANG Description: ISIS ipv4 address family
"""
__slots__ = ('_pybind_generated_by', '_path_helper', '_yang_name', '_rest_name', '_extmethods', '__afi','__safi','__originate_default_route','__originate_default_routemap_name','__default_metric','__l1_default_link_metric','__l2_default_link_metric','__administrative_distance','__maximum_equal_cost_paths','__redist_isis','__redist_ospf','__redist_static','__redist_connected','__redist_rip','__redist_bgp','__l1_wide_metric_enabled','__l2_wide_metric_enabled','__ldp_sync_enabled','__ldp_sync_hold_down','__summary_address_v4',)
_yang_name = 'is-address-family-v4'
_rest_name = 'is-address-family-v4'
_pybind_generated_by = 'container'
def __init__(self, *args, **kwargs):
path_helper_ = kwargs.pop("path_helper", None)
if path_helper_ is False:
self._path_helper = False
elif path_helper_ is not None and isinstance(path_helper_, xpathhelper.YANGPathHelper):
self._path_helper = path_helper_
elif hasattr(self, "_parent"):
path_helper_ = getattr(self._parent, "_path_helper", False)
self._path_helper = path_helper_
else:
self._path_helper = False
extmethods = kwargs.pop("extmethods", None)
if extmethods is False:
self._extmethods = False
elif extmethods is not None and isinstance(extmethods, dict):
self._extmethods = extmethods
elif hasattr(self, "_parent"):
extmethods = getattr(self._parent, "_extmethods", None)
self._extmethods = extmethods
else:
self._extmethods = False
self.__summary_address_v4 = YANGDynClass(base=YANGListType("address",summary_address_v4.summary_address_v4, yang_name="summary-address-v4", rest_name="summary-address-v4", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='address', extensions={u'tailf-common': {u'callpoint': u'isis-ipv4-summary-address', u'cli-suppress-show-path': None}}), is_container='list', yang_name="summary-address-v4", rest_name="summary-address-v4", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'isis-ipv4-summary-address', u'cli-suppress-show-path': None}}, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='list', is_config=False)
self.__redist_static = YANGDynClass(base=redist_static.redist_static, is_container='container', presence=False, yang_name="redist-static", rest_name="redist-static", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'isis-redistribution-redist-static-1'}}, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='container', is_config=False)
self.__l1_default_link_metric = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="l1-default-link-metric", rest_name="l1-default-link-metric", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='uint32', is_config=False)
self.__ldp_sync_enabled = YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_type="dict_key", restriction_arg={u'is-enabled': {'value': 1}, u'is-disabled': {'value': 0}},), is_leaf=True, yang_name="ldp-sync-enabled", rest_name="ldp-sync-enabled", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='isis-status', is_config=False)
self.__afi = YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_type="dict_key", restriction_arg={u'isis-ipv6-afi': {'value': 1}, u'isis-ipv4-afi': {'value': 0}},), is_leaf=True, yang_name="afi", rest_name="afi", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='isis-afi', is_config=False)
self.__safi = YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_type="dict_key", restriction_arg={u'isis-ipv6-unicast-safi': {'value': 1}, u'isis-ipv4-unicast-safi': {'value': 0}},), is_leaf=True, yang_name="safi", rest_name="safi", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='isis-safi', is_config=False)
self.__default_metric = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..65535']},int_size=16), is_leaf=True, yang_name="default-metric", rest_name="default-metric", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='uint16', is_config=False)
self.__administrative_distance = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="administrative-distance", rest_name="administrative-distance", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='uint32', is_config=False)
self.__redist_bgp = YANGDynClass(base=redist_bgp.redist_bgp, is_container='container', presence=False, yang_name="redist-bgp", rest_name="redist-bgp", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'isis-redistribution-redist-bgp-1'}}, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='container', is_config=False)
self.__l2_wide_metric_enabled = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="l2-wide-metric-enabled", rest_name="l2-wide-metric-enabled", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='boolean', is_config=False)
self.__originate_default_routemap_name = YANGDynClass(base=unicode, is_leaf=True, yang_name="originate-default-routemap-name", rest_name="originate-default-routemap-name", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='string', is_config=False)
self.__ldp_sync_hold_down = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..65535']},int_size=16), is_leaf=True, yang_name="ldp-sync-hold-down", rest_name="ldp-sync-hold-down", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='uint16', is_config=False)
self.__redist_rip = YANGDynClass(base=redist_rip.redist_rip, is_container='container', presence=False, yang_name="redist-rip", rest_name="redist-rip", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'isis-redistribution-redist-rip-1'}}, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='container', is_config=False)
self.__redist_connected = YANGDynClass(base=redist_connected.redist_connected, is_container='container', presence=False, yang_name="redist-connected", rest_name="redist-connected", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'isis-redistribution-redist-connected-1'}}, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='container', is_config=False)
self.__maximum_equal_cost_paths = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="maximum-equal-cost-paths", rest_name="maximum-equal-cost-paths", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='uint32', is_config=False)
self.__originate_default_route = YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_type="dict_key", restriction_arg={u'is-enabled': {'value': 1}, u'is-disabled': {'value': 0}},), is_leaf=True, yang_name="originate-default-route", rest_name="originate-default-route", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='isis-status', is_config=False)
self.__l1_wide_metric_enabled = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="l1-wide-metric-enabled", rest_name="l1-wide-metric-enabled", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='boolean', is_config=False)
self.__l2_default_link_metric = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="l2-default-link-metric", rest_name="l2-default-link-metric", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='uint32', is_config=False)
self.__redist_ospf = YANGDynClass(base=redist_ospf.redist_ospf, is_container='container', presence=False, yang_name="redist-ospf", rest_name="redist-ospf", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'isis-ospf-to-isis-redistribution', u'cli-suppress-show-path': None}}, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='container', is_config=False)
self.__redist_isis = YANGDynClass(base=redist_isis.redist_isis, is_container='container', presence=False, yang_name="redist-isis", rest_name="redist-isis", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'isis-isis-to-isis-redistribution', u'cli-suppress-show-path': None}}, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='container', is_config=False)
load = kwargs.pop("load", None)
if args:
if len(args) > 1:
raise TypeError("cannot create a YANG container with >1 argument")
all_attr = True
for e in self._pyangbind_elements:
if not hasattr(args[0], e):
all_attr = False
break
if not all_attr:
raise ValueError("Supplied object did not have the correct attributes")
for e in self._pyangbind_elements:
nobj = getattr(args[0], e)
if nobj._changed() is False:
continue
setmethod = getattr(self, "_set_%s" % e)
if load is None:
setmethod(getattr(args[0], e))
else:
setmethod(getattr(args[0], e), load=load)
def _path(self):
if hasattr(self, "_parent"):
return self._parent._path()+[self._yang_name]
else:
return [u'isis-state', u'router-isis-config', u'is-address-family-v4']
def _rest_path(self):
if hasattr(self, "_parent"):
if self._rest_name:
return self._parent._rest_path()+[self._rest_name]
else:
return self._parent._rest_path()
else:
return [u'isis-state', u'router-isis-config', u'is-address-family-v4']
def _get_afi(self):
"""
Getter method for afi, mapped from YANG variable /isis_state/router_isis_config/is_address_family_v4/afi (isis-afi)
YANG Description: AFI
"""
return self.__afi
def _set_afi(self, v, load=False):
"""
Setter method for afi, mapped from YANG variable /isis_state/router_isis_config/is_address_family_v4/afi (isis-afi)
If this variable is read-only (config: false) in the
source YANG file, then _set_afi is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_afi() directly.
YANG Description: AFI
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=unicode, restriction_type="dict_key", restriction_arg={u'isis-ipv6-afi': {'value': 1}, u'isis-ipv4-afi': {'value': 0}},), is_leaf=True, yang_name="afi", rest_name="afi", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='isis-afi', is_config=False)
except (TypeError, ValueError):
raise ValueError({
'error-string': """afi must be of a type compatible with isis-afi""",
'defined-type': "brocade-isis-operational:isis-afi",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_type="dict_key", restriction_arg={u'isis-ipv6-afi': {'value': 1}, u'isis-ipv4-afi': {'value': 0}},), is_leaf=True, yang_name="afi", rest_name="afi", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='isis-afi', is_config=False)""",
})
self.__afi = t
if hasattr(self, '_set'):
self._set()
def _unset_afi(self):
self.__afi = YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_type="dict_key", restriction_arg={u'isis-ipv6-afi': {'value': 1}, u'isis-ipv4-afi': {'value': 0}},), is_leaf=True, yang_name="afi", rest_name="afi", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='isis-afi', is_config=False)
def _get_safi(self):
"""
Getter method for safi, mapped from YANG variable /isis_state/router_isis_config/is_address_family_v4/safi (isis-safi)
YANG Description: SAFI
"""
return self.__safi
def _set_safi(self, v, load=False):
"""
Setter method for safi, mapped from YANG variable /isis_state/router_isis_config/is_address_family_v4/safi (isis-safi)
If this variable is read-only (config: false) in the
source YANG file, then _set_safi is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_safi() directly.
YANG Description: SAFI
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=unicode, restriction_type="dict_key", restriction_arg={u'isis-ipv6-unicast-safi': {'value': 1}, u'isis-ipv4-unicast-safi': {'value': 0}},), is_leaf=True, yang_name="safi", rest_name="safi", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='isis-safi', is_config=False)
except (TypeError, ValueError):
raise ValueError({
'error-string': """safi must be of a type compatible with isis-safi""",
'defined-type': "brocade-isis-operational:isis-safi",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_type="dict_key", restriction_arg={u'isis-ipv6-unicast-safi': {'value': 1}, u'isis-ipv4-unicast-safi': {'value': 0}},), is_leaf=True, yang_name="safi", rest_name="safi", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='isis-safi', is_config=False)""",
})
self.__safi = t
if hasattr(self, '_set'):
self._set()
def _unset_safi(self):
self.__safi = YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_type="dict_key", restriction_arg={u'isis-ipv6-unicast-safi': {'value': 1}, u'isis-ipv4-unicast-safi': {'value': 0}},), is_leaf=True, yang_name="safi", rest_name="safi", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='isis-safi', is_config=False)
def _get_originate_default_route(self):
"""
Getter method for originate_default_route, mapped from YANG variable /isis_state/router_isis_config/is_address_family_v4/originate_default_route (isis-status)
YANG Description: Advertise a default route to neighboring ISs
"""
return self.__originate_default_route
def _set_originate_default_route(self, v, load=False):
"""
Setter method for originate_default_route, mapped from YANG variable /isis_state/router_isis_config/is_address_family_v4/originate_default_route (isis-status)
If this variable is read-only (config: false) in the
source YANG file, then _set_originate_default_route is considered a private
method. Backends looking to populate this variable should
do so by calling thisObj._set_originate_default_route() directly.
YANG Description: Advertise a default route to neighboring ISs
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=unicode, restriction_type="dict_key", restriction_arg={u'is-enabled': {'value': 1}, u'is-disabled': {'value': 0}},), is_leaf=True, yang_name="originate-default-route", rest_name="originate-default-route", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='isis-status', is_config=False)
except (TypeError, ValueError):
raise ValueError({
'error-string': """originate_default_route must be of a type compatible with isis-status""",
'defined-type': "brocade-isis-operational:isis-status",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_type="dict_key", restriction_arg={u'is-enabled': {'value': 1}, u'is-disabled': {'value': 0}},), is_leaf=True, yang_name="originate-default-route", rest_name="originate-default-route", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='isis-status', is_config=False)""",
})
self.__originate_default_route = t
if hasattr(self, '_set'):
self._set()
def _unset_originate_default_route(self):
self.__originate_default_route = YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_type="dict_key", restriction_arg={u'is-enabled': {'value': 1}, u'is-disabled': {'value': 0}},), is_leaf=True, yang_name="originate-default-route", rest_name="originate-default-route", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='isis-status', is_config=False)
def _get_originate_default_routemap_name(self):
"""
Getter method for originate_default_routemap_name, mapped from YANG variable /isis_state/router_isis_config/is_address_family_v4/originate_default_routemap_name (string)
YANG Description: Route map to originate the default route
"""
return self.__originate_default_routemap_name
def _set_originate_default_routemap_name(self, v, load=False):
"""
Setter method for originate_default_routemap_name, mapped from YANG variable /isis_state/router_isis_config/is_address_family_v4/originate_default_routemap_name (string)
If this variable is read-only (config: false) in the
source YANG file, then _set_originate_default_routemap_name is considered a private
method. Backends looking to populate this variable should
do so by calling thisObj._set_originate_default_routemap_name() directly.
YANG Description: Route map to originate the default route
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=unicode, is_leaf=True, yang_name="originate-default-routemap-name", rest_name="originate-default-routemap-name", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='string', is_config=False)
except (TypeError, ValueError):
raise ValueError({
'error-string': """originate_default_routemap_name must be of a type compatible with string""",
'defined-type': "string",
'generated-type': """YANGDynClass(base=unicode, is_leaf=True, yang_name="originate-default-routemap-name", rest_name="originate-default-routemap-name", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='string', is_config=False)""",
})
self.__originate_default_routemap_name = t
if hasattr(self, '_set'):
self._set()
def _unset_originate_default_routemap_name(self):
self.__originate_default_routemap_name = YANGDynClass(base=unicode, is_leaf=True, yang_name="originate-default-routemap-name", rest_name="originate-default-routemap-name", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='string', is_config=False)
def _get_default_metric(self):
"""
Getter method for default_metric, mapped from YANG variable /isis_state/router_isis_config/is_address_family_v4/default_metric (uint16)
YANG Description: Default redistribution metric
"""
return self.__default_metric
def _set_default_metric(self, v, load=False):
"""
Setter method for default_metric, mapped from YANG variable /isis_state/router_isis_config/is_address_family_v4/default_metric (uint16)
If this variable is read-only (config: false) in the
source YANG file, then _set_default_metric is considered a private
method. Backends looking to populate this variable should
do so by calling thisObj._set_default_metric() directly.
YANG Description: Default redistribution metric
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..65535']},int_size=16), is_leaf=True, yang_name="default-metric", rest_name="default-metric", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='uint16', is_config=False)
except (TypeError, ValueError):
raise ValueError({
'error-string': """default_metric must be of a type compatible with uint16""",
'defined-type': "uint16",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..65535']},int_size=16), is_leaf=True, yang_name="default-metric", rest_name="default-metric", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='uint16', is_config=False)""",
})
self.__default_metric = t
if hasattr(self, '_set'):
self._set()
def _unset_default_metric(self):
self.__default_metric = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..65535']},int_size=16), is_leaf=True, yang_name="default-metric", rest_name="default-metric", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='uint16', is_config=False)
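# The uint16 leaf above relies on RestrictedClassType's range restriction
# ('0..65535', int_size=16). A minimal sketch of that bounds check (plain
# Python, not pyangbind; the name below is illustrative only):

```python
# Illustrative stand-in for the uint16 range restriction enforced by the
# generated default-metric setter: integers outside 0..65535 are rejected.
def validate_uint16(v):
    """Coerce v to int and return it if it fits in uint16, else raise."""
    n = int(v)
    if not 0 <= n <= 65535:
        raise ValueError("default_metric must be of a type compatible with uint16")
    return n
```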
def _get_l1_default_link_metric(self):
"""
Getter method for l1_default_link_metric, mapped from YANG variable /isis_state/router_isis_config/is_address_family_v4/l1_default_link_metric (uint32)
YANG Description: Default IS-IS Level-1 Link metric
"""
return self.__l1_default_link_metric
def _set_l1_default_link_metric(self, v, load=False):
"""
Setter method for l1_default_link_metric, mapped from YANG variable /isis_state/router_isis_config/is_address_family_v4/l1_default_link_metric (uint32)
If this variable is read-only (config: false) in the
source YANG file, then _set_l1_default_link_metric is considered a private
method. Backends looking to populate this variable should
do so by calling thisObj._set_l1_default_link_metric() directly.
YANG Description: Default IS-IS Level-1 Link metric
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="l1-default-link-metric", rest_name="l1-default-link-metric", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='uint32', is_config=False)
except (TypeError, ValueError):
raise ValueError({
'error-string': """l1_default_link_metric must be of a type compatible with uint32""",
'defined-type': "uint32",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="l1-default-link-metric", rest_name="l1-default-link-metric", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='uint32', is_config=False)""",
})
self.__l1_default_link_metric = t
if hasattr(self, '_set'):
self._set()
def _unset_l1_default_link_metric(self):
self.__l1_default_link_metric = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="l1-default-link-metric", rest_name="l1-default-link-metric", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='uint32', is_config=False)
def _get_l2_default_link_metric(self):
"""
Getter method for l2_default_link_metric, mapped from YANG variable /isis_state/router_isis_config/is_address_family_v4/l2_default_link_metric (uint32)
YANG Description: Default IS-IS Level-2 Link metric
"""
return self.__l2_default_link_metric
def _set_l2_default_link_metric(self, v, load=False):
"""
Setter method for l2_default_link_metric, mapped from YANG variable /isis_state/router_isis_config/is_address_family_v4/l2_default_link_metric (uint32)
If this variable is read-only (config: false) in the
source YANG file, then _set_l2_default_link_metric is considered a private
method. Backends looking to populate this variable should
do so by calling thisObj._set_l2_default_link_metric() directly.
YANG Description: Default IS-IS Level-2 Link metric
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="l2-default-link-metric", rest_name="l2-default-link-metric", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='uint32', is_config=False)
except (TypeError, ValueError):
raise ValueError({
'error-string': """l2_default_link_metric must be of a type compatible with uint32""",
'defined-type': "uint32",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="l2-default-link-metric", rest_name="l2-default-link-metric", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='uint32', is_config=False)""",
})
self.__l2_default_link_metric = t
if hasattr(self, '_set'):
self._set()
def _unset_l2_default_link_metric(self):
self.__l2_default_link_metric = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="l2-default-link-metric", rest_name="l2-default-link-metric", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='uint32', is_config=False)
def _get_administrative_distance(self):
"""
Getter method for administrative_distance, mapped from YANG variable /isis_state/router_isis_config/is_address_family_v4/administrative_distance (uint32)
YANG Description: Administrative Distance
"""
return self.__administrative_distance
def _set_administrative_distance(self, v, load=False):
"""
Setter method for administrative_distance, mapped from YANG variable /isis_state/router_isis_config/is_address_family_v4/administrative_distance (uint32)
If this variable is read-only (config: false) in the
source YANG file, then _set_administrative_distance is considered a private
method. Backends looking to populate this variable should
do so by calling thisObj._set_administrative_distance() directly.
YANG Description: Administrative Distance
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="administrative-distance", rest_name="administrative-distance", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='uint32', is_config=False)
except (TypeError, ValueError):
raise ValueError({
'error-string': """administrative_distance must be of a type compatible with uint32""",
'defined-type': "uint32",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="administrative-distance", rest_name="administrative-distance", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='uint32', is_config=False)""",
})
self.__administrative_distance = t
if hasattr(self, '_set'):
self._set()
def _unset_administrative_distance(self):
self.__administrative_distance = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="administrative-distance", rest_name="administrative-distance", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='uint32', is_config=False)
def _get_maximum_equal_cost_paths(self):
"""
Getter method for maximum_equal_cost_paths, mapped from YANG variable /isis_state/router_isis_config/is_address_family_v4/maximum_equal_cost_paths (uint32)
YANG Description: Maximum paths
"""
return self.__maximum_equal_cost_paths
def _set_maximum_equal_cost_paths(self, v, load=False):
"""
Setter method for maximum_equal_cost_paths, mapped from YANG variable /isis_state/router_isis_config/is_address_family_v4/maximum_equal_cost_paths (uint32)
If this variable is read-only (config: false) in the
source YANG file, then _set_maximum_equal_cost_paths is considered a private
method. Backends looking to populate this variable should
do so by calling thisObj._set_maximum_equal_cost_paths() directly.
YANG Description: Maximum paths
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="maximum-equal-cost-paths", rest_name="maximum-equal-cost-paths", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='uint32', is_config=False)
except (TypeError, ValueError):
raise ValueError({
'error-string': """maximum_equal_cost_paths must be of a type compatible with uint32""",
'defined-type': "uint32",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="maximum-equal-cost-paths", rest_name="maximum-equal-cost-paths", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='uint32', is_config=False)""",
})
self.__maximum_equal_cost_paths = t
if hasattr(self, '_set'):
self._set()
def _unset_maximum_equal_cost_paths(self):
self.__maximum_equal_cost_paths = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), is_leaf=True, yang_name="maximum-equal-cost-paths", rest_name="maximum-equal-cost-paths", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='uint32', is_config=False)
def _get_redist_isis(self):
"""
Getter method for redist_isis, mapped from YANG variable /isis_state/router_isis_config/is_address_family_v4/redist_isis (container)
YANG Description: Redistribution config for IS-IS routes into IS-IS between levels
"""
return self.__redist_isis
def _set_redist_isis(self, v, load=False):
"""
Setter method for redist_isis, mapped from YANG variable /isis_state/router_isis_config/is_address_family_v4/redist_isis (container)
If this variable is read-only (config: false) in the
source YANG file, then _set_redist_isis is considered a private
method. Backends looking to populate this variable should
do so by calling thisObj._set_redist_isis() directly.
YANG Description: Redistribution config for IS-IS routes into IS-IS between levels
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=redist_isis.redist_isis, is_container='container', presence=False, yang_name="redist-isis", rest_name="redist-isis", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'isis-isis-to-isis-redistribution', u'cli-suppress-show-path': None}}, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='container', is_config=False)
except (TypeError, ValueError):
raise ValueError({
'error-string': """redist_isis must be of a type compatible with container""",
'defined-type': "container",
'generated-type': """YANGDynClass(base=redist_isis.redist_isis, is_container='container', presence=False, yang_name="redist-isis", rest_name="redist-isis", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'isis-isis-to-isis-redistribution', u'cli-suppress-show-path': None}}, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='container', is_config=False)""",
})
self.__redist_isis = t
if hasattr(self, '_set'):
self._set()
def _unset_redist_isis(self):
self.__redist_isis = YANGDynClass(base=redist_isis.redist_isis, is_container='container', presence=False, yang_name="redist-isis", rest_name="redist-isis", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'isis-isis-to-isis-redistribution', u'cli-suppress-show-path': None}}, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='container', is_config=False)
def _get_redist_ospf(self):
"""
Getter method for redist_ospf, mapped from YANG variable /isis_state/router_isis_config/is_address_family_v4/redist_ospf (container)
YANG Description: Redistribution config for OSPF routes into IS-IS
"""
return self.__redist_ospf
def _set_redist_ospf(self, v, load=False):
"""
Setter method for redist_ospf, mapped from YANG variable /isis_state/router_isis_config/is_address_family_v4/redist_ospf (container)
If this variable is read-only (config: false) in the
source YANG file, then _set_redist_ospf is considered a private
method. Backends looking to populate this variable should
do so by calling thisObj._set_redist_ospf() directly.
YANG Description: Redistribution config for OSPF routes into IS-IS
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=redist_ospf.redist_ospf, is_container='container', presence=False, yang_name="redist-ospf", rest_name="redist-ospf", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'isis-ospf-to-isis-redistribution', u'cli-suppress-show-path': None}}, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='container', is_config=False)
except (TypeError, ValueError):
raise ValueError({
'error-string': """redist_ospf must be of a type compatible with container""",
'defined-type': "container",
'generated-type': """YANGDynClass(base=redist_ospf.redist_ospf, is_container='container', presence=False, yang_name="redist-ospf", rest_name="redist-ospf", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'isis-ospf-to-isis-redistribution', u'cli-suppress-show-path': None}}, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='container', is_config=False)""",
})
self.__redist_ospf = t
if hasattr(self, '_set'):
self._set()
def _unset_redist_ospf(self):
self.__redist_ospf = YANGDynClass(base=redist_ospf.redist_ospf, is_container='container', presence=False, yang_name="redist-ospf", rest_name="redist-ospf", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'isis-ospf-to-isis-redistribution', u'cli-suppress-show-path': None}}, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='container', is_config=False)
def _get_redist_static(self):
"""
Getter method for redist_static, mapped from YANG variable /isis_state/router_isis_config/is_address_family_v4/redist_static (container)
"""
return self.__redist_static
def _set_redist_static(self, v, load=False):
"""
Setter method for redist_static, mapped from YANG variable /isis_state/router_isis_config/is_address_family_v4/redist_static (container)
If this variable is read-only (config: false) in the
source YANG file, then _set_redist_static is considered a private
method. Backends looking to populate this variable should
do so by calling thisObj._set_redist_static() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=redist_static.redist_static, is_container='container', presence=False, yang_name="redist-static", rest_name="redist-static", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'isis-redistribution-redist-static-1'}}, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='container', is_config=False)
except (TypeError, ValueError):
raise ValueError({
'error-string': """redist_static must be of a type compatible with container""",
'defined-type': "container",
'generated-type': """YANGDynClass(base=redist_static.redist_static, is_container='container', presence=False, yang_name="redist-static", rest_name="redist-static", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'isis-redistribution-redist-static-1'}}, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='container', is_config=False)""",
})
self.__redist_static = t
if hasattr(self, '_set'):
self._set()
def _unset_redist_static(self):
self.__redist_static = YANGDynClass(base=redist_static.redist_static, is_container='container', presence=False, yang_name="redist-static", rest_name="redist-static", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'isis-redistribution-redist-static-1'}}, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='container', is_config=False)
def _get_redist_connected(self):
"""
Getter method for redist_connected, mapped from YANG variable /isis_state/router_isis_config/is_address_family_v4/redist_connected (container)
"""
return self.__redist_connected
def _set_redist_connected(self, v, load=False):
"""
Setter method for redist_connected, mapped from YANG variable /isis_state/router_isis_config/is_address_family_v4/redist_connected (container)
If this variable is read-only (config: false) in the
source YANG file, then _set_redist_connected is considered a private
method. Backends looking to populate this variable should
do so by calling thisObj._set_redist_connected() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=redist_connected.redist_connected, is_container='container', presence=False, yang_name="redist-connected", rest_name="redist-connected", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'isis-redistribution-redist-connected-1'}}, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='container', is_config=False)
except (TypeError, ValueError):
raise ValueError({
'error-string': """redist_connected must be of a type compatible with container""",
'defined-type': "container",
'generated-type': """YANGDynClass(base=redist_connected.redist_connected, is_container='container', presence=False, yang_name="redist-connected", rest_name="redist-connected", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'isis-redistribution-redist-connected-1'}}, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='container', is_config=False)""",
})
self.__redist_connected = t
if hasattr(self, '_set'):
self._set()
def _unset_redist_connected(self):
self.__redist_connected = YANGDynClass(base=redist_connected.redist_connected, is_container='container', presence=False, yang_name="redist-connected", rest_name="redist-connected", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'isis-redistribution-redist-connected-1'}}, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='container', is_config=False)
def _get_redist_rip(self):
"""
Getter method for redist_rip, mapped from YANG variable /isis_state/router_isis_config/is_address_family_v4/redist_rip (container)
"""
return self.__redist_rip
def _set_redist_rip(self, v, load=False):
"""
Setter method for redist_rip, mapped from YANG variable /isis_state/router_isis_config/is_address_family_v4/redist_rip (container)
If this variable is read-only (config: false) in the
source YANG file, then _set_redist_rip is considered a private
method. Backends looking to populate this variable should
do so by calling thisObj._set_redist_rip() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=redist_rip.redist_rip, is_container='container', presence=False, yang_name="redist-rip", rest_name="redist-rip", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'isis-redistribution-redist-rip-1'}}, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='container', is_config=False)
except (TypeError, ValueError):
raise ValueError({
'error-string': """redist_rip must be of a type compatible with container""",
'defined-type': "container",
'generated-type': """YANGDynClass(base=redist_rip.redist_rip, is_container='container', presence=False, yang_name="redist-rip", rest_name="redist-rip", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'isis-redistribution-redist-rip-1'}}, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='container', is_config=False)""",
})
self.__redist_rip = t
if hasattr(self, '_set'):
self._set()
def _unset_redist_rip(self):
self.__redist_rip = YANGDynClass(base=redist_rip.redist_rip, is_container='container', presence=False, yang_name="redist-rip", rest_name="redist-rip", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'isis-redistribution-redist-rip-1'}}, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='container', is_config=False)
def _get_redist_bgp(self):
"""
Getter method for redist_bgp, mapped from YANG variable /isis_state/router_isis_config/is_address_family_v4/redist_bgp (container)
"""
return self.__redist_bgp
def _set_redist_bgp(self, v, load=False):
"""
Setter method for redist_bgp, mapped from YANG variable /isis_state/router_isis_config/is_address_family_v4/redist_bgp (container)
If this variable is read-only (config: false) in the
source YANG file, then _set_redist_bgp is considered a private
method. Backends looking to populate this variable should
do so by calling thisObj._set_redist_bgp() directly.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=redist_bgp.redist_bgp, is_container='container', presence=False, yang_name="redist-bgp", rest_name="redist-bgp", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'isis-redistribution-redist-bgp-1'}}, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='container', is_config=False)
except (TypeError, ValueError):
raise ValueError({
'error-string': """redist_bgp must be of a type compatible with container""",
'defined-type': "container",
'generated-type': """YANGDynClass(base=redist_bgp.redist_bgp, is_container='container', presence=False, yang_name="redist-bgp", rest_name="redist-bgp", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'isis-redistribution-redist-bgp-1'}}, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='container', is_config=False)""",
})
self.__redist_bgp = t
if hasattr(self, '_set'):
self._set()
def _unset_redist_bgp(self):
self.__redist_bgp = YANGDynClass(base=redist_bgp.redist_bgp, is_container='container', presence=False, yang_name="redist-bgp", rest_name="redist-bgp", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'isis-redistribution-redist-bgp-1'}}, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='container', is_config=False)
def _get_l1_wide_metric_enabled(self):
"""
Getter method for l1_wide_metric_enabled, mapped from YANG variable /isis_state/router_isis_config/is_address_family_v4/l1_wide_metric_enabled (boolean)
YANG Description: Level-1 ISIS use wide-metric
"""
return self.__l1_wide_metric_enabled
def _set_l1_wide_metric_enabled(self, v, load=False):
"""
Setter method for l1_wide_metric_enabled, mapped from YANG variable /isis_state/router_isis_config/is_address_family_v4/l1_wide_metric_enabled (boolean)
If this variable is read-only (config: false) in the
source YANG file, then _set_l1_wide_metric_enabled is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_l1_wide_metric_enabled() directly.
YANG Description: Level-1 ISIS use wide-metric
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=YANGBool, is_leaf=True, yang_name="l1-wide-metric-enabled", rest_name="l1-wide-metric-enabled", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='boolean', is_config=False)
except (TypeError, ValueError):
raise ValueError({
'error-string': """l1_wide_metric_enabled must be of a type compatible with boolean""",
'defined-type': "boolean",
'generated-type': """YANGDynClass(base=YANGBool, is_leaf=True, yang_name="l1-wide-metric-enabled", rest_name="l1-wide-metric-enabled", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='boolean', is_config=False)""",
})
self.__l1_wide_metric_enabled = t
if hasattr(self, '_set'):
self._set()
def _unset_l1_wide_metric_enabled(self):
self.__l1_wide_metric_enabled = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="l1-wide-metric-enabled", rest_name="l1-wide-metric-enabled", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='boolean', is_config=False)
def _get_l2_wide_metric_enabled(self):
"""
Getter method for l2_wide_metric_enabled, mapped from YANG variable /isis_state/router_isis_config/is_address_family_v4/l2_wide_metric_enabled (boolean)
YANG Description: Level-2 ISIS use wide-metric
"""
return self.__l2_wide_metric_enabled
def _set_l2_wide_metric_enabled(self, v, load=False):
"""
Setter method for l2_wide_metric_enabled, mapped from YANG variable /isis_state/router_isis_config/is_address_family_v4/l2_wide_metric_enabled (boolean)
If this variable is read-only (config: false) in the
source YANG file, then _set_l2_wide_metric_enabled is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_l2_wide_metric_enabled() directly.
YANG Description: Level-2 ISIS use wide-metric
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=YANGBool, is_leaf=True, yang_name="l2-wide-metric-enabled", rest_name="l2-wide-metric-enabled", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='boolean', is_config=False)
except (TypeError, ValueError):
raise ValueError({
'error-string': """l2_wide_metric_enabled must be of a type compatible with boolean""",
'defined-type': "boolean",
'generated-type': """YANGDynClass(base=YANGBool, is_leaf=True, yang_name="l2-wide-metric-enabled", rest_name="l2-wide-metric-enabled", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='boolean', is_config=False)""",
})
self.__l2_wide_metric_enabled = t
if hasattr(self, '_set'):
self._set()
def _unset_l2_wide_metric_enabled(self):
self.__l2_wide_metric_enabled = YANGDynClass(base=YANGBool, is_leaf=True, yang_name="l2-wide-metric-enabled", rest_name="l2-wide-metric-enabled", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='boolean', is_config=False)
def _get_ldp_sync_enabled(self):
"""
Getter method for ldp_sync_enabled, mapped from YANG variable /isis_state/router_isis_config/is_address_family_v4/ldp_sync_enabled (isis-status)
YANG Description: If LDP sync enabled on IS-IS interfaces
"""
return self.__ldp_sync_enabled
def _set_ldp_sync_enabled(self, v, load=False):
"""
Setter method for ldp_sync_enabled, mapped from YANG variable /isis_state/router_isis_config/is_address_family_v4/ldp_sync_enabled (isis-status)
If this variable is read-only (config: false) in the
source YANG file, then _set_ldp_sync_enabled is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_ldp_sync_enabled() directly.
YANG Description: If LDP sync enabled on IS-IS interfaces
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=unicode, restriction_type="dict_key", restriction_arg={u'is-enabled': {'value': 1}, u'is-disabled': {'value': 0}},), is_leaf=True, yang_name="ldp-sync-enabled", rest_name="ldp-sync-enabled", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='isis-status', is_config=False)
except (TypeError, ValueError):
raise ValueError({
'error-string': """ldp_sync_enabled must be of a type compatible with isis-status""",
'defined-type': "brocade-isis-operational:isis-status",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_type="dict_key", restriction_arg={u'is-enabled': {'value': 1}, u'is-disabled': {'value': 0}},), is_leaf=True, yang_name="ldp-sync-enabled", rest_name="ldp-sync-enabled", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='isis-status', is_config=False)""",
})
self.__ldp_sync_enabled = t
if hasattr(self, '_set'):
self._set()
def _unset_ldp_sync_enabled(self):
self.__ldp_sync_enabled = YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_type="dict_key", restriction_arg={u'is-enabled': {'value': 1}, u'is-disabled': {'value': 0}},), is_leaf=True, yang_name="ldp-sync-enabled", rest_name="ldp-sync-enabled", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='isis-status', is_config=False)
def _get_ldp_sync_hold_down(self):
"""
Getter method for ldp_sync_hold_down, mapped from YANG variable /isis_state/router_isis_config/is_address_family_v4/ldp_sync_hold_down (uint16)
YANG Description: LDP-Sync hold-down duration; 0 is infinite
"""
return self.__ldp_sync_hold_down
def _set_ldp_sync_hold_down(self, v, load=False):
"""
Setter method for ldp_sync_hold_down, mapped from YANG variable /isis_state/router_isis_config/is_address_family_v4/ldp_sync_hold_down (uint16)
If this variable is read-only (config: false) in the
source YANG file, then _set_ldp_sync_hold_down is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_ldp_sync_hold_down() directly.
YANG Description: LDP-Sync hold-down duration; 0 is infinite
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..65535']},int_size=16), is_leaf=True, yang_name="ldp-sync-hold-down", rest_name="ldp-sync-hold-down", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='uint16', is_config=False)
except (TypeError, ValueError):
raise ValueError({
'error-string': """ldp_sync_hold_down must be of a type compatible with uint16""",
'defined-type': "uint16",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..65535']},int_size=16), is_leaf=True, yang_name="ldp-sync-hold-down", rest_name="ldp-sync-hold-down", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='uint16', is_config=False)""",
})
self.__ldp_sync_hold_down = t
if hasattr(self, '_set'):
self._set()
def _unset_ldp_sync_hold_down(self):
self.__ldp_sync_hold_down = YANGDynClass(base=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..65535']},int_size=16), is_leaf=True, yang_name="ldp-sync-hold-down", rest_name="ldp-sync-hold-down", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='uint16', is_config=False)
def _get_summary_address_v4(self):
"""
Getter method for summary_address_v4, mapped from YANG variable /isis_state/router_isis_config/is_address_family_v4/summary_address_v4 (list)
YANG Description: IS-IS IPv4 address summary
"""
return self.__summary_address_v4
def _set_summary_address_v4(self, v, load=False):
"""
Setter method for summary_address_v4, mapped from YANG variable /isis_state/router_isis_config/is_address_family_v4/summary_address_v4 (list)
If this variable is read-only (config: false) in the
source YANG file, then _set_summary_address_v4 is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_summary_address_v4() directly.
YANG Description: IS-IS IPv4 address summary
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=YANGListType("address",summary_address_v4.summary_address_v4, yang_name="summary-address-v4", rest_name="summary-address-v4", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='address', extensions={u'tailf-common': {u'callpoint': u'isis-ipv4-summary-address', u'cli-suppress-show-path': None}}), is_container='list', yang_name="summary-address-v4", rest_name="summary-address-v4", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'isis-ipv4-summary-address', u'cli-suppress-show-path': None}}, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='list', is_config=False)
except (TypeError, ValueError):
raise ValueError({
'error-string': """summary_address_v4 must be of a type compatible with list""",
'defined-type': "list",
'generated-type': """YANGDynClass(base=YANGListType("address",summary_address_v4.summary_address_v4, yang_name="summary-address-v4", rest_name="summary-address-v4", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='address', extensions={u'tailf-common': {u'callpoint': u'isis-ipv4-summary-address', u'cli-suppress-show-path': None}}), is_container='list', yang_name="summary-address-v4", rest_name="summary-address-v4", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'isis-ipv4-summary-address', u'cli-suppress-show-path': None}}, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='list', is_config=False)""",
})
self.__summary_address_v4 = t
if hasattr(self, '_set'):
self._set()
def _unset_summary_address_v4(self):
self.__summary_address_v4 = YANGDynClass(base=YANGListType("address",summary_address_v4.summary_address_v4, yang_name="summary-address-v4", rest_name="summary-address-v4", parent=self, is_container='list', user_ordered=False, path_helper=self._path_helper, yang_keys='address', extensions={u'tailf-common': {u'callpoint': u'isis-ipv4-summary-address', u'cli-suppress-show-path': None}}), is_container='list', yang_name="summary-address-v4", rest_name="summary-address-v4", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'isis-ipv4-summary-address', u'cli-suppress-show-path': None}}, namespace='urn:brocade.com:mgmt:brocade-isis-operational', defining_module='brocade-isis-operational', yang_type='list', is_config=False)
afi = __builtin__.property(_get_afi)
safi = __builtin__.property(_get_safi)
originate_default_route = __builtin__.property(_get_originate_default_route)
originate_default_routemap_name = __builtin__.property(_get_originate_default_routemap_name)
default_metric = __builtin__.property(_get_default_metric)
l1_default_link_metric = __builtin__.property(_get_l1_default_link_metric)
l2_default_link_metric = __builtin__.property(_get_l2_default_link_metric)
administrative_distance = __builtin__.property(_get_administrative_distance)
maximum_equal_cost_paths = __builtin__.property(_get_maximum_equal_cost_paths)
redist_isis = __builtin__.property(_get_redist_isis)
redist_ospf = __builtin__.property(_get_redist_ospf)
redist_static = __builtin__.property(_get_redist_static)
redist_connected = __builtin__.property(_get_redist_connected)
redist_rip = __builtin__.property(_get_redist_rip)
redist_bgp = __builtin__.property(_get_redist_bgp)
l1_wide_metric_enabled = __builtin__.property(_get_l1_wide_metric_enabled)
l2_wide_metric_enabled = __builtin__.property(_get_l2_wide_metric_enabled)
ldp_sync_enabled = __builtin__.property(_get_ldp_sync_enabled)
ldp_sync_hold_down = __builtin__.property(_get_ldp_sync_hold_down)
summary_address_v4 = __builtin__.property(_get_summary_address_v4)
_pyangbind_elements = {'afi': afi, 'safi': safi, 'originate_default_route': originate_default_route, 'originate_default_routemap_name': originate_default_routemap_name, 'default_metric': default_metric, 'l1_default_link_metric': l1_default_link_metric, 'l2_default_link_metric': l2_default_link_metric, 'administrative_distance': administrative_distance, 'maximum_equal_cost_paths': maximum_equal_cost_paths, 'redist_isis': redist_isis, 'redist_ospf': redist_ospf, 'redist_static': redist_static, 'redist_connected': redist_connected, 'redist_rip': redist_rip, 'redist_bgp': redist_bgp, 'l1_wide_metric_enabled': l1_wide_metric_enabled, 'l2_wide_metric_enabled': l2_wide_metric_enabled, 'ldp_sync_enabled': ldp_sync_enabled, 'ldp_sync_hold_down': ldp_sync_hold_down, 'summary_address_v4': summary_address_v4, }
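For orientation, every generated method above follows one pattern per YANG node: a name-mangled private attribute, a validating `_set_*` that wraps the value in `YANGDynClass`, an `_unset_*` that restores the default, and a read-only property bound to `_get_*`. A minimal hand-written sketch of that pattern (illustrative only — the class, leaf name, and default below are hypothetical, not part of the generated bindings):

```python
class LeafHolder(object):
    """Illustrative stand-in for a pyangbind-generated class with one
    boolean leaf. Names and the default value here are hypothetical."""

    def __init__(self):
        self._unset_enabled()

    def _get_enabled(self):
        # Getter: simply return the private attribute.
        return self.__enabled

    def _set_enabled(self, v, load=False):
        # Setter: validate the value, raising the same dict-shaped
        # ValueError the generated code uses on a type mismatch.
        if not isinstance(v, bool):
            raise ValueError({
                "error-string": "enabled must be of a type compatible with boolean",
                "defined-type": "boolean",
            })
        self.__enabled = v

    def _unset_enabled(self):
        # Unset: restore the (hypothetical) YANG default.
        self.__enabled = False

    # Read-only property: external reads go through the getter; writes
    # must call _set_enabled(), mirroring the config:false semantics.
    enabled = property(_get_enabled)
```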
0dd04ac0fb1dbbc393635034435d31e5a0163d24 | 6,668 | py | Python | tests/test_circuit.py | fabian-hk/Secure-Two-Party-Computation | f7e10a0a5c1b0361dd700391d81cdcc75612666d | ["BSD-2-Clause"] | 6 | 2019-05-21T18:40:50.000Z | 2021-10-19T10:27:50.000Z | tests/test_circuit.py | fabian-hk/Secure-Two-Party-Computation | f7e10a0a5c1b0361dd700391d81cdcc75612666d | ["BSD-2-Clause"] | null | null | null | tests/test_circuit.py | fabian-hk/Secure-Two-Party-Computation | f7e10a0a5c1b0361dd700391d81cdcc75612666d | ["BSD-2-Clause"] | null | null | null
import unittest
from random import randint, random
import numpy as np
from tests.test_circuit_creater import *
from tests.evaluate_circuit import evaluate_circuit
from tests.plain_evaluator import *
from tools.person import Person
class TestCircuit(unittest.TestCase):
def test_circuit_0(self):
for i in range(4):
in_vals_a = [str(randint(0, 1)) + str(randint(0, 1))]
in_vals_b = [str(randint(0, 1)) + str(randint(0, 1))]
res_mpc, res_plain = evaluate_circuit(create_example_circuit_0, in_vals_a, in_vals_b)
# check if the MPC and plain result are the same
self.assertEqual(res_mpc, res_plain)
def test_circuit_1(self):
for i in range(4):
in_vals_a = [str(randint(0, 1)) + str(randint(0, 1))]
in_vals_b = [str(randint(0, 1)) + str(randint(0, 1))]
res_mpc, res_plain = evaluate_circuit(create_example_circuit_1, in_vals_a, in_vals_b)
# check if the MPC and plain result are the same
self.assertEqual(res_mpc, res_plain)
def test_circuit_2(self):
for i in range(4):
in_vals_a = [str(randint(0, 1)) + str(randint(0, 1)) + str(randint(0, 1)) + str(randint(0, 1))]
in_vals_b = [str(randint(0, 1)) + str(randint(0, 1)) + str(randint(0, 1)) + str(randint(0, 1))]
res_mpc, res_plain = evaluate_circuit(create_example_circuit_2, in_vals_a, in_vals_b)
# check if the MPC and plain result are the same
self.assertEqual(res_mpc, res_plain)
def test_circuit_3(self):
for i in range(4):
in_vals_a = [str(randint(0, 1)) + str(randint(0, 1))]
in_vals_b = [str(randint(0, 1)) + str(randint(0, 1))]
res_mpc, res_plain = evaluate_circuit(create_example_circuit_3, in_vals_a, in_vals_b)
# check if the MPC and plain result are the same
self.assertEqual(res_mpc, res_plain)
def test_circuit_4(self):
for i in range(4):
in_vals_a = [str(randint(0, 1)) + str(randint(0, 1)) + str(randint(0, 1))]
in_vals_b = [str(randint(0, 1)) + str(randint(0, 1)) + str(randint(0, 1))]
res_mpc, res_plain = evaluate_circuit(create_example_circuit_4, in_vals_a, in_vals_b)
# check if the MPC and plain result are the same
self.assertEqual(res_mpc, res_plain)
def test_circuit_5(self):
for i in range(4):
in_vals_a = [str(randint(0, 1)) + str(randint(0, 1))]
in_vals_b = [str(randint(0, 1)) + str(randint(0, 1))]
res_mpc, res_plain = evaluate_circuit(create_example_circuit_5, in_vals_a, in_vals_b)
# check if the MPC and plain result are the same
self.assertEqual(res_mpc, res_plain)
def test_and_operation(self):
for i in range(4):
in_vals_a = [str(randint(0, 1)) + str(randint(0, 1)) + str(randint(0, 1)) + str(randint(0, 1)) + str(
randint(0, 1)) + str(randint(0, 1)) + str(randint(0, 1)) + str(randint(0, 1))]
in_vals_b = [str(randint(0, 1)) + str(randint(0, 1)) + str(randint(0, 1)) + str(randint(0, 1)) + str(
randint(0, 1)) + str(randint(0, 1)) + str(randint(0, 1)) + str(randint(0, 1))]
res_mpc, res_plain = evaluate_circuit(and_operation, in_vals_a, in_vals_b)
# check if the MPC and plain result are the same
self.assertEqual(res_mpc, res_plain)
class TestFunctions(unittest.TestCase):
def test_add(self):
for i in range(5):
in_vals_a = [randint(0, 1073741823)]
in_vals_b = [randint(0, 1073741823)]
if random() < 0.5:
in_vals_a[0] *= -1
if random() < 0.5:
in_vals_b[0] *= -1
res_mpc_proto, res_mpc, res_plain = evaluate_circuit("add", in_vals_a, in_vals_b, True)
# check if the MPC and plain result are the same
self.assertEqual(res_mpc, res_plain)
res_dez = h.print_output(res_mpc_proto)
self.assertEqual(res_dez, (in_vals_a[0] + in_vals_b[0]))
def test_equality_test(self):
for i in range(5):
in_vals_a = [randint(0, 4294967295)]
if random() < 0.75:
in_vals_b = in_vals_a
else:
in_vals_b = [randint(0, 4294967295)]
res_mpc_proto, res_mpc, res_plain = evaluate_circuit("equality_test", in_vals_a, in_vals_b, True)
# check if the MPC and plain result are the same
self.assertEqual(res_mpc, res_plain)
res_dez = h.print_output(res_mpc_proto)
self.assertEqual(res_dez == 1, in_vals_a == in_vals_b)
def test_mean(self):
for i in range(5):
in_vals_a = [randint(0, 268435454), randint(0, 268435454), randint(0, 268435454), randint(0, 268435454)]
in_vals_b = [randint(0, 268435454), randint(0, 268435454), randint(0, 268435454), randint(0, 268435454)]
res_mpc_proto, res_mpc, res_plain = evaluate_circuit("mean", in_vals_a, in_vals_b, True)
# check if the MPC and plain result are the same
self.assertEqual(res_mpc, res_plain)
res_dez = h.print_output(res_mpc_proto)
self.assertEqual(int(np.mean(in_vals_a + in_vals_b)), res_dez)
class TestGates(unittest.TestCase):
def test_and_gate(self):
in_vals_a = ["0", "1"]
in_vals_b = ["0", "1"]
for i in range(4):
for in_val_a in in_vals_a:
for in_val_b in in_vals_b:
res_mpc, res_plain = evaluate_circuit(create_and_gate, in_val_a, in_val_b)
# check if the MPC and plain result are the same
self.assertEqual(res_mpc, res_plain)
def test_xor_gate(self):
in_vals_a = ["0", "1"]
in_vals_b = ["0", "1"]
for i in range(4):
for in_val_a in in_vals_a:
for in_val_b in in_vals_b:
res_mpc, res_plain = evaluate_circuit(create_xor_gate, in_val_a, in_val_b)
# check if the MPC and plain result are the same
self.assertEqual(res_mpc, res_plain)
def test_nand_gate(self):
in_vals_a = ["0", "1"]
in_vals_b = ["0", "1"]
for i in range(4):
for in_val_a in in_vals_a:
for in_val_b in in_vals_b:
res_mpc, res_plain = evaluate_circuit(create_nand_gate, in_val_a, in_val_b)
# check if the MPC and plain result are the same
self.assertEqual(res_mpc, res_plain)
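The tests above assemble their random bit-string inputs by concatenating many `str(randint(0, 1))` calls. A small helper (hypothetical — not in this repository) would express the same input generation more compactly:

```python
from random import randint

def random_bits(n):
    """Return a string of n random '0'/'1' characters, e.g. '0110'."""
    return "".join(str(randint(0, 1)) for _ in range(n))

# Usage sketch: in_vals_a = [random_bits(4)] replaces four chained
# str(randint(0, 1)) concatenations in the tests above.
```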
0de708efca0bb14ea3f98e57baa7fdfbd2d92fde | 15,596 | py | Python | unit_tests/caseworker/cases/test_templates.py | uktrade/lite-frontend | c0f79d6c511a87406b953fe68cbf6546cc18874e | ["MIT"] | 1 | 2021-10-16T16:36:58.000Z | 2021-10-16T16:36:58.000Z | unit_tests/caseworker/cases/test_templates.py | uktrade/lite-frontend | c0f79d6c511a87406b953fe68cbf6546cc18874e | ["MIT"] | 45 | 2020-08-11T14:37:46.000Z | 2022-03-29T17:03:02.000Z | unit_tests/caseworker/cases/test_templates.py | uktrade/lite-frontend | c0f79d6c511a87406b953fe68cbf6546cc18874e | ["MIT"] | 3 | 2021-02-01T06:26:19.000Z | 2022-02-21T23:02:46.000Z
import pytest
import requests
from bs4 import BeautifulSoup
from django.template.loader import render_to_string
from caseworker.cases.objects import Case
team1 = {"id": "136cbb1f-390b-4f78-bfca-86300edec300", "name": "team1", "part_of_ecju": None}
team2 = {"id": "47762273-5655-4ce3-afa1-b34112f3e781", "name": "team2", "part_of_ecju": None}
john_smith = {
"email": "john.smith@example.com",
"first_name": "John",
"id": "63c74ddd-c119-48cc-8696-d196218ca583",
"last_name": "Smith",
"role_name": "Super User",
"status": "Active",
"team": team1,
}
john_doe = {
"email": "john.doe@example.com",
"first_name": "John",
"id": "63c74ddd-c119-48cc-8696-d196218ca583",
"last_name": "Doe",
"role_name": "Super User",
"status": "Active",
"team": team1,
}
jane_doe = {
"email": "jane.doe@example.com",
"first_name": "Jane",
"id": "24afb1dc-fa1e-40d1-a716-840585c85ebc",
"last_name": "Doe",
"role_name": "Super User",
"status": "Active",
"team": team2,
}
jane_smith = {
"email": "jane.smith@example.com",
"first_name": "Jane",
"id": "24afb1dc-fa1e-40d1-a716-840585c85ebc",
"last_name": "Smith",
"role_name": "Super User",
"status": "Active",
"team": team2,
}
dummy_advice = {
"id": "f4f3476f-9849-49d1-973e-62b185085a64",
"text": "",
"note": "",
"type": {"key": "approve", "value": "Approve"},
"level": "user",
"proviso": None,
"denial_reasons": [],
"footnote": None,
"user": jane_smith,
"created_at": "2021-03-18T11:27:56.625251Z",
"good": None,
"goods_type": None,
"country": None,
"end_user": "633178cd-83ec-4773-8829-c19065912565",
"ultimate_end_user": None,
"consignee": None,
"third_party": None,
}
def test_advice_section_no_user_advice_checkboxes_visible_no_combine_button(data_standard_case):
context = {}
context["queue"] = {"id": "00000000-0000-0000-0000-000000000001"}
case = {**data_standard_case}
context["case"] = Case(case["case"])
context["current_user"] = jane_doe
context["current_advice_level"] = ["user"]
html = render_to_string("case/tabs/user-advice.html", context)
soup = BeautifulSoup(html, "html.parser")
assert "app-advice__disabled-buttons" in soup.find(id="button-combine-user-advice").parent["class"]
assert soup.find(id="link-select-all-goods")
assert soup.find(id="link-select-all-destinations")
def test_advice_section_no_user_advice_checkboxes_visible_no_combine_button_grouped_view(
data_standard_case, rf, client
):
context = {}
context["queue"] = {"id": "00000000-0000-0000-0000-000000000001"}
case = {**data_standard_case}
context["case"] = Case(case["case"])
context["current_user"] = jane_doe
context["current_advice_level"] = ["user"]
case_id = context["case"]["id"]
queue_id = context["queue"]["id"]
request = rf.get(f"/queues/{queue_id}/cases/{case_id}/user-advice/?grouped-advice-view=True")
request.session = client.session
request.requests_session = requests.Session()
html = render_to_string("case/tabs/user-advice.html", context=context, request=request)
soup = BeautifulSoup(html, "html.parser")
assert "app-advice__disabled-buttons" in soup.find(id="button-combine-user-advice").parent["class"]
assert soup.find(id="button-select-all-no_advice")
def test_advice_section_user_can_combine_advice_from_own_team(data_standard_case, rf, client):
context = {}
context["queue"] = {"id": "00000000-0000-0000-0000-000000000001"}
case = {**data_standard_case}
context["case"] = Case(case["case"])
context["case"].advice = [dummy_advice]
context["current_user"] = jane_doe
context["current_advice_level"] = ["user"]
html = render_to_string("case/tabs/user-advice.html", context)
soup = BeautifulSoup(html, "html.parser")
assert "app-advice__disabled-buttons" not in soup.find(id="button-combine-user-advice").parent["class"]
def test_advice_section_user_cannot_combine_advice_from_other_team(data_standard_case, rf, client):
context = {}
context["queue"] = {"id": "00000000-0000-0000-0000-000000000001"}
advice_1 = {**dummy_advice}
case = {**data_standard_case}
context["case"] = Case(case["case"])
context["case"].advice = [advice_1]
context["current_user"] = john_smith
context["current_advice_level"] = ["user"]
html = render_to_string("case/tabs/user-advice.html", context)
soup = BeautifulSoup(html, "html.parser")
assert "app-advice__disabled-buttons" in soup.find(id="button-combine-user-advice").parent["class"]
def test_advice_section_user_can_clear_advice_from_own_team(data_standard_case, rf, client):
context = {}
context["queue"] = {"id": "00000000-0000-0000-0000-000000000001"}
case = {**data_standard_case}
context["case"] = Case(case["case"])
team_advice = {**dummy_advice}
team_advice["level"] = "team"
context["case"].advice = [team_advice]
context["current_user"] = jane_doe
context["current_advice_level"] = ["user", "team"]
html = render_to_string("case/tabs/team-advice.html", context)
soup = BeautifulSoup(html, "html.parser")
assert "app-advice__disabled-buttons" not in soup.find(id="button-clear-team-advice").parent["class"]
def test_advice_section_user_cannot_clear_advice_from_other_team(data_standard_case, rf, client):
context = {}
context["queue"] = {"id": "00000000-0000-0000-0000-000000000001"}
case = {**data_standard_case}
context["case"] = Case(case["case"])
team_advice = {**dummy_advice}
team_advice["level"] = "team"
context["case"].advice = [team_advice]
context["current_user"] = john_smith
context["current_advice_level"] = ["user", "team"]
html = render_to_string("case/tabs/team-advice.html", context)
soup = BeautifulSoup(html, "html.parser")
assert "app-advice__disabled-buttons" in soup.find(id="button-clear-team-advice").parent["class"]
def test_advice_section_user_cannot_clear_if_no_team_advice(data_standard_case, rf, client):
context = {}
context["queue"] = {"id": "00000000-0000-0000-0000-000000000001"}
case = {**data_standard_case}
context["case"] = Case(case["case"])
advice = {**dummy_advice}
advice["level"] = "user"
context["case"].advice = [advice]
context["current_user"] = john_doe
context["current_advice_level"] = ["user"]
html = render_to_string("case/tabs/team-advice.html", context)
soup = BeautifulSoup(html, "html.parser")
assert not soup.find(id="button-clear-team-advice")
def test_advice_section_user_can_combine_team_advice_from_own_team(data_standard_case, rf, client):
context = {}
context["queue"] = {"id": "00000000-0000-0000-0000-000000000001"}
case = {**data_standard_case}
context["case"] = Case(case["case"])
team_advice = {**dummy_advice}
team_advice["level"] = "team"
context["case"].advice = [team_advice]
context["current_user"] = jane_doe
context["current_advice_level"] = ["user", "team"]
html = render_to_string("case/tabs/team-advice.html", context)
soup = BeautifulSoup(html, "html.parser")
assert "app-advice__disabled-buttons" not in soup.find(id="button-combine-team-advice").parent["class"]
def test_advice_section_user_cannot_combine_team_advice_if_no_advice_from_own_team(data_standard_case, rf, client):
context = {}
context["queue"] = {"id": "00000000-0000-0000-0000-000000000001"}
case = {**data_standard_case}
context["case"] = Case(case["case"])
team_advice = {**dummy_advice}
team_advice["level"] = "team"
context["case"].advice = [team_advice]
context["current_user"] = john_smith
context["current_advice_level"] = ["user", "team"]
html = render_to_string("case/tabs/team-advice.html", context)
soup = BeautifulSoup(html, "html.parser")
assert "app-advice__disabled-buttons" in soup.find(id="button-combine-team-advice").parent["class"]
def test_advice_section_user_can_clear_final_advice_from_own_team(data_standard_case, rf, client):
context = {}
context["queue"] = {"id": "00000000-0000-0000-0000-000000000001"}
case = {**data_standard_case}
context["case"] = Case(case["case"])
advice = {**dummy_advice}
advice["level"] = "final"
context["case"].advice = [advice]
context["current_user"] = jane_doe
context["current_advice_level"] = ["user", "team", "final"]
html = render_to_string("case/tabs/final-advice.html", context)
soup = BeautifulSoup(html, "html.parser")
assert soup.find(id="button-clear-final-advice")
def test_advice_section_user_cannot_clear_final_advice_from_other_team(data_standard_case, rf, client):
context = {}
context["queue"] = {"id": "00000000-0000-0000-0000-000000000001"}
case = {**data_standard_case}
context["case"] = Case(case["case"])
advice = {**dummy_advice}
advice["level"] = "final"
context["case"].advice = [advice]
context["current_user"] = john_doe
context["current_advice_level"] = ["user", "team", "final"]
html = render_to_string("case/tabs/final-advice.html", context)
soup = BeautifulSoup(html, "html.parser")
assert not soup.find(id="button-clear-final-advice")
def test_advice_section_user_cannot_clear_if_no_final_advice(data_standard_case, rf, client):
context = {}
context["queue"] = {"id": "00000000-0000-0000-0000-000000000001"}
case = {**data_standard_case}
context["case"] = Case(case["case"])
advice = {**dummy_advice}
advice["level"] = "team"
context["case"].advice = [advice]
context["current_user"] = john_doe
context["current_advice_level"] = ["user", "team"]
html = render_to_string("case/tabs/final-advice.html", context)
soup = BeautifulSoup(html, "html.parser")
assert not soup.find(id="button-clear-final-advice")
def test_advice_section_user_cannot_finalise(data_standard_case, rf, client):
context = {}
context["queue"] = {"id": "00000000-0000-0000-0000-000000000001"}
case = {**data_standard_case}
context["case"] = Case(case["case"])
advice = {**dummy_advice}
advice["level"] = "final"
context["case"].advice = [advice]
context["current_user"] = john_doe
context["current_advice_level"] = ["user", "team", "final"]
context["can_finalise"] = False
html = render_to_string("case/tabs/final-advice.html", context)
soup = BeautifulSoup(html, "html.parser")
assert "app-advice__disabled-buttons" in soup.find(id="button-finalise").parent["class"]
def test_advice_section_user_can_finalise(data_standard_case, rf, client):
context = {}
context["queue"] = {"id": "00000000-0000-0000-0000-000000000001"}
case = {**data_standard_case}
context["case"] = Case(case["case"])
advice = {**dummy_advice}
advice["level"] = "final"
context["case"].advice = [advice]
context["current_user"] = john_doe
context["current_advice_level"] = ["user", "team", "final"]
context["can_finalise"] = True
html = render_to_string("case/tabs/final-advice.html", context)
soup = BeautifulSoup(html, "html.parser")
assert "app-advice__disabled-buttons" not in soup.find(id="button-finalise").parent["class"]
def test_good_on_application_detail_unverified_product(
authorized_client,
mock_application_search,
queue_pk,
standard_case_pk,
good_on_application_pk,
data_search,
data_good_on_application,
data_standard_case,
):
# given the product is not yet reviewed
good_on_application = {**data_good_on_application}
good_on_application["is_good_controlled"] = None
# and the exporter told us the good is controlled
good_on_application["good"]["is_good_controlled"] = {"key": "True", "value": "Yes"}
context = {
"good_on_application": good_on_application,
"good_on_application_documents": [],
"case": Case(data_standard_case["case"]),
"other_cases": [],
"data": {},
"organisation_documents": {},
"queue": {"id": "00000000-0000-0000-0000-000000000001"},
}
# then we show the is_good_controlled value that the exporter originally gave
html = render_to_string("case/product-on-case.html", context)
soup = BeautifulSoup(html, "html.parser")
assert "Yes" in soup.find(id="is-licensable-value").text
def test_good_on_application_detail_verified_product(
authorized_client,
mock_application_search,
queue_pk,
standard_case_pk,
good_on_application_pk,
data_search,
data_good_on_application,
data_standard_case,
):
# given the product has been reviewed
good_on_application = {**data_good_on_application}
good_on_application["is_good_controlled"] = {"key": "False", "value": "No"}
# and the exporter told us the good is controlled
good_on_application["good"]["is_good_controlled"] = {"key": "True", "value": "Yes"}
context = {
"good_on_application": good_on_application,
"good_on_application_documents": [],
"case": Case(data_standard_case["case"]),
"other_cases": [],
"data": {},
"organisation_documents": {},
"queue": {"id": "00000000-0000-0000-0000-000000000001"},
}
# then we show the is_good_controlled value that the reviewer gave
html = render_to_string("case/product-on-case.html", context)
soup = BeautifulSoup(html, "html.parser")
assert "No" in soup.find(id="is-licensable-value").text
@pytest.mark.parametrize(
"quantity,unit",
[
(256, {"key": "NAR", "value": "items"}),
(1, {"key": "NAR", "value": "item"}),
(123.45, {"key": "GRM", "value": "Gram(s)"}),
(128.64, {"key": "KGM", "value": "Kilogram(s)"}),
(1150.32, {"key": "MTK", "value": "Square metre(s)"}),
(100.00, {"key": "MTR", "value": "Metre(s)"}),
(2500.25, {"key": "LTR", "value": "Litre(s)"}),
(123.45, {"key": "MTQ", "value": "Cubic metre(s)"}),
(99, {"key": "ITG", "value": "Intangible"}),
],
)
def test_good_on_application_display_quantity(data_good_on_application, quantity, unit):
good_on_application = {**data_good_on_application}
good_on_application["good"]["item_category"] = {"key": "group2_firearms", "value": "Firearms"}
good_on_application["quantity"] = quantity
good_on_application["unit"] = unit
context = {
"queue": {"id": "00000000-0000-0000-0000-000000000001"},
"case": {"id": "8fb76bed-fd45-4293-95b8-eda9468aa254", "goods": []},
"goods": [good_on_application],
}
expected_quantity = f"{quantity} {unit['value']}"
html = render_to_string("case/slices/goods.html", context)
soup = BeautifulSoup(html, "html.parser")
actual_quantity = soup.find(id="quantity-value").text
assert expected_quantity == actual_quantity
@pytest.mark.parametrize(
"agreed_to_foi,foi_reason", [("Yes", "internal details"), ("No", ""),],
)
def test_foi_details_on_summary_page(data_standard_case, agreed_to_foi, foi_reason):
case = data_standard_case["case"]
case["data"]["agreed_to_foi"] = agreed_to_foi
case["data"]["foi_reason"] = foi_reason
context = {"case": case}
html = render_to_string("case/slices/freedom-of-information.html", context)
soup = BeautifulSoup(html, "html.parser")
actual_foi_value = soup.find(id="agreed-to-foi-value").text
actual_foi_reason_value = soup.find(id="foi-reason-value").text
assert agreed_to_foi == actual_foi_value
assert foi_reason == actual_foi_reason_value
# === services/director-v2/src/simcore_service_director_v2/modules/dynamic_sidecar/__init__.py (colinRawlings/osparc-simcore, MIT) ===
from .module_setup import setup
# === tests/test_device_manager.py (freundTech/pykdeconnect, MIT) ===
from unittest.mock import AsyncMock, MagicMock
import pytest
from pykdeconnect.device_manager import DeviceManager
def test_device_manager_add_device():
storage = MagicMock()
device_manager = DeviceManager(storage)
device = MagicMock()
device.device_id = "foo"
device_manager.add_device(device)
assert device_manager.get_device("foo") == device
assert len(device_manager.get_devices()) == 1
assert device in device_manager.get_devices()
def test_device_manager_remove_device():
storage = MagicMock()
storage.load_device = MagicMock(return_value=None)
device_manager = DeviceManager(storage)
device = MagicMock()
device.device_id = "foo"
device_manager.add_device(device)
device_manager.remove_device(device)
assert device_manager.get_device("foo") is None
assert len(device_manager.get_devices()) == 0
def test_device_manager_load_from_storage():
device = MagicMock()
device.device_id = "foo"
storage = MagicMock()
storage.load_device = MagicMock(return_value=device)
device_manager = DeviceManager(storage)
assert device_manager.get_device("foo") == device
@pytest.mark.asyncio
async def test_device_manager_disconnect_all():
device = MagicMock()
device.close_connection = AsyncMock()
storage = MagicMock()
device_manager = DeviceManager(storage)
device_manager.add_device(device)
await device_manager.disconnect_all()
device.close_connection.assert_awaited_once()
@pytest.mark.asyncio
async def test_device_manager_pairing_accepted():
device = MagicMock()
device.device_id = "foo"
storage = MagicMock()
device_manager = DeviceManager(storage)
callback = AsyncMock(return_value=True)
device_manager.set_pairing_callback(callback)
device_manager.add_device(device)
await device_manager.on_pairing_request(device)
storage.store_device.assert_called_with(device)
callback.assert_awaited_with(device)
device.confirm_pair.assert_called_once()
@pytest.mark.asyncio
async def test_device_manager_pairing_rejected():
device = MagicMock()
device.device_id = "foo"
storage = MagicMock()
device_manager = DeviceManager(storage)
callback = AsyncMock(return_value=False)
device_manager.set_pairing_callback(callback)
device_manager.add_device(device)
await device_manager.on_pairing_request(device)
callback.assert_awaited_with(device)
device.reject_pair.assert_called_once()
@pytest.mark.asyncio
async def test_device_manager_pairing_no_callback():
device = MagicMock()
device.device_id = "foo"
storage = MagicMock()
device_manager = DeviceManager(storage)
device_manager.add_device(device)
await device_manager.on_pairing_request(device)
device.reject_pair.assert_called_once()
def test_device_manager_unpair():
device = MagicMock()
device.device_id = "foo"
storage = MagicMock()
device_manager = DeviceManager(storage)
device_manager.add_device(device)
device_manager.unpair(device)
storage.remove_device.assert_called_once_with(device)
device.set_unpaired.assert_called_once()
@pytest.mark.asyncio
async def test_device_manager_connected_callback():
device = MagicMock()
device.device_connected = AsyncMock()
device.device_id = "foo"
storage = MagicMock()
device_manager = DeviceManager(storage)
device_manager.add_device(device)
callback = AsyncMock()
device_manager.register_device_connected_callback(callback)
await device_manager.device_connected(device)
callback.assert_called_once()
callback.assert_awaited_once_with(device)
device.device_connected.assert_awaited_once()
@pytest.mark.asyncio
async def test_device_manager_disconnected_callback():
device = MagicMock()
device.device_disconnected = AsyncMock()
device.device_id = "foo"
storage = MagicMock()
device_manager = DeviceManager(storage)
device_manager.add_device(device)
callback = AsyncMock()
device_manager.register_device_disconnected_callback(callback)
await device_manager.device_disconnected(device)
callback.assert_awaited_once_with(device)
device.device_disconnected.assert_awaited_once()
@pytest.mark.asyncio
async def test_device_manager_remove_connected_callback():
device = MagicMock()
device.device_connected = AsyncMock()
device.device_id = "foo"
storage = MagicMock()
device_manager = DeviceManager(storage)
device_manager.add_device(device)
callback = AsyncMock()
device_manager.register_device_connected_callback(callback)
device_manager.unregister_device_connected_callback(callback)
await device_manager.device_connected(device)
callback.assert_not_awaited()
device.device_connected.assert_awaited_once()
@pytest.mark.asyncio
async def test_device_manager_remove_disconnected_callback():
device = MagicMock()
device.device_disconnected = AsyncMock()
device.device_id = "foo"
storage = MagicMock()
device_manager = DeviceManager(storage)
device_manager.add_device(device)
callback = AsyncMock()
device_manager.register_device_disconnected_callback(callback)
device_manager.unregister_device_disconnected_callback(callback)
await device_manager.device_disconnected(device)
callback.assert_not_awaited()
device.device_disconnected.assert_awaited_once()
# === calculate_anything/currency/providers/__init__.py (friday/ulauncher-albert-calculate-anything, MIT) ===
from calculate_anything.currency.providers.provider import ApiKeyCurrencyProvider, FreeCurrencyProvider
from calculate_anything.currency.providers.fixerio import FixerIOCurrencyProvider
from calculate_anything.currency.providers.european_central_bank import ECBProvider
from calculate_anything.currency.providers.mycurrencynet import MyCurrencyNetCurrencyProvider
from calculate_anything.currency.providers.coinbase import CoinbaseCurrencyProvider
from calculate_anything.currency.providers.combined import CombinedCurrencyProvider
from calculate_anything.currency.providers.factory import CurrencyProviderFactory
from calculate_anything.exceptions import CurrencyProviderException
# === blocksec2go/comm/__init__.py (DhruvKhemani/BlockchainSecurity2Go-Python-Library, MIT) ===
from blocksec2go.comm.pyscard import open_pyscard
from blocksec2go.comm.base import CardError

# === pyechonest/playlist.py (gleitz/automaticdj, MIT) ===
#!/usr/bin/env python
# encoding: utf-8
"""
Copyright (c) 2010 The Echo Nest. All rights reserved.
Created by Tyler Williams on 2010-04-25.
The Playlist module loosely covers http://developer.echonest.com/docs/v4/playlist.html
Refer to the official api documentation if you are unsure about something.
"""
import util
from proxies import PlaylistProxy
from song import Song
import catalog
class Playlist(PlaylistProxy):
"""
A Dynamic Playlist object
Attributes:
session_id: Playlist Session ID
song: The current song
Example:
>>> p = Playlist(type='artist-radio', artist=['ida maria', 'florence + the machine'])
>>> p
<Dynamic Playlist - 9c210205d4784144b4fa90770fa55d0b>
>>> p.song
<song - Later On>
>>> p.get_next_song()
<song - Overall>
>>>
"""
def __init__(self, session_id=None, type='artist', artist_pick='song_hotttnesss-desc', variety=.5, artist_id=None, artist=None, \
song_id=None, description=None, max_tempo=None, min_tempo=None, max_duration=None, \
min_duration=None, max_loudness=None, min_loudness=None, max_danceability=None, min_danceability=None, \
max_energy=None, min_energy=None, artist_max_familiarity=None, artist_min_familiarity=None, \
artist_max_hotttnesss=None, artist_min_hotttnesss=None, song_max_hotttnesss=None, song_min_hotttnesss=None, \
min_longitude=None, max_longitude=None, min_latitude=None, max_latitude=None, \
mode=None, key=None, buckets=[], sort=None, limit=False, dmca=False, audio=False, chain_xspf=False, \
seed_catalog=None, steer=None, source_catalog=None, steer_description=None):
"""
Args:
Kwargs:
type (str): a string representing the playlist type ('artist', 'artist-radio', ...)
artist_pick (str): How songs should be chosen for each artist
variety (float): A number between 0 and 1 specifying the variety of the playlist
artist_id (str): the artist_id
artist (str): the name of an artist
song_id (str): the song_id
description (str): A string describing the artist and song
max_tempo (float): The max tempo of song results
min_tempo (float): The min tempo of song results
max_duration (float): The max duration of song results
min_duration (float): The min duration of song results
max_loudness (float): The max loudness of song results
min_loudness (float): The min loudness of song results
artist_max_familiarity (float): A float specifying the max familiarity of artists to search for
artist_min_familiarity (float): A float specifying the min familiarity of artists to search for
artist_max_hotttnesss (float): A float specifying the max hotttnesss of artists to search for
            artist_min_hotttnesss (float): A float specifying the min hotttnesss of artists to search for
            song_max_hotttnesss (float): A float specifying the max hotttnesss of songs to search for
            song_min_hotttnesss (float): A float specifying the min hotttnesss of songs to search for
max_energy (float): The max energy of song results
min_energy (float): The min energy of song results
            max_danceability (float): The max danceability of song results
            min_danceability (float): The min danceability of song results
mode (int): 0 or 1 (minor or major)
key (int): 0-11 (c, c-sharp, d, e-flat, e, f, f-sharp, g, a-flat, a, b-flat, b)
max_latitude (float): A float specifying the max latitude of artists to search for
min_latitude (float): A float specifying the min latitude of artists to search for
max_longitude (float): A float specifying the max longitude of artists to search for
min_longitude (float): A float specifying the min longitude of artists to search for
sort (str): A string indicating an attribute and order for sorting the results
buckets (list): A list of strings specifying which buckets to retrieve
limit (bool): A boolean indicating whether or not to limit the results to one of the id spaces specified in buckets
seed_catalog (str or Catalog): A Catalog object or catalog id to use as a seed
source_catalog (str or Catalog): A Catalog object or catalog id
steer (str): A steering value to determine the target song attributes
steer_description (str): A steering value to determine the target song description term attributes
Returns:
A dynamic playlist object
"""
kwargs = {}
if type:
kwargs['type'] = type
if artist_pick:
kwargs['artist_pick'] = artist_pick
if variety is not None:
kwargs['variety'] = variety
if artist:
kwargs['artist'] = artist
if artist_id:
kwargs['artist_id'] = artist_id
if song_id:
kwargs['song_id'] = song_id
if description:
kwargs['description'] = description
if max_tempo is not None:
kwargs['max_tempo'] = max_tempo
if min_tempo is not None:
kwargs['min_tempo'] = min_tempo
if max_duration is not None:
kwargs['max_duration'] = max_duration
if min_duration is not None:
kwargs['min_duration'] = min_duration
if max_loudness is not None:
kwargs['max_loudness'] = max_loudness
if min_loudness is not None:
kwargs['min_loudness'] = min_loudness
if max_danceability is not None:
kwargs['max_danceability'] = max_danceability
if min_danceability is not None:
kwargs['min_danceability'] = min_danceability
if max_energy is not None:
kwargs['max_energy'] = max_energy
if min_energy is not None:
kwargs['min_energy'] = min_energy
if artist_max_familiarity is not None:
kwargs['artist_max_familiarity'] = artist_max_familiarity
if artist_min_familiarity is not None:
kwargs['artist_min_familiarity'] = artist_min_familiarity
if artist_max_hotttnesss is not None:
kwargs['artist_max_hotttnesss'] = artist_max_hotttnesss
if artist_min_hotttnesss is not None:
kwargs['artist_min_hotttnesss'] = artist_min_hotttnesss
if song_max_hotttnesss is not None:
kwargs['song_max_hotttnesss'] = song_max_hotttnesss
if song_min_hotttnesss is not None:
kwargs['song_min_hotttnesss'] = song_min_hotttnesss
if mode is not None:
kwargs['mode'] = mode
if key is not None:
kwargs['key'] = key
if max_latitude is not None:
kwargs['max_latitude'] = max_latitude
if min_latitude is not None:
kwargs['min_latitude'] = min_latitude
if max_longitude is not None:
kwargs['max_longitude'] = max_longitude
if min_longitude is not None:
kwargs['min_longitude'] = min_longitude
if sort:
kwargs['sort'] = sort
if buckets:
kwargs['bucket'] = buckets
if limit:
kwargs['limit'] = 'true'
if dmca:
kwargs['dmca'] = 'true'
if chain_xspf:
kwargs['chain_xspf'] = 'true'
if audio:
kwargs['audio'] = 'true'
if steer:
kwargs['steer'] = steer
if steer_description:
kwargs['steer_description'] = steer_description
if seed_catalog:
if isinstance(seed_catalog, catalog.Catalog):
kwargs['seed_catalog'] = seed_catalog.id
else:
kwargs['seed_catalog'] = seed_catalog
if source_catalog:
if isinstance(source_catalog, catalog.Catalog):
kwargs['source_catalog'] = source_catalog.id
else:
kwargs['source_catalog'] = source_catalog
super(Playlist, self).__init__(session_id, **kwargs)
def __repr__(self):
return "<Dynamic Playlist - %s>" % self.session_id.encode('utf-8')
# def __str__(self):
# return self.name.encode('utf-8')
def get_next_song(self, **kwargs):
"""Get the next song in the playlist
Args:
Kwargs:
Returns:
A song object
Example:
>>> p = playlist.Playlist(type='artist-radio', artist=['ida maria', 'florence + the machine'])
>>> p.get_next_song()
<song - She Said>
>>>
"""
response = self.get_attribute('dynamic', session_id=self.session_id, **kwargs)
self.cache['songs'] = response['songs']
# we need this to fix up all the dict keys to be strings, not unicode objects
fix = lambda x : dict((str(k), v) for (k,v) in x.iteritems())
if len(self.cache['songs']):
return Song(**fix(self.cache['songs'][0]))
else:
return None
def get_current_song(self):
"""Get the current song in the playlist
Args:
Kwargs:
Returns:
A song object
Example:
>>> p = playlist.Playlist(type='artist-radio', artist=['ida maria', 'florence + the machine'])
>>> p.song
<song - Later On>
>>> p.get_current_song()
<song - Later On>
>>>
"""
# we need this to fix up all the dict keys to be strings, not unicode objects
if not 'songs' in self.cache:
self.get_next_song()
if len(self.cache['songs']):
return Song(**util.fix(self.cache['songs'][0]))
else:
return None
song = property(get_current_song)
def session_info(self):
"""Get information about the playlist
Args:
Kwargs:
Returns:
A dict with diagnostic information about the currently running playlist
Example:
>>> p = playlist.Playlist(type='artist-radio', artist=['ida maria', 'florence + the machine'])
>>> p.info
{
u 'terms': [{
u 'frequency': 1.0,
u 'name': u 'rock'
},
{
u 'frequency': 0.99646542152360207,
u 'name': u 'pop'
},
{
u 'frequency': 0.90801905502131963,
u 'name': u 'indie'
},
{
u 'frequency': 0.90586455490260576,
u 'name': u 'indie rock'
},
{
u 'frequency': 0.8968907243373172,
u 'name': u 'alternative'
},
[...]
{
u 'frequency': 0.052197425644931635,
u 'name': u 'easy listening'
}],
u 'description': [],
u 'seed_songs': [],
u 'banned_artists': [],
u 'rules': [{
u 'rule': u "Don't put two copies of the same song in a playlist."
},
{
u 'rule': u 'Give preference to artists that are not already in the playlist'
}],
u 'session_id': u '9c1893e6ace04c8f9ce745f38b35ff95',
u 'seeds': [u 'ARI4XHX1187B9A1216', u 'ARNCHOP121318C56B8'],
u 'skipped_songs': [],
u 'banned_songs': [],
u 'playlist_type': u 'artist-radio',
u 'seed_catalogs': [],
u 'rated_songs': [],
u 'history': [{
u 'artist_id': u 'ARN6QMG1187FB56C8D',
u 'artist_name': u 'Laura Marling',
u 'id': u 'SOMSHNP12AB018513F',
u 'served_time': 1291412277.204201,
u 'title': u 'Hope In The Air'
}]
}
>>> p.session_info()
(same result as above)
>>>
"""
return self.get_attribute("session_info", session_id=self.session_id)
info = property(session_info)
def static(type='artist', artist_pick='song_hotttnesss-desc', variety=.5, artist_id=None, artist=None, \
song_id=None, description=None, results=15, max_tempo=None, min_tempo=None, max_duration=None, \
min_duration=None, max_loudness=None, min_loudness=None, max_danceability=None, min_danceability=None, \
max_energy=None, min_energy=None, artist_max_familiarity=None, artist_min_familiarity=None, \
artist_max_hotttnesss=None, artist_min_hotttnesss=None, song_max_hotttnesss=None, song_min_hotttnesss=None, \
min_longitude=None, max_longitude=None, min_latitude=None, max_latitude=None, \
mode=None, key=None, buckets=[], sort=None, limit=False, seed_catalog=None, source_catalog=None):
"""Get a static playlist
Args:
Kwargs:
type (str): a string representing the playlist type ('artist', 'artist-radio', ...)
artist_pick (str): How songs should be chosen for each artist
variety (float): A number between 0 and 1 specifying the variety of the playlist
artist_id (str): the artist_id
artist (str): the name of an artist
song_id (str): the song_id
description (str): A string describing the artist and song
results (int): An integer number of results to return
max_tempo (float): The max tempo of song results
min_tempo (float): The min tempo of song results
max_duration (float): The max duration of song results
min_duration (float): The min duration of song results
max_loudness (float): The max loudness of song results
min_loudness (float): The min loudness of song results
artist_max_familiarity (float): A float specifying the max familiarity of artists to search for
artist_min_familiarity (float): A float specifying the min familiarity of artists to search for
artist_max_hotttnesss (float): A float specifying the max hotttnesss of artists to search for
        artist_min_hotttnesss (float): A float specifying the min hotttnesss of artists to search for
        song_max_hotttnesss (float): A float specifying the max hotttnesss of songs to search for
        song_min_hotttnesss (float): A float specifying the min hotttnesss of songs to search for
max_energy (float): The max energy of song results
min_energy (float): The min energy of song results
        max_danceability (float): The max danceability of song results
        min_danceability (float): The min danceability of song results
mode (int): 0 or 1 (minor or major)
key (int): 0-11 (c, c-sharp, d, e-flat, e, f, f-sharp, g, a-flat, a, b-flat, b)
max_latitude (float): A float specifying the max latitude of artists to search for
min_latitude (float): A float specifying the min latitude of artists to search for
max_longitude (float): A float specifying the max longitude of artists to search for
min_longitude (float): A float specifying the min longitude of artists to search for
sort (str): A string indicating an attribute and order for sorting the results
buckets (list): A list of strings specifying which buckets to retrieve
limit (bool): A boolean indicating whether or not to limit the results to one of the id spaces specified in buckets
seed_catalog (str or Catalog): An Artist Catalog object or Artist Catalog id to use as a seed
source_catalog (str or Catalog): A Catalog object or catalog id
Returns:
A list of Song objects
Example:
>>> p = playlist.static(type='artist-radio', artist=['ida maria', 'florence + the machine'])
>>> p
[<song - Pickpocket>,
<song - Self-Taught Learner>,
<song - Maps>,
<song - Window Blues>,
<song - That's Not My Name>,
<song - My Lover Will Go>,
<song - Home Sweet Home>,
<song - Stella & God>,
<song - Don't You Want To Share The Guilt?>,
<song - Forget About It>,
<song - Dull Life>,
<song - This Trumpet In My Head>,
<song - Keep Your Head>,
<song - One More Time>,
<song - Knights in Mountain Fox Jackets>]
>>>
"""
kwargs = {}
if type:
kwargs['type'] = type
if artist_pick:
kwargs['artist_pick'] = artist_pick
if variety is not None:
kwargs['variety'] = variety
if artist:
kwargs['artist'] = artist
if artist_id:
kwargs['artist_id'] = artist_id
if song_id:
kwargs['song_id'] = song_id
if description:
kwargs['description'] = description
if results is not None:
kwargs['results'] = results
if max_tempo is not None:
kwargs['max_tempo'] = max_tempo
if min_tempo is not None:
kwargs['min_tempo'] = min_tempo
if max_duration is not None:
kwargs['max_duration'] = max_duration
if min_duration is not None:
kwargs['min_duration'] = min_duration
if max_loudness is not None:
kwargs['max_loudness'] = max_loudness
if min_loudness is not None:
kwargs['min_loudness'] = min_loudness
if max_danceability is not None:
kwargs['max_danceability'] = max_danceability
if min_danceability is not None:
kwargs['min_danceability'] = min_danceability
if max_energy is not None:
kwargs['max_energy'] = max_energy
if min_energy is not None:
kwargs['min_energy'] = min_energy
if artist_max_familiarity is not None:
kwargs['artist_max_familiarity'] = artist_max_familiarity
if artist_min_familiarity is not None:
kwargs['artist_min_familiarity'] = artist_min_familiarity
if artist_max_hotttnesss is not None:
kwargs['artist_max_hotttnesss'] = artist_max_hotttnesss
if artist_min_hotttnesss is not None:
kwargs['artist_min_hotttnesss'] = artist_min_hotttnesss
if song_max_hotttnesss is not None:
kwargs['song_max_hotttnesss'] = song_max_hotttnesss
if song_min_hotttnesss is not None:
kwargs['song_min_hotttnesss'] = song_min_hotttnesss
if mode is not None:
kwargs['mode'] = mode
if key is not None:
kwargs['key'] = key
if max_latitude is not None:
kwargs['max_latitude'] = max_latitude
if min_latitude is not None:
kwargs['min_latitude'] = min_latitude
if max_longitude is not None:
kwargs['max_longitude'] = max_longitude
if min_longitude is not None:
kwargs['min_longitude'] = min_longitude
if sort:
kwargs['sort'] = sort
if buckets:
kwargs['bucket'] = buckets
if limit:
kwargs['limit'] = 'true'
if seed_catalog:
if isinstance(seed_catalog, catalog.Catalog):
kwargs['seed_catalog'] = seed_catalog.id
else:
kwargs['seed_catalog'] = seed_catalog
if source_catalog:
if isinstance(source_catalog, catalog.Catalog):
kwargs['source_catalog'] = source_catalog.id
else:
kwargs['source_catalog'] = source_catalog
result = util.callm("%s/%s" % ('playlist', 'static'), kwargs)
return [Song(**util.fix(s_dict)) for s_dict in result['response']['songs']]
| 37.113971 | 133 | 0.594354 | 2,475 | 20,190 | 4.689697 | 0.116364 | 0.020246 | 0.036444 | 0.060739 | 0.775653 | 0.769536 | 0.767037 | 0.759025 | 0.753511 | 0.746446 | 0 | 0.016108 | 0.320456 | 20,190 | 543 | 134 | 37.18232 | 0.829883 | 0.450421 | 0 | 0.796117 | 0 | 0 | 0.114578 | 0.018315 | 0 | 0 | 0 | 0 | 0 | 1 | 0.029126 | false | 0 | 0.019417 | 0.004854 | 0.097087 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
# File: chaospy/quadrature/chebyshev.py (chaospy, MIT license)
"""Chebyshev-Gauss quadrature rule of the first kind."""
import numpy
import chaospy
from .hypercube import hypercube_quadrature
def chebyshev_1(order, lower=-1, upper=1, physicist=False):
    r"""
    Chebyshev-Gauss quadrature rule of the first kind.

    Compute the sample points and weights for Chebyshev-Gauss quadrature. The
    sample points are the roots of the nth degree Chebyshev polynomial. These
    sample points and weights correctly integrate polynomials of degree
    :math:`2N-1` or less.

    Gaussian quadrature comes in two variants: physicist and probabilist. For
    Chebyshev-Gauss of the first kind, physicist means a weight function
    :math:`1/\sqrt{1-x^2}` and weights that sum to :math:`1/2`, and probabilist
    means a weight function :math:`1/\sqrt{x (1-x)}` and weights that sum to 1.

    Args:
        order (int):
            The quadrature order.
        lower (float):
            Lower bound for the integration interval.
        upper (float):
            Upper bound for the integration interval.
        physicist (bool):
            Use physicist weights instead of probabilist.

    Returns:
        abscissas (numpy.ndarray):
            The ``order+1`` quadrature points for where to evaluate the model
            function with.
        weights (numpy.ndarray):
            The quadrature weights associated with each abscissa.

    Examples:
        >>> abscissas, weights = chaospy.quadrature.chebyshev_1(3)
        >>> abscissas
        array([[-0.92387953, -0.38268343,  0.38268343,  0.92387953]])
        >>> weights
        array([0.25, 0.25, 0.25, 0.25])

    See also:
        :func:`chaospy.quadrature.chebyshev_2`
        :func:`chaospy.quadrature.gaussian`
    """
    order = int(order)
    coefficients = chaospy.construct_recurrence_coefficients(
        order=order, dist=chaospy.Beta(0.5, 0.5, lower, upper))
    [abscissas], [weights] = chaospy.coefficients_to_quadrature(coefficients)
    weights *= 0.5 if physicist else 1
    return abscissas[numpy.newaxis], weights

def chebyshev_2(order, lower=-1, upper=1, physicist=False):
    r"""
    Chebyshev-Gauss quadrature rule of the second kind.

    Compute the sample points and weights for Chebyshev-Gauss quadrature. The
    sample points are the roots of the nth degree Chebyshev polynomial. These
    sample points and weights correctly integrate polynomials of degree
    :math:`2N-1` or less.

    Gaussian quadrature comes in two variants: physicist and probabilist. For
    Chebyshev-Gauss of the second kind, physicist means a weight function
    :math:`\sqrt{1-x^2}` and weights that sum to :math:`2`, and probabilist
    means a weight function :math:`\sqrt{x (1-x)}` and weights that sum to 1.

    Args:
        order (int):
            The quadrature order.
        lower (float):
            Lower bound for the integration interval.
        upper (float):
            Upper bound for the integration interval.
        physicist (bool):
            Use physicist weights instead of probabilist.

    Returns:
        abscissas (numpy.ndarray):
            The ``order+1`` quadrature points for where to evaluate the model
            function with.
        weights (numpy.ndarray):
            The quadrature weights associated with each abscissa.

    Examples:
        >>> abscissas, weights = chaospy.quadrature.chebyshev_2(3)
        >>> abscissas
        array([[-0.80901699, -0.30901699,  0.30901699,  0.80901699]])
        >>> weights
        array([0.1381966, 0.3618034, 0.3618034, 0.1381966])

    See also:
        :func:`chaospy.quadrature.chebyshev_1`
        :func:`chaospy.quadrature.gaussian`
    """
    order = int(order)
    coefficients = chaospy.construct_recurrence_coefficients(
        order=order, dist=chaospy.Beta(1.5, 1.5, lower, upper))
    [abscissas], [weights] = chaospy.coefficients_to_quadrature(coefficients)
    weights *= 2 if physicist else 1
    return abscissas[numpy.newaxis], weights
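The doctest values above can be cross-checked against the closed-form Chebyshev-Gauss rules, without going through chaospy's recurrence-coefficient machinery. A sketch using the textbook formulas in probabilist normalization (node ordering may differ from chaospy's output; these helper names are illustrative, not part of chaospy):

```python
import math


def chebyshev_1_reference(n):
    """Closed-form first-kind rule: nodes cos((2k-1)pi/(2n)), weights 1/n."""
    nodes = [math.cos((2 * k - 1) * math.pi / (2 * n)) for k in range(1, n + 1)]
    weights = [1.0 / n] * n
    return nodes, weights


def chebyshev_2_reference(n):
    """Closed-form second-kind rule: nodes cos(k*pi/(n+1)),
    weights 2*sin^2(k*pi/(n+1))/(n+1)."""
    nodes = [math.cos(k * math.pi / (n + 1)) for k in range(1, n + 1)]
    weights = [2.0 * math.sin(k * math.pi / (n + 1)) ** 2 / (n + 1)
               for k in range(1, n + 1)]
    return nodes, weights


# Four points, as in the doctests above (order=3 -> order+1 points).
nodes1, weights1 = chebyshev_1_reference(4)
nodes2, weights2 = chebyshev_2_reference(4)

# The first-kind rule is exact for polynomials up to degree 2N-1 = 7,
# e.g. E[x^2] = 1/2 under the probabilist weight on [-1, 1].
integral = sum(w * x ** 2 for x, w in zip(nodes1, weights1))
```

Both rules reproduce the abscissas and weights shown in the doctests (up to ordering), which makes them a cheap sanity check for the recurrence-based construction.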
# File: sites_microsoft_auth/apps.py (django_sites_microsoft_auth, MIT license)
from django.apps import AppConfig
class MicrosoftAuthConfig(AppConfig):
    name = "sites_microsoft_auth"
    verbose_name = "Microsoft Auth"

    def ready(self):
        import sites_microsoft_auth.signals
# File: yanapy/__init__.py (INTERPRETATOR, MIT license)
from .interpretators import baseinterpretator
# File: in/sample/sample3.py (pycode_similar, MIT license)
class Queue:
    def __init__(self):
        pass

    def enqueue(self, item):
        pass

    def front(self):
        pass

    def dequeue(self):
        pass

    def isEmpty(self):
        pass

    def __str__(self):
        pass
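The skeleton above only fixes the interface. A minimal working version, assuming FIFO semantics and backed by `collections.deque` (this `DequeQueue` class is illustrative, not part of the original sample):

```python
from collections import deque


class DequeQueue:
    """One possible implementation of the Queue interface sketched above."""

    def __init__(self):
        self._items = deque()  # deque gives O(1) append and popleft

    def enqueue(self, item):
        self._items.append(item)

    def front(self):
        return self._items[0]  # raises IndexError when empty

    def dequeue(self):
        return self._items.popleft()  # raises IndexError when empty

    def isEmpty(self):
        return len(self._items) == 0

    def __str__(self):
        return str(list(self._items))
```

A plain `list` would also work, but `list.pop(0)` is O(n) per dequeue, which is why `deque` is the usual choice.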
# File: dispike/followup/__init__.py (dispike, MIT license)
from .main import FollowUpMessages
# File: libshorttext/converter/stemmer/test_porter.py (LibShortText-for-Windows, BSD-3-Clause license)
import porter
print(porter.stem("unexpected"))
# File: prescription_generator/helper/__init__.py (Automation-scripts, MIT license)
# from .pdf_operations import save_pdf
# from .speech import speech_rec_for_windows, speech_rec_for_linux
# File: masonite/contrib/cloudinary/providers/__init__.py (masonite-cloudinary-driver, MIT license)
from .CloudinaryProvider import CloudinaryProvider
# File: test/test_make_index.py (QUANTAXIS, MIT license)
import pymongo

# collection = pymongo.MongoClient().quantaxis.stock_day
# collection.ensure_index('code')
collection = pymongo.MongoClient().quantaxis.backtest_history
# Note: ensure_index() is deprecated in modern PyMongo; create_index() is the replacement.
collection.ensure_index('cookie')
# File: eegdrive/ingestion/__init__.py (eegdrive, Apache-2.0 license)
from .eeg import EEG
from .episode_dataset import EpisodeDataset
from .ingest import ingest_session
from .transforms import HighPass, RemoveBeginning, RemoveLineNoise, Standardize
# File: game/libs/IA/__init__.py (codingGame, MIT license)
from bot import *
# File: core/joblib/__init__.py (GFPGAN, BSD-3-Clause license)
from .SubprocessorBase import Subprocessor
from .ThisThreadGenerator import ThisThreadGenerator
from .SubprocessGenerator import SubprocessGenerator
from .MPFunc import MPFunc
from .MPClassFuncOnDemand import MPClassFuncOnDemand
# File: worms/vis/__init__.py (worms, Apache-2.0 license)
from .vis_pymol import *
from .plot import *
# File: tests/api/test_repo_history_limit.py (seahub, Apache-2.0 license)
"""seahub/api2/views.py::Repo api tests.
"""
import json
from django.core.urlresolvers import reverse
from constance import config
from seahub.test_utils import BaseTestCase
class RepoTest(BaseTestCase):
    def setUp(self):
        self.user_repo_id = self.repo.id

    def tearDown(self):
        self.remove_repo()
        self.clear_cache()

    def test_can_get_history_limit(self):
        self.login_as(self.user)

        resp = self.client.get(reverse("api2-repo-history-limit", args=[self.user_repo_id]))
        json_resp = json.loads(resp.content)

        assert json_resp['keep_days'] == -1

    def test_can_get_history_limit_if_setting_not_enabled(self):
        self.login_as(self.user)

        config.ENABLE_REPO_HISTORY_SETTING = False

        resp = self.client.get(reverse("api2-repo-history-limit", args=[self.user_repo_id]))
        json_resp = json.loads(resp.content)

        assert json_resp['keep_days'] == -1

    def test_can_set_history_limit(self):
        self.login_as(self.user)

        url = reverse("api2-repo-history-limit", args=[self.user_repo_id])

        days = 0
        data = 'keep_days=%s' % days
        resp = self.client.put(url, data, 'application/x-www-form-urlencoded')
        json_resp = json.loads(resp.content)
        assert json_resp['keep_days'] == days

        days = 6
        data = 'keep_days=%s' % days
        resp = self.client.put(url, data, 'application/x-www-form-urlencoded')
        json_resp = json.loads(resp.content)
        assert json_resp['keep_days'] == days

        days = -1
        data = 'keep_days=%s' % days
        resp = self.client.put(url, data, 'application/x-www-form-urlencoded')
        json_resp = json.loads(resp.content)
        assert json_resp['keep_days'] == days

        days = -7
        data = 'keep_days=%s' % days
        resp = self.client.put(url, data, 'application/x-www-form-urlencoded')
        json_resp = json.loads(resp.content)
        assert json_resp['keep_days'] == -1

    def test_can_not_get_if_not_repo_owner(self):
        self.login_as(self.admin)

        resp = self.client.get(reverse("api2-repo-history-limit", args=[self.user_repo_id]))

        self.assertEqual(403, resp.status_code)

    def test_can_not_set_if_not_repo_owner(self):
        self.login_as(self.admin)

        url = reverse("api2-repo-history-limit", args=[self.user_repo_id])
        data = 'keep_days=%s' % 6
        resp = self.client.put(url, data, 'application/x-www-form-urlencoded')

        self.assertEqual(403, resp.status_code)

    def test_can_not_set_if_not_invalid_arg(self):
        self.login_as(self.user)

        url = reverse("api2-repo-history-limit", args=[self.user_repo_id])
        data = 'limit_ays=%s' % 6
        resp = self.client.put(url, data, 'application/x-www-form-urlencoded')
        self.assertEqual(400, resp.status_code)

        url = reverse("api2-repo-history-limit", args=[self.user_repo_id])
        data = 'keep_days=%s' % 'invalid-arg'
        resp = self.client.put(url, data, 'application/x-www-form-urlencoded')
        self.assertEqual(400, resp.status_code)

    def test_can_not_set_if_setting_not_enabled(self):
        self.login_as(self.user)

        config.ENABLE_REPO_HISTORY_SETTING = False

        url = reverse("api2-repo-history-limit", args=[self.user_repo_id])
        data = 'keep_days=%s' % 6
        resp = self.client.put(url, data, 'application/x-www-form-urlencoded')

        self.assertEqual(403, resp.status_code)
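Taken together, the PUT cases above pin down the `keep_days` contract: non-negative integers are stored as-is, any negative value collapses to -1 (unlimited history), and a non-integer payload is rejected with HTTP 400. A standalone sketch of that validation logic (`normalize_keep_days` is an assumption for illustration, not seahub's actual implementation):

```python
def normalize_keep_days(raw):
    """Validate a keep_days form value the way the tests above imply.

    Illustrative sketch only: a non-integer raises ValueError (the server
    would answer HTTP 400), and any negative value collapses to -1,
    meaning "keep history forever".
    """
    try:
        days = int(raw)
    except (TypeError, ValueError):
        raise ValueError("keep_days must be an integer")
    return days if days >= 0 else -1
```

This mirrors the asserted behavior: `'6'` stays 6, `'0'` stays 0, `'-7'` becomes -1, and `'invalid-arg'` is rejected.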
63b6637e25111984205588c8810f37bfa2ec37d3 | 29 | py | Python | algo/apg/func.py | xlnwel/grl | 7d42bb2e78bc3e7b7c3ebbcf356a4d1cf12abebf | [
"Apache-2.0"
] | 5 | 2021-09-04T14:50:39.000Z | 2022-03-13T09:53:09.000Z | algo/apg/func.py | xlnwel/d2rl | 7d42bb2e78bc3e7b7c3ebbcf356a4d1cf12abebf | [
"Apache-2.0"
] | null | null | null | algo/apg/func.py | xlnwel/d2rl | 7d42bb2e78bc3e7b7c3ebbcf356a4d1cf12abebf | [
"Apache-2.0"
] | 2 | 2022-01-25T09:32:01.000Z | 2022-03-13T09:53:14.000Z | from algo.seed.func import *
| 14.5 | 28 | 0.758621 | 5 | 29 | 4.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137931 | 29 | 1 | 29 | 29 | 0.88 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
893ced94a2d3a42d9b920906bebb2367e0d090ac | 48 | py | Python | app/data/enum/__init__.py | lokaimoma/Bugza | 93ffe344cb0be7dc4c45965f52798e02d05d320b | [
"Unlicense"
] | 2 | 2022-02-14T23:53:00.000Z | 2022-03-24T12:19:49.000Z | app/data/enum/__init__.py | lokaimoma/Bugza | 93ffe344cb0be7dc4c45965f52798e02d05d320b | [
"Unlicense"
] | null | null | null | app/data/enum/__init__.py | lokaimoma/Bugza | 93ffe344cb0be7dc4c45965f52798e02d05d320b | [
"Unlicense"
] | null | null | null | # Created by Kelvin_Clark on 1/30/2022, 9:54 PM
# File: marco/marco.py (my-hello-world, Apache-2.0 license)
print("Hello👋🏻 from space🌎")
# File: djhl/book/tests/test_apps.py (DjHL, BSD-3-Clause license)
from djhl.book.apps import BookConfig
def test_core():
    assert BookConfig.name == "djhl.book"
# File: SLpackage/private/pacbio/pythonpkgs/pbcommand/lib/python2.7/site-packages/pbcommand/engine/__init__.py (6mASCOPE, BSD-3-Clause license)
from .runner import run_cmd, ExtCmdResult
# File: tests/test_ucb.py (mabwiser, Apache-2.0 license)
# -*- coding: utf-8 -*-
import datetime
import math
import numpy as np
import pandas as pd
from mabwiser.mab import LearningPolicy
from mabwiser.ucb import _UCB1
from tests.test_base import BaseTest
class UCBTest(BaseTest):
def test_alpha0(self):
arm, mab = self.predict(arms=[1, 2, 3],
decisions=[1, 1, 1, 2, 2, 2, 3, 3, 3],
rewards=[0, 0, 0, 0, 0, 0, 1, 1, 1],
learning_policy=LearningPolicy.UCB1(alpha=0),
seed=123456,
num_run=3,
is_predict=True)
self.assertEqual(len(arm), 3)
self.assertEqual(arm, [3, 3, 3])
def test_alpha0_expectations(self):
arm, mab = self.predict(arms=[1, 2, 3],
decisions=[1, 1, 1, 2, 2, 2, 3, 3, 3],
rewards=[0, 0, 0, 0, 0, 0, 1, 1, 1],
learning_policy=LearningPolicy.UCB1(alpha=0),
seed=123456,
num_run=1,
is_predict=False)
self.assertDictEqual(arm, {1: 0.0, 2: 0.0, 3: 1.0})
def test_alpha1(self):
arm, mab = self.predict(arms=[1, 2, 3],
decisions=[1, 1, 1, 2, 2, 3, 3, 3, 3, 3],
rewards=[0, 0, 1, 0, 0, 0, 0, 1, 1, 1],
learning_policy=LearningPolicy.UCB1(alpha=1),
seed=123456,
num_run=3,
is_predict=True)
self.assertEqual(len(arm), 3)
self.assertEqual(arm, [1, 1, 1])
def test_alpha1_expectations(self):
arm, mab = self.predict(arms=[1, 2, 3],
decisions=[1, 1, 1, 2, 2, 3, 3, 3, 3, 3],
rewards=[0, 0, 1, 0, 0, 0, 0, 1, 1, 1],
learning_policy=LearningPolicy.UCB1(alpha=1),
seed=123456,
num_run=1,
is_predict=False)
self.assertDictEqual(arm, {1: 1.5723073962832794, 2: 1.5174271293851465, 3: 1.5597051824376162})
def test_np(self):
arm, mab = self.predict(arms=[1, 2, 3],
decisions=np.asarray([1, 1, 1, 2, 2, 3, 3, 3, 3, 3]),
rewards=np.asarray([0, 0, 1, 0, 0, 0, 0, 1, 1, 1]),
learning_policy=LearningPolicy.UCB1(alpha=1),
seed=123456,
num_run=3,
is_predict=True)
self.assertEqual(len(arm), 3)
self.assertEqual(arm, [1, 1, 1])
def test_df(self):
df = pd.DataFrame({'decisions': [1, 1, 1, 2, 2, 3, 3, 3, 3, 3], 'rewards': [0, 0, 1, 0, 0, 0, 0, 1, 1, 1]})
arm, mab = self.predict(arms=[1, 2, 3],
decisions=df['decisions'],
rewards=df['rewards'],
learning_policy=LearningPolicy.UCB1(alpha=1),
seed=123456,
num_run=3,
is_predict=True)
self.assertEqual(len(arm), 3)
self.assertEqual(arm, [1, 1, 1])
def test_df_list(self):
df = pd.DataFrame({'decisions': [1, 1, 1, 2, 2, 3, 3, 3, 3, 3], 'rewards': [0, 0, 1, 0, 0, 0, 0, 1, 1, 1]})
arm, mab = self.predict(arms=[1, 2, 3],
decisions=df['decisions'],
rewards=[0, 0, 1, 0, 0, 0, 0, 1, 1, 1],
learning_policy=LearningPolicy.UCB1(alpha=1),
seed=123456,
num_run=3,
is_predict=True)
self.assertEqual(len(arm), 3)
self.assertEqual(arm, [1, 1, 1])
def test_ucb_t1(self):
arm, mab = self.predict(arms=[1, 2, 3],
decisions=[1, 1, 1, 3, 2, 2, 3, 1, 3],
rewards=[0, 1, 1, 0, 1, 0, 1, 1, 1],
learning_policy=LearningPolicy.UCB1(alpha=0.24),
seed=123456,
num_run=4,
is_predict=True)
self.assertEqual(len(arm), 4)
self.assertEqual(arm, [1, 1, 1, 1])
def test_ucb_t2(self):
arm, mab = self.predict(arms=[1, 2, 3],
decisions=[1, 1, 1, 3, 2, 2, 3, 1, 3],
rewards=[0, 1, 1, 0, 1, 0, 1, 1, 1],
learning_policy=LearningPolicy.UCB1(alpha=1.5),
seed=71,
num_run=4,
is_predict=True)
self.assertEqual(len(arm), 4)
self.assertEqual(arm, [2, 2, 2, 2])
def test_ucb_t3(self):
arm, mab = self.predict(arms=[1, 2, 4],
decisions=[1, 1, 4, 4, 2, 2, 1, 1, 4, 2, 1, 4, 1, 2, 4],
                                rewards=[7, 9, 10, 20, 2, 5, 8, 15, 17, 11, 0, 5, 2, 9, 3],
                                learning_policy=LearningPolicy.UCB1(alpha=1.25),
                                seed=123456,
                                num_run=4,
                                is_predict=True)

        self.assertEqual(len(arm), 4)
        self.assertEqual(arm, [4, 4, 4, 4])

    def test_ucb_t4(self):
        arm, mab = self.predict(arms=[1, 2, 4],
                                decisions=[1, 1, 4, 4, 2, 2, 1, 1, 4, 2, 1, 4, 1, 2, 4],
                                rewards=[7, 9, 10, 20, 2, 5, 8, 15, 17, 11, 0, 5, 2, 9, 3],
                                learning_policy=LearningPolicy.UCB1(alpha=2),
                                seed=23,
                                num_run=4,
                                is_predict=True)

        self.assertEqual(len(arm), 4)
        self.assertEqual(arm, [4, 4, 4, 4])

    def test_ucb_t5(self):
        arm, mab = self.predict(arms=['one', 'two', 'three'],
                                decisions=['one', 'one', 'one', 'three', 'two', 'two', 'three', 'one', 'three', 'two'],
                                rewards=[1, 0, 1, 0, 1, 0, 1, 1, 1, 0],
                                learning_policy=LearningPolicy.UCB1(alpha=1),
                                seed=23,
                                num_run=4,
                                is_predict=True)

        self.assertEqual(len(arm), 4)
        self.assertEqual(arm, ['three', 'three', 'three', 'three'])

    def test_ucb_t6(self):
        arm, mab = self.predict(arms=['one', 'two', 'three'],
                                decisions=['one', 'one', 'one', 'three', 'two', 'two', 'three', 'one', 'three', 'two'],
                                rewards=[2, 7, 7, 9, 1, 3, 1, 2, 6, 4],
                                learning_policy=LearningPolicy.UCB1(alpha=1.25),
                                seed=17,
                                num_run=4,
                                is_predict=True)

        self.assertEqual(len(arm), 4)
        self.assertEqual(arm, ['three', 'three', 'three', 'three'])

    def test_ucb_t7(self):
        arm, mab = self.predict(arms=['a', 'b', 'c'],
                                decisions=['a', 'b', 'c', 'a', 'b', 'c', 'a', 'b', 'c', 'a'],
                                rewards=[-1.25, 12, 0.7, 10, 12, 9.2, -1, -10, 4, 0],
                                learning_policy=LearningPolicy.UCB1(alpha=1.25),
                                seed=123456,
                                num_run=4,
                                is_predict=True)

        self.assertEqual(len(arm), 4)
        self.assertEqual(arm, ['b', 'b', 'b', 'b'])

    def test_ucb_t8(self):
        arm, mab = self.predict(arms=['a', 'b', 'c'],
                                decisions=['a', 'b', 'c', 'a', 'b', 'c', 'a', 'b', 'c', 'a'],
                                rewards=[-1.25, 0.7, 12, 10, 12, 9.2, -1, -10, 4, 0],
                                learning_policy=LearningPolicy.UCB1(alpha=0.5),
                                seed=9,
                                num_run=4,
                                is_predict=True)

        self.assertEqual(len(arm), 4)
        self.assertEqual(arm, ['c', 'c', 'c', 'c'])

    def test_ucb_t9(self):

        # Dates to test
        a = datetime.datetime(2018, 1, 1)
        b = datetime.datetime(2017, 7, 31)
        c = datetime.datetime(2018, 9, 15)

        arm, mab = self.predict(arms=[a, b, c],
                                decisions=[a, b, c, a, b, c, a, b, c, a],
                                rewards=[1.25, 0.7, 12, 10, 1.43, 0.2, -1, -10, 4, 0],
                                learning_policy=LearningPolicy.UCB1(alpha=0.25),
                                seed=123456,
                                num_run=4,
                                is_predict=True)

        self.assertEqual(len(arm), 4)
        self.assertEqual(arm, [c, c, c, c])

    def test_ucb_t10(self):

        # Dates to test
        a = datetime.datetime(2018, 1, 1)
        b = datetime.datetime(2017, 7, 31)
        c = datetime.datetime(2018, 9, 15)

        arm, mab = self.predict(arms=[a, b, c],
                                decisions=[a, b, c, a, b, c, a, b, c, a, b, b],
                                rewards=[7, 12, 1, -10, 5, 1, 2, 9, 3, 3, 6, 7],
                                learning_policy=LearningPolicy.UCB1(alpha=1),
                                seed=7,
                                num_run=4,
                                is_predict=True)

        self.assertEqual(len(arm), 4)
        self.assertEqual(arm, [b, b, b, b])
    def test_unused_arm(self):
        arm, mab = self.predict(arms=[1, 2, 3, 4],
                                decisions=[1, 1, 1, 2, 2, 3, 3, 3, 3, 3],
                                rewards=[0, 0, 1, 0, 0, 0, 0, 1, 1, 1],
                                learning_policy=LearningPolicy.UCB1(alpha=1),
                                seed=123456,
                                num_run=1,
                                is_predict=True)

        self.assertEqual(len(mab._imp.arm_to_expectation), 4)
    def test_fit_twice(self):
        arm, mab = self.predict(arms=[1, 2, 3, 4],
                                decisions=[1, 1, 1, 2, 2, 3, 3, 3, 3, 3],
                                rewards=[0, 0, 1, 0, 0, 0, 0, 1, 1, 1],
                                learning_policy=LearningPolicy.UCB1(alpha=1),
                                seed=123456,
                                num_run=1,
                                is_predict=True)

        self.assertEqual(len(mab._imp.arm_to_expectation), 4)

        mean = mab._imp.arm_to_mean[1]
        ci = mab._imp.arm_to_expectation[1]
        self.assertAlmostEqual(0.3333333333333333, mean)
        self.assertAlmostEqual(1.5723073962832794, ci)

        mean1 = mab._imp.arm_to_mean[4]
        ci1 = mab._imp.arm_to_expectation[4]
        self.assertEqual(mean1, 0)
        self.assertEqual(ci1, 0)

        # Fit again
        decisions2 = [1, 3, 4]
        rewards2 = [0, 1, 1]
        mab.fit(decisions2, rewards2)

        mean2 = mab._imp.arm_to_mean[1]
        ci2 = mab._imp.arm_to_expectation[1]
        mean3 = mab._imp.arm_to_mean[4]
        ci3 = mab._imp.arm_to_expectation[4]
        self.assertEqual(mean2, 0)
        self.assertAlmostEqual(1.4823038073675112, ci2)
        self.assertEqual(mean3, 1)
        self.assertAlmostEqual(2.4823038073675114, ci3)
    def test_partial_fit(self):
        arm, mab = self.predict(arms=[1, 2, 3, 4],
                                decisions=[1, 1, 1, 2, 2, 3, 3, 3, 3, 3],
                                rewards=[0, 0, 1, 0, 0, 0, 0, 1, 1, 1],
                                learning_policy=LearningPolicy.UCB1(alpha=1),
                                seed=123456,
                                num_run=1,
                                is_predict=True)

        self.assertEqual(len(mab._imp.arm_to_expectation), 4)

        mean = mab._imp.arm_to_mean[1]
        ci = mab._imp.arm_to_expectation[1]
        self.assertAlmostEqual(0.3333333333333333, mean)
        self.assertAlmostEqual(1.5723073962832794, ci)

        mean1 = mab._imp.arm_to_mean[4]
        ci1 = mab._imp.arm_to_expectation[4]
        self.assertEqual(mean1, 0)
        self.assertEqual(ci1, 0)

        # Partial fit with additional data
        decisions2 = [1, 3, 4]
        rewards2 = [0, 1, 1]
        mab.partial_fit(decisions2, rewards2)

        mean2 = mab._imp.arm_to_mean[1]
        ci2 = mab._imp.arm_to_expectation[1]
        mean3 = mab._imp.arm_to_mean[4]
        ci3 = mab._imp.arm_to_expectation[4]
        self.assertEqual(mean2, 0.25)
        self.assertAlmostEqual(1.3824639856219572, ci2)
        self.assertEqual(mean3, 1)
        self.assertAlmostEqual(3.2649279712439143, ci3)
    def test_add_arm(self):
        arm, mab = self.predict(arms=[1, 2, 3],
                                decisions=[1, 2, 1, 1, 2],
                                rewards=[10, 4, 3, 5, 6],
                                learning_policy=LearningPolicy.UCB1(1.0),
                                seed=123456,
                                num_run=1,
                                is_predict=True)

        mab.add_arm(4)
        self.assertTrue(4 in mab.arms)
        self.assertTrue(4 in mab._imp.arms)
        self.assertTrue(mab._imp.arm_to_expectation[4] == 0)
        self.assertTrue(mab._imp.arm_to_mean[4] == 0)
    def test_confidence(self):

        # parameters
        mean = 20
        arm_count = 150
        total_count = 500

        alpha = 1
        cb = _UCB1._get_ucb(mean, alpha, total_count, arm_count)
        self.assertAlmostEqual(cb, 20.287856633260894)

        alpha = 0.25
        cb = _UCB1._get_ucb(mean, alpha, total_count, arm_count)
        self.assertAlmostEqual(cb, 20.07196415831522)

        alpha = 3.33
        cb = _UCB1._get_ucb(mean, alpha, total_count, arm_count)
        self.assertAlmostEqual(cb, 20.95856258875877)
| 38.888889 | 119 | 0.430523 | 1,702 | 14,350 | 3.519977 | 0.077556 | 0.025038 | 0.016024 | 0.059589 | 0.862961 | 0.855951 | 0.84193 | 0.812218 | 0.799533 | 0.799533 | 0 | 0.13478 | 0.437979 | 14,350 | 368 | 120 | 38.994565 | 0.60806 | 0.005575 | 0 | 0.692029 | 0 | 0 | 0.015775 | 0 | 0 | 0 | 0 | 0 | 0.213768 | 1 | 0.07971 | false | 0 | 0.025362 | 0 | 0.108696 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9830e5c17e9370946523382a45e82af7c72f926a | 45 | py | Python | PythonPackageTemplate/__init__.py | grobbles/python-package-template | 926569fe5a6caf9bfe177ec7fed68191db505a26 | [
"MIT"
] | null | null | null | PythonPackageTemplate/__init__.py | grobbles/python-package-template | 926569fe5a6caf9bfe177ec7fed68191db505a26 | [
"MIT"
] | null | null | null | PythonPackageTemplate/__init__.py | grobbles/python-package-template | 926569fe5a6caf9bfe177ec7fed68191db505a26 | [
"MIT"
] | null | null | null | from .Module import *
from .Main import Main
| 15 | 22 | 0.755556 | 7 | 45 | 4.857143 | 0.571429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.177778 | 45 | 2 | 23 | 22.5 | 0.918919 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
98a76443721c72eb035abacdaf1185ef2356beb3 | 30 | py | Python | samples/currencies/__init__.py | zoho/zohocrm-python-sdk-2.0 | 3a93eb3b57fed4e08f26bd5b311e101cb2995411 | [
"Apache-2.0"
] | null | null | null | samples/currencies/__init__.py | zoho/zohocrm-python-sdk-2.0 | 3a93eb3b57fed4e08f26bd5b311e101cb2995411 | [
"Apache-2.0"
] | null | null | null | samples/currencies/__init__.py | zoho/zohocrm-python-sdk-2.0 | 3a93eb3b57fed4e08f26bd5b311e101cb2995411 | [
"Apache-2.0"
] | null | null | null | from .currency import Currency | 30 | 30 | 0.866667 | 4 | 30 | 6.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 30 | 1 | 30 | 30 | 0.962963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7f376443fc9042ff567d1b84e02d11231c022912 | 209 | py | Python | human-feedback-api/human_feedback_api/templatetags/custom_tags.py | yangalexandery/rl-teacher | d7b01223df548bf7bd27d4ddec9f6e9c9dd0def4 | [
"MIT"
] | 463 | 2017-08-03T16:08:05.000Z | 2022-03-06T23:12:40.000Z | human-feedback-api/human_feedback_api/templatetags/custom_tags.py | yangalexandery/rl-teacher | d7b01223df548bf7bd27d4ddec9f6e9c9dd0def4 | [
"MIT"
] | 22 | 2017-08-03T16:59:24.000Z | 2020-12-21T01:08:26.000Z | human-feedback-api/human_feedback_api/templatetags/custom_tags.py | oguzserbetci/rl-teacher-atari | fd6c399921d347333d7c5b4b12c63f1a955cea5c | [
"MIT"
] | 86 | 2017-08-03T16:17:06.000Z | 2022-03-08T12:11:00.000Z | from django import template
register = template.Library()
@register.inclusion_tag('_comparison.html')
def _comparison(comparison, experiment):
return {'comparison': comparison, "experiment": experiment}
| 26.125 | 63 | 0.779904 | 21 | 209 | 7.619048 | 0.619048 | 0.25 | 0.375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 209 | 7 | 64 | 29.857143 | 0.855615 | 0 | 0 | 0 | 0 | 0 | 0.172249 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0.2 | 0.6 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
f680ea1f918e7a547eeb5e063c29263f8b5aab97 | 20,503 | py | Python | device_repo/device_repo_ice/Digitizer_ice.py | TerryGeng/device_repo | 972025b0d4f0f20b4df333d209059cda3cbd8a5d | [
"MIT"
] | 2 | 2020-12-21T12:57:45.000Z | 2021-08-30T07:28:10.000Z | device_repo/device_repo_ice/Digitizer_ice.py | TerryGeng/device_repo | 972025b0d4f0f20b4df333d209059cda3cbd8a5d | [
"MIT"
] | null | null | null | device_repo/device_repo_ice/Digitizer_ice.py | TerryGeng/device_repo | 972025b0d4f0f20b4df333d209059cda3cbd8a5d | [
"MIT"
] | 2 | 2021-03-31T14:30:46.000Z | 2021-08-30T07:26:12.000Z | # -*- coding: utf-8 -*-
#
# Copyright (c) ZeroC, Inc. All rights reserved.
#
#
# Ice version 3.7.4
#
# <auto-generated>
#
# Generated from file `Digitizer.ice'
#
# Warning: do not edit this file.
#
# </auto-generated>
#
from sys import version_info as _version_info_
import Ice, IcePy
from . import device_repo_ice
# Included module device_repo_ice
_M_device_repo_ice = Ice.openModule('device_repo.device_repo_ice')
# Start of module device_repo_ice
__name__ = 'device_repo.device_repo_ice'
_M_device_repo_ice._t_Digitizer = IcePy.defineValue('::device_repo_ice::Digitizer', Ice.Value, -1, (), False, True, None, ())
if 'DigitizerPrx' not in _M_device_repo_ice.__dict__:
_M_device_repo_ice.DigitizerPrx = Ice.createTempClass()
class DigitizerPrx(_M_device_repo_ice.DevicePrx):
def set_sample_number(self, number_of_samples, context=None):
return _M_device_repo_ice.Digitizer._op_set_sample_number.invoke(self, ((number_of_samples, ), context))
def set_sample_numberAsync(self, number_of_samples, context=None):
return _M_device_repo_ice.Digitizer._op_set_sample_number.invokeAsync(self, ((number_of_samples, ), context))
def begin_set_sample_number(self, number_of_samples, _response=None, _ex=None, _sent=None, context=None):
return _M_device_repo_ice.Digitizer._op_set_sample_number.begin(self, ((number_of_samples, ), _response, _ex, _sent, context))
def end_set_sample_number(self, _r):
return _M_device_repo_ice.Digitizer._op_set_sample_number.end(self, _r)
def set_input_range(self, channel, range, context=None):
return _M_device_repo_ice.Digitizer._op_set_input_range.invoke(self, ((channel, range), context))
def set_input_rangeAsync(self, channel, range, context=None):
return _M_device_repo_ice.Digitizer._op_set_input_range.invokeAsync(self, ((channel, range), context))
def begin_set_input_range(self, channel, range, _response=None, _ex=None, _sent=None, context=None):
return _M_device_repo_ice.Digitizer._op_set_input_range.begin(self, ((channel, range), _response, _ex, _sent, context))
def end_set_input_range(self, _r):
return _M_device_repo_ice.Digitizer._op_set_input_range.end(self, _r)
def set_repeats(self, repeats, context=None):
return _M_device_repo_ice.Digitizer._op_set_repeats.invoke(self, ((repeats, ), context))
def set_repeatsAsync(self, repeats, context=None):
return _M_device_repo_ice.Digitizer._op_set_repeats.invokeAsync(self, ((repeats, ), context))
def begin_set_repeats(self, repeats, _response=None, _ex=None, _sent=None, context=None):
return _M_device_repo_ice.Digitizer._op_set_repeats.begin(self, ((repeats, ), _response, _ex, _sent, context))
def end_set_repeats(self, _r):
return _M_device_repo_ice.Digitizer._op_set_repeats.end(self, _r)
def set_trigger_level(self, trigger_level, context=None):
return _M_device_repo_ice.Digitizer._op_set_trigger_level.invoke(self, ((trigger_level, ), context))
def set_trigger_levelAsync(self, trigger_level, context=None):
return _M_device_repo_ice.Digitizer._op_set_trigger_level.invokeAsync(self, ((trigger_level, ), context))
def begin_set_trigger_level(self, trigger_level, _response=None, _ex=None, _sent=None, context=None):
return _M_device_repo_ice.Digitizer._op_set_trigger_level.begin(self, ((trigger_level, ), _response, _ex, _sent, context))
def end_set_trigger_level(self, _r):
return _M_device_repo_ice.Digitizer._op_set_trigger_level.end(self, _r)
def set_trigger_delay(self, delay, context=None):
return _M_device_repo_ice.Digitizer._op_set_trigger_delay.invoke(self, ((delay, ), context))
def set_trigger_delayAsync(self, delay, context=None):
return _M_device_repo_ice.Digitizer._op_set_trigger_delay.invokeAsync(self, ((delay, ), context))
def begin_set_trigger_delay(self, delay, _response=None, _ex=None, _sent=None, context=None):
return _M_device_repo_ice.Digitizer._op_set_trigger_delay.begin(self, ((delay, ), _response, _ex, _sent, context))
def end_set_trigger_delay(self, _r):
return _M_device_repo_ice.Digitizer._op_set_trigger_delay.end(self, _r)
def set_trigger_timeout(self, timeout, context=None):
return _M_device_repo_ice.Digitizer._op_set_trigger_timeout.invoke(self, ((timeout, ), context))
def set_trigger_timeoutAsync(self, timeout, context=None):
return _M_device_repo_ice.Digitizer._op_set_trigger_timeout.invokeAsync(self, ((timeout, ), context))
def begin_set_trigger_timeout(self, timeout, _response=None, _ex=None, _sent=None, context=None):
return _M_device_repo_ice.Digitizer._op_set_trigger_timeout.begin(self, ((timeout, ), _response, _ex, _sent, context))
def end_set_trigger_timeout(self, _r):
return _M_device_repo_ice.Digitizer._op_set_trigger_timeout.end(self, _r)
def get_sample_rate(self, context=None):
return _M_device_repo_ice.Digitizer._op_get_sample_rate.invoke(self, ((), context))
def get_sample_rateAsync(self, context=None):
return _M_device_repo_ice.Digitizer._op_get_sample_rate.invokeAsync(self, ((), context))
def begin_get_sample_rate(self, _response=None, _ex=None, _sent=None, context=None):
return _M_device_repo_ice.Digitizer._op_get_sample_rate.begin(self, ((), _response, _ex, _sent, context))
def end_get_sample_rate(self, _r):
return _M_device_repo_ice.Digitizer._op_get_sample_rate.end(self, _r)
def get_sample_number(self, context=None):
return _M_device_repo_ice.Digitizer._op_get_sample_number.invoke(self, ((), context))
def get_sample_numberAsync(self, context=None):
return _M_device_repo_ice.Digitizer._op_get_sample_number.invokeAsync(self, ((), context))
def begin_get_sample_number(self, _response=None, _ex=None, _sent=None, context=None):
return _M_device_repo_ice.Digitizer._op_get_sample_number.begin(self, ((), _response, _ex, _sent, context))
def end_get_sample_number(self, _r):
return _M_device_repo_ice.Digitizer._op_get_sample_number.end(self, _r)
def get_input_range(self, channel, context=None):
return _M_device_repo_ice.Digitizer._op_get_input_range.invoke(self, ((channel, ), context))
def get_input_rangeAsync(self, channel, context=None):
return _M_device_repo_ice.Digitizer._op_get_input_range.invokeAsync(self, ((channel, ), context))
def begin_get_input_range(self, channel, _response=None, _ex=None, _sent=None, context=None):
return _M_device_repo_ice.Digitizer._op_get_input_range.begin(self, ((channel, ), _response, _ex, _sent, context))
def end_get_input_range(self, _r):
return _M_device_repo_ice.Digitizer._op_get_input_range.end(self, _r)
def get_repeats(self, context=None):
return _M_device_repo_ice.Digitizer._op_get_repeats.invoke(self, ((), context))
def get_repeatsAsync(self, context=None):
return _M_device_repo_ice.Digitizer._op_get_repeats.invokeAsync(self, ((), context))
def begin_get_repeats(self, _response=None, _ex=None, _sent=None, context=None):
return _M_device_repo_ice.Digitizer._op_get_repeats.begin(self, ((), _response, _ex, _sent, context))
def end_get_repeats(self, _r):
return _M_device_repo_ice.Digitizer._op_get_repeats.end(self, _r)
def get_trigger_level(self, context=None):
return _M_device_repo_ice.Digitizer._op_get_trigger_level.invoke(self, ((), context))
def get_trigger_levelAsync(self, context=None):
return _M_device_repo_ice.Digitizer._op_get_trigger_level.invokeAsync(self, ((), context))
def begin_get_trigger_level(self, _response=None, _ex=None, _sent=None, context=None):
return _M_device_repo_ice.Digitizer._op_get_trigger_level.begin(self, ((), _response, _ex, _sent, context))
def end_get_trigger_level(self, _r):
return _M_device_repo_ice.Digitizer._op_get_trigger_level.end(self, _r)
def get_trigger_delay(self, context=None):
return _M_device_repo_ice.Digitizer._op_get_trigger_delay.invoke(self, ((), context))
def get_trigger_delayAsync(self, context=None):
return _M_device_repo_ice.Digitizer._op_get_trigger_delay.invokeAsync(self, ((), context))
def begin_get_trigger_delay(self, _response=None, _ex=None, _sent=None, context=None):
return _M_device_repo_ice.Digitizer._op_get_trigger_delay.begin(self, ((), _response, _ex, _sent, context))
def end_get_trigger_delay(self, _r):
return _M_device_repo_ice.Digitizer._op_get_trigger_delay.end(self, _r)
def get_trigger_timeout(self, context=None):
return _M_device_repo_ice.Digitizer._op_get_trigger_timeout.invoke(self, ((), context))
def get_trigger_timeoutAsync(self, context=None):
return _M_device_repo_ice.Digitizer._op_get_trigger_timeout.invokeAsync(self, ((), context))
def begin_get_trigger_timeout(self, _response=None, _ex=None, _sent=None, context=None):
return _M_device_repo_ice.Digitizer._op_get_trigger_timeout.begin(self, ((), _response, _ex, _sent, context))
def end_get_trigger_timeout(self, _r):
return _M_device_repo_ice.Digitizer._op_get_trigger_timeout.end(self, _r)
def start_acquire(self, context=None):
return _M_device_repo_ice.Digitizer._op_start_acquire.invoke(self, ((), context))
def start_acquireAsync(self, context=None):
return _M_device_repo_ice.Digitizer._op_start_acquire.invokeAsync(self, ((), context))
def begin_start_acquire(self, _response=None, _ex=None, _sent=None, context=None):
return _M_device_repo_ice.Digitizer._op_start_acquire.begin(self, ((), _response, _ex, _sent, context))
def end_start_acquire(self, _r):
return _M_device_repo_ice.Digitizer._op_start_acquire.end(self, _r)
def acquire_and_fetch_average(self, context=None):
return _M_device_repo_ice.Digitizer._op_acquire_and_fetch_average.invoke(self, ((), context))
def acquire_and_fetch_averageAsync(self, context=None):
return _M_device_repo_ice.Digitizer._op_acquire_and_fetch_average.invokeAsync(self, ((), context))
def begin_acquire_and_fetch_average(self, _response=None, _ex=None, _sent=None, context=None):
return _M_device_repo_ice.Digitizer._op_acquire_and_fetch_average.begin(self, ((), _response, _ex, _sent, context))
def end_acquire_and_fetch_average(self, _r):
return _M_device_repo_ice.Digitizer._op_acquire_and_fetch_average.end(self, _r)
def fetch_average(self, context=None):
return _M_device_repo_ice.Digitizer._op_fetch_average.invoke(self, ((), context))
def fetch_averageAsync(self, context=None):
return _M_device_repo_ice.Digitizer._op_fetch_average.invokeAsync(self, ((), context))
def begin_fetch_average(self, _response=None, _ex=None, _sent=None, context=None):
return _M_device_repo_ice.Digitizer._op_fetch_average.begin(self, ((), _response, _ex, _sent, context))
def end_fetch_average(self, _r):
return _M_device_repo_ice.Digitizer._op_fetch_average.end(self, _r)
def acquire_and_fetch(self, context=None):
return _M_device_repo_ice.Digitizer._op_acquire_and_fetch.invoke(self, ((), context))
def acquire_and_fetchAsync(self, context=None):
return _M_device_repo_ice.Digitizer._op_acquire_and_fetch.invokeAsync(self, ((), context))
def begin_acquire_and_fetch(self, _response=None, _ex=None, _sent=None, context=None):
return _M_device_repo_ice.Digitizer._op_acquire_and_fetch.begin(self, ((), _response, _ex, _sent, context))
def end_acquire_and_fetch(self, _r):
return _M_device_repo_ice.Digitizer._op_acquire_and_fetch.end(self, _r)
def fetch(self, context=None):
return _M_device_repo_ice.Digitizer._op_fetch.invoke(self, ((), context))
def fetchAsync(self, context=None):
return _M_device_repo_ice.Digitizer._op_fetch.invokeAsync(self, ((), context))
def begin_fetch(self, _response=None, _ex=None, _sent=None, context=None):
return _M_device_repo_ice.Digitizer._op_fetch.begin(self, ((), _response, _ex, _sent, context))
def end_fetch(self, _r):
return _M_device_repo_ice.Digitizer._op_fetch.end(self, _r)
@staticmethod
def checkedCast(proxy, facetOrContext=None, context=None):
return _M_device_repo_ice.DigitizerPrx.ice_checkedCast(proxy, '::device_repo_ice::Digitizer', facetOrContext, context)
@staticmethod
def uncheckedCast(proxy, facet=None):
return _M_device_repo_ice.DigitizerPrx.ice_uncheckedCast(proxy, facet)
@staticmethod
def ice_staticId():
return '::device_repo_ice::Digitizer'
_M_device_repo_ice._t_DigitizerPrx = IcePy.defineProxy('::device_repo_ice::Digitizer', DigitizerPrx)
_M_device_repo_ice.DigitizerPrx = DigitizerPrx
del DigitizerPrx
_M_device_repo_ice.Digitizer = Ice.createTempClass()
class Digitizer(_M_device_repo_ice.Device):
def ice_ids(self, current=None):
return ('::Ice::Object', '::device_repo_ice::Device', '::device_repo_ice::Digitizer')
def ice_id(self, current=None):
return '::device_repo_ice::Digitizer'
@staticmethod
def ice_staticId():
return '::device_repo_ice::Digitizer'
def set_sample_number(self, number_of_samples, current=None):
raise NotImplementedError("servant method 'set_sample_number' not implemented")
def set_input_range(self, channel, range, current=None):
raise NotImplementedError("servant method 'set_input_range' not implemented")
def set_repeats(self, repeats, current=None):
raise NotImplementedError("servant method 'set_repeats' not implemented")
def set_trigger_level(self, trigger_level, current=None):
raise NotImplementedError("servant method 'set_trigger_level' not implemented")
def set_trigger_delay(self, delay, current=None):
raise NotImplementedError("servant method 'set_trigger_delay' not implemented")
def set_trigger_timeout(self, timeout, current=None):
raise NotImplementedError("servant method 'set_trigger_timeout' not implemented")
def get_sample_rate(self, current=None):
raise NotImplementedError("servant method 'get_sample_rate' not implemented")
def get_sample_number(self, current=None):
raise NotImplementedError("servant method 'get_sample_number' not implemented")
def get_input_range(self, channel, current=None):
raise NotImplementedError("servant method 'get_input_range' not implemented")
def get_repeats(self, current=None):
raise NotImplementedError("servant method 'get_repeats' not implemented")
def get_trigger_level(self, current=None):
raise NotImplementedError("servant method 'get_trigger_level' not implemented")
def get_trigger_delay(self, current=None):
raise NotImplementedError("servant method 'get_trigger_delay' not implemented")
def get_trigger_timeout(self, current=None):
raise NotImplementedError("servant method 'get_trigger_timeout' not implemented")
def start_acquire(self, current=None):
raise NotImplementedError("servant method 'start_acquire' not implemented")
def acquire_and_fetch_average(self, current=None):
raise NotImplementedError("servant method 'acquire_and_fetch_average' not implemented")
def fetch_average(self, current=None):
raise NotImplementedError("servant method 'fetch_average' not implemented")
def acquire_and_fetch(self, current=None):
raise NotImplementedError("servant method 'acquire_and_fetch' not implemented")
def fetch(self, current=None):
raise NotImplementedError("servant method 'fetch' not implemented")
def __str__(self):
return IcePy.stringify(self, _M_device_repo_ice._t_DigitizerDisp)
__repr__ = __str__
_M_device_repo_ice._t_DigitizerDisp = IcePy.defineClass('::device_repo_ice::Digitizer', Digitizer, (), None, (_M_device_repo_ice._t_DeviceDisp,))
Digitizer._ice_type = _M_device_repo_ice._t_DigitizerDisp
Digitizer._op_set_sample_number = IcePy.Operation('set_sample_number', Ice.OperationMode.Normal, Ice.OperationMode.Normal, False, None, (), (((), IcePy._t_int, False, 0),), (), None, ())
Digitizer._op_set_input_range = IcePy.Operation('set_input_range', Ice.OperationMode.Normal, Ice.OperationMode.Normal, False, None, (), (((), IcePy._t_int, False, 0), ((), IcePy._t_double, False, 0)), (), None, ())
Digitizer._op_set_repeats = IcePy.Operation('set_repeats', Ice.OperationMode.Normal, Ice.OperationMode.Normal, False, None, (), (((), IcePy._t_int, False, 0),), (), None, ())
Digitizer._op_set_trigger_level = IcePy.Operation('set_trigger_level', Ice.OperationMode.Normal, Ice.OperationMode.Normal, False, None, (), (((), IcePy._t_double, False, 0),), (), None, ())
Digitizer._op_set_trigger_delay = IcePy.Operation('set_trigger_delay', Ice.OperationMode.Normal, Ice.OperationMode.Normal, False, None, (), (((), IcePy._t_double, False, 0),), (), None, ())
Digitizer._op_set_trigger_timeout = IcePy.Operation('set_trigger_timeout', Ice.OperationMode.Normal, Ice.OperationMode.Normal, False, None, (), (((), IcePy._t_double, False, 0),), (), None, ())
Digitizer._op_get_sample_rate = IcePy.Operation('get_sample_rate', Ice.OperationMode.Normal, Ice.OperationMode.Normal, False, None, (), (), (), ((), IcePy._t_double, False, 0), ())
Digitizer._op_get_sample_number = IcePy.Operation('get_sample_number', Ice.OperationMode.Normal, Ice.OperationMode.Normal, False, None, (), (), (), ((), IcePy._t_int, False, 0), ())
Digitizer._op_get_input_range = IcePy.Operation('get_input_range', Ice.OperationMode.Normal, Ice.OperationMode.Normal, False, None, (), (((), IcePy._t_int, False, 0),), (), ((), IcePy._t_int, False, 0), ())
Digitizer._op_get_repeats = IcePy.Operation('get_repeats', Ice.OperationMode.Normal, Ice.OperationMode.Normal, False, None, (), (), (), ((), IcePy._t_int, False, 0), ())
Digitizer._op_get_trigger_level = IcePy.Operation('get_trigger_level', Ice.OperationMode.Normal, Ice.OperationMode.Normal, False, None, (), (), (), ((), IcePy._t_double, False, 0), ())
Digitizer._op_get_trigger_delay = IcePy.Operation('get_trigger_delay', Ice.OperationMode.Normal, Ice.OperationMode.Normal, False, None, (), (), (), ((), IcePy._t_double, False, 0), ())
Digitizer._op_get_trigger_timeout = IcePy.Operation('get_trigger_timeout', Ice.OperationMode.Normal, Ice.OperationMode.Normal, False, None, (), (), (), ((), IcePy._t_double, False, 0), ())
Digitizer._op_start_acquire = IcePy.Operation('start_acquire', Ice.OperationMode.Normal, Ice.OperationMode.Normal, False, None, (), (), (), None, ())
Digitizer._op_acquire_and_fetch_average = IcePy.Operation('acquire_and_fetch_average', Ice.OperationMode.Normal, Ice.OperationMode.Normal, False, None, (), (), (), ((), _M_device_repo_ice._t_DataSets, False, 0), ())
Digitizer._op_fetch_average = IcePy.Operation('fetch_average', Ice.OperationMode.Normal, Ice.OperationMode.Normal, False, None, (), (), (), ((), _M_device_repo_ice._t_DataSets, False, 0), ())
Digitizer._op_acquire_and_fetch = IcePy.Operation('acquire_and_fetch', Ice.OperationMode.Normal, Ice.OperationMode.Normal, False, None, (), (), (), ((), _M_device_repo_ice._t_DataSets, False, 0), ())
Digitizer._op_fetch = IcePy.Operation('fetch', Ice.OperationMode.Normal, Ice.OperationMode.Normal, False, None, (), (), (), ((), _M_device_repo_ice._t_DataSets, False, 0), ())
_M_device_repo_ice.Digitizer = Digitizer
del Digitizer
# End of module device_repo_ice
| 56.482094 | 219 | 0.71609 | 2,656 | 20,503 | 5.062877 | 0.04744 | 0.081059 | 0.103443 | 0.095783 | 0.863018 | 0.76069 | 0.689224 | 0.626683 | 0.572618 | 0.523091 | 0 | 0.001408 | 0.168658 | 20,503 | 362 | 220 | 56.638122 | 0.787504 | 0.013803 | 0 | 0.038298 | 1 | 0 | 0.073363 | 0.019652 | 0 | 0 | 0 | 0 | 0 | 1 | 0.412766 | false | 0 | 0.012766 | 0.33617 | 0.774468 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
f6942ccaaf9505f777b06e1ad315cbdf136e3a0b | 1,076 | py | Python | Tasks/spacy_test.py | AntonAlbertovich/Eusocial-Cluster-Utility | fef4f583b6151bb40e54d6825d65d668581c2121 | [
"MIT"
] | 2 | 2019-03-22T15:08:31.000Z | 2019-03-23T20:10:40.000Z | Tasks/spacy_test.py | AntonAlbertovich/Eusocial-Cluster-Utility | fef4f583b6151bb40e54d6825d65d668581c2121 | [
"MIT"
] | 1 | 2019-03-23T20:08:12.000Z | 2019-03-23T20:08:12.000Z | Tasks/spacy_test.py | AntonAlbertovich/Eusocial-Cluster-Utility | fef4f583b6151bb40e54d6825d65d668581c2121 | [
"MIT"
] | 1 | 2019-03-23T19:56:07.000Z | 2019-03-23T19:56:07.000Z | import spacy
nlp = spacy.load('en_core_web_sm')

# Load the model once and run the same dependency printout for each sentence
sentences = [
    u'(12) I am going to the grocery store right now. ',
    u'Do you want anything?',
    u'I’ve never taken a linguistics course before.',
    u'I have never taken a linguistics course before.',
    u'(16) I’d like to talk about it at some point but that’ll be a whole new discussion. ',
]

for sentence in sentences:
    doc = nlp(sentence)
    for token in doc:
        print(token.text, token.dep_)
    print("------------------------------------------------")
| 24.454545 | 99 | 0.530669 | 151 | 1,076 | 3.649007 | 0.357616 | 0.072595 | 0.108893 | 0.127042 | 0.76225 | 0.76225 | 0.76225 | 0.76225 | 0.76225 | 0.76225 | 0 | 0.00431 | 0.137546 | 1,076 | 43 | 100 | 25.023256 | 0.58944 | 0 | 0 | 0.769231 | 0 | 0.038462 | 0.518173 | 0.223672 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.038462 | 0 | 0.038462 | 0.384615 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f6ba345c8018c173fcb7be38dc01a750bccd4b9c | 40 | py | Python | tests/engine/__init__.py | 2kodevs/Search-Engine | 840001f825d9632c6c7a5fd24151b79ca1a9a06b | [
"MIT"
] | null | null | null | tests/engine/__init__.py | 2kodevs/Search-Engine | 840001f825d9632c6c7a5fd24151b79ca1a9a06b | [
"MIT"
] | null | null | null | tests/engine/__init__.py | 2kodevs/Search-Engine | 840001f825d9632c6c7a5fd24151b79ca1a9a06b | [
"MIT"
] | null | null | null | from ._tests import SearchEngineTestCase | 40 | 40 | 0.9 | 4 | 40 | 8.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.075 | 40 | 1 | 40 | 40 | 0.945946 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f6cfd91952ffe39686f51320416dcd744b337856 | 2,357 | py | Python | D 2016-2017/Oct 26/ResistancesInParallel_in_class.py | bnajafi/Python4ScientificComputing_Fundamentals | 7c943404654d9c44920ea9a7fdee28bc99ad8d05 | [
"MIT"
] | 65 | 2017-01-19T08:03:38.000Z | 2021-04-02T17:41:43.000Z | D 2016-2017/Oct 26/ResistancesInParallel_in_class.py | bnajafi/Python4ScientificComputing_Fundamentals | 7c943404654d9c44920ea9a7fdee28bc99ad8d05 | [
"MIT"
] | null | null | null | D 2016-2017/Oct 26/ResistancesInParallel_in_class.py | bnajafi/Python4ScientificComputing_Fundamentals | 7c943404654d9c44920ea9a7fdee28bc99ad8d05 | [
"MIT"
] | 110 | 2017-01-19T08:04:14.000Z | 2020-07-23T13:44:52.000Z |
Ri=["conv", 12, 10]  # Order: 0:type, 1:area, 2:convection coefficient h
Ro=["conv", 12, 25]
R1=["cond", 12, 0.2, 0.8]  # Order: 0:type, 1:area, 2:thickness, 3:conductivity
R2=["cond", 12, 0.3, 1.5]
R3=["cond", 12, 0.1, 0.7]
R4=["cond", 12, 0.15, 1.5]
R5=["cond", 12, 0.25, 11.1]
#R_parallel=[R3,R4,R5]
#R_in Series=[Ri,Ro,R7,R8]
resistances_in_series=[Ri,R1,R2,Ro]
resistances_in_parallel=[R3,R4,R5]
Rtotal_series=0
Message="\n \n The Resistances are: \t"
for resistance in resistances_in_series:
    type_of_resistance = resistance[0]
    A = resistance[1]
    if type_of_resistance == "cond":
        # Conductive resistance: R = L/(k*A)
        L = resistance[2]
        k = resistance[3]
        R = round(float(L)/(k*A), 4)
        resistance.append(R)
    elif type_of_resistance == "conv":
        # Convective resistance: R = 1/(h*A)
        h = resistance[2]
        R = round(1.0/(A*h), 4)
        resistance.append(R)
    else:
        print "I don't know this type of resistance"
        break
    Rtotal_series = Rtotal_series + R
    Message = Message + str(R) + " degC/W \t"

print Message
print "So the total resistance of your wall is: " + str(Rtotal_series) + " degC/W"
Rtotal_parallel_inv=0
Message="\n \n The Resistances are: \t"
for resistance in resistances_in_parallel:
#print "this is the resistance"
#print resistance
type_of_resistance=resistance[0]
#print "type of resistance is "+type_of_resistance
A=resistance[1]
if type_of_resistance=="cond":
L=resistance[2]
k=resistance[3]
R=round(float(L)/(k*A),4)
resistance.append(R)
#print "Conductive resistance"
#print resistance
elif type_of_resistance=="conv":
A=resistance[1]
h=resistance[2]
R=round(1.0/(A*h),4)
resistance.append(R)
#print "Convective resistance"
#print resistance
else:
print "I don't know this type of resistance"
break
Rtotal_parallel_inv=Rtotal_parallel_inv+1/R
Message=Message+str(R)+ " degC/W \t"
#print Message
Rtotal_parallel=1/Rtotal_parallel_inv
print Message
print "So the total resistance of your wall is: "+str(Rtotal_parallel)+" degC/W"
R_total =Rtotal_parallel+Rtotal_series
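The script above combines conductive resistances (R = L/(k*A)) and convective resistances (R = 1/(h*A)), summing resistances in series and summing reciprocals in parallel. A minimal self-contained sketch of those combination rules; the helper names here are illustrative and not part of the original script:

```python
def conv_resistance(area, h):
    # Convective thermal resistance: R = 1/(h*A)
    return 1.0 / (area * h)

def cond_resistance(area, length, k):
    # Conductive thermal resistance: R = L/(k*A)
    return length / (k * area)

def series_resistance(resistances):
    # Series combination: R_total = sum(R_i)
    return sum(resistances)

def parallel_resistance(resistances):
    # Parallel combination: 1/R_total = sum(1/R_i)
    return 1.0 / sum(1.0 / r for r in resistances)

# Same inner/outer convection and first wall layer as the script above
Ri = conv_resistance(12, 10)        # inner convection, 1/120
Ro = conv_resistance(12, 25)        # outer convection, 1/300
R1 = cond_resistance(12, 0.2, 0.8)  # wall layer, 0.2/9.6
R_total = series_resistance([Ri, R1, Ro])  # roughly 0.0325 degC/W
```

The reciprocal form used for the parallel branch is why the script accumulates `Rtotal_parallel_inv` first and inverts it only after the loop.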
| 28.059524 | 79 | 0.652949 | 365 | 2,357 | 4.09589 | 0.189041 | 0.048161 | 0.128428 | 0.048161 | 0.759866 | 0.759866 | 0.759866 | 0.759866 | 0.727759 | 0.727759 | 0 | 0.045356 | 0.214255 | 2,357 | 83 | 80 | 28.39759 | 0.761879 | 0.217225 | 0 | 0.654545 | 0 | 0 | 0.156798 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.109091 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f6d6f59b972b04c33bab096a8604694d113dabcb | 321 | py | Python | cakechat/dialog_model/inference/__init__.py | sketscripter/emotional-chatbot-cakechat | 470df58a2206a0ea38b6bed53b20cbc63bd3de24 | [
"Apache-2.0"
] | 1,608 | 2018-01-31T15:22:29.000Z | 2022-03-30T19:59:16.000Z | cakechat/dialog_model/inference/__init__.py | GaelicThunder/cakechat | 844507281b30d81b3fe3674895fe27826dba8438 | [
"Apache-2.0"
] | 64 | 2019-07-05T06:06:43.000Z | 2021-08-02T05:22:31.000Z | cakechat/dialog_model/inference/__init__.py | Spark3757/chatbot | 4e8eae70af2d5b68564d86b7ea0dbec956ae676f | [
"Apache-2.0"
] | 690 | 2018-01-31T17:57:19.000Z | 2022-03-30T07:07:41.000Z | from cakechat.dialog_model.inference.utils import get_sequence_log_probs, get_sequence_score_by_thought_vector, \
get_sequence_score
from cakechat.dialog_model.inference.predict import get_nn_response_ids, get_nn_responses, warmup_predictor
from cakechat.dialog_model.inference.service_tokens import ServiceTokensIDs
| 64.2 | 113 | 0.890966 | 45 | 321 | 5.911111 | 0.555556 | 0.135338 | 0.203008 | 0.259399 | 0.360902 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.065421 | 321 | 4 | 114 | 80.25 | 0.886667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.75 | 0 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f6e30682785c0cdce843e27b6f97a7e7739fa292 | 208 | py | Python | tests/files/plugins/test-plugin.py | jeremyschulman/netcfgbu | c2056f07aefa7c9e584fc9a34c9971100df7fa49 | [
"Apache-2.0"
] | 83 | 2020-06-02T13:25:33.000Z | 2022-03-07T20:50:36.000Z | tests/files/plugins/test-plugin.py | jeremyschulman/netcfgbu | c2056f07aefa7c9e584fc9a34c9971100df7fa49 | [
"Apache-2.0"
] | 55 | 2020-06-03T17:51:31.000Z | 2021-08-14T14:13:56.000Z | tests/files/plugins/test-plugin.py | jeremyschulman/netcfgbu | c2056f07aefa7c9e584fc9a34c9971100df7fa49 | [
"Apache-2.0"
] | 16 | 2020-06-05T20:32:27.000Z | 2021-11-01T17:06:38.000Z | from netcfgbu.plugins import Plugin
class TestPlugin(Plugin):
def backup_success(rec: dict, res: bool):
return (rec, res)
def backup_failed(rec: dict, res: bool):
return (rec, res)
| 20.8 | 45 | 0.658654 | 28 | 208 | 4.821429 | 0.571429 | 0.133333 | 0.148148 | 0.207407 | 0.385185 | 0.385185 | 0.385185 | 0 | 0 | 0 | 0 | 0 | 0.235577 | 208 | 9 | 46 | 23.111111 | 0.849057 | 0 | 0 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.166667 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
f6eaf6837ed0072d89fd31c889ab306fcb1ac8a8 | 207 | py | Python | bluenrg/commands/__init__.py | autopi-io/py-bluenrg | f3fa9df8fa9ff86b615aef1782f6bbce80298abf | [
"Apache-2.0"
] | null | null | null | bluenrg/commands/__init__.py | autopi-io/py-bluenrg | f3fa9df8fa9ff86b615aef1782f6bbce80298abf | [
"Apache-2.0"
] | null | null | null | bluenrg/commands/__init__.py | autopi-io/py-bluenrg | f3fa9df8fa9ff86b615aef1782f6bbce80298abf | [
"Apache-2.0"
] | null | null | null | # NOTE: This file is auto-generated, please do not modify
from .hci import *
from .hci_testing import *
from .hal import *
from .gap import *
from .gatt_att import *
from .l2cap import *
from .app import *
| 20.7 | 57 | 0.724638 | 33 | 207 | 4.484848 | 0.606061 | 0.405405 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005952 | 0.188406 | 207 | 9 | 58 | 23 | 0.875 | 0.2657 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
63df094cf2fae4e9064558932295571d91e6dac4 | 47 | py | Python | naslib/optimizers/oneshot/darts/searcher.py | az2104nas/sztnb302alsr2bs21on | 6084c82c59a4a89498a191d96c231f47df10317d | [
"Apache-2.0"
] | null | null | null | naslib/optimizers/oneshot/darts/searcher.py | az2104nas/sztnb302alsr2bs21on | 6084c82c59a4a89498a191d96c231f47df10317d | [
"Apache-2.0"
] | 4 | 2021-06-08T21:32:32.000Z | 2022-03-12T00:29:33.000Z | naslib/optimizers/oneshot/darts/searcher.py | az2104nas/sztnb302alsr2bs21on | 6084c82c59a4a89498a191d96c231f47df10317d | [
"Apache-2.0"
] | null | null | null | from naslib.optimizers.oneshot import Searcher
| 23.5 | 46 | 0.87234 | 6 | 47 | 6.833333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085106 | 47 | 1 | 47 | 47 | 0.953488 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1246e8bbb7be1d21799a58e54e5dafb13ab7e06b | 139 | py | Python | GCL/losses/__init__.py | GCL-staging/PyGCL | 6cf2f4475053c631c6db1b8a2412bd811b586275 | [
"Apache-2.0"
] | null | null | null | GCL/losses/__init__.py | GCL-staging/PyGCL | 6cf2f4475053c631c6db1b8a2412bd811b586275 | [
"Apache-2.0"
] | null | null | null | GCL/losses/__init__.py | GCL-staging/PyGCL | 6cf2f4475053c631c6db1b8a2412bd811b586275 | [
"Apache-2.0"
] | null | null | null | from .jsd import *
from .vicreg import *
from .infonce import *
from .triplet import *
from .bootstrap import *
from .barlow_twins import * | 23.166667 | 27 | 0.748201 | 19 | 139 | 5.421053 | 0.473684 | 0.485437 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.165468 | 139 | 6 | 27 | 23.166667 | 0.887931 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
12559ac9bb4ab44aba097ecb87205d8a4f28d6b5 | 71 | py | Python | baloo/__init__.py | cda-group/baloo | 0d442117c2a919b177e0a96024cbdc82762cb646 | [
"BSD-3-Clause"
] | 11 | 2018-12-16T00:19:39.000Z | 2021-01-06T04:56:02.000Z | baloo/__init__.py | monner/baloo | f6e05e35b73a75e8a300754c6bdc575e5f2d53b9 | [
"BSD-3-Clause"
] | 6 | 2019-02-21T23:22:14.000Z | 2021-06-01T22:39:32.000Z | baloo/__init__.py | monner/baloo | f6e05e35b73a75e8a300754c6bdc575e5f2d53b9 | [
"BSD-3-Clause"
] | 6 | 2019-02-12T14:30:43.000Z | 2020-03-15T17:17:56.000Z | from .core import *
from .functions import *
from .io.parsers import *
| 17.75 | 25 | 0.732394 | 10 | 71 | 5.2 | 0.6 | 0.384615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.169014 | 71 | 3 | 26 | 23.666667 | 0.881356 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
126ea4b32457916697fc1408b40ac77349d6c589 | 5,707 | py | Python | catalog/migrations/0001_initial.py | eldemoni/mymdb_final | 02332d0d5b4d88bc6adb38ce797010e16b50c847 | [
"MIT"
] | null | null | null | catalog/migrations/0001_initial.py | eldemoni/mymdb_final | 02332d0d5b4d88bc6adb38ce797010e16b50c847 | [
"MIT"
] | null | null | null | catalog/migrations/0001_initial.py | eldemoni/mymdb_final | 02332d0d5b4d88bc6adb38ce797010e16b50c847 | [
"MIT"
] | null | null | null | # Generated by Django 3.0.2 on 2020-03-04 11:31
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='Actor',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('first_name', models.CharField(max_length=100)),
('last_name', models.CharField(max_length=100)),
('date_of_birth', models.DateField(blank=True, null=True)),
('date_of_death', models.DateField(blank=True, null=True, verbose_name='Died')),
('picture', models.URLField(null=True)),
],
options={
'ordering': ['last_name'],
},
),
migrations.CreateModel(
name='Director',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('first_name', models.CharField(max_length=100)),
('last_name', models.CharField(max_length=100)),
('date_of_birth', models.DateField(blank=True, null=True)),
('date_of_death', models.DateField(blank=True, null=True, verbose_name='Died')),
('picture', models.URLField(null=True)),
],
options={
'ordering': ['last_name'],
},
),
migrations.CreateModel(
name='Genre',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(help_text='Ingrese el nombre del género.', max_length=200)),
],
options={
'ordering': ['name'],
},
),
migrations.CreateModel(
name='Movie',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('title', models.CharField(help_text='Nombre de la película.', max_length=200)),
('poster', models.URLField(null=True)),
('trailer', models.URLField(null=True)),
('summary', models.TextField(help_text='Ingrese una breve descripción de la película.', max_length=1000)),
('release_date', models.DateField(null=True)),
('director', models.ForeignKey(null=True, on_delete=django.db.models.deletion.SET_NULL, to='catalog.Director')),
('genre', models.ManyToManyField(help_text='Seleccione un genero para esta película', to='catalog.Genre')),
('saved', models.ManyToManyField(blank=True, related_name='fav_movies', to=settings.AUTH_USER_MODEL)),
('stars', models.ManyToManyField(help_text='Ingrese los actores principales', to='catalog.Actor')),
],
options={
'ordering': ['title'],
},
),
migrations.CreateModel(
name='WhatIf',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('what_if', models.TextField(max_length=500)),
('by', models.ForeignKey(blank=True, default=1, null=True, on_delete=django.db.models.deletion.SET_NULL, to=settings.AUTH_USER_MODEL)),
('dislikes', models.ManyToManyField(blank=True, related_name='dislikes', to=settings.AUTH_USER_MODEL)),
('likes', models.ManyToManyField(blank=True, related_name='likes', to=settings.AUTH_USER_MODEL)),
('movie', models.ForeignKey(null=True, on_delete=django.db.models.deletion.SET_NULL, to='catalog.Movie')),
],
options={
'ordering': ['movie'],
},
),
migrations.CreateModel(
name='Series',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('title', models.CharField(help_text='Nombre de la serie.', max_length=200)),
('poster', models.URLField(null=True)),
('trailer', models.URLField(null=True)),
('summary', models.TextField(help_text='Ingrese una breve descripción de la película.', max_length=1000)),
('release_date', models.DateField(null=True)),
('episodes', models.IntegerField(null=True)),
('director', models.ManyToManyField(to='catalog.Director')),
('genre', models.ManyToManyField(help_text='Seleccione un genero para esta película', to='catalog.Genre')),
('saved', models.ManyToManyField(blank=True, related_name='fav_series', to=settings.AUTH_USER_MODEL)),
('stars', models.ManyToManyField(help_text='Ingrese los actores principales', to='catalog.Actor')),
],
options={
'ordering': ['title'],
},
),
migrations.CreateModel(
name='Profiles',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('mail', models.EmailField(max_length=254)),
('user', models.OneToOneField(on_delete=django.db.models.deletion.DO_NOTHING, to=settings.AUTH_USER_MODEL)),
],
),
]
| 49.626087 | 151 | 0.573331 | 575 | 5,707 | 5.535652 | 0.215652 | 0.040214 | 0.035187 | 0.046183 | 0.775055 | 0.744266 | 0.709079 | 0.709079 | 0.709079 | 0.709079 | 0 | 0.012457 | 0.282635 | 5,707 | 114 | 152 | 50.061404 | 0.765022 | 0.007885 | 0 | 0.626168 | 1 | 0 | 0.152827 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.028037 | 0 | 0.065421 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d600bebd3d9373769f3518a9c26bd98689ba3d82 | 48,012 | py | Python | Decision Tree/kingsheep/template_player.py | wangqiaowen/Kingsheep | 7e8bf14eaf311ede9d8c335361c672e1b4383236 | [
"Apache-2.0"
] | null | null | null | Decision Tree/kingsheep/template_player.py | wangqiaowen/Kingsheep | 7e8bf14eaf311ede9d8c335361c672e1b4383236 | [
"Apache-2.0"
] | null | null | null | Decision Tree/kingsheep/template_player.py | wangqiaowen/Kingsheep | 7e8bf14eaf311ede9d8c335361c672e1b4383236 | [
"Apache-2.0"
] | null | null | null | """
Kingsheep Agent Template
This template is provided for the course 'Practical Artificial Intelligence' at the University of Zürich.
Please edit the following things before you upload your agent:
- change the name of your file to '[uzhshortname]_A2.py', where [uzhshortname] needs to be your uzh shortname
- change the name of the class to a name of your choosing
- change the def 'get_class_name()' to return the new name of your class
- change the init of your class:
- self.name can be an (anonymous) name of your choosing
- self.uzh_shortname needs to be your UZH shortname
- change the name of the model in get_sheep_model to [uzhshortname]_sheep_model
- change the name of the model in get_wolf_model to [uzhshortname]_wolf_model
The results and rankings of the agents will be published on OLAT using your 'name', not 'uzh_shortname',
so they are anonymous (and your 'name' is expected to be funny, no pressure).
"""
from config import *
import pickle
import numpy as np
import random
def get_class_name():
return 'SheepTheVictim'
class SheepTheVictim():
"""Example class for a Kingsheep player"""
def __init__(self):
self.name = "SheepTheVictim"
self.uzh_shortname = "qiawan"
def get_sheep_model(self): #qiawan_sheep_model_gnb_their_8+13.sav
return pickle.load(open('qiawan_sheep_model_gnb_their_8+13.sav','rb'))
def get_wolf_model(self): #qiawan_wolf_model_new_gnb_4+13+14.sav
return pickle.load(open('qiawan_wolf_model_gnb_their_4+13+14.sav','rb'))
def is_wolf_nearby(sheep_position, wolf_position):
if 0 < abs(sheep_position[1] - wolf_position[1]) <= 2 \
or 0 < abs(sheep_position[0] - wolf_position[0]) <= 2:
return True
return False
def new_sheep_position(result, sheep_position):
if result[0] == -2:
return ((sheep_position[0]-1),sheep_position[1])
elif result[0] == 2:
return ((sheep_position[0]+1),sheep_position[1])
elif result[0] == -1:
return (sheep_position[0],(sheep_position[1]-1))
elif result[0] == 1:
return (sheep_position[0],(sheep_position[1]+1))
else:
return sheep_position
def new_wolf_position(result, wolf_position):
if result[0] == -2:
return ((wolf_position[0]-1),wolf_position[1])
elif result[0] == 2:
return ((wolf_position[0]+1),wolf_position[1])
elif result[0] == -1:
return (wolf_position[0],(wolf_position[1]-1))
elif result[0] == 1:
return (wolf_position[0],(wolf_position[1]+1))
else:
return wolf_position
def manhattan_D(x,y):
return sum(map(lambda i,j: abs(i-j),x,y))
def fence_surrounded(sheep_position, fence):
right = (sheep_position[0]+1,sheep_position[1])
left = (sheep_position[0]-1,sheep_position[1])
above = (sheep_position[0],sheep_position[1]-1)
below = (sheep_position[0],sheep_position[1]+1)
print ("right",right, right in fence)
print ("above", above, above in fence)
print ("left", left ,left in fence)
print ("below", below, below in fence)
if right in fence and above in fence:
return True
elif right in fence and below in fence:
return True
elif left in fence and above in fence :
return True
elif left in fence and below in fence:
return True
# if food_position[0] - sheep_position [0] == 2 :
# if food_position[1] - sheep_position[1] == 2:
# x = abs(food_position[0] - sheep_position [0])
# y = abs(food_position[1] - sheep_position[1]
else: return False
def food_exist(sheep_position, foods):
if sheep_position in foods:
return True
else: return False
def valid_move(self, figure, x_new, y_new, field):
# Neither the sheep nor the wolf, can step on a square outside the map. Imagine the map is surrounded by fences.
if x_new > FIELD_HEIGHT - 1:
print ("outside")
return False
elif x_new < 0:
print ("outside")
return False
elif y_new > FIELD_WIDTH -1:
print ("outside")
return False
elif y_new < 0:
print ("outside")
return False
# Neither the sheep nor the wolf, can enter a square with a fence on.
if field[x_new][y_new] == CELL_FENCE:
print ("fence")
return False
# Wolfs can not step on squares occupied by the opponents wolf (wolfs block each other).
# Wolfs can not step on squares occupied by the sheep of the same player .
if figure == CELL_WOLF_1:
if field[x_new][y_new] == CELL_WOLF_2:
print ("2_wolf")
return False
elif field[x_new][y_new] == CELL_SHEEP_1:
print ("1_sheep")
return False
elif figure == CELL_WOLF_2:
if field[x_new][y_new] == CELL_WOLF_1:
print ("1_wolf")
return False
elif field[x_new][y_new] == CELL_SHEEP_2:
print ("2_sheep")
return False
# Sheep can not step on squares occupied by the wolf of the same player.
# Sheep can not step on squares occupied by the opposite sheep.
if figure == CELL_SHEEP_1:
if field[x_new][y_new] == CELL_SHEEP_2 or \
field[x_new][y_new] == CELL_WOLF_1:
print ("your_sheep&their_wolf")
return False
elif figure == CELL_SHEEP_2:
if field[x_new][y_new] == CELL_SHEEP_1 or \
field[x_new][y_new] == CELL_WOLF_2:
print ("your_sheep&their_wolf")
return False
return True
def fence_between(sheep_position,wolf_position, fence):
# print (fence)
if sheep_position[0] - wolf_position[0] >=1 and sheep_position[1] - wolf_position[1] == 0:
# sheep right wolf
x_dis = abs(sheep_position[0] - wolf_position[0])
for i in range(x_dis+1):
if (sheep_position[0]-i, sheep_position[1]) in fence:
return True
break
else: continue
elif sheep_position[0] - wolf_position[0] <=-1 and sheep_position[1] - wolf_position[1] == 0:
# sheep left wolf
x_dis = abs(sheep_position[0] - wolf_position[0])
for i in range(x_dis+1):
if (sheep_position[0]+i, sheep_position[1]) in fence:
return True
break
else: continue
elif sheep_position[0] - wolf_position[0] == 0 and sheep_position[1] - wolf_position[1] >= 1:
# sheep below wolf
x_dis = abs(sheep_position[1] - wolf_position[1])
for i in range(x_dis+1):
if (sheep_position[0], sheep_position[1]-i) in fence:
return True
break
else: continue
elif sheep_position[0] - wolf_position[0] == 0 and sheep_position[1] - wolf_position[1] <= -1:
# sheep above wolf
x_dis = abs(sheep_position[1] - wolf_position[1])
for i in range(x_dis+1):
if (sheep_position[0], sheep_position[1]+i) in fence:
return True
break
else: continue
elif sheep_position[0] - wolf_position[0] >= 1 and sheep_position[1] - wolf_position[1] >= 1:
# sheep right below wolf
print ("sheep right below wolf")
x_dis = abs(sheep_position[0] - wolf_position[0])
y_dis = abs(sheep_position[1] - wolf_position[1])
for i in range(x_dis+1):
for j in range(y_dis+1):
# print (i,j)
print ((sheep_position[0]-i, sheep_position[1]-j))
if (sheep_position[0]-i, sheep_position[1]-j) in fence:
return True
break
else: continue
# return True
# else: return False
elif sheep_position[0] - wolf_position[0] >= 1 and sheep_position[1] - wolf_position[1] <= -1:
# sheep right above wolf
x_dis = abs(sheep_position[0] - wolf_position[0])
y_dis = abs(sheep_position[1] - wolf_position[1])
for i in range(x_dis+1):
for j in range(y_dis+1):
if (sheep_position[0]-i, sheep_position[1]+j) in fence:
return True
break
else: continue
elif sheep_position[0] - wolf_position[0] <= -1 and sheep_position[1] - wolf_position[1] <= -1:
# sheep left above wolf
x_dis = abs(sheep_position[0] - wolf_position[0])
y_dis = abs(sheep_position[1] - wolf_position[1])
for i in range(x_dis+1):
for j in range(y_dis+1):
if (sheep_position[0]+i, sheep_position[1]+j) in fence:
return True
break
else: continue
elif sheep_position[0] - wolf_position[0] <= -1 and sheep_position[1] - wolf_position[1] >= 1:
# sheep left below wolf
x_dis = abs(sheep_position[0] - wolf_position[0])
y_dis = abs(sheep_position[1] - wolf_position[1])
for i in range(x_dis+1):
for j in range(y_dis+1):
if (sheep_position[0]+i, sheep_position[1]-j) in fence:
return True
break
else: continue
else: return False
# def towards_food (food_goal, sheep_position, distance):
# if (abs(food_goal[0]-sheep_position[0]) == 1 and abs(food_goal[1]-sheep_position[1]) == 0) \
# or (abs(food_goal[0]-sheep_position[0]) == 0 and abs(food_goal[1]-sheep_position[1]) == 1):
# print ("food next to me")
# if food_goal[0]-sheep_position[0] == 1 :
# result[0] = 2
# elif food_goal[0]-sheep_position[0] == -1 :
# result[0] = -2
# elif food_goal[1]-sheep_position[1] == 1 :
# result[0] = 1
# elif food_goal[1]-sheep_position[1] == -1 :
# result[0] = -1
def fence_nextto(sheep_position,food_goal,fence):
if sheep_position[0] - food_goal[0] == 2 and sheep_position[1] - food_goal[1] == 0:
# sheep right next to food
print ("sheep right next to food")
# x_dis = abs(sheep_position[0] - food_goal[0])
# y_dis = abs(sheep_position[1] - food_goal[1])
# for i in range(x_dis+1):
# for j in range(y_dis+1):
# # print (i,j)
print ((sheep_position[0]-1, sheep_position[1]))
if (sheep_position[0]-1, sheep_position[1]) in fence:
return True
# return True
# else: return False
elif sheep_position[0] - food_goal[0] == -2 and sheep_position[1] - food_goal[1] == 0:
# sheep left next to food
print ("sheep right next to food")
print ((sheep_position[0]+1, sheep_position[1]))
if (sheep_position[0]+1, sheep_position[1]) in fence:
return True
elif sheep_position[0] - food_goal[0] == 0 and sheep_position[1] - food_goal[1] == 2:
# sheep below next to wolf
print ("sheep right next to food")
print ((sheep_position[0], sheep_position[1]-1))
if (sheep_position[0], sheep_position[1]-1) in fence:
return True
elif sheep_position[0] - food_goal[0] == 0 and sheep_position[1] - food_goal[1] == -2:
# sheep below next to wolf
print ("sheep right next to food")
print ((sheep_position[0], sheep_position[1]+1))
if (sheep_position[0], sheep_position[1]+1) in fence:
return True
else: return False
def move_sheep(self, p_num ,p_state, p_time_remaining, field):
if 'sheep_model' not in p_state:
p_state['sheep_model'] = self.get_sheep_model()
sheep_model = p_state['sheep_model']
X_sheep = []
game_features = []
#preprocess field to get features, add to X_sheep
#this code is largely copied from the Jupyter Notebook where the models were trained
#create empty feature array for this game state
#add features and move to X_sheep
if p_num == 1:
sheep = CELL_SHEEP_1
wolf = CELL_WOLF_2
op_wolf = CELL_WOLF_1
op_sheep = CELL_SHEEP_2
else:
sheep = CELL_SHEEP_2
wolf = CELL_WOLF_1
op_wolf = CELL_WOLF_2
op_sheep = CELL_SHEEP_1
#get positions of sheep, wolf and food items
food = []
obstacles = []
y=0
for field_row in field:
x = 0
for item in field_row:
if item == sheep:
sheep_position = (x,y)
elif item == wolf:
wolf_position = (x,y)
elif item == CELL_RHUBARB or item == CELL_GRASS:
food.append((x,y))
elif item == CELL_FENCE:
obstacles.append((x,y))
elif item == op_wolf:
op_wolf_position = (x,y)
elif item == op_sheep:
op_sheep_position = (x,y)
x += 1
y+=1
#feature 1: determine if wolf within two steps up
if sheep_position[1] - wolf_position[1] <= 2 and sheep_position[1] - wolf_position[1] > 0:
s_feature1 = 1
else:
s_feature1 = 0
game_features.append(s_feature1)
#feature 2: determine if wolf within two steps down
if sheep_position[1] - wolf_position[1] >= -2 and sheep_position[1] - wolf_position[1] < 0:
s_feature2 = 1
else:
s_feature2 = 0
game_features.append(s_feature2)
#feature 3: determine if wolf within two steps left
if sheep_position[0] - wolf_position[0] <= 2 and sheep_position[0] - wolf_position[0] > 0:
s_feature3 = 1
else:
s_feature3 = 0
game_features.append(s_feature3)
#feature 4: determine if wolf within two steps right
if sheep_position[0] - wolf_position[0] >= -2 and sheep_position[0] - wolf_position[0] < 0:
s_feature4 = 1
else:
s_feature4 = 0
game_features.append(s_feature4)
s_feature5 = 0
s_feature6 = 0
s_feature7 = 0
s_feature8 = 0
#determine closest food:
food_distance = 1000
food_goal = None
for food_item in food:
distance = abs(food_item[0] - sheep_position[0]) + abs(food_item[1] - sheep_position[1])
if distance < food_distance:
food_distance = distance
food_goal = food_item
# elif distance == food_distance and field[food_item[1]][food_item[0]] == rhubarb :
print (food_goal)
print (sheep_position)
print( food_goal != None)
if food_goal != None:
#feature 5: determine if food within two steps up
print("hunt food")
print (sheep_position[1])
print (food_goal[1])
if food_goal != None:
#feature 5: determine if closest food is below the sheep
if sheep_position[1] - food_goal[1] < 0:
s_feature5 = 1
#feature 6: determine if closest food is above the sheep
if sheep_position[1] - food_goal[1] > 0:
s_feature6 = 1
#feature 7: determine if closest food is right of the sheep
if sheep_position[0] - food_goal[0] < 0:
s_feature7 = 1
#feature 8: determine if closest food is left of the sheep
if sheep_position[0] - food_goal[0] > 0:
s_feature8 = 1
game_features.append(s_feature5)
game_features.append(s_feature6)
game_features.append(s_feature7)
game_features.append(s_feature8)
# s_feature9 = 0
# s_feature10 = 0
# s_feature11 = 0
# s_feature12 = 0
# #determine closest fence:
# fence_distance = 1000
# fence_goal = None
# # print (obstacles)
# for fence_item in obstacles:
# # print ('loop')
# distance = abs(fence_item[0] - sheep_position[0]) + abs(fence_item[1] - sheep_position[1])
# if distance < fence_distance:
# fence_distance = distance
# fence_goal = fence_item
# if sheep_position[1] - fence_item[1] == -1:
# s_feature9 = 1
# #feature 10: determine if closest food is above the sheep
# if sheep_position[1] - fence_item[1] == 1 :
# s_feature10 = 1
# #feature 11: determine if closest food is right of the sheep
# if sheep_position[0] - fence_item[0] == -1:
# s_feature11 = 1
# #feature 12: determine if closest food is left of the sheep
# if sheep_position[0] - fence_item[0] == 1:
# s_feature12 = 1
# game_features.append(s_feature9)
# game_features.append(s_feature10)
# game_features.append(s_feature11)
# game_features.append(s_feature12)
s_feature13 = abs(sheep_position[0]-wolf_position[0])+abs(sheep_position[1]-wolf_position[1])
# print (s_feature13)
game_features.append(s_feature13)
# s_feature14 = abs(op_wolf_position[0]-sheep_position[0])+abs(op_wolf_position[1]-sheep_position[1])
# # print (s_feature13)
# game_features.append(s_feature14)
# s_feature15 = abs(op_sheep_position[0]-sheep_position[0])+abs(op_sheep_position[1]-sheep_position[1])
# # print (s_feature13)
# game_features.append(s_feature15)
print (game_features)
X_sheep.append(game_features)
result = sheep_model.predict(X_sheep)
proba = sheep_model.predict_proba(X_sheep)
print(result)
# proba_dic = {proba[0][0]:-2,proba[0][1]:-1, :0, :1, :2}
proba_dic = {-2:proba[0][0], -1:proba[0][1], 0:proba[0][2], 1:proba[0][3], 2:proba[0][4]}
print (proba_dic)
# if result[0] == -2:
# new_position = ((sheep_position[0]-1),sheep_position[1])
# elif result[0] == 2:
# new_position = ((sheep_position[0]+1),sheep_position[1])
# elif result[0] == -1:
# new_position = (sheep_position[0],(sheep_position[1]-1))
# elif result[0] == 1:
# new_position = (sheep_position[0],(sheep_position[1]+1))
while True:
if result[0] == 0:
if 0 in proba_dic.keys():
proba_dic.pop(result[0])
# remain_choice = list(proba_dic.keys())
# random_step = random.choice(remain_choice)
next_max = max(proba_dic.values())
next_step = list(proba_dic.keys())[list(proba_dic.values()).index(next_max)]
# result = np.array([random_step])
result = np.array([next_step])
if result[0] == -2:
new_position = ((sheep_position[0]-1),sheep_position[1])
elif result[0] == 2:
new_position = ((sheep_position[0]+1),sheep_position[1])
elif result[0] == -1:
new_position = (sheep_position[0],(sheep_position[1]-1))
elif result[0] == 1:
new_position = (sheep_position[0],(sheep_position[1]+1))
# elif result[0] == 0:
# new_position = (sheep_position[0],(sheep_position[1]))
if new_position in obstacles or not self.valid_move(sheep,new_position[1] , new_position[0], field) :
print("new_position:")
print(new_position)
print("pop:")
print(result[0])
proba_dic.pop(result[0])
print (proba_dic)
if proba_dic:
next_max = max(proba_dic.values())
next_step = list(proba_dic.keys())[list(proba_dic.values()).index(next_max)]
result = np.array([next_step])
                    print("next_step:")
                    print(next_step)
                    print(result[0])
            else:
                break
        print("wolf_position")
        print(wolf_position)
        print(SheepTheVictim.manhattan_D(sheep_position, wolf_position))
        print(SheepTheVictim.manhattan_D(SheepTheVictim.new_sheep_position(result, sheep_position), wolf_position))
        print(SheepTheVictim.new_sheep_position(result, sheep_position))
        print("wolf and fence ", SheepTheVictim.fence_between(sheep_position, wolf_position, obstacles))
        # if SheepTheVictim.manhattan_D(SheepTheVictim.new_sheep_position(result, sheep_position), wolf_position) >= 2 or SheepTheVictim.fence_between(sheep_position, wolf_position, obstacles):
        if SheepTheVictim.manhattan_D(sheep_position, wolf_position) > 2 or SheepTheVictim.fence_between(sheep_position, wolf_position, obstacles):
            print("wolf not nearby")
            if food_goal:
                # if (abs(food_goal[0]-sheep_position[0]) == 1 and abs(food_goal[1]-sheep_position[1]) == 0) \
                #         or (abs(food_goal[0]-sheep_position[0]) == 0 and abs(food_goal[1]-sheep_position[1]) == 1):
                #     print("food next to me")
                #     if food_goal[0]-sheep_position[0] == 1:
                #         result[0] = 2
                #     elif food_goal[0]-sheep_position[0] == -1:
                #         result[0] = -2
                #     elif food_goal[1]-sheep_position[1] == 1:
                #         result[0] = 1
                #     elif food_goal[1]-sheep_position[1] == -1:
                #         result[0] = -1
                print(SheepTheVictim.fence_surrounded(SheepTheVictim.new_sheep_position(result, sheep_position), obstacles))
                if not SheepTheVictim.fence_surrounded(sheep_position, obstacles):
                    print("sheep not surrounded by fence")
                    print("new sheep position", SheepTheVictim.new_sheep_position(result, sheep_position))
                    print("food_goal", food_goal)
                    # print("have fence", (SheepTheVictim.new_sheep_position(result, sheep_position)[0], SheepTheVictim.new_sheep_position(result, sheep_position)[1]+1) in obstacles)
                    if SheepTheVictim.manhattan_D(SheepTheVictim.new_sheep_position(result, sheep_position), food_goal) > SheepTheVictim.manhattan_D(sheep_position, food_goal) \
                            and not SheepTheVictim.fence_between(sheep_position, food_goal, obstacles):
                        print("i'm running away from the food!*****************************")
                        if (abs(food_goal[0]-sheep_position[0]) >= 1 and abs(food_goal[1]-sheep_position[1]) == 0) \
                                or (abs(food_goal[0]-sheep_position[0]) == 0 and abs(food_goal[1]-sheep_position[1]) >= 1):
                            # print("food next to me")
                            if food_goal[0]-sheep_position[0] >= 1:
                                if proba_dic:
                                    # print(proba_dic)
                                    proba_dic.pop(result[0])
                                    # print(proba_dic)
                                    if proba_dic:
                                        next_max = max(proba_dic.values())
                                        next_step = list(proba_dic.keys())[list(proba_dic.values()).index(next_max)]
                                        result = np.array([next_step])
                                # result[0] = 2
                            elif food_goal[0]-sheep_position[0] <= -1:
                                if proba_dic:
                                    # print(proba_dic)
                                    proba_dic.pop(result[0])
                                    # print(proba_dic)
                                    if proba_dic:
                                        next_max = max(proba_dic.values())
                                        next_step = list(proba_dic.keys())[list(proba_dic.values()).index(next_max)]
                                        result = np.array([next_step])
                                # result[0] = -2
                            elif food_goal[1]-sheep_position[1] >= 1:
                                if proba_dic:
                                    # print(proba_dic)
                                    proba_dic.pop(result[0])
                                    # print(proba_dic)
                                    if proba_dic:
                                        next_max = max(proba_dic.values())
                                        next_step = list(proba_dic.keys())[list(proba_dic.values()).index(next_max)]
                                        result = np.array([next_step])
                                # result[0] = 1
                            elif food_goal[1]-sheep_position[1] <= -1:
                                if proba_dic:
                                    # print(proba_dic)
                                    proba_dic.pop(result[0])
                                    # print(proba_dic)
                                    if proba_dic:
                                        next_max = max(proba_dic.values())
                                        next_step = list(proba_dic.keys())[list(proba_dic.values()).index(next_max)]
                                        result = np.array([next_step])
                                # result[0] = -1
                        else:
                            result[0] = -result[0]
                    elif SheepTheVictim.fence_nextto(SheepTheVictim.new_sheep_position(result, sheep_position), food_goal, obstacles) and \
                            not SheepTheVictim.food_exist(SheepTheVictim.new_sheep_position(result, sheep_position), food):
                        print("fence between sheep and food")
                        if (abs(food_goal[0]-sheep_position[0]) >= 1 and abs(food_goal[1]-sheep_position[1]) == 0) \
                                or (abs(food_goal[0]-sheep_position[0]) == 0 and abs(food_goal[1]-sheep_position[1]) >= 1):
                            if proba_dic:
                                # print(proba_dic)
                                proba_dic.pop(result[0])
                                # print(proba_dic)
                                if proba_dic:
                                    next_max = max(proba_dic.values())
                                    next_step = list(proba_dic.keys())[list(proba_dic.values()).index(next_max)]
                                    result = np.array([next_step])
                    elif SheepTheVictim.fence_nextto(sheep_position, food_goal, obstacles):
                        print("fence between sheep and food")
                        if (abs(food_goal[0]-sheep_position[0]) >= 1 and abs(food_goal[1]-sheep_position[1]) == 0) \
                                or (abs(food_goal[0]-sheep_position[0]) == 0 and abs(food_goal[1]-sheep_position[1]) >= 1):
                            if proba_dic:
                                # print(proba_dic)
                                proba_dic.pop(result[0])
                                # print(proba_dic)
                                if proba_dic:
                                    next_max = max(proba_dic.values())
                                    next_step = list(proba_dic.keys())[list(proba_dic.values()).index(next_max)]
                                    result = np.array([next_step])
                    elif SheepTheVictim.fence_surrounded(SheepTheVictim.new_sheep_position(result, sheep_position), obstacles) and \
                            not SheepTheVictim.food_exist(SheepTheVictim.new_sheep_position(result, sheep_position), food):
                        if proba_dic:
                            # print(proba_dic)
                            proba_dic.pop(result[0])
                            # print(proba_dic)
                            if proba_dic:
                                next_max = max(proba_dic.values())
                                next_step = list(proba_dic.keys())[list(proba_dic.values()).index(next_max)]
                                result = np.array([next_step])
                    # elif SheepTheVictim.fence_nextto(SheepTheVictim.new_sheep_position(result, sheep_position), food_goal, obstacles):
                    #     print("fence between sheep and food")
                    #     if (abs(food_goal[0]-sheep_position[0]) >= 1 and abs(food_goal[1]-sheep_position[1]) == 0) \
                    #             or (abs(food_goal[0]-sheep_position[0]) == 0 and abs(food_goal[1]-sheep_position[1]) >= 1):
                    #         if proba_dic:
                    #             # print(proba_dic)
                    #             proba_dic.pop(result[0])
                    #             # print(proba_dic)
                    #             if proba_dic:
                    #                 next_max = max(proba_dic.values())
                    #                 next_step = list(proba_dic.keys())[list(proba_dic.values()).index(next_max)]
                    #                 result = np.array([next_step])
                    # else:
                    #     result[0] = result[0]
                    #     if food_goal[0]-sheep_position[0] <= 2:  # food right sheep
                    #         result[0] = random.choice((1, -2, -1))
                    #     elif food_goal[0]-sheep_position[0] >= -2:
                    #         result[0] = random.choice((1, 2, -1))
                    #     elif food_goal[1]-sheep_position[1] <= 2:  # food below sheep
                    #         result[0] = random.choice((-2, 2, -1))
                    #     elif food_goal[1]-sheep_position[1] >= -2:
                    #         result[0] = random.choice((-2, 2, 1))
                elif SheepTheVictim.fence_surrounded(sheep_position, obstacles):
                    # next_sheep_position = SheepTheVictim.new_sheep_position(result, sheep_position)
                    right = (sheep_position[0]+1, sheep_position[1])
                    left = (sheep_position[0]-1, sheep_position[1])
                    above = (sheep_position[0], sheep_position[1]-1)
                    below = (sheep_position[0], sheep_position[1]+1)
                    print("obstacles!!!")
                    if right in obstacles and above in obstacles:
                        if 2 in proba_dic:
                            proba_dic.pop(2)
                        if -1 in proba_dic:
                            proba_dic.pop(-1)
                        if proba_dic:
                            next_max = max(proba_dic.values())
                            next_step = list(proba_dic.keys())[list(proba_dic.values()).index(next_max)]
                            result = np.array([next_step])
                        # result[0] = random.choice((1, -2))
                    elif right in obstacles and below in obstacles:
                        if 1 in proba_dic:
                            proba_dic.pop(1)
                        if 2 in proba_dic:
                            proba_dic.pop(2)
                        if proba_dic:
                            next_max = max(proba_dic.values())
                            next_step = list(proba_dic.keys())[list(proba_dic.values()).index(next_max)]
                            result = np.array([next_step])
                        # result[0] = random.choice((-1, -2))
                    elif left in obstacles and above in obstacles:
                        if -2 in proba_dic:
                            proba_dic.pop(-2)
                        if -1 in proba_dic:
                            proba_dic.pop(-1)
                        if proba_dic:
                            next_max = max(proba_dic.values())
                            next_step = list(proba_dic.keys())[list(proba_dic.values()).index(next_max)]
                            result = np.array([next_step])
                        # result[0] = random.choice((1, 2))
                    elif left in obstacles and below in obstacles:
                        if 1 in proba_dic:
                            proba_dic.pop(1)
                        if -2 in proba_dic:
                            proba_dic.pop(-2)
                        if proba_dic:
                            next_max = max(proba_dic.values())
                            next_step = list(proba_dic.keys())[list(proba_dic.values()).index(next_max)]
                            result = np.array([next_step])
                        # result[0] = random.choice((-1, 2))
                    # if result[0] == 2:
                    #     if (sheep_position[0]+1, sheep_position[1]) in obstacles:
                    #         result[0] = -result[0]
                    # if result[0] == -2:
                    #     if (sheep_position[0]-1, sheep_position[1]) in obstacles:
                    #         result[0] = -result[0]
                    # if result[0] == 1:
                    #     if (sheep_position[0], sheep_position[1]+1) in obstacles:
                    #         result[0] = -result[0]
                    # if result[0] == -1:
                    #     if (sheep_position[0], sheep_position[1]-1) in obstacles:
                    #         result[0] = -result[0]
            elif SheepTheVictim.fence_surrounded(SheepTheVictim.new_sheep_position(result, sheep_position), obstacles) and \
                    not SheepTheVictim.food_exist(SheepTheVictim.new_sheep_position(result, sheep_position), food):
                if proba_dic:
                    # print(proba_dic)
                    proba_dic.pop(result[0])
                    # print(proba_dic)
                    if proba_dic:
                        next_max = max(proba_dic.values())
                        next_step = list(proba_dic.keys())[list(proba_dic.values()).index(next_max)]
                        result = np.array([next_step])
        else:
            # if (SheepTheVictim.manhattan_D(SheepTheVictim.new_sheep_position(result, sheep_position), wolf_position)) < SheepTheVictim.manhattan_D(sheep_position, wolf_position):
            #     proba_dic.pop(result[0])
            #     next_max = max(proba_dic.values())
            #     next_step = list(proba_dic.keys())[list(proba_dic.values()).index(next_max)]
            #     result = np.array([next_step])
            #     new_position = SheepTheVictim.new_sheep_position(result, sheep_position)
            #     print(new_position[0], new_position[1])
            #     print(result)
            #     print(self.valid_move(sheep, 2, 4, field))
            #     if self.valid_move(sheep, new_position[1], new_position[0], field):
            #         print("valid_move")
            #         result = np.array([next_step])
            #     else:
            #         proba_dic.pop(result[0])
            #         next_max = max(proba_dic.values())
            #         next_step = list(proba_dic.keys())[list(proba_dic.values()).index(next_max)]
            #         result = np.array([next_step])
            while True:
                print("else else else")
                # if proba_dic:
                #     proba_dic.pop(result[0])
                #     next_max = max(proba_dic.values())
                #     next_step = list(proba_dic.keys())[list(proba_dic.values()).index(next_max)]
                #     result = np.array([next_step])
                new_position = SheepTheVictim.new_sheep_position(result, sheep_position)
                if self.valid_move(sheep, new_position[1], new_position[0], field) and \
                        (SheepTheVictim.manhattan_D(SheepTheVictim.new_sheep_position(result, sheep_position), wolf_position)) > SheepTheVictim.manhattan_D(sheep_position, wolf_position):
                    print("valid_move")
                    print(result)
                    # result = np.array([next_step])
                    break
                elif proba_dic:
                    print(proba_dic)
                    proba_dic.pop(result[0])
                    print(proba_dic)
                    if proba_dic:
                        next_max = max(proba_dic.values())
                        next_step = list(proba_dic.keys())[list(proba_dic.values()).index(next_max)]
                        result = np.array([next_step])
                        # new_position = SheepTheVictim.new_sheep_position(result, sheep_position)
                    continue
                else:
                    result[0] = 0
                    break
        # result = np.array([next_step])
        print("running!!!!")
        print(result)
        return result, p_state
    def move_wolf(self, p_num, p_state, p_time_remaining, field):
        if 'wolf_model' not in p_state:
            p_state['wolf_model'] = self.get_wolf_model()
        wolf_model = p_state['wolf_model']
        X_wolf = []
        game_features = []
        # preprocess field to get features, add to X_wolf
        # this code is largely copied from the Jupyter Notebook where the models were trained
        # create empty feature array for this game state
        # add features and move to X_wolf and Y_wolf
        if p_num == 1:
            sheep = CELL_SHEEP_2
            wolf = CELL_WOLF_1
            op_wolf = CELL_WOLF_2
            my_sheep = CELL_SHEEP_1
        else:
            sheep = CELL_SHEEP_1
            wolf = CELL_WOLF_2
            op_wolf = CELL_WOLF_1
            my_sheep = CELL_SHEEP_2
        y = 0
        obstacles = []
        for field_row in field:
            x = 0
            for item in field_row:
                if item == sheep:
                    sheep_position = (x, y)
                elif item == wolf:
                    wolf_position = (x, y)
                elif item == CELL_FENCE:
                    obstacles.append((x, y))
                elif item == op_wolf:
                    op_wolf_position = (x, y)
                elif item == my_sheep:
                    my_sheep_position = (x, y)
                x += 1
            y += 1
        # feature 1: determine if the sheep is above the wolf
        if wolf_position[1] - sheep_position[1] > 0:
            w_feature1 = 1
        else:
            w_feature1 = 0
        game_features.append(w_feature1)
        # feature 2: determine if the sheep is below the wolf
        if wolf_position[1] - sheep_position[1] < 0:
            w_feature2 = 1
        else:
            w_feature2 = 0
        game_features.append(w_feature2)
        # feature 3: determine if the sheep is left of the wolf
        if wolf_position[0] - sheep_position[0] > 0:
            w_feature3 = 1
        else:
            w_feature3 = 0
        game_features.append(w_feature3)
        # feature 4: determine if the sheep is right from the wolf
        if wolf_position[0] - sheep_position[0] < 0:
            w_feature4 = 1
        else:
            w_feature4 = 0
        game_features.append(w_feature4)
        # w_feature9 = 0
        # w_feature10 = 0
        # w_feature11 = 0
        # w_feature12 = 0
        # # determine closest fence:
        # fence_distance = 1000
        # fence_goal = None
        # # print(obstacles)
        # for fence_item in obstacles:
        #     # print('loop')
        #     distance = abs(fence_item[0] - sheep_position[0]) + abs(fence_item[1] - sheep_position[1])
        #     if distance < fence_distance:
        #         fence_distance = distance
        #         fence_goal = fence_item
        #     if wolf_position[1] - fence_item[1] == -1:
        #         s_feature9 = 1
        #     # feature 10: determine if closest food is above the sheep
        #     if wolf_position[1] - fence_item[1] == 1:
        #         s_feature10 = 1
        #     # feature 11: determine if closest food is right of the sheep
        #     if wolf_position[0] - fence_item[0] == -1:
        #         s_feature11 = 1
        #     # feature 12: determine if closest food is left of the sheep
        #     if wolf_position[0] - fence_item[0] == 1:
        #         s_feature12 = 1
        # game_features.append(w_feature9)
        # game_features.append(w_feature10)
        # game_features.append(w_feature11)
        # game_features.append(w_feature12)
        s_feature13 = abs(sheep_position[0]-wolf_position[0]) + abs(sheep_position[1]-wolf_position[1])
        # print(s_feature13)
        game_features.append(s_feature13)
        s_feature14 = abs(op_wolf_position[0]-wolf_position[0]) + abs(op_wolf_position[1]-wolf_position[1])
        # print(s_feature14)
        game_features.append(s_feature14)
        X_wolf.append(game_features)
        print("X_wolf: ", X_wolf)
        result = wolf_model.predict(X_wolf)
        proba = wolf_model.predict_proba(X_wolf)
        print("wolf result: ", result)
        # proba_dic = {proba[0][0]: -2, proba[0][1]: -1, : 0, : 1, : 2}
        proba_dic = {-2: proba[0][0], -1: proba[0][1], 0: proba[0][2], 1: proba[0][3], 2: proba[0][4]}
        print("wolf proba_dic:", proba_dic)
        while True:
            if result[0] == 0:
                if 0 in proba_dic.keys():
                    proba_dic.pop(result[0])
                    # remain_choice = list(proba_dic.keys())
                    # random_step = random.choice(remain_choice)
                    next_max = max(proba_dic.values())
                    next_step = list(proba_dic.keys())[list(proba_dic.values()).index(next_max)]
                    # result = np.array([random_step])
                    result = np.array([next_step])
            if result[0] == -2:
                new_position = ((wolf_position[0]-1), wolf_position[1])
            elif result[0] == 2:
                new_position = ((wolf_position[0]+1), wolf_position[1])
            elif result[0] == -1:
                new_position = (wolf_position[0], (wolf_position[1]-1))
            elif result[0] == 1:
                new_position = (wolf_position[0], (wolf_position[1]+1))
            if new_position in obstacles or not self.valid_move(wolf, new_position[1], new_position[0], field):
                print("new_position:")
                print(new_position)
                print("pop:")
                print(result[0])
                proba_dic.pop(result[0])
                print(proba_dic)
                if proba_dic:
                    next_max = max(proba_dic.values())
                    next_step = list(proba_dic.keys())[list(proba_dic.values()).index(next_max)]
                    result = np.array([next_step])
                    print("next_step:")
                    print(next_step)
                    print(result[0])
            else:
                break
        if SheepTheVictim.fence_surrounded(wolf_position, obstacles):
            step_dic = {-2: -2, -1: -1, 1: 1, 2: 2}
            # next_sheep_position = SheepTheVictim.new_sheep_position(result, sheep_position)
            right = (wolf_position[0]+1, wolf_position[1])
            left = (wolf_position[0]-1, wolf_position[1])
            above = (wolf_position[0], wolf_position[1]-1)
            below = (wolf_position[0], wolf_position[1]+1)
            print("obstacles!!!")
            if right in obstacles and above in obstacles:
                if 2 in proba_dic:
                    proba_dic.pop(2)
                if -1 in proba_dic:
                    proba_dic.pop(-1)
                if proba_dic:
                    next_max = max(proba_dic.values())
                    next_step = list(proba_dic.keys())[list(proba_dic.values()).index(next_max)]
                    result = np.array([next_step])
            elif right in obstacles and below in obstacles:
                if 2 in proba_dic:
                    proba_dic.pop(2)
                if 1 in proba_dic:
                    proba_dic.pop(1)
                if proba_dic:
                    next_max = max(proba_dic.values())
                    next_step = list(proba_dic.keys())[list(proba_dic.values()).index(next_max)]
                    result = np.array([next_step])
            elif left in obstacles and above in obstacles:
                if -2 in proba_dic:
                    proba_dic.pop(-2)
                if -1 in proba_dic:
                    proba_dic.pop(-1)
                if proba_dic:
                    next_max = max(proba_dic.values())
                    next_step = list(proba_dic.keys())[list(proba_dic.values()).index(next_max)]
                    result = np.array([next_step])
            elif left in obstacles and below in obstacles:
                if -2 in proba_dic:
                    proba_dic.pop(-2)
                if 1 in proba_dic:
                    proba_dic.pop(1)
                if proba_dic:
                    next_max = max(proba_dic.values())
                    next_step = list(proba_dic.keys())[list(proba_dic.values()).index(next_max)]
                    result = np.array([next_step])
        new_position = SheepTheVictim.new_wolf_position(result, wolf_position)
        if not self.valid_move(wolf, new_position[1], new_position[0], field):
            proba_dic.pop(result[0])
            if proba_dic:
                next_max = max(proba_dic.values())
                next_step = list(proba_dic.keys())[list(proba_dic.values()).index(next_max)]
                result = np.array([next_step])
            else:
                remain_choice = list(step_dic.keys())
                random_step = random.choice(remain_choice)
                result = np.array([random_step])
        return result, p_state
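The argmax pattern that recurs throughout the code above — `list(proba_dic.keys())[list(proba_dic.values()).index(next_max)]` — rebuilds two lists and scans them on every call. The same result can be obtained in one pass with `max` and a key function. A minimal standalone sketch (the probability values here are made up for illustration, not taken from a trained model):

```python
# Hypothetical move-probability dict, keyed by move codes as in the code above.
proba_dic = {-2: 0.10, -1: 0.25, 0: 0.05, 1: 0.40, 2: 0.20}

# Equivalent to:
#   next_max = max(proba_dic.values())
#   next_step = list(proba_dic.keys())[list(proba_dic.values()).index(next_max)]
# but in a single scan: pick the key whose mapped value is largest.
next_step = max(proba_dic, key=proba_dic.get)
print(next_step)  # → 1, the move with the highest probability
```

Note that on ties both forms return the first key in insertion order, so the substitution preserves behavior.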
# deepspeed/profiling/testing/config/__init__.py
from .scalerop import *
from .tensorop import *
from .mathop import *
# tests/components/fritzbox_callmonitor/test_config_flow.py
"""Tests for fritzbox_callmonitor config flow."""
from unittest.mock import PropertyMock

from fritzconnection.core.exceptions import FritzConnectionException, FritzSecurityError
from requests.exceptions import ConnectionError as RequestsConnectionError

from openpeerpower.components.fritzbox_callmonitor.config_flow import (
    RESULT_INSUFFICIENT_PERMISSIONS,
    RESULT_INVALID_AUTH,
    RESULT_MALFORMED_PREFIXES,
    RESULT_NO_DEVIES_FOUND,
)
from openpeerpower.components.fritzbox_callmonitor.const import (
    CONF_PHONEBOOK,
    CONF_PREFIXES,
    DOMAIN,
    FRITZ_ATTR_NAME,
    FRITZ_ATTR_SERIAL_NUMBER,
    SERIAL_NUMBER,
)
from openpeerpower.config_entries import SOURCE_IMPORT, SOURCE_USER
from openpeerpower.const import (
    CONF_HOST,
    CONF_NAME,
    CONF_PASSWORD,
    CONF_PORT,
    CONF_USERNAME,
)
from openpeerpower.core import OpenPeerPower
from openpeerpower.data_entry_flow import (
    RESULT_TYPE_ABORT,
    RESULT_TYPE_CREATE_ENTRY,
    RESULT_TYPE_FORM,
)

from tests.common import MockConfigEntry, patch
MOCK_HOST = "fake_host"
MOCK_PORT = 1234
MOCK_USERNAME = "fake_username"
MOCK_PASSWORD = "fake_password"
MOCK_PHONEBOOK_NAME_1 = "fake_phonebook_name_1"
MOCK_PHONEBOOK_NAME_2 = "fake_phonebook_name_2"
MOCK_PHONEBOOK_ID = 0
MOCK_SERIAL_NUMBER = "fake_serial_number"
MOCK_NAME = "fake_call_monitor_name"

MOCK_USER_DATA = {
    CONF_HOST: MOCK_HOST,
    CONF_PORT: MOCK_PORT,
    CONF_PASSWORD: MOCK_PASSWORD,
    CONF_USERNAME: MOCK_USERNAME,
}
MOCK_CONFIG_ENTRY = {
    CONF_HOST: MOCK_HOST,
    CONF_PORT: MOCK_PORT,
    CONF_PASSWORD: MOCK_PASSWORD,
    CONF_USERNAME: MOCK_USERNAME,
    CONF_PREFIXES: None,
    CONF_PHONEBOOK: MOCK_PHONEBOOK_ID,
    SERIAL_NUMBER: MOCK_SERIAL_NUMBER,
}
MOCK_YAML_CONFIG = {
    CONF_HOST: MOCK_HOST,
    CONF_PORT: MOCK_PORT,
    CONF_PASSWORD: MOCK_PASSWORD,
    CONF_USERNAME: MOCK_USERNAME,
    CONF_PHONEBOOK: MOCK_PHONEBOOK_ID,
    CONF_NAME: MOCK_NAME,
}
MOCK_DEVICE_INFO = {FRITZ_ATTR_SERIAL_NUMBER: MOCK_SERIAL_NUMBER}
MOCK_PHONEBOOK_INFO_1 = {FRITZ_ATTR_NAME: MOCK_PHONEBOOK_NAME_1}
MOCK_PHONEBOOK_INFO_2 = {FRITZ_ATTR_NAME: MOCK_PHONEBOOK_NAME_2}
MOCK_UNIQUE_ID = f"{MOCK_SERIAL_NUMBER}-{MOCK_PHONEBOOK_ID}"
async def test_yaml_import(opp: OpenPeerPower) -> None:
    """Test configuration.yaml import."""
    with patch(
        "openpeerpower.components.fritzbox_callmonitor.base.FritzPhonebook.__init__",
        return_value=None,
    ), patch(
        "openpeerpower.components.fritzbox_callmonitor.base.FritzPhonebook.phonebook_ids",
        new_callable=PropertyMock,
        return_value=[0],
    ), patch(
        "openpeerpower.components.fritzbox_callmonitor.base.FritzPhonebook.phonebook_info",
        return_value=MOCK_PHONEBOOK_INFO_1,
    ), patch(
        "openpeerpower.components.fritzbox_callmonitor.base.FritzPhonebook.modelname",
        return_value=MOCK_PHONEBOOK_NAME_1,
    ), patch(
        "openpeerpower.components.fritzbox_callmonitor.config_flow.FritzConnection.__init__",
        return_value=None,
    ), patch(
        "openpeerpower.components.fritzbox_callmonitor.config_flow.FritzConnection.call_action",
        return_value=MOCK_DEVICE_INFO,
    ), patch(
        "openpeerpower.components.fritzbox_callmonitor.async_setup_entry",
        return_value=True,
    ) as mock_setup_entry:
        result = await opp.config_entries.flow.async_init(
            DOMAIN,
            context={"source": SOURCE_IMPORT},
            data=MOCK_YAML_CONFIG,
        )

    assert result["type"] == RESULT_TYPE_CREATE_ENTRY
    assert result["title"] == MOCK_NAME
    assert result["data"] == MOCK_CONFIG_ENTRY
    assert len(mock_setup_entry.mock_calls) == 1

async def test_setup_one_phonebook(opp: OpenPeerPower) -> None:
    """Test setting up manually."""
    result = await opp.config_entries.flow.async_init(
        DOMAIN,
        context={"source": SOURCE_USER},
    )
    assert result["type"] == RESULT_TYPE_FORM
    assert result["step_id"] == "user"

    with patch(
        "openpeerpower.components.fritzbox_callmonitor.base.FritzPhonebook.__init__",
        return_value=None,
    ), patch(
        "openpeerpower.components.fritzbox_callmonitor.base.FritzPhonebook.phonebook_ids",
        new_callable=PropertyMock,
        return_value=[0],
    ), patch(
        "openpeerpower.components.fritzbox_callmonitor.base.FritzPhonebook.phonebook_info",
        return_value=MOCK_PHONEBOOK_INFO_1,
    ), patch(
        "openpeerpower.components.fritzbox_callmonitor.base.FritzPhonebook.modelname",
        return_value=MOCK_PHONEBOOK_NAME_1,
    ), patch(
        "openpeerpower.components.fritzbox_callmonitor.config_flow.FritzConnection.__init__",
        return_value=None,
    ), patch(
        "openpeerpower.components.fritzbox_callmonitor.config_flow.FritzConnection.call_action",
        return_value=MOCK_DEVICE_INFO,
    ), patch(
        "openpeerpower.components.fritzbox_callmonitor.async_setup_entry",
        return_value=True,
    ) as mock_setup_entry:
        result = await opp.config_entries.flow.async_configure(
            result["flow_id"], user_input=MOCK_USER_DATA
        )

    assert result["type"] == RESULT_TYPE_CREATE_ENTRY
    assert result["title"] == MOCK_PHONEBOOK_NAME_1
    assert result["data"] == MOCK_CONFIG_ENTRY
    assert len(mock_setup_entry.mock_calls) == 1

async def test_setup_multiple_phonebooks(opp: OpenPeerPower) -> None:
    """Test setting up manually."""
    result = await opp.config_entries.flow.async_init(
        DOMAIN,
        context={"source": SOURCE_USER},
    )
    assert result["type"] == RESULT_TYPE_FORM
    assert result["step_id"] == "user"

    with patch(
        "openpeerpower.components.fritzbox_callmonitor.base.FritzPhonebook.__init__",
        return_value=None,
    ), patch(
        "openpeerpower.components.fritzbox_callmonitor.base.FritzPhonebook.phonebook_ids",
        new_callable=PropertyMock,
        return_value=[0, 1],
    ), patch(
        "openpeerpower.components.fritzbox_callmonitor.config_flow.FritzConnection.__init__",
        return_value=None,
    ), patch(
        "openpeerpower.components.fritzbox_callmonitor.config_flow.FritzConnection.call_action",
        return_value=MOCK_DEVICE_INFO,
    ), patch(
        "openpeerpower.components.fritzbox_callmonitor.base.FritzPhonebook.phonebook_info",
        side_effect=[MOCK_PHONEBOOK_INFO_1, MOCK_PHONEBOOK_INFO_2],
    ):
        result = await opp.config_entries.flow.async_configure(
            result["flow_id"], user_input=MOCK_USER_DATA
        )

    assert result["type"] == RESULT_TYPE_FORM
    assert result["step_id"] == "phonebook"
    assert result["errors"] == {}

    with patch(
        "openpeerpower.components.fritzbox_callmonitor.base.FritzPhonebook.modelname",
        return_value=MOCK_PHONEBOOK_NAME_1,
    ), patch(
        "openpeerpower.components.fritzbox_callmonitor.async_setup_entry",
        return_value=True,
    ) as mock_setup_entry:
        result = await opp.config_entries.flow.async_configure(
            result["flow_id"],
            {CONF_PHONEBOOK: MOCK_PHONEBOOK_NAME_2},
        )

    assert result["type"] == RESULT_TYPE_CREATE_ENTRY
    assert result["title"] == MOCK_PHONEBOOK_NAME_2
    assert result["data"] == {
        CONF_HOST: MOCK_HOST,
        CONF_PORT: MOCK_PORT,
        CONF_PASSWORD: MOCK_PASSWORD,
        CONF_USERNAME: MOCK_USERNAME,
        CONF_PREFIXES: None,
        CONF_PHONEBOOK: 1,
        SERIAL_NUMBER: MOCK_SERIAL_NUMBER,
    }
    assert len(mock_setup_entry.mock_calls) == 1

async def test_setup_cannot_connect(opp: OpenPeerPower) -> None:
    """Test we handle cannot connect."""
    result = await opp.config_entries.flow.async_init(
        DOMAIN,
        context={"source": SOURCE_USER},
    )

    with patch(
        "openpeerpower.components.fritzbox_callmonitor.base.FritzPhonebook.__init__",
        side_effect=RequestsConnectionError,
    ):
        result = await opp.config_entries.flow.async_configure(
            result["flow_id"], user_input=MOCK_USER_DATA
        )

    assert result["type"] == RESULT_TYPE_ABORT
    assert result["reason"] == RESULT_NO_DEVIES_FOUND

async def test_setup_insufficient_permissions(opp: OpenPeerPower) -> None:
    """Test we handle insufficient permissions."""
    result = await opp.config_entries.flow.async_init(
        DOMAIN,
        context={"source": SOURCE_USER},
    )

    with patch(
        "openpeerpower.components.fritzbox_callmonitor.base.FritzPhonebook.__init__",
        side_effect=FritzSecurityError,
    ):
        result = await opp.config_entries.flow.async_configure(
            result["flow_id"], user_input=MOCK_USER_DATA
        )

    assert result["type"] == RESULT_TYPE_ABORT
    assert result["reason"] == RESULT_INSUFFICIENT_PERMISSIONS

async def test_setup_invalid_auth(opp: OpenPeerPower) -> None:
    """Test we handle invalid auth."""
    result = await opp.config_entries.flow.async_init(
        DOMAIN,
        context={"source": SOURCE_USER},
    )

    with patch(
        "openpeerpower.components.fritzbox_callmonitor.base.FritzPhonebook.__init__",
        side_effect=FritzConnectionException,
    ):
        result = await opp.config_entries.flow.async_configure(
            result["flow_id"], user_input=MOCK_USER_DATA
        )

    assert result["type"] == RESULT_TYPE_FORM
    assert result["errors"] == {"base": RESULT_INVALID_AUTH}

async def test_options_flow_correct_prefixes(opp: OpenPeerPower) -> None:
    """Test config flow options."""
    config_entry = MockConfigEntry(
        domain=DOMAIN,
        unique_id=MOCK_UNIQUE_ID,
        data=MOCK_CONFIG_ENTRY,
        options={CONF_PREFIXES: None},
    )
    config_entry.add_to_opp(opp)

    with patch(
        "openpeerpower.components.fritzbox_callmonitor.async_setup_entry",
        return_value=True,
    ):
        await opp.config_entries.async_setup(config_entry.entry_id)

        result = await opp.config_entries.options.async_init(config_entry.entry_id)
        assert result["type"] == RESULT_TYPE_FORM
        assert result["step_id"] == "init"

        result = await opp.config_entries.options.async_configure(
            result["flow_id"], user_input={CONF_PREFIXES: "+49, 491234"}
        )

        assert result["type"] == RESULT_TYPE_CREATE_ENTRY
        assert config_entry.options == {CONF_PREFIXES: ["+49", "491234"]}

async def test_options_flow_incorrect_prefixes(opp: OpenPeerPower) -> None:
    """Test config flow options."""
    config_entry = MockConfigEntry(
        domain=DOMAIN,
        unique_id=MOCK_UNIQUE_ID,
        data=MOCK_CONFIG_ENTRY,
        options={CONF_PREFIXES: None},
    )
    config_entry.add_to_opp(opp)

    with patch(
        "openpeerpower.components.fritzbox_callmonitor.async_setup_entry",
        return_value=True,
    ):
        await opp.config_entries.async_setup(config_entry.entry_id)

        result = await opp.config_entries.options.async_init(config_entry.entry_id)
        assert result["type"] == RESULT_TYPE_FORM
        assert result["step_id"] == "init"

        result = await opp.config_entries.options.async_configure(
            result["flow_id"], user_input={CONF_PREFIXES: ""}
        )

        assert result["type"] == RESULT_TYPE_FORM
        assert result["errors"] == {"base": RESULT_MALFORMED_PREFIXES}

async def test_options_flow_no_prefixes(opp: OpenPeerPower) -> None:
    """Test config flow options."""
    config_entry = MockConfigEntry(
        domain=DOMAIN,
        unique_id=MOCK_UNIQUE_ID,
        data=MOCK_CONFIG_ENTRY,
        options={CONF_PREFIXES: None},
    )
    config_entry.add_to_opp(opp)

    with patch(
        "openpeerpower.components.fritzbox_callmonitor.async_setup_entry",
        return_value=True,
    ):
        await opp.config_entries.async_setup(config_entry.entry_id)

        result = await opp.config_entries.options.async_init(config_entry.entry_id)
        assert result["type"] == RESULT_TYPE_FORM
        assert result["step_id"] == "init"

        result = await opp.config_entries.options.async_configure(
            result["flow_id"], user_input={}
        )

        assert result["type"] == RESULT_TYPE_CREATE_ENTRY
        assert config_entry.options == {CONF_PREFIXES: None}
# forecastio/__init__.py
from forecastio.api import load_forecast, manual
# workloads/workload.py
from abc import ABCMeta, abstractmethod
class workload(object):
    __metaclass__ = ABCMeta

    def __init__(self):
        self.conf = None

    @abstractmethod
    def generate(self):
        pass

    @abstractmethod
    def create(self):
        pass

    @abstractmethod
    def load(self):
        pass

    @abstractmethod
    def delete(self):
        pass

    @abstractmethod
    def drop(self):
        pass

    def set_conf(self, conf):
        self.conf = conf

    def get_conf(self):
        return self.conf
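A concrete subclass illustrates how the abstract `workload` base is meant to be used: override the lifecycle hooks, and use `set_conf`/`get_conf` for configuration. This is a hypothetical sketch, not part of the repository; it includes a minimal stand-in for the base class so it runs on its own (note that the Python-2-style `__metaclass__` attribute has no effect in Python 3, so abstract-method enforcement is not applied there):

```python
from abc import ABCMeta, abstractmethod


class workload(object):
    """Minimal stand-in mirroring the base class above."""
    __metaclass__ = ABCMeta  # Python-2 style, as in the original

    def __init__(self):
        self.conf = None

    @abstractmethod
    def generate(self):
        pass

    def set_conf(self, conf):
        self.conf = conf

    def get_conf(self):
        return self.conf


class NoopWorkload(workload):
    """Hypothetical subclass: each overridden hook just records that it ran."""

    def __init__(self):
        super().__init__()
        self.calls = []

    def generate(self):
        self.calls.append("generate")


wl = NoopWorkload()
wl.set_conf({"scale": 1})  # configuration is an opaque object to the base class
wl.generate()
print(wl.get_conf(), wl.calls)  # → {'scale': 1} ['generate']
```

In Python 3, the same contract would normally be written as `class workload(metaclass=ABCMeta)` so that instantiating a subclass with unimplemented abstract hooks raises `TypeError`.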
# simulaqron/run/__init__.py
from .run import run_applications
# dnsimple2/resources/__init__.py
from dnsimple2.resources.accounts import AccountResource
from dnsimple2.resources.base import BaseResource, ResourceList
from dnsimple2.resources.domains import (
CollaboratorResource,
DomainResource,
EmailForwardResource
)
from dnsimple2.resources.user import UserResource
from dnsimple2.resources.whoami import WhoAmIResource
| 34 | 63 | 0.847059 | 33 | 340 | 8.727273 | 0.515152 | 0.225694 | 0.381944 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016502 | 0.108824 | 340 | 9 | 64 | 37.777778 | 0.933993 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.555556 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4eeb717e7d6b0877ff9c7e67693b08cabf77c50c | 163 | py | Python | numpyro/contrib/nn/__init__.py | alexalemi/numpyro | 9a690c7f60dee13ff9ea88ce107400349c77ce77 | [
"MIT"
] | 1 | 2019-06-24T04:27:18.000Z | 2019-06-24T04:27:18.000Z | numpyro/contrib/nn/__init__.py | alexalemi/numpyro | 9a690c7f60dee13ff9ea88ce107400349c77ce77 | [
"MIT"
] | null | null | null | numpyro/contrib/nn/__init__.py | alexalemi/numpyro | 9a690c7f60dee13ff9ea88ce107400349c77ce77 | [
"MIT"
] | null | null | null | from numpyro.contrib.nn.auto_reg_nn import AutoregressiveNN
from numpyro.contrib.nn.masked_dense import MaskedDense
__all__ = ['MaskedDense', 'AutoregressiveNN']
| 32.6 | 59 | 0.834356 | 20 | 163 | 6.45 | 0.6 | 0.170543 | 0.27907 | 0.310078 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.079755 | 163 | 4 | 60 | 40.75 | 0.86 | 0 | 0 | 0 | 0 | 0 | 0.165644 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f60db6db2edfa1d97d25e90bbd128bcd0d15706d | 24 | py | Python | __init__.py | oxford-pcs/ifu_builder | d5efcd96407e7797c0f289f86b0158e0e3b66f70 | [
"MIT"
] | 1 | 2018-01-22T21:53:59.000Z | 2018-01-22T21:53:59.000Z | __init__.py | oxford-pcs/ifu_builder | d5efcd96407e7797c0f289f86b0158e0e3b66f70 | [
"MIT"
] | null | null | null | __init__.py | oxford-pcs/ifu_builder | d5efcd96407e7797c0f289f86b0158e0e3b66f70 | [
"MIT"
] | null | null | null | from instrument import * | 24 | 24 | 0.833333 | 3 | 24 | 6.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 24 | 1 | 24 | 24 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f62fdece9a72ed76f0ba6b7d8880a573f4d40012 | 107 | py | Python | myapp/views.py | agarzon/django-plesk-hello-world | 4f97473074508ab83cdcbae4df959a764a8c1ec6 | [
"MIT"
] | 8 | 2015-03-07T10:53:07.000Z | 2021-10-17T23:28:39.000Z | myapp/views.py | agarzon/django-plesk-hello-world | 4f97473074508ab83cdcbae4df959a764a8c1ec6 | [
"MIT"
] | null | null | null | myapp/views.py | agarzon/django-plesk-hello-world | 4f97473074508ab83cdcbae4df959a764a8c1ec6 | [
"MIT"
] | 1 | 2017-07-16T23:20:55.000Z | 2017-07-16T23:20:55.000Z | from django.http import HttpResponse
def hello(request):
return HttpResponse("<h1>Hello, world</h1>")
| 21.4 | 48 | 0.738318 | 14 | 107 | 5.642857 | 0.785714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021505 | 0.130841 | 107 | 4 | 49 | 26.75 | 0.827957 | 0 | 0 | 0 | 0 | 0 | 0.196262 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
f65c20a6ab43233f8ef85d4ad2bfa3542e85e174 | 541 | py | Python | lista_function.py | tiberiope/curso_python | c1606b18136d379c92aac2878e5f59e2b7732d15 | [
"MIT"
] | null | null | null | lista_function.py | tiberiope/curso_python | c1606b18136d379c92aac2878e5f59e2b7732d15 | [
"MIT"
] | null | null | null | lista_function.py | tiberiope/curso_python | c1606b18136d379c92aac2878e5f59e2b7732d15 | [
"MIT"
] | null | null | null | #lista sofre alteração na função
lista_cor = ['Vermelho', 'Verde', 'Preto', 'Branco', 'Azul']
clone_lista = lista_cor
def lista_funcao(lista):
for cor in lista:
print(cor)
lista.pop()
lista_funcao(clone_lista)
print(lista_cor)
print('----------')
# the list is not modified inside the function
lista_cor = ['Vermelho', 'Verde', 'Preto', 'Branco', 'Azul']
clone_lista = lista_cor[:]
lista_funcao(clone_lista)
print(lista_cor)
print('----------')
# the list is not modified inside the function
lista_funcao(lista_cor[:])
print(lista_cor) | 21.64 | 60 | 0.68207 | 74 | 541 | 4.77027 | 0.256757 | 0.181303 | 0.135977 | 0.186969 | 0.773371 | 0.773371 | 0.773371 | 0.773371 | 0.773371 | 0.773371 | 0 | 0 | 0.144177 | 541 | 25 | 61 | 21.64 | 0.762419 | 0.186691 | 0 | 0.5625 | 0 | 0 | 0.173516 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0 | 0 | 0.0625 | 0.375 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
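The script above turns on the difference between binding a second name to the same list and taking a shallow copy with `[:]`; a minimal self-contained sketch of the same point (variable names here are illustrative, not from the original):

```python
# Assigning a list to a new name aliases it; slicing produces a shallow copy.
colors = ['red', 'green', 'black']

alias = colors        # same object: mutations are visible through both names
clone = colors[:]     # new list object holding the same elements

alias.pop()           # removes 'black' from the shared list
clone.append('blue')  # does not touch the original

print(colors)  # the original lost 'black' through the alias
print(clone)   # the copy kept 'black' and gained 'blue'
```

This is why the second and third calls in the script, which pass `lista_cor[:]`, leave `lista_cor` unchanged while the first call shortens it.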
9c998a6232261d7e264bddb55293800aebda1b85 | 109 | py | Python | app/head/acts/templates/__init__.py | Matexer/BSPR | a503a8795cb0f4cebe2eedd148aa00aea75b570e | [
"MIT"
] | null | null | null | app/head/acts/templates/__init__.py | Matexer/BSPR | a503a8795cb0f4cebe2eedd148aa00aea75b570e | [
"MIT"
] | null | null | null | app/head/acts/templates/__init__.py | Matexer/BSPR | a503a8795cb0f4cebe2eedd148aa00aea75b570e | [
"MIT"
] | null | null | null | from .config_calculation import ConfigCalculationActTemplate
from .calculation import CalculationActTemplate
| 36.333333 | 60 | 0.908257 | 9 | 109 | 10.888889 | 0.666667 | 0.346939 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.073395 | 109 | 2 | 61 | 54.5 | 0.970297 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
9cd5f4c15d89dc865655134b54e4cbcc43eb1f42 | 22 | py | Python | examples/str.zfill/ex2.py | mcorne/python-by-example | 15339c0909c84b51075587a6a66391100971c033 | [
"MIT"
] | null | null | null | examples/str.zfill/ex2.py | mcorne/python-by-example | 15339c0909c84b51075587a6a66391100971c033 | [
"MIT"
] | null | null | null | examples/str.zfill/ex2.py | mcorne/python-by-example | 15339c0909c84b51075587a6a66391100971c033 | [
"MIT"
] | null | null | null | print('-42'.zfill(5))
| 11 | 21 | 0.590909 | 4 | 22 | 3.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 0.045455 | 22 | 1 | 22 | 22 | 0.47619 | 0 | 0 | 0 | 0 | 0 | 0.136364 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
1adf6c1952fbf08a90bc5fb7990e21521891ab6d | 236 | py | Python | QRegisterAccessDriver/QRegisterAccess.py | ShengbingZhou/register | 5b3e2c7d66784bd4a95727d0ea134d233b325d6f | [
"MIT"
] | null | null | null | QRegisterAccessDriver/QRegisterAccess.py | ShengbingZhou/register | 5b3e2c7d66784bd4a95727d0ea134d233b325d6f | [
"MIT"
] | null | null | null | QRegisterAccessDriver/QRegisterAccess.py | ShengbingZhou/register | 5b3e2c7d66784bd4a95727d0ea134d233b325d6f | [
"MIT"
] | null | null | null | class QRegisterAccess:
    def readReg(self, moduleName: str, addr: int) -> int:
value = 0xaa55
return value
    def writeReg(self, moduleName: str, addr: int, value: int) -> bool:
# write value
return True | 26.222222 | 67 | 0.580508 | 26 | 236 | 5.269231 | 0.538462 | 0.189781 | 0.248175 | 0.291971 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018987 | 0.330508 | 236 | 9 | 68 | 26.222222 | 0.848101 | 0.04661 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.026786 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.166667 | 0.833333 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 6 |
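The stub above omits `self` from its method signatures and annotates `writeReg` as returning `int` while it actually returns `True`; a corrected, runnable sketch is below. The `0xaa55` placeholder is kept from the original, and the `"uart0"` module name in the usage lines is purely hypothetical:

```python
class QRegisterAccess:
    """Stub register-access driver; values are placeholders, not real hardware reads."""

    def readReg(self, moduleName: str, addr: int) -> int:
        # A real driver would query the named module here; return a fixed dummy word.
        value = 0xAA55
        return value

    def writeReg(self, moduleName: str, addr: int, value: int) -> bool:
        # A real driver would write `value` to the register at `addr`.
        return True


access = QRegisterAccess()
word = access.readReg("uart0", 0x10)       # dummy read, always 0xAA55 in the stub
ok = access.writeReg("uart0", 0x10, word)  # always reports success in the stub
```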
1ae76084bf8358d40d1382474896ae07bec293ce | 23 | py | Python | remass/tui/__init__.py | snototter/remass | 60494346f676f29a3517bcce30e8aab21cf3d3c6 | [
"MIT"
] | null | null | null | remass/tui/__init__.py | snototter/remass | 60494346f676f29a3517bcce30e8aab21cf3d3c6 | [
"MIT"
] | null | null | null | remass/tui/__init__.py | snototter/remass | 60494346f676f29a3517bcce30e8aab21cf3d3c6 | [
"MIT"
] | null | null | null | from .tui import RATui
| 11.5 | 22 | 0.782609 | 4 | 23 | 4.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173913 | 23 | 1 | 23 | 23 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
21533926cef4c5f6831b03b268b03af469a3d172 | 86,927 | py | Python | miri/datamodels/miri_distortion_models.py | eslavich/MiriTE | 05e25e1222e854fef5a72011f6618fa8fb5eaaff | [
"CNRI-Python"
] | null | null | null | miri/datamodels/miri_distortion_models.py | eslavich/MiriTE | 05e25e1222e854fef5a72011f6618fa8fb5eaaff | [
"CNRI-Python"
] | 24 | 2019-08-09T15:03:20.000Z | 2022-03-04T10:04:48.000Z | miri/datamodels/miri_distortion_models.py | eslavich/MiriTE | 05e25e1222e854fef5a72011f6618fa8fb5eaaff | [
"CNRI-Python"
] | 4 | 2019-06-16T15:03:23.000Z | 2020-12-02T19:51:52.000Z | #!/usr/bin/env python
# -*- coding:utf-8 -*-
"""
An extension to the standard STScI data model, which defines a means of
describing MIRI distortion coefficients.
NOTE: The contents of this data model might change, depending on the
STScI implementation of distortion models.
:Reference:
The STScI jwst.datamodels documentation. See
https://jwst-pipeline.readthedocs.io/en/latest/jwst/datamodels/index.html
:History:
21 Jan 2013: Created
23 Jan 2013: Added plotting.
05 Feb 2013: Reformatted test code using "with" context manager.
Modified to use functions from MiriDataModel.
08 Feb 2013: Replaced 'to_fits' with more generic 'save' method.
21 Feb 2013: Changed default order of MiriDistortionModel from "3" to
"None" (Vincent Geers, DIAS)
25 Feb 2013: Corrected typo.
26 Feb 2013: Changed the default fit type from 'POLY2D' to None.
01 Jul 2013: get_primary_array_name() method added.
12 Sep 2013: Change the way the row and column matrices are checked
so the .copy() method works.
13 Sep 2013: Changed CMATRIX and RMATRIX to BMATRIX and AMATRIX, added
new TMATRIX and MMATRIX.
13 Sep 2013: Corrected some typos.
16 Sep 2013: Removed the ORDER parameter, since it is derivable from
the size of the matrices. BUNIT parameters added to schema
metadata.
30 Oct 2013: All MIRI distortion models (imaging, LRS, MRS) combined
into one module. New model for LRS distortion and wavelength
calibration.
31 Oct 2013: BETA array removed from MRS D2C model.
27 Nov 2013: Modified to match update to distortion table definition in
miri.distortion.lrs.schema
10 Dec 2013: Delimiter in MIRI schema names changed from "." to "_".
10 Apr 2014: Modified for jsonschema draft 4: Functions made more
independent of schema structure. Modified to define data
units using the set_data_units method.
29 Aug 2014: Included new reference file keywords (REFTYPE, AUTHOR, PEDIGREE)
25 Sep 2014: TYPE and REFTYPE are no longer identical.
07 Oct 2014: Added new inverse matrices BIMATRIX, AIMATRIX, TIMATRIX, MIMATRIX
and the new BORESIGHT_OFFSETS table. Removed need for fitref,
changed fitmodel to now contain reference to documentation.
10 Oct 2014: Restored fitref for consistency with the linearity model.
16 Oct 2014: REFTYPE of WCS changed to DISTORTION.
02 Jul 2015: Major change to MRS distortion model. MiriMrsD2CModel (data array
                containing look-up tables) replaced by MiriMrsDistortionModel
(tables containing polynomial coefficients).
09 Jul 2015: Removed duplication of table units between schema and metadata.
Units are now only defined in the metadata.
Use of the fieldnames class variable removed from the code and
deprecated. It is now used only by a few conversion scripts.
Separate data models created for channel 12 and channel 34
distortion data. (Merge these models after CDP-4 delivery.)
11 Sep 2015: Removed duplicated plot method.
17 Nov 2015: Changed column names and HDU names to eliminate fitsverify
problems. NEW DATA MODELS TO BE INSTATED AFTER CDP-5 DELIVERY.
10 Dec 2015: Old and new data models merged into one module.
11 Dec 2015: v2v3 changed to XANYAN in new MRS distortion models.
16 Feb 2016: Imager distortion matrices changed to float64.
09 Jun 2016: Added set_exposure_type() call to MiriImagingDistortionModel,
MiriLrsD2WModel, MiriMrsDistortionModel12, and
MiriMrsDistortionModel34 to set the EXP_TYPE keyword (now
required for DISTORTION files).
16 Jun 2016: Added new metadata keywords to MRS distortion schemas.
Old format MRS data models removed (as MIRISim no longer
uses them).
15 Jun 2017: meta.reffile schema level removed to match changes in the
JWST build 7.1 data models release. meta.reffile.type also
changed to meta.reftype. TYPE keyword replaced by DATAMODL.
12 Jul 2017: Replaced "clobber" parameter with "overwrite".
10 Aug 2018: Updated MRS distortion models to reflect CDP-7 format.
03 Sep 2018: Old CDP-6 variants of the distortion models included.
Updated units for imager distortion model.
14 Nov 2018: Explicitly set table column units based on the tunit definitions
in the schema. Removed redundant function.
30 Jan 2019: self.meta.model_type now set to the name of the STScI data
model this model is designed to match (skipped if there isn't
a corresponding model defined in ancestry.py).
11 Feb 2019: Added missing C, D, E, F matrices to imager distortion model.
22 Mar 2019: Changed the reference type for LRS distortion
from 'DISTORTION' to 'SPECWCS'.
12 Sep 2019: Added CDP8 version of MRS distortion models
while keeping CDP7 versions the default.
07 Oct 2019: Removed '.yaml' suffix from schema references.
26 Mar 2020: Ensure the model_type remains as originally defined when saving
to a file.
11 May 2020: Removed CDP-6 versions of the data model and made the CDP-8
version the default. XANYAN changed back to V2V3.
@author: Steven Beard (UKATC), Vincent Geers (DIAS)
"""
# import warnings
import numpy as np
#import numpy.ma as ma
# Import the MIRI base data model and utilities.
from miri.datamodels.ancestry import get_my_model_type
from miri.datamodels.miri_model_base import MiriDataModel
# The distortion model might be represented by one of these STScI models,
# for example
#from jwst.datamodels.models import Poly2DModel, ICheb2DModel, ILegend2DModel, ...
# List all classes and global functions here.
__all__ = ['MiriImagingDistortionModel', 'MiriLrsD2WModel', \
'MiriMrsDistortionModel12', 'MiriMrsDistortionModel34',
'MiriMrsDistortionModel12_CDP7', 'MiriMrsDistortionModel34_CDP7']
class MiriImagingDistortionModel(MiriDataModel):
"""
A data model for MIRI distortion coefficients, based on the STScI
base model, DataModel.
After a data model has been created, data arrays and data tables
are available as attributes with the same names as their input
parameters, below. Metadata items are available within a ".meta"
attribute tree.
See https://jwst-pipeline.readthedocs.io/en/latest/jwst/datamodels/index.html
:Parameters:
init: shape tuple, file path, file object, pyfits.HDUList, numpy array
An optional initializer for the data model, which can have one
of the following forms:
* None: A default data model with no shape. (If a data array is
          provided in the bmatrix parameter, the shape is derived from
the array.)
* Shape tuple: Initialize with empty data of the given shape.
* File path: Initialize from the given file.
* Readable file object: Initialize from the given file object.
* pyfits.HDUList: Initialize from the given pyfits.HDUList.
bmatrix: numpy array (optional)
An array containing the elements of the B matrix,
describing distortion fit coefficients. Must be 2-D.
A 3rd order polynomial fit will result in a 4x4 matrix.
If a bmatrix parameter is provided, its contents overwrite the
data initialized by the init parameter.
amatrix: numpy array (optional)
An array containing the elements of the A matrix,
describing distortion fit coefficients. Must be 2-D.
A 3rd order polynomial fit will result in a 4x4 matrix.
tmatrix: numpy array (optional)
An array containing the elements of the T matrix,
describing distortion fit coefficients. Must be 2-D.
A 2nd order polynomial fit will result in a 3x3 matrix.
mmatrix: numpy array (optional)
An array containing the elements of the M matrix,
describing distortion fit coefficients. Must be 2-D.
A 2nd order polynomial fit will result in a 3x3 matrix.
bimatrix: numpy array (optional)
An array containing the elements of the inverse B matrix,
describing distortion fit coefficients. Must be 2-D.
A 2nd order polynomial fit will result in a 3x3 matrix.
aimatrix: numpy array (optional)
An array containing the elements of the inverse A matrix,
describing distortion fit coefficients. Must be 2-D.
A 2nd order polynomial fit will result in a 3x3 matrix.
timatrix: numpy array (optional)
An array containing the elements of the inverse T matrix,
describing distortion fit coefficients. Must be 2-D.
A 2nd order polynomial fit will result in a 3x3 matrix.
mimatrix: numpy array (optional)
An array containing the elements of the inverse M matrix,
describing distortion fit coefficients. Must be 2-D.
A 2nd order polynomial fit will result in a 3x3 matrix.
boresight_offsets: list of tuples or numpy record array (optional)
Either: A list of tuples containing (parameter:object, filter:string,
col_offset:number, row_offset:number).
Or: A numpy record array containing the same information as above.
If not specified, it will default to dummy values and no
boresight_offset table will be assumed.
fitref: str (optional)
A string containing a human-readable reference to a document
describing the distortion model.
fitmodel: str (optional)
If a recognised JWST fitting model has been used (e.g. one of the
models in the astropy.modeling package) a unique, machine-readable
string defining the model used. If the model is not known or
doesn't match one of the standard JWST models, leave this keyword
blank and describe the model using the fitref parameter (above).
Some example strings from astropy.modeling: Chebyshev1D', 'Chebyshev2D',
'InverseSIP', 'Legendre1D','Legendre2D', 'Polynomial1D',
'Polynomial2D', etc...
\*\*kwargs:
All other keyword arguments are passed to the DataModel initialiser.
See the jwst.datamodels documentation for the meaning of these keywords.
"""
schema_url = "miri_distortion_imaging.schema"
fieldnames = ('FILTER', 'COL_OFFSET', 'ROW_OFFSET')
def __init__(self, init=None,
bmatrix=None, amatrix=None, tmatrix=None, mmatrix=None,
bimatrix=None, aimatrix=None, timatrix=None, mimatrix=None,
dmatrix=None, cmatrix=None, fmatrix=None, ematrix=None,
dimatrix=None, cimatrix=None, fimatrix=None, eimatrix=None,
fitref=None, fitmodel=None, boresight_offsets=None, **kwargs):
"""
Initialises the MiriImagingDistortionModel class.
Parameters: See class doc string.
"""
super(MiriImagingDistortionModel, self).__init__(init=init, **kwargs)
# Data type is distortion map.
self.meta.reftype = 'DISTORTION'
# Initialise the model type
self._init_data_type()
# This is a reference data model.
self._reference_model()
# Verify the matrices have the correct shape. They are already
# constrained to be 2-D in the schema.
if bmatrix is not None:
bmatrix = np.asarray(bmatrix)
if bmatrix.ndim == 2:
if bmatrix.shape[0] != bmatrix.shape[1]:
strg = "B Matrix should be square: "
strg += "%dx%d matrix provided instead." % bmatrix.shape
raise TypeError(strg)
else:
strg = "B matrix should be 2-D. %d-D array provided." % \
bmatrix.ndim
raise TypeError(strg)
self.bmatrix = bmatrix
if amatrix is not None:
amatrix = np.asarray(amatrix)
if amatrix.ndim == 2:
if amatrix.shape[0] != amatrix.shape[1]:
strg = "A matrix should be square: "
strg += "%dx%d matrix provided instead." % amatrix.shape
raise TypeError(strg)
else:
strg = "A matrix should be 2-D. %d-D array provided." % \
                    amatrix.ndim
raise TypeError(strg)
self.amatrix = amatrix
if tmatrix is not None:
tmatrix = np.asarray(tmatrix)
if tmatrix.ndim == 2:
if tmatrix.shape[0] != tmatrix.shape[1]:
strg = "T matrix should be square: "
strg += "%dx%d matrix provided instead." % tmatrix.shape
raise TypeError(strg)
else:
strg = "T matrix should be 2-D. %d-D array provided." % \
                    tmatrix.ndim
raise TypeError(strg)
self.tmatrix = tmatrix
if mmatrix is not None:
mmatrix = np.asarray(mmatrix)
if mmatrix.ndim == 2:
if mmatrix.shape[0] != mmatrix.shape[1]:
strg = "M matrix should be square: "
strg += "%dx%d matrix provided instead." % mmatrix.shape
raise TypeError(strg)
else:
strg = "M matrix should be 2-D. %d-D array provided." % \
                    mmatrix.ndim
raise TypeError(strg)
self.mmatrix = mmatrix
if dmatrix is not None:
dmatrix = np.asarray(dmatrix)
if dmatrix.ndim == 2:
if dmatrix.shape[0] != dmatrix.shape[1]:
strg = "D Matrix should be square: "
strg += "%dx%d matrix provided instead." % dmatrix.shape
raise TypeError(strg)
else:
strg = "D matrix should be 2-D. %d-D array provided." % \
dmatrix.ndim
raise TypeError(strg)
self.dmatrix = dmatrix
if cmatrix is not None:
cmatrix = np.asarray(cmatrix)
if cmatrix.ndim == 2:
if cmatrix.shape[0] != cmatrix.shape[1]:
strg = "C Matrix should be square: "
strg += "%dx%d matrix provided instead." % cmatrix.shape
raise TypeError(strg)
else:
strg = "C matrix should be 2-D. %d-D array provided." % \
cmatrix.ndim
raise TypeError(strg)
self.cmatrix = cmatrix
if fmatrix is not None:
fmatrix = np.asarray(fmatrix)
if fmatrix.ndim == 2:
if fmatrix.shape[0] != fmatrix.shape[1]:
strg = "F Matrix should be square: "
strg += "%dx%d matrix provided instead." % fmatrix.shape
raise TypeError(strg)
else:
strg = "F matrix should be 2-D. %d-D array provided." % \
fmatrix.ndim
raise TypeError(strg)
self.fmatrix = fmatrix
if ematrix is not None:
ematrix = np.asarray(ematrix)
if ematrix.ndim == 2:
if ematrix.shape[0] != ematrix.shape[1]:
strg = "E Matrix should be square: "
strg += "%dx%d matrix provided instead." % ematrix.shape
raise TypeError(strg)
else:
strg = "E matrix should be 2-D. %d-D array provided." % \
ematrix.ndim
raise TypeError(strg)
self.ematrix = ematrix
if bimatrix is not None:
bimatrix = np.asarray(bimatrix)
if bimatrix.ndim == 2:
if bimatrix.shape[0] != bimatrix.shape[1]:
strg = "BI matrix should be square: "
strg += "%dx%d matrix provided instead." % bimatrix.shape
raise TypeError(strg)
else:
strg = "BI matrix should be 2-D. %d-D array provided." % \
                    bimatrix.ndim
raise TypeError(strg)
self.bimatrix = bimatrix
if aimatrix is not None:
aimatrix = np.asarray(aimatrix)
if aimatrix.ndim == 2:
if aimatrix.shape[0] != aimatrix.shape[1]:
strg = "AI matrix should be square: "
strg += "%dx%d matrix provided instead." % aimatrix.shape
raise TypeError(strg)
else:
strg = "AI matrix should be 2-D. %d-D array provided." % \
                    aimatrix.ndim
raise TypeError(strg)
self.aimatrix = aimatrix
if timatrix is not None:
timatrix = np.asarray(timatrix)
if timatrix.ndim == 2:
if timatrix.shape[0] != timatrix.shape[1]:
strg = "TI matrix should be square: "
strg += "%dx%d matrix provided instead." % timatrix.shape
raise TypeError(strg)
else:
strg = "TI matrix should be 2-D. %d-D array provided." % \
                    timatrix.ndim
raise TypeError(strg)
self.timatrix = timatrix
if mimatrix is not None:
mimatrix = np.asarray(mimatrix)
if mimatrix.ndim == 2:
if mimatrix.shape[0] != mimatrix.shape[1]:
strg = "MI matrix should be square: "
strg += "%dx%d matrix provided instead." % mimatrix.shape
raise TypeError(strg)
else:
strg = "MI matrix should be 2-D. %d-D array provided." % \
                    mimatrix.ndim
raise TypeError(strg)
self.mimatrix = mimatrix
if dimatrix is not None:
dimatrix = np.asarray(dimatrix)
if dimatrix.ndim == 2:
if dimatrix.shape[0] != dimatrix.shape[1]:
strg = "DI Matrix should be square: "
                    strg += "%dx%d matrix provided instead." % dimatrix.shape
                    raise TypeError(strg)
            else:
                strg = "DI matrix should be 2-D. %d-D array provided." % \
dimatrix.ndim
raise TypeError(strg)
self.dimatrix = dimatrix
if cimatrix is not None:
cimatrix = np.asarray(cimatrix)
if cimatrix.ndim == 2:
if cimatrix.shape[0] != cimatrix.shape[1]:
strg = "CI Matrix should be square: "
                    strg += "%dx%d matrix provided instead." % cimatrix.shape
                    raise TypeError(strg)
            else:
                strg = "CI matrix should be 2-D. %d-D array provided." % \
cimatrix.ndim
raise TypeError(strg)
self.cimatrix = cimatrix
if fimatrix is not None:
fimatrix = np.asarray(fimatrix)
if fimatrix.ndim == 2:
if fimatrix.shape[0] != fimatrix.shape[1]:
strg = "FI Matrix should be square: "
                    strg += "%dx%d matrix provided instead." % fimatrix.shape
                    raise TypeError(strg)
            else:
                strg = "FI matrix should be 2-D. %d-D array provided." % \
fimatrix.ndim
raise TypeError(strg)
self.fimatrix = fimatrix
if eimatrix is not None:
eimatrix = np.asarray(eimatrix)
if eimatrix.ndim == 2:
if eimatrix.shape[0] != eimatrix.shape[1]:
strg = "EI Matrix should be square: "
                    strg += "%dx%d matrix provided instead." % eimatrix.shape
                    raise TypeError(strg)
            else:
                strg = "EI matrix should be 2-D. %d-D array provided." % \
eimatrix.ndim
raise TypeError(strg)
self.eimatrix = eimatrix
if boresight_offsets is not None:
try:
self.boresight_offsets = boresight_offsets
except (ValueError, TypeError) as e:
strg = "boresight_offsets must be a numpy record array or list of records."
strg += "\n %s" % str(e)
raise TypeError(strg)
# Copy the units of the these arrays from the schema, if defined.
aunits = self.set_data_units('amatrix')
bunits = self.set_data_units('bmatrix')
tunits = self.set_data_units('tmatrix')
munits = self.set_data_units('mmatrix')
dunits = self.set_data_units('dmatrix')
cunits = self.set_data_units('cmatrix')
funits = self.set_data_units('fmatrix')
eunits = self.set_data_units('ematrix')
biunits = self.set_data_units('bimatrix')
aiunits = self.set_data_units('aimatrix')
tiunits = self.set_data_units('timatrix')
miunits = self.set_data_units('mimatrix')
# Copy the table column units from the schema, if defined.
boresight_units = self.set_table_units('boresight_offsets')
if fitref is not None:
self.meta.fit.reference = fitref
if fitmodel is not None:
self.meta.fit.model = fitmodel
# Define the exposure type (if not already contained in the data model)
# NOTE: This will only define an exposure type when a valid detector
# is defined in the metadata.
if not self.meta.exposure.type:
self.set_exposure_type()
def _init_data_type(self):
# Initialise the data model type
model_type = get_my_model_type( self.__class__.__name__ )
self.meta.model_type = model_type
def on_save(self, path):
super(MiriImagingDistortionModel, self).on_save(path)
# Re-initialise data type on save
self._init_data_type()
def get_primary_array_name(self):
"""
        Returns the name of the "primary" array for this model, which controls
the size of other arrays that are implicitly created.
For this data structure, the primary array's name is "bmatrix"
and not "data".
"""
return 'bmatrix'
def __str__(self):
"""
Return the contents of the distortion map object as a readable
string.
"""
# Start with the data object title, metadata and history
strg = self.get_title(underline=True, underchar="=") + "\n"
strg += self.get_meta_str(underline=True, underchar='-')
if self.meta.fit.model is not None:
strg += "Fit model is \'%s\'\n" % str(self.meta.fit.model)
if self.meta.fit.reference is not None:
strg += "See \'%s\' for a description of the fit.\n" % \
str(self.meta.fit.reference)
strg += self.get_history_str()
strg += self.get_data_str('bmatrix', underline=True, underchar="-")
strg += self.get_data_str('amatrix', underline=True, underchar="-")
strg += self.get_data_str('tmatrix', underline=True, underchar="-")
strg += self.get_data_str('mmatrix', underline=True, underchar="-")
strg += self.get_data_str('dmatrix', underline=True, underchar="-")
strg += self.get_data_str('cmatrix', underline=True, underchar="-")
strg += self.get_data_str('fmatrix', underline=True, underchar="-")
strg += self.get_data_str('ematrix', underline=True, underchar="-")
strg += self.get_data_str('bimatrix', underline=True, underchar="-")
strg += self.get_data_str('aimatrix', underline=True, underchar="-")
strg += self.get_data_str('timatrix', underline=True, underchar="-")
strg += self.get_data_str('mimatrix', underline=True, underchar="-")
strg += self.get_data_str('dimatrix', underline=True, underchar="-")
strg += self.get_data_str('cimatrix', underline=True, underchar="-")
strg += self.get_data_str('fimatrix', underline=True, underchar="-")
strg += self.get_data_str('eimatrix', underline=True, underchar="-")
if self.boresight_offsets is not None:
strg += self.get_data_str('boresight_offsets', underline=True, underchar="-")
else:
strg += "No boresight_offsets."
return strg
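The sixteen near-identical matrix checks in `__init__` above all follow the same pattern (convert with `np.asarray`, require 2-D, require square); a minimal sketch of a shared helper that captures it is below. The name `check_square_matrix` is hypothetical, not part of the MIRI API:

```python
import numpy as np

def check_square_matrix(name, matrix):
    """Validate one distortion matrix: must be a square 2-D array.

    Returns the array converted with np.asarray, or raises TypeError with
    messages in the same style as the checks above.
    """
    matrix = np.asarray(matrix)
    if matrix.ndim != 2:
        raise TypeError("%s matrix should be 2-D. %d-D array provided." %
                        (name, matrix.ndim))
    if matrix.shape[0] != matrix.shape[1]:
        raise TypeError("%s matrix should be square: %dx%d matrix provided instead." %
                        (name, matrix.shape[0], matrix.shape[1]))
    return matrix
```

Each assignment in `__init__` could then reduce to one guarded line, e.g. `if bmatrix is not None: self.bmatrix = check_square_matrix("B", bmatrix)`.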
class MiriLrsD2WModel(MiriDataModel):
"""
A generic data model for a MIRI LRS distortion and wavelength
calibration table.
See MIRI-TR-10020-MPI for a detailed description of the content
of the data model.
After a data model has been created, the wavelength table is available
within the attribute .wavelength_table. Metadata items are available
within a ".meta" attribute tree.
See https://jwst-pipeline.readthedocs.io/en/latest/jwst/datamodels/index.html
:Parameters:
init: shape tuple, file path, file object, pyfits.HDUList, numpy array
An optional initializer for the data model, which can have one
of the following forms:
* None: A default data model with no shape. (If a data array is
provided in the flux parameter, the shape is derived from the
array.)
* Shape tuple: Initialize with empty data of the given shape.
* File path: Initialize from the given file.
* Readable file object: Initialize from the given file object.
* pyfits.HDUList: Initialize from the given pyfits.HDUList.
wavelength_table: list of tuples or numpy record array (optional)
Either: A list of tuples containing (parameter:object, factor:number,
uncertainty:number), giving the wavelength calibration factors valid for
different generic parameters.
Or: A numpy record array containing the same information as above.
A wavelength table must either be defined in the initializer or in
this parameter. A blank table is not allowed.
\*\*kwargs:
All other keyword arguments are passed to the DataModel initialiser.
See the jwst.datamodels documentation for the meaning of these keywords.
"""
schema_url = "miri_distortion_lrs.schema"
fieldnames = ('X_CENTER', 'Y_CENTER', 'WAVELENGTH', 'X0', 'Y0', 'X1', 'Y1', \
'X2', 'Y2', 'X3', 'Y3')
def __init__(self, init=None, wavelength_table=None, **kwargs):
"""
Initialises the MiriLrsD2WModel class.
Parameters: See class doc string.
"""
super(MiriLrsD2WModel, self).__init__(init=init, **kwargs)
# Data type is wavelength calibration world coordinates.
#self.meta.reftype = 'DISTORTION'
self.meta.reftype = 'SPECWCS'
# Initialise the model type
self._init_data_type()
# This is a reference data model.
self._reference_model()
if wavelength_table is not None:
try:
self.wavelength_table = wavelength_table
except (ValueError, TypeError) as e:
strg = "wavelength_table must be a numpy record array or list of records."
strg += "\n %s" % str(e)
raise TypeError(strg)
# Copy the table column units from the schema, if defined.
wavelength_units = self.set_table_units('wavelength_table')
# Define the exposure type (if not already contained in the data model)
# NOTE: This will only define an exposure type when a valid detector
# is defined in the metadata.
if not self.meta.exposure.type:
self.set_exposure_type()
def _init_data_type(self):
# Initialise the data model type
model_type = get_my_model_type( self.__class__.__name__ )
self.meta.model_type = model_type
def on_save(self, path):
super(MiriLrsD2WModel, self).on_save(path)
# Re-initialise data type on save
self._init_data_type()
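# As an illustrative sketch (not part of the delivered model), a
# wavelength_table matching the eleven fieldnames columns can be built from
# plain numpy before being passed to MiriLrsD2WModel. The row values below
# are invented for demonstration only:

```python
import numpy as np

# Column names matching MiriLrsD2WModel.fieldnames.
FIELDNAMES = ('X_CENTER', 'Y_CENTER', 'WAVELENGTH',
              'X0', 'Y0', 'X1', 'Y1', 'X2', 'Y2', 'X3', 'Y3')

# Hypothetical calibration rows: centre pixel, wavelength and the four
# bounding pixel corners. Values are illustrative, not real CDP data.
rows = [
    (61.0, 80.0, 5.0, 60.5, 80.5, 61.5, 80.5, 61.5, 79.5, 60.5, 79.5),
    (61.0, 81.0, 5.1, 60.5, 81.5, 61.5, 81.5, 61.5, 80.5, 60.5, 80.5),
]

# Either the plain list of tuples or this equivalent record array can be
# supplied as the wavelength_table initialiser argument.
wavelength_table = np.array(rows, dtype=[(name, 'f8') for name in FIELDNAMES])
```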
# TODO: Over-complicated data structure needs to be simplified.
class MiriMrsDistortionModel12(MiriDataModel):
"""
A data model for a MIRI MRS distortion model - CHANNEL 12 VARIANT,
based on the STScI base model, DataModel. Old CDP-7 version.
:Parameters:
init: shape tuple, file path, file object, pyfits.HDUList, numpy array
An optional initializer for the data model, which can have one
of the following forms:
* None: A default data model with no shape. (If a data array is
provided in the lambda parameter, the shape is derived from
the array.)
* Shape tuple: Initialize with empty data of the given shape.
* File path: Initialize from the given file.
* Readable file object: Initialize from the given file object.
* pyfits.HDUList: Initialize from the given pyfits.HDUList.
slicenumber: numpy array (optional)
An array containing the elements of the slice array, which
describes the mapping of pixel corners to slice number.
Must be 2-D.
fov_ch1: list of tuples or numpy record array (optional)
Either: A list of tuples containing (alpha_min:value, beta_min:value)
Or: A numpy record array containing the same information as above.
If not specified, no table will be defined.
fov_ch2: list of tuples or numpy record array (optional)
Either: A list of tuples containing (alpha_min:value, beta_min:value)
Or: A numpy record array containing the same information as above.
If not specified, no table will be defined.
alpha_ch1: list of tuples or numpy record array (optional)
Either: A list of tuples containing (...)
Or: A numpy record array containing the same information as above.
If not specified, no table will be defined.
lambda_ch1: list of tuples or numpy record array (optional)
Either: A list of tuples containing (...)
Or: A numpy record array containing the same information as above.
If not specified, no table will be defined.
alpha_ch2: list of tuples or numpy record array (optional)
Either: A list of tuples containing (...)
Or: A numpy record array containing the same information as above.
If not specified, no table will be defined.
lambda_ch2: list of tuples or numpy record array (optional)
Either: A list of tuples containing (...)
Or: A numpy record array containing the same information as above.
If not specified, no table will be defined.
x_ch1: list of tuples or numpy record array (optional)
Either: A list of tuples containing (...)
Or: A numpy record array containing the same information as above.
If not specified, no table will be defined.
y_ch1: list of tuples or numpy record array (optional)
Either: A list of tuples containing (...)
Or: A numpy record array containing the same information as above.
If not specified, no table will be defined.
x_ch2: list of tuples or numpy record array (optional)
Either: A list of tuples containing (...)
Or: A numpy record array containing the same information as above.
If not specified, no table will be defined.
y_ch2: list of tuples or numpy record array (optional)
Either: A list of tuples containing (...)
Or: A numpy record array containing the same information as above.
If not specified, no table will be defined.
albe_v2v3: list of tuples or numpy record array (optional)
Either: A list of tuples containing (...)
Or: A numpy record array containing the same information as above.
If not specified, no table will be defined.
v2v3_albe: list of tuples or numpy record array (optional)
Either: A list of tuples containing (...)
Or: A numpy record array containing the same information as above.
If not specified, no table will be defined.
bzero1: float (optional)
Beta coordinate of the centre of slice 1 of channel 1
bdel1: float (optional)
Slice width (delta beta) for channel 1
bzero2: float (optional)
Beta coordinate of the centre of slice 1 of channel 2
bdel2: float (optional)
Slice width (delta beta) for channel 2
\*\*kwargs:
All other keyword arguments are passed to the DataModel initialiser.
See the jwst.datamodels documentation for the meaning of these keywords.
"""
schema_url = "miri_distortion_mrs12.schema"
fieldnames_fov = ('alpha_min', 'alpha_max')
fieldnames_d2c = ['VAR1']
for i in (0,1,2,3,4):
for j in (0,1,2,3,4):
fieldnames_d2c.append('VAR2_%d_%d' % (i,j))
fieldnames_trans = ['Label']
for i in (0,1):
for j in (0,1):
fieldnames_trans.append('COEFF_%d_%d' % (i,j))
def __init__(self, init=None, slicenumber=None, fov_ch1=None, fov_ch2=None,
alpha_ch1=None, lambda_ch1=None, alpha_ch2=None, lambda_ch2=None,
x_ch1=None, y_ch1=None, x_ch2=None, y_ch2=None,
albe_v2v3=None, v2v3_albe=None, bzero1=None, bdel1=None,
bzero2=None, bdel2=None, **kwargs):
"""
Initialises the MiriMrsDistortionModel12 class.
Parameters: See class doc string.
"""
super(MiriMrsDistortionModel12, self).__init__(init=init, **kwargs)
# Data type is MRS DISTORTION.
self.meta.reftype = 'DISTORTION'
# Initialise the model type
self._init_data_type()
# This is a reference data model.
self._reference_model()
if slicenumber is not None:
self.slicenumber = slicenumber
# Define the beta coordinates and slice widths, if given
if bzero1 is not None:
self.meta.instrument.bzero1 = bzero1
if bdel1 is not None:
self.meta.instrument.bdel1 = bdel1
if bzero2 is not None:
self.meta.instrument.bzero2 = bzero2
if bdel2 is not None:
self.meta.instrument.bdel2 = bdel2
if fov_ch1 is not None:
try:
self.fov_ch1 = fov_ch1
except (ValueError, TypeError) as e:
strg = "fov_ch1 must be a numpy record array or list of records."
strg += "\n %s" % str(e)
raise TypeError(strg)
if fov_ch2 is not None:
try:
self.fov_ch2 = fov_ch2
except (ValueError, TypeError) as e:
strg = "fov_ch2 must be a numpy record array or list of records."
strg += "\n %s" % str(e)
raise TypeError(strg)
if alpha_ch1 is not None:
try:
self.alpha_ch1 = alpha_ch1
except (ValueError, TypeError) as e:
strg = "alpha_ch1 must be a numpy record array or list of records."
strg += "\n %s" % str(e)
raise TypeError(strg)
if lambda_ch1 is not None:
try:
self.lambda_ch1 = lambda_ch1
except (ValueError, TypeError) as e:
strg = "lambda_ch1 must be a numpy record array or list of records."
strg += "\n %s" % str(e)
raise TypeError(strg)
if alpha_ch2 is not None:
try:
self.alpha_ch2 = alpha_ch2
except (ValueError, TypeError) as e:
strg = "alpha_ch2 must be a numpy record array or list of records."
strg += "\n %s" % str(e)
raise TypeError(strg)
if lambda_ch2 is not None:
try:
self.lambda_ch2 = lambda_ch2
except (ValueError, TypeError) as e:
strg = "lambda_ch2 must be a numpy record array or list of records."
strg += "\n %s" % str(e)
raise TypeError(strg)
if x_ch1 is not None:
try:
self.x_ch1 = x_ch1
except (ValueError, TypeError) as e:
strg = "x_ch1 must be a numpy record array or list of records."
strg += "\n %s" % str(e)
raise TypeError(strg)
if y_ch1 is not None:
try:
self.y_ch1 = y_ch1
except (ValueError, TypeError) as e:
strg = "y_ch1 must be a numpy record array or list of records."
strg += "\n %s" % str(e)
raise TypeError(strg)
if x_ch2 is not None:
try:
self.x_ch2 = x_ch2
except (ValueError, TypeError) as e:
strg = "x_ch2 must be a numpy record array or list of records."
strg += "\n %s" % str(e)
raise TypeError(strg)
if y_ch2 is not None:
try:
self.y_ch2 = y_ch2
except (ValueError, TypeError) as e:
strg = "y_ch2 must be a numpy record array or list of records."
strg += "\n %s" % str(e)
raise TypeError(strg)
if albe_v2v3 is not None:
try:
self.albe_to_v2v3 = albe_v2v3
except (ValueError, TypeError) as e:
strg = "albe_v2v3 must be a numpy record array or list of records."
strg += "\n %s" % str(e)
raise TypeError(strg)
if v2v3_albe is not None:
try:
self.v2v3_to_albe = v2v3_albe
except (ValueError, TypeError) as e:
strg = "v2v3_albe must be a numpy record array or list of records."
strg += "\n %s" % str(e)
raise TypeError(strg)
# Copy the table column units from the schema, if defined.
fov_ch1_units = self.set_table_units('fov_ch1')
fov_ch2_units = self.set_table_units('fov_ch2')
alpha_ch1_units = self.set_table_units('alpha_ch1')
alpha_ch2_units = self.set_table_units('alpha_ch2')
lambda_ch1_units = self.set_table_units('lambda_ch1')
lambda_ch2_units = self.set_table_units('lambda_ch2')
x_ch1_units = self.set_table_units('x_ch1')
x_ch2_units = self.set_table_units('x_ch2')
y_ch1_units = self.set_table_units('y_ch1')
y_ch2_units = self.set_table_units('y_ch2')
albe_to_v2v3_units = self.set_table_units('albe_to_v2v3')
v2v3_to_albe_units = self.set_table_units('v2v3_to_albe')
# Define the exposure type (if not already contained in the data model)
# NOTE: This will only define an exposure type when a valid detector
# is defined in the metadata.
if not self.meta.exposure.type:
self.set_exposure_type()
def _init_data_type(self):
# Initialise the data model type
model_type = get_my_model_type( self.__class__.__name__ )
self.meta.model_type = model_type
def on_save(self, path):
super(MiriMrsDistortionModel12, self).on_save(path)
# Re-initialise data type on save
self._init_data_type()
def get_primary_array_name(self):
"""
Returns the name of the "primary" array for this model, which controls
the size of other arrays that are implicitly created.
For this data structure, the primary array's name is "slicenumber"
and not "data".
"""
return 'slicenumber'
def __str__(self):
"""
Return the contents of the D2C map object as a readable
string.
"""
# Start with the data object title, metadata and history
strg = self.get_title(underline=True, underchar="=") + "\n"
strg += self.get_meta_str(underline=True, underchar='-')
strg += self.get_history_str()
strg += self.get_data_str('slicenumber', underline=True, underchar="-")
strg += self.get_data_str('fov_ch1', underline=True, underchar="-")
strg += self.get_data_str('fov_ch2', underline=True, underchar="-")
strg += self.get_data_str('alpha_ch1', underline=True, underchar="-")
strg += self.get_data_str('lambda_ch1', underline=True, underchar="-")
strg += self.get_data_str('alpha_ch2', underline=True, underchar="-")
strg += self.get_data_str('lambda_ch2', underline=True, underchar="-")
strg += self.get_data_str('x_ch1', underline=True, underchar="-")
strg += self.get_data_str('y_ch1', underline=True, underchar="-")
strg += self.get_data_str('x_ch2', underline=True, underchar="-")
strg += self.get_data_str('y_ch2', underline=True, underchar="-")
strg += self.get_data_str('albe_to_v2v3', underline=True, underchar="-")
strg += self.get_data_str('v2v3_to_albe', underline=True, underchar="-")
return strg
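# The class-level loops that build fieldnames_d2c and fieldnames_trans can be
# checked in isolation; the following standalone sketch reproduces the same
# construction:

```python
# One 'VAR1' column plus a 5x5 grid of polynomial-coefficient columns,
# and a 'Label' column plus a 2x2 grid of transform coefficients,
# exactly as generated at class level by the MRS distortion models.
fieldnames_d2c = ['VAR1']
for i in (0, 1, 2, 3, 4):
    for j in (0, 1, 2, 3, 4):
        fieldnames_d2c.append('VAR2_%d_%d' % (i, j))

fieldnames_trans = ['Label']
for i in (0, 1):
    for j in (0, 1):
        fieldnames_trans.append('COEFF_%d_%d' % (i, j))
```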
class MiriMrsDistortionModel12(MiriDataModel):
"""
A data model for a MIRI MRS distortion model - CHANNEL 12 VARIANT,
based on the STScI base model, DataModel.
:Parameters:
init: shape tuple, file path, file object, pyfits.HDUList, numpy array
An optional initializer for the data model, which can have one
of the following forms:
* None: A default data model with no shape. (If a data array is
provided in the lambda parameter, the shape is derived from
the array.)
* Shape tuple: Initialize with empty data of the given shape.
* File path: Initialize from the given file.
* Readable file object: Initialize from the given file object.
* pyfits.HDUList: Initialize from the given pyfits.HDUList.
slicenumber: numpy array (optional)
An array containing the elements of the slice array, which
describes the mapping of pixel corners to slice number.
Must be 2-D.
fov_ch1: list of tuples or numpy record array (optional)
Either: A list of tuples containing (alpha_min:value, beta_min:value)
Or: A numpy record array containing the same information as above.
If not specified, no table will be defined.
fov_ch2: list of tuples or numpy record array (optional)
Either: A list of tuples containing (alpha_min:value, beta_min:value)
Or: A numpy record array containing the same information as above.
If not specified, no table will be defined.
alpha_ch1: list of tuples or numpy record array (optional)
Either: A list of tuples containing (...)
Or: A numpy record array containing the same information as above.
If not specified, no table will be defined.
lambda_ch1: list of tuples or numpy record array (optional)
Either: A list of tuples containing (...)
Or: A numpy record array containing the same information as above.
If not specified, no table will be defined.
alpha_ch2: list of tuples or numpy record array (optional)
Either: A list of tuples containing (...)
Or: A numpy record array containing the same information as above.
If not specified, no table will be defined.
lambda_ch2: list of tuples or numpy record array (optional)
Either: A list of tuples containing (...)
Or: A numpy record array containing the same information as above.
If not specified, no table will be defined.
x_ch1: list of tuples or numpy record array (optional)
Either: A list of tuples containing (...)
Or: A numpy record array containing the same information as above.
If not specified, no table will be defined.
y_ch1: list of tuples or numpy record array (optional)
Either: A list of tuples containing (...)
Or: A numpy record array containing the same information as above.
If not specified, no table will be defined.
x_ch2: list of tuples or numpy record array (optional)
Either: A list of tuples containing (...)
Or: A numpy record array containing the same information as above.
If not specified, no table will be defined.
y_ch2: list of tuples or numpy record array (optional)
Either: A list of tuples containing (...)
Or: A numpy record array containing the same information as above.
If not specified, no table will be defined.
albe_v2v3: list of tuples or numpy record array (optional)
Either: A list of tuples containing (...)
Or: A numpy record array containing the same information as above.
If not specified, no table will be defined.
v2v3_albe: list of tuples or numpy record array (optional)
Either: A list of tuples containing (...)
Or: A numpy record array containing the same information as above.
If not specified, no table will be defined.
bzero1: float (optional)
Beta coordinate of the centre of slice 1 of channel 1
bdel1: float (optional)
Slice width (delta beta) for channel 1
bzero2: float (optional)
Beta coordinate of the centre of slice 1 of channel 2
bdel2: float (optional)
Slice width (delta beta) for channel 2
\*\*kwargs:
All other keyword arguments are passed to the DataModel initialiser.
See the jwst.datamodels documentation for the meaning of these keywords.
"""
schema_url = "miri_distortion_mrs12.schema"
fieldnames_fov = ('alpha_min', 'alpha_max')
fieldnames_d2c = ['VAR1']
for i in (0,1,2,3,4):
for j in (0,1,2,3,4):
fieldnames_d2c.append('VAR2_%d_%d' % (i,j))
fieldnames_trans = ['Label']
for i in (0,1):
for j in (0,1):
fieldnames_trans.append('COEFF_%d_%d' % (i,j))
def __init__(self, init=None, slicenumber=None, fov_ch1=None, fov_ch2=None,
alpha_ch1=None, lambda_ch1=None, alpha_ch2=None, lambda_ch2=None,
x_ch1=None, y_ch1=None, x_ch2=None, y_ch2=None,
albe_v2v3=None, v2v3_albe=None, bzero1=None, bdel1=None,
bzero2=None, bdel2=None, **kwargs):
"""
Initialises the MiriMrsDistortionModel12 class.
Parameters: See class doc string.
"""
super(MiriMrsDistortionModel12, self).__init__(init=init, **kwargs)
# Data type is MRS DISTORTION.
self.meta.reftype = 'DISTORTION'
# Initialise the model type
self._init_data_type()
# This is a reference data model.
self._reference_model()
if slicenumber is not None:
self.slicenumber = slicenumber
# Define the beta coordinates and slice widths, if given
if bzero1 is not None:
self.meta.instrument.bzero1 = bzero1
if bdel1 is not None:
self.meta.instrument.bdel1 = bdel1
if bzero2 is not None:
self.meta.instrument.bzero2 = bzero2
if bdel2 is not None:
self.meta.instrument.bdel2 = bdel2
if fov_ch1 is not None:
try:
self.fov_ch1 = fov_ch1
except (ValueError, TypeError) as e:
strg = "fov_ch1 must be a numpy record array or list of records."
strg += "\n %s" % str(e)
raise TypeError(strg)
if fov_ch2 is not None:
try:
self.fov_ch2 = fov_ch2
except (ValueError, TypeError) as e:
strg = "fov_ch2 must be a numpy record array or list of records."
strg += "\n %s" % str(e)
raise TypeError(strg)
if alpha_ch1 is not None:
try:
self.alpha_ch1 = alpha_ch1
except (ValueError, TypeError) as e:
strg = "alpha_ch1 must be a numpy record array or list of records."
strg += "\n %s" % str(e)
raise TypeError(strg)
if lambda_ch1 is not None:
try:
self.lambda_ch1 = lambda_ch1
except (ValueError, TypeError) as e:
strg = "lambda_ch1 must be a numpy record array or list of records."
strg += "\n %s" % str(e)
raise TypeError(strg)
if alpha_ch2 is not None:
try:
self.alpha_ch2 = alpha_ch2
except (ValueError, TypeError) as e:
strg = "alpha_ch2 must be a numpy record array or list of records."
strg += "\n %s" % str(e)
raise TypeError(strg)
if lambda_ch2 is not None:
try:
self.lambda_ch2 = lambda_ch2
except (ValueError, TypeError) as e:
strg = "lambda_ch2 must be a numpy record array or list of records."
strg += "\n %s" % str(e)
raise TypeError(strg)
if x_ch1 is not None:
try:
self.x_ch1 = x_ch1
except (ValueError, TypeError) as e:
strg = "x_ch1 must be a numpy record array or list of records."
strg += "\n %s" % str(e)
raise TypeError(strg)
if y_ch1 is not None:
try:
self.y_ch1 = y_ch1
except (ValueError, TypeError) as e:
strg = "y_ch1 must be a numpy record array or list of records."
strg += "\n %s" % str(e)
raise TypeError(strg)
if x_ch2 is not None:
try:
self.x_ch2 = x_ch2
except (ValueError, TypeError) as e:
strg = "x_ch2 must be a numpy record array or list of records."
strg += "\n %s" % str(e)
raise TypeError(strg)
if y_ch2 is not None:
try:
self.y_ch2 = y_ch2
except (ValueError, TypeError) as e:
strg = "y_ch2 must be a numpy record array or list of records."
strg += "\n %s" % str(e)
raise TypeError(strg)
if albe_v2v3 is not None:
try:
self.albe_to_v2v3 = albe_v2v3
except (ValueError, TypeError) as e:
strg = "albe_v2v3 must be a numpy record array or list of records."
strg += "\n %s" % str(e)
raise TypeError(strg)
if v2v3_albe is not None:
try:
self.v2v3_to_albe = v2v3_albe
except (ValueError, TypeError) as e:
strg = "v2v3_albe must be a numpy record array or list of records."
strg += "\n %s" % str(e)
raise TypeError(strg)
# Copy the table column units from the schema, if defined.
fov_ch1_units = self.set_table_units('fov_ch1')
fov_ch2_units = self.set_table_units('fov_ch2')
alpha_ch1_units = self.set_table_units('alpha_ch1')
alpha_ch2_units = self.set_table_units('alpha_ch2')
lambda_ch1_units = self.set_table_units('lambda_ch1')
lambda_ch2_units = self.set_table_units('lambda_ch2')
x_ch1_units = self.set_table_units('x_ch1')
x_ch2_units = self.set_table_units('x_ch2')
y_ch1_units = self.set_table_units('y_ch1')
y_ch2_units = self.set_table_units('y_ch2')
albe_to_v2v3_units = self.set_table_units('albe_to_v2v3')
v2v3_to_albe_units = self.set_table_units('v2v3_to_albe')
# Define the exposure type (if not already contained in the data model)
# NOTE: This will only define an exposure type when a valid detector
# is defined in the metadata.
if not self.meta.exposure.type:
self.set_exposure_type()
def _init_data_type(self):
# Initialise the data model type
model_type = get_my_model_type( self.__class__.__name__ )
self.meta.model_type = model_type
def on_save(self, path):
super(MiriMrsDistortionModel12, self).on_save(path)
# Re-initialise data type on save
self._init_data_type()
def get_primary_array_name(self):
"""
Returns the name of the "primary" array for this model, which controls
the size of other arrays that are implicitly created.
For this data structure, the primary array's name is "slicenumber"
and not "data".
"""
return 'slicenumber'
def __str__(self):
"""
Return the contents of the D2C map object as a readable
string.
"""
# Start with the data object title, metadata and history
strg = self.get_title(underline=True, underchar="=") + "\n"
strg += self.get_meta_str(underline=True, underchar='-')
strg += self.get_history_str()
strg += self.get_data_str('slicenumber', underline=True, underchar="-")
strg += self.get_data_str('fov_ch1', underline=True, underchar="-")
strg += self.get_data_str('fov_ch2', underline=True, underchar="-")
strg += self.get_data_str('alpha_ch1', underline=True, underchar="-")
strg += self.get_data_str('lambda_ch1', underline=True, underchar="-")
strg += self.get_data_str('alpha_ch2', underline=True, underchar="-")
strg += self.get_data_str('lambda_ch2', underline=True, underchar="-")
strg += self.get_data_str('x_ch1', underline=True, underchar="-")
strg += self.get_data_str('y_ch1', underline=True, underchar="-")
strg += self.get_data_str('x_ch2', underline=True, underchar="-")
strg += self.get_data_str('y_ch2', underline=True, underchar="-")
strg += self.get_data_str('albe_to_v2v3', underline=True, underchar="-")
strg += self.get_data_str('v2v3_to_albe', underline=True, underchar="-")
return strg
# TODO: Over-complicated data structure needs to be simplified.
class MiriMrsDistortionModel34(MiriDataModel):
"""
A data model for a MIRI MRS distortion model - CHANNEL 34 VARIANT,
based on the STScI base model, DataModel. Old CDP-7 version.
:Parameters:
init: shape tuple, file path, file object, pyfits.HDUList, numpy array
An optional initializer for the data model, which can have one
of the following forms:
* None: A default data model with no shape. (If a data array is
provided in the lambda parameter, the shape is derived from
the array.)
* Shape tuple: Initialize with empty data of the given shape.
* File path: Initialize from the given file.
* Readable file object: Initialize from the given file object.
* pyfits.HDUList: Initialize from the given pyfits.HDUList.
slicenumber: numpy array (optional)
An array containing the elements of the slice array, which
describes the mapping of pixel corners to slice number.
Must be 2-D.
fov_ch3: list of tuples or numpy record array (optional)
Either: A list of tuples containing (alpha_min:value, beta_min:value)
Or: A numpy record array containing the same information as above.
If not specified, no table will be defined.
fov_ch4: list of tuples or numpy record array (optional)
Either: A list of tuples containing (alpha_min:value, beta_min:value)
Or: A numpy record array containing the same information as above.
If not specified, no table will be defined.
alpha_ch3: list of tuples or numpy record array (optional)
Either: A list of tuples containing (...)
Or: A numpy record array containing the same information as above.
If not specified, no table will be defined.
lambda_ch3: list of tuples or numpy record array (optional)
Either: A list of tuples containing (...)
Or: A numpy record array containing the same information as above.
If not specified, no table will be defined.
alpha_ch4: list of tuples or numpy record array (optional)
Either: A list of tuples containing (...)
Or: A numpy record array containing the same information as above.
If not specified, no table will be defined.
lambda_ch4: list of tuples or numpy record array (optional)
Either: A list of tuples containing (...)
Or: A numpy record array containing the same information as above.
If not specified, no table will be defined.
x_ch3: list of tuples or numpy record array (optional)
Either: A list of tuples containing (...)
Or: A numpy record array containing the same information as above.
If not specified, no table will be defined.
y_ch3: list of tuples or numpy record array (optional)
Either: A list of tuples containing (...)
Or: A numpy record array containing the same information as above.
If not specified, no table will be defined.
x_ch4: list of tuples or numpy record array (optional)
Either: A list of tuples containing (...)
Or: A numpy record array containing the same information as above.
If not specified, no table will be defined.
y_ch4: list of tuples or numpy record array (optional)
Either: A list of tuples containing (...)
Or: A numpy record array containing the same information as above.
If not specified, no table will be defined.
albe_v2v3: list of tuples or numpy record array (optional)
Either: A list of tuples containing (...)
Or: A numpy record array containing the same information as above.
If not specified, no table will be defined.
v2v3_albe: list of tuples or numpy record array (optional)
Either: A list of tuples containing (...)
Or: A numpy record array containing the same information as above.
If not specified, no table will be defined.
bzero3: float (optional)
Beta coordinate of the centre of slice 1 of channel 3
bdel3: float (optional)
Slice width (delta beta) for channel 3
bzero4: float (optional)
Beta coordinate of the centre of slice 1 of channel 4
bdel4: float (optional)
Slice width (delta beta) for channel 4
\*\*kwargs:
All other keyword arguments are passed to the DataModel initialiser.
See the jwst.datamodels documentation for the meaning of these keywords.
"""
schema_url = "miri_distortion_mrs34.schema"
fieldnames_fov = ('alpha_min', 'alpha_max')
fieldnames_d2c = ['VAR1']
for i in (0,1,2,3,4):
for j in (0,1,2,3,4):
fieldnames_d2c.append('VAR2_%d_%d' % (i,j))
fieldnames_trans = ['Label']
for i in (0,1):
for j in (0,1):
fieldnames_trans.append('COEFF_%d_%d' % (i,j))
def __init__(self, init=None, slicenumber=None, fov_ch3=None, fov_ch4=None,
alpha_ch3=None, lambda_ch3=None, alpha_ch4=None, lambda_ch4=None,
x_ch3=None, y_ch3=None, x_ch4=None, y_ch4=None,
albe_v2v3=None, v2v3_albe=None, bzero3=None, bdel3=None,
bzero4=None, bdel4=None, **kwargs):
"""
Initialises the MiriMrsDistortionModel34 class.
Parameters: See class doc string.
"""
super(MiriMrsDistortionModel34, self).__init__(init=init, **kwargs)
# Data type is MRS DISTORTION.
self.meta.reftype = 'DISTORTION'
# Initialise the model type
self._init_data_type()
# This is a reference data model.
self._reference_model()
if slicenumber is not None:
self.slicenumber = slicenumber
# Define the beta coordinates and slice widths, if given
if bzero3 is not None:
self.meta.instrument.bzero3 = bzero3
if bdel3 is not None:
self.meta.instrument.bdel3 = bdel3
if bzero4 is not None:
self.meta.instrument.bzero4 = bzero4
if bdel4 is not None:
self.meta.instrument.bdel4 = bdel4
if fov_ch3 is not None:
try:
self.fov_ch3 = fov_ch3
except (ValueError, TypeError) as e:
strg = "fov_ch3 must be a numpy record array or list of records."
strg += "\n %s" % str(e)
raise TypeError(strg)
if fov_ch4 is not None:
try:
self.fov_ch4 = fov_ch4
except (ValueError, TypeError) as e:
strg = "fov_ch4 must be a numpy record array or list of records."
strg += "\n %s" % str(e)
raise TypeError(strg)
if alpha_ch3 is not None:
try:
self.alpha_ch3 = alpha_ch3
except (ValueError, TypeError) as e:
strg = "alpha_ch3 must be a numpy record array or list of records."
strg += "\n %s" % str(e)
raise TypeError(strg)
if lambda_ch3 is not None:
try:
self.lambda_ch3 = lambda_ch3
except (ValueError, TypeError) as e:
strg = "lambda_ch3 must be a numpy record array or list of records."
strg += "\n %s" % str(e)
raise TypeError(strg)
if alpha_ch4 is not None:
try:
self.alpha_ch4 = alpha_ch4
except (ValueError, TypeError) as e:
strg = "alpha_ch4 must be a numpy record array or list of records."
strg += "\n %s" % str(e)
raise TypeError(strg)
if lambda_ch4 is not None:
try:
self.lambda_ch4 = lambda_ch4
except (ValueError, TypeError) as e:
strg = "lambda_ch4 must be a numpy record array or list of records."
strg += "\n %s" % str(e)
raise TypeError(strg)
if x_ch3 is not None:
try:
self.x_ch3 = x_ch3
except (ValueError, TypeError) as e:
strg = "x_ch3 must be a numpy record array or list of records."
strg += "\n %s" % str(e)
raise TypeError(strg)
if y_ch3 is not None:
try:
self.y_ch3 = y_ch3
except (ValueError, TypeError) as e:
strg = "y_ch3 must be a numpy record array or list of records."
strg += "\n %s" % str(e)
raise TypeError(strg)
if x_ch4 is not None:
try:
self.x_ch4 = x_ch4
except (ValueError, TypeError) as e:
strg = "x_ch4 must be a numpy record array or list of records."
strg += "\n %s" % str(e)
raise TypeError(strg)
if y_ch4 is not None:
try:
self.y_ch4 = y_ch4
except (ValueError, TypeError) as e:
strg = "y_ch4 must be a numpy record array or list of records."
strg += "\n %s" % str(e)
raise TypeError(strg)
if albe_v2v3 is not None:
try:
self.albe_to_v2v3 = albe_v2v3
except (ValueError, TypeError) as e:
strg = "albe_v2v3 must be a numpy record array or list of records."
strg += "\n %s" % str(e)
raise TypeError(strg)
if v2v3_albe is not None:
try:
self.v2v3_to_albe = v2v3_albe
except (ValueError, TypeError) as e:
strg = "v2v3_albe must be a numpy record array or list of records."
strg += "\n %s" % str(e)
raise TypeError(strg)
# Copy the table column units from the schema, if defined.
fov_ch3_units = self.set_table_units('fov_ch3')
fov_ch4_units = self.set_table_units('fov_ch4')
alpha_ch3_units = self.set_table_units('alpha_ch3')
alpha_ch4_units = self.set_table_units('alpha_ch4')
lambda_ch3_units = self.set_table_units('lambda_ch3')
lambda_ch4_units = self.set_table_units('lambda_ch4')
x_ch3_units = self.set_table_units('x_ch3')
x_ch4_units = self.set_table_units('x_ch4')
y_ch3_units = self.set_table_units('y_ch3')
y_ch4_units = self.set_table_units('y_ch4')
albe_to_v2v3_units = self.set_table_units('albe_to_v2v3')
v2v3_to_albe_units = self.set_table_units('v2v3_to_albe')
# Define the exposure type (if not already contained in the data model)
# NOTE: This will only define an exposure type when a valid detector
# is defined in the metadata.
if not self.meta.exposure.type:
self.set_exposure_type()
def _init_data_type(self):
# Initialise the data model type
model_type = get_my_model_type( self.__class__.__name__ )
self.meta.model_type = model_type
def on_save(self, path):
super(MiriMrsDistortionModel34, self).on_save(path)
# Re-initialise data type on save
self._init_data_type()
def get_primary_array_name(self):
"""
Returns the name of the "primary" array for this model, which controls
the size of other arrays that are implicitly created.
For this data structure, the primary array's name is "slicenumber"
and not "data".
"""
return 'slicenumber'
def __str__(self):
"""
Return the contents of the D2C map object as a readable
string.
"""
# Start with the data object title, metadata and history
strg = self.get_title_and_metadata()
strg += self.get_data_str('slicenumber', underline=True, underchar="-")
strg += self.get_data_str('fov_ch3', underline=True, underchar="-")
strg += self.get_data_str('fov_ch4', underline=True, underchar="-")
strg += self.get_data_str('alpha_ch3', underline=True, underchar="-")
strg += self.get_data_str('lambda_ch3', underline=True, underchar="-")
strg += self.get_data_str('alpha_ch4', underline=True, underchar="-")
strg += self.get_data_str('lambda_ch4', underline=True, underchar="-")
strg += self.get_data_str('x_ch3', underline=True, underchar="-")
strg += self.get_data_str('y_ch3', underline=True, underchar="-")
strg += self.get_data_str('x_ch4', underline=True, underchar="-")
strg += self.get_data_str('y_ch4', underline=True, underchar="-")
strg += self.get_data_str('albe_to_v2v3', underline=True, underchar="-")
strg += self.get_data_str('v2v3_to_albe', underline=True, underchar="-")
return strg
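# The TODO notes above flag this structure as over-complicated; one obvious
# simplification is to collapse the repeated try/except table assignments
# into a single helper. A minimal sketch follows (assign_table is a
# hypothetical helper, not part of the current API):

```python
def assign_table(model, name, value):
    """
    Assign a table attribute on the given model, converting validation
    errors into a TypeError with the same message format used by the
    MIRI distortion models.
    """
    try:
        setattr(model, name, value)
    except (ValueError, TypeError) as e:
        strg = "%s must be a numpy record array or list of records." % name
        strg += "\n %s" % str(e)
        raise TypeError(strg)

# With such a helper, each model's __init__ could loop over its optional
# table arguments instead of repeating the try/except block, e.g.:
#     for name, value in tables.items():
#         if value is not None:
#             assign_table(self, name, value)
```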
class MiriMrsDistortionModel34(MiriDataModel):
"""
A data model for a MIRI MRS distortion model - CHANNEL 34 VARIANT,
based on the STScI base model, DataModel.
:Parameters:
init: shape tuple, file path, file object, pyfits.HDUList, numpy array
An optional initializer for the data model, which can have one
of the following forms:
* None: A default data model with no shape. (If a data array is
provided in the lambda parameter, the shape is derived from
the array.)
* Shape tuple: Initialize with empty data of the given shape.
* File path: Initialize from the given file.
* Readable file object: Initialize from the given file object.
* pyfits.HDUList: Initialize from the given pyfits.HDUList.
slicenumber: numpy array (optional)
An array containing the elements of the slice array, which
describes the mapping of pixel corners to slice number.
Must be 2-D.
fov_ch3: list of tuples or numpy record array (optional)
Either: A list of tuples containing (alpha_min:value, beta_min:value)
Or: A numpy record array containing the same information as above.
If not specified, no table will be defined.
fov_ch4: list of tuples or numpy record array (optional)
Either: A list of tuples containing (alpha_min:value, alpha_max:value),
matching the fieldnames_fov columns defined below.
Or: A numpy record array containing the same information as above.
If not specified, no table will be defined.
alpha_ch3: list of tuples or numpy record array (optional)
Either: A list of tuples containing (...)
Or: A numpy record array containing the same information as above.
If not specified, no table will be defined.
lambda_ch3: list of tuples or numpy record array (optional)
Either: A list of tuples containing (...)
Or: A numpy record array containing the same information as above.
If not specified, no table will be defined.
alpha_ch4: list of tuples or numpy record array (optional)
Either: A list of tuples containing (...)
Or: A numpy record array containing the same information as above.
If not specified, no table will be defined.
lambda_ch4: list of tuples or numpy record array (optional)
Either: A list of tuples containing (...)
Or: A numpy record array containing the same information as above.
If not specified, no table will be defined.
x_ch3: list of tuples or numpy record array (optional)
Either: A list of tuples containing (...)
Or: A numpy record array containing the same information as above.
If not specified, no table will be defined.
y_ch3: list of tuples or numpy record array (optional)
Either: A list of tuples containing (...)
Or: A numpy record array containing the same information as above.
If not specified, no table will be defined.
x_ch4: list of tuples or numpy record array (optional)
Either: A list of tuples containing (...)
Or: A numpy record array containing the same information as above.
If not specified, no table will be defined.
y_ch4: list of tuples or numpy record array (optional)
Either: A list of tuples containing (...)
Or: A numpy record array containing the same information as above.
If not specified, no table will be defined.
albe_v2v3: list of tuples or numpy record array (optional)
Either: A list of tuples containing (...)
Or: A numpy record array containing the same information as above.
If not specified, no table will be defined.
v2v3_albe: list of tuples or numpy record array (optional)
Either: A list of tuples containing (...)
Or: A numpy record array containing the same information as above.
If not specified, no table will be defined.
bzero3: float (optional)
Beta coordinate of the centre of slice 1 of channel 3
bdel3: float (optional)
Slice width (delta beta) for channel 3
bzero4: float (optional)
Beta coordinate of the centre of slice 1 of channel 4
bdel4: float (optional)
Slice width (delta beta) for channel 4
\*\*kwargs:
All other keyword arguments are passed to the DataModel initialiser.
See the jwst.datamodels documentation for the meaning of these keywords.
"""
schema_url = "miri_distortion_mrs34.schema"
fieldnames_fov = ('alpha_min', 'alpha_max')
fieldnames_d2c = ['VAR1']
for i in (0,1,2,3,4):
for j in (0,1,2,3,4):
fieldnames_d2c.append('VAR2_%d_%d' % (i,j))
fieldnames_trans = ['Label']
for i in (0,1):
for j in (0,1):
fieldnames_trans.append('COEFF_%d_%d' % (i,j))
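The class-level loops above build the flat column-name lists used by the distortion tables (one label column plus a 5x5 or 2x2 block of coefficients). A dependency-free sketch of the same generation — the helper name `make_d2c_fieldnames` is illustrative, not part of the model:

```python
def make_d2c_fieldnames():
    # 'VAR1' label column plus a 5x5 grid of 'VAR2_i_j' coefficient columns,
    # mirroring the fieldnames_d2c loop above.
    names = ['VAR1']
    for i in (0, 1, 2, 3, 4):
        for j in (0, 1, 2, 3, 4):
            names.append('VAR2_%d_%d' % (i, j))
    return names

def make_trans_fieldnames():
    # 'Label' column plus a 2x2 grid of 'COEFF_i_j' columns,
    # mirroring the fieldnames_trans loop above.
    return ['Label'] + ['COEFF_%d_%d' % (i, j)
                        for i in (0, 1) for j in (0, 1)]
```

Each d2c record therefore carries 26 values, which matches the 26-element test tuples in the `__main__` section of this module.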
def __init__(self, init=None, slicenumber=None, fov_ch3=None, fov_ch4=None,
alpha_ch3=None, lambda_ch3=None, alpha_ch4=None, lambda_ch4=None,
x_ch3=None, y_ch3=None, x_ch4=None, y_ch4=None,
albe_v2v3=None, v2v3_albe=None, bzero3=None, bdel3=None,
bzero4=None, bdel4=None, **kwargs):
"""
Initialises the MiriMrsDistortionModel34 class.
Parameters: See class doc string.
"""
super(MiriMrsDistortionModel34, self).__init__(init=init, **kwargs)
# Data type is MRS DISTORTION.
self.meta.reftype = 'DISTORTION'
# Initialise the model type
self._init_data_type()
# This is a reference data model.
self._reference_model()
if slicenumber is not None:
self.slicenumber = slicenumber
# Define the beta coordinates and slice widths, if given
if bzero3 is not None:
self.meta.instrument.bzero3 = bzero3
if bdel3 is not None:
self.meta.instrument.bdel3 = bdel3
if bzero4 is not None:
self.meta.instrument.bzero4 = bzero4
if bdel4 is not None:
self.meta.instrument.bdel4 = bdel4
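The bzero/bdel metadata stored above give the beta coordinate of the centre of slice 1 and the slice width for each channel. Assuming slice centres are evenly spaced — an assumption made here for illustration; the model itself only stores the two numbers — the centre of slice n would be:

```python
def slice_beta_centre(bzero, bdel, n):
    # Hedged sketch: beta of the centre of slice n, assuming evenly spaced
    # slices starting from bzero (centre of slice 1) with width bdel.
    return bzero + (n - 1) * bdel
```

For example, with the channel-3 test values used later in this module (bzero3=-1.772, bdel3=0.177), slice 1 is centred at -1.772 and slice 2 at roughly -1.595.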
if fov_ch3 is not None:
try:
self.fov_ch3 = fov_ch3
except (ValueError, TypeError) as e:
strg = "fov_ch3 must be a numpy record array or list of records."
strg += "\n %s" % str(e)
raise TypeError(strg)
if fov_ch4 is not None:
try:
self.fov_ch4 = fov_ch4
except (ValueError, TypeError) as e:
strg = "fov_ch4 must be a numpy record array or list of records."
strg += "\n %s" % str(e)
raise TypeError(strg)
if alpha_ch3 is not None:
try:
self.alpha_ch3 = alpha_ch3
except (ValueError, TypeError) as e:
strg = "alpha_ch3 must be a numpy record array or list of records."
strg += "\n %s" % str(e)
raise TypeError(strg)
if lambda_ch3 is not None:
try:
self.lambda_ch3 = lambda_ch3
except (ValueError, TypeError) as e:
strg = "lambda_ch3 must be a numpy record array or list of records."
strg += "\n %s" % str(e)
raise TypeError(strg)
if alpha_ch4 is not None:
try:
self.alpha_ch4 = alpha_ch4
except (ValueError, TypeError) as e:
strg = "alpha_ch4 must be a numpy record array or list of records."
strg += "\n %s" % str(e)
raise TypeError(strg)
if lambda_ch4 is not None:
try:
self.lambda_ch4 = lambda_ch4
except (ValueError, TypeError) as e:
strg = "lambda_ch4 must be a numpy record array or list of records."
strg += "\n %s" % str(e)
raise TypeError(strg)
if x_ch3 is not None:
try:
self.x_ch3 = x_ch3
except (ValueError, TypeError) as e:
strg = "x_ch3 must be a numpy record array or list of records."
strg += "\n %s" % str(e)
raise TypeError(strg)
if y_ch3 is not None:
try:
self.y_ch3 = y_ch3
except (ValueError, TypeError) as e:
strg = "y_ch3 must be a numpy record array or list of records."
strg += "\n %s" % str(e)
raise TypeError(strg)
if x_ch4 is not None:
try:
self.x_ch4 = x_ch4
except (ValueError, TypeError) as e:
strg = "x_ch4 must be a numpy record array or list of records."
strg += "\n %s" % str(e)
raise TypeError(strg)
if y_ch4 is not None:
try:
self.y_ch4 = y_ch4
except (ValueError, TypeError) as e:
strg = "y_ch4 must be a numpy record array or list of records."
strg += "\n %s" % str(e)
raise TypeError(strg)
if albe_v2v3 is not None:
try:
self.albe_to_v2v3 = albe_v2v3
except (ValueError, TypeError) as e:
strg = "albe_v2v3 must be a numpy record array or list of records."
strg += "\n %s" % str(e)
raise TypeError(strg)
if v2v3_albe is not None:
try:
self.v2v3_to_albe = v2v3_albe
except (ValueError, TypeError) as e:
strg = "v2v3_albe must be a numpy record array or list of records."
strg += "\n %s" % str(e)
raise TypeError(strg)
# Copy the table column units from the schema, if defined.
fov_ch3_units = self.set_table_units('fov_ch3')
fov_ch4_units = self.set_table_units('fov_ch4')
alpha_ch3_units = self.set_table_units('alpha_ch3')
alpha_ch4_units = self.set_table_units('alpha_ch4')
lambda_ch3_units = self.set_table_units('lambda_ch3')
lambda_ch4_units = self.set_table_units('lambda_ch4')
x_ch3_units = self.set_table_units('x_ch3')
x_ch4_units = self.set_table_units('x_ch4')
y_ch3_units = self.set_table_units('y_ch3')
y_ch4_units = self.set_table_units('y_ch4')
albe_to_v2v3_units = self.set_table_units('albe_to_v2v3')
v2v3_to_albe_units = self.set_table_units('v2v3_to_albe')
# Define the exposure type (if not already contained in the data model)
# NOTE: This will only define an exposure type when a valid detector
# is defined in the metadata.
if not self.meta.exposure.type:
self.set_exposure_type()
def _init_data_type(self):
# Initialise the data model type
model_type = get_my_model_type( self.__class__.__name__ )
self.meta.model_type = model_type
def on_save(self, path):
super(MiriMrsDistortionModel34, self).on_save(path)
# Re-initialise data type on save
self._init_data_type()
def get_primary_array_name(self):
"""
Returns the name of the "primary" array for this model, which
controls the size of other arrays that are implicitly created.
For this data structure, the primary array's name is "slicenumber"
and not "data".
"""
return 'slicenumber'
def __str__(self):
"""
Return the contents of the D2C map object as a readable
string.
"""
# Start with the data object title, metadata and history
strg = self.get_title_and_metadata()
strg += self.get_data_str('slicenumber', underline=True, underchar="-")
strg += self.get_data_str('fov_ch3', underline=True, underchar="-")
strg += self.get_data_str('fov_ch4', underline=True, underchar="-")
strg += self.get_data_str('alpha_ch3', underline=True, underchar="-")
strg += self.get_data_str('lambda_ch3', underline=True, underchar="-")
strg += self.get_data_str('alpha_ch4', underline=True, underchar="-")
strg += self.get_data_str('lambda_ch4', underline=True, underchar="-")
strg += self.get_data_str('x_ch3', underline=True, underchar="-")
strg += self.get_data_str('y_ch3', underline=True, underchar="-")
strg += self.get_data_str('x_ch4', underline=True, underchar="-")
strg += self.get_data_str('y_ch4', underline=True, underchar="-")
strg += self.get_data_str('albe_to_v2v3', underline=True, underchar="-")
strg += self.get_data_str('v2v3_to_albe', underline=True, underchar="-")
return strg
#
# A minimal test is run when this file is run as a main program.
# For a more substantial test see miri/datamodels/tests.
#
if __name__ == '__main__':
print("Testing the MIRI distortion models module.")
PLOTTING = False
SAVE_FILES = False
print("Testing the MiriImagingDistortionModel class.")
bmatrix = [[0.1,0.2,0.3,0.4],
[0.5,0.6,0.7,0.8],
[0.8,0.7,0.6,0.5],
[0.4,0.3,0.2,0.1]
]
amatrix = [[0.1,0.0,0.0,0.0],
[0.0,0.1,0.0,0.0],
[0.0,0.0,0.1,0.0],
[0.0,0.0,0.0,0.1]
]
tmatrix = [[0.1,0.0,0.0],
[0.0,0.1,0.0],
[0.0,0.0,0.1],
]
mmatrix = [[0.1,0.0,0.0],
[0.0,0.1,0.0],
[0.0,0.0,0.1],
]
boffsets = [('One', 0.1, 0.2),
('Two', 0.2, 0.3)]
with MiriImagingDistortionModel( bmatrix=bmatrix, amatrix=amatrix,
tmatrix=tmatrix, mmatrix=mmatrix,
dmatrix=amatrix, cmatrix=bmatrix,
fmatrix=amatrix, ematrix=bmatrix,
boresight_offsets=boffsets,
fitref='MIRI-TN-00070-ATC version 3',
fitmodel='Polynomial2D' ) as testdata1:
# This is how to set the matrix units (if not obtained from a file).
testdata1.meta.bmatrix.units = 'mm ** (1-ij)'
testdata1.meta.amatrix.units = 'mm ** (1-ij)'
print(testdata1)
if PLOTTING:
testdata1.plot(description="testdata1")
if SAVE_FILES:
testdata1.save("test_imaging_distortion_model1.fits", overwrite=True)
del testdata1
print("Testing the MiriLrsD2WModel class.")
wavedata = [
(61.13976, 80.25328, 5.12652, 78.78318, 80.00265, 43.53634, 81.47993, 43.49634, 80.50392, 78.74317, 79.02664),
(61.09973, 79.27727, 5.15676, 78.74317, 79.02664, 43.49634, 80.50392, 43.45628, 79.52791, 78.70311, 78.05063),
(61.05965, 78.30126, 5.18700, 78.70311, 78.05063, 43.45628, 79.52791, 43.41617, 78.55190, 78.66300, 77.07462),
(61.01951, 77.32526, 5.21725, 78.66300, 77.07462, 43.41617, 78.55190, 43.37601, 77.57589, 78.62285, 76.09861),
(60.97933, 76.34925, 5.24749, 78.62285, 76.09861, 43.37601, 77.57589, 43.33580, 76.59988, 78.58264, 75.12260),
(60.93910, 75.37324, 5.27773, 78.58264, 75.12260, 43.33580, 76.59988, 43.29554, 75.62387, 78.54238, 74.14659),
(60.89881, 74.39723, 5.30797, 78.54238, 74.14659, 43.29554, 75.62387, 43.25523, 74.64786, 78.50207, 73.17058),
(60.85848, 73.42122, 5.33821, 78.50207, 73.17058, 43.25523, 74.64786, 43.21487, 73.67186, 78.46171, 72.19458)
]
print("\nWavelength calibration with table derived from list of tuples:")
with MiriLrsD2WModel( wavelength_table=wavedata ) as testwave1:
print(testwave1)
if PLOTTING:
testwave1.plot(description="testwave1")
if SAVE_FILES:
testwave1.save("test_lrs_d2w_model1.fits", overwrite=True)
del testwave1
print("Testing the MiriMrsDistortionModel classes.")
slicenumber = [[1,2,3,4],
[1,2,3,4],
[1,2,3,4],
[1,2,3,4]
]
slicenumber3 = [slicenumber, slicenumber]
fovdata = [(-2.95, 3.09),
(-2.96, 3.00)]
d2cdata = [(100.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0,
11.0, 12.0, 13.0, 14.0, 15.0, 16.0, 17.0, 18.0, 19.0, 20.0,
21.0, 22.0, 323.0, 24.0, 25.0),
(101.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0,
11.0, 12.0, 13.0, 14.0, 15.0, 16.0, 17.0, 18.0, 19.0, 20.0,
21.0, 22.0, 323.0, 24.0, 25.0)]
c2ddata = [(99.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0,
11.0, 12.0, 13.0, 14.0, 15.0, 16.0, 17.0, 18.0, 19.0, 20.0,
21.0, 22.0, 323.0, 24.0, 25.0),
(98.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0,
11.0, 12.0, 13.0, 14.0, 15.0, 16.0, 17.0, 18.0, 19.0, 20.0,
21.0, 22.0, 323.0, 24.0, 25.0)]
transform = [('T_CH3C,V2', 0.11, 0.21, 0.31, 0.41, 0.51, 0.61, 0.71, 0.81, 0.91),
('T_CH3C,V3', 0.12, 0.22, 0.32, 0.42, 0.52, 0.62, 0.72, 0.82, 0.92)]
with MiriMrsDistortionModel12( slicenumber=slicenumber3,
fov_ch1=fovdata, fov_ch2=fovdata,
alpha_ch1=d2cdata, lambda_ch1=d2cdata,
alpha_ch2=d2cdata, lambda_ch2=d2cdata,
x_ch1=c2ddata, y_ch1=c2ddata,
x_ch2=c2ddata, y_ch2=c2ddata,
albe_v2v3=transform, v2v3_albe=transform,
bzero1=-1.772, bdel1=0.177,
bzero2=-2.238, bdel2=0.280
) as testdata1:
print(testdata1)
print("Data arrays=", testdata1.list_data_arrays())
print("Data tables=", testdata1.list_data_tables())
if PLOTTING:
testdata1.plot(description="testdata1")
if SAVE_FILES:
testdata1.save("test_mrs_distortion_model1.fits", overwrite=True)
# newmodel = MiriMrsDistortionModel12("test_mrs_distortion_model1.fits")
# print(newmodel)
del testdata1
with MiriMrsDistortionModel34( slicenumber=slicenumber3,
fov_ch3=fovdata, fov_ch4=fovdata,
alpha_ch3=d2cdata, lambda_ch3=d2cdata,
alpha_ch4=d2cdata, lambda_ch4=d2cdata,
x_ch3=c2ddata, y_ch3=c2ddata,
x_ch4=c2ddata, y_ch4=c2ddata,
albe_v2v3=transform, v2v3_albe=transform,
bzero3=-1.772, bdel3=0.177,
bzero4=-2.238, bdel4=0.280
) as testdata2:
print(testdata2)
print("Data arrays=", testdata2.list_data_arrays())
print("Data tables=", testdata2.list_data_tables())
if PLOTTING:
testdata2.plot(description="testdata2")
if SAVE_FILES:
testdata2.save("test_mrs_distortion_model2.fits", overwrite=True)
# newmodel = MiriMrsDistortionModel34("test_mrs_distortion_model2.fits")
# print(newmodel)
del testdata2
print("Test finished.")
| 45.203848 | 116 | 0.595028 | 11,350 | 86,927 | 4.447225 | 0.0637 | 0.017949 | 0.047547 | 0.033679 | 0.772228 | 0.750654 | 0.737618 | 0.732724 | 0.718936 | 0.713329 | 0 | 0.040538 | 0.321764 | 86,927 | 1,922 | 117 | 45.227367 | 0.815611 | 0.38132 | 0 | 0.702231 | 0 | 0 | 0.143087 | 0.009202 | 0 | 0 | 0 | 0.001041 | 0 | 1 | 0.027158 | false | 0 | 0.00291 | 0 | 0.064985 | 0.013579 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
dcd8ef75ba941a2736e03dc9fe2016e504ece690 | 16,252 | py | Python | ILSwiss/rlkit/torch/common/networks.py | zbzhu99/NGSIM_Imitation | 0af6ce327e4fc4da32eddb08ba0bba5403dac24e | [
"MIT"
] | 3 | 2022-01-28T01:33:04.000Z | 2022-02-21T13:43:43.000Z | ILSwiss/rlkit/torch/common/networks.py | zbzhu99/NGSIM_GAIL | 0af6ce327e4fc4da32eddb08ba0bba5403dac24e | [
"MIT"
] | null | null | null | ILSwiss/rlkit/torch/common/networks.py | zbzhu99/NGSIM_GAIL | 0af6ce327e4fc4da32eddb08ba0bba5403dac24e | [
"MIT"
] | null | null | null | """
General networks for pytorch.
Algorithm-specific networks should go else-where.
"""
import math
import torch
from torch import nn as nn
from torch.nn import functional as F
from torch.nn import BatchNorm1d
from rlkit.policies.base import Policy
from rlkit.torch.utils import pytorch_util as ptu
from rlkit.torch.core import PyTorchModule
from rlkit.torch.utils.normalizer import TorchFixedNormalizer
from rlkit.torch.common.modules import LayerNorm
def identity(x):
return x
class Mlp(PyTorchModule):
    """
    A fully-connected multi-layer perceptron with configurable hidden sizes
    and activations, and optional layer normalisation and batch normalisation.
    """
def __init__(
self,
hidden_sizes,
output_size,
input_size,
init_w=3e-3,
hidden_activation=F.relu,
output_activation=identity,
hidden_init=ptu.fanin_init,
b_init_value=0.1,
layer_norm=False,
layer_norm_kwargs=None,
batch_norm=False,
batch_norm_before_output_activation=False,
):
self.save_init_params(locals())
super().__init__()
if layer_norm_kwargs is None:
layer_norm_kwargs = dict()
self.input_size = input_size
self.output_size = output_size
self.hidden_activation = hidden_activation
self.output_activation = output_activation
self.layer_norm = layer_norm
self.batch_norm = batch_norm
self.batch_norm_before_output_activation = batch_norm_before_output_activation
self.fcs = nn.ModuleList()
self.layer_norms = nn.ModuleList()
self.batch_norms = nn.ModuleList()
in_size = input_size
for i, next_size in enumerate(hidden_sizes):
fc = nn.Linear(in_size, next_size)
in_size = next_size
hidden_init(fc.weight)
fc.bias.data.fill_(b_init_value)
self.fcs.append(fc)
if self.layer_norm:
ln = LayerNorm(next_size)
self.layer_norms.append(ln)
if self.batch_norm:
bn = BatchNorm1d(next_size)
self.batch_norms.append(bn)
if self.batch_norm_before_output_activation:
bn = BatchNorm1d(output_size)
self.batch_norms.append(bn)
self.last_fc = nn.Linear(in_size, output_size)
self.last_fc.weight.data.uniform_(-init_w, init_w)
self.last_fc.bias.data.uniform_(-init_w, init_w)
@torch.jit.ignore
def forward(self, input, return_preactivations=False):
h = input
for i, fc in enumerate(self.fcs):
h = fc(h)
if self.layer_norm:
h = self.layer_norms[i](h)
if self.batch_norm:
h = self.batch_norms[i](h)
h = self.hidden_activation(h)
preactivation = self.last_fc(h)
if self.batch_norm_before_output_activation:
preactivation = self.batch_norms[-1](preactivation)
output = self.output_activation(preactivation)
if return_preactivations:
return output, preactivation
else:
return output
@torch.jit.export
def jit_forward(self, input):
assert self.layer_norm is False
assert self.batch_norm is False
assert self.batch_norm_before_output_activation is False
h = input
for i, fc in enumerate(self.fcs):
h = fc(h)
h = self.hidden_activation(h)
preactivation = self.last_fc(h)
output = self.output_activation(preactivation)
return output
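The `forward` method above is a standard affine-plus-activation stack. A dependency-free sketch mirroring the same loop with plain Python lists (illustrative only; the real model uses torch tensors and `nn.Linear` layers):

```python
def relu(x):
    # Elementwise ReLU on a plain list.
    return [max(0.0, v) for v in x]

def affine(weights, bias, x):
    # weights is a list of rows; returns W @ x + b.
    return [sum(w * v for w, v in zip(row, x)) + b
            for row, b in zip(weights, bias)]

def mlp_forward(layers, x, hidden_activation=relu):
    # layers is a list of (weights, bias) pairs; the last layer uses the
    # identity output activation, matching Mlp's default.
    *hidden, last = layers
    for weights, bias in hidden:
        x = hidden_activation(affine(weights, bias, x))
    w_last, b_last = last
    return affine(w_last, b_last, x)
```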
class ConditionalMlp(PyTorchModule):
def __init__(
self,
input_hidden_sizes,
input_size,
output_size,
latent_input_dim,
latent_hidden_sizes,
init_w=3e-3,
hidden_activation=F.relu,
output_activation=identity,
hidden_init=ptu.fanin_init,
b_init_value=0.1,
layer_norm=False,
layer_norm_kwargs=None,
batch_norm=False,
batch_norm_before_output_activation=False,
):
self.save_init_params(locals())
super().__init__()
if layer_norm_kwargs is None:
layer_norm_kwargs = dict()
self.input_size = input_size
self.latent_input_dim = latent_input_dim
self.output_size = output_size
self.hidden_activation = hidden_activation
self.output_activation = output_activation
self.layer_norm = layer_norm
self.batch_norm = batch_norm
self.batch_norm_before_output_activation = batch_norm_before_output_activation
self.input_layer_norms = nn.ModuleList()
self.input_batch_norms = nn.ModuleList()
self.latent_layer_norms = nn.ModuleList()
self.latent_batch_norms = nn.ModuleList()
self.input_encoder_fcs = nn.ModuleList()
self.latent_encoder_fcs = nn.ModuleList()
in_size = input_size
for i, next_size in enumerate(input_hidden_sizes):
fc = nn.Linear(in_size, next_size)
in_size = next_size
            hidden_init(fc.weight)
            fc.bias.data.fill_(b_init_value)
self.input_encoder_fcs.append(fc)
if self.layer_norm:
ln = LayerNorm(next_size)
self.input_layer_norms.append(ln)
if self.batch_norm:
bn = BatchNorm1d(next_size)
self.input_batch_norms.append(bn)
in_size = latent_input_dim
for i, next_size in enumerate(latent_hidden_sizes):
fc = nn.Linear(in_size, next_size)
in_size = next_size
            hidden_init(fc.weight)
            fc.bias.data.fill_(b_init_value)
self.latent_encoder_fcs.append(fc)
if self.layer_norm:
ln = LayerNorm(next_size)
self.latent_layer_norms.append(ln)
if self.batch_norm:
bn = BatchNorm1d(next_size)
self.latent_batch_norms.append(bn)
if len(input_hidden_sizes) > 0 and len(latent_hidden_sizes) > 0:
self.last_hidden_size = input_hidden_sizes[-1] + latent_hidden_sizes[-1]
elif len(input_hidden_sizes) > 0:
self.last_hidden_size = input_hidden_sizes[-1] + latent_input_dim
elif len(latent_hidden_sizes) > 0:
self.last_hidden_size = input_size + latent_hidden_sizes[-1]
else:
self.last_hidden_size = input_size + latent_input_dim
if self.batch_norm_before_output_activation:
self.last_batch_norm = BatchNorm1d(output_size)
self.last_fc = nn.Linear(self.last_hidden_size, output_size)
self.last_fc.weight.data.uniform_(-init_w, init_w)
self.last_fc.bias.data.uniform_(-init_w, init_w)
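The four-way branch above determines the width of the concatenated feature vector fed to `last_fc`: each of the two encoder stacks contributes either its last hidden size or, if it has no hidden layers, its raw input width. The same rule restated as a standalone helper (name is illustrative):

```python
def last_hidden_size(input_size, latent_dim, input_hidden, latent_hidden):
    # Width contributed by the observation encoder.
    obs_width = input_hidden[-1] if input_hidden else input_size
    # Width contributed by the latent encoder.
    lat_width = latent_hidden[-1] if latent_hidden else latent_dim
    return obs_width + lat_width
```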
@torch.jit.ignore
def forward(
self,
input,
latent_variable,
return_preactivations=False,
):
h_input = input
for i, fc in enumerate(self.input_encoder_fcs):
h_input = fc(h_input)
if self.layer_norm:
h_input = self.input_layer_norms[i](h_input)
if self.batch_norm:
h_input = self.input_batch_norms[i](h_input)
h_input = self.hidden_activation(h_input)
assert len(latent_variable.shape) == 2
h_latent = latent_variable
for i, fc in enumerate(self.latent_encoder_fcs):
h_latent = fc(h_latent)
if self.layer_norm:
h_latent = self.latent_layer_norms[i](h_latent)
if self.batch_norm:
h_latent = self.latent_batch_norms[i](h_latent)
h_latent = self.hidden_activation(h_latent)
h = torch.cat([h_input, h_latent], dim=-1)
preactivation = self.last_fc(h)
if self.batch_norm_before_output_activation:
preactivation = self.last_batch_norm(preactivation)
output = self.output_activation(preactivation)
if return_preactivations:
return output, preactivation
else:
return output
@torch.jit.export
def jit_forward(
self,
input,
latent_variable,
):
"""
torch.jit does not support condition control (such as if ... else ...)
"""
assert self.batch_norm is False
assert self.layer_norm is False
assert self.batch_norm_before_output_activation is False
h_input = input
for i, fc in enumerate(self.input_encoder_fcs):
h_input = fc(h_input)
h_input = self.hidden_activation(h_input)
assert len(latent_variable.shape) == 2
h_latent = latent_variable
for i, fc in enumerate(self.latent_encoder_fcs):
h_latent = fc(h_latent)
h_latent = self.hidden_activation(h_latent)
h = torch.cat([h_input, h_latent], dim=-1)
preactivation = self.last_fc(h)
output = self.output_activation(preactivation)
return output
class FlattenConditionalMlp(ConditionalMlp):
@torch.jit.ignore
def forward(self, *inputs, **kwargs):
flat_inputs = torch.cat(inputs, dim=1)
return super().forward(flat_inputs, **kwargs)
@torch.jit.export
def jit_forward(self, *inputs, **kwargs):
flat_inputs = torch.cat(inputs, dim=1)
return super().jit_forward(flat_inputs, **kwargs)
class ConvNet(PyTorchModule):
def __init__(
self,
kernel_sizes,
num_channels,
strides,
paddings,
hidden_sizes,
output_size,
input_size,
init_w=3e-3,
hidden_activation=F.relu,
output_activation=identity,
hidden_init=ptu.fanin_init,
b_init_value=0.1,
):
self.save_init_params(locals())
super().__init__()
self.kernel_sizes = kernel_sizes
self.num_channels = num_channels
self.strides = strides
self.paddings = paddings
self.hidden_activation = hidden_activation
self.output_activation = output_activation
        # Use ModuleList (not a plain Python list) so the conv/fc parameters
        # are registered with the module and picked up by .parameters().
        self.convs = nn.ModuleList()
        self.fcs = nn.ModuleList()
in_c = input_size[0]
in_h = input_size[1]
for k, c, s, p in zip(kernel_sizes, num_channels, strides, paddings):
conv = nn.Conv2d(in_c, c, k, stride=s, padding=p)
hidden_init(conv.weight)
conv.bias.data.fill_(b_init_value)
self.convs.append(conv)
out_h = int(math.floor(1 + (in_h + 2 * p - k) / s))
in_c = c
in_h = out_h
in_dim = in_c * in_h * in_h
for h in hidden_sizes:
fc = nn.Linear(in_dim, h)
in_dim = h
hidden_init(fc.weight)
fc.bias.data.fill_(b_init_value)
self.fcs.append(fc)
self.last_fc = nn.Linear(in_dim, output_size)
self.last_fc.weight.data.uniform_(-init_w, init_w)
self.last_fc.bias.data.uniform_(-init_w, init_w)
@torch.jit.ignore
def forward(self, input, return_preactivations=False):
h = input
for conv in self.convs:
h = conv(h)
h = self.hidden_activation(h)
h = h.view(h.size(0), -1)
for i, fc in enumerate(self.fcs):
h = fc(h)
h = self.hidden_activation(h)
preactivation = self.last_fc(h)
output = self.output_activation(preactivation)
if return_preactivations:
return output, preactivation
else:
return output
@torch.jit.export
def jit_forward(self, input):
h = input
for conv in self.convs:
h = conv(h)
h = self.hidden_activation(h)
h = h.view(h.size(0), -1)
for i, fc in enumerate(self.fcs):
h = fc(h)
h = self.hidden_activation(h)
preactivation = self.last_fc(h)
output = self.output_activation(preactivation)
return output
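`ConvNet.__init__` above tracks the spatial size through the conv stack with the usual output-size formula, `out_h = floor(1 + (in_h + 2p - k) / s)`. A standalone restatement (helper name is illustrative):

```python
import math

def conv_out_sizes(in_h, kernel_sizes, strides, paddings):
    # Chain the conv output-height formula across the stack,
    # exactly as ConvNet.__init__ does for square inputs.
    sizes = []
    for k, s, p in zip(kernel_sizes, strides, paddings):
        in_h = int(math.floor(1 + (in_h + 2 * p - k) / s))
        sizes.append(in_h)
    return sizes
```

For instance, the classic 84x84 Atari stack with kernels (8, 4, 3), strides (4, 2, 1) and no padding yields heights 20, 9 and 7.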
class FlattenMlp(Mlp):
"""
Flatten inputs along dimension 1 and then pass through MLP.
"""
@torch.jit.ignore
def forward(self, *inputs, **kwargs):
flat_inputs = torch.cat(inputs, dim=1)
return super().forward(flat_inputs, **kwargs)
    @torch.jit.export
    def jit_forward(self, *inputs, **kwargs):
        # Renamed from a second "forward" definition, which silently
        # shadowed the one above; this mirrors FlattenConditionalMlp.
        flat_inputs = torch.cat(inputs, dim=1)
        return super().jit_forward(flat_inputs, **kwargs)
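`FlattenMlp` concatenates its inputs along dimension 1 before running the MLP. What `torch.cat(inputs, dim=1)` does here, restated with plain nested lists (illustrative only): the per-sample feature vectors from each input are joined end to end.

```python
def flatten_inputs(*inputs):
    # Each input is a batch of rows; join corresponding rows end to end,
    # the list analogue of torch.cat(inputs, dim=1).
    return [sum((list(feat) for feat in row), [])
            for row in zip(*inputs)]
```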
class MlpPolicy(Mlp, Policy):
"""
A simpler interface for creating policies.
"""
def __init__(
self,
hidden_sizes,
output_size,
input_size,
obs_normalizer: TorchFixedNormalizer = None,
**kwargs
):
self.save_init_params(locals())
super().__init__(hidden_sizes, output_size, input_size, **kwargs)
self.obs_normalizer = obs_normalizer
def forward(self, obs, **kwargs):
if self.obs_normalizer:
obs = self.obs_normalizer.normalize(obs)
return super().forward(obs, **kwargs)
def get_action(self, obs_np):
actions = self.get_actions(obs_np[None])
return actions[0, :], {}
def get_actions(self, obs):
return self.eval_np(obs)
class TanhMlpPolicy(MlpPolicy):
"""
A helper class since most policies have a tanh output activation.
"""
    def __init__(self, *args, **kwargs):
        self.save_init_params(locals())
        super().__init__(*args, output_activation=torch.tanh, **kwargs)
        # Note: instantiation always raises here, so this class is
        # effectively disabled in this codebase.
        raise NotImplementedError()
class ObsPreprocessedQFunc(FlattenMlp):
"""
This is a weird thing and I didn't know what to call.
Basically I wanted this so that if you need to preprocess
your inputs somehow (attention, gating, etc.) with an external module
before passing to the policy you could do so.
Assumption is that you do not want to update the parameters of the preprocessing
module so its output is always detached.
"""
def __init__(self, preprocess_model, z_dim, *args, wrap_absorbing=False, **kwargs):
self.save_init_params(locals())
super().__init__(*args, **kwargs)
# this is a hack so that it is not added as a submodule
self.preprocess_model_list = [preprocess_model]
self.wrap_absorbing = wrap_absorbing
self.z_dim = z_dim
@property
def preprocess_model(self):
# this is a hack so that it is not added as a submodule
return self.preprocess_model_list[0]
def preprocess_fn(self, obs_batch):
mode = self.preprocess_model.training
self.preprocess_model.eval()
processed_obs_batch = self.preprocess_model(
obs_batch[:, : -self.z_dim],
self.wrap_absorbing,
obs_batch[:, -self.z_dim :],
).detach()
self.preprocess_model.train(mode)
return processed_obs_batch
def forward(self, obs, actions):
obs = self.preprocess_fn(obs).detach()
return super().forward(obs, actions)
class ObsPreprocessedVFunc(FlattenMlp):
"""
This is a weird thing and I didn't know what to call.
Basically I wanted this so that if you need to preprocess
your inputs somehow (attention, gating, etc.) with an external module
before passing to the policy you could do so.
Assumption is that you do not want to update the parameters of the preprocessing
module so its output is always detached.
"""
def __init__(self, preprocess_model, z_dim, *args, wrap_absorbing=False, **kwargs):
self.save_init_params(locals())
super().__init__(*args, **kwargs)
# this is a hack so that it is not added as a submodule
self.preprocess_model_list = [preprocess_model]
self.wrap_absorbing = wrap_absorbing
self.z_dim = z_dim
@property
def preprocess_model(self):
# this is a hack so that it is not added as a submodule
return self.preprocess_model_list[0]
def preprocess_fn(self, obs_batch):
mode = self.preprocess_model.training
self.preprocess_model.eval()
processed_obs_batch = self.preprocess_model(
obs_batch[:, : -self.z_dim],
self.wrap_absorbing,
obs_batch[:, -self.z_dim :],
).detach()
self.preprocess_model.train(mode)
return processed_obs_batch
def forward(self, obs):
obs = self.preprocess_fn(obs).detach()
return super().forward(obs)
| 32.700201 | 87 | 0.624354 | 2,085 | 16,252 | 4.595204 | 0.105516 | 0.048429 | 0.024423 | 0.026302 | 0.798977 | 0.753157 | 0.728316 | 0.709634 | 0.702849 | 0.692934 | 0 | 0.004498 | 0.288641 | 16,252 | 496 | 88 | 32.766129 | 0.824237 | 0.076114 | 0 | 0.707254 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020725 | 1 | 0.069948 | false | 0 | 0.025907 | 0.010363 | 0.178756 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0d0035e0b458c800f4a8ef59bc82aafb23458f96 | 345 | py | Python | {{cookiecutter.repo_name}}/src/{{cookiecutter.src_package_name}}_fastapi/deps.py | ryzalk/test-mlpcgcp | 7b16ab3015d345f7d037de672443f1f202601b75 | [
"OML"
] | null | null | null | {{cookiecutter.repo_name}}/src/{{cookiecutter.src_package_name}}_fastapi/deps.py | ryzalk/test-mlpcgcp | 7b16ab3015d345f7d037de672443f1f202601b75 | [
"OML"
] | null | null | null | {{cookiecutter.repo_name}}/src/{{cookiecutter.src_package_name}}_fastapi/deps.py | ryzalk/test-mlpcgcp | 7b16ab3015d345f7d037de672443f1f202601b75 | [
"OML"
] | null | null | null | import {{cookiecutter.src_package_name}} as {{cookiecutter.src_package_name_short}}
import {{cookiecutter.src_package_name}}_fastapi as {{cookiecutter.src_package_name_short}}_fapi
PRED_MODEL = {{cookiecutter.src_package_name_short}}.modeling.utils.load_model(
{{cookiecutter.src_package_name_short}}_fapi.config.SETTINGS.PRED_MODEL_PATH)
| 49.285714 | 96 | 0.834783 | 46 | 345 | 5.76087 | 0.369565 | 0.339623 | 0.498113 | 0.588679 | 0.792453 | 0.550943 | 0 | 0 | 0 | 0 | 0 | 0 | 0.052174 | 345 | 6 | 97 | 57.5 | 0.810398 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.5 | null | null | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
0d23f53df4600d7cd65ccd0eb0ac21d15a2bf948 | 241 | py | Python | terrascript/gitlab/d.py | vutsalsinghal/python-terrascript | 3b9fb5ad77453d330fb0cd03524154a342c5d5dc | [
"BSD-2-Clause"
] | null | null | null | terrascript/gitlab/d.py | vutsalsinghal/python-terrascript | 3b9fb5ad77453d330fb0cd03524154a342c5d5dc | [
"BSD-2-Clause"
] | null | null | null | terrascript/gitlab/d.py | vutsalsinghal/python-terrascript | 3b9fb5ad77453d330fb0cd03524154a342c5d5dc | [
"BSD-2-Clause"
] | null | null | null | # terrascript/gitlab/d.py
import terrascript
class gitlab_group(terrascript.Data):
pass
class gitlab_project(terrascript.Data):
pass
class gitlab_user(terrascript.Data):
pass
class gitlab_users(terrascript.Data):
pass
| 14.176471 | 39 | 0.755187 | 30 | 241 | 5.933333 | 0.4 | 0.247191 | 0.426966 | 0.404494 | 0.505618 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.161826 | 241 | 16 | 40 | 15.0625 | 0.881188 | 0.095436 | 0 | 0.444444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.444444 | 0.111111 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
b4a8678c612874ba099b03619fdffbbdc56b5814 | 71 | py | Python | app/core/schemas.py | dghy/WRTC-Hub | 430a30fa7ecb200028957dc1a316837c631ef996 | [
"MIT"
] | null | null | null | app/core/schemas.py | dghy/WRTC-Hub | 430a30fa7ecb200028957dc1a316837c631ef996 | [
"MIT"
] | null | null | null | app/core/schemas.py | dghy/WRTC-Hub | 430a30fa7ecb200028957dc1a316837c631ef996 | [
"MIT"
] | null | null | null | from pydantic import BaseModel
class BaseSchema(BaseModel):
pass
| 11.833333 | 30 | 0.774648 | 8 | 71 | 6.875 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.183099 | 71 | 5 | 31 | 14.2 | 0.948276 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
b4e49cfa66a096fbd620e96e7aca6d2f2fa9c97b | 1,528 | py | Python | csf_tz/overrides.py | RapidSignal-Electronics/CSF_TZ | 2cb8925e85f783623b4facce4048c88cc9c44ca8 | [
"MIT"
] | null | null | null | csf_tz/overrides.py | RapidSignal-Electronics/CSF_TZ | 2cb8925e85f783623b4facce4048c88cc9c44ca8 | [
"MIT"
] | null | null | null | csf_tz/overrides.py | RapidSignal-Electronics/CSF_TZ | 2cb8925e85f783623b4facce4048c88cc9c44ca8 | [
"MIT"
] | 1 | 2022-03-17T22:49:40.000Z | 2022-03-17T22:49:40.000Z | from __future__ import unicode_literals
import frappe, erpnext
import datetime, math
from erpnext.payroll.doctype.salary_slip.salary_slip import SalarySlip
# from erpnext.payroll.doctype.additional_salary.additional_salary import get_additional_salary_component
class csftz_SalarySlip(SalarySlip):
# def add_additional_salary_components(self, component_type):
# salary_components_details, additional_salary_details = get_additional_salary_component(self.employee, self.start_date, self.end_date, component_type)
# if salary_components_details and additional_salary_details:
# for additional_salary in additional_salary_details:
# additional_salary = frappe._dict(additional_salary)
# # exit if additional_salary.name already exists in self.earnings/deductions
# existing_additional_salary_record_found = 0
# for d in self.get(component_type):
# if d.additional_salary == additional_salary.name:
# existing_additional_salary_record_found = 1
# break
# if existing_additional_salary_record_found:
# continue
# amount = additional_salary.amount
# overwrite = additional_salary.overwrite
# self.update_component_row(frappe._dict(salary_components_details[additional_salary.component]), amount, component_type, overwrite=overwrite, additional_salary=additional_salary.name)
pass | 58.769231 | 200 | 0.71466 | 163 | 1,528 | 6.306748 | 0.325153 | 0.342412 | 0.075875 | 0.093385 | 0.248054 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001698 | 0.229058 | 1,528 | 26 | 201 | 58.769231 | 0.870968 | 0.801702 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.166667 | 0.666667 | 0 | 0.833333 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
370c9654321ca14218a97e143bfb054155cda94e | 188 | py | Python | src/wai/annotations/blue_channel/component/__init__.py | waikato-ufdl/wai-annotations-bluechannel | f60fa3b55842d76691c17e2b3a74dc45345c66f7 | [
"Apache-2.0"
] | null | null | null | src/wai/annotations/blue_channel/component/__init__.py | waikato-ufdl/wai-annotations-bluechannel | f60fa3b55842d76691c17e2b3a74dc45345c66f7 | [
"Apache-2.0"
] | null | null | null | src/wai/annotations/blue_channel/component/__init__.py | waikato-ufdl/wai-annotations-bluechannel | f60fa3b55842d76691c17e2b3a74dc45345c66f7 | [
"Apache-2.0"
] | null | null | null | from ._BlueChannelReader import BlueChannelReader
from ._BlueChannelWriter import BlueChannelWriter
from ._FromBlueChannel import FromBlueChannel
from ._ToBlueChannel import ToBlueChannel
| 37.6 | 49 | 0.893617 | 16 | 188 | 10.25 | 0.375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085106 | 188 | 4 | 50 | 47 | 0.953488 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2c0fd18abe86b4b21c4a1ebb814f63c61fd9ddea | 26 | py | Python | jason/props/rules/__init__.py | manoadamro/jason | e6a152797cc47fc158b41f1f4b4d55f79d0494f7 | [
"MIT"
] | null | null | null | jason/props/rules/__init__.py | manoadamro/jason | e6a152797cc47fc158b41f1f4b4d55f79d0494f7 | [
"MIT"
] | 136 | 2019-05-15T07:30:47.000Z | 2021-07-19T05:21:39.000Z | jason/props/rules/__init__.py | manoadamro/jason | e6a152797cc47fc158b41f1f4b4d55f79d0494f7 | [
"MIT"
] | 1 | 2019-05-15T10:00:34.000Z | 2019-05-15T10:00:34.000Z | from .any_of import AnyOf
| 13 | 25 | 0.807692 | 5 | 26 | 4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 26 | 1 | 26 | 26 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2c1f369e2916879d701ef9d84f757021d9ef7fe4 | 103 | py | Python | klusta_pipeline/__init__.py | gentnerlab/klusta-pipeline | f3ab124e7eb39d574301e40e2e4af1a08fea5adb | [
"BSD-3-Clause"
] | 6 | 2015-07-07T21:49:54.000Z | 2018-08-21T16:17:13.000Z | klusta_pipeline/__init__.py | gentnerlab/klusta-pipeline | f3ab124e7eb39d574301e40e2e4af1a08fea5adb | [
"BSD-3-Clause"
] | 22 | 2015-07-07T19:41:28.000Z | 2018-01-14T06:53:21.000Z | klusta_pipeline/__init__.py | gentnerlab/klusta-pipeline | f3ab124e7eb39d574301e40e2e4af1a08fea5adb | [
"BSD-3-Clause"
] | 8 | 2015-07-14T19:12:24.000Z | 2018-06-05T21:47:32.000Z | from constants import *
from dataio import *
from maps import *
from probe import *
from utils import * | 20.6 | 23 | 0.76699 | 15 | 103 | 5.266667 | 0.466667 | 0.506329 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.184466 | 103 | 5 | 24 | 20.6 | 0.940476 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
25e90284bcc923b7cf93ad6ce5acff94ceca84d1 | 173 | py | Python | solar_data_pipeline/file/csv.py | slacgismo/solar-data-pipeline | 1207122d34abbc92c7be6e36db3c979e1fa0556e | [
"BSD-2-Clause"
] | 1 | 2020-01-15T11:17:27.000Z | 2020-01-15T11:17:27.000Z | solar_data_pipeline/file/csv.py | slacgismo/solar-data-pipeline | 1207122d34abbc92c7be6e36db3c979e1fa0556e | [
"BSD-2-Clause"
] | null | null | null | solar_data_pipeline/file/csv.py | slacgismo/solar-data-pipeline | 1207122d34abbc92c7be6e36db3c979e1fa0556e | [
"BSD-2-Clause"
] | null | null | null | from solardatatools.dataio import get_pvdaq_data
class CsvAccess:
def __init__(self, file_url):
self._file_url = file_url
def retrieve(self):
pass
| 17.3 | 48 | 0.699422 | 23 | 173 | 4.826087 | 0.695652 | 0.189189 | 0.198198 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.236994 | 173 | 9 | 49 | 19.222222 | 0.840909 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0.166667 | 0.166667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
d30f2f7fd3c6da5faed8fc332f9cd3e9c465f718 | 43 | py | Python | __init__.py | depsir/cart | 6fd09177d3bc67dd80205bbc763494f6dd55ca0c | [
"BSD-3-Clause"
] | null | null | null | __init__.py | depsir/cart | 6fd09177d3bc67dd80205bbc763494f6dd55ca0c | [
"BSD-3-Clause"
] | null | null | null | __init__.py | depsir/cart | 6fd09177d3bc67dd80205bbc763494f6dd55ca0c | [
"BSD-3-Clause"
] | null | null | null | from cart import Cart
from item import Item | 21.5 | 21 | 0.837209 | 8 | 43 | 4.5 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.162791 | 43 | 2 | 22 | 21.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d3301ad2e11b7ee41c54a011feb2c521ae4d5c14 | 111 | py | Python | agents/maze_agents/modules/__init__.py | lee15253/edl_bk | 6777f5803138e6a64dabb096fe18a495728aabe3 | [
"MIT"
] | 30 | 2020-02-16T15:52:59.000Z | 2022-03-22T10:54:54.000Z | agents/maze_agents/modules/__init__.py | lee15253/edl_bk | 6777f5803138e6a64dabb096fe18a495728aabe3 | [
"MIT"
] | null | null | null | agents/maze_agents/modules/__init__.py | lee15253/edl_bk | 6777f5803138e6a64dabb096fe18a495728aabe3 | [
"MIT"
] | 7 | 2020-02-16T15:53:05.000Z | 2022-01-18T03:41:03.000Z | from .policy import *
from .density import *
from .value_function import *
from .intrinsic_motivation import *
| 22.2 | 35 | 0.783784 | 14 | 111 | 6.071429 | 0.571429 | 0.352941 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.144144 | 111 | 4 | 36 | 27.75 | 0.894737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d34f9be572911d947a97432eceb5fc04d09fc70f | 108 | py | Python | movies/views.py | felix13/DjangoCustomErrorPage | dd65af4da3c00082e505b97ef2ba51be6f791f5f | [
"MIT"
] | null | null | null | movies/views.py | felix13/DjangoCustomErrorPage | dd65af4da3c00082e505b97ef2ba51be6f791f5f | [
"MIT"
] | null | null | null | movies/views.py | felix13/DjangoCustomErrorPage | dd65af4da3c00082e505b97ef2ba51be6f791f5f | [
"MIT"
] | null | null | null | from django.shortcuts import render
def home(request):
return render(request, "movies/home.html")
| 15.428571 | 46 | 0.722222 | 14 | 108 | 5.571429 | 0.785714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.175926 | 108 | 6 | 47 | 18 | 0.876404 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
d3af716170a446e8962e19ae1b9a997b42a84fb8 | 141 | py | Python | cracking-the-coding-interview/src/chapter1/checkSorted.py | silphire/training-with-books | bd07f7376996828b6cb4000d654cdc5f53d1c589 | [
"MIT"
] | null | null | null | cracking-the-coding-interview/src/chapter1/checkSorted.py | silphire/training-with-books | bd07f7376996828b6cb4000d654cdc5f53d1c589 | [
"MIT"
] | 4 | 2020-01-04T14:05:45.000Z | 2020-01-19T14:53:03.000Z | cracking-the-coding-interview/src/chapter1/checkSorted.py | silphire/training-with-books | bd07f7376996828b6cb4000d654cdc5f53d1c589 | [
"MIT"
] | null | null | null | # True when a and b contain the same characters (an anagram check via sorting)
def checkSorted(a, b):
return sorted(list(a)) == sorted(list(b))
print(checkSorted("abc", "anc"))
print(checkSorted("aabbcc", "abcabc")) | 28.2 | 45 | 0.666667 | 19 | 141 | 4.947368 | 0.631579 | 0.212766 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.106383 | 141 | 5 | 46 | 28.2 | 0.746032 | 0 | 0 | 0 | 0 | 0 | 0.126761 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0 | 0.25 | 0.5 | 0.5 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 6 |
6cbb0eb9306ce02c517457c62c058d8335a30c05 | 1,274 | py | Python | py/test/test.py | hairbeRt/Fred | ae3770e2ad62f83a7d2070e8e48087d89aa09bfc | [
"MIT"
] | null | null | null | py/test/test.py | hairbeRt/Fred | ae3770e2ad62f83a7d2070e8e48087d89aa09bfc | [
"MIT"
] | null | null | null | py/test/test.py | hairbeRt/Fred | ae3770e2ad62f83a7d2070e8e48087d89aa09bfc | [
"MIT"
] | null | null | null | import numpy as np
import unittest
import Fred.backend as fred
class TestContinuousFrechet(unittest.TestCase):
def test_zigzag1d(self):
a = fred.Curve(np.array([0.0, 1.0, 0.0, 1.0]))
b = fred.Curve(np.array([0.0, 0.75, 0.25, 1.0]))
c = fred.Curve(np.array([0.0, 1.0]))
self.assertEqual(fred.continuous_frechet(a, b).value, 0.25)
self.assertEqual(fred.continuous_frechet(a, c).value, 0.5)
def test_longsegment(self):
a = fred.Curve(np.array([0.0,500.0e3, 1.0e6]))
b = fred.Curve(np.array([0.0, 1.0e6]))
self.assertEqual(fred.continuous_frechet(a, b).value, 0.0)
class TestDiscreteFrechet(unittest.TestCase):
def test_zigzag1d(self):
a = fred.Curve(np.array([0.0, 1.0, 0.0, 1.0]))
b = fred.Curve(np.array([0.0, 0.75, 0.25, 1.0]))
c = fred.Curve(np.array([0.0, 1.0]))
self.assertEqual(fred.discrete_frechet(a, b).value, 0.25)
self.assertEqual(fred.discrete_frechet(a, c).value, 1.0)
def test_longsegment(self):
a = fred.Curve(np.array([0.0,500.0e3, 1.0e6]))
b = fred.Curve(np.array([0.0, 1.0e6]))
self.assertEqual(fred.discrete_frechet(a, b).value, 500000.0)
if __name__ == '__main__':
unittest.main()
| 36.4 | 69 | 0.60989 | 210 | 1,274 | 3.614286 | 0.180952 | 0.044796 | 0.144928 | 0.210804 | 0.805007 | 0.805007 | 0.760211 | 0.760211 | 0.720685 | 0.57971 | 0 | 0.094433 | 0.210361 | 1,274 | 34 | 70 | 37.470588 | 0.66004 | 0 | 0 | 0.518519 | 0 | 0 | 0.006279 | 0 | 0 | 0 | 0 | 0 | 0.222222 | 1 | 0.148148 | false | 0 | 0.111111 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6cd9cb09c97e8c5038449aa15e952fc452c790b8 | 31 | py | Python | Know Your Code/Python/linear_search_python/hi.py | rswalia/open-source-contribution-for-beginners | 1ea29479c6d949760c83926b4c43a6b0d33ad0a0 | [
"MIT"
] | 249 | 2018-04-19T08:30:19.000Z | 2022-03-30T06:31:09.000Z | Know Your Code/Python/linear_search_python/hi.py | rswalia/open-source-contribution-for-beginners | 1ea29479c6d949760c83926b4c43a6b0d33ad0a0 | [
"MIT"
] | 152 | 2021-11-01T06:00:11.000Z | 2022-03-20T11:40:00.000Z | Know Your Code/Python/linear_search_python/hi.py | rswalia/open-source-contribution-for-beginners | 1ea29479c6d949760c83926b4c43a6b0d33ad0a0 | [
"MIT"
] | 111 | 2018-04-09T01:53:29.000Z | 2022-03-19T08:59:28.000Z | import matplotlib.pyplot as plt | 31 | 31 | 0.870968 | 5 | 31 | 5.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.096774 | 31 | 1 | 31 | 31 | 0.964286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6cf310035d0f62de5e7ee29e88517057e05cd2ea | 190 | py | Python | fundamentals-JPN/src/notebooks/script/helper.py | konabuta/fta-azure-machine-learning | 70da95e7a4c9b3e42db61bb0f69eda8e07c28eee | [
"MIT"
] | null | null | null | fundamentals-JPN/src/notebooks/script/helper.py | konabuta/fta-azure-machine-learning | 70da95e7a4c9b3e42db61bb0f69eda8e07c28eee | [
"MIT"
] | null | null | null | fundamentals-JPN/src/notebooks/script/helper.py | konabuta/fta-azure-machine-learning | 70da95e7a4c9b3e42db61bb0f69eda8e07c28eee | [
"MIT"
] | null | null | null | def data_preprocess(df, categorical_cols, float_cols):
df[categorical_cols] = df[categorical_cols].astype("category")
df[float_cols] = df[float_cols].astype("float")
return df
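A hedged usage sketch for `data_preprocess` (the function is restated so the snippet runs standalone; the sample frame and the column names `city` and `price` are invented for illustration, and pandas is assumed to be available):

```python
import pandas as pd

# Restated from helper.py above so the demo is self-contained
def data_preprocess(df, categorical_cols, float_cols):
    df[categorical_cols] = df[categorical_cols].astype("category")
    df[float_cols] = df[float_cols].astype("float")
    return df

# Invented sample data: one categorical column, one numeric column stored as text
df = pd.DataFrame({"city": ["NYC", "LA"], "price": ["1.5", "2.5"]})
df = data_preprocess(df, ["city"], ["price"])
assert df["price"].dtype == float  # string prices coerced to float64
```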
| 27.142857 | 66 | 0.731579 | 26 | 190 | 5.076923 | 0.384615 | 0.295455 | 0.386364 | 0.318182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.136842 | 190 | 6 | 67 | 31.666667 | 0.804878 | 0 | 0 | 0 | 0 | 0 | 0.068421 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6cf6b05adb299457225e5e3b54352abb18cb1019 | 105 | py | Python | testing/freeze/tests/test_trivial.py | solackerman/pytest | 0fc00c02a7a39ebd6c57886a85580ea3341e76eb | [
"MIT"
] | 2 | 2015-03-04T10:17:57.000Z | 2022-03-13T18:32:09.000Z | testing/freeze/tests/test_trivial.py | solackerman/pytest | 0fc00c02a7a39ebd6c57886a85580ea3341e76eb | [
"MIT"
] | 2 | 2017-07-15T22:12:00.000Z | 2017-08-09T00:34:51.000Z | testing/freeze/tests/test_trivial.py | solackerman/pytest | 0fc00c02a7a39ebd6c57886a85580ea3341e76eb | [
"MIT"
] | 1 | 2020-12-02T16:03:58.000Z | 2020-12-02T16:03:58.000Z |
def test_upper():
assert 'foo'.upper() == 'FOO'
def test_lower():
assert 'FOO'.lower() == 'foo' | 17.5 | 33 | 0.571429 | 14 | 105 | 4.142857 | 0.428571 | 0.241379 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 105 | 6 | 34 | 17.5 | 0.690476 | 0 | 0 | 0 | 0 | 0 | 0.114286 | 0 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0.5 | true | 0 | 0 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9f485759abb2ac95bf9ec98633a562491d91c3ab | 25 | py | Python | mapper_on_file/__init__.py | cao5zy/mapper_on_file | f325d52495a5de92bf6bbcab7b01090d834020b6 | [
"MIT"
] | null | null | null | mapper_on_file/__init__.py | cao5zy/mapper_on_file | f325d52495a5de92bf6bbcab7b01090d834020b6 | [
"MIT"
] | null | null | null | mapper_on_file/__init__.py | cao5zy/mapper_on_file | f325d52495a5de92bf6bbcab7b01090d834020b6 | [
"MIT"
] | null | null | null | from .app import mapping
| 12.5 | 24 | 0.8 | 4 | 25 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 25 | 1 | 25 | 25 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9f8a79739a2cb7d8a084ebe4606dc83b4834fa74 | 156 | py | Python | pyalgo/basic_modules/__init__.py | gilad-dotan/pyalgo_pkg | 132ff3c032c3fc0ae910201611e5d2cde387eb74 | [
"MIT"
] | 1 | 2021-04-01T08:59:30.000Z | 2021-04-01T08:59:30.000Z | pyalgo/basic_modules/__init__.py | gilad-dotan/pyalgo_pkg | 132ff3c032c3fc0ae910201611e5d2cde387eb74 | [
"MIT"
] | null | null | null | pyalgo/basic_modules/__init__.py | gilad-dotan/pyalgo_pkg | 132ff3c032c3fc0ae910201611e5d2cde387eb74 | [
"MIT"
] | null | null | null | # // import modules section \\
from . import default_functions
from . import default_values
# // load numba compiles \\
default_functions.prime_check(5)
| 17.333333 | 32 | 0.75 | 19 | 156 | 5.947368 | 0.684211 | 0.176991 | 0.300885 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007576 | 0.153846 | 156 | 8 | 33 | 19.5 | 0.848485 | 0.346154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9fa05c4b54d129e8655faf8f771c4e68145210b6 | 233 | py | Python | lib/core/dict_tree.py | p3r7/lexicon-mpx1-sysex-tests | b4c0e72d3cc74d0692a2221f6e574061d3d16e67 | [
"MIT"
] | null | null | null | lib/core/dict_tree.py | p3r7/lexicon-mpx1-sysex-tests | b4c0e72d3cc74d0692a2221f6e574061d3d16e67 | [
"MIT"
] | null | null | null | lib/core/dict_tree.py | p3r7/lexicon-mpx1-sysex-tests | b4c0e72d3cc74d0692a2221f6e574061d3d16e67 | [
"MIT"
] | null | null | null |
from functools import reduce # only in Python 3
import operator
## ACCESSORS
def get_in(tree, path):
    # walk the nested dict along path and return the value at the end
    return reduce(operator.getitem, path, tree)


def set_in(tree, path, value):
    # resolve the parent container, then assign under the final key
    get_in(tree, path[:-1])[path[-1]] = value
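A minimal standalone demo of the two accessors (the helpers are restated so the snippet runs on its own; the nested `config` data is invented for illustration):

```python
from functools import reduce
import operator

# Restated from dict_tree.py above so the demo is self-contained
def get_in(tree, path):
    return reduce(operator.getitem, path, tree)

def set_in(tree, path, value):
    get_in(tree, path[:-1])[path[-1]] = value

# Hypothetical nested config used only for illustration
config = {"patch": {"mix": {"level": 50}}}
assert get_in(config, ["patch", "mix", "level"]) == 50

set_in(config, ["patch", "mix", "level"], 75)
assert config["patch"]["mix"]["level"] == 75
```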
| 16.642857 | 47 | 0.686695 | 36 | 233 | 4.361111 | 0.527778 | 0.11465 | 0.191083 | 0.165605 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015789 | 0.184549 | 233 | 13 | 48 | 17.923077 | 0.810526 | 0.111588 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.166667 | 0.833333 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
4ca5df5bdff1718e83e7bdcb46bc49f0dddfe483 | 74 | py | Python | day3/strings.py | simon21-meet/meet2019y1lab3 | b6548126e128a2ac85a8a25b2d47a8c990059379 | [
"MIT"
] | null | null | null | day3/strings.py | simon21-meet/meet2019y1lab3 | b6548126e128a2ac85a8a25b2d47a8c990059379 | [
"MIT"
] | null | null | null | day3/strings.py | simon21-meet/meet2019y1lab3 | b6548126e128a2ac85a8a25b2d47a8c990059379 | [
"MIT"
] | null | null | null | print('Hello World')
print("Hello World")
#print("Hello World")
| 4.933333 | 20 | 0.594595 | 9 | 74 | 4.888889 | 0.444444 | 0.454545 | 0.681818 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.22973 | 74 | 14 | 21 | 5.285714 | 0.77193 | 0.256757 | 0 | 0 | 0 | 0 | 0.511628 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
981b4a7dabdbf46998f362e40e76e85574473aaa | 32 | py | Python | tatum/__init__.py | grace43/tatum-python | 4884d52d02522b7c3075158cff9f0d5e874af6ac | [
"MIT"
] | 3 | 2021-01-11T11:38:07.000Z | 2021-08-07T05:34:55.000Z | tatum/__init__.py | grace43/tatum-python | 4884d52d02522b7c3075158cff9f0d5e874af6ac | [
"MIT"
] | null | null | null | tatum/__init__.py | grace43/tatum-python | 4884d52d02522b7c3075158cff9f0d5e874af6ac | [
"MIT"
] | 2 | 2021-04-29T11:49:32.000Z | 2022-03-10T18:05:18.000Z | from tatum.ledger import account | 32 | 32 | 0.875 | 5 | 32 | 5.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.09375 | 32 | 1 | 32 | 32 | 0.965517 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e22a5598cc13e0d29c552a7e887eebd5acdaf645 | 10,946 | py | Python | SP&OS/ass5/a.py | saurabhjha443/ascalibra | 202f6afc6ceacc75487604a380834e807d4a9038 | [
"MIT"
] | null | null | null | SP&OS/ass5/a.py | saurabhjha443/ascalibra | 202f6afc6ceacc75487604a380834e807d4a9038 | [
"MIT"
] | null | null | null | SP&OS/ass5/a.py | saurabhjha443/ascalibra | 202f6afc6ceacc75487604a380834e807d4a9038 | [
"MIT"
] | 1 | 2020-01-26T16:23:59.000Z | 2020-01-26T16:23:59.000Z | fptr = open('a.asm', 'r')
addressFile = open('output.txt', 'w')
prev_add = 0
len1 = 0
lit = []
statement_add = []
index_symb = 1
index_lit = 1
index_pool = 1
registers = {'AREG': 1, 'BREG': 2, 'CREG': 3, 'DREG': 4}
comp_code = {'LT': 1, 'LE': 2, 'EQ': 3, 'GT': 4, 'GE': 5, 'ANY': 6}
instru_statement = {'STOP': '00', 'ADD': '01', 'SUB': '02', 'MULT': '03', 'MOVER': '04', 'MOVEM': '05', 'COMP': '06',
'BC': '07', 'DIV': '08', 'READ': '09', 'PRINT': '10'}
pool_table = [['index', 'literal']]
lit_table = [['index', 'literal', 'address']]
symb = [['index', 'symbol', 'address']]
IC = [['address', 'mnemonic opcode', 'operand']]
for f in fptr:
words = f.upper().split()
if words[0][-1] == ':':
symb.append([index_symb, words[0][0:-1], prev_add])
index_symb += 1
# print(symb)
statement_add.append([words[0], prev_add])
if words[1] in ['STOP', 'ADD', 'SUB', 'MULT', 'MOVER', 'MOVEM', 'COMP', 'BC', 'DIV', 'READ', 'PRINT']:
if words[1] in ['ADD', 'SUB', 'MULT', 'DIV', 'MOVER', 'MOVEM']:
if words[2][5] == '=' and words[2][6] == "'" and words[2][8] == "'":
lit.append(words[2][5:])
IC.append([prev_add, ('IS', instru_statement[words[1]]),
((registers[words[2][0:4]]), ('L', words[2][7]))])
else:
IC.append([prev_add, ('IS', instru_statement[words[1]]),
((registers[words[2][0:4]]), ('S', [i[0] for i in symb if i[1] == words[2][5]]))])
addressFile.write(f[0:-1] + ' ' + str(prev_add) + '\n')
len1 = 1
prev_add = prev_add + len1
elif words[1] == 'COMP':
addressFile.write(f[0:-1] + ' ' + str(prev_add) + '\n')
# IC.append([prev_add, ('IS', instru_statement[words[1]]), ((registers[words[2][0:4]]), ('L', words[2][7]))])
len1 = 1
prev_add = prev_add + len1
elif words[1] == 'BC':
IC.append([prev_add, ('IS', instru_statement[words[1]]),
((comp_code[words[2]]), ('S', [i[0] for i in symb if i[1] == words[3]]))])
addressFile.write(f[0:-1] + ' ' + str(prev_add) + '\n')
len1 = 1
prev_add = prev_add + len1
elif words[1] in ['READ', 'PRINT', 'STOP']:
if words[1] == 'STOP':
IC.append([prev_add, ('IS', instru_statement[words[1]])])
else:
IC.append(
[prev_add, ('IS', instru_statement[words[1]]), ('S', [i[0] for i in symb if i[1] == words[2]])])
addressFile.write(f[0:-1] + ' ' + str(prev_add) + '\n')
len1 = 1
prev_add = prev_add + len1
elif words[2] in ['DS', 'DC']:
if words[2] == 'DS':
IC.append([prev_add, ('S', [i[0] for i in symb if i[1] == words[1]]), ('DL', '02'), ('C', words[3])])
symb.append([index_symb, words[1], prev_add])
index_symb += 1
addressFile.write(f[0:-1] + ' ' + str(prev_add) + '\n')
                len1 = int(words[3])
prev_add = prev_add + len1
elif words[2] == 'DC':
IC.append([prev_add, ('S', [i[0] for i in symb if i[1] == words[1]]), ('DL', '01'), ('C', words[3])])
symb.append([index_symb, words[1], prev_add])
index_symb += 1
addressFile.write(f[0:-1] + ' ' + str(prev_add) + '\n')
len1 = 1
prev_add = prev_add + len1
elif words[1] in ['START', 'END', 'ORIGIN', 'EQU', 'LTORG']:
if words[1] == 'START':
IC.append([prev_add, ('AD', '01'), ('C', words[2])])
addressFile.write(f)
len1 = 1
prev_add = int(words[2])
elif words[1] == 'LTORG':
IC.append([prev_add, ('AD', '05')])
addressFile.write(f)
for l in lit:
pool_table.append([index_pool, l])
lit_table.append([index_lit, l, prev_add])
index_lit += 1
len1 = 1
IC.append([prev_add, ('S',), ('DL', '01'), ('C', l)])
addressFile.write(l + ' ' + str(prev_add) + '\n')
                    prev_add += len1
lit = []
index_pool += 1
elif words[1] == 'END':
IC.append(['AD', '02'])
addressFile.write(f)
for l in lit:
pool_table.append([index_pool, l])
lit_table.append([index_lit, l, prev_add])
index_lit += 1
len1 = 1
IC.append([prev_add, ('S',), ('DL', '01'), ('C', l)])
addressFile.write(l + ' ' + str(prev_add) + '\n')
                    prev_add += len1
index_pool += 1
elif words[1] == 'ORIGIN':
IC.append([prev_add, ('AD', '03'), ('S',)])
len1 = 1
addressFile.write(f)
for statement, address in statement_add:
if statement in words[2]:
if '+' in words[2]:
a = words[2].find('+')
prev_add = address + int(words[2][a:])
if '-' in words[2]:
a = words[2].find('-')
prev_add = address - int(words[2][a:])
        elif words[1] == 'EQU':
            IC.append([prev_add, ('S', [i[0] for i in symb if i[1] == words[1]]), ('AD', '04')])
addressFile.write(f)
else:
statement_add.append([words[0], prev_add])
if words[0] in ['STOP', 'ADD', 'SUB', 'MULT', 'MOVER', 'MOVEM', 'COMP', 'BC', 'DIV', 'READ', 'PRINT']:
if words[0] in ['ADD', 'SUB', 'MULT', 'DIV', 'MOVER', 'MOVEM']:
if words[1][5] == '=' and words[1][6] == "'" and words[1][8] == "'":
lit.append(words[1][5:])
IC.append([prev_add, ('IS', instru_statement[words[0]]),
((registers[words[1][0:4]]), ('L', words[1][7]))])
else:
IC.append([prev_add, ('IS', instru_statement[words[0]]),
((registers[words[1][0:4]]), ('S', [i[0] for i in symb if i[1] == words[1][5]]))])
addressFile.write(f[0:-1] + ' ' + str(prev_add) + '\n')
len1 = 1
prev_add = prev_add + len1
elif words[0] == 'COMP':
addressFile.write(f[0:-1] + ' ' + str(prev_add) + '\n')
len1 = 1
prev_add = prev_add + len1
elif words[0] == 'BC':
IC.append([prev_add, ('IS', instru_statement[words[0]]),
((comp_code[words[1]]), ('S', [i[0] for i in symb if i[1] == words[2]]))])
addressFile.write(f[0:-1] + ' ' + str(prev_add) + '\n')
len1 = 1
prev_add = prev_add + len1
elif words[0] in ['READ', 'PRINT', 'STOP']:
if words[0] == 'STOP':
IC.append([prev_add, ('IS', instru_statement[words[0]])])
else:
IC.append(
[prev_add, ('IS', instru_statement[words[0]]), ('S', [i[0] for i in symb if i[1] == words[1]])])
addressFile.write(f[0:-1] + ' ' + str(prev_add) + '\n')
len1 = 1
prev_add = prev_add + len1
elif words[0] in ['START', 'END', 'ORIGIN', 'EQU', 'LTORG']:
if words[0] == 'START':
IC.append([prev_add, ('AD', '01'), ('C', words[1])])
addressFile.write(f)
len1 = 1
prev_add = int(words[1])
elif words[0] == 'LTORG':
IC.append([prev_add, ('AD', '05')])
addressFile.write(f)
for l in lit:
pool_table.append([index_pool, l])
lit_table.append([index_lit, l, prev_add])
index_lit += 1
len1 = 1
# IC.append([prev_add,])
IC.append([prev_add, ('S',), ('DL', '01'), ('C', l)])
addressFile.write(l + ' ' + str(prev_add) + '\n')
prev_add += len1
index_pool += 1
lit = []
elif words[0] == 'END':
IC.append(['AD', '02'])
addressFile.write(f)
for l in lit:
pool_table.append([index_pool, l])
lit_table.append([index_lit, l, prev_add])
index_lit += 1
len1 = 1
IC.append([prev_add, ('S',), ('DL', '01'), ('C', l)])
addressFile.write(l + ' ' + str(prev_add) + '\n')
prev_add += len1
index_pool += 1
elif words[0] == 'ORIGIN':
IC.append([prev_add, ('AD', '03'), ('S',)])
len1 = 1
addressFile.write(f)
for statement, address in statement_add:
if statement in words[1]:
if '+' in words[1]:
a = words[1].find('+')
prev_add = address + int(words[1][a:])
if '-' in words[1]:
a = words[1].find('-')
prev_add = address - int(words[1][a:])
        elif words[0] == 'EQU':
            IC.append([prev_add, ('S', [i[0] for i in symb if i[1] == words[0]]), ('AD', '04')])
addressFile.write(f)
elif words[1] in ['DS', 'DC']:
if words[1] == 'DS':
                IC.append([prev_add, ('S', [i[0] for i in symb if i[1] == words[0]]), ('DL', '02'), ('C', words[2])])
symb.append([index_symb, words[0], prev_add])
index_symb += 1
addressFile.write(f[0:-1] + ' ' + str(prev_add) + '\n')
                len1 = int(words[2])
prev_add = prev_add + len1
elif words[1] == 'DC':
                IC.append([prev_add, ('S', [i[0] for i in symb if i[1] == words[0]]), ('DL', '01'), ('C', words[2])])
symb.append([index_symb, words[0], prev_add])
index_symb += 1
addressFile.write(f[0:-1] + ' ' + str(prev_add) + '\n')
len1 = 1
prev_add = prev_add + len1
fptr.close()
addressFile.close()
print("literal table is:", lit_table)
print("symbol table is:", symb)
print("pool table is:", pool_table)
# print("Intermediate code is:",IC)
for i in IC:
print(i)
| 46.777778 | 125 | 0.41321 | 1,322 | 10,946 | 3.30711 | 0.076399 | 0.144099 | 0.076853 | 0.096066 | 0.838747 | 0.820677 | 0.792543 | 0.792543 | 0.76624 | 0.665599 | 0 | 0.047806 | 0.396126 | 10,946 | 233 | 126 | 46.978541 | 0.613616 | 0.016079 | 0 | 0.550926 | 0 | 0 | 0.065038 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.018519 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e252168f0b22c790dbcc4bd7cf7ee1da07717b9d | 146 | py | Python | slowai/core.py | jackiey99/slowai | bb2e8ff34df4f1809325d8e37d1ee5c568e83294 | [
"Apache-2.0"
] | null | null | null | slowai/core.py | jackiey99/slowai | bb2e8ff34df4f1809325d8e37d1ee5c568e83294 | [
"Apache-2.0"
] | 2 | 2021-09-28T05:42:42.000Z | 2022-02-26T10:04:21.000Z | slowai/core.py | jackiey99/slowai | bb2e8ff34df4f1809325d8e37d1ee5c568e83294 | [
"Apache-2.0"
] | null | null | null | # AUTOGENERATED! DO NOT EDIT! File to edit: 00_core.ipynb (unless otherwise specified).
__all__ = ['add']
# Cell
def add(a, b):
return a + b | 20.857143 | 87 | 0.671233 | 23 | 146 | 4.043478 | 0.826087 | 0.043011 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017094 | 0.19863 | 146 | 7 | 88 | 20.857143 | 0.777778 | 0.616438 | 0 | 0 | 1 | 0 | 0.055556 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.333333 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
e270936d3d8a55556fc4c43a6e7a4adb4efb6b4b | 132 | py | Python | evil/test_module.py | justanr/evil_python | e11a66eeab277dd4f9972c38178ace64eb5cd875 | [
"MIT"
] | null | null | null | evil/test_module.py | justanr/evil_python | e11a66eeab277dd4f9972c38178ace64eb5cd875 | [
"MIT"
] | null | null | null | evil/test_module.py | justanr/evil_python | e11a66eeab277dd4f9972c38178ace64eb5cd875 | [
"MIT"
] | null | null | null | from .module import module
will_import = 1
with module('will_import'):
x = 3
class Derper:
pass
_secret = 1

# ---- semester4/oop/lab1/ukrnet_news/types/__init__.py (no1sebomb/University-Labs, WTFPL) ----
# coding=utf-8
from .news import News

# ---- example/tests/core/m2core_int_enum_tests.py (mdutkin/m2core, MIT) ----
__author__ = 'Maxim Dutkin (max@dutkin.ru)'
import unittest
from m2core.common.int_enum import M2CoreIntEnum
class M2CoreIntEnumTest(unittest.TestCase):
def setUp(self):
class SampleEnum(M2CoreIntEnum):
ONE = 1
TWO = 2
THREE = 3
FOUR = 4
FIVE = 5
SIX = 6
SEVEN = 7
EIGHT = 8
NINE = 9
TEN = 10
self.sample_enum = SampleEnum
def test_get_by_int(self):
self.assertEqual(self.sample_enum.ONE, self.sample_enum.get(1))
self.assertEqual(self.sample_enum.TWO, self.sample_enum.get(2))
self.assertEqual(self.sample_enum.THREE, self.sample_enum.get(3))
self.assertEqual(self.sample_enum.FOUR, self.sample_enum.get(4))
self.assertEqual(self.sample_enum.FIVE, self.sample_enum.get(5))
self.assertEqual(self.sample_enum.SIX, self.sample_enum.get(6))
self.assertEqual(self.sample_enum.SEVEN, self.sample_enum.get(7))
self.assertEqual(self.sample_enum.EIGHT, self.sample_enum.get(8))
self.assertEqual(self.sample_enum.NINE, self.sample_enum.get(9))
self.assertEqual(self.sample_enum.TEN, self.sample_enum.get(10))
self.assertTrue(self.sample_enum.get(11) is None)
self.assertTrue(self.sample_enum.get(-1) is None)
def test_get_raises(self):
with self.assertRaises(Exception):
self.sample_enum.get(1.0)
with self.assertRaises(Exception):
self.sample_enum.get(1.1)
with self.assertRaises(Exception):
self.sample_enum.get(True)
with self.assertRaises(Exception):
self.sample_enum.get(object)
def test_get_by_str(self):
self.assertEqual(self.sample_enum.ONE, self.sample_enum.get('ONE'))
self.assertEqual(self.sample_enum.TWO, self.sample_enum.get('TWO'))
self.assertEqual(self.sample_enum.THREE, self.sample_enum.get('THREE'))
self.assertEqual(self.sample_enum.FOUR, self.sample_enum.get('FOUR'))
self.assertEqual(self.sample_enum.FIVE, self.sample_enum.get('FIVE'))
self.assertEqual(self.sample_enum.SIX, self.sample_enum.get('SIX'))
self.assertEqual(self.sample_enum.SEVEN, self.sample_enum.get('SEVEN'))
self.assertEqual(self.sample_enum.EIGHT, self.sample_enum.get('EIGHT'))
self.assertEqual(self.sample_enum.NINE, self.sample_enum.get('NINE'))
self.assertEqual(self.sample_enum.TEN, self.sample_enum.get('TEN'))
self.assertTrue(self.sample_enum.get('ELEVEN') is None)
self.assertTrue(self.sample_enum.get('NON_EXISTENT_MEMBER') is None)
def test_all(self):
self.assertEqual([
self.sample_enum.ONE,
self.sample_enum.TWO,
self.sample_enum.THREE,
self.sample_enum.FOUR,
self.sample_enum.FIVE,
self.sample_enum.SIX,
self.sample_enum.SEVEN,
self.sample_enum.EIGHT,
self.sample_enum.NINE,
self.sample_enum.TEN,
], self.sample_enum.all())
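The tests above pin down `M2CoreIntEnum.get`: it resolves a member by exact int value or by name string, returns `None` on a miss, and raises for floats, bools, and arbitrary objects. A hypothetical stand-in (the class and method names below are mine, not m2core's actual implementation) that satisfies that contract:

```python
from enum import IntEnum


class LookupIntEnum(IntEnum):
    """Sketch of the interface the tests above exercise; not m2core's code."""

    @classmethod
    def get(cls, key):
        # bool subclasses int, so reject it explicitly, as the tests require
        if isinstance(key, bool) or not isinstance(key, (int, str)):
            raise TypeError('expected int or str, got %s' % type(key).__name__)
        if isinstance(key, int):
            return next((m for m in cls if m.value == key), None)
        return cls.__members__.get(key)

    @classmethod
    def all(cls):
        return list(cls)


class Color(LookupIntEnum):
    RED = 1
    GREEN = 2
```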

# ---- PyObjCTest/test_nsform.py (linuxfood/pyobjc-framework-Cocoa-test, MIT) ----
import AppKit
from PyObjCTools.TestSupport import TestCase
class TestNSForm(TestCase):
def testMethods(self):
self.assertArgIsBOOL(AppKit.NSForm.setBordered_, 0)
self.assertArgIsBOOL(AppKit.NSForm.setBezeled_, 0)

# ---- openconcept/utilities/VTOLPowerAndThrust.py (berlinexpress174/openconcept_winter, MIT) ----
from __future__ import division
import numpy as np
import openmdao.api as om
from openmdao.api import Group, ExplicitComponent, IndepVarComp, BalanceComp, ExecComp
from openconcept.analysis.atmospherics.density_comp import DensityComp
from openconcept.analysis.atmospherics.compute_atmos_props import ComputeAtmosphericProperties
from openconcept.components.battery import SOCBattery
from openconcept.utilities.math import AddSubtractComp
from openconcept.utilities.dvlabel import DVLabel
class PowerAndThrustCal(Group):
def initialize(self):
self.options.declare('num_nodes',default=1,desc="Number of mission analysis points to run")
def setup(self):
nn = self.options['num_nodes']
dvlist = [['ac|weights|W_battery','batt_weight',500,'kg'],
['ac|propulsion|battery|specific_energy','specific_energy',300,'W*h/kg'],]
self.add_subsystem('dvs',DVLabel(dvlist),promotes_inputs=["*"],promotes_outputs=["*"])
#introduce model components
self.add_subsystem('CompDiskLoad', ComputeDiskLoad(num_nodes=nn),promotes_inputs=['*'], promotes_outputs=['*'])
self.add_subsystem('CompHoverPower',HoverPower(num_nodes=nn),promotes_inputs=['*'], promotes_outputs=['*'])
self.add_subsystem('CompRotorInducedVelocity', RotorHoverVelocity(num_nodes=nn),promotes_inputs=['*'], promotes_outputs=['*'])
self.add_subsystem('CompVelocityRatio', VelocityRatio(num_nodes=nn),promotes_inputs=['*'], promotes_outputs=['*'])
self.add_subsystem('CompVerticalPower',VerticalPower(num_nodes=nn),promotes_inputs=['*'], promotes_outputs=['*'])
self.add_subsystem('CompTotalVerticalPowerandThrust',TotalPowerAndThrustVert(num_nodes=nn),promotes_inputs=['*'], promotes_outputs=['*'])
#self.add_subsystem('CompInflowVelocity',InflowVelocity(num_nodes=nn),promotes_inputs=['*'], promotes_outputs=['*'])
self.add_subsystem('CompThrottle',Throttle(num_nodes=nn),promotes_inputs=['*'], promotes_outputs=['throttle'])
# rotor efficiency is Figure of merit
self.add_subsystem('batt1', SOCBattery(num_nodes=nn, efficiency=0.97),promotes_inputs=["duration","specific_energy"])
self.connect('power','batt1.elec_load')
self.connect('batt_weight','batt1.battery_weight')
class PowerAndThrustCruiseMultiRotor(Group):
"""This is an example model of a MultiRotor propulsion system.
"""
def initialize(self):
self.options.declare('num_nodes',default=1,desc="Number of mission analysis points to run")
def setup(self):
nn = self.options['num_nodes']
dvlist = [['ac|weights|W_battery','batt_weight',500,'kg'],
['ac|propulsion|battery|specific_energy','specific_energy',300,'W*h/kg'],]
self.add_subsystem('dvs',DVLabel(dvlist),promotes_inputs=["*"],promotes_outputs=["*"])
#introduce model components
self.add_subsystem('CompDiskLoad', ComputeDiskLoad(num_nodes=nn),promotes_inputs=['*'], promotes_outputs=['*'])
self.add_subsystem('CompHoverPower',HoverPower(num_nodes=nn),promotes_inputs=['*'], promotes_outputs=['*'])
self.add_subsystem('CompAdvancedRatioNu', AdvancedRatio(num_nodes=nn),promotes_inputs=['*'], promotes_outputs=['*'])
self.add_subsystem('CompCT', ThrustCoef(num_nodes=nn),promotes_inputs=['*'], promotes_outputs=['*'])
self.add_subsystem('CompHoverInflowRatio', HoverInflowRatio(num_nodes=nn),promotes_inputs=['*'], promotes_outputs=['*'])
self.add_subsystem('CompInflowRatio', InflowRatio_implicit(num_nodes=nn),promotes_inputs=['*'], promotes_outputs=['*'])
self.add_subsystem('CompCruisePower', CruiserPower(num_nodes=nn),promotes_inputs=['*'], promotes_outputs=['*'])
self.add_subsystem('CompTotalThrustAndPowerInCruise', TotalPowerAndThrustCruise(num_nodes=nn),promotes_inputs=['*'], promotes_outputs=['*'])
self.add_subsystem('CompCruiseThrottle', CruiseThrottle(num_nodes=nn),promotes_inputs=['*'], promotes_outputs=['throttle'])
# rotor efficiency is Figure of merit
self.add_subsystem('batt1', SOCBattery(num_nodes=nn, efficiency=0.97),promotes_inputs=["duration","specific_energy"])
self.connect('power','batt1.elec_load')
self.connect('batt_weight','batt1.battery_weight')
class TotalPowerAndThrustVert(om.ExplicitComponent):
"""
Compute the thrust needed for vertical climb and descent
Inputs
------
ac|weights|MTOW : float
        MTOW (scalar, lb)
P_vert : float
        Single-rotor power required for vertical climb and descent (vector, hp)
ac|propulsion|propeller|FM : float
Figure of Merit (scalar, dimensionless)
fltcond|vs : float
Vertical speed (vector, ft/s)
ac|propulsion|propeller|num_rotors
Number of rotors (scalar, dimensionless)
ac|propulsion|propeller|coaxialprop : int
If the propeller/rotor is coaxial layout or not (scalar, dimensionless)
Output
------
thrust_total : vector
        Total thrust generated from all rotors (vector, lbf)
power : vector
Power needed from all motor (vector, hp)
Options
-------
num_nodes : int
Number of analysis points to run (sets vec length) (default 1)
"""
def initialize(self):
self.options.declare('num_nodes', default=1, desc="Number of nodes to compute")
def setup(self):
nn = self.options['num_nodes']
arange = np.arange(0, nn)
self.add_input('ac|weights|MTOW', units='lb', desc='MTOW')
self.add_input('P_vert', shape = (nn,), units='hp', desc='Power needed for vertical climb and descent')
self.add_input('ac|propulsion|propeller|FM', units=None, desc='Figure of Merit')
self.add_input('fltcond|vs', shape = (nn,), units='ft/s', desc='Vertical speed')
self.add_input('ac|propulsion|propeller|num_rotors', desc='Number_of_rotor')
self.add_input('ac|propulsion|motor|rating', units='hp', desc='Design motor rating')
self.add_input('ac|propulsion|propeller|coaxialprop', desc='coaxial layout or not')
self.add_output('thrust', shape = (nn,), units='lbf',desc = 'Total thrust generated from all rotor')
self.add_output('power', shape = (nn,), units='hp',desc = 'Power needed from all motor')
self.declare_partials(['thrust'], ['ac|weights|MTOW'], rows=arange, cols=np.zeros(nn))
self.declare_partials(['power'], ['P_vert'], rows=arange, cols=arange)
self.declare_partials(['power'], ['ac|propulsion|propeller|num_rotors'], rows=arange, cols=np.zeros(nn))
def compute(self, inputs, outputs):
outputs['thrust'] = inputs['ac|weights|MTOW']
if inputs['ac|propulsion|propeller|coaxialprop'] == 0:
outputs['power'] = inputs['ac|propulsion|propeller|num_rotors'] * inputs['P_vert']
elif inputs['ac|propulsion|propeller|coaxialprop'] == 1:
outputs['power'] = inputs['ac|propulsion|propeller|num_rotors']/2*inputs['P_vert']*1.281
def compute_partials(self, inputs, partials):
partials['thrust', 'ac|weights|MTOW'] = 1
if inputs['ac|propulsion|propeller|coaxialprop'] == 0:
partials['power', 'P_vert'] = inputs['ac|propulsion|propeller|num_rotors']
partials['power', 'ac|propulsion|propeller|num_rotors'] = inputs['P_vert']
elif inputs['ac|propulsion|propeller|coaxialprop'] == 1:
partials['power', 'P_vert'] = 1.281 * inputs['ac|propulsion|propeller|num_rotors']/2
partials['power', 'ac|propulsion|propeller|num_rotors'] = 0.5*inputs['P_vert']*1.281
class TotalPowerAndThrustCruise(om.ExplicitComponent):
"""
Compute the thrust needed for cruise
Inputs
------
ac|weights|MTOW : float
        MTOW (scalar, lb)
P_cruise : float
Power needed for cruise for one motor (vector, h.p.)
ac|propulsion|propeller|num_rotors : int
Number of rotors (scalar, dimensionless)
ac|propulsion|propeller|coaxialprop : int
If the propeller/rotor is coaxial layout or not (scalar, dimensionless)
Output
------
thrust : vector
        Total thrust generated from all rotors in cruise (vector, lbf)
power : vector
        Power needed from all motors in cruise (vector, hp)
Options
-------
num_nodes : int
Number of analysis points to run (sets vec length) (default 1)
"""
def initialize(self):
self.options.declare('num_nodes', default=1, desc="Number of nodes to compute")
def setup(self):
nn = self.options['num_nodes']
arange = np.arange(0, nn)
self.add_input('ac|weights|MTOW', units='lb', desc='MTOW')
        self.add_input('P_cruise', shape = (nn,), units='hp', desc='Power needed in cruise for one motor')
self.add_input('ac|propulsion|propeller|num_rotors', desc='Number_of_rotor')
self.add_input('ac|propulsion|propeller|coaxialprop', desc='coaxial layout or not')
self.add_output('thrust', shape = (nn,), units='lbf',desc = 'Total thrust generated from all rotor in cruise ')
self.add_output('power', shape = (nn,), units='hp',desc = 'Power needed from all motor in cruise ')
self.declare_partials(['thrust'], ['ac|weights|MTOW'], rows=arange, cols=np.zeros(nn))
self.declare_partials(['power'], ['P_cruise'], rows=arange, cols=arange)
self.declare_partials(['power'], ['ac|propulsion|propeller|num_rotors'], rows=arange, cols=np.zeros(nn))
def compute(self, inputs, outputs):
outputs['thrust'] = inputs['ac|weights|MTOW']
if inputs['ac|propulsion|propeller|coaxialprop'] == 0:
outputs['power'] = inputs['ac|propulsion|propeller|num_rotors']*inputs['P_cruise']
elif inputs['ac|propulsion|propeller|coaxialprop'] == 1:
outputs['power'] = inputs['ac|propulsion|propeller|num_rotors']/2*inputs['P_cruise']*1.281
def compute_partials(self, inputs, partials):
partials['thrust', 'ac|weights|MTOW'] = 1
if inputs['ac|propulsion|propeller|coaxialprop'] == 0:
partials['power', 'P_cruise'] = inputs['ac|propulsion|propeller|num_rotors']
partials['power', 'ac|propulsion|propeller|num_rotors'] = inputs['P_cruise']
elif inputs['ac|propulsion|propeller|coaxialprop'] == 1:
partials['power', 'P_cruise'] = 1.281 * inputs['ac|propulsion|propeller|num_rotors']/2
partials['power', 'ac|propulsion|propeller|num_rotors'] = 0.5*inputs['P_cruise']*1.281
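Both total-power components above apply the same branch: with independent rotors the per-rotor power is simply multiplied by the rotor count, while a coaxial layout uses half as many motor stations but pays a 1.281 interference factor per pair. A standalone sketch of that logic (function name is mine; the 1.281 factor is taken from the components above):

```python
def total_cruise_power_hp(p_per_rotor_hp, n_rotors, coaxial=False):
    """Total motor power (hp): n independent rotors, or n/2 coaxial pairs
    with a 1.281 interference penalty per pair."""
    if coaxial:
        return n_rotors / 2.0 * p_per_rotor_hp * 1.281
    return n_rotors * p_per_rotor_hp
```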
class CruiseThrottle(om.ExplicitComponent):
"""
Compute the throttle needed for cruise
Inputs
------
P_cruise : float
Power needed from all motor in cruise (vector, h.p.)
ac|propulsion|motor|rating
Design motor rating (vector, hp)
Output
------
throttle : float
Power control setting. Should be in between [0, 1]. (vector, dimensionless)
Options
-------
num_nodes : int
Number of analysis points to run (sets vec length) (default 1)
eta_m : float
        Motor efficiency (default 0.97, dimensionless)
"""
def initialize(self):
self.options.declare('num_nodes', default=1, desc="Number of nodes to compute")
self.options.declare('efficiency', default=0.97, desc="Motor efficiency")
def setup(self):
nn = self.options['num_nodes']
arange = np.arange(0, nn)
        self.add_input('P_cruise', shape = (nn,), units='hp', desc='Power needed in cruise for one motor')
self.add_input('ac|propulsion|motor|rating', units='hp', desc='Design motor rating')
self.add_output('throttle', shape = (nn,), units=None,desc = 'Power control setting')
self.declare_partials(['throttle'], ['P_cruise'], rows=arange, cols=arange)
self.declare_partials(['throttle'], ['ac|propulsion|motor|rating'], rows=arange, cols=np.zeros(nn))
def compute(self, inputs, outputs):
eta_m = self.options['efficiency']
outputs['throttle'] = inputs['P_cruise'] / (inputs['ac|propulsion|motor|rating'] * eta_m)
def compute_partials(self, inputs, partials):
eta_m = self.options['efficiency']
partials['throttle', 'P_cruise'] = 1/(inputs['ac|propulsion|motor|rating'] * eta_m)
partials['throttle', 'ac|propulsion|motor|rating'] = - (inputs['P_cruise']/(eta_m)) * inputs['ac|propulsion|motor|rating'] ** (-2)
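The throttle computed in `CruiseThrottle` is just required power over rated motor output, scaled by motor efficiency. A minimal standalone version of that expression (function name is mine):

```python
def throttle_setting(p_required_hp, motor_rating_hp, eta_m=0.97):
    """Power control setting, ideally in [0, 1]: required shaft power over
    the motor rating after the motor efficiency eta_m."""
    return p_required_hp / (motor_rating_hp * eta_m)
```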
class ComputeDiskLoad(om.ExplicitComponent):
"""
Compute the disk loading for single rotor
Inputs
------
ac|weights|MTOW : float
MTOW (scalar, lb)
ac|propulsion|propeller|num_rotors : int
Number of rotors (scalar, dimensionless)
ac|propulsion|propeller|diameter
Rotor diameter (scalar, ft)
Output
------
diskload : float
        Disk load for a single rotor (vector, lbf/ft**2)
Options
-------
num_nodes : int
Number of analysis points to run (sets vec length) (default 1)
"""
def initialize(self):
self.options.declare('num_nodes', default=1, desc="Number of nodes to compute")
def setup(self):
nn = self.options['num_nodes']
arange = np.arange(0, nn)
self.add_input('ac|weights|MTOW', units='lb', desc='MTOW')
self.add_input('ac|propulsion|propeller|num_rotors', desc='Number_of_rotor')
self.add_input('ac|propulsion|propeller|diameter', units='ft', desc='Rotor diameter')
        self.add_output('diskload', shape = (nn,), units='lbf/ft**2',desc = 'Disk load per propeller')
self.declare_partials(['diskload'], ['ac|weights|MTOW'], rows=arange, cols=np.zeros(nn))
self.declare_partials(['diskload'], ['ac|propulsion|propeller|num_rotors'], rows=arange, cols=np.zeros(nn))
self.declare_partials(['diskload'], ['ac|propulsion|propeller|diameter'], rows=arange, cols=np.zeros(nn))
def compute(self, inputs, outputs):
outputs['diskload'] = (inputs['ac|weights|MTOW']) / (((inputs['ac|propulsion|propeller|diameter']/2) ** 2) * np.pi * inputs['ac|propulsion|propeller|num_rotors'])
def compute_partials(self, inputs, partials):
partials['diskload', 'ac|weights|MTOW'] = 1.27323954473516 / (inputs['ac|propulsion|propeller|diameter'] ** 2 * inputs['ac|propulsion|propeller|num_rotors'])
partials['diskload', 'ac|propulsion|propeller|diameter'] = -2.54647908947033 * inputs['ac|weights|MTOW'] / (inputs['ac|propulsion|propeller|diameter'] ** 3 * inputs['ac|propulsion|propeller|num_rotors'])
partials['diskload', 'ac|propulsion|propeller|num_rotors'] = -1.27323954473516 * inputs['ac|weights|MTOW'] / (inputs['ac|propulsion|propeller|diameter'] ** 2 * inputs['ac|propulsion|propeller|num_rotors'] ** 2)
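The disk loading above is weight over total swept area, and the hard-coded partial coefficients (1.27323954473516 = 4/pi, 2.54647908947033 = 8/pi) follow from differentiating that expression. A plain-NumPy sketch outside the OpenMDAO component (function names are mine) that can be checked against a finite difference:

```python
import numpy as np


def disk_load(mtow_lb, n_rotors, diam_ft):
    """Disk loading per rotor (lbf/ft**2): weight over total swept area."""
    area_per_rotor = np.pi * (diam_ft / 2.0) ** 2
    return mtow_lb / (area_per_rotor * n_rotors)


def d_diskload_d_diam(mtow_lb, n_rotors, diam_ft):
    """Analytic partial w.r.t. diameter: -8*W / (pi * d**3 * n)."""
    return -8.0 * mtow_lb / (np.pi * diam_ft ** 3 * n_rotors)
```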
class HoverPower(om.ExplicitComponent):
"""
Calculates the minimum power required for single rotor to produce thrust.
Inputs
------
ac|weights|MTOW : float
MTOW (scalar, lb)
ac|propulsion|propeller|num_rotors : int
Number of rotors (scalar, dimensionless)
diskload : float
        Disk load for one rotor (vector, lbf/ft**2)
ac|propulsion|propeller|FM : float
Figure of Merit (scalar, dimensionless)
Output
------
P_Hover : float
Ideal power, P_Ideal (h.p.) = thrust * sqrt(diskload) / 38, (vector, h.p.)
P_Hover = P_Ideal / FM
Options
-------
num_nodes : int
Number of analysis points to run (sets vec length) (default 1)
"""
def initialize(self):
self.options.declare('num_nodes', default=1, desc="Number of nodes to compute")
def setup(self):
nn = self.options['num_nodes']
arange = np.arange(0, nn)
self.add_input('ac|weights|MTOW', units='lb', desc='MTOW')
self.add_input('ac|propulsion|propeller|num_rotors', units = None, desc='Number_of_rotor')
self.add_input('diskload', shape = (nn,), units='lbf/ft**2', desc = 'Disk load per rotor')
self.add_input('ac|propulsion|propeller|FM', units=None, desc = 'Figure of merit')
self.add_output('P_Hover', shape = (nn,), units='hp',desc = 'Ideal hover power')
self.declare_partials(['P_Hover'], ['ac|weights|MTOW'], rows=arange, cols=np.zeros(nn))
self.declare_partials(['P_Hover'], ['ac|propulsion|propeller|num_rotors'], rows=arange, cols=np.zeros(nn))
self.declare_partials(['P_Hover'], ['diskload'], rows=arange, cols=arange)
self.declare_partials(['P_Hover'], ['ac|propulsion|propeller|FM'], rows=arange, cols=np.zeros(nn))
def compute(self, inputs, outputs):
Thrust = (inputs['ac|weights|MTOW']/inputs['ac|propulsion|propeller|num_rotors'])
P_act = Thrust * np.sqrt(inputs['diskload']) / (38*inputs['ac|propulsion|propeller|FM'])
outputs['P_Hover'] = P_act
def compute_partials(self, inputs, partials):
partials['P_Hover', 'ac|weights|MTOW'] = (inputs['diskload']**0.5)/(38*inputs['ac|propulsion|propeller|num_rotors']*inputs['ac|propulsion|propeller|FM'])
partials['P_Hover', 'ac|propulsion|propeller|num_rotors'] = -inputs['ac|weights|MTOW']*inputs['diskload']**0.5/(38*inputs['ac|propulsion|propeller|num_rotors']**2*inputs['ac|propulsion|propeller|FM'])
partials['P_Hover', 'diskload'] = 0.0131578947368421*inputs['ac|weights|MTOW']/(inputs['ac|propulsion|propeller|num_rotors']*inputs['diskload']**0.5*inputs['ac|propulsion|propeller|FM'])
partials['P_Hover', 'ac|propulsion|propeller|FM'] = -(inputs['ac|weights|MTOW']*inputs['diskload']**0.5)/(38*inputs['ac|propulsion|propeller|num_rotors'])*(inputs['ac|propulsion|propeller|FM']**(-2))
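The `P_Hover` expression is the classic imperial-units sizing rule: ideal power in hp equals thrust (lbf) times the square root of disk loading (lbf/ft**2) divided by 38 (a constant that corresponds to sea-level air density), then divided by the figure of merit. A standalone sketch of that one-liner (function name is mine):

```python
import numpy as np


def hover_power_hp(thrust_lbf, disk_load_psf, figure_of_merit):
    """Actual hover power (hp): T * sqrt(DL) / 38 divided by the figure of
    merit; T in lbf, DL in lbf/ft**2, 38 implies sea-level density."""
    return thrust_lbf * np.sqrt(disk_load_psf) / (38.0 * figure_of_merit)
```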
class RotorHoverVelocity(om.ExplicitComponent):
"""
Computes the rotor induced speed
Inputs
------
thrust : float
Single propeller thrust (scalar, lb)
ac|propulsion|propeller|num_rotors
Number of rotors (scalar, dimensionless)
ac|propulsion|propeller|diameter
Rotor diameter (scalar, ft)
fltcond|rho : float
Density (vector, slug/ft**3)
Output
------
V_hover : float
        Momentum-theory rotor induced velocity in hover; defined positive downward (vector, ft/s)
Options
-------
num_nodes : int
Number of analysis points to run (sets vec length) (default 1)
"""
def initialize(self):
self.options.declare('num_nodes', default=1, desc="Number of nodes to compute")
def setup(self):
nn = self.options['num_nodes']
arange = np.arange(0, nn)
self.add_input('ac|weights|MTOW', units='lb', desc='MTOW')
#self.add_input('thrust', shape = (nn,), units='lbf', desc='single motor thrust')
self.add_input('ac|propulsion|propeller|num_rotors', units = None, desc='Number of rotor')
self.add_input('ac|propulsion|propeller|diameter', units = 'ft', desc='Rotor diameter')
self.add_input('fltcond|rho', shape=(nn,), units='slug/ft**3', desc = 'air density')
# 0.002377 is the default value. Can be replaced by other values.
self.add_output('V_hover', shape = (nn,), units='ft/s', desc = 'Rotor induced speed')
self.declare_partials(['V_hover'], ['ac|weights|MTOW'], rows=arange, cols=np.zeros(nn))
#self.declare_partials(['V_hover'], ['thrust'], rows=arange, cols=arange)
self.declare_partials(['V_hover'], ['ac|propulsion|propeller|num_rotors'], rows=arange, cols=np.zeros(nn))
self.declare_partials(['V_hover'], ['ac|propulsion|propeller|diameter'], rows=arange, cols=np.zeros(nn))
self.declare_partials(['V_hover'], ['fltcond|rho'], rows=arange, cols=arange)
def compute(self, inputs, outputs):
#print(inputs['fltcond|rho'])
A = (inputs['ac|propulsion|propeller|diameter'] / 2) ** 2 * np.pi
#outputs['V_hover'] = np.sqrt((inputs['thrust']/ inputs['ac|propulsion|propeller|num_rotors'])/(2 * inputs['fltcond|rho'] * A))
outputs['V_hover'] = np.sqrt((inputs['ac|weights|MTOW']/ inputs['ac|propulsion|propeller|num_rotors'])/(2 * inputs['fltcond|rho'] * A))
def compute_partials(self, inputs, partials):
#partials['V_hover', 'thrust'] = 0.398942280401433*(inputs['thrust']/(inputs['ac|propulsion|propeller|num_rotors']*inputs['fltcond|rho']*inputs['ac|propulsion|propeller|diameter']**2))**0.5/inputs['thrust']
partials['V_hover', 'ac|weights|MTOW'] = 0.398942280401433*(inputs['ac|weights|MTOW']/(inputs['ac|propulsion|propeller|num_rotors']*inputs['fltcond|rho']*inputs['ac|propulsion|propeller|diameter']**2))**0.5/inputs['ac|weights|MTOW']
partials['V_hover', 'ac|propulsion|propeller|num_rotors'] = -0.398942280401433*(inputs['ac|weights|MTOW']/(inputs['ac|propulsion|propeller|num_rotors']*inputs['fltcond|rho']*inputs['ac|propulsion|propeller|diameter']**2))**0.5/inputs['ac|propulsion|propeller|num_rotors']
partials['V_hover', 'ac|propulsion|propeller|diameter'] = -0.797884560802865*(inputs['ac|weights|MTOW']/(inputs['ac|propulsion|propeller|num_rotors']*inputs['fltcond|rho']*inputs['ac|propulsion|propeller|diameter']**2))**0.5/inputs['ac|propulsion|propeller|diameter']
#partials['V_hover', 'fltcond|rho'] = 0.5* ( inputs['ac|weights|MTOW']/(2*A*inputs['ac|propulsion|propeller|num_rotors']) ) **(0.5) * (1/inputs['fltcond|rho'])**(-0.5) * (-1/inputs['fltcond|rho']**2)
partials['V_hover', 'fltcond|rho'] = -0.398942280401433*(inputs['ac|weights|MTOW']/(inputs['ac|propulsion|propeller|num_rotors']*inputs['fltcond|rho']*inputs['ac|propulsion|propeller|diameter']**2))**0.5/inputs['fltcond|rho']
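The `V_hover` output is the momentum-theory induced velocity in hover, v_h = sqrt(T / (2 * rho * A)), with per-rotor thrust taken as MTOW divided by the rotor count. A self-contained sketch of the same formula (function name is mine):

```python
import numpy as np


def hover_induced_velocity(thrust_lbf, rho_slug_ft3, diam_ft):
    """Momentum-theory hover induced velocity, v_h = sqrt(T / (2*rho*A)), ft/s."""
    area = np.pi * (diam_ft / 2.0) ** 2
    return np.sqrt(thrust_lbf / (2.0 * rho_slug_ft3 * area))
```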
class InflowVelocity(om.ExplicitComponent):
"""
Computes the rotor inflow velocity for the thrust estimation.
Inputs
------
fltcond|vs : float
Vertical speed (vector, ft/s)
V_hover : float
        Rotor induced velocity in hover, defined positive downward (vector, ft/s)
V_ClimbRatio : float
The velocity ratio between climb rate and induced velocity (vector, dimensionless)
Output
------
V_inflow : float
Net inflow velocity (vector, ft/s)
Options
-------
num_nodes : int
Number of analysis points to run (sets vec length) (default 1)
"""
def initialize(self):
self.options.declare('num_nodes', default=1, desc="Number of nodes to compute")
def setup(self):
nn = self.options['num_nodes']
arange = np.arange(0, nn)
#self.add_input('ac|weights|MTOW', units='lb', desc='MTOW')
self.add_input('fltcond|vs', shape = (nn,), units='ft/s', desc='vertical speed')
self.add_input('V_hover', shape = (nn,), units='ft/s', desc='rotor induced velocity')
self.add_input('V_ClimbRatio', shape = (nn,), units=None, desc='climb velocity over hover velocity')
self.add_output('V_inflow', shape = (nn,), units='ft/s', desc = 'net inflow velocity')
self.declare_partials(['V_inflow'], ['fltcond|vs'], rows=arange, cols=arange)
self.declare_partials(['V_inflow'], ['V_hover'], rows=arange, cols=arange)
self.declare_partials(['V_inflow'], ['V_ClimbRatio'], rows=arange, cols=arange)
def compute(self, inputs, outputs):
for ii in range(len(inputs['V_ClimbRatio'])):
if inputs['V_ClimbRatio'][ii] > 0:
# Rotorcraft Aeromechanics by Wayne johnson, Cambridge P94. Eqn (4.45)
outputs['V_inflow'] = inputs['fltcond|vs']/2 + np.sqrt((inputs['fltcond|vs']/2)**2+inputs['V_hover']**2)
elif inputs['V_ClimbRatio'][ii] < 0 and inputs['V_ClimbRatio'][ii] >= -2 :
                # Self-made surrogate model from "Model for Vortex Ring State Influence on Rotorcraft Flight Dynamics", Fig. 37, VRS model, Vx/Vh = 0
outputs['V_inflow'] = inputs['V_hover'] * (0.914*inputs['V_ClimbRatio']**5 + 3.289*inputs['V_ClimbRatio']**4 + 4.587*inputs['V_ClimbRatio']**3 + 3.518*inputs['V_ClimbRatio']**2 + 1.267*inputs['V_ClimbRatio'] + 1.004 )
else:
                raise RuntimeError('Turbulent and windmill-brake states are not modeled; those two algorithms are under development')
#print('v_inflow',outputs['V_inflow'])
def compute_partials(self, inputs, partials):
for ii in range(len(inputs['V_ClimbRatio'])):
if inputs['V_ClimbRatio'][ii] > 0:
#print('case1')
                partials['V_inflow', 'fltcond|vs'] = 1/2 + 1/4 * inputs['fltcond|vs'] * ((inputs['fltcond|vs']/2)**2 + inputs['V_hover']**2)**(-0.5)
partials['V_inflow', 'V_hover'] = 1/2 * ((inputs['fltcond|vs']/2)**2 + inputs['V_hover']**2 )**(-0.5) *2*inputs['V_hover']
#partials['V_inflow', 'V_ClimbRatio'] = None
elif inputs['V_ClimbRatio'][ii] < 0 and inputs['V_ClimbRatio'][ii] >= -2 :
#print('case2')
#partials['V_inflow', 'fltcond|vs'] = None
partials['V_inflow', 'V_hover'] = (0.914*inputs['V_ClimbRatio']**5 + 3.289*inputs['V_ClimbRatio']**4 + 4.587*inputs['V_ClimbRatio']**3 + 3.518*inputs['V_ClimbRatio']**2 + 1.267*inputs['V_ClimbRatio'] + 1.004 )
partials['V_inflow', 'V_ClimbRatio'] = inputs['V_hover'] * (5*0.914*inputs['V_ClimbRatio']**4 + 4*3.289*inputs['V_ClimbRatio']**3 + 3*4.587*inputs['V_ClimbRatio']**2 + 2*3.518*inputs['V_ClimbRatio']**1 + 1.267)
else:
                raise RuntimeError('Turbulent and windmill-brake states are not modeled; those two algorithms are under development')
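`InflowVelocity` stitches together two regimes: in climb, the momentum-theory result v = Vc/2 + sqrt((Vc/2)^2 + v_h^2) (Johnson, Rotorcraft Aeromechanics, eqn 4.45); in moderate descent (-2 <= Vc/v_h < 0), the quintic VRS surrogate. A scalar sketch of that piecewise model (function name is mine; coefficients are the ones used above):

```python
import numpy as np


def inflow_velocity(vs_fts, v_hover_fts):
    """Net inflow velocity (ft/s): momentum-theory climb branch, quintic
    VRS surrogate for -2 <= vs/v_h < 0, error elsewhere."""
    ratio = vs_fts / v_hover_fts
    if ratio > 0:
        return vs_fts / 2.0 + np.sqrt((vs_fts / 2.0) ** 2 + v_hover_fts ** 2)
    if -2.0 <= ratio < 0:
        # highest-degree coefficient first, as np.polyval expects
        coeffs = (0.914, 3.289, 4.587, 3.518, 1.267, 1.004)
        return v_hover_fts * np.polyval(coeffs, ratio)
    raise RuntimeError('turbulent / windmill-brake state not modeled')
```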
class AddVerticalPower(om.ExplicitComponent):
"""
    Calculates the additional power required for vertical climb or descent at a given climb rate.
    Assumes no rotor wake interference in the downstream.
Inputs
------
ac|weights|MTOW : float
MTOW (scalar, lb)
diskload : float
Disk load for one rotor (scalar, lbf/ft**2)
ac|propulsion|propeller|num_rotors : int
Number of rotors (scalar, dimensionless)
fltcond|vs : float
Vertical speed (vector, ft/s)
V_hover : float
        Rotor induced velocity, defined positive downward (vector, ft/s)
Output
------
P_addvert : float
        Additional power required for climbing at a given vertical climb rate (vector, hp)
Options
-------
num_nodes : int
Number of analysis points to run (sets vec length) (default 1)
"""
def initialize(self):
self.options.declare('num_nodes', default=1, desc="Number of nodes to compute")
def setup(self):
nn = self.options['num_nodes']
arange = np.arange(0, nn)
self.add_input('ac|weights|MTOW', units='lb', desc='MTOW')
#self.add_input('thrust', shape = (nn,), units='lbf', desc='single motor thrust')
self.add_input('ac|propulsion|propeller|num_rotors', units = None, desc='Number_of_rotor')
self.add_input('fltcond|vs', val = -25, shape = (nn,), units='ft/s', desc = 'climb rate')
self.add_input('V_hover', shape = (nn,), units='ft/s', desc = 'Rotor induced speed')
self.add_output('P_addvert', shape = (nn,), units='hp',desc = 'Additional power required for climbing in a given vertical climb rate')
self.declare_partials(['P_addvert'], ['ac|weights|MTOW'], rows=arange, cols=np.zeros(nn))
self.declare_partials(['P_addvert'], ['ac|propulsion|propeller|num_rotors'], rows=arange, cols=np.zeros(nn))
self.declare_partials(['P_addvert'], ['fltcond|vs'], rows=arange, cols=arange)
self.declare_partials(['P_addvert'], ['V_hover'], rows=arange, cols=arange)
#self.declare_partials(['P_addvert'], ['thrust'], rows=arange, cols=arange)
    def compute(self, inputs, outputs):
        vs = inputs['fltcond|vs']
        vh = inputs['V_hover']
        # elementwise branch: vertical climb for vs > 0, vertical descent otherwise
        # (the descent branch is only valid outside the vortex ring state)
        climb = vs/2 + np.sqrt((vs/2)**2 + vh**2) - vh
        descent = vs/2 - np.sqrt((vs/2)**2 - vh**2) - vh
        A = np.where(vs > 0, climb, descent)
        outputs['P_addvert'] = ((inputs['ac|weights|MTOW']/inputs['ac|propulsion|propeller|num_rotors'])/550) * A
def compute_partials(self, inputs, partials):
partials['P_addvert', 'ac|weights|MTOW'] = (inputs['fltcond|vs']/2 - inputs['V_hover'] + (inputs['fltcond|vs']**2/4 + inputs['V_hover']**2)**0.5)/(550*inputs['ac|propulsion|propeller|num_rotors'])
partials['P_addvert', 'ac|propulsion|propeller|num_rotors'] = -inputs['ac|weights|MTOW']*(inputs['fltcond|vs']/2 - inputs['V_hover'] + (inputs['fltcond|vs']**2/4 + inputs['V_hover']**2)**0.5)/(550*inputs['ac|propulsion|propeller|num_rotors']**2)
partials['P_addvert', 'fltcond|vs'] = inputs['ac|weights|MTOW']*(0.25*inputs['fltcond|vs']/(inputs['fltcond|vs']**2/4 + inputs['V_hover']**2)**0.5 + 1/2)/(550*inputs['ac|propulsion|propeller|num_rotors'])
partials['P_addvert', 'V_hover'] = inputs['ac|weights|MTOW']*(1.0*inputs['V_hover']/(inputs['fltcond|vs']**2/4 + inputs['V_hover']**2)**0.5 - 1)/(550*inputs['ac|propulsion|propeller|num_rotors'])
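The `P_addvert` expression is the extra power over hover: per-rotor thrust times the change in inflow velocity, converted with 550 ft*lbf/s per hp. A scalar sketch of the climb-side expression (function name is mine; the descent branch mirrors the component above and is only valid outside the vortex ring state):

```python
import numpy as np


def additional_vertical_power_hp(thrust_lbf, vs_fts, v_hover_fts):
    """Extra power over hover for axial flight: (T/550) times the change
    in induced-plus-climb velocity relative to hover."""
    if vs_fts > 0:
        dv = vs_fts / 2.0 + np.sqrt((vs_fts / 2.0) ** 2 + v_hover_fts ** 2) - v_hover_fts
    else:
        # descent branch: requires |vs/2| >= v_h (outside the vortex ring state)
        dv = vs_fts / 2.0 - np.sqrt((vs_fts / 2.0) ** 2 - v_hover_fts ** 2) - v_hover_fts
    return thrust_lbf / 550.0 * dv
```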
class VelocityRatio(om.ExplicitComponent):
"""
Computes the velocity ratio between the vertical climb rate and the rotor induced velocity
Inputs
------
fltcond|vs : float
Vertical speed (vector, ft/s)
V_hover : float
Rotor induced velocity, should be a positive since the positive velocity direction is defined downward in momentum theory (vector, ft/s)
Output
------
V_ClimbRatio : float
The velocity ratio between climb rate and induced velocity (vector, dimensionless)
Options
-------
num_nodes : int
Number of analysis points to run (sets vec length) (default 1)
"""
def initialize(self):
self.options.declare('num_nodes', default=1, desc="Number of nodes to compute")
def setup(self):
nn = self.options['num_nodes']
arange = np.arange(0, nn)
self.add_input('fltcond|vs', val = -25, shape = (nn,), units = 'ft/s', desc='Vertical speed')
self.add_input('V_hover', shape = (nn,), units='ft/s', desc = 'Rotor induced speed')
self.add_output('V_ClimbRatio', shape = (nn,), units=None, desc = 'velocity ratio between climb rate and induced velocity')
self.declare_partials(['V_ClimbRatio'], ['fltcond|vs'], rows=arange, cols=arange)
self.declare_partials(['V_ClimbRatio'], ['V_hover'], rows=arange, cols=arange)
def compute(self, inputs, outputs):
outputs['V_ClimbRatio'] = inputs['fltcond|vs']/inputs['V_hover']
def compute_partials(self, inputs, partials):
partials['V_ClimbRatio', 'fltcond|vs'] = 1/inputs['V_hover']
partials['V_ClimbRatio', 'V_hover'] = -inputs['fltcond|vs']*inputs['V_hover']**(-2)
class VerticalPower(om.ExplicitComponent):
"""
Computes the single-rotor power required for vertical climb and descent
Inputs
------
V_ClimbRatio : float
The velocity ratio between climb rate and induced velocity (vector, dimensionless)
P_Hover : float
Ideal power, P_Ideal (h.p.) = thrust * sqrt(diskload) / 38, (vector, h.p.)
P_Hover = P_Ideal / FM
Output
------
P_vert : float
Power needed for vertical climb and descent (vector, h.p.)
Options
-------
num_nodes : int
Number of analysis points to run (sets vec length) (default 1)
"""
def initialize(self):
self.options.declare('num_nodes', default=1, desc="Number of nodes to compute")
def setup(self):
nn = self.options['num_nodes']
arange = np.arange(0, nn)
self.add_input('V_ClimbRatio', shape = (nn,), units=None, desc = 'velocity ratio between climb rate and induced velocity')
self.add_input('P_Hover', shape = (nn,), units='hp',desc = 'Ideal hover power')
self.add_output('P_vert', shape = (nn,), units='hp', desc = 'Power needed for vertical climb and descent')
self.declare_partials(['P_vert'], ['V_ClimbRatio'], rows=arange, cols=arange)
self.declare_partials(['P_vert'], ['P_Hover'], rows=arange, cols=arange)
def compute(self, inputs, outputs):
vr = inputs['V_ClimbRatio']
ph = inputs['P_Hover']
# Index node-by-node; assigning the whole vector inside the loop would let
# the last node's branch overwrite every other node's result.
for ii in range(len(vr)):
if vr[ii] >= 0: # climb: momentum-theory solution
outputs['P_vert'][ii] = ph[ii]*(0.5 * vr[ii] + np.sqrt(0.25 * vr[ii]**2 + 1))
elif vr[ii] <= -2: # steep descent: windmill-brake state
outputs['P_vert'][ii] = ph[ii]*(0.5 * vr[ii] - np.sqrt(0.25 * vr[ii]**2 - 1))
else: # vortex-ring region: empirical quartic fit
outputs['P_vert'][ii] = ph[ii]*(0.974 - 0.125*vr[ii] - 1.372*vr[ii]**2 - 1.718*vr[ii]**3 - 0.655*vr[ii]**4)
def compute_partials(self, inputs, partials):
vr = inputs['V_ClimbRatio']
ph = inputs['P_Hover']
for ii in range(len(vr)):
if vr[ii] >= 0:
partials['P_vert', 'V_ClimbRatio'][ii] = ph[ii]*(0.5 + 0.25 * vr[ii] * (0.25*vr[ii]**2 + 1)**(-0.5))
partials['P_vert', 'P_Hover'][ii] = 0.5 * vr[ii] + np.sqrt(0.25 * vr[ii]**2 + 1)
elif vr[ii] <= -2:
partials['P_vert', 'V_ClimbRatio'][ii] = ph[ii]*(0.5 - 0.25 * vr[ii] * (0.25*vr[ii]**2 - 1)**(-0.5))
partials['P_vert', 'P_Hover'][ii] = 0.5 * vr[ii] - np.sqrt(0.25 * vr[ii]**2 - 1)
else:
partials['P_vert', 'V_ClimbRatio'][ii] = ph[ii]*(-0.125 - 2*1.372*vr[ii] - 3*1.718*vr[ii]**2 - 4*0.655*vr[ii]**3)
partials['P_vert', 'P_Hover'][ii] = 0.974 - 0.125*vr[ii] - 1.372*vr[ii]**2 - 1.718*vr[ii]**3 - 0.655*vr[ii]**4
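The three regimes above (momentum-theory climb, an empirical quartic for the vortex-ring region, and the windmill-brake descent solution) can be sketched as a scalar function of the climb ratio. Note that the empirical fit gives 0.974 rather than exactly 1 at hover, so there is a small jump at the vr = 0 branch point; the branch bounds below are taken from the component, the rest is a standalone sketch:

```python
import numpy as np

def vertical_power_ratio(vr):
    """P_vert / P_hover as a function of climb ratio vr = Vc / v_h.

    vr >= 0      : momentum-theory climb solution
    -2 < vr < 0  : empirical quartic for the vortex-ring region
    vr <= -2     : windmill-brake (steep descent) solution
    """
    vr = float(vr)
    if vr >= 0:
        return 0.5 * vr + np.sqrt(0.25 * vr ** 2 + 1.0)
    if vr <= -2:
        return 0.5 * vr - np.sqrt(0.25 * vr ** 2 - 1.0)
    return 0.974 - 0.125 * vr - 1.372 * vr ** 2 - 1.718 * vr ** 3 - 0.655 * vr ** 4

hover = vertical_power_ratio(0.0)        # exactly 1 from the climb branch
near_hover = vertical_power_ratio(-1e-9) # empirical fit gives ~0.974
```

The negative ratio at vr = -2 reflects power being extracted from the flow in the windmill-brake state.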
class Throttle(om.ExplicitComponent):
"""
Compute the throttle needed for the vertical climb and descent
Inputs
------
P_vert : float
Power needed for vertical climb and descent for one motor (vector, h.p.)
ac|propulsion|propeller|num_rotors
Number of rotors (scalar, dimensionless)
ac|propulsion|motor|rating
Design motor rating (vector, hp)
Output
------
throttle : float
Power control setting. Should be [0, 1]. (vector, dimensionless)
Options
-------
num_nodes : int
Number of analysis points to run (sets vec length) (default 1)
eta_m : float
Motor efficiency (default 0.97, dimensionless)
"""
def initialize(self):
self.options.declare('num_nodes', default=1, desc="Number of nodes to compute")
self.options.declare('efficiency', default=0.97, desc="Motor efficiency")
def setup(self):
nn = self.options['num_nodes']
arange = np.arange(0, nn)
self.add_input('P_vert', shape = (nn,), units='hp', desc='Power needed for vertical climb and descent for one motor')
self.add_input('ac|propulsion|propeller|num_rotors', desc='Number_of_rotor')
self.add_input('ac|propulsion|motor|rating', units='hp', desc='Design motor rating')
self.add_output('throttle', shape = (nn,), units=None,desc = 'Power control setting')
self.declare_partials(['throttle'], ['P_vert'], rows=arange, cols=arange)
self.declare_partials(['throttle'], ['ac|propulsion|motor|rating'], rows=arange, cols=np.zeros(nn))
def compute(self, inputs, outputs):
eta_m = self.options['efficiency']
outputs['throttle'] = inputs['P_vert'] / (inputs['ac|propulsion|motor|rating'] * eta_m)
def compute_partials(self, inputs, partials):
eta_m = self.options['efficiency']
partials['throttle', 'P_vert'] = 1/(inputs['ac|propulsion|motor|rating'] * eta_m)
partials['throttle', 'ac|propulsion|motor|rating'] = - (inputs['P_vert']/(eta_m)) * inputs['ac|propulsion|motor|rating'] ** (-2)
class AdvancedRatio(om.ExplicitComponent):
"""
Compute the rotor advance ratio, usually denoted as mu
Inputs
------
fltcond|Ueas : float
Absolute airspeed, (vector, ft/s)
proprpm
Rotor rpm (vector, rpm)
ac|propulsion|propeller|diameter : float
Rotor diameter (scalar, ft)
aircraftAOA : float
Aircraft cruise angle of attack (vector, rad)
Output
------
mu : float
Rotor advance ratio (vector, dimensionless)
Options
-------
num_nodes : int
Number of analysis points to run (sets vec length) (default 1)
"""
def initialize(self):
self.options.declare('num_nodes', default=1, desc="Number of nodes to compute")
def setup(self):
nn = self.options['num_nodes']
arange = np.arange(0, nn)
self.add_input('fltcond|Ueas', shape = (nn,), units='ft/s', desc='Absolute airspeed')
self.add_input('proprpm', shape = (nn,), units='rpm', desc='Rotor rpm')
self.add_input('ac|propulsion|propeller|diameter', desc='Rotor diameter', units='ft')
self.add_input('aircraftAOA', shape = (nn,), units='rad', desc='Aircraft cruise angle of attack')
self.add_output('mu', shape = (nn,), units=None, desc='Rotor advance ratio')
self.declare_partials(['mu'], ['fltcond|Ueas'], rows=arange, cols=arange)
self.declare_partials(['mu'], ['proprpm'], rows=arange, cols=arange)
self.declare_partials(['mu'], ['ac|propulsion|propeller|diameter'], rows=arange, cols=np.zeros(nn))
self.declare_partials(['mu'], ['aircraftAOA'], rows=arange, cols=arange)
def compute(self, inputs, outputs):
nn = self.options['num_nodes']
V_tip = inputs['proprpm'] * inputs['ac|propulsion|propeller|diameter'] * np.pi /60
outputs['mu'] = np.cos(inputs['aircraftAOA'])*inputs['fltcond|Ueas'] / (V_tip)
def compute_partials(self, inputs, partials):
nn = self.options['num_nodes']
partials['mu', 'fltcond|Ueas'] = np.cos(inputs['aircraftAOA']) * 60 / (inputs['proprpm'] * inputs['ac|propulsion|propeller|diameter'] * np.pi)
partials['mu', 'proprpm'] = - np.cos(inputs['aircraftAOA']) * inputs['fltcond|Ueas'] * 60 / (inputs['ac|propulsion|propeller|diameter'] * np.pi) * (inputs['proprpm'] ** -2)
partials['mu', 'ac|propulsion|propeller|diameter'] = - np.cos(inputs['aircraftAOA']) * inputs['fltcond|Ueas'] * 60 / (inputs['proprpm'] * np.pi) * (inputs['ac|propulsion|propeller|diameter'] ** -2)
partials['mu', 'aircraftAOA'] = -np.sin(inputs['aircraftAOA']) * inputs['fltcond|Ueas'] * 60 / (inputs['proprpm'] * inputs['ac|propulsion|propeller|diameter'] * np.pi)
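The advance ratio computed above is the in-plane component of airspeed normalised by rotor tip speed. A minimal NumPy check of the same expression (the airspeed, rpm, and diameter values are hypothetical):

```python
import numpy as np

def advance_ratio(v_ftps, rpm, diam_ft, aoa_rad=0.0):
    """mu = V * cos(alpha) / V_tip, with V_tip = rpm * pi * D / 60 (ft/s)."""
    v_tip = rpm * np.pi * diam_ft / 60.0
    return np.cos(aoa_rad) * v_ftps / v_tip

# 100 ft/s forward at 2000 rpm on a 5 ft rotor -> tip speed ~523.6 ft/s
mu = advance_ratio(100.0, 2000.0, 5.0)
```
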
class ThrustCoef(om.ExplicitComponent):
"""
Compute the hover thrust coefficient
Inputs
------
ac|weights|MTOW : float
MTOW (scalar, lb)
fltcond|rho : float
Air density (vector, slug/ft**3)
proprpm : float
Rotor rpm (vector, rpm)
ac|propulsion|propeller|diameter : float
Rotor diameter (scalar, ft)
ac|propulsion|propeller|num_rotors : int
Number of rotors (scalar, dimensionless)
aircraftAOA : float
Aircraft cruise angle of attack (vector, rad)
Output
------
C_T : float
Thrust coefficient (vector, dimensionless)
Options
-------
num_nodes : int
Number of analysis points to run (sets vec length) (default 1)
"""
def initialize(self):
self.options.declare('num_nodes', default=1, desc="Number of nodes to compute")
def setup(self):
nn = self.options['num_nodes']
arange = np.arange(0, nn)
self.add_input('ac|weights|MTOW', units='lb', desc='MTOW')
self.add_input('fltcond|rho', shape = (nn,), units='slug/ft**3', desc='Air density')
self.add_input('proprpm', shape = (nn,), units='rpm', desc='Rotor rpm')
self.add_input('ac|propulsion|propeller|diameter', desc='Rotor diameter', units='ft')
self.add_input('ac|propulsion|propeller|num_rotors', desc='Number_of_rotor')
self.add_input('aircraftAOA', shape = (nn,), units='rad', desc='Aircraft cruise angle of attack')
self.add_output('C_T', shape = (nn,), units=None,desc = 'Thrust coefficient')
self.declare_partials(['C_T'], ['ac|weights|MTOW'], rows=arange, cols=np.zeros(nn))
self.declare_partials(['C_T'], ['fltcond|rho'], rows=arange, cols=arange)
self.declare_partials(['C_T'], ['proprpm'], rows=arange, cols=arange)
self.declare_partials(['C_T'], ['ac|propulsion|propeller|diameter'], rows=arange, cols=np.zeros(nn))
self.declare_partials(['C_T'], ['ac|propulsion|propeller|num_rotors'], rows=arange, cols=np.zeros(nn))
self.declare_partials(['C_T'], ['aircraftAOA'], rows=arange, cols=arange)
def compute(self, inputs, outputs):
nn = self.options['num_nodes']
#single_motor_thrust = (inputs['ac|weights|MTOW'] / (np.cos(inputs['aircraftAOA'])) * inputs['ac|propulsion|propeller|num_rotors'])
#V_tip = inputs['proprpm'] * inputs['ac|propulsion|propeller|diameter'] * np.pi /60
#disk_area = (inputs['ac|propulsion|propeller|diameter'] /2) ** 2 * np.pi
#outputs['C_T'] = single_motor_thrust / ( disk_area * inputs['fltcond|rho'] * V_tip ** 2)
outputs['C_T'] = inputs['ac|weights|MTOW'] / (inputs['ac|propulsion|propeller|num_rotors']*inputs['fltcond|rho']*inputs['ac|propulsion|propeller|diameter']**3*inputs['proprpm']*15*np.cos(inputs['aircraftAOA']) )
def compute_partials(self, inputs, partials):
nn = self.options['num_nodes']
partials['C_T', 'ac|weights|MTOW'] = 1 / ( inputs['ac|propulsion|propeller|num_rotors'] * inputs['fltcond|rho'] * inputs['ac|propulsion|propeller|diameter']**3
* inputs['proprpm'] * 15 * np.cos(inputs['aircraftAOA']) )
partials['C_T', 'fltcond|rho'] = - (inputs['ac|weights|MTOW']/ (inputs['ac|propulsion|propeller|num_rotors'] * inputs['ac|propulsion|propeller|diameter']**3
* inputs['proprpm'] * 15 * np.cos(inputs['aircraftAOA']) )) * inputs['fltcond|rho'] ** (-2)
partials['C_T', 'proprpm'] = - (inputs['ac|weights|MTOW']/ (inputs['ac|propulsion|propeller|num_rotors'] * inputs['ac|propulsion|propeller|diameter']**3
* inputs['fltcond|rho'] * 15 * np.cos(inputs['aircraftAOA']) )) * inputs['proprpm'] ** (-2)
partials['C_T', 'ac|propulsion|propeller|diameter'] = -3 * ( inputs['ac|weights|MTOW'] / ( inputs['ac|propulsion|propeller|num_rotors'] * inputs['fltcond|rho'] * inputs['proprpm']
* 15 * np.cos(inputs['aircraftAOA']))) * inputs['ac|propulsion|propeller|diameter'] ** (-4)
partials['C_T', 'ac|propulsion|propeller|num_rotors'] = - ( inputs['ac|weights|MTOW']/ (inputs['ac|propulsion|propeller|diameter']**3 * inputs['fltcond|rho'] * inputs['proprpm']
* 15 * np.cos(inputs['aircraftAOA']) )) * inputs['ac|propulsion|propeller|num_rotors'] ** (-2)
partials['C_T', 'aircraftAOA'] = ( inputs['ac|weights|MTOW']/ ( inputs['ac|propulsion|propeller|num_rotors'] * inputs['ac|propulsion|propeller|diameter']**3 * inputs['fltcond|rho']
* 15 * inputs['proprpm'] ) ) * np.cos(inputs['aircraftAOA']) ** (-2) * np.sin(inputs['aircraftAOA'])
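Analytic partials like the ones above are conventionally verified against a finite difference (OpenMDAO's `check_partials` automates this). A dependency-free sketch of that check on the rpm partial of the same C_T expression, using hypothetical inputs:

```python
import numpy as np

def thrust_coeff(mtow, n_rot, rho, diam, rpm, aoa):
    # Same algebraic form as the component's compute()
    return mtow / (n_rot * rho * diam ** 3 * rpm * 15.0 * np.cos(aoa))

args = dict(mtow=4000.0, n_rot=4.0, rho=2.377e-3, diam=5.0, rpm=2000.0, aoa=0.05)
h = 1e-3  # central finite-difference step on rpm
fd = (thrust_coeff(**{**args, 'rpm': args['rpm'] + h})
      - thrust_coeff(**{**args, 'rpm': args['rpm'] - h})) / (2 * h)
# Analytic partial from the component: d(k/rpm)/drpm = -k * rpm**(-2)
analytic = -(args['mtow'] / (args['n_rot'] * args['diam'] ** 3 * args['rho']
             * 15.0 * np.cos(args['aoa']))) * args['rpm'] ** (-2)
```

For this smooth expression the central difference agrees with the analytic derivative to near machine precision.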
class HoverInflowRatio(om.ExplicitComponent):
"""
Compute the hover inflow ratio during forward flight
Inputs
------
C_T : float
Thrust coefficient, (vector, dimensionless)
Output
------
lambda_h : float
Hover inflow ratio (vector, dimensionless)
Options
-------
num_nodes : int
Number of analysis points to run (sets vec length) (default 1)
"""
def initialize(self):
self.options.declare('num_nodes', default=1, desc="Number of nodes to compute")
def setup(self):
nn = self.options['num_nodes']
arange = np.arange(0, nn)
self.add_input('C_T', shape = (nn,), units= None, desc='Thrust coefficient')
self.add_output('lambda_h', shape = (nn,), units=None,desc = ' Hover inflow ratio')
self.declare_partials(['lambda_h'], ['C_T'], rows=arange, cols=arange)
def compute(self, inputs, outputs):
outputs['lambda_h'] = (inputs['C_T']/2) ** 0.5
def compute_partials(self, inputs, partials):
partials['lambda_h', 'C_T'] = 0.5 * (inputs['C_T']/2) ** (-0.5) * 0.5
class InflowRatio(om.ExplicitComponent):
"""
Compute the current inflow ratio during forward flight
Inputs
------
lambda_h : float
Hover inflow ratio (vector, dimensionless)
mu : float
Rotor advance ratio (vector, dimensionless)
Output
------
lambda : float
Current inflow ratio (vector, dimensionless)
Options
-------
num_nodes : int
Number of analysis points to run (sets vec length) (default 1)
"""
def initialize(self):
self.options.declare('num_nodes', default=1, desc="Number of nodes to compute")
def setup(self):
nn = self.options['num_nodes']
arange = np.arange(0, nn)
self.add_input('lambda_h', shape = (nn,), units= None, desc='Hover inflow ratio ')
self.add_input('mu', shape = (nn,), units= None, desc='Rotor advance ratio')
self.add_output('lambda', shape = (nn,), units=None,desc = 'current inflow ratio')
self.declare_partials(['lambda'], ['lambda_h'], rows=arange, cols=arange)
self.declare_partials(['lambda'], ['mu'], rows=arange, cols=arange)
def compute(self, inputs, outputs):
outputs['lambda'] = (0.05 * (inputs['mu']/inputs['lambda_h']) + 0.3) * inputs['lambda_h']
def compute_partials(self, inputs, partials):
# lambda = 0.05*mu + 0.3*lambda_h, so both partials are constants
partials['lambda', 'lambda_h'] = 0.3
partials['lambda', 'mu'] = 0.05
class InflowRatio_implicit(om.ImplicitComponent):
"""
Compute the current inflow ratio during forward flight
Inputs
------
aircraftAOA : float
Aircraft cruise angle of attack (vector, rad)
mu : float
Rotor advance ratio (vector, dimensionless)
C_T : float
Thrust coefficient, (vector, dimensionless)
Output
------
lambda : float
Current inflow ratio (vector, dimensionless)
Options
-------
num_nodes : int
Number of analysis points to run (sets vec length) (default 1)
"""
def initialize(self):
self.options.declare('num_nodes', default=1, desc="Number of nodes to compute")
def setup(self):
nn = self.options['num_nodes']
arange = np.arange(0, nn)
self.add_input('aircraftAOA', shape = (nn,), units='rad', desc='Aircraft cruise angle of attack')
self.add_input('mu', shape = (nn,), units= None, desc='Rotor advance ratio')
self.add_input('C_T', shape = (nn,), units= None, desc='Thrust coefficient')
self.add_output('lambda', shape = (nn,), units=None,desc = 'current inflow ratio')
self.declare_partials(['lambda'], ['aircraftAOA'], rows=arange, cols=arange)
self.declare_partials(['lambda'], ['mu'], rows=arange, cols=arange)
self.declare_partials(['lambda'], ['C_T'], rows=arange, cols=arange)
self.declare_partials(['lambda'], ['lambda'], rows=arange, cols=arange)
def apply_nonlinear(self, inputs, outputs, residuals):
mu = inputs['mu']
AOA = inputs['aircraftAOA']
C_T = inputs['C_T']
solv_lambda = outputs['lambda']
residuals['lambda'] = solv_lambda - mu*np.tan(AOA) - C_T/(2 * np.sqrt(mu**2 + solv_lambda**2))
def guess_nonlinear(self, inputs, outputs, resids):
# Check residuals
if np.any(np.abs(resids['lambda']) > 1.0E-2):
outputs['lambda'] = 1.0
def linearize(self, inputs, outputs, partials):
mu = inputs['mu']
AOA = inputs['aircraftAOA']
C_T = inputs['C_T']
solv_lambda = outputs['lambda']
partials['lambda', 'mu'] = -np.tan(AOA) - C_T/2 * (-1/2) * (mu**2 + solv_lambda**2)**(-3/2) * 2 * mu
partials['lambda', 'aircraftAOA'] = - mu*(np.cos(inputs['aircraftAOA'])**(-2))
partials['lambda', 'C_T'] = - 0.5*(mu**2+solv_lambda**2)**(-0.5)
partials['lambda', 'lambda'] = 1 - C_T/2 * (-0.5) * (mu**2 + solv_lambda**2)**(-3/2) * 2 * solv_lambda
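The residual above is Glauert's inflow equation. Outside OpenMDAO it can be solved by simple fixed-point iteration starting from the hover inflow; a sketch with hypothetical flight-condition numbers:

```python
import numpy as np

def solve_inflow(mu, aoa, c_t, tol=1e-12, max_iter=200):
    """Fixed-point iteration on lambda = mu*tan(aoa) + C_T / (2*sqrt(mu**2 + lambda**2))."""
    lam = np.sqrt(c_t / 2.0)  # hover inflow as the starting guess
    for _ in range(max_iter):
        new = mu * np.tan(aoa) + c_t / (2.0 * np.sqrt(mu ** 2 + lam ** 2))
        if abs(new - lam) < tol:
            return new
        lam = new
    return lam

lam = solve_inflow(mu=0.15, aoa=0.05, c_t=0.008)
# Residual of the converged solution in the same form as apply_nonlinear
resid = lam - 0.15 * np.tan(0.05) - 0.008 / (2.0 * np.sqrt(0.15 ** 2 + lam ** 2))
```

At moderate advance ratios the iteration is strongly contracting, so it converges in a handful of steps; an OpenMDAO Newton solver on this component does the same job with the analytic `linearize` above.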
class CruiserPower(om.ExplicitComponent):
"""
Compute the single-rotor power required in straight and level cruise
Inputs
------
lambda_h : float
Hover inflow ratio (vector, dimensionless)
lambda : float
Current inflow ratio (vector, dimensionless)
P_Hover : float
Ideal power, P_Ideal (h.p.) = thrust * sqrt(diskload) / 38, (vector, h.p.)
P_Hover = P_Ideal / FM
Output
------
P_cruise : float
Single rotor power required in straight level cruise (vector, h.p.)
Options
-------
num_nodes : int
Number of analysis points to run (sets vec length) (default 1)
"""
def initialize(self):
self.options.declare('num_nodes', default=1, desc="Number of nodes to compute")
def setup(self):
nn = self.options['num_nodes']
arange = np.arange(0, nn)
self.add_input('lambda_h', shape = (nn,), units= None, desc='Hover inflow ratio ')
self.add_input('lambda', shape = (nn,), units= None, desc='Current inflow ratio')
self.add_input('P_Hover', shape = (nn,), units = 'hp', desc='Ideal power')
self.add_output('P_cruise', shape = (nn,), units='hp', desc='Single rotor power required in straight level cruise')
self.declare_partials(['P_cruise'], ['lambda_h'], rows=arange, cols=arange)
self.declare_partials(['P_cruise'], ['lambda'], rows=arange, cols=arange)
self.declare_partials(['P_cruise'], ['P_Hover'], rows=arange, cols=arange)
def compute(self, inputs, outputs):
outputs['P_cruise'] = inputs['P_Hover'] * inputs['lambda']/inputs['lambda_h']
def compute_partials(self, inputs, partials):
partials['P_cruise', 'lambda_h'] = - inputs['P_Hover'] * inputs['lambda'] * inputs['lambda_h'] ** (-2)
partials['P_cruise', 'lambda'] = inputs['P_Hover'] /inputs['lambda_h']
partials['P_cruise', 'P_Hover'] = inputs['lambda']/inputs['lambda_h']
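Chaining HoverInflowRatio, InflowRatio, and CruiserPower gives the cruise-power scaling P_cruise = P_hover * lambda / lambda_h, where the linear inflow model reduces to lambda = 0.05*mu + 0.3*lambda_h. A compact sketch of that chain (hover power, advance ratio, and thrust coefficient are hypothetical):

```python
import numpy as np

def cruise_power_hp(p_hover_hp, mu, c_t):
    """Single-rotor cruise power from the linear inflow model used above."""
    lam_h = np.sqrt(c_t / 2.0)         # hover inflow ratio
    lam = 0.05 * mu + 0.3 * lam_h      # (0.05*(mu/lam_h) + 0.3) * lam_h, expanded
    return p_hover_hp * lam / lam_h

p = cruise_power_hp(p_hover_hp=50.0, mu=0.2, c_t=0.008)
```

Because lambda < lambda_h at this condition, the model predicts cruise power well below hover power, which is the expected rotorcraft trend.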
import tensorflow as tf
def multi_kernel_conv2d(inputs, num_outputs_each, kernel_sizes=[3, 5, 7], stride=1, padding='SAME',
activation_fn=tf.nn.leaky_relu, normalizer_fn=None, normalizer_params=None,
weights_initializer=tf.truncated_normal_initializer, weights_regularizer=None,
biases_initializer=tf.zeros_initializer, biases_regularizer=None,
reuse=None, scope='multi_kernel_conv'):
""" This function performs convolution over the same input using different size kernels, the result is later concatenated.
Args:
inputs: A Tensor of rank N+2 of shape `[batch_size] + input_spatial_shape + [in_channels]`
num_outputs_each: Integer, the number of output filters from each kernel size
kernel_size: A sequence of N positive integers specifying the spatial dimensions of the filters (KxK) is the kernel size used
stride: A sequence of N positive integers specifying the stride at which to compute output. Can be a single integer to specify the same value for all spatial dimensions. Specifying any `stride` value != 1 is incompatible with specifying any `rate` value != 1.
padding: One of `"VALID"` or `"SAME"`.
activation_fn: Activation function. The default value is a Leaky ReLU function. Explicitly set it to None to skip it and maintain a linear activation.
normalizer_fn: Normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Default set to None for no normalizer function
normalizer_params: Normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: Whether or not the layer and its variables should be reused. To be able to reuse the layer scope must be given.
scope: Optional scope for `variable_scope`.
Returns:
A tensor representing the output of the operation.
"""
with tf.variable_scope(scope):
if reuse:
tf.get_variable_scope().reuse()
n = inputs.get_shape()[-1]
assert type(kernel_sizes) is list, 'kernel sizes is not a list'
convs = []
for k in kernel_sizes:
w = tf.get_variable('weights{}'.format(k), shape=[k, k] + [n, num_outputs_each], initializer=weights_initializer, regularizer=weights_regularizer)
c = tf.nn.conv2d(inputs, w, [1, stride, stride, 1], padding)
convs.append(c)
c = tf.concat(convs, axis=-1)
if normalizer_fn:
c = normalizer_fn(c, **normalizer_params)
else:
b = tf.get_variable('biases', shape=[num_outputs_each * len(kernel_sizes)], initializer=biases_initializer, regularizer=biases_regularizer)
c = c + b
if activation_fn:
c = activation_fn(c)
return c
def multi_kernel_conv2d_transpose(inputs, num_outputs_each, kernel_sizes=[3, 5, 7], stride=1, padding='SAME',
activation_fn=tf.nn.leaky_relu, normalizer_fn=None, normalizer_params=None,
weights_initializer=tf.truncated_normal_initializer, weights_regularizer=None,
biases_initializer=tf.zeros_initializer, biases_regularizer=None,
reuse=None, scope='multi_kernel_conv'):
""" This function performs deconvolution over the same input using different size kernels, the result is later concatenated.
Args:
inputs: A Tensor of rank N+2 of shape `[batch_size] + input_spatial_shape + [in_channels]`
num_outputs_each: Integer, the number of output filters from each kernel size
kernel_size: A sequence of N positive integers specifying the spatial dimensions of the filters (KxK) is the kernel size used
stride: A sequence of N positive integers specifying the stride at which to compute output. Can be a single integer to specify the same value for all spatial dimensions. Specifying any `stride` value != 1 is incompatible with specifying any `rate` value != 1.
padding: One of `"VALID"` or `"SAME"`.
activation_fn: Activation function. The default value is a Leaky ReLU function. Explicitly set it to None to skip it and maintain a linear activation.
normalizer_fn: Normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Default set to None for no normalizer function
normalizer_params: Normalization function parameters.
weights_initializer: An initializer for the weights.
weights_regularizer: Optional regularizer for the weights.
biases_initializer: An initializer for the biases. If None skip biases.
biases_regularizer: Optional regularizer for the biases.
reuse: Whether or not the layer and its variables should be reused. To be able to reuse the layer scope must be given.
scope: Optional scope for `variable_scope`.
Returns:
A tensor representing the output of the operation.
"""
with tf.variable_scope(scope):
if reuse:
tf.get_variable_scope().reuse()
assert type(kernel_sizes) is list, 'kernel sizes is not a list'
convs = []
for i, k in enumerate(kernel_sizes):
c = tf.contrib.layers.conv2d_transpose(inputs, num_outputs_each, k, stride=stride, activation_fn=None, scope='c{}'.format(i))
convs.append(c)
c = tf.concat(convs, axis=-1)
if normalizer_fn:
c = normalizer_fn(c, **normalizer_params)
else:
b = tf.get_variable('biases', shape=[num_outputs_each * len(kernel_sizes)], initializer=biases_initializer, regularizer=biases_regularizer)
c = c + b
if activation_fn:
c = activation_fn(c)
return c
def resnet_block_transpose(inputs, num_outputs, kernel_size=3, stride=1, activation_fn=None, reuse=None, scope='resnet_block'):
""" This function performs deconvolution over the input by resizing the input and adding a computed residual to it.
Args:
inputs: A Tensor of rank N+2 of shape `[batch_size] + input_spatial_shape + [in_channels]`
num_outputs: Integer, the number of output filters from each kernel size
kernel_size: A sequence of N positive integers specifying the spatial dimensions of the filters, (KxK) is the kernel size used
stride: A sequence of N positive integers specifying the stride at which to compute output. Can be a single integer to specify the same value for all spatial dimensions. Specifying any `stride` value != 1 is incompatible with specifying any `rate` value != 1.
activation_fn: Activation function. The default value is a Leaky ReLU function. Explicitly set it to None to skip it and maintain a linear activation.
reuse: Whether or not the layer and its variables should be reused. To be able to reuse the layer scope must be given.
scope: Optional scope for `variable_scope`.
Returns:
A tensor representing the output of the operation.
"""
    with tf.variable_scope(scope):
        if reuse:
            tf.get_variable_scope().reuse_variables()
        # It was observed that the model performed better without any normalizer.
        norm_fn = None
        norm_params = {}
        shape = inputs.get_shape()
        h = int(shape[1])
        w = int(shape[2])
        c = int(shape[3])
        # First project the input to the required number of channels with a linear activation.
        c0 = tf.contrib.layers.conv2d(inputs, num_outputs, 1, stride=1, activation_fn=None, normalizer_fn=norm_fn, normalizer_params=norm_params, scope='c0')
        # Resize the input to the required size, calculated from the given stride.
        c1 = tf.image.resize_images(c0, [h * stride, w * stride])
        # Convolve the original input with an equal number of channels, kernel_size=1 and a leaky ReLU activation.
        r1 = tf.contrib.layers.conv2d(inputs, c, kernel_size=1, stride=1, activation_fn=tf.nn.leaky_relu, normalizer_fn=norm_fn, normalizer_params=norm_params, scope='r1')
        # Compute the residual with a second (transposed) convolution, no activation and the required number of channels.
        r2 = tf.contrib.layers.conv2d_transpose(r1, num_outputs, kernel_size, stride=stride, activation_fn=None, normalizer_fn=norm_fn, normalizer_params=norm_params, scope='r2')
        # Add the residual to the resized input to get the output.
        output = c1 + r2
        # Apply the activation, if any.
        if activation_fn is not None:
            output = activation_fn(output)
        return output
# End
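The block resizes the projected input to `[h * stride, w * stride]`, which is also the spatial shape a 'SAME'-padded `conv2d_transpose` with that stride produces, so the two tensors can be added elementwise. A minimal sketch of that shape arithmetic (the helper is ours, for illustration only):

```python
def upsampled_shape(height, width, stride):
    """Spatial shape after resizing by `stride`; matches a 'SAME'-padded
    transposed convolution with the same stride."""
    return height * stride, width * stride

# e.g. a 16x16 feature map upsampled with stride 2 becomes 32x32
print(upsampled_shape(16, 16, 2))
```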
| 62.574324 | 273 | 0.680056 | 1,246 | 9,261 | 4.926164 | 0.153291 | 0.031281 | 0.018247 | 0.01173 | 0.824699 | 0.795699 | 0.778918 | 0.778918 | 0.774845 | 0.766862 | 0 | 0.007214 | 0.251593 | 9,261 | 147 | 274 | 63 | 0.878373 | 0.540654 | 0 | 0.630769 | 0 | 0 | 0.035288 | 0 | 0 | 0 | 0 | 0 | 0.030769 | 1 | 0.046154 | false | 0 | 0.015385 | 0 | 0.107692 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
46fca24c1b2e0f49ed7216a19f3fcc0735e2fbdd | 3,581 | py | Python | tests/Metrics/test_regression.py | earlbabson/torchflare | 15db06d313a53a3ec4640869335ba87730562b28 | [
"Apache-2.0"
] | 1 | 2021-04-28T19:57:57.000Z | 2021-04-28T19:57:57.000Z | tests/Metrics/test_regression.py | earlbabson/torchflare | 15db06d313a53a3ec4640869335ba87730562b28 | [
"Apache-2.0"
] | null | null | null | tests/Metrics/test_regression.py | earlbabson/torchflare | 15db06d313a53a3ec4640869335ba87730562b28 | [
"Apache-2.0"
] | null | null | null | # flake8: noqa
import collections
import pytest
import sklearn.metrics as skm
import torch
from torchflare.metrics.regression import MAE, MSE, MSLE, R2Score
torch.manual_seed(42)
n_targets = 3
inputs = collections.namedtuple("input", ["outputs", "targets"])
single_target_inputs = inputs(outputs=torch.rand(10, 4), targets=torch.rand(10, 4))
multi_target_inputs = inputs(outputs=torch.rand(10, 4, n_targets), targets=torch.rand(10, 4, n_targets))
def test_mse():
    def _test_single_target():
        np_outputs = single_target_inputs.outputs.view(-1).numpy()
        np_targets = single_target_inputs.targets.view(-1).numpy()
        mse = MSE()
        mse.accumulate(outputs=single_target_inputs.outputs, targets=single_target_inputs.targets)
        assert skm.mean_squared_error(np_targets, np_outputs) == pytest.approx(mse.compute().item())

    def _test_multiple_target():
        np_outputs = multi_target_inputs.outputs.view(-1, n_targets).numpy()
        np_targets = multi_target_inputs.targets.view(-1, n_targets).numpy()
        mse = MSE()
        mse.accumulate(outputs=multi_target_inputs.outputs, targets=multi_target_inputs.targets)
        assert skm.mean_squared_error(np_targets, np_outputs) == pytest.approx(mse.compute().item())

    for _ in range(10):
        _test_single_target()
        _test_multiple_target()
def test_mae():
    def _test_single_target():
        np_outputs = single_target_inputs.outputs.view(-1).numpy()
        np_targets = single_target_inputs.targets.view(-1).numpy()
        mae = MAE()
        mae.accumulate(outputs=single_target_inputs.outputs, targets=single_target_inputs.targets)
        assert skm.mean_absolute_error(np_targets, np_outputs) == pytest.approx(mae.compute().item())

    def _test_multiple_target():
        np_outputs = multi_target_inputs.outputs.view(-1, n_targets).numpy()
        np_targets = multi_target_inputs.targets.view(-1, n_targets).numpy()
        mae = MAE()
        mae.accumulate(outputs=multi_target_inputs.outputs, targets=multi_target_inputs.targets)
        assert skm.mean_absolute_error(np_targets, np_outputs) == pytest.approx(mae.compute().item())

    for _ in range(10):
        _test_single_target()
        _test_multiple_target()
def test_msle():
    def _test_single_target():
        np_outputs = single_target_inputs.outputs.view(-1).numpy()
        np_targets = single_target_inputs.targets.view(-1).numpy()
        msle = MSLE()
        msle.accumulate(outputs=single_target_inputs.outputs, targets=single_target_inputs.targets)
        assert skm.mean_squared_log_error(np_targets, np_outputs) == pytest.approx(msle.compute().item())

    def _test_multiple_target():
        np_outputs = multi_target_inputs.outputs.view(-1, n_targets).numpy()
        np_targets = multi_target_inputs.targets.view(-1, n_targets).numpy()
        msle = MSLE()
        msle.accumulate(outputs=multi_target_inputs.outputs, targets=multi_target_inputs.targets)
        assert skm.mean_squared_log_error(np_targets, np_outputs) == pytest.approx(msle.compute().item())

    for _ in range(10):
        _test_single_target()
        _test_multiple_target()
def test_r2score():
    def _test():
        size = 51
        preds = torch.rand(size)
        targets = torch.rand(size)
        np_y_pred = preds.numpy()
        np_y = targets.numpy()
        m = R2Score()
        m.reset()
        m.accumulate(preds, targets)
        assert skm.r2_score(np_y, np_y_pred) == pytest.approx(m.compute().item(), abs=1e-4)

    for _ in range(10):
        _test()
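Every test above follows the same pattern: feed batches into a stateful metric via `accumulate(...)`, then compare `compute()` against the scikit-learn reference. A minimal pure-Python sketch of such a running-MSE accumulator (ours for illustration, not torchflare's actual implementation):

```python
class RunningMSE:
    """Accumulate squared error across batches; average on compute()."""

    def __init__(self):
        self.sum_sq = 0.0
        self.count = 0

    def accumulate(self, outputs, targets):
        # Add this batch's squared errors to the running totals.
        for o, t in zip(outputs, targets):
            self.sum_sq += (o - t) ** 2
            self.count += 1

    def compute(self):
        return self.sum_sq / self.count

m = RunningMSE()
m.accumulate([1.0, 2.0], [1.0, 4.0])  # squared errors: 0, 4
m.accumulate([3.0], [0.0])            # squared error: 9
print(m.compute())                    # mean of [0, 4, 9]
```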
| 31.13913 | 105 | 0.692544 | 474 | 3,581 | 4.909283 | 0.135021 | 0.134078 | 0.100559 | 0.075204 | 0.831543 | 0.804899 | 0.767082 | 0.767082 | 0.735281 | 0.735281 | 0 | 0.01512 | 0.187378 | 3,581 | 114 | 106 | 31.412281 | 0.784536 | 0.003351 | 0 | 0.56338 | 0 | 0 | 0.005327 | 0 | 0 | 0 | 0 | 0 | 0.098592 | 1 | 0.15493 | false | 0 | 0.070423 | 0 | 0.225352 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
202831a5d71d1a2aaa5a4ae699abf65ac5fa1bda | 199 | py | Python | src/database/models/__init__.py | uesleicarvalhoo/Whastapp-API | 56666fa932d779a57d088f0d7676c7b107cccd6c | [
"MIT"
] | null | null | null | src/database/models/__init__.py | uesleicarvalhoo/Whastapp-API | 56666fa932d779a57d088f0d7676c7b107cccd6c | [
"MIT"
] | null | null | null | src/database/models/__init__.py | uesleicarvalhoo/Whastapp-API | 56666fa932d779a57d088f0d7676c7b107cccd6c | [
"MIT"
] | null | null | null | from src.database.models.base import BaseModel
from src.database.models.conversation import Conversation
from src.database.models.message import Message
__all__ = ("BaseModel", "Message", "Conversation")
| 33.166667 | 57 | 0.839196 | 25 | 199 | 6.52 | 0.4 | 0.128834 | 0.276074 | 0.386503 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090452 | 199 | 5 | 58 | 39.8 | 0.900552 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.75 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
20310661ed261b6c21b436a1f1918adcae86bc7f | 103 | py | Python | pywarp/util/compat.py | shemigon/pywarp | 1072e10aeccf7211c76453ba1173180a654ea082 | [
"Apache-2.0"
] | 1 | 2020-01-10T15:07:28.000Z | 2020-01-10T15:07:28.000Z | pywarp/util/compat.py | shemigon/pywarp | 1072e10aeccf7211c76453ba1173180a654ea082 | [
"Apache-2.0"
] | null | null | null | pywarp/util/compat.py | shemigon/pywarp | 1072e10aeccf7211c76453ba1173180a654ea082 | [
"Apache-2.0"
] | null | null | null | try:
from secrets import token_bytes
except ImportError:
from os import urandom as token_bytes
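The fallback keeps `token_bytes` available on interpreters older than Python 3.6, where the `secrets` module does not exist; either branch binds a callable that returns `n` cryptographically strong random bytes. A small usage sketch:

```python
try:
    from secrets import token_bytes  # Python 3.6+
except ImportError:
    from os import urandom as token_bytes  # older interpreters

# Either way, token_bytes(n) yields n random bytes, e.g. for a nonce.
nonce = token_bytes(16)
print(len(nonce))  # 16
```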
| 20.6 | 41 | 0.776699 | 15 | 103 | 5.2 | 0.733333 | 0.25641 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.203884 | 103 | 4 | 42 | 25.75 | 0.95122 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.75 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
20464b5f0c0ed546d579170d0fce8e0fbbbd9f8e | 729 | gyp | Python | binding.gyp | codeout/node-bgpdump2 | 63422b5810c3eaafde2e1713216d4facb19a0205 | [
"MIT"
] | null | null | null | binding.gyp | codeout/node-bgpdump2 | 63422b5810c3eaafde2e1713216d4facb19a0205 | [
"MIT"
] | null | null | null | binding.gyp | codeout/node-bgpdump2 | 63422b5810c3eaafde2e1713216d4facb19a0205 | [
"MIT"
] | null | null | null | {
  'targets': [
    {
      'target_name': 'bgpdump2',
      'sources': [
        'src/addon.cc',
        'src/bgpdump2.cc',
        'deps/bgpdump2/src/bgpdump_data.c',
        'deps/bgpdump2/src/bgpdump_file.c',
        'deps/bgpdump2/src/bgpdump_option.c',
        'deps/bgpdump2/src/bgpdump_parse.c',
        'deps/bgpdump2/src/bgpdump_peer.c',
        'deps/bgpdump2/src/bgpdump_peerstat.c',
        'deps/bgpdump2/src/bgpdump_query.c',
        'deps/bgpdump2/src/bgpdump_route.c',
        'deps/bgpdump2/src/ptree.c',
        'deps/bgpdump2/src/queue.c'
      ],
      'include_dirs': [
        '<!(node -e "require(\'nan\')")',
        'deps/bgpdump2/src'
      ],
      'libraries': [
        '-lbz2'
      ]
    }
  ]
}
| 25.137931 | 47 | 0.529492 | 80 | 729 | 4.7 | 0.3625 | 0.351064 | 0.43883 | 0.382979 | 0.428191 | 0 | 0 | 0 | 0 | 0 | 0 | 0.026923 | 0.286694 | 729 | 28 | 48 | 26.035714 | 0.696154 | 0 | 0 | 0.071429 | 0 | 0 | 0.60631 | 0.432099 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |