hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
7d73b2a4018d3130d2248317a694e4aad2e63d08 | 43 | py | Python | info/modules/__init__.py | catplane/guguji | 9cb240ea2315c358bc6aa687f14f7f10f8d781ef | [
"MIT"
] | 1 | 2018-12-29T02:49:56.000Z | 2018-12-29T02:49:56.000Z | info/modules/__init__.py | catplane/guguji | 9cb240ea2315c358bc6aa687f14f7f10f8d781ef | [
"MIT"
] | null | null | null | info/modules/__init__.py | catplane/guguji | 9cb240ea2315c358bc6aa687f14f7f10f8d781ef | [
"MIT"
] | null | null | null | # The modules package does nothing by itself, it is only a name; the concrete modules are placed under modules | 43 | 43 | 0.906977 | 3 | 43 | 13 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.023256 | 43 | 1 | 43 | 43 | 0.928571 | 0.953488 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7db4c860392dc6924ca9684b6f5ec93cb75d087a | 208 | py | Python | esgd/__init__.py | crowsonkb/esgd | 3dd3008ae947b578f3491decb3c1e4a853a81a76 | [
"MIT"
] | 43 | 2022-01-23T21:05:36.000Z | 2022-02-20T19:57:31.000Z | esgd/__init__.py | crowsonkb/esgd | 3dd3008ae947b578f3491decb3c1e4a853a81a76 | [
"MIT"
] | null | null | null | esgd/__init__.py | crowsonkb/esgd | 3dd3008ae947b578f3491decb3c1e4a853a81a76 | [
"MIT"
] | 3 | 2022-01-24T09:06:15.000Z | 2022-01-27T15:52:30.000Z | """ESGD-M (ESGD from "Equilibrated adaptive learning rates for non-convex optimization"
with quasi-hyperbolic momentum from "Quasi-hyperbolic momentum and Adam for deep
learning".
"""
from .esgd import ESGD
| 29.714286 | 87 | 0.788462 | 29 | 208 | 5.655172 | 0.655172 | 0.182927 | 0.280488 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129808 | 208 | 6 | 88 | 34.666667 | 0.906077 | 0.846154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
81787f261fa41c0db83d394568e84fb9f5073472 | 143 | py | Python | pyspj/models/simple.py | HansBug/pyspj | ed776cf7d2d1766ee4c2152221d1d3dbdd18d93a | [
"Apache-2.0"
] | null | null | null | pyspj/models/simple.py | HansBug/pyspj | ed776cf7d2d1766ee4c2152221d1d3dbdd18d93a | [
"Apache-2.0"
] | null | null | null | pyspj/models/simple.py | HansBug/pyspj | ed776cf7d2d1766ee4c2152221d1d3dbdd18d93a | [
"Apache-2.0"
] | null | null | null | from .base import SPJResult
class SimpleSPJResult(SPJResult):
    """
    Overview:
        Result of simple special judge.
    """
    pass
| 14.3 | 39 | 0.629371 | 14 | 143 | 6.428571 | 0.928571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.286713 | 143 | 9 | 40 | 15.888889 | 0.882353 | 0.314685 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
81b7fb577abf52c2673da948b166f1496d130134 | 88 | py | Python | test.py | jrderuiter/snakemake-rnaseq-star-featurecounts | b10a44ee1719cbddfd58d9cd5c4d1df63dbb677d | [
"MIT"
] | 8 | 2018-04-26T17:18:18.000Z | 2021-09-14T20:44:34.000Z | test.py | jrderuiter/snakemake-rnaseq-star-featurecounts | b10a44ee1719cbddfd58d9cd5c4d1df63dbb677d | [
"MIT"
] | null | null | null | test.py | jrderuiter/snakemake-rnaseq-star-featurecounts | b10a44ee1719cbddfd58d9cd5c4d1df63dbb677d | [
"MIT"
] | 7 | 2017-08-23T16:40:24.000Z | 2021-09-14T20:45:15.000Z | import subprocess
def test_pipeline():
    subprocess.check_call(["snakemake", "-n"])
| 14.666667 | 46 | 0.704545 | 10 | 88 | 6 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.136364 | 88 | 5 | 47 | 17.6 | 0.789474 | 0 | 0 | 0 | 0 | 0 | 0.125 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
81e5df593af3a5c58794926509de452996e507bc | 66 | py | Python | pixelsort/constants.py | drosoCode/NotSoBot | 3c4b809fce75151cae0059ba8cfca68996147155 | [
"MIT"
] | null | null | null | pixelsort/constants.py | drosoCode/NotSoBot | 3c4b809fce75151cae0059ba8cfca68996147155 | [
"MIT"
] | null | null | null | pixelsort/constants.py | drosoCode/NotSoBot | 3c4b809fce75151cae0059ba8cfca68996147155 | [
"MIT"
] | 1 | 2020-11-05T07:34:16.000Z | 2020-11-05T07:34:16.000Z | black_pixel = (0, 0, 0, 255)
white_pixel = (255, 255, 255, 255)
| 22 | 35 | 0.606061 | 12 | 66 | 3.166667 | 0.416667 | 0.473684 | 0.473684 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.346154 | 0.212121 | 66 | 2 | 36 | 33 | 0.384615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c49aa61469ba7cb04aacdbe06416615a25687ae5 | 4,692 | py | Python | tests/backends/test_local.py | triagemd/stored | a14ba2ed1d646847b37e3dab5c54cac98f33467e | [
"MIT"
] | 3 | 2017-09-29T23:53:20.000Z | 2019-08-29T17:22:24.000Z | tests/backends/test_local.py | triagemd/stored | a14ba2ed1d646847b37e3dab5c54cac98f33467e | [
"MIT"
] | 3 | 2017-09-05T02:10:20.000Z | 2018-02-20T21:31:16.000Z | tests/backends/test_local.py | triagemd/stored | a14ba2ed1d646847b37e3dab5c54cac98f33467e | [
"MIT"
] | null | null | null | import pytest
import os
from backports.tempfile import TemporaryDirectory
from stored.backends.local import LocalFileStorage
@pytest.fixture
def sample_local_path():
    return 'tests/files/foo.tar.gz'


def touch(path):
    with open(path, 'a'):
        os.utime(path, None)


def test_sync_to_file(temp_dir, sample_local_path):
    output_path = os.path.join(temp_dir, os.path.basename(sample_local_path))
    LocalFileStorage(sample_local_path).sync_to(output_path)
    actual = LocalFileStorage(temp_dir).list(relative=True)
    expected = ['foo.tar.gz', ]
    assert sorted(actual) == sorted(expected)


def test_sync_to_file_nonexistent_input(temp_dir):
    output_path = os.path.join(temp_dir, 'nonexistent_file')
    LocalFileStorage('nonexistent_file').sync_to(output_path)
    actual = LocalFileStorage(temp_dir).list(relative=True)
    expected = []
    assert sorted(actual) == sorted(expected)


def test_sync_to_directory(temp_dir):
    with TemporaryDirectory() as input_dir:
        touch(os.path.join(input_dir, 'foo.txt'))
        os.makedirs(os.path.join(input_dir, 'bar'))
        touch(os.path.join(input_dir, 'bar', 'baz.txt'))
        LocalFileStorage(input_dir).sync_to(temp_dir)
        actual = LocalFileStorage(temp_dir).list(relative=True)
        expected = ['foo.txt', 'bar/baz.txt']
        assert sorted(actual) == sorted(expected)


def test_sync_to_directory_creates_output_dir(temp_dir):
    output_dir = os.path.join(temp_dir, 'inner_dir')
    with TemporaryDirectory() as input_dir:
        touch(os.path.join(input_dir, 'foo.txt'))
        os.makedirs(os.path.join(input_dir, 'bar'))
        touch(os.path.join(input_dir, 'bar', 'baz.txt'))
        LocalFileStorage(input_dir).sync_to(output_dir)
        actual = LocalFileStorage(temp_dir).list(relative=True)
        expected = ['inner_dir/foo.txt', 'inner_dir/bar/baz.txt']
        assert sorted(actual) == sorted(expected)


def test_sync_to_directory_nonexistent_input(temp_dir):
    LocalFileStorage('nonexistent_dir').sync_to(temp_dir)
    actual = LocalFileStorage(temp_dir).list(relative=True)
    expected = []
    assert sorted(actual) == sorted(expected)


def test_sync_from_directory(temp_dir, sample_local_path):
    with TemporaryDirectory() as input_dir:
        touch(os.path.join(input_dir, 'foo.txt'))
        os.makedirs(os.path.join(input_dir, 'bar'))
        touch(os.path.join(input_dir, 'bar', 'baz.txt'))
        LocalFileStorage(temp_dir).sync_from(input_dir)
        actual = LocalFileStorage(temp_dir).list(relative=True)
        expected = ['foo.txt', 'bar/baz.txt']
        assert sorted(actual) == sorted(expected)


def test_sync_from_directory_nonexistent_input(temp_dir):
    LocalFileStorage(temp_dir).sync_from('nonexistent_dir')
    actual = LocalFileStorage(temp_dir).list(relative=True)
    expected = []
    assert sorted(actual) == sorted(expected)


def test_sync_from_file(temp_dir, sample_local_path):
    output_path = os.path.join(temp_dir, os.path.basename(sample_local_path))
    LocalFileStorage(output_path).sync_from(sample_local_path)
    actual = LocalFileStorage(temp_dir).list(relative=True)
    expected = ['foo.tar.gz', ]
    assert sorted(actual) == sorted(expected)


def test_sync_from_file_nonexistent_input(temp_dir):
    output_path = os.path.join(temp_dir, 'nonexistent_file')
    LocalFileStorage(output_path).sync_from('nonexistent_file')
    actual = LocalFileStorage(temp_dir).list(relative=True)
    expected = []
    assert sorted(actual) == sorted(expected)


def test_list(temp_dir):
    touch(os.path.join(temp_dir, 'foo.jpg'))
    os.makedirs(os.path.join(temp_dir, 'bar'))
    touch(os.path.join(temp_dir, 'bar', 'baz-1.jpg'))
    touch(os.path.join(temp_dir, 'bar', 'baz-2.jpg'))
    actual = LocalFileStorage(temp_dir).list()
    actual = [file.replace(temp_dir + '/', '') for file in actual]
    expected = ['foo.jpg', 'bar/baz-1.jpg', 'bar/baz-2.jpg']
    assert actual == expected


def test_list_relative(temp_dir):
    touch(os.path.join(temp_dir, 'foo.jpg'))
    os.makedirs(os.path.join(temp_dir, 'bar'))
    touch(os.path.join(temp_dir, 'bar', 'baz-1.jpg'))
    touch(os.path.join(temp_dir, 'bar', 'baz-2.jpg'))
    actual = LocalFileStorage(temp_dir).list(relative=True)
    expected = ['foo.jpg', 'bar/baz-1.jpg', 'bar/baz-2.jpg']
    assert actual == expected


def test_is_dir(sample_local_path):
    assert LocalFileStorage(os.path.dirname(sample_local_path)).is_dir()
    assert not LocalFileStorage(sample_local_path).is_dir()
    assert LocalFileStorage(sample_local_path + '/').is_dir()
    assert LocalFileStorage(sample_local_path + '/foo').is_dir()


def test_filename():
    assert LocalFileStorage('/foo/bar.zip').filename == 'bar.zip'
| 36.372093 | 77 | 0.716539 | 654 | 4,692 | 4.900612 | 0.097859 | 0.087363 | 0.068643 | 0.056786 | 0.832137 | 0.79532 | 0.762871 | 0.762871 | 0.762871 | 0.74103 | 0 | 0.001989 | 0.142583 | 4,692 | 128 | 78 | 36.65625 | 0.794681 | 0 | 0 | 0.572917 | 0 | 0 | 0.093564 | 0.009165 | 0 | 0 | 0 | 0 | 0.166667 | 1 | 0.15625 | false | 0 | 0.041667 | 0.010417 | 0.208333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c4b5103dd2e39ec6db3de9b5fdd5b3a9a63aa7fc | 6,657 | py | Python | Supported Languages/Python/smash/controllers/cdn.py | SMASH-INC/API | d0679f199f786aa24f0510df078b4318c27dcc0f | [
"MIT"
] | null | null | null | Supported Languages/Python/smash/controllers/cdn.py | SMASH-INC/API | d0679f199f786aa24f0510df078b4318c27dcc0f | [
"MIT"
] | null | null | null | Supported Languages/Python/smash/controllers/cdn.py | SMASH-INC/API | d0679f199f786aa24f0510df078b4318c27dcc0f | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
smash.controllers.cdn
This file was automatically generated for SMASH by SMASH v2.0 ( https://smashlabs.io ).
"""
import logging
from .base_controller import BaseController
from ..api_helper import APIHelper
from ..configuration import Configuration
from ..http.auth.custom_auth import CustomAuth
from ..models.cdn_push_model_response import CDNPushModelResponse
from ..models.cdn_pull_model_response import CDNPullModelResponse
class CDN(BaseController):

    """A Controller to access Endpoints in the smash API."""

    def __init__(self, client=None, call_back=None):
        super(CDN, self).__init__(client, call_back)
        self.logger = logging.getLogger(__name__)

    def cdn_push_zone(self,
                      options=dict()):
        """Does a GET request to /s/c/push.

        CDN Push Zone API

        Args:
            options (dict, optional): Key-value pairs for any of the
                parameters to this API Endpoint. All parameters to the
                endpoint are supplied through the dictionary with their names
                being the key and their desired values being the value. A list
                of parameters that can be used are::

                    cname -- string -- Domain or domain names separated by a
                        comma you wish to allow CNAME access
                    file -- string -- GIT URL, file URL, or direct upload of
                        file

        Returns:
            CDNPushModelResponse: Response from the API.

        Raises:
            APIException: When an error occurs while fetching the data from
                the remote API. This exception includes the HTTP Response
                code, an error message, and the HTTP body that was received in
                the request.

        """
        try:
            self.logger.info('cdn_push_zone called.')

            # Validate required parameters
            self.logger.info('Validating required parameters for cdn_push_zone.')
            self.validate_parameters(cname=options.get("cname"),
                                     file=options.get("file"))

            # Prepare query URL
            self.logger.info('Preparing query URL for cdn_push_zone.')
            _query_builder = Configuration.get_base_uri(Configuration.Server.PATH)
            _query_builder += '/s/c/push'
            _query_parameters = {
                'cname': options.get('cname', None),
                'file': options.get('file', None)
            }
            _query_builder = APIHelper.append_url_with_query_parameters(_query_builder,
                _query_parameters, Configuration.array_serialization)
            _query_url = APIHelper.clean_url(_query_builder)

            # Prepare and execute request
            self.logger.info('Preparing and executing request for cdn_push_zone.')
            _request = self.http_client.get(_query_url)
            CustomAuth.apply(_request)
            _context = self.execute_request(_request, name = 'cdn_push_zone')

            # Endpoint and global error handling using HTTP status codes.
            self.logger.info('Validating response for cdn_push_zone.')
            if _context.response.status_code == 404:
                self.logger.info('Status code 404 received for cdn_push_zone. Returning nil.')
                return None
            self.validate_response(_context)

            # Return appropriate type
            return APIHelper.json_deserialize(_context.response.raw_body, CDNPushModelResponse.from_dictionary)
        except Exception as e:
            self.logger.error(e, exc_info = True)
            raise

    def cdn_pull_zone(self,
                      options=dict()):
        """Does a GET request to /s/c/pull.

        CDN Pull Zone API

        Args:
            options (dict, optional): Key-value pairs for any of the
                parameters to this API Endpoint. All parameters to the
                endpoint are supplied through the dictionary with their names
                being the key and their desired values being the value. A list
                of parameters that can be used are::

                    origin -- string -- Domain or domain names separated by a
                        comma
                    cname -- string -- Domain or domain names separated by a
                        comma you wish to allow CNAME access

        Returns:
            CDNPullModelResponse: Response from the API.

        Raises:
            APIException: When an error occurs while fetching the data from
                the remote API. This exception includes the HTTP Response
                code, an error message, and the HTTP body that was received in
                the request.

        """
        try:
            self.logger.info('cdn_pull_zone called.')

            # Validate required parameters
            self.logger.info('Validating required parameters for cdn_pull_zone.')
            self.validate_parameters(origin=options.get("origin"),
                                     cname=options.get("cname"))

            # Prepare query URL
            self.logger.info('Preparing query URL for cdn_pull_zone.')
            _query_builder = Configuration.get_base_uri(Configuration.Server.PATH)
            _query_builder += '/s/c/pull'
            _query_parameters = {
                'origin': options.get('origin', None),
                'cname': options.get('cname', None)
            }
            _query_builder = APIHelper.append_url_with_query_parameters(_query_builder,
                _query_parameters, Configuration.array_serialization)
            _query_url = APIHelper.clean_url(_query_builder)

            # Prepare and execute request
            self.logger.info('Preparing and executing request for cdn_pull_zone.')
            _request = self.http_client.get(_query_url)
            CustomAuth.apply(_request)
            _context = self.execute_request(_request, name = 'cdn_pull_zone')

            # Endpoint and global error handling using HTTP status codes.
            self.logger.info('Validating response for cdn_pull_zone.')
            if _context.response.status_code == 404:
                self.logger.info('Status code 404 received for cdn_pull_zone. Returning nil.')
                return None
            self.validate_response(_context)

            # Return appropriate type
            return APIHelper.json_deserialize(_context.response.raw_body, CDNPullModelResponse.from_dictionary)
        except Exception as e:
            self.logger.error(e, exc_info = True)
            raise
| 41.60625 | 111 | 0.609434 | 750 | 6,657 | 5.224 | 0.218667 | 0.038285 | 0.042879 | 0.017866 | 0.778458 | 0.740684 | 0.740684 | 0.740684 | 0.740684 | 0.729964 | 0 | 0.003322 | 0.321616 | 6,657 | 159 | 112 | 41.867925 | 0.86426 | 0.339192 | 0 | 0.422535 | 1 | 0 | 0.151111 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.042254 | false | 0 | 0.098592 | 0 | 0.211268 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c4c7afb9c3cba1c0599fd5d81c4875b515e64ae3 | 132 | py | Python | nodeodm_proxy/api.py | IS-AgroSmart/MVP | b347e7c846a9a29584a6baee0b825381d5bc62a3 | [
"CNRI-Python"
] | null | null | null | nodeodm_proxy/api.py | IS-AgroSmart/MVP | b347e7c846a9a29584a6baee0b825381d5bc62a3 | [
"CNRI-Python"
] | 140 | 2020-01-21T15:42:29.000Z | 2021-08-21T18:04:19.000Z | nodeodm_proxy/api.py | IS-AgroSmart/MVP | b347e7c846a9a29584a6baee0b825381d5bc62a3 | [
"CNRI-Python"
] | 1 | 2019-12-13T21:47:57.000Z | 2019-12-13T21:47:57.000Z | import requests
def get_info(server_url, uuid, token=""):
    return requests.get(f"{server_url}/task/{uuid}/info?token={token}")
| 22 | 71 | 0.712121 | 20 | 132 | 4.55 | 0.6 | 0.197802 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.106061 | 132 | 5 | 72 | 26.4 | 0.771186 | 0 | 0 | 0 | 0 | 0 | 0.325758 | 0.325758 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
c4d2485fbc5ae6587cd7254e490c40c9484a3160 | 19 | py | Python | zhusuan/mcmc/__init__.py | thuwzy/ZhuSuan-PyTorch | 471e4d401a6edce07312b01b2b76fa2c56b15c0f | [
"MIT"
] | 12 | 2021-08-11T10:28:21.000Z | 2022-03-12T14:20:02.000Z | zhusuan/mcmc/__init__.py | thu-ml/Zhusuan-Jittor | e73c6e3081afde305b9caba80858543abf168466 | [
"MIT"
] | 1 | 2021-07-29T08:50:00.000Z | 2021-07-29T08:50:00.000Z | zhusuan/mcmc/__init__.py | thu-ml/Zhusuan-Jittor | e73c6e3081afde305b9caba80858543abf168466 | [
"MIT"
] | 2 | 2021-08-17T12:05:15.000Z | 2022-01-12T09:47:49.000Z | from .SGLD import * | 19 | 19 | 0.736842 | 3 | 19 | 4.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.157895 | 19 | 1 | 19 | 19 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c4ebc0e6cbe35e3af92007f0fa6d696c2b981252 | 1,231 | py | Python | ros_ws/build/gripper_pkg/cmake/gripper_pkg-genmsg-context.py | isuru-m/ROSbot_Gripper_Project | c3d8f46461612a52137ff3f63db45cac20b5364f | [
"MIT"
] | null | null | null | ros_ws/build/gripper_pkg/cmake/gripper_pkg-genmsg-context.py | isuru-m/ROSbot_Gripper_Project | c3d8f46461612a52137ff3f63db45cac20b5364f | [
"MIT"
] | null | null | null | ros_ws/build/gripper_pkg/cmake/gripper_pkg-genmsg-context.py | isuru-m/ROSbot_Gripper_Project | c3d8f46461612a52137ff3f63db45cac20b5364f | [
"MIT"
] | null | null | null | # generated from genmsg/cmake/pkg-genmsg.context.in
messages_str = "/home/husarion/ros_ws/devel/share/gripper_pkg/msg/stepActionAction.msg;/home/husarion/ros_ws/devel/share/gripper_pkg/msg/stepActionActionGoal.msg;/home/husarion/ros_ws/devel/share/gripper_pkg/msg/stepActionActionResult.msg;/home/husarion/ros_ws/devel/share/gripper_pkg/msg/stepActionActionFeedback.msg;/home/husarion/ros_ws/devel/share/gripper_pkg/msg/stepActionGoal.msg;/home/husarion/ros_ws/devel/share/gripper_pkg/msg/stepActionResult.msg;/home/husarion/ros_ws/devel/share/gripper_pkg/msg/stepActionFeedback.msg"
services_str = "/home/husarion/ros_ws/src/gripper_pkg/srv/stepService.srv;/home/husarion/ros_ws/src/gripper_pkg/srv/servoService.srv"
pkg_name = "gripper_pkg"
dependencies_str = "std_msgs;actionlib_msgs"
langs = "gencpp;geneus;genlisp;gennodejs;genpy"
dep_include_paths_str = "gripper_pkg;/home/husarion/ros_ws/devel/share/gripper_pkg/msg;std_msgs;/opt/ros/kinetic/share/std_msgs/cmake/../msg;actionlib_msgs;/opt/ros/kinetic/share/actionlib_msgs/cmake/../msg"
PYTHON_EXECUTABLE = "/usr/bin/python"
package_has_static_sources = '' == 'TRUE'
genmsg_check_deps_script = "/opt/ros/kinetic/share/genmsg/cmake/../../../lib/genmsg/genmsg_check_deps.py"
| 102.583333 | 531 | 0.824533 | 186 | 1,231 | 5.22043 | 0.317204 | 0.123584 | 0.15448 | 0.175077 | 0.467559 | 0.416066 | 0.416066 | 0.416066 | 0.348095 | 0.265705 | 0 | 0 | 0.02762 | 1,231 | 11 | 532 | 111.909091 | 0.811195 | 0.039805 | 0 | 0 | 1 | 0.333333 | 0.827966 | 0.802542 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f22170f6de31a162609729519ce25a0a6651840e | 165 | py | Python | scripts/registry_json_blob.py | riddopic/opta | 25fa6435fdc7e2ea9c7963ed74100fffb0743063 | [
"Apache-2.0"
] | 595 | 2021-05-21T22:30:48.000Z | 2022-03-31T15:40:25.000Z | scripts/registry_json_blob.py | riddopic/opta | 25fa6435fdc7e2ea9c7963ed74100fffb0743063 | [
"Apache-2.0"
] | 463 | 2021-05-24T21:32:59.000Z | 2022-03-31T17:12:33.000Z | scripts/registry_json_blob.py | riddopic/opta | 25fa6435fdc7e2ea9c7963ed74100fffb0743063 | [
"Apache-2.0"
] | 29 | 2021-05-21T22:27:52.000Z | 2022-03-28T16:43:45.000Z | #!/usr/bin/env python
import json
from opta.registry import make_registry_dict
if __name__ == "__main__":
    print(json.dumps(make_registry_dict(), indent=True))
| 20.625 | 56 | 0.757576 | 24 | 165 | 4.708333 | 0.75 | 0.212389 | 0.283186 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121212 | 165 | 7 | 57 | 23.571429 | 0.77931 | 0.121212 | 0 | 0 | 0 | 0 | 0.055556 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.25 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
1efedb59433560ea1c04ad6b7e4c290dc8c2a51b | 12,507 | py | Python | neutron_taas/tests/unit/services/drivers/test_linux_sriov_utils.py | openstack/tap-as-a-service | c9d046843565b3af514169c26e5893dbe86a9b98 | [
"Apache-2.0"
] | 68 | 2015-10-18T02:57:10.000Z | 2022-02-22T11:33:25.000Z | neutron_taas/tests/unit/services/drivers/test_linux_sriov_utils.py | openstack/tap-as-a-service | c9d046843565b3af514169c26e5893dbe86a9b98 | [
"Apache-2.0"
] | null | null | null | neutron_taas/tests/unit/services/drivers/test_linux_sriov_utils.py | openstack/tap-as-a-service | c9d046843565b3af514169c26e5893dbe86a9b98 | [
"Apache-2.0"
] | 27 | 2015-11-11T02:00:35.000Z | 2020-03-07T03:36:33.000Z | # Copyright (C) 2018 AT&T
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# This class implements a utility functions for SRIOV NIC Switch Driver
#
import copy
import re
from unittest import mock
from neutron_taas.common import utils as common_utils
from neutron_taas.services.taas.drivers.linux import sriov_nic_exceptions \
    as taas_exc
from neutron_taas.services.taas.drivers.linux import sriov_nic_utils
from neutron_taas.tests import base

FAKE_SRIOV_PORT = {
    'id': 'fake_1', 'mac_address': "52:54:00:12:35:02",
    'binding:profile': {
        'pci_slot': None}, 'binding:vif_details': {'vlan': 20}
}


class TestSriovNicUtils(base.TaasTestCase):
    def setUp(self):
        super(TestSriovNicUtils, self).setUp()

    def test_get_sysfs_netdev_path_with_pf_interface(self):
        self.assertEqual(
            "/sys/bus/pci/devices/12/physfn/net",
            sriov_nic_utils.SriovNicUtils().
            _get_sysfs_netdev_path(12, True))

    def test_get_sysfs_netdev_path_without_pf_interface(self):
        self.assertEqual(
            "/sys/bus/pci/devices/12/net",
            sriov_nic_utils.SriovNicUtils().
            _get_sysfs_netdev_path(12, False))

    @mock.patch.object(sriov_nic_utils, 'os')
    def test_get_ifname_by_pci_address(self, mock_os):
        mock_os.listdir.return_value = ['random1', 'random2']
        self.assertEqual(sriov_nic_utils.SriovNicUtils().
                         get_ifname_by_pci_address(12, False), 'random2')

    @mock.patch.object(sriov_nic_utils, 'os')
    def test_get_ifname_by_pci_address_no_dev_info(self, mock_os):
        mock_os.listdir.return_value = list()
        self.assertRaises(
            taas_exc.PciDeviceNotFoundById,
            sriov_nic_utils.SriovNicUtils().get_ifname_by_pci_address, 12, 9)

    @mock.patch.object(sriov_nic_utils, 'os')
    @mock.patch.object(sriov_nic_utils, 'open', create=True)
    def test_get_mac_by_pci_address(self, mock_open, mock_os):
        mock_os.listdir.return_value = ['random1', 'random2']
        mock_os.path.join.return_value = 'random'
        fake_file_handle = ["52:54:00:12:35:02"]
        fake_file_iter = fake_file_handle.__iter__()
        mock_open.return_value.__enter__.return_value = fake_file_iter
        self.assertEqual(
            "52:54:00:12:35:02", sriov_nic_utils.SriovNicUtils().
            get_mac_by_pci_address(12, False))

    @mock.patch.object(sriov_nic_utils, 'os')
    @mock.patch.object(sriov_nic_utils, 'open', create=True)
    def test_get_mac_by_pci_address_no_content(self, mock_open, mock_os):
        mock_os.listdir.return_value = ['random1', 'random2']
        mock_os.path.join.return_value = 'random'
        fake_file_handle = []
        fake_file_iter = fake_file_handle.__iter__()
        mock_open.return_value.__enter__.return_value = fake_file_iter
        self.assertRaises(
            taas_exc.PciDeviceNotFoundById,
            sriov_nic_utils.SriovNicUtils().get_mac_by_pci_address, 12, False)

    @mock.patch.object(sriov_nic_utils, 'os')
    def test_get_mac_by_pci_address_wrong_dev_path(self, mock_os):
        mock_os.listdir.return_value = ['random1', 'random2']
        mock_os.path.join.return_value = 'random'
        self.assertRaises(
            taas_exc.PciDeviceNotFoundById,
            sriov_nic_utils.SriovNicUtils().get_mac_by_pci_address, 12, False)

    @mock.patch.object(sriov_nic_utils, 'os')
    @mock.patch.object(sriov_nic_utils, 'open', create=True)
    def test_get_net_name_by_vf_pci_address(self, mock_open, mock_os):
        mock_os.listdir.return_value = ['enp0s3', 'enp0s2']
        mock_os.path.join.return_value = 'random'
        fake_file_handle = ["52:54:00:12:35:02"]
        fake_file_iter = fake_file_handle.__iter__()
        mock_open.return_value.__enter__.return_value = fake_file_iter
        self.assertEqual(
            'net_enp0s3_52_54_00_12_35_02',
            sriov_nic_utils.SriovNicUtils().
            get_net_name_by_vf_pci_address(12))

    def _common_merge_utility(self, value):
        output_list = list()
        for v in value:
            output_list.append(v)
        return output_list

    def test_get_ranges_str_from_list(self):
        input_list = [4, 11, 12, 13, 25, 26, 27]
        self.assertEqual("4,11-13,25-27", common_utils.
                         get_ranges_str_from_list(input_list))

    def test_get_list_from_ranges_str(self):
        input_str = "4,6,10-13,25-27"
        expected_output = [4, 6, 10, 11, 12, 13, 25, 26, 27]
        self.assertEqual(expected_output, common_utils.
                         get_list_from_ranges_str(input_str))

    def test_get_vf_num_by_pci_address_neg(self):
        self.assertRaises(
            taas_exc.PciDeviceNotFoundById,
            sriov_nic_utils.SriovNicUtils().get_vf_num_by_pci_address, 12)

    @mock.patch.object(sriov_nic_utils, 'glob')
    @mock.patch.object(sriov_nic_utils, 're')
    @mock.patch.object(sriov_nic_utils, 'os')
    def test_get_vf_num_by_pci_address(self, mock_os, mock_re, mock_glob):
        mock_glob.iglob.return_value = ['file1']
        mock_os.readlink.return_value = 12
        mock_re.compile().search.return_value = re.match(r"(\d+)", "89")
        self.assertEqual(
            '89', sriov_nic_utils.SriovNicUtils().
            get_vf_num_by_pci_address(12))

    @mock.patch.object(sriov_nic_utils, 'glob')
    @mock.patch.object(sriov_nic_utils, 're')
    @mock.patch.object(sriov_nic_utils, 'os')
    @mock.patch.object(sriov_nic_utils, 'open', create=True)
    @mock.patch.object(sriov_nic_utils, 'portbindings')
    def test_get_sriov_port_params(self, mock_port_bindings, mock_open,
                                   mock_os, mock_re, mock_glob):
        sriov_port = copy.deepcopy(FAKE_SRIOV_PORT)
        fake_profile = mock_port_bindings.PROFILE = 'binding:profile'
        mock_port_bindings.VIF_DETAILS = 'binding:vif_details'
        sriov_port[fake_profile]['pci_slot'] = 3
        mock_glob.iglob.return_value = ['file1']
        mock_os.readlink.return_value = 12
        mock_re.compile().search.return_value = re.match(r"(\d+)", "89")
        mock_os.listdir.return_value = ['net_enp0s2_52_54_00_12_35_02',
                                        'net_enp0s3_52_54_00_12_35_02']
        mock_os.path.join.return_value = 'random'
        fake_file_handle = ["52:54:00:12:35:02"]
        fake_file_iter = fake_file_handle.__iter__()
        mock_open.return_value.__enter__.return_value = fake_file_iter
        expected_output = {
            'mac': '52:54:00:12:35:02', 'pci_slot': 3, 'vf_index': '89',
            'pf_device': 'net_enp0s3_52_54_00_12_35_02', 'src_vlans': 20}
        self.assertEqual(
            expected_output, sriov_nic_utils.SriovNicUtils().
            get_sriov_port_params(sriov_port))

    @mock.patch.object(sriov_nic_utils, 'glob')
    @mock.patch.object(sriov_nic_utils, 're')
    @mock.patch.object(sriov_nic_utils, 'os')
    @mock.patch.object(sriov_nic_utils, 'open', create=True)
    @mock.patch.object(sriov_nic_utils, 'portbindings')
    def test_get_sriov_port_params_no_pci_slot(self, mock_port_bindings,
                                               mock_open, mock_os, mock_re,
                                               mock_glob):
        sriov_port = copy.deepcopy(FAKE_SRIOV_PORT)
        mock_port_bindings.PROFILE = 'binding:profile'
        mock_port_bindings.VIF_DETAILS = 'binding:vif_details'
        mock_glob.iglob.return_value = ['file1']
        mock_os.readlink.return_value = 12
        mock_re.compile().search.return_value = re.match(r"(\d+)", "89")
        mock_os.listdir.return_value = ['enp0s3', 'enp0s2']
        mock_os.path.join.return_value = 'random'
        fake_file_handle = ["52:54:00:12:35:02"]
        fake_file_iter = fake_file_handle.__iter__()
        mock_open.return_value.__enter__.return_value = fake_file_iter
        self.assertIsNone(sriov_nic_utils.SriovNicUtils().
                          get_sriov_port_params(sriov_port))

    @mock.patch.object(sriov_nic_utils, 'utils')
    @mock.patch.object(sriov_nic_utils, 'os')
    def test_execute_sysfs_command_egress_add(self, mock_os,
                                              mock_neutron_utils):
        sriov_nic_utils.SriovNicUtils().execute_sysfs_command(
            'add', {'pf_device': 'p2p1', 'vf_index': '9'}, {'vf_index': '18'},
            "4,11-13", True, "OUT")
        egress_cmd = ['i40e_sysfs_command', 'p2p1', '18',
                      'egress_mirror', 'add', '9']
        mock_neutron_utils.execute.assert_called_once_with(
            egress_cmd, run_as_root=True, privsep_exec=True)

    @mock.patch.object(sriov_nic_utils, 'utils')
    @mock.patch.object(sriov_nic_utils, 'os')
    def test_execute_sysfs_command_ingress_add(self, mock_os,
                                               mock_neutron_utils):
        sriov_nic_utils.SriovNicUtils().execute_sysfs_command(
            'add', {'pf_device': 'p2p1', 'vf_index': '9'}, {'vf_index': '18'},
            "4,11-13", True, "IN")
        ingress_cmd = ['i40e_sysfs_command', 'p2p1', '18',
                       'ingress_mirror', 'add', '9']
        mock_neutron_utils.execute.assert_called_once_with(
            ingress_cmd, run_as_root=True, privsep_exec=True)

    @mock.patch.object(sriov_nic_utils, 'utils')
    @mock.patch.object(sriov_nic_utils, 'os')
    def test_execute_sysfs_command_both_add(
            self, mock_os, mock_neutron_utils):
        sriov_nic_utils.SriovNicUtils().execute_sysfs_command(
            'add', {'pf_device': 'p2p1', 'vf_index': '9'}, {'vf_index': '18'},
            "4,11-13", True, "BOTH")
        self.assertEqual(2, mock_neutron_utils.execute.call_count)

    @mock.patch.object(sriov_nic_utils, 'utils')
    @mock.patch.object(sriov_nic_utils, 'os')
    def test_execute_sysfs_command_egress_rem(self, mock_os,
                                              mock_neutron_utils):
        sriov_nic_utils.SriovNicUtils().execute_sysfs_command(
            'rem', {'pf_device': 'p2p1', 'vf_index': '9'}, {'vf_index': '18'},
            "4,11-13", True, "OUT")
        egress_cmd = ['i40e_sysfs_command', 'p2p1', '18',
                      'egress_mirror', 'rem', '9']
        mock_neutron_utils.execute.assert_called_once_with(
            egress_cmd, run_as_root=True, privsep_exec=True)

    @mock.patch.object(sriov_nic_utils, 'utils')
    @mock.patch.object(sriov_nic_utils, 'os')
    def test_execute_sysfs_command_ingress_rem(self, mock_os,
                                               mock_neutron_utils):
        sriov_nic_utils.SriovNicUtils().execute_sysfs_command(
            'rem', {'pf_device': 'p2p1', 'vf_index': '9'}, {'vf_index': '18'},
            "4,11-13", True, "IN")
        ingress_cmd = ['i40e_sysfs_command', 'p2p1', '18',
                       'ingress_mirror', 'rem', '9']
        mock_neutron_utils.execute.assert_called_once_with(
            ingress_cmd, run_as_root=True, privsep_exec=True)

    @mock.patch.object(sriov_nic_utils, 'utils')
    @mock.patch.object(sriov_nic_utils, 'os')
    def test_execute_sysfs_command_both_rem(
            self, mock_os, mock_neutron_utils):
        sriov_nic_utils.SriovNicUtils().execute_sysfs_command(
            'rem', {'pf_device': 'p2p1', 'vf_index': '9'}, {'vf_index': '18'},
            "4,11-13", True, "BOTH")
        self.assertEqual(2, mock_neutron_utils.execute.call_count)

    @mock.patch.object(sriov_nic_utils, 'utils')
    @mock.patch.object(sriov_nic_utils, 'os')
    def test_execute_sysfs_command_not_both_vf_to_vf_all_vlans_False(
            self, mock_os, mock_neutron_utils):
        cmd = ['i40e_sysfs_command', 'p2p1', '9',
               'vlan_mirror', 'rem', '4,11-13']
        sriov_nic_utils.SriovNicUtils().execute_sysfs_command(
            'rem', {'pf_device': 'p2p1', 'vf_index': '9'}, {'vf_index': '18'},
            "4,11-13", False, "FAKE")
        mock_neutron_utils.execute.assert_called_once_with(
            cmd, run_as_root=True, privsep_exec=True)
| 46.322222 | 78 | 0.654753 | 1,688 | 12,507 | 4.449052 | 0.130332 | 0.061784 | 0.096937 | 0.095872 | 0.824767 | 0.810386 | 0.794141 | 0.786019 | 0.751798 | 0.751798 | 0 | 0.038632 | 0.225953 | 12,507 | 269 | 79 | 46.494424 | 0.737114 | 0.048853 | 0 | 0.616071 | 0 | 0 | 0.110784 | 0.014564 | 0 | 0 | 0 | 0 | 0.09375 | 1 | 0.102679 | false | 0 | 0.03125 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
48644fb93e1d67034c937d32ae4552355f13beaf | 211 | py | Python | myvenv/lib/python3.5/site-packages/crispy_forms/tests/compatibility.py | tuvapp/tuvappcom | 5ca2be19f4b0c86a1d4a9553711a4da9d3f32841 | [
"MIT"
] | 1 | 2017-10-31T02:37:37.000Z | 2017-10-31T02:37:37.000Z | myvenv/lib/python3.5/site-packages/crispy_forms/tests/compatibility.py | tuvapp/tuvappcom | 5ca2be19f4b0c86a1d4a9553711a4da9d3f32841 | [
"MIT"
] | 6 | 2020-06-05T18:44:19.000Z | 2022-01-13T00:48:56.000Z | myvenv/lib/python3.5/site-packages/crispy_forms/tests/compatibility.py | tuvapp/tuvappcom | 5ca2be19f4b0c86a1d4a9553711a4da9d3f32841 | [
"MIT"
] | 15 | 2017-01-12T10:43:13.000Z | 2019-04-19T08:28:46.000Z | # coding: utf-8
try:
    from django.template.loader import get_template_from_string
except ImportError:
    from django.template import Engine
    get_template_from_string = Engine.get_default().from_string
| 21.1 | 63 | 0.78673 | 29 | 211 | 5.448276 | 0.517241 | 0.189873 | 0.227848 | 0.265823 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005587 | 0.151659 | 211 | 9 | 64 | 23.444444 | 0.877095 | 0.061611 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.6 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6f82579a39f7dcd881b76713f592babdd8f68995 | 2,731 | py | Python | cjaasPythonClient/apis/swagger_client/__init__.py | kat-mulberries/cjaas-sdk | 11dc39c9e2058d1a6c900ad0ef4236a984f8aac5 | [
"Apache-2.0"
] | 4 | 2021-04-28T16:33:09.000Z | 2022-01-12T00:19:06.000Z | cjaasPythonClient/apis/swagger_client/__init__.py | kat-mulberries/cjaas-sdk | 11dc39c9e2058d1a6c900ad0ef4236a984f8aac5 | [
"Apache-2.0"
] | 2 | 2021-07-06T15:35:59.000Z | 2021-12-16T16:52:34.000Z | cjaasPythonClient/apis/swagger_client/__init__.py | kat-mulberries/cjaas-sdk | 11dc39c9e2058d1a6c900ad0ef4236a984f8aac5 | [
"Apache-2.0"
] | 7 | 2021-05-13T20:15:21.000Z | 2021-12-16T10:28:02.000Z | # coding: utf-8
# flake8: noqa
"""
Azure Functions OpenAPI Extension
No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen) # noqa: E501
OpenAPI spec version: 1.0.0
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
# import apis into sdk package
from swagger_client.api.journey_api import JourneyApi
# import ApiClient
from swagger_client.api_client import ApiClient
from swagger_client.configuration import Configuration
# import models into sdk package
from swagger_client.models.action import Action
from swagger_client.models.action_config import ActionConfig
from swagger_client.models.cloud_event import CloudEvent
from swagger_client.models.create_progressive_profile_view_job_response_model import CreateProgressiveProfileViewJobResponseModel
from swagger_client.models.data_message import DataMessage
from swagger_client.models.error_object import ErrorObject
from swagger_client.models.http_error_response import HttpErrorResponse
from swagger_client.models.http_generic_list_object_response_journey_action import HttpGenericListObjectResponseJourneyAction
from swagger_client.models.http_generic_list_object_response_profile_view_template import HttpGenericListObjectResponseProfileViewTemplate
from swagger_client.models.http_generic_object_response_journey_action import HttpGenericObjectResponseJourneyAction
from swagger_client.models.http_report import HttpReport
from swagger_client.models.http_report_object_response import HttpReportObjectResponse
from swagger_client.models.http_response_meta import HttpResponseMeta
from swagger_client.models.http_simple_message_object_response import HttpSimpleMessageObjectResponse
from swagger_client.models.journey_action import JourneyAction
from swagger_client.models.message_object import MessageObject
from swagger_client.models.modified_cloud_event import ModifiedCloudEvent
from swagger_client.models.object import Object
from swagger_client.models.profile_attribute_view import ProfileAttributeView
from swagger_client.models.profile_view_builder_template import ProfileViewBuilderTemplate
from swagger_client.models.profile_view_builder_template_attribute import ProfileViewBuilderTemplateAttribute
from swagger_client.models.profile_view_query_response import ProfileViewQueryResponse
from swagger_client.models.profile_view_template import ProfileViewTemplate
from swagger_client.models.profile_view_template_attribute import ProfileViewTemplateAttribute
from swagger_client.models.profile_view_template_create_model import ProfileViewTemplateCreateModel
from swagger_client.models.tape_reader_response import TapeReaderResponse
| 55.734694 | 138 | 0.894544 | 328 | 2,731 | 7.134146 | 0.280488 | 0.136325 | 0.210684 | 0.255556 | 0.37094 | 0.25641 | 0.17265 | 0.086325 | 0.044444 | 0 | 0 | 0.003152 | 0.07067 | 2,731 | 48 | 139 | 56.895833 | 0.918834 | 0.128158 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6f82bee414cd56e9e21dc0f8ac4556aa4c204f44 | 189 | py | Python | riker/permission/__init__.py | A-UNDERSCORE-D/riker | 5257d6113a614e54696068b758275e59f71ddf51 | [
"0BSD"
] | null | null | null | riker/permission/__init__.py | A-UNDERSCORE-D/riker | 5257d6113a614e54696068b758275e59f71ddf51 | [
"0BSD"
] | null | null | null | riker/permission/__init__.py | A-UNDERSCORE-D/riker | 5257d6113a614e54696068b758275e59f71ddf51 | [
"0BSD"
] | null | null | null | """Permission handler implementations."""
__all__ = ["SimplePermissionHandler", "BasePermissionHandler"]
from .base import BasePermissionHandler
from .simple import SimplePermissionHandler
| 37.8 | 62 | 0.830688 | 14 | 189 | 10.928571 | 0.714286 | 0.326797 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.079365 | 189 | 4 | 63 | 47.25 | 0.87931 | 0.185185 | 0 | 0 | 0 | 0 | 0.297297 | 0.297297 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6fc57d1e985f3cdbb4a763ddd3fc75868ee76475 | 2,578 | py | Python | pitch_seq.py | Shinichi-Nakagawa/retrosheet-app-example | 10544b7936c0a2dd865efc0c59d258640a88e76b | [
"MIT"
] | null | null | null | pitch_seq.py | Shinichi-Nakagawa/retrosheet-app-example | 10544b7936c0a2dd865efc0c59d258640a88e76b | [
"MIT"
] | 1 | 2015-04-11T02:30:17.000Z | 2015-04-11T08:33:28.000Z | pitch_seq.py | Shinichi-Nakagawa/retrosheet-app-example | 10544b7936c0a2dd865efc0c59d258640a88e76b | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
__author__ = 'Shinichi Nakagawa'
'''
+ following pickoff throw by the catcher
* indicates the following pitch was blocked by the catcher
. marker for play not involving the batter
1 pickoff throw to first
2 pickoff throw to second
3 pickoff throw to third
> Indicates a runner going on the pitch
B ball
C called strike
F foul
H hit batter
I intentional ball
K strike (unknown type)
L foul bunt
M missed bunt attempt
N no pitch (on balks and interference calls)
O foul tip on bunt
P pitchout
Q swinging on pitchout
R foul ball on pitchout
S swinging strike
T foul tip
U unknown or missed pitch
V called ball because pitcher went to his mouth
X ball put into play by batter
Y ball put into play on pitchout
'''
EN_PITCH_SEQ_DICT = {
    '+': 'following pickoff throw by the catcher',
    '*': 'indicates the following pitch was blocked by the catcher',
    '.': 'marker for play not involving the batter',
    '1': 'pickoff throw to first',
    '2': 'pickoff throw to second',
    '3': 'pickoff throw to third',
    '>': 'Indicates a runner going on the pitch',
    'B': 'ball',
    'C': 'called strike',
    'F': 'foul',
    'H': 'hit batter',
    'I': 'intentional ball',
    'K': 'strike (unknown type)',
    'L': 'foul bunt',
    'M': 'missed bunt attempt',
    'N': 'no pitch (on balks and interference calls)',
    'O': 'foul tip on bunt',
    'P': 'pitchout',
    'Q': 'swinging on pitchout',
    'R': 'foul ball on pitchout',
    'S': 'swinging strike',
    'T': 'foul tip',
    'U': 'unknown or missed pitch',
    'V': 'called ball because pitcher went to his mouth',
    'X': 'ball put into play by batter',
    'Y': 'ball put into play on pitchout',
}

JA_PITCH_SEQ_DICT = {
    '+': 'following pickoff throw by the catcher',
    '*': 'indicates the following pitch was blocked by the catcher',
    '.': 'marker for play not involving the batter',
    '1': '一塁牽制',
    '2': '二塁牽制',
    '3': '三塁牽制',
    '>': 'ランナースタート',
    'B': 'ボール',
    'C': '見逃し',
    'F': 'ファール',
    'H': '安打',
    'I': '四球',
    'K': 'ストライク',
    'L': 'ファウルバント',
    'M': 'missed bunt attempt',
    'N': 'no pitch (on balks and interference calls)',
    'O': 'foul tip on bunt',
    'P': 'アウト',
    'Q': '空振り三振',
    'R': 'ファウルフライ',
    'S': '空振り',
    'T': 'ファウルチップ',
    'U': 'unknown or missed pitch',
    'V': 'called ball because pitcher went to his mouth',
    'X': 'インプレー(ノーアウト)',
    'Y': 'インプレー(アウト)',
}
| 27.136842 | 68 | 0.579131 | 360 | 2,578 | 4.119444 | 0.291667 | 0.072825 | 0.04855 | 0.040459 | 0.896156 | 0.896156 | 0.896156 | 0.896156 | 0.896156 | 0.896156 | 0 | 0.005376 | 0.27851 | 2,578 | 94 | 69 | 27.425532 | 0.791935 | 0.016292 | 0 | 0.280702 | 0 | 0 | 0.596989 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6fd696bba3dc79c78cb923d85a355716c417d629 | 4,999 | py | Python | apps/fifth_edition/tests/test_physical_attack_view_post.py | tylerfrenchx13/django-dnd | b0c78c51aebeed4195fd91a3e55c313c645f9c3b | [
"MIT"
] | null | null | null | apps/fifth_edition/tests/test_physical_attack_view_post.py | tylerfrenchx13/django-dnd | b0c78c51aebeed4195fd91a3e55c313c645f9c3b | [
"MIT"
] | null | null | null | apps/fifth_edition/tests/test_physical_attack_view_post.py | tylerfrenchx13/django-dnd | b0c78c51aebeed4195fd91a3e55c313c645f9c3b | [
"MIT"
] | null | null | null | from apps.fifth_edition.models import AbilityScore, PhysicalAttack
from django.test import TestCase
from rest_framework.test import APIClient
class TestPhysicalAttackViewPOST(TestCase):
    """
    Test class to verify functionality of the PhysicalAttackViewPOST API view.
    """

    def setUp(self):
        """
        Method to create prerequisite test information
        :return: None
        """
        score_data = {
            "strength": 12,
            "dexterity": 15,
            "constitution": 15,
            "intelligence": 14,
            "wisdom": 16,
            "charisma": 18
        }
        AbilityScore.objects.create(**score_data)

    def test_physical_attack_post_succesful(self):
        """
        Test to verify that a new physical attack entry can be created
        :return: None
        """
        client = APIClient()
        test_data = {
            "ability_score": AbilityScore.objects.first().id,
            "name": "Test Attack",
            "weapon_type": "Simple Melee Weapon",
            "properties": "Finesse, light, thrown (range 20/60)",
            "dice_type": "d4",
            "dice_count": 2,
            "damage_type": "bl",
            "str_atk_bonus": 1,
            "dex_atk_bonus": 2
        }
        response = client.post("/api/physical-attack/create/", test_data, format="json")
        self.assertEqual(response.status_code, 201)
        entry = PhysicalAttack.objects.first()
        self.assertEqual(entry.name, "Test Attack")
        self.assertEqual(entry.damage_type, "bl")
        self.assertEqual(entry.dice_type, "d4"),
        self.assertEqual(entry.dice_count, 2)
        self.assertEqual(entry.str_atk_bonus, 1)
        self.assertEqual(entry.dex_atk_bonus, 2)

    def test_physical_attack_post_dice_count_default(self):
        """
        Test to verify that a new physical attack entry can be created
        :return: None
        """
        client = APIClient()
        test_data = {
            "ability_score": AbilityScore.objects.first().id,
            "name": "Test Attack",
            "weapon_type": "Simple Melee Weapon",
            "properties": "Finesse, light, thrown (range 20/60)",
            "dice_type": "d4",
            "dice_count": 1,
            "damage_type": "bl",
            "str_atk_bonus": 1,
            "dex_atk_bonus": 2
        }
        response = client.post("/api/physical-attack/create/", test_data, format="json")
        self.assertEqual(response.status_code, 201)
        entry = PhysicalAttack.objects.first()
        self.assertEqual(entry.name, "Test Attack")
        self.assertEqual(entry.damage_type, "bl")
        self.assertEqual(entry.dice_type, "d4"),
        self.assertEqual(entry.dice_count, 1)
        self.assertEqual(entry.str_atk_bonus, 1)
        self.assertEqual(entry.dex_atk_bonus, 2)

    def test_physical_attack_post_failure_on_damage_type(self):
        """
        Test to verify that an invalid physical attack entry will fail to be created
        :return: None
        """
        client = APIClient()
        test_data = {
            "ability_score": AbilityScore.objects.first().id,
            "name": "Test Attack",
            "damage_type": "zd",
            "dice_type": "d4",
            "dice_count": 2
        }
        response = client.post("/api/physical-attack/create/", test_data, format="json")
        self.assertEqual(response.status_code, 400)

    def test_physical_attack_post_failure_on_dice_type(self):
        """
        Test to verify that an invalid physical attack entry will fail to be created
        :return: None
        """
        client = APIClient()
        test_data = {
            "ability_score": AbilityScore.objects.first().id,
            "name": "Test Attack",
            "damage_type": "bl",
            "dice_type": "d14",
            "dice_count": 2
        }
        response = client.post("/api/physical-attack/create/", test_data, format="json")
        self.assertEqual(response.status_code, 400)

    def test_physical_attack_post_double_success(self):
        """
        Test to verify that a new physical attack entry can be created
        :return: None
        """
        client = APIClient()
        test_data = {
            "ability_score": AbilityScore.objects.first().id,
            "name": "Test Attack",
            "weapon_type": "Simple Melee Weapon",
            "properties": "Finesse, light, thrown (range 20/60)",
            "dice_type": "d4",
            "dice_count": 1,
            "damage_type": "bl",
            "str_atk_bonus": 1,
            "dex_atk_bonus": 2
        }
        response = client.post("/api/physical-attack/create/", test_data, format="json")
        self.assertEqual(response.status_code, 201)
        entry = PhysicalAttack.objects.first()
        self.assertEqual(entry.name, "Test Attack")
        self.assertEqual(entry.damage_type, "bl")
        self.assertEqual(entry.dice_type, "d4"),
        self.assertEqual(entry.dice_count, 1)
        self.assertEqual(entry.str_atk_bonus, 1)
        self.assertEqual(entry.dex_atk_bonus, 2)
        test_data_two = {
            "ability_score": AbilityScore.objects.first().id,
            "name": "Second Attack",
            "weapon_type": "Simple Melee Weapon",
            "properties": "Finesse, light, thrown (range 20/60)",
            "dice_type": "d8",
            "dice_count": 1,
            "damage_type": "ne",
            "str_atk_bonus": 1,
            "dex_atk_bonus": 2
        }
        response = client.post("/api/physical-attack/create/", test_data_two, format="json")
        self.assertEqual(response.status_code, 201)
        entry = PhysicalAttack.objects.last()
        self.assertEqual(entry.name, "Second Attack")
        self.assertEqual(entry.damage_type, "ne")
        self.assertEqual(entry.dice_type, "d8"),
        self.assertEqual(entry.dice_count, 1)
        self.assertEqual(entry.str_atk_bonus, 1)
        self.assertEqual(entry.dex_atk_bonus, 2)
| 30.29697 | 86 | 0.702941 | 662 | 4,999 | 5.119335 | 0.164653 | 0.132783 | 0.141635 | 0.028327 | 0.845382 | 0.823842 | 0.809973 | 0.792269 | 0.792269 | 0.792269 | 0 | 0.019603 | 0.153031 | 4,999 | 164 | 87 | 30.481707 | 0.780822 | 0.109822 | 0 | 0.688525 | 0 | 0 | 0.250344 | 0.038514 | 0 | 0 | 0 | 0 | 0.245902 | 1 | 0.04918 | false | 0 | 0.02459 | 0 | 0.081967 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6fd7d3edae2156a4cd43e5c3dbadd7bb392d47dd | 322 | py | Python | .modules/.metagoofil/hachoir_parser/container/__init__.py | termux-one/EasY_HaCk | 0a8d09ca4b126b027b6842e02fa0c29d8250e090 | [
"Apache-2.0"
] | 1,103 | 2018-04-20T14:08:11.000Z | 2022-03-29T06:22:43.000Z | .modules/.metagoofil/hachoir_parser/container/__init__.py | sshourya948/EasY_HaCk | 0a8d09ca4b126b027b6842e02fa0c29d8250e090 | [
"Apache-2.0"
] | 236 | 2016-11-20T07:56:15.000Z | 2017-04-12T12:10:00.000Z | .modules/.metagoofil/hachoir_parser/container/__init__.py | sshourya948/EasY_HaCk | 0a8d09ca4b126b027b6842e02fa0c29d8250e090 | [
"Apache-2.0"
] | 262 | 2017-09-16T22:15:50.000Z | 2022-03-31T00:38:42.000Z | from hachoir_parser.container.asn1 import ASN1File
from hachoir_parser.container.mkv import MkvFile
from hachoir_parser.container.ogg import OggFile, OggStream
from hachoir_parser.container.riff import RiffFile
from hachoir_parser.container.swf import SwfFile
from hachoir_parser.container.realmedia import RealMediaFile
| 40.25 | 60 | 0.878882 | 43 | 322 | 6.44186 | 0.418605 | 0.238267 | 0.368231 | 0.563177 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006757 | 0.080745 | 322 | 7 | 61 | 46 | 0.929054 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
6fed0a496ec55d4b58d0957ae461e237722314dc | 3,695 | py | Python | code/classification/sequence_model/pre-process.py | msra-nlc/MSParS-V2.0- | 3e215b5f6ef47040275b3612fd2e1d5591909039 | [
"Apache-2.0"
] | 11 | 2019-11-22T16:46:36.000Z | 2021-07-17T04:06:14.000Z | code/classification/sequence_model/pre-process.py | msra-nlc/MSParS-V2.0- | 3e215b5f6ef47040275b3612fd2e1d5591909039 | [
"Apache-2.0"
] | 3 | 2019-11-11T05:40:10.000Z | 2020-03-05T14:04:38.000Z | code/classification/sequence_model/pre-process.py | msra-nlc/MSParS-V2.0- | 3e215b5f6ef47040275b3612fd2e1d5591909039 | [
"Apache-2.0"
] | 3 | 2020-04-04T12:21:52.000Z | 2022-02-27T13:29:45.000Z | python3 preprocess.py -train_src single-turn-data/src-train.txt -train_tgt single-turn-data/tgt-train.txt -valid_src single-turn-data/src-test.txt -valid_tgt single-turn-data/tgt-test.txt -save_data single-turn-data/demo -dynamic_dict
python3 preprocess.py -train_src multi-turn-data/src-train.txt -train_tgt multi-turn-data/tgt-train.txt -valid_src multi-turn-data/src-test.txt -valid_tgt multi-turn-data/tgt-test.txt -save_data multi-turn-data/demo -dynamic_dict
python3 preprocess.py -train_src single-turn-data/src-train-maskinput.txt -train_tgt single-turn-data/tgt-train-maskinput.txt -valid_src single-turn-data/src-test-maskinput.txt -valid_tgt single-turn-data/tgt-test-maskinput.txt -save_data single-turn-data/demo-maskinput -dynamic_dict
python3 preprocess.py -train_src multi-turn-data/src-train-maskinput.txt -train_tgt multi-turn-data/tgt-train-maskinput.txt -valid_src multi-turn-data/src-test-maskinput.txt -valid_tgt multi-turn-data/tgt-test-maskinput.txt -save_data multi-turn-data/demo-maskinput -dynamic_dict
python3 preprocess.py -train_src single-turn-data/src-train-maskA.txt -train_tgt single-turn-data/tgt-train-maskA.txt -valid_src single-turn-data/src-test-maskA.txt -valid_tgt single-turn-data/tgt-test-maskA.txt -save_data single-turn-data/demo-maskA -dynamic_dict
python3 preprocess.py -train_src multi-turn-data/src-train-maskA.txt -train_tgt multi-turn-data/tgt-train-maskA.txt -valid_src multi-turn-data/src-test-maskA.txt -valid_tgt multi-turn-data/tgt-test-maskA.txt -save_data multi-turn-data/demo-maskA -dynamic_dict
python3 preprocess.py -train_src single-turn-data/src-train-maskB.txt -train_tgt single-turn-data/tgt-train-maskB.txt -valid_src single-turn-data/src-test-maskB.txt -valid_tgt single-turn-data/tgt-test-maskB.txt -save_data single-turn-data/demo-maskB -dynamic_dict
python3 preprocess.py -train_src multi-turn-data/src-train-maskB.txt -train_tgt multi-turn-data/tgt-train-maskB.txt -valid_src multi-turn-data/src-test-maskB.txt -valid_tgt multi-turn-data/tgt-test-maskB.txt -save_data multi-turn-data/demo-maskB -dynamic_dict
python3 preprocess.py -train_src single-turn-data/src-train-onlyinput.txt -train_tgt single-turn-data/tgt-train-onlyinput.txt -valid_src single-turn-data/src-test-onlyinput.txt -valid_tgt single-turn-data/tgt-test-onlyinput.txt -save_data single-turn-data/demo-onlyinput -dynamic_dict
python3 preprocess.py -train_src multi-turn-data/src-train-onlyinput.txt -train_tgt multi-turn-data/tgt-train-onlyinput.txt -valid_src multi-turn-data/src-test-onlyinput.txt -valid_tgt multi-turn-data/tgt-test-onlyinput.txt -save_data multi-turn-data/demo-onlyinput -dynamic_dict
python3 preprocess.py -train_src single-turn-data/src-train-onlyA.txt -train_tgt single-turn-data/tgt-train-onlyA.txt -valid_src single-turn-data/src-test-onlyA.txt -valid_tgt single-turn-data/tgt-test-onlyA.txt -save_data single-turn-data/demo-onlyA -dynamic_dict
python3 preprocess.py -train_src multi-turn-data/src-train-onlyA.txt -train_tgt multi-turn-data/tgt-train-onlyA.txt -valid_src multi-turn-data/src-test-onlyA.txt -valid_tgt multi-turn-data/tgt-test-onlyA.txt -save_data multi-turn-data/demo-onlyA -dynamic_dict
python3 preprocess.py -train_src single-turn-data/src-train-onlyB.txt -train_tgt single-turn-data/tgt-train-onlyB.txt -valid_src single-turn-data/src-test-onlyB.txt -valid_tgt single-turn-data/tgt-test-onlyB.txt -save_data single-turn-data/demo-onlyB -dynamic_dict
python3 preprocess.py -train_src multi-turn-data/src-train-onlyB.txt -train_tgt multi-turn-data/tgt-train-onlyB.txt -valid_src multi-turn-data/src-test-onlyB.txt -valid_tgt multi-turn-data/tgt-test-onlyB.txt -save_data multi-turn-data/demo-onlyB -dynamic_dict
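# The commands above assume OpenNMT-py's preprocess.py CLI: -train_src/-train_tgt
# and -valid_src/-valid_tgt name the parallel source/target text files,
# -save_data sets the output prefix for the serialized dataset and vocabulary,
# and -dynamic_dict builds the source-side copy vocabularies used by copy
# attention. Each line builds one data variant (baseline, mask*, only*) for
# the single-turn and multi-turn settings.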
| 246.333333 | 284 | 0.820839 | 662 | 3,695 | 4.454683 | 0.034743 | 0.189895 | 0.166158 | 0.113937 | 1 | 1 | 0.992879 | 0.95117 | 0.349949 | 0.349949 | 0 | 0.003985 | 0.049256 | 3,695 | 14 | 285 | 263.928571 | 0.835468 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d229cddbb1af311c6b47536b51140e0b22c7dd1a | 3,843 | py | Python | shenfun/optimization/numba/pdma.py | spectralDNS/shenfun | 956633aa0f1638db5ebdc497ff68a438aa22b932 | [
"BSD-2-Clause"
] | 138 | 2017-06-17T13:30:27.000Z | 2022-03-20T02:33:47.000Z | shenfun/optimization/numba/pdma.py | spectralDNS/shenfun | 956633aa0f1638db5ebdc497ff68a438aa22b932 | [
"BSD-2-Clause"
] | 73 | 2017-05-16T06:53:04.000Z | 2022-02-04T10:40:44.000Z | shenfun/optimization/numba/pdma.py | spectralDNS/shenfun | 956633aa0f1638db5ebdc497ff68a438aa22b932 | [
"BSD-2-Clause"
] | 38 | 2018-01-31T14:37:01.000Z | 2022-03-31T15:07:27.000Z | import numba as nb
__all__ = ['PDMA_LU', 'PDMA_Solve',
'PDMA_inner_solve']
@nb.jit(nopython=True, fastmath=True, cache=True)
def PDMA_LU(data):
"""LU decomposition"""
a = data[0, :-4]
b = data[1, :-2]
d = data[2, :]
e = data[3, 2:]
f = data[4, 4:]
n = d.shape[0]
m = e.shape[0]
k = n - m
for i in range(n-2*k):
lam = b[i]/d[i]
d[i+k] -= lam*e[i]
e[i+k] -= lam*f[i]
b[i] = lam
lam = a[i]/d[i]
b[i+k] -= lam*e[i]
d[i+2*k] -= lam*f[i]
a[i] = lam
i = n-4
lam = b[i]/d[i]
d[i+k] -= lam*e[i]
b[i] = lam
i = n-3
lam = b[i]/d[i]
d[i+k] -= lam*e[i]
b[i] = lam
@nb.jit(nopython=True, fastmath=True, cache=True)
def PDMA_Solve1D(a, b, d, e, f, u):
n = d.shape[0]
u[2] -= b[0]*u[0]
u[3] -= b[1]*u[1]
for k in range(4, n):
u[k] -= (b[k-2]*u[k-2] + a[k-4]*u[k-4])
u[n-1] /= d[n-1]
u[n-2] /= d[n-2]
u[n-3] = (u[n-3]-e[n-3]*u[n-1])/d[n-3]
u[n-4] = (u[n-4]-e[n-4]*u[n-2])/d[n-4]
for k in range(n-5, -1, -1):
u[k] = (u[k]-e[k]*u[k+2]-f[k]*u[k+4])/d[k]
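# NOTE: PDMA_LU above unpacks the packed (5, n) diagonal array itself, while
# the 2D/3D helpers below pass five separate diagonal slices to it; the calls
# appear to assume an older per-diagonal signature of the LU kernel.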
@nb.jit(nopython=True, fastmath=True, cache=True)
def PDMA_LU2D(a, b, d, e, f, axis):
if axis == 0:
for j in range(d.shape[1]):
PDMA_LU(a[:-8, j], b[:-6, j], d[:-4, j], e[:-6, j], f[:-8, j])
elif axis == 1:
for i in range(d.shape[0]):
PDMA_LU(a[i, :-8], b[i, :-6], d[i, :-4], e[i, :-6], f[i, :-8])
@nb.jit(nopython=True, fastmath=True, cache=True)
def PDMA_LU3D(a, b, d, e, f, axis):
if axis == 0:
for j in range(d.shape[1]):
for k in range(d.shape[2]):
PDMA_LU(a[:-8, j, k], b[:-6, j, k], d[:-4, j, k], e[:-6, j, k], f[:-8, j, k])
elif axis == 1:
for i in range(d.shape[0]):
for k in range(d.shape[2]):
PDMA_LU(a[i, :-8, k], b[i, :-6, k], d[i, :-4, k], e[i, :-6, k], f[i, :-8, k])
elif axis == 2:
for i in range(d.shape[0]):
for j in range(d.shape[1]):
PDMA_LU(a[i, j, :-8], b[i, j, :-6], d[i, j, :-4], e[i, j, :-6], f[i, j, :-8])
def PDMA_Solve(x, data, axis=0):
a = data[0, :-4]
b = data[1, :-2]
d = data[2, :]
e = data[3, 2:]
f = data[4, 4:]
n = x.ndim
if n == 1:
PDMA_Solve1D(a, b, d, e, f, x)
elif n == 2:
PDMA_Solve2D(a, b, d, e, f, x, axis)
elif n == 3:
PDMA_Solve3D(a, b, d, e, f, x, axis)
@nb.jit(nopython=True, fastmath=True, cache=True)
def PDMA_Solve2D(a, b, d, e, f, x, axis):
if axis == 0:
for j in range(x.shape[1]):
PDMA_Solve1D(a, b, d, e, f, x[:, j])
elif axis == 1:
for i in range(x.shape[0]):
PDMA_Solve1D(a, b, d, e, f, x[i])
@nb.jit(nopython=True, fastmath=True, cache=True)
def PDMA_Solve3D(a, b, d, e, f, x, axis):
if axis == 0:
for j in range(x.shape[1]):
for k in range(x.shape[2]):
PDMA_Solve1D(a, b, d, e, f, x[:, j, k])
elif axis == 1:
for i in range(x.shape[0]):
for k in range(x.shape[2]):
PDMA_Solve1D(a, b, d, e, f, x[i, :, k])
elif axis == 2:
for i in range(x.shape[0]):
for j in range(x.shape[1]):
PDMA_Solve1D(a, b, d, e, f, x[i, j])
@nb.jit(nopython=True, fastmath=True, cache=True)
def PDMA_inner_solve(u, data):
a = data[0, :-4]
b = data[1, :-2]
d = data[2, :]
e = data[3, 2:]
f = data[4, 4:]
n = d.shape[0]
u[2] -= b[0]*u[0]
u[3] -= b[1]*u[1]
for k in range(4, n):
u[k] -= (b[k-2]*u[k-2] + a[k-4]*u[k-4])
u[n-1] /= d[n-1]
u[n-2] /= d[n-2]
u[n-3] = (u[n-3]-e[n-3]*u[n-1])/d[n-3]
u[n-4] = (u[n-4]-e[n-4]*u[n-2])/d[n-4]
for k in range(n-5, -1, -1):
u[k] = (u[k]-e[k]*u[k+2]-f[k]*u[k+4])/d[k]
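# A minimal usage sketch (not part of the original module), assuming a
# diagonally dominant symmetric pentadiagonal matrix with nonzero diagonals
# at offsets 0, +/-2 and +/-4, packed into the (5, n) layout unpacked above:
#
#     import numpy as np
#     n = 12
#     data = np.zeros((5, n))
#     data[2] = 4.0                       # main diagonal d
#     data[1, :-2] = data[3, 2:] = 1.0    # sub/super diagonals b, e (offset 2)
#     data[0, :-4] = data[4, 4:] = 0.25   # outer diagonals a, f (offset 4)
#     rhs = np.ones(n)
#     PDMA_LU(data)           # factorize in place
#     PDMA_Solve(rhs, data)   # rhs is overwritten with the solution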
| 29.113636 | 93 | 0.433255 | 816 | 3,843 | 2.003676 | 0.0625 | 0.089908 | 0.023853 | 0.031804 | 0.832416 | 0.81896 | 0.81896 | 0.811621 | 0.777982 | 0.717431 | 0 | 0.061672 | 0.312256 | 3,843 | 131 | 94 | 29.335878 | 0.556943 | 0.004163 | 0 | 0.677966 | 0 | 0 | 0.008636 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.067797 | false | 0 | 0.008475 | 0 | 0.076271 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d23bbbea824f64c900cdef027e2a2362355b004c | 138 | py | Python | openproblems/tasks/dimensionality_reduction/datasets/__init__.py | bendemeo/SingleCellOpenProblems | e4c009f8c232bdae4c9e20b8e435d0fe474b3daf | [
"MIT"
] | 134 | 2020-08-19T07:35:56.000Z | 2021-05-19T11:37:50.000Z | openproblems/tasks/dimensionality_reduction/datasets/__init__.py | bendemeo/SingleCellOpenProblems | e4c009f8c232bdae4c9e20b8e435d0fe474b3daf | [
"MIT"
] | 175 | 2020-08-17T15:26:06.000Z | 2021-05-14T11:03:46.000Z | openproblems/tasks/dimensionality_reduction/datasets/__init__.py | LuckyMD/SingleCellOpenProblems | 0ae39db494557e1dd9f28e59dda765527191eee1 | [
"MIT"
] | 46 | 2020-10-08T21:11:37.000Z | 2021-04-25T07:05:28.000Z | from .citeseq import citeseq_cbmc
from .human_blood_nestorowa2016 import human_blood_nestorowa2016
from .tenx_5k_pbmc import tenx_5k_pbmc
| 34.5 | 64 | 0.891304 | 21 | 138 | 5.428571 | 0.47619 | 0.175439 | 0.403509 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.079365 | 0.086957 | 138 | 3 | 65 | 46 | 0.825397 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d26a6be259c81bd570d7de118d632445e2612720 | 40 | py | Python | ner_api/user/__init__.py | rubiagatra/ner-suara-surabaya | b730ec7aa824d699fc0152e578388d76f40167ca | [
"MIT"
] | null | null | null | ner_api/user/__init__.py | rubiagatra/ner-suara-surabaya | b730ec7aa824d699fc0152e578388d76f40167ca | [
"MIT"
] | null | null | null | ner_api/user/__init__.py | rubiagatra/ner-suara-surabaya | b730ec7aa824d699fc0152e578388d76f40167ca | [
"MIT"
] | null | null | null | from ner_api.user.model import UserModel | 40 | 40 | 0.875 | 7 | 40 | 4.857143 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.075 | 40 | 1 | 40 | 40 | 0.918919 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
96397c70a0e48229755af08abe9fd3927992b9f4 | 106 | py | Python | basic/package/m1.py | onezens/python | 73cdc22901a006751338d0145b6e120e55fdf80f | [
"MIT"
] | null | null | null | basic/package/m1.py | onezens/python | 73cdc22901a006751338d0145b6e120e55fdf80f | [
"MIT"
] | null | null | null | basic/package/m1.py | onezens/python | 73cdc22901a006751338d0145b6e120e55fdf80f | [
"MIT"
] | null | null | null |
def sayHello():
print('m1 sayHello : Hello world!')
def smile():
print('m1 smile starting ^_^ ......') | 17.666667 | 38 | 0.613208 | 13 | 106 | 4.923077 | 0.615385 | 0.21875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022472 | 0.160377 | 106 | 6 | 38 | 17.666667 | 0.696629 | 0 | 0 | 0 | 0 | 0 | 0.509434 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0 | 0.5 | 0.5 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
9658e38ef791731dda256d5ab99679d36ad9e5dc | 47 | py | Python | pcraster/pcraster-4.2.0/pcraster-4.2.0/source/fern/source/fern/python/test/__init__.py | quanpands/wflow | b454a55e4a63556eaac3fbabd97f8a0b80901e5a | [
"MIT"
] | null | null | null | pcraster/pcraster-4.2.0/pcraster-4.2.0/source/fern/source/fern/python/test/__init__.py | quanpands/wflow | b454a55e4a63556eaac3fbabd97f8a0b80901e5a | [
"MIT"
] | null | null | null | pcraster/pcraster-4.2.0/pcraster-4.2.0/source/fern/source/fern/python/test/__init__.py | quanpands/wflow | b454a55e4a63556eaac3fbabd97f8a0b80901e5a | [
"MIT"
] | null | null | null | from .data import *
from .test_case import *
| 15.666667 | 25 | 0.702128 | 7 | 47 | 4.571429 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.212766 | 47 | 2 | 26 | 23.5 | 0.864865 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
967e1ab43ee3b9f3985bc272e63667298d97a1e0 | 100 | py | Python | pysprint/core/__init__.py | Ptrskay3/PySprint | f90811970c66e8fadea1220c4c19bf95cdf33c9e | [
"MIT"
] | 13 | 2020-05-29T14:53:13.000Z | 2022-02-09T17:29:19.000Z | pysprint/core/__init__.py | Ptrskay3/Interferometry | f90811970c66e8fadea1220c4c19bf95cdf33c9e | [
"MIT"
] | 8 | 2019-10-14T18:23:26.000Z | 2021-09-14T16:42:27.000Z | pysprint/core/__init__.py | Ptrskay3/Interferometry | f90811970c66e8fadea1220c4c19bf95cdf33c9e | [
"MIT"
] | 1 | 2020-10-07T06:42:17.000Z | 2020-10-07T06:42:17.000Z | from .methods import *
from .bases import Dataset
from .callbacks import *
from .phase import Phase
| 20 | 26 | 0.78 | 14 | 100 | 5.571429 | 0.5 | 0.25641 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 100 | 4 | 27 | 25 | 0.928571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
96bca4f7f5af6960e714e9bccbf1c1c06d6b6263 | 27 | py | Python | test_lab_TSPP.py | VicktorKu/projects | 8013769823ec0c832df359de0d065a9f23bb0c73 | [
"Unlicense"
] | null | null | null | test_lab_TSPP.py | VicktorKu/projects | 8013769823ec0c832df359de0d065a9f23bb0c73 | [
"Unlicense"
] | null | null | null | test_lab_TSPP.py | VicktorKu/projects | 8013769823ec0c832df359de0d065a9f23bb0c73 | [
"Unlicense"
] | null | null | null | print("It's my first repo")
| 13.5 | 26 | 0.703704 | 5 | 27 | 3.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 27 | 1 | 27 | 27 | 0.826087 | 0 | 0 | 0 | 0 | 0 | 0.62963 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
7398e4cda97bad1f78b87058eea35ffb5cfe984d | 24 | py | Python | Cartwheel/cartwheel-3d/Python/App/KeyframeEditor/__init__.py | MontyThibault/centre-of-mass-awareness | 58778f148e65749e1dfc443043e9fc054ca3ff4d | [
"MIT"
] | null | null | null | Cartwheel/cartwheel-3d/Python/App/KeyframeEditor/__init__.py | MontyThibault/centre-of-mass-awareness | 58778f148e65749e1dfc443043e9fc054ca3ff4d | [
"MIT"
] | null | null | null | Cartwheel/cartwheel-3d/Python/App/KeyframeEditor/__init__.py | MontyThibault/centre-of-mass-awareness | 58778f148e65749e1dfc443043e9fc054ca3ff4d | [
"MIT"
] | null | null | null | from Model import Model
| 12 | 23 | 0.833333 | 4 | 24 | 5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 24 | 1 | 24 | 24 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
73aeac61ba3cfb95292c0cbb0f779c9c9483463b | 83 | py | Python | modules/trainer/__init__.py | nobodykid/sinkhorngan-positive | 811f697da4fe02599fc7f0e1bdf77c89d183aba4 | [
"MIT"
] | null | null | null | modules/trainer/__init__.py | nobodykid/sinkhorngan-positive | 811f697da4fe02599fc7f0e1bdf77c89d183aba4 | [
"MIT"
] | null | null | null | modules/trainer/__init__.py | nobodykid/sinkhorngan-positive | 811f697da4fe02599fc7f0e1bdf77c89d183aba4 | [
"MIT"
] | null | null | null | from . import criterion
from . import post
from . import pre
from . import updater
| 16.6 | 23 | 0.759036 | 12 | 83 | 5.25 | 0.5 | 0.634921 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.192771 | 83 | 4 | 24 | 20.75 | 0.940299 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
73b3adfc36f655eeab2be993013ff12178233f5e | 6,026 | py | Python | tttcls_google_gpc/metrics.py | lauraset/Coarse-to-fine-weakly-supervised-GPC-segmentation | 934e5d9b0a64cc428095e550227987aa040b6791 | [
"MIT"
] | null | null | null | tttcls_google_gpc/metrics.py | lauraset/Coarse-to-fine-weakly-supervised-GPC-segmentation | 934e5d9b0a64cc428095e550227987aa040b6791 | [
"MIT"
] | null | null | null | tttcls_google_gpc/metrics.py | lauraset/Coarse-to-fine-weakly-supervised-GPC-segmentation | 934e5d9b0a64cc428095e550227987aa040b6791 | [
"MIT"
] | null | null | null | import torch
import torch.nn as nn
class ClassificationMetric(nn.Module):
def __init__(self, numClass, device='cpu'):
super().__init__()
self.numClass = numClass
self.device = device
self.reset(device)
# OA
def OverallAccuracy(self):
# return all class overall pixel accuracy
# PA = acc = (TP + TN) / (TP + TN + FP + TN)
acc = torch.diag(self.confusionMatrix).sum() / self.confusionMatrix.sum()
return acc
# UA
def Precision(self):
# return each category pixel accuracy(A more accurate way to call it precision)
# acc = (TP) / TP + FP
classAcc = torch.diag(self.confusionMatrix) / self.confusionMatrix.sum(axis=0)
        return classAcc  # returns a per-class list, e.g. [0.90, 0.80, 0.96]: the prediction accuracy of classes 1, 2 and 3
# PA
def Recall(self):
# acc = (TP) / TP + FN
classAcc = torch.diag(self.confusionMatrix) / self.confusionMatrix.sum(axis=1)
        return classAcc  # returns a per-class list, e.g. [0.90, 0.80, 0.96]: the prediction accuracy of classes 1, 2 and 3
def F1score(self):
# 2*Recall*Precision/(Recall+Precision)
p = self.Precision()
r = self.Recall()
return 2*p*r/(p+r)
    def genConfusionMatrix(self, imgPredict, imgLabel):  # same as the fast_hist() function in FCN's score.py
# remove classes from unlabeled pixels in gt image and predict
# mask = (imgLabel >= 0) & (imgLabel < self.numClass)
# label = self.numClass * imgLabel[mask] + imgPredict[mask]
label = self.numClass * imgLabel.flatten() + imgPredict.flatten()
count = torch.bincount(label, minlength=self.numClass ** 2)
confusionMatrix = count.reshape(self.numClass, self.numClass)
return confusionMatrix
    def getConfusionMatrix(self):  # same as the fast_hist() function in FCN's score.py
# cfM = self.confusionMatrix / np.sum(self.confusionMatrix, axis=0)
cfM = self.confusionMatrix
return cfM
def addBatch(self, imgPredict, imgLabel):
assert imgPredict.shape == imgLabel.shape
self.confusionMatrix += self.genConfusionMatrix(imgPredict, imgLabel)
def reset(self, device):
self.confusionMatrix = torch.zeros((self.numClass, self.numClass))
        if device == 'cuda':
self.confusionMatrix = self.confusionMatrix.cuda()
class AverageMeter(object):
"""Computes and stores the average and current value
Imported from https://github.com/pytorch/examples/blob/master/imagenet/main.py#L247-L262
"""
def __init__(self):
self.reset()
def reset(self):
self.val = 0
self.avg = 0
self.sum = 0
self.count = 0
def update(self, val, n=1):
self.val = val
self.sum += val * n
self.count += n
self.avg = self.sum / self.count
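# A minimal AverageMeter usage sketch (illustrative values, not from this file):
#     meter = AverageMeter()
#     for loss, batch_size in [(0.9, 32), (0.7, 32)]:
#         meter.update(loss, n=batch_size)
#     print(meter.avg)  # batch-size-weighted running average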
# multi-label classification metric
class MultilabelMetric(nn.Module):
def __init__(self, numClass, device='cpu'):
super().__init__()
self.numClass = numClass
self.device = device
self.reset(device)
# OA
def OverallAccuracy(self):
# return all class overall pixel accuracy
# PA = acc = (TP + TN) / (TP + TN + FP + TN)
acc = torch.diag(self.confusionMatrix).sum() / self.confusionMatrix.sum()
return acc
# UA
def Precision(self):
# return each category pixel accuracy(A more accurate way to call it precision)
# acc = (TP) / TP + FP
classAcc = torch.diag(self.confusionMatrix) / self.confusionMatrix.sum(axis=0)
        return classAcc  # returns a per-class list, e.g. [0.90, 0.80, 0.96]: the prediction accuracy of classes 1, 2 and 3
# PA
def Recall(self):
# acc = (TP) / TP + FN
classAcc = torch.diag(self.confusionMatrix) / self.confusionMatrix.sum(axis=1)
        return classAcc  # returns a per-class list, e.g. [0.90, 0.80, 0.96]: the prediction accuracy of classes 1, 2 and 3
def F1score(self):
# 2*Recall*Precision/(Recall+Precision)
p = self.Precision()
r = self.Recall()
return 2*p*r/(p+r)
    def genConfusionMatrix(self, imgPredict, imgLabel):  # same as the fast_hist() function in FCN's score.py
# remove classes from unlabeled pixels in gt image and predict
# mask = (imgLabel >= 0) & (imgLabel < self.numClass)
# label = self.numClass * imgLabel[mask] + imgPredict[mask]
label = self.numClass * imgLabel.flatten() + imgPredict.flatten()
count = torch.bincount(label, minlength=self.numClass ** 2)
confusionMatrix = count.reshape(self.numClass, self.numClass)
return confusionMatrix
    def getConfusionMatrix(self):  # same as the fast_hist() function in FCN's score.py
# cfM = self.confusionMatrix / np.sum(self.confusionMatrix, axis=0)
cfM = self.confusionMatrix
return cfM
def addBatch(self, imgPredict, imgLabel):
assert imgPredict.shape == imgLabel.shape
self.confusionMatrix += self.genConfusionMatrix(imgPredict, imgLabel)
def reset(self, device):
self.confusionMatrix = torch.zeros((self.numClass, self.numClass))
        if device == 'cuda':
self.confusionMatrix = self.confusionMatrix.cuda()
def plot_confusionmatrix(cm):
    """Print the confusion matrix to stdout with three-decimal formatting."""
r = cm.shape[0]
c = cm.shape[1]
for i in range(r):
for j in range(c):
print('%.3f'%cm[i,j], end=' ')
print('\n', end='')
def acc2file(oa, f1, ua, pa, cm, txtpath):
with open(txtpath, "a") as f:
f.write('oa, f1, ua, pa, confusion_matrix\n')
f.write(str(oa)+'\n')
for i in f1:
f.write(str(i)+' ')
f.write('\n')
for i in ua:
f.write(str(i)+' ')
f.write('\n')
for i in pa:
f.write(str(i)+' ')
f.write('\n')
r = cm.shape[0]
for i in range(r):
for j in range(r):
f.write(str(cm[i,j])+' ')
f.write('\n')
if __name__ == "__main__":
m = ClassificationMetric(3,device='cpu')
ref = torch.tensor([0,0,1,1,2,2])
pred = torch.tensor([0,1,0,1,0,2])
m.addBatch(pred, ref)
print(m.Precision())
print(m.Recall())
print(m.F1score())
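    # Hand-worked expectation for this toy 3-class batch:
    #   precision ~ [0.3333, 0.5000, 1.0000]
    #   recall    = [0.5, 0.5, 0.5]
    #   F1        ~ [0.4000, 0.5000, 0.6667]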
| 35.239766 | 95 | 0.605709 | 748 | 6,026 | 4.834225 | 0.195187 | 0.136615 | 0.048673 | 0.04646 | 0.802268 | 0.802268 | 0.802268 | 0.797566 | 0.797566 | 0.784845 | 0 | 0.021886 | 0.26452 | 6,026 | 170 | 96 | 35.447059 | 0.793998 | 0.242449 | 0 | 0.663793 | 0 | 0 | 0.017948 | 0 | 0 | 0 | 0 | 0 | 0.017241 | 1 | 0.198276 | false | 0 | 0.017241 | 0 | 0.344828 | 0.043103 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
73c2d14095e81af1bebc6a68bae21bbbbdcd5a96 | 122 | py | Python | codewof/programming/content/en/forest/initial.py | uccser-admin/programming-practice-prototype | 3af4c7d85308ac5bb35bb13be3ec18cac4eb8308 | [
"MIT"
] | null | null | null | codewof/programming/content/en/forest/initial.py | uccser-admin/programming-practice-prototype | 3af4c7d85308ac5bb35bb13be3ec18cac4eb8308 | [
"MIT"
] | null | null | null | codewof/programming/content/en/forest/initial.py | uccser-admin/programming-practice-prototype | 3af4c7d85308ac5bb35bb13be3ec18cac4eb8308 | [
"MIT"
] | 1 | 2018-04-12T23:58:35.000Z | 2018-04-12T23:58:35.000Z | def is_forest(items):
tree_count = items.count("tree")
tree_count = items.count("Tree")
return tree_count > 1
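# For example, assuming the two case-sensitive counts are meant to be summed:
#     is_forest(["tree", "Tree", "rock"])  # -> True  (2 trees)
#     is_forest(["tree"])                  # -> False (only 1 tree)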
| 24.4 | 36 | 0.672131 | 18 | 122 | 4.333333 | 0.444444 | 0.346154 | 0.358974 | 0.487179 | 0.589744 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010204 | 0.196721 | 122 | 4 | 37 | 30.5 | 0.785714 | 0 | 0 | 0 | 0 | 0 | 0.065574 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
73d21828989e131ea8093ac431ff9bdc6cf8d8ef | 1,182 | py | Python | pictures.py | Wefqi99/HangmanFinal2 | ba49fcc8fe626f82c80a847636218a220b18bb46 | [
"MIT"
] | null | null | null | pictures.py | Wefqi99/HangmanFinal2 | ba49fcc8fe626f82c80a847636218a220b18bb46 | [
"MIT"
] | null | null | null | pictures.py | Wefqi99/HangmanFinal2 | ba49fcc8fe626f82c80a847636218a220b18bb46 | [
"MIT"
] | null | null | null | pictures = [
'''
_____________________________
''',
'''
|
|
|
|
|
|
|
|_____________________________
''',
'''
|--------------
|
|
|
|
|
|
|_____________________________
''',
'''
|--------------
| |
| O
|
|
|
|
|_____________________________
''',
'''
|--------------
| |
| O
| |
|
|
|
|_____________________________
''',
'''
|--------------
| |
| O
| --|
|
|
|
|_____________________________
''',
    '''
    |--------------
| |
| O
| --|---
|
|
|
|_____________________________
''',
'''
|--------------
| |
| O
| --|---
| |
| /
|
|_____________________________
''',
'''
|--------------
| |
| O
| --|---
| |
| / \\
|
|_____________________________
'''
]
| 8.442857 | 30 | 0.232657 | 7 | 1,182 | 2 | 0.285714 | 0.714286 | 0.857143 | 0.857143 | 0.428571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.546531 | 1,182 | 139 | 31 | 8.503597 | 0.026119 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fbc149d515e074a11e42433424211b0fa09084b1 | 33,381 | py | Python | models.py | DebasmitaGhose/Semantic-Segmentation | 6887e69130a8dedfe7c0006add08f2781fe44734 | [
"MIT"
] | 2 | 2018-05-27T13:10:44.000Z | 2018-05-30T05:53:40.000Z | models.py | DebasmitaGhose/Semantic-Segmentation | 6887e69130a8dedfe7c0006add08f2781fe44734 | [
"MIT"
] | null | null | null | models.py | DebasmitaGhose/Semantic-Segmentation | 6887e69130a8dedfe7c0006add08f2781fe44734 | [
"MIT"
] | null | null | null | import numpy as np
import matplotlib.pyplot as plt
from pylab import *
import os
import sys
from keras_contrib.applications import densenet
from keras.models import Model
from keras.regularizers import l2
from keras.layers import *
from keras.engine import Layer
from keras.applications.vgg16 import *
from keras.models import *
from keras.applications.imagenet_utils import _obtain_input_shape
import keras.backend as K
import tensorflow as tf
from utils.get_weights_path import *
from utils.basics import *
from utils.resnet_helpers import *
from utils.BilinearUpSampling import *
from keras.layers import Activation
from keras.layers.normalization import BatchNormalization
def top(x, input_shape, classes, activation, weight_decay):
x = Conv2D(classes, (1, 1), activation='linear',
padding='same', kernel_regularizer=l2(weight_decay),
use_bias=False)(x)
if K.image_data_format() == 'channels_first':
channel, row, col = input_shape
else:
row, col, channel = input_shape
# TODO(ahundt) this is modified for the sigmoid case! also use loss_shape
    if activation == 'sigmoid':
x = Reshape((row * col * classes,))(x)
return x
def crop(o1, o2, i):
o_shape2 = Model(i, o2).output_shape
outputHeight2 = o_shape2[1]
outputWidth2 = o_shape2[2]
o_shape1 = Model(i, o1).output_shape
outputHeight1 = o_shape1[1]
outputWidth1 = o_shape1[2]
print(outputHeight2)
print(outputWidth2)
print(outputHeight1)
    print(outputWidth1)
cx = abs(outputWidth1 - outputWidth2)
cy = abs(outputHeight2 - outputHeight1)
if outputWidth1 > outputWidth2:
o1 = Cropping2D(cropping=((0, 0), (0, cx)))(o1)
else:
o2 = Cropping2D(cropping=((0, 0), (0, cx)))(o2)
if outputHeight1 > outputHeight2:
o1 = Cropping2D(cropping=((0, cy), (0, 0)))(o1)
else:
o2 = Cropping2D(cropping=((0, cy), (0, 0)))(o2)
return o1, o2
def FCN_Vgg16_32s(input_shape=None, weight_decay=0., batch_momentum=0.9, batch_shape=None, classes=21):
if batch_shape:
img_input = Input(batch_shape=batch_shape)
image_size = batch_shape[1:3]
else:
img_input = Input(shape=input_shape)
image_size = input_shape[0:2]
# Block 1
x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv1', kernel_regularizer=l2(weight_decay))(img_input)
x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv2', kernel_regularizer=l2(weight_decay))(x)
x = MaxPooling2D((2, 2), strides=(2, 2), name='block1_pool')(x)
# Block 2
x = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv1', kernel_regularizer=l2(weight_decay))(x)
x = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv2', kernel_regularizer=l2(weight_decay))(x)
x = MaxPooling2D((2, 2), strides=(2, 2), name='block2_pool')(x)
# Block 3
x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv1', kernel_regularizer=l2(weight_decay))(x)
x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv2', kernel_regularizer=l2(weight_decay))(x)
x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv3', kernel_regularizer=l2(weight_decay))(x)
x = MaxPooling2D((2, 2), strides=(2, 2), name='block3_pool')(x)
# Block 4
x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv1', kernel_regularizer=l2(weight_decay))(x)
x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv2', kernel_regularizer=l2(weight_decay))(x)
x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv3', kernel_regularizer=l2(weight_decay))(x)
x = MaxPooling2D((2, 2), strides=(2, 2), name='block4_pool')(x)
# Block 5
x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv1', kernel_regularizer=l2(weight_decay))(x)
x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv2', kernel_regularizer=l2(weight_decay))(x)
x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv3', kernel_regularizer=l2(weight_decay))(x)
x = MaxPooling2D((2, 2), strides=(2, 2), name='block5_pool')(x)
# Convolutional layers transfered from fully-connected layers
x = Conv2D(4096, (7, 7), activation='relu', padding='same', name='fc1', kernel_regularizer=l2(weight_decay))(x)
x = Dropout(0.5)(x)
x = Conv2D(4096, (1, 1), activation='relu', padding='same', name='fc2', kernel_regularizer=l2(weight_decay))(x)
x = Dropout(0.5)(x)
#classifying layer
x = Conv2D(classes, (1, 1), kernel_initializer='he_normal', activation='linear', padding='valid', strides=(1, 1), kernel_regularizer=l2(weight_decay))(x)
x = BilinearUpSampling2D(size=(32, 32))(x)
model = Model(img_input, x)
weights_path = os.path.abspath('C:\\Users\\User\\Documents\\UMass Amherst\\Semester 2\\COMPSCI 690IV - Intelligent Visual Computing\\Project - Semantic Segmentation\\Keras-FCN\\Models\\FCN_Vgg16_32s\\vgg16.h5')
model.load_weights(weights_path, by_name=True)
return model
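# A minimal build sketch (assuming the hard-coded vgg16.h5 checkpoint exists;
# shapes are illustrative, not from the original file):
#     model = FCN_Vgg16_32s(input_shape=(320, 320, 3), classes=21)
#     model.summary()   # scores are upsampled 32x back to 320x320x21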
def AtrousFCN_Vgg16_16s(input_shape=None, weight_decay=0., batch_momentum=0.9, batch_shape=None, classes=21):
if batch_shape:
img_input = Input(batch_shape=batch_shape)
image_size = batch_shape[1:3]
else:
img_input = Input(shape=input_shape)
image_size = input_shape[0:2]
# Block 1
x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv1', kernel_regularizer=l2(weight_decay))(img_input)
x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv2', kernel_regularizer=l2(weight_decay))(x)
x = MaxPooling2D((2, 2), strides=(2, 2), name='block1_pool')(x)
# Block 2
x = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv1', kernel_regularizer=l2(weight_decay))(x)
x = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv2', kernel_regularizer=l2(weight_decay))(x)
x = MaxPooling2D((2, 2), strides=(2, 2), name='block2_pool')(x)
# Block 3
x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv1', kernel_regularizer=l2(weight_decay))(x)
x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv2', kernel_regularizer=l2(weight_decay))(x)
x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv3', kernel_regularizer=l2(weight_decay))(x)
x = MaxPooling2D((2, 2), strides=(2, 2), name='block3_pool')(x)
# Block 4
x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv1', kernel_regularizer=l2(weight_decay))(x)
x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv2', kernel_regularizer=l2(weight_decay))(x)
x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv3', kernel_regularizer=l2(weight_decay))(x)
x = MaxPooling2D((2, 2), strides=(2, 2), name='block4_pool')(x)
# Block 5
x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv1', kernel_regularizer=l2(weight_decay))(x)
x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv2', kernel_regularizer=l2(weight_decay))(x)
x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv3', kernel_regularizer=l2(weight_decay))(x)
# Convolutional layers transfered from fully-connected layers
x = Conv2D(4096, (7, 7), activation='relu', padding='same', dilation_rate=(2, 2),
name='fc1', kernel_regularizer=l2(weight_decay))(x)
x = Dropout(0.5)(x)
x = Conv2D(4096, (1, 1), activation='relu', padding='same', name='fc2', kernel_regularizer=l2(weight_decay))(x)
x = Dropout(0.5)(x)
#classifying layer
x = Conv2D(classes, (1, 1), kernel_initializer='he_normal', activation='linear', padding='valid', strides=(1, 1), kernel_regularizer=l2(weight_decay))(x)
x = BilinearUpSampling2D(target_size=tuple(image_size))(x)
model = Model(img_input, x)
weights_path = os.path.expanduser(os.path.join('~', '.keras/models/fcn_vgg16_weights_tf_dim_ordering_tf_kernels.h5'))
model.load_weights(weights_path, by_name=True)
return model
def FCN_Resnet50_32s(input_shape=None, weight_decay=0., batch_momentum=0.9, batch_shape=None, classes=21):
if batch_shape:
img_input = Input(batch_shape=batch_shape)
image_size = batch_shape[1:3]
else:
img_input = Input(shape=input_shape)
image_size = input_shape[0:2]
bn_axis = 3
x = Conv2D(64, (7, 7), strides=(2, 2), padding='same', name='conv1', kernel_regularizer=l2(weight_decay))(img_input)
x = BatchNormalization(axis=bn_axis, name='bn_conv1')(x)
x = Activation('relu')(x)
x = MaxPooling2D((3, 3), strides=(2, 2))(x)
x = conv_block(3, [64, 64, 256], stage=2, block='a', strides=(1, 1))(x)
x = identity_block(3, [64, 64, 256], stage=2, block='b')(x)
x = identity_block(3, [64, 64, 256], stage=2, block='c')(x)
x = conv_block(3, [128, 128, 512], stage=3, block='a')(x)
x = identity_block(3, [128, 128, 512], stage=3, block='b')(x)
x = identity_block(3, [128, 128, 512], stage=3, block='c')(x)
x = identity_block(3, [128, 128, 512], stage=3, block='d')(x)
x = conv_block(3, [256, 256, 1024], stage=4, block='a')(x)
x = identity_block(3, [256, 256, 1024], stage=4, block='b')(x)
x = identity_block(3, [256, 256, 1024], stage=4, block='c')(x)
x = identity_block(3, [256, 256, 1024], stage=4, block='d')(x)
x = identity_block(3, [256, 256, 1024], stage=4, block='e')(x)
x = identity_block(3, [256, 256, 1024], stage=4, block='f')(x)
x = conv_block(3, [512, 512, 2048], stage=5, block='a')(x)
x = identity_block(3, [512, 512, 2048], stage=5, block='b')(x)
x = identity_block(3, [512, 512, 2048], stage=5, block='c')(x)
#classifying layer
x = Conv2D(classes, (1, 1), kernel_initializer='he_normal', activation='linear', padding='valid', strides=(1, 1), kernel_regularizer=l2(weight_decay))(x)
x = BilinearUpSampling2D(size=(32, 32))(x)
model = Model(img_input, x)
weights_path = os.path.expanduser(os.path.join('~', '.keras/models/fcn_resnet50_weights_tf_dim_ordering_tf_kernels.h5'))
model.load_weights(weights_path, by_name=True)
return model
def AtrousFCN_Resnet50_16s(input_shape=None, weight_decay=0., batch_momentum=0.9, batch_shape=None, classes=21):
if batch_shape:
img_input = Input(batch_shape=batch_shape)
image_size = batch_shape[1:3]
else:
img_input = Input(shape=input_shape)
image_size = input_shape[0:2]
bn_axis = 3
x = Conv2D(64, (7, 7), strides=(2, 2), padding='same', name='conv1', kernel_regularizer=l2(weight_decay))(img_input)
x = BatchNormalization(axis=bn_axis, name='bn_conv1', momentum=batch_momentum)(x)
x = Activation('relu')(x)
x = MaxPooling2D((3, 3), strides=(2, 2))(x)
x = conv_block(3, [64, 64, 256], stage=2, block='a', weight_decay=weight_decay, strides=(1, 1), batch_momentum=batch_momentum)(x)
x = identity_block(3, [64, 64, 256], stage=2, block='b', weight_decay=weight_decay, batch_momentum=batch_momentum)(x)
x = identity_block(3, [64, 64, 256], stage=2, block='c', weight_decay=weight_decay, batch_momentum=batch_momentum)(x)
x = conv_block(3, [128, 128, 512], stage=3, block='a', weight_decay=weight_decay, batch_momentum=batch_momentum)(x)
x = identity_block(3, [128, 128, 512], stage=3, block='b', weight_decay=weight_decay, batch_momentum=batch_momentum)(x)
x = identity_block(3, [128, 128, 512], stage=3, block='c', weight_decay=weight_decay, batch_momentum=batch_momentum)(x)
x = identity_block(3, [128, 128, 512], stage=3, block='d', weight_decay=weight_decay, batch_momentum=batch_momentum)(x)
x = conv_block(3, [256, 256, 1024], stage=4, block='a', weight_decay=weight_decay, batch_momentum=batch_momentum)(x)
x = identity_block(3, [256, 256, 1024], stage=4, block='b', weight_decay=weight_decay, batch_momentum=batch_momentum)(x)
x = identity_block(3, [256, 256, 1024], stage=4, block='c', weight_decay=weight_decay, batch_momentum=batch_momentum)(x)
x = identity_block(3, [256, 256, 1024], stage=4, block='d', weight_decay=weight_decay, batch_momentum=batch_momentum)(x)
x = identity_block(3, [256, 256, 1024], stage=4, block='e', weight_decay=weight_decay, batch_momentum=batch_momentum)(x)
x = identity_block(3, [256, 256, 1024], stage=4, block='f', weight_decay=weight_decay, batch_momentum=batch_momentum)(x)
x = atrous_conv_block(3, [512, 512, 2048], stage=5, block='a', weight_decay=weight_decay, atrous_rate=(2, 2), batch_momentum=batch_momentum)(x)
x = atrous_identity_block(3, [512, 512, 2048], stage=5, block='b', weight_decay=weight_decay, atrous_rate=(2, 2), batch_momentum=batch_momentum)(x)
x = atrous_identity_block(3, [512, 512, 2048], stage=5, block='c', weight_decay=weight_decay, atrous_rate=(2, 2), batch_momentum=batch_momentum)(x)
#classifying layer
#x = Conv2D(classes, (3, 3), dilation_rate=(2, 2), kernel_initializer='normal', activation='linear', padding='same', strides=(1, 1), kernel_regularizer=l2(weight_decay))(x)
x = Conv2D(classes, (1, 1), kernel_initializer='he_normal', activation='linear', padding='same', strides=(1, 1), kernel_regularizer=l2(weight_decay))(x)
x = BilinearUpSampling2D(target_size=tuple(image_size))(x)
model = Model(img_input, x)
weights_path = os.path.expanduser(os.path.join('~', '.keras/models/fcn_resnet50_weights_tf_dim_ordering_tf_kernels.h5'))
model.load_weights(weights_path, by_name=True)
return model
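# Both Atrous variants keep stride-16 feature maps by dilating the last stage
# instead of striding, then upsample bilinearly to the input size. Build
# sketch (illustrative shapes, assuming the listed weight files exist):
#     model = AtrousFCN_Resnet50_16s(input_shape=(320, 320, 3), classes=21)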
def VggIFCN(input_shape=None, weight_decay=0., batch_momentum=0.9, batch_shape=None, classes=21):
# assert input_height%32 == 0
# assert input_width%32 == 0
# https://github.com/fchollet/deep-learning-models/releases/download/v0.1/vgg16_weights_th_dim_ordering_th_kernels.h5
if batch_shape:
img_input = Input(batch_shape=batch_shape)
image_size = batch_shape[1:3]
else:
img_input = Input(shape=input_shape)
image_size = input_shape[0:2]
    x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv1')(img_input)
x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv2')(x)
x = MaxPooling2D((2, 2), strides=(2, 2), name='block1_pool')(x)
f1 = x
# Block 2
x = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv1')(x)
x = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv2')(x)
x = MaxPooling2D((2, 2), strides=(2, 2), name='block2_pool')(x)
f2 = x
# Block 3
x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv1')(x)
x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv2')(x)
x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv3')(x)
x = MaxPooling2D((2, 2), strides=(2, 2), name='block3_pool')(x)
f3 = x
# Block 4
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv1')(x)
    var_3_1 = x
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv2')(x)
    var_3_2 = x
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv3')(x)
    var_3_3 = x
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block4_pool')(x)
f4 = x
# Block 5
x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv1')(x)
var_4_1 = x
print("var4.1",var_4_1.shape)
x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv2')(x)
var_4_2 = x
print("var4.2",var_4_2.shape)
x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv3')(x)
var_4_3 = x
print("var4.3",var_4_3.shape)
x = MaxPooling2D((2, 2), strides=(2, 2), name='block5_pool')(x)
f5 = x
x = Flatten(name='flatten')(x)
x = Dense(4096, activation='relu', name='fc1')(x)
x = Dense(4096, activation='relu', name='fc2')(x)
x = Dense(1000, activation='softmax', name='predictions')(x)
#vgg = Model(img_input, x)
#weights_path = 'Models/FCN_Vgg16_8s/vgg16_weights_th_dim_ordering_th_kernels.h5'
#vgg.load_weights(weights_path)
o = f5
o = (Conv2D(4096, (7, 7), activation='relu', padding='same'))(o)
o = Dropout(0.5)(o)
o = (Conv2D(4096, (1, 1), activation='relu', padding='same'))(o)
o = Dropout(0.5)(o)
o = (Conv2D(classes, (1, 1), kernel_initializer='he_normal'))(o)
#o:size=10X10
o = Conv2DTranspose(classes, kernel_size=(4, 4), strides=(2, 2), use_bias=False)(o)
#o:size=22X22
o2 = f4
o2 = (Conv2D(classes, (1, 1), kernel_initializer='he_normal'))(o2)
#o2:size=20X20
o, o2 = crop(o, o2, img_input)
# both are 20X20
    o = Add()([o, o2])
    var_4_1 = Conv2D(classes, (1, 1), kernel_initializer='he_normal')(var_4_1)
    var_4_2 = Conv2D(classes, (1, 1), kernel_initializer='he_normal')(var_4_2)
    var_4_3 = Conv2D(classes, (1, 1), kernel_initializer='he_normal')(var_4_3)
    # all are of size 20X20
    o = Add()([o, var_4_1])
    o = Add()([o, var_4_2])
    o = Add()([o, var_4_3])
    temp_20 = o  # 20X20
    o = Conv2DTranspose(classes, kernel_size=(4, 4), strides=(2, 2), use_bias=False)(o)
    # o: size=42X42
    o2 = f3
    o2 = Conv2D(classes, (1, 1), kernel_initializer='he_normal')(o2)
    # o2: size=40X40
    o, o2 = crop(o, o2, img_input)
    # both are 40X40
    o = Add()([o, o2])
    var_3_1 = Conv2D(classes, (1, 1), kernel_initializer='he_normal')(var_3_1)
    var_3_2 = Conv2D(classes, (1, 1), kernel_initializer='he_normal')(var_3_2)
    var_3_3 = Conv2D(classes, (1, 1), kernel_initializer='he_normal')(var_3_3)
    # all are of size 40X40
    o = Add()([o, var_3_1])
    o = Add()([o, var_3_2])
    o = Add()([o, var_3_3])
    temp_40 = o  # 40X40
#print(o.shape)
#print(o.shape)
#imshow(o)
K.print_tensor(o)
    ######### let's make a context network #############
    o = f5
    o = Conv2D(4096, (7, 7), activation='relu', padding='same')(o)
    o = Dropout(0.5)(o)
    o = Conv2D(4096, (1, 1), activation='relu', padding='same')(o)
    o = Dropout(0.5)(o)
    o = Conv2D(classes, (1, 1), kernel_initializer='he_normal')(o)
    o = Conv2D(512, (5, 5), kernel_initializer='he_normal', padding='same')(o)
    o = BatchNormalization()(o)
    o = Activation('relu')(o)
    temp_c1 = o
    o = Conv2D(512, (5, 5), kernel_initializer='he_normal', padding='same')(o)
    o = BatchNormalization()(o)
    o = Activation('relu')(o)
    temp_c2 = o
    o = Conv2D(512, (5, 5), kernel_initializer='he_normal', padding='same')(o)
    o = BatchNormalization()(o)
    o = Activation('relu')(o)
    temp_c3 = o
    o = Conv2D(512, (5, 5), kernel_initializer='he_normal', padding='same')(o)
    o = BatchNormalization()(o)
    o = Activation('relu')(o)
    temp_c4 = o
    o = Conv2D(512, (5, 5), kernel_initializer='he_normal', padding='same')(o)
    o = BatchNormalization()(o)
    o = Activation('relu')(o)
    temp_c5 = o
    o = Conv2D(512, (5, 5), kernel_initializer='he_normal', padding='same')(o)
    o = BatchNormalization()(o)
    o = Activation('relu')(o)
    temp_c6 = o
    o = Add()([o, temp_c1])
    o = Add()([o, temp_c2])
    o = Add()([o, temp_c3])
    o = Add()([o, temp_c4])
    o = Add()([o, temp_c5])
    o = Add()([o, temp_c6])
    o = Conv2DTranspose(classes, kernel_size=(4, 4), strides=(2, 2), use_bias=False)(o)
    o, temp_20 = crop(o, temp_20, img_input)
    o = Add()([o, temp_20])
    temp_20 = Conv2DTranspose(classes, kernel_size=(4, 4), strides=(2, 2), use_bias=False)(temp_20)
    o = Conv2DTranspose(classes, kernel_size=(4, 4), strides=(2, 2), use_bias=False)(o)
    temp_20, temp_40 = crop(temp_20, temp_40, img_input)
    o, temp_40 = crop(o, temp_40, img_input)
    o = Add()([o, temp_40])
    o = Add()([o, temp_20])
####################################################################################
#o = Conv2DTranspose(classes, kernel_size=(16, 16), strides=(8, 8), use_bias=False)(o)
o = Conv2DTranspose(classes, kernel_size=(8, 8), strides=(8, 8), use_bias=False)(o)
o_shape = Model(img_input, o).output_shape
outputHeight = o_shape[1]
outputWidth = o_shape[2]
    o = Reshape((-1, outputHeight * outputWidth))(o)
    o = Permute((2, 1))(o)
    o = Reshape((outputHeight, outputWidth, -1))(o)
    o = Activation('softmax')(o)
model = Model(img_input, o)
model.outputWidth = outputWidth
model.outputHeight = outputHeight
#weights_path = 'Models/FCN_Vgg16_8s/fcn_vgg16_weights_tf_dim_ordering_tf_kernels.h5'
#model.load_weights(weights_path, by_name=True)
#return model
return model
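# VggIFCN fuses 1x1-projected intermediate VGG feature maps (blocks 4 and 5)
# with upsampled score maps plus a residual context branch before the final
# 8x upsampling and per-pixel softmax. Build sketch (illustrative):
#     model = VggIFCN(input_shape=(320, 320, 3), classes=21)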
def Atrous_DenseNet(input_shape=None, weight_decay=1E-4,
batch_momentum=0.9, batch_shape=None, classes=21,
include_top=False, activation='sigmoid'):
# TODO(ahundt) pass the parameters but use defaults for now
if include_top is True:
# TODO(ahundt) Softmax is pre-applied, so need different train, inference, evaluate.
# TODO(ahundt) for multi-label try per class sigmoid top as follows:
# x = Reshape((row * col * classes))(x)
# x = Activation('sigmoid')(x)
# x = Reshape((row, col, classes))(x)
return densenet.DenseNet(depth=None, nb_dense_block=3, growth_rate=32,
nb_filter=-1, nb_layers_per_block=[6, 12, 24, 16],
bottleneck=True, reduction=0.5, dropout_rate=0.2,
weight_decay=1E-4,
include_top=True, top='segmentation',
weights=None, input_tensor=None,
input_shape=input_shape,
classes=classes, transition_dilation_rate=2,
transition_kernel_size=(1, 1),
transition_pooling=None)
# if batch_shape:
# img_input = Input(batch_shape=batch_shape)
# image_size = batch_shape[1:3]
# else:
# img_input = Input(shape=input_shape)
# image_size = input_shape[0:2]
input_shape = _obtain_input_shape(input_shape,
default_size=32,
min_size=16,
data_format=K.image_data_format(),
include_top=False)
img_input = Input(shape=input_shape)
x = densenet.__create_dense_net(classes, img_input,
depth=None, nb_dense_block=3, growth_rate=32,
nb_filter=-1, nb_layers_per_block=[6, 12, 24, 16],
bottleneck=True, reduction=0.5, dropout_rate=0.2,
weight_decay=1E-4, top='segmentation',
input_shape=input_shape,
transition_dilation_rate=2,
transition_kernel_size=(1, 1),
transition_pooling=None,
include_top=include_top)
x = top(x, input_shape, classes, activation, weight_decay)
model = Model(img_input, x, name='Atrous_DenseNet')
# TODO(ahundt) add weight loading
return model
def DenseNet_FCN(input_shape=None, weight_decay=1E-4,
batch_momentum=0.9, batch_shape=None, classes=21,
include_top=False, activation='sigmoid'):
if include_top is True:
# TODO(ahundt) Softmax is pre-applied, so need different train, inference, evaluate.
# TODO(ahundt) for multi-label try per class sigmoid top as follows:
# x = Reshape((row * col * classes))(x)
# x = Activation('sigmoid')(x)
# x = Reshape((row, col, classes))(x)
return densenet.DenseNetFCN(input_shape=input_shape,
weights=None, classes=classes,
nb_layers_per_block=[4, 5, 7, 10, 12, 15],
growth_rate=16,
dropout_rate=0.2)
# if batch_shape:
# img_input = Input(batch_shape=batch_shape)
# image_size = batch_shape[1:3]
# else:
# img_input = Input(shape=input_shape)
# image_size = input_shape[0:2]
input_shape = _obtain_input_shape(input_shape,
default_size=32,
min_size=16,
data_format=K.image_data_format(),
include_top=False)
img_input = Input(shape=input_shape)
x = densenet.__create_fcn_dense_net(classes, img_input,
input_shape=input_shape,
nb_layers_per_block=[4, 5, 7, 10, 12, 15],
growth_rate=16,
dropout_rate=0.2,
include_top=include_top)
x = top(x, input_shape, classes, activation, weight_decay)
# TODO(ahundt) add weight loading
model = Model(img_input, x, name='DenseNet_FCN')
return model
def Unet(input_shape=None, weight_decay=0., batch_momentum=0.9, batch_shape=None, classes=21):
if batch_shape:
img_input = Input(batch_shape=batch_shape)
image_size = batch_shape[1:3]
else:
img_input = Input(shape=input_shape)
image_size = input_shape[0:2]
    conv1 = Conv2D(32, (3, 3), activation='relu', padding='same')(img_input)
    conv1 = Dropout(0.2)(conv1)
    conv1 = Conv2D(32, (3, 3), activation='relu', padding='same')(conv1)
    pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
    conv2 = Conv2D(64, (3, 3), activation='relu', padding='same')(pool1)
    conv2 = Dropout(0.2)(conv2)
    conv2 = Conv2D(64, (3, 3), activation='relu', padding='same')(conv2)
    pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)
    print(conv2.shape)
    conv3 = Conv2D(128, (3, 3), activation='relu', padding='same')(pool2)
    conv3 = Dropout(0.2)(conv3)
    conv3 = Conv2D(128, (3, 3), activation='relu', padding='same')(conv3)
    print(conv3.shape)
    # skip connections: concatenate upsampled features with the encoder maps
    up1 = concatenate([UpSampling2D(size=(2, 2))(conv3), conv2], axis=-1)
    conv4 = Conv2D(64, (3, 3), activation='relu', padding='same')(up1)
    conv4 = Dropout(0.2)(conv4)
    conv4 = Conv2D(64, (3, 3), activation='relu', padding='same')(conv4)
    up2 = concatenate([UpSampling2D(size=(2, 2))(conv4), conv1], axis=-1)
    conv5 = Conv2D(32, (3, 3), activation='relu', padding='same')(up2)
    conv5 = Dropout(0.2)(conv5)
    conv5 = Conv2D(32, (3, 3), activation='relu', padding='same')(conv5)
    conv6 = Conv2D(classes, (1, 1), activation='relu', padding='same')(conv5)
    conv6 = Reshape((image_size[0] * image_size[1], classes))(conv6)
    conv7 = Activation('softmax')(conv6)
    model = Model(img_input, conv7)
    return model
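# A minimal usage sketch for the U-Net above (compilation is left to the
# training script; shapes are illustrative, not from the original file):
#     model = Unet(input_shape=(320, 320, 3), classes=21)
#     model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])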
VGG_Weights_path = "C://Users//User//Documents//UMass Amherst//Semester 2//COMPSCI 690IV - Intelligent Visual Computing//Project - Semantic Segmentation//Keras-FCN//Models//FCN_Vgg16_UNet//vgg16.h5"
def VGGUnet(input_shape=None, weight_decay=0., batch_momentum=0, batch_shape=None, classes=21):
if batch_shape:
img_input = Input(batch_shape=batch_shape)
image_size = batch_shape[1:3]
else:
img_input = Input(shape=input_shape)
image_size = input_shape[0:2]
'''
input_height = 320
input_width = 320
img_input = Input(shape=(3,input_height,input_width))
'''
x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv1')(img_input)
x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv2')(x)
x = MaxPooling2D((2, 2), strides=(2, 2), name='block1_pool')(x)
f1 = x
print(f1.shape)
# Block 2
x = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv1')(x)
x = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv2')(x)
x = MaxPooling2D((2, 2), strides=(2, 2), name='block2_pool')(x)
f2 = x
print(f2.shape)
# Block 3
x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv1')(x)
x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv2')(x)
x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv3')(x)
x = MaxPooling2D((2, 2), strides=(2, 2), name='block3_pool')(x)
f3 = x
print(f3.shape)
# Block 4
x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv1')(x)
x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv2')(x)
x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv3')(x)
x = MaxPooling2D((2, 2), strides=(2, 2), name='block4_pool')(x)
f4 = x
print(f4.shape)
# Block 5
x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv1')(x)
x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv2')(x)
x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv3')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block5_pool')(x)
f5 = x
print(f5.shape)
x = Flatten(name='flatten')(x)
x = Dense(4096, activation='relu', name='fc1')(x)
x = Dense(4096, activation='relu', name='fc2')(x)
    x = Dense(1000, activation='softmax', name='predictions')(x)
print(x.shape)
#vgg = Model( img_input , x )
#vgg.load_weights(VGG_Weights_path)
#weights_path = os.path.abspath('C:\\Users\\User\\Documents\\UMass Amherst\\Semester 2\\COMPSCI 690IV - Intelligent Visual Computing\\Project - Semantic Segmentation\\Keras-FCN\\Models\\FCN_Vgg16_32s\\vgg16.h5')
#vgg.load_weights(weights_path, by_name=True)
    levels = [f1, f2, f3, f4, f5]
o = f4
print("o=f4",o.shape)
o = ( ZeroPadding2D( (1,1)))(o)
print("0-padding",o.shape)
o = ( Conv2D(512, (3, 3), padding='valid'))(o)
print("conv2d",o.shape)
o = BatchNormalization()(o)
print("batch-norm",o.shape)
o = (UpSampling2D( (2,2)))(o)
print("upsample",o.shape)
print("f3",f3.shape)
o = ( concatenate([ o ,f3],axis=-1 ) )
o = ( ZeroPadding2D( (1,1)))(o)
o = ( Conv2D( 256, (3, 3), padding='valid'))(o)
o = ( BatchNormalization())(o)
o = (UpSampling2D( (2,2)))(o)
o = ( concatenate([o,f2],axis=-1 ) )
o = ( ZeroPadding2D((1,1)))(o)
o = ( Conv2D( 128 , (3, 3), padding='valid') )(o)
o = ( BatchNormalization())(o)
o = (UpSampling2D( (2,2)))(o)
o = ( concatenate([o,f1],axis=-1 ) )
o = ( ZeroPadding2D((1,1)))(o)
o = ( Conv2D( 64 , (3, 3), padding='valid'))(o)
o = ( BatchNormalization())(o)
o = (UpSampling2D( (2,2)))(o)
print("o", o.shape)
    n_classes = classes
    o = Conv2D(n_classes, (3, 3), padding='same')(o)
    o_shape = Model(img_input, o).output_shape
    print("o shape", o_shape)
    outputHeight = o_shape[1]
    print("output Height =", outputHeight)
    outputWidth = o_shape[2]
    print("output Width =", outputWidth)
    #outputHeight = 320
    #outputWidth = 320
    o = Reshape((n_classes, outputHeight * outputWidth))(o)
    o = Permute((2, 1))(o)
    o = Reshape((outputHeight, outputWidth, -1))(o)
    o = Activation('softmax')(o)
    model = Model(img_input, o)
    model.outputWidth = outputWidth
    model.outputHeight = outputHeight
#weights_path = os.path.abspath('C:\\Users\\User\\Documents\\UMass Amherst\\Semester 2\\COMPSCI 690IV - Intelligent Visual Computing\\Project - Semantic Segmentation\\Keras-FCN\\Models\\FCN_Vgg16_32s\\vgg16.h5')
#model.load_weights(weights_path, by_name=True)
return model | 45.170501 | 216 | 0.611306 | 4,644 | 33,381 | 4.220069 | 0.069983 | 0.010409 | 0.037963 | 0.050617 | 0.842025 | 0.817022 | 0.794826 | 0.786713 | 0.778651 | 0.766354 | 0 | 0.074792 | 0.22375 | 33,381 | 739 | 217 | 45.170501 | 0.681538 | 0.087954 | 0 | 0.552475 | 0 | 0.00396 | 0.0906 | 0.013347 | 0 | 0 | 0 | 0.001353 | 0 | 1 | 0.021782 | false | 0 | 0.041584 | 0 | 0.089109 | 0.053465 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
83ed57495ff65b74269b0f735a5762572943dada | 41 | py | Python | backend/db/models/__init__.py | appheap/social-media-analyzer | 0f9da098bfb0b4f9eb38e0244aa3a168cf97d51c | [
"Apache-2.0"
] | 5 | 2021-09-11T22:01:15.000Z | 2022-03-16T21:33:42.000Z | backend/db/models/__init__.py | iamatlasss/social-media-analyzer | 429d1d2bbd8bfce80c50c5f8edda58f87ace668d | [
"Apache-2.0"
] | null | null | null | backend/db/models/__init__.py | iamatlasss/social-media-analyzer | 429d1d2bbd8bfce80c50c5f8edda58f87ace668d | [
"Apache-2.0"
] | 3 | 2022-01-18T11:06:22.000Z | 2022-02-26T13:39:28.000Z | from .base import *
from .media import *
| 13.666667 | 20 | 0.707317 | 6 | 41 | 4.833333 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.195122 | 41 | 2 | 21 | 20.5 | 0.878788 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
83ed81f27b016a19f817452c268e6a6f8cad9de7 | 242 | py | Python | general/admin.py | VladaDidko/skill- | 861c08376e2bc9b9a5a44e3a8560324ee53ce2d0 | [
"Unlicense"
] | null | null | null | general/admin.py | VladaDidko/skill- | 861c08376e2bc9b9a5a44e3a8560324ee53ce2d0 | [
"Unlicense"
] | 18 | 2019-05-28T17:20:34.000Z | 2022-03-11T23:50:12.000Z | general/admin.py | VladaDidko/skill- | 861c08376e2bc9b9a5a44e3a8560324ee53ce2d0 | [
"Unlicense"
] | 3 | 2019-05-27T09:51:54.000Z | 2019-12-12T20:35:29.000Z | from django.contrib import admin
from blog.models import Category, Post
from users.models import Profile
# Register your models here.
admin.site.register(Category)
admin.site.register(Profile)
admin.site.register(Post) | 26.888889 | 32 | 0.826446 | 36 | 242 | 5.555556 | 0.416667 | 0.18 | 0.255 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.099174 | 242 | 9 | 33 | 26.888889 | 0.917431 | 0.107438 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.571429 | 0 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8606855bb23843453380234925e55fe9e43150f6 | 34 | py | Python | Session 7 - Python/Instructional Material/Class Examples/PyInstaller/TestBuild.py | dbowmans46/PracticalProgramming | e9fee468eee12625ce198d9a4f4c5ee735f5db90 | [
"MIT"
] | null | null | null | Session 7 - Python/Instructional Material/Class Examples/PyInstaller/TestBuild.py | dbowmans46/PracticalProgramming | e9fee468eee12625ce198d9a4f4c5ee735f5db90 | [
"MIT"
] | null | null | null | Session 7 - Python/Instructional Material/Class Examples/PyInstaller/TestBuild.py | dbowmans46/PracticalProgramming | e9fee468eee12625ce198d9a4f4c5ee735f5db90 | [
"MIT"
] | 1 | 2019-08-07T16:51:23.000Z | 2019-08-07T16:51:23.000Z | print("The build was successful")
| 17 | 33 | 0.764706 | 5 | 34 | 5.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 34 | 1 | 34 | 34 | 0.866667 | 0 | 0 | 0 | 0 | 0 | 0.705882 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
790af15b41bd54861264ce39f31388800d44dbd5 | 243 | py | Python | transformers_interpret/__init__.py | MichalMalyska/transformers-interpret | 878ec4b6928e2417a3ffe8499be52033938090f0 | [
"Apache-2.0"
] | 1 | 2021-07-06T21:07:49.000Z | 2021-07-06T21:07:49.000Z | transformers_interpret/__init__.py | MichalMalyska/transformers-interpret | 878ec4b6928e2417a3ffe8499be52033938090f0 | [
"Apache-2.0"
] | null | null | null | transformers_interpret/__init__.py | MichalMalyska/transformers-interpret | 878ec4b6928e2417a3ffe8499be52033938090f0 | [
"Apache-2.0"
] | null | null | null | from .attributions import Attributions, LIGAttributions
from .explainer import BaseExplainer
from .explainers.question_answering import QuestionAnsweringExplainer
from .explainers.sequence_classification import SequenceClassificationExplainer
| 48.6 | 79 | 0.901235 | 21 | 243 | 10.333333 | 0.619048 | 0.129032 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.069959 | 243 | 4 | 80 | 60.75 | 0.960177 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
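This __init__ flattens the package API so that callers import explainers from the top level. The package's documented usage is roughly the sketch below (build an explainer from a Hugging Face model/tokenizer pair, then call it on text); exact signatures may differ between versions, so treat the details as assumptions:

from transformers import AutoModelForSequenceClassification, AutoTokenizer
from transformers_interpret import SequenceClassificationExplainer

name = "distilbert-base-uncased-finetuned-sst-2-english"
model = AutoModelForSequenceClassification.from_pretrained(name)
tokenizer = AutoTokenizer.from_pretrained(name)

explainer = SequenceClassificationExplainer(model, tokenizer)
word_attributions = explainer("I love you")  # (token, attribution score) pairs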
f70cc5efd6ee30bf6e24e6413ae411d0d6938392 | 34 | py | Python | dataloader/__init__.py | sajith-rahim/transformer-classifier | 543562fc22a4ee3b224eaf44876449552026d2e5 | [
"Apache-2.0"
] | null | null | null | dataloader/__init__.py | sajith-rahim/transformer-classifier | 543562fc22a4ee3b224eaf44876449552026d2e5 | [
"Apache-2.0"
] | null | null | null | dataloader/__init__.py | sajith-rahim/transformer-classifier | 543562fc22a4ee3b224eaf44876449552026d2e5 | [
"Apache-2.0"
] | null | null | null | from .sentence_dataloader import * | 34 | 34 | 0.852941 | 4 | 34 | 7 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088235 | 34 | 1 | 34 | 34 | 0.903226 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f70e3047e3f418e139848f3764890f8c70ffc0e6 | 23 | py | Python | __init__.py | marcusschiesser/all-intraday | 8879251896074b0a527d75baa8389a071a81d058 | [
"MIT"
] | 20 | 2020-02-08T06:41:41.000Z | 2022-01-06T08:41:01.000Z | __init__.py | marcusschiesser/all-intraday | 8879251896074b0a527d75baa8389a071a81d058 | [
"MIT"
] | null | null | null | __init__.py | marcusschiesser/all-intraday | 8879251896074b0a527d75baa8389a071a81d058 | [
"MIT"
] | 5 | 2020-01-15T06:21:49.000Z | 2021-02-14T16:57:26.000Z | from .intraday import * | 23 | 23 | 0.782609 | 3 | 23 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.130435 | 23 | 1 | 23 | 23 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f70e8a5e36b7b6fa9004e87414745e7ec78d75f6 | 96 | py | Python | venv/lib/python3.8/site-packages/keyring/backends/SecretService.py | GiulianaPola/select_repeats | 17a0d053d4f874e42cf654dd142168c2ec8fbd11 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/keyring/backends/SecretService.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/keyring/backends/SecretService.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/b7/df/1e/7980259571f5a43b5ac0c36215dfc4b1485986d14af13b40a821ae930f | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.4375 | 0 | 96 | 1 | 96 | 96 | 0.458333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f72d0fdb931b1dcc84a293aeef05fc7e649e1c49 | 23 | py | Python | prepack/__init__.py | CargoCodes/PreparePack | 3d1d3623c0c86ab02a92a567ed954fb37e18b8fa | [
"MIT"
] | null | null | null | prepack/__init__.py | CargoCodes/PreparePack | 3d1d3623c0c86ab02a92a567ed954fb37e18b8fa | [
"MIT"
] | null | null | null | prepack/__init__.py | CargoCodes/PreparePack | 3d1d3623c0c86ab02a92a567ed954fb37e18b8fa | [
"MIT"
] | null | null | null | from .prepack import *
| 11.5 | 22 | 0.73913 | 3 | 23 | 5.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173913 | 23 | 1 | 23 | 23 | 0.894737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f730c43933f5c965f1163cbe92ea2e00c357ef48 | 115 | py | Python | glance/contrib/plugins/artifacts_sample/__init__.py | wkoathp/glance | eb0c47047ddc28371f546437118986ed904f41d3 | [
"Apache-2.0"
] | 3 | 2015-12-22T09:04:44.000Z | 2017-10-18T15:26:03.000Z | glance/contrib/plugins/artifacts_sample/__init__.py | wkoathp/glance | eb0c47047ddc28371f546437118986ed904f41d3 | [
"Apache-2.0"
] | null | null | null | glance/contrib/plugins/artifacts_sample/__init__.py | wkoathp/glance | eb0c47047ddc28371f546437118986ed904f41d3 | [
"Apache-2.0"
] | null | null | null | from v1 import artifact as art1
from v2 import artifact as art2
MY_ARTIFACT = [art1.MyArtifact, art2.MyArtifact]
| 19.166667 | 48 | 0.791304 | 18 | 115 | 5 | 0.555556 | 0.311111 | 0.355556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.061856 | 0.156522 | 115 | 5 | 49 | 23 | 0.865979 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
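from v1 import artifact is a Python 2 implicit relative import; under Python 3 it fails with ModuleNotFoundError unless v1 happens to be importable from sys.path. The Python 3 equivalent preserves the behavior with explicit relative imports:

from .v1 import artifact as art1
from .v2 import artifact as art2

MY_ARTIFACT = [art1.MyArtifact, art2.MyArtifact]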
f73cbf2bab99a7328feb35771cde3ea1764c3913 | 60 | py | Python | DIRE/neural_model/model/encoder.py | lukedram/DIRE | f2149bad5d655938bb682fddd33e6cd652f0bf4a | [
"MIT"
] | 43 | 2019-11-20T18:19:05.000Z | 2022-03-30T11:56:33.000Z | DIRE/neural_model/model/encoder.py | lukedram/DIRE | f2149bad5d655938bb682fddd33e6cd652f0bf4a | [
"MIT"
] | 8 | 2020-05-07T01:34:02.000Z | 2021-04-15T14:06:14.000Z | DIRE/neural_model/model/encoder.py | lukedram/DIRE | f2149bad5d655938bb682fddd33e6cd652f0bf4a | [
"MIT"
] | 15 | 2019-11-19T14:15:36.000Z | 2021-06-04T17:54:54.000Z | import torch.nn as nn
class Encoder(nn.Module):
pass
| 8.571429 | 25 | 0.683333 | 10 | 60 | 4.1 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.233333 | 60 | 6 | 26 | 10 | 0.891304 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
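The Encoder here is only a stub. For reference, a minimal nn.Module subclass fills in __init__ and forward; the LSTM layer and the sizes below are purely illustrative and not DIRE's actual architecture:

import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        self.rnn = nn.LSTM(input_size, hidden_size, batch_first=True)

    def forward(self, x):
        # x: (batch, seq_len, input_size); returns per-step outputs and final hidden state
        output, (h_n, c_n) = self.rnn(x)
        return output, h_n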
f759c5e8cfc826a7e8950073f625ccc8174cb646 | 187 | py | Python | imu_rabbitmq/imu/__init__.py | apl-ocean-engineering/sensor-test-frame-controller | 5e1457e478d6b66eca19000d7e8877ebb96c51d4 | [
"MIT"
] | null | null | null | imu_rabbitmq/imu/__init__.py | apl-ocean-engineering/sensor-test-frame-controller | 5e1457e478d6b66eca19000d7e8877ebb96c51d4 | [
"MIT"
] | 12 | 2018-04-20T16:31:38.000Z | 2018-05-18T22:41:08.000Z | imu_rabbitmq/imu/__init__.py | apl-ocean-engineering/sensor-test-frame-controller | 5e1457e478d6b66eca19000d7e8877ebb96c51d4 | [
"MIT"
] | 1 | 2018-03-09T05:44:59.000Z | 2018-03-09T05:44:59.000Z | from __future__ import absolute_import, division, print_function
from .version import __version__
from .imu import *
from .server import *
from .client import *
| 23.375 | 64 | 0.764706 | 24 | 187 | 5.541667 | 0.458333 | 0.225564 | 0.195489 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.176471 | 187 | 7 | 65 | 26.714286 | 0.863636 | 0 | 0 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0.166667 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
f75b65275eed78e8505348966f747c7e76784a06 | 8,567 | py | Python | conductor/conductor/tests/unit/api/controller/v1/test_plans.py | onap/optf-has | dd06e2675aedd7ae6344f2f51e70bbd468f36ce5 | [
"Apache-2.0"
] | 4 | 2019-02-14T19:18:09.000Z | 2019-10-21T17:17:59.000Z | conductor/conductor/tests/unit/api/controller/v1/test_plans.py | onap/optf-has | dd06e2675aedd7ae6344f2f51e70bbd468f36ce5 | [
"Apache-2.0"
] | null | null | null | conductor/conductor/tests/unit/api/controller/v1/test_plans.py | onap/optf-has | dd06e2675aedd7ae6344f2f51e70bbd468f36ce5 | [
"Apache-2.0"
] | 4 | 2019-05-09T07:05:54.000Z | 2020-11-20T05:56:47.000Z | #
# -------------------------------------------------------------------------
# Copyright (c) 2018 Intel Corporation Intellectual Property
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# -------------------------------------------------------------------------
#
"""Test case for PlansController"""
import copy
import json
import uuid
import mock
from conductor.api.controllers.v1 import plans
from conductor.tests.unit.api import base_api
from oslo_serialization import jsonutils
class TestPlansController(base_api.BaseApiTest):
def test_index_options(self):
actual_response = self.app.options('/v1/plans', expect_errors=True, )
self.assertEqual(204, actual_response.status_int)
self.assertEqual("GET,POST", actual_response.headers['Allow'])
@mock.patch('conductor.api.adapters.aaf.aaf_authentication.authenticate')
@mock.patch.object(plans.LOG, 'error')
@mock.patch.object(plans.LOG, 'debug')
@mock.patch.object(plans.LOG, 'warning')
@mock.patch.object(plans.LOG, 'info')
@mock.patch('conductor.common.music.messaging.component.RPCClient.call')
def test_index_get(self, rpc_mock, info_mock, warning_mock, debug_mock,
error_mock, aaf_mock):
req_json_file = './conductor/tests/unit/api/controller/v1/plans.json'
params = json.loads(open(req_json_file).read())
plan_id = str(uuid.uuid4())
params['id'] = plan_id
rpc_mock.return_value = {'plans': [params]}
aaf_mock.return_value = True
actual_response = self.app.get('/v1/plans', extra_environ=self.extra_environment)
self.assertEqual(200, actual_response.status_int)
@mock.patch('conductor.api.adapters.aaf.aaf_authentication.authenticate')
@mock.patch.object(plans.LOG, 'error')
@mock.patch.object(plans.LOG, 'debug')
@mock.patch.object(plans.LOG, 'warning')
@mock.patch.object(plans.LOG, 'info')
@mock.patch('conductor.common.music.messaging.component.RPCClient.call')
def test_index_post_error(self, rpc_mock, info_mock, warning_mock,
debug_mock,
error_mock,
aaf_mock):
req_json_file = './conductor/tests/unit/api/controller/v1/plans.json'
params = jsonutils.dumps(json.loads(open(req_json_file).read()))
rpc_mock.return_value = {}
aaf_mock.return_value = True
response = self.app.post('/v1/plans', params=params,
expect_errors=True, extra_environ=self.extra_environment)
self.assertEqual(500, response.status_int)
@mock.patch('conductor.api.adapters.aaf.aaf_authentication.authenticate')
@mock.patch.object(plans.LOG, 'error')
@mock.patch.object(plans.LOG, 'debug')
@mock.patch.object(plans.LOG, 'warning')
@mock.patch.object(plans.LOG, 'info')
@mock.patch('conductor.common.music.messaging.component.RPCClient.call')
def test_index_post_success(self, rpc_mock, info_mock, warning_mock,
debug_mock,
error_mock,
aaf_mock):
req_json_file = './conductor/tests/unit/api/controller/v1/plans.json'
params = json.loads(open(req_json_file).read())
mock_params = copy.deepcopy(params)
plan_id = str(uuid.uuid4())
mock_params['id'] = plan_id
rpc_mock.return_value = {'plan': mock_params}
aaf_mock.return_value = True
params = json.dumps(params)
response = self.app.post('/v1/plans', params=params,
expect_errors=True, extra_environ=self.extra_environment)
self.assertEqual(201, response.status_int)
def test_index_httpmethod_notallowed(self):
actual_response = self.app.put('/v1/plans', expect_errors=True)
self.assertEqual(405, actual_response.status_int)
actual_response = self.app.patch('/v1/plans', expect_errors=True)
self.assertEqual(405, actual_response.status_int)
class TestPlansItemController(base_api.BaseApiTest):
@mock.patch('conductor.api.adapters.aaf.aaf_authentication.authenticate')
@mock.patch('conductor.common.music.messaging.component.RPCClient.call')
def test_index_options(self, rpc_mock, aaf_mock):
req_json_file = './conductor/tests/unit/api/controller/v1/plans.json'
params = json.loads(open(req_json_file).read())
plan_id = str(uuid.uuid4())
params['id'] = plan_id
rpc_mock.return_value = {'plans': [params]}
aaf_mock.return_value = True
url = '/v1/plans/' + plan_id
print(url)
actual_response = self.app.options(url=url, expect_errors=True, extra_environ=self.extra_environment)
self.assertEqual(204, actual_response.status_int)
self.assertEqual("GET,DELETE", actual_response.headers['Allow'])
@mock.patch('conductor.api.adapters.aaf.aaf_authentication.authenticate')
@mock.patch('conductor.common.music.messaging.component.RPCClient.call')
def test_index_httpmethod_notallowed(self, rpc_mock, aaf_mock):
req_json_file = './conductor/tests/unit/api/controller/v1/plans.json'
params = json.loads(open(req_json_file).read())
plan_id = str(uuid.uuid4())
params['id'] = plan_id
rpc_mock.return_value = {'plans': [params]}
aaf_mock.return_value = True
url = '/v1/plans/' + plan_id
actual_response = self.app.put(url=url, expect_errors=True, extra_environ=self.extra_environment)
self.assertEqual(405, actual_response.status_int)
actual_response = self.app.patch(url=url, expect_errors=True, extra_environ=self.extra_environment)
self.assertEqual(405, actual_response.status_int)
actual_response = self.app.post(url=url, expect_errors=True, extra_environ=self.extra_environment)
self.assertEqual(405, actual_response.status_int)
@mock.patch('conductor.api.adapters.aaf.aaf_authentication.authenticate')
@mock.patch('conductor.common.music.messaging.component.RPCClient.call')
def test_index_get(self, rpc_mock, aaf_mock):
req_json_file = './conductor/tests/unit/api/controller/v1/plans.json'
params = json.loads(open(req_json_file).read())
plan_id = str(uuid.uuid4())
params['id'] = plan_id
expected_response = {'plans': [params]}
rpc_mock.return_value = {'plans': [params]}
aaf_mock.return_value = True
url = '/v1/plans/' + plan_id
actual_response = self.app.get(url=url, expect_errors=True, extra_environ=self.extra_environment)
self.assertEqual(200, actual_response.status_int)
self.assertJsonEqual(expected_response,
json.loads(actual_response.body.decode()))
@mock.patch('conductor.api.adapters.aaf.aaf_authentication.authenticate')
@mock.patch('conductor.common.music.messaging.component.RPCClient.call')
def test_index_get_non_exist(self, rpc_mock, aaf_mock):
rpc_mock.return_value = {'plans': []}
aaf_mock.return_value = True
plan_id = str(uuid.uuid4())
url = '/v1/plans/' + plan_id
actual_response = self.app.get(url=url, expect_errors=True, extra_environ=self.extra_environment)
self.assertEqual(404, actual_response.status_int)
@mock.patch('conductor.api.adapters.aaf.aaf_authentication.authenticate')
@mock.patch('conductor.common.music.messaging.component.RPCClient.call')
def test_index_delete(self, rpc_mock, aaf_mock):
req_json_file = './conductor/tests/unit/api/controller/v1/plans.json'
params = json.loads(open(req_json_file).read())
plan_id = str(uuid.uuid4())
params['id'] = plan_id
rpc_mock.return_value = {'plans': [params]}
aaf_mock.return_value = True
url = '/v1/plans/' + plan_id
actual_response = self.app.delete(url=url, expect_errors=True, extra_environ=self.extra_environment)
self.assertEqual(204, actual_response.status_int)
| 48.954286 | 109 | 0.672347 | 1,078 | 8,567 | 5.140074 | 0.148423 | 0.045479 | 0.051976 | 0.043313 | 0.818625 | 0.771341 | 0.756181 | 0.746977 | 0.741202 | 0.741202 | 0 | 0.010532 | 0.190965 | 8,567 | 174 | 110 | 49.235632 | 0.788919 | 0.09093 | 0 | 0.673913 | 0 | 0 | 0.196137 | 0.164456 | 0 | 0 | 0 | 0 | 0.115942 | 1 | 0.072464 | false | 0 | 0.050725 | 0 | 0.137681 | 0.007246 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
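The tests above stack several @mock.patch decorators per method. Decorators apply bottom-up, so the mock produced by the bottom-most patch arrives as the first argument after self, which is why rpc_mock comes first and aaf_mock last in the signatures. The ordering rule in isolation:

import os
import unittest
from unittest import mock

class OrderDemo(unittest.TestCase):
    @mock.patch('os.getcwd')   # outermost decorator -> last mock argument
    @mock.patch('os.listdir')  # innermost decorator -> first mock argument
    def test_order(self, listdir_mock, getcwd_mock):
        listdir_mock.return_value = []
        getcwd_mock.return_value = '/tmp'
        self.assertEqual(os.listdir('.'), [])
        self.assertEqual(os.getcwd(), '/tmp')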
f7654a21f50b9c5c27903f437e0f196c100a634b | 16,307 | py | Python | tests/io/test_access_groups_v2.py | mitsuo0114/pyTenable | 304953b1126d51c334a551199f3da820f4cc5b46 | [
"MIT"
] | null | null | null | tests/io/test_access_groups_v2.py | mitsuo0114/pyTenable | 304953b1126d51c334a551199f3da820f4cc5b46 | [
"MIT"
] | null | null | null | tests/io/test_access_groups_v2.py | mitsuo0114/pyTenable | 304953b1126d51c334a551199f3da820f4cc5b46 | [
"MIT"
] | null | null | null | '''
test access_groups
'''
import uuid
import pytest
from tenable.errors import UnexpectedValueError, APIError
from tests.checker import check
@pytest.fixture(name='rules')
def fixture_rules():
'''
Fixture to return access_group rules structure
'''
return [('ipv4', 'eq', ['192.168.0.0/24'])]
@pytest.fixture(name='agroup')
def fixture_agroup(request, api, vcr, rules):
'''
Fixture to create access_group
'''
with vcr.use_cassette('test_access_groups_v2_create_success'):
group = api.access_groups_v2.create('Example', rules)
def teardown():
'''
cleanup function to delete access_group
'''
try:
with vcr.use_cassette('test_access_groups_v2_delete_success'):
api.access_groups_v2.delete(group['id'])
except APIError:
pass
request.addfinalizer(teardown)
return group
def test_access_group_v2_principal_constructor_type_typeerror(api):
'''
test to raise exception when type of type param does not match the expected type.
'''
with pytest.raises(TypeError):
getattr(api.access_groups_v2, '_principal_constructor')([(1, 'something')])
def test_access_group_v2_principal_constructor_type_unexpectedvalueerror(api):
'''
test to raise exception when type param value does not match the choices.
'''
with pytest.raises(UnexpectedValueError):
getattr(api.access_groups_v2, '_principal_constructor')([('something', 'something')])
def test_access_group_v2_principal_constructor_id_typeerror(api):
'''
test to raise exception when type of id param does not match the expected type.
'''
with pytest.raises(TypeError):
getattr(api.access_groups_v2, '_principal_constructor')([('user', 1)])
def test_access_group_v2_principal_constructor_permission_typeerror(api):
'''
test to raise exception when type of permissions param does not match the expected type.
'''
with pytest.raises(TypeError):
getattr(api.access_groups_v2, '_principal_constructor')([('user', str(uuid.uuid4()), 1)])
def test_access_group_v2_principal_constructor_permission_unexpectedvalueerror(api):
'''
test to raise exception when permissions param value does not match the choices.
'''
with pytest.raises(UnexpectedValueError):
getattr(api.access_groups_v2, '_principal_constructor')([
('user', str(uuid.uuid4()), ['nope'])])
def test_access_group_v2_principal_constructor_dict_type_typeerror(api):
'''
test to raise exception when type of type param does not match the expected type.
'''
with pytest.raises(TypeError):
getattr(api.access_groups_v2, '_principal_constructor')([{
'type': 1,
'principal_id': str(uuid.uuid4()),
'principal_name': 'test@test.com'
}])
def test_access_group_v2_principal_constructor_dict_type_unexpectedvalueerror(api):
'''
test to raise exception when type param value does not match the choices.
'''
with pytest.raises(UnexpectedValueError):
getattr(api.access_groups_v2, '_principal_constructor')([{
'type': 'something',
'principal_id': str(uuid.uuid4()),
'principal_name': 'test@test.com'
}])
def test_access_group_v2_principal_constructor_dict_id_typeerror(api):
'''
test to raise exception when type of id param does not match the expected type.
'''
with pytest.raises(TypeError):
getattr(api.access_groups_v2, '_principal_constructor')([{
'type': 'user',
'principal_id': 1,
'principal_name': 'test@test.com'
}])
def test_access_group_v2_principal_constructor_dict_name_typeerror(api):
'''
test to raise exception when type of name param does not match the expected type.
'''
with pytest.raises(TypeError):
getattr(api.access_groups_v2, '_principal_constructor')([{
'type': 'user',
'principal_id': str(uuid.uuid4()),
'principal_name': 1
}])
def test_access_group_v2_principal_constructor_dict_permissions_typeerror(api):
'''
test to raise exception when type of permissions param does not match the expected type.
'''
with pytest.raises(TypeError):
getattr(api.access_groups_v2, '_principal_constructor')([{
'type': 'user',
'principal_id': str(uuid.uuid4()),
'permissions': 1
}])
def test_access_group_v2_principal_constructor_dict_permissions_unexpectedvalueerror(api):
'''
test to raise exception when permissions param value does not match the choices.
'''
with pytest.raises(UnexpectedValueError):
getattr(api.access_groups_v2, '_principal_constructor')([{
'type': 'user',
'principal_id': str(uuid.uuid4()),
'permissions': ['Nothing']
}])
def test_access_group_v2_principal_constructor_tuple_pass(api):
'''
test to parse tuple type principals
'''
assert getattr(api.access_groups_v2, '_principal_constructor')([
('user', 'test@test.com', ['can_view'])
]) == [{'permissions': ['CAN_VIEW'], 'type': 'user', 'principal_name': 'test@test.com'}]
user_id = str(uuid.uuid4())
assert getattr(api.access_groups_v2, '_principal_constructor')([
('user', user_id)
]) == [{'permissions': ['CAN_VIEW'], 'type': 'user', 'principal_id': user_id}]
def test_access_group_v2_principal_constructor_dict_pass(api):
'''
test to parse dict type principals
'''
assert getattr(api.access_groups_v2, '_principal_constructor')([
{'type': 'user', 'principal_name': 'test@test.com', 'permissions': ['CAN_VIEW']}
]) == [{'permissions': ['CAN_VIEW'], 'type': 'user', 'principal_name': 'test@test.com'}]
user_id = str(uuid.uuid4())
assert getattr(api.access_groups_v2, '_principal_constructor')([
{'type': 'user', 'principal_id': user_id}
]) == [{'permissions': ['CAN_VIEW'], 'type': 'user', 'principal_id': user_id}]
def test_access_group_v2_list_clean_typeerror(api):
'''
test to raise exception when type of items param does not match the expected type.
'''
with pytest.raises(TypeError):
getattr(api.access_groups_v2, '_list_clean')(items='nope')
def test_access_group_v2_list_clean_pass(api):
'''
test to remove duplicates from list
'''
resp = getattr(api.access_groups_v2, '_list_clean')(['one', 'two', 'one'])
assert sorted(resp) == ['one', 'two']
@pytest.mark.vcr()
def test_access_groups_v2_create_name_typeerror(api, rules):
'''
test to raise exception when type of name param does not match the expected type.
'''
with pytest.raises(TypeError):
api.access_groups_v2.create(1, rules)
@pytest.mark.vcr()
def test_access_groups_v2_create_all_users_typeerror(api, rules):
'''
test to raise exception when type of all_users param does not match the expected type.
'''
with pytest.raises(TypeError):
api.access_groups_v2.create('Test', rules, all_users='nope')
@pytest.mark.vcr()
def test_access_groups_v2_create_access_group_type_typeerror(api, rules):
'''
test to raise exception when type of access_group_type param does not match the expected type.
'''
with pytest.raises(TypeError):
api.access_groups_v2.create('Test', rules, access_group_type=1)
@pytest.mark.vcr()
def test_access_groups_v2_create_access_group_type_unexpectedvalueerror(api, rules):
'''
test to raise exception when access_group_type param value does not match the choices.
'''
with pytest.raises(UnexpectedValueError):
api.access_groups_v2.create('Test', rules, access_group_type='nope')
@pytest.mark.vcr()
def test_access_groups_v2_create_principals_typeerror(api, rules):
'''
test to raise exception when type of principals param does not match the expected type.
'''
with pytest.raises(TypeError):
api.access_groups_v2.create('Test', rules, principals='nope')
@pytest.mark.vcr()
def test_access_groups_v2_create_success(agroup):
'''
test to create access group
'''
assert isinstance(agroup, dict)
check(agroup, 'created_at', 'datetime')
check(agroup, 'updated_at', 'datetime')
check(agroup, 'id', 'uuid')
check(agroup, 'name', str)
check(agroup, 'all_assets', bool)
check(agroup, 'version', int)
check(agroup, 'status', str)
check(agroup, 'access_group_type', str)
check(agroup, 'rules', list)
for rule in agroup['rules']:
check(rule, 'type', str)
check(rule, 'operator', str)
check(rule, 'terms', list)
check(agroup, 'principals', list)
for principal in agroup['principals']:
check(principal, 'type', str)
check(principal, 'principal_id', 'uuid')
check(principal, 'principal_name', str)
check(principal, 'permissions', list)
check(agroup, 'created_by_uuid', 'uuid')
check(agroup, 'updated_by_uuid', 'uuid')
check(agroup, 'created_by_name', str)
check(agroup, 'updated_by_name', str)
check(agroup, 'processing_percent_complete', int)
@pytest.mark.vcr()
def test_access_groups_v2_delete_success(api, agroup):
'''
test to delete access group
'''
api.access_groups_v2.delete(agroup['id'])
@pytest.mark.vcr()
def test_access_group_v2_edit_id_typeerror(api):
'''
test to raise exception when type of group_id param does not match the expected type.
'''
with pytest.raises(TypeError):
api.access_groups_v2.edit(1)
@pytest.mark.vcr()
def test_access_group_v2_edit_id_unexpectedvalueerror(api):
'''
test to raise exception when group_id param value does not match the choices.
'''
with pytest.raises(UnexpectedValueError):
api.access_groups_v2.edit('something')
@pytest.mark.vcr()
def test_access_group_v2_edit_success(api, agroup):
'''
test to edit access group
'''
resp = api.access_groups_v2.edit(agroup['id'], name='Updated', all_users=False)
assert isinstance(resp, dict)
check(resp, 'created_at', 'datetime')
check(resp, 'updated_at', 'datetime')
check(resp, 'id', 'uuid')
check(resp, 'name', str)
check(resp, 'all_assets', bool)
check(resp, 'version', int)
check(resp, 'status', str)
check(resp, 'access_group_type', str)
check(resp, 'rules', list)
for rule in resp['rules']:
check(rule, 'type', str)
check(rule, 'operator', str)
check(rule, 'terms', list)
check(resp, 'principals', list)
for principal in resp['principals']:
check(principal, 'type', str)
check(principal, 'principal_id', 'uuid')
check(principal, 'principal_name', str)
check(principal, 'permissions', list)
check(resp, 'created_by_uuid', 'uuid')
check(resp, 'updated_by_uuid', 'uuid')
check(resp, 'created_by_name', str)
check(resp, 'updated_by_name', str)
check(resp, 'processing_percent_complete', int)
@pytest.mark.vcr()
def test_access_groups_v2_details_success(api):
'''
test to get details of specific access group
'''
group = api.access_groups_v2.create('Test', [('ipv4', 'eq', ['192.168.0.0/24'])])
resp = api.access_groups_v2.details(group['id'])
assert isinstance(resp, dict)
check(resp, 'created_at', 'datetime')
check(resp, 'updated_at', 'datetime')
check(resp, 'id', 'uuid')
check(resp, 'name', str)
check(resp, 'all_assets', bool)
check(resp, 'version', int)
check(resp, 'status', str)
check(resp, 'access_group_type', str)
check(resp, 'rules', list)
for rule in resp['rules']:
check(rule, 'type', str)
check(rule, 'operator', str)
check(rule, 'terms', list)
check(resp, 'principals', list)
for principal in group['principals']:
check(principal, 'type', str)
check(principal, 'principal_id', 'uuid')
check(principal, 'principal_name', str)
check(principal, 'permissions', list)
check(resp, 'created_by_uuid', 'uuid')
check(resp, 'updated_by_uuid', 'uuid')
check(resp, 'created_by_name', str)
check(resp, 'updated_by_name', str)
check(resp, 'processing_percent_complete', int)
api.access_groups_v2.delete(group['id'])
@pytest.mark.vcr()
def test_access_groups_v2_list_offset_typeerror(api):
'''
test to raise exception when type of offset param does not match the expected type.
'''
with pytest.raises(TypeError):
api.access_groups_v2.list(offset='nope')
@pytest.mark.vcr()
def test_access_groups_v2_list_limit_typeerror(api):
'''
test to raise exception when type of limit param does not match the expected type.
'''
with pytest.raises(TypeError):
api.access_groups_v2.list(limit='nope')
@pytest.mark.vcr()
def test_access_groups_v2_list_sort_field_typeerror(api):
'''
test to raise exception when type of sort field_name param does not match the expected type.
'''
with pytest.raises(TypeError):
api.access_groups_v2.list(sort=((1, 'asc'),))
@pytest.mark.vcr()
def test_access_groups_v2_list_sort_direction_typeerror(api):
'''
test to raise exception when type of sort field_direction param
does not match the expected type.
'''
with pytest.raises(TypeError):
api.access_groups_v2.list(sort=(('uuid', 1),))
@pytest.mark.vcr()
def test_access_groups_v2_list_sort_direction_unexpectedvalue(api):
'''
    test to raise exception when sort field_direction param value does not match the choices.
'''
with pytest.raises(UnexpectedValueError):
api.access_groups_v2.list(sort=(('uuid', 'nope'),))
@pytest.mark.vcr()
def test_access_groups_v2_list_filter_name_typeerror(api):
'''
test to raise exception when type of filter_name param does not match the expected type.
'''
with pytest.raises(TypeError):
api.access_groups_v2.list((1, 'match', 'win'))
@pytest.mark.vcr()
def test_access_groups_v2_list_filter_operator_typeerror(api):
'''
test to raise exception when type of filter_operator param does not match the expected type.
'''
with pytest.raises(TypeError):
api.access_groups_v2.list(('name', 1, 'win'))
@pytest.mark.vcr()
def test_access_groups_v2_list_filter_value_typeerror(api):
'''
test to raise exception when type of filter_value param does not match the expected type.
'''
with pytest.raises(TypeError):
api.access_groups_v2.list(('name', 'match', 1))
@pytest.mark.vcr()
def test_access_groups_v2_list_filter_type_typeerror(api):
'''
test to raise exception when type of filter_type param does not match the expected type.
'''
with pytest.raises(TypeError):
api.access_groups_v2.list(filter_type=1)
@pytest.mark.vcr()
def test_access_groups_v2_list_wildcard_typeerror(api):
'''
test to raise exception when type of wildcard param does not match the expected type.
'''
with pytest.raises(TypeError):
api.access_groups_v2.list(wildcard=1)
@pytest.mark.vcr()
def test_access_groups_v2_list_wildcard_fields_typeerror(api):
'''
test to raise exception when type of wildcard_fields param does not match the expected type.
'''
with pytest.raises(TypeError):
api.access_groups_v2.list(wildcard_fields='nope')
@pytest.mark.vcr()
def test_access_groups_v2_list(api):
'''
test to get list of access groups
'''
count = 0
access_groups = api.access_groups_v2.list()
for group in access_groups:
count += 1
assert isinstance(group, dict)
check(group, 'created_at', 'datetime')
check(group, 'updated_at', 'datetime')
check(group, 'id', 'uuid')
check(group, 'name', str)
check(group, 'all_assets', bool)
check(group, 'version', int)
check(group, 'status', str)
check(group, 'access_group_type', str)
#check(group, 'created_by_uuid', 'uuid') # Will not return for default group
#check(group, 'updated_by_uuid', 'uuid') # Will not return for default group
check(group, 'created_by_name', str)
check(group, 'updated_by_name', str)
check(group, 'processing_percent_complete', int)
assert count == access_groups.total
| 36.237778 | 98 | 0.678175 | 2,113 | 16,307 | 4.991481 | 0.06389 | 0.079644 | 0.08628 | 0.069309 | 0.857685 | 0.815113 | 0.802503 | 0.767706 | 0.74552 | 0.704276 | 0 | 0.010142 | 0.195805 | 16,307 | 449 | 99 | 36.318486 | 0.794113 | 0.191022 | 0 | 0.517986 | 0 | 0 | 0.171181 | 0.040625 | 0 | 0 | 0 | 0 | 0.035971 | 1 | 0.147482 | false | 0.014388 | 0.014388 | 0 | 0.169065 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
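The agroup fixture near the top of this file shows pytest's create-then-finalize pattern: build the resource, register a teardown with request.addfinalizer, and swallow APIError in case the test already deleted the object. The same pattern in isolation, with the remote resource reduced to a dict:

import pytest

@pytest.fixture
def resource(request):
    handle = {'id': 123}  # stand-in for an object created against a real API

    def teardown():
        # in the real fixture this deletes via the API and ignores APIError
        handle.clear()

    request.addfinalizer(teardown)
    return handle

def test_uses_resource(resource):
    assert resource['id'] == 123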
f78c5b69480da8b4fa186cf803a83d4161541a1a | 40,621 | py | Python | RO-VIM-openstack/osm_rovim_openstack/tests/test_vimconn_openstack.py | TCSOSM-20/RO | 5ad826aba849d0e90b4d8661a5ce872d02d7dd52 | [
"Apache-2.0"
] | null | null | null | RO-VIM-openstack/osm_rovim_openstack/tests/test_vimconn_openstack.py | TCSOSM-20/RO | 5ad826aba849d0e90b4d8661a5ce872d02d7dd52 | [
"Apache-2.0"
] | null | null | null | RO-VIM-openstack/osm_rovim_openstack/tests/test_vimconn_openstack.py | TCSOSM-20/RO | 5ad826aba849d0e90b4d8661a5ce872d02d7dd52 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
##
# Copyright 2017 Intel Corporation.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# For those usages not covered by the Apache License, Version 2.0 please
# contact with: nfvlabs@tid.es
##
"""
This module contains unit tests for the OpenStack VIM connector
Run this directly with python2 or python3.
"""
import copy
import unittest
import mock
from neutronclient.v2_0.client import Client
from osm_ro_plugin import vimconn
from osm_rovim_openstack.vimconn_openstack import vimconnector
__author__ = "Igor D.C."
__date__ = "$23-aug-2017 23:59:59$"
class TestSfcOperations(unittest.TestCase):
def setUp(self):
# instantiate dummy VIM connector so we can test it
self.vimconn = vimconnector(
'123', 'openstackvim', '456', '789', 'http://dummy.url', None,
'user', 'pass')
def _test_new_sfi(self, create_sfc_port_pair, sfc_encap,
ingress_ports=['5311c75d-d718-4369-bbda-cdcc6da60fcc'],
egress_ports=['230cdf1b-de37-4891-bc07-f9010cf1f967']):
# input to VIM connector
name = 'osm_sfi'
# + ingress_ports
# + egress_ports
        # correlation uses NSH; MPLS was only a workaround before OpenStack Queens
correlation = 'nsh'
if sfc_encap is not None:
if not sfc_encap:
correlation = None
# what OpenStack is assumed to respond (patch OpenStack's return value)
dict_from_neutron = {'port_pair': {
'id': '3d7ddc13-923c-4332-971e-708ed82902ce',
'name': name,
'description': '',
'tenant_id': '130b1e97-b0f1-40a8-8804-b6ad9b8c3e0c',
'project_id': '130b1e97-b0f1-40a8-8804-b6ad9b8c3e0c',
'ingress': ingress_ports[0] if len(ingress_ports) else None,
'egress': egress_ports[0] if len(egress_ports) else None,
'service_function_parameters': {'correlation': correlation}
}}
create_sfc_port_pair.return_value = dict_from_neutron
# what the VIM connector is expected to
# send to OpenStack based on the input
dict_to_neutron = {'port_pair': {
'name': name,
'ingress': '5311c75d-d718-4369-bbda-cdcc6da60fcc',
'egress': '230cdf1b-de37-4891-bc07-f9010cf1f967',
'service_function_parameters': {'correlation': correlation}
}}
# call the VIM connector
if sfc_encap is None:
result = self.vimconn.new_sfi(name, ingress_ports, egress_ports)
else:
result = self.vimconn.new_sfi(name, ingress_ports, egress_ports,
sfc_encap)
# assert that the VIM connector made the expected call to OpenStack
create_sfc_port_pair.assert_called_with(dict_to_neutron)
# assert that the VIM connector had the expected result / return value
self.assertEqual(result, dict_from_neutron['port_pair']['id'])
def _test_new_sf(self, create_sfc_port_pair_group):
# input to VIM connector
name = 'osm_sf'
instances = ['bbd01220-cf72-41f2-9e70-0669c2e5c4cd',
'12ba215e-3987-4892-bd3a-d0fd91eecf98',
'e25a7c79-14c8-469a-9ae1-f601c9371ffd']
# what OpenStack is assumed to respond (patch OpenStack's return value)
dict_from_neutron = {'port_pair_group': {
'id': '3d7ddc13-923c-4332-971e-708ed82902ce',
'name': name,
'description': '',
'tenant_id': '130b1e97-b0f1-40a8-8804-b6ad9b8c3e0c',
'project_id': '130b1e97-b0f1-40a8-8804-b6ad9b8c3e0c',
'port_pairs': instances,
'group_id': 1,
'port_pair_group_parameters': {
"lb_fields": [],
"ppg_n_tuple_mapping": {
"ingress_n_tuple": {},
"egress_n_tuple": {}
}}
}}
create_sfc_port_pair_group.return_value = dict_from_neutron
# what the VIM connector is expected to
# send to OpenStack based on the input
dict_to_neutron = {'port_pair_group': {
'name': name,
'port_pairs': ['bbd01220-cf72-41f2-9e70-0669c2e5c4cd',
'12ba215e-3987-4892-bd3a-d0fd91eecf98',
'e25a7c79-14c8-469a-9ae1-f601c9371ffd']
}}
# call the VIM connector
result = self.vimconn.new_sf(name, instances)
# assert that the VIM connector made the expected call to OpenStack
create_sfc_port_pair_group.assert_called_with(dict_to_neutron)
# assert that the VIM connector had the expected result / return value
self.assertEqual(result, dict_from_neutron['port_pair_group']['id'])
def _test_new_sfp(self, create_sfc_port_chain, sfc_encap, spi):
# input to VIM connector
name = 'osm_sfp'
classifications = ['2bd2a2e5-c5fd-4eac-a297-d5e255c35c19',
'00f23389-bdfa-43c2-8b16-5815f2582fa8']
sfs = ['2314daec-c262-414a-86e3-69bb6fa5bc16',
'd8bfdb5d-195e-4f34-81aa-6135705317df']
        # correlation uses NSH; MPLS was only a workaround before OpenStack Queens
correlation = 'nsh'
chain_id = 33
if spi:
chain_id = spi
# what OpenStack is assumed to respond (patch OpenStack's return value)
dict_from_neutron = {'port_chain': {
'id': '5bc05721-079b-4b6e-a235-47cac331cbb6',
'name': name,
'description': '',
'tenant_id': '130b1e97-b0f1-40a8-8804-b6ad9b8c3e0c',
'project_id': '130b1e97-b0f1-40a8-8804-b6ad9b8c3e0c',
'chain_id': chain_id,
'flow_classifiers': classifications,
'port_pair_groups': sfs,
'chain_parameters': {'correlation': correlation}
}}
create_sfc_port_chain.return_value = dict_from_neutron
# what the VIM connector is expected to
# send to OpenStack based on the input
dict_to_neutron = {'port_chain': {
'name': name,
'flow_classifiers': ['2bd2a2e5-c5fd-4eac-a297-d5e255c35c19',
'00f23389-bdfa-43c2-8b16-5815f2582fa8'],
'port_pair_groups': ['2314daec-c262-414a-86e3-69bb6fa5bc16',
'd8bfdb5d-195e-4f34-81aa-6135705317df'],
'chain_parameters': {'correlation': correlation}
}}
if spi:
dict_to_neutron['port_chain']['chain_id'] = spi
# call the VIM connector
if sfc_encap is None:
if spi is None:
result = self.vimconn.new_sfp(name, classifications, sfs)
else:
result = self.vimconn.new_sfp(name, classifications, sfs,
spi=spi)
else:
if spi is None:
result = self.vimconn.new_sfp(name, classifications, sfs,
sfc_encap)
else:
result = self.vimconn.new_sfp(name, classifications, sfs,
sfc_encap, spi)
# assert that the VIM connector made the expected call to OpenStack
create_sfc_port_chain.assert_called_with(dict_to_neutron)
# assert that the VIM connector had the expected result / return value
self.assertEqual(result, dict_from_neutron['port_chain']['id'])
def _test_new_classification(self, create_sfc_flow_classifier, ctype):
# input to VIM connector
name = 'osm_classification'
definition = {'ethertype': 'IPv4',
'logical_source_port':
'aaab0ab0-1452-4636-bb3b-11dca833fa2b',
'protocol': 'tcp',
'source_ip_prefix': '192.168.2.0/24',
'source_port_range_max': 99,
'source_port_range_min': 50}
# what OpenStack is assumed to respond (patch OpenStack's return value)
dict_from_neutron = {'flow_classifier': copy.copy(definition)}
dict_from_neutron['flow_classifier'][
'id'] = '7735ec2c-fddf-4130-9712-32ed2ab6a372'
dict_from_neutron['flow_classifier']['name'] = name
dict_from_neutron['flow_classifier']['description'] = ''
dict_from_neutron['flow_classifier'][
'tenant_id'] = '130b1e97-b0f1-40a8-8804-b6ad9b8c3e0c'
dict_from_neutron['flow_classifier'][
'project_id'] = '130b1e97-b0f1-40a8-8804-b6ad9b8c3e0c'
create_sfc_flow_classifier.return_value = dict_from_neutron
# what the VIM connector is expected to
# send to OpenStack based on the input
dict_to_neutron = {'flow_classifier': copy.copy(definition)}
dict_to_neutron['flow_classifier']['name'] = 'osm_classification'
# call the VIM connector
result = self.vimconn.new_classification(name, ctype, definition)
# assert that the VIM connector made the expected call to OpenStack
create_sfc_flow_classifier.assert_called_with(dict_to_neutron)
# assert that the VIM connector had the expected result / return value
self.assertEqual(result, dict_from_neutron['flow_classifier']['id'])
@mock.patch.object(Client, 'create_sfc_flow_classifier')
def test_new_classification(self, create_sfc_flow_classifier):
self._test_new_classification(create_sfc_flow_classifier,
'legacy_flow_classifier')
@mock.patch.object(Client, 'create_sfc_flow_classifier')
def test_new_classification_unsupported_type(self, create_sfc_flow_classifier):
self.assertRaises(vimconn.VimConnNotSupportedException,
self._test_new_classification,
create_sfc_flow_classifier, 'h265')
@mock.patch.object(Client, 'create_sfc_port_pair')
def test_new_sfi_with_sfc_encap(self, create_sfc_port_pair):
self._test_new_sfi(create_sfc_port_pair, True)
@mock.patch.object(Client, 'create_sfc_port_pair')
def test_new_sfi_without_sfc_encap(self, create_sfc_port_pair):
self._test_new_sfi(create_sfc_port_pair, False)
@mock.patch.object(Client, 'create_sfc_port_pair')
def test_new_sfi_default_sfc_encap(self, create_sfc_port_pair):
self._test_new_sfi(create_sfc_port_pair, None)
@mock.patch.object(Client, 'create_sfc_port_pair')
def test_new_sfi_bad_ingress_ports(self, create_sfc_port_pair):
ingress_ports = ['5311c75d-d718-4369-bbda-cdcc6da60fcc',
'a0273f64-82c9-11e7-b08f-6328e53f0fa7']
self.assertRaises(vimconn.VimConnNotSupportedException,
self._test_new_sfi,
create_sfc_port_pair, True, ingress_ports=ingress_ports)
ingress_ports = []
self.assertRaises(vimconn.VimConnNotSupportedException,
self._test_new_sfi,
create_sfc_port_pair, True, ingress_ports=ingress_ports)
@mock.patch.object(Client, 'create_sfc_port_pair')
def test_new_sfi_bad_egress_ports(self, create_sfc_port_pair):
egress_ports = ['230cdf1b-de37-4891-bc07-f9010cf1f967',
'b41228fe-82c9-11e7-9b44-17504174320b']
self.assertRaises(vimconn.VimConnNotSupportedException,
self._test_new_sfi,
create_sfc_port_pair, True, egress_ports=egress_ports)
egress_ports = []
self.assertRaises(vimconn.VimConnNotSupportedException,
self._test_new_sfi,
create_sfc_port_pair, True, egress_ports=egress_ports)
@mock.patch.object(vimconnector, 'get_sfi')
@mock.patch.object(Client, 'create_sfc_port_pair_group')
def test_new_sf(self, create_sfc_port_pair_group, get_sfi):
get_sfi.return_value = {'sfc_encap': True}
self._test_new_sf(create_sfc_port_pair_group)
@mock.patch.object(vimconnector, 'get_sfi')
@mock.patch.object(Client, 'create_sfc_port_pair_group')
def test_new_sf_inconsistent_sfc_encap(self, create_sfc_port_pair_group,
get_sfi):
get_sfi.return_value = {'sfc_encap': 'nsh'}
self.assertRaises(vimconn.VimConnNotSupportedException,
self._test_new_sf, create_sfc_port_pair_group)
@mock.patch.object(Client, 'create_sfc_port_chain')
def test_new_sfp_with_sfc_encap(self, create_sfc_port_chain):
self._test_new_sfp(create_sfc_port_chain, True, None)
@mock.patch.object(Client, 'create_sfc_port_chain')
def test_new_sfp_without_sfc_encap(self, create_sfc_port_chain):
self._test_new_sfp(create_sfc_port_chain, False, None)
self._test_new_sfp(create_sfc_port_chain, False, 25)
@mock.patch.object(Client, 'create_sfc_port_chain')
def test_new_sfp_default_sfc_encap(self, create_sfc_port_chain):
self._test_new_sfp(create_sfc_port_chain, None, None)
@mock.patch.object(Client, 'create_sfc_port_chain')
def test_new_sfp_with_sfc_encap_spi(self, create_sfc_port_chain):
self._test_new_sfp(create_sfc_port_chain, True, 25)
@mock.patch.object(Client, 'create_sfc_port_chain')
def test_new_sfp_default_sfc_encap_spi(self, create_sfc_port_chain):
self._test_new_sfp(create_sfc_port_chain, None, 25)
@mock.patch.object(Client, 'list_sfc_flow_classifiers')
def test_get_classification_list(self, list_sfc_flow_classifiers):
# what OpenStack is assumed to return to the VIM connector
list_sfc_flow_classifiers.return_value = {'flow_classifiers': [
{'source_port_range_min': 2000,
'destination_ip_prefix': '192.168.3.0/24',
'protocol': 'udp',
'description': '',
'ethertype': 'IPv4',
'l7_parameters': {},
'source_port_range_max': 2000,
'destination_port_range_min': 3000,
'source_ip_prefix': '192.168.2.0/24',
'logical_destination_port': None,
'tenant_id': '8f3019ef06374fa880a0144ad4bc1d7b',
'destination_port_range_max': None,
'project_id': '8f3019ef06374fa880a0144ad4bc1d7b',
'logical_source_port': 'aaab0ab0-1452-4636-bb3b-11dca833fa2b',
'id': '22198366-d4e8-4d6b-b4d2-637d5d6cbb7d',
'name': 'fc1'}]}
# call the VIM connector
filter_dict = {'protocol': 'tcp', 'ethertype': 'IPv4'}
result = self.vimconn.get_classification_list(filter_dict.copy())
# assert that VIM connector called OpenStack with the expected filter
list_sfc_flow_classifiers.assert_called_with(**filter_dict)
# assert that the VIM connector successfully
# translated and returned the OpenStack result
self.assertEqual(result, [
{'id': '22198366-d4e8-4d6b-b4d2-637d5d6cbb7d',
'name': 'fc1',
'description': '',
'project_id': '8f3019ef06374fa880a0144ad4bc1d7b',
'tenant_id': '8f3019ef06374fa880a0144ad4bc1d7b',
'ctype': 'legacy_flow_classifier',
'definition': {
'source_port_range_min': 2000,
'destination_ip_prefix': '192.168.3.0/24',
'protocol': 'udp',
'ethertype': 'IPv4',
'l7_parameters': {},
'source_port_range_max': 2000,
'destination_port_range_min': 3000,
'source_ip_prefix': '192.168.2.0/24',
'logical_destination_port': None,
'destination_port_range_max': None,
'logical_source_port': 'aaab0ab0-1452-4636-bb3b-11dca833fa2b'}
}])
def _test_get_sfi_list(self, list_port_pair, correlation, sfc_encap):
# what OpenStack is assumed to return to the VIM connector
list_port_pair.return_value = {'port_pairs': [
{'ingress': '5311c75d-d718-4369-bbda-cdcc6da60fcc',
'description': '',
'tenant_id': '8f3019ef06374fa880a0144ad4bc1d7b',
'egress': '5311c75d-d718-4369-bbda-cdcc6da60fcc',
'service_function_parameters': {'correlation': correlation},
'project_id': '8f3019ef06374fa880a0144ad4bc1d7b',
'id': 'c121ebdd-7f2d-4213-b933-3325298a6966',
'name': 'osm_sfi'}]}
# call the VIM connector
filter_dict = {'name': 'osm_sfi', 'description': ''}
result = self.vimconn.get_sfi_list(filter_dict.copy())
# assert that VIM connector called OpenStack with the expected filter
list_port_pair.assert_called_with(**filter_dict)
# assert that the VIM connector successfully
# translated and returned the OpenStack result
self.assertEqual(result, [
{'ingress_ports': ['5311c75d-d718-4369-bbda-cdcc6da60fcc'],
'description': '',
'tenant_id': '8f3019ef06374fa880a0144ad4bc1d7b',
'egress_ports': ['5311c75d-d718-4369-bbda-cdcc6da60fcc'],
'sfc_encap': sfc_encap,
'project_id': '8f3019ef06374fa880a0144ad4bc1d7b',
'id': 'c121ebdd-7f2d-4213-b933-3325298a6966',
'name': 'osm_sfi'}])
@mock.patch.object(Client, 'list_sfc_port_pairs')
def test_get_sfi_list_with_sfc_encap(self, list_sfc_port_pairs):
self._test_get_sfi_list(list_sfc_port_pairs, 'nsh', True)
@mock.patch.object(Client, 'list_sfc_port_pairs')
def test_get_sfi_list_without_sfc_encap(self, list_sfc_port_pairs):
self._test_get_sfi_list(list_sfc_port_pairs, None, False)
@mock.patch.object(Client, 'list_sfc_port_pair_groups')
def test_get_sf_list(self, list_sfc_port_pair_groups):
# what OpenStack is assumed to return to the VIM connector
list_sfc_port_pair_groups.return_value = {'port_pair_groups': [
{'port_pairs': ['08fbdbb0-82d6-11e7-ad95-9bb52fbec2f2',
'0d63799c-82d6-11e7-8deb-a746bb3ae9f5'],
'description': '',
'tenant_id': '8f3019ef06374fa880a0144ad4bc1d7b',
'port_pair_group_parameters': {},
'project_id': '8f3019ef06374fa880a0144ad4bc1d7b',
'id': 'f4a0bde8-82d5-11e7-90e1-a72b762fa27f',
'name': 'osm_sf'}]}
# call the VIM connector
filter_dict = {'name': 'osm_sf', 'description': ''}
result = self.vimconn.get_sf_list(filter_dict.copy())
# assert that VIM connector called OpenStack with the expected filter
list_sfc_port_pair_groups.assert_called_with(**filter_dict)
# assert that the VIM connector successfully
# translated and returned the OpenStack result
self.assertEqual(result, [
{'sfis': ['08fbdbb0-82d6-11e7-ad95-9bb52fbec2f2',
'0d63799c-82d6-11e7-8deb-a746bb3ae9f5'],
'description': '',
'tenant_id': '8f3019ef06374fa880a0144ad4bc1d7b',
'project_id': '8f3019ef06374fa880a0144ad4bc1d7b',
'id': 'f4a0bde8-82d5-11e7-90e1-a72b762fa27f',
'name': 'osm_sf'}])
def _test_get_sfp_list(self, list_sfc_port_chains, correlation, sfc_encap):
# what OpenStack is assumed to return to the VIM connector
list_sfc_port_chains.return_value = {'port_chains': [
{'port_pair_groups': ['7d8e3bf8-82d6-11e7-a032-8ff028839d25',
'7dc9013e-82d6-11e7-a5a6-a3a8d78a5518'],
'flow_classifiers': ['1333c2f4-82d7-11e7-a5df-9327f33d104e',
'1387ab44-82d7-11e7-9bb0-476337183905'],
'description': '',
'tenant_id': '8f3019ef06374fa880a0144ad4bc1d7b',
'chain_parameters': {'correlation': correlation},
'chain_id': 40,
'project_id': '8f3019ef06374fa880a0144ad4bc1d7b',
'id': '821bc9be-82d7-11e7-8ce3-23a08a27ab47',
'name': 'osm_sfp'}]}
# call the VIM connector
filter_dict = {'name': 'osm_sfp', 'description': ''}
result = self.vimconn.get_sfp_list(filter_dict.copy())
# assert that VIM connector called OpenStack with the expected filter
list_sfc_port_chains.assert_called_with(**filter_dict)
# assert that the VIM connector successfully
# translated and returned the OpenStack result
self.assertEqual(result, [
{'service_functions': ['7d8e3bf8-82d6-11e7-a032-8ff028839d25',
'7dc9013e-82d6-11e7-a5a6-a3a8d78a5518'],
'classifications': ['1333c2f4-82d7-11e7-a5df-9327f33d104e',
'1387ab44-82d7-11e7-9bb0-476337183905'],
'description': '',
'tenant_id': '8f3019ef06374fa880a0144ad4bc1d7b',
'project_id': '8f3019ef06374fa880a0144ad4bc1d7b',
'sfc_encap': sfc_encap,
'spi': 40,
'id': '821bc9be-82d7-11e7-8ce3-23a08a27ab47',
'name': 'osm_sfp'}])
@mock.patch.object(Client, 'list_sfc_port_chains')
def test_get_sfp_list_with_sfc_encap(self, list_sfc_port_chains):
self._test_get_sfp_list(list_sfc_port_chains, 'nsh', True)
@mock.patch.object(Client, 'list_sfc_port_chains')
def test_get_sfp_list_without_sfc_encap(self, list_sfc_port_chains):
self._test_get_sfp_list(list_sfc_port_chains, None, False)
@mock.patch.object(Client, 'list_sfc_flow_classifiers')
def test_get_classification(self, list_sfc_flow_classifiers):
# what OpenStack is assumed to return to the VIM connector
list_sfc_flow_classifiers.return_value = {'flow_classifiers': [
{'source_port_range_min': 2000,
'destination_ip_prefix': '192.168.3.0/24',
'protocol': 'udp',
'description': '',
'ethertype': 'IPv4',
'l7_parameters': {},
'source_port_range_max': 2000,
'destination_port_range_min': 3000,
'source_ip_prefix': '192.168.2.0/24',
'logical_destination_port': None,
'tenant_id': '8f3019ef06374fa880a0144ad4bc1d7b',
'destination_port_range_max': None,
'project_id': '8f3019ef06374fa880a0144ad4bc1d7b',
'logical_source_port': 'aaab0ab0-1452-4636-bb3b-11dca833fa2b',
'id': '22198366-d4e8-4d6b-b4d2-637d5d6cbb7d',
'name': 'fc1'}
]}
# call the VIM connector
result = self.vimconn.get_classification(
'22198366-d4e8-4d6b-b4d2-637d5d6cbb7d')
# assert that VIM connector called OpenStack with the expected filter
list_sfc_flow_classifiers.assert_called_with(
id='22198366-d4e8-4d6b-b4d2-637d5d6cbb7d')
# assert that VIM connector successfully returned the OpenStack result
self.assertEqual(result,
{'id': '22198366-d4e8-4d6b-b4d2-637d5d6cbb7d',
'name': 'fc1',
'description': '',
'project_id': '8f3019ef06374fa880a0144ad4bc1d7b',
'tenant_id': '8f3019ef06374fa880a0144ad4bc1d7b',
'ctype': 'legacy_flow_classifier',
'definition': {
'source_port_range_min': 2000,
'destination_ip_prefix': '192.168.3.0/24',
'protocol': 'udp',
'ethertype': 'IPv4',
'l7_parameters': {},
'source_port_range_max': 2000,
'destination_port_range_min': 3000,
'source_ip_prefix': '192.168.2.0/24',
'logical_destination_port': None,
'destination_port_range_max': None,
'logical_source_port':
'aaab0ab0-1452-4636-bb3b-11dca833fa2b'}
})
@mock.patch.object(Client, 'list_sfc_flow_classifiers')
def test_get_classification_many_results(self, list_sfc_flow_classifiers):
# what OpenStack is assumed to return to the VIM connector
list_sfc_flow_classifiers.return_value = {'flow_classifiers': [
{'source_port_range_min': 2000,
'destination_ip_prefix': '192.168.3.0/24',
'protocol': 'udp',
'description': '',
'ethertype': 'IPv4',
'l7_parameters': {},
'source_port_range_max': 2000,
'destination_port_range_min': 3000,
'source_ip_prefix': '192.168.2.0/24',
'logical_destination_port': None,
'tenant_id': '8f3019ef06374fa880a0144ad4bc1d7b',
'destination_port_range_max': None,
'project_id': '8f3019ef06374fa880a0144ad4bc1d7b',
'logical_source_port': 'aaab0ab0-1452-4636-bb3b-11dca833fa2b',
'id': '22198366-d4e8-4d6b-b4d2-637d5d6cbb7d',
'name': 'fc1'},
{'source_port_range_min': 1000,
'destination_ip_prefix': '192.168.3.0/24',
'protocol': 'udp',
'description': '',
'ethertype': 'IPv4',
'l7_parameters': {},
'source_port_range_max': 1000,
'destination_port_range_min': 3000,
'source_ip_prefix': '192.168.2.0/24',
'logical_destination_port': None,
'tenant_id': '8f3019ef06374fa880a0144ad4bc1d7b',
'destination_port_range_max': None,
'project_id': '8f3019ef06374fa880a0144ad4bc1d7b',
'logical_source_port': 'aaab0ab0-1452-4636-bb3b-11dca833fa2b',
'id': '3196bafc-82dd-11e7-a205-9bf6c14b0721',
'name': 'fc2'}
]}
# call the VIM connector
self.assertRaises(vimconn.VimConnConflictException,
self.vimconn.get_classification,
'3196bafc-82dd-11e7-a205-9bf6c14b0721')
# assert the VIM connector called OpenStack with the expected filter
list_sfc_flow_classifiers.assert_called_with(
id='3196bafc-82dd-11e7-a205-9bf6c14b0721')
@mock.patch.object(Client, 'list_sfc_flow_classifiers')
def test_get_classification_no_results(self, list_sfc_flow_classifiers):
# what OpenStack is assumed to return to the VIM connector
list_sfc_flow_classifiers.return_value = {'flow_classifiers': []}
# call the VIM connector
self.assertRaises(vimconn.VimConnNotFoundException,
self.vimconn.get_classification,
'3196bafc-82dd-11e7-a205-9bf6c14b0721')
# assert the VIM connector called OpenStack with the expected filter
list_sfc_flow_classifiers.assert_called_with(
id='3196bafc-82dd-11e7-a205-9bf6c14b0721')
@mock.patch.object(Client, 'list_sfc_port_pairs')
def test_get_sfi(self, list_sfc_port_pairs):
# what OpenStack is assumed to return to the VIM connector
list_sfc_port_pairs.return_value = {'port_pairs': [
{'ingress': '5311c75d-d718-4369-bbda-cdcc6da60fcc',
'description': '',
'tenant_id': '8f3019ef06374fa880a0144ad4bc1d7b',
'egress': '5311c75d-d718-4369-bbda-cdcc6da60fcc',
'service_function_parameters': {'correlation': 'nsh'},
'project_id': '8f3019ef06374fa880a0144ad4bc1d7b',
'id': 'c121ebdd-7f2d-4213-b933-3325298a6966',
'name': 'osm_sfi1'},
]}
# call the VIM connector
result = self.vimconn.get_sfi('c121ebdd-7f2d-4213-b933-3325298a6966')
# assert the VIM connector called OpenStack with the expected filter
list_sfc_port_pairs.assert_called_with(
id='c121ebdd-7f2d-4213-b933-3325298a6966')
# assert the VIM connector successfully returned the OpenStack result
self.assertEqual(result,
{'ingress_ports': [
'5311c75d-d718-4369-bbda-cdcc6da60fcc'],
'egress_ports': [
'5311c75d-d718-4369-bbda-cdcc6da60fcc'],
'sfc_encap': True,
'description': '',
'tenant_id': '8f3019ef06374fa880a0144ad4bc1d7b',
'project_id': '8f3019ef06374fa880a0144ad4bc1d7b',
'id': 'c121ebdd-7f2d-4213-b933-3325298a6966',
'name': 'osm_sfi1'})
@mock.patch.object(Client, 'list_sfc_port_pairs')
def test_get_sfi_many_results(self, list_sfc_port_pairs):
# what OpenStack is assumed to return to the VIM connector
list_sfc_port_pairs.return_value = {'port_pairs': [
{'ingress': '5311c75d-d718-4369-bbda-cdcc6da60fcc',
'description': '',
'tenant_id': '8f3019ef06374fa880a0144ad4bc1d7b',
'egress': '5311c75d-d718-4369-bbda-cdcc6da60fcc',
'service_function_parameters': {'correlation': 'nsh'},
'project_id': '8f3019ef06374fa880a0144ad4bc1d7b',
'id': 'c121ebdd-7f2d-4213-b933-3325298a6966',
'name': 'osm_sfi1'},
{'ingress': '5311c75d-d718-4369-bbda-cdcc6da60fcc',
'description': '',
'tenant_id': '8f3019ef06374fa880a0144ad4bc1d7b',
'egress': '5311c75d-d718-4369-bbda-cdcc6da60fcc',
'service_function_parameters': {'correlation': 'nsh'},
'project_id': '8f3019ef06374fa880a0144ad4bc1d7b',
'id': 'c0436d92-82db-11e7-8f9c-5fa535f1261f',
'name': 'osm_sfi2'}
]}
# call the VIM connector
self.assertRaises(vimconn.VimConnConflictException,
self.vimconn.get_sfi,
'c0436d92-82db-11e7-8f9c-5fa535f1261f')
        # assert that the VIM connector called OpenStack with the expected filter
list_sfc_port_pairs.assert_called_with(
id='c0436d92-82db-11e7-8f9c-5fa535f1261f')
@mock.patch.object(Client, 'list_sfc_port_pairs')
def test_get_sfi_no_results(self, list_sfc_port_pairs):
# what OpenStack is assumed to return to the VIM connector
list_sfc_port_pairs.return_value = {'port_pairs': []}
# call the VIM connector
self.assertRaises(vimconn.VimConnNotFoundException,
self.vimconn.get_sfi,
'b22892fc-82d9-11e7-ae85-0fea6a3b3757')
        # assert that the VIM connector called OpenStack with the expected filter
list_sfc_port_pairs.assert_called_with(
id='b22892fc-82d9-11e7-ae85-0fea6a3b3757')
@mock.patch.object(Client, 'list_sfc_port_pair_groups')
def test_get_sf(self, list_sfc_port_pair_groups):
# what OpenStack is assumed to return to the VIM connector
list_sfc_port_pair_groups.return_value = {'port_pair_groups': [
{'port_pairs': ['08fbdbb0-82d6-11e7-ad95-9bb52fbec2f2'],
'description': '',
'tenant_id': '8f3019ef06374fa880a0144ad4bc1d7b',
'port_pair_group_parameters': {},
'project_id': '8f3019ef06374fa880a0144ad4bc1d7b',
'id': 'aabba8a6-82d9-11e7-a18a-d3c7719b742d',
'name': 'osm_sf1'}
]}
# call the VIM connector
result = self.vimconn.get_sf('b22892fc-82d9-11e7-ae85-0fea6a3b3757')
        # assert that the VIM connector called OpenStack with the expected filter
list_sfc_port_pair_groups.assert_called_with(
id='b22892fc-82d9-11e7-ae85-0fea6a3b3757')
        # assert that the VIM connector successfully returned the OpenStack result
self.assertEqual(result,
{'description': '',
'tenant_id': '8f3019ef06374fa880a0144ad4bc1d7b',
'project_id': '8f3019ef06374fa880a0144ad4bc1d7b',
'sfis': ['08fbdbb0-82d6-11e7-ad95-9bb52fbec2f2'],
'id': 'aabba8a6-82d9-11e7-a18a-d3c7719b742d',
'name': 'osm_sf1'})
@mock.patch.object(Client, 'list_sfc_port_pair_groups')
def test_get_sf_many_results(self, list_sfc_port_pair_groups):
# what OpenStack is assumed to return to the VIM connector
list_sfc_port_pair_groups.return_value = {'port_pair_groups': [
{'port_pairs': ['08fbdbb0-82d6-11e7-ad95-9bb52fbec2f2'],
'description': '',
'tenant_id': '8f3019ef06374fa880a0144ad4bc1d7b',
'port_pair_group_parameters': {},
'project_id': '8f3019ef06374fa880a0144ad4bc1d7b',
'id': 'aabba8a6-82d9-11e7-a18a-d3c7719b742d',
'name': 'osm_sf1'},
{'port_pairs': ['0d63799c-82d6-11e7-8deb-a746bb3ae9f5'],
'description': '',
'tenant_id': '8f3019ef06374fa880a0144ad4bc1d7b',
'port_pair_group_parameters': {},
'project_id': '8f3019ef06374fa880a0144ad4bc1d7b',
'id': 'b22892fc-82d9-11e7-ae85-0fea6a3b3757',
'name': 'osm_sf2'}
]}
# call the VIM connector
self.assertRaises(vimconn.VimConnConflictException,
self.vimconn.get_sf,
'b22892fc-82d9-11e7-ae85-0fea6a3b3757')
        # assert that the VIM connector called OpenStack with the expected filter
list_sfc_port_pair_groups.assert_called_with(
id='b22892fc-82d9-11e7-ae85-0fea6a3b3757')
@mock.patch.object(Client, 'list_sfc_port_pair_groups')
def test_get_sf_no_results(self, list_sfc_port_pair_groups):
# what OpenStack is assumed to return to the VIM connector
list_sfc_port_pair_groups.return_value = {'port_pair_groups': []}
# call the VIM connector
self.assertRaises(vimconn.VimConnNotFoundException,
self.vimconn.get_sf,
'b22892fc-82d9-11e7-ae85-0fea6a3b3757')
        # assert that the VIM connector called OpenStack with the expected filter
list_sfc_port_pair_groups.assert_called_with(
id='b22892fc-82d9-11e7-ae85-0fea6a3b3757')
@mock.patch.object(Client, 'list_sfc_port_chains')
def test_get_sfp(self, list_sfc_port_chains):
# what OpenStack is assumed to return to the VIM connector
list_sfc_port_chains.return_value = {'port_chains': [
{'port_pair_groups': ['7d8e3bf8-82d6-11e7-a032-8ff028839d25'],
'flow_classifiers': ['1333c2f4-82d7-11e7-a5df-9327f33d104e'],
'description': '',
'tenant_id': '8f3019ef06374fa880a0144ad4bc1d7b',
'chain_parameters': {'correlation': 'nsh'},
'chain_id': 40,
'project_id': '8f3019ef06374fa880a0144ad4bc1d7b',
'id': '821bc9be-82d7-11e7-8ce3-23a08a27ab47',
'name': 'osm_sfp1'}]}
# call the VIM connector
result = self.vimconn.get_sfp('821bc9be-82d7-11e7-8ce3-23a08a27ab47')
        # assert that the VIM connector called OpenStack with the expected filter
list_sfc_port_chains.assert_called_with(
id='821bc9be-82d7-11e7-8ce3-23a08a27ab47')
        # assert that the VIM connector successfully returned the OpenStack result
self.assertEqual(result,
{'service_functions': [
'7d8e3bf8-82d6-11e7-a032-8ff028839d25'],
'classifications': [
'1333c2f4-82d7-11e7-a5df-9327f33d104e'],
'description': '',
'tenant_id': '8f3019ef06374fa880a0144ad4bc1d7b',
'project_id': '8f3019ef06374fa880a0144ad4bc1d7b',
'sfc_encap': True,
'spi': 40,
'id': '821bc9be-82d7-11e7-8ce3-23a08a27ab47',
'name': 'osm_sfp1'})
@mock.patch.object(Client, 'list_sfc_port_chains')
def test_get_sfp_many_results(self, list_sfc_port_chains):
# what OpenStack is assumed to return to the VIM connector
list_sfc_port_chains.return_value = {'port_chains': [
{'port_pair_groups': ['7d8e3bf8-82d6-11e7-a032-8ff028839d25'],
'flow_classifiers': ['1333c2f4-82d7-11e7-a5df-9327f33d104e'],
'description': '',
'tenant_id': '8f3019ef06374fa880a0144ad4bc1d7b',
'chain_parameters': {'correlation': 'nsh'},
'chain_id': 40,
'project_id': '8f3019ef06374fa880a0144ad4bc1d7b',
'id': '821bc9be-82d7-11e7-8ce3-23a08a27ab47',
'name': 'osm_sfp1'},
{'port_pair_groups': ['7d8e3bf8-82d6-11e7-a032-8ff028839d25'],
'flow_classifiers': ['1333c2f4-82d7-11e7-a5df-9327f33d104e'],
'description': '',
'tenant_id': '8f3019ef06374fa880a0144ad4bc1d7b',
'chain_parameters': {'correlation': 'nsh'},
'chain_id': 50,
'project_id': '8f3019ef06374fa880a0144ad4bc1d7b',
'id': '5d002f38-82de-11e7-a770-f303f11ce66a',
'name': 'osm_sfp2'}
]}
# call the VIM connector
self.assertRaises(vimconn.VimConnConflictException,
self.vimconn.get_sfp,
'5d002f38-82de-11e7-a770-f303f11ce66a')
        # assert that the VIM connector called OpenStack with the expected filter
list_sfc_port_chains.assert_called_with(
id='5d002f38-82de-11e7-a770-f303f11ce66a')
@mock.patch.object(Client, 'list_sfc_port_chains')
def test_get_sfp_no_results(self, list_sfc_port_chains):
# what OpenStack is assumed to return to the VIM connector
list_sfc_port_chains.return_value = {'port_chains': []}
# call the VIM connector
self.assertRaises(vimconn.VimConnNotFoundException,
self.vimconn.get_sfp,
'5d002f38-82de-11e7-a770-f303f11ce66a')
        # assert that the VIM connector called OpenStack with the expected filter
list_sfc_port_chains.assert_called_with(
id='5d002f38-82de-11e7-a770-f303f11ce66a')
@mock.patch.object(Client, 'delete_sfc_flow_classifier')
def test_delete_classification(self, delete_sfc_flow_classifier):
result = self.vimconn.delete_classification(
'638f957c-82df-11e7-b7c8-132706021464')
delete_sfc_flow_classifier.assert_called_with(
'638f957c-82df-11e7-b7c8-132706021464')
self.assertEqual(result, '638f957c-82df-11e7-b7c8-132706021464')
@mock.patch.object(Client, 'delete_sfc_port_pair')
def test_delete_sfi(self, delete_sfc_port_pair):
result = self.vimconn.delete_sfi(
'638f957c-82df-11e7-b7c8-132706021464')
delete_sfc_port_pair.assert_called_with(
'638f957c-82df-11e7-b7c8-132706021464')
self.assertEqual(result, '638f957c-82df-11e7-b7c8-132706021464')
@mock.patch.object(Client, 'delete_sfc_port_pair_group')
def test_delete_sf(self, delete_sfc_port_pair_group):
result = self.vimconn.delete_sf('638f957c-82df-11e7-b7c8-132706021464')
delete_sfc_port_pair_group.assert_called_with(
'638f957c-82df-11e7-b7c8-132706021464')
self.assertEqual(result, '638f957c-82df-11e7-b7c8-132706021464')
@mock.patch.object(Client, 'delete_sfc_port_chain')
def test_delete_sfp(self, delete_sfc_port_chain):
result = self.vimconn.delete_sfp(
'638f957c-82df-11e7-b7c8-132706021464')
delete_sfc_port_chain.assert_called_with(
'638f957c-82df-11e7-b7c8-132706021464')
self.assertEqual(result, '638f957c-82df-11e7-b7c8-132706021464')
if __name__ == '__main__':
unittest.main()
| 47.509942 | 83 | 0.619556 | 4,337 | 40,621 | 5.51003 | 0.092921 | 0.032807 | 0.035151 | 0.031636 | 0.901159 | 0.872118 | 0.840901 | 0.815667 | 0.789806 | 0.75022 | 0 | 0.137108 | 0.278206 | 40,621 | 854 | 84 | 47.565574 | 0.677933 | 0.130622 | 0 | 0.608491 | 0 | 0 | 0.340285 | 0.23254 | 0 | 0 | 0 | 0.001171 | 0.084906 | 1 | 0.06761 | false | 0.001572 | 0.009434 | 0 | 0.078616 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f79f2086a1c6ab01fd5abf5131801cb0b4560e22 | 685 | py | Python | buildroot/support/testing/tests/package/test_lua_gd.py | bramkragten/operating-system | 27fc2de146f1ef047316a4b58a236c72d26da81c | [
"Apache-2.0"
] | 349 | 2021-08-17T08:46:53.000Z | 2022-03-30T06:25:25.000Z | buildroot/support/testing/tests/package/test_lua_gd.py | bramkragten/operating-system | 27fc2de146f1ef047316a4b58a236c72d26da81c | [
"Apache-2.0"
] | 8 | 2020-04-02T22:51:47.000Z | 2020-04-27T03:24:55.000Z | buildroot/support/testing/tests/package/test_lua_gd.py | bramkragten/operating-system | 27fc2de146f1ef047316a4b58a236c72d26da81c | [
"Apache-2.0"
] | 12 | 2021-08-17T20:10:30.000Z | 2022-01-06T10:52:54.000Z | from tests.package.test_lua import TestLuaBase
class TestLuaLuaGD(TestLuaBase):
config = TestLuaBase.config + \
"""
BR2_PACKAGE_LUA=y
BR2_PACKAGE_LUA_GD=y
BR2_PACKAGE_FONTCONFIG=y
BR2_PACKAGE_JPEG=y
BR2_PACKAGE_LIBPNG=y
"""
def test_run(self):
self.login()
self.module_test("gd")
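# Same module test as above, but built against LuaJIT instead of the
# reference Lua interpreter.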
class TestLuajitLuaGD(TestLuaBase):
config = TestLuaBase.config + \
"""
BR2_PACKAGE_LUAJIT=y
BR2_PACKAGE_LUA_GD=y
BR2_PACKAGE_FONTCONFIG=y
BR2_PACKAGE_JPEG=y
BR2_PACKAGE_LIBPNG=y
"""
def test_run(self):
self.login()
self.module_test("gd")
| 21.40625 | 46 | 0.611679 | 81 | 685 | 4.839506 | 0.283951 | 0.255102 | 0.22449 | 0.173469 | 0.780612 | 0.780612 | 0.556122 | 0.556122 | 0.556122 | 0.556122 | 0 | 0.020877 | 0.30073 | 685 | 31 | 47 | 22.096774 | 0.797495 | 0 | 0 | 0.727273 | 0 | 0 | 0.010989 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0.090909 | 0 | 0.636364 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
e39d15c1b0b565dfd2e15311c4149f373eda3f9f | 13,557 | py | Python | tests/test_input.py | xroynard/ms_deepvoxscene | e1800a5628e6b9ab20c12d1939e04ac2fd3b4cfc | [
"MIT"
] | 13 | 2019-08-29T18:59:33.000Z | 2021-06-10T03:18:57.000Z | tests/test_input.py | xroynard/ms_deepvoxscene | e1800a5628e6b9ab20c12d1939e04ac2fd3b4cfc | [
"MIT"
] | 2 | 2019-09-02T09:06:45.000Z | 2019-09-02T11:55:15.000Z | tests/test_input.py | xroynard/ms_deepvoxscene | e1800a5628e6b9ab20c12d1939e04ac2fd3b4cfc | [
"MIT"
] | 1 | 2020-04-30T03:50:16.000Z | 2020-04-30T03:50:16.000Z | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
@author: Xavier Roynard
"""
from __future__ import print_function, division
import os
import sys
import time
import numpy as np
from torch.autograd import Variable
from torch.utils.data import DataLoader
# default_collate is torch's stock batching function for fixed-shape tensors.
from torch.utils.data.dataloader import default_collate
sys.path.insert(0, os.path.abspath('..'))
# pc_collate is used by the DataLoader below but was never imported in the
# original; it is assumed to be exported by the local `input` package
# alongside PointCloudDataset.
from input import PointCloudDataset, pc_collate
#%% Test PointCloudDataset
if __name__ == '__main__':
###############################################################################
DATASET = "parislille3d"
# DATASET = "semantic3d"
# DATASET = "s3dis"
###############################################################################
# DATA_DIR = os.path.join(os.path.curdir, "data", DATASET)
DATA_DIR = os.path.join(os.path.curdir, "../data", DATASET, "debug_datasets")
DATASET_DIR = os.path.join(DATA_DIR,'train')
RESULT_DIR = os.path.join(DATA_DIR,'result')
GRID_SIZE = 32
VOXEL_SIZE = 0.1
SCALES = {1}
# SCALES = {1,2,4}
BATCH_SIZE = 20
NB_CLASSES = 9
NUM_WORKERS = 16
NB_POINTS_PER_CLASS = 100
NB_POINTS_TO_ADD_PER_CLASS = 100
# NB_POINTS_PER_CLASS = 3600
# NB_POINTS_TO_ADD_PER_CLASS = 1000
# SEGMENTATION = True
SEGMENTATION = False
VOXELIZED = True
# VOXELIZED = False
dset = PointCloudDataset(DATASET_DIR,
grid_size=GRID_SIZE,
voxel_size=VOXEL_SIZE,
scales=SCALES,
# transform=voxel_augmentation,
nb_pts_per_class=NB_POINTS_PER_CLASS,
nb_pts_to_add_per_class=NB_POINTS_TO_ADD_PER_CLASS,
voxelized=VOXELIZED,
segmentation=SEGMENTATION,
# inRAM=True,
denseGrid=True
)
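    # Dense voxel grids have a fixed shape, so torch's default_collate can
    # batch them; raw point clouds vary in size per sample, hence the custom
    # pc_collate (an assumption inferred from how the VOXELIZED flag is used).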
dset_loader = DataLoader(dset,
batch_size=BATCH_SIZE,
shuffle=True,
num_workers=NUM_WORKERS,
collate_fn=default_collate if VOXELIZED else pc_collate
)#, pin_memory=True) #?
# city_list = ["Lille1_1", "Lille1_2", "Lille2", "Paris"]
# if VOXELIZED:
# dsets = {city: PointCloudDataset(os.path.join(data_dir,city), r=1.6, transform=voxel_augmentation, voxelized=VOXELIZED, segmentation=SEGMENTATION, device=0)
# for city in city_list}
# dset_loaders = {city: torch.utils.data.DataLoader(dsets[city], batch_size=BATCH_SIZE, shuffle=True, num_workers=NUM_WORKERS)#, pin_memory=True) #?
# for city in city_list}
# else:
# dsets = {city: PointCloudDataset(os.path.join(data_dir,city), r=1.6, transform=pc_augmentation, voxelized=VOXELIZED, segmentation=SEGMENTATION, device=0)
# for city in city_list}
# dset_loaders = {city: torch.utils.data.DataLoader(dsets[city], batch_size=BATCH_SIZE, shuffle=True, num_workers=NUM_WORKERS, collate_fn=pc_collate)#, pin_memory=True) #?
# for city in city_list}
#
#    # Dataset sizes
# dset_sizes = {city: len(dsets[city]) for city in city_list}
# dset_loaders_sizes = {city: len(dset_loaders[city]) for city in city_list}
# print("dset_sizes :", dset_sizes)
# print("dset_loaders_sizes :", dset_loaders_sizes)
#
#    # Test the data loader
# for city in city_list:
# print("\n\nCity : {}".format(city))
# start_time = time.time()
# dataset_len = len(dsets[city])
# dataloader_len = len(dset_loaders[city])
#        print("Dataset size: {}".format(dataset_len))
#        print("DataLoader size: {}".format(dataloader_len))
#        print("Batch size: {}".format(BATCH_SIZE))
# for i,data in enumerate(dset_loaders[city]):
# inputs = Variable(data['input']).cuda()
# labels = Variable(data['label']).cuda()
#            print("\r\tEpoch time:{:07.2f} s,{:05.2f}%,{:08d},input:{},labels:{}".format(time.time() - start_time, 100*i*BATCH_SIZE/dataset_len, i*BATCH_SIZE, inputs.size(), labels.size()), end="")
# sys.stdout.flush()
##%% Test the data loader
# print("\n\n")
# dataset_len = len(dset)
# dataloader_len = len(dset_loader)
#    print("Dataset size: {}".format(dataset_len))
#    print("DataLoader size: {}".format(dataloader_len))
#    print("Batch size: {}".format(BATCH_SIZE))
#
# start_time = time.time()
# for i,data in enumerate(dset_loader):
# inputs = data['input']
# if isinstance(inputs,list):
# inputs = Variable(inputs[0]).cuda()
# else:
# inputs = Variable(inputs).cuda()
# labels = Variable(data['label']).cuda()
#        print("\r\tEpoch time:{:07.2f} s,{:05.2f}%,{:08d},input:{},labels:{}".format(time.time() - start_time, 100*i*BATCH_SIZE/dataset_len, i*BATCH_SIZE, inputs.size(), labels.size()), end="")
# sys.stdout.flush()
#
# pred_class = np.random.randint(0,high=NB_CLASSES,size=(len(dset),1))
# pred_proba_class = np.random.rand(len(dset),NB_CLASSES)
#
# dset.write_training_points( os.path.join(RESULT_DIR, "inputDEBUG_write_training_points.ply") )
# dset.write_pred_cloud( pred_class , os.path.join(RESULT_DIR, "inputDEBUG_write_pred_cloud.ply") )
# dset.write_pred_proba_cloud( pred_proba_class , os.path.join(RESULT_DIR, "inputDEBUG_write_pred_proba_cloud.ply") )
#
# dset.new_training_points(pred_proba_class)
#
################################################################################
################################################################################
################################################################################
# print("\n\n")
# dset.active() # 0 -> 1
# dset.active() # 1 -> 0
# dset.active() # 0 -> 1
# dset.active(False) # 1 -> 0
# dset.active(False) # 0 -> 0
# dset.active(False) # 0 -> 0
# dset.active(True) # 0 -> 1
# dset.active(True) # 1 -> 1
# dset.active(True) # 1 -> 1
# print("Active Learning --> dataset_len", len(dset))
#
# dataset_len = len(dset)
# dataloader_len = len(dset_loader)
#    print("Dataset size: {}".format(dataset_len))
#    print("DataLoader size: {}".format(dataloader_len))
#    print("Batch size: {}".format(BATCH_SIZE))
#
# start_time = time.time()
# for i,data in enumerate(dset_loader):
# inputs = data['input']
# if isinstance(inputs,list):
# inputs = Variable(inputs[0]).cuda()
# else:
# inputs = Variable(inputs).cuda()
# labels = Variable(data['label']).cuda()
#        print("\r\tEpoch time:{:07.2f} s,{:05.2f}%,{:08d},input:{},labels:{}".format(time.time() - start_time, 100*i*BATCH_SIZE/dataset_len, i*BATCH_SIZE, inputs.size(), labels.size()), end="")
# sys.stdout.flush()
#
# pred_class = np.random.randint(0,high=NB_CLASSES,size=(len(dset),1))
# pred_proba_class = np.random.rand(NB_CLASSES,len(dset))
# pred_proba_class = (pred_proba_class / np.sum(pred_proba_class, axis=0)).transpose()
#
# dset.write_training_points( os.path.join(RESULT_DIR, "inputDEBUG_write_training_points_active.ply") )
# dset.write_pred_cloud( pred_class , os.path.join(RESULT_DIR, "inputDEBUG_write_pred_cloud_test_active.ply") )
# dset.write_pred_proba_cloud( pred_proba_class , os.path.join(RESULT_DIR, "inputDEBUG_write_pred_proba_cloud_test_active.ply") )
#
# for _ in range(3):
# print("\n")
# pred_proba_class = np.random.rand(NB_CLASSES,len(dset))
# pred_proba_class = (pred_proba_class / np.sum(pred_proba_class, axis=0)).transpose()
# dset.new_training_points(pred_proba_class)
# print("Training --> dataset_len", len(dset))
#    dset.active(True) # to stay in active-learning mode AND pick up active_points again...
#
################################################################################
################################################################################
################################################################################
# print("\n\n")
# dset.active(False)
# dset.test_grid(False)
# dset.test(False)
# print("Training --> dataset_len", len(dset))
#
# dataset_len = len(dset)
# dataloader_len = len(dset_loader)
#    print("Dataset size: {}".format(dataset_len))
#    print("DataLoader size: {}".format(dataloader_len))
#    print("Batch size: {}".format(BATCH_SIZE))
#
# start_time = time.time()
# for i,data in enumerate(dset_loader):
# inputs = data['input']
# if isinstance(inputs,list):
# inputs = Variable(inputs[0]).cuda()
# else:
# inputs = Variable(inputs).cuda()
# labels = Variable(data['label']).cuda()
#        print("\r\tEpoch time:{:07.2f} s,{:05.2f}%,{:08d},input:{},labels:{}".format(time.time() - start_time, 100*i*BATCH_SIZE/dataset_len, i*BATCH_SIZE, inputs.size(), labels.size()), end="")
# sys.stdout.flush()
#
# pred_class = np.random.randint(0,high=NB_CLASSES,size=(len(dset),1))
# pred_proba_class = np.random.rand(len(dset),NB_CLASSES)
#
# dset.write_training_points( os.path.join(RESULT_DIR, "inputDEBUG_write_training_points.ply") )
# dset.write_pred_cloud( pred_class , os.path.join(RESULT_DIR, "inputDEBUG_write_pred_cloud.ply") )
# dset.write_pred_proba_cloud( pred_proba_class , os.path.join(RESULT_DIR, "inputDEBUG_write_pred_proba_cloud.ply") )
#
# dset.new_training_points(pred_proba_class)
###############################################################################
###############################################################################
###############################################################################
print("\n\n")
dset.test_grid() # 0 -> 1
# dset.test_grid() # 1 -> 0
# dset.test_grid() # 0 -> 1
# dset.test_grid(False) # 1 -> 0
# dset.test_grid(False) # 0 -> 0
# dset.test_grid(False) # 0 -> 0
# dset.test_grid(True) # 0 -> 1
# dset.test_grid(True) # 1 -> 1
# dset.test_grid(True) # 1 -> 1
print("Testing on Grid --> dataset_len", len(dset))
dataset_len = len(dset)
dataloader_len = len(dset_loader)
    print("Dataset size: {}".format(dataset_len))
    print("DataLoader size: {}".format(dataloader_len))
    print("Batch size: {}".format(BATCH_SIZE))
start_time = time.time()
for i,data in enumerate(dset_loader):
inputs = data['input']
if isinstance(inputs,list):
inputs = Variable(inputs[0]).cuda()
else:
inputs = Variable(inputs).cuda()
labels = Variable(data['label']).cuda()
        print("\r\tEpoch time:{:07.2f} s,{:05.2f}%,{:08d},input:{},labels:{}".format(time.time() - start_time, 100*i*BATCH_SIZE/dataset_len, i*BATCH_SIZE, inputs.size(), labels.size()), end="")
sys.stdout.flush()
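    # Random class predictions / probabilities stand in for real network
    # output so the export helpers below can be exercised without a model.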
pred_class = np.random.randint(0,high=NB_CLASSES,size=(len(dset),1))
pred_proba_class = np.random.rand(len(dset),NB_CLASSES)
dset.write_training_points( os.path.join(RESULT_DIR, "inputDEBUG_write_training_points_test_grid.ply") )
dset.write_pred_cloud( pred_class , os.path.join(RESULT_DIR, "inputDEBUG_write_pred_cloud_test_grid.ply") )
dset.write_pred_proba_cloud( pred_proba_class , os.path.join(RESULT_DIR, "inputDEBUG_write_pred_proba_cloud_test_grid.ply") )
dset.new_training_points(pred_proba_class)
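    # new_training_points feeds the predicted probabilities back into the
    # dataset, presumably to resample the training points for the next round.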
###############################################################################
###############################################################################
###############################################################################
print("\n\n")
dset.test() # 0 -> 1
dset.test() # 1 -> 0
dset.test() # 0 -> 1
dset.test(False) # 1 -> 0
dset.test(False) # 0 -> 0
dset.test(False) # 0 -> 0
dset.test(True) # 0 -> 1
dset.test(True) # 1 -> 1
dset.test(True) # 1 -> 1
print("Testing --> dataset_len", len(dset))
dataset_len = len(dset)
dataloader_len = len(dset_loader)
    print("Dataset size: {}".format(dataset_len))
    print("DataLoader size: {}".format(dataloader_len))
    print("Batch size: {}".format(BATCH_SIZE))
start_time = time.time()
for i,data in enumerate(dset_loader):
inputs = data['input']
if isinstance(inputs,list):
inputs = Variable(inputs[0]).cuda()
else:
inputs = Variable(inputs).cuda()
labels = Variable(data['label']).cuda()
        print("\r\tEpoch time:{:07.2f} s,{:05.2f}%,{:08d},input:{},labels:{}".format(time.time() - start_time, 100*i*BATCH_SIZE/dataset_len, i*BATCH_SIZE, inputs.size(), labels.size()), end="")
sys.stdout.flush()
pred_class = np.random.randint(0,high=NB_CLASSES,size=(len(dset),1))
pred_proba_class = np.random.rand(len(dset),NB_CLASSES)
dset.write_training_points( os.path.join(RESULT_DIR, "inputDEBUG_write_training_points_test.ply") )
dset.write_pred_cloud( pred_class , os.path.join(RESULT_DIR, "inputDEBUG_write_pred_cloud_test.ply") )
dset.write_pred_proba_cloud( pred_proba_class , os.path.join(RESULT_DIR, "inputDEBUG_write_pred_proba_cloud_test.ply") )
dset.new_training_points(pred_proba_class)
| 45.955932 | 199 | 0.56812 | 1,616 | 13,557 | 4.52599 | 0.098391 | 0.039377 | 0.042111 | 0.032814 | 0.823763 | 0.805442 | 0.757041 | 0.748154 | 0.712333 | 0.712333 | 0 | 0.017025 | 0.211477 | 13,557 | 294 | 200 | 46.112245 | 0.667166 | 0.552703 | 0 | 0.489583 | 0 | 0.020833 | 0.135761 | 0.068828 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.083333 | 0.135417 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
581150b173b46e9ce5d530fe24667bfa9d8ccaa6 | 15,026 | py | Python | pal/writer/register/c/field_accessor.py | mars-research/pal | 5977394cda8750ff5dcb89c2bf193ec1ef4cd137 | [
"MIT"
] | 26 | 2020-01-06T23:53:17.000Z | 2022-02-01T08:58:21.000Z | pal/writer/register/c/field_accessor.py | mars-research/pal | 5977394cda8750ff5dcb89c2bf193ec1ef4cd137 | [
"MIT"
] | 30 | 2019-11-13T00:55:22.000Z | 2022-01-06T08:09:35.000Z | pal/writer/register/c/field_accessor.py | mars-research/pal | 5977394cda8750ff5dcb89c2bf193ec1ef4cd137 | [
"MIT"
] | 14 | 2019-11-15T16:56:22.000Z | 2021-12-22T10:14:17.000Z | import pal.gadget
class CFieldAccessorWriter():
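    """Mixin that writes C accessor functions for a register field: get/set
    helpers for multi-bit fields, and enable/disable/is-enabled helpers when
    the field is a single bit (msb == lsb)."""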
def declare_field_accessors(self, outfile, register, field):
self._declare_field_constants(outfile, register, field)
if field.msb == field.lsb:
if register.is_readable():
self._declare_bitfield_is_enabled(outfile, register, field)
self._declare_bitfield_is_enabled_in_value(outfile, register, field)
self._declare_bitfield_is_disabled(outfile, register, field)
self._declare_bitfield_is_disabled_in_value(outfile, register, field)
if register.is_writeable():
self._declare_bitfield_enable(outfile, register, field)
self._declare_bitfield_enable_in_value(outfile, register, field)
self._declare_bitfield_disable(outfile, register, field)
self._declare_bitfield_disable_in_value(outfile, register, field)
else:
if register.is_readable():
self._declare_get_field(outfile, register, field)
self._declare_get_field_from_value(outfile, register, field)
if register.is_writeable():
self._declare_set_field(outfile, register, field)
self._declare_set_field_in_value(outfile, register, field)
def call_field_get(self, outfile, register, field, destination,
register_value):
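        # Emit one C statement that extracts `field` from an already-read
        # register value and assigns it to `destination`.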
if field.msb == field.lsb:
call = "{size} {dest} = pal_{component}{reg_name}_{field_name}_is_enabled_in_value({reg_val});".format(
size=self._register_size_type(register),
dest=destination,
component = register.component.lower() + "_" if register.component else "",
reg_name=register.name.lower(),
field_name=field.name.lower(),
reg_val=str(register_value)
)
else:
call = "{size} {dest} = pal_get_{component}{reg_name}_{field_name}_from_value({reg_val});".format(
size=self._register_size_type(register),
dest=destination,
component = register.component.lower() + "_" if register.component else "",
reg_name=register.name.lower(),
field_name=field.name.lower(),
reg_val=str(register_value)
)
outfile.write(call)
self.write_newline(outfile)
# -------------------------------------------------------------------------
# private
# -------------------------------------------------------------------------
def _declare_field_constants(self, outfile, register, field):
prefix = self._field_prefix(register, field)
name = prefix + "name"
val = '"' + field.name.lower() + '"'
self._declare_preprocessor_constant(outfile, name, val)
if field.long_name:
name = prefix + "long_name"
val = '"' + field.long_name + '"'
self._declare_preprocessor_constant(outfile, name, val)
name = prefix + "lsb"
val = str(field.lsb)
self._declare_preprocessor_constant(outfile, name, val)
name = prefix + "msb"
val = str(field.msb)
self._declare_preprocessor_constant(outfile, name, val)
name = prefix + "mask"
val = self._field_mask_hex_string(register, field)
self._declare_preprocessor_constant(outfile, name, val)
self.write_newline(outfile)
def _declare_bitfield_is_enabled(self, outfile, register, field):
size_type = self._register_size_type(register)
gadget = self.gadgets["pal.c.function_definition"]
gadget.return_type = size_type
gadget.args = []
gadget.name = self._bitfield_is_enabled_function_name(register, field)
if register.component:
view_type = self._view_type(register)
gadget.args.append((view_type, "view"))
if register.is_indexed:
gadget.args.append(("uint32_t", "index"))
self._declare_bitfield_is_enabled_details(outfile, register, field)
@pal.gadget.c.function_definition
def _declare_bitfield_is_enabled_details(self, outfile, register, field):
self.call_register_get(outfile, register, "value")
return_statement = "return (value & {mask}) != 0;".format(
mask=self._field_mask_string(register, field),
)
outfile.write(return_statement)
def _declare_bitfield_is_enabled_in_value(self, outfile, register, field):
size_type = self._register_size_type(register)
gadget = self.gadgets["pal.c.function_definition"]
gadget.return_type = size_type
gadget.name = self._bitfield_is_enabled_in_value_function_name(register, field)
gadget.args = [(size_type, "value")]
self._declare_bitfield_is_enabled_in_val_details(outfile, register, field)
@pal.gadget.c.function_definition
def _declare_bitfield_is_enabled_in_val_details(self, outfile, register, field):
f_body = "return (value & {mask}) != 0;".format(
mask=self._field_mask_string(register, field)
)
outfile.write(f_body)
def _declare_bitfield_is_disabled(self, outfile, register, field):
size_type = self._register_size_type(register)
gadget = self.gadgets["pal.c.function_definition"]
gadget.return_type = size_type
gadget.args = []
gadget.name = self._bitfield_is_disabled_function_name(register, field)
if register.component:
view_type = self._view_type(register)
gadget.args.append((view_type, "view"))
if register.is_indexed:
gadget.args.append(("uint32_t", "index"))
self._declare_bitfield_is_disabled_details(outfile, register, field)
@pal.gadget.c.function_definition
def _declare_bitfield_is_disabled_details(self, outfile, register, field):
self.call_register_get(outfile, register, "value")
return_statement = "return (value & {mask}) == 0;".format(
mask=self._field_mask_string(register, field),
)
outfile.write(return_statement)
def _declare_bitfield_is_disabled_in_value(self, outfile, register, field):
size_type = self._register_size_type(register)
gadget = self.gadgets["pal.c.function_definition"]
gadget.return_type = size_type
gadget.name = self._bitfield_is_disabled_in_value_function_name(register, field)
gadget.args = [(size_type, "value")]
self._declare_bitfield_is_disabled_in_value_details(outfile, register, field)
@pal.gadget.c.function_definition
def _declare_bitfield_is_disabled_in_value_details(self, outfile, register, field):
f_body = "return (value & {mask}) == 0;".format(
mask=self._field_mask_string(register, field)
)
outfile.write(f_body)
def _declare_bitfield_enable(self, outfile, register, field):
gadget = self.gadgets["pal.c.function_definition"]
gadget.return_type = "void"
gadget.args = []
gadget.name = self._bitfield_enable_function_name(register, field)
if register.component:
view_type = self._view_type(register)
gadget.args.append((view_type, "view"))
if register.is_indexed:
gadget.args.append(("uint32_t", "index"))
self._declare_bitfield_enable_details(outfile, register, field)
@pal.gadget.c.function_definition
def _declare_bitfield_enable_details(self, outfile, register, field):
self.call_register_get(outfile, register, "value")
reg_set = "{reg_set}({view}{index}value | {mask});".format(
reg_set=self._register_write_function_name(register),
view="view, " if register.component else "",
index="index, " if register.is_indexed else "",
mask=self._field_mask_string(register, field),
)
outfile.write(reg_set)
def _declare_bitfield_enable_in_value(self, outfile, register, field):
size_type = self._register_size_type(register)
gadget = self.gadgets["pal.c.function_definition"]
gadget.return_type = size_type
gadget.name = self._bitfield_enable_in_value_function_name(register, field)
gadget.args = [(size_type, "value")]
self._declare_bitfield_enable_in_value_details(outfile, register, field)
@pal.gadget.c.function_definition
def _declare_bitfield_enable_in_value_details(self, outfile, register, field):
f_body = "return value | {mask};".format(
mask=self._field_mask_string(register, field)
)
outfile.write(f_body)
def _declare_bitfield_disable(self, outfile, register, field):
gadget = self.gadgets["pal.c.function_definition"]
gadget.return_type = "void"
gadget.args = []
gadget.name = self._bitfield_disable_function_name(register, field)
if register.component:
view_type = self._view_type(register)
gadget.args.append((view_type, "view"))
if register.is_indexed:
gadget.args.append(("uint32_t", "index"))
self._declare_bitfield_disable_details(outfile, register, field)
@pal.gadget.c.function_definition
def _declare_bitfield_disable_details(self, outfile, register, field):
self.call_register_get(outfile, register, "value")
reg_set = "{reg_set}({view}{index}value & ~{mask});".format(
reg_set=self._register_write_function_name(register),
view="view, " if register.component else "",
index="index, " if register.is_indexed else "",
mask=self._field_mask_string(register, field),
)
outfile.write(reg_set)
def _declare_bitfield_disable_in_value(self, outfile, register, field):
size_type = self._register_size_type(register)
gadget = self.gadgets["pal.c.function_definition"]
gadget.return_type = size_type
gadget.name = self._bitfield_disable_in_value_function_name(register, field)
gadget.args = [(size_type, "value")]
self._declare_bitfield_disable_in_value_details(outfile, register, field)
@pal.gadget.c.function_definition
def _declare_bitfield_disable_in_value_details(self, outfile, register, field):
f_body = "return value & ~{mask};".format(
mask=self._field_mask_string(register, field)
)
outfile.write(f_body)
def _declare_get_field(self, outfile, register, field):
size_type = self._register_size_type(register)
gadget = self.gadgets["pal.c.function_definition"]
gadget.return_type = size_type
gadget.args = []
gadget.name = self._field_read_function_name(register, field)
if register.component:
view_type = self._view_type(register)
gadget.args.append((view_type, "view"))
if register.is_indexed:
gadget.args.append(("uint32_t", "index"))
self._declare_get_field_details(outfile, register, field)
@pal.gadget.c.function_definition
def _declare_get_field_details(self, outfile, register, field):
self.call_register_get(outfile, register, "value")
return_statement = "return (value & {mask}) >> {lsb};".format(
mask=self._field_mask_string(register, field),
lsb=self._field_lsb_string(register, field),
)
outfile.write(return_statement)
def _declare_get_field_from_value(self, outfile, register, field):
size_type = self._register_size_type(register)
gadget = self.gadgets["pal.c.function_definition"]
gadget.return_type = size_type
gadget.name = self._field_read_from_value_function_name(register, field)
gadget.args = [(size_type, "value")]
self._declare_get_field_from_value_details(outfile, register, field)
@pal.gadget.c.function_definition
def _declare_get_field_from_value_details(self, outfile, register, field):
        f_body = "return (value & {mask}) >> {lsb};".format(
            mask=self._field_mask_string(register, field),
            lsb=self._field_lsb_string(register, field)
        )
outfile.write(f_body)
def _declare_set_field(self, outfile, register, field):
size_type = self._register_size_type(register)
gadget = self.gadgets["pal.c.function_definition"]
gadget.return_type = "void"
gadget.name = self._field_write_function_name(register, field)
gadget.args = []
if register.component:
view_type = self._view_type(register)
gadget.args.append((view_type, "view"))
gadget.args.append((size_type, "value"))
if register.is_indexed:
gadget.args.append(("uint32_t", "index"))
self._declare_field_set_details(outfile, register, field)
@pal.gadget.c.function_definition
def _declare_field_set_details(self, outfile, register, field):
self.call_register_get(outfile, register, "register_value")
old_field_removed = "register_value = register_value & ~{mask};".format(
mask=self._field_mask_string(register, field),
)
new_field_added = "register_value |= ((value << {lsb}) & {mask});".format(
mask=self._field_mask_string(register, field),
lsb=self._field_lsb_string(register, field),
)
reg_set = "{reg_set}({view}{index}register_value);".format(
reg_set=self._register_write_function_name(register),
view="view, " if register.component else "",
index="index, " if register.is_indexed else "",
)
outfile.write(old_field_removed)
self.write_newline(outfile)
outfile.write(new_field_added)
self.write_newline(outfile)
outfile.write(reg_set)
def _declare_set_field_in_value(self, outfile, register, field):
size_type = self._register_size_type(register)
gadget = self.gadgets["pal.c.function_definition"]
gadget.return_type = size_type
gadget.args = [(size_type, "field_value"), (size_type, "register_value")]
gadget.name = self._field_write_in_value_function_name(register, field)
self._declare_set_field_in_value_details(outfile, register, field)
@pal.gadget.c.function_definition
def _declare_set_field_in_value_details(self, outfile, register, field):
old_field_removed = "register_value = register_value & ~{mask};".format(
mask=self._field_mask_string(register, field),
)
new_field_added = "return register_value | ((field_value << {lsb}) & {mask});".format(
mask=self._field_mask_string(register, field),
lsb=self._field_lsb_string(register, field),
)
outfile.write(old_field_removed)
self.write_newline(outfile)
outfile.write(new_field_added)
| 40.284182 | 115 | 0.654133 | 1,736 | 15,026 | 5.292627 | 0.043203 | 0.118851 | 0.113191 | 0.070527 | 0.920004 | 0.884959 | 0.8399 | 0.807793 | 0.783631 | 0.761319 | 0 | 0.001386 | 0.231599 | 15,026 | 372 | 116 | 40.392473 | 0.794388 | 0.010315 | 0 | 0.588652 | 0 | 0.007092 | 0.085828 | 0.035649 | 0 | 0 | 0 | 0 | 0 | 1 | 0.095745 | false | 0 | 0.003546 | 0 | 0.102837 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5876a92666c685176711a23464fa02bb123d8010 | 7,343 | py | Python | pirates/leveleditor/worldData/interior_mansion_unlit.py | itsyaboyrocket/pirates | 6ca1e7d571c670b0d976f65e608235707b5737e3 | [
"BSD-3-Clause"
] | 3 | 2021-02-25T06:38:13.000Z | 2022-03-22T07:00:15.000Z | pirates/leveleditor/worldData/interior_mansion_unlit.py | itsyaboyrocket/pirates | 6ca1e7d571c670b0d976f65e608235707b5737e3 | [
"BSD-3-Clause"
] | null | null | null | pirates/leveleditor/worldData/interior_mansion_unlit.py | itsyaboyrocket/pirates | 6ca1e7d571c670b0d976f65e608235707b5737e3 | [
"BSD-3-Clause"
] | 1 | 2021-02-25T06:38:17.000Z | 2021-02-25T06:38:17.000Z | # uncompyle6 version 3.2.0
# Python bytecode 2.4 (62061)
# Decompiled from: Python 2.7.14 (v2.7.14:84471935ed, Sep 16 2017, 20:19:30) [MSC v.1500 32 bit (Intel)]
# Embedded file name: pirates.leveleditor.worldData.interior_mansion_unlit
from pandac.PandaModules import Point3, VBase3, Vec4
objectStruct = {'Objects': {'1155772882.54fxlara0': {'Type': 'Building Interior', 'Name': '', 'Instanced': False, 'Objects': {'1166143524.85kmuller': {'Type': 'Light_Fixtures', 'Hpr': VBase3(90.15, 0.0, 0.0), 'Pos': Point3(19.548, 6.893, 6.866), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/sconce_govs'}}, '1166143586.68kmuller': {'Type': 'Light_Fixtures', 'Hpr': VBase3(90.15, 0.0, 0.0), 'Pos': Point3(19.353, -7.794, 7.086), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/sconce_govs'}}, '1166143623.15kmuller': {'Type': 'Light_Fixtures', 'Hpr': VBase3(0.865, 0.0, 0.0), 'Pos': Point3(2.84, -18.939, 6.978), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/sconce_govs'}}, '1166143681.42kmuller': {'Type': 'Light_Fixtures', 'Hpr': VBase3(0.865, 0.0, 0.0), 'Pos': Point3(-2.509, -18.897, 6.978), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/sconce_govs'}}, '1166143724.31kmuller': {'Type': 'Light_Fixtures', 'Hpr': VBase3(-91.184, 0.0, 0.0), 'Pos': Point3(-19.221, -7.77, 6.974), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/sconce_govs'}}, '1166143764.89kmuller': {'Type': 'Light_Fixtures', 'Hpr': VBase3(-91.184, 0.0, 0.0), 'Pos': Point3(-19.367, 6.655, 6.881), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/sconce_govs'}}, '1166143817.12kmuller': {'Type': 'Light_Fixtures', 'Hpr': VBase3(-177.626, 0.0, 0.0), 'Pos': Point3(-3.766, 19.726, 6.879), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/sconce_govs'}}, '1166143875.54kmuller': {'Type': 'Light_Fixtures', 'Hpr': VBase3(-177.626, 0.0, 0.0), 'Pos': Point3(5.791, 19.735, 6.899), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/sconce_govs'}}, '1166143932.26kmuller': {'Type': 'Wall_Hangings', 'Hpr': VBase3(179.625, 0.0, 0.0), 'Pos': Point3(10.415, 19.937, 6.825), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/Map_02'}}, '1166143990.79kmuller': {'Type': 'Wall_Hangings', 'Hpr': VBase3(179.625, 0.0, 0.0), 'Pos': Point3(-10.912, 19.75, 6.948), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/portrait_gov'}}, '1166144156.74kmuller': {'Type': 'Wall_Hangings', 'Hpr': VBase3(0.824, 0.0, 0.0), 'Pos': Point3(-13.751, -20.837, 6.846), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/Map_03'}}, '1166144410.68kmuller': {'Type': 'Furniture - Fancy', 'DisableCollision': False, 'Hpr': VBase3(-18.923, 0.0, 0.0), 'Pos': Point3(16.275, 0.42, 0.041), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/stool_fancy'}}, '1166144528.23kmuller': {'Type': 'Furniture - Fancy', 'DisableCollision': False, 'Hpr': VBase3(0.0, 0.0, 0.0), 'Pos': Point3(0.96, -4.725, -0.002), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/desk_gov'}}, '1166144640.7kmuller': {'Type': 'Furniture - Fancy', 'DisableCollision': False, 'Hpr': VBase3(-179.961, 1.372, 0.0), 'Pos': Point3(6.304, -18.654, 0.059), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/bookshelf_fancy'}}, '1166144680.15kmuller': {'Type': 'Furniture - Fancy', 'DisableCollision': False, 'Hpr': VBase3(-179.823, 0.0, 0.0), 'Pos': Point3(-5.75, -18.46, -0.029), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/cabinet_fancy_tall'}}, '1166144753.35kmuller': {'Type': 'Furniture - Fancy', 'DisableCollision': False, 'Hpr': VBase3(121.726, 0.0, 0.0), 'Pos': Point3(-13.221, -12.316, -0.102), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/sofa_fancy'}}, 
'1166144793.17kmuller': {'Type': 'Furniture - Fancy', 'DisableCollision': False, 'Hpr': VBase3(90.075, 0.0, 0.0), 'Pos': Point3(-18.44, -0.216, -0.018), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/cabinet_fancy_low'}}, '1166144871.87kmuller': {'Type': 'Furniture - Fancy', 'DisableCollision': True, 'Hpr': VBase3(-179.935, 0.0, 0.0), 'Pos': Point3(-0.245, -19.338, -0.203), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/clock_fancy_tall'}}, '1166206746.22kmuller': {'Type': 'Light_Fixtures', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(-0.583, -0.202, 14.248), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/chandelier_govs'}}, '1166219819.11kmuller': {'Type': 'Baskets', 'DisableCollision': False, 'Hpr': VBase3(7.472, 0.0, 0.0), 'Pos': Point3(-2.05, -6.242, 2.954), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/box_for_letters'}}, '1185384923.75kmuller': {'Type': 'Collision Barrier', 'DisableCollision': False, 'Hpr': VBase3(-179.632, 0.0, 0.0), 'Pos': Point3(-0.043, -18.523, -0.474), 'Scale': VBase3(1.0, 1.0, 2.03), 'Visual': {'Model': 'models/misc/pir_m_prp_lev_cambarrier_plane'}}}, 'Visual': {'Model': 'models/buildings/interior_mansion_gov'}}}, 'Node Links': [], 'Layers': {}, 'ObjectIds': {'1155772882.54fxlara0': '["Objects"]["1155772882.54fxlara0"]', '1166143524.85kmuller': '["Objects"]["1155772882.54fxlara0"]["Objects"]["1166143524.85kmuller"]', '1166143586.68kmuller': '["Objects"]["1155772882.54fxlara0"]["Objects"]["1166143586.68kmuller"]', '1166143623.15kmuller': '["Objects"]["1155772882.54fxlara0"]["Objects"]["1166143623.15kmuller"]', '1166143681.42kmuller': '["Objects"]["1155772882.54fxlara0"]["Objects"]["1166143681.42kmuller"]', '1166143724.31kmuller': '["Objects"]["1155772882.54fxlara0"]["Objects"]["1166143724.31kmuller"]', '1166143764.89kmuller': '["Objects"]["1155772882.54fxlara0"]["Objects"]["1166143764.89kmuller"]', '1166143817.12kmuller': '["Objects"]["1155772882.54fxlara0"]["Objects"]["1166143817.12kmuller"]', '1166143875.54kmuller': '["Objects"]["1155772882.54fxlara0"]["Objects"]["1166143875.54kmuller"]', '1166143932.26kmuller': '["Objects"]["1155772882.54fxlara0"]["Objects"]["1166143932.26kmuller"]', '1166143990.79kmuller': '["Objects"]["1155772882.54fxlara0"]["Objects"]["1166143990.79kmuller"]', '1166144156.74kmuller': '["Objects"]["1155772882.54fxlara0"]["Objects"]["1166144156.74kmuller"]', '1166144410.68kmuller': '["Objects"]["1155772882.54fxlara0"]["Objects"]["1166144410.68kmuller"]', '1166144528.23kmuller': '["Objects"]["1155772882.54fxlara0"]["Objects"]["1166144528.23kmuller"]', '1166144640.7kmuller': '["Objects"]["1155772882.54fxlara0"]["Objects"]["1166144640.7kmuller"]', '1166144680.15kmuller': '["Objects"]["1155772882.54fxlara0"]["Objects"]["1166144680.15kmuller"]', '1166144753.35kmuller': '["Objects"]["1155772882.54fxlara0"]["Objects"]["1166144753.35kmuller"]', '1166144793.17kmuller': '["Objects"]["1155772882.54fxlara0"]["Objects"]["1166144793.17kmuller"]', '1166144871.87kmuller': '["Objects"]["1155772882.54fxlara0"]["Objects"]["1166144871.87kmuller"]', '1166206746.22kmuller': '["Objects"]["1155772882.54fxlara0"]["Objects"]["1166206746.22kmuller"]', '1166219819.11kmuller': '["Objects"]["1155772882.54fxlara0"]["Objects"]["1166219819.11kmuller"]', '1185384923.75kmuller': '["Objects"]["1155772882.54fxlara0"]["Objects"]["1185384923.75kmuller"]'}}
extraInfo = {'camPos': Point3(-430, 548, 290), 'camHpr': VBase3(-141, -21, 0), 'focalLength': 1.39999997616} | 1,049 | 6,944 | 0.661038 | 1,017 | 7,343 | 4.728614 | 0.230089 | 0.027033 | 0.027449 | 0.034103 | 0.422125 | 0.37513 | 0.366604 | 0.308588 | 0.287378 | 0.280724 | 0 | 0.262425 | 0.071088 | 7,343 | 7 | 6,945 | 1,049 | 0.442604 | 0.03105 | 0 | 0 | 0 | 0 | 0.571368 | 0.287161 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
58781e7c8ae1df5d13451376c5626dd0812ada8a | 3,298 | py | Python | app/interface/mykad.py | justanotherresearchanddevelopment/MalaysianIncomeTaxCalculator | acfac285f0876a5fa462e77dbd70b656a76eec06 | [
"Apache-2.0"
] | null | null | null | app/interface/mykad.py | justanotherresearchanddevelopment/MalaysianIncomeTaxCalculator | acfac285f0876a5fa462e77dbd70b656a76eec06 | [
"Apache-2.0"
] | null | null | null | app/interface/mykad.py | justanotherresearchanddevelopment/MalaysianIncomeTaxCalculator | acfac285f0876a5fa462e77dbd70b656a76eec06 | [
"Apache-2.0"
] | null | null | null | class MyKad:
def __init__(self, mykad):
self.mykad = mykad
def getBirthPlace(self):
if self.mykad[6:8] == "01" : return "Johor"
if self.mykad[6:8] == "21" : return "Johor"
if self.mykad[6:8] == "22" : return "Johor"
if self.mykad[6:8] == "23" : return "Johor"
if self.mykad[6:8] == "24" : return "Johor"
if self.mykad[6:8] == "02" : return "Kedah"
if self.mykad[6:8] == "25" : return "Kedah"
if self.mykad[6:8] == "26" : return "Kedah"
if self.mykad[6:8] == "27" : return "Kedah"
if self.mykad[6:8] == "03" : return "Kelantan"
if self.mykad[6:8] == "28" : return "Kelantan"
if self.mykad[6:8] == "29 " : return "Kelantan"
if self.mykad[6:8] == "04" : return "Malacca"
if self.mykad[6:8] == "30" : return "Malacca"
if self.mykad[6:8] == "05" : return "Negeri Sembilan"
if self.mykad[6:8] == "31" : return "Negeri Sembilan"
if self.mykad[6:8] == "59" : return "Negeri Sembilan"
if self.mykad[6:8] == "06" : return "Pahang"
if self.mykad[6:8] == "32" : return "Pahang"
if self.mykad[6:8] == "33" : return "Pahang"
if self.mykad[6:8] == "07" : return "Penang"
if self.mykad[6:8] == "34" : return "Penang"
if self.mykad[6:8] == "35" : return "Penang"
if self.mykad[6:8] == "08" : return "Perak"
if self.mykad[6:8] == "36" : return "Perak"
if self.mykad[6:8] == "37" : return "Perak"
if self.mykad[6:8] == "38" : return "Perak"
if self.mykad[6:8] == "39" : return "Perak"
if self.mykad[6:8] == "09" : return "Perlis"
if self.mykad[6:8] == "40" : return "Perlis"
if self.mykad[6:8] == "10" : return "Selangor"
if self.mykad[6:8] == "41" : return "Selangor"
if self.mykad[6:8] == "42" : return "Selangor"
if self.mykad[6:8] == "43" : return "Selangor"
if self.mykad[6:8] == "44" : return "Selangor"
if self.mykad[6:8] == "11" : return "Terengganu"
if self.mykad[6:8] == "45" : return "Terengganu"
if self.mykad[6:8] == "46" : return "Terengganu"
if self.mykad[6:8] == "12" : return "Sabah"
if self.mykad[6:8] == "47" : return "Sabah"
if self.mykad[6:8] == "48" : return "Sabah"
if self.mykad[6:8] == "49" : return "Sabah"
if self.mykad[6:8] == "13" : return "Sarawak"
if self.mykad[6:8] == "50" : return "Sarawak"
if self.mykad[6:8] == "51" : return "Sarawak"
if self.mykad[6:8] == "52" : return "Sarawak"
if self.mykad[6:8] == "53" : return "Sarawak"
if self.mykad[6:8] == "14" : return "Federal Territory of Kuala Lumpur"
if self.mykad[6:8] == "54" : return "Federal Territory of Kuala Lumpur"
if self.mykad[6:8] == "55" : return "Federal Territory of Kuala Lumpur"
if self.mykad[6:8] == "56" : return "Federal Territory of Kuala Lumpur"
if self.mykad[6:8] == "57" : return "Federal Territory of Kuala Lumpur"
if self.mykad[6:8] == "15" : return "Federal Territory of Labuan"
if self.mykad[6:8] == "58" : return "Federal Territory of Labuan"
if self.mykad[6:8] == "16" : return "Federal Territory of Putrajaya"
return None
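# Minimal usage sketch (the IC number below is invented for illustration):
#   holder = MyKad("900101145678")
#   holder.getBirthPlace()  # -> "Federal Territory of Kuala Lumpur" (code "14")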
| 53.193548 | 79 | 0.537295 | 486 | 3,298 | 3.63786 | 0.179012 | 0.290158 | 0.342195 | 0.373303 | 0.879525 | 0.872172 | 0.872172 | 0.240385 | 0.184389 | 0.184389 | 0 | 0.092166 | 0.276228 | 3,298 | 62 | 80 | 53.193548 | 0.648513 | 0 | 0 | 0 | 0 | 0 | 0.207942 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.033333 | false | 0 | 0 | 0 | 0.066667 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
542f05ccb28a220de27c3e8ff196a694793aeceb | 86 | py | Python | unet3d/train/__init__.py | Crush18/3DUnetCNN | 7e257f712bd11510ebefef44c13ef48329858fca | [
"MIT"
] | 1,624 | 2017-02-15T13:41:39.000Z | 2022-03-29T11:51:57.000Z | unet3d/train/__init__.py | hjt996/3DUnetCNN | be2573c52169b725075acf182374f7098ee792d1 | [
"MIT"
] | 271 | 2017-02-15T22:46:04.000Z | 2022-03-27T11:04:59.000Z | unet3d/train/__init__.py | hjt996/3DUnetCNN | be2573c52169b725075acf182374f7098ee792d1 | [
"MIT"
] | 663 | 2017-02-23T04:27:51.000Z | 2022-03-31T06:36:30.000Z | from .train import run_training_with_package
run_training = run_training_with_package
| 28.666667 | 44 | 0.895349 | 13 | 86 | 5.384615 | 0.538462 | 0.471429 | 0.428571 | 0.628571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081395 | 86 | 2 | 45 | 43 | 0.886076 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
545c9e9f3eeba832e7a8c2cc50d6fb5acba51496 | 115 | py | Python | cluster/__init__.py | george-j-zhu/timeseriesprocessing | 5d774a438c5e835a8d5c802009f4d5303388b69d | [
"CC-BY-4.0"
] | 1 | 2018-06-26T05:27:55.000Z | 2018-06-26T05:27:55.000Z | cluster/__init__.py | george-j-zhu/timeseriesprocessing | 5d774a438c5e835a8d5c802009f4d5303388b69d | [
"CC-BY-4.0"
] | null | null | null | cluster/__init__.py | george-j-zhu/timeseriesprocessing | 5d774a438c5e835a8d5c802009f4d5303388b69d | [
"CC-BY-4.0"
] | null | null | null | from . import timeSeriesCluster
from . import kMeans
from . import gaussianMixture
from . import cluster_utilities
| 23 | 31 | 0.826087 | 13 | 115 | 7.230769 | 0.538462 | 0.425532 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.13913 | 115 | 4 | 32 | 28.75 | 0.949495 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
49ea4a00ea95e69f2bab3ea893023cd45402c672 | 40 | py | Python | src/ml/__init__.py | USArmyResearchLab/ARL-UPPER | 2f79f25338f18655b2a19c8afe3fed267cc0f198 | [
"Apache-2.0"
] | 4 | 2020-09-14T06:13:04.000Z | 2020-11-21T07:10:36.000Z | src/ml/__init__.py | USArmyResearchLab/ARL-UPPER | 2f79f25338f18655b2a19c8afe3fed267cc0f198 | [
"Apache-2.0"
] | null | null | null | src/ml/__init__.py | USArmyResearchLab/ARL-UPPER | 2f79f25338f18655b2a19c8afe3fed267cc0f198 | [
"Apache-2.0"
] | 2 | 2020-03-15T17:59:26.000Z | 2020-09-14T06:13:05.000Z | from .models import xgb_model, lr_model
| 20 | 39 | 0.825 | 7 | 40 | 4.428571 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 40 | 1 | 40 | 40 | 0.885714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b7313b9ff8295a4959f0d837789347155eb4917f | 48 | py | Python | elf/types/section/types/relocations/__init__.py | Valmarelox/elftoolsng | 99c3f4913a7e477007b1d81df83274d7657bf693 | [
"MIT"
] | null | null | null | elf/types/section/types/relocations/__init__.py | Valmarelox/elftoolsng | 99c3f4913a7e477007b1d81df83274d7657bf693 | [
"MIT"
] | null | null | null | elf/types/section/types/relocations/__init__.py | Valmarelox/elftoolsng | 99c3f4913a7e477007b1d81df83274d7657bf693 | [
"MIT"
] | null | null | null | from .rela_section import RelactionAddendSection | 48 | 48 | 0.916667 | 5 | 48 | 8.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0625 | 48 | 1 | 48 | 48 | 0.955556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3f89e0ff813be2732cf4259d19cceba27456a2a2 | 2,357 | py | Python | tests/test_send_camera.py | waider/gopro-py-api | b18b5458f5bbe689f468842d6888104317786de8 | [
"MIT"
] | 1 | 2019-05-06T21:48:54.000Z | 2019-05-06T21:48:54.000Z | tests/test_send_camera.py | waider/gopro-py-api | b18b5458f5bbe689f468842d6888104317786de8 | [
"MIT"
] | null | null | null | tests/test_send_camera.py | waider/gopro-py-api | b18b5458f5bbe689f468842d6888104317786de8 | [
"MIT"
] | null | null | null | import urllib.request
import io
from .conftest import GoProCameraTest
from goprocam import GoProCamera
from socket import timeout
class SendCameraTest(GoProCameraTest):
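    # These tests stub out GoPro.getPassword (and, where needed,
    # urllib.request.urlopen) so the exact URL built by sendCamera can be
    # asserted without talking to a real camera.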
def test_send_camera_default(self):
with self.monkeypatch.context() as m:
m.setattr(GoProCamera.GoPro, 'getPassword',
lambda self: 'password')
# this returns nothing so we need to be a bit more clever
# to verify it
def fake_urlopen(url, *args, **kwargs):
assert url == 'http://10.5.5.9/camera/foo?t=password'
return io.BytesIO('{}'.encode('utf8'))
m.setattr(urllib.request, 'urlopen', fake_urlopen)
self.goprocam.sendCamera('foo')
def test_send_camera_with_value(self):
with self.monkeypatch.context() as m:
m.setattr(GoProCamera.GoPro, 'getPassword',
lambda self: 'password')
def fake_urlopen(url, *args, **kwargs):
assert url == 'http://10.5.5.9/camera/foo?t=password&p=bar'
return io.BytesIO('{}'.encode('utf8'))
m.setattr(urllib.request, 'urlopen', fake_urlopen)
self.goprocam.sendCamera('foo', 'bar')
def test_send_camera_with_hex_value(self):
with self.monkeypatch.context() as m:
m.setattr(GoProCamera.GoPro, 'getPassword',
lambda self: 'password')
def fake_urlopen(url, *args, **kwargs):
assert url == 'http://10.5.5.9/camera/foo?t=password&p=%FF'
return io.BytesIO('{}'.encode('utf8'))
m.setattr(urllib.request, 'urlopen', fake_urlopen)
self.goprocam.sendCamera('foo', 'FF')
def test_send_camera_error(self):
# just for coverage, really. can't test as it stands.
with self.monkeypatch.context() as m:
m.setattr(GoProCamera.GoPro, 'getPassword',
lambda self: 'password')
self.goprocam.sendCamera('foo', 'bar')
def test_send_camera_timeout(self):
# same
with self.monkeypatch.context() as m:
m.setattr(GoProCamera.GoPro, 'getPassword',
lambda self: 'password')
self.responses['/camera/foo?t=password&p=bar'] = timeout()
self.goprocam.sendCamera('foo', 'bar')
| 39.283333 | 75 | 0.588884 | 273 | 2,357 | 4.996337 | 0.271062 | 0.046921 | 0.040323 | 0.062317 | 0.760264 | 0.721408 | 0.703079 | 0.703079 | 0.703079 | 0.658358 | 0 | 0.010695 | 0.285957 | 2,357 | 59 | 76 | 39.949153 | 0.799762 | 0.053034 | 0 | 0.613636 | 0 | 0.045455 | 0.13965 | 0.012573 | 0 | 0 | 0 | 0 | 0.068182 | 1 | 0.181818 | false | 0.318182 | 0.113636 | 0 | 0.386364 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
3fd5470aec5aa9006fcfb966edb49811a0f1e47e | 25 | py | Python | scanrpc/__init__.py | PureStake/substrate-scanrpc | f2a5368719c59e6a335cdbfdbcf0747e57278aae | [
"MIT"
] | null | null | null | scanrpc/__init__.py | PureStake/substrate-scanrpc | f2a5368719c59e6a335cdbfdbcf0747e57278aae | [
"MIT"
] | null | null | null | scanrpc/__init__.py | PureStake/substrate-scanrpc | f2a5368719c59e6a335cdbfdbcf0747e57278aae | [
"MIT"
] | null | null | null | from .scanrpc import main | 25 | 25 | 0.84 | 4 | 25 | 5.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12 | 25 | 1 | 25 | 25 | 0.954545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b203299920df62cd2a5a1284f8e573c7ac814993 | 10,382 | py | Python | venv/Lib/site-packages/keystoneauth1/tests/unit/loading/test_adapter.py | prasoon-uta/IBM-coud-storage | 82a6876316715efbd0b492d0d467dde0ab26a56b | [
"Apache-2.0"
] | null | null | null | venv/Lib/site-packages/keystoneauth1/tests/unit/loading/test_adapter.py | prasoon-uta/IBM-coud-storage | 82a6876316715efbd0b492d0d467dde0ab26a56b | [
"Apache-2.0"
] | null | null | null | venv/Lib/site-packages/keystoneauth1/tests/unit/loading/test_adapter.py | prasoon-uta/IBM-coud-storage | 82a6876316715efbd0b492d0d467dde0ab26a56b | [
"Apache-2.0"
] | null | null | null | # Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import uuid
from oslo_config import cfg
from oslo_config import fixture as config
from keystoneauth1 import loading
from keystoneauth1.tests.unit.loading import utils
class ConfLoadingTests(utils.TestCase):
GROUP = 'adaptergroup'
def setUp(self):
super(ConfLoadingTests, self).setUp()
self.conf_fx = self.useFixture(config.Config())
loading.register_adapter_conf_options(self.conf_fx.conf, self.GROUP,
include_deprecated=False)
def test_load(self):
self.conf_fx.config(
service_type='type', service_name='name',
valid_interfaces='internal',
region_name='region', endpoint_override='endpoint',
version='2.0', group=self.GROUP)
adap = loading.load_adapter_from_conf_options(
self.conf_fx.conf, self.GROUP, session='session', auth='auth')
self.assertEqual('type', adap.service_type)
self.assertEqual('name', adap.service_name)
self.assertEqual(['internal'], adap.interface)
self.assertEqual('region', adap.region_name)
self.assertEqual('endpoint', adap.endpoint_override)
self.assertEqual('session', adap.session)
self.assertEqual('auth', adap.auth)
self.assertEqual('2.0', adap.version)
self.assertIsNone(adap.min_version)
self.assertIsNone(adap.max_version)
def test_load_valid_interfaces_list(self):
self.conf_fx.config(
service_type='type', service_name='name',
valid_interfaces=['internal', 'public'],
region_name='region', endpoint_override='endpoint',
version='2.0', group=self.GROUP)
adap = loading.load_adapter_from_conf_options(
self.conf_fx.conf, self.GROUP, session='session', auth='auth')
self.assertEqual('type', adap.service_type)
self.assertEqual('name', adap.service_name)
self.assertEqual(['internal', 'public'], adap.interface)
self.assertEqual('region', adap.region_name)
self.assertEqual('endpoint', adap.endpoint_override)
self.assertEqual('session', adap.session)
self.assertEqual('auth', adap.auth)
self.assertEqual('2.0', adap.version)
self.assertIsNone(adap.min_version)
self.assertIsNone(adap.max_version)
def test_load_valid_interfaces_comma_list(self):
self.conf_fx.config(
service_type='type', service_name='name',
valid_interfaces='internal,public',
region_name='region', endpoint_override='endpoint',
version='2.0', group=self.GROUP)
adap = loading.load_adapter_from_conf_options(
self.conf_fx.conf, self.GROUP, session='session', auth='auth')
self.assertEqual('type', adap.service_type)
self.assertEqual('name', adap.service_name)
self.assertEqual(['internal', 'public'], adap.interface)
self.assertEqual('region', adap.region_name)
self.assertEqual('endpoint', adap.endpoint_override)
self.assertEqual('session', adap.session)
self.assertEqual('auth', adap.auth)
self.assertEqual('2.0', adap.version)
self.assertIsNone(adap.min_version)
self.assertIsNone(adap.max_version)
def test_load_bad_valid_interfaces_value(self):
self.conf_fx.config(
service_type='type', service_name='name',
valid_interfaces='bad',
region_name='region', endpoint_override='endpoint',
version='2.0', group=self.GROUP)
self.assertRaises(
TypeError,
loading.load_adapter_from_conf_options,
self.conf_fx.conf, self.GROUP, session='session', auth='auth')
def test_load_version_range(self):
self.conf_fx.config(
service_type='type', service_name='name',
valid_interfaces='internal',
region_name='region', endpoint_override='endpoint',
min_version='2.0', max_version='3.0', group=self.GROUP)
adap = loading.load_adapter_from_conf_options(
self.conf_fx.conf, self.GROUP, session='session', auth='auth')
self.assertEqual('type', adap.service_type)
self.assertEqual('name', adap.service_name)
self.assertEqual(['internal'], adap.interface)
self.assertEqual('region', adap.region_name)
self.assertEqual('endpoint', adap.endpoint_override)
self.assertEqual('session', adap.session)
self.assertEqual('auth', adap.auth)
self.assertIsNone(adap.version)
self.assertEqual('2.0', adap.min_version)
self.assertEqual('3.0', adap.max_version)
def test_version_mutex_min(self):
self.conf_fx.config(
service_type='type', service_name='name',
valid_interfaces='iface',
region_name='region', endpoint_override='endpoint',
version='2.0', min_version='2.0', group=self.GROUP)
self.assertRaises(
TypeError,
loading.load_adapter_from_conf_options,
self.conf_fx.conf, self.GROUP, session='session', auth='auth')
def test_version_mutex_max(self):
self.conf_fx.config(
service_type='type', service_name='name',
valid_interfaces='iface',
region_name='region', endpoint_override='endpoint',
version='2.0', max_version='3.0', group=self.GROUP)
self.assertRaises(
TypeError,
loading.load_adapter_from_conf_options,
self.conf_fx.conf, self.GROUP, session='session', auth='auth')
def test_version_mutex_minmax(self):
self.conf_fx.config(
service_type='type', service_name='name',
valid_interfaces='iface',
region_name='region', endpoint_override='endpoint',
version='2.0', min_version='2.0', max_version='3.0',
group=self.GROUP)
self.assertRaises(
TypeError,
loading.load_adapter_from_conf_options,
self.conf_fx.conf, self.GROUP, session='session', auth='auth')
def test_get_conf_options(self):
opts = loading.get_adapter_conf_options()
for opt in opts:
if opt.name != 'valid-interfaces':
self.assertIsInstance(opt, cfg.StrOpt)
else:
self.assertIsInstance(opt, cfg.ListOpt)
self.assertEqual({'service-type', 'service-name',
'interface', 'valid-interfaces',
'region-name', 'endpoint-override', 'version',
'min-version', 'max-version'},
{opt.name for opt in opts})
def test_get_conf_options_undeprecated(self):
opts = loading.get_adapter_conf_options(include_deprecated=False)
for opt in opts:
if opt.name != 'valid-interfaces':
self.assertIsInstance(opt, cfg.StrOpt)
else:
self.assertIsInstance(opt, cfg.ListOpt)
self.assertEqual({'service-type', 'service-name', 'valid-interfaces',
'region-name', 'endpoint-override', 'version',
'min-version', 'max-version'},
{opt.name for opt in opts})
def test_deprecated(self):
"""Test external options that are deprecated by Adapter options.
Not to be confused with ConfLoadingDeprecatedTests, which tests conf
options in Adapter which are themselves deprecated.
"""
def new_deprecated():
return cfg.DeprecatedOpt(uuid.uuid4().hex, group=uuid.uuid4().hex)
opt_names = ['service-type', 'valid-interfaces', 'endpoint-override']
        depr = {n: [new_deprecated()] for n in opt_names}
opts = loading.get_adapter_conf_options(deprecated_opts=depr)
for opt in opts:
if opt.name in opt_names:
self.assertIn(depr[opt.name][0], opt.deprecated_opts)
class ConfLoadingLegacyTests(ConfLoadingTests):
"""Tests with inclusion of deprecated conf options.
Not to be confused with ConfLoadingTests.test_deprecated, which tests
external options that are deprecated in favor of Adapter options.
"""
GROUP = 'adaptergroup'
def setUp(self):
super(ConfLoadingLegacyTests, self).setUp()
self.conf_fx = self.useFixture(config.Config())
loading.register_adapter_conf_options(self.conf_fx.conf, self.GROUP)
def test_load_old_interface(self):
self.conf_fx.config(
service_type='type', service_name='name',
interface='internal',
region_name='region', endpoint_override='endpoint',
version='2.0', group=self.GROUP)
adap = loading.load_adapter_from_conf_options(
self.conf_fx.conf, self.GROUP, session='session', auth='auth')
self.assertEqual('type', adap.service_type)
self.assertEqual('name', adap.service_name)
self.assertEqual('internal', adap.interface)
self.assertEqual('region', adap.region_name)
self.assertEqual('endpoint', adap.endpoint_override)
self.assertEqual('session', adap.session)
self.assertEqual('auth', adap.auth)
self.assertEqual('2.0', adap.version)
self.assertIsNone(adap.min_version)
self.assertIsNone(adap.max_version)
def test_interface_conflict(self):
self.conf_fx.config(
service_type='type', service_name='name', interface='iface',
valid_interfaces='internal,public',
region_name='region', endpoint_override='endpoint',
group=self.GROUP)
self.assertRaises(
TypeError,
loading.load_adapter_from_conf_options,
self.conf_fx.conf, self.GROUP, session='session', auth='auth')
| 42.37551 | 78 | 0.641976 | 1,198 | 10,382 | 5.383139 | 0.126878 | 0.100016 | 0.037215 | 0.035354 | 0.791441 | 0.770662 | 0.747093 | 0.732672 | 0.732672 | 0.732672 | 0 | 0.006229 | 0.242343 | 10,382 | 244 | 79 | 42.54918 | 0.813628 | 0.086014 | 0 | 0.743456 | 0 | 0 | 0.106257 | 0 | 0 | 0 | 0 | 0 | 0.324607 | 1 | 0.08377 | false | 0 | 0.026178 | 0.005236 | 0.136126 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
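A minimal end-to-end sketch of the flow these tests exercise, assuming the oslo.config and keystoneauth1 APIs imported above (the group name and option values are hypothetical):
from oslo_config import cfg
from keystoneauth1 import loading

conf = cfg.ConfigOpts()
loading.register_adapter_conf_options(conf, 'mygroup')
# Values would normally come from a config file; set_override stands in here.
conf.set_override('service_type', 'compute', group='mygroup')
conf.set_override('valid_interfaces', ['internal'], group='mygroup')
adap = loading.load_adapter_from_conf_options(conf, 'mygroup',
                                              session=None, auth=None)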
b76070fe29d8ba7b912023955536701ccf34a681 | 121 | py | Python | PythonExtensions/debug/__init__.py | Jakar510/PythonExtensions | f29600f73454d21345f6da893a1df1b71ddacd0b | [
"MIT"
] | null | null | null | PythonExtensions/debug/__init__.py | Jakar510/PythonExtensions | f29600f73454d21345f6da893a1df1b71ddacd0b | [
"MIT"
] | null | null | null | PythonExtensions/debug/__init__.py | Jakar510/PythonExtensions | f29600f73454d21345f6da893a1df1b71ddacd0b | [
"MIT"
] | null | null | null | from .chains import *
from .console import *
from .converters import *
from .debug_tk import *
from .decorators import *
| 20.166667 | 25 | 0.752066 | 16 | 121 | 5.625 | 0.5 | 0.444444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.165289 | 121 | 5 | 26 | 24.2 | 0.891089 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b78d7af390cca265c9a7594d26b87ab23f3cb477 | 2,594 | py | Python | tests/test_minimax.py | elena-rae/connect4_bccn_pcp | ffa1c1fbd61a28901d67bdeb33250ae52c35c296 | [
"MIT"
] | null | null | null | tests/test_minimax.py | elena-rae/connect4_bccn_pcp | ffa1c1fbd61a28901d67bdeb33250ae52c35c296 | [
"MIT"
] | null | null | null | tests/test_minimax.py | elena-rae/connect4_bccn_pcp | ffa1c1fbd61a28901d67bdeb33250ae52c35c296 | [
"MIT"
] | null | null | null | import numpy as np
from agents.agent_minimax.minimax import *
from agents.agent_random.random import generate_move_random
from agents.common import *
def test_get_valid_column():
board = initialize_game_state()
for x in range(5):
apply_player_action(board, x, 1)
print(pretty_print_board(board))
valid_columns = get_valid_columns(board)
print("valid columns", valid_columns)
assert len(valid_columns) == 7
print()
for x in range(5):
        for y in range(1, 6, 2):
apply_player_action(board, y, 2)
print(pretty_print_board(board))
valid_columns = get_valid_columns(board)
print("valid columns", valid_columns)
assert len(valid_columns) == 5
def test_evaluate_board():
""""""
"""assert horizontal evaluation"""
board = initialize_game_state()
for x in range(3):
apply_player_action(board, x+2, 1)
    score_player1 = evaluate_board(board, 1)
#print(pretty_print_board(board))
assert score_player1 > 0
for x in range(2):
apply_player_action(board, x+2, 2)
score_player2 = evaluate_board(board, 2)
    score_player1 = evaluate_board(board, 1)
#print(pretty_print_board(board))
assert score_player2 == -score_player1
"""assert vertical evaluation """
board = initialize_game_state()
for x in range(3):
apply_player_action(board, 3, 1)
score_player1 = evaluate_board(board, 1)
assert score_player1 > 0
for x in range(3):
apply_player_action(board, 5, 2)
score_player2 = evaluate_board(board, 2)
score_player1 = evaluate_board(board, 1)
#print(pretty_print_board(board))
assert score_player2 == -score_player1
"""assert diagonal evaluation positiv slope """
board = initialize_game_state()
for x in range(2):
apply_player_action(board, x + 4, 2)
for x in range(3):
apply_player_action(board, x + 3, 1)
score_player1 = evaluate_board(board, 1)
score_player2 = evaluate_board(board, 2)
#print(pretty_print_board(board))
assert score_player1 > 0
assert score_player2 < 0
"""assert diagonal evaluation negative slope """
board = initialize_game_state()
apply_player_action(board, 2, 2)
for x in range(2):
apply_player_action(board, x+2, 1)
for x in range(3):
apply_player_action(board, x+2, 2)
score_player1 = evaluate_board(board, 1)
score_player2 = evaluate_board(board, 2)
#print(pretty_print_board(board))
assert score_player1 < 0
assert score_player2 > 0
def test_evaluate_window():
pass
| 27.305263 | 59 | 0.682344 | 364 | 2,594 | 4.596154 | 0.151099 | 0.101614 | 0.111775 | 0.14465 | 0.811118 | 0.754334 | 0.753138 | 0.751943 | 0.683802 | 0.658697 | 0 | 0.035027 | 0.218581 | 2,594 | 94 | 60 | 27.595745 | 0.790331 | 0.061681 | 0 | 0.645161 | 1 | 0 | 0.011499 | 0 | 0 | 0 | 0 | 0 | 0.16129 | 1 | 0.048387 | false | 0.016129 | 0.064516 | 0 | 0.112903 | 0.080645 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
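test_evaluate_window above is still a stub; a hedged sketch of the 4-cell window scoring that evaluate_board implementations typically rely on (the real evaluate_window in agents.agent_minimax.minimax may weight cases differently):
def evaluate_window_sketch(window, player):
    # window is any 4 consecutive board cells as a list; 0 marks an empty cell.
    opponent = 1 if player == 2 else 2
    score = 0
    if window.count(player) == 4:
        score += 100
    elif window.count(player) == 3 and window.count(0) == 1:
        score += 5
    elif window.count(player) == 2 and window.count(0) == 2:
        score += 2
    if window.count(opponent) == 3 and window.count(0) == 1:
        score -= 4
    return score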
b7c71f5b04274a1378f8eaff6b7665005a39dd01 | 5,401 | py | Python | tests/contrib/eventinfo/test_plugin.py | iopipe/iopipe-python | 9bf0620bab732e82a9addcee250eabb9c0cc649f | [
"Apache-2.0"
] | 74 | 2016-08-18T14:26:50.000Z | 2021-11-21T10:58:32.000Z | tests/contrib/eventinfo/test_plugin.py | vemel/iopipe-python | 46c277f9447ddb00e544437ceaa7ba263a759c1d | [
"Apache-2.0"
] | 198 | 2016-08-18T18:52:43.000Z | 2021-05-09T10:01:14.000Z | tests/contrib/eventinfo/test_plugin.py | vemel/iopipe-python | 46c277f9447ddb00e544437ceaa7ba263a759c1d | [
"Apache-2.0"
] | 23 | 2016-08-04T23:22:21.000Z | 2020-01-20T13:54:27.000Z | import mock
import os
from iopipe.contrib.eventinfo import EventInfoPlugin
@mock.patch("iopipe.report.send_report", autospec=True)
def test__eventinfo_plugin__apigw(
mock_send_report, handler_with_eventinfo, event_apigw, mock_context
):
iopipe, handler = handler_with_eventinfo
plugins = iopipe.config["plugins"]
assert len(plugins) == 1
assert plugins[0].enabled is True
assert plugins[0].name == "event-info"
handler(event_apigw, mock_context)
metrics = iopipe.report.custom_metrics
assert any([m["name"] == "@iopipe/event-info.eventType" for m in metrics])
assert len(metrics) == 10
assert "@iopipe/plugin-event-info" in iopipe.report.labels
assert "@iopipe/aws-api-gateway" in iopipe.report.labels
event_type = [m for m in metrics if m["name"] == "@iopipe/event-info.eventType"]
assert len(event_type) == 1
assert event_type[0]["s"] == "apiGateway"
assert "eventType" in iopipe.report.report
assert iopipe.report.report["eventType"] == "aws-api-gateway"
@mock.patch("iopipe.report.send_report", autospec=True)
def test__eventinfo_plugin__cloudfront(
mock_send_report, handler_with_eventinfo, event_cloudfront, mock_context
):
iopipe, handler = handler_with_eventinfo
plugins = iopipe.config["plugins"]
assert len(plugins) == 1
assert plugins[0].enabled is True
assert plugins[0].name == "event-info"
handler(event_cloudfront, mock_context)
metrics = iopipe.report.custom_metrics
assert any([m["name"] == "@iopipe/event-info.eventType" for m in metrics])
assert len(metrics) == 7
event_type = [m for m in metrics if m["name"] == "@iopipe/event-info.eventType"]
assert len(event_type) == 1
assert event_type[0]["s"] == "cloudFront"
assert "eventType" in iopipe.report.report
assert iopipe.report.report["eventType"] == "aws-cloud-front"
@mock.patch("iopipe.report.send_report", autospec=True)
def test__eventinfo_plugin__kinesis(
mock_send_report, handler_with_eventinfo, event_kinesis, mock_context
):
iopipe, handler = handler_with_eventinfo
plugins = iopipe.config["plugins"]
assert len(plugins) == 1
assert plugins[0].enabled is True
assert plugins[0].name == "event-info"
handler(event_kinesis, mock_context)
metrics = iopipe.report.custom_metrics
assert any([m["name"] == "@iopipe/event-info.eventType" for m in metrics])
assert len(metrics) == 4
event_type = [m for m in metrics if m["name"] == "@iopipe/event-info.eventType"]
assert len(event_type) == 1
assert event_type[0]["s"] == "kinesis"
assert "eventType" in iopipe.report.report
assert iopipe.report.report["eventType"] == "aws-kinesis"
@mock.patch("iopipe.report.send_report", autospec=True)
def test__eventinfo_plugin__scheduled(
mock_send_report, handler_with_eventinfo, event_scheduled, mock_context
):
iopipe, handler = handler_with_eventinfo
plugins = iopipe.config["plugins"]
assert len(plugins) == 1
assert plugins[0].enabled is True
assert plugins[0].name == "event-info"
handler(event_scheduled, mock_context)
metrics = iopipe.report.custom_metrics
assert any([m["name"] == "@iopipe/event-info.eventType" for m in metrics])
assert len(metrics) == 6
event_type = [m for m in metrics if m["name"] == "@iopipe/event-info.eventType"]
assert len(event_type) == 1
assert event_type[0]["s"] == "scheduled"
assert "eventType" in iopipe.report.report
assert iopipe.report.report["eventType"] == "aws-scheduled"
def test__eventinfo_plugin__enabled(monkeypatch):
monkeypatch.setattr(os, "environ", {"IOPIPE_EVENT_INFO_ENABLED": "true"})
plugin = EventInfoPlugin(enabled=False)
assert plugin.enabled is True
@mock.patch("iopipe.report.send_report", autospec=True)
def test__eventinfo_plugin__step_function(
mock_send_report, handler_step_function_with_eventinfo, event_apigw, mock_context
):
iopipe, handler = handler_step_function_with_eventinfo
plugins = iopipe.config["plugins"]
assert len(plugins) == 1
assert plugins[0].enabled is True
assert plugins[0].name == "event-info"
response1 = handler(event_apigw, mock_context)
assert "iopipe" in response1
assert "id" in response1["iopipe"]
assert "step" in response1["iopipe"]
response2 = handler(response1, mock_context)
assert "iopipe" in response2
assert response1["iopipe"]["id"] == response2["iopipe"]["id"]
assert response2["iopipe"]["step"] > response1["iopipe"]["step"]
@mock.patch("iopipe.report.send_report", autospec=True)
def test__eventinfo_plugin__http_response(
mock_send_report, handler_http_response_with_eventinfo, event_apigw, mock_context
):
iopipe, handler = handler_http_response_with_eventinfo
handler(event_apigw, mock_context)
metrics = iopipe.report.custom_metrics
assert any(
(
m["name"] == "@iopipe/event-info.apiGateway.response.statusCode"
for m in metrics
)
)
assert all(
(
"n" in m
for m in metrics
if m["name"] == "@iopipe/event-info.apiGateway.response.statusCode"
)
)
metric = next(
(
m
for m in metrics
if m["name"] == "@iopipe/event-info.apiGateway.response.statusCode"
)
)
assert metric["n"] == 200
| 31.584795 | 85 | 0.693946 | 693 | 5,401 | 5.217893 | 0.103896 | 0.06969 | 0.049779 | 0.048673 | 0.795354 | 0.75083 | 0.75083 | 0.712666 | 0.704923 | 0.668695 | 0 | 0.009522 | 0.183299 | 5,401 | 170 | 86 | 31.770588 | 0.810247 | 0 | 0 | 0.507937 | 0 | 0 | 0.179226 | 0.10998 | 0 | 0 | 0 | 0 | 0.404762 | 1 | 0.055556 | false | 0 | 0.02381 | 0 | 0.079365 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
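The repeated any()/next() lookups above collapse into one helper; a small sketch assuming metrics are dicts with a "name" key, as in these tests (get_metric is a hypothetical name):
def get_metric(metrics, name):
    # First custom metric with the given name, or None if absent.
    return next((m for m in metrics if m["name"] == name), None)

# e.g.: assert get_metric(metrics, "@iopipe/event-info.eventType")["s"] == "kinesis"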
b7ea26f4b0332b849b8bcef625eaea63e406ac9a | 943 | py | Python | terrascript/nomad/d.py | jackluo923/python-terrascript | ed4b626e6d28621ea1b02fc16f7277a094d89830 | [
"BSD-2-Clause"
] | 4 | 2022-02-07T21:08:14.000Z | 2022-03-03T04:41:28.000Z | terrascript/nomad/d.py | jackluo923/python-terrascript | ed4b626e6d28621ea1b02fc16f7277a094d89830 | [
"BSD-2-Clause"
] | null | null | null | terrascript/nomad/d.py | jackluo923/python-terrascript | ed4b626e6d28621ea1b02fc16f7277a094d89830 | [
"BSD-2-Clause"
] | 2 | 2022-02-06T01:49:42.000Z | 2022-02-08T14:15:00.000Z | # terrascript/nomad/d.py
import terrascript
class nomad_acl_policies(terrascript.Data):
pass
class nomad_acl_policy(terrascript.Data):
pass
class nomad_acl_token(terrascript.Data):
pass
class nomad_acl_tokens(terrascript.Data):
pass
class nomad_datacenters(terrascript.Data):
pass
class nomad_deployments(terrascript.Data):
pass
class nomad_job(terrascript.Data):
pass
class nomad_job_parser(terrascript.Data):
pass
class nomad_namespace(terrascript.Data):
pass
class nomad_namespaces(terrascript.Data):
pass
class nomad_plugin(terrascript.Data):
pass
class nomad_plugins(terrascript.Data):
pass
class nomad_scaling_policies(terrascript.Data):
pass
class nomad_scaling_policy(terrascript.Data):
pass
class nomad_scheduler_config(terrascript.Data):
pass
class nomad_regions(terrascript.Data):
pass
class nomad_volumes(terrascript.Data):
pass
| 13.28169 | 47 | 0.757158 | 116 | 943 | 5.939655 | 0.224138 | 0.246734 | 0.468795 | 0.557329 | 0.756168 | 0.419448 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16649 | 943 | 70 | 48 | 13.471429 | 0.87659 | 0.02333 | 0 | 0.485714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.485714 | 0.028571 | 0 | 0.514286 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
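A usage sketch for these generated data-source classes, assuming the python-terrascript container API (the label and job_id values are hypothetical):
import terrascript
from terrascript.nomad.d import nomad_job

ts = terrascript.Terrascript()
ts += nomad_job('example', job_id='my-job')  # renders as data "nomad_job" "example"
print(ts)  # emits the Terraform JSON document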
4d006b4a0ab73cb7019e55149843c1816b7eade4 | 205 | py | Python | internal/utils.py | makecodes/genshin-card-backend | 611093846e56a2251336845ccf849b01b7d3f447 | [
"Unlicense"
] | null | null | null | internal/utils.py | makecodes/genshin-card-backend | 611093846e56a2251336845ccf849b01b7d3f447 | [
"Unlicense"
] | null | null | null | internal/utils.py | makecodes/genshin-card-backend | 611093846e56a2251336845ccf849b01b7d3f447 | [
"Unlicense"
] | null | null | null | def money_to_db(value):
value = str(value)
value = value.replace(".", "")
value = value.replace(",", ".")
return value
def empty_object():
return {}
def empty_list():
return []
| 14.642857 | 35 | 0.570732 | 24 | 205 | 4.708333 | 0.458333 | 0.353982 | 0.300885 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.24878 | 205 | 13 | 36 | 15.769231 | 0.733766 | 0 | 0 | 0 | 0 | 0 | 0.014634 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.222222 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
4d1234790c3c7d3da9ec231690a24ddf62bd5d41 | 948 | py | Python | api/barriers/migrations/0098_auto_20201119_1144.py | uktrade/market-access-api | 850a59880f8f62263784bcd9c6b3362e447dbc7a | [
"MIT"
] | null | null | null | api/barriers/migrations/0098_auto_20201119_1144.py | uktrade/market-access-api | 850a59880f8f62263784bcd9c6b3362e447dbc7a | [
"MIT"
] | 51 | 2018-05-31T12:16:31.000Z | 2022-03-08T09:36:48.000Z | api/barriers/migrations/0098_auto_20201119_1144.py | uktrade/market-access-api | 850a59880f8f62263784bcd9c6b3362e447dbc7a | [
"MIT"
] | 2 | 2019-12-24T09:47:42.000Z | 2021-02-09T09:36:51.000Z | # Generated by Django 3.1.2 on 2020-11-19 11:44
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("barriers", "0097_auto_20201113_1149"),
]
operations = [
migrations.AddField(
model_name="barrier",
name="commercial_value",
field=models.BigIntegerField(blank=True, null=True),
),
migrations.AddField(
model_name="barrier",
name="commercial_value_explanation",
field=models.TextField(blank=True),
),
migrations.AddField(
model_name="historicalbarrier",
name="commercial_value",
field=models.BigIntegerField(blank=True, null=True),
),
migrations.AddField(
model_name="historicalbarrier",
name="commercial_value_explanation",
field=models.TextField(blank=True),
),
]
| 27.882353 | 64 | 0.592827 | 87 | 948 | 6.310345 | 0.448276 | 0.131148 | 0.167577 | 0.196721 | 0.724954 | 0.724954 | 0.724954 | 0.724954 | 0.650273 | 0.324226 | 0 | 0.046687 | 0.299578 | 948 | 33 | 65 | 28.727273 | 0.78012 | 0.047468 | 0 | 0.740741 | 1 | 0 | 0.18535 | 0.08768 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.037037 | 0 | 0.148148 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4d2fbcd94cce8fc83641cd0fd21fe195cbd24241 | 559 | py | Python | backend/modules/__init__.py | crowdbotics-apps/test-33024 | 09ff5fe2740de7d38aced0309cba713511e7482e | [
"FTL",
"AML",
"RSA-MD"
] | null | null | null | backend/modules/__init__.py | crowdbotics-apps/test-33024 | 09ff5fe2740de7d38aced0309cba713511e7482e | [
"FTL",
"AML",
"RSA-MD"
] | null | null | null | backend/modules/__init__.py | crowdbotics-apps/test-33024 | 09ff5fe2740de7d38aced0309cba713511e7482e | [
"FTL",
"AML",
"RSA-MD"
] | null | null | null | from .common import *
from .instructors import *
from .models_btw import *
from .models_pas import *
from .models_pre_trip import *
from .models_swp import *
from .models_vrt import *
from .students import *
from .pretrips_class_a import *
from .pretrips_class_b import *
from .pretrips_class_c import *
from .pretrips_class_p import *
from .pretrips_bus import *
from .btw_class_a import *
from .btw_class_b import *
from .btw_class_c import *
from .btw_class_p import *
from .btw_bus import *
from .probability_chart import *
from .instructions import *
| 23.291667 | 32 | 0.779964 | 85 | 559 | 4.835294 | 0.258824 | 0.462287 | 0.194647 | 0.223844 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148479 | 559 | 23 | 33 | 24.304348 | 0.863445 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4d6085e1f9d0ef5eb4f97e0e8f14366d27f64b54 | 6,631 | py | Python | src/stand-alone-application/thrain.py | 21Shadow10/Cloud-Message-App | be052da218294829f5287001975d4134b11ee716 | [
"MIT"
] | 1 | 2021-12-30T10:30:30.000Z | 2021-12-30T10:30:30.000Z | src/stand-alone-application/thrain.py | 21Shadow10/Cloud-Message-App | be052da218294829f5287001975d4134b11ee716 | [
"MIT"
] | null | null | null | src/stand-alone-application/thrain.py | 21Shadow10/Cloud-Message-App | be052da218294829f5287001975d4134b11ee716 | [
"MIT"
] | null | null | null | import ENCDEC
import time
import unicodedata
import os
import os.path
import DH
'''
-----------------------------------------------------------------
~~~~~~~~~~~~~~~~~~~~~~ENCRYPTION SNIPPET~~~~~~~~~~~~~~~~~~~~~~~~~
-----------------------------------------------------------------
'''
def encrypt(filename, directory, public_key, private_key):
    key = DH.generate_secret(int(private_key), int(public_key))
    key_hex = key.encode('utf-8').hex()  # renamed from str to avoid shadowing the built-in
    key = key_hex[0:32]  # first 32 hex characters serve as the AES key
file_obj = open(filename, "r")
t = time.time()
#list,str = ENCDEC.shamirs_split(file_obj)
msg1 = ENCDEC.AESCipher(key).encrypt(file_obj.read())
#msg2 = ENCDEC.AESCipher(key).encrypt(str)
s = time.time()
# Exchange this with public key
outputFilename = os.path.join(directory, key[16:]+".txt")
file_obj = open(outputFilename, 'w')
file_obj.write(msg1)
# file_obj.write('\n')
# file_obj.write(list[1])
# file_obj.write('\n')
# file_obj.write(msg2)
os.remove(filename)
os.system("start " + directory)
'''
-----------------------------------------------------------------
~~~~~~~~~~~~~~~~~~~~~~DECRYPTION SNIPPET~~~~~~~~~~~~~~~~~~~~~~~~~
-----------------------------------------------------------------
'''
def decrypt(filename, directory, public_key, private_key):
    key = DH.generate_secret(int(private_key), int(public_key))
    key_hex = key.encode('utf-8').hex()  # renamed from str to avoid shadowing the built-in
    key = key_hex[0:32]  # same key derivation as in encrypt
file_obj = open(filename, "r")
msg = file_obj.read()
#list = msg.split('\n')
#msg1 = list[0]
#msg2 = list[2]
text = ENCDEC.AESCipher(key).decrypt(msg)
#msg2 = ENCDEC.AESCipher(key).decrypt(msg2)
#temp = []
# temp.append(unicodedata.normalize('NFKD',msg1).encode('ascii','ignore'))
# temp.append(list[1])
#text = ENCDEC.shamirs_join(temp,unicodedata.normalize('NFKD',msg2).encode('ascii','ignore'))
outputFilename = os.path.join(directory, "DecodedFile.txt")
file_obj = open(outputFilename, "w")
file_obj.write(text)
os.remove(filename)
os.system("start " + directory)
'''
Prime Number: 1090748135619415929450294929359784500348155124953172211774101106966150168922785639028532473848836817769712164169076432969224698752674677662739994265785437233596157045970922338040698100507861033047312331823982435279475700199860971612732540528796554502867919746776983759391475987142521315878719577519148811830879919426939958487087540965716419167467499326156226529675209172277001377591248147563782880558861083327174154014975134893125116015776318890295960698011614157721282527539468816519319333337503114777192360412281721018955834377615480468479252748867320362385355596601795122806756217713579819870634321561907813255153703950795271232652404894983869492174481652303803498881366210508647263668376514131031102336837488999775744046733651827239395353540348414872854639719294694323450186884189822544540647226987292160693184734654941906936646576130260972193280317171696418971553954161446191759093719524951116705577362073481319296041201283516154269044389257727700289684119460283480452306204130024913879981135908026983868205969318167819680850998649694416907952712904962404937775789698917207356355227455066183815847669135530549755439819480321732925869069136146085326382334628745456398071603058051634209386708703306545903199608523824513729625136659128221100967735450519952404248198262813831097374261650380017277916975324134846574681307337017380830353680623216336949471306191686438249305686413380231046096450953594089375540285037292470929395114028305547452584962074309438151825437902976012891749355198678420603722034900311364893046495761404333938686140037848030916292543273684533640032637639100774502371542479302473698388692892420946478947733800387782741417786484770190108867879778991633218628640533982619322466154883011452291890252336487236086654396093853898628805813177559162076363154436494477507871294119841637867701722166609831201845484078070518041336869808398454625586921201308185638888082699408686536045192649569198110353659943111802300636106509865023943661829436426563007917282050894429388841748885398290707743052973605359277515749619730823773215894755121761467887865327707115573804264519206349215850195195364813387526811742474131549802130246506341207020335797706780705406945275438806265978516209706795702579244075380490231741030862614968783306207869687868108423639971983209077624758080499988275591392787267627182442892809646874228263172435642368588260139161962836121481966092745325488641054238839295138992979335446110090325230955276870524611359124918392740353154294858383359
'''
prime_ = 0xFFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E088A67CC74020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7EDEE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3DC2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F83655D23DCA3AD961C62F356208552BB9ED529077096966D670C354E4ABC9804F1746C08CA18217C32905E462E36CE3BE39E772C180E86039B2783A2EC07A28FB5C55DF06F4C52C9DE2BCBF6955817183995497CEA956AE515D2261898FA051015728E5A8AAAC42DAD33170D04507A33A85521ABDF1CBA64ECFB850458DBEF0A8AEA71575D060C7DB3970F85A6E1E4C7ABF5AE8CDB0933D71E8C94E04A25619DCEE3D2261AD2EE6BF12FFA06D98A0864D87602733EC86A64521F2B18177B200CBBE117577A615D6C770988C0BAD946E208E24FA074E5AB3143DB5BFCE0FD108E4B82D120A92108011A723C12A787E6D788719A10BDBA5B2699C327186AF4E23C1A946834B6150BDA2583E9CA2AD44CE8DBBBC2DB04DE8EF92E8EFC141FBECAA6287C59474E6BC05D99B2964FA090C3A2233BA186515BE7ED1F612970CEE2D7AFB81BDD762170481CD0069127D5B05AA993B4EA988D8FDDC186FFB7DC90A6C08F4DF435C93402849236C3FAB4D27C7026C1D4DCB2602646DEC9751E763DBA37BDF8FF9406AD9E530EE5DB382F413001AEB06A53ED9027D831179727B0865A8918DA3EDBEBCF9B14ED44CE6CBACED4BB1BDB7F1447E6CC254B332051512BD7AF426FB8F401378CD2BF5983CA01C64B92ECF032EA15D1721D03F482D7CE6E74FEF6D55E702F46980C82B5A84031900B1C9E59E7C97FBEC7E8F323A97A7E36CC88BE0F1D45B7FF585AC54BD407B22B4154AACC8F6D7EBF48E1D814CC5ED20F8037E0A79715EEF29BE32806A1D58BB7C5DA76F550AA3D8A1FBFF0EB19CCB1A313D55CDA56C9EC2EF29632387FE8D76E3C0468043E8F663F4860EE12BF2D5B0B7474D6E694F91E6DBE115974A3926F12FEE5E438777CB6A932DF8CD8BEC4D073B931BA3BC832B68D9DD300741FA7BF8AFC47ED2576F6936BA424663AAB639C5AE4F5683423B4742BF1C978238F16CBE39D652DE3FDB8BEFC848AD922222E04A4037C0713EB57A81A23F0C73473FC646CEA306B4BCBC8862F8385DDFA9D4B7FA2C087E879683303ED5BDD3A062B3CF5B3A278A66D2A13F83F44F82DDF310EE074AB6A364597E899A0255DC164F31CC50846851DF9AB48195DED7EA1B1D510BD7EE74D73FAF36BC31ECFA268359046F4EB879F924009438B481C6CD7889A002ED5EE382BC9190DA6FC026E479558E4475677E9AA9E3050E2765694DFC81F56E880B96E7160C980DD98EDD3DFFFFFFFFFFFFFFFFF
| 88.413333 | 2,481 | 0.854321 | 240 | 6,631 | 23.491667 | 0.291667 | 0.01614 | 0.01277 | 0.009223 | 0.094005 | 0.082299 | 0.082299 | 0.05995 | 0.05995 | 0.045406 | 0 | 0.592105 | 0.048711 | 6,631 | 74 | 2,482 | 89.608108 | 0.30168 | 0.073292 | 0 | 0.352941 | 0 | 0 | 0.013928 | 0 | 0 | 1 | 0.634478 | 0 | 0 | 1 | 0.058824 | false | 0 | 0.176471 | 0 | 0.235294 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
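A hedged round-trip sketch for encrypt/decrypt above; the key material is hypothetical, and both parties must derive the same DH shared secret for decryption to recover the plaintext:
my_private, their_public = 1234567, 7654321  # hypothetical DH key material
encrypt('notes.txt', 'outbox', their_public, my_private)
# The ciphertext file is named after the last 16 hex chars of the derived key;
# the receiver calls decrypt with the mirrored pair to produce DecodedFile.txt:
# decrypt('outbox/<keytail>.txt', 'inbox', my_public, their_private)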
4d729890727ad91f757c94c077c6da494bbf9de8 | 1,826 | py | Python | dcbr-api/api/migrations/0015_auto_20190625_1319.py | WadeBarnes/agri-dcbr | 92a62ec8c22a3b7b6dd1978eaa7320494a9ca9d9 | [
"Apache-2.0"
] | 1 | 2020-07-11T23:20:16.000Z | 2020-07-11T23:20:16.000Z | dcbr-api/api/migrations/0015_auto_20190625_1319.py | WadeBarnes/agri-dcbr | 92a62ec8c22a3b7b6dd1978eaa7320494a9ca9d9 | [
"Apache-2.0"
] | 19 | 2019-07-26T22:47:49.000Z | 2020-12-15T22:06:25.000Z | dcbr-api/api/migrations/0015_auto_20190625_1319.py | WadeBarnes/agri-dcbr | 92a62ec8c22a3b7b6dd1978eaa7320494a9ca9d9 | [
"Apache-2.0"
] | 7 | 2019-04-15T17:13:09.000Z | 2019-12-09T23:52:53.000Z | # Generated by Django 2.2 on 2019-06-25 20:19
import django.core.validators
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('api', '0014_auto_20190625_1202'),
]
operations = [
migrations.RenameField(
model_name='risk_factor_animal',
old_name='num_traded',
new_name='num_cat_transferred',
),
migrations.RenameField(
model_name='risk_factor_animal',
old_name='num_leased',
new_name='num_dog_leased',
),
migrations.RenameField(
model_name='risk_factor_animal',
old_name='num_sold',
new_name='num_dog_sold',
),
migrations.RenameField(
model_name='risk_factor_animal',
old_name='num_transferred',
new_name='num_dog_traded',
),
migrations.AddField(
model_name='risk_factor_animal',
name='num_cat_leased',
field=models.IntegerField(default=0, validators=[django.core.validators.MinValueValidator(0)]),
),
migrations.AddField(
model_name='risk_factor_animal',
name='num_cat_sold',
field=models.IntegerField(default=0, validators=[django.core.validators.MinValueValidator(0)]),
),
migrations.AddField(
model_name='risk_factor_animal',
name='num_cat_traded',
field=models.IntegerField(default=0, validators=[django.core.validators.MinValueValidator(0)]),
),
migrations.AddField(
model_name='risk_factor_animal',
name='num_dog_transferred',
field=models.IntegerField(default=0, validators=[django.core.validators.MinValueValidator(0)]),
),
]
| 33.2 | 107 | 0.608434 | 184 | 1,826 | 5.73913 | 0.255435 | 0.079545 | 0.098485 | 0.143939 | 0.70928 | 0.70928 | 0.70928 | 0.70928 | 0.70928 | 0.70928 | 0 | 0.029008 | 0.282585 | 1,826 | 54 | 108 | 33.814815 | 0.777099 | 0.023549 | 0 | 0.583333 | 1 | 0 | 0.185851 | 0.012914 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.041667 | 0 | 0.104167 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4d779376bfb0ad54388b641f50acbfeaaf867d19 | 8,202 | py | Python | nemo/collections/asr/models/k2_sequence_models.py | hamjam/NeMo | b3484d32e1317666151f931bfa39867d88ed8658 | [
"Apache-2.0"
] | 1 | 2022-03-08T02:48:44.000Z | 2022-03-08T02:48:44.000Z | nemo/collections/asr/models/k2_sequence_models.py | hamjam/NeMo | b3484d32e1317666151f931bfa39867d88ed8658 | [
"Apache-2.0"
] | 1 | 2022-03-06T14:09:02.000Z | 2022-03-06T14:09:02.000Z | nemo/collections/asr/models/k2_sequence_models.py | hamjam/NeMo | b3484d32e1317666151f931bfa39867d88ed8658 | [
"Apache-2.0"
] | null | null | null | # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import List, Optional
from omegaconf import DictConfig
from pytorch_lightning import Trainer
from nemo.collections.asr.models.ctc_bpe_models import EncDecCTCModelBPE
from nemo.collections.asr.models.ctc_models import EncDecCTCModel
from nemo.collections.asr.parts.k2.classes import ASRK2Mixin
from nemo.core.classes.common import PretrainedModelInfo, typecheck
from nemo.utils import logging
class EncDecK2SeqModel(EncDecCTCModel, ASRK2Mixin):
"""Encoder decoder models with various lattice losses."""
def __init__(self, cfg: DictConfig, trainer: Trainer = None):
super().__init__(cfg=cfg, trainer=trainer)
self._init_k2()
@classmethod
def list_available_models(cls) -> Optional[PretrainedModelInfo]:
"""
This method returns a list of pre-trained model which can be instantiated directly from NVIDIA's NGC cloud.
Returns:
List of available pre-trained models.
"""
pass
def change_vocabulary(self, new_vocabulary: List[str]):
"""
Changes vocabulary used during CTC decoding process. Use this method when fine-tuning on from pre-trained model.
This method changes only decoder and leaves encoder and pre-processing modules unchanged. For example, you would
use it if you want to use pretrained encoder when fine-tuning on a data in another language, or when you'd need
model to learn capitalization, punctuation and/or special characters.
If new_vocabulary == self.decoder.vocabulary then nothing will be changed.
Args:
new_vocabulary: list with new vocabulary. Must contain at least 2 elements. Typically, \
this is target alphabet.
Returns: None
"""
super().change_vocabulary(new_vocabulary)
if self.use_graph_lm:
self.token_lm = None
logging.warning(
f"""With .change_vocabulary() call for a model with criterion_type=`{self.loss.criterion_type}`,
a new token_lm has to be set manually: call .update_k2_modules(new_cfg)
or update .graph_module_cfg.backend_cfg.token_lm before calling this method."""
)
self.update_k2_modules(self.graph_module_cfg)
@typecheck()
def forward(
self, input_signal=None, input_signal_length=None, processed_signal=None, processed_signal_length=None,
):
"""
Forward pass of the model.
Args:
input_signal: Tensor that represents a batch of raw audio signals,
of shape [B, T]. T here represents timesteps, with 1 second of audio represented as
`self.sample_rate` number of floating point values.
input_signal_length: Vector of length B, that contains the individual lengths of the audio
sequences.
processed_signal: Tensor that represents a batch of processed audio signals,
of shape (B, D, T) that has undergone processing via some DALI preprocessor.
processed_signal_length: Vector of length B, that contains the individual lengths of the
processed audio sequences.
Returns:
A tuple of 3 elements -
1) The log probabilities tensor of shape [B, T, D].
2) The lengths of the acoustic sequence after propagation through the encoder, of shape [B].
3) The greedy token predictions of the model of shape [B, T] (via argmax)
"""
log_probs, encoded_len, greedy_predictions = super().forward(
input_signal=input_signal,
input_signal_length=input_signal_length,
processed_signal=processed_signal,
processed_signal_length=processed_signal_length,
)
return self._forward_k2_post_processing(
log_probs=log_probs, encoded_length=encoded_len, greedy_predictions=greedy_predictions
)
class EncDecK2SeqModelBPE(EncDecCTCModelBPE, ASRK2Mixin):
"""Encoder decoder models with Byte Pair Encoding and various lattice losses."""
def __init__(self, cfg: DictConfig, trainer: Trainer = None):
super().__init__(cfg=cfg, trainer=trainer)
self._init_k2()
@classmethod
def list_available_models(cls) -> Optional[PretrainedModelInfo]:
"""
This method returns a list of pre-trained model which can be instantiated directly from NVIDIA's NGC cloud.
Returns:
List of available pre-trained models.
"""
pass
def change_vocabulary(self, new_tokenizer_dir: str, new_tokenizer_type: str):
"""
Changes vocabulary of the tokenizer used during CTC decoding process.
Use this method when fine-tuning on from pre-trained model.
This method changes only decoder and leaves encoder and pre-processing modules unchanged. For example, you would
use it if you want to use pretrained encoder when fine-tuning on a data in another language, or when you'd need
model to learn capitalization, punctuation and/or special characters.
Args:
new_tokenizer_dir: Path to the new tokenizer directory.
new_tokenizer_type: Either `bpe` or `wpe`. `bpe` is used for SentencePiece tokenizers,
whereas `wpe` is used for `BertTokenizer`.
Returns: None
"""
super().change_vocabulary(new_tokenizer_dir, new_tokenizer_type)
if self.use_graph_lm:
self.token_lm = None
logging.warning(
f"""With .change_vocabulary() call for a model with criterion_type=`{self.loss.criterion_type}`,
a new token_lm has to be set manually: call .update_k2_modules(new_cfg)
or update .graph_module_cfg.backend_cfg.token_lm before calling this method."""
)
self.update_k2_modules(self.graph_module_cfg)
@typecheck()
def forward(
self, input_signal=None, input_signal_length=None, processed_signal=None, processed_signal_length=None,
):
"""
Forward pass of the model.
Args:
input_signal: Tensor that represents a batch of raw audio signals,
of shape [B, T]. T here represents timesteps, with 1 second of audio represented as
`self.sample_rate` number of floating point values.
input_signal_length: Vector of length B, that contains the individual lengths of the audio
sequences.
processed_signal: Tensor that represents a batch of processed audio signals,
of shape (B, D, T) that has undergone processing via some DALI preprocessor.
processed_signal_length: Vector of length B, that contains the individual lengths of the
processed audio sequences.
Returns:
A tuple of 3 elements -
1) The log probabilities tensor of shape [B, T, D].
2) The lengths of the acoustic sequence after propagation through the encoder, of shape [B].
3) The greedy token predictions of the model of shape [B, T] (via argmax)
"""
log_probs, encoded_len, greedy_predictions = super().forward(
input_signal=input_signal,
input_signal_length=input_signal_length,
processed_signal=processed_signal,
processed_signal_length=processed_signal_length,
)
return self._forward_k2_post_processing(
log_probs=log_probs, encoded_length=encoded_len, greedy_predictions=greedy_predictions
)
| 44.335135 | 120 | 0.678981 | 1,048 | 8,202 | 5.16126 | 0.240458 | 0.032538 | 0.01479 | 0.009983 | 0.763172 | 0.750601 | 0.726197 | 0.726197 | 0.726197 | 0.726197 | 0 | 0.005431 | 0.259205 | 8,202 | 184 | 121 | 44.576087 | 0.884793 | 0.509876 | 0 | 0.705882 | 0 | 0 | 0.15919 | 0.063097 | 0 | 0 | 0 | 0 | 0 | 1 | 0.117647 | false | 0.029412 | 0.117647 | 0 | 0.294118 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
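A short fine-tuning sketch for the BPE variant above; restore_from is inherited from NeMo's ModelPT base class, and the checkpoint and tokenizer paths are hypothetical:
model = EncDecK2SeqModelBPE.restore_from('ctc_k2.nemo')
model.change_vocabulary(new_tokenizer_dir='tokenizer_v2/', new_tokenizer_type='bpe')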
4d91aeb86c78d853d3ed1eaf68d6afb5b89ef4e8 | 6,778 | py | Python | tests/test_component/test_restaurant_review.py | Squad002/gooutsafe-service | 88b759ac849c539dffec4dbc9054998792e5fbcc | [
"MIT"
] | 2 | 2020-11-26T23:37:40.000Z | 2020-12-12T19:29:51.000Z | tests/test_component/test_restaurant_review.py | Squad002/gooutsafe-service | 88b759ac849c539dffec4dbc9054998792e5fbcc | [
"MIT"
] | null | null | null | tests/test_component/test_restaurant_review.py | Squad002/gooutsafe-service | 88b759ac849c539dffec4dbc9054998792e5fbcc | [
"MIT"
] | null | null | null | from sqlalchemy.sql.operators import op
from tests import data, helpers
from tests.fixtures import app, client
from urllib.parse import urlparse
# TODO merge with the restaurant page test
def test_user_should_see_review_form(client):
helpers.create_user(client)
helpers.create_operator(client)
helpers.login_operator(client)
helpers.create_restaurant(client)
helpers.logout(client)
helpers.login_user(client)
res = visit_restaurant_page(client)
assert res.status_code == 200
assert b"Add your review" in res.data
assert b"Your rating" in res.data
assert b"Your review" in res.data
def test_user_should_create_review(client):
helpers.create_user(client)
helpers.create_operator(client)
helpers.login_operator(client)
helpers.create_restaurant(client)
helpers.logout(client)
helpers.login_user(client)
res = create_review(client)
assert res.status_code == 200
assert b"mario" in res.data
assert b"5" in res.data
assert (
b"It was a delicious dinner, but initially the service was not so excellent in the speed of serving the meals."
in res.data
)
# def test_user_should_create_review2(client, db):
# helpers.insert_complete_restaurant(db)
# helpers.create_user(client)
# helpers.login_user(client)
# res = create_review(client, rating=4)
# helpers.logout(client)
# helpers.create_user(client, data=data.user3)
# helpers.login_user(client)
# res = create_review(client, rating=2)
# helpers.logout(client)
# compute_restaurants_rating_average()
# assert res.status_code == 200
# assert b'<div class="content">3</div>' in res.data
def test_user_should_create_review_if_already_did(client):
helpers.create_operator(client)
helpers.login_operator(client)
helpers.create_restaurant(client)
helpers.logout(client)
helpers.create_user(client)
    helpers.login_user(client)
create_review(client)
res = create_review(client, rating=3)
assert res.status_code == 200
assert b"You already reviewed this restaraunt"
def test_user_should_not_create_review_when_message_is_less_than_30_character(
client
):
helpers.create_operator(client)
helpers.login_operator(client)
helpers.create_restaurant(client)
helpers.logout(client)
helpers.create_user(client)
helpers.login_user(client)
res = client.post(
"/restaurants/1",
data=dict(rating=4, message="It was a"),
follow_redirects=True,
)
assert res.status_code == 200
assert b"The review should be at least of 30 characters." in res.data
# def test_user_should_not_create_review_when_rating_bigger_than_5(client, db):
# helpers.create_user(client)
# helpers.login_user(client)
# helpers.insert_complete_restaurant(db)
# res = create_review(client, rating=6)
# assert res.status_code == 200
# assert b"The number of stars must be between 1 and 5" in res.data
# def test_user_should_not_create_review_when_rating_is_zero(client, db):
# helpers.create_user(client)
# helpers.login_user(client)
# helpers.insert_complete_restaurant(db)
# res = create_review(client, rating=0)
# assert res.status_code == 200
# assert b"This field is required" in res.data
# def test_user_should_not_create_review_when_rating_smaller_than_zero(client, db):
# helpers.create_user(client)
# helpers.login_user(client)
# helpers.insert_complete_restaurant(db)
# res = create_review(client, rating=-1)
# assert res.status_code == 200
# assert b"The number of stars must be between 1 and 5" in res.data
def test_authority_should_not_see_the_review_form(client):
helpers.create_health_authority(client)
helpers.create_operator(client)
helpers.login_operator(client)
helpers.create_restaurant(client)
helpers.logout(client)
helpers.login_authority(client)
res = visit_restaurant_page(client)
assert res.status_code == 200
assert b"Add your review" not in res.data
assert b"Your rating" not in res.data
assert b"Your review" not in res.data
# Here the HA cannot see the form, but if it knows the endpoint, then it still can make a request
def test_authority_should_not_create_review(client):
helpers.create_operator(client)
helpers.login_operator(client)
helpers.create_restaurant(client)
helpers.logout(client)
helpers.create_health_authority(client)
helpers.login_authority(client)
res = create_review(client)
assert res.status_code == 200
assert b"Only a logged user can review a restaurant." in res.data
def test_operator_should_not_see_the_review_form(client):
helpers.create_operator(client)
helpers.login_operator(client)
helpers.create_restaurant(client)
res = visit_restaurant_page(client)
assert res.status_code == 200
assert b"Add your review" not in res.data
assert b"Your rating" not in res.data
assert b"Your review" not in res.data
# Here the Operator cannot see the form, but if it knows the endpoint, then it still can make a request
def test_operator_should_not_create_review(client):
helpers.create_operator(client)
helpers.login_operator(client)
helpers.create_restaurant(client)
res = create_review(client)
assert res.status_code == 200
assert b"Only a logged user can review a restaurant." in res.data
def test_anonymous_user_should_see_review_form(client):
helpers.create_operator(client)
helpers.login_operator(client)
helpers.create_restaurant(client)
helpers.logout(client)
res = visit_restaurant_page(client)
assert res.status_code == 200
assert b"Add your review" in res.data
assert b"Your rating" in res.data
assert b"Your review" in res.data
def test_anonymous_user_should_be_redirected_on_login_page_when_create_review(
client
):
helpers.create_operator(client)
helpers.login_operator(client)
helpers.create_restaurant(client)
helpers.logout(client)
res = create_review(client, redirect=False)
assert res.status_code == 302
assert urlparse(res.location).path == "/login/user"
# Helpers methods
def create_review(client, message=None, rating=5, redirect=True):
if not message:
message = "It was a delicious dinner, but initially the service was not so excellent in the speed of serving the meals."
return client.post(
"/restaurants/1",
data=dict(
rating=rating,
message=message,
),
follow_redirects=redirect,
)
def visit_restaurant_page(client):
return client.get(
"/restaurants/1",
follow_redirects=False,
)
| 27.778689 | 128 | 0.729566 | 948 | 6,778 | 4.99789 | 0.14346 | 0.161883 | 0.108274 | 0.056142 | 0.834529 | 0.810257 | 0.795272 | 0.743774 | 0.70916 | 0.655973 | 0 | 0.011989 | 0.187814 | 6,778 | 243 | 129 | 27.893004 | 0.848683 | 0.2545 | 0 | 0.661654 | 0 | 0.015038 | 0.119665 | 0 | 0 | 0 | 0 | 0.004115 | 0.225564 | 1 | 0.090226 | false | 0 | 0.030075 | 0.007519 | 0.135338 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
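The commented-out average test above calls compute_restaurants_rating_average; a hedged sketch of what such a helper could compute (the project's real implementation may differ):
def compute_restaurants_rating_average_sketch(ratings):
    # Plain rounded mean; the commented assertion expects 3 from ratings 4 and 2.
    return round(sum(ratings) / len(ratings)) if ratings else 0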
4dd92bc384de743590dab329da0e66b9f933799c | 161 | py | Python | project/core/stations/admin.py | robwise1/birdr | fa6cf9036baebb735bff9a59abc4c04867e40d61 | [
"MIT"
] | null | null | null | project/core/stations/admin.py | robwise1/birdr | fa6cf9036baebb735bff9a59abc4c04867e40d61 | [
"MIT"
] | null | null | null | project/core/stations/admin.py | robwise1/birdr | fa6cf9036baebb735bff9a59abc4c04867e40d61 | [
"MIT"
] | null | null | null | from django.contrib import admin
from stations.models import Station
class StationAdmin(admin.ModelAdmin):
pass
admin.site.register(Station, StationAdmin) | 20.125 | 42 | 0.813665 | 20 | 161 | 6.55 | 0.7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.118012 | 161 | 8 | 42 | 20.125 | 0.922535 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.2 | 0.4 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
151313c50116765c62e593c3bf8eff4179ef883b | 22 | py | Python | poetrytools/__init__.py | textioHQ/Poetry-Tools | ab1cc5e576a5b7af8093990c36016a0fca02cb44 | [
"MIT"
] | 105 | 2015-02-15T01:09:23.000Z | 2022-03-01T01:47:52.000Z | poetrytools/__init__.py | textioHQ/Poetry-Tools | ab1cc5e576a5b7af8093990c36016a0fca02cb44 | [
"MIT"
] | 6 | 2016-03-08T19:16:22.000Z | 2020-12-02T16:26:19.000Z | poetrytools/__init__.py | textioHQ/Poetry-Tools | ab1cc5e576a5b7af8093990c36016a0fca02cb44 | [
"MIT"
] | 31 | 2015-11-15T03:13:27.000Z | 2022-02-11T17:40:06.000Z | from .poetics import * | 22 | 22 | 0.772727 | 3 | 22 | 5.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.136364 | 22 | 1 | 22 | 22 | 0.894737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
15192563f05eb3c0358f7ebafeb1a743fcfdaeae | 37 | py | Python | turbustat/statistics/cramer/__init__.py | CFD-UTSA/Turbulence-stars | 354d02e38d15e3b0d1f751b43f430dbd3a14c250 | [
"MIT"
] | 42 | 2016-04-07T20:49:59.000Z | 2022-03-28T12:54:13.000Z | turbustat/statistics/cramer/__init__.py | CFD-UTSA/Turbulence-stars | 354d02e38d15e3b0d1f751b43f430dbd3a14c250 | [
"MIT"
] | 131 | 2015-03-05T21:42:27.000Z | 2021-07-22T14:59:04.000Z | turbustat/statistics/cramer/__init__.py | CFD-UTSA/Turbulence-stars | 354d02e38d15e3b0d1f751b43f430dbd3a14c250 | [
"MIT"
] | 21 | 2015-06-10T17:10:06.000Z | 2022-02-28T15:59:42.000Z | from .cramer import Cramer_Distance
| 18.5 | 36 | 0.837838 | 5 | 37 | 6 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.135135 | 37 | 1 | 37 | 37 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
12976225cb7bcedcf8e7879c9a954c9887221191 | 109 | py | Python | Het Toolkit/HetKit_BETA_June_2020/lib/cropping.py | michaelcolman/Sub_cellular_heterogeneity_TOOLKIT | 24c90150429a39159e3ebed654d7ef43260a5aff | [
"CC0-1.0"
] | null | null | null | Het Toolkit/HetKit_BETA_June_2020/lib/cropping.py | michaelcolman/Sub_cellular_heterogeneity_TOOLKIT | 24c90150429a39159e3ebed654d7ef43260a5aff | [
"CC0-1.0"
] | null | null | null | Het Toolkit/HetKit_BETA_June_2020/lib/cropping.py | michaelcolman/Sub_cellular_heterogeneity_TOOLKIT | 24c90150429a39159e3ebed654d7ef43260a5aff | [
"CC0-1.0"
] | null | null | null | from PIL import Image
import numpy as np
def crop_data(data, x1, x2, y1, y2):
    # NumPy indexing is (rows, cols): the y bounds select rows, the x bounds columns.
    return data[y1:y2, x1:x2]
| 12.111111 | 36 | 0.688073 | 22 | 109 | 3.363636 | 0.681818 | 0.108108 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.091954 | 0.201835 | 109 | 8 | 37 | 13.625 | 0.758621 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.5 | 0.25 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
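A quick usage sketch for crop_data above (the array shape and bounds are arbitrary):
frame = np.zeros((100, 100))
patch = crop_data(frame, 20, 60, 10, 50)  # rows 10..49, columns 20..59
assert patch.shape == (40, 40)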
12d1c7f2e9c6e024bd2d086c86e90d250f02e52f | 270 | py | Python | tests/helpers/custom_exceptions.py | aiincidentdatabase/nlp-lambdas | b7b5fa864c2eedbcc3f829bd672f6842e7b30a2f | [
"MIT"
] | null | null | null | tests/helpers/custom_exceptions.py | aiincidentdatabase/nlp-lambdas | b7b5fa864c2eedbcc3f829bd672f6842e7b30a2f | [
"MIT"
] | null | null | null | tests/helpers/custom_exceptions.py | aiincidentdatabase/nlp-lambdas | b7b5fa864c2eedbcc3f829bd672f6842e7b30a2f | [
"MIT"
] | null | null | null | # Some custom exceptions to help categorize testing
class JsonException(Exception): pass
class SamOutputException(Exception): pass
class SamExecutionException(Exception): pass
class StartApiTimeoutException(Exception): pass
class InternalServerException(Exception): pass | 45 | 51 | 0.859259 | 27 | 270 | 8.592593 | 0.555556 | 0.280172 | 0.310345 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081481 | 270 | 6 | 52 | 45 | 0.935484 | 0.181481 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
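A hedged sketch of how these phase-specific exception classes might be used so a test harness can tell failure categories apart; `parse_sam_output` and its payload are hypothetical, and `SamOutputException` is re-declared so the snippet stands alone:

import json

class SamOutputException(Exception): pass

def parse_sam_output(raw):
    # Re-raise JSON decoding problems under the SAM-output failure category
    try:
        return json.loads(raw)
    except ValueError as exc:
        raise SamOutputException('unparseable SAM output: {}'.format(exc)) from exc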
12fd573278dad43f3edd63f9e489feb89e4add6f | 166 | py | Python | shapeworld/realizers/__init__.py | ProKil/ShapeWorld | c68379dca207b6e3bf0ea38eba61895cf6f4e5a2 | [
"MIT"
] | 52 | 2017-02-07T12:02:11.000Z | 2022-03-09T10:35:52.000Z | shapeworld/realizers/__init__.py | ProKil/ShapeWorld | c68379dca207b6e3bf0ea38eba61895cf6f4e5a2 | [
"MIT"
] | 30 | 2017-11-29T15:18:48.000Z | 2021-12-12T10:27:08.000Z | shapeworld/realizers/__init__.py | ProKil/ShapeWorld | c68379dca207b6e3bf0ea38eba61895cf6f4e5a2 | [
"MIT"
] | 27 | 2017-04-18T21:14:29.000Z | 2021-07-08T14:14:00.000Z | from shapeworld.realizers.realizer import CaptionRealizer
from shapeworld.realizers.dmrs.realizer import DmrsRealizer
__all__ = ['CaptionRealizer', 'DmrsRealizer']
| 27.666667 | 59 | 0.837349 | 16 | 166 | 8.4375 | 0.5625 | 0.207407 | 0.340741 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.084337 | 166 | 5 | 60 | 33.2 | 0.888158 | 0 | 0 | 0 | 0 | 0 | 0.162651 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
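The `__all__` list in the row above controls what a star import exposes; a tiny standalone illustration with stand-in names:

# mymod.py (stand-in module)
class CaptionRealizer: pass
class DmrsRealizer: pass

def helper(): pass          # importable directly, but hidden from star imports

__all__ = ['CaptionRealizer', 'DmrsRealizer']

# elsewhere: `from mymod import *` binds only CaptionRealizer and DmrsRealizer;
# `helper` stays out of the importing namespace.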
420757967e98393711499129518c401e08af2ce8 | 28,602 | py | Python | sdrfgui/view.py | spoortiramesh/sdrfgui | 7d51fcde085d0b2b65c77e090450d9843db1ca70 | [
"BSD-2-Clause"
] | null | null | null | sdrfgui/view.py | spoortiramesh/sdrfgui | 7d51fcde085d0b2b65c77e090450d9843db1ca70 | [
"BSD-2-Clause"
] | null | null | null | sdrfgui/view.py | spoortiramesh/sdrfgui | 7d51fcde085d0b2b65c77e090450d9843db1ca70 | [
"BSD-2-Clause"
] | null | null | null | import csv
import pandas as pd
from tkinter import *
import tkinter as tk
from PIL import ImageTk, Image
import glob, os, shutil
from tkinter import filedialog
class View():
def __init__(self, model):
self.model = model
self.secs = ["Default", "Human", "Vertebrates", "Non-vertebrates", "Plants", "Cell lines"]
self.master = Tk()
self.master.geometry("800x800")
self.master.title("Sweet GUI") # Adding a title
self.v = StringVar(self.master)
self.v.set(self.secs[0])
self.data = []
self.sourcePath = None
# Load/Save sdrfs Buttons
[self.load_sdrf_button, self.save_sdrf_button] = self.create_load_save_buttons()
self.folder_path = StringVar()
self.lbl1 = Label(self.master,textvariable=self.folder_path)
self.lbl1.grid(row=0, column=1)
self.buttonBrowse = Button(text="Browse folder", command=self.browse_button)
self.buttonBrowse.grid(row=1, column=3)
Button(self.master, text='Set directory', command=self.set_dir).grid(row=1, column=4, sticky=W, pady=4)
self.w = OptionMenu(self.master, self.v, *self.secs, command=self.on_option_change)
self.w.grid(row=1, column=0)
self.laba = tk.Label(self.master, text='No. of raw files: ')
self.laba.grid(row=1,column=7,sticky=W)
self.ent0 = tk.Entry(self.master)
self.ent0.grid(row=1, column=9)
self.okbtn = tk.Button(self.master, text='OK',command=self.on_click)
self.okbtn.grid(row=1, column=10)
def create_load_save_buttons(self):
""" Creates the load and save Button and sets them on the bottom right"""
# Pack the two buttons together
f = Frame(self.master)
f.grid(row=1, column=0, columnspan=1)
save = tk.Button(
f, text="Save SDRF"
)
load = tk.Button(
f, text="Load SDRF"
)
load.grid(row=0, column=0)
save.grid(row=0, column=1)
# Position at bottom right
f.place(relx=0.99, rely=0.99, anchor="se")
return load, save
def next_step(self, entry):
if entry.get():
# the user entered data in the mandatory entry: proceed to next step
instruction_label= Label(self.master, text='Next step')
instruction_label.grid(row=21, column=0, sticky=E)
else:
# the mandatory field is empty
instruction_label = Label(self.master, text="mandatory data missing")
instruction_label.grid(row=16, column=0, sticky=E)
entry.focus_set()
def browse_button(self):
# Allow user to select a directory and store it in global var
# called folder_path
filename = filedialog.askdirectory()
self.folder_path.set(filename)
print(filename)
def set_dir(self):
self.sourcePath = self.folder_path.get()
os.chdir(self.sourcePath) # Provide the path here
def saveinfo(self):
valor0=entry0.get()
valor1 = entry1.get()
valor2 = entry2.get()
valor3 = entry3.get()
valor4 = entry4.get()
valor5 = entry5.get()
valor6 = entry6.get()
valor7 = entry7.get()
valor8 = entry8.get()
valor9 = entry9.get()
valor10 = entry10.get()
valor11=entry11.get()
self.data.append([valor0,valor1, valor2, valor3,valor4,valor5,valor6,valor7,valor8,valor9,valor10,valor11])
print(self.data)
    def export(self):
        # Assemble the collected rows into a DataFrame, drop all-empty
        # columns, and write the result as a tab-separated SDRF file.
        list_human = ['Source Name','characteristics[organism]','characteristics[organism parts]','characteristics[cell type]','characteristics[developmental stage]','characteristics[disease]','characteristics[sex]','characteristics[individual]','characteristics[cell line]','comment[data file]','comment[fraction identifier]','comment[label]']
        frame = pd.DataFrame(self.data, columns=list_human)
        nan_value = float("NaN")
        frame.replace("", nan_value, inplace=True)
        frame.dropna(how='all', axis=1, inplace=True)
        filename = "sdrf_test.tsv"
        path = os.path.join(self.sourcePath, filename)
        frame.to_csv(path, sep="\t")
def saveinfo_ver(self):
valor0=entry0.get()
valor1 = entry1.get()
valor2 = entry2.get()
valor3 = entry3.get()
valor4 = entry4.get()
valor5 = entry5.get()
valor6 = entry6.get()
valor7 = entry7.get()
valor8 = entry8.get()
valor9 = entry9.get()
valor10 = entry10.get()
valor11=entry11.get()
valor12=entry12.get()
valor13=entry13.get()
valor14=entry14.get()
valor15=entry15.get()
valor16=entry16.get()
valor17=entry17.get()
valor18 = entry18.get()
self.data.append([valor0,valor1,valor2,valor3,valor4,valor5,valor6,valor7,valor8,valor9,valor10,valor11,valor12,valor13,valor14,valor15,valor16,valor17,valor18])
print(self.data)
    def export_ver(self):
        # Same export flow as export(), using the vertebrate column set.
        list_v = ['Source Name','characteristics[organism]','characteristics[age]','characteristics[developmental stage]','characteristics[sex]','characteristics[disease]','characteristics[organism part]','characteristics[cell type]','technology type','assay name','characteristics[individual]','characteristics[biological replicate]','comment[data file]','comment[technical replicate]','comment[fraction identifier]','comment[label]','comment[cleavage agent details]','comment[instrument]']
        frame = pd.DataFrame(self.data, columns=list_v)
        nan_value = float("NaN")
        frame.replace("", nan_value, inplace=True)
        frame.dropna(how='all', axis=1, inplace=True)
        filename = "sdrf_test.tsv"
        path = os.path.join(self.sourcePath, filename)
        frame.to_csv(path, sep="\t")
def saveinfo_def(self):
valor0=entry0.get()
valor1 = entry1.get()
valor3 = entry3.get()
valor4 = entry4.get()
valor5 = entry5.get()
valor6 = entry6.get()
valor7 = entry7.get()
valor8 = entry8.get()
valor9 = entry9.get()
valor10 = entry10.get()
valor11=entry11.get()
valor12=entry12.get()
valor13=entry13.get()
valor14=entry14.get()
self.data.append([valor0,valor1, valor3,valor4,valor5,valor6,valor7,valor8,valor9,valor10,valor11,valor12,valor13,valor14])
print(self.data)
    def export_def(self):
        # Same export flow as export(), using the default column set.
        list_d = ['Source Name','characteristics[organism]','characteristics[disease]','characteristics[organism part]','characteristics[cell type]','technology type','assay name','characteristics[biological replicate]','comment[data file]','comment[technical replicate]','comment[fraction identifier]','comment[label]','comment[cleavage agent details]','comment[instrument]']
        frame = pd.DataFrame(self.data, columns=list_d)
        nan_value = float("NaN")
        frame.replace("", nan_value, inplace=True)
        frame.dropna(how='all', axis=1, inplace=True)
        filename = "sdrf_test.tsv"
        path = os.path.join(self.sourcePath, filename)
        frame.to_csv(path, sep="\t")
def saveinfo_plants(self):
valor0=entry0.get()
valor1 = entry1.get()
valor2 = entry2.get()
valor3 = entry3.get()
valor4 = entry4.get()
valor5 = entry5.get()
valor6 = entry6.get()
valor7 = entry7.get()
valor8 = entry8.get()
valor9 = entry9.get()
valor10 = entry10.get()
valor11=entry11.get()
valor12=entry12.get()
valor13=entry13.get()
valor14=entry14.get()
valor15=entry15.get()
valor16=entry16.get()
valor17=entry17.get()
valor18 = entry18.get()
valor19 = entry19.get()
self.data.append([valor0,valor1, valor2, valor3,valor4,valor5,valor6,valor7,valor8,valor9,valor10,valor11,valor12,valor13,valor14,valor15,valor16,valor17,valor18,valor19])
print(self.data)
    def export_plants(self):
        # Same export flow as export(), using the plant column set.
        list_p = ['Source Name','characteristics[organism]','characteristics[ecotype/cultivar]','characteristics[age]','characteristics[developmental stage]','characteristics[organism part]','characteristics[cell type]','technology type','assay name','characteristics[individual]','characteristics[biological replicate]','comment[data file]','comment[technical replicate]','comment[fraction identifier]','comment[label]','comment[cleavage agent details]','comment[instrument]']
        frame = pd.DataFrame(self.data, columns=list_p)
        nan_value = float("NaN")
        frame.replace("", nan_value, inplace=True)
        frame.dropna(how='all', axis=1, inplace=True)
        filename = "sdrf_test.tsv"
        path = os.path.join(self.sourcePath, filename)
        frame.to_csv(path, sep="\t")
def on_option_change(self, event):
global entry0
global entry1
global entry2
global entry3
global entry4
global entry5
global entry6
global entry7
global entry8
global entry9
global entry10
global entry11
global entry12
global entry13
global entry14
global entry15
global entry16
global entry17
global entry18
global entry19
if self.v.get() == 'Human':
entry0 = tk.Entry(self.master)
entry0.grid(row=2, column=1)
lab0 = tk.Label(self.master, text='Source Name')
lab0.grid(row=2, column=0, sticky=E)
entry1 = tk.Entry(self.master)
entry1.grid(row=3, column=1)
lab1 = tk.Label(self.master, text='characteristics[organism]')
lab1.grid(row=3, column=0, sticky=E)
entry2 = tk.Entry(self.master)
entry2.grid(row=4, column=1)
lab2 = tk.Label(self.master, text='characteristics[organism parts]')
lab2.grid(row=4, column=0, sticky=E)
entry3 = tk.Entry(self.master)
entry3.grid(row=5, column=1)
lab3 = tk.Label(self.master, text='characteristics[cell type]')
lab3.grid(row=5,column=0,sticky=E)
entry4 = tk.Entry(self.master)
entry4.grid(row=6, column=1)
lab4 = tk.Label(self.master, text='characteristics[developmental stage]')
lab4.grid(row=6, column=0, sticky=E)
entry5 = tk.Entry(self.master)
entry5.grid(row=7, column=1)
lab5 = tk.Label(self.master, text='characteristics[disease]')
lab5.grid(row=7, column=0, sticky=E)
entry6 = tk.Entry(self.master)
entry6.grid(row=8, column=1)
lab6 = tk.Label(self.master, text='characteristics[sex]')
lab6.grid(row=8, column=0, sticky=E)
entry7 = tk.Entry(self.master)
entry7.grid(row=9, column=1)
lab7 = tk.Label(self.master, text='characteristics[individual]')
lab7.grid(row=9, column=0, sticky=E)
entry8 = tk.Entry(self.master)
entry8.grid(row=10, column=1)
lab8=tk.Label(self.master, text='characteristics[cell line]')
lab8.grid(row=10, column=0, sticky=E)
entry9 = tk.Entry(self.master)
entry9.grid(row=11, column=1)
lab9 = tk.Label(self.master, text='comment[data file]')
lab9.grid(row=11, column=0, sticky=E)
entry10 = tk.Entry(self.master)
entry10.grid(row=12, column=1)
lab10 = tk.Label(self.master, text='comment[fraction identifier]')
lab10.grid(row=12, column=0, sticky=E)
entry11 = tk.Entry(self.master)
entry11.grid(row=13, column=1)
lab11 = tk.Label(self.master, text='comment[label]')
lab11.grid(row=13,column=0,sticky=E)
button1 = tk.Button(text='Save',command=self.saveinfo)
button1.grid(row=20, column=1, sticky=W)
button2 = tk.Button(text='Export as .tsv', command=self.export)
button2.grid(row=20, column=2, sticky=W)
elif self.v.get() == 'Vertebrates':
entry0 = tk.Entry(self.master)
entry0.grid(row=2, column=1)
lab0 = tk.Label(self.master, text='Source Name (*):')
nxtbt0 = tk.Button(self.master, text='OK', command=lambda : self.next_step(entry0))
lab0.grid(row=2, column=0, sticky=E)
entry1 = tk.Entry(self.master)
entry1.grid(row=3, column=1)
lab1 = tk.Label(self.master, text='characteristics[organism] (*): ')
nxtbt1 = tk.Button(self.master, text='OK', command=lambda: self.next_step(entry0))
lab1.grid(row=3, column=0, sticky=E)
entry2 = tk.Entry(self.master)
entry2.grid(row=4, column=1)
lab2 = tk.Label(self.master, text='characteristics[age]')
lab2.grid(row=4, column=0, sticky=E)
entry5 = tk.Entry(self.master)
entry5.grid(row=5, column=1)
lab5 = tk.Label(self.master, text='characteristics[developmental stage]')
lab5.grid(row=5,column=0,sticky=E)
entry6 = tk.Entry(self.master)
entry6.grid(row=6, column=1)
lab6 = tk.Label(self.master, text='characteristics[sex]')
lab6.grid(row=6, column=0, sticky=E)
entry7 = tk.Entry(self.master)
entry7.grid(row=7, column=1)
lab7 = tk.Label(self.master, text='characteristics[disease] (*):')
nxtbt7 = tk.Button(self.master, text='OK', command=lambda: self.next_step(entry0))
lab7.grid(row=7, column=0, sticky=E)
entry8 = tk.Entry(self.master)
entry8.grid(row=10, column=1)
lab8 = tk.Label(self.master, text='characteristics[organism part] (*):')
nxtbt8 = tk.Button(self.master, text='OK', command=lambda: self.next_step(entry0))
lab8.grid(row=10, column=0, sticky=E)
entry9 = tk.Entry(self.master)
entry9.grid(row=11, column=1)
lab9 = tk.Label(self.master, text='characteristics[cell type]:')
lab9.grid(row=11, column=0, sticky=E)
entry10 = tk.Entry(self.master)
entry10.grid(row=12, column=1)
nxtbt10 = tk.Button(self.master, text='OK', command=lambda: self.next_step(entry0))
            lab10 = tk.Label(self.master, text='technology type (*):')
lab10.grid(row=12, column=0, sticky=E)
entry11 = tk.Entry(self.master)
entry11.grid(row=13, column=1)
lab11 = tk.Label(self.master, text='assay name')
lab11.grid(row=13, column=0, sticky=E)
entry12 = tk.Entry(self.master)
entry12.grid(row=14, column=1)
lab12 = tk.Label(self.master, text='characteristics[individual]')
lab12.grid(row=14, column=0, sticky=E)
entry13 = tk.Entry(self.master)
entry13.grid(row=15, column=1)
lab13 = tk.Label(self.master, text='characteristics[biological replicate] (*):')
nxtbt13= tk.Button(self.master, text='OK', command=lambda: self.next_step(entry0))
lab13.grid(row=15,column=0,sticky=E)
entry14 = tk.Entry(self.master)
entry14.grid(row=16, column=1)
lab14 = tk.Label(self.master, text='comment[data file] (*):')
nxtbt14 = tk.Button(self.master, text='OK', command=lambda: self.next_step(entry0))
lab14.grid(row=16,column=0,sticky=E)
entry15 = tk.Entry(self.master)
entry15.grid(row=17, column=1)
lab15 = tk.Label(self.master, text='comment[technical replicate] (*):')
nxtbt15 = tk.Button(self.master, text='OK', command=lambda: self.next_step(entry0))
lab15.grid(row=17,column=0,sticky=E)
entry16 = tk.Entry(self.master)
entry16.grid(row=18, column=1)
lab16 = tk.Label(self.master, text='comment[fraction identifier] (*):')
nxtbt16 = tk.Button(self.master, text='OK', command=lambda: self.next_step(entry0))
lab16.grid(row=18,column=0,sticky=E)
entry17 = tk.Entry(self.master)
entry17.grid(row=19, column=1)
lab17 = tk.Label(self.master, text='comment[label] (*):')
nxtbt17 = tk.Button(self.master, text='OK', command=lambda: self.next_step(entry0))
lab17.grid(row=19,column=0,sticky=E)
entry18 = tk.Entry(self.master)
entry18.grid(row=20, column=1)
lab18= tk.Label(self.master, text='comment[cleavage agent details] (*):')
nxtbt18 = tk.Button(self.master, text='OK', command=lambda: self.next_step(entry0))
lab18.grid(row=20,column=0,sticky=E)
entry19 = tk.Entry(self.master)
entry19.grid(row=21, column=1)
lab19 = tk.Label(self.master, text='comment[instrument] (*):')
nxtbt19 = tk.Button(self.master, text='OK', command=lambda: self.next_step(entry0))
lab19.grid(row=21,column=0,sticky=E)
button1 = tk.Button(text='Save',command=self.saveinfo_ver)
button1.grid(row=22, column=1, sticky=W)
button2 = tk.Button(text='Export as .tsv', command=self.export_ver)
button2.grid(row=22, column=2, sticky=W)
elif self.v.get() == 'Default':
entry0 = tk.Entry(self.master)
entry0.grid(row=2, column=1)
lab0 = tk.Label(self.master, text='Source Name (*) :')
nxtbt0 = tk.Button(self.master, text='OK', command=lambda: self.next_step(entry0))
lab0.grid(row=2, column=0, sticky=E)
entry1 = tk.Entry(self.master)
entry1.grid(row=3, column=1)
lab1 = tk.Label(self.master, text='characteristics[organism] (*) :')
nxtbt1 = tk.Button(self.master, text='OK', command=lambda: self.next_step(entry0))
lab1.grid(row=3, column=0, sticky=E)
entry3 = tk.Entry(self.master)
entry3.grid(row=5, column=1)
lab3 = tk.Label(self.master, text='characteristics[disease] (*):')
nxtbt3= tk.Button(self.master, text='OK', command=lambda: self.next_step(entry0))
lab3.grid(row=5,column=0,sticky=E)
entry4 = tk.Entry(self.master)
entry4.grid(row=6, column=1)
lab4 = tk.Label(self.master, text='characteristics[organism part] (*)')
nxtbt4= tk.Button(self.master, text='OK', command=lambda: self.next_step(entry0))
lab4.grid(row=6, column=0, sticky=E)
entry5 = tk.Entry(self.master)
entry5.grid(row=7, column=1)
lab5 = tk.Label(self.master, text='characteristics[cell type] (*):')
nxtbt5= tk.Button(self.master, text='OK', command=lambda: self.next_step(entry0))
lab5.grid(row=7, column=0, sticky=E)
entry6 = tk.Entry(self.master)
entry6.grid(row=8, column=1)
lab6 = tk.Label(self.master, text='technology type (*)')
nxtbt6= tk.Button(self.master, text='OK', command=lambda: self.next_step(entry0))
lab6.grid(row=8, column=0, sticky=E)
entry7 = tk.Entry(self.master)
entry7.grid(row=9, column=1)
lab7 = tk.Label(self.master, text='assay name')
lab7.grid(row=9, column=0, sticky=E)
entry8 = tk.Entry(self.master)
entry8.grid(row=10, column=1)
lab8= tk.Label(self.master, text='characteristics[biological replicate] (*):')
nxtbt8= tk.Button(self.master, text='OK', command=lambda: self.next_step(entry0))
lab8.grid(row=10,column=0,sticky=E)
entry9 = tk.Entry(self.master)
entry9.grid(row=11, column=1)
lab9= tk.Label(self.master, text='comment[data file] (*):')
nxtbt9= tk.Button(self.master, text='OK', command=lambda: self.next_step(entry0))
lab9.grid(row=11,column=0,sticky=E)
entry10 = tk.Entry(self.master)
entry10.grid(row=12, column=1)
lab10 = tk.Label(self.master, text='comment[technical replicate] (*):')
nxtbt10 = tk.Button(self.master, text='OK', command=lambda: self.next_step(entry0))
lab10.grid(row=12,column=0,sticky=E)
entry11 = tk.Entry(self.master)
entry11.grid(row=13, column=1)
lab11 = tk.Label(self.master, text='comment[fraction identifier] (*):')
nxtbt11 = tk.Button(self.master, text='OK', command=lambda: self.next_step(entry0))
lab11.grid(row=13,column=0,sticky=E)
entry12 = tk.Entry(self.master)
entry12.grid(row=14, column=1)
lab12 = tk.Label(self.master, text='comment[label] (*):')
nxtbt12 = tk.Button(self.master, text='OK', command=lambda: self.next_step(entry0))
lab12.grid(row=14,column=0,sticky=E)
entry13 = tk.Entry(self.master)
entry13.grid(row=15, column=1)
lab13= tk.Label(self.master, text='comment[cleavage agent details] (*):')
nxtbt13 = tk.Button(self.master, text='OK', command=lambda: self.next_step(entry0))
lab13.grid(row=15,column=0,sticky=E)
entry14 = tk.Entry(self.master)
entry14.grid(row=16, column=1)
lab14 = tk.Label(self.master, text='comment[instrument] (*):')
nxtbt14 = tk.Button(self.master, text='OK', command=lambda: self.next_step(entry0))
lab14.grid(row=16,column=0,sticky=E)
button1 = tk.Button(text='Save',command=self.saveinfo_def)
button1.grid(row=17, column=1, sticky=W)
button2 = tk.Button(text='Export as .tsv', command=self.export_def)
button2.grid(row=17, column=2, sticky=W)
elif self.v.get() == 'Plants':
entry0 = tk.Entry(self.master)
entry0.grid(row=2, column=1)
lab0 = tk.Label(self.master, text='Source Name(*): ')
nxtbt0 = tk.Button(self.master, text='OK', command=lambda: self.next_step(entry0))
lab0.grid(row=2, column=0, sticky=E)
entry1 = tk.Entry(self.master)
entry1.grid(row=3, column=1)
lab1 = tk.Label(self.master, text='characteristics[organism](*): ')
nxtbt1 = tk.Button(self.master, text='OK', command=lambda: self.next_step(entry0))
lab1.grid(row=3, column=0, sticky=E)
            entry2 = tk.Entry(self.master)
            entry2.grid(row=4, column=1)
            lab2 = tk.Label(self.master, text='characteristics[ecotype/cultivar] :')
            lab2.grid(row=4, column=0, sticky=E)
entry3 = tk.Entry(self.master)
entry3.grid(row=5, column=1)
lab3 = tk.Label(self.master, text='characteristics[age] : ')
lab3.grid(row=5, column=0, sticky=E)
entry4 = tk.Entry(self.master)
entry4.grid(row=6, column=1)
lab4 = tk.Label(self.master, text='characteristics[developmental stage] : ')
lab4.grid(row=6, column=0, sticky=E)
entry5 = tk.Entry(self.master)
entry5.grid(row=7, column=1)
lab5 = tk.Label(self.master, text='characteristics[organism part](*): ')
nxtbt5 = tk.Button(self.master, text='OK', command=lambda: self.next_step(entry0))
lab5.grid(row=7, column=0, sticky=E)
entry6 = tk.Entry(self.master)
entry6.grid(row=8, column=1)
lab6 = tk.Label(self.master, text='characteristics[cell type] (*): ')
nxtbt6 = tk.Button(self.master, text='OK', command=lambda: self.next_step(entry0))
lab6.grid(row=8, column=0, sticky=E)
entry7 = tk.Entry(self.master)
entry7.grid(row=9, column=1)
lab7 = tk.Label(self.master, text='technology type (*) : ')
nxtbt7 = tk.Button(self.master, text='OK', command=lambda: self.next_step(entry0))
lab7.grid(row=9, column=0, sticky=E)
entry8 = tk.Entry(self.master)
entry8.grid(row=10, column=1)
lab8 = tk.Label(self.master, text='assay name: ')
lab8.grid(row=10, column=0, sticky=E)
entry9 = tk.Entry(self.master)
entry9.grid(row=11, column=1)
lab9 = tk.Label(self.master, text='characteristics[individual]: ')
lab9.grid(row=11, column=0, sticky=E)
entry10 = tk.Entry(self.master)
entry10.grid(row=12, column=1)
lab10= tk.Label(self.master, text='characteristics[biological replicate](*): ')
nxtbt10 = tk.Button(self.master, text='OK', command=lambda: self.next_step(entry0))
lab10.grid(row=12, column=0, sticky=E)
entry11 = tk.Entry(self.master)
entry11.grid(row=13, column=1)
lab11 = tk.Label(self.master, text='comment[data file](*): ')
nxtbt11 = tk.Button(self.master, text='OK', command=lambda: self.next_step(entry0))
lab11.grid(row=13, column=0, sticky=E)
entry12 = tk.Entry(self.master)
entry12.grid(row=14, column=1)
lab12 = tk.Label(self.master, text='comment[technical replicate](*): ')
lab12.grid(row=14, column=0, sticky=E)
entry13 = tk.Entry(self.master)
entry13.grid(row=15, column=1)
lab13 = tk.Label(self.master, text='comment[fraction identifier](*): ')
lab13.grid(row=15, column=0, sticky=E)
entry14 = tk.Entry(self.master)
entry14.grid(row=16, column=1)
lab14 = tk.Label(self.master, text='comment[label](*): ')
lab14.grid(row=16, column=0, sticky=E)
entry15 = tk.Entry(self.master)
entry15.grid(row=17, column=1)
lab15 = tk.Label(self.master, text='comment[cleavage agent details](*): ')
lab15.grid(row=17, column=0, sticky=E)
entry16 = tk.Entry(self.master)
entry16.grid(row=18, column=1)
lab16 = tk.Label(self.master, text='comment[instrument](*): ')
lab16.grid(row=18, column=0, sticky=E)
button1 = tk.Button(text='Save',command=self.saveinfo_plants)
button1.grid(row=19, column=1, sticky=W)
button2 = tk.Button(text='Export as .tsv', command=self.export_plants)
button2.grid(row=19, column=2, sticky=W)
#lab2 = tk.Label(self.master, text=secs[0])
#lab2.grid(row=2, column=1, sticky=W)
def on_click(self):
num = self.ent0.get()
if num.isdigit():
numl= tk.Label(self.master, text=num)
numl.grid(row=1,column=8)
self.ent0.destroy()
self.okbtn.destroy()
else:
numno = tk.Label(self.master, text='Enter valid number')
numno.grid(row=1, column=6)
| 44.482115 | 493 | 0.569121 | 3,487 | 28,602 | 4.637224 | 0.087468 | 0.10637 | 0.088312 | 0.078726 | 0.832777 | 0.804576 | 0.790785 | 0.763946 | 0.736054 | 0.723871 | 0 | 0.056191 | 0.291308 | 28,602 | 642 | 494 | 44.551402 | 0.741539 | 0.015244 | 0 | 0.596457 | 0 | 0 | 0.126794 | 0.039861 | 0 | 0 | 0 | 0 | 0 | 1 | 0.029528 | false | 0 | 0.015748 | 0 | 0.049213 | 0.009843 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
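The `on_option_change` branches in the row above rebuild near-identical Entry/Label/grid boilerplate for every field; a minimal refactoring sketch (an illustration, not part of the original file) that derives each form from a list of `(label_text, required)` pairs:

import tkinter as tk

def build_form(master, fields, start_row=2):
    """Create one labelled Entry per (label_text, required) pair; return the entries."""
    entries = []
    for offset, (label_text, required) in enumerate(fields):
        row = start_row + offset
        suffix = ' (*):' if required else ':'
        tk.Label(master, text=label_text + suffix).grid(row=row, column=0, sticky=tk.E)
        entry = tk.Entry(master)
        entry.grid(row=row, column=1)
        entries.append(entry)
    return entries

# e.g. build_form(master, [('Source Name', True), ('characteristics[organism]', True)])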
4279e5c75f86dd9005104299ccc3681d870b3c09 | 208 | py | Python | Trakttv.bundle/Contents/Libraries/Shared/stash/serializers/s_none.py | disrupted/Trakttv.bundle | 24712216c71f3b22fd58cb5dd89dad5bb798ed60 | [
"RSA-MD"
] | 1,346 | 2015-01-01T14:52:24.000Z | 2022-03-28T12:50:48.000Z | Trakttv.bundle/Contents/Libraries/Shared/stash/serializers/s_none.py | alcroito/Plex-Trakt-Scrobbler | 4f83fb0860dcb91f860d7c11bc7df568913c82a6 | [
"RSA-MD"
] | 474 | 2015-01-01T10:27:46.000Z | 2022-03-21T12:26:16.000Z | Trakttv.bundle/Contents/Libraries/Shared/stash/serializers/s_none.py | alcroito/Plex-Trakt-Scrobbler | 4f83fb0860dcb91f860d7c11bc7df568913c82a6 | [
"RSA-MD"
] | 191 | 2015-01-02T18:27:22.000Z | 2022-03-29T10:49:48.000Z | from stash.serializers.core.base import Serializer
class NoneSerializer(Serializer):
__key__ = 'none'
def dumps(self, value):
return value
def loads(self, value):
return value
| 17.333333 | 50 | 0.673077 | 24 | 208 | 5.666667 | 0.708333 | 0.132353 | 0.220588 | 0.294118 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.245192 | 208 | 11 | 51 | 18.909091 | 0.866242 | 0 | 0 | 0.285714 | 0 | 0 | 0.019231 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0 | 0.142857 | 0.285714 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
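A short usage sketch of the pass-through serializer above: the 'none' key means identity, so a dump/load round trip returns the value unchanged (the class is re-declared here without the stash `Serializer` base so the snippet stands alone):

class NoneSerializer:
    __key__ = 'none'

    def dumps(self, value):
        return value

    def loads(self, value):
        return value

s = NoneSerializer()
assert s.loads(s.dumps({'a': 1})) == {'a': 1}   # round trip is the identity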
428868fd84d87c40b7b03734ee96765327716b92 | 45 | py | Python | models/informer/__init__.py | liuaoy/deep-time-series | 569ca2173b3c033522c3c3a6333b86270bcfa2e7 | [
"Apache-2.0"
] | null | null | null | models/informer/__init__.py | liuaoy/deep-time-series | 569ca2173b3c033522c3c3a6333b86270bcfa2e7 | [
"Apache-2.0"
] | null | null | null | models/informer/__init__.py | liuaoy/deep-time-series | 569ca2173b3c033522c3c3a6333b86270bcfa2e7 | [
"Apache-2.0"
] | null | null | null | from .informer import Informer, InformerStack | 45 | 45 | 0.866667 | 5 | 45 | 7.8 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088889 | 45 | 1 | 45 | 45 | 0.95122 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
42aac5e3a42fd58f56acfe4a4cc52ccdabd0149a | 23,246 | py | Python | vel/rl/test/test_integration.py | galatolofederico/vel | 0473648cffb3f34fb784d12dbb25844ab58ffc3c | [
"MIT"
] | 273 | 2018-09-01T08:54:34.000Z | 2022-02-02T13:22:51.000Z | vel/rl/test/test_integration.py | braincorp/vel | bdf9d9eb6ed66278330e8cbece307f6e63ce53c6 | [
"MIT"
] | 47 | 2018-08-17T11:27:08.000Z | 2022-03-11T23:26:55.000Z | vel/rl/test/test_integration.py | braincorp/vel | bdf9d9eb6ed66278330e8cbece307f6e63ce53c6 | [
"MIT"
] | 37 | 2018-10-11T22:56:57.000Z | 2020-10-06T19:53:05.000Z | import torch
import torch.optim as optim
from vel.modules.input.image_to_tensor import ImageToTensorFactory
from vel.modules.input.normalize_observations import NormalizeObservationsFactory
from vel.rl.buffers.circular_replay_buffer import CircularReplayBuffer
from vel.rl.buffers.prioritized_circular_replay_buffer import PrioritizedCircularReplayBuffer
from vel.rl.commands.rl_train_command import FrameTracker
from vel.rl.env_roller.step_env_roller import StepEnvRoller
from vel.rl.env_roller.trajectory_replay_env_roller import TrajectoryReplayEnvRoller
from vel.rl.env_roller.transition_replay_env_roller import TransitionReplayEnvRoller
from vel.rl.metrics import EpisodeRewardMetric
from vel.rl.modules.noise.eps_greedy import EpsGreedy
from vel.rl.modules.noise.ou_noise import OuNoise
from vel.schedules.linear import LinearSchedule
from vel.schedules.linear_and_constant import LinearAndConstantSchedule
from vel.util.random import set_seed
from vel.rl.env.classic_atari import ClassicAtariEnv
from vel.rl.env.mujoco import MujocoEnv
from vel.rl.vecenv.subproc import SubprocVecEnvWrapper
from vel.rl.vecenv.dummy import DummyVecEnvWrapper
from vel.rl.models.stochastic_policy_model import StochasticPolicyModelFactory
from vel.rl.models.q_stochastic_policy_model import QStochasticPolicyModelFactory
from vel.rl.models.q_model import QModelFactory
from vel.rl.models.deterministic_policy_model import DeterministicPolicyModelFactory
from vel.rl.models.stochastic_policy_model_separate import StochasticPolicyModelSeparateFactory
from vel.rl.models.backbone.nature_cnn import NatureCnnFactory
from vel.rl.models.backbone.mlp import MLPFactory
from vel.rl.reinforcers.on_policy_iteration_reinforcer import (
OnPolicyIterationReinforcer, OnPolicyIterationReinforcerSettings
)
from vel.rl.reinforcers.buffered_off_policy_iteration_reinforcer import (
BufferedOffPolicyIterationReinforcer, BufferedOffPolicyIterationReinforcerSettings
)
from vel.rl.reinforcers.buffered_mixed_policy_iteration_reinforcer import (
BufferedMixedPolicyIterationReinforcer, BufferedMixedPolicyIterationReinforcerSettings
)
from vel.rl.algo.dqn import DeepQLearning
from vel.rl.algo.policy_gradient.a2c import A2CPolicyGradient
from vel.rl.algo.policy_gradient.ppo import PpoPolicyGradient
from vel.rl.algo.policy_gradient.trpo import TrpoPolicyGradient
from vel.rl.algo.policy_gradient.acer import AcerPolicyGradient
from vel.rl.algo.policy_gradient.ddpg import DeepDeterministicPolicyGradient
from vel.api.info import TrainingInfo, EpochInfo
CPU_DEVICE = torch.device('cpu')
def test_a2c_breakout():
"""
    Run a single training iteration of A2C on Atari Breakout
"""
seed = 1001
# Set random seed in python std lib, numpy and pytorch
set_seed(seed)
    # Create 16 environments evaluated in parallel in subprocesses with all the usual DeepMind wrappers
# These are just helper functions for that
vec_env = SubprocVecEnvWrapper(
ClassicAtariEnv('BreakoutNoFrameskip-v4'), frame_history=4
).instantiate(parallel_envs=16, seed=seed)
# Again, use a helper to create a model
# But because model is owned by the reinforcer, model should not be accessed using this variable
# but from reinforcer.model property
model = StochasticPolicyModelFactory(
input_block=ImageToTensorFactory(),
backbone=NatureCnnFactory(input_width=84, input_height=84, input_channels=4)
).instantiate(action_space=vec_env.action_space)
# Reinforcer - an object managing the learning process
reinforcer = OnPolicyIterationReinforcer(
device=CPU_DEVICE,
settings=OnPolicyIterationReinforcerSettings(
batch_size=256,
number_of_steps=5
),
model=model,
algo=A2CPolicyGradient(
entropy_coefficient=0.01,
value_coefficient=0.5,
discount_factor=0.99,
max_grad_norm=0.5
),
env_roller=StepEnvRoller(
environment=vec_env,
device=CPU_DEVICE
)
)
# Model optimizer
optimizer = optim.RMSprop(reinforcer.model.parameters(), lr=7.0e-4, eps=1e-3)
# Overall information store for training information
training_info = TrainingInfo(
metrics=[
EpisodeRewardMetric('episode_rewards'), # Calculate average reward from episode
],
        callbacks=[]  # no callbacks are needed for this short test
)
# A bit of training initialization bookkeeping...
training_info.initialize()
reinforcer.initialize_training(training_info)
training_info.on_train_begin()
    # Run a single one-batch epoch so the test finishes quickly
num_epochs = 1
# Normal handrolled training loop
for i in range(1, num_epochs+1):
epoch_info = EpochInfo(
training_info=training_info,
global_epoch_idx=i,
batches_per_epoch=1,
optimizer=optimizer
)
reinforcer.train_epoch(epoch_info, interactive=False)
training_info.on_train_end()
def test_ppo_breakout():
"""
    Run a single training iteration of PPO on Atari Breakout
"""
device = torch.device('cpu')
seed = 1001
# Set random seed in python std lib, numpy and pytorch
set_seed(seed)
    # Create 8 environments evaluated in parallel in subprocesses with all the usual DeepMind wrappers
# These are just helper functions for that
vec_env = SubprocVecEnvWrapper(
ClassicAtariEnv('BreakoutNoFrameskip-v4'), frame_history=4
).instantiate(parallel_envs=8, seed=seed)
# Again, use a helper to create a model
# But because model is owned by the reinforcer, model should not be accessed using this variable
# but from reinforcer.model property
model = StochasticPolicyModelFactory(
input_block=ImageToTensorFactory(),
backbone=NatureCnnFactory(input_width=84, input_height=84, input_channels=4)
).instantiate(action_space=vec_env.action_space)
# Reinforcer - an object managing the learning process
reinforcer = OnPolicyIterationReinforcer(
device=device,
settings=OnPolicyIterationReinforcerSettings(
number_of_steps=12,
batch_size=4,
experience_replay=2,
),
model=model,
algo=PpoPolicyGradient(
entropy_coefficient=0.01,
value_coefficient=0.5,
max_grad_norm=0.5,
cliprange=LinearSchedule(0.1, 0.0),
discount_factor=0.99,
normalize_advantage=True
),
env_roller=StepEnvRoller(
environment=vec_env,
device=device,
)
)
# Model optimizer
# optimizer = optim.RMSprop(reinforcer.model.parameters(), lr=7.0e-4, eps=1e-3)
optimizer = optim.Adam(reinforcer.model.parameters(), lr=2.5e-4, eps=1e-5)
# Overall information store for training information
training_info = TrainingInfo(
metrics=[
EpisodeRewardMetric('episode_rewards'), # Calculate average reward from episode
],
callbacks=[
FrameTracker(100_000)
        ]  # FrameTracker counts the frames processed during training
)
# A bit of training initialization bookkeeping...
training_info.initialize()
reinforcer.initialize_training(training_info)
training_info.on_train_begin()
    # Run a single one-batch epoch so the test finishes quickly
num_epochs = 1
# Normal handrolled training loop
for i in range(1, num_epochs+1):
epoch_info = EpochInfo(
training_info=training_info,
global_epoch_idx=i,
batches_per_epoch=1,
optimizer=optimizer
)
reinforcer.train_epoch(epoch_info, interactive=False)
training_info.on_train_end()
def test_dqn_breakout():
"""
    Run a single training iteration of DQN on Atari Breakout
"""
device = torch.device('cpu')
seed = 1001
# Set random seed in python std lib, numpy and pytorch
set_seed(seed)
# Only single environment for DQN
vec_env = DummyVecEnvWrapper(
ClassicAtariEnv('BreakoutNoFrameskip-v4'), frame_history=4
).instantiate(parallel_envs=1, seed=seed)
# Again, use a helper to create a model
# But because model is owned by the reinforcer, model should not be accessed using this variable
# but from reinforcer.model property
model_factory = QModelFactory(
input_block=ImageToTensorFactory(),
backbone=NatureCnnFactory(input_width=84, input_height=84, input_channels=4)
)
# Reinforcer - an object managing the learning process
reinforcer = BufferedOffPolicyIterationReinforcer(
device=device,
settings=BufferedOffPolicyIterationReinforcerSettings(
rollout_steps=4,
training_steps=1,
),
environment=vec_env,
algo=DeepQLearning(
model_factory=model_factory,
double_dqn=False,
target_update_frequency=10_000,
discount_factor=0.99,
max_grad_norm=0.5
),
model=model_factory.instantiate(action_space=vec_env.action_space),
env_roller=TransitionReplayEnvRoller(
environment=vec_env,
device=device,
replay_buffer=CircularReplayBuffer(
buffer_capacity=100,
buffer_initial_size=100,
num_envs=vec_env.num_envs,
observation_space=vec_env.observation_space,
action_space=vec_env.action_space,
frame_stack_compensation=True,
frame_history=4
),
action_noise=EpsGreedy(
epsilon=LinearAndConstantSchedule(
initial_value=1.0, final_value=0.1, end_of_interpolation=0.1
),
environment=vec_env
)
)
)
# Model optimizer
optimizer = optim.RMSprop(reinforcer.model.parameters(), lr=2.5e-4, alpha=0.95, momentum=0.95, eps=1e-3)
# Overall information store for training information
training_info = TrainingInfo(
metrics=[
EpisodeRewardMetric('episode_rewards'), # Calculate average reward from episode
],
callbacks=[
FrameTracker(100_000)
        ]  # FrameTracker counts the frames processed during training
)
# A bit of training initialization bookkeeping...
training_info.initialize()
reinforcer.initialize_training(training_info)
training_info.on_train_begin()
    # Run a single one-batch epoch so the test finishes quickly
num_epochs = 1
# Normal handrolled training loop
for i in range(1, num_epochs+1):
epoch_info = EpochInfo(
training_info=training_info,
global_epoch_idx=i,
batches_per_epoch=1,
optimizer=optimizer
)
reinforcer.train_epoch(epoch_info, interactive=False)
training_info.on_train_end()
def test_prioritized_dqn_breakout():
"""
    Run a single training iteration of DQN with prioritized replay on Atari Breakout
"""
device = torch.device('cpu')
seed = 1001
# Set random seed in python std lib, numpy and pytorch
set_seed(seed)
# Only single environment for DQN
vec_env = DummyVecEnvWrapper(
ClassicAtariEnv('BreakoutNoFrameskip-v4'), frame_history=4
).instantiate(parallel_envs=1, seed=seed)
# Again, use a helper to create a model
# But because model is owned by the reinforcer, model should not be accessed using this variable
# but from reinforcer.model property
model_factory = QModelFactory(
input_block=ImageToTensorFactory(),
backbone=NatureCnnFactory(input_width=84, input_height=84, input_channels=4)
)
# Reinforcer - an object managing the learning process
reinforcer = BufferedOffPolicyIterationReinforcer(
device=device,
settings=BufferedOffPolicyIterationReinforcerSettings(
rollout_steps=4,
training_steps=1,
),
environment=vec_env,
algo=DeepQLearning(
model_factory=model_factory,
double_dqn=False,
target_update_frequency=10_000,
discount_factor=0.99,
max_grad_norm=0.5
),
model=model_factory.instantiate(action_space=vec_env.action_space),
env_roller=TransitionReplayEnvRoller(
environment=vec_env,
device=device,
replay_buffer=PrioritizedCircularReplayBuffer(
buffer_capacity=100,
buffer_initial_size=100,
num_envs=vec_env.num_envs,
observation_space=vec_env.observation_space,
action_space=vec_env.action_space,
priority_exponent=0.6,
priority_weight=LinearSchedule(
initial_value=0.4,
final_value=1.0
),
priority_epsilon=1.0e-6,
frame_stack_compensation=True,
frame_history=4
),
action_noise=EpsGreedy(
epsilon=LinearAndConstantSchedule(
initial_value=1.0, final_value=0.1, end_of_interpolation=0.1
),
environment=vec_env
)
)
)
# Model optimizer
optimizer = optim.RMSprop(reinforcer.model.parameters(), lr=2.5e-4, alpha=0.95, momentum=0.95, eps=1e-3)
# Overall information store for training information
training_info = TrainingInfo(
metrics=[
EpisodeRewardMetric('episode_rewards'), # Calculate average reward from episode
],
callbacks=[
FrameTracker(100_000)
        ]  # FrameTracker counts the frames processed during training
)
# A bit of training initialization bookkeeping...
training_info.initialize()
reinforcer.initialize_training(training_info)
training_info.on_train_begin()
    # Run a single one-batch epoch so the test finishes quickly
num_epochs = 1
# Normal handrolled training loop
for i in range(1, num_epochs+1):
epoch_info = EpochInfo(
training_info=training_info,
global_epoch_idx=i,
batches_per_epoch=1,
optimizer=optimizer
)
reinforcer.train_epoch(epoch_info, interactive=False)
training_info.on_train_end()
def test_ddpg_bipedal_walker():
"""
    Run a single training iteration of DDPG on the BipedalWalker environment
"""
device = torch.device('cpu')
seed = 1001
# Set random seed in python std lib, numpy and pytorch
set_seed(seed)
# Only single environment for DDPG
vec_env = DummyVecEnvWrapper(
MujocoEnv('BipedalWalker-v2')
).instantiate(parallel_envs=1, seed=seed)
# Again, use a helper to create a model
# But because model is owned by the reinforcer, model should not be accessed using this variable
# but from reinforcer.model property
model_factory = DeterministicPolicyModelFactory(
input_block=NormalizeObservationsFactory(input_shape=24),
policy_backbone=MLPFactory(input_length=24, hidden_layers=[64, 64], normalization='layer'),
value_backbone=MLPFactory(input_length=28, hidden_layers=[64, 64], normalization='layer')
)
# Reinforcer - an object managing the learning process
reinforcer = BufferedOffPolicyIterationReinforcer(
device=device,
settings=BufferedOffPolicyIterationReinforcerSettings(
rollout_steps=4,
training_steps=1,
),
environment=vec_env,
algo=DeepDeterministicPolicyGradient(
model_factory=model_factory,
tau=0.01,
discount_factor=0.99,
max_grad_norm=0.5
),
model=model_factory.instantiate(action_space=vec_env.action_space),
env_roller=TransitionReplayEnvRoller(
environment=vec_env,
device=device,
action_noise=OuNoise(std_dev=0.2, environment=vec_env),
replay_buffer=CircularReplayBuffer(
buffer_capacity=100,
buffer_initial_size=100,
num_envs=vec_env.num_envs,
observation_space=vec_env.observation_space,
action_space=vec_env.action_space
),
normalize_returns=True,
discount_factor=0.99
),
)
# Model optimizer
optimizer = optim.Adam(reinforcer.model.parameters(), lr=2.5e-4, eps=1e-4)
# Overall information store for training information
training_info = TrainingInfo(
metrics=[
EpisodeRewardMetric('episode_rewards'), # Calculate average reward from episode
],
callbacks=[
FrameTracker(100_000)
        ]  # FrameTracker counts the frames processed during training
)
# A bit of training initialization bookkeeping...
training_info.initialize()
reinforcer.initialize_training(training_info)
training_info.on_train_begin()
    # Run a single one-batch epoch so the test finishes quickly
num_epochs = 1
# Normal handrolled training loop
for i in range(1, num_epochs+1):
epoch_info = EpochInfo(
training_info=training_info,
global_epoch_idx=i,
batches_per_epoch=1,
optimizer=optimizer
)
reinforcer.train_epoch(epoch_info, interactive=False)
training_info.on_train_end()
def test_trpo_bipedal_walker():
"""
    Run a single training iteration of TRPO on the BipedalWalker environment
"""
device = torch.device('cpu')
seed = 1001
# Set random seed in python std lib, numpy and pytorch
set_seed(seed)
vec_env = DummyVecEnvWrapper(
MujocoEnv('BipedalWalker-v2', normalize_returns=True),
).instantiate(parallel_envs=8, seed=seed)
# Again, use a helper to create a model
# But because model is owned by the reinforcer, model should not be accessed using this variable
# but from reinforcer.model property
model_factory = StochasticPolicyModelSeparateFactory(
input_block=NormalizeObservationsFactory(input_shape=24),
policy_backbone=MLPFactory(input_length=24, hidden_layers=[32, 32]),
value_backbone=MLPFactory(input_length=24, hidden_layers=[32])
)
# Reinforcer - an object managing the learning process
reinforcer = OnPolicyIterationReinforcer(
device=device,
settings=OnPolicyIterationReinforcerSettings(
number_of_steps=12,
),
model=model_factory.instantiate(action_space=vec_env.action_space),
algo=TrpoPolicyGradient(
max_kl=0.01,
cg_iters=10,
line_search_iters=10,
improvement_acceptance_ratio=0.1,
cg_damping=0.1,
vf_iters=5,
entropy_coef=0.0,
discount_factor=0.99,
max_grad_norm=0.5,
gae_lambda=1.0
),
env_roller=StepEnvRoller(
environment=vec_env,
device=device,
)
)
# Model optimizer
optimizer = optim.Adam(reinforcer.model.parameters(), lr=1.0e-3, eps=1e-4)
# Overall information store for training information
training_info = TrainingInfo(
metrics=[
EpisodeRewardMetric('episode_rewards'), # Calculate average reward from episode
],
callbacks=[
FrameTracker(100_000)
        ]  # FrameTracker counts the frames processed during training
)
# A bit of training initialization bookkeeping...
training_info.initialize()
reinforcer.initialize_training(training_info)
training_info.on_train_begin()
    # Run a single one-batch epoch so the test finishes quickly
num_epochs = 1
# Normal handrolled training loop
for i in range(1, num_epochs+1):
epoch_info = EpochInfo(
training_info=training_info,
global_epoch_idx=i,
batches_per_epoch=1,
optimizer=optimizer
)
reinforcer.train_epoch(epoch_info, interactive=False)
training_info.on_train_end()
def test_acer_breakout():
"""
    Run a single training iteration of ACER on Atari Breakout
"""
device = torch.device('cpu')
seed = 1001
# Set random seed in python std lib, numpy and pytorch
set_seed(seed)
    # Create 16 environments evaluated in parallel in subprocesses with all the usual DeepMind wrappers
# These are just helper functions for that
vec_env = SubprocVecEnvWrapper(
ClassicAtariEnv('BreakoutNoFrameskip-v4'), frame_history=4
).instantiate(parallel_envs=16, seed=seed)
# Again, use a helper to create a model
# But because model is owned by the reinforcer, model should not be accessed using this variable
# but from reinforcer.model property
model_factory = QStochasticPolicyModelFactory(
input_block=ImageToTensorFactory(),
backbone=NatureCnnFactory(input_width=84, input_height=84, input_channels=4)
)
# Reinforcer - an object managing the learning process
reinforcer = BufferedMixedPolicyIterationReinforcer(
device=device,
settings=BufferedMixedPolicyIterationReinforcerSettings(
experience_replay=2,
number_of_steps=12,
stochastic_experience_replay=False
),
model=model_factory.instantiate(action_space=vec_env.action_space),
env=vec_env,
algo=AcerPolicyGradient(
model_factory=model_factory,
entropy_coefficient=0.01,
q_coefficient=0.5,
rho_cap=10.0,
retrace_rho_cap=1.0,
trust_region=True,
trust_region_delta=1.0,
discount_factor=0.99,
max_grad_norm=10.0,
),
env_roller=TrajectoryReplayEnvRoller(
environment=vec_env,
device=device,
replay_buffer=CircularReplayBuffer(
buffer_capacity=100,
buffer_initial_size=100,
num_envs=vec_env.num_envs,
action_space=vec_env.action_space,
observation_space=vec_env.observation_space,
frame_stack_compensation=True,
frame_history=4,
)
),
)
# Model optimizer
optimizer = optim.RMSprop(reinforcer.model.parameters(), lr=7.0e-4, eps=1e-3, alpha=0.99)
# Overall information store for training information
training_info = TrainingInfo(
metrics=[
EpisodeRewardMetric('episode_rewards'), # Calculate average reward from episode
],
        callbacks=[]  # no callbacks are needed for this short test
)
# A bit of training initialization bookkeeping...
training_info.initialize()
reinforcer.initialize_training(training_info)
training_info.on_train_begin()
    # Run a single one-batch epoch so the test finishes quickly
num_epochs = 1
# Normal handrolled training loop
for i in range(1, num_epochs+1):
epoch_info = EpochInfo(
training_info=training_info,
global_epoch_idx=i,
batches_per_epoch=1,
optimizer=optimizer
)
reinforcer.train_epoch(epoch_info, interactive=False)
training_info.on_train_end()
| 34.286136 | 108 | 0.67659 | 2,593 | 23,246 | 5.869263 | 0.123409 | 0.038636 | 0.01715 | 0.022078 | 0.802287 | 0.777055 | 0.752875 | 0.740982 | 0.727906 | 0.721007 | 0 | 0.02394 | 0.25428 | 23,246 | 677 | 109 | 34.33678 | 0.853995 | 0.208638 | 0 | 0.689218 | 0 | 0 | 0.015277 | 0.006045 | 0 | 0 | 0 | 0 | 0 | 1 | 0.014799 | false | 0 | 0.078224 | 0 | 0.093023 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
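Every test in the row above repeats the same bookkeeping and one-epoch loop verbatim; a hedged sketch of a shared helper (an assumption for illustration, not part of vel's API) that relies only on the `EpochInfo` import already present in the file:

def run_single_epoch(reinforcer, optimizer, training_info, num_epochs=1):
    # Shared setup, one-batch epochs, and teardown used identically by each test
    training_info.initialize()
    reinforcer.initialize_training(training_info)
    training_info.on_train_begin()
    for i in range(1, num_epochs + 1):
        epoch_info = EpochInfo(
            training_info=training_info,
            global_epoch_idx=i,
            batches_per_epoch=1,
            optimizer=optimizer,
        )
        reinforcer.train_epoch(epoch_info, interactive=False)
    training_info.on_train_end()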
c42a8b3db70c1910ae9e09b5cd1c89b20ab2afba | 732 | py | Python | corruption_tracker/tests.py | Basele/corruption_tracker | e8bacdce540321f382980fbb3af782611f260bda | [
"BSD-3-Clause"
] | 2 | 2016-05-13T12:24:19.000Z | 2021-02-03T18:02:13.000Z | corruption_tracker/tests.py | Basele/corruption_tracker | e8bacdce540321f382980fbb3af782611f260bda | [
"BSD-3-Clause"
] | null | null | null | corruption_tracker/tests.py | Basele/corruption_tracker | e8bacdce540321f382980fbb3af782611f260bda | [
"BSD-3-Clause"
] | null | null | null | from corruption_tracker.test_runner import perfomance_test
def run_home():
perfomance_test('/')
def hole_ua():
perfomance_test('/api/polygon/fit_bounds/1/22.89551,46.00459,43.98926,52.46940/')
def run_kha_city():
perfomance_test('/api/polygon/fit_bounds/2/33.68408,49.46098,38.95752,51.04657/')
def run_kha_distritc():
perfomance_test('/api/polygon/fit_bounds/3/35.92289,49.88977,36.58207,50.08909/')
def kha_houses():
perfomance_test('/api/polygon/fit_bounds/4/36.08374,49.96337,36.41333,50.03575/')
def kiev_houses():
perfomance_test('/api/polygon/fit_bounds/4/30.48088,50.43028,30.56328,50.45332/')
# run_home()
# hole_ua()
# run_kha_city()
# run_kha_distritc()
# kha_houses()
kiev_houses()
| 21.529412 | 85 | 0.730874 | 119 | 732 | 4.243697 | 0.478992 | 0.194059 | 0.168317 | 0.237624 | 0.354455 | 0.354455 | 0.158416 | 0.158416 | 0 | 0 | 0 | 0.219365 | 0.096995 | 732 | 33 | 86 | 22.181818 | 0.544629 | 0.09153 | 0 | 0 | 0 | 0.357143 | 0.471927 | 0.47041 | 0 | 0 | 0 | 0 | 0 | 1 | 0.428571 | true | 0 | 0.071429 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6721028df564dc70c0701c59a964c276bfc1bbc7 | 48 | py | Python | quests/lvl_0001/python.py | bamr4287/it-journey | 91305420065deb2b530181fbc30b3d1d13e66086 | [
"MIT"
] | null | null | null | quests/lvl_0001/python.py | bamr4287/it-journey | 91305420065deb2b530181fbc30b3d1d13e66086 | [
"MIT"
] | 1 | 2022-01-19T01:17:23.000Z | 2022-01-19T01:17:23.000Z | quests/lvl_0001/python.py | bamr4287/it-journey | 91305420065deb2b530181fbc30b3d1d13e66086 | [
"MIT"
] | 1 | 2021-09-03T23:46:53.000Z | 2021-09-03T23:46:53.000Z | import os
import sys
print ("hello tahra")
| 9.6 | 22 | 0.666667 | 7 | 48 | 4.571429 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.25 | 48 | 4 | 23 | 12 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0.255814 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0.333333 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
672523893f410834bc8bff70c0b594ca8ae6f239 | 38,905 | py | Python | test/test_fract.py | ktmrmshk/fract | a370e7e69f3b5d48bdd45aa73d8afc0d67756cf1 | [
"Apache-2.0"
] | 4 | 2019-05-29T07:14:38.000Z | 2021-09-20T21:14:08.000Z | test/test_fract.py | ktmrmshk/fract | a370e7e69f3b5d48bdd45aa73d8afc0d67756cf1 | [
"Apache-2.0"
] | 5 | 2018-08-10T07:53:50.000Z | 2019-03-28T09:20:16.000Z | test/test_fract.py | ktmrmshk/fract | a370e7e69f3b5d48bdd45aa73d8afc0d67756cf1 | [
"Apache-2.0"
] | 1 | 2018-05-28T06:01:28.000Z | 2018-05-28T06:01:28.000Z | import unittest, json, logging, re
from fract import FractTest, FractTestHassert, FractTestHdiff
class test_FractTest(unittest.TestCase):
def setUp(self):
self.fracttest = FractTest()
def tearDown(self):
pass
def test_init_template1(self):
ft=FractTestHassert()
ft.init_template()
self.assertTrue( 'TestType' in ft.query )
self.assertTrue( ft.query['TestType'] == 'hassert')
self.assertTrue( 'Comment' in ft.query )
self.assertTrue( 'TestId' in ft.query )
def test_init_template2(self):
ft=FractTestHdiff()
ft.init_template()
self.assertTrue( 'TestType' in ft.query )
self.assertTrue( ft.query['TestType'] == 'hdiff')
self.assertTrue( 'RequestA' in ft.query )
self.assertTrue( 'RequestB' in ft.query )
self.assertTrue( 'TestCase' in ft.query )
self.assertTrue( 'TestId' in ft.query )
self.assertTrue( 'Comment' in ft.query )
def test_init_example_1(self):
ft=FractTestHassert()
ft.init_example()
self.assertTrue( ft.query['TestType'] == 'hassert' )
def test_init_example_2(self):
ft=FractTestHdiff()
ft.init_example()
self.assertTrue( ft.query['TestType'] == 'hdiff' )
def test_import_query(self):
self.fracttest.import_query('''{"TestType":"hassert","Comment":"This is a test for redirect","TestId":"3606bd5770167eaca08586a8c77d05e6ed076899","Request":{"Ghost":"www.akamai.com.edgekey.net","Method":"GET","Url":"https://www.akamai.com/us/en/","Headers":{"Cookie":"abc=123","Accept-Encoding":"gzip"}},"TestCase":{"status_code":[{"type":"regex","query":"(200|404)"},{"type":"regex","query":"301"}],"Content-Type":[{"type":"regex","query":"text/html$"}],"Location":[{"type":"regex","query":"https://www.akamai.com"}]}} ''')
self.assertTrue( self.fracttest.query['TestType'] == 'hassert' )
def test_add1(self):
ft = FractTestHassert()
ft.init_template()
ft.add('status_code', '(301|302)')
ft.add('status_code', '301')
logging.warning(json.dumps(ft.query))
self.assertTrue( ft.query['TestCase'] == {"status_code": [{"type": "regex", "query":"(301|302)" }, {"type": "regex", "query":"301"}]})
# 2018/08/21 ignore_case support
def test_add2_option(self):
ft = FractTestHassert()
ft.init_template()
ft.add('X-Cache-Key', '/FooBar/', 'contain', {'ignore_case': True})
logging.warning(json.dumps(ft.query))
self.assertTrue( ft.query['TestCase'] == {"X-Cache-Key": [{"type": "contain", "query":"/FooBar/", "option" :{"ignore_case": True} }]})
def test_setRequest1(self):
ft = FractTestHassert()
ft.init_template()
ft.setRequest('http://www.akamai.com/', 'www.akamai.com.edgekey.net')
self.assertTrue( ft.query['Request']['Url'] == 'http://www.akamai.com/')
self.assertTrue( ft.query['Request']['Ghost'] == 'www.akamai.com.edgekey.net')
self.assertTrue( ft.query['Request']['Method'] == 'GET')
def test_set_comment(self):
ft = FractTestHassert()
ft.init_template()
ft.set_comment('abc=123')
self.assertTrue( ft.query['Comment'] == 'abc=123')
def test_set_testid(self):
ft = FractTestHassert()
ft.init_template()
ft.set_testid('hogehoge')
self.assertTrue( ft.query['TestId'] == 'hogehoge')
ft.set_testid()
self.assertTrue( ft.query['TestId'] != 'hogehoge')
def test_set_loadtime(self):
ft = FractTestHassert()
ft.init_template()
ft.set_loadtime(0.123)
self.assertTrue( ft.query['LoadTime'] == 0.123)
def test_str_summary_hassert(self):
ft = FractTestHassert()
ft.init_example()
self.assertTrue( type(ft._str_summary() ) == type(str()) )
print( ft._str_summary() )
def test_str_summary_hdiff(self):
ft = FractTestHdiff()
ft.init_example()
self.assertTrue( type(ft._str_summary() ) == type(str()) )
print( ft._str_summary() )
from fract import FractDsetFactory
class test_FractDsetFactory(unittest.TestCase):
def setUp(self):
pass
def tearDown(self):
pass
def test_init1(self):
ft = FractTestHassert()
ft.init_example()
jsontxt = ft.__str__()
obj = FractDsetFactory.create(jsontxt)
self.assertTrue( type(obj) == type(FractTestHassert() ))
def test_init2(self):
ft = FractTestHdiff()
ft.init_example()
jsontxt = ft.__str__()
obj = FractDsetFactory.create(jsontxt)
self.assertTrue( type(obj) == type(FractTestHdiff() ))
def test_init3(self):
ft = FractResult()
ft.init_example(FractResult.HASSERT)
jsontxt = ft.__str__()
obj = FractDsetFactory.create(jsontxt)
self.assertTrue( type(obj) == type(FractResult() ))
def test_init4(self):
ft = FractResult()
ft.init_example(FractResult.HDIFF)
jsontxt = ft.__str__()
obj = FractDsetFactory.create(jsontxt)
self.assertTrue( type(obj) == type(FractResult() ))
def test_init11(self):
ft = FractTestHassert()
ft.init_example()
jsontxt = ft.query
obj = FractDsetFactory.create(jsontxt)
self.assertTrue( type(obj) == type(FractTestHassert() ))
def test_init12(self):
ft = FractTestHdiff()
ft.init_example()
jsontxt = ft.query
obj = FractDsetFactory.create(jsontxt)
self.assertTrue( type(obj) == type(FractTestHdiff() ))
def test_init13(self):
ft = FractResult()
ft.init_example(FractResult.HASSERT)
jsontxt = ft.query
obj = FractDsetFactory.create(jsontxt)
self.assertTrue( type(obj) == type(FractResult() ))
def test_init14(self):
ft = FractResult()
ft.init_example(FractResult.HDIFF)
jsontxt = ft.query
obj = FractDsetFactory.create(jsontxt)
self.assertTrue( type(obj) == type(FractResult() ))
from fract import FractSuiteManager
class test_FractSuiteManager(unittest.TestCase):
def test_load_base_suite(self):
ftm = FractSuiteManager()
ftm.load_base_suite('testcase4test.json')
self.assertTrue( len(ftm._suite) == 32)
#logging.warning( ftm._testsuite )
def test_merge_suite(self):
ftm = FractSuiteManager()
ftm.load_base_suite('testcase4test.json')
ret = ftm.merge_suite('testcase4test_sub.json')
self.assertTrue( ret == (1,1) )
self.assertTrue( len(ftm._suite) == 33)
def test_merge_suite2(self):
ftm = FractSuiteManager()
ftm.load_base_suite('resutlcase4test.json')
ret = ftm.merge_suite('resultcase4test_sub.json')
self.assertTrue( ret == (1,1) )
self.assertTrue( len(ftm._suite) == 33)
ftm.save('final_result.json')
def test_save(self):
ftm = FractSuiteManager()
ftm.load_base_suite('testcase4test.json')
ftm.save('foobar.json')
def test_get_suite(self):
ftm = FractSuiteManager()
ftm.load_base_suite('testcase4test.json')
a=ftm.get_suite()
logging.warning('type of a is {}'.format(type(a[0])))
self.assertTrue( type(a[0]) == type(FractTestHassert() ) )
def test_get_suite2(self):
ftm = FractSuiteManager()
ftm.load_base_suite('resutlcase4test.json')
a=ftm.get_suite()
logging.warning('type of a is {}'.format(type(a[0])))
self.assertTrue( type(a[0]) == type(FractResult() ) )
from fract import FractResult
class test_FractResult(unittest.TestCase):
def setUp(self):
self.fractresult = FractResult()
def tearDown(self):
pass
def test_init_example1(self):
self.fractresult.init_example('hassert')
self.assertTrue( self.fractresult.query['TestType'] == 'hassert' )
def test_init_example2(self):
self.fractresult.init_example('hdiff')
self.assertTrue( self.fractresult.query['TestType'] == 'hdiff' )
def test_setTestType(self):
self.fractresult.setTestType('hassert')
self.assertTrue( self.fractresult.query['TestType'] == 'hassert' )
def test_setPassed(self):
self.fractresult.setPassed(False)
self.assertTrue( self.fractresult.query['Passed'] == False )
def test_setResponse(self):
self.fractresult.setResponse( 403, {'Content-Length': 123, 'Vary': 'User-Agent'})
self.assertTrue( self.fractresult.query['Response']['status_code'] == 403 )
self.assertTrue( self.fractresult.query['Response']['Vary'] == 'User-Agent' )
def test_check_passed1(self):
self.fractresult.query=json.loads( '''{"TestType":"hassert","Comment":"This is a test for redirect","TestId":"3606bd5770167eaca08586a8c77d05e6ed076899","Passed":false,"Response":{"status_code":301,"Content-Length":"0","Location":"https://www.akamai.com","Date":"Mon, 26 Mar 2018 09:20:33 GMT","Connection":"keep-alive","Set-Cookie":"AKA_A2=1; expires=Mon, 26-Mar-2018 10:20:33 GMT; secure; HttpOnly","Referrer-Policy":"same-origin","X-N":"S"},"ResultCase":{"status_code":[{"Passed":false,"Value":301,"TestCase":{"type":"regex","query":"(200|404)"}},{"Passed":true,"Value":301,"TestCase":{"type":"regex","query":"301"}}],"Content-Type":[{"Passed":false,"Value":"This Header is not in Response","TestCase":{"type":"regex","query":"text/html$"}}]}} ''' )
ret = self.fractresult.check_passed()
self.assertTrue( ret == (False, 3, 1, 2) )
def test_check_passed2(self):
self.fractresult.query=json.loads('''{"TestType":"hassert","Comment":"This is a test for redirect","TestId":"3606bd5770167eaca08586a8c77d05e6ed076899","Passed":true,"Response":{"status_code":301,"Content-Length":"0","Location":"https://www.akamai.com","Date":"Mon, 26 Mar 2018 09:20:33 GMT","Connection":"keep-alive","Set-Cookie":"AKA_A2=1; expires=Mon, 26-Mar-2018 10:20:33 GMT; secure; HttpOnly","Referrer-Policy":"same-origin","X-N":"S"},"ResultCase":{"status_code":[{"Passed":true,"Value":301,"TestCase":{"type":"regex","query":"301"}}],"Content-Type":[{"Passed":true,"Value":"text/html","TestCase":{"type":"regex","query":"text/html$"}}]}}''')
ret = self.fractresult.check_passed()
self.assertTrue( ret == (True, 2, 2, 0) )
def test_str_resultcase(self):
self.fractresult.query=json.loads('''{"TestType":"hassert","Comment":"This is a test for redirect","TestId":"3606bd5770167eaca08586a8c77d05e6ed076899","Passed":true,"Response":{"status_code":301,"Content-Length":"0","Location":"https://www.akamai.com","Date":"Mon, 26 Mar 2018 09:20:33 GMT","Connection":"keep-alive","Set-Cookie":"AKA_A2=1; expires=Mon, 26-Mar-2018 10:20:33 GMT; secure; HttpOnly","Referrer-Policy":"same-origin","X-N":"S"},"ResultCase":{"status_code":[{"Passed":true,"Value":301,"TestCase":{"type":"regex","query":"301"}}],"Content-Type":[{"Passed":true,"Value":"text/html","TestCase":{"type":"regex","query":"text/html$"}}]}}''')
ret=self.fractresult._str_resultcase( True )
self.assertTrue( type(ret) == type(str()))
logging.warning( ret )
# 2018/08/21 ignore_case support
def test_str_resultcase_option(self):
self.fractresult.query=json.loads('''{"TestType":"hassert","Comment":"This is a test for redirect","TestId":"3606bd5770167eaca08586a8c77d05e6ed076899","Passed":true,"Response":{"status_code":301,"Content-Length":"0","Location":"https://www.akamai.com","Date":"Mon, 26 Mar 2018 09:20:33 GMT","Connection":"keep-alive","Set-Cookie":"AKA_A2=1; expires=Mon, 26-Mar-2018 10:20:33 GMT; secure; HttpOnly","Referrer-Policy":"same-origin","X-N":"S"},"ResultCase":{"status_code":[{"Passed":true,"Value":301,"TestCase":{"type":"regex","query":"301"}}],"Content-Type":[{"Passed":true,"Value":"text/html","TestCase":{"type":"regex","query":"text/html$"}}]}}''')
ret=self.fractresult._str_resultcase( )
self.assertTrue( type(ret) == type(str()))
logging.warning( ret )
def test_str_resultcase_option2(self):
self.fractresult.query=json.loads('''{"TestType":"hassert","Comment":"This is a test for redirect","TestId":"3606bd5770167eaca08586a8c77d05e6ed076899","Passed":true,"Response":{"status_code":301,"Content-Length":"0","Location":"https://www.akamai.com","Date":"Mon, 26 Mar 2018 09:20:33 GMT","Connection":"keep-alive","Set-Cookie":"AKA_A2=1; expires=Mon, 26-Mar-2018 10:20:33 GMT; secure; HttpOnly","Referrer-Policy":"same-origin","X-N":"S"},"ResultCase":{"status_code":[{"Passed":true,"Value":301,"TestCase":{"type":"regex","query":"301"}}],"Content-Type":[{"Passed":true,"Value":"text/html","TestCase":{"type":"regex","query":"text/html$","option":{"ignore_case":true,"not":false}}}]}}''')
ret=self.fractresult._str_resultcase( )
self.assertTrue( type(ret) == type(str()))
logging.warning( ret )
def test_str_resultcase2(self):
self.fractresult.query=json.loads('''{"TestType":"hdiff","Passed":false,"Comment":"This is comment","TestId":"d704230e1206c259ddbb900004c185e46c42a32a","ResponseA":{"status_code":301,"Content-Length":"0","Location":"https://www.akamai.com","Date":"Mon, 26 Mar 2018 09:20:33 GMT","Connection":"keep-alive","Set-Cookie":"AKA_A2=1; expires=Mon, 26-Mar-2018 10:20:33 GMT; secure; HttpOnly","Referrer-Policy":"same-origin","X-N":"S"},"ResponseB":{"status_code":301,"Content-Length":"0","Location":"https://www.akamai.com","Date":"Mon, 26 Mar 2018 09:20:33 GMT","Connection":"keep-alive","Set-Cookie":"AKA_A2=1; expires=Mon, 26-Mar-2018 10:20:33 GMT; secure; HttpOnly","Referrer-Policy":"same-origin","X-N":"S"},"ResultCase":{"status_code":{"Passed":true,"Value":[301,301]},"Content-Length":{"Passed":false,"Value":[123,345]}}}''')
ret=self.fractresult._str_resultcase( True )
self.assertTrue( type(ret) == type(str()))
logging.warning( ret )
from fract import Actor, ActorResponse
class test_Actor(unittest.TestCase):
def setUp(self):
self.actor = Actor()
#self.actorresponse = self.actor.get('https://space.ktmrmshk.com/abc/example.html?abc=123', ghost='space.ktmrmshk.com.edgekey-staging.net', headers={'Accept-Encoding': 'gzip'})
# 2019/10/21 For Botman Start
self.actorresponse = self.actor.get('https://space.ktmrmshk.com/abc/example.html?abc=123', ghost='space.ktmrmshk.com.edgekey-staging.net', headers={'Accept-Encoding': 'gzip', 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.120 Safari/537.36'})
# 2019/10/21 For Botman End
def tearDown(self):
pass
def test_get(self):
self.assertTrue( self.actorresponse.r.status_code != 400 )
def test_get_headers(self):
self.assertTrue( 'status_code' in self.actorresponse.headers() )
def test_get_status_code(self):
self.assertTrue( self.actorresponse.status_code() == self.actorresponse.r.status_code )
def test_resh(self):
self.assertTrue( self.actorresponse.resh('status_code') == self.actorresponse.r.status_code )
self.assertTrue( self.actorresponse.resh('Date') == self.actorresponse.r.headers['Date'] )
#2018/12/04 Rum-off Start
def test_resh_null_response_header(self):
self.assertTrue( self.actorresponse.resh('NothingNothingNothing') == '' )
#2018/12/04 Rum-off End
def test_siggleton(self):
a = Actor()
b = Actor()
self.assertTrue( a == b)
def test_getLoadTime(self):
self.assertTrue( type(self.actorresponse.getLoadTime()) == type(1.23))
from fract import Fract
class test_Fract(unittest.TestCase):
def setUp(self):
self.fr = Fract()
def tearDown(self):
pass
def test_run_hassert(self):
testcase = FractTestHassert()
testcase.import_query('''{"TestType":"hassert","Comment":"This is a test for redirect","TestId":"3606bd5770167eaca08586a8c77d05e6ed076899","Request":{"Ghost":"www.akamai.com.edgekey.net","Method":"GET","Url":"https://www.akamai.com/us/en/","Headers":{"Cookie":"abc=123","Accept-Encoding":"gzip"}},"TestCase":{"status_code":[{"type":"regex","query":"(200|404)"},{"type":"regex","query":"301"}],"Content-Type":[{"type":"regex","query":"text/html$"}],"Location":[{"type":"regex","query":"https://www.akamai.com"}]}} ''')
ret = self.fr._run_hassert(testcase)
self.assertTrue( ret.query['TestType'] == 'hassert')
self.assertTrue( ret.query['Passed'] == False )
logging.info('FractResult: {}'.format(ret))
def test_passed(self):
self.assertTrue( self.fr._passed('regex', '(200|404)', '404') )
self.assertFalse( self.fr._passed('regex', '(200|404)', '403') )
self.assertTrue( self.fr._passed('startswith', 'http://', 'http://www.jins.com') )
self.assertFalse( self.fr._passed('startswith', 'http://', 'https://www.jins.com') )
self.assertTrue( self.fr._passed('endswith', '.com', 'http://www.jins.com') )
self.assertFalse( self.fr._passed('endswith', '.com', 'https://www.jins.co.jp') )
self.assertTrue( self.fr._passed('contain', 'jins', 'http://www.jins.com') )
self.assertFalse( self.fr._passed('contain', 'jeans', 'https://www.jins.co.jp') )
self.assertTrue( self.fr._passed('regex', re.escape('http://abc.com/index.html?xyz=123&Name=FOOBAR'), 'http://abc.com/index.html?xyz=123&Name=FOOBAR') )
self.assertTrue( self.fr._passed('exact', 'http://abc.com/index.html?xyz=123&Name=FOOBAR', 'http://abc.com/index.html?xyz=123&Name=FOOBAR') )
self.assertFalse( self.fr._passed('exact', 'https://abc.com/index.html?xyz=123&Name=FOOBAR', 'http://abc.com/index.html?xyz=123&Name=FOOBAR') )
# 2018/08/21 ignore_case support
def test_passed_ignore_case(self):
self.assertTrue( self.fr._passed('regex', '(200|404)', '404', True) )
self.assertTrue( self.fr._passed('regex', '(200|404)', '404', False) )
self.assertFalse( self.fr._passed('regex', '(200|404)', '403', False) )
self.assertTrue( self.fr._passed('startswith', 'http://', 'hTTp://www.jins.com', True) )
self.assertFalse( self.fr._passed('startswith', 'http://', 'https://www.jins.com', True) )
self.assertTrue( self.fr._passed('endswith', '.com', 'http://www.jins.COM', True) )
self.assertFalse( self.fr._passed('endswith', '.com', 'https://www.jins.co.jp', True) )
self.assertTrue( self.fr._passed('contain', 'jins', 'http://www.JIns.com', True) )
self.assertFalse( self.fr._passed('contain', 'jins', 'http://www.JIns.com') )
self.assertFalse( self.fr._passed('contain', 'jins', 'http://www.JIns.com', False) )
self.assertFalse( self.fr._passed('contain', 'jeans', 'https://www.jins.co.jp') )
self.assertTrue( self.fr._passed('regex', re.escape('http://abc.com/index.html?xyz=123&Name=FOOBAR'), 'http://abc.com/index.html?xyz=123&Name=foobar', True) )
self.assertTrue( self.fr._passed('exact', 'http://abc.com/index.html?xyz=123&Name=FOOBAR', 'http://abc.com/index.html?xyz=123&Name=foobar', True) )
self.assertFalse( self.fr._passed('exact', 'https://abc.com/index.html?xyz=123&Name=FOOBAR', 'http://abc.com/index.html?xyz=123&Name=FOOBAR') )
def test_check_headercase(self):
ret = self.fr._check_headercase('status_code', [{"type":"regex","query":"301"}], {'status_code': 301})
self.assertTrue( ret[0]['Passed'] == True )
self.assertTrue( ret[0]['Value'] == 301 )
self.assertTrue( ret[0]['TestCase']['type'] == 'regex' )
# 2018/08/21 ignore_case support
def test_check_headercase2(self):
ret = self.fr._check_headercase('X-Cache-Key', [{"type":"regex","query":"/HOGE/foo/bar"}], {'X-Cache-Key': 'D/S/123/hoge/foo/bar/dot.jpg'})
self.assertTrue( ret[0]['Passed'] == False )
self.assertTrue( ret[0]['Value'] == 'D/S/123/hoge/foo/bar/dot.jpg' )
self.assertTrue( ret[0]['TestCase']['type'] == 'regex' )
def test_check_headercase3(self):
ret = self.fr._check_headercase('X-Cache-Key', [{"type":"regex","query":"/HOGE/foo/bar","option":{"ignore_case":True}}], {'X-Cache-Key': 'D/S/123/hoge/foo/bar/dot.jpg'})
self.assertTrue( ret[0]['Passed'] == True )
def test_check_header_gclid(self):
testcase = FractTestHdiff()
testcase.import_query('''{"TestType": "hassert", "Request": {"Ghost": "a850.dscr.akamai-staging.net", "Method": "GET", "Url": "https://fract.akamaized-staging.net/testing/gclid/first", "Headers": {"User-Agent": "MacOS", "Pragma": "akamai-x-cache-on,akamai-x-cache-remote-on,akamai-x-check-cacheable,akamai-x-get-cache-key,akamai-x-get-extracted-values,akamai-x-get-request-id,akamai-x-serial-no, akamai-x-get-true-cache-key", "X-Akamai-Cloudlet-Cost": "true", "Cookie": "akamai-rum=off"}}, "TestCase": {"X-Cache-Key": [{"type": "regex", "query": "/728260/", "option": {"ignore_case": false}}, {"type": "regex", "query": "/000/", "option": {"ignore_case": false}}, {"type": "contain", "query": "/fract.akamaized-staging.net/col", "option": {"ignore_case": false}}], "status_code": [{"type": "regex", "query": "302", "option": {"ignore_case": false}}], "Location": [{"type": "exact", "query": "https://fract.akamaized-staging.net/testing/gclid/first?gclid=EAIaIQobChMIodSq1veA4wIVQXZgCh2QGQmGEAAYASAAEgKs3_D_BwE&_ga=2.108142722.1227508989.1571126780-1104359897.1571126780&utm_medium=email&utm_source=uq_html&utm_term=uh_191016_crm_welcome5&abc=123&123=abc", "option": {"ignore_case": false}}]}, "Comment": "This test was gened by FraseGen - v1.04 at 2019/10/21, 17:26:22 JST", "TestId": "88fe95419ddb43f5dd95cd6cde44c50687a54c73127284d06b70d59fd90540fd", "Active": true, "LoadTime": 0.180052} ''')
ret = self.fr.run(testcase)
self.assertTrue( ret.query['TestType'] == 'hassert')
self.assertTrue( ret.query['Passed'] == True )
self.assertTrue( 'LoadTime' in ret.query )
logging.info('FractResult: {}'.format(ret))
def test_run_hdiff(self):
testcase = FractTestHdiff()
testcase.import_query('''{"TestType":"hdiff","Comment":"This is a test for redirect","TestId":"3606bd5770167eaca08586a8c77d05e6ed076899","RequestA":{"Ghost":"www.akamai.com","Method":"GET","Url":"https://www.akamai.com/us/en/","Headers":{"Cookie":"abc=123","Accept-Encoding":"gzip"}},"RequestB":{"Ghost":"www.akamai.com.edgekey-staging.net","Method":"GET","Url":"https://www.akamai.com/us/en/","Headers":{"Cookie":"abc=123","Accept-Encoding":"gzip"}},"VerifyHeaders":["Last-Modified","Cache-Control", "status_code", "Content-Length"]} ''')
fr =Fract()
ret = fr._run_hdiff(testcase)
logging.warning('fractresult= {}'.format(ret))
self.assertTrue( ret.query['TestType'] == 'hdiff')
def test_run1(self):
testcase = FractTest()
testcase.import_query('''{"TestType":"hassert","Comment":"This is a test for redirect","TestId":"3606bd5770167eaca08586a8c77d05e6ed076899","Request":{"Ghost":"www.akamai.com.edgekey.net","Method":"GET","Url":"https://www.akamai.com/us/en/","Headers":{"Cookie":"abc=123","Accept-Encoding":"gzip"}},"TestCase":{"status_code":[{"type":"regex","query":"(200|404)"},{"type":"regex","query":"301"}],"Content-Type":[{"type":"regex","query":"text/html$"}],"Location":[{"type":"regex","query":"https://www.akamai.com"}]}} ''')
ret = self.fr.run(testcase)
self.assertTrue( ret.query['TestType'] == 'hassert')
self.assertTrue( ret.query['Passed'] == False )
self.assertTrue( 'LoadTime' in ret.query )
logging.info('FractResult: {}'.format(ret))
def test_run2(self):
testcase = FractTest()
testcase.import_query('''{"TestType":"hdiff","Comment":"This is a test for redirect","TestId":"3606bd5770167eaca08586a8c77d05e6ed076899","RequestA":{"Ghost":"www.akamai.com","Method":"GET","Url":"https://www.akamai.com/us/en/","Headers":{"Cookie":"abc=123","Accept-Encoding":"gzip"}},"RequestB":{"Ghost":"www.akamai.com.edgekey-staging.net","Method":"GET","Url":"https://www.akamai.com/us/en/","Headers":{"Cookie":"abc=123","Accept-Encoding":"gzip"}},"VerifyHeaders":["Last-Modified","Cache-Control", "status_code", "Content-Length"]} ''')
fr =Fract()
ret = fr.run(testcase)
logging.warning('fractresult= {}'.format(ret))
self.assertTrue( ret.query['TestType'] == 'hdiff')
# 2018/08/21 ignore_case support
def test_run_ignore_case_failed(self):
testcase = FractTest()
testcase.import_query('''{"TestType":"hassert","Comment":"This is a test for redirect","TestId":"3606bd5770167eaca08586a8c77d05e6ed076899","Request":{"Ghost":"www.akamai.com.edgekey.net","Method":"GET","Url":"https://www.akamai.com/us/en/","Headers":{"Cookie":"abc=123","Accept-Encoding":"gzip"}},"TestCase":{"status_code":[{"type":"regex","query":"(200|404)"},{"type":"regex","query":"301"}],"Content-Type":[{"type":"regex","query":"TEXT/html$"}],"Location":[{"type":"regex","query":"https://WWW.akamai.COM","option":{"ignore_case":false}}]}}''')
ret = self.fr.run(testcase)
self.assertTrue( ret.query['TestType'] == 'hassert')
self.assertTrue( ret.query['Passed'] == False )
self.assertTrue( ret.query['ResultCase']['Content-Type'][0]['Passed'] == False )
self.assertTrue( ret.query['ResultCase']['Location'][0]['Passed'] == False )
self.assertTrue( ret.query['ResultCase']['Location'][0]['TestCase']['option']['ignore_case'] == False )
logging.warning('FractResult: {}'.format(ret))
def test_run_ignore_case_passed(self):
testcase = FractTest()
testcase.import_query('''{"TestType":"hassert","Comment":"This is a test for redirect","TestId":"3606bd5770167eaca08586a8c77d05e6ed076899","Request":{"Ghost":"www.akamai.com.edgekey.net","Method":"GET","Url":"https://www.akamai.com/us/en/","Headers":{"Cookie":"abc=123","Accept-Encoding":"gzip"}},"TestCase":{"status_code":[{"type":"regex","query":"(200|404|302|301)"}],"Connection":[{"type":"regex","query":"KeeP","option":{"ignore_case":true}}],"Location":[{"type":"contain","query":"https://WWW.akamai.COM","option":{"ignore_case":true}}]}}''')
ret = self.fr.run(testcase)
logging.warning('FractResult: {}'.format(ret))
self.assertTrue( ret.query['TestType'] == 'hassert')
self.assertTrue( ret.query['Passed'] == True )
self.assertTrue( ret.query['ResultCase']['Connection'][0]['Passed'] == True )
self.assertTrue( ret.query['ResultCase']['Location'][0]['Passed'] == True )
self.assertTrue( ret.query['ResultCase']['Location'][0]['TestCase']['option']['ignore_case'] == True )
def test_run2(self):
testcase = FractTest()
testcase.import_query('''{"TestType":"hdiff","Comment":"This is a test for redirect","TestId":"3606bd5770167eaca08586a8c77d05e6ed076899","RequestA":{"Ghost":"www.akamai.com","Method":"GET","Url":"https://www.akamai.com/us/en/","Headers":{"Cookie":"abc=123","Accept-Encoding":"gzip"}},"RequestB":{"Ghost":"www.akamai.com.edgekey-staging.net","Method":"GET","Url":"https://www.akamai.com/us/en/","Headers":{"Cookie":"abc=123","Accept-Encoding":"gzip"}},"VerifyHeaders":["Last-Modified","Cache-Control", "status_code", "Content-Length"]} ''')
fr =Fract()
ret = fr.run(testcase)
logging.warning('fractresult= {}'.format(ret))
self.assertTrue( ret.query['TestType'] == 'hdiff')
# 2018/08/21 ignore_case support
def test_run_active_filed(self):
testcase = FractTest()
testcase.import_query('''{"Active":true,"TestType":"hassert","Comment":"This is a test for redirect","TestId":"3606bd5770167eaca08586a8c77d05e6ed076899","Request":{"Ghost":"www.akamai.com.edgekey.net","Method":"GET","Url":"https://www.akamai.com/us/en/","Headers":{"Cookie":"abc=123","Accept-Encoding":"gzip"}},"TestCase":{"status_code":[{"type":"regex","query":"(200|404)"},{"type":"regex","query":"301"}],"Content-Type":[{"type":"regex","query":"text/html$"}],"Location":[{"type":"regex","query":"https://www.akamai.com"}]}} ''')
ret = self.fr.run(testcase)
self.assertTrue( ret.query['TestType'] == 'hassert')
self.assertTrue( ret.query['Passed'] == False )
logging.info('FractResult: {}'.format(ret))
def test_run_inactive(self):
testcase = FractTest()
testcase.import_query('''{"Active":false,"TestType":"hassert","Comment":"This is a test for redirect","TestId":"3606bd5770167eaca08586a8c77d05e6ed076899","Request":{"Ghost":"www.akamai.com.edgekey.net","Method":"GET","Url":"https://www.akamai.com/us/en/","Headers":{"Cookie":"abc=123","Accept-Encoding":"gzip"}},"TestCase":{"status_code":[{"type":"regex","query":"(200|404)"},{"type":"regex","query":"301"}],"Content-Type":[{"type":"regex","query":"text/html$"}],"Location":[{"type":"regex","query":"https://www.akamai.com"}]}} ''')
ret = self.fr.run(testcase)
self.assertTrue( ret is None)
from fract import FractClient
class test_FractClient(unittest.TestCase):
def setUp(self):
logging.basicConfig(level=logging.DEBUG)
self.testsuite = '''[{"TestType":"hassert","Comment":"This is comment","TestId":"3606bd5770167eaca08586a8c77d05e6ed076899","Request":{"Ghost":"www.akamai.com.edgekey.net","Method":"GET","Url":"https://www.akamai.com/us/en/","Headers":{"Cookie":"abc=123","Accept-Encoding":"gzip"}},"TestCase":{"status_code":[{"type":"regex","query":"(200|404)"},{"type":"regex","query":"301"}],"Content-Type":[{"type":"regex","query":"text/html$"}]}},{"TestType":"hdiff","Comment":"This is comment","TestId":"d704230e1206c259ddbb900004c185e46c42a32a","RequestA":{"Ghost":"www.akamai.com","Method":"GET","Url":"https://www.akamai.com/us/en/","Headers":{"Cookie":"abc=123","Accept-Encoding":"gzip"}},"RequestB":{"Ghost":"www.akamai.com.edgekey-staging.net","Method":"GET","Url":"https://www.akamai.com/us/en/","Headers":{"Cookie":"abc=123","Accept-Encoding":"gzip"}},"VerifyHeaders":["Last-Modified","Cache-Control"]}]'''
def test_init(self):
fclient = FractClient(self.testsuite)
self.assertTrue( len(fclient._testsuite) ==2 )
def test_init2(self):
fclient = FractClient(fract_suite_file='testcase4test.json')
self.assertTrue( len(fclient._testsuite) == 32 )
def test_run_suite(self):
fclient = FractClient(self.testsuite)
fclient.run_suite()
self.assertTrue( len(fclient._result_suite) ==2 )
logging.info('test_run_suite(): _result_suite={}'.format(fclient._result_suite[0]))
fclient.export_result()
def test_run_suite2(self):
fclient = FractClient(self.testsuite)
fclient.run_suite( ['d704230e1206c259ddbb900004c185e46c42a32a'])
self.assertTrue( len(fclient._result_suite) ==1 )
self.assertTrue( len(fclient._failed_result_suite) == 0)
logging.info('test_run_suite(): _result_suite={}'.format(fclient._result_suite[0]))
fclient.export_result()
def test_run_suite_inactive_test(self):
testjson='''[{"Active":true,"TestType":"hassert","Comment":"This is comment","TestId":"3606bd5770167eaca08586a8c77d05e6ed076899","Request":{"Ghost":"www.akamai.com.edgekey.net","Method":"GET","Url":"https://www.akamai.com/us/en/","Headers":{"Cookie":"abc=123","Accept-Encoding":"gzip"}},"TestCase":{"status_code":[{"type":"regex","query":"(200|404)"},{"type":"regex","query":"301"}],"Content-Type":[{"type":"regex","query":"text/html$"}]}},{"TestType":"hdiff","Comment":"This is comment","TestId":"d704230e1206c259ddbb900004c185e46c42a32a","RequestA":{"Ghost":"www.akamai.com","Method":"GET","Url":"https://www.akamai.com/us/en/","Headers":{"Cookie":"abc=123","Accept-Encoding":"gzip"}},"RequestB":{"Ghost":"www.akamai.com.edgekey-staging.net","Method":"GET","Url":"https://www.akamai.com/us/en/","Headers":{"Cookie":"abc=123","Accept-Encoding":"gzip"}},"VerifyHeaders":["Last-Modified","Cache-Control"]},{"Active":false,"TestType":"hassert","Comment":"This is comment","TestId":"3606bd5770167eaca08586a8c77d05e6ed076899","Request":{"Ghost":"www.akamai.com.edgekey.net","Method":"GET","Url":"https://www.akamai.com/us/en/","Headers":{"Cookie":"abc=123","Accept-Encoding":"gzip"}},"TestCase":{"status_code":[{"type":"regex","query":"(200|404)"},{"type":"regex","query":"301"}],"Content-Type":[{"type":"regex","query":"text/html$"}]}}]'''
fclient = FractClient(testjson)
fclient.run_suite()
self.assertTrue( len(fclient._result_suite) == 2 )
logging.info('test_run_suite(): _result_suite={}'.format(fclient._result_suite[0]))
fclient.export_result()
def test_make_summary(self):
fclient = FractClient(self.testsuite)
fclient.run_suite()
fclient.make_summary()
def test_get_testcase(self):
fclient = FractClient(self.testsuite)
t = fclient._get_testcase('d704230e1206c259ddbb900004c185e46c42a32a')
self.assertTrue(t.query['TestId'] == 'd704230e1206c259ddbb900004c185e46c42a32a')
def test_export_failed_testsuite(self):
fclient = FractClient(self.testsuite)
fclient.run_suite( ['3606bd5770167eaca08586a8c77d05e6ed076899'])
fclient.export_failed_testsuite('diff.json')
def test_load_resultfile(self):
fclient = FractClient(self.testsuite)
fclient.load_resultfile('resutlcase4test.json')
self.assertTrue( len(fclient._result_suite) == 32 )
self.assertTrue( len(fclient._failed_result_suite) == 23 )
# redirect summary support
def test_export_redirect_summary(self):
REDIRECT_SUMMARY='redirect_summary.json'
fclient = FractClient(fract_suite_file='testcase4redirect.json') # includes 7 redirect
fclient.load_resultfile('result4redirect.json')
fclient.export_redirect_summary(REDIRECT_SUMMARY)
self.assertTrue( len(fclient.redirect_summary) == 7)
self.assertTrue( fclient.redirect_summary[0]['Response']['status_code'] == 301 )
self.assertTrue( fclient.redirect_summary[0]['Response']['Server'] == 'AkamaiGHost' )
self.assertTrue( fclient.redirect_summary[1]['TestId'] == 'ec5890b017383f077f788478aa41911748e0a5a15b7230a1555b14648950da83' )
self.assertTrue( 'User-Agent' in fclient.redirect_summary[1]['Request']['Headers'] )
def test_make_spec_summary(self):
fclient = FractClient(fract_suite_file='testcase4redirect.json') # includes 7 redirect
fclient.load_resultfile('result4redirect.json')
fret=fclient._result_suite[0]
single_summary = fclient._make_spec_summary(fret)
self.assertTrue( single_summary['Response']['status_code'] == 200 )
self.assertTrue( single_summary['Response']['Location'] == '' )
def test_export_ercost_high(self):
ERCOST_SUMMARY='ercost_high_summary.json'
fclient = FractClient(fract_suite_file='testcase4redirect.json') # includes 7 redirect
fclient.load_resultfile('result4redirect.json')
fclient.export_ercost_high(ERCOST_SUMMARY, 10000000)
self.assertTrue( len(fclient.ercost_high_summary) == 3)
from fract import JsonYaml
class test_JsonYaml(unittest.TestCase):
def setUp(self):
self.jy=JsonYaml()
def tearDown(self):
pass
def test_j2y(self):
self.jy.j2y('testcase4test.json', 'testcase4test.yaml')
def test_y2j(self):
self.jy.y2j('testcase4test.yaml', 'testcase4test2.json')
from fract import RedirectLoopTester
class test_RedirectLoopTester(unittest.TestCase):
def setUp(self):
self.rlt=RedirectLoopTester()
def tearDown(self):
pass
def test_test_from_urls(self):
self.rlt.test_from_urls('urllist4redirectloop.txt', 'fract.akamaized-staging.net', 5)
self.assertTrue( self.rlt.allcount == 10)
self.assertTrue( self.rlt.errorcount == 4)
self.assertTrue( self.rlt.hasRedirectCount == 9)
self.assertTrue( self.rlt.resultList[0]['Threshold'] == 5 )
self.assertTrue( self.rlt.resultList[0]['ReachedThreshold'] == False )
self.assertTrue( self.rlt.resultList[0]['URL'] == 'https://fract.akamaized.net/301/3/' )
self.assertTrue( self.rlt.resultList[0]['TargetHost'] == 'fract.akamaized-staging.net' )
self.assertTrue( self.rlt.resultList[0]['Depth'] == 3 )
def test_test_from_urls_2(self):
self.rlt.test_from_urls('urllist4redirectloop.txt', None, 10)
self.assertTrue( self.rlt.allcount == 10)
self.assertTrue( self.rlt.errorcount == 0)
self.assertTrue( self.rlt.hasRedirectCount == 9)
self.assertTrue( self.rlt.resultList[0]['Threshold'] == 10 )
self.assertTrue( self.rlt.resultList[0]['ReachedThreshold'] == False )
self.assertTrue( self.rlt.resultList[0]['URL'] == 'https://fract.akamaized.net/301/3/' )
self.assertTrue( self.rlt.resultList[0]['TargetHost'] == 'fract.akamaized.net' )
self.assertTrue( self.rlt.resultList[0]['Depth'] == 3 )
def test_tracechain1(self):
subitem = self.rlt.getNewSubItem()
self.rlt.tracechain('https://fract.akamaized.net/301/', 'fract.akamaized.net', 5, subitem)
#logging.warning('test_test_tracechain(): subitem={}'.format(subitem))
self.assertTrue( subitem['Chain'][0]['Location'] == 'https://fract.akamaized.net/' )
self.assertTrue( subitem['Chain'][0]['status_code'] == 301 )
def test_tracechain2(self):
testdepth=3
subitem = self.rlt.getNewSubItem()
self.rlt.tracechain('https://fract.akamaized.net/301/{}/'.format(testdepth), 'fract.akamaized.net', 5, subitem)
self.assertTrue( len(subitem['Chain']) == testdepth )
def test_tracechain_overflow(self):
testdepth=6
subitem = self.rlt.getNewSubItem()
self.rlt.tracechain('https://fract.akamaized.net/301/{}/'.format(testdepth), 'fract.akamaized.net', 5, subitem)
self.assertTrue( len(subitem['Chain']) == 5 )
def test_tracechain_noredirect(self):
subitem = self.rlt.getNewSubItem()
self.rlt.tracechain('https://fract.akamaized.net/', 'fract.akamaized.net', 5, subitem)
self.assertTrue( len(subitem['Chain']) == 0 )
def test_save(self):
pass
def test_getNewSubItem(self):
obj = self.rlt.getNewSubItem()
self.assertTrue('Depth' in obj)
self.assertTrue('TargetHost' in obj)
self.assertTrue('Chain' in obj)
self.assertTrue('Threshold' in obj)
self.assertTrue( obj['ReachedThreshold'] is False)
if __name__ == '__main__':
logging.basicConfig(level=logging.DEBUG)
unittest.main()
| 59.853846 | 1,403 | 0.650919 | 4,694 | 38,905 | 5.29314 | 0.078611 | 0.08283 | 0.028013 | 0.023263 | 0.807897 | 0.754769 | 0.704097 | 0.676407 | 0.635595 | 0.625332 | 0 | 0.05306 | 0.14741 | 38,905 | 649 | 1,404 | 59.946071 | 0.69599 | 0.01663 | 0 | 0.49505 | 0 | 0.039604 | 0.444442 | 0.096681 | 0 | 0 | 0 | 0 | 0.384158 | 1 | 0.192079 | false | 0.124752 | 0.043564 | 0 | 0.253465 | 0.00396 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
672c60f5c79624ed4447dbf45f28e589bc204e91 | 148 | py | Python | hpfspecmatch/__init__.py | sinclairej/hpfspecmatch-1 | 4e2604c734e74572b33aabc3b2b7d983888701d6 | [
"MIT"
] | null | null | null | hpfspecmatch/__init__.py | sinclairej/hpfspecmatch-1 | 4e2604c734e74572b33aabc3b2b7d983888701d6 | [
"MIT"
] | null | null | null | hpfspecmatch/__init__.py | sinclairej/hpfspecmatch-1 | 4e2604c734e74572b33aabc3b2b7d983888701d6 | [
"MIT"
] | 1 | 2021-06-11T15:37:20.000Z | 2021-06-11T15:37:20.000Z | from .hpfspecmatch import *
from .likelihood import *
from .priors import *
from .utils import *
from .rotbroad_help import *
from .config import *
| 21.142857 | 28 | 0.756757 | 19 | 148 | 5.842105 | 0.473684 | 0.45045 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.162162 | 148 | 6 | 29 | 24.666667 | 0.895161 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
679924f8148b86d70274e4d475131ba039af5a94 | 103 | py | Python | pyccel/complexity/__init__.py | toddrme2178/pyccel | deec37503ab0c5d0bcca1a035f7909f7ce8ef653 | [
"MIT"
] | null | null | null | pyccel/complexity/__init__.py | toddrme2178/pyccel | deec37503ab0c5d0bcca1a035f7909f7ce8ef653 | [
"MIT"
] | null | null | null | pyccel/complexity/__init__.py | toddrme2178/pyccel | deec37503ab0c5d0bcca1a035f7909f7ce8ef653 | [
"MIT"
] | null | null | null | # -*- coding: UTF-8 -*-
from .basic import *
from .arithmetic import *
from .memory import *
| 17.166667 | 25 | 0.592233 | 12 | 103 | 5.083333 | 0.666667 | 0.327869 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013158 | 0.262136 | 103 | 5 | 26 | 20.6 | 0.789474 | 0.203884 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
679b4727524d05e4782dd28ea7e47ec564eecdb9 | 7,218 | py | Python | tests/test_behaviours.py | luciotorre/behaviour-trees | 26c60d9e8cc8a498af6c11c6d62c4e207cfcc6b9 | [
"MIT"
] | null | null | null | tests/test_behaviours.py | luciotorre/behaviour-trees | 26c60d9e8cc8a498af6c11c6d62c4e207cfcc6b9 | [
"MIT"
] | null | null | null | tests/test_behaviours.py | luciotorre/behaviour-trees | 26c60d9e8cc8a498af6c11c6d62c4e207cfcc6b9 | [
"MIT"
] | 1 | 2018-03-12T08:42:58.000Z | 2018-03-12T08:42:58.000Z | import unittest
import random
import math
import behaviours
class Testbehaviours(unittest.TestCase):
def test_do(self):
target = []
tree = behaviours.do("call_it", lambda state: state.append(True))
running, success = tree.tick(target)
self.assertEqual(running, False)
self.assertEqual(success, True)
self.assertEqual(target, [True])
def test_run(self):
target = []
tree = behaviours.run("call_it", lambda state: state.append(True))
running, success = tree.tick(target)
self.assertEqual(running, True)
self.assertEqual(target, [True])
running, success = tree.tick(target)
self.assertEqual(running, False)
self.assertEqual(success, True)
self.assertEqual(target, [True])
def test_do_fail(self):
tree = behaviours.do("call_it", lambda state: 1/0)
running, success = tree.tick()
self.assertEqual(running, False)
self.assertEqual(success, False)
def test_eval(self):
tree = behaviours.evalb("call it", lambda state: True)
running, success = tree.tick()
self.assertEqual(running, False)
self.assertEqual(success, True)
def test_eval_fail(self):
tree = behaviours.evalb("call it", lambda state: False)
running, success = tree.tick()
self.assertEqual(running, False)
self.assertEqual(success, False)
def test_wait(self):
tree = behaviours.wait(2)
running, success = tree.tick()
self.assertEqual(running, True)
running, success = tree.tick()
self.assertEqual(running, True)
running, success = tree.tick()
self.assertEqual(running, False)
self.assertEqual(success, True)
def test_sequence(self):
target = []
tree = behaviours.sequence("append two",
behaviours.do("call it1", lambda state: state.append(1)),
behaviours.wait(1),
behaviours.do("call it2", lambda state: state.append(2)),
)
running, success = tree.tick(target)
self.assertEqual(running, True)
self.assertEqual(target, [1])
running, success = tree.tick(target)
self.assertEqual(running, False)
self.assertEqual(success, True)
self.assertEqual(target, [1, 2])
def test_sequence_fail(self):
target = []
tree = behaviours.sequence("append two",
behaviours.evalb("fail", lambda s: False),
behaviours.do("call it", lambda s: target.append(2)),
)
running, success = tree.tick()
self.assertEqual(running, False)
self.assertEqual(success, False)
def test_select(self):
tree = behaviours.select("pick one",
behaviours.evalb("fail", lambda state: False),
behaviours.wait(1),
)
running, success = tree.tick()
self.assertEqual(running, True)
running, success = tree.tick()
self.assertEqual(running, False)
self.assertEqual(success, True)
def test_select_fail(self):
tree = behaviours.select("pick one",
behaviours.evalb("fail", lambda s: False),
behaviours.evalb("fail", lambda s: False),
)
running, success = tree.tick()
self.assertEqual(running, False)
self.assertEqual(success, False)
def test_parallel_while(self):
s = dict(stop=False, count=0)
tree = behaviours.whileb("repeat while",
behaviours.evalb("check", lambda state: not state['stop']),
behaviours.do("count", lambda state: state.__setitem__("count", state["count"] + 1))
)
running, success = tree.tick(s)
self.assertEqual(running, True)
self.assertEqual(s['count'], 1)
running, success = tree.tick(s)
self.assertEqual(running, True)
self.assertEqual(s['count'], 2)
s['stop'] = True
running, success = tree.tick(s)
self.assertEqual(running, False)
self.assertEqual(success, True)
self.assertEqual(s['count'], 3)
def test_parallel_until(self):
s = dict(stop=False, count=0)
tree = behaviours.untilb("repeat until",
behaviours.evalb("check", lambda state: state['stop']),
behaviours.notb(behaviours.do("incr", lambda state: state.__setitem__("count", state["count"] + 1))),
)
running, success = tree.tick(s)
self.assertEqual(running, True)
self.assertEqual(s['count'], 1)
running, success = tree.tick(s)
self.assertEqual(running, True)
self.assertEqual(s['count'], 2)
s['stop'] = True
running, success = tree.tick(s)
self.assertEqual(running, False)
self.assertEqual(success, True)
self.assertEqual(s['count'], 3)
def test_notb(self):
tree = behaviours.notb(
behaviours.evalb("check", lambda state: True))
running, success = tree.tick()
self.assertEqual(running, False)
self.assertEqual(success, False)
tree = behaviours.notb(
behaviours.evalb("check", lambda state: False))
running, success = tree.tick()
self.assertEqual(running, False)
self.assertEqual(success, True)
def test_chance(self):
random.seed(1)
v = random.random()
# this p is higher that the 'random' value that will be picked => success
success_v = math.sqrt(v)
# this p is lower that the 'random' value that will be picked => fail
fail_v = v**2
tree = behaviours.chance(success_v,
behaviours.evalb("check", lambda state: True)
)
random.seed(1)
running, success = tree.tick()
self.assertEqual(running, False)
self.assertEqual(success, True)
tree = behaviours.chance(fail_v,
behaviours.evalb("check", lambda state: True)
)
random.seed(1)
running, success = tree.tick()
self.assertEqual(running, False)
self.assertEqual(success, False)
random.seed(1)
tree = behaviours.chance(success_v,
behaviours.evalb("check", lambda state: False)
)
running, success = tree.tick()
self.assertEqual(running, False)
self.assertEqual(success, False)
def test_repeat(self):
target = []
tree = behaviours.repeat(behaviours.do("success", lambda state: state.append(True)))
running, success = tree.tick(target)
self.assertEqual(running, True)
self.assertEqual(target, [True])
running, success = tree.tick(target)
self.assertEqual(running, True)
self.assertEqual(target, [True, True])
def test_conditional(self):
tree = behaviours.conditional("condition",
condition=behaviours.evalb("check", lambda state: True),
true=behaviours.do("exito", lambda state: state.append(True)),
false=behaviours.do("fail", lambda state: state.append(False)),
)
target = []
running, success = tree.tick(target)
self.assertEqual(running, False)
self.assertEqual(success, True)
self.assertEqual(target, [True])
def test_conditional_long(self):
state = dict(target=[], condition=True)
tree = behaviours.conditional("condition",
condition=behaviours.evalb("check", lambda state: state['condition']),
true=behaviours.repeat(behaviours.do("success", lambda state: state['target'].append(True))),
false=behaviours.repeat(behaviours.do("fail", lambda state: state['target'].append(False))),
)
running, success = tree.tick(state)
self.assertEqual(running, True)
self.assertEqual(state['target'], [True])
running, success = tree.tick(state)
self.assertEqual(running, True)
self.assertEqual(state['target'], [True, True])
state['condition'] = False
running, success = tree.tick(state)
self.assertEqual(running, True)
self.assertEqual(state['target'], [True, True, False])
state['condition'] = True
running, success = tree.tick(state)
self.assertEqual(running, True)
self.assertEqual(state['target'], [True, True, False, True])
| 28.305882 | 104 | 0.703242 | 918 | 7,218 | 5.485839 | 0.081699 | 0.20552 | 0.117951 | 0.144162 | 0.839357 | 0.820095 | 0.792693 | 0.77363 | 0.699563 | 0.684472 | 0 | 0.004564 | 0.150042 | 7,218 | 254 | 105 | 28.417323 | 0.8163 | 0.019257 | 0 | 0.633166 | 0 | 0 | 0.051166 | 0 | 0 | 0 | 0 | 0 | 0.346734 | 1 | 0.085427 | false | 0 | 0.020101 | 0 | 0.110553 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
67a21ac2b65e42e6a91c28458c86ff30c8ec1b30 | 4,066 | py | Python | tests/sync/test_calendar_getters.py | HETHAT/aladhan.py | 1426f2ab1c298c430f2b94f748bcab94129f500c | [
"MIT"
] | 8 | 2021-04-23T16:26:41.000Z | 2021-08-02T21:02:53.000Z | tests/sync/test_calendar_getters.py | HETHAT/aladhan.py | 1426f2ab1c298c430f2b94f748bcab94129f500c | [
"MIT"
] | 2 | 2021-07-30T20:43:34.000Z | 2021-08-20T21:44:39.000Z | tests/sync/test_calendar_getters.py | HETHAT/aladhan.py | 1426f2ab1c298c430f2b94f748bcab94129f500c | [
"MIT"
] | 2 | 2021-04-23T15:47:43.000Z | 2021-07-30T20:09:32.000Z | import pytest
import aladhan
@pytest.fixture
def client():
with aladhan.Client() as client:
yield client
@pytest.mark.parametrize(
["args", "kwargs", "expected"],
[
[(34, 4), {"date": aladhan.CalendarDateArg(2021, 5)}, list],
[(34, 4), {"date": aladhan.CalendarDateArg(2021)}, dict],
[
(34, 4),
{"date": aladhan.CalendarDateArg(1442, 9, hijri=True)},
list,
],
[(34, 4), {"date": aladhan.CalendarDateArg(1442, hijri=True)}, dict],
[
(34.69, 4.420),
{
"date": aladhan.CalendarDateArg(2021, 5),
"params": None,
},
list,
],
[
(34, 4),
{
"date": aladhan.CalendarDateArg(2021, 5),
"params": aladhan.Parameters(tune=aladhan.Tune(1)),
},
list,
],
[
(34, 4),
{
"date": aladhan.CalendarDateArg(1442, 9, hijri=True),
"params": aladhan.Parameters(tune=aladhan.Tune(1)),
},
list,
],
],
)
def test_calendar(client, args, kwargs, expected):
ts = client.get_calendar(*args, **kwargs)
assert isinstance(ts, expected)
assert isinstance(
expected == list and ts[0] or ts["1"][0], aladhan.Timings
)
@pytest.mark.parametrize(
["args", "kwargs", "expected"],
[
[("London",), {"date": aladhan.CalendarDateArg(2021)}, dict],
[
("London",),
{"date": aladhan.CalendarDateArg(1442, 9, hijri=True)},
list,
],
[
("London",),
{"date": aladhan.CalendarDateArg(1442, hijri=True)},
dict,
],
[
("London",),
{
"date": aladhan.CalendarDateArg(2021, 5),
"params": aladhan.Parameters(tune=aladhan.Tune(1)),
},
list,
],
[
("London",),
{
"date": aladhan.CalendarDateArg(1442, 9, hijri=True),
"params": aladhan.Parameters(tune=aladhan.Tune(1)),
},
list,
],
],
)
def test_calendar_by_address(client, args, kwargs, expected):
ts = client.get_calendar_by_address(*args, **kwargs)
assert isinstance(ts, expected)
assert isinstance(
expected == list and ts[0] or ts["1"][0], aladhan.Timings
)
@pytest.mark.parametrize(
["args", "kwargs", "expected"],
[
[("London", "GB"), {"date": aladhan.CalendarDateArg(2021)}, dict],
[
("London", "GB"),
{"date": aladhan.CalendarDateArg(2021), "state": "Bexley"},
dict,
],
[
("London", "GB"),
{
"date": aladhan.CalendarDateArg(2021),
"state": None,
"params": None,
},
dict,
],
[("London", "GB"), {"date": aladhan.CalendarDateArg(2021)}, dict],
[
("London", "GB"),
{"date": aladhan.CalendarDateArg(1442, 9, hijri=True)},
list,
],
[
("London", "GB"),
{"date": aladhan.CalendarDateArg(1442, hijri=True)},
dict,
],
[
("London", "GB"),
{
"date": aladhan.CalendarDateArg(2021, 5),
"params": aladhan.Parameters(tune=aladhan.Tune(1)),
},
list,
],
[
("London", "GB"),
{
"date": aladhan.CalendarDateArg(1442, 9, hijri=True),
"params": aladhan.Parameters(tune=aladhan.Tune(1)),
},
list,
],
],
)
def test_calendar_by_city(client, args, kwargs, expected):
ts = client.get_calendar_by_city(*args, **kwargs)
assert isinstance(ts, expected)
assert isinstance(
expected == list and ts[0] or ts["1"][0], aladhan.Timings
)
| 27.288591 | 77 | 0.455976 | 347 | 4,066 | 5.302594 | 0.144092 | 0.119565 | 0.282609 | 0.179348 | 0.925543 | 0.92337 | 0.863587 | 0.791848 | 0.679348 | 0.56413 | 0 | 0.052548 | 0.382194 | 4,066 | 148 | 78 | 27.472973 | 0.679936 | 0 | 0 | 0.578571 | 0 | 0 | 0.072553 | 0 | 0 | 0 | 0 | 0 | 0.042857 | 1 | 0.028571 | false | 0 | 0.014286 | 0 | 0.042857 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
67b345bf2d99af89195669c775553fb732fe834d | 202 | py | Python | dawa_facade/tests/replication/__init__.py | YnkDK/Dawa-Facade | c013dd9dc06581ea3955ddbe9d50880ae9f072b0 | [
"MIT"
] | null | null | null | dawa_facade/tests/replication/__init__.py | YnkDK/Dawa-Facade | c013dd9dc06581ea3955ddbe9d50880ae9f072b0 | [
"MIT"
] | null | null | null | dawa_facade/tests/replication/__init__.py | YnkDK/Dawa-Facade | c013dd9dc06581ea3955ddbe9d50880ae9f072b0 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
"""__init__.py.py
Copyright 2017 Martin Storgaard Dieu under The MIT License
Written by Martin Storgaard Dieu <martin@storgaarddieu.com>, november 2017
""" | 28.857143 | 74 | 0.737624 | 29 | 202 | 5 | 0.793103 | 0.206897 | 0.262069 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.051136 | 0.128713 | 202 | 7 | 75 | 28.857143 | 0.772727 | 0.950495 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
67e805afef2f9ecbaf0d77b3b16cdd52ffcbb37d | 2,511 | py | Python | networks/SimpleMLPs.py | EchteRobert/FeatureAggregation | 3dc17178dae6ae2acbbadb559ebe89298a778423 | [
"BSD-3-Clause"
] | 1 | 2022-03-02T16:47:21.000Z | 2022-03-02T16:47:21.000Z | networks/SimpleMLPs.py | EchteRobert/FeatureAggregation | 3dc17178dae6ae2acbbadb559ebe89298a778423 | [
"BSD-3-Clause"
] | 1 | 2022-03-02T10:12:44.000Z | 2022-03-02T10:13:12.000Z | networks/SimpleMLPs.py | broadinstitute/FeatureAggregation | 3dc17178dae6ae2acbbadb559ebe89298a778423 | [
"BSD-3-Clause"
] | null | null | null |
import torch
import torch.nn as nn
import torch.nn.functional as F
class MLP(nn.Module):
def __init__(self, input_dim=1938, latent_dim=256, output_dim=128, k=1):
super(MLP, self).__init__()
self.latent_dim = latent_dim
self.output_dim = output_dim
# Feature extraction sub-model
self.lin1 = nn.Linear(input_dim, int(256//k)) # (input channels, output channels, kernel_size)
self.lin2 = nn.Linear(int(256//k), int(256//k))
self.lin3 = nn.Linear(int(256//k), self.latent_dim) # this projects the BSx1938 vector into a BSxlatent_dim vector
# Projection head on top of the desired feature representation
self.proj1 = nn.Linear(self.latent_dim, int(128//k))
self.proj2 = nn.Linear(int(128//k), int(128//k))
self.proj3 = nn.Linear(int(128//k), self.output_dim)
def forward(self, x):
# Feature extraction sub-model
x = F.leaky_relu(self.lin1(x))
x = F.leaky_relu(self.lin2(x))
x = torch.max(x, 1, keepdim=True)[0]
x = x.view(x.shape[0], -1)
features = F.leaky_relu(self.lin3(x))
# Projection head
x = F.leaky_relu(self.proj1(features))
x = F.leaky_relu(self.proj2(x))
x = F.leaky_relu(self.proj3(x))
return x, features
class oldMLP(nn.Module):
def __init__(self, input_dim=400, latent_dim=256, output_dim=128, k=1):
super(oldMLP, self).__init__()
self.latent_dim = latent_dim
self.output_dim = output_dim
# Feature extraction sub-model
self.lin1 = nn.Linear(input_dim, 256//k) # (input channels, output channels, kernel_size)
self.lin2 = nn.Linear(256//k, 256//k)
self.lin3 = nn.Linear(256//k, self.latent_dim) # this projects the BSx1938 vector into a BSxlatent_dim vector
# Projection head on top of the desired feature representation
self.proj1 = nn.Linear(self.latent_dim, 128//k)
self.proj2 = nn.Linear(128//k, 128//k)
self.proj3 = nn.Linear(128//k, self.output_dim)
def forward(self, x):
# Feature extraction sub-model
x = F.leaky_relu(self.lin1(x))
x = F.leaky_relu(self.lin2(x))
x = torch.max(x, 1, keepdim=True)[0]
x = x.view(x.shape[0], -1)
features = F.leaky_relu(self.lin3(x))
# Projection head
x = F.leaky_relu(self.proj1(features))
x = F.leaky_relu(self.proj2(x))
x = F.leaky_relu(self.proj3(x))
return x, features
| 34.39726 | 122 | 0.620868 | 385 | 2,511 | 3.909091 | 0.171429 | 0.063787 | 0.079734 | 0.111628 | 0.927575 | 0.914286 | 0.831894 | 0.796013 | 0.796013 | 0.754817 | 0 | 0.057611 | 0.246515 | 2,511 | 72 | 123 | 34.875 | 0.737844 | 0.19315 | 0 | 0.533333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.088889 | false | 0 | 0.066667 | 0 | 0.244444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c02501c7c02120dc2ca6ca4ac1ce0c02cb8bdfae | 554 | py | Python | tests/nfe_reader/ba/conftest.py | jroquejr/nfe-reader | 277379bfb9865b2656c2576d8ccf8c3e1f3cacd1 | [
"MIT"
] | null | null | null | tests/nfe_reader/ba/conftest.py | jroquejr/nfe-reader | 277379bfb9865b2656c2576d8ccf8c3e1f3cacd1 | [
"MIT"
] | 2 | 2021-04-21T14:57:31.000Z | 2021-04-21T14:57:32.000Z | tests/nfe_reader/ba/conftest.py | jroquejr/nfe-reader | 277379bfb9865b2656c2576d8ccf8c3e1f3cacd1 | [
"MIT"
] | null | null | null | from pytest import fixture
from tests.util import load_file
@fixture(scope="module")
def html_first_page():
return load_file("crawler_ba/fixture-first-page.html")
@fixture(scope="module")
def html_nfe():
return load_file("crawler_ba/nfe.html")
@fixture(scope="module")
def html_emitter():
return load_file("crawler_ba/emitter.html")
@fixture(scope="module")
def html_products():
return load_file("crawler_ba/products.html")
@fixture(scope="module")
def html_server_error():
return load_file("crawler_ba/server-error.html")
| 19.103448 | 58 | 0.741877 | 80 | 554 | 4.9125 | 0.275 | 0.122137 | 0.229008 | 0.267176 | 0.651399 | 0.295165 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115523 | 554 | 28 | 59 | 19.785714 | 0.802041 | 0 | 0 | 0.294118 | 0 | 0 | 0.285199 | 0.196751 | 0 | 0 | 0 | 0 | 0 | 1 | 0.294118 | true | 0 | 0.117647 | 0.294118 | 0.705882 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
c054c8007aa99524c9d2240e5958653dd634e461 | 96 | py | Python | terrascript/helm/r.py | benediktkr/python-terrascript | 7b56a34563964e0313dd39b52b4b7d9ceb71dde9 | [
"BSD-2-Clause"
] | 4 | 2022-02-07T21:08:14.000Z | 2022-03-03T04:41:28.000Z | terrascript/helm/r.py | benediktkr/python-terrascript | 7b56a34563964e0313dd39b52b4b7d9ceb71dde9 | [
"BSD-2-Clause"
] | null | null | null | terrascript/helm/r.py | benediktkr/python-terrascript | 7b56a34563964e0313dd39b52b4b7d9ceb71dde9 | [
"BSD-2-Clause"
] | 2 | 2022-02-06T01:49:42.000Z | 2022-02-08T14:15:00.000Z | # terrascript/helm/r.py
import terrascript
class helm_release(terrascript.Resource):
pass
| 13.714286 | 41 | 0.78125 | 12 | 96 | 6.166667 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.135417 | 96 | 6 | 42 | 16 | 0.891566 | 0.21875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
fbe5060192efcb5acd37307b4b214a57cb965a70 | 1,255 | py | Python | tests/finders/test_functional_text.py | ace-ecosystem/urlfinderlib | 6d58362b9943d037b749768815fc6d012367cbf5 | [
"Apache-2.0"
] | null | null | null | tests/finders/test_functional_text.py | ace-ecosystem/urlfinderlib | 6d58362b9943d037b749768815fc6d012367cbf5 | [
"Apache-2.0"
] | 21 | 2019-12-03T14:23:46.000Z | 2021-08-30T20:34:44.000Z | tests/finders/test_functional_text.py | ace-ecosystem/urlfinderlib | 6d58362b9943d037b749768815fc6d012367cbf5 | [
"Apache-2.0"
] | 3 | 2019-12-03T17:44:47.000Z | 2020-08-11T20:43:15.000Z | import urlfinderlib.finders
def test_find_urls_in_text():
text = b"""
<http://domain.com/angle_brackets>
`http://domain.com/backticks`
[http://domain.com/brackets]
{http://domain.com/curly_brackets}
"http://domain.com/double_quotes"
(http://domain.com/parentheses)
'http://domain.com/single_quotes'
http://domain.com/spaces
http://domain.com/lines
http://domain.com/text<http://domain.com/actual>
"""
finder = urlfinderlib.finders.TextUrlFinder(text)
expected_urls = {
"http://domain.com/angle_brackets",
"http://domain.com/backticks",
"http://domain.com/brackets",
"http://domain.com/curly_brackets",
"http://domain.com/double_quotes",
"http://domain.com/parentheses",
"http://domain.com/single_quotes",
"http://domain.com/spaces",
"http://domain.com/lines",
"http://domain.com/text",
"http://domain.com/actual",
}
assert finder.find_urls() == expected_urls
def test_invalid_ipv6():
text = b"""
https://domain.com. [https://domain2.com]
"""
finder = urlfinderlib.finders.TextUrlFinder(text)
expected_urls = {
"https://domain.com",
"https://domain2.com",
}
assert finder.find_urls() == expected_urls | 26.145833 | 53 | 0.643825 | 151 | 1,255 | 5.218543 | 0.211921 | 0.274112 | 0.362944 | 0.159898 | 0.906091 | 0.906091 | 0.751269 | 0.614213 | 0.614213 | 0.614213 | 0 | 0.002887 | 0.172112 | 1,255 | 48 | 54 | 26.145833 | 0.755534 | 0 | 0 | 0.25641 | 0 | 0 | 0.566879 | 0 | 0 | 0 | 0 | 0 | 0.051282 | 1 | 0.051282 | false | 0 | 0.025641 | 0 | 0.076923 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
220a7905c09030343a94cb7c1575e3a5a7f507a5 | 3,343 | py | Python | test/test_command_line_parsing.py | jmiiller/dns_shark | 80ee4c7ec32fc3fec202e5142cf745d432770947 | [
"MIT"
] | 3 | 2020-01-21T20:32:35.000Z | 2020-08-01T07:14:55.000Z | test/test_command_line_parsing.py | jmiiller/dns_shark | 80ee4c7ec32fc3fec202e5142cf745d432770947 | [
"MIT"
] | 4 | 2020-01-20T01:16:39.000Z | 2020-01-20T01:34:27.000Z | test/test_command_line_parsing.py | jmiiller/dns_shark | 80ee4c7ec32fc3fec202e5142cf745d432770947 | [
"MIT"
] | null | null | null | import unittest
from dns_shark.command_line_parsing import create_parser
from contextlib import redirect_stderr
from io import StringIO
class CommandLineParsingTests(unittest.TestCase):
"""
Unit testing for command_line_parsing.py (i.e. command line parsing logic)
"""
def setUp(self):
self.parser = create_parser()
self.buffer = StringIO()
def test_no_args_provided(self):
"""
Test case for when no arguments are supplied.
"""
with redirect_stderr(self.buffer):
self.assertRaises(SystemExit, self.parser.parse_args, [])
def test_one_arg_provided(self):
"""
Test case for when only one argument is supplied.
"""
with redirect_stderr(self.buffer):
self.assertRaises(SystemExit, self.parser.parse_args, ["127.0.0.1"])
def test_only_required_args_given(self):
"""
Test case for when only the two required arguments are supplied.
"""
parsed_args = self.parser.parse_args(['127.0.0.1', 'www.jeffreymiiller.com'])
self.assertEqual(parsed_args.dns_server_ip, ['127.0.0.1'])
self.assertEqual(parsed_args.domain_name, ['www.jeffreymiiller.com'])
self.assertEqual(parsed_args.verbose, None)
self.assertEqual(parsed_args.ipv6, None)
def test_verbose_true(self):
"""
Test case for when the verbose argument is supplied.
"""
parsed_args = self.parser.parse_args(['127.0.0.1', 'www.jeffreymiiller.com', '--verbose', '1'])
self.assertEqual(parsed_args.dns_server_ip, ['127.0.0.1'])
self.assertEqual(parsed_args.domain_name, ['www.jeffreymiiller.com'])
self.assertEqual(parsed_args.verbose, [True])
self.assertEqual(parsed_args.ipv6, None)
def test_ipv6_true(self):
"""
Test case for when the ipv6 argument is supplied.
"""
parsed_args = self.parser.parse_args(['127.0.0.1', 'www.jeffreymiiller.com', '--ipv6', '1'])
self.assertEqual(parsed_args.dns_server_ip, ['127.0.0.1'])
self.assertEqual(parsed_args.domain_name, ['www.jeffreymiiller.com'])
self.assertEqual(parsed_args.verbose, None)
self.assertEqual(parsed_args.ipv6, [True])
def test_ipv6_and_verbose_true(self):
"""
Test case for when both the ipv6 and verbose arguments are supplied.
"""
parsed_args = self.parser.parse_args(['127.0.0.1', 'www.jeffreymiiller.com', '--verbose', '1', '--ipv6', '1'])
self.assertEqual(parsed_args.dns_server_ip, ['127.0.0.1'])
self.assertEqual(parsed_args.domain_name, ['www.jeffreymiiller.com'])
self.assertEqual(parsed_args.verbose, [True])
self.assertEqual(parsed_args.ipv6, [True])
def test_ipv6_and_verbose_reverse_order(self):
"""
Test case that reverses the order the ipv6 and verbose arguments.
"""
parsed_args = self.parser.parse_args(['127.0.0.1', 'www.jeffreymiiller.com', '--ipv6', '1', '--verbose', '1'])
self.assertEqual(parsed_args.dns_server_ip, ['127.0.0.1'])
self.assertEqual(parsed_args.domain_name, ['www.jeffreymiiller.com'])
self.assertEqual(parsed_args.verbose, [True])
self.assertEqual(parsed_args.ipv6, [True])
if __name__ == '__main__':
unittest.main()
| 37.988636 | 118 | 0.654801 | 430 | 3,343 | 4.886047 | 0.174419 | 0.118991 | 0.199905 | 0.237982 | 0.78534 | 0.764874 | 0.726321 | 0.687292 | 0.673965 | 0.673965 | 0 | 0.032638 | 0.211786 | 3,343 | 87 | 119 | 38.425287 | 0.764706 | 0.14149 | 0 | 0.478261 | 0 | 0 | 0.140364 | 0.081693 | 0 | 0 | 0 | 0 | 0.478261 | 1 | 0.173913 | false | 0 | 0.086957 | 0 | 0.282609 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
222e3a8b079a35cec4d4aa1b3557b734e7de0373 | 43 | py | Python | MeasureVAE/__init__.py | RetroCirce/Music-SketchNet | 40b45a658414703b7583e25a2c41e753c4a61e81 | [
"CC0-1.0"
] | 57 | 2020-08-01T04:08:36.000Z | 2022-02-21T10:13:56.000Z | MeasureVAE/__init__.py | RetroCirce/Music-SketchNet | 40b45a658414703b7583e25a2c41e753c4a61e81 | [
"CC0-1.0"
] | 3 | 2020-10-09T11:37:28.000Z | 2021-09-08T02:24:50.000Z | MeasureVAE/__init__.py | RetroCirce/Music-SketchNet | 40b45a658414703b7583e25a2c41e753c4a61e81 | [
"CC0-1.0"
] | 8 | 2020-08-05T12:32:45.000Z | 2022-03-22T02:23:10.000Z | from . import measure_vae, encoder, decoder | 43 | 43 | 0.813953 | 6 | 43 | 5.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.116279 | 43 | 1 | 43 | 43 | 0.894737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
223857b59f08513b546cff7d04bfbddc1e3cba4e | 47,603 | py | Python | tests/cli_test.py | DanielHHowell/servox | 789f6b1fe6afaf41b754a866d0f8bbbe079eab4f | [
"Apache-2.0"
] | 4 | 2020-08-10T21:38:24.000Z | 2021-04-02T18:04:23.000Z | tests/cli_test.py | DanielHHowell/servox | 789f6b1fe6afaf41b754a866d0f8bbbe079eab4f | [
"Apache-2.0"
] | 265 | 2020-07-24T14:20:02.000Z | 2022-03-31T16:05:18.000Z | tests/cli_test.py | DanielHHowell/servox | 789f6b1fe6afaf41b754a866d0f8bbbe079eab4f | [
"Apache-2.0"
] | 3 | 2020-10-30T16:45:09.000Z | 2022-02-03T08:08:03.000Z | import json
import os
import re
from pathlib import Path
import pytest
import respx
import yaml
from freezegun import freeze_time
from typer import Typer
from typer.testing import CliRunner
import servo
from servo import Optimizer
from servo.cli import CLI, Context, ServoCLI
from servo.connectors.vegeta import VegetaConnector
from servo.servo import Servo
from tests.helpers import MeasureConnector
@pytest.fixture()
def cli_runner() -> CliRunner:
return CliRunner(mix_stderr=False)
@pytest.fixture()
def optimizer() -> Optimizer:
return Optimizer(id="dev.opsani.com/servox", token="123456789")
@pytest.fixture()
def servo_cli() -> ServoCLI:
return ServoCLI()
@pytest.fixture(autouse=True)
def servo_yaml(tmp_path: Path) -> Path:
config_path: Path = tmp_path / "servo.yaml"
config_path.touch()
return config_path
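# NOTE: autouse=True means every test in this module starts with an empty
# servo.yaml in its own tmp_path; more specific fixtures (e.g.
# vegeta_config_file below) write real config into it.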
@pytest.fixture()
def vegeta_config_file(servo_yaml: Path) -> Path:
config = {
"connectors": ["vegeta"],
"vegeta": {"rate": 0, "target": "https://opsani.com/"},
}
servo_yaml.write_text(yaml.dump(config))
return servo_yaml
def test_help(cli_runner: CliRunner, servo_cli: Typer) -> None:
result = cli_runner.invoke(servo_cli, "--help")
assert result.exit_code == 0
assert "servo [OPTIONS] COMMAND [ARGS]" in result.stdout
def test_run(cli_runner: CliRunner, servo_cli: Typer) -> None:
"""Run the servo"""
def test_console(cli_runner: CliRunner, servo_cli: Typer) -> None:
"""Open an interactive console"""
def test_connectors(
cli_runner: CliRunner, servo_cli: Typer, optimizer_env: None
) -> None:
result = cli_runner.invoke(servo_cli, "connectors")
assert result.exit_code == 0, f"expected exit code 0, but found {result.exit_code}: {result.stderr}"
assert re.search("^NAME\\s+VERSION\\s+DESCRIPTION\n", result.stdout)
def test_connectors_verbose(
cli_runner: CliRunner,
servo_cli: Typer,
vegeta_config_file: Path,
optimizer_env: None,
) -> None:
result = cli_runner.invoke(servo_cli, "connectors -v")
assert result.exit_code == 0
assert re.match(
"NAME\\s+VERSION\\s+DESCRIPTION\\s+HOMEPAGE\\s+MATURITY\\s+LICENSE",
result.stdout,
)
def test_check_no_optimizer(cli_runner: CliRunner, servo_cli: Typer) -> None:
result = cli_runner.invoke(servo_cli, "check")
assert result.exit_code == 2
assert "Error: Invalid value: An optimizer must be specified" in result.stderr
@respx.mock
def test_check(
cli_runner: CliRunner, servo_cli: Typer, optimizer_env: None, stub_servo_yaml: Path
) -> None:
request = respx.post("https://api.opsani.com/accounts/dev.opsani.com/applications/servox/servo")
result = cli_runner.invoke(servo_cli, "check")
assert request.called, f"stdout={result.stdout}, stderr={result.stderr}"
assert result.exit_code == 0
assert re.search("CONNECTOR\\s+STATUS", result.stdout)
@respx.mock
def test_check_multiservo(
cli_runner: CliRunner, servo_cli: Typer, stub_multiservo_yaml: Path,
) -> None:
request1 = respx.post(
"https://api.opsani.com/accounts/dev.opsani.com/applications/multi-servox-1/servo"
)
request2 = respx.post(
"https://api.opsani.com/accounts/dev.opsani.com/applications/multi-servox-2/servo"
)
result = cli_runner.invoke(servo_cli, "check")
assert result.exit_code == 0, f"exited with non-zero status code (stdout={result.stdout}, stderr={result.stderr})"
assert request1.called
assert request2.called
assert re.search("CONNECTOR\\s+STATUS", result.stdout)
assert re.search("dev.opsani.com/multi-servox-1\\s+√ PASSED", result.stdout)
assert re.search("dev.opsani.com/multi-servox-2\\s+√ PASSED", result.stdout)
@respx.mock
def test_check_multiservo_by_name(
cli_runner: CliRunner, servo_cli: Typer, stub_multiservo_yaml: Path,
) -> None:
request1 = respx.post(
"https://api.opsani.com/accounts/dev.opsani.com/applications/multi-servox-1/servo"
)
request2 = respx.post(
"https://api.opsani.com/accounts/dev.opsani.com/applications/multi-servox-2/servo"
)
result = cli_runner.invoke(servo_cli, "-n dev.opsani.com/multi-servox-2 check")
assert result.exit_code == 0, f"exited with non-zero status code (stdout={result.stdout}, stderr={result.stderr})"
assert not request1.called
assert request2.called
assert re.search("CONNECTOR\\s+STATUS", result.stdout)
assert re.search("dev.opsani.com/multi-servox-1\\s+√ PASSED", result.stdout) is None
assert re.search("dev.opsani.com/multi-servox-2\\s+√ PASSED", result.stdout)
@respx.mock
def test_check_verbose(
cli_runner: CliRunner, servo_cli: Typer, optimizer_env: None, stub_servo_yaml: Path
) -> None:
request = respx.post("https://api.opsani.com/accounts/dev.opsani.com/applications/servox/servo")
result = cli_runner.invoke(servo_cli, "check -v", catch_exceptions=False)
assert request.called
assert result.exit_code == 0, f"result is: {result.stdout}, {result.stderr}"
assert re.search(
"CONNECTOR\\s+CHECK\\s+ID\\s+TAGS\\s+STATUS\\s+MESSAGE", result.stdout
)
@pytest.mark.usefixtures("optimizer_env")
class TestShow:
def test_help_does_not_require_optimizer_and_token(
self, cli_runner: CliRunner, servo_cli: Typer, clean_environment
) -> None:
clean_environment()
result = cli_runner.invoke(servo_cli, "show --help")
assert result.exit_code == 2
assert "Error: Invalid value: An optimizer must be specified" in result.stderr
def test_help(
self, cli_runner: CliRunner, servo_cli: Typer
) -> None:
result = cli_runner.invoke(servo_cli, "show --help")
assert result.exit_code == 0
assert "Display one or more resources" in result.stdout
def test_connectors(
self, cli_runner: CliRunner, servo_cli: Typer
) -> None:
result = cli_runner.invoke(servo_cli, "show connectors", catch_exceptions=False)
assert result.exit_code == 0
assert re.match("NAME\\s+TYPE\\s+VERSION\\s+DESCRIPTION\n", result.stdout)
def test_components(
self, cli_runner: CliRunner, servo_cli: Typer, stub_servo_yaml: Path
) -> None:
result = cli_runner.invoke(servo_cli, "show components", catch_exceptions=False)
assert result.exit_code == 0
assert re.search("COMPONENT\\s+SETTINGS\\s+CONNECTOR", result.stdout)
def test_events_all(
self, cli_runner: CliRunner, servo_cli: Typer, servo_yaml: Path
) -> None:
result = cli_runner.invoke(servo_cli, "show events -a", catch_exceptions=False)
assert result.exit_code == 0
assert re.match("EVENT\\s+CONNECTORS", result.stdout)
assert re.search("^check", result.stdout, flags=re.MULTILINE)
assert re.search("^adjust\\s.+", result.stdout, flags=re.MULTILINE)
assert re.search("^components\\s.+", result.stdout, flags=re.MULTILINE)
def test_events_includes_servo(
self, cli_runner: CliRunner, servo_cli: Typer,
) -> None:
result = cli_runner.invoke(servo_cli, "show events", catch_exceptions=False)
assert result.exit_code == 0
assert re.search("Servo", result.stdout)
def test_events_on(
self, cli_runner: CliRunner, servo_cli: Typer, stub_servo_yaml: Path
) -> None:
result = cli_runner.invoke(servo_cli, "show events --on", catch_exceptions=False)
assert result.exit_code == 0
assert re.match("EVENT\\s+CONNECTORS", result.stdout)
assert re.search("check\\s+Servo\n", result.stdout)
assert re.search("measure\\s+Measure\n", result.stdout)
assert not re.search("before measure\\s+Measure", result.stdout)
assert not re.search("after measure\\s+Measure", result.stdout)
assert len(result.stdout.split("\n")) > 3
def test_events_no_on(
self, cli_runner: CliRunner, servo_cli: Typer, stub_servo_yaml: Path
) -> None:
result = cli_runner.invoke(servo_cli, "show events --no-on", catch_exceptions=False)
assert result.exit_code == 0
assert re.match("EVENT\\s+CONNECTORS", result.stdout)
assert not re.search("check\\s+Servo\n", result.stdout)
assert not re.search("^measure\\s+Measure\n", result.stdout)
assert re.search("after measure\\s+Measure", result.stdout)
def test_events_after_on(
self, cli_runner: CliRunner, servo_cli: Typer, stub_servo_yaml: Path
) -> None:
result = cli_runner.invoke(
servo_cli, "show events --after --on", catch_exceptions=False
)
assert result.exit_code == 0
assert re.match("EVENT\\s+CONNECTORS", result.stdout)
assert re.search("check\\s+Servo\n", result.stdout)
assert not re.search("^measure\\s+Measure\n", result.stdout)
assert re.search("after measure\\s+Measure", result.stdout)
def test_events_no_on_before(
self, cli_runner: CliRunner, servo_cli: Typer, stub_servo_yaml: Path
) -> None:
result = cli_runner.invoke(
servo_cli, "show events --no-on --before", catch_exceptions=False
)
assert result.exit_code == 0
assert re.match("EVENT\\s+CONNECTORS", result.stdout)
assert not re.search("check\\s+Servo\n", result.stdout)
assert not re.search("^measure\\s+Measure\n", result.stdout)
assert re.search("before measure\\s+Measure", result.stdout)
assert re.search("after measure\\s+Measure", result.stdout)
def test_events_no_after(
self, cli_runner: CliRunner, servo_cli: Typer, stub_servo_yaml: Path
) -> None:
result = cli_runner.invoke(
servo_cli, "show events --no-after", catch_exceptions=False
)
assert result.exit_code == 0
assert re.match("EVENT\\s+CONNECTORS", result.stdout)
assert re.search("check\\s+Servo\n", result.stdout)
assert re.search("measure\\s+Measure\n", result.stdout)
assert re.search("before measure\\s+Measure", result.stdout)
assert not re.search("after measure\\s+Measure", result.stdout)
def test_events_by_connector(
self, cli_runner: CliRunner, servo_cli: Typer, stub_servo_yaml: Path
) -> None:
result = cli_runner.invoke(
servo_cli, "show events --by-connector", catch_exceptions=False
)
assert result.exit_code == 0
assert re.match("CONNECTOR\\s+EVENTS", result.stdout)
assert re.search("Servo\\s+check\n", result.stdout)
assert re.search(
"Adjust\\s+adjust\n\\s+components\n\\s+describe",
result.stdout,
flags=re.MULTILINE,
)
assert re.search(
"Measure\\s+describe\n\\s+before measure\n\\s+measure\n\\s+after measure\n\\s+metrics",
result.stdout,
flags=re.MULTILINE,
)
def test_events_empty_config_file(
self, cli_runner: CliRunner, servo_cli: Typer, servo_yaml: Path
) -> None:
result = cli_runner.invoke(servo_cli, "show events", catch_exceptions=False)
assert result.exit_code == 0
assert re.match("EVENT\\s+CONNECTORS", result.stdout), f"Failed to match with output: {result.stdout}"
assert "check Servo\n" in result.stdout
assert len(result.stdout.split("\n")) == 4
def test_metrics(
self, cli_runner: CliRunner, servo_cli: Typer, stub_servo_yaml: Path
) -> None:
result = cli_runner.invoke(servo_cli, "show metrics", catch_exceptions=False)
assert result.exit_code == 0
assert re.match("METRIC\\s+UNIT\\s+CONNECTORS", result.stdout)
@pytest.mark.usefixtures("stub_multiservo_yaml")
class TestMultiservo:
@pytest.fixture
def optimizer_env(self) -> None:
# NOTE: zero out the optimizer_env fixture since it can't be used
# under multiservo
pass
def test_connectors(
self, cli_runner: CliRunner, servo_cli: Typer
) -> None:
result = cli_runner.invoke(servo_cli, "show connectors", catch_exceptions=False)
assert result.exit_code == 0, f"Non-zero exit status code: stdout={result.stdout}, stderr={result.stderr}"
assert re.match("dev.opsani.com/multi-servox-1\nNAME\\s+TYPE\\s+VERSION\\s+DESCRIPTION\n", result.stdout)
def test_connectors_by_name(
self, cli_runner: CliRunner, servo_cli: Typer
) -> None:
result = cli_runner.invoke(servo_cli, "-n dev.opsani.com/multi-servox-2 show connectors", catch_exceptions=False)
assert result.exit_code == 0, f"Non-zero exit status code: stdout={result.stdout}, stderr={result.stderr}"
assert re.match("dev.opsani.com/multi-servox-2\nNAME\\s+TYPE\\s+VERSION\\s+DESCRIPTION\n", result.stdout)
def test_components(
self, cli_runner: CliRunner, servo_cli: Typer
) -> None:
result = cli_runner.invoke(servo_cli, "show components", catch_exceptions=False)
assert result.exit_code == 0
assert re.search("COMPONENT\\s+SETTINGS\\s+CONNECTOR", result.stdout)
assert re.search("dev.opsani.com/multi-servox-1", result.stdout)
assert re.search("dev.opsani.com/multi-servox-2", result.stdout)
assert re.search("main\\s+cpu=3 RangeSetting\\(range=\\[0..10\\], step=1\\)\\s+adjust", result.stdout)
def test_components_by_name(
self, cli_runner: CliRunner, servo_cli: Typer
) -> None:
result = cli_runner.invoke(servo_cli, "-n dev.opsani.com/multi-servox-2 show components", catch_exceptions=False)
assert result.exit_code == 0
assert re.search("COMPONENT\\s+SETTINGS\\s+CONNECTOR", result.stdout)
assert re.search("dev.opsani.com/multi-servox-1", result.stdout) is None
assert re.search("dev.opsani.com/multi-servox-2", result.stdout)
assert re.search("main\\s+cpu=3 RangeSetting\\(range=\\[0..10\\], step=1\\)\\s+adjust", result.stdout)
def test_events(
self, cli_runner: CliRunner, servo_cli: Typer
) -> None:
result = cli_runner.invoke(servo_cli, "show events", catch_exceptions=False)
assert result.exit_code == 0
assert re.search("EVENT\\s+CONNECTORS", result.stdout)
assert re.search("dev.opsani.com/multi-servox-1", result.stdout)
assert re.search("dev.opsani.com/multi-servox-2", result.stdout)
def test_events_by_name(
self, cli_runner: CliRunner, servo_cli: Typer
) -> None:
result = cli_runner.invoke(servo_cli, "-n dev.opsani.com/multi-servox-2 show events", catch_exceptions=False)
assert result.exit_code == 0
assert re.search("EVENT\\s+CONNECTORS", result.stdout)
assert re.search("dev.opsani.com/multi-servox-1", result.stdout) is None
assert re.search("dev.opsani.com/multi-servox-2", result.stdout)
def test_metrics(
self, cli_runner: CliRunner, servo_cli: Typer
) -> None:
result = cli_runner.invoke(servo_cli, "show metrics", catch_exceptions=False)
assert result.exit_code == 0
assert re.search("METRIC\\s+UNIT\\s+CONNECTORS", result.stdout)
assert re.search("dev.opsani.com/multi-servox-1", result.stdout)
assert re.search("dev.opsani.com/multi-servox-2", result.stdout)
assert re.search("error_rate\\s+requests_per_minute\\s+\\(rpm\\)\\s+Measure", result.stdout)
assert re.search("throughput\\s+requests_per_minute\\s+\\(rpm\\)\\s+Measure", result.stdout)
def test_metrics_by_name(
self, cli_runner: CliRunner, servo_cli: Typer
) -> None:
result = cli_runner.invoke(servo_cli, "-n dev.opsani.com/multi-servox-2 show metrics", catch_exceptions=False)
assert result.exit_code == 0, f"non-zero exit code. stdout={result.stdout}, stderr={result.stderr}"
assert re.search("METRIC\\s+UNIT\\s+CONNECTORS", result.stdout)
assert re.search("dev.opsani.com/multi-servox-1", result.stdout) is None
assert re.search("dev.opsani.com/multi-servox-2", result.stdout)
assert re.search("error_rate\\s+requests_per_minute\\s+\\(rpm\\)\\s+Measure", result.stdout)
assert re.search("throughput\\s+requests_per_minute\\s+\\(rpm\\)\\s+Measure", result.stdout)
def test_version(cli_runner: CliRunner, servo_cli: Typer) -> None:
result = cli_runner.invoke(servo_cli, "version")
assert result.exit_code == 0
assert f"Servo v{servo.__version__}" in result.stdout
def test_version_no_optimizer(cli_runner: CliRunner, servo_cli: Typer) -> None:
result = cli_runner.invoke(servo_cli, "version", catch_exceptions=False)
assert result.exit_code == 0
assert f"Servo v{servo.__version__}" in result.stdout
def test_config(
cli_runner: CliRunner,
servo_cli: Typer,
vegeta_config_file: Path,
optimizer_env: None,
) -> None:
result = cli_runner.invoke(servo_cli, "config")
assert result.exit_code == 0
assert "connectors:" in result.stdout
def test_config_multiservo(
cli_runner: CliRunner,
servo_cli: Typer,
stub_multiservo_yaml: Path,
) -> None:
result = cli_runner.invoke(servo_cli, "config")
assert result.exit_code == 0
assert "connectors:" in result.stdout
def test_config_multiservo_named(
cli_runner: CliRunner,
servo_cli: Typer,
stub_multiservo_yaml: Path,
) -> None:
result = cli_runner.invoke(servo_cli, "-n dev.opsani.com/multi-servox-2 config")
assert result.exit_code == 0
assert "connectors:" in result.stdout
def test_run_with_empty_config_file(
cli_runner: CliRunner,
servo_cli: Typer,
servo_yaml: Path,
optimizer_env: None,
) -> None:
result = cli_runner.invoke(servo_cli, "config", catch_exceptions=False)
assert result.exit_code == 0, f"RESULT: {result.stderr}"
parsed = yaml.full_load(result.stdout)
assert parsed, f"Expected to a config doc: {optimizer}"
optimizer = parsed['optimizer']
assert optimizer['id'] == 'dev.opsani.com/servox'
assert optimizer['token'] == '123456789'
def test_run_with_malformed_config_file(
cli_runner: CliRunner,
servo_cli: Typer,
servo_yaml: Path,
optimizer_env: None,
) -> None:
servo_yaml.write_text("</\n\n..:989890j\n___*")
with pytest.raises(TypeError) as e:
cli_runner.invoke(servo_cli, "config", catch_exceptions=False)
assert "parsed to an unexpected value of type" in str(e)
def test_config_with_bad_connectors_key(
cli_runner: CliRunner,
servo_cli: Typer,
servo_yaml: Path,
optimizer_env: None,
) -> None:
servo_yaml.write_text("connectors: [invalid]\n")
result = cli_runner.invoke(servo_cli, "config", catch_exceptions=False)
assert "fatal: invalid configuration: no connector found for the identifier \"invalid\"" in result.stderr
def test_config_yaml(
cli_runner: CliRunner,
servo_cli: Typer,
vegeta_config_file: Path,
optimizer_env: None,
) -> None:
result = cli_runner.invoke(servo_cli, "config -f yaml", catch_exceptions=False)
assert result.exit_code == 0
assert "connectors:" in result.stdout
def test_config_yaml_file(
cli_runner: CliRunner,
servo_cli: Typer,
vegeta_config_file: Path,
tmp_path: Path,
optimizer_env: None,
) -> None:
path = tmp_path / "settings.yaml"
result = cli_runner.invoke(servo_cli, f"config -f yaml -o {path}")
assert result.exit_code == 0
assert "connectors:" in path.read_text()
@freeze_time("2020-01-01")
def test_config_configmap_file(
cli_runner: CliRunner,
servo_cli: Typer,
vegeta_config_file: Path,
tmp_path: Path,
optimizer_env: None,
mocker,
) -> None:
mocker.patch.object(Servo, "version", "100.0.0")
mocker.patch.object(VegetaConnector, "version", "100.0.0")
path = tmp_path / "settings.yaml"
result = cli_runner.invoke(servo_cli, f"config -f configmap -o {path}")
assert result.exit_code == 0
# TODO: Fixme -- this should just be optimizer: and token:
assert path.read_text() == (
"---\n"
"apiVersion: v1\n"
"kind: ConfigMap\n"
"metadata:\n"
" name: opsani-servo-config\n"
" labels:\n"
" app.kubernetes.io/name: servo\n"
" app.kubernetes.io/version: 100.0.0\n"
" annotations:\n"
" servo.opsani.com/configured_at: '2020-01-01T00:00:00+00:00'\n"
' servo.opsani.com/connectors: \'[{"name": "vegeta", "type": "Vegeta Connector",\n'
' "description": "Vegeta load testing connector", "version": "100.0.0", "url":\n'
' "https://github.com/opsani/vegeta-connector"}]\'\n'
"data:\n"
" servo.yaml: |\n"
" connectors:\n"
" - vegeta\n"
" vegeta:\n"
" rate: '0'\n"
" target: https://opsani.com/\n"
)
def test_config_json(
cli_runner: CliRunner,
servo_cli: Typer,
vegeta_config_file: Path,
optimizer_env: None,
) -> None:
result = cli_runner.invoke(servo_cli, "config -f json")
assert result.exit_code == 0
settings = json.loads(result.stdout)
assert settings["connectors"] is not None
@pytest.fixture()
def aliased_connector_cli(optimizer_env: None, servo_yaml: Path) -> ServoCLI:
aliased_config = {
"connectors": {
"first": "measure",
"second": "measure",
},
"first": {},
"second": {},
}
servo_yaml.write_text(yaml.dump(aliased_config))
cli = servo.cli.ConnectorCLI(MeasureConnector, name="cli-ext", help="A CLI extension")
@cli.command()
def attack(
context: servo.cli.Context,
):
print(f"active connector is: {context.connector.name}")
return ServoCLI()
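# Usage note (from the tests below): with two aliased MeasureConnector
# instances, connector-scoped commands must disambiguate via -c, e.g.
#   servo cli-ext -c first attack   # prints "active connector is: first"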
def test_aliased_connector_error(cli_runner: CliRunner, aliased_connector_cli: ServoCLI) -> None:
result = cli_runner.invoke(aliased_connector_cli, f"cli-ext attack")
assert (
result.exit_code == 2
), f"Expected status code of 1 but got {result.exit_code} -- stdout: {result.stdout}\nstderr: {result.stderr}"
assert re.search("multiple instances of \"MeasureConnector\" found in servo \"dev.opsani.com/servox\": select one of \\[\'first\', \'second\'\\]", result.stderr)
def test_aliased_connector_resolution(cli_runner: CliRunner, aliased_connector_cli: ServoCLI) -> None:
result = cli_runner.invoke(aliased_connector_cli, f"cli-ext -c first attack")
assert (
result.exit_code == 0
), f"Expected status code of 0 but got {result.exit_code} -- stdout: {result.stdout}\nstderr: {result.stderr}"
assert re.search("active connector is: first", result.stdout)
def test_aliased_connector_invalid_name(cli_runner: CliRunner, aliased_connector_cli: ServoCLI) -> None:
result = cli_runner.invoke(aliased_connector_cli, f"cli-ext -c INVALID attack")
assert (
result.exit_code == 2
), f"Expected status code of 2 but got {result.exit_code} -- stdout: {result.stdout}\nstderr: {result.stderr}"
assert re.search("no connector named \"INVALID\" of type \"MeasureConnector\" found in servo \"dev.opsani.com/servox\": select one of \\[\'first\', \'second\'\\]", result.stderr)
def test_connector_cli_not_active_in_assembly(cli_runner: CliRunner, aliased_connector_cli: ServoCLI) -> None:
result = cli_runner.invoke(aliased_connector_cli, f"vegeta attack")
assert (
result.exit_code == 2
), f"Expected status code of 2 but got {result.exit_code} -- stdout: {result.stdout}\nstderr: {result.stderr}"
assert re.search("no instances of \"VegetaConnector\" are active the in servo \"dev.opsani.com/servox\"", result.stderr)
def test_config_json_file(
cli_runner: CliRunner,
servo_cli: Typer,
vegeta_config_file: Path,
tmp_path: Path,
optimizer_env: None,
) -> None:
path = tmp_path / "settings.json"
result = cli_runner.invoke(servo_cli, f"config -f json -o {path}")
assert result.exit_code == 0
settings = json.loads(path.read_text())
assert settings["connectors"] is not None
def test_config_dict(
cli_runner: CliRunner,
servo_cli: Typer,
vegeta_config_file: Path,
optimizer_env: None,
) -> None:
result = cli_runner.invoke(servo_cli, "config -f dict")
assert result.exit_code == 0
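# The dict format emits a Python literal, so eval() round-trips it here;
# this is safe only because the input comes from the CLI under test.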
settings = eval(result.stdout)
assert settings["connectors"] is not None
def test_config_dict_file(
cli_runner: CliRunner,
servo_cli: Typer,
vegeta_config_file: Path,
tmp_path: Path,
optimizer_env: None,
) -> None:
path = tmp_path / "settings.py"
result = cli_runner.invoke(servo_cli, f"config -f dict -o {path}")
assert result.exit_code == 0, f"failed with output {(result.stdout, result.stderr)}"
settings = eval(path.read_text())
assert settings["connectors"] is not None
def test_schema(cli_runner: CliRunner, servo_cli: Typer, optimizer_env: None) -> None:
result = cli_runner.invoke(servo_cli, "schema", catch_exceptions=False)
assert (
result.exit_code == 0
), f"Expected status code of 0 but got {result.exit_code} -- stdout: {result.stdout}\nstderr: {result.stderr}"
schema = json.loads(result.stdout)
assert schema["title"] == "Servo Configuration Schema"
def test_schema_output_to_file(
cli_runner: CliRunner, servo_cli: Typer, tmp_path: Path, optimizer_env: None
) -> None:
output_path = tmp_path / "schema.json"
result = cli_runner.invoke(servo_cli, f"schema -f json -o {output_path}")
assert result.exit_code == 0
schema = json.loads(output_path.read_text())
assert schema["title"] == "Servo Configuration Schema"
def test_schema_multiservo(cli_runner: CliRunner, servo_cli: Typer, stub_multiservo_yaml: Path) -> None:
result = cli_runner.invoke(servo_cli, "schema", catch_exceptions=False)
assert (
result.exit_code == 1
), f"Expected status code of 1 but got {result.exit_code} -- stdout: {result.stdout}\nstderr: {result.stderr}"
assert re.search("error: schema can only be outputted for a single servo", result.stderr)
def test_schema_multiservo_by_name(cli_runner: CliRunner, servo_cli: Typer, stub_multiservo_yaml: Path) -> None:
result = cli_runner.invoke(servo_cli, "-n dev.opsani.com/multi-servox-2 schema", catch_exceptions=False)
assert (
result.exit_code == 0
), f"Expected status code of 1 but got {result.exit_code} -- stdout: {result.stdout}\nstderr: {result.stderr}"
schema = json.loads(result.stdout)
assert schema["title"] == "Servo Configuration Schema"
def test_schema_multiservo_top_level(cli_runner: CliRunner, servo_cli: Typer, stub_multiservo_yaml: Path) -> None:
result = cli_runner.invoke(servo_cli, "schema --top-level", catch_exceptions=False)
assert (
result.exit_code == 1
), f"Expected status code of 1 but got {result.exit_code} -- stdout: {result.stdout}\nstderr: {result.stderr}"
assert re.search("error: schema can only be outputted for all connectors or a single servo", result.stderr)
def test_schema_multiservo_top_level_by_name(cli_runner: CliRunner, servo_cli: Typer, stub_multiservo_yaml: Path) -> None:
result = cli_runner.invoke(servo_cli, "-n dev.opsani.com/multi-servox-2 schema --top-level", catch_exceptions=False)
assert (
result.exit_code == 0
), f"Expected status code of 1 but got {result.exit_code} -- stdout: {result.stdout}\nstderr: {result.stderr}"
schema = json.loads(result.stdout)
assert schema["title"] == "Servo Schema"
def test_schema_all(
cli_runner: CliRunner, servo_cli: Typer, optimizer_env: None
) -> None:
result = cli_runner.invoke(servo_cli, ["schema", "--all"])
assert result.exit_code == 0
schema = json.loads(result.stdout)
assert schema["title"] == "Servo Configuration Schema"
def test_schema_top_level(
cli_runner: CliRunner, servo_cli: Typer, optimizer_env: None
) -> None:
result = cli_runner.invoke(servo_cli, ["schema", "--top-level"])
assert result.exit_code == 0
schema = json.loads(result.stdout)
assert schema["title"] == "Servo Schema"
def test_schema_all_top_level(
cli_runner: CliRunner, servo_cli: Typer, optimizer_env: None
) -> None:
result = cli_runner.invoke(servo_cli, ["schema", "--top-level", "--all"])
assert result.exit_code == 0
schema = json.loads(result.stdout)
assert schema["title"] == "Servo Schema"
def test_schema_top_level_dict(
servo_cli: Typer, cli_runner: CliRunner, optimizer_env: None
) -> None:
result = cli_runner.invoke(servo_cli, "schema -f dict --top-level")
assert result.exit_code == 0, f"stdout: {result.stdout}\n\nstderr: {result.stderr}"
schema = eval(result.stdout)
assert schema["title"] == "Servo Schema"
def test_schema_top_level_dict_file_output(
servo_cli: Typer, cli_runner: CliRunner, tmp_path: Path, optimizer_env: None
) -> None:
path = tmp_path / "output.dict"
result = cli_runner.invoke(servo_cli, f"schema -f dict --top-level -o {path}", catch_exceptions=False)
assert result.exit_code == 0, f"failed with non-zero exit code: stderr={result.stderr}, stdout={result.stdout}"
schema = eval(path.read_text())
assert schema["title"] == "Servo Schema"
class TestCommands:
@pytest.fixture(autouse=True)
def set_defaults_via_env(self) -> None:
os.environ["OPSANI_OPTIMIZER"] = "dev.opsani.com/test-app"
os.environ["OPSANI_TOKEN"] = "123456789"
def test_schema_text(self, servo_cli: Typer, cli_runner: CliRunner) -> None:
result = cli_runner.invoke(servo_cli, "schema -f text")
assert result.exit_code == 1
assert "not yet implemented" in result.stderr
def test_schema_html(self, servo_cli: Typer, cli_runner: CliRunner) -> None:
result = cli_runner.invoke(servo_cli, "schema -f html", catch_exceptions=False)
assert result.exit_code == 1
assert "not yet implemented" in result.stderr
def test_schema_dict(self, servo_cli: Typer, cli_runner: CliRunner) -> None:
result = cli_runner.invoke(servo_cli, "schema -f dict")
assert result.exit_code == 0
assert "'title': 'Servo Configuration Schema'" in result.stdout
def test_schema_dict_file_output(
self, servo_cli: Typer, cli_runner: CliRunner, tmp_path: Path
) -> None:
path = tmp_path / "output.dict"
result = cli_runner.invoke(servo_cli, f"schema -f dict -o {path}")
assert result.exit_code == 0
content = path.read_text()
assert "'title': 'Servo Configuration Schema'" in content
def test_validate(self, cli_runner: CliRunner, servo_cli: Typer, optimizer_env: None, stub_servo_yaml: Path) -> None:
result = cli_runner.invoke(servo_cli, f"validate -f {stub_servo_yaml}", catch_exceptions=False)
assert result.exit_code == 0, f"non-zero exit status (result.stdout={result.stdout}, result.stderr={result.stderr})"
assert re.match(f"√ Valid configuration in {stub_servo_yaml}", result.stdout)
def test_validate_multiservo(self, cli_runner: CliRunner, servo_cli: Typer, stub_multiservo_yaml: Path) -> None:
result = cli_runner.invoke(servo_cli, f"validate -f {stub_multiservo_yaml}", catch_exceptions=False)
assert result.exit_code == 0, f"non-zero exit status (result.stdout={result.stdout}, result.stderr={result.stderr})"
assert re.match(f"√ Valid configuration in {stub_multiservo_yaml}", result.stdout)
def test_generate_with_name(
self, cli_runner: CliRunner, servo_cli: Typer
) -> None:
result = cli_runner.invoke(
servo_cli, "generate --name foo -f servo.yaml measure", input="y\n"
)
assert result.exit_code == 0
assert "already exists. Overwrite it?" in result.stdout
content = yaml.full_load(Path("servo.yaml").read_text())
assert content == {"connectors": ["measure"], "measure": {}, "name": "foo"}
def test_generate_with_append(
self, cli_runner: CliRunner, servo_cli: Typer, stub_servo_yaml: Path
) -> None:
result = cli_runner.invoke(
servo_cli, "generate --name foo -f servo.yaml --append measure", input="y\n"
)
assert result.exit_code == 0
content = list(yaml.full_load_all(Path("servo.yaml").read_text()))
assert content == [
{
'adjust': {},
'connectors': [
'measure',
'adjust',
],
'measure': {
'description': None,
},
},
{
'connectors': [
'measure',
],
'measure': {},
'name': 'foo',
},
]
def test_generate_prompts_to_overwrite(
self, cli_runner: CliRunner, servo_cli: Typer, servo_yaml: Path
) -> None:
result = cli_runner.invoke(
servo_cli, "generate -f servo.yaml measure", input="y\n"
)
assert result.exit_code == 0
assert "already exists. Overwrite it?" in result.stdout
content = yaml.full_load(servo_yaml.read_text())
assert content == {"connectors": ["measure"], "measure": {}}
def test_generate_prompts_to_overwrite_declined(
self, cli_runner: CliRunner, servo_cli: Typer, servo_yaml: Path
) -> None:
result = cli_runner.invoke(
servo_cli, "generate -f servo.yaml measure", input="N\n"
)
assert result.exit_code == 1
assert "already exists. Overwrite it?" in result.stdout
content = yaml.full_load(servo_yaml.read_text())
assert content is None
def test_generate_prompts_to_overwrite_forced(
self, cli_runner: CliRunner, servo_cli: Typer, servo_yaml: Path
) -> None:
result = cli_runner.invoke(
servo_cli, "generate -f servo.yaml --force measure", input="y\n"
)
assert result.exit_code == 0
content = yaml.full_load(servo_yaml.read_text())
assert content == {"connectors": ["measure"], "measure": {}}
def test_generate_connector_without_settings(
self,
cli_runner: CliRunner,
servo_cli: Typer,
optimizer_env: None,
stub_servo_yaml: Path,
) -> None:
pass
## CLI Lifecycle tests
def test_loading_cli_without_specific_connectors_activates_all_optionally(
self, cli_runner: CliRunner, servo_cli: Typer, servo_yaml: Path
) -> None:
# temp config file, no connectors key
pass
def test_loading_cli_with_specific_connectors_only_activates_required(
self, cli_runner: CliRunner, servo_cli: Typer, servo_yaml: Path
) -> None:
pass
def test_loading_cli_with_empty_connectors_list_disables_all(
self, cli_runner: CliRunner, servo_cli: Typer, servo_yaml: Path
) -> None:
servo_yaml.write_text(yaml.dump({"connectors": []}))
cli_runner.invoke(servo_cli, "info")
class TestCLIFoundation:
class TheTestCLI(CLI):
pass
@pytest.fixture()
def cli(self) -> CLI:
return TestCLIFoundation.TheTestCLI(help="This is just a test.", callback=None)
def test_context_class_in_commands(self, cli: CLI, cli_runner: CliRunner) -> None:
@cli.command()
def test(context: Context) -> None:
assert context is not None
assert isinstance(context, Context)
assert context.servo is None
assert context.assembly is None
assert context.connector is None
result = cli_runner.invoke(cli, "", catch_exceptions=False)
assert result.exit_code == 0
def test_context_class_in_subcommand_groups(
self, cli: CLI, cli_runner: CliRunner
) -> None:
sub_cli = TestCLIFoundation.TheTestCLI(name="another", callback=None)
@sub_cli.command()
def test(context: Context) -> None:
assert context is not None
assert isinstance(context, Context)
assert context.servo is None
assert context.assembly is None
assert context.connector is None
cli.add_cli(sub_cli)
result = cli_runner.invoke(cli, "another test", catch_exceptions=False)
assert result.exit_code == 0
def test_context_inheritance(self, cli: CLI, cli_runner: CliRunner) -> None:
sub_cli = TestCLIFoundation.TheTestCLI(name="another", callback=None)
@cli.callback()
def touch_context(context: Context) -> None:
context.obj = 31337
@sub_cli.command()
def test(context: Context) -> None:
assert context is not None
assert context.obj == 31337
cli.add_cli(sub_cli)
result = cli_runner.invoke(cli, "another test", catch_exceptions=False)
assert result.exit_code == 0
def test_that_servo_cli_commands_are_explicitly_ordered(
self, cli: CLI, cli_runner: CliRunner
) -> None:
servo_cli = ServoCLI(name="servo", callback=None)
# Add commands in an explicitly non-lexical sort order
@servo_cli.command()
def zzzz(context: Context) -> None:
pass
@servo_cli.command()
def aaaa(context: Context) -> None:
pass
@servo_cli.command()
def mmmm(context: Context) -> None:
pass
result = cli_runner.invoke(servo_cli, "--help", catch_exceptions=False)
assert result.exit_code == 0
assert (
re.search(r"zzzz\n.*aaaa\n.*mmmm\n", result.stdout, flags=re.MULTILINE)
is not None
)
def test_command_name_for_nested_connectors() -> None:
from servo.utilities import strings
assert strings.commandify("fake") == "fake"
assert strings.commandify("another_fake") == "another-fake"
def test_ordering_of_ops_commands(servo_cli: CLI, cli_runner: CliRunner) -> None:
result = cli_runner.invoke(servo_cli, "--help", catch_exceptions=False)
assert result.exit_code == 0
assert (
re.search(
r".*run.*\n.*check.*\n.*describe.*\n", result.stdout, flags=re.MULTILINE
)
is not None
)
def test_ordering_of_config_commands(servo_cli: CLI, cli_runner: CliRunner) -> None:
result = cli_runner.invoke(servo_cli, "--help", catch_exceptions=False)
assert result.exit_code == 0
assert (
re.search(
r".*settings.*\n.*schema.*\n.*validate.*\n.*generate.*",
result.stdout,
flags=re.MULTILINE,
)
is not None
)
def test_init_from_scratch(servo_cli: CLI, cli_runner: CliRunner) -> None:
result = cli_runner.invoke(
servo_cli,
"init",
catch_exceptions=False,
input="dev.opsani.com/servox\n123456789\nn\ny\n",
)
assert result.exit_code == 0
dotenv = Path(".env")
assert (
dotenv.read_text()
== "OPSANI_OPTIMIZER=dev.opsani.com/servox\nOPSANI_TOKEN=123456789\nSERVO_LOG_LEVEL=DEBUG\n"
)
servo_yaml = Path("servo.yaml")
assert servo_yaml.read_text()
def test_init_existing(servo_cli: CLI, cli_runner: CliRunner) -> None:
pass
# TODO: test setting section via initializer, add_cli
# TODO: section settable on CLI class, via @command(), and via add_cli()
# TODO: test passing callback as argument to command, via initializer for root callbacks
# TODO: Test passing of correct context
# TODO: Test trying to generate against a class that doesn't have settings (should be a warning instead of error!)
# TODO: init with multi-servos, init single with CLI options, init single option in the config file
# TODO: test overloading/cascading URL and base URL in multi-servo
def test_list(
cli_runner: CliRunner, servo_cli: Typer, optimizer_env: None, stub_servo_yaml: Path
) -> None:
result = cli_runner.invoke(servo_cli, "list", catch_exceptions=False)
assert result.exit_code == 0
assert re.match("NAME\\s+OPTIMIZER\\s+DESCRIPTION", result.stdout)
assert re.search("dev.opsani.com/servox\\s+dev.opsani.com/servox\\s+Continuous Optimization Orchestrator", result.stdout)
def test_list_multiservo(
cli_runner: CliRunner, servo_cli: Typer, stub_multiservo_yaml: Path
) -> None:
result = cli_runner.invoke(servo_cli, "list", catch_exceptions=False)
assert result.exit_code == 0, f"Non-zero exit status code: stdout={result.stdout}, stderr={result.stderr}"
assert re.match("NAME\\s+OPTIMIZER\\s+DESCRIPTION", result.stdout)
assert re.search("dev.opsani.com/multi-servox-1\\s+dev.opsani.com/multi-servox-1\\s+Continuous Optimization Orchestrator", result.stdout)
assert re.search("dev.opsani.com/multi-servox-2\\s+dev.opsani.com/multi-servox-2\\s+Continuous Optimization Orchestrator", result.stdout)
def test_measure(
cli_runner: CliRunner, servo_cli: Typer, optimizer_env: None, stub_servo_yaml: Path
) -> None:
result = cli_runner.invoke(servo_cli, "measure", catch_exceptions=False)
assert result.exit_code == 0
assert re.match("METRIC\\s+UNIT\\s+READINGS", result.stdout)
assert re.search("Some Metric\\s+rpm\\s+31337.00 \\(just now\\)", result.stdout)
def test_measure_by_connectors_arg(
cli_runner: CliRunner, servo_cli: Typer, optimizer_env: None, stub_servo_yaml: Path
) -> None:
result = cli_runner.invoke(servo_cli, "measure --connectors measure", catch_exceptions=False)
assert result.exit_code == 0
assert re.match("METRIC\\s+UNIT\\s+READINGS", result.stdout)
assert re.search("Some Metric\\s+rpm\\s+31337.00 \\(just now\\)", result.stdout)
def test_measure_multiservo(
cli_runner: CliRunner, servo_cli: Typer, stub_multiservo_yaml: Path
) -> None:
result = cli_runner.invoke(servo_cli, "measure", catch_exceptions=False)
assert result.exit_code == 0, f"Non-zero exit status code: stdout={result.stdout}, stderr={result.stderr}"
assert re.search("dev.opsani.com/multi-servox-1", result.stdout)
assert re.search("dev.opsani.com/multi-servox-2", result.stdout)
assert re.search("METRIC\\s+UNIT\\s+READINGS", result.stdout)
assert re.search("Some Metric\\s+rpm\\s+31337.00 \\(just now\\)", result.stdout)
def test_measure_multiservo_named(
cli_runner: CliRunner, servo_cli: Typer, stub_multiservo_yaml: Path
) -> None:
result = cli_runner.invoke(servo_cli, "-n dev.opsani.com/multi-servox-2 measure", catch_exceptions=False)
assert result.exit_code == 0, f"Non-zero exit status code: stdout={result.stdout}, stderr={result.stderr}"
assert re.search("dev.opsani.com/multi-servox-1", result.stdout) is None
assert re.search("dev.opsani.com/multi-servox-2", result.stdout)
assert re.search("METRIC\\s+UNIT\\s+READINGS", result.stdout)
assert re.search("Some Metric\\s+rpm\\s+31337.00 \\(just now\\)", result.stdout)
def test_adjust_incomplete_identifier(
cli_runner: CliRunner, servo_cli: Typer, optimizer_env: None, stub_servo_yaml: Path
) -> None:
result = cli_runner.invoke(servo_cli, "adjust setting=value", catch_exceptions=False)
assert result.exit_code == 2
assert re.search("Error: Invalid value: unable to parse setting descriptor 'setting=value'", result.stderr)
def test_adjust(
cli_runner: CliRunner, servo_cli: Typer, optimizer_env: None, stub_servo_yaml: Path
) -> None:
result = cli_runner.invoke(servo_cli, "adjust component.setting=value", catch_exceptions=False)
assert result.exit_code == 0
assert re.match("CONNECTOR\\s+SETTINGS", result.stdout)
assert re.search("adjust\\s+main.cpu=3", result.stdout)
def test_adjust_multiservo(
cli_runner: CliRunner, servo_cli: Typer, stub_multiservo_yaml: Path
) -> None:
result = cli_runner.invoke(servo_cli, "adjust component.setting=value", catch_exceptions=False)
assert result.exit_code == 0, f"failed with non-zero exit code (stdout={result.stdout}, stderr={result.stderr})"
assert re.search("CONNECTOR\\s+SETTINGS", result.stdout)
assert re.search("adjust\\s+main.cpu=3", result.stdout)
assert re.search("dev.opsani.com/multi-servox-1", result.stdout)
assert re.search("dev.opsani.com/multi-servox-2", result.stdout)
def test_adjust_multiservo_named(
cli_runner: CliRunner, servo_cli: Typer, stub_multiservo_yaml: Path
) -> None:
result = cli_runner.invoke(servo_cli, "-n dev.opsani.com/multi-servox-2 adjust component.setting=value", catch_exceptions=False)
assert result.exit_code == 0, f"failed with non-zero exit code (stdout={result.stdout}, stderr={result.stderr})"
assert re.search("CONNECTOR\\s+SETTINGS", result.stdout)
assert re.search("adjust\\s+main.cpu=3", result.stdout)
assert re.search("dev.opsani.com/multi-servox-1", result.stdout) is None
assert re.search("dev.opsani.com/multi-servox-2", result.stdout)
def test_describe(
cli_runner: CliRunner, servo_cli: Typer, optimizer_env: None, stub_servo_yaml: Path
) -> None:
result = cli_runner.invoke(servo_cli, "describe", catch_exceptions=False)
assert result.exit_code == 0
assert re.search("CONNECTOR\\s+COMPONENTS\\s+METRICS", result.stdout)
assert re.search('measure\\s+throughput \\(rpm\\)', result.stdout)
assert re.search('\\s+error_rate \\(rpm\\)', result.stdout)
assert re.search("adjust\\s+main.cpu=3", result.stdout)
def test_describe_connector(
cli_runner: CliRunner, servo_cli: Typer, optimizer_env: None, stub_servo_yaml: Path
) -> None:
result = cli_runner.invoke(servo_cli, "describe adjust", catch_exceptions=False)
assert result.exit_code == 0, f"failed with non-zero exit code (stdout={result.stdout}, stderr={result.stderr})"
assert re.search("CONNECTOR\\s+COMPONENTS\\s+METRICS", result.stdout)
assert re.search('measure\\s+throughput \\(rpm\\)', result.stdout) is None
assert re.search('\\s+error_rate \\(rpm\\)', result.stdout) is None
assert re.search("adjust\\s+main.cpu=3", result.stdout)
def test_describe_multiservo(
cli_runner: CliRunner, servo_cli: Typer, stub_multiservo_yaml: Path
) -> None:
result = cli_runner.invoke(servo_cli, "describe", catch_exceptions=False)
assert result.exit_code == 0, f"failed with non-zero exit code (stdout={result.stdout}, stderr={result.stderr})"
assert re.search("CONNECTOR\\s+COMPONENTS\\s+METRICS", result.stdout)
assert re.search('measure\\s+throughput \\(rpm\\)', result.stdout)
assert re.search('\\s+error_rate \\(rpm\\)', result.stdout)
assert re.search("adjust\\s+main.cpu=3", result.stdout)
assert re.search("dev.opsani.com/multi-servox-1", result.stdout)
assert re.search("dev.opsani.com/multi-servox-2", result.stdout)
def test_describe_multiservo_named(
cli_runner: CliRunner, servo_cli: Typer, stub_multiservo_yaml: Path
) -> None:
result = cli_runner.invoke(servo_cli, "-n dev.opsani.com/multi-servox-2 describe", catch_exceptions=False)
assert result.exit_code == 0, f"failed with non-zero exit code (stdout={result.stdout}, stderr={result.stderr})"
assert re.search("CONNECTOR\\s+COMPONENTS\\s+METRICS", result.stdout)
assert re.search('measure\\s+throughput \\(rpm\\)', result.stdout)
assert re.search('\\s+error_rate \\(rpm\\)', result.stdout)
assert re.search("adjust\\s+main.cpu=3", result.stdout)
assert re.search("dev.opsani.com/multi-servox-1", result.stdout) is None
assert re.search("dev.opsani.com/multi-servox-2", result.stdout)
| 41.466028 | 182 | 0.675084 | 6,386 | 47,603 | 4.85938 | 0.059505 | 0.055974 | 0.046468 | 0.061582 | 0.825503 | 0.802816 | 0.784158 | 0.760956 | 0.732985 | 0.717485 | 0 | 0.008215 | 0.197004 | 47,603 | 1,147 | 183 | 41.50218 | 0.803453 | 0.016764 | 0 | 0.583069 | 0 | 0.02963 | 0.246061 | 0.097646 | 0 | 0 | 0 | 0.000872 | 0.304762 | 1 | 0.12381 | false | 0.013757 | 0.017989 | 0.004233 | 0.154497 | 0.001058 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
226110110d5d9366f501709681c7269039d7f107 | 14,935 | py | Python | sawtooth/consent_common/transaction.py | AlexZhovnuvaty/patient-consent | f14444aa3d48c8c23d162487d03be0459de46caa | [
"Apache-2.0"
] | 6 | 2020-01-08T07:55:51.000Z | 2021-09-19T10:45:57.000Z | sawtooth/consent_common/transaction.py | AlexZhovnuvaty/patient-consent | f14444aa3d48c8c23d162487d03be0459de46caa | [
"Apache-2.0"
] | 2 | 2019-12-12T13:39:51.000Z | 2020-02-20T14:01:28.000Z | sawtooth/consent_common/transaction.py | AlexZhovnuvaty/patient-consent | f14444aa3d48c8c23d162487d03be0459de46caa | [
"Apache-2.0"
] | 12 | 2019-10-09T13:35:14.000Z | 2020-09-10T07:40:23.000Z | import hashlib
import random
import logging
from sawtooth_sdk.protobuf.batch_pb2 import BatchHeader, Batch
from sawtooth_sdk.protobuf.transaction_pb2 import Transaction, TransactionHeader
from . import helper
from .protobuf.consent_payload_pb2 import Permission, ConsentTransactionPayload, Client, ActionOnAccess
logging.basicConfig(level=logging.DEBUG)
LOGGER = logging.getLogger(__name__)
def _make_transaction(payload, inputs, outputs, txn_signer, batch_signer):
txn_header_bytes, signature = _transaction_header(txn_signer, batch_signer, inputs, outputs, payload)
txn = Transaction(
header=txn_header_bytes,
header_signature=signature,
payload=payload.SerializeToString()
)
return txn
def make_batch_and_id(transactions, batch_signer):
batch_header_bytes, signature = _batch_header(batch_signer, transactions)
batch = Batch(
header=batch_header_bytes,
header_signature=signature,
transactions=transactions
)
return batch, batch.header_signature
def _make_header_and_batch(payload, inputs, outputs, txn_signer, batch_signer):
txn_header_bytes, signature = _transaction_header(txn_signer, batch_signer, inputs, outputs, payload)
txn = Transaction(
header=txn_header_bytes,
header_signature=signature,
payload=payload.SerializeToString()
)
transactions = [txn]
batch_header_bytes, signature = _batch_header(batch_signer, transactions)
batch = Batch(
header=batch_header_bytes,
header_signature=signature,
transactions=transactions
)
return batch, batch.header_signature
def _transaction_header(txn_signer, batch_signer, inputs, outputs, payload):
txn_header_bytes = TransactionHeader(
family_name=helper.TP_FAMILYNAME,
family_version=helper.TP_VERSION,
inputs=inputs,
outputs=outputs,
signer_public_key=txn_signer.get_public_key().as_hex(),
# In this example, we're signing the batch with the same private key,
# but the batch can be signed by another party, in which case, the
# public key will need to be associated with that key.
batcher_public_key=batch_signer.get_public_key().as_hex(),
# In this example, there are no dependencies. This list should include
# any previous transaction header signatures that must be applied for
# this transaction to successfully commit.
# For example,
# dependencies=['540a6803971d1880ec73a96cb97815a95d374cbad5d865925e5aa0432fcf1931539afe10310c122c5eaae15df61236079abbf4f258889359c4d175516934484a'],
dependencies=[],
nonce=random.random().hex().encode(),
payload_sha512=hashlib.sha512(payload.SerializeToString()).hexdigest()
).SerializeToString()
signature = txn_signer.sign(txn_header_bytes)
return txn_header_bytes, signature
def _batch_header(batch_signer, transactions):
batch_header_bytes = BatchHeader(
signer_public_key=batch_signer.get_public_key().as_hex(),
transaction_ids=[txn.header_signature for txn in transactions],
).SerializeToString()
signature = batch_signer.sign(batch_header_bytes)
return batch_header_bytes, signature
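# Example (sketch): how these helpers compose to submit a payload. `signer` is
# assumed to be a sawtooth_signing Signer; submission details are hypothetical.
#   txn = _make_transaction(payload, inputs, outputs, signer, signer)
#   batch, batch_id = make_batch_and_id([txn], signer)
#   # wrap `batch` in a BatchList and POST it to the validator's REST API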
def create_hospital_client(txn_signer, batch_signer):
permissions = [Permission(type=Permission.READ_HOSPITAL),
Permission(type=Permission.READ_OWN_HOSPITAL),
Permission(type=Permission.READ_PATIENT_DATA),
Permission(type=Permission.READ_OWN_PATIENT),
Permission(type=Permission.READ_INVESTIGATOR),
Permission(type=Permission.GRANT_INVESTIGATOR_ACCESS),
Permission(type=Permission.REVOKE_INVESTIGATOR_ACCESS),
Permission(type=Permission.WRITE_PATIENT_DATA)
]
return create_client(txn_signer, batch_signer, permissions)
def create_patient_client(txn_signer, batch_signer):
permissions = [Permission(type=Permission.READ_HOSPITAL),
Permission(type=Permission.READ_PATIENT),
Permission(type=Permission.READ_OWN_PATIENT),
Permission(type=Permission.GRANT_READ_DATA_ACCESS),
Permission(type=Permission.REVOKE_READ_DATA_ACCESS),
Permission(type=Permission.READ_PATIENT_DATA),
Permission(type=Permission.READ_OWN_PATIENT_DATA),
Permission(type=Permission.GRANT_WRITE_DATA_ACCESS),
Permission(type=Permission.REVOKE_WRITE_DATA_ACCESS),
Permission(type=Permission.READ_INFORM_CONSENT_REQUEST),
Permission(type=Permission.READ_SIGNED_INFORM_CONSENT),
Permission(type=Permission.SIGN_INFORM_CONSENT),
Permission(type=Permission.DECLINE_INFORM_CONSENT)
]
return create_client(txn_signer, batch_signer, permissions)
def create_investigator_client(txn_signer, batch_signer):
permissions = [
Permission(type=Permission.READ_HOSPITAL),
Permission(type=Permission.READ_OWN_INVESTIGATOR),
Permission(type=Permission.REQUEST_INFORM_CONSENT),
Permission(type=Permission.READ_INFORM_CONSENT_REQUEST),
Permission(type=Permission.READ_SIGNED_INFORM_CONSENT),
Permission(type=Permission.READ_PATIENT_DATA),
Permission(type=Permission.READ_PATIENT),
Permission(type=Permission.IMPORT_TRIAL_DATA),
Permission(type=Permission.READ_TRIAL_DATA),
Permission(type=Permission.UPDATE_TRIAL_DATA)
]
return create_client(txn_signer, batch_signer, permissions)
def create_sponsor_client(txn_signer, batch_signer):
permissions = [
# Permission(type=Permission.READ_LAB),
# Permission(type=Permission.READ_OWN_LAB),
# Permission(type=Permission.READ_LAB_TEST)
]
return create_client(txn_signer, batch_signer, permissions)
# def create_investigator_client(txn_signer, batch_signer):
# permissions = [Permission(type=Permission.READ_OWN_INVESTIGATOR),
# Permission(type=Permission.READ_FORMATTED_EHR)
# ]
# return create_client(txn_signer, batch_signer, permissions)
def create_client(txn_signer, batch_signer, permissions):
client_pkey = txn_signer.get_public_key().as_hex()
LOGGER.debug('client_pkey: ' + str(client_pkey))
inputs = outputs = helper.make_client_address(public_key=client_pkey)
LOGGER.debug('inputs: ' + str(inputs))
client = Client(
public_key=client_pkey,
permissions=permissions)
payload = ConsentTransactionPayload(
payload_type=ConsentTransactionPayload.ADD_CLIENT,
create_client=client)
LOGGER.debug('payload: ' + str(payload))
return _make_transaction(
payload=payload,
inputs=[inputs],
outputs=[outputs],
txn_signer=txn_signer,
batch_signer=batch_signer)
# def grant_data_processing(txn_signer, batch_signer, dest_pkey):
# patient_pkey = txn_signer.get_public_key().as_hex()
# permission_hex = helper.make_data_processing_access_address(dest_pkey=dest_pkey, src_pkey=patient_pkey)
#
# access = ActionOnAccess(
# dest_pkey=dest_pkey,
# src_pkey=patient_pkey
# )
#
# payload = ConsentTransactionPayload(
# payload_type=ConsentTransactionPayload.GRANT_DATA_PROCESSING_ACCESS,
# grant_data_processing_access=access)
#
# return _make_transaction(
# payload=payload,
# inputs=[permission_hex],
# outputs=[permission_hex],
# txn_signer=txn_signer,
# batch_signer=batch_signer)
#
#
# def revoke_data_processing(txn_signer, batch_signer, dest_pkey):
# patient_pkey = txn_signer.get_public_key().as_hex()
# permission_hex = helper.make_data_processing_access_address(dest_pkey=dest_pkey, src_pkey=patient_pkey)
#
# access = ActionOnAccess(
# dest_pkey=dest_pkey,
# src_pkey=patient_pkey
# )
#
# payload = ConsentTransactionPayload(
# payload_type=ConsentTransactionPayload.REVOKE_DATA_PROCESSING_ACCESS,
# revoke_data_processing_access=access)
#
# return _make_transaction(
# payload=payload,
# inputs=[permission_hex],
# outputs=[permission_hex],
# txn_signer=txn_signer,
# batch_signer=batch_signer)
# def grant_investigator_access(txn_signer, batch_signer, dest_pkey):
# hospital_pkey = txn_signer.get_public_key().as_hex()
# permission_hex = helper.make_investigator_access_address(dest_pkey=dest_pkey, src_pkey=hospital_pkey)
#
# access = ActionOnAccess(
# dest_pkey=dest_pkey,
# src_pkey=hospital_pkey
# )
#
# payload = ConsentTransactionPayload(
# payload_type=ConsentTransactionPayload.GRANT_INVESTIGATOR_ACCESS,
# grant_investigator_access=access)
#
# return _make_transaction(
# payload=payload,
# inputs=[permission_hex],
# outputs=[permission_hex],
# txn_signer=txn_signer,
# batch_signer=batch_signer)
#
#
# def revoke_investigator_access(txn_signer, batch_signer, dest_pkey):
# hospital_pkey = txn_signer.get_public_key().as_hex()
# permission_hex = helper.make_investigator_access_address(dest_pkey=dest_pkey, src_pkey=hospital_pkey)
#
# access = ActionOnAccess(
# dest_pkey=dest_pkey,
# src_pkey=hospital_pkey
# )
#
# payload = ConsentTransactionPayload(
# payload_type=ConsentTransactionPayload.REVOKE_INVESTIGATOR_ACCESS,
# revoke_investigator_access=access)
#
# return _make_transaction(
# payload=payload,
# inputs=[permission_hex],
# outputs=[permission_hex],
# txn_signer=txn_signer,
# batch_signer=batch_signer)
def request_inform_document_consent(txn_signer, batch_signer, patient_pkey):
investigator_pkey = txn_signer.get_public_key().as_hex()
permission_hex = \
helper.make_request_inform_document_consent_address(dest_pkey=investigator_pkey, src_pkey=patient_pkey)
permission_vice_versa_hex = \
helper.make_request_inform_document_consent_address(dest_pkey=patient_pkey, src_pkey=investigator_pkey)
access = ActionOnAccess(
dest_pkey=investigator_pkey,
src_pkey=patient_pkey
)
payload = ConsentTransactionPayload(
payload_type=ConsentTransactionPayload.REQUEST_INFORM_CONSENT,
request_inform_document_consent=access)
return _make_transaction(
payload=payload,
inputs=[permission_hex, permission_vice_versa_hex],
outputs=[permission_hex, permission_vice_versa_hex],
txn_signer=txn_signer,
batch_signer=batch_signer)
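# NOTE: the request is stored under both address orderings
# (investigator->patient and patient->investigator), presumably so either
# party can locate the pending consent from its own public key.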
def sign_inform_document_consent(txn_signer, batch_signer, investigator_pkey):
patient_pkey = txn_signer.get_public_key().as_hex()
request_inform_consent_permission_hex = \
helper.make_request_inform_document_consent_address(dest_pkey=investigator_pkey, src_pkey=patient_pkey)
request_inform_consent_permission_vice_versa_hex = \
helper.make_request_inform_document_consent_address(dest_pkey=patient_pkey, src_pkey=investigator_pkey)
sign_inform_consent_permission_hex = \
helper.make_sign_inform_document_consent_address(dest_pkey=investigator_pkey, src_pkey=patient_pkey)
access = ActionOnAccess(
dest_pkey=investigator_pkey,
src_pkey=patient_pkey
)
payload = ConsentTransactionPayload(
payload_type=ConsentTransactionPayload.SIGN_INFORM_CONSENT,
sign_inform_document_consent=access)
return _make_transaction(
payload=payload,
inputs=[request_inform_consent_permission_hex, request_inform_consent_permission_vice_versa_hex,
sign_inform_consent_permission_hex],
outputs=[request_inform_consent_permission_hex, request_inform_consent_permission_vice_versa_hex,
sign_inform_consent_permission_hex],
txn_signer=txn_signer,
batch_signer=batch_signer)
def decline_inform_consent(txn_signer, batch_signer, investigator_pkey):
patient_pkey = txn_signer.get_public_key().as_hex()
request_inform_consent_permission_hex = \
helper.make_request_inform_document_consent_address(dest_pkey=investigator_pkey, src_pkey=patient_pkey)
request_inform_consent_permission_vice_versa_hex = \
helper.make_request_inform_document_consent_address(dest_pkey=patient_pkey, src_pkey=investigator_pkey)
sign_inform_consent_permission_hex = \
helper.make_sign_inform_document_consent_address(dest_pkey=investigator_pkey, src_pkey=patient_pkey)
access = ActionOnAccess(
dest_pkey=investigator_pkey,
src_pkey=patient_pkey
)
payload = ConsentTransactionPayload(
payload_type=ConsentTransactionPayload.DECLINE_INFORM_CONSENT,
decline_inform_consent=access)
return _make_transaction(
payload=payload,
inputs=[request_inform_consent_permission_hex, request_inform_consent_permission_vice_versa_hex,
sign_inform_consent_permission_hex],
outputs=[request_inform_consent_permission_hex, request_inform_consent_permission_vice_versa_hex,
sign_inform_consent_permission_hex],
txn_signer=txn_signer,
batch_signer=batch_signer)
# def grant_transfer_ehr_permission(txn_signer, batch_signer, dest_pkey):
# hospital_pkey = txn_signer.get_public_key().as_hex()
# consent_hex = helper.make_consent_share_shared_ehr_address(dest_pkey=dest_pkey, src_pkey=hospital_pkey)
#
# access = ActionOnAccess(
# dest_pkey=dest_pkey,
# src_pkey=hospital_pkey
# )
#
# payload = ConsentTransactionPayload(
# payload_type=ConsentTransactionPayload.GRANT_SHARE_SHARED_EHR_ACCESS,
# grant_share_shared_ehr_access=access)
#
# return _make_transaction(
# payload=payload,
# inputs=[consent_hex],
# outputs=[consent_hex],
# txn_signer=txn_signer,
# batch_signer=batch_signer)
#
#
# def revoke_transfer_ehr_permission(txn_signer, batch_signer, dest_pkey):
# hospital_pkey = txn_signer.get_public_key().as_hex()
# consent_hex = helper.make_consent_share_shared_ehr_address(dest_pkey=dest_pkey, src_pkey=hospital_pkey)
#
# access = ActionOnAccess(
# dest_pkey=dest_pkey,
# src_pkey=hospital_pkey
# )
#
# payload = ConsentTransactionPayload(
# payload_type=ConsentTransactionPayload.REVOKE_SHARE_SHARED_EHR_ACCESS,
# revoke_share_shared_ehr_access=access)
#
# return _make_transaction(
# payload=payload,
# inputs=[consent_hex],
# outputs=[consent_hex],
# txn_signer=txn_signer,
# batch_signer=batch_signer)
| 38.196931 | 156 | 0.728021 | 1,640 | 14,935 | 6.189024 | 0.080488 | 0.050542 | 0.075369 | 0.068966 | 0.804926 | 0.763941 | 0.737044 | 0.726601 | 0.719901 | 0.697734 | 0 | 0.008148 | 0.19471 | 14,935 | 390 | 157 | 38.294872 | 0.835786 | 0.337931 | 0 | 0.507937 | 0 | 0 | 0.003077 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.068783 | false | 0 | 0.042328 | 0 | 0.179894 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
226a803303823a3fd96b0fc863b179b270569c74 | 2,450 | py | Python | logger/mail.py | Sandr0x00/price-logger | 5044e43204ac4e5e1e3b4685e29436ce16e3fd02 | ["MIT"] | null | null | null | logger/mail.py | Sandr0x00/price-logger | 5044e43204ac4e5e1e3b4685e29436ce16e3fd02 | ["MIT"] | 42 | 2019-02-08T18:18:23.000Z | 2022-03-11T23:43:45.000Z | logger/mail.py | Sandr0x00/price-logger | 5044e43204ac4e5e1e3b4685e29436ce16e3fd02 | ["MIT"] | null | null | null |

import smtplib
import ssl
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart
import re
import time


def send_mail(mail_config, item, price):
    # Skip silently when no mail configuration is provided.
    if not mail_config or mail_config == "":
        return False

    # Create a secure SSL context
    context = ssl.create_default_context()

    with smtplib.SMTP_SSL(mail_config['smtp_url'], mail_config['smtp_port'], context=context) as server:
        message = MIMEMultipart("alternative")
        if 'title' in item:
            message["Subject"] = "[Price-Logger] Alert: {}".format(item['title'])
        else:
            message["Subject"] = "[Price-Logger] Alert: {}".format(item['id'])
        message["From"] = mail_config['from']
        message["To"] = mail_config['to']

        # Create the HTML version of the message
        html = '''<html><body>
        Item: <a href="{}">{}</a><br/>
        Price: {}
        </body></html>'''.format(item['url'], item['id'], price)

        # Turn it into an HTML MIMEText object and attach it; the email client
        # will try to render the last attached part first.
        part2 = MIMEText(html, "html")
        message.attach(part2)

        server.login(mail_config['from'], mail_config['password'])
        server.sendmail(mail_config['from'], mail_config["to"], message.as_string())

    return True


def send_error(mail_config, error, subject):
    # Skip silently when no mail configuration is provided.
    if not mail_config or mail_config == "":
        return False

    # Create a secure SSL context
    context = ssl.create_default_context()

    with smtplib.SMTP_SSL(mail_config['smtp_url'], mail_config['smtp_port'], context=context) as server:
        message = MIMEMultipart("alternative")
        message["Subject"] = "[Price-Logger] Error {}".format(subject)
        message["From"] = mail_config['from']
        message["To"] = mail_config['to']

        # Create the HTML version of the message
        html = '<html><body>{}</body></html>'.format(error)

        # Turn it into an HTML MIMEText object and attach it; the email client
        # will try to render the last attached part first.
        part2 = MIMEText(html, "html")
        message.attach(part2)

        server.login(mail_config['from'], mail_config['password'])
        server.sendmail(mail_config['from'], mail_config["to"], message.as_string())

    return True
| 38.888889 | 104 | 0.638776 | 309 | 2,450 | 4.94822 | 0.245955 | 0.143885 | 0.054938 | 0.04709 | 0.800523 | 0.800523 | 0.800523 | 0.748201 | 0.748201 | 0.748201 | 0 | 0.002114 | 0.227755 | 2,450 | 63 | 105 | 38.888889 | 0.806025 | 0.190612 | 0 | 0.585366 | 0 | 0 | 0.178915 | 0.024835 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04878 | false | 0.04878 | 0.121951 | 0 | 0.268293 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6
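For reference, a minimal sketch of calling the send_mail function above; every value in mail_config and item is a hypothetical placeholder, not taken from the repository.

# Hedged usage sketch; all values below are placeholder assumptions.
mail_config = {
    'smtp_url': 'smtp.example.com',
    'smtp_port': 465,
    'from': 'alerts@example.com',
    'to': 'me@example.com',
    'password': 'app-password',
}
item = {'id': 'B00EXAMPLE', 'title': 'Example item', 'url': 'https://example.com/item'}
send_mail(mail_config, item, price=19.99)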
97ef5beec0fa88d5d43b584ba9bca9bef802c7e6 | 621 | py | Python | Data/ObjectData.py | jeui123/pyrelay | 43203343217c784183342dead933e0dc88c161d0 | ["MIT"] | null | null | null | Data/ObjectData.py | jeui123/pyrelay | 43203343217c784183342dead933e0dc88c161d0 | ["MIT"] | null | null | null | Data/ObjectData.py | jeui123/pyrelay | 43203343217c784183342dead933e0dc88c161d0 | ["MIT"] | null | null | null |

from .ObjectStatusData import *


class ObjectData:
    def __init__(self, objectType=0, status=None):
        self.objectType = objectType
        # Use the supplied status when given, so clone() actually copies it;
        # otherwise start with a fresh ObjectStatusData.
        self.status = status if status is not None else ObjectStatusData()

    def read(self, reader):
        self.objectType = reader.readUnsignedShort()
        self.status.read(reader)

    def write(self, writer):
        writer.writeUnsignedShort(self.objectType)
        self.status.write(writer)

    def clone(self):
        return ObjectData(self.objectType, self.status.clone())

    def __str__(self):
        return "ObjectType: {}\nStatus: \n{}".format(self.objectType, self.status)
| 31.05 | 83 | 0.648953 | 64 | 621 | 6.171875 | 0.375 | 0.212658 | 0.202532 | 0.182278 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00211 | 0.236715 | 621 | 19 | 84 | 32.684211 | 0.831224 | 0 | 0 | 0 | 0 | 0 | 0.046512 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.066667 | 0.133333 | 0.6 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
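A short sketch of using ObjectData; the object-type value is a hypothetical placeholder, and only methods defined above are called.

# Hedged sketch; assumes ObjectStatusData() builds an empty status, as the
# constructor above does by default.
obj = ObjectData(objectType=0x0300)   # placeholder type id
copy = obj.clone()                    # status is cloned, not shared
print(copy)                           # uses __str__: "ObjectType: 768\nStatus: ..."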
97fd758aab53a0bb77397af3cba6165a9974a922 | 10,538 | py | Python | luminaire/tests/test_models.py | Dima2022/luminaire | 5f98b624e70fad69a0fd82aed7d04a843fbb6450 | ["Apache-2.0"] | 525 | 2020-08-19T17:10:59.000Z | 2022-03-30T12:31:11.000Z | luminaire/tests/test_models.py | Dima2022/luminaire | 5f98b624e70fad69a0fd82aed7d04a843fbb6450 | ["Apache-2.0"] | 53 | 2020-08-19T17:25:42.000Z | 2022-03-18T05:58:09.000Z | luminaire/tests/test_models.py | jtimberlake/luminaire | 605db1aa4a2df24a398d64baad7e5530bdea2883 | ["Apache-2.0"] | 43 | 2020-08-24T01:31:29.000Z | 2022-03-16T02:00:27.000Z |

from luminaire.model.lad_structural import *
from luminaire.model.lad_filtering import *
from luminaire.model.window_density import *
# DataExploration is used by the window-density tests below; luminaire exposes
# it from this module.
from luminaire.exploration.data_exploration import DataExploration
from datetime import datetime


class TestLADStructural(object):
    def test_lad_structural_training(self, training_test_data):
        hyper_params = LADStructuralHyperParams(is_log_transformed=False, p=4, q=0).params
        lad_struct_obj = LADStructuralModel(hyper_params, freq='D')
        data_summary = {'ts_start': training_test_data.first_valid_index(),
                        'ts_end': training_test_data.last_valid_index(),
                        'is_log_transformed': False}
        success, ts_end, model = lad_struct_obj.train(data=training_test_data, **data_summary)

        assert success and isinstance(model, LADStructuralModel)

    def test_lad_structural_training_zeroes(self, training_test_data_zeroes):
        hyper_params = LADStructuralHyperParams(is_log_transformed=False, p=4, q=0).params
        lad_struct_obj = LADStructuralModel(hyper_params, freq='D')
        data_summary = {'ts_start': training_test_data_zeroes.first_valid_index(),
                        'ts_end': training_test_data_zeroes.last_valid_index(),
                        'is_log_transformed': False}
        success, ts_end, model = lad_struct_obj.train(data=training_test_data_zeroes, **data_summary)

        assert success and isinstance(model, LADStructuralModel)

    def test_lad_structural_scoring(self, scoring_test_data, lad_structural_model):
        pred_date_normal = scoring_test_data.index[0]
        value_normal = scoring_test_data['raw'][0]
        output_normal = lad_structural_model.score(value_normal, pred_date_normal)

        pred_date_anomalous = scoring_test_data.index[1]
        value_anomalous = scoring_test_data['raw'][1]
        output_anomalous = lad_structural_model.score(value_anomalous, pred_date_anomalous)

        assert output_normal['Success'] and not output_normal['IsAnomaly']
        assert output_anomalous['Success'] and output_anomalous['IsAnomaly']

    def test_lad_filtering_training(self, training_test_data):
        hyper_params = LADFilteringHyperParams(is_log_transformed=False).params
        lad_filtering_obj = LADFilteringModel(hyper_params, freq='D')
        data_summary = {'ts_start': training_test_data.first_valid_index(),
                        'ts_end': training_test_data.last_valid_index(),
                        'is_log_transformed': False}
        success, ts_end, model = lad_filtering_obj.train(data=training_test_data, **data_summary)

        assert success and isinstance(model, LADFilteringModel)

    def test_lad_filtering_scoring(self, scoring_test_data, lad_filtering_model):
        pred_date_normal = scoring_test_data.index[0]
        value_normal = scoring_test_data['raw'][0]
        output_normal, lad_filtering_model_update = lad_filtering_model.score(value_normal, pred_date_normal)

        pred_date_anomalous = scoring_test_data.index[1]
        value_anomalous = scoring_test_data['raw'][1]
        output_anomalous, lad_filtering_model_update = lad_filtering_model_update.score(value_anomalous, pred_date_anomalous)

        assert output_normal['Success'] and not output_normal['IsAnomaly']
        assert output_anomalous['Success'] and output_anomalous['IsAnomaly'] \
            and isinstance(lad_filtering_model_update, LADFilteringModel)

    def test_lad_structural_training_log(self, training_test_data_log):
        hyper_params = LADStructuralHyperParams(is_log_transformed=True, include_holidays_exog=False).params
        lad_structural_obj = LADStructuralModel(hyper_params, freq='D')
        data_summary = {'ts_start': training_test_data_log.first_valid_index(),
                        'ts_end': training_test_data_log.last_valid_index(),
                        'is_log_transformed': True}
        success, ts_end, model = lad_structural_obj.train(data=training_test_data_log, **data_summary)

        assert success and isinstance(model, LADStructuralModel)

    def test_lad_structural_scoring_log(self, scoring_test_data_log, lad_structural_model_log_seasonal):
        pred_date_normal = scoring_test_data_log.index[0]
        value_normal = scoring_test_data_log['raw'][0]
        output_normal = lad_structural_model_log_seasonal.score(value_normal, pred_date_normal)

        pred_date_anomalous = scoring_test_data_log.index[1]
        value_anomalous = scoring_test_data_log['raw'][1]
        output_anomalous = lad_structural_model_log_seasonal.score(value_anomalous, pred_date_anomalous)

        assert output_normal['Success'] and output_normal['IsAnomaly']
        assert output_anomalous['Success'] and output_anomalous['IsAnomaly']

    def test_lad_filtering_training_log(self, training_test_data_log):
        hyper_params = LADFilteringHyperParams(is_log_transformed=True).params
        lad_filtering_obj = LADFilteringModel(hyper_params, freq='D')
        data_summary = {'ts_start': training_test_data_log.first_valid_index(),
                        'ts_end': training_test_data_log.last_valid_index(),
                        'is_log_transformed': True}
        success, ts_end, model = lad_filtering_obj.train(data=training_test_data_log, **data_summary)

        assert success and isinstance(model, LADFilteringModel)

    def test_lad_filtering_scoring_log(self, scoring_test_data_log, lad_filtering_model_log_seasonal):
        pred_date_normal = scoring_test_data_log.index[0]
        value_normal = scoring_test_data_log['raw'][0]
        output_normal, lad_filtering_model_update = lad_filtering_model_log_seasonal.score(value_normal, pred_date_normal)

        pred_date_anomalous = scoring_test_data_log.index[1]
        value_anomalous = scoring_test_data_log['raw'][1]
        output_anomalous, lad_filtering_model_update = lad_filtering_model_update.score(value_anomalous, pred_date_anomalous)

        assert output_normal['Success'] and not output_normal['IsAnomaly']
        assert output_anomalous['Success'] and output_anomalous['IsAnomaly'] \
            and isinstance(lad_filtering_model_update, LADFilteringModel)

    def test_high_freq_window_density_training(self, window_density_model_data):
        training_start = datetime(2020, 4, 30)
        training_end = datetime(2020, 5, 27, 23, 59, 59)
        data = window_density_model_data[(window_density_model_data.index >= training_start)
                                         & (window_density_model_data.index <= training_end)]
        config = WindowDensityHyperParams(detection_method='kldiv',
                                          window_length=6 * 24).params
        de_obj = DataExploration(**config)
        data, pre_prc = de_obj.stream_profile(df=data)
        config.update(pre_prc)
        wdm_obj = WindowDensityModel(hyper_params=config)
        success, ts_end, model = wdm_obj.train(data=data, past_model=None)

        assert success and isinstance(model, WindowDensityModel)

    def test_high_freq_window_density_scoring(self, window_density_model_data, window_density_model):
        scoring_start = datetime(2020, 5, 28)
        scoring_end = datetime(2020, 5, 28, 23, 59, 59)
        data = window_density_model_data[(window_density_model_data.index >= scoring_start)
                                         & (window_density_model_data.index <= scoring_end)]
        result = window_density_model.score(data)

        assert result[0]['Success'] and isinstance(result[0]['AnomalyProbability'], float)

    def test_low_freq_window_density_training_last_window(self, window_density_model_data_hourly):
        training_start = datetime(2018, 4, 1)
        training_end = datetime(2018, 9, 30, 23, 59, 59)
        data = window_density_model_data_hourly[(window_density_model_data_hourly.index >= training_start)
                                                & (window_density_model_data_hourly.index <= training_end)]
        config = WindowDensityHyperParams(freq='H', baseline_type="last_window").params
        de_obj = DataExploration(**config)
        data, pre_prc = de_obj.stream_profile(df=data)
        config.update(pre_prc)
        wdm_obj = WindowDensityModel(hyper_params=config)
        success, ts_end, model = wdm_obj.train(data=data, past_model=None)

        assert success and isinstance(model, WindowDensityModel)

    def test_low_freq_window_density_scoring_last_window(self, window_density_model_data_hourly,
                                                         window_density_model_hourly_last_window):
        scoring_start = datetime(2018, 10, 1)
        scoring_end = datetime(2018, 10, 1, 23, 59, 59)
        data = window_density_model_data_hourly[(window_density_model_data_hourly.index >= scoring_start)
                                                & (window_density_model_data_hourly.index <= scoring_end)]
        result = window_density_model_hourly_last_window.score(data)

        assert result[0]['Success'] and isinstance(result[0]['AnomalyProbability'], float)

    def test_low_freq_window_density_training_aggregated(self, window_density_model_data_hourly):
        training_start = datetime(2018, 4, 1)
        training_end = datetime(2018, 9, 30, 23, 59, 59)
        data = window_density_model_data_hourly[(window_density_model_data_hourly.index >= training_start)
                                                & (window_density_model_data_hourly.index <= training_end)]
        config = WindowDensityHyperParams(freq='H', baseline_type="aggregated").params
        de_obj = DataExploration(**config)
        data, pre_prc = de_obj.stream_profile(df=data)
        config.update(pre_prc)
        wdm_obj = WindowDensityModel(hyper_params=config)
        success, ts_end, model = wdm_obj.train(data=data, past_model=None)

        assert success and isinstance(model, WindowDensityModel)

    def test_low_freq_window_density_scoring_aggregated(self, window_density_model_data_hourly,
                                                        window_density_model_hourly_aggregated):
        scoring_start = datetime(2018, 10, 1)
        scoring_end = datetime(2018, 10, 1, 23, 59, 59)
        data = window_density_model_data_hourly[(window_density_model_data_hourly.index >= scoring_start)
                                                & (window_density_model_data_hourly.index <= scoring_end)]
        result = window_density_model_hourly_aggregated.score(data)

        assert result[0]['Success'] and isinstance(result[0]['AnomalyProbability'], float)
| 53.765306 | 125 | 0.708199 | 1,279 | 10,538 | 5.393276 | 0.083659 | 0.04639 | 0.078284 | 0.076544 | 0.931139 | 0.910409 | 0.876051 | 0.8343 | 0.818063 | 0.806176 | 0 | 0.017664 | 0.210287 | 10,538 | 195 | 126 | 54.041026 | 0.811223 | 0 | 0 | 0.615385 | 0 | 0 | 0.039856 | 0 | 0 | 0 | 0 | 0 | 0.132867 | 1 | 0.104895 | false | 0 | 0.027972 | 0 | 0.13986 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
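To make the train/score flow these tests exercise easier to follow, a hedged end-to-end sketch outside pytest; the CSV path is a placeholder, and the 'raw' column name mirrors the fixtures rather than coming from this file.

# Hedged sketch of the train/score flow exercised above.
import pandas as pd

df = pd.read_csv('daily_metric.csv', index_col=0, parse_dates=True)  # placeholder data

hyper_params = LADStructuralHyperParams(is_log_transformed=False).params
model_obj = LADStructuralModel(hyper_params, freq='D')
success, ts_end, trained = model_obj.train(
    data=df,
    ts_start=df.first_valid_index(),
    ts_end=df.last_valid_index(),
    is_log_transformed=False)

if success:
    # Score the most recent observation against the trained model.
    result = trained.score(df['raw'].iloc[-1], df.index[-1])
    print(result['IsAnomaly'], result.get('AnomalyProbability'))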
3f05d7306173b55e9d870f8be736c484e068e653 | 81 | py | Python | elabjournal/elabjournal/StorageTypes.py | matthijsbrouwer/elabjournal-python | 4063b01993f0bf17ea2857009c1bedc5ace8b87b | ["Apache-2.0"] | 2 | 2021-06-29T11:17:27.000Z | 2022-01-11T18:41:49.000Z | elabjournal/elabjournal/StorageTypes.py | matthijsbrouwer/elabjournal-python | 4063b01993f0bf17ea2857009c1bedc5ace8b87b | ["Apache-2.0"] | null | null | null | elabjournal/elabjournal/StorageTypes.py | matthijsbrouwer/elabjournal-python | 4063b01993f0bf17ea2857009c1bedc5ace8b87b | ["Apache-2.0"] | 1 | 2019-06-06T13:23:11.000Z | 2019-06-06T13:23:11.000Z |

from .eLABJournalPager import *


class StorageTypes(eLABJournalPager):
    # Marker subclass with no overrides; all pager behaviour is inherited
    # from eLABJournalPager.
    pass
| 13.5 | 37 | 0.777778 | 7 | 81 | 9 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.160494 | 81 | 6 | 38 | 13.5 | 0.926471 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6
3f2acadaca5e53c0f9ccb21b6e36f1634b325018 | 38 | py | Python | src/lib/binhex.py | DTenore/skulpt | 098d20acfb088d6db85535132c324b7ac2f2d212 | ["MIT"] | 2,671 | 2015-01-03T08:23:25.000Z | 2022-03-31T06:15:48.000Z | src/lib/binhex.py | wakeupmuyunhe/skulpt | a8fb11a80fb6d7c016bab5dfe3712517a350b347 | ["MIT"] | 972 | 2015-01-05T08:11:00.000Z | 2022-03-29T13:47:15.000Z | src/lib/binhex.py | wakeupmuyunhe/skulpt | a8fb11a80fb6d7c016bab5dfe3712517a350b347 | ["MIT"] | 845 | 2015-01-03T19:53:36.000Z | 2022-03-29T18:34:22.000Z |

# Skulpt stub: importing binhex reports the module as unimplemented.
import _sk_fail; _sk_fail._("binhex")
| 19 | 37 | 0.763158 | 6 | 38 | 4 | 0.666667 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.078947 | 38 | 1 | 38 | 38 | 0.685714 | 0 | 0 | 0 | 0 | 0 | 0.157895 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |