hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
23e02b42f9834f452d669bf43e6ade3322053afe | 45 | py | Python | cflearn/data/__init__.py | carefree0910/carefree-learn | 2043812afbe9c56f01ec1639961736313ee062ba | [
"MIT"
] | 400 | 2020-07-05T18:55:49.000Z | 2022-02-21T02:33:08.000Z | cflow/api/cv/data/__init__.py | carefree0910/carefree-flow | 7035015a072cf8142074d01683889f90950d2939 | [
"MIT"
] | 82 | 2020-08-01T13:29:38.000Z | 2021-10-09T07:13:44.000Z | cflearn/data/__init__.py | carefree0910/carefree-learn | 2043812afbe9c56f01ec1639961736313ee062ba | [
"MIT"
] | 34 | 2020-07-05T21:15:34.000Z | 2021-12-20T08:45:17.000Z | from .core import *
from .interface import *
| 15 | 24 | 0.733333 | 6 | 45 | 5.5 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.177778 | 45 | 2 | 25 | 22.5 | 0.891892 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9b0d40641bbef7004d990876a4240fcad00c7ef2 | 45 | py | Python | app/users/serializers/__init__.py | InNickF/django-template | a8a9e1e5cd8cf63543cc78ef4fbd6bce060a448b | [
"MIT"
] | 3 | 2020-09-20T11:21:01.000Z | 2021-01-31T18:55:54.000Z | app/users/serializers/__init__.py | InNickF/django-template | a8a9e1e5cd8cf63543cc78ef4fbd6bce060a448b | [
"MIT"
] | 2 | 2020-09-21T09:53:32.000Z | 2021-06-10T19:40:41.000Z | app/users/serializers/__init__.py | InNickF/django-template | a8a9e1e5cd8cf63543cc78ef4fbd6bce060a448b | [
"MIT"
] | 2 | 2021-01-17T20:59:23.000Z | 2021-01-31T18:55:58.000Z | """Users serializers"""
from .users import *
| 15 | 23 | 0.688889 | 5 | 45 | 6.2 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 45 | 2 | 24 | 22.5 | 0.794872 | 0.377778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7b0ff51baec4951388f12a5b837fc3652e41ad1a | 9,550 | py | Python | tests/unit/workflows/nodejs_npm/test_actions.py | honey-insurance/aws-lambda-builders | 908ad5f892b9a40ace7181fa53b949511c929055 | [
"Apache-2.0"
] | 180 | 2018-11-09T04:51:19.000Z | 2020-08-06T21:43:20.000Z | tests/unit/workflows/nodejs_npm/test_actions.py | honey-insurance/aws-lambda-builders | 908ad5f892b9a40ace7181fa53b949511c929055 | [
"Apache-2.0"
] | 108 | 2018-11-08T18:34:51.000Z | 2020-08-12T17:59:41.000Z | tests/unit/workflows/nodejs_npm/test_actions.py | honey-insurance/aws-lambda-builders | 908ad5f892b9a40ace7181fa53b949511c929055 | [
"Apache-2.0"
] | 91 | 2018-11-08T22:58:00.000Z | 2020-08-17T21:15:31.000Z | from unittest import TestCase
from mock import patch, call
from parameterized import parameterized
from aws_lambda_builders.actions import ActionFailedError
from aws_lambda_builders.workflows.nodejs_npm.actions import (
NodejsNpmPackAction,
NodejsNpmInstallAction,
NodejsNpmrcAndLockfileCopyAction,
NodejsNpmrcCleanUpAction,
NodejsNpmLockFileCleanUpAction,
NodejsNpmCIAction,
)
from aws_lambda_builders.workflows.nodejs_npm.npm import NpmExecutionError
class TestNodejsNpmPackAction(TestCase):
@patch("aws_lambda_builders.workflows.nodejs_npm.utils.OSUtils")
@patch("aws_lambda_builders.workflows.nodejs_npm.npm.SubprocessNpm")
def test_tars_and_unpacks_npm_project(self, OSUtilMock, SubprocessNpmMock):
osutils = OSUtilMock.return_value
subprocess_npm = SubprocessNpmMock.return_value
action = NodejsNpmPackAction(
"artifacts", "scratch_dir", "manifest", osutils=osutils, subprocess_npm=subprocess_npm
)
osutils.dirname.side_effect = lambda value: "/dir:{}".format(value)
osutils.abspath.side_effect = lambda value: "/abs:{}".format(value)
osutils.joinpath.side_effect = lambda a, b: "{}/{}".format(a, b)
subprocess_npm.run.return_value = "package.tar"
action.execute()
subprocess_npm.run.assert_called_with(["pack", "-q", "file:/abs:/dir:manifest"], cwd="scratch_dir")
osutils.extract_tarfile.assert_called_with("scratch_dir/package.tar", "artifacts")
@patch("aws_lambda_builders.workflows.nodejs_npm.utils.OSUtils")
@patch("aws_lambda_builders.workflows.nodejs_npm.npm.SubprocessNpm")
def test_raises_action_failed_when_npm_fails(self, OSUtilMock, SubprocessNpmMock):
osutils = OSUtilMock.return_value
subprocess_npm = SubprocessNpmMock.return_value
builder_instance = SubprocessNpmMock.return_value
builder_instance.run.side_effect = NpmExecutionError(message="boom!")
action = NodejsNpmPackAction(
"artifacts", "scratch_dir", "manifest", osutils=osutils, subprocess_npm=subprocess_npm
)
with self.assertRaises(ActionFailedError) as raised:
action.execute()
self.assertEqual(raised.exception.args[0], "NPM Failed: boom!")
class TestNodejsNpmInstallAction(TestCase):
@patch("aws_lambda_builders.workflows.nodejs_npm.npm.SubprocessNpm")
def test_installs_npm_production_dependencies_for_npm_project(self, SubprocessNpmMock):
subprocess_npm = SubprocessNpmMock.return_value
action = NodejsNpmInstallAction("artifacts", subprocess_npm=subprocess_npm)
action.execute()
expected_args = ["install", "-q", "--no-audit", "--no-save", "--production", "--unsafe-perm"]
subprocess_npm.run.assert_called_with(expected_args, cwd="artifacts")
@patch("aws_lambda_builders.workflows.nodejs_npm.npm.SubprocessNpm")
def test_can_set_mode(self, SubprocessNpmMock):
subprocess_npm = SubprocessNpmMock.return_value
action = NodejsNpmInstallAction("artifacts", subprocess_npm=subprocess_npm, is_production=False)
action.execute()
expected_args = ["install", "-q", "--no-audit", "--no-save", "--production=false", "--unsafe-perm"]
subprocess_npm.run.assert_called_with(expected_args, cwd="artifacts")
@patch("aws_lambda_builders.workflows.nodejs_npm.npm.SubprocessNpm")
def test_raises_action_failed_when_npm_fails(self, SubprocessNpmMock):
subprocess_npm = SubprocessNpmMock.return_value
builder_instance = SubprocessNpmMock.return_value
builder_instance.run.side_effect = NpmExecutionError(message="boom!")
action = NodejsNpmInstallAction("artifacts", subprocess_npm=subprocess_npm)
with self.assertRaises(ActionFailedError) as raised:
action.execute()
self.assertEqual(raised.exception.args[0], "NPM Failed: boom!")
class TestNodejsNpmCIAction(TestCase):
@patch("aws_lambda_builders.workflows.nodejs_npm.npm.SubprocessNpm")
def test_tars_and_unpacks_npm_project(self, SubprocessNpmMock):
subprocess_npm = SubprocessNpmMock.return_value
action = NodejsNpmCIAction("sources", subprocess_npm=subprocess_npm)
action.execute()
subprocess_npm.run.assert_called_with(["ci"], cwd="sources")
@patch("aws_lambda_builders.workflows.nodejs_npm.npm.SubprocessNpm")
def test_raises_action_failed_when_npm_fails(self, SubprocessNpmMock):
subprocess_npm = SubprocessNpmMock.return_value
builder_instance = SubprocessNpmMock.return_value
builder_instance.run.side_effect = NpmExecutionError(message="boom!")
action = NodejsNpmCIAction("sources", subprocess_npm=subprocess_npm)
with self.assertRaises(ActionFailedError) as raised:
action.execute()
self.assertEqual(raised.exception.args[0], "NPM Failed: boom!")
class TestNodejsNpmrcAndLockfileCopyAction(TestCase):
@parameterized.expand(
[
[False, False],
[True, False],
[False, True],
[True, True],
]
)
@patch("aws_lambda_builders.workflows.nodejs_npm.utils.OSUtils")
def test_copies_into_a_project_if_file_exists(self, npmrc_exists, package_lock_exists, OSUtilMock):
osutils = OSUtilMock.return_value
osutils.joinpath.side_effect = lambda a, b: "{}/{}".format(a, b)
action = NodejsNpmrcAndLockfileCopyAction("artifacts", "source", osutils=osutils)
osutils.file_exists.side_effect = [npmrc_exists, package_lock_exists]
action.execute()
filename_exists = {
".npmrc": npmrc_exists,
"package-lock.json": package_lock_exists,
}
file_exists_calls = [call("source/{}".format(filename)) for filename in filename_exists]
copy_file_calls = [
call("source/{}".format(filename), "artifacts") for filename, exists in filename_exists.items() if exists
]
osutils.file_exists.assert_has_calls(file_exists_calls)
osutils.copy_file.assert_has_calls(copy_file_calls)
@patch("aws_lambda_builders.workflows.nodejs_npm.utils.OSUtils")
def test_raises_action_failed_when_copying_fails(self, OSUtilMock):
osutils = OSUtilMock.return_value
osutils.joinpath.side_effect = lambda a, b: "{}/{}".format(a, b)
osutils.copy_file.side_effect = OSError()
action = NodejsNpmrcAndLockfileCopyAction("artifacts", "source", osutils=osutils)
with self.assertRaises(ActionFailedError):
action.execute()
class TestNodejsNpmrcCleanUpAction(TestCase):
@patch("aws_lambda_builders.workflows.nodejs_npm.utils.OSUtils")
def test_removes_npmrc_if_npmrc_exists(self, OSUtilMock):
osutils = OSUtilMock.return_value
osutils.joinpath.side_effect = lambda a, b: "{}/{}".format(a, b)
action = NodejsNpmrcCleanUpAction("artifacts", osutils=osutils)
osutils.file_exists.side_effect = [True]
action.execute()
osutils.remove_file.assert_called_with("artifacts/.npmrc")
@patch("aws_lambda_builders.workflows.nodejs_npm.utils.OSUtils")
def test_skips_npmrc_removal_if_npmrc_doesnt_exist(self, OSUtilMock):
osutils = OSUtilMock.return_value
osutils.joinpath.side_effect = lambda a, b: "{}/{}".format(a, b)
action = NodejsNpmrcCleanUpAction("artifacts", osutils=osutils)
osutils.file_exists.side_effect = [False]
action.execute()
osutils.remove_file.assert_not_called()
@patch("aws_lambda_builders.workflows.nodejs_npm.utils.OSUtils")
def test_raises_action_failed_when_removing_fails(self, OSUtilMock):
osutils = OSUtilMock.return_value
osutils.joinpath.side_effect = lambda a, b: "{}/{}".format(a, b)
osutils.remove_file.side_effect = OSError()
action = NodejsNpmrcCleanUpAction("artifacts", osutils=osutils)
with self.assertRaises(ActionFailedError):
action.execute()
class TestNodejsNpmLockFileCleanUpAction(TestCase):
@patch("aws_lambda_builders.workflows.nodejs_npm.utils.OSUtils")
def test_removes_dot_package_lock_if_exists(self, OSUtilMock):
osutils = OSUtilMock.return_value
osutils.joinpath.side_effect = lambda a, b, c: "{}/{}/{}".format(a, b, c)
action = NodejsNpmLockFileCleanUpAction("artifacts", osutils=osutils)
osutils.file_exists.side_effect = [True]
action.execute()
osutils.remove_file.assert_called_with("artifacts/node_modules/.package-lock.json")
@patch("aws_lambda_builders.workflows.nodejs_npm.utils.OSUtils")
def test_skips_lockfile_removal_if_it_doesnt_exist(self, OSUtilMock):
osutils = OSUtilMock.return_value
osutils.joinpath.side_effect = lambda a, b, c: "{}/{}/{}".format(a, b, c)
action = NodejsNpmLockFileCleanUpAction("artifacts", osutils=osutils)
osutils.file_exists.side_effect = [False]
action.execute()
osutils.remove_file.assert_not_called()
@patch("aws_lambda_builders.workflows.nodejs_npm.utils.OSUtils")
def test_raises_action_failed_when_removing_fails(self, OSUtilMock):
osutils = OSUtilMock.return_value
osutils.joinpath.side_effect = lambda a, b, c: "{}/{}/{}".format(a, b, c)
osutils.remove_file.side_effect = OSError()
action = NodejsNpmLockFileCleanUpAction("artifacts", osutils=osutils)
with self.assertRaises(ActionFailedError):
action.execute()
| 40.466102 | 117 | 0.718639 | 1,027 | 9,550 | 6.401168 | 0.133398 | 0.051415 | 0.051719 | 0.075145 | 0.808336 | 0.779586 | 0.764375 | 0.71874 | 0.703681 | 0.670216 | 0 | 0.000381 | 0.174974 | 9,550 | 235 | 118 | 40.638298 | 0.833989 | 0 | 0 | 0.620482 | 0 | 0 | 0.166492 | 0.108168 | 0 | 0 | 0 | 0 | 0.120482 | 1 | 0.090361 | false | 0 | 0.036145 | 0 | 0.162651 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9e43cb94c88ac2bd2d63336288831b58d9b57cc0 | 28 | py | Python | simple.py | saivenkat288/goCD-Test | 54cbc274f37d7ef8ecd9de608789ef89a8fb3fa4 | [
"Apache-2.0"
] | 1 | 2021-08-09T10:17:13.000Z | 2021-08-09T10:17:13.000Z | simple.py | saivenkat288/goCD-Test | 54cbc274f37d7ef8ecd9de608789ef89a8fb3fa4 | [
"Apache-2.0"
] | null | null | null | simple.py | saivenkat288/goCD-Test | 54cbc274f37d7ef8ecd9de608789ef89a8fb3fa4 | [
"Apache-2.0"
] | null | null | null | print("Hey, Its working!!")
| 14 | 27 | 0.642857 | 4 | 28 | 4.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.107143 | 28 | 1 | 28 | 28 | 0.72 | 0 | 0 | 0 | 0 | 0 | 0.642857 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
9e44cd41605a4e45ff04c0276011bf4eae3ea27a | 42 | py | Python | lidardet/datasets/processor/__init__.py | Jiaolong/trajectory-prediction | 3fd4e6253b44dfdc86e7c08e93c002baf66f2e46 | [
"Apache-2.0"
] | 6 | 2021-05-10T09:42:01.000Z | 2022-01-04T08:03:42.000Z | lidardet/datasets/processor/__init__.py | Jiaolong/trajectory-prediction | 3fd4e6253b44dfdc86e7c08e93c002baf66f2e46 | [
"Apache-2.0"
] | 3 | 2021-08-16T02:19:10.000Z | 2022-01-10T02:05:48.000Z | lidardet/datasets/processor/__init__.py | Jiaolong/trajectory-prediction | 3fd4e6253b44dfdc86e7c08e93c002baf66f2e46 | [
"Apache-2.0"
] | 1 | 2021-07-15T00:51:58.000Z | 2021-07-15T00:51:58.000Z | from .data_processor import DataProcessor
| 21 | 41 | 0.880952 | 5 | 42 | 7.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.095238 | 42 | 1 | 42 | 42 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9e7e147544db863076beb658f81087becee0b6b7 | 20 | py | Python | examples/py30-0027-type-hints1.py | jwilk-forks/python-grammar-changes | 5cbc14e520fadfef8539760a4ffdbe14b9d02f39 | [
"MIT"
] | 8 | 2020-11-21T22:39:41.000Z | 2022-03-13T18:45:53.000Z | examples/py30-0027-type-hints1.py | jwilk-forks/python-grammar-changes | 5cbc14e520fadfef8539760a4ffdbe14b9d02f39 | [
"MIT"
] | 1 | 2021-12-10T10:45:38.000Z | 2021-12-10T10:45:38.000Z | examples/py30-0027-type-hints1.py | jwilk-forks/python-grammar-changes | 5cbc14e520fadfef8539760a4ffdbe14b9d02f39 | [
"MIT"
] | 1 | 2022-02-07T11:16:38.000Z | 2022-02-07T11:16:38.000Z | def f(x: str): pass
| 10 | 19 | 0.6 | 5 | 20 | 2.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 20 | 1 | 20 | 20 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | false | 1 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
9e83d931b3bea5e28898303ee54c401b375ee99a | 33 | py | Python | cw_nets/unet_pytorch.py | dlindenbaum/cw-nets | 141b8f9a01b8f75e6ce34be0e8c8a931d0559b7c | [
"Apache-2.0"
] | 3 | 2018-07-14T07:45:29.000Z | 2019-04-01T15:28:24.000Z | cw_nets/unet_pytorch.py | CosmiQ/cw-nets | 7b78ac7e1f23b512def23ede52663970b2c87d6e | [
"Apache-2.0"
] | 44 | 2018-07-12T17:13:20.000Z | 2019-05-01T16:04:04.000Z | cw_nets/unet_pytorch.py | dlindenbaum/cw-nets | 141b8f9a01b8f75e6ce34be0e8c8a931d0559b7c | [
"Apache-2.0"
] | 1 | 2018-10-13T17:06:20.000Z | 2018-10-13T17:06:20.000Z | print("TODO-Not implemented yet") | 33 | 33 | 0.787879 | 5 | 33 | 5.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.060606 | 33 | 1 | 33 | 33 | 0.83871 | 0 | 0 | 0 | 0 | 0 | 0.705882 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
7b459dfd9fadfe8508bab4f85b7f513fe02f990e | 61 | py | Python | tests/test_greet.py | terasakisatoshi/pydev_conda | dc26fed9d329a06151354e692c6d18ac342cf08c | [
"MIT"
] | null | null | null | tests/test_greet.py | terasakisatoshi/pydev_conda | dc26fed9d329a06151354e692c6d18ac342cf08c | [
"MIT"
] | null | null | null | tests/test_greet.py | terasakisatoshi/pydev_conda | dc26fed9d329a06151354e692c6d18ac342cf08c | [
"MIT"
] | null | null | null | from pydev_conda import greet
def test_greet():
greet()
| 12.2 | 29 | 0.721311 | 9 | 61 | 4.666667 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.196721 | 61 | 4 | 30 | 15.25 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7b800911d241237bcc706e3f97f87cd8adf0474c | 202 | py | Python | src/utils/build_key.py | VitalyPetrov/asgi-ml | c297df2e3365cb8fd36fb8048db31e8f16d96fe7 | [
"MIT"
] | 1 | 2020-10-09T16:04:43.000Z | 2020-10-09T16:04:43.000Z | src/utils/build_key.py | VitalyPetrov/asgi-ml | c297df2e3365cb8fd36fb8048db31e8f16d96fe7 | [
"MIT"
] | null | null | null | src/utils/build_key.py | VitalyPetrov/asgi-ml | c297df2e3365cb8fd36fb8048db31e8f16d96fe7 | [
"MIT"
] | null | null | null | from hashlib import md5
from typing import Any, Callable
def build_hashkey(func: Callable, *args: Any, **kwargs: Any) -> str:
return md5(kwargs.get("features").json().encode("utf-8")).hexdigest()
| 28.857143 | 73 | 0.707921 | 29 | 202 | 4.896552 | 0.758621 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017045 | 0.128713 | 202 | 6 | 74 | 33.666667 | 0.789773 | 0 | 0 | 0 | 0 | 0 | 0.064356 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.5 | 0.25 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
7bc4abcded457e23ca930818e1b41837e22ff7ee | 990 | py | Python | temboo/core/Library/SendGrid/WebAPI/Statistics/__init__.py | jordanemedlock/psychtruths | 52e09033ade9608bd5143129f8a1bfac22d634dd | [
"Apache-2.0"
] | 7 | 2016-03-07T02:07:21.000Z | 2022-01-21T02:22:41.000Z | temboo/core/Library/SendGrid/WebAPI/Statistics/__init__.py | jordanemedlock/psychtruths | 52e09033ade9608bd5143129f8a1bfac22d634dd | [
"Apache-2.0"
] | null | null | null | temboo/core/Library/SendGrid/WebAPI/Statistics/__init__.py | jordanemedlock/psychtruths | 52e09033ade9608bd5143129f8a1bfac22d634dd | [
"Apache-2.0"
] | 8 | 2016-06-14T06:01:11.000Z | 2020-04-22T09:21:44.000Z | from temboo.Library.SendGrid.WebAPI.Statistics.GetAllTimeCategoryTotals import GetAllTimeCategoryTotals, GetAllTimeCategoryTotalsInputSet, GetAllTimeCategoryTotalsResultSet, GetAllTimeCategoryTotalsChoreographyExecution
from temboo.Library.SendGrid.WebAPI.Statistics.GetCategoryStatistics import GetCategoryStatistics, GetCategoryStatisticsInputSet, GetCategoryStatisticsResultSet, GetCategoryStatisticsChoreographyExecution
from temboo.Library.SendGrid.WebAPI.Statistics.ListAllCategories import ListAllCategories, ListAllCategoriesInputSet, ListAllCategoriesResultSet, ListAllCategoriesChoreographyExecution
from temboo.Library.SendGrid.WebAPI.Statistics.RetrieveAggregates import RetrieveAggregates, RetrieveAggregatesInputSet, RetrieveAggregatesResultSet, RetrieveAggregatesChoreographyExecution
from temboo.Library.SendGrid.WebAPI.Statistics.RetrieveStatistics import RetrieveStatistics, RetrieveStatisticsInputSet, RetrieveStatisticsResultSet, RetrieveStatisticsChoreographyExecution
| 165 | 219 | 0.924242 | 60 | 990 | 15.25 | 0.45 | 0.054645 | 0.092896 | 0.136612 | 0.224044 | 0.224044 | 0 | 0 | 0 | 0 | 0 | 0 | 0.035354 | 990 | 5 | 220 | 198 | 0.958115 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c86d74af8a4641dccf7d64383ceba5a0f5ec7b26 | 2,552 | py | Python | day17/program.py | jredzepovic/AoC2020 | aed66e27ea8a3e1f38457f8f8b21f9cdbf4d173d | [
"MIT"
] | null | null | null | day17/program.py | jredzepovic/AoC2020 | aed66e27ea8a3e1f38457f8f8b21f9cdbf4d173d | [
"MIT"
] | null | null | null | day17/program.py | jredzepovic/AoC2020 | aed66e27ea8a3e1f38457f8f8b21f9cdbf4d173d | [
"MIT"
] | null | null | null | from itertools import product
import numpy as np
def extend_grid_3d(current_grid):
x, y, z = [], [], []
for loc in current_grid:
x.append(loc[0])
y.append(loc[1])
z.append(loc[2])
grid = np.meshgrid(
range(min(x) - 1, max(x) + 2),
range(min(y) - 1, max(y) + 2),
range(min(z) - 1, max(z) + 2))
return list(map(tuple, np.stack((grid[0].ravel(), grid[1].ravel(), grid[2].ravel()), axis=1)))
def extend_grid_4d(current_grid):
x, y, z, w = [], [], [], []
for loc in current_grid:
x.append(loc[0])
y.append(loc[1])
z.append(loc[2])
w.append(loc[3])
grid = np.meshgrid(
range(min(x) - 1, max(x) + 2),
range(min(y) - 1, max(y) + 2),
range(min(z) - 1, max(z) + 2),
range(min(w) - 1, max(w) + 2))
return list(map(tuple, np.stack((grid[0].ravel(), grid[1].ravel(), grid[2].ravel(), grid[3].ravel()), axis=1)))
def main():
# part 1
with open("./input.txt") as f:
active = set([(i, j, 0) for i, l in enumerate(f.readlines()) for j, p in enumerate(l) if p == "#"])
transitions = list(product((-1, 0, 1), repeat=3))
transitions.remove((0, 0, 0))
for _ in range(6):
next_grid = set()
grid = extend_grid_3d(active)
for x, y, z in grid:
active_neighbors = sum((x + dx, y + dy, z + dz) in active for dx, dy, dz in transitions)
if (x, y, z) in active and (active_neighbors == 2 or active_neighbors == 3):
next_grid.add((x, y, z))
if (x, y, z) not in active and active_neighbors == 3:
next_grid.add((x, y, z))
active = next_grid
print(len(active))
# part 2
with open("./input.txt") as f:
active = set([(i, j, 0, 0) for i, l in enumerate(f.readlines()) for j, p in enumerate(l) if p == "#"])
transitions = list(product((-1, 0, 1), repeat=4))
transitions.remove((0, 0, 0, 0))
for _ in range(6):
next_grid = set()
grid = extend_grid_4d(active)
for x, y, z, w in grid:
active_neighbors = sum((x + dx, y + dy, z + dz, w + dw) in active for dx, dy, dz, dw in transitions)
if (x, y, z, w) in active and (active_neighbors == 2 or active_neighbors == 3):
next_grid.add((x, y, z, w))
if (x, y, z, w) not in active and active_neighbors == 3:
next_grid.add((x, y, z, w))
active = next_grid
print(len(active))
if __name__ == "__main__":
main()
| 31.9 | 115 | 0.519201 | 411 | 2,552 | 3.131387 | 0.172749 | 0.018648 | 0.027972 | 0.018648 | 0.872572 | 0.798757 | 0.700855 | 0.700855 | 0.700855 | 0.700855 | 0 | 0.034871 | 0.303292 | 2,552 | 79 | 116 | 32.303797 | 0.688976 | 0.005094 | 0 | 0.474576 | 0 | 0 | 0.012618 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.050847 | false | 0 | 0.033898 | 0 | 0.118644 | 0.033898 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c8737a92b4e8d950d80241dbe25134323f70c768 | 5,985 | py | Python | scripts/processed_data_generator.py | tugot17/Polish-Parties-Twitter-Activity | 46c92b6a1d7a2d5a57f36b27c8dcb2ac3a5af6ef | [
"MIT"
] | null | null | null | scripts/processed_data_generator.py | tugot17/Polish-Parties-Twitter-Activity | 46c92b6a1d7a2d5a57f36b27c8dcb2ac3a5af6ef | [
"MIT"
] | null | null | null | scripts/processed_data_generator.py | tugot17/Polish-Parties-Twitter-Activity | 46c92b6a1d7a2d5a57f36b27c8dcb2ac3a5af6ef | [
"MIT"
] | null | null | null | from os.path import join, realpath, dirname, exists, basename
from os import makedirs
import pandas as pd
from pandas import CategoricalDtype
from tqdm.auto import tqdm
from .coalitions import coalitions
def generate_number_of_tweets_per_day(df, output_dir):
# exclude retweets
df = df[df.user_rt.isnull()]
value_counts = df['date'].value_counts()
df_value_counts = pd.DataFrame(value_counts)
df_value_counts = df_value_counts.reset_index()
df_value_counts.columns = ['date', 'number_of_tweets']
df_value_counts = df_value_counts.sort_values(by=['date'])
if not exists(output_dir):
makedirs(output_dir)
path = join(output_dir, "number_of_tweets_per_day.csv")
df_value_counts.to_csv(path, index=False)
def generate_number_of_retweets_per_day(df, output_dir):
df = df.dropna(subset=['user_rt'])
value_counts = df['date'].value_counts()
df_value_counts = pd.DataFrame(value_counts)
df_value_counts = df_value_counts.reset_index()
df_value_counts.columns = ['date', 'number_of_retweets']
df_value_counts = df_value_counts.sort_values(by=['date'])
if not exists(output_dir):
makedirs(output_dir)
path = join(output_dir, "number_of_retweets_per_day.csv")
df_value_counts.to_csv(path, index=False)
def generate_number_of_retweets_for_users_tweets_per_day(df, output_dir):
df_retweets_counts = df.groupby("date")['retweets_count'].sum().reset_index()
df_retweets_counts = df_retweets_counts.sort_values(by=['date'])
if not exists(output_dir):
makedirs(output_dir)
    path = join(output_dir, "number_of_retweets_for_users_tweets_per_day.csv")
    df_retweets_counts.to_csv(path, index=False)


def generate_number_of_likes_for_users_tweets_per_day(df, output_dir):
    df_likes_counts = df.groupby("date")['likes_count'].sum().reset_index()
    df_likes_counts = df_likes_counts.sort_values(by=['date'])
    if not exists(output_dir):
        makedirs(output_dir)
    path = join(output_dir, "number_of_likes_for_users_tweets_per_day.csv")
    df_likes_counts.to_csv(path, index=False)


def generate_tweeting_activity_distribution_in_a_day(df, output_dir):
    # exclude retweets
    df = df[df.user_rt.isnull()]
    value_counts = pd.to_datetime(df['time']).dt.hour.value_counts(dropna=True)
    df_value_counts = pd.DataFrame(value_counts)
    df_value_counts = df_value_counts.reset_index()
    df_value_counts.columns = ['hour', 'number_of_tweets']
    df_value_counts = df_value_counts.sort_values(by=['hour'])
    if not exists(output_dir):
        makedirs(output_dir)
    path = join(output_dir, "tweeting_activity_distribution_in_a_day.csv")
    df_value_counts.to_csv(path, index=False)


def generate_tweeting_activity_distribution_in_a_week(df, output_dir):
    # exclude retweets
    df = df[df.user_rt.isnull()]
    value_counts = pd.to_datetime(df['date']).dt.day_name().value_counts(dropna=True)
    df_value_counts = pd.DataFrame(value_counts)
    df_value_counts = df_value_counts.reset_index()
    df_value_counts.columns = ['week_day', 'number_of_tweets']
    cats = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']
    cat_type = CategoricalDtype(categories=cats, ordered=True)
    df_value_counts['week_day'] = df_value_counts['week_day'].astype(cat_type)
    df_value_counts = df_value_counts.sort_values(by=['week_day'])
    if not exists(output_dir):
        makedirs(output_dir)
    path = join(output_dir, "tweeting_activity_distribution_in_a_week.csv")
    df_value_counts.to_csv(path, index=False)


def generate_retweeting_activity_distribution_in_a_day(df, output_dir):
    df = df.dropna(subset=['user_rt'])
    value_counts = pd.to_datetime(df['time']).dt.hour.value_counts(dropna=True)
    df_value_counts = pd.DataFrame(value_counts)
    df_value_counts = df_value_counts.reset_index()
    df_value_counts.columns = ['hour', 'number_of_retweets']
    df_value_counts = df_value_counts.sort_values(by=['hour'])
    if not exists(output_dir):
        makedirs(output_dir)
    path = join(output_dir, "retweeting_activity_distribution_in_a_day.csv")
    df_value_counts.to_csv(path, index=False)


def generate_retweeting_activity_distribution_in_a_week(df, output_dir):
    df = df.dropna(subset=['user_rt'])
    value_counts = pd.to_datetime(df['date']).dt.day_name().value_counts(dropna=True)
    df_value_counts = pd.DataFrame(value_counts)
    df_value_counts = df_value_counts.reset_index()
    df_value_counts.columns = ['week_day', 'number_of_retweets']
    cats = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']
    cat_type = CategoricalDtype(categories=cats, ordered=True)
    df_value_counts['week_day'] = df_value_counts['week_day'].astype(cat_type)
    df_value_counts = df_value_counts.sort_values(by=['week_day'])
    if not exists(output_dir):
        makedirs(output_dir)
    path = join(output_dir, "retweeting_activity_distribution_in_a_week.csv")
    df_value_counts.to_csv(path, index=False)


if __name__ == '__main__':
    data_dir_path = "data"
    for coalition_name in tqdm(coalitions.keys()):
        for party_name in coalitions[coalition_name]:
            save_dir = join("processed_data", coalition_name, party_name)
            df = pd.read_csv(join(data_dir_path, f"{party_name}.csv"))
            generate_number_of_tweets_per_day(df, join(save_dir))
            generate_number_of_retweets_per_day(df, join(save_dir))
            generate_number_of_retweets_for_users_tweets_per_day(df, join(save_dir))
            generate_number_of_likes_for_users_tweets_per_day(df, join(save_dir))
            generate_tweeting_activity_distribution_in_a_day(df, join(save_dir))
            generate_tweeting_activity_distribution_in_a_week(df, join(save_dir))
            generate_retweeting_activity_distribution_in_a_day(df, join(save_dir))
generate_retweeting_activity_distribution_in_a_week(df, join(save_dir)) | 37.879747 | 89 | 0.741855 | 878 | 5,985 | 4.611617 | 0.102506 | 0.17387 | 0.147691 | 0.088911 | 0.879476 | 0.868116 | 0.868116 | 0.854779 | 0.841689 | 0.831069 | 0 | 0 | 0.146867 | 5,985 | 158 | 90 | 37.879747 | 0.792989 | 0.008354 | 0 | 0.54717 | 0 | 0 | 0.126939 | 0.055125 | 0 | 0 | 0 | 0 | 0 | 1 | 0.075472 | false | 0 | 0.056604 | 0 | 0.132075 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c8d59f767d1e53762d773da161d29b34be697973 | 90 | py | Python | kipoiseq/extractors/__init__.py | KalinNonchev/kipoiseq | 38d1134885e401198acd3883286dc55627cf12a6 | [
"MIT"
] | 2 | 2019-12-16T17:13:04.000Z | 2021-07-29T12:05:47.000Z | kipoiseq/extractors/__init__.py | KalinNonchev/kipoiseq | 38d1134885e401198acd3883286dc55627cf12a6 | [
"MIT"
] | 117 | 2020-04-22T12:46:45.000Z | 2021-08-02T04:40:58.000Z | kipoiseq/extractors/__init__.py | KalinNonchev/kipoiseq | 38d1134885e401198acd3883286dc55627cf12a6 | [
"MIT"
] | null | null | null | from .base import *
from .vcf import *
from .vcf_seq import *
from .vcf_matching import *
| 18 | 27 | 0.733333 | 14 | 90 | 4.571429 | 0.428571 | 0.46875 | 0.609375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.177778 | 90 | 4 | 28 | 22.5 | 0.864865 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
cdd290d17b404d89bee6287777511a63a7408c1a | 363 | py | Python | vesper/command/clip_exporter.py | HaroldMills/NFC | 356b2234dc3c7d180282a597fa1e039ae79e03c6 | [
"MIT"
] | null | null | null | vesper/command/clip_exporter.py | HaroldMills/NFC | 356b2234dc3c7d180282a597fa1e039ae79e03c6 | [
"MIT"
] | 1 | 2015-01-12T12:41:29.000Z | 2015-01-12T12:41:29.000Z | vesper/command/clip_exporter.py | HaroldMills/NFC | 356b2234dc3c7d180282a597fa1e039ae79e03c6 | [
"MIT"
] | null | null | null | class ClipExporter:
    clip_query_set_select_related_args = None

    def begin_exports(self):
        pass

    def begin_subset_exports(
            self, station, mic_output, date, detector, clip_count):
        pass

    def export(self, clip):
        pass

    def end_subset_exports(self):
        pass

    def end_exports(self):
        pass
| 13.961538 | 67 | 0.608815 | 43 | 363 | 4.837209 | 0.534884 | 0.211538 | 0.216346 | 0.173077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.327824 | 363 | 25 | 68 | 14.52 | 0.852459 | 0 | 0 | 0.384615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.384615 | false | 0.384615 | 0 | 0 | 0.538462 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
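The `ClipExporter` row above is a set of no-op template-method hooks: subclasses override `export` (and optionally the begin/end hooks) while the caller drives the lifecycle. A minimal sketch of how such a subclass might be used — the `ListClipExporter` name and the string "clips" are illustrative assumptions, not part of Vesper's API:

```python
class ClipExporter:
    """Abbreviated copy of the base class shown above."""

    clip_query_set_select_related_args = None

    def begin_exports(self):
        pass

    def export(self, clip):
        pass

    def end_exports(self):
        pass


class ListClipExporter(ClipExporter):
    """Hypothetical subclass: collects exported clips in a list."""

    def begin_exports(self):
        self.clips = []

    def export(self, clip):
        self.clips.append(clip)


exporter = ListClipExporter()
exporter.begin_exports()
exporter.export('clip-1')
exporter.export('clip-2')
exporter.end_exports()    # inherited no-op
print(exporter.clips)     # ['clip-1', 'clip-2']
```

Because every hook has a default no-op body, a subclass only overrides the stages it cares about.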
a808bef29ee082d73f135a0f9fba51a82693d944 | 215 | py | Python | src/vanchor/resources/__init__.py | AlexAsplund/Vanchor | cb5d1c95567ab9d9bd280e2ca3022e4a2da1fa67 | [
"MIT"
] | 12 | 2021-09-25T01:03:31.000Z | 2022-02-04T09:13:00.000Z | src/vanchor/resources/__init__.py | AlexAsplund/Vanchor | cb5d1c95567ab9d9bd280e2ca3022e4a2da1fa67 | [
"MIT"
] | 13 | 2021-09-20T19:56:50.000Z | 2022-01-10T13:08:32.000Z | src/vanchor/resources/__init__.py | AlexAsplund/Vanchor | cb5d1c95567ab9d9bd280e2ca3022e4a2da1fa67 | [
"MIT"
] | 1 | 2021-10-05T10:49:59.000Z | 2021-10-05T10:49:59.000Z | from .config import *
from .events import *
from .device_manager import *
from .workers import *
from .functions import *
from .tools import *
from .data import *
from .main import *
from .metrics import *
| 21.5 | 30 | 0.706977 | 28 | 215 | 5.392857 | 0.428571 | 0.529801 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.209302 | 215 | 9 | 31 | 23.888889 | 0.888235 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a82c56e6c83c863d1bd739fff6aae57a9fa3eee4 | 1,156 | py | Python | src/cosa_constants.py | charleshong3/cosa | 20936a87ced9371736d7280d613821ae8d564fa3 | [
"BSD-2-Clause"
] | 35 | 2021-06-15T17:37:45.000Z | 2022-03-10T10:50:31.000Z | src/cosa_constants.py | charleshong3/cosa | 20936a87ced9371736d7280d613821ae8d564fa3 | [
"BSD-2-Clause"
] | 1 | 2021-08-07T17:52:04.000Z | 2021-09-15T19:35:51.000Z | src/cosa_constants.py | charleshong3/cosa | 20936a87ced9371736d7280d613821ae8d564fa3 | [
"BSD-2-Clause"
] | 7 | 2021-06-18T08:52:54.000Z | 2022-03-08T15:39:40.000Z | #!/usr/bin/env python3
# j=7, v=3, prob - var
_A = [
    [1, 0, 0],  # R
    [1, 0, 0],  # S
    [0, 1, 1],  # P
    [0, 1, 1],  # Q
    [1, 1, 0],  # C
    [1, 0, 1],  # K
    [0, 1, 1],  # N
]

# assume 6 levels of ranks
# v=3, i=6 var - rank
_B = [
    [1, 0, 1, 0, 0, 1],  # Weights
    [0, 0, 0, 1, 1, 1],  # Inputs
    [0, 1, 0, 0, 1, 1],  # Outputs
]

# for uneven mapping
# v=3, i=6, i'=6
_Z = [
    # Weights
    [
        [1, 0, 0, 0, 0, 0],  # mem 0
        [0, 0, 0, 0, 0, 0],  # mem 1
        [1, 1, 1, 0, 0, 0],  # mem 2
        [0, 0, 0, 0, 0, 0],  # mem 3
        [0, 0, 0, 0, 0, 0],  # mem 4
        [1, 1, 1, 1, 1, 1],  # mem 5
    ],
    # Inputs
    [
        [0, 0, 0, 0, 0, 0],  # mem 0
        [0, 0, 0, 0, 0, 0],  # mem 1
        [0, 0, 0, 0, 0, 0],  # mem 2
        [1, 1, 1, 1, 0, 0],  # mem 3
        [1, 1, 1, 1, 1, 0],  # mem 4
        [1, 1, 1, 1, 1, 1],  # mem 5
    ],
    # Outputs
    [
        [0, 0, 0, 0, 0, 0],  # mem 0
        [1, 1, 0, 0, 0, 0],  # mem 1
        [0, 0, 0, 0, 0, 0],  # mem 2
        [0, 0, 0, 0, 0, 0],  # mem 3
        [1, 1, 1, 1, 1, 0],  # mem 4
        [1, 1, 1, 1, 1, 1],  # mem 5
    ],
]
| 21.407407 | 36 | 0.304498 | 229 | 1,156 | 1.524017 | 0.170306 | 0.361032 | 0.386819 | 0.366762 | 0.544413 | 0.510029 | 0.487106 | 0.472779 | 0.441261 | 0.441261 | 0 | 0.27663 | 0.455882 | 1,156 | 53 | 37 | 21.811321 | 0.278219 | 0.25173 | 0 | 0.55 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
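The comments in `cosa_constants.py` document fixed shapes: `_A` maps j=7 problem dimensions to v=3 variables, `_B` maps v=3 variables to i=6 rank levels, and `_Z` is a per-variable i=6 × i'=6 uneven-mapping mask. A sketch of a shape check one might run over such constants (the `check_shapes` helper and the zero-filled stand-ins are invented for illustration, not part of CoSA):

```python
def check_shapes(A, B, Z):
    """Validate the shapes documented in the comments of cosa_constants.py."""
    assert len(A) == 7 and all(len(row) == 3 for row in A)   # j=7 x v=3
    assert len(B) == 3 and all(len(row) == 6 for row in B)   # v=3 x i=6
    assert len(Z) == 3 and all(                              # v=3 x i=6 x i'=6
        len(plane) == 6 and all(len(row) == 6 for row in plane)
        for plane in Z
    )
    return True


# Zero-filled stand-ins with the same shapes as _A, _B, _Z above.
A = [[0] * 3 for _ in range(7)]
B = [[0] * 6 for _ in range(3)]
Z = [[[0] * 6 for _ in range(6)] for _ in range(3)]
print(check_shapes(A, B, Z))  # True
```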
b57a3b8bb757613e8cd40079603a5ea0de8841ec | 208 | py | Python | asgi_correlation_id/__init__.py | lakshaythareja/asgi-correlation-id | c8febfdc04191087fb96b8d1843ad80e6f5cd080 | [
"BSD-4-Clause"
] | null | null | null | asgi_correlation_id/__init__.py | lakshaythareja/asgi-correlation-id | c8febfdc04191087fb96b8d1843ad80e6f5cd080 | [
"BSD-4-Clause"
] | null | null | null | asgi_correlation_id/__init__.py | lakshaythareja/asgi-correlation-id | c8febfdc04191087fb96b8d1843ad80e6f5cd080 | [
"BSD-4-Clause"
] | null | null | null | from asgi_correlation_id.log_filters import correlation_id_filter
from asgi_correlation_id.middleware import CorrelationIdMiddleware
__all__ = (
    'CorrelationIdMiddleware',
    'correlation_id_filter',
)
| 26 | 66 | 0.836538 | 22 | 208 | 7.318182 | 0.5 | 0.322981 | 0.236025 | 0.26087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.110577 | 208 | 7 | 67 | 29.714286 | 0.87027 | 0 | 0 | 0 | 0 | 0 | 0.211538 | 0.211538 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
a92c3b2733eeda494ec4c11450a70b5759d0e42f | 146 | py | Python | python/tom/hmm/__init__.py | m7thon/tom | fde3e934083c8c91256350b00e4128e48b351a8c | [
"MIT"
] | 7 | 2017-10-04T05:41:46.000Z | 2021-07-18T01:31:36.000Z | python/tom/hmm/__init__.py | m7thon/tom | fde3e934083c8c91256350b00e4128e48b351a8c | [
"MIT"
] | 1 | 2021-05-16T16:16:55.000Z | 2021-05-20T09:21:30.000Z | python/tom/hmm/__init__.py | m7thon/tom | fde3e934083c8c91256350b00e4128e48b351a8c | [
"MIT"
] | 1 | 2017-10-04T05:41:59.000Z | 2017-10-04T05:41:59.000Z | from .._tomlib import Hmm, Policy
from ._hmm import random_HMM, convert_HMM_to_OOM, learn_EM
#try:
# from ._hmm import ghmm
#except:
# pass
| 20.857143 | 58 | 0.732877 | 23 | 146 | 4.304348 | 0.652174 | 0.141414 | 0.262626 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.178082 | 146 | 6 | 59 | 24.333333 | 0.825 | 0.308219 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a93b1738e39802cd55ad28aad02d09302286e47e | 486 | py | Python | pdf_struct/core/__init__.py | koreyou/pdf-struct | 2a1549f21e63c9291f4daf62d7832b87ab20f7fd | [
"Apache-2.0"
] | 10 | 2021-11-08T14:40:23.000Z | 2022-03-29T13:57:33.000Z | pdf_struct/core/__init__.py | koreyou/pdf-struct | 2a1549f21e63c9291f4daf62d7832b87ab20f7fd | [
"Apache-2.0"
] | 1 | 2022-03-04T11:48:16.000Z | 2022-03-09T15:43:36.000Z | pdf_struct/core/__init__.py | koreyou/pdf-struct | 2a1549f21e63c9291f4daf62d7832b87ab20f7fd | [
"Apache-2.0"
] | 4 | 2021-12-25T22:12:06.000Z | 2022-03-13T17:44:10.000Z | from pdf_struct.core import clustering
from pdf_struct.core import data_statistics
from pdf_struct.core import document
from pdf_struct.core import download
from pdf_struct.core import evaluation
from pdf_struct.core import export
from pdf_struct.core import feature_extractor
from pdf_struct.core import predictor
from pdf_struct.core import preprocessing
from pdf_struct.core import structure_evaluation
from pdf_struct.core import transition_labels
from pdf_struct.core import utils
| 37.384615 | 48 | 0.876543 | 76 | 486 | 5.394737 | 0.263158 | 0.204878 | 0.380488 | 0.497561 | 0.721951 | 0.160976 | 0 | 0 | 0 | 0 | 0 | 0 | 0.098765 | 486 | 12 | 49 | 40.5 | 0.936073 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8d242655e2ddceb02c06b2d36050828ad7b4852f | 31 | py | Python | livecoding/helper.py | akafliegdarmstadt/AkaPythonTutorial | ab05b5e0f00c02526d280d0c567d5192890dc399 | [
"MIT"
] | null | null | null | livecoding/helper.py | akafliegdarmstadt/AkaPythonTutorial | ab05b5e0f00c02526d280d0c567d5192890dc399 | [
"MIT"
] | 1 | 2018-10-15T19:46:45.000Z | 2018-10-15T19:46:45.000Z | livecoding/helper.py | akafliegdarmstadt/AkaPythonTutorial | ab05b5e0f00c02526d280d0c567d5192890dc399 | [
"MIT"
] | 1 | 2018-10-10T18:39:56.000Z | 2018-10-10T18:39:56.000Z | def hallo():
print('hallo') | 15.5 | 18 | 0.580645 | 4 | 31 | 4.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.193548 | 31 | 2 | 18 | 15.5 | 0.72 | 0 | 0 | 0 | 0 | 0 | 0.15625 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
8d50947ee79bd00367f5849abd72ed9b154daf6b | 16,911 | py | Python | tests/api/online/settings.py | happz/settlers | 961a6d2121ab6e89106f17017f026c60c77f16f9 | [
"MIT"
] | 1 | 2018-11-16T09:41:31.000Z | 2018-11-16T09:41:31.000Z | tests/api/online/settings.py | happz/settlers | 961a6d2121ab6e89106f17017f026c60c77f16f9 | [
"MIT"
] | 15 | 2015-01-07T14:17:36.000Z | 2019-04-29T13:26:43.000Z | tests/api/online/settings.py | happz/settlers | 961a6d2121ab6e89106f17017f026c60c77f16f9 | [
"MIT"
] | null | null | null | """
"""
import random
import unittest

from tests.online import TestCase
from tests import cmp_json_dicts
class Email(TestCase):
    def test_empty_submit(self):
        reply = self.query('/settings/email')
        cmp_json_dicts(reply, {
            'status': 400,
            'form': {
                'updated_fields': None,
                'invalid_field': 'email',
                'orig_fields': None
            },
            'error': {
                'message': 'Please enter a value',
                'params': {}
            }
        })

    def test_invalid_param(self):
        i = random.randint(-20, 20)
        reply = self.query('/settings/email', data={'__email': '%s' % i})
        cmp_json_dicts(reply, {
            'status': 400,
            'form': {
                'updated_fields': None,
                'invalid_field': None,
                'orig_fields': {
                    '__email': '%s' % i
                }
            },
            'error': {
                'message': 'The input field \'__email\' was not expected.',
                'params': {}
            }
        })

    def test_empty_action(self):
        reply = self.query('/settings/email', data={'email': ''})
        cmp_json_dicts(reply, {
            'status': 400,
            'form': {
                'updated_fields': None,
                'invalid_field': 'email',
                'orig_fields': {
                    'email': ''
                }
            },
            'error': {
                'message': 'Please enter a value',
                'params': {}
            }
        })

    def test_malformed_string(self):
        reply = self.query('/settings/email', data={'email': 'foobar'})
        cmp_json_dicts(reply, {
            'status': 400,
            'form': {
                'updated_fields': None,
                'invalid_field': 'email',
                'orig_fields': {
                    'email': 'foobar'
                }
            },
            'error': {
                'message': 'An email address must contain a single @',
                'params': {}
            }
        })

    def test_malformed_float(self):
        reply = self.query('/settings/email', data={'email': 3.14})
        cmp_json_dicts(reply, {
            'status': 400,
            'form': {
                'updated_fields': None,
                'invalid_field': 'email',
                'orig_fields': {
                    'email': '3.14'
                }
            },
            'error': {
                'message': 'An email address must contain a single @',
                'params': {}
            }
        })

    def test_proper(self):
        reply = self.query('/settings/email', data={'email': self.config.get('online', 'email')})
        cmp_json_dicts(reply, {
            'status': 200,
            'form': {
                'updated_fields': {
                    'email': self.config.get('online', 'email')
                },
                'invalid_field': None,
                'orig_fields': None
            }
        })
class AfterPassTurn(TestCase):
    def test_empty_submit(self):
        reply = self.query('/settings/after_pass_turn')
        cmp_json_dicts(reply, {
            'status': 400,
            'form': {
                'updated_fields': None,
                'invalid_field': 'action',
                'orig_fields': None
            },
            'error': {
                'message': 'Please enter a value',
                'params': {}
            }
        })

    def test_invalid_param(self):
        i = random.randint(-20, 20)
        reply = self.query('/settings/after_pass_turn', data={'__action': i})
        cmp_json_dicts(reply, {
            'status': 400,
            'form': {
                'updated_fields': None,
                'invalid_field': None,
                'orig_fields': {
                    '__action': '%i' % i
                }
            },
            'error': {
                'message': 'The input field \'__action\' was not expected.',
                'params': {}
            }
        })

    def test_empty_action(self):
        reply = self.query('/settings/after_pass_turn', data={'action': ''})
        cmp_json_dicts(reply, {
            'status': 400,
            'form': {
                'updated_fields': None,
                'invalid_field': 'action',
                'orig_fields': {
                    'action': ''
                }
            },
            'error': {
                'message': 'Please enter a value',
                'params': {}
            }
        })

    def test_malformed_string(self):
        reply = self.query('/settings/after_pass_turn', data={'action': 'foobar'})
        cmp_json_dicts(reply, {
            'status': 400,
            'form': {
                'updated_fields': None,
                'invalid_field': 'action',
                'orig_fields': {
                    'action': 'foobar'
                }
            },
            'error': {
                'message': 'Please enter an integer value',
                'params': {}
            }
        })

    def test_malformed_float(self):
        reply = self.query('/settings/after_pass_turn', data={'action': 3.14})
        cmp_json_dicts(reply, {
            'status': 400,
            'form': {
                'updated_fields': None,
                'invalid_field': 'action',
                'orig_fields': {
                    'action': '3.14'
                }
            },
            'error': {
                'message': 'Please enter an integer value',
                'params': {}
            }
        })

    def test_malformed_oor(self):
        i = random.randint(-20, -5)
        reply = self.query('/settings/after_pass_turn', data={'action': i})
        cmp_json_dicts(reply, {
            'status': 400,
            'form': {
                'updated_fields': None,
                'invalid_field': 'action',
                'orig_fields': {
                    'action': '%i' % i
                }
            },
            'error': {
                'message': 'Value must be one of: 0; 1; 2 (not %i)' % i,
                'params': {}
            }
        })

    def test_random(self):
        i = random.randint(0, 2)
        reply = self.query('/settings/after_pass_turn', data={'action': i})
        cmp_json_dicts(reply, {
            'status': 200,
            'form': {
                'updated_fields': {
                    'action': i
                },
                'invalid_field': None,
                'orig_fields': None
            }
        })
class PerTablePage(TestCase):
    VALID_INPUTS = range(10, 61, 10)

    def get_rand_input(self):
        i = random.randint(0, len(self.VALID_INPUTS) - 1)
        return self.VALID_INPUTS[i]

    def test_empty_submit(self):
        reply = self.query('/settings/per_page')
        cmp_json_dicts(reply, {
            'status': 400,
            'form': {
                'updated_fields': None,
                'invalid_field': 'per_page',
                'orig_fields': None
            },
            'error': {
                'message': 'Please enter a value',
                'params': {}
            }
        })

    def test_invalid_param(self):
        i = random.randint(-20, 20)
        reply = self.query('/settings/per_page', data={'__per_page': i})
        cmp_json_dicts(reply, {
            'status': 400,
            'form': {
                'updated_fields': None,
                'invalid_field': None,
                'orig_fields': {
                    '__per_page': '%i' % i
                }
            },
            'error': {
                'message': 'The input field \'__per_page\' was not expected.',
                'params': {}
            }
        })

    def test_empty(self):
        reply = self.query('/settings/per_page', data={'per_page': ''})
        cmp_json_dicts(reply, {
            'status': 400,
            'form': {
                'updated_fields': None,
                'invalid_field': 'per_page',
                'orig_fields': {
                    'per_page': ''
                }
            },
            'error': {
                'message': 'Please enter a value',
                'params': {}
            }
        })

    def test_malformed_int(self):
        while True:
            i = random.randint(-100, 100)
            if i in self.VALID_INPUTS:
                continue
            break
        reply = self.query('/settings/per_page', data={'per_page': i})
        cmp_json_dicts(reply, {
            'status': 400,
            'form': {
                'updated_fields': None,
                'invalid_field': 'per_page',
                'orig_fields': {
                    'per_page': '%i' % i
                }
            },
            'error': {
                'message': 'Value must be one of: 10; 20; 30; 40; 50; 60 (not %i)' % i,
                'params': {}
            }
        })

    def test_malformed_string(self):
        reply = self.query('/settings/per_page', data={'per_page': 'foobar'})
        cmp_json_dicts(reply, {
            'status': 400,
            'form': {
                'updated_fields': None,
                'invalid_field': 'per_page',
                'orig_fields': {
                    'per_page': 'foobar'
                }
            },
            'error': {
                'message': 'Please enter an integer value',
                'params': {}
            }
        })

    def test_random(self):
        i = self.get_rand_input()
        reply = self.query('/settings/per_page', data={'per_page': i})
        cmp_json_dicts(reply, {
            'status': 200,
            'form': {
                'updated_fields': {
                    'per_page': i
                },
                'invalid_field': None,
                'orig_fields': None
            }
        })
class Sound(TestCase):
    def test_empty_submit(self):
        reply = self.query('/settings/sound')
        cmp_json_dicts(reply, {
            'status': 400,
            'form': {
                'updated_fields': None,
                'invalid_field': 'sound',
                'orig_fields': None
            },
            'error': {
                'message': 'Please enter a value',
                'params': {}
            }
        })

    def test_invalid_param(self):
        i = random.randint(-20, 20)
        reply = self.query('/settings/sound', data={'__sound': i})
        cmp_json_dicts(reply, {
            'status': 400,
            'form': {
                'updated_fields': None,
                'invalid_field': None,
                'orig_fields': {
                    '__sound': '%i' % i
                }
            },
            'error': {
                'message': 'The input field \'__sound\' was not expected.',
                'params': {}
            }
        })

    def test_empty_skin(self):
        reply = self.query('/settings/sound', data={'sound': ''})
        cmp_json_dicts(reply, {
            'status': 400,
            'form': {
                'updated_fields': None,
                'invalid_field': 'sound',
                'orig_fields': {
                    'sound': ''
                }
            },
            'error': {
                'message': 'Please enter a value',
                'params': {}
            }
        })

    def test_malformed_int(self):
        i = random.randint(-20, -10)
        reply = self.query('/settings/sound', data={'sound': i})
        cmp_json_dicts(reply, {
            'status': 400,
            'form': {
                'updated_fields': None,
                'invalid_field': 'sound',
                'orig_fields': {
                    'sound': '%i' % i
                }
            },
            'error': {
                'message': 'Value must be one of: 0; 1 (not %i)' % i,
                'params': {}
            }
        })

    def test_random(self):
        i = random.randint(0, 1)
        reply = self.query('/settings/sound', data={'sound': i})
        cmp_json_dicts(reply, {
            'status': 200,
            'form': {
                'updated_fields': {
                    'sound': i
                },
                'invalid_field': None,
                'orig_fields': None
            }
        })
class MyColor(TestCase):
    VALID_KINDS = ['settlers']
    VALID_COLORS = ['pink', 'purple', 'dark_green', 'black', 'brown', 'light_blue', 'orange', 'green', 'dark_blue', 'red']

    def get_rand_kind(self):
        return self.VALID_KINDS[0]

    def get_rand_color(self):
        i = random.randint(0, len(self.VALID_COLORS) - 1)
        return self.VALID_COLORS[i]

    def test_empty_submit(self):
        reply = self.query('/settings/color')
        cmp_json_dicts(reply, {
            'status': 400,
            'form': {
                'updated_fields': None,
                'invalid_field': 'color',
                'orig_fields': None
            },
            'error': {
                'message': 'Please enter a value',
                'params': {}
            }
        })

    def test_invalid_param_kind(self):
        i = random.randint(-20, 20)
        reply = self.query('/settings/color', data={'__kind': i})
        cmp_json_dicts(reply, {
            'status': 400,
            'form': {
                'updated_fields': None,
                'invalid_field': None,
                'orig_fields': {
                    '__kind': '%i' % i
                }
            },
            'error': {
                'message': 'The input field \'__kind\' was not expected.',
                'params': {}
            }
        })

    def test_empty_kind(self):
        reply = self.query('/settings/color', data={'kind': ''})
        cmp_json_dicts(reply, {
            'status': 400,
            'form': {
                'updated_fields': None,
                'invalid_field': 'color',
                'orig_fields': {
                    'kind': ''
                }
            },
            'error': {
                'message': 'Please enter a value',
                'params': {}
            }
        })

    def test_empty(self):
        reply = self.query('/settings/color', data={'kind': '', 'color': ''})
        cmp_json_dicts(reply, {
            'status': 400,
            'form': {
                'updated_fields': None,
                'invalid_field': 'color',
                'orig_fields': {
                    'color': '',
                    'kind': ''
                }
            },
            'error': {
                'message': 'Please enter a value',
                'params': {}
            }
        })

    def test_malformed_kind_int(self):
        i = random.randint(-20, -10)
        color = self.get_rand_color()
        reply = self.query('/settings/color', data={'kind': i, 'color': color})
        cmp_json_dicts(reply, {
            'status': 400,
            'form': {
                'updated_fields': None,
                'invalid_field': 'kind',
                'orig_fields': {
                    'kind': '%i' % i,
                    'color': color
                }
            },
            'error': {
                'message': 'Value must be one of: settlers (not u\'%i\')' % i,
                'params': {}
            }
        })

    def test_malformed_kind_string(self):
        color = self.get_rand_color()
        reply = self.query('/settings/color', data={'kind': 'foobar', 'color': color})
        cmp_json_dicts(reply, {
            'status': 400,
            'form': {
                'updated_fields': None,
                'invalid_field': 'kind',
                'orig_fields': {
                    'kind': 'foobar',
                    'color': color
                }
            },
            'error': {
                'message': 'Value must be one of: settlers (not u\'foobar\')',
                'params': {}
            }
        })

    def test_malformed_color_int(self):
        return  # test disabled: this early return skips the body below

        kind = self.get_rand_kind()
        color = random.randint(-20, -10)
        reply = self.query('/settings/color', data={'kind': kind, 'color': color})
        cmp_json_dicts(reply, {
            'status': 400,
            'form': {
                'updated_fields': None,
                'invalid_field': 'color',
                'orig_fields': {
                    'kind': kind,
                    'color': '%i' % color
                }
            },
            'error': {
                'message': 'Value must be one of: pink; purple; dark_green; black; brown; light_blue; orange; green; dark_blue; red (not u\'%i\')' % color,
                'params': {}
            }
        })

    def test_malformed_color_string(self):
        return  # test disabled: this early return skips the body below

        kind = self.get_rand_kind()
        color = self.get_rand_color()
        reply = self.query('/settings/color', data={'kind': kind, 'color': 'foobar'})
        cmp_json_dicts(reply, {
            'status': 400,
            'form': {
                'updated_fields': None,
                'invalid_field': 'color',
                'orig_fields': {
                    'kind': kind,
                    'color': 'foobar'
                }
            },
            'error': {
                'message': 'Value must be one of: pink; purple; dark_green; black; brown; light_blue; orange; green; dark_blue; red (not u\'foobar\')',
                'params': {}
            }
        })
class Board(TestCase):
    def test_empty_submit(self):
        reply = self.query('/settings/board_skin')
        cmp_json_dicts(reply, {
            'status': 400,
            'form': {
                'updated_fields': None,
                'invalid_field': 'skin',
                'orig_fields': None
            },
            'error': {
                'message': 'Please enter a value',
                'params': {}
            }
        })

    def test_invalid_param(self):
        i = random.randint(-20, 20)
        reply = self.query('/settings/board_skin', data={'__skin': i})
        cmp_json_dicts(reply, {
            'status': 400,
            'form': {
                'updated_fields': None,
                'invalid_field': None,
                'orig_fields': {
                    '__skin': '%i' % i
                }
            },
            'error': {
                'message': 'The input field \'__skin\' was not expected.',
                'params': {}
            }
        })

    def test_empty_skin(self):
        reply = self.query('/settings/board_skin', data={'skin': ''})
        cmp_json_dicts(reply, {
            'status': 400,
            'form': {
                'updated_fields': None,
                'invalid_field': 'skin',
                'orig_fields': {
                    'skin': ''
                }
            },
            'error': {
                'message': 'Please enter a value',
                'params': {}
            }
        })

    def test_malformed_int(self):
        i = random.randint(-20, 20)
        reply = self.query('/settings/board_skin', data={'skin': i})
        cmp_json_dicts(reply, {
            'status': 400,
            'form': {
                'updated_fields': None,
                'invalid_field': 'skin',
                'orig_fields': {
                    'skin': '%i' % i
                }
            },
            'error': {
                'message': 'Value must be one of: real; simple (not u\'%i\')' % i,
                'params': {}
            }
        })

    def test_random(self):
        skins = ['simple', 'real']
        i = random.randint(0, 1)
        skin = skins[i]
        reply = self.query('/settings/board_skin', data={'skin': skin})
        cmp_json_dicts(reply, {
            'status': 200,
            'form': {
                'updated_fields': {
                    'skin': skin
                },
                'invalid_field': None,
                'orig_fields': None
            }
        })
if __name__ == '__main__':
    unittest.main()
| 24.089744 | 148 | 0.486252 | 1,711 | 16,911 | 4.584454 | 0.072472 | 0.054819 | 0.058134 | 0.103774 | 0.920066 | 0.90668 | 0.889087 | 0.849057 | 0.787098 | 0.756119 | 0 | 0.018646 | 0.346697 | 16,911 | 701 | 149 | 24.124108 | 0.691347 | 0 | 0 | 0.631912 | 0 | 0.004739 | 0.28863 | 0.010353 | 0 | 0 | 0 | 0 | 0 | 1 | 0.063191 | false | 0.012638 | 0.004739 | 0.00158 | 0.090047 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
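Every test in the settings file above compares a JSON reply against an expected dict through the project's `cmp_json_dicts` helper, whose implementation is not included in this chunk. A plausible minimal sketch of such a helper — an assumption for illustration, not the project's actual code — is a recursive check that every expected key/value appears in the actual reply:

```python
def cmp_json_dicts(actual, expected):
    """Recursively assert that `actual` matches `expected` (illustrative sketch)."""
    if isinstance(expected, dict):
        assert isinstance(actual, dict), 'expected a dict, got %r' % (actual,)
        for key, value in expected.items():
            assert key in actual, 'missing key: %s' % key
            cmp_json_dicts(actual[key], value)
    else:
        assert actual == expected, '%r != %r' % (actual, expected)


# Matching structures pass silently, as in the tests above.
cmp_json_dicts({'status': 400, 'form': {'invalid_field': 'email'}},
               {'status': 400, 'form': {'invalid_field': 'email'}})
```

A subset-style check like this lets the tests pin down only the fields they care about while tolerating extra keys in the server reply.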
570113d9e2b2057607af6cf330db571e53552eb1 | 224 | py | Python | confo/Exceptions/EtcdExceptions.py | sambe-consulting/confo | 3def0c151a45aa14849710da0daa678458d24d91 | [
"Apache-2.0"
] | 1 | 2021-03-21T20:55:12.000Z | 2021-03-21T20:55:12.000Z | confo/Exceptions/EtcdExceptions.py | sambe-consulting/confo | 3def0c151a45aa14849710da0daa678458d24d91 | [
"Apache-2.0"
] | 6 | 2021-03-09T01:13:13.000Z | 2021-03-20T05:57:59.000Z | confo/Exceptions/EtcdExceptions.py | sambe-consulting/confo | 3def0c151a45aa14849710da0daa678458d24d91 | [
"Apache-2.0"
] | 1 | 2021-08-24T07:52:35.000Z | 2021-08-24T07:52:35.000Z | class Etcd3HostNotFoundException(Exception):
    pass


class Etcd3PortNotFoundException(Exception):
    pass


class ConfigurationNotSetException(Exception):
    pass


class UnknownFormatInMainNameSpace(Exception):
    pass
| 17.230769 | 46 | 0.803571 | 16 | 224 | 11.25 | 0.4375 | 0.288889 | 0.3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010417 | 0.142857 | 224 | 12 | 47 | 18.666667 | 0.927083 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 0 | 1 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
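The confo exception classes above are empty marker types: callers distinguish failure modes purely by exception type. A hedged usage sketch — the `get_host` helper is invented for illustration and is not part of confo:

```python
class Etcd3HostNotFoundException(Exception):
    pass  # mirror of the marker class defined above


def get_host(config):
    """Illustrative helper: raise the marker exception when no host is configured."""
    try:
        return config['host']
    except KeyError:
        raise Etcd3HostNotFoundException('no etcd3 host configured')


try:
    get_host({})
except Etcd3HostNotFoundException as exc:
    print('caught: %s' % exc)  # caught: no etcd3 host configured
```

Empty subclasses like these carry no extra state; their value is that `except Etcd3HostNotFoundException:` reads more precisely than catching a generic `Exception`.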
5712886cfb5a86b05a786ff611bdf6a0db859daa | 33 | py | Python | django_user_agents/tests/__init__.py | claymcenter/django-user_agents | e2a0d92e371446c151b0830a911f44f8253c9376 | [
"MIT"
] | null | null | null | django_user_agents/tests/__init__.py | claymcenter/django-user_agents | e2a0d92e371446c151b0830a911f44f8253c9376 | [
"MIT"
] | null | null | null | django_user_agents/tests/__init__.py | claymcenter/django-user_agents | e2a0d92e371446c151b0830a911f44f8253c9376 | [
"MIT"
] | 1 | 2020-10-21T09:39:35.000Z | 2020-10-21T09:39:35.000Z | from .tests import MiddlewareTest | 33 | 33 | 0.878788 | 4 | 33 | 7.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 33 | 1 | 33 | 33 | 0.966667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5740d83f137d1d9e4a0a9cb102eae6bf26695008 | 170 | py | Python | todo_app/admin.py | SpaceWalker0318/Todo-app-backend | b885a0c87e07584b689b2d43d923ce4233fc7738 | [
"MIT"
] | null | null | null | todo_app/admin.py | SpaceWalker0318/Todo-app-backend | b885a0c87e07584b689b2d43d923ce4233fc7738 | [
"MIT"
] | null | null | null | todo_app/admin.py | SpaceWalker0318/Todo-app-backend | b885a0c87e07584b689b2d43d923ce4233fc7738 | [
"MIT"
] | null | null | null | from django.contrib import admin
# Register your models here.
from todo_app import models
admin.site.register(models.UserProfile)
admin.site.register(models.TodoItem)
| 18.888889 | 39 | 0.817647 | 24 | 170 | 5.75 | 0.583333 | 0.130435 | 0.246377 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105882 | 170 | 8 | 40 | 21.25 | 0.907895 | 0.152941 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
5744319e78c4f5736d74739e3e78fcd00cdfb614 | 7,274 | py | Python | tests/unit/test_upstream.py | Brickstertwo/git-commands | 87fa9a6573dd426eecece098fbadc3f5550c8976 | [
"MIT"
] | 1 | 2018-10-17T11:09:32.000Z | 2018-10-17T11:09:32.000Z | tests/unit/test_upstream.py | Brickstertwo/git-commands | 87fa9a6573dd426eecece098fbadc3f5550c8976 | [
"MIT"
] | 122 | 2015-01-06T19:10:23.000Z | 2017-09-26T14:22:11.000Z | tests/unit/test_upstream.py | Brickster/git-commands | 87fa9a6573dd426eecece098fbadc3f5550c8976 | [
"MIT"
] | null | null | null | import unittest
import mock

from . import testutils
from ..layers import GitUpstream
from bin.commands import upstream


@mock.patch('bin.commands.utils.git.is_empty_repository', return_value=False)
class TestUpstream(unittest.TestCase):

    layer = GitUpstream

    @mock.patch('bin.commands.utils.git.current_branch', return_value='the-branch')
    @mock.patch('bin.commands.utils.execute.stdout')
    def test_upstream(self, mock_stdout, mock_currentbranch, mock_isemptyrepository):
        # setup
        expected_upstream = "the-upstream"
        upstream_info = "refs/heads/{}\n".format(expected_upstream)
        mock_stdout.return_value = upstream_info

        # when
        actual_upstream = upstream.upstream()

        # then
        self.assertEqual(actual_upstream, expected_upstream)
        mock_isemptyrepository.assert_called_once_with()
        mock_currentbranch.assert_called_once_with()
        mock_stdout.assert_called_once_with('git config --local branch.the-branch.merge')

    @mock.patch('bin.commands.utils.git.current_branch', return_value='the-branch')
    @mock.patch('bin.commands.utils.execute.stdout')
    def test_upstream_includeRemote_noUpstream(self, mock_stdout, mock_currentbranch, mock_isemptyrepository):
        # setup
        mock_stdout.return_value = ''

        # when
        actual_upstream = upstream.upstream()

        # then
        self.assertEqual(actual_upstream, '')
        mock_currentbranch.assert_called_once_with()
        mock_stdout.assert_called_once_with('git config --local branch.the-branch.merge')

    def test_upstream_repositoryIsEmpty(self, mock_isemptyrepository):
        # setup
        mock_isemptyrepository.return_value = True

        # when
        upstream_result = upstream.upstream()

        # then
        self.assertEqual(upstream_result, None)
        mock_isemptyrepository.assert_called_once_with()

    @mock.patch('bin.commands.utils.git.current_branch', return_value='the-branch')
    @mock.patch('bin.commands.utils.git.is_valid_reference', return_value=True)
    @mock.patch('bin.commands.utils.execute.stdout')
    def test_upstream_branchIncluded(self, mock_stdout, mock_isvalidreference, mock_currentbranch, mock_isemptyrepository):
        # setup
        branch_name = 'the-branch'
        expected_upstream = "the-upstream"
        upstream_info = "refs/heads/{}\n".format(expected_upstream)
        mock_stdout.return_value = upstream_info

        # when
        actual_upstream = upstream.upstream(branch=branch_name)

        # then
        self.assertEqual(actual_upstream, expected_upstream)
        mock_currentbranch.assert_not_called()
        mock_isvalidreference.assert_called_once_with(branch_name)
        mock_stdout.assert_called_once_with('git config --local branch.the-branch.merge')

    @mock.patch('bin.commands.utils.git.is_valid_reference', return_value=False)
    @mock.patch('bin.commands.utils.messages.error', side_effect=testutils.and_exit)
    def test_upstream_notAValidReference(self, mock_error, mock_isvalidreference, mock_isemptyrepository):
        # when
        try:
            upstream.upstream(branch='bad-branch')
            self.fail('expected to exit but did not')  # pragma: no cover
        except SystemExit:
            pass

        mock_isvalidreference.assert_called_once_with('bad-branch')
        mock_error.assert_called_once_with("'bad-branch' is not a valid branch")

    @mock.patch('bin.commands.utils.git.current_branch', return_value='the-branch')
    @mock.patch('bin.commands.utils.execute.stdout')
    @mock.patch('bin.commands.utils.execute.check_output', return_value='the-remote')
    def test_upstream_includeRemote_always(self, mock_checkoutput, mock_stdout, mock_currentbranch, mock_isemptyrepository):
        # setup
        expected_upstream = "the-upstream"
        upstream_info = "refs/heads/{}\n".format(expected_upstream)
        mock_stdout.return_value = upstream_info

        # when
        actual_upstream = upstream.upstream(include_remote=upstream.IncludeRemote.ALWAYS)

        # then
        self.assertEqual(actual_upstream, 'the-remote/' + expected_upstream)
        mock_isemptyrepository.assert_called_once()
        mock_currentbranch.assert_called_once()
        mock_stdout.assert_called_once_with('git config --local branch.the-branch.merge')
        mock_checkoutput.assert_called_once_with('git config --local branch.the-branch.remote')
@mock.patch('bin.commands.utils.git.current_branch', return_value='the-branch')
@mock.patch('bin.commands.utils.execute.stdout')
def test_upstream_includeRemote_never(self, mock_stdout, mock_currentbranch, mock_isemptyrepository):
# setup
expected_upstream = "the-upstream"
upstream_info = "refs/heads/{}\n".format(expected_upstream)
mock_stdout.return_value = upstream_info
# when
actual_upstream = upstream.upstream(include_remote=upstream.IncludeRemote.NEVER)
# then
self.assertEqual(actual_upstream, expected_upstream)
mock_isemptyrepository.assert_called_once()
mock_currentbranch.assert_called_once()
mock_stdout.assert_called_once_with('git config --local branch.the-branch.merge')
@mock.patch('bin.commands.utils.git.current_branch', return_value='the-branch')
@mock.patch('bin.commands.utils.execute.stdout')
@mock.patch('bin.commands.utils.execute.check_output', return_value='the-remote')
def test_upstream_includeRemote_noneLocal_notLocal(self, mock_checkoutput, mock_stdout, mock_currentbranch, mock_isemptyrepository):
# setup
expected_upstream = "the-upstream"
upstream_info = "refs/heads/{}\n".format(expected_upstream)
mock_stdout.return_value = upstream_info
# when
actual_upstream = upstream.upstream(include_remote=upstream.IncludeRemote.NONE_LOCAL)
# then
self.assertEqual(actual_upstream, 'the-remote/' + expected_upstream)
mock_isemptyrepository.assert_called_once()
mock_currentbranch.assert_called_once()
mock_stdout.assert_called_once_with('git config --local branch.the-branch.merge')
mock_checkoutput.assert_called_once_with('git config --local branch.the-branch.remote')
@mock.patch('bin.commands.utils.git.current_branch', return_value='the-branch')
@mock.patch('bin.commands.utils.execute.stdout')
@mock.patch('bin.commands.utils.execute.check_output', return_value='.')
def test_upstream_includeRemote_noneLocal_isLocal(self, mock_checkoutput, mock_stdout, mock_currentbranch, mock_isemptyrepository):
# setup
expected_upstream = "the-upstream"
upstream_info = "refs/heads/{}\n".format(expected_upstream)
mock_stdout.return_value = upstream_info
# when
actual_upstream = upstream.upstream(include_remote=upstream.IncludeRemote.NONE_LOCAL)
# then
self.assertEqual(actual_upstream, expected_upstream)
mock_isemptyrepository.assert_called_once()
mock_currentbranch.assert_called_once()
mock_stdout.assert_called_once_with('git config --local branch.the-branch.merge')
mock_checkoutput.assert_called_once_with('git config --local branch.the-branch.remote')
| 41.096045 | 136 | 0.726285 | 846 | 7,274 | 5.949173 | 0.109929 | 0.059607 | 0.079475 | 0.083449 | 0.849195 | 0.819591 | 0.796145 | 0.780052 | 0.758196 | 0.758196 | 0 | 0 | 0.173082 | 7,274 | 176 | 137 | 41.329545 | 0.836741 | 0.020484 | 0 | 0.628571 | 0 | 0 | 0.218935 | 0.140462 | 0 | 0 | 0 | 0 | 0.32381 | 1 | 0.085714 | false | 0.009524 | 0.047619 | 0 | 0.152381 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
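The tests above stack several ``@mock.patch`` decorators, and the mock arguments arrive in the reverse order of the decorators (they apply bottom-up). A small self-contained sketch of that behaviour, using a hypothetical ``Target`` class:

```python
from unittest import mock


class Target:
    @staticmethod
    def a():
        return "real a"

    @staticmethod
    def b():
        return "real b"


# Decorators apply bottom-up: the patch closest to the function (b) is
# started first, so it becomes the first mock parameter.
@mock.patch.object(Target, "a", return_value="mock a")
@mock.patch.object(Target, "b", return_value="mock b")
def demo(mock_b, mock_a):
    return Target.a(), Target.b()


print(demo())  # ('mock a', 'mock b')
```

This is why ``test_upstream(self, mock_stdout, mock_currentbranch, mock_isemptyrepository)`` lists its mocks from the innermost method decorator outward, ending with the class-level patch.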
93f48fa5d89c272d91b0677c380d31cabfdfdeb2 | 40 | py | Python | pdip/integrator/connection/types/file/connectors/csv/__init__.py | ahmetcagriakca/pdip | c4c16d5666a740154cabdc6762cd44d98b7bdde8 | [
"MIT"
] | 2 | 2021-12-09T21:07:46.000Z | 2021-12-11T22:18:01.000Z | pdip/connection/file/connectors/csv/__init__.py | fmuyilmaz/pdip | f7e30b0c04d9e85ef46b0b7094fafd3ce18bccab | [
"MIT"
] | null | null | null | pdip/connection/file/connectors/csv/__init__.py | fmuyilmaz/pdip | f7e30b0c04d9e85ef46b0b7094fafd3ce18bccab | [
"MIT"
] | 3 | 2021-11-15T00:47:00.000Z | 2021-12-17T11:35:45.000Z | from .csv_connector import CsvConnector
| 20 | 39 | 0.875 | 5 | 40 | 6.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 40 | 1 | 40 | 40 | 0.944444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
93fa9bb8e9f862e7049ff821692edf030f9269c2 | 159 | py | Python | psi/app/api/__init__.py | lusi1990/betterlifepsi | 8e7f8562967ab1816d8c25db3251c550a357f39c | [
"MIT"
] | 33 | 2018-10-19T03:41:56.000Z | 2022-01-23T16:26:02.000Z | psi/app/api/__init__.py | lusi1990/betterlifepsi | 8e7f8562967ab1816d8c25db3251c550a357f39c | [
"MIT"
] | 318 | 2018-09-23T15:16:54.000Z | 2022-03-31T22:58:55.000Z | psi/app/api/__init__.py | lusi1990/betterlifepsi | 8e7f8562967ab1816d8c25db3251c550a357f39c | [
"MIT"
] | 19 | 2018-10-22T18:04:18.000Z | 2021-12-06T19:49:05.000Z | # encoding=utf-8
from .sales_order import SalesOrderApi
def init_all_apis(api):
api.add_resource(SalesOrderApi, '/api/sales_order/<int:sales_order_id>')
| 22.714286 | 76 | 0.779874 | 24 | 159 | 4.875 | 0.708333 | 0.25641 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006993 | 0.100629 | 159 | 6 | 77 | 26.5 | 0.811189 | 0.08805 | 0 | 0 | 0 | 0 | 0.258741 | 0.258741 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f5227c2fa6368b43ec3f026e100c93b81f005a1c | 250 | py | Python | keys.py | lizKimita/Mshauri-Connect | 24f6f67017eebf5ee1d2e08c9bf249108dee28a2 | [
"MIT"
] | 1 | 2019-06-20T08:23:22.000Z | 2019-06-20T08:23:22.000Z | keys.py | lizKimita/Mshauri-Connect | 24f6f67017eebf5ee1d2e08c9bf249108dee28a2 | [
"MIT"
] | 16 | 2019-06-11T14:55:14.000Z | 2021-09-08T01:02:58.000Z | keys.py | lizKimita/Mshauri-Connect | 24f6f67017eebf5ee1d2e08c9bf249108dee28a2 | [
"MIT"
] | null | null | null | business_shortcode = "174379"  # Lipa na M-Pesa (pay with M-Pesa) shortcode
phone_number = "254740392957"
mpesa_passkey = "bfb279f9aa9bdbcf158e97dd71a467cd2e0c893059b10f78e6b72ada1ed2c919"
consumer_key = "nJKAXNYR4L0Jo3vbBu5C4oWVWuyASWZK"
consumer_secret = "VirFCmLCWpQVOJL4"
| 41.666667 | 82 | 0.86 | 19 | 250 | 11.052632 | 0.894737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.255411 | 0.076 | 250 | 5 | 83 | 50 | 0.65368 | 0.072 | 0 | 0 | 0 | 0 | 0.562771 | 0.415584 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.2 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
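For context, Daraja (M-Pesa) credentials like these are typically combined into an STK-push password as ``base64(shortcode + passkey + timestamp)``. The sketch below only shows that encoding step, not a real API call, and uses a fixed example timestamp rather than the current time:

```python
import base64
import datetime

business_shortcode = "174379"
mpesa_passkey = "bfb279f9aa9bdbcf158e97dd71a467cd2e0c893059b10f78e6b72ada1ed2c919"

# Daraja expects a YYYYMMDDHHMMSS timestamp; a fixed one keeps this repeatable.
timestamp = datetime.datetime(2021, 1, 1, 12, 0, 0).strftime("%Y%m%d%H%M%S")

# STK-push password: base64 of shortcode + passkey + timestamp.
password = base64.b64encode(
    (business_shortcode + mpesa_passkey + timestamp).encode()
).decode()
print(timestamp)  # 20210101120000
```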
f52b4ebc57c5911c5977f8b5074a7be35f7f512d | 43 | py | Python | addons14/base_technical_features/tests/__init__.py | odoochain/addons_oca | 55d456d798aebe16e49b4a6070765f206a8885ca | [
"MIT"
] | 1 | 2021-06-10T14:59:13.000Z | 2021-06-10T14:59:13.000Z | addons14/base_technical_features/tests/__init__.py | odoochain/addons_oca | 55d456d798aebe16e49b4a6070765f206a8885ca | [
"MIT"
] | null | null | null | addons14/base_technical_features/tests/__init__.py | odoochain/addons_oca | 55d456d798aebe16e49b4a6070765f206a8885ca | [
"MIT"
] | 1 | 2021-04-09T09:44:44.000Z | 2021-04-09T09:44:44.000Z | from . import test_base_technical_features
| 21.5 | 42 | 0.883721 | 6 | 43 | 5.833333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.093023 | 43 | 1 | 43 | 43 | 0.897436 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f535cc78f7f056b66b85171c2e054fe6b54c9f11 | 285 | py | Python | tests/test_10_cfunits.py | shoyer/cfgrib | fe11a1b638b1779e51da87eaa30f1f12b2d0911c | [
"Apache-2.0"
] | null | null | null | tests/test_10_cfunits.py | shoyer/cfgrib | fe11a1b638b1779e51da87eaa30f1f12b2d0911c | [
"Apache-2.0"
] | null | null | null | tests/test_10_cfunits.py | shoyer/cfgrib | fe11a1b638b1779e51da87eaa30f1f12b2d0911c | [
"Apache-2.0"
] | null | null | null |
from __future__ import absolute_import, division, print_function, unicode_literals
from cf2cdm import cfunits
def test_are_convertible():
assert cfunits.are_convertible('m', 'm')
assert cfunits.are_convertible('hPa', 'Pa')
assert not cfunits.are_convertible('m', 'Pa')
| 25.909091 | 82 | 0.757895 | 37 | 285 | 5.513514 | 0.540541 | 0.27451 | 0.308824 | 0.264706 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004065 | 0.136842 | 285 | 10 | 83 | 28.5 | 0.825203 | 0 | 0 | 0 | 0 | 0 | 0.035211 | 0 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0.166667 | true | 0 | 0.333333 | 0 | 0.5 | 0.166667 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
f5745f7c2245ec7e9341ca9b8a489b1bd7e8790d | 2,054 | py | Python | dev/Gems/CloudGemMetric/v1/AWS/common-code/AWSCommon/keyparts.py | kostenickj/lumberyard | e881f3023cc1840650eb7b133e605881d1d4330d | [
"AML"
] | null | null | null | dev/Gems/CloudGemMetric/v1/AWS/common-code/AWSCommon/keyparts.py | kostenickj/lumberyard | e881f3023cc1840650eb7b133e605881d1d4330d | [
"AML"
] | null | null | null | dev/Gems/CloudGemMetric/v1/AWS/common-code/AWSCommon/keyparts.py | kostenickj/lumberyard | e881f3023cc1840650eb7b133e605881d1d4330d | [
"AML"
] | null | null | null |
class KeyParts(object):
def __init__(self, key, sep):
self.__key = key
if self.__key.index("/") == 0:
self.__key = self.__key[1:]
self.__parts = self.__key.split(sep)
@property
def sensitivity_level(self):
return self.raw_split(self.key_sensitivity)
@property
def source(self):
return self.raw_split(self.key_source)
@property
def buildid(self):
return self.raw_split(self.key_buildid)
@property
def datetime(self):
return self.raw_split(self.key_datetime)
@property
def year(self):
return int(self.raw_split(self.key_year))
@property
def month(self):
return int(self.raw_split(self.key_month))
@property
def day(self):
return int(self.raw_split(self.key_day))
@property
def hour(self):
return int(self.raw_split(self.key_hour))
@property
def event(self):
return self.raw_split(self.key_event)
@property
def filename(self):
return self.__parts[11]
@property
def schema(self):
return self.raw_split(self.key_schema)
@property
def key_source(self):
return self.__parts[7]
@property
def key_buildid(self):
return self.__parts[8]
@property
def key_year(self):
return self.__parts[3]
@property
def key_month(self):
return self.__parts[4]
@property
def key_day(self):
return self.__parts[5]
@property
def key_hour(self):
return self.__parts[6]
@property
def key_event(self):
return self.__parts[1]
@property
def key_schema(self):
return self.__parts[10]
@property
def key_datetime(self):
return self.__parts[2]
@property
def key_sensitivity(self):
return self.__parts[9]
@property
def path(self):
return self.__key.replace(self.filename, "")
def raw_split(self, value):
return value.split("=")[1]
| 20.54 | 53 | 0.601266 | 259 | 2,054 | 4.490347 | 0.166023 | 0.208083 | 0.216681 | 0.179708 | 0.259673 | 0.259673 | 0.259673 | 0.11006 | 0 | 0 | 0 | 0.010989 | 0.291139 | 2,054 | 100 | 54 | 20.54 | 0.787775 | 0 | 0 | 0.297297 | 0 | 0 | 0.000974 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.324324 | false | 0 | 0 | 0.310811 | 0.648649 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
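The fixed indices in ``KeyParts`` imply a positional key layout. The sketch below reconstructs it with a hypothetical key; all segment values are made up, and only the positions come from the class:

```python
SEP = "/"

# Hypothetical key following the positions KeyParts reads:
# parts[1]=event, [2]=datetime, [3]=year, [4]=month, [5]=day, [6]=hour,
# [7]=source, [8]=buildid, [9]=sensitivity, [10]=schema, [11]=filename.
key = SEP.join([
    "prefix",                    # parts[0] (not read by the class)
    "event=sessionstart",
    "dt=20190102030000",
    "year=2019",
    "month=1",
    "day=2",
    "hour=3",
    "source=cgm",
    "buildid=1.0.0",
    "sensitivity=NoTelemetry",
    "schema=v1",
    "data.parquet",
])
parts = key.split(SEP)


def raw_split(value):
    # Mirrors KeyParts.raw_split: keep the text after the '='.
    return value.split("=")[1]


print(raw_split(parts[1]), int(raw_split(parts[3])), parts[11])
```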
f578b750742d961ce0b0b9703f69ef9ef6507e90 | 7,760 | py | Python | src/network/architecture.py | keyochali/handwritten-text-recognition | b2a26ac47dd6d6e0dfd128d841941db00aece748 | [
"MIT"
] | 2 | 2020-05-11T19:41:11.000Z | 2021-11-08T15:53:45.000Z | src/network/architecture.py | keyochali/handwritten-text-recognition | b2a26ac47dd6d6e0dfd128d841941db00aece748 | [
"MIT"
] | null | null | null | src/network/architecture.py | keyochali/handwritten-text-recognition | b2a26ac47dd6d6e0dfd128d841941db00aece748 | [
"MIT"
] | 1 | 2021-11-06T08:52:24.000Z | 2021-11-06T08:52:24.000Z | """Networks to the Handwritten Text Recognition Model"""
from tensorflow.keras.layers import Input, Conv2D, Bidirectional, LSTM, Dense
from tensorflow.keras.layers import Dropout, BatchNormalization, MaxPooling2D
from tensorflow.keras.layers import Reshape, Activation, LeakyReLU, PReLU
from tensorflow.keras.constraints import MaxNorm
from tensorflow.keras.optimizers import RMSprop
from network.layers import FullGatedConv2D, GatedConv2D
def bluche(input_size, output_size):
"""
    Gated Convolutional Recurrent Neural Network by Bluche et al.
Reference:
Bluche, T., Messina, R.:
Gated convolutional recurrent neural networks for multilingual handwriting recognition.
In: Document Analysis and Recognition (ICDAR), 2017
14th IAPR International Conference on, vol. 1, pp. 646–651, 2017.
URL: https://ieeexplore.ieee.org/document/8270042
Moysset, B. and Messina, R.:
Are 2D-LSTM really dead for offline text recognition?
In: International Journal on Document Analysis and Recognition (IJDAR)
Springer Science and Business Media LLC
URL: http://dx.doi.org/10.1007/s10032-019-00325-0
"""
input_data = Input(name="input", shape=input_size)
cnn = Reshape((input_size[0] // 2, input_size[1] // 2, input_size[2] * 4))(input_data)
cnn = Conv2D(filters=8, kernel_size=(3,3), strides=(1,1), padding="same")(cnn)
cnn = Activation(activation="tanh")(cnn)
cnn = Dropout(rate=0.5)(cnn)
cnn = Conv2D(filters=16, kernel_size=(2,4), strides=(2,4), padding="same")(cnn)
cnn = Activation(activation="tanh")(cnn)
cnn = Dropout(rate=0.5)(cnn)
cnn = GatedConv2D(filters=16, kernel_size=(3,3), strides=(1,1), padding="same")(cnn)
cnn = Conv2D(filters=32, kernel_size=(3,3), strides=(1,1), padding="same")(cnn)
cnn = Activation(activation="tanh")(cnn)
cnn = Dropout(rate=0.5)(cnn)
cnn = GatedConv2D(filters=32, kernel_size=(3,3), strides=(1,1), padding="same")(cnn)
cnn = Conv2D(filters=64, kernel_size=(2,4), strides=(2,4), padding="same")(cnn)
cnn = Activation(activation="tanh")(cnn)
cnn = Dropout(rate=0.5)(cnn)
cnn = GatedConv2D(filters=64, kernel_size=(3,3), strides=(1,1), padding="same")(cnn)
cnn = Conv2D(filters=128, kernel_size=(3,3), strides=(1,1), padding="same")(cnn)
cnn = Activation(activation="tanh")(cnn)
cnn = Dropout(rate=0.5)(cnn)
cnn = MaxPooling2D(pool_size=(1,4), strides=(1,4), padding="valid")(cnn)
shape = cnn.get_shape()
blstm = Reshape((shape[1], shape[2] * shape[3]))(cnn)
blstm = Bidirectional(LSTM(units=128, return_sequences=True, dropout=0.5))(blstm)
blstm = Dense(units=128)(blstm)
blstm = Activation(activation="tanh")(blstm)
blstm = Bidirectional(LSTM(units=128, return_sequences=True, dropout=0.5))(blstm)
blstm = Dense(units=output_size)(blstm)
output_data = Activation(activation="softmax")(blstm)
optimizer = RMSprop(learning_rate=4e-4)
return (input_data, output_data, optimizer)
def puigcerver(input_size, output_size):
"""
    Convolutional Recurrent Neural Network by Puigcerver et al.
Reference:
Puigcerver, J.: Are multidimensional recurrent layers really
necessary for handwritten text recognition? In: Document
Analysis and Recognition (ICDAR), 2017 14th
IAPR International Conference on, vol. 1, pp. 67–72. IEEE (2017)
"""
input_data = Input(name="input", shape=input_size)
cnn = Conv2D(filters=16, kernel_size=(3,3), strides=(1,1), padding="same")(input_data)
cnn = BatchNormalization()(cnn)
cnn = LeakyReLU()(cnn)
cnn = MaxPooling2D(pool_size=(2,2), strides=(2,2), padding="valid")(cnn)
cnn = Conv2D(filters=32, kernel_size=(3,3), strides=(1,1), padding="same")(cnn)
cnn = BatchNormalization()(cnn)
cnn = LeakyReLU()(cnn)
cnn = MaxPooling2D(pool_size=(2,2), strides=(2,2), padding="valid")(cnn)
cnn = Dropout(rate=0.2)(cnn)
cnn = Conv2D(filters=48, kernel_size=(3,3), strides=(1,1), padding="same")(cnn)
cnn = BatchNormalization()(cnn)
cnn = LeakyReLU()(cnn)
cnn = MaxPooling2D(pool_size=(2,2), strides=(2,2), padding="valid")(cnn)
cnn = Dropout(rate=0.2)(cnn)
cnn = Conv2D(filters=64, kernel_size=(3,3), strides=(1,1), padding="same")(cnn)
cnn = BatchNormalization()(cnn)
cnn = LeakyReLU()(cnn)
cnn = Dropout(rate=0.2)(cnn)
cnn = Conv2D(filters=80, kernel_size=(3,3), strides=(1,1), padding="same")(cnn)
cnn = BatchNormalization()(cnn)
cnn = LeakyReLU()(cnn)
shape = cnn.get_shape()
blstm = Reshape((shape[1], shape[2] * shape[3]))(cnn)
blstm = Bidirectional(LSTM(units=256, return_sequences=True, dropout=0.5))(blstm)
blstm = Bidirectional(LSTM(units=256, return_sequences=True, dropout=0.5))(blstm)
blstm = Bidirectional(LSTM(units=256, return_sequences=True, dropout=0.5))(blstm)
blstm = Bidirectional(LSTM(units=256, return_sequences=True, dropout=0.5))(blstm)
blstm = Bidirectional(LSTM(units=256, return_sequences=True, dropout=0.5))(blstm)
blstm = Dropout(rate=0.5)(blstm)
blstm = Dense(units=output_size)(blstm)
output_data = Activation(activation="softmax")(blstm)
optimizer = RMSprop(learning_rate=3e-4)
return (input_data, output_data, optimizer)
def flor(input_size, output_size):
"""Gated Convolucional Recurrent Neural Network by Flor."""
input_data = Input(name="input", shape=input_size)
cnn = Conv2D(filters=16, kernel_size=(3,3), strides=(2,2), padding="same")(input_data)
cnn = PReLU(shared_axes=[1,2])(cnn)
cnn = BatchNormalization(renorm=True)(cnn)
cnn = FullGatedConv2D(filters=16, kernel_size=(3,3), padding="same")(cnn)
cnn = Conv2D(filters=32, kernel_size=(3,3), strides=(1,1), padding="same")(cnn)
cnn = PReLU(shared_axes=[1,2])(cnn)
cnn = BatchNormalization(renorm=True)(cnn)
cnn = FullGatedConv2D(filters=32, kernel_size=(3,3), padding="same")(cnn)
cnn = Conv2D(filters=40, kernel_size=(2,4), strides=(2,4), padding="same")(cnn)
cnn = PReLU(shared_axes=[1,2])(cnn)
cnn = BatchNormalization(renorm=True)(cnn)
cnn = FullGatedConv2D(filters=40, kernel_size=(3,3), padding="same", kernel_constraint=MaxNorm(4, [0,1,2]))(cnn)
cnn = Dropout(rate=0.2)(cnn)
cnn = Conv2D(filters=48, kernel_size=(3,3), strides=(1,1), padding="same")(cnn)
cnn = PReLU(shared_axes=[1,2])(cnn)
cnn = BatchNormalization(renorm=True)(cnn)
cnn = FullGatedConv2D(filters=48, kernel_size=(3,3), padding="same", kernel_constraint=MaxNorm(4, [0,1,2]))(cnn)
cnn = Dropout(rate=0.2)(cnn)
cnn = Conv2D(filters=56, kernel_size=(2,4), strides=(2,4), padding="same")(cnn)
cnn = PReLU(shared_axes=[1,2])(cnn)
cnn = BatchNormalization(renorm=True)(cnn)
cnn = FullGatedConv2D(filters=56, kernel_size=(3,3), padding="same", kernel_constraint=MaxNorm(4, [0,1,2]))(cnn)
cnn = Dropout(rate=0.2)(cnn)
cnn = Conv2D(filters=64, kernel_size=(3,3), strides=(1,1), padding="same")(cnn)
cnn = PReLU(shared_axes=[1,2])(cnn)
cnn = BatchNormalization(renorm=True)(cnn)
cnn = MaxPooling2D(pool_size=(1,2), strides=(1,2), padding="valid")(cnn)
shape = cnn.get_shape()
blstm = Reshape((shape[1], shape[2] * shape[3]))(cnn)
blstm = Bidirectional(LSTM(units=128, return_sequences=True, dropout=0.5))(blstm)
blstm = Dense(units=128)(blstm)
blstm = Bidirectional(LSTM(units=128, return_sequences=True, dropout=0.5))(blstm)
blstm = Dense(units=output_size)(blstm)
output_data = Activation(activation="softmax")(blstm)
optimizer = RMSprop(learning_rate=5e-4)
return (input_data, output_data, optimizer)
| 41.058201 | 116 | 0.672165 | 1,084 | 7,760 | 4.737085 | 0.134686 | 0.072444 | 0.042843 | 0.046738 | 0.819864 | 0.7889 | 0.776631 | 0.769426 | 0.754625 | 0.746835 | 0 | 0.056592 | 0.164304 | 7,760 | 188 | 117 | 41.276596 | 0.734927 | 0.144459 | 0 | 0.693694 | 0 | 0 | 0.027701 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.027027 | false | 0 | 0.054054 | 0 | 0.108108 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
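One non-obvious detail in ``flor`` above is how the strided convolutions and pooling set the BLSTM sequence length: ``Reshape((shape[1], shape[2] * shape[3]))`` turns the first spatial dimension into the time axis. A small sketch of the "same"-padding downsampling arithmetic; the ``(1024, 128)`` input size is a hypothetical example, not a value from the source:

```python
def downsampled(size, strides):
    # "same" padding: each strided layer divides a dimension by its stride,
    # rounding up (ceil division via negation).
    h, w = size
    for sh, sw in strides:
        h = -(-h // sh)
        w = -(-w // sw)
    return h, w


# Strided layers in flor(): Conv2D stride (2,2), Conv2D (2,4), Conv2D (2,4),
# MaxPooling2D (1,2); the first dimension (128 here) becomes the BLSTM
# sequence length after the Reshape.
print(downsampled((1024, 128), [(2, 2), (2, 4), (2, 4), (1, 2)]))  # (128, 2)
```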
1977f951e12f01b6bc1fb379ea50332d6748cac0 | 212 | py | Python | stump/admin/__init__.py | The-Politico/politico-civic-stump | b66f4288841823d327a49563ffbc9ad1c826e247 | [
"MIT"
] | null | null | null | stump/admin/__init__.py | The-Politico/politico-civic-stump | b66f4288841823d327a49563ffbc9ad1c826e247 | [
"MIT"
] | null | null | null | stump/admin/__init__.py | The-Politico/politico-civic-stump | b66f4288841823d327a49563ffbc9ad1c826e247 | [
"MIT"
] | null | null | null | from django.contrib import admin
from stump.models import Appearance, AppearanceType
from .appearance import AppearanceAdmin
admin.site.register(AppearanceType)
admin.site.register(Appearance, AppearanceAdmin)
| 26.5 | 51 | 0.853774 | 24 | 212 | 7.541667 | 0.5 | 0.099448 | 0.187845 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.084906 | 212 | 7 | 52 | 30.285714 | 0.93299 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.6 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
19824a85fe31a60ed270f2fb4f9df3fd575bf951 | 2,964 | py | Python | Classes/reservations.py | paulmouzas/Projects | cd2f489706bfb39a310d580e000c9f188cb9f305 | [
"MIT"
] | 1 | 2021-02-28T10:32:50.000Z | 2021-02-28T10:32:50.000Z | Classes/reservations.py | paulmouzas/Projects | cd2f489706bfb39a310d580e000c9f188cb9f305 | [
"MIT"
] | null | null | null | Classes/reservations.py | paulmouzas/Projects | cd2f489706bfb39a310d580e000c9f188cb9f305 | [
"MIT"
] | null | null | null | """
**Airline / Hotel Reservation System** - Create a reservation system which books airline seats or hotel rooms. It charges various rates for particular sections of the plane or hotel. Example, first class is going to cost more than coach.
Hotel rooms have penthouse suites which cost more. Keep track of when rooms will be available and can be scheduled.
"""
import datetime
from calendar import monthrange
month_names = {1: 'January', 2: 'February', 3: 'March', 4: 'April',
               5: 'May', 6: 'June', 7: 'July', 8: 'August',
               9: 'September', 10: 'October', 11: 'November', 12: 'December'}
current_year, current_month = datetime.date.today().year, datetime.date.today().month
class Calendar(object):
    def __init__(self):
        self.reservations = []
    def update(self, update):
        self.reservations.append(update)
    def printCalendar(self, year=current_year, month=current_month):
        # days_taken: days of this month that already have a reservation
        days_taken = [day.date.day for day in self.reservations if day.date.month == month]
        # monthrange returns (weekday of the 1st, number of days in the month)
        n_days = monthrange(year, month)[1]
        print('Month of %s' % month_names[month])
        for i in range(1, n_days + 1):
            print("%d:\t %s" % (i, 'Not available' if i in days_taken else 'Available'))
class Reservation(object):
    def __init__(self, name, date, upgrade=False):
        self.name = name
        self.date = date
        self.upgrade = upgrade
        self.price = 99.99 if self.upgrade else 79.99
paul = Reservation('Paul', datetime.date.today())
calendar = Calendar()
calendar.update(paul)
calendar.printCalendar()
| 35.285714 | 237 | 0.66363 | 389 | 2,964 | 4.96401 | 0.290488 | 0.037286 | 0.052822 | 0.035215 | 0.726049 | 0.709477 | 0.709477 | 0.709477 | 0.709477 | 0.709477 | 0 | 0.027559 | 0.228745 | 2,964 | 83 | 238 | 35.710843 | 0.817148 | 0.012483 | 0 | 0.627451 | 0 | 0 | 0.055884 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.078431 | null | null | 0.117647 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
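``printCalendar`` above leans on ``calendar.monthrange``, whose return value is easy to misread: the first element is the weekday of the 1st of the month (not a day number), and only the second is the number of days in the month:

```python
from calendar import monthrange

# monthrange(year, month) -> (weekday of the 1st, number of days in month).
# The first element is a weekday index (Monday == 0), not a day number.
first_weekday, n_days = monthrange(2021, 2)
print(first_weekday, n_days)  # 0 28  (1 Feb 2021 was a Monday; 28 days)
```

So a day loop over the month should run ``range(1, n_days + 1)`` rather than starting from ``monthrange(...)[0]``.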
19935c3e1588ff3638c086ffae6bef0b24a3182f | 26 | py | Python | terrascript/dyn/__init__.py | vfoucault/python-terrascript | fe82b3d7e79ffa72b7871538f999828be0a115d0 | [
"BSD-2-Clause"
] | null | null | null | terrascript/dyn/__init__.py | vfoucault/python-terrascript | fe82b3d7e79ffa72b7871538f999828be0a115d0 | [
"BSD-2-Clause"
] | null | null | null | terrascript/dyn/__init__.py | vfoucault/python-terrascript | fe82b3d7e79ffa72b7871538f999828be0a115d0 | [
"BSD-2-Clause"
] | null | null | null | """2017-11-28 18:07:28"""
| 13 | 25 | 0.538462 | 6 | 26 | 2.333333 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.583333 | 0.076923 | 26 | 1 | 26 | 26 | 0 | 0.730769 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5ff296ee852dc5cd378a67d4a8957e2446b327ee | 193 | py | Python | math/Sequences/ArithematicProgression/AP.py | CarbonDDR/al-go-rithms | 8e65affbe812931b7dde0e2933eb06c0f44b4130 | [
"CC0-1.0"
] | 1,253 | 2017-06-06T07:19:25.000Z | 2022-03-30T17:07:58.000Z | math/Sequences/ArithematicProgression/AP.py | rishabh99-rc/al-go-rithms | 4df20d7ef7598fda4bc89101f9a99aac94cdd794 | [
"CC0-1.0"
] | 554 | 2017-09-29T18:56:01.000Z | 2022-02-21T15:48:13.000Z | math/Sequences/ArithematicProgression/AP.py | rishabh99-rc/al-go-rithms | 4df20d7ef7598fda4bc89101f9a99aac94cdd794 | [
"CC0-1.0"
] | 2,226 | 2017-09-29T19:59:59.000Z | 2022-03-25T08:59:55.000Z | def ap(start, difference, terms):
    ans = "AP IS : " + str(list(range(start, start + difference * terms, difference)))
    return ans
def test():
    return ap(2, 5, 10)
test()
| 17.545455 | 79 | 0.57513 | 26 | 193 | 4.269231 | 0.576923 | 0.27027 | 0.36036 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.027778 | 0.253886 | 193 | 10 | 80 | 19.3 | 0.743056 | 0 | 0 | 0 | 0 | 0 | 0.043716 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.166667 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
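The ``ap`` helper enumerates terms with ``range``; the standard closed forms for the n-th term and the partial sum make a handy cross-check (the function names below are our own, not from the source):

```python
def nth_term(a, d, n):
    # n-th term (1-indexed) of an AP with first term a and common difference d.
    return a + (n - 1) * d


def ap_sum(a, d, n):
    # Sum of the first n terms: n * (2a + (n - 1) * d) / 2.
    return n * (2 * a + (n - 1) * d) // 2


terms = list(range(2, 2 + 5 * 10, 5))  # same sequence as ap(2, 5, 10)
print(nth_term(2, 5, 10), ap_sum(2, 5, 10))  # 47 245
```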
2767b08e22c7c155f3af2ccf0e648bb394dea313 | 35 | py | Python | authlib/specs/rfc7519/claims.py | tk193192/authlib | 4c60a628f64c6d385a06ea55e416092726b94d07 | [
"BSD-3-Clause"
] | 2 | 2021-04-26T18:17:37.000Z | 2021-04-28T21:39:45.000Z | authlib/specs/rfc7519/claims.py | tk193192/authlib | 4c60a628f64c6d385a06ea55e416092726b94d07 | [
"BSD-3-Clause"
] | 4 | 2021-03-19T08:17:59.000Z | 2021-06-10T19:34:36.000Z | authlib/specs/rfc7519/claims.py | tk193192/authlib | 4c60a628f64c6d385a06ea55e416092726b94d07 | [
"BSD-3-Clause"
] | 2 | 2021-05-24T20:34:12.000Z | 2022-03-26T07:46:17.000Z | from authlib.jose import JWTClaims
| 17.5 | 34 | 0.857143 | 5 | 35 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114286 | 35 | 1 | 35 | 35 | 0.967742 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
27a32486a4ec187c578382c7af0a9691fb1d438d | 57,607 | py | Python | auxpm/samplers.py | matt-graham/auxiliary-pm-mcmc | 04e73508c1432ae5ac2fc867a9f794f95ce1d2f8 | [
"MIT"
] | 2 | 2016-01-26T19:59:42.000Z | 2020-07-11T10:26:03.000Z | auxpm/samplers.py | matt-graham/auxiliary-pm-mcmc | 04e73508c1432ae5ac2fc867a9f794f95ce1d2f8 | [
"MIT"
] | null | null | null | auxpm/samplers.py | matt-graham/auxiliary-pm-mcmc | 04e73508c1432ae5ac2fc867a9f794f95ce1d2f8 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Auxiliary Pseudo-Marginal Markov chain Monte Carlo samplers
"""
__authors__ = 'Matt Graham'
__copyright__ = 'Copyright 2015, Matt Graham'
__license__ = 'MIT'
import numpy as np
import mcmc_updates as mcmc
class BaseAdaptiveMHSampler(object):
""" Base class for adaptive Metropolis Hastings samplers.
Implements a basic adaptive MH scheme which tunes scale parameters of the
MH proposal distributions to achieve an acceptance rate in some target
range.
A derived class must implement a ``get_samples`` method with signature::
thetas, n_reject = get_samples(self, theta_init, n_sample)
which returns the sampled states ``thetas`` and number of rejections
``n_reject`` made during a series of ``n_sample`` iterations of a
MCMC update dynamic which includes (or solely consists of) a MH update
    step with proposal distribution parameterised by the ``prop_scales``
attribute of this class as scale parameters.
"""
def __init__(self, prop_scales):
""" Base class for adaptive Metropolis Hastings samplers.
Parameters
----------
prop_scales : ndarray
Array of values to initialise proposal distribution scale
parameters to.
"""
self.prop_scales = prop_scales
def get_samples(self, theta_init, n_sample):
""" Perform a series of Markov chain updates.
Parameters
----------
theta_init : ndarray
State to initialise chain at, with shape ``(n_dim, )``.
n_sample : integer
Number of Markov chain updates to perform and so state samples to
return.
Returns
-------
thetas : ndarray
Two dimensional array of sampled chain states with shape
``(n_sample, n_dim)``.
n_reject : integer or iterable
For a Markov chain in which each state update contains only one
Metropolis(-Hastings) accept step this is the number of rejected
proposed updates during the ``n_sample`` updates. If each update
contains multiple Metropolis(-Hastings) accept steps this is an
iterable with each element corresponding to the rejection count
for a particular accept step in the order they are performed in
the overall update.
"""
raise NotImplementedError()
def adaptive_run(self, theta_init, batch_size, n_batch,
low_acc_thr, upp_acc_thr, adapt_factor_func,
                     print_details=False, reject_count_index=-1):
""" Run MH Markov chain with proposal tuning to adapt acceptance rate.
        Performs batches of MH Markov chain updates, after each batch
        estimating the current acceptance rate from the number of rejections
        in the last batch; if this falls outside some specified range, the
        MH proposal distribution scale parameters are adjusted by a
        multiplicative or divisive adaptation factor calculated as a
        function of the current batch number and the overall number of
        batches.
Parameters
----------
theta_init : ndarray
State to start running chain from.
batch_size : integer
Number of samples (Markov chain updates) to compute for each batch.
n_batch : integer
            Number of batches of updates (and so adaptations) to do in total.
low_acc_thr : float
Lower acceptance rate threshold, a batch estimated acceptance rate
less than this will cause the proposal distribution scales to be
divided by ``adapt_factor_func(b, n_batch)`` where ``b`` is the
current batch number.
upp_acc_thr : float
Upper acceptance rate threshold, a batch estimated acceptance rate
more than this will cause the proposal distribution scales to be
multiplied by ``adapt_factor_func(b, n_batch)`` where ``b`` is the
current batch number.
adapt_factor_func : function or callable object
Function which determines the factor by which the proposal
distribution scale parameters are adjusted after each batch
(if acceptance rate outside required interval). Function should
have a signature of the form::
adapt_factor = adapt_factor_func(b, n_batch)
where ``adapt_factor`` is a scalar floating-point value used to
multiply / divide the proposal widths, ``b`` is the current batch
number and ``n_batch`` is the total number of batches to be used.
print_details : boolean
Whether to print accept rate and adaption factor for each batch
to standard out during a run.
        reject_count_index : integer
            Optional argument specifying the index of the rejection count to
            use as the adaptation signal when the ``get_samples`` method
            returns an iterable of rejection counts as its second argument,
            for example when each overall Markov chain update between
            successive samples is composed of several Metropolis(-Hastings)
            steps, each with their own possibility of rejecting. For the
            adaptive run to make sense, the index specified should
            correspond to the rejection count of the MH step whose proposal
            distribution is parameterised by the ``prop_scales`` parameters.
Returns
-------
thetas : ndarray
Array of states sampled during all batches of adaptive run, of
            shape ``(n_batch * batch_size, n_dim)``.
prop_scales : ndarray
Array of proposal scales after each batch of adaptive run, of
shape ``(n_batch, n_dim)``.
        accept_rates : ndarray
Array of acceptance rates for each batch of adaptive run, of shape
``(n_batch, )``.
"""
thetas = np.empty((n_batch * batch_size, theta_init.shape[0]))
prop_scales = np.empty((n_batch, self.prop_scales.shape[0]))
accept_rates = np.empty(n_batch)
for b in range(n_batch):
thetas[b*batch_size:(b+1)*batch_size], n_reject = (
self.get_samples(theta_init, batch_size))
            # if multiple rejection counts are present, e.g. from multiple
            # Metropolis(-Hastings) accept steps for different parts of the
            # state during one overall update, adapt using only the count
            # at reject_count_index
            if hasattr(n_reject, '__len__'):
n_reject = n_reject[reject_count_index]
accept_rates[b] = 1. - (n_reject * 1. / batch_size)
theta_init = thetas[(b + 1) * batch_size - 1]
adapt_factor = adapt_factor_func(b, n_batch)
if accept_rates[b] < low_acc_thr:
self.prop_scales /= adapt_factor
elif accept_rates[b] > upp_acc_thr:
self.prop_scales *= adapt_factor
prop_scales[b] = self.prop_scales
if print_details:
print('Batch {0}: accept rate {1}, adapt factor {2}'
.format(b + 1, accept_rates[b], adapt_factor))
return thetas, prop_scales, accept_rates
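As an illustrative sketch (all values here are hypothetical, not part of this module), the batch adaptation rule in ``adaptive_run`` can be exercised in isolation; the ``adapt_factor_func`` below is an assumed example which decays from 2 towards 1 over the run, so early batches adapt aggressively and later batches leave the scales almost unchanged:

```python
import numpy as np

# Hypothetical adaptation factor: equals 2 for the first batch and decays
# towards 1 as b approaches n_batch.
def adapt_factor_func(b, n_batch):
    return 2. - b / float(n_batch)

# The core update rule from adaptive_run: divide the proposal scales when
# the batch acceptance rate is below the lower threshold (steps too large)
# and multiply them when it is above the upper threshold (steps too small).
prop_scales = np.array([1.0, 0.5])
low_acc_thr, upp_acc_thr = 0.2, 0.4
batch_accept_rates = [0.05, 0.6, 0.3]  # below, above and inside the range
for b, accept_rate in enumerate(batch_accept_rates):
    adapt_factor = adapt_factor_func(b, len(batch_accept_rates))
    if accept_rate < low_acc_thr:
        prop_scales /= adapt_factor
    elif accept_rate > upp_acc_thr:
        prop_scales *= adapt_factor
```

After these three toy batches the scales end at ``[5/6, 5/12]``: halved by the first batch, expanded by 5/3 in the second and untouched by the third.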
class PMMHSampler(BaseAdaptiveMHSampler):
""" Pseudo-marginal Metropolis Hastings sampler.
Markov chain Monte Carlo sampler which uses pseudo-marginal Metropolis
Hastings updates. In the pseudo-marginal framework only an unbiased
noisy estimate of the (unnormalised) target density is available.
"""
def __init__(self, log_f_estimator, log_prop_density, prop_sampler,
prop_scales, prng):
""" Pseudo-Marginal Metropolis Hastings sampler.
Parameters
----------
log_f_estimator : function or callable object
Function which returns an unbiased estimate of the log density
of the target distribution given current parameter state. Should
have a call signature::
log_f_est = log_f_estimator(theta)
            where ``theta`` is the state vector (as an ndarray) to evaluate
            the density at and ``log_f_est`` is the returned scalar
            log-density estimate.
log_prop_density : function or callable object or None
Function returning logarithm of parameter update proposal density
at a given proposed parameter state given the current parameter
state. Should have a call signature::
log_prop_dens = log_prop_density(theta_prop, theta_curr)
where ``theta_prop`` is proposed parameter state to evaluate the
log proposal density at, ``theta_curr`` is the parameter state to
condition the proposal density on and ``log_prop_dens`` is the
returned log proposal density value. Alternatively ``None`` may
be passed which indicates a symmetric proposal density in which
case a Metropolis update will be made.
prop_sampler : function or callable object
Function which returns a proposed new parameter state drawn from
proposal distribution given a current parameter state. Should have
a call signature::
theta_prop = prop_sampler(theta_curr, prop_scales)
where ``theta_curr`` is the current parameter state vector (as a
ndarray) which the proposal should be conditioned on,
``prop_scales`` is a ndarray of scale parameters for the proposal
distribution (e.g. standard deviation for Gaussian proposals) and
            ``theta_prop`` is the returned random proposal distribution draw,
again an ndarray.
prop_scales : ndarray
Array of values to initialise the scale parameters of the state
proposal distribution to. If an initial adaptive run is performed
by calling ``adaptive_run``, these parameters will be tuned to
try to achieve an average accept rate in some prescribed interval.
prng : RandomState
Pseudo-random number generator object (either an instance of a
``numpy`` ``RandomState`` or an object with an equivalent
interface) used to randomly sample accept decisions in MH accept
step.
"""
super(PMMHSampler, self).__init__(prop_scales)
self.log_f_estimator = log_f_estimator
if log_prop_density is None:
self.do_metropolis_update = True
else:
self.do_metropolis_update = False
self.log_prop_density = log_prop_density
self.prop_sampler = prop_sampler
self.prng = prng
def get_samples(self, theta_init, n_sample):
""" Perform a series of Markov chain updates.
Parameters
----------
theta_init : ndarray
State to initialise chain at, with shape ``(n_dim, )``.
n_sample : integer
Number of Markov chain updates to perform and so state samples to
return.
Returns
-------
thetas : ndarray
Two dimensional array of sampled chain states with shape
``(n_sample, n_dim)``.
n_reject : integer
The number of rejected proposed updates during the ``n_sample``
updates.
"""
if hasattr(theta_init, 'shape'):
thetas = np.empty((n_sample, theta_init.shape[0]))
else:
thetas = np.empty(n_sample)
thetas[0] = theta_init
log_f_est_curr = self.log_f_estimator(theta_init)
n_reject = 0
for s in range(1, n_sample):
if self.do_metropolis_update:
thetas[s], log_f_est_curr, rejection = mcmc.metropolis_step(
thetas[s-1], log_f_est_curr, self.log_f_estimator,
self.prng, self.prop_sampler, self.prop_scales)
else:
thetas[s], log_f_est_curr, rejection = mcmc.met_hastings_step(
thetas[s-1], log_f_est_curr, self.log_f_estimator,
self.prng, self.prop_sampler, self.prop_scales,
self.log_prop_density)
if rejection:
n_reject += 1
return thetas, n_reject
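The ``mcmc.metropolis_step`` called above lives in the external ``mcmc_updates`` module. The following self-contained sketch shows the pseudo-marginal accept step it is assumed to implement, with a toy noisy estimator and a Gaussian random-walk ``prop_sampler``; all concrete choices here (target, noise model, scales) are illustrative only:

```python
import numpy as np

prng = np.random.RandomState(0)

# Toy unbiased noisy estimator of an unnormalised standard normal target:
# the density estimate is multiplied by positive noise with unit mean.
def log_f_estimator(theta):
    noise = prng.gamma(100., 0.01)  # gamma(100, 0.01) has mean 1
    return -0.5 * np.sum(theta ** 2) + np.log(noise)

# Gaussian random-walk proposal matching the prop_sampler call signature
# documented for PMMHSampler.
def prop_sampler(theta_curr, prop_scales):
    return theta_curr + prop_scales * prng.normal(size=theta_curr.shape)

# A single pseudo-marginal Metropolis step: the log-density *estimate* for
# the current state is carried over rather than re-evaluated, which is
# what keeps the chain's stationary distribution exact despite the noise.
theta_curr = np.zeros(2)
prop_scales = np.ones(2)
log_f_est_curr = log_f_estimator(theta_curr)
theta_prop = prop_sampler(theta_curr, prop_scales)
log_f_est_prop = log_f_estimator(theta_prop)
if prng.uniform() < np.exp(log_f_est_prop - log_f_est_curr):
    theta_curr, log_f_est_curr = theta_prop, log_f_est_prop  # accept
# otherwise the current state and its estimate are retained unchanged
```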
class APMMetIndPlusMHSampler(BaseAdaptiveMHSampler):
""" Auxiliary pseudo-marginal MI + MH sampler.
Sampler in the auxiliary pseudo-marginal MCMC framework which uses
Metropolis independence updates for the random draws and Metropolis--
Hastings updates for the parameter state.
"""
def __init__(self, log_f_estimator, log_prop_density, prop_sampler,
prop_scales, u_sampler, prng):
""" Auxiliary Pseudo-marginal MI + MH sampler.
Parameters
----------
log_f_estimator : function or callable object
Function which returns an unbiased estimate of the log density
of the target distribution given current parameter state and
random draws. Should have a call signature::
                log_f_est, cached_res_out = (
                    log_f_estimator(u, theta[, cached_res_in]))
            where ``u`` is the vector of auxiliary random draws used in the
            density estimator, ``theta`` is the state vector (as an ndarray)
            to estimate the density at and ``cached_res_in`` is an optional
            input of cached intermediate results, deterministically
            calculated from ``theta`` and stored from a previous call with
            the same ``theta`` value, potentially speeding up the estimate.
            ``log_f_est`` is the calculated log-density estimate and
            ``cached_res_out`` are the intermediate cached results
            deterministically calculated from the specified ``theta``, which
            can be passed to subsequent calls to potentially speed up
            further estimates of the log density for this ``theta`` value
            (if ``cached_res_in`` was specified then
            ``cached_res_out == cached_res_in``).
log_prop_density : function or callable object or None
Function returning logarithm of parameter update proposal density
at a given proposed parameter state given the current parameter
state. Should have a call signature::
log_prop_dens = log_prop_density(theta_prop, theta_curr)
where ``theta_prop`` is proposed parameter state to evaluate the
log proposal density at, ``theta_curr`` is the parameter state to
condition the proposal density on and ``log_prop_dens`` is the
returned log proposal density value. Alternatively ``None`` may
be passed which indicates a symmetric proposal density in which
case a Metropolis update will be made.
prop_sampler : function or callable object
Function which returns a proposed new parameter state drawn from
proposal distribution given a current parameter state. Should have
a call signature::
theta_prop = prop_sampler(theta_curr, prop_scales)
where ``theta_curr`` is the current parameter state vector (as a
ndarray) which the proposal should be conditioned on,
``prop_scales`` is a ndarray of scale parameters for the proposal
distribution (e.g. standard deviation for Gaussian proposals) and
            ``theta_prop`` is the returned random proposal distribution draw,
again an ndarray.
prop_scales : ndarray
Array of values to initialise the scale parameters of the state
proposal distribution to. If an initial adaptive run is performed
by calling ``adaptive_run``, these parameters will be tuned to
try to achieve an average accept rate in some prescribed interval.
u_sampler : function or callable object
Function which returns an independent sample from the 'prior'
distribution on the random draws :math:`q(u)`.
prng : RandomState
Pseudo-random number generator object (either an instance of a
``numpy`` ``RandomState`` or an object with an equivalent
interface) used to randomly sample accept decisions in MH accept
step.
"""
super(APMMetIndPlusMHSampler, self).__init__(prop_scales)
self.log_f_estimator = log_f_estimator
if log_prop_density is None:
self.do_metropolis_update = True
else:
self.do_metropolis_update = False
self.log_prop_density = log_prop_density
self.prop_sampler = prop_sampler
self.u_sampler = u_sampler
self.prng = prng
def get_samples(self, theta_init, n_sample, u_init=None):
""" Perform a series of Markov chain updates.
Parameters
----------
theta_init : ndarray
State to initialise parameters at, with shape ``(n_dim, )``.
n_sample : integer
Number of Markov chain updates to perform and so state samples to
return.
u_init : ndarray
State to initialise random draws at. Optional, if not specified
will be sampled from base density.
Returns
-------
thetas : ndarray
Two dimensional array of sampled chain states with shape
``(n_sample, n_dim)``.
(n_reject_1, n_reject_2) : tuple
The number of rejected proposed updates during the ``n_sample``
updates, in the acceptance step for the random draw variable
given the current state parameter (``n_reject_1``) and in the
acceptance step for the state parameter given the current random
draws (``n_reject_2``).
"""
if hasattr(theta_init, 'shape'):
thetas = np.empty((n_sample, theta_init.shape[0]))
else:
thetas = np.empty(n_sample)
thetas[0] = theta_init
        u = u_init if u_init is not None else self.u_sampler()
log_f_est_curr, cached_res_curr = self.log_f_estimator(u, theta_init)
n_reject_1 = 0
n_reject_2 = 0
for s in range(1, n_sample):
            ## Update u keeping theta fixed using MI
            # As only u is changed in this update, the cached results for
            # the current theta calculated in the previous log_f_estimator
            # call can be reused; hence pass these to the estimator and use
            # only the first return value (the second will be equal to
            # cached_res_curr)
log_f_func_1 = lambda v: (
self.log_f_estimator(v, thetas[s-1], cached_res_curr)[0])
u, log_f_est_curr, rejection = mcmc.metropolis_indepedence_step(
u, log_f_est_curr, log_f_func_1, self.prng, self.u_sampler)
if rejection:
n_reject_1 += 1
            ## Update theta keeping u fixed using MH
            def log_f_func_2(theta):
                # save the cached results from the estimator evaluation for
                # the proposed theta so that, if the proposal is accepted,
                # they can be promoted to the current cached results below
                log_f_est, self._cached_res_prop = (
                    self.log_f_estimator(u, theta))
                return log_f_est
if self.do_metropolis_update:
thetas[s], log_f_est_curr, rejection = mcmc.metropolis_step(
thetas[s-1], log_f_est_curr, log_f_func_2, self.prng,
self.prop_sampler, self.prop_scales)
else:
thetas[s], log_f_est_curr, rejection = mcmc.met_hastings_step(
thetas[s-1], log_f_est_curr, log_f_func_2, self.prng,
self.prop_sampler, self.prop_scales, self.log_prop_density)
if rejection:
n_reject_2 += 1
else:
# if proposal accepted update current cached results
cached_res_curr = self._cached_res_prop
return thetas, (n_reject_1, n_reject_2)
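``mcmc.metropolis_indepedence_step`` is also defined in the external ``mcmc_updates`` module. A minimal sketch of the independence update it is assumed to perform on the random draws follows; the helper names and toy densities below are hypothetical:

```python
import numpy as np

prng = np.random.RandomState(3)

# Metropolis independence step on the auxiliary draws u: the proposal is a
# fresh draw from the prior q(u), so the prior terms cancel in the accept
# ratio and only the two density estimates appear.
def metropolis_independence_step(u, log_f_est, log_f_func, prng, u_sampler):
    u_prop = u_sampler()
    log_f_prop = log_f_func(u_prop)
    if prng.uniform() < np.exp(log_f_prop - log_f_est):
        return u_prop, log_f_prop, False  # proposal accepted
    return u, log_f_est, True  # proposal rejected

u_sampler = lambda: prng.normal(size=4)
log_f = lambda u: -np.sum(u ** 2)  # toy conditional log density estimate
u = u_sampler()
u, log_f_est, rejected = metropolis_independence_step(
    u, log_f(u), log_f, prng, u_sampler)
```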
class APMEllSSPlusMHSampler(BaseAdaptiveMHSampler):
""" Auxiliary pseudo-marginal ESS + MH sampler.
Sampler in the auxiliary pseudo-marginal MCMC framework which uses
elliptical slice sampling updates for the random draws and Metropolis--
Hastings updates for the parameter state.
It is implicitly assumed the 'prior' :math:`q(u)` on the random draws is
Gaussian in this case.
"""
def __init__(self, log_f_estimator, log_prop_density, prop_sampler,
prop_scales, u_sampler, prng, max_slice_iters=1000):
""" Auxiliary Pseudo-marginal ESS + MH sampler.
Parameters
----------
log_f_estimator : function or callable object
Function which returns an unbiased estimate of the log density
of the target distribution given current parameter state and
random draws. Should have a call signature::
                log_f_est, cached_res_out = (
                    log_f_estimator(u, theta[, cached_res_in]))
            where ``u`` is the vector of auxiliary random draws used in the
            density estimator, ``theta`` is the state vector (as an ndarray)
            to estimate the density at and ``cached_res_in`` is an optional
            input of cached intermediate results, deterministically
            calculated from ``theta`` and stored from a previous call with
            the same ``theta`` value, potentially speeding up the estimate.
            ``log_f_est`` is the calculated log-density estimate and
            ``cached_res_out`` are the intermediate cached results
            deterministically calculated from the specified ``theta``, which
            can be passed to subsequent calls to potentially speed up
            further estimates of the log density for this ``theta`` value
            (if ``cached_res_in`` was specified then
            ``cached_res_out == cached_res_in``).
log_prop_density : function or callable object or None
Function returning logarithm of parameter update proposal density
at a given proposed parameter state given the current parameter
state. Should have a call signature::
log_prop_dens = log_prop_density(theta_prop, theta_curr)
where ``theta_prop`` is proposed parameter state to evaluate the
log proposal density at, ``theta_curr`` is the parameter state to
condition the proposal density on and ``log_prop_dens`` is the
returned log proposal density value. Alternatively ``None`` may
be passed which indicates a symmetric proposal density in which
case a Metropolis update will be made.
prop_sampler : function or callable object
Function which returns a proposed new parameter state drawn from
proposal distribution given a current parameter state. Should have
a call signature::
theta_prop = prop_sampler(theta_curr, prop_scales)
where ``theta_curr`` is the current parameter state vector (as a
ndarray) which the proposal should be conditioned on,
``prop_scales`` is a ndarray of scale parameters for the proposal
distribution (e.g. standard deviation for Gaussian proposals) and
            ``theta_prop`` is the returned random proposal distribution draw,
again an ndarray.
prop_scales : ndarray
Array of values to initialise the scale parameters of the state
proposal distribution to. If an initial adaptive run is performed
by calling ``adaptive_run``, these parameters will be tuned to
try to achieve an average accept rate in some prescribed interval.
u_sampler : function or callable object
Function which returns an independent sample from the 'prior'
distribution on the random draws :math:`q(u)`.
prng : RandomState
Pseudo-random number generator object (either an instance of a
``numpy`` ``RandomState`` or an object with an equivalent
interface) used to randomly sample accept decisions in MH accept
step.
max_slice_iters : integer
Maximum number of elliptical slice shrinking iterations to perform.
"""
super(APMEllSSPlusMHSampler, self).__init__(prop_scales)
self.log_f_estimator = log_f_estimator
if log_prop_density is None:
self.do_metropolis_update = True
else:
self.do_metropolis_update = False
self.log_prop_density = log_prop_density
self.prop_sampler = prop_sampler
self.prng = prng
self.u_sampler = u_sampler
self.max_slice_iters = max_slice_iters
def elliptical_slice_sample_u_given_theta(self, u, log_f_est, log_f_func):
""" Perform ESS on conditional density of random draws given state.
        Performs elliptical slice sampling on the conditional target density
        of the auxiliary random draw variables given a parameter state.
"""
v = self.u_sampler()
return mcmc.elliptical_slice_step(u, log_f_est, log_f_func, self.prng,
v, self.max_slice_iters)
def get_samples(self, theta_init, n_sample, u_init=None):
""" Perform a series of Markov chain updates.
Parameters
----------
theta_init : ndarray
State to initialise parameters at, with shape ``(n_dim, )``.
n_sample : integer
Number of Markov chain updates to perform and so state samples to
return.
u_init : ndarray
State to initialise random draws at. Optional, if not specified
will be sampled from base density.
Returns
-------
thetas : ndarray
Two dimensional array of sampled chain states with shape
``(n_sample, n_dim)``.
        n_reject : integer
            The number of rejected proposed parameter updates during the
            ``n_sample`` updates.
"""
if hasattr(theta_init, 'shape'):
thetas = np.empty((n_sample, theta_init.shape[0]))
else:
thetas = np.empty(n_sample)
thetas[0] = theta_init
u = u_init if u_init is not None else self.u_sampler()
log_f_est_curr, self._cached_res_curr = (
self.log_f_estimator(u, theta_init))
n_reject = 0
for s in range(1, n_sample):
            ## Update u keeping theta fixed using ell-SS
            # As only u is changed in this update, the cached results for
            # the current theta calculated in the previous log_f_estimator
            # call can be reused; hence pass these to the estimator and use
            # only the first return value (the second will be equal to
            # _cached_res_curr)
log_f_func_1 = lambda v: (
self.log_f_estimator(v, thetas[s-1], self._cached_res_curr)[0])
u, log_f_est_curr = self.elliptical_slice_sample_u_given_theta(
u, log_f_est_curr, log_f_func_1)
            ## Update theta keeping u fixed using MH
            def log_f_func_2(theta):
                # save the cached results from the estimator evaluation for
                # the proposed theta so that, if the proposal is accepted,
                # they can be promoted to the current cached results below
                log_f_est, self._cached_res_prop = (
                    self.log_f_estimator(u, theta))
                return log_f_est
if self.do_metropolis_update:
thetas[s], log_f_est_curr, rejection = mcmc.metropolis_step(
thetas[s-1], log_f_est_curr, log_f_func_2, self.prng,
self.prop_sampler, self.prop_scales)
else:
thetas[s], log_f_est_curr, rejection = mcmc.met_hastings_step(
thetas[s-1], log_f_est_curr, log_f_func_2, self.prng,
self.prop_sampler, self.prop_scales, self.log_prop_density)
if rejection:
n_reject += 1
else:
# if proposal accepted update current cached results
self._cached_res_curr = self._cached_res_prop
return thetas, n_reject
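``mcmc.elliptical_slice_step`` is external to this file; under the assumption that it implements the standard elliptical slice sampling update of Murray, Adams and MacKay (2010), a self-contained sketch with a toy conditional density looks like this:

```python
import numpy as np

prng = np.random.RandomState(1)

# One elliptical slice sampling step, valid when u has a standard normal
# 'prior' (as the class docstring requires); v is an independent draw from
# that same prior.
def elliptical_slice_step(u, log_f_est, log_f_func, prng, v,
                          max_slice_iters=1000):
    log_y = log_f_est + np.log(prng.uniform())  # slice height
    phi = prng.uniform(0., 2. * np.pi)
    phi_min, phi_max = phi - 2. * np.pi, phi
    for _ in range(max_slice_iters):
        # propose a point on the ellipse through u and v
        u_prop = u * np.cos(phi) + v * np.sin(phi)
        log_f_prop = log_f_func(u_prop)
        if log_f_prop > log_y:
            return u_prop, log_f_prop
        # shrink the angle bracket towards phi = 0 (the current u), retry
        if phi < 0.:
            phi_min = phi
        else:
            phi_max = phi
        phi = prng.uniform(phi_min, phi_max)
    raise RuntimeError('Exceeded maximum slice iterations')

log_f = lambda u: -np.sum((u - 1.) ** 2)  # toy conditional log density
u = prng.normal(size=3)
u, log_f_est = elliptical_slice_step(u, log_f(u), log_f, prng,
                                     prng.normal(size=3))
```

Unlike the Metropolis independence update, this step always moves the chain (it never rejects), which is why the ESS-based samplers above count only MH rejections.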
class BaseAPMMetIndPlusSliceSampler(object):
""" Abstract auxiliary pseudo-marginal MI + SS sampler base class.
Sampler in the auxiliary pseudo-marginal MCMC framework which uses
Metropolis independence updates for the random draws and some form of
linear slice sampling in updates for parameter state.
"""
def __init__(self, log_f_estimator, u_sampler, prng, max_steps_out=0,
max_slice_iters=1000):
""" Abstract auxiliary Pseudo-marginal MI + SS sampler base class.
Parameters
----------
log_f_estimator : function or callable object
Function which returns an unbiased estimate of the log density
of the target distribution given current parameter state and
random draws. Should have a call signature::
                log_f_est, cached_res_out = (
                    log_f_estimator(u, theta[, cached_res_in]))
            where ``u`` is the vector of auxiliary random draws used in the
            density estimator, ``theta`` is the state vector (as an ndarray)
            to estimate the density at and ``cached_res_in`` is an optional
            input of cached intermediate results, deterministically
            calculated from ``theta`` and stored from a previous call with
            the same ``theta`` value, potentially speeding up the estimate.
            ``log_f_est`` is the calculated log-density estimate and
            ``cached_res_out`` are the intermediate cached results
            deterministically calculated from the specified ``theta``, which
            can be passed to subsequent calls to potentially speed up
            further estimates of the log density for this ``theta`` value
            (if ``cached_res_in`` was specified then
            ``cached_res_out == cached_res_in``).
u_sampler : function or callable object
Function which returns an independent sample from the 'prior'
distribution on the random draws :math:`q(u)`.
prng : RandomState
Pseudo-random number generator object (either an instance of a
``numpy`` ``RandomState`` or an object with an equivalent
interface) used to randomly sample accept decisions in MH accept
step.
max_steps_out : integer
Maximum number of stepping out iterations to perform during slice
sampling update (default 0).
max_slice_iters : integer
Maximum number of slice shrinking iterations to perform.
"""
self.log_f_estimator = log_f_estimator
self.u_sampler = u_sampler
self.prng = prng
self.max_steps_out = max_steps_out
self.max_slice_iters = max_slice_iters
def slice_step(self, x_curr, log_f_curr, log_f_func, w):
""" Perform a linear slice sampling step.
Simply wraps external module function passing in fixed object level
arguments for more convenient calling.
"""
return mcmc.linear_slice_step(x_curr, log_f_curr, log_f_func, w,
self.prng, self.max_steps_out,
self.max_slice_iters)
    def slice_sample_theta_gvn_u(self, theta, log_f_est, u):
        """ Perform SS on conditional density of state given random draws.
        Performs slice sampling along some line on the conditional target
        density of the parameter state given the auxiliary random draw
        variables.
        Should be implemented by a derived class.
        """
        raise NotImplementedError()
def get_samples(self, theta_init, n_sample, u_init=None):
""" Perform a series of Markov chain updates.
Parameters
----------
theta_init : ndarray
State to initialise parameters at, with shape ``(n_dim, )``.
n_sample : integer
Number of Markov chain updates to perform and so state samples to
return.
u_init : ndarray
State to initialise random draws at. Optional, if not specified
will be sampled from base density.
Returns
-------
thetas : ndarray
Two dimensional array of sampled chain states with shape
``(n_sample, n_dim)``.
        n_reject : integer
            The number of rejected Metropolis independence updates to the
            random draws during the ``n_sample`` updates.
"""
if hasattr(theta_init, 'shape'):
thetas = np.empty((n_sample, theta_init.shape[0]))
else:
thetas = np.empty((n_sample, 1))
thetas[0] = theta_init
u = u_init if u_init is not None else self.u_sampler()
log_f_est_curr, self._cached_res_curr = (
self.log_f_estimator(u, theta_init))
n_reject = 0
for s in range(1, n_sample):
            ## Update u keeping theta fixed using MI
            # As only u is changed in this update, the cached results for
            # the current theta calculated in the previous log_f_estimator
            # call can be reused; hence pass these to the estimator and use
            # only the first return value (the second will be equal to
            # _cached_res_curr)
log_f_func_1 = lambda v: (
self.log_f_estimator(v, thetas[s-1], self._cached_res_curr)[0])
u, log_f_est_curr, rejection = mcmc.metropolis_indepedence_step(
u, log_f_est_curr, log_f_func_1, self.prng, self.u_sampler)
if rejection:
n_reject += 1
## Update theta given current u using SS
            # self._cached_res_curr is also updated in this method
thetas[s], log_f_est_curr = self.slice_sample_theta_gvn_u(
thetas[s-1].copy(), log_f_est_curr, u)
return thetas, n_reject
class BaseAPMEllSSPlusSliceSampler(object):
""" Abstract auxiliary pseudo-marginal ESS + SS sampler base class.
Sampler in the auxiliary pseudo-marginal MCMC framework which uses
elliptical slice sampling updates for the random draws and some form of
linear slice sampling in updates for parameter state.
It is implicitly assumed the 'prior' :math:`q(u)` on the random draws is
Gaussian in this case.
"""
def __init__(self, log_f_estimator, u_sampler, prng, max_steps_out=0,
max_slice_iters=1000):
""" Abstract auxiliary Pseudo-marginal ESS + SS sampler base class.
Parameters
----------
log_f_estimator : function or callable object
Function which returns an unbiased estimate of the log density
of the target distribution given current parameter state and
random draws. Should have a call signature::
                log_f_est, cached_res_out = (
                    log_f_estimator(u, theta[, cached_res_in]))
            where ``u`` is the vector of auxiliary random draws used in the
            density estimator, ``theta`` is the state vector (as an ndarray)
            to estimate the density at and ``cached_res_in`` is an optional
            input of cached intermediate results, deterministically
            calculated from ``theta`` and stored from a previous call with
            the same ``theta`` value, potentially speeding up the estimate.
            ``log_f_est`` is the calculated log-density estimate and
            ``cached_res_out`` are the intermediate cached results
            deterministically calculated from the specified ``theta``, which
            can be passed to subsequent calls to potentially speed up
            further estimates of the log density for this ``theta`` value
            (if ``cached_res_in`` was specified then
            ``cached_res_out == cached_res_in``).
u_sampler : function or callable object
Function which returns an independent sample from the 'prior'
distribution on the random draws :math:`q(u)`.
prng : RandomState
Pseudo-random number generator object (either an instance of a
``numpy`` ``RandomState`` or an object with an equivalent
interface) used to randomly sample accept decisions in MH accept
step.
max_steps_out : integer
Maximum number of stepping out iterations to perform during slice
sampling update (default 0).
max_slice_iters : integer
Maximum number of slice shrinking iterations to perform (common
to both elliptical and linear slice sampling updates).
"""
self.log_f_estimator = log_f_estimator
self.u_sampler = u_sampler
self.prng = prng
self.max_steps_out = max_steps_out
self.max_slice_iters = max_slice_iters
def slice_step(self, x_curr, log_f_curr, log_f_func, w):
""" Perform a linear slice sampling step.
Simply wraps external module function passing in fixed object level
arguments for more convenient calling.
"""
return mcmc.linear_slice_step(x_curr, log_f_curr, log_f_func, w,
self.prng, self.max_steps_out,
self.max_slice_iters)
def elliptical_slice_sample_u_given_theta(self, u, log_f_est, log_f_func):
""" Perform ESS on conditional density of random draws given state.
        Performs elliptical slice sampling on the conditional target density
        of the auxiliary random draw variables given a parameter state.
"""
v = self.u_sampler()
return mcmc.elliptical_slice_step(u, log_f_est, log_f_func, self.prng,
v, self.max_slice_iters)
    def slice_sample_theta_gvn_u(self, theta, log_f_est, u):
        """ Perform SS on conditional density of state given random draws.
        Performs slice sampling along some line on the conditional target
        density of the parameter state given the auxiliary random draw
        variables.
        Should be implemented by a derived class.
        """
        raise NotImplementedError()
def get_samples(self, theta_init, n_sample, u_init=None):
""" Perform a series of Markov chain updates.
Parameters
----------
theta_init : ndarray
State to initialise parameters at, with shape ``(n_dim, )``.
n_sample : integer
Number of Markov chain updates to perform and so state samples to
return.
u_init : ndarray
State to initialise random draws at. Optional, if not specified
will be sampled from base density.
Returns
-------
thetas : ndarray
Two dimensional array of sampled chain states with shape
``(n_sample, n_dim)``.
"""
if hasattr(theta_init, 'shape'):
thetas = np.empty((n_sample, theta_init.shape[0]))
else:
thetas = np.empty((n_sample, 1))
thetas[0] = theta_init
u = u_init if u_init is not None else self.u_sampler()
log_f_est_curr, self._cached_res_curr = (
self.log_f_estimator(u, theta_init))
for s in range(1, n_sample):
            ## Update u keeping theta fixed using ell-SS
            # the second output of the estimator will be equal to
            # _cached_res_curr as theta is not being changed
            log_f_func = lambda v: (
                self.log_f_estimator(v, thetas[s-1], self._cached_res_curr)[0])
u, log_f_est_curr = self.elliptical_slice_sample_u_given_theta(
u, log_f_est_curr, log_f_func)
## Update theta given current u using SS
            # self._cached_res_curr is also updated in this method
thetas[s], log_f_est_curr = self.slice_sample_theta_gvn_u(
thetas[s-1].copy(), log_f_est_curr, u)
return thetas
class APMMetIndPlusSeqSliceSampler(BaseAPMMetIndPlusSliceSampler):
""" Auxiliary pseudo-marginal MI + sequential-SS sampler.
Sampler in the auxiliary pseudo-marginal MCMC framework which uses
Metropolis independence updates for the random draws and sequential
(over axes) slice sampling in updates for parameter state.
"""
def __init__(self, log_f_estimator, u_sampler, prng, ws, max_steps_out=0,
max_slice_iters=1000):
""" Auxiliary pseudo-marginal MI + sequential-SS sampler.
Parameters
----------
log_f_estimator : function or callable object
Function which returns an unbiased estimate of the log density
of the target distribution given current parameter state and
random draws. Should have a call signature::
                log_f_est, cached_res_out = (
                    log_f_estimator(u, theta[, cached_res_in]))
            where ``u`` is the vector of auxiliary random draws used in the
            density estimator, ``theta`` is the state vector (as an ndarray)
            to estimate the density at and ``cached_res_in`` is an optional
            input of cached intermediate results, deterministically
            calculated from ``theta`` and stored from a previous call with
            the same ``theta`` value, potentially speeding up the estimate.
            ``log_f_est`` is the calculated log-density estimate and
            ``cached_res_out`` are the intermediate cached results
            deterministically calculated from the specified ``theta``, which
            can be passed to subsequent calls to potentially speed up
            further estimates of the log density for this ``theta`` value
            (if ``cached_res_in`` was specified then
            ``cached_res_out == cached_res_in``).
u_sampler : function or callable object
Function which returns an independent sample from the 'prior'
distribution on the random draws :math:`q(u)`.
prng : RandomState
Pseudo-random number generator object (either an instance of a
``numpy`` ``RandomState`` or an object with an equivalent
interface) used to randomly sample accept decisions in MH accept
step.
ws : ndarray
Initial slice bracket widths to use when performing slice sampling
sequentially on parameter state vector dimensions (i.e. `ws`
should be same length as parameter state vector with a per
dimension slice bracket width parameter being specified).
max_steps_out : integer
Maximum number of stepping out iterations to perform during slice
sampling update (default 0).
max_slice_iters : integer
Maximum number of slice shrinking iterations to perform.
"""
super(APMMetIndPlusSeqSliceSampler, self).__init__(
log_f_estimator, u_sampler, prng, max_steps_out, max_slice_iters)
self.ws = ws
def slice_sample_theta_gvn_u(self, theta, log_f_est, u):
""" Perform seq-SS on conditional density of state given random draws.
Performs slice sampling on conditional target density of each dimension
of parameter state given rest of parameter state vector and auxiliary
        random draw variables, the dimension updates being performed
sequentially in a fixed ordinal ordering.
"""
for j in range(len(theta)):
x_curr = theta[j]
def log_f_func(x):
# keep saving cached results from new estimator evaluations
# final call of log_f_func in slice sampling routine will
# always be accepted update so cached results will be correct
log_f_est_, self._cached_res_curr = (
self.log_f_estimator(u, np.r_[theta[:j], x, theta[j+1:]])
)
return log_f_est_
x_new, log_f_est = self.slice_step(
x_curr, log_f_est, log_f_func, self.ws[j])
theta[j] = x_new
return theta, log_f_est
class APMMetIndPlusRandDirSliceSampler(BaseAPMMetIndPlusSliceSampler):
""" Auxiliary pseudo-marginal MI + random-direction-SS sampler.
Sampler in the auxiliary pseudo-marginal MCMC framework which uses
Metropolis independence updates for the random draws and slice sampling
along a random direction in updates for parameter state.
"""
def __init__(self, log_f_estimator, u_sampler, prng, slc_dir_and_w_sampler,
max_steps_out=0, max_slice_iters=1000):
""" Auxiliary pseudo-marginal MI + random-direction-SS sampler.
Parameters
----------
log_f_estimator : function or callable object
Function which returns an unbiased estimate of the log density
of the target distribution given current parameter state and
random draws. Should have a call signature::
                log_f_est, cached_res_out =
                    log_f_estimator(u, theta[, cached_res_in])

            where ``u`` is the vector of auxiliary random draws used in the
            density estimator and ``theta`` is the state vector (as ndarray)
            at which to estimate the density. ``cached_res_in`` is an
            optional input of intermediate results, deterministically
            calculated from ``theta`` and stored from a previous call at
            the same ``theta`` value, which may speed up the estimate.
            ``log_f_est`` is the calculated log-density estimate and
            ``cached_res_out`` are intermediate cached results
            deterministically calculated from the specified ``theta``
            which can be used in subsequent calls to potentially speed up
            further estimates of the log density for this ``theta`` value
            (if ``cached_res_in`` was specified then
            ``cached_res_out == cached_res_in``).
u_sampler : function or callable object
Function which returns an independent sample from the 'prior'
distribution on the random draws :math:`q(u)`.
prng : RandomState
Pseudo-random number generator object (either an instance of a
``numpy`` ``RandomState`` or an object with an equivalent
interface) used to randomly sample accept decisions in MH accept
step.
slc_dir_and_w_sampler : function or callable object
            Function which returns a vector specifying a random direction
            in the parameter state space along which to slice sample,
            together with a corresponding initial slice bracket width for
            this direction.
Should have a call signature::
d, w = slc_dir_and_w_sampler()
where ``d`` is a ndarray of same dimension as the parameter state
and ``w`` is a (positive) floating point value specifying the
corresponding initial slice bracket width parameter.
max_steps_out : integer
Maximum number of stepping out iterations to perform during slice
sampling update (default 0).
max_slice_iters : integer
Maximum number of slice shrinking iterations to perform.
"""
super(APMMetIndPlusRandDirSliceSampler, self).__init__(
log_f_estimator, u_sampler, prng, max_steps_out, max_slice_iters)
self.slc_dir_and_w_sampler = slc_dir_and_w_sampler
def slice_sample_theta_gvn_u(self, theta, log_f_est, u):
""" Perform rd-SS on conditional density of state given random draws.
Performs slice sampling along a random direction on conditional target
density of parameter state given auxiliary random draw variables.
"""
d, w = self.slc_dir_and_w_sampler()
def log_f_func(x):
# keep saving cached results from new estimator evaluations
# final call of log_f_func in slice sampling routine will
# always be accepted update so cached results will be correct
log_f_est_, self._cached_res_curr = (
self.log_f_estimator(u, theta + x * d)
)
return log_f_est_
x_new, log_f_est = self.slice_step(0., log_f_est, log_f_func, w)
return theta + x_new * d, log_f_est
class APMEllSSPlusRandDirSliceSampler(BaseAPMEllSSPlusSliceSampler):
""" Auxiliary pseudo-marginal ESS + random-direction-SS sampler.
Sampler in the auxiliary pseudo-marginal MCMC framework which uses
elliptical slice sampling updates for the random draws and slice sampling
along a random direction in updates for parameter state.
It is implicitly assumed the 'prior' :math:`q(u)` on the random draws is
Gaussian in this case.
"""
def __init__(self, log_f_estimator, u_sampler, prng, slc_dir_and_w_sampler,
max_steps_out=0, max_slice_iters=1000):
""" Auxiliary pseudo-marginal ESS + random-direction-SS sampler.
Parameters
----------
log_f_estimator : function or callable object
Function which returns an unbiased estimate of the log density
of the target distribution given current parameter state and
random draws. Should have a call signature::
                log_f_est, cached_res_out =
                    log_f_estimator(u, theta[, cached_res_in])

            where ``u`` is the vector of auxiliary random draws used in the
            density estimator and ``theta`` is the state vector (as ndarray)
            at which to estimate the density. ``cached_res_in`` is an
            optional input of intermediate results, deterministically
            calculated from ``theta`` and stored from a previous call at
            the same ``theta`` value, which may speed up the estimate.
            ``log_f_est`` is the calculated log-density estimate and
            ``cached_res_out`` are intermediate cached results
            deterministically calculated from the specified ``theta``
            which can be used in subsequent calls to potentially speed up
            further estimates of the log density for this ``theta`` value
            (if ``cached_res_in`` was specified then
            ``cached_res_out == cached_res_in``).
u_sampler : function or callable object
Function which returns an independent sample from the 'prior'
distribution on the random draws :math:`q(u)`.
prng : RandomState
Pseudo-random number generator object (either an instance of a
``numpy`` ``RandomState`` or an object with an equivalent
interface) used to randomly sample accept decisions in MH accept
step.
slc_dir_and_w_sampler : function or callable object
            Function which returns a vector specifying a random direction
            in the parameter state space along which to slice sample,
            together with a corresponding initial slice bracket width for
            this direction.
Should have a call signature::
d, w = slc_dir_and_w_sampler()
where ``d`` is a ndarray of same dimension as the parameter state
and ``w`` is a (positive) floating point value specifying the
corresponding initial slice bracket width parameter.
max_steps_out : integer
Maximum number of stepping out iterations to perform during slice
sampling update (default 0).
max_slice_iters : integer
Maximum number of slice shrinking iterations to perform (common
            to both elliptical and linear slice sampling updates).
"""
super(APMEllSSPlusRandDirSliceSampler, self).__init__(
log_f_estimator, u_sampler, prng, max_steps_out, max_slice_iters)
self.slc_dir_and_w_sampler = slc_dir_and_w_sampler
def slice_sample_theta_gvn_u(self, theta, log_f_est, u):
""" Perform rd-SS on conditional density of state given random draws.
Performs slice sampling along a random direction on conditional target
density of parameter state given auxiliary random draw variables.
"""
d, w = self.slc_dir_and_w_sampler()
def log_f_func(x):
# keep saving cached results from new estimator evaluations
# final call of log_f_func in slice sampling routine will
# always be accepted update so cached results will be correct
log_f_est_, self._cached_res_curr = (
self.log_f_estimator(u, theta + x * d)
)
return log_f_est_
x_new, log_f_est = self.slice_step(0., log_f_est, log_f_func, w)
return theta + x_new * d, log_f_est
class APMEllSSPlusEllSSSampler(BaseAPMEllSSPlusSliceSampler):
""" Auxiliary pseudo-marginal ESS + ESS sampler.
Sampler in the auxiliary pseudo-marginal MCMC framework which uses
elliptical slice sampling updates for both the random draws and parameter
states.
It is implicitly assumed the prior :math:`q(u)` on the random draws and
the prior on the parameters :math:`p(\\theta)` are both Gaussian.
"""
def __init__(self, log_f_estimator, u_sampler, theta_sampler, prng,
max_slice_iters=1000):
""" Auxiliary pseudo-marginal ESS + ESS sampler.
Parameters
----------
log_f_estimator : function or callable object
Function which returns an unbiased estimate of the log density
of the target likelihood (i.e. without Gaussian prior on parameter
state) given current parameter state and random draws. Should have
a call signature::
                log_f_est, cached_res_out =
                    log_f_estimator(u, theta[, cached_res_in])

            where ``u`` is the vector of auxiliary random draws used in the
            density estimator and ``theta`` is the state vector (as ndarray)
            at which to estimate the density. ``cached_res_in`` is an
            optional input of intermediate results, deterministically
            calculated from ``theta`` and stored from a previous call at
            the same ``theta`` value, which may speed up the estimate.
            ``log_f_est`` is the calculated log-density estimate and
            ``cached_res_out`` are intermediate cached results
            deterministically calculated from the specified ``theta``
            which can be used in subsequent calls to potentially speed up
            further estimates of the log density for this ``theta`` value
            (if ``cached_res_in`` was specified then
            ``cached_res_out == cached_res_in``).
u_sampler : function or callable object
Function which returns an independent sample from the Gaussian
prior distribution on the random draws :math:`q(u)`.
theta_sampler : function or callable object
Function which returns an independent sample from the Gaussian
prior distribution on the parameters :math:`p(\\theta)`.
prng : RandomState
Pseudo-random number generator object (either an instance of a
``numpy`` ``RandomState`` or an object with an equivalent
interface) used to randomly sample accept decisions in MH accept
step.
max_slice_iters : integer
Maximum number of slice shrinking iterations to perform.
"""
super(APMEllSSPlusEllSSSampler, self).__init__(
log_f_estimator, u_sampler, prng, None, max_slice_iters)
self.theta_sampler = theta_sampler
def slice_sample_theta_gvn_u(self, theta, log_f_est, u):
""" Perform ESS on conditional density of state given random draws.
Performs elliptical slice sampling on conditional target
density of parameter state given auxiliary random draw variables.
"""
def log_f_func(theta):
# keep saving cached results from new estimator evaluations
# final call of log_f_func in slice sampling routine will
# always be accepted update so cached results will be correct
log_f_est_, self._cached_res_curr = (
self.log_f_estimator(u, theta)
)
return log_f_est_
v = self.theta_sampler()
return mcmc.elliptical_slice_step(
theta, log_f_est, log_f_func, self.prng, v, self.max_slice_iters)
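The ``slice_step`` routine the random-direction samplers call is defined elsewhere in this module. As a self-contained illustration of what `slice_sample_theta_gvn_u` does, here is a minimal shrinkage-only univariate slice step (the `max_steps_out=0` case) applied along a random direction on a toy 2-D Gaussian; `slice_step_1d` and the toy target are illustrative sketches, not this module's API:

```python
import numpy as np

def slice_step_1d(x0, log_f_x0, log_f_func, w, prng, max_iters=1000):
    # One univariate slice-sampling update, shrinkage only (no stepping
    # out), mirroring the max_steps_out=0 default used above.
    log_y = log_f_x0 + np.log(prng.uniform())  # log slice height
    left = x0 - w * prng.uniform()             # randomly placed bracket
    right = left + w
    for _ in range(max_iters):
        x_new = prng.uniform(left, right)
        log_f_new = log_f_func(x_new)
        if log_f_new > log_y:
            return x_new, log_f_new
        # shrink the bracket towards the current point
        if x_new < x0:
            left = x_new
        else:
            right = x_new
    raise RuntimeError('maximum slice iterations exceeded')

# Random-direction update on a 2-D standard normal: slice sample the
# scalar offset x along direction d, as slice_sample_theta_gvn_u does.
prng = np.random.RandomState(1)
theta = np.zeros(2)
log_f = lambda t: -0.5 * float(t @ t)
d = prng.normal(size=2)
d /= np.linalg.norm(d)
x_new, log_f_new = slice_step_1d(
    0., log_f(theta), lambda x: log_f(theta + x * d), w=2., prng=prng)
theta = theta + x_new * d
```

The key point is that the accepted `log_f_new` is exactly the log density at the new state, which is why the samplers above can carry it forward instead of re-evaluating the estimator.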
# File: python-logging/mymodule/sub/__init__.py (repo: cgt212/example-code, Apache-2.0)
from mymodule.sub.thing import Thing
# File: ptfit/__init__.py (repo: msyriac/ptfit, BSD-2-Clause)
from .ptfit import *
# File: profiles_api/admin.py (repo: csilouanos/profiles-reset-api, MIT)
from django.contrib import admin
from profiles_api import models
# Register the profile models with the Django admin site
admin.site.register(models.UserProfile)
admin.site.register(models.ProfileFeedItem)
# File: tests/test_regex2.py (repo: mannuan/dspider, Apache-2.0)
# -*- coding:utf-8 -*-
import re
_str = '奥斯卡级hi空间大撒545谎单价(8)'  # sample scraped text ending in a parenthesized suffix
# drop the trailing parenthesized part, keeping the text before it
print(re.sub(r'([^(]+)\([^)]+\)', r'\1', _str))
# keep only the content inside the parentheses
print(re.sub(r'[^(]+\(([^)]+)\)', r'\1', _str))
# File: dlkit/json_/assessment_authoring/managers.py (repo: UOC/dlkit, MIT)
"""JSON implementations of assessment.authoring managers."""
# pylint: disable=no-init
# Numerous classes don't require __init__.
# pylint: disable=too-many-public-methods,too-few-public-methods
# Number of methods are defined in specification
# pylint: disable=protected-access
# Access to protected methods allowed in package json package scope
# pylint: disable=too-many-ancestors
# Inheritance defined in specification
from . import profile
from . import sessions
from .. import utilities
from ..osid import managers as osid_managers
from ..primitives import Type
from ..type.objects import TypeList
from ..utilities import get_registry
from dlkit.abstract_osid.osid import errors
from dlkit.manager_impls.assessment_authoring import managers as assessment_authoring_managers
class AssessmentAuthoringProfile(osid_managers.OsidProfile, assessment_authoring_managers.AssessmentAuthoringProfile):
"""The ``AssessmentAuthoringProfile`` describes the interoperability among assessment authoring services."""
def supports_assessment_part_lookup(self):
"""Tests if looking up assessment part is supported.
return: (boolean) - ``true`` if assessment part lookup is
supported, ``false`` otherwise
*compliance: mandatory -- This method must be implemented.*
"""
# Implemented from template for
# osid.resource.ResourceProfile.supports_resource_lookup
return 'supports_assessment_part_lookup' in profile.SUPPORTS
def supports_assessment_part_query(self):
"""Tests if querying assessment part is supported.
return: (boolean) - ``true`` if assessment part query is
supported, ``false`` otherwise
*compliance: mandatory -- This method must be implemented.*
"""
# Implemented from template for
# osid.resource.ResourceProfile.supports_resource_lookup
return 'supports_assessment_part_query' in profile.SUPPORTS
def supports_assessment_part_admin(self):
"""Tests if an assessment part administrative service is supported.
return: (boolean) - ``true`` if assessment part administration
is supported, ``false`` otherwise
*compliance: mandatory -- This method must be implemented.*
"""
# Implemented from template for
# osid.resource.ResourceProfile.supports_resource_lookup
return 'supports_assessment_part_admin' in profile.SUPPORTS
def supports_assessment_part_bank(self):
"""Tests if an assessment part bank lookup service is supported.
return: (boolean) - ``true`` if an assessment part bank lookup
service is supported, ``false`` otherwise
*compliance: mandatory -- This method must be implemented.*
"""
# Implemented from template for
# osid.resource.ResourceProfile.supports_resource_lookup
return 'supports_assessment_part_bank' in profile.SUPPORTS
def supports_assessment_part_bank_assignment(self):
"""Tests if an assessment part bank service is supported.
return: (boolean) - ``true`` if assessment part bank assignment
service is supported, ``false`` otherwise
*compliance: mandatory -- This method must be implemented.*
"""
# Implemented from template for
# osid.resource.ResourceProfile.supports_resource_lookup
return 'supports_assessment_part_bank_assignment' in profile.SUPPORTS
def supports_assessment_part_item(self):
"""Tests if an assessment part item service is supported for looking up assessment part and item mappings.
return: (boolean) - ``true`` if assessment part item service is
supported, ``false`` otherwise
*compliance: mandatory -- This method must be implemented.*
"""
# Implemented from template for
# osid.resource.ResourceProfile.supports_resource_lookup
return 'supports_assessment_part_item' in profile.SUPPORTS
def supports_assessment_part_item_design(self):
"""Tests if an assessment part item design session is supported.
return: (boolean) - ``true`` if an assessment part item design
service is supported, ``false`` otherwise
*compliance: mandatory -- This method must be implemented.*
"""
# Implemented from template for
# osid.resource.ResourceProfile.supports_resource_lookup
return 'supports_assessment_part_item_design' in profile.SUPPORTS
def supports_sequence_rule_lookup(self):
"""Tests if looking up sequence rule is supported.
return: (boolean) - ``true`` if sequence rule lookup is
supported, ``false`` otherwise
*compliance: mandatory -- This method must be implemented.*
"""
# Implemented from template for
# osid.resource.ResourceProfile.supports_resource_lookup
return 'supports_sequence_rule_lookup' in profile.SUPPORTS
def supports_sequence_rule_admin(self):
"""Tests if a sequence rule administrative service is supported.
return: (boolean) - ``true`` if sequence rule administration is
supported, ``false`` otherwise
*compliance: mandatory -- This method must be implemented.*
"""
# Implemented from template for
# osid.resource.ResourceProfile.supports_resource_lookup
return 'supports_sequence_rule_admin' in profile.SUPPORTS
def get_assessment_part_record_types(self):
"""Gets the supported ``AssessmentPart`` record types.
return: (osid.type.TypeList) - a list containing the supported
``AssessmentPart`` record types
*compliance: mandatory -- This method must be implemented.*
"""
# Implemented from template for
# osid.resource.ResourceProfile.get_resource_record_types_template
record_type_maps = get_registry('ASSESSMENT_PART_RECORD_TYPES', self._runtime)
record_types = []
for record_type_map in record_type_maps:
record_types.append(Type(**record_type_maps[record_type_map]))
return TypeList(record_types)
assessment_part_record_types = property(fget=get_assessment_part_record_types)
def get_assessment_part_search_record_types(self):
"""Gets the supported ``AssessmentPart`` search record types.
return: (osid.type.TypeList) - a list containing the supported
``AssessmentPart`` search record types
*compliance: mandatory -- This method must be implemented.*
"""
# Implemented from template for
# osid.resource.ResourceProfile.get_resource_record_types_template
record_type_maps = get_registry('ASSESSMENT_PART_SEARCH_RECORD_TYPES', self._runtime)
record_types = []
for record_type_map in record_type_maps:
record_types.append(Type(**record_type_maps[record_type_map]))
return TypeList(record_types)
assessment_part_search_record_types = property(fget=get_assessment_part_search_record_types)
def get_sequence_rule_record_types(self):
"""Gets the supported ``SequenceRule`` record types.
return: (osid.type.TypeList) - a list containing the supported
``SequenceRule`` record types
*compliance: mandatory -- This method must be implemented.*
"""
# Implemented from template for
# osid.resource.ResourceProfile.get_resource_record_types_template
record_type_maps = get_registry('SEQUENCE_RULE_RECORD_TYPES', self._runtime)
record_types = []
for record_type_map in record_type_maps:
record_types.append(Type(**record_type_maps[record_type_map]))
return TypeList(record_types)
sequence_rule_record_types = property(fget=get_sequence_rule_record_types)
def get_sequence_rule_search_record_types(self):
"""Gets the supported ``SequenceRule`` search record types.
return: (osid.type.TypeList) - a list containing the supported
``SequenceRule`` search record types
*compliance: mandatory -- This method must be implemented.*
"""
# Implemented from template for
# osid.resource.ResourceProfile.get_resource_record_types_template
record_type_maps = get_registry('SEQUENCE_RULE_SEARCH_RECORD_TYPES', self._runtime)
record_types = []
for record_type_map in record_type_maps:
record_types.append(Type(**record_type_maps[record_type_map]))
return TypeList(record_types)
sequence_rule_search_record_types = property(fget=get_sequence_rule_search_record_types)
def get_sequence_rule_enabler_record_types(self):
"""Gets the supported ``SequenceRuleEnabler`` record types.
return: (osid.type.TypeList) - a list containing the supported
``SequenceRuleEnabler`` record types
*compliance: mandatory -- This method must be implemented.*
"""
# Implemented from template for
# osid.resource.ResourceProfile.get_resource_record_types_template
record_type_maps = get_registry('SEQUENCE_RULE_ENABLER_RECORD_TYPES', self._runtime)
record_types = []
for record_type_map in record_type_maps:
record_types.append(Type(**record_type_maps[record_type_map]))
return TypeList(record_types)
sequence_rule_enabler_record_types = property(fget=get_sequence_rule_enabler_record_types)
def get_sequence_rule_enabler_search_record_types(self):
"""Gets the supported ``SequenceRuleEnabler`` search record types.
return: (osid.type.TypeList) - a list containing the supported
``SequenceRuleEnabler`` search record types
*compliance: mandatory -- This method must be implemented.*
"""
# Implemented from template for
# osid.resource.ResourceProfile.get_resource_record_types_template
record_type_maps = get_registry('SEQUENCE_RULE_ENABLER_SEARCH_RECORD_TYPES', self._runtime)
record_types = []
for record_type_map in record_type_maps:
record_types.append(Type(**record_type_maps[record_type_map]))
return TypeList(record_types)
sequence_rule_enabler_search_record_types = property(fget=get_sequence_rule_enabler_search_record_types)
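Every ``get_*_record_types`` method above follows the same registry template: look up a named record-type map, wrap each entry in a ``Type``, and return the list. A simplified, self-contained mirror of that template (the toy ``get_registry``, ``Type``, and registry contents below are invented stand-ins for dlkit's real implementations):

```python
def get_registry(name, runtime=None):
    # Toy registry; the real one is resolved from the runtime configuration.
    registries = {
        'ASSESSMENT_PART_RECORD_TYPES': {
            'simple': {'authority': 'EXAMPLE.ORG',
                       'namespace': 'assessment-part-record-type',
                       'identifier': 'simple'}}}
    return registries.get(name, {})

class Type(object):
    # Minimal stand-in for dlkit's primitives.Type
    def __init__(self, authority, namespace, identifier, **kwargs):
        self.authority = authority
        self.namespace = namespace
        self.identifier = identifier

record_type_maps = get_registry('ASSESSMENT_PART_RECORD_TYPES')
# Iterate the map's keys and build a Type from each record-type map,
# exactly as the methods above do before wrapping the list in a TypeList.
record_types = [Type(**record_type_maps[k]) for k in record_type_maps]
```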
class AssessmentAuthoringManager(osid_managers.OsidManager, AssessmentAuthoringProfile, assessment_authoring_managers.AssessmentAuthoringManager):
"""The assessment authoring manager provides access to assessment authoring sessions and provides interoperability tests for various aspects of this service.
The sessions included in this manager are:
    * ``AssessmentPartLookupSession:`` a session to retrieve
      assessment parts
    * ``AssessmentPartQuerySession:`` a session to query for
      assessment parts
    * ``AssessmentPartSearchSession:`` a session to search for
      assessment parts
    * ``AssessmentPartAdminSession:`` a session to create and delete
      assessment parts
    * ``AssessmentPartNotificationSession:`` a session to receive
      notifications pertaining to assessment part changes
    * ``AssessmentPartBankSession:`` a session to look up assessment
      part to bank mappings
    * ``AssessmentPartBankAssignmentSession:`` a session to manage
      assessment part to bank mappings
    * ``AssessmentPartSmartBankSession:`` a session to manage dynamic
      banks of assessment parts
    * ``AssessmentPartItemSession:`` a session to look up assessment
      part to item mappings
    * ``AssessmentPartItemDesignSession:`` a session to map items to
      assessment parts
    * ``SequenceRuleLookupSession:`` a session to retrieve sequence
      rules
    * ``SequenceRuleQuerySession:`` a session to query for sequence
      rules
    * ``SequenceRuleSearchSession:`` a session to search for sequence
      rules
    * ``SequenceRuleAdminSession:`` a session to create and delete
      sequence rules
    * ``SequenceRuleNotificationSession:`` a session to receive
      notifications pertaining to sequence rule changes
    * ``SequenceRuleBankSession:`` a session to look up sequence rule
      to bank mappings
    * ``SequenceRuleBankAssignmentSession:`` a session to manage
      sequence rule to bank mappings
    * ``SequenceRuleSmartBankSession:`` a session to manage dynamic
      banks of sequence rules
    * ``SequenceRuleEnablerLookupSession:`` a session to retrieve
      sequence rule enablers
    * ``SequenceRuleEnablerQuerySession:`` a session to query for
      sequence rule enablers
    * ``SequenceRuleEnablerSearchSession:`` a session to search for
      sequence rule enablers
    * ``SequenceRuleEnablerAdminSession:`` a session to create and
      delete sequence rule enablers
    * ``SequenceRuleEnablerNotificationSession:`` a session to receive
      notifications pertaining to sequence rule enabler changes
    * ``SequenceRuleEnablerBankSession:`` a session to look up
      sequence rule enabler to bank mappings
    * ``SequenceRuleEnablerBankAssignmentSession:`` a session to
      manage sequence rule enabler to bank mappings
    * ``SequenceRuleEnablerSmartBankSession:`` a session to manage
      dynamic banks of sequence rule enablers
    * ``SequenceRuleEnablerRuleLookupSession:`` a session to look up
      sequence rule enabler mappings
    * ``SequenceRuleEnablerRuleApplicationSession:`` a session to
      apply sequence rule enablers
"""
def __init__(self):
osid_managers.OsidManager.__init__(self)
@utilities.remove_null_proxy_kwarg
def get_assessment_part_lookup_session(self):
"""Gets the ``OsidSession`` associated with the assessment part lookup service.
return: (osid.assessment.authoring.AssessmentPartLookupSession)
- an ``AssessmentPartLookupSession``
raise: OperationFailed - unable to complete request
raise: Unimplemented - ``supports_assessment_part_lookup()`` is
``false``
*compliance: optional -- This method must be implemented if
``supports_assessment_part_lookup()`` is ``true``.*
"""
if not self.supports_assessment_part_lookup():
raise errors.Unimplemented()
# pylint: disable=no-member
return sessions.AssessmentPartLookupSession(runtime=self._runtime)
assessment_part_lookup_session = property(fget=get_assessment_part_lookup_session)
@utilities.remove_null_proxy_kwarg
@utilities.arguments_not_none
def get_assessment_part_lookup_session_for_bank(self, bank_id):
"""Gets the ``OsidSession`` associated with the assessment part lookup service for the given bank.
arg: bank_id (osid.id.Id): the ``Id`` of the ``Bank``
return: (osid.assessment.authoring.AssessmentPartLookupSession)
- an ``AssessmentPartLookupSession``
raise: NotFound - no ``Bank`` found by the given ``Id``
raise: NullArgument - ``bank_id`` is ``null``
raise: OperationFailed - unable to complete request
raise: Unimplemented - ``supports_assessment_part_lookup()`` or
``supports_visible_federation()`` is ``false``
*compliance: optional -- This method must be implemented if
``supports_assessment_part_lookup()`` and
``supports_visible_federation()`` are ``true``.*
"""
if not self.supports_assessment_part_lookup():
raise errors.Unimplemented()
##
# Also include check to see if the catalog Id is found otherwise raise errors.NotFound
##
# pylint: disable=no-member
return sessions.AssessmentPartLookupSession(bank_id, runtime=self._runtime)
@utilities.remove_null_proxy_kwarg
def get_assessment_part_query_session(self):
"""Gets the ``OsidSession`` associated with the assessment part query service.
return: (osid.assessment.authoring.AssessmentPartQuerySession) -
an ``AssessmentPartQuerySession``
raise: OperationFailed - unable to complete request
raise: Unimplemented - ``supports_assessment_part_query()`` is
``false``
*compliance: optional -- This method must be implemented if
``supports_assessment_part_query()`` is ``true``.*
"""
if not self.supports_assessment_part_query():
raise errors.Unimplemented()
# pylint: disable=no-member
return sessions.AssessmentPartQuerySession(runtime=self._runtime)
assessment_part_query_session = property(fget=get_assessment_part_query_session)
@utilities.remove_null_proxy_kwarg
@utilities.arguments_not_none
def get_assessment_part_query_session_for_bank(self, bank_id):
"""Gets the ``OsidSession`` associated with the assessment part query service for the given bank.
arg: bank_id (osid.id.Id): the ``Id`` of the ``Bank``
return: (osid.assessment.authoring.AssessmentPartQuerySession) -
an ``AssessmentPartQuerySession``
raise: NotFound - no ``Bank`` found by the given ``Id``
raise: NullArgument - ``bank_id`` is ``null``
raise: OperationFailed - unable to complete request
raise: Unimplemented - ``supports_assessment_part_query()`` or
``supports_visible_federation()`` is ``false``
*compliance: optional -- This method must be implemented if
``supports_assessment_part_query()`` and
``supports_visible_federation()`` are ``true``.*
"""
if not self.supports_assessment_part_query():
raise errors.Unimplemented()
##
# Also include check to see if the catalog Id is found otherwise raise errors.NotFound
##
# pylint: disable=no-member
return sessions.AssessmentPartQuerySession(bank_id, runtime=self._runtime)
@utilities.remove_null_proxy_kwarg
def get_assessment_part_admin_session(self):
"""Gets the ``OsidSession`` associated with the assessment part administration service.
return: (osid.assessment.authoring.AssessmentPartAdminSession) -
an ``AssessmentPartAdminSession``
raise: OperationFailed - unable to complete request
raise: Unimplemented - ``supports_assessment_part_admin()`` is
``false``
*compliance: optional -- This method must be implemented if
``supports_assessment_part_admin()`` is ``true``.*
"""
if not self.supports_assessment_part_admin():
raise errors.Unimplemented()
# pylint: disable=no-member
return sessions.AssessmentPartAdminSession(runtime=self._runtime)
assessment_part_admin_session = property(fget=get_assessment_part_admin_session)
@utilities.remove_null_proxy_kwarg
@utilities.arguments_not_none
def get_assessment_part_admin_session_for_bank(self, bank_id):
"""Gets the ``OsidSession`` associated with the assessment part administration service for the given bank.
arg: bank_id (osid.id.Id): the ``Id`` of the ``Bank``
return: (osid.assessment.authoring.AssessmentPartAdminSession) -
an ``AssessmentPartAdminSession``
raise: NotFound - no ``Bank`` found by the given ``Id``
raise: NullArgument - ``bank_id`` is ``null``
raise: OperationFailed - unable to complete request
raise: Unimplemented - ``supports_assessment_part_admin()`` or
``supports_visible_federation()`` is ``false``
*compliance: optional -- This method must be implemented if
``supports_assessment_part_admin()`` and
``supports_visible_federation()`` are ``true``.*
"""
if not self.supports_assessment_part_admin():
raise errors.Unimplemented()
##
# Also include check to see if the catalog Id is found otherwise raise errors.NotFound
##
# pylint: disable=no-member
return sessions.AssessmentPartAdminSession(bank_id, runtime=self._runtime)
@utilities.remove_null_proxy_kwarg
def get_assessment_part_bank_session(self):
"""Gets the ``OsidSession`` to lookup assessment part/bank mappings for assessment parts.
return: (osid.assessment.authoring.AssessmentPartBankSession) -
an ``AssessmentPartBankSession``
raise: OperationFailed - unable to complete request
raise: Unimplemented - ``supports_assessment_part_bank()`` is
``false``
*compliance: optional -- This method must be implemented if
``supports_assessment_part_bank()`` is ``true``.*
"""
if not self.supports_assessment_part_bank():
raise errors.Unimplemented()
# pylint: disable=no-member
return sessions.AssessmentPartBankSession(runtime=self._runtime)
assessment_part_bank_session = property(fget=get_assessment_part_bank_session)
@utilities.remove_null_proxy_kwarg
def get_assessment_part_bank_assignment_session(self):
"""Gets the ``OsidSession`` associated with assigning assessment part to bank.
return:
(osid.assessment.authoring.AssessmentPartBankAssignmentS
ession) - an ``AssessmentPartBankAssignmentSession``
raise: OperationFailed - unable to complete request
raise: Unimplemented -
``supports_assessment_part_bank_assignment()`` is
``false``
*compliance: optional -- This method must be implemented if
``supports_assessment_part_bank_assignment()`` is ``true``.*
"""
if not self.supports_assessment_part_bank_assignment():
raise errors.Unimplemented()
# pylint: disable=no-member
return sessions.AssessmentPartBankAssignmentSession(runtime=self._runtime)
assessment_part_bank_assignment_session = property(fget=get_assessment_part_bank_assignment_session)
@utilities.remove_null_proxy_kwarg
def get_sequence_rule_lookup_session(self):
"""Gets the ``OsidSession`` associated with the sequence rule lookup service.
return: (osid.assessment.authoring.SequenceRuleLookupSession) -
a ``SequenceRuleLookupSession``
raise: OperationFailed - unable to complete request
raise: Unimplemented - ``supports_sequence_rule_lookup()`` is
``false``
*compliance: optional -- This method must be implemented if
``supports_sequence_rule_lookup()`` is ``true``.*
"""
if not self.supports_sequence_rule_lookup():
raise errors.Unimplemented()
# pylint: disable=no-member
return sessions.SequenceRuleLookupSession(runtime=self._runtime)
sequence_rule_lookup_session = property(fget=get_sequence_rule_lookup_session)
@utilities.remove_null_proxy_kwarg
@utilities.arguments_not_none
def get_sequence_rule_lookup_session_for_bank(self, bank_id):
"""Gets the ``OsidSession`` associated with the sequence rule lookup service for the given bank.
arg: bank_id (osid.id.Id): the ``Id`` of the ``Bank``
return: (osid.assessment.authoring.SequenceRuleLookupSession) -
a ``SequenceRuleLookupSession``
raise: NotFound - no ``Bank`` found by the given ``Id``
raise: NullArgument - ``bank_id`` is ``null``
raise: OperationFailed - unable to complete request
raise: Unimplemented - ``supports_sequence_rule_lookup()`` or
``supports_visible_federation()`` is ``false``
*compliance: optional -- This method must be implemented if
``supports_sequence_rule_lookup()`` and
``supports_visible_federation()`` are ``true``.*
"""
if not self.supports_sequence_rule_lookup():
raise errors.Unimplemented()
##
# Also include check to see if the catalog Id is found otherwise raise errors.NotFound
##
# pylint: disable=no-member
return sessions.SequenceRuleLookupSession(bank_id, runtime=self._runtime)
@utilities.remove_null_proxy_kwarg
def get_sequence_rule_admin_session(self):
"""Gets the ``OsidSession`` associated with the sequence rule administration service.
return: (osid.assessment.authoring.SequenceRuleAdminSession) - a
``SequenceRuleAdminSession``
raise: OperationFailed - unable to complete request
raise: Unimplemented - ``supports_sequence_rule_admin()`` is
``false``
*compliance: optional -- This method must be implemented if
``supports_sequence_rule_admin()`` is ``true``.*
"""
if not self.supports_sequence_rule_admin():
raise errors.Unimplemented()
# pylint: disable=no-member
return sessions.SequenceRuleAdminSession(runtime=self._runtime)
sequence_rule_admin_session = property(fget=get_sequence_rule_admin_session)
@utilities.remove_null_proxy_kwarg
@utilities.arguments_not_none
def get_sequence_rule_admin_session_for_bank(self, bank_id):
"""Gets the ``OsidSession`` associated with the sequence rule administration service for the given bank.
arg: bank_id (osid.id.Id): the ``Id`` of the ``Bank``
return: (osid.assessment.authoring.SequenceRuleAdminSession) - a
``SequenceRuleAdminSession``
raise: NotFound - no ``Bank`` found by the given ``Id``
raise: NullArgument - ``bank_id`` is ``null``
raise: OperationFailed - unable to complete request
raise: Unimplemented - ``supports_sequence_rule_admin()`` or
``supports_visible_federation()`` is ``false``
*compliance: optional -- This method must be implemented if
``supports_sequence_rule_admin()`` and
``supports_visible_federation()`` are ``true``.*
"""
if not self.supports_sequence_rule_admin():
raise errors.Unimplemented()
##
# Also include check to see if the catalog Id is found otherwise raise errors.NotFound
##
# pylint: disable=no-member
return sessions.SequenceRuleAdminSession(bank_id, runtime=self._runtime)
@utilities.remove_null_proxy_kwarg
@utilities.arguments_not_none
def get_assessment_part_item_session(self, *args, **kwargs):
"""Gets the ``OsidSession`` associated with the assessment part item service.
return: (osid.assessment.authoring.AssessmentPartItemSession)
- an ``AssessmentPartItemSession``
raise: OperationFailed - unable to complete request
raise: Unimplemented - ``supports_assessment_part_item()`` is
``false``
*compliance: optional -- This method must be implemented if
``supports_assessment_part_lookup()`` is ``true``.*
"""
if not self.supports_assessment_part_lookup(): # This is kludgy, but only until Tom fixes spec
raise errors.Unimplemented()
if self._proxy_in_args(*args, **kwargs):
raise errors.InvalidArgument('A Proxy object was received but not expected.')
# pylint: disable=no-member
return sessions.AssessmentPartItemSession(runtime=self._runtime)
assessment_part_item_session = property(fget=get_assessment_part_item_session)
@utilities.remove_null_proxy_kwarg
@utilities.arguments_not_none
def get_assessment_part_item_session_for_bank(self, bank_id, *args, **kwargs):
"""Gets the ``OsidSession`` associated with the assessment part item service for the given bank.
arg: bank_id (osid.id.Id): the ``Id`` of the ``Bank``
return: (osid.assessment.authoring.AssessmentPartItemSession)
- an ``AssessmentPartItemSession``
raise: NotFound - no ``Bank`` found by the given ``Id``
raise: NullArgument - ``bank_id`` is ``null``
raise: OperationFailed - unable to complete request
raise: Unimplemented - ``supports_assessment_part_item()`` or
``supports_visible_federation()`` is ``false``
*compliance: optional -- This method must be implemented if
``supports_assessment_part_item()`` and
``supports_visible_federation()`` are ``true``.*
"""
if not self.supports_assessment_part_lookup(): # This is kludgy, but only until Tom fixes spec
raise errors.Unimplemented()
if self._proxy_in_args(*args, **kwargs):
raise errors.InvalidArgument('A Proxy object was received but not expected.')
# Also include check to see if the catalog Id is found otherwise raise errors.NotFound
# pylint: disable=no-member
return sessions.AssessmentPartItemSession(bank_id, runtime=self._runtime)
@utilities.remove_null_proxy_kwarg
@utilities.arguments_not_none
def get_assessment_part_item_design_session(self, *args, **kwargs):
"""Gets the ``OsidSession`` associated with the assessment part item design service.
return: (osid.assessment.authoring.AssessmentPartItemDesignSession)
- an ``AssessmentPartItemDesignSession``
raise: OperationFailed - unable to complete request
raise: Unimplemented - ``supports_assessment_part_item_design()`` is
``false``
*compliance: optional -- This method must be implemented if
``supports_assessment_part_lookup()`` is ``true``.*
"""
if not self.supports_assessment_part_lookup(): # This is kludgy, but only until Tom fixes spec
raise errors.Unimplemented()
if self._proxy_in_args(*args, **kwargs):
raise errors.InvalidArgument('A Proxy object was received but not expected.')
# pylint: disable=no-member
return sessions.AssessmentPartItemDesignSession(runtime=self._runtime)
assessment_part_item_design_session = property(fget=get_assessment_part_item_design_session)
@utilities.remove_null_proxy_kwarg
@utilities.arguments_not_none
def get_assessment_part_item_design_session_for_bank(self, bank_id, *args, **kwargs):
"""Gets the ``OsidSession`` associated with the assessment part item design service for the given bank.
arg: bank_id (osid.id.Id): the ``Id`` of the ``Bank``
return: (osid.assessment.authoring.AssessmentPartItemDesignSession)
- an ``AssessmentPartItemDesignSession``
raise: NotFound - no ``Bank`` found by the given ``Id``
raise: NullArgument - ``bank_id`` is ``null``
raise: OperationFailed - unable to complete request
raise: Unimplemented - ``supports_assessment_part_item_design()`` or
``supports_visible_federation()`` is ``false``
*compliance: optional -- This method must be implemented if
``supports_assessment_part_item_design()`` and
``supports_visible_federation()`` are ``true``.*
"""
if not self.supports_assessment_part_lookup(): # This is kludgy, but only until Tom fixes spec
raise errors.Unimplemented()
if self._proxy_in_args(*args, **kwargs):
raise errors.InvalidArgument('A Proxy object was received but not expected.')
# Also include check to see if the catalog Id is found otherwise raise errors.NotFound
# pylint: disable=no-member
return sessions.AssessmentPartItemDesignSession(bank_id, runtime=self._runtime)
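Every getter in both manager classes follows the same capability-gate pattern: consult the matching `supports_*()` flag, raise `errors.Unimplemented` when it is false, and only then construct the session. The sketch below isolates that pattern with toy, hypothetical names (it is not the real dlkit API):

```python
# Toy illustration of the capability-gate pattern used by the manager getters
# above; ToyManager, supports() and get_session() are hypothetical names.

class Unimplemented(Exception):
    """Stand-in for errors.Unimplemented."""


class ToyManager:
    def __init__(self, supported):
        self._supported = set(supported)

    def supports(self, name):
        return name in self._supported

    def get_session(self, name):
        # Gate on the capability flag first, as the real getters do...
        if not self.supports(name):
            raise Unimplemented(name)
        # ...and only then build (here: fake) the session object.
        return name + '_session'


mgr = ToyManager(['assessment_part_lookup'])
assert mgr.get_session('assessment_part_lookup') == 'assessment_part_lookup_session'
```

Requesting an unsupported service raises before any session object is constructed, which is why each getter above starts with its `supports_*()` check.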
class AssessmentAuthoringProxyManager(osid_managers.OsidProxyManager, AssessmentAuthoringProfile, assessment_authoring_managers.AssessmentAuthoringProxyManager):
"""The assessment authoring manager provides access to assessment authoring sessions and provides interoperability tests for various aspects of this service.
Methods in this manager support the passing of a ``Proxy`` object.
The sessions included in this manager are:
* ``AssessmentPartLookupSession:`` a session to retrieve
assessment part
* ``AssessmentPartQuerySession:`` a session to query for
assessment part
* ``AssessmentPartSearchSession:`` a session to search for
assessment part
* ``AssessmentPartAdminSession:`` a session to create and delete
assessment part
* ``AssessmentPartNotificationSession:`` a session to receive
notifications pertaining to assessment part changes
* ``AssessmentPartBankSession:`` a session to look up assessment
part bank mappings
* ``AssessmentPartBankAssignmentSession:`` a session to manage
assessment part to bank mappings
* ``AssessmentPartSmartBankSession:`` a session to manage dynamic
bank of assessment part
* ``AssessmentPartItemSession:`` a session to look up assessment
part to item mappings
* ``AssessmentPartItemDesignSession:`` a session to map items to
assessment parts
* ``SequenceRuleLookupSession:`` a session to retrieve sequence
rule
* ``SequenceRuleQuerySession:`` a session to query for sequence
rule
* ``SequenceRuleSearchSession:`` a session to search for sequence
rule
* ``SequenceRuleAdminSession:`` a session to create and delete
sequence rule
* ``SequenceRuleNotificationSession:`` a session to receive
notifications pertaining to sequence rule changes
* ``SequenceRuleBankSession:`` a session to look up sequence rule
bank mappings
* ``SequenceRuleBankAssignmentSession:`` a session to manage
sequence rule to bank mappings
* ``SequenceRuleSmartBankSession:`` a session to manage dynamic
bank of sequence rule
* ``SequenceRuleEnablerLookupSession:`` a session to retrieve
sequence rule enablers
* ``SequenceRuleEnablerQuerySession:`` a session to query for
sequence rule enablers
* ``SequenceRuleEnablerSearchSession:`` a session to search for
sequence rule enablers
* ``SequenceRuleEnablerAdminSession:`` a session to create and
delete sequence rule enablers
* ``SequenceRuleEnablerNotificationSession:`` a session to receive
notifications pertaining to sequence rule enabler changes
* ``SequenceRuleEnablerBankSession:`` a session to look up
sequence rule enabler bank mappings
* ``SequenceRuleEnablerBankAssignmentSession:`` a session to
manage sequence rule enabler to bank mappings
* ``SequenceRuleEnablerSmartBankSession:`` a session to manage
dynamic bank of sequence rule enablers
* ``SequenceRuleEnableRuleLookupSession:`` a session to look up
sequence rule enabler mappings
* ``SequenceRuleEnablerRuleApplicationSession:`` a session to
apply sequence rule enablers
"""
def __init__(self):
osid_managers.OsidProxyManager.__init__(self)
@utilities.arguments_not_none
def get_assessment_part_lookup_session(self, proxy):
"""Gets the ``OsidSession`` associated with the assessment part lookup service.
arg: proxy (osid.proxy.Proxy): a proxy
return: (osid.assessment.authoring.AssessmentPartLookupSession)
- an ``AssessmentPartLookupSession``
raise: NullArgument - ``proxy`` is ``null``
raise: OperationFailed - unable to complete request
raise: Unimplemented - ``supports_assessment_part_lookup()`` is
``false``
*compliance: optional -- This method must be implemented if
``supports_assessment_part_lookup()`` is ``true``.*
"""
if not self.supports_assessment_part_lookup():
raise errors.Unimplemented()
# pylint: disable=no-member
return sessions.AssessmentPartLookupSession(proxy=proxy, runtime=self._runtime)
@utilities.arguments_not_none
def get_assessment_part_lookup_session_for_bank(self, bank_id, proxy):
"""Gets the ``OsidSession`` associated with the assessment part lookup service for the given bank.
arg: bank_id (osid.id.Id): the ``Id`` of the ``Bank``
arg: proxy (osid.proxy.Proxy): a proxy
return: (osid.assessment.authoring.AssessmentPartLookupSession)
- an ``AssessmentPartLookupSession``
raise: NotFound - no ``Bank`` found by the given ``Id``
raise: NullArgument - ``bank_id`` or ``proxy`` is ``null``
raise: OperationFailed - unable to complete request
raise: Unimplemented - ``supports_assessment_part_lookup()`` or
``supports_visible_federation()`` is ``false``
*compliance: optional -- This method must be implemented if
``supports_assessment_part_lookup()`` and
``supports_visible_federation()`` are ``true``.*
"""
if not self.supports_assessment_part_lookup():
raise errors.Unimplemented()
##
# Also include check to see if the catalog Id is found otherwise raise errors.NotFound
##
# pylint: disable=no-member
return sessions.AssessmentPartLookupSession(bank_id, proxy, self._runtime)
@utilities.arguments_not_none
def get_assessment_part_query_session(self, proxy):
"""Gets the ``OsidSession`` associated with the assessment part query service.
arg: proxy (osid.proxy.Proxy): a proxy
return: (osid.assessment.authoring.AssessmentPartQuerySession) -
an ``AssessmentPartQuerySession``
raise: NullArgument - ``proxy`` is ``null``
raise: OperationFailed - unable to complete request
raise: Unimplemented - ``supports_assessment_part_query()`` is
``false``
*compliance: optional -- This method must be implemented if
``supports_assessment_part_query()`` is ``true``.*
"""
if not self.supports_assessment_part_query():
raise errors.Unimplemented()
# pylint: disable=no-member
return sessions.AssessmentPartQuerySession(proxy=proxy, runtime=self._runtime)
@utilities.arguments_not_none
def get_assessment_part_query_session_for_bank(self, bank_id, proxy):
"""Gets the ``OsidSession`` associated with the assessment part query service for the given bank.
arg: bank_id (osid.id.Id): the ``Id`` of the ``Bank``
arg: proxy (osid.proxy.Proxy): a proxy
return: (osid.assessment.authoring.AssessmentPartQuerySession) -
an ``AssessmentPartQuerySession``
raise: NotFound - no ``Bank`` found by the given ``Id``
raise: NullArgument - ``bank_id`` or ``proxy`` is ``null``
raise: OperationFailed - unable to complete request
raise: Unimplemented - ``supports_assessment_part_query()`` or
``supports_visible_federation()`` is ``false``
*compliance: optional -- This method must be implemented if
``supports_assessment_part_query()`` and
``supports_visible_federation()`` are ``true``.*
"""
if not self.supports_assessment_part_query():
raise errors.Unimplemented()
##
# Also include check to see if the catalog Id is found otherwise raise errors.NotFound
##
# pylint: disable=no-member
return sessions.AssessmentPartQuerySession(bank_id, proxy, self._runtime)
@utilities.arguments_not_none
def get_assessment_part_admin_session(self, proxy):
"""Gets the ``OsidSession`` associated with the assessment part administration service.
arg: proxy (osid.proxy.Proxy): a proxy
return: (osid.assessment.authoring.AssessmentPartAdminSession) -
an ``AssessmentPartAdminSession``
raise: NullArgument - ``proxy`` is ``null``
raise: OperationFailed - unable to complete request
raise: Unimplemented - ``supports_assessment_part_admin()`` is
``false``
*compliance: optional -- This method must be implemented if
``supports_assessment_part_admin()`` is ``true``.*
"""
if not self.supports_assessment_part_admin():
raise errors.Unimplemented()
# pylint: disable=no-member
return sessions.AssessmentPartAdminSession(proxy=proxy, runtime=self._runtime)
@utilities.arguments_not_none
def get_assessment_part_admin_session_for_bank(self, bank_id, proxy):
"""Gets the ``OsidSession`` associated with the assessment part administration service for the given bank.
arg: bank_id (osid.id.Id): the ``Id`` of the ``Bank``
arg: proxy (osid.proxy.Proxy): a proxy
return: (osid.assessment.authoring.AssessmentPartAdminSession) -
an ``AssessmentPartAdminSession``
raise: NotFound - no ``Bank`` found by the given ``Id``
raise: NullArgument - ``bank_id`` or ``proxy`` is ``null``
raise: OperationFailed - unable to complete request
raise: Unimplemented - ``supports_assessment_part_admin()`` or
``supports_visible_federation()`` is ``false``
*compliance: optional -- This method must be implemented if
``supports_assessment_part_admin()`` and
``supports_visible_federation()`` are ``true``.*
"""
if not self.supports_assessment_part_admin():
raise errors.Unimplemented()
##
# Also include check to see if the catalog Id is found otherwise raise errors.NotFound
##
# pylint: disable=no-member
return sessions.AssessmentPartAdminSession(bank_id, proxy, self._runtime)
@utilities.arguments_not_none
def get_assessment_part_bank_session(self, proxy):
"""Gets the ``OsidSession`` to lookup assessment part/bank mappings for assessment parts.
arg: proxy (osid.proxy.Proxy): a proxy
return: (osid.assessment.authoring.AssessmentPartBankSession) -
an ``AssessmentPartBankSession``
raise: NullArgument - ``proxy`` is ``null``
raise: OperationFailed - unable to complete request
raise: Unimplemented - ``supports_assessment_part_bank()`` is
``false``
*compliance: optional -- This method must be implemented if
``supports_assessment_part_bank()`` is ``true``.*
"""
if not self.supports_assessment_part_bank():
raise errors.Unimplemented()
# pylint: disable=no-member
return sessions.AssessmentPartBankSession(proxy=proxy, runtime=self._runtime)
@utilities.arguments_not_none
def get_assessment_part_bank_assignment_session(self, proxy):
"""Gets the ``OsidSession`` associated with assigning assessment part to bank.
arg: proxy (osid.proxy.Proxy): a proxy
return:
(osid.assessment.authoring.AssessmentPartBankAssignmentS
ession) - an ``AssessmentPartBankAssignmentSession``
raise: NullArgument - ``proxy`` is ``null``
raise: OperationFailed - unable to complete request
raise: Unimplemented -
``supports_assessment_part_bank_assignment()`` is
``false``
*compliance: optional -- This method must be implemented if
``supports_assessment_part_bank_assignment()`` is ``true``.*
"""
if not self.supports_assessment_part_bank_assignment():
raise errors.Unimplemented()
# pylint: disable=no-member
return sessions.AssessmentPartBankAssignmentSession(proxy=proxy, runtime=self._runtime)
@utilities.arguments_not_none
def get_sequence_rule_lookup_session(self, proxy):
"""Gets the ``OsidSession`` associated with the sequence rule lookup service.
arg: proxy (osid.proxy.Proxy): a proxy
return: (osid.assessment.authoring.SequenceRuleLookupSession) -
a ``SequenceRuleLookupSession``
raise: NullArgument - ``proxy`` is ``null``
raise: OperationFailed - unable to complete request
raise: Unimplemented - ``supports_sequence_rule_lookup()`` is
``false``
*compliance: optional -- This method must be implemented if
``supports_sequence_rule_lookup()`` is ``true``.*
"""
if not self.supports_sequence_rule_lookup():
raise errors.Unimplemented()
# pylint: disable=no-member
return sessions.SequenceRuleLookupSession(proxy=proxy, runtime=self._runtime)
@utilities.arguments_not_none
def get_sequence_rule_lookup_session_for_bank(self, bank_id, proxy):
"""Gets the ``OsidSession`` associated with the sequence rule lookup service for the given bank.
arg: bank_id (osid.id.Id): the ``Id`` of the ``Bank``
arg: proxy (osid.proxy.Proxy): a proxy
return: (osid.assessment.authoring.SequenceRuleLookupSession) -
a ``SequenceRuleLookupSession``
raise: NotFound - no ``Bank`` found by the given ``Id``
raise: NullArgument - ``bank_id`` or ``proxy`` is ``null``
raise: OperationFailed - unable to complete request
raise: Unimplemented - ``supports_sequence_rule_lookup()`` or
``supports_visible_federation()`` is ``false``
*compliance: optional -- This method must be implemented if
``supports_sequence_rule_lookup()`` and
``supports_visible_federation()`` are ``true``.*
"""
if not self.supports_sequence_rule_lookup():
raise errors.Unimplemented()
##
# Also include check to see if the catalog Id is found otherwise raise errors.NotFound
##
# pylint: disable=no-member
return sessions.SequenceRuleLookupSession(bank_id, proxy, self._runtime)
@utilities.arguments_not_none
def get_sequence_rule_admin_session(self, proxy):
"""Gets the ``OsidSession`` associated with the sequence rule administration service.
arg: proxy (osid.proxy.Proxy): a proxy
return: (osid.assessment.authoring.SequenceRuleAdminSession) - a
``SequenceRuleAdminSession``
raise: NullArgument - ``proxy`` is ``null``
raise: OperationFailed - unable to complete request
raise: Unimplemented - ``supports_sequence_rule_admin()`` is
``false``
*compliance: optional -- This method must be implemented if
``supports_sequence_rule_admin()`` is ``true``.*
"""
if not self.supports_sequence_rule_admin():
raise errors.Unimplemented()
# pylint: disable=no-member
return sessions.SequenceRuleAdminSession(proxy=proxy, runtime=self._runtime)
@utilities.arguments_not_none
def get_sequence_rule_admin_session_for_bank(self, bank_id, proxy):
"""Gets the ``OsidSession`` associated with the sequence rule administration service for the given bank.
arg: bank_id (osid.id.Id): the ``Id`` of the ``Bank``
arg: proxy (osid.proxy.Proxy): a proxy
return: (osid.assessment.authoring.SequenceRuleAdminSession) - a
``SequenceRuleAdminSession``
raise: NotFound - no ``Bank`` found by the given ``Id``
raise: NullArgument - ``bank_id`` or ``proxy`` is ``null``
raise: OperationFailed - unable to complete request
raise: Unimplemented - ``supports_sequence_rule_admin()`` or
``supports_visible_federation()`` is ``false``
*compliance: optional -- This method must be implemented if
``supports_sequence_rule_admin()`` and
``supports_visible_federation()`` are ``true``.*
"""
if not self.supports_sequence_rule_admin():
raise errors.Unimplemented()
##
# Also include check to see if the catalog Id is found otherwise raise errors.NotFound
##
# pylint: disable=no-member
return sessions.SequenceRuleAdminSession(bank_id, proxy, self._runtime)
@utilities.arguments_not_none
def get_assessment_part_item_session(self, proxy):
"""Gets the ``OsidSession`` associated with the assessment part item service.
arg: proxy (osid.proxy.Proxy): a proxy
return: (osid.assessment.authoring.AssessmentPartItemSession)
- an ``AssessmentPartItemSession``
raise: NullArgument - ``proxy`` is ``null``
raise: OperationFailed - unable to complete request
raise: Unimplemented - ``supports_assessment_part_item()`` is
``false``
*compliance: optional -- This method must be implemented if
``supports_assessment_part_lookup()`` is ``true``.*
"""
if not self.supports_assessment_part_lookup(): # This is kludgy, but only until Tom fixes spec
raise errors.Unimplemented()
# pylint: disable=no-member
return sessions.AssessmentPartItemSession(proxy=proxy, runtime=self._runtime)
@utilities.arguments_not_none
def get_assessment_part_item_session_for_bank(self, bank_id, proxy):
"""Gets the ``OsidSession`` associated with the assessment part item service for the given bank.
arg: bank_id (osid.id.Id): the ``Id`` of the ``Bank``
arg: proxy (osid.proxy.Proxy): a proxy
return: (osid.assessment.authoring.AssessmentPartItemSession)
- an ``AssessmentPartItemSession``
raise: NotFound - no ``Bank`` found by the given ``Id``
raise: NullArgument - ``bank_id`` or ``proxy`` is ``null``
raise: OperationFailed - unable to complete request
raise: Unimplemented - ``supports_assessment_part_item()`` or
``supports_visible_federation()`` is ``false``
*compliance: optional -- This method must be implemented if
``supports_assessment_part_item()`` and
``supports_visible_federation()`` are ``true``.*
"""
if not self.supports_assessment_part_lookup(): # This is kludgy, but only until Tom fixes spec
raise errors.Unimplemented()
# Also include check to see if the catalog Id is found otherwise raise errors.NotFound
# pylint: disable=no-member
return sessions.AssessmentPartItemSession(bank_id, proxy=proxy, runtime=self._runtime)
@utilities.arguments_not_none
def get_assessment_part_item_design_session(self, proxy):
"""Gets the ``OsidSession`` associated with the assessment part item design service.
arg: proxy (osid.proxy.Proxy): a proxy
return: (osid.assessment.authoring.AssessmentPartItemDesignSession)
- an ``AssessmentPartItemDesignSession``
raise: NullArgument - ``proxy`` is ``null``
raise: OperationFailed - unable to complete request
raise: Unimplemented - ``supports_assessment_part_item_design()`` is
``false``
*compliance: optional -- This method must be implemented if
``supports_assessment_part_lookup()`` is ``true``.*
"""
if not self.supports_assessment_part_lookup(): # This is kludgy, but only until Tom fixes spec
raise errors.Unimplemented()
# pylint: disable=no-member
return sessions.AssessmentPartItemDesignSession(proxy=proxy, runtime=self._runtime)
@utilities.arguments_not_none
def get_assessment_part_item_design_session_for_bank(self, bank_id, proxy):
"""Gets the ``OsidSession`` associated with the assessment part item design service for the given bank.
arg: bank_id (osid.id.Id): the ``Id`` of the ``Bank``
arg: proxy (osid.proxy.Proxy): a proxy
return: (osid.assessment.authoring.AssessmentPartItemDesignSession)
- an ``AssessmentPartItemDesignSession``
raise: NotFound - no ``Bank`` found by the given ``Id``
raise: NullArgument - ``bank_id`` or ``proxy`` is ``null``
raise: OperationFailed - unable to complete request
raise: Unimplemented - ``supports_assessment_part_item_design()`` or
``supports_visible_federation()`` is ``false``
*compliance: optional -- This method must be implemented if
``supports_assessment_part_item_design()`` and
``supports_visible_federation()`` are ``true``.*
"""
if not self.supports_assessment_part_lookup(): # This is kludgy, but only until Tom fixes spec
raise errors.Unimplemented()
# Also include check to see if the catalog Id is found otherwise raise errors.NotFound
# pylint: disable=no-member
return sessions.AssessmentPartItemDesignSession(bank_id, proxy=proxy, runtime=self._runtime)
# -*- coding: utf-8 -*-
"""
Ce fichier est chargé de résoudre les ED de dimension n, et d'interfacer les entrées utilisateur.
Contient les fonctions de résolution à plusieurs dimensions :
- solution_dim_2(g, y0, yp0, t0, T, h, methode='rk4')
- solution_dim_n(g, Y0, t0, T, h, methode='rk4')
- sol_exacte_dim_2(g, y0, yp0, t0, T, h)
- sol_exacte_dim_n(g, Y0, t0, T, h)
"""
import numpy as np
from scipy.integrate import odeint
import schemas_1d, traces
from math import cos, atan, pi, sin, exp
def solution_dim_2(g, y0, yp0, t0, T, h, methode='rk4'):
"""
Résout une équation différentielle d'odre 2 avec la méthode choisie.
Renvoie un tuple x, y de la solution calculé.
Paramètres
----------
g : fonction du problème de Cauchy
y0 : valeur de la solution initiale de y
yp0 : valeur de la condition initiale de y'
t0 : valeur de la borne inférieure de l'intervalle de résolution
T : valeur de la borne supérieure de l'intervalle de résolution
h : valeur du pas
methode : méthodes de résolution possibles ('euler' ou 'rk4'). Vaut 'rk4' par défaut.
Renvoi
-------
t_list : liste des abscisses
Y_list : liste des valeurs de calculées par la méthode
"""
# NOTATIONS :
# y = y
# yp = y'
# ypp = y" = g(t0, y0, yp0)
def F(t, Y): # on définit la fonction F du nouveau problème de Cauchy
(y, yp) = Y
ypp = g(t, y, yp) # on calcule la dernière coordonnée de ce que renverra F, à savoir ypp
return [yp, ypp]
Y0 = [y0, yp0]
if methode == 'rk4':
return schemas_1d.rk4_vect(F, Y0, t0, T, h)
elif methode == 'euler':
return schemas_1d.euler_vect(F, Y0, t0, T, h)
else:
raise ValueError("Unrecognized method. Recognized methods are: 'rk4', 'euler'.")
def solution_dim_n(g, Y0, t0, T, h, methode='rk4'):
"""
Résout une équation différentielle d'ordre n > 1 avec la méthode choisie.
Renvoie un tuple (x, y) de la solution calculée.
Paramètres
----------
g : fonction du problème de Cauchy (exemple dont la solution est exp(): g = lambda t, y, yp, ypp : ypp)
Y0 : liste contenant les conditions initiales dans l'ordre suivant : Y0 = [y0, yp0, ... y_(n-1)0]
t0 : valeur de la borne inférieure de l'intervalle de résolution
T : valeur de la borne supérieure de l'intervalle de résolution
h : valeur du pas
methode : méthodes de résolution possibles : 'euler', 'rk4'. Vaut 'rk4' par défaut.
Renvoi
-------
t_list : liste des abscisses
Y_list : liste des valeurs de calculées par la méthode
"""
def F(t, Y): # on définit la fonction F du nouveau problème de Cauchy
y_np1 = g(t, *Y) # on calcule la dernière coordonnée de ce que renverra F, à savoir la valeur de la dérivées n+1ième de y en t
return Y[1:]+[y_np1]
if methode == 'rk4':
return schemas_1d.rk4_vect(F, Y0, t0, T, h)
elif methode == 'euler':
return schemas_1d.euler_vect(F, Y0, t0, T, h)
else:
raise ValueError("Unrecognized method. Recognized methods are: 'rk4', 'euler'.")
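The order reduction performed by `F` above (turning y^(n) = g(t, y, y', ..., y^(n-1)) into a first-order system) can be illustrated with a standalone sketch that bypasses `schemas_1d` and calls `odeint` directly; the test equation and interval below are illustrative, not taken from this project:

```python
import numpy as np
from scipy.integrate import odeint

# Illustrative problem: y'' = -y, y(0) = 0, y'(0) = 1, whose solution is sin(t)
g = lambda t, y, yp: -y

def F(t, Y):
    # First-order system: Y = [y, y'], F returns [y', y'']
    y, yp = Y
    return [yp, g(t, y, yp)]

t = np.linspace(0.0, np.pi / 2, 101)
Y = odeint(F, [0.0, 1.0], t, tfirst=True)
# Y[:, 0] approximates sin(t); at t = pi/2 it should be close to 1
```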
def sol_exacte_dim_2(g, y0, yp0, t0, T, h):
"""
Résout une équation différentielle d'ordre n avec scipy.integrate.odeint
Renvoie un tuple x, y de la solution calculée.
Paramètres
----------
g : fonction du problème de Cauchy
y0 : valeur de la solution initiale de y
yp0 : valeur de la condition initiale de y'
t0 : valeur de la borne inférieure de l'intervalle de résolution
T : valeur de la borne supérieure de l'intervalle de résolution
h : valeur du pas
Renvoi
-------
t_list : liste des abscisses
Y_list : liste des ordonnées de la 'solution exacte' trouvée
"""
def F(t, Y): # on définit la fonction F du nouveau problème de Cauchy
(y, yp) = Y
ypp = g(t, y, yp)
return np.array([yp, ypp])
Y0 = np.array([y0, yp0])
x = np.linspace(t0, T, int((T-t0)/h)) # on construit la liste des abscisses
Y = odeint(F, Y0, x, tfirst=True)
return x, Y[:,0]
def sol_exacte_dim_n(g, Y0, t0, T, h):
"""
Résout une équation différentielle d'ordre n avec odeint, méthode la plus précise implémentée dans Python.
Renvoie un tuple x, y de la solution calculée.
Paramètres
----------
g : fonction du problème de Cauchy
Y0 : liste contenant les conditions initiales dans l'ordre suivant : Y = [y0, yp0, ... y_(n-1)0]
t0 : valeur de la borne inférieure de l'intervalle de résolution
T : valeur de la borne supérieure de l'intervalle de résolution
h : valeur du pas
Renvoi
-------
t_list : liste des abscisses
Y_list : liste des ordonnées de la 'solution exacte' trouvée
"""
Y0=np.array(Y0)
t0 = np.array(t0)
def F(t, Y): # on définit la fonction F du nouveau problème de Cauchy
y_np1 = g(t, *Y)
output = Y[1:]
return np.append(output,y_np1)
x = np.linspace(t0, T, int((T-t0)/h)) # on construit la liste des abscisses
Y = odeint(F, Y0, x, tfirst=True)
return x, Y[:,0]
#%% DIRECT COMMANDS
# EXAMPLE 1 FROM THE PRESENTATION ---------------------------------------------
# # Definition of the function g of the Cauchy problem
# g = lambda t, y, yp : 3*yp - 20*y + 5
# # Definition of the curves to plot
# x1, y1 = solution_dim_2(g, 0, 0, 0, 3, 0.01, methode='euler')
# x2, y2 = solution_dim_2(g, 0, 0, 0, 3, 0.01, methode='rk4')
# x3, y3 = solution_dim_2(g, 0, 0, 0, 3, 0.1, methode='rk4')
# xs, ys = sol_exacte_dim_2(g, 0, 0, 0, 3, 0.001)
# # Plotting the curves
# traces.trace((x1, y1, 'Euler, step 0.01'), (x2, y2, 'RK4, step 0.01'), (x3, y3, 'RK4, step 0.1'), sol=(xs, ys))
# EXAMPLE 2 FROM THE PRESENTATION ---------------------------------------------
# # Definition of the function g to solve
# g = lambda t, y, yp, ypp: 1/4 * (cos(t*ypp) - atan(yp))
# # Definition of the curves to plot
# x1, y1 = solution_dim_n(g, [0, 0, 0], 0, 50, 0.2, methode='euler')
# x2, y2 = solution_dim_n(g, [0, 0, 0], 0, 50, 0.1, methode='euler')
# x3, y3 = solution_dim_n(g, [0, 0, 0], 0, 50, 0.05, methode='euler')
# xs, ys = sol_exacte_dim_n(g, [0, 0, 0], 0, 50, 0.025)
# # Plotting the curves
# traces.trace((x1, y1, 'Euler, step 0.2'), (x2, y2, 'Euler, step 0.1'), (x3, y3, 'Euler, step 0.05'), sol=(xs, ys))
# PENDULUM EXAMPLE ------------------------------------------------------------
# # Definition of the function g to solve
# g = lambda t, y, yp : -2*0.22*yp - 4**2*sin(y)
# # Definition of the curves to plot
# x1, y1 = solution_dim_n(g, [1.3, 0], 0, 5, 0.001, methode='euler')
# x2, y2 = solution_dim_n(g, [1.3, 0], 0, 5, 0.001, methode='rk4')
# xs, ys = sol_exacte_dim_n(g, [1.3, 0], 0, 5, 0.001)
# # Plotting the curves
# traces.trace((x1, y1, 'Euler, step 0.001'), (x2, y2, 'RK4, step 0.001'), sol=(xs, ys))
| 37.005025 | 144 | 0.573873 | 1,148 | 7,364 | 3.621951 | 0.158537 | 0.023088 | 0.031265 | 0.011544 | 0.810005 | 0.807359 | 0.797018 | 0.783069 | 0.761424 | 0.745551 | 0 | 0.051482 | 0.285171 | 7,364 | 198 | 145 | 37.191919 | 0.738412 | 0.682238 | 0 | 0.577778 | 0 | 0 | 0.087493 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.177778 | false | 0 | 0.088889 | 0 | 0.488889 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e3390e58dea4abbe0459fa110eba1be9496fe02a | 61 | py | Python | test.py | nsde/TPReplace | d3e8b6db420da49c34d17369b9753fe8ed1a92e4 | [
"MIT"
] | null | null | null | test.py | nsde/TPReplace | d3e8b6db420da49c34d17369b9753fe8ed1a92e4 | [
"MIT"
] | null | null | null | test.py | nsde/TPReplace | d3e8b6db420da49c34d17369b9753fe8ed1a92e4 | [
"MIT"
] | null | null | null | import os
print(os.listdir(r"C:\Users\xitzf\Desktop\blocks")) | 30.5 | 51 | 0.770492 | 11 | 61 | 4.272727 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.032787 | 61 | 2 | 51 | 30.5 | 0.79661 | 0 | 0 | 0 | 0 | 0 | 0.467742 | 0.467742 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
e33950bc8e0c5c69941aec80d91f16cd580e8ffa | 24 | py | Python | confrm/__init__.py | confrm/confrm | 7d2b0b2f5efac243d9877509684d71acf4816dd6 | [
"Apache-2.0"
] | 1 | 2021-04-15T05:55:42.000Z | 2021-04-15T05:55:42.000Z | confrm/__init__.py | confrm/confrm | 7d2b0b2f5efac243d9877509684d71acf4816dd6 | [
"Apache-2.0"
] | 33 | 2020-12-23T19:44:41.000Z | 2021-01-26T20:53:01.000Z | confrm/__init__.py | confrm/confrm | 7d2b0b2f5efac243d9877509684d71acf4816dd6 | [
"Apache-2.0"
] | 1 | 2021-01-07T11:06:35.000Z | 2021-01-07T11:06:35.000Z | from .confrm import APP
| 12 | 23 | 0.791667 | 4 | 24 | 4.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 24 | 1 | 24 | 24 | 0.95 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e34c82bcbb0b62e20773fb195f17e0a49e8b4f1e | 37 | py | Python | ncc/callbacks/__init__.py | NCC-AI/ncc | c53379abcb21eb18268591239d02f69a148df6c5 | [
"MIT"
] | null | null | null | ncc/callbacks/__init__.py | NCC-AI/ncc | c53379abcb21eb18268591239d02f69a148df6c5 | [
"MIT"
] | null | null | null | ncc/callbacks/__init__.py | NCC-AI/ncc | c53379abcb21eb18268591239d02f69a148df6c5 | [
"MIT"
] | null | null | null | from .callbacks import slack_logging
| 18.5 | 36 | 0.864865 | 5 | 37 | 6.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108108 | 37 | 1 | 37 | 37 | 0.939394 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e366f18dda2c3c4e031ce91ba556ac9380962e56 | 81 | py | Python | fsm_eigenvalue/compute/__init__.py | petarmaric/fsm_eigenvalue | d4ca102cf2920ca41d31085f9e4bf1866d06a320 | [
"BSD-3-Clause"
] | 1 | 2021-03-09T13:16:17.000Z | 2021-03-09T13:16:17.000Z | fsm_eigenvalue/compute/__init__.py | petarmaric/fsm_eigenvalue | d4ca102cf2920ca41d31085f9e4bf1866d06a320 | [
"BSD-3-Clause"
] | null | null | null | fsm_eigenvalue/compute/__init__.py | petarmaric/fsm_eigenvalue | d4ca102cf2920ca41d31085f9e4bf1866d06a320 | [
"BSD-3-Clause"
] | null | null | null | from .core import perform_iteration
from .parameter_sweep import parameter_sweep
| 27 | 44 | 0.876543 | 11 | 81 | 6.181818 | 0.636364 | 0.411765 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.098765 | 81 | 2 | 45 | 40.5 | 0.931507 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e3754bd635c94dad3d8bd69c188f3201731bfdae | 119 | py | Python | src/blog/views.py | hamdyadam97/blog-django-ar | 2e2fec47cfe149c904f822503272a4b2fd90de0d | [
"bzip2-1.0.6"
] | null | null | null | src/blog/views.py | hamdyadam97/blog-django-ar | 2e2fec47cfe149c904f822503272a4b2fd90de0d | [
"bzip2-1.0.6"
] | null | null | null | src/blog/views.py | hamdyadam97/blog-django-ar | 2e2fec47cfe149c904f822503272a4b2fd90de0d | [
"bzip2-1.0.6"
] | null | null | null | from django.shortcuts import render
def home(request):
return render(request, 'blog/index.html', {'title': 'home'})
| 19.833333 | 61 | 0.722689 | 16 | 119 | 5.375 | 0.8125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 119 | 5 | 62 | 23.8 | 0.819048 | 0 | 0 | 0 | 0 | 0 | 0.201681 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
8b696220b30c44fa269683be7b2dd86ec1a5e98c | 114 | py | Python | pyquadfilter/__init__.py | Kurene/pyquadfilter | 89f678bd845fac556b46640e346b5503803e0e0d | [
"MIT"
] | 1 | 2021-09-24T07:32:16.000Z | 2021-09-24T07:32:16.000Z | pyquadfilter/__init__.py | Kurene/pyquadfilter | 89f678bd845fac556b46640e346b5503803e0e0d | [
"MIT"
] | null | null | null | pyquadfilter/__init__.py | Kurene/pyquadfilter | 89f678bd845fac556b46640e346b5503803e0e0d | [
"MIT"
] | null | null | null | """Top-level module for pyquadfilter"""
from .core import PyQuadFilter
from .plot import plot_frequency_response
| 22.8 | 41 | 0.807018 | 15 | 114 | 6 | 0.733333 | 0.355556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114035 | 114 | 4 | 42 | 28.5 | 0.891089 | 0.289474 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8b711a23a662358ce47d2c710fe2ea644098d61e | 27 | py | Python | treeHandler/__init__.py | Sreekiranar/tree-handler | 3ceadfd0a50d2f02861531fcf693cd5c343c398c | [
"MIT"
] | 1 | 2020-02-13T06:55:16.000Z | 2020-02-13T06:55:16.000Z | treeHandler/__init__.py | Sreekiranar/treeHandler | 3ceadfd0a50d2f02861531fcf693cd5c343c398c | [
"MIT"
] | null | null | null | treeHandler/__init__.py | Sreekiranar/treeHandler | 3ceadfd0a50d2f02861531fcf693cd5c343c398c | [
"MIT"
] | null | null | null | from .treeHandler import *
| 13.5 | 26 | 0.777778 | 3 | 27 | 7 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 27 | 1 | 27 | 27 | 0.913043 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8b8e521094722ff54569862900116d59450bb130 | 48 | py | Python | detection/keypoint/__init__.py | corenel/auto-traffic-camera-calib | a81d52b3a21b7cef37006cc93f764d93807293b0 | [
"MIT"
] | 3 | 2020-11-27T08:26:12.000Z | 2021-08-24T02:53:45.000Z | detection/keypoint/__init__.py | corenel/auto-traffic-camera-calib | a81d52b3a21b7cef37006cc93f764d93807293b0 | [
"MIT"
] | 1 | 2021-07-17T20:15:59.000Z | 2021-07-17T20:15:59.000Z | detection/keypoint/__init__.py | corenel/auto-traffic-camera-calib | a81d52b3a21b7cef37006cc93f764d93807293b0 | [
"MIT"
] | null | null | null | from .keypoint_detector import KeypointDetector
| 24 | 47 | 0.895833 | 5 | 48 | 8.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 48 | 1 | 48 | 48 | 0.954545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8bb7e35b22c001f952f06596fa9c83016c3cf865 | 7,544 | py | Python | symmetry_functions.py | haakonvt/LearningTensorFlow | 6988a15af2ac916ae1a5e23b2c5bde9630cc0519 | [
"MIT"
] | 5 | 2018-09-06T12:52:12.000Z | 2020-05-09T01:40:12.000Z | symmetry_functions.py | haakonvt/LearningTensorFlow | 6988a15af2ac916ae1a5e23b2c5bde9630cc0519 | [
"MIT"
] | null | null | null | symmetry_functions.py | haakonvt/LearningTensorFlow | 6988a15af2ac916ae1a5e23b2c5bde9630cc0519 | [
"MIT"
] | 4 | 2018-02-06T08:42:06.000Z | 2019-04-16T11:23:06.000Z | from math import exp,cos,pi,tanh,sqrt # Faster than numpy for scalars
import numpy as np
"""
#################
The cutoff functions
#################
"""
def cutoff_tanh(r,rc):
"""
Can take scalar and vector input of r and evaluate the cutoff function
"""
if type(r) == int:
if r <= rc:
return tanh(1-r/rc)**3
else:
return 0.
else:
return np.tanh(1-r/rc)**3 * (r <= rc)
def cutoff_cos(r,rc):
"""
Can take scalar, vector or matrix input of r and evaluate the cutoff function
"""
# r_SW_cut = 100  # 3.77118; optional extra hard cutoff, currently disabled
if type(r) == int:
if r <= rc:
return 0.5*(cos(pi*r/rc)+1)
else:
return 0.
else:
return 0.5*(np.cos(pi*r/rc)+1) * (r <= rc)
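A quick standalone sanity check of the cosine cutoff (not part of this module; the `rc` value is chosen for illustration): it equals 1 at r = 0, 0.5 at r = rc/2, and 0 at and beyond rc:

```python
import numpy as np

def cutoff_cos(r, rc):
    # 0.5 * (cos(pi * r / rc) + 1) inside the cutoff radius, zero outside
    return 0.5 * (np.cos(np.pi * r / rc) + 1) * (r <= rc)

r = np.array([0.0, 1.0, 2.0, 3.0])
fc = cutoff_cos(r, rc=2.0)
# fc == [1.0, 0.5, 0.0, 0.0]: smooth decay, exactly zero at and beyond rc
```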
"""
#################
Single particle symmetry functions
#################
"""
def G1(r, rc, cutoff=cutoff_cos):
r_cut = cutoff(r,rc)
summation = np.sum( r_cut )
return summation
def G2(r, rc, rs, eta, cutoff=cutoff_cos):
r_cut = cutoff(r,rc)
summation = np.sum( np.exp(-eta*(r-rs)**2)*r_cut )
return summation
def G3(r, rc, kappa, cutoff=cutoff_cos):
r_cut = cutoff(r,rc)
summation = np.sum( np.cos(kappa*r)*r_cut )
return summation
def G4(xyz, rc, eta, zeta, lambda_c, cutoff=cutoff_cos):
""" xyz:
[[x1 y1 z1]
[x2 y2 z2]
[x3 y3 z3]
[x4 y4 z4]]
"""
r = np.linalg.norm(xyz,axis=1)
N = len(r)
r_cut = cutoff(r,rc)
summation = 0
for j in range(N):
# for k in range(N): # This double counts angles... as in the literature
# if j == k:
# continue # Skip j=k
for k in range(j+1,N): # Away with stupid double counting
r_jk = np.linalg.norm(xyz[j] - xyz[k])
cos_theta = np.dot(xyz[j],xyz[k]) / (r[j]*r[k])
cutoff_ijk = r_cut[j] * r_cut[k] * cutoff(r_jk, rc)
part_sum = (1+lambda_c * cos_theta)**zeta * exp(-eta*(r[j]**2+r[k]**2+r_jk**2))
summation += part_sum*cutoff_ijk
summation *= 2**(1-zeta) # Normalization factor
return summation
def G5(xyz, rc, eta, zeta, lambda_c, cutoff=cutoff_cos):
""" xyz:
[[x1 y1 z1]
[x2 y2 z2]
[x3 y3 z3]
[x4 y4 z4]]
"""
r = np.linalg.norm(xyz,axis=1)
N = len(r)
r_cut = cutoff(r,rc)
summation = 0
for j in range(N):
# for k in range(N): # This double counts angles... as in the literature
# if j == k:
# continue # Skip j=k
for k in range(j+1,N): # Away with stupid double counting
cos_theta = np.dot(xyz[j],xyz[k]) / (r[j]*r[k])
cutoff_ijk = r_cut[j] * r_cut[k]
part_sum = (1+lambda_c * cos_theta)**zeta * exp(-eta*(r[j]**2+r[k]**2))
summation += part_sum*cutoff_ijk
summation *= 2**(1-zeta)
return summation
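The `2**(1-zeta)` prefactor used in G4 and G5 keeps the peak of the angular term independent of `zeta`: at theta = 0 with lambda = 1 it always evaluates to 2^(1-zeta) * 2^zeta = 2. A standalone check with illustrative parameter values:

```python
import numpy as np

zeta, lambda_c = 4.0, 1.0
theta = np.linspace(0.0, np.pi, 181)
angular = 2 ** (1 - zeta) * (1 + lambda_c * np.cos(theta)) ** zeta
# The maximum sits at theta = 0 and equals 2, regardless of zeta
```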
"""
#################
N particle symmetry functions
#################
"""
def G1_N(r, type, rc, cutoff=cutoff_cos):
"""
r = [r1 , r2 , ..., rN]
type = ["Hydrogen", "Oxygen", ..., "Carbon"]
"""
raise NotImplementedError("Multi-species G1 symmetry function is not implemented yet")
# TODO: May move this to symmetry_transform.py instead...
"""
#################
Next functions are mainly for testing purposes
i.e. plotting response-curves etc.
#################
"""
def G1_single_neighbor(r, rc, cutoff=cutoff_cos):
return cutoff(r,rc)
def G2_single_neighbor(r, rc, rs, eta, cutoff=cutoff_cos):
r_cut = cutoff(r,rc)
return np.exp(-eta*(r-rs)**2)*r_cut
def G3_single_neighbor(r, rc, kappa, cutoff=cutoff_cos):
r_cut = cutoff(r,rc)
return np.cos(kappa*r)*r_cut
def G4_single_neighbor_rjk(theta, rc, zeta, lambda_c, eta, cutoff=cutoff_cos, percent_of_rc=0.8):
"""
rij = rik = 0.8 Rc
theta = 0,..,360
"""
rij = percent_of_rc * rc
cos_theta = np.cos(theta)
rjk = sqrt(2) * rij * np.sqrt(1 - cos_theta) # Simplified law of cosines
exp_factor = np.exp(-eta*(2*rij**2 + rjk**2))
angle_factor = 2**(1-zeta) * (1 + lambda_c * cos_theta)**zeta
cutoff_factor = cutoff(rij, rc)**2 * cutoff(rjk, rc)
return angle_factor * exp_factor * cutoff_factor
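The `rjk` shortcut above relies on the law of cosines specialised to r_ij = r_ik; a small standalone verification (values assumed for illustration):

```python
import numpy as np

rij = 0.8  # rij = rik by assumption
theta = np.linspace(0.0, np.pi, 50)

# Full law of cosines: rjk^2 = rij^2 + rik^2 - 2 * rij * rik * cos(theta)
rjk_full = np.sqrt(2 * rij**2 - 2 * rij**2 * np.cos(theta))
# Simplified form used in the symmetry functions
rjk_simplified = np.sqrt(2) * rij * np.sqrt(1 - np.cos(theta))
```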
def G4_single_neighbor_radial(r, zeta, lambda_c, eta):
"""
adfds
"""
theta = pi/3. # Constant at 60 degrees aka pi/3
exp_factor = np.exp(-eta*3*r**2)
angle_factor = 2**(1-zeta) * (1 + lambda_c * np.cos(theta))**zeta
return angle_factor * exp_factor
def G4_single_neighbor_radial_cut(r, rc, zeta, lambda_c, eta, cutoff=cutoff_cos):
"""
With cutoff
"""
theta = pi/3. # Constant at 60 degrees aka pi/3
exp_factor = np.exp(-eta*3*r**2)
angle_factor = 2**(1-zeta) * (1 + lambda_c * np.cos(theta))**zeta
return angle_factor * exp_factor * cutoff(r, rc)**3
def G5_single_neighbor_radial_cut(r, rc, zeta, lambda_c, eta, cutoff=cutoff_cos):
"""
With cutoff
"""
theta = pi/3. # Constant at 60 degrees aka pi/3
exp_factor = np.exp(-eta * 2*r**2)
angle_factor = 2**(1-zeta) * (1 + lambda_c * cos(theta))**zeta
return angle_factor * exp_factor * cutoff(r, rc)**2
def G4_single_neighbor_2D(theta_grid, rc_grid, r_all, zeta, lambda_c, eta):
cutoff = cutoff_cos
rij = r_all # rij = rik
cos_theta = np.cos(theta_grid)
rjk = sqrt(2) * rij * np.sqrt(1 - cos_theta) # Simplified law of cosines
exp_factor = np.exp(-eta*(2*rij**2 + rjk**2))
angle_factor = 2**(1-zeta) * (1 + lambda_c * cos_theta)**zeta
cutoff_factor = cutoff(rij, rc_grid)**2 * cutoff(rjk, rc_grid)
return angle_factor * exp_factor * cutoff_factor
def G4_single_neighbor(theta, r_all, rc, zeta, lambda_c, eta):
"""
NB: Number 4, not 5
"""
cutoff = cutoff_cos
rij = r_all # rij = rik
cos_theta = np.cos(theta)
rjk = sqrt(2) * rij * np.sqrt(1 - cos_theta) # Simplified law of cosines
exp_factor = np.exp(-eta*(2*rij**2 + rjk**2))
angle_factor = 2**(1-zeta) * (1 + lambda_c * cos_theta)**zeta
cutoff_factor = cutoff(rij, rc)**2 * cutoff(rjk, rc)
return angle_factor * exp_factor * cutoff_factor
def G5_single_neighbor(theta, r_all, rc, zeta, lambda_c, eta):
"""
Assumes cutoffs to be normalized to 1 and is removed from eqs
"""
cutoff = cutoff_cos
rij = r_all # Both equal
exp_factor = np.exp(-eta*2*rij**2)
angle_factor = 2**(1-zeta) * (1 + lambda_c * np.cos(theta))**zeta
cutoff_factor = cutoff(rij, rc)**2
return angle_factor * exp_factor * cutoff_factor
def G5_single_neighbor_radial(r, zeta, lambda_c, eta):
"""
Radial part of G5 when rij = rik
"""
theta = pi/3. # Constant at 60 degrees aka pi/3
exp_factor = np.exp(-eta*2*r**2) # rij = rik
angle_factor = 2**(1-zeta) * (1 + lambda_c * np.cos(theta))**zeta
return angle_factor * exp_factor
def G5_single_neighbor_rjk(theta, rc, zeta, lambda_c, eta, cutoff=cutoff_cos, percent_of_rc=0.8):
"""
rij = rik = 0.8 Rc
theta = 0,..,360
"""
rij = percent_of_rc * rc
cos_theta = np.cos(theta)
exp_factor = np.exp(-eta*2*rij**2)
angle_factor = 2**(1-zeta) * (1 + lambda_c * cos_theta)**zeta
cutoff_factor = cutoff(rij, rc)**2
return angle_factor * exp_factor * cutoff_factor
if __name__ == '__main__':
"""
Mainly for testing purpose
"""
print("This does absolutely nothing, I'm afraid dear!")
| 32.239316 | 97 | 0.568399 | 1,183 | 7,544 | 3.459003 | 0.144548 | 0.020528 | 0.058651 | 0.030792 | 0.812072 | 0.745112 | 0.732893 | 0.722141 | 0.687439 | 0.687439 | 0 | 0.033073 | 0.270546 | 7,544 | 233 | 98 | 32.377682 | 0.710522 | 0.08868 | 0 | 0.640625 | 0 | 0 | 0.009541 | 0 | 0 | 0 | 0 | 0.004292 | 0 | 0 | null | null | 0 | 0.015625 | null | null | 0.007813 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8bbe6d5e2e40ba5708d40eec499289daf2f9bd49 | 5,462 | py | Python | pymbs/processing/body.py | brutzl/pymbs | fb7c91435f56b5c4d460f82f081d5d1960fea886 | [
"MIT"
] | null | null | null | pymbs/processing/body.py | brutzl/pymbs | fb7c91435f56b5c4d460f82f081d5d1960fea886 | [
"MIT"
] | null | null | null | pymbs/processing/body.py | brutzl/pymbs | fb7c91435f56b5c4d460f82f081d5d1960fea886 | [
"MIT"
] | null | null | null | from pymbs.common.abstractbody import AbstractBody
from pymbs.symbolics import Matrix
from .frame import Frame
import pymbs.symbolics as symbolics
class Body(AbstractBody):
'''
Body holding mass and inertia properties
'''
def __init__(self, name, mass=0, cg=symbolics.zeros((3,)), inertia=symbolics.zeros((3,3)), graph=None):
'''
Constructor
name: Name of the Body
mass: Mass in kg (Scalar)
cg: Centre Of Gravity (3x1 Vector)
inertia: Inertia Tensor w.r.t. The Centre Of Gravity (symmetric 3x3 Matrix)
'''
# Call MbsElement Constructor
assert graph is not None
AbstractBody.__init__(self, name, mass, cg, inertia, graph)
# additional attributes
self.index = None # body index, i.e. position in mass matrix
self.children = [] # list of all children coordinate systems
self.joint = None # reference to parent joint
# attributes used for calculation
self.I_r = None # Position of Origin w.r.t. Inertial Frame
self.I_v = None # Velocity of Origin w.r.t. Inertial Frame
self.I_a = None # Acceleration of Origin w.r.t. Inertial Frame
self.I_l = None # Centre of Gravity w.r.t. Inertial Frame
self.I_R = None # Transformation Matrix, Inertial Frame <- Body Frame
self.K_om = None # Angular Velocity w.r.t. (Body Frame if Explicit, Inertial Frame if Recursive!!!)
self.K_al = None # Angular Acceleration w.r.t. (Body Frame if Explicit, Inertial Frame if Recursive!!!)
self.CS_0 = self.addFrame('_int_CS_0')
def addFrame(self, name, p=symbolics.zeros((3,)), R=symbolics.eye((3,3))):
'''
Add A New Coordinate System To The List Of Children
name: Name of the Coordinate System
p: Position of the Coordinate System (3x1 Vector)
R: Orientation of the Coordinate System (3x3 Matrix)
'''
# Create a New Coordinate System
cs = Frame(name=name, parentBody=self, p=p, R=R, graph=self.graph)
# Append it to the List of Children
self.children += [cs]
# return new Coordinate System
return cs
class FlexibleBody(AbstractBody):
'''
Body holding mass and inertia properties
'''
def __init__(self, sid, name, mass=0, cg=symbolics.zeros((3,)), inertia=symbolics.zeros((3,3)), graph=None):
'''
Constructor
name: Name of the Body
mass: Mass in kg (Scalar)
cg: Centre Of Gravity (3x1 Vector)
inertia: Inertia Tensor w.r.t. The Centre Of Gravity (symmetric 3x3 Matrix)
'''
# Call MbsElement Constructor
assert graph is not None
AbstractBody.__init__(self, name, mass, cg, inertia, graph)
# FlexibleBody object in Processing requires sid-object as well
self.sid = sid
# additional attributes
self.index = None # body index, i.e. position in mass matrix
self.children = [] # list of all children coordinate systems
self.joint = None # reference to parent joint
# attributes used for calculation
self.I_r = None # Position of Origin w.r.t. Inertial Frame
self.I_v = None # Velocity of Origin w.r.t. Inertial Frame
self.I_a = None # Acceleration of Origin w.r.t. Inertial Frame
self.I_l = None # Centre of Gravity w.r.t. Inertial Frame
self.I_R = None # Transformation Matrix, Inertial Frame <- Body Frame
self.K_om = None # Angular Velocity w.r.t. (Body Frame if Explicit, Inertial Frame if Recursive!!!)
self.K_al = None # Angular Acceleration w.r.t. (Body Frame if Explicit, Inertial Frame if Recursive!!!)
#self.CS_0 = self.addFrame('_int_CS_0')
# checking if the values of nelastq and nq (SID-File) are equal
for nodes in self.sid.modal.frame.Knoten:
if nodes.origin.originmatrix.nq != self.sid.modal.refmod.nelastq:
raise NotImplementedError('the values of nelastq and nq (SID-File) must be equal')
name_flexible_coordinates = 'flexible_coordinates'
name_flexible_velocity = 'flexible_velocity'
name_flexible_acceleration = 'flexible_acceleration'
self.q = [graph.addVariable(name='q_%i_%s_%s' %(i+1,name_flexible_coordinates,self.name)) for i in range(self.sid.modal.refmod.nelastq)]
self.q_vec = Matrix(self.q)
self.qd = [graph.addVariable(name='qd_%i_%s_%s' %(i+1,name_flexible_velocity,self.name)) for i in range(self.sid.modal.refmod.nelastq)]
self.qd_vec = Matrix(self.qd)
self.qdd = [graph.addVariable(name='qdd_%i_%s_%s' %(i+1,name_flexible_acceleration,self.name)) for i in range(self.sid.modal.refmod.nelastq)]
self.q0 = [0]*self.sid.modal.refmod.nelastq
self.qd0 = [0]*self.sid.modal.refmod.nelastq
def addFrame(self, name, p=symbolics.zeros((3,)), R=symbolics.eye((3,3))):
'''
Add A New Coordinate System To The List Of Children
name: Name of the Coordinate System
p: Position of the Coordinate System (3x1 Vector)
R: Orientation of the Coordinate System (3x3 Matrix)
'''
# Create a New Coordinate System
cs = Frame(name=name, parentBody=self, p=p, R=R, graph=self.graph)
# Append it to the List of Children
self.children += [cs]
# return new Coordinate System
return cs
| 40.459259 | 149 | 0.642439 | 760 | 5,462 | 4.532895 | 0.168421 | 0.008128 | 0.012192 | 0.025544 | 0.824383 | 0.817126 | 0.800871 | 0.786067 | 0.76865 | 0.76865 | 0 | 0.010131 | 0.259063 | 5,462 | 134 | 150 | 40.761194 | 0.841117 | 0.42219 | 0 | 0.581818 | 0 | 0 | 0.053015 | 0.007277 | 0 | 0 | 0 | 0 | 0.036364 | 1 | 0.072727 | false | 0 | 0.072727 | 0 | 0.218182 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8bcc3a870bcbbeb41b4ff8351bbeb22f3d1ac191 | 80 | py | Python | tests/test_sanity.py | ffreemt/freemt-utils | 25bf192033235bb783005795f8c0bcdd8a79610f | [
"MIT"
] | null | null | null | tests/test_sanity.py | ffreemt/freemt-utils | 25bf192033235bb783005795f8c0bcdd8a79610f | [
"MIT"
] | null | null | null | tests/test_sanity.py | ffreemt/freemt-utils | 25bf192033235bb783005795f8c0bcdd8a79610f | [
"MIT"
] | null | null | null | ''' sanity check
'''
def test_sanity():
''' sanity check '''
assert 1
| 10 | 24 | 0.5375 | 9 | 80 | 4.666667 | 0.666667 | 0.52381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017241 | 0.275 | 80 | 7 | 25 | 11.428571 | 0.706897 | 0.325 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0.5 | true | 0 | 0 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4770f7a6a8ed87a23f0ee21a13326e42b2c6c6b1 | 41,723 | py | Python | NitroFE/time_based_features/weighted_window_features/weighted_window_features.py | NITRO-AI/NitroFE | 08d5ccd2be7da4534bd1fb04b85d7c61ba1c017e | [
"Apache-2.0"
] | 81 | 2021-10-31T12:20:10.000Z | 2022-03-29T22:38:06.000Z | NitroFE/time_based_features/weighted_window_features/weighted_window_features.py | adbmd/NitroFE | 327a54ffd5f9aaa19d05d7d87918757e3b0f5712 | [
"Apache-2.0"
] | 1 | 2021-11-02T14:21:48.000Z | 2021-11-02T14:21:48.000Z | NitroFE/time_based_features/weighted_window_features/weighted_window_features.py | adbmd/NitroFE | 327a54ffd5f9aaa19d05d7d87918757e3b0f5712 | [
"Apache-2.0"
] | 7 | 2021-11-01T08:17:37.000Z | 2022-01-01T19:06:06.000Z | from pandas.core.series import Series
from NitroFE.time_based_features.weighted_window_features.weighted_windows import (
_barthann_window,
_bartlett_window,
_equal_window,
_blackman_window,
_blackmanharris_window,
_bohman_window,
_cosine_window,
_exponential_window,
_flattop_window,
_gaussian_window,
_hamming_window,
_hann_window,
_kaiser_window,
_parzen_window,
_triang_window,
_weighted_moving_window,
)
import numpy as np
import pandas as pd
from typing import Union, Callable
class weighted_window_features:
def __init__(self):
self.params = {}
pass
def first_fit_params_save(self, function_name, **kwargs):
if not function_name in self.params:
self.params[function_name] = {}
for _key in kwargs.keys():
self.params[function_name][_key] = kwargs[_key]
def _template_feature_calculation(
self,
function_name,
win_function,
first_fit: bool = True,
dataframe: Union[pd.DataFrame, pd.Series] = None,
window: int = 3,
min_periods: int = 1,
symmetric: bool = False,
operation: Callable = np.mean,
operation_args: tuple = (),
last_values_from_calculated: bool = False,
**kwargs
):
_function_name = function_name
if not isinstance(operation_args, tuple):
operation_args = (operation_args,)
if first_fit:
self.params[_function_name] = {}
self.params[_function_name]["window"] = window
self.params[_function_name]["min_periods"] = min_periods
self.params[_function_name]["symmetric"] = symmetric
self.params[_function_name]["operation"] = operation
self.params[_function_name]["operation_args"] = operation_args
self.params[_function_name][
"last_values_from_calculated"
] = last_values_from_calculated
self.first_fit_params_save(_function_name, kwargs=kwargs)
if not first_fit:
if (
self.params[_function_name]["last_values_from_previous_run"] is None
) and (self.params[_function_name]["window"] != 1):
raise ValueError(
"First fit has not occured before. Kindly run first_fit=True for first fit instance,"
"and then proceed with first_fit=False for subsequent fits "
)
dataframe = pd.concat(
[
self.params[_function_name]["last_values_from_previous_run"],
dataframe,
],
axis=0,
)
_return = dataframe.rolling(
window=self.params[_function_name]["window"],
min_periods=self.params[_function_name]["min_periods"],
).agg(
lambda x: self.params[_function_name]["operation"](
win_function(
data=x,
window_size=self.params[_function_name]["window"],
symmetric=self.params[_function_name]["symmetric"],
**self.params[_function_name]["kwargs"]
),
*self.params[_function_name]["operation_args"]
)
)
if not first_fit:
_return = _return.iloc[
self.params[_function_name]["len_last_values_from_previous_run"] :
]
if not self.params[_function_name]["last_values_from_calculated"]:
_last_values_from_previous_run = (
dataframe.iloc[1 - self.params[_function_name]["window"] :]
if self.params[_function_name]["window"] != 1
else None
)
else:
_last_values_from_previous_run = (
_return.iloc[1 - self.params[_function_name]["window"] :]
if self.params[_function_name]["window"] != 1
else None
)
self.first_fit_params_save(
_function_name,
last_values_from_previous_run=_last_values_from_previous_run,
len_last_values_from_previous_run=len(_last_values_from_previous_run),
)
return _return
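The `first_fit` bookkeeping above enables chunked rolling computation: by prepending the last `window - 1` rows saved from the previous call, rolling results over split data match a single pass over the full series. A minimal standalone sketch of that idea (plain rolling mean, not this class's API):

```python
import pandas as pd

s = pd.Series(range(10), dtype=float)
window = 3

# Single pass over the full series
full = s.rolling(window, min_periods=1).mean()

# Two chunks: carry the last window - 1 values into the second chunk
a, b = s.iloc[:6], s.iloc[6:]
part1 = a.rolling(window, min_periods=1).mean()
carried = pd.concat([a.iloc[1 - window:], b])
# Drop the rows that only existed to warm up the window
part2 = carried.rolling(window, min_periods=1).mean().iloc[window - 1:]

chunked = pd.concat([part1, part2])
# chunked matches full, row for row
```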
def caluclate_weighted_moving_window_feature(
self,
dataframe: Union[pd.DataFrame, pd.Series],
first_fit: bool = True,
window: int = 3,
min_periods: int = 1,
operation: Callable = None,
operation_args: tuple = (),
):
"""
Create weighted moving window feature
A weighted average is an average that has multiplying factors to give different weights to data at different positions in the sample window.
Mathematically, the weighted moving average is the convolution of the data with a fixed weighting function.
        In an n-day WMA the latest day has weight n, the second latest n-1, and so on down to 1.
Parameters
----------
dataframe : Union[pd.DataFrame,pd.Series]
dataframe/series over weighted rolling window feature is to be constructed
first_fit : bool, optional
            Rolling features require the past "window" number of values for calculation.
            Use True when calculating for training data (the last "window" number of values will be saved).
            Use False when calculating for testing/production data (the last "window" number of values,
            saved during the previous fit, will be utilized for calculation), by default True
        window : int, optional
            Size of the rolling window, by default 3
        min_periods : int, optional
            Minimum number of observations in window required to have a value, by default 1
        operation : Callable, optional
            operation to perform over the weighted rolling window values; when None is passed, np.sum is used
        operation_args : tuple, optional
            additional argument values to be sent to the self-defined operation function
        """
        operation = np.sum if operation is None else operation
_function_name = "caluclate_weighted_moving_window_feature"
return self._template_feature_calculation(
function_name=_function_name,
win_function=_weighted_moving_window,
first_fit=first_fit,
dataframe=dataframe,
window=window,
min_periods=min_periods,
symmetric=None,
operation=operation,
operation_args=operation_args,
)
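# As a hedged illustration of the n-day WMA described in the docstring (the actual
# `_weighted_moving_window` helper is defined elsewhere in this module and may differ),
# the linear weighting scheme — latest value weight n, oldest weight 1 — looks like this:

```python
import numpy as np

def linear_wma(values: np.ndarray) -> float:
    # Latest value gets weight n, next-latest n-1, ..., oldest gets 1.
    weights = np.arange(1, len(values) + 1)
    return float(np.dot(values, weights) / weights.sum())

result = linear_wma(np.array([1.0, 2.0, 3.0]))
# (1*1 + 2*2 + 3*3) / (1 + 2 + 3) = 14 / 6
```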
def caluclate_barthann_feature(
self,
dataframe: Union[pd.DataFrame, pd.Series],
first_fit: bool = True,
window: int = 3,
min_periods: int = 1,
symmetric: bool = False,
operation: Callable = None,
operation_args: tuple = (),
):
"""
Create Bartlett–Hann weighted rolling window feature
Parameters
----------
dataframe : Union[pd.DataFrame,pd.Series]
dataframe/series over which Bartlett–Hann weighted rolling window feature is to be constructed
first_fit : bool, optional
            Rolling features require the past "window" number of values for calculation.
            Use True when calculating for training data (the last "window" number of values will be saved).
            Use False when calculating for testing/production data (the last "window" number of values,
            saved during the previous fit, will be utilized for calculation), by default True
        window : int, optional
            Size of the rolling window, by default 3
        min_periods : int, optional
            Minimum number of observations in window required to have a value, by default 1
        symmetric : bool, optional
            When True, generates a symmetric window, for use in filter design. When False,
            generates a periodic window, for use in spectral analysis, by default False
        operation : Callable, optional
            operation to perform over the weighted rolling window values; when None is passed, np.mean is used
        operation_args : tuple, optional
            additional argument values to be sent to the operation function
        """
        operation = np.mean if operation is None else operation
_function_name = "caluclate_barthann_feature"
return self._template_feature_calculation(
function_name=_function_name,
win_function=_barthann_window,
first_fit=first_fit,
dataframe=dataframe,
window=window,
min_periods=min_periods,
symmetric=symmetric,
operation=operation,
operation_args=operation_args,
)
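# For reference, a Bartlett–Hann window can be computed directly with NumPy. This sketch
# uses the formula SciPy uses for its symmetric `barthann` window and is only
# illustrative — the `_barthann_window` helper used above is defined elsewhere in this
# module:

```python
import numpy as np

def barthann_window(M: int) -> np.ndarray:
    # Symmetric Bartlett-Hann window:
    # w[n] = 0.62 - 0.48*|n/(M-1) - 0.5| + 0.38*cos(2*pi*|n/(M-1) - 0.5|)
    n = np.arange(M)
    fac = np.abs(n / (M - 1) - 0.5)
    return 0.62 - 0.48 * fac + 0.38 * np.cos(2 * np.pi * fac)

w = barthann_window(5)
# Tapers to 0 at the edges and peaks at 1.0 at the center sample.
```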
def caluclate_bartlett_feature(
self,
dataframe: Union[pd.DataFrame, pd.Series],
first_fit: bool = True,
window: int = 3,
min_periods: int = 1,
symmetric: bool = False,
operation: Callable = None,
operation_args: tuple = (),
):
"""
Create bartlett weighted rolling window feature
Parameters
----------
dataframe : Union[pd.DataFrame,pd.Series]
dataframe/series over which bartlett weighted rolling window feature is to be constructed
first_fit : bool, optional
            Rolling features require the past "window" number of values for calculation.
            Use True when calculating for training data (the last "window" number of values will be saved).
            Use False when calculating for testing/production data (the last "window" number of values,
            saved during the previous fit, will be utilized for calculation), by default True
        window : int, optional
            Size of the rolling window, by default 3
        min_periods : int, optional
            Minimum number of observations in window required to have a value, by default 1
        symmetric : bool, optional
            When True, generates a symmetric window, for use in filter design. When False,
            generates a periodic window, for use in spectral analysis, by default False
        operation : Callable, optional
            operation to perform over the weighted rolling window values; when None is passed, np.mean is used
        operation_args : tuple, optional
            additional argument values to be sent to the operation function
        """
        operation = np.mean if operation is None else operation
_function_name = "caluclate_bartlett_feature"
return self._template_feature_calculation(
function_name=_function_name,
win_function=_bartlett_window,
first_fit=first_fit,
dataframe=dataframe,
window=window,
min_periods=min_periods,
symmetric=symmetric,
operation=operation,
operation_args=operation_args,
)
def caluclate_equal_feature(
self,
dataframe: Union[pd.DataFrame, pd.Series],
first_fit: bool = True,
window: int = 3,
min_periods: int = 1,
operation: Callable = None,
operation_args: tuple = (),
):
"""
Create equally weighted rolling window feature
        All elements are weighted equally.
Parameters
----------
dataframe : Union[pd.DataFrame,pd.Series]
dataframe/series over which equally weighted rolling window feature is to be constructed
first_fit : bool, optional
            Rolling features require the past "window" number of values for calculation.
            Use True when calculating for training data (the last "window" number of values will be saved).
            Use False when calculating for testing/production data (the last "window" number of values,
            saved during the previous fit, will be utilized for calculation), by default True
        window : int, optional
            Size of the rolling window, by default 3
        min_periods : int, optional
            Minimum number of observations in window required to have a value, by default 1
        operation : Callable, optional
            operation to perform over the weighted rolling window values; when None is passed, np.mean is used
        operation_args : tuple, optional
            additional argument values to be sent to the operation function
        """
        operation = np.mean if operation is None else operation
_function_name = "caluclate_equal_feature"
return self._template_feature_calculation(
function_name=_function_name,
win_function=_equal_window,
first_fit=first_fit,
dataframe=dataframe,
window=window,
min_periods=min_periods,
symmetric=None,
operation=operation,
operation_args=operation_args,
)
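# A minimal sketch of the general pattern the template applies: weight the values in
# each rolling window, then reduce them with `operation`. With equal weights and
# np.mean this collapses to a plain rolling mean. `apply` is used here instead of
# `agg` purely for clarity; this is an illustration, not this class's code path:

```python
import numpy as np
import pandas as pd

s = pd.Series([2.0, 4.0, 6.0, 8.0])
weights = np.ones(3)  # equal weighting

# min_periods equals the window size, so the lambda only ever sees full windows.
out = s.rolling(window=3, min_periods=3).apply(
    lambda x: np.mean(weights * x), raw=True
)
# out.iloc[2] is mean([2, 4, 6]) -> 4.0; out.iloc[3] is mean([4, 6, 8]) -> 6.0
```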
def caluclate_blackman_feature(
self,
dataframe: Union[pd.DataFrame, pd.Series],
first_fit: bool = True,
window: int = 3,
min_periods: int = 1,
symmetric: bool = False,
operation: Callable = None,
operation_args: tuple = (),
):
"""
Create blackman weighted rolling window feature
Parameters
----------
dataframe : Union[pd.DataFrame,pd.Series]
dataframe/series over which blackman weighted rolling window feature is to be constructed
first_fit : bool, optional
            Rolling features require the past "window" number of values for calculation.
            Use True when calculating for training data (the last "window" number of values will be saved).
            Use False when calculating for testing/production data (the last "window" number of values,
            saved during the previous fit, will be utilized for calculation), by default True
        window : int, optional
            Size of the rolling window, by default 3
        min_periods : int, optional
            Minimum number of observations in window required to have a value, by default 1
        symmetric : bool, optional
            When True, generates a symmetric window, for use in filter design. When False,
            generates a periodic window, for use in spectral analysis, by default False
        operation : Callable, optional
            operation to perform over the weighted rolling window values; when None is passed, np.mean is used
        operation_args : tuple, optional
            additional argument values to be sent to the operation function
        """
        operation = np.mean if operation is None else operation
_function_name = "caluclate_blackman_feature"
return self._template_feature_calculation(
function_name=_function_name,
win_function=_blackman_window,
first_fit=first_fit,
dataframe=dataframe,
window=window,
min_periods=min_periods,
symmetric=symmetric,
operation=operation,
operation_args=operation_args,
)
def caluclate_blackmanharris_feature(
self,
dataframe: Union[pd.DataFrame, pd.Series],
first_fit: bool = True,
window: int = 3,
min_periods: int = 1,
symmetric: bool = False,
operation: Callable = None,
operation_args: tuple = (),
):
"""
Create blackman-harris weighted rolling window feature
Parameters
----------
dataframe : Union[pd.DataFrame,pd.Series]
dataframe/series over which blackman-harris weighted rolling window feature is to be constructed
first_fit : bool, optional
            Rolling features require the past "window" number of values for calculation.
            Use True when calculating for training data (the last "window" number of values will be saved).
            Use False when calculating for testing/production data (the last "window" number of values,
            saved during the previous fit, will be utilized for calculation), by default True
        window : int, optional
            Size of the rolling window, by default 3
        min_periods : int, optional
            Minimum number of observations in window required to have a value, by default 1
        symmetric : bool, optional
            When True, generates a symmetric window, for use in filter design. When False,
            generates a periodic window, for use in spectral analysis, by default False
        operation : Callable, optional
            operation to perform over the weighted rolling window values; when None is passed, np.mean is used
        operation_args : tuple, optional
            additional argument values to be sent to the operation function
        """
        operation = np.mean if operation is None else operation
_function_name = "caluclate_blackmanharris_feature"
return self._template_feature_calculation(
function_name=_function_name,
win_function=_blackmanharris_window,
first_fit=first_fit,
dataframe=dataframe,
window=window,
min_periods=min_periods,
symmetric=symmetric,
operation=operation,
operation_args=operation_args,
)
def caluclate_bohman_feature(
self,
dataframe: Union[pd.DataFrame, pd.Series],
first_fit: bool = True,
window: int = 3,
min_periods: int = 1,
symmetric: bool = False,
operation: Callable = None,
operation_args: tuple = (),
):
"""
Create bohman weighted rolling window feature
Parameters
----------
dataframe : Union[pd.DataFrame,pd.Series]
dataframe/series over which bohman weighted rolling window feature is to be constructed
first_fit : bool, optional
            Rolling features require the past "window" number of values for calculation.
            Use True when calculating for training data (the last "window" number of values will be saved).
            Use False when calculating for testing/production data (the last "window" number of values,
            saved during the previous fit, will be utilized for calculation), by default True
        window : int, optional
            Size of the rolling window, by default 3
        min_periods : int, optional
            Minimum number of observations in window required to have a value, by default 1
        symmetric : bool, optional
            When True, generates a symmetric window, for use in filter design. When False,
            generates a periodic window, for use in spectral analysis, by default False
        operation : Callable, optional
            operation to perform over the weighted rolling window values; when None is passed, np.mean is used
        operation_args : tuple, optional
            additional argument values to be sent to the operation function
        """
        operation = np.mean if operation is None else operation
_function_name = "caluclate_bohman_feature"
return self._template_feature_calculation(
function_name=_function_name,
win_function=_bohman_window,
first_fit=first_fit,
dataframe=dataframe,
window=window,
min_periods=min_periods,
symmetric=symmetric,
operation=operation,
operation_args=operation_args,
)
def caluclate_cosine_feature(
self,
dataframe: Union[pd.DataFrame, pd.Series],
first_fit: bool = True,
window: int = 3,
min_periods: int = 1,
symmetric: bool = False,
operation: Callable = None,
operation_args: tuple = (),
):
"""
Create cosine weighted rolling window feature
Parameters
----------
dataframe : Union[pd.DataFrame,pd.Series]
dataframe/series over which cosine weighted rolling window feature is to be constructed
first_fit : bool, optional
            Rolling features require the past "window" number of values for calculation.
            Use True when calculating for training data (the last "window" number of values will be saved).
            Use False when calculating for testing/production data (the last "window" number of values,
            saved during the previous fit, will be utilized for calculation), by default True
        window : int, optional
            Size of the rolling window, by default 3
        min_periods : int, optional
            Minimum number of observations in window required to have a value, by default 1
        symmetric : bool, optional
            When True, generates a symmetric window, for use in filter design. When False,
            generates a periodic window, for use in spectral analysis, by default False
        operation : Callable, optional
            operation to perform over the weighted rolling window values; when None is passed, np.mean is used
        operation_args : tuple, optional
            additional argument values to be sent to the operation function
        """
        operation = np.mean if operation is None else operation
_function_name = "caluclate_cosine_feature"
return self._template_feature_calculation(
function_name=_function_name,
win_function=_cosine_window,
first_fit=first_fit,
dataframe=dataframe,
window=window,
min_periods=min_periods,
symmetric=symmetric,
operation=operation,
operation_args=operation_args,
)
def caluclate_exponential_feature(
self,
dataframe: Union[pd.DataFrame, pd.Series],
first_fit: bool = True,
window: int = 3,
min_periods: int = 1,
symmetric: bool = False,
operation: Callable = None,
operation_args: tuple = (),
center: float = None,
tau: float = 1,
):
"""
Create exponential weighted rolling window feature
Parameters
----------
dataframe : Union[pd.DataFrame,pd.Series]
dataframe/series over which exponential weighted rolling window feature is to be constructed
first_fit : bool, optional
            Rolling features require the past "window" number of values for calculation.
            Use True when calculating for training data (the last "window" number of values will be saved).
            Use False when calculating for testing/production data (the last "window" number of values,
            saved during the previous fit, will be utilized for calculation), by default True
        window : int, optional
            Size of the rolling window, by default 3
        min_periods : int, optional
            Minimum number of observations in window required to have a value, by default 1
        symmetric : bool, optional
            When True, generates a symmetric window, for use in filter design. When False,
            generates a periodic window, for use in spectral analysis, by default False
        operation : Callable, optional
            operation to perform over the weighted rolling window values; when None is passed, np.mean is used
        operation_args : tuple, optional
            additional argument values to be sent to the operation function
        center : float, optional
            Parameter defining the center location of the window function.
            The default value, if not given, is center = (window - 1) / 2. This parameter
            must take its default value for symmetric windows.
        tau : float, optional
            Parameter defining the decay. For center = 0, use tau = -(window - 1) / ln(x)
            if x is the fraction of the window remaining at the end, by default 1
        """
        operation = np.mean if operation is None else operation
_function_name = "caluclate_exponential_feature"
return self._template_feature_calculation(
function_name=_function_name,
win_function=_exponential_window,
first_fit=first_fit,
dataframe=dataframe,
window=window,
min_periods=min_periods,
symmetric=symmetric,
operation=operation,
operation_args=operation_args,
center=center,
tau=tau,
)
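# A hedged sketch of the window shape that `center` and `tau` control, assuming the
# standard definition w[n] = exp(-|n - center| / tau); the module's `_exponential_window`
# helper is defined elsewhere and is not shown in this chunk:

```python
import numpy as np

def exponential_window(M: int, center: float = None, tau: float = 1.0) -> np.ndarray:
    # w[n] = exp(-|n - center| / tau); center defaults to the middle sample.
    if center is None:
        center = (M - 1) / 2
    n = np.arange(M)
    return np.exp(-np.abs(n - center) / tau)

w = exponential_window(5, tau=1.0)
# Peak of 1.0 at the center sample; each step away decays by a factor of e.
```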
def caluclate_flattop_feature(
self,
dataframe: Union[pd.DataFrame, pd.Series],
first_fit: bool = True,
window: int = 3,
min_periods: int = 1,
symmetric: bool = False,
operation: Callable = None,
operation_args: tuple = (),
):
"""
Create flattop weighted rolling window feature
Parameters
----------
dataframe : Union[pd.DataFrame,pd.Series]
dataframe/series over which flattop weighted rolling window feature is to be constructed
first_fit : bool, optional
            Rolling features require the past "window" number of values for calculation.
            Use True when calculating for training data (the last "window" number of values will be saved).
            Use False when calculating for testing/production data (the last "window" number of values,
            saved during the previous fit, will be utilized for calculation), by default True
        window : int, optional
            Size of the rolling window, by default 3
        min_periods : int, optional
            Minimum number of observations in window required to have a value, by default 1
        symmetric : bool, optional
            When True, generates a symmetric window, for use in filter design. When False,
            generates a periodic window, for use in spectral analysis, by default False
        operation : Callable, optional
            operation to perform over the weighted rolling window values; when None is passed, np.mean is used
        operation_args : tuple, optional
            additional argument values to be sent to the operation function
        """
        operation = np.mean if operation is None else operation
_function_name = "caluclate_flattop_feature"
return self._template_feature_calculation(
function_name=_function_name,
win_function=_flattop_window,
first_fit=first_fit,
dataframe=dataframe,
window=window,
min_periods=min_periods,
symmetric=symmetric,
operation=operation,
operation_args=operation_args,
)
def caluclate_gaussian_feature(
self,
dataframe: Union[pd.DataFrame, pd.Series],
first_fit: bool = True,
window: int = 3,
min_periods: int = 1,
symmetric: bool = False,
operation: Callable = None,
operation_args: tuple = (),
std: float = 1,
):
"""
        Create gaussian weighted rolling window feature
        Parameters
        ----------
        dataframe : Union[pd.DataFrame,pd.Series]
            dataframe/series over which gaussian weighted rolling window feature is to be constructed
first_fit : bool, optional
            Rolling features require the past "window" number of values for calculation.
            Use True when calculating for training data (the last "window" number of values will be saved).
            Use False when calculating for testing/production data (the last "window" number of values,
            saved during the previous fit, will be utilized for calculation), by default True
        window : int, optional
            Size of the rolling window, by default 3
        min_periods : int, optional
            Minimum number of observations in window required to have a value, by default 1
        symmetric : bool, optional
            When True, generates a symmetric window, for use in filter design. When False,
            generates a periodic window, for use in spectral analysis, by default False
        operation : Callable, optional
            operation to perform over the weighted rolling window values; when None is passed, np.mean is used
        operation_args : tuple, optional
            additional argument values to be sent to the operation function
        std : float, optional
            The standard deviation, sigma, by default 1
        """
        operation = np.mean if operation is None else operation
_function_name = "caluclate_gaussian_feature"
return self._template_feature_calculation(
function_name=_function_name,
win_function=_gaussian_window,
first_fit=first_fit,
dataframe=dataframe,
window=window,
min_periods=min_periods,
symmetric=symmetric,
operation=operation,
operation_args=operation_args,
std=std,
)
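# To illustrate what `std` controls, assuming the standard Gaussian window definition
# (the module's `_gaussian_window` helper itself is defined elsewhere):

```python
import numpy as np

def gaussian_window(M: int, std: float = 1.0) -> np.ndarray:
    # w[n] = exp(-0.5 * ((n - (M - 1) / 2) / std) ** 2)
    n = np.arange(M) - (M - 1) / 2
    return np.exp(-0.5 * (n / std) ** 2)

narrow = gaussian_window(7, std=1.0)
wide = gaussian_window(7, std=3.0)
# A smaller std concentrates the weight near the center sample.
```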
def caluclate_hamming_feature(
self,
dataframe: Union[pd.DataFrame, pd.Series],
first_fit: bool = True,
window: int = 3,
min_periods: int = 1,
symmetric: bool = False,
operation: Callable = None,
operation_args: tuple = (),
):
"""
        Create hamming weighted rolling window feature
        Parameters
        ----------
        dataframe : Union[pd.DataFrame,pd.Series]
            dataframe/series over which hamming weighted rolling window feature is to be constructed
first_fit : bool, optional
            Rolling features require the past "window" number of values for calculation.
            Use True when calculating for training data (the last "window" number of values will be saved).
            Use False when calculating for testing/production data (the last "window" number of values,
            saved during the previous fit, will be utilized for calculation), by default True
        window : int, optional
            Size of the rolling window, by default 3
        min_periods : int, optional
            Minimum number of observations in window required to have a value, by default 1
        symmetric : bool, optional
            When True, generates a symmetric window, for use in filter design. When False,
            generates a periodic window, for use in spectral analysis, by default False
        operation : Callable, optional
            operation to perform over the weighted rolling window values; when None is passed, np.mean is used
        operation_args : tuple, optional
            additional argument values to be sent to the operation function
        """
        operation = np.mean if operation is None else operation
_function_name = "caluclate_hamming_feature"
return self._template_feature_calculation(
function_name=_function_name,
win_function=_hamming_window,
first_fit=first_fit,
dataframe=dataframe,
window=window,
min_periods=min_periods,
symmetric=symmetric,
operation=operation,
operation_args=operation_args,
)
def caluclate_hann_feature(
self,
dataframe: Union[pd.DataFrame, pd.Series],
first_fit: bool = True,
window: int = 3,
min_periods: int = 1,
symmetric: bool = False,
operation: Callable = None,
operation_args: tuple = (),
):
"""
        Create hann weighted rolling window feature
        Parameters
        ----------
        dataframe : Union[pd.DataFrame,pd.Series]
            dataframe/series over which hann weighted rolling window feature is to be constructed
first_fit : bool, optional
            Rolling features require the past "window" number of values for calculation.
            Use True when calculating for training data (the last "window" number of values will be saved).
            Use False when calculating for testing/production data (the last "window" number of values,
            saved during the previous fit, will be utilized for calculation), by default True
        window : int, optional
            Size of the rolling window, by default 3
        min_periods : int, optional
            Minimum number of observations in window required to have a value, by default 1
        symmetric : bool, optional
            When True, generates a symmetric window, for use in filter design. When False,
            generates a periodic window, for use in spectral analysis, by default False
        operation : Callable, optional
            operation to perform over the weighted rolling window values; when None is passed, np.mean is used
        operation_args : tuple, optional
            additional argument values to be sent to the operation function
        """
        operation = np.mean if operation is None else operation
_function_name = "caluclate_hann_feature"
return self._template_feature_calculation(
function_name=_function_name,
win_function=_hann_window,
first_fit=first_fit,
dataframe=dataframe,
window=window,
min_periods=min_periods,
symmetric=symmetric,
operation=operation,
operation_args=operation_args,
)
def caluclate_kaiser_feature(
self,
dataframe: Union[pd.DataFrame, pd.Series],
first_fit: bool = True,
window: int = 3,
min_periods: int = 1,
symmetric: bool = False,
beta: float = 7,
operation: Callable = None,
operation_args: tuple = (),
):
"""
        Create kaiser weighted rolling window feature
        Parameters
        ----------
        dataframe : Union[pd.DataFrame,pd.Series]
            dataframe/series over which kaiser weighted rolling window feature is to be constructed
first_fit : bool, optional
            Rolling features require the past "window" number of values for calculation.
            Use True when calculating for training data (the last "window" number of values will be saved).
            Use False when calculating for testing/production data (the last "window" number of values,
            saved during the previous fit, will be utilized for calculation), by default True
        window : int, optional
            Size of the rolling window, by default 3
        min_periods : int, optional
            Minimum number of observations in window required to have a value, by default 1
        symmetric : bool, optional
            When True, generates a symmetric window, for use in filter design. When False,
            generates a periodic window, for use in spectral analysis, by default False
        beta : float, optional
            Shape parameter; determines the trade-off between main-lobe width and side-lobe level.
            As beta gets large, the window narrows, by default 7
        operation : Callable, optional
            operation to perform over the weighted rolling window values; when None is passed, np.mean is used
        operation_args : tuple, optional
            additional argument values to be sent to the operation function
        """
        operation = np.mean if operation is None else operation
_function_name = "caluclate_kaiser_feature"
return self._template_feature_calculation(
function_name=_function_name,
win_function=_kaiser_window,
first_fit=first_fit,
dataframe=dataframe,
window=window,
min_periods=min_periods,
symmetric=symmetric,
operation=operation,
operation_args=operation_args,
beta=beta,
)
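# NumPy ships a Kaiser window (`np.kaiser`), which illustrates the `beta` trade-off
# described in the docstring above:

```python
import numpy as np

small_beta = np.kaiser(7, 2.0)
large_beta = np.kaiser(7, 14.0)
# Both windows peak at 1.0 in the center; a larger beta narrows the window,
# pushing the edge weights toward zero.
```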
def caluclate_parzen_feature(
self,
dataframe: Union[pd.DataFrame, pd.Series],
first_fit: bool = True,
window: int = 3,
min_periods: int = 1,
symmetric: bool = False,
operation: Callable = None,
operation_args: tuple = (),
):
"""
        Create parzen weighted rolling window feature
        Parameters
        ----------
        dataframe : Union[pd.DataFrame,pd.Series]
            dataframe/series over which parzen weighted rolling window feature is to be constructed
first_fit : bool, optional
            Rolling features require the past "window" number of values for calculation.
            Use True when calculating for training data (the last "window" number of values will be saved).
            Use False when calculating for testing/production data (the last "window" number of values,
            saved during the previous fit, will be utilized for calculation), by default True
        window : int, optional
            Size of the rolling window, by default 3
        min_periods : int, optional
            Minimum number of observations in window required to have a value, by default 1
        symmetric : bool, optional
            When True, generates a symmetric window, for use in filter design. When False,
            generates a periodic window, for use in spectral analysis, by default False
        operation : Callable, optional
            operation to perform over the weighted rolling window values; when None is passed, np.mean is used
        operation_args : tuple, optional
            additional argument values to be sent to the operation function
        """
        operation = np.mean if operation is None else operation
_function_name = "caluclate_parzen_feature"
return self._template_feature_calculation(
function_name=_function_name,
win_function=_parzen_window,
first_fit=first_fit,
dataframe=dataframe,
window=window,
min_periods=min_periods,
symmetric=symmetric,
operation=operation,
operation_args=operation_args,
)
def caluclate_triang_feature(
self,
dataframe: Union[pd.DataFrame, pd.Series],
first_fit: bool = True,
window: int = 3,
min_periods: int = 1,
symmetric: bool = False,
operation: Callable = None,
operation_args: tuple = (),
):
"""
        Create triang weighted rolling window feature
        Parameters
        ----------
        dataframe : Union[pd.DataFrame,pd.Series]
            dataframe/series over which triang weighted rolling window feature is to be constructed
first_fit : bool, optional
            Rolling features require the past "window" number of values for calculation.
            Use True when calculating for training data (the last "window" number of values will be saved).
            Use False when calculating for testing/production data (the last "window" number of values,
            saved during the previous fit, will be utilized for calculation), by default True
        window : int, optional
            Size of the rolling window, by default 3
        min_periods : int, optional
            Minimum number of observations in window required to have a value, by default 1
        symmetric : bool, optional
            When True, generates a symmetric window, for use in filter design. When False,
            generates a periodic window, for use in spectral analysis, by default False
        operation : Callable, optional
            operation to perform over the weighted rolling window values; when None is passed, np.mean is used
        operation_args : tuple, optional
            additional argument values to be sent to the operation function
        """
        operation = np.mean if operation is None else operation
_function_name = "caluclate_triang_feature"
return self._template_feature_calculation(
function_name=_function_name,
win_function=_triang_window,
first_fit=first_fit,
dataframe=dataframe,
window=window,
min_periods=min_periods,
symmetric=symmetric,
operation=operation,
operation_args=operation_args,
)
9a41cdd9cca7417079cd4d1c2752aab4d67bdae0 | 37 | py | Python | zeroth/namesss/__init__.py | njvrzm/zeroth | 26c000389403cd7e54dca7dfb9364b9fe50e161a | [
"MIT"
] | null | null | null | zeroth/namesss/__init__.py | njvrzm/zeroth | 26c000389403cd7e54dca7dfb9364b9fe50e161a | [
"MIT"
] | null | null | null | zeroth/namesss/__init__.py | njvrzm/zeroth | 26c000389403cd7e54dca7dfb9364b9fe50e161a | [
"MIT"
] | null | null | null | from .namesss import getNamesForYear
| 18.5 | 36 | 0.864865 | 4 | 37 | 8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108108 | 37 | 1 | 37 | 37 | 0.969697 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d0005db1a2e08a9d801cd8b63bf0ba6675c55db9 | 49 | py | Python | app/business/blog/__init__.py | Anioko/reusable | de6480bc23fb8cfff474985128be91f4dd391be6 | [
"MIT"
] | null | null | null | app/business/blog/__init__.py | Anioko/reusable | de6480bc23fb8cfff474985128be91f4dd391be6 | [
"MIT"
] | null | null | null | app/business/blog/__init__.py | Anioko/reusable | de6480bc23fb8cfff474985128be91f4dd391be6 | [
"MIT"
] | null | null | null | from app.business.blog.views import blog # noqa
| 24.5 | 48 | 0.77551 | 8 | 49 | 4.75 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 49 | 1 | 49 | 49 | 0.904762 | 0.081633 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d09a47265de40fa28d63d23152163d2afe398393 | 32 | py | Python | RandomBot/__init__.py | RandomBotDev/RandomBotCog | a00db232cf6eeb85293060a3ffaf6d44f4330450 | [
"MIT"
] | null | null | null | RandomBot/__init__.py | RandomBotDev/RandomBotCog | a00db232cf6eeb85293060a3ffaf6d44f4330450 | [
"MIT"
] | null | null | null | RandomBot/__init__.py | RandomBotDev/RandomBotCog | a00db232cf6eeb85293060a3ffaf6d44f4330450 | [
"MIT"
] | 1 | 2022-03-07T11:48:37.000Z | 2022-03-07T11:48:37.000Z | from RandomBot.MainCog import *
| 16 | 31 | 0.8125 | 4 | 32 | 6.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 32 | 1 | 32 | 32 | 0.928571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
efecbbcd5adab5ef1bed01ae2353a20bf7d46be4 | 48 | py | Python | tests/examples-bad/2.py | JohannesBuchner/pystrict3 | f442a89ac6a23f4323daed8ef829d8e9e1197f90 | [
"BSD-2-Clause"
] | 1 | 2020-06-05T08:53:26.000Z | 2020-06-05T08:53:26.000Z | tests/examples-bad/2.py | JohannesBuchner/pystrict3 | f442a89ac6a23f4323daed8ef829d8e9e1197f90 | [
"BSD-2-Clause"
] | 1 | 2020-06-04T13:47:19.000Z | 2020-06-04T13:47:57.000Z | tests/examples-bad/2.py | JohannesBuchner/pystrict3 | f442a89ac6a23f4323daed8ef829d8e9e1197f90 | [
"BSD-2-Clause"
] | 1 | 2020-11-07T17:02:46.000Z | 2020-11-07T17:02:46.000Z | def format():
    pass  ## bad, format shadows a builtin
| 16 | 33 | 0.666667 | 8 | 48 | 4 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.208333 | 48 | 2 | 34 | 24 | 0.842105 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
4bd4fec1d55acfabcdd266d3d2c4b9e7eead06a4 | 7,156 | py | Python | models/unet.py | johnmartinsson/adversarial-representation-learning | 86cd1489b0bdfa76bab37e313c6ab53304179f1e | [
"Apache-2.0"
] | null | null | null | models/unet.py | johnmartinsson/adversarial-representation-learning | 86cd1489b0bdfa76bab37e313c6ab53304179f1e | [
"Apache-2.0"
] | null | null | null | models/unet.py | johnmartinsson/adversarial-representation-learning | 86cd1489b0bdfa76bab37e313c6ab53304179f1e | [
"Apache-2.0"
] | null | null | null | import torch
import torch.nn as nn
import torch.nn.functional as F
def double_conv(channels_in, channels_out):
return nn.Sequential(
nn.Conv2d(channels_in, channels_out, 3, padding=1),
nn.BatchNorm2d(channels_out),
nn.ReLU(),
nn.Conv2d(channels_out, channels_out, 3, padding=1),
nn.BatchNorm2d(channels_out),
nn.ReLU()
)
class UNetFilter(nn.Module):
def __init__(self, channels_in, channels_out, chs=[32, 64, 128, 256, 512], image_width=64, image_height=64, noise_dim=10, activation='sigmoid', nb_classes=2, embedding_dim=16, use_cond=True):
super().__init__()
self.use_cond = use_cond
self.width = image_width
self.height = image_height
self.activation = activation
self.embed_condition = nn.Embedding(nb_classes, embedding_dim)
# noise projection layer
self.project_noise = nn.Linear(noise_dim, image_width//16 * image_height//16 * chs[4])
# condition projection layer
self.project_cond = nn.Linear(embedding_dim, image_width//16 * image_height//16)
self.dconv_down1 = double_conv(channels_in, chs[0])
self.pool_down1 = nn.MaxPool2d(2, stride=2)
self.dconv_down2 = double_conv(chs[0], chs[1])
self.pool_down2 = nn.MaxPool2d(2, stride=2)
self.dconv_down3 = double_conv(chs[1], chs[2])
self.pool_down3 = nn.MaxPool2d(2, stride=2)
self.dconv_down4 = double_conv(chs[2], chs[3])
self.pool_down4 = nn.MaxPool2d(2, stride=2)
self.dconv_down5 = double_conv(chs[3], chs[4])
if self.use_cond:
self.dconv_up5 = double_conv(chs[4]+chs[4]+1+chs[3], chs[3])
else:
self.dconv_up5 = double_conv(chs[4]+chs[4]+chs[3], chs[3])
self.dconv_up4 = double_conv(chs[3]+chs[2], chs[2])
self.dconv_up3 = double_conv(chs[2]+chs[1], chs[1])
self.dconv_up2 = double_conv(chs[1]+chs[0], chs[0])
self.dconv_up1 = nn.Conv2d(chs[0], channels_out, kernel_size=1)
def forward(self, x, z, cond):
noise = self.project_noise(z).reshape(x.shape[0], 512, x.shape[2]//16, x.shape[3]//16)
cond_emb = self.embed_condition(cond)
cond_emb = self.project_cond(cond_emb).reshape(x.shape[0], 1, x.shape[2]//16, x.shape[3]//16)
conv1_down = self.dconv_down1(x)
pool1 = self.pool_down1(conv1_down)
conv2_down = self.dconv_down2(pool1)
pool2 = self.pool_down2(conv2_down)
conv3_down = self.dconv_down3(pool2)
pool3 = self.pool_down3(conv3_down)
conv4_down = self.dconv_down4(pool3)
pool4 = self.pool_down4(conv4_down)
conv5_down = self.dconv_down5(pool4)
if self.use_cond:
conv5_down = torch.cat((conv5_down, noise, cond_emb), dim=1)
else:
conv5_down = torch.cat((conv5_down, noise), dim=1)
conv5_up = F.interpolate(conv5_down, scale_factor=2, mode='nearest')
conv5_up = torch.cat((conv4_down, conv5_up), dim=1)
conv5_up = self.dconv_up5(conv5_up)
conv4_up = F.interpolate(conv5_up, scale_factor=2, mode='nearest')
conv4_up = torch.cat((conv3_down, conv4_up), dim=1)
conv4_up = self.dconv_up4(conv4_up)
conv3_up = F.interpolate(conv4_up, scale_factor=2, mode='nearest')
conv3_up = torch.cat((conv2_down, conv3_up), dim=1)
conv3_up = self.dconv_up3(conv3_up)
conv2_up = F.interpolate(conv3_up, scale_factor=2, mode='nearest')
conv2_up = torch.cat((conv1_down, conv2_up), dim=1)
conv2_up = self.dconv_up2(conv2_up)
conv1_up = self.dconv_up1(conv2_up)
if self.activation == 'sigmoid':
x = torch.sigmoid(conv1_up)
else:
x = torch.tanh(conv1_up)
return x
class UNet(nn.Module):
def __init__(self, channels_in, channels_out, chs=[8, 16, 32, 64, 128], image_width=64, image_height=64, noise_dim=10, activation='tanh', additive_noise=True):
super().__init__()
self.width = image_width
self.height = image_height
self.additive_noise = additive_noise
self.activation = activation
# noise projection layer
if noise_dim is not None:
if not additive_noise:
self.project_noise = nn.Linear(noise_dim, image_width*image_height)
self.dconv_down1 = double_conv(channels_in+1, chs[0])
else:
self.project_noise = nn.Linear(noise_dim, channels_in*image_width*image_height)
self.dconv_down1 = double_conv(channels_in, chs[0])
else:
self.dconv_down1 = double_conv(channels_in, chs[0])
self.pool_down1 = nn.MaxPool2d(2, stride=2)
self.dconv_down2 = double_conv(chs[0], chs[1])
self.pool_down2 = nn.MaxPool2d(2, stride=2)
self.dconv_down3 = double_conv(chs[1], chs[2])
self.pool_down3 = nn.MaxPool2d(2, stride=2)
self.dconv_down4 = double_conv(chs[2], chs[3])
self.pool_down4 = nn.MaxPool2d(2, stride=2)
self.dconv_down5 = double_conv(chs[3], chs[4])
self.dconv_up5 = double_conv(chs[4]+chs[3], chs[4])
self.dconv_up4 = double_conv(chs[3]+chs[2], chs[3])
self.dconv_up3 = double_conv(chs[2]+chs[1], chs[2])
self.dconv_up2 = double_conv(chs[1]+chs[0], chs[1])
self.dconv_up1 = nn.Conv2d(chs[1], channels_out, kernel_size=1)
def forward(self, x, z=None):
if z is not None:
if self.additive_noise:
noise = self.project_noise(z).reshape(x.shape)
x = x + noise
else:
noise = self.project_noise(z).reshape(x.shape[0], 1, x.shape[2], x.shape[3])
x = torch.cat((x, noise), dim=1) # concatenate along channel dimension
conv1_down = self.dconv_down1(x)
pool1 = self.pool_down1(conv1_down)
conv2_down = self.dconv_down2(pool1)
pool2 = self.pool_down2(conv2_down)
conv3_down = self.dconv_down3(pool2)
pool3 = self.pool_down3(conv3_down)
conv4_down = self.dconv_down4(pool3)
pool4 = self.pool_down4(conv4_down)
conv5_down = self.dconv_down5(pool4)
conv5_up = F.interpolate(conv5_down, scale_factor=2, mode='nearest')
conv5_up = torch.cat((conv4_down, conv5_up), dim=1)
conv5_up = self.dconv_up5(conv5_up)
conv4_up = F.interpolate(conv4_down, scale_factor=2, mode='nearest')
conv4_up = torch.cat((conv3_down, conv4_up), dim=1)
conv4_up = self.dconv_up4(conv4_up)
conv3_up = F.interpolate(conv3_down, scale_factor=2, mode='nearest')
conv3_up = torch.cat((conv2_down, conv3_up), dim=1)
conv3_up = self.dconv_up3(conv3_up)
conv2_up = F.interpolate(conv2_down, scale_factor=2, mode='nearest')
conv2_up = torch.cat((conv1_down, conv2_up), dim=1)
conv2_up = self.dconv_up2(conv2_up)
conv1_up = self.dconv_up1(conv2_up)
if self.activation == 'sigmoid':
x = torch.sigmoid(conv1_up)
else:
x = torch.tanh(conv1_up)
return x
| 37.862434 | 195 | 0.629402 | 1,054 | 7,156 | 4.023719 | 0.102467 | 0.091252 | 0.05211 | 0.033954 | 0.80382 | 0.801462 | 0.785192 | 0.752888 | 0.723886 | 0.653619 | 0 | 0.062569 | 0.245109 | 7,156 | 188 | 196 | 38.06383 | 0.72251 | 0.015092 | 0 | 0.608696 | 0 | 0 | 0.011501 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.036232 | false | 0 | 0.021739 | 0.007246 | 0.094203 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4bfa9c0cfb779d6b4fdbdd0f2917e034d14fb504 | 229 | py | Python | src/mpls/mpls_controller.py | harpratap/nfv-mpls | bd7cb779a0ddf613f112fae860d149b7f8f0972f | [
"MIT"
] | null | null | null | src/mpls/mpls_controller.py | harpratap/nfv-mpls | bd7cb779a0ddf613f112fae860d149b7f8f0972f | [
"MIT"
] | null | null | null | src/mpls/mpls_controller.py | harpratap/nfv-mpls | bd7cb779a0ddf613f112fae860d149b7f8f0972f | [
"MIT"
] | null | null | null | class MPLSController:
# host
_host = None
# port
_port = None
# has a label distribution protocol
_label_distribution_protocol = None
def getHost(self):
        return self._host
def getPort(self):
        return self._port
| 13.470588 | 37 | 0.681223 | 27 | 229 | 5.518519 | 0.555556 | 0.228188 | 0.33557 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.262009 | 229 | 16 | 38 | 14.3125 | 0.881657 | 0.187773 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0 | 0.25 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
ef6894bbfe2c86e66e2bbae6584d8f1c8bf63886 | 46 | py | Python | vision/settings/__init__.py | JackGoldsworth/Vision | 084330bec340596167944b623bc7b8d7d9c26b01 | [
"MIT"
] | null | null | null | vision/settings/__init__.py | JackGoldsworth/Vision | 084330bec340596167944b623bc7b8d7d9c26b01 | [
"MIT"
] | 1 | 2018-08-20T18:35:48.000Z | 2019-01-10T02:56:12.000Z | vision/settings/__init__.py | JackGoldsworth/Vision | 084330bec340596167944b623bc7b8d7d9c26b01 | [
"MIT"
] | null | null | null | from .settings_handler import SettingsHandler
| 23 | 45 | 0.891304 | 5 | 46 | 8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.086957 | 46 | 1 | 46 | 46 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
324770b7e7d379ad16c037efadb8859f7c70801f | 25 | py | Python | robotframework-ls/tests/robotframework_ls_tests/_resources/case_same_basename/directory/my_library.py | mardukbp/robotframework-lsp | 57b4b2b14b712c9bf90577924a920fb9b9e831c7 | [
"ECL-2.0",
"Apache-2.0"
] | 92 | 2020-01-22T22:15:29.000Z | 2022-03-31T05:19:16.000Z | robotframework-ls/tests/robotframework_ls_tests/_resources/case_same_basename/directory/my_library.py | mardukbp/robotframework-lsp | 57b4b2b14b712c9bf90577924a920fb9b9e831c7 | [
"ECL-2.0",
"Apache-2.0"
] | 604 | 2020-01-25T17:13:27.000Z | 2022-03-31T18:58:24.000Z | robotframework-ls/tests/robotframework_ls_tests/_resources/case_same_basename/directory/my_library.py | mardukbp/robotframework-lsp | 57b4b2b14b712c9bf90577924a920fb9b9e831c7 | [
"ECL-2.0",
"Apache-2.0"
] | 39 | 2020-02-06T00:38:06.000Z | 2022-03-15T06:14:19.000Z | def in_lib_2():
pass
| 8.333333 | 15 | 0.6 | 5 | 25 | 2.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.055556 | 0.28 | 25 | 2 | 16 | 12.5 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
32479c50d88fc863dca5519fbbc0d501bea7ab11 | 91 | py | Python | tools/dist_train.py | chetanmreddy/imvoxelnet | 10dd35a96539af7b147be4bb03b0395cc164177e | [
"MIT"
] | 1 | 2022-03-11T11:05:35.000Z | 2022-03-11T11:05:35.000Z | tools/dist_train.py | chetanmreddy/imvoxelnet | 10dd35a96539af7b147be4bb03b0395cc164177e | [
"MIT"
] | null | null | null | tools/dist_train.py | chetanmreddy/imvoxelnet | 10dd35a96539af7b147be4bb03b0395cc164177e | [
"MIT"
] | null | null | null | import os
os.system('bash tools/dist_train.sh configs/imvoxelnet/imvoxelnet_scannet.py 2') | 30.333333 | 80 | 0.824176 | 15 | 91 | 4.866667 | 0.866667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011765 | 0.065934 | 91 | 3 | 80 | 30.333333 | 0.847059 | 0 | 0 | 0 | 0 | 0 | 0.728261 | 0.434783 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
3270af60a9809cb2ae7836127962db973a4cdd5f | 75 | py | Python | bnf/test/fixtures/rules/__init__.py | Nikita-Boyarskikh/bnf | 1293b0f2187593989e2484a7af9612477fa8bbe0 | [
"MIT"
] | null | null | null | bnf/test/fixtures/rules/__init__.py | Nikita-Boyarskikh/bnf | 1293b0f2187593989e2484a7af9612477fa8bbe0 | [
"MIT"
] | null | null | null | bnf/test/fixtures/rules/__init__.py | Nikita-Boyarskikh/bnf | 1293b0f2187593989e2484a7af9612477fa8bbe0 | [
"MIT"
] | null | null | null | # flake8: noqa
from .common import *
from .llk import *
from .lrk import *
| 15 | 21 | 0.693333 | 11 | 75 | 4.727273 | 0.636364 | 0.384615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016667 | 0.2 | 75 | 4 | 22 | 18.75 | 0.85 | 0.16 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
32b67ff4a1d3c45ab161c0d5ef092858421d702a | 39 | py | Python | web3_erc20_predefined/predefined/bsc/__init__.py | kkristof200/py_web3_erc20_predefined | e95399bb14c61bb56e56f474937b0ace8565772b | [
"MIT"
] | null | null | null | web3_erc20_predefined/predefined/bsc/__init__.py | kkristof200/py_web3_erc20_predefined | e95399bb14c61bb56e56f474937b0ace8565772b | [
"MIT"
] | null | null | null | web3_erc20_predefined/predefined/bsc/__init__.py | kkristof200/py_web3_erc20_predefined | e95399bb14c61bb56e56f474937b0ace8565772b | [
"MIT"
] | null | null | null | from .busd import *
from .wbnb import * | 19.5 | 19 | 0.717949 | 6 | 39 | 4.666667 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.179487 | 39 | 2 | 20 | 19.5 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
32be6734f613b0b90f439ca6fdc9baa989e825d0 | 15,960 | py | Python | tests/live_tests.py | gramedia-digital-nusantara/midtranspay | 0367f8b261293e49ee8a6e395f6c44455212cf7b | [
"BSD-3-Clause"
] | 6 | 2018-01-30T06:08:52.000Z | 2021-02-15T12:41:40.000Z | tests/live_tests.py | derekjamescurtis/veritranspay | 0367f8b261293e49ee8a6e395f6c44455212cf7b | [
"BSD-3-Clause"
] | 8 | 2015-01-21T17:00:42.000Z | 2017-07-06T05:26:30.000Z | tests/live_tests.py | derekjamescurtis/veritranspay | 0367f8b261293e49ee8a6e395f6c44455212cf7b | [
"BSD-3-Clause"
] | 6 | 2015-07-21T16:49:57.000Z | 2017-07-05T07:55:35.000Z | import random
import unittest
import os
import requests
from requests import codes
import veritranspay
from veritranspay import request, veritrans, payment_types, response
from veritranspay.response import status
from . import fixtures
from faker import Faker
fake = Faker()
SANDBOX_CLIENT_KEY = os.environ.get('SANDBOX_CLIENT_KEY', None)
SANDBOX_SERVER_KEY = os.environ.get('SANDBOX_SERVER_KEY', None)
RUN_ALL_ACCEPTANCE_TESTS = os.environ.get('RUN_ALL_ACCEPTANCE_TESTS', False)
class LiveTests_Base(object):
def setUp(self):
if None in [SANDBOX_CLIENT_KEY, SANDBOX_SERVER_KEY]:
self.skipTest("Live credentials not provided -- skipping tests")
if not RUN_ALL_ACCEPTANCE_TESTS and \
self.VERSION != veritranspay.__version__:
self.skipTest("Skipping this version of tests")
expected = fixtures.CC_REQUEST
self.expected = expected
self.trans_details = request.TransactionDetails(
order_id=expected['transaction_details']['order_id'],
gross_amount=expected['transaction_details']['gross_amount'])
self.cust_details = request.CustomerDetails(
first_name=expected['customer_details']['first_name'],
last_name=expected['customer_details']['last_name'],
email=expected['customer_details']['email'],
phone=expected['customer_details']['phone'],
billing_address=request.Address(
**expected['customer_details']['billing_address']),
shipping_address=request.Address(
**expected['customer_details']['shipping_address'])
)
self.item_details = \
[request.ItemDetails(item_id=item['id'],
price=item['price'],
quantity=item['quantity'],
name=item['name'])
for item
in expected['item_details']]
def get_token(self, cc_num, client_key, secure=False):
# try to get a token
params = {'card_number': cc_num,
'card_exp_month': '12',
'card_exp_year': '2020',
'card_cvv': '123',
'secure': secure,
'gross_amount': 145000,
'client_key': client_key,
}
token_url = 'https://api.sandbox.midtrans.com/v2/token'
resp = requests.get(token_url, params=params)
if resp.status_code == codes.OK:
return resp.json()['token_id']
else:
self.fail("Failed retrieving token from server")
class AcceptanceTests_v0_4(LiveTests_Base, unittest.TestCase):
VERSION = 'v0.4'
def test_success_cc_charge_request(self):
# 1: get a token
# on live, this step --MUST-- be performed by the web
# application through the javascript library.
token = self.get_token(
random.choice(fixtures.CC_ACCEPTED),
SANDBOX_CLIENT_KEY)
# 2: Create a sandbox gateway
gateway = veritrans.VTDirect(
SANDBOX_SERVER_KEY,
sandbox_mode=True)
# 3: Create a charge request
cc_payment = payment_types.CreditCard(
bank=self.expected['credit_card']['bank'],
token_id=token)
charge_req = request.ChargeRequest(
charge_type=cc_payment,
transaction_details=self.trans_details,
customer_details=self.cust_details,
item_details=self.item_details)
# 4: Submit our request
resp = gateway.submit_charge_request(charge_req)
self.assertIsInstance(resp, response.CreditCardChargeResponse)
self.assertEqual(status.SUCCESS, resp.status_code)
class AcceptanceTests_v0_5(LiveTests_Base, unittest.TestCase):
VERSION = 'v0.5'
def test_accept_challenged_charge_request(self):
'''
Verify that we can accept challenged charge requests.
'''
# 1: get a token
# on live, this step --MUST-- be performed by the web
# application through the javascript library.
token = self.get_token(
random.choice(fixtures.CC_CHALLENGED_FDS),
SANDBOX_CLIENT_KEY)
# 2: Create a sandbox gateway
gateway = veritrans.VTDirect(
SANDBOX_SERVER_KEY,
sandbox_mode=True)
# 3: Create a charge request
cc_payment = payment_types.CreditCard(
bank=self.expected['credit_card']['bank'],
token_id=token)
charge_req = request.ChargeRequest(
charge_type=cc_payment,
transaction_details=self.trans_details,
customer_details=self.cust_details,
item_details=self.item_details)
# 4: Submit charge request
# - verify we get a status_code of CHALLENGE back
# - verify that we are returned a CreditCardChargeResponse
resp = gateway.submit_charge_request(charge_req)
self.assertIsInstance(resp, response.CreditCardChargeResponse)
self.assertEqual(status.CHALLENGE, resp.status_code)
# 5: Lookup the status of the transaction using the response
# - verify can use CreditCareChargeResponse can as a StatusRequest
# - verify we get a StatusResponse back
# - verify the status_code is still CHALLENGE
status_resp = gateway.submit_status_request(resp)
self.assertIsInstance(status_resp, response.StatusResponse)
self.assertEqual(status_resp.status_code, status.CHALLENGE)
# 6: Approve the transaction!
# - verify can build an ApprovalRequest
# - verify we get an ApprovalResponse back
# - verify the status_code is now SUCCESS
approval_req = request.ApprovalRequest(
status_resp.order_id)
approval_resp = gateway.submit_approval_request(
approval_req)
self.assertIsInstance(approval_resp, response.ApproveResponse)
self.assertEqual(approval_resp.status_code, status.SUCCESS)
class AcceptanceTests_v0_6(LiveTests_Base, unittest.TestCase):
VERSION = '0.9'
def test_one_click(self):
pass
def test_two_click(self):
pass
def test_preauth_capture(self):
pass
class PermataVA_AcceptanceTests_v0_9(unittest.TestCase):
VERSION = '0.9'
def setUp(self):
if None in [SANDBOX_CLIENT_KEY, SANDBOX_SERVER_KEY]:
self.skipTest("Live credentials not provided -- skipping tests")
if not RUN_ALL_ACCEPTANCE_TESTS and \
self.VERSION != veritranspay.__version__:
self.skipTest("Skipping %s this version of tests" % (self.VERSION))
expected = fixtures.VIRTUALACCOUNTPERMATA_REQUEST
self.expected = expected
self.trans_details = request.TransactionDetails(
order_id=expected['transaction_details']['order_id'],
gross_amount=expected['transaction_details']['gross_amount'])
self.cust_details = request.CustomerDetails(
first_name=expected['customer_details']['first_name'],
last_name=expected['customer_details']['last_name'],
email=expected['customer_details']['email'],
phone=expected['customer_details']['phone'],
billing_address=request.Address(
**expected['customer_details']['billing_address']),
shipping_address=request.Address(
**expected['customer_details']['shipping_address'])
)
self.item_details = \
[request.ItemDetails(item_id=item['id'],
price=item['price'],
quantity=item['quantity'],
name=item['name'])
for item
in expected['item_details']]
def test_virtualaccountpermata(self):
"""
Verify Permata Virtual Account
"""
trans_details = self.trans_details
trans_details.order_id = "".join([fake.random_letter() for _ in range(10)])
# 2: Create a sandbox gateway
gateway = veritrans.VTDirect(
SANDBOX_SERVER_KEY,
sandbox_mode=True)
# 3: Create a charge request
payment = payment_types.VirtualAccountPermata()
charge_req = request.ChargeRequest(
charge_type=payment,
transaction_details=trans_details,
customer_details=self.cust_details,
item_details=self.item_details)
# 4: Submit our request
resp = gateway.submit_charge_request(charge_req)
self.assertIsInstance(resp, response.VirtualAccountPermataChargeResponse)
self.assertEqual(status.PENDING, resp.status_code)
self.assertEqual(self.trans_details.order_id, resp.order_id)
class BriEpay_AcceptanceTests_v0_9(unittest.TestCase):
VERSION = '0.9'
def setUp(self):
if None in [SANDBOX_CLIENT_KEY, SANDBOX_SERVER_KEY]:
self.skipTest("Live credentials not provided -- skipping tests")
if not RUN_ALL_ACCEPTANCE_TESTS and \
self.VERSION != veritranspay.__version__:
self.skipTest("Skipping %s this version of tests" % self.VERSION)
expected = fixtures.BRIEPAY_REQUEST
self.expected = expected
self.trans_details = request.TransactionDetails(
order_id=expected['transaction_details']['order_id'],
gross_amount=expected['transaction_details']['gross_amount'])
self.cust_details = request.CustomerDetails(
first_name=expected['customer_details']['first_name'],
last_name=expected['customer_details']['last_name'],
email=expected['customer_details']['email'],
phone=expected['customer_details']['phone'],
billing_address=request.Address(
**expected['customer_details']['billing_address']),
shipping_address=request.Address(
**expected['customer_details']['shipping_address'])
)
self.item_details = \
[request.ItemDetails(item_id=item['id'],
price=item['price'],
quantity=item['quantity'],
name=item['name'])
for item
in expected['item_details']]
def test_briepay(self):
"""
Verify Bri Epay payment method
"""
# 2: Create a sandbox gateway
gateway = veritrans.VTDirect(
SANDBOX_SERVER_KEY,
sandbox_mode=True)
# 3: Create a charge request
payment = payment_types.BriEpay()
charge_req = request.ChargeRequest(
charge_type=payment,
transaction_details=self.trans_details,
customer_details=self.cust_details,
item_details=self.item_details)
# 4: Submit our request
resp = gateway.submit_charge_request(charge_req)
self.assertIsInstance(resp, response.EpayBriChargeResponse)
self.assertEqual(status.PENDING, resp.status_code)
self.assertEqual(self.trans_details.order_id, resp.order_id)
class MandiriVA_AcceptanceTests_v0_9(unittest.TestCase):
VERSION = '0.9'
def setUp(self):
if None in [SANDBOX_CLIENT_KEY, SANDBOX_SERVER_KEY]:
self.skipTest("Live credentials not provided -- skipping tests")
if not RUN_ALL_ACCEPTANCE_TESTS and \
self.VERSION != veritranspay.__version__:
self.skipTest("Skipping %s this version of tests" % self.VERSION)
expected = fixtures.VIRTUALACCOUNTMANDIRI_REQUEST
self.expected = expected
self.trans_details = request.TransactionDetails(
order_id="".join([fake.random_letter() for _ in range(10)]),
gross_amount=expected['transaction_details']['gross_amount'])
self.cust_details = request.CustomerDetails(
first_name=expected['customer_details']['first_name'],
last_name=expected['customer_details']['last_name'],
email=expected['customer_details']['email'],
phone=expected['customer_details']['phone'],
billing_address=request.Address(
**expected['customer_details']['billing_address']),
shipping_address=request.Address(
**expected['customer_details']['shipping_address'])
)
self.item_details = \
[request.ItemDetails(item_id=item['id'],
price=item['price'],
quantity=item['quantity'],
name=item['name'])
for item
in expected['item_details']]
def test_virtual_account_mandiri(self):
"""
Verify mandiri bill payment
:return:
"""
# 2: Create a sandbox gateway
gateway = veritrans.VTDirect(
SANDBOX_SERVER_KEY,
sandbox_mode=True)
# 3: Create a charge request
payment = payment_types.VirtualAccountMandiri(bill_info1=self.expected['echannel']['bill_info1'], bill_info2=self.expected['echannel']['bill_info2'])
charge_req = request.ChargeRequest(
charge_type=payment,
transaction_details=self.trans_details,
customer_details=self.cust_details,
item_details=self.item_details)
# 4: Submit our request
resp = gateway.submit_charge_request(charge_req)
self.assertIsInstance(resp, response.VirtualAccountMandiriChargeResponse)
self.assertEqual(status.PENDING, resp.status_code)
self.assertEqual(self.trans_details.order_id, resp.order_id)
class GoPay_AcceptanceTests_v0_9(unittest.TestCase):
VERSION = '0.9'
def setUp(self):
if None in [SANDBOX_CLIENT_KEY, SANDBOX_SERVER_KEY]:
self.skipTest("Live credentials not provided -- skipping tests")
if not RUN_ALL_ACCEPTANCE_TESTS and \
self.VERSION != veritranspay.__version__:
self.skipTest("Skipping %s this version of tests" % self.VERSION)
expected = fixtures.GOPAY_REQUEST
self.expected = expected
self.trans_details = request.TransactionDetails(
order_id=expected['transaction_details']['order_id'],
gross_amount=expected['transaction_details']['gross_amount'])
self.cust_details = request.CustomerDetails(
first_name=expected['customer_details']['first_name'],
last_name=expected['customer_details']['last_name'],
email=expected['customer_details']['email'],
phone=expected['customer_details']['phone'],
)
self.item_details = \
[request.ItemDetails(item_id=item['id'],
price=item['price'],
quantity=item['quantity'],
name=item['name'])
for item
in expected['item_details']]
def test_gopay(self):
"""
Verify GoPay payment method
"""
# 2: Create a sandbox gateway
gateway = veritrans.VTDirect(
SANDBOX_SERVER_KEY,
sandbox_mode=True)
# 3: Create a charge request
payment = payment_types.GoPay()
charge_req = request.ChargeRequest(
charge_type=payment,
transaction_details=self.trans_details,
customer_details=self.cust_details,
item_details=self.item_details)
# 4: Submit our request
resp = gateway.submit_charge_request(charge_req)
self.assertIsInstance(resp, response.GoPayChargeResponse)
self.assertEqual(status.PENDING, resp.status_code)
        self.assertEqual(self.trans_details.order_id, resp.order_id)


# File: app/improving_agent/models/any_type.py
class AnyType:
def __init__(self):
pass
@staticmethod
    def get_value(value):
        return value


# File: terrascript/ovh/__init__.py
import terrascript
class ovh(terrascript.Provider):
pass

# File: packaging_tutorial/example_pkg/hello_module.py
def hello_function():
print('HELLO!')

# File: code/planner/__init__.py
from .planner_abs import Planner
from .planner_doors import DoorPlanner, CellPlanner

# File: app/cogs/settings/__init__.py
from app.classes.bot import Bot
from . import settings_commands
def setup(bot: Bot):
settings_commands.setup(bot)

# File: pentagon/component/vpn/__init__.py
from pentagon.component import ComponentBase
import os
class Vpn(ComponentBase):
pass

# File: 01_basic-python/solution_condition.py
def check_value(x):
if x == 0:
print("Is zero", x)
elif x > 0:
print("Is positive", x)
else:
print("Is negative", x)
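A quick usage sketch of the branching above. The function is re-declared here (returning the label instead of printing it) so the snippet is self-contained and easy to assert against; `check_value_label` is an illustrative name, not part of the original file:

```python
def check_value_label(x):
    # Same if/elif/else branching as check_value above, but returning the label.
    if x == 0:
        return "Is zero"
    elif x > 0:
        return "Is positive"
    else:
        return "Is negative"

print(check_value_label(0))   # Is zero
print(check_value_label(7))   # Is positive
print(check_value_label(-2))  # Is negative
```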

# File: citizenscience/user/__init__.py
# -*- coding: utf-8 -*-
from models import User
from views import user

# File: src/metricas/__init__.py
from .wer import wer

# File: yys/RegisterKeyModule.py
# Activation-key generation module
class RegisterKeyUtil:
    @staticmethod
    def ValidateUserKey(userkey: str) -> bool:
        # TODO: validate an activation key
        pass

    @staticmethod
    def CreateUserKey(path: str) -> bool:
        # TODO: generate and persist an activation key
        pass
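The two stubs above are unimplemented (`# todo`). One way such a key scheme is commonly built is a random payload plus a truncated HMAC tag; the sketch below is purely illustrative — the secret, the key format, and the function names are all assumptions, not taken from the original module:

```python
import hashlib
import hmac
import secrets

# Hypothetical signing secret -- NOT part of the original module.
_SECRET = b"example-secret"

def create_user_key() -> str:
    """One possible CreateUserKey scheme: random hex payload + truncated HMAC tag."""
    payload = secrets.token_hex(8)
    tag = hmac.new(_SECRET, payload.encode(), hashlib.sha256).hexdigest()[:8]
    return payload + "-" + tag

def validate_user_key(userkey: str) -> bool:
    """Recompute the tag for the payload and compare in constant time."""
    payload, _, tag = userkey.partition("-")
    expected = hmac.new(_SECRET, payload.encode(), hashlib.sha256).hexdigest()[:8]
    return hmac.compare_digest(tag, expected)

key = create_user_key()
assert validate_user_key(key)
# Flipping one character of the tag must invalidate the key.
bad = key[:-1] + ("0" if key[-1] != "0" else "1")
assert not validate_user_key(bad)
```

`hmac.compare_digest` avoids timing side channels when comparing the tags, which matters if validation ever runs against attacker-supplied keys.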

# File: mcpi_functions.py
def coming_soon():
pass

# File: wrappers/__init__.py
from .normalized_actions import *

# File: annotate/admin.py
from django.contrib import admin
from .models import *
admin.site.register(Entry)
admin.site.register(Annotation)

# File: KFR/build/examples/Debug/audio_low_quality.py
#!/usr/bin/env python
import dspplot
dspplot.plot(r'audio_low_quality.wav', file='../svg/audio_low_quality.svg')

# File: ct/preprocess/__init__.py
from ct.preprocess.tokenize import Tokenizer
from ct.preprocess.dataset import preprocess
from ct.preprocess.wma import wma as wma

# File: products_app/Reset.py
from app import reset_products_file
reset_products_file()

# File: cybox/common/datetimewithprecision.py
# Copyright (c) 2014, The MITRE Corporation. All rights reserved.
# See LICENSE.txt for complete terms.
import cybox
import cybox.bindings.cybox_common as common_binding
import dateutil.parser
from datetime import datetime
DATE_PRECISION_VALUES = ("year", "month", "day")
TIME_PRECISION_VALUES = ("hour", "minute", "second")
DATETIME_PRECISION_VALUES = DATE_PRECISION_VALUES + TIME_PRECISION_VALUES
def parse_value(value):
if not value:
return None
elif isinstance(value, datetime):
return value
return dateutil.parser.parse(value)
def serialize_value(value):
if not value:
return None
return value.isoformat()
class DateTimeWithPrecision(cybox.Entity):
_binding = common_binding
_binding_class = common_binding.DateTimeWithPrecisionType
_namespace = 'http://cybox.mitre.org/common-2'
def __init__(self, value=None, precision='second'):
super(DateTimeWithPrecision, self).__init__()
self.value = value
self.precision = precision
@property
def value(self):
return self._value
@value.setter
def value(self, value):
self._value = parse_value(value)
@property
def precision(self):
return self._precision
@precision.setter
def precision(self, value):
if value not in DATETIME_PRECISION_VALUES:
raise ValueError("value must be one of [%s]" % ", ".join(x for x in DATETIME_PRECISION_VALUES))
self._precision = value
def to_obj(self, return_obj=None, ns_info=None):
self._collect_ns_info(ns_info)
obj = self._binding_class()
obj.valueOf_ = serialize_value(self.value)
obj.precision = self._precision
return obj
@classmethod
def from_obj(cls, obj):
if not obj:
return None
return_obj = cls()
return_obj.value = obj.valueOf_
return_obj.precision = obj.precision
return return_obj
def to_dict(self):
value = serialize_value(self.value)
if self.precision == 'second':
return value
dict_ = {}
dict_['precision'] = self.precision
dict_['value'] = value
return dict_
@classmethod
def from_dict(cls, dict_):
if not dict_:
return None
return_obj = cls()
if not isinstance(dict_, dict):
return_obj.value = dict_
else:
return_obj.precision = dict_.get('precision')
return_obj.value = dict_.get('value')
return return_obj
class DateWithPrecision(cybox.Entity):
_binding = common_binding
_binding_class = common_binding.DateWithPrecisionType
_namespace = 'http://cybox.mitre.org/common-2'
def __init__(self, value=None, precision='day'):
super(DateWithPrecision, self).__init__()
self.value = value
self.precision = precision
@property
def value(self):
return self._value
@value.setter
def value(self, value):
self._value = parse_value(value)
if isinstance(self._value, datetime):
self._value = self._value.date()
@property
def precision(self):
return self._precision
@precision.setter
def precision(self, value):
if value not in DATE_PRECISION_VALUES:
raise ValueError("value must be one of [%s]" % ", ".join(x for x in DATE_PRECISION_VALUES))
self._precision = value
def to_obj(self, return_obj=None, ns_info=None):
self._collect_ns_info(ns_info)
obj = self._binding_class()
obj.valueOf_ = serialize_value(self.value)
obj.precision = self._precision
return obj
@classmethod
def from_obj(cls, obj):
if not obj:
return None
return_obj = cls()
return_obj.value = obj.valueOf_
return_obj.precision = obj.precision
return return_obj
def to_dict(self):
value = serialize_value(self.value)
if self.precision == 'day':
return value
dict_ = {}
dict_['precision'] = self.precision
dict_['value'] = value
return dict_
@classmethod
def from_dict(cls, dict_):
if not dict_:
return None
return_obj = cls()
if not isinstance(dict_, dict):
return_obj.value = dict_
else:
return_obj.precision = dict_.get('precision')
return_obj.value = dict_.get('value')
return return_obj
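The `to_dict` logic above collapses to a bare ISO string only at the default precision and keeps a `{"precision": ..., "value": ...}` dict otherwise. A stdlib-only sketch of that behavior (using `datetime` directly instead of the cybox bindings; `datetime_to_dict` is an illustrative helper, not part of the cybox API):

```python
from datetime import datetime

PRECISIONS = ("year", "month", "day", "hour", "minute", "second")

def datetime_to_dict(value: datetime, precision: str = "second"):
    # Mirrors DateTimeWithPrecision.to_dict: full 'second' precision
    # serializes to a plain ISO string, anything coarser keeps a dict.
    if precision not in PRECISIONS:
        raise ValueError("value must be one of [%s]" % ", ".join(PRECISIONS))
    iso = value.isoformat()
    if precision == "second":
        return iso
    return {"precision": precision, "value": iso}

print(datetime_to_dict(datetime(2014, 1, 2, 3, 4, 5)))
# -> '2014-01-02T03:04:05'
print(datetime_to_dict(datetime(2014, 1, 2, 3, 4, 5), "day"))
# -> {'precision': 'day', 'value': '2014-01-02T03:04:05'}
```

The asymmetry matters for round-tripping: `from_dict` must accept both the plain string and the dict form, which is exactly why the class checks `isinstance(dict_, dict)` before unpacking.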

# File: cfdm/core/meta/__init__.py
from .docstringrewrite import DocstringRewriteMeta

# File: glue/dialogs/link_editor/qt/__init__.py
from .link_editor import *  # noqa
from .link_equation import * # noqa

# File: test/test_modbus_slave.py
import unittest
from mp_modbus_slave import modbus_tcp_server
from mp_modbus_frame import modbus_tcp_frame
class Test(unittest.TestCase):
def test_server_handle_message_1(self):
srv = modbus_tcp_server("", 0, context={"co":{"startAddr": 1000, "registers": bytearray([0xFF, 0x00, 0x00, 0x00]*5)}})
msg = modbus_tcp_frame(transaction_id=1, unit_id=2, func_code=1, register=1000, length=2, fr_type="request")
self.assertEqual(srv.handle_message(msg).data, bytearray([0x01]))
msg = modbus_tcp_frame(transaction_id=1, unit_id=2, func_code=1, register=1001, length=2, fr_type="request")
self.assertEqual(srv.handle_message(msg).data, bytearray([0x02]))
msg = modbus_tcp_frame(transaction_id=1, unit_id=2, func_code=1, register=1000, length=3, fr_type="request")
self.assertEqual(srv.handle_message(msg).data, bytearray([0x05]))
def test_server_handle_message_2(self):
srv = modbus_tcp_server("", 0, context={"di":{"startAddr": 1000, "registers": bytearray([0xFF, 0x00, 0x00, 0x00]*5)}})
msg = modbus_tcp_frame(transaction_id=1, unit_id=2, func_code=2, register=1000, length=2, fr_type="request")
self.assertEqual(srv.handle_message(msg).data, bytearray([0x01]))
msg = modbus_tcp_frame(transaction_id=1, unit_id=2, func_code=2, register=1001, length=2, fr_type="request")
self.assertEqual(srv.handle_message(msg).data, bytearray([0x02]))
msg = modbus_tcp_frame(transaction_id=1, unit_id=2, func_code=2, register=1000, length=3, fr_type="request")
self.assertEqual(srv.handle_message(msg).data, bytearray([0x05]))
def test_server_handle_message_3(self):
srv = modbus_tcp_server("", 0, context={"hr":{"startAddr": 1000, "registers": bytearray([0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0A, 0x0B, 0x0C, 0x0D, 0x0E, 0x0F])}})
msg = modbus_tcp_frame(transaction_id=1, unit_id=2, func_code=3, register=1000, length=2, fr_type="request")
self.assertEqual(srv.handle_message(msg).data, bytearray([0x00, 0x01, 0x02, 0x03]))
msg = modbus_tcp_frame(transaction_id=1, unit_id=2, func_code=3, register=1001, length=2, fr_type="request")
self.assertEqual(srv.handle_message(msg).data, bytearray([0x02, 0x03, 0x04, 0x05]))
msg = modbus_tcp_frame(transaction_id=1, unit_id=2, func_code=3, register=1000, length=3, fr_type="request")
self.assertEqual(srv.handle_message(msg).data, bytearray([0x00, 0x01, 0x02, 0x03, 0x04, 0x05]))
def test_server_handle_message_4(self):
srv = modbus_tcp_server("", 0, context={"ir":{"startAddr": 1000, "registers": bytearray([0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0A, 0x0B, 0x0C, 0x0D, 0x0E, 0x0F])}})
msg = modbus_tcp_frame(transaction_id=1, unit_id=2, func_code=4, register=1000, length=2, fr_type="request")
self.assertEqual(srv.handle_message(msg).data, bytearray([0x00, 0x01, 0x02, 0x03]))
msg = modbus_tcp_frame(transaction_id=1, unit_id=2, func_code=4, register=1001, length=2, fr_type="request")
self.assertEqual(srv.handle_message(msg).data, bytearray([0x02, 0x03, 0x04, 0x05]))
msg = modbus_tcp_frame(transaction_id=1, unit_id=2, func_code=4, register=1000, length=3, fr_type="request")
self.assertEqual(srv.handle_message(msg).data, bytearray([0x00, 0x01, 0x02, 0x03, 0x04, 0x05]))
def test_server_handle_message_5(self):
srv = modbus_tcp_server("", 0, context={"co":{"startAddr": 1000, "registers": bytearray([0xFF, 0x00]*3)}})
msg = modbus_tcp_frame(transaction_id=1, unit_id=2, func_code=5, register=1000, fr_type="request", data=bytearray([0xFF, 0x00]))
srv.handle_message(msg)
self.assertEqual(srv.context["co"]["registers"], bytearray([0xFF, 0x00, 0xFF, 0x00, 0xFF, 0x00]))
msg = modbus_tcp_frame(transaction_id=1, unit_id=2, func_code=5, register=1001, fr_type="request", data=bytearray([0x00, 0x00]))
srv.handle_message(msg)
self.assertEqual(srv.context["co"]["registers"], bytearray([0xFF, 0x00, 0x00, 0x00, 0xFF, 0x00]))
def test_server_handle_message_6(self):
srv = modbus_tcp_server("", 0, context={"hr":{"startAddr": 1000, "registers": bytearray([0x00, 0x00]*3)}})
msg = modbus_tcp_frame(transaction_id=1, unit_id=2, func_code=6, register=1000, fr_type="request", data=bytearray([0xFF, 0x00]))
srv.handle_message(msg)
self.assertEqual(srv.context["hr"]["registers"], bytearray([0xFF, 0x00, 0x00, 0x00, 0x00, 0x00]))
msg = modbus_tcp_frame(transaction_id=1, unit_id=2, func_code=6, register=1001, fr_type="request", data=bytearray([0xAB, 0xCD]))
srv.handle_message(msg)
self.assertEqual(srv.context["hr"]["registers"], bytearray([0xFF, 0x00, 0xAB, 0xCD, 0x00, 0x00]))
def test_server_handle_message_15(self):
srv = modbus_tcp_server("", 0, context={"co":{"startAddr": 1000, "registers": bytearray([0xFF, 0x00]*4)}})
msg = modbus_tcp_frame(transaction_id=1, unit_id=2, func_code=15, register=1000, fr_type="request", data=bytearray([0x0f]), length=4)
self.assertEqual(srv.handle_message(msg).get_frame(), bytearray([0x00, 0x01, 0x00, 0x00, 0x00, 0x06, 0x02, 0x0f, 0x03, 0xe8, 0x00, 0x04]))
self.assertEqual(srv.context["co"]["registers"], bytearray([0xFF, 0x00, 0xFF, 0x00, 0xFF, 0x00, 0xFF, 0x00]))
msg = modbus_tcp_frame(transaction_id=1, unit_id=2, func_code=15, register=1000, fr_type="request", data=bytearray([0x00]), length=2)
self.assertEqual(srv.handle_message(msg).get_frame(), bytearray([0x00, 0x01, 0x00, 0x00, 0x00, 0x06, 0x02, 0x0f, 0x03, 0xe8, 0x00, 0x02]))
self.assertEqual(srv.context["co"]["registers"], bytearray([0x00, 0x00, 0x00, 0x00, 0xFF, 0x00, 0xFF, 0x00]))
def test_server_handle_message_16(self):
srv = modbus_tcp_server("", 0, context={"hr":{"startAddr": 1000, "registers": bytearray([0x00, 0x00]*4)}})
msg = modbus_tcp_frame(transaction_id=1, unit_id=2, func_code=16, register=1000, fr_type="request", data=bytearray([0xAB, 0xCD, 0x12, 0x34]), length=2)
self.assertEqual(srv.handle_message(msg).get_frame(), bytearray([0x00, 0x01, 0x00, 0x00, 0x00, 0x06, 0x02, 0x10, 0x03, 0xe8, 0x00, 0x02]))
self.assertEqual(srv.context["hr"]["registers"], bytearray([0xAB, 0xCD, 0x12, 0x34, 0x00, 0x00, 0x00, 0x00]))
msg = modbus_tcp_frame(transaction_id=1, unit_id=2, func_code=16, register=1001, fr_type="request", data=bytearray([0xAB, 0xCD]), length=1)
self.assertEqual(srv.handle_message(msg).get_frame(), bytearray([0x00, 0x01, 0x00, 0x00, 0x00, 0x06, 0x02, 0x10, 0x03, 0xe9, 0x00, 0x01]))
        self.assertEqual(srv.context["hr"]["registers"], bytearray([0xAB, 0xCD, 0xAB, 0xCD, 0x00, 0x00, 0x00, 0x00]))
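The raw byte frames asserted in the tests above follow the standard Modbus/TCP layout: a 7-byte MBAP header followed by the PDU. Decoding one of the expected response frames with `struct` shows where each field sits (a stdlib-only sketch, independent of the `mp_modbus_frame` classes under test):

```python
import struct

# Expected write-multiple-coils response from the func_code=15 test above:
# transaction 1, protocol 0, length 6, unit 2, function 0x0F, register 1000, count 4.
frame = bytes([0x00, 0x01, 0x00, 0x00, 0x00, 0x06, 0x02, 0x0F, 0x03, 0xE8, 0x00, 0x04])

# MBAP header: transaction id, protocol id, remaining byte count, unit id (big-endian).
tid, proto, length, unit = struct.unpack(">HHHB", frame[:7])
# PDU: function code, start register, coil count.
func, register, count = struct.unpack(">BHH", frame[7:12])

print(tid, proto, length, unit)    # 1 0 6 2
print(hex(func), register, count)  # 0xf 1000 4
```

Note that the MBAP `length` field (6 here) counts the unit id plus the PDU bytes, not the whole frame, which is why the frames in the tests are 12 bytes long.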
# boa3/model/builtin/interop/storage/storagecontext/__init__.py (hal0x2328/neo3-boa, Apache-2.0)
from .storagecontexttype import StorageContextType
# ttt_pkg/__init__.py (Alkatat/Tic-Tac-Toe, MIT)
from ttt_pkg.ttt import *
# fcn/keras_fcn/__init__.py (NickleDave/fcn-syl-seg, BSD-3-Clause)
"""This subpackage is adapted from https://github.com/JihongJu/keras-fcn
under MIT License, https://github.com/JihongJu/keras-fcn/blob/master/LICENSE"""
from . import encoders, decoders, callbacks, blocks
# Lintcode/Ladder_29_F/64. Merge Sorted Array.py (ctc316/algorithm-python, MIT)
# Approach 1: copy B into A's tail, then sort, O((m+n) log(m+n)).
class Solution:
"""
@param: A: sorted integer array A which has m elements, but size of A is m+n
@param: m: An integer
@param: B: sorted integer array B which has n elements
@param: n: An integer
@return: nothing
"""
def mergeSortedArray(self, A, m, B, n):
for i in range(n):
A[m + i] = B[i]
A.sort()
# Approach 2: merge in place from the end, O(m + n), no extra sort.
# (Same class name as above; this definition replaces the first.)
class Solution:
"""
@param: A: sorted integer array A which has m elements, but size of A is m+n
@param: m: An integer
@param: B: sorted integer array B which has n elements
@param: n: An integer
@return: nothing
"""
def mergeSortedArray(self, A, m, B, n):
i = m - 1
j = n - 1
index = m + n - 1
while i >= 0 and j >= 0:
if A[i] > B[j]:
A[index] = A[i]
i -= 1
else:
A[index] = B[j]
j -= 1
index -= 1
while i >= 0:
A[index] = A[i]
i -= 1
index -= 1
while j >= 0:
A[index] = B[j]
j -= 1
            index -= 1
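The two-pointer version above fills `A` from the back so nothing is overwritten before it is read. A standalone sketch of the same idea (note the trailing `while i >= 0` copy in the class version is redundant, since leftover `A` elements are already in place):

```python
def merge_sorted(A, m, B, n):
    """Merge sorted B[:n] into A, which holds m sorted elements plus n free slots."""
    i, j, index = m - 1, n - 1, m + n - 1
    while i >= 0 and j >= 0:
        if A[i] > B[j]:
            A[index] = A[i]
            i -= 1
        else:
            A[index] = B[j]
            j -= 1
        index -= 1
    # Only leftover B elements need copying; leftover A elements are already placed.
    while j >= 0:
        A[index] = B[j]
        j -= 1
        index -= 1
    return A

A = [1, 3, 5, 0, 0, 0]
merge_sorted(A, 3, [2, 4, 6], 3)  # A becomes [1, 2, 3, 4, 5, 6]
```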
# avalanche/benchmarks/scenarios/new_classes/__init__.py (PRISHIta123/avalanche, MIT)
from .nc_scenario import *
# tests/runtime/linux/finish_test.py (gaocegege/treadmill, Apache-2.0)
"""Unit test for treadmill.runtime.linux._finish.
"""
import datetime
import os
import shutil
import tempfile
import tarfile
import time
import unittest
import kazoo
import mock
import yaml
import treadmill
import treadmill.rulefile
from treadmill import firewall
from treadmill import fs
from treadmill import iptables
from treadmill import utils
from treadmill.apptrace import events
from treadmill.runtime.linux import _finish as app_finish
class LinuxRuntimeFinishTest(unittest.TestCase):
"""Tests for treadmill.runtime.linux._finish"""
def setUp(self):
# Access protected module _base_service
# pylint: disable=W0212
self.root = tempfile.mkdtemp()
self.tm_env = mock.Mock(
root=self.root,
# nfs_dir=os.path.join(self.root, 'mnt', 'nfs'),
apps_dir=os.path.join(self.root, 'apps'),
archives_dir=os.path.join(self.root, 'archives'),
metrics_dir=os.path.join(self.root, 'metrics'),
svc_cgroup=mock.Mock(
spec_set=treadmill.services._base_service.ResourceService,
),
svc_localdisk=mock.Mock(
spec_set=treadmill.services._base_service.ResourceService,
),
svc_network=mock.Mock(
spec_set=treadmill.services._base_service.ResourceService,
),
rules=mock.Mock(
spec_set=treadmill.rulefile.RuleMgr,
),
watchdogs=mock.Mock(
spec_set=treadmill.watchdog.Watchdog,
),
)
def tearDown(self):
if self.root and os.path.isdir(self.root):
shutil.rmtree(self.root)
@mock.patch('kazoo.client.KazooClient', mock.Mock(set_spec=True))
@mock.patch('shutil.copy', mock.Mock())
@mock.patch('treadmill.appevents.post', mock.Mock())
@mock.patch('treadmill.utils.datetime_utcnow', mock.Mock(
return_value=datetime.datetime(2015, 1, 22, 14, 14, 36, 537918)))
@mock.patch('treadmill.appcfg.manifest.read', mock.Mock())
@mock.patch('treadmill.runtime.linux._finish._kill_apps_by_root',
mock.Mock())
@mock.patch('treadmill.runtime.linux._finish._send_container_archive',
mock.Mock())
@mock.patch('treadmill.sysinfo.hostname',
mock.Mock(return_value='xxx.xx.com'))
@mock.patch('treadmill.fs.archive_filesystem',
mock.Mock(return_value=True))
@mock.patch('treadmill.apphook.cleanup', mock.Mock())
@mock.patch('treadmill.iptables.rm_ip_set', mock.Mock())
@mock.patch('treadmill.rrdutils.flush_noexc', mock.Mock())
@mock.patch('treadmill.subproc.call', mock.Mock(return_value=0))
@mock.patch('treadmill.subproc.check_call', mock.Mock())
@mock.patch('treadmill.subproc.invoke', mock.Mock())
@mock.patch('treadmill.zkutils.get',
mock.Mock(return_value={
'server': 'nonexist',
'auth': 'nonexist',
}))
def test_finish(self):
"""Tests container finish procedure and freeing of the resources.
"""
# Access protected module _kill_apps_by_root
# pylint: disable=W0212
manifest = {
'app': 'proid.myapp',
'cell': 'test',
'cpu': '100%',
'disk': '100G',
'environment': 'dev',
'memory': '100M',
'name': 'proid.myapp#001',
'proid': 'foo',
'shared_network': False,
'task': '001',
'uniqueid': '0000000ID1234',
'archive': [
'/var/tmp/treadmill'
],
'endpoints': [
{
'port': 8000,
'name': 'http',
'real_port': 5000,
'proto': 'tcp',
},
{
'port': 54321,
'type': 'infra',
'name': 'ssh',
'real_port': 54321,
'proto': 'tcp',
}
],
'ephemeral_ports': {
'tcp': [45024],
'udp': [62422],
},
'services': [
{
'name': 'web_server',
'command': '/bin/false',
'restart': {
'limit': 3,
'interval': 60,
},
}
],
'vring': {
'some': 'settings'
}
}
treadmill.appcfg.manifest.read.return_value = manifest
app_unique_name = 'proid.myapp-001-0000000ID1234'
mock_cgroup_client = self.tm_env.svc_cgroup.make_client.return_value
mock_ld_client = self.tm_env.svc_localdisk.make_client.return_value
mock_nwrk_client = self.tm_env.svc_network.make_client.return_value
localdisk = {
'block_dev': '/dev/foo',
}
mock_ld_client.get.return_value = localdisk
network = {
'vip': '192.168.0.2',
'gateway': '192.168.254.254',
'veth': 'testveth.0',
'external_ip': '172.31.81.67',
}
mock_nwrk_client.get.return_value = network
app_dir = os.path.join(self.tm_env.apps_dir, app_unique_name)
# Create content in app root directory, verify that it is archived.
fs.mkdir_safe(os.path.join(app_dir, 'root', 'xxx'))
fs.mkdir_safe(os.path.join(app_dir, 'services'))
# Simulate daemontools finish script, marking the app is done.
with open(os.path.join(app_dir, 'exitinfo'), 'w') as f:
f.write(yaml.dump({'service': 'web_server', 'rc': 0, 'sig': 0}))
mock_zkclient = kazoo.client.KazooClient()
mock_watchdog = mock.Mock()
app_finish.finish(self.tm_env, mock_zkclient, app_dir, mock_watchdog)
treadmill.subproc.check_call.assert_has_calls(
[
mock.call(
[
's6_svc',
'-d',
app_dir,
]
),
mock.call(
[
's6_svwait',
'-d',
app_dir,
]
),
]
)
# All resource service clients are properly created
self.tm_env.svc_cgroup.make_client.assert_called_with(
os.path.join(app_dir, 'cgroups')
)
self.tm_env.svc_localdisk.make_client.assert_called_with(
os.path.join(app_dir, 'localdisk')
)
self.tm_env.svc_network.make_client.assert_called_with(
os.path.join(app_dir, 'network')
)
treadmill.runtime.linux._finish._kill_apps_by_root.assert_called_with(
os.path.join(app_dir, 'root')
)
# Verify that we tested the archiving for the app root volume
treadmill.fs.archive_filesystem.assert_called_with(
'/dev/foo',
os.path.join(app_dir, 'root'),
os.path.join(app_dir,
'001_xxx.xx.com_20150122_141436537918.tar'),
mock.ANY
)
# Verify that the file is uploaded by Uploader
app = utils.to_obj(manifest)
treadmill.runtime.linux._finish._send_container_archive\
.assert_called_with(
mock_zkclient,
app,
os.path.join(app_dir,
'001_xxx.xx.com_20150122_141436537918.tar.gz'),
)
# Verify that the app folder was deleted
self.assertFalse(os.path.exists(app_dir))
# Cleanup the block device
mock_ld_client.delete.assert_called_with(app_unique_name)
# Cleanup the cgroup resource
mock_cgroup_client.delete.assert_called_with(app_unique_name)
# Cleanup network resources
mock_nwrk_client.get.assert_called_with(app_unique_name)
self.tm_env.rules.unlink_rule.assert_has_calls(
[
mock.call(chain=iptables.PREROUTING_DNAT,
rule=firewall.DNATRule(
proto='tcp',
dst_ip='172.31.81.67', dst_port=5000,
new_ip='192.168.0.2', new_port=8000
),
owner=app_unique_name),
mock.call(chain=iptables.POSTROUTING_SNAT,
rule=firewall.SNATRule(
proto='tcp',
src_ip='192.168.0.2', src_port=8000,
new_ip='172.31.81.67', new_port=5000
),
owner=app_unique_name),
mock.call(chain=iptables.PREROUTING_DNAT,
rule=firewall.DNATRule(
proto='tcp',
dst_ip='172.31.81.67', dst_port=54321,
new_ip='192.168.0.2', new_port=54321
),
owner=app_unique_name),
mock.call(chain=iptables.POSTROUTING_SNAT,
rule=firewall.SNATRule(
proto='tcp',
src_ip='192.168.0.2', src_port=54321,
new_ip='172.31.81.67', new_port=54321
),
owner=app_unique_name),
mock.call(chain=iptables.PREROUTING_DNAT,
rule=firewall.DNATRule(
proto='tcp',
dst_ip='172.31.81.67', dst_port=45024,
new_ip='192.168.0.2', new_port=45024
),
owner=app_unique_name),
mock.call(chain=iptables.PREROUTING_DNAT,
rule=firewall.DNATRule(
proto='udp',
dst_ip='172.31.81.67', dst_port=62422,
new_ip='192.168.0.2', new_port=62422
),
owner=app_unique_name),
],
any_order=True
)
self.assertEqual(self.tm_env.rules.unlink_rule.call_count, 6)
treadmill.iptables.rm_ip_set.assert_has_calls(
[
mock.call(treadmill.iptables.SET_INFRA_SVC,
'192.168.0.2,tcp:54321'),
mock.call(treadmill.iptables.SET_INFRA_SVC,
'192.168.0.2,tcp:45024'),
mock.call(treadmill.iptables.SET_INFRA_SVC,
'192.168.0.2,udp:62422'),
mock.call(treadmill.iptables.SET_VRING_CONTAINERS,
'192.168.0.2'),
],
any_order=True
)
self.assertEqual(treadmill.iptables.rm_ip_set.call_count, 4)
mock_nwrk_client.delete.assert_called_with(app_unique_name)
treadmill.appevents.post.assert_called_with(
mock.ANY,
events.FinishedTraceEvent(
instanceid='proid.myapp#001',
rc=0,
signal=0,
payload={
'service': 'web_server',
'sig': 0,
'rc': 0
}
)
)
treadmill.rrdutils.flush_noexc.assert_called_with(
os.path.join(self.root, 'metrics', 'apps',
app_unique_name + '.rrd')
)
shutil.copy.assert_called_with(
os.path.join(self.root, 'metrics', 'apps',
app_unique_name + '.rrd'),
os.path.join(app_dir, 'metrics.rrd')
)
self.assertTrue(mock_watchdog.remove.called)
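The pattern used throughout these tests, patching a collaborator with `mock.Mock()`, exercising the code, then asserting on the recorded calls, looks like this in miniature (the `notify`/`transport` names are illustrative, not part of treadmill):

```python
from unittest import mock

def notify(transport):
    # The code under test talks to a collaborator...
    transport.send('done')

# ...which the test replaces with a Mock and interrogates afterwards.
transport = mock.Mock()
notify(transport)
transport.send.assert_called_with('done')
```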
@mock.patch('kazoo.client.KazooClient', mock.Mock(set_spec=True))
@mock.patch('shutil.copy', mock.Mock())
@mock.patch('treadmill.appevents.post', mock.Mock())
@mock.patch('treadmill.apphook.cleanup', mock.Mock())
@mock.patch('treadmill.runtime.linux._finish._kill_apps_by_root',
mock.Mock())
@mock.patch('treadmill.appcfg.manifest.read', mock.Mock())
@mock.patch('treadmill.sysinfo.hostname',
mock.Mock(return_value='myhostname'))
@mock.patch('treadmill.cgroups.delete', mock.Mock())
@mock.patch('treadmill.cgutils.reset_memory_limit_in_bytes',
mock.Mock(return_value=[]))
@mock.patch('treadmill.fs.archive_filesystem',
mock.Mock(return_value=True))
@mock.patch('treadmill.subproc.call', mock.Mock(return_value=0))
@mock.patch('treadmill.subproc.check_call', mock.Mock())
@mock.patch('treadmill.subproc.invoke', mock.Mock())
@mock.patch('treadmill.zkutils.get', mock.Mock(return_value=None))
@mock.patch('treadmill.rrdutils.flush_noexc', mock.Mock())
def test_finish_error(self):
"""Tests container finish procedure when app is improperly finished."""
manifest = {
'app': 'proid.myapp',
'cell': 'test',
'cpu': '100%',
'disk': '100G',
'environment': 'dev',
'memory': '100M',
'name': 'proid.myapp#001',
'proid': 'foo',
'shared_network': False,
'task': '001',
'uniqueid': '0000000001234',
'archive': [
'/var/tmp/treadmill'
],
'endpoints': [
{
'port': 8000,
'name': 'http',
'real_port': 5000,
'proto': 'tcp',
}
],
'services': [
{
'name': 'web_server',
'command': '/bin/false',
'restart': {
'limit': 3,
'interval': 60,
},
}
],
'ephemeral_ports': {
'tcp': [],
'udp': [],
},
'vring': {
'some': 'settings'
}
}
treadmill.appcfg.manifest.read.return_value = manifest
app_unique_name = 'proid.myapp-001-0000000001234'
mock_ld_client = self.tm_env.svc_localdisk.make_client.return_value
localdisk = {
'block_dev': '/dev/foo',
}
mock_ld_client.get.return_value = localdisk
mock_nwrk_client = self.tm_env.svc_network.make_client.return_value
network = {
'vip': '192.168.0.2',
'gateway': '192.168.254.254',
'veth': 'testveth.0',
'external_ip': '172.31.81.67',
}
mock_nwrk_client.get.return_value = network
app_dir = os.path.join(self.tm_env.apps_dir, app_unique_name)
# Create content in app root directory, verify that it is archived.
fs.mkdir_safe(os.path.join(app_dir, 'root', 'xxx'))
fs.mkdir_safe(os.path.join(app_dir, 'services'))
# Simulate daemontools finish script, marking the app is done.
with open(os.path.join(app_dir, 'exitinfo'), 'w') as f:
f.write(yaml.dump({'service': 'web_server', 'rc': 1, 'sig': 3}))
mock_zkclient = kazoo.client.KazooClient()
mock_watchdog = mock.Mock()
app_finish.finish(
self.tm_env, mock_zkclient, app_dir, mock_watchdog
)
treadmill.appevents.post.assert_called_with(
mock.ANY,
events.FinishedTraceEvent(
instanceid='proid.myapp#001',
rc=1,
signal=3,
payload={
'service': 'web_server',
'sig': 3,
'rc': 1,
}
)
)
treadmill.rrdutils.flush_noexc.assert_called_with(
os.path.join(self.root, 'metrics', 'apps',
app_unique_name + '.rrd')
)
shutil.copy.assert_called_with(
os.path.join(self.tm_env.metrics_dir, 'apps',
app_unique_name + '.rrd'),
os.path.join(app_dir, 'metrics.rrd')
)
self.assertTrue(mock_watchdog.remove.called)
@mock.patch('kazoo.client.KazooClient', mock.Mock(set_spec=True))
@mock.patch('shutil.copy', mock.Mock())
@mock.patch('treadmill.appevents.post', mock.Mock())
@mock.patch('treadmill.appcfg.manifest.read', mock.Mock())
@mock.patch('treadmill.apphook.cleanup', mock.Mock())
@mock.patch('treadmill.runtime.linux._finish._kill_apps_by_root',
mock.Mock())
@mock.patch('treadmill.sysinfo.hostname',
mock.Mock(return_value='hostname'))
@mock.patch('treadmill.fs.archive_filesystem',
mock.Mock(return_value=True))
@mock.patch('treadmill.rulefile.RuleMgr.unlink_rule', mock.Mock())
@mock.patch('treadmill.subproc.call', mock.Mock(return_value=0))
@mock.patch('treadmill.subproc.check_call', mock.Mock())
@mock.patch('treadmill.subproc.invoke', mock.Mock())
@mock.patch('treadmill.zkutils.get', mock.Mock(return_value=None))
@mock.patch('treadmill.rrdutils.flush_noexc', mock.Mock())
def test_finish_aborted(self):
"""Tests container finish procedure when node is aborted.
"""
manifest = {
'app': 'proid.myapp',
'cell': 'test',
'cpu': '100%',
'disk': '100G',
'environment': 'dev',
'host_ip': '172.31.81.67',
'memory': '100M',
'name': 'proid.myapp#001',
'proid': 'foo',
'shared_network': False,
'task': '001',
'uniqueid': '0000000ID1234',
'archive': [
'/var/tmp/treadmill'
],
'endpoints': [
{
'port': 8000,
'name': 'http',
'real_port': 5000,
'proto': 'tcp',
}
],
'services': [
{
'name': 'web_server',
'command': '/bin/false',
'restart': {
'limit': 3,
'interval': 60,
},
}
],
'ephemeral_ports': {
'tcp': [],
'udp': [],
},
'vring': {
'some': 'settings'
}
}
treadmill.appcfg.manifest.read.return_value = manifest
app_unique_name = 'proid.myapp-001-0000000ID1234'
mock_ld_client = self.tm_env.svc_localdisk.make_client.return_value
localdisk = {
'block_dev': '/dev/foo',
}
mock_ld_client.get.return_value = localdisk
mock_nwrk_client = self.tm_env.svc_network.make_client.return_value
network = {
'vip': '192.168.0.2',
'gateway': '192.168.254.254',
'veth': 'testveth.0',
'external_ip': '172.31.81.67',
}
mock_nwrk_client.get.return_value = network
app_dir = os.path.join(self.root, 'apps', app_unique_name)
# Create content in app root directory, verify that it is archived.
fs.mkdir_safe(os.path.join(app_dir, 'root', 'xxx'))
fs.mkdir_safe(os.path.join(app_dir, 'services'))
# Simulate daemontools finish script, marking the app is done.
with open(os.path.join(app_dir, 'aborted'), 'w') as aborted:
aborted.write('something went wrong')
mock_zkclient = kazoo.client.KazooClient()
mock_watchdog = mock.Mock()
app_finish.finish(
self.tm_env, mock_zkclient, app_dir, mock_watchdog
)
        treadmill.appevents.post.assert_called_with(
mock.ANY,
events.AbortedTraceEvent(
instanceid='proid.myapp#001',
why=None,
payload={
'why': 'something went wrong',
'node': 'hostname',
}
)
)
treadmill.rrdutils.flush_noexc.assert_called_with(
os.path.join(self.root, 'metrics', 'apps',
app_unique_name + '.rrd')
)
shutil.copy.assert_called_with(
os.path.join(self.root, 'metrics', 'apps',
app_unique_name + '.rrd'),
os.path.join(app_dir, 'metrics.rrd')
)
self.assertTrue(mock_watchdog.remove.called)
@mock.patch('treadmill.subproc.check_call', mock.Mock(return_value=0))
def test_finish_no_manifest(self):
"""Test app finish on directory with no app.json.
"""
app_finish.finish(self.tm_env, None, self.root, mock.Mock())
@mock.patch('kazoo.client.KazooClient', mock.Mock(set_spec=True))
@mock.patch('shutil.copy', mock.Mock())
@mock.patch('treadmill.appevents.post', mock.Mock())
@mock.patch('treadmill.apphook.cleanup', mock.Mock())
@mock.patch('treadmill.utils.datetime_utcnow', mock.Mock(
return_value=datetime.datetime(2015, 1, 22, 14, 14, 36, 537918)))
@mock.patch('treadmill.appcfg.manifest.read', mock.Mock())
@mock.patch('treadmill.runtime.linux._finish._kill_apps_by_root',
mock.Mock())
@mock.patch('treadmill.runtime.linux._finish._send_container_archive',
mock.Mock())
@mock.patch('treadmill.sysinfo.hostname',
mock.Mock(return_value='xxx.ms.com'))
@mock.patch('treadmill.fs.archive_filesystem',
mock.Mock(return_value=True))
@mock.patch('treadmill.iptables.rm_ip_set', mock.Mock())
@mock.patch('treadmill.rrdutils.flush_noexc', mock.Mock())
@mock.patch('treadmill.subproc.call', mock.Mock(return_value=0))
@mock.patch('treadmill.subproc.check_call', mock.Mock())
@mock.patch('treadmill.subproc.invoke', mock.Mock())
@mock.patch('treadmill.zkutils.get',
mock.Mock(return_value={
'server': 'nonexist',
'auth': 'nonexist',
}))
@mock.patch('treadmill.zkutils.put', mock.Mock())
def test_finish_no_resources(self):
"""Test app finish on directory when all resources are already freed.
"""
# Access protected module _kill_apps_by_root
# pylint: disable=W0212
manifest = {
'app': 'proid.myapp',
'cell': 'test',
'cpu': '100%',
'disk': '100G',
'environment': 'dev',
'memory': '100M',
'name': 'proid.myapp#001',
'proid': 'foo',
'shared_network': False,
'task': '001',
'uniqueid': '0000000ID1234',
'archive': [
'/var/tmp/treadmill'
],
'endpoints': [
{
'port': 8000,
'name': 'http',
'real_port': 5000
},
{
'port': 54321,
'type': 'infra',
'name': 'ssh',
'real_port': 54321
}
],
'ephemeral_ports': {
'tcp': [45024],
'udp': [62422],
},
'services': [
{
'command': '/bin/false',
'restart_count': 3,
'name': 'web_server'
}
],
'vring': {
'some': 'settings'
}
}
treadmill.appcfg.manifest.read.return_value = manifest
app_unique_name = 'proid.myapp-001-0000000ID1234'
mock_cgroup_client = self.tm_env.svc_cgroup.make_client.return_value
mock_ld_client = self.tm_env.svc_localdisk.make_client.return_value
mock_nwrk_client = self.tm_env.svc_network.make_client.return_value
# All resource managers return None
mock_cgroup_client.get.return_value = None
mock_ld_client.get.return_value = None
mock_nwrk_client.get.return_value = None
app_dir = os.path.join(self.tm_env.apps_dir, app_unique_name)
# Create content in app root directory, verify that it is archived.
fs.mkdir_safe(os.path.join(app_dir, 'root', 'xxx'))
fs.mkdir_safe(os.path.join(app_dir, 'services'))
# Simulate daemontools finish script, marking the app is done.
with open(os.path.join(app_dir, 'exitinfo'), 'w') as f:
f.write(yaml.dump({'service': 'web_server', 'rc': 0, 'sig': 0}))
mock_zkclient = kazoo.client.KazooClient()
mock_watchdog = mock.Mock()
treadmill.runtime.linux._finish.finish(
self.tm_env, mock_zkclient, app_dir, mock_watchdog
)
treadmill.subproc.check_call.assert_has_calls(
[
mock.call(
[
's6_svc',
'-d',
app_dir,
],
),
mock.call(
[
's6_svwait',
'-d',
app_dir,
],
),
]
)
self.tm_env.svc_cgroup.make_client.assert_called_with(
os.path.join(app_dir, 'cgroups')
)
self.tm_env.svc_localdisk.make_client.assert_called_with(
os.path.join(app_dir, 'localdisk')
)
self.tm_env.svc_network.make_client.assert_called_with(
os.path.join(app_dir, 'network')
)
treadmill.runtime.linux._finish._kill_apps_by_root.assert_called_with(
os.path.join(app_dir, 'root')
)
# Verify that the app folder was deleted
self.assertFalse(os.path.exists(app_dir))
# Cleanup the network resources
mock_nwrk_client.get.assert_called_with(app_unique_name)
# Cleanup the block device
mock_ld_client.delete.assert_called_with(app_unique_name)
# Cleanup the cgroup resource
mock_cgroup_client.delete.assert_called_with(app_unique_name)
treadmill.appevents.post.assert_called_with(
mock.ANY,
events.FinishedTraceEvent(
instanceid='proid.myapp#001',
rc=0,
signal=0,
payload={
'service': 'web_server',
'sig': 0,
'rc': 0
}
)
)
treadmill.rrdutils.flush_noexc.assert_called_with(
os.path.join(self.root, 'metrics', 'apps',
app_unique_name + '.rrd')
)
shutil.copy.assert_called_with(
os.path.join(self.root, 'metrics', 'apps',
app_unique_name + '.rrd'),
os.path.join(app_dir, 'metrics.rrd')
)
self.assertTrue(mock_watchdog.remove.called)
def test__copy_metrics(self):
"""Test that metrics are copied safely.
"""
# Access protected module _copy_metrics
# pylint: disable=W0212
with open(os.path.join(self.root, 'in.rrd'), 'w+'):
pass
app_finish._copy_metrics(os.path.join(self.root, 'in.rrd'),
self.root)
self.assertTrue(os.path.exists(os.path.join(self.root, 'metrics.rrd')))
os.unlink(os.path.join(self.root, 'metrics.rrd'))
app_finish._copy_metrics(os.path.join(self.root, 'nosuchthing.rrd'),
self.root)
self.assertFalse(
os.path.exists(os.path.join(self.root, 'metrics.rrd')))
def test__archive_logs(self):
"""Tests archiving local logs."""
# Access protected module _archive_logs
#
# pylint: disable=W0212
container_dir = os.path.join(self.root, 'xxx.yyy-1234-qwerty')
fs.mkdir_safe(container_dir)
archives_dir = os.path.join(self.root, 'archives')
fs.mkdir_safe(archives_dir)
sys_archive = os.path.join(archives_dir,
'xxx.yyy-1234-qwerty.sys.tar.gz')
app_archive = os.path.join(archives_dir,
'xxx.yyy-1234-qwerty.app.tar.gz')
app_finish._archive_logs(self.tm_env, container_dir)
self.assertTrue(os.path.exists(sys_archive))
self.assertTrue(os.path.exists(app_archive))
os.unlink(sys_archive)
os.unlink(app_archive)
def _touch_file(path):
"""Touch file, appending path to container_dir."""
fpath = os.path.join(container_dir, path)
fs.mkdir_safe(os.path.dirname(fpath))
open(fpath, 'w+').close()
_touch_file('sys/foo/log/current')
_touch_file('sys/bla/log/current')
_touch_file('sys/bla/log/xxx')
_touch_file('services/xxx/log/current')
_touch_file('services/xxx/log/whatever')
_touch_file('a.yml')
_touch_file('a.rrd')
_touch_file('log/current')
_touch_file('whatever')
app_finish._archive_logs(self.tm_env, container_dir)
tar = tarfile.open(sys_archive)
files = sorted([member.name for member in tar.getmembers()])
self.assertEqual(
files,
['a.rrd', 'a.yml', 'log/current',
'sys/bla/log/current', 'sys/foo/log/current']
)
tar.close()
tar = tarfile.open(app_archive)
files = sorted([member.name for member in tar.getmembers()])
self.assertEqual(
files,
['services/xxx/log/current']
)
tar.close()
def test__archive_cleanup(self):
"""Tests cleanup of local logs."""
# Access protected module _ARCHIVE_LIMIT, _cleanup_archive_dir
#
# pylint: disable=W0212
fs.mkdir_safe(self.tm_env.archives_dir)
# Cleanup does not care about file extensions, it will cleanup
# oldest file if threshold is exceeded.
app_finish._ARCHIVE_LIMIT = 20
file1 = os.path.join(self.tm_env.archives_dir, '1')
with open(file1, 'w+') as f:
f.write('x' * 10)
app_finish._cleanup_archive_dir(self.tm_env)
self.assertTrue(os.path.exists(file1))
os.utime(file1, (time.time() - 1, time.time() - 1))
file2 = os.path.join(self.tm_env.archives_dir, '2')
with open(file2, 'w+') as f:
f.write('x' * 10)
app_finish._cleanup_archive_dir(self.tm_env)
self.assertTrue(os.path.exists(file1))
with open(os.path.join(self.tm_env.archives_dir, '2'), 'w+') as f:
f.write('x' * 15)
app_finish._cleanup_archive_dir(self.tm_env)
self.assertFalse(os.path.exists(file1))
self.assertTrue(os.path.exists(file2))
if __name__ == '__main__':
unittest.main()
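The `test__archive_cleanup` case above pins down the cleanup policy: when the archive directory grows past a byte limit, the oldest files are removed first until the total fits. A minimal standalone sketch of that policy (the function name is illustrative, not the module's API):

```python
import os

def cleanup_archive_dir(path, limit_bytes):
    """Remove oldest files in `path` until the total size fits within `limit_bytes`."""
    entries = sorted(
        (os.path.join(path, name) for name in os.listdir(path)),
        key=os.path.getmtime,  # oldest first
    )
    total = sum(os.path.getsize(entry) for entry in entries)
    for entry in entries:
        if total <= limit_bytes:
            break
        total -= os.path.getsize(entry)
        os.unlink(entry)
```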
# tnread/__init__.py (Uiuran/text-network-notebooks, CC-BY-4.0)
from .main import *
from .vis import *
# sstcam_sandbox/d191128_pedestal_lab/extract_dc_tf.py (watsonjj/CHECLabPySB, BSD-3-Clause)
from sstcam_sandbox import get_checs, get_data
from CHECLabPy.core.io import TIOReader
from TargetCalibSB.tf import TFDC
from TargetCalibSB.pedestal import PedestalTargetCalib
from TargetCalibSB import get_cell_ids_for_waveform
from tqdm import tqdm
from glob import glob
import re
def process(tf_r0_paths, pedestal_path, tf_path):
pedestal = PedestalTargetCalib.from_tcal(pedestal_path)
# Parse amplitudes from filepath
amplitudes = []
readers = []
for path in tf_r0_paths:
regex_ped = re.search(r".+VPED_(\d+).tio", path)
amplitudes.append(int(regex_ped.group(1)))
readers.append(TIOReader(path))
# Instance TF class from first file
tf = TFDC(
readers[0].n_pixels,
readers[0].n_samples - 32,
readers[0].n_cells,
amplitudes
)
desc0 = "Generating TF"
it = zip(amplitudes, readers)
n_amp = len(amplitudes)
for amplitude, reader in tqdm(it, total=n_amp, desc=desc0):
amplitude_index = tf.get_input_amplitude_index(amplitude)
for iwf, wfs in enumerate(reader):
if wfs.missing_packets:
continue
# Skip to next file when enough hits are reached
if iwf % 1000 == 0:
if (tf.hits[..., amplitude_index] > 100).all():
break
tf.add_to_tf(
pedestal.subtract_pedestal(wfs, wfs.first_cell_id),
wfs.first_cell_id,
amplitude_index
)
tf.save(tf_path)
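`process` recovers the input amplitude from each filename via the `VPED_(\d+)` capture group. For a filename following that convention (the path below is illustrative), the capture is the integer VPED value:

```python
import re

path = "/data/dc_tf_tm_temp25/VPED_1095.tio"  # illustrative path
match = re.search(r".+VPED_(\d+).tio", path)
amplitude = int(match.group(1))  # 1095
```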
def main():
    # All extraction runs below are kept commented out; uncomment one to execute.
    pass
# tf_r0_paths = glob("/Users/Jason/Downloads/tempdata/d191128_pedestal_lab/dc_tf_tm_temp25/*.tio")
# pedestal_path = "/Users/Jason/Downloads/tempdata/d191128_pedestal_lab/dc_tf_tm_temp25/VPED_1095_ped.tcal"
# tf_path = get_data("d191128_pedestal_lab/dc_tf/before_25deg.h5")
# process(tf_r0_paths, pedestal_path, tf_path)
# tf_r0_paths = glob("/Users/Jason/Downloads/tempdata/d191128_pedestal_lab/dc_tf_tm_temp35/*.tio")
# pedestal_path = "/Users/Jason/Downloads/tempdata/d191128_pedestal_lab/dc_tf_tm_temp25/VPED_1095_ped.tcal"
# tf_path = get_data("d191128_pedestal_lab/dc_tf/before_35deg.h5")
# process(tf_r0_paths, pedestal_path, tf_path)
# tf_r0_paths = glob("/Users/Jason/Downloads/tempdata/d191128_pedestal_lab/dc_tf_tm_temp35_440pF_2/*.tio")
# pedestal_path = "/Users/Jason/Downloads/tempdata/d191128_pedestal_lab/dc_tf_tm_temp25_440pF/VPED_1095_ped.tcal"
# tf_path = get_data("d191128_pedestal_lab/dc_tf/after_35deg.h5")
# process(tf_r0_paths, pedestal_path, tf_path)
#
# tf_r0_paths = glob("/Users/Jason/Downloads/tempdata/d191128_pedestal_lab/dc_tf_tm_temp25_440pF/*.tio")
# pedestal_path = "/Users/Jason/Downloads/tempdata/d191128_pedestal_lab/dc_tf_tm_temp25_440pF/VPED_1095_ped.tcal"
# tf_path = get_data("d191128_pedestal_lab/dc_tf/after_25deg.h5")
# process(tf_r0_paths, pedestal_path, tf_path)
# tf_r0_paths = glob("/Users/Jason/Downloads/tempdata/d191128_pedestal_lab/dc_tf_tm_temp25_440pF_3/*.tio")
# pedestal_path = "/Users/Jason/Downloads/tempdata/d191128_pedestal_lab/dc_tf_tm_temp25_440pF_3/VPED_1095_ped.tcal"
# tf_path = get_data("d191128_pedestal_lab/dc_tf/after_25deg_3.h5")
# process(tf_r0_paths, pedestal_path, tf_path)
# tf_r0_paths = glob("/Users/Jason/Downloads/tempdata/d191128_pedestal_lab/dc_tf_tm_temp35_440pF_3/*.tio")
# pedestal_path = "/Users/Jason/Downloads/tempdata/d191128_pedestal_lab/dc_tf_tm_temp25_440pF_3/VPED_1095_ped.tcal"
# tf_path = get_data("d191128_pedestal_lab/dc_tf/after_35deg_3.h5")
# process(tf_r0_paths, pedestal_path, tf_path)
# tf_r0_paths = glob("/Users/Jason/Downloads/tempdata/d191128_pedestal_lab/dc_tf_tm_temp25_100pF/*.tio")
# pedestal_path = "/Users/Jason/Downloads/tempdata/d191128_pedestal_lab/dc_tf_tm_temp25_100pF/VPED_1095_ped.tcal"
# tf_path = get_data("d191128_pedestal_lab/dc_tf/100pF_25deg.h5")
# process(tf_r0_paths, pedestal_path, tf_path)
#
# tf_r0_paths = glob("/Users/Jason/Downloads/tempdata/d191128_pedestal_lab/dc_tf_tm_temp35_100pF/*.tio")
# pedestal_path = "/Users/Jason/Downloads/tempdata/d191128_pedestal_lab/dc_tf_tm_temp25_100pF/VPED_1095_ped.tcal"
# tf_path = get_data("d191128_pedestal_lab/dc_tf/100pF_35deg.h5")
# process(tf_r0_paths, pedestal_path, tf_path)
# tf_r0_paths = glob("/Users/Jason/Downloads/tempdata/d191128_pedestal_lab/dc_tf_tm_temp25_100_pF_1k/*.tio")
# pedestal_path = "/Users/Jason/Downloads/tempdata/d191128_pedestal_lab/dc_tf_tm_temp25_100pF/VPED_1095_ped.tcal"
# tf_path = get_data("d191128_pedestal_lab/dc_tf/100pF_1k_25deg.h5")
# process(tf_r0_paths, pedestal_path, tf_path)
tf_r0_paths = glob("/Users/Jason/Downloads/tempdata/d191128_pedestal_lab/dc_tf_tm_temp25_200_pF/*.tio")
pedestal_path = "/Users/Jason/Downloads/tempdata/d191128_pedestal_lab/dc_tf_tm_temp25_200_pF/VPED_1095_ped.tcal"
tf_path = get_data("d191128_pedestal_lab/dc_tf/200pF_25deg.h5")
process(tf_r0_paths, pedestal_path, tf_path)
tf_r0_paths = glob("/Users/Jason/Downloads/tempdata/d191128_pedestal_lab/dc_tf_tm_temp35_200_pF/*.tio")
pedestal_path = "/Users/Jason/Downloads/tempdata/d191128_pedestal_lab/dc_tf_tm_temp25_200_pF/VPED_1095_ped.tcal"
tf_path = get_data("d191128_pedestal_lab/dc_tf/200pF_35deg.h5")
process(tf_r0_paths, pedestal_path, tf_path)
if __name__ == '__main__':
    main()
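A hedged sketch of the early-stopping criterion in `process`: iteration over a file breaks once every bin at the current amplitude index has accumulated more than 100 hits in the TF's `hits` array. The shapes below are made up for illustration; the real array belongs to the `TFDC` instance.

```python
import numpy as np

# Hypothetical hits array: (pixel, cell, amplitude) bins, all starting at zero
n_pixels, n_cells, n_amplitudes = 2, 4, 3
hits = np.zeros((n_pixels, n_cells, n_amplitudes), dtype=int)
amplitude_index = 1

# Pretend every bin at this amplitude is well-sampled: the check passes
hits[..., amplitude_index] = 150
print((hits[..., amplitude_index] > 100).all())  # True -> break out of the file

# A single under-sampled bin keeps the loop reading more waveforms
hits[0, 0, amplitude_index] = 50
print((hits[..., amplitude_index] > 100).all())  # False -> keep iterating
```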