hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
88131f05d1ba6082cb9e8ed561414623005e583c | 10,275 | py | Python | pocean/tests/dsg/trajectory/test_trajectory_im.py | axiom-data-science/pocean-core | 11ad6b8fc43a4c29fa8aa404bf52cb7d39a9c8b1 | [
"MIT"
] | 13 | 2017-03-26T03:17:33.000Z | 2021-05-14T12:20:28.000Z | pocean/tests/dsg/trajectory/test_trajectory_im.py | axiom-data-science/pocean-core | 11ad6b8fc43a4c29fa8aa404bf52cb7d39a9c8b1 | [
"MIT"
] | 43 | 2017-02-21T14:45:33.000Z | 2022-03-09T18:04:10.000Z | pocean/tests/dsg/trajectory/test_trajectory_im.py | axiom-data-science/pocean-core | 11ad6b8fc43a4c29fa8aa404bf52cb7d39a9c8b1 | [
"MIT"
] | 10 | 2017-03-03T18:35:00.000Z | 2021-03-28T22:37:41.000Z | #!python
# coding=utf-8
import os
import tempfile
import unittest

from dateutil.parser import parse as dtparse
import numpy as np

from pocean.cf import CFDataset
from pocean.dsg import IncompleteMultidimensionalTrajectory
from pocean.tests.dsg.test_new import test_is_mine

import logging
from pocean import logger

logger.level = logging.INFO
logger.handlers = [logging.StreamHandler()]
class TestIncompleteMultidimensionalTrajectory(unittest.TestCase):

    def test_im_single_row(self):
        filepath = os.path.join(os.path.dirname(__file__), 'resources', 'im-singlerow.nc')

        with IncompleteMultidimensionalTrajectory(filepath) as s:
            df = s.to_dataframe(clean_rows=True)
            assert len(df) == 1

    def test_imt_multi(self):
        filepath = os.path.join(os.path.dirname(__file__), 'resources', 'im-multiple.nc')

        CFDataset.load(filepath).close()

        with IncompleteMultidimensionalTrajectory(filepath) as ncd:
            fid, tmpfile = tempfile.mkstemp(suffix='.nc')
            df = ncd.to_dataframe(clean_rows=False)

            with IncompleteMultidimensionalTrajectory.from_dataframe(df, tmpfile) as result_ncd:
                assert 'trajectory' in result_ncd.dimensions
            test_is_mine(IncompleteMultidimensionalTrajectory, tmpfile)  # Try to load it again

            with IncompleteMultidimensionalTrajectory.from_dataframe(df, tmpfile, unique_dims=True) as result_ncd:
                assert 'trajectory_dim' in result_ncd.dimensions
            test_is_mine(IncompleteMultidimensionalTrajectory, tmpfile)  # Try to load it again

            with IncompleteMultidimensionalTrajectory.from_dataframe(df, tmpfile, reduce_dims=True) as result_ncd:
                # Could not reduce dims since there was more than one trajectory
                assert 'trajectory' in result_ncd.dimensions
            test_is_mine(IncompleteMultidimensionalTrajectory, tmpfile)  # Try to load it again

            with IncompleteMultidimensionalTrajectory.from_dataframe(df, tmpfile, unlimited=True) as result_ncd:
                assert result_ncd.dimensions['obs'].isunlimited() is True
            test_is_mine(IncompleteMultidimensionalTrajectory, tmpfile)  # Try to load it again

            with IncompleteMultidimensionalTrajectory.from_dataframe(df, tmpfile, reduce_dims=True, unlimited=True) as result_ncd:
                # Could not reduce dims since there was more than one trajectory
                assert 'trajectory' in result_ncd.dimensions
                assert result_ncd.dimensions['obs'].isunlimited() is True
            test_is_mine(IncompleteMultidimensionalTrajectory, tmpfile)  # Try to load it again

            with IncompleteMultidimensionalTrajectory.from_dataframe(df, tmpfile, unique_dims=True, reduce_dims=True, unlimited=True) as result_ncd:
                # Could not reduce dims since there was more than one trajectory
                assert 'trajectory_dim' in result_ncd.dimensions
                assert result_ncd.dimensions['obs_dim'].isunlimited() is True
            test_is_mine(IncompleteMultidimensionalTrajectory, tmpfile)  # Try to load it again

            os.close(fid)
            os.remove(tmpfile)
    def test_imt_multi_not_string(self):
        filepath = os.path.join(os.path.dirname(__file__), 'resources', 'im-multiple-nonstring.nc')

        CFDataset.load(filepath).close()

        with IncompleteMultidimensionalTrajectory(filepath) as ncd:
            fid, tmpfile = tempfile.mkstemp(suffix='.nc')
            df = ncd.to_dataframe(clean_rows=False)

            with IncompleteMultidimensionalTrajectory.from_dataframe(df, tmpfile) as result_ncd:
                assert 'trajectory' in result_ncd.dimensions
            test_is_mine(IncompleteMultidimensionalTrajectory, tmpfile)  # Try to load it again

            with IncompleteMultidimensionalTrajectory.from_dataframe(df, tmpfile, reduce_dims=True) as result_ncd:
                # Could not reduce dims since there was more than one trajectory
                assert 'trajectory' not in result_ncd.dimensions
            test_is_mine(IncompleteMultidimensionalTrajectory, tmpfile)  # Try to load it again

            with IncompleteMultidimensionalTrajectory.from_dataframe(df, tmpfile, unlimited=True) as result_ncd:
                assert result_ncd.dimensions['obs'].isunlimited() is True
            test_is_mine(IncompleteMultidimensionalTrajectory, tmpfile)  # Try to load it again

            with IncompleteMultidimensionalTrajectory.from_dataframe(df, tmpfile, reduce_dims=True, unlimited=True) as result_ncd:
                # Could not reduce dims since there was more than one trajectory
                assert 'trajectory' not in result_ncd.dimensions
                assert result_ncd.dimensions['obs'].isunlimited() is True
            test_is_mine(IncompleteMultidimensionalTrajectory, tmpfile)  # Try to load it again

            os.close(fid)
            os.remove(tmpfile)

    def test_imt_single(self):
        filepath = os.path.join(os.path.dirname(__file__), 'resources', 'im-single.nc')

        CFDataset.load(filepath).close()

        with IncompleteMultidimensionalTrajectory(filepath) as ncd:
            fid, tmpfile = tempfile.mkstemp(suffix='.nc')
            df = ncd.to_dataframe(clean_rows=False)

            with IncompleteMultidimensionalTrajectory.from_dataframe(df, tmpfile) as result_ncd:
                assert 'trajectory' in result_ncd.dimensions
            test_is_mine(IncompleteMultidimensionalTrajectory, tmpfile)  # Try to load it again

            with IncompleteMultidimensionalTrajectory.from_dataframe(df, tmpfile, reduce_dims=True) as result_ncd:
                # Reduced trajectory dimension
                assert 'trajectory' not in result_ncd.dimensions
            test_is_mine(IncompleteMultidimensionalTrajectory, tmpfile)  # Try to load it again

            with IncompleteMultidimensionalTrajectory.from_dataframe(df, tmpfile, unlimited=True) as result_ncd:
                assert result_ncd.dimensions['obs'].isunlimited() is True
            test_is_mine(IncompleteMultidimensionalTrajectory, tmpfile)  # Try to load it again

            with IncompleteMultidimensionalTrajectory.from_dataframe(df, tmpfile, reduce_dims=True, unlimited=True) as result_ncd:
                # Reduced trajectory dimension
                assert 'trajectory' not in result_ncd.dimensions
                assert result_ncd.dimensions['obs'].isunlimited() is True
            test_is_mine(IncompleteMultidimensionalTrajectory, tmpfile)  # Try to load it again

            os.close(fid)
            os.remove(tmpfile)
    def test_imt_change_axis_names(self):
        new_axis = {
            't': 'time',
            'x': 'lon',
            'y': 'lat',
            'z': 'depth'
        }

        filepath = os.path.join(os.path.dirname(__file__), 'resources', 'im-multiple.nc')

        with IncompleteMultidimensionalTrajectory(filepath) as ncd:
            fid, tmpfile = tempfile.mkstemp(suffix='.nc')
            df = ncd.to_dataframe(clean_rows=False, axes=new_axis)

            with IncompleteMultidimensionalTrajectory.from_dataframe(df, tmpfile, axes=new_axis) as result_ncd:
                assert 'trajectory' in result_ncd.dimensions
                assert 'time' in result_ncd.variables
                assert 'lon' in result_ncd.variables
                assert 'lat' in result_ncd.variables
                assert 'depth' in result_ncd.variables
            test_is_mine(IncompleteMultidimensionalTrajectory, tmpfile)  # Try to load it again

            os.close(fid)
            os.remove(tmpfile)

    def test_imt_calculated_metadata_single(self):
        filepath = os.path.join(os.path.dirname(__file__), 'resources', 'im-single.nc')

        with IncompleteMultidimensionalTrajectory(filepath) as ncd:
            s = ncd.calculated_metadata()
            assert s.min_t.round('S') == dtparse('1990-01-01 00:00:00')
            assert s.max_t.round('S') == dtparse('1990-01-05 03:00:00')

            traj1 = s.trajectories["Trajectory1"]
            assert traj1.min_z == 0
            assert traj1.max_z == 99
            assert traj1.min_t.round('S') == dtparse('1990-01-01 00:00:00')
            assert traj1.max_t.round('S') == dtparse('1990-01-05 03:00:00')

            first_loc = traj1.geometry.coords[0]
            assert np.isclose(first_loc[0], -7.9336)
            assert np.isclose(first_loc[1], 42.00339)
    def test_imt_calculated_metadata_multi(self):
        filepath = os.path.join(os.path.dirname(__file__), 'resources', 'im-multiple.nc')

        with IncompleteMultidimensionalTrajectory(filepath) as ncd:
            m = ncd.calculated_metadata()
            assert m.min_t == dtparse('1990-01-01 00:00:00')
            assert m.max_t == dtparse('1990-01-02 12:00:00')
            assert len(m.trajectories) == 4

            traj0 = m.trajectories["Trajectory0"]
            assert traj0.min_z == 0
            assert traj0.max_z == 35
            assert traj0.min_t.round('S') == dtparse('1990-01-01 00:00:00')
            assert traj0.max_t.round('S') == dtparse('1990-01-02 11:00:00')
            first_loc = traj0.geometry.coords[0]
            assert np.isclose(first_loc[0], -35.07884)
            assert np.isclose(first_loc[1], 2.15286)

            traj3 = m.trajectories["Trajectory3"]
            assert traj3.min_z == 0
            assert traj3.max_z == 36
            assert traj3.min_t.round('S') == dtparse('1990-01-01 00:00:00')
            assert traj3.max_t.round('S') == dtparse('1990-01-02 12:00:00')
            first_loc = traj3.geometry.coords[0]
            assert np.isclose(first_loc[0], -73.3026)
            assert np.isclose(first_loc[1], 1.95761)

    def test_json_attributes_single(self):
        filepath = os.path.join(os.path.dirname(__file__), 'resources', 'im-single.nc')

        with IncompleteMultidimensionalTrajectory(filepath) as s:
            s.json_attributes()

    def test_json_attributes_multi(self):
        filepath = os.path.join(os.path.dirname(__file__), 'resources', 'im-multiple.nc')

        with IncompleteMultidimensionalTrajectory(filepath) as s:
            s.json_attributes()
| 48.928571 | 148 | 0.667543 | 1,186 | 10,275 | 5.610455 | 0.129005 | 0.051398 | 0.054253 | 0.119477 | 0.844755 | 0.824466 | 0.795912 | 0.787647 | 0.776075 | 0.735197 | 0 | 0.028174 | 0.243504 | 10,275 | 209 | 149 | 49.162679 | 0.827866 | 0.071727 | 0 | 0.529412 | 0 | 0 | 0.067487 | 0.002523 | 0 | 0 | 0 | 0 | 0.30719 | 1 | 0.058824 | false | 0 | 0.065359 | 0 | 0.130719 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7152150e416a0f03be9d77970bf2acc3fdb7e8db | 19,103 | py | Python | tests/sanic/test_graphqlview.py | tryagainconcepts/graphql-server | 3c8d7f07beda1e0c9d197008ab050f634d9d730c | [
"MIT"
] | 33 | 2017-03-23T04:31:19.000Z | 2020-07-27T06:53:59.000Z | tests/sanic/test_graphqlview.py | tryagainconcepts/graphql-server | 3c8d7f07beda1e0c9d197008ab050f634d9d730c | [
"MIT"
] | 39 | 2017-03-23T10:02:30.000Z | 2020-07-22T13:27:57.000Z | tests/sanic/test_graphqlview.py | tryagainconcepts/graphql-server | 3c8d7f07beda1e0c9d197008ab050f634d9d730c | [
"MIT"
] | 37 | 2017-06-29T16:24:55.000Z | 2020-07-24T07:11:33.000Z | import json
from urllib.parse import urlencode

import pytest

from .app import create_app, url_string
from .schema import AsyncSchema


def response_json(response):
    return json.loads(response.body.decode())


def json_dump_kwarg(**kwargs):
    return json.dumps(kwargs)


def json_dump_kwarg_list(**kwargs):
    return json.dumps([kwargs])
@pytest.mark.parametrize("app", [create_app()])
def test_allows_get_with_query_param(app):
    _, response = app.client.get(uri=url_string(query="{test}"))

    assert response.status == 200
    assert response_json(response) == {"data": {"test": "Hello World"}}


@pytest.mark.parametrize("app", [create_app()])
def test_allows_get_with_variable_values(app):
    _, response = app.client.get(
        uri=url_string(
            query="query helloWho($who: String){ test(who: $who) }",
            variables=json.dumps({"who": "Dolly"}),
        )
    )

    assert response.status == 200
    assert response_json(response) == {"data": {"test": "Hello Dolly"}}


@pytest.mark.parametrize("app", [create_app()])
def test_allows_get_with_operation_name(app):
    _, response = app.client.get(
        uri=url_string(
            query="""
            query helloYou { test(who: "You"), ...shared }
            query helloWorld { test(who: "World"), ...shared }
            query helloDolly { test(who: "Dolly"), ...shared }
            fragment shared on QueryRoot {
              shared: test(who: "Everyone")
            }
            """,
            operationName="helloWorld",
        )
    )

    assert response.status == 200
    assert response_json(response) == {
        "data": {"test": "Hello World", "shared": "Hello Everyone"}
    }
@pytest.mark.parametrize("app", [create_app()])
def test_reports_validation_errors(app):
    _, response = app.client.get(
        uri=url_string(query="{ test, unknownOne, unknownTwo }")
    )

    assert response.status == 400
    assert response_json(response) == {
        "errors": [
            {
                "message": "Cannot query field 'unknownOne' on type 'QueryRoot'.",
                "locations": [{"line": 1, "column": 9}],
            },
            {
                "message": "Cannot query field 'unknownTwo' on type 'QueryRoot'.",
                "locations": [{"line": 1, "column": 21}],
            },
        ]
    }


@pytest.mark.parametrize("app", [create_app()])
def test_errors_when_missing_operation_name(app):
    _, response = app.client.get(
        uri=url_string(
            query="""
            query TestQuery { test }
            mutation TestMutation { writeTest { test } }
            """
        )
    )

    assert response.status == 400
    assert response_json(response) == {
        "errors": [
            {
                "message": "Must provide operation name"
                " if query contains multiple operations.",
            }
        ]
    }


@pytest.mark.parametrize("app", [create_app()])
def test_errors_when_sending_a_mutation_via_get(app):
    _, response = app.client.get(
        uri=url_string(
            query="""
            mutation TestMutation { writeTest { test } }
            """
        )
    )

    assert response.status == 405
    assert response_json(response) == {
        "errors": [
            {
                "message": "Can only perform a mutation operation from a POST request.",
            }
        ]
    }


@pytest.mark.parametrize("app", [create_app()])
def test_errors_when_selecting_a_mutation_within_a_get(app):
    _, response = app.client.get(
        uri=url_string(
            query="""
            query TestQuery { test }
            mutation TestMutation { writeTest { test } }
            """,
            operationName="TestMutation",
        )
    )

    assert response.status == 405
    assert response_json(response) == {
        "errors": [
            {
                "message": "Can only perform a mutation operation from a POST request.",
            }
        ]
    }
@pytest.mark.parametrize("app", [create_app()])
def test_allows_mutation_to_exist_within_a_get(app):
    _, response = app.client.get(
        uri=url_string(
            query="""
            query TestQuery { test }
            mutation TestMutation { writeTest { test } }
            """,
            operationName="TestQuery",
        )
    )

    assert response.status == 200
    assert response_json(response) == {"data": {"test": "Hello World"}}


@pytest.mark.parametrize("app", [create_app()])
def test_allows_post_with_json_encoding(app):
    _, response = app.client.post(
        uri=url_string(),
        data=json_dump_kwarg(query="{test}"),
        headers={"content-type": "application/json"},
    )

    assert response.status == 200
    assert response_json(response) == {"data": {"test": "Hello World"}}


@pytest.mark.parametrize("app", [create_app()])
def test_allows_sending_a_mutation_via_post(app):
    _, response = app.client.post(
        uri=url_string(),
        data=json_dump_kwarg(query="mutation TestMutation { writeTest { test } }"),
        headers={"content-type": "application/json"},
    )

    assert response.status == 200
    assert response_json(response) == {"data": {"writeTest": {"test": "Hello World"}}}


@pytest.mark.parametrize("app", [create_app()])
def test_allows_post_with_url_encoding(app):
    # An example of how sanic sends data using URL encoding
    # can be found in their repo:
    # https://github.com/huge-success/sanic/blob/master/tests/test_requests.py#L927
    payload = "query={test}"
    _, response = app.client.post(
        uri=url_string(),
        data=payload,
        headers={"content-type": "application/x-www-form-urlencoded"},
    )

    assert response.status == 200
    assert response_json(response) == {"data": {"test": "Hello World"}}
@pytest.mark.parametrize("app", [create_app()])
def test_supports_post_json_query_with_string_variables(app):
    _, response = app.client.post(
        uri=url_string(),
        data=json_dump_kwarg(
            query="query helloWho($who: String){ test(who: $who) }",
            variables=json.dumps({"who": "Dolly"}),
        ),
        headers={"content-type": "application/json"},
    )

    assert response.status == 200
    assert response_json(response) == {"data": {"test": "Hello Dolly"}}


@pytest.mark.parametrize("app", [create_app()])
def test_supports_post_json_query_with_json_variables(app):
    _, response = app.client.post(
        uri=url_string(),
        data=json_dump_kwarg(
            query="query helloWho($who: String){ test(who: $who) }",
            variables={"who": "Dolly"},
        ),
        headers={"content-type": "application/json"},
    )

    assert response.status == 200
    assert response_json(response) == {"data": {"test": "Hello Dolly"}}


@pytest.mark.parametrize("app", [create_app()])
def test_supports_post_url_encoded_query_with_string_variables(app):
    _, response = app.client.post(
        uri=url_string(),
        data=urlencode(
            dict(
                query="query helloWho($who: String){ test(who: $who) }",
                variables=json.dumps({"who": "Dolly"}),
            )
        ),
        headers={"content-type": "application/x-www-form-urlencoded"},
    )

    assert response.status == 200
    assert response_json(response) == {"data": {"test": "Hello Dolly"}}


@pytest.mark.parametrize("app", [create_app()])
def test_supports_post_json_query_with_get_variable_values(app):
    _, response = app.client.post(
        uri=url_string(variables=json.dumps({"who": "Dolly"})),
        data=json_dump_kwarg(
            query="query helloWho($who: String){ test(who: $who) }",
        ),
        headers={"content-type": "application/json"},
    )

    assert response.status == 200
    assert response_json(response) == {"data": {"test": "Hello Dolly"}}


@pytest.mark.parametrize("app", [create_app()])
def test_post_url_encoded_query_with_get_variable_values(app):
    _, response = app.client.post(
        uri=url_string(variables=json.dumps({"who": "Dolly"})),
        data=urlencode(
            dict(
                query="query helloWho($who: String){ test(who: $who) }",
            )
        ),
        headers={"content-type": "application/x-www-form-urlencoded"},
    )

    assert response.status == 200
    assert response_json(response) == {"data": {"test": "Hello Dolly"}}


@pytest.mark.parametrize("app", [create_app()])
def test_supports_post_raw_text_query_with_get_variable_values(app):
    _, response = app.client.post(
        uri=url_string(variables=json.dumps({"who": "Dolly"})),
        data="query helloWho($who: String){ test(who: $who) }",
        headers={"content-type": "application/graphql"},
    )

    assert response.status == 200
    assert response_json(response) == {"data": {"test": "Hello Dolly"}}
@pytest.mark.parametrize("app", [create_app()])
def test_allows_post_with_operation_name(app):
    _, response = app.client.post(
        uri=url_string(),
        data=json_dump_kwarg(
            query="""
            query helloYou { test(who: "You"), ...shared }
            query helloWorld { test(who: "World"), ...shared }
            query helloDolly { test(who: "Dolly"), ...shared }
            fragment shared on QueryRoot {
              shared: test(who: "Everyone")
            }
            """,
            operationName="helloWorld",
        ),
        headers={"content-type": "application/json"},
    )

    assert response.status == 200
    assert response_json(response) == {
        "data": {"test": "Hello World", "shared": "Hello Everyone"}
    }


@pytest.mark.parametrize("app", [create_app()])
def test_allows_post_with_get_operation_name(app):
    _, response = app.client.post(
        uri=url_string(operationName="helloWorld"),
        data="""
        query helloYou { test(who: "You"), ...shared }
        query helloWorld { test(who: "World"), ...shared }
        query helloDolly { test(who: "Dolly"), ...shared }
        fragment shared on QueryRoot {
          shared: test(who: "Everyone")
        }
        """,
        headers={"content-type": "application/graphql"},
    )

    assert response.status == 200
    assert response_json(response) == {
        "data": {"test": "Hello World", "shared": "Hello Everyone"}
    }


@pytest.mark.parametrize("app", [create_app(pretty=True)])
def test_supports_pretty_printing(app):
    _, response = app.client.get(uri=url_string(query="{test}"))

    assert response.body.decode() == (
        "{\n" '  "data": {\n' '    "test": "Hello World"\n' "  }\n" "}"
    )


@pytest.mark.parametrize("app", [create_app(pretty=False)])
def test_not_pretty_by_default(app):
    _, response = app.client.get(url_string(query="{test}"))

    assert response.body.decode() == '{"data":{"test":"Hello World"}}'


@pytest.mark.parametrize("app", [create_app()])
def test_supports_pretty_printing_by_request(app):
    _, response = app.client.get(uri=url_string(query="{test}", pretty="1"))

    assert response.body.decode() == (
        "{\n" '  "data": {\n' '    "test": "Hello World"\n' "  }\n" "}"
    )
@pytest.mark.parametrize("app", [create_app()])
def test_handles_field_errors_caught_by_graphql(app):
    _, response = app.client.get(uri=url_string(query="{thrower}"))

    assert response.status == 200
    assert response_json(response) == {
        "data": None,
        "errors": [
            {
                "locations": [{"column": 2, "line": 1}],
                "message": "Throws!",
                "path": ["thrower"],
            }
        ],
    }


@pytest.mark.parametrize("app", [create_app()])
def test_handles_syntax_errors_caught_by_graphql(app):
    _, response = app.client.get(uri=url_string(query="syntaxerror"))

    assert response.status == 400
    assert response_json(response) == {
        "errors": [
            {
                "locations": [{"column": 1, "line": 1}],
                "message": "Syntax Error: Unexpected Name 'syntaxerror'.",
            }
        ]
    }


@pytest.mark.parametrize("app", [create_app()])
def test_handles_errors_caused_by_a_lack_of_query(app):
    _, response = app.client.get(uri=url_string())

    assert response.status == 400
    assert response_json(response) == {
        "errors": [{"message": "Must provide query string."}]
    }


@pytest.mark.parametrize("app", [create_app()])
def test_handles_batch_correctly_if_is_disabled(app):
    _, response = app.client.post(
        uri=url_string(), data="[]", headers={"content-type": "application/json"}
    )

    assert response.status == 400
    assert response_json(response) == {
        "errors": [
            {
                "message": "Batch GraphQL requests are not enabled.",
            }
        ]
    }


@pytest.mark.parametrize("app", [create_app()])
def test_handles_incomplete_json_bodies(app):
    _, response = app.client.post(
        uri=url_string(), data='{"query":', headers={"content-type": "application/json"}
    )

    assert response.status == 400
    assert response_json(response) == {
        "errors": [{"message": "POST body sent invalid JSON."}]
    }


@pytest.mark.parametrize("app", [create_app()])
def test_handles_plain_post_text(app):
    _, response = app.client.post(
        uri=url_string(variables=json.dumps({"who": "Dolly"})),
        data="query helloWho($who: String){ test(who: $who) }",
        headers={"content-type": "text/plain"},
    )

    assert response.status == 400
    assert response_json(response) == {
        "errors": [{"message": "Must provide query string."}]
    }
@pytest.mark.parametrize("app", [create_app()])
def test_handles_poorly_formed_variables(app):
    _, response = app.client.get(
        uri=url_string(
            query="query helloWho($who: String){ test(who: $who) }", variables="who:You"
        )
    )

    assert response.status == 400
    assert response_json(response) == {
        "errors": [{"message": "Variables are invalid JSON."}]
    }


@pytest.mark.parametrize("app", [create_app()])
def test_handles_unsupported_http_methods(app):
    _, response = app.client.put(uri=url_string(query="{test}"))

    assert response.status == 405
    assert response.headers["Allow"] in ["GET, POST", "HEAD, GET, POST, OPTIONS"]
    assert response_json(response) == {
        "errors": [
            {
                "message": "GraphQL only supports GET and POST requests.",
            }
        ]
    }


@pytest.mark.parametrize("app", [create_app()])
def test_passes_request_into_request_context(app):
    _, response = app.client.get(uri=url_string(query="{request}", q="testing"))

    assert response.status == 200
    assert response_json(response) == {"data": {"request": "testing"}}


@pytest.mark.parametrize("app", [create_app(context={"session": "CUSTOM CONTEXT"})])
def test_passes_custom_context_into_context(app):
    _, response = app.client.get(uri=url_string(query="{context { session request }}"))

    assert response.status_code == 200
    res = response_json(response)
    assert "data" in res
    assert "session" in res["data"]["context"]
    assert "request" in res["data"]["context"]
    assert "CUSTOM CONTEXT" in res["data"]["context"]["session"]
    assert "Request" in res["data"]["context"]["request"]


@pytest.mark.parametrize("app", [create_app(context="CUSTOM CONTEXT")])
def test_context_remapped_if_not_mapping(app):
    _, response = app.client.get(uri=url_string(query="{context { session request }}"))

    assert response.status_code == 200
    res = response_json(response)
    assert "data" in res
    assert "session" in res["data"]["context"]
    assert "request" in res["data"]["context"]
    assert "CUSTOM CONTEXT" not in res["data"]["context"]["request"]
    assert "Request" in res["data"]["context"]["request"]
@pytest.mark.parametrize("app", [create_app()])
def test_post_multipart_data(app):
    query = "mutation TestMutation { writeTest { test } }"

    data = (
        "------sanicgraphql\r\n"
        + 'Content-Disposition: form-data; name="query"\r\n'
        + "\r\n"
        + query
        + "\r\n"
        + "------sanicgraphql--\r\n"
        + "Content-Type: text/plain; charset=utf-8\r\n"
        + 'Content-Disposition: form-data; name="file"; filename="text1.txt"; filename*=utf-8\'\'text1.txt\r\n'
        + "\r\n"
        + "\r\n"
        + "------sanicgraphql--\r\n"
    )

    _, response = app.client.post(
        uri=url_string(),
        data=data,
        headers={"content-type": "multipart/form-data; boundary=----sanicgraphql"},
    )

    assert response.status == 200
    assert response_json(response) == {"data": {"writeTest": {"test": "Hello World"}}}


@pytest.mark.parametrize("app", [create_app(batch=True)])
def test_batch_allows_post_with_json_encoding(app):
    _, response = app.client.post(
        uri=url_string(),
        data=json_dump_kwarg_list(id=1, query="{test}"),
        headers={"content-type": "application/json"},
    )

    assert response.status == 200
    assert response_json(response) == [{"data": {"test": "Hello World"}}]


@pytest.mark.parametrize("app", [create_app(batch=True)])
def test_batch_supports_post_json_query_with_json_variables(app):
    _, response = app.client.post(
        uri=url_string(),
        data=json_dump_kwarg_list(
            id=1,
            query="query helloWho($who: String){ test(who: $who) }",
            variables={"who": "Dolly"},
        ),
        headers={"content-type": "application/json"},
    )

    assert response.status == 200
    assert response_json(response) == [{"data": {"test": "Hello Dolly"}}]


@pytest.mark.parametrize("app", [create_app(batch=True)])
def test_batch_allows_post_with_operation_name(app):
    _, response = app.client.post(
        uri=url_string(),
        data=json_dump_kwarg_list(
            id=1,
            query="""
            query helloYou { test(who: "You"), ...shared }
            query helloWorld { test(who: "World"), ...shared }
            query helloDolly { test(who: "Dolly"), ...shared }
            fragment shared on QueryRoot {
              shared: test(who: "Everyone")
            }
            """,
            operationName="helloWorld",
        ),
        headers={"content-type": "application/json"},
    )

    assert response.status == 200
    assert response_json(response) == [
        {"data": {"test": "Hello World", "shared": "Hello Everyone"}}
    ]
@pytest.mark.parametrize("app", [create_app(schema=AsyncSchema, enable_async=True)])
def test_async_schema(app):
    query = "{a,b,c}"
    _, response = app.client.get(uri=url_string(query=query))

    assert response.status == 200
    assert response_json(response) == {"data": {"a": "hey", "b": "hey2", "c": "hey3"}}


@pytest.mark.parametrize("app", [create_app()])
def test_preflight_request(app):
    _, response = app.client.options(
        uri=url_string(), headers={"Access-Control-Request-Method": "POST"}
    )

    assert response.status == 200


@pytest.mark.parametrize("app", [create_app()])
def test_preflight_incorrect_request(app):
    _, response = app.client.options(
        uri=url_string(), headers={"Access-Control-Request-Method": "OPTIONS"}
    )

    assert response.status == 400
| 31.214052 | 111 | 0.605873 | 2,133 | 19,103 | 5.235818 | 0.101735 | 0.092765 | 0.075215 | 0.08596 | 0.852256 | 0.827364 | 0.822618 | 0.80086 | 0.780086 | 0.725466 | 0 | 0.00905 | 0.230697 | 19,103 | 611 | 112 | 31.265139 | 0.750885 | 0.008376 | 0 | 0.582645 | 0 | 0.002066 | 0.266501 | 0.015577 | 0 | 0 | 0 | 0 | 0.173554 | 1 | 0.088843 | false | 0.004132 | 0.010331 | 0.006198 | 0.105372 | 0.004132 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |

# kidsdata/kiss_data.py (abeelen/kidsdata, BSD-3-Clause)
from .kiss_rawdata import KissRawData
from .kiss_continuum import KissContinuum
from .kiss_spectroscopy import KissSpectroscopy
# pylint: disable=no-member
class KissData(KissSpectroscopy, KissContinuum):
pass
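The empty-bodied class above composes two mixins. A hedged sketch with stand-in classes (the kidsdata package may not be importable here) showing how Python's method resolution order decides which base wins when both define the same attribute:

```python
# Stand-in classes, hypothetical names: they play the roles of
# KissSpectroscopy and KissContinuum in the composition above.
class Spectroscopy:
    def describe(self):
        return "spectroscopy"

class Continuum:
    def describe(self):
        return "continuum"

class Data(Spectroscopy, Continuum):
    pass

# Lookup order follows the MRO: Data, Spectroscopy, Continuum, object,
# so the first-listed base shadows the second.
assert Data().describe() == "spectroscopy"
assert [c.__name__ for c in Data.__mro__] == [
    "Data", "Spectroscopy", "Continuum", "object"
]
```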

# matchzoo/data_generator/__init__.py (freedombenLiu/MatchZoo, Apache-2.0)
from .data_generator import DataGenerator
from .pair_data_generator import PairDataGenerator
from .dynamic_data_generator import DynamicDataGenerator
from .dpool_data_generator import DPoolDataGenerator
from .dpool_data_generator import DPoolPairDataGenerator
from .histogram_data_generator import HistogramDataGenerator
from .histogram_data_generator import HistogramPairDataGenerator

# ckanext/harvest/harvesters/__init__.py (CSCfi/metax-ckanext-harvest, PostgreSQL License)
from ckanext.harvest.harvesters.ckanharvester import CKANHarvester
from ckanext.harvest.harvesters.base import HarvesterBase

# UPD_1.1/buttons.py (Chen-Py/Fast_Key_Exchange, Apache-2.0)
from kivy.uix.button import Button
from kivy.lang import Builder
Builder.load_file('buttons.kv')
class RoundedButton(Button):
pass

# lbry/wallet/checkpoints.py (dbigge/lbry-sdk, MIT)
HASHES = {
0: 'bf3ff54138625c56737509f080e7e7f3c55972f0f80e684f8e25d2ad83bedbe2',
1000: '4ec1f9aebc8f7f75d5d05430d1512e38598188d56a8b51510c9e47656c4ffad9',
2000: '5d5965e43d187b6f8b35b4be51099d9ec6f7b0446dac778f30040b8371b621a2',
3000: 'd429f69a9dd890f7d9827232a348fe5120371ed402baf53c383e860081f6706e',
4000: 'd18fef650f23032e4e21299b8b71127f53ff6d2114c3eac4592a71f550dc5071',
5000: '12c1dd1b9cda29eb4448e619a4099c7bb6612159e1cd9765fbbabeee50f1daff',
6000: 'ebda54cab7979bc60f46cab8d36b6fcea31a44c38d6e59bdcb582054b6885cf8',
7000: 'd2be08791053336ae9bd322ee5b97e9920764858f173a9e0e8fb54ee5d8bad0e',
8000: '94583639d9e2ce5eac78c08ef662190830263ea13468453b0a8492c3aca562fe',
9000: 'a6ebb24a3ec9bb3fbdb9d9525729513dd705fe85093c33a07bbfe34ba1f1eae0',
10000: 'a5d32c6cd43097f3725e36dd55d08ec04cbe62e40d1fb3b798167cc9c1fa1ce4',
11000: 'b5cf8df354bc26e8b0315e8038564a9a7fc1faa6ad75e67819b7eafbcfdcb145',
12000: 'ae7b0e5106299e43b78f337fe15e85c27f88d5bcf618a21c03ce43c094dde909',
13000: 'b57dfe631571b654e11fb4295aecfc11f612664d22d64a9690f64c0b3ec128bf',
14000: '76e55cc1c38d9a6a349c9a42ae0b748d0d8b27ebb30539be8bb92e71e7abca17',
15000: '5d33cbc872acb824a89ef601aa3c918e7f4dbb8d5be509a400fa41b03f7fe103',
16000: '25760f6a6bfb58104d02e60efa62ba5ae141bbac1f65d944a667ad66fbef15a9',
17000: 'ad37d98bb272aee3a950679cdd4cc203b1c129bb3a03fe74223ff858badfcc50',
18000: '9eac34fc453b97b1c62228740efcae4ef910b0bea26c322e17e3474b81713b0d',
19000: '06783429c8f50cc6495da29d38c86c3873800cf1eceffedfffe2a73f116a39d3',
20000: '0237c620f7f68d9ae9852d00551c98a938352d2e3d9336c536f1cbf5246fade6',
21000: 'c4dbb4eea7195a2cf73de2d49b32447cec9c063051acbd300b26399c9242b0ed',
22000: 'd18fea80b659d9690830c2f37c7c9d53f79fb919debb27cc2131044df26d7ff5',
23000: '0371254f4698d3c46596c1fd861e2e1e75053beb69c3c4d8ee6702f0992f0026',
24000: 'dbc97c3986bb9bb6230321d1fc8b7ca13369033f57b08ac01bb775f7241861fe',
25000: 'cfb9baa125e91d02c060940b5b51cce52b495a0b0bd081cd869a7ca45a0e0585',
26000: '6e07fecd3d0f72b8cd50a3cdc1e9f624beb3dac9f6356308a1ec295f47d68f3b',
27000: '8ad248da3aac7c1db958d0ce8a2d50fde461bbb96a628efa215116d3b74b74e5',
28000: '4e8231ede23b397c49f1a97020ab5fe45465936cb6c8cf1c39ade1b38a80547f',
29000: '1453b7a4be4076e2c77b94bc35f639c571dc05b225a5f8018a9a60337fb3ef66',
30000: '43d779c83032d75e8df09aeb6ab103ad41618777565fabc2e56bf31c3a20285a',
31000: 'ea1426f57f75ee781a6e569772944119f5e65c8b110b31a32b61d951c3a1900f',
32000: '43d1a3941b2c3033f78040830d39e657f74958d71b3cd0bc728550469cdb5fde',
33000: '3b0c0eac231476efed2b9bb9dff206a6438a14946c3b4238bf0af6d03f9a350f',
34000: 'a8136e9d6f3f5048ef8f9f56628009f5a87bb28664ba8028ecae0797e5328309',
35000: 'd8136e638995aaea03c6b50f1cc0f1891bd107290f12a6ee57e3cae36b7e3374',
36000: '06a7692cf5ce9bb71461e5427e6f55bce2904f9d3af4c28e7343fecbf0e336da',
37000: '12a1a49c237497295150f99c76d79d8affe9fd720e0cbffa62be80e2bda7f832',
38000: 'fcf3e526931b7f7cead47652f12cdf76641032bc545e22aa0dc83bef46a085fa',
39000: 'ffc34a198ab1cc13819c0c90b26a8455627ff12868b49ac1e6b5e1a086ed011f',
40000: 'ac71a9895999531350cf77a701035b0b59720a39994c990354b9cb1f6ea8eb49',
41000: '4fa753f5f2de41ad69ab32c6802eeb352f070d8684c3bd154eeb8a3c17aa6363',
42000: '6f0febbddf9248e3df2254a7196ec13f353c7a8049e098f2fcdf920580abffa4',
43000: 'e3e98488cb203be9b531e3b5ce2200f4e5655297a4f4534bcf486ad5e26687bf',
44000: 'c7c6078e30163204e49d493d506f72f71d68d2a591874328342ca3cca81c87bd',
45000: '2a65629529f69c15a6e404ba54b83e024acba9d6c786be554fb14d28951ddf9b',
46000: 'cbc148ad661e6fc390746e94cf21df71edcb222b4e8d8f0771237a8e03412f9c',
47000: 'd6977f770749b7ce1a624a61f7b2d26d817a27c50a895466ee10f8bd758c029a',
48000: 'df05fddcd5ccfab7811e5d7b6cf7b49df71c7e0f77a57ce384936948114bb93c',
49000: '9d26a67c06f066229bcdafd55cbe71e2b2518465d6deb618c418b2f229ef513f',
50000: '4ceb896d901315e0f3e10f3b1ed8472f5bb5230d4ed6377214b07e425e515f6a',
51000: '218019bd8440a8c242e84940119d4eccccd26134a208cbb92cdea3480069f482',
52000: 'c15f1538280e71abc1cf3e117c79b0b671a9f6fb46102fea7b9a1fbcad418011',
53000: '0ad4d3812a42e23fc0defe7c62dad451266ee21699ade625a5f00b91993850d3',
54000: 'f994cff7b8166b0c39a310bb686f043fe9355678ae406a7fa1a74d4bf6f9b849',
55000: '6723d30c99fb45513a42a3ffaa50d1b8350e3c8b52c6dc6894303147eba95d55',
56000: '947cbea79aabe2051522d9e957d92b86dee2c420c32a83cf6f6d655136731926',
57000: '8a153fa31a971e7e304650257bfdbaac230b36a9beebc01a85a7aab9a357a5f2',
58000: 'd19a5234ab995cb851d691b2c2c84ae665890621c52794e54c16d6a87f516c7b',
59000: 'f9c7a353e62ce57a5e3f983362b08fdbff5e2aaadce325bba7680104a6f51cb7',
60000: '9e1d535bd31525803e93e4b4c2346fcdfee7c5cdc4b9fe3032df9295aa85c754',
61000: '640387271e7eb0816b9e290dd157f58d1eb54cfc39bc7ecc4486099af2a22b76',
62000: '3d36acc4169a238657729df674277c6cf236e21c2bed4e99b8f1a6e82d00bbd6',
63000: '6d4201e45595ea160773d63bb1481f78664156de1f4564d9a8c3f61b3cc1a6c9',
64000: 'b2cadf2ba39b2d5e688029cbbc68cc6d71337b29861dee24fe941b31be1198a3',
65000: 'd8a364d385ef42e53ce88c82355235ee7a27dd3e642849dc22d08a4041695dfb',
66000: '7025d4b2d9537002ce2b9ec42c596eb864e30f41c06f02413997b5629d6dce3b',
67000: '852a3fecfb3a77bcd6d2604170cbe9646d9b889c90d234eddc8f9d112777fe41',
68000: '38956213012f66a89377f4914db838faa6738732f7a7a865ebc5777f2f126b73',
69000: '4c89b07d23727d4bd1a080bcb9a22dcdb45c904f4c9866153a000056d7ec1546',
70000: 'e4e7fbf0d17b5f933deba12e045ea23eabfa6f7dbdc633e5c3d3ef8ddd066b47',
71000: '1f07c36e82f8e7582ec9df23ddc90715df45978b6aa0a2f17c08ee14cbb23000',
72000: 'c59c660b5aca2fa3364524ef43a4fa93fc3f16df777427cf1d810f65ce42e4d2',
73000: '83a05ac2fcae8c5d50a90df308f641616fa9b9fb553763c874b764a73def2f95',
74000: '448d209c032a2c6f5330481bf34fcf0bf1ea2c47f43fa5f82cf42e0d8f4d2b20',
75000: '5627eb3491e1154ad2c90094da85dc7dbfaf89d3765848db1ed915b7e26dde58',
76000: 'b1417b1b2360e24fe42345508ba0a333f4bd76f48c959b8bb40a99bf34ec5ba0',
77000: '250aca2bc4e6da7a8e1cb2e4deb0396aee7c317177b099d7dab9731b4c0f7573',
78000: 'dbfb1dbe80a5db06c9dcc645c5279eaf40559f5e04df0e64b696e80aba5659e0',
79000: 'a7183a545925164b5a9882810ae117d0f3df9b3ed55885fe74ebea284bec484f',
80000: '5b4a7bfd9b5394e2759daecfe472ea6e92b281a7821b106314b2b2e0facbaab8',
81000: 'b7075fe530b44e4f5895521b3cf99a79fe23d8bedde0027d6e7924ca93793eab',
82000: '6a1fa75b896d799ae84046cba7cd942dad47ec04bf50a85cd1b2bb15861995e1',
83000: 'aa5f4caf970433e90e2e45f9524ab9a6e22281505ada4c206ecf4672486240e3',
84000: '103732d12ef792a1adbe8f295ca6abd008867323ee996698b2650bdc1bb8d06b',
85000: 'a214c16ffdc505d6caba7bd0c2bae766bb19f7eb4d436cfc037e93783afae7e8',
86000: 'f54226ae9a6814a345968b5c2982adf638b995ff50e27de1890b3760ead12158',
87000: '26546212a0720cab988194432f4fa7c3e47caf0fb8e31efdd1ee93c3ad056868',
88000: '48f7d47bc443bdbc6a37ffe8f1d0d91de16d4a470d03655692de8a04f84c2561',
89000: '005bac455ff093a39b907af0441897e7a9ceb19eaf51d9dd9ec2b5ea43ff6d57',
90000: 'b482f10315170e6bd280325e180ce37b0423b685baa06d33586ee659649b71a7',
91000: '56db4d5d6c460fa76d492d9900c02e8fa99bfe97c7d9fa2a75bf115128ea0da7',
92000: '72ac917c4f531f91f65fbe33e78b4d4e2bef18aab3d2ff584d59fed1ba2cf398',
93000: '1100df2fdb03d44530723d131dd7d687053db6a317ac9181cfdd51925152df02',
94000: '6d50f66efb571136501340fc00b52105e0011f0f11fd68c9b185717bd42f9307',
95000: 'bd0c996d9ec40d5e91c2b5f278a5c9c9ea0b839012a7d135e1afc73a725bd8d6',
96000: '4ebfe8a3e1a1625632896d5d2970d8b080f08c5948f0ac38e81f3fc85db5af5e',
97000: 'cbc03464aa3513aa33b540d0f026b93db7607c4bc245af8fd88c3b476ea5e394',
98000: 'c9a185ffa55d01fc73c2e60004b0c8dee60ffbac7784cd16046effd7a0a51a86',
99000: '1eba97d5ee69229f4cd697c934d60cf3262ebcdf4c79526cda71e07b52fa22f7',
100000: '0b1650207a55c64e7a6cec62ddc3cc190ed3796507bbd48358a0b7d6186bc3ce',
101000: '1f47841b0034d3cf1046660c2246902f679445b5ef621df2762bbea33b7d685a',
102000: '42f63b1b622d08e120b5405ba65d25e1a9777312cca77248b7827d7084e6d482',
103000: 'ef4861fbd4e578ff80e0d5e2afc62fb113aa203d3933f74efd3e2400d73de922',
104000: 'e97a3cb78c09eef732757e81c6a7135beb33d95398054b45df738853acedaaad',
105000: 'd12e8feec15893ccab662a1ad0754b2ccbc18b078ff3894eb2214ee4ac2f7af9',
106000: '83ae84103b37c93b2d1535fc47b053194270b3c186b1b25d69cd8a1540caaa1a',
107000: 'a102c6dce88bc4695250c24a0ebd2913b2b85c211b7d5dbbd18ecd95bea63144',
108000: 'a0c20b36140d55288dc4dfeb6c16a6ece3d43efb57b1728bc872c35d6660c704',
109000: '412fe428b2929c3cf5c6991a2657e5414a569b30e77a3f4fecef14239d89caca',
110000: '8518706e957f3ef892d1d2d7f65c5f4588620be994cda9cb7d81fb600554a456',
111000: '66f4e25f40d24360299e5358f954d8a79b245399c46dbc4e8a3db0c10fa14e18',
112000: '81c0ded5c5e30f92eec84ca293abb0165068d917bdcf436754a0e9b0ef1e325b',
113000: 'ebd5e034a45517e59e96e986b55e38051a5d2a3120a7ed92d4f3e01e18e73972',
114000: '681b85720d71ab660c7443392883ed4bdcf9aceffe202cc5ead94664e315a744',
115000: '3fbda839115bc6e6d451a05f78b27b35b137c94374fdaa42235f3035980e85e5',
116000: '7bfb88d39fe0ee7ce046648676b226997a812af1f1ed79040750045536c76067',
117000: '5853c6d48fbf99b06d3e1f474e1b667ca598712f9feaff8cda48a95935cfc498',
118000: '6a23987731379799289d2a527dd40fde8aaca8398625d2a2e3e646372ffc99e1',
119000: 'af73cbe957865b60c3c3139dd1e2bf8bf1cc504dd167545463105996609750c4',
120000: 'b309a20170421fc4ddfedfd7611716e1cf5a802a1db09b66b9ae448fdf958792',
121000: '624cc893801e4da093ca973e2135352bb428dc278e47f5308412d05acbbdeacc',
122000: '2f20c67ac3485b6e5407762a23e8a7df1bb380f1607f538cb5e2e1ffaad578ef',
123000: '89c73ed65ab8f729cdcd678dc52e876969907f1560898220702a696893744e00',
124000: '17d8155d97fa4e8705aa55969709f9cbc97f87815d40f5a2b5cf4d7b85dfe4b3',
125000: 'd3cbbdb7f51c8a1f5794bb3d1f40c413b313f91bd3f3dec0d8ebb6a02107e52a',
126000: 'bfd02d760eb5de2e792ea74215ae37d13174c3fc0aff08587281f0f39e427caf',
127000: '937dcc09a14d1de79f5eb679996225995cdce69497cff5f12c39c098c42656a0',
128000: '81d372b07e2467bfa62bb4b974d0eb75ea313e547b9d4e62f3c617f939bcf67d',
129000: '9b4b315738ab8b5266b5accada140dcf487b2f509f1d758b9b972b93ecd03c9e',
130000: '8ab3b1862cacf7f1c4f6617f7280ba31547dadd62bec9c37ecc8d074c90be2b0',
131000: '868d6520e526b8594c44a05eba67ec1de8e01b547b34b62ebc5dd09d656e9072',
132000: 'ce09471822965da31db3188cd156538c44337da2952486a34dd2ab372ea490a8',
133000: 'b0c34cd700630c9281c8657dd24baf609f72f897442538d77f4ac355199731c0',
134000: 'a76e5dc895406a3951d55b557272915810420bdc3ac076863c7efdef3b115890',
135000: '2b70fd7021d40f6df15f3f72bc8e034a8a0bcd8542847b5b7a2088253351b3a4',
136000: '6161ee84f6fe100a02de0675bc9500dc18ccaec74e7da4b92135f0ccee6fb663',
137000: 'ae5770a7158089ca9adab8bdd07f1259e6f204ed377af27fb6a772fe1c031864',
138000: 'f3b8de21370968a90cedfcb1cb6803ec9a6b7740a94d8e4c60a80407fd55a12e',
139000: 'f08f6431d7367b71b25b321809ebf48f797bf7fcc943ba366420fd3f3dc00e5b',
140000: 'e76345dbb8c4c5ed1ea37249f892f64098557700259fb360401a559c45909041',
141000: '776b333f5b221f6b443d6011bc8d0753eed5ce2446a6d3ee99a5800c7159fa92',
142000: '9453693cb846f27ba2ac39194e5b70cedfaf435b73b5f7f288dd81343ff605ea',
143000: '08aea64cc4eb0170d2704cbd3a4ee871d4f3e53a8c1395288c029eaddd1aa097',
144000: '801d60c225301481c761114d5152780d5002b77aed18845c32c9e3804a5db254',
145000: '78cc80d54933b84248aee32a18144d03237f948de7889c97e2112d17ac998009',
146000: '1e954e8f2fe59da3feef0514f6b271f52961467782b2530501c294a83181f176',
147000: 'c0ac42fe9c0e1d2b0b7e6e796c553eafbd5d7bbce7df84903dabe16dc56ba863',
148000: '75b5441fc1785dcba69b9409f24224ba3207d624bb0e7d54af24118d08c788d7',
149000: '20e53048515cc34507cf2d1dd160eabafaa97793d81626e9fce38d540142dbc4',
150000: '5ff7a08461fc3fbacf8c6261dbd34a790e271bbc39908c23d5417f0f7a71b71c',
151000: '5c14cab7670821966d5aa61f09b32d93c865764b6e258764a3c33a82a3133fe7',
152000: 'e595968e040b54ed6572862b4089e005238399014d99bf4c361cc5922dbfa6c0',
153000: '1292bac3fab11de09f46a8327511b7c37366d0f2a96a027f4bf2f16147a39af3',
154000: '3db31267722b500dd9b26f5cfc4f06f26c5c343d181464499d3bb4aaf13c3c20',
155000: 'b92962af73b42335125f28d7d3fb97e3aa6ccf47c3f4229eb7fa04b67fdb933e',
156000: 'fdbcf12415284d8cc0adbf16996b9d37a943fd1a924c5dee5f47dc8add23f278',
157000: '199781ca8e5929708f834d6f38ce3e2fe196ab519f9915f14401462273431e19',
158000: '969017fd15cf6a73683de287c5e19fbd69ebb5aaf7c13339297f7177a716de3d',
159000: '77c7995cf97218655c195fdac9010c599859a46f760d84750189114d2d2d1d9d',
160000: '00ff84f0d845b4f1099d970cc0c3e2dd1c2584c611449f2318a3f27327246d51',
161000: '08481dc5f61e776ae8be12236d595549abfa0be28b187d80dad573594d94c11f',
162000: '71497ab26b05453f3c7057f8bf57fcc8ba30920a6032ab0ae3abdd29e0677582',
163000: '08713e0b750233a3843241d24573c4800f32c894000b7815860048ef3e7a06db',
164000: '01fd806f1879285ad5234f38789074b0ea3a0b2707bc6b49aa9bd51ecd26ddcb',
165000: 'a990225a2f02a77c1c61d518d46dde1e1cff3f4df0ed24eb463b97f84c7a2616',
166000: '7772a601a6e765ee340eb666cd645503c9988430ba4f558961246e9e69349b25',
167000: '389c7a8979078b57d54936497680c7908b9f989e46631358f1c31a3094621557',
168000: '2e1df97fe3cba7add5f05734a119464ff7c22bf92701ed85642da856ec653aff',
169000: 'fe0adfe455f65cf90e1632ef39bf9ca5856020d995cb71ce8a4998a40eed5998',
170000: '87487f255873e9f6411565553fa9ccb9518e7baf71ade67492536278f1ba0feb',
171000: '37cf239bd630b7c891019adcefead4baf19a86b40e21f3dfe4a76a78f2985103',
172000: 'e2f4087b868558af51dc8f40d77c300d76eacab17e5fb8bbf3b1f99c02d995a7',
173000: '273f32717f740438e93063d259dd4f0f160077c34c937a7d6a9b1f9fbb34d87d',
174000: '807a861813ca690a6fc0715f354f36bf491c34588b5ab778c0f4e692ec3fc152',
175000: 'dd74cf3d686c59820885bab134c81b7079f2edca86a25944fe0aa36a96941550',
176000: 'aba8163b91a905aab5ad6e7599b919fba650c446938dc7b2352759203bea8957',
177000: '260bd0d053038a54565f9b25c3894cfa875c9426b9d6e9915ec1df03bcf90ac5',
178000: '7a527f71743c4dff25f13b918ad4a6d91fb0bf9c873a7282e51cb4edf545edbb',
179000: '945397818b80995ac4e23c785b6d5ed2eb01338a2d4ca6f0cf40986e87659d86',
180000: 'ba1b5263f5418844796a19db06a53c095ec0428696d8ee953790d6c86de0795a',
181000: '2b673859af43554accd8599bca53c2ca94500ecd65df7672a1e586595bdec7ff',
182000: '08bae9f096e348c8c6b42ac2762298ec3faeba896524968478fff1d40017b5b0',
183000: '211577ae9a4f93fa5dc3e33764c31048ee306b5e5ffa5081fabdf3f30e49a977',
184000: 'a4301eead637043a8751a30c13935af971a731e03e90317c5a16c1677e50d837',
185000: '1f4aa362281293e8328b6c5b32a9948ef03dcea81c6ba13f94531f0c5e42627a',
186000: '0c6b15f2164a1b55bdf0eee2b0f5ab7cb9d2dfea6fcc82686483aa7a3659f6a4',
187000: '5bfa261cad0e4d271814de1f4752352f35da10f24b8016ac0065172e1296052a',
188000: '94c2490bdafe70d65951e4034e5b9122fa6b376a5ad8c9e6276289c4a5e059e0',
189000: 'a0119c5d57523b7ccaf1e5c6671eae9f8562bbf2477f7dd8c6847c6da41707f6',
190000: '4d82802c2464053c391374eb6124f8839faa9b343bae3f34e00f87fad9f45f9b',
191000: 'd341f5b4d20894be520eeb533d56c5737ad02a7ab0c24b2463516ff04e3b8b0d',
192000: 'ddf1a72ba512786d39f7c313f710bbd452040f1eb8a0c0ee7a4b9192199fcc3c',
193000: '514e1b40e5e15dc72e75cabb8d490c5e76b968e6dcdb3acf518b47e3a2020407',
194000: 'd0bac7ca0636b662d6be169bb5af807c891523a44fceda68f08c8934d6e9e254',
195000: '00c37e81f0dfe63a8fdefd0ba91442019899d50f72b43c4de8017e767777a5af',
196000: '581a9d093458c0157464c42fa2e2d9a1f61f7edf1ccc5dd77a4fdfdd9a4b37b8',
197000: 'd7555fbf7671d2475e38f62418f6130ddfe85b8fd07fe8a9815849316165d401',
198000: 'ea388884ad5abb06ccb1d0d2202dfef5c4ce4de0f3040df089017d01f14b9530',
199000: '6a80cbb22620b7a5f99f4ea36380b7bcd22e87feaf4186bfba0b9aabcbcaab70',
200000: '7a7231293d242887c6dfa517a78b60d40b41456b6f2fe833c0217074664a61d7',
201000: '0d6bde8cdd0370964b83ee766891b3cda7b9eec098dfaa5ef05842b29bbd043c',
202000: '223fc091bddfbaedec92099f873fc1a6bc50556f3421f472a63977b3351b95c1',
203000: 'f87fb207fd397f8fb5c5c21f84f272cf33d840e20ccaf9de810af784d8459142',
204000: '6273b130d819d84492bc50fa443ca401df748e58fa84a3708a1797f116f60f1f',
205000: '498edfaaa9d5f0501aea5482ffdd9208b86af46fcea5a1cbc82d51d45eb6a256',
206000: 'c069138610493a70742918b5c624cbf9275bf55ee24649e32244238433883ea0',
207000: '5110744882ba11b98df95f11e100bc398a5aab4c8258133b518b5abb58c7fe64',
208000: 'e5690d4e0561ef015d6e2997df3b8b95163cd884d999d8c9ac212bd7e1f06ab0',
209000: '17a0a3bbf2fecbfa5c7667524cb2a452f49b8dd089af132f9d2c530ca0b677b1',
210000: '3a9489f3aca89576152949a67df6462d25bf3a78976451b5cf8bd8385a10e6ee',
211000: '45196e756c7c282c4a9e7133adb07f103d0f74ee90919cc8ad89ca3e5d38c6db',
212000: '590f21c83694c35028b9c9d632a082307b8bb27ec87e2c0c4e2881a7be9c6637',
213000: '46e56c45305c51e4d22927e449a1855064504cfb58cfceb0f293d8a4f1dca7cb',
214000: '50b3f1bb82c1b213ed2cfd2c1144b8e67e5fa731dafabfefe063f8658c5536d7',
215000: 'f53709db4ca6c12b80c64501eeb8f176211116b0fbc3a650082cf830792b206a',
216000: '6818db3f6582e71021a84fffe751fcb47db2f734d59547be47a373575b5c7d8e',
217000: '01f6f8a0699ac533ee7921f63334af07b04369d5e580c1c609c6fee64c8c6b8d',
218000: '81e9ff93cdf5686fc42663a056c74fb0f560c29a5d23f21ef92364735a865392',
219000: '80b32bb6bc3c54a19d3b604f56084b3bda13afd4af4130f791505e44fdad6c98',
220000: 'c427f24ab14449b1430c5302385f3a77f90f19715d940e8dfb7c054828270163',
221000: '4303a0e1c6f8489589290370a6843d97eb02a01be8c227d905cd0e30034feae6',
222000: 'ccfdc1768afc18cccdf2b5d90d40158d86b110d878955b6570ad9e6e40063008',
223000: '6baefb7061239c02472c4783de6d32936f7fa1da9fa6d2448b4d92f40b89e79d',
224000: 'bdc334f5d1a3bd1d19cf589e3b1ab0e8a324d44ea1c70297ff2de6a66c99399f',
225000: '82d29612f96c55dbe83655db68afe39146aff1418af929f05033fcbe612ebcb0',
226000: '179b4144f4d644f546f001273c1e5382a8de50f84ec06a2c83b02808a9167d28',
227000: 'a350e96895efe979223475151305eb8d1739121b566964033b9967031ee5581c',
228000: 'd208c66a9ccb1afde8d1540d944bfb43522015f9d2df50851e66ce1b84b06c0d',
229000: '95cfabe5d18dd8d8e4a57b7bf7052ab09b7138ba0b3c9d2efdb32352bfcca149',
230000: '3e399d08c33b8fddeb687deb276afd3ce101d19f5a71b232532951f05d46d8f6',
231000: '91c0c17d5df1c553bd7dd6db8feb58a4436d598ef4acb1ed9a06da65f3cca83f',
232000: 'eb3928a522dddde176cd4c68fa95f1e953d74f3759af4b030f7e2be4c466c45f',
233000: '3ac946df5a319d31716f50c752a5769471c0f3ba4b1002613996981c8f32fa3b',
234000: '4c9cdbce1f2a4324368130c154e2d24e885877c13a126a847059ccc289fc922c',
235000: '8701d98b254fd0b588f022ea140e936d496ca8ccc0890ce133523432f4c09e6a',
236000: '0d2d45adcdf950541c25ff5b2b87dff5faad479094b1f68804a62c3151e6d598',
237000: 'b8924ead720d0e7a304c45c209ef3fdb90cde437d138bfed2f1e8357fb3d951d',
238000: '4c9c62a3b3f6bbfb08edc23d63f9b35ed94bb38393fd1761f9b9b810bb72f68d',
239000: 'd01494b4999c2c4def2564eeb1207fd1f3d35a62fba2e61b386bfab498cb9b6f',
240000: '3021d8f0edbc0cd2ab87bad5feffac54924b51eac9f6fae8ebe289f474f11a8c',
241000: 'd2ecf0170093d3a2dfc876f673e565e3e4aa76d23896af51c96c004cb14ce67a',
242000: '2b5ef67326178702a1a35fe4a848d22b791114b59121e41a25ddc1bfac82c6bd',
243000: 'b2d546f127252c3fa5f7ca45c61268dbaef66027484a1bcdadd281fabf97d34f',
244000: '58f9b398b871dd62952899d31661c118ca4f0d1c71dd165523b889d62b154393',
245000: '504760f0e46e2de0a2c8b4dd3e93c19d0f3e0e1b36916f02cbff166655816206',
246000: '15ffb23b844e6ac648a7fd099be8e631cb51b3a2496c67ac2dbedc4f6818d7a7',
247000: 'e3e56e8575552202cf35fc883bfcfd92cacf8013a6536729ec19e53bdb0db128',
248000: '22f75d137c7fe42a80ee04a386fdea8cad56e7ce8601234b6ec676498edf9e34',
249000: 'f84709269c393cae8106ddb1fa3a48cf3226e97bcc108717e483493159921aa1',
250000: 'f3aee545330f18ec49f6f880d7f9fd31ed6633279881dd13177b0a687c51b000',
251000: 'c2e6a4cb3c6e88f523db2ac24fc7fff3b8a27d13ea7b5d614806ed96247025e4',
252000: 'efba2c1f91a13ee8d7a8658e70893ccd3014a7019c225e356c3ee67955b58756',
253000: '9097ecbbc1279bfa2c92822a6485714ca7ba35cd3dad9ccaff5a8d24645eae92',
254000: '00a41b720ec0d7a9fea3229b16ad8d065cd6bb1d14587eb3404de33e45db9454',
255000: '7523636406176d391928c638e15e959a9d0255aa71dc9f3bea8995cae02d1d51',
256000: '8d3f7c5ded24e2488e25d6030d78d351354734b8b058caf6143430e96ebd5d0c',
257000: '48fbbc3737a718163a501e52964eb58aaf4163aab7ceaa4f75f928b25c376ec2',
258000: 'e1052e7ebf83246770a740c0c6ce8f011fc7007448777c533d67fec810fe5e64',
259000: '043f7d98b500fafc248549c1f5a9727fb0cec4e3e5f13071eeba459fb9179d13',
260000: 'ed2f3960b077651ac554b277dd271e12e6cd05942fb54b9464f18432adbe5ca2',
261000: '234f17bf2054ad4d3efeae165c3359a14e08d2e53506c839cbed6ba3d2a48ab5',
262000: '4342d662c4dab581bf1c123e057e4413bbf74b7a2a331e80f7099201539be54c',
263000: 'a7b1380371fbfe64bfb94bed5ae256f7db59a4661e461221cae99dcd54c72a8a',
264000: '4f2a606cdc5f346e0855d962cbef7c2e4993b2b07b86ce658a796b1baf6179d8',
265000: 'cc09cde8c101eb17b5457bf976525940904cb839e70ef3335780eb0261976826',
266000: '3c5f24533723afb124b2bc8c2f8d373eb5640dc4ea4dc76b841e81706dd6b29d',
267000: '467c5ca63621be011a07129d0c3ea785e2a0853cc3c0d1e9cace122831d42507',
268000: '07df13a794f6c0fc77142fbd0afa23121ee2a2a23bc9abca35c164edb5f6ad1c',
269000: 'e2be277e83a997a111fb45948e9a7afdd7925feb1708a84c62968cf6a42da1c1',
270000: '463425dddeeac2f27ee717364073229c333a7ba357888623c3b15a1db52542f4',
271000: 'd62e0bd8584fae7e4893c18252ae8d1c245e2e4c8e2f2d73703c681c6259f19f',
272000: '85cabf5254b6b6292c105e68d44f78d6effeec582467caf9d30a5deeb43b6a6a',
273000: 'f5509cee847a757c84adea8e40f3c7c0cc6a4da33a8cd3019cf9ca850d0bc93b',
274000: '2cfdb3477de2fe1c81c7133094094c79e933cfe6034c158de07ffd0c4c08b564',
275000: '1c1b8e447e95ec3c306fc68528631682bd85e1c876d86223fea28b7ff383c16a',
276000: 'b37e5e4f6d29590c9a58773a4d4384cf7c51d04fede4a602ba8095e14bf65993',
277000: 'b92d6ffdecdca014c946c2fe8c6c404ccf60d8401e7631918da8885bd360daca',
278000: 'c1f82e49c8f66aa580a216c8e08262080ff9fc4f4df1c482ce8b2695b90b7240',
279000: '41c48f5965e8939d1df69127798b46af17646ea75e545a7b47049596d9bfb3a2',
280000: '48da55fb8ce259432417ffacba5e064df4df2a8293cdd484ee0ad8ce4b0bc38f',
281000: '0e892fd450a8ca018567e1a6e23bcdc5b4b872686f816a415991f59b3abd76f3',
282000: 'b90fb78edf920d0bfb4bce1f39327c6777b511fd05d715729d7c4cdbf449d59e',
283000: '80965fd3817b9509ddbe6a919aa73adb11af64cda5d8352ad2bd9b7824e95c42',
284000: '43447043aea93b162d6f615e8cb7d992d02c176a41b0518256f7e0d8fb3162b1',
285000: '33bc92c77ba88251822698a7bd02860ef9c12420e4c27f4020370acaab0a2f37',
286000: 'eb664123736754c19a47e2afbbc8645c598d7b70c3d42132c334cfeea8fd6bf3',
287000: '6af8f2fb3ea1f0fd0d6432599c3a681be7d384cdc553903bcaac8de2d472725f',
288000: 'b5fcfef1b5f6373bfc9322164160fe98fc7fb4f3117bf91c8fe61324db62723a',
289000: 'f6f1c6f8a2a2905b2ca18845b8bc771bad6e9cc32fd2c8e20a565cbd7781bac9',
290000: 'cded625fb7b923d7471ee28276b6983bb4fb4f6281ded9a8637b6974710feab5',
291000: '49af69a4855a36d00cfe2408f7d090fb5001913aa06615eca5af78ecb92b1fb9',
292000: '2dfe91f7f1870539e6b51ce58f9cd6415ea8fd50dad376d74941a9ff1bf0f8f4',
293000: 'b84f14b784cf6eca78d13f4b0db4714be5bcfa59b73974b7106a0206a8993767',
294000: '07e95fc3c141dc64ae192cd233f97ac16b1adc9dfbbd685475bf693930b6f604',
295000: 'c39cca37abe0b8839f8e797287385c73031496c632a064a89b9dc0730a3fffd2',
296000: 'e69afa519ac761328358f5b3ac8b4395d5aa4b955bf1b4c5f3bc0af49cbc3756',
297000: '20c93e2d1d9bce1c861eb16febe50e534acc3e2d142cd2d0767bfe076545d1f8',
298000: '75970b654ac884cbd1328b68b59963bfa5babacca4c7d48368a1d938feb8af6c',
299000: '05448c7f1230870e814858ac9a7858fc72a045dd9e0f1f14b8862d2fdd524b07',
300000: 'e254223f85c3ba65a0295a5538387c08ae94e6070f5c989d27349978fef077c7',
301000: '4a8bd2357b4ce05422d51e052018fb9f75ce3c07c53b5d1623ac19afff388e63',
302000: '18f4a960b810034f5ea02458bbcb9bf3d38d681555be4e40d132f867444306c8',
303000: '667cd927ba1e7ab162c17a711bb0ba65f838e543f46b093c86c14e39988da156',
304000: '55280e92a3c3b9ab9e1032490aff10c1a2fe9e96cf4f88e49b7a756c0491701e',
305000: '8af5b503f1819b23da7b3663314a15b10e39e072377d1b08518a373c85515c1a',
306000: '439796b2b4add7dd0fbe14bb3d04f3917e92de2afa07da2a384b389ad9228888',
307000: '07fd4cbacddb504e23880dc454fb70659386afa21dab3e862cbe7688218a43fb',
308000: '2734a4205be5f2c8ce5850bee8d4ad0829c208911bf4ce833349a43af2db8fe1',
309000: '261b84349406a913a4cb700fd218112920910c1c57922e289114c7b903fe2d9f',
310000: 'bc652e239edd3476df7bd1e0563595145fd4f2b37dc2ac3bc28ec089a7550812',
311000: 'fd7c1b32015a2418efc6e1a73470ea3205de05662c767b0d4198fcaf804c3b5a',
312000: '4f017d27b81da7eaf62760cb7fe6613a6ec28fdc18859b6e727b216240a86315',
313000: '2d09525e2364e7cd7fe45191fee11f1ce16a2fbd16b56e41f90a3c64ce2f944e',
314000: 'c05770490a47280c5b318709c264b2ab951a5bf8c4575cbf1b1e3dbf35323f45',
315000: 'ac5da2f5abcee0f92f29479c6355ddafd1f3b4b3cca6659117c17c9fc10fc743',
316000: '4496d1a0e4c22a2521595bd4fae60abdb9404f79135311606a9e6713be373271',
317000: '40ba7c87490677af9e8b033a19ee5ffe51ba192f5912f491738bb6485cfb76d4',
318000: 'f42045295ce30bd4ba9fd9c845bc1228949b4152517185d3dd0598d8bae38481',
319000: 'f488695e46b53e4734f4eb9fced93f067f037af7099d0f9510856a898a0c438d',
320000: '83e42370911c70bfa2ad89a82ecd02bbc9cb13d24c2a900a0030771898ed618b',
321000: '74eabfbb64eb89488b59393ed529b4ee97e0f88a7fc14f798f22715bcee3fd19',
322000: '54f42438cf96fe8efa0180a47350002b34ccfe82123a85a9a6dc6f8b0894f0a2',
323000: '5509357b42476732435a1d5a6a850e27f62cb3ca81219a94f08dded06dfcbbf8',
324000: '1cd477361dc4c72b4f157469b6f3616e060f5d058b44ba864225210432b35540',
325000: '060ae9c541eee90a80a11fe300583e47c0ad4d12093d0651310cc6c87ecf2b75',
326000: '13ce59868c947ecf2d6aa986501c9f8411a59b4e0b8b6113f4f45dff4074fe59',
327000: '16a1532ff983f5cdab6c2d465d58f0369665a926c9d877212d95f9387e8732ca',
328000: 'c9608a16fc43e7e94f04b3309c067b0a4f68a66705009dabb39898f41fdc862e',
329000: '029b02acbe0fc8610894f0b89d6bd274362195a443fb9e5e4d4485be7786d461',
330000: '01181b9529da87eb39510619285c08b05cfa2ca2c7a2fb3ecadba474c1db3aa5',
331000: '2371752c35da119184e53dcb593a193fb1c81d6edcff921599328a6142a57e3d',
332000: 'fc025a8f319725faa9903eca86935116a2aad8fec6193055d5d5f0f431ba84b2',
333000: 'b602651afd9988075887a3f48455d977357032185b7d482262efff9cc97a45a6',
334000: 'a9a2bfb6b0f8db298b5ba732f4a23242fa89e1c5038916ccfba3751f7dd1b4a0',
335000: 'eeb3e35b6ea94119bf058e6b02a2ec2a706089bc1c360ca28d1b4e2ce1b547ac',
336000: 'e834f7ac6bd8e5b46ecebda4997ae73bccf353fa024fbaba7ad4d1d70d08d14b',
337000: '229a7017b6d0ffa08374e297b07f12620bcdbaaf5977dd39f3f207ecb03c7adc',
338000: '83d86500e8b2ef46a0f1505e29f7401b123ea5ed5feacb49864ab1de284eb574',
339000: 'b08e9dcef3e38d631414b4d3a8d8b3dfa8a07d9e5829c515e7db2fec64c60827',
340000: 'aa36ce6494df6905f4516530ddca77d88341c0202bf052347a86e090c03ab0ad',
341000: '527b50125d5479117ca8bfe84fd5507b2312360908d4cafe8588d40c4ef5622d',
342000: 'd1397e8b0f847b6d46c409659e9e10f3182ad9bdbca563cc2598af37cdc080bf',
343000: '896576aef0e0dbdb2df8291f2832716029f69718ac79f355425d475a1b2b7b4a',
344000: 'cf4b3c75279b0f7d71cbba934228709ff03e3254fdd83a47fde66e83ff393eb3',
345000: 'e68c760edf370e758e815d9363ada456fa2568b06abf8626f28a32cc62944770',
346000: 'ab19fe52e780e45a716ed97e93084bb3f7916e0a4479f131c7f5f7a2e7d69d9c',
347000: 'bc77d338a10fd71daa6c68fa9df2fdccf44b6d9b1d7c6456bb281bf8547b622b',
348000: 'f84469ebb26540cb3de0ae03f1f47452f9a9bc6b7195c3b5260d48623507ed86',
349000: '5bafc3e6dbe4bf88257c4381cb4f2fea0a4cde1f272e1f1006600b507cb0f670',
350000: 'e53ff96761b9c98bed635457acd1856a2ffe65b84e339cea94ad30f13a31e5e9',
351000: '5e0eb50d32e90e5ca73639313e5d3a850a113fbe60ac494ca6cdb799e9604d7e',
352000: '1a1a04aaef110493bf8fbdb8d1784a7706c6e303773ad0bad6511dd3c1da6c54',
353000: 'd72f3b615e244584f131f2f8e7154534aa9d2d9c4a710bb4de44de556b5d1232',
354000: 'e81b823c64229c555a50e90a4c3c4654b6673cbff4c62c6e897d09aac00d95f8',
355000: '11eed14cac2024588b389ddcd54315923ffff03d6c4c5f32657ba551fd308516',
356000: '465f28c63e564296f75a4a22c3ad16714cda29d7a2797e51e512eb1ed5f6a49e',
357000: '2b544c79c6f74ae1c05c268669e57ad41d6807f1d0ce1a5024b7b08646df6861',
358000: '8a76eccf113819aad8d7c17a459a614274bfdaeacfb4659d945f13dbf3907ccc',
359000: 'e63743f458f1479593d73952bfb882c72e5889331add576a716761eff2cf13dd',
360000: '0ab0bef439debf08dc3e1df9010c4e119d6862de6620de5f725cac0013c06301',
361000: '50f649f0c8bc0d77ffddd53fd5b896705a8858d3684a3ec876cb320ad26d2c29',
362000: 'e058c43c366c9c307ca3ce5a732f0ab2b4e09ff291a067366af25a82fb3cddf1',
363000: 'f99d7f0ec93ec7fd74273bb6f8bc4ad8ec0fcda9f6406001668dcf62a088a867',
364000: 'e5a69ea8763650542b1f90d6217af00da137a9bb72cd0dbd5379570d1fa696f0',
365000: '5d0eb3063474c6bbf9d9581898fcf8c4cd097fe5c6e3875d7cd701a5a0302507',
366000: '14da2978f4d4eeff5eb0a6bb663e514095acfe2210039df8dda9b76f522f74ba',
367000: '4ad3ebe23fe9b0f407c0765adb69ee973de5b84db29c97167c7a358fe8696bfa',
368000: '22db57ccf8c8193ae9fc08abe11f1ea8e869bfd6ab2c0b918820acdaafc19136',
369000: '530052d61878d20282a69e0bace41857f7edea6d3e9e92fd0a7781c8b60d208e',
370000: 'cd057d0c889c78e06d0eb71287c46deea4340b8c7a4c470dd69020e1a9029c43',
371000: '41f32ab47863f0c44b8755ecf9b201aaa92999051ccbf8a5acdee0e12dee4110',
372000: '37de29bd97d75a41dc3eb52bed9c09baf069c23120a393888e15e61ac7f81ea2',
373000: 'd6b26bb791fdf6d79f59d5c37ef2af0a35149737029157724224d614477527f4',
374000: 'de18c6a921838385950b10ed9429579e95a0bdc574f448f457ee486a59a04260',
375000: '1f86edf4ae47f409f8de6a16c17c2da063937478a0ccc800633e4d33b531709f',
376000: 'a0511050ef086052fe5671d3333c255507b03edbd365cd7ce7cecddb4000f391',
377000: '7b8f0060f51fd170296b9e000bb2970097f77ebf0b5872a9ac01be5aa03d7ffa',
378000: 'fc5669d194d822c55e141ddc9aa229059e4663169456dc927d43b44a2c8d3c0d',
379000: '1c0bf45f052e0123d1a6806fc4a93ac8261ed26772f324c1b3e7c3fa44d7f442',
380000: '73b69dcfa0a916a4c495a14d4d43e270b123ba37d267e3eb0d65890336fd3e32',
381000: 'e89e3d548193df667c637f774c76f508ec79b7bf1f2ac5d9adf7468006e34454',
382000: '6b835973c2efcd564fbd58d96226cb984d5c32dbf63d5b6044a0a11f77c2ccc7',
383000: '50b7011656c5903941303c7a19e4ea0fc6bc10456ba85638b0919658556e47cb',
384000: '74c509b238c1418ae527d370e8fe922f7639c88ef8d78fc3a45894deeb2252a2',
385000: '6f9f6147b06d8fea8c440cdebb3869b7bface7cb2355d66639060056adc9bcc2',
386000: '5b59e68fd62751b8be184a5621f1c63f1e9cbd22051487999816df392dca7837',
387000: '86d8b9628b83ac63637c86c364001067ee1c3050f395758fd3f0bcab20cef703',
388000: '5e570f7b69738285848bd8b7f2fbb82d05e99a72cd4a0faf20a9e3491d2d041c',
389000: '78d31bda70842388c1cfb2b83e9e95ded37a0304aaed3292e2687a96c07be9b4',
390000: '316da5e2ce79bbc4f4500ea6417271f0ae268f85f86f81223d0af38d1e0e0a67',
391000: '45cebf8a25c25aaaac06e915fb3fcc99a84be61a266c2bd8997bd1fe00a7321c',
392000: 'c5ff6e24f524bfb1f629245345052c0a0df33ab881ee9d11dcaf64915b2acad3',
393000: 'a54b5d4824bae06f91566b73c99a07b812095d7c133d1042ded80211c0402b3f',
394000: '0ae76082f8347ddf838ce0ad486f5decf6223e8b8cfc16dfe178b15830807c60',
395000: 'dc436c97ee6ed427687e6d48e609d271b125495afdebbcf611c0fd5102ec900e',
396000: 'a87871c13ca3119f309a4ed98e4224d4eff01387ea67e86148007ac0d759f2b9',
397000: '58c143b302132ad4b38718ebe1c5fa81c1177a30045f1e4a52dfa749e84dde1d',
398000: '33089400401a190c9375abb63c6ef489123f7369e5a35766b59e69edaf4e3b93',
399000: '656305fae087b86758bc7a01e13051b1aaf4a5c53d9b71e56614217a82b20eea',
400000: '7ea274642eac11d02aef589428b181e1c8aed913e248b357917604b00b23c752',
401000: '1969a9f64e316e0b4fc864fd7162379821510a944652b8dea747e425c6222807',
402000: '3700e1703e7803a4577de7e364a34c3be94d3f933e05c5dfbf6ff68349b6fa31',
403000: '8b622599bf4b2d82bc11c897f35536036e6798ac101a6256e14477624147efe2',
404000: '5f6f6dcea393ddccdec69b30881ab08e01e1a7e6a50d681883c759b742c12d6e',
405000: 'c3fb0507a57da69042e246dc8ecac101f33cf2b1fb53fe2f507ce28a0d82e4b5',
406000: '4f034663bb7b84439fa256f7b4ee2a5cf93e0048c1eb476aa2152453ee7be544',
407000: 'ff99ff38df6f928242571fed5b03fe0c3cb481b510d3c943e47e26f26440cbda',
408000: '89865af5e99e64cef61dd931061bbca7fe355218668f487f06474e46ebf892d8',
409000: '1b1d050c0b98d55d9ecb0d1a5b03e75caa352b4e0ce6a57136551000e150c480',
410000: 'fe52790595db35c7685e5261de78d139c830af0a5ae73a7e33da21be9ffcc9e6',
411000: '1afe55e66e6e16abff282c9d6bcfbc6a3a6e40db3c60323cd44e3b3a5d4ba924',
412000: 'b59a4057d73e8037f3e0e9b92310def7900c19917892ebe7689d5f8d77912388',
413000: '603f3ca16a6a795d5aa807eac5d6ee7f80a1e2596cd91cf41cf90ee309b2871a',
414000: '10e255807a43bf5b95f9af648f1724de1e953a5540819822be5de184ead24ae0',
415000: '9faf6b1c33006e3da3ae2c7599c7fdeb5078a8427edd6666290e32aea67f5a23',
416000: '973b77e4d491ce0c163ca64f5419b05822eebc2f5d85810f51370f56e0d812b6',
417000: 'bf99710d77fe84ae3ef5a371887cacc2ca1fbc5549b04ef9f61cb9d911b06a9f',
418000: 'f032acfc2d629fecfa180067b4531f9d72924c5f204cfe9053c6556a334373e8',
419000: 'bc9f085612af6a21411fcf51259387d97e6ae1169e4c6f92766a168302a98bac',
420000: '9136bd1fa7e8850c52e3cda39a96f754cdc605a69ea49b7d6d959d0cce81efba',
421000: 'ec2a78bad1b4b7325d574122ad8c3072d27fe6854a6965e6917a28e0c868a229',
422000: '536f4cc5d6ab082d32ccdba1caccb5d4f3145b60ee8538b1e18c6231dcfa23df',
423000: 'ac7ad1f97da0e55661bb25d53c27ea72f705d54050f020aad9fd5856a27117cd',
424000: 'f840eed1c0fdfba43965bfc175e4d92a5ddd0fd6022dfef988d70a5e7f363116',
425000: 'c1a8f9b79a34b76c9ae58e005501dbb1893cfbda8bb25b0d54854657b3ff1a54',
426000: 'e8895d8f797026a2f19ade2b548bd2cb655e2d756dcc0a50850fd37148d24ae5',
427000: '0ca56cf4fd882583854fca00e339534346d53178c6679f8eb1e7a5dbb7b9e7b0',
428000: 'd3ce901843f952df4b63e755a0f58924ae11324aad47afece9a93502dcae7c13',
429000: '774cf2d1ba62f20f5e1ce50692f65f0ce35c7456d3988cfbb2e1b753e0ab0f51',
430000: 'a07fc310f6201c30f1fa70a082348942c65743d51e3c19492596f6aab4bc9bb5',
431000: '7bc7f405bdfc91dd01877ccfd6142c973d81dab12fa188f468abe6c9a30fe0d8',
432000: '7fe40e145078c1952f87547156e2147144449971f108164d27bcfb93f1c840d7',
433000: 'dbe52d2772d6c0461e53357b71a79096140039ef643876ec2d27e1bdfee7e81f',
434000: '94b379e842d4979bbec171bb08881227ac4e9477186ab63f7eee9b4261b92365',
435000: 'bc9cd265de563cfe80090ce12085539bc67d406c23e4d41751f191d6b6c63168',
436000: '3ff03098d2c20b5cc080363a84d8295510db937272801454d429c4d17a1dbe10',
437000: '5b8ebd40170e217eb66990af3574b80f4e66bedd092e44836aa5101ded50c13d',
438000: 'f725813181d50c0b20552635ecc578685ab0218619d53ed23493c46fb060ecec',
439000: 'cc7cb296ae79850ba2f6f7532c445189452831957f62cb823bc83aa6b51daee9',
440000: '757b816392227c9d1a14f7350cab6e77a46e465e7853b1291abdaebcee2bba94',
441000: '834515861eb72aa472843a6924fa9d6350630ed56e35181d9fe66e3a46b6952a',
442000: 'cac31ac937ebb6f0cf4f3c33f19d66a05bdc11105a98ef12c32b494e19e772a0',
443000: '552b03680b5495c7bd6ec58db0b6767e6bde5c5961a02b7b2130ea6116bddbad',
444000: '2bea6ef419d9fe81de8bf5cfe21b2603f202e2b9a959d2cfbdcbccbb7a7406c8',
445000: 'a4106f2115b356eca92e5e3f45b8ae0e13b82af94bae391942200d1c59d34d17',
446000: 'ba49924afd8a40260e546e5e4633e945877ac9d5754c277b77bb154c0c9d0e40',
447000: '83ea56135c2b052df0fb09c68cb7a085d876854f34338abaf3cf51760c83c3eb',
448000: '806d77789673fa09d24c188ee024e71425df7baf623fda073076f3a9b87ddbce',
449000: 'c41d60b93fca1d2adbbfcfa99475a8eda152630d815eb1f6c58cbd15a0c2eb65',
450000: '732d315feeb485fcc5055916df17827d80e7f831c43df81ea79123882c4fb356',
451000: '202156d29e57c51941cdb7256918128589f5b8dbae5762bbbeacff6eefbbfc71',
452000: '0194e8482252f2492e1e2696fa5761f50875d0188310a0cf1d967edffb1b79f5',
453000: '9e63570821d8b5cd7de0c5f9fda67300f531d87cd8f778fdc6bee9519b77e01e',
454000: '632353fe048e83ebef942d64d550721d29b3fc8083a247b2443f2aa2b706c6fb',
455000: '65863feebdbd60d16330000cbae496667a67dd5fa8b8e3b6dd3732843f6222a5',
456000: 'bbc554cb3c0281d6853ff2f6c25b1251eb108c09e5a6c4a69a56e8f79c9206e4',
457000: 'fe6fdabb84ad1ffa807fd55db29dc95dafeb0e9a5eeb5ce75be8a4d16f9f116c',
458000: '949bd40d861e4b297352722b6f01efe9574711fe64b10c2e8dd769ad99cf9655',
459000: '9c139c9ccb41fbb3ecee878cf47bb37e28c76816c257f99dd7b23ec1f3eb3ab8',
460000: '61c3126d504fb38ea21948d2837368441389155e9675c7f1629eefcd03198f8a',
461000: '28ff145fe2abc7cacfb1613d40b45e13580c643ad68c44db81d15125324f4f33',
462000: 'e6b532924d07f1ff3976a53397327ccb232d3b2ac6ab090874cfa6ae47b8b315',
463000: '15ca651db9b159978ba6b2adfae1669e3042f4311ed683c72838bf337d71274d',
464000: '6903539d4f38f728e075441efa96b8013295ac166db50ddd4103fb2bc9937f0f',
465000: '6b6c2640411d900c13f5b53a9a993d10fbbd7f03f6fd477e120ba06a7cd67778',
466000: 'a05d282fcb9c091a0a96c2ddb3728c1e98eb767fd5311bcc08c9c10b5a0eb24f',
467000: 'afb81fdfd562f2cd2d53434e6458c38462f85b65958ecade001bcf2b49f9b28a',
468000: '97374586191b3a30654483722f7c6486ef9d9f71aaca44b72146c11d4f4b80c7',
469000: '90a829ff58be9ad30024ba1851515cd96d567be1d99154cb966067316d1902af',
470000: 'e9fbf2d2291e1b5af5dded3c223611d9f2e31fa6e1d8c075741aa155538bbddc',
471000: '3b4513a6a6c05ae9d2ab880a2d1aed3b189e0fca9cfa2bbc7c31ebcbffaed9a4',
472000: '32a57820226f672e695823620fccd42a6fe624dd2d88f7e7c76e0e9128883b27',
473000: '15cb7e920799939eb01d06b1b713cba3ed31ef92ca037ee98eb8101b6331e5a6',
474000: 'cc23e76e68dd726f31e592a4ea5d6aa3cacd827d068317a0a476f16ce468d9f9',
475000: '9e7e33a8ee53a4bd94a26b35bb43054b42a238a85a859ed9e4a33db978a128ff',
476000: '23c85e4d8edc7939fe0e5a47fa250f0921b366711c34910c394af00a4efd90c6',
477000: 'fd9d489fba4175c1fb3b8a0f9bb947287e64da4bee9f8a7fe73411632c820368',
478000: 'e287b69696013e70b276843e83af6cb5cd076000cb280e43de29cad7f706dec9',
479000: 'e86c7a06d96887e404c052b0fcedf10d40423b02d6b9fa7ccb9827551e5d089f',
480000: '74e7ad7d78dc585a5cd1054c0288aefcec0e031cd40f0ebaa0fefc7a325f212a',
481000: 'a01f98758cd3e561d02bef30c054ef70405ebdce9e670a7d6165efe3a9030886',
482000: 'cc9e22f04e4e2e4f52b8382281092ba50ec99722c9172a0e1c9ad654d26bd4d9',
483000: 'c92e278d54111ab2f3906eafab87fdb932307e60836df209856a97cb9e1ae3a8',
484000: '18fc89d07631aa01891aa44231dae6cfa43fd4733c193468b033532a4574eb80',
485000: 'ab546bb694363e07469dec48b559479d34d724a43865fa8f3a178ddc7b83e8a2',
486000: '59926fddc89abcfb55775a58e5e6e453e388f530f720dc61001f1ac59b59cd43',
487000: 'b41bcbd91984a3dfe72c87bb7c2ef32ba3eb3cf4318f5d37654459143dceca1b',
488000: '94e859b8f01ccbe9fc5bea5c274f252e23599cd444d5eb0962ece97110b86eda',
489000: 'fd99b951714162e55d8944e8c3ce9b098c4b74ea24017e4b204f95359202df08',
490000: '80b84d18f4bc6d624d4190db94920976926bbe48f2e3e0f4c08106b9f99ac5df',
491000: 'b0c5b14b01225b1ed8272dbe9b2def41756cda0a3fe6a1a8e75b0795b72e6a7b',
492000: '572dfe57507384278def462488416248a3039c1a76a8c4aa23d1ec706de1e3f6',
493000: '75f22a8d4d202914c0f3bffaf95e47eeb82eb00dcd2913af5dd67acf6f8ed8f0',
494000: '626e26524f9427582d61f340ee9181d8f238ad8acf96c1862724fab62ede8c8b',
495000: '69a11346ba65c79b5fd566768a8b2880a5a29e2552ee8e343bc88bcf43e9ca54',
496000: '314c88db22753d317c7b65f872d54d3ff672d04b3781293d59a1ea902811336e',
497000: '7bfcf2614b54ac3808894694e8b15940d6740760a2252ba4b47b2926bdd4bf51',
498000: 'cc02f90d2fd4e0f378d2ae3bf1011026ca3d7f6642dab316f5fbc2f57aa840d0',
499000: '841182d8519cd7b2d8fd01ddd087de892ecfafaacf9d6aa029ce1ed71ee0d538',
500000: 'e2742854018ecbfef71d626ab797b7c6137e84ff1e27a05b5da84b2fc7c43145',
501000: '07334852d0b542f897164e4d74e699678c850af4e46111d495701a312be21105',
502000: '6150771ac25f4185eb94ef6aa15e7f409b6beb411358ba7d982755655d431492',
503000: '156d3165eb11d021c46ae5968aeac20f65eb05e68d841bdc98dde156d42fb500',
504000: 'e9d474e59bf527d84f031ba41cf471f49e5bf0472ae1b3f48f036813a0b195e6',
505000: '261cda40eed5ec5b2a44edcd995b24887a45a417fec3f68507a4d600ef2abc33',
506000: 'ffbeca24a8f2b7cd6d64207911e153a2f0bb70d34bb4f073c33197e3a2276558',
507000: '14e2664bed708d57478f6fb5a4228e4c26843bd6533056fb38dbf1e7595028a9',
508000: '2750e25c3dde233447266f7b35f78d8bff18bbde4ae094d06e38fbf6cd2c6902',
509000: '5fc210a72d142dd413bc3b76d5ac772564a5917aef13890e7cee190c61b059d0',
510000: '7184fa5628bfe59481bb3542bb4212b535c3ede3c24425f0bf7b1b453c238c6a',
511000: '2b4bec04d73f0e524cf12d04bbf7b3859b489275edd047f41b139ee6220074de',
512000: 'ebf81767f4d26c98bce62a5f5d1a289da5bba770e88f346e2fbb1d676c155e09',
513000: '8bd8a99c65a258ebec809ca76e2cc91b0b4fec5dfe0970736d2fcd706aa660ff',
514000: '2dd7cba17366c483819ea090d3196950ee35a4bd9f28acd94575525c851234d5',
515000: '33ccf5f5230b9a79cd89eace91cd1c2858f6d040a11494ffb8d51ee934753d57',
516000: '30eafb000d199c8d9d1a55c5b09a382a3b10bcbca837e584db6647699ce6a3c0',
517000: 'e7ed622512e2483fb8537579ce2d2519053f9a6c8d19c42d665007632d5f74d3',
518000: '5180d11f98a25b8698b969c3c9a4b736522397859e7dc00665658f93c6cb8be4',
519000: '17ca81b3a3c942c80384c4d2c176b5274c52569b90eef686e077f53a884130c7',
520000: 'fe1f552778cd5250ba6556712217b86fd6f1d574143f8ad6238e1d6b67481f97',
521000: 'd7ac10bcb8662ab9940c4212dbf0c4b9cf9750e3c04a905efdffbce63f065de7',
522000: 'fa7625300aa890ccda58268eaf2daef14539c195f93311099754f955f367413f',
523000: 'a55ef67a0e75bea12003609359443a28b42b9037290872d6b1672433cf5c0d34',
524000: '08d1d556393bbddf6eda11df852a792515ab9c1d519bd91c48711a0de8faffcc',
525000: '76b8691b16f7dafcf980ec3e7c69c2f3caee32057ac8d61b3d3aef0fd394642c',
526000: '6d3ecbd7132a725e330f0a566d8d602a9cd96326a36387c1c1fe52f7f6ad975a',
527000: '71d08ce7aec6ff50fa1ddc37f6b1574db6e71004e8e111f04a9b715103f68e01',
528000: 'e9b103c11db50a121c9e3a1fb4cbbd4fb124b07a67167c5a461a539dac3f932c',
529000: 'd2f784907a019c1eb5830a74799562cd6d21570c4591bb038f379897f3296d75',
530000: '996bca5a7dfc4a8c7f9f8dfb38b1881fb97893a950f4c8f57462506782ee8ca7',
531000: 'c84dea2e7fb0c9c884f8dce525b8b65bb7deb33ab7fdbcf72a5382e9bac2ade5',
532000: '3e200d76a92dd1f6fcc420d44ebe446829fa5e0162177acd75a13c98f1477cd1',
533000: '37fa7cd81c658ada13d04dcf0e674017c6b7430bb0cd5a080b1b924396377b52',
534000: '9483c344293c376d2bd247f2846a5ff5038b27af48ea5badcde101fcd0113dfd',
535000: 'c9d4f9982955c07960da1e4f92803ba135f4584416e0767a3567e4d2a21e62dc',
536000: 'bc9abe4e4be06aa8f61b599d64923ba88d36886b638f413b12bf87ae852e4010',
537000: 'df8307b0097b4882740c18432ab2d7e64486cb7bb70b503923bd630ea277e9e3',
538000: '6e2accc5e65a85e0aad098dbc7021bba3cac37c70f6a0ff3135f86b90954bd2c',
539000: 'b7d9f50727cb512dbaa1ba073d227805e7adc70f9aee46b9721a65ca3d34a8fb',
540000: '19c695a6b8bd256fddfd00c53e6401829ae59ebdc1e71ac234f82ffb2991ed0f',
541000: 'cc07c3740f96ec8929c7c8714892a19815fe64f4bcdc4d1bf12cd0a9e1e82125',
542000: '23ffec3127ec3e093ec8221fa760510e63fcab31f9f0fb2b48727649e4308980',
543000: '6c73b242bb05af125ce4ea324d921ece54c9510980128605aec23cc465dbeb00',
544000: '51f215e5f11ae31853c4caaae1d01aac2dbafd00302b70e7198e213d485d5bfe',
545000: '2c36150f62c86d050e4c64d83aa0071ef4af0bf8e1e17623c740ea4ee7f3275e',
546000: '322e6d77f529412c4b7a1c7778253900f1976f238f96b771845b56aa080f4d23',
547000: '1d3413530079389231bdba0a0a7de7c85a4ca991569b2f26a1e48ad2f34a995f',
548000: '3576ec0d3ed469c5498229af531fdda290777fa1ab7cd83c49450f730c5ac0d6',
549000: 'd2a5ed3d920f273bf891ac40ece534a56f8774c8809ed7d8482ce4b78bdf016e',
550000: 'e1f75701ff829339e03c019eef2da1ee4239f38879173a833f81295200e96d29',
551000: 'fe8339fb53a33e3b5c4a39cdd64d869f41b4fc88c73fa018eb5245d2a7a01e94',
552000: '8c2180cc834d5c14c9801772caa65e82b44cde33da7aac586c81b2bfe77412dc',
553000: 'db9b4ea2cbdb21fd48a69da3bac2dbe8cea2a16d45f9252776633661e28fd1d2',
554000: 'd09c32f70dd94d737205922618b789204a39668b59fd3ef405ba6d4ba3bd8aea',
555000: '6e8efd0190d866972b469cc827330f84f515d6466d8748fc33520e8e0505bf3e',
556000: '8ce7cc8aa6ad30971014b332c64ce87303098e64391a9be03d1c621df851c00c',
557000: '9959211d82372edae69cfac47cc5c937fecbb6de9e235a5a3954c653f3b1fda6',
558000: '4f1d620babaa28a03d02c2ba8b7d1526ae47d45bac1220a1a794cf4a489536be',
559000: '3a0f1617c6056a27b1da33f0e4ca80b5b6648341551c19622f7f4b4f016ce8ce',
560000: '54877adf522c9abaec851a358ec6fba9957227e0e306d3a425167685380cf6ff',
561000: '55db8f8ad8869221ee891244767a7698d48ec61264729cae19008779dadba422',
562000: '4ac9c41b29253ed03c45222412c53ddde7b9da63441d06ea91b448d89dc220bb',
563000: 'cce11479fe1fcda6af734a03be8ea044df7e2726df67927f5bc76c33728133a2',
564000: 'e2e5c52e028ab1be7f5d182a217e16694ffd370930cdafa9443b627e8da256c5',
565000: '81488a98ee93b0cf5704cb1f46383456bf5238fdae7bcde4d0826276b5a48235',
566000: 'c9afdc804f7b0df600c51b9d596d6107a0cb71a4f8ed3ea764585ffc5f70644d',
567000: 'efd3ac4baf1c6e1fb2f8bac40f1416f189c9eb1135330ae4c25f296f012907fa',
568000: '225c3db3b83b78fbf9459c7e1e53ae35b01752608d329eefa99247d8a2b9b83e',
569000: 'ac98f3fb9084261c19a63fc8f10baeb5b03aaf9ca432d7f763a3a6706f81eefc',
570000: '4ee2eae15c9019466bb50d103947bdf60a5397f8fff56154ef782fd1b8579dd9',
571000: 'a89f29bfc76988560ccd59a61f8a323fde7c4020abafd577f88d1b0b4c289260',
572000: 'eb8380e096c79806443535f2f3b007378e43d2cf3ff56f3b7e6a1b2639ff3da2',
573000: '3182ec1555aae0df8d76dd5f37bebac6936e0e0c5609263263eef2e6e87863ef',
574000: 'fc45bbadf22417d49e7df89f02df358b85a4cc07adb4010d3bc0334f397b01d6',
575000: '6faadf2b0c3a98952bb047a58b83466a223dd323a6e32d68be5e8d064482fa84',
576000: '68465ee1d64b65ca71bbeeb611a8b4657d3ce765857d049536047e2ea0490044',
577000: 'a019fabcf35e7d4fe15e4e3a499dea2bedc61e4c9cea2803731995def15a9794',
578000: 'cc2fb6111fa1a338eeade31574b77a1012ec5948f88e7e5276f7ba4443df3db0',
579000: 'c5e7c605c03e56f221397f55bcd164dff21856f7fa48e745e61f2d05d7de5f54',
580000: '110fe7150988c930f9c324e00e108050b8be5ca2eaf200f54477073c72feef2a',
581000: '0c3adb09dca9a9924fcfe30f16bc70734ec62fd4a306ce65a38b74c5d817ee40',
582000: 'fc47433896c47765651d8cb31c88cbf80d0887bf071eaab9d2963af9f8bdfb56',
583000: 'c7c518f8634bc7c5cc536c9b77fe119cd6cdce4fafb25d4c0aba53c5bd970e35',
584000: '43b2357dd053608b30488aca98f5d9c151a68730fb8d008f1bd1488bfab0c391',
585000: 'cdfdb4ebd1c5fc3689b4f013254c832da7bfb12f3440fd327c6b065afd093dce',
586000: '48979b814cb6964b20ad8b322c4936c7b241c24fe6d610181c43d1cd003e5d90',
587000: 'c262a418368aab465facb8f1a82a392eb74e3fd56760779033a97a3f930630ff',
588000: 'b20dc4e0c17a223408a51b958b6b481ddeadad513e046c86c7418a8e96e35992',
589000: '0411c5ce49578218dc8e225c11276ed9a9802aac47d8f5837c4eadb3bd72e1dd',
590000: 'be8fe8b058722f4919af2834c076e2bfb602654a0343b57eb568d0ed4833fb39',
591000: 'affcbbd2a9fa43a64332412d649c2c0bd4b85e90e5e67576f09ee0c207a66cdc',
592000: '14a2d414220d240286ec6eddab0b0540ff4d1794b39f7913c00ab005e2ca2a3d',
593000: 'a1e2744404626b5e8e101402f50eb751c282ce692d58493ac6fa2efa43254526',
594000: '679da754e8f4cfd017ed5c8111361c0f9624104a87a50d077dd7bb5bb97c7ddb',
595000: '0c31eeb25d718f96c68de13704a0904c3a524b5936690ddab8b4b38fbabad5f5',
596000: 'f930b6516bb01d3365bd0e28aed56a580b4efbb45b0abed5ba941f7a56b545de',
597000: '56287fa7bcb53be966ba75c15b92ddf94a7bb51f4ef51037e56ccca936df4d41',
598000: '1a6303e68038921fe23644dd10f5cbad4af3648f7cbe8432860f439c84fb226b',
599000: 'add270bb4fcb8dc9c254c9130fa037dcedb8ca922c2e35ed403661a896addf5e',
600000: '786ea387bc7e1665655f1f683d56c67baeb9bc482bbe4401acd167793d34a508',
601000: 'a4c5f398f28cfcdc6e0b6fd467582ff5abfcb4f003cc59ae7703e9467cdd9ee4',
602000: '21b19a5a23344325581e7daf0a398b37cac5b4c335237bb8e6c14e270bdc1567',
603000: '000eef5b4300cfd804554e199ca6ffcfe73ec8446e82ad32abbc51f45594a368',
604000: 'fbf77437dfba9caa4d11b03a3673f2274dbd3d92e704bc5e8ed3f90d99d2c988',
605000: '0bdba251f44baa12e2b33b1ba89eb8a5d31ecaf6193b836e6356845149ea3f33',
606000: '2dcadcae1cdc68ab6404606937aa9981400e12ae77474462764b8077d391d2d4',
607000: 'eff9dfe7fb3e31536dc12c78603f023f8f1f59b2b44a59359e8bfe50b2e22299',
608000: '50fdd1223efaaa261f6fe592d7b9e1e32b5a50f78dfee7bc8af19250f8a82776',
609000: '5d88ff14c7c36602251061379179ed87c5f7d60d746d8d10fe678a0303ce6596',
610000: '5e56249989c623bd7f818ac1e64c5d1b1f3afec3678559d446e356ee5bb3394a',
611000: '5a9b15a38dd35b9ef0a028b12ab69b0993eb4bc5577d50c486fdfd032f243a14',
612000: '258213d8b7141418f22e303ec43005bae0508a3597fa99c4d59d656d7d768af2',
613000: '5ee8b46ebdec5576550540f7fe01c5a91c3ba7bfab26db605ade4e620430cc21',
614000: 'c41a592e2042d201bf028bbfef484d264a391db9dc9ef302708e7a39f723cf5c',
615000: '70fa6591c0b7bb63665f040b87750668be47058942c79c863bc00232f536d725',
616000: 'f4932c1c8fc84e3f4b97336424e27eeb09a65d7347097e983e1ab698a38dcc0a',
617000: 'd82bd9b58adf54a50bfd471c66144f2969ab443648fb7bab8299408afc021544',
618000: '72f7dfda9bd67b900106e632cf50204fe27cecdeea51d9b0cfde9ba6da8dcd5c',
619000: 'ac434b8d723ba498bee878cf589c7d61b74c7a00ea5b7abe70149ba878522332',
620000: '519930116d8a8f3b01ba443bbe4fe5346cebc783c632cdd5f62d305f61b6f446',
621000: 'fed977a0174f8cdc3e979e7435ffde2f7dbac2d4e1c63593117cfe81f320bfec',
622000: '1a6daef00425e6ad136d7bda36cad99baa762f6d788b92f630ed1af33995211e',
623000: 'b9f1e2836bd2acae3a9d21516052007005f3f67ae1fc133c1c4c336abe0dce51',
624000: '5d4a19f7e7f99b5c39bc844b0a90ebcbd97d88f817f8f2edafca78ae499c3167',
625000: 'c1afa80aebb856746da25503eb786f247ab098a94ddc58fc96ca46caee7019c3',
626000: '222c2ec50130a658ece226c420c76a8f7a15640033fd89ccaa29bd7c639cb94b',
627000: 'de9830e9cc7537c42133153cccf891dcd422203827b1377df748ec27dbfd04a9',
628000: '35d44ad84a3d9d548bbc8c91ca734979759e3a03e37153b650f794a0aed1b742',
629000: 'c119f3acbd761598673b48eaf0d223e1729b8c8aaef9a715f1bc3dde561a4568',
630000: 'ebfda8a47983ced56d40e95ce3be793305899cb92359ee2f9a83f7f6a59a4fe7',
631000: '1479c2e6217b97e98f44bbb315ca99d1ab02aad14f52d60f7e6670f5c6d0fbe4',
632000: '6ed7f9c90ea95825b1aae91644a68e36b6452e61f05a2821e94399823ad669e3',
633000: 'e00eb0bffcf784f9f8fef21b92b9cd98e44db5d790b6eab309d0b11adf910820',
634000: '3c71674dab2c2644d910acff4336fb02c2dd8281e30e5c97b9378d50f3d6eac0',
635000: 'ee3fbedd791f31de2f820cf37cf5a7bcf1fddff354c1ab5736384f41846894f5',
636000: '551b7598f5576c577b48ac7530e56422a5fd31a3860b37994f8489308830e9f6',
637000: '531b1a767734f1750077ec9d1ab57fd4430e0154deca2fdb9dc5cfcbe980726c',
638000: '3ae3a009b3822211b3bbe9745f006eaf3aeae365e81890205e0223925461af81',
639000: '2aa1de8c824fcc1a8f34269010405a0a0fb921fa0f8b998010e4076fd722948c',
640000: 'badc624e2ffabc438b7b2c9894e2134e36c2e80af700cebdf0980169c1cfe49a',
641000: '0b550e8f9b66ac3a3d0e5a911819643600dec121959fe2b5128cc6c68503e655',
642000: 'bfccbe9a0a127d4cf606adb5eb42f3f0ee2ebc780b719d0ec612e590aa7ec47f',
643000: '4d44b13390410caf86d145103471469d0b10450166f5b96b8886a440aab5b1cf',
644000: '2670950e311d8230e8d3447e97652cb5592248b59968bedb6a98a98c12b8d81d',
645000: '6703a25f54614ff79c954438ab9fae5baff32c81fa47306b337f0f87bb11f949',
646000: 'c93876f02c788041b400aa42149c176e624eef01bc66b49adf8fcbe255dd1833',
647000: '276521b16f6a98f5399a9754621fbd7999b6b94e3cff17cd88d4a1176f431353',
648000: '65b6238c589baec511c240cc87cb6ce758ad6cf119f465838a976a853c7f5be6',
649000: 'c94f6f7fa7a8afdbecc7c291eab2a3cf218ff1af5117ed1f3f0e60214ec7041d',
650000: 'b823008cdb0176fa0522c26b35962b4325fd526217f7eb19464cffcb3c24b4d9',
651000: '77024b0bfd3cc7c8d52eff6db4dcec041f8675e6f3c413b7d581bedb1b53bb11',
652000: '22a677eef4cfcee12756492eab7e184bda054481d4fc19cc46bc7147c097d118',
653000: '66779d92b9fc28fddc46ed14b79b26b3ad7672f0dbc7cf4462d5a49769a89470',
654000: '097eb77aede84c35822b3700673efd0129de247cb8ba5ef6b8e6dd7e85bb908b',
655000: '83b5b4d4aeef3782c82b9a2fd53937e4523ea15867780e4264688e860fd99593',
656000: '99646218016eda7976b45fbb6e891fc4bc130bfb3b5e6dcac8b11e0a4b2f59bd',
657000: '9ad1995af3f7f6839a11bf5fcd856f471b200911e1e5a647390a7aa26abc6825',
658000: 'fd5a4462737448dcfa9d81802d16f4621ea0b177f538144d836829a536cdd451',
659000: '7f6baf86cafbdc820437d63260fcc434694dcf0ab8000307f134fc9c50f437b6',
660000: '54a5f2c5a1f1534f23f54c5ac6e3f794ebb62f298dc9ed1aba4112e10cc778cb',
661000: '91b46c2d247a43ea916adad7edcc37e18d149fe3ab970eaf39cd060a9c856ce2',
662000: 'b49e0df59bfc9dcd32cd1a4b903a12accda70054fbd5f4bc480de0367f254c21',
663000: '29d2d3e06f744ec2225659199bd43e446308fe1e1ff16c26e0e89caa468450f1',
664000: '50b0665a8c7a9fd1ffb165fbb0148115f83fb7a5f6e9b8ad16c9dff9175d11c8',
665000: '35754b82653110c6cb0f9a0e7011ed1ff8b360301e4280e6aa2f6faa5950a369',
666000: 'fbc64efa658b00bbb70f5db8265e99465a07627d663c9a716ec8b42bed82338e',
667000: 'a046d6957f24ec9f66c73612d3eb483c65e4216fc2761260d9818486cbebe56e',
668000: '5d78ef63ea8ce0f66032368effcd56fa443b0f5c488ef8c243d3622113087fbe',
669000: '0b1b551500e2c8617150a18deec1e3b9594dd98fc9b97e20dacf2f059f966692',
670000: '5fa21e8edbdbb2ccccc2e3aca8e15b0965cd307ea54b6621760e654b53488d33',
671000: '90affcb451a786aa42ff93dd1f39961a15cc8c332613f8dc591d9f9c34d359e6',
672000: '1a8996b27fb79d43627caf0166f5c8ec72e1e0d09e6c0e17a0d9418d37719afc',
673000: '491ec5299fa141a82b438ef5d2429eb5560a0f04412878165c86f9fbb94198af',
674000: 'dd680d4c1101a98b6465ac85a63ef12f3ebb1eb88277d37aa341a5b387e2f1b6',
675000: 'af8acc98b0fd9ff347acb01801f246c63c6698aff0a0cd69b0f91e17da3742df',
676000: 'af24a5759888314e5ef719ee23b9f948b3962ac0b949231a540d45eedaa614f8',
677000: '7ec568d4e5ceff99c0e9cefa6407784aa0246845420812bcfd8a5d9c03cb01a8',
678000: '845df5d7e1d6f22242ffc4c976937da67ce0880a94c5bf189195d5728f208976',
679000: '41f895ce11bd91e09ea8b16102c8f2192cdd2c754e2cb04d33edea688fa7c3b8',
680000: 'cbf1e3a313abbe05d90ff1de6d861cf58da50b7f8f7ba43a593a3b6192d65f8a',
681000: '1ac184be2b5edfdcc0a73a6a93d3f59a5197bd5bba2ddb1ce737251526998bc9',
682000: '0e793523ba0a679dce1e9cf1d1189c5311e27ae8d88c35bce36ddb3c16dc34c4',
683000: '08e07172a6b6c3d70c4379b206ed796ea8d916d314d0b6f02539becbb90077d5',
684000: '0b67663fc37940b4d40ba88b8b7610776e35f070e4995c5ab09ccd4b86dc1143',
685000: '8ae92af68bcc012326889b5ef89b899aa38f65dfd3d9f7dd5b29dba0c5fbebbd',
686000: '5223ea0cffd3c59fa6597208425ab55b3a76366fbd27d22508cd208ebf2a2eec',
687000: 'e160e55af7fd110905364f980543fca62a123c84c81e9fd38a0aeeebc30a501a',
688000: 'fcf3b5e0afae8d665cd6f63dacd2a861aecac20f8b73a682f79cad9523c5c4f3',
689000: '772ffab484f07ad91ed45cf6f569eff7440f4243e16e07698e4b7bd4a109e6e3',
690000: '5b2a4fc69617ccf1787ee40a0f6d7e0af783bcd856c2c8e0ab747a91a7c68d19',
691000: 'e4ba352a759e2425d508a4d5b58595e6ff5eb912d8bfece5a0bc646f61e77084',
692000: 'fca2c5b6a721db85278ab55b5e2e39a445e5b1ee5ba69a17044623a6b945d0b2',
693000: '99f4365eee70f86499ec26c373922d389cbc5e2a198e96af5d5823ad241748a1',
694000: '0e19e76398e65cb01517dcd4ee702c9c04c0fd53cf9468ee539094aa6248e1c8',
695000: '5addd92022007d81ced43595458e2eee2903227063af8e9edd75cccbe559930d',
696000: 'ba25dfe3467cc9999eb8593d955032475b777cefc8006851f692266ebb83d140',
697000: 'e10c1c734038198a99bf970ae89e3a56b2058612e35eac8141d48e147d2d47e0',
698000: '472c4de8e57737ca48c3bdbf3c35c37f24a179f5dfa48af89efda3c3d33c131f',
699000: 'b359f2e61a3ca3cf8b613045617e38b06767ccff72129f13971faf26a4d08234',
700000: 'd0555d53358978ae0abcb09f911c1b3e7b5a282b4ff6d455ca9ad04299666dc5',
701000: '5c763b8b4329809553fb58ab279ebbc6639695c45760f0c181168d41da95e6d7',
702000: '8a6abdca484fbdac5fd6fb279a435377d964d43b62393b0d5b0cd3baaab1d2c3',
703000: '50cf159af75b28c7a95ca990e480ebcd534c4601b03d7c0a979f26f4e33cecd7',
704000: '0daf7f9c7eca5c6d91de9ebadb3ef1287c290700bab0aa9f6a2ac4bf42a8a98c',
705000: 'a359e7584112eadd7fad5d961f1dee180481852d7706701c2211a98a009c128d',
706000: '139ab9432b0c7f62739f818b114d57a59cbf20584230f1198c35f1da62c2ed62',
707000: '43e1f4c83d26d419a35adc4dc5bd85c6253ec0faf7e70196bc5d5fac18c51746',
708000: '1a05dba8d5d2ef4984c79cc939148ad71535043640fd5644141cdbc512623514',
709000: 'fe7a8bd03f742de5227e5d0c8dcfd0b5cc13582703b74b1898c506fc6ffdab04',
710000: '26217b4ecadd39e9a14ffb6cbcaab6ee7bf0954efcbbbec4de9bddb461819084',
711000: '94aee5cc00e862e4952e573908449f4a48b630285f2f2fb20765d01dfcb68ac6',
712000: '21fb256c0f5133634b94f6522cb8d2b0a9b982f1672ba561033be689410e7860',
713000: '8d7d7ee4cb0598fecd85c8004b59074bedc3219f2376966e01a7faa92377cdcf',
714000: '56a033531fc8dbb7ffe07d7365cce20281e522c3ff9fe7a19d0188af351a5799',
715000: '4a6671846cccdc26d6fd1d77db50772315f932c24dc706004e11de68ad1ac387',
716000: '4222447b25305bc063b668c0e010b16f9c0802fff9371e9d096ba2174926f073',
717000: 'c231fcf5238a34cd15ac735d27b3dd0ee025714e48c408a798f0ba74be0aef77',
718000: '0f96aaa30d20fc572d9db26871143a657b706c35a9668d31e3b64280a049787a',
719000: '8557c088c4fc4745674c795d4a58aeb3df0dc91bb5ff93cc6268c4a88c64db5a',
720000: 'd699e06ee7835c9d687853187125f27145ec91daf394fdee37218b8c34fee9f4',
721000: 'c675e891a36425627e20de36f1fbdd5baba7114661f1c45f81f66a3fc55da902',
722000: '0c6fedfb6d6c1a77254904fdd2400dedce7d45bdc2271beb77e2087a0ba30d1a',
723000: 'a442c320886beccb3d7ea13276dbef8e98e1a47686cba2cdbeb6a2d2883af928',
724000: '1592e2d6ac3be7535b44db6cc99080d00b19c6663f52eef3c28eae3dac27ba49',
725000: '45b2a800f17b8571172a2658577dbe95b91ae88e611a91ac0c92609b3600f693',
726000: 'ed6257a6567665747aa354e93ab7d3e6539d6dd41fced8a2f62cf848e2b30ce0',
727000: '4b1a577c6c2358b0344bb1befd9b8e5572b787ec2bdc0bdcd4a150f26b2e2ab7',
728000: '448765fbdf6261c376120ff9401db8a8841fcbed466365f982d3bc53775b93ca',
729000: '99c2acea0af193d2e10498acd1c6d162d2a804a69157af46817b5ece5ea86491',
730000: '94cec967e44f850f512d4240cb8a52ffaf953d0364b0a1dd7604b4a01406e669',
731000: '6e63f5019439bc7e27a17a189baad0da8f5724883af3ca35efa0d4e5aaa75b97',
732000: '53e1b373805f3236c7725415e872d5635b8679894c4fb630c62b6b75b4ec9d9c',
733000: '43e9ab6cf54fde5dcdc4c473af26b256435f4af4254d96fa728f2af9b078d630',
734000: 'a3ef7f9257d591c7dcc0f82346cb162a768ee5fe1228353ec485e69be1bf585f',
}
# File: tf2_gnn/models/horn_grap_tasks.py
# Repo: Sherrykexin/tf2-gnn (commit 752cfa8c368b08837b1a122338b43381dbddf2df)
# License: MIT
import numpy as np
import tensorflow as tf
from tf2_gnn.data import GraphDataset
from tf2_gnn.models import GraphTaskModel
from tf2_gnn import GNNInput, GNN
class InvariantArgumentSelectionTask(GraphTaskModel):
def __init__(self, params: Dict[str, Any], dataset: GraphDataset, name: str = None):
super().__init__(params, dataset=dataset, name=name)
self._params = params
self._num_edge_types = dataset.num_edge_types
self._embedding_layer = tf.keras.layers.Embedding(
input_dim=params["node_vocab_size"], #size of the vocabulary
output_dim=params["node_label_embedding_size"]
)
self._gnn = GNN(params) #RGCN,RGIN,RGAT,GGNN
self._argument_repr_to_regression_layer = tf.keras.layers.Dense(
units=self._params["regression_hidden_layer_size"][0], activation=tf.nn.relu, use_bias=True) #decide layer output shape
self._regression_layer_1 = tf.keras.layers.Dense(
units=self._params["regression_hidden_layer_size"][1], activation=tf.nn.relu, use_bias=True)
self._argument_output_layer = tf.keras.layers.Dense(
units=1, use_bias=True)#we didn't normalize label so this should not be sigmoid
self._node_to_graph_aggregation = None
def build(self, input_shapes):
# print("--build--")
# build node embedding layer
with tf.name_scope("Node_embedding_layer"):
self._embedding_layer.build(tf.TensorShape((None,)))
# build gnn layers
self._gnn.build(
GNNInput(
node_features=tf.TensorShape((None, self._params["node_label_embedding_size"])),
adjacency_lists=tuple(
input_shapes[f"adjacency_list_{edge_type_idx}"]
for edge_type_idx in range(self._num_edge_types)
),
node_to_graph_map=tf.TensorShape((None,)),
num_graphs=tf.TensorShape(()),
)
)
#build task-specific layer
with tf.name_scope("Argument_repr_to_regression_layer"):
self._argument_repr_to_regression_layer.build(tf.TensorShape((None, self._params["hidden_dim"]))) #decide layer input shape
with tf.name_scope("regression_layer_1"):
self._regression_layer_1.build(tf.TensorShape((None, self._params["regression_hidden_layer_size"][0])))
with tf.name_scope("Argument_regression_layer"):
self._argument_output_layer.build(
tf.TensorShape((None, self._params["regression_hidden_layer_size"][1])) #decide layer input shape
)
super().build_horn_graph_gnn()  # bypass GraphTaskModel's build, which would build another GNN layer
def call(self, inputs, training: bool = False):
node_labels_embedded = self._embedding_layer(inputs["node_features"], training=training)
adjacency_lists: Tuple[tf.Tensor, ...] = tuple(
inputs[f"adjacency_list_{edge_type_idx}"]
for edge_type_idx in range(self._num_edge_types)
)
# before feeding into the GNN
# call gnn and get graph representation
gnn_input = GNNInput(
node_features=node_labels_embedded,
num_graphs=inputs['num_graphs_in_batch'],
node_to_graph_map=inputs['node_to_graph_map'],
adjacency_lists=adjacency_lists
)
final_node_representations = self._gnn(gnn_input, training=training)
argument_representations = tf.gather(params=final_node_representations * 1, indices=inputs["node_argument"])  # "* 1" yields a fresh tensor before the gather
return self.compute_task_output(inputs, argument_representations, training)
def compute_task_output(
self,
batch_features: Dict[str, tf.Tensor],
final_argument_representations: tf.Tensor,
training: bool,
) -> Any:
#call task specific layers
argument_regression_hidden_layer_output=self._argument_repr_to_regression_layer(final_argument_representations)
argument_regression_1=self._regression_layer_1(argument_regression_hidden_layer_output)
predicted_argument_score = self._argument_output_layer(
argument_regression_1
) # Shape [argument number, 1]
return tf.squeeze(predicted_argument_score, axis=-1) #Shape [argument number,]
def compute_task_metrics(  # TODO: change to hinge loss or lasso
self,
batch_features: Dict[str, tf.Tensor],
task_output: Any,
batch_labels: Dict[str, tf.Tensor],
) -> Dict[str, tf.Tensor]:
mse = tf.losses.mean_squared_error(batch_labels["node_labels"], task_output)
hinge_loss = tf.losses.hinge(batch_labels["node_labels"], task_output)  # computed but unused; kept for the TODO above
mae = tf.losses.mean_absolute_error(batch_labels["node_labels"], task_output)
num_graphs = tf.cast(batch_features["num_graphs_in_batch"], tf.float32)
return {
"loss": mse,
"batch_squared_error": mse * num_graphs,
"batch_absolute_error": mae * num_graphs,
"num_graphs": num_graphs,
}
def compute_epoch_metrics(self, task_results: List[Any]) -> Tuple[float, str]:
total_num_graphs = sum(
batch_task_result["num_graphs"] for batch_task_result in task_results
)
total_absolute_error = sum(
batch_task_result["batch_absolute_error"] for batch_task_result in task_results
)
epoch_mae = total_absolute_error / total_num_graphs
return epoch_mae.numpy(), f"Mean Absolute Error = {epoch_mae.numpy():.3f}"
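Both task classes pick out per-argument rows from the stacked node representations with tf.gather. A minimal NumPy sketch of that row selection (all shapes, values, and index names here are illustrative, not from the real model):

```python
import numpy as np

# Stacked node representations: one row per node (4 nodes, hidden dim 3)
node_reprs = np.arange(12, dtype=np.float32).reshape(4, 3)
# Indices of the nodes that are predicate arguments (hypothetical)
argument_indices = np.array([1, 3])

# Equivalent of tf.gather(params=node_reprs, indices=argument_indices)
argument_reprs = np.take(node_reprs, argument_indices, axis=0)
assert argument_reprs.shape == (2, 3)
assert (argument_reprs[0] == node_reprs[1]).all()
```

np.take along axis 0 mirrors tf.gather's default behavior: it keeps one row per index, in index order, which is why the downstream regression layers see shape [argument number, hidden_dim].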
class InvariantNodeIdentifyTask(GraphTaskModel):
def __init__(self, params: Dict[str, Any], dataset: GraphDataset, name: str = None):
super().__init__(params, dataset=dataset, name=name)
self._params = params
self._num_edge_types = dataset.num_edge_types
self._embedding_layer = tf.keras.layers.Embedding(
input_dim=params["node_vocab_size"], #size of the vocabulary
output_dim=params["node_label_embedding_size"]
)
self._gnn = GNN(params) #RGCN,RGIN,RGAT,GGNN
self._argument_repr_to_regression_layer = tf.keras.layers.Dense(
units=self._params["regression_hidden_layer_size"][0], activation=tf.nn.relu, use_bias=True) #decide layer output shape
self._regression_layer_1 = tf.keras.layers.Dense(
units=self._params["regression_hidden_layer_size"][1], activation=tf.nn.relu, use_bias=True)
self._argument_output_layer = tf.keras.layers.Dense(activation=tf.nn.sigmoid,
units=1, use_bias=True)
self._node_to_graph_aggregation = None
def build(self, input_shapes):
# print("--build--")
# build node embedding layer
with tf.name_scope("Node_embedding_layer"):
self._embedding_layer.build(tf.TensorShape((None,)))
# build gnn layers
self._gnn.build(
GNNInput(
node_features=tf.TensorShape((None, self._params["node_label_embedding_size"])),
adjacency_lists=tuple(
input_shapes[f"adjacency_list_{edge_type_idx}"]
for edge_type_idx in range(self._num_edge_types)
),
node_to_graph_map=tf.TensorShape((None,)),
num_graphs=tf.TensorShape(()),
)
)
#build task-specific layer
with tf.name_scope("Argument_repr_to_regression_layer"):
self._argument_repr_to_regression_layer.build(tf.TensorShape((None, self._params["hidden_dim"]))) #decide layer input shape
with tf.name_scope("regression_layer_1"):
self._regression_layer_1.build(tf.TensorShape((None, self._params["regression_hidden_layer_size"][0])))
with tf.name_scope("Argument_regression_layer"):
self._argument_output_layer.build(
tf.TensorShape((None, self._params["regression_hidden_layer_size"][1])))#decide layer input shape
super().build_horn_graph_gnn()  # bypass GraphTaskModel's build, which would build another GNN layer
def call(self, inputs, training: bool = False):
node_labels_embedded = self._embedding_layer(inputs["node_features"], training=training)
adjacency_lists: Tuple[tf.Tensor, ...] = tuple(
inputs[f"adjacency_list_{edge_type_idx}"]
for edge_type_idx in range(self._num_edge_types)
)
# call gnn and get graph representation
gnn_input = GNNInput(
node_features=node_labels_embedded,
num_graphs=inputs['num_graphs_in_batch'],
node_to_graph_map=inputs['node_to_graph_map'],
adjacency_lists=adjacency_lists
)
final_node_representations = self._gnn(gnn_input, training=training)
if self._params["label_type"] in ("argument_identify", "control_location_identify"):
return self.compute_task_output(inputs, final_node_representations, training)
elif self._params["label_type"]=="argument_identify_no_batchs":
current_node_representations = tf.gather(params=final_node_representations * 1,
indices=inputs["current_node_index"])
return self.compute_task_output(inputs, current_node_representations, training)
def compute_task_output(
self,
batch_features: Dict[str, tf.Tensor],
final_argument_representations: tf.Tensor,
training: bool,
) -> Any:
#call task specific layers
argument_regression_hidden_layer_output = self._argument_repr_to_regression_layer(
final_argument_representations)
argument_regression_1 = self._regression_layer_1(argument_regression_hidden_layer_output)
predicted_argument_score = self._argument_output_layer(
argument_regression_1
) # Shape [argument number, 1]
return tf.squeeze(predicted_argument_score, axis=-1)
def compute_task_metrics(
self,
batch_features: Dict[str, tf.Tensor],
task_output: Any,
batch_labels: Dict[str, tf.Tensor],
) -> Dict[str, tf.Tensor]:
ce = tf.reduce_mean(
tf.keras.losses.binary_crossentropy(
y_true=batch_labels["node_labels"], y_pred=task_output, from_logits=False
)
)
num_correct = tf.reduce_sum(
tf.cast(
tf.math.equal(batch_labels["node_labels"], tf.math.round(task_output)), tf.int32
)
)
num_nodes = tf.cast(len(batch_labels["node_labels"]), tf.float32)
num_graphs = tf.cast(batch_features["num_graphs_in_batch"], tf.float32)
return {
"loss": ce,
"batch_acc": tf.cast(num_correct, tf.float32) / num_nodes,
"num_correct": num_correct,
"num_graphs": num_graphs,
"num_nodes":num_nodes
}
def compute_epoch_metrics(self, task_results: List[Any]) -> Tuple[float, str]:
# builtin sum: np.sum over a generator is deprecated, and the per-batch
# values are scalar tensors anyway
total_num_graphs = sum(
batch_task_result["num_graphs"] for batch_task_result in task_results
)
total_num_nodes = sum(
batch_task_result["num_nodes"] for batch_task_result in task_results
)
total_num_correct = sum(
batch_task_result["num_correct"] for batch_task_result in task_results
)
epoch_acc = tf.cast(total_num_correct, tf.float32) / total_num_nodes
return -epoch_acc.numpy(), f"Accuracy = {epoch_acc.numpy():.3f}"  # negated: the epoch metric is minimized, so higher accuracy gives a lower value
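compute_epoch_metrics only aggregates the per-batch counters returned by compute_task_metrics; a pure-Python sketch of the same bookkeeping (the batch values are made up):

```python
# Per-batch results in the shape produced by compute_task_metrics (toy values)
task_results = [
    {"num_correct": 8, "num_nodes": 10.0},
    {"num_correct": 5, "num_nodes": 10.0},
]

total_num_correct = sum(r["num_correct"] for r in task_results)
total_num_nodes = sum(r["num_nodes"] for r in task_results)
epoch_acc = total_num_correct / total_num_nodes

# Negated on return so that a lower value means a better epoch
assert epoch_acc == 0.65
assert -epoch_acc == -0.65
```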
| 47.563707 | 135 | 0.663771 | 1,500 | 12,319 | 5.081333 | 0.124667 | 0.027158 | 0.033062 | 0.02519 | 0.820257 | 0.79756 | 0.772238 | 0.762792 | 0.753346 | 0.743374 | 0 | 0.004784 | 0.236464 | 12,319 | 258 | 136 | 47.748062 | 0.80555 | 0.108369 | 0 | 0.634615 | 0 | 0 | 0.114534 | 0.060146 | 0 | 0 | 0 | 0.003876 | 0 | 1 | 0.057692 | false | 0 | 0.028846 | 0 | 0.144231 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1cd110f97e82455a7f224776ffac09d7ee0f7063 | 47 | py | Python | robolearn_envs/pybullet/common/__init__.py | domingoesteban/robolearn_envs | 1e10f315abcbb034e613b3b5a7a48662a839c81b | [
"BSD-3-Clause"
] | 2 | 2020-08-20T15:46:55.000Z | 2022-02-16T13:45:59.000Z | robolearn_envs/pybullet/common/__init__.py | domingoesteban/robolearn_envs | 1e10f315abcbb034e613b3b5a7a48662a839c81b | [
"BSD-3-Clause"
] | null | null | null | robolearn_envs/pybullet/common/__init__.py | domingoesteban/robolearn_envs | 1e10f315abcbb034e613b3b5a7a48662a839c81b | [
"BSD-3-Clause"
] | 1 | 2020-10-03T11:28:15.000Z | 2020-10-03T11:28:15.000Z | from .objects import *
from .surfaces import *
| 15.666667 | 23 | 0.744681 | 6 | 47 | 5.833333 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.170213 | 47 | 2 | 24 | 23.5 | 0.897436 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1ce7fec8e1e3d4307f091eaa272d9f096085b27a | 6,542 | py | Python | single_channel/custom_filters.py | TeunKrikke/SourceSepDL | 1df21ee8636fda4f4b8a24f98a66dbeab9e2f603 | [
"MIT"
] | null | null | null | single_channel/custom_filters.py | TeunKrikke/SourceSepDL | 1df21ee8636fda4f4b8a24f98a66dbeab9e2f603 | [
"MIT"
] | null | null | null | single_channel/custom_filters.py | TeunKrikke/SourceSepDL | 1df21ee8636fda4f4b8a24f98a66dbeab9e2f603 | [
"MIT"
] | null | null | null | from keras.layers import Dense, Lambda, multiply
from keras.layers.wrappers import TimeDistributed
from keras import backend as K
def separation_layers(features, input_layer, bottom_layer):
"""
Separation layer/filter using a Wiener filter on top of
3 sigmoid-activated dense layers
Keyword arguments:
features -- number of FT units
input_layer -- the original mixture
bottom_layer -- the last layer of the network
returns:
speaker_1 -- prediction of speaker 1 signal
speaker_2 -- prediction of speaker 2 signal
"""
increase = TimeDistributed(Dense(2*features, activation="sigmoid"), name='mix_increase')(bottom_layer)
tdd1 = TimeDistributed(Dense(features, activation="sigmoid"), name='speaker_1_dense')(increase)
tdd2 = TimeDistributed(Dense(features, activation="sigmoid"), name='speaker_2_dense')(increase)
speaker_1 = Lambda(function=lambda x: mask(tdd1, tdd2, input_layer),
name='speaker_1')(tdd1)
speaker_2 = Lambda(function=lambda x: mask(tdd2, tdd1, input_layer),
name='speaker_2')(tdd2)
return speaker_1, speaker_2
def separation_layers_no(features, input_layer, bottom_layer):
"""
Separation layer with no masking filter, just 5
sigmoid-activated dense layers
Keyword arguments:
features -- number of FT units
input_layer -- the original mixture
bottom_layer -- the last layer of the network
returns:
speaker_1 -- prediction of speaker 1 signal
speaker_2 -- prediction of speaker 2 signal
"""
increase = TimeDistributed(Dense(2*features, activation="sigmoid"), name='mix_increase')(bottom_layer)
tdd1 = TimeDistributed(Dense(200, activation="sigmoid"), name='speaker_1_dense')(increase)
tdd2 = TimeDistributed(Dense(200, activation="sigmoid"), name='speaker_2_dense')(increase)
speaker_1 = TimeDistributed(Dense(features, activation="sigmoid"),
name='speaker_1')(tdd1)
speaker_2 = TimeDistributed(Dense(features, activation="sigmoid"),
name='speaker_2')(tdd2)
return speaker_1, speaker_2
def separation_layers_ideal(features, input_layer, bottom_layer):
"""
Separation layer/filter using an ideal filter on top of
3 sigmoid-activated dense layers
Keyword arguments:
features -- number of FT units
input_layer -- the original mixture
bottom_layer -- the last layer of the network
returns:
speaker_1 -- prediction of speaker 1 signal
speaker_2 -- prediction of speaker 2 signal
"""
increase = TimeDistributed(Dense(2*features, activation="sigmoid"), name='mix_increase')(bottom_layer)
tdd1 = TimeDistributed(Dense(features, activation="sigmoid"), name='speaker_1_dense')(increase)
tdd2 = TimeDistributed(Dense(features, activation="sigmoid"), name='speaker_2_dense')(increase)
speaker_1 = Lambda(function=lambda x: ideal_mask(tdd1, tdd2, input_layer),
name='speaker_1')(tdd1)
speaker_2 = Lambda(function=lambda x: ideal_mask(tdd2, tdd1, input_layer),
name='speaker_2')(tdd2)
return speaker_1, speaker_2
def separation_layers_tanh(features, input_layer, bottom_layer):
"""
Separation layer/filter using a Wiener filter on top of
3 tanh-activated dense layers
Keyword arguments:
features -- number of FT units
input_layer -- the original mixture
bottom_layer -- the last layer of the network
returns:
speaker_1 -- prediction of speaker 1 signal
speaker_2 -- prediction of speaker 2 signal
"""
increase = TimeDistributed(Dense(2*features, activation="tanh"), name='mix_increase')(bottom_layer)
tdd1 = TimeDistributed(Dense(features, activation="tanh"), name='speaker_1_dense')(increase)
tdd2 = TimeDistributed(Dense(features, activation="tanh"), name='speaker_2_dense')(increase)
speaker_1 = Lambda(function=lambda x: mask(tdd1, tdd2, input_layer),
name='speaker_1')(tdd1)
speaker_2 = Lambda(function=lambda x: mask(tdd2, tdd1, input_layer),
name='speaker_2')(tdd2)
return speaker_1, speaker_2
def separation_layers_linear(features, input_layer, bottom_layer):
"""
Separation layer/filter using a Wiener filter on top of
3 linearly activated dense layers
Keyword arguments:
features -- number of FT units
input_layer -- the original mixture
bottom_layer -- the last layer of the network
returns:
speaker_1 -- prediction of speaker 1 signal
speaker_2 -- prediction of speaker 2 signal
"""
increase = TimeDistributed(Dense(2*features, activation="linear"), name='mix_increase')(bottom_layer)
tdd1 = TimeDistributed(Dense(features, activation="linear"), name='speaker_1_dense')(increase)
tdd2 = TimeDistributed(Dense(features, activation="linear"), name='speaker_2_dense')(increase)
speaker_1 = Lambda(function=lambda x: mask(tdd1, tdd2, input_layer),
name='speaker_1')(tdd1)
speaker_2 = Lambda(function=lambda x: mask(tdd2, tdd1, input_layer),
name='speaker_2')(tdd2)
return speaker_1, speaker_2
def mask(predicted_1, predicted_2, mix):
"""
Masking using a wiener filter
Keyword arguments:
predicted_1 -- filter prediction of speaker 1 as learned by the network
predicted_2 -- filter prediction of speaker 2 as learned by the network
mix -- the original mixture
returns:
signal of predicted 1
"""
the_mask = K.pow(K.abs(predicted_1), 2) / (K.pow(K.abs(predicted_1), 2) +
K.pow(K.abs(predicted_2), 2))
# return merge([the_mask,mix[0,0]], mode= "mul")
return multiply([the_mask, mix])
def ideal_mask(predicted_1, predicted_2, mix):
"""
Masking using an ideal (binary) filter
Keyword arguments:
predicted_1 -- filter prediction of speaker 1 as learned by the network
predicted_2 -- filter prediction of speaker 2 as learned by the network
mix -- the original mixture
returns:
signal of predicted 1
"""
# true ideal binary mask, kept for reference:
# mags = np.dstack((predicted_1, predicted_2))
# mask = mags >= np.max(mags, axis=2, keepdims=True)
# return mix * mask[:,:,0]
# NOTE: the active version below just scales the mixture by predicted_1,
# it does not apply a binary mask
return multiply([predicted_1, mix])
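The Wiener mask in mask() weights the mixture by |p1|^2 / (|p1|^2 + |p2|^2), so the two speaker estimates always sum back to the mixture. A NumPy sketch with made-up magnitudes:

```python
import numpy as np

p1 = np.array([1.0, 2.0, 0.5])   # predicted magnitude, speaker 1
p2 = np.array([2.0, 1.0, 0.5])   # predicted magnitude, speaker 2
mix = np.array([3.0, 3.0, 1.0])  # observed mixture

# Wiener masks for each speaker
m1 = np.abs(p1) ** 2 / (np.abs(p1) ** 2 + np.abs(p2) ** 2)
m2 = np.abs(p2) ** 2 / (np.abs(p1) ** 2 + np.abs(p2) ** 2)

s1, s2 = m1 * mix, m2 * mix
# The two masks are complementary (m1 + m2 == 1), so the estimates
# reconstruct the mixture exactly
assert np.allclose(s1 + s2, mix)
```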
| 39.173653 | 106 | 0.661724 | 799 | 6,542 | 5.259074 | 0.097622 | 0.060923 | 0.063303 | 0.090433 | 0.909329 | 0.909329 | 0.904093 | 0.897906 | 0.861733 | 0.817706 | 0 | 0.028064 | 0.242892 | 6,542 | 166 | 107 | 39.409639 | 0.820311 | 0.342862 | 0 | 0.518519 | 0 | 0 | 0.108649 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.12963 | false | 0 | 0.055556 | 0 | 0.314815 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1cee9c47377c4d0a756f877df9d6db8c6efab98f | 25 | py | Python | arctia/__init__.py | unternehmen/arctia | 5c0a9b1933199c09dc7312730ed32c3894bc33ac | [
"Unlicense"
] | 1 | 2018-01-12T15:11:03.000Z | 2018-01-12T15:11:03.000Z | arctia/__init__.py | unternehmen/arctia | 5c0a9b1933199c09dc7312730ed32c3894bc33ac | [
"Unlicense"
] | 4 | 2018-02-17T00:20:09.000Z | 2018-06-01T19:49:08.000Z | arctia/__init__.py | unternehmen/arctia | 5c0a9b1933199c09dc7312730ed32c3894bc33ac | [
"Unlicense"
] | null | null | null | from .arctia import main
| 12.5 | 24 | 0.8 | 4 | 25 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 25 | 1 | 25 | 25 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1c16581c2d44d33a5f2cd5b861c3b52fef9fd392 | 177 | py | Python | pyramid_app/routes.py | marinewater/pyramid-social-auth | 926f230294ec6b0fdf02a5ed4113073d82a9d18c | [
"MIT"
] | 2 | 2015-02-10T01:19:21.000Z | 2016-07-24T14:40:59.000Z | pyramid_app/routes.py | marinewater/pyramid-social-auth | 926f230294ec6b0fdf02a5ed4113073d82a9d18c | [
"MIT"
] | null | null | null | pyramid_app/routes.py | marinewater/pyramid-social-auth | 926f230294ec6b0fdf02a5ed4113073d82a9d18c | [
"MIT"
] | null | null | null | def includeme(config):
config.add_route('pyramid-social-auth.auth', '/psa/login/{provider}')
config.add_route('pyramid-social-auth.complete', '/psa/complete/{provider}') | 59 | 80 | 0.728814 | 23 | 177 | 5.521739 | 0.521739 | 0.141732 | 0.220472 | 0.330709 | 0.488189 | 0.488189 | 0 | 0 | 0 | 0 | 0 | 0 | 0.073446 | 177 | 3 | 80 | 59 | 0.77439 | 0 | 0 | 0 | 0 | 0 | 0.544944 | 0.544944 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1c749ee797e17400b3ddbf7ac439a88ffca0a93e | 2,955 | py | Python | tests/test_xsd_union.py | imanashoorii/zibal-zeep | 9ff7b229b0759597823da41d1dbf48c6e7b5b383 | [
"MIT"
] | null | null | null | tests/test_xsd_union.py | imanashoorii/zibal-zeep | 9ff7b229b0759597823da41d1dbf48c6e7b5b383 | [
"MIT"
] | null | null | null | tests/test_xsd_union.py | imanashoorii/zibal-zeep | 9ff7b229b0759597823da41d1dbf48c6e7b5b383 | [
"MIT"
] | null | null | null | from tests.utils import assert_nodes_equal, load_xml, render_node
from zibalzeep import xsd
def test_union_same_types():
schema = xsd.Schema(
load_xml(
"""
<?xml version="1.0"?>
<xsd:schema
xmlns="http://tests.python-zeep.org/"
xmlns:xsd="http://www.w3.org/2001/XMLSchema"
xmlns:tns="http://tests.python-zeep.org/"
targetNamespace="http://tests.python-zeep.org/"
elementFormDefault="qualified">
<xsd:simpleType name="MMYY">
<xsd:restriction base="xsd:int"/>
</xsd:simpleType>
<xsd:simpleType name="MMYYYY">
<xsd:restriction base="xsd:int"/>
</xsd:simpleType>
<xsd:simpleType name="Date">
<xsd:union memberTypes="tns:MMYY MMYYYY"/>
</xsd:simpleType>
<xsd:element name="item" type="tns:Date"/>
</xsd:schema>
"""
)
)
elm = schema.get_element("ns0:item")
node = render_node(elm, "102018")
expected = """
<document>
<ns0:item xmlns:ns0="http://tests.python-zeep.org/">102018</ns0:item>
</document>
"""
assert_nodes_equal(expected, node)
value = elm.parse(list(node)[0], schema)
assert value == 102018
def test_union_mixed():
schema = xsd.Schema(
load_xml(
"""
<?xml version="1.0"?>
<xsd:schema
xmlns:xsd="http://www.w3.org/2001/XMLSchema"
xmlns:tns="http://tests.python-zeep.org/"
targetNamespace="http://tests.python-zeep.org/"
elementFormDefault="qualified">
<xsd:element name="item" type="tns:Date"/>
<xsd:simpleType name="Date">
<xsd:union memberTypes="xsd:date xsd:gYear xsd:gYearMonth tns:MMYY tns:MMYYYY"/>
</xsd:simpleType>
<xsd:simpleType name="MMYY">
<xsd:restriction base="xsd:string">
<xsd:pattern value="(0[123456789]|1[012]){1}\d{2}"/>
</xsd:restriction>
</xsd:simpleType>
<xsd:simpleType name="MMYYYY">
<xsd:restriction base="xsd:string">
<xsd:pattern value="(0[123456789]|1[012]){1}\d{4}"/>
</xsd:restriction>
</xsd:simpleType>
</xsd:schema>
"""
)
)
elm = schema.get_element("ns0:item")
node = render_node(elm, "102018")
expected = """
<document>
<ns0:item xmlns:ns0="http://tests.python-zeep.org/">102018</ns0:item>
</document>
"""
assert_nodes_equal(expected, node)
value = elm.parse(list(node)[0], schema)
assert value == "102018"
node = render_node(elm, "2018")
expected = """
<document>
<ns0:item xmlns:ns0="http://tests.python-zeep.org/">2018</ns0:item>
</document>
"""
assert_nodes_equal(expected, node)
value = elm.parse(list(node)[0], schema)
assert value == "2018"
| 30.78125 | 92 | 0.551946 | 332 | 2,955 | 4.846386 | 0.201807 | 0.096955 | 0.07458 | 0.094469 | 0.866998 | 0.819764 | 0.819764 | 0.782474 | 0.722188 | 0.722188 | 0 | 0.050943 | 0.282572 | 2,955 | 95 | 93 | 31.105263 | 0.708019 | 0 | 0 | 0.682927 | 0 | 0.073171 | 0.326725 | 0 | 0 | 0 | 0 | 0 | 0.170732 | 1 | 0.04878 | false | 0 | 0.04878 | 0 | 0.097561 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
98d8da9f3feabc13377df934815f68b354da7d75 | 195 | py | Python | hello/views.py | kfarrell0/python-docs-hello-django | 9d3c8f0a26a4d8b3b3bf01ba8e5bfa83c7fb3555 | [
"MIT"
] | null | null | null | hello/views.py | kfarrell0/python-docs-hello-django | 9d3c8f0a26a4d8b3b3bf01ba8e5bfa83c7fb3555 | [
"MIT"
] | null | null | null | hello/views.py | kfarrell0/python-docs-hello-django | 9d3c8f0a26a4d8b3b3bf01ba8e5bfa83c7fb3555 | [
"MIT"
] | null | null | null | from django.http import HttpResponse
from django.shortcuts import render
def hello(request):
    return HttpResponse("Hello POWERBI Team - How are you? Today we are learning Azure & PowerBI!")
| 32.5 | 100 | 0.774359 | 27 | 195 | 5.592593 | 0.740741 | 0.13245 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.158974 | 195 | 5 | 101 | 39 | 0.920732 | 0 | 0 | 0 | 0 | 0 | 0.374359 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.5 | 0.25 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
98ec3310c242d24a8ffbb0600ccf19356e10c214 | 104 | py | Python | terrascript/newrelic/__init__.py | amlodzianowski/python-terrascript | 1111affe6cd30d9b8b7bc74ae4e27590f7d4dc49 | [
"BSD-2-Clause"
] | null | null | null | terrascript/newrelic/__init__.py | amlodzianowski/python-terrascript | 1111affe6cd30d9b8b7bc74ae4e27590f7d4dc49 | [
"BSD-2-Clause"
] | null | null | null | terrascript/newrelic/__init__.py | amlodzianowski/python-terrascript | 1111affe6cd30d9b8b7bc74ae4e27590f7d4dc49 | [
"BSD-2-Clause"
] | null | null | null | # terrascript/newrelic/__init__.py
import terrascript
class newrelic(terrascript.Provider):
pass
| 13 | 37 | 0.788462 | 11 | 104 | 7.090909 | 0.727273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.134615 | 104 | 7 | 38 | 14.857143 | 0.866667 | 0.307692 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
c722b270f694cf75d054175908f7d90260fda0c3 | 99 | py | Python | s3_deploy/__main__.py | seanharr11/cra-deploy-to-s3 | 478c30324aaaf803c8785805bbdc4aa6b69b5343 | [
"MIT"
] | 10 | 2019-05-25T14:01:41.000Z | 2021-04-08T12:53:07.000Z | s3_deploy/__main__.py | seanharr11/cra-deploy-to-s3 | 478c30324aaaf803c8785805bbdc4aa6b69b5343 | [
"MIT"
] | 1 | 2021-01-27T16:12:28.000Z | 2021-01-27T16:12:28.000Z | s3_deploy/__main__.py | seanharr11/cra-deploy-to-s3 | 478c30324aaaf803c8785805bbdc4aa6b69b5343 | [
"MIT"
] | 4 | 2019-06-20T19:23:49.000Z | 2020-10-29T22:57:42.000Z | from s3_deploy import main
import sys
def console_entry():
main(None, sys.stdout, sys.stderr)
| 16.5 | 38 | 0.747475 | 16 | 99 | 4.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012048 | 0.161616 | 99 | 5 | 39 | 19.8 | 0.855422 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0.5 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c75cb09bff931b5bb89a980ac50bf4afc7f05280 | 20 | py | Python | ros_ws/devel/lib/python2.7/dist-packages/intro_pkg1/msg/__init__.py | TheProjectsGuy/Learning-ROS | 612f8eeeed0d3308cfff9084dbf7dda4732ec1ae | [
"MIT"
] | 2 | 2019-08-14T11:46:45.000Z | 2020-05-13T21:03:40.000Z | ros_ws/devel/lib/python2.7/dist-packages/intro_pkg1/msg/__init__.py | TheProjectsGuy/Learning-ROS | 612f8eeeed0d3308cfff9084dbf7dda4732ec1ae | [
"MIT"
] | 1 | 2018-12-07T18:54:09.000Z | 2018-12-08T13:18:44.000Z | ros_ws/devel/lib/python2.7/dist-packages/intro_pkg1/msg/__init__.py | TheProjectsGuy/Learning-ROS | 612f8eeeed0d3308cfff9084dbf7dda4732ec1ae | [
"MIT"
] | null | null | null | from ._Equ import *
| 10 | 19 | 0.7 | 3 | 20 | 4.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 20 | 1 | 20 | 20 | 0.8125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
403c2ad53c957c9b58e1c3f453634d2cfc30f27c | 20,245 | py | Python | py_feature/501_concat.py | weiziyoung/instacart | 5da75e6a033859c3394e4e651331aafb002f161c | [
"MIT"
] | 290 | 2017-08-15T14:47:20.000Z | 2022-03-28T07:46:12.000Z | py_feature/501_concat.py | weiziyoung/instacart | 5da75e6a033859c3394e4e651331aafb002f161c | [
"MIT"
] | null | null | null | py_feature/501_concat.py | weiziyoung/instacart | 5da75e6a033859c3394e4e651331aafb002f161c | [
"MIT"
] | 126 | 2017-08-15T14:55:07.000Z | 2022-03-03T09:02:34.000Z | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Mon Jul 10 04:11:27 2017
@author: konodera
nohup python -u 501_concat.py &
"""
import pandas as pd
import numpy as np
from tqdm import tqdm
import multiprocessing as mp
import gc
import utils
utils.start(__file__)
#==============================================================================
# def
#==============================================================================
def user_feature(df, name):
if 'train' in name:
name_ = 'trainT-0'
elif name == 'test':
name_ = 'test'
else:
raise ValueError('unknown dataset name: {}'.format(name))
df = pd.merge(df, pd.read_pickle('../feature/{}/f101_order.p'.format(name_)),# same
on='order_id', how='left')
# timezone
df = pd.merge(df, pd.read_pickle('../input/mk/timezone.p'),
on='order_hour_of_day', how='left')
df = pd.merge(df, pd.read_pickle('../feature/{}/f102_user.p'.format(name)),
on='user_id', how='left')
df = pd.merge(df, pd.read_pickle('../feature/{}/f103_user.p'.format(name)),
on='user_id', how='left')
df = pd.merge(df, pd.read_pickle('../feature/{}/f104_user.p'.format(name)),
on='user_id', how='left')
df = pd.merge(df, pd.read_pickle('../feature/{}/f105_order.p'.format(name_)),# same
on='order_id', how='left')
df = pd.merge(df, pd.read_pickle('../feature/{}/f110_order.p'.format(name_)),# same
on='order_id', how='left')
gc.collect()
return df
def item_feature(df, name):
# aisle = pd.read_pickle('../input/mk/goods.p')[['product_id', 'aisle_id']]
# aisle = pd.get_dummies(aisle.rename(columns={'aisle_id':'item_aisle'}), columns=['item_aisle'])
# df = pd.merge(df, aisle, on='product_id', how='left')
organic = pd.read_pickle('../input/mk/products_feature.p')
df = pd.merge(df, organic, on='product_id', how='left')
# this could be worse
df = pd.merge(df, pd.read_pickle('../feature/{}/f202_product_hour.p'.format(name)),
on=['product_id','order_hour_of_day'], how='left')
df = pd.merge(df, pd.read_pickle('../feature/{}/f202_uniq_product_hour.p'.format(name)),
on=['product_id','order_hour_of_day'], how='left')
df = pd.merge(df, pd.read_pickle('../feature/{}/f202_product_dow.p'.format(name)),
on=['product_id','order_dow'], how='left')
df = pd.merge(df, pd.read_pickle('../feature/{}/f202_uniq_product_dow.p'.format(name)),
on=['product_id','order_dow'], how='left')
gc.collect()
# low importance
df = pd.merge(df, pd.read_pickle('../feature/{}/f202_product_timezone.p'.format(name)),
on=['product_id','timezone'], how='left')
df = pd.merge(df, pd.read_pickle('../feature/{}/f202_uniq_product_timezone.p'.format(name)),
on=['product_id','timezone'], how='left')
# low importance
df = pd.merge(df, pd.read_pickle('../feature/{}/f202_product_dow-timezone.p'.format(name)),
on=['product_id', 'order_dow', 'timezone'], how='left')
df = pd.merge(df, pd.read_pickle('../feature/{}/f202_uniq_product_dow-timezone.p'.format(name)),
on=['product_id', 'order_dow', 'timezone'], how='left')
# no boost
df = pd.merge(df, pd.read_pickle('../feature/{}/f202_flat_product.p'.format(name)),
on=['product_id'], how='left')
df = pd.merge(df, pd.read_pickle('../feature/{}/f203_product.p'.format(name)),
on='product_id', how='left')
df = pd.merge(df, pd.read_pickle('../feature/{}/f205_order_product.p'.format(name)),
on=['order_id', 'product_id'], how='left')
df = pd.merge(df, pd.read_pickle('../feature/{}/f207_product.p'.format(name)),
on='product_id', how='left')
df = pd.merge(df, pd.read_pickle('../feature/{}/f208_product.p'.format(name)),
on='product_id', how='left')
# low imp
df = pd.merge(df, pd.read_pickle('../feature/{}/f209_product.p'.format(name)),
on='product_id', how='left')
df = pd.merge(df, pd.read_pickle('../feature/{}/f210_product.p'.format(name)),
on='product_id', how='left')
df = pd.merge(df, pd.read_pickle('../feature/{}/f211_product.p'.format(name)),
on='product_id', how='left')
df = pd.merge(df, pd.read_pickle('../feature/{}/f212_product.p'.format(name)),
on='product_id', how='left')
df = pd.merge(df, pd.read_pickle('../feature/{}/f213_product-dow.p'.format(name)),
on=['product_id','order_dow'], how='left')
df = pd.merge(df, pd.read_pickle('../feature/{}/f214_product.p'.format(name)),
on='product_id', how='left')
df = pd.merge(df, pd.read_pickle('../feature/{}/f215_product.p'.format(name)),
on='product_id', how='left')
gc.collect()
return df
def user_item_feature(df, name):
    # (feature file, merge keys); every table is left-joined onto df so that
    # candidate rows without a match are kept (their features become NaN).
    merges = [
        ('f301_order-product.p', ['order_id', 'product_id']),
        ('f301_order-product_n5.p', ['order_id', 'product_id']),
        ('f302_order-product_all.p', ['order_id', 'product_id']),
        ('f303_order-product.p', ['order_id', 'product_id']),
        ('f304-1_order-product.p', ['order_id', 'product_id']),
        ('f304-2_order-product.p', ['order_id', 'product_id']),
        ('f304-3_order-product.p', ['order_id', 'product_id']),
        ('f305_order-product.p', ['order_id', 'product_id']),
        ('f306_user-product.p', ['user_id', 'product_id']),
        ('f306_user-product_n5.p', ['user_id', 'product_id']),
        ('f307_user-product-timezone.p', ['user_id', 'product_id', 'timezone']),
        ('f307_user-product-dow.p', ['user_id', 'product_id', 'order_dow']),
        ('f308_user-product-timezone.p', ['user_id', 'product_id', 'timezone']),
        ('f308_user-product-dow.p', ['user_id', 'product_id', 'order_dow']),
        ('f309_user-product.p', ['user_id', 'product_id']),
        ('f309_user-product_n5.p', ['user_id', 'product_id']),
        ('f310_user-product.p', ['user_id', 'product_id']),
        ('f312_user_product.p', ['user_id', 'product_id']),
        ('f312_user_product_n5.p', ['user_id', 'product_id']),
        ('f313_user_aisle.p', ['user_id', 'aisle_id']),
        ('f313_user_dep.p', ['user_id', 'department_id']),
        ('f314_user-product.p', ['user_id', 'product_id']),
        ('f315-1_order-product.p', ['order_id', 'product_id']),
        ('f315-2_order-product.p', ['order_id', 'product_id']),
        ('f315-3_order-product.p', ['order_id', 'product_id']),
        ('f316_order_product.p', ['order_id', 'product_id']),
    ]
    for fname, keys in merges:
        df = pd.merge(df, pd.read_pickle('../feature/{}/{}'.format(name, fname)),
                      on=keys, how='left')
        gc.collect()
    return df
def daytime_feature(df, name):
    df = pd.merge(df, pd.read_pickle('../feature/{}/f401_dow.p'.format(name)),
                  on=['order_dow'], how='left')
    df = pd.merge(df, pd.read_pickle('../feature/{}/f401_hour.p'.format(name)),
                  on=['order_hour_of_day'], how='left')
    return df
def concat_pred_item(T, dryrun=False):
    if T == -1:
        name = 'test'
    else:
        name = 'trainT-' + str(T)
    #==============================================================================
    print('load label')
    #==============================================================================
    # NOTE: order_id is label
    print('load t3')
    X_base = pd.read_pickle('../feature/X_base_t3.p')
    label = pd.read_pickle('../feature/{}/label_reordered.p'.format(name))
    # 'inner' for removing t-n_order_id == NaN
    if 'train' in name:
        df = pd.merge(X_base[X_base.is_train == 1], label, on='order_id', how='inner')
    elif name == 'test':
        df = pd.merge(X_base[X_base.is_train == 0], label, on='order_id', how='inner')
    if dryrun:
        print('dryrun')
        df = df.sample(9999)
    df = pd.merge(df, pd.read_pickle('../input/mk/goods.p')[['product_id', 'aisle_id', 'department_id']],
                  on='product_id', how='left')
    print('{}.shape:{}\n'.format(name, df.shape))
    #==============================================================================
    print('user feature')
    #==============================================================================
    df = user_feature(df, name)
    print('{}.shape:{}\n'.format(name, df.shape))
    #==============================================================================
    print('item feature')
    #==============================================================================
    df = item_feature(df, name)
    print('{}.shape:{}\n'.format(name, df.shape))
    #==============================================================================
    print('reduce memory')
    #==============================================================================
    utils.reduce_memory(df)
    ix_end = df.shape[1]
    #==============================================================================
    print('user x item')
    #==============================================================================
    df = user_item_feature(df, name)
    print('{}.shape:{}\n'.format(name, df.shape))
    #==============================================================================
    print('user x item (compressed stats)')
    #==============================================================================
    def compress(df, key):
        """Collapse an item-level feature table to one row per `key`,
        keeping min/mean/median/max/std of every non-id numeric column.

        key: str
        """
        df_ = df.drop_duplicates(key)[[key]].set_index(key)
        dtypes = df.dtypes
        col = dtypes[dtypes != 'O'].index
        col = [c for c in col if '_id' not in c]
        gr = df.groupby(key)
        for c in col:
            df_[c + '-min'] = gr[c].min()
            df_[c + '-mean'] = gr[c].mean()
            df_[c + '-median'] = gr[c].median()
            df_[c + '-max'] = gr[c].max()
            df_[c + '-std'] = gr[c].std()
        # drop zero-variance (constant) columns
        var = df_.var()
        col = var[var == 0].index
        df_.drop(col, axis=1, inplace=True)
        gc.collect()
        return df_.reset_index()

    # (merge key, feature file) pairs whose per-item stats are compressed to
    # one row per key before being left-joined onto df.
    compress_targets = [
        ('order_id', 'f301_order-product.p'),
        ('order_id', 'f301_order-product_n5.p'),
        ('order_id', 'f302_order-product_all.p'),
        ('order_id', 'f303_order-product.p'),
        ('order_id', 'f304-1_order-product.p'),
        ('order_id', 'f304-2_order-product.p'),
        ('order_id', 'f304-3_order-product.p'),
        ('order_id', 'f305_order-product.p'),
        ('user_id', 'f306_user-product.p'),
        ('user_id', 'f306_user-product_n5.p'),
        ('user_id', 'f307_user-product-timezone.p'),
        ('user_id', 'f308_user-product-timezone.p'),
        ('user_id', 'f308_user-product-dow.p'),
        ('user_id', 'f309_user-product.p'),
        ('user_id', 'f309_user-product_n5.p'),
        ('user_id', 'f310_user-product.p'),
        ('user_id', 'f312_user_product.p'),
        ('user_id', 'f312_user_product_n5.p'),
        ('user_id', 'f313_user_aisle.p'),
        ('user_id', 'f313_user_dep.p'),
        ('user_id', 'f314_user-product.p'),
        ('order_id', 'f315-1_order-product.p'),
        ('order_id', 'f315-2_order-product.p'),
        ('order_id', 'f315-3_order-product.p'),
        ('order_id', 'f316_order_product.p'),
    ]
    for key, fname in compress_targets:
        feature = compress(pd.read_pickle('../feature/{}/{}'.format(name, fname)), key)
        df = pd.merge(df, feature, on=key, how='left')
        gc.collect()
    #==============================================================================
    print('reduce memory')
    #==============================================================================
    utils.reduce_memory(df, ix_end)
    ix_end = df.shape[1]
    #==============================================================================
    print('daytime')
    #==============================================================================
    df = daytime_feature(df, name)
    print('{}.shape:{}\n'.format(name, df.shape))
    # #==============================================================================
    # print('aisle')
    # #==============================================================================
    # order_aisdep = pd.read_pickle('../input/mk/order_aisle-department.p')
    # col = [c for c in order_aisdep.columns if 'department_' in c]
    # order_aisdep.drop(col, axis=1, inplace=True)
    #
    # df = pd.merge(df, order_aisdep.add_prefix('t-1_'), on='t-1_order_id', how='left')
    # df = pd.merge(df, order_aisdep.add_prefix('t-2_'), on='t-2_order_id', how='left')
    #
    # print('{}.shape:{}\n'.format(name, df.shape))
    #==============================================================================
    print('feature engineering')
    #==============================================================================
    df = pd.get_dummies(df, columns=['timezone'])
    df = pd.get_dummies(df, columns=['order_dow'])
    df = pd.get_dummies(df, columns=['order_hour_of_day'])
    df['days_near_order_cycle'] = (df.days_since_last_order_this_item - df.item_order_days_mean).abs()
    df['days_last_order-min'] = df.days_since_last_order_this_item - df.useritem_order_days_min
    df['days_last_order-max'] = df.days_since_last_order_this_item - df.useritem_order_days_max
    df['pos_cart_diff'] = (df.item_mean_pos_cart - df.useritem_mean_pos_cart)
    df['t-1_product_unq_len_diffByT-2'] = df['t-1_product_unq_len'] - df['t-2_product_unq_len']
    df['t-1_product_unq_len_diffByT-3'] = df['t-1_product_unq_len'] - df['t-3_product_unq_len']
    df['t-2_product_unq_len_diffByT-3'] = df['t-2_product_unq_len'] - df['t-3_product_unq_len']
    df['t-1_product_unq_len_ratioByT-2'] = df['t-1_product_unq_len'] / df['t-2_product_unq_len']
    df['t-1_product_unq_len_ratioByT-3'] = df['t-1_product_unq_len'] / df['t-3_product_unq_len']
    df['t-2_product_unq_len_ratioByT-3'] = df['t-2_product_unq_len'] / df['t-3_product_unq_len']
    df['T'] = T
    #==============================================================================
    print('reduce memory')
    #==============================================================================
    utils.reduce_memory(df, ix_end)
    #==============================================================================
    print('output')
    #==============================================================================
    if dryrun:
        return df
    else:
        utils.to_pickles(df, '../feature/{}/all'.format(name), 20, inplace=True)
def multi(T):
    # worker wrapper: T selects a train split (0, 1, 2) or, as -1, the test split
    concat_pred_item(T)

#==============================================================================
# multi
mp_pool = mp.Pool(2)
mp_pool.map(multi, [0, 1, 2, -1])

utils.end(__file__)
# --- Scrap11888/lib/Decorators/__init__.py (GeorgeVasiliadis/Scrap11888, MIT) ---
] | null | null | null | from .Debugging import timeMe
# --- tests/unit/gapic/trace_v2/test_trace_service.py (tswast/python-trace, Apache-2.0) ---
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import os
import mock
import grpc
from grpc.experimental import aio
import math
import pytest
from proto.marshal.rules.dates import DurationRule, TimestampRule
from google import auth
from google.api_core import client_options
from google.api_core import exceptions
from google.api_core import gapic_v1
from google.api_core import grpc_helpers
from google.api_core import grpc_helpers_async
from google.auth import credentials
from google.auth.exceptions import MutualTLSChannelError
from google.cloud.trace_v2.services.trace_service import TraceServiceAsyncClient
from google.cloud.trace_v2.services.trace_service import TraceServiceClient
from google.cloud.trace_v2.services.trace_service import transports
from google.cloud.trace_v2.types import trace
from google.cloud.trace_v2.types import tracing
from google.oauth2 import service_account
from google.protobuf import any_pb2 as any # type: ignore
from google.protobuf import timestamp_pb2 as timestamp # type: ignore
from google.protobuf import wrappers_pb2 as wrappers # type: ignore
from google.rpc import status_pb2 as gr_status # type: ignore
from google.rpc import status_pb2 as status # type: ignore
def client_cert_source_callback():
    return b"cert bytes", b"key bytes"


# If default endpoint is localhost, then default mtls endpoint will be the same.
# This method modifies the default endpoint so the client can produce a different
# mtls endpoint for endpoint testing purposes.
def modify_default_endpoint(client):
    return (
        "foo.googleapis.com"
        if ("localhost" in client.DEFAULT_ENDPOINT)
        else client.DEFAULT_ENDPOINT
    )
def test__get_default_mtls_endpoint():
    api_endpoint = "example.googleapis.com"
    api_mtls_endpoint = "example.mtls.googleapis.com"
    sandbox_endpoint = "example.sandbox.googleapis.com"
    sandbox_mtls_endpoint = "example.mtls.sandbox.googleapis.com"
    non_googleapi = "api.example.com"

    assert TraceServiceClient._get_default_mtls_endpoint(None) is None
    assert (
        TraceServiceClient._get_default_mtls_endpoint(api_endpoint) == api_mtls_endpoint
    )
    assert (
        TraceServiceClient._get_default_mtls_endpoint(api_mtls_endpoint)
        == api_mtls_endpoint
    )
    assert (
        TraceServiceClient._get_default_mtls_endpoint(sandbox_endpoint)
        == sandbox_mtls_endpoint
    )
    assert (
        TraceServiceClient._get_default_mtls_endpoint(sandbox_mtls_endpoint)
        == sandbox_mtls_endpoint
    )
    assert TraceServiceClient._get_default_mtls_endpoint(non_googleapi) == non_googleapi
@pytest.mark.parametrize("client_class", [TraceServiceClient, TraceServiceAsyncClient])
def test_trace_service_client_from_service_account_file(client_class):
creds = credentials.AnonymousCredentials()
with mock.patch.object(
service_account.Credentials, "from_service_account_file"
) as factory:
factory.return_value = creds
client = client_class.from_service_account_file("dummy/file/path.json")
assert client._transport._credentials == creds
client = client_class.from_service_account_json("dummy/file/path.json")
assert client._transport._credentials == creds
assert client._transport._host == "cloudtrace.googleapis.com:443"
def test_trace_service_client_get_transport_class():
    transport = TraceServiceClient.get_transport_class()
    assert transport == transports.TraceServiceGrpcTransport

    transport = TraceServiceClient.get_transport_class("grpc")
    assert transport == transports.TraceServiceGrpcTransport
@pytest.mark.parametrize(
    "client_class,transport_class,transport_name",
    [
        (TraceServiceClient, transports.TraceServiceGrpcTransport, "grpc"),
        (
            TraceServiceAsyncClient,
            transports.TraceServiceGrpcAsyncIOTransport,
            "grpc_asyncio",
        ),
    ],
)
@mock.patch.object(
    TraceServiceClient, "DEFAULT_ENDPOINT", modify_default_endpoint(TraceServiceClient)
)
@mock.patch.object(
    TraceServiceAsyncClient,
    "DEFAULT_ENDPOINT",
    modify_default_endpoint(TraceServiceAsyncClient),
)
def test_trace_service_client_client_options(
    client_class, transport_class, transport_name
):
    # Check that if channel is provided we won't create a new one.
    with mock.patch.object(TraceServiceClient, "get_transport_class") as gtc:
        transport = transport_class(credentials=credentials.AnonymousCredentials())
        client = client_class(transport=transport)
        gtc.assert_not_called()

    # Check that if channel is provided via str we will create a new one.
    with mock.patch.object(TraceServiceClient, "get_transport_class") as gtc:
        client = client_class(transport=transport_name)
        gtc.assert_called()

    # Check the case api_endpoint is provided.
    options = client_options.ClientOptions(api_endpoint="squid.clam.whelk")
    with mock.patch.object(transport_class, "__init__") as patched:
        patched.return_value = None
        client = client_class(client_options=options)
        patched.assert_called_once_with(
            credentials=None,
            credentials_file=None,
            host="squid.clam.whelk",
            scopes=None,
            api_mtls_endpoint="squid.clam.whelk",
            client_cert_source=None,
            quota_project_id=None,
            client_info=transports.base.DEFAULT_CLIENT_INFO,
        )

    # Check the case api_endpoint is not provided and GOOGLE_API_USE_MTLS is
    # "never".
    with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS": "never"}):
        with mock.patch.object(transport_class, "__init__") as patched:
            patched.return_value = None
            client = client_class()
            patched.assert_called_once_with(
                credentials=None,
                credentials_file=None,
                host=client.DEFAULT_ENDPOINT,
                scopes=None,
                api_mtls_endpoint=client.DEFAULT_ENDPOINT,
                client_cert_source=None,
                quota_project_id=None,
                client_info=transports.base.DEFAULT_CLIENT_INFO,
            )

    # Check the case api_endpoint is not provided and GOOGLE_API_USE_MTLS is
    # "always".
    with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS": "always"}):
        with mock.patch.object(transport_class, "__init__") as patched:
            patched.return_value = None
            client = client_class()
            patched.assert_called_once_with(
                credentials=None,
                credentials_file=None,
                host=client.DEFAULT_MTLS_ENDPOINT,
                scopes=None,
                api_mtls_endpoint=client.DEFAULT_MTLS_ENDPOINT,
                client_cert_source=None,
                quota_project_id=None,
                client_info=transports.base.DEFAULT_CLIENT_INFO,
            )

    # Check the case api_endpoint is not provided, GOOGLE_API_USE_MTLS is
    # "auto", and client_cert_source is provided.
    with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS": "auto"}):
        options = client_options.ClientOptions(
            client_cert_source=client_cert_source_callback
        )
        with mock.patch.object(transport_class, "__init__") as patched:
            patched.return_value = None
            client = client_class(client_options=options)
            patched.assert_called_once_with(
                credentials=None,
                credentials_file=None,
                host=client.DEFAULT_MTLS_ENDPOINT,
                scopes=None,
                api_mtls_endpoint=client.DEFAULT_MTLS_ENDPOINT,
                client_cert_source=client_cert_source_callback,
                quota_project_id=None,
                client_info=transports.base.DEFAULT_CLIENT_INFO,
            )

    # Check the case api_endpoint is not provided, GOOGLE_API_USE_MTLS is
    # "auto", and default_client_cert_source is provided.
    with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS": "auto"}):
        with mock.patch.object(transport_class, "__init__") as patched:
            with mock.patch(
                "google.auth.transport.mtls.has_default_client_cert_source",
                return_value=True,
            ):
                patched.return_value = None
                client = client_class()
                patched.assert_called_once_with(
                    credentials=None,
                    credentials_file=None,
                    host=client.DEFAULT_MTLS_ENDPOINT,
                    scopes=None,
                    api_mtls_endpoint=client.DEFAULT_MTLS_ENDPOINT,
                    client_cert_source=None,
                    quota_project_id=None,
                    client_info=transports.base.DEFAULT_CLIENT_INFO,
                )

    # Check the case api_endpoint is not provided, GOOGLE_API_USE_MTLS is
    # "auto", but client_cert_source and default_client_cert_source are None.
    with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS": "auto"}):
        with mock.patch.object(transport_class, "__init__") as patched:
            with mock.patch(
                "google.auth.transport.mtls.has_default_client_cert_source",
                return_value=False,
            ):
                patched.return_value = None
                client = client_class()
                patched.assert_called_once_with(
                    credentials=None,
                    credentials_file=None,
                    host=client.DEFAULT_ENDPOINT,
                    scopes=None,
                    api_mtls_endpoint=client.DEFAULT_ENDPOINT,
                    client_cert_source=None,
                    quota_project_id=None,
                    client_info=transports.base.DEFAULT_CLIENT_INFO,
                )

    # Check the case api_endpoint is not provided and GOOGLE_API_USE_MTLS has
    # unsupported value.
    with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS": "Unsupported"}):
        with pytest.raises(MutualTLSChannelError):
            client = client_class()

    # Check the case quota_project_id is provided
    options = client_options.ClientOptions(quota_project_id="octopus")
    with mock.patch.object(transport_class, "__init__") as patched:
        patched.return_value = None
        client = client_class(client_options=options)
        patched.assert_called_once_with(
            credentials=None,
            credentials_file=None,
            host=client.DEFAULT_ENDPOINT,
            scopes=None,
            api_mtls_endpoint=client.DEFAULT_ENDPOINT,
            client_cert_source=None,
            quota_project_id="octopus",
            client_info=transports.base.DEFAULT_CLIENT_INFO,
        )
@pytest.mark.parametrize(
    "client_class,transport_class,transport_name",
    [
        (TraceServiceClient, transports.TraceServiceGrpcTransport, "grpc"),
        (
            TraceServiceAsyncClient,
            transports.TraceServiceGrpcAsyncIOTransport,
            "grpc_asyncio",
        ),
    ],
)
def test_trace_service_client_client_options_scopes(
    client_class, transport_class, transport_name
):
    # Check the case scopes are provided.
    options = client_options.ClientOptions(scopes=["1", "2"],)
    with mock.patch.object(transport_class, "__init__") as patched:
        patched.return_value = None
        client = client_class(client_options=options)
        patched.assert_called_once_with(
            credentials=None,
            credentials_file=None,
            host=client.DEFAULT_ENDPOINT,
            scopes=["1", "2"],
            api_mtls_endpoint=client.DEFAULT_ENDPOINT,
            client_cert_source=None,
            quota_project_id=None,
            client_info=transports.base.DEFAULT_CLIENT_INFO,
        )
@pytest.mark.parametrize(
    "client_class,transport_class,transport_name",
    [
        (TraceServiceClient, transports.TraceServiceGrpcTransport, "grpc"),
        (
            TraceServiceAsyncClient,
            transports.TraceServiceGrpcAsyncIOTransport,
            "grpc_asyncio",
        ),
    ],
)
def test_trace_service_client_client_options_credentials_file(
    client_class, transport_class, transport_name
):
    # Check the case credentials file is provided.
    options = client_options.ClientOptions(credentials_file="credentials.json")
    with mock.patch.object(transport_class, "__init__") as patched:
        patched.return_value = None
        client = client_class(client_options=options)
        patched.assert_called_once_with(
            credentials=None,
            credentials_file="credentials.json",
            host=client.DEFAULT_ENDPOINT,
            scopes=None,
            api_mtls_endpoint=client.DEFAULT_ENDPOINT,
            client_cert_source=None,
            quota_project_id=None,
            client_info=transports.base.DEFAULT_CLIENT_INFO,
        )
def test_trace_service_client_client_options_from_dict():
    with mock.patch(
        "google.cloud.trace_v2.services.trace_service.transports.TraceServiceGrpcTransport.__init__"
    ) as grpc_transport:
        grpc_transport.return_value = None
        client = TraceServiceClient(client_options={"api_endpoint": "squid.clam.whelk"})
        grpc_transport.assert_called_once_with(
            credentials=None,
            credentials_file=None,
            host="squid.clam.whelk",
            scopes=None,
            api_mtls_endpoint="squid.clam.whelk",
            client_cert_source=None,
            quota_project_id=None,
            client_info=transports.base.DEFAULT_CLIENT_INFO,
        )
def test_batch_write_spans(
    transport: str = "grpc", request_type=tracing.BatchWriteSpansRequest
):
    client = TraceServiceClient(
        credentials=credentials.AnonymousCredentials(), transport=transport,
    )

    # Everything is optional in proto3 as far as the runtime is concerned,
    # and we are mocking out the actual API, so just send an empty request.
    request = request_type()

    # Mock the actual call within the gRPC stub, and fake the request.
    with mock.patch.object(
        type(client._transport.batch_write_spans), "__call__"
    ) as call:
        # Designate an appropriate return value for the call.
        call.return_value = None

        response = client.batch_write_spans(request)

        # Establish that the underlying gRPC stub method was called.
        assert len(call.mock_calls) == 1
        _, args, _ = call.mock_calls[0]

        assert args[0] == tracing.BatchWriteSpansRequest()

    # Establish that the response is the type that we expect.
    assert response is None


def test_batch_write_spans_from_dict():
    test_batch_write_spans(request_type=dict)
@pytest.mark.asyncio
async def test_batch_write_spans_async(transport: str = "grpc_asyncio"):
    client = TraceServiceAsyncClient(
        credentials=credentials.AnonymousCredentials(), transport=transport,
    )

    # Everything is optional in proto3 as far as the runtime is concerned,
    # and we are mocking out the actual API, so just send an empty request.
    request = tracing.BatchWriteSpansRequest()

    # Mock the actual call within the gRPC stub, and fake the request.
    with mock.patch.object(
        type(client._client._transport.batch_write_spans), "__call__"
    ) as call:
        # Designate an appropriate return value for the call.
        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None)

        response = await client.batch_write_spans(request)

        # Establish that the underlying gRPC stub method was called.
        assert len(call.mock_calls)
        _, args, _ = call.mock_calls[0]

        assert args[0] == request

    # Establish that the response is the type that we expect.
    assert response is None
def test_batch_write_spans_field_headers():
    client = TraceServiceClient(credentials=credentials.AnonymousCredentials(),)

    # Any value that is part of the HTTP/1.1 URI should be sent as
    # a field header. Set these to a non-empty value.
    request = tracing.BatchWriteSpansRequest()
    request.name = "name/value"

    # Mock the actual call within the gRPC stub, and fake the request.
    with mock.patch.object(
        type(client._transport.batch_write_spans), "__call__"
    ) as call:
        call.return_value = None

        client.batch_write_spans(request)

        # Establish that the underlying gRPC stub method was called.
        assert len(call.mock_calls) == 1
        _, args, _ = call.mock_calls[0]
        assert args[0] == request

    # Establish that the field header was sent.
    _, _, kw = call.mock_calls[0]
    assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_batch_write_spans_field_headers_async():
    client = TraceServiceAsyncClient(credentials=credentials.AnonymousCredentials(),)

    # Any value that is part of the HTTP/1.1 URI should be sent as
    # a field header. Set these to a non-empty value.
    request = tracing.BatchWriteSpansRequest()
    request.name = "name/value"

    # Mock the actual call within the gRPC stub, and fake the request.
    with mock.patch.object(
        type(client._client._transport.batch_write_spans), "__call__"
    ) as call:
        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None)

        await client.batch_write_spans(request)

        # Establish that the underlying gRPC stub method was called.
        assert len(call.mock_calls)
        _, args, _ = call.mock_calls[0]
        assert args[0] == request

    # Establish that the field header was sent.
    _, _, kw = call.mock_calls[0]
    assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
def test_batch_write_spans_flattened():
    client = TraceServiceClient(credentials=credentials.AnonymousCredentials(),)

    # Mock the actual call within the gRPC stub, and fake the request.
    with mock.patch.object(
        type(client._transport.batch_write_spans), "__call__"
    ) as call:
        # Designate an appropriate return value for the call.
        call.return_value = None

        # Call the method with a truthy value for each flattened field,
        # using the keyword arguments to the method.
        client.batch_write_spans(
            name="name_value", spans=[trace.Span(name="name_value")],
        )

        # Establish that the underlying call was made with the expected
        # request object values.
        assert len(call.mock_calls) == 1
        _, args, _ = call.mock_calls[0]

        assert args[0].name == "name_value"

        assert args[0].spans == [trace.Span(name="name_value")]


def test_batch_write_spans_flattened_error():
    client = TraceServiceClient(credentials=credentials.AnonymousCredentials(),)

    # Attempting to call a method with both a request object and flattened
    # fields is an error.
    with pytest.raises(ValueError):
        client.batch_write_spans(
            tracing.BatchWriteSpansRequest(),
            name="name_value",
            spans=[trace.Span(name="name_value")],
        )
@pytest.mark.asyncio
async def test_batch_write_spans_flattened_async():
    client = TraceServiceAsyncClient(credentials=credentials.AnonymousCredentials(),)

    # Mock the actual call within the gRPC stub, and fake the request.
    with mock.patch.object(
        type(client._client._transport.batch_write_spans), "__call__"
    ) as call:
        # Designate an appropriate return value for the call.
        call.return_value = None

        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None)
        # Call the method with a truthy value for each flattened field,
        # using the keyword arguments to the method.
        response = await client.batch_write_spans(
            name="name_value", spans=[trace.Span(name="name_value")],
        )

        # Establish that the underlying call was made with the expected
        # request object values.
        assert len(call.mock_calls)
        _, args, _ = call.mock_calls[0]

        assert args[0].name == "name_value"

        assert args[0].spans == [trace.Span(name="name_value")]


@pytest.mark.asyncio
async def test_batch_write_spans_flattened_error_async():
    client = TraceServiceAsyncClient(credentials=credentials.AnonymousCredentials(),)

    # Attempting to call a method with both a request object and flattened
    # fields is an error.
    with pytest.raises(ValueError):
        await client.batch_write_spans(
            tracing.BatchWriteSpansRequest(),
            name="name_value",
            spans=[trace.Span(name="name_value")],
        )
def test_create_span(transport: str = "grpc", request_type=trace.Span):
    client = TraceServiceClient(
        credentials=credentials.AnonymousCredentials(), transport=transport,
    )

    # Everything is optional in proto3 as far as the runtime is concerned,
    # and we are mocking out the actual API, so just send an empty request.
    request = request_type()

    # Mock the actual call within the gRPC stub, and fake the request.
    with mock.patch.object(type(client._transport.create_span), "__call__") as call:
        # Designate an appropriate return value for the call.
        call.return_value = trace.Span(
            name="name_value",
            span_id="span_id_value",
            parent_span_id="parent_span_id_value",
            span_kind=trace.Span.SpanKind.INTERNAL,
        )

        response = client.create_span(request)

        # Establish that the underlying gRPC stub method was called.
        assert len(call.mock_calls) == 1
        _, args, _ = call.mock_calls[0]

        assert args[0] == trace.Span()

    # Establish that the response is the type that we expect.
    assert isinstance(response, trace.Span)
    assert response.name == "name_value"
    assert response.span_id == "span_id_value"
    assert response.parent_span_id == "parent_span_id_value"
    assert response.span_kind == trace.Span.SpanKind.INTERNAL


def test_create_span_from_dict():
    test_create_span(request_type=dict)


@pytest.mark.asyncio
async def test_create_span_async(transport: str = "grpc_asyncio"):
    client = TraceServiceAsyncClient(
        credentials=credentials.AnonymousCredentials(), transport=transport,
    )

    # Everything is optional in proto3 as far as the runtime is concerned,
    # and we are mocking out the actual API, so just send an empty request.
    request = trace.Span()

    # Mock the actual call within the gRPC stub, and fake the request.
    with mock.patch.object(
        type(client._client._transport.create_span), "__call__"
    ) as call:
        # Designate an appropriate return value for the call.
        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
            trace.Span(
                name="name_value",
                span_id="span_id_value",
                parent_span_id="parent_span_id_value",
                span_kind=trace.Span.SpanKind.INTERNAL,
            )
        )

        response = await client.create_span(request)

        # Establish that the underlying gRPC stub method was called.
        assert len(call.mock_calls)
        _, args, _ = call.mock_calls[0]

        assert args[0] == request

    # Establish that the response is the type that we expect.
    assert isinstance(response, trace.Span)
    assert response.name == "name_value"
    assert response.span_id == "span_id_value"
    assert response.parent_span_id == "parent_span_id_value"
    assert response.span_kind == trace.Span.SpanKind.INTERNAL
def test_create_span_field_headers():
    client = TraceServiceClient(credentials=credentials.AnonymousCredentials(),)

    # Any value that is part of the HTTP/1.1 URI should be sent as
    # a field header. Set these to a non-empty value.
    request = trace.Span()
    request.name = "name/value"

    # Mock the actual call within the gRPC stub, and fake the request.
    with mock.patch.object(type(client._transport.create_span), "__call__") as call:
        call.return_value = trace.Span()

        client.create_span(request)

        # Establish that the underlying gRPC stub method was called.
        assert len(call.mock_calls) == 1
        _, args, _ = call.mock_calls[0]
        assert args[0] == request

    # Establish that the field header was sent.
    _, _, kw = call.mock_calls[0]
    assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]


@pytest.mark.asyncio
async def test_create_span_field_headers_async():
    client = TraceServiceAsyncClient(credentials=credentials.AnonymousCredentials(),)

    # Any value that is part of the HTTP/1.1 URI should be sent as
    # a field header. Set these to a non-empty value.
    request = trace.Span()
    request.name = "name/value"

    # Mock the actual call within the gRPC stub, and fake the request.
    with mock.patch.object(
        type(client._client._transport.create_span), "__call__"
    ) as call:
        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(trace.Span())

        await client.create_span(request)

        # Establish that the underlying gRPC stub method was called.
        assert len(call.mock_calls)
        _, args, _ = call.mock_calls[0]
        assert args[0] == request

    # Establish that the field header was sent.
    _, _, kw = call.mock_calls[0]
    assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
def test_credentials_transport_error():
    # It is an error to provide credentials and a transport instance.
    transport = transports.TraceServiceGrpcTransport(
        credentials=credentials.AnonymousCredentials(),
    )
    with pytest.raises(ValueError):
        client = TraceServiceClient(
            credentials=credentials.AnonymousCredentials(), transport=transport,
        )

    # It is an error to provide a credentials file and a transport instance.
    transport = transports.TraceServiceGrpcTransport(
        credentials=credentials.AnonymousCredentials(),
    )
    with pytest.raises(ValueError):
        client = TraceServiceClient(
            client_options={"credentials_file": "credentials.json"},
            transport=transport,
        )

    # It is an error to provide scopes and a transport instance.
    transport = transports.TraceServiceGrpcTransport(
        credentials=credentials.AnonymousCredentials(),
    )
    with pytest.raises(ValueError):
        client = TraceServiceClient(
            client_options={"scopes": ["1", "2"]}, transport=transport,
        )


def test_transport_instance():
    # A client may be instantiated with a custom transport instance.
    transport = transports.TraceServiceGrpcTransport(
        credentials=credentials.AnonymousCredentials(),
    )
    client = TraceServiceClient(transport=transport)
    assert client._transport is transport


def test_transport_get_channel():
    # A client may be instantiated with a custom transport instance.
    transport = transports.TraceServiceGrpcTransport(
        credentials=credentials.AnonymousCredentials(),
    )
    channel = transport.grpc_channel
    assert channel

    transport = transports.TraceServiceGrpcAsyncIOTransport(
        credentials=credentials.AnonymousCredentials(),
    )
    channel = transport.grpc_channel
    assert channel
def test_transport_grpc_default():
    # A client should use the gRPC transport by default.
    client = TraceServiceClient(credentials=credentials.AnonymousCredentials(),)
    assert isinstance(client._transport, transports.TraceServiceGrpcTransport,)


def test_trace_service_base_transport_error():
    # Passing both a credentials object and credentials_file should raise an error
    with pytest.raises(exceptions.DuplicateCredentialArgs):
        transport = transports.TraceServiceTransport(
            credentials=credentials.AnonymousCredentials(),
            credentials_file="credentials.json",
        )


def test_trace_service_base_transport():
    # Instantiate the base transport.
    with mock.patch(
        "google.cloud.trace_v2.services.trace_service.transports.TraceServiceTransport.__init__"
    ) as Transport:
        Transport.return_value = None
        transport = transports.TraceServiceTransport(
            credentials=credentials.AnonymousCredentials(),
        )

    # Every method on the transport should just blindly
    # raise NotImplementedError.
    methods = (
        "batch_write_spans",
        "create_span",
    )
    for method in methods:
        with pytest.raises(NotImplementedError):
            getattr(transport, method)(request=object())
def test_trace_service_base_transport_with_credentials_file():
    # Instantiate the base transport with a credentials file
    with mock.patch.object(
        auth, "load_credentials_from_file"
    ) as load_creds, mock.patch(
        "google.cloud.trace_v2.services.trace_service.transports.TraceServiceTransport._prep_wrapped_messages"
    ) as Transport:
        Transport.return_value = None
        load_creds.return_value = (credentials.AnonymousCredentials(), None)
        transport = transports.TraceServiceTransport(
            credentials_file="credentials.json", quota_project_id="octopus",
        )
        load_creds.assert_called_once_with(
            "credentials.json",
            scopes=(
                "https://www.googleapis.com/auth/cloud-platform",
                "https://www.googleapis.com/auth/trace.append",
            ),
            quota_project_id="octopus",
        )


def test_trace_service_auth_adc():
    # If no credentials are provided, we should use ADC credentials.
    with mock.patch.object(auth, "default") as adc:
        adc.return_value = (credentials.AnonymousCredentials(), None)
        TraceServiceClient()
        adc.assert_called_once_with(
            scopes=(
                "https://www.googleapis.com/auth/cloud-platform",
                "https://www.googleapis.com/auth/trace.append",
            ),
            quota_project_id=None,
        )


def test_trace_service_transport_auth_adc():
    # If credentials and host are not provided, the transport class should use
    # ADC credentials.
    with mock.patch.object(auth, "default") as adc:
        adc.return_value = (credentials.AnonymousCredentials(), None)
        transports.TraceServiceGrpcTransport(
            host="squid.clam.whelk", quota_project_id="octopus"
        )
        adc.assert_called_once_with(
            scopes=(
                "https://www.googleapis.com/auth/cloud-platform",
                "https://www.googleapis.com/auth/trace.append",
            ),
            quota_project_id="octopus",
        )
def test_trace_service_host_no_port():
    client = TraceServiceClient(
        credentials=credentials.AnonymousCredentials(),
        client_options=client_options.ClientOptions(
            api_endpoint="cloudtrace.googleapis.com"
        ),
    )
    assert client._transport._host == "cloudtrace.googleapis.com:443"


def test_trace_service_host_with_port():
    client = TraceServiceClient(
        credentials=credentials.AnonymousCredentials(),
        client_options=client_options.ClientOptions(
            api_endpoint="cloudtrace.googleapis.com:8000"
        ),
    )
    assert client._transport._host == "cloudtrace.googleapis.com:8000"


def test_trace_service_grpc_transport_channel():
    channel = grpc.insecure_channel("http://localhost/")

    # Check that if channel is provided, mtls endpoint and client_cert_source
    # won't be used.
    callback = mock.MagicMock()
    transport = transports.TraceServiceGrpcTransport(
        host="squid.clam.whelk",
        channel=channel,
        api_mtls_endpoint="mtls.squid.clam.whelk",
        client_cert_source=callback,
    )
    assert transport.grpc_channel == channel
    assert transport._host == "squid.clam.whelk:443"
    assert not callback.called


def test_trace_service_grpc_asyncio_transport_channel():
    channel = aio.insecure_channel("http://localhost/")

    # Check that if channel is provided, mtls endpoint and client_cert_source
    # won't be used.
    callback = mock.MagicMock()
    transport = transports.TraceServiceGrpcAsyncIOTransport(
        host="squid.clam.whelk",
        channel=channel,
        api_mtls_endpoint="mtls.squid.clam.whelk",
        client_cert_source=callback,
    )
    assert transport.grpc_channel == channel
    assert transport._host == "squid.clam.whelk:443"
    assert not callback.called
@mock.patch("grpc.ssl_channel_credentials", autospec=True)
@mock.patch("google.api_core.grpc_helpers.create_channel", autospec=True)
def test_trace_service_grpc_transport_channel_mtls_with_client_cert_source(
    grpc_create_channel, grpc_ssl_channel_cred
):
    # Check that if channel is None, but api_mtls_endpoint and client_cert_source
    # are provided, then a mTLS channel will be created.
    mock_cred = mock.Mock()

    mock_ssl_cred = mock.Mock()
    grpc_ssl_channel_cred.return_value = mock_ssl_cred

    mock_grpc_channel = mock.Mock()
    grpc_create_channel.return_value = mock_grpc_channel

    transport = transports.TraceServiceGrpcTransport(
        host="squid.clam.whelk",
        credentials=mock_cred,
        api_mtls_endpoint="mtls.squid.clam.whelk",
        client_cert_source=client_cert_source_callback,
    )
    grpc_ssl_channel_cred.assert_called_once_with(
        certificate_chain=b"cert bytes", private_key=b"key bytes"
    )
    grpc_create_channel.assert_called_once_with(
        "mtls.squid.clam.whelk:443",
        credentials=mock_cred,
        credentials_file=None,
        scopes=(
            "https://www.googleapis.com/auth/cloud-platform",
            "https://www.googleapis.com/auth/trace.append",
        ),
        ssl_credentials=mock_ssl_cred,
        quota_project_id=None,
    )
    assert transport.grpc_channel == mock_grpc_channel


@mock.patch("grpc.ssl_channel_credentials", autospec=True)
@mock.patch("google.api_core.grpc_helpers_async.create_channel", autospec=True)
def test_trace_service_grpc_asyncio_transport_channel_mtls_with_client_cert_source(
    grpc_create_channel, grpc_ssl_channel_cred
):
    # Check that if channel is None, but api_mtls_endpoint and client_cert_source
    # are provided, then a mTLS channel will be created.
    mock_cred = mock.Mock()

    mock_ssl_cred = mock.Mock()
    grpc_ssl_channel_cred.return_value = mock_ssl_cred

    mock_grpc_channel = mock.Mock()
    grpc_create_channel.return_value = mock_grpc_channel

    transport = transports.TraceServiceGrpcAsyncIOTransport(
        host="squid.clam.whelk",
        credentials=mock_cred,
        api_mtls_endpoint="mtls.squid.clam.whelk",
        client_cert_source=client_cert_source_callback,
    )
    grpc_ssl_channel_cred.assert_called_once_with(
        certificate_chain=b"cert bytes", private_key=b"key bytes"
    )
    grpc_create_channel.assert_called_once_with(
        "mtls.squid.clam.whelk:443",
        credentials=mock_cred,
        credentials_file=None,
        scopes=(
            "https://www.googleapis.com/auth/cloud-platform",
            "https://www.googleapis.com/auth/trace.append",
        ),
        ssl_credentials=mock_ssl_cred,
        quota_project_id=None,
    )
    assert transport.grpc_channel == mock_grpc_channel
@pytest.mark.parametrize(
    "api_mtls_endpoint", ["mtls.squid.clam.whelk", "mtls.squid.clam.whelk:443"]
)
@mock.patch("google.api_core.grpc_helpers.create_channel", autospec=True)
def test_trace_service_grpc_transport_channel_mtls_with_adc(
    grpc_create_channel, api_mtls_endpoint
):
    # Check that if channel and client_cert_source are None, but api_mtls_endpoint
    # is provided, then a mTLS channel will be created with SSL ADC.
    mock_grpc_channel = mock.Mock()
    grpc_create_channel.return_value = mock_grpc_channel

    # Mock google.auth.transport.grpc.SslCredentials class.
    mock_ssl_cred = mock.Mock()
    with mock.patch.multiple(
        "google.auth.transport.grpc.SslCredentials",
        __init__=mock.Mock(return_value=None),
        ssl_credentials=mock.PropertyMock(return_value=mock_ssl_cred),
    ):
        mock_cred = mock.Mock()
        transport = transports.TraceServiceGrpcTransport(
            host="squid.clam.whelk",
            credentials=mock_cred,
            api_mtls_endpoint=api_mtls_endpoint,
            client_cert_source=None,
        )
        grpc_create_channel.assert_called_once_with(
            "mtls.squid.clam.whelk:443",
            credentials=mock_cred,
            credentials_file=None,
            scopes=(
                "https://www.googleapis.com/auth/cloud-platform",
                "https://www.googleapis.com/auth/trace.append",
            ),
            ssl_credentials=mock_ssl_cred,
            quota_project_id=None,
        )
        assert transport.grpc_channel == mock_grpc_channel


@pytest.mark.parametrize(
    "api_mtls_endpoint", ["mtls.squid.clam.whelk", "mtls.squid.clam.whelk:443"]
)
@mock.patch("google.api_core.grpc_helpers_async.create_channel", autospec=True)
def test_trace_service_grpc_asyncio_transport_channel_mtls_with_adc(
    grpc_create_channel, api_mtls_endpoint
):
    # Check that if channel and client_cert_source are None, but api_mtls_endpoint
    # is provided, then a mTLS channel will be created with SSL ADC.
    mock_grpc_channel = mock.Mock()
    grpc_create_channel.return_value = mock_grpc_channel

    # Mock google.auth.transport.grpc.SslCredentials class.
    mock_ssl_cred = mock.Mock()
    with mock.patch.multiple(
        "google.auth.transport.grpc.SslCredentials",
        __init__=mock.Mock(return_value=None),
        ssl_credentials=mock.PropertyMock(return_value=mock_ssl_cred),
    ):
        mock_cred = mock.Mock()
        transport = transports.TraceServiceGrpcAsyncIOTransport(
            host="squid.clam.whelk",
            credentials=mock_cred,
            api_mtls_endpoint=api_mtls_endpoint,
            client_cert_source=None,
        )
        grpc_create_channel.assert_called_once_with(
            "mtls.squid.clam.whelk:443",
            credentials=mock_cred,
            credentials_file=None,
            scopes=(
                "https://www.googleapis.com/auth/cloud-platform",
                "https://www.googleapis.com/auth/trace.append",
            ),
            ssl_credentials=mock_ssl_cred,
            quota_project_id=None,
        )
        assert transport.grpc_channel == mock_grpc_channel
def test_span_path():
    project = "squid"
    trace = "clam"
    span = "whelk"

    expected = "projects/{project}/traces/{trace}/spans/{span}".format(
        project=project, trace=trace, span=span,
    )
    actual = TraceServiceClient.span_path(project, trace, span)
    assert expected == actual


def test_parse_span_path():
    expected = {
        "project": "octopus",
        "trace": "oyster",
        "span": "nudibranch",
    }
    path = TraceServiceClient.span_path(**expected)

    # Check that the path construction is reversible.
    actual = TraceServiceClient.parse_span_path(path)
    assert expected == actual


def test_client_withDEFAULT_CLIENT_INFO():
    client_info = gapic_v1.client_info.ClientInfo()

    with mock.patch.object(
        transports.TraceServiceTransport, "_prep_wrapped_messages"
    ) as prep:
        client = TraceServiceClient(
            credentials=credentials.AnonymousCredentials(), client_info=client_info,
        )
        prep.assert_called_once_with(client_info)

    with mock.patch.object(
        transports.TraceServiceTransport, "_prep_wrapped_messages"
    ) as prep:
        transport_class = TraceServiceClient.get_transport_class()
        transport = transport_class(
            credentials=credentials.AnonymousCredentials(), client_info=client_info,
        )
        prep.assert_called_once_with(client_info)
# --- dpg_oop_wrapper/inputs/__init__.py (repo: sharkbound/dearpygui_oop_wrapper, license: MIT) ---
from .textinput import *
from .intinput import *
from .floatinput import *
# --- babilim/data/__init__.py (repo: penguinmenac3/babilim, license: MIT) ---
from babilim.data.dataset import Dataset
from babilim.data.dataloader import Dataloader
from babilim.data.transformer import Transformer
from babilim.data.image_grid import image_grid_wrap, image_grid_unwrap
__all__ = ['Dataset', 'image_grid_wrap', 'image_grid_unwrap']
# --- dirscan/dirsearch/lib/controller/__init__.py (repo: imfiver/Sec-Tools, license: Apache-2.0) ---
from .controller import *  # noqa: F401
# --- site-root/.py/ctrl/register_fail.py (repo: TED-996/krait-twostones, license: MIT) ---
import krait
import mvc
class RegisterFailController(object):
    def __init__(self):
        pass

    def get_view(self):
        return ".view/register_fail.html"
# --- src/dbspy/gui/spectrum/dbs/__init__.py (repo: ZhengKeli/PositronSpector, license: MIT) ---
from . import raw, peak, res, bg
# --- Scripted/CIP_/CIP/logic/__init__.py (repo: hung-lab/SlicerCIP, license: BSD-3-Clause) ---
from .Util import *
from .SlicerUtil import *
from .geometry_topology_data import *
from .EventsTrigger import *
from . import file_conventions
#from StructuresParameters import *
#from Colors import *
from .timer import *
# --- portality/formcontext/xwalk.py (repo: glauberm/doaj, license: Apache-2.0) ---
from portality import models, lcc
from portality.datasets import licenses, main_license_options
from flask_login import current_user
from portality.util import flash_with_url, listpop
from copy import deepcopy
from portality.formcontext.choices import Choices
def interpret_list(current_values, allowed_values, substitutions):
    current_values = deepcopy(current_values)
    interpreted_fields = {}
    foreign_values = {}

    for cv in current_values:
        if cv not in allowed_values:
            foreign_values[current_values.index(cv)] = cv

    ps = foreign_values.keys()
    ps.sort()

    # FIXME: if the data is broken, just return it as is
    if len(ps) > len(substitutions):
        return current_values

    i = 0
    for k in ps:
        interpreted_fields[substitutions[i].get("field")] = current_values[k]
        current_values[k] = substitutions[i].get("default")
        i += 1

    return current_values, interpreted_fields
def interpret_special(val):
    # if you modify this, make sure to modify reverse_interpret_special as well
    if isinstance(val, basestring):
        if val.lower() == Choices.TRUE.lower():
            return True
        elif val.lower() == Choices.FALSE.lower():
            return False
        elif val.lower() == Choices.NONE.lower():
            return None
        elif val == Choices.digital_archiving_policy_val("none"):
            return None

    if isinstance(val, list):
        if len(val) == 1:
            actual_val = interpret_special(val[0])
            if not actual_val:
                return []
            return val
        return val

    return val
def reverse_interpret_special(val, field=''):
    # if you modify this, make sure to modify interpret_special as well
    if val is None:
        return Choices.NONE
    elif val is True:
        return Choices.TRUE
    elif val is False:
        return Choices.FALSE
    # no need to handle digital archiving policy or other list
    # fields here - empty lists handled below

    if isinstance(val, list):
        if len(val) == 1:
            reverse_actual_val = reverse_interpret_special(val[0], field=field)
            return [reverse_actual_val]
        elif len(val) == 0:
            # mostly it'll just be a None val
            if field == 'digital_archiving_policy':
                return [Choices.digital_archiving_policy_val("none")]
            return [Choices.NONE]
        return val

    return val
def interpret_other(value, other_field_data, other_value=Choices.OTHER, store_other_label=False):
    '''
    Interpret a value list coming from (e.g.) checkboxes when one of
    them says "Other" and allows free-text input.

    The value can also be a string. In that case, if it matches other_value, other_field_data is returned
    instead of the original value. This is for radio buttons with an "Other" option - you only get 1 value
    from the form, but if it's "Other", you still need to replace it with the relevant free text field data.

    :param value: String or list of values from the form.
        checkboxes_field.data basically.
    :param other_field_data: data from the Other inline extra text input field.
        Usually checkboxes_field_other.data or similar.
    :param other_value: Which checkbox has an extra field? Put its value in here. It's "Other" by default.
        More technically: the value which triggers considering and adding the data in other_field to value.
    '''
    # if you modify this, make sure to modify reverse_interpret_other too
    if isinstance(value, basestring):
        if value == other_value:
            return other_field_data
    elif isinstance(value, list):
        value = value[:]
        # if "Other" (or some custom value) is in there, remove it and take the data from the extra text field
        if other_value in value and other_field_data:
            # preserve the order, important for reversing this process when displaying the edit form
            where = value.index(other_value)
            if store_other_label:
                # Needed when multiple items in the list could be freely specified,
                # i.e. unrestricted by the choices for that field.
                # Digital archiving policies is such a field, with both an
                # "Other" choice requiring free text input and a "A national library"
                # choice requiring free text input, presumably with the name
                # of the library.
                value[where] = [other_value, other_field_data]
            else:
                value[where] = other_field_data
    # don't know what else to do, just return it as-is
    return value
def reverse_interpret_other(interpreted_value, possible_original_values, other_value=Choices.OTHER, replace_label=Choices.OTHER):
    '''
    Returns tuple: (main field value or list of values, other field value)
    '''
    # if you modify this, make sure to modify interpret_other too
    other_field_val = ''

    if isinstance(interpreted_value, basestring):
        # A special case first: where the value is the empty string.
        # In that case, the main field was never submitted (e.g. if it was
        # a choice of "Yes", "No" and "Other", none of those were submitted
        # as an answer - maybe it was an optional field).
        if not interpreted_value:
            return None, None

        # if the stored (a.k.a. interpreted) value is not one of the
        # possible values, then the "Other" option must have been
        # selected during initial submission
        # if so, all we've got to do is swap them
        # so the main field gets a value of "Other" or similar
        # and the secondary (a.k.a. other) field gets the currently
        # stored value - resulting in a form that looks exactly like the
        # one initially submitted
        if interpreted_value not in possible_original_values:
            return other_value, interpreted_value

    elif isinstance(interpreted_value, list):
        # 2 copies of the list needed
        interpreted_value = interpreted_value[:]  # don't modify the original list passed in
        for iv in interpreted_value[:]:  # don't modify the list while iterating over it
            # same deal here, if the original list was ['LOCKSS', 'Other']
            # and the secondary field was 'some other policy'
            # then it would have been interpreted by interpret_other
            # into ['LOCKSS', 'some other policy']
            # so now we need to turn that back into
            # (['LOCKSS', 'Other'], 'some other policy')
            if iv not in possible_original_values:
                where = interpreted_value.index(iv)
                if isinstance(iv, list):
                    # This is a field with two or more choices which require
                    # further specification via free text entry.
                    # If we only recorded the free text values, we wouldn't
                    # be able to tell which one relates to which choice.
                    # E.g. ["some other archiving policy", "Library of Chile"]
                    # does not tell us that "some other archiving policy"
                    # is related to the "Other" field, and "Library of Chile"
                    # is related to the "A national library" field.
                    #
                    # [["Other", "some other archiving policy"], ["A national library", "Library of Chile"]]
                    # does tell us that, on the other hand.
                    # It is this case that we are dealing with here.
                    label = iv[0]
                    val = iv[1]
                    if label == replace_label:
                        other_field_val = val
                        interpreted_value[where] = label
                    else:
                        continue
                else:
                    other_field_val = iv
                    interpreted_value[where] = other_value
                break

    return interpreted_value, other_field_val
class JournalGenericXWalk(object):
    @classmethod
    def is_new_editor_group(cls, form, old):
        old_eg = old.editor_group
        new_eg = form.editor_group.data
        return old_eg != new_eg and new_eg is not None and new_eg != ""

    @classmethod
    def is_new_editor(cls, form, old):
        old_ed = old.editor
        new_ed = form.editor.data
        return old_ed != new_ed and new_ed is not None and new_ed != ""
class SuggestionFormXWalk(JournalGenericXWalk):

    _formFields2objectFields = {
        "instructions_authors_url": "bibjson.link.url where bibjson.link.type=author_instructions",
        "oa_statement_url": "bibjson.link.url where bibjson.link.type=oa_statement",
        "aims_scope_url": "bibjson.link.url where bibjson.link.type=aims_scope",
        "submission_charges_url": "bibjson.submission_charges_url",
        "editorial_board_url": "bibjson.link.url where bibjson.link.type=editorial_board",
    }

    @classmethod
    def formField2objectField(cls, field):
        return cls._formFields2objectFields.get(field, field)
    @classmethod
    def form2obj(cls, form):
        suggestion = models.Suggestion()
        bibjson = suggestion.bibjson()

        if form.title.data:
            bibjson.title = form.title.data
        bibjson.add_url(form.url.data, urltype='homepage')
        if form.alternative_title.data:
            bibjson.alternative_title = form.alternative_title.data
        if form.pissn.data:
            bibjson.add_identifier(bibjson.P_ISSN, form.pissn.data)
        if form.eissn.data:
            bibjson.add_identifier(bibjson.E_ISSN, form.eissn.data)
        if form.publisher.data:
            bibjson.publisher = form.publisher.data
        if form.society_institution.data:
            bibjson.institution = form.society_institution.data
        if form.platform.data:
            bibjson.provider = form.platform.data
        if form.contact_name.data or form.contact_email.data:
            suggestion.add_contact(form.contact_name.data, form.contact_email.data)
        if form.country.data:
            bibjson.country = form.country.data

        if interpret_special(form.processing_charges.data):
            bibjson.set_apc(form.processing_charges_currency.data, form.processing_charges_amount.data)
        if form.processing_charges_url.data:
            bibjson.apc_url = form.processing_charges_url.data

        if interpret_special(form.submission_charges.data):
            bibjson.set_submission_charges(form.submission_charges_currency.data, form.submission_charges_amount.data)
        if form.submission_charges_url.data:
            bibjson.submission_charges_url = form.submission_charges_url.data

        suggestion.set_articles_last_year(form.articles_last_year.data, form.articles_last_year_url.data)

        if interpret_special(form.waiver_policy.data):
            bibjson.add_url(form.waiver_policy_url.data, 'waiver_policy')

        # checkboxes
        if interpret_special(form.digital_archiving_policy.data) or form.digital_archiving_policy_url.data:
            archiving_policies = interpret_special(form.digital_archiving_policy.data)
            archiving_policies = interpret_other(archiving_policies, form.digital_archiving_policy_other.data, store_other_label=True)
            archiving_policies = interpret_other(archiving_policies, form.digital_archiving_policy_library.data, Choices.digital_archiving_policy_val("library"), store_other_label=True)
            bibjson.set_archiving_policy(archiving_policies, form.digital_archiving_policy_url.data)

        if form.crawl_permission.data and form.crawl_permission.data != 'None':
            bibjson.allows_fulltext_indexing = interpret_special(form.crawl_permission.data)  # just binary

        # checkboxes
        article_ids = interpret_special(form.article_identifiers.data)
        article_ids = interpret_other(article_ids, form.article_identifiers_other.data)
        if article_ids:
            bibjson.persistent_identifier_scheme = article_ids

        if form.metadata_provision.data and form.metadata_provision.data != 'None':
            suggestion.article_metadata = interpret_special(form.metadata_provision.data)  # just binary

        if (form.download_statistics.data and form.download_statistics.data != 'None') or form.download_statistics_url.data:
            bibjson.set_article_statistics(form.download_statistics_url.data, interpret_special(form.download_statistics.data))

        if form.first_fulltext_oa_year.data:
            bibjson.set_oa_start(year=form.first_fulltext_oa_year.data)

        # checkboxes
        fulltext_format = interpret_other(form.fulltext_format.data, form.fulltext_format_other.data)
        if fulltext_format:
            bibjson.format = fulltext_format

        if form.keywords.data:
            bibjson.set_keywords(form.keywords.data)  # tag list field

        if form.languages.data:
            bibjson.set_language(form.languages.data)  # select multiple field - gives a list back

        bibjson.add_url(form.editorial_board_url.data, urltype='editorial_board')

        if form.review_process.data or form.review_process_url.data:
            bibjson.set_editorial_review(form.review_process.data, form.review_process_url.data)

        bibjson.add_url(form.aims_scope_url.data, urltype='aims_scope')
        bibjson.add_url(form.instructions_authors_url.data, urltype='author_instructions')

        if (form.plagiarism_screening.data and form.plagiarism_screening.data != 'None') or form.plagiarism_screening_url.data:
            bibjson.set_plagiarism_detection(
                form.plagiarism_screening_url.data,
                has_detection=interpret_special(form.plagiarism_screening.data)
            )

        if form.publication_time.data:
            bibjson.publication_time = form.publication_time.data

        bibjson.add_url(form.oa_statement_url.data, urltype='oa_statement')

        license_type = interpret_other(form.license.data, form.license_other.data)
        license_title = license_type
        if license_type in licenses:
            by = licenses[license_type]['BY']
            nc = licenses[license_type]['NC']
            nd = licenses[license_type]['ND']
            sa = licenses[license_type]['SA']
            license_title = licenses[license_type]['title']
        elif form.license_checkbox.data:
            by = 'BY' in form.license_checkbox.data
            nc = 'NC' in form.license_checkbox.data
            nd = 'ND' in form.license_checkbox.data
            sa = 'SA' in form.license_checkbox.data
            license_title = license_type
        else:
            by = nc = nd = sa = None
            license_title = license_type

        bibjson.set_license(
            license_title,
            license_type,
            url=form.license_url.data,
            open_access=interpret_special(form.open_access.data),
            by=by, nc=nc, nd=nd, sa=sa,
            embedded=interpret_special(form.license_embedded.data),
            embedded_example_url=form.license_embedded_url.data
        )

        # checkboxes
        deposit_policies = interpret_special(form.deposit_policy.data)  # need empty list if it's just "None"
        deposit_policies = interpret_other(deposit_policies, form.deposit_policy_other.data)
        if deposit_policies:
            bibjson.deposit_policy = deposit_policies

        if form.copyright.data and form.copyright.data != 'None':
            holds_copyright = interpret_special(form.copyright.data)
            bibjson.set_author_copyright(form.copyright_url.data, holds_copyright=holds_copyright)

        if form.publishing_rights.data and form.publishing_rights.data != 'None':
            publishing_rights = interpret_special(form.publishing_rights.data)
            bibjson.set_author_publishing_rights(form.publishing_rights_url.data, holds_rights=publishing_rights)

        if getattr(form, "suggester_name", None) or getattr(form, "suggester_email", None):
            name = None
            email = None
            if getattr(form, "suggester_name", None):
                name = form.suggester_name.data
            if getattr(form, "suggester_email", None):
                email = form.suggester_email.data
            suggestion.set_suggester(name, email)

        # admin stuff
        if getattr(form, 'application_status', None):
            suggestion.set_application_status(form.application_status.data)

        if getattr(form, 'notes', None):
            for formnote in form.notes.data:
                if formnote["note"]:
                    suggestion.add_note(formnote["note"])

        if getattr(form, 'subject', None):
            new_subjects = []
            for code in form.subject.data:
                sobj = {"scheme": 'LCC', "term": lcc.lookup_code(code), "code": code}
                new_subjects.append(sobj)
            bibjson.set_subjects(new_subjects)

        if getattr(form, 'owner', None):
            owns = form.owner.data
            if owns:
                owns = owns.strip()
                suggestion.set_owner(owns)

        if getattr(form, 'editor_group', None):
            editor_group = form.editor_group.data
            if editor_group:
                editor_group = editor_group.strip()
                suggestion.set_editor_group(editor_group)

        if getattr(form, "editor", None):
            editor = form.editor.data
            if editor:
                editor = editor.strip()
                suggestion.set_editor(editor)

        if getattr(form, "doaj_seal", None):
            suggestion.set_seal(form.doaj_seal.data)

        # continuations information
        if getattr(form, "replaces", None):
            bibjson.replaces = form.replaces.data
        if getattr(form, "is_replaced_by", None):
            bibjson.is_replaced_by = form.is_replaced_by.data
        if getattr(form, "discontinued_date", None):
            bibjson.discontinued_date = form.discontinued_date.data

        return suggestion
    @classmethod
    def obj2form(cls, obj):
        forminfo = {}
        bibjson = obj.bibjson()

        forminfo['title'] = bibjson.title
        forminfo['url'] = bibjson.get_single_url(urltype='homepage')
        forminfo['alternative_title'] = bibjson.alternative_title
        forminfo['pissn'] = listpop(bibjson.get_identifiers(idtype=bibjson.P_ISSN))
        forminfo['eissn'] = listpop(bibjson.get_identifiers(idtype=bibjson.E_ISSN))
        forminfo['publisher'] = bibjson.publisher
        forminfo['society_institution'] = bibjson.institution
        forminfo['platform'] = bibjson.provider
        forminfo['contact_name'] = listpop(obj.contacts(), {}).get('name')
        forminfo['contact_email'] = listpop(obj.contacts(), {}).get('email')
        forminfo['confirm_contact_email'] = forminfo['contact_email']
        forminfo['country'] = bibjson.country

        forminfo["replaces"] = bibjson.replaces
        forminfo["is_replaced_by"] = bibjson.is_replaced_by
        forminfo["discontinued_date"] = bibjson.discontinued_date

        apc = bibjson.apc
        if apc:
            forminfo['processing_charges'] = reverse_interpret_special(True)
            forminfo['processing_charges_currency'] = apc.get('currency')
            forminfo['processing_charges_amount'] = apc.get('average_price')
        else:
            forminfo['processing_charges'] = reverse_interpret_special(False)
        forminfo['processing_charges_url'] = bibjson.apc_url

        submission_charges = bibjson.submission_charges
        if submission_charges:
            forminfo['submission_charges'] = reverse_interpret_special(True)
            forminfo['submission_charges_currency'] = submission_charges.get('currency')
            forminfo['submission_charges_amount'] = submission_charges.get('average_price')
        else:
            forminfo['submission_charges'] = reverse_interpret_special(False)
        forminfo['submission_charges_url'] = bibjson.submission_charges_url

        articles_last_year = obj.articles_last_year
        if articles_last_year:
            forminfo['articles_last_year'] = articles_last_year.get('count')
            forminfo['articles_last_year_url'] = articles_last_year.get('url')

        forminfo['waiver_policy_url'] = bibjson.get_single_url(urltype='waiver_policy')
        forminfo['waiver_policy'] = reverse_interpret_special(forminfo['waiver_policy_url'] is not None and forminfo['waiver_policy_url'] != '')

        #archiving_policies = reverse_interpret_special(bibjson.archiving_policy.get('policy', []), field='digital_archiving_policy')
        #substitutions = [
        #    {"default": Choices.digital_archiving_policy_val("library"), "field": "digital_archiving_policy_library"},
        #    {"default": Choices.digital_archiving_policy_val("other"), "field": "digital_archiving_policy_other"}
        #]
        #archiving_policies, special_fields = interpret_list(
        #    archiving_policies,                       # current values
        #    Choices.digital_archiving_policy_list(),  # allowed values
        #    substitutions                             # substitution instructions
        #)
        #forminfo.update(special_fields)

        # checkboxes
        archiving_policies = reverse_interpret_special(bibjson.archiving_policy.get('policy', []), field='digital_archiving_policy')

        # for backwards compatibility we keep the "Other" field first in the reverse xwalk:
        # previously we didn't store which free text value was which (Other, or a specific
        # national library), so in those cases just put it in "Other" - it'll be correct
        # most of the time
        archiving_policies, forminfo['digital_archiving_policy_other'] = \
            reverse_interpret_other(archiving_policies, Choices.digital_archiving_policy_list())

        archiving_policies, forminfo['digital_archiving_policy_library'] = \
            reverse_interpret_other(
                archiving_policies,
                Choices.digital_archiving_policy_list(),
                other_value=Choices.digital_archiving_policy_val("library"),
                replace_label=Choices.digital_archiving_policy_label("library")
            )

        forminfo['digital_archiving_policy'] = archiving_policies
        forminfo['digital_archiving_policy_url'] = bibjson.archiving_policy.get('url')

        forminfo['crawl_permission'] = reverse_interpret_special(bibjson.allows_fulltext_indexing)

        # checkboxes
        article_ids = reverse_interpret_special(bibjson.persistent_identifier_scheme)
        article_ids, forminfo['article_identifiers_other'] = \
            reverse_interpret_other(article_ids, Choices.article_identifiers_list())
        forminfo['article_identifiers'] = article_ids

        forminfo['metadata_provision'] = reverse_interpret_special(obj.article_metadata)

        forminfo['download_statistics'] = reverse_interpret_special(bibjson.article_statistics.get('statistics'))
        forminfo['download_statistics_url'] = bibjson.article_statistics.get('url')

        forminfo['first_fulltext_oa_year'] = bibjson.oa_start.get('year')

        # checkboxes
        forminfo['fulltext_format'], forminfo['fulltext_format_other'] = \
            reverse_interpret_other(bibjson.format, Choices.fulltext_format_list())

        forminfo['keywords'] = bibjson.keywords
        forminfo['languages'] = bibjson.language
        forminfo['editorial_board_url'] = bibjson.get_single_url('editorial_board')
        forminfo['review_process'] = bibjson.editorial_review.get('process')
        forminfo['review_process_url'] = bibjson.editorial_review.get('url')
        forminfo['aims_scope_url'] = bibjson.get_single_url('aims_scope')
        forminfo['instructions_authors_url'] = bibjson.get_single_url('author_instructions')
        forminfo['plagiarism_screening'] = reverse_interpret_special(bibjson.plagiarism_detection.get('detection'))
        forminfo['plagiarism_screening_url'] = bibjson.plagiarism_detection.get('url')
        forminfo['publication_time'] = bibjson.publication_time
        forminfo['oa_statement_url'] = bibjson.get_single_url('oa_statement')

        license = bibjson.get_license()
        license = license if license else {}  # reinterpret the None val
        forminfo['license'], forminfo['license_other'] = reverse_interpret_other(license.get('type'), Choices.licence_list())

        if forminfo['license_other']:
            forminfo['license_checkbox'] = []
            if license.get('BY'): forminfo['license_checkbox'].append('BY')
            if license.get('SA'): forminfo['license_checkbox'].append('SA')
            if license.get('NC'): forminfo['license_checkbox'].append('NC')
            if license.get('ND'): forminfo['license_checkbox'].append('ND')

        forminfo['license_url'] = license.get('url')
        forminfo['open_access'] = reverse_interpret_special(license.get('open_access'))
        forminfo['license_embedded'] = reverse_interpret_special(license.get('embedded'))
        forminfo['license_embedded_url'] = license.get('embedded_example_url')

        # checkboxes
        forminfo['deposit_policy'], forminfo['deposit_policy_other'] = \
            reverse_interpret_other(reverse_interpret_special(bibjson.deposit_policy), Choices.deposit_policy_list())

        forminfo['copyright'] = reverse_interpret_special(bibjson.author_copyright.get('copyright', ''))
        forminfo['copyright_url'] = bibjson.author_copyright.get('url')
        forminfo['publishing_rights'] = reverse_interpret_special(bibjson.author_publishing_rights.get('publishing_rights', ''))
        forminfo['publishing_rights_url'] = bibjson.author_publishing_rights.get('url')

        forminfo['suggester_name'] = obj.suggester.get('name')
        forminfo['suggester_email'] = obj.suggester.get('email')
        forminfo['suggester_email_confirm'] = forminfo['suggester_email']

        forminfo['application_status'] = obj.application_status

        forminfo['notes'] = obj.ordered_notes

        forminfo['subject'] = []
        for s in bibjson.subjects():
            if "code" in s:
                forminfo['subject'].append(s['code'])

        forminfo['owner'] = obj.owner

        if obj.editor_group is not None:
            forminfo['editor_group'] = obj.editor_group
        if obj.editor is not None:
            forminfo['editor'] = obj.editor

        forminfo['doaj_seal'] = obj.has_seal()

        return forminfo

class JournalFormXWalk(JournalGenericXWalk):

    @classmethod
    def form2obj(cls, form):
        journal = models.Journal()
        bibjson = journal.bibjson()

        # The if statements wrapping practically every field are there because this
        # form is also used to edit old journals, which don't necessarily have most
        # of this info. They also allow admins to delete the contents of any field
        # they wish: by ticking the "Allow incomplete form" checkbox and clearing a
        # field, the relevant if condition(s) will simply *not* add that field to
        # the new journal object being constructed.
        # add_url in the journal model has a safeguard against empty URLs.

        if form.title.data:
            bibjson.title = form.title.data
        bibjson.add_url(form.url.data, urltype='homepage')
        if form.alternative_title.data:
            bibjson.alternative_title = form.alternative_title.data
        if form.pissn.data:
            bibjson.add_identifier(bibjson.P_ISSN, form.pissn.data)
        if form.eissn.data:
            bibjson.add_identifier(bibjson.E_ISSN, form.eissn.data)
        if form.publisher.data:
            bibjson.publisher = form.publisher.data
        if form.society_institution.data:
            bibjson.institution = form.society_institution.data
        if form.platform.data:
            bibjson.provider = form.platform.data
        if form.contact_name.data or form.contact_email.data:
            journal.add_contact(form.contact_name.data, form.contact_email.data)
        if form.country.data:
            bibjson.country = form.country.data

        if interpret_special(form.processing_charges.data):
            bibjson.set_apc(form.processing_charges_currency.data, form.processing_charges_amount.data)
        if form.processing_charges_url.data:
            bibjson.apc_url = form.processing_charges_url.data

        if interpret_special(form.submission_charges.data):
            bibjson.set_submission_charges(form.submission_charges_currency.data, form.submission_charges_amount.data)
        if form.submission_charges_url.data:
            bibjson.submission_charges_url = form.submission_charges_url.data

        if interpret_special(form.waiver_policy.data):
            bibjson.add_url(form.waiver_policy_url.data, 'waiver_policy')

        # checkboxes
        if interpret_special(form.digital_archiving_policy.data) or form.digital_archiving_policy_url.data:
            archiving_policies = interpret_special(form.digital_archiving_policy.data)
            archiving_policies = interpret_other(archiving_policies, form.digital_archiving_policy_other.data, store_other_label=True)
            archiving_policies = interpret_other(archiving_policies, form.digital_archiving_policy_library.data, Choices.digital_archiving_policy_val("library"), store_other_label=True)
            bibjson.set_archiving_policy(archiving_policies, form.digital_archiving_policy_url.data)

        if form.crawl_permission.data and form.crawl_permission.data != 'None':
            bibjson.allows_fulltext_indexing = interpret_special(form.crawl_permission.data)  # just binary

        # checkboxes
        article_ids = interpret_special(form.article_identifiers.data)
        article_ids = interpret_other(article_ids, form.article_identifiers_other.data)
        if article_ids:
            bibjson.persistent_identifier_scheme = article_ids

        if (form.download_statistics.data and form.download_statistics.data != 'None') or form.download_statistics_url.data:
            bibjson.set_article_statistics(form.download_statistics_url.data, interpret_special(form.download_statistics.data))

        if form.first_fulltext_oa_year.data:
            bibjson.set_oa_start(year=form.first_fulltext_oa_year.data)

        # checkboxes
        fulltext_format = interpret_other(form.fulltext_format.data, form.fulltext_format_other.data)
        if fulltext_format:
            bibjson.format = fulltext_format

        if form.keywords.data:
            bibjson.set_keywords(form.keywords.data)  # tag list field

        if form.languages.data:
            bibjson.set_language(form.languages.data)  # select multiple field - gives a list back

        bibjson.add_url(form.editorial_board_url.data, urltype='editorial_board')

        if form.review_process.data or form.review_process_url.data:
            bibjson.set_editorial_review(form.review_process.data, form.review_process_url.data)

        bibjson.add_url(form.aims_scope_url.data, urltype='aims_scope')
        bibjson.add_url(form.instructions_authors_url.data, urltype='author_instructions')

        if (form.plagiarism_screening.data and form.plagiarism_screening.data != 'None') or form.plagiarism_screening_url.data:
            bibjson.set_plagiarism_detection(
                form.plagiarism_screening_url.data,
                has_detection=interpret_special(form.plagiarism_screening.data)
            )

        if form.publication_time.data:
            bibjson.publication_time = form.publication_time.data

        bibjson.add_url(form.oa_statement_url.data, urltype='oa_statement')

        license_type = interpret_other(form.license.data, form.license_other.data)
        if interpret_special(license_type):
            # "None" and "False" as strings (as they come out of the WTForms
            # processing) are interpreted correctly by this check, so "None"
            # licenses should not appear
            if license_type in licenses:
                by = licenses[license_type]['BY']
                nc = licenses[license_type]['NC']
                nd = licenses[license_type]['ND']
                sa = licenses[license_type]['SA']
                license_title = licenses[license_type]['title']
            elif form.license_checkbox.data:
                by = 'BY' in form.license_checkbox.data
                nc = 'NC' in form.license_checkbox.data
                nd = 'ND' in form.license_checkbox.data
                sa = 'SA' in form.license_checkbox.data
                license_title = license_type
            else:
                by = nc = nd = sa = None
                license_title = license_type

            bibjson.set_license(
                license_title,
                license_type,
                url=form.license_url.data,
                open_access=interpret_special(form.open_access.data),
                by=by, nc=nc, nd=nd, sa=sa,
                embedded=interpret_special(form.license_embedded.data),
                embedded_example_url=form.license_embedded_url.data
            )

        # checkboxes
        deposit_policies = interpret_special(form.deposit_policy.data)  # need empty list if it's just "None"
        deposit_policies = interpret_other(deposit_policies, form.deposit_policy_other.data)
        if deposit_policies:
            bibjson.deposit_policy = deposit_policies

        if form.copyright.data and form.copyright.data != 'None':
            holds_copyright = interpret_special(form.copyright.data)
            bibjson.set_author_copyright(form.copyright_url.data, holds_copyright=holds_copyright)

        if form.publishing_rights.data and form.publishing_rights.data != 'None':
            publishing_rights = interpret_special(form.publishing_rights.data)
            bibjson.set_author_publishing_rights(form.publishing_rights_url.data, holds_rights=publishing_rights)

        for formnote in form.notes.data:
            if formnote["note"]:
                journal.add_note(formnote["note"])

        new_subjects = []
        for code in form.subject.data:
            sobj = {"scheme": 'LCC', "term": lcc.lookup_code(code), "code": code}
            new_subjects.append(sobj)
        bibjson.set_subjects(new_subjects)

        if getattr(form, 'owner', None):
            owner = form.owner.data
            if owner:
                owner = owner.strip()
                journal.set_owner(owner)

        if getattr(form, 'editor_group', None):
            editor_group = form.editor_group.data
            if editor_group:
                editor_group = editor_group.strip()
                journal.set_editor_group(editor_group)

        if getattr(form, "editor", None):
            editor = form.editor.data
            if editor:
                editor = editor.strip()
                journal.set_editor(editor)

        if getattr(form, "doaj_seal", None):
            journal.set_seal(form.doaj_seal.data)

        # continuations information
        if getattr(form, "replaces", None):
            bibjson.replaces = form.replaces.data
        if getattr(form, "is_replaced_by", None):
            bibjson.is_replaced_by = form.is_replaced_by.data
        if getattr(form, "discontinued_date", None):
            bibjson.discontinued_date = form.discontinued_date.data

        # old fields - only create them in the journal record if the values actually exist;
        # use interpret_special in the test condition in case 'None' comes back from the form
        if getattr(form, 'author_pays', None):
            if interpret_special(form.author_pays.data):
                bibjson.author_pays = form.author_pays.data
        if getattr(form, 'author_pays_url', None):
            if interpret_special(form.author_pays_url.data):
                bibjson.author_pays_url = form.author_pays_url.data
        if getattr(form, 'oa_end_year', None):
            if interpret_special(form.oa_end_year.data):
                bibjson.set_oa_end(form.oa_end_year.data)

        return journal
    @classmethod
    def obj2form(cls, obj):
        forminfo = {}
        bibjson = obj.bibjson()

        forminfo['title'] = bibjson.title
        forminfo['url'] = bibjson.get_single_url(urltype='homepage')
        forminfo['alternative_title'] = bibjson.alternative_title
        forminfo['pissn'] = listpop(bibjson.get_identifiers(idtype=bibjson.P_ISSN))
        forminfo['eissn'] = listpop(bibjson.get_identifiers(idtype=bibjson.E_ISSN))
        forminfo['publisher'] = bibjson.publisher
        forminfo['society_institution'] = bibjson.institution
        forminfo['platform'] = bibjson.provider
        forminfo['contact_name'] = listpop(obj.contacts(), {}).get('name')
        forminfo['contact_email'] = listpop(obj.contacts(), {}).get('email')
        forminfo['confirm_contact_email'] = forminfo['contact_email']
        forminfo['country'] = bibjson.country

        forminfo["replaces"] = bibjson.replaces
        forminfo["is_replaced_by"] = bibjson.is_replaced_by
        forminfo["discontinued_date"] = bibjson.discontinued_date

        apc = bibjson.apc
        if apc:
            forminfo['processing_charges'] = reverse_interpret_special(True)
            forminfo['processing_charges_currency'] = apc.get('currency')
            forminfo['processing_charges_amount'] = apc.get('average_price')
        else:
            forminfo['processing_charges'] = reverse_interpret_special(False)
        forminfo['processing_charges_url'] = bibjson.apc_url

        submission_charges = bibjson.submission_charges
        if submission_charges:
            forminfo['submission_charges'] = reverse_interpret_special(True)
            forminfo['submission_charges_currency'] = submission_charges.get('currency')
            forminfo['submission_charges_amount'] = submission_charges.get('average_price')
        else:
            forminfo['submission_charges'] = reverse_interpret_special(False)
        forminfo['submission_charges_url'] = bibjson.submission_charges_url

        forminfo['waiver_policy_url'] = bibjson.get_single_url(urltype='waiver_policy')
        forminfo['waiver_policy'] = reverse_interpret_special(forminfo['waiver_policy_url'] is not None and forminfo['waiver_policy_url'] != '')

        #archiving_policies = reverse_interpret_special(bibjson.archiving_policy.get('policy', []), field='digital_archiving_policy')
        #substitutions = [
        #    {"default": Choices.digital_archiving_policy_val("library"), "field": "digital_archiving_policy_library"},
        #    {"default": Choices.digital_archiving_policy_val("other"), "field": "digital_archiving_policy_other"}
        #]
        #archiving_policies, special_fields = interpret_list(
        #    archiving_policies,                       # current values
        #    Choices.digital_archiving_policy_list(),  # allowed values
        #    substitutions                             # substitution instructions
        #)
        #forminfo.update(special_fields)

        # checkboxes
        archiving_policies = reverse_interpret_special(bibjson.archiving_policy.get('policy', []), field='digital_archiving_policy')

        # for backwards compatibility we keep the "Other" field first in the reverse xwalk:
        # previously we didn't store which free text value was which (Other, or a specific
        # national library), so in those cases just put it in "Other" - it'll be correct
        # most of the time
        archiving_policies, forminfo['digital_archiving_policy_other'] = \
            reverse_interpret_other(archiving_policies, Choices.digital_archiving_policy_list())

        archiving_policies, forminfo['digital_archiving_policy_library'] = \
            reverse_interpret_other(
                archiving_policies,
                Choices.digital_archiving_policy_list(),
                other_value=Choices.digital_archiving_policy_val("library"),
                replace_label=Choices.digital_archiving_policy_label("library")
            )

        forminfo['digital_archiving_policy'] = archiving_policies
        forminfo['digital_archiving_policy_url'] = bibjson.archiving_policy.get('url')

        forminfo['crawl_permission'] = reverse_interpret_special(bibjson.allows_fulltext_indexing)

        # checkboxes
        article_ids = reverse_interpret_special(bibjson.persistent_identifier_scheme)
        article_ids, forminfo['article_identifiers_other'] = \
            reverse_interpret_other(article_ids, Choices.article_identifiers_list())
        forminfo['article_identifiers'] = article_ids

        forminfo['download_statistics'] = reverse_interpret_special(bibjson.article_statistics.get('statistics'))
        forminfo['download_statistics_url'] = bibjson.article_statistics.get('url')

        forminfo['first_fulltext_oa_year'] = bibjson.oa_start.get('year')

        # checkboxes
        forminfo['fulltext_format'], forminfo['fulltext_format_other'] = \
            reverse_interpret_other(bibjson.format, Choices.fulltext_format_list())

        forminfo['keywords'] = bibjson.keywords
        forminfo['languages'] = bibjson.language
        forminfo['editorial_board_url'] = bibjson.get_single_url('editorial_board')
        forminfo['review_process'] = bibjson.editorial_review.get('process', '')
        forminfo['review_process_url'] = bibjson.editorial_review.get('url')
        forminfo['aims_scope_url'] = bibjson.get_single_url('aims_scope')
        forminfo['instructions_authors_url'] = bibjson.get_single_url('author_instructions')
        forminfo['plagiarism_screening'] = reverse_interpret_special(bibjson.plagiarism_detection.get('detection'))
        forminfo['plagiarism_screening_url'] = bibjson.plagiarism_detection.get('url')
        forminfo['publication_time'] = bibjson.publication_time
        forminfo['oa_statement_url'] = bibjson.get_single_url('oa_statement')

        license = bibjson.get_license()
        license = license if license else {}  # reinterpret the None val
        forminfo['license'], forminfo['license_other'] = reverse_interpret_other(license.get('type'), Choices.licence_list())

        if forminfo['license_other']:
            forminfo['license_checkbox'] = []
            if license.get('BY'): forminfo['license_checkbox'].append('BY')
            if license.get('SA'): forminfo['license_checkbox'].append('SA')
            if license.get('NC'): forminfo['license_checkbox'].append('NC')
            if license.get('ND'): forminfo['license_checkbox'].append('ND')

        forminfo['license_url'] = license.get('url')
        forminfo['open_access'] = reverse_interpret_special(license.get('open_access'))
        forminfo['license_embedded'] = reverse_interpret_special(license.get('embedded'))
        forminfo['license_embedded_url'] = license.get('embedded_example_url')

        # checkboxes
        forminfo['deposit_policy'], forminfo['deposit_policy_other'] = \
            reverse_interpret_other(reverse_interpret_special(bibjson.deposit_policy), Choices.deposit_policy_list())

        forminfo['copyright'] = reverse_interpret_special(bibjson.author_copyright.get('copyright', ''))
        forminfo['copyright_url'] = bibjson.author_copyright.get('url')
        forminfo['publishing_rights'] = reverse_interpret_special(bibjson.author_publishing_rights.get('publishing_rights', ''))
        forminfo['publishing_rights_url'] = bibjson.author_publishing_rights.get('url')

        forminfo['notes'] = obj.ordered_notes

        forminfo['subject'] = []
        for s in bibjson.subjects():
            if "code" in s:
                forminfo['subject'].append(s['code'])

        forminfo['owner'] = obj.owner

        if obj.editor_group is not None:
            forminfo['editor_group'] = obj.editor_group
        if obj.editor is not None:
            forminfo['editor'] = obj.editor

        forminfo['doaj_seal'] = obj.has_seal()

        # old fields - only show them if the values actually exist in the journal record
        if bibjson.author_pays:
            forminfo['author_pays'] = bibjson.author_pays
        if bibjson.author_pays_url:
            forminfo['author_pays_url'] = bibjson.author_pays_url
        if bibjson.oa_end:
            forminfo['oa_end_year'] = bibjson.oa_end.get('year')

        return forminfo
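The crosswalk classes above all lean on the same pattern: a class-level dict maps flat form field names to nested object field paths, with unknown names falling through unchanged. A minimal standalone sketch of that lookup (hypothetical `MiniXWalk` class, not the real DOAJ code, which also depends on `models` and the interpret helpers) might look like:

```python
# Hypothetical, simplified sketch of the SuggestionFormXWalk field-mapping
# pattern: dict.get with the field itself as the default means any form
# field without an explicit mapping passes through under its own name.
class MiniXWalk:
    _form_fields_to_object_fields = {
        "oa_statement_url": "bibjson.link.url where bibjson.link.type=oa_statement",
        "submission_charges_url": "bibjson.submission_charges_url",
    }

    @classmethod
    def form_field_to_object_field(cls, field):
        # unmapped names (e.g. "title") fall through unchanged
        return cls._form_fields_to_object_fields.get(field, field)


print(MiniXWalk.form_field_to_object_field("submission_charges_url"))
print(MiniXWalk.form_field_to_object_field("title"))
```

This is why `formField2objectField` needs no special-casing for unmapped fields: the default argument to `dict.get` handles the identity mapping.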
# --- grab_convert_from_libgen/__init__.py (repo: willmeyers/grab-from-libgen) ---
from .search import LibgenSearch
# --- mpl_interactions/__init__.py (repo: samanthahamilton/mpl-interactions) ---
from ._version import __version__, version_info
from .generic import *
from .helpers import *
from .pyplot import *
from .utils import *
# --- files/dirsearch/lib/connection/__init__.py (repo: Thmyris/linux.cf) ---
from .RequestException import *
from .Requester import *
from .Response import *
# --- rpyc_mem/connect/__init__.py (repo: m0hithreddy/rpyc-mem) ---
from .rpyc_mem_connect import RpycMemConnect
| 22.5 | 44 | 0.888889 | 6 | 45 | 6.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088889 | 45 | 1 | 45 | 45 | 0.926829 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
467d564e1863e5898a37aab26144c782ef5ed9be | 2,480 | py | Python | buildroot/support/testing/tests/boot/test_syslinux.py | superm1/operating-system | 142e7df6cfe3d83e9b19f2b8d100378e9d28ce84 | [
"Apache-2.0"
] | 349 | 2021-08-17T08:46:53.000Z | 2022-03-30T06:25:25.000Z | buildroot/support/testing/tests/boot/test_syslinux.py | TopSWorld/operating-system | 99a4d4ea75e5afd53f7e71422726f9d3200b25a3 | [
"Apache-2.0"
] | 2 | 2022-01-14T21:22:11.000Z | 2022-01-15T21:59:24.000Z | buildroot/support/testing/tests/boot/test_syslinux.py | TopSWorld/operating-system | 99a4d4ea75e5afd53f7e71422726f9d3200b25a3 | [
"Apache-2.0"
] | 12 | 2021-08-17T20:10:30.000Z | 2022-01-06T10:52:54.000Z | import infra.basetest
class TestSysLinuxBase(infra.basetest.BRTest):
x86_toolchain_config = \
"""
BR2_x86_i686=y
BR2_TOOLCHAIN_EXTERNAL=y
BR2_TOOLCHAIN_EXTERNAL_CUSTOM=y
BR2_TOOLCHAIN_EXTERNAL_DOWNLOAD=y
BR2_TOOLCHAIN_EXTERNAL_URL="http://toolchains.bootlin.com/downloads/releases/toolchains/x86-i686/tarballs/x86-i686--glibc--bleeding-edge-2018.11-1.tar.bz2"
BR2_TOOLCHAIN_EXTERNAL_GCC_8=y
BR2_TOOLCHAIN_EXTERNAL_HEADERS_4_14=y
BR2_TOOLCHAIN_EXTERNAL_CUSTOM_GLIBC=y
BR2_TOOLCHAIN_EXTERNAL_CXX=y
"""
x86_64_toolchain_config = \
"""
BR2_x86_64=y
BR2_x86_corei7=y
BR2_TOOLCHAIN_EXTERNAL=y
BR2_TOOLCHAIN_EXTERNAL_CUSTOM=y
BR2_TOOLCHAIN_EXTERNAL_DOWNLOAD=y
BR2_TOOLCHAIN_EXTERNAL_URL="http://toolchains.bootlin.com/downloads/releases/toolchains/x86-64-core-i7/tarballs/x86-64-core-i7--glibc--stable-2018.11-1.tar.bz2"
BR2_TOOLCHAIN_EXTERNAL_GCC_7=y
BR2_TOOLCHAIN_EXTERNAL_HEADERS_4_1=y
BR2_TOOLCHAIN_EXTERNAL_CXX=y
BR2_TOOLCHAIN_EXTERNAL_HAS_SSP=y
BR2_TOOLCHAIN_EXTERNAL_CUSTOM_GLIBC=y
"""
syslinux_legacy_config = \
"""
BR2_TARGET_SYSLINUX=y
BR2_TARGET_SYSLINUX_ISOLINUX=y
BR2_TARGET_SYSLINUX_PXELINUX=y
BR2_TARGET_SYSLINUX_MBR=y
"""
syslinux_efi_config = \
"""
BR2_TARGET_SYSLINUX=y
BR2_TARGET_SYSLINUX_EFI=y
"""
class TestSysLinuxX86LegacyBios(TestSysLinuxBase):
config = \
TestSysLinuxBase.x86_toolchain_config + \
infra.basetest.MINIMAL_CONFIG + \
TestSysLinuxBase.syslinux_legacy_config
def test_run(self):
pass
class TestSysLinuxX86EFI(TestSysLinuxBase):
config = \
TestSysLinuxBase.x86_toolchain_config + \
infra.basetest.MINIMAL_CONFIG + \
TestSysLinuxBase.syslinux_efi_config
def test_run(self):
pass
class TestSysLinuxX86_64LegacyBios(TestSysLinuxBase):
config = \
TestSysLinuxBase.x86_64_toolchain_config + \
infra.basetest.MINIMAL_CONFIG + \
TestSysLinuxBase.syslinux_legacy_config
def test_run(self):
pass
class TestSysLinuxX86_64EFI(TestSysLinuxBase):
config = \
TestSysLinuxBase.x86_64_toolchain_config + \
infra.basetest.MINIMAL_CONFIG + \
TestSysLinuxBase.syslinux_efi_config
def test_run(self):
pass
| 28.837209 | 168 | 0.695161 | 282 | 2,480 | 5.698582 | 0.223404 | 0.049782 | 0.211574 | 0.196017 | 0.761668 | 0.761668 | 0.695084 | 0.654014 | 0.581207 | 0.536403 | 0 | 0.058268 | 0.231855 | 2,480 | 85 | 169 | 29.176471 | 0.785302 | 0 | 0 | 0.705882 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.117647 | false | 0.117647 | 0.029412 | 0 | 0.529412 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
469a7ab128c07aeec05bb5d8603be278e75b7f52 | 208 | py | Python | tests/base.py | c-e-p/Flask-Tus | c1c9c622b30e27f7dce503790e32e0dab655b1b0 | [
"MIT"
] | null | null | null | tests/base.py | c-e-p/Flask-Tus | c1c9c622b30e27f7dce503790e32e0dab655b1b0 | [
"MIT"
] | null | null | null | tests/base.py | c-e-p/Flask-Tus | c1c9c622b30e27f7dce503790e32e0dab655b1b0 | [
"MIT"
] | null | null | null | from flask_testing import TestCase
class BaseTestCase(TestCase):
""" Base Tests """
def create_app(self):
return
def setUp(self):
return
def tearDown(self):
return | 14.857143 | 34 | 0.610577 | 23 | 208 | 5.434783 | 0.695652 | 0.24 | 0.208 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.302885 | 208 | 14 | 35 | 14.857143 | 0.862069 | 0.048077 | 0 | 0.375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.375 | false | 0 | 0.125 | 0.375 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
46a08dcf080ba196dcbeaa72b107cb5475c9d8dd | 75 | py | Python | exceptions.py | kquaziportfolio/credmanager | 265832f750f3a288ddf0a92c643a5e6310a3ed5b | [
"MIT"
] | null | null | null | exceptions.py | kquaziportfolio/credmanager | 265832f750f3a288ddf0a92c643a5e6310a3ed5b | [
"MIT"
] | null | null | null | exceptions.py | kquaziportfolio/credmanager | 265832f750f3a288ddf0a92c643a5e6310a3ed5b | [
"MIT"
] | null | null | null | class CredManagerBase(Exception): pass
class InvalidToken(Exception): pass
| 25 | 38 | 0.84 | 8 | 75 | 7.875 | 0.625 | 0.412698 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.08 | 75 | 2 | 39 | 37.5 | 0.913043 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
d3e5aea62239e742aa683ef5e09e21ec353b90aa | 6,407 | py | Python | loldib/getratings/models/NA/na_kayle/na_kayle_jng.py | koliupy/loldib | c9ab94deb07213cdc42b5a7c26467cdafaf81b7f | [
"Apache-2.0"
] | null | null | null | loldib/getratings/models/NA/na_kayle/na_kayle_jng.py | koliupy/loldib | c9ab94deb07213cdc42b5a7c26467cdafaf81b7f | [
"Apache-2.0"
] | null | null | null | loldib/getratings/models/NA/na_kayle/na_kayle_jng.py | koliupy/loldib | c9ab94deb07213cdc42b5a7c26467cdafaf81b7f | [
"Apache-2.0"
] | null | null | null | from getratings.models.ratings import Ratings
class NA_Kayle_Jng_Aatrox(Ratings):
pass
class NA_Kayle_Jng_Ahri(Ratings):
pass
class NA_Kayle_Jng_Akali(Ratings):
pass
class NA_Kayle_Jng_Alistar(Ratings):
pass
class NA_Kayle_Jng_Amumu(Ratings):
pass
class NA_Kayle_Jng_Anivia(Ratings):
pass
class NA_Kayle_Jng_Annie(Ratings):
pass
class NA_Kayle_Jng_Ashe(Ratings):
pass
class NA_Kayle_Jng_AurelionSol(Ratings):
pass
class NA_Kayle_Jng_Azir(Ratings):
pass
class NA_Kayle_Jng_Bard(Ratings):
pass
class NA_Kayle_Jng_Blitzcrank(Ratings):
pass
class NA_Kayle_Jng_Brand(Ratings):
pass
class NA_Kayle_Jng_Braum(Ratings):
pass
class NA_Kayle_Jng_Caitlyn(Ratings):
pass
class NA_Kayle_Jng_Camille(Ratings):
pass
class NA_Kayle_Jng_Cassiopeia(Ratings):
pass
class NA_Kayle_Jng_Chogath(Ratings):
pass
class NA_Kayle_Jng_Corki(Ratings):
pass
class NA_Kayle_Jng_Darius(Ratings):
pass
class NA_Kayle_Jng_Diana(Ratings):
pass
class NA_Kayle_Jng_Draven(Ratings):
pass
class NA_Kayle_Jng_DrMundo(Ratings):
pass
class NA_Kayle_Jng_Ekko(Ratings):
pass
class NA_Kayle_Jng_Elise(Ratings):
pass
class NA_Kayle_Jng_Evelynn(Ratings):
pass
class NA_Kayle_Jng_Ezreal(Ratings):
pass
class NA_Kayle_Jng_Fiddlesticks(Ratings):
pass
class NA_Kayle_Jng_Fiora(Ratings):
pass
class NA_Kayle_Jng_Fizz(Ratings):
pass
class NA_Kayle_Jng_Galio(Ratings):
pass
class NA_Kayle_Jng_Gangplank(Ratings):
pass
class NA_Kayle_Jng_Garen(Ratings):
pass
class NA_Kayle_Jng_Gnar(Ratings):
pass
class NA_Kayle_Jng_Gragas(Ratings):
pass
class NA_Kayle_Jng_Graves(Ratings):
pass
class NA_Kayle_Jng_Hecarim(Ratings):
pass
class NA_Kayle_Jng_Heimerdinger(Ratings):
pass
class NA_Kayle_Jng_Illaoi(Ratings):
pass
class NA_Kayle_Jng_Irelia(Ratings):
pass
class NA_Kayle_Jng_Ivern(Ratings):
pass
class NA_Kayle_Jng_Janna(Ratings):
pass
class NA_Kayle_Jng_JarvanIV(Ratings):
pass
class NA_Kayle_Jng_Jax(Ratings):
pass
class NA_Kayle_Jng_Jayce(Ratings):
pass
class NA_Kayle_Jng_Jhin(Ratings):
pass
class NA_Kayle_Jng_Jinx(Ratings):
pass
class NA_Kayle_Jng_Kalista(Ratings):
pass
class NA_Kayle_Jng_Karma(Ratings):
pass
class NA_Kayle_Jng_Karthus(Ratings):
pass
class NA_Kayle_Jng_Kassadin(Ratings):
pass
class NA_Kayle_Jng_Katarina(Ratings):
pass
class NA_Kayle_Jng_Kayle(Ratings):
pass
class NA_Kayle_Jng_Kayn(Ratings):
pass
class NA_Kayle_Jng_Kennen(Ratings):
pass
class NA_Kayle_Jng_Khazix(Ratings):
pass
class NA_Kayle_Jng_Kindred(Ratings):
pass
class NA_Kayle_Jng_Kled(Ratings):
pass
class NA_Kayle_Jng_KogMaw(Ratings):
pass
class NA_Kayle_Jng_Leblanc(Ratings):
pass
class NA_Kayle_Jng_LeeSin(Ratings):
pass
class NA_Kayle_Jng_Leona(Ratings):
pass
class NA_Kayle_Jng_Lissandra(Ratings):
pass
class NA_Kayle_Jng_Lucian(Ratings):
pass
class NA_Kayle_Jng_Lulu(Ratings):
pass
class NA_Kayle_Jng_Lux(Ratings):
pass
class NA_Kayle_Jng_Malphite(Ratings):
pass
class NA_Kayle_Jng_Malzahar(Ratings):
pass
class NA_Kayle_Jng_Maokai(Ratings):
pass
class NA_Kayle_Jng_MasterYi(Ratings):
pass
class NA_Kayle_Jng_MissFortune(Ratings):
pass
class NA_Kayle_Jng_MonkeyKing(Ratings):
pass
class NA_Kayle_Jng_Mordekaiser(Ratings):
pass
class NA_Kayle_Jng_Morgana(Ratings):
pass
class NA_Kayle_Jng_Nami(Ratings):
pass
class NA_Kayle_Jng_Nasus(Ratings):
pass
class NA_Kayle_Jng_Nautilus(Ratings):
pass
class NA_Kayle_Jng_Nidalee(Ratings):
pass
class NA_Kayle_Jng_Nocturne(Ratings):
pass
class NA_Kayle_Jng_Nunu(Ratings):
pass
class NA_Kayle_Jng_Olaf(Ratings):
pass
class NA_Kayle_Jng_Orianna(Ratings):
pass
class NA_Kayle_Jng_Ornn(Ratings):
pass
class NA_Kayle_Jng_Pantheon(Ratings):
pass
class NA_Kayle_Jng_Poppy(Ratings):
pass
class NA_Kayle_Jng_Quinn(Ratings):
pass
class NA_Kayle_Jng_Rakan(Ratings):
pass
class NA_Kayle_Jng_Rammus(Ratings):
pass
class NA_Kayle_Jng_RekSai(Ratings):
pass
class NA_Kayle_Jng_Renekton(Ratings):
pass
class NA_Kayle_Jng_Rengar(Ratings):
pass
class NA_Kayle_Jng_Riven(Ratings):
pass
class NA_Kayle_Jng_Rumble(Ratings):
pass
class NA_Kayle_Jng_Ryze(Ratings):
pass
class NA_Kayle_Jng_Sejuani(Ratings):
pass
class NA_Kayle_Jng_Shaco(Ratings):
pass
class NA_Kayle_Jng_Shen(Ratings):
pass
class NA_Kayle_Jng_Shyvana(Ratings):
pass
class NA_Kayle_Jng_Singed(Ratings):
pass
class NA_Kayle_Jng_Sion(Ratings):
pass
class NA_Kayle_Jng_Sivir(Ratings):
pass
class NA_Kayle_Jng_Skarner(Ratings):
pass
class NA_Kayle_Jng_Sona(Ratings):
pass
class NA_Kayle_Jng_Soraka(Ratings):
pass
class NA_Kayle_Jng_Swain(Ratings):
pass
class NA_Kayle_Jng_Syndra(Ratings):
pass
class NA_Kayle_Jng_TahmKench(Ratings):
pass
class NA_Kayle_Jng_Taliyah(Ratings):
pass
class NA_Kayle_Jng_Talon(Ratings):
pass
class NA_Kayle_Jng_Taric(Ratings):
pass
class NA_Kayle_Jng_Teemo(Ratings):
pass
class NA_Kayle_Jng_Thresh(Ratings):
pass
class NA_Kayle_Jng_Tristana(Ratings):
pass
class NA_Kayle_Jng_Trundle(Ratings):
pass
class NA_Kayle_Jng_Tryndamere(Ratings):
pass
class NA_Kayle_Jng_TwistedFate(Ratings):
pass
class NA_Kayle_Jng_Twitch(Ratings):
pass
class NA_Kayle_Jng_Udyr(Ratings):
pass
class NA_Kayle_Jng_Urgot(Ratings):
pass
class NA_Kayle_Jng_Varus(Ratings):
pass
class NA_Kayle_Jng_Vayne(Ratings):
pass
class NA_Kayle_Jng_Veigar(Ratings):
pass
class NA_Kayle_Jng_Velkoz(Ratings):
pass
class NA_Kayle_Jng_Vi(Ratings):
pass
class NA_Kayle_Jng_Viktor(Ratings):
pass
class NA_Kayle_Jng_Vladimir(Ratings):
pass
class NA_Kayle_Jng_Volibear(Ratings):
pass
class NA_Kayle_Jng_Warwick(Ratings):
pass
class NA_Kayle_Jng_Xayah(Ratings):
pass
class NA_Kayle_Jng_Xerath(Ratings):
pass
class NA_Kayle_Jng_XinZhao(Ratings):
pass
class NA_Kayle_Jng_Yasuo(Ratings):
pass
class NA_Kayle_Jng_Yorick(Ratings):
pass
class NA_Kayle_Jng_Zac(Ratings):
pass
class NA_Kayle_Jng_Zed(Ratings):
pass
class NA_Kayle_Jng_Ziggs(Ratings):
pass
class NA_Kayle_Jng_Zilean(Ratings):
pass
class NA_Kayle_Jng_Zyra(Ratings):
pass
| 15.364508 | 46 | 0.761667 | 972 | 6,407 | 4.59465 | 0.151235 | 0.216301 | 0.370802 | 0.463502 | 0.797582 | 0.797582 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173404 | 6,407 | 416 | 47 | 15.401442 | 0.843278 | 0 | 0 | 0.498195 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.498195 | 0.00361 | 0 | 0.501805 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
d3eb076042f2cf97e57e0a88da64ef44e138542d | 38 | py | Python | os_v4_hek/defs/efpp.py | holy-crust/reclaimer | 0aa693da3866ce7999c68d5f71f31a9c932cdb2c | [
"MIT"
] | null | null | null | os_v4_hek/defs/efpp.py | holy-crust/reclaimer | 0aa693da3866ce7999c68d5f71f31a9c932cdb2c | [
"MIT"
] | null | null | null | os_v4_hek/defs/efpp.py | holy-crust/reclaimer | 0aa693da3866ce7999c68d5f71f31a9c932cdb2c | [
"MIT"
] | null | null | null | from ...os_v3_hek.defs.efpp import *
| 19 | 37 | 0.710526 | 7 | 38 | 3.571429 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.030303 | 0.131579 | 38 | 1 | 38 | 38 | 0.727273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
313d4183c738e8332b63ba879131588d5a5eaef6 | 18,810 | py | Python | backend/tests/views/test_resource_view_set.py | crosspower/naruko | 4c524e2ef955610a711830bc86d730ffe4fc2bd8 | [
"MIT"
] | 17 | 2019-01-23T04:37:43.000Z | 2019-10-15T01:42:31.000Z | backend/tests/views/test_resource_view_set.py | snickerjp/naruko | 4c524e2ef955610a711830bc86d730ffe4fc2bd8 | [
"MIT"
] | 1 | 2019-01-23T08:04:44.000Z | 2019-01-23T08:44:33.000Z | backend/tests/views/test_resource_view_set.py | snickerjp/naruko | 4c524e2ef955610a711830bc86d730ffe4fc2bd8 | [
"MIT"
] | 6 | 2019-01-23T09:10:59.000Z | 2020-12-02T04:15:41.000Z | from django.test import TestCase
from rest_framework.test import APIClient
from backend.models import UserModel, RoleModel, TenantModel, AwsEnvironmentModel, Command, Document, Parameter
from backend.models.resource.ec2 import Ec2
from datetime import datetime
from unittest import mock
@mock.patch("backend.views.resource_view_set.ControlResourceUseCase")
class InstanceViewSetTestCase(TestCase):
api_path_in_tenant = '/api/tenants/{}/aws-environments/{}/resources/{}'
api_path = '/api/tenants/{}/aws-environments/{}' \
'/regions/ap-northeast-1/services/ec2/resources/i-123456789012/'
@staticmethod
def _create_aws_env_model(name, aws_account_id, tenant):
now = datetime.now()
aws = AwsEnvironmentModel.objects.create(
name=name,
aws_account_id=aws_account_id,
aws_role="test_role",
aws_external_id="test_external_id",
tenant=tenant,
created_at=now,
updated_at=now
)
aws.save()
return aws
@staticmethod
def _create_role_model(id, role_name):
now = datetime.now()
return RoleModel.objects.create(
id=id,
role_name=role_name,
created_at=now,
updated_at=now
)
@staticmethod
def _create_tenant_model(tenant_name):
now = datetime.now()
return TenantModel.objects.create(
tenant_name=tenant_name,
created_at=now,
updated_at=now
)
@staticmethod
def _create_user_model(email, name, password, tenant, role):
now = datetime.now()
user_model = UserModel(
email=email,
name=name,
password=password,
tenant=tenant,
role=role,
created_at=now,
updated_at=now,
)
user_model.save()
return user_model
@classmethod
def setUpClass(cls):
super(InstanceViewSetTestCase, cls).setUpClass()
        # Create a MASTER user belonging to Company1
role_model = cls._create_role_model(2, "test_role")
tenant_model1 = cls._create_tenant_model("test_tenant_users_in_tenant_1")
        # Create an AWS environment belonging to Company1
aws1 = cls._create_aws_env_model("test_name1", "test_aws1", tenant_model1)
user1 = cls._create_user_model(
email="test_email",
name="test_name",
password="test_password",
tenant=tenant_model1,
role=role_model,
)
user1.aws_environments.add(aws1)
        # Create a USER user belonging to Company1
role_model_user = cls._create_role_model(3, "test_role")
user2 = cls._create_user_model(
email="test_email_USER",
name="test_name",
password="test_password",
tenant=tenant_model1,
role=role_model_user,
)
user2.aws_environments.add(aws1)
        # Create a user belonging to Company2
tenant_model2 = cls._create_tenant_model("test_tenant_users_in_tenant_2")
cls._create_user_model(
email="test_email2",
name="test_name2",
password="test_password2",
tenant=tenant_model2,
role=role_model,
)
        # Create an AWS environment belonging to Company2
cls._create_aws_env_model("test_name2", "test_aws2", tenant_model2)
    # Verify that the API cannot be used without logging in
def test_not_login(self, use_case):
client = APIClient()
        # Execute the code under test
response = client.get(self.api_path_in_tenant.format(1, 1, "?region=test"), format='json')
self.assertEqual(response.status_code, 401)
    # Normal case
def test_get_resource(self, use_case: mock.Mock):
client = APIClient()
user_model = UserModel.objects.get(email="test_email")
client.force_authenticate(user=user_model)
        # Get Company1's ID
tenant_id = TenantModel.objects.get(tenant_name="test_tenant_users_in_tenant_1").id
        # Get the AWS environment's ID
aws_id = AwsEnvironmentModel.objects.get(aws_account_id="test_aws1").id
fetch_resources = use_case.return_value.fetch_resources
fetch_resources.return_value = {}
        # Execute the code under test
response = client.get(
path=self.api_path_in_tenant.format(tenant_id, aws_id, "?region=test"),
format='json')
fetch_resources.assert_called_once()
self.assertEqual(response.status_code, 200)
    # When the tenant does not exist
def test_no_tenant(self, use_case: mock.Mock):
client = APIClient()
user_model = UserModel.objects.get(email="test_email")
client.force_authenticate(user=user_model)
        # Get the AWS environment's ID
aws_id = AwsEnvironmentModel.objects.get(aws_account_id="test_aws1").id
fetch_resources = use_case.return_value.fetch_resources
fetch_resources.return_value = {}
        # Execute the code under test
response = client.get(
path=self.api_path_in_tenant.format(100, aws_id, "?region=test"),
format='json')
fetch_resources.assert_not_called()
self.assertEqual(response.status_code, 404)
    # When the AWS environment does not exist
def test_no_aws_env(self, use_case: mock.Mock):
client = APIClient()
user_model = UserModel.objects.get(email="test_email")
client.force_authenticate(user=user_model)
        # Get Company1's ID
tenant_id = TenantModel.objects.get(tenant_name="test_tenant_users_in_tenant_1").id
fetch_resources = use_case.return_value.fetch_resources
fetch_resources.return_value = {}
        # Execute the code under test
response = client.get(
path=self.api_path_in_tenant.format(tenant_id, 100, "?region=test"),
format='json')
fetch_resources.assert_not_called()
self.assertEqual(response.status_code, 404)
    # Start resource: normal case
def test_start_resource(self, use_case: mock.Mock):
client = APIClient()
user_model = UserModel.objects.get(email="test_email")
client.force_authenticate(user=user_model)
        # Get Company1's ID
tenant_id = TenantModel.objects.get(tenant_name="test_tenant_users_in_tenant_1").id
        # Get the AWS environment's ID
aws_id = AwsEnvironmentModel.objects.get(aws_account_id="test_aws1").id
start_resource = use_case.return_value.start_resource
        # Execute the code under test
response = client.post(
path=self.api_path.format(tenant_id, aws_id) + "start/",
format='json')
start_resource.assert_called_once()
self.assertEqual(response.status_code, 200)
    # Start resource: when the tenant does not exist
def test_start_resource_no_tenant(self, use_case: mock.Mock):
client = APIClient()
user_model = UserModel.objects.get(email="test_email")
client.force_authenticate(user=user_model)
        # Get the AWS environment's ID
aws_id = AwsEnvironmentModel.objects.get(aws_account_id="test_aws1").id
start_resource = use_case.return_value.start_resource
        # Execute the code under test
response = client.post(
path=self.api_path.format(100, aws_id) + "start/",
format='json')
start_resource.assert_not_called()
self.assertEqual(response.status_code, 404)
    # Start resource: when the AWS environment does not exist
def test_start_resource_no_aws(self, use_case: mock.Mock):
client = APIClient()
user_model = UserModel.objects.get(email="test_email")
client.force_authenticate(user=user_model)
        # Get Company1's ID
tenant_id = TenantModel.objects.get(tenant_name="test_tenant_users_in_tenant_1").id
start_resource = use_case.return_value.start_resource
        # Execute the code under test
response = client.post(
path=self.api_path.format(tenant_id, 100) + "start/",
format='json')
start_resource.assert_not_called()
self.assertEqual(response.status_code, 404)
    # Reboot resource: normal case
def test_reboot_resource(self, use_case: mock.Mock):
client = APIClient()
user_model = UserModel.objects.get(email="test_email")
client.force_authenticate(user=user_model)
        # Get Company1's ID
tenant_id = TenantModel.objects.get(tenant_name="test_tenant_users_in_tenant_1").id
        # Get the AWS environment's ID
aws_id = AwsEnvironmentModel.objects.get(aws_account_id="test_aws1").id
reboot_resource = use_case.return_value.reboot_resource
        # Execute the code under test
response = client.post(
path=self.api_path.format(tenant_id, aws_id) + "reboot/",
format='json')
reboot_resource.assert_called_once()
self.assertEqual(response.status_code, 200)
    # Reboot resource: when the tenant does not exist
def test_reboot_resource_no_tenant(self, use_case: mock.Mock):
client = APIClient()
user_model = UserModel.objects.get(email="test_email")
client.force_authenticate(user=user_model)
        # Get the AWS environment's ID
aws_id = AwsEnvironmentModel.objects.get(aws_account_id="test_aws1").id
reboot_resource = use_case.return_value.reboot_resource
        # Execute the code under test
response = client.post(
path=self.api_path.format(100, aws_id) + "reboot/",
format='json')
reboot_resource.assert_not_called()
self.assertEqual(response.status_code, 404)
    # Reboot resource: when the AWS environment does not exist
def test_reboot_resource_no_aws(self, use_case: mock.Mock):
client = APIClient()
user_model = UserModel.objects.get(email="test_email")
client.force_authenticate(user=user_model)
        # Get Company1's ID
tenant_id = TenantModel.objects.get(tenant_name="test_tenant_users_in_tenant_1").id
reboot_resource = use_case.return_value.reboot_resource
        # Execute the code under test
response = client.post(
path=self.api_path.format(tenant_id, 100) + "reboot/",
format='json')
reboot_resource.assert_not_called()
self.assertEqual(response.status_code, 404)
    # Stop resource: normal case
def test_stop_resource(self, use_case: mock.Mock):
client = APIClient()
user_model = UserModel.objects.get(email="test_email")
client.force_authenticate(user=user_model)
        # Get Company1's ID
tenant_id = TenantModel.objects.get(tenant_name="test_tenant_users_in_tenant_1").id
        # Get the AWS environment's ID
aws_id = AwsEnvironmentModel.objects.get(aws_account_id="test_aws1").id
stop_resource = use_case.return_value.stop_resource
        # Execute the code under test
response = client.post(
path=self.api_path.format(tenant_id, aws_id) + "stop/",
format='json')
stop_resource.assert_called_once()
self.assertEqual(response.status_code, 200)
    # Stop resource: when the tenant does not exist
def test_stop_resource_no_tenant(self, use_case: mock.Mock):
client = APIClient()
user_model = UserModel.objects.get(email="test_email")
client.force_authenticate(user=user_model)
        # Get the AWS environment's ID
aws_id = AwsEnvironmentModel.objects.get(aws_account_id="test_aws1").id
stop_resource = use_case.return_value.stop_resource
        # Execute the code under test
response = client.post(
path=self.api_path.format(100, aws_id) + "stop/",
format='json')
stop_resource.assert_not_called()
self.assertEqual(response.status_code, 404)
    # Stop resource: when the AWS environment does not exist
def test_stop_resource_no_aws(self, use_case: mock.Mock):
client = APIClient()
user_model = UserModel.objects.get(email="test_email")
client.force_authenticate(user=user_model)
        # Get Company1's ID
tenant_id = TenantModel.objects.get(tenant_name="test_tenant_users_in_tenant_1").id
stop_resource = use_case.return_value.stop_resource
        # Execute the code under test
response = client.post(
path=self.api_path.format(tenant_id, 100) + "stop/",
format='json')
stop_resource.assert_not_called()
self.assertEqual(response.status_code, 404)
    # Get resource details: normal case
def test_retrieve_resource(self, use_case: mock.Mock):
client = APIClient()
user_model = UserModel.objects.get(email="test_email")
client.force_authenticate(user=user_model)
        # Get Company1's ID
tenant_id = TenantModel.objects.get(tenant_name="test_tenant_users_in_tenant_1").id
        # Get the AWS environment's ID
aws_id = AwsEnvironmentModel.objects.get(aws_account_id="test_aws1").id
describe_resource = use_case.return_value.describe_resource
mock_resource = mock.Mock()
mock_resource.serialize.return_value = "TEST"
describe_resource.return_value = mock_resource
        # Execute the code under test
response = client.get(
path=self.api_path.format(tenant_id, aws_id),
format='json')
describe_resource.assert_called_once()
mock_resource.serialize.assert_called_once()
self.assertEqual(response.status_code, 200)
    # Get resource details: when the tenant does not exist
def test_retrieve_resource_no_tenant(self, use_case: mock.Mock):
client = APIClient()
user_model = UserModel.objects.get(email="test_email")
client.force_authenticate(user=user_model)
        # Get the AWS environment's ID
aws_id = AwsEnvironmentModel.objects.get(aws_account_id="test_aws1").id
describe_resource = use_case.return_value.describe_resource
        # Execute the code under test
response = client.get(
path=self.api_path.format(100, aws_id),
format='json')
describe_resource.assert_not_called()
self.assertEqual(response.status_code, 404)
    # Get resource details: when the AWS environment does not exist
def test_retrieve_resource_no_aws(self, use_case: mock.Mock):
client = APIClient()
user_model = UserModel.objects.get(email="test_email")
client.force_authenticate(user=user_model)
        # Get Company1's ID
tenant_id = TenantModel.objects.get(tenant_name="test_tenant_users_in_tenant_1").id
describe_resource = use_case.return_value.describe_resource
        # Execute the code under test
response = client.get(
path=self.api_path.format(tenant_id, 100),
format='json')
describe_resource.assert_not_called()
self.assertEqual(response.status_code, 404)
    # Run command: normal case
def test_run_command(self, use_case: mock.Mock):
client = APIClient()
user_model = UserModel.objects.get(email="test_email")
client.force_authenticate(user=user_model)
        # Get Company1's ID
tenant_id = TenantModel.objects.get(tenant_name="test_tenant_users_in_tenant_1").id
        # Get the AWS environment's ID
aws_id = AwsEnvironmentModel.objects.get(aws_account_id="test_aws1").id
run_command = use_case.return_value.run_command
run_command.return_value = Command(
Document("document_name", [Parameter(key="param", value="value")]),
Ec2("ap-northeast-1", "i-123456789012")
)
        # Execute the code under test
response = client.post(
path=self.api_path.format(tenant_id, aws_id) + "run_command/",
data=dict(
name="document_name",
parameters=[dict(key="param", value="value")]
),
format='json')
run_command.assert_called_once()
self.assertEqual(response.status_code, 200)
    # Run command: when the tenant does not exist
def test_run_command_no_tenant(self, use_case: mock.Mock):
client = APIClient()
user_model = UserModel.objects.get(email="test_email")
client.force_authenticate(user=user_model)
        # Get the AWS environment's ID
aws_id = AwsEnvironmentModel.objects.get(aws_account_id="test_aws1").id
run_command = use_case.return_value.run_command
run_command.return_value = Command(
Document("document_name", [Parameter(key="param", value="value")]),
Ec2("ap-northeast-1", "i-123456789012")
)
        # Execute the code under test
response = client.post(
path=self.api_path.format(100, aws_id) + "run_command/",
data=dict(
name="document_name",
parameters=[dict(key="param", value="value")]
),
format='json')
run_command.assert_not_called()
self.assertEqual(response.status_code, 404)
    # Run command: when the AWS environment does not exist
def test_run_command_no_aws(self, use_case: mock.Mock):
client = APIClient()
user_model = UserModel.objects.get(email="test_email")
client.force_authenticate(user=user_model)
        # Get Company1's ID
tenant_id = TenantModel.objects.get(tenant_name="test_tenant_users_in_tenant_1").id
run_command = use_case.return_value.run_command
run_command.return_value = Command(
Document("document_name", [Parameter(key="param", value="value")]),
Ec2("ap-northeast-1", "i-123456789012")
)
        # Execute the code under test
response = client.post(
path=self.api_path.format(tenant_id, 100) + "run_command/",
data=dict(
name="document_name",
parameters=[dict(key="param", value="value")]
),
format='json')
run_command.assert_not_called()
self.assertEqual(response.status_code, 404)
    # Run command: when the specified service is not EC2
def test_run_command_not_ec2(self, use_case: mock.Mock):
client = APIClient()
user_model = UserModel.objects.get(email="test_email")
client.force_authenticate(user=user_model)
        # Get Company1's ID
tenant_id = TenantModel.objects.get(tenant_name="test_tenant_users_in_tenant_1").id
        # Get the AWS environment's ID
aws_id = AwsEnvironmentModel.objects.get(aws_account_id="test_aws1").id
run_command = use_case.return_value.run_command
run_command.return_value = Command(
Document("document_name", [Parameter(key="param", value="value")]),
Ec2("ap-northeast-1", "i-123456789012")
)
        # Execute the code under test
response = client.post(
path=self.api_path.format(tenant_id, aws_id).replace("ec2", "rds") + "run_command/",
data=dict(
name="document_name",
parameters=[dict(key="param", value="value")]
),
format='json')
run_command.assert_not_called()
self.assertEqual(response.status_code, 400)
| 35.290807 | 112 | 0.632323 | 2,095 | 18,810 | 5.360382 | 0.067303 | 0.036064 | 0.02618 | 0.051647 | 0.826091 | 0.809261 | 0.795993 | 0.787444 | 0.781122 | 0.755744 | 0 | 0.017641 | 0.270707 | 18,810 | 532 | 113 | 35.357143 | 0.800991 | 0.052419 | 0 | 0.662824 | 0 | 0 | 0.096516 | 0.036818 | 0 | 0 | 0 | 0 | 0.115274 | 1 | 0.072046 | false | 0.014409 | 0.017291 | 0 | 0.10951 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
31649d58cebd60a2357897a52161b414ca78e323 | 2,499 | py | Python | tests/test_call_api.py | coreycollins/drench | 6cb97dfb649238795a34f2e34118d1016341437b | [
"MIT"
] | null | null | null | tests/test_call_api.py | coreycollins/drench | 6cb97dfb649238795a34f2e34118d1016341437b | [
"MIT"
] | null | null | null | tests/test_call_api.py | coreycollins/drench | 6cb97dfb649238795a34f2e34118d1016341437b | [
"MIT"
] | null | null | null | #pylint:disable=missing-docstring
from lambdas.call_api import handler
def test_check_query():
event = {
'job_id': 1234,
'principal_id': 4321,
'api_version': 'v1',
'next': {
'in_path': 'some/path',
'out_path': 's3://com.drench.results/1234/test-query/out',
'content_type': 'text',
'report_url': None,
'name': 'test-query',
'type': 'query',
'params': {
'QueryString': 'SELECT *',
'ResultConfiguration': {
'OutputLocation': '$.next.out_path'
},
"QueryExecutionContext": {
"Database": "foo"
},
},
},
'result': {
'job_id': '123',
'out_path': 's3://com.drench.results/1234/test-query/out',
'report_url': 's3://foo/bar/out.html'
},
'api_call': {
'path':'/jobs/$.job_id/steps',
'body':{
'step':{
'name': '$.next.name',
'out_path': '$.next.out_path',
'content_type': '$.next.content_type',
'status': '$.result.status',
'report_url': '$.result.report_url'
}
},
'method': 'PUT'
}
}
step_id = handler(event, {})
assert step_id == 'step_id'
def test_no_body():
event = {
'job_id': 1234,
'principal_id': 4321,
'api_version': 'v1',
'next': {
'in_path': 'some/path',
'out_path': 's3://com.drench.results/1234/test-query/out',
'content_type': 'text',
'report_url': None,
'name': 'test-query',
'type': 'query',
'params': {
'QueryString': 'SELECT *',
'ResultConfiguration': {
'OutputLocation': '$.next.out_path'
},
"QueryExecutionContext": {
"Database": "foo"
},
},
},
'result': {
'job_id': '123',
'out_path': 's3://com.drench.results/1234/test-query/out',
'report_url': 's3://foo/bar/out.html'
},
'api_call': {
'path':'/jobs/$.job_id/steps',
'method': 'PUT'
}
}
step_id = handler(event, {})
assert step_id == 'step_id'
# src/sudoku/grid.py (stahl06/Sudoku, MIT)
from sudoku.cell import Cell
class Grid:
    CONS_Dimensions = 9

    def __init__(self):
        # Build independent rows and cells; [[Cell()] * 9] * 9 would alias
        # one row (and one Cell instance) across the whole board, and a
        # class-level grid would be shared between Grid instances.
        self.__grid = [[Cell() for _ in range(self.CONS_Dimensions)]
                       for _ in range(self.CONS_Dimensions)]

    def add_cell(self, x_coordinate, y_coordinate, cell):
        if self.__validate_cell_update(x_coordinate, y_coordinate):
            self.__grid[x_coordinate][y_coordinate] = cell

    def get_cell(self, x_coordinate, y_coordinate):
        return self.__grid[x_coordinate][y_coordinate]

    def __validate_cell_update(self, x_coordinate, y_coordinate):
        if 0 <= x_coordinate < self.CONS_Dimensions and 0 <= y_coordinate < self.CONS_Dimensions:
            existing_cell = self.__grid[x_coordinate][y_coordinate]
            return existing_cell.can_modify()
        return False
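The `[[Cell()] * 9] * 9` construction is a classic aliasing trap: list multiplication repeats the same row object rather than copying it. A standalone demonstration, without the `Cell` class:

```python
# Row aliasing: multiplying a list of lists repeats the *same* row object.
aliased = [[0] * 3] * 3
aliased[0][0] = 9
print(aliased)  # [[9, 0, 0], [9, 0, 0], [9, 0, 0]], every row changed

# A nested comprehension builds independent rows (and independent cells).
independent = [[0 for _ in range(3)] for _ in range(3)]
independent[0][0] = 9
print(independent)  # [[9, 0, 0], [0, 0, 0], [0, 0, 0]]
```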
# eccodes/eccodes.py (blazk/eccodes-python, Apache-2.0)
from gribapi import __version__
from gribapi import bindings_version
from gribapi import GRIB_CHECK as CODES_CHECK
from gribapi import CODES_PRODUCT_GRIB
from gribapi import CODES_PRODUCT_BUFR
from gribapi import CODES_PRODUCT_ANY
from gribapi import GRIB_MISSING_DOUBLE as CODES_MISSING_DOUBLE
from gribapi import GRIB_MISSING_LONG as CODES_MISSING_LONG
from gribapi import gts_new_from_file as codes_gts_new_from_file
from gribapi import metar_new_from_file as codes_metar_new_from_file
from gribapi import codes_new_from_file
from gribapi import any_new_from_file as codes_any_new_from_file
from gribapi import bufr_new_from_file as codes_bufr_new_from_file
from gribapi import grib_new_from_file as codes_grib_new_from_file
from gribapi import codes_close_file
from gribapi import grib_count_in_file as codes_count_in_file
from gribapi import grib_multi_support_on as codes_grib_multi_support_on
from gribapi import grib_multi_support_off as codes_grib_multi_support_off
from gribapi import grib_release as codes_release
from gribapi import grib_get_string as codes_get_string
from gribapi import grib_set_string as codes_set_string
from gribapi import grib_gribex_mode_on as codes_gribex_mode_on
from gribapi import grib_gribex_mode_off as codes_gribex_mode_off
from gribapi import grib_write as codes_write
from gribapi import grib_multi_write as codes_grib_multi_write
from gribapi import grib_multi_append as codes_grib_multi_append
from gribapi import grib_get_size as codes_get_size
from gribapi import grib_get_string_length as codes_get_string_length
from gribapi import grib_skip_computed as codes_skip_computed
from gribapi import grib_skip_coded as codes_skip_coded
from gribapi import grib_skip_edition_specific as codes_skip_edition_specific
from gribapi import grib_skip_duplicates as codes_skip_duplicates
from gribapi import grib_skip_read_only as codes_skip_read_only
from gribapi import grib_skip_function as codes_skip_function
from gribapi import grib_iterator_new as codes_grib_iterator_new
from gribapi import grib_iterator_delete as codes_grib_iterator_delete
from gribapi import grib_iterator_next as codes_grib_iterator_next
from gribapi import grib_keys_iterator_new as codes_keys_iterator_new
from gribapi import grib_keys_iterator_next as codes_keys_iterator_next
from gribapi import grib_keys_iterator_delete as codes_keys_iterator_delete
from gribapi import grib_keys_iterator_get_name as codes_keys_iterator_get_name
from gribapi import grib_keys_iterator_rewind as codes_keys_iterator_rewind
from gribapi import codes_bufr_keys_iterator_new
from gribapi import codes_bufr_keys_iterator_next
from gribapi import codes_bufr_keys_iterator_delete
from gribapi import codes_bufr_keys_iterator_get_name
from gribapi import codes_bufr_keys_iterator_rewind
from gribapi import grib_get_long as codes_get_long
from gribapi import grib_get_double as codes_get_double
from gribapi import grib_set_long as codes_set_long
from gribapi import grib_set_double as codes_set_double
from gribapi import grib_new_from_samples as codes_grib_new_from_samples
from gribapi import codes_bufr_new_from_samples
from gribapi import codes_new_from_samples
from gribapi import codes_bufr_copy_data
from gribapi import grib_clone as codes_clone
from gribapi import grib_set_double_array as codes_set_double_array
from gribapi import grib_get_double_array as codes_get_double_array
from gribapi import grib_get_string_array as codes_get_string_array
from gribapi import grib_set_string_array as codes_set_string_array
from gribapi import grib_set_long_array as codes_set_long_array
from gribapi import grib_get_long_array as codes_get_long_array
from gribapi import grib_multi_new as codes_grib_multi_new
from gribapi import grib_multi_release as codes_grib_multi_release
from gribapi import grib_copy_namespace as codes_copy_namespace
from gribapi import grib_index_new_from_file as codes_index_new_from_file
from gribapi import grib_index_add_file as codes_index_add_file
from gribapi import grib_index_release as codes_index_release
from gribapi import grib_index_get_size as codes_index_get_size
from gribapi import grib_index_get_long as codes_index_get_long
from gribapi import grib_index_get_string as codes_index_get_string
from gribapi import grib_index_get_double as codes_index_get_double
from gribapi import grib_index_select_long as codes_index_select_long
from gribapi import grib_index_select_double as codes_index_select_double
from gribapi import grib_index_select_string as codes_index_select_string
from gribapi import grib_new_from_index as codes_new_from_index
from gribapi import grib_get_message_size as codes_get_message_size
from gribapi import grib_get_message_offset as codes_get_message_offset
from gribapi import grib_get_double_element as codes_get_double_element
from gribapi import grib_get_double_elements as codes_get_double_elements
from gribapi import grib_get_elements as codes_get_elements
from gribapi import grib_set_missing as codes_set_missing
from gribapi import grib_set_key_vals as codes_set_key_vals
from gribapi import grib_is_missing as codes_is_missing
from gribapi import grib_is_defined as codes_is_defined
from gribapi import grib_find_nearest as codes_grib_find_nearest
from gribapi import grib_find_nearest_multiple as codes_grib_find_nearest_multiple
from gribapi import grib_get_native_type as codes_get_native_type
from gribapi import grib_get as codes_get
from gribapi import grib_get_array as codes_get_array
from gribapi import grib_get_values as codes_get_values
from gribapi import grib_get_data as codes_grib_get_data
from gribapi import grib_set_values as codes_set_values
from gribapi import grib_set as codes_set
from gribapi import grib_set_array as codes_set_array
from gribapi import grib_index_get as codes_index_get
from gribapi import grib_index_select as codes_index_select
from gribapi import grib_index_write as codes_index_write
from gribapi import grib_index_read as codes_index_read
from gribapi import grib_no_fail_on_wrong_length as codes_no_fail_on_wrong_length
from gribapi import grib_gts_header as codes_gts_header
from gribapi import grib_get_api_version as codes_get_api_version
from gribapi import codes_get_version_info
from gribapi import grib_get_message as codes_get_message
from gribapi import grib_new_from_message as codes_new_from_message
from gribapi import grib_set_definitions_path as codes_set_definitions_path
from gribapi import grib_set_samples_path as codes_set_samples_path
from gribapi import codes_samples_path
from gribapi import codes_definition_path
from gribapi import codes_bufr_multi_element_constant_arrays_on
from gribapi import codes_bufr_multi_element_constant_arrays_off
from gribapi import codes_bufr_extract_headers
from gribapi.errors import GribInternalError as CodesInternalError
from gribapi.errors import *
# utils/__init__.py (jamespauly/udi-daikin-poly-v3, MIT)
from .Utilities import Utilities
# src/beast/python/beast/util/Git.py (Ziftr/stellard, BSL-1.0)
from __future__ import absolute_import, division, print_function, unicode_literals
import os

from beast.util import Execute
from beast.util import String


def describe(**kwds):
    return String.single_line(Execute.execute('git describe --tags', **kwds))
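`String.single_line` is not shown in this file; assuming it simply collapses the command output to its first line without the trailing newline (an assumption, not the verified beast implementation), a standalone sketch:

```python
def single_line(text):
    # Assumed behaviour of beast's String.single_line: keep only the
    # first line of the command output, stripped of surrounding whitespace.
    stripped = text.strip()
    return stripped.splitlines()[0] if stripped else ''

print(single_line('v1.2.3-4-gabc123\n'))  # v1.2.3-4-gabc123
```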
# django_context_request/exceptions.py (AustinGilkison/django-context-request, MIT)


class RequestContextProxyError(object):
    class WrongWrappedFunc(Exception):
        pass

    class ObjNotFound(Exception):
        pass

    class ObjReadOnly(Exception):
        pass


class RequestContextError(object):
    class ObjExisted(Exception):
        pass
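The classes above use nesting purely as a namespace for related exceptions. A self-contained sketch of how such namespaced exceptions are raised and caught (the `ProxyError` and `lookup` names are illustrative, not part of this package):

```python
class ProxyError:
    """Namespace container for related errors, mirroring the pattern above."""
    class ObjNotFound(Exception):
        pass

def lookup(store, key):
    # Translate the generic KeyError into the namespaced exception.
    try:
        return store[key]
    except KeyError:
        raise ProxyError.ObjNotFound(key)

try:
    lookup({}, 'request')
except ProxyError.ObjNotFound as exc:
    print(f'not found: {exc}')  # not found: request
```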
# my_functions.py (kaushnian/TradingView_Machine_Learning, MIT)
from selenium.webdriver.common.keys import Keys
import time

from selenium.common.exceptions import ElementNotInteractableException, NoSuchElementException
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from termcolor import colored

from profit import profits
from TradeViewGUI import Main
class Functions(Main):
    """You will find click, get, find, and show_me functions here."""

    # Find Functions
    def find_best_stoploss(self):
        best_in_dict = max(profits, key=profits.get)
        return best_in_dict

    def find_best_takeprofit(self):
        best_in_dict = max(profits, key=profits.get)
        return best_in_dict

    def find_best_key_both(self):
        best_in_dict = max(profits)
        return best_in_dict
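Note the difference between the two selectors above: `max(profits, key=profits.get)` returns the key with the largest *value*, while `max(profits)` returns the largest *key* itself, which only matches the best run when the profit figures are themselves the dict keys. A standalone illustration (the sample values are made up):

```python
profits = {1.0: 12.5, 2.0: -3.1, 3.0: 9.4}  # e.g. stoploss % -> net profit %

best_by_value = max(profits, key=profits.get)  # key whose value is largest
largest_key = max(profits)                     # largest key itself

print(best_by_value, largest_key)  # 1.0 3.0
```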
    # Click Functions
    def click_settings_button(self, wait):
        """click settings button."""
        try:
            wait.until(EC.visibility_of_element_located(
                (By.XPATH, "//*[@class='icon-button "
                           "js-backtesting-open-format-dialog "
                           "apply-common-tooltip']")))
            settings_button = self.driver.find_element_by_xpath(
                "//*[@class='icon-button js-backtesting-open-format-dialog "
                "apply-common-tooltip']")
            settings_button.click()
        except AttributeError:
            pass

    def click_strategy_tester(self):
        """check if the strategy tester tab is active; if not, click to open the tab."""
        try:
            strategy_tester_tab = self.driver.find_elements_by_xpath("//*[@class='title-37voAVwR']")
            for index, web_element in enumerate(strategy_tester_tab):
                if web_element.text == 'Strategy Tester':
                    active_tab = strategy_tester_tab[index].get_attribute('data-active')
                    if active_tab == 'false':
                        strategy_tester_tab[index].click()
                    break
        except (IndexError, NoSuchElementException, ElementNotInteractableException):
            print("Could Not Click Strategy Tester Tab. Please Check web element XPATH.")

    def click_overview(self):
        try:
            strategy_tester_tab = self.driver.find_elements_by_xpath("//*[@class='title-37voAVwR']")
            for index, web_element in enumerate(strategy_tester_tab):
                if web_element.text == 'Strategy Tester':
                    active_tab = strategy_tester_tab[index].get_attribute('data-active')
                    if active_tab == 'false':
                        strategy_tester_tab[index].click()
                        overview = \
                            self.driver.find_element_by_class_name("report-tabs").find_elements_by_tag_name("li")[0]
                        overview.click()
                    else:
                        overview = \
                            self.driver.find_element_by_class_name("report-tabs").find_elements_by_tag_name("li")[0]
                        overview.click()
                    break
        except (IndexError, NoSuchElementException, ElementNotInteractableException):
            print("Could Not Click Strategy Tester Tab. Please Check web element XPATH.")
    def click_performance_summary(self):
        """click performance summary tab."""
        try:
            strategy_tester_tab = self.driver.find_elements_by_xpath("//*[@class='title-37voAVwR']")
            for index, web_element in enumerate(strategy_tester_tab):
                if web_element.text == 'Strategy Tester':
                    active_tab = strategy_tester_tab[index].get_attribute('data-active')
                    if active_tab == 'false':
                        strategy_tester_tab[index].click()
                        # find_element (singular) here: find_elements returns a list,
                        # which has no find_elements_by_tag_name method.
                        performance_tab = \
                            self.driver.find_element_by_class_name("report-tabs").find_elements_by_tag_name("li")[1]
                        performance_tab.click()
                    else:
                        performance_tab = \
                            self.driver.find_element_by_class_name("report-tabs").find_elements_by_tag_name("li")[1]
                        performance_tab.click()
                    break
        except (IndexError, NoSuchElementException, ElementNotInteractableException):
            print("Could Not Click Strategy Tester Tab. Please Check web element XPATH.")
    def click_list_of_trades(self):
        """click list of trades tab."""
        try:
            strategy_tester_tab = self.driver.find_elements_by_xpath("//*[@class='title-37voAVwR']")
            for index, web_element in enumerate(strategy_tester_tab):
                if web_element.text == 'Strategy Tester':
                    active_tab = strategy_tester_tab[index].get_attribute('data-active')
                    if active_tab == 'false':
                        strategy_tester_tab[index].click()
                        list_of_trades = \
                            self.driver.find_element_by_class_name("report-tabs").find_elements_by_tag_name("li")[2]
                        list_of_trades.click()
                    else:
                        list_of_trades = \
                            self.driver.find_element_by_class_name("report-tabs").find_elements_by_tag_name("li")[2]
                        list_of_trades.click()
                    break
        except (IndexError, NoSuchElementException, ElementNotInteractableException):
            print("Could Not Click Strategy Tester Tab. Please Check web element XPATH.")
    def click_long_stoploss_input(self, count, wait):
        """click long stoploss input."""
        wait.until(EC.visibility_of_element_located((By.XPATH, "//*[@class='input-3bEGcMc9 with-end-slot-S5RrC8PC']")))
        stoploss_input_box = \
            self.driver.find_elements_by_xpath("//*[@class='input-3bEGcMc9 with-end-slot-S5RrC8PC']")[0]
        stoploss_input_box.send_keys(Keys.BACK_SPACE + Keys.BACK_SPACE + Keys.BACK_SPACE + Keys.BACK_SPACE)
        stoploss_input_box.send_keys(str(count))
        stoploss_input_box.send_keys(Keys.ENTER)
        time.sleep(.5)
        ok_button = self.driver.find_element_by_name("submit")
        ok_button.click()
    def click_long_takeprofit_input(self, count, wait):
        """click long take profit input."""
        wait.until(EC.visibility_of_element_located((By.XPATH, "//*[@class='input-3bEGcMc9 with-end-slot-S5RrC8PC']")))
        takeprofit_input_box = \
            self.driver.find_elements_by_xpath("//*[@class='input-3bEGcMc9 with-end-slot-S5RrC8PC']")[1]
        takeprofit_input_box.send_keys(Keys.BACK_SPACE + Keys.BACK_SPACE + Keys.BACK_SPACE + Keys.BACK_SPACE)
        takeprofit_input_box.send_keys(str(count))
        takeprofit_input_box.send_keys(Keys.ENTER)
        time.sleep(.5)
        ok_button = self.driver.find_element_by_name("submit")
        ok_button.click()

    def click_short_stoploss_input(self, count, wait):
        """click short stoploss input."""
        wait.until(EC.visibility_of_element_located((By.XPATH, "//*[@class='input-3bEGcMc9 with-end-slot-S5RrC8PC']")))
        stoploss_input_box = \
            self.driver.find_elements_by_xpath("//*[@class='input-3bEGcMc9 with-end-slot-S5RrC8PC']")[2]
        stoploss_input_box.send_keys(Keys.BACK_SPACE + Keys.BACK_SPACE + Keys.BACK_SPACE + Keys.BACK_SPACE)
        stoploss_input_box.send_keys(str(count))
        stoploss_input_box.send_keys(Keys.ENTER)
        time.sleep(.5)
        ok_button = self.driver.find_element_by_name("submit")
        ok_button.click()

    def click_short_takeprofit_input(self, count, wait):
        """click short take profit input."""
        wait.until(EC.visibility_of_element_located((By.XPATH, "//*[@class='input-3bEGcMc9 with-end-slot-S5RrC8PC']")))
        takeprofit_input_box = \
            self.driver.find_elements_by_xpath("//*[@class='input-3bEGcMc9 with-end-slot-S5RrC8PC']")[3]
        takeprofit_input_box.send_keys(Keys.BACK_SPACE + Keys.BACK_SPACE + Keys.BACK_SPACE + Keys.BACK_SPACE)
        takeprofit_input_box.send_keys(str(count))
        takeprofit_input_box.send_keys(Keys.ENTER)
        time.sleep(.5)
        ok_button = self.driver.find_element_by_name("submit")
        ok_button.click()

    def click_long_inputs(self, long_stoploss_value, long_takeprofit_value, wait):
        """click both long inputs."""
        wait.until(EC.visibility_of_element_located((By.XPATH, "//*[@class='input-3bEGcMc9 with-end-slot-S5RrC8PC']")))
        stoploss_input_box = \
            self.driver.find_elements_by_xpath("//*[@class='input-3bEGcMc9 with-end-slot-S5RrC8PC']")[0]
        takeprofit_input_box = \
            self.driver.find_elements_by_xpath("//*[@class='input-3bEGcMc9 with-end-slot-S5RrC8PC']")[1]
        stoploss_input_box.send_keys(Keys.BACK_SPACE + Keys.BACK_SPACE + Keys.BACK_SPACE + Keys.BACK_SPACE)
        stoploss_input_box.send_keys(str(long_stoploss_value))
        takeprofit_input_box.send_keys(Keys.BACK_SPACE + Keys.BACK_SPACE + Keys.BACK_SPACE + Keys.BACK_SPACE)
        takeprofit_input_box.send_keys(str(long_takeprofit_value))
        takeprofit_input_box.send_keys(Keys.ENTER)
        time.sleep(.5)
        ok_button = self.driver.find_element_by_name("submit")
        ok_button.click()

    def click_short_inputs(self, short_stoploss_value, short_takeprofit_value, wait):
        """click both short inputs."""
        wait.until(EC.visibility_of_element_located((By.XPATH, "//*[@class='input-3bEGcMc9 with-end-slot-S5RrC8PC']")))
        stoploss_input_box = \
            self.driver.find_elements_by_xpath("//*[@class='input-3bEGcMc9 with-end-slot-S5RrC8PC']")[2]
        takeprofit_input_box = \
            self.driver.find_elements_by_xpath("//*[@class='input-3bEGcMc9 with-end-slot-S5RrC8PC']")[3]
        stoploss_input_box.send_keys(Keys.BACK_SPACE + Keys.BACK_SPACE + Keys.BACK_SPACE + Keys.BACK_SPACE)
        stoploss_input_box.send_keys(str(short_stoploss_value))
        takeprofit_input_box.send_keys(Keys.BACK_SPACE + Keys.BACK_SPACE + Keys.BACK_SPACE + Keys.BACK_SPACE)
        takeprofit_input_box.send_keys(str(short_takeprofit_value))
        takeprofit_input_box.send_keys(Keys.ENTER)
        time.sleep(.5)
        ok_button = self.driver.find_element_by_name("submit")
        ok_button.click()
    def click_all_inputs(self, long_stoploss_value, long_takeprofit_value, short_stoploss_value, short_takeprofit_value,
                         wait):
        """click all four stoploss/take-profit inputs."""
        wait.until(EC.visibility_of_element_located((By.XPATH, "//*[@class='input-3bEGcMc9 with-end-slot-S5RrC8PC']")))
        long_stoploss_input_box = \
            self.driver.find_elements_by_xpath("//*[@class='input-3bEGcMc9 with-end-slot-S5RrC8PC']")[0]
        long_takeprofit_input_box = \
            self.driver.find_elements_by_xpath("//*[@class='input-3bEGcMc9 with-end-slot-S5RrC8PC']")[1]
        short_stoploss_input_box = \
            self.driver.find_elements_by_xpath("//*[@class='input-3bEGcMc9 with-end-slot-S5RrC8PC']")[2]
        short_takeprofit_input_box = \
            self.driver.find_elements_by_xpath("//*[@class='input-3bEGcMc9 with-end-slot-S5RrC8PC']")[3]
        long_stoploss_input_box.send_keys(Keys.BACK_SPACE * 8)
        long_stoploss_input_box.send_keys(str(long_stoploss_value))
        long_takeprofit_input_box.send_keys(Keys.BACK_SPACE * 8)
        long_takeprofit_input_box.send_keys(str(long_takeprofit_value))
        short_stoploss_input_box.send_keys(Keys.BACK_SPACE * 8)
        short_stoploss_input_box.send_keys(str(short_stoploss_value))
        short_takeprofit_input_box.send_keys(Keys.BACK_SPACE * 8)
        short_takeprofit_input_box.send_keys(str(short_takeprofit_value))
        short_takeprofit_input_box.send_keys(Keys.ENTER)
        time.sleep(.5)
        ok_button = self.driver.find_element_by_name("submit")
        ok_button.click()
    def click_input_tab(self):
        """make sure the input tab is clicked."""
        try:
            input_tab = \
                self.driver.find_elements_by_xpath("//*[@class='tab-1KEqJy8_ withHover-1KEqJy8_ tab-3I2ohC86']")[0]
            if input_tab.get_attribute("data-value") == "inputs":
                input_tab.click()
        except IndexError:
            pass

    def click_ok_button(self):
        time.sleep(.5)
        ok_button = self.driver.find_element_by_name("submit")
        ok_button.click()

    def click_enable_both_checkboxes(self):
        long_checkbox = self.driver.find_elements_by_xpath("//*[@class='input-24iGIobO']")[0]
        short_checkbox = self.driver.find_elements_by_xpath("//*[@class='input-24iGIobO']")[1]
        if not long_checkbox.get_attribute("checked"):
            click_long_checkbox = self.driver.find_elements_by_xpath("//*[@class='box-3574HVnv check-382c8Fu1']")[0]
            click_long_checkbox.click()
        if not short_checkbox.get_attribute("checked"):
            click_short_checkbox = self.driver.find_elements_by_xpath("//*[@class='box-3574HVnv check-382c8Fu1']")[1]
            click_short_checkbox.click()

    def click_enable_long_strategy_checkbox(self):
        long_checkbox = self.driver.find_elements_by_xpath("//*[@class='input-24iGIobO']")[0]
        short_checkbox = self.driver.find_elements_by_xpath("//*[@class='input-24iGIobO']")[1]
        if not long_checkbox.get_attribute("checked"):
            click_long_checkbox = self.driver.find_elements_by_xpath("//*[@class='box-3574HVnv check-382c8Fu1']")[0]
            click_long_checkbox.click()
        if short_checkbox.get_attribute("checked"):
            click_short_checkbox = self.driver.find_elements_by_xpath("//*[@class='box-3574HVnv check-382c8Fu1']")[1]
            click_short_checkbox.click()

    def click_enable_short_strategy_checkbox(self):
        long_checkbox = self.driver.find_elements_by_xpath("//*[@class='input-24iGIobO']")[0]
        short_checkbox = self.driver.find_elements_by_xpath("//*[@class='input-24iGIobO']")[1]
        if long_checkbox.get_attribute("checked"):
            click_long_checkbox = self.driver.find_elements_by_xpath("//*[@class='box-3574HVnv check-382c8Fu1']")[0]
            click_long_checkbox.click()
        if not short_checkbox.get_attribute("checked"):
            click_short_checkbox = self.driver.find_elements_by_xpath("//*[@class='box-3574HVnv check-382c8Fu1']")[1]
            click_short_checkbox.click()
    def click_rest_all_inputs(self):
        """reset all four stoploss/take-profit inputs back to 50."""
        long_stoploss_input_box = \
            self.driver.find_elements_by_xpath("//*[@class='input-3bEGcMc9 with-end-slot-S5RrC8PC']")[0]
        long_takeprofit_input_box = \
            self.driver.find_elements_by_xpath("//*[@class='input-3bEGcMc9 with-end-slot-S5RrC8PC']")[1]
        short_stoploss_input_box = \
            self.driver.find_elements_by_xpath("//*[@class='input-3bEGcMc9 with-end-slot-S5RrC8PC']")[2]
        short_takeprofit_input_box = \
            self.driver.find_elements_by_xpath("//*[@class='input-3bEGcMc9 with-end-slot-S5RrC8PC']")[3]
        long_stoploss_input_box.send_keys(Keys.BACK_SPACE * 8)
        long_stoploss_input_box.send_keys("50")
        long_takeprofit_input_box.send_keys(Keys.BACK_SPACE * 8)
        long_takeprofit_input_box.send_keys("50")
        short_stoploss_input_box.send_keys(Keys.BACK_SPACE * 8)
        short_stoploss_input_box.send_keys("50")
        short_takeprofit_input_box.send_keys(Keys.BACK_SPACE * 8)
        short_takeprofit_input_box.send_keys("50")
        short_takeprofit_input_box.send_keys(Keys.ENTER)
    # Get Functions
    def get_net_all(self, long_stoploss_value, long_takeprofit_value, short_stoploss_value, short_takeprofit_value,
                    wait):
        wait.until(EC.visibility_of_element_located((By.CLASS_NAME, "additional_percent_value")))
        try:
            time.sleep(.5)
            check = self.driver.find_elements_by_class_name("additional_percent_value")[0]
            check.find_element_by_xpath('./span[contains(@class, "neg")]')
            negative = True
        except NoSuchElementException:
            negative = False
        net_profit = self.driver.find_elements_by_class_name("additional_percent_value")[0].text.split(" %")
        net_value = float(net_profit[0])
        if negative:
            # The widget shows the magnitude and flags losses with a 'neg'
            # span, so negate here; storing -net_value (as before) would
            # have recorded losses as positive keys.
            net_value = -net_value
        profits.update({net_value: ["Long Stoploss:", long_stoploss_value, "Long Take Profit:",
                                    long_takeprofit_value, "Short Stoploss:", short_stoploss_value,
                                    "Short Take Profit:", short_takeprofit_value]})
        print(colored(
            f'Net Profit: {net_value}% --> Long Stoploss: {long_stoploss_value}, Long Take Profit: {long_takeprofit_value}, Short Stoploss: {short_stoploss_value}, Short Take Profit: {short_takeprofit_value}',
            'red' if negative else 'green'))
        return net_profit
    def get_net_both(self, stoploss_value, takeprofit_value, wait):
        wait.until(EC.visibility_of_element_located((By.CLASS_NAME, "additional_percent_value")))
        try:
            time.sleep(.5)
            check = self.driver.find_elements_by_class_name("additional_percent_value")[0]
            check.find_element_by_xpath('./span[contains(@class, "neg")]')
            negative = True
        except NoSuchElementException:
            negative = False
        net_profit = self.driver.find_elements_by_class_name("additional_percent_value")[0].text.split(" %")
        net_value = float(net_profit[0])
        if negative:
            net_value = -net_value
        profits.update({net_value: ["Stoploss:", stoploss_value, "Take Profit:", takeprofit_value]})
        print(colored(f'Net Profit: {net_value}% --> Stoploss: {stoploss_value}, Take Profit: {takeprofit_value}',
                      'red' if negative else 'green'))
        return net_profit
    def get_net_profit_stoploss(self, count, wait):
        wait.until(EC.visibility_of_element_located((By.CLASS_NAME, "additional_percent_value")))
        try:
            time.sleep(.5)
            check = self.driver.find_elements_by_class_name("additional_percent_value")[0]
            check.find_element_by_xpath('./span[contains(@class, "neg")]')
            negative = True
        except NoSuchElementException:
            negative = False
        net_profit = self.driver.find_elements_by_class_name("additional_percent_value")[0].text.split(" %")
        net_value = float(net_profit[0])
        if negative:
            net_value = -net_value
        profits.update({count: net_value})
        print(colored(f'Stoploss: {count}%, Net Profit: {net_value}%', 'red' if negative else 'green'))
        return net_profit
    def get_net_profit_takeprofit(self, count, wait):
        try:
            wait.until(EC.visibility_of_element_located((By.CLASS_NAME, "additional_percent_value")))
            time.sleep(.5)
            check = self.driver.find_elements_by_class_name("additional_percent_value")[0]
            check.find_element_by_xpath('./span[contains(@class, "neg")]')
            negative = True
        except (NoSuchElementException, IndexError):
            negative = False
        net_profit = self.driver.find_elements_by_class_name("additional_percent_value")[0].text.split(" %")
        net_value = float(net_profit[0])
        if negative:
            net_value = -net_value
        profits.update({count: net_value})
        print(colored(f'Take Profit: {count}%, Net Profit: {net_value}%', 'red' if negative else 'green'))
        return net_profit
    def get_win_rate(self, count, wait):
        wait.until(EC.visibility_of_element_located((By.CLASS_NAME, "additional_percent_value")))
        try:
            win_rate_element = self.driver.find_elements_by_class_name("additional_percent_value")[1]
            win_rate_element.find_element_by_xpath('./span[contains(@class, "neg")]')
            negative = True
        except NoSuchElementException:
            negative = False
        win_rate = self.driver.find_elements_by_class_name("additional_percent_value")[1].text.split(" %")
        net_value = float(win_rate[0])
        if negative:
            net_value = -net_value
        profits.update({count: net_value})
        print(colored(f'{ {count: net_value} }', 'red' if negative else 'green'))
        return win_rate
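The `get_*` methods above all repeat the same parsing step: split the widget text on `' %'` and negate the value when a `neg` span was found. That step in isolation, as a hypothetical helper (not currently part of this class):

```python
def parse_percent(text, negative):
    """Parse TradingView's '12.34 %'-style text into a signed float."""
    value = float(text.split(' %')[0])
    return -value if negative else value

print(parse_percent('12.34 %', False))  # 12.34
print(parse_percent('12.34 %', True))   # -12.34
```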
# Show Me Functions
def print_best_stoploss(self):
try:
best_stoploss = max(profits, key=profits.get)
max_percentage = profits[best_stoploss]
if max_percentage > 0:
profitable = colored(str(best_stoploss) + " %", 'green')
print(f"Best Stoploss: " + str(profitable))
else:
profitable = colored(str(best_stoploss) + " %", 'red')
print(f"Best Stoploss: " + str(profitable))
except (UnboundLocalError, ValueError):
print("error printing stoploss.")
def print_best_takeprofit(self):
try:
best_takeprofit = max(profits, key=profits.get)
max_percentage = profits[best_takeprofit]
if max_percentage > 0:
profitable = colored(str(best_takeprofit) + " %", 'green')
print(f"Best Take Profit: " + str(profitable))
else:
profitable = colored(str(best_takeprofit) + " %", 'red')
print(f"Best Take Profit: " + str(profitable))
except (UnboundLocalError, ValueError):
print("error printing take profit.")
def print_best_both(self):
try:
best_key = self.find_best_key_both()
best_stoploss = profits[best_key][1]
best_takeprofit = profits[best_key][3]
print(f"Best Stop Loss: {best_stoploss}")
print(f"Best Take Profit: {best_takeprofit}\n")
except (UnboundLocalError, ValueError):
            print("error printing best values.")
def print_best_all(self):
try:
best_key = self.find_best_key_both()
best_long_stoploss = profits[best_key][1]
best_long_takeprofit = profits[best_key][3]
best_short_stoploss = profits[best_key][5]
best_short_takeprofit = profits[best_key][7]
print(f"Best Long Stop Loss: {best_long_stoploss}")
print(f"Best Long Take Profit: {best_long_takeprofit}")
print(f"Best Short Stop Loss: {best_short_stoploss}")
print(f"Best Short Take Profit: {best_short_takeprofit}\n")
except (UnboundLocalError, ValueError):
            print("error printing best values.")
def print_net_profit(self):
net_profit = \
self.driver.find_element_by_class_name("report-data").find_element_by_tag_name("table").find_element_by_tag_name(
"tbody").find_elements_by_tag_name("tr")[0].find_element_by_class_name("additional_percent_value")
try:
negative = net_profit.find_element_by_class_name("neg")
if negative:
display = colored(f'{net_profit.text}', 'red')
print(f'Net Profit: {display}')
except NoSuchElementException:
display = colored(f'{net_profit.text}', 'green')
print(f'Net Profit: {display}')
def print_gross_profit(self):
gross_profit = \
self.driver.find_element_by_class_name("report-data").find_element_by_tag_name("table").find_element_by_tag_name(
"tbody").find_elements_by_tag_name("tr")[1].find_element_by_class_name("additional_percent_value")
try:
negative = gross_profit.find_element_by_class_name("neg")
if negative:
display = colored(f'{gross_profit.text}', 'red')
print(f'Gross Profit: {display}')
except NoSuchElementException:
display = colored(f'{gross_profit.text}', 'green')
print(f'Gross Profit: {display}')
def print_gross_loss(self):
gross_loss = \
self.driver.find_element_by_class_name("report-data").find_element_by_tag_name("table").find_element_by_tag_name(
"tbody").find_elements_by_tag_name("tr")[2].find_element_by_class_name("additional_percent_value")
try:
negative = gross_loss.find_element_by_class_name("neg")
if negative:
display = colored(f'{gross_loss.text}', 'red')
print(f'Gross Loss: {display}')
except NoSuchElementException:
display = colored(f'{gross_loss.text}', 'green')
print(f'Gross Loss: {display}')
def print_max_drawdown(self):
max_drawdown = \
self.driver.find_element_by_class_name("report-data").find_element_by_tag_name("table").find_element_by_tag_name(
"tbody").find_elements_by_tag_name("tr")[3].find_element_by_class_name("additional_percent_value")
try:
negative = max_drawdown.find_element_by_class_name("neg")
if negative:
display = colored(f'{max_drawdown.text}', 'red')
print(f'Max Drawdown: {display}')
except NoSuchElementException:
display = colored(f'{max_drawdown.text}', 'green')
print(f'Max Drawdown: {display}')
def print_buy_and_hold_return(self):
buy_and_hold_return = \
self.driver.find_element_by_class_name("report-data").find_element_by_tag_name("table").find_element_by_tag_name(
"tbody").find_elements_by_tag_name("tr")[4].find_element_by_class_name("additional_percent_value")
try:
negative = buy_and_hold_return.find_element_by_class_name("neg")
if negative:
display = colored(f'{buy_and_hold_return.text}', 'red')
print(f'Buy & Hold Return: {display}')
except NoSuchElementException:
display = colored(f'{buy_and_hold_return.text}', 'green')
print(f'Buy & Hold Return: {display}')
def print_sharpe_ratio(self):
sharpe_ratio = \
self.driver.find_element_by_class_name("report-data").find_element_by_tag_name("table").find_element_by_tag_name(
"tbody").find_elements_by_tag_name("tr")[5].find_elements_by_tag_name("td")[1]
try:
negative = sharpe_ratio.find_element_by_class_name("neg")
if negative:
display = colored(f'{sharpe_ratio.text}', 'red')
print(f'Sharpe Ratio: {display}')
except NoSuchElementException:
display = colored(f'{sharpe_ratio.text}', 'green')
print(f'Sharpe Ratio: {display}')
def print_sortino_ratio(self):
sortino_ratio = \
self.driver.find_element_by_class_name("report-data").find_element_by_tag_name("table").find_element_by_tag_name(
"tbody").find_elements_by_tag_name("tr")[6].find_elements_by_tag_name("td")[1]
try:
negative = sortino_ratio.find_element_by_class_name("neg")
if negative:
display = colored(f'{sortino_ratio.text}', 'red')
print(f'Sortino Ratio: {display}')
except NoSuchElementException:
display = colored(f'{sortino_ratio.text}', 'green')
print(f'Sortino Ratio: {display}')
def print_profit_factor(self):
profit_factor = \
self.driver.find_element_by_class_name("report-data").find_element_by_tag_name("table").find_element_by_tag_name(
"tbody").find_elements_by_tag_name("tr")[7].find_elements_by_tag_name("td")[1]
try:
negative = profit_factor.find_element_by_class_name("neg")
if negative:
display = colored(f'{profit_factor.text}', 'red')
print(f'Profit Factor: {display}')
except NoSuchElementException:
display = colored(f'{profit_factor.text}', 'green')
print(f'Profit Factor: {display}')
def print_max_contracts_held(self):
max_contracts_held = \
self.driver.find_element_by_class_name("report-data").find_element_by_tag_name("table").find_element_by_tag_name(
"tbody").find_elements_by_tag_name("tr")[8].find_elements_by_tag_name("td")[1]
try:
negative = max_contracts_held.find_element_by_class_name("neg")
if negative:
display = colored(f'{max_contracts_held.text}', 'red')
print(f'Max Contracts Held: {display}')
except NoSuchElementException:
display = colored(f'{max_contracts_held.text}', 'green')
print(f'Max Contracts Held: {display}')
def print_open_pl(self):
open_pl = \
self.driver.find_element_by_class_name("report-data").find_element_by_tag_name("table").find_element_by_tag_name(
"tbody").find_elements_by_tag_name("tr")[9].find_element_by_class_name("additional_percent_value")
try:
negative = open_pl.find_element_by_class_name("neg")
if negative:
display = colored(f'{open_pl.text}', 'red')
print(f'Open PL: {display}')
except NoSuchElementException:
display = colored(f'{open_pl.text}', 'green')
print(f'Open PL: {display}')
def print_commission_paid(self):
commission_paid = \
self.driver.find_element_by_class_name("report-data").find_element_by_tag_name("table").find_element_by_tag_name(
"tbody").find_elements_by_tag_name("tr")[10].find_elements_by_tag_name("td")[1]
print(f'Commission Paid: {commission_paid.text}')
def print_total_closed_trades(self):
total_closed_trades = \
self.driver.find_element_by_class_name("report-data").find_element_by_tag_name("table").find_element_by_tag_name(
"tbody").find_elements_by_tag_name("tr")[11].find_elements_by_tag_name("td")[1]
print(f'Total Closed Trades: {total_closed_trades.text}')
def print_total_open_trades(self):
total_open_trades = \
self.driver.find_element_by_class_name("report-data").find_element_by_tag_name("table").find_element_by_tag_name(
"tbody").find_elements_by_tag_name("tr")[12].find_elements_by_tag_name("td")[1]
print(f'Total Open Trades: {total_open_trades.text}')
def print_number_winning_trades(self):
number_winning_trades = \
self.driver.find_element_by_class_name("report-data").find_element_by_tag_name("table").find_element_by_tag_name(
"tbody").find_elements_by_tag_name("tr")[13].find_elements_by_tag_name("td")[1]
print(f'Number Winning Trades: {number_winning_trades.text}')
def print_number_losing_trades(self):
number_losing_trades = \
self.driver.find_element_by_class_name("report-data").find_element_by_tag_name("table").find_element_by_tag_name(
"tbody").find_elements_by_tag_name("tr")[14].find_elements_by_tag_name("td")[1]
print(f'Number Losing Trades: {number_losing_trades.text}')
def print_percent_profitable(self):
percent_profitable = \
self.driver.find_element_by_class_name("report-data").find_element_by_tag_name("table").find_element_by_tag_name(
"tbody").find_elements_by_tag_name("tr")[15].find_elements_by_tag_name("td")[1]
print(f'Percent Profitable: {percent_profitable.text}')
def print_avg_trade(self):
avg_trade = \
self.driver.find_element_by_class_name("report-data").find_element_by_tag_name("table").find_element_by_tag_name(
"tbody").find_elements_by_tag_name("tr")[16].find_element_by_class_name("additional_percent_value")
try:
negative = avg_trade.find_element_by_class_name("neg")
if negative:
display = colored(f'{avg_trade.text}', 'red')
print(f'Avg Trade: {display}')
except NoSuchElementException:
display = colored(f'{avg_trade.text}', 'green')
print(f'Avg Trade: {display}')
def print_avg_win_trade(self):
avg_win_trade = \
self.driver.find_element_by_class_name("report-data").find_element_by_tag_name("table").find_element_by_tag_name(
"tbody").find_elements_by_tag_name("tr")[17].find_element_by_class_name("additional_percent_value")
try:
negative = avg_win_trade.find_element_by_class_name("neg")
if negative:
display = colored(f'{avg_win_trade.text}', 'red')
print(f'Avg Win Trade: {display}')
except NoSuchElementException:
display = colored(f'{avg_win_trade.text}', 'green')
print(f'Avg Win Trade: {display}')
def print_avg_loss_trade(self):
avg_loss_trade = \
self.driver.find_element_by_class_name("report-data").find_element_by_tag_name("table").find_element_by_tag_name(
"tbody").find_elements_by_tag_name("tr")[18].find_element_by_class_name("additional_percent_value")
try:
negative = avg_loss_trade.find_element_by_class_name("neg")
if negative:
display = colored(f'{avg_loss_trade.text}', 'red')
print(f'Avg Loss Trade: {display}')
except NoSuchElementException:
display = colored(f'{avg_loss_trade.text}', 'green')
print(f'Avg Loss Trade: {display}')
def print_win_loss_ratio(self):
win_loss_ratio = \
self.driver.find_element_by_class_name("report-data").find_element_by_tag_name("table").find_element_by_tag_name(
"tbody").find_elements_by_tag_name("tr")[19].find_elements_by_tag_name("td")[1]
print(f'Win/Loss Ratio: {win_loss_ratio.text}')
def print_largest_winning_trade(self):
largest_winning_trade = \
self.driver.find_element_by_class_name("report-data").find_element_by_tag_name("table").find_element_by_tag_name(
"tbody").find_elements_by_tag_name("tr")[20].find_element_by_class_name("additional_percent_value")
try:
negative = largest_winning_trade.find_element_by_class_name("neg")
if negative:
display = colored(f'{largest_winning_trade.text}', 'red')
print(f'Largest Win Trade: {display}')
except NoSuchElementException:
display = colored(f'{largest_winning_trade.text}', 'green')
print(f'Largest Win Trade: {display}')
def print_largest_losing_trade(self):
largest_losing_trade = \
self.driver.find_element_by_class_name("report-data").find_element_by_tag_name("table").find_element_by_tag_name(
"tbody").find_elements_by_tag_name("tr")[21].find_element_by_class_name("additional_percent_value")
try:
negative = largest_losing_trade.find_element_by_class_name("neg")
if negative:
display = colored(f'{largest_losing_trade.text}', 'red')
print(f'Largest Loss Trade: {display}')
except NoSuchElementException:
display = colored(f'{largest_losing_trade.text}', 'green')
print(f'Largest Loss Trade: {display}')
def print_avg_bars_in_trades(self):
avg_bars_in_trades = \
self.driver.find_element_by_class_name("report-data").find_element_by_tag_name("table").find_element_by_tag_name(
"tbody").find_elements_by_tag_name("tr")[22].find_elements_by_tag_name("td")[1]
print(f'Avg Bars In Trades: {avg_bars_in_trades.text}')
def print_avg_bars_in_winning_trades(self):
avg_bars_in_winning_trades = \
self.driver.find_element_by_class_name("report-data").find_element_by_tag_name("table").find_element_by_tag_name(
"tbody").find_elements_by_tag_name("tr")[23].find_elements_by_tag_name("td")[1]
print(f'Avg Bars In Winning Trades: {avg_bars_in_winning_trades.text}')
def print_avg_bars_in_losing_trades(self):
avg_bars_in_losing_trades = \
self.driver.find_element_by_class_name("report-data").find_element_by_tag_name("table").find_element_by_tag_name(
"tbody").find_elements_by_tag_name("tr")[24].find_elements_by_tag_name("td")[1]
print(f'Avg Bars In Losing Trades: {avg_bars_in_losing_trades.text}')
def print_margin_calls(self):
margin_calls = \
self.driver.find_element_by_class_name("report-data").find_element_by_tag_name("table").find_element_by_tag_name(
"tbody").find_elements_by_tag_name("tr")[25].find_elements_by_tag_name("td")[1]
        print(f'Margin Calls: {margin_calls.text}')
def print_win_rate(self):
win_rate = \
self.driver.find_element_by_class_name("report-data").find_element_by_tag_name("table").find_element_by_tag_name(
"tbody").find_elements_by_tag_name("tr")[15].find_elements_by_tag_name("td")[1]
print(f'Win Rate: {win_rate.text}')
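The `print_*` methods above differ only in the table-row index, the label, and whether the value is colour-coded by sign, so they could be driven by one table. The sketch below keeps only the pure formatting core so it runs without a browser; the names are hypothetical:

```python
def format_metric(label, cell_text, is_negative):
    """Build the 'Label: value' line and the colour the print_* methods above emit."""
    color = "red" if is_negative else "green"
    return f"{label}: {cell_text}", color

# partial row-index -> label map, following the report-table order used above
REPORT_ROWS = {0: "Net Profit", 1: "Gross Profit", 2: "Gross Loss", 3: "Max Drawdown"}

print(format_metric(REPORT_ROWS[3], "-12.3 %", True))  # ('Max Drawdown: -12.3 %', 'red')
```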
c30880268ff86e5b97281e6789c71323ad8036cb | 11,445 | py | Python | Objectes.py | Sjats39/Newtons-Mechanic-Sistem-Simulation | a7bb230b0417d3ccca793a0741b2a4805f6ba710 | [
"Apache-2.0"
] | null | null | null | Objectes.py | Sjats39/Newtons-Mechanic-Sistem-Simulation | a7bb230b0417d3ccca793a0741b2a4805f6ba710 | [
"Apache-2.0"
] | null | null | null | Objectes.py | Sjats39/Newtons-Mechanic-Sistem-Simulation | a7bb230b0417d3ccca793a0741b2a4805f6ba710 | [
"Apache-2.0"
] | null | null | null | import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors as colors
G = 6.67408e-11
#constant de gravetat universal per reduir els calculs es interessant augmentar el valor per que la gravetat sigui mes forta en cossos de poca massa i distancies mes curtes
#Es podria implementar amb certa faciilitat en un espai R3, inclus utilitzar el model de la relativitat enlloc de la mecanica newtoniana
class particula : #cos de massa negligable en comparacio al sistema
def __init__(self,sistema, pos_x,pos_y,vx_0 =0, vy_0=0, Tf=10E2,Te=0.01,t =0 , color="red"): #atencio valor Te
self.t = t
self.sys= sistema
self.Tf = Tf
self.Te = Te
self.pos_x = pos_x
self.pos_y = pos_y
self.vx_0 = vx_0
self.vy_0 = vy_0
self.X=[self.pos_x]
self.Y=[self.pos_y]
self.c = color
def trajectoria (self):
def small_trajec(sistema):
gx, gy = sistema.Gfield(self.pos_x,self.pos_y)
vx = gx*self.Te +self.vx_0
vy= gy*self.Te +self.vy_0
x = (1/2)*gx*(self.Te)**2+self.vx_0*self.Te +self.pos_x
y = (1/2)*gy*(self.Te)**2+self.vy_0*self.Te +self.pos_y
return(x,y,vx,vy)
crash = False
while self.t< self.Tf and crash ==False :
self.pos_x,self.pos_y, self.vx_0,self.vy_0=small_trajec(self.sys)
for cos in self.sys.cossos :
crash = cos.Esta(self.pos_x,self.pos_y)
if crash == True :
break
self.t += self.Te
self.X.append(self.pos_x)
self.Y.append(self.pos_y)
plt.plot(self.X,self.Y,c=self.c)
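Each pass through `small_trajec` is one constant-acceleration kinematic step (explicit Euler on the velocity, plus the half-a-t-squared position term). The update in isolation, under a hypothetical name:

```python
def kinematic_step(x, v, a, dt):
    """One constant-acceleration step, mirroring small_trajec for a single axis."""
    v_new = a * dt + v
    x_new = 0.5 * a * dt ** 2 + v * dt + x
    return x_new, v_new

# free fall from rest for one second at a = -9.81
print(kinematic_step(0.0, 0.0, -9.81, 1.0))  # (-4.905, -9.81)
```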
class particula_accelerada:  # unlike particula, this object can accelerate at one (or more) instants (a rocket burn in between; its mass and the mass variation are not taken into account)
    # If acceleracions == [] it behaves exactly like a particula
    def __init__(self, sistema, pos_x, pos_y, acceleracions, massa, vx_0=0, vy_0=0, escala=1, Tf=10E2, Te=0.01, t=0, color="orange"):  # acceleracions = [[T_0, duration, F_x, F_y], ...] (list of lists)
        # sistema (/!\ gravitational field)
self.sys = sistema
        # object attributes
self.massa = massa
self.pos_x = pos_x
self.pos_y = pos_y
self.vx_0 = vx_0
self.vy_0 = vy_0
        # trajectory-integration settings
self.Tf = Tf
self.Te = Te
self.acceleracions = acceleracions
self.t = t
        # plotting settings
self.scale = escala
self.color = color
self.X=[self.pos_x]
self.Y=[self.pos_y]
        # TODO: determine the "shape" (collisions); for now the object is treated as a point
def trajectoria (self):
def small_trajec(sistema,Fx=None,Fy= None):
gx, gy = sistema.Gfield(self.pos_x,self.pos_y)
if Fx != None and Fy !=None:
vx = (gx+Fx/self.massa)*self.Te +self.vx_0
vy= (gy+Fy/self.massa)*self.Te +self.vy_0
x = (1/2)*(gx+Fx/self.massa)*(self.Te)**2+self.vx_0*self.Te +self.pos_x
y = (1/2)*(gy+Fy/self.massa)*(self.Te)**2+self.vy_0*self.Te +self.pos_y
else :
vx = gx*self.Te +self.vx_0
vy= gy*self.Te +self.vy_0
x = (1/2)*gx*(self.Te)**2+self.vx_0*self.Te +self.pos_x
y = (1/2)*gy*(self.Te)**2+self.vy_0*self.Te +self.pos_y
return(x,y,vx,vy)
crash = False
while self.t< self.Tf and crash ==False :
if self.acceleracions == []:
self.pos_x,self.pos_y, self.vx_0,self.vy_0=small_trajec(self.sys)
else :
if self.acceleracions[0][0] < self.t :
self.pos_x,self.pos_y, self.vx_0,self.vy_0=small_trajec(self.sys)
else:
self.pos_x,self.pos_y, self.vx_0,self.vy_0=small_trajec(self.sys,self.acceleracions[0][2],self.acceleracions[0][3])
if self.acceleracions[0][0] + self.acceleracions[0][1] > self.t :
self.acceleracions[0][0] += self.Te
self.acceleracions[0][1] -= self.Te
if self.acceleracions[0][0] + self.acceleracions[0][1] < self.t +self.Te :
self.acceleracions.remove(self.acceleracions[0])
for cos in self.sys.cossos :
crash = cos.Esta(self.pos_x,self.pos_y)
if crash == True :
break
self.t += self.Te
self.X.append(self.pos_x)
self.Y.append(self.pos_y)
plt.plot(self.X,self.Y,c=self.color)
#plt.scatter(self.X,self.Y,c=self.color)
class cos:  # we assume the bodies are perfect circles
def __init__(self,mass,pos_x,pos_y,densitat= 1,escala=(1/5)*10E-7, color = "black"):
self.mass = mass
self.pos_x = pos_x
self.pos_y = pos_y
self.densitat= densitat
self.scale = escala
self.color = color
self.radi= (((3/4)*self.mass)/self.densitat)**(1/3)
    def field(self, x, y):  # field created by the body in a vacuum
x = x-self.pos_x
y = y - self.pos_y
r2 = x*x + y*y
theta = np.arctan2(y, x)
A = -(G*self.mass)/r2
return A*np.cos(theta), A*np.sin(theta)
def Esta(self,x,y):
        distancia = np.sqrt((self.pos_x - x)**2 + (self.pos_y - y)**2)  # distance between the body's centre and the point
        if distancia < self.radi*self.scale:  # for the tests the radius is multiplied by the scale factor (visualisation); normally this factor should be removed
#if distancia < self.radi*self.scale:
return(True)
else:
return(False)
def figura(self):
return plt.Circle((self.pos_x,self.pos_y),self.radi*self.scale,color = self.color)
class cos_mobil ():
def __init__(self,mass,pos_x,pos_y, vx_0 =0 , vy_0 = 0,densitat= 1,escala=1, color = "black"):
self.mass = mass
self.pos_x = pos_x
self.pos_y = pos_y
self.vx_0 = vx_0
self.vy_0 = vy_0
self.densitat= densitat
self.scale = escala
self.color = color
self.radi= (((3/4)*self.mass)/self.densitat)**(1/3)
    def field(self, x, y):  # field created by the body in a vacuum
x = x-self.pos_x
y = y - self.pos_y
r2 = x*x + y*y
theta = np.arctan2(y, x)
A = -(G*self.mass)/r2
return A*np.cos(theta), A*np.sin(theta)
def Esta(self,x,y):
        distancia = np.sqrt((self.pos_x - x)**2 + (self.pos_y - y)**2)  # distance between the body's centre and the point
        if distancia < self.radi:  # as in the cos object: to stay consistent with the plot, multiply by self.scale
return(True)
else:
return(False)
def figura(self):
return plt.Circle((self.pos_x,self.pos_y),self.radi*self.scale,color = self.color)
class coet:  # the mass will not be constant in this case (although it remains negligible compared to the system)
    # If acceleracions == [] it behaves exactly like a particula
    def __init__(self, sistema, pos_x, pos_y, acceleracions, m_0, c, vx_0=0, vy_0=0, escala=1, Tf=10E2, Te=0.01, t=0, color="orange"):  # acceleracions = [[T_0, duration, F_x, F_y], ...] (list of lists)
        # acceleracions is a list of 5-element lists: acc = [[t0, dt, mf, Fx, Fy], ...]
        # sistema (/!\ gravitational field)
        self.sys = sistema
        # propellant
        self.c = c  # exhaust velocity of the gases relative to the rocket
        # object attributes
self.m_0 = m_0
self.pos_x = pos_x
self.pos_y = pos_y
self.vx_0 = vx_0
self.vy_0 = vy_0
        # trajectory-integration settings
self.Tf = Tf
self.Te = Te
self.acceleracions = acceleracions
self.t = t
        # plotting settings
self.scale = escala
self.color = color
self.X=[self.pos_x]
self.Y=[self.pos_y]
        # TODO: determine the "shape" (collisions); for now the object is treated as a point
def trajectoria (self):
        def mass_evolution(m_0, v0, v1):
            # Tsiolkovsky rocket equation: remaining mass after a velocity change of (v1 - v0)
            return m_0 * np.exp(-(v1 - v0) / self.c)
        m = [self.m_0]  # mass history; append mass_evolution(m[-1], ...) each step so m[i + 1] exists
def variation(x1,x2):
return((x2-x1)/self.Te)
        def small_trajec(sistema, m0, m1, Fx=None, Fy=None):
            gx, gy = sistema.Gfield(self.pos_x, self.pos_y)
            # thrust contribution of the expelled mass: (dm/dt) / m * v
            dm_x = (variation(m0, m1) / m1) * self.vx_0
            dm_y = (variation(m0, m1) / m1) * self.vy_0
            if Fx is not None and Fy is not None:
                ax = gx + Fx / m1 + dm_x
                ay = gy + Fy / m1 + dm_y
            else:
                ax = gx + dm_x
                ay = gy + dm_y
            vx = ax * self.Te + self.vx_0
            vy = ay * self.Te + self.vy_0
            x = (1 / 2) * ax * self.Te ** 2 + self.vx_0 * self.Te + self.pos_x
            y = (1 / 2) * ay * self.Te ** 2 + self.vy_0 * self.Te + self.pos_y
            return (x, y, vx, vy)
crash = False
i = 0
while self.t< self.Tf and crash ==False :
if self.acceleracions == []:
self.pos_x,self.pos_y, self.vx_0,self.vy_0=small_trajec(self.sys, m[i],m[i+1])
else :
if self.acceleracions[0][0] < self.t :
self.pos_x,self.pos_y, self.vx_0,self.vy_0=small_trajec(self.sys, m[i],m[i+1])
else:
self.pos_x,self.pos_y, self.vx_0,self.vy_0=small_trajec(self.sys, m[i],m[i+1],self.acceleracions[0][2],self.acceleracions[0][3])
if self.acceleracions[0][0] + self.acceleracions[0][1] > self.t :
self.acceleracions[0][0] += self.Te
self.acceleracions[0][1] -= self.Te
if self.acceleracions[0][0] + self.acceleracions[0][1] < self.t +self.Te :
self.acceleracions.remove(self.acceleracions[0])
i += 1
for cos in self.sys.cossos :
crash = cos.Esta(self.pos_x,self.pos_y)
if crash == True :
break
self.t += self.Te
self.X.append(self.pos_x)
self.Y.append(self.pos_y)
plt.plot(self.X,self.Y,c=self.color)
#plt.scatter(self.X,self.Y,c=self.color)
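`mass_evolution` inside `coet.trajectoria` is intended to be the Tsiolkovsky rocket equation, which ties a velocity change to the propellant spent. A standalone sketch (hypothetical name; note the negative sign, so the mass decreases as the velocity change grows):

```python
import math

def mass_after_burn(m0, delta_v, c):
    """Tsiolkovsky rocket equation: mass left after a velocity change delta_v,
    given exhaust velocity c."""
    return m0 * math.exp(-delta_v / c)

print(mass_after_burn(1000.0, 0.0, 3000.0))               # 1000.0 (no burn, no mass lost)
print(round(mass_after_burn(1000.0, 3000.0, 3000.0), 1))  # 367.9  (delta_v == c -> m0 / e)
```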
class cometa:  # in this case its mass will not be omitted from the gravitational-field calculation
def __init__(self,m_0,pos_x,pos_y, vx_0 =0 , vy_0 = 0,densitat= 1,escala=1, color = "black"):
self.m_0 = m_0
self.pos_x = pos_x
self.pos_y = pos_y
self.vx_0 = vx_0
self.vy_0 = vy_0
self.densitat= densitat
self.scale = escala
self.color = color
    # TODO: determine the "shape" (collisions)
c328211bb9acdb38566a948687bd9ce51765ae9c | 78 | py | Python | py_tdlib/constructors/search_messages_filter_chat_photo.py | Mr-TelegramBot/python-tdlib | 2e2d21a742ebcd439971a32357f2d0abd0ce61eb | [
"MIT"
] | 24 | 2018-10-05T13:04:30.000Z | 2020-05-12T08:45:34.000Z | py_tdlib/constructors/search_messages_filter_chat_photo.py | MrMahdi313/python-tdlib | 2e2d21a742ebcd439971a32357f2d0abd0ce61eb | [
"MIT"
] | 3 | 2019-06-26T07:20:20.000Z | 2021-05-24T13:06:56.000Z | py_tdlib/constructors/search_messages_filter_chat_photo.py | MrMahdi313/python-tdlib | 2e2d21a742ebcd439971a32357f2d0abd0ce61eb | [
"MIT"
] | 5 | 2018-10-05T14:29:28.000Z | 2020-08-11T15:04:10.000Z | from ..factory import Type
class searchMessagesFilterChatPhoto(Type):
pass
c32c772c1275ebcb09743587e31bdde85b4f59eb | 70 | py | Python | nsdloader/__init__.py | DanielAnthes/NSD-DataLoader | 1ac9d7a5b581c98a55d0aff2faa76e0e8ee97396 | [
"MIT"
] | null | null | null | nsdloader/__init__.py | DanielAnthes/NSD-DataLoader | 1ac9d7a5b581c98a55d0aff2faa76e0e8ee97396 | [
"MIT"
] | null | null | null | nsdloader/__init__.py | DanielAnthes/NSD-DataLoader | 1ac9d7a5b581c98a55d0aff2faa76e0e8ee97396 | [
"MIT"
] | null | null | null | from .nsdloader import NSDLoader # import main class from subpackage
c35e04f022f922d26a536836ce884ad69d9493e6 | 26 | py | Python | setup.py | kamfed/pyCFormation | 11bfc77bff727669101b8d3e9af52e495e61fb52 | [
"MIT"
] | null | null | null | setup.py | kamfed/pyCFormation | 11bfc77bff727669101b8d3e9af52e495e61fb52 | [
"MIT"
] | null | null | null | setup.py | kamfed/pyCFormation | 11bfc77bff727669101b8d3e9af52e495e61fb52 | [
"MIT"
] | null | null | null | #Todo: Create a setup file
5ee24ed6b0789a5a1bce5e36813e0930067d5555 | 104 | py | Python | src/compas_wood/datastructures/assembly.py | brgcode/compas_wood | 5b43d1a77053523e6a5132dbfcbd99b808cf5a52 | [
"MIT"
] | null | null | null | src/compas_wood/datastructures/assembly.py | brgcode/compas_wood | 5b43d1a77053523e6a5132dbfcbd99b808cf5a52 | [
"MIT"
] | null | null | null | src/compas_wood/datastructures/assembly.py | brgcode/compas_wood | 5b43d1a77053523e6a5132dbfcbd99b808cf5a52 | [
"MIT"
] | null | null | null | from typing import NewType
from compas.datastructures import Network
class Assembly(Network):
pass
6f3569406debd001e223b0b2c212890670b680b6 | 166 | py | Python | solid/l/bad.py | yulio94/python-programming-concepts | e60ceded9ae34f854f6d23f30ffd9e199b393658 | [
"MIT"
] | null | null | null | solid/l/bad.py | yulio94/python-programming-concepts | e60ceded9ae34f854f6d23f30ffd9e199b393658 | [
"MIT"
] | null | null | null | solid/l/bad.py | yulio94/python-programming-concepts | e60ceded9ae34f854f6d23f30ffd9e199b393658 | [
"MIT"
] | null | null | null | class Animal:
    """Base class: promises that every Animal can fly."""
    def fly(self):
        """Fly -- subclasses are expected to honour this contract."""
class Dog(Animal):
    """Deliberately bad example: Dog violates the Liskov Substitution Principle."""
    def fly(self):
        raise Exception  # a Dog cannot fly, so the inherited contract is broken
6f3ef83d2f70dc134821b89a5fb2fbc0ff3593ac | 51 | py | Python | tests/core/test_import.py | pipermerriam/cthaeh | a3f63b0522d940af37f485ccbeed07666adb465b | [
"MIT"
] | 2 | 2020-09-17T11:23:18.000Z | 2021-11-04T14:15:27.000Z | tests/core/test_import.py | pipermerriam/cthaeh | a3f63b0522d940af37f485ccbeed07666adb465b | [
"MIT"
] | 8 | 2020-04-28T18:23:44.000Z | 2020-05-05T00:51:09.000Z | tests/core/test_import.py | pipermerriam/cthaeh | a3f63b0522d940af37f485ccbeed07666adb465b | [
"MIT"
] | 5 | 2020-04-27T18:30:54.000Z | 2022-03-28T18:55:30.000Z | def test_import():
import cthaeh # noqa: F401
48a1a5b5e9de09d5d4d8e51854ac73c1850a044e | 46,400 | py | Python | pymove/molecules/marching_cubes.py | manny405/PyMoVE | 82045fa27b3bd31f2159d3ad72dc0a373c5e7b23 | [
"BSD-3-Clause"
] | 5 | 2021-01-24T10:35:06.000Z | 2021-11-30T07:55:44.000Z | pymove/molecules/marching_cubes.py | manny405/PyMoVE | 82045fa27b3bd31f2159d3ad72dc0a373c5e7b23 | [
"BSD-3-Clause"
] | null | null | null | pymove/molecules/marching_cubes.py | manny405/PyMoVE | 82045fa27b3bd31f2159d3ad72dc0a373c5e7b23 | [
"BSD-3-Clause"
] | 1 | 2021-11-28T16:37:48.000Z | 2021-11-28T16:37:48.000Z |
import numpy as np
from ase.data import vdw_radii,atomic_numbers,covalent_radii
from ase.data.colors import jmol_colors
from pymove import Structure
from pymove.io import read,write
from pymove.driver import BaseDriver_
from pymove.molecules.utils import com
import numpy as np
from scipy.spatial.distance import cdist
import scipy
from matplotlib.colors import to_hex
from pymove.io import read,write
from pymove.molecules.align import align
from pymove.molecules.marching_cubes_lookup import *
from numba import jit
from numba.extending import overload
import time
all_radii = []
for idx,value in enumerate(vdw_radii):
if np.isnan(value):
value = covalent_radii[idx]
all_radii.append(value)
all_radii = np.array(all_radii)
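`ase` stores undefined van der Waals radii as NaN, so the loop above backfills them with covalent radii. The fallback rule on its own (hypothetical helper, no `ase` needed):

```python
import math

def radii_with_fallback(vdw, covalent):
    """Prefer the vdW radius; use the covalent radius wherever vdW is NaN."""
    return [cov if math.isnan(v) else v for v, cov in zip(vdw, covalent)]

print(radii_with_fallback([1.2, float("nan"), 1.7], [0.31, 0.28, 0.76]))  # [1.2, 0.28, 1.7]
```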
def equal_axis_aspect(ax):
xticks = ax.get_xticks()
yticks = ax.get_yticks()
zticks = ax.get_zticks()
xrange = xticks[-1] - xticks[0]
yrange = yticks[-1] - yticks[0]
zrange = zticks[-1] - zticks[0]
max_range = max([xrange,yrange,zrange]) / 2
xmid = np.mean(xticks)
ymid = np.mean(yticks)
zmid = np.mean(zticks)
ax.set_xlim(xmid - max_range, xmid + max_range)
ax.set_ylim(ymid - max_range, ymid + max_range)
ax.set_zlim(zmid - max_range, zmid + max_range)
def equal_axis_aspect_2D(ax):
xticks = ax.get_xticks()
yticks = ax.get_yticks()
xrange = xticks[-1] - xticks[0]
yrange = yticks[-1] - yticks[0]
max_range = max([xrange,yrange]) / 2
xmid = np.mean(xticks)
ymid = np.mean(yticks)
ax.set_xlim(xmid - max_range, xmid + max_range)
ax.set_ylim(ymid - max_range, ymid + max_range)
def compute_edge_sites(cube_vertex):
pair_idx = np.array([
[0,1],
[0,3],
[2,3],
[1,2],
[0,4],
[3,7],
[2,6],
[1,5],
[4,5],
[4,7],
[6,7],
[5,6],
])
pairs = cube_vertex[pair_idx]
edge = np.mean(pairs, axis=1)
return edge
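`compute_edge_sites` averages 12 fixed vertex pairs, so each output row is an edge midpoint. Applied to a unit cube (the vertex ordering below is an assumption chosen to match the pair table):

```python
import numpy as np

pair_idx = np.array([[0, 1], [0, 3], [2, 3], [1, 2], [0, 4], [3, 7],
                     [2, 6], [1, 5], [4, 5], [4, 7], [6, 7], [5, 6]])

# assumed vertex ordering: bottom face 0-3, top face 4-7
cube = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                 [0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]], dtype=float)

edges = np.mean(cube[pair_idx], axis=1)
print(edges.shape)  # (12, 3)
```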
class MarchingCubes(BaseDriver_):
def __init__(self, vdw=all_radii, update=True,
cache=0.25, spacing=0.25):
self.vdw = vdw
self.update = update
self.struct = None
self.spacing = spacing
self.cache = cache
self.offset_combination_dict = self.create_offset_dict_fast()
## Storage
self.x_vals = []
self.y_vals = []
self.z_vals = []
    def create_offset_dict(self):
        ## Find all combinations of small integer offsets whose norm is less
        ## than or equal to each candidate value. This is equivalent to
        ## finding all grid points within a certain radius.
        offset_combination_dict = {}
        max_offset_value = np.round(np.max(self.vdw) / self.cache) + 1
        idx_range = np.arange(-max_offset_value, max_offset_value+1)[::-1]
        sort_idx = np.argsort(np.abs(idx_range))
        idx_range = idx_range[sort_idx]
        all_idx = np.array(
            np.meshgrid(idx_range, idx_range, idx_range)).T.reshape(-1, 3)
        all_idx = all_idx.astype(int)
        all_norm = np.linalg.norm(all_idx, axis=-1)
        for value in range(int(max_offset_value+1)):
            take_idx = np.where(all_norm <= value)[0]
            final_idx = all_idx[take_idx]
            offset_combination_dict[value] = final_idx
        return offset_combination_dict
    def create_offset_dict_fast(self):
        """
        Faster construction of the offset dictionary. The plain
        create_offset_dict is rigorous but slow; this version sorts all
        offsets by norm once and slices them with searchsorted instead of
        rescanning the full array for every radius.
        """
        offset_combination_dict = {}
        max_offset_value = np.round(np.max(self.vdw) / self.cache) + 1
        idx_range = np.arange(-max_offset_value, max_offset_value+1)[::-1]
        sort_idx = np.argsort(np.abs(idx_range))
        idx_range = idx_range[sort_idx]
        all_idx = np.array(
            np.meshgrid(idx_range, idx_range, idx_range)).T.reshape(-1, 3)
        all_idx = all_idx.astype(int)
        all_norm = np.linalg.norm(all_idx, axis=-1)
        sort_idx = np.argsort(all_norm, kind="mergesort")
        self.sort_idx = sort_idx
        all_idx = all_idx[sort_idx]
        all_norm = all_norm[sort_idx]
        prev_idx = 0
        for value in range(int(max_offset_value+1)):
            idx = np.searchsorted(all_norm[prev_idx:], value, side="right")
            idx += prev_idx
            offset_combination_dict[value] = all_idx[0:idx]
            prev_idx = idx
        return offset_combination_dict
    def calc_struct(self, struct):
        self.struct = struct
        volume = self.struct_to_volume(self.struct)
        total_volume,voxel_coords,cube_coords,coords,triangles = \
            self.marching_cubes(volume)
        self.struct.properties["Marching_Cubes_Volume"] = total_volume
        return total_volume
    def center_molecule(self, struct):
        """
        Centers the molecule on its center of mass.
        """
        mol_com = com(struct)
        geo = struct.get_geo_array()
        geo = geo - mol_com
        struct.from_geo_array(geo, struct.geometry["element"])
    def point_to_grid(self, points):
        """
        Returns the nearest point on the grid with respect to the argument.
        Also returns the index of this point with respect to the grid coords.

        Arguments
        ---------
        points: array
            2D array of points
        """
        if len(self.x_vals) == 0 or len(self.y_vals) == 0 or len(self.z_vals) == 0:
            raise Exception("Grid has not been built yet; call get_grid first.")
        points_on_grid = np.round(points / self.spacing)*self.spacing
        ### Compute index with respect to grid limits
        min_loc = np.array([self.x_vals[0], self.y_vals[0], self.z_vals[0]])
        temp_grid_coords = points_on_grid - min_loc
        grid_region_idx = np.round(temp_grid_coords / self.spacing)
        grid_region_idx = grid_region_idx.astype(int)
        return points_on_grid, grid_region_idx
    def sphere_to_grid(self, radius, center):
        """
        Returns how a new sphere would be added to the current grid.
        """
        spacing = self.spacing
        min_loc = np.array([self.x_vals[0], self.y_vals[0], self.z_vals[0]])
        center_on_grid = np.round(center / self.spacing)*self.spacing
        rad_spacing = np.round(radius / self.spacing).astype(int)
        all_idx = self.offset_combination_dict[rad_spacing+1]
        temp_grid_coords = all_idx*spacing
        temp_norm = np.linalg.norm(temp_grid_coords, axis=-1)
        final_idx = np.where(temp_norm < radius)[0]
        temp_grid_coords = temp_grid_coords[final_idx]
        ### 20200429 Corrected grid filling
        temp_grid_coords = temp_grid_coords + center_on_grid - min_loc
        grid_region_idx = np.round(temp_grid_coords / spacing)
        grid_region_idx = grid_region_idx.astype(int)
        return grid_region_idx
    def get_grid(self, struct=None, spacing=0):
        """
        Prepares the grid points in a numerically stable way about the origin.
        If the molecule is not centered at the origin, this will be corrected
        automatically.
        """
        geo = struct.get_geo_array()
        ele = struct.geometry["element"]
        struct_radii = np.array([self.vdw[atomic_numbers[x]] for x in ele])
        struct_centers = self.centers
        ### Get minimum and maximum positions that the grid should have
        min_pos = []
        for idx,radius in enumerate(struct_radii):
            temp_pos = struct_centers[idx] - radius - self.spacing
            temp_pos = (temp_pos / self.spacing - 1).astype(int)*self.spacing
            min_pos.append(temp_pos)
        max_pos = []
        for idx,radius in enumerate(struct_radii):
            temp_pos = struct_centers[idx] + radius + self.spacing
            temp_pos = (temp_pos / self.spacing + 1).astype(int)*self.spacing
            max_pos.append(temp_pos)
        min_pos = np.min(np.vstack(min_pos), axis=0)
        max_pos = np.max(np.vstack(max_pos), axis=0)
        ### Build grid out from the origin
        x_pos_num = np.abs(np.round(max_pos[0] / self.spacing).astype(int))
        x_neg_num = np.abs(np.round(min_pos[0] / self.spacing).astype(int))
        y_pos_num = np.abs(np.round(max_pos[1] / self.spacing).astype(int))
        y_neg_num = np.abs(np.round(min_pos[1] / self.spacing).astype(int))
        z_pos_num = np.abs(np.round(max_pos[2] / self.spacing).astype(int))
        z_neg_num = np.abs(np.round(min_pos[2] / self.spacing).astype(int))
        ### Using linspace instead of arange because arange is not
        ### numerically stable.
        x_grid_pos = np.linspace(0, max_pos[0], x_pos_num+1)
        x_grid_neg = np.linspace(min_pos[0], 0-self.spacing, x_neg_num)
        x_grid = np.hstack([x_grid_neg, x_grid_pos])
        y_grid_pos = np.linspace(0, max_pos[1], y_pos_num+1)
        y_grid_neg = np.linspace(min_pos[1], 0-self.spacing, y_neg_num)
        y_grid = np.hstack([y_grid_neg, y_grid_pos])
        z_grid_pos = np.linspace(0, max_pos[2], z_pos_num+1)
        z_grid_neg = np.linspace(min_pos[2], 0-self.spacing, z_neg_num)
        z_grid = np.hstack([z_grid_neg, z_grid_pos])
        self.x_vals = x_grid
        self.y_vals = y_grid
        self.z_vals = z_grid
        X,Y,Z = np.meshgrid(self.x_vals, self.y_vals, self.z_vals,
                            indexing="ij")
        self.grid_coords = np.c_[X.ravel(),
                                 Y.ravel(),
                                 Z.ravel()]
    def place_atom_centers(self, struct):
        """
        Places the centers of the atoms onto the grid. This is necessary to
        ensure numerical stability of the algorithm. While this is an
        approximation, even a coarse grid spacing, such as 0.05, introduces
        only a minimal amount of error. Stores radii and centers.
        """
        centers = struct.get_geo_array()
        ele = struct.geometry["element"]
        struct_radii = np.array([self.vdw[atomic_numbers[x]] for x in ele])
        ## Compute centers on grid
        grid_centers = []
        for idx,center in enumerate(centers):
            centered_on_grid = np.round(centers[idx] / self.spacing)*self.spacing
            grid_centers.append(centered_on_grid)
        ## Store radii and centers
        self.radii = struct_radii
        self.centers = np.vstack(grid_centers)
    def struct_to_volume(self, struct=None, spacing=0, center_com=True):
        if spacing == 0:
            spacing = self.spacing
        if struct is None:
            struct = self.struct
        if center_com:
            self.center_molecule(struct)
        self.place_atom_centers(struct)
        self.get_grid(struct)
        min_loc = np.array([self.x_vals[0], self.y_vals[0], self.z_vals[0]])
        volume = np.zeros((self.x_vals.shape[0],
                           self.y_vals.shape[0],
                           self.z_vals.shape[0]))
        for idx,center in enumerate(self.centers):
            ## Compute the idx to populate the x,y,z directions for the
            ## given radius
            rad = self.radii[idx]
            rad_spacing = np.round(rad / spacing).astype(int)
            ## Using the offsets for rad_spacing directly suffers from
            ## numerical errors, so take the offsets one spacing larger and
            ## filter them against the true radius.
            all_idx = self.offset_combination_dict[rad_spacing+1]
            temp_grid_coords = all_idx*spacing
            temp_norm = np.linalg.norm(temp_grid_coords, axis=-1)
            final_idx = np.where(temp_norm < rad)[0]
            temp_grid_coords = temp_grid_coords[final_idx]
            ### 20200429 Corrected grid filling
            temp_grid_coords = temp_grid_coords + self.centers[idx] - min_loc
            grid_region_idx = np.round(temp_grid_coords / spacing)
            grid_region_idx = grid_region_idx.astype(int)
            volume[grid_region_idx[:,0], grid_region_idx[:,1], grid_region_idx[:,2]] = 1
        return volume
    def struct_to_volume_colors(self, struct=None, spacing=0, center_com=True):
        if spacing == 0:
            spacing = self.spacing
        if struct is None:
            struct = self.struct
        if center_com:
            self.center_molecule(struct)
        self.place_atom_centers(struct)
        self.get_grid(struct)
        min_loc = np.array([self.x_vals[0], self.y_vals[0], self.z_vals[0]])
        volume = np.zeros((self.x_vals.shape[0],
                           self.y_vals.shape[0],
                           self.z_vals.shape[0]))
        ele = struct.geometry["element"]
        ele_colors = [jmol_colors[atomic_numbers[x]] for x in ele]
        colors = np.empty(volume.shape, dtype=object)
        for idx,center in enumerate(self.centers):
            ## Compute the idx to populate the x,y,z directions for the
            ## given radius
            rad = self.radii[idx]
            rad_spacing = np.round(rad / spacing).astype(int)
            ## Using the offsets for rad_spacing directly suffers from
            ## numerical errors, so take the offsets one spacing larger and
            ## filter them against the true radius.
            all_idx = self.offset_combination_dict[rad_spacing+1]
            temp_grid_coords = all_idx*spacing
            temp_norm = np.linalg.norm(temp_grid_coords, axis=-1)
            final_idx = np.where(temp_norm < rad)[0]
            temp_grid_coords = temp_grid_coords[final_idx]
            ### 20200429 Corrected grid filling
            temp_grid_coords = temp_grid_coords + self.centers[idx] - min_loc
            grid_region_idx = np.round(temp_grid_coords / spacing)
            grid_region_idx = grid_region_idx.astype(int)
            volume[grid_region_idx[:,0], grid_region_idx[:,1], grid_region_idx[:,2]] = 1
            current_color = ele_colors[idx]
            colors[grid_region_idx[:,0], grid_region_idx[:,1], grid_region_idx[:,2]] = to_hex(current_color)
        return volume,colors
    def marching_cubes(self, volume):
        start = time.time()
        X,Y,Z = np.meshgrid(self.x_vals, self.y_vals, self.z_vals,
                            indexing="ij")
        grid_point_reference = np.c_[X.ravel(),
                                     Y.ravel(),
                                     Z.ravel()]
        x_num,y_num,z_num = volume.shape
        ## Start by projecting down the Z direction because this is easiest
        ## based on the indexing scheme
        z_proj = np.arange(0, z_num-1)
        front_plane_top_left_idx = z_proj
        front_plane_bot_left_idx = front_plane_top_left_idx + 1
        ## Have to move 1 in the Y direction, which is the same as z_num
        back_plane_top_left_idx = z_proj + z_num
        back_plane_bot_left_idx = back_plane_top_left_idx + 1
        ## Have to move 1 in the X direction, which is the same as z_num*y_num
        front_plane_top_right_idx = z_proj + y_num*z_num
        front_plane_bot_right_idx = front_plane_top_right_idx + 1
        ## Have to move 1 in the Y direction, which is the same as z_num
        back_plane_top_right_idx = front_plane_top_right_idx + z_num
        back_plane_bot_right_idx = back_plane_top_right_idx + 1
        #### Now project over the Y direction
        y_proj = np.arange(0, y_num-1)[:,None]*(z_num)
        front_plane_top_left_idx = front_plane_top_left_idx + y_proj
        front_plane_bot_left_idx = front_plane_bot_left_idx + y_proj
        back_plane_top_left_idx = back_plane_top_left_idx + y_proj
        back_plane_bot_left_idx = back_plane_bot_left_idx + y_proj
        front_plane_top_right_idx = front_plane_top_right_idx + y_proj
        front_plane_bot_right_idx = front_plane_bot_right_idx + y_proj
        back_plane_top_right_idx = back_plane_top_right_idx + y_proj
        back_plane_bot_right_idx = back_plane_bot_right_idx + y_proj
        #### Lastly project in the X direction
        x_proj = np.arange(0, x_num-1)[:,None,None]*(y_num*z_num)
        front_plane_top_left_idx = front_plane_top_left_idx + x_proj
        front_plane_bot_left_idx = front_plane_bot_left_idx + x_proj
        back_plane_top_left_idx = back_plane_top_left_idx + x_proj
        back_plane_bot_left_idx = back_plane_bot_left_idx + x_proj
        front_plane_top_right_idx = front_plane_top_right_idx + x_proj
        front_plane_bot_right_idx = front_plane_bot_right_idx + x_proj
        back_plane_top_right_idx = back_plane_top_right_idx + x_proj
        back_plane_bot_right_idx = back_plane_bot_right_idx + x_proj
        voxel_idx = np.c_[front_plane_top_left_idx.ravel(),
                          front_plane_bot_left_idx.ravel(),
                          back_plane_bot_left_idx.ravel(),
                          back_plane_top_left_idx.ravel(),
                          front_plane_top_right_idx.ravel(),
                          front_plane_bot_right_idx.ravel(),
                          back_plane_bot_right_idx.ravel(),
                          back_plane_top_right_idx.ravel(),
                          ]
        voxel_mask = np.take(volume, voxel_idx)
        voxel_sum = np.sum(voxel_mask, axis=-1)
        voxel_surface_vertex_idx = np.where(np.logical_and(voxel_sum != 0,
                                                           voxel_sum != 8))[0]
        self.full_voxels = np.where(voxel_sum == 8)[0]
        ## Get only the non-zero points on the surface for visualization
        surface_vertex_idx = voxel_idx[voxel_surface_vertex_idx][
            voxel_mask[voxel_surface_vertex_idx].astype(bool)]
        surface_vertex = grid_point_reference[surface_vertex_idx]
        ## Get the voxels that correspond to the surface of the molecule
        surface_voxel = voxel_mask[voxel_surface_vertex_idx].astype(int)
        ## Get the corresponding grid_point_reference idx for each of the
        ## surface voxel vertices
        surface_voxel_vert = voxel_idx[voxel_surface_vertex_idx]
        voxel_coords = []
        cube_coords = []
        coords = []
        triangles = []
        total_volume = self.full_voxels.shape[0]*self.spacing*self.spacing*self.spacing
        # print("BEFORE LOOP: {}".format(time.time() - start))
        proj_total_time = 0
        inner_loop_time = 0
        for idx,entry in enumerate(surface_voxel):
            ### Get Cartesian coordinate indices
            temp_ref_idx = surface_voxel_vert[idx]
            ### Get populated coordinates
            voxel_coords.append(grid_point_reference[
                temp_ref_idx[entry.astype(bool)]])
            ### Get Cartesian cube vertices and edges
            temp_vertices = grid_point_reference[temp_ref_idx]
            temp_edges = compute_edge_sites(temp_vertices)
            inner_loop_start = time.time()
            ### Perform projections onto sphere surfaces for each edge point
            for edge_idx,edge in enumerate(temp_edges):
                ### Project onto the surface of each relevant sphere
                temp_projected_edge_list = []
                temp_projected_centers = []
                ### First choose relevant spheres
                edge_to_center = np.linalg.norm(edge - self.centers, axis=-1)
                edge_to_center_inside = edge_to_center - self.radii
                proj_sphere_idx = np.where(np.abs(edge_to_center_inside) <=
                                           (self.spacing*2))[0]
                for r_idx in proj_sphere_idx:
                    ## Also need the center of the atom for proper projection
                    temp_center = self.centers[r_idx]
                    temp_projected_centers.append(temp_center)
                    radius = self.radii[r_idx]
                    proj_edge_start = time.time()
                    ## Get the projected edge for this sphere
                    temp_proj_edge = numba_proj_edge(edge,
                                                     edge_idx,
                                                     temp_vertices,
                                                     radius,
                                                     temp_center)
                    proj_total_time += time.time() - proj_edge_start
                    ## If there was no change, do not append
                    if np.linalg.norm(temp_proj_edge - edge) < 1e-6:
                        continue
                    ## Append
                    temp_projected_edge_list.append(temp_proj_edge)
                ### Choose among the candidate projections
                if len(temp_projected_edge_list) == 0:
                    continue
                elif len(temp_projected_edge_list) == 1:
                    choice_idx = 0
                else:
                    cdist_distances = cdist(temp_projected_edge_list,
                                            temp_projected_centers)
                    ## Choose the one that maximizes distances
                    cdist_sum = np.sum(cdist_distances, axis=-1)
                    choice_idx = np.argmax(cdist_sum)
                temp_edges[edge_idx] = temp_projected_edge_list[choice_idx]
            inner_loop_time += time.time() - inner_loop_start
            ### Get the tri_idx for this surface voxel
            triangles_bool = tri_connectivity[tostring(entry)].astype(bool)
            array_to_mask = np.repeat(np.arange(0,12)[None,:],
                                      triangles_bool.shape[0],
                                      axis=0)
            tri_idx = array_to_mask[triangles_bool].reshape(-1,3)
            ### Build triangles for grid point reference
            tri_idx = tri_idx + len(coords)*12
            ### Save results for plotting
            cube_coords.append(temp_vertices)
            coords.append(temp_edges)
            triangles.append(tri_idx)
            ## Compute volume with the projected edges
            total_volume += get_volume(entry, temp_vertices, temp_edges)
        ### For debugging purposes
        self.o_voxel_coords = voxel_coords.copy()
        self.o_cube_coords = cube_coords.copy()
        self.o_coords = coords.copy()
        self.o_triangles = triangles.copy()
        self.surface_voxel = surface_voxel
        self.surface_voxel_vert = surface_voxel_vert
        voxel_coords = np.vstack(voxel_coords)
        cube_coords = np.vstack(cube_coords)
        coords = np.vstack(coords)
        triangles = np.vstack(triangles)
        # print("AFTER LOOP: {}".format(time.time() - start))
        # print("PROJ TOTAL TIME: {}".format(proj_total_time))
        # print("INNER LOOP TIME: {}".format(inner_loop_time))
        return total_volume,voxel_coords,cube_coords,coords,triangles
    def proj_edge(self, edge, edge_idx, vertices, radius, center):
        """
        Projects an edge point onto the sphere surface. Each edge idx has
        exactly one free coordinate (X, Y, or Z); the other two are fixed
        by the voxel grid, so the sphere equation is solved for the free
        coordinate only.
        """
        ## Free axis (0=X, 1=Y, 2=Z) for edges 0-11
        proj_axis = [2, 1, 2, 1, 0, 0, 0, 0, 2, 1, 2, 1]
        ## Vertex pair bounding each edge (low, high) along the free axis
        bounds = [(0, 1), (0, 3), (3, 2), (1, 2),
                  (0, 4), (3, 7), (2, 6), (1, 5),
                  (4, 5), (4, 7), (7, 6), (5, 6)]
        axis = proj_axis[edge_idx]
        original = edge[axis]
        ## Solve the sphere equation for the free coordinate
        proj2 = radius*radius
        for i in range(3):
            if i != axis:
                proj2 -= np.square(edge[i] - center[i])
        if proj2 < 0:
            proj2 = proj2*-1
        proj = np.sqrt(proj2)
        ### 20200429 Fixed decision function: choose the sphere intersection
        ### closest to the original edge position
        temp_pos_dir = abs((proj + center[axis]) - original)
        temp_neg_dir = abs((-proj + center[axis]) - original)
        if temp_neg_dir < temp_pos_dir:
            proj = proj*-1 + center[axis]
        else:
            proj = proj + center[axis]
        ## Check if the projection is within the spacing of the grid.
        ## If it's outside the edge's vertex pair, this cannot be a valid
        ## projection and the value is set back to the original position.
        if proj < vertices[bounds[edge_idx][0]][axis]:
            proj = original
        elif proj > vertices[bounds[edge_idx][1]][axis]:
            proj = original
        ### Return final projection
        ret_edge = edge.copy()
        ret_edge[axis] = proj
        return ret_edge
    def marching_cubes_basic(self, volume):
        X,Y,Z = np.meshgrid(self.x_vals, self.y_vals, self.z_vals,
                            indexing="ij")
        grid_point_reference = np.c_[X.ravel(),
                                     Y.ravel(),
                                     Z.ravel()]
        x_num,y_num,z_num = volume.shape
        ## Start by projecting down the Z direction because this is easiest
        ## based on the indexing scheme
        z_proj = np.arange(0, z_num-1)
        front_plane_top_left_idx = z_proj
        front_plane_bot_left_idx = front_plane_top_left_idx + 1
        ## Have to move 1 in the Y direction, which is the same as z_num
        back_plane_top_left_idx = z_proj + z_num
        back_plane_bot_left_idx = back_plane_top_left_idx + 1
        ## Have to move 1 in the X direction, which is the same as z_num*y_num
        front_plane_top_right_idx = z_proj + y_num*z_num
        front_plane_bot_right_idx = front_plane_top_right_idx + 1
        ## Have to move 1 in the Y direction, which is the same as z_num
        back_plane_top_right_idx = front_plane_top_right_idx + z_num
        back_plane_bot_right_idx = back_plane_top_right_idx + 1
        #### Now project over the Y direction
        y_proj = np.arange(0, y_num-1)[:,None]*(z_num)
        front_plane_top_left_idx = front_plane_top_left_idx + y_proj
        front_plane_bot_left_idx = front_plane_bot_left_idx + y_proj
        back_plane_top_left_idx = back_plane_top_left_idx + y_proj
        back_plane_bot_left_idx = back_plane_bot_left_idx + y_proj
        front_plane_top_right_idx = front_plane_top_right_idx + y_proj
        front_plane_bot_right_idx = front_plane_bot_right_idx + y_proj
        back_plane_top_right_idx = back_plane_top_right_idx + y_proj
        back_plane_bot_right_idx = back_plane_bot_right_idx + y_proj
        #### Lastly project in the X direction
        x_proj = np.arange(0, x_num-1)[:,None,None]*(y_num*z_num)
        front_plane_top_left_idx = front_plane_top_left_idx + x_proj
        front_plane_bot_left_idx = front_plane_bot_left_idx + x_proj
        back_plane_top_left_idx = back_plane_top_left_idx + x_proj
        back_plane_bot_left_idx = back_plane_bot_left_idx + x_proj
        front_plane_top_right_idx = front_plane_top_right_idx + x_proj
        front_plane_bot_right_idx = front_plane_bot_right_idx + x_proj
        back_plane_top_right_idx = back_plane_top_right_idx + x_proj
        back_plane_bot_right_idx = back_plane_bot_right_idx + x_proj
        voxel_idx = np.c_[front_plane_top_left_idx.ravel(),
                          front_plane_bot_left_idx.ravel(),
                          back_plane_bot_left_idx.ravel(),
                          back_plane_top_left_idx.ravel(),
                          front_plane_top_right_idx.ravel(),
                          front_plane_bot_right_idx.ravel(),
                          back_plane_bot_right_idx.ravel(),
                          back_plane_top_right_idx.ravel(),
                          ]
        voxel_mask = np.take(volume, voxel_idx)
        voxel_sum = np.sum(voxel_mask, axis=-1)
        voxel_surface_vertex_idx = np.where(np.logical_and(voxel_sum != 0,
                                                           voxel_sum != 8))[0]
        self.full_voxels = np.where(voxel_sum == 8)[0]
        ## Get only the non-zero points on the surface for visualization
        surface_vertex_idx = voxel_idx[voxel_surface_vertex_idx][
            voxel_mask[voxel_surface_vertex_idx].astype(bool)]
        surface_vertex = grid_point_reference[surface_vertex_idx]
        #### Surface triangulation
        ## Get the voxels that correspond to the surface of the molecule
        surface_voxel = voxel_mask[voxel_surface_vertex_idx].astype(int)
        ## Get the corresponding grid_point_reference idx for each of the
        ## surface voxel vertices
        surface_voxel_vert = voxel_idx[voxel_surface_vertex_idx]
        voxel_coords = []
        cube_coords = []
        coords = []
        triangles = []
        total_volume = self.full_voxels.shape[0]*self.spacing*self.spacing*self.spacing
        for idx,entry in enumerate(surface_voxel):
            ### Get Cartesian coordinate indices
            temp_ref_idx = surface_voxel_vert[idx]
            ### Get populated coordinates
            voxel_coords.append(grid_point_reference[
                temp_ref_idx[entry.astype(bool)]])
            ### Get Cartesian cube vertices and edges
            temp_vertices = grid_point_reference[temp_ref_idx]
            temp_edges = compute_edge_sites(temp_vertices)
            ### Get the tri_idx for this surface voxel
            triangles_bool = tri_connectivity[tostring(entry)].astype(bool)
            array_to_mask = np.repeat(np.arange(0,12)[None,:],
                                      triangles_bool.shape[0],
                                      axis=0)
            tri_idx = array_to_mask[triangles_bool].reshape(-1,3)
            ### Build triangles for grid point reference
            tri_idx = tri_idx + len(coords)*12
            ### Save results for plotting
            cube_coords.append(temp_vertices)
            coords.append(temp_edges)
            triangles.append(tri_idx)
            adjusted_vol = tri_volume[tostring(entry)]
            total_volume += (adjusted_vol*self.spacing*self.spacing*self.spacing)
        ### For debugging purposes
        self.o_voxel_coords = voxel_coords.copy()
        self.o_cube_coords = cube_coords.copy()
        self.o_coords = coords.copy()
        self.surface_voxel = surface_voxel
        voxel_coords = np.vstack(voxel_coords)
        cube_coords = np.vstack(cube_coords)
        coords = np.vstack(coords)
        triangles = np.vstack(triangles)
        return total_volume,voxel_coords,cube_coords,coords,triangles
@jit(nopython=True)
def numba_handle_edges(temp_edges,
                       temp_vertices,
                       centers,
                       radii,
                       spacing):
    ### MUCH FASTER BUT NOT TESTED
    ### Perform projections onto sphere surfaces for each edge point
    for edge_idx,edge in enumerate(temp_edges):
        ### First choose relevant spheres
        temp = edge - centers
        edge_to_center = numba_norm(temp)
        edge_to_center_inside = edge_to_center - radii
        proj_sphere_idx = np.where(np.abs(edge_to_center_inside) <=
                                   (spacing*2))[0]
        ### Project onto the surface of each relevant sphere
        temp_projected_edge_list = np.zeros((len(proj_sphere_idx), 3))
        temp_projected_centers = np.zeros((len(proj_sphere_idx), 3))
        for list_idx,r_idx in enumerate(proj_sphere_idx):
            ## Need the center of the atom for proper projection
            temp_center = centers[r_idx]
            radius = radii[r_idx]
            temp_proj_edge = numba_proj_edge(edge,
                                             edge_idx,
                                             temp_vertices,
                                             radius,
                                             temp_center)
            ## Index the preallocated arrays with list_idx: r_idx indexes
            ## the full centers array and would write out of bounds here.
            temp_projected_centers[list_idx] = temp_center
            temp_projected_edge_list[list_idx] = temp_proj_edge
        if len(temp_projected_edge_list) == 0:
            continue
        elif len(temp_projected_edge_list) == 1:
            choice_idx = 0
        else:
            ## Distance of every projected edge to every candidate center.
            ## Expand the edges (not the centers) so that summing over the
            ## last axis yields one total per projected edge, matching the
            ## cdist(edges, centers) behavior of the non-jitted loop.
            temp = np.expand_dims(temp_projected_edge_list, 1) - temp_projected_centers
            cdist_distances = numba_norm_projected(temp)
            ## Choose the projection that maximizes distance to all centers
            cdist_sum = np.sum(cdist_distances, axis=-1)
            choice_idx = np.argmax(cdist_sum)
        temp_edges[edge_idx] = temp_projected_edge_list[choice_idx]
    return temp_edges
@jit(nopython=True)
def numba_norm(matrix):
    result = np.zeros((matrix.shape[0]))
    for idx,entry in enumerate(matrix):
        result[idx] = np.sqrt(np.sum(np.square(entry)))
    return result
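# numba_norm computes per-row Euclidean norms; the plain-NumPy equivalent
# (useful as a reference when checking the jitted version) is a norm taken
# along the last axis:
_m_demo = np.array([[3.0, 4.0, 0.0],
                    [1.0, 2.0, 2.0]])
_norm_demo = np.sqrt(np.sum(np.square(_m_demo), axis=-1))   # -> [5.0, 3.0]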
@jit(nopython=True)
def numba_norm_projected(matrix):
    result = np.zeros((matrix.shape[0], matrix.shape[1]))
    for idx1,entry1 in enumerate(matrix):
        for idx2,entry2 in enumerate(entry1):
            result[idx1,idx2] = np.sqrt(np.sum(np.square(entry2)))
    return result
@jit(nopython=True)
def numba_proj_edge(edge, edge_idx, vertices, radius, center):
    ## Each edge idx has exactly one free coordinate (0=X, 1=Y, 2=Z) to
    ## project onto the sphere surface; lo_v/hi_v are the cube vertices
    ## bounding the edge along that axis. The branch chain replaces the
    ## original per-edge blocks without changing their behavior.
    if edge_idx == 0:
        axis, lo_v, hi_v = 2, 0, 1
    elif edge_idx == 1:
        axis, lo_v, hi_v = 1, 0, 3
    elif edge_idx == 2:
        axis, lo_v, hi_v = 2, 3, 2
    elif edge_idx == 3:
        axis, lo_v, hi_v = 1, 1, 2
    elif edge_idx == 4:
        axis, lo_v, hi_v = 0, 0, 4
    elif edge_idx == 5:
        axis, lo_v, hi_v = 0, 3, 7
    elif edge_idx == 6:
        axis, lo_v, hi_v = 0, 2, 6
    elif edge_idx == 7:
        axis, lo_v, hi_v = 0, 1, 5
    elif edge_idx == 8:
        axis, lo_v, hi_v = 2, 4, 5
    elif edge_idx == 9:
        axis, lo_v, hi_v = 1, 4, 7
    elif edge_idx == 10:
        axis, lo_v, hi_v = 2, 7, 6
    else:
        axis, lo_v, hi_v = 1, 5, 6
    original = edge[axis]
    ## Solve the sphere equation for the free coordinate
    proj2 = radius*radius
    for i in range(3):
        if i != axis:
            proj2 -= (edge[i] - center[i])**2
    if proj2 < 0:
        proj2 = proj2*-1
    proj = np.sqrt(proj2)
    ### 20200429 Fixed decision function: choose the sphere intersection
    ### closest to the original edge position
    temp_pos_dir = abs((proj + center[axis]) - original)
    temp_neg_dir = abs((-proj + center[axis]) - original)
    if temp_neg_dir < temp_pos_dir:
        proj = proj*-1 + center[axis]
    else:
        proj = proj + center[axis]
    ## Check if the projection is within the spacing of the grid.
    ## If it's outside the edge's vertex pair, this cannot be a valid
    ## projection and the value is set back to the original edge position.
    if proj < vertices[lo_v][axis]:
        proj = original
    elif proj > vertices[hi_v][axis]:
        proj = original
    ### Return final projection
    ret_edge = edge.copy()
    ret_edge[axis] = proj
    return ret_edge
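# The core of the edge projection above: hold two coordinates fixed and
# solve the sphere equation for the remaining one. For a unit sphere at
# the origin with y = 0.6 and z = 0 held fixed, the surface sits at
# x = +/-0.8 (values chosen only for illustration):
_r_demo, _y_demo, _z_demo = 1.0, 0.6, 0.0
_x_demo = np.sqrt(_r_demo**2 - _y_demo**2 - _z_demo**2)   # ~0.8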
if __name__ == "__main__":
    import json
    from scipy.optimize import linear_sum_assignment
    import time

    # s = read("/Users/ibier/Software/PyMoVE/examples/Example_Structures/molecules/rdx.xyz")
    # mc = MarchingCubes(spacing=0.3)
    # voxels,colors = mc.struct_to_volume_colors(s)
    # spacing = 0.01
    #
    # start = time.time()
    # m = MarchingCubes(cache=spacing)
    # end = time.time()
    #
    # print("Class Construction: {}".format(end - start))
#### ---- pylattice/model_subclasses.py ----
] | null | null | null | import numpy as np
from . import lattice_model
from numpy import pi
class SourceModel(lattice_model.LatticeModel):
""" A subclass representing the advection-diffusion
model with a large-scale source
Parameters
----------
source: flag enabling the source term (boolean)
"""
def __init__(self, source=True, **kwargs):
self.source = source
self.Gy = False
super(SourceModel, self).__init__(**kwargs)
def _advect(self,direction='x',n=1):
""" Advect th on a lattice given u and v
and the current index arrays ix, iy
Parameters
----------
direction: direction along which to advect ('x' or 'y')
n: number of substeps;
n=1 performs the full advection-diffusion step,
n=2 half of it, etc. """
if direction == 'x':
ix_new = self.ix.copy()
dindx = -np.round(self.u*self.dt_2/n/self.dx).astype(int)
ix_new = self.ix + dindx
ix_new[ix_new<0] = ix_new[ix_new<0] + self.nx
ix_new[ix_new>self.nx-1] = ix_new[ix_new>self.nx-1] - self.nx
self.th = self.th[self.iy,ix_new]
elif direction == 'y':
iy_new = self.iy.copy()
dindy = -np.round(self.v*self.dt_2/n/self.dy).astype(int)
iy_new = self.iy + dindy
iy_new[iy_new<0] = iy_new[iy_new<0] + self.ny
iy_new[iy_new>self.ny-1] = iy_new[iy_new>self.ny-1] - self.ny
self.th = self.th[iy_new,self.ix]
# advection + source
#y = self.y[...,np.newaxis] + np.zeros(self.x.size)[np.newaxis,...]
#v = self.v + np.zeros(self.y.size)[...,np.newaxis]
#sy = np.sin(self.dl*y)
#syn = np.sin(self.dl*(y+v*self.dt_2/n))
#v = np.ma.masked_array(v, v == 0.)
#self.forcey = (sy[iy_new,self.ix]-sy)/(self.dl*v)
#self.forcey = (syn-sy)/(self.dl*v)
#self.forcey[v.mask] = (self.dt_2/n)*np.cos(self.dl*y[v.mask])
#self.th = self.th[iy_new,self.ix] + self.forcey
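The periodic index shift used in `_advect` can be sketched in isolation. This is a standalone illustration with made-up grid values (`nx`, `dx`, `dt`, `u` are stand-ins, not the model's attributes); it also checks that the two boolean wrap fix-ups are equivalent to a single modulo when the shift is smaller than the grid size:

```python
import numpy as np

# Stand-in grid and velocity (hypothetical values)
nx, dx, dt = 8, 1.0, 2.0
u = 3.0
ix = np.arange(nx)

# Upstream shift, as in _advect
dindx = -int(np.round(u * dt / dx))
ix_new = ix + dindx

# The two boolean corrections used in _advect...
ix_new[ix_new < 0] = ix_new[ix_new < 0] + nx
ix_new[ix_new > nx - 1] = ix_new[ix_new > nx - 1] - nx

# ...are equivalent to a single modulo when |dindx| < nx
assert np.array_equal(ix_new, (ix + dindx) % nx)
```

The boolean form assumes at most one wrap per point, i.e. the CFL-like condition |u·dt/dx| < nx.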
def _source(self,direction='x',n=1):
if direction == 'x':
self.th += (self.dt/n)*np.cos(self.dl*self.y)[...,np.newaxis]
elif direction == 'y':
# a brutal way
self.th += (self.dt/n)*np.cos(self.dl*self.y)[...,np.newaxis]
#pass
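The `[...,np.newaxis]` in `_source` broadcasts a y-only forcing profile across every x column of the 2D field. A minimal sketch with stand-in grid values (not the model's attributes):

```python
import numpy as np

# Stand-in grid (hypothetical values)
ny, nx = 4, 5
dl, dt = 0.5, 0.1
y = np.linspace(0.0, 1.0, ny)
th = np.zeros((ny, nx))

# As in _source: a (ny,1) column vector is broadcast across all nx columns
th += dt * np.cos(dl * y)[..., np.newaxis]

# Every column receives the same y-profile
assert np.allclose(th[:, 0], th[:, -1])
```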
## diagnostic methods
def _initialize_nakamura(self):
self.Lmin2 = self.Lx**2
# this 2 is arbitrary here...
thm = np.cos(self.dl*self.y)/(self.D*(self.dl**2))
thmin,thmax = thm.min(),thm.max()
self.dth = 0.1
self.dth2 = self.dth**2
self.TH = np.arange(thmin+self.dth/2,thmax-self.dth/2,self.dth)
self.Leq2 = np.empty(self.TH.size)
self.I1 = np.empty(self.TH.size)
self.I2 = np.empty(self.TH.size)
self.L = np.empty(self.TH.size)
def _calc_Leq2(self):
th = self.th
th = np.vstack([(th[self.nx-self.nx//self.npad:]),th,\
th[:self.nx//self.npad]])
gradth2 = np.vstack([self.gradth2[self.nx-self.nx//self.npad:],\
self.gradth2,self.gradth2[:self.nx//self.npad]])
gradth = np.sqrt(gradth2)
# parallelize this...
for i in range(self.TH.size):
self.fth2 = th<=self.TH[i]+self.dth/2
self.fth1 = th<=self.TH[i]-self.dth/2
A2 = self.dS*self.fth2.sum()
A1 = self.dS*self.fth1.sum()
self.dA = A2-A1
self.G2 = (gradth2[self.fth2]*self.dS).sum()-\
(gradth2[self.fth1]*self.dS).sum()
self.Leq2[i] = self.G2*self.dA/self.dth2
self.L[i] = ((gradth[self.fth2]*self.dS).sum()-\
(gradth[self.fth1]*self.dS).sum())/self.dth
self.I1[i] = self.G2/self.dth
self.I2[i] = self.dA/self.dth
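The contour-area bookkeeping inside `_calc_Leq2` reduces to boolean masks: the area enclosed below a tracer contour is the cell area times the count of cells under that level. A minimal sketch with a toy field (all values are stand-ins, not the model's):

```python
import numpy as np

th = np.array([[0.0, 1.0],
               [2.0, 3.0]])
dS = 0.25                # stand-in grid-cell area
TH, dth = 1.5, 1.0       # contour value and contour increment

# Area enclosed below each of the two bracketing contour levels
A2 = dS * (th <= TH + dth / 2).sum()   # cells with th <= 2.0
A1 = dS * (th <= TH - dth / 2).sum()   # cells with th <= 1.0
dA = A2 - A1                           # area between the two contours
assert dA == 0.25
```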
class GyModel(lattice_model.LatticeModel):
""" A subclass representing the advection-diffusion
model with a basic state sustained by a constant
linear mean gradient """
def __init__(self, G=1., **kwargs):
self.G = G
self.Gy = True
super(GyModel, self).__init__(**kwargs)
def _advect(self,direction='x',n=1):
""" Advect th on a lattice given u and v
and the current index arrays ix, iy
Parameters
----------
direction: direction along which to advect ('x' or 'y')
n: number of substeps;
n=1 performs the full advection-diffusion step,
n=2 half of it, etc. """
if direction == 'x':
ix_new = self.ix.copy()
dindx = -np.round(self.u*self.dt_2/n/self.dx).astype(int)
ix_new = self.ix + dindx
ix_new[ix_new<0] = ix_new[ix_new<0] + self.nx
ix_new[ix_new>self.nx-1] = ix_new[ix_new>self.nx-1] - self.nx
self.th = self.th[self.iy,ix_new]
elif direction == 'y':
iy_new = self.iy.copy()
dindy = -np.round(self.v*self.dt_2/n/self.dy).astype(int)
iy_new = self.iy + dindy
iy_new[iy_new<0] = iy_new[iy_new<0] + self.ny
iy_new[iy_new>self.ny-1] = iy_new[iy_new>self.ny-1] - self.ny
self.th = self.th[iy_new,self.ix] + self.G*self.v*self.dt_2/n
def _source(self,direction='x',n=1):
pass
## diagnostic methods
def _initialize_nakamura(self):
self.Lmin2 = self.Lx**2
th = self.G*self.y
#thmin,thmax = th.min(),th.max()
thmin,thmax = -10*pi,10*pi
self.dth = 0.2
self.dth2 = self.dth**2
#self.TH = np.arange(thmin+self.dth/2,thmax-self.dth/2,self.dth)
self.TH = np.arange(thmin,thmax,self.dth)
self.Leq2 = np.empty(self.TH.size)
self.I1 = np.empty(self.TH.size)
self.I2 = np.empty(self.TH.size)
self.L = np.empty(self.TH.size)
self.A = np.empty(self.TH.size)
def _calc_Leq2(self):
th = self.th + self.G*self.y[...,np.newaxis]
th = np.vstack([(th[self.nx-self.nx//self.npad:]-2*pi),th,\
th[:self.nx//self.npad]+2*pi])
gradth2 = np.vstack([self.gradth2[self.nx-self.nx//self.npad:],\
self.gradth2,self.gradth2[:self.nx//self.npad]])
#gradth2 = self.gradth2
gradth = np.sqrt(gradth2)
# parallelize this...
for i in range(self.TH.size):
#self.fth2 = th<=self.TH[i]+self.dth/2
#self.fth1 = th<=self.TH[i]-self.dth/2
self.fth2 = th<=self.TH[i]+self.dth
self.fth1 = th<=self.TH[i]
A2 = self.dS*self.fth2.sum()
A1 = self.dS*self.fth1.sum()
self.dA = A2-A1
self.G2 = (gradth2[self.fth2]*self.dS).sum()-\
(gradth2[self.fth1]*self.dS).sum()
self.Leq2[i] = self.G2*self.dA/self.dth2
self.I1[i] = self.G2/self.dth
self.I2[i] = self.dA/self.dth
self.L[i] = ((gradth[self.fth2]*self.dS).sum()-\
(gradth[self.fth1]*self.dS).sum())/self.dth
self.A[i] = A2
# File: TwitterBot/__init__.py (repo: shogo82148/JO_RI_bot, license: MIT)
# -*- coding:utf-8 -*-
from .BaseBot import *
# File: gym_mnist_pair/envs/__init__.py (repo: siavashk/gym-mnist-pair, license: MIT)
from gym_mnist_pair.envs.mnist_pair import MnistPairEnv
# File: KernelModel/__init__.py (repo: L-F-A/Machine-Learning, license: MIT)
from .Kernels import *
from .Ridge_Kernel import *
from .SVM_Kernel import *
# File: optimization/first_sdEta_mjj_optimization/sdEta_mistake_analyses/sdEta_mmjj_gridsearch/analysis_deltaeta6.1_mmjj_750/Output/Histos/MadAnalysis5job_0/selection_4.py (repo: sheride/axion_pheno, license: MIT)
def selection_4():
# Library import
import numpy
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
# Library version
matplotlib_version = matplotlib.__version__
numpy_version = numpy.__version__
# Histo binning
xBinning = numpy.linspace(-8.0,8.0,161,endpoint=True)
# Creating data sequence: middle of each bin
xData = numpy.array([-7.95,-7.85,-7.75,-7.65,-7.55,-7.45,-7.35,-7.25,-7.15,-7.05,-6.95,-6.85,-6.75,-6.65,-6.55,-6.45,-6.35,-6.25,-6.15,-6.05,-5.95,-5.85,-5.75,-5.65,-5.55,-5.45,-5.35,-5.25,-5.15,-5.05,-4.95,-4.85,-4.75,-4.65,-4.55,-4.45,-4.35,-4.25,-4.15,-4.05,-3.95,-3.85,-3.75,-3.65,-3.55,-3.45,-3.35,-3.25,-3.15,-3.05,-2.95,-2.85,-2.75,-2.65,-2.55,-2.45,-2.35,-2.25,-2.15,-2.05,-1.95,-1.85,-1.75,-1.65,-1.55,-1.45,-1.35,-1.25,-1.15,-1.05,-0.95,-0.85,-0.75,-0.65,-0.55,-0.45,-0.35,-0.25,-0.15,-0.05,0.05,0.15,0.25,0.35,0.45,0.55,0.65,0.75,0.85,0.95,1.05,1.15,1.25,1.35,1.45,1.55,1.65,1.75,1.85,1.95,2.05,2.15,2.25,2.35,2.45,2.55,2.65,2.75,2.85,2.95,3.05,3.15,3.25,3.35,3.45,3.55,3.65,3.75,3.85,3.95,4.05,4.15,4.25,4.35,4.45,4.55,4.65,4.75,4.85,4.95,5.05,5.15,5.25,5.35,5.45,5.55,5.65,5.75,5.85,5.95,6.05,6.15,6.25,6.35,6.45,6.55,6.65,6.75,6.85,6.95,7.05,7.15,7.25,7.35,7.45,7.55,7.65,7.75,7.85,7.95])
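The long hard-coded `xData` array is just the midpoints of the `xBinning` edges; a short sanity sketch showing how the same centers can be derived (this derived form is an observation, not part of the generated MadAnalysis5 output):

```python
import numpy as np

# Bin edges as above, and their midpoints
xBinning = np.linspace(-8.0, 8.0, 161, endpoint=True)
xData = 0.5 * (xBinning[:-1] + xBinning[1:])

assert xData.size == 160
assert np.isclose(xData[0], -7.95) and np.isclose(xData[-1], 7.95)
```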
# Creating weights for histo: y5_ETA_0
y5_ETA_0_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,3.5823233265,3.9712613928,4.9497477533,5.82997499809,6.75114620776,7.90158122073,8.87188038825,10.43172705,11.4675301613,12.8799889494,14.976159151,16.8348735563,18.7836558843,20.4008184968,23.3935959291,26.8285329821,29.3996187762,33.874450937,36.8876963517,41.5426843579,46.4596801393,53.0224745086,58.0745901741,65.0058642273,73.3741770476,81.6237699698,88.4240441354,91.662441357,95.0359584626,96.1700374896,96.2519174193,96.677717054,92.1864809073,88.4936440757,82.8069289547,76.9933339425,69.9801799595,61.8247469566,53.9763936902,47.5855191734,40.5396052185,33.7270630634,27.2788805957,22.2185929373,18.1777324042,14.5380915268,11.4388701859,8.7244925147,6.99269400052,5.19129954605,5.2731794758,7.01316598296,8.9865162899,11.2505423474,13.7888761696,18.3537802531,22.316852853,27.0987407503,33.8785429334,39.9623537138,46.7830798618,54.5127132301,62.4183864473,69.3046605391,77.4723335315,82.9256888528,88.6123639738,91.9367611216,95.6746379146,96.1536375037,96.3542773315,94.5733588595,93.2141200257,89.4884832221,82.5858491443,74.2052563346,65.2679040025,59.2741491449,52.6458348318,47.028759651,41.997123968,37.0596482042,33.2685274568,29.6534505584,26.6402051437,23.3485599678,21.1131898856,18.7959398738,17.092801335,14.8983712177,13.1829486895,11.8646538206,10.3007151624,9.06430022316,7.96708916453,6.8043661621,5.83816299107,5.17082756362,3.99173177524,3.65192286678,0.00409408448742,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y5_ETA_1
y5_ETA_1_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,13.7298594094,16.1253300979,18.4456910989,19.7567679269,23.670458917,24.7599038848,27.5072857507,29.6865041486,30.5092137908,31.8957407855,34.7719519282,35.8795504401,38.4309405242,39.0406905089,41.1292733728,42.952518878,43.025541629,44.9361500634,45.16443128,48.7608518359,49.7275721168,50.3258539646,50.2711369718,52.7100487867,51.6979847021,51.4511573892,51.4314496583,51.8824843132,49.3286107346,48.7836439068,48.9812419488,48.4790953738,47.5907253403,45.3463272274,43.910987548,45.5257797367,42.3934922666,42.0641488445,39.473327276,39.8530616036,39.6928321433,38.662590481,36.656383529,36.0853920818,36.3516667593,37.1075343474,35.043742367,35.4044979895,36.6324658743,35.4179008488,35.5880122141,34.7729933937,35.1511655231,35.7200580169,36.587546668,37.2888454719,37.312847245,37.9214235783,37.5811167295,39.7174547901,39.6338050858,42.9117014434,43.2460919674,42.8560230979,43.4601932312,43.3508794147,45.9898727662,46.6902983413,47.589924213,48.2471289718,49.5863333784,50.7861416395,50.4935699593,49.6956872513,50.7023036703,51.8224798804,51.2857646636,50.5198869902,49.5866538294,49.4055189528,48.1014439771,45.8801183297,44.9177241361,41.9193050335,41.7402931442,41.3258299493,40.1286253519,38.8483598788,37.6436407075,33.7304704502,32.2355629524,30.77628559,28.6504583192,26.976110329,25.8815581747,23.4293997204,20.9213586331,18.5162543891,15.5892758111,13.5610338535,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y5_ETA_2
y5_ETA_2_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,7.09766081089,9.53757442876,12.8609240681,17.580739627,21.9474873795,26.3349783047,31.7365211254,35.1601652442,43.071958393,48.264156322,53.1519006731,59.7487254311,65.5026831855,71.668859845,77.8504905814,84.0969953837,87.1667370108,93.2397346776,98.2009065542,104.215558883,105.891871688,112.178375278,115.210762666,117.929233952,119.939049042,119.54682953,119.414064961,120.071813611,119.315183662,121.024759922,119.405428859,120.267799404,118.302321813,115.228530722,115.360551512,113.051200051,112.651005574,108.182463388,110.542433149,107.586820292,106.072775294,103.832967178,106.176119267,103.141913753,102.852335757,103.009892963,101.496261175,100.769299747,103.002827061,101.135982843,101.666338662,102.601516916,100.903179985,101.003176953,102.097069403,101.083422454,103.081171793,103.82366994,105.848980427,106.352725482,106.231324205,109.786546979,110.622265439,111.153902211,111.375589703,114.840030106,115.199440695,116.201972279,117.097193067,120.089003173,119.207129087,120.058838798,119.305142644,118.925071524,118.156086578,117.400985508,117.823121468,115.54116587,112.146434098,106.466688969,101.29395326,96.6578128495,92.9117280948,86.7782777156,82.7009633354,76.0119926111,72.3611529022,67.0775941072,59.3784474029,53.9643140548,49.1759055346,41.7771216233,36.304527238,32.1384758791,25.341405127,21.2338891832,16.2654654604,12.7307296689,9.33782428743,6.42531342986,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y5_ETA_3
y5_ETA_3_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.429046280548,0.703988904505,0.946132002939,1.40239543832,2.13985337907,2.88179436041,4.86808237277,7.01905621127,8.77440121089,12.4201836871,16.0001404212,20.0598210915,25.356975339,29.6199386414,35.7244180115,40.3462886269,45.7847023252,50.4379643811,55.8237069229,62.3197891299,66.6059984298,72.3957224443,76.0499184929,78.0216438344,81.0373265275,81.6821368588,84.5516890856,83.6547559647,82.4063703377,84.4969257082,82.8181113097,81.6264390914,82.6821778818,79.6396010078,78.7547337052,76.6033617353,76.4371215718,74.2114452864,74.4625930603,72.7102462337,71.4272069265,70.9275114203,70.4985451722,69.1563142916,68.2813190222,67.9234070377,66.7836544012,67.2421148718,66.9104470577,66.9786575256,66.5375848336,65.3788193926,66.8225331492,67.4588933451,67.4725841895,68.0895659499,69.7949088984,69.3934868417,70.0772571722,71.2534916432,73.0219668532,72.8246074411,76.0682406614,76.4986288072,76.7481515552,78.4185158182,80.139296514,81.1158965364,82.5446194278,83.9346260739,84.0398058841,84.7823209059,82.999626718,83.0401711177,82.384107481,81.4601989279,78.9584307189,75.7080130148,72.1591186534,67.4359398523,62.6306566107,56.2643327327,52.2211460767,46.5555334257,40.0586590184,34.1072855389,28.8422336808,24.6744278405,20.0158031987,15.7978653053,12.2439699263,8.91661536448,6.41903972263,4.61529332251,3.35491419087,1.96900951677,1.41373568252,1.04518424623,0.610546657348,0.445577670419,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y5_ETA_4
y5_ETA_4_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0197355956406,0.0207239559044,0.0345413908813,0.0404590104213,0.0641568574111,0.0927653009251,0.134209316644,0.211152123221,0.232908418709,0.356276905096,0.433228369651,0.714534482926,1.0242848588,1.47737607232,2.27478784438,3.05430566084,4.01649197618,5.17404756271,6.51225025083,7.87904831533,9.37204041328,10.8591322598,12.3968171593,13.7263338115,14.7427363031,15.8211074684,16.1336003389,17.0980910392,17.3944263596,17.6556848453,17.7677936317,17.7765478092,17.7720584874,17.4257874789,17.1444312615,17.0188024016,16.7967813899,16.4637939638,15.9268189713,15.7040043116,15.4777384858,14.9485556732,14.7563245181,14.3863883705,14.4505616219,14.2869779514,13.9029164735,13.8926992581,13.7784540356,13.6748348774,13.6263381772,13.8383223481,13.8713348965,14.0909629394,13.9460340042,14.3071518411,14.3751851077,14.7790396846,15.127282788,15.2103953657,15.2536892626,15.9680365568,16.3511801288,16.872105796,17.060613219,17.3968193284,17.3507877469,17.516475787,17.7248083757,17.7370016943,17.7060053324,17.457332981,16.8022006426,16.454550771,15.8073829704,14.8295084797,13.7112785502,12.3021044948,11.1140415618,9.43617758983,7.72020248262,6.38881795194,5.13274580238,4.02044418267,3.03271402726,2.1858431567,1.55138374428,1.06386704857,0.724362089291,0.515104780526,0.327652829122,0.223037201789,0.178598407614,0.115453371181,0.107567556817,0.0651200173481,0.0493398508909,0.0335683664613,0.0286256392198,0.0138232590703,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y5_ETA_5
y5_ETA_5_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.00151165511004,0.00478678801882,0.00453693821812,0.00579928246076,0.00907444457676,0.0115978207374,0.0136114548127,0.0214294846135,0.0277246217246,0.0360433393712,0.0400837547737,0.0572237143671,0.0821736457681,0.115203935857,0.150487623327,0.209707431431,0.327951559284,0.488797304708,0.708560063339,1.00250957217,1.3629895374,1.75295359556,2.21169746651,2.56566982208,3.04911700342,3.46647503753,3.83967895005,4.22638140145,4.39849997522,4.72167991424,4.78086255315,5.01121553128,5.05018116949,5.05114140701,5.0783201297,4.99237087003,4.91093472668,4.82273290984,4.75460005708,4.69297681448,4.54819700299,4.42913955396,4.38487260447,4.30067177728,4.15878068031,4.12754095312,4.05461091377,3.98841173931,3.97194166542,4.00872356347,4.02467550921,4.0389830482,4.09617319426,4.08872735252,4.15854062093,4.14772194492,4.27829824315,4.34292222799,4.45035680208,4.51575697891,4.64253233697,4.67902936456,4.84616670622,4.87740643341,4.98207232267,4.99440337277,4.96489207311,4.93642503175,4.96058300726,4.86657575443,4.71965141248,4.39043798107,4.17029552853,3.87289636642,3.50871348527,3.07840184707,2.61300953162,2.18200252142,1.76332656133,1.36099864495,1.00935806615,0.709612323617,0.476174182361,0.316106109295,0.223347325278,0.149994861442,0.103870492514,0.0773874218888,0.0592408933192,0.050914273718,0.033280487976,0.0234396138218,0.0171426482584,0.0189079369051,0.0113473427814,0.0095820181258,0.00655494937604,0.0057967538353,0.00226793457684,0.00226946575558,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y5_ETA_6
y5_ETA_6_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.000571060115529,0.000861045628537,0.00171687500707,0.000861312039159,0.00171416891493,0.00343224851787,0.00286096017891,0.00457071957319,0.00572521625209,0.00772647785054,0.0100289154077,0.0140338218051,0.0208841935797,0.0217558211581,0.0277723526771,0.0306258053747,0.0463664945443,0.062127356946,0.101057796206,0.145988722368,0.223574493184,0.318409876942,0.477834591681,0.650256645953,0.839928213201,1.00948732796,1.29713482534,1.5248344346,1.7796459481,1.96212072979,2.14056786271,2.28735061869,2.36401889732,2.48863009161,2.47516960746,2.51446042582,2.50723884858,2.52763100723,2.47449383418,2.46310665445,2.39039204941,2.39266728609,2.36480263438,2.26986048644,2.25556928099,2.19503359005,2.1448464273,2.19994694168,2.12544393663,2.1252300084,2.1353546117,2.14345389448,2.13155188748,2.19108991312,2.28352690151,2.24451199059,2.30086408507,2.34428951631,2.38336940543,2.43467419323,2.48546415375,2.53030411043,2.53740272892,2.57027070208,2.55474091215,2.48442350288,2.48323390198,2.35056141216,2.23624076549,2.09159129382,1.92572294085,1.75103054826,1.52601603818,1.27737045608,1.08129323785,0.826567795476,0.645364887085,0.454873394913,0.317196783922,0.218681234819,0.143177565481,0.0887361299915,0.0655881358923,0.0497950243098,0.0300652934204,0.0277627059135,0.0217501530597,0.015475198239,0.0171882035443,0.00830117604553,0.0094485781046,0.00657698849174,0.00629783314533,0.0028585189979,0.00429369551176,0.00114499386698,0.000855085827985,0.000572868508833,0.000571706398708,0.00028630204877,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y5_ETA_7
y5_ETA_7_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,6.48539153951e-05,0.000129585161661,0.000151099365519,0.000172751786932,8.63943965138e-05,0.00030261137205,0.000324077044697,0.000669783921584,0.000453733703979,0.000927171582088,0.00105833576827,0.00105644816404,0.00153330711526,0.00187934847193,0.00248241706734,0.00282793874611,0.00345365607765,0.00488190024812,0.00534671440683,0.00742997316928,0.00958776380018,0.0157609043618,0.0250238904322,0.0400112291013,0.0651006366668,0.0952348299731,0.135043455941,0.185245849362,0.231066603754,0.280033546797,0.332737728278,0.378609863727,0.420192267698,0.448538601407,0.480230571114,0.500876032786,0.518764267949,0.525788720531,0.53017706508,0.530454086926,0.530076063166,0.52079101174,0.512793839485,0.501907844865,0.498779300117,0.491145483293,0.486993508368,0.487161146398,0.480521004002,0.484415235455,0.484031344365,0.485025856982,0.492627403485,0.502797164618,0.509371509088,0.510245322323,0.513936711759,0.528307901037,0.523767844071,0.523324860574,0.528547623421,0.521893650887,0.508250848852,0.481940059932,0.457509331526,0.415705854901,0.381781868635,0.32975305887,0.279835231007,0.233549113441,0.180822007459,0.13621323412,0.0979339699056,0.0644741314365,0.0415458169711,0.0249865825885,0.0139478990596,0.00911279496776,0.00654241780661,0.00552969969926,0.00425386112985,0.00410073510436,0.00345563127275,0.002462637037,0.00181427138841,0.00170493912206,0.0013388583152,0.000669715189991,0.000669547132865,0.000604678339595,0.000603261798236,0.000561560999905,0.000259110350817,0.000259363274696,0.000194611534681,0.00012967463846,0.000129674973736,0.000129662233246,6.4872313669e-05,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y5_ETA_8
y5_ETA_8_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,2.83520649725e-05,2.84314660911e-05,0.0,2.83995987261e-05,2.83995987261e-05,0.000141474944145,5.68021525341e-05,0.000283867240731,0.000227219955275,0.000226182409705,0.000451510518615,0.000282797471404,0.000369113927055,0.000538856055529,0.000622928579435,0.000764882692327,0.00107731651618,0.00107531775228,0.00150174586291,0.00224157658905,0.00331983694947,0.00657538676168,0.0129051349439,0.0207664738307,0.0334148923507,0.0470529217457,0.0672155639535,0.0876669577513,0.110983875107,0.132565713932,0.152959936548,0.174727010001,0.193771250254,0.204875527139,0.219859574039,0.228884304887,0.231725489857,0.236200285647,0.241428404563,0.236007388597,0.235121755023,0.236281661771,0.237446468905,0.234029265721,0.239544428516,0.231187783759,0.234061637974,0.238363435255,0.238165340824,0.237106560245,0.240263894422,0.239973880611,0.242867187878,0.230510787873,0.225812504737,0.219412450851,0.201735418564,0.190801467251,0.170551286312,0.15223408529,0.134358795225,0.110604778201,0.086863012144,0.0661125610937,0.0479231859234,0.0314853872538,0.0198148038305,0.0123059482939,0.00762911251798,0.00359928671408,0.00209469862673,0.0016959927477,0.000879629856736,0.000738277154972,0.000507111942693,0.000537855485607,0.000395731789626,0.00048291650475,0.000342279407981,0.00022708660535,0.000368566420183,0.000141964106712,0.000255471280171,0.000170473474107,0.000170565838976,5.68825782791e-05,5.68031771605e-05,2.83995987261e-05,0.000142064178554,0.0,2.84102904795e-05,2.8370686443e-05,2.8451112188e-05,0.0,2.8370686443e-05,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y5_ETA_9
y5_ETA_9_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,1279.86048908,1381.43257409,1673.56690585,1806.33828457,2160.8900453,2244.05393947,2523.26643109,2577.73618818,2825.53674577,3109.80223252,3344.40640172,3414.48080444,3659.71775816,3920.66817706,3980.41510522,4269.72128241,4316.35987758,4720.46106014,5116.74492355,5148.11802091,5583.43080187,6034.45897061,6339.29596534,6717.27661224,6979.84317439,7233.9118146,7652.87474753,7932.03725137,8325.62562008,8938.13737713,9362.76815484,9771.88349587,10286.2231057,10275.0681412,10700.0449882,10994.1499919,11083.1397691,11127.5789021,11323.2541293,11565.5871778,11497.3192564,11357.0689373,11244.6041313,11276.0464425,11307.961715,12003.3109076,11638.0194654,11805.143982,11716.6502373,11565.825581,11419.1268388,11197.7540494,11859.5960511,11625.4879145,11604.4930485,11677.336774,11755.3715378,11651.2969878,11534.2333065,11671.9150229,11542.2005889,11190.2289665,11392.8909467,11208.4745048,10890.3061547,10686.6251929,10387.3406865,10418.3407982,9862.93430756,9501.38041295,8891.01813027,8332.83155028,7827.71276251,7624.63549921,7131.31382661,7032.23420903,6566.04820847,6388.6300552,5807.77978653,5562.48976887,5171.29312299,4978.87094644,4626.8416458,4452.19204629,4131.22438071,4209.52446426,3940.91322641,3581.46381717,3505.74694809,3208.94644969,3120.12778447,2724.05809945,2601.45308084,2530.95801176,2163.27715387,2056.78857882,1824.49922817,1720.2070391,1430.93431527,1357.93870381,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y5_ETA_10
y5_ETA_10_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,156.919704913,246.476642451,302.265298025,429.696494799,501.428646841,600.305689147,784.6800939,825.700620983,1059.56683629,1201.74988616,1339.74712085,1460.9429959,1745.30409375,1866.22524942,2144.34013681,2369.84472553,2655.26545525,2774.29397144,3215.47250473,3405.24395237,3964.17744285,4554.22001644,5085.95205679,5590.49688317,6094.98014778,6750.13582951,7339.90522276,7983.45266439,9058.93667949,9868.93558783,10812.6466473,11732.7795513,12941.1831098,13608.9628008,14403.9714196,15003.2905815,15661.5435895,16036.2892843,16649.7137853,17108.7221445,17549.9930204,18053.6221155,18680.2400721,18597.1586241,19147.6092883,19052.0423451,19364.5644871,19430.1277656,19492.8899839,19312.1908164,19664.8473809,19463.717403,19454.5677859,19085.4242116,19263.6185847,19013.0044916,18688.2969679,18311.5428206,18031.5599184,17370.9983443,17130.9420936,16775.8499919,16295.6566908,15490.9367037,14885.2497468,14169.4595753,13714.9221392,12752.6771412,11842.4826145,11115.8614201,9945.81853571,8612.30223488,7992.85622382,7378.74684817,6693.4143593,6127.58864459,5581.18951399,4955.40264125,4417.37206293,4030.57180521,3436.6666157,3243.90519014,2799.50235947,2524.6025352,2377.17711637,2130.77115452,1980.10797167,1711.56824714,1555.63807063,1340.77712612,1202.85261126,1048.97282625,861.525719955,737.206354917,609.830063542,471.81705365,407.65469,331.779469724,251.736864318,182.225014243,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y5_ETA_11
y5_ETA_11_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,10.1330248737,11.0548094007,16.3586514601,20.0390347946,27.4082506941,31.0927452959,51.5999381542,66.3291868424,80.8390398703,105.949161367,128.287866124,161.688300274,204.075080771,247.389547577,297.120703466,339.979126999,409.070701162,479.774506833,547.062886891,666.390303809,777.377996021,914.205578867,1091.08804683,1227.47760682,1362.686042,1586.7996696,1789.20964873,2006.46399204,2269.20163007,2565.26281772,2907.33062214,3250.68944131,3518.10739202,3772.65784489,4059.33458489,4307.26320796,4500.38518133,4771.59433516,4945.08212596,5079.1440171,5214.15880008,5351.17158206,5465.86825205,5659.60883664,5759.91222915,5749.60716504,5827.26400726,5874.23619548,5961.08383314,5942.24500787,5921.94610641,5912.49403418,5918.09611601,5874.25924931,5792.2221785,5757.6183726,5660.2466594,5447.49434578,5345.83461932,5297.05654738,5116.39132938,4891.14383735,4740.36023143,4521.81756299,4307.42458481,4008.32029098,3788.34597941,3487.75279209,3195.62228326,2919.68901447,2574.20463174,2218.19502078,1982.24094368,1772.66929079,1595.94166775,1384.2959382,1226.10974596,1046.64409581,928.240753438,775.302766667,638.943945126,558.336596295,469.652720704,394.083787448,340.197139429,282.824328948,229.880813307,205.012180729,156.635245513,129.451815815,96.9779537048,74.8708631407,58.2772892626,43.2978680448,36.8555815197,29.4840026022,24.411893847,16.8150789643,17.0494711441,8.52356366878,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y5_ETA_12
y5_ETA_12_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.332212380083,0.858514626564,0.941667660881,0.7476990566,1.35714698934,1.6617007194,1.88297676753,2.79745663088,2.5199206544,3.54422814001,3.93214341982,4.76264657438,6.20315357463,7.72493543136,10.4392986587,13.2920668959,17.8615366091,23.8417248528,30.1564264944,39.238066493,51.4784230502,68.2588763427,86.4560035514,102.623606108,126.991366947,148.450799031,174.759869408,204.222947435,238.199307572,283.328781666,327.720411174,381.113521497,415.19348534,468.731556213,504.144172252,537.066913866,578.768463346,600.007030643,637.349761019,666.716198679,688.714962493,708.78284254,725.085903191,747.81677934,762.052767172,760.982490495,775.231943346,792.876119919,791.492685365,796.585155664,789.333280985,792.822644557,775.430071487,765.35092747,765.487501237,751.697782617,747.999134173,729.211200389,710.300157857,690.650847553,661.449452559,634.649447578,610.483200371,576.862201329,543.116554679,495.556567659,461.690505265,416.421879815,377.421644034,328.856474088,288.623188783,240.049478167,209.146681806,175.977991997,149.333604167,124.135936483,104.48097087,83.6846717509,65.4354541533,51.6440427875,39.5134453734,29.7399534475,23.3699682838,17.9987874744,13.1802687642,10.3306744245,7.4490025616,6.59080378592,5.28893262437,3.76681798931,3.43332370092,3.18447783341,1.91087744166,2.43682648145,1.49531809251,1.46710387579,0.692352825992,0.719914181636,0.720392382179,0.36003311047,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y5_ETA_13
y5_ETA_13_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0201380948653,0.0504336675643,0.06041717311,0.141244392255,0.141243724741,0.252091664765,0.372985686714,0.252152954664,0.403291781857,0.433578397735,0.45385892183,0.534432564638,0.695640351274,0.766314279717,0.92766206219,1.27042551776,1.46199164766,1.9759663116,2.80245028067,4.02328984298,5.51490999865,7.10896074596,10.5057258249,14.2155063058,18.5207636658,23.7113687845,29.7421006397,37.7859022548,45.362019464,56.3082982951,66.0984251076,81.0915121116,93.8708807909,104.499156072,114.84216025,124.774522106,137.401151502,147.277927668,154.489018018,163.17664878,170.572519072,175.697082808,182.5019607,183.532662612,186.347203994,190.045897678,191.488091475,195.344318404,199.538914121,199.0899201,200.17893844,195.427939672,197.027970128,196.842158574,192.291778029,190.738230802,184.719380692,179.493961697,174.370854355,171.756749147,161.967720698,154.628589074,143.23303708,135.785828912,127.013666636,115.064503011,104.021519643,92.1470568751,79.5572014175,69.6262959552,55.7417004881,45.4300573199,36.8585071981,29.7707794577,24.6091869138,18.9339728814,14.407343689,10.9895944051,7.54213469105,5.34321878722,3.58899932945,2.3998308672,2.13718423231,1.56293368365,1.10899824047,0.94753880115,0.726005551725,0.584782580593,0.574583213237,0.433566807269,0.423389771283,0.292454279321,0.322712192108,0.221838121159,0.211570606718,0.0906994622854,0.13104326509,0.100842758486,0.0706532355169,0.0403416182147,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y5_ETA_14
y5_ETA_14_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0113211285726,0.0198132917797,0.0282607826577,0.0452918782417,0.0480745976857,0.0424445222351,0.065078745979,0.0622282350926,0.073561717713,0.113165116753,0.1074890647,0.141477662525,0.164049865948,0.209391029568,0.183884172305,0.248986387512,0.328237007643,0.379094671039,0.427158419016,0.58561726077,0.809225525506,1.2731986991,1.94361759108,2.84043373853,3.98921090197,5.84791220889,8.19327296867,10.9713601256,14.4341639362,18.2792006767,24.2915667685,30.0707179865,35.5254590661,41.8210102506,47.669503419,53.40411312,58.9825562688,64.9607073039,69.7749385219,74.5540043719,78.4819067247,82.2605678713,85.4725433447,88.0670088,90.2132889468,91.6906191445,93.0380221565,95.4811303276,95.5773541628,95.4079525049,95.8051980461,95.4369235356,95.0402935808,93.5260666785,92.2019789981,90.3437162964,88.5871407585,84.9602216375,82.3267818739,78.4866775187,73.4702261976,70.5136420949,64.1217785847,59.8234471693,53.4201183641,48.1518152936,42.5846835433,36.1196845319,30.0140725037,24.0113672695,18.9951006243,14.4202247536,10.9857302186,8.16798391354,5.86207454146,4.00039918317,2.89139851395,1.81937265175,1.31557450651,0.780783514239,0.597031769939,0.398825289578,0.359284834238,0.277250417388,0.248972729191,0.257398951562,0.152848849806,0.15554838815,0.152807259256,0.11879915504,0.113182276221,0.0877015044279,0.0905694056276,0.0509460000326,0.0339269199238,0.0254455871884,0.0368051052623,0.0141389787579,0.0113149688621,0.0169780051122,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y5_ETA_15
y5_ETA_15_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.00152612072888,0.0,0.00151083617232,0.00152463658485,0.00152228393296,0.00458260428621,0.00304267132837,0.00304815888002,0.00304291120197,0.00761668743081,0.00608293919419,0.00914796130627,0.00607901141174,0.0122449917731,0.00609979888132,0.0137120492414,0.018306940255,0.0274012455007,0.031941840003,0.0319347737758,0.0304951540658,0.0441764569475,0.0654910930698,0.0989509430018,0.147896134331,0.261971154912,0.467905356029,0.760132724907,1.1653075903,1.76413323799,2.59230569521,3.52638529978,4.66098505202,5.68986435657,6.94109585109,8.22045872852,9.73347385115,11.0783895743,12.1561794599,13.1153075399,14.4506826748,15.2025032478,16.2997063517,16.5963579114,17.6598014652,18.0324917736,18.6729897279,18.9498723946,19.0381529682,18.9649737964,19.1398806431,19.1097960038,18.91582925,18.3359732318,18.4496709537,17.4053700219,16.9853430814,16.2438855205,15.3561405144,14.3785078998,13.3081871818,12.2117994117,10.7722860496,9.63690050452,8.41202083806,6.9770745274,5.70926339523,4.58764872161,3.509153395,2.58914479928,1.6742739888,1.13786947752,0.767750661335,0.493601253866,0.231641088237,0.153976752777,0.0900241475155,0.0578145707653,0.0411695764142,0.0289457596897,0.019775297531,0.0213281304892,0.0167564469507,0.012211834861,0.0213403604977,0.013693142948,0.0121816320573,0.0121415861647,0.00914275734902,0.00303235085547,0.00915926845136,0.00152273295743,0.00151849676607,0.0015312065218,0.00303696989927,0.00151849676607,0.00304674917952,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y5_ETA_16
y5_ETA_16_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.000180822014455,0.0,0.0,0.0,0.000180614045879,0.000180752999698,0.000721669446093,0.000722820975803,0.000541524369354,0.000721489206661,0.0,0.000903092374196,0.000361055285276,0.00126364017763,0.000721804240541,0.00126606878845,0.00108265590918,0.00162454730455,0.00198655743518,0.00144614338695,0.0025254602451,0.00325114271446,0.00397272743261,0.00686186925949,0.00505703322583,0.0106526511467,0.0241994276635,0.0523568593547,0.120971893243,0.210877093934,0.345772136751,0.60578434869,0.889818202119,1.29969114522,1.71950074987,2.294914369,2.82612503748,3.4179762618,4.11363384564,4.71537551628,5.30338355835,5.82450659549,6.39086664926,6.79732197542,7.23914736464,7.58340468139,7.75163200332,7.92390700995,8.09209196792,8.1751600091,8.20108675828,8.11904700618,7.92106092147,7.8119390392,7.58472566698,7.20110066898,6.84116483233,6.40187742999,5.92071131841,5.31737137074,4.66247062122,4.093849872,3.4420355302,2.89386849047,2.26119688587,1.7519896779,1.28091466379,0.919613166827,0.605791666103,0.354465993497,0.205480078301,0.106712142677,0.0503826983872,0.0227493205495,0.00975091480462,0.00722126976961,0.00415166513623,0.00216526791387,0.00361022087329,0.00144514860393,0.00307222041592,0.00270750630867,0.0021679064189,0.00126531509493,0.000902209663128,0.000722591054988,0.00090335965233,0.00108478797221,0.000180752999698,0.000722656526577,0.000541904874824,0.00036102281907,0.000180183936053,0.0,0.000540714832414,0.0,0.000542177159608,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating a new Canvas
fig = plt.figure(figsize=(12,6),dpi=80)
frame = gridspec.GridSpec(1,1,right=0.7)
pad = fig.add_subplot(frame[0])
# Creating a new Stack
pad.hist(x=xData, bins=xBinning, weights=y5_ETA_0_weights+y5_ETA_1_weights+y5_ETA_2_weights+y5_ETA_3_weights+y5_ETA_4_weights+y5_ETA_5_weights+y5_ETA_6_weights+y5_ETA_7_weights+y5_ETA_8_weights+y5_ETA_9_weights+y5_ETA_10_weights+y5_ETA_11_weights+y5_ETA_12_weights+y5_ETA_13_weights+y5_ETA_14_weights+y5_ETA_15_weights+y5_ETA_16_weights,\
label=r"$bg\_dip\_1600\_inf$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#e5e5e5", linewidth=4, linestyle="dashdot",\
bottom=None, cumulative=False, density=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y5_ETA_0_weights+y5_ETA_1_weights+y5_ETA_2_weights+y5_ETA_3_weights+y5_ETA_4_weights+y5_ETA_5_weights+y5_ETA_6_weights+y5_ETA_7_weights+y5_ETA_8_weights+y5_ETA_9_weights+y5_ETA_10_weights+y5_ETA_11_weights+y5_ETA_12_weights+y5_ETA_13_weights+y5_ETA_14_weights+y5_ETA_15_weights,\
label=r"$bg\_dip\_1200\_1600$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#f2f2f2", linewidth=4, linestyle="dashdot",\
bottom=None, cumulative=False, density=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y5_ETA_0_weights+y5_ETA_1_weights+y5_ETA_2_weights+y5_ETA_3_weights+y5_ETA_4_weights+y5_ETA_5_weights+y5_ETA_6_weights+y5_ETA_7_weights+y5_ETA_8_weights+y5_ETA_9_weights+y5_ETA_10_weights+y5_ETA_11_weights+y5_ETA_12_weights+y5_ETA_13_weights+y5_ETA_14_weights,\
label=r"$bg\_dip\_800\_1200$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#ccc6aa", linewidth=4, linestyle="dashdot",\
bottom=None, cumulative=False, density=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y5_ETA_0_weights+y5_ETA_1_weights+y5_ETA_2_weights+y5_ETA_3_weights+y5_ETA_4_weights+y5_ETA_5_weights+y5_ETA_6_weights+y5_ETA_7_weights+y5_ETA_8_weights+y5_ETA_9_weights+y5_ETA_10_weights+y5_ETA_11_weights+y5_ETA_12_weights+y5_ETA_13_weights,\
label=r"$bg\_dip\_600\_800$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#ccc6aa", linewidth=4, linestyle="dashdot",\
bottom=None, cumulative=False, density=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y5_ETA_0_weights+y5_ETA_1_weights+y5_ETA_2_weights+y5_ETA_3_weights+y5_ETA_4_weights+y5_ETA_5_weights+y5_ETA_6_weights+y5_ETA_7_weights+y5_ETA_8_weights+y5_ETA_9_weights+y5_ETA_10_weights+y5_ETA_11_weights+y5_ETA_12_weights,\
label=r"$bg\_dip\_400\_600$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#c1bfa8", linewidth=4, linestyle="dashdot",\
bottom=None, cumulative=False, density=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y5_ETA_0_weights+y5_ETA_1_weights+y5_ETA_2_weights+y5_ETA_3_weights+y5_ETA_4_weights+y5_ETA_5_weights+y5_ETA_6_weights+y5_ETA_7_weights+y5_ETA_8_weights+y5_ETA_9_weights+y5_ETA_10_weights+y5_ETA_11_weights,\
label=r"$bg\_dip\_200\_400$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#bab5a3", linewidth=4, linestyle="dashdot",\
bottom=None, cumulative=False, density=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y5_ETA_0_weights+y5_ETA_1_weights+y5_ETA_2_weights+y5_ETA_3_weights+y5_ETA_4_weights+y5_ETA_5_weights+y5_ETA_6_weights+y5_ETA_7_weights+y5_ETA_8_weights+y5_ETA_9_weights+y5_ETA_10_weights,\
label=r"$bg\_dip\_100\_200$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#b2a596", linewidth=4, linestyle="dashdot",\
bottom=None, cumulative=False, density=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y5_ETA_0_weights+y5_ETA_1_weights+y5_ETA_2_weights+y5_ETA_3_weights+y5_ETA_4_weights+y5_ETA_5_weights+y5_ETA_6_weights+y5_ETA_7_weights+y5_ETA_8_weights+y5_ETA_9_weights,\
label=r"$bg\_dip\_0\_100$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#b7a39b", linewidth=4, linestyle="dashdot",\
bottom=None, cumulative=False, density=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y5_ETA_0_weights+y5_ETA_1_weights+y5_ETA_2_weights+y5_ETA_3_weights+y5_ETA_4_weights+y5_ETA_5_weights+y5_ETA_6_weights+y5_ETA_7_weights+y5_ETA_8_weights,\
label=r"$bg\_vbf\_1600\_inf$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#ad998c", linewidth=4, linestyle="dashdot",\
bottom=None, cumulative=False, density=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y5_ETA_0_weights+y5_ETA_1_weights+y5_ETA_2_weights+y5_ETA_3_weights+y5_ETA_4_weights+y5_ETA_5_weights+y5_ETA_6_weights+y5_ETA_7_weights,\
label=r"$bg\_vbf\_1200\_1600$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#9b8e82", linewidth=4, linestyle="dashdot",\
bottom=None, cumulative=False, density=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y5_ETA_0_weights+y5_ETA_1_weights+y5_ETA_2_weights+y5_ETA_3_weights+y5_ETA_4_weights+y5_ETA_5_weights+y5_ETA_6_weights,\
label=r"$bg\_vbf\_800\_1200$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#876656", linewidth=4, linestyle="dashdot",\
bottom=None, cumulative=False, density=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y5_ETA_0_weights+y5_ETA_1_weights+y5_ETA_2_weights+y5_ETA_3_weights+y5_ETA_4_weights+y5_ETA_5_weights,\
label=r"$bg\_vbf\_600\_800$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#afcec6", linewidth=4, linestyle="dashdot",\
bottom=None, cumulative=False, density=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y5_ETA_0_weights+y5_ETA_1_weights+y5_ETA_2_weights+y5_ETA_3_weights+y5_ETA_4_weights,\
label=r"$bg\_vbf\_400\_600$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#84c1a3", linewidth=4, linestyle="dashdot",\
bottom=None, cumulative=False, density=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y5_ETA_0_weights+y5_ETA_1_weights+y5_ETA_2_weights+y5_ETA_3_weights,\
label=r"$bg\_vbf\_200\_400$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#89a8a0", linewidth=4, linestyle="dashdot",\
bottom=None, cumulative=False, density=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y5_ETA_0_weights+y5_ETA_1_weights+y5_ETA_2_weights,\
label=r"$bg\_vbf\_100\_200$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#829e8c", linewidth=4, linestyle="dashdot",\
bottom=None, cumulative=False, density=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y5_ETA_0_weights+y5_ETA_1_weights,\
label=r"$bg\_vbf\_0\_100$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#adbcc6", linewidth=4, linestyle="dashdot",\
bottom=None, cumulative=False, density=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y5_ETA_0_weights,\
label="$signal$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#7a8e99", linewidth=3, linestyle="dashed",\
bottom=None, cumulative=False, density=False, align="mid", orientation="vertical")
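# The seventeen hist calls above emulate a stacked histogram by passing
# hand-written cumulative sums of the component weight arrays. A compact,
# equivalent loop-based sketch (the `labels` and `colors` arguments are
# illustrative placeholders, not the generated values above):

```python
import numpy

def stacked_step_hists(pad, x_data, x_binning, weight_arrays, labels, colors):
    """Draw overlaid step histograms of cumulative component sums.

    Curve k shows weight_arrays[0] + ... + weight_arrays[k], the same
    quantity the explicit sums above pass to each pad.hist call.
    """
    totals = numpy.cumsum(numpy.stack(weight_arrays), axis=0)
    # Draw the tallest (full) stack first, then work down to the bottom
    # component, mirroring the ordering of the generated calls above.
    for k in range(len(weight_arrays) - 1, -1, -1):
        pad.hist(x=x_data, bins=x_binning, weights=totals[k],
                 label=labels[k], histtype="step",
                 edgecolor=colors[k], linewidth=4, linestyle="dashdot")
```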
# Axis
plt.rc('text',usetex=False)
plt.xlabel(r"$\eta$ $[ j_{2} ]$ ",\
fontsize=16,color="black")
plt.ylabel(r"$\mathrm{Events}$ $(\mathcal{L}_{\mathrm{int}} = 40.0\ \mathrm{fb}^{-1})$ ",\
fontsize=16,color="black")
# Boundary of y-axis
ymax=(y5_ETA_0_weights+y5_ETA_1_weights+y5_ETA_2_weights+y5_ETA_3_weights+y5_ETA_4_weights+y5_ETA_5_weights+y5_ETA_6_weights+y5_ETA_7_weights+y5_ETA_8_weights+y5_ETA_9_weights+y5_ETA_10_weights+y5_ETA_11_weights+y5_ETA_12_weights+y5_ETA_13_weights+y5_ETA_14_weights+y5_ETA_15_weights+y5_ETA_16_weights).max()*1.1
ymin=0 # linear scale
#ymin=min([x for x in (y5_ETA_0_weights+y5_ETA_1_weights+y5_ETA_2_weights+y5_ETA_3_weights+y5_ETA_4_weights+y5_ETA_5_weights+y5_ETA_6_weights+y5_ETA_7_weights+y5_ETA_8_weights+y5_ETA_9_weights+y5_ETA_10_weights+y5_ETA_11_weights+y5_ETA_12_weights+y5_ETA_13_weights+y5_ETA_14_weights+y5_ETA_15_weights+y5_ETA_16_weights) if x])/100. # log scale
plt.gca().set_ylim(ymin,ymax)
# Log/Linear scale for X-axis
plt.gca().set_xscale("linear")
#plt.gca().set_xscale("log", nonpositive="clip")
# Log/Linear scale for Y-axis
plt.gca().set_yscale("linear")
#plt.gca().set_yscale("log", nonpositive="clip")
# Legend
plt.legend(bbox_to_anchor=(1.05,1), loc=2, borderaxespad=0.)
# Saving the image
plt.savefig('../../HTML/MadAnalysis5job_0/selection_4.png')
plt.savefig('../../PDF/MadAnalysis5job_0/selection_4.png')
plt.savefig('../../DVI/MadAnalysis5job_0/selection_4.eps')
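# The three savefig calls above assume their target directories already
# exist; a small optional guard one could run beforehand (the helper name
# is mine, not part of the generated script):

```python
import os

def ensure_parent_dirs(paths):
    # Create each file's parent directory if it is missing, so savefig
    # does not fail with FileNotFoundError on a fresh output tree.
    for path in paths:
        parent = os.path.dirname(path)
        if parent:
            os.makedirs(parent, exist_ok=True)
```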
# Running!
if __name__ == '__main__':
selection_4()
# --- File: app/main/__init__.py (repo: rodisantana2002/resthouse, license: MIT) ---
from app.model.models import *
from app.controls.operacoes import *
# --- File: Huaxian_vmd/projects/generate_samples.py (repo: zjy8006/MonthlyRunoffForecastByAutoReg, license: MIT) ---
import os
root_path = os.path.dirname(os.path.abspath("__file__"))  # note: the quoted "__file__" resolves relative to the current working directory (a notebook-friendly idiom); use the bare __file__ variable to anchor to this script's own location
import sys
sys.path.append(root_path)
from tools.samples_generator import gen_multi_forecast_samples
from tools.samples_generator import gen_direct_forecast_samples
from tools.samples_generator import gen_direct_hindcast_samples
from Huaxian_vmd.projects.variables import variables
# gen_direct_forecast_samples(
# station="Huaxian",
# decomposer="vmd",
# lags_dict = variables['lags_dict'],
# input_columns=['IMF1','IMF2','IMF3','IMF4','IMF5','IMF6','IMF7','IMF8',],
# output_column=['ORIG'],
# start=533,
# stop=792,
# test_len=120,
# gen_from='training-development and test sets',
# )
# gen_direct_forecast_samples(
# station="Huaxian",
# decomposer="vmd",
# lags_dict = variables['lags_dict'],
# input_columns=['IMF1','IMF2','IMF3','IMF4','IMF5','IMF6','IMF7','IMF8',],
# output_column=['ORIG'],
# start=533,
# stop=792,
# test_len=120,
# gen_from='training-development and appended sets',
# )
# gen_direct_forecast_samples(
# station="Huaxian",
# decomposer="vmd",
# lags_dict = variables['lags_dict'],
# input_columns=['IMF1','IMF2','IMF3','IMF4','IMF5','IMF6','IMF7','IMF8',],
# output_column=['ORIG'],
# start=533,
# stop=792,
# test_len=120,
# gen_from='training and validation sets',
# )
gen_direct_hindcast_samples(
station="Huaxian",
decomposer="vmd",
lags_dict = variables['lags_dict'],
input_columns=['IMF1','IMF2','IMF3','IMF4','IMF5','IMF6','IMF7','IMF8',],
output_column=['ORIG'],
test_len=120,
)
# for lead_time in [1,3,5,7,9]:
# gen_direct_forecast_samples(
# station = "Huaxian",
# decomposer="vmd",
# lags_dict = variables['lags_dict'],
# input_columns=['IMF1','IMF2','IMF3','IMF4','IMF5','IMF6','IMF7','IMF8',],
# output_column=['ORIG'],
# start=533,
# stop=792,
# test_len=120,
# mode = 'PACF',
# lead_time =lead_time,
# gen_from='training and appended sets',
# )
for lead_time in [1,3,5,7,9,]:
gen_direct_forecast_samples(
station = "Huaxian",
decomposer="vmd",
lags_dict = variables['lags_dict'],
input_columns=['IMF1','IMF2','IMF3','IMF4','IMF5','IMF6','IMF7','IMF8',],
output_column=['ORIG'],
start=533,
stop=792,
test_len=120,
mode = 'Pearson',
lead_time =lead_time,
gen_from='training and appended sets',
)
# gen_multi_forecast_samples(
# station='Huaxian',
# decomposer="vmd",
# lags_dict = variables['lags_dict'],
# columns=['IMF1','IMF2','IMF3','IMF4','IMF5','IMF6','IMF7','IMF8',],
# start=533,
# stop=792,
# test_len=120,
# )
# gen_direct_forecast_samples(
# station = "Huaxian",
# decomposer="vmd",
# lags_dict = variables['lags_dict'],
# input_columns=['IMF1','IMF2','IMF3','IMF4','IMF5','IMF6','IMF7','IMF8',],
# output_column=['ORIG'],
# start=533,
# stop=792,
# test_len=120,
# mode = 'PACF',
# lead_time =1,
# n_components='mle',
# gen_from='training and appended sets',
# )
# num_in_one = sum(variables['lags_dict'].values())
# for n_components in range(num_in_one-16,num_in_one+1):
# gen_direct_forecast_samples(
# station = "Huaxian",
# decomposer="vmd",
# lags_dict = variables['lags_dict'],
# input_columns=['IMF1','IMF2','IMF3','IMF4','IMF5','IMF6','IMF7','IMF8',],
# output_column=['ORIG'],
# start=533,
# stop=792,
# test_len=120,
# mode = 'PACF',
# lead_time =1,
# n_components=n_components,
# gen_from='training and appended sets',
# )
# --- File: wagtail/core/apps.py (repo: stevedya/wagtail, license: BSD-3-Clause) ---
from ..apps import WagtailAppConfig as WagtailCoreAppConfig  # noqa
# TODO: Deprecation warning
# --- File: code/exampleStrats/alwaysDefect.py (repo: robo-monk/PrisonersDilemmaTournament, license: MIT) ---
def strategy(history, memory):
    return 0, None
# --- File: src/fluent-python/metaprogramming/test_params.py (repo: sudeep0901/python, license: MIT) ---
def mult(x, y):
    return x * addit(y)

def addit(x):
    return x + 5

print(mult(2, 1))
# --- File: pythreshold/global_th/__init__.py (repo: HELL-TO-HEAVEN/pythreshold, license: MIT) ---
from .otsu import otsu_threshold
from .p_tile import p_tile_threshold
from .two_peaks import two_peaks_threshold
from .min_err import min_err_threshold
# --- File: bbox_convert.py (repo: TrueWarg/snippets, license: Apache-2.0) ---
import torch
# todo add numpy analog also
def xcycwh_to_xyxy(boxes: torch.Tensor) -> torch.Tensor:
return torch.cat((
boxes[:, :2] - boxes[:, 2:] / 2,
boxes[:, :2] + boxes[:, 2:] / 2
), boxes.dim() - 1)
def xyxy_to_xcycwh(boxes: torch.Tensor) -> torch.Tensor:
return torch.cat((
(boxes[:, :2] + boxes[:, 2:]) / 2,
boxes[:, 2:] - boxes[:, :2]
), boxes.dim() - 1)
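# The "todo add numpy analog also" above can be sketched directly. These
# mirrors of the two converters assume `boxes` is a 2-D (N, 4) array; the
# `_np` names are mine, not part of the original snippet:

```python
import numpy as np

def xcycwh_to_xyxy_np(boxes: np.ndarray) -> np.ndarray:
    # (cx, cy, w, h) -> (x1, y1, x2, y2)
    return np.concatenate((
        boxes[:, :2] - boxes[:, 2:] / 2,
        boxes[:, :2] + boxes[:, 2:] / 2,
    ), axis=1)

def xyxy_to_xcycwh_np(boxes: np.ndarray) -> np.ndarray:
    # (x1, y1, x2, y2) -> (cx, cy, w, h)
    return np.concatenate((
        (boxes[:, :2] + boxes[:, 2:]) / 2,
        boxes[:, 2:] - boxes[:, :2],
    ), axis=1)
```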
# --- File: objectdetection/__init__ .py (repo: aierh/autoML, license: MIT) ---
from .trial_adapter import TrialAdapter
# --- File: configs/vision/hybrid.py (repo: udday2014/HebbianLearning, license: MIT) ---
from neurolab import params as P
import params as PP
from .meta import *
gdes_on_hebb_layer = {}
hebb_on_gdes_layer = {}
ghg = {}
for ds in datasets:
    for da in [False, True]:
        for lrn_rule in lrn_rules:
            for l in range(1, num_layers[ds] - 2):
                gdes_on_hebb_layer[str(l) + '_' + lrn_rule + '_' + ds + ('_da' if da else '')] = {
                    P.KEY_EXPERIMENT: 'neurolab.experiment.VisionExperiment',
                    P.KEY_NET_MODULES: 'models.gdes.top_' + str(num_layers[ds]) + 'l.top' + str(l) + '.Net',
                    P.KEY_NET_OUTPUTS: net_outputs[ds],
                    P.KEY_DATA_MANAGER: 'neurolab.data.' + data_managers[ds],
                    P.KEY_AUGMENT_MANAGER: None if not da else 'neurolab.data.LightCustomAugmentManager',
                    P.KEY_AUGM_BEFORE_STATS: True,
                    P.KEY_AUGM_STAT_PASSES: 2,
                    P.KEY_WHITEN: None if lrn_rule_keys[lrn_rule] != 'hwta' else 2,
                    P.KEY_TOT_TRN_SAMPLES: tot_trn_samples[ds],
                    P.KEY_BATCHSIZE: 64,
                    P.KEY_INPUT_SHAPE: input_shapes[ds],
                    P.KEY_NUM_EPOCHS: 20 if not da else 40,
                    P.KEY_OPTIM_MANAGER: 'neurolab.optimization.optim.SGDOptimManager',
                    P.KEY_SCHED_MANAGER: 'neurolab.optimization.sched.MultiStepSchedManager',
                    P.KEY_LOSS_METRIC_MANAGER: 'neurolab.optimization.metric.CrossEntMetricManager',
                    P.KEY_CRIT_METRIC_MANAGER: ['neurolab.optimization.metric.TopKAccMetricManager', 'neurolab.optimization.metric.TopKAccMetricManager'],
                    P.KEY_TOPKACC_K: [1, 5],
                    P.KEY_LEARNING_RATE: 1e-3,
                    P.KEY_LR_DECAY: 0.5 if not da else 0.1,
                    P.KEY_MILESTONES: range(10, 20) if not da else [20, 30],
                    P.KEY_MOMENTUM: 0.9,
                    P.KEY_L2_PENALTY: l2_penalties[ds],
                    P.KEY_DROPOUT_P: 0.5,
                    P.KEY_LOCAL_LRN_RULE: lrn_rule_keys[lrn_rule],
                    PP.KEY_WTA_COMPETITIVE_ACT: lrn_rule_competitive_act[lrn_rule],
                    PP.KEY_WTA_K: lrn_rule_k[lrn_rule],
                    PP.KEY_ACT_LAMB: lrn_rule_lamb[lrn_rule],
                    P.KEY_PRE_NET_MODULES: ['models.hebb.model_' + str(num_layers[ds]) + 'l.Net'],
                    P.KEY_PRE_NET_MDL_PATHS: [P.PROJECT_ROOT + '/results/configs/vision/hebb/config_base_hebb[' + lrn_rule + '_' + ds + ('_da' if da else '') + ']/iter' + P.STR_TOKEN + '/models/model0.pt'],
                    P.KEY_PRE_NET_OUTPUTS: ['bn' + str(l)],
                }
                hebb_on_gdes_layer[str(l) + '_' + lrn_rule + '_' + ds + ('_da' if da else '')] = {
                    P.KEY_EXPERIMENT: 'neurolab.experiment.VisionExperiment',
                    P.KEY_NET_MODULES: 'models.hebb.top_' + str(num_layers[ds]) + 'l.top' + str(l) + '.Net',
                    P.KEY_NET_OUTPUTS: net_outputs[ds],
                    P.KEY_DATA_MANAGER: 'neurolab.data.' + data_managers[ds],
                    P.KEY_AUGMENT_MANAGER: None if not da else 'neurolab.data.LightCustomAugmentManager',
                    P.KEY_AUGM_BEFORE_STATS: True,
                    P.KEY_AUGM_STAT_PASSES: 2,
                    P.KEY_WHITEN: None,
                    P.KEY_TOT_TRN_SAMPLES: tot_trn_samples[ds],
                    P.KEY_BATCHSIZE: 64,
                    P.KEY_INPUT_SHAPE: input_shapes[ds],
                    P.KEY_NUM_EPOCHS: 20,
                    P.KEY_OPTIM_MANAGER: 'neurolab.optimization.optim.SGDOptimManager',
                    P.KEY_CRIT_METRIC_MANAGER: ['neurolab.optimization.metric.TopKAccMetricManager', 'neurolab.optimization.metric.TopKAccMetricManager'],
                    P.KEY_TOPKACC_K: [1, 5],
                    P.KEY_LEARNING_RATE: 1e-3,
                    P.KEY_LOCAL_LRN_RULE: lrn_rule_keys[lrn_rule],
                    PP.KEY_WTA_COMPETITIVE_ACT: lrn_rule_competitive_act[lrn_rule],
                    PP.KEY_WTA_K: lrn_rule_k[lrn_rule],
                    P.KEY_DEEP_TEACHER_SIGNAL: lrn_rule_dts[lrn_rule],
                    PP.KEY_ACT_LAMB: lrn_rule_lamb[lrn_rule],
                    P.KEY_PRE_NET_MODULES: ['models.gdes.model_' + str(num_layers[ds]) + 'l.Net'],
                    P.KEY_PRE_NET_MDL_PATHS: [P.PROJECT_ROOT + '/results/configs/vision/gdes/config_base_gdes[' + ds + ('_da' if da else '') + ']/iter' + P.STR_TOKEN + '/models/model0.pt'],
                    P.KEY_PRE_NET_OUTPUTS: ['bn' + str(l)],
                }
            for l1 in range(1, num_layers[ds] - 1):
                for l2 in range(l1 + 1, num_layers[ds]):
                    ghg[str(l1) + '_' + str(l2) + '_' + lrn_rule + '_' + ds + ('_da' if da else '')] = {
                        P.KEY_EXPERIMENT: 'neurolab.experiment.VisionExperiment',
                        P.KEY_NET_MODULES: 'models.gdes.fc.Net' if l2 == num_layers[ds] - 1 else ('models.gdes.fc2.Net' if l2 == num_layers[ds] - 2 else ('models.gdes.top_' + str(num_layers[ds]) + 'l.top' + str(l2) + '.Net')),
                        P.KEY_NET_OUTPUTS: 'fc' if l2 == num_layers[ds] - 1 else ('fc2' if l2 == num_layers[ds] - 2 else net_outputs[ds]),
                        P.KEY_DATA_MANAGER: 'neurolab.data.' + data_managers[ds],
                        P.KEY_AUGMENT_MANAGER: None if not da else 'neurolab.data.LightCustomAugmentManager',
                        P.KEY_AUGM_BEFORE_STATS: True,
                        P.KEY_AUGM_STAT_PASSES: 2,
                        P.KEY_WHITEN: None,
                        P.KEY_TOT_TRN_SAMPLES: tot_trn_samples[ds],
                        P.KEY_BATCHSIZE: 64,
                        P.KEY_INPUT_SHAPE: input_shapes[ds],
                        P.KEY_NUM_EPOCHS: 20 if not da else 40,
                        P.KEY_OPTIM_MANAGER: 'neurolab.optimization.optim.SGDOptimManager',
                        P.KEY_SCHED_MANAGER: 'neurolab.optimization.sched.MultiStepSchedManager',
                        P.KEY_LOSS_METRIC_MANAGER: 'neurolab.optimization.metric.CrossEntMetricManager',
                        P.KEY_CRIT_METRIC_MANAGER: ['neurolab.optimization.metric.TopKAccMetricManager', 'neurolab.optimization.metric.TopKAccMetricManager'],
                        P.KEY_TOPKACC_K: [1, 5],
                        P.KEY_LEARNING_RATE: 1e-3,
                        P.KEY_LR_DECAY: 0.5 if not da else 0.1,
                        P.KEY_MILESTONES: range(10, 20) if not da else [20, 30],
                        P.KEY_MOMENTUM: 0.9,
                        P.KEY_L2_PENALTY: 5e-4 if l2 > num_layers[ds] - 3 else l2_penalties[ds],
                        P.KEY_DROPOUT_P: 0.5,
                        P.KEY_LOCAL_LRN_RULE: lrn_rule_keys[lrn_rule],
                        PP.KEY_WTA_COMPETITIVE_ACT: lrn_rule_competitive_act[lrn_rule],
                        PP.KEY_WTA_K: lrn_rule_k[lrn_rule],
                        PP.KEY_ACT_LAMB: lrn_rule_lamb[lrn_rule],
                        P.KEY_PRE_NET_MODULES: ['models.gdes.model_' + str(num_layers[ds]) + 'l.Net', 'models.hebb.fc2.Net' if l1 == num_layers[ds] - 2 else ('models.hebb.top_' + str(num_layers[ds]) + 'l.top' + str(l1) + '.Net')],
                        P.KEY_PRE_NET_MDL_PATHS: [P.PROJECT_ROOT + '/results/configs/vision/gdes/config_base_gdes[' + ds + ('_da' if da else '') + ']/iter' + P.STR_TOKEN + '/models/model0.pt',
                            P.PROJECT_ROOT + (('/results/configs/vision/gdes/hebb_fc2_on_gdes_layer[' + str(num_layers[ds] - 2) + '_' + lrn_rule + '_' + ds + ('_da' if da else '') + ']/iter' + P.STR_TOKEN + '/models/model0.pt') if l1 == num_layers[ds] - 2 else
                            ('/results/configs/vision/hybrid/hebb_on_gdes_layer[' + str(l1) + '_' + lrn_rule + '_' + ds + ('_da' if da else '') + ']/iter' + P.STR_TOKEN + '/models/model0.pt'))],
                        P.KEY_PRE_NET_OUTPUTS: ['bn' + str(l1), 'bn1' if l1 == num_layers[ds] - 2 else 'bn' + str(l2)],
                    }
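The nested loops above emit one config dict per (layer, learning rule, dataset, augmentation) combination, keyed by an underscore-joined id such as `2_hwta_cifar10_da`. A minimal self-contained sketch of that key scheme follows; the dataset name, layer count, and rule name here are hypothetical placeholders, not the real neurolab settings:

```python
# Minimal sketch of the key scheme used above: one entry per
# (layer, lrn_rule, dataset, augmentation) combination.
# 'cifar10', 6 layers, and 'hwta' are placeholder assumptions.
datasets = {'cifar10': 6}  # dataset name -> assumed num_layers
lrn_rules = ['hwta']

configs = {}
for ds, n_layers in datasets.items():
    for da in [False, True]:
        for lrn_rule in lrn_rules:
            for l in range(1, n_layers - 2):
                key = str(l) + '_' + lrn_rule + '_' + ds + ('_da' if da else '')
                configs[key] = {'layer': l, 'rule': lrn_rule, 'dataset': ds, 'augment': da}

print(sorted(configs))
```

With 6 layers, layers 1..3 each get an augmented and a non-augmented entry, six configs in total.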
| 59.151786 | 265 | 0.669736 | 1,033 | 6,625 | 3.968054 | 0.122943 | 0.073189 | 0.050988 | 0.024152 | 0.905831 | 0.886802 | 0.876067 | 0.833374 | 0.833374 | 0.833374 | 0 | 0.020668 | 0.182038 | 6,625 | 111 | 266 | 59.684685 | 0.735745 | 0 | 0 | 0.653846 | 0 | 0 | 0.230072 | 0.163949 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.028846 | 0.028846 | 0 | 0.028846 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
81bfb80dc8b4c116ff60b32bd83a35c19a2a48f8 | 212 | py | Python | bcirc/__init__.py | pypigit/bcirc | 651f29bb162e99844fabb0f6eed2c802ba6a7ad4 | [
"MIT"
] | null | null | null | bcirc/__init__.py | pypigit/bcirc | 651f29bb162e99844fabb0f6eed2c802ba6a7ad4 | [
"MIT"
] | null | null | null | bcirc/__init__.py | pypigit/bcirc | 651f29bb162e99844fabb0f6eed2c802ba6a7ad4 | [
"MIT"
] | null | null | null |
from bcirc.circuits import BooleanCircuit, InputGate, TrueGate, FalseGate, NotGate, IdentGate, AndGate, OrGate, NandGate, NorGate, XorGate, XnorGate, ImplyGate, CustomGate, MultiAndGate, MultiOrGate, InputGates
| 70.666667 | 210 | 0.820755 | 21 | 212 | 8.285714 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.099057 | 212 | 2 | 211 | 106 | 0.910995 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
81cb641bdf4ec7372b08401265ea4499e6add4dc | 22,392 | py | Python | tests/test_secretsmanager/test_server.py | jonnangle/moto-1 | 40b4e299abb732aad7f56cc0f680c0a272a46594 | [
"Apache-2.0"
] | 1 | 2020-01-13T21:45:21.000Z | 2020-01-13T21:45:21.000Z | tests/test_secretsmanager/test_server.py | jonnangle/moto-1 | 40b4e299abb732aad7f56cc0f680c0a272a46594 | [
"Apache-2.0"
] | 17 | 2020-08-28T12:53:56.000Z | 2020-11-10T01:04:46.000Z | tests/test_secretsmanager/test_server.py | jonnangle/moto-1 | 40b4e299abb732aad7f56cc0f680c0a272a46594 | [
"Apache-2.0"
] | 2 | 2017-03-02T05:59:52.000Z | 2020-09-03T13:25:44.000Z |
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
import json
import sure # noqa
import moto.server as server
from moto import mock_secretsmanager
"""
Test the different server responses for secretsmanager
"""
DEFAULT_SECRET_NAME = "test-secret"
@mock_secretsmanager
def test_get_secret_value():
backend = server.create_backend_app("secretsmanager")
test_client = backend.test_client()
create_secret = test_client.post(
"/",
data={"Name": DEFAULT_SECRET_NAME, "SecretString": "foo-secret"},
headers={"X-Amz-Target": "secretsmanager.CreateSecret"},
)
get_secret = test_client.post(
"/",
data={"SecretId": DEFAULT_SECRET_NAME, "VersionStage": "AWSCURRENT"},
headers={"X-Amz-Target": "secretsmanager.GetSecretValue"},
)
json_data = json.loads(get_secret.data.decode("utf-8"))
assert json_data["SecretString"] == "foo-secret"
@mock_secretsmanager
def test_get_secret_that_does_not_exist():
backend = server.create_backend_app("secretsmanager")
test_client = backend.test_client()
get_secret = test_client.post(
"/",
data={"SecretId": "i-dont-exist", "VersionStage": "AWSCURRENT"},
headers={"X-Amz-Target": "secretsmanager.GetSecretValue"},
)
json_data = json.loads(get_secret.data.decode("utf-8"))
assert json_data["message"] == "Secrets Manager can't find the specified secret."
assert json_data["__type"] == "ResourceNotFoundException"
@mock_secretsmanager
def test_get_secret_that_does_not_match():
backend = server.create_backend_app("secretsmanager")
test_client = backend.test_client()
create_secret = test_client.post(
"/",
data={"Name": DEFAULT_SECRET_NAME, "SecretString": "foo-secret"},
headers={"X-Amz-Target": "secretsmanager.CreateSecret"},
)
get_secret = test_client.post(
"/",
data={"SecretId": "i-dont-match", "VersionStage": "AWSCURRENT"},
headers={"X-Amz-Target": "secretsmanager.GetSecretValue"},
)
json_data = json.loads(get_secret.data.decode("utf-8"))
assert json_data["message"] == "Secrets Manager can't find the specified secret."
assert json_data["__type"] == "ResourceNotFoundException"
@mock_secretsmanager
def test_get_secret_that_has_no_value():
backend = server.create_backend_app("secretsmanager")
test_client = backend.test_client()
create_secret = test_client.post(
"/",
data={"Name": DEFAULT_SECRET_NAME},
headers={"X-Amz-Target": "secretsmanager.CreateSecret"},
)
get_secret = test_client.post(
"/",
data={"SecretId": DEFAULT_SECRET_NAME},
headers={"X-Amz-Target": "secretsmanager.GetSecretValue"},
)
json_data = json.loads(get_secret.data.decode("utf-8"))
assert (
json_data["message"]
== "Secrets Manager can't find the specified secret value for staging label: AWSCURRENT"
)
assert json_data["__type"] == "ResourceNotFoundException"
@mock_secretsmanager
def test_create_secret():
backend = server.create_backend_app("secretsmanager")
test_client = backend.test_client()
res = test_client.post(
"/",
data={"Name": "test-secret", "SecretString": "foo-secret"},
headers={"X-Amz-Target": "secretsmanager.CreateSecret"},
)
res_2 = test_client.post(
"/",
data={"Name": "test-secret-2", "SecretString": "bar-secret"},
headers={"X-Amz-Target": "secretsmanager.CreateSecret"},
)
json_data = json.loads(res.data.decode("utf-8"))
assert json_data["ARN"] != ""
assert json_data["Name"] == "test-secret"
json_data_2 = json.loads(res_2.data.decode("utf-8"))
assert json_data_2["ARN"] != ""
assert json_data_2["Name"] == "test-secret-2"
@mock_secretsmanager
def test_describe_secret():
backend = server.create_backend_app("secretsmanager")
test_client = backend.test_client()
create_secret = test_client.post(
"/",
data={"Name": "test-secret", "SecretString": "foosecret"},
headers={"X-Amz-Target": "secretsmanager.CreateSecret"},
)
describe_secret = test_client.post(
"/",
data={"SecretId": "test-secret"},
headers={"X-Amz-Target": "secretsmanager.DescribeSecret"},
)
create_secret_2 = test_client.post(
"/",
data={"Name": "test-secret-2", "SecretString": "barsecret"},
headers={"X-Amz-Target": "secretsmanager.CreateSecret"},
)
describe_secret_2 = test_client.post(
"/",
data={"SecretId": "test-secret-2"},
headers={"X-Amz-Target": "secretsmanager.DescribeSecret"},
)
json_data = json.loads(describe_secret.data.decode("utf-8"))
assert json_data # Returned dict is not empty
assert json_data["ARN"] != ""
assert json_data["Name"] == "test-secret"
json_data_2 = json.loads(describe_secret_2.data.decode("utf-8"))
assert json_data_2 # Returned dict is not empty
assert json_data_2["ARN"] != ""
assert json_data_2["Name"] == "test-secret-2"
@mock_secretsmanager
def test_describe_secret_that_does_not_exist():
backend = server.create_backend_app("secretsmanager")
test_client = backend.test_client()
describe_secret = test_client.post(
"/",
data={"SecretId": "i-dont-exist"},
headers={"X-Amz-Target": "secretsmanager.DescribeSecret"},
)
json_data = json.loads(describe_secret.data.decode("utf-8"))
assert json_data["message"] == "Secrets Manager can't find the specified secret."
assert json_data["__type"] == "ResourceNotFoundException"
@mock_secretsmanager
def test_describe_secret_that_does_not_match():
backend = server.create_backend_app("secretsmanager")
test_client = backend.test_client()
create_secret = test_client.post(
"/",
data={"Name": DEFAULT_SECRET_NAME, "SecretString": "foosecret"},
headers={"X-Amz-Target": "secretsmanager.CreateSecret"},
)
describe_secret = test_client.post(
"/",
data={"SecretId": "i-dont-match"},
headers={"X-Amz-Target": "secretsmanager.DescribeSecret"},
)
json_data = json.loads(describe_secret.data.decode("utf-8"))
assert json_data["message"] == "Secrets Manager can't find the specified secret."
assert json_data["__type"] == "ResourceNotFoundException"
@mock_secretsmanager
def test_rotate_secret():
backend = server.create_backend_app("secretsmanager")
test_client = backend.test_client()
create_secret = test_client.post(
"/",
data={"Name": DEFAULT_SECRET_NAME, "SecretString": "foosecret"},
headers={"X-Amz-Target": "secretsmanager.CreateSecret"},
)
client_request_token = "EXAMPLE2-90ab-cdef-fedc-ba987SECRET2"
rotate_secret = test_client.post(
"/",
data={
"SecretId": DEFAULT_SECRET_NAME,
"ClientRequestToken": client_request_token,
},
headers={"X-Amz-Target": "secretsmanager.RotateSecret"},
)
json_data = json.loads(rotate_secret.data.decode("utf-8"))
assert json_data # Returned dict is not empty
assert json_data["ARN"] != ""
assert json_data["Name"] == DEFAULT_SECRET_NAME
assert json_data["VersionId"] == client_request_token
# @mock_secretsmanager
# def test_rotate_secret_enable_rotation():
# backend = server.create_backend_app('secretsmanager')
# test_client = backend.test_client()
# create_secret = test_client.post(
# '/',
# data={
# "Name": "test-secret",
# "SecretString": "foosecret"
# },
# headers={
# "X-Amz-Target": "secretsmanager.CreateSecret"
# },
# )
# initial_description = test_client.post(
# '/',
# data={
# "SecretId": "test-secret"
# },
# headers={
# "X-Amz-Target": "secretsmanager.DescribeSecret"
# },
# )
# json_data = json.loads(initial_description.data.decode("utf-8"))
# assert json_data # Returned dict is not empty
# assert json_data['RotationEnabled'] is False
# assert json_data['RotationRules']['AutomaticallyAfterDays'] == 0
# rotate_secret = test_client.post(
# '/',
# data={
# "SecretId": "test-secret",
# "RotationRules": {"AutomaticallyAfterDays": 42}
# },
# headers={
# "X-Amz-Target": "secretsmanager.RotateSecret"
# },
# )
# rotated_description = test_client.post(
# '/',
# data={
# "SecretId": "test-secret"
# },
# headers={
# "X-Amz-Target": "secretsmanager.DescribeSecret"
# },
# )
# json_data = json.loads(rotated_description.data.decode("utf-8"))
# assert json_data # Returned dict is not empty
# assert json_data['RotationEnabled'] is True
# assert json_data['RotationRules']['AutomaticallyAfterDays'] == 42
@mock_secretsmanager
def test_rotate_secret_that_does_not_exist():
backend = server.create_backend_app("secretsmanager")
test_client = backend.test_client()
rotate_secret = test_client.post(
"/",
data={"SecretId": "i-dont-exist"},
headers={"X-Amz-Target": "secretsmanager.RotateSecret"},
)
json_data = json.loads(rotate_secret.data.decode("utf-8"))
assert json_data["message"] == "Secrets Manager can't find the specified secret."
assert json_data["__type"] == "ResourceNotFoundException"
@mock_secretsmanager
def test_rotate_secret_that_does_not_match():
backend = server.create_backend_app("secretsmanager")
test_client = backend.test_client()
create_secret = test_client.post(
"/",
data={"Name": DEFAULT_SECRET_NAME, "SecretString": "foosecret"},
headers={"X-Amz-Target": "secretsmanager.CreateSecret"},
)
rotate_secret = test_client.post(
"/",
data={"SecretId": "i-dont-match"},
headers={"X-Amz-Target": "secretsmanager.RotateSecret"},
)
json_data = json.loads(rotate_secret.data.decode("utf-8"))
assert json_data["message"] == "Secrets Manager can't find the specified secret."
assert json_data["__type"] == "ResourceNotFoundException"
@mock_secretsmanager
def test_rotate_secret_client_request_token_too_short():
backend = server.create_backend_app("secretsmanager")
test_client = backend.test_client()
create_secret = test_client.post(
"/",
data={"Name": DEFAULT_SECRET_NAME, "SecretString": "foosecret"},
headers={"X-Amz-Target": "secretsmanager.CreateSecret"},
)
client_request_token = "ED9F8B6C-85B7-B7E4-38F2A3BEB13C"
rotate_secret = test_client.post(
"/",
data={
"SecretId": DEFAULT_SECRET_NAME,
"ClientRequestToken": client_request_token,
},
headers={"X-Amz-Target": "secretsmanager.RotateSecret"},
)
json_data = json.loads(rotate_secret.data.decode("utf-8"))
assert json_data["message"] == "ClientRequestToken must be 32-64 characters long."
assert json_data["__type"] == "InvalidParameterException"
@mock_secretsmanager
def test_rotate_secret_client_request_token_too_long():
backend = server.create_backend_app("secretsmanager")
test_client = backend.test_client()
create_secret = test_client.post(
"/",
data={"Name": DEFAULT_SECRET_NAME, "SecretString": "foosecret"},
headers={"X-Amz-Target": "secretsmanager.CreateSecret"},
)
client_request_token = (
"ED9F8B6C-85B7-446A-B7E4-38F2A3BEB13C-" "ED9F8B6C-85B7-446A-B7E4-38F2A3BEB13C"
)
rotate_secret = test_client.post(
"/",
data={
"SecretId": DEFAULT_SECRET_NAME,
"ClientRequestToken": client_request_token,
},
headers={"X-Amz-Target": "secretsmanager.RotateSecret"},
)
json_data = json.loads(rotate_secret.data.decode("utf-8"))
assert json_data["message"] == "ClientRequestToken must be 32-64 characters long."
assert json_data["__type"] == "InvalidParameterException"
@mock_secretsmanager
def test_rotate_secret_rotation_lambda_arn_too_long():
backend = server.create_backend_app("secretsmanager")
test_client = backend.test_client()
create_secret = test_client.post(
"/",
data={"Name": DEFAULT_SECRET_NAME, "SecretString": "foosecret"},
headers={"X-Amz-Target": "secretsmanager.CreateSecret"},
)
rotation_lambda_arn = "85B7-446A-B7E4" * 147 # == 2058 characters
rotate_secret = test_client.post(
"/",
data={
"SecretId": DEFAULT_SECRET_NAME,
"RotationLambdaARN": rotation_lambda_arn,
},
headers={"X-Amz-Target": "secretsmanager.RotateSecret"},
)
json_data = json.loads(rotate_secret.data.decode("utf-8"))
assert json_data["message"] == "RotationLambdaARN must <= 2048 characters long."
assert json_data["__type"] == "InvalidParameterException"
@mock_secretsmanager
def test_put_secret_value_puts_new_secret():
backend = server.create_backend_app("secretsmanager")
test_client = backend.test_client()
test_client.post(
"/",
data={
"SecretId": DEFAULT_SECRET_NAME,
"SecretString": "foosecret",
"VersionStages": ["AWSCURRENT"],
},
headers={"X-Amz-Target": "secretsmanager.PutSecretValue"},
)
put_second_secret_value_json = test_client.post(
"/",
data={
"SecretId": DEFAULT_SECRET_NAME,
"SecretString": "foosecret",
"VersionStages": ["AWSCURRENT"],
},
headers={"X-Amz-Target": "secretsmanager.PutSecretValue"},
)
second_secret_json_data = json.loads(
put_second_secret_value_json.data.decode("utf-8")
)
version_id = second_secret_json_data["VersionId"]
secret_value_json = test_client.post(
"/",
data={
"SecretId": DEFAULT_SECRET_NAME,
"VersionId": version_id,
"VersionStage": "AWSCURRENT",
},
headers={"X-Amz-Target": "secretsmanager.GetSecretValue"},
)
second_secret_json_data = json.loads(secret_value_json.data.decode("utf-8"))
assert second_secret_json_data
assert second_secret_json_data["SecretString"] == "foosecret"
@mock_secretsmanager
def test_put_secret_value_can_get_first_version_if_put_twice():
backend = server.create_backend_app("secretsmanager")
test_client = backend.test_client()
first_secret_string = "first_secret"
second_secret_string = "second_secret"
put_first_secret_value_json = test_client.post(
"/",
data={
"SecretId": DEFAULT_SECRET_NAME,
"SecretString": first_secret_string,
"VersionStages": ["AWSCURRENT"],
},
headers={"X-Amz-Target": "secretsmanager.PutSecretValue"},
)
first_secret_json_data = json.loads(
put_first_secret_value_json.data.decode("utf-8")
)
first_secret_version_id = first_secret_json_data["VersionId"]
test_client.post(
"/",
data={
"SecretId": DEFAULT_SECRET_NAME,
"SecretString": second_secret_string,
"VersionStages": ["AWSCURRENT"],
},
headers={"X-Amz-Target": "secretsmanager.PutSecretValue"},
)
get_first_secret_value_json = test_client.post(
"/",
data={
"SecretId": DEFAULT_SECRET_NAME,
"VersionId": first_secret_version_id,
"VersionStage": "AWSCURRENT",
},
headers={"X-Amz-Target": "secretsmanager.GetSecretValue"},
)
get_first_secret_json_data = json.loads(
get_first_secret_value_json.data.decode("utf-8")
)
assert get_first_secret_json_data
assert get_first_secret_json_data["SecretString"] == first_secret_string
@mock_secretsmanager
def test_put_secret_value_versions_differ_if_same_secret_put_twice():
backend = server.create_backend_app("secretsmanager")
test_client = backend.test_client()
put_first_secret_value_json = test_client.post(
"/",
data={
"SecretId": DEFAULT_SECRET_NAME,
"SecretString": "secret",
"VersionStages": ["AWSCURRENT"],
},
headers={"X-Amz-Target": "secretsmanager.PutSecretValue"},
)
first_secret_json_data = json.loads(
put_first_secret_value_json.data.decode("utf-8")
)
first_secret_version_id = first_secret_json_data["VersionId"]
put_second_secret_value_json = test_client.post(
"/",
data={
"SecretId": DEFAULT_SECRET_NAME,
"SecretString": "secret",
"VersionStages": ["AWSCURRENT"],
},
headers={"X-Amz-Target": "secretsmanager.PutSecretValue"},
)
second_secret_json_data = json.loads(
put_second_secret_value_json.data.decode("utf-8")
)
second_secret_version_id = second_secret_json_data["VersionId"]
assert first_secret_version_id != second_secret_version_id
@mock_secretsmanager
def test_can_list_secret_version_ids():
backend = server.create_backend_app("secretsmanager")
test_client = backend.test_client()
put_first_secret_value_json = test_client.post(
"/",
data={
"SecretId": DEFAULT_SECRET_NAME,
"SecretString": "secret",
"VersionStages": ["AWSCURRENT"],
},
headers={"X-Amz-Target": "secretsmanager.PutSecretValue"},
)
first_secret_json_data = json.loads(
put_first_secret_value_json.data.decode("utf-8")
)
first_secret_version_id = first_secret_json_data["VersionId"]
put_second_secret_value_json = test_client.post(
"/",
data={
"SecretId": DEFAULT_SECRET_NAME,
"SecretString": "secret",
"VersionStages": ["AWSCURRENT"],
},
headers={"X-Amz-Target": "secretsmanager.PutSecretValue"},
)
second_secret_json_data = json.loads(
put_second_secret_value_json.data.decode("utf-8")
)
second_secret_version_id = second_secret_json_data["VersionId"]
list_secret_versions_json = test_client.post(
"/",
data={"SecretId": DEFAULT_SECRET_NAME},
headers={"X-Amz-Target": "secretsmanager.ListSecretVersionIds"},
)
versions_list = json.loads(list_secret_versions_json.data.decode("utf-8"))
returned_version_ids = [v["VersionId"] for v in versions_list["Versions"]]
assert [
first_secret_version_id,
second_secret_version_id,
].sort() == returned_version_ids.sort()
@mock_secretsmanager
def test_get_resource_policy_secret():
backend = server.create_backend_app("secretsmanager")
test_client = backend.test_client()
create_secret = test_client.post(
"/",
data={"Name": "test-secret", "SecretString": "foosecret"},
headers={"X-Amz-Target": "secretsmanager.CreateSecret"},
)
describe_secret = test_client.post(
"/",
data={"SecretId": "test-secret"},
headers={"X-Amz-Target": "secretsmanager.GetResourcePolicy"},
)
json_data = json.loads(describe_secret.data.decode("utf-8"))
assert json_data # Returned dict is not empty
assert json_data["ARN"] != ""
assert json_data["Name"] == "test-secret"
#
# The following tests should work, but fail on the embedded dict in
# RotationRules. The error message suggests a problem deeper in the code, which
# needs further investigation.
#
# @mock_secretsmanager
# def test_rotate_secret_rotation_period_zero():
# backend = server.create_backend_app('secretsmanager')
# test_client = backend.test_client()
# create_secret = test_client.post('/',
# data={"Name": "test-secret",
# "SecretString": "foosecret"},
# headers={
# "X-Amz-Target": "secretsmanager.CreateSecret"
# },
# )
# rotate_secret = test_client.post('/',
# data={"SecretId": "test-secret",
# "RotationRules": {"AutomaticallyAfterDays": 0}},
# headers={
# "X-Amz-Target": "secretsmanager.RotateSecret"
# },
# )
# json_data = json.loads(rotate_secret.data.decode("utf-8"))
# assert json_data['message'] == "RotationRules.AutomaticallyAfterDays must be within 1-1000."
# assert json_data['__type'] == 'InvalidParameterException'
# @mock_secretsmanager
# def test_rotate_secret_rotation_period_too_long():
# backend = server.create_backend_app('secretsmanager')
# test_client = backend.test_client()
# create_secret = test_client.post('/',
# data={"Name": "test-secret",
# "SecretString": "foosecret"},
# headers={
# "X-Amz-Target": "secretsmanager.CreateSecret"
# },
# )
# rotate_secret = test_client.post('/',
# data={"SecretId": "test-secret",
# "RotationRules": {"AutomaticallyAfterDays": 1001}},
# headers={
# "X-Amz-Target": "secretsmanager.RotateSecret"
# },
# )
# json_data = json.loads(rotate_secret.data.decode("utf-8"))
# assert json_data['message'] == "RotationRules.AutomaticallyAfterDays must be within 1-1000."
# assert json_data['__type'] == 'InvalidParameterException'
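Every test above posts to `"/"` and selects the operation with the `X-Amz-Target` header, which is how the AWS JSON protocol dispatches Secrets Manager calls. A toy illustration of that dispatch pattern (illustrative only, not moto's actual implementation; the error message strings are copied from the assertions above):

```python
# Toy router mimicking the X-Amz-Target dispatch exercised by the tests.
def make_router():
    secrets = {}  # in-memory stand-in for the secret store

    def create_secret(body):
        secrets[body["Name"]] = body.get("SecretString")
        return {"Name": body["Name"], "ARN": "arn:aws:secretsmanager:::secret:" + body["Name"]}

    def get_secret_value(body):
        if body["SecretId"] not in secrets:
            return {"__type": "ResourceNotFoundException",
                    "message": "Secrets Manager can't find the specified secret."}
        return {"SecretString": secrets[body["SecretId"]]}

    targets = {
        "secretsmanager.CreateSecret": create_secret,
        "secretsmanager.GetSecretValue": get_secret_value,
    }

    def handle(target, body):
        # The real protocol routes on the X-Amz-Target header value.
        return targets[target](body)

    return handle

handle = make_router()
handle("secretsmanager.CreateSecret", {"Name": "test-secret", "SecretString": "foo-secret"})
print(handle("secretsmanager.GetSecretValue", {"SecretId": "test-secret"}))
```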
| 33.571214 | 98 | 0.618837 | 2,264 | 22,392 | 5.814929 | 0.074647 | 0.058337 | 0.051044 | 0.065629 | 0.91447 | 0.895329 | 0.880592 | 0.859704 | 0.841246 | 0.805849 | 0 | 0.008943 | 0.250938 | 22,392 | 666 | 99 | 33.621622 | 0.775949 | 0.205341 | 0 | 0.67426 | 0 | 0 | 0.260709 | 0.086184 | 0 | 0 | 0 | 0 | 0.100228 | 1 | 0.04328 | false | 0 | 0.01139 | 0 | 0.05467 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
81cf9108b2105c09db2d3a4a51f169415f02a36b | 159 | py | Python | route/exceptions.py | dubovinszky/route-calculator | e2da6e351a25fcf4ebf98dc05b1d651ed291b7e8 | [
"MIT"
] | null | null | null | route/exceptions.py | dubovinszky/route-calculator | e2da6e351a25fcf4ebf98dc05b1d651ed291b7e8 | [
"MIT"
] | null | null | null | route/exceptions.py | dubovinszky/route-calculator | e2da6e351a25fcf4ebf98dc05b1d651ed291b7e8 | [
"MIT"
] | null | null | null |
class InvalidKMLNoLineString(Exception):
    pass


class AltitudeModeNotImplemented(Exception):
    pass


class InvalidKMLNoCoordinates(Exception):
    pass
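A sketch of how exception classes like these might be raised and caught by a caller; the classes are mirrored here so the snippet is self-contained, and the `validate_placemark` checker is hypothetical (the real KML parsing is not shown in this file):

```python
# Self-contained sketch: two of the exception classes above, plus a
# hypothetical validator that raises them when KML pieces are missing.
class InvalidKMLNoLineString(Exception):
    pass


class InvalidKMLNoCoordinates(Exception):
    pass


def validate_placemark(placemark):
    """Raise a route exception when a required KML piece is missing."""
    if "LineString" not in placemark:
        raise InvalidKMLNoLineString("placemark has no LineString")
    if not placemark["LineString"].get("coordinates"):
        raise InvalidKMLNoCoordinates("LineString has no coordinates")
    return True


assert validate_placemark({"LineString": {"coordinates": [(0.0, 0.0, 10.0)]}})
```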
| 14.454545 | 44 | 0.786164 | 12 | 159 | 10.416667 | 0.5 | 0.312 | 0.288 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.157233 | 159 | 10 | 45 | 15.9 | 0.932836 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 0 | 1 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
c4c0ccf54f06363d6d5831c434243b0e79870289 | 96 | py | Python | learnosity_sdk/utils/lrnuuid.py | ttton/learnosity-sdk-python | b3e424db67f413e8e62de79f78242d73c195dbe9 | [
"Apache-2.0"
] | 5 | 2015-05-26T22:14:10.000Z | 2020-01-13T14:35:57.000Z | learnosity_sdk/utils/lrnuuid.py | ttton/learnosity-sdk-python | b3e424db67f413e8e62de79f78242d73c195dbe9 | [
"Apache-2.0"
] | 47 | 2015-10-06T16:50:38.000Z | 2022-02-01T05:21:25.000Z | learnosity_sdk/utils/lrnuuid.py | ttton/learnosity-sdk-python | b3e424db67f413e8e62de79f78242d73c195dbe9 | [
"Apache-2.0"
] | 4 | 2016-07-22T18:37:11.000Z | 2020-07-04T22:11:07.000Z |
import uuid


class Uuid:
    @staticmethod
    def generate():
        return str(uuid.uuid4())
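A usage sketch of the helper above: `uuid.uuid4()` produces a random RFC 4122 UUID, and `str()` renders it in the canonical 36-character hyphenated form, with the version digit `4` at a fixed position.

```python
import uuid


# Mirrors the Uuid helper above so this snippet runs standalone.
class Uuid:
    @staticmethod
    def generate():
        return str(uuid.uuid4())


uid = Uuid.generate()
print(uid)  # e.g. '3f0e8c6a-....-4...-....-............' (random each call)
```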
| 13.714286 | 32 | 0.625 | 11 | 96 | 5.454545 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014286 | 0.270833 | 96 | 6 | 33 | 16 | 0.842857 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | true | 0 | 0.2 | 0.2 | 0.8 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
c4cc7d9607e4f2a27b7a366e1404000a80c07aa2 | 130 | py | Python | TagScriptEngine/interface/__init__.py | Scuffed-Guard/TagScript | e0647fa6d2b830e1f875ff643793b420118512ff | [
"CC-BY-4.0"
] | 9 | 2021-03-12T19:52:15.000Z | 2022-01-23T11:50:32.000Z | TagScriptEngine/interface/__init__.py | Scuffed-Guard/TagScript | e0647fa6d2b830e1f875ff643793b420118512ff | [
"CC-BY-4.0"
] | 7 | 2021-03-19T05:15:31.000Z | 2021-07-03T10:24:49.000Z | TagScriptEngine/interface/__init__.py | Scuffed-Guard/TagScript | e0647fa6d2b830e1f875ff643793b420118512ff | [
"CC-BY-4.0"
] | 15 | 2021-03-08T01:17:01.000Z | 2022-03-21T09:47:42.000Z |
from .adapter import Adapter
from .block import Block, verb_required_block
__all__ = ("Adapter", "Block", "verb_required_block")
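The `__all__` tuple above is what limits `from TagScriptEngine.interface import *` to exactly those three names. A standalone demonstration of that mechanism using a throwaway in-memory module (`demo_mod` is invented for the example):

```python
import sys
import types

# Build a throwaway module with a private name and an __all__ list,
# then star-import from it: only the names in __all__ come through.
mod = types.ModuleType("demo_mod")
exec(
    "Adapter = 'adapter'\nBlock = 'block'\n_private = 'hidden'\n"
    "__all__ = ('Adapter', 'Block')",
    mod.__dict__,
)
sys.modules["demo_mod"] = mod

ns = {}
exec("from demo_mod import *", ns)
print(sorted(k for k in ns if not k.startswith("__")))  # ['Adapter', 'Block']
```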
| 26 | 53 | 0.776923 | 17 | 130 | 5.470588 | 0.411765 | 0.193548 | 0.365591 | 0.473118 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115385 | 130 | 4 | 54 | 32.5 | 0.808696 | 0 | 0 | 0 | 0 | 0 | 0.238462 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c4cf62c72fd106ac08878ee39e1de113905b5a80 | 1,054 | py | Python | tests/acceptance/test_invalid_schema_files.py | python-jsonschema/check-jsonschema | aec38b3993d23de767a9c53f79825bbee8b4e45e | [
"Apache-2.0"
] | 3 | 2022-03-02T17:41:42.000Z | 2022-03-18T00:17:33.000Z | tests/acceptance/test_invalid_schema_files.py | python-jsonschema/check-jsonschema | aec38b3993d23de767a9c53f79825bbee8b4e45e | [
"Apache-2.0"
] | 5 | 2022-03-15T11:16:00.000Z | 2022-03-30T14:20:17.000Z | tests/acceptance/test_invalid_schema_files.py | python-jsonschema/check-jsonschema | aec38b3993d23de767a9c53f79825bbee8b4e45e | [
"Apache-2.0"
] | 2 | 2022-03-16T02:56:43.000Z | 2022-03-30T09:35:32.000Z |
def test_checker_non_json_schemafile(run_line, tmp_path):
foo = tmp_path / "foo.json"
bar = tmp_path / "bar.json"
foo.write_text("{")
bar.write_text("{}")
res = run_line(["check-jsonschema", "--schemafile", str(foo), str(bar)])
assert res.exit_code == 1
assert "schemafile could not be parsed" in res.stderr
def test_checker_invalid_schemafile(run_line, tmp_path):
foo = tmp_path / "foo.json"
bar = tmp_path / "bar.json"
foo.write_text('{"title": {"foo": "bar"}}')
bar.write_text("{}")
res = run_line(["check-jsonschema", "--schemafile", str(foo), str(bar)])
assert res.exit_code == 1
assert "schemafile was not valid" in res.stderr
def test_checker_invalid_schemafile_scheme(run_line, tmp_path):
foo = tmp_path / "foo.json"
bar = tmp_path / "bar.json"
foo.write_text('{"title": "foo"}')
bar.write_text("{}")
res = run_line(["check-jsonschema", "--schemafile", f"ftp://{foo}", str(bar)])
assert res.exit_code == 1
assert "only supports http, https" in res.stderr
| 32.9375 | 82 | 0.648956 | 153 | 1,054 | 4.24183 | 0.254902 | 0.097072 | 0.09245 | 0.064715 | 0.861325 | 0.861325 | 0.861325 | 0.861325 | 0.747304 | 0.628659 | 0 | 0.003476 | 0.181214 | 1,054 | 31 | 83 | 34 | 0.748552 | 0 | 0 | 0.583333 | 0 | 0 | 0.256167 | 0 | 0 | 0 | 0 | 0 | 0.25 | 1 | 0.125 | false | 0 | 0 | 0 | 0.125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c4f1e07945320be2533962e83482a905aa91cf89 | 4,914 | py | Python | tests/bio_based/test_SMA.py | rishavpramanik/mealpy | d4a4d5810f15837764e4ee61517350fef3dc92b3 | [
"MIT"
] | null | null | null | tests/bio_based/test_SMA.py | rishavpramanik/mealpy | d4a4d5810f15837764e4ee61517350fef3dc92b3 | [
"MIT"
] | null | null | null | tests/bio_based/test_SMA.py | rishavpramanik/mealpy | d4a4d5810f15837764e4ee61517350fef3dc92b3 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# Created by "Thieu" at 00:15, 17/03/2022 ----------%
#       Email: nguyenthieu2102@gmail.com            %
#       Github: https://github.com/thieu1995        %
# --------------------------------------------------%

from mealpy.bio_based import SMA
from mealpy.optimizer import Optimizer
import numpy as np
import pytest


@pytest.fixture(scope="module")  # scope: Call only 1 time at the beginning
def problem():
    def fitness_function(solution):
        return np.sum(solution ** 2)

    problem = {
        "fit_func": fitness_function,
        "lb": [-10, -10, -10, -10, -10],
        "ub": [10, 10, 10, 10, 10],
        "minmax": "min",
    }
    return problem


def test_OriginalSMA_results(problem):
    epoch = 10
    pop_size = 50
    p_t = 0.05
    model = SMA.OriginalSMA(problem, epoch, pop_size, p_t)
    best_position, best_fitness = model.solve()
    assert isinstance(model, Optimizer)
    assert isinstance(best_position, np.ndarray)
    assert len(best_position) == len(problem["lb"])


def test_BaseSMA_results(problem):
    epoch = 10
    pop_size = 50
    p_t = 0.05
    model = SMA.BaseSMA(problem, epoch, pop_size, p_t)
    best_position, best_fitness = model.solve()
    assert isinstance(model, Optimizer)
    assert isinstance(best_position, np.ndarray)
    assert len(best_position) == len(problem["lb"])


@pytest.mark.parametrize("problem, epoch, system_code",
                         [
                             (problem, None, 0),
                             (problem, "hello", 0),
                             (problem, -10, 0),
                             (problem, [10], 0),
                             (problem, (0, 9), 0),
                             (problem, 0, 0),
                             (problem, float("inf"), 0),
                         ])
def test_epoch_SMA(problem, epoch, system_code):
    pop_size = 50
    algorithms = [SMA.OriginalSMA, SMA.BaseSMA]
    for algorithm in algorithms:
        with pytest.raises(SystemExit) as e:
            model = algorithm(problem, epoch, pop_size)
        assert e.type == SystemExit
        assert e.value.code == system_code


@pytest.mark.parametrize("problem, pop_size, system_code",
                         [
                             (problem, None, 0),
                             (problem, "hello", 0),
                             (problem, -10, 0),
                             (problem, [10], 0),
                             (problem, (0, 9), 0),
                             (problem, 0, 0),
                             (problem, float("inf"), 0),
                         ])
def test_pop_size_SMA(problem, pop_size, system_code):
    epoch = 10
    algorithms = [SMA.OriginalSMA, SMA.BaseSMA]
    for algorithm in algorithms:
        with pytest.raises(SystemExit) as e:
            model = algorithm(problem, epoch, pop_size)
        assert e.type == SystemExit
        assert e.value.code == system_code


@pytest.mark.parametrize("problem, alpha, system_code",
                         [
                             (problem, None, 0),
                             (problem, "hello", 0),
                             (problem, -1.0, 0),
                             (problem, [10], 0),
                             (problem, (0, 9), 0),
                             (problem, 0, 0),
                             (problem, 1, 0),
                             (problem, 1.1, 0),
                             (problem, -0.01, 0),
                         ])
def test_alpha_SMA(problem, alpha, system_code):
    epoch = 10
    pop_size = 50
    algorithms = [SMA.OriginalSMA, SMA.BaseSMA]
    for algorithm in algorithms:
        with pytest.raises(SystemExit) as e:
            model = algorithm(problem, epoch, pop_size, alpha=alpha)
        assert e.type == SystemExit
        assert e.value.code == system_code


@pytest.mark.parametrize("problem, p_t, system_code",
                         [
                             (problem, None, 0),
                             (problem, "hello", 0),
                             (problem, -1.0, 0),
                             (problem, [10], 0),
                             (problem, (0, 9), 0),
                             (problem, 0, 0),
                             (problem, 1, 0),
                             (problem, 1.1, 0),
                             (problem, -0.01, 0),
                         ])
def test_p_t_SMA(problem, p_t, system_code):
    epoch = 10
    pop_size = 50
    algorithms = [SMA.OriginalSMA, SMA.BaseSMA]
    for algorithm in algorithms:
        with pytest.raises(SystemExit) as e:
            model = algorithm(problem, epoch, pop_size, p_t=p_t)
        assert e.type == SystemExit
        assert e.value.code == system_code
| 36.947368 | 132 | 0.462556 | 494 | 4,914 | 4.481781 | 0.194332 | 0.101174 | 0.04065 | 0.051491 | 0.761969 | 0.733966 | 0.733062 | 0.733062 | 0.733062 | 0.733062 | 0 | 0.048764 | 0.415751 | 4,914 | 132 | 133 | 37.227273 | 0.722396 | 0.086488 | 0 | 0.702703 | 0 | 0 | 0.037029 | 0 | 0 | 0 | 0 | 0 | 0.126126 | 1 | 0.072072 | false | 0 | 0.036036 | 0.009009 | 0.126126 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c4f97d08d10b6c15258673e264a1b0d917f6e95d | 20 | py | Python | synbio/codes/__init__.py | jecalles/synbio | 8fb89c32166be1ce8b7ec47e1ffd69e5f04a054d | [
"MIT"
] | 1 | 2021-09-01T08:28:19.000Z | 2021-09-01T08:28:19.000Z | __init__.py | thundergoth/Lindbladdynamics | b37c9927223dff1827b2b833afab416b6a5bda19 | [
"MIT"
] | 2 | 2021-02-01T16:31:22.000Z | 2021-05-05T13:44:43.000Z | gemmforge/constructs/__init__.py | ravil-mobile/gemmforge | 6381584c2d1ce77eaa938de02bc4f130f19cb2e4 | [
"MIT"
] | null | null | null | from .code import *
| 10 | 19 | 0.7 | 3 | 20 | 4.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 20 | 1 | 20 | 20 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6fc1a41d5594dabefc10de48581957cb96a14e0a | 28,581 | py | Python | tests/test_scheduling.py | iml130/lotlan-scheduler | b576f853706d614a918dccd9572cc2c2b666bbe4 | [
"Apache-2.0"
] | null | null | null | tests/test_scheduling.py | iml130/lotlan-scheduler | b576f853706d614a918dccd9572cc2c2b666bbe4 | [
"Apache-2.0"
] | null | null | null | tests/test_scheduling.py | iml130/lotlan-scheduler | b576f853706d614a918dccd9572cc2c2b666bbe4 | [
"Apache-2.0"
] | null | null | null | """ Contains tests for the scheduling process """
# standard libraries
import sys
import os
from os import walk
from os.path import splitext, join
import subprocess as su
import unittest
# 3rd party packages
import xmlrunner
from snakes.nets import Marking, MultiSet
sys.path.append(os.path.abspath("../lotlan_scheduler"))
# local sources
import lotlan_scheduler.helpers as helpers
from lotlan_scheduler.api.event import Event
from lotlan_scheduler.scheduler import LotlanScheduler
# uninstall possible old lotlan_scheduler packages
# so current code is used not old one
os.system("pip3 uninstall lotlan_scheduler")
class TestScheduling(unittest.TestCase):
""" Tests the whole scheduling process """
@classmethod
def setUpClass(cls):
lotlan_logic = {}
material_flows = {}
file_names = sorted(helpers.get_lotlan_file_names("etc/examples/Scheduling/"))
for i, file_name in enumerate(file_names):
f = open(file_name, "r")
lotlan_logic[i] = LotlanScheduler(f.read(), True)
material_flows[i] = lotlan_logic[i].get_materialflows()
for material_flow in material_flows[i]:
material_flow.start()
f.close()
cls.lotlan_logic = lotlan_logic
cls.material_flows = material_flows
def run_transport_order_steps(self, task_uuid, material_flow):
material_flow.fire_event(task_uuid, Event("moved_to_location", "", "Boolean", value=True))
material_flow.fire_event(task_uuid, Event("moved_to_location", "", "Boolean", value=True))
def test_simple_task(self):
material_flows = self.get_material_flows(0)
self.assertEqual(len(material_flows), 1)
material_flow = material_flows[0]
petri_net_logic = material_flow.petri_net_logic
petri_net = petri_net_logic.get_petri_nets()[0]
pickup_net = petri_net_logic.get_pickup_nets()[0]
delivery_net = petri_net_logic.get_delivery_nets()[0]
material_flow.fire_event("0",
Event("task_finished", "",
"Boolean", value=True)) # should not be allowed
task_initial_marking = Marking(task_started=MultiSet([1]))
pickup_initial_marking = Marking(tos_started=MultiSet([1]))
delivery_initial_marking = Marking()
self.assertEqual(petri_net.get_marking(), task_initial_marking)
self.assertEqual(pickup_net.get_marking(), pickup_initial_marking)
self.assertEqual(delivery_net.get_marking(), delivery_initial_marking)
material_flow.fire_event("0", Event("to_done", "", "Boolean", value=True))
self.assertEqual(petri_net.get_marking(), task_initial_marking)
material_flow.fire_event("0", Event("moved_to_location", "", "Boolean", value=True))
tos_finished_marking = Marking(tos_finished=MultiSet([1]))
self.assertEqual(petri_net.get_marking(), task_initial_marking)
self.assertEqual(pickup_net.get_marking(), tos_finished_marking)
self.assertEqual(delivery_net.get_marking(), pickup_initial_marking)
material_flow.fire_event("0", Event("moved_to_location", "", "Boolean", value=True))
task_finished_marking = Marking(task_finished=MultiSet([1]))
self.assertEqual(petri_net.get_marking(), task_finished_marking)
self.assertEqual(pickup_net.get_marking(), tos_finished_marking)
self.assertEqual(delivery_net.get_marking(), tos_finished_marking)
def test_triggered_by(self):
material_flows = self.get_material_flows(1)
self.assertEqual(len(material_flows), 1)
material_flow = material_flows[0]
petri_net = material_flow.petri_net_logic.get_petri_nets()[0]
initial_marking = Marking()
self.assertEqual(petri_net.get_marking(), initial_marking)
material_flow.fire_event("0", Event("buttonPressed", "", "Boolean", value=False))
self.assertEqual(petri_net.get_marking(), initial_marking)
material_flow.fire_event("0", Event("buttonPressed", "", "Boolean", value=True))
marking_after_triggered_by_passed = Marking(task_started=MultiSet([1]))
self.assertEqual(petri_net.get_marking(), marking_after_triggered_by_passed)
self.run_transport_order_steps("0", material_flow)
finished_marking = Marking(task_finished=MultiSet([1]))
self.assertEqual(petri_net.get_marking(), finished_marking)
def test_finished_by(self):
material_flows = self.get_material_flows(2)
self.assertEqual(len(material_flows), 1)
material_flow = material_flows[0]
petri_net = material_flow.petri_net_logic.get_petri_nets()[0]
initial_marking = Marking(task_started=MultiSet([1]))
self.assertEqual(petri_net.get_marking(), initial_marking)
marking_after_to_done = Marking(task_done=MultiSet([1]))
self.run_transport_order_steps("0", material_flow)
material_flow.fire_event("0", Event("buttonPressed", "", "Boolean", value=False))
self.assertEqual(petri_net.get_marking(), marking_after_to_done)
material_flow.fire_event("0", Event("buttonPressed", "", "Boolean", value=True))
marking_after_finished_by = Marking(task_finished=MultiSet([1]))
self.assertEqual(petri_net.get_marking(), marking_after_finished_by)
def test_trb_fb(self):
material_flows = self.get_material_flows(3)
self.assertEqual(len(material_flows), 1)
material_flow = material_flows[0]
petri_net = material_flow.petri_net_logic.get_petri_nets()[0]
initial_marking = Marking()
self.assertEqual(petri_net.get_marking(), initial_marking)
material_flow.fire_event("0", Event("buttonPressed", "", "Boolean", value=True))
marking_after_triggered_by_passed = Marking(task_started=MultiSet([1]))
self.assertEqual(petri_net.get_marking(), marking_after_triggered_by_passed)
self.run_transport_order_steps("0", material_flow)
material_flow.fire_event("0", Event("buttonPressed", "", "Boolean", value=True)) # should not be allowed
material_flow.fire_event("0", Event("buttonPressed2", "", "Boolean", value=True))
marking_after_finished_by = Marking(task_finished=MultiSet([1]))
self.assertEqual(petri_net.get_marking(), marking_after_finished_by)
def test_self_loop(self):
material_flows = self.get_material_flows(4)
self.assertEqual(len(material_flows), 1)
material_flow = material_flows[0]
petri_net = material_flow.petri_net_logic.get_petri_nets()[0]
initial_marking = Marking(task_started=MultiSet([1]))
self.assertEqual(petri_net.get_marking(), initial_marking)
self.run_transport_order_steps("0", material_flow)
finished_marking = Marking(task_finished=MultiSet([1]), task_started=MultiSet([1]))
self.assertEqual(petri_net.get_marking(), finished_marking)
self.run_transport_order_steps("0", material_flow)
second_iteration_marking = Marking(task_finished=MultiSet([1, 1]), task_started=MultiSet([1]))
self.assertEqual(petri_net.get_marking(), second_iteration_marking)
self.run_transport_order_steps("0", material_flow)
third_iteration_marking = Marking(task_finished=MultiSet([1, 1, 1]), task_started=MultiSet([1]))
self.assertEqual(petri_net.get_marking(), third_iteration_marking)
def test_two_tasks(self):
material_flows = self.get_material_flows(5)
self.assertEqual(len(material_flows), 2)
material_flow_1 = material_flows[0]
material_flow_2 = material_flows[1]
petri_net_m1 = material_flow_1.petri_net_logic.get_petri_nets()[0]
petri_net_m2 = material_flow_2.petri_net_logic.get_petri_nets()[0]
initial_marking_m1 = initial_marking_m2 = Marking(task_started=MultiSet([1]))
self.assertEqual(petri_net_m1.get_marking(), initial_marking_m1)
self.assertEqual(petri_net_m2.get_marking(), initial_marking_m2)
self.run_transport_order_steps("0", material_flow_1)
self.run_transport_order_steps("0", material_flow_2)
final_marking_m1 = final_marking_m2 = Marking(task_finished=MultiSet([1]))
self.assertEqual(petri_net_m1.get_marking(), final_marking_m1)
self.assertEqual(petri_net_m2.get_marking(), final_marking_m2)
def test_triggered_by_and(self):
material_flows = self.get_material_flows(6)
self.assertEqual(len(material_flows), 1)
material_flow = material_flows[0]
petri_net = material_flow.petri_net_logic.get_petri_nets()[0]
initial_marking = Marking()
self.assertEqual(petri_net.get_marking(), initial_marking)
material_flow.fire_event("0", Event("buttonPressed", "", "Boolean", value=True))
material_flow.fire_event("0", Event("buttonPressed2", "", "Boolean", value=True))
marking_after_both_true = Marking(buttonPressed_0=MultiSet([1]), buttonPressed2_1=MultiSet([1]))
# check if evaluation of negation works correctly
self.assertEqual(petri_net.get_marking(), marking_after_both_true)
material_flow.fire_event("0", Event("buttonPressed", "", "Boolean", value=False))
marking_after_triggered_by_passed = Marking(task_started=MultiSet([1]))
self.assertEqual(petri_net.get_marking(),marking_after_triggered_by_passed)
self.run_transport_order_steps("0", material_flow)
finished_marking = Marking(task_finished=MultiSet([1]))
self.assertEqual(petri_net.get_marking(), finished_marking)
def test_triggered_by_or(self):
material_flows = self.get_material_flows(7)
self.assertEqual(len(material_flows), 1)
material_flow = material_flows[0]
petri_net = material_flow.petri_net_logic.get_petri_nets()[0]
initial_marking = Marking()
self.assertEqual(petri_net.get_marking(), initial_marking)
material_flow.fire_event("0", Event("buttonPressed2", "", "Boolean", value=True))
marking_after_triggered_by_passed = Marking(task_started=MultiSet([1]))
self.assertEqual(petri_net.get_marking(), marking_after_triggered_by_passed)
self.run_transport_order_steps("0", material_flow)
finished_marking = Marking(task_finished=MultiSet([1]))
self.assertEqual(petri_net.get_marking(), finished_marking)
def test_triggered_by_xor(self):
material_flows = self.get_material_flows(8)
self.assertEqual(len(material_flows), 1)
material_flow = material_flows[0]
petri_net = material_flow.petri_net_logic.get_petri_nets()[0]
initial_marking = Marking()
self.assertEqual(petri_net.get_marking(), initial_marking)
material_flow.fire_event("0", Event("buttonPressed", "", "Boolean", value=True))
marking_after_bp_true = Marking(buttonPressed_0=MultiSet([1]), buttonPressed_2=MultiSet([1]))
self.assertEqual(petri_net.get_marking(), marking_after_bp_true)
material_flow.fire_event("0", Event("buttonPressed2", "", "Boolean", value=False))
marking_after_tb_passed = Marking(task_started=MultiSet([1]))
self.assertEqual(petri_net.get_marking(), marking_after_tb_passed)
self.run_transport_order_steps("0", material_flow)
finished_marking = Marking(task_finished=MultiSet([1]))
self.assertEqual(petri_net.get_marking(), finished_marking)
def test_self_loop_trb_fb(self):
material_flows = self.get_material_flows(9)
self.assertEqual(len(material_flows), 1)
material_flow = material_flows[0]
petri_net = material_flow.petri_net_logic.get_petri_nets()[0]
initial_marking = Marking()
self.assertEqual(petri_net.get_marking(), initial_marking)
material_flow.fire_event("0", Event("buttonPressed", "", "Boolean", value=True))
marking_after_triggered_by_passed = Marking(task_started=MultiSet([1]))
self.assertEqual(petri_net.get_marking(), marking_after_triggered_by_passed)
self.run_transport_order_steps("0", material_flow)
finished_marking = Marking(task_done=MultiSet([1]))
self.assertEqual(petri_net.get_marking(), finished_marking)
material_flow.fire_event("0", Event("buttonPressed2", "", "Boolean", value=True))
marking_after_finished_by = Marking(task_finished=MultiSet([1]))
self.assertEqual(petri_net.get_marking(), marking_after_finished_by)
material_flow.fire_event("0", Event("buttonPressed", "", "Boolean", value=True))
marking_after_triggered_by_passed_2 = Marking(task_started=MultiSet([1]),
task_finished=MultiSet([1]))
self.assertEqual(petri_net.get_marking(),marking_after_triggered_by_passed_2)
self.run_transport_order_steps("0", material_flow)
second_iteration_marking = Marking(task_finished=MultiSet([1]), task_done=MultiSet([1]))
self.assertEqual(petri_net.get_marking(), second_iteration_marking)
material_flow.fire_event("0", Event("buttonPressed2", "", "Boolean", value=True))
marking_after_finished_by_2 = Marking(task_finished=MultiSet([1, 1]))
self.assertEqual(petri_net.get_marking(), marking_after_finished_by_2)
def test_tos_triggered_by(self):
material_flows = self.get_material_flows(10)
self.assertEqual(len(material_flows), 1)
material_flow = material_flows[0]
petri_net_logic = material_flow.petri_net_logic
petri_net = petri_net_logic.get_petri_nets()[0]
pickup_net = petri_net_logic.get_pickup_nets()[0]
delivery_net = petri_net_logic.get_delivery_nets()[0]
task_initial_marking = Marking(task_started=MultiSet([1]))
pickup_initial_marking = Marking()
delivery_initial_marking = pickup_initial_marking
self.assertEqual(petri_net.get_marking(), task_initial_marking)
self.assertEqual(pickup_net.get_marking(), pickup_initial_marking)
self.assertEqual(delivery_net.get_marking(), delivery_initial_marking)
material_flow.fire_event("0", Event("buttonPressed", "", "Boolean", value=True))
pickup_marking_after_tb = Marking(tos_started=MultiSet([1]))
self.assertEqual(petri_net.get_marking(), task_initial_marking)
self.assertEqual(pickup_net.get_marking(), pickup_marking_after_tb)
self.assertEqual(delivery_net.get_marking(), delivery_initial_marking)
self.run_transport_order_steps("0", material_flow)
task_finished_marking = Marking(task_finished=MultiSet([1]))
tos_finished_marking = Marking(tos_finished=MultiSet([1]))
self.assertEqual(petri_net.get_marking(), task_finished_marking)
self.assertEqual(pickup_net.get_marking(), tos_finished_marking)
self.assertEqual(delivery_net.get_marking(), tos_finished_marking)
def test_tos_finished_by(self):
material_flows = self.get_material_flows(11)
self.assertEqual(len(material_flows), 1)
material_flow = material_flows[0]
petri_net_logic = material_flow.petri_net_logic
petri_net = petri_net_logic.get_petri_nets()[0]
pickup_net = petri_net_logic.get_pickup_nets()[0]
delivery_net = petri_net_logic.get_delivery_nets()[0]
task_initial_marking = Marking(task_started=MultiSet([1]))
pickup_initial_marking = Marking(tos_started=MultiSet([1]))
delivery_initial_marking = Marking()
self.assertEqual(petri_net.get_marking(), task_initial_marking)
self.assertEqual(pickup_net.get_marking(), pickup_initial_marking)
self.assertEqual(delivery_net.get_marking(), delivery_initial_marking)
material_flow.fire_event("0", Event("moved_to_location", "", "Boolean", value=True))
material_flow.fire_event("0", Event("buttonPressed", "", "Boolean", value=True))
material_flow.fire_event("0", Event("moved_to_location", "", "Boolean", value=True))
task_finished_marking = Marking(task_finished=MultiSet([1]))
tos_finished_marking = Marking(tos_finished=MultiSet([1]))
self.assertEqual(petri_net.get_marking(), task_finished_marking)
self.assertEqual(pickup_net.get_marking(), tos_finished_marking)
self.assertEqual(delivery_net.get_marking(), tos_finished_marking)
def test_on_done(self):
material_flows = self.get_material_flows(12)
self.assertEqual(len(material_flows), 1)
self.assertEqual(len(material_flows), 1)
material_flow = material_flows[0]
petri_net = material_flow.petri_net_logic.get_petri_nets()[0]
petri_net_2 = material_flow.petri_net_logic.get_petri_nets()[1]
initial_marking = Marking(task_started=MultiSet([1]))
self.assertEqual(petri_net.get_marking(), initial_marking)
self.run_transport_order_steps("0", material_flow)
first_task_finished_marking = Marking(task_finished=MultiSet([1]))
initial_marking_task2 = Marking(task_started=MultiSet([1]))
self.assertEqual(petri_net.get_marking(), first_task_finished_marking)
self.assertEqual(petri_net_2.get_marking(), initial_marking_task2)
self.run_transport_order_steps("1", material_flow)
second_task_finished_marking = Marking(task_finished=MultiSet([1]))
self.assertEqual(petri_net_2.get_marking(), second_task_finished_marking)
def test_on_done_and_other_task(self):
material_flows = self.get_material_flows(13)
self.assertEqual(len(material_flows), 2)
material_flow_1 = material_flows[0]
material_flow_2 = material_flows[1]
petri_net_m1_1 = material_flow_1.petri_net_logic.get_petri_nets()[0]
petri_net_m1_2 = material_flow_1.petri_net_logic.get_petri_nets()[1]
petri_net_m2_1 = material_flow_2.petri_net_logic.get_petri_nets()[0]
initial_marking_m1_1 = Marking(task_started=MultiSet([1]))
initial_marking_m1_2 = Marking()
initial_marking_m2_1 = initial_marking_m1_1
self.assertEqual(petri_net_m1_1.get_marking(), initial_marking_m1_1)
self.assertEqual(petri_net_m1_2.get_marking(), initial_marking_m1_2)
self.assertEqual(petri_net_m2_1.get_marking(), initial_marking_m2_1)
# should not be accepted
self.run_transport_order_steps("1", material_flow_1)
self.run_transport_order_steps("0", material_flow_1)
self.run_transport_order_steps("0", material_flow_2)
marking_after_to_done_m1_1 = Marking(task_finished=MultiSet([1]))
marking_after_to_done_m1_2 = Marking(task_started=MultiSet([1]))
marking_after_to_done_m2_1 = marking_after_to_done_m1_1
self.assertEqual(petri_net_m1_1.get_marking(), marking_after_to_done_m1_1)
self.assertEqual(petri_net_m1_2.get_marking(), marking_after_to_done_m1_2)
self.assertEqual(petri_net_m2_1.get_marking(), marking_after_to_done_m2_1)
self.run_transport_order_steps("1", material_flow_1)
# should not be accepted
material_flow_1.fire_event("0", Event("task_started", "", "Boolean", value=True))
material_flow_2.fire_event("0", Event("task_started", "", "Boolean", value=True))
self.assertEqual(petri_net_m1_1.get_marking(), marking_after_to_done_m1_1)
self.assertEqual(petri_net_m1_2.get_marking(), marking_after_to_done_m1_1)
self.assertEqual(petri_net_m2_1.get_marking(), marking_after_to_done_m2_1)
def test_on_done_with_many_tasks(self):
material_flows = self.get_material_flows(14)
self.assertEqual(len(material_flows), 1)
material_flow = material_flows[0]
petri_net_1 = material_flow.petri_net_logic.get_petri_nets()[0]
petri_net_2 = material_flow.petri_net_logic.get_petri_nets()[1]
petri_net_3 = material_flow.petri_net_logic.get_petri_nets()[2]
petri_net_4 = material_flow.petri_net_logic.get_petri_nets()[3]
petri_net_5 = material_flow.petri_net_logic.get_petri_nets()[4]
started_marking = Marking(task_started=MultiSet([1]))
finished_marking = Marking(task_finished=MultiSet([1]))
empty_marking = Marking()
self.assertEqual(petri_net_1.get_marking(), started_marking)
self.assertEqual(petri_net_2.get_marking(), empty_marking)
self.assertEqual(petri_net_3.get_marking(), empty_marking)
self.assertEqual(petri_net_4.get_marking(), empty_marking)
self.assertEqual(petri_net_5.get_marking(), empty_marking)
self.run_transport_order_steps("0", material_flow)
self.assertEqual(petri_net_1.get_marking(), finished_marking)
self.assertEqual(petri_net_2.get_marking(), started_marking)
self.assertEqual(petri_net_3.get_marking(), started_marking)
self.assertEqual(petri_net_4.get_marking(), empty_marking)
self.assertEqual(petri_net_5.get_marking(), empty_marking)
self.run_transport_order_steps("1", material_flow)
self.run_transport_order_steps("2", material_flow)
self.assertEqual(petri_net_1.get_marking(), finished_marking)
self.assertEqual(petri_net_2.get_marking(), finished_marking)
self.assertEqual(petri_net_3.get_marking(), finished_marking)
self.assertEqual(petri_net_4.get_marking(), started_marking)
self.assertEqual(petri_net_5.get_marking(), started_marking)
self.run_transport_order_steps("3", material_flow)
self.run_transport_order_steps("4", material_flow)
self.assertEqual(petri_net_1.get_marking(), finished_marking)
self.assertEqual(petri_net_2.get_marking(), finished_marking)
self.assertEqual(petri_net_3.get_marking(), finished_marking)
self.assertEqual(petri_net_4.get_marking(), finished_marking)
self.assertEqual(petri_net_5.get_marking(), finished_marking)
def test_task_sync(self):
material_flows = self.get_material_flows(15)
self.assertEqual(len(material_flows), 1)
material_flow = material_flows[0]
petri_net_1 = material_flow.petri_net_logic.get_petri_nets()[0]
petri_net_2 = material_flow.petri_net_logic.get_petri_nets()[1]
petri_net_3 = material_flow.petri_net_logic.get_petri_nets()[2]
started_marking = Marking(task_started=MultiSet([1]))
finished_marking = Marking(task_finished=MultiSet([1]))
empty_marking = Marking()
self.assertEqual(petri_net_1.get_marking(), started_marking)
self.assertEqual(petri_net_2.get_marking(), empty_marking)
self.assertEqual(petri_net_3.get_marking(), started_marking)
self.run_transport_order_steps("0", material_flow)
self.assertEqual(petri_net_1.get_marking(), finished_marking)
self.assertEqual(petri_net_2.get_marking(), empty_marking)
self.assertEqual(petri_net_3.get_marking(), started_marking)
self.run_transport_order_steps("2", material_flow)
self.assertEqual(petri_net_1.get_marking(), finished_marking)
self.assertEqual(petri_net_2.get_marking(), started_marking)
self.assertEqual(petri_net_3.get_marking(), finished_marking)
self.run_transport_order_steps("1", material_flow)
self.assertEqual(petri_net_1.get_marking(), finished_marking)
self.assertEqual(petri_net_2.get_marking(), finished_marking)
self.assertEqual(petri_net_3.get_marking(), finished_marking)
def test_same_events_in_tb_and_fb(self):
material_flows = self.get_material_flows(16)
self.assertEqual(len(material_flows), 1)
material_flow = material_flows[0]
petri_net = material_flow.petri_net_logic.get_petri_nets()[0]
# check if triggeredby and finishedby event places are created
self.assertEqual(10, len(petri_net._place))
initial_marking = Marking()
self.assertEqual(petri_net.get_marking(), initial_marking)
material_flow.fire_event("0", Event("buttonPressed", "", "Boolean", value=True))
marking_after_button_pressed = Marking(buttonPressed_0=MultiSet([1]))
self.assertEqual(petri_net.get_marking(), marking_after_button_pressed)
material_flow.fire_event("0", Event("buttonPressed2", "", "Boolean", value=False))
self.assertEqual(petri_net.get_marking(), marking_after_button_pressed)
material_flow.fire_event("0", Event("buttonPressed", "", "Boolean", value=False))
marking_after_tb = Marking(task_started=MultiSet([1]))
self.assertEqual(petri_net.get_marking(), marking_after_tb)
self.run_transport_order_steps("0", material_flow)
marking_after_to_done = Marking(task_done=MultiSet([1]))
self.assertEqual(petri_net.get_marking(), marking_after_to_done)
material_flow.fire_event("0", Event("buttonPressed2", "", "Boolean", value=True))
marking_after_bp2_true = Marking(task_done=MultiSet([1]), buttonPressed2_3=MultiSet([1]))
self.assertEqual(petri_net.get_marking(), marking_after_bp2_true)
material_flow.fire_event("0", Event("buttonPressed", "", "Boolean", value=False))
marking_after_fb = Marking(task_finished=MultiSet([1]))
self.assertEqual(petri_net.get_marking(), marking_after_fb)
def test_same_event_in_condition(self):
material_flows = self.get_material_flows(17)
self.assertEqual(len(material_flows), 1)
material_flow = material_flows[0]
petri_net = material_flow.petri_net_logic.get_petri_nets()[0]
initial_marking = Marking()
self.assertEqual(petri_net.get_marking(), initial_marking)
material_flow.fire_event("0", Event("buttonPressed2", "", "Boolean", value=True))
material_flow.fire_event("0", Event("buttonPressed", "", "Boolean", value=True))
marking_after_tb = Marking(task_started=MultiSet([1]))
self.assertEqual(petri_net.get_marking(), marking_after_tb)
self.run_transport_order_steps("0", material_flow)
marking_after_to_done = Marking(task_finished=MultiSet([1]))
self.assertEqual(petri_net.get_marking(), marking_after_to_done)
def test_tb_with_integer(self):
material_flows = self.get_material_flows(18)
self.assertEqual(len(material_flows), 1)
material_flow = material_flows[0]
petri_net = material_flow.petri_net_logic.get_petri_nets()[0]
initial_marking = Marking()
self.assertEqual(petri_net.get_marking(), initial_marking)
material_flow.fire_event("0", Event("sensor", "", "Integer", value=3))
self.assertEqual(petri_net.get_marking(), initial_marking)
material_flow.fire_event("0", Event("sensor", "", "Integer", value=51))
marking_after_sensor_is_51 = Marking(sensor_0=MultiSet([1]))
self.assertEqual(petri_net.get_marking(), marking_after_sensor_is_51)
material_flow.fire_event("0", Event("sensor2", "", "Integer", value=5))
marking_after_sensor2_is_5 = Marking(task_started=MultiSet([1]))
self.assertEqual(petri_net.get_marking(), marking_after_sensor2_is_5)
self.run_transport_order_steps("0", material_flow)
finished_marking = Marking(task_finished=MultiSet([1]))
self.assertEqual(petri_net.get_marking(), finished_marking)
def test_tb_with_string(self):
material_flows = self.get_material_flows(19)
self.assertEqual(len(material_flows), 1)
material_flow = material_flows[0]
petri_net = material_flow.petri_net_logic.get_petri_nets()[0]
initial_marking = Marking()
self.assertEqual(petri_net.get_marking(), initial_marking)
material_flow.fire_event("0", Event("terminal", "", "String", value="ab"))
self.assertEqual(petri_net.get_marking(), initial_marking)
material_flow.fire_event("0", Event("terminal", "", "String", value="abc"))
marking_after_terminal_is_ab = Marking(task_started=MultiSet([1]))
self.assertEqual(petri_net.get_marking(), marking_after_terminal_is_ab)
self.run_transport_order_steps("0", material_flow)
finished_marking = Marking(task_finished=MultiSet([1]))
self.assertEqual(petri_net.get_marking(), finished_marking)
def get_material_flows(self, test_number):
return TestScheduling.material_flows[test_number]
if __name__ == "__main__":
unittest.main(testRunner=xmlrunner.XMLTestRunner(output="test-reports"),
failfast=False, buffer=False, catchbreak=False)
| 46.70098 | 113 | 0.718099 | 3,688 | 28,581 | 5.147777 | 0.053959 | 0.076692 | 0.114828 | 0.132052 | 0.901449 | 0.882908 | 0.861259 | 0.837556 | 0.808796 | 0.760021 | 0 | 0.018127 | 0.171967 | 28,581 | 611 | 114 | 46.777414 | 0.784078 | 0.01452 | 0 | 0.642369 | 0 | 0 | 0.034964 | 0.000853 | 0 | 0 | 0 | 0 | 0.334852 | 1 | 0.052392 | false | 0.031891 | 0.025057 | 0.002278 | 0.082005 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d221742edc00a79a221c7923cf04b026dd2f173b | 19 | py | Python | util/__init__.py | cosmos-org/ml-toolkit | 2f1d06bafabb0f84c2598038402f9671c2d7720e | [
"MIT"
] | 710 | 2021-08-01T16:43:59.000Z | 2022-03-31T08:39:17.000Z | util/__init__.py | cosmos-org/ml-toolkit | 2f1d06bafabb0f84c2598038402f9671c2d7720e | [
"MIT"
] | 66 | 2019-06-09T12:14:31.000Z | 2021-07-27T05:54:35.000Z | util/__init__.py | cosmos-org/ml-toolkit | 2f1d06bafabb0f84c2598038402f9671c2d7720e | [
"MIT"
] | 183 | 2018-09-07T06:57:13.000Z | 2021-08-01T08:50:15.000Z | from .util import * | 19 | 19 | 0.736842 | 3 | 19 | 4.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.157895 | 19 | 1 | 19 | 19 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d25d2bedad305e8135f5e82184351668c9068504 | 225 | py | Python | moto/medialive/__init__.py | thomassross/moto | 407d5c853dbee9b9e132d97b41414b7dca475765 | [
"Apache-2.0"
] | 1 | 2021-12-12T04:23:06.000Z | 2021-12-12T04:23:06.000Z | moto/medialive/__init__.py | thomassross/moto | 407d5c853dbee9b9e132d97b41414b7dca475765 | [
"Apache-2.0"
] | 4 | 2017-09-30T07:52:52.000Z | 2021-12-13T06:56:55.000Z | moto/medialive/__init__.py | thomassross/moto | 407d5c853dbee9b9e132d97b41414b7dca475765 | [
"Apache-2.0"
] | 2 | 2021-11-24T08:05:43.000Z | 2021-11-25T16:18:48.000Z | from __future__ import unicode_literals
from .models import medialive_backends
from ..core.models import base_decorator
medialive_backend = medialive_backends["us-east-1"]
mock_medialive = base_decorator(medialive_backends)
| 32.142857 | 51 | 0.853333 | 29 | 225 | 6.206897 | 0.551724 | 0.283333 | 0.244444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004854 | 0.084444 | 225 | 6 | 52 | 37.5 | 0.868932 | 0 | 0 | 0 | 0 | 0 | 0.04 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.6 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d27b897b02eca0f1680d21f76f05eb251755abcf | 150 | py | Python | OpticsLab/__init__.py | AzizAlqasem/OpticsLab | a68c12edc9998f0709bae3da2fa0f85778e19bf0 | [
"MIT"
] | null | null | null | OpticsLab/__init__.py | AzizAlqasem/OpticsLab | a68c12edc9998f0709bae3da2fa0f85778e19bf0 | [
"MIT"
] | null | null | null | OpticsLab/__init__.py | AzizAlqasem/OpticsLab | a68c12edc9998f0709bae3da2fa0f85778e19bf0 | [
"MIT"
] | null | null | null | from OpticsLab.source import *
from OpticsLab.components import *
from OpticsLab.grid import *
from OpticsLab.monitor import *
__version__ = '0.0.0' | 21.428571 | 34 | 0.78 | 20 | 150 | 5.65 | 0.45 | 0.460177 | 0.504425 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.023077 | 0.133333 | 150 | 7 | 35 | 21.428571 | 0.846154 | 0 | 0 | 0 | 0 | 0 | 0.033113 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.8 | 0 | 0.8 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
96a568eb37d6de656f8418c721193ffb336a269c | 37,879 | py | Python | instances/passenger_demand/pas-20210421-2109-int14000000000000001e/1.py | LHcau/scheduling-shared-passenger-and-freight-transport-on-a-fixed-infrastructure | bba1e6af5bc8d9deaa2dc3b83f6fe9ddf15d2a11 | [
"BSD-3-Clause"
] | null | null | null | instances/passenger_demand/pas-20210421-2109-int14000000000000001e/1.py | LHcau/scheduling-shared-passenger-and-freight-transport-on-a-fixed-infrastructure | bba1e6af5bc8d9deaa2dc3b83f6fe9ddf15d2a11 | [
"BSD-3-Clause"
] | null | null | null | instances/passenger_demand/pas-20210421-2109-int14000000000000001e/1.py | LHcau/scheduling-shared-passenger-and-freight-transport-on-a-fixed-infrastructure | bba1e6af5bc8d9deaa2dc3b83f6fe9ddf15d2a11 | [
"BSD-3-Clause"
] | null | null | null |
"""
PASSENGERS
"""
numPassengers = 3182
passenger_arriving = (
(2, 14, 9, 7, 4, 0, 3, 7, 6, 8, 2, 0), # 0
(0, 6, 5, 2, 0, 0, 7, 10, 7, 3, 2, 0), # 1
(2, 12, 9, 2, 2, 0, 5, 5, 10, 6, 0, 0), # 2
(4, 9, 4, 8, 1, 0, 5, 8, 4, 4, 4, 0), # 3
(2, 11, 10, 5, 1, 0, 10, 9, 5, 10, 0, 0), # 4
(3, 9, 3, 6, 3, 0, 16, 5, 7, 7, 1, 0), # 5
(5, 7, 6, 8, 2, 0, 7, 4, 14, 4, 0, 0), # 6
(4, 7, 8, 1, 0, 0, 3, 2, 4, 4, 3, 0), # 7
(7, 9, 2, 5, 0, 0, 5, 5, 8, 3, 5, 0), # 8
(1, 6, 8, 5, 1, 0, 10, 12, 4, 1, 4, 0), # 9
(2, 11, 5, 5, 5, 0, 5, 7, 8, 7, 6, 0), # 10
(5, 12, 6, 5, 0, 0, 7, 9, 9, 3, 3, 0), # 11
(6, 7, 9, 8, 1, 0, 5, 6, 3, 4, 0, 0), # 12
(4, 7, 5, 2, 3, 0, 2, 5, 6, 4, 3, 0), # 13
(4, 6, 9, 4, 2, 0, 4, 7, 5, 3, 0, 0), # 14
(9, 9, 4, 4, 1, 0, 1, 15, 4, 4, 0, 0), # 15
(4, 12, 4, 6, 0, 0, 10, 8, 5, 5, 1, 0), # 16
(2, 12, 2, 5, 1, 0, 9, 4, 1, 5, 2, 0), # 17
(2, 9, 8, 3, 1, 0, 4, 4, 8, 2, 2, 0), # 18
(4, 11, 5, 3, 2, 0, 6, 12, 6, 8, 4, 0), # 19
(4, 14, 4, 4, 2, 0, 5, 8, 4, 7, 4, 0), # 20
(3, 7, 9, 4, 1, 0, 3, 8, 6, 7, 2, 0), # 21
(3, 8, 8, 4, 3, 0, 6, 9, 6, 2, 3, 0), # 22
(6, 8, 8, 7, 3, 0, 6, 12, 3, 7, 2, 0), # 23
(4, 10, 4, 4, 3, 0, 6, 7, 3, 5, 3, 0), # 24
(5, 4, 6, 1, 2, 0, 3, 13, 7, 5, 2, 0), # 25
(4, 6, 9, 3, 1, 0, 5, 10, 8, 4, 2, 0), # 26
(8, 12, 4, 7, 0, 0, 9, 13, 4, 8, 4, 0), # 27
(3, 7, 4, 8, 3, 0, 4, 11, 4, 4, 2, 0), # 28
(2, 11, 6, 3, 4, 0, 10, 7, 4, 3, 4, 0), # 29
(7, 9, 8, 2, 4, 0, 6, 9, 3, 2, 4, 0), # 30
(7, 13, 12, 2, 3, 0, 5, 6, 9, 4, 2, 0), # 31
(5, 15, 7, 1, 4, 0, 4, 11, 9, 6, 1, 0), # 32
(2, 12, 8, 3, 1, 0, 6, 13, 11, 4, 1, 0), # 33
(6, 11, 6, 4, 4, 0, 7, 7, 5, 4, 2, 0), # 34
(1, 6, 9, 3, 1, 0, 9, 9, 4, 3, 2, 0), # 35
(2, 11, 8, 2, 0, 0, 8, 7, 7, 8, 3, 0), # 36
(4, 7, 7, 3, 1, 0, 4, 7, 10, 7, 2, 0), # 37
(3, 7, 5, 1, 2, 0, 7, 8, 6, 6, 4, 0), # 38
(4, 8, 10, 4, 2, 0, 4, 10, 3, 6, 4, 0), # 39
(5, 12, 9, 3, 2, 0, 4, 9, 5, 3, 1, 0), # 40
(6, 8, 7, 5, 1, 0, 7, 6, 8, 6, 1, 0), # 41
(5, 10, 6, 5, 1, 0, 2, 8, 5, 4, 3, 0), # 42
(2, 12, 4, 6, 4, 0, 5, 9, 2, 6, 4, 0), # 43
(11, 8, 10, 3, 2, 0, 5, 8, 5, 2, 2, 0), # 44
(5, 13, 9, 1, 0, 0, 5, 7, 4, 5, 7, 0), # 45
(9, 5, 5, 2, 2, 0, 8, 11, 4, 9, 4, 0), # 46
(3, 15, 8, 5, 1, 0, 6, 14, 4, 3, 1, 0), # 47
(4, 5, 10, 3, 2, 0, 1, 6, 2, 7, 2, 0), # 48
(6, 4, 7, 2, 3, 0, 6, 8, 6, 7, 4, 0), # 49
(6, 9, 4, 7, 2, 0, 6, 5, 4, 4, 2, 0), # 50
(7, 7, 14, 4, 1, 0, 5, 8, 8, 8, 3, 0), # 51
(4, 13, 10, 2, 6, 0, 7, 7, 6, 5, 3, 0), # 52
(2, 13, 5, 3, 2, 0, 6, 8, 6, 7, 2, 0), # 53
(8, 12, 5, 3, 1, 0, 7, 9, 5, 6, 1, 0), # 54
(5, 11, 8, 4, 0, 0, 5, 1, 7, 3, 3, 0), # 55
(8, 16, 6, 0, 3, 0, 3, 3, 7, 8, 4, 0), # 56
(5, 7, 6, 11, 3, 0, 6, 6, 12, 3, 0, 0), # 57
(9, 5, 9, 5, 3, 0, 6, 7, 9, 7, 3, 0), # 58
(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), # 59
)
station_arriving_intensity = (
(3.7095121817383676, 9.515044981060607, 11.19193043059126, 8.87078804347826, 10.000240384615385, 6.659510869565219), # 0
(3.7443308140669203, 9.620858238197952, 11.252381752534994, 8.920190141908213, 10.075193108974359, 6.657240994867151), # 1
(3.7787518681104277, 9.725101964085297, 11.31139817195087, 8.968504830917876, 10.148564102564103, 6.654901690821256), # 2
(3.8127461259877085, 9.827663671875001, 11.368936576156813, 9.01569089673913, 10.22028605769231, 6.652493274456523), # 3
(3.8462843698175795, 9.928430874719417, 11.424953852470724, 9.061707125603865, 10.290291666666668, 6.6500160628019325), # 4
(3.879337381718857, 10.027291085770905, 11.479406888210512, 9.106512303743962, 10.358513621794872, 6.647470372886473), # 5
(3.9118759438103607, 10.12413181818182, 11.53225257069409, 9.150065217391306, 10.424884615384617, 6.644856521739131), # 6
(3.943870838210907, 10.218840585104518, 11.58344778723936, 9.19232465277778, 10.489337339743592, 6.64217482638889), # 7
(3.975292847039314, 10.311304899691358, 11.632949425164242, 9.233249396135266, 10.551804487179488, 6.639425603864735), # 8
(4.006112752414399, 10.401412275094698, 11.680714371786634, 9.272798233695653, 10.61221875, 6.636609171195653), # 9
(4.03630133645498, 10.489050224466892, 11.72669951442445, 9.310929951690824, 10.670512820512823, 6.633725845410628), # 10
(4.065829381279876, 10.5741062609603, 11.7708617403956, 9.347603336352659, 10.726619391025642, 6.630775943538648), # 11
(4.094667669007903, 10.656467897727273, 11.813157937017996, 9.382777173913043, 10.780471153846154, 6.627759782608695), # 12
(4.122786981757876, 10.736022647920176, 11.85354499160954, 9.416410250603866, 10.832000801282053, 6.624677679649759), # 13
(4.15015810164862, 10.81265802469136, 11.891979791488144, 9.448461352657004, 10.881141025641025, 6.621529951690821), # 14
(4.1767518107989465, 10.886261541193182, 11.928419223971721, 9.478889266304348, 10.92782451923077, 6.618316915760871), # 15
(4.202538891327675, 10.956720710578002, 11.96282017637818, 9.507652777777778, 10.971983974358976, 6.61503888888889), # 16
(4.227490125353625, 11.023923045998176, 11.995139536025421, 9.53471067330918, 11.013552083333336, 6.611696188103866), # 17
(4.25157629499561, 11.087756060606061, 12.025334190231364, 9.560021739130436, 11.052461538461543, 6.608289130434783), # 18
(4.274768182372451, 11.148107267554012, 12.053361026313912, 9.58354476147343, 11.088645032051284, 6.604818032910629), # 19
(4.297036569602966, 11.204864179994388, 12.079176931590974, 9.60523852657005, 11.122035256410259, 6.601283212560387), # 20
(4.318352238805971, 11.257914311079544, 12.102738793380466, 9.625061820652174, 11.152564903846153, 6.597684986413044), # 21
(4.338685972100283, 11.307145173961842, 12.124003499000287, 9.642973429951692, 11.180166666666667, 6.5940236714975855), # 22
(4.358008551604722, 11.352444281793632, 12.142927935768354, 9.658932140700484, 11.204773237179488, 6.590299584842997), # 23
(4.3762907594381035, 11.393699147727272, 12.159468991002571, 9.672896739130437, 11.226317307692307, 6.586513043478261), # 24
(4.393503377719247, 11.430797284915124, 12.173583552020853, 9.684826011473431, 11.244731570512819, 6.582664364432368), # 25
(4.409617188566969, 11.46362620650954, 12.185228506141103, 9.694678743961353, 11.259948717948719, 6.5787538647343), # 26
(4.424602974100088, 11.492073425662877, 12.194360740681233, 9.702413722826089, 11.271901442307694, 6.574781861413045), # 27
(4.438431516437421, 11.516026455527497, 12.200937142959157, 9.707989734299519, 11.280522435897437, 6.570748671497586), # 28
(4.4510735976977855, 11.535372809255753, 12.204914600292774, 9.711365564613528, 11.285744391025641, 6.566654612016909), # 29
(4.4625, 11.55, 12.20625, 9.7125, 11.287500000000001, 6.562500000000001), # 30
(4.47319183983376, 11.56215031960227, 12.205248928140096, 9.712295118464054, 11.286861125886526, 6.556726763701484), # 31
(4.4836528452685425, 11.574140056818184, 12.202274033816424, 9.711684477124184, 11.28495815602837, 6.547834661835751), # 32
(4.493887715792838, 11.585967720170455, 12.197367798913046, 9.710674080882354, 11.281811569148937, 6.535910757121439), # 33
(4.503901150895141, 11.597631818181819, 12.19057270531401, 9.709269934640524, 11.277441843971632, 6.521042112277196), # 34
(4.513697850063939, 11.609130859374998, 12.181931234903383, 9.707478043300654, 11.27186945921986, 6.503315790021656), # 35
(4.523282512787724, 11.62046335227273, 12.171485869565219, 9.705304411764708, 11.265114893617023, 6.482818853073463), # 36
(4.532659838554988, 11.631627805397729, 12.159279091183576, 9.70275504493464, 11.257198625886524, 6.4596383641512585), # 37
(4.5418345268542195, 11.642622727272729, 12.145353381642513, 9.699835947712419, 11.248141134751775, 6.433861385973679), # 38
(4.5508112771739135, 11.653446626420456, 12.129751222826087, 9.696553125000001, 11.23796289893617, 6.40557498125937), # 39
(4.559594789002558, 11.664098011363638, 12.11251509661836, 9.692912581699348, 11.22668439716312, 6.37486621272697), # 40
(4.568189761828645, 11.674575390625, 12.093687484903382, 9.68892032271242, 11.214326108156028, 6.34182214309512), # 41
(4.576600895140665, 11.684877272727276, 12.07331086956522, 9.684582352941177, 11.2009085106383, 6.3065298350824595), # 42
(4.584832888427111, 11.69500216619318, 12.051427732487923, 9.679904677287583, 11.186452083333334, 6.26907635140763), # 43
(4.592890441176471, 11.704948579545455, 12.028080555555556, 9.674893300653595, 11.17097730496454, 6.229548754789272), # 44
(4.600778252877237, 11.714715021306818, 12.003311820652177, 9.669554227941177, 11.15450465425532, 6.188034107946028), # 45
(4.6085010230179035, 11.724300000000003, 11.97716400966184, 9.663893464052288, 11.137054609929079, 6.144619473596536), # 46
(4.616063451086957, 11.733702024147728, 11.9496796044686, 9.65791701388889, 11.118647650709221, 6.099391914459438), # 47
(4.623470236572891, 11.742919602272728, 11.920901086956523, 9.651630882352942, 11.099304255319149, 6.052438493253375), # 48
(4.630726078964194, 11.751951242897727, 11.890870939009663, 9.645041074346407, 11.079044902482272, 6.003846272696985), # 49
(4.6378356777493615, 11.760795454545454, 11.85963164251208, 9.638153594771243, 11.057890070921987, 5.953702315508913), # 50
(4.6448037324168805, 11.769450745738636, 11.827225679347826, 9.630974448529413, 11.035860239361703, 5.902093684407797), # 51
(4.651634942455243, 11.777915625, 11.793695531400965, 9.623509640522876, 11.012975886524824, 5.849107442112278), # 52
(4.658334007352941, 11.786188600852274, 11.759083680555555, 9.615765175653596, 10.989257491134753, 5.794830651340996), # 53
(4.6649056265984665, 11.79426818181818, 11.723432608695653, 9.60774705882353, 10.964725531914894, 5.739350374812594), # 54
(4.671354499680307, 11.802152876420456, 11.686784797705313, 9.599461294934642, 10.939400487588653, 5.682753675245711), # 55
(4.677685326086957, 11.809841193181818, 11.649182729468599, 9.59091388888889, 10.913302836879433, 5.625127615358988), # 56
(4.683902805306906, 11.817331640625003, 11.610668885869565, 9.582110845588236, 10.886453058510638, 5.566559257871065), # 57
(4.690011636828645, 11.824622727272727, 11.57128574879227, 9.573058169934642, 10.858871631205675, 5.507135665500583), # 58
(0.0, 0.0, 0.0, 0.0, 0.0, 0.0), # 59
)
passenger_arriving_acc = (
(2, 14, 9, 7, 4, 0, 3, 7, 6, 8, 2, 0), # 0
(2, 20, 14, 9, 4, 0, 10, 17, 13, 11, 4, 0), # 1
(4, 32, 23, 11, 6, 0, 15, 22, 23, 17, 4, 0), # 2
(8, 41, 27, 19, 7, 0, 20, 30, 27, 21, 8, 0), # 3
(10, 52, 37, 24, 8, 0, 30, 39, 32, 31, 8, 0), # 4
(13, 61, 40, 30, 11, 0, 46, 44, 39, 38, 9, 0), # 5
(18, 68, 46, 38, 13, 0, 53, 48, 53, 42, 9, 0), # 6
(22, 75, 54, 39, 13, 0, 56, 50, 57, 46, 12, 0), # 7
(29, 84, 56, 44, 13, 0, 61, 55, 65, 49, 17, 0), # 8
(30, 90, 64, 49, 14, 0, 71, 67, 69, 50, 21, 0), # 9
(32, 101, 69, 54, 19, 0, 76, 74, 77, 57, 27, 0), # 10
(37, 113, 75, 59, 19, 0, 83, 83, 86, 60, 30, 0), # 11
(43, 120, 84, 67, 20, 0, 88, 89, 89, 64, 30, 0), # 12
(47, 127, 89, 69, 23, 0, 90, 94, 95, 68, 33, 0), # 13
(51, 133, 98, 73, 25, 0, 94, 101, 100, 71, 33, 0), # 14
(60, 142, 102, 77, 26, 0, 95, 116, 104, 75, 33, 0), # 15
(64, 154, 106, 83, 26, 0, 105, 124, 109, 80, 34, 0), # 16
(66, 166, 108, 88, 27, 0, 114, 128, 110, 85, 36, 0), # 17
(68, 175, 116, 91, 28, 0, 118, 132, 118, 87, 38, 0), # 18
(72, 186, 121, 94, 30, 0, 124, 144, 124, 95, 42, 0), # 19
(76, 200, 125, 98, 32, 0, 129, 152, 128, 102, 46, 0), # 20
(79, 207, 134, 102, 33, 0, 132, 160, 134, 109, 48, 0), # 21
(82, 215, 142, 106, 36, 0, 138, 169, 140, 111, 51, 0), # 22
(88, 223, 150, 113, 39, 0, 144, 181, 143, 118, 53, 0), # 23
(92, 233, 154, 117, 42, 0, 150, 188, 146, 123, 56, 0), # 24
(97, 237, 160, 118, 44, 0, 153, 201, 153, 128, 58, 0), # 25
(101, 243, 169, 121, 45, 0, 158, 211, 161, 132, 60, 0), # 26
(109, 255, 173, 128, 45, 0, 167, 224, 165, 140, 64, 0), # 27
(112, 262, 177, 136, 48, 0, 171, 235, 169, 144, 66, 0), # 28
(114, 273, 183, 139, 52, 0, 181, 242, 173, 147, 70, 0), # 29
(121, 282, 191, 141, 56, 0, 187, 251, 176, 149, 74, 0), # 30
(128, 295, 203, 143, 59, 0, 192, 257, 185, 153, 76, 0), # 31
(133, 310, 210, 144, 63, 0, 196, 268, 194, 159, 77, 0), # 32
(135, 322, 218, 147, 64, 0, 202, 281, 205, 163, 78, 0), # 33
(141, 333, 224, 151, 68, 0, 209, 288, 210, 167, 80, 0), # 34
(142, 339, 233, 154, 69, 0, 218, 297, 214, 170, 82, 0), # 35
(144, 350, 241, 156, 69, 0, 226, 304, 221, 178, 85, 0), # 36
(148, 357, 248, 159, 70, 0, 230, 311, 231, 185, 87, 0), # 37
(151, 364, 253, 160, 72, 0, 237, 319, 237, 191, 91, 0), # 38
(155, 372, 263, 164, 74, 0, 241, 329, 240, 197, 95, 0), # 39
(160, 384, 272, 167, 76, 0, 245, 338, 245, 200, 96, 0), # 40
(166, 392, 279, 172, 77, 0, 252, 344, 253, 206, 97, 0), # 41
(171, 402, 285, 177, 78, 0, 254, 352, 258, 210, 100, 0), # 42
(173, 414, 289, 183, 82, 0, 259, 361, 260, 216, 104, 0), # 43
(184, 422, 299, 186, 84, 0, 264, 369, 265, 218, 106, 0), # 44
(189, 435, 308, 187, 84, 0, 269, 376, 269, 223, 113, 0), # 45
(198, 440, 313, 189, 86, 0, 277, 387, 273, 232, 117, 0), # 46
(201, 455, 321, 194, 87, 0, 283, 401, 277, 235, 118, 0), # 47
(205, 460, 331, 197, 89, 0, 284, 407, 279, 242, 120, 0), # 48
(211, 464, 338, 199, 92, 0, 290, 415, 285, 249, 124, 0), # 49
(217, 473, 342, 206, 94, 0, 296, 420, 289, 253, 126, 0), # 50
(224, 480, 356, 210, 95, 0, 301, 428, 297, 261, 129, 0), # 51
(228, 493, 366, 212, 101, 0, 308, 435, 303, 266, 132, 0), # 52
(230, 506, 371, 215, 103, 0, 314, 443, 309, 273, 134, 0), # 53
(238, 518, 376, 218, 104, 0, 321, 452, 314, 279, 135, 0), # 54
(243, 529, 384, 222, 104, 0, 326, 453, 321, 282, 138, 0), # 55
(251, 545, 390, 222, 107, 0, 329, 456, 328, 290, 142, 0), # 56
(256, 552, 396, 233, 110, 0, 335, 462, 340, 293, 142, 0), # 57
(265, 557, 405, 238, 113, 0, 341, 469, 349, 300, 145, 0), # 58
(265, 557, 405, 238, 113, 0, 341, 469, 349, 300, 145, 0), # 59
)
passenger_arriving_rate = (
(3.7095121817383676, 7.612035984848484, 6.715158258354756, 3.5483152173913037, 2.000048076923077, 0.0, 6.659510869565219, 8.000192307692307, 5.322472826086956, 4.476772172236504, 1.903008996212121, 0.0), # 0
(3.7443308140669203, 7.696686590558361, 6.751429051520996, 3.5680760567632848, 2.0150386217948717, 0.0, 6.657240994867151, 8.060154487179487, 5.352114085144928, 4.500952701013997, 1.9241716476395903, 0.0), # 1
(3.7787518681104277, 7.780081571268237, 6.786838903170522, 3.58740193236715, 2.0297128205128203, 0.0, 6.654901690821256, 8.118851282051281, 5.381102898550726, 4.524559268780347, 1.9450203928170593, 0.0), # 2
(3.8127461259877085, 7.8621309375, 6.821361945694087, 3.6062763586956517, 2.044057211538462, 0.0, 6.652493274456523, 8.176228846153847, 5.409414538043478, 4.547574630462725, 1.965532734375, 0.0), # 3
(3.8462843698175795, 7.942744699775533, 6.854972311482434, 3.624682850241546, 2.0580583333333333, 0.0, 6.6500160628019325, 8.232233333333333, 5.437024275362319, 4.569981540988289, 1.9856861749438832, 0.0), # 4
(3.879337381718857, 8.021832868616723, 6.887644132926307, 3.6426049214975844, 2.0717027243589743, 0.0, 6.647470372886473, 8.286810897435897, 5.463907382246377, 4.591762755284204, 2.005458217154181, 0.0), # 5
(3.9118759438103607, 8.099305454545455, 6.919351542416455, 3.660026086956522, 2.084976923076923, 0.0, 6.644856521739131, 8.339907692307692, 5.490039130434783, 4.612901028277636, 2.0248263636363637, 0.0), # 6
(3.943870838210907, 8.175072468083613, 6.950068672343615, 3.6769298611111116, 2.0978674679487184, 0.0, 6.64217482638889, 8.391469871794873, 5.515394791666668, 4.633379114895743, 2.043768117020903, 0.0), # 7
(3.975292847039314, 8.249043919753085, 6.979769655098544, 3.693299758454106, 2.1103608974358976, 0.0, 6.639425603864735, 8.44144358974359, 5.5399496376811594, 4.653179770065696, 2.062260979938271, 0.0), # 8
(4.006112752414399, 8.321129820075758, 7.00842862307198, 3.709119293478261, 2.12244375, 0.0, 6.636609171195653, 8.489775, 5.563678940217391, 4.672285748714653, 2.0802824550189394, 0.0), # 9
(4.03630133645498, 8.391240179573513, 7.03601970865467, 3.724371980676329, 2.134102564102564, 0.0, 6.633725845410628, 8.536410256410257, 5.586557971014494, 4.690679805769779, 2.0978100448933783, 0.0), # 10
(4.065829381279876, 8.459285008768239, 7.06251704423736, 3.739041334541063, 2.145323878205128, 0.0, 6.630775943538648, 8.581295512820512, 5.608562001811595, 4.70834469615824, 2.1148212521920597, 0.0), # 11
(4.094667669007903, 8.525174318181818, 7.087894762210797, 3.7531108695652167, 2.156094230769231, 0.0, 6.627759782608695, 8.624376923076923, 5.6296663043478254, 4.725263174807198, 2.1312935795454546, 0.0), # 12
(4.122786981757876, 8.58881811833614, 7.112126994965724, 3.766564100241546, 2.1664001602564102, 0.0, 6.624677679649759, 8.665600641025641, 5.649846150362319, 4.741417996643816, 2.147204529584035, 0.0), # 13
(4.15015810164862, 8.650126419753088, 7.135187874892886, 3.779384541062801, 2.1762282051282047, 0.0, 6.621529951690821, 8.704912820512819, 5.669076811594202, 4.756791916595257, 2.162531604938272, 0.0), # 14
(4.1767518107989465, 8.709009232954545, 7.157051534383032, 3.7915557065217387, 2.1855649038461538, 0.0, 6.618316915760871, 8.742259615384615, 5.6873335597826085, 4.771367689588688, 2.177252308238636, 0.0), # 15
(4.202538891327675, 8.7653765684624, 7.177692105826908, 3.803061111111111, 2.194396794871795, 0.0, 6.61503888888889, 8.77758717948718, 5.7045916666666665, 4.785128070551272, 2.1913441421156, 0.0), # 16
(4.227490125353625, 8.81913843679854, 7.197083721615253, 3.8138842693236716, 2.202710416666667, 0.0, 6.611696188103866, 8.810841666666668, 5.720826403985508, 4.798055814410168, 2.204784609199635, 0.0), # 17
(4.25157629499561, 8.870204848484848, 7.215200514138818, 3.824008695652174, 2.2104923076923084, 0.0, 6.608289130434783, 8.841969230769234, 5.736013043478262, 4.810133676092545, 2.217551212121212, 0.0), # 18
(4.274768182372451, 8.918485814043208, 7.232016615788346, 3.8334179045893717, 2.2177290064102566, 0.0, 6.604818032910629, 8.870916025641026, 5.750126856884058, 4.8213444105255645, 2.229621453510802, 0.0), # 19
(4.297036569602966, 8.96389134399551, 7.247506158954584, 3.8420954106280196, 2.2244070512820517, 0.0, 6.601283212560387, 8.897628205128207, 5.76314311594203, 4.831670772636389, 2.2409728359988774, 0.0), # 20
(4.318352238805971, 9.006331448863634, 7.261643276028279, 3.8500247282608693, 2.2305129807692303, 0.0, 6.597684986413044, 8.922051923076921, 5.775037092391305, 4.841095517352186, 2.2515828622159084, 0.0), # 21
(4.338685972100283, 9.045716139169473, 7.274402099400172, 3.8571893719806765, 2.2360333333333333, 0.0, 6.5940236714975855, 8.944133333333333, 5.785784057971015, 4.849601399600115, 2.2614290347923682, 0.0), # 22
(4.358008551604722, 9.081955425434906, 7.285756761461012, 3.8635728562801934, 2.2409546474358972, 0.0, 6.590299584842997, 8.963818589743589, 5.79535928442029, 4.857171174307341, 2.2704888563587264, 0.0), # 23
(4.3762907594381035, 9.114959318181818, 7.295681394601543, 3.869158695652174, 2.2452634615384612, 0.0, 6.586513043478261, 8.981053846153845, 5.803738043478262, 4.863787596401028, 2.2787398295454544, 0.0), # 24
(4.393503377719247, 9.1446378279321, 7.304150131212511, 3.8739304045893723, 2.2489463141025636, 0.0, 6.582664364432368, 8.995785256410255, 5.810895606884059, 4.869433420808341, 2.286159456983025, 0.0), # 25
(4.409617188566969, 9.17090096520763, 7.311137103684661, 3.8778714975845405, 2.2519897435897436, 0.0, 6.5787538647343, 9.007958974358974, 5.816807246376811, 4.874091402456441, 2.2927252413019077, 0.0), # 26
(4.424602974100088, 9.193658740530301, 7.31661644440874, 3.880965489130435, 2.2543802884615385, 0.0, 6.574781861413045, 9.017521153846154, 5.821448233695653, 4.877744296272493, 2.2984146851325753, 0.0), # 27
(4.438431516437421, 9.212821164421996, 7.320562285775494, 3.8831958937198072, 2.256104487179487, 0.0, 6.570748671497586, 9.024417948717948, 5.824793840579711, 4.8803748571836625, 2.303205291105499, 0.0), # 28
(4.4510735976977855, 9.228298247404602, 7.322948760175664, 3.884546225845411, 2.257148878205128, 0.0, 6.566654612016909, 9.028595512820512, 5.826819338768117, 4.881965840117109, 2.3070745618511506, 0.0), # 29
(4.4625, 9.24, 7.32375, 3.885, 2.2575000000000003, 0.0, 6.562500000000001, 9.030000000000001, 5.8275, 4.8825, 2.31, 0.0), # 30
(4.47319183983376, 9.249720255681815, 7.323149356884057, 3.884918047385621, 2.257372225177305, 0.0, 6.556726763701484, 9.02948890070922, 5.827377071078432, 4.882099571256038, 2.312430063920454, 0.0), # 31
(4.4836528452685425, 9.259312045454546, 7.3213644202898545, 3.884673790849673, 2.2569916312056737, 0.0, 6.547834661835751, 9.027966524822695, 5.82701068627451, 4.880909613526569, 2.3148280113636366, 0.0), # 32
(4.493887715792838, 9.268774176136363, 7.3184206793478275, 3.8842696323529413, 2.2563623138297872, 0.0, 6.535910757121439, 9.025449255319149, 5.826404448529412, 4.878947119565218, 2.3171935440340907, 0.0), # 33
(4.503901150895141, 9.278105454545454, 7.314343623188405, 3.8837079738562093, 2.2554883687943263, 0.0, 6.521042112277196, 9.021953475177305, 5.825561960784314, 4.876229082125604, 2.3195263636363634, 0.0), # 34
(4.513697850063939, 9.287304687499997, 7.3091587409420296, 3.882991217320261, 2.2543738918439717, 0.0, 6.503315790021656, 9.017495567375887, 5.824486825980392, 4.872772493961353, 2.3218261718749993, 0.0), # 35
(4.523282512787724, 9.296370681818182, 7.302891521739131, 3.8821217647058828, 2.253022978723404, 0.0, 6.482818853073463, 9.012091914893617, 5.823182647058824, 4.868594347826087, 2.3240926704545455, 0.0), # 36
(4.532659838554988, 9.305302244318183, 7.295567454710145, 3.881102017973856, 2.2514397251773044, 0.0, 6.4596383641512585, 9.005758900709218, 5.821653026960784, 4.86371163647343, 2.3263255610795457, 0.0), # 37
(4.5418345268542195, 9.314098181818181, 7.287212028985508, 3.8799343790849674, 2.249628226950355, 0.0, 6.433861385973679, 8.99851290780142, 5.819901568627452, 4.858141352657005, 2.3285245454545453, 0.0), # 38
(4.5508112771739135, 9.322757301136363, 7.277850733695652, 3.87862125, 2.247592579787234, 0.0, 6.40557498125937, 8.990370319148935, 5.817931875, 4.8519004891304345, 2.330689325284091, 0.0), # 39
(4.559594789002558, 9.33127840909091, 7.267509057971015, 3.8771650326797387, 2.245336879432624, 0.0, 6.37486621272697, 8.981347517730496, 5.815747549019608, 4.845006038647344, 2.3328196022727274, 0.0), # 40
(4.568189761828645, 9.3396603125, 7.256212490942029, 3.8755681290849675, 2.2428652216312055, 0.0, 6.34182214309512, 8.971460886524822, 5.813352193627452, 4.837474993961353, 2.334915078125, 0.0), # 41
(4.576600895140665, 9.34790181818182, 7.2439865217391315, 3.8738329411764707, 2.2401817021276598, 0.0, 6.3065298350824595, 8.960726808510639, 5.810749411764706, 4.829324347826088, 2.336975454545455, 0.0), # 42
(4.584832888427111, 9.356001732954544, 7.230856639492753, 3.8719618709150327, 2.2372904166666667, 0.0, 6.26907635140763, 8.949161666666667, 5.80794280637255, 4.820571092995169, 2.339000433238636, 0.0), # 43
(4.592890441176471, 9.363958863636363, 7.216848333333333, 3.8699573202614377, 2.2341954609929076, 0.0, 6.229548754789272, 8.93678184397163, 5.804935980392157, 4.811232222222222, 2.3409897159090907, 0.0), # 44
(4.600778252877237, 9.371772017045453, 7.201987092391306, 3.8678216911764705, 2.230900930851064, 0.0, 6.188034107946028, 8.923603723404256, 5.801732536764706, 4.80132472826087, 2.3429430042613633, 0.0), # 45
(4.6085010230179035, 9.379440000000002, 7.186298405797103, 3.8655573856209147, 2.2274109219858156, 0.0, 6.144619473596536, 8.909643687943262, 5.798336078431372, 4.790865603864735, 2.3448600000000006, 0.0), # 46
(4.616063451086957, 9.386961619318182, 7.16980776268116, 3.8631668055555552, 2.223729530141844, 0.0, 6.099391914459438, 8.894918120567375, 5.794750208333333, 4.77987184178744, 2.3467404048295455, 0.0), # 47
(4.623470236572891, 9.394335681818182, 7.152540652173913, 3.8606523529411763, 2.21986085106383, 0.0, 6.052438493253375, 8.87944340425532, 5.790978529411765, 4.7683604347826085, 2.3485839204545456, 0.0), # 48
(4.630726078964194, 9.401560994318181, 7.134522563405797, 3.8580164297385626, 2.2158089804964543, 0.0, 6.003846272696985, 8.863235921985817, 5.787024644607844, 4.7563483756038645, 2.3503902485795454, 0.0), # 49
(4.6378356777493615, 9.408636363636361, 7.115778985507247, 3.8552614379084966, 2.211578014184397, 0.0, 5.953702315508913, 8.846312056737588, 5.782892156862745, 4.743852657004831, 2.3521590909090904, 0.0), # 50
(4.6448037324168805, 9.415560596590907, 7.096335407608696, 3.852389779411765, 2.2071720478723407, 0.0, 5.902093684407797, 8.828688191489363, 5.778584669117648, 4.73089027173913, 2.353890149147727, 0.0), # 51
(4.651634942455243, 9.4223325, 7.0762173188405795, 3.84940385620915, 2.2025951773049646, 0.0, 5.849107442112278, 8.810380709219858, 5.774105784313726, 4.717478212560386, 2.355583125, 0.0), # 52
(4.658334007352941, 9.428950880681818, 7.055450208333333, 3.8463060702614382, 2.1978514982269504, 0.0, 5.794830651340996, 8.791405992907801, 5.769459105392158, 4.703633472222222, 2.3572377201704544, 0.0), # 53
(4.6649056265984665, 9.435414545454544, 7.034059565217391, 3.843098823529412, 2.192945106382979, 0.0, 5.739350374812594, 8.771780425531915, 5.764648235294119, 4.689373043478261, 2.358853636363636, 0.0), # 54
(4.671354499680307, 9.441722301136364, 7.012070878623187, 3.8397845179738566, 2.1878800975177306, 0.0, 5.682753675245711, 8.751520390070922, 5.759676776960785, 4.674713919082125, 2.360430575284091, 0.0), # 55
(4.677685326086957, 9.447872954545453, 6.989509637681159, 3.8363655555555556, 2.1826605673758865, 0.0, 5.625127615358988, 8.730642269503546, 5.754548333333334, 4.65967309178744, 2.361968238636363, 0.0), # 56
(4.683902805306906, 9.453865312500001, 6.966401331521738, 3.832844338235294, 2.1772906117021273, 0.0, 5.566559257871065, 8.70916244680851, 5.749266507352941, 4.644267554347826, 2.3634663281250003, 0.0), # 57
(4.690011636828645, 9.459698181818181, 6.942771449275362, 3.8292232679738563, 2.1717743262411346, 0.0, 5.507135665500583, 8.687097304964539, 5.743834901960785, 4.628514299516908, 2.3649245454545453, 0.0), # 58
(0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0), # 59
)
passenger_allighting_rate = (
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 0
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 1
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 2
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 3
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 4
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 5
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 6
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 7
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 8
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 9
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 10
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 11
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 12
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 13
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 14
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 15
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 16
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 17
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 18
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 19
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 20
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 21
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 22
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 23
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 24
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 25
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 26
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 27
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 28
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 29
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 30
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 31
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 32
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 33
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 34
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 35
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 36
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 37
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 38
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 39
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 40
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 41
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 42
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 43
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 44
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 45
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 46
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 47
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 48
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 49
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 50
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 51
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 52
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 53
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 54
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 55
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 56
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 57
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 58
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 59
)
"""
Parameters for reproducibility. More information: https://numpy.org/doc/stable/reference/random/parallel.html
"""
#initial entropy
entropy = 258194110137029475889902652135037600173
#index for seed sequence child
child_seed_index = (
1, # 0
0, # 1
)
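As a hedged illustration of how stored parameters like these are used (per the NumPy parallel-RNG docs linked in the docstring; every name except `entropy` is an assumption for the sketch, and only the entropy value comes from this file):

```python
import numpy as np

# Reconstruct a reproducible generator from the stored entropy and a
# child index: SeedSequence(entropy).spawn(n) yields deterministic
# children, so child i always produces the same stream.
entropy = 258194110137029475889902652135037600173  # value stored above
ss = np.random.SeedSequence(entropy)
children = ss.spawn(2)             # one child per entry in child_seed_index
rng = np.random.default_rng(children[0])
sample = rng.random()              # deterministic given entropy and index
assert 0.0 <= sample < 1.0
```

Spawning the same child index from the same entropy always reproduces the same stream, which is the point of persisting these values.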
| 113.071642 | 212 | 0.729032 | 5,147 | 37,879 | 5.363124 | 0.228677 | 0.312998 | 0.24779 | 0.469497 | 0.329807 | 0.327996 | 0.327996 | 0.327996 | 0.327996 | 0.327996 | 0 | 0.818972 | 0.119169 | 37,879 | 334 | 213 | 113.41018 | 0.008362 | 0.03197 | 0 | 0.202532 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.015823 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
73774075b5af0dc267f3d0efd9c83e4014c24807 | 213 | py | Python | busker/signals.py | tinpan-io/django-busker | 52df06b82e15572d0cd9c9d13ba2d5136585bc2d | [
"MIT"
] | 2 | 2020-09-01T12:06:07.000Z | 2021-09-24T09:54:57.000Z | busker/signals.py | tinpan-io/django-busker | 52df06b82e15572d0cd9c9d13ba2d5136585bc2d | [
"MIT"
] | null | null | null | busker/signals.py | tinpan-io/django-busker | 52df06b82e15572d0cd9c9d13ba2d5136585bc2d | [
"MIT"
] | null | null | null | from django.dispatch import Signal
# TODO https://stackoverflow.com/a/18532655/3280582
# Signal(providing_args=...) was deprecated in Django 3.0 and removed in
# Django 4.0; document the sent kwargs instead of declaring them.
code_post_redeem = Signal()  # sends kwargs: request, code
file_pre_download = Signal()  # sends kwargs: request, file
| 35.5 | 62 | 0.779343 | 28 | 213 | 5.714286 | 0.75 | 0.1875 | 0.2375 | 0.325 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.076142 | 0.075117 | 213 | 5 | 63 | 42.6 | 0.736041 | 0.230047 | 0 | 0 | 0 | 0 | 0.135802 | 0 | 0 | 0 | 0 | 0.2 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
73ad97d9a645b1fb910921f6f4cd8bff39c4141f | 24 | py | Python | MyEssentials/myessentials/__init__.py | thonzyk/essentials | 16c0be05efc0790f08b10196e63dbf688bbe7df9 | [
"MIT"
] | null | null | null | MyEssentials/myessentials/__init__.py | thonzyk/essentials | 16c0be05efc0790f08b10196e63dbf688bbe7df9 | [
"MIT"
] | null | null | null | MyEssentials/myessentials/__init__.py | thonzyk/essentials | 16c0be05efc0790f08b10196e63dbf688bbe7df9 | [
"MIT"
] | null | null | null | from .datastats import * | 24 | 24 | 0.791667 | 3 | 24 | 6.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 24 | 1 | 24 | 24 | 0.904762 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
73b3e61d7cb1697b007f18596438f5927f74f2fc | 42 | py | Python | ipm_util/__init__.py | JiaweiZhuang/ipm_util | f9480e0f75153d11176e0aad2f533bcff8f67f50 | [
"MIT"
] | 4 | 2019-09-30T02:15:19.000Z | 2021-06-16T05:16:30.000Z | ipm_util/__init__.py | JiaweiZhuang/ipm_util | f9480e0f75153d11176e0aad2f533bcff8f67f50 | [
"MIT"
] | null | null | null | ipm_util/__init__.py | JiaweiZhuang/ipm_util | f9480e0f75153d11176e0aad2f533bcff8f67f50 | [
"MIT"
] | 1 | 2022-02-22T01:19:15.000Z | 2022-02-22T01:19:15.000Z | from .ipm_parser import log_to_dataframe
| 21 | 41 | 0.857143 | 7 | 42 | 4.714286 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.119048 | 42 | 1 | 42 | 42 | 0.891892 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
73e175d46c1eea46439568aa84bf8e258cbb4a66 | 86 | py | Python | python/getalp/wsd/optim/__init__.py | getalp/disambiguate-translate | 38ef754c786ded085d184633b21acc607902c098 | [
"MIT"
] | 53 | 2019-02-12T15:40:22.000Z | 2022-03-30T16:54:22.000Z | python/getalp/wsd/optim/__init__.py | getalp/disambiguate-translate | 38ef754c786ded085d184633b21acc607902c098 | [
"MIT"
] | 21 | 2019-06-11T15:21:17.000Z | 2022-02-05T11:53:38.000Z | python/getalp/wsd/optim/__init__.py | getalp/disambiguate-translate | 38ef754c786ded085d184633b21acc607902c098 | [
"MIT"
] | 19 | 2019-05-26T10:23:41.000Z | 2021-12-06T04:43:08.000Z | from .scheduler_fixed import SchedulerFixed
from .scheduler_noam import SchedulerNoam
| 28.666667 | 43 | 0.883721 | 10 | 86 | 7.4 | 0.7 | 0.351351 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.093023 | 86 | 2 | 44 | 43 | 0.948718 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fb7bc7a2a023015d76e552c7258da22d6cc822e6 | 3,053 | py | Python | tests/parsers/pe.py | jonathan-greig/plaso | b88a6e54c06a162295d09b016bddbfbfe7ca9070 | [
"Apache-2.0"
] | 6 | 2015-07-30T11:07:24.000Z | 2021-07-23T07:12:30.000Z | tests/parsers/pe.py | jonathan-greig/plaso | b88a6e54c06a162295d09b016bddbfbfe7ca9070 | [
"Apache-2.0"
] | null | null | null | tests/parsers/pe.py | jonathan-greig/plaso | b88a6e54c06a162295d09b016bddbfbfe7ca9070 | [
"Apache-2.0"
] | 1 | 2021-07-23T07:12:37.000Z | 2021-07-23T07:12:37.000Z | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""Tests for the PE file parser."""
import unittest
from plaso.lib import definitions
from plaso.parsers import pe
from tests.parsers import test_lib
class PECOFFTest(test_lib.ParserTestCase):
"""Tests for the PE file parser."""
def testParseFileObjectOnExecutable(self):
"""Tests the ParseFileObject on a PE executable (EXE) file."""
parser = pe.PEParser()
storage_writer = self._ParseFile(['test_pe.exe'], parser)
number_of_events = storage_writer.GetNumberOfAttributeContainers('event')
self.assertEqual(number_of_events, 3)
number_of_warnings = storage_writer.GetNumberOfAttributeContainers(
'extraction_warning')
self.assertEqual(number_of_warnings, 0)
number_of_warnings = storage_writer.GetNumberOfAttributeContainers(
'recovery_warning')
self.assertEqual(number_of_warnings, 0)
events = list(storage_writer.GetSortedEvents())
expected_event_values = {
'data_type': 'pe',
'date_time': '2015-04-21 14:53:56',
'pe_attribute': None,
'pe_type': 'Executable (EXE)',
'timestamp_desc': definitions.TIME_DESCRIPTION_CREATION}
self.CheckEventValues(storage_writer, events[2], expected_event_values)
expected_event_values = {
'data_type': 'pe',
'date_time': '2015-04-21 14:53:55',
'pe_attribute': 'DIRECTORY_ENTRY_IMPORT',
'pe_type': 'Executable (EXE)',
'timestamp_desc': definitions.TIME_DESCRIPTION_MODIFICATION}
self.CheckEventValues(storage_writer, events[1], expected_event_values)
expected_event_values = {
'data_type': 'pe',
'date_time': '2015-04-21 14:53:54',
'dll_name': 'USER32.dll',
'imphash': '8d0739063fc8f9955cc6696b462544ab',
'pe_attribute': 'DIRECTORY_ENTRY_DELAY_IMPORT',
'pe_type': 'Executable (EXE)',
'timestamp_desc': definitions.TIME_DESCRIPTION_MODIFICATION}
self.CheckEventValues(storage_writer, events[0], expected_event_values)
def testParseFileObjectOnDriver(self):
"""Tests the ParseFileObject on a PE driver (SYS) file."""
parser = pe.PEParser()
storage_writer = self._ParseFile(['test_driver.sys'], parser)
number_of_events = storage_writer.GetNumberOfAttributeContainers('event')
self.assertEqual(number_of_events, 1)
number_of_warnings = storage_writer.GetNumberOfAttributeContainers(
'extraction_warning')
self.assertEqual(number_of_warnings, 0)
number_of_warnings = storage_writer.GetNumberOfAttributeContainers(
'recovery_warning')
self.assertEqual(number_of_warnings, 0)
events = list(storage_writer.GetSortedEvents())
expected_event_values = {
'data_type': 'pe',
'date_time': '2015-04-21 14:53:54',
'pe_attribute': None,
'pe_type': 'Driver (SYS)',
'timestamp_desc': definitions.TIME_DESCRIPTION_CREATION}
self.CheckEventValues(storage_writer, events[0], expected_event_values)
if __name__ == '__main__':
unittest.main()
| 32.827957 | 77 | 0.706191 | 340 | 3,053 | 6.029412 | 0.255882 | 0.08878 | 0.062439 | 0.067317 | 0.801463 | 0.783902 | 0.761463 | 0.730244 | 0.730244 | 0.652683 | 0 | 0.037081 | 0.178513 | 3,053 | 92 | 78 | 33.184783 | 0.780303 | 0.069767 | 0 | 0.639344 | 0 | 0 | 0.201207 | 0.029099 | 0 | 0 | 0 | 0 | 0.098361 | 1 | 0.032787 | false | 0 | 0.098361 | 0 | 0.147541 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fbb23fbad679e726326a72610630b683eb1c6230 | 17 | py | Python | app/__init__.py | dr-rodriguez/DSII_GalCatWeb | 10d41452f1982c3c2f3ea681dde4932e474f2ee8 | [
"MIT"
] | 4 | 2015-05-12T17:18:19.000Z | 2020-06-23T09:49:23.000Z | flask-job-board/__init__.py | ntungare/Blockchain-Job-Forum | fd91033001896ce72799a90fb1523a961d7b0e45 | [
"MIT"
] | 2 | 2019-01-25T22:11:33.000Z | 2019-01-25T22:14:44.000Z | bdnyc_app/__init__.py | dr-rodriguez/BDNYC_WebApp | 2e1210822fb85c47ff6db286843a0ff4b829f5b7 | [
"MIT"
] | 7 | 2015-02-19T11:23:25.000Z | 2021-07-12T21:22:37.000Z | from app import * | 17 | 17 | 0.764706 | 3 | 17 | 4.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.176471 | 17 | 1 | 17 | 17 | 0.928571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
83a25740aa8c4e2401a9691d0579d22796c644cf | 192 | py | Python | views.py | willzhang05/congressional-record | 42243ebd3ba62204994e8e3064748d2157897ab0 | [
"MIT"
] | null | null | null | views.py | willzhang05/congressional-record | 42243ebd3ba62204994e8e3064748d2157897ab0 | [
"MIT"
] | 3 | 2020-03-24T16:46:16.000Z | 2021-02-02T21:56:54.000Z | views.py | willzhang05/congressional-record | 42243ebd3ba62204994e8e3064748d2157897ab0 | [
"MIT"
] | null | null | null | from flask import request, render_template, Response
from . import app
@app.route('/')
def index():
return render_template('index.html')
@app.route('/api')
def api():
return "asdf"
| 16 | 52 | 0.682292 | 26 | 192 | 4.961538 | 0.576923 | 0.217054 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.161458 | 192 | 11 | 53 | 17.454545 | 0.801242 | 0 | 0 | 0 | 0 | 0 | 0.098958 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0.25 | 0.25 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
83cc249f539aba9c499d8393491e9ff48b84731c | 8,051 | py | Python | qmsolve/time_dependent_solver/crank_nicolson.py | quantum-visualizations/qmsolve | f2ff1c6968053cae7e0d9b1a28d8c04287cb4a56 | [
"BSD-3-Clause"
] | 356 | 2021-04-05T15:48:49.000Z | 2022-03-30T07:43:51.000Z | qmsolve/time_dependent_solver/crank_nicolson.py | quantum-visualizations/qmsolve | f2ff1c6968053cae7e0d9b1a28d8c04287cb4a56 | [
"BSD-3-Clause"
] | 2 | 2021-04-28T18:20:43.000Z | 2021-08-20T17:54:15.000Z | qmsolve/time_dependent_solver/crank_nicolson.py | quantum-visualizations/qmsolve | f2ff1c6968053cae7e0d9b1a28d8c04287cb4a56 | [
"BSD-3-Clause"
] | 29 | 2021-04-04T23:01:12.000Z | 2022-03-19T15:01:23.000Z | import numpy as np
from .method import Method
import time
from ..util.constants import *
from ..particle_system import SingleParticle, TwoParticles
from scipy import sparse
from scipy.sparse import linalg
import progressbar
"""
Crank-Nicolson method for the Schrödinger equation: https://imsc.uni-graz.at/haasegu/Lectures/HPC-II/SS17/presentation1_Schroedinger-Equation_HPC2-seminar.pdf
Prototype and original implementation: https://gist.github.com/marl0ny/23947165652ccad73e55b01241afbe77
Jacobi iteration can be used to solve the system of equations that arises
when using the Crank-Nicolson method. This follows the article by
Sadovskyy et al. (https://arxiv.org/pdf/1409.8340), where the
Crank-Nicolson method with Jacobi iteration is used to solve the
Ginzburg-Landau equations, which are similar to the Schrödinger equation
but contain nonlinear terms and couplings with the four-vector potential.
"""
def jacobi(inv_diag: sparse.dia_matrix, lower_upper: sparse.dia_matrix,
b: np.ndarray, min_iter: int = 10, max_iter: int = 20, TOL = 0.001):
"""
Given the inverse diagonals of a matrix A and its lower and upper parts
without its diagonal, find the solution x for the system Ax = b.
Reference for Jacobi iteration:
https://en.wikipedia.org/wiki/Jacobi_method
"""
x = b.copy()
for i in range(min_iter):
x = inv_diag @ (b - lower_upper @ x)
for i in range(max_iter - min_iter):
x_ = inv_diag @ (b - lower_upper @ x)
rel_err = np.mean(np.abs(x - x_))
x = x_
if rel_err < TOL:
break
return x
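As a self-contained check of the same diagonal/off-diagonal split (the 3x3 system and iteration count here are made up for illustration, not part of the library):

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

# Jacobi split A = D + (L + U); iterate x <- D^{-1} (b - (L + U) x).
# Convergence is guaranteed when A is strictly diagonally dominant.
A = sparse.csr_matrix(np.array([[4.0, 1.0, 0.0],
                                [1.0, 5.0, 1.0],
                                [0.0, 1.0, 4.0]]))
b = np.array([1.0, 2.0, 3.0])

inv_D = sparse.diags(1.0 / A.diagonal(0))      # D^{-1}
L_plus_U = A - sparse.diags(A.diagonal(0))     # A without its diagonal

x = b.copy()
for _ in range(50):
    x = inv_D @ (b - L_plus_U @ x)

# The fixed point of the iteration solves A x = b; compare with a
# direct sparse solve.
x_direct = spsolve(A.tocsc(), b)
assert np.allclose(x, x_direct, atol=1e-8)
```

With a fixed iteration budget, as in `jacobi` above, the relative-change test (`TOL`) stands in for this direct comparison.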
class CrankNicolson(Method):
def __init__(self, simulation):
self.simulation = simulation
self.H = simulation.H
if self.H.potential_type == "matrix":
self.H.particle_system.get_observables(self.H)
self.simulation.Vmin = np.amin(self.H.Vgrid)
self.simulation.Vmax = np.amax(self.H.Vgrid)
def run(self, initial_wavefunction, total_time, dt, store_steps = 1):
self.simulation.store_steps = store_steps
dt_store = total_time/store_steps
self.simulation.total_time = total_time
Nt = int(np.round(total_time / dt))
Nt_per_store_step = int(np.round(dt_store / dt))
self.simulation.Nt_per_store_step = Nt_per_store_step
        # total_time/dt and dt_store/dt must be integers; otherwise dt is rounded so that Nt_per_store_step is an integer number of steps per stored frame
self.simulation.dt = dt_store/Nt_per_store_step
if isinstance(self.simulation.H.particle_system ,SingleParticle):
Ψ = np.zeros((store_steps + 1, self.H.N **self.H.ndim), dtype = np.complex128)
I = sparse.identity(self.H.N **self.H.ndim)
Ψ[0] = np.array(initial_wavefunction(self.H.particle_system)).reshape( self.H.N **self.H.ndim)
elif isinstance(self.simulation.H.particle_system,TwoParticles):
Ψ = np.zeros((store_steps + 1, self.H.N ** 2), dtype = np.complex128)
I = sparse.identity(self.H.N ** 2)
Ψ[0] = np.array(initial_wavefunction(self.H.particle_system)).reshape(self.H.N**2 )
m = self.H.particle_system.m
BETA = 0.5j*self.simulation.dt/hbar
H_matrix = self.H.T + self.H.V
A = I + BETA*H_matrix
B = I - BETA*H_matrix
#We are going to solve the equation A*Ψ_{i+1} = B*Ψ_{i} for Ψ_{i+1}
D = sparse.diags(A.diagonal(0), (0))
INV_D = sparse.diags(1.0/A.diagonal(0), (0))
L_PLUS_U = A - D
t0 = time.time()
bar = progressbar.ProgressBar()
for i in bar(range(store_steps)):
tmp = np.copy(Ψ[i])
for j in range(Nt_per_store_step):
B_dot_Ψ = B @ tmp
tmp = linalg.gcrotmk(A, B_dot_Ψ)[0]
#tmp = jacobi(INV_D, L_PLUS_U, B_dot_Ψ, min_iter=10, max_iter = 50, TOL = 0.00001)
Ψ[i+1] = tmp
print("Took", time.time() - t0)
if isinstance(self.simulation.H.particle_system ,SingleParticle):
self.simulation.Ψ = Ψ.reshape(store_steps + 1, *([self.H.N] *self.H.ndim ))
elif isinstance(self.simulation.H.particle_system,TwoParticles):
self.simulation.Ψ = Ψ.reshape(store_steps + 1, *([self.H.N] *2 ))
self.simulation.Ψmax = np.amax(np.abs(Ψ))
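A hedged, standalone sketch of the scheme `CrankNicolson.run` implements: a free particle on a 1-D grid with hbar = m = 1 (the grid parameters `N`, `dx`, `dt` below are illustrative, not the library's defaults). Because Crank-Nicolson is unitary for Hermitian H, the wavefunction norm is conserved:

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

N, dx, dt = 128, 0.1, 0.01
x = dx * (np.arange(N) - N // 2)

# Kinetic energy via a second-order finite-difference Laplacian
# (Dirichlet boundaries), so H is Hermitian.
lap = sparse.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(N, N)) / dx**2
H = -0.5 * lap

I = sparse.identity(N)
A = (I + 0.5j * dt * H).tocsc()   # corresponds to A = I + BETA*H_matrix
B = (I - 0.5j * dt * H).tocsc()   # corresponds to B = I - BETA*H_matrix

# Normalized Gaussian initial state; step A psi_next = B psi repeatedly.
psi = np.exp(-x**2).astype(np.complex128)
psi /= np.linalg.norm(psi)
for _ in range(100):
    psi = spsolve(A, B @ psi)

assert abs(np.linalg.norm(psi) - 1.0) < 1e-8
```

The class above solves the same system iteratively (`linalg.gcrotmk` or Jacobi) instead of the direct `spsolve` used here, which matters once the grid is large.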
class CrankNicolsonCupy(Method):
def __init__(self, simulation):
self.simulation = simulation
self.H = simulation.H
if self.H.potential_type == "matrix":
self.H.particle_system.get_observables(self.H)
self.simulation.Vmin = np.amin(self.H.Vgrid)
self.simulation.Vmax = np.amax(self.H.Vgrid)
def run(self, initial_wavefunction, total_time, dt, store_steps = 1):
import cupy as cp
from cupyx.scipy import sparse
self.simulation.store_steps = store_steps
dt_store = total_time/store_steps
self.simulation.total_time = total_time
Nt = int(np.round(total_time / dt))
Nt_per_store_step = int(np.round(dt_store / dt))
self.simulation.Nt_per_store_step = Nt_per_store_step
        # total_time/dt and dt_store/dt must be integers; otherwise dt is rounded so that Nt_per_store_step is an integer number of steps per stored frame
self.simulation.dt = dt_store/Nt_per_store_step
if isinstance(self.simulation.H.particle_system ,SingleParticle):
Ψ = cp.zeros((store_steps + 1, self.H.N **self.H.ndim), dtype = cp.complex128)
I = sparse.identity(self.H.N **self.H.ndim)
Ψ[0] = cp.array(initial_wavefunction(self.H.particle_system)).reshape( self.H.N **self.H.ndim)
elif isinstance(self.simulation.H.particle_system,TwoParticles):
Ψ = cp.zeros((store_steps + 1, self.H.N ** 2), dtype = cp.complex128)
I = sparse.identity(self.H.N ** 2)
Ψ[0] = cp.array(initial_wavefunction(self.H.particle_system)).reshape(self.H.N**2 )
m = self.H.particle_system.m
BETA = 0.5j*self.simulation.dt/hbar
H_matrix = sparse.csr.csr_matrix(self.H.T + self.H.V)
A = I + BETA*H_matrix
B = I - BETA*H_matrix
#We are going to solve the equation A*Ψ_{i+1} = B*Ψ_{i} for Ψ_{i+1}
D = sparse.diags(A.diagonal(0), (0))
INV_D = sparse.diags(1.0/A.diagonal(0), (0))
L_PLUS_U = A - D
def jacobi_cupy(inv_diag: sparse.dia_matrix, lower_upper: sparse.dia.dia_matrix,
b: cp.ndarray, min_iter: int = 10, max_iter: int = 20, TOL = 0.001):
"""
Given the inverse diagonals of a matrix A and its lower and upper parts
without its diagonal, find the solution x for the system Ax = b.
Reference for Jacobi iteration:
https://en.wikipedia.org/wiki/Jacobi_method
"""
x = b.copy()
for i in range(min_iter):
x = inv_diag @ (b - lower_upper @ x)
for i in range(max_iter - min_iter):
x_ = inv_diag @ (b - lower_upper @ x)
rel_err = cp.mean(cp.abs(x - x_))
x = x_
if rel_err < TOL:
break
return x
bar = progressbar.ProgressBar()
t0 = time.time()
for i in bar(range(store_steps)):
tmp = cp.copy(Ψ[i])
for j in range(Nt_per_store_step):
B_dot_Ψ = B @ tmp
tmp = jacobi_cupy(INV_D, L_PLUS_U, B_dot_Ψ, min_iter=10, max_iter = 50, TOL = 0.00001)
Ψ[i+1] = tmp
print("Took", time.time() - t0)
Ψ = Ψ.get()
if isinstance(self.simulation.H.particle_system ,SingleParticle):
self.simulation.Ψ = Ψ.reshape(store_steps + 1, *([self.H.N] *self.H.ndim ))
elif isinstance(self.simulation.H.particle_system,TwoParticles):
self.simulation.Ψ = Ψ.reshape(store_steps + 1, *([self.H.N] *2 ))
self.simulation.Ψmax = np.amax(np.abs(Ψ))
| 35.941964 | 158 | 0.623525 | 1,193 | 8,051 | 4.050293 | 0.17435 | 0.047599 | 0.049669 | 0.028974 | 0.790149 | 0.790149 | 0.790149 | 0.790149 | 0.778974 | 0.764073 | 0 | 0.022267 | 0.263694 | 8,051 | 223 | 159 | 36.103139 | 0.792848 | 0.109924 | 0 | 0.740157 | 0 | 0 | 0.003145 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.047244 | false | 0 | 0.07874 | 0 | 0.15748 | 0.015748 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
83d92b533648a4bac5abb1a8868738af3ec22a8a | 11,843 | py | Python | tests/test_app.py | babenek/CredSweeper | 4d69ec934b45fd2f68e00b636077e5edfd1ff6ca | [
"MIT"
] | 17 | 2021-10-22T00:29:46.000Z | 2022-03-21T03:05:56.000Z | tests/test_app.py | babenek/CredSweeper | 4d69ec934b45fd2f68e00b636077e5edfd1ff6ca | [
"MIT"
] | 29 | 2021-11-05T21:10:51.000Z | 2022-03-30T10:41:08.000Z | tests/test_app.py | babenek/CredSweeper | 4d69ec934b45fd2f68e00b636077e5edfd1ff6ca | [
"MIT"
] | 16 | 2021-11-05T20:39:54.000Z | 2022-03-11T00:57:32.000Z | import json
import os
import subprocess
import sys
import tempfile
import pytest
class TestApp:
def test_it_works_p(self) -> None:
dir_path = os.path.dirname(os.path.realpath(__file__))
target_path = os.path.join(dir_path, "samples", "password")
proc = subprocess.Popen([sys.executable, "-m", "credsweeper", "--path", target_path, "--log", "silence"],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
stdout, _stderr = proc.communicate()
output = " ".join(stdout.decode("UTF-8").split())
expected = f"""
rule: Password / severity: medium / line_data_list: [line: 'password = \"cackle!\"' / line_num: 1
/ path: {target_path} / value: 'cackle!' / entropy_validation: False]
/ api_validation: NOT_AVAILABLE / ml_validation: NOT_AVAILABLE\n
"""
expected = " ".join(expected.split())
assert output == expected
def test_it_works_with_ml_p(self) -> None:
dir_path = os.path.dirname(os.path.realpath(__file__))
target_path = os.path.join(dir_path, "samples", "password")
proc = subprocess.Popen(
[sys.executable, "-m", "credsweeper", "--path", target_path, "--ml_validation", "--log", "silence"],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
stdout, _stderr = proc.communicate()
output = " ".join(stdout.decode("UTF-8").split())
expected = f"""
rule: Password / severity: medium / line_data_list: [line: 'password = \"cackle!\"' / line_num: 1
/ path: {target_path} / value: 'cackle!' / entropy_validation: False]
/ api_validation: NOT_AVAILABLE / ml_validation: VALIDATED_KEY\n
"""
expected = " ".join(expected.split())
assert output == expected
def test_it_works_with_patch_p(self) -> None:
dir_path = os.path.dirname(os.path.realpath(__file__))
target_path = os.path.join(dir_path, "samples", "password.patch")
proc = subprocess.Popen([sys.executable, "-m", "credsweeper", "--diff_path", target_path, "--log", "silence"],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
stdout, _stderr = proc.communicate()
output = " ".join(stdout.decode("UTF-8").split())
expected = """
rule: Password / severity: medium / line_data_list: [line: ' "password": "dkajco1"' / line_num: 3
/ path: .changes/1.16.98.json / value: 'dkajco1' / entropy_validation: False]
/ api_validation: NOT_AVAILABLE / ml_validation: NOT_AVAILABLE\n
"""
expected = " ".join(expected.split())
assert output == expected
def test_it_works_with_multiline_in_patch_p(self) -> None:
dir_path = os.path.dirname(os.path.realpath(__file__))
target_path = os.path.join(dir_path, "samples", "multiline.patch")
proc = subprocess.Popen([sys.executable, "-m", "credsweeper", "--diff_path", target_path, "--log", "silence"],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
stdout, _stderr = proc.communicate()
output = " ".join(stdout.decode("UTF-8").split())
expected = """
rule: AWS Client ID / severity: high / line_data_list: [line: ' clid = "AKIAQWADE5R42RDZ4JEM"'
/ line_num: 4 / path: creds.py / value: 'AKIAQWADE5R42RDZ4JEM' / entropy_validation: False]
/ api_validation: NOT_AVAILABLE / ml_validation: NOT_AVAILABLE rule: AWS Multi / severity: high
/ line_data_list: [line: ' clid = "AKIAQWADE5R42RDZ4JEM"' / line_num: 4 / path: creds.py
/ value: 'AKIAQWADE5R42RDZ4JEM'
/ entropy_validation: False, line: ' token = "V84C7sDU001tFFodKU95USNy97TkqXymnvsFmYhQ"'
/ line_num: 5 / path: creds.py / value: 'V84C7sDU001tFFodKU95USNy97TkqXymnvsFmYhQ'
/ entropy_validation: True] / api_validation: NOT_AVAILABLE / ml_validation: NOT_AVAILABLE
rule: Token / severity: medium / line_data_list: [line: ' token = "V84C7sDU001tFFodKU95USNy97TkqXymnvsFmYhQ"'
/ line_num: 5 / path: creds.py / value: 'V84C7sDU001tFFodKU95USNy97TkqXymnvsFmYhQ'
/ entropy_validation: True] / api_validation: NOT_AVAILABLE / ml_validation: NOT_AVAILABLE\n
"""
expected = " ".join(expected.split())
assert output == expected
@pytest.mark.api_validation
def test_it_works_with_api_p(self) -> None:
dir_path = os.path.dirname(os.path.realpath(__file__))
target_path = os.path.join(dir_path, "samples", "google_api_key")
proc = subprocess.Popen(
[sys.executable, "-m", "credsweeper", "--path", target_path, "--api_validation", "--log", "silence"],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
stdout, _stderr = proc.communicate()
output = " ".join(stdout.decode("UTF-8").split())
expected = f"""
rule: Google API Key / severity: high / line_data_list: [line: 'AIzaGiReoGiCrackleCrackle12315618112315' / line_num: 1
/ path: {target_path} / value: 'AIzaGiReoGiCrackleCrackle12315618112315' / entropy_validation: True]
/ api_validation: INVALID_KEY / ml_validation: NOT_AVAILABLE\n
"""
expected = " ".join(expected.split())
assert output == expected
def test_it_works_n(self) -> None:
proc = subprocess.Popen([sys.executable, "-m", "credsweeper"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
_stdout, stderr = proc.communicate()
        # Merge consecutive whitespace into single spaces because stdout and stderr wrapping depends on the terminal size
output = " ".join(stderr.decode("UTF-8").split())
expected = """
usage: python -m credsweeper [-h] (--path PATH [PATH ...] | --diff_path PATH [PATH ...]) [--rules [PATH]] [--find-by-ext] [--ml_validation] [--ml_threshold FLOAT_OR_STR] [-b POSITIVE_INT]
[--api_validation] [-j POSITIVE_INT] [--skip_ignored] [--save-json [PATH]] [-l LOG_LEVEL]
python -m credsweeper: error: one of the arguments --path --diff_path is required
"""
expected = " ".join(expected.split())
assert output == expected
def test_patch_save_json_p(self) -> None:
dir_path = os.path.dirname(os.path.realpath(__file__))
target_path = os.path.join(dir_path, "samples", "password.patch")
json_filename = "unittest_output.json"
proc = subprocess.Popen([
sys.executable, "-m", "credsweeper", "--diff_path", target_path, "--save-json", json_filename, "--log",
"silence"
],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
_stdout, _stderr = proc.communicate()
assert os.path.exists("unittest_output_added.json") and os.path.exists("unittest_output_deleted.json")
os.remove("unittest_output_added.json")
os.remove("unittest_output_deleted.json")
    def test_find_tests_p(self) -> None:
        with tempfile.TemporaryDirectory() as tmp_dir:
            json_filename = os.path.join(tmp_dir, 'test_find_tests_p.json')
            tests_path = os.path.dirname(__file__)
            assert os.path.exists(tests_path)
            assert os.path.isdir(tests_path)
            proc = subprocess.Popen([
                sys.executable, "-m", "credsweeper", "--path", tests_path, "--save-json", json_filename, "--log",
                "silence"
            ],
                                    stdout=subprocess.PIPE,
                                    stderr=subprocess.PIPE)
            _stdout, _stderr = proc.communicate()
            assert os.path.exists(json_filename)
            with open(json_filename, "r") as json_file:
                report = json.load(json_file)
                assert len(report) > 111
    def test_patch_save_json_n(self) -> None:
        dir_path = os.path.dirname(os.path.realpath(__file__))
        target_path = os.path.join(dir_path, "samples", "password.patch")
        proc = subprocess.Popen([sys.executable, "-m", "credsweeper", "--diff_path", target_path, "--log", "silence"],
                                stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE)
        _stdout, _stderr = proc.communicate()
        assert not os.path.exists("unittest_output_added.json") and not os.path.exists("unittest_output_deleted.json")
    def test_find_by_ext_p(self) -> None:
        with tempfile.TemporaryDirectory() as tmp_dir:
            # .deR will not be found, so only 4 of these 5 extensions are reported
            for f in [".pem", ".crt", ".cer", ".csr", ".deR"]:
                file_path = os.path.join(tmp_dir, f"dummy{f}")
                assert not os.path.exists(file_path)
                open(file_path, "w").write("The quick brown fox jumps over the lazy dog")
            # these will not be found because the files are empty
            for f in [".jks", ".KeY"]:
                file_path = os.path.join(tmp_dir, f"dummy{f}")
                assert not os.path.exists(file_path)
                open(file_path, "w").close()
            # the ignored directory hides all files inside it
            ignored_dir = os.path.join(tmp_dir, "target")
            os.mkdir(ignored_dir)
            for f in [".pfx", ".p12"]:
                file_path = os.path.join(ignored_dir, f"dummy{f}")
                assert not os.path.exists(file_path)
                open(file_path, "w").write("The quick brown fox jumps over the lazy dog")
            json_filename = os.path.join(tmp_dir, "dummy.json")
            proc = subprocess.Popen([
                sys.executable, "-m", "credsweeper", "--path", tmp_dir, "--find-by-ext", "--save-json", json_filename,
                "--log", "silence"
            ],
                                    stdout=subprocess.PIPE,
                                    stderr=subprocess.PIPE)
            _stdout, _stderr = proc.communicate()
            assert os.path.exists(json_filename)
            with open(json_filename, "r") as json_file:
                report = json.load(json_file)
                assert len(report) == 4, f"{report}"
                for t in report:
                    assert t["line_data_list"][0]["line_num"] == -1
                    assert str(t["line_data_list"][0]["path"][-4:]) in [".pem", ".crt", ".cer", ".csr"]
    def test_find_by_ext_n(self) -> None:
        with tempfile.TemporaryDirectory() as tmp_dir:
            for f in [".pem", ".crt", ".cer", ".csr", ".der", ".pfx", ".p12", ".key", ".jks"]:
                file_path = os.path.join(tmp_dir, f"dummy{f}")
                assert not os.path.exists(file_path)
                open(file_path, "w").write("The quick brown fox jumps over the lazy dog")
            json_filename = os.path.join(tmp_dir, "dummy.json")
            proc = subprocess.Popen([
                sys.executable, "-m", "credsweeper", "--path", tmp_dir, "--save-json", json_filename, "--log", "silence"
            ],
                                    stdout=subprocess.PIPE,
                                    stderr=subprocess.PIPE)
            _stdout, _stderr = proc.communicate()
            assert os.path.exists(json_filename)
            with open(json_filename, "r") as json_file:
                report = json.load(json_file)
                assert len(report) == 0
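The CLI tests above compare credsweeper's usage text only after collapsing whitespace, since the terminal width changes how argparse wraps its help output. A minimal, standalone sketch of that normalization trick (the `normalize_ws` helper name and the sample strings are illustrative, not part of the test suite):

```python
def normalize_ws(text: str) -> str:
    """Collapse any run of whitespace (spaces, tabs, newlines) into single spaces."""
    return " ".join(text.split())


# Terminal-wrapped help text and its single-line form compare equal once normalized.
wrapped = "usage: prog [-h]\n    [--path PATH]"
flat = "usage: prog [-h] [--path PATH]"
assert normalize_ws(wrapped) == normalize_ws(flat)
```

This makes the assertion robust to line wrapping while still failing on any real wording change.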
83e51ba0967f94700a191f03dab8f6b36db107b3 | 22 | py | Python | SSD1306/__init__.py | jcksnvllxr80/MidiController | de6d3c983cd27408e88a744a0a4d3c887efa3d54 | [
"MIT"
] | null | null | null | SSD1306/__init__.py | jcksnvllxr80/MidiController | de6d3c983cd27408e88a744a0a4d3c887efa3d54 | [
"MIT"
] | null | null | null | SSD1306/__init__.py | jcksnvllxr80/MidiController | de6d3c983cd27408e88a744a0a4d3c887efa3d54 | [
"MIT"
] | null | null | null
from SSD1306 import *
83f19dba4e4301a9a1733cb0957e3c426efd03ca | 73 | py | Python | objectDetectionD3MWrapper/__init__.py | BalazsHoranyi/object-detection-d3m-wrapper | 3a5438bb1ea102476b57a8dc70e6e0dd791fe60f | [
"MIT"
] | null | null | null | objectDetectionD3MWrapper/__init__.py | BalazsHoranyi/object-detection-d3m-wrapper | 3a5438bb1ea102476b57a8dc70e6e0dd791fe60f | [
"MIT"
] | null | null | null | objectDetectionD3MWrapper/__init__.py | BalazsHoranyi/object-detection-d3m-wrapper | 3a5438bb1ea102476b57a8dc70e6e0dd791fe60f | [
"MIT"
] | 1 | 2020-02-06T03:44:11.000Z | 2020-02-06T03:44:11.000Z
from objectDetectionD3MWrapper.wrapper import ObjectDetectionRNPrimitive
83f626f15ac1592b0a271a31c38320d53a58d9c3 | 34,584 | py | Python | EEVM_Concrete.py | pisocrob/E-EVM | 3992261c1141de4cbbaf2f2febf63ec228cce72d | [
"MIT"
] | null | null | null | EEVM_Concrete.py | pisocrob/E-EVM | 3992261c1141de4cbbaf2f2febf63ec228cce72d | [
"MIT"
] | null | null | null | EEVM_Concrete.py | pisocrob/E-EVM | 3992261c1141de4cbbaf2f2febf63ec228cce72d | [
"MIT"
] | 1 | 2020-11-19T09:15:40.000Z | 2020-11-19T09:15:40.000Z | #-------------------CONRETE---------------#
from EtherCost import tools
import networkx as nx
import matplotlib.pyplot as plt
from networkx.drawing.nx_agraph import write_dot
import _pickle as pickle
import json
target = [] #source file split by lines
translation = [] #list of program opcodes replaced by Souper IR opcodes
var_dict = {} #Dict of each var's opcode
stack_counts = tools.stack_counts
global_dependants = []
ssa_op_dict = {} #To be operated on indentically to regular stack, but will hold opcodes instead of vars and values
global_state = [[0,0,[],[],[]]] #i, block number, stack, destination, call history
g = nx.DiGraph()
jumpi_dest_lst = []
jump_dest_lst = []
block_total = 0
def resolve_jump_dests(source_file):
    jump_map = {}
    push_offset = 0
    block = 0
    block_map = {}
    opcode_block = {}
    block_op_list = []
    global block_total
    with open(source_file, "r") as f:
        ops = f.read().splitlines()
    for i, each in enumerate(ops):
        if "PUSH" in each:
            push_op = each.split()[0]
            try:
                push_offset += int(push_op[-2:])
            except:
                push_offset += int(push_op[-1:])
        if each == "JUMPDEST":
            jump_map[i+push_offset] = i
    for i, each in enumerate(ops):
        if each == "JUMPDEST":
            opcode_block[block] = block_op_list
            block += 1
            block_op_list = []
            if block > block_total:
                block_total = block
        block_map[i] = block
        block_op_list.append(each)
    opcode_block[block_total] = block_op_list
    with open("opcode_block_map_"+source_file+".pickle", "wb") as f:
        pickle.dump(opcode_block, f, protocol=2)
    return jump_map, block_map
def format_source(source_file):
    with open(source_file, "r") as f:
        target = f.read().splitlines()
    for i, opcode in enumerate(target):
        if "PUSH" in opcode:
            if "0x" not in opcode:
                try:
                    push_op = opcode+" "+target[i+1]
                    translation.append(push_op)
                except IndexError:
                    print("Error formatting PUSH instruction at: ", i)
            else:
                translation.append(opcode)
        else:
            if "0x" not in opcode:
                translation.append(opcode)
    foutname = "translation_"+source_file[:-2]+"hsf"
    with open(foutname, "w") as f:
        for each in translation:
            f.write("%s\n" % each)
def _clean_opcode(opcode):
    if "PUSH" in opcode:
        return "PUSH"
    elif "DUP" in opcode:
        return "DUP"
    elif "SWAP" in opcode:
        return "SWAP"
    else:
        return opcode
def get_dependencies_long(i, opcode, target):
    dependants = [[opcode]]
    c = i-1
    if stack_counts[opcode][0] > 0:
        inputs_no = stack_counts[opcode][0]
        while inputs_no > 0:
            dependants[0].extend([target[c]])
            diff = (stack_counts[_clean_opcode(target[c])][1]) - (stack_counts[_clean_opcode(target[c])][0])
            inputs_no -= diff
            c -= 1
            # condition added to stop at a jump; this gives in-block dependencies
            if target[c] in ("JUMP", "JUMPI"):
                dependants[0].extend([target[c]])
                break
    return dependants
def _stack_pop(opcode, stack):
    for x in range(0, stack_counts[opcode][0]):
        del stack[0]
    return stack
def sym_ex(source_file, c=0, r_stack=[], r_var_count=0, r_call_history=[], r_j_prev=False):
    dest = 0
    stack = r_stack
    var_count = r_var_count
    call_history = r_call_history
    j_prev = r_j_prev
    with open(source_file, "r") as f:
        target = f.read().splitlines()
    hfs_op_map, block_map = resolve_jump_dests(source_file)
    i = c
    block_c = 0
    while i < len(target):
        hist = []
        var_increment = False  # switch to control incrementation of var names in SSA
        #---------------------- Stack related opcodes ----------------------#
        if "PUSH" in target[i]:
            stack.insert(0, (int(target[i].split(" ")[1], 16)))
            j_prev = False
        elif "DUP" in target[i]:
            try:
                stack_pos = int(target[i][-2:])
            except ValueError:
                stack_pos = int(target[i][-1:])
            global_dependants.extend(get_dependencies_long(i, _clean_opcode(target[i]), target))
            stack.insert(0, stack[stack_pos-1])
            j_prev = False
        elif "SWAP" in target[i]:
            stack_pos = 0
            try:
                stack_pos = int(target[i][-2:])
            except ValueError:
                stack_pos = int(target[i][-1:])
            global_dependants.extend(get_dependencies_long(i, _clean_opcode(target[i]), target))
            temp = stack[0]
            stack[0] = stack[stack_pos]
            stack[stack_pos] = temp
            j_prev = False
        elif target[i] == "POP":
            stack = _stack_pop(target[i], stack)
            j_prev = False
        #--------------------------- Mathematical & logical opcodes ------------------------------------------#
        elif target[i] == "EXP":
            global_dependants.extend(get_dependencies_long(i, target[i], target))
            var_name = "%"+str(var_count)
            ssa_op_dict[var_name] = i+1
            stack = _stack_pop(target[i], stack)
            stack.insert(0, var_name)
            var_increment = True
            j_prev = False
        elif target[i] == "AND":
            global_dependants.extend(get_dependencies_long(i, target[i], target))
            var_name = "%"+str(var_count)
            ssa_op_dict[var_name] = i+1
            stack = _stack_pop(target[i], stack)
            stack.insert(0, var_name)
            var_increment = True
            j_prev = False
        elif target[i] == "XOR":
            global_dependants.extend(get_dependencies_long(i, target[i], target))
            stack = _stack_pop(target[i], stack)
            var_name = "%"+str(var_count)
            ssa_op_dict[var_name] = i+1
            stack.insert(0, var_name)
            var_increment = True
            j_prev = False
        elif target[i] == "SUB":
            global_dependants.extend(get_dependencies_long(i, target[i], target))
            var_name = "%"+str(var_count)
            ssa_op_dict[var_name] = i+1
            if "%" in str(stack[0]) or "%" in str(stack[1]):
                var_name = "%"+str(var_count)
            else:
                var_name = str(int(stack[0])-int(stack[1]))
            stack = _stack_pop(target[i], stack)
            stack.insert(0, var_name)
            var_increment = True
            j_prev = False
        elif target[i] == "SDIV":
            global_dependants.extend(get_dependencies_long(i, target[i], target))
            var_name = "%"+str(var_count)
            if "%" in str(stack[0]) or "%" in str(stack[1]):
                var_name = "%"+str(var_count)
            else:
                var_name = str(int(stack[0])//int(stack[1]))
            ssa_op_dict[var_name] = i+1
            stack = _stack_pop(target[i], stack)
            stack.insert(0, var_name)
            var_increment = True
            j_prev = False
        elif target[i] == "SLT":
            global_dependants.extend(get_dependencies_long(i, target[i], target))
            var_name = "%"+str(var_count)
            if "%" in str(stack[0]) or "%" in str(stack[1]):
                var_name = "%"+str(var_count)
            else:
                var_name = str(int(stack[0])<int(stack[1]))
            ssa_op_dict[var_name] = i+1
            stack = _stack_pop(target[i], stack)
            stack.insert(0, var_name)
            var_increment = True
            j_prev = False
        elif target[i] == "ADD":
            global_dependants.extend(get_dependencies_long(i, target[i], target))
            var_name = "%"+str(var_count)
            if "%" in str(stack[0]) or "%" in str(stack[1]):
                var_name = "%"+str(var_count)
            else:
                var_name = str(int(stack[0])+int(stack[1]))
            ssa_op_dict[var_name] = i+1
            stack = _stack_pop(target[i], stack)
            stack.insert(0, var_name)
            var_increment = True
            j_prev = False
        elif target[i] == "MUL":
            global_dependants.extend(get_dependencies_long(i, target[i], target))
            var_name = "%"+str(var_count)
            if "%" in str(stack[0]) or "%" in str(stack[1]):
                var_name = "%"+str(var_count)
            else:
                var_name = str(int(stack[0])*int(stack[1]))
            ssa_op_dict[var_name] = i+1
            stack = _stack_pop(target[i], stack)
            stack.insert(0, var_name)
            var_increment = True
            j_prev = False
        elif target[i] == "EQ":
            global_dependants.extend(get_dependencies_long(i, target[i], target))
            var_name = "%"+str(var_count)
            if "%" in str(stack[0]) or "%" in str(stack[1]):
                var_name = "%"+str(var_count)
            else:
                var_name = str(int(stack[0])==int(stack[1]))
            ssa_op_dict[var_name] = i+1
            stack = _stack_pop(target[i], stack)
            stack.insert(0, var_name)
            var_increment = True
            j_prev = False
        elif target[i] == "OR":
            global_dependants.extend(get_dependencies_long(i, target[i], target))
            stack = _stack_pop(target[i], stack)
            var_name = "%"+str(var_count)
            ssa_op_dict[var_name] = i+1
            stack.insert(0, var_name)
            var_increment = True
            j_prev = False
        elif target[i] == "NOT":
            global_dependants.extend(get_dependencies_long(i, target[i], target))
            stack = _stack_pop(target[i], stack)
            var_name = "%"+str(var_count)
            ssa_op_dict[var_name] = i+1
            stack.insert(0, var_name)
            var_increment = True
            j_prev = False
        elif target[i] == "DIV":
            global_dependants.extend(get_dependencies_long(i, target[i], target))
            var_name = "%"+str(var_count)
            if "%" in str(stack[0]) or "%" in str(stack[1]):
                var_name = "%"+str(var_count)
            else:
                var_name = str(int(stack[0])//int(stack[1]))
            ssa_op_dict[var_name] = i+1
            stack = _stack_pop(target[i], stack)
            stack.insert(0, var_name)
            var_increment = True
            j_prev = False
        elif target[i] == "LT":
            global_dependants.extend(get_dependencies_long(i, target[i], target))
            var_name = "%"+str(var_count)
            if "%" in str(stack[0]) or "%" in str(stack[1]):
                var_name = "%"+str(var_count)
            else:
                var_name = str(int(stack[0])<int(stack[1]))
            ssa_op_dict[var_name] = i+1
            stack = _stack_pop(target[i], stack)
            stack.insert(0, var_name)
            var_increment = True
            j_prev = False
        elif target[i] == "SMOD":
            global_dependants.extend(get_dependencies_long(i, target[i], target))
            var_name = "%"+str(var_count)
            if "%" in str(stack[0]) or "%" in str(stack[1]):
                var_name = "%"+str(var_count)
            else:
                var_name = str(int(stack[0])%int(stack[1]))
            ssa_op_dict[var_name] = i+1
            stack = _stack_pop(target[i], stack)
            stack.insert(0, var_name)
            var_increment = True
            j_prev = False
        elif target[i] == "SIGNEXTEND":
            global_dependants.extend(get_dependencies_long(i, target[i], target))
            stack = _stack_pop(target[i], stack)
            var_name = "%"+str(var_count)
            ssa_op_dict[var_name] = i+1
            stack.insert(0, var_name)
            var_increment = True
            j_prev = False
        elif target[i] == "MOD":
            global_dependants.extend(get_dependencies_long(i, target[i], target))
            var_name = "%"+str(var_count)
            if "%" in str(stack[0]) or "%" in str(stack[1]):
                var_name = "%"+str(var_count)
            else:
                var_name = str(int(stack[0])%int(stack[1]))
            ssa_op_dict[var_name] = i+1
            stack = _stack_pop(target[i], stack)
            stack.insert(0, var_name)
            var_increment = True
            j_prev = False
        elif target[i] == "SGT":
            # operands from the stack are swapped to form the sgt check
            global_dependants.extend(get_dependencies_long(i, target[i], target))
            var_name = "%"+str(var_count)
            if "%" in str(stack[0]) or "%" in str(stack[1]):
                var_name = "%"+str(var_count)
            else:
                var_name = str(int(stack[0])>int(stack[1]))
            ssa_op_dict[var_name] = i+1
            stack = _stack_pop(target[i], stack)
            stack.insert(0, var_name)
            var_increment = True
            j_prev = False
        elif target[i] == "GT":
            # operands from the stack are swapped to form the gt check
            global_dependants.extend(get_dependencies_long(i, target[i], target))
            if "%" in str(stack[0]) or "%" in str(stack[1]):
                var_name = "%"+str(var_count)
            else:
                var_name = str(int(stack[0])>int(stack[1]))
            ssa_op_dict[var_name] = i+1
            stack = _stack_pop(target[i], stack)
            stack.insert(0, var_name)
            var_increment = True
            j_prev = False
        elif target[i] == "CALLDATACOPY":
            global_dependants.extend(get_dependencies_long(i, target[i], target))
            stack = _stack_pop(target[i], stack)
            j_prev = False
        elif target[i] == "EXTCODECOPY":
            global_dependants.extend(get_dependencies_long(i, target[i], target))
            stack = _stack_pop(target[i], stack)
            j_prev = False
        elif target[i] == "CALLDATALOAD":
            global_dependants.extend(get_dependencies_long(i, target[i], target))
            stack = _stack_pop(target[i], stack)
            var_name = "%"+str(var_count)
            ssa_op_dict[var_name] = i+1
            stack.insert(0, var_name)
            var_dict[var_name] = target[i]
            var_increment = True
            j_prev = False
        #---------------------- Stack related ops that require a symbolic var ----------------------#
        elif target[i] == "ADDRESS":
            global_dependants.extend(get_dependencies_long(i, target[i], target))
            var_name = "%"+str(var_count)
            ssa_op_dict[var_name] = i+1
            stack.insert(0, var_name)
            var_dict[var_name] = target[i]
            var_increment = True
            j_prev = False
        elif target[i] == "BALANCE":
            global_dependants.extend(get_dependencies_long(i, target[i], target))
            stack = _stack_pop(target[i], stack)
            var_name = "%"+str(var_count)
            ssa_op_dict[var_name] = i+1
            stack.insert(0, var_name)
            var_dict[var_name] = target[i]
            var_increment = True
            j_prev = False
        elif target[i] == "ORIGIN":
            global_dependants.extend(get_dependencies_long(i, target[i], target))
            var_name = "%"+str(var_count)
            ssa_op_dict[var_name] = i+1
            stack.insert(0, var_name)
            var_dict[var_name] = target[i]
            var_increment = True
            j_prev = False
        elif target[i] == "EXTCODESIZE":
            global_dependants.extend(get_dependencies_long(i, target[i], target))
            stack = _stack_pop(target[i], stack)
            var_name = "%"+str(var_count)
            ssa_op_dict[var_name] = i+1
            stack.insert(0, var_name)
            var_dict[var_name] = target[i]
            var_increment = True
            j_prev = False
        elif target[i] == "BLOCKHASH":
            global_dependants.extend(get_dependencies_long(i, target[i], target))
            stack = _stack_pop(target[i], stack)
            var_name = "%"+str(var_count)
            ssa_op_dict[var_name] = i+1
            stack.insert(0, var_name)
            var_dict[var_name] = target[i]
            var_increment = True
            j_prev = False
        elif target[i] == "COINBASE":
            global_dependants.extend(get_dependencies_long(i, target[i], target))
            var_name = "%"+str(var_count)
            ssa_op_dict[var_name] = i+1
            stack.insert(0, var_name)
            var_dict[var_name] = target[i]
            var_increment = True
            j_prev = False
        elif target[i] == "TIMESTAMP":
            global_dependants.extend(get_dependencies_long(i, target[i], target))
            var_name = "%"+str(var_count)
            ssa_op_dict[var_name] = i+1
            stack.insert(0, var_name)
            var_dict[var_name] = target[i]
            var_increment = True
            j_prev = False
        elif target[i] == "NUMBER":
            global_dependants.extend(get_dependencies_long(i, target[i], target))
            var_name = "%"+str(var_count)
            ssa_op_dict[var_name] = i+1
            stack.insert(0, var_name)
            var_dict[var_name] = target[i]
            var_increment = True
            j_prev = False
        elif target[i] == "DIFFICULTY":
            global_dependants.extend(get_dependencies_long(i, target[i], target))
            var_name = "%"+str(var_count)
            ssa_op_dict[var_name] = i+1
            stack.insert(0, var_name)
            var_dict[var_name] = target[i]
            var_increment = True
            j_prev = False
        elif target[i] == "GASLIMIT":
            global_dependants.extend(get_dependencies_long(i, target[i], target))
            var_name = "%"+str(var_count)
            ssa_op_dict[var_name] = i+1
            stack.insert(0, var_name)
            var_dict[var_name] = target[i]
            var_increment = True
            j_prev = False
        elif target[i] == "PC":
            global_dependants.extend(get_dependencies_long(i, target[i], target))
            var_name = "%"+str(var_count)
            ssa_op_dict[var_name] = i+1
            stack.insert(0, var_name)
            var_dict[var_name] = target[i]
            var_increment = True
            j_prev = False
        elif target[i] == "GAS":
            global_dependants.extend(get_dependencies_long(i, target[i], target))
            var_name = "%"+str(var_count)
            ssa_op_dict[var_name] = i+1
            stack.insert(0, var_name)
            var_dict[var_name] = target[i]
            var_increment = True
            j_prev = False
        elif target[i] == "MLOAD":
            global_dependants.extend(get_dependencies_long(i, target[i], target))
            stack = _stack_pop(target[i], stack)
            var_name = "%"+str(var_count)
            ssa_op_dict[var_name] = i+1
            stack.insert(0, var_name)
            var_dict[var_name] = target[i]
            var_increment = True
            j_prev = False
        elif target[i] == "CREATE":
            global_dependants.extend(get_dependencies_long(i, target[i], target))
            stack = _stack_pop(target[i], stack)
            var_name = "%"+str(var_count)
            ssa_op_dict[var_name] = i+1
            stack.insert(0, var_name)
            var_dict[var_name] = target[i]
            var_increment = True
            j_prev = False
        elif target[i] == "CALL":
            global_dependants.extend(get_dependencies_long(i, target[i], target))
            stack = _stack_pop(target[i], stack)
            var_name = "%"+str(var_count)
            ssa_op_dict[var_name] = i+1
            stack.insert(0, var_name)
            var_dict[var_name] = target[i]
            var_increment = True
            j_prev = False
        elif target[i] == "CALLCODE":
            global_dependants.extend(get_dependencies_long(i, target[i], target))
            stack = _stack_pop(target[i], stack)
            var_name = "%"+str(var_count)
            ssa_op_dict[var_name] = i+1
            stack.insert(0, var_name)
            var_dict[var_name] = target[i]
            var_increment = True
            j_prev = False
        elif target[i] == "DELEGATECALL":
            global_dependants.extend(get_dependencies_long(i, target[i], target))
            stack = _stack_pop(target[i], stack)
            var_name = "%"+str(var_count)
            ssa_op_dict[var_name] = i+1
            stack.insert(0, var_name)
            var_dict[var_name] = target[i]
            var_increment = True
            j_prev = False
        elif target[i] == "SLOAD":
            global_dependants.extend(get_dependencies_long(i, target[i], target))
            stack = _stack_pop(target[i], stack)
            var_name = "%"+str(var_count)
            ssa_op_dict[var_name] = i+1
            stack.insert(0, var_name)
            var_dict[var_name] = target[i]
            var_increment = True
            j_prev = False
        elif target[i] == "MSIZE":
            global_dependants.extend(get_dependencies_long(i, target[i], target))
            var_name = "%"+str(var_count)
            ssa_op_dict[var_name] = i+1
            stack.insert(0, var_name)
            var_dict[var_name] = target[i]
            var_increment = True
            j_prev = False
        elif target[i] == "CALLDATASIZE":
            global_dependants.extend(get_dependencies_long(i, target[i], target))
            var_name = "%"+str(var_count)
            ssa_op_dict[var_name] = i+1
            stack.insert(0, var_name)
            var_dict[var_name] = target[i]
            var_increment = True
            j_prev = False
        elif target[i] == "KECCAK256":
            global_dependants.extend(get_dependencies_long(i, target[i], target))
            stack = _stack_pop(target[i], stack)
            var_name = "%"+str(var_count)
            ssa_op_dict[var_name] = i+1
            stack.insert(0, var_name)
            var_dict[var_name] = target[i]
            var_increment = True
            j_prev = False
        elif target[i] == "SHA3":
            global_dependants.extend(get_dependencies_long(i, target[i], target))
            stack = _stack_pop(target[i], stack)
            var_name = "%"+str(var_count)
            ssa_op_dict[var_name] = i+1
            stack.insert(0, var_name)
            var_dict[var_name] = target[i]
            var_increment = True
            j_prev = False
        elif target[i] == "CALLVALUE":
            global_dependants.extend(get_dependencies_long(i, target[i], target))
            var_name = "%"+str(var_count)
            ssa_op_dict[var_name] = i+1
            stack.insert(0, var_name)
            var_dict[var_name] = target[i]
            var_increment = True
            j_prev = False
        elif target[i] == "CALLER":
            global_dependants.extend(get_dependencies_long(i, target[i], target))
            var_name = "%"+str(var_count)
            ssa_op_dict[var_name] = i+1
            stack.insert(0, var_name)
            var_dict[var_name] = target[i]
            var_increment = True
            j_prev = False
        elif target[i] == "RETURNDATACOPY":
            global_dependants.extend(get_dependencies_long(i, target[i], target))
            stack = _stack_pop(target[i], stack)
            j_prev = False
        elif target[i] == "RETURNDATASIZE":
            global_dependants.extend(get_dependencies_long(i, target[i], target))
            var_name = "%"+str(var_count)
            ssa_op_dict[var_name] = i+1
            stack.insert(0, var_name)
            var_dict[var_name] = target[i]
            var_increment = True
            j_prev = False
        elif target[i] == "ISZERO":
            global_dependants.extend(get_dependencies_long(i, target[i], target))
            stack = _stack_pop(target[i], stack)
            var_name = "%"+str(var_count)
            ssa_op_dict[var_name] = i+1
            stack.insert(0, var_name)
            var_dict[var_name] = target[i]
            var_increment = True
            j_prev = False
        #--------------------------------- Store ops ----------------------------------------#
        elif "STORE" in target[i]:
            global_dependants.extend(get_dependencies_long(i, target[i], target))
            # currently memory/storage isn't modeled, so the value is simply popped
            stack = _stack_pop(_clean_opcode(target[i]), stack)
            j_prev = False
        elif "LOG" in target[i]:
            global_dependants.extend(get_dependencies_long(i, target[i], target))
            stack = _stack_pop(target[i], stack)
            j_prev = False
        #------------------------------------- JUMP ops ---------------------------------------#
        elif target[i] == "JUMPDEST":
            if j_prev == False:
                g.add_edge(block_map[i-1], block_map[i])
                call_history.append(block_map[i-1])
            j_prev = False
        elif target[i] == "JUMP":
            global_dependants.extend(get_dependencies_long(i, target[i], target))
            if "%" in str(stack[0]):
                print("DYNAMIC JUMP at line ", i)
                global_state.append([i, block_map[i], list(stack), [dest], call_history])
                call_history = []
                break
            else:
                try:
                    dest = hfs_op_map[(stack[0])]
                    g.add_edge(block_map[i], block_map[dest])
                    call_history.append(block_map[i])
                    j_prev = True
                except:
                    print("Bad JUMPDEST passed to JUMP at ", i)
                    global_state.append([i, block_map[i], list(stack), [], call_history])
                    call_history = []
                    break
                jump_dest_lst.insert(0, dest)
                if len(jump_dest_lst) > 1:
                    if dest == jump_dest_lst[1]:
                        if dest != 0:
                            global_state.append([i, block_map[i], list(stack), [dest], call_history])
                        else:
                            global_state.append([i, block_map[i], list(stack), [], call_history])
                        call_history = []
                        break
                stack = _stack_pop(target[i], stack)
                i = dest-1
        elif target[i] == "JUMPI":
            if "%" in str(stack[1]):
                if "%" in str(stack[0]):
                    print("DYNAMIC JUMP", i)
                    global_state.append([i, block_map[i], list(stack), [dest], call_history])
                    g.add_edge(block_map[i], block_map[i+1])
                    call_history.append(block_map[i])
                    print("EX called from DYNAMIC")
                    stack = _stack_pop(target[i], stack)
                    j_prev = False
                    sym_ex(source_file, i+1, list(stack), var_count, list(call_history), j_prev)
                    call_history = []
                    break
                else:
                    try:
                        dest = hfs_op_map[(stack[0])]
                    except:
                        print("Bad JUMPDEST passed to JUMPI at ", i)
                        global_state.append([i, block_map[i], list(stack), [dest], call_history])
                        g.add_edge(block_map[i], block_map[i+1])
                        call_history.append(block_map[i])
                        stack = _stack_pop(target[i], stack)
                        j_prev = False
                        sym_ex(source_file, i+1, list(stack), var_count, list(call_history), j_prev)
                        call_history = []
                        break
                    if len(jumpi_dest_lst) > 1:
                        if (dest, i) in jumpi_dest_lst:
                            global_state.append([i, block_map[i], list(stack), [dest], call_history])
                            print("RE-ENTRY DETECTED (JUMPI/DEST): ", block_map[i], ", ", block_map[dest])
                            call_history = []
                            break
                    jumpi_dest_lst.insert(0, (dest, i))
                    global_state.append([i, block_map[i], list(stack), [dest], call_history])
                    g.add_edge(block_map[i], block_map[dest])
                    g.add_edge(block_map[i], block_map[i+1])
                    call_history.append(block_map[i])
                    stack = _stack_pop(target[i], stack)
                    j_prev = True
                    sym_ex(source_file, dest, list(stack), var_count, list(call_history), j_prev)
                    sym_ex(source_file, i+1, list(stack), var_count, list(call_history), j_prev)
            elif str(stack[1]) == "True" or str(stack[1]) == "1":
                if "%" in str(stack[0]):
                    print("DYNAMIC JUMP", i)
                    global_state.append([i, block_map[i], list(stack), [dest], call_history])
                    g.add_edge(block_map[i], block_map[i+1])
                    call_history.append(block_map[i])
                    print("EX called from DYNAMIC")
                    stack = _stack_pop(target[i], stack)
                    call_history.append(block_map[i])
                    call_history = []
                    break
                else:
                    try:
                        dest = hfs_op_map[(stack[0])]
                    except:
                        print("Bad JUMPDEST passed to JUMPI at ", i)
                        global_state.append([i, block_map[i], list(stack), [dest], call_history])
                        g.add_edge(block_map[i], block_map[i+1])
                        call_history.append(block_map[i])
                        stack = _stack_pop(target[i], stack)
                        call_history = []
                        break
                    if len(jumpi_dest_lst) > 1:
                        if (dest, i) in jumpi_dest_lst:
                            global_state.append([i, block_map[i], list(stack), [dest], call_history])
                            print("RE-ENTRY DETECTED (JUMPI/DEST): ", block_map[i], ", ", block_map[dest])
                            call_history = []
                            break
                    jumpi_dest_lst.insert(0, (dest, i))
                    global_state.append([i, block_map[i], list(stack), [dest], call_history])
                    g.add_edge(block_map[i], block_map[dest])
                    call_history.append(block_map[i])
                    stack = _stack_pop(target[i], stack)
                    j_prev = True
                    sym_ex(source_file, dest, list(stack), var_count, list(call_history), j_prev)
                    call_history.append(block_map[i])
            elif str(stack[1]) == "False" or str(stack[1]) == "0":
                if len(jumpi_dest_lst) > 1:
                    if (dest, i) in jumpi_dest_lst:
                        global_state.append([i, block_map[i], list(stack), [dest], call_history])
                        print("RE-ENTRY DETECTED (JUMPI/DEST): ", block_map[i], ", ", block_map[dest])
                        call_history = []
                        break
                jumpi_dest_lst.insert(0, (dest, i))
                global_state.append([i, block_map[i], list(stack), [dest], call_history])
                g.add_edge(block_map[i], block_map[i+1])
                call_history.append(block_map[i])
                stack = _stack_pop(target[i], stack)
                j_prev = True
                sym_ex(source_file, i+1, list(stack), var_count, list(call_history), j_prev)
            else:
                print("JUMPI ERROR!: ", stack[1])
                call_history = []
                break
        elif target[i] == "RETURN":
            global_state.append([i, block_map[i], list(stack), [], call_history])
            call_history = []
            break
        elif target[i] == "STOP":
            global_state.append([i, block_map[i], list(stack), [], call_history])
            call_history = []
            break
        elif target[i] == "REVERT":
            global_state.append([i, block_map[i], list(stack), [], call_history])
            call_history = []
            break
        elif target[i] == "SUICIDE":
            global_state.append([i, block_map[i], list(stack), [], call_history])
            call_history = []
            break
        elif target[i] == "SELFDESTRUCT":
            global_state.append([i, block_map[i], list(stack), [], call_history])
            call_history = []
            break
        elif target[i] == "INVALID":
            global_state.append([i, block_map[i], list(stack), [], call_history])
            call_history = []
            break
        else:
            print(i, ": ", target[i])
        if var_increment == True:
            var_count += 1
        if (target[i] == "JUMP") or (target[i] == "JUMPI"):
            global_state.append([i, block_map[i], list(stack), [dest], call_history])
        else:
            global_state.append([i, block_map[i], list(stack), [], call_history])
        i += 1
# Uncomment a line below to run with one of the test files
#target_file = "translation_greeter_mortal_remix_op_rt.op"
#target_file = "translation_multisig_remix_rt.op"
target_file = "translation_SimpleCoinToken.op"
#target_file = "translation_remix_GolemMultisig_0x7da82C7AB4771ff031b66538D2fB9b0B047f6CF9.op"
#target_file = "translation_RaidenMultiSigWallet_0x00C7122633A4EF0BC72f7D02456EE2B11E97561e.op"
#------------------------------------------------------------#
sym_ex(target_file)
with open(target_file, "r") as f:
    lines = f.read().splitlines()
    y = len(lines)
with open("info_"+target_file[:-2]+"pickle", "wb") as f:
    pickle.dump(global_state, f, protocol=2)
nx.draw(g, pos=nx.spring_layout(g), with_labels=True)
#nx.write_yaml(g, target_file+"_graph.yaml")
with open("graph_"+target_file[:-2]+"json", "w") as f:
    f.write(json.dumps(nx.readwrite.node_link_data(g)))
code_coverage = (nx.number_of_nodes(g)/(block_total+1))*100
gd = nx.descendants(g, 0)
print(code_coverage)
print("No of blocks ", block_total+1)
print("nodes: ", nx.number_of_nodes(g))
print("Number of loops: ", len(list(nx.simple_cycles(g))))
plt.show()
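The `resolve_jump_dests` function above maps byte-level jump targets back to disassembly line indices by tracking the extra bytes that PUSH immediates occupy. A minimal standalone sketch of that same idea on a toy opcode list (the `map_jumpdests` name and the sample program are illustrative only, not taken from the analyzed contracts):

```python
def map_jumpdests(ops):
    """Map byte offsets of JUMPDEST opcodes to their line indices.

    Each PUSHn instruction occupies 1 + n bytes of bytecode but only one
    disassembly line, so a running push_offset keeps the byte-level program
    counter in sync with the line index.
    """
    jump_map = {}
    push_offset = 0
    for i, op in enumerate(ops):
        if op == "JUMPDEST":
            jump_map[i + push_offset] = i
        if op.startswith("PUSH"):
            # "PUSH1 0x03" -> immediate width 1
            push_offset += int(op[4:].split()[0])
    return jump_map


# Toy program: the PUSH1 immediate shifts the JUMPDEST one byte to the right.
ops = ["PUSH1 0x03", "JUMP", "STOP", "JUMPDEST"]
assert map_jumpdests(ops) == {4: 3}  # byte offset 4 -> line index 3
```

A `JUMP` whose target is looked up through this map lands on the right line even when many PUSH immediates precede the destination.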
83fbedd0f20ce0f4ff2e1f6b78032a9fd28d2038 | 348 | py | Python | test/common/constants.py | ga4gh/refget-cloud | c39a65acba9818414789f004cced487562012bf0 | [
"Apache-2.0"
] | null | null | null | test/common/constants.py | ga4gh/refget-cloud | c39a65acba9818414789f004cced487562012bf0 | [
"Apache-2.0"
] | 3 | 2021-04-30T21:12:42.000Z | 2021-06-02T02:11:45.000Z | test/common/constants.py | ga4gh/refget-cloud | c39a65acba9818414789f004cced487562012bf0 | [
"Apache-2.0"
] | null | null | null | TRUNC512_PHAGE = "2085c82d80500a91dd0b8aa9237b0e43f1c07809bd6e6785"
TRUNC512_CEREVISIAE = "959cb1883fc1ca9ae1394ceb475a356ead1ecceff5824ae7"
TRUNC512_NONEXISTENT = "222222222222222222222222222222222222222222222222"
FILESERVER_PROPS_DICT = {
    "source.base_url": "http://localhost:8080",
    "source.metadata_path": "/sequence/{seqid}/metadata"
}
f7ec264e12e0f09e73094c57747bb0a11eb7ad56 | 16,163 | py | Python | tests/test_generators.py | shadofren/deeposlandia | 3dcb511482aff9c62bffd383e92055920c7a7e85 | [
"MIT"
] | null | null | null | tests/test_generators.py | shadofren/deeposlandia | 3dcb511482aff9c62bffd383e92055920c7a7e85 | [
"MIT"
] | null | null | null | tests/test_generators.py | shadofren/deeposlandia | 3dcb511482aff9c62bffd383e92055920c7a7e85 | [
"MIT"
] | null | null | null | """Unit test related to the generator building and feeding
"""
import pytest
import numpy as np
from deeposlandia import generator, utils
def test_feature_detection_labelling_concise():
"""Test `feature_detection_labelling` function in `generator` module by considering a concise
labelling, *i.e.* all labels are represented into the array:
    * as a preliminary verification, check if passing string labels raises a ValueError
exception
* test if output shape is first input shape (batch size) + an additional dimension given by the
`label_ids` length
    * test if both representations provide the same information (the native array on the one
    hand and its one-hot version on the other)
"""
a = np.array([[[[10, 10, 200], [10, 10, 200], [10, 10, 200]],
[[200, 200, 200], [200, 200, 200], [10, 10, 200]],
[[200, 200, 200], [200, 200, 200], [200, 200, 200]]],
[[[10, 200, 10], [10, 200, 10], [10, 10, 200]],
[[200, 10, 10], [10, 200, 10], [10, 10, 200]],
[[10, 200, 10], [200, 10, 10], [10, 10, 200]]]])
labels = np.unique(a.reshape(-1, 3), axis=0).tolist()
wrong_config = [{'id': '0', 'color': [10, 10, 200], 'is_evaluate': True},
{'id': '1', 'color': [200, 10, 10], 'is_evaluate': True},
{'id': '2', 'color': [10, 200, 10], 'is_evaluate': True},
{'id': '3', 'color': [200, 200, 200], 'is_evaluate': True}]
with pytest.raises(ValueError):
b = generator.feature_detection_labelling(a, wrong_config)
config = [{'id': 0, 'color': [10, 10, 200], 'is_evaluate': True},
{'id': 1, 'color': [200, 10, 10], 'is_evaluate': True},
{'id': 2, 'color': [10, 200, 10], 'is_evaluate': True},
{'id': 3, 'color': [200, 200, 200], 'is_evaluate': True}]
b = generator.feature_detection_labelling(a, config)
assert b.shape == (a.shape[0], len(labels))
assert b.tolist() == [[True, False, False, True],
[True, True, True, False]]
def test_feature_detection_labelling_sparse():
"""Test `feature_detection_labelling` function in `generator` module by considering a sparse
labelling, *i.e.* the array contains unknown values (to mimic the non-evaluated label
situations):
    * as a preliminary verification, check if passing string labels raises a ValueError
exception
* test if label length is different from the list of values in the array
* test if output shape is first input shape (batch size) + an additional dimension given by the
`label_ids` length
    * test if both representations provide the same information (the native array on the one
    hand and its one-hot version on the other)
"""
a = np.array([[[[10, 10, 200], [10, 10, 200], [10, 10, 200], [200, 10, 10]],
[[200, 200, 200], [200, 200, 200], [10, 10, 200], [200, 10, 10]],
[[200, 200, 200], [200, 200, 200], [200, 200, 200], [10, 10, 200]],
[[200, 200, 200], [200, 200, 200], [200, 200, 200], [10, 10, 200]]],
[[[200, 10, 10], [200, 10, 10], [10, 200, 10], [200, 10, 10]],
[[200, 200, 200], [10, 200, 10], [10, 200, 10], [10, 200, 10]],
[[200, 10, 10], [200, 10, 10], [200, 10, 10], [200, 200, 200]],
[[200, 10, 10], [200, 10, 10], [10, 200, 10], [200, 200, 200]]]])
labels = np.unique(a.reshape(-1, 3), axis=0).tolist()[:-1]
wrong_config = [{'id': '0', 'color': [10, 10, 200], 'is_evaluate': True},
{'id': '1', 'color': [200, 10, 10], 'is_evaluate': True},
{'id': '2', 'color': [10, 200, 10], 'is_evaluate': True}]
with pytest.raises(ValueError):
b = generator.feature_detection_labelling(a, wrong_config)
config = [{'id': 0, 'color': [10, 10, 200], 'is_evaluate': True},
{'id': 1, 'color': [200, 10, 10], 'is_evaluate': True},
{'id': 2, 'color': [10, 200, 10], 'is_evaluate': True}]
b = generator.feature_detection_labelling(a, config)
assert len(labels) != np.amax(a) - np.amin(a) + 1
assert b.tolist() == [[True, True, False],
[False, True, True]]
assert b.shape == (a.shape[0], len(labels))
def test_featdet_mapillary_generator(mapillary_image_size,
mapillary_sample,
mapillary_sample_config,
nb_channels):
"""Test the data generator for the Mapillary dataset
"""
BATCH_SIZE = 10
config = utils.read_config(mapillary_sample_config)
label_ids = [x['id'] for x in config["labels"]]
gen = generator.create_generator("mapillary", "feature_detection",
mapillary_sample,
mapillary_image_size,
BATCH_SIZE,
config["labels"])
item = next(gen)
    assert len(item) == 2
im_shape = item[0].shape
assert im_shape == (BATCH_SIZE, mapillary_image_size, mapillary_image_size, nb_channels)
label_shape = item[1].shape
assert label_shape == (BATCH_SIZE, len(label_ids))
def test_featdet_shape_generator(shapes_image_size, shapes_sample, shapes_sample_config, nb_channels):
"""Test the data generator for the shape dataset
"""
BATCH_SIZE = 10
config = utils.read_config(shapes_sample_config)
label_ids = [x['id'] for x in config["labels"]]
gen = generator.create_generator("shapes", "feature_detection", shapes_sample, shapes_image_size, BATCH_SIZE, config["labels"])
item = next(gen)
assert len(item) == 2
im_shape = item[0].shape
assert im_shape == (BATCH_SIZE, shapes_image_size, shapes_image_size, nb_channels)
label_shape = item[1].shape
assert label_shape == (BATCH_SIZE, len(label_ids))
def test_semantic_segmentation_labelling_concise():
"""Test `semantic_segmentation_labelling` function in `generator` module by considering a
concise labelling, *i.e.* the labels correspond to array values
    * as a preliminary verification, check if passing string labels raises a ValueError
exception
* test if output shape is input shape + an additional dimension given by the
`label_ids` length
    * test if both representations provide the same information (the native array on the
    one hand and its one-hot version on the other)
"""
a = np.array([[[[200, 10, 10], [200, 10, 10], [200, 200, 200]],
[[200, 200, 200], [200, 200, 200], [200, 10, 10]],
[[200, 200, 200], [200, 200, 200], [200, 200, 200]]],
[[[200, 10, 10], [200, 10, 10], [10, 10, 200]],
[[10, 200, 10], [10, 200, 10], [10, 10, 200]],
[[200, 10, 10], [200, 10, 10], [10, 10, 200]]]])
labels = np.unique(a.reshape(-1, 3), axis=0).tolist()
wrong_config = [{'id': '0', 'color': [10, 10, 200], 'is_evaluate': True},
{'id': '1', 'color': [200, 10, 10], 'is_evaluate': True},
{'id': '2', 'color': [10, 200, 10], 'is_evaluate': True},
{'id': '3', 'color': [200, 200, 200], 'is_evaluate': True}]
asum, _ = np.histogram(a.reshape(-1), range=(np.amin(a), np.amax(a)))
with pytest.raises(ValueError):
b = generator.semantic_segmentation_labelling(a, wrong_config)
config = [{'id': 0, 'color': [10, 10, 200], 'is_evaluate': True},
{'id': 1, 'color': [200, 10, 10], 'is_evaluate': True},
{'id': 2, 'color': [10, 200, 10], 'is_evaluate': True},
{'id': 3, 'color': [200, 200, 200], 'is_evaluate': True}]
b = generator.semantic_segmentation_labelling(a, config)
assert b.shape == (a.shape[0], a.shape[1], a.shape[2], len(labels))
assert b.tolist() == [[[[False, True, False, False],
[False, True, False, False],
[False, False, False, True]],
[[False, False, False, True],
[False, False, False, True],
[False, True, False, False]],
[[False, False, False, True],
[False, False, False, True],
[False, False, False, True]]],
[[[False, True, False, False],
[False, True, False, False],
[True, False, False, False]],
[[False, False, True, False],
[False, False, True, False],
[True, False, False, False]],
[[False, True, False, False],
[False, True, False, False],
[True, False, False, False]]]]
def test_semantic_segmentation_labelling_sparse():
"""Test `semantic_segmentation_labelling` function in `generator` module by considering a
sparse labelling, *i.e.* the array contains unknown values (to mimic the non-evaluated label
situations)
    * as a preliminary verification, check if passing string labels raises a ValueError
exception
* test if output shape is input shape + an additional dimension given by the
`label_ids` length
    * test if both representations provide the same information (the native array on the
    one hand and its one-hot version on the other)
"""
a = np.array([[[[200, 10, 10], [200, 10, 10], [200, 200, 200]],
[[200, 200, 200], [200, 200, 200], [200, 10, 10]],
[[200, 200, 200], [100, 100, 100], [200, 200, 200]]],
[[[200, 10, 10], [200, 10, 10], [10, 10, 200]],
[[200, 200, 200], [100, 100, 100], [10, 10, 200]],
[[200, 10, 10], [200, 10, 10], [10, 10, 200]]]])
asum, _ = np.histogram(a.reshape(-1), range=(np.amin(a), np.amax(a)))
wrong_config = [{'id': '0', 'color': [10, 10, 200], 'is_evaluate': True},
{'id': '2', 'color': [10, 200, 10], 'is_evaluate': True},
{'id': '3', 'color': [200, 200, 200], 'is_evaluate': True}]
with pytest.raises(ValueError):
b = generator.semantic_segmentation_labelling(a, wrong_config)
config = [{'id': 0, 'color': [10, 10, 200], 'is_evaluate': True},
{'id': 2, 'color': [10, 200, 10], 'is_evaluate': True},
{'id': 3, 'color': [200, 200, 200], 'is_evaluate': True}]
labels = [item["id"] for item in config]
b = generator.semantic_segmentation_labelling(a, config)
assert len(labels) != np.amax(a) - np.amin(a) + 1
assert b.shape == (a.shape[0], a.shape[1], a.shape[2], len(labels))
assert b.tolist() == [[[[False, False, False],
[False, False, False],
[False, False, True]],
[[False, False, True],
[False, False, True],
[False, False, False]],
[[False, False, True],
[False, False, False],
[False, False, True]]],
[[[False, False, False],
[False, False, False],
[True, False, False]],
[[False, False, True],
[False, False, False],
[True, False, False]],
[[False, False, False],
[False, False, False],
[True, False, False]]]]
def test_semseg_mapillary_generator(mapillary_image_size,
mapillary_sample,
mapillary_sample_config,
nb_channels):
"""Test the data generator for the Mapillary dataset
"""
BATCH_SIZE = 10
config = utils.read_config(mapillary_sample_config)
label_ids = [x['id'] for x in config["labels"]]
gen = generator.create_generator("mapillary", "semantic_segmentation",
mapillary_sample,
mapillary_image_size,
BATCH_SIZE, config["labels"])
item = next(gen)
    assert len(item) == 2
im_shape = item[0].shape
assert im_shape == (BATCH_SIZE, mapillary_image_size, mapillary_image_size, nb_channels)
label_shape = item[1].shape
assert label_shape == (BATCH_SIZE, mapillary_image_size, mapillary_image_size, len(label_ids))
def test_semseg_shape_generator(shapes_image_size, shapes_sample, shapes_sample_config, nb_channels):
"""Test the data generator for the shape dataset
"""
BATCH_SIZE = 10
config = utils.read_config(shapes_sample_config)
label_ids = [x['id'] for x in config["labels"]]
gen = generator.create_generator("shapes", "semantic_segmentation",
shapes_sample, shapes_image_size,
BATCH_SIZE, config["labels"])
item = next(gen)
assert len(item) == 2
im_shape = item[0].shape
assert im_shape == (BATCH_SIZE, shapes_image_size, shapes_image_size, nb_channels)
label_shape = item[1].shape
assert label_shape == (BATCH_SIZE, shapes_image_size, shapes_image_size, len(label_ids))
def test_semseg_aerial_generator(aerial_image_size, aerial_sample,
aerial_sample_config, nb_channels):
"""Test the data generator for the AerialImage dataset
"""
BATCH_SIZE = 4
config = utils.read_config(aerial_sample_config)
label_ids = [x['id'] for x in config["labels"]]
gen = generator.create_generator("aerial", "semantic_segmentation",
aerial_sample,
aerial_image_size,
BATCH_SIZE, config["labels"])
item = next(gen)
    assert len(item) == 2
im_shape = item[0].shape
assert im_shape == (BATCH_SIZE, aerial_image_size, aerial_image_size, nb_channels)
label_shape = item[1].shape
assert label_shape == (BATCH_SIZE, aerial_image_size, aerial_image_size, len(label_ids))
def test_semseg_tanzania_generator(tanzania_image_size, tanzania_sample,
tanzania_sample_config, nb_channels):
"""Test the data generator for the Open AI Tanzania dataset
"""
BATCH_SIZE = 3
config = utils.read_config(tanzania_sample_config)
label_ids = [x['id'] for x in config["labels"]]
gen = generator.create_generator("tanzania", "semantic_segmentation",
tanzania_sample,
tanzania_image_size,
BATCH_SIZE, config["labels"])
item = next(gen)
    assert len(item) == 2
im_shape = item[0].shape
assert im_shape == (BATCH_SIZE, tanzania_image_size, tanzania_image_size, nb_channels)
label_shape = item[1].shape
assert label_shape == (BATCH_SIZE, tanzania_image_size, tanzania_image_size, len(label_ids))
def test_wrong_model_dataset_generator(shapes_sample_config):
"""Test a wrong model and wrong dataset
"""
dataset = "fake"
model = "conquer_the_world"
IMAGE_SIZE = 10
BATCH_SIZE = 10
datapath = ("./tests/data/" + dataset + "/training")
config = utils.read_config(shapes_sample_config)
# wrong dataset name
with pytest.raises(ValueError) as excinfo:
generator.create_generator(dataset, 'feature_detection', datapath, IMAGE_SIZE, BATCH_SIZE, config["labels"])
assert str(excinfo.value) == "Wrong dataset name {}".format(dataset)
# wrong model name
with pytest.raises(ValueError) as excinfo:
generator.create_generator('shapes', model, datapath, IMAGE_SIZE, BATCH_SIZE, config["labels"])
assert str(excinfo.value) == "Wrong model name {}".format(model)
| 51.474522 | 131 | 0.558374 | 1,962 | 16,163 | 4.441896 | 0.076453 | 0.066781 | 0.073322 | 0.070224 | 0.886862 | 0.865519 | 0.862421 | 0.855881 | 0.837636 | 0.813884 | 0 | 0.08433 | 0.301553 | 16,163 | 313 | 132 | 51.638978 | 0.687661 | 0.160366 | 0 | 0.612335 | 0 | 0 | 0.066158 | 0.006286 | 0 | 0 | 0 | 0 | 0.132159 | 1 | 0.048458 | false | 0 | 0.013216 | 0 | 0.061674 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
790045b9940a233b7fe5b3ea902b024bfb745fc8 | 18 | py | Python | lemons/__init__.py | jakebrehm/ezpz | 42d539bc37aa0c3789030ab4a1cae960d56bd5ac | [
"MIT"
] | null | null | null | lemons/__init__.py | jakebrehm/ezpz | 42d539bc37aa0c3789030ab4a1cae960d56bd5ac | [
"MIT"
] | null | null | null | lemons/__init__.py | jakebrehm/ezpz | 42d539bc37aa0c3789030ab4a1cae960d56bd5ac | [
"MIT"
] | null | null | null | from .gui import * | 18 | 18 | 0.722222 | 3 | 18 | 4.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 18 | 1 | 18 | 18 | 0.866667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f743cd7ab751308bc4d4f35c10d79417a2eea563 | 49 | py | Python | enthought/pyface/image_button.py | enthought/etsproxy | 4aafd628611ebf7fe8311c9d1a0abcf7f7bb5347 | [
"BSD-3-Clause"
] | 3 | 2016-12-09T06:05:18.000Z | 2018-03-01T13:00:29.000Z | enthought/pyface/image_button.py | enthought/etsproxy | 4aafd628611ebf7fe8311c9d1a0abcf7f7bb5347 | [
"BSD-3-Clause"
] | 1 | 2020-12-02T00:51:32.000Z | 2020-12-02T08:48:55.000Z | enthought/pyface/image_button.py | enthought/etsproxy | 4aafd628611ebf7fe8311c9d1a0abcf7f7bb5347 | [
"BSD-3-Clause"
] | null | null | null | # proxy module
from pyface.image_button import *
| 16.333333 | 33 | 0.795918 | 7 | 49 | 5.428571 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 49 | 2 | 34 | 24.5 | 0.904762 | 0.244898 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e3a2e26ec5a16733cadc5175d079ef518e70ffa0 | 167 | py | Python | src/megatest/util.py | emlynoregan/megatestworking | a02c0b7c30f8eb2644ed0ca612d4d90de35e1ec1 | [
"Apache-2.0"
] | null | null | null | src/megatest/util.py | emlynoregan/megatestworking | a02c0b7c30f8eb2644ed0ca612d4d90de35e1ec1 | [
"Apache-2.0"
] | null | null | null | src/megatest/util.py | emlynoregan/megatestworking | a02c0b7c30f8eb2644ed0ca612d4d90de35e1ec1 | [
"Apache-2.0"
] | null | null | null | import time
def DateTimeToUnixTimestampMicrosec(aDateTime):
    # Python 3: use int() (long() no longer exists) for microseconds since the epoch
    return int(time.mktime(aDateTime.timetuple()) * 1000000 + aDateTime.microsecond) if aDateTime else 0
| 33.4 | 105 | 0.802395 | 18 | 167 | 7.444444 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.054054 | 0.113772 | 167 | 4 | 106 | 41.75 | 0.851351 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
e3bd36e7c40c9bf1c235f9106357bc622624a659 | 158 | py | Python | ysApi/__init__.py | Yousign/yousign-api-client-python | 63b55e9180a7d2577ecc27d54b3cb94da75e7c0f | [
"Apache-2.0"
] | null | null | null | ysApi/__init__.py | Yousign/yousign-api-client-python | 63b55e9180a7d2577ecc27d54b3cb94da75e7c0f | [
"Apache-2.0"
] | 4 | 2015-11-17T20:09:03.000Z | 2022-01-13T17:25:11.000Z | ysApi/__init__.py | Yousign/yousign-api-client-python | 63b55e9180a7d2577ecc27d54b3cb94da75e7c0f | [
"Apache-2.0"
] | 9 | 2015-06-02T10:54:47.000Z | 2022-02-28T16:02:07.000Z | from apiClient import ApiClient
from fileToSign import FileToSign, Signer, VisibleOptions
__all__ = ['ApiClient', 'FileToSign', 'Signer', 'VisibleOptions']
| 26.333333 | 65 | 0.78481 | 15 | 158 | 8 | 0.466667 | 0.266667 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.113924 | 158 | 5 | 66 | 31.6 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e3cad2fc2a4c940d5103cdea644f932b5f5d121a | 12,593 | py | Python | benchmark/default_model.py | MARL-NJU/SMARTS | 8881e07ad83aea83e1b02d98a1bf65d851a3e392 | [
"MIT"
] | 5 | 2021-06-15T05:06:10.000Z | 2021-12-01T05:11:49.000Z | benchmark/default_model.py | Duckkkky/smarts | 77fca0605b060d3a922400a9e85db8b28aeb6ce3 | [
"MIT"
] | null | null | null | benchmark/default_model.py | Duckkkky/smarts | 77fca0605b060d3a922400a9e85db8b28aeb6ce3 | [
"MIT"
] | 1 | 2021-04-18T00:17:05.000Z | 2021-04-18T00:17:05.000Z | """
This file contains the default networks for RLlib training,
and can be used for agent evaluation.
"""
import pickle
import tensorflow as tf
from pathlib import Path
from ray.rllib.models import ModelCatalog
from ray.rllib.utils import try_import_tf
from ray.rllib.agents.trainer import with_common_config
from smarts.core.agent import Agent
from benchmark.agents import load_config
tf1, tf, tfv = try_import_tf()
BASE_DIR = Path(__file__).expanduser().absolute().parent.parent
class RLLibTFCheckpointAgent(Agent):
def __init__(self, load_path, algorithm, policy_name, yaml_path):
load_path = str(load_path)
if algorithm == "ppo":
from ray.rllib.agents.ppo.ppo_tf_policy import PPOTFPolicy as LoadPolicy
elif algorithm in "a2c":
from ray.rllib.agents.a3c.a3c_tf_policy import A3CTFPolicy as LoadPolicy
from ray.rllib.agents.a3c import DEFAULT_CONFIG
elif algorithm == "pg":
from ray.rllib.agents.pg.pg_tf_policy import PGTFPolicy as LoadPolicy
elif algorithm == "dqn":
from ray.rllib.agents.dqn import DQNTFPolicy as LoadPolicy
elif algorithm == "maac":
from benchmark.agents.maac.tf_policy import CA2CTFPolicy as LoadPolicy
from benchmark.agents.maac.tf_policy import DEFAULT_CONFIG
elif algorithm == "maddpg":
from benchmark.agents.maddpg.tf_policy import MADDPG2TFPolicy as LoadPolicy
from benchmark.agents.maddpg.tf_policy import DEFAULT_CONFIG
elif algorithm == "mfac":
from benchmark.agents.mfac.tf_policy import MFACTFPolicy as LoadPolicy
from benchmark.agents.mfac.tf_policy import DEFAULT_CONFIG
elif algorithm == "networked_pg":
from benchmark.agents.networked_pg.tf_policy import (
NetworkedPG as LoadPolicy,
)
from benchmark.agents.networked_pg.tf_policy import (
PG_DEFAULT_CONFIG as DEFAULT_CONFIG,
)
else:
raise ValueError(f"Unsupported algorithm: {algorithm}")
yaml_path = BASE_DIR / yaml_path
load_path = BASE_DIR / f"log/results/run/{load_path}"
config = load_config(yaml_path)
observation_space = config["policy"][1]
action_space = config["policy"][2]
pconfig = DEFAULT_CONFIG
pconfig["model"].update(config["policy"][-1].get("model", {}))
pconfig["agent_id"] = policy_name
self._prep = ModelCatalog.get_preprocessor_for_space(observation_space)
self._sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph())
with tf.compat.v1.name_scope(policy_name):
# Observation space needs to be flattened before passed to the policy
flat_obs_space = self._prep.observation_space
policy = LoadPolicy(flat_obs_space, action_space, pconfig)
self._sess.run(tf.compat.v1.global_variables_initializer())
objs = pickle.load(open(load_path, "rb"))
objs = pickle.loads(objs["worker"])
state = objs["state"]
weights = state[policy_name]
policy.set_weights(weights)
# for op in tf.get_default_graph().get_operations():
# print(str(op.name))
# These tensor names were found by inspecting the trained model
if algorithm == "ppo":
# CRUCIAL FOR SAFETY:
# We use Tensor("split") instead of Tensor("add") to force
# PPO to be deterministic.
self._input_node = self._sess.graph.get_tensor_by_name(
f"{policy_name}/observation:0"
)
self._output_node = self._sess.graph.get_tensor_by_name(
f"{policy_name}/split:0"
)
elif algorithm == "dqn":
self._input_node = self._sess.graph.get_tensor_by_name(
f"{policy_name}/observations:0"
)
self._output_node = tf.argmax(
input=self._sess.graph.get_tensor_by_name(
f"{policy_name}/value_out/BiasAdd:0"
),
axis=1,
)
elif algorithm == "maac":
self._input_node = self._sess.graph.get_tensor_by_name(
f"{policy_name}/policy-inputs:0"
)
self._output_node = tf.argmax(
input=self._sess.graph.get_tensor_by_name(
f"{policy_name}/logits_out/BiasAdd:0"
),
axis=1,
)
elif algorithm == "maddpg":
self._input_node = self._sess.graph.get_tensor_by_name(
f"{policy_name}/obs_2:0"
)
self._output_node = tf.argmax(
input=self._sess.graph.get_tensor_by_name(
f"{policy_name}/actor/AGENT_2_actor_RelaxedOneHotCategorical_1/sample/AGENT_2_actor_exp/forward/Exp:0"
)
)
else:
self._input_node = self._sess.graph.get_tensor_by_name(
f"{policy_name}/observations:0"
)
self._output_node = tf.argmax(
input=self._sess.graph.get_tensor_by_name(
f"{policy_name}/fc_out/BiasAdd:0"
),
axis=1,
)
def __del__(self):
self._sess.close()
def act(self, obs):
obs = self._prep.transform(obs)
res = self._sess.run(self._output_node, feed_dict={self._input_node: [obs]})
action = res[0]
return action
class RLLibTFSavedModelAgent(Agent):
def __init__(self, load_path, algorithm, policy_name, observation_space):
load_path = str(load_path)
self._prep = ModelCatalog.get_preprocessor_for_space(observation_space)
self._sess = tf.compat.v1.Session(graph=tf.Graph())
tf.compat.v1.saved_model.load(
self._sess, export_dir=load_path, tags=["serve"], clear_devices=True,
)
# These tensor names were found by inspecting the trained model
if algorithm == "PPO":
# CRUCIAL FOR SAFETY:
# We use Tensor("split") instead of Tensor("add") to force
# PPO to be deterministic.
self._input_node = self._sess.graph.get_tensor_by_name(
f"{policy_name}/observation:0"
)
self._output_node = self._sess.graph.get_tensor_by_name(
f"{policy_name}/split:0"
)
# todo: need to check
elif algorithm == "DQN":
self._input_node = self._sess.graph.get_tensor_by_name(
f"{policy_name}/observations:0"
)
self._output_node = tf.argmax(
input=self._sess.graph.get_tensor_by_name(
f"{policy_name}/value_out/BiasAdd:0"
),
axis=1,
)
else:
self._input_node = self._sess.graph.get_tensor_by_name(
f"{policy_name}/observations:0"
)
self._output_node = tf.argmax(
input=self._sess.graph.get_tensor_by_name(
f"{policy_name}/fc_out/BiasAdd:0"
),
axis=1,
)
def __del__(self):
self._sess.close()
def act(self, obs):
obs = self._prep.transform(obs)
res = self._sess.run(self._output_node, feed_dict={self._input_node: [obs]})
action = res[0]
return action
class BatchRLLibTFCheckpointAgent(Agent):
def __init__(
self, load_path, algorithm, policy_name, observation_space, action_space
):
load_path = str(load_path)
if algorithm == "PPO":
from ray.rllib.agents.ppo.ppo_tf_policy import PPOTFPolicy as LoadPolicy
elif algorithm in ["A2C", "A3C"]:
from ray.rllib.agents.a3c.a3c_tf_policy import A3CTFPolicy as LoadPolicy
elif algorithm == "PG":
from ray.rllib.agents.pg.pg_tf_policy import PGTFPolicy as LoadPolicy
elif algorithm == "DQN":
from ray.rllib.agents.dqn.dqn_policy import DQNTFPolicy as LoadPolicy
else:
raise ValueError(f"Unsupported algorithm: {algorithm}")
self._prep = ModelCatalog.get_preprocessor_for_space(observation_space)
self._sess = tf.compat.v1.Session(graph=tf.Graph())
with tf.compat.v1.name_scope(policy_name):
# obs_space need to be flattened before passed to PPOTFPolicy
flat_obs_space = self._prep.observation_space
            # use the constructor's action_space argument (self._action_space is never assigned)
            policy = LoadPolicy(flat_obs_space, action_space, {})
objs = pickle.load(open(load_path, "rb"))
objs = pickle.loads(objs["worker"])
state = objs["state"]
weights = state[policy_name]
policy.set_weights(weights)
# These tensor names were found by inspecting the trained model
if algorithm == "PPO":
# CRUCIAL FOR SAFETY:
# We use Tensor("split") instead of Tensor("add") to force
# PPO to be deterministic.
self._input_node = self._sess.graph.get_tensor_by_name(
f"{policy_name}/observation:0"
)
self._output_node = self._sess.graph.get_tensor_by_name(
f"{policy_name}/split:0"
)
        elif algorithm == "DQN":
self._input_node = self._sess.graph.get_tensor_by_name(
f"{policy_name}/observations:0"
)
self._output_node = tf.argmax(
input=self._sess.graph.get_tensor_by_name(
f"{policy_name}/value_out/BiasAdd:0"
),
axis=1,
)
else:
self._input_node = self._sess.graph.get_tensor_by_name(
f"{policy_name}/observations:0"
)
self._output_node = tf.argmax(
input=self._sess.graph.get_tensor_by_name(
f"{policy_name}/fc_out/BiasAdd:0"
),
axis=1,
)
def __del__(self):
self._sess.close()
def act(self, obs):
agent_id = list(obs.keys())
obs = list(obs.values())
obs = [self._prep.transform(o) for o in obs]
res = self._sess.run(self._output_node, feed_dict={self._input_node: obs})
actions = res
actions = dict(zip(agent_id, actions))
return actions
class BatchRLLibTFSavedModelAgent(Agent):
def __init__(self, load_path, algorithm, policy_name, observation_space):
load_path = str(load_path)
self._prep = ModelCatalog.get_preprocessor_for_space(observation_space)
self._sess = tf.compat.v1.Session(graph=tf.Graph())
tf.compat.v1.saved_model.load(
self._sess, export_dir=load_path, tags=["serve"], clear_devices=True,
)
# These tensor names were found by inspecting the trained model
if algorithm == "PPO":
# CRUCIAL FOR SAFETY:
# We use Tensor("split") instead of Tensor("add") to force
# PPO to be deterministic.
self._input_node = self._sess.graph.get_tensor_by_name(
f"{policy_name}/observation:0"
)
self._output_node = self._sess.graph.get_tensor_by_name(
f"{policy_name}/split:0"
)
elif algorithm == "DQN":
self._input_node = self._sess.graph.get_tensor_by_name(
f"{policy_name}/observations:0"
)
self._output_node = tf.argmax(
input=self._sess.graph.get_tensor_by_name(
f"{policy_name}/value_out/BiasAdd:0"
),
axis=1,
)
else:
self._input_node = self._sess.graph.get_tensor_by_name(
f"{policy_name}/observations:0"
)
self._output_node = tf.argmax(
input=self._sess.graph.get_tensor_by_name(
f"{policy_name}/fc_out/BiasAdd:0"
),
axis=1,
)
def __del__(self):
self._sess.close()
def act(self, obs):
agent_id = list(obs.keys())
obs = [self._prep.transform(o) for o in obs.values()]
res = self._sess.run(self._output_node, feed_dict={self._input_node: obs})
# iterating over a dictionary is guaranteed to be in a deterministic order
# so it's safe to zip here.
actions = dict(zip(agent_id, res))
return actions
| 39.353125 | 122 | 0.590169 | 1,505 | 12,593 | 4.654485 | 0.130897 | 0.049108 | 0.051963 | 0.063954 | 0.803997 | 0.785439 | 0.777016 | 0.713633 | 0.701071 | 0.677088 | 0 | 0.007975 | 0.312952 | 12,593 | 319 | 123 | 39.476489 | 0.801664 | 0.086159 | 0 | 0.648649 | 0 | 0 | 0.096097 | 0.076494 | 0 | 0 | 0 | 0.003135 | 0 | 1 | 0.046332 | false | 0 | 0.100386 | 0 | 0.177606 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5414b2732f2cb9af7ea2023657d1192924b3bc40 | 118 | py | Python | flydenity/__init__.py | Collen-Roller/arp | 08eaa2dda3adb1dbd600597a6d03603669c8e06d | [
"MIT"
] | 2 | 2020-10-28T17:03:14.000Z | 2021-01-27T10:44:33.000Z | flydenity/__init__.py | Collen-Roller/arp | 08eaa2dda3adb1dbd600597a6d03603669c8e06d | [
"MIT"
] | 8 | 2020-12-08T16:42:43.000Z | 2020-12-29T00:41:33.000Z | flydenity/__init__.py | Collen-Roller/arp | 08eaa2dda3adb1dbd600597a6d03603669c8e06d | [
"MIT"
] | 1 | 2020-12-09T20:35:52.000Z | 2020-12-09T20:35:52.000Z | """
__init__.py
Collen Roller
collen.roller@gmail.com
Init the project
"""
from .parser import Parser # noqa: F401
| 11.8 | 40 | 0.728814 | 17 | 118 | 4.823529 | 0.764706 | 0.292683 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.030303 | 0.161017 | 118 | 9 | 41 | 13.111111 | 0.79798 | 0.669492 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
542d8563510c38b071886077f05e8e9622b264a3 | 41 | py | Python | tests/project/pack/subpack2/__init__.py | DonaldWhyte/module-dependency | 0c4a1bddf3901340f44c28501ff677f2e9caef70 | [
"MIT"
] | 5 | 2015-08-12T15:36:27.000Z | 2021-06-27T22:49:00.000Z | tests/project/pack/subpack2/__init__.py | DonaldWhyte/module-dependency | 0c4a1bddf3901340f44c28501ff677f2e9caef70 | [
"MIT"
] | null | null | null | tests/project/pack/subpack2/__init__.py | DonaldWhyte/module-dependency | 0c4a1bddf3901340f44c28501ff677f2e9caef70 | [
"MIT"
] | 1 | 2016-09-20T07:05:08.000Z | 2016-09-20T07:05:08.000Z | from .subsubpack import c
from . import d | 20.5 | 25 | 0.780488 | 7 | 41 | 4.571429 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.170732 | 41 | 2 | 26 | 20.5 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |