hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0507e8ef0a86a503ec33df52636187d448e66a5b | 2,887 | py | Python | FRCScouting/TheBlueAlliance/event.py | xNovax/FRCScouting.ca | caf2774e5854a7386eceb21e57b68c1f9c1f7d2d | [
"MIT"
] | 1 | 2019-06-13T03:07:15.000Z | 2019-06-13T03:07:15.000Z | FRCScouting/TheBlueAlliance/event.py | xNovax/FRCScouting.ca | caf2774e5854a7386eceb21e57b68c1f9c1f7d2d | [
"MIT"
] | 8 | 2019-07-04T16:19:06.000Z | 2019-07-12T17:37:51.000Z | FRCScouting/TheBlueAlliance/event.py | xNovax/FRCScouting.ca | caf2774e5854a7386eceb21e57b68c1f9c1f7d2d | [
"MIT"
] | null | null | null | from django.conf import settings
import tbaapiv3client
from tbaapiv3client.rest import ApiException
def get_event(eventkey):
    configuration = tbaapiv3client.Configuration()
    configuration.api_key['X-TBA-Auth-Key'] = settings.THE_BLUE_ALLIANCE_KEY
    api_instance = tbaapiv3client.EventApi(tbaapiv3client.ApiClient(configuration))
    try:
        api_response = api_instance.get_event(eventkey)
        info = api_response
        return info
    except ApiException as e:
        return None

def get_events_by_year(year):
    configuration = tbaapiv3client.Configuration()
    configuration.api_key['X-TBA-Auth-Key'] = settings.THE_BLUE_ALLIANCE_KEY
    api_instance = tbaapiv3client.EventApi(tbaapiv3client.ApiClient(configuration))
    try:
        api_response = api_instance.get_events_by_year(year)
        info = api_response
        return info
    except ApiException as e:
        return None

def get_events_by_year_keys(year):
    configuration = tbaapiv3client.Configuration()
    configuration.api_key['X-TBA-Auth-Key'] = settings.THE_BLUE_ALLIANCE_KEY
    api_instance = tbaapiv3client.EventApi(tbaapiv3client.ApiClient(configuration))
    try:
        api_response = api_instance.get_events_by_year_keys(year)
        info = api_response
        return info
    except ApiException as e:
        return None

def get_all_event_keys():
    keys = {}
    for year in range(2016, 2020):
        keys[year] = get_events_by_year_keys(year)
    return keys

def get_events_by_year_simple(year):
    configuration = tbaapiv3client.Configuration()
    configuration.api_key['X-TBA-Auth-Key'] = settings.THE_BLUE_ALLIANCE_KEY
    api_instance = tbaapiv3client.EventApi(tbaapiv3client.ApiClient(configuration))
    try:
        api_response = api_instance.get_events_by_year_simple(year)
        info = api_response
        return info
    except ApiException as e:
        return None

def get_event_teams(eventkey):
    configuration = tbaapiv3client.Configuration()
    configuration.api_key['X-TBA-Auth-Key'] = settings.THE_BLUE_ALLIANCE_KEY
    api_instance = tbaapiv3client.EventApi(tbaapiv3client.ApiClient(configuration))
    try:
        api_response = api_instance.get_event_teams(eventkey)
        info = api_response
        return info
    except ApiException as e:
        return None

def get_event_matches(eventkey):
    configuration = tbaapiv3client.Configuration()
    configuration.api_key['X-TBA-Auth-Key'] = settings.THE_BLUE_ALLIANCE_KEY
    api_instance = tbaapiv3client.EventApi(tbaapiv3client.ApiClient(configuration))
    try:
        api_response = api_instance.get_event_matches(eventkey)
        info = api_response
        return info
    except ApiException as e:
        return None

def get_all_events_simple():
    events = {}
    for year in range(2016, 2020):
        events[year] = get_events_by_year_simple(year)
    return events
| 34.369048 | 83 | 0.731209 | 346 | 2,887 | 5.82948 | 0.130058 | 0.065444 | 0.043629 | 0.059494 | 0.907784 | 0.90233 | 0.839365 | 0.839365 | 0.839365 | 0.839365 | 0 | 0.015457 | 0.19328 | 2,887 | 83 | 84 | 34.783133 | 0.85058 | 0 | 0 | 0.684932 | 0 | 0 | 0.029096 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.109589 | false | 0 | 0.041096 | 0 | 0.342466 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
051b4b83692eb5e14b1ba6c403ebaae73e514a2c | 5,788 | py | Python | tests/expectations/core/test_expect_column_values_to_be_in_set.py | mmi333/great_expectations | cc9df78596610002c24e2d46f737179e04f31d29 | [
"Apache-2.0"
] | 1 | 2022-03-17T08:05:44.000Z | 2022-03-17T08:05:44.000Z | tests/expectations/core/test_expect_column_values_to_be_in_set.py | Tchibo/great_expectations | 27220336190039148ab91138cb2fd489d2159183 | [
"Apache-2.0"
] | null | null | null | tests/expectations/core/test_expect_column_values_to_be_in_set.py | Tchibo/great_expectations | 27220336190039148ab91138cb2fd489d2159183 | [
"Apache-2.0"
] | null | null | null | import pandas as pd
import pytest
import great_expectations.exceptions.exceptions
from great_expectations.core.batch import RuntimeBatchRequest
from great_expectations.data_context import DataContext
from great_expectations.expectations.core.expect_column_values_to_be_in_set import (
    ExpectColumnValuesToBeInSet,
)
# <snippet>
class ExpectColumnValuesToBeTwoLetterCountryCode(ExpectColumnValuesToBeInSet):
    default_kwarg_values = {
        "value_set": ["FR", "DE", "CH", "ES", "IT", "BE", "NL", "PL"],
    }
# </snippet>
def test_expect_column_values_to_be_in_set_fail(
    data_context_with_datasource_pandas_engine,
):
    context: DataContext = data_context_with_datasource_pandas_engine
    df = pd.DataFrame(
        {
            "a": [
                "2021-01-01",
                "2021-01-31",
                "2021-02-28",
                "2021-03-20",
                "2021-02-21",
                "2021-05-01",
                "2021-06-18",
            ]
        }
    )
    batch_request = RuntimeBatchRequest(
        datasource_name="my_datasource",
        data_connector_name="default_runtime_data_connector_name",
        data_asset_name="my_data_asset",
        runtime_parameters={"batch_data": df},
        batch_identifiers={"default_identifier_name": "my_identifier"},
    )
    validator = context.get_validator(
        batch_request=batch_request,
        create_expectation_suite_with_name="test",
    )
    result = validator.expect_column_values_to_be_in_set(
        column="a", value_set=["2021-06-18"]
    )
    assert result.success is False
def test_expect_column_values_in_set_pass(
    data_context_with_datasource_pandas_engine,
):
    context: DataContext = data_context_with_datasource_pandas_engine
    df = pd.DataFrame(
        {
            "a": [
                "2021-01-01",
                "2021-01-31",
                "2021-02-28",
                "2021-03-20",
                "2021-02-21",
                "2021-05-01",
                "2021-06-18",
            ]
        }
    )
    batch_request = RuntimeBatchRequest(
        datasource_name="my_datasource",
        data_connector_name="default_runtime_data_connector_name",
        data_asset_name="my_data_asset",
        runtime_parameters={"batch_data": df},
        batch_identifiers={"default_identifier_name": "my_identifier"},
    )
    validator = context.get_validator(
        batch_request=batch_request,
        create_expectation_suite_with_name="test",
    )
    result = validator.expect_column_values_to_be_in_set(
        column="a",
        value_set=[
            "2021-01-01",
            "2021-01-31",
            "2021-02-28",
            "2021-03-20",
            "2021-02-21",
            "2021-05-01",
            "2021-06-18",
        ],
    )
    assert result.success is True
def test_expect_column_values_country_fail(
    data_context_with_datasource_pandas_engine,
):
    context: DataContext = data_context_with_datasource_pandas_engine
    df = pd.DataFrame(
        {
            "a": [
                "2021-01-01",
                "2021-01-31",
                "2021-02-28",
                "2021-03-20",
                "2021-02-21",
                "2021-05-01",
                "2021-06-18",
            ]
        }
    )
    batch_request = RuntimeBatchRequest(
        datasource_name="my_datasource",
        data_connector_name="default_runtime_data_connector_name",
        data_asset_name="my_data_asset",
        runtime_parameters={"batch_data": df},
        batch_identifiers={"default_identifier_name": "my_identifier"},
    )
    validator = context.get_validator(
        batch_request=batch_request,
        create_expectation_suite_with_name="test",
    )
    result = validator.expect_column_values_to_be_two_letter_country_code(column="a")
    assert result.success is False
def test_expect_column_values_country_pass(
    data_context_with_datasource_pandas_engine,
):
    context: DataContext = data_context_with_datasource_pandas_engine
    df = pd.DataFrame({"a": ["FR", "DE", "CH", "ES", "IT", "BE", "NL", "PL"]})
    batch_request = RuntimeBatchRequest(
        datasource_name="my_datasource",
        data_connector_name="default_runtime_data_connector_name",
        data_asset_name="my_data_asset",
        runtime_parameters={"batch_data": df},
        batch_identifiers={"default_identifier_name": "my_identifier"},
    )
    validator = context.get_validator(
        batch_request=batch_request,
        create_expectation_suite_with_name="test",
    )
    result = validator.expect_column_values_to_be_two_letter_country_code(column="a")
    assert result.success is True
def test_expect_column_values_to_be_in_set_no_set(
    data_context_with_datasource_pandas_engine,
):
    context: DataContext = data_context_with_datasource_pandas_engine
    df = pd.DataFrame(
        {
            "a": [
                "2021-01-01",
                "2021-01-31",
                "2021-02-28",
                "2021-03-20",
                "2021-02-21",
                "2021-05-01",
                "2021-06-18",
            ]
        }
    )
    batch_request = RuntimeBatchRequest(
        datasource_name="my_datasource",
        data_connector_name="default_runtime_data_connector_name",
        data_asset_name="my_data_asset",
        runtime_parameters={"batch_data": df},
        batch_identifiers={"default_identifier_name": "my_identifier"},
    )
    validator = context.get_validator(
        batch_request=batch_request,
        create_expectation_suite_with_name="test",
    )
    with pytest.raises(
        great_expectations.exceptions.exceptions.InvalidExpectationConfigurationError
    ):
        result = validator.expect_column_values_to_be_in_set(column="a")
| 29.380711 | 85 | 0.626296 | 632 | 5,788 | 5.318038 | 0.132911 | 0.053555 | 0.058911 | 0.074383 | 0.863731 | 0.863731 | 0.859566 | 0.850342 | 0.840821 | 0.828027 | 0 | 0.068166 | 0.270041 | 5,788 | 196 | 86 | 29.530612 | 0.727337 | 0.003455 | 0 | 0.678788 | 0 | 0 | 0.167563 | 0.050304 | 0 | 0 | 0 | 0 | 0.024242 | 1 | 0.030303 | false | 0.012121 | 0.036364 | 0 | 0.078788 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
05658d569f8677aac4c1ee9887ccd1434308387b | 26 | py | Python | examples/tuple_subscr.py | igfish/toyvm | bb1ab371a8c71ba01522556235fc9f017c9b6b8f | [
"MIT"
] | null | null | null | examples/tuple_subscr.py | igfish/toyvm | bb1ab371a8c71ba01522556235fc9f017c9b6b8f | [
"MIT"
] | null | null | null | examples/tuple_subscr.py | igfish/toyvm | bb1ab371a8c71ba01522556235fc9f017c9b6b8f | [
"MIT"
] | null | null | null | t = (1, 3, 4)
print(t[2])
| 8.666667 | 13 | 0.423077 | 7 | 26 | 1.571429 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 0.230769 | 26 | 2 | 14 | 13 | 0.35 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
05665476964a333ad19115c9b76551d2ce727328 | 2,980 | py | Python | tests/test.py | marinang/mplhep | 05648f2123c2104783950c02a25c1c477942975e | [
"MIT"
] | null | null | null | tests/test.py | marinang/mplhep | 05648f2123c2104783950c02a25c1c477942975e | [
"MIT"
] | null | null | null | tests/test.py | marinang/mplhep | 05648f2123c2104783950c02a25c1c477942975e | [
"MIT"
] | 1 | 2020-04-13T01:25:56.000Z | 2020-04-13T01:25:56.000Z | import pytest
import matplotlib.pyplot as plt
import numpy as np
import mplhep as hep
"""
To test run:
py.test --mpl
When adding new tests, run:
py.test --mpl-generate-path=tests/baseline
"""
plt.switch_backend("Agg")
@pytest.mark.mpl_image_compare(style='default', remove_text=True)
def test_basic():
    fig, ax = plt.subplots(figsize=(10, 10))
    h = [1, 3, 2]
    bins = [0, 1, 2, 3]
    hep.histplot(h, bins, yerr=True, label='X')
    ax.legend()
    return fig
@pytest.mark.mpl_image_compare(style='default', remove_text=True)
def test_histplot():
    np.random.seed(0)
    h, bins = np.histogram(np.random.normal(10, 3, 400), bins=10)
    fig, axs = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10, 10))
    axs = axs.flatten()
    axs[0].set_title("Default", fontsize=18)
    hep.histplot(h, bins, ax=axs[0])
    axs[1].set_title("Plot Edges", fontsize=18)
    hep.histplot(h, bins, edges=True, ax=axs[1])
    axs[2].set_title("Plot Errorbars", fontsize=18)
    hep.histplot(h, bins, yerr=np.sqrt(h), ax=axs[2])
    axs[3].set_title("Filled Histogram", fontsize=18)
    hep.histplot(h, bins, histtype='fill', ax=axs[3])
    fig.subplots_adjust(hspace=0.1, wspace=0.1)
    return fig
@pytest.mark.mpl_image_compare(style='default', remove_text=True)
def test_histplot_multiple():
    np.random.seed(0)
    h, bins = np.histogram(np.random.normal(10, 3, 400), bins=10)
    fig, axs = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10, 10))
    axs = axs.flatten()
    axs[0].set_title("Default Overlay", fontsize=18)
    hep.histplot([h, 1.5 * h], bins, ax=axs[0])
    axs[1].set_title("Default Overlay w/ Errorbars", fontsize=18)
    hep.histplot([h, 1.5 * h], bins, yerr=[np.sqrt(h), np.sqrt(1.5 * h)], ax=axs[1])
    axs[2].set_title("Automatic Errorbars", fontsize=18)
    hep.histplot([h, 1.5 * h], bins, yerr=True, ax=axs[2])
    axs[3].set_title("With Labels", fontsize=18)
    hep.histplot([h, 1.5 * h], bins, yerr=True, ax=axs[3], label=["First", "Second"])
    axs[3].legend(fontsize=16, prop={'family': 'Tex Gyre Heros'})
    fig.subplots_adjust(hspace=0.1, wspace=0.1)
    return fig
@pytest.mark.mpl_image_compare(style='default', remove_text=True)
def test_histplot_stack():
    np.random.seed(0)
    h, bins = np.histogram(np.random.normal(10, 3, 400), bins=10)
    fig, axs = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10, 10))
    axs = axs.flatten()
    axs[0].set_title("Default", fontsize=18)
    hep.histplot([h, 1.5 * h], bins, stack=True, ax=axs[0])
    axs[1].set_title("Plot Edges", fontsize=18)
    hep.histplot([h, 1.5 * h], bins, edges=True, stack=True, ax=axs[1])
    axs[2].set_title("Plot Errorbars", fontsize=18)
    hep.histplot([h, 1.5 * h], bins, yerr=np.sqrt(h), stack=True, ax=axs[2])
    axs[3].set_title("Filled Histogram", fontsize=18)
    hep.histplot([1.5 * h, h], bins, histtype='fill', stack=True, ax=axs[3])
    fig.subplots_adjust(hspace=0.1, wspace=0.1)
    return fig
| 30.10101 | 85 | 0.65 | 502 | 2,980 | 3.790837 | 0.181275 | 0.042039 | 0.07567 | 0.132422 | 0.77877 | 0.768261 | 0.755649 | 0.74619 | 0.737257 | 0.7031 | 0 | 0.05491 | 0.162752 | 2,980 | 98 | 86 | 30.408163 | 0.707816 | 0 | 0 | 0.5 | 1 | 0 | 0.082811 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.064516 | false | 0 | 0.064516 | 0 | 0.193548 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
553ac8488c54710d57740c7d2c1821780efaaf60 | 45 | py | Python | src/routes/__init__.py | kumardeepak/file-server | b94d87cadcc93c142a7d4f9bb368d75f7eda5671 | [
"MIT"
] | null | null | null | src/routes/__init__.py | kumardeepak/file-server | b94d87cadcc93c142a7d4f9bb368d75f7eda5671 | [
"MIT"
] | null | null | null | src/routes/__init__.py | kumardeepak/file-server | b94d87cadcc93c142a7d4f9bb368d75f7eda5671 | [
"MIT"
] | null | null | null | from .fileupload import FILEUPLOAD_BLUEPRINT
| 22.5 | 44 | 0.888889 | 5 | 45 | 7.8 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088889 | 45 | 1 | 45 | 45 | 0.95122 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
556347af5d83298e6fba0b1d10d57ecddc24379c | 137 | py | Python | asset/__init__.py | jp-quant/qfengine | f71c263becb82ee5b7022c17d7983b40d5df31bb | [
"MIT"
] | 3 | 2021-01-19T10:16:19.000Z | 2022-02-13T16:33:11.000Z | asset/__init__.py | jp-quant/qfengine | f71c263becb82ee5b7022c17d7983b40d5df31bb | [
"MIT"
] | null | null | null | asset/__init__.py | jp-quant/qfengine | f71c263becb82ee5b7022c17d7983b40d5df31bb | [
"MIT"
] | 2 | 2021-05-11T12:01:34.000Z | 2021-08-29T04:49:25.000Z | from qfengine.asset.equity import Equity
from qfengine.asset.cash import Cash
from typing import Union
assetClasses = Union[Equity,Cash] | 27.4 | 40 | 0.832117 | 20 | 137 | 5.7 | 0.45 | 0.210526 | 0.298246 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.109489 | 137 | 5 | 41 | 27.4 | 0.934426 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.75 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
55a45b86c6d5274ae6a4557b9870cce1501dd9db | 83 | py | Python | MiddleKit/Core/FloatAttr.py | PeaceWorksTechnologySolutions/w4py | 74f5a03a63f1a93563502b908474aefaae2abda2 | [
"MIT"
] | 18 | 2016-08-01T20:15:59.000Z | 2019-12-24T16:00:03.000Z | MiddleKit/Core/FloatAttr.py | WebwareForPython/w4py | bba08f5974d49f5da7e88abe3eeda1037d0824a3 | [
"MIT"
] | 6 | 2016-09-13T05:48:45.000Z | 2020-01-09T18:29:12.000Z | MiddleKit/Core/FloatAttr.py | WebwareForPython/w4py | bba08f5974d49f5da7e88abe3eeda1037d0824a3 | [
"MIT"
] | 6 | 2016-09-16T14:32:29.000Z | 2020-01-03T18:52:16.000Z | from BasicTypeAttr import BasicTypeAttr
class FloatAttr(BasicTypeAttr):
pass
| 13.833333 | 39 | 0.807229 | 8 | 83 | 8.375 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.156627 | 83 | 5 | 40 | 16.6 | 0.957143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
e959c90c03cd61dd7f9b15451c3e7f730b2f0578 | 29 | py | Python | app/engine/graphics/ui_framework/premade_components/__init__.py | zerorock1312/lt-maker-master | 82f733683f9dba763a5de8567c41fd7cbcfb0173 | [
"MIT"
] | null | null | null | app/engine/graphics/ui_framework/premade_components/__init__.py | zerorock1312/lt-maker-master | 82f733683f9dba763a5de8567c41fd7cbcfb0173 | [
"MIT"
] | null | null | null | app/engine/graphics/ui_framework/premade_components/__init__.py | zerorock1312/lt-maker-master | 82f733683f9dba763a5de8567c41fd7cbcfb0173 | [
"MIT"
] | null | null | null | from .text_component import * | 29 | 29 | 0.827586 | 4 | 29 | 5.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.103448 | 29 | 1 | 29 | 29 | 0.884615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e9691c22fd388339d423482295c8fc8ce8872e3d | 200 | py | Python | output/models/ms_data/schema/sch_p2_a_xsd/__init__.py | tefra/xsdata-w3c-tests | b6b6a4ac4e0ab610e4b50d868510a8b7105b1a5f | [
"MIT"
] | 1 | 2021-08-14T17:59:21.000Z | 2021-08-14T17:59:21.000Z | output/models/ms_data/schema/sch_p2_a_xsd/__init__.py | tefra/xsdata-w3c-tests | b6b6a4ac4e0ab610e4b50d868510a8b7105b1a5f | [
"MIT"
] | 4 | 2020-02-12T21:30:44.000Z | 2020-04-15T20:06:46.000Z | output/models/ms_data/schema/sch_p2_a_xsd/__init__.py | tefra/xsdata-w3c-tests | b6b6a4ac4e0ab610e4b50d868510a8b7105b1a5f | [
"MIT"
] | null | null | null | from output.models.ms_data.schema.sch_p2_a_xsd.sch_p2_a import (
E1,
Root,
)
from output.models.ms_data.schema.sch_p2_a_xsd.sch_p2_b import BE1
__all__ = [
"E1",
"Root",
"BE1",
]
| 16.666667 | 66 | 0.675 | 35 | 200 | 3.4 | 0.457143 | 0.168067 | 0.151261 | 0.302521 | 0.705882 | 0.705882 | 0.705882 | 0.705882 | 0.705882 | 0.705882 | 0 | 0.049689 | 0.195 | 200 | 11 | 67 | 18.181818 | 0.689441 | 0 | 0 | 0 | 0 | 0 | 0.045 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e9a0a5c09923d00be65e3e9d3bd8e86d4f593fb7 | 161 | py | Python | simsiam/engine/__init__.py | tillaczel/simsiam | d4d03aae625314ac2f24155fac3ca5bfc31502c7 | [
"MIT"
] | null | null | null | simsiam/engine/__init__.py | tillaczel/simsiam | d4d03aae625314ac2f24155fac3ca5bfc31502c7 | [
"MIT"
] | null | null | null | simsiam/engine/__init__.py | tillaczel/simsiam | d4d03aae625314ac2f24155fac3ca5bfc31502c7 | [
"MIT"
] | null | null | null | from simsiam.engine.unsupervised import UnsupervisedEngine
from simsiam.engine.supervised import SupervisedEngine
from simsiam.engine.linear import LinearEngine
| 40.25 | 58 | 0.888199 | 18 | 161 | 7.944444 | 0.555556 | 0.230769 | 0.356643 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.074534 | 161 | 3 | 59 | 53.666667 | 0.959732 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
757d22d85afd98ccbca8b6bec384fbdba8ec3318 | 3,717 | py | Python | tests/k8s/test_read_obj.py | thevennamaneni/kopf | 020f8bc91268225d43575e1bb69470ef10ae6113 | [
"MIT"
] | null | null | null | tests/k8s/test_read_obj.py | thevennamaneni/kopf | 020f8bc91268225d43575e1bb69470ef10ae6113 | [
"MIT"
] | null | null | null | tests/k8s/test_read_obj.py | thevennamaneni/kopf | 020f8bc91268225d43575e1bb69470ef10ae6113 | [
"MIT"
] | null | null | null | import kubernetes.client.rest
import pytest
from asynctest import call
from kopf.k8s.fetching import read_obj
def test_when_present_clustered(client_mock, resource):
    result = object()
    apicls_mock = client_mock.CustomObjectsApi
    apicls_mock.return_value.get_cluster_custom_object.return_value = result
    apicls_mock.return_value.get_namespaced_custom_object.return_value = result
    sidefn_mock = apicls_mock.return_value.get_namespaced_custom_object
    mainfn_mock = apicls_mock.return_value.get_cluster_custom_object
    crd = read_obj(resource=resource, namespace=None, name='name1')
    assert crd is result
    assert not sidefn_mock.called
    assert mainfn_mock.call_count == 1
    assert mainfn_mock.call_args_list == [call(
        group=resource.group,
        version=resource.version,
        plural=resource.plural,
        name='name1',
    )]
def test_when_present_namespaced(client_mock, resource):
    result = object()
    apicls_mock = client_mock.CustomObjectsApi
    apicls_mock.return_value.get_cluster_custom_object.return_value = result
    apicls_mock.return_value.get_namespaced_custom_object.return_value = result
    sidefn_mock = apicls_mock.return_value.get_cluster_custom_object
    mainfn_mock = apicls_mock.return_value.get_namespaced_custom_object
    crd = read_obj(resource=resource, namespace='ns1', name='name1')
    assert crd is result
    assert not sidefn_mock.called
    assert mainfn_mock.call_count == 1
    assert mainfn_mock.call_args_list == [call(
        group=resource.group,
        version=resource.version,
        plural=resource.plural,
        namespace='ns1',
        name='name1',
    )]
@pytest.mark.parametrize('namespace', [None, 'ns1'], ids=['without-namespace', 'with-namespace'])
@pytest.mark.parametrize('status', [404])
def test_when_absent_with_no_default(client_mock, resource, namespace, status):
    error = kubernetes.client.rest.ApiException(status=status)
    apicls_mock = client_mock.CustomObjectsApi
    apicls_mock.return_value.get_cluster_custom_object.side_effect = error
    apicls_mock.return_value.get_namespaced_custom_object.side_effect = error
    with pytest.raises(kubernetes.client.rest.ApiException) as e:
        read_obj(resource=resource, namespace=namespace, name='name1')
    assert e.value.status == status
@pytest.mark.parametrize('default', [None, object()], ids=['none', 'object'])
@pytest.mark.parametrize('namespace', [None, 'ns1'], ids=['without-namespace', 'with-namespace'])
@pytest.mark.parametrize('status', [404])
def test_when_absent_with_default(client_mock, resource, namespace, default, status):
    error = kubernetes.client.rest.ApiException(status=status)
    apicls_mock = client_mock.CustomObjectsApi
    apicls_mock.return_value.get_cluster_custom_object.side_effect = error
    apicls_mock.return_value.get_namespaced_custom_object.side_effect = error
    crd = read_obj(resource=resource, namespace=namespace, name='name1', default=default)
    assert crd is default
@pytest.mark.parametrize('namespace', [None, 'ns1'], ids=['without-namespace', 'with-namespace'])
@pytest.mark.parametrize('status', [400, 401, 403, 500, 666])
def test_raises_api_error_despite_default(client_mock, resource, namespace, status):
    error = kubernetes.client.rest.ApiException(status=status)
    apicls_mock = client_mock.CustomObjectsApi
    apicls_mock.return_value.get_cluster_custom_object.side_effect = error
    apicls_mock.return_value.get_namespaced_custom_object.side_effect = error
    with pytest.raises(kubernetes.client.rest.ApiException) as e:
        read_obj(resource=resource, namespace=namespace, name='name1', default=object())
    assert e.value.status == status
| 42.238636 | 97 | 0.765402 | 477 | 3,717 | 5.677149 | 0.15304 | 0.070162 | 0.082718 | 0.108567 | 0.882201 | 0.85192 | 0.850812 | 0.850812 | 0.824963 | 0.745938 | 0 | 0.01117 | 0.132903 | 3,717 | 87 | 98 | 42.724138 | 0.829041 | 0 | 0 | 0.681159 | 0 | 0 | 0.055152 | 0 | 0 | 0 | 0 | 0 | 0.15942 | 1 | 0.072464 | false | 0 | 0.057971 | 0 | 0.130435 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ddaf3f53b08e459a0dea7ed1433f69412c03a5f8 | 46,357 | py | Python | main/courses/course_materials.py | mahkhaled/class2go | b32cb441e8d96c257f70cb61274812ebeed2649d | [
"Apache-2.0"
] | null | null | null | main/courses/course_materials.py | mahkhaled/class2go | b32cb441e8d96c257f70cb61274812ebeed2649d | [
"Apache-2.0"
] | null | null | null | main/courses/course_materials.py | mahkhaled/class2go | b32cb441e8d96c257f70cb61274812ebeed2649d | [
"Apache-2.0"
] | null | null | null | from c2g.models import *
import datetime
from django.db.models import Count, Max, Q, F
from django.db import connection
def get_course_materials(common_page_data, get_video_content=False, get_pset_content=False,
                         get_additional_page_content=False, get_file_content=False,
                         get_exam_content=False, exam_types=[]):
    COURSE = common_page_data['course']
    REQUEST = common_page_data['request']
    USER = REQUEST.user
    section_structures = []
    if USER.is_authenticated():
        sections = ContentSection.objects.getByCourse(course=COURSE)
        pages = AdditionalPage.objects.getByCourse(course=COURSE)
        files = File.objects.getByCourse(course=COURSE)
        exams = Exam.objects.getByCourse(course=COURSE)
        if exam_types:
            exams = exams.filter(exam_type__in=exam_types)
        l1items, l2items = get_contentgroup_data(COURSE)

        if get_video_content:
            videos = Video.objects.getByCourse(course=COURSE)
            if videos:
                video_list = []
                for video in videos:
                    video_list.append(video.id)
                videoToExs = VideoToExercise.objects.values('video').filter(video__in=video_list, is_deleted=0).annotate(dcount=Count('video'))
                if common_page_data['course_mode'] == 'ready':
                    video_recs = VideoActivity.objects.filter(course=COURSE, student=USER)
                    video_downloads = VideoDownload.objects.values('video').filter(course=COURSE, student=USER).annotate(dcount=Count('video'))

        if get_pset_content:
            problem_sets = ProblemSet.objects.getByCourse(course=COURSE)
            if problem_sets:
                problem_set_list = []
                for problem_set in problem_sets:
                    problem_set_list.append(problem_set.id)
                psetToExs = ProblemSetToExercise.objects.values('problemSet').filter(problemSet__in=problem_set_list, is_deleted=0).annotate(dcount=Count('problemSet'))
                if common_page_data['course_mode'] == 'ready':
                    pset_activities = ProblemActivity.objects.values('problemset_to_exercise__problemSet_id', 'problemset_to_exercise__problemSet__submissions_permitted', 'problemset_to_exercise__exercise__fileName').select_related('problemset_to_exercise').filter(problemset_to_exercise__problemSet_id__in=problem_set_list, student=USER).annotate(correct=Max('complete'), num_attempts=Max('attempt_number'))
                    cursor = connection.cursor()
                    # The raw SQL below needs two variants because a one-element list could not be
                    # passed as an "IN %s" parameter without the driver quoting the value, so a
                    # single id uses "= %s". The operator and parameter are chosen once here and
                    # reused for both queries that follow.
                    if len(problem_set_list) > 1:
                        pset_filter, pset_param = "IN %s", tuple(problem_set_list)
                    else:
                        pset_filter, pset_param = "= %s", problem_set_list[0]
                    cursor.execute("""select e.fileName, p2e.problemSet_id,
                        count(case when p2e.is_deleted = 0 then 1 else null end) as `num_active`
                        from c2g_problemset_to_exercise p2e, c2g_exercises e
                        where p2e.exercise_id = e.id
                        and p2e.problemSet_id """ + pset_filter + """
                        and p2e.mode = 'ready'
                        group by e.filename, p2e.problemSet_id
                        having num_active = 0""", [pset_param])
                    deleted_exercise_list = []
                    for row in cursor.fetchall():
                        deleted_exercise_list.append({'filename': row[0],
                                                      'problemset_id': row[1]})
                    # This was close but not quite; couldn't include the case statement for resilience to bad activity data.
                    # pset_score_activities = ProblemActivity.objects.values('problemset_to_exercise__problemSet_id', 'problemset_to_exercise__problemSet__submissions_permitted', 'problemset_to_exercise__problemSet__resubmission_penalty', 'problemset_to_exercise__problemSet__partial_credit_deadline', 'problemset_to_exercise__problemSet__grace_period', 'problemset_to_exercise__problemSet__late_penalty', 'problemset_to_exercise__exercise__fileName').select_related('problemset_to_exercise').filter( Q(problemset_to_exercise__problemSet_id__in=problem_set_list), Q(student=common_page_data['request'].user), (Q(problemset_to_exercise__problemSet__submissions_permitted=0) & Q(problemset_to_exercise__problemSet__partial_credit_deadline__gt=F('time_created'))) | (Q(problemset_to_exercise__problemSet__submissions_permitted__gt=0) & Q(problemset_to_exercise__problemSet__submissions_permitted__gte=F('attempt_number')) & Q(problemset_to_exercise__problemSet__partial_credit_deadline__gt=F('time_created')))).annotate(correct=Max('complete'), num_attempts=Max('attempt_number'), last_valid_attempt_time=Max('time_created'))
                    cursor.execute("""SELECT `c2g_problemset_to_exercise`.`problemSet_id`, `c2g_problem_sets`.`submissions_permitted`,
                        `c2g_problem_sets`.`resubmission_penalty`, `c2g_problem_sets`.`partial_credit_deadline`,
                        `c2g_problem_sets`.`grace_period`, `c2g_problem_sets`.`late_penalty`, `c2g_exercises`.`fileName`,
                        count(`c2g_problem_activity`.`attempt_number`) AS `num_attempts`,
                        MAX(`c2g_problem_activity`.`time_created`) AS `last_valid_attempt_time`,
                        MAX(`c2g_problem_activity`.`complete`) AS `correct`,
                        min(case when c2g_problem_activity.complete = 1 then c2g_problem_activity.id else null end) as `first_correct_answer`,
                        max(c2g_problem_activity.id) as `max_activity_id`
                        FROM `c2g_problem_activity`
                        LEFT OUTER JOIN `c2g_problemset_to_exercise` ON (`c2g_problem_activity`.`problemset_to_exercise_id` = `c2g_problemset_to_exercise`.`id`)
                        INNER JOIN `c2g_problem_sets` ON (`c2g_problemset_to_exercise`.`problemSet_id` = `c2g_problem_sets`.`id`)
                        INNER JOIN `c2g_exercises` ON (`c2g_problemset_to_exercise`.`exercise_id` = `c2g_exercises`.`id`)
                        WHERE (`c2g_problem_activity`.`student_id` = %s AND `c2g_problemset_to_exercise`.`problemSet_id` """ + pset_filter + """
                        AND ((`c2g_problem_sets`.`submissions_permitted` = 0 AND `c2g_problem_sets`.`partial_credit_deadline` > `c2g_problem_activity`.`time_created`)
                        OR (`c2g_problem_sets`.`submissions_permitted` > 0 AND `c2g_problem_sets`.`submissions_permitted` >= `c2g_problem_activity`.`attempt_number`
                        AND `c2g_problem_sets`.`partial_credit_deadline` > `c2g_problem_activity`.`time_created`)))
                        GROUP BY `c2g_problemset_to_exercise`.`problemSet_id`, `c2g_problem_sets`.`submissions_permitted`, `c2g_problem_sets`.`resubmission_penalty`,
                        `c2g_problem_sets`.`partial_credit_deadline`, `c2g_problem_sets`.`grace_period`, `c2g_problem_sets`.`late_penalty`, `c2g_exercises`.`fileName`
                        ORDER BY NULL""", [USER.id, pset_param])
                    score_list = []
                    for row in cursor.fetchall():
                        (problemset_id, submissions_permitted, resubmission_penalty,
                         partial_credit_deadline, grace_period, late_penalty, filename,
                         num_attempts, last_valid_attempt_time, correct,
                         first_correct_answer, max_activity_id) = row
                        score_list.append({'problemset_id': problemset_id,
                                           'submissions_permitted': submissions_permitted,
                                           'resubmission_penalty': resubmission_penalty,
                                           'partial_credit_deadline': partial_credit_deadline,
                                           'grace_period': grace_period,
                                           'late_penalty': late_penalty,
                                           'filename': filename,
                                           'num_attempts': num_attempts,
                                           'last_valid_attempt_time': last_valid_attempt_time,
                                           'correct': correct,
                                           'first_correct_answer': first_correct_answer,
                                           'max_activity_id': max_activity_id})

        index = 0
        for section in sections:
            section_dict = {'section': section, 'items': []}
            if get_additional_page_content:
                for page in pages:
                    key = ('additional_page', page.id)
                    if page.section_id == section.id and key not in l2items:
                        children = get_children_by_display_style(key, l1items, l2items, USER)
                        item = {'type': 'additional_page', 'additional_page': page, 'index': page.index, 'children': children}
                        if common_page_data['course_mode'] == 'draft':
                            item['visible_status'] = get_live_datetime_for(page)
                        section_dict['items'].append(item)
            if get_file_content:
                for file in files:
                    key = ('file', file.id)
                    if file.section_id == section.id and key not in l2items:
                        children = get_children_by_display_style(key, l1items, l2items, USER)
                        item = {'type': 'file', 'file': file, 'index': file.index, 'children': children}
                        if common_page_data['course_mode'] == 'draft':
                            item['visible_status'] = get_live_datetime_for(file)
                        section_dict['items'].append(item)
            if get_video_content:
                for video in videos:
                    key = ('video', video.id)
                    if video.section_id == section.id and key not in l2items:
                        children = get_children_by_display_style(key, l1items, l2items, USER)
                        item = {'type': 'video', 'video': video, 'completed_percent': 0, 'index': video.index, 'children': children}
                        numQuestions = 0
                        for videoToEx in videoToExs:
                            if videoToEx['video'] == video.id:
                                numQuestions = videoToEx['dcount']
                                break
                        if common_page_data['course_mode'] == 'draft':
                            item['visible_status'] = get_live_datetime_for(video)
                        else:
                            download_count = 0
                            for video_download in video_downloads:
                                if video_download['video'] == video.id:
                                    download_count = video_download['dcount']
                                    break
                            if download_count > 0:
                                item['completed_percent'] = 100.0
                            else:
                                for video_rec in video_recs:
                                    if video_rec.video_id == video.id:
                                        item['video_rec'] = video_rec
                                        if video.duration:
                                            item['completed_percent'] = 100.0 * max(video_rec.start_seconds, video_rec.max_end_seconds) / video.duration
                                        else:
                                            item['completed_percent'] = 0
                        item['numQuestions'] = numQuestions
                        section_dict['items'].append(item)
            if get_pset_content:
                for problem_set in problem_sets:
                    key = ('problemSet', problem_set.id)
                    if problem_set.section_id == section.id and key not in l2items:
                        children = get_children_by_display_style(key, l1items, l2items, USER)
                        item = {'type': 'problem_set', 'problem_set': problem_set, 'index': problem_set.index, 'children': children}
                        numQuestions = 0
                        for psetToEx in psetToExs:
                            if psetToEx['problemSet'] == problem_set.id:
                                numQuestions = psetToEx['dcount']
                                break
                        if common_page_data['course_mode'] == 'draft':
                            item['visible_status'] = get_live_datetime_for(problem_set)
                        else:
                            numCompleted = 0
                            for pset_activity in pset_activities:
                                if pset_activity['problemset_to_exercise__problemSet_id'] == problem_set.id and not filename_in_deleted_list(pset_activity['problemset_to_exercise__exercise__fileName'], problem_set.id, deleted_exercise_list):
                                    if pset_activity['correct'] == 1:
                                        numCompleted += 1
                                    elif pset_activity['problemset_to_exercise__problemSet__submissions_permitted'] != 0 and pset_activity['num_attempts'] >= pset_activity['problemset_to_exercise__problemSet__submissions_permitted']:
                                        numCompleted += 1
                            score = 0.0
                            for score_item in score_list:
                                if score_item['problemset_id'] == problem_set.id and not filename_in_deleted_list(score_item['filename'], score_item['problemset_id'], deleted_exercise_list):
                                    exercise_percent = 100
                                    if score_item['first_correct_answer'] is None or score_item['first_correct_answer'] == score_item['max_activity_id']:
                                        if score_item['correct'] == 0:
                                            exercise_percent = 0
                                        else:
                                            exercise_percent -= score_item['resubmission_penalty'] * (score_item['num_attempts'] - 1)
                                        if score_item['last_valid_attempt_time'] > score_item['grace_period']:
                                            exercise_percent = int(exercise_percent * (100 - score_item['late_penalty']) / 100.0)
                                        # Floor the exercise percent at 0.
                                        exercise_percent = max(exercise_percent, 0)
                                        # Add to the total score.
                                        score += exercise_percent / 100.0
                                    else:
                                        score = problem_set.get_score(USER)
                                        break
                            # Guard against dividing by zero.
                            if numQuestions == 0:
                                progress = 0
                            else:
                                progress = 100.0 * numCompleted / numQuestions
                            item['numCompleted'] = numCompleted
                            item['score'] = score
                            item['progress'] = progress
                        item['numQuestions'] = numQuestions
                        section_dict['items'].append(item)
            if get_exam_content:
                user_records = ExamRecord.objects.filter(course=COURSE, student=USER, complete=True).order_by('time_created')
                for exam in exams:
                    key = ('exam', exam.id)
                    if exam.section_id == section.id and key not in l2items:
                        exam_user_records = user_records.filter(exam=exam)  # might change this to a python list filter if we want to trade db access for memory
                        children = get_children_by_display_style(key, l1items, l2items, USER)
                        item = {'type': 'exam', 'exam': exam, 'index': exam.index, 'children': children, 'records': exam_user_records}
                        section_dict['items'].append(item)
                        if common_page_data['course_mode'] == 'draft':
                            item['visible_status'] = get_live_datetime_for(exam)
            if common_page_data['course_mode'] == 'draft' or len(section_dict['items']) > 0:
                section_dict['items'] = sorted(section_dict['items'], key=lambda k: k['index'])
                section_structures.append(section_dict)
            index += 1
    return section_structures

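The per-exercise penalty arithmetic buried in the scoring loop above can be distilled into a small pure function. This is a sketch, not code from the project: the name `exercise_score_percent` is mine, and plain numbers stand in for the datetime values the real code compares.

```python
def exercise_score_percent(correct, num_attempts, resubmission_penalty,
                           last_valid_attempt_time, grace_period, late_penalty):
    # A never-correct exercise scores nothing.
    if correct == 0:
        return 0
    # Each resubmission after the first costs `resubmission_penalty` points.
    percent = 100 - resubmission_penalty * (num_attempts - 1)
    # Submitting after the grace period forfeits `late_penalty` percent of what is left.
    if last_valid_attempt_time > grace_period:
        percent = int(percent * (100 - late_penalty) / 100.0)
    # Floor at zero, exactly as the loop above does.
    return max(percent, 0)

# Correct on the 3rd attempt with a 25-point resubmission penalty, on time -> 50.
on_time = exercise_score_percent(1, 3, 25, 10, 20, 50)
# Same attempts, but after the grace period with a 50% late penalty -> 25.
late = exercise_score_percent(1, 3, 25, 30, 20, 50)
```

Each result is then divided by 100 and summed, so a problem set's score is effectively the number of fully-credited exercises plus partial credit.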
def filename_in_deleted_list(filename, problem_set_id, deleted_exercise_list):
    for item in deleted_exercise_list:
        if item['filename'] == filename and item['problemset_id'] == problem_set_id:
            return True
    return False

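The linear scan above runs once per activity row per problem set; if the deleted list ever grows large, the same membership test can be done in constant time with a set of `(filename, problemset_id)` pairs. A self-contained sketch (the helper is copied here so the example stands alone; the sample data is invented):

```python
def filename_in_deleted_list(filename, problem_set_id, deleted_exercise_list):
    # Copy of the helper above: True if this (filename, problem set) pair was deleted.
    for item in deleted_exercise_list:
        if item['filename'] == filename and item['problemset_id'] == problem_set_id:
            return True
    return False

deleted = [{'filename': 'ex1', 'problemset_id': 7},
           {'filename': 'ex2', 'problemset_id': 9}]
# Equivalent constant-time lookup via a set of (filename, problemset_id) pairs.
deleted_keys = {(d['filename'], d['problemset_id']) for d in deleted}
```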
def get_contentgroup_data(course):
    l1_items = {}
    l2_items = {}
    for cgtype, cgtid, cgref, target, level, display in [get_group_item_data(x, selfref=True) for x in
                                                         ContentGroup.objects.getByCourse(course=course)]:
        if not target.is_live():
            continue
        if level == 2:
            l2_items[(cgtype, cgtid)] = (cgref, target, level, display)
        else:
            l1_items[(cgtype, cgtid)] = cgref.group_id
    return l1_items, l2_items

def get_group_item_data(group_item, selfref=False):
    ctype = group_item.get_content_type()
    level = group_item.level
    display = group_item.display_style or 'button'
    target = getattr(group_item, ctype)
    cgid = target.id
    if not selfref:
        return ctype, cgid, target, level, display
    return ctype, cgid, group_item, target, level, display

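`get_group_item_data` only needs a content type name, a `level`, a `display_style`, and an attribute of that type name on the group item, so its shape is easy to see with stand-in objects. The stub classes below are invented for illustration; they are not models from the project.

```python
class _StubExam(object):
    # Minimal stand-in for a target content object.
    id = 42
    title = 'Midterm'

class _StubGroupItem(object):
    # Minimal stand-in for a ContentGroup row.
    level = 2
    display_style = None          # falls back to 'button'
    exam = _StubExam()

    def get_content_type(self):
        return 'exam'

def get_group_item_data(group_item, selfref=False):
    # Copy of the helper above so this sketch runs on its own.
    ctype = group_item.get_content_type()
    level = group_item.level
    display = group_item.display_style or 'button'
    target = getattr(group_item, ctype)
    cgid = target.id
    if not selfref:
        return ctype, cgid, target, level, display
    return ctype, cgid, group_item, target, level, display

item = _StubGroupItem()
ctype, cgid, target, level, display = get_group_item_data(item)
```

With `selfref=True` the tuple grows to six elements, inserting the group item itself before the target; `get_contentgroup_data` relies on that variant.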
def get_children_by_display_style(key, level1_items, level2_items, user=None):
    children = get_children(key, level1_items, level2_items, user)
    tagged_children = {}
    for child in children:
        display_style = child.get('display', 'button')
        if display_style not in tagged_children:
            tagged_children[display_style] = [child]
        else:
            tagged_children[display_style].append(child)
    return tagged_children

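The bucketing done above is a plain group-by on the `display` key, with `'button'` as the default bucket. A minimal standalone sketch using `dict.setdefault` (the sample child dicts are invented; real children come from `get_children`):

```python
def bucket_by_display(children):
    # Group child dicts by their 'display' value, defaulting to 'button'.
    tagged = {}
    for child in children:
        tagged.setdefault(child.get('display', 'button'), []).append(child)
    return tagged

kids = [{'name': 'a.pdf', 'display': 'list'},
        {'name': 'b.pdf'},                      # no display -> 'button'
        {'name': 'c.pdf', 'display': 'list'}]
buckets = bucket_by_display(kids)
```

`setdefault` collapses the two-branch insert into one line while preserving insertion order within each bucket.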
def get_children(key, level1_items, level2_items, user=None):
    def type_sorter(ci1, ci2):
        ci1_type = ci1['type']
        ci2_type = ci2['type']
        ci1_title = ci1['title']
        ci2_title = ci2['title']
        if ci1_type < ci2_type:
            return -1
        elif ci1_type > ci2_type:
            return +1
        else:
            # Equal types: order by title.
            if ci1_title < ci2_title:
                return -1
            elif ci1_title > ci2_title:
                return +1
            else:
                return 0

    def name_sorter(ci1, ci2):
        ci1_name = ci1['name']
        ci2_name = ci2['name']
        ci1_ext = ci1['ext']
        ci2_ext = ci2['ext']
        if ci1_name and ci2_name:
            if ci1_ext < ci2_ext:
                return -1
            elif ci1_ext > ci2_ext:
                return +1
            else:
                # Equal extensions: order by filename.
                if ci1_name < ci2_name:
                    return -1
                elif ci1_name > ci2_name:
                    return +1
                else:
                    return 0
        else:
            return 0

    children = []
    if key in level1_items:
        group_id = level1_items[key]
        children.extend([augment_child_data(k, v, user) for k, v in level2_items.items() if v[0].group_id == group_id])
        # Two stable cmp-style sorts (Python 2's sorted(iterable, cmp) signature); the
        # name/extension pass runs last, so it is the primary order.
        children = sorted(sorted(children, type_sorter), name_sorter)
    return children

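`type_sorter` compares exactly as a `(type, title)` tuple does, so that pass translates directly to a key function, which also works on Python 3 where the `cmp` argument is gone. The `name_sorter` pass is harder to translate: it returns 0 whenever either name is empty, leaning on the stable sort to leave those entries where they are, and a key function cannot express that, so this sketch covers only the type/title pass (sample data invented):

```python
children = [
    {'type': 'file', 'title': 'Syllabus'},
    {'type': 'exam', 'title': 'Midterm'},
    {'type': 'file', 'title': 'Notes'},
]
# Key-based equivalent of type_sorter: lexicographic on (type, title).
by_type = sorted(children, key=lambda c: (c['type'], c['title']))
```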
def augment_child_data(key, value, user=None):
    class NoFile():
        name = ''
    cgtype = key[0]
    ref = value[1]
    tmp_f = getattr(ref, 'file', NoFile())
    name = tmp_f.name.split('/').pop()
    ext = name.split('.').pop().lower()
    # key is (type, id); value is (group item, target, level, display). 'self' is the
    # ContentGroup entry itself, 'ref' is the content object it points at.
    child_data = {'type': cgtype, 'id': key[1], 'self': value[0], 'ref': ref, 'display': value[3], 'ext': ext,
                  'name': name, 'title': ref.title, 'url': ref.get_url(), 'index': ref.index, 'children': None, }
    child_data[cgtype] = ref  # FIXME: set 'exam':exam - remove after making templates use 'ref'
    if cgtype == "exam" and user:  # FIXME: per-type special cases belong somewhere else?
        child_data['records'] = ExamRecord.objects.filter(course=ref.course, student=user, complete=True, exam=ref)
    return child_data

def get_live_datetime_for(thing):
    """Return the appropriate .live_datetime status string for thing."""
    prod_thing = thing.image
    if not prod_thing.live_datetime:
        return "<span style='color:#A00000;'>Not Live</span>"
    elif prod_thing.live_datetime > datetime.datetime.now():
        return prod_thing.live_datetime.strftime("<span style='color:#A07000;'>Live %F at %H:%M</span>")
    else:
        return "<span style='color:green;'>Live</span>"

# Test purposes only - not to be run in production.
def test_for_pset_progress_and_score():
    logfile = open('zzzz.log', 'w')
    # Get all courses.
    courses = Course.objects.filter(mode='ready')
    # Get all users.
    users = User.objects.all()
    for course in courses:
        logfile.write("course_id : " + str(course.id) + "\n")
        # Get all problem sets.
        problem_sets = ProblemSet.objects.getByCourse(course=course)
        # Get all sections.
        sections = ContentSection.objects.getByCourse(course=course)
        if problem_sets:
            problem_set_list = []
            for problem_set in problem_sets:
                problem_set_list.append(problem_set.id)
            psetToExs = ProblemSetToExercise.objects.values('problemSet').filter(problemSet__in=problem_set_list, is_deleted=0).annotate(dcount=Count('problemSet'))
            cursor = connection.cursor()
            # As in get_course_materials, a one-element list cannot be passed as an "IN %s"
            # parameter, so the operator and parameter are chosen once and reused.
            if len(problem_set_list) > 1:
                pset_filter, pset_param = "IN %s", tuple(problem_set_list)
            else:
                pset_filter, pset_param = "= %s", problem_set_list[0]
            cursor.execute("""select e.fileName, p2e.problemSet_id,
                count(case when p2e.is_deleted = 0 then 1 else null end) as `num_active`
                from c2g_problemset_to_exercise p2e, c2g_exercises e
                where p2e.exercise_id = e.id
                and p2e.problemSet_id """ + pset_filter + """
                and p2e.mode = 'ready'
                group by e.filename, p2e.problemSet_id
                having num_active = 0""", [pset_param])
            deleted_exercise_list = []
            for row in cursor.fetchall():
                deleted_exercise_list.append({'filename': row[0],
                                              'problemset_id': row[1]})
            for user in users:
                user_groups = user.groups.all()
                for g in user_groups:
                    if g.id == course.student_group_id:
                        pset_activities = ProblemActivity.objects.values('problemset_to_exercise__problemSet_id', 'problemset_to_exercise__problemSet__submissions_permitted', 'problemset_to_exercise__exercise__fileName').select_related('problemset_to_exercise').filter(problemset_to_exercise__problemSet_id__in=problem_set_list, student=user).annotate(correct=Max('complete'), num_attempts=Max('attempt_number'))
                        cursor.execute("""SELECT `c2g_problemset_to_exercise`.`problemSet_id`, `c2g_problem_sets`.`submissions_permitted`,
                            `c2g_problem_sets`.`resubmission_penalty`, `c2g_problem_sets`.`partial_credit_deadline`,
                            `c2g_problem_sets`.`grace_period`, `c2g_problem_sets`.`late_penalty`, `c2g_exercises`.`fileName`,
                            count(`c2g_problem_activity`.`attempt_number`) AS `num_attempts`,
                            MAX(`c2g_problem_activity`.`time_created`) AS `last_valid_attempt_time`,
                            MAX(`c2g_problem_activity`.`complete`) AS `correct`,
                            min(case when c2g_problem_activity.complete = 1 then c2g_problem_activity.id else null end) as `first_correct_answer`,
                            max(c2g_problem_activity.id) as `max_activity_id`
                            FROM `c2g_problem_activity`
                            LEFT OUTER JOIN `c2g_problemset_to_exercise` ON (`c2g_problem_activity`.`problemset_to_exercise_id` = `c2g_problemset_to_exercise`.`id`)
                            INNER JOIN `c2g_problem_sets` ON (`c2g_problemset_to_exercise`.`problemSet_id` = `c2g_problem_sets`.`id`)
                            INNER JOIN `c2g_exercises` ON (`c2g_problemset_to_exercise`.`exercise_id` = `c2g_exercises`.`id`)
                            WHERE (`c2g_problem_activity`.`student_id` = %s AND `c2g_problemset_to_exercise`.`problemSet_id` """ + pset_filter + """
                            AND ((`c2g_problem_sets`.`submissions_permitted` = 0 AND `c2g_problem_sets`.`partial_credit_deadline` > `c2g_problem_activity`.`time_created`)
                            OR (`c2g_problem_sets`.`submissions_permitted` > 0 AND `c2g_problem_sets`.`submissions_permitted` >= `c2g_problem_activity`.`attempt_number`
                            AND `c2g_problem_sets`.`partial_credit_deadline` > `c2g_problem_activity`.`time_created`)))
                            GROUP BY `c2g_problemset_to_exercise`.`problemSet_id`, `c2g_problem_sets`.`submissions_permitted`, `c2g_problem_sets`.`resubmission_penalty`,
                            `c2g_problem_sets`.`partial_credit_deadline`, `c2g_problem_sets`.`grace_period`, `c2g_problem_sets`.`late_penalty`, `c2g_exercises`.`fileName`
                            ORDER BY NULL""", [user.id, pset_param])
                        score_list = []
                        for row in cursor.fetchall():
                            (problemset_id, submissions_permitted, resubmission_penalty,
                             partial_credit_deadline, grace_period, late_penalty, filename,
                             num_attempts, last_valid_attempt_time, correct,
                             first_correct_answer, max_activity_id) = row
                            score_list.append({'problemset_id': problemset_id,
                                               'submissions_permitted': submissions_permitted,
                                               'resubmission_penalty': resubmission_penalty,
                                               'partial_credit_deadline': partial_credit_deadline,
                                               'grace_period': grace_period,
                                               'late_penalty': late_penalty,
                                               'filename': filename,
                                               'num_attempts': num_attempts,
                                               'last_valid_attempt_time': last_valid_attempt_time,
                                               'correct': correct,
                                               'first_correct_answer': first_correct_answer,
                                               'max_activity_id': max_activity_id})
                        for section in sections:
                            for problem_set in problem_sets:
                                if problem_set.section_id == section.id:
                                    numQuestions = 0
                                    for psetToEx in psetToExs:
                                        if psetToEx['problemSet'] == problem_set.id:
                                            numQuestions = psetToEx['dcount']
                                            break
                                    numCompleted = 0
                                    for pset_activity in pset_activities:
                                        if pset_activity['problemset_to_exercise__problemSet_id'] == problem_set.id and not filename_in_deleted_list(pset_activity['problemset_to_exercise__exercise__fileName'], problem_set.id, deleted_exercise_list):
                                            if pset_activity['correct'] == 1:
                                                numCompleted += 1
                                            elif pset_activity['problemset_to_exercise__problemSet__submissions_permitted'] != 0 and pset_activity['num_attempts'] >= pset_activity['problemset_to_exercise__problemSet__submissions_permitted']:
                                                numCompleted += 1
                                    old_numCompleted = problem_set.get_progress(user)
                                    if old_numCompleted != numCompleted:
                                        logfile.write("****FC : course_id : " + str(course.id) + " pset_id : " + str(problem_set.id) + " user_id : " + str(user.id) + " old : " + str(old_numCompleted) + " new : " + str(numCompleted) + "\n")
                                    else:
                                        logfile.write("**PC : course_id : " + str(course.id) + " pset_id : " + str(problem_set.id) + " user_id : " + str(user.id) + " old : " + str(old_numCompleted) + " new : " + str(numCompleted) + "\n")
                                    score = 0.0
                                    old_score = 0.0
                                    for score_item in score_list:
                                        if score_item['problemset_id'] == problem_set.id and not filename_in_deleted_list(score_item['filename'], score_item['problemset_id'], deleted_exercise_list):
                                            exercise_percent = 100
                                            if score_item['first_correct_answer'] is None or score_item['first_correct_answer'] == score_item['max_activity_id']:
                                                if score_item['correct'] == 0:
                                                    exercise_percent = 0
                                                else:
                                                    exercise_percent -= score_item['resubmission_penalty'] * (score_item['num_attempts'] - 1)
                                                if score_item['last_valid_attempt_time'] > score_item['grace_period']:
                                                    exercise_percent = int(exercise_percent * (100 - score_item['late_penalty']) / 100.0)
                                                # Floor the exercise percent at 0.
                                                exercise_percent = max(exercise_percent, 0)
                                                # Add to the total score.
                                                score += exercise_percent / 100.0
                                            else:
                                                logfile.write("Bad data\n")
                                                score = problem_set.get_score(user)
                                                break
                                    old_score = problem_set.get_score(user)
                                    if old_score != score:
                                        logfile.write("****FS : course_id : " + str(course.id) + " pset_id : " + str(problem_set.id) + " user_id : " + str(user.id) + " old : " + str(old_score) + " new : " + str(score) + "\n")
                                    else:
                                        logfile.write("**PS : course_id : " + str(course.id) + " pset_id : " + str(problem_set.id) + " user_id : " + str(user.id) + " old : " + str(old_score) + " new : " + str(score) + "\n")
    logfile.close()
ddd8eb0ca8ce86cbf8798c78b3e372538e8e021c | 24 | py | Python | build/lib/tools/pes/__init__.py | miquelcanyelles/PhDtools | f397261d890d36a6ecaa018fcd0e67959a78c0e9 | ["MIT"]

from tools.pes import *
ddef75b1aa9d329ada1028eb11964fcb37ee981e | 108 | py | Python | yukicoder/yuki063.py | knuu/competitive-programming | 16bc68fdaedd6f96ae24310d697585ca8836ab6e | ["MIT"] | 1 star (2018-11-12)

L, K = map(int, input().split())
if L % (K*2) == 0:
    print(K*(L//(K*2)-1))
else:
    print(K*(L//(K*2)))
34d881921b2c12d0666db4ae431b1c1705ec7943 | 80 | py | Python | naics/base_codes/__init__.py | dylanmoring/naics_sic | 8b51ddf0b9ab1b9d380bfd620564ac281bb7d1d2 | ["MIT"]

from .naics_code import NAICSIndustryCode
from .sic_code import SICIndustryCode
9b8c08f165afec1d6bef305c7b8f84490afd058a | 184 | py | Python | conan_ue4cli/commands/__init__.py | JJC1138/conan-ue4cli | 76163a21cb63976cacfea46b024d80c12cf39313 | ["MIT"]

from .boilerplate import boilerplate
from .build import build
from .generate import generate
from .precompute import precompute
from .sources import sources
from .update import update
32da01ad48487c0a6a79b88c6fe5b8867c4aa858 | 9,883 | py | Python | q2_api_client/clients/v2/pfm_client.py | jcook00/q2-api-client | 4431af164eb4baf52e26e8842e017cad1609a279 | ["BSD-2-Clause"]

from q2_api_client.clients.base_q2_client import BaseQ2Client
from q2_api_client.endpoints.v2_endpoints import PFMEndpoint


class PFMClient(BaseQ2Client):
    def get_account(self, account_guid, member_guid=None):
        """GET /v2/pfm/accounts/{accountGuid}

        :param str account_guid: path parameter
        :param str member_guid: query parameter
        :return: Response object
        :rtype: requests.Response
        """
        endpoint = PFMEndpoint.ACCOUNT_GUID.value.format(accountGuid=account_guid)
        query_parameters = self._copy_query_parameters()
        query_parameters['member_guid'] = member_guid
        return self._get(url=self._build_url(endpoint), query_parameters=query_parameters)

    def update_account(self, account_guid, request_body):
        """PUT /v2/pfm/accounts/{accountGuid}

        :param str account_guid: path parameter
        :param dict request_body: Dictionary object to send in the body of the request
        :return: Response object
        :rtype: requests.Response
        """
        endpoint = PFMEndpoint.ACCOUNT_GUID.value.format(accountGuid=account_guid)
        return self._put(url=self._build_url(endpoint), json=request_body)

    def get_categories(self):
        """GET /v2/pfm/categories

        :return: Response object
        :rtype: requests.Response
        """
        endpoint = PFMEndpoint.CATEGORIES.value
        return self._get(url=self._build_url(endpoint))

    def create_category(self, request_body):
        """POST /v2/pfm/categories

        :param dict request_body: Dictionary object to send in the body of the request
        :return: Response object
        :rtype: requests.Response
        """
        endpoint = PFMEndpoint.CATEGORIES.value
        return self._post(url=self._build_url(endpoint), json=request_body)

    def delete_category(self, category_guid):
        """DELETE /v2/pfm/categories/{categoryGuid}

        :param str category_guid: path parameter
:param str category_guid: path parameter
:return: Response object
:rtype: requests.Response
"""
endpoint = PFMEndpoint.CATEGORY_GUID.value.format(categoryGuid=category_guid)
return self._delete(url=self._build_url(endpoint))
def update_category(self, category_guid, request_body):
"""PUT /v2/pfm/categories/{categoryGuid}
:param str category_guid: path parameter
:param dict request_body: Dictionary object to send in the body of the request
:return: Response object
:rtype: requests.Response
"""
endpoint = PFMEndpoint.CATEGORY_GUID.value.format(categoryGuid=category_guid)
return self._put(url=self._build_url(endpoint), json=request_body)
def get_institutions(self, name=None, count=None):
"""GET /v2/pfm/institutions
:param str name: query parameter (string to search the institution names by)
:param int count: query parameter (number of results to return)
:return: Response object
:rtype: requests.Response
"""
endpoint = PFMEndpoint.INSTITUTIONS.value
query_parameters = self._copy_query_parameters()
query_parameters['name'] = name
query_parameters['count'] = count
return self._get(url=self._build_url(endpoint), query_parameters=query_parameters)
def get_institution(self, institution_guid):
"""GET /v2/pfm/institutions/{institutionGuid}
:param str institution_guid: path parameter
:return: Response object
:rtype: requests.Response
"""
endpoint = PFMEndpoint.INSTITUTION_GUID.value.format(institutionGuid=institution_guid)
return self._get(url=self._build_url(endpoint))
def get_institution_credentials(self, institution_guid):
"""GET /v2/pfm/institutions/{institutionGuid}/credentials
:param str institution_guid: path parameter
:return: Response object
:rtype: requests.Response
"""
endpoint = PFMEndpoint.INSTITUTION_CREDENTIALS.value.format(institutionGuid=institution_guid)
return self._get(url=self._build_url(endpoint))
def get_job(self, job_guid):
"""GET /v2/pfm/jobs/{jobGuid}
:param str job_guid: path parameter
:return: Response object
:rtype: requests.Response
"""
endpoint = PFMEndpoint.JOB.value.format(jobGuid=job_guid)
return self._get(url=self._build_url(endpoint))
def get_job_mfa_credentials(self, job_guid):
"""GET /v2/pfm/jobs/{jobGuid}/mfa_credentials
:param str job_guid: path parameter
:return: Response object
:rtype: requests.Response
"""
endpoint = PFMEndpoint.JOB_MFA_CREDENTIALS.value.format(jobGuid=job_guid)
return self._get(url=self._build_url(endpoint))
def resume_job(self, job_guid):
"""POST /v2/pfm/jobs/{jobGuid}/resume
:param str job_guid: path parameter
:return: Response object
:rtype: requests.Response
"""
endpoint = PFMEndpoint.JOB_RESUME.value.format(jobGuid=job_guid)
return self._post(url=self._build_url(endpoint))
def get_member(self, member_guid):
"""GET /v2/pfm/members/{memberGuid}
:param str member_guid: path parameter
:return: Response object
:rtype: requests.Response
"""
endpoint = PFMEndpoint.MEMBER_GUID.value.format(memberGuid=member_guid)
return self._get(url=self._build_url(endpoint))
def create_member(self, request_body):
"""POST /v2/pfm/members/
:param dict request_body: Dictionary object to send in the body of the request
:return: Response object
:rtype: requests.Response
"""
endpoint = PFMEndpoint.MEMBERS.value
return self._post(url=self._build_url(endpoint), json=request_body)
def update_member(self, member_guid, request_body):
"""PUT /v2/pfm/members/{memberGuid}
:param str member_guid: path parameter
:param dict request_body: Dictionary object to send in the body of the request
:return: Response object
:rtype: requests.Response
"""
endpoint = PFMEndpoint.MEMBER_GUID.value.format(memberGuid=member_guid)
return self._put(url=self._build_url(endpoint), json=request_body)
def delete_member(self, member_guid):
"""DELETE /v2/pfm/members/{memberGuid}
:param str member_guid: path parameter
:return: Response object
:rtype: requests.Response
"""
endpoint = PFMEndpoint.MEMBER_GUID.value.format(memberGuid=member_guid)
return self._delete(url=self._build_url(endpoint))
def delete_all_members(self):
"""DELETE /v2/pfm/members/all
:return: Response object
:rtype: requests.Response
"""
endpoint = PFMEndpoint.ALL_MEMBERS.value
return self._delete(url=self._build_url(endpoint))
def create_member_credentials(self, member_guid, request_body):
"""POST /v2/pfm/members/{memberGuid}/credentials
:param str member_guid: path parameter
:param dict request_body: Dictionary object to send in the body of the request
:return: Response object
:rtype: requests.Response
"""
endpoint = PFMEndpoint.MEMBER_CREDENTIALS.value.format(memberGuid=member_guid)
return self._post(url=self._build_url(endpoint), json=request_body)
def refresh_member(self, member_guid):
"""POST /v2/pfm/members/{memberGuid}/refresh
:param str member_guid: path parameter
:return: Response object
:rtype: requests.Response
"""
endpoint = PFMEndpoint.MEMBER_REFRESH.value.format(memberGuid=member_guid)
return self._post(url=self._build_url(endpoint))
def update_transaction(self, transaction_guid, request_body):
"""PUT /v2/pfm/transactions/{transactionGuid}
:param str transaction_guid: path parameter
:param dict request_body: Dictionary object to send in the body of the request
:return: Response object
:rtype: requests.Response
"""
endpoint = PFMEndpoint.TRANSACTION_GUID.value.format(transactionGuid=transaction_guid)
return self._put(url=self._build_url(endpoint), json=request_body)
def update_split_transaction(self, transaction_guid, request_body):
"""PUT /v2/pfm/transactions/{transactionGuid}/splits
:param str transaction_guid: path parameter
:param dict request_body: Dictionary object to send in the body of the request
:return: Response object
:rtype: requests.Response
"""
endpoint = PFMEndpoint.TRANSACTION_SPLITS.value.format(transactionGuid=transaction_guid)
return self._put(url=self._build_url(endpoint), json=request_body)
def create_user(self):
"""POST /v2/pfm/users
:return: Response object
:rtype: requests.Response
"""
endpoint = PFMEndpoint.USERS.value
return self._post(url=self._build_url(endpoint))
def get_widget(self, widget_short_name, no_redirect=None, q2token=None):
"""GET /v2/pfm/widgets/{widgetShortName}
:param str widget_short_name: path parameter
:param bool no_redirect: query parameter
(Flag to return url data and not send a 302 redirect)
:param str q2token: query parameter
(Allow passing in q2token by query string for authentication)
:return: Response object
:rtype: requests.Response
"""
endpoint = PFMEndpoint.WIDGET.value.format(widgetShortName=widget_short_name)
query_parameters = self._copy_query_parameters()
query_parameters['no_redirect'] = no_redirect
query_parameters['q2token'] = q2token
return self._get(url=self._build_url(endpoint), query_parameters=query_parameters)
| 39.063241 | 101 | 0.680765 | 1,152 | 9,883 | 5.638889 | 0.091146 | 0.04064 | 0.070813 | 0.088516 | 0.798491 | 0.789255 | 0.768319 | 0.763239 | 0.673184 | 0.664563 | 0 | 0.00484 | 0.226449 | 9,883 | 252 | 102 | 39.218254 | 0.844866 | 0.37519 | 0 | 0.4375 | 0 | 0 | 0.007269 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2875 | false | 0 | 0.025 | 0 | 0.6125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
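Every method in the `PFMClient` file above follows one pattern: look up a path template on an endpoint enum, fill it with `str.format`, then prefix the host. A minimal self-contained sketch of that pattern — the enum values, `build_url` helper, and host below are illustrative assumptions, not the real `q2_api_client` definitions:

```python
from enum import Enum


class PFMEndpoint(Enum):
    # Hypothetical template values; the real enum lives in
    # q2_api_client.endpoints.v2_endpoints and is not shown in this file.
    ACCOUNT_GUID = "/v2/pfm/accounts/{accountGuid}"
    CATEGORIES = "/v2/pfm/categories"


def build_url(base, endpoint, **path_params):
    # Same two steps as PFMClient: fill the path template from the enum
    # member's value, then join it onto the base URL.
    return base + endpoint.value.format(**path_params)


url = build_url("https://api.example.com", PFMEndpoint.ACCOUNT_GUID,
                accountGuid="abc-123")
```

With this sketch, `url` is `"https://api.example.com/v2/pfm/accounts/abc-123"`; endpoints without path parameters (like `CATEGORIES`) format to themselves unchanged.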
32e38d7887194eb390ea2ee480841b9d9f727a5c | 898 | py | Python | metric_learning_traffic/models/__init__.py | GT-AcerZhang/BaiduStar2020-Traffic-Sign-Detection-And-Pair-Competition-Solution | 3e44ff6975562aa8bcd55485bafe1b494f37a859 | [
"Apache-2.0"
] | 13 | 2020-09-09T12:23:36.000Z | 2022-03-16T09:42:07.000Z | metric_learning_traffic/models/__init__.py | GT-AcerZhang/BaiduStar2020-Traffic-Sign-Detection-And-Pair-Competition-Solution | 3e44ff6975562aa8bcd55485bafe1b494f37a859 | [
"Apache-2.0"
] | null | null | null | metric_learning_traffic/models/__init__.py | GT-AcerZhang/BaiduStar2020-Traffic-Sign-Detection-And-Pair-Competition-Solution | 3e44ff6975562aa8bcd55485bafe1b494f37a859 | [
"Apache-2.0"
] | 5 | 2020-09-14T07:35:39.000Z | 2021-12-22T02:03:31.000Z | from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from .resnet_embedding import ResNet50
from .resnet_embedding import ResNet101
from .resnet_embedding import ResNet152
from .resnext_vd_embedding import ResNeXt50_vd_32x4d
from .resnext_vd_embedding import ResNeXt50_vd_64x4d
from .resnext_vd_embedding import ResNeXt101_vd_32x4d
from .resnext_vd_embedding import ResNeXt101_vd_64x4d
from .resnext_vd_embedding import ResNeXt152_vd_32x4d
from .resnext_vd_embedding import ResNeXt152_vd_64x4d
from .se_resnext_vd_embedding import SE_ResNeXt50_vd_32x4d
from .se_resnext_vd_embedding import SE_ResNeXt101_vd_32x4d
from .se_resnext_vd_embedding import SENet154_vd
from .efficientnet_embedding import EfficientNetB4
from .res2net_vd import Res2Net101_vd_26w_4s
from .res2net_vd import Res2Net50_vd_26w_4s
from .hrnet_embedding import HRNet_W64_C | 47.263158 | 59 | 0.898664 | 133 | 898 | 5.548872 | 0.233083 | 0.284553 | 0.219512 | 0.292683 | 0.50271 | 0.50271 | 0.50271 | 0.100271 | 0 | 0 | 0 | 0.089915 | 0.083519 | 898 | 19 | 60 | 47.263158 | 0.806804 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0.052632 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fd3603c5b7f277a890d41cd1c13395195290ac10 | 2,842 | py | Python | tests/backend/endpoints/test_eventstore.py | aspuru-guzik-group/molar | a3e0c337bd8a41c94b2c25831c95048cc7614f04 | [
"BSD-3-Clause"
] | 4 | 2021-07-20T18:49:44.000Z | 2021-10-15T00:58:12.000Z | tests/backend/endpoints/test_eventstore.py | aspuru-guzik-group/molar | a3e0c337bd8a41c94b2c25831c95048cc7614f04 | [
"BSD-3-Clause"
] | null | null | null | tests/backend/endpoints/test_eventstore.py | aspuru-guzik-group/molar | a3e0c337bd8a41c94b2c25831c95048cc7614f04 | [
"BSD-3-Clause"
] | 2 | 2022-01-07T17:57:42.000Z | 2022-01-13T21:00:20.000Z | # std
from datetime import datetime
class TestEventStore:
def test_create_eventstore(self, client, new_database_headers):
out = client.post(
"/api/v1/eventstore/test_database",
headers=new_database_headers,
json={"type": "molecule", "data": {"smiles": "abc"}},
)
assert out.status_code == 200
out = client.get(
"/api/v1/eventstore/test_database", headers=new_database_headers
)
assert out.status_code == 200
assert len(out.json()) == 1
# Database without eventstore
out = client.get("/api/v1/eventstore/main", headers=new_database_headers)
assert out.status_code == 403
def test_update_eventstore(self, client, new_database_headers):
out = client.get(
"/api/v1/eventstore/test_database", headers=new_database_headers
)
events = out.json()
out = client.patch(
"/api/v1/eventstore/test_database",
headers=new_database_headers,
json={
"type": "molecule",
"data": {"smiles": "def"},
"uuid": events[0]["uuid"],
},
)
assert out.status_code == 200
# fake UUID
out = client.patch(
"/api/v1/eventstore/test_database",
headers=new_database_headers,
json={
"type": "molecule",
"data": {"smiles": "abc"},
"uuid": "91912ca4-cf33-428b-baf0-dfe89ef2dbda",
},
)
assert out.status_code == 404
def test_delete_eventstore(self, client, new_database_headers):
out = client.get(
"/api/v1/eventstore/test_database", headers=new_database_headers
)
events = out.json()
out = client.delete(
"/api/v1/eventstore/test_database",
headers=new_database_headers,
json={"type": "molecule", "uuid": events[0]["uuid"]},
)
assert out.status_code == 200
def test_rollback_eventstore(self, client, new_database_headers):
out = client.patch(
"/api/v1/eventstore/rollback/test_database",
params={"before": str(datetime(1980, 1, 1, 16, 30))},
headers=new_database_headers,
)
assert out.status_code == 200
out = client.get(
"/api/v1/eventstore/test_database", headers=new_database_headers
)
events = out.json()
        assert len(events) == 0
def test_user_id_alembic_notnull(self, client, new_database_headers):
out = client.get(
"/api/v1/eventstore/test_database", headers=new_database_headers
)
events = out.json()
assert events[0]["alembic_version"] is not None
assert events[0]["user_id"] is not None
| 34.240964 | 81 | 0.569317 | 306 | 2,842 | 5.081699 | 0.205882 | 0.241158 | 0.185209 | 0.176849 | 0.753055 | 0.753055 | 0.72283 | 0.72283 | 0.634084 | 0.55627 | 0 | 0.032126 | 0.309993 | 2,842 | 82 | 82 | 34.658537 | 0.760836 | 0.014426 | 0 | 0.492958 | 0 | 0 | 0.186986 | 0.13872 | 0 | 0 | 0 | 0 | 0.140845 | 1 | 0.070423 | false | 0 | 0.014085 | 0 | 0.098592 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fd590a64dc1f14459e9d3e110c7e9d325b15dee5 | 176 | py | Python | test_annotations/helpers.py | sk-/python2.7-type-annotator | 949422534a491209e28660a94b62e8afcaa10759 | [
"MIT"
] | 1 | 2015-12-13T14:27:23.000Z | 2015-12-13T14:27:23.000Z | test_annotations/helpers.py | sk-/python2.7-type-annotator | 949422534a491209e28660a94b62e8afcaa10759 | [
"MIT"
] | null | null | null | test_annotations/helpers.py | sk-/python2.7-type-annotator | 949422534a491209e28660a94b62e8afcaa10759 | [
"MIT"
] | null | null | null | class A(object):
def foo(self, x, y, *args, **kwargs):
return x
def __call__(self, foo):
return foo
def foo(a, b=None, *args, **kwargs):
return a
| 17.6 | 41 | 0.551136 | 27 | 176 | 3.444444 | 0.518519 | 0.129032 | 0.344086 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.295455 | 176 | 9 | 42 | 19.555556 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.428571 | false | 0 | 0 | 0.428571 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
bd22415a282da260d793b10642e70eca5acd666f | 48 | py | Python | automatapy/automata/regex.py | cxlvinchau/automata-py | c0c27866a8f32ca4ccb970b3ffa8f63adb62bef9 | [
"MIT"
] | 3 | 2022-02-16T13:50:15.000Z | 2022-02-16T23:17:32.000Z | automatapy/automata/regex.py | cxlvinchau/automatapy | c0c27866a8f32ca4ccb970b3ffa8f63adb62bef9 | [
"MIT"
] | null | null | null | automatapy/automata/regex.py | cxlvinchau/automatapy | c0c27866a8f32ca4ccb970b3ffa8f63adb62bef9 | [
"MIT"
] | null | null | null | class Regex:
def to_nfa(self):
pass | 12 | 21 | 0.5625 | 7 | 48 | 3.714286 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.354167 | 48 | 4 | 22 | 12 | 0.83871 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0.333333 | 0 | 0 | 0.666667 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
bd4bde411cacfdff61372bbcfa7da0118bd6e955 | 48 | py | Python | src/nesteddataclasses/__init__.py | mububoki/nested-dataclasses | e4cf74a52c4bf3e4a6cb7d589a8c3c3d94135ee4 | [
"MIT"
] | null | null | null | src/nesteddataclasses/__init__.py | mububoki/nested-dataclasses | e4cf74a52c4bf3e4a6cb7d589a8c3c3d94135ee4 | [
"MIT"
] | 1 | 2021-07-27T15:04:06.000Z | 2021-07-27T15:04:06.000Z | src/nesteddataclasses/__init__.py | mububoki/nested-dataclasses | e4cf74a52c4bf3e4a6cb7d589a8c3c3d94135ee4 | [
"MIT"
] | null | null | null | from .nesteddataclasses import nested_dataclass
| 24 | 47 | 0.895833 | 5 | 48 | 8.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 48 | 1 | 48 | 48 | 0.954545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1fb27407c6193966c9acd50ef6aacb1defab3098 | 64 | py | Python | h264tomp4/__init__.py | y-tetsu/gmail_picamera | 7f9f4f5564d1538ae8942fda87df02f866bcc6fd | [
"MIT"
] | null | null | null | h264tomp4/__init__.py | y-tetsu/gmail_picamera | 7f9f4f5564d1538ae8942fda87df02f866bcc6fd | [
"MIT"
] | null | null | null | h264tomp4/__init__.py | y-tetsu/gmail_picamera | 7f9f4f5564d1538ae8942fda87df02f866bcc6fd | [
"MIT"
] | null | null | null | #!/usr/bin/env python
from h264tomp4.h264tomp4 import h264tomp4
| 21.333333 | 41 | 0.8125 | 9 | 64 | 5.777778 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.206897 | 0.09375 | 64 | 2 | 42 | 32 | 0.689655 | 0.3125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1fe9f5916897af38c7c309667faf00d69311eda5 | 50,256 | py | Python | code/python/ExchangeDataFeedSnapshotAPISymbolList/v1/fds/sdk/ExchangeDataFeedSnapshotAPISymbolList/model/fields.py | factset/enterprise-sdk | 3fd4d1360756c515c9737a0c9a992c7451d7de7e | [
"Apache-2.0"
] | 6 | 2022-02-07T16:34:18.000Z | 2022-03-30T08:04:57.000Z | code/python/ExchangeDataFeedSnapshotAPISymbolList/v1/fds/sdk/ExchangeDataFeedSnapshotAPISymbolList/model/fields.py | factset/enterprise-sdk | 3fd4d1360756c515c9737a0c9a992c7451d7de7e | [
"Apache-2.0"
] | 2 | 2022-02-07T05:25:57.000Z | 2022-03-07T14:18:04.000Z | code/python/ExchangeDataFeedSnapshotAPISymbolList/v1/fds/sdk/ExchangeDataFeedSnapshotAPISymbolList/model/fields.py | factset/enterprise-sdk | 3fd4d1360756c515c9737a0c9a992c7451d7de7e | [
"Apache-2.0"
] | null | null | null | """
Exchange DataFeed Snapshot
    FactSet’s Exchange DataFeed Snapshot API provides cost-effective access to real-time and delayed global exchange data. Proprietary technology normalizes over 200 global exchanges and 150+ data fields. Asset types integrated include equities, futures, options, warrants, fixed income, mutual funds, ETFs, indices, commodities, and FX rates. Cutting-edge technology ensures reliability and provides scalability that allows applications to request multiple items at a time. To simplify client-side development, an entire response can be placed in a matrix or table for effortless integration into internal and external applications. Using specified output formats (CSV, XML, JSON), receive all standard fields by default or customize the list based on specific needs. Below are the current hosts: Production: api.factset.com; Sandbox: api-sandbox.factset.com  # noqa: E501
The version of the OpenAPI document: 1.0.0
Contact: api@factset.com
Generated by: https://openapi-generator.tech
"""
import re # noqa: F401
import sys # noqa: F401
from fds.sdk.ExchangeDataFeedSnapshotAPISymbolList.model_utils import ( # noqa: F401
ApiTypeError,
ModelComposed,
ModelNormal,
ModelSimple,
cached_property,
change_keys_js_to_python,
convert_js_args_to_python_args,
date,
datetime,
file_type,
none_type,
validate_get_composed_info,
OpenApiModel
)
from fds.sdk.ExchangeDataFeedSnapshotAPISymbolList.exceptions import ApiAttributeError
class Fields(ModelNormal):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
Attributes:
allowed_values (dict): The key is the tuple path to the attribute
and the for var_name this is (var_name,). The value is a dict
with a capitalized key describing the allowed value and an allowed
value. These dicts store the allowed enum values.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
discriminator_value_class_map (dict): A dict to go from the discriminator
variable value to the discriminator class name.
validations (dict): The key is the tuple path to the attribute
and the for var_name this is (var_name,). The value is a dict
that stores validations for max_length, min_length, max_items,
min_items, exclusive_maximum, inclusive_maximum, exclusive_minimum,
inclusive_minimum, and regex.
additional_properties_type (tuple): A tuple of classes accepted
as additional properties values.
"""
allowed_values = {
}
validations = {
}
@cached_property
def additional_properties_type():
"""
This must be a method because a model may have properties that are
of type self, this must run after the class is loaded
"""
return (bool, date, datetime, dict, float, int, list, str, none_type,) # noqa: E501
_nullable = False
@cached_property
def openapi_types():
"""
This must be a method because a model may have properties that are
of type self, this must run after the class is loaded
Returns
openapi_types (dict): The key is attribute name
and the value is attribute type.
"""
return {
'exchange': (str,), # noqa: E501
'product': (str,), # noqa: E501
'bid': (float,), # noqa: E501
'bid_date': (str,), # noqa: E501
'bid_time': (int,), # noqa: E501
'bid_vol': (int,), # noqa: E501
'bid_tick': (str,), # noqa: E501
'bid_close': (float,), # noqa: E501
'bid_close_date': (str,), # noqa: E501
'bid_close_vol': (int,), # noqa: E501
'bid_exch': (str,), # noqa: E501
'ask': (float,), # noqa: E501
'ask_date': (str,), # noqa: E501
'ask_time': (int,), # noqa: E501
'ask_vol': (int,), # noqa: E501
'ask_close': (float,), # noqa: E501
'ask_close_date': (str,), # noqa: E501
'ask_close_vol': (int,), # noqa: E501
'ask_exch': (str,), # noqa: E501
'short_sale_indicator': (int,), # noqa: E501
'quote_condition': (str,), # noqa: E501
'last_price': (float,), # noqa: E501
'last_date': (str,), # noqa: E501
'last_time': (int,), # noqa: E501
'last_vol': (int,), # noqa: E501
'last_tick': (str,), # noqa: E501
'official_close': (float,), # noqa: E501
'official_close_time': (int,), # noqa: E501
'last_exch': (str,), # noqa: E501
'settlement': (float,), # noqa: E501
'traded_price': (float,), # noqa: E501
'traded_date': (str,), # noqa: E501
'traded_time': (int,), # noqa: E501
'traded_vol': (int,), # noqa: E501
'traded_condition': (str,), # noqa: E501
'net_change': (float,), # noqa: E501
'percent_change': (float,), # noqa: E501
'premkt_price': (float,), # noqa: E501
'premkt_time': (int,), # noqa: E501
'premkt_vol': (int,), # noqa: E501
'premkt_c_vol': (int,), # noqa: E501
'postmkt_price': (float,), # noqa: E501
'postmkt_time': (int,), # noqa: E501
'postmkt_vol': (int,), # noqa: E501
'postmkt_cvol': (int,), # noqa: E501
'offbook_cum_vol': (int,), # noqa: E501
'official_bid_close': (float,), # noqa: E501
'official_ask_close': (float,), # noqa: E501
'mid_date': (str,), # noqa: E501
'mid_time': (int,), # noqa: E501
'cvol': (int,), # noqa: E501
'turnover': (float,), # noqa: E501
'vwap': (float,), # noqa: E501
'trade_count': (int,), # noqa: E501
'block_trade_count': (int,), # noqa: E501
'block_cvol': (int,), # noqa: E501
'prev_close': (float,), # noqa: E501
'close_date': (str,), # noqa: E501
'prev_close_unadj': (float,), # noqa: E501
'prev_close_2': (float,), # noqa: E501
'prev_close_unadj_2': (float,), # noqa: E501
'lower_trading_band': (float,), # noqa: E501
'upper_trading_band': (float,), # noqa: E501
'buy_imbalance': (int,), # noqa: E501
'sell_imbalance': (int,), # noqa: E501
'nas_buy_imbalance': (int,), # noqa: E501
'nas_sell_imbalance': (int,), # noqa: E501
'open': (float,), # noqa: E501
'high': (float,), # noqa: E501
'low': (float,), # noqa: E501
'venue': (str,), # noqa: E501
'buy_id': (str,), # noqa: E501
'sell_id': (str,), # noqa: E501
'auto_trade_vwap': (float,), # noqa: E501
'auto_trade_cvol': (int,), # noqa: E501
'auto_trade_count': (int,), # noqa: E501
'ex_date_status': (str,), # noqa: E501
'premkt_net_change': (float,), # noqa: E501
'premkt_percent_change': (float,), # noqa: E501
'closing_vol': (int,), # noqa: E501
'primary_market': (str,), # noqa: E501
'iso_country_exchange': (str,), # noqa: E501
'premkt_exch': (str,), # noqa: E501
'postmkt_exch': (str,), # noqa: E501
'fref_security_type': (str,), # noqa: E501
'security_sub_type': (str,), # noqa: E501
'postmkt_net_change': (float,), # noqa: E501
'postmkt_percent_change': (float,), # noqa: E501
'isin': (str,), # noqa: E501
'cusip': (str,), # noqa: E501
'sedol': (str,), # noqa: E501
'description': (str,), # noqa: E501
'shares_outstanding': (float,), # noqa: E501
'price_currency': (str,), # noqa: E501
'security_status': (str,), # noqa: E501
'gmt_offset': (int,), # noqa: E501
'market_segment': (str,), # noqa: E501
'market_sector': (str,), # noqa: E501
'period': (str,), # noqa: E501
'country_code': (str,), # noqa: E501
'financial_status': (int,), # noqa: E501
'factset_industry': (str,), # noqa: E501
'factset_sector': (str,), # noqa: E501
'halt_info': (int,), # noqa: E501
'homepage': (str,), # noqa: E501
'halt_description': (str,), # noqa: E501
'feed_currency': (str,), # noqa: E501
'country_name': (str,), # noqa: E501
'order_lot_size': (int,), # noqa: E501
'trade_lot_size': (int,), # noqa: E501
'tick_size': (float,), # noqa: E501
'tick_group': (str,), # noqa: E501
'tick_pilot_eff_date': (str,), # noqa: E501
'avg_30_day_vol': (float,), # noqa: E501
'avg_5_day_vol': (float,), # noqa: E501
'high_52_week': (float,), # noqa: E501
'low_52_week': (float,), # noqa: E501
'high_52_week_date': (str,), # noqa: E501
'low_52_week_date': (str,), # noqa: E501
'trade_condition': (str,), # noqa: E501
'total_return_3_m': (float,), # noqa: E501
'total_return_52_w': (float,), # noqa: E501
}
@cached_property
def discriminator():
return None
attribute_map = {
'exchange': 'Exchange', # noqa: E501
'product': 'product', # noqa: E501
'bid': 'Bid', # noqa: E501
'bid_date': 'Bid_Date', # noqa: E501
'bid_time': 'Bid_Time', # noqa: E501
'bid_vol': 'Bid_Vol', # noqa: E501
'bid_tick': 'Bid_Tick', # noqa: E501
'bid_close': 'Bid_Close', # noqa: E501
'bid_close_date': 'Bid_Close_Date', # noqa: E501
'bid_close_vol': 'Bid_Close_Vol', # noqa: E501
'bid_exch': 'Bid_Exch', # noqa: E501
'ask': 'Ask', # noqa: E501
'ask_date': 'Ask_Date', # noqa: E501
'ask_time': 'Ask_Time', # noqa: E501
'ask_vol': 'Ask_Vol', # noqa: E501
'ask_close': 'Ask_Close', # noqa: E501
'ask_close_date': 'Ask_Close_Date', # noqa: E501
'ask_close_vol': 'Ask_Close_Vol', # noqa: E501
'ask_exch': 'Ask_Exch', # noqa: E501
'short_sale_indicator': 'Short_Sale_Indicator', # noqa: E501
'quote_condition': 'Quote_Condition', # noqa: E501
'last_price': 'Last_Price', # noqa: E501
'last_date': 'Last_Date', # noqa: E501
'last_time': 'Last_Time', # noqa: E501
'last_vol': 'Last_Vol', # noqa: E501
'last_tick': 'Last_Tick', # noqa: E501
'official_close': 'Official_Close', # noqa: E501
'official_close_time': 'Official_Close_Time', # noqa: E501
'last_exch': 'Last_Exch', # noqa: E501
'settlement': 'Settlement', # noqa: E501
'traded_price': 'Traded_Price', # noqa: E501
'traded_date': 'Traded_Date', # noqa: E501
'traded_time': 'Traded_Time', # noqa: E501
'traded_vol': 'Traded_Vol', # noqa: E501
'traded_condition': 'Traded_Condition', # noqa: E501
'net_change': 'Net_Change', # noqa: E501
'percent_change': 'Percent_Change', # noqa: E501
'premkt_price': 'Premkt_Price', # noqa: E501
'premkt_time': 'Premkt_Time', # noqa: E501
'premkt_vol': 'Premkt_Vol', # noqa: E501
'premkt_c_vol': 'Premkt_CVol', # noqa: E501
'postmkt_price': 'Postmkt_Price', # noqa: E501
'postmkt_time': 'Postmkt_Time', # noqa: E501
'postmkt_vol': 'Postmkt_Vol', # noqa: E501
'postmkt_cvol': 'Postmkt_Cvol', # noqa: E501
'offbook_cum_vol': 'Offbook_Cum_Vol', # noqa: E501
'official_bid_close': 'Official_Bid_Close', # noqa: E501
'official_ask_close': 'Official_Ask_Close', # noqa: E501
'mid_date': 'Mid_Date', # noqa: E501
'mid_time': 'Mid_Time', # noqa: E501
'cvol': 'Cvol', # noqa: E501
'turnover': 'Turnover', # noqa: E501
'vwap': 'Vwap', # noqa: E501
'trade_count': 'Trade_Count', # noqa: E501
'block_trade_count': 'Block_Trade_Count', # noqa: E501
'block_cvol': 'Block_Cvol', # noqa: E501
'prev_close': 'Prev_Close', # noqa: E501
'close_date': 'Close_Date', # noqa: E501
'prev_close_unadj': 'Prev_Close_Unadj', # noqa: E501
'prev_close_2': 'Prev_Close_2', # noqa: E501
'prev_close_unadj_2': 'Prev_Close_Unadj_2', # noqa: E501
'lower_trading_band': 'Lower_Trading_Band', # noqa: E501
'upper_trading_band': 'Upper_Trading_Band', # noqa: E501
'buy_imbalance': 'Buy_Imbalance', # noqa: E501
'sell_imbalance': 'Sell_Imbalance', # noqa: E501
'nas_buy_imbalance': 'Nas_Buy_Imbalance', # noqa: E501
'nas_sell_imbalance': 'Nas_Sell_Imbalance', # noqa: E501
'open': 'Open', # noqa: E501
        'high': 'High',  # noqa: E501
        'low': 'Low',  # noqa: E501
        'venue': 'Venue',  # noqa: E501
        'buy_id': 'Buy_Id',  # noqa: E501
        'sell_id': 'Sell_Id',  # noqa: E501
        'auto_trade_vwap': 'Auto_Trade_Vwap',  # noqa: E501
        'auto_trade_cvol': 'Auto_Trade_Cvol',  # noqa: E501
        'auto_trade_count': 'Auto_Trade_Count',  # noqa: E501
        'ex_date_status': 'Ex_Date_Status',  # noqa: E501
        'premkt_net_change': 'Premkt_Net_Change',  # noqa: E501
        'premkt_percent_change': 'Premkt_Percent_Change',  # noqa: E501
        'closing_vol': 'Closing_Vol',  # noqa: E501
        'primary_market': 'Primary_Market',  # noqa: E501
        'iso_country_exchange': 'Iso_Country_Exchange',  # noqa: E501
        'premkt_exch': 'Premkt_Exch',  # noqa: E501
        'postmkt_exch': 'Postmkt_Exch',  # noqa: E501
        'fref_security_type': 'Fref_Security_type',  # noqa: E501
        'security_sub_type': 'Security_Sub_type',  # noqa: E501
        'postmkt_net_change': 'Postmkt_Net_Change',  # noqa: E501
        'postmkt_percent_change': 'Postmkt_Percent_Change',  # noqa: E501
        'isin': 'Isin',  # noqa: E501
        'cusip': 'Cusip',  # noqa: E501
        'sedol': 'Sedol',  # noqa: E501
        'description': 'description',  # noqa: E501
        'shares_outstanding': 'Shares_Outstanding',  # noqa: E501
        'price_currency': 'Price_Currency',  # noqa: E501
        'security_status': 'Security_Status',  # noqa: E501
        'gmt_offset': 'Gmt_Offset',  # noqa: E501
        'market_segment': 'Market_Segment',  # noqa: E501
        'market_sector': 'Market_Sector',  # noqa: E501
        'period': 'Period',  # noqa: E501
        'country_code': 'Country_Code',  # noqa: E501
        'financial_status': 'Financial_Status',  # noqa: E501
        'factset_industry': 'Factset_Industry',  # noqa: E501
        'factset_sector': 'Factset_Sector',  # noqa: E501
        'halt_info': 'Halt_Info',  # noqa: E501
        'homepage': 'Homepage',  # noqa: E501
        'halt_description': 'Halt_description',  # noqa: E501
        'feed_currency': 'Feed_Currency',  # noqa: E501
        'country_name': 'Country_Name',  # noqa: E501
        'order_lot_size': 'Order_Lot_Size',  # noqa: E501
        'trade_lot_size': 'Trade_Lot_Size',  # noqa: E501
        'tick_size': 'Tick_Size',  # noqa: E501
        'tick_group': 'Tick_Group',  # noqa: E501
        'tick_pilot_eff_date': 'Tick_Pilot_Eff_Date',  # noqa: E501
        'avg_30_day_vol': 'Avg_30Day_Vol',  # noqa: E501
        'avg_5_day_vol': 'Avg_5Day_Vol',  # noqa: E501
        'high_52_week': 'High_52Week',  # noqa: E501
        'low_52_week': 'Low_52Week',  # noqa: E501
        'high_52_week_date': 'High_52Week_Date',  # noqa: E501
        'low_52_week_date': 'Low_52Week_Date',  # noqa: E501
        'trade_condition': 'Trade_Condition',  # noqa: E501
        'total_return_3_m': 'Total_Return_3M',  # noqa: E501
        'total_return_52_w': 'Total_Return_52W',  # noqa: E501
    }

    read_only_vars = {
    }

    _composed_schemas = {}

    @classmethod
    @convert_js_args_to_python_args
    def _from_openapi_data(cls, *args, **kwargs):  # noqa: E501
        """Fields - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
composed schema that is
is traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
exchange (str): Field ID # 20. Exchange ISO-Code. Enumeration in Data Service Manual.. [optional] # noqa: E501
product (str): Field ID # 4. Product identifier. Enumeration in Data Service Manual.. [optional] # noqa: E501
bid (float): Field ID # 509. Current bid price. [optional] # noqa: E501
bid_date (str): Field ID # 386. Current bid date. [optional] # noqa: E501
bid_time (int): Field ID # 385. Current bid time. [optional] # noqa: E501
bid_vol (int): Field ID # 505. Current bid size. [optional] # noqa: E501
bid_tick (str): Field ID # 518. Current bid tick direction. Enumeration in Data Service Manual.. [optional] # noqa: E501
bid_close (float): Field ID # 648. Official Closing Bid. [optional] # noqa: E501
bid_close_date (str): Field ID # 1062. Official Closing Bid Date. [optional] # noqa: E501
bid_close_vol (int): Field ID # 296. Official Closing Bid Volume. [optional] # noqa: E501
bid_exch (str): Field ID # 506. Exchange of the current bid price. Enumeration in Data Service Manual.. [optional] # noqa: E501
ask (float): Field ID # 609. Current ask price. [optional] # noqa: E501
ask_date (str): Field ID # 388. Current ask date. [optional] # noqa: E501
ask_time (int): Field ID # 387. Current ask time. [optional] # noqa: E501
ask_vol (int): Field ID # 605. Current ask size. [optional] # noqa: E501
ask_close (float): Field ID # 649. Official Closing ask. [optional] # noqa: E501
ask_close_date (str): Field ID # 1064. Official Closing ask Date. [optional] # noqa: E501
ask_close_vol (int): Field ID # 297. Official Closing ask Volume. [optional] # noqa: E501
ask_exch (str): Field ID # 606. Exchange of the current ask price. Enumeration in Data Service Manual.. [optional] # noqa: E501
short_sale_indicator (int): Field ID # 277. Flag to indicate if a security is restricted from being sold short. [optional] # noqa: E501
quote_condition (str): Field ID # 38. Current Quote Condition. Enumeration in Data Service Manual.. [optional] # noqa: E501
last_price (float): Field ID # 50. Official last trade price. [optional] # noqa: E501
last_date (str): Field ID # 384. Last Date. [optional] # noqa: E501
last_time (int): Field ID # 383. Official last traded time. [optional] # noqa: E501
last_vol (int): Field ID # 31. Official last traded volume. [optional] # noqa: E501
last_tick (str): Field ID # 25. Official last tick. Enumeration in Data Service Manual.. [optional] # noqa: E501
official_close (float): Field ID # 526. Official Close/Close Range 1 Price. [optional] # noqa: E501
official_close_time (int): Field ID # 1065. Official Close/Close Range 1 Time. [optional] # noqa: E501
last_exch (str): Field ID # 33. Official last traded exchange. Enumeration in Data Service Manual.. [optional] # noqa: E501
settlement (float): Field ID # 815. Settle Price. [optional] # noqa: E501
traded_price (float): Field ID # 912. Last traded Price. [optional] # noqa: E501
traded_date (str): Field ID # 868. Last traded Date. [optional] # noqa: E501
traded_time (int): Field ID # 916. Last traded Time. [optional] # noqa: E501
traded_vol (int): Field ID # 918. Last traded Volume. [optional] # noqa: E501
traded_condition (str): Field ID # 1098. Last traded trade condition. [optional] # noqa: E501
net_change (float): Field ID # 662. Official last change. [optional] # noqa: E501
percent_change (float): Field ID # 816. Official last percentage change. [optional] # noqa: E501
premkt_price (float): Field ID # 1019. Unofficial last premarket trade price. [optional] # noqa: E501
premkt_time (int): Field ID # 1075. Unofficial last premarket traded time. [optional] # noqa: E501
premkt_vol (int): Field ID # 1832. Unofficial last premarket traded volume. [optional] # noqa: E501
premkt_c_vol (int): Field ID # 1836. Unofficial last premarket cumulative volume. [optional] # noqa: E501
postmkt_price (float): Field ID # 2029. Unofficial last post market trade price. [optional] # noqa: E501
postmkt_time (int): Field ID # 1076. Unofficial last post market traded time. [optional] # noqa: E501
postmkt_vol (int): Field ID # 1860. Unofficial last post market traded volume. [optional] # noqa: E501
postmkt_cvol (int): Field ID # 1864. Unofficial last post market cumulative volume. [optional] # noqa: E501
offbook_cum_vol (int): Field ID # 528. Off Book Cumulative Volume. [optional] # noqa: E501
official_bid_close (float): Field ID # 448. The bid close price of today. [optional] # noqa: E501
official_ask_close (float): Field ID # 476. The ask close price of today. [optional] # noqa: E501
mid_date (str): Field ID # 136. Current mid date. [optional] # noqa: E501
mid_time (int): Field ID # 135. Current mid price time. [optional] # noqa: E501
cvol (int): Field ID # 132. Cumulative volume. [optional] # noqa: E501
turnover (float): Field ID # 341. Turnover. [optional] # noqa: E501
vwap (float): Field ID # 780. Volume Weighted Average Price. [optional] # noqa: E501
trade_count (int): Field ID # 267. Cumulative trade count. [optional] # noqa: E501
block_trade_count (int): Field ID # 269. Cumulative block count. [optional] # noqa: E501
block_cvol (int): Field ID # 271. Cumulative block volume. [optional] # noqa: E501
prev_close (float): Field ID # 208. Previous trading days Close. [optional] # noqa: E501
close_date (str): Field ID # 1051. Previous trading days Closing Date. [optional] # noqa: E501
prev_close_unadj (float): Field ID # 892. Unadjusted Previous trading days Close. [optional] # noqa: E501
prev_close_2 (float): Field ID # 1172. Previous trading days Close late rollover[1]. [optional] # noqa: E501
prev_close_unadj_2 (float): Field ID # 1176. Unadjusted Previous trading days Close late rollover. [optional] # noqa: E501
lower_trading_band (float): Field ID # 1093. Lower trading band. [optional] # noqa: E501
upper_trading_band (float): Field ID # 1087. Upper trading band. [optional] # noqa: E501
buy_imbalance (int): Field ID # 495. NYSE buy imbalance. [optional] # noqa: E501
sell_imbalance (int): Field ID # 496. NYSE sell imbalance. [optional] # noqa: E501
nas_buy_imbalance (int): Field ID # 948. NAS buy imbalance. [optional] # noqa: E501
nas_sell_imbalance (int): Field ID # 949. NAS sell imbalance. [optional] # noqa: E501
open (float): Field ID # 158. The Open Range 1 or Open Price. [optional] # noqa: E501
high (float): Field ID # 107. Current high for the day. [optional] # noqa: E501
low (float): Field ID # 307. Current low for the day. [optional] # noqa: E501
venue (str): Field ID # 530. Venue. [optional] # noqa: E501
buy_id (str): Field ID # 1820. Buy Id. [optional] # noqa: E501
sell_id (str): Field ID # 1824. Sell Id. [optional] # noqa: E501
auto_trade_vwap (float): Field ID # 637. VWAP including only order book (automatic) trades. [optional] # noqa: E501
auto_trade_cvol (int): Field ID # 635. Cumulative Volume calculated on all automated trading volumes for order-based segments. [optional] # noqa: E501
auto_trade_count (int): Field ID # 636. Trade Quantity including only order book (automatic) trades. [optional] # noqa: E501
ex_date_status (str): Field ID # 531. Ex-Date Status. [optional] # noqa: E501
premkt_net_change (float): Field ID # 896. Net change in pre-market session(US stocks only). [optional] # noqa: E501
premkt_percent_change (float): Field ID # 897. Percent change in pre-market session(US stocks only). [optional] # noqa: E501
closing_vol (int): Field ID # 1345. Volume of the closing trade. [optional] # noqa: E501
primary_market (str): Field ID # 1517. FactSet Exchange Code of primary market for instrument. Determined by highest trading volume over a 3-day calendar period. [optional] # noqa: E501
iso_country_exchange (str): Field ID # 1621. Three Letter Country Code from ISO-3166. [optional] # noqa: E501
premkt_exch (str): Field ID # 1743. Premarket Exchange. Enumeration in Data Service Manual. . [optional] # noqa: E501
postmkt_exch (str): Field ID # 1744. Post Market Exchange. Enumeration in Data Service Manual.. [optional] # noqa: E501
fref_security_type (str): Field ID # 1751. The Security type returned by FREF_SECURITY_type. [optional] # noqa: E501
security_sub_type (str): Field ID # 1762. Sub type of the security populated for funds right now. [optional] # noqa: E501
postmkt_net_change (float): Field ID # 1881. Post Market Net Change. [optional] # noqa: E501
postmkt_percent_change (float): Field ID # 1882. Post Market Percent Change. . [optional] # noqa: E501
isin (str): Field ID # 12. ISIN. [optional] # noqa: E501
cusip (str): Field ID # 14. CUSIP. [optional] # noqa: E501
sedol (str): Field ID # 15. SEDOL. [optional] # noqa: E501
description (str): Field ID # 8. Security Description. [optional] # noqa: E501
shares_outstanding (float): Field ID # 29. Total number of shares outstanding. [optional] # noqa: E501
price_currency (str): Field ID # 62. Price currency code. [optional] # noqa: E501
security_status (str): Field ID # 2800. Security Status or Halt Indicator. Enumeration in Data manual. [optional] # noqa: E501
gmt_offset (int): Field ID # 389. GMT Offset in Minutes. [optional] # noqa: E501
market_segment (str): Field ID # 650. Market segment. [optional] # noqa: E501
market_sector (str): Field ID # 651. Market sector. [optional] # noqa: E501
period (str): Field ID # 633. Period. [optional] # noqa: E501
country_code (str): Field ID # 652. ISO Country code. [optional] # noqa: E501
financial_status (int): Field ID # 1896. Financial Status Enumeration Table. [optional] # noqa: E501
factset_industry (str): Field ID # 722. FactSet Industry Classification. [optional] # noqa: E501
factset_sector (str): Field ID # 723. FactSet Sector Classification. [optional] # noqa: E501
halt_info (int): Field ID # 1414. Halt Status. [optional] # noqa: E501
homepage (str): Field ID # 724. Company Homepage. [optional] # noqa: E501
halt_description (str): Field ID # 1184. Halt description. [optional] # noqa: E501
feed_currency (str): Field ID # 1182. Currency the Exchange sends the prices to FactSet in. [optional] # noqa: E501
country_name (str): Field ID # 1190. Name of Country. [optional] # noqa: E501
order_lot_size (int): Field ID # 427. Number of securities in a lot. [optional] # noqa: E501
trade_lot_size (int): Field ID # 1335. The minimum number of lots required to trade. [optional] # noqa: E501
tick_size (float): Field ID # 1499. Tick Size. [optional] # noqa: E501
tick_group (str): Field ID # 1507. Tick Group. [optional] # noqa: E501
tick_pilot_eff_date (str): Field ID # 1508. Tick Pilot effective day. [optional] # noqa: E501
avg_30_day_vol (float): Field ID # 709. Average cumulative volume for last 30 days. [optional] # noqa: E501
avg_5_day_vol (float): Field ID # 719. Average cumulative volume over last 5 trading days. [optional] # noqa: E501
high_52_week (float): Field ID # 767. 52 Week High Price. [optional] # noqa: E501
low_52_week (float): Field ID # 768. 52 Week Low Price. [optional] # noqa: E501
high_52_week_date (str): Field ID # 1220. 52 Week High Price Date. [optional] # noqa: E501
low_52_week_date (str): Field ID # 1295. 52 Week Low Price Date. [optional] # noqa: E501
trade_condition (str): Field ID # 174. Trade Condition. [optional] # noqa: E501
total_return_3_m (float): Field ID # 746. 3 Month return for US mutual funds. [optional] # noqa: E501
total_return_52_w (float): Field ID # 747. 52-Week Total Return for US mutual funds. [optional] # noqa: E501
        """
        _check_type = kwargs.pop('_check_type', True)
        _spec_property_naming = kwargs.pop('_spec_property_naming', False)
        _path_to_item = kwargs.pop('_path_to_item', ())
        _configuration = kwargs.pop('_configuration', None)
        _visited_composed_classes = kwargs.pop('_visited_composed_classes', ())

        self = super(OpenApiModel, cls).__new__(cls)

        if args:
            raise ApiTypeError(
                "Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
                    args,
                    self.__class__.__name__,
                ),
                path_to_item=_path_to_item,
                valid_classes=(self.__class__,),
            )

        self._data_store = {}
        self._check_type = _check_type
        self._spec_property_naming = _spec_property_naming
        self._path_to_item = _path_to_item
        self._configuration = _configuration
        self._visited_composed_classes = _visited_composed_classes + (self.__class__,)

        for var_name, var_value in kwargs.items():
            if var_name not in self.attribute_map and \
                    self._configuration is not None and \
                    self._configuration.discard_unknown_keys and \
                    self.additional_properties_type is None:
                # discard variable.
                continue
            setattr(self, var_name, var_value)
        return self

    required_properties = set([
        '_data_store',
        '_check_type',
        '_spec_property_naming',
        '_path_to_item',
        '_configuration',
        '_visited_composed_classes',
    ])

    @convert_js_args_to_python_args
    def __init__(self, *args, **kwargs):  # noqa: E501
        """Fields - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
composed schema that is
is traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
exchange (str): Field ID # 20. Exchange ISO-Code. Enumeration in Data Service Manual.. [optional] # noqa: E501
product (str): Field ID # 4. Product identifier. Enumeration in Data Service Manual.. [optional] # noqa: E501
bid (float): Field ID # 509. Current bid price. [optional] # noqa: E501
bid_date (str): Field ID # 386. Current bid date. [optional] # noqa: E501
bid_time (int): Field ID # 385. Current bid time. [optional] # noqa: E501
bid_vol (int): Field ID # 505. Current bid size. [optional] # noqa: E501
bid_tick (str): Field ID # 518. Current bid tick direction. Enumeration in Data Service Manual.. [optional] # noqa: E501
bid_close (float): Field ID # 648. Official Closing Bid. [optional] # noqa: E501
bid_close_date (str): Field ID # 1062. Official Closing Bid Date. [optional] # noqa: E501
bid_close_vol (int): Field ID # 296. Official Closing Bid Volume. [optional] # noqa: E501
bid_exch (str): Field ID # 506. Exchange of the current bid price. Enumeration in Data Service Manual.. [optional] # noqa: E501
ask (float): Field ID # 609. Current ask price. [optional] # noqa: E501
ask_date (str): Field ID # 388. Current ask date. [optional] # noqa: E501
ask_time (int): Field ID # 387. Current ask time. [optional] # noqa: E501
ask_vol (int): Field ID # 605. Current ask size. [optional] # noqa: E501
ask_close (float): Field ID # 649. Official Closing ask. [optional] # noqa: E501
ask_close_date (str): Field ID # 1064. Official Closing ask Date. [optional] # noqa: E501
ask_close_vol (int): Field ID # 297. Official Closing ask Volume. [optional] # noqa: E501
ask_exch (str): Field ID # 606. Exchange of the current ask price. Enumeration in Data Service Manual.. [optional] # noqa: E501
short_sale_indicator (int): Field ID # 277. Flag to indicate if a security is restricted from being sold short. [optional] # noqa: E501
quote_condition (str): Field ID # 38. Current Quote Condition. Enumeration in Data Service Manual.. [optional] # noqa: E501
last_price (float): Field ID # 50. Official last trade price. [optional] # noqa: E501
last_date (str): Field ID # 384. Last Date. [optional] # noqa: E501
last_time (int): Field ID # 383. Official last traded time. [optional] # noqa: E501
last_vol (int): Field ID # 31. Official last traded volume. [optional] # noqa: E501
last_tick (str): Field ID # 25. Official last tick. Enumeration in Data Service Manual.. [optional] # noqa: E501
official_close (float): Field ID # 526. Official Close/Close Range 1 Price. [optional] # noqa: E501
official_close_time (int): Field ID # 1065. Official Close/Close Range 1 Time. [optional] # noqa: E501
last_exch (str): Field ID # 33. Official last traded exchange. Enumeration in Data Service Manual.. [optional] # noqa: E501
settlement (float): Field ID # 815. Settle Price. [optional] # noqa: E501
traded_price (float): Field ID # 912. Last traded Price. [optional] # noqa: E501
traded_date (str): Field ID # 868. Last traded Date. [optional] # noqa: E501
traded_time (int): Field ID # 916. Last traded Time. [optional] # noqa: E501
traded_vol (int): Field ID # 918. Last traded Volume. [optional] # noqa: E501
traded_condition (str): Field ID # 1098. Last traded trade condition. [optional] # noqa: E501
net_change (float): Field ID # 662. Official last change. [optional] # noqa: E501
percent_change (float): Field ID # 816. Official last percentage change. [optional] # noqa: E501
premkt_price (float): Field ID # 1019. Unofficial last premarket trade price. [optional] # noqa: E501
premkt_time (int): Field ID # 1075. Unofficial last premarket traded time. [optional] # noqa: E501
premkt_vol (int): Field ID # 1832. Unofficial last premarket traded volume. [optional] # noqa: E501
premkt_c_vol (int): Field ID # 1836. Unofficial last premarket cumulative volume. [optional] # noqa: E501
postmkt_price (float): Field ID # 2029. Unofficial last post market trade price. [optional] # noqa: E501
postmkt_time (int): Field ID # 1076. Unofficial last post market traded time. [optional] # noqa: E501
postmkt_vol (int): Field ID # 1860. Unofficial last post market traded volume. [optional] # noqa: E501
postmkt_cvol (int): Field ID # 1864. Unofficial last post market cumulative volume. [optional] # noqa: E501
offbook_cum_vol (int): Field ID # 528. Off Book Cumulative Volume. [optional] # noqa: E501
official_bid_close (float): Field ID # 448. The bid close price of today. [optional] # noqa: E501
official_ask_close (float): Field ID # 476. The ask close price of today. [optional] # noqa: E501
mid_date (str): Field ID # 136. Current mid date. [optional] # noqa: E501
mid_time (int): Field ID # 135. Current mid price time. [optional] # noqa: E501
cvol (int): Field ID # 132. Cumulative volume. [optional] # noqa: E501
turnover (float): Field ID # 341. Turnover. [optional] # noqa: E501
vwap (float): Field ID # 780. Volume Weighted Average Price. [optional] # noqa: E501
trade_count (int): Field ID # 267. Cumulative trade count. [optional] # noqa: E501
block_trade_count (int): Field ID # 269. Cumulative block count. [optional] # noqa: E501
block_cvol (int): Field ID # 271. Cumulative block volume. [optional] # noqa: E501
prev_close (float): Field ID # 208. Previous trading days Close. [optional] # noqa: E501
close_date (str): Field ID # 1051. Previous trading days Closing Date. [optional] # noqa: E501
prev_close_unadj (float): Field ID # 892. Unadjusted Previous trading days Close. [optional] # noqa: E501
prev_close_2 (float): Field ID # 1172. Previous trading days Close late rollover[1]. [optional] # noqa: E501
prev_close_unadj_2 (float): Field ID # 1176. Unadjusted Previous trading days Close late rollover. [optional] # noqa: E501
lower_trading_band (float): Field ID # 1093. Lower trading band. [optional] # noqa: E501
upper_trading_band (float): Field ID # 1087. Upper trading band. [optional] # noqa: E501
buy_imbalance (int): Field ID # 495. NYSE buy imbalance. [optional] # noqa: E501
sell_imbalance (int): Field ID # 496. NYSE sell imbalance. [optional] # noqa: E501
nas_buy_imbalance (int): Field ID # 948. NAS buy imbalance. [optional] # noqa: E501
nas_sell_imbalance (int): Field ID # 949. NAS sell imbalance. [optional] # noqa: E501
open (float): Field ID # 158. The Open Range 1 or Open Price. [optional] # noqa: E501
high (float): Field ID # 107. Current high for the day. [optional] # noqa: E501
low (float): Field ID # 307. Current low for the day. [optional] # noqa: E501
venue (str): Field ID # 530. Venue. [optional] # noqa: E501
buy_id (str): Field ID # 1820. Buy Id. [optional] # noqa: E501
sell_id (str): Field ID # 1824. Sell Id. [optional] # noqa: E501
auto_trade_vwap (float): Field ID # 637. VWAP including only order book (automatic) trades. [optional] # noqa: E501
auto_trade_cvol (int): Field ID # 635. Cumulative Volume calculated on all automated trading volumes for order-based segments. [optional] # noqa: E501
auto_trade_count (int): Field ID # 636. Trade Quantity including only order book (automatic) trades. [optional] # noqa: E501
ex_date_status (str): Field ID # 531. Ex-Date Status. [optional] # noqa: E501
premkt_net_change (float): Field ID # 896. Net change in pre-market session(US stocks only). [optional] # noqa: E501
premkt_percent_change (float): Field ID # 897. Percent change in pre-market session(US stocks only). [optional] # noqa: E501
closing_vol (int): Field ID # 1345. Volume of the closing trade. [optional] # noqa: E501
primary_market (str): Field ID # 1517. FactSet Exchange Code of primary market for instrument. Determined by highest trading volume over a 3-day calendar period. [optional] # noqa: E501
iso_country_exchange (str): Field ID # 1621. Three Letter Country Code from ISO-3166. [optional] # noqa: E501
premkt_exch (str): Field ID # 1743. Premarket Exchange. Enumeration in Data Service Manual. . [optional] # noqa: E501
postmkt_exch (str): Field ID # 1744. Post Market Exchange. Enumeration in Data Service Manual.. [optional] # noqa: E501
fref_security_type (str): Field ID # 1751. The Security type returned by FREF_SECURITY_type. [optional] # noqa: E501
security_sub_type (str): Field ID # 1762. Sub type of the security populated for funds right now. [optional] # noqa: E501
postmkt_net_change (float): Field ID # 1881. Post Market Net Change. [optional] # noqa: E501
postmkt_percent_change (float): Field ID # 1882. Post Market Percent Change. . [optional] # noqa: E501
isin (str): Field ID # 12. ISIN. [optional] # noqa: E501
cusip (str): Field ID # 14. CUSIP. [optional] # noqa: E501
sedol (str): Field ID # 15. SEDOL. [optional] # noqa: E501
description (str): Field ID # 8. Security Description. [optional] # noqa: E501
shares_outstanding (float): Field ID # 29. Total number of shares outstanding. [optional] # noqa: E501
price_currency (str): Field ID # 62. Price currency code. [optional] # noqa: E501
security_status (str): Field ID # 2800. Security Status or Halt Indicator. Enumeration in Data manual. [optional] # noqa: E501
gmt_offset (int): Field ID # 389. GMT Offset in Minutes. [optional] # noqa: E501
market_segment (str): Field ID # 650. Market segment. [optional] # noqa: E501
market_sector (str): Field ID # 651. Market sector. [optional] # noqa: E501
period (str): Field ID # 633. Period. [optional] # noqa: E501
country_code (str): Field ID # 652. ISO Country code. [optional] # noqa: E501
financial_status (int): Field ID # 1896. Financial Status Enumeration Table. [optional] # noqa: E501
factset_industry (str): Field ID # 722. FactSet Industry Classification. [optional] # noqa: E501
factset_sector (str): Field ID # 723. FactSet Sector Classification. [optional] # noqa: E501
halt_info (int): Field ID # 1414. Halt Status. [optional] # noqa: E501
homepage (str): Field ID # 724. Company Homepage. [optional] # noqa: E501
halt_description (str): Field ID # 1184. Halt description. [optional] # noqa: E501
feed_currency (str): Field ID # 1182. Currency the Exchange sends the prices to FactSet in. [optional] # noqa: E501
country_name (str): Field ID # 1190. Name of Country. [optional] # noqa: E501
order_lot_size (int): Field ID # 427. Number of securities in a lot. [optional] # noqa: E501
trade_lot_size (int): Field ID # 1335. The minimum number of lots required to trade. [optional] # noqa: E501
tick_size (float): Field ID # 1499. Tick Size. [optional] # noqa: E501
tick_group (str): Field ID # 1507. Tick Group. [optional] # noqa: E501
tick_pilot_eff_date (str): Field ID # 1508. Tick Pilot effective day. [optional] # noqa: E501
avg_30_day_vol (float): Field ID # 709. Average cumulative volume for last 30 days. [optional] # noqa: E501
avg_5_day_vol (float): Field ID # 719. Average cumulative volume over last 5 trading days. [optional] # noqa: E501
high_52_week (float): Field ID # 767. 52 Week High Price. [optional] # noqa: E501
low_52_week (float): Field ID # 768. 52 Week Low Price. [optional] # noqa: E501
high_52_week_date (str): Field ID # 1220. 52 Week High Price Date. [optional] # noqa: E501
low_52_week_date (str): Field ID # 1295. 52 Week Low Price Date. [optional] # noqa: E501
trade_condition (str): Field ID # 174. Trade Condition. [optional] # noqa: E501
total_return_3_m (float): Field ID # 746. 3 Month return for US mutual funds. [optional] # noqa: E501
total_return_52_w (float): Field ID # 747. 52-Week Total Return for US mutual funds. [optional] # noqa: E501
        """
        _check_type = kwargs.pop('_check_type', True)
        _spec_property_naming = kwargs.pop('_spec_property_naming', False)
        _path_to_item = kwargs.pop('_path_to_item', ())
        _configuration = kwargs.pop('_configuration', None)
        _visited_composed_classes = kwargs.pop('_visited_composed_classes', ())

        if args:
            raise ApiTypeError(
                "Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
                    args,
                    self.__class__.__name__,
                ),
                path_to_item=_path_to_item,
                valid_classes=(self.__class__,),
            )

        self._data_store = {}
        self._check_type = _check_type
        self._spec_property_naming = _spec_property_naming
        self._path_to_item = _path_to_item
        self._configuration = _configuration
        self._visited_composed_classes = _visited_composed_classes + (self.__class__,)

        for var_name, var_value in kwargs.items():
            if var_name not in self.attribute_map and \
                    self._configuration is not None and \
                    self._configuration.discard_unknown_keys and \
                    self.additional_properties_type is None:
                # discard variable.
                continue
            setattr(self, var_name, var_value)
            if var_name in self.read_only_vars:
                raise ApiAttributeError(f"`{var_name}` is a read-only attribute. Use `from_openapi_data` to instantiate "
                                        f"class with read only attributes.")
# p2016_05_28_python_path_find/child/test.py (zhyq0826/blog-code, MIT license)
import sys
print __file__
print sys.argv[0]
# test/python/test_fusedConv2D.py (ananyamukh6/ngraph-tf, Apache-2.0 license)
# ==============================================================================
# Copyright 2018 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""nGraph TensorFlow bridge fusedConv2D tests.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import pytest
import tensorflow as tf
from tensorflow.python.framework import constant_op
from tensorflow.python.ops import nn_ops
from tensorflow.python.ops import nn_impl
from tensorflow.python.ops import array_ops
from common import NgraphTest
from tensorflow.python.framework import dtypes
import numpy as np

class TestFusedConv2D(NgraphTest):
    INPUT_SIZES = [3, 1, 6, 2]
    FILTER_SIZES = [1, 1, 2, 2]
    BIAS_SIZES = [2]

    def test_fusedconv2d_bias(self):
        inp_values = np.random.rand(*self.INPUT_SIZES)
        filt_values = np.random.rand(*self.FILTER_SIZES)
        bias_values = np.random.rand(*self.BIAS_SIZES)

        def run_test(sess):
            inp = array_ops.placeholder(dtypes.float32)
            filt = array_ops.placeholder(dtypes.float32)
            bias = array_ops.placeholder(dtypes.float32)
            return sess.run(
                nn_ops.bias_add(
                    nn_ops.conv2d(
                        inp, filt, strides=[1, 1, 1, 1], padding="SAME"), bias),
                {
                    inp: inp_values,
                    filt: filt_values,
                    bias: bias_values,
                })

        assert np.allclose(
            self.without_ngraph(run_test), self.with_ngraph(run_test))

    def test_fusedconv2d_bias_relu(self):
        inp_values = np.random.rand(*self.INPUT_SIZES)
        filt_values = np.random.rand(*self.FILTER_SIZES)
        bias_values = np.random.rand(*self.BIAS_SIZES)

        def run_test(sess):
            inp = array_ops.placeholder(dtypes.float32)
            filt = array_ops.placeholder(dtypes.float32)
            bias = array_ops.placeholder(dtypes.float32)
            return sess.run(
                nn_ops.relu(
                    nn_ops.bias_add(
                        nn_ops.conv2d(
                            inp, filt, strides=[1, 1, 1, 1], padding="SAME"),
                        bias)), {
                            inp: inp_values,
                            filt: filt_values,
                            bias: bias_values,
                        })

        assert np.allclose(
            self.without_ngraph(run_test), self.with_ngraph(run_test))

    def test_fusedconv2d_batchnorm(self):
        inp_values = np.random.rand(*self.INPUT_SIZES)
        filt_values = np.random.rand(*self.FILTER_SIZES)
        scale_values = np.random.rand(*self.BIAS_SIZES)
        offset_values = np.random.rand(*self.BIAS_SIZES)
        mean_values = np.random.rand(*self.BIAS_SIZES)
        variance_values = np.random.rand(*self.BIAS_SIZES)

        def run_test(sess):
            inp = array_ops.placeholder(dtypes.float32)
            filt = array_ops.placeholder(dtypes.float32)
            scale = array_ops.placeholder(dtypes.float32)
            offset = array_ops.placeholder(dtypes.float32)
            mean = array_ops.placeholder(dtypes.float32)
            variance = array_ops.placeholder(dtypes.float32)
            bn, _, _ = nn_impl.fused_batch_norm(
                nn_ops.conv2d(inp, filt, strides=[1, 1, 1, 1], padding="SAME"),
                scale,
                offset,
                mean,
                variance,
                epsilon=0.02,
                is_training=False)
            return sess.run(
                bn, {
                    inp: inp_values,
                    filt: filt_values,
                    scale: scale_values,
offset: offset_values,
mean: mean_values,
variance: variance_values,
})
assert np.allclose(
self.without_ngraph(run_test),
self.with_ngraph(run_test),
rtol=0,
atol=5e-5)
def test_fusedconv2d_batchnorm_relu(self):
inp_values = np.random.rand(*self.INPUT_SIZES)
filt_values = np.random.rand(*self.FILTER_SIZES)
scale_values = np.random.rand(*self.BIAS_SIZES)
offset_values = np.random.rand(*self.BIAS_SIZES)
mean_values = np.random.rand(*self.BIAS_SIZES)
variance_values = np.random.rand(*self.BIAS_SIZES)
def run_test(sess):
inp = array_ops.placeholder(dtypes.float32)
filt = array_ops.placeholder(dtypes.float32)
scale = array_ops.placeholder(dtypes.float32)
offset = array_ops.placeholder(dtypes.float32)
mean = array_ops.placeholder(dtypes.float32)
variance = array_ops.placeholder(dtypes.float32)
bn, _, _ = nn_impl.fused_batch_norm(
nn_ops.conv2d(inp, filt, strides=[1, 1, 1, 1], padding="SAME"),
scale,
offset,
mean,
variance,
epsilon=0.02,
is_training=False)
return sess.run(
nn_ops.relu(bn), {
inp: inp_values,
filt: filt_values,
scale: scale_values,
offset: offset_values,
mean: mean_values,
variance: variance_values,
})
assert np.allclose(
self.without_ngraph(run_test), self.with_ngraph(run_test))
def test_fusedconv2d_squeeze_bias(self):
inp_values = np.random.rand(*self.INPUT_SIZES)
filt_values = np.random.rand(*self.FILTER_SIZES)
bias_values = np.random.rand(*self.BIAS_SIZES)
squeeze_dim = [1]
def run_test(sess):
inp = array_ops.placeholder(dtypes.float32)
filt = array_ops.placeholder(dtypes.float32)
bias = array_ops.placeholder(dtypes.float32)
return sess.run(
nn_ops.bias_add(
array_ops.squeeze(
nn_ops.conv2d(
inp, filt, strides=[1, 1, 1, 1], padding="SAME"),
squeeze_dim), bias), {
inp: inp_values,
filt: filt_values,
bias: bias_values,
})
assert np.allclose(
self.without_ngraph(run_test), self.with_ngraph(run_test))
| 38.456522 | 80 | 0.559638 | 788 | 7,076 | 4.808376 | 0.176396 | 0.048562 | 0.077593 | 0.099762 | 0.775139 | 0.74901 | 0.729216 | 0.729216 | 0.729216 | 0.729216 | 0 | 0.021358 | 0.331685 | 7,076 | 183 | 81 | 38.666667 | 0.779869 | 0.108395 | 0 | 0.765517 | 0 | 0 | 0.003181 | 0 | 0 | 0 | 0 | 0 | 0.034483 | 1 | 0.068966 | false | 0 | 0.082759 | 0 | 0.213793 | 0.006897 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
2f54ec17d1e602ef6a91cf3cc405a82f0d7a2802 | 256 | py | Python | src/mist/api/machines/__init__.py | SpiralUp/mist.api | a3b5233ab4aa3f6a0a2dea6333ff1e5a260af934 | [
"Apache-2.0"
] | 6 | 2017-08-24T00:34:30.000Z | 2022-01-16T21:29:22.000Z | src/mist/api/machines/__init__.py | SpiralUp/mist.api | a3b5233ab4aa3f6a0a2dea6333ff1e5a260af934 | [
"Apache-2.0"
] | 9 | 2021-03-31T18:50:47.000Z | 2022-01-09T23:20:02.000Z | src/mist/api/machines/__init__.py | SpiralUp/mist.api | a3b5233ab4aa3f6a0a2dea6333ff1e5a260af934 | [
"Apache-2.0"
] | 13 | 2017-09-21T18:17:02.000Z | 2022-02-21T04:29:25.000Z | from mist.api.clouds.models import Cloud # noqa: F401
from mist.api.clouds.models import CloudSize, CloudLocation # noqa: F401
from mist.api.images.models import CloudImage # noqa: F401
from mist.api.networks.models import Network, Subnet # noqa: F401
| 51.2 | 73 | 0.78125 | 38 | 256 | 5.263158 | 0.421053 | 0.16 | 0.22 | 0.24 | 0.52 | 0.29 | 0 | 0 | 0 | 0 | 0 | 0.054054 | 0.132813 | 256 | 4 | 74 | 64 | 0.846847 | 0.167969 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2f7fb28d60c72eed65df73e55347d382496b0686 | 14,656 | py | Python | dekigokoro/dekigokoro.py | broman/dekigokoro-py | d9d9d77a300567896dc33450a89b441225bec858 | [
"MIT"
] | 4 | 2019-06-16T17:24:48.000Z | 2019-12-15T00:55:00.000Z | dekigokoro/dekigokoro.py | broman/dekigokoro-py | d9d9d77a300567896dc33450a89b441225bec858 | [
"MIT"
] | 1 | 2019-08-17T01:42:46.000Z | 2019-08-17T01:42:46.000Z | dekigokoro/dekigokoro.py | broman/dekigokoro.py | d9d9d77a300567896dc33450a89b441225bec858 | [
"MIT"
] | null | null | null | import aiohttp
import utils.values as values
class Client:
r"""
The base client class for Dekigokoro.
token: `str`_
The token used to authorise with the API. This will be added as the
Authorization header for all requests.
You can obtain this token by creating a `Dekigokoro`_ account.
All class methods are `coroutines`_.
"""
def __init__(self, token: str):
self._base_url = "https://dekigokoro.io/api/v1"
self._headers = {"Authorization": token, "Content-Type": "application/json"}
# Balance
async def get_balance(self, player: str, *, subkey: str = "") -> int:
"""
        Gets a player's balance.
        player: `str`_
            The player whose balance to retrieve.
        subkey: Optional[`str`_]
            The `subkey`_ to use.
        Returns:
            `int`_ -- The current balance for the player.
        Raises:
            `aiohttp.ClientResponseError`_ -- An HTTP error occurred.
"""
async with aiohttp.ClientSession() as session:
r = await session.get(
f"{self._base_url}/currency/{player}/{subkey}",
headers=self._headers,
raise_for_status=True,
)
bal = await r.json()
return int(bal["balance"])
async def set_balance(self, player: str, balance: int, *, subkey: str = "") -> int:
"""
        Sets a player's balance.
player: `str`_
The player whose balance to set.
balance: `int`_
The player's new balance.
subkey: Optional[`str`_]
The `subkey`_ to use.
Returns:
`int`_ -- The new balance for the player.
Raises:
            `aiohttp.ClientResponseError`_ -- An HTTP error occurred.
`ValueError`_ -- The balance value provided was not an integer.
"""
await values.check_int(balance)
async with aiohttp.ClientSession() as session:
            await session.put(
                f"{self._base_url}/currency/{player}/{subkey}",
                headers=self._headers,
                json={"balance": str(balance)},
                raise_for_status=True,
            )
        # The PUT response is not parsed here; return the value that was just
        # stored so the documented return type holds.
        return int(balance)
async def add_balance(self, player: str, balance: int, *, subkey: str = "") -> int:
"""
Adds to a player's balance.
player: `str`_
The player whose balance to add to.
balance: `int`_
The amount to add to the player's balance.
subkey: Optional[`str`_]
The `subkey`_ to use.
Returns:
`int`_ -- The new balance for the player.
Raises:
            `aiohttp.ClientResponseError`_ -- An HTTP error occurred.
`ValueError`_ -- The balance value provided was not an integer.
"""
currentbal = await self.get_balance(player, subkey=subkey)
return await self.set_balance(player, currentbal + balance, subkey=subkey)
async def subtract_balance(
self, player: str, balance: int, *, subkey: str = ""
) -> int:
"""
Subtracts from a player's balance.
player: `str`_
The player whose balance to subtract from.
balance: `int`_
            The amount to subtract from the player's balance.
subkey: Optional[`str`_]
The `subkey`_ to use.
Returns:
`int`_ -- The new balance for the player.
Raises:
            `aiohttp.ClientResponseError`_ -- An HTTP error occurred.
`ValueError`_ -- The balance value provided was not an integer.
"""
currentbal = await self.get_balance(player, subkey=subkey)
return await self.set_balance(player, currentbal - balance, subkey=subkey)
# Levels
    async def get_levels(self, player: str, *, subkey: str = "") -> int:
"""
Gets a player's levels.
player: `str`_
The player whose levels to retrieve.
subkey: Optional[`str`_]
The `subkey`_ to use.
Returns:
            `int`_ -- The current level balance for the player.
        Raises:
            `aiohttp.ClientResponseError`_ -- An HTTP error occurred.
"""
async with aiohttp.ClientSession() as session:
r = await session.get(
f"{self._base_url}/levels/{player}/{subkey}",
headers=self._headers,
raise_for_status=True,
)
ret = await r.json()
return int(ret["exp"])
async def set_levels(self, player: str, exp: int, *, subkey: str = "") -> int:
"""
        Sets a player's levels.
player: `str`_
The player whose levels to set.
exp: `int`_
The amount of levels to set.
subkey: Optional[`str`_]
The `subkey`_ to use.
Returns:
`int`_ -- The new level balance for the player.
Raises:
            `aiohttp.ClientResponseError`_ -- An HTTP error occurred.
`ValueError`_ -- The experience value provided was not an integer.
"""
await values.check_int(exp)
async with aiohttp.ClientSession() as session:
r = await session.put(
f"{self._base_url}/levels/{player}/{subkey}",
headers=self._headers,
json={"exp": str(exp)},
raise_for_status=True,
)
            ret = await r.json()
            # Return the new level balance from the API response, mirroring
            # get_levels and matching the documented return type.
            return int(ret["exp"])
async def add_levels(self, player: str, exp: int, *, subkey: str = "") -> int:
"""
Adds to a player's level balance.
player: `str`_
The player whose level balance to add to.
exp: `int`_
The amount to add to the player's level balance.
subkey: Optional[`str`_]
The `subkey`_ to use.
Returns:
            `int`_ -- The new level balance for the player.
        Raises:
            `aiohttp.ClientResponseError`_ -- An HTTP error occurred.
`ValueError`_ -- The experience value provided was not an integer.
"""
        currentexp = await self.get_levels(player, subkey=subkey)
        return await self.set_levels(player, exp + currentexp, subkey=subkey)
async def subtract_levels(self, player: str, exp: int, *, subkey: str = "") -> int:
"""
Subtracts from a player's level balance.
player: `str`_
The player whose level balance to subtract from.
        exp: `int`_
            The amount to subtract from the player's level balance.
subkey: Optional[`str`_]
The `subkey`_ to use.
Returns:
`int`_ -- The new level balance for the player.
Raises:
            `aiohttp.ClientResponseError`_ -- An HTTP error occurred.
`ValueError`_ -- The experience value provided was not an integer.
"""
        currentexp = await self.get_levels(player, subkey=subkey)
        return await self.set_levels(player, currentexp - exp, subkey=subkey)
# Leaderboards
async def get_balance_leaderboards(
self, *, after: int = 0, limit: int = 100, subkey: str = ""
) -> list:
"""
Retrieve a list of the current balance leaderboards.
after: Optional[`int`_]
Position to get results after. This value must be positive. Defaults to 0.
limit: Optional[`int`_]
Maximum number of leaderboard entries to return. This value must be between 1 and 100. Defaults to 100.
subkey: Optional[`str`_]
The `subkey`_ to use.
Returns:
`list`_ [`dict`_] -- A list of leaderboard entries in descending order (highest balance first).
Raises:
            `aiohttp.ClientResponseError`_ -- An HTTP error occurred.
            `ValueError`_ -- One or more of the provided values is outside the allowed range.
Example response:
.. code-block:: python
[
{
"app_id": 1234567890123456,
"player_id": 1234,
"balance": 1000,
"rank": 1
},
{
"app_id": 1234567890123456,
"player_id": 513,
"balance": 854,
"rank": 2
},
# ...
]
"""
await values.check_vals(after, limit)
async with aiohttp.ClientSession() as session:
r = await session.get(
                f"{self._base_url}/currency/rankings?after={after}&limit={limit}",
headers=self._headers,
raise_for_status=True,
)
players = await r.json()
# Convert the stringly-typed integer values returned by the API to ints, since other methods convert as well.
for player in players:
player.update(
(k, int(v)) for k, v in player.items() if k != "player_id"
)
return players
async def get_levels_leaderboards(
self, *, after: int = 0, limit: int = 100, subkey: str = ""
) -> list:
"""
Retrieve a list of the current levels leaderboards.
after: Optional[`int`_]
Position to get results after. This value must be positive. Defaults to 0.
limit: Optional[`int`_]
Maximum number of leaderboard entries to return. This value must be between 1 and 100. Defaults to 100.
subkey: Optional[`str`_]
The `subkey`_ to use.
Returns:
`list`_ [`dict`_] -- A list of leaderboard entries in descending order (highest exp first).
Raises:
            `aiohttp.ClientResponseError`_ -- An HTTP error occurred.
            `ValueError`_ -- One or more of the provided values is outside the allowed range.
Example response:
.. code-block:: python
[
{
"app_id": 1234567890123456,
"player_id": "1234",
"exp": 1000,
"rank": 1
},
{
"app_id": 1234567890123456,
"player_id": "513",
"exp": 854,
"rank": 2
},
# ...
]
"""
await values.check_vals(after, limit)
async with aiohttp.ClientSession() as session:
r = await session.get(
                f"{self._base_url}/levels/rankings?after={after}&limit={limit}",
headers=self._headers,
raise_for_status=True,
)
players = await r.json()
# Convert the stringly-typed integer values returned by the API to ints, since other methods convert as well
for player in players:
player.update(
(k, int(v)) for k, v in player.items() if k != "player_id"
)
return players
async def get_userdata(self, player: str) -> dict:
"""
Gets a player's userdata.
player: `str`_
The player whose userdata to get.
Returns:
`dict`_ -- The player's userdata.
Raises:
            `aiohttp.ClientResponseError`_ -- An HTTP error occurred.
"""
async with aiohttp.ClientSession() as session:
r = await session.get(
f"{self._base_url}/userdata/{player}",
headers=self._headers,
raise_for_status=True,
)
data = await r.json()
return data["data"]
async def set_userdata(self, player: str, data: dict):
"""
Sets a player's userdata.
player: `str`_
The player whose userdata to set.
data: `dict`_
The userdata in the form of a dictionary. Nested data is supported.
Returns:
`dict`_ -- The player's new userdata.
Raises:
            `aiohttp.ClientResponseError`_ -- An HTTP error occurred.
`ValueError`_ -- The data provided was not a dictionary.
"""
await values.check_dict(data)
async with aiohttp.ClientSession() as session:
            await session.put(
                f"{self._base_url}/userdata/{player}",
                headers=self._headers,
                json=data,
                raise_for_status=True,
            )
        # Return the userdata that was just stored, matching the documented
        # return type.
        return data
async def set_relationship(self, player: str, target: str, relationship_type: str):
"""
Sets a player's relationship to another player. Relationships are stringly typed,
meaning any kind of relationship is possible.
player: `str`_
The player whose relationship to set.
target: `str`_
The target player.
        relationship_type: `str`_
            The relationship type to set.
Raises:
`ValueError`_ -- One or more of the values provided are None or empty.
"""
        if not relationship_type:
            raise ValueError("Value required.")
async with aiohttp.ClientSession() as session:
await session.put(
f"{self._base_url}/relationships/{player}/{target}",
headers=self._headers,
json={
"type": relationship_type,
},
raise_for_status=True,
)
async def get_relationship(self, player: str, target: str) -> str:
"""
Retrieves a player's relationship to another player.
player: `str`_
The player whose relationship to retrieve.
target: `str`_
The target player.
        Returns:
            `str`_ -- The relationship type.
"""
async with aiohttp.ClientSession() as session:
r = await session.get(
f"{self._base_url}/relationships/{player}/{target}",
headers=self._headers,
raise_for_status=True,
)
            data = await r.json()
            return data["type"]
async def delete_relationship(self, player: str, target: str):
"""
Removes a player's relationship to another player.
player: `str`_
The player whose relationship to remove.
target: `str`_
The target player.
"""
async with aiohttp.ClientSession() as session:
await session.delete(
f"{self._base_url}/relationships/{player}/{target}",
headers=self._headers,
raise_for_status=True,
) | 32.641425 | 121 | 0.531455 | 1,541 | 14,656 | 4.931214 | 0.120701 | 0.021319 | 0.02224 | 0.030794 | 0.842216 | 0.82274 | 0.792999 | 0.781155 | 0.764048 | 0.678379 | 0 | 0.013198 | 0.374454 | 14,656 | 449 | 122 | 32.641425 | 0.815663 | 0.035207 | 0 | 0.550725 | 0 | 0 | 0.100936 | 0.07967 | 0 | 0 | 0 | 0 | 0 | 1 | 0.007246 | false | 0 | 0.014493 | 0 | 0.101449 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
2f8484c255ef09171132a37c1c81be0354e16938 | 127 | py | Python | dmjedi/model/__init__.py | khaferkamp/dmjedi | 742c556ff47243c3925a06b5838c14d5df714085 | [
"MIT"
] | 2 | 2019-08-13T11:43:50.000Z | 2019-08-13T11:43:54.000Z | dmjedi/model/__init__.py | khaferkamp/dmjedi | 742c556ff47243c3925a06b5838c14d5df714085 | [
"MIT"
] | null | null | null | dmjedi/model/__init__.py | khaferkamp/dmjedi | 742c556ff47243c3925a06b5838c14d5df714085 | [
"MIT"
] | null | null | null | from .columns import (IntColumn, TextColumn, BoolColumn, DateColumn, NumericColumn, TimestampColumn, JsonColumn) # noqa: F401
| 63.5 | 126 | 0.80315 | 12 | 127 | 8.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.026549 | 0.110236 | 127 | 1 | 127 | 127 | 0.876106 | 0.07874 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c82fc33a8f3eccdd0ba191c74db28a3b741982ce | 172 | py | Python | SimFastTiming/Configuration/python/SimFastTiming_cff.py | nistefan/cmssw | ea13af97f7f2117a4f590a5e654e06ecd9825a5b | [
"Apache-2.0"
] | 2 | 2020-01-27T15:21:37.000Z | 2020-05-11T11:13:18.000Z | SimFastTiming/Configuration/python/SimFastTiming_cff.py | nistefan/cmssw | ea13af97f7f2117a4f590a5e654e06ecd9825a5b | [
"Apache-2.0"
] | null | null | null | SimFastTiming/Configuration/python/SimFastTiming_cff.py | nistefan/cmssw | ea13af97f7f2117a4f590a5e654e06ecd9825a5b | [
"Apache-2.0"
] | 1 | 2020-10-06T16:30:09.000Z | 2020-10-06T16:30:09.000Z | import FWCore.ParameterSet.Config as cms
from SimFastTiming.FastTimingCommon.fastTimeDigitizer_cfi import *
from SimFastTiming.FastTimingCommon.mtdDigitizer_cfi import *
| 28.666667 | 66 | 0.872093 | 18 | 172 | 8.222222 | 0.666667 | 0.22973 | 0.445946 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081395 | 172 | 5 | 67 | 34.4 | 0.936709 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
c0f634f0cd4170f5e8dad77f0ea5c31dc76c199a | 184 | py | Python | webprovider/views.py | SlapBass/nx-portal | ee262079db1e5230a24ebbc205e44926f11f8da9 | [
"Apache-2.0"
] | 5 | 2019-10-04T04:46:44.000Z | 2019-10-09T10:02:01.000Z | webprovider/views.py | SlapBass/nx-portal | ee262079db1e5230a24ebbc205e44926f11f8da9 | [
"Apache-2.0"
] | 9 | 2019-10-06T07:15:09.000Z | 2020-09-24T02:19:40.000Z | webprovider/views.py | SlapBass/nx-portal | ee262079db1e5230a24ebbc205e44926f11f8da9 | [
"Apache-2.0"
] | 1 | 2020-06-19T13:26:08.000Z | 2020-06-19T13:26:08.000Z | from django.shortcuts import render
def index(request):
return render(request, 'index.html')
def routable_index(request, aticle_slug):
return render(request, 'index.html')
| 18.4 | 41 | 0.744565 | 24 | 184 | 5.625 | 0.541667 | 0.177778 | 0.281481 | 0.355556 | 0.414815 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.146739 | 184 | 9 | 42 | 20.444444 | 0.859873 | 0 | 0 | 0.4 | 0 | 0 | 0.108696 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0.2 | 0.4 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 6 |
8d02d311793ee8ee3c451777e5e8a4bec4002264 | 497 | py | Python | redshells/contrib/model/__init__.py | hirosassa/redshells | 7824381a7d1f042405014b4572a5d5824338fc74 | [
"MIT"
] | 42 | 2019-01-02T01:31:39.000Z | 2022-01-29T08:56:12.000Z | redshells/contrib/model/__init__.py | hirosassa/redshells | 7824381a7d1f042405014b4572a5d5824338fc74 | [
"MIT"
] | 29 | 2019-03-28T02:33:01.000Z | 2021-09-27T00:45:25.000Z | redshells/contrib/model/__init__.py | hirosassa/redshells | 7824381a7d1f042405014b4572a5d5824338fc74 | [
"MIT"
] | 17 | 2019-02-21T03:08:20.000Z | 2022-02-17T23:27:48.000Z | from redshells.contrib.model.factorization_machine import FactorizationMachineGraph, FactorizationMachine
from redshells.contrib.model.feature_aggregation_similarity_model import FeatureAggregationSimilarityModel
from redshells.contrib.model.graph_convolutional_matrix_completion import GraphConvolutionalMatrixCompletion
from redshells.contrib.model.matrix_factorization_model import MatrixFactorizationGraph, MatrixFactorization
import redshells.model.utils
import redshells.contrib.model.utils
| 71 | 108 | 0.917505 | 48 | 497 | 9.3125 | 0.4375 | 0.178971 | 0.234899 | 0.223714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.044266 | 497 | 6 | 109 | 82.833333 | 0.941053 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9b0ed5c8ed455a13f30afa6b81e4f047339bf611 | 58 | py | Python | skypy/resolvers/__init__.py | nickalaskreynolds/skypy | 777c6d82bf520c75b5c38f8cee9b7b4d438fbdba | [
"MIT"
] | null | null | null | skypy/resolvers/__init__.py | nickalaskreynolds/skypy | 777c6d82bf520c75b5c38f8cee9b7b4d438fbdba | [
"MIT"
] | 3 | 2018-02-11T00:26:18.000Z | 2018-02-17T18:10:29.000Z | skypy/resolvers/__init__.py | nickalaskreynolds/skypy | 777c6d82bf520c75b5c38f8cee9b7b4d438fbdba | [
"MIT"
] | null | null | null | from . import dateresolver
from . import locationresolver
| 19.333333 | 30 | 0.827586 | 6 | 58 | 8 | 0.666667 | 0.416667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137931 | 58 | 2 | 31 | 29 | 0.96 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f195c3026c69169e84e2138f14c695d3eabf4b6f | 222 | py | Python | office365/graph/onedrive/photo.py | stardust85/Office365-REST-Python-Client | cd369c607c7d137a000734e9c5e8f03ae3e3c603 | [
"MIT"
] | null | null | null | office365/graph/onedrive/photo.py | stardust85/Office365-REST-Python-Client | cd369c607c7d137a000734e9c5e8f03ae3e3c603 | [
"MIT"
] | null | null | null | office365/graph/onedrive/photo.py | stardust85/Office365-REST-Python-Client | cd369c607c7d137a000734e9c5e8f03ae3e3c603 | [
"MIT"
] | null | null | null | from office365.runtime.client_value_object import ClientValueObject
class Photo(ClientValueObject):
"""The photo resource provides photo and camera properties, for example, EXIF metadata, on a driveItem."""
pass
| 31.714286 | 110 | 0.788288 | 27 | 222 | 6.407407 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015789 | 0.144144 | 222 | 6 | 111 | 37 | 0.894737 | 0.45045 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
f1a7d9d27256ccdfa5a107bcd0b8a2d0fdc4cebc | 23 | py | Python | tiledb_cli/__init__.py | TileDB-Inc/TileDB-CLI | e18e148fe5c6044b87d28595f5370eecac0b3c8f | [
"MIT"
] | 3 | 2021-09-15T12:55:59.000Z | 2021-12-22T16:39:38.000Z | x/views/__init__.py | jamesroberts/x | d081d97b40cde04a428236b746ef3bc3d0324311 | [
"MIT"
] | 7 | 2021-09-24T00:12:51.000Z | 2022-02-03T20:30:34.000Z | x/views/__init__.py | jamesroberts/x | d081d97b40cde04a428236b746ef3bc3d0324311 | [
"MIT"
] | null | null | null | from .root import root
| 11.5 | 22 | 0.782609 | 4 | 23 | 4.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173913 | 23 | 1 | 23 | 23 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f1b8fdfe488f16beddbf72a854c6406b8348442f | 33 | py | Python | src/web/modules/smartq/tests/__init__.py | fossabot/SIStema | 1427dda2082688a9482c117d0e24ad380fdc26a6 | [
"MIT"
] | 5 | 2018-03-08T17:22:27.000Z | 2018-03-11T14:20:53.000Z | src/web/modules/smartq/tests/__init__.py | fossabot/SIStema | 1427dda2082688a9482c117d0e24ad380fdc26a6 | [
"MIT"
] | 263 | 2018-03-08T18:05:12.000Z | 2022-03-11T23:26:20.000Z | src/web/modules/smartq/tests/__init__.py | fossabot/SIStema | 1427dda2082688a9482c117d0e24ad380fdc26a6 | [
"MIT"
] | 6 | 2018-03-12T19:48:19.000Z | 2022-01-14T04:58:52.000Z | # TODO(Artem Tabolin): add tests
| 16.5 | 32 | 0.727273 | 5 | 33 | 4.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.151515 | 33 | 1 | 33 | 33 | 0.857143 | 0.909091 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 1 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f1b9e16c4210ade0533d90701e2fcfe509bc16d5 | 31 | py | Python | slang_extraction/__init__.py | NeelShah18/api | 602dcd7bce5b3a54873a004e7847565c17ce9fc9 | [
"MIT"
] | null | null | null | slang_extraction/__init__.py | NeelShah18/api | 602dcd7bce5b3a54873a004e7847565c17ce9fc9 | [
"MIT"
] | null | null | null | slang_extraction/__init__.py | NeelShah18/api | 602dcd7bce5b3a54873a004e7847565c17ce9fc9 | [
"MIT"
] | null | null | null | from UNICODE_DATA import SLANG
| 15.5 | 30 | 0.870968 | 5 | 31 | 5.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129032 | 31 | 1 | 31 | 31 | 0.962963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7b33bd2b31d0032be5c8b97ed8a6b2216c9d2fa0 | 21 | py | Python | spice_api/__init__.py | Nekmo/spice | 717a2cc24ad969e1caec2aabeffc30a796c6ec91 | [
"MIT"
] | 41 | 2016-08-01T04:57:24.000Z | 2022-02-13T01:38:04.000Z | spice_api/__init__.py | Nekmo/spice | 717a2cc24ad969e1caec2aabeffc30a796c6ec91 | [
"MIT"
] | 32 | 2016-07-13T18:10:22.000Z | 2018-06-05T22:58:48.000Z | spice_api/__init__.py | Nekmo/spice | 717a2cc24ad969e1caec2aabeffc30a796c6ec91 | [
"MIT"
] | 14 | 2016-08-25T23:09:03.000Z | 2018-05-06T19:33:32.000Z | from .spice import *
| 10.5 | 20 | 0.714286 | 3 | 21 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.190476 | 21 | 1 | 21 | 21 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9e3d12762866a32a3d7b649f80565e65d160cd70 | 27 | py | Python | src/alfred_google/__init__.py | zct/Google-Alfred3-Workflow | cba6cc6753ba2b46a9c3bdb5561076e6bb9f37c3 | [
"MIT"
] | 274 | 2016-06-21T13:57:27.000Z | 2021-12-03T14:07:43.000Z | src/alfred_google/__init__.py | zct/Google-Alfred3-Workflow | cba6cc6753ba2b46a9c3bdb5561076e6bb9f37c3 | [
"MIT"
] | 19 | 2016-07-08T12:59:30.000Z | 2021-10-12T21:01:53.000Z | src/alfred_google/__init__.py | zct/Google-Alfred3-Workflow | cba6cc6753ba2b46a9c3bdb5561076e6bb9f37c3 | [
"MIT"
] | 38 | 2016-07-09T06:26:06.000Z | 2021-11-06T08:00:29.000Z | from gsearch import search
| 13.5 | 26 | 0.851852 | 4 | 27 | 5.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 27 | 1 | 27 | 27 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9e4a96786a082b967ec0a47262cc0b6eed6876ee | 33 | py | Python | foliant/preprocessors/dbmldoc/__init__.py | foliant-docs/foliantcontrib.dbmldoc | a39c04932ef521c03f105245f692a6426b8be69b | [
"MIT"
] | 1 | 2021-07-01T18:12:20.000Z | 2021-07-01T18:12:20.000Z | foliant/preprocessors/dbmldoc/__init__.py | foliant-docs/foliantcontrib.dbmldoc | a39c04932ef521c03f105245f692a6426b8be69b | [
"MIT"
] | null | null | null | foliant/preprocessors/dbmldoc/__init__.py | foliant-docs/foliantcontrib.dbmldoc | a39c04932ef521c03f105245f692a6426b8be69b | [
"MIT"
] | null | null | null | from .dbmldoc import Preprocessor | 33 | 33 | 0.878788 | 4 | 33 | 7.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 33 | 1 | 33 | 33 | 0.966667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9eb99783f0ecfbce53889c07b503d9225226cceb | 7,082 | py | Python | src/genie/libs/parser/iosxr/tests/ShowRouteIpv4/cli/equal/golden8_expected.py | balmasea/genieparser | d1e71a96dfb081e0a8591707b9d4872decd5d9d3 | [
"Apache-2.0"
] | 204 | 2018-06-27T00:55:27.000Z | 2022-03-06T21:12:18.000Z | src/genie/libs/parser/iosxr/tests/ShowRouteIpv4/cli/equal/golden8_expected.py | balmasea/genieparser | d1e71a96dfb081e0a8591707b9d4872decd5d9d3 | [
"Apache-2.0"
] | 468 | 2018-06-19T00:33:18.000Z | 2022-03-31T23:23:35.000Z | src/genie/libs/parser/iosxr/tests/ShowRouteIpv4/cli/equal/golden8_expected.py | balmasea/genieparser | d1e71a96dfb081e0a8591707b9d4872decd5d9d3 | [
"Apache-2.0"
] | 309 | 2019-01-16T20:21:07.000Z | 2022-03-30T12:56:41.000Z | expected_output = {
'vrf': {
'VRF1': {
'address_family': {
'ipv4': {
'routes': {
'10.16.2.2/32': {
'route': '10.16.2.2/32',
'active': True,
'source_protocol_codes': 'L',
'source_protocol': 'local',
'next_hop': {
'outgoing_interface': {
'Loopback300': {
'outgoing_interface': 'Loopback300',
'updated': '3w4d',
},
},
},
},
'10.12.90.2/32': {
'route': '10.12.90.2/32',
'active': True,
'source_protocol_codes': 'L',
'source_protocol': 'local',
'next_hop': {
'outgoing_interface': {
'GigabitEthernet0/0/0/0.390': {
'outgoing_interface': 'GigabitEthernet0/0/0/0.390',
'updated': '3w4d',
},
},
},
},
'10.12.110.2/32': {
'route': '10.12.110.2/32',
'active': True,
'source_protocol_codes': 'L',
'source_protocol': 'local',
'next_hop': {
'outgoing_interface': {
'GigabitEthernet0/0/0/0.410': {
'outgoing_interface': 'GigabitEthernet0/0/0/0.410',
'updated': '3w4d',
},
},
},
},
'10.12.115.2/32': {
'route': '10.12.115.2/32',
'active': True,
'source_protocol_codes': 'L',
'source_protocol': 'local',
'next_hop': {
'outgoing_interface': {
'GigabitEthernet0/0/0/0.415': {
'outgoing_interface': 'GigabitEthernet0/0/0/0.415',
'updated': '3w4d',
},
},
},
},
'10.12.120.2/32': {
'route': '10.12.120.2/32',
'active': True,
'source_protocol_codes': 'L',
'source_protocol': 'local',
'next_hop': {
'outgoing_interface': {
'GigabitEthernet0/0/0/0.420': {
'outgoing_interface': 'GigabitEthernet0/0/0/0.420',
'updated': '3w4d',
},
},
},
},
'10.23.90.2/32': {
'route': '10.23.90.2/32',
'active': True,
'source_protocol_codes': 'L',
'source_protocol': 'local',
'next_hop': {
'outgoing_interface': {
'GigabitEthernet0/0/0/1.390': {
'outgoing_interface': 'GigabitEthernet0/0/0/1.390',
'updated': '3w4d',
},
},
},
},
'10.23.110.2/32': {
'route': '10.23.110.2/32',
'active': True,
'source_protocol_codes': 'L',
'source_protocol': 'local',
'next_hop': {
'outgoing_interface': {
'GigabitEthernet0/0/0/1.410': {
'outgoing_interface': 'GigabitEthernet0/0/0/1.410',
'updated': '3w4d',
},
},
},
},
'10.23.115.2/32': {
'route': '10.23.115.2/32',
'active': True,
'source_protocol_codes': 'L',
'source_protocol': 'local',
'next_hop': {
'outgoing_interface': {
'GigabitEthernet0/0/0/1.415': {
'outgoing_interface': 'GigabitEthernet0/0/0/1.415',
'updated': '3w4d',
},
},
},
},
'10.23.120.2/32': {
'route': '10.23.120.2/32',
'active': True,
'source_protocol_codes': 'L',
'source_protocol': 'local',
'next_hop': {
'outgoing_interface': {
'GigabitEthernet0/0/0/1.420': {
'outgoing_interface': 'GigabitEthernet0/0/0/1.420',
'updated': '3w4d',
},
},
},
},
},
},
},
},
},
}
| 50.94964 | 95 | 0.225925 | 342 | 7,082 | 4.51462 | 0.116959 | 0.031088 | 0.341969 | 0.352332 | 0.878886 | 0.773316 | 0.773316 | 0.555699 | 0.555699 | 0.555699 | 0 | 0.135312 | 0.676504 | 7,082 | 138 | 96 | 51.318841 | 0.538629 | 0 | 0 | 0.391304 | 0 | 0 | 0.237927 | 0.085428 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9eb9cf8f2d8f71bb2613053c0daedd924ce84829 | 146 | py | Python | blog/be/server/views/root.py | kamko/lnu_ht19_4ME310_final_project | ccb5d3c659cde0dac49c1bd6c3d46c46e73a111e | [
"MIT"
] | null | null | null | blog/be/server/views/root.py | kamko/lnu_ht19_4ME310_final_project | ccb5d3c659cde0dac49c1bd6c3d46c46e73a111e | [
"MIT"
] | 2 | 2020-06-07T19:02:54.000Z | 2020-06-07T19:03:02.000Z | blog/be/server/views/root.py | kamko/lnu_ht19_4ME310_final_project | ccb5d3c659cde0dac49c1bd6c3d46c46e73a111e | [
"MIT"
] | null | null | null | from flask import Blueprint
blueprint = Blueprint('root', __name__)
@blueprint.route('/')
def root():
return '4M310-final-project-blog-be'
| 16.222222 | 40 | 0.712329 | 18 | 146 | 5.555556 | 0.777778 | 0.36 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.031746 | 0.136986 | 146 | 8 | 41 | 18.25 | 0.761905 | 0 | 0 | 0 | 0 | 0 | 0.219178 | 0.184932 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0.2 | 0.6 | 0.6 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 6 |
7b962c6dff4381c0ee6c07d37720be694d3ca3f0 | 28 | py | Python | __init__.py | janglapuk/xiongmai-cam-api | 15b1328983ad4c441869f09c086457198847bb8f | [
"MIT"
] | 17 | 2018-07-05T20:55:41.000Z | 2021-05-17T09:27:39.000Z | __init__.py | ngohuynhngockhanh/xiongmai-cam-api | b1263fa622523e7d31f22bab5816e5367ffbf877 | [
"MIT"
] | 2 | 2020-01-04T17:19:39.000Z | 2021-05-20T15:03:32.000Z | __init__.py | ngohuynhngockhanh/xiongmai-cam-api | b1263fa622523e7d31f22bab5816e5367ffbf877 | [
"MIT"
] | 10 | 2017-11-12T10:41:44.000Z | 2021-07-19T15:02:15.000Z | from . import xmcam, xmconst | 28 | 28 | 0.785714 | 4 | 28 | 5.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 28 | 1 | 28 | 28 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c88f160f100014e20ee7bb08a5f79279abf9e75b | 32 | py | Python | maps/park_path/__init__.py | 56kyle/bloons_auto | 419d55b51d1cddc49099593970adf1c67985b389 | [
"MIT"
] | null | null | null | maps/park_path/__init__.py | 56kyle/bloons_auto | 419d55b51d1cddc49099593970adf1c67985b389 | [
"MIT"
] | null | null | null | maps/park_path/__init__.py | 56kyle/bloons_auto | 419d55b51d1cddc49099593970adf1c67985b389 | [
"MIT"
] | null | null | null | from .park_path import ParkPath
| 16 | 31 | 0.84375 | 5 | 32 | 5.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 32 | 1 | 32 | 32 | 0.928571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c8cf79176778ceeafccb2a727d303901d0d55b41 | 5,545 | py | Python | utils/reid_metric.py | Qidian213/NAIC2019 | 23e05a8a096168ccfa4d1743467fdf78ffcaabba | [
"MIT"
] | null | null | null | utils/reid_metric.py | Qidian213/NAIC2019 | 23e05a8a096168ccfa4d1743467fdf78ffcaabba | [
"MIT"
] | null | null | null | utils/reid_metric.py | Qidian213/NAIC2019 | 23e05a8a096168ccfa4d1743467fdf78ffcaabba | [
"MIT"
] | null | null | null | # encoding: utf-8
"""
@author: liaoxingyu
@contact: sherlockliao01@gmail.com
"""
import numpy as np
import torch
from ignite.metrics import Metric
from data.datasets.eval_reid import eval_func, eval_submit
from .re_ranking import re_ranking
from .distance import low_memory_local_dist
class R1_mAP(Metric):
def __init__(self, num_query, max_rank=50, feat_norm='yes'):
super(R1_mAP, self).__init__()
self.num_query = num_query
self.max_rank = max_rank
self.feat_norm = feat_norm
def reset(self):
self.scores = []
self.feats = []
self.local_feats = []
self.pids = []
self.camids = []
self.img_paths = []
def update(self, output):
score, feat, local_feat, pid, camid, img_paths = output
self.scores.append(score)
self.feats.append(feat)
self.local_feats.append(local_feat)
self.pids.extend(np.asarray(pid))
self.camids.extend(np.asarray(camid))
self.img_paths.extend(np.asarray(img_paths))
def compute(self):
feats = torch.cat(self.feats, dim=0)
local_feats = torch.cat(self.local_feats, dim=0)
if self.feat_norm == 'yes':
print("The test feature is normalized")
feats = torch.nn.functional.normalize(feats, dim=1, p=2)
# query
qf = feats[:self.num_query]
qlf = local_feats[:self.num_query]
q_pids = np.asarray(self.pids[:self.num_query])
q_camids = np.asarray(self.camids[:self.num_query])
q_img_paths = np.asarray(self.img_paths[:self.num_query])
# gallery
gf = feats[self.num_query:]
glf = local_feats[self.num_query:]
g_pids = np.asarray(self.pids[self.num_query:])
g_camids = np.asarray(self.camids[self.num_query:])
g_img_paths = np.asarray(self.img_paths[self.num_query:])
m, n = qf.shape[0], gf.shape[0]
### global distmat
global_distmat = torch.pow(qf, 2).sum(dim=1, keepdim=True).expand(m, n) + \
torch.pow(gf, 2).sum(dim=1, keepdim=True).expand(n, m).t()
global_distmat.addmm_(qf, gf.t(), beta=1, alpha=-2)  # keyword form; positional (1, -2, ...) is deprecated
global_distmat = global_distmat.cpu().numpy()
### local distmat
qlf = qlf.permute(0,2,1)
glf = glf.permute(0,2,1)
local_distmat = low_memory_local_dist(qlf.cpu().numpy(),glf.cpu().numpy(), aligned = True)
dist_mat = global_distmat + 0.4*local_distmat
cmc, mAP = eval_func(dist_mat, q_pids, g_pids, q_camids, g_camids,q_img_paths, g_img_paths)
return cmc, mAP
class R1_mAP_reranking(Metric):
def __init__(self, num_query, max_rank=50, feat_norm='yes'):
super(R1_mAP_reranking, self).__init__()
self.num_query = num_query
self.max_rank = max_rank
self.feat_norm = feat_norm
def reset(self):
self.scores = []
self.feats = []
self.local_feats = []
self.pids = []
self.camids = []
self.img_paths = []
def update(self, output):
score, feat, local_feat, pid, camid, img_paths = output
self.scores.append(score)
self.feats.append(feat)
self.local_feats.append(local_feat)
self.pids.extend(np.asarray(pid))
self.camids.extend(np.asarray(camid))
self.img_paths.extend(np.asarray(img_paths))
def compute(self):
feats = torch.cat(self.feats, dim=0)
local_feats = torch.cat(self.local_feats, dim=0)
if self.feat_norm == 'yes':
print("The test feature is normalized")
feats = torch.nn.functional.normalize(feats, dim=1, p=2)
# query
qf = feats[:self.num_query]
qlf = local_feats[:self.num_query]
q_pids = np.asarray(self.pids[:self.num_query])
q_camids = np.asarray(self.camids[:self.num_query])
q_img_paths = np.asarray(self.img_paths[:self.num_query])
# gallery
gf = feats[self.num_query:]
glf = local_feats[self.num_query:]
g_pids = np.asarray(self.pids[self.num_query:])
g_camids = np.asarray(self.camids[self.num_query:])
g_img_paths = np.asarray(self.img_paths[self.num_query:])
### local distmat
qlf = qlf.permute(0,2,1)
glf = glf.permute(0,2,1)
local_distmat = low_memory_local_dist(qlf.cpu().numpy(),glf.cpu().numpy(), aligned = True)
local_qq_distmat = low_memory_local_dist(qlf.cpu().numpy(),qlf.cpu().numpy(), aligned = True)
local_gg_distmat = low_memory_local_dist(glf.cpu().numpy(),glf.cpu().numpy(), aligned = True)
local_dist = np.concatenate(
[np.concatenate([local_qq_distmat, local_distmat], axis=1),
np.concatenate([local_distmat.T, local_gg_distmat], axis=1)],
axis=0)
print("Enter reranking")
### only global_features
# distmat = re_ranking(qf, gf, k1=3, k2=1, lambda_value= 0.3, wl=0.4)
### only local features
# distmat = re_ranking(qf,gf,k1=3,k2=1,lambda_value=0.3,local_distmat=local_dist,only_local=True)
### global and local features
distmat = re_ranking(qf,gf,k1=7,k2=2,lambda_value=0.4, wl=0.3, local_distmat=local_dist,only_local=False)
# cmc, mAP = eval_func(distmat, q_pids, g_pids, q_camids, g_camids,q_img_paths, g_img_paths)
cmc, mAP = eval_submit(distmat, q_pids, g_pids, q_camids, g_camids,q_img_paths, g_img_paths)
return cmc, mAP
| 37.214765 | 113 | 0.617674 | 793 | 5,545 | 4.081967 | 0.152585 | 0.064257 | 0.088971 | 0.042014 | 0.793018 | 0.777881 | 0.777881 | 0.762434 | 0.707445 | 0.707445 | 0 | 0.015896 | 0.251217 | 5,545 | 148 | 114 | 37.466216 | 0.763728 | 0.086384 | 0 | 0.75 | 0 | 0 | 0.017286 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.057692 | 0 | 0.173077 | 0.028846 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
cdcece15f0ec129fb407523f95a60e83cf4ccc81 | 47 | py | Python | cmr_app.py | ejmg/clear-my-record-backend | 225a82b0f997435e4674f3b9929b4d204bd2fff0 | [
"MIT"
] | null | null | null | cmr_app.py | ejmg/clear-my-record-backend | 225a82b0f997435e4674f3b9929b4d204bd2fff0 | [
"MIT"
] | null | null | null | cmr_app.py | ejmg/clear-my-record-backend | 225a82b0f997435e4674f3b9929b4d204bd2fff0 | [
"MIT"
] | null | null | null | from clear_my_record_backend.server import cmr
| 23.5 | 46 | 0.893617 | 8 | 47 | 4.875 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085106 | 47 | 1 | 47 | 47 | 0.906977 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
cdf92ed5525714a803d77b5efe68c7d3da46b2a0 | 105 | py | Python | rlcard/utils/__init__.py | randombenj/rlcard | 0948035d26e1b619c068360326f12451f5d28f8b | [
"MIT"
] | 1,735 | 2019-09-05T12:49:43.000Z | 2022-03-30T12:02:07.000Z | rlcard/utils/__init__.py | randombenj/rlcard | 0948035d26e1b619c068360326f12451f5d28f8b | [
"MIT"
] | 197 | 2019-09-14T05:59:02.000Z | 2022-03-03T19:21:19.000Z | rlcard/utils/__init__.py | randombenj/rlcard | 0948035d26e1b619c068360326f12451f5d28f8b | [
"MIT"
] | 476 | 2019-09-13T15:25:32.000Z | 2022-03-29T01:41:29.000Z | from rlcard.utils.logger import Logger
from rlcard.utils import seeding
from rlcard.utils.utils import *
| 26.25 | 38 | 0.828571 | 16 | 105 | 5.4375 | 0.375 | 0.344828 | 0.517241 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114286 | 105 | 3 | 39 | 35 | 0.935484 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
a823e552afe275bef8b06e49d5cdb5b71a238bd5 | 39 | py | Python | accounts/tests/__init__.py | adrienlachaize/dezede | 584ec30cedab95152e2f95595b7691a04e6736e2 | [
"BSD-3-Clause"
] | 15 | 2015-02-10T21:16:31.000Z | 2021-03-25T16:46:20.000Z | accounts/tests/__init__.py | adrienlachaize/dezede | 584ec30cedab95152e2f95595b7691a04e6736e2 | [
"BSD-3-Clause"
] | 4 | 2021-02-10T15:42:08.000Z | 2022-03-11T23:20:38.000Z | accounts/tests/__init__.py | adrienlachaize/dezede | 584ec30cedab95152e2f95595b7691a04e6736e2 | [
"BSD-3-Clause"
] | 6 | 2016-07-10T14:20:48.000Z | 2022-01-19T18:34:02.000Z | from .register import RegisterTestCase
| 19.5 | 38 | 0.871795 | 4 | 39 | 8.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102564 | 39 | 1 | 39 | 39 | 0.971429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b5297982301f416da355846a5367798d40ec168a | 208 | py | Python | src/pkg/caendr/caendr/services/sql/etl/__init__.py | AndersenLab/CAENDR | ce4cdb74db736db8226ffc90988959b71b0d5ff5 | [
"MIT"
] | 3 | 2022-02-09T07:04:37.000Z | 2022-03-11T02:46:35.000Z | src/pkg/caendr/caendr/services/sql/etl/__init__.py | AndersenLab/CAENDR | ce4cdb74db736db8226ffc90988959b71b0d5ff5 | [
"MIT"
] | 4 | 2022-01-28T22:28:08.000Z | 2022-02-11T21:47:15.000Z | src/pkg/caendr/caendr/services/sql/etl/__init__.py | AndersenLab/CAENDR | ce4cdb74db736db8226ffc90988959b71b0d5ff5 | [
"MIT"
] | 1 | 2022-01-11T03:39:02.000Z | 2022-01-11T03:39:02.000Z | from .strains import load_strains
from .wormbase import load_genes_summary, load_genes, load_orthologs
from .homologs import load_homologs
from .strain_annotated_variants import load_strain_annotated_variants | 52 | 69 | 0.889423 | 29 | 208 | 6 | 0.413793 | 0.229885 | 0.264368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081731 | 208 | 4 | 69 | 52 | 0.910995 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
b54b1e85c5eac0275add69369fd86b868fa1fc36 | 37 | py | Python | proxypool/exceptions/__init__.py | lixinjiang/ProxyPool | b39461f11ce0bdb81b0898fb7ce10075b4526d1f | [
"MIT"
] | 3,584 | 2017-07-09T17:32:20.000Z | 2022-03-31T18:45:49.000Z | proxypool/exceptions/__init__.py | wu2021-wang/proxyssr | ab917a8f0a6d65ce771501539047c776851a0e67 | [
"MIT"
] | 128 | 2017-12-23T16:02:30.000Z | 2022-03-31T05:26:55.000Z | proxypool/exceptions/__init__.py | wu2021-wang/proxyssr | ab917a8f0a6d65ce771501539047c776851a0e67 | [
"MIT"
] | 1,509 | 2017-09-14T08:06:19.000Z | 2022-03-30T20:59:56.000Z | from .empty import PoolEmptyException | 37 | 37 | 0.891892 | 4 | 37 | 8.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081081 | 37 | 1 | 37 | 37 | 0.970588 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a908e385e12363d9c9aa614723b835b92d3b4b86 | 16,717 | py | Python | tests/decorators/test_decorator_iterate_on_arg.py | kdeltared/tcex | 818c0d09256764f871e42d9ca5916f92d941d882 | [
"Apache-2.0"
] | 18 | 2017-01-09T22:17:49.000Z | 2022-01-24T20:46:42.000Z | tests/decorators/test_decorator_iterate_on_arg.py | kdeltared/tcex | 818c0d09256764f871e42d9ca5916f92d941d882 | [
"Apache-2.0"
] | 84 | 2017-04-11T13:47:49.000Z | 2022-03-21T20:12:57.000Z | tests/decorators/test_decorator_iterate_on_arg.py | kdeltared/tcex | 818c0d09256764f871e42d9ca5916f92d941d882 | [
"Apache-2.0"
] | 43 | 2017-01-05T20:40:26.000Z | 2022-03-31T19:18:02.000Z | """Test the TcEx IterateOn Decorator."""
# third-party
import pytest
# first-party
from tcex import IterateOnArg, OnException
# pylint: disable=no-self-use
class TestIterateOnArgDecorators:
"""Test the TcEx Decorators."""
args = None
tcex = None
exit_message = None
@IterateOnArg(
arg='colors',
default=None,
fail_enabled=True,
fail_msg='Failed iterate_on_args',
fail_msg_property='fail_msg',
fail_on=[None],
)
@OnException()
def iterate_on_arg(self, ret_val, colors, _array_length=None, _index=None):
"""Test fail on input decorator with no arg value (use first arg input)."""
if ret_val == 'colors':
return colors
if ret_val == '_array_length':
return _array_length
if ret_val == '_index':
return _index
return None
@pytest.mark.parametrize(
'arg,value,variable_type',
[
('colors', b'blue', 'Binary'),
('colors', [b'blue'], 'BinaryArray'),
('colors', [b'blue', b'red'], 'BinaryArray'),
('colors', {'key': 'color', 'value': 'blue'}, 'KeyValue'),
('colors', [{'key': 'color', 'value': 'blue'}], 'KeyValueArray'),
(
'colors',
[{'key': 'color', 'value': 'blue'}, {'key': 'color', 'value': 'red'}],
'KeyValueArray',
),
('colors', 'blue', 'String'),
('colors', ['blue'], 'StringArray'),
('colors', ['blue', 'red'], 'StringArray'),
('colors', {'id': '123', 'type': 'Address', 'value': '1.1.1.1'}, 'TCEntity'),
('colors', [{'id': '123', 'type': 'Address', 'value': '1.1.1.1'}], 'TCEntityArray'),
(
'colors',
[
{'id': '123', 'type': 'Address', 'value': '1.1.1.1'},
{'id': '002', 'type': 'Address', 'value': '2.2.2.2'},
],
'TCEntityArray',
),
],
)
def test_iterate_on_arg_color(self, arg, value, variable_type, playbook_app):
"""Test ReadArg decorator.
Args:
playbook_app (callable, fixture): The playbook_app fixture.
"""
variable = f'#App:0001:{arg}!{variable_type}'
config_data = {arg: variable, 'tc_playbook_out_variables': [variable]}
self.tcex = playbook_app(config_data=config_data).tcex
self.args = self.tcex.args
# parse variable and add to KV store
self.tcex.playbook.create_output(arg, value, variable_type)
# call decorated method and get result
result = self.iterate_on_arg(ret_val='colors') # pylint: disable=no-value-for-parameter
# results will always be an array so ensure value/expected is an array
expected = value
if not isinstance(expected, list):
expected = [expected]
assert result == expected, f'result of ({result}) does not match ({expected})'
@pytest.mark.parametrize(
'arg,value,variable_type',
[
('colors', b'blue', 'Binary'),
('colors', [b'blue'], 'BinaryArray'),
('colors', [b'blue', b'red'], 'BinaryArray'),
('colors', {'key': 'color', 'value': 'blue'}, 'KeyValue'),
('colors', [{'key': 'color', 'value': 'blue'}], 'KeyValueArray'),
(
'colors',
[{'key': 'color', 'value': 'blue'}, {'key': 'color', 'value': 'red'}],
'KeyValueArray',
),
('colors', 'blue', 'String'),
('colors', ['blue'], 'StringArray'),
('colors', ['blue', 'red'], 'StringArray'),
('colors', {'id': '123', 'type': 'Address', 'value': '1.1.1.1'}, 'TCEntity'),
('colors', [{'id': '123', 'type': 'Address', 'value': '1.1.1.1'}], 'TCEntityArray'),
(
'colors',
[
{'id': '123', 'type': 'Address', 'value': '1.1.1.1'},
{'id': '002', 'type': 'Address', 'value': '2.2.2.2'},
],
'TCEntityArray',
),
],
)
def test_iterate_on_arg_array_length(self, arg, value, variable_type, playbook_app):
"""Test ReadArg decorator.
Args:
playbook_app (callable, fixture): The playbook_app fixture.
"""
variable = f'#App:0001:{arg}!{variable_type}'
config_data = {arg: variable, 'tc_playbook_out_variables': [variable]}
self.tcex = playbook_app(config_data=config_data).tcex
self.args = self.tcex.args
# parse variable and add to KV store
self.tcex.playbook.create_output(arg, value, variable_type)
# call decorated method and get result
result = self.iterate_on_arg( # pylint: disable=no-value-for-parameter
ret_val='_array_length'
)
# results will always be an array so ensure value/expected is an array
expected = value
if not isinstance(expected, list):
expected = [expected]
assert result[0] == len(
expected
), f'array length of {result} does not match length of expected'
@pytest.mark.parametrize(
'arg,value,variable_type',
[
('colors', b'blue', 'Binary'),
('colors', [b'blue'], 'BinaryArray'),
('colors', [b'blue', b'red'], 'BinaryArray'),
('colors', {'key': 'color', 'value': 'blue'}, 'KeyValue'),
('colors', [{'key': 'color', 'value': 'blue'}], 'KeyValueArray'),
(
'colors',
[{'key': 'color', 'value': 'blue'}, {'key': 'color', 'value': 'red'}],
'KeyValueArray',
),
('colors', 'blue', 'String'),
('colors', ['blue'], 'StringArray'),
('colors', ['blue', 'red'], 'StringArray'),
('colors', {'id': '123', 'type': 'Address', 'value': '1.1.1.1'}, 'TCEntity'),
('colors', [{'id': '123', 'type': 'Address', 'value': '1.1.1.1'}], 'TCEntityArray'),
(
'colors',
[
{'id': '123', 'type': 'Address', 'value': '1.1.1.1'},
{'id': '002', 'type': 'Address', 'value': '2.2.2.2'},
],
'TCEntityArray',
),
],
)
def test_iterate_on_arg_index(self, arg, value, variable_type, playbook_app):
"""Test ReadArg decorator.
Args:
playbook_app (callable, fixture): The playbook_app fixture.
"""
variable = f'#App:0001:{arg}!{variable_type}'
config_data = {arg: variable, 'tc_playbook_out_variables': [variable]}
self.tcex = playbook_app(config_data=config_data).tcex
self.args = self.tcex.args
# parse variable and add to KV store
self.tcex.playbook.create_output(arg, value, variable_type)
# call decorated method and get result
result = self.iterate_on_arg(ret_val='_index') # pylint: disable=no-value-for-parameter
# results will always be an array so ensure value/expected is an array
expected = value
if not isinstance(expected, list):
expected = [expected]
assert (result[-1] + 1) == len(
expected
), f'index of {result[-1]} does not match length of expected'
@IterateOnArg(
arg='colors',
default='magenta',
fail_enabled=False,
fail_msg='Failed iterate_on_args',
fail_on=None,
)
def iterate_on_arg_default(self, **kwargs):
"""Test fail on input decorator with no arg value (use first arg input)."""
return kwargs.get('colors')
@pytest.mark.parametrize(
'arg,value,variable_type,expected',
[
# expected must have default value from decorator
('colors', None, 'String', ['magenta']),
('colors', [None], 'StringArray', ['magenta']),
('colors', ['blue', None], 'StringArray', ['blue', 'magenta']),
],
)
def test_iterate_on_arg_default(self, arg, value, variable_type, expected, playbook_app):
"""Test ReadArg decorator.
Args:
playbook_app (callable, fixture): The playbook_app fixture.
"""
variable = f'#App:0001:{arg}!{variable_type}'
config_data = {arg: variable, 'tc_playbook_out_variables': [variable]}
self.tcex = playbook_app(config_data=config_data).tcex
self.args = self.tcex.args
# parse variable and add to KV store
self.tcex.playbook.create_output(arg, value, variable_type)
# call decorated method and get result
result = self.iterate_on_arg_default()
assert result == expected, f'result of ({result}) does not match ({expected})'
@IterateOnArg(
arg='colors',
default=None,
fail_enabled='fail_on_error',
fail_msg='Failed iterate_on_args',
fail_on=[None, ''],
)
def iterate_on_arg_fail_on(self, **kwargs):
"""Test fail on input decorator with no arg value (use first arg input)."""
return kwargs.get('colors')
@pytest.mark.parametrize(
'arg,value,variable_type',
[
('colors', [None], 'StringArray'),
('colors', ['blue', None], 'StringArray'),
('colors', ['blue', ''], 'StringArray'),
],
)
def test_iterate_on_arg_fail_on(self, arg, value, variable_type, playbook_app):
"""Test ReadArg decorator.
Args:
playbook_app (callable, fixture): The playbook_app fixture.
"""
variable = f'#App:0001:{arg}!{variable_type}'
config_data = {
arg: variable,
'fail_on_error': True,
'tc_playbook_out_variables': [variable],
}
self.tcex = playbook_app(config_data=config_data).tcex
self.args = self.tcex.args
# parse variable and add to KV store
self.tcex.playbook.create_output(arg, value, variable_type)
# call decorated method and get result
try:
self.iterate_on_arg_fail_on()
assert False, 'fail on value was not caught'
except SystemExit:
assert self.exit_message == 'Failed iterate_on_args' # must match fail_msg on decorator
@IterateOnArg(
'colors',
fail_on=[''],
to_float=True,
to_int={'allow_none': True},
equal_to=123,
in_range={'min': 100, 'max': 200},
less_than=150,
default='123',
fail_enabled=True,
)
def iterate_on_arg_validators(self, **kwargs):
"""Test various validators and transforms."""
return kwargs.get('colors')
@IterateOnArg(
'colors',
fail_on=[''],
to_float=True,
to_int={'allow_none': True},
equal_to=123,
in_range={'min': 100, 'max': 200},
less_than=150,
fail_msg='Custom fail msg.',
fail_enabled=True,
)
def iterate_on_arg_validators_fail_msg(self, **kwargs):
"""Test various validators and transforms."""
return kwargs.get('colors')
@IterateOnArg(
'colors', fail_on=[''], to_int=[], equal_to=123, in_range=[100, 200], less_than=150
)
def iterate_on_arg_validators_diff(self, **kwargs):
"""functionally the same as above but uses different input methods to exercise code."""
return kwargs.get('colors')
@pytest.mark.parametrize(
'arg,value,variable_type,expected',
[
# expected must have default value from decorator
('colors', None, 'String', [123]),
('colors', [None], 'StringArray', [123]),
('colors', ['123', None], 'StringArray', [123, 123]),
],
)
def test_iterate_on_arg_validators(self, arg, value, variable_type, expected, playbook_app):
"""Test ReadArg decorator.
Args:
playbook_app (callable, fixture): The playbook_app fixture.
"""
variable = f'#App:0001:{arg}!{variable_type}'
config_data = {arg: variable, 'tc_playbook_out_variables': [variable]}
self.tcex = playbook_app(config_data=config_data).tcex
self.args = self.tcex.args
# parse variable and add to KV store
self.tcex.playbook.create_output(arg, value, variable_type)
# call decorated method and get result
result = self.iterate_on_arg_validators()
assert result == expected, f'result of ({result}) does not match ({expected})'
@pytest.mark.parametrize(
'arg,value,variable_type',
[
# expected must have default value from decorator
('colors', ['135'], 'StringArray'),
],
)
def test_iterate_on_arg_validators_fail(self, arg, value, variable_type, playbook_app):
"""Test ReadArg decorator.
Args:
playbook_app (callable, fixture): The playbook_app fixture.
"""
variable = f'#App:0001:{arg}!{variable_type}'
config_data = {arg: variable, 'tc_playbook_out_variables': [variable]}
self.tcex = playbook_app(config_data=config_data).tcex
self.args = self.tcex.args
# parse variable and add to KV store
self.tcex.playbook.create_output(arg, value, variable_type)
# call decorated method and get result
try:
self.iterate_on_arg_validators()
assert False, 'Should have failed!'
except SystemExit:
assert (
self.exit_message
== 'Invalid value (135) found for "Colors": "Colors" (colors) is not equal to 123'
)
@pytest.mark.parametrize(
'arg,value,variable_type',
[
# expected must have default value from decorator
('colors', ['90'], 'StringArray')
],
)
def test_validators_fail_msg(self, arg, value, variable_type, playbook_app):
"""Test ReadArg decorator.
Args:
playbook_app (callable, fixture): The playbook_app fixture.
"""
variable = f'#App:0001:{arg}!{variable_type}'
config_data = {arg: variable, 'tc_playbook_out_variables': [variable]}
self.tcex = playbook_app(config_data=config_data).tcex
self.args = self.tcex.args
# parse variable and add to KV store
self.tcex.playbook.create_output(arg, value, variable_type)
# call decorated method and get result
try:
self.iterate_on_arg_validators_fail_msg()
assert False, 'Should have failed!'
except SystemExit:
assert self.exit_message == 'Custom fail msg.'
@pytest.mark.parametrize(
'arg,value,variable_type',
[
# expected must have default value from decorator
('colors', ['abc'], 'StringArray',)
],
)
def test_transforms_fail_msg(self, arg, value, variable_type, playbook_app):
"""Test fail_msg for transfomrs."""
variable = f'#App:0001:{arg}!{variable_type}'
config_data = {arg: variable, 'tc_playbook_out_variables': [variable]}
self.tcex = playbook_app(config_data=config_data).tcex
self.args = self.tcex.args
# parse variable and add to KV store
self.tcex.playbook.create_output(arg, value, variable_type)
# call decorated method and get result
try:
self.iterate_on_arg_validators_fail_msg()
assert False, 'Should have failed!'
except SystemExit:
assert self.exit_message == 'Custom fail msg.'
@pytest.mark.parametrize(
'arg,value,variable_type',
[
# expected must have default value from decorator
('colors', ['abc'], 'StringArray',)
],
)
def test_transforms_fail(self, arg, value, variable_type, playbook_app):
"""Test fail_msg for transfomrs."""
variable = f'#App:0001:{arg}!{variable_type}'
config_data = {arg: variable, 'tc_playbook_out_variables': [variable]}
self.tcex = playbook_app(config_data=config_data).tcex
self.args = self.tcex.args
# parse variable and add to KV store
self.tcex.playbook.create_output(arg, value, variable_type)
# call decorated method and get result
try:
self.iterate_on_arg_validators()
assert False, 'Should have failed!'
except SystemExit:
assert (
self.exit_message
== 'Invalid value ("abc") found for "Colors": "Colors" (colors) must be a float.'
)
| 36.740659 | 100 | 0.560089 | 1,836 | 16,717 | 4.929739 | 0.088235 | 0.053033 | 0.053033 | 0.066291 | 0.88841 | 0.869738 | 0.842669 | 0.829853 | 0.820351 | 0.819909 | 0 | 0.016468 | 0.298917 | 16,717 | 454 | 101 | 36.821586 | 0.755802 | 0.159718 | 0 | 0.66358 | 0 | 0.006173 | 0.227899 | 0.058888 | 0 | 0 | 0 | 0 | 0.046296 | 1 | 0.049383 | false | 0 | 0.006173 | 0 | 0.095679 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a94e3e5b2a744cf24d01a7a07eb350e5e8586a0c | 128 | py | Python | pysignalclirestapi/__init__.py | thielenf/pysignalclirestapi | fa6dae987dba04e2ec1d34e54cb0b4897ce85d14 | [
"MIT"
] | 6 | 2020-01-18T00:37:14.000Z | 2022-01-24T08:15:54.000Z | pysignalclirestapi/__init__.py | thielenf/pysignalclirestapi | fa6dae987dba04e2ec1d34e54cb0b4897ce85d14 | [
"MIT"
] | 7 | 2021-04-29T10:04:40.000Z | 2022-02-13T15:55:31.000Z | pysignalclirestapi/__init__.py | thielenf/pysignalclirestapi | fa6dae987dba04e2ec1d34e54cb0b4897ce85d14 | [
"MIT"
] | 7 | 2020-08-24T04:04:51.000Z | 2022-02-05T23:44:55.000Z | from pysignalclirestapi.api import SignalCliRestApi, SignalCliRestApiError, SignalCliRestApiAuth, SignalCliRestApiHTTPBasicAuth
| 64 | 127 | 0.914063 | 8 | 128 | 14.625 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.054688 | 128 | 1 | 128 | 128 | 0.966942 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
a97c49bab196395752fd3ec41a65c846696040ba | 117,615 | py | Python | TestFileIO.py | dcoukos/CHO_network | 2b609b1a947e7c32c8dcd5c96d83c1df9c560bb7 | [
"MIT"
] | 1 | 2018-01-08T19:40:07.000Z | 2018-01-08T19:40:07.000Z | TestFileIO.py | dcoukos/CHO_network | 2b609b1a947e7c32c8dcd5c96d83c1df9c560bb7 | [
"MIT"
] | null | null | null | TestFileIO.py | dcoukos/CHO_network | 2b609b1a947e7c32c8dcd5c96d83c1df9c560bb7 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Fri Jan 19 23:45:56 2018
@author: dimitricoukos
"""
import unittest
import json
import DataTreatment
from DataTreatment import openJson, write
class SampleData(unittest.TestCase):
initial_input = {
"GLNLASEer": {
"N-octanoyl-DL-homoserine lactone": [],
"5-butyl-4-methyldihydro-2(3H)-furanone": [],
"gamma-undecanolactone": [
{
"wild-type": False,
"organism": "Sulfolobus solfataricus",
"turnoverNumber": "3.92",
"ecNumber": "3.1.1.25"
},
{
"wild-type": False,
"organism": "Sulfolobus solfataricus",
"turnoverNumber": "4.25",
"ecNumber": "3.1.1.25"
},
{
"wild-type": False,
"organism": "Sulfolobus solfataricus",
"turnoverNumber": "4.55",
"ecNumber": "3.1.1.25"
},
{
"wild-type": False,
"organism": "Sulfolobus solfataricus",
"turnoverNumber": "4.63",
"ecNumber": "3.1.1.25"
},
{
"wild-type": True,
"organism": "Sulfolobus solfataricus",
"turnoverNumber": "4.95",
"ecNumber": "3.1.1.25"
},
{
"wild-type": False,
"organism": "Sulfolobus solfataricus",
"turnoverNumber": "5.64",
"ecNumber": "3.1.1.25"
}
],
"gamma-dodecanolactone": [],
"N-(3-oxododecanoyl)-L-homoserine lactone": [
{
"wild-type": True,
"organism": "Sulfolobus solfataricus",
"turnoverNumber": "1.01",
"ecNumber": "3.1.1.25"
},
{
"wild-type": False,
"organism": "Sulfolobus solfataricus",
"turnoverNumber": "1.8",
"ecNumber": "3.1.1.25"
},
{
"wild-type": False,
"organism": "Sulfolobus solfataricus",
"turnoverNumber": "3",
"ecNumber": "3.1.1.25"
},
{
"wild-type": False,
"organism": "Sulfolobus solfataricus",
"turnoverNumber": "6.44",
"ecNumber": "3.1.1.25"
}
],
"nonanoic-1,5-lactone": [],
"gamma-dodecalactone": [],
"N-(3-oxodecanoyl)-L-homoserine lactone": [
{
"wild-type": False,
"organism": "Sulfolobus solfataricus",
"turnoverNumber": "0.19",
"ecNumber": "3.1.1.25"
},
{
"wild-type": False,
"organism": "Sulfolobus solfataricus",
"turnoverNumber": "0.6",
"ecNumber": "3.1.1.25"
},
{
"wild-type": False,
"organism": "Sulfolobus solfataricus",
"turnoverNumber": "3.96",
"ecNumber": "3.1.1.25"
},
{
"wild-type": True,
"organism": "Sulfolobus solfataricus",
"turnoverNumber": "4.52",
"ecNumber": "3.1.1.25"
}
],
"gamma-dodecanoic lactone": [
{
"organism": "Homo sapiens",
"turnoverNumber": "101",
"ecNumber": "3.1.1.25"
}
],
"gamma-heptalactone": [],
"undecanoic-gamma-lactone": [],
"N-(2-oxotetrahydrofuran-3-yl)pentanamide": [],
"N-octanoylhomoserine lactone": [],
"nonanoic-gamma-lactone": [
{
"wild-type": False,
"organism": "Sulfolobus islandicus",
"turnoverNumber": "2",
"ecNumber": "3.1.1.25"
},
{
"wild-type": True,
"organism": "Sulfolobus islandicus",
"turnoverNumber": "3.1",
"ecNumber": "3.1.1.25"
}
],
"5-(thiobutyl)butyrolactone": [
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "7.5",
"ecNumber": "3.1.1.25"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "19.4",
"ecNumber": "3.1.1.25"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "116",
"ecNumber": "3.1.1.25"
}
],
"N-hexanoylhomoserine lactone": [],
"N-(3-oxodecanoyl)-DL-homoserine lactone": [],
"delta-undecalactone": [],
"delta-dodecalactone": [],
"gamma-(S)-valerolactone": [],
"gamma-undecalactone": [],
"gamma-(R)-valerolactone": [],
"octanoyl-L-homoserine lactone": [],
"N-(3-oxododecanoyl)-DL-homoserine lactone": [],
"gamma-(S)-caprolactone": [],
"dodecanoic-1,5-lactone": [],
"gamma-nonanoic acid lactone": [],
"gamma-heptanolactone": [],
"Paraoxon": [
{
"wild-type": False,
"organism": "Sulfolobus solfataricus",
"turnoverNumber": "8.47",
"ecNumber": "3.1.1.25"
},
{
"wild-type": True,
"organism": "Sulfolobus solfataricus",
"turnoverNumber": "12.6",
"ecNumber": "3.1.1.25"
}
],
"dodecanoic-gamma-lactone": [],
"undecanoic-1,5-lactone": [],
"gamma-heptanolide": [
{
"organism": "Sulfolobus acidocaldarius",
"turnoverNumber": "10.25",
"ecNumber": "3.1.1.25"
},
{
"organism": "Homo sapiens",
"turnoverNumber": "34",
"ecNumber": "3.1.1.25"
}
],
"delta-undecanolactone": [
{
"wild-type": True,
"organism": "Sulfolobus solfataricus",
"turnoverNumber": "12.65",
"ecNumber": "3.1.1.25"
},
{
"wild-type": False,
"organism": "Sulfolobus solfataricus",
"turnoverNumber": "44.8",
"ecNumber": "3.1.1.25"
},
{
"wild-type": False,
"organism": "Sulfolobus solfataricus",
"turnoverNumber": "56.8",
"ecNumber": "3.1.1.25"
},
{
"wild-type": False,
"organism": "Sulfolobus solfataricus",
"turnoverNumber": "58",
"ecNumber": "3.1.1.25"
},
{
"wild-type": False,
"organism": "Sulfolobus solfataricus",
"turnoverNumber": "66.5",
"ecNumber": "3.1.1.25"
},
{
"wild-type": False,
"organism": "Sulfolobus solfataricus",
"turnoverNumber": "71.2",
"ecNumber": "3.1.1.25"
},
{
"wild-type": False,
"organism": "Sulfolobus solfataricus",
"turnoverNumber": "93.3",
"ecNumber": "3.1.1.25"
}
],
"gamma-nonalactone": [
{
"wild-type": True,
"organism": "Sulfolobus solfataricus",
"turnoverNumber": "5.54",
"ecNumber": "3.1.1.25"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "5.57",
"ecNumber": "3.1.1.25"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "31",
"ecNumber": "3.1.1.25"
},
{
"wild-type": True,
"organism": "Vulcanisaeta moutnovskia",
"turnoverNumber": "44.49",
"ecNumber": "3.1.1.25"
}
],
"N-(3-oxohexanoyl)-L-homoserine lactone": [],
"N-(3-oxooctanoyl)-L-homoserine lactone": [],
"3-oxo-octanoyl-L-homoserine lactone": [],
"gamma-dodecanoic acid lactone": [],
"gamma-(R)-caprolactone": [],
"4-methoxy phenyl acetate": [],
"epsilon-caprolactone": [
{
"wild-type": True,
"organism": "Sulfolobus islandicus",
"turnoverNumber": "7.27",
"ecNumber": "3.1.1.25"
},
{
"wild-type": True,
"organism": "Sulfolobus acidocaldarius",
"turnoverNumber": "15.04",
"ecNumber": "3.1.1.25"
}
],
"Gamma-caprolactone": [
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "25",
"ecNumber": "3.1.1.25"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "44",
"ecNumber": "3.1.1.25"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "44",
"ecNumber": "3.1.1.25"
},
{
"wild-type": True,
"organism": "Vulcanisaeta moutnovskia",
"turnoverNumber": "112.3",
"ecNumber": "3.1.1.25"
}
],
"gamma-butyrolactone": [
{
"wild-type": True,
"organism": "Sulfolobus islandicus",
"turnoverNumber": "5.75",
"ecNumber": "3.1.1.25"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "111",
"ecNumber": "3.1.1.25"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "111",
"ecNumber": "3.1.1.25"
}
],
"delta-valerolactone": [
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "0.5",
"ecNumber": "3.1.1.25"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "0.9",
"ecNumber": "3.1.1.25"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "29.8",
"ecNumber": "3.1.1.25"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "40",
"ecNumber": "3.1.1.25"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "69.4",
"ecNumber": "3.1.1.25"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "94",
"ecNumber": "3.1.1.25"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "156",
"ecNumber": "3.1.1.25"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "210",
"ecNumber": "3.1.1.25"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "210",
"ecNumber": "3.1.1.25"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "210",
"ecNumber": "3.1.1.25"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "632",
"ecNumber": "3.1.1.25"
}
],
"gamma-undecanoiclactone": [],
"9-oxo-N-(2-oxotetrahydrofuran-3-yl)undecanamide": [],
"N-(3-oxooctanoyl)-DL-homoserine lactone": [
{
"wild-type": False,
"organism": "Sulfolobus islandicus",
"turnoverNumber": "0.92",
"ecNumber": "3.1.1.25"
},
{
"wild-type": False,
"organism": "Sulfolobus islandicus",
"turnoverNumber": "0.97",
"ecNumber": "3.1.1.25"
},
{
"wild-type": True,
"organism": "Sulfolobus islandicus",
"turnoverNumber": "4.1",
"ecNumber": "3.1.1.25"
}
],
"N-dodecanoylhomoserine lactone": [],
"nonanoic-delta-lactone": [],
"7-oxo-N-(2-oxotetrahydrofuran-3-yl)nonanamide": [],
"dodecanoic-delta-lactone": [],
"dihydrocoumarin": [
{
"organism": "Homo sapiens",
"turnoverNumber": "152",
"ecNumber": "3.1.1.25"
}
],
"N-dodecanoyl-DL-homoserine lactone": [],
"dodecanoic-1,4-lactone": [],
"gamma-undecanoic acid lactone": [],
"delta-nonalactone": [
{
"organism": "Homo sapiens",
"turnoverNumber": "48",
"ecNumber": "3.1.1.25"
},
{
"organism": "Vulcanisaeta moutnovskia",
"turnoverNumber": "88.91",
"ecNumber": "3.1.1.25"
}
],
"undecanoic-1,4-lactone": [],
"pantoyl lactone": [],
"nonanoic-1,4-lactone": [],
"N-(3-oxohexanoyl)homoserine lactone": [],
"undecanoic-delta-lactone": [
{
"wild-type": False,
"organism": "Sulfolobus islandicus",
"turnoverNumber": "12.9",
"ecNumber": "3.1.1.25"
},
{
"wild-type": False,
"organism": "Sulfolobus islandicus",
"turnoverNumber": "14.1",
"ecNumber": "3.1.1.25"
},
{
"wild-type": True,
"organism": "Sulfolobus islandicus",
"turnoverNumber": "17.65",
"ecNumber": "3.1.1.25"
}
],
"3-oxo-decanoyl-L-homoserine lactone": [],
"N-(3-oxooctanoyl)homoserine lactone": []
},
"CYSTS": {
"L-Ser": [],
"homocysteine": [
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "6.2",
"ecNumber": "4.2.1.22"
},
{
"wild-type": True,
"organism": "Saccharomyces cerevisiae",
"turnoverNumber": "7.38",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "15.5",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "32.1",
"ecNumber": "4.2.1.22"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "34",
"ecNumber": "4.2.1.22"
}
],
"L-homocysteine": [
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "0.031",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "0.04",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "0.09",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Saccharomyces cerevisiae",
"turnoverNumber": "0.85",
"ecNumber": "4.2.1.22"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "3.3",
"ecNumber": "4.2.1.22"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "4.66",
"ecNumber": "4.2.1.22"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "7.93",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "9.06",
"ecNumber": "4.2.1.22"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "12.7",
"ecNumber": "4.2.1.22"
},
{
"wild-type": True,
"organism": "Saccharomyces cerevisiae",
"turnoverNumber": "17",
"ecNumber": "4.2.1.22"
},
{
"wild-type": True,
"organism": "Saccharomyces cerevisiae",
"turnoverNumber": "21.5",
"ecNumber": "4.2.1.22"
}
],
"L-cystathionine": [
{
"wild-type": False,
"organism": "Saccharomyces cerevisiae",
"turnoverNumber": "0.083",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Saccharomyces cerevisiae",
"turnoverNumber": "0.133",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Saccharomyces cerevisiae",
"turnoverNumber": "0.418",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Saccharomyces cerevisiae",
"turnoverNumber": "0.56",
"ecNumber": "4.2.1.22"
},
{
"wild-type": True,
"organism": "Saccharomyces cerevisiae",
"turnoverNumber": "0.56",
"ecNumber": "4.2.1.22"
},
{
"wild-type": True,
"organism": "Saccharomyces cerevisiae",
"turnoverNumber": "1.03",
"ecNumber": "4.2.1.22"
},
{
"wild-type": True,
"organism": "Saccharomyces cerevisiae",
"turnoverNumber": "6.08",
"ecNumber": "4.2.1.22"
},
{
"wild-type": True,
"organism": "Saccharomyces cerevisiae",
"turnoverNumber": "6.08",
"ecNumber": "4.2.1.22"
}
],
"L-cysteine": [
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "1.95",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "3.13",
"ecNumber": "4.2.1.22"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "3.13",
"ecNumber": "4.2.1.22"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "4.39",
"ecNumber": "4.2.1.22"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "4.39",
"ecNumber": "4.2.1.22"
}
],
"L-serine": [
{
"wild-type": False,
"organism": "Saccharomyces cerevisiae",
"turnoverNumber": "0.082",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "0.15",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Saccharomyces cerevisiae",
"turnoverNumber": "0.45",
"ecNumber": "4.2.1.22"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "0.52",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Saccharomyces cerevisiae",
"turnoverNumber": "0.85",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "1.3",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "1.67",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "2.5",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "2.9",
"ecNumber": "4.2.1.22"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "3.67",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Saccharomyces cerevisiae",
"turnoverNumber": "5.3",
"ecNumber": "4.2.1.22"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "5.4",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "5.9",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "7.5",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Saccharomyces cerevisiae",
"turnoverNumber": "7.6",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Saccharomyces cerevisiae",
"turnoverNumber": "8.2",
"ecNumber": "4.2.1.22"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "10.19",
"ecNumber": "4.2.1.22"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "10.2",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Saccharomyces cerevisiae",
"turnoverNumber": "13.2",
"ecNumber": "4.2.1.22"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "14.01",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "14.6",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Saccharomyces cerevisiae",
"turnoverNumber": "14.7",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "15.8",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Saccharomyces cerevisiae",
"turnoverNumber": "16.8",
"ecNumber": "4.2.1.22"
},
{
"wild-type": True,
"organism": "Saccharomyces cerevisiae",
"turnoverNumber": "17",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "19",
"ecNumber": "4.2.1.22"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "19.7",
"ecNumber": "4.2.1.22"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "21",
"ecNumber": "4.2.1.22"
},
{
"wild-type": True,
"organism": "Saccharomyces cerevisiae",
"turnoverNumber": "21.5",
"ecNumber": "4.2.1.22"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "39",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "45",
"ecNumber": "4.2.1.22"
}
],
"more": []
},
"BTNDe": {},
"BTNDm": {},
"GTHPe": {
"cumene peroxide": [
{
"organism": "Lucilia cuprina",
"turnoverNumber": "35.78",
"ecNumber": "1.11.1.9"
}
],
"GSH": [
{
"organism": "Homo sapiens",
"turnoverNumber": "19.5",
"ecNumber": "1.11.1.9"
},
{
"organism": "Homo sapiens",
"turnoverNumber": "24.5",
"ecNumber": "1.11.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "221.7",
"ecNumber": "1.11.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "293.3",
"ecNumber": "1.11.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "361.7",
"ecNumber": "1.11.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "408.3",
"ecNumber": "1.11.1.9"
}
],
"tert-butyl hydroperoxide": [],
"H2O2": [
{
"organism": "Homo sapiens",
"turnoverNumber": "10.55",
"ecNumber": "1.11.1.9"
},
{
"organism": "Homo sapiens",
"turnoverNumber": "16.02",
"ecNumber": "1.11.1.9"
},
{
"organism": "Homo sapiens",
"turnoverNumber": "20.83",
"ecNumber": "1.11.1.9"
},
{
"organism": "Lucilia cuprina",
"turnoverNumber": "44.03",
"ecNumber": "1.11.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "316.7",
"ecNumber": "1.11.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "445",
"ecNumber": "1.11.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "560",
"ecNumber": "1.11.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "670",
"ecNumber": "1.11.1.9"
}
]
},
"GTHPm": {
"cumene peroxide": [
{
"organism": "Lucilia cuprina",
"turnoverNumber": "35.78",
"ecNumber": "1.11.1.9"
}
],
"GSH": [
{
"organism": "Homo sapiens",
"turnoverNumber": "19.5",
"ecNumber": "1.11.1.9"
},
{
"organism": "Homo sapiens",
"turnoverNumber": "24.5",
"ecNumber": "1.11.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "221.7",
"ecNumber": "1.11.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "293.3",
"ecNumber": "1.11.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "361.7",
"ecNumber": "1.11.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "408.3",
"ecNumber": "1.11.1.9"
}
],
"tert-butyl hydroperoxide": [],
"H2O2": [
{
"organism": "Homo sapiens",
"turnoverNumber": "10.55",
"ecNumber": "1.11.1.9"
},
{
"organism": "Homo sapiens",
"turnoverNumber": "16.02",
"ecNumber": "1.11.1.9"
},
{
"organism": "Homo sapiens",
"turnoverNumber": "20.83",
"ecNumber": "1.11.1.9"
},
{
"organism": "Lucilia cuprina",
"turnoverNumber": "44.03",
"ecNumber": "1.11.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "316.7",
"ecNumber": "1.11.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "445",
"ecNumber": "1.11.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "560",
"ecNumber": "1.11.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "670",
"ecNumber": "1.11.1.9"
}
]
},
"FA120ACPHi": {},
"RE1845C": {
"more": [
{
"organism": "Bos taurus",
"turnoverNumber": "-999",
"ecNumber": "2.3.1.65"
}
]
},
"TRDRm": {
"GSSG": [],
"methaneseleninic acid": [],
"NADH": [
{
"organism": "Entamoeba histolytica",
"turnoverNumber": "0.2",
"ecNumber": "1.8.1.9"
},
{
"organism": "Methanosarcina acetivorans",
"turnoverNumber": "0.817",
"ecNumber": "1.8.1.9"
}
],
"alloxan": [],
"Hordeum vulgare thioredoxin disulfide h2": [
{
"organism": "Hordeum vulgare",
"turnoverNumber": "0.8",
"ecNumber": "1.8.1.9"
},
{
"organism": "Hordeum vulgare",
"turnoverNumber": "1.31",
"ecNumber": "1.8.1.9"
},
{
"organism": "Hordeum vulgare",
"turnoverNumber": "2.98",
"ecNumber": "1.8.1.9"
}
],
"protein disulfide-isomerase": [],
"Hordeum vulgare thioredoxin disulfide h1": [
{
"organism": "Hordeum vulgare",
"turnoverNumber": "3.26",
"ecNumber": "1.8.1.9"
}
],
"hydrogen peroxide": [
{
"organism": "Mus musculus",
"turnoverNumber": "6.08",
"ecNumber": "1.8.1.9"
}
],
"thioredoxin-CAC": [],
"thioredoxin P34S": [],
"thioredoxin disulfide 41": [],
"thioredoxin": [
{
"wild-type": False,
"organism": "Mus musculus",
"turnoverNumber": "0.02",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Caenorhabditis elegans",
"turnoverNumber": "0.052",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Rattus norvegicus",
"turnoverNumber": "0.243",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Solanum lycopersicum",
"turnoverNumber": "0.38",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Entamoeba histolytica",
"turnoverNumber": "0.5",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Entamoeba histolytica",
"turnoverNumber": "1.25",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Sulfolobus solfataricus",
"turnoverNumber": "1.3",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Escherichia coli",
"turnoverNumber": "2.38",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Mus musculus",
"turnoverNumber": "3.5",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Caenorhabditis elegans",
"turnoverNumber": "4.03",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Caenorhabditis elegans",
"turnoverNumber": "5.3",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "5.58",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "8.1",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Caenorhabditis elegans",
"turnoverNumber": "10.17",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Escherichia coli",
"turnoverNumber": "13.2",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Anopheles gambiae",
"turnoverNumber": "14.3",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Anopheles gambiae",
"turnoverNumber": "15.4",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Anopheles gambiae",
"turnoverNumber": "15.7",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Mus musculus",
"turnoverNumber": "19.97",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Escherichia coli",
"turnoverNumber": "22",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Escherichia coli",
"turnoverNumber": "22.8",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Mus musculus",
"turnoverNumber": "25",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "25.78",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "27.4",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Mus musculus",
"turnoverNumber": "29.5",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Mus musculus",
"turnoverNumber": "37",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Mus musculus",
"turnoverNumber": "37.88",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Rattus norvegicus",
"turnoverNumber": "41.7",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "46.57",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Rattus norvegicus",
"turnoverNumber": "50",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Aeropyrum pernix",
"turnoverNumber": "63.2",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Bos taurus",
"turnoverNumber": "1030",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Bos taurus",
"turnoverNumber": "1200",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Bos taurus",
"turnoverNumber": "1300",
"ecNumber": "1.8.1.9"
}
],
"thioredoxin disulfide 8": [],
"thioredoxin-R": [],
"methylseleninate": [
{
"organism": "Mus musculus",
"turnoverNumber": "14",
"ecNumber": "1.8.1.9"
},
{
"organism": "Homo sapiens",
"turnoverNumber": "23",
"ecNumber": "1.8.1.9"
},
{
"organism": "Plasmodium falciparum",
"turnoverNumber": "31",
"ecNumber": "1.8.1.9"
}
],
"thioredoxin disulfide": [
{
"organism": "Mus musculus",
"turnoverNumber": "0.13",
"ecNumber": "1.8.1.9"
},
{
"organism": "Methanosarcina acetivorans",
"turnoverNumber": "1.175",
"ecNumber": "1.8.1.9"
},
{
"organism": "Drosophila melanogaster",
"turnoverNumber": "4.99",
"ecNumber": "1.8.1.9"
},
{
"organism": "Drosophila melanogaster",
"turnoverNumber": "5.8",
"ecNumber": "1.8.1.9"
},
{
"organism": "Taenia crassiceps",
"turnoverNumber": "19.2",
"ecNumber": "1.8.1.9"
},
{
"organism": "Schistosoma mansoni",
"turnoverNumber": "30",
"ecNumber": "1.8.1.9"
},
{
"organism": "Mus musculus",
"turnoverNumber": "37",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Rattus norvegicus",
"turnoverNumber": "40.98",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Rattus norvegicus",
"turnoverNumber": "44.85",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Rattus norvegicus",
"turnoverNumber": "47",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Rattus norvegicus",
"turnoverNumber": "94.17",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Rattus norvegicus",
"turnoverNumber": "114.1",
"ecNumber": "1.8.1.9"
}
],
"FAD": [],
"more": [
{
"organism": "Escherichia coli",
"turnoverNumber": "-999",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Escherichia coli",
"turnoverNumber": "-999",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Escherichia coli",
"turnoverNumber": "-999",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Drosophila melanogaster",
"turnoverNumber": "-999",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Mus musculus",
"turnoverNumber": "-999",
"ecNumber": "1.8.1.9"
}
],
"Lipoamide": [
{
"wild-type": True,
"organism": "Rattus norvegicus",
"turnoverNumber": "2",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "3.3",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Rattus norvegicus",
"turnoverNumber": "27.6",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Rattus norvegicus",
"turnoverNumber": "31.2",
"ecNumber": "1.8.1.9"
}
],
"thioredoxin 41": [
{
"organism": "Entamoeba histolytica",
"turnoverNumber": "2.2",
"ecNumber": "1.8.1.9"
}
],
"selenocysteine": [],
"NADPH": [
{
"organism": "Solanum lycopersicum",
"turnoverNumber": "0.35",
"ecNumber": "1.8.1.9"
},
{
"organism": "Sulfolobus solfataricus",
"turnoverNumber": "0.61",
"ecNumber": "1.8.1.9"
},
{
"organism": "Methanosarcina acetivorans",
"turnoverNumber": "0.65",
"ecNumber": "1.8.1.9"
},
{
"organism": "Saccharomyces cerevisiae",
"turnoverNumber": "33.3",
"ecNumber": "1.8.1.9"
}
],
"DTNB": [
{
"wild-type": False,
"organism": "Plasmodium falciparum",
"turnoverNumber": "0.233",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Plasmodium falciparum",
"turnoverNumber": "4.58",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "29.5",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Escherichia coli",
"turnoverNumber": "50.3",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Rattus norvegicus",
"turnoverNumber": "66.7",
"ecNumber": "1.8.1.9"
}
],
"lipoic acid": [],
"thioredoxin 1": [],
"thioredoxin 2": [
{
"wild-type": False,
"organism": "Saccharomyces cerevisiae",
"turnoverNumber": "47.1",
"ecNumber": "1.8.1.9"
}
],
"thioredoxin 3": [],
"glutaredoxin 4": [],
"thioredoxin 8": [
{
"organism": "Entamoeba histolytica",
"turnoverNumber": "2.7",
"ecNumber": "1.8.1.9"
}
],
"rat thioredoxin": [],
"5,5'-dithiobis(2-nitrobenzoic acid)": [
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "0.018",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "0.075",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Entamoeba histolytica",
"turnoverNumber": "0.23",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Entamoeba histolytica",
"turnoverNumber": "0.25",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "0.52",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "0.55",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Medicago truncatula",
"turnoverNumber": "0.62",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Schistosoma mansoni",
"turnoverNumber": "1.2",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Drosophila melanogaster",
"turnoverNumber": "1.6",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Solanum lycopersicum",
"turnoverNumber": "1.77",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Caenorhabditis elegans",
"turnoverNumber": "2.23",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Caenorhabditis elegans",
"turnoverNumber": "2.23",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Drosophila melanogaster",
"turnoverNumber": "2.4",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Caenorhabditis elegans",
"turnoverNumber": "2.53",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Drosophila melanogaster",
"turnoverNumber": "2.62",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Drosophila melanogaster",
"turnoverNumber": "2.62",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Anopheles gambiae",
"turnoverNumber": "5.5",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "8.28",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Aeropyrum pernix",
"turnoverNumber": "9",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Mus musculus",
"turnoverNumber": "15.6",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Schistosoma mansoni",
"turnoverNumber": "16",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "18.73",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Mus musculus",
"turnoverNumber": "20.83",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Mus musculus",
"turnoverNumber": "20.85",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Drosophila melanogaster",
"turnoverNumber": "21.6",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "30.02",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Rattus norvegicus",
"turnoverNumber": "33.08",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "33.33",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Rattus norvegicus",
"turnoverNumber": "47.72",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Mus musculus",
"turnoverNumber": "48.42",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Rattus norvegicus",
"turnoverNumber": "49.87",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Rattus norvegicus",
"turnoverNumber": "70.3",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Rattus norvegicus",
"turnoverNumber": "106.3",
"ecNumber": "1.8.1.9"
}
],
"thioredoxin K36E": [],
"5-hydroxy-1,4-naphthoquinone": [
{
"wild-type": False,
"organism": "Rattus norvegicus",
"turnoverNumber": "52.75",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Rattus norvegicus",
"turnoverNumber": "174.3",
"ecNumber": "1.8.1.9"
}
]
},
"MCD": {
"malonyl-CoA": [
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "13.3",
"ecNumber": "4.1.1.9"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "47.3",
"ecNumber": "4.1.1.9"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "94.6",
"ecNumber": "4.1.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "109.2",
"ecNumber": "4.1.1.9"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "114.2",
"ecNumber": "4.1.1.9"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "117.1",
"ecNumber": "4.1.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "128.3",
"ecNumber": "4.1.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "135",
"ecNumber": "4.1.1.9"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "137.5",
"ecNumber": "4.1.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "141.2",
"ecNumber": "4.1.1.9"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "162.5",
"ecNumber": "4.1.1.9"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "167.1",
"ecNumber": "4.1.1.9"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "175.4",
"ecNumber": "4.1.1.9"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "208.3",
"ecNumber": "4.1.1.9"
}
],
"N-hydroxy-L-ornithine": []
}
}
initial_brenda_reaction = {
"GLNLASEer": {
"N-octanoyl-DL-homoserine lactone": [],
"5-butyl-4-methyldihydro-2(3H)-furanone": [],
"gamma-undecanolactone": [
{
"wild-type": False,
"organism": "Sulfolobus solfataricus",
"turnoverNumber": "3.92",
"ecNumber": "3.1.1.25"
},
{
"wild-type": False,
"organism": "Sulfolobus solfataricus",
"turnoverNumber": "4.25",
"ecNumber": "3.1.1.25"
},
{
"wild-type": False,
"organism": "Sulfolobus solfataricus",
"turnoverNumber": "4.55",
"ecNumber": "3.1.1.25"
},
{
"wild-type": False,
"organism": "Sulfolobus solfataricus",
"turnoverNumber": "4.63",
"ecNumber": "3.1.1.25"
},
{
"wild-type": True,
"organism": "Sulfolobus solfataricus",
"turnoverNumber": "4.95",
"ecNumber": "3.1.1.25"
},
{
"wild-type": False,
"organism": "Sulfolobus solfataricus",
"turnoverNumber": "5.64",
"ecNumber": "3.1.1.25"
}
],
"gamma-dodecanolactone": [],
"N-(3-oxododecanoyl)-L-homoserine lactone": [
{
"wild-type": True,
"organism": "Sulfolobus solfataricus",
"turnoverNumber": "1.01",
"ecNumber": "3.1.1.25"
},
{
"wild-type": False,
"organism": "Sulfolobus solfataricus",
"turnoverNumber": "1.8",
"ecNumber": "3.1.1.25"
},
{
"wild-type": False,
"organism": "Sulfolobus solfataricus",
"turnoverNumber": "3",
"ecNumber": "3.1.1.25"
},
{
"wild-type": False,
"organism": "Sulfolobus solfataricus",
"turnoverNumber": "6.44",
"ecNumber": "3.1.1.25"
}
],
"nonanoic-1,5-lactone": [],
"gamma-dodecalactone": [],
"N-(3-oxodecanoyl)-L-homoserine lactone": [
{
"wild-type": False,
"organism": "Sulfolobus solfataricus",
"turnoverNumber": "0.19",
"ecNumber": "3.1.1.25"
},
{
"wild-type": False,
"organism": "Sulfolobus solfataricus",
"turnoverNumber": "0.6",
"ecNumber": "3.1.1.25"
},
{
"wild-type": False,
"organism": "Sulfolobus solfataricus",
"turnoverNumber": "3.96",
"ecNumber": "3.1.1.25"
},
{
"wild-type": True,
"organism": "Sulfolobus solfataricus",
"turnoverNumber": "4.52",
"ecNumber": "3.1.1.25"
}
],
"gamma-dodecanoic lactone": [
{
"organism": "Homo sapiens",
"turnoverNumber": "101",
"ecNumber": "3.1.1.25"
}
],
"gamma-heptalactone": [],
"undecanoic-gamma-lactone": [],
"N-(2-oxotetrahydrofuran-3-yl)pentanamide": [],
"N-octanoylhomoserine lactone": [],
"nonanoic-gamma-lactone": [
{
"wild-type": False,
"organism": "Sulfolobus islandicus",
"turnoverNumber": "2",
"ecNumber": "3.1.1.25"
},
{
"wild-type": True,
"organism": "Sulfolobus islandicus",
"turnoverNumber": "3.1",
"ecNumber": "3.1.1.25"
}
],
"5-(thiobutyl)butyrolactone": [
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "7.5",
"ecNumber": "3.1.1.25"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "19.4",
"ecNumber": "3.1.1.25"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "116",
"ecNumber": "3.1.1.25"
}
],
"N-hexanoylhomoserine lactone": [],
"N-(3-oxodecanoyl)-DL-homoserine lactone": [],
"delta-undecalactone": [],
"delta-dodecalactone": [],
"gamma-(S)-valerolactone": [],
"gamma-undecalactone": [],
"gamma-(R)-valerolactone": [],
"octanoyl-L-homoserine lactone": [],
"N-(3-oxododecanoyl)-DL-homoserine lactone": [],
"gamma-(S)-caprolactone": [],
"dodecanoic-1,5-lactone": [],
"gamma-nonanoic acid lactone": [],
"gamma-heptanolactone": [],
"Paraoxon": [
{
"wild-type": False,
"organism": "Sulfolobus solfataricus",
"turnoverNumber": "8.47",
"ecNumber": "3.1.1.25"
},
{
"wild-type": True,
"organism": "Sulfolobus solfataricus",
"turnoverNumber": "12.6",
"ecNumber": "3.1.1.25"
}
],
"dodecanoic-gamma-lactone": [],
"undecanoic-1,5-lactone": [],
"gamma-heptanolide": [
{
"organism": "Sulfolobus acidocaldarius",
"turnoverNumber": "10.25",
"ecNumber": "3.1.1.25"
},
{
"organism": "Homo sapiens",
"turnoverNumber": "34",
"ecNumber": "3.1.1.25"
}
],
"delta-undecanolactone": [
{
"wild-type": True,
"organism": "Sulfolobus solfataricus",
"turnoverNumber": "12.65",
"ecNumber": "3.1.1.25"
},
{
"wild-type": False,
"organism": "Sulfolobus solfataricus",
"turnoverNumber": "44.8",
"ecNumber": "3.1.1.25"
},
{
"wild-type": False,
"organism": "Sulfolobus solfataricus",
"turnoverNumber": "56.8",
"ecNumber": "3.1.1.25"
},
{
"wild-type": False,
"organism": "Sulfolobus solfataricus",
"turnoverNumber": "58",
"ecNumber": "3.1.1.25"
},
{
"wild-type": False,
"organism": "Sulfolobus solfataricus",
"turnoverNumber": "66.5",
"ecNumber": "3.1.1.25"
},
{
"wild-type": False,
"organism": "Sulfolobus solfataricus",
"turnoverNumber": "71.2",
"ecNumber": "3.1.1.25"
},
{
"wild-type": False,
"organism": "Sulfolobus solfataricus",
"turnoverNumber": "93.3",
"ecNumber": "3.1.1.25"
}
],
"gamma-nonalactone": [
{
"wild-type": True,
"organism": "Sulfolobus solfataricus",
"turnoverNumber": "5.54",
"ecNumber": "3.1.1.25"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "5.57",
"ecNumber": "3.1.1.25"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "31",
"ecNumber": "3.1.1.25"
},
{
"wild-type": True,
"organism": "Vulcanisaeta moutnovskia",
"turnoverNumber": "44.49",
"ecNumber": "3.1.1.25"
}
],
"N-(3-oxohexanoyl)-L-homoserine lactone": [],
"N-(3-oxooctanoyl)-L-homoserine lactone": [],
"3-oxo-octanoyl-L-homoserine lactone": [],
"gamma-dodecanoic acid lactone": [],
"gamma-(R)-caprolactone": [],
"4-methoxy phenyl acetate": [],
"epsilon-caprolactone": [
{
"wild-type": True,
"organism": "Sulfolobus islandicus",
"turnoverNumber": "7.27",
"ecNumber": "3.1.1.25"
},
{
"wild-type": True,
"organism": "Sulfolobus acidocaldarius",
"turnoverNumber": "15.04",
"ecNumber": "3.1.1.25"
}
],
"Gamma-caprolactone": [
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "25",
"ecNumber": "3.1.1.25"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "44",
"ecNumber": "3.1.1.25"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "44",
"ecNumber": "3.1.1.25"
},
{
"wild-type": True,
"organism": "Vulcanisaeta moutnovskia",
"turnoverNumber": "112.3",
"ecNumber": "3.1.1.25"
}
],
"gamma-butyrolactone": [
{
"wild-type": True,
"organism": "Sulfolobus islandicus",
"turnoverNumber": "5.75",
"ecNumber": "3.1.1.25"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "111",
"ecNumber": "3.1.1.25"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "111",
"ecNumber": "3.1.1.25"
}
],
"delta-valerolactone": [
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "0.5",
"ecNumber": "3.1.1.25"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "0.9",
"ecNumber": "3.1.1.25"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "29.8",
"ecNumber": "3.1.1.25"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "40",
"ecNumber": "3.1.1.25"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "69.4",
"ecNumber": "3.1.1.25"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "94",
"ecNumber": "3.1.1.25"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "156",
"ecNumber": "3.1.1.25"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "210",
"ecNumber": "3.1.1.25"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "210",
"ecNumber": "3.1.1.25"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "210",
"ecNumber": "3.1.1.25"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "632",
"ecNumber": "3.1.1.25"
}
],
"gamma-undecanoiclactone": [],
"9-oxo-N-(2-oxotetrahydrofuran-3-yl)undecanamide": [],
"N-(3-oxooctanoyl)-DL-homoserine lactone": [
{
"wild-type": False,
"organism": "Sulfolobus islandicus",
"turnoverNumber": "0.92",
"ecNumber": "3.1.1.25"
},
{
"wild-type": False,
"organism": "Sulfolobus islandicus",
"turnoverNumber": "0.97",
"ecNumber": "3.1.1.25"
},
{
"wild-type": True,
"organism": "Sulfolobus islandicus",
"turnoverNumber": "4.1",
"ecNumber": "3.1.1.25"
}
],
"N-dodecanoylhomoserine lactone": [],
"nonanoic-delta-lactone": [],
"7-oxo-N-(2-oxotetrahydrofuran-3-yl)nonanamide": [],
"dodecanoic-delta-lactone": [],
"dihydrocoumarin": [
{
"organism": "Homo sapiens",
"turnoverNumber": "152",
"ecNumber": "3.1.1.25"
}
],
"N-dodecanoyl-DL-homoserine lactone": [],
"dodecanoic-1,4-lactone": [],
"gamma-undecanoic acid lactone": [],
"delta-nonalactone": [
{
"organism": "Homo sapiens",
"turnoverNumber": "48",
"ecNumber": "3.1.1.25"
},
{
"organism": "Vulcanisaeta moutnovskia",
"turnoverNumber": "88.91",
"ecNumber": "3.1.1.25"
}
],
"undecanoic-1,4-lactone": [],
"pantoyl lactone": [],
"nonanoic-1,4-lactone": [],
"N-(3-oxohexanoyl)homoserine lactone": [],
"undecanoic-delta-lactone": [
{
"wild-type": False,
"organism": "Sulfolobus islandicus",
"turnoverNumber": "12.9",
"ecNumber": "3.1.1.25"
},
{
"wild-type": False,
"organism": "Sulfolobus islandicus",
"turnoverNumber": "14.1",
"ecNumber": "3.1.1.25"
},
{
"wild-type": True,
"organism": "Sulfolobus islandicus",
"turnoverNumber": "17.65",
"ecNumber": "3.1.1.25"
}
],
"3-oxo-decanoyl-L-homoserine lactone": [],
"N-(3-oxooctanoyl)homoserine lactone": []
},
"CYSTS": {
"L-Ser": [],
"homocysteine": [
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "6.2",
"ecNumber": "4.2.1.22"
},
{
"wild-type": True,
"organism": "Saccharomyces cerevisiae",
"turnoverNumber": "7.38",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "15.5",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "32.1",
"ecNumber": "4.2.1.22"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "34",
"ecNumber": "4.2.1.22"
}
],
"L-homocysteine": [
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "0.031",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "0.04",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "0.09",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Saccharomyces cerevisiae",
"turnoverNumber": "0.85",
"ecNumber": "4.2.1.22"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "3.3",
"ecNumber": "4.2.1.22"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "4.66",
"ecNumber": "4.2.1.22"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "7.93",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "9.06",
"ecNumber": "4.2.1.22"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "12.7",
"ecNumber": "4.2.1.22"
},
{
"wild-type": True,
"organism": "Saccharomyces cerevisiae",
"turnoverNumber": "17",
"ecNumber": "4.2.1.22"
},
{
"wild-type": True,
"organism": "Saccharomyces cerevisiae",
"turnoverNumber": "21.5",
"ecNumber": "4.2.1.22"
}
],
"L-cystathionine": [
{
"wild-type": False,
"organism": "Saccharomyces cerevisiae",
"turnoverNumber": "0.083",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Saccharomyces cerevisiae",
"turnoverNumber": "0.133",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Saccharomyces cerevisiae",
"turnoverNumber": "0.418",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Saccharomyces cerevisiae",
"turnoverNumber": "0.56",
"ecNumber": "4.2.1.22"
},
{
"wild-type": True,
"organism": "Saccharomyces cerevisiae",
"turnoverNumber": "0.56",
"ecNumber": "4.2.1.22"
},
{
"wild-type": True,
"organism": "Saccharomyces cerevisiae",
"turnoverNumber": "1.03",
"ecNumber": "4.2.1.22"
},
{
"wild-type": True,
"organism": "Saccharomyces cerevisiae",
"turnoverNumber": "6.08",
"ecNumber": "4.2.1.22"
},
{
"wild-type": True,
"organism": "Saccharomyces cerevisiae",
"turnoverNumber": "6.08",
"ecNumber": "4.2.1.22"
}
],
"L-cysteine": [
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "1.95",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "3.13",
"ecNumber": "4.2.1.22"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "3.13",
"ecNumber": "4.2.1.22"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "4.39",
"ecNumber": "4.2.1.22"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "4.39",
"ecNumber": "4.2.1.22"
}
],
"L-serine": [
{
"wild-type": False,
"organism": "Saccharomyces cerevisiae",
"turnoverNumber": "0.082",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "0.15",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Saccharomyces cerevisiae",
"turnoverNumber": "0.45",
"ecNumber": "4.2.1.22"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "0.52",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Saccharomyces cerevisiae",
"turnoverNumber": "0.85",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "1.3",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "1.67",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "2.5",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "2.9",
"ecNumber": "4.2.1.22"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "3.67",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Saccharomyces cerevisiae",
"turnoverNumber": "5.3",
"ecNumber": "4.2.1.22"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "5.4",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "5.9",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "7.5",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Saccharomyces cerevisiae",
"turnoverNumber": "7.6",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Saccharomyces cerevisiae",
"turnoverNumber": "8.2",
"ecNumber": "4.2.1.22"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "10.19",
"ecNumber": "4.2.1.22"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "10.2",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Saccharomyces cerevisiae",
"turnoverNumber": "13.2",
"ecNumber": "4.2.1.22"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "14.01",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "14.6",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Saccharomyces cerevisiae",
"turnoverNumber": "14.7",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "15.8",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Saccharomyces cerevisiae",
"turnoverNumber": "16.8",
"ecNumber": "4.2.1.22"
},
{
"wild-type": True,
"organism": "Saccharomyces cerevisiae",
"turnoverNumber": "17",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "19",
"ecNumber": "4.2.1.22"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "19.7",
"ecNumber": "4.2.1.22"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "21",
"ecNumber": "4.2.1.22"
},
{
"wild-type": True,
"organism": "Saccharomyces cerevisiae",
"turnoverNumber": "21.5",
"ecNumber": "4.2.1.22"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "39",
"ecNumber": "4.2.1.22"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "45",
"ecNumber": "4.2.1.22"
}
],
"more": []
},
"BTNDe": {},
"BTNDm": {},
"GTHPe": {
"cumene peroxide": [
{
"organism": "Lucilia cuprina",
"turnoverNumber": "35.78",
"ecNumber": "1.11.1.9"
}
],
"GSH": [
{
"organism": "Homo sapiens",
"turnoverNumber": "19.5",
"ecNumber": "1.11.1.9"
},
{
"organism": "Homo sapiens",
"turnoverNumber": "24.5",
"ecNumber": "1.11.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "221.7",
"ecNumber": "1.11.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "293.3",
"ecNumber": "1.11.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "361.7",
"ecNumber": "1.11.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "408.3",
"ecNumber": "1.11.1.9"
}
],
"tert-butyl hydroperoxide": [],
"H2O2": [
{
"organism": "Homo sapiens",
"turnoverNumber": "10.55",
"ecNumber": "1.11.1.9"
},
{
"organism": "Homo sapiens",
"turnoverNumber": "16.02",
"ecNumber": "1.11.1.9"
},
{
"organism": "Homo sapiens",
"turnoverNumber": "20.83",
"ecNumber": "1.11.1.9"
},
{
"organism": "Lucilia cuprina",
"turnoverNumber": "44.03",
"ecNumber": "1.11.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "316.7",
"ecNumber": "1.11.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "445",
"ecNumber": "1.11.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "560",
"ecNumber": "1.11.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "670",
"ecNumber": "1.11.1.9"
}
]
},
"GTHPm": {
"cumene peroxide": [
{
"organism": "Lucilia cuprina",
"turnoverNumber": "35.78",
"ecNumber": "1.11.1.9"
}
],
"GSH": [
{
"organism": "Homo sapiens",
"turnoverNumber": "19.5",
"ecNumber": "1.11.1.9"
},
{
"organism": "Homo sapiens",
"turnoverNumber": "24.5",
"ecNumber": "1.11.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "221.7",
"ecNumber": "1.11.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "293.3",
"ecNumber": "1.11.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "361.7",
"ecNumber": "1.11.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "408.3",
"ecNumber": "1.11.1.9"
}
],
"tert-butyl hydroperoxide": [],
"H2O2": [
{
"organism": "Homo sapiens",
"turnoverNumber": "10.55",
"ecNumber": "1.11.1.9"
},
{
"organism": "Homo sapiens",
"turnoverNumber": "16.02",
"ecNumber": "1.11.1.9"
},
{
"organism": "Homo sapiens",
"turnoverNumber": "20.83",
"ecNumber": "1.11.1.9"
},
{
"organism": "Lucilia cuprina",
"turnoverNumber": "44.03",
"ecNumber": "1.11.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "316.7",
"ecNumber": "1.11.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "445",
"ecNumber": "1.11.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "560",
"ecNumber": "1.11.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "670",
"ecNumber": "1.11.1.9"
}
]
},
"FA120ACPHi": {},
"RE1845C": {
"more": [
{
"organism": "Bos taurus",
"turnoverNumber": "-999",
"ecNumber": "2.3.1.65"
}
]
},
"TRDRm": {
"GSSG": [],
"methaneseleninic acid": [],
"NADH": [
{
"organism": "Entamoeba histolytica",
"turnoverNumber": "0.2",
"ecNumber": "1.8.1.9"
},
{
"organism": "Methanosarcina acetivorans",
"turnoverNumber": "0.817",
"ecNumber": "1.8.1.9"
}
],
"alloxan": [],
"Hordeum vulgare thioredoxin disulfide h2": [
{
"organism": "Hordeum vulgare",
"turnoverNumber": "0.8",
"ecNumber": "1.8.1.9"
},
{
"organism": "Hordeum vulgare",
"turnoverNumber": "1.31",
"ecNumber": "1.8.1.9"
},
{
"organism": "Hordeum vulgare",
"turnoverNumber": "2.98",
"ecNumber": "1.8.1.9"
}
],
"protein disulfide-isomerase": [],
"Hordeum vulgare thioredoxin disulfide h1": [
{
"organism": "Hordeum vulgare",
"turnoverNumber": "3.26",
"ecNumber": "1.8.1.9"
}
],
"hydrogen peroxide": [
{
"organism": "Mus musculus",
"turnoverNumber": "6.08",
"ecNumber": "1.8.1.9"
}
],
"thioredoxin-CAC": [],
"thioredoxin P34S": [],
"thioredoxin disulfide 41": [],
"thioredoxin": [
{
"wild-type": False,
"organism": "Mus musculus",
"turnoverNumber": "0.02",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Caenorhabditis elegans",
"turnoverNumber": "0.052",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Rattus norvegicus",
"turnoverNumber": "0.243",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Solanum lycopersicum",
"turnoverNumber": "0.38",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Entamoeba histolytica",
"turnoverNumber": "0.5",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Entamoeba histolytica",
"turnoverNumber": "1.25",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Sulfolobus solfataricus",
"turnoverNumber": "1.3",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Escherichia coli",
"turnoverNumber": "2.38",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Mus musculus",
"turnoverNumber": "3.5",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Caenorhabditis elegans",
"turnoverNumber": "4.03",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Caenorhabditis elegans",
"turnoverNumber": "5.3",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "5.58",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "8.1",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Caenorhabditis elegans",
"turnoverNumber": "10.17",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Escherichia coli",
"turnoverNumber": "13.2",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Anopheles gambiae",
"turnoverNumber": "14.3",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Anopheles gambiae",
"turnoverNumber": "15.4",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Anopheles gambiae",
"turnoverNumber": "15.7",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Mus musculus",
"turnoverNumber": "19.97",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Escherichia coli",
"turnoverNumber": "22",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Escherichia coli",
"turnoverNumber": "22.8",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Mus musculus",
"turnoverNumber": "25",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "25.78",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "27.4",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Mus musculus",
"turnoverNumber": "29.5",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Mus musculus",
"turnoverNumber": "37",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Mus musculus",
"turnoverNumber": "37.88",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Rattus norvegicus",
"turnoverNumber": "41.7",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "46.57",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Rattus norvegicus",
"turnoverNumber": "50",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Aeropyrum pernix",
"turnoverNumber": "63.2",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Bos taurus",
"turnoverNumber": "1030",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Bos taurus",
"turnoverNumber": "1200",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Bos taurus",
"turnoverNumber": "1300",
"ecNumber": "1.8.1.9"
}
],
"thioredoxin disulfide 8": [],
"thioredoxin-R": [],
"methylseleninate": [
{
"organism": "Mus musculus",
"turnoverNumber": "14",
"ecNumber": "1.8.1.9"
},
{
"organism": "Homo sapiens",
"turnoverNumber": "23",
"ecNumber": "1.8.1.9"
},
{
"organism": "Plasmodium falciparum",
"turnoverNumber": "31",
"ecNumber": "1.8.1.9"
}
],
"thioredoxin disulfide": [
{
"organism": "Mus musculus",
"turnoverNumber": "0.13",
"ecNumber": "1.8.1.9"
},
{
"organism": "Methanosarcina acetivorans",
"turnoverNumber": "1.175",
"ecNumber": "1.8.1.9"
},
{
"organism": "Drosophila melanogaster",
"turnoverNumber": "4.99",
"ecNumber": "1.8.1.9"
},
{
"organism": "Drosophila melanogaster",
"turnoverNumber": "5.8",
"ecNumber": "1.8.1.9"
},
{
"organism": "Taenia crassiceps",
"turnoverNumber": "19.2",
"ecNumber": "1.8.1.9"
},
{
"organism": "Schistosoma mansoni",
"turnoverNumber": "30",
"ecNumber": "1.8.1.9"
},
{
"organism": "Mus musculus",
"turnoverNumber": "37",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Rattus norvegicus",
"turnoverNumber": "40.98",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Rattus norvegicus",
"turnoverNumber": "44.85",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Rattus norvegicus",
"turnoverNumber": "47",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Rattus norvegicus",
"turnoverNumber": "94.17",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Rattus norvegicus",
"turnoverNumber": "114.1",
"ecNumber": "1.8.1.9"
}
],
"FAD": [],
"more": [
{
"organism": "Escherichia coli",
"turnoverNumber": "-999",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Escherichia coli",
"turnoverNumber": "-999",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Escherichia coli",
"turnoverNumber": "-999",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Drosophila melanogaster",
"turnoverNumber": "-999",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Mus musculus",
"turnoverNumber": "-999",
"ecNumber": "1.8.1.9"
}
],
"Lipoamide": [
{
"wild-type": True,
"organism": "Rattus norvegicus",
"turnoverNumber": "2",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "3.3",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Rattus norvegicus",
"turnoverNumber": "27.6",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Rattus norvegicus",
"turnoverNumber": "31.2",
"ecNumber": "1.8.1.9"
}
],
"thioredoxin 41": [
{
"organism": "Entamoeba histolytica",
"turnoverNumber": "2.2",
"ecNumber": "1.8.1.9"
}
],
"selenocysteine": [],
"NADPH": [
{
"organism": "Solanum lycopersicum",
"turnoverNumber": "0.35",
"ecNumber": "1.8.1.9"
},
{
"organism": "Sulfolobus solfataricus",
"turnoverNumber": "0.61",
"ecNumber": "1.8.1.9"
},
{
"organism": "Methanosarcina acetivorans",
"turnoverNumber": "0.65",
"ecNumber": "1.8.1.9"
},
{
"organism": "Saccharomyces cerevisiae",
"turnoverNumber": "33.3",
"ecNumber": "1.8.1.9"
}
],
"DTNB": [
{
"wild-type": False,
"organism": "Plasmodium falciparum",
"turnoverNumber": "0.233",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Plasmodium falciparum",
"turnoverNumber": "4.58",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "29.5",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Escherichia coli",
"turnoverNumber": "50.3",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Rattus norvegicus",
"turnoverNumber": "66.7",
"ecNumber": "1.8.1.9"
}
],
"lipoic acid": [],
"thioredoxin 1": [],
"thioredoxin 2": [
{
"wild-type": False,
"organism": "Saccharomyces cerevisiae",
"turnoverNumber": "47.1",
"ecNumber": "1.8.1.9"
}
],
"thioredoxin 3": [],
"glutaredoxin 4": [],
"thioredoxin 8": [
{
"organism": "Entamoeba histolytica",
"turnoverNumber": "2.7",
"ecNumber": "1.8.1.9"
}
],
"rat thioredoxin": [],
"5,5'-dithiobis(2-nitrobenzoic acid)": [
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "0.018",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "0.075",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Entamoeba histolytica",
"turnoverNumber": "0.23",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Entamoeba histolytica",
"turnoverNumber": "0.25",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "0.52",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "0.55",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Medicago truncatula",
"turnoverNumber": "0.62",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Schistosoma mansoni",
"turnoverNumber": "1.2",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Drosophila melanogaster",
"turnoverNumber": "1.6",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Solanum lycopersicum",
"turnoverNumber": "1.77",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Caenorhabditis elegans",
"turnoverNumber": "2.23",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Caenorhabditis elegans",
"turnoverNumber": "2.23",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Drosophila melanogaster",
"turnoverNumber": "2.4",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Caenorhabditis elegans",
"turnoverNumber": "2.53",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Drosophila melanogaster",
"turnoverNumber": "2.62",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Drosophila melanogaster",
"turnoverNumber": "2.62",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Anopheles gambiae",
"turnoverNumber": "5.5",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "8.28",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Aeropyrum pernix",
"turnoverNumber": "9",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Mus musculus",
"turnoverNumber": "15.6",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Schistosoma mansoni",
"turnoverNumber": "16",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "18.73",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Mus musculus",
"turnoverNumber": "20.83",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Mus musculus",
"turnoverNumber": "20.85",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Drosophila melanogaster",
"turnoverNumber": "21.6",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "30.02",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Rattus norvegicus",
"turnoverNumber": "33.08",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "33.33",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Rattus norvegicus",
"turnoverNumber": "47.72",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Mus musculus",
"turnoverNumber": "48.42",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Rattus norvegicus",
"turnoverNumber": "49.87",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Rattus norvegicus",
"turnoverNumber": "70.3",
"ecNumber": "1.8.1.9"
},
{
"wild-type": True,
"organism": "Rattus norvegicus",
"turnoverNumber": "106.3",
"ecNumber": "1.8.1.9"
}
],
"thioredoxin K36E": [],
"5-hydroxy-1,4-naphthoquinone": [
{
"wild-type": False,
"organism": "Rattus norvegicus",
"turnoverNumber": "52.75",
"ecNumber": "1.8.1.9"
},
{
"wild-type": False,
"organism": "Rattus norvegicus",
"turnoverNumber": "174.3",
"ecNumber": "1.8.1.9"
}
]
},
"MCD": {
"malonyl-CoA": [
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "13.3",
"ecNumber": "4.1.1.9"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "47.3",
"ecNumber": "4.1.1.9"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "94.6",
"ecNumber": "4.1.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "109.2",
"ecNumber": "4.1.1.9"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "114.2",
"ecNumber": "4.1.1.9"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "117.1",
"ecNumber": "4.1.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "128.3",
"ecNumber": "4.1.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "135",
"ecNumber": "4.1.1.9"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "137.5",
"ecNumber": "4.1.1.9"
},
{
"wild-type": True,
"organism": "Homo sapiens",
"turnoverNumber": "141.2",
"ecNumber": "4.1.1.9"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "162.5",
"ecNumber": "4.1.1.9"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "167.1",
"ecNumber": "4.1.1.9"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "175.4",
"ecNumber": "4.1.1.9"
},
{
"wild-type": False,
"organism": "Homo sapiens",
"turnoverNumber": "208.3",
"ecNumber": "4.1.1.9"
}
],
"N-hydroxy-L-ornithine": []
}}
def test_file_correction(self):
'''correctJson() should swap back KEGG codes and BRENDA names that were
stored in reversed key:value order.
'''
brenda_keggs = DataTreatment.correctJson('Unit Tests/incorrect_json.json')
with open('Unit Tests/correct_json.json') as infile:
correct_file = json.load(infile)
self.assertEqual(brenda_keggs, correct_file)
def test_load_brenda(self):
'''BRENDA parameters must be loaded correctly for program to work.
'''
treated_brenda_output = DataTreatment.openJson('Unit Tests/sample_brenda_output.json')
self.assertEqual(treated_brenda_output, SampleData.initial_input)
def test_convert_brenda_to_data_structure(self):
'''test that brenda is converted to an Enzyme and MetaboliteCandidate -
based structure'''
#Where does this happen in DataTreatment?
class FileCorrectionBadInput(unittest.TestCase):
def test_no_code(self):
'''correctJson() must have a kegg code in a key:value pair'''
self.assertRaises(DataTreatment.BadDataError,
DataTreatment.correctJson, 'Unit Tests/no_code.json')
def test_no_file(self):
'''Throw FileNotFoundError if no file'''
self.assertRaises(FileNotFoundError, DataTreatment.correctJson,
'Unit Tests/no_file_here.json')
def test_incomplete(self):
'''File must be populated and be proper JSON.'''
self.assertRaises(json.decoder.JSONDecodeError, DataTreatment.correctJson,
'Unit Tests/incomplete.json')
def test_empty(self):
'''File must be populated and be proper JSON.'''
self.assertRaises(json.decoder.JSONDecodeError, DataTreatment.correctJson,
'Unit Tests/empty.json')
class IO(unittest.TestCase):
'''Meant to test write(). openJson is already tested in TestDataPassing.py
'''
def test_write(self):
file_readout = openJson('Unit Tests/sample_brenda_output.json')
write('Unit Tests/sample_write_output.json', file_readout)
self.assertEqual(file_readout, openJson('Unit Tests/sample_write_output.json'))
if __name__ == '__main__':
unittest.main() | 31.719256 | 94 | 0.355499 | 8,470 | 117,615 | 4.930342 | 0.047226 | 0.090421 | 0.075958 | 0.122701 | 0.970546 | 0.96887 | 0.966715 | 0.964847 | 0.964847 | 0.964847 | 0 | 0.074069 | 0.493551 | 117,615 | 3,708 | 95 | 31.719256 | 0.627002 | 0.005518 | 0 | 0.679369 | 0 | 0 | 0.358666 | 0.020585 | 0 | 0 | 0 | 0 | 0.001905 | 1 | 0.002177 | false | 0 | 0.001089 | 0 | 0.004627 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
# File: iif_analysis/script_single_setting_IQR_json_generator.py
# (repo: bonilab/malariaibm-interrupted-feeding-and-recombination, license: MIT)

# import sys
# sys.path.append('../../')
import numpy as np
import pandas as pd
import json
import copy
from plot_helper import coloring_legend, df_col_replace
from constant import REPORTDAYS, HEADER_NAME, COLUMNS_TO_DROP, FIRST_ROW_AFTER_BURNIN


def single_setting_IQR_json_generator(fpath_pattern_list, outfile_dir,
                                      outfile_stgy_tag, threshold):

    def T01_IQR_reporter_oneset_mostdangtriple(dflist, pattern, threshold):
        all_100_T01s = []  # in days
        for onerun in dflist:
            combined_geno_freq = onerun.filter(
                regex=pattern, axis=1).sum(axis=1).values  # len=361
            T01_this_run = float('inf')
            for idx, val in enumerate(combined_geno_freq):
                if val > threshold:
                    T01_this_run = REPORTDAYS[idx]
                    break
            all_100_T01s.append(T01_this_run)
        assert len(all_100_T01s) == 100
        return np.quantile(all_100_T01s, [0.25, 0.5, 0.75])

    def T01_IQR_reporter_oneset_mostdangdouble(dflist_arg, drug, threshold):
        option = 1
        most_dang_double_tag = '2-2' if drug == 'DHA-PPQ' else '2-4'
        all_100_T01s = []  # in days
        dflist = copy.deepcopy(dflist_arg)
        # rename all 100 df's by `drug` and sum up columns
        for i in range(len(dflist)):
            dflist[i] = df_col_replace(dflist[i], drug, option)
            combined_geno_freq = dflist[i][most_dang_double_tag].values  # len=361
            T01_this_run = float('inf')
            for idx, val in enumerate(combined_geno_freq):
                if val > threshold:
                    T01_this_run = REPORTDAYS[idx]
                    break
            all_100_T01s.append(T01_this_run)
        assert len(all_100_T01s) == 100
        return np.quantile(all_100_T01s, [0.25, 0.5, 0.75])

    # Main driver code: read the 100 runs for each of the six settings
    # (set3, set4, set7, set8, set11, set12, in that order).
    dflists = []
    for fpath in fpath_pattern_list:
        dflists.append([
            pd.read_csv(fpath % i, index_col=False, names=HEADER_NAME,
                        sep='\t').drop(columns=COLUMNS_TO_DROP)
            for i in range(1, 101)
        ])

    # One row per resistance marker; each row collects the 25p/median/75p
    # T01 values across the six settings.
    row_specs = [
        ('row1', T01_IQR_reporter_oneset_mostdangtriple, 'TYY..Y2.'),
        ('row2', T01_IQR_reporter_oneset_mostdangtriple, 'KNF..Y2.'),
        ('row3', T01_IQR_reporter_oneset_mostdangdouble, 'DHA-PPQ'),
        ('row4', T01_IQR_reporter_oneset_mostdangdouble, 'ASAQ'),
        ('row5', T01_IQR_reporter_oneset_mostdangdouble, 'AL'),
    ]
    iqr_median, iqr_25p, iqr_75p = {}, {}, {}
    for row, reporter, arg in row_specs:
        iqr_median[row], iqr_25p[row], iqr_75p[row] = [], [], []
        for dflist in dflists:
            temp = reporter(dflist, arg, threshold)
            assert len(temp) == 3  # 25p, median and 75p values
            iqr_25p[row].append(temp[0])
            iqr_median[row].append(temp[1])
            iqr_75p[row].append(temp[2])

    # The directory-exists check happens in the main script notebook file.
    with open(outfile_dir + outfile_stgy_tag + '_median.json', 'w') as outfile:
        json.dump(iqr_median, outfile)
    with open(outfile_dir + outfile_stgy_tag + '_25p.json', 'w') as outfile:
        json.dump(iqr_25p, outfile)
    with open(outfile_dir + outfile_stgy_tag + '_75p.json', 'w') as outfile:
        json.dump(iqr_75p, outfile)
# File: clientside/resources.py (repo: matthew-rimmer/unify, license: MIT)

import requests
import asyncio
from json import loads, dumps
from mimetypes import guess_type

from .settings import get_route, api_url


def get_request_headers(token=None, content_type='application/json'):
    output = {'Content-Type': content_type, 'Connection': 'close'}
    if token is not None:
        output['Authorization'] = 'jwt {token}'.format(token=token)
    return output


def get_servable_pictures(json, picture_path):
    if 'error' not in json:
        print(json)
        if picture_path in json['data']:
            if json['data'][picture_path] == '' or json['data'][picture_path] is None:
                json['data'][picture_path] = User_Requests.get_default_image()
            elif json['data'][picture_path] == []:
                json['data'][picture_path] = [User_Requests.get_default_image()]
            elif isinstance(json['data'][picture_path], str):
                json['data'][picture_path] = get_image_url(
                    json['data']['User_ID'],
                    json['data'][picture_path]
                )
            else:
                picture_links = []
                for pic in json['data'][picture_path]:
                    picture_links.append(
                        get_image_url(
                            json['data']['User_ID'],
                            pic
                        )
                    )
                json['data'][picture_path] = picture_links
    return json


def get_list_servable_pictures(json, picture_path):
    if 'error' not in json:
        if len(json['data']) >= 1:
            for i in range(len(json['data'])):
                if picture_path in json['data'][i]:
                    if json['data'][i][picture_path] == '' or json['data'][i][picture_path] is None:
                        json['data'][i][picture_path] = User_Requests.get_default_image()
                    elif isinstance(json['data'][i][picture_path], str):
                        json['data'][i][picture_path] = get_image_url(
                            json['data'][i]['User_ID'],
                            json['data'][i][picture_path]
                        )
    return json


def check_req_success(response, picture_path=None):
    if 200 <= response.status_code <= 203:
        r = response.json()
        response.close()
        return r
    else:
        r = {'error': loads(response.text)}
        response.close()
        return r


def get_image_url(user_id, image_path):
    return api_url + get_route('images', user=user_id, image=image_path)


class User_Requests:

    @staticmethod
    def create(user_data):
        if 'tags' in user_data:
            user_data['tag_rels'] = []
            for t in user_data['tags']:
                user_data['tag_rels'].append({
                    'User_Tag': t
                })
            del user_data['tags']
        resp = requests.post(
            api_url + get_route('create_user'),
            json=user_data,
            headers=get_request_headers(),
            verify=True
        )
        return check_req_success(resp)

    @staticmethod
    def upload_image(auth_token, image_path):
        image_path = r'{}'.format(image_path)
        with open(image_path, 'rb') as image:
            print('Opened: {img}'.format(img=image_path))
            # print(guess_type(image)[0])
            resp = requests.post(
                api_url + get_route('images', assign_user=True),
                data=image,
                headers=get_request_headers(
                    token=auth_token,
                    content_type=guess_type(image_path)[0]
                ),
                verify=True
            )
            print('{s}: {r}'.format(s=resp.status_code, r=resp.reason))
            return check_req_success(resp)

    @staticmethod
    def get_default_image():
        return get_image_url('default', 'user.png')

    @staticmethod
    def login(login_data, auth_token=None):
        resp = requests.get(
            api_url + get_route('login'),
            json=login_data,
            headers=get_request_headers(token=auth_token),
            verify=True
        )
        return check_req_success(resp)

    @staticmethod
    def verify(user_id, auth_token, code):
        resp = requests.patch(
            api_url + get_route('user_verify', effected_id=user_id),
            json={'Verification_Code': code},
            headers=get_request_headers(token=auth_token),
            verify=True
        )
        return check_req_success(resp)

    @staticmethod
    def get_info(user_id, auth_token):
        resp = requests.get(
            api_url + get_route('user', effected_id=user_id),
            headers=get_request_headers(token=auth_token),
            verify=True
        )
        return get_servable_pictures(check_req_success(resp), 'pictures')

    @staticmethod
    def get_friends(user_id, auth_token):
        resp = requests.get(
            api_url + get_route('user_friends', effected_id=user_id),
            headers=get_request_headers(token=auth_token),
            verify=True
        )
        return get_list_servable_pictures(check_req_success(resp), 'Picture_Path')

    @staticmethod
    def get_feed(auth_token, offset=0, limit=15):
        resp = requests.get(
            api_url + get_route('user_feed', offset=offset, limit=limit),
            headers=get_request_headers(token=auth_token),
            verify=True
        )
        return get_list_servable_pictures(check_req_success(resp), 'Picture_Path')

    @staticmethod
    def get_matches(auth_token, offset=0, limit=15):
        resp = requests.get(
            api_url + get_route('user_matches', offset=offset, limit=limit),
            headers=get_request_headers(token=auth_token),
            verify=True
        )
        return get_list_servable_pictures(check_req_success(resp), 'Picture_Path')

    @staticmethod
    def edit(user_id, auth_token, user_edits):
        resp = requests.patch(
            api_url + get_route('user', effected_id=user_id),
            json=user_edits,
            headers=get_request_headers(token=auth_token),
            verify=True
        )
        return check_req_success(resp)

    @staticmethod
    def get_change_password_code(auth_token):
        resp = requests.get(
            api_url + get_route('user_change_password'),
            headers=get_request_headers(token=auth_token),
            verify=True
        )
        return check_req_success(resp)

    @staticmethod
    def check_change_password_code(auth_token, code):
        resp = requests.patch(
            api_url + get_route('user_change_password'),
            json={'Password_Code': code},
            headers=get_request_headers(token=auth_token),
            verify=True
        )
        return check_req_success(resp)

    @staticmethod
    def change_password(auth_token, password):
        resp = requests.post(
            api_url + get_route('user_change_password'),
            json={'Password': password},
            headers=get_request_headers(token=auth_token),
            verify=True
        )
        return check_req_success(resp)

    @staticmethod
    def delete(user_id, auth_token):
        resp = requests.delete(
            api_url + get_route('user', effected_id=user_id),
            headers=get_request_headers(token=auth_token),
            verify=True
        )
        return check_req_success(resp)

    @staticmethod
    def add_tags(user_id, auth_token, tags):
        resp = requests.post(
            api_url + get_route('user_tags', effected_id=user_id),
            json={'User_Tags': tags},
            headers=get_request_headers(token=auth_token),
            verify=True
        )
        return check_req_success(resp)

    @staticmethod
    def delete_tags(user_id, auth_token, tags):
        resp = requests.delete(
            api_url + get_route('user_tags', effected_id=user_id),
            json={'User_Tags': tags},
            headers=get_request_headers(token=auth_token),
            verify=True
        )
        return check_req_success(resp)

    @staticmethod
    def get_friend_requests(user_id, auth_token):
        resp = requests.get(
            api_url + get_route('user_friend_requests', effected_id=user_id),
            headers=get_request_headers(token=auth_token),
            verify=True
        )
        return get_list_servable_pictures(check_req_success(resp), 'Picture_Path')

    @staticmethod
    def send_friend_request(user_id, auth_token):
        resp = requests.post(
            api_url + get_route('user_friend_requests', effected_id=user_id),
            json={},
            headers=get_request_headers(token=auth_token),
            verify=True
        )
        return check_req_success(resp)

    @staticmethod
    def delete_friend_request(user_id, auth_token):
        resp = requests.delete(
            api_url + get_route('user_friend_requests', effected_id=user_id),
            headers=get_request_headers(token=auth_token),
            verify=True
        )
        return check_req_success(resp)

    @staticmethod
    def accept_friend_request(user_id, auth_token):
        resp = requests.patch(
            api_url + get_route('user_friend_requests', effected_id=user_id),
            json={},
            headers=get_request_headers(token=auth_token),
            verify=True
        )
        return check_req_success(resp)

    @staticmethod
    def delete_friendship(user_id, auth_token):
        resp = requests.delete(
            api_url + get_route('user_friends', effected_id=user_id),
            headers=get_request_headers(token=auth_token),
            verify=True
        )
        return check_req_success(resp)


class Event_Requests:

    @staticmethod
    def create(auth_token, event_data):
        resp = requests.post(
            api_url + get_route('create_event'),
            json=event_data,
            headers=get_request_headers(token=auth_token),
            verify=True
        )
        return check_req_success(resp)

    @staticmethod
    def get(event_id, auth_token):
        resp = requests.get(
            api_url + get_route('event', effected_id=event_id),
            headers=get_request_headers(token=auth_token),
            verify=True
        )
        return get_servable_pictures(check_req_success(resp), 'Picture_Path')

    @staticmethod
    def edit(event_id, auth_token, event_data):
        resp = requests.patch(
            api_url + get_route('event', effected_id=event_id),
            json=event_data,
            headers=get_request_headers(token=auth_token),
            verify=True
        )
        return check_req_success(resp)

    @staticmethod
    def delete(event_id, auth_token):
        resp = requests.delete(
            api_url + get_route('event', effected_id=event_id),
            headers=get_request_headers(token=auth_token),
            verify=True
        )
        return check_req_success(resp)

    @staticmethod
    def attending(event_id, auth_token):
        resp = requests.post(
            api_url + get_route('event_users', effected_id=event_id),
            json={},
            headers=get_request_headers(token=auth_token),
            verify=True
        )
        return check_req_success(resp)

    @staticmethod
    def delete_attending(event_id, auth_token):
        resp = requests.delete(
            api_url + get_route('event_users', effected_id=event_id),
            headers=get_request_headers(token=auth_token),
            verify=True
        )
        return check_req_success(resp)

    @staticmethod
    def upload_image(auth_token, image_path):
        image_path = r'{}'.format(image_path)
        with open(image_path, 'rb') as image:
            print('Opened: {img}'.format(img=image_path))
            # print(guess_type(image)[0])
            resp = requests.post(
                api_url + get_route('images', assign_user=False),
                data=image,
                headers=get_request_headers(
                    token=auth_token,
                    content_type=guess_type(image_path)[0]
                ),
                verify=True
            )
            print('{s}: {r}'.format(s=resp.status_code, r=resp.reason))
            return check_req_success(resp)


class Report_Requests:

    @staticmethod
    def report_user(user_id, auth_token, reason):
        resp = requests.post(
            api_url + get_route('report_user', effected_id=user_id),
            json={'Report_Reason': reason},
            headers=get_request_headers(token=auth_token),
            verify=True
        )
        return check_req_success(resp)

    @staticmethod
    def report_event(event_id, auth_token, reason):
        resp = requests.post(
            api_url + get_route('report_event', effected_id=event_id),
            json={'Report_Reason': reason},
            headers=get_request_headers(token=auth_token),
            verify=True
        )
        return check_req_success(resp)
# File: kartverket_tide_api/parsers/__init__.py (repo: matsjp/kartverket_tide_api, license: MIT)

from .abstractresponseparser import AbstractResponseParser
from .locationdataparser import LocationDataParser
from .stationlistparser import StationListParser
# File: sympy/functions/combinatorial/__init__.py (repo: ovolve/sympy, license: BSD-3-Clause)

from . import factorials
from . import numbers
# File: tests/rcf/test_utils.py (repo: FatimAmiri/compas_rcf, license: MIT)

def test_empty_test():
    assert True


def test_1984():
    assert 2 + 2 == 4
# File: TagDict/__init__.py (repo: patarapolw/TagDict, license: Apache-2.0)

from .excel import TagDict
# File: lumin/data_processing/__init__.py (repo: choisant/lumin, license: Apache-2.0)

# from .file_proc import * # noqa F304
# from .hep_proc import * # noqa F304
# from .pre_proc import * # noqa F304
# __all__ = [*file_proc.__all__, *hep_proc.__all__, *pre_proc.__all__] # noqa F405
# File: platform/core/polyaxon/db/models/git_access.py (repo: hackerwins/polyaxon, license: Apache-2.0)

from db.models.abstract.access_catalog import HostAccessCatalog
class GitAccess(HostAccessCatalog):
    class Meta(HostAccessCatalog.Meta):
        pass
# File: 05/00/remove.py (repo: pylangstudy/201708, license: CC0-1.0)

s = set([1, 2, 3])
print(s)
s.remove(2)
print(s)
s.remove(4)  # KeyError: 4
print(s)
# File: migrations/versions/5c3ebec69cdd_make_delete_modify_private.py (repo: hieulq/pgscm, license: Apache-2.0)

"""make deleted_at and modify_info column to private property
Revision ID: 5c3ebec69cdd
Revises: ef552a46d4ff
Create Date: 2017-07-26 21:28:30.229291

"""
from alembic import op
import sqlalchemy as sa
from sqlalchemy.dialects import mysql

# revision identifiers, used by Alembic.
revision = '5c3ebec69cdd'
down_revision = 'ef552a46d4ff'


def upgrade():
    # ### commands auto generated by Alembic - please adjust! ###
    op.add_column('associate_group',
                  sa.Column('_deleted_at', sa.DateTime(), nullable=True))
    op.add_column('associate_group',
                  sa.Column('_modify_info', sa.String(length=255),
                            nullable=True))
    op.drop_index('a_group_code_index', table_name='associate_group')
    op.create_index('a_group_code_index', 'associate_group',
                    ['associate_group_code', '_deleted_at'], unique=False)
    op.drop_column('associate_group', 'modify_info')
    op.drop_column('associate_group', 'deleted_at')
    op.add_column('certificate',
                  sa.Column('_deleted_at', sa.DateTime(), nullable=True))
    op.add_column('certificate',
                  sa.Column('_modify_info', sa.String(length=255),
                            nullable=True))
    op.drop_index('certificate_code_index', table_name='certificate')
    op.create_index('certificate_code_index', 'certificate',
                    ['certificate_code', '_deleted_at'], unique=False)
    op.drop_index('certificate_code_index2', table_name='certificate')
    op.drop_column('certificate', 'modify_info')
    op.drop_column('certificate', 'deleted_at')
    op.add_column('farmer',
                  sa.Column('_deleted_at', sa.DateTime(), nullable=True))
    op.add_column('farmer', sa.Column('_modify_info', sa.String(length=255),
                                      nullable=True))
    op.drop_index('farmer_code_index', table_name='farmer')
    op.create_index('farmer_code_index', 'farmer',
                    ['farmer_code', '_deleted_at'], unique=False)
    op.drop_column('farmer', 'modify_info')
    op.drop_column('farmer', 'deleted_at')
    op.add_column('group',
                  sa.Column('_deleted_at', sa.DateTime(), nullable=True))
    op.add_column('group', sa.Column('_modify_info', sa.String(length=255),
                                     nullable=True))
    op.drop_index('group_code_index', table_name='group')
    op.create_index('group_code_index', 'group', ['group_code', '_deleted_at'],
                    unique=False)
    op.drop_column('group', 'modify_info')
    op.drop_column('group', 'deleted_at')
    # ### end Alembic commands ###


def downgrade():
    # ### commands auto generated by Alembic - please adjust! ###
    op.add_column('group',
                  sa.Column('deleted_at', mysql.DATETIME(), nullable=True))
    op.add_column('group', sa.Column('modify_info', mysql.VARCHAR(length=255),
                                     nullable=True))
op.drop_index('group_code_index', table_name='group')
op.create_index('group_code_index', 'group', ['group_code', 'deleted_at'],
unique=False)
op.drop_column('group', '_modify_info')
op.drop_column('group', '_deleted_at')
op.add_column('farmer',
sa.Column('deleted_at', mysql.DATETIME(), nullable=True))
op.add_column('farmer', sa.Column('modify_info', mysql.VARCHAR(length=255),
nullable=True))
op.drop_index('farmer_code_index', table_name='farmer')
op.create_index('farmer_code_index', 'farmer',
['farmer_code', 'deleted_at'], unique=False)
op.drop_column('farmer', '_modify_info')
op.drop_column('farmer', '_deleted_at')
op.add_column('certificate',
sa.Column('deleted_at', mysql.DATETIME(), nullable=True))
op.add_column('certificate',
sa.Column('modify_info', mysql.VARCHAR(length=255),
nullable=True))
op.create_index('certificate_code_index2', 'certificate',
['certificate_code'], unique=False)
op.drop_index('certificate_code_index', table_name='certificate')
op.create_index('certificate_code_index', 'certificate',
['certificate_code', 'deleted_at'], unique=False)
op.drop_column('certificate', '_modify_info')
op.drop_column('certificate', '_deleted_at')
op.add_column('associate_group',
sa.Column('deleted_at', mysql.DATETIME(), nullable=True))
op.add_column('associate_group',
sa.Column('modify_info', mysql.VARCHAR(length=255),
nullable=True))
op.drop_index('a_group_code_index', table_name='associate_group')
op.create_index('a_group_code_index', 'associate_group',
['associate_group_code', 'deleted_at'], unique=False)
op.drop_column('associate_group', '_modify_info')
op.drop_column('associate_group', '_deleted_at')
# ### end Alembic commands ###
| 48.067308 | 79 | 0.634127 | 589 | 4,999 | 5.067912 | 0.113752 | 0.075377 | 0.058961 | 0.051256 | 0.883417 | 0.874707 | 0.862647 | 0.852261 | 0.852261 | 0.846231 | 0 | 0.01701 | 0.223845 | 4,999 | 103 | 80 | 48.533981 | 0.75232 | 0.068014 | 0 | 0.55814 | 0 | 0 | 0.301601 | 0.028992 | 0 | 0 | 0 | 0 | 0 | 1 | 0.023256 | false | 0 | 0.034884 | 0 | 0.05814 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f5463f16a06c66296fd1a257dbd7b9bf048da231 | 356 | py | Python | 001146StepikPyBegin/Stepik001146PyBeginсh02p03st04Q03_20200411.py | SafonovMikhail/python_000577 | 739f764e80f1ca354386f00b8e9db1df8c96531d | [
"Apache-2.0"
] | null | null | null | 001146StepikPyBegin/Stepik001146PyBeginсh02p03st04Q03_20200411.py | SafonovMikhail/python_000577 | 739f764e80f1ca354386f00b8e9db1df8c96531d | [
"Apache-2.0"
] | null | null | null | 001146StepikPyBegin/Stepik001146PyBeginсh02p03st04Q03_20200411.py | SafonovMikhail/python_000577 | 739f764e80f1ca354386f00b8e9db1df8c96531d | [
"Apache-2.0"
] | null | null | null | print('a', 'b', 'c', sep='*') #1
print('d', 'e', 'f', sep='**', end='') #2
print('g', 'h', 'i', sep='+', end='%') #2
print('j', 'k', 'l', sep='-', end='\n') #2
print('m', 'n', 'o', sep='/', end='!') #3
print('p', 'q', 'r', sep='1', end='%') #3
print('s', 't', 'u', sep='&', end='\n') #3
print('v', 'w', 'x', sep='%') #4
print('y', 'z', sep='/', end='!') #5 | 39.555556 | 42 | 0.367978 | 63 | 356 | 2.079365 | 0.539683 | 0.274809 | 0.10687 | 0.183206 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.03268 | 0.140449 | 356 | 9 | 43 | 39.555556 | 0.395425 | 0.025281 | 0 | 0 | 0 | 0 | 0.129794 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
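The `sep`/`end` drills above can be verified without reading stdout by printing into a buffer: `sep` joins the positional arguments and `end` replaces the trailing newline. A small sketch:

```python
import io

buf = io.StringIO()
# sep joins the positional arguments; end replaces the trailing newline.
print('a', 'b', 'c', sep='*', file=buf)           # a*b*c\n
print('d', 'e', 'f', sep='**', end='', file=buf)  # d**e**f, no newline
print('g', 'h', 'i', sep='+', end='%', file=buf)  # g+h+i%
assert buf.getvalue() == 'a*b*c\nd**e**fg+h+i%'
```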
f568a7140ed21328ed2e1e0781ab28eaee208a28 | 27 | py | Python | eod/models/necks/__init__.py | Helicopt/EOD | b5db36f4ce267bf64d093b8174bde2c4097b4718 | [
"Apache-2.0"
] | 196 | 2021-10-30T05:15:36.000Z | 2022-03-30T18:43:40.000Z | eod/tasks/det/models/necks/__init__.py | YZW-explorer/EOD | f10e64de86c0f356ebf5c7e923f4042eec4207b1 | [
"Apache-2.0"
] | 12 | 2021-10-30T11:33:28.000Z | 2022-03-31T14:22:58.000Z | eod/tasks/det/models/necks/__init__.py | YZW-explorer/EOD | f10e64de86c0f356ebf5c7e923f4042eec4207b1 | [
"Apache-2.0"
] | 23 | 2021-11-01T07:26:17.000Z | 2022-03-27T05:55:37.000Z | from .fpn import FPN # noqa | 27 | 27 | 0.740741 | 5 | 27 | 4 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.185185 | 27 | 1 | 27 | 27 | 0.909091 | 0.148148 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f581f8197aacc9995b71762e63acafdb4eececf0 | 1,659 | py | Python | covid_api/serializers/entidad.py | jmbarrios/covid-mexico-19 | 6872d55830e2a6cd6987a4ee517cd016dd853edf | [
"MIT"
] | null | null | null | covid_api/serializers/entidad.py | jmbarrios/covid-mexico-19 | 6872d55830e2a6cd6987a4ee517cd016dd853edf | [
"MIT"
] | null | null | null | covid_api/serializers/entidad.py | jmbarrios/covid-mexico-19 | 6872d55830e2a6cd6987a4ee517cd016dd853edf | [
"MIT"
] | 2 | 2020-05-11T15:32:31.000Z | 2020-05-13T19:12:20.000Z | import json
from rest_framework import serializers
from covid_data import models
class EntidadSimpleSerializer(serializers.ModelSerializer):
class Meta:
model = models.Entidad
fields = [
'url',
'clave',
'descripcion']
extra_kwargs = {
'url': {'view_name': 'entidad-detail', 'lookup_field': 'clave'}
}
class EntidadSerializer(serializers.ModelSerializer):
class Meta:
model = models.Entidad
fields = [
'url',
'clave',
'descripcion',
]
extra_kwargs = {
'url': {'view_name': 'entidad-detail', 'lookup_field': 'clave'}
}
class EntidadGeoSerializer(serializers.ModelSerializer):
type = serializers.CharField(
read_only=True,
default='Feature')
geometry = serializers.SerializerMethodField()
properties = EntidadSerializer(source='*')
class Meta:
model = models.Entidad
fields = [
'type',
'geometry',
'properties'
]
def get_geometry(self, obj):
return json.loads(obj.geometria_simplificada.geojson)
class EntidadCentroideSerializer(serializers.ModelSerializer):
type = serializers.CharField(
read_only=True,
default='Feature')
geometry = serializers.SerializerMethodField()
properties = EntidadSerializer(source='*')
class Meta:
model = models.Entidad
fields = [
'type',
'geometry',
'properties'
]
def get_geometry(self, obj):
return json.loads(obj.centroide.geojson)
| 24.397059 | 75 | 0.588306 | 135 | 1,659 | 7.133333 | 0.37037 | 0.107996 | 0.058152 | 0.083074 | 0.78297 | 0.78297 | 0.78297 | 0.78297 | 0.78297 | 0.78297 | 0 | 0 | 0.30862 | 1,659 | 67 | 76 | 24.761194 | 0.839582 | 0 | 0 | 0.703704 | 0 | 0 | 0.11091 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.037037 | false | 0 | 0.055556 | 0.037037 | 0.388889 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
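`EntidadGeoSerializer` above builds a GeoJSON Feature by pairing a serialized geometry with the model's plain fields. The same output shape can be sketched in plain Python; the geometry and field values here are hypothetical stand-ins for `obj.centroide.geojson` and the `EntidadSerializer` output:

```python
import json

def to_feature(geometry, properties):
    # Same shape the serializer produces: a GeoJSON Feature pairing a
    # geometry dict with the model's plain fields.
    return {"type": "Feature", "geometry": geometry, "properties": properties}

# Hypothetical centroid and entity fields for illustration only.
feature = to_feature(
    {"type": "Point", "coordinates": [-99.13, 19.43]},
    {"clave": "09", "descripcion": "Ciudad de Mexico"},
)
assert json.loads(json.dumps(feature))["type"] == "Feature"
```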
193cb6d039f4fb47fa859b8e180ae49b9941c300 | 20,120 | py | Python | tests/test_tasks_and_participants.py | debrief/pepys-import | 12d29c0e0f69e1119400334983947893e7679b6b | [
"Apache-2.0"
] | 4 | 2021-05-14T08:22:47.000Z | 2022-02-04T19:48:25.000Z | tests/test_tasks_and_participants.py | debrief/pepys-import | 12d29c0e0f69e1119400334983947893e7679b6b | [
"Apache-2.0"
] | 1,083 | 2019-11-06T17:01:07.000Z | 2022-03-25T10:26:51.000Z | tests/test_tasks_and_participants.py | debrief/pepys-import | 12d29c0e0f69e1119400334983947893e7679b6b | [
"Apache-2.0"
] | 4 | 2019-11-06T12:00:45.000Z | 2021-06-09T04:18:28.000Z | import unittest
from datetime import datetime
import pytest
from testing.postgresql import Postgresql
from pepys_import.core.store.data_store import DataStore
def create_example_tasks(ds, create_participants=False):
ds.initialise()
with ds.session_scope():
ds.populate_reference()
ds.populate_metadata()
with ds.session_scope():
priv_id = ds.session.query(ds.db_classes.Privacy).all()[0].privacy_id
change_id = ds.add_to_changes(
"USER", datetime.utcnow(), "Creating test tasks/participants"
).change_id
s1 = ds.db_classes.Series(name="Joint Warrior", privacy_id=priv_id)
s2 = ds.db_classes.Series(name="Another Top-Level Series", privacy_id=priv_id)
wg1 = ds.db_classes.Wargame(
name="Joint Warrior 20/02",
start=datetime(2020, 2, 1, 0, 0, 0),
end=datetime(2020, 2, 28, 0, 0, 0),
privacy_id=priv_id,
)
wg1.series = s1
wg2 = ds.db_classes.Wargame(
name="An Wargame",
start=datetime(2020, 2, 1, 0, 0, 0),
end=datetime(2020, 2, 28, 0, 0, 0),
privacy_id=priv_id,
)
wg2.series = s2
wg3 = ds.db_classes.Wargame(
name="Another Wargame",
start=datetime(2020, 2, 1, 0, 0, 0),
end=datetime(2020, 2, 28, 0, 0, 0),
privacy_id=priv_id,
)
wg3.series = s2
serial1 = ds.db_classes.Serial(
serial_number="J05020",
exercise="NAVCOMEX",
start=datetime(2020, 2, 3, 7, 0, 0),
end=datetime(2020, 2, 4, 12, 0, 0),
environment="Test Environment",
privacy_id=priv_id,
)
serial1.wargame = wg1
serial2 = ds.db_classes.Serial(
serial_number="J05084",
exercise="ADEX 324",
start=datetime(2020, 2, 6, 9, 0, 0),
end=datetime(2020, 2, 8, 14, 0, 0),
environment="Test Environment",
privacy_id=priv_id,
)
serial2.wargame = wg1
serial3 = ds.db_classes.Serial(
serial_number="J05110",
exercise="CASEX E3",
start=datetime(2020, 2, 23, 9, 0, 0),
end=datetime(2020, 2, 25, 15, 0, 0),
environment="Test Environment",
privacy_id=priv_id,
)
serial3.wargame = wg1
ds.session.add_all([s1, s2, wg1, wg2, wg3, serial1, serial2, serial3])
plat1 = (
ds.session.query(ds.db_classes.Platform)
.filter(ds.db_classes.Platform.name == "ADRI")
.one()
)
plat2 = (
ds.session.query(ds.db_classes.Platform)
.filter(ds.db_classes.Platform.name == "JEAN")
.one()
)
plat3 = (
ds.session.query(ds.db_classes.Platform)
.filter(ds.db_classes.Platform.name == "NARV")
.one()
)
if create_participants:
p1 = wg1.add_participant(
data_store=ds, platform=plat1, privacy="Private", change_id=change_id
)
p2 = wg1.add_participant(
data_store=ds, platform=plat2, privacy="Private", change_id=change_id
)
p3 = wg1.add_participant(
data_store=ds, platform=plat3, privacy="Private", change_id=change_id
)
serial1.add_participant(
data_store=ds,
wargame_participant=p1,
start=datetime(2020, 2, 3, 8, 0, 0),
end=datetime(2020, 2, 3, 10, 0, 0),
force_type="Blue",
privacy="Private",
change_id=change_id,
)
serial1.add_participant(
data_store=ds,
wargame_participant=p2,
start=datetime(2020, 2, 3, 8, 0, 0),
end=datetime(2020, 2, 3, 9, 30, 0),
force_type="Red",
privacy="Private",
change_id=change_id,
)
serial2.add_participant(
data_store=ds,
wargame_participant=p3,
start=datetime(2020, 2, 6, 11, 0, 0),
end=datetime(2020, 2, 7, 11, 0, 0),
force_type="Blue",
privacy="Private",
change_id=change_id,
)
class TestTasksAndParticipants_SQLite(unittest.TestCase):
def setUp(self):
self.store = DataStore("", "", "", 0, ":memory:", db_type="sqlite")
def test_create_tasks(self):
create_example_tasks(self.store)
with self.store.session_scope():
all_series = self.store.session.query(self.store.db_classes.Series).all()
all_wargames = self.store.session.query(self.store.db_classes.Wargame).all()
all_serials = self.store.session.query(self.store.db_classes.Serial).all()
assert len(all_series) == 2
assert len(all_wargames) == 3
assert len(all_serials) == 3
jw_series = (
self.store.session.query(self.store.db_classes.Series)
.filter(self.store.db_classes.Series.name == "Joint Warrior")
.one()
)
other_series = (
self.store.session.query(self.store.db_classes.Series)
.filter(self.store.db_classes.Series.name == "Another Top-Level Series")
.one()
)
assert len(jw_series.child_wargames) == 1
assert len(other_series.child_wargames) == 2
jw_wargame = jw_series.child_wargames[0]
assert jw_wargame.name == "Joint Warrior 20/02"
assert len(jw_wargame.child_serials) == 3
assert jw_wargame.series == jw_series
assert jw_wargame.series_name == "Joint Warrior"
serial_numbers = [serial.serial_number for serial in jw_wargame.child_serials]
assert "J05020" in serial_numbers
assert "J05084" in serial_numbers
assert "J05110" in serial_numbers
serial_exercises = [serial.exercise for serial in jw_wargame.child_serials]
assert "NAVCOMEX" in serial_exercises
assert "ADEX 324" in serial_exercises
assert "CASEX E3" in serial_exercises
def test_create_participants(self):
create_example_tasks(self.store, create_participants=True)
all_wgps = self.store.session.query(self.store.db_classes.WargameParticipant).all()
all_sps = self.store.session.query(self.store.db_classes.SerialParticipant).all()
assert len(all_wgps) == 3
assert len(all_sps) == 3
jw_wargame = (
self.store.session.query(self.store.db_classes.Wargame)
.filter(self.store.db_classes.Wargame.name == "Joint Warrior 20/02")
.one()
)
assert len(jw_wargame.participants) == 3
platform_names = [participant.platform_name for participant in jw_wargame.participants]
assert "ADRI" in platform_names
assert "JEAN" in platform_names
assert "NARV" in platform_names
serial1 = (
self.store.session.query(self.store.db_classes.Serial)
.filter(self.store.db_classes.Serial.serial_number == "J05020")
.one()
)
assert len(serial1.participants) == 2
platform_names = [participant.platform_name for participant in serial1.participants]
assert "ADRI" in platform_names
assert "JEAN" in platform_names
force_types = [participant.force_type_name for participant in serial1.participants]
assert "Red" in force_types
assert "Blue" in force_types
def test_delete_task_deletes_children_and_participants(self):
create_example_tasks(self.store, create_participants=True)
with self.store.session_scope():
wargame = (
self.store.session.query(self.store.db_classes.Wargame)
.filter(self.store.db_classes.Wargame.name == "Joint Warrior 20/02")
.one()
)
self.store.session.delete(wargame)
with self.store.session_scope():
all_wargames = self.store.session.query(self.store.db_classes.Wargame).all()
all_serials = self.store.session.query(self.store.db_classes.Serial).all()
assert len(all_wargames) == 2
assert len(all_serials) == 0
# Should still have parent
parent_series = (
self.store.session.query(self.store.db_classes.Series)
.filter(self.store.db_classes.Series.name == "Joint Warrior")
.all()
)
assert len(parent_series) == 1
# The only participants were those under one of the deleted tasks, so they should be deleted too
all_wgps = self.store.session.query(self.store.db_classes.WargameParticipant).all()
assert len(all_wgps) == 0
all_sps = self.store.session.query(self.store.db_classes.SerialParticipant).all()
assert len(all_sps) == 0
# But the platforms the participants reference shouldn't be deleted
all_platforms = self.store.session.query(self.store.db_classes.Platform).all()
assert len(all_platforms) == 4
def test_removing_serial_participant_deletes_it(self):
create_example_tasks(self.store, create_participants=True)
with self.store.session_scope():
serial = (
self.store.session.query(self.store.db_classes.Serial)
.filter(self.store.db_classes.Serial.serial_number == "J05020")
.one()
)
serial.participants.remove(serial.participants[0])
with self.store.session_scope():
serial = (
self.store.session.query(self.store.db_classes.Serial)
.filter(self.store.db_classes.Serial.serial_number == "J05020")
.one()
)
assert len(serial.participants) == 1
# Check it deletes the SerialParticipant entry
all_sps = self.store.session.query(self.store.db_classes.SerialParticipant).all()
assert len(all_sps) == 2
# Check it doesn't delete the wargame participant associated with it
all_wgps = self.store.session.query(self.store.db_classes.WargameParticipant).all()
assert len(all_wgps) == 3
def test_removing_wargame_participant_deletes_it_and_serial_participants(self):
create_example_tasks(self.store, create_participants=True)
with self.store.session_scope():
wargame = (
self.store.session.query(self.store.db_classes.Wargame)
.filter(self.store.db_classes.Wargame.name == "Joint Warrior 20/02")
.one()
)
narv_participant = [
participant
for participant in wargame.participants
if participant.platform_name == "ADRI"
][0]
wargame.participants.remove(narv_participant)
with self.store.session_scope():
serial = (
self.store.session.query(self.store.db_classes.Serial)
.filter(self.store.db_classes.Serial.serial_number == "J05020")
.one()
)
assert len(serial.participants) == 1
# Check it deletes the WargameParticipant entry
all_wgps = self.store.session.query(self.store.db_classes.WargameParticipant).all()
assert len(all_wgps) == 2
# Check it also deletes the SerialParticipant entry
all_sps = self.store.session.query(self.store.db_classes.SerialParticipant).all()
assert len(all_sps) == 2
@pytest.mark.postgres
class TestTasksAndParticipants_Postgres(unittest.TestCase):
def setUp(self):
self.postgres = None
try:
self.postgres = Postgresql(
database="test",
host="localhost",
user="postgres",
password="postgres",
port=55527,
)
except RuntimeError:
raise Exception("Testing Postgres server could not be started/accessed")
self.store = DataStore(
db_name="test",
db_host="localhost",
db_username="postgres",
db_password="postgres",
db_port=55527,
db_type="postgres",
)
def tearDown(self):
try:
self.postgres.stop()
except AttributeError:
return
def test_create_tasks(self):
create_example_tasks(self.store)
with self.store.session_scope():
all_series = self.store.session.query(self.store.db_classes.Series).all()
all_wargames = self.store.session.query(self.store.db_classes.Wargame).all()
all_serials = self.store.session.query(self.store.db_classes.Serial).all()
assert len(all_series) == 2
assert len(all_wargames) == 3
assert len(all_serials) == 3
jw_series = (
self.store.session.query(self.store.db_classes.Series)
.filter(self.store.db_classes.Series.name == "Joint Warrior")
.one()
)
other_series = (
self.store.session.query(self.store.db_classes.Series)
.filter(self.store.db_classes.Series.name == "Another Top-Level Series")
.one()
)
assert len(jw_series.child_wargames) == 1
assert len(other_series.child_wargames) == 2
jw_wargame = jw_series.child_wargames[0]
assert jw_wargame.name == "Joint Warrior 20/02"
assert len(jw_wargame.child_serials) == 3
assert jw_wargame.series == jw_series
assert jw_wargame.series_name == "Joint Warrior"
serial_numbers = [serial.serial_number for serial in jw_wargame.child_serials]
assert "J05020" in serial_numbers
assert "J05084" in serial_numbers
assert "J05110" in serial_numbers
serial_exercises = [serial.exercise for serial in jw_wargame.child_serials]
assert "NAVCOMEX" in serial_exercises
assert "ADEX 324" in serial_exercises
assert "CASEX E3" in serial_exercises
def test_create_participants(self):
create_example_tasks(self.store, create_participants=True)
all_wgps = self.store.session.query(self.store.db_classes.WargameParticipant).all()
all_sps = self.store.session.query(self.store.db_classes.SerialParticipant).all()
assert len(all_wgps) == 3
assert len(all_sps) == 3
jw_wargame = (
self.store.session.query(self.store.db_classes.Wargame)
.filter(self.store.db_classes.Wargame.name == "Joint Warrior 20/02")
.one()
)
assert len(jw_wargame.participants) == 3
platform_names = [participant.platform_name for participant in jw_wargame.participants]
assert "ADRI" in platform_names
assert "JEAN" in platform_names
assert "NARV" in platform_names
serial1 = (
self.store.session.query(self.store.db_classes.Serial)
.filter(self.store.db_classes.Serial.serial_number == "J05020")
.one()
)
assert len(serial1.participants) == 2
platform_names = [participant.platform_name for participant in serial1.participants]
assert "ADRI" in platform_names
assert "JEAN" in platform_names
force_types = [participant.force_type_name for participant in serial1.participants]
assert "Red" in force_types
assert "Blue" in force_types
def test_delete_task_deletes_children_and_participants(self):
create_example_tasks(self.store, create_participants=True)
with self.store.session_scope():
wargame = (
self.store.session.query(self.store.db_classes.Wargame)
.filter(self.store.db_classes.Wargame.name == "Joint Warrior 20/02")
.one()
)
self.store.session.delete(wargame)
with self.store.session_scope():
all_wargames = self.store.session.query(self.store.db_classes.Wargame).all()
all_serials = self.store.session.query(self.store.db_classes.Serial).all()
assert len(all_wargames) == 2
assert len(all_serials) == 0
# Should still have parent
parent_series = (
self.store.session.query(self.store.db_classes.Series)
.filter(self.store.db_classes.Series.name == "Joint Warrior")
.all()
)
assert len(parent_series) == 1
# The only participants were those under one of the deleted tasks, so they should be deleted too
all_wgps = self.store.session.query(self.store.db_classes.WargameParticipant).all()
assert len(all_wgps) == 0
all_sps = self.store.session.query(self.store.db_classes.SerialParticipant).all()
assert len(all_sps) == 0
# But the platforms the participants reference shouldn't be deleted
all_platforms = self.store.session.query(self.store.db_classes.Platform).all()
assert len(all_platforms) == 4
def test_removing_serial_participant_deletes_it(self):
create_example_tasks(self.store, create_participants=True)
with self.store.session_scope():
serial = (
self.store.session.query(self.store.db_classes.Serial)
.filter(self.store.db_classes.Serial.serial_number == "J05020")
.one()
)
serial.participants.remove(serial.participants[0])
with self.store.session_scope():
serial = (
self.store.session.query(self.store.db_classes.Serial)
.filter(self.store.db_classes.Serial.serial_number == "J05020")
.one()
)
assert len(serial.participants) == 1
# Check it deletes the SerialParticipant entry
all_sps = self.store.session.query(self.store.db_classes.SerialParticipant).all()
assert len(all_sps) == 2
# Check it doesn't delete the wargame participant associated with it
all_wgps = self.store.session.query(self.store.db_classes.WargameParticipant).all()
assert len(all_wgps) == 3
def test_removing_wargame_participant_deletes_it_and_serial_participants(self):
create_example_tasks(self.store, create_participants=True)
with self.store.session_scope():
wargame = (
self.store.session.query(self.store.db_classes.Wargame)
.filter(self.store.db_classes.Wargame.name == "Joint Warrior 20/02")
.one()
)
narv_participant = [
participant
for participant in wargame.participants
if participant.platform_name == "ADRI"
][0]
wargame.participants.remove(narv_participant)
with self.store.session_scope():
serial = (
self.store.session.query(self.store.db_classes.Serial)
.filter(self.store.db_classes.Serial.serial_number == "J05020")
.one()
)
assert len(serial.participants) == 1
# Check it deletes the WargameParticipant entry
all_wgps = self.store.session.query(self.store.db_classes.WargameParticipant).all()
assert len(all_wgps) == 2
# Check it also deletes the SerialParticipant entry
all_sps = self.store.session.query(self.store.db_classes.SerialParticipant).all()
assert len(all_sps) == 2
| 37.962264 | 108 | 0.592992 | 2,298 | 20,120 | 5.01262 | 0.083986 | 0.11251 | 0.064936 | 0.106259 | 0.893914 | 0.882802 | 0.853373 | 0.836965 | 0.830367 | 0.819168 | 0 | 0.030349 | 0.307256 | 20,120 | 529 | 109 | 38.034026 | 0.796097 | 0.039115 | 0 | 0.695652 | 0 | 0 | 0.045398 | 0 | 0 | 0 | 0 | 0 | 0.183575 | 1 | 0.033816 | false | 0.004831 | 0.012077 | 0 | 0.05314 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1984f9653261ed700ad09fc234454f96ba80a450 | 615 | py | Python | utils/print_split.py | sghick/tools-AutoArchiveIPA | ed9de807949d71fd952c32c1b0d6d75a6fcb7d12 | [
"MIT"
] | 2 | 2019-01-10T02:02:21.000Z | 2019-05-28T01:59:54.000Z | utils/print_split.py | sghick/tools-AutoArchiveIPA | ed9de807949d71fd952c32c1b0d6d75a6fcb7d12 | [
"MIT"
] | null | null | null | utils/print_split.py | sghick/tools-AutoArchiveIPA | ed9de807949d71fd952c32c1b0d6d75a6fcb7d12 | [
"MIT"
] | null | null | null | # coding: utf-8
####################################################################################################
# print split
####################################################################################################
split = '-' * 20
def print_war(s):
print('||:' + s)
def print_log(s):
print(get_log(s))
def print_head():
print('\n+' + split + '+')
def print_sep():
print(get_sep())
def print_foot():
print('+' + split + '+\n')
def print_body(s):
print('|' + s)
def get_log(s):
return split + '[ ' + s + ' ]' + split
def get_sep():
return '+' + split + '+'
| 19.21875 | 100 | 0.346341 | 58 | 615 | 3.5 | 0.293103 | 0.236453 | 0.068966 | 0.098522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005906 | 0.173984 | 615 | 31 | 101 | 19.83871 | 0.393701 | 0.04065 | 0 | 0 | 0 | 0 | 0.049096 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.470588 | false | 0 | 0 | 0.117647 | 0.588235 | 0.705882 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 6 |
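The banner helpers above compose fixed-width separators from a shared `split` string; their output can be sketched and checked directly:

```python
split = '-' * 20

def get_log(s):
    # Wrap a label between two dashed runs: --------[ label ]--------
    return split + '[ ' + s + ' ]' + split

def get_sep():
    # Horizontal rule used for box tops and bottoms.
    return '+' + split + '+'

print(get_sep())         # +--------------------+
print(get_log('build'))  # --------------------[ build ]--------------------
```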
5ff9ef907b22ed460e9db423b2b02935711abaa6 | 47 | py | Python | src/pyth2/__util/EnvironmentDependency.py | gnomeberry/pyth2 | 532d89e4ed22b4f9427069bf187ab836e2c2f538 | [
"MIT"
] | null | null | null | src/pyth2/__util/EnvironmentDependency.py | gnomeberry/pyth2 | 532d89e4ed22b4f9427069bf187ab836e2c2f538 | [
"MIT"
] | null | null | null | src/pyth2/__util/EnvironmentDependency.py | gnomeberry/pyth2 | 532d89e4ed22b4f9427069bf187ab836e2c2f538 | [
"MIT"
] | null | null | null | '''
Created on 2016/01/24
@author: _
'''
| 7.833333 | 22 | 0.510638 | 6 | 47 | 3.833333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.235294 | 0.276596 | 47 | 5 | 23 | 9.4 | 0.441176 | 0.702128 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
272daa421c9b5a702d0a24e288889f9d493ce3ed | 703 | py | Python | python/easy/strings/string_formatting.py | Razor-87/hackerrank | b82dd1f97eeb3c2a9141b196b30b2820acd050e7 | [
"Unlicense"
] | null | null | null | python/easy/strings/string_formatting.py | Razor-87/hackerrank | b82dd1f97eeb3c2a9141b196b30b2820acd050e7 | [
"Unlicense"
] | null | null | null | python/easy/strings/string_formatting.py | Razor-87/hackerrank | b82dd1f97eeb3c2a9141b196b30b2820acd050e7 | [
"Unlicense"
] | null | null | null | # -*- coding: utf-8 -*-
def print_formatted(number: int) -> None:
"""
>>> print_formatted(17) #doctest: +NORMALIZE_WHITESPACE
1 1 1 1
2 2 2 10
3 3 3 11
4 4 4 100
5 5 5 101
6 6 6 110
7 7 7 111
8 10 8 1000
9 11 9 1001
10 12 A 1010
11 13 B 1011
12 14 C 1100
13 15 D 1101
14 16 E 1110
15 17 F 1111
16 20 10 10000
17 21 11 10001
"""
width = len(f"{number:b}")
for i in range(1, number+1):
print(f"{i:{width}n} {i:{width}o} {i:{width}X} {i:{width}b}")
| 25.107143 | 69 | 0.403983 | 109 | 703 | 2.577982 | 0.541284 | 0.085409 | 0.021352 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.368571 | 0.502134 | 703 | 27 | 70 | 26.037037 | 0.434286 | 0.618777 | 0 | 0 | 0 | 0.25 | 0.331522 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0 | 0 | 0.25 | 0.5 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
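The `print_formatted` doctest above relies on nested f-string format specs: `width` is the length of the widest column (the binary rendering), and `n`/`o`/`X`/`b` select decimal, octal, uppercase hex, and binary. A minimal sketch of the same mechanics:

```python
number = 17
# width is the length of the widest column: the binary rendering.
width = len(f"{number:b}")  # '10001' -> 5
row = f"{number:{width}n} {number:{width}o} {number:{width}X} {number:{width}b}"
assert width == 5
assert row == "   17    21    11 10001"
```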
272f3fdf0dc369ce804f8c29bfe91babe230b010 | 15,434 | py | Python | objects.py | joshsharp/mtn | dc7d87668aa426e6b76a1d072cd4bd89b9b7dc3e | [
"Unlicense"
] | 15 | 2017-07-14T11:27:04.000Z | 2021-11-11T01:21:42.000Z | slackmojicode/objects.py | puhitaku/slackmojicode | 0084aa0df029a0c34d47bcf63169872062d0eea3 | [
"Unlicense"
] | null | null | null | slackmojicode/objects.py | puhitaku/slackmojicode | 0084aa0df029a0c34d47bcf63169872062d0eea3 | [
"Unlicense"
] | 6 | 2019-05-20T18:02:11.000Z | 2021-06-27T09:16:36.000Z | from rpython.rlib.objectmodel import r_dict, compute_hash
from rply.token import BaseBox
from errors import *
def dict_eq(key, other):
# we need to implement rdict method to find key equality
return key._eq(other)
def dict_hash(key):
# we need to implement rdict method to find key equality
return key._hash()
class Null(BaseBox):
def __init__(self):
pass
def to_string(self):
return "<null>"
def dump(self):
return "<null>"
class Function(BaseBox):
def __init__(self, name, code):
self.name = name
self.code = code
def to_string(self):
return "<function %s>" % self.name
def dump(self):
return "<function %s>" % self.name
def add(self, right):
raise Exception("Cannot add that to function %s" % self.name)
class ExternalFunction(BaseBox):
def __init__(self, name, fn, args):
self.name = name
self.fn = fn
self.args = args
def to_string(self):
return "<function %s>" % self.name
def dump(self):
return "<function %s>" % self.name
def add(self, right):
raise Exception("Cannot add that to function %s" % self.name)
class Array(BaseBox):
def __init__(self, args):
self.values = args
def dump(self):
return self.to_string()
def map(self, fun, ls):
nls = []
for l in ls:
nls.append(fun(l))
return nls
def push(self, statement):
self.values.insert(0,statement)
def append(self, statement):
self.values.append(statement)
def index(self, right):
if isinstance(right, Integer):
return self.values[right.value]
raise LogicError("Cannot index with that value")
def add(self, right):
if isinstance(right, Array):
result = Array([])
result.values.extend(self.values)
result.values.extend(right.values)
return result
raise LogicError("Cannot add that to array")
def sub(self,right):
if isinstance(right,Integer):
result = [val for val in self.values]
del result[right.intvalue]
return Array(result)
raise LogicError("Cannot remove that index from array")
def to_string(self):
return '[%s]' % (", ".join(self.map(lambda x: x.to_string(),self.values)))
class Dict(BaseBox):
def __init__(self, args):
self.values = args
def dump(self):
return self.to_string()
def map(self, fun, ls):
nls = []
for l in ls:
nls.append(fun(l))
return nls
def update(self, key, val):
self.values[key] = val
def index(self, right):
if isinstance(right, Integer):
return self.values[right]
if isinstance(right, String):
return self.values[right]
if isinstance(right, Float):
return self.values[right]
if isinstance(right, Boolean):
return self.values[right]
raise LogicError("Cannot index with that value")
def add(self, right):
if isinstance(right, Dict):
result = Dict(r_dict(dict_eq, dict_hash))
for key, val in self.values.iteritems():
result.values[key] = val
for key, val in right.values.iteritems():
result.values[key] = val
return result
raise LogicError("Cannot add that to dict")
    def sub(self, right):
        result = r_dict(dict_eq, dict_hash)
        for key, val in self.values.iteritems():
            result[key] = val
        if right not in result:
            raise LogicError("Cannot remove that key from dict")
        del result[right]
        return Dict(result)
def to_string(self):
return '{%s}' % (", ".join(self.map(lambda k: "%s: %s" % (k[0].to_string(), k[1].to_string()),self.values.iteritems())))
class Boolean(BaseBox):
def __init__(self, value):
self.boolvalue = bool(value)
@property
def value(self):
return bool(self.boolvalue)
def __hash__(self):
return compute_hash(self.boolvalue)
def __eq__(self, other):
if(isinstance(other,Boolean)):
return self.boolvalue == other.boolvalue
return False
def _hash(self):
return compute_hash(self.boolvalue)
def _eq(self, other):
if(isinstance(other,Boolean)):
return self.boolvalue == other.boolvalue
return False
    def equals(self, right):
        if isinstance(right, Boolean):
            return Boolean(self.value == right.value)
        if isinstance(right, Integer):
            return Boolean(self.to_int() == right.value)
        if isinstance(right, Float):
            return Boolean(self.to_int() == right.value)
        raise LogicError("Cannot compare that to boolean")
    def lte(self, right):
        if isinstance(right, Boolean):
            # false <= true under the usual 0/1 encoding
            return Boolean(self.to_int() <= right.to_int())
        raise LogicError("Cannot compare that to boolean")
def lt(self, right):
raise LogicError("Cannot compare boolean that way")
def gt(self, right):
raise LogicError("Cannot compare boolean that way")
    def gte(self, right):
        if isinstance(right, Boolean):
            return Boolean(self.to_int() >= right.to_int())
        raise LogicError("Cannot compare that to boolean")
def add(self, right):
raise LogicError("Cannot add that to boolean")
def sub(self, right):
raise LogicError("Cannot sub that from boolean")
def mul(self, right):
raise LogicError("Cannot mul that to boolean")
def div(self, right):
raise LogicError("Cannot div that from boolean")
def to_string(self):
if self.value:
return "true"
return "false"
def to_int(self):
if self.value:
return 1
return 0
def dump(self):
return self.to_string()
class Integer(BaseBox):
def __init__(self, value):
self.intvalue = int(value)
@property
def value(self):
return int(self.intvalue)
def __hash__(self):
return compute_hash(self.intvalue)
def __eq__(self, other):
if(isinstance(other,Integer)):
return (self.intvalue) == (other.intvalue)
return False
def _hash(self):
return compute_hash(self.intvalue)
def _eq(self, other):
if(isinstance(other,Integer)):
return self.intvalue == other.intvalue
return False
def to_string(self):
return str(self.value)
def dump(self):
return str(self.value)
def equals(self, right):
if isinstance(right,Float):
return Boolean(float(self.value) == right.value)
if isinstance(right, Integer):
return Boolean(self.value == right.value)
if isinstance(right, Boolean):
return Boolean(self.value == right.to_int())
raise LogicError("Cannot compare that to integer")
def lte(self, right):
if isinstance(right, Integer):
return Boolean(self.value <= right.value)
if isinstance(right,Float):
return Boolean(float(self.value) <= right.value)
raise LogicError("Cannot compare that to integer")
def lt(self, right):
if isinstance(right, Integer):
return Boolean(self.value < right.value)
        if isinstance(right, Float):
return Boolean(float(self.value) < right.value)
raise LogicError("Cannot compare integer that way")
def gt(self, right):
if isinstance(right, Integer):
return Boolean(self.value > right.value)
if isinstance(right,Float):
return Boolean(float(self.value) > right.value)
raise LogicError("Cannot compare integer that way")
def gte(self, right):
if isinstance(right, Integer):
return Boolean(self.value >= right.value)
if isinstance(right,Float):
return Boolean(float(self.value) >= right.value)
raise LogicError("Cannot compare integer that way")
def add(self, right):
if isinstance(right, Integer):
return Integer(self.value + right.value)
if isinstance(right,Float):
return Float(float(self.value) + right.value)
raise LogicError("Cannot add %s to integer" % str(right.__class__.__name__))
def sub(self, right):
if isinstance(right, Integer):
return Integer(self.value - right.value)
if isinstance(right,Float):
return Float(float(self.value) - right.value)
raise LogicError("Cannot sub from int")
def mul(self, right):
if isinstance(right, Integer):
return Integer(self.value * right.value)
if isinstance(right,Float):
return Float(float(self.value) * right.value)
raise LogicError("Cannot mul that to int")
def div(self, right):
if isinstance(right, Integer):
return Integer(self.value / right.value)
if isinstance(right,Float):
return Float(float(self.value) / right.value)
raise LogicError("Cannot div that with int")
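Integer's arithmetic follows a simple promotion rule: int op int stays an int, while any Float operand promotes the result to float. A sketch of the rule in plain Python (`promote_add` is a hypothetical helper, not part of the interpreter):

```python
def promote_add(a, b):
    # mirrors Integer.add/Float.add: only an all-int pair stays integral
    if isinstance(a, int) and isinstance(b, int):
        return a + b
    return float(a) + float(b)

print(promote_add(2, 3))    # 5
print(promote_add(2, 0.5))  # 2.5
```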
class Float(BaseBox):
def __init__(self, val):
self.floatvalue = float(val)
@property
def value(self):
return float(self.floatvalue)
def __hash__(self):
return compute_hash(self.value)
    def __eq__(self, other):
        if isinstance(other, Float):
            return self.floatvalue == other.floatvalue
        return False
def _hash(self):
return compute_hash(self.floatvalue)
def _eq(self, other):
if(isinstance(other,Float)):
return self.floatvalue == other.floatvalue
return False
def to_string(self):
return str(self.value)
def equals(self, right):
if isinstance(right,Float):
return Boolean(self.value == right.value)
if isinstance(right, Integer):
return Boolean(self.value == float(right.value))
if isinstance(right, Boolean):
return Boolean(self.value == float(right.to_int()))
raise LogicError("Cannot compare that to float")
def lte(self, right):
if isinstance(right, Integer):
return Boolean(self.value <= float(right.value))
if isinstance(right,Float):
return Boolean(self.value <= right.value)
        raise LogicError("Cannot compare that to float")
def lt(self, right):
if isinstance(right, Integer):
return Boolean(self.value < float(right.value))
        if isinstance(right, Float):
return Boolean(self.value < right.value)
        raise LogicError("Cannot compare float that way")
def gt(self, right):
if isinstance(right, Integer):
return Boolean(self.value > float(right.value))
if isinstance(right,Float):
return Boolean(self.value > right.value)
        raise LogicError("Cannot compare float that way")
def gte(self, right):
if isinstance(right, Integer):
return Boolean(self.value >= float(right.value))
if isinstance(right,Float):
return Boolean(self.value >= right.value)
        raise LogicError("Cannot compare float that way")
def add(self, right):
if isinstance(right, Integer):
return Float(self.value + float(right.value))
if isinstance(right,Float):
return Float(self.value + right.value)
raise LogicError("Cannot add that to float")
def sub(self, right):
if isinstance(right,Float):
return Float(self.value - right.value)
if isinstance(right, Integer):
return Float(self.value - float(right.value))
        raise LogicError("Cannot sub that from float")
def mul(self, right):
if isinstance(right, Integer):
return Float(self.value * float(right.value))
if isinstance(right,Float):
return Float(self.value * right.value)
raise LogicError("Cannot mul that to float")
def div(self, right):
if isinstance(right, Integer):
return Float(self.value / float(right.value))
if isinstance(right,Float):
return Float(self.value / right.value)
raise LogicError("Cannot div that with float")
def dump(self):
return str(self.value)
class String(BaseBox):
def __init__(self, value):
self.value = str(value)
def __hash__(self):
return compute_hash(self.value)
    def __eq__(self, other):
        if isinstance(other, String):
            return self.value == other.value
        return False
def _hash(self):
return compute_hash(self.value)
def _eq(self, other):
if(isinstance(other,String)):
return self.value == other.value
return False
def to_string(self):
return str(self.value)
def equals(self, right):
if isinstance(right, String):
return Boolean(self.value == right.value)
if isinstance(right, Boolean):
length = int(len(self.value) != 0)
return Boolean(length == right.to_int())
raise LogicError("Cannot compare that to string")
    def lte(self, right):
        if isinstance(right, String):
            # lexicographic ordering
            return Boolean(self.value <= right.value)
        raise LogicError("Cannot compare that to string")
def lt(self, right):
raise LogicError("Cannot compare string that way")
def gt(self, right):
raise LogicError("Cannot compare string that way")
    def gte(self, right):
        if isinstance(right, String):
            return Boolean(self.value >= right.value)
        raise LogicError("Cannot compare that to string")
def add(self, right):
if isinstance(right, Integer):
return String(self.value + str(right.value))
if isinstance(right,Float):
return String("%s%s" % (self.value,right.value))
if isinstance(right, String):
return String(self.value + right.value)
raise LogicError("Cannot add that to string")
def sub(self, right):
if isinstance(right, Integer):
sli = len(self.value) - right.value
assert(sli >= 0)
return String(self.value[:sli])
        raise LogicError("Cannot sub that from string")
def mul(self, right):
if isinstance(right, Integer):
return String(self.value * right.value)
raise LogicError("Cannot multiply string with that")
def div(self, right):
raise LogicError("Cannot divide a string")
def index(self, right):
if isinstance(right, Integer):
if right.value >= 0:
return String(str(self.value[right.value]))
raise LogicError("Cannot index with that")
def dump(self):
return str(self.value)
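String's operators are deliberately asymmetric: `add` concatenates (stringifying numbers), `mul` repeats, and `sub` with an Integer n drops the last n characters. A sketch of the `sub` rule with a hypothetical helper:

```python
def string_sub(s, n):
    # drop the last n characters; refuse to drop more than the string holds
    sli = len(s) - n
    assert sli >= 0
    return s[:sli]

print(string_sub("hello", 2))  # hel
print("ab" * 3)                # ababab
```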
class Variable(BaseBox):
def __init__(self, name, value):
self.name = str(name)
self.value = value
def dump(self):
return self.value.dump()
# backend/users/forms.py (cbh4ou/wikipediabook, MIT)
from django.forms import ModelForm
] | null | null | null | from django.forms import ModelForm
from .models import Wikis
import datetime
from django import forms
from django.core.exceptions import ValidationError
from django.utils.translation import ugettext_lazy as _
| 23.444444 | 55 | 0.848341 | 29 | 211 | 6.103448 | 0.551724 | 0.225989 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.123223 | 211 | 8 | 56 | 26.375 | 0.956757 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
# 000403StepPyThin/000403_01_08_vid01_01_initialization_20200220.py (SafonovMikhail/python_000577, Apache-2.0)
a = 2
b = 3
print(a + b)
a = 6
print(a + b)
b = b + 2
print(b)
# print(c)
# msgraph-cli-extensions/beta/calendar_beta/azext_calendar_beta/generated/_client_factory.py (thewahome/msgraph-cli, MIT)
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
#
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is
# regenerated.
# --------------------------------------------------------------------------
def cf_calendar_beta_cl(cli_ctx, *_):
from msgraph.cli.core.commands.client_factory import get_mgmt_service_client
from azext_calendar_beta.vendored_sdks.calendar import Calendar
return get_mgmt_service_client(cli_ctx,
Calendar,
subscription_bound=False,
base_url_bound=False)
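Every `cf_*` helper below follows the same shape: build the shared Calendar client once, then hand back exactly one operation group from it. A minimal sketch of the pattern with stand-in names (this is not the real msgraph client):

```python
class FakeCalendarClient(object):
    # stand-in for the generated Calendar client and its operation groups
    def __init__(self):
        self.groups = "groups-operations"
        self.users = "users-operations"

def cf_client(cli_ctx):
    return FakeCalendarClient()

def cf_group(cli_ctx):
    # each helper just picks one operation group off the shared client
    return cf_client(cli_ctx).groups

print(cf_group(None))  # groups-operations
```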
def cf_group(cli_ctx, *_):
return cf_calendar_beta_cl(cli_ctx).groups
def cf_group_calendar(cli_ctx, *_):
return cf_calendar_beta_cl(cli_ctx).groups_calendar
def cf_group_calendar_calendar_view(cli_ctx, *_):
return cf_calendar_beta_cl(cli_ctx).groups_calendar_calendar_view
def cf_group_calendar_event(cli_ctx, *_):
return cf_calendar_beta_cl(cli_ctx).groups_calendar_events
def cf_group_calendar_view(cli_ctx, *_):
return cf_calendar_beta_cl(cli_ctx).groups_calendar_view
def cf_group_calendar_view_calendar(cli_ctx, *_):
return cf_calendar_beta_cl(cli_ctx).groups_calendar_view_calendar
def cf_group_event(cli_ctx, *_):
return cf_calendar_beta_cl(cli_ctx).groups_events
def cf_group_event_calendar(cli_ctx, *_):
return cf_calendar_beta_cl(cli_ctx).groups_events_calendar
def cf_place_place(cli_ctx, *_):
return cf_calendar_beta_cl(cli_ctx).places_place
def cf_user(cli_ctx, *_):
return cf_calendar_beta_cl(cli_ctx).users
def cf_user_calendar(cli_ctx, *_):
return cf_calendar_beta_cl(cli_ctx).users_calendar
def cf_user_calendar_calendar_view(cli_ctx, *_):
return cf_calendar_beta_cl(cli_ctx).users_calendar_calendar_view
def cf_user_calendar_event(cli_ctx, *_):
return cf_calendar_beta_cl(cli_ctx).users_calendar_events
def cf_user_calendar_group(cli_ctx, *_):
return cf_calendar_beta_cl(cli_ctx).users_calendar_groups
def cf_user_calendar_group_calendar(cli_ctx, *_):
return cf_calendar_beta_cl(cli_ctx).users_calendar_groups_calendars
def cf_user_calendar_group_calendar_calendar_view(cli_ctx, *_):
return cf_calendar_beta_cl(cli_ctx).users_calendar_groups_calendars_calendar_view
def cf_user_calendar_group_calendar_event(cli_ctx, *_):
return cf_calendar_beta_cl(cli_ctx).users_calendar_groups_calendars_events
def cf_user_calendar(cli_ctx, *_):
return cf_calendar_beta_cl(cli_ctx).users_calendars
def cf_user_calendar_calendar_view(cli_ctx, *_):
return cf_calendar_beta_cl(cli_ctx).users_calendars_calendar_view
def cf_user_calendar_event(cli_ctx, *_):
return cf_calendar_beta_cl(cli_ctx).users_calendars_events
def cf_user_calendar_view(cli_ctx, *_):
return cf_calendar_beta_cl(cli_ctx).users_calendar_view
def cf_user_calendar_view_calendar(cli_ctx, *_):
return cf_calendar_beta_cl(cli_ctx).users_calendar_view_calendar
def cf_user_event(cli_ctx, *_):
return cf_calendar_beta_cl(cli_ctx).users_events
def cf_user_event_calendar(cli_ctx, *_):
return cf_calendar_beta_cl(cli_ctx).users_events_calendar
27fe920b93ae0fa1dda23bbab656f6d2e2bf8314 | 67 | py | Python | VerificationEmailC/__init__.py | coder-samurai/VerificationEmail | 12b4f6a082403354332721cd58f80dfb039afa7e | [
"MIT"
] | 1 | 2022-01-01T14:14:33.000Z | 2022-01-01T14:14:33.000Z | VerificationEmailC/__init__.py | coder-samurai/VerificationEmail | 12b4f6a082403354332721cd58f80dfb039afa7e | [
"MIT"
] | null | null | null | VerificationEmailC/__init__.py | coder-samurai/VerificationEmail | 12b4f6a082403354332721cd58f80dfb039afa7e | [
"MIT"
] | null | null | null | from VerificationEmailC.verificationemail import VerificationEmail
| 33.5 | 66 | 0.925373 | 5 | 67 | 12.4 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.059701 | 67 | 1 | 67 | 67 | 0.984127 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fd8f8d3c739cf493979307c8939f0bbfbad5e9bf | 6,447 | py | Python | Hero2Vector/model/hero2vec.py | diorw/dota_analyze_and_prediction | 3f5a6f21ba74fe065bbb5cc2fa8f512986023249 | [
"MIT"
] | null | null | null | Hero2Vector/model/hero2vec.py | diorw/dota_analyze_and_prediction | 3f5a6f21ba74fe065bbb5cc2fa8f512986023249 | [
"MIT"
] | null | null | null | Hero2Vector/model/hero2vec.py | diorw/dota_analyze_and_prediction | 3f5a6f21ba74fe065bbb5cc2fa8f512986023249 | [
"MIT"
] | null | null | null | import torch
import torch.autograd as autograd
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.nn import init
class CBOH(nn.Module):
def __init__(self, heropool_size, embedding_dim):
"""
Initialize an NN with one hidden layer. Weight of the hidden layer is
the embedding.
inputs:
heropool_size: int
embedding_dim: int
"""
super().__init__()
self.embedding_dim = embedding_dim
self.embeddings = nn.Embedding(heropool_size, embedding_dim)
# self.lstm = nn.LSTM(embedding_dim,embedding_dim)
self.affine = nn.Linear(embedding_dim, heropool_size)
self.init_emb()
def init_emb(self):
"""
init embeddings and affine layer
"""
initrange = 0.5 / self.embedding_dim
self.embeddings.weight.data.uniform_(-initrange, initrange)
self.affine.weight.data.uniform_(-0, 0)
self.affine.bias.data.zero_()
def forward(self, inputs):
"""
inputs:
inputs: torch.autograd.Variable, size = (N, 5)
returns:
out: torch.autograd.Variable, size = (N, heropool_size)
"""
        # sum the five context-hero embeddings into one vector (contiguous)
        embeds = self.embeddings(inputs).sum(dim=1)
        out = self.affine(embeds)
        return out
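CBOH's forward pass is simply "sum the teammate embeddings, then one affine layer scoring every hero in the pool". A dependency-free sketch with toy dimensions; all shapes and values below are illustrative, not taken from the model:

```python
heropool_size, embedding_dim = 6, 3
emb = [[0.1 * (h + d) for d in range(embedding_dim)] for h in range(heropool_size)]
W = [[0.01 * (d + k) for k in range(heropool_size)] for d in range(embedding_dim)]
bias = [0.0] * heropool_size

def cboh_forward(team):
    # sum the teammate embeddings into one context vector...
    h = [sum(emb[hero][d] for hero in team) for d in range(embedding_dim)]
    # ...then a single affine layer yields one score per hero in the pool
    return [sum(h[d] * W[d][k] for d in range(embedding_dim)) + bias[k]
            for k in range(heropool_size)]

scores = cboh_forward([0, 1, 2, 3, 4])
print(len(scores))  # 6
```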
class CBOHBilayer(nn.Module):
def __init__(self, heropool_size, embedding_dim, hidden_dim=10):
"""
Initialize an NN with two hidden layers. Weight of the first hidden
layer is the embedding.
inputs:
heropool_size: int
embedding_dim: int
hidden_dim: int
"""
super().__init__()
self.embedding_dim = embedding_dim
self.hidden_dim = hidden_dim
self.embeddings = nn.Embedding(heropool_size, embedding_dim)
#Initialize 2nd hidden layer with dimension = hidden_dim
self.linear1 = nn.Linear(embedding_dim, hidden_dim)
self.relu1 = nn.ReLU()
self.affine = nn.Linear(hidden_dim, heropool_size)
self.init_emb()
def init_emb(self):
"""
init embeddings and affine layer. The weight of the 2nd hidden layer is
initialized by Kaiming_norm.
"""
initrange = 0.5 / self.embedding_dim
self.embeddings.weight.data.uniform_(-initrange, initrange)
        init.kaiming_normal_(self.linear1.weight.data)
self.linear1.bias.data.zero_()
self.affine.weight.data.uniform_(-0, 0)
self.affine.bias.data.zero_()
def forward(self, inputs):
"""
inputs:
inputs: torch.autograd.Variable, size = (N, 5)
returns:
out: torch.autograd.Variable, size = (N, heropool_size)
"""
        embeds = self.embeddings(inputs).sum(dim=1)  # contiguous
pipe = nn.Sequential(self.linear1, self.relu1, self.affine)
out = pipe(embeds)
return out
class CBOHTrilayer(nn.Module):
def __init__(self, heropool_size, embedding_dim, hidden_dim=10,
affine_dim=10):
"""
Initialize an NN with three hidden layers. Weight of the first hidden
layer is the embedding.
inputs:
heropool_size: int
embedding_dim: int
hidden_dim: int
affine_dim: int
"""
super().__init__()
self.embedding_dim = embedding_dim
self.affine_dim = affine_dim
self.embeddings = nn.Embedding(heropool_size, embedding_dim)
#Initialize 2nd hidden layer with dimension = hidden_dim
self.linear1 = nn.Linear(embedding_dim, hidden_dim)
self.relu1 = nn.ReLU()
#Initialize 3rd hidden layer with dimension = affine_dim
self.linear2 = nn.Linear(hidden_dim, affine_dim)
self.relu2 = nn.ReLU()
self.affine = nn.Linear(affine_dim, heropool_size)
self.init_emb()
def init_emb(self):
"""
init embeddings and affine layer. The weights of the 2nd and 3rd hidden
layers are initialized by Kaiming_norm.
"""
initrange = 0.5 / self.embedding_dim
self.embeddings.weight.data.uniform_(-initrange, initrange)
        init.kaiming_normal_(self.linear1.weight.data)
self.linear1.bias.data.zero_()
        init.kaiming_normal_(self.linear2.weight.data)
self.linear2.bias.data.zero_()
self.affine.weight.data.uniform_(-0, 0)
self.affine.bias.data.zero_()
def forward(self, inputs):
"""
inputs:
inputs: torch.autograd.Variable, size = (N, 5)
returns:
out: torch.autograd.Variable, size = (N, heropool_size)
"""
embeds = self.embeddings(inputs).sum(dim=1)
pipe = nn.Sequential(self.linear1, self.relu1, self.linear2, self.relu2)
# skip connection to assist gradient flow
if self.embedding_dim == self.affine_dim:
out = self.affine(pipe(embeds) + embeds)
else:
out = self.affine(pipe(embeds))
return out
class CBOHLstm(nn.Module):
def __init__(self, heropool_size, embedding_dim):
"""
Initialize an NN with one hidden layer. Weight of the hidden layer is
the embedding.
inputs:
heropool_size: int
embedding_dim: int
"""
super().__init__()
self.embedding_dim = embedding_dim
self.embeddings = nn.Embedding(heropool_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim,embedding_dim)
self.affine = nn.Linear(embedding_dim, heropool_size)
self.init_emb()
def init_emb(self):
"""
init embeddings and affine layer
"""
initrange = 0.5 / self.embedding_dim
self.embeddings.weight.data.uniform_(-initrange, initrange)
self.affine.weight.data.uniform_(-0, 0)
self.affine.bias.data.zero_()
def forward(self, inputs):
"""
inputs:
inputs: torch.autograd.Variable, size = (N, 5)
returns:
out: torch.autograd.Variable, size = (N, heropool_size)
"""
        embeds = self.embeddings(inputs).sum(dim=1)  # contiguous
lstm_out, _ = self.lstm(embeds.view(len(inputs), 1, -1))
# tag_space = self.hidden2tag(lstm_out.view(len(sentence), -1))
out = self.affine(lstm_out.view(len(inputs), -1))
        return out
# tests/test_profiles.py (andrewmilas10/courier-python, MIT)
import responses
import pytest
from trycourier.client import Courier
from trycourier.exceptions import CourierAPIException
@responses.activate
def test_success_profiles_get():
responses.add(
responses.GET,
'https://api.courier.com/profiles/profile.id',
status=200,
content_type='application/json',
body='{"profile":{}}'
)
c = Courier(auth_token='123456789ABCDF')
r = c.profiles.get("profile.id")
assert r == {'profile':{}}
@responses.activate
def test_fail_profiles_get():
responses.add(
responses.GET,
'https://api.courier.com/profiles/profile.id',
status=400,
content_type='application/json',
body='{"message": "Not Found"}'
)
c = Courier(auth_token='123456789ABCDF')
with pytest.raises(CourierAPIException):
c.profiles.get('profile.id')
@responses.activate
def test_success_profiles_get_subscriptions():
responses.add(
responses.GET,
'https://api.courier.com/profiles/profile.id/lists',
status=200,
content_type='application/json',
body='{"paging":{}, "results": []}'
)
c = Courier(auth_token='123456789ABCDF')
r = c.profiles.get_subscriptions('profile.id')
assert r == {'paging':{}, 'results':[]}
@responses.activate
def test_success_profiles_get_subscriptions_with_params():
responses.add(
responses.GET,
'https://api.courier.com/profiles/profile.id/lists?cursor=456',
status=200,
content_type='application/json',
body='{"paging":{}, "results": []}'
)
c = Courier(auth_token='123456789ABCDF')
r = c.profiles.get_subscriptions(recipient_id='profile.id', cursor="456")
assert r == {'paging':{}, 'results':[]}
@responses.activate
def test_fail_profiles_get_subscriptions():
responses.add(
responses.GET,
'https://api.courier.com/profiles/profile.id/lists',
status=400,
content_type='application/json',
body='{"message": "Not Found"}'
)
c = Courier(auth_token='123456789ABCDF')
with pytest.raises(CourierAPIException):
c.profiles.get_subscriptions("profile.id")
@responses.activate
def test_success_profiles_add():
responses.add(
responses.PUT,
'https://api.courier.com/profiles/profile.id',
status=200,
content_type='application/json',
body='{"status": "SUCCESS"}'
)
profile = {
"email": "jane@doe.com"
}
c = Courier(auth_token='123456789ABCDF')
r = c.profiles.add("profile.id", profile)
assert r == {"status": "SUCCESS"}
@responses.activate
def test_success_profiles_replace():
responses.add(
responses.PUT,
'https://api.courier.com/profiles/profile.id',
status=200,
content_type='application/json',
body='{"status": "SUCCESS"}'
)
profile={
"email":"jane@doe.com"
}
c = Courier(auth_token='123456789ABCDF')
r = c.profiles.replace("profile.id", profile)
assert r == {"status":"SUCCESS"}
@responses.activate
def test_fail_profiles_replace():
responses.add(
responses.PUT,
'https://api.courier.com/profiles/profile.id',
status=400,
content_type='application/json',
body='{"message": "An error occured"}'
)
profile = {
"email": "jane@doei.com"
}
c = Courier(auth_token='123456789ABCDF')
with pytest.raises(CourierAPIException):
c.profiles.replace("profile.id", profile)
@responses.activate
def test_success_profiles_merge():
responses.add(
responses.POST,
'https://api.courier.com/profiles/profile.id',
status=200,
content_type='application/json',
body='{"status": "SUCCESS"}'
)
profile = {
"email": "jane@doe.com"
}
c = Courier(auth_token='123456789ABCDF')
r = c.profiles.merge("profile.id", profile)
assert r == {"status": "SUCCESS"}
@responses.activate
def test_success_profiles_merge_idempotent():
responses.add(
responses.POST,
'https://api.courier.com/profiles/profile.id',
status=200,
content_type='application/json',
body='{"status": "SUCCESS"}'
)
profile = {
"email": "text@example.com"
}
c = Courier(auth_token='123456789ABCDF')
r = c.profiles.merge("profile.id", profile, idempotency_key="1234ABCD")
assert responses.calls[0].request.headers.get(
'Idempotency-Key') == '1234ABCD'
assert r == {"status": "SUCCESS"}
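The idempotency test above only checks that the header reaches the wire; server-side, such a key lets retries replay the first result instead of merging twice. A hypothetical sketch of that contract (not Courier's actual implementation):

```python
_results_by_key = {}

def handle_merge(idempotency_key, payload):
    # replay the cached result for a repeated key instead of re-applying
    if idempotency_key in _results_by_key:
        return _results_by_key[idempotency_key]
    result = {"status": "SUCCESS", "merged": dict(payload)}
    _results_by_key[idempotency_key] = result
    return result

first = handle_merge("1234ABCD", {"email": "text@example.com"})
second = handle_merge("1234ABCD", {"email": "text@example.com"})
print(first is second)  # True
```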
@responses.activate
def test_fail_profiles_merge():
responses.add(
responses.POST,
'https://api.courier.com/profiles/profile.id',
status=400,
content_type='application/json',
body='{"message": "An error occured"}'
)
profile = {
"email": "text@example.com"
}
c = Courier(auth_token='123456789ABCDF')
with pytest.raises(CourierAPIException):
c.profiles.merge("profile.id", profile)
@responses.activate
def test_success_profiles_patch():
responses.add(
responses.PATCH,
'https://api.courier.com/profiles/profile.id',
status=200,
content_type='application/json',
body='{"status": "SUCCESS"}'
)
operations=[
{
"op":"add",
"path": "/number",
"value": 4
},
{
"op":"replace",
"path": "/number",
"value": 5
},
{
"op":"copy",
"from":"/number",
"path":"/test_num"
}
]
c = Courier(auth_token='123456789ABCDF')
r = c.profiles.patch("profile.id", operations)
assert r == {"status": "SUCCESS"}
@responses.activate
def test_fail_profiles_patch():
responses.add(
responses.PATCH,
'https://api.courier.com/profiles/profile.id',
status=400,
content_type='application/json',
body='{"message": "An error occured"}'
)
profile = {
"email": "text@example.com"
}
c = Courier(auth_token='123456789ABCDF')
with pytest.raises(CourierAPIException):
        c.profiles.patch("profile.id", profile)
# cla_backend/apps/call_centre/tests/api/test_event_api.py (uk-gov-mirror/ministryofjustice.cla_backend, MIT)
from rest_framework.test import APITestCase
from legalaid.tests.views.test_base import CLAOperatorAuthBaseApiTestMixin
from cla_eventlog.tests.test_views import EventAPIMixin
class EventViewSetTestCase(CLAOperatorAuthBaseApiTestMixin, EventAPIMixin, APITestCase):
pass
| 30.666667 | 88 | 0.873188 | 28 | 276 | 8.464286 | 0.607143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.086957 | 276 | 8 | 89 | 34.5 | 0.940476 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.2 | 0.6 | 0 | 0.8 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
# textattack/datasets/translation/__init__.py (cclauss/TextAttack, MIT)
from .translation_datasets import *
| 18 | 35 | 0.833333 | 4 | 36 | 7.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 36 | 1 | 36 | 36 | 0.90625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e30d5ca713ebd350074f1d4355fd8852d732724a | 1,368 | py | Python | news/migrations/0003_auto_20160203_1845.py | n2o/guhema | eb390cbb5213a5ae16539ea46d473a5dc1866415 | [
"MIT"
] | null | null | null | news/migrations/0003_auto_20160203_1845.py | n2o/guhema | eb390cbb5213a5ae16539ea46d473a5dc1866415 | [
"MIT"
] | 2 | 2016-01-20T22:21:33.000Z | 2016-01-29T08:50:21.000Z | news/migrations/0003_auto_20160203_1845.py | n2o/guhema | eb390cbb5213a5ae16539ea46d473a5dc1866415 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by Django 1.9.2 on 2016-02-03 18:45
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('news', '0002_auto_20151108_1957'),
]
operations = [
migrations.AddField(
model_name='entry',
name='content_de',
field=models.TextField(null=True, verbose_name='Inhalt'),
),
migrations.AddField(
model_name='entry',
name='content_en',
field=models.TextField(null=True, verbose_name='Inhalt'),
),
migrations.AddField(
model_name='entry',
name='content_ru',
field=models.TextField(null=True, verbose_name='Inhalt'),
),
migrations.AddField(
model_name='entry',
name='title_de',
field=models.CharField(max_length=50, null=True, verbose_name='Titel'),
),
migrations.AddField(
model_name='entry',
name='title_en',
field=models.CharField(max_length=50, null=True, verbose_name='Titel'),
),
migrations.AddField(
model_name='entry',
name='title_ru',
field=models.CharField(max_length=50, null=True, verbose_name='Titel'),
),
]
| 29.73913 | 83 | 0.570906 | 142 | 1,368 | 5.295775 | 0.366197 | 0.143617 | 0.183511 | 0.215426 | 0.734043 | 0.734043 | 0.734043 | 0.670213 | 0.670213 | 0.670213 | 0 | 0.039832 | 0.302632 | 1,368 | 45 | 84 | 30.4 | 0.748428 | 0.048977 | 0 | 0.631579 | 1 | 0 | 0.11094 | 0.01772 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.052632 | 0 | 0.131579 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e3328eacad61b98c25d335d011775964d7b273ec | 28 | py | Python | openwater/tests/__init__.py | flowmatters/openwater | 8c48fc1694f54c2735a7ac451fcce56df498e520 | [
"MIT"
] | 1 | 2020-02-12T11:17:02.000Z | 2020-02-12T11:17:02.000Z | openwater/tests/__init__.py | flowmatters/openwater | 8c48fc1694f54c2735a7ac451fcce56df498e520 | [
"MIT"
] | null | null | null | openwater/tests/__init__.py | flowmatters/openwater | 8c48fc1694f54c2735a7ac451fcce56df498e520 | [
"MIT"
] | 1 | 2020-02-27T13:58:14.000Z | 2020-02-27T13:58:14.000Z |
from . import system_test
| 7 | 25 | 0.75 | 4 | 28 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.214286 | 28 | 3 | 26 | 9.333333 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8b788c98316586bc9fae89e47e17c76252178f31 | 190 | py | Python | plugins/atomio/__init__.py | nielsvm/toggle-desktop | 2f060d03ad1b36c8e01e43c012fc877ba1bd9f0c | [
"BSD-3-Clause"
] | 1 | 2018-07-23T07:42:40.000Z | 2018-07-23T07:42:40.000Z | plugins/atomio/__init__.py | nielsvm/toggle-desktop | 2f060d03ad1b36c8e01e43c012fc877ba1bd9f0c | [
"BSD-3-Clause"
] | null | null | null | plugins/atomio/__init__.py | nielsvm/toggle-desktop | 2f060d03ad1b36c8e01e43c012fc877ba1bd9f0c | [
"BSD-3-Clause"
] | 1 | 2015-03-17T22:46:09.000Z | 2015-03-17T22:46:09.000Z | __all__ = ['find_replace', 'config_find_replace']
from plugins.atomio import *
from core.path import register_path_prefix, user
@register_path_prefix
def ATOMDIR():
return user('.atom')
| 27.142857 | 49 | 0.773684 | 26 | 190 | 5.230769 | 0.653846 | 0.161765 | 0.264706 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115789 | 190 | 6 | 50 | 31.666667 | 0.809524 | 0 | 0 | 0 | 0 | 0 | 0.189474 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.333333 | 0.166667 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
8b8d61e094642328f7fbf834e0abfa51dd9655c9 | 26 | py | Python | hello/hello.py | Cpt-Meow/API | 17854abac4970a38b899b2ce8c31d7a521bdf71c | [
"Apache-2.0"
] | null | null | null | hello/hello.py | Cpt-Meow/API | 17854abac4970a38b899b2ce8c31d7a521bdf71c | [
"Apache-2.0"
] | null | null | null | hello/hello.py | Cpt-Meow/API | 17854abac4970a38b899b2ce8c31d7a521bdf71c | [
"Apache-2.0"
] | null | null | null | print('Hello from Aline')
| 13 | 25 | 0.730769 | 4 | 26 | 4.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115385 | 26 | 1 | 26 | 26 | 0.826087 | 0 | 0 | 0 | 0 | 0 | 0.615385 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
4734afe1b61e693c0590651d7f680a42f31b7005 | 165 | py | Python | coffin/contrib/auth/urls.py | spothero/coffin | 9ea6a9173cbfed592c5b4776c489dba8d9280d52 | [
"BSD-3-Clause"
] | 1 | 2016-11-19T06:32:20.000Z | 2016-11-19T06:32:20.000Z | coffin/contrib/auth/urls.py | spothero/coffin | 9ea6a9173cbfed592c5b4776c489dba8d9280d52 | [
"BSD-3-Clause"
] | null | null | null | coffin/contrib/auth/urls.py | spothero/coffin | 9ea6a9173cbfed592c5b4776c489dba8d9280d52 | [
"BSD-3-Clause"
] | 1 | 2019-08-14T09:51:23.000Z | 2019-08-14T09:51:23.000Z | import inspect
from django.contrib.auth import urls
exec(inspect.getsource(urls)
     .replace('django.contrib.auth.views', 'coffin.contrib.auth.views')) | 27.5 | 74 | 0.763636 | 21 | 165 | 6 | 0.619048 | 0.261905 | 0.269841 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115152 | 165 | 6 | 74 | 27.5 | 0.863014 | 0 | 0 | 0 | 0 | 0 | 0.301205 | 0.301205 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.5 | null | null | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
473634a3b77f9cba29b2ea7ac2b2b6ce4dfd9f12 | 121 | py | Python | model/__init__.py | CFM-MSG/Code_SelectiveHCN | 1f624c5debd03925f0732d1d732c69c46d1fc39d | [
"MIT"
] | 2 | 2021-10-12T05:18:57.000Z | 2022-03-23T13:11:42.000Z | model/__init__.py | CFM-MSG/Code_SelectiveHCN | 1f624c5debd03925f0732d1d732c69c46d1fc39d | [
"MIT"
] | null | null | null | model/__init__.py | CFM-MSG/Code_SelectiveHCN | 1f624c5debd03925f0732d1d732c69c46d1fc39d | [
"MIT"
] | null | null | null | from . import agc_layer
from . import selectscale_hc
from . import selectframe_tc
from . import model
from . import utils | 24.2 | 28 | 0.801653 | 18 | 121 | 5.222222 | 0.555556 | 0.531915 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.157025 | 121 | 5 | 29 | 24.2 | 0.921569 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
47470a6b6cf6c9dbce39259527a8bc636b92e747 | 2,762 | py | Python | tests/functional/test_media.py | jounile/nollanet | 7bea20934d3f5e09658a9d31c3b05c15416398a0 | [
"MIT"
] | 3 | 2019-10-13T08:37:13.000Z | 2020-02-16T12:24:11.000Z | tests/functional/test_media.py | jounile/nollanet | 7bea20934d3f5e09658a9d31c3b05c15416398a0 | [
"MIT"
] | 5 | 2019-11-13T15:56:52.000Z | 2021-04-30T20:58:19.000Z | tests/functional/test_media.py | jounile/nollanet | 7bea20934d3f5e09658a9d31c3b05c15416398a0 | [
"MIT"
] | 1 | 2020-04-08T21:09:52.000Z | 2020-04-08T21:09:52.000Z | import pytest
from requests import get
from urllib.parse import urljoin
def test_new_media_page(wait_for_api, login_user):
"""
GIVEN a user has logged in (login_user)
WHEN the '/media/newmedia' page is navigated to (GET)
THEN check the response is valid and page title is correct
"""
request_session, api_url = wait_for_api
response = request_session.get(urljoin(api_url, '/media/newmedia'))
assert response.status_code == 200
assert '<h1>New media</h1>' in response.text
def test_update_media_page(wait_for_api, login_user):
"""
GIVEN a user has logged in (login_user)
WHEN the '/media/update/1' page is navigated to (GET)
THEN check the response is valid and page title is correct
"""
request_session, api_url = wait_for_api
response = request_session.get(urljoin(api_url, '/media/update/1'))
assert response.status_code == 200
assert '<h1>Update media</h1>' in response.text
def test_hidden_photo(wait_for_api, login_user):
"""
GIVEN a user has logged in (login_user)
    WHEN the '/media/newmedia' is submitted to create a story (POST)
THEN check the response is valid and flash message is correct
"""
new_hidden_media = dict(mediatype_id=1, genre_id=1, storytype_id=2, country_id=1, media_topic='New topic', media_desc='description', media_text='Text content', hidden=1)
request_session, api_url = wait_for_api
response = request_session.post(urljoin(api_url, '/media/newmedia'), data=new_hidden_media, allow_redirects=True)
assert response.status_code == 200
assert '<div class="flash">New media added</div>' in response.text
def test_valid_photo(wait_for_api, login_user):
"""
GIVEN a user has logged in (login_user)
    WHEN the '/media/newmedia' is submitted to create a story (POST)
THEN check the response is valid and flash message is correct
"""
new_media = dict(mediatype_id=1, genre_id=1, storytype_id=4, country_id=1, media_topic='New photo topic', media_desc='Description', media_text='Text content', hidden=None)
request_session, api_url = wait_for_api
response = request_session.post(urljoin(api_url, '/media/newmedia'), data=new_media, allow_redirects=True)
assert response.status_code == 200
assert '<div class="flash">New media added</div>' in response.text
def test_new_media_page(wait_for_api, login_user):
"""
GIVEN a user has logged in (login_user)
WHEN the '/media/newmedia' page is navigated to (GET)
THEN check the response is valid and page title is correct
"""
request_session, api_url = wait_for_api
response = request_session.get(urljoin(api_url, '/media/newmedia'))
assert response.status_code == 200
assert '<h1>New media</h1>' in response.text | 46.033333 | 175 | 0.727009 | 427 | 2,762 | 4.491803 | 0.175644 | 0.036496 | 0.052138 | 0.039103 | 0.936392 | 0.936392 | 0.912409 | 0.875912 | 0.875912 | 0.822732 | 0 | 0.014017 | 0.173425 | 2,762 | 60 | 176 | 46.033333 | 0.826106 | 0.286025 | 0 | 0.6 | 0 | 0 | 0.151858 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.166667 | false | 0 | 0.1 | 0 | 0.266667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
474ff534fdac535ba10b2e37062c27796f440747 | 45,615 | py | Python | MoRT/mcm_textgeneration/mcm_models.py | ml-research/MoRT_NMI | 98dc14f42714b1b794d685507c01b593cde5638c | [
"MIT"
] | 4 | 2021-04-04T13:42:34.000Z | 2021-11-29T15:38:50.000Z | MoRT/mcm_textgeneration/mcm_models.py | ml-research/MoRT_NMI | 98dc14f42714b1b794d685507c01b593cde5638c | [
"MIT"
] | null | null | null | MoRT/mcm_textgeneration/mcm_models.py | ml-research/MoRT_NMI | 98dc14f42714b1b794d685507c01b593cde5638c | [
"MIT"
] | null | null | null | # coding=utf-8
# adapted from https://github.com/huggingface/transformers/tree/master/src/transformers/modeling_utils.py
import transformers
from transformers import GPT2LMHeadModel
import torch
import warnings
import torch.nn as nn
from torch import Tensor
from torch.nn import functional as F
from torch.nn import CrossEntropyLoss
from transformers.file_utils import add_start_docstrings_to_callable
# dirty hack to add mort to path
import sys
import os
sys.path.append(os.path.join(os.getcwd(), "../"))
from mort.funcs_mcm import BERTSentenceSubspace, MoRTSentenceSubspace
# for windows
# import ctypes
# ctypes.cdll.LoadLibrary('caffe2_nvrtc.dll')
GPT2_INPUTS_DOCSTRING = r"""
Args:
input_ids (:obj:`torch.LongTensor` of shape :obj:`(batch_size, input_ids_length)`):
            `input_ids_length` = `sequence_length` if `past` is None else 1
Indices of input sequence tokens in the vocabulary.
If using `past` as an input make sure that `input_ids` are those of the last position.
Indices can be obtained using :class:`transformers.GPT2Tokenizer`.
See :func:`transformers.PreTrainedTokenizer.encode` and
:func:`transformers.PreTrainedTokenizer.encode_plus` for details.
`What are input IDs? <../glossary.html#input-ids>`__
past (:obj:`List[torch.FloatTensor]` of length :obj:`config.n_layers`):
Contains pre-computed hidden-states (key and values in the attention blocks) as computed by the model
(see `past` output below). Can be used to speed up sequential decoding. The token ids which have their past given to this model
should not be passed as input ids as they have already been computed.
attention_mask (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`, defaults to :obj:`None`):
Mask to avoid performing attention on padding token indices.
Mask values selected in ``[0, 1]``:
``1`` for tokens that are NOT MASKED, ``0`` for MASKED tokens.
`What are attention masks? <../glossary.html#attention-mask>`__
token_type_ids (:obj:`torch.LongTensor` of shape :obj:`(batch_size, input_ids_length)`, `optional`, defaults to :obj:`None`):
            `input_ids_length` = `sequence_length` if `past` is None else 1
Segment token indices to indicate first and second portions of the inputs.
Indices are selected in ``[0, 1]``: ``0`` corresponds to a `sentence A` token, ``1``
corresponds to a `sentence B` token
If using `past` as an input make sure that `token_type_ids` correspond to the `input_ids` of the last position.
`What are token type IDs? <../glossary.html#token-type-ids>`_
position_ids (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`, defaults to :obj:`None`):
Indices of positions of each input sequence tokens in the position embeddings.
Selected in the range ``[0, config.max_position_embeddings - 1]``.
`What are position IDs? <../glossary.html#position-ids>`_
head_mask (:obj:`torch.FloatTensor` of shape :obj:`(num_heads,)` or :obj:`(num_layers, num_heads)`, `optional`, defaults to :obj:`None`):
Mask to nullify selected heads of the self-attention modules.
Mask values selected in ``[0, 1]``:
:obj:`1` indicates the head is **not masked**, :obj:`0` indicates the head is **masked**.
input_embeds (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`, `optional`, defaults to :obj:`None`):
Optionally, instead of passing :obj:`input_ids` you can choose to directly pass an embedded representation.
This is useful if you want more control over how to convert `input_ids` indices into associated vectors
than the model's internal embedding lookup matrix.
"""
class BERTGRUSentiment(nn.Module):
def __init__(self,
bert,
hidden_dim,
output_dim,
n_layers,
bidirectional,
dropout):
super().__init__()
self.bert = bert
embedding_dim = bert.config.to_dict()['hidden_size']
self.rnn = nn.GRU(embedding_dim,
hidden_dim,
num_layers=n_layers,
bidirectional=bidirectional,
batch_first=True,
dropout=0 if n_layers < 2 else dropout)
self.out = nn.Linear(hidden_dim * 2 if bidirectional else hidden_dim, output_dim)
self.dropout = nn.Dropout(dropout)
def forward(self, text):
# text = [batch size, sent len]
with torch.no_grad():
embedded = self.bert(text)[0]
# embedded = [batch size, sent len, emb dim]
_, hidden = self.rnn(embedded)
# hidden = [n layers * n directions, batch size, emb dim]
if self.rnn.bidirectional:
hidden = self.dropout(torch.cat((hidden[-2, :, :], hidden[-1, :, :]), dim=1))
else:
hidden = self.dropout(hidden[-1, :, :])
# hidden = [batch size, hid dim]
output = self.out(hidden)
# output = [batch size, out dim]
return output
class GPT2MCMLMHeadModel(GPT2LMHeadModel):
def __init__(self, config):
super().__init__(config)
self.transformer = transformers.GPT2Model(config)
self.lm_head = nn.Linear(config.n_embd, config.vocab_size, bias=False)
# by default, do not use the mcm until setup is performed
self.use_mcm = False
self.mcm = None
self.mcm_tokenizer = None
self.mcm_threshold = 0
self.min_token_number = 1
self.save_edge_cases = False
self.init_weights()
def get_output_embeddings(self):
return self.lm_head
def prepare_inputs_for_generation(self, input_ids, past, **kwargs):
# only last token for inputs_ids if past is defined in kwargs
if past:
input_ids = input_ids[:, -1].unsqueeze(-1)
return {"input_ids": input_ids, "past": past, "use_cache": kwargs["use_cache"]}
@add_start_docstrings_to_callable(GPT2_INPUTS_DOCSTRING)
def forward(
self,
input_ids=None,
past_key_values=None,
attention_mask=None,
token_type_ids=None,
position_ids=None,
head_mask=None,
inputs_embeds=None,
encoder_hidden_states=None,
encoder_attention_mask=None,
labels=None,
use_cache=None,
output_attentions=None,
output_hidden_states=None,
return_dict=None,
**kwargs,
):
r"""
labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):
Labels for language modeling.
Note that the labels **are shifted** inside the model, i.e. you can set ``labels = input_ids``
Indices are selected in ``[-100, 0, ..., config.vocab_size]``
All labels set to ``-100`` are ignored (masked), the loss is only
computed for labels in ``[0, ..., config.vocab_size]``
"""
if "past" in kwargs:
warnings.warn(
"The `past` argument is deprecated and will be removed in a future version, use `past_key_values` instead.",
FutureWarning,
)
past_key_values = kwargs.pop("past")
assert kwargs == {}, f"Unexpected keyword arguments: {list(kwargs.keys())}."
return_dict = return_dict if return_dict is not None else self.config.use_return_dict
transformer_outputs = self.transformer(
input_ids,
past_key_values=past_key_values,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
position_ids=position_ids,
head_mask=head_mask,
inputs_embeds=inputs_embeds,
encoder_hidden_states=encoder_hidden_states,
encoder_attention_mask=encoder_attention_mask,
use_cache=use_cache,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
return_dict=return_dict,
)
hidden_states = transformer_outputs[0]
lm_logits = self.lm_head(hidden_states)
loss = None
if labels is not None:
# Shift so that tokens < n predict n
shift_logits = lm_logits[..., :-1, :].contiguous()
shift_labels = labels[..., 1:].contiguous()
# Flatten the tokens
loss_fct = CrossEntropyLoss()
loss = loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1))
if not return_dict:
output = (lm_logits,) + transformer_outputs[1:]
return ((loss,) + output) if loss is not None else output
return transformers.modeling_outputs.CausalLMOutputWithPast(
loss=loss,
logits=lm_logits,
past_key_values=transformer_outputs.past_key_values,
hidden_states=transformer_outputs.hidden_states,
attentions=transformer_outputs.attentions,
)
def setup_mcm(self, device="cpu", transformer_model='bert-large-nli-mean-tokens',
mcm_tokenizer=None, threshold=0., use_mort=False, min_token_number=1,
save_edge_cases=False, file_descriptor="", use_mcm=True, working_path=None):
print("Running setup_mcm")
self.mcm_tokenizer = mcm_tokenizer
self.mcm_threshold = threshold
self.min_token_number = min_token_number
self.save_edge_cases = save_edge_cases
self.file_descriptor = file_descriptor
if transformer_model == "pytorch-sentiment-analysis":
sentiment_model = BERTGRUSentiment(transformers.BertModel.from_pretrained('bert-base-uncased'), 256, 1, 2, True, 0.25).to(torch.device(device))
# expects model to be in same folder, can currently be found on server: /home/ml-stud-jhonermann/tut6-model.pt
sentiment_model.load_state_dict(torch.load('tut6-model.pt'))
sentiment_model.eval()
tokenizer = transformers.BertTokenizer.from_pretrained('bert-base-uncased')
def predict_sentiment_(sentence):
tokens = tokenizer.tokenize(sentence)
tokens = tokens[:tokenizer.max_model_input_sizes['bert-base-uncased'] - 2]
indexed = [tokenizer.cls_token_id] + tokenizer.convert_tokens_to_ids(tokens) + [tokenizer.sep_token_id]
tensor = torch.LongTensor(indexed).to(torch.device(device))
tensor = tensor.unsqueeze(0)
prediction = torch.sigmoid(sentiment_model(tensor))
return [[prediction.item(), sentence], []]
self.mcm = predict_sentiment_
else:
            # MoRT currently requires changing line 226 in mort/funcs_mcm.py to bias_with_action[0]))
            # and line 999 in mort/dataMoral.py to data = np.genfromtxt('../data/parsed_yes_no_BERTBias.csv', skip_header=1, dtype=str, delimiter=",")
if use_mort:
mcm_model = MoRTSentenceSubspace(device=device,
transormer_model=transformer_model,
filename_pickled_cluster=None,
pca_framework='sklearn',
eval_model_path=os.path.join(os.getcwd(),
"../",
"mort",
"adaptBias",
"results",
transformer_model,
"bert_model_adapted_run1",
"adapted_best_model.pt"))
def mcm_(query):
return mcm_model.bias(query, norm=mcm_model.norm, qa_template=False)
self.mcm = mcm_
else:
mcm_model = BERTSentenceSubspace(device=device,
transormer_model=transformer_model,
filename_pickled_cluster=None,
pca_framework='sklearn', working_path=working_path)
def mcm_(query):
return mcm_model.bias(query, norm=8.946814, qa_template=False)
self.mcm = mcm_
self.use_mcm = use_mcm
def _generate_no_beam_search(
self,
input_ids,
cur_len,
max_length,
min_length,
do_sample,
temperature,
top_k,
top_p,
repetition_penalty,
no_repeat_ngram_size,
bad_words_ids,
pad_token_id,
eos_token_id,
batch_size,
attention_mask,
use_cache,
model_kwargs,
):
"""Generate sequences for each example without beam search (num_beams == 1).
        All returned sequences are generated independently.
"""
# length of generated sentences / unfinished sentences
unfinished_sents = input_ids.new(batch_size).fill_(1)
sent_lengths = input_ids.new(batch_size).fill_(max_length)
past = None
while cur_len < max_length:
model_inputs = self.prepare_inputs_for_generation(
input_ids, past=past, attention_mask=attention_mask, use_cache=use_cache, **model_kwargs
)
outputs = self(**model_inputs, return_dict=True)
next_token_logits = outputs.logits[:, -1, :]
scores = self.postprocess_next_token_scores(
scores=next_token_logits,
input_ids=input_ids,
no_repeat_ngram_size=no_repeat_ngram_size,
bad_words_ids=bad_words_ids,
cur_len=cur_len,
min_length=min_length,
max_length=max_length,
eos_token_id=eos_token_id,
repetition_penalty=repetition_penalty,
batch_size=batch_size,
num_beams=1,
)
# if model has past, then set the past variable to speed up decoding
if "past_key_values" in outputs:
past = outputs.past_key_values
elif "mems" in outputs:
past = outputs.mems
if do_sample:
# Temperature (higher temperature => more likely to sample low probability tokens)
if temperature != 1.0:
scores = scores / temperature
# Top-p/top-k filtering
next_token_logscores = top_k_top_p_filtering(scores, input_ids=input_ids, mcm=self.mcm,
use_mcm=self.use_mcm, tokenizer=self.mcm_tokenizer,
top_k=top_k, top_p=top_p, threshold=self.mcm_threshold,
mcm_keep_at_least=self.min_token_number,
edge_cases_file=self.file_descriptor if self.save_edge_cases else "")
# Sample
probs = F.softmax(next_token_logscores, dim=-1)
next_token = torch.multinomial(probs, num_samples=1).squeeze(1)
else:
# Greedy decoding
next_token = torch.argmax(next_token_logits, dim=-1)
# update generations and finished sentences
if eos_token_id is not None:
# pad finished sentences if eos_token_id exist
tokens_to_add = next_token * unfinished_sents + (pad_token_id) * (1 - unfinished_sents)
else:
tokens_to_add = next_token
# add token and increase length by one
input_ids = torch.cat([input_ids, tokens_to_add.unsqueeze(-1)], dim=-1)
cur_len = cur_len + 1
if eos_token_id is not None:
eos_in_sents = tokens_to_add == eos_token_id
# if sentence is unfinished and the token to add is eos, sent_lengths is filled with current length
is_sents_unfinished_and_token_to_add_is_eos = unfinished_sents.mul(eos_in_sents.long()).bool()
sent_lengths.masked_fill_(is_sents_unfinished_and_token_to_add_is_eos, cur_len)
# unfinished_sents is set to zero if eos in sentence
unfinished_sents.mul_((~eos_in_sents).long())
            # stop when there is a </s> in each sentence, or if we exceed the maximum length
if unfinished_sents.max() == 0:
break
# extend attention_mask for new generated input if only decoder
if self.config.is_encoder_decoder is False:
attention_mask = torch.cat(
[attention_mask, attention_mask.new_ones((attention_mask.shape[0], 1))], dim=-1
)
return input_ids
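In the sampling branch above, logits are divided by `temperature` before `F.softmax` and `torch.multinomial`. A dependency-free sketch (plain Python; the function name is an assumption) of that temperature step followed by categorical sampling:

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=None):
    """Scale logits by 1/temperature, softmax, then draw one category."""
    rng = rng or random.Random(0)
    scaled = [l / temperature for l in logits]   # higher T flattens the distribution
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]     # subtract max for numerical stability
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling over the categorical distribution.
    r, cum = rng.random(), 0.0
    for i, p in enumerate(probs):
        cum += p
        if r <= cum:
            return i
    return len(probs) - 1
```

As `temperature` approaches 0 the distribution concentrates on the argmax (near-greedy decoding); large temperatures make low-probability tokens more likely to be drawn.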
def top_k_top_p_filtering(
logits: Tensor,
top_k: int = 0,
top_p: float = 1.0,
filter_value: float = -float("Inf"),
min_tokens_to_keep: int = 1,
input_ids: Tensor = None,
use_mcm: bool = False,
tokenizer=None,
mcm=None,
threshold: float = 0.0,
mcm_keep_at_least: int = 1,
edge_cases_file: str = "",
eot_token: str = '',
) -> Tensor:
""" Filter a distribution of logits using top-k and/or nucleus (top-p) filtering
Args:
logits: logits distribution shape (batch size, vocabulary size)
if top_k > 0: keep only top k tokens with highest probability (top-k filtering).
if top_p < 1.0: keep the top tokens with cumulative probability >= top_p (nucleus filtering).
Nucleus filtering is described in Holtzman et al. (http://arxiv.org/abs/1904.09751)
Make sure we keep at least min_tokens_to_keep per batch example in the output
From: https://gist.github.com/thomwolf/1a5a29f6962089e871b94cbd09daf317
"""
if top_k > 0:
top_k = min(max(top_k, min_tokens_to_keep), logits.size(-1)) # Safety check
# Remove all tokens with a probability less than the last token of the top-k
indices_to_remove = logits < torch.topk(logits, top_k)[0][..., -1, None]
logits[indices_to_remove] = filter_value
if top_p < 1.0:
sorted_logits, sorted_indices = torch.sort(logits, descending=True)
cumulative_probs = torch.cumsum(F.softmax(sorted_logits, dim=-1), dim=-1)
# Remove tokens with cumulative probability above the threshold (token with 0 are kept)
sorted_indices_to_remove = cumulative_probs > top_p
if min_tokens_to_keep > 1:
# Keep at least min_tokens_to_keep (set to min_tokens_to_keep-1 because we add the first one below)
sorted_indices_to_remove[..., :min_tokens_to_keep] = 0
# Shift the indices to the right to keep also the first token above the threshold
sorted_indices_to_remove[..., 1:] = sorted_indices_to_remove[..., :-1].clone()
sorted_indices_to_remove[..., 0] = 0
# scatter sorted tensors to original indexing
indices_to_remove = sorted_indices_to_remove.scatter(1, sorted_indices, sorted_indices_to_remove)
logits[indices_to_remove] = filter_value
if use_mcm:
tokenlist = list(torch.nonzero(F.softmax(logit, dim=-1), as_tuple=True)[-1].tolist() for logit in logits)
indices_to_remove = torch.ones(logits.size(), dtype=torch.bool)
for next_tokens, current, sequence in zip(tokenlist, input_ids, range(logits.size()[0])):
# decode tokens for mcm and retrieve scoring
texts = tokenizer.batch_decode(torch.cat([torch.tensor([list(current)] * len(next_tokens)).to(input_ids.device),
torch.reshape(torch.tensor(next_tokens), (-1, 1)).to(input_ids.device)],
dim=-1),
clean_up_tokenization_spaces=True)
texts = [text.replace(eot_token, '') for text in texts]
mcm_res, _, _ = mcm(texts)
index_scores = list(zip(mcm_res[0], next_tokens, texts))
            index_scores.sort(reverse=True)  # TODO: index_scores is sorted in place here; intended?
# try to only keep positive ratings
keep = [entry[1] for entry in index_scores if entry[0] > threshold]
# if there is no option left retrieve the best rated n indices
if len(keep) <= mcm_keep_at_least:
if len(index_scores) > mcm_keep_at_least:
keep = [entry[1] for entry in index_scores[:mcm_keep_at_least]]
else:
keep = [entry[1] for entry in index_scores[:]]
# consider the step an edge case if some values are filtered, but number of remaining tokens is still above the keep-threshold
elif len(edge_cases_file) > 0 and len(keep) != len(index_scores):
with open("edge_cases_{}.txt".format(edge_cases_file), "a", encoding="utf-8") as file:
for index_score in index_scores:
file.write("{}|{}\n".format(index_score[0], index_score[2]))
for i in keep:
indices_to_remove[sequence][i] = 0
logits[indices_to_remove] = filter_value
return logits
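The torch implementation above can be mirrored in a dependency-free sketch (plain Python; names are illustrative assumptions) showing the two filters: top-k keeps only the k highest logits, and nucleus (top-p) keeps the smallest set of tokens whose cumulative probability exceeds `top_p`, always including the token that crosses the threshold:

```python
import math

def _softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]   # exp(-inf) == 0.0, so filtered entries vanish
    s = sum(exps)
    return [e / s for e in exps]

def top_k_top_p(logits, top_k=0, top_p=1.0, filter_value=float("-inf")):
    out = list(logits)
    if top_k > 0:
        # Keep only the k highest logits; everything else gets filter_value.
        kth = sorted(out, reverse=True)[top_k - 1]
        out = [v if v >= kth else filter_value for v in out]
    if top_p < 1.0:
        # Walk tokens in descending probability, keeping them until the
        # cumulative mass exceeds top_p (the crossing token is kept too).
        order = sorted(range(len(out)), key=lambda i: out[i], reverse=True)
        probs = _softmax([out[i] for i in order])
        cum, keep = 0.0, set()
        for idx, p in zip(order, probs):
            keep.add(idx)
            cum += p
            if cum > top_p:
                break
        out = [v if i in keep else filter_value for i, v in enumerate(out)]
    return out
```

Applying softmax to the filtered logits then gives a renormalized distribution over only the surviving tokens, which is what the sampling step draws from.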
class DialoGPT2MCMLMHeadModel(GPT2MCMLMHeadModel):
def __init__(self, config):
super().__init__(config)
def setup_mcm(self, device="cpu", transformer_model='bert-large-nli-mean-tokens',
mcm_tokenizer=None, threshold=0, use_mort=False, min_token_number=1,
save_edge_cases=False, file_descriptor="", use_mcm=True, start_mcm_token=None, num_text_splits=1):
super().setup_mcm(device=device, transformer_model=transformer_model,
mcm_tokenizer=mcm_tokenizer, threshold=threshold, use_mort=use_mort, min_token_number=min_token_number,
save_edge_cases=save_edge_cases, file_descriptor=file_descriptor, use_mcm=use_mcm)
self.start_mcm_token = start_mcm_token
self.num_text_splits = num_text_splits
def _generate_no_beam_search(
self,
input_ids,
cur_len,
max_length,
min_length,
do_sample,
temperature,
top_k,
top_p,
repetition_penalty,
no_repeat_ngram_size,
bad_words_ids,
pad_token_id,
eos_token_id,
batch_size,
attention_mask,
use_cache,
model_kwargs,
):
"""Generate sequences for each example without beam search (num_beams == 1).
        All returned sequences are generated independently.
"""
# length of generated sentences / unfinished sentences
unfinished_sents = input_ids.new(batch_size).fill_(1)
sent_lengths = input_ids.new(batch_size).fill_(max_length)
past = None
while cur_len < max_length:
model_inputs = self.prepare_inputs_for_generation(
input_ids, past=past, attention_mask=attention_mask, use_cache=use_cache, **model_kwargs
)
outputs = self(**model_inputs, return_dict=True)
            next_token_logits = outputs.logits[:, -1, :]
            scores = self.postprocess_next_token_scores(
                scores=next_token_logits,
                input_ids=input_ids,
                no_repeat_ngram_size=no_repeat_ngram_size,
                bad_words_ids=bad_words_ids,
                cur_len=cur_len,
                min_length=min_length,
                max_length=max_length,
                eos_token_id=eos_token_id,
                repetition_penalty=repetition_penalty,
                batch_size=batch_size,
                num_beams=1,
            )
            # if the model has a past, then set the past variable to speed up decoding
            if "past_key_values" in outputs:
                past = outputs.past_key_values
            elif "mems" in outputs:
                past = outputs.mems
            if do_sample:
                # Temperature (higher temperature => more likely to sample low probability tokens)
                if temperature != 1.0:
                    scores = scores / temperature
                # Top-p/top-k filtering
                next_token_logscores = top_k_top_p_filtering_dialogpt(
                    scores, input_ids=input_ids, mcm=self.mcm,
                    use_mcm=self.use_mcm, tokenizer=self.mcm_tokenizer,
                    top_k=top_k, top_p=top_p, threshold=self.mcm_threshold,
                    start_mcm_token=self.start_mcm_token,
                    num_text_splits=self.num_text_splits,
                    mcm_keep_at_least=self.min_token_number,
                    edge_cases_file=self.file_descriptor if self.save_edge_cases else "")
                # Sample
                probs = F.softmax(next_token_logscores, dim=-1)
                next_token = torch.multinomial(probs, num_samples=1).squeeze(1)
            else:
                # Greedy decoding
                next_token = torch.argmax(next_token_logits, dim=-1)
            # update generations and finished sentences
            if eos_token_id is not None:
                # pad finished sentences if eos_token_id exists
                tokens_to_add = next_token * unfinished_sents + pad_token_id * (1 - unfinished_sents)
            else:
                tokens_to_add = next_token
            # add token and increase length by one
            input_ids = torch.cat([input_ids, tokens_to_add.unsqueeze(-1)], dim=-1)
            cur_len = cur_len + 1
            if eos_token_id is not None:
                eos_in_sents = tokens_to_add == eos_token_id
                # if a sentence is unfinished and the token to add is eos, sent_lengths is filled with the current length
                is_sents_unfinished_and_token_to_add_is_eos = unfinished_sents.mul(eos_in_sents.long()).bool()
                sent_lengths.masked_fill_(is_sents_unfinished_and_token_to_add_is_eos, cur_len)
                # unfinished_sents is set to zero if eos in sentence
                unfinished_sents.mul_((~eos_in_sents).long())
            # stop when there is a </s> in each sentence, or if we exceed the maximum length
            if unfinished_sents.max() == 0:
                break
            # extend attention_mask for new generated input if only decoder
            if self.config.is_encoder_decoder is False:
                attention_mask = torch.cat(
                    [attention_mask, attention_mask.new_ones((attention_mask.shape[0], 1))], dim=-1
                )
        return input_ids
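The finished-sequence bookkeeping above (`next_token * unfinished_sents + pad_token_id * (1 - unfinished_sents)`) can be illustrated on plain Python lists. `pad_finished` is a hypothetical helper name for illustration only, not part of the original code:

```python
def pad_finished(next_token, unfinished, pad_token_id):
    """For each sequence, keep the sampled token while the sequence is still
    unfinished (flag == 1) and substitute the pad token once it has emitted
    EOS (flag == 0) -- the same arithmetic mask used in the tensor code."""
    return [t * u + pad_token_id * (1 - u) for t, u in zip(next_token, unfinished)]
```

Because the mask is arithmetic rather than a branch, the same expression applies elementwise to whole batches of tensors.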
def top_k_top_p_filtering_dialogpt(
    logits: Tensor,
    top_k: int = 0,
    top_p: float = 1.0,
    filter_value: float = -float("Inf"),
    min_tokens_to_keep: int = 1,
    input_ids: Tensor = None,
    use_mcm: bool = False,
    tokenizer=None,
    mcm=None,
    threshold: float = 0.0,
    mcm_keep_at_least: int = 1,
    edge_cases_file: str = "",
    start_mcm_token: str = None,
    num_text_splits: int = 1
) -> Tensor:
    """Filter a distribution of logits using top-k and/or nucleus (top-p) filtering.
    Args:
        logits: logits distribution of shape (batch size, vocabulary size)
        if top_k > 0: keep only the top k tokens with the highest probability (top-k filtering).
        if top_p < 1.0: keep the top tokens with cumulative probability >= top_p (nucleus filtering).
            Nucleus filtering is described in Holtzman et al. (http://arxiv.org/abs/1904.09751)
    Make sure we keep at least min_tokens_to_keep per batch example in the output
    From: https://gist.github.com/thomwolf/1a5a29f6962089e871b94cbd09daf317
    """
    if top_k > 0:
        top_k = min(max(top_k, min_tokens_to_keep), logits.size(-1))  # Safety check
        # Remove all tokens with a probability less than the last token of the top-k
        indices_to_remove = logits < torch.topk(logits, top_k)[0][..., -1, None]
        logits[indices_to_remove] = filter_value
    if top_p < 1.0:
        sorted_logits, sorted_indices = torch.sort(logits, descending=True)
        cumulative_probs = torch.cumsum(F.softmax(sorted_logits, dim=-1), dim=-1)
        # Remove tokens with cumulative probability above the threshold (tokens with 0 are kept)
        sorted_indices_to_remove = cumulative_probs > top_p
        if min_tokens_to_keep > 1:
            # Keep at least min_tokens_to_keep (set to min_tokens_to_keep-1 because we add the first one below)
            sorted_indices_to_remove[..., :min_tokens_to_keep] = 0
        # Shift the indices to the right to also keep the first token above the threshold
        sorted_indices_to_remove[..., 1:] = sorted_indices_to_remove[..., :-1].clone()
        sorted_indices_to_remove[..., 0] = 0
        # scatter sorted tensors back to the original indexing
        indices_to_remove = sorted_indices_to_remove.scatter(1, sorted_indices, sorted_indices_to_remove)
        logits[indices_to_remove] = filter_value
    if use_mcm:
        tokenlist = list(torch.nonzero(F.softmax(logit, dim=-1), as_tuple=True)[-1].tolist() for logit in logits)
        indices_to_remove = torch.ones(logits.size(), dtype=torch.bool, device=logits.device)
        for next_tokens, current, sequence in zip(tokenlist, input_ids, range(logits.size()[0])):
            # each sequence is processed separately
            texts = []
            for next_token in next_tokens:  # TODO: implement this in parallel with python multiprocessing
                # decode tokens for the mcm and retrieve the scoring
                text = tokenizer.decode(
                    torch.cat([current, torch.tensor([next_token]).to(input_ids.device)], dim=-1),
                    clean_up_tokenization_spaces=True)
                if start_mcm_token:
                    text_splits = text.split(start_mcm_token)[-num_text_splits:]
                    text = ' '.join(text_splits)
                texts.append(text)
            mcm_res, _, _ = mcm(texts)
            index_scores = list(zip(mcm_res[0], next_tokens, texts))
            index_scores.sort(reverse=True)
            # try to only keep positive ratings
            keep = [entry[1] for entry in index_scores if entry[0] > threshold]
            # if there is no option left, retrieve the best rated n indices
            if len(keep) <= mcm_keep_at_least:
                if len(index_scores) > mcm_keep_at_least:
                    keep = [entry[1] for entry in index_scores[:mcm_keep_at_least]]
                else:
                    keep = [entry[1] for entry in index_scores[:]]
            # consider the step an edge case if some values are filtered, but the number of
            # remaining tokens is still above the keep-threshold
            elif len(edge_cases_file) > 0 and len(keep) != len(index_scores):
                with open("edge_cases_{}.txt".format(edge_cases_file), "a", encoding="utf-8") as file:
                    for index_score in index_scores:
                        file.write("{}|{}\n".format(index_score[0], index_score[2]))
            indices_to_remove[sequence][keep] = 0
        logits[indices_to_remove] = filter_value
    return logits
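To make the two classic filters above concrete, here is a minimal list-based sketch of the same top-k and nucleus logic for a single example. `top_k_top_p_filter` is a hypothetical name, the MCM branch is omitted, and ties at the k-th logit are all kept, unlike the exact tensor version:

```python
import math

def top_k_top_p_filter(logits, top_k=0, top_p=1.0, filter_value=float("-inf")):
    """List-based sketch of top-k / nucleus (top-p) filtering for one example."""
    logits = list(logits)
    if top_k > 0:
        # Keep only logits at least as large as the k-th largest one.
        kth = sorted(logits, reverse=True)[top_k - 1]
        logits = [l if l >= kth else filter_value for l in logits]
    if top_p < 1.0:
        # Walk tokens in descending order, accumulating probability mass;
        # the first token crossing top_p is kept, everything after it is cut.
        order = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)
        exps = [math.exp(logits[i]) for i in order]
        total = sum(exps)
        cum, keep = 0.0, set()
        for i, e in zip(order, exps):
            keep.add(i)
            cum += e / total
            if cum > top_p:
                break
        logits = [l if i in keep else filter_value for i, l in enumerate(logits)]
    return logits
```

The shift trick in the tensor version (`sorted_indices_to_remove[..., 1:] = ...[..., :-1]`) serves the same purpose as keeping the first token that crosses the threshold here.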
def calc_banned_ngram_tokens(prev_input_ids, num_hypos, no_repeat_ngram_size, cur_len):
    """Copied from fairseq for no_repeat_ngram in beam_search."""
    if cur_len + 1 < no_repeat_ngram_size:
        # return no banned tokens if we haven't generated no_repeat_ngram_size tokens yet
        return [[] for _ in range(num_hypos)]
    generated_ngrams = [{} for _ in range(num_hypos)]
    for idx in range(num_hypos):
        gen_tokens = prev_input_ids[idx].tolist()
        generated_ngram = generated_ngrams[idx]
        for ngram in zip(*[gen_tokens[i:] for i in range(no_repeat_ngram_size)]):
            prev_ngram_tuple = tuple(ngram[:-1])
            generated_ngram[prev_ngram_tuple] = generated_ngram.get(prev_ngram_tuple, []) + [ngram[-1]]

    def _get_generated_ngrams(hypo_idx):
        # Before decoding the next token, prevent decoding of ngrams that have already appeared
        start_idx = cur_len + 1 - no_repeat_ngram_size
        ngram_idx = tuple(prev_input_ids[hypo_idx, start_idx:cur_len].tolist())
        return generated_ngrams[hypo_idx].get(ngram_idx, [])

    banned_tokens = [_get_generated_ngrams(hypo_idx) for hypo_idx in range(num_hypos)]
    return banned_tokens
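The n-gram bookkeeping above, reduced to a single hypothesis on plain lists (`banned_ngram_tokens` is a hypothetical sketch name, not the original function):

```python
def banned_ngram_tokens(gen_tokens, n):
    """For one hypothesis: ban every token that would complete an n-gram
    that has already appeared in gen_tokens."""
    if len(gen_tokens) + 1 < n:
        return []
    # Map each (n-1)-token prefix to the tokens that followed it.
    ngrams = {}
    for ngram in zip(*[gen_tokens[i:] for i in range(n)]):
        ngrams.setdefault(ngram[:-1], []).append(ngram[-1])
    # The last n-1 generated tokens form the prefix of the candidate n-gram.
    prefix = tuple(gen_tokens[len(gen_tokens) + 1 - n:])
    return ngrams.get(prefix, [])
```

With `gen_tokens = [1, 2, 3, 1, 2]` and `n = 3`, the prefix `(1, 2)` was previously followed by `3`, so `3` is banned, preventing the repeat of `(1, 2, 3)`.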
def calc_banned_bad_words_ids(prev_input_ids, bad_words_ids):
    banned_tokens = []

    def _tokens_match(prev_tokens, tokens):
        if len(tokens) == 0:
            # if the bad word is just one token, always ban it
            return True
        if len(tokens) > len(prev_tokens):
            # if the bad word tokens are longer than prev_tokens they can't be equal
            return False
        if prev_tokens[-len(tokens):] == tokens:
            # if tokens match
            return True
        else:
            return False

    for prev_input_ids_slice in prev_input_ids:
        banned_tokens_slice = []
        for banned_token_seq in bad_words_ids:
            assert len(banned_token_seq) > 0, "Banned words token sequences {} cannot have an empty list".format(
                bad_words_ids
            )
            if _tokens_match(prev_input_ids_slice.tolist(), banned_token_seq[:-1]) is False:
                # if tokens do not match, continue
                continue
            banned_tokens_slice.append(banned_token_seq[-1])
        banned_tokens.append(banned_tokens_slice)
    return banned_tokens
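The same suffix-matching idea for one sequence, sketched on plain lists (`banned_bad_word_tokens` is a hypothetical name for illustration):

```python
def banned_bad_word_tokens(prev_tokens, bad_words_ids):
    """Ban the last token of each bad-word sequence whose leading tokens
    match the suffix of what has been generated so far."""
    banned = []
    for seq in bad_words_ids:
        assert len(seq) > 0, "bad word sequences must be non-empty"
        prefix = seq[:-1]
        # A one-token bad word is always banned; longer ones only when the
        # generated suffix matches everything but the last bad-word token.
        if len(prefix) == 0 or prev_tokens[-len(prefix):] == prefix:
            banned.append(seq[-1])
    return banned
```

Note that a mismatched length falls out of the slice comparison automatically: when the prefix is longer than `prev_tokens`, the lists have different lengths and can never compare equal.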
class OpenAIGPTMCMLMHeadModel(transformers.OpenAIGPTPreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.transformer = transformers.OpenAIGPTModel(config)
        self.lm_head = nn.Linear(config.n_embd, config.vocab_size, bias=False)
        # by default, do not use the mcm until setup is performed
        self.use_mcm = False
        self.mcm = None
        self.mcm_tokenizer = None
        self.mcm_threshold = 0
        self.min_token_number = 1
        self.save_edge_cases = False
        self.init_weights()

    def get_output_embeddings(self):
        return self.lm_head
    def forward(
        self,
        input_ids=None,
        attention_mask=None,
        token_type_ids=None,
        position_ids=None,
        head_mask=None,
        inputs_embeds=None,
        encoder_hidden_states=None,
        encoder_attention_mask=None,
        labels=None,
        output_attentions=None,
        output_hidden_states=None,
        return_dict=None,
    ):
        r"""
        labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):
            Labels for language modeling.
            Note that the labels **are shifted** inside the model, i.e. you can set ``labels = input_ids``.
            Indices are selected in ``[-100, 0, ..., config.vocab_size]``.
            All labels set to ``-100`` are ignored (masked); the loss is only
            computed for labels in ``[0, ..., config.vocab_size]``.
        """
        # return_dict = return_dict if return_dict is not None else self.config.use_return_dict
        transformer_outputs = self.transformer(
            input_ids,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
            position_ids=position_ids,
            head_mask=head_mask,
            inputs_embeds=inputs_embeds,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            # return_dict=return_dict,
        )
        hidden_states = transformer_outputs[0]
        lm_logits = self.lm_head(hidden_states)
        loss = None
        if labels is not None:
            # Shift so that tokens < n predict n
            shift_logits = lm_logits[..., :-1, :].contiguous()
            shift_labels = labels[..., 1:].contiguous()
            # Flatten the tokens
            loss_fct = CrossEntropyLoss()
            loss = loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1))
        # if not return_dict:
        output = (lm_logits,) + transformer_outputs[1:]
        return ((loss,) + output) if loss is not None else output
        # return transformers.CausalLMOutput(
        #     loss=loss,
        #     logits=lm_logits,
        #     hidden_states=transformer_outputs.hidden_states,
        #     attentions=transformer_outputs.attentions,
        # )
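The shift-and-flatten loss computed in `forward` can be checked with a small hand-rolled cross-entropy. `causal_lm_loss` is a hypothetical list-based sketch for illustration, not the model's code:

```python
import math

def causal_lm_loss(logits, labels, ignore_index=-100):
    """Position t's logits predict token t+1, so logits[:-1] are scored
    against labels[1:]; labels equal to ignore_index contribute nothing."""
    shift_logits = logits[:-1]
    shift_labels = labels[1:]
    losses = []
    for row, y in zip(shift_logits, shift_labels):
        if y == ignore_index:
            continue
        log_z = math.log(sum(math.exp(v) for v in row))  # log partition
        losses.append(log_z - row[y])                    # -log softmax(row)[y]
    return sum(losses) / len(losses) if losses else 0.0
```

With uniform logits over a vocabulary of two tokens, each predicted position contributes exactly log 2 of loss, which makes a convenient sanity check.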
    def setup_mcm(self, device="cpu", transformer_model='bert-large-nli-mean-tokens',
                  mcm_tokenizer=None, threshold=0, use_mort=False, min_token_number=1,
                  save_edge_cases=False, file_descriptor=""):
        self.mcm_tokenizer = mcm_tokenizer
        self.mcm_threshold = threshold
        self.min_token_number = min_token_number
        self.save_edge_cases = save_edge_cases
        self.file_descriptor = file_descriptor
        if transformer_model == "pytorch-sentiment-analysis":
            sentiment_model = BERTGRUSentiment(
                transformers.BertModel.from_pretrained('bert-base-uncased'),
                256, 1, 2, True, 0.25).to(torch.device(device))
            # expects the model to be in the same folder; can currently be found on the server:
            # /home/ml-stud-jhonermann/tut6-model.pt
            sentiment_model.load_state_dict(torch.load('tut6-model.pt'))
            sentiment_model.eval()
            tokenizer = transformers.BertTokenizer.from_pretrained('bert-base-uncased')

            def predict_sentiment_(sentence):
                tokens = tokenizer.tokenize(sentence)
                tokens = tokens[:tokenizer.max_model_input_sizes['bert-base-uncased'] - 2]
                indexed = [tokenizer.cls_token_id] + tokenizer.convert_tokens_to_ids(tokens) + [tokenizer.sep_token_id]
                tensor = torch.LongTensor(indexed).to(torch.device(device))
                tensor = tensor.unsqueeze(0)
                prediction = torch.sigmoid(sentiment_model(tensor))
                return [[prediction.item(), sentence], []]

            self.mcm = predict_sentiment_
        else:
            # MoRT currently requires changing line 226 in mort/funcs_mcm.py to bias_with_action[0]))
            # and line 999 in mort/dataMoral.py to data = np.genfromtxt('../data/parsed_yes_no_BERTBias.csv', skip_header=1, dtype=str, delimiter=",")
            if use_mort:
                mcm_model = MoRTSentenceSubspace(
                    device=device,
                    transormer_model=transformer_model,
                    filename_pickled_cluster=None,
                    pca_framework='sklearn',
                    eval_model_path=os.path.join(os.getcwd(), "../", "mort", "adaptBias",
                                                 "results", transformer_model,
                                                 "bert_model_adapted_run1", "adapted_best_model.pt"))

                def mcm_(query):
                    return mcm_model.bias(query, norm=mcm_model.norm, qa_template=False)

                self.mcm = mcm_
            else:
                mcm_model = BERTSentenceSubspace(device=device,
                                                 transormer_model=transformer_model,
                                                 filename_pickled_cluster=None,
                                                 pca_framework='sklearn')

                def mcm_(query):
                    return mcm_model.bias(query, norm=8.946814, qa_template=False)

                self.mcm = mcm_
        self.use_mcm = True
    def _generate_no_beam_search(
        self,
        input_ids,
        cur_len,
        max_length,
        min_length,
        do_sample,
        temperature,
        top_k,
        top_p,
        repetition_penalty,
        no_repeat_ngram_size,
        bad_words_ids,
        pad_token_id,
        eos_token_id,
        batch_size,
        attention_mask,
        use_cache,
        model_kwargs,
    ):
        """Generate sequences for each example without beam search (num_beams == 1).
        All returned sequences are generated independently.
        """
        # length of generated sentences / unfinished sentences
        unfinished_sents = input_ids.new(batch_size).fill_(1)
        sent_lengths = input_ids.new(batch_size).fill_(max_length)
        past = None
        while cur_len < max_length:
            model_inputs = self.prepare_inputs_for_generation(
                input_ids, past=past, attention_mask=attention_mask, use_cache=use_cache, **model_kwargs
            )
            outputs = self(**model_inputs, return_dict=True)
            next_token_logits = outputs.logits[:, -1, :]
            scores = self.postprocess_next_token_scores(
                scores=next_token_logits,
                input_ids=input_ids,
                no_repeat_ngram_size=no_repeat_ngram_size,
                bad_words_ids=bad_words_ids,
                cur_len=cur_len,
                min_length=min_length,
                max_length=max_length,
                eos_token_id=eos_token_id,
                repetition_penalty=repetition_penalty,
                batch_size=batch_size,
                num_beams=1,
            )
            # if the model has a past, then set the past variable to speed up decoding
            if "past_key_values" in outputs:
                past = outputs.past_key_values
            elif "mems" in outputs:
                past = outputs.mems
            if do_sample:
                # Temperature (higher temperature => more likely to sample low probability tokens)
                if temperature != 1.0:
                    scores = scores / temperature
                # Top-p/top-k filtering
                next_token_logscores = top_k_top_p_filtering(
                    scores, input_ids=input_ids, mcm=self.mcm,
                    use_mcm=self.use_mcm, tokenizer=self.mcm_tokenizer,
                    top_k=top_k, top_p=top_p, threshold=self.mcm_threshold,
                    mcm_keep_at_least=self.min_token_number,
                    edge_cases_file=self.file_descriptor if self.save_edge_cases else "")
                # Sample
                probs = F.softmax(next_token_logscores, dim=-1)
                next_token = torch.multinomial(probs, num_samples=1).squeeze(1)
            else:
                # Greedy decoding
                next_token = torch.argmax(next_token_logits, dim=-1)
            # update generations and finished sentences
            if eos_token_id is not None:
                # pad finished sentences if eos_token_id exists
                tokens_to_add = next_token * unfinished_sents + pad_token_id * (1 - unfinished_sents)
            else:
                tokens_to_add = next_token
            # add token and increase length by one
            input_ids = torch.cat([input_ids, tokens_to_add.unsqueeze(-1)], dim=-1)
            cur_len = cur_len + 1
            if eos_token_id is not None:
                eos_in_sents = tokens_to_add == eos_token_id
                # if a sentence is unfinished and the token to add is eos, sent_lengths is filled with the current length
                is_sents_unfinished_and_token_to_add_is_eos = unfinished_sents.mul(eos_in_sents.long()).bool()
                sent_lengths.masked_fill_(is_sents_unfinished_and_token_to_add_is_eos, cur_len)
                # unfinished_sents is set to zero if eos in sentence
                unfinished_sents.mul_((~eos_in_sents).long())
            # stop when there is a </s> in each sentence, or if we exceed the maximum length
            if unfinished_sents.max() == 0:
                break
            # extend attention_mask for new generated input if only decoder
            if self.config.is_encoder_decoder is False:
                attention_mask = torch.cat(
                    [attention_mask, attention_mask.new_ones((attention_mask.shape[0], 1))], dim=-1
                )
        return input_ids
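The sampling branch above (divide logits by the temperature, softmax, draw one token) can be sketched without tensors. `sample_next_token` is a hypothetical single-example illustration, not the original code:

```python
import math
import random

def sample_next_token(logits, temperature=1.0, seed=None):
    """Scale logits by the temperature, softmax them, then draw one token id
    from the resulting distribution (the list analogue of torch.multinomial)."""
    rng = random.Random(seed)
    scaled = [l / temperature for l in logits]
    m = max(scaled)                        # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(probs)), weights=probs, k=1)[0]
```

Lowering the temperature sharpens the distribution toward the argmax, which is why the greedy branch is equivalent to the limit of temperature approaching zero.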
| 47.074303 | 155 | 0.591954 | 5,466 | 45,615 | 4.66685 | 0.105745 | 0.02258 | 0.016465 | 0.00933 | 0.781881 | 0.767964 | 0.757576 | 0.748716 | 0.748716 | 0.742561 | 0 | 0.011161 | 0.326296 | 45,615 | 968 | 156 | 47.122934 | 0.818913 | 0.173737 | 0 | 0.729226 | 0 | 0.017192 | 0.110574 | 0.019914 | 0 | 0 | 0 | 0.002066 | 0.002865 | 1 | 0.040115 | false | 0.002865 | 0.017192 | 0.008596 | 0.100287 | 0.001433 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
47f0cb239326a140b00e78c90fda0ecc3e5773d4 | 27 | py | Python | minimailer/core/__init__.py | mpavelka/minimailer | 1f42e13b6b758166419e78f3770c59db03adadd7 | [
"MIT"
] | null | null | null | minimailer/core/__init__.py | mpavelka/minimailer | 1f42e13b6b758166419e78f3770c59db03adadd7 | [
"MIT"
] | null | null | null | minimailer/core/__init__.py | mpavelka/minimailer | 1f42e13b6b758166419e78f3770c59db03adadd7 | [
"MIT"
] | null | null | null | from .mailer import Mailer
| 13.5 | 26 | 0.814815 | 4 | 27 | 5.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 27 | 1 | 27 | 27 | 0.956522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
47f70e937b7126af349be6979bf4260dc8f3aae0 | 76 | py | Python | assn2/Q1input.py | vardhan2000/1st-sem-python-assignments | 9f38ab2b15c36b5ae1c6a725f4d4effe026e0bb4 | [
"MIT"
] | null | null | null | assn2/Q1input.py | vardhan2000/1st-sem-python-assignments | 9f38ab2b15c36b5ae1c6a725f4d4effe026e0bb4 | [
"MIT"
] | null | null | null | assn2/Q1input.py | vardhan2000/1st-sem-python-assignments | 9f38ab2b15c36b5ae1c6a725f4d4effe026e0bb4 | [
"MIT"
] | null | null | null | inp = "WWWWWWWWWWWWBWWWWWWWWWWWWBBBWWWWWWWWWWWWWWWWWWWWWWWWBWWWWWWWWWWWWWW"
| 38 | 75 | 0.921053 | 2 | 76 | 35 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.039474 | 76 | 1 | 76 | 76 | 0.958904 | 0 | 0 | 0 | 0 | 0 | 0.881579 | 0.881579 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9a0f52ac26ccd0a189ae7a86eb20460747da3447 | 21 | py | Python | split_folders/__init__.py | menaceslinger/split-folders | fe508e1dfa48ecb0c40a71ba388451838dd0877e | [
"MIT"
] | 1 | 2019-05-16T06:53:08.000Z | 2019-05-16T06:53:08.000Z | split_folders/__init__.py | alenweiru/split-folders | 55333cb0185b402332b957a03ca79d20c670c447 | [
"MIT"
] | null | null | null | split_folders/__init__.py | alenweiru/split-folders | 55333cb0185b402332b957a03ca79d20c670c447 | [
"MIT"
] | 1 | 2018-12-10T12:42:21.000Z | 2018-12-10T12:42:21.000Z | from .split import *
| 10.5 | 20 | 0.714286 | 3 | 21 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.190476 | 21 | 1 | 21 | 21 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7bd55b8232d929780b55e31530d8100c45fd7521 | 125 | py | Python | hybrid/resource/__init__.py | jmartin4nrel/HOPP-1 | c66ff2d97b43a785ac15004958615290a76477c5 | [
"BSD-3-Clause"
] | 3 | 2021-03-10T20:03:42.000Z | 2022-03-18T17:10:04.000Z | hybrid/resource/__init__.py | jmartin4nrel/HOPP-1 | c66ff2d97b43a785ac15004958615290a76477c5 | [
"BSD-3-Clause"
] | 14 | 2020-12-28T22:32:07.000Z | 2022-03-17T15:33:04.000Z | hybrid/resource/__init__.py | jmartin4nrel/HOPP-1 | c66ff2d97b43a785ac15004958615290a76477c5 | [
"BSD-3-Clause"
] | 8 | 2021-01-19T02:39:01.000Z | 2022-01-31T18:04:39.000Z | from .solar_resource import SolarResource
from .wind_resource import WindResource
from .elec_prices import ElectricityPrices
| 31.25 | 42 | 0.88 | 15 | 125 | 7.133333 | 0.666667 | 0.261682 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.096 | 125 | 3 | 43 | 41.666667 | 0.946903 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d023352b11d143e9d59052361dd03fc59dd7aaa4 | 144 | py | Python | tuan_lib/database/__init__.py | HKer-MuCoi/python_http_lib | 68411daaa232c5cbd0d731afb216780603edb9c9 | [
"MIT"
] | null | null | null | tuan_lib/database/__init__.py | HKer-MuCoi/python_http_lib | 68411daaa232c5cbd0d731afb216780603edb9c9 | [
"MIT"
] | null | null | null | tuan_lib/database/__init__.py | HKer-MuCoi/python_http_lib | 68411daaa232c5cbd0d731afb216780603edb9c9 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
from .mysql import ActiveAlchemy # noqa
from .mysql import decorator as SqlAlchemyDecorator # noqa | 36 | 58 | 0.736111 | 19 | 144 | 5.578947 | 0.789474 | 0.169811 | 0.283019 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008065 | 0.138889 | 144 | 4 | 58 | 36 | 0.846774 | 0.361111 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d03e3760eb2fc85ab3fa4f2a45a801e964419a64 | 110 | py | Python | ants/core/__init__.py | ncullen93/ANTsPy | a4c990dcd5b7445a45ce7b366ee018c7350e7d9f | [
"Apache-2.0"
] | 3 | 2018-06-07T19:11:47.000Z | 2019-06-10T05:24:06.000Z | ants/core/__init__.py | ncullen93/ANTsPy | a4c990dcd5b7445a45ce7b366ee018c7350e7d9f | [
"Apache-2.0"
] | null | null | null | ants/core/__init__.py | ncullen93/ANTsPy | a4c990dcd5b7445a45ce7b366ee018c7350e7d9f | [
"Apache-2.0"
] | 1 | 2019-04-04T06:18:44.000Z | 2019-04-04T06:18:44.000Z |
from .ants_image import *
from .ants_transform import *
from .image_io import *
from .transform_io import *
| 15.714286 | 29 | 0.763636 | 16 | 110 | 5 | 0.375 | 0.375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.163636 | 110 | 6 | 30 | 18.333333 | 0.869565 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d0461fee5e42aa872fe0067f7b404d2f9b356503 | 35 | py | Python | tests/parser/good/import-as.py | Nakrez/RePy | 057db55a99eac2c5cb3d622fa1f2e29f6083d8d6 | [
"MIT"
] | 1 | 2020-11-24T05:24:26.000Z | 2020-11-24T05:24:26.000Z | tests/parser/good/import-as.py | Nakrez/RePy | 057db55a99eac2c5cb3d622fa1f2e29f6083d8d6 | [
"MIT"
] | null | null | null | tests/parser/good/import-as.py | Nakrez/RePy | 057db55a99eac2c5cb3d622fa1f2e29f6083d8d6 | [
"MIT"
] | null | null | null | import math as m
print(m.sqrt(9))
| 8.75 | 16 | 0.685714 | 8 | 35 | 3 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.034483 | 0.171429 | 35 | 3 | 17 | 11.666667 | 0.793103 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
d0a3911781fd243c44495742df52e4d427a9f8e4 | 32 | py | Python | price/__init__.py | victoray/block-tracker-api | 0d5918a29572b47b0fb3f205fc1ba21ad4fcca51 | [
"MIT"
] | null | null | null | price/__init__.py | victoray/block-tracker-api | 0d5918a29572b47b0fb3f205fc1ba21ad4fcca51 | [
"MIT"
] | null | null | null | price/__init__.py | victoray/block-tracker-api | 0d5918a29572b47b0fb3f205fc1ba21ad4fcca51 | [
"MIT"
] | null | null | null | from price.router import router
| 16 | 31 | 0.84375 | 5 | 32 | 5.4 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 32 | 1 | 32 | 32 | 0.964286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d0f3fdb8e3cebb097c1a787cb9eee7bb49c39008 | 180 | py | Python | EventFilter/EcalRawToDigi/python/EcalUnpackerMapping_cfi.py | ckamtsikis/cmssw | ea19fe642bb7537cbf58451dcf73aa5fd1b66250 | [
"Apache-2.0"
] | 852 | 2015-01-11T21:03:51.000Z | 2022-03-25T21:14:00.000Z | EventFilter/EcalRawToDigi/python/EcalUnpackerMapping_cfi.py | ckamtsikis/cmssw | ea19fe642bb7537cbf58451dcf73aa5fd1b66250 | [
"Apache-2.0"
] | 30,371 | 2015-01-02T00:14:40.000Z | 2022-03-31T23:26:05.000Z | EventFilter/EcalRawToDigi/python/EcalUnpackerMapping_cfi.py | ckamtsikis/cmssw | ea19fe642bb7537cbf58451dcf73aa5fd1b66250 | [
"Apache-2.0"
] | 3,240 | 2015-01-02T05:53:18.000Z | 2022-03-31T17:24:21.000Z | import FWCore.ParameterSet.Config as cms
# ----- For the EE Mapping :
from Geometry.EcalMapping.EcalMapping_cfi import *
from Geometry.EcalMapping.EcalMappingRecord_cfi import *
| 25.714286 | 56 | 0.8 | 22 | 180 | 6.454545 | 0.681818 | 0.169014 | 0.323944 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.116667 | 180 | 6 | 57 | 30 | 0.893082 | 0.144444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d0f9df7f36ac469bbd4b319502c7475f9905f720 | 39 | py | Python | iBridges/task/mongo/__init__.py | sara-nl/iBridges | a630cde7e4cab455a41f41ab96c7a45434dbaf97 | [
"Apache-2.0"
] | null | null | null | iBridges/task/mongo/__init__.py | sara-nl/iBridges | a630cde7e4cab455a41f41ab96c7a45434dbaf97 | [
"Apache-2.0"
] | null | null | null | iBridges/task/mongo/__init__.py | sara-nl/iBridges | a630cde7e4cab455a41f41ab96c7a45434dbaf97 | [
"Apache-2.0"
] | 1 | 2018-08-28T13:38:26.000Z | 2018-08-28T13:38:26.000Z | from .task import * # noqa: F403 F401
| 19.5 | 38 | 0.666667 | 6 | 39 | 4.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 0.230769 | 39 | 1 | 39 | 39 | 0.666667 | 0.384615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ef74dab180042c40d62971413994320d1d8b80a7 | 8,554 | py | Python | tests/auth/auth_test.py | oslokommune/okdata-sdk-python | 39d9f79b96b2fe4c33136bc9344043f33cc0ee4c | [
"MIT"
] | 2 | 2021-01-13T06:53:04.000Z | 2021-08-02T05:14:06.000Z | tests/auth/auth_test.py | oslokommune/okdata-sdk-python | 39d9f79b96b2fe4c33136bc9344043f33cc0ee4c | [
"MIT"
] | 4 | 2021-04-21T06:14:36.000Z | 2021-08-03T08:35:14.000Z | tests/auth/auth_test.py | oslokommune/okdata-sdk-python | 39d9f79b96b2fe4c33136bc9344043f33cc0ee4c | [
"MIT"
] | 1 | 2021-08-02T05:14:09.000Z | 2021-08-02T05:14:09.000Z | import json
import logging
import re

import pytest

from okdata.sdk.auth.auth import Authenticate
from okdata.sdk.auth.credentials.client_credentials import ClientCredentialsProvider
from okdata.sdk.config import Config
from okdata.sdk.exceptions import ApiAuthenticateError
from freezegun import freeze_time
from tests.auth.client_credentials_test_utils import (
    from_cache_not_expired_token,
    from_cache_expired_token,
    utc_now,
)
from tests.test_utils import (
    client_credentials_response,
    client_credentials_response_no_refresh,
)

logging.basicConfig(level=logging.INFO)
config = Config(env="prod")
token_endpoint = "https://login.oslo.kommune.no/auth/realms/api-catalog/protocol/openid-connect/token"


@pytest.fixture(scope="function")
def mock_home_dir(monkeypatch, tmp_path):
    monkeypatch.setenv("HOME", str(tmp_path))
@freeze_time(utc_now)
class TestAuthenticate:
    def test_authenticate_cache_disabled(self, requests_mock, mock_home_dir):
        client_credentials_provider = ClientCredentialsProvider(config)
        auth = Authenticate(config=config, token_provider=client_credentials_provider)
        auth.file_cache.credentials_cache_enabled = False
        response = json.dumps(client_credentials_response)
        matcher = re.compile(token_endpoint)
        requests_mock.register_uri("POST", matcher, text=response, status_code=200)
        auth.login()
        assert auth._access_token == client_credentials_response["access_token"]
        assert auth._refresh_token == client_credentials_response["refresh_token"]

    def test_authenticate_no_cache(self, requests_mock, mock_home_dir):
        client_credentials_provider = ClientCredentialsProvider(config)
        auth = Authenticate(config=config, token_provider=client_credentials_provider)
        auth.file_cache.credentials_cache_enabled = True
        response = json.dumps(client_credentials_response)
        matcher = re.compile(token_endpoint)
        requests_mock.register_uri("POST", matcher, text=response, status_code=200)
        auth.login()
        assert auth._access_token == client_credentials_response["access_token"]
        assert auth._refresh_token == client_credentials_response["refresh_token"]

    def test_authenticate_cached_credentials(self, mock_home_dir):
        client_credentials_provider = ClientCredentialsProvider(config)
        auth = Authenticate(config=config, token_provider=client_credentials_provider)
        auth.file_cache.credentials_cache_enabled = True
        cached_credentials = {
            "provider": "ClientCredentialsProvider",
            "access_token": from_cache_not_expired_token,
            "refresh_token": from_cache_not_expired_token,
        }
        auth.file_cache.write_credentials(json.dumps(cached_credentials))
        auth.login()
        assert auth._access_token == cached_credentials["access_token"]
        assert auth._refresh_token == cached_credentials["refresh_token"]

    def test_authenticate_refresh_credentials(self, requests_mock, mock_home_dir):
        client_credentials_provider = ClientCredentialsProvider(config)
        auth = Authenticate(config=config, token_provider=client_credentials_provider)
        auth.file_cache.credentials_cache_enabled = True
        cached_credentials = {
            "provider": "ClientCredentialsProvider",
            "access_token": from_cache_not_expired_token,
            "refresh_token": from_cache_not_expired_token,
        }
        auth.file_cache.write_credentials(json.dumps(cached_credentials))
        response = json.dumps(client_credentials_response)
        matcher = re.compile(token_endpoint)
        requests_mock.register_uri("POST", matcher, text=response, status_code=200)
        auth.login()
        assert auth._access_token == cached_credentials["access_token"]
        assert auth._refresh_token == cached_credentials["refresh_token"]

    def test_authenticate_expired_tokens(self, requests_mock, mock_home_dir):
        client_credentials_provider = ClientCredentialsProvider(config)
        auth = Authenticate(config=config, token_provider=client_credentials_provider)
        auth.file_cache.credentials_cache_enabled = True
        cached_credentials = {
            "provider": "TokenServiceProvider",
            "access_token": from_cache_expired_token,
            "refresh_token": from_cache_expired_token,
        }
        auth.file_cache.write_credentials(json.dumps(cached_credentials))
        response = json.dumps(client_credentials_response)
        matcher = re.compile(token_endpoint)
        requests_mock.register_uri("POST", matcher, text=response, status_code=200)
        auth.login()
        assert auth._access_token == client_credentials_response["access_token"]
        assert auth._refresh_token == client_credentials_response["access_token"]

    def test_authenticate_expired_access_token(self, requests_mock, mock_home_dir):
        client_credentials_provider = ClientCredentialsProvider(config)
        auth = Authenticate(config=config, token_provider=client_credentials_provider)
        auth.file_cache.credentials_cache_enabled = True
        cached_credentials = {
            "provider": "TokenServiceProvider",
            "access_token": from_cache_expired_token,
            "refresh_token": from_cache_not_expired_token,
        }
        auth.file_cache.write_credentials(json.dumps(cached_credentials))
        response = json.dumps(client_credentials_response)
        matcher = re.compile(token_endpoint)
        requests_mock.register_uri("POST", matcher, text=response, status_code=200)
        auth.login()
        assert auth._access_token == from_cache_not_expired_token
        assert auth._refresh_token == cached_credentials["refresh_token"]

    def test_authenticate_fail(self, requests_mock, mock_home_dir):
        client_credentials_provider = ClientCredentialsProvider(
            config, client_id="wrong_id"
        )
        auth = Authenticate(config=config, token_provider=client_credentials_provider)
        response = json.dumps(
            {"error": "authentication error", "error_description": "No such client"}
        )
        matcher = re.compile(token_endpoint)
        requests_mock.register_uri("POST", matcher, text=response, status_code=200)
        with pytest.raises(ApiAuthenticateError):
            auth.login()

    def test_refresh_inactive_session(self, requests_mock, mock_home_dir):
        client_credentials_provider = ClientCredentialsProvider(config)
        auth = Authenticate(config=config, token_provider=client_credentials_provider)
        auth.file_cache.credentials_cache_enabled = True
        cached_credentials = {
            "provider": "TokenServiceProvider",
            "access_token": from_cache_expired_token,
            "refresh_token": from_cache_not_expired_token,
        }
        auth.file_cache.write_credentials(json.dumps(cached_credentials))
        error_msg = {
            "error": "invalid_grant",
            "error_description": "Session not active",
        }
        refresh_response = {"text": json.dumps(error_msg), "status_code": 400}
        login_response = {
            "text": json.dumps(client_credentials_response),
            "status_code": 200,
        }
        matcher = re.compile(token_endpoint)
        requests_mock.register_uri("POST", matcher, [refresh_response, login_response])
        auth.login()
        assert auth._access_token == from_cache_not_expired_token
        assert auth._refresh_token == cached_credentials["refresh_token"]

    def test_refresh_no_refresh_token(self, requests_mock, mock_home_dir):
        client_credentials_provider = ClientCredentialsProvider(config)
        auth = Authenticate(config=config, token_provider=client_credentials_provider)
        auth.file_cache.credentials_cache_enabled = True
        cached_credentials = {
            "provider": "TokenServiceProvider",
            "access_token": from_cache_expired_token,
        }
        auth.file_cache.write_credentials(json.dumps(cached_credentials))
        response = json.dumps(client_credentials_response_no_refresh)
        matcher = re.compile(token_endpoint)
        requests_mock.register_uri("POST", matcher, text=response, status_code=200)
        auth.login()
        assert auth._access_token == from_cache_not_expired_token
        assert auth._refresh_token is None
# ---------------------------------------------------------------------------
# File: DynamicETLDashboard/DynamicETL_Dashboard/Utilities/ArgumentFeeder.py
# Repo: BRutan/DynamicETLDashboard
#       (commit 8a40e6f51e53f084d6103ba41cd675916505652f, MIT license)
# ---------------------------------------------------------------------------
#####################################
# ArgumentFeeder.py
#####################################
# Description:
# * Converts arguments pulled from tkinter
# GUI into arguments usable by
# target scripts.
import os


class ArgumentFeeder:
    """
    * Converts arguments pulled from tkinter GUI into arguments usable by
      target scripts.
    """
    def __init__(self):
        pass
# ---------------------------------------------------------------------------
# File: saf/linear/tests/test_nearshockapproximator.py
# Repo: dmitry-kabanov/fickettmodel
#       (commit 255b1e9cae1cfb7a6b914ad61a17288d52215cc4, MIT license)
# ---------------------------------------------------------------------------
import numpy.testing as npt
from numpy import (absolute, all, arange, array, cos, linspace, log, sin)

from ..nearshockapproximator import (NearShockApproximator,
                                     NearShockFifthOrderApproximator)
class TestNearShockApproximator:
    def test__two_points_away_from_shock__should_give_fifth_order(self):
        r = 2.0
        powers = arange(4, 8)
        # Cast to int: linspace requires an integer ``num``.
        n_list = (r**powers + 1).astype(int)
        error_list = []
        for n in n_list:
            x, dx = linspace(0, 1.2, num=n, retstep=True)
            y = sin(x)
            approx = NearShockApproximator(dx)
            result = approx.approximate_two_points_away_from_shock(y)
            desired = cos(x[-3])
            error = absolute(result - desired)
            error_list.append(error)
        errors = array(error_list)
        observed_orders = log(errors[0:-1] / errors[1:]) / log(r)
        min_order = 4.90
        npt.assert_(all(observed_orders >= min_order))
    def test__one_point_away_from_shock__should_give_fourth_order(self):
        r = 2.0
        powers = arange(2, 7)
        n_list = (10.0 * r**powers).astype(int)
        error_list = []
        for n in n_list:
            x, dx = linspace(0, 1.2, num=n, retstep=True)
            y = sin(x)
            approx = NearShockApproximator(dx)
            result = approx.approximate_one_point_away_from_shock(y)
            desired = cos(x[-2])
            error = absolute(result - desired)
            error_list.append(error)
        errors = array(error_list)
        observed_orders = log(errors[0:-1] / errors[1:]) / log(r)
        min_order = 3.90
        npt.assert_(all(observed_orders >= min_order))
    def test__on_shock__should_give_fifth_order(self):
        r = 2.0
        powers = arange(2, 6)
        n_list = (10.0 * r**powers).astype(int)
        error_list = []
        for n in n_list:
            x, dx = linspace(0, 1.2, num=n, retstep=True)
            y = sin(x)
            approx = NearShockApproximator(dx)
            result = approx.approximate_on_shock(y)
            desired = cos(x[-1])
            error = absolute(result - desired)
            error_list.append(error)
        errors = array(error_list)
        observed_orders = log(errors[0:-1] / errors[1:]) / log(r)
        min_order = 4.95
        npt.assert_(all(observed_orders >= min_order))
class TestNearShockFifthOrderApproximator:
    def test__one_point_away_from_shock__should_give_fifth_order(self):
        r = 2.0
        powers = arange(0, 5)
        n_list = (10.0 * r**powers).astype(int)
        error_list = []
        for n in n_list:
            x, dx = linspace(0, 1.2, num=n, retstep=True)
            y = sin(x)
            approx = NearShockFifthOrderApproximator(dx)
            result = approx.approximate_one_point_away_from_shock(y)
            desired = cos(x[-2])
            error = absolute(result - desired)
            error_list.append(error)
        errors = array(error_list)
        observed_orders = log(errors[0:-1] / errors[1:]) / log(r)
        min_order = 5.0
        npt.assert_(all(observed_orders >= min_order))
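Every test in this suite follows the same recipe: measure the approximation error on a sequence of grids refined by a factor r, then estimate the observed convergence order as log(e_k / e_{k+1}) / log(r). A self-contained numpy sketch of that estimate, using a first-order one-sided difference as a stand-in for the approximator classes (whose implementations are not shown here):

```python
import numpy as np

def estimate_orders(errors, r):
    """Observed convergence orders from errors on grids refined by factor r."""
    errors = np.asarray(errors, dtype=float)
    return np.log(errors[:-1] / errors[1:]) / np.log(r)

r = 2.0
errors = []
for n in (10.0 * r**np.arange(4)).astype(int):   # n = 10, 20, 40, 80
    x, dx = np.linspace(0.0, 1.2, num=n, retstep=True)
    y = np.sin(x)
    approx = (y[-1] - y[-2]) / dx                # first-order one-sided difference
    errors.append(abs(approx - np.cos(x[-1])))   # exact derivative is cos(x)

print(estimate_orders(errors, r))                # entries close to 1.0
```

Because each grid refinement roughly halves dx, the error ratio between consecutive grids approaches 2^p for a p-th order scheme, which is exactly what the `min_order` thresholds in the tests check.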