hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
feb13e5ba262667d0714542e6a47b4c360a270ac | 61 | py | Python | odin/explain/__init__.py | trungnt13/odin-ai | 9c6986a854e62da39637ea463667841378b7dd84 | ["MIT"] | 7 | 2020-12-29T19:35:58.000Z | 2022-01-31T21:01:30.000Z | odin/explain/__init__.py | imito/odin-ai | 9c6986a854e62da39637ea463667841378b7dd84 | ["MIT"] | 3 | 2020-02-06T16:44:17.000Z | 2020-09-26T05:26:14.000Z | odin/explain/__init__.py | trungnt13/odin-ai | 9c6986a854e62da39637ea463667841378b7dd84 | ["MIT"] | 6 | 2019-02-14T01:36:28.000Z | 2020-10-30T13:16:32.000Z | from odin.traininglain import adversarial_attack, deep_dream | 30.5 | 60 | 0.885246 | 8 | 61 | 6.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081967 | 61 | 1 | 61 | 61 | 0.928571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
feb51c373ed544f8e3e782dc0bb6f85f43af2253 | 67 | py | Python | duckpy/__init__.py | DevelopForLizardz/duckpy | 3bcb70cf42043f0c64904ceec7328260826afd75 | [
"MIT"
] | 1 | 2020-03-21T07:33:47.000Z | 2020-03-21T07:33:47.000Z | duckpy/__init__.py | DevelopForLizardz/duckpy | 3bcb70cf42043f0c64904ceec7328260826afd75 | [
"MIT"
] | null | null | null | duckpy/__init__.py | DevelopForLizardz/duckpy | 3bcb70cf42043f0c64904ceec7328260826afd75 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# -*- coding utf-8 -*-
from .duck import *
| 13.4 | 22 | 0.597015 | 10 | 67 | 4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.036364 | 0.179104 | 67 | 4 | 23 | 16.75 | 0.690909 | 0.626866 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
229018cac2f92d265f4f546737aff538ace49a1a | 6,649 | py | Python | ml_source/src/blocktorch/blocktorch/tests/component_tests/test_baseline_classifier.py | blocktorch/blocktorch | 044aa269813ab22c5fd27f84272e5fb540fc522b | [
"MIT"
] | 1 | 2021-09-23T12:23:02.000Z | 2021-09-23T12:23:02.000Z | ml_source/src/blocktorch/blocktorch/tests/component_tests/test_baseline_classifier.py | blocktorch/blocktorch | 044aa269813ab22c5fd27f84272e5fb540fc522b | [
"MIT"
] | null | null | null | ml_source/src/blocktorch/blocktorch/tests/component_tests/test_baseline_classifier.py | blocktorch/blocktorch | 044aa269813ab22c5fd27f84272e5fb540fc522b | [
"MIT"
] | null | null | null | import numpy as np
import pandas as pd
import pytest
from pandas.testing import assert_frame_equal, assert_series_equal
from blocktorch.model_family import ModelFamily
from blocktorch.pipelines.components import BaselineClassifier
from blocktorch.utils import get_random_state
def test_baseline_init():
baseline = BaselineClassifier()
assert baseline.parameters["strategy"] == "mode"
assert baseline.model_family == ModelFamily.BASELINE
assert baseline.classes_ is None
def test_baseline_invalid_strategy():
with pytest.raises(ValueError):
BaselineClassifier(strategy="unfortunately invalid strategy")
def test_baseline_y_is_None(X_y_binary):
X, _ = X_y_binary
with pytest.raises(ValueError):
BaselineClassifier().fit(X, y=None)
@pytest.mark.parametrize("data_type", ["pd", "ww"])
def test_baseline_binary_mode(data_type, make_data_type):
X = pd.DataFrame({"one": [1, 2, 3, 4], "two": [2, 3, 4, 5], "three": [1, 2, 3, 4]})
y = pd.Series([10, 11, 10, 10])
X = make_data_type(data_type, X)
y = make_data_type(data_type, y)
clf = BaselineClassifier(strategy="mode")
fitted = clf.fit(X, y)
assert isinstance(fitted, BaselineClassifier)
assert clf.classes_ == [10, 11]
expected_predictions = pd.Series(np.array([10] * X.shape[0]), dtype="int64")
predictions = clf.predict(X)
assert_series_equal(expected_predictions, predictions)
predicted_proba = clf.predict_proba(X)
assert predicted_proba.shape == (X.shape[0], 2)
expected_predictions_proba = pd.DataFrame(
{10: [1.0, 1.0, 1.0, 1.0], 11: [0.0, 0.0, 0.0, 0.0]}
)
assert_frame_equal(expected_predictions_proba, predicted_proba)
np.testing.assert_allclose(clf.feature_importance, np.array([0.0] * X.shape[1]))
def test_baseline_binary_random(X_y_binary):
X, y = X_y_binary
values = np.unique(y)
clf = BaselineClassifier(strategy="random", random_seed=0)
clf.fit(X, y)
assert clf.classes_ == [0, 1]
expected_predictions = pd.Series(
get_random_state(0).choice(np.unique(y), len(X)), dtype="int64"
)
predictions = clf.predict(X)
assert_series_equal(expected_predictions, predictions)
predicted_proba = clf.predict_proba(X)
assert predicted_proba.shape == (len(X), 2)
expected_predictions_proba = pd.DataFrame(
np.array([[0.5 for i in range(len(values))]] * len(X))
)
assert_frame_equal(expected_predictions_proba, predicted_proba)
np.testing.assert_allclose(clf.feature_importance, np.array([0.0] * X.shape[1]))
def test_baseline_binary_random_weighted(X_y_binary):
X, y = X_y_binary
values, counts = np.unique(y, return_counts=True)
percent_freq = counts.astype(float) / len(y)
assert percent_freq.sum() == 1.0
clf = BaselineClassifier(strategy="random_weighted", random_seed=0)
clf.fit(X, y)
assert clf.classes_ == [0, 1]
expected_predictions = pd.Series(
get_random_state(0).choice(np.unique(y), len(X), p=percent_freq), dtype="int64"
)
predictions = clf.predict(X)
assert_series_equal(expected_predictions, predictions)
predicted_proba = clf.predict_proba(X)
assert predicted_proba.shape == (len(X), 2)
expected_predictions_proba = pd.DataFrame(
np.array([[percent_freq[i] for i in range(len(values))]] * len(X))
)
assert_frame_equal(expected_predictions_proba, predicted_proba)
np.testing.assert_allclose(clf.feature_importance, np.array([0.0] * X.shape[1]))
def test_baseline_multiclass_mode():
X = pd.DataFrame({"one": [1, 2, 3, 4], "two": [2, 3, 4, 5], "three": [1, 2, 3, 4]})
y = pd.Series([10, 12, 11, 11])
clf = BaselineClassifier(strategy="mode")
clf.fit(X, y)
assert clf.classes_ == [10, 11, 12]
predictions = clf.predict(X)
expected_predictions = pd.Series([11] * len(X), dtype="int64")
assert_series_equal(expected_predictions, predictions)
predicted_proba = clf.predict_proba(X)
assert predicted_proba.shape == (len(X), 3)
expected_predictions_proba = pd.DataFrame(
{10: [0.0, 0.0, 0.0, 0.0], 11: [1.0, 1.0, 1.0, 1.0], 12: [0.0, 0.0, 0.0, 0.0]}
)
assert_frame_equal(expected_predictions_proba, predicted_proba)
np.testing.assert_allclose(clf.feature_importance, np.array([0.0] * X.shape[1]))
def test_baseline_multiclass_random(X_y_multi):
X, y = X_y_multi
values = np.unique(y)
clf = BaselineClassifier(strategy="random", random_seed=0)
clf.fit(X, y)
assert clf.classes_ == [0, 1, 2]
expected_predictions = pd.Series(
get_random_state(0).choice(np.unique(y), len(X)), dtype="int64"
)
predictions = clf.predict(X)
assert_series_equal(expected_predictions, predictions)
predicted_proba = clf.predict_proba(X)
assert predicted_proba.shape == (len(X), 3)
assert_frame_equal(
pd.DataFrame(np.array([[1.0 / 3 for i in range(len(values))]] * len(X))),
predicted_proba,
)
np.testing.assert_allclose(clf.feature_importance, np.array([0.0] * X.shape[1]))
def test_baseline_multiclass_random_weighted(X_y_multi):
X, y = X_y_multi
values, counts = np.unique(y, return_counts=True)
percent_freq = counts.astype(float) / len(y)
assert percent_freq.sum() == 1.0
clf = BaselineClassifier(strategy="random_weighted", random_seed=0)
clf.fit(X, y)
assert clf.classes_ == [0, 1, 2]
expected_predictions = pd.Series(
get_random_state(0).choice(np.unique(y), len(X), p=percent_freq), dtype="int64"
)
predictions = clf.predict(X)
assert_series_equal(expected_predictions, predictions)
predicted_proba = clf.predict_proba(X)
assert predicted_proba.shape == (len(X), 3)
assert_frame_equal(
pd.DataFrame(
np.array([[percent_freq[i] for i in range(len(values))]] * len(X))
),
predicted_proba,
)
np.testing.assert_allclose(clf.feature_importance, np.array([0.0] * X.shape[1]))
def test_baseline_no_mode():
X = pd.DataFrame([[1, 2, 3, 0, 1]])
y = pd.Series([1, 0, 2, 0, 1])
clf = BaselineClassifier()
clf.fit(X, y)
assert clf.classes_ == [0, 1, 2]
expected_predictions = pd.Series([0] * len(X), dtype="int64")
predictions = clf.predict(X)
assert_series_equal(expected_predictions, predictions)
predicted_proba = clf.predict_proba(X)
assert predicted_proba.shape == (len(X), 3)
assert_frame_equal(
pd.DataFrame(np.array([[1.0 if i == 0 else 0.0 for i in range(3)]] * len(X))),
predicted_proba,
)
np.testing.assert_allclose(clf.feature_importance, np.array([0.0] * X.shape[1]))
| 34.630208 | 87 | 0.679651 | 966 | 6,649 | 4.467909 | 0.104555 | 0.013438 | 0.012512 | 0.013902 | 0.784523 | 0.746061 | 0.728684 | 0.723123 | 0.717563 | 0.701807 | 0 | 0.035734 | 0.179275 | 6,649 | 191 | 88 | 34.811518 | 0.755177 | 0 | 0 | 0.610738 | 0 | 0 | 0.024365 | 0 | 0 | 0 | 0 | 0 | 0.281879 | 1 | 0.067114 | false | 0 | 0.09396 | 0 | 0.161074 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
22ea37ac692ffa5a130a4b60c383041d4e61bb0f | 48 | py | Python | src/asgi_lifespan/_exceptions.py | euri10/asgi-lifespan | 0bffff3cf3b99319428aaf36bf10ca902ce4146e | [
"MIT"
] | 98 | 2019-09-27T19:53:06.000Z | 2022-03-18T19:46:03.000Z | src/asgi_lifespan/_exceptions.py | euri10/asgi-lifespan | 0bffff3cf3b99319428aaf36bf10ca902ce4146e | [
"MIT"
] | 45 | 2019-09-27T20:43:46.000Z | 2022-01-04T11:12:11.000Z | src/asgi_lifespan/_exceptions.py | euri10/asgi-lifespan | 0bffff3cf3b99319428aaf36bf10ca902ce4146e | [
"MIT"
] | 4 | 2020-01-10T09:21:21.000Z | 2021-12-02T08:49:10.000Z | class LifespanNotSupported(Exception):
pass
| 16 | 38 | 0.791667 | 4 | 48 | 9.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.145833 | 48 | 2 | 39 | 24 | 0.926829 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
22f8118c1d870c52e872de06cc2aeac8a4dcc366 | 161 | py | Python | netmiko/juniper/__init__.py | octupszhang/netmiko | 7828ac645f3ec3f839e3a41484a0c441a309982f | [
"MIT"
] | null | null | null | netmiko/juniper/__init__.py | octupszhang/netmiko | 7828ac645f3ec3f839e3a41484a0c441a309982f | [
"MIT"
] | null | null | null | netmiko/juniper/__init__.py | octupszhang/netmiko | 7828ac645f3ec3f839e3a41484a0c441a309982f | [
"MIT"
] | null | null | null | from __future__ import unicode_literals
from netmiko.juniper.juniper_ssh import JuniperSSH, JuniperFileTransfer
__all__ = ['JuniperSSH', 'JuniperFileTransfer']
| 32.2 | 71 | 0.84472 | 16 | 161 | 7.875 | 0.6875 | 0.460317 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.086957 | 161 | 4 | 72 | 40.25 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0.180124 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
fe1a428fc6f383178692088cc2e7d134c4dd4917 | 204 | py | Python | csp/propagators/dummy_propagator.py | abeccaro/csp-solver | a761dee02a4dd12162eb55ef34cc0989c79567cc | [
"MIT"
] | null | null | null | csp/propagators/dummy_propagator.py | abeccaro/csp-solver | a761dee02a4dd12162eb55ef34cc0989c79567cc | [
"MIT"
] | null | null | null | csp/propagators/dummy_propagator.py | abeccaro/csp-solver | a761dee02a4dd12162eb55ef34cc0989c79567cc | [
"MIT"
] | null | null | null | from csp.propagators.propagator import Propagator
class DummyPropagator(Propagator):
"""Propagator that actually doesn't propagate."""
def on_domain_change(self, var):
pass
| 22.666667 | 54 | 0.691176 | 22 | 204 | 6.318182 | 0.863636 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.22549 | 204 | 8 | 55 | 25.5 | 0.879747 | 0.210784 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0.25 | 0.25 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 5 |
a3d4adf6417c8ba73a1516794b174eebeab97118 | 5,625 | py | Python | colour/models/rgb/transfer_functions/tests/test_arib_std_b67.py | soma2000-lang/colour | bb7ee23ac65e09613af78bd18dd98dffb1a2904a | [
"BSD-3-Clause"
] | 1 | 2022-02-12T06:28:15.000Z | 2022-02-12T06:28:15.000Z | colour/models/rgb/transfer_functions/tests/test_arib_std_b67.py | soma2000-lang/colour | bb7ee23ac65e09613af78bd18dd98dffb1a2904a | [
"BSD-3-Clause"
] | null | null | null | colour/models/rgb/transfer_functions/tests/test_arib_std_b67.py | soma2000-lang/colour | bb7ee23ac65e09613af78bd18dd98dffb1a2904a | [
"BSD-3-Clause"
] | null | null | null | """
Defines the unit tests for the
:mod:`colour.models.rgb.transfer_functions.arib_std_b67` module.
"""
import numpy as np
import unittest
from colour.models.rgb.transfer_functions import (
oetf_ARIBSTDB67,
oetf_inverse_ARIBSTDB67,
)
from colour.utilities import domain_range_scale, ignore_numpy_errors
__author__ = "Colour Developers"
__copyright__ = "Copyright (C) 2013-2022 - Colour Developers"
__license__ = "New BSD License - https://opensource.org/licenses/BSD-3-Clause"
__maintainer__ = "Colour Developers"
__email__ = "colour-developers@colour-science.org"
__status__ = "Production"
__all__ = [
"TestOetf_ARIBSTDB67",
"TestOetf_inverse_ARIBSTDB67",
]
class TestOetf_ARIBSTDB67(unittest.TestCase):
"""
Defines :func:`colour.models.rgb.transfer_functions.arib_std_b67.\
oetf_ARIBSTDB67` definition unit tests methods.
"""
def test_oetf_ARIBSTDB67(self):
"""
Tests :func:`colour.models.rgb.transfer_functions.arib_std_b67.\
oetf_ARIBSTDB67` definition.
"""
self.assertAlmostEqual(oetf_ARIBSTDB67(-0.25), -0.25, places=7)
self.assertAlmostEqual(oetf_ARIBSTDB67(0.0), 0.0, places=7)
self.assertAlmostEqual(
oetf_ARIBSTDB67(0.18), 0.212132034355964, places=7
)
self.assertAlmostEqual(oetf_ARIBSTDB67(1.0), 0.5, places=7)
self.assertAlmostEqual(
oetf_ARIBSTDB67(64.0), 1.302858098046995, places=7
)
def test_n_dimensional_oetf_ARIBSTDB67(self):
"""
Tests :func:`colour.models.rgb.transfer_functions.arib_std_b67.\
oetf_ARIBSTDB67` definition n-dimensional arrays support.
"""
E = 0.18
E_p = oetf_ARIBSTDB67(E)
E = np.tile(E, 6)
E_p = np.tile(E_p, 6)
np.testing.assert_almost_equal(oetf_ARIBSTDB67(E), E_p, decimal=7)
E = np.reshape(E, (2, 3))
E_p = np.reshape(E_p, (2, 3))
np.testing.assert_almost_equal(oetf_ARIBSTDB67(E), E_p, decimal=7)
E = np.reshape(E, (2, 3, 1))
E_p = np.reshape(E_p, (2, 3, 1))
np.testing.assert_almost_equal(oetf_ARIBSTDB67(E), E_p, decimal=7)
def test_domain_range_scale_oetf_ARIBSTDB67(self):
"""
Tests :func:`colour.models.rgb.transfer_functions.arib_std_b67.\
oetf_ARIBSTDB67` definition domain and range scale support.
"""
E = 0.18
E_p = oetf_ARIBSTDB67(E)
d_r = (("reference", 1), ("1", 1), ("100", 100))
for scale, factor in d_r:
with domain_range_scale(scale):
np.testing.assert_almost_equal(
oetf_ARIBSTDB67(E * factor), E_p * factor, decimal=7
)
@ignore_numpy_errors
def test_nan_oetf_ARIBSTDB67(self):
"""
Tests :func:`colour.models.rgb.transfer_functions.arib_std_b67.\
oetf_ARIBSTDB67` definition nan support.
"""
oetf_ARIBSTDB67(np.array([-1.0, 0.0, 1.0, -np.inf, np.inf, np.nan]))
class TestOetf_inverse_ARIBSTDB67(unittest.TestCase):
"""
Defines :func:`colour.models.rgb.transfer_functions.arib_std_b67.\
oetf_inverse_ARIBSTDB67` definition unit tests methods.
"""
def test_oetf_inverse_ARIBSTDB67(self):
"""
Tests :func:`colour.models.rgb.transfer_functions.arib_std_b67.\
oetf_inverse_ARIBSTDB67` definition.
"""
self.assertAlmostEqual(oetf_inverse_ARIBSTDB67(-0.25), -0.25, places=7)
self.assertAlmostEqual(oetf_inverse_ARIBSTDB67(0.0), 0.0, places=7)
self.assertAlmostEqual(
oetf_inverse_ARIBSTDB67(0.212132034355964), 0.18, places=7
)
self.assertAlmostEqual(oetf_inverse_ARIBSTDB67(0.5), 1.0, places=7)
self.assertAlmostEqual(
oetf_inverse_ARIBSTDB67(1.302858098046995), 64.0, places=7
)
def test_n_dimensional_oetf_inverse_ARIBSTDB67(self):
"""
Tests :func:`colour.models.rgb.transfer_functions.arib_std_b67.\
oetf_inverse_ARIBSTDB67` definition n-dimensional arrays support.
"""
E_p = 0.212132034355964
E = oetf_inverse_ARIBSTDB67(E_p)
E_p = np.tile(E_p, 6)
E = np.tile(E, 6)
np.testing.assert_almost_equal(
oetf_inverse_ARIBSTDB67(E_p), E, decimal=7
)
E_p = np.reshape(E_p, (2, 3))
E = np.reshape(E, (2, 3))
np.testing.assert_almost_equal(
oetf_inverse_ARIBSTDB67(E_p), E, decimal=7
)
E_p = np.reshape(E_p, (2, 3, 1))
E = np.reshape(E, (2, 3, 1))
np.testing.assert_almost_equal(
oetf_inverse_ARIBSTDB67(E_p), E, decimal=7
)
def test_domain_range_scale_oetf_inverse_ARIBSTDB67(self):
"""
Tests :func:`colour.models.rgb.transfer_functions.arib_std_b67.\
oetf_inverse_ARIBSTDB67` definition domain and range scale support.
"""
E_p = 0.212132034355964
E = oetf_inverse_ARIBSTDB67(E_p)
d_r = (("reference", 1), ("1", 1), ("100", 100))
for scale, factor in d_r:
with domain_range_scale(scale):
np.testing.assert_almost_equal(
oetf_inverse_ARIBSTDB67(E_p * factor),
E * factor,
decimal=7,
)
@ignore_numpy_errors
def test_nan_oetf_inverse_ARIBSTDB67(self):
"""
Tests :func:`colour.models.rgb.transfer_functions.arib_std_b67.\
oetf_inverse_ARIBSTDB67` definition nan support.
"""
oetf_inverse_ARIBSTDB67(
np.array([-1.0, 0.0, 1.0, -np.inf, np.inf, np.nan])
)
if __name__ == "__main__":
unittest.main()
| 30.241935 | 79 | 0.639644 | 723 | 5,625 | 4.672199 | 0.138313 | 0.015394 | 0.136767 | 0.081705 | 0.831557 | 0.801362 | 0.763766 | 0.724097 | 0.627886 | 0.605388 | 0 | 0.080066 | 0.242844 | 5,625 | 185 | 80 | 30.405405 | 0.713078 | 0.220622 | 0 | 0.434343 | 0 | 0 | 0.063979 | 0.01521 | 0 | 0 | 0 | 0 | 0.181818 | 1 | 0.080808 | false | 0 | 0.040404 | 0 | 0.141414 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
a3fe9c5b1371ef41ca6b20f12dce26dbf087573c | 173 | py | Python | dmr/__init__.py | taylorshin/dmr | e496cc8df87392548812af17b38b4cb3753f8fde | [
"MIT"
] | 43 | 2017-01-10T21:44:29.000Z | 2021-03-18T13:18:23.000Z | dmr/__init__.py | taylorshin/dmr | e496cc8df87392548812af17b38b4cb3753f8fde | [
"MIT"
] | 1 | 2019-06-28T03:21:04.000Z | 2019-06-28T03:21:04.000Z | dmr/__init__.py | taylorshin/dmr | e496cc8df87392548812af17b38b4cb3753f8fde | [
"MIT"
] | 12 | 2017-04-10T07:31:41.000Z | 2022-03-21T05:30:39.000Z | from .lda import LDA
from .dmr import DMR
from .sdmr import SDMR
from .mdmr import MDMR
from .jlda import JLDA
from .vocabulary import Vocabulary
from .corpus import Corpus
| 21.625 | 34 | 0.797688 | 28 | 173 | 4.928571 | 0.321429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16185 | 173 | 7 | 35 | 24.714286 | 0.951724 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
4304720d450cd0cfd7e921d764b26d56b478242e | 3,256 | py | Python | manual-files/server/analytics.py | cloudmesh/cloudmesh-analytics | 26e8ab8e718730cbbc5b99ac71395c22686ae698 | [
"Apache-2.0"
] | null | null | null | manual-files/server/analytics.py | cloudmesh/cloudmesh-analytics | 26e8ab8e718730cbbc5b99ac71395c22686ae698 | [
"Apache-2.0"
] | null | null | null | manual-files/server/analytics.py | cloudmesh/cloudmesh-analytics | 26e8ab8e718730cbbc5b99ac71395c22686ae698 | [
"Apache-2.0"
] | 2 | 2019-10-21T03:58:57.000Z | 2020-02-12T00:07:06.000Z | from sklearn.linear_model import LinearRegression
import os
import numpy as np
import file
from flask import jsonify
def load_obj(path):
return np.load(path, allow_pickle=True).item()
def save_obj(obj, class_name):
np.save(os.path.join('.', str(class_name) + '_constructor'),
obj, allow_pickle=True)
return
def LinearRegression_constructor(body):
try:
paras = body['paras']
res = LinearRegression(**paras)
save_obj(res, 'LinearRegression')
except Exception as e:
return jsonify({'Error': str(e)})
return jsonify({'return': 'successfully constructed'})
def LinearRegression_fit(body):
if 'X' in body['paras'].keys() and isinstance(body['paras']['X'], str):
file_name = body['paras']['X']
body['paras']['X'] = file.read_csv(file_name)
if 'y' in body['paras'].keys() and isinstance(body['paras']['y'], str):
file_name = body['paras']['y']
body['paras']['y'] = file.read_csv(file_name)
try:
obj =load_obj(os.path.join('.', 'LinearRegression_constructor.npy'))
res = obj.fit(**body['paras'])
save_obj(obj, 'LinearRegression')
except Exception as e:
return jsonify({'Error': str(e)})
return jsonify({'return': str(res)})
def LinearRegression_get_params(body):
if 'X' in body['paras'].keys() and isinstance(body['paras']['X'], str):
file_name = body['paras']['X']
body['paras']['X'] = file.read_csv(file_name)
if 'y' in body['paras'].keys() and isinstance(body['paras']['y'], str):
file_name = body['paras']['y']
body['paras']['y'] = file.read_csv(file_name)
try:
obj =load_obj(os.path.join('.', 'LinearRegression_constructor.npy'))
res = obj.get_params(**body['paras'])
save_obj(obj, 'LinearRegression')
except Exception as e:
return jsonify({'Error': str(e)})
return jsonify({'return': str(res)})
def LinearRegression_predict(body):
if 'X' in body['paras'].keys() and isinstance(body['paras']['X'], str):
file_name = body['paras']['X']
body['paras']['X'] = file.read_csv(file_name)
if 'y' in body['paras'].keys() and isinstance(body['paras']['y'], str):
file_name = body['paras']['y']
body['paras']['y'] = file.read_csv(file_name)
try:
obj =load_obj(os.path.join('.', 'LinearRegression_constructor.npy'))
res = obj.predict(**body['paras'])
save_obj(obj, 'LinearRegression')
except Exception as e:
return jsonify({'Error': str(e)})
return jsonify({'return': str(res)})
def LinearRegression_score(body):
if 'X' in body['paras'].keys() and isinstance(body['paras']['X'], str):
file_name = body['paras']['X']
body['paras']['X'] = file.read_csv(file_name)
if 'y' in body['paras'].keys() and isinstance(body['paras']['y'], str):
file_name = body['paras']['y']
body['paras']['y'] = file.read_csv(file_name)
try:
obj =load_obj(os.path.join('.', 'LinearRegression_constructor.npy'))
res = obj.score(**body['paras'])
save_obj(obj, 'LinearRegression')
except Exception as e:
return jsonify({'Error': str(e)})
return jsonify({'return': str(res)}) | 33.22449 | 76 | 0.605651 | 429 | 3,256 | 4.475524 | 0.123543 | 0.173438 | 0.0625 | 0.0625 | 0.7875 | 0.7875 | 0.7875 | 0.7875 | 0.7875 | 0.7875 | 0 | 0 | 0.204853 | 3,256 | 98 | 77 | 33.22449 | 0.741599 | 0 | 0 | 0.68 | 0 | 0 | 0.159963 | 0.0393 | 0 | 0 | 0 | 0 | 0 | 1 | 0.093333 | false | 0 | 0.066667 | 0.013333 | 0.32 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
430c1a4cc9d02a7f772cc490352c85ebba8edb26 | 102 | py | Python | utils/affiliationParser/__init__.py | avenzi/brainworks-public | 04caab38af567eb7ef258e1952e37257e9fbe8fc | [
"MIT"
] | 6 | 2021-12-17T20:32:18.000Z | 2022-02-24T16:28:14.000Z | utils/affiliationParser/__init__.py | avenzi/brainworks-public | 04caab38af567eb7ef258e1952e37257e9fbe8fc | [
"MIT"
] | null | null | null | utils/affiliationParser/__init__.py | avenzi/brainworks-public | 04caab38af567eb7ef258e1952e37257e9fbe8fc | [
"MIT"
] | null | null | null | from .utils import download_grid_data
from .parse import parse_affil
from .matcher import match_affil
| 25.5 | 37 | 0.852941 | 16 | 102 | 5.1875 | 0.625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 102 | 3 | 38 | 34 | 0.922222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
43480fabcb18083720900c34dd3bc32662e64886 | 473 | py | Python | tests/test_python_image_clone_detection.py | shanedevane/python-image-clone-detection | 1de9f5140e81ff735e1bed46d11f913d1e23f9f2 | [
"MIT"
] | 2 | 2018-10-04T21:17:32.000Z | 2019-09-25T14:44:12.000Z | tests/test_python_image_clone_detection.py | shanedevane/python-image-clone-detection | 1de9f5140e81ff735e1bed46d11f913d1e23f9f2 | [
"MIT"
] | null | null | null | tests/test_python_image_clone_detection.py | shanedevane/python-image-clone-detection | 1de9f5140e81ff735e1bed46d11f913d1e23f9f2 | [
"MIT"
] | 2 | 2018-10-04T21:43:38.000Z | 2019-11-20T11:30:56.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
test_python_image_clone_detection
----------------------------------
Tests for `python_image_clone_detection` module.
"""
import pytest
from python_image_clone_detection import python_image_clone_detection
class TestPython_image_clone_detection(object):
@classmethod
def setup_class(cls):
pass
def test_something(self):
pass
@classmethod
def teardown_class(cls):
pass
| 15.766667 | 69 | 0.661734 | 54 | 473 | 5.444444 | 0.518519 | 0.170068 | 0.323129 | 0.340136 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002604 | 0.188161 | 473 | 29 | 70 | 16.310345 | 0.763021 | 0.340381 | 0 | 0.454545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.272727 | false | 0.272727 | 0.181818 | 0 | 0.545455 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 5 |
4a358e216caa19d4a9d8aafc787a70a670ffe319 | 48 | py | Python | nnlib/arguments/__init__.py | huzecong/nnlib | fd68abc51028444ce7c789642e2a7b8ed1853255 | [
"MIT"
] | 1 | 2019-01-08T03:55:23.000Z | 2019-01-08T03:55:23.000Z | nnlib/arguments/__init__.py | huzecong/nnlib | fd68abc51028444ce7c789642e2a7b8ed1853255 | [
"MIT"
] | null | null | null | nnlib/arguments/__init__.py | huzecong/nnlib | fd68abc51028444ce7c789642e2a7b8ed1853255 | [
"MIT"
] | null | null | null | from nnlib.arguments.arguments import Arguments
| 24 | 47 | 0.875 | 6 | 48 | 7 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 48 | 1 | 48 | 48 | 0.954545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
4a55181def3e48dc5fb02cc096f848306edbe284 | 155 | py | Python | libconform/ncs/__init__.py | jofas/conform | 9f8dd3c7c607269529bf4d62a729ed2ca1880baa | [
"MIT"
] | 5 | 2020-02-10T13:30:06.000Z | 2021-12-22T16:08:02.000Z | libconform/ncs/__init__.py | jofas/conform | 9f8dd3c7c607269529bf4d62a729ed2ca1880baa | [
"MIT"
] | 1 | 2019-07-04T14:12:13.000Z | 2020-06-16T16:05:02.000Z | libconform/ncs/__init__.py | jofas/conform | 9f8dd3c7c607269529bf4d62a729ed2ca1880baa | [
"MIT"
] | null | null | null | from .decision_tree import NCSDecisionTree
from .neural_net import NCSNeuralNet
from .knn import NCSKNearestNeighbors, NCSKNearestNeighborsRegressor
| 31 | 42 | 0.851613 | 15 | 155 | 8.666667 | 0.733333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.116129 | 155 | 4 | 43 | 38.75 | 0.948905 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.75 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
4a64ab2a695e9d2e9f93ca1f181b4ffaf24ddb52 | 2,781 | py | Python | tests/test_area_square_inches.py | putridparrot/PyUnits | 4f1095c6fc0bee6ba936921c391913dbefd9307c | [
"MIT"
] | null | null | null | tests/test_area_square_inches.py | putridparrot/PyUnits | 4f1095c6fc0bee6ba936921c391913dbefd9307c | [
"MIT"
] | null | null | null | tests/test_area_square_inches.py | putridparrot/PyUnits | 4f1095c6fc0bee6ba936921c391913dbefd9307c | [
"MIT"
] | null | null | null | # <auto-generated>
# This code was generated by the UnitCodeGenerator tool
#
# Changes to this file will be lost if the code is regenerated
# </auto-generated>
import unittest
import units.area.square_inches
class TestSquareInchesMethods(unittest.TestCase):
def test_convert_known_square_inches_to_square_kilometres(self):
self.assertAlmostEqual(0.437999124, units.area.square_inches.to_square_kilometres(678900000.0), places=1)
self.assertAlmostEqual(6.4516, units.area.square_inches.to_square_kilometres(10000000000.0), places=1)
self.assertAlmostEqual(5806.44, units.area.square_inches.to_square_kilometres(9e12), places=1)
def test_convert_known_square_inches_to_square_metres(self):
self.assertAlmostEqual(0.7032244, units.area.square_inches.to_square_metres(1090.0), places=1)
self.assertAlmostEqual(838.708, units.area.square_inches.to_square_metres(1.3e6), places=1)
self.assertAlmostEqual(6.443858, units.area.square_inches.to_square_metres(9988.0), places=1)
def test_convert_known_square_inches_to_square_miles(self):
self.assertAlmostEqual(0.2508433450668, units.area.square_inches.to_square_miles(1007008000.0), places=1)
self.assertAlmostEqual(298.9172023262932, units.area.square_inches.to_square_miles(1.2e12), places=1)
self.assertAlmostEqual(0.6227441715131, units.area.square_inches.to_square_miles(250e7), places=1)
def test_convert_known_square_inches_to_square_yards(self):
self.assertAlmostEqual(0.694444, units.area.square_inches.to_square_yards(900.0), places=1)
self.assertAlmostEqual(11.574074, units.area.square_inches.to_square_yards(15000.0), places=1)
self.assertAlmostEqual(2314.814815, units.area.square_inches.to_square_yards(3e6), places=1)
def test_convert_known_square_inches_to_square_feet(self):
self.assertAlmostEqual(0.236111, units.area.square_inches.to_square_feet(34.0), places=1)
self.assertAlmostEqual(6.958333, units.area.square_inches.to_square_feet(1002.0), places=1)
self.assertAlmostEqual(6.18056, units.area.square_inches.to_square_feet(890.0), places=1)
def test_convert_known_square_inches_to_hectares(self):
self.assertAlmostEqual(0.580644, units.area.square_inches.to_hectares(9000000.0), places=1)
self.assertAlmostEqual(0.79649376185, units.area.square_inches.to_hectares(12345678.0), places=1)
self.assertAlmostEqual(6.4443591113, units.area.square_inches.to_hectares(99887766.0), places=1)
def test_convert_known_square_inches_to_acres(self):
self.assertAlmostEqual(0.143496199, units.area.square_inches.to_acres(900100.0), places=1)
self.assertAlmostEqual(0.7971125395, units.area.square_inches.to_acres(5e6), places=1)
self.assertAlmostEqual(1.9681789486, units.area.square_inches.to_acres(12345678.0), places=1)
if __name__ == '__main__':
unittest.main()
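The conversions exercised by these tests follow from the exact definition of the inch (1 in = 0.0254 m, so 1 in² = 0.00064516 m²). The sketch below is an illustrative reconstruction, not the actual `units.area.square_inches` implementation, but it reproduces the expected values used in the assertions above:

```python
# Sketch of two conversions tested above.
# 1 in = 0.0254 m exactly, so 1 in^2 = 0.0254**2 = 0.00064516 m^2.
SQUARE_METRES_PER_SQUARE_INCH = 0.0254 ** 2

def to_square_metres(square_inches: float) -> float:
    return square_inches * SQUARE_METRES_PER_SQUARE_INCH

def to_square_feet(square_inches: float) -> float:
    # 1 ft = 12 in, so 1 ft^2 = 144 in^2.
    return square_inches / 144.0
```

For example, `to_square_metres(1090.0)` gives approximately 0.7032244, matching the first `to_square_metres` assertion.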
| 56.755102 | 107 | 0.822366 | 413 | 2,781 | 5.263923 | 0.220339 | 0.160074 | 0.180313 | 0.212512 | 0.653634 | 0.546458 | 0.382245 | 0.140754 | 0.122815 | 0.122815 | 0 | 0.130234 | 0.064006 | 2,781 | 48 | 108 | 57.9375 | 0.704956 | 0.053578 | 0 | 0 | 1 | 0 | 0.003046 | 0 | 0 | 0 | 0 | 0 | 0.636364 | 1 | 0.212121 | false | 0 | 0.060606 | 0 | 0.30303 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
4a97b041a1f03ee7f706c5703eef49f19bf8e6f8 | 42 | py | Python | analyst_tool.py | cybersheepdog/Analyst-Tool | ea7227e97a5c5916cfdf5a7d83109479df626af1 | [
"BSD-3-Clause"
] | 3 | 2022-02-16T02:38:09.000Z | 2022-03-11T13:02:38.000Z | analyst_tool.py | cybersheepdog/Analyst-Tool | ea7227e97a5c5916cfdf5a7d83109479df626af1 | [
"BSD-3-Clause"
] | null | null | null | analyst_tool.py | cybersheepdog/Analyst-Tool | ea7227e97a5c5916cfdf5a7d83109479df626af1 | [
"BSD-3-Clause"
] | null | null | null | from analyst import *
analyst(terminal=1)
| 14 | 21 | 0.785714 | 6 | 42 | 5.5 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.027027 | 0.119048 | 42 | 2 | 22 | 21 | 0.864865 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
4a98999a4e5a2ccacf4c027c50a22be804aa9313 | 1,076 | py | Python | environments/mujoco/rand_param_envs/gym/envs/mujoco/__init__.py | lfeng1999/varibad | 840f4bd56ccee96a6c162265d18ec54db8b77a1e | [
"MIT"
] | 119 | 2020-02-12T07:06:17.000Z | 2022-03-24T08:37:34.000Z | environments/mujoco/rand_param_envs/gym/envs/mujoco/__init__.py | lfeng1999/varibad | 840f4bd56ccee96a6c162265d18ec54db8b77a1e | [
"MIT"
] | 2 | 2020-05-24T22:33:42.000Z | 2020-09-28T16:42:02.000Z | environments/mujoco/rand_param_envs/gym/envs/mujoco/__init__.py | lfeng1999/varibad | 840f4bd56ccee96a6c162265d18ec54db8b77a1e | [
"MIT"
] | 26 | 2020-04-20T13:10:11.000Z | 2022-03-22T10:21:10.000Z | # ^^^^^ so that user gets the correct error
# message if mujoco is not installed correctly
from environments.mujoco.rand_param_envs.gym.envs.mujoco.ant import AntEnv
from environments.mujoco.rand_param_envs.gym.envs.mujoco.half_cheetah import HalfCheetahEnv
from environments.mujoco.rand_param_envs.gym.envs.mujoco.hopper import HopperEnv
from environments.mujoco.rand_param_envs.gym.envs.mujoco.humanoid import HumanoidEnv
from environments.mujoco.rand_param_envs.gym.envs.mujoco.humanoidstandup import HumanoidStandupEnv
from environments.mujoco.rand_param_envs.gym.envs.mujoco.inverted_double_pendulum import InvertedDoublePendulumEnv
from environments.mujoco.rand_param_envs.gym.envs.mujoco.inverted_pendulum import InvertedPendulumEnv
from environments.mujoco.rand_param_envs.gym.envs.mujoco.mujoco_env import MujocoEnv
from environments.mujoco.rand_param_envs.gym.envs.mujoco.reacher import ReacherEnv
from environments.mujoco.rand_param_envs.gym.envs.mujoco.swimmer import SwimmerEnv
from environments.mujoco.rand_param_envs.gym.envs.mujoco.walker2d import Walker2dEnv
| 76.857143 | 114 | 0.875465 | 151 | 1,076 | 6.059603 | 0.298013 | 0.19235 | 0.264481 | 0.312568 | 0.594536 | 0.594536 | 0.594536 | 0.594536 | 0.594536 | 0.122404 | 0 | 0.00197 | 0.056691 | 1,076 | 13 | 115 | 82.769231 | 0.899507 | 0.079926 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
4aba2e3c9aa957ab6847c3cf4193ee05de604c25 | 41 | py | Python | PELEpharmacophore/errors/custom_errors.py | martimunicoy/PELEpharmacophore | e571688e9e4ceec48a77e0bb1486ba8c0a54e773 | [
"MIT"
] | 4 | 2021-02-10T12:18:00.000Z | 2022-03-23T10:34:34.000Z | PELEpharmacophore/errors/custom_errors.py | martimunicoy/PELEpharmacophore | e571688e9e4ceec48a77e0bb1486ba8c0a54e773 | [
"MIT"
] | 4 | 2021-02-26T10:43:52.000Z | 2021-03-29T10:26:52.000Z | PELEpharmacophore/errors/custom_errors.py | martimunicoy/PELEpharmacophore | e571688e9e4ceec48a77e0bb1486ba8c0a54e773 | [
"MIT"
] | null | null | null | class WrongYamlFile(Exception):
pass
| 13.666667 | 31 | 0.756098 | 4 | 41 | 7.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.170732 | 41 | 2 | 32 | 20.5 | 0.911765 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
435b1e2b79171fdd59c080a694235d6ad31eddcb | 12,651 | py | Python | tests/transport/redundant/_session_input.py | WshgL/pyuavcan | f2b8d2d743f09ad4af8d62fc96d8f0b013aeb8b0 | [
"MIT"
] | null | null | null | tests/transport/redundant/_session_input.py | WshgL/pyuavcan | f2b8d2d743f09ad4af8d62fc96d8f0b013aeb8b0 | [
"MIT"
] | null | null | null | tests/transport/redundant/_session_input.py | WshgL/pyuavcan | f2b8d2d743f09ad4af8d62fc96d8f0b013aeb8b0 | [
"MIT"
] | null | null | null | # Copyright (c) 2019 UAVCAN Consortium
# This software is distributed under the terms of the MIT License.
# Author: Pavel Kirienko <pavel@uavcan.org>
import time
import asyncio
import pytest
import pyuavcan
from pyuavcan.transport import Transfer, Timestamp, Priority, ResourceClosedError
from pyuavcan.transport.loopback import LoopbackTransport
from pyuavcan.transport.redundant._session._base import RedundantSessionStatistics
from pyuavcan.transport.redundant._session._input import RedundantInputSession
from pyuavcan.transport.redundant._session._input import RedundantTransferFrom
pytestmark = pytest.mark.asyncio
async def _unittest_redundant_input_cyclic() -> None:
asyncio.get_running_loop().slow_callback_duration = 5.0
spec = pyuavcan.transport.InputSessionSpecifier(pyuavcan.transport.MessageDataSpecifier(4321), None)
spec_tx = pyuavcan.transport.OutputSessionSpecifier(spec.data_specifier, None)
meta = pyuavcan.transport.PayloadMetadata(30)
ts = Timestamp.now()
tr_a = LoopbackTransport(111)
tr_b = LoopbackTransport(111)
tx_a = tr_a.get_output_session(spec_tx, meta)
tx_b = tr_b.get_output_session(spec_tx, meta)
inf_a = tr_a.get_input_session(spec, meta)
inf_b = tr_b.get_input_session(spec, meta)
inf_a.transfer_id_timeout = 1.1 # This is used to ensure that the transfer-ID timeout is handled correctly.
is_retired = False
def retire() -> None:
nonlocal is_retired
is_retired = True
ses = RedundantInputSession(spec, meta, tid_modulo_provider=lambda: 32, finalizer=retire) # Like CAN, for example.
assert not is_retired
assert ses.specifier is spec
assert ses.payload_metadata is meta
assert not ses.inferiors
assert ses.sample_statistics() == RedundantSessionStatistics()
assert pytest.approx(0.0) == ses.transfer_id_timeout
# Empty inferior set reception.
time_before = asyncio.get_running_loop().time()
assert not await (ses.receive(asyncio.get_running_loop().time() + 2.0))
assert (
1.0 < asyncio.get_running_loop().time() - time_before < 5.0
), "The method should have returned in about two seconds."
# Begin reception, then add an inferior while the reception is in progress.
assert await (
tx_a.send(
Transfer(
timestamp=Timestamp.now(),
priority=Priority.HIGH,
transfer_id=1,
fragmented_payload=[memoryview(b"abc")],
),
asyncio.get_running_loop().time() + 1.0,
)
)
async def add_inferior(inferior: pyuavcan.transport.InputSession) -> None:
await asyncio.sleep(1.0)
ses._add_inferior(inferior) # pylint: disable=protected-access
time_before = asyncio.get_running_loop().time()
tr, _ = await (
asyncio.gather(
# Start reception here. It would stall for two seconds because no inferiors.
ses.receive(asyncio.get_running_loop().time() + 2.0),
# While the transmission is stalled, add one inferior with a delay.
add_inferior(inf_a),
)
)
assert (
0.0 < asyncio.get_running_loop().time() - time_before < 5.0
), "The method should have returned in about one second."
assert isinstance(tr, RedundantTransferFrom)
assert ts.monotonic <= tr.timestamp.monotonic <= (asyncio.get_running_loop().time() + 1e-3)
assert tr.priority == Priority.HIGH
assert tr.transfer_id == 1
assert tr.fragmented_payload == [memoryview(b"abc")]
assert tr.inferior_session == inf_a
# More inferiors
assert ses.transfer_id_timeout == pytest.approx(1.1)
ses._add_inferior(inf_a) # No change, added above # pylint: disable=protected-access
assert ses.inferiors == [inf_a]
ses._add_inferior(inf_b) # pylint: disable=protected-access
assert ses.inferiors == [inf_a, inf_b]
assert ses.transfer_id_timeout == pytest.approx(1.1)
assert inf_b.transfer_id_timeout == pytest.approx(1.1)
# Redundant reception - new transfers accepted because the iface switch timeout is exceeded.
time.sleep(ses.transfer_id_timeout) # Just to make sure that it is REALLY exceeded.
assert await (
tx_b.send(
Transfer(
timestamp=Timestamp.now(),
priority=Priority.HIGH,
transfer_id=2,
fragmented_payload=[memoryview(b"def")],
),
asyncio.get_running_loop().time() + 1.0,
)
)
assert await (
tx_b.send(
Transfer(
timestamp=Timestamp.now(),
priority=Priority.HIGH,
transfer_id=3,
fragmented_payload=[memoryview(b"ghi")],
),
asyncio.get_running_loop().time() + 1.0,
)
)
tr = await (ses.receive(asyncio.get_running_loop().time() + 0.1))
assert isinstance(tr, RedundantTransferFrom)
assert ts.monotonic <= tr.timestamp.monotonic <= (asyncio.get_running_loop().time() + 1e-3)
assert tr.priority == Priority.HIGH
assert tr.transfer_id == 2
assert tr.fragmented_payload == [memoryview(b"def")]
assert tr.inferior_session == inf_b
tr = await (ses.receive(asyncio.get_running_loop().time() + 0.1))
assert isinstance(tr, RedundantTransferFrom)
assert ts.monotonic <= tr.timestamp.monotonic <= (asyncio.get_running_loop().time() + 1e-3)
assert tr.priority == Priority.HIGH
assert tr.transfer_id == 3
assert tr.fragmented_payload == [memoryview(b"ghi")]
assert tr.inferior_session == inf_b
assert None is await (ses.receive(asyncio.get_running_loop().time() + 1.0)) # Nothing left to read now.
# This one will be rejected because wrong iface and the switch timeout is not yet exceeded.
assert await (
tx_a.send(
Transfer(
timestamp=Timestamp.now(),
priority=Priority.HIGH,
transfer_id=4,
fragmented_payload=[memoryview(b"rej")],
),
asyncio.get_running_loop().time() + 1.0,
)
)
assert None is await (ses.receive(asyncio.get_running_loop().time() + 0.1))
# Transfer-ID timeout reconfiguration.
ses.transfer_id_timeout = 3.0
with pytest.raises(ValueError):
ses.transfer_id_timeout = -0.0
assert ses.transfer_id_timeout == pytest.approx(3.0)
assert inf_a.transfer_id_timeout == pytest.approx(3.0)
assert inf_b.transfer_id_timeout == pytest.approx(3.0)
# Inferior removal resets the state of the deduplicator.
ses._close_inferior(0) # pylint: disable=protected-access
ses._close_inferior(1) # Out of range, no effect. # pylint: disable=protected-access
assert ses.inferiors == [inf_b]
assert await (
tx_b.send(
Transfer(
timestamp=Timestamp.now(),
priority=Priority.HIGH,
transfer_id=1,
fragmented_payload=[memoryview(b"acc")],
),
asyncio.get_running_loop().time() + 1.0,
)
)
tr = await (ses.receive(asyncio.get_running_loop().time() + 0.1))
assert isinstance(tr, RedundantTransferFrom)
assert ts.monotonic <= tr.timestamp.monotonic <= (asyncio.get_running_loop().time() + 1e-3)
assert tr.priority == Priority.HIGH
assert tr.transfer_id == 1
assert tr.fragmented_payload == [memoryview(b"acc")]
assert tr.inferior_session == inf_b
# Stats check.
assert ses.sample_statistics() == RedundantSessionStatistics(
transfers=4,
frames=inf_b.sample_statistics().frames,
payload_bytes=12,
errors=0,
drops=0,
inferiors=[
inf_b.sample_statistics(),
],
)
# Closure.
assert not is_retired
ses.close()
assert is_retired
is_retired = False
ses.close()
assert not is_retired
assert not ses.inferiors
with pytest.raises(ResourceClosedError):
await (ses.receive(0))
tr_a.close()
tr_b.close()
inf_a.close()
inf_b.close()
await asyncio.sleep(2.0)
async def _unittest_redundant_input_monotonic() -> None:
asyncio.get_running_loop().slow_callback_duration = 5.0
spec = pyuavcan.transport.InputSessionSpecifier(pyuavcan.transport.MessageDataSpecifier(4321), None)
spec_tx = pyuavcan.transport.OutputSessionSpecifier(spec.data_specifier, None)
meta = pyuavcan.transport.PayloadMetadata(30)
ts = Timestamp.now()
tr_a = LoopbackTransport(111)
tr_b = LoopbackTransport(111)
tx_a = tr_a.get_output_session(spec_tx, meta)
tx_b = tr_b.get_output_session(spec_tx, meta)
inf_a = tr_a.get_input_session(spec, meta)
inf_b = tr_b.get_input_session(spec, meta)
inf_a.transfer_id_timeout = 1.1 # This is used to ensure that the transfer-ID timeout is handled correctly.
ses = RedundantInputSession(
spec,
meta,
tid_modulo_provider=lambda: 2 ** 56, # Like UDP or serial - infinite modulo.
finalizer=lambda: None,
)
assert ses.specifier is spec
assert ses.payload_metadata is meta
assert not ses.inferiors
assert ses.sample_statistics() == RedundantSessionStatistics()
assert pytest.approx(0.0) == ses.transfer_id_timeout
# Add inferiors.
ses._add_inferior(inf_a) # pylint: disable=protected-access
assert ses.inferiors == [inf_a]
ses._add_inferior(inf_b) # pylint: disable=protected-access
assert ses.inferiors == [inf_a, inf_b]
ses.transfer_id_timeout = 1.1
assert ses.transfer_id_timeout == pytest.approx(1.1)
assert inf_a.transfer_id_timeout == pytest.approx(1.1)
assert inf_b.transfer_id_timeout == pytest.approx(1.1)
# Redundant reception from multiple interfaces concurrently.
for tx_x in (tx_a, tx_b):
assert await (
tx_x.send(
Transfer(
timestamp=Timestamp.now(),
priority=Priority.HIGH,
transfer_id=2,
fragmented_payload=[memoryview(b"def")],
),
asyncio.get_running_loop().time() + 1.0,
)
)
assert await (
tx_x.send(
Transfer(
timestamp=Timestamp.now(),
priority=Priority.HIGH,
transfer_id=3,
fragmented_payload=[memoryview(b"ghi")],
),
asyncio.get_running_loop().time() + 1.0,
)
)
tr = await (ses.receive(asyncio.get_running_loop().time() + 0.1))
assert isinstance(tr, RedundantTransferFrom)
assert ts.monotonic <= tr.timestamp.monotonic <= (asyncio.get_running_loop().time() + 1e-3)
assert tr.priority == Priority.HIGH
assert tr.transfer_id == 2
assert tr.fragmented_payload == [memoryview(b"def")]
tr = await (ses.receive(asyncio.get_running_loop().time() + 0.1))
assert isinstance(tr, RedundantTransferFrom)
assert ts.monotonic <= tr.timestamp.monotonic <= (asyncio.get_running_loop().time() + 1e-3)
assert tr.priority == Priority.HIGH
assert tr.transfer_id == 3
assert tr.fragmented_payload == [memoryview(b"ghi")]
assert None is await (ses.receive(asyncio.get_running_loop().time() + 2.0)) # Nothing left to read now.
# This one will be accepted despite a smaller transfer-ID because of the TID timeout.
assert await (
tx_a.send(
Transfer(
timestamp=Timestamp.now(),
priority=Priority.HIGH,
transfer_id=1,
fragmented_payload=[memoryview(b"acc")],
),
asyncio.get_running_loop().time() + 1.0,
)
)
tr = await (ses.receive(asyncio.get_running_loop().time() + 0.1))
assert isinstance(tr, RedundantTransferFrom)
assert ts.monotonic <= tr.timestamp.monotonic <= (asyncio.get_running_loop().time() + 1e-3)
assert tr.priority == Priority.HIGH
assert tr.transfer_id == 1
assert tr.fragmented_payload == [memoryview(b"acc")]
assert tr.inferior_session == inf_a
# Stats check.
assert ses.sample_statistics() == RedundantSessionStatistics(
transfers=3,
frames=inf_a.sample_statistics().frames + inf_b.sample_statistics().frames,
payload_bytes=9,
errors=0,
drops=0,
inferiors=[
inf_a.sample_statistics(),
inf_b.sample_statistics(),
],
)
ses.close()
tr_a.close()
tr_b.close()
inf_a.close()
inf_b.close()
await asyncio.sleep(2.0)
| 37.099707 | 119 | 0.649119 | 1,561 | 12,651 | 5.074311 | 0.137732 | 0.045449 | 0.068678 | 0.084838 | 0.777932 | 0.757228 | 0.741194 | 0.715566 | 0.663174 | 0.653453 | 0 | 0.016974 | 0.245593 | 12,651 | 340 | 120 | 37.208824 | 0.812972 | 0.117619 | 0 | 0.707143 | 0 | 0 | 0.013481 | 0 | 0 | 0 | 0 | 0 | 0.303571 | 1 | 0.003571 | false | 0 | 0.032143 | 0 | 0.035714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
43a4ea1c51c4cd342b72e02e559be2da7ea95e32 | 502 | py | Python | benchmarks/__init__.py | vishalbelsare/pyaugmecon | b9b6310b66007d1be7035f50a7e2691e7669f74e | [
"MIT"
] | 5 | 2021-05-29T20:18:06.000Z | 2022-01-20T08:56:26.000Z | benchmarks/__init__.py | vishalbelsare/pyaugmecon | b9b6310b66007d1be7035f50a7e2691e7669f74e | [
"MIT"
] | null | null | null | benchmarks/__init__.py | vishalbelsare/pyaugmecon | b9b6310b66007d1be7035f50a7e2691e7669f74e | [
"MIT"
] | 3 | 2021-08-20T19:27:28.000Z | 2022-01-21T13:42:49.000Z | from benchmarks.parallelization_default import parallelization_default
from benchmarks.parallelization_simple import parallelization_simple
from benchmarks.parallelization_no_redivide import parallelization_no_redivide
from benchmarks.parallelization_no_shared_flag import parallelization_no_shared_flag
from benchmarks.parallelization_cores import parallelization_cores
from benchmarks.augmecon_r import augmecon_r
from benchmarks.augmecon_2 import augmecon_2
from benchmarks.augmecon import augmecon
| 55.777778 | 84 | 0.920319 | 60 | 502 | 7.366667 | 0.233333 | 0.253394 | 0.328054 | 0.140271 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004255 | 0.063745 | 502 | 8 | 85 | 62.75 | 0.93617 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
60208e3cea0691997b24888616b6e26b62386439 | 121 | py | Python | backend/src/members/admin.py | kfriden/HouseOfTric | b32fca5dd2045b158f73530109616e6eca984158 | [
"MIT"
] | null | null | null | backend/src/members/admin.py | kfriden/HouseOfTric | b32fca5dd2045b158f73530109616e6eca984158 | [
"MIT"
] | null | null | null | backend/src/members/admin.py | kfriden/HouseOfTric | b32fca5dd2045b158f73530109616e6eca984158 | [
"MIT"
] | null | null | null | from django.contrib import admin
# Register your models here.
from .models import Members
admin.site.register(Members) | 17.285714 | 32 | 0.801653 | 17 | 121 | 5.705882 | 0.647059 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.132231 | 121 | 7 | 33 | 17.285714 | 0.92381 | 0.214876 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
6049e9ef160ee6d38ff88a84b0e5e4f081697e39 | 677 | py | Python | media_grouper/media.py | msakhnik/media_grouper | 8cfcc7fa73c4ad3193642d5696c6ff2c9fcd24c3 | [
"Apache-2.0"
] | null | null | null | media_grouper/media.py | msakhnik/media_grouper | 8cfcc7fa73c4ad3193642d5696c6ff2c9fcd24c3 | [
"Apache-2.0"
] | null | null | null | media_grouper/media.py | msakhnik/media_grouper | 8cfcc7fa73c4ad3193642d5696c6ff2c9fcd24c3 | [
"Apache-2.0"
] | null | null | null | """
Abstract class that describes media objects interfaces
"""
from typing import List, Dict
class Media:
SUPPORTED_FORMATS = tuple()
def __init__(self, path: str, detector=None) -> None:
self.path = path
self.detector = detector
def find_faces(self) -> List:
raise NotImplementedError
def get_exif_data(self) -> Dict:
raise NotImplementedError
def get_creation_date(self):
raise NotImplementedError
def extract_faces(self, faces) -> List:
raise NotImplementedError
def get_quality(self, faces) -> int:
raise NotImplementedError
def show_media(self):
raise NotImplementedError
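A minimal concrete subclass sketch of this interface follows. The names `ImageMedia`, `FakeDetector`, and the detector's `detect` method are illustrative assumptions, not part of this package; a compact copy of the `Media` base class is included so the sketch runs standalone:

```python
from typing import List

# Compact copy of the Media interface above, so this sketch is self-contained.
class Media:
    SUPPORTED_FORMATS = tuple()

    def __init__(self, path: str, detector=None) -> None:
        self.path = path
        self.detector = detector

    def find_faces(self) -> List:
        raise NotImplementedError

class ImageMedia(Media):
    # Hypothetical concrete media type for still images.
    SUPPORTED_FORMATS = (".jpg", ".jpeg", ".png")

    def find_faces(self) -> List:
        # Delegate to the injected detector; a real implementation
        # would load pixel data from self.path first.
        if self.detector is None:
            return []
        return self.detector.detect(self.path)

class FakeDetector:
    # Stand-in detector used only for this sketch.
    def detect(self, path):
        return [(0, 0, 10, 10)]

photo = ImageMedia("photo.jpg", FakeDetector())
faces = photo.find_faces()
```

Injecting the detector through the constructor keeps the media class testable without a real face-detection backend.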
| 21.83871 | 58 | 0.666174 | 74 | 677 | 5.918919 | 0.472973 | 0.328767 | 0.308219 | 0.205479 | 0.155251 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.254062 | 677 | 30 | 59 | 22.566667 | 0.867327 | 0.079764 | 0 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.388889 | false | 0 | 0.055556 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 5 |
606777b7ddb7f6b2e9cb91a9c5d5e56923567693 | 4,576 | py | Python | api/__init__.py | TomTsong/xsolla | 0ae4774df5b4461369bbdbb160345c9c5f8481f5 | [
"MIT"
] | null | null | null | api/__init__.py | TomTsong/xsolla | 0ae4774df5b4461369bbdbb160345c9c5f8481f5 | [
"MIT"
] | null | null | null | api/__init__.py | TomTsong/xsolla | 0ae4774df5b4461369bbdbb160345c9c5f8481f5 | [
"MIT"
] | null | null | null | import json
import requests
from xsolla.api.payment_ui import TokenRequest
class XsollaClient(object):
merchant_id = None
def create_payment_ui_token(self, args = []):
pass
def create_subscription_plan(self, args = []):
pass
def update_subscription_plan(self, args = []):
pass
def list_subscription_plans(self, args = []):
pass
def create_subscription_product(self, args = []):
pass
def update_subscription_product(self, args = []):
pass
def list_subscription_products(self, args = []):
pass
def update_subscription(self, args = []):
pass
def list_subscriptions(self, args = []):
pass
def list_subscription_payments(self, args = []):
pass
def list_user_subscription_payments(self, args = []):
pass
def list_subscription_currencies(self, args = []):
pass
def list_user_attributes(self, args = []):
pass
def get_user_attribute(self, args = []):
pass
def create_user_attribute(self, args = []):
pass
def create_virtual_item(self, args = []):
pass
def get_virtual_item(self, args = []):
pass
def list_virtual_items(self, args = []):
pass
def create_virtual_items_group(self, args = []):
pass
def get_virtual_items_group(self, args = []):
pass
def list_virtual_items_groups(self, args = []):
pass
def get_project_virtual_currency_settings(self, args = []):
pass
def get_wallet_user(self, args = []):
pass
def list_wallet_users(self, args = []):
pass
def list_wallet_user_operations(self, args = []):
pass
def recharge_wallet_user_balance(self, args = []):
pass
def list_wallet_user_virtual_items(self, args = []):
pass
def get_coupon(self, args = []):
pass
def redeem_coupon(self, args = []):
pass
def create_promotion(self, args = []):
pass
def get_promotion(self, args = []):
pass
def review_promotion(self, args = []):
pass
def list_promotions(self, args = []):
pass
def get_promotion_subject(self, args = []):
pass
def get_promotion_payment_systems(self, args = []):
pass
def get_promotion_periods(self, args = []):
pass
def get_promotion_rewards(self, args = []):
pass
def list_coupon_promotions(self, args = []):
pass
def create_coupon_promotion(self, args = []):
pass
def list_events(self, args = []):
pass
def search_payments_registry(self, args = []):
pass
def list_payments_registry(self, args = []):
pass
def list_transfers_registry(self, args = []):
pass
def list_reports_registry(self, args = []):
pass
def list_support_tickets(self, args = []):
pass
def list_support_ticket_comments(self, args = []):
pass
def create_game_delivery_entity(self, args = []):
pass
def get_game_delivery_entity(self, args = []):
pass
def list_game_delivery_entities(self, args = []):
pass
def list_game_delivery_drm_platforms(self, args = []):
pass
def get_storefront_virtual_currency(self, args = []):
pass
def get_storefront_virtual_groups(self, args = []):
pass
def get_storefront_virtual_items(self, args = []):
pass
def get_storefront_subscriptions(self, args = []):
pass
def get_storefront_bonus(self, args = []):
pass
def create_project(self, args = []):
pass
def get_project(self, args = []):
pass
def list_projects(self, args = []):
pass
def list_payment_accounts(self, args = []):
pass
def charge_payment_account(self, args = []):
pass
def delete_payment_account(self, args = []):
pass
def create_payment_ui_token_from_request(self, token_request: TokenRequest):
res = self.create_payment_ui_token(token_request.to_dict())
return res['token']
def create_common_payment_ui_token(self, project_id, user_id, sandbox_mode=False):
token_request = TokenRequest(project_id, user_id)
token_request.set_sandbox_mode(sandbox_mode)
return self.create_payment_ui_token_from_request(token_request)
| 22.541872 | 86 | 0.593969 | 516 | 4,576 | 4.947674 | 0.176357 | 0.191148 | 0.286722 | 0.358402 | 0.73678 | 0.607129 | 0.256953 | 0 | 0 | 0 | 0 | 0 | 0.277972 | 4,576 | 202 | 87 | 22.653465 | 0.7727 | 0 | 0 | 0.455224 | 0 | 0 | 0.001122 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.470149 | false | 0.455224 | 0.022388 | 0 | 0.522388 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 5 |
607c069e9a1f49e0181f6f2033d1b35ab49866b5 | 17 | py | Python | jython/Lib/test.py | eugeneai/LOD-table-annotator | adbbcee8c6591414d65ab2672cf37578636f2fc0 | [
"Apache-2.0"
] | null | null | null | jython/Lib/test.py | eugeneai/LOD-table-annotator | adbbcee8c6591414d65ab2672cf37578636f2fc0 | [
"Apache-2.0"
] | null | null | null | jython/Lib/test.py | eugeneai/LOD-table-annotator | adbbcee8c6591414d65ab2672cf37578636f2fc0 | [
"Apache-2.0"
] | null | null | null | print("test ok")
| 8.5 | 16 | 0.647059 | 3 | 17 | 3.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 17 | 1 | 17 | 17 | 0.733333 | 0 | 0 | 0 | 0 | 0 | 0.411765 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
60836dbc702c6f74139a39e7ce6a0f0322ad29dc | 2,886 | py | Python | nimbleclient/query.py | prachiruparelia-hpe/nimble-python-sdk | a3e99d89e647291caf7936300ae853d21d94d6e5 | [
"Apache-2.0"
] | 10 | 2020-03-10T20:06:36.000Z | 2022-02-16T17:55:21.000Z | nimbleclient/query.py | prachiruparelia-hpe/nimble-python-sdk | a3e99d89e647291caf7936300ae853d21d94d6e5 | [
"Apache-2.0"
] | 21 | 2020-04-07T16:00:11.000Z | 2021-05-07T16:09:53.000Z | nimbleclient/query.py | prachiruparelia-hpe/nimble-python-sdk | a3e99d89e647291caf7936300ae853d21d94d6e5 | [
"Apache-2.0"
] | 7 | 2020-03-11T03:45:31.000Z | 2020-09-14T18:06:03.000Z | #
# © Copyright 2020 Hewlett Packard Enterprise Development LP
#
def and_(*args):
"""Build AND filters."""
return {
'operator': 'and',
'criteria': list(args)
}
def or_(*args):
"""Build OR filters."""
return {
'operator': 'or',
'criteria': list(args)
}
class _Field:
"""Builds a simple filter criteria."""
def __init__(self, name, value=None):
self.name = name
def _get_value(self, value):
if value is True:
value = 'true'
elif value is False:
value = 'false'
elif value is None:
value = 'null'
else:
value = str(value)
return value
def __eq__(self, value):
return {"fieldName": self.name, "operator": "equals", "value": self._get_value(value)}
def __ne__(self, value):
return {"fieldName": self.name, "operator": "notEqual", "value": self._get_value(value)}
def __ge__(self, value):
return {"fieldName": self.name, "operator": "greaterOrEqual", "value": value}
def __gt__(self, value):
return {"fieldName": self.name, "operator": "greaterThan", "value": value}
def __lt__(self, value):
return {"fieldName": self.name, "operator": "lessThan", "value": value}
def __le__(self, value):
return {"fieldName": self.name, "operator": "lessOrEqual", "value": value}
def __in__(self, value):
return {"fieldName": self.name, "operator": "iContains", "value": str(value)}
def __contains__(self, value):
return {"fieldName": self.name, "operator": "iContains", "value": str(value)}
def eq(self, value, case_sensitive=True):
return {"fieldName": self.name, "operator": "equals" if case_sensitive else "iEquals", "value": value}
def ne(self, value, case_sensitive=True):
return {"fieldName": self.name, "operator": "notEqual" if case_sensitive else "iNotEqual", "value": value}
def contains(self, value, case_sensitive=True):
return {"fieldName": self.name, "operator": "contains" if case_sensitive else "iContains", "value": str(value)}
def in_set(self, values):
return {"fieldName": self.name, "operator": "inSet", "values": list(values)}
def not_in_set(self, values):
return {"fieldName": self.name, "operator": "notInSet", "values": list(values)}
def between(self, value1, value2):
return {"fieldName": self.name, "operator": "between", "value1": value1, "value2": value2}
def between_inclusive(self, value1, value2):
return {"fieldName": self.name, "operator": "betweenInclusive", "value1": value1, "value2": value2}
def regex(self, value):
return {"fieldName": self.name, "operator": "regexp", "value": str(value)}
def __str__(self):
return self.name
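A usage sketch of how the filter builders above compose. `and_` and `_Field` are re-declared minimally here so the example runs standalone; the field names ("name", "size") are hypothetical, not part of the SDK.

```python
def and_(*args):
    # Combine criteria dictionaries under a single AND operator.
    return {'operator': 'and', 'criteria': list(args)}

class _Field:
    def __init__(self, name):
        self.name = name

    def __eq__(self, value):
        # Overloaded == builds an "equals" criterion dictionary.
        return {"fieldName": self.name, "operator": "equals",
                "value": str(value)}

    def between(self, value1, value2):
        return {"fieldName": self.name, "operator": "between",
                "value1": value1, "value2": value2}

criteria = and_(_Field("name") == "db-vol",
                _Field("size").between(10, 100))
```

Because `__eq__` returns a dictionary instead of a bool, comparisons on `_Field` objects build criteria rather than test equality.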
# s3pipeline/strategies/error.py (sethvargo/scrapy-s3pipeline, MIT)
class UploadError(Exception):
pass
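A hypothetical sketch of how a strategy might surface `UploadError` after exhausting retries. `upload_with_retries` and its parameters are illustrative names, not part of the s3pipeline package.

```python
class UploadError(Exception):
    pass

def upload_with_retries(try_upload, max_retries=3):
    """Call try_upload until it succeeds, raising UploadError when
    all retries fail."""
    last_error = None
    for _ in range(max_retries):
        try:
            return try_upload()
        except IOError as err:
            # Remember the last transient failure for the final message.
            last_error = err
    raise UploadError("upload failed after %d attempts: %s"
                      % (max_retries, last_error))
```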
# core/python/kungfu/command/ext/__all__.py (yunnant/kungfu, Apache-2.0)
# PyInstaller matters: all commands must be imported explicitly.
from . import install
from . import uninstall
from . import list
# tests/test_mysqlx_crud.py (fanmolh/mysql-connector-python, Artistic-1.0-Perl)
# -*- coding: utf-8 -*-
# Copyright (c) 2016, 2020, Oracle and/or its affiliates. All rights reserved.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License, version 2.0, as
# published by the Free Software Foundation.
#
# This program is also distributed with certain software (including
# but not limited to OpenSSL) that is licensed under separate terms,
# as designated in a particular file or component or in included license
# documentation. The authors of MySQL hereby grant you an
# additional permission to link the program and your derivative works
# with the separately licensed software that they have included with
# MySQL.
#
# Without limiting anything contained in the foregoing, this file,
# which is part of MySQL Connector/Python, is also subject to the
# Universal FOSS Exception, version 1.0, a copy of which can be found at
# http://oss.oracle.com/licenses/universal-foss-exception.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
# See the GNU General Public License, version 2.0, for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
"""Unittests for mysqlx.crud
"""
import gc
import json
import logging
import unittest
import threading
import time
import tests
import mysqlx
LOGGER = logging.getLogger(tests.LOGGER_NAME)
_CREATE_TEST_TABLE_QUERY = "CREATE TABLE `{0}`.`{1}` (id INT)"
_INSERT_TEST_TABLE_QUERY = "INSERT INTO `{0}`.`{1}` VALUES ({2})"
_CREATE_TEST_VIEW_QUERY = ("CREATE VIEW `{0}`.`{1}` AS SELECT * "
"FROM `{2}`.`{3}`")
_CREATE_VIEW_QUERY = "CREATE VIEW `{0}`.`{1}` AS {2}"
_DROP_TABLE_QUERY = "DROP TABLE IF EXISTS `{0}`.`{1}`"
_DROP_VIEW_QUERY = "DROP VIEW IF EXISTS `{0}`.`{1}`"
_SHOW_INDEXES_QUERY = "SHOW INDEXES FROM `{0}`.`{1}` WHERE Key_name='{2}'"
_PREP_STMT_QUERY = (
"SELECT p.sql_text, p.count_execute "
"FROM performance_schema.prepared_statements_instances AS p "
"JOIN performance_schema.threads AS t ON p.owner_thread_id = t.thread_id "
"AND t.processlist_id = @@pseudo_thread_id")
def create_view(schema, view_name, defined_as):
query = _CREATE_VIEW_QUERY.format(schema.name, view_name, defined_as)
schema.get_session().sql(query).execute()
return schema.get_view(view_name, True)
def drop_table(schema, table_name):
query = _DROP_TABLE_QUERY.format(schema.name, table_name)
schema.get_session().sql(query).execute()
def drop_view(schema, view_name):
query = _DROP_VIEW_QUERY.format(schema.name, view_name)
schema.get_session().sql(query).execute()
@unittest.skipIf(tests.MYSQL_VERSION < (5, 7, 14), "XPlugin not compatible")
class MySQLxDbDocTests(tests.MySQLxTests):
def setUp(self):
self.connect_kwargs = tests.get_mysqlx_config()
self.schema_name = self.connect_kwargs["schema"]
self.collection_name = "collection_test"
try:
self.session = mysqlx.get_session(self.connect_kwargs)
except mysqlx.Error as err:
self.fail("{0}".format(err))
self.schema = self.session.get_schema(self.schema_name)
self.collection = self.schema.create_collection(self.collection_name)
def tearDown(self):
self.schema.drop_collection(self.collection_name)
self.session.close()
def test_dbdoc_creation(self):
doc_1 = mysqlx.DbDoc({"_id": "1", "name": "Fred", "age": 21})
self.collection.add(doc_1).execute()
self.assertEqual(1, self.collection.count())
# Don't allow _id assignment
self.assertRaises(mysqlx.ProgrammingError,
doc_1.__setitem__, "_id", "1")
doc_2 = {"_id": "2", "name": "Wilma", "age": 33}
self.collection.add(doc_2).execute()
self.assertEqual(2, self.collection.count())
# Copying a DbDoc
doc_3 = self.collection.find().execute().fetch_one()
doc_4 = doc_3.copy("new_id")
self.assertEqual(doc_4["_id"], "new_id")
self.assertNotEqual(doc_3, doc_4)
# Copying a DbDoc without _id
doc_5 = mysqlx.DbDoc({"name": "Fred", "age": 21})
doc_6 = doc_5.copy()
@unittest.skipIf(tests.MYSQL_VERSION < (5, 7, 14), "XPlugin not compatible")
class MySQLxSchemaTests(tests.MySQLxTests):
def setUp(self):
self.connect_kwargs = tests.get_mysqlx_config()
self.schema_name = self.connect_kwargs["schema"]
try:
self.session = mysqlx.get_session(self.connect_kwargs)
except mysqlx.Error as err:
self.fail("{0}".format(err))
self.schema = self.session.get_schema(self.schema_name)
def tearDown(self):
self.session.close()
def test_exists_in_database(self):
# Test with special chars
schema_name_1 = "myschema%"
schema_name_2 = "myschema_"
schema_1 = self.session.create_schema(schema_name_1)
self.assertTrue(schema_1.exists_in_database())
schema_2 = self.session.create_schema(schema_name_2)
self.assertTrue(schema_2.exists_in_database())
self.session.drop_schema(schema_name_1)
self.session.drop_schema(schema_name_2)
def test_get_session(self):
session = self.schema.get_session()
self.assertEqual(session, self.session)
self.assertTrue(self.schema.exists_in_database())
bad_schema = self.session.get_schema("boo")
self.assertFalse(bad_schema.exists_in_database())
def test_create_collection(self):
collection_name = "collection_test"
collection = self.schema.create_collection(collection_name, True)
self.assertEqual(collection.get_name(), collection_name)
self.assertTrue(collection.exists_in_database())
# reusing the existing collection should work
collection = self.schema.create_collection(collection_name, True)
self.assertEqual(collection.get_name(), collection_name)
self.assertTrue(collection.exists_in_database())
# should get exception if reuse is false and it already exists
self.assertRaises(mysqlx.ProgrammingError,
self.schema.create_collection, collection_name,
False)
# should get exception if using an invalid name
self.assertRaises(mysqlx.ProgrammingError,
self.schema.create_collection, "")
self.assertRaises(mysqlx.ProgrammingError,
self.schema.create_collection, None)
self.schema.drop_collection(collection_name)
def test_get_collection(self):
collection_name = "collection_test"
coll = self.schema.get_collection(collection_name)
self.assertFalse(coll.exists_in_database())
coll = self.schema.create_collection(collection_name)
self.assertTrue(coll.exists_in_database())
self.schema.drop_collection(collection_name)
def test_get_view(self):
table_name = "table_test"
view_name = "view_test"
view = self.schema.get_view(view_name)
self.assertFalse(view.exists_in_database())
self.session.sql(_CREATE_TEST_TABLE_QUERY.format(
self.schema_name, table_name)).execute()
defined_as = "SELECT id FROM {0}.{1}".format(self.schema_name,
table_name)
view = create_view(self.schema, view_name, defined_as)
self.assertTrue(view.exists_in_database())
        # raise a ProgrammingError if the view does not exist
self.assertRaises(mysqlx.ProgrammingError,
self.schema.get_view, "nonexistent",
check_existence=True)
drop_table(self.schema, table_name)
drop_view(self.schema, view_name)
def test_get_collections(self):
coll = self.schema.get_collections()
self.assertEqual(0, len(coll), "Should have returned 0 objects")
self.schema.create_collection("coll1")
self.schema.create_collection("coll2")
self.schema.create_collection("coll3")
coll = self.schema.get_collections()
self.assertEqual(3, len(coll), "Should have returned 3 objects")
self.assertEqual("coll1", coll[0].get_name())
self.assertEqual("coll2", coll[1].get_name())
self.assertEqual("coll3", coll[2].get_name())
self.schema.drop_collection("coll1")
self.schema.drop_collection("coll2")
self.schema.drop_collection("coll3")
def test_get_tables(self):
tables = self.schema.get_tables()
self.assertEqual(0, len(tables), "Should have returned 0 objects")
self.session.sql(_CREATE_TEST_TABLE_QUERY.format(
self.schema_name, "table1")).execute()
self.session.sql(_CREATE_TEST_TABLE_QUERY.format(
self.schema_name, "table2")).execute()
self.session.sql(_CREATE_TEST_TABLE_QUERY.format(
self.schema_name, "table3")).execute()
self.session.sql(_CREATE_TEST_VIEW_QUERY.format(
self.schema_name, "view1",
self.schema_name, "table1")).execute()
tables = self.schema.get_tables()
self.assertEqual(4, len(tables), "Should have returned 4 objects")
self.assertEqual("table1", tables[0].get_name())
self.assertEqual("table2", tables[1].get_name())
self.assertEqual("table3", tables[2].get_name())
self.assertEqual("view1", tables[3].get_name())
drop_table(self.schema, "table1")
drop_table(self.schema, "table2")
drop_table(self.schema, "table3")
drop_view(self.schema, "view1")
def test_drop_collection(self):
collection_name = "collection_test"
collection = self.schema.create_collection(collection_name)
self.schema.drop_collection(collection_name)
self.assertFalse(collection.exists_in_database())
        # dropping a non-existing collection should succeed silently
self.schema.drop_collection(collection_name)
@unittest.skipIf(tests.MYSQL_VERSION < (8, 0, 19),
"Schema validation unavailable.")
def test_schema_validation(self):
collection_name = "collection_test"
json_schema = {
"id": "http://json-schema.org/geo",
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Longitude and Latitude Values",
"description": "A geographical coordinate",
"required": ["latitude", "longitude"],
"type": "object",
"properties": {
"latitude": {
"type": "number",
"minimum": -90,
"maximum": 90
},
"longitude": {
"type": "number",
"minimum": -180,
"maximum": 180
}
},
}
json_schema_string = json.dumps(json_schema)
# Invalid validation options cases
invalid_options = [
"",
-1,
{},
{"foo": "bar"},
{"level": None, "schema": None},
{"level": "off", "schema": True},
{"level": "off", "schema": None},
{"level": "on", "schema": json_schema},
{"level": True, "schema": json_schema},
]
# Test Schema.create_schema() validation options
for validation in invalid_options:
self.assertRaises(mysqlx.ProgrammingError,
self.schema.create_collection,
collection_name,
validation=validation)
# Invalid option in validation
self.assertRaises(mysqlx.ProgrammingError,
self.schema.create_collection,
collection_name,
validation={"level": "strict",
"schema": json_schema,
"invalid": "option"})
# Test using JSON schema as dict
coll = self.schema.create_collection(
collection_name, validation={"level": "strict",
"schema": json_schema})
# The latitude and longitude should be numbers
self.assertRaises(mysqlx.OperationalError,
coll.add({"latitude": "41.14961",
"longitude": "-8.61099"}).execute)
coll.add({"latitude": 41.14961, "longitude": -8.61099}).execute()
self.assertEqual(1, coll.count())
self.schema.drop_collection(collection_name)
# Test JSON schema as string
coll = self.schema.create_collection(
collection_name, validation={"level": "strict",
"schema": json_schema_string})
self.assertRaises(mysqlx.OperationalError,
coll.add({"latitude": "41.14961",
"longitude": "-8.61099"}).execute)
coll.add({"latitude": 41.14961, "longitude": -8.61099}).execute()
self.assertEqual(1, coll.count())
# Test Schema.modify_collection() validation options
for validation in invalid_options:
self.assertRaises(mysqlx.ProgrammingError,
self.schema.modify_collection,
collection_name,
validation=validation)
# Test Schema.modify_collection()
coll.modify("TRUE").set("location", "Porto/Portugal").execute()
json_schema["properties"]["location"] = {"type": "string"}
json_schema["required"].append("location")
self.schema.modify_collection(
collection_name, validation={"level": "strict",
"schema": json_schema})
# The 'location' property is required
self.assertRaises(mysqlx.OperationalError,
coll.add({"latitude": 41.14961,
"longitude": -8.61099}).execute)
coll.add({"location": "Porto/Portugal",
"latitude": 41.14961,
"longitude": -8.61099}).execute()
self.assertEqual(2, coll.count())
        # Test using only 'level' option in Schema.modify_collection()
self.schema.modify_collection(
collection_name, validation={"level": "off"})
# Test using only 'schema' option in Schema.modify_collection()
self.schema.modify_collection(
collection_name, validation={"schema": json_schema})
        # Modifying the collection with the same schema again should succeed
self.schema.modify_collection(
collection_name, validation={"schema": json_schema})
# Drop the collection
self.schema.drop_collection(collection_name)
@unittest.skipIf(tests.MYSQL_VERSION >= (8, 0, 19),
"Schema validation is available.")
def test_unsupported_schema_validation(self):
collection_name = "collection_test"
json_schema = {
"id": "http://json-schema.org/geo",
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Longitude and Latitude Values",
"description": "A geographical coordinate",
"required": ["latitude", "longitude"],
"type": "object",
"properties": {
"latitude": {
"type": "number",
"minimum": -90,
"maximum": 90
},
"longitude": {
"type": "number",
"minimum": -180,
"maximum": 180
}
},
}
# Test creating a schema-less collection on server < 8.0.19
coll = self.schema.create_collection(collection_name)
self.schema.drop_collection(collection_name)
# Test creating a collection with validation on server < 8.0.19
self.assertRaises(mysqlx.NotSupportedError,
self.schema.create_collection,
collection_name,
validation={"level": "strict",
"schema": json_schema})
# Test modifying a collection with validation on server < 8.0.19
self.assertRaises(mysqlx.NotSupportedError,
self.schema.modify_collection,
collection_name,
validation={"level": "strict",
"schema": json_schema})
@unittest.skipIf(tests.MYSQL_VERSION < (5, 7, 14), "XPlugin not compatible")
class MySQLxCollectionTests(tests.MySQLxTests):
def setUp(self):
self.connect_kwargs = tests.get_mysqlx_config()
self.schema_name = self.connect_kwargs["schema"]
try:
self.session = mysqlx.get_session(self.connect_kwargs)
except mysqlx.Error as err:
self.fail("{0}".format(err))
self.schema = self.session.get_schema(self.schema_name)
def tearDown(self):
self.session.close()
def test_exists_in_database(self):
collection_name = "collection_test"
collection = self.schema.create_collection(collection_name)
self.assertTrue(collection.exists_in_database())
self.schema.drop_collection(collection_name)
@unittest.skipIf(tests.MYSQL_VERSION < (8, 0, 3), "Row locks unavailable.")
def test_lock_shared(self):
collection_name = "collection_test"
collection = self.schema.create_collection(collection_name)
collection.add({"_id": "1", "name": "Fred", "age": 21}).execute()
waiting = threading.Event()
lock_a = threading.Lock()
lock_b = threading.Lock()
errors = []
def client_a(lock_a, lock_b, waiting):
sess1 = mysqlx.get_session(self.connect_kwargs)
schema = sess1.get_schema(self.schema_name)
collection = schema.get_collection(collection_name)
sess1.start_transaction()
result = collection.find("name = 'Fred'").lock_shared().execute()
lock_a.release()
lock_b.acquire()
time.sleep(2)
if waiting.is_set():
errors.append("S-S lock test failure.")
sess1.commit()
return
sess1.commit()
sess1.start_transaction()
result = collection.find("name = 'Fred'").lock_shared().execute()
lock_b.release()
lock_a.acquire()
time.sleep(2)
if not waiting.is_set():
errors.append("S-X lock test failure.")
sess1.commit()
return
sess1.commit()
def client_b(lock_a, lock_b, waiting):
sess1 = mysqlx.get_session(self.connect_kwargs)
schema = sess1.get_schema(self.schema_name)
collection = schema.get_collection(collection_name)
lock_a.acquire()
sess1.start_transaction()
waiting.set()
lock_b.release()
result = collection.find("name = 'Fred'").lock_shared().execute()
waiting.clear()
sess1.commit()
lock_b.acquire()
sess1.start_transaction()
waiting.set()
lock_a.release()
result = collection.find("name = 'Fred'").lock_exclusive().execute()
waiting.clear()
sess1.commit()
client1 = threading.Thread(target=client_a,
args=(lock_a, lock_b, waiting,))
client2 = threading.Thread(target=client_b,
args=(lock_a, lock_b, waiting,))
lock_a.acquire()
lock_b.acquire()
client1.start()
client2.start()
client1.join()
client2.join()
self.schema.drop_collection(collection_name)
if errors:
self.fail(errors[0])
@unittest.skipIf(tests.MYSQL_VERSION < (8, 0, 3), "Row locks unavailable.")
def test_lock_exclusive(self):
collection_name = "collection_test"
collection = self.schema.create_collection(collection_name)
collection.add({"_id": "1", "name": "Fred", "age": 21}).execute()
event = threading.Event()
pause = threading.Event()
locking = threading.Event()
waiting = threading.Event()
errors = []
def client_a(pause, locking, waiting):
sess1 = mysqlx.get_session(self.connect_kwargs)
schema = sess1.get_schema(self.schema_name)
collection = schema.get_collection(collection_name)
sess1.start_transaction()
result = collection.find("name = 'Fred'").lock_exclusive().execute()
locking.set()
time.sleep(2)
locking.clear()
if not waiting.is_set():
sess1.commit()
errors.append("X-X lock test failure.")
return
sess1.commit()
pause.set()
sess1.start_transaction()
result = collection.find("name = 'Fred'").lock_exclusive().execute()
locking.set()
time.sleep(2)
locking.clear()
if not waiting.is_set():
errors.append("X-S lock test failure.")
sess1.commit()
return
sess1.commit()
def client_b(pause, locking, waiting):
sess1 = mysqlx.get_session(self.connect_kwargs)
schema = sess1.get_schema(self.schema_name)
collection = schema.get_collection(collection_name)
if not locking.wait(2):
return
sess1.start_transaction()
waiting.set()
result = collection.find("name = 'Fred'").lock_exclusive().execute()
waiting.clear()
sess1.commit()
if not pause.wait(2):
return
if not locking.wait(2):
return
sess1.start_transaction()
waiting.set()
result = collection.find("name = 'Fred'").lock_shared().execute()
waiting.clear()
sess1.commit()
client1 = threading.Thread(target=client_a,
args=(pause, locking, waiting,))
client2 = threading.Thread(target=client_b,
args=(pause, locking, waiting,))
client1.start()
client2.start()
client1.join()
client2.join()
self.schema.drop_collection(collection_name)
if errors:
self.fail(errors[0])
@unittest.skipIf(tests.MYSQL_VERSION > (8, 0, 4),
"id field creation on server must not be available.")
def test_add_old_versions(self):
"""Tests error message when adding documents without an ids on old
servers"""
collection_name = "collection_test"
collection = self.schema.create_collection(collection_name)
coll_add = collection.add({"name": "Fred", "age": 21})
self.assertRaises(mysqlx.errors.OperationalError, coll_add.execute)
        # Providing _id for each document must allow their insertion
persons = [{"_id": "12345678901234567890123456789012",
"name": "Dyno dog dinosaur", "age": 33},
{"_id": "12345678901234567890123456789013",
"name": "Puss saber-toothed cat", "age": 42}]
result = collection.add(persons).execute()
self.assertEqual(2, result.get_affected_items_count(),
"documents not inserted")
# Empty list is expected here since the server did not generate the ids
self.assertEqual([], result.get_generated_ids(),
"_id from user was overwritten")
self.schema.drop_collection(collection_name)
def _test_lock_contention(self, lock_type_1, lock_type_2, lock_contention):
collection_name = "collection_test"
collection = self.schema.create_collection(collection_name)
collection.add({"name": "Fred", "age": 21}).execute()
locking = threading.Event()
waiting = threading.Event()
errors = []
def thread_a(locking, waiting):
session = mysqlx.get_session(self.connect_kwargs)
schema = session.get_schema(self.schema_name)
collection = schema.get_collection(collection_name)
session.start_transaction()
result = collection.find("name = 'Fred'")
if lock_type_1 == "S":
result.lock_shared().execute()
else:
result.lock_exclusive().execute()
locking.set()
time.sleep(2)
locking.clear()
if not waiting.is_set():
errors.append("{0}-{0} lock test failure."
"".format(lock_type_1, lock_type_2))
session.commit()
return
session.commit()
def thread_b(locking, waiting):
session = mysqlx.get_session(self.connect_kwargs)
schema = session.get_schema(self.schema_name)
collection = schema.get_collection(collection_name)
if not locking.wait(2):
errors.append("{0}-{0} lock test failure."
"".format(lock_type_1, lock_type_2))
session.commit()
return
session.start_transaction()
if lock_type_2 == "S":
result = collection.find("name = 'Fred'") \
.lock_shared(lock_contention)
else:
result = collection.find("name = 'Fred'") \
.lock_exclusive(lock_contention)
if lock_contention == mysqlx.LockContention.NOWAIT \
and (lock_type_1 == "X" or lock_type_2 == "X"):
self.assertRaises(mysqlx.OperationalError, result.execute)
session.rollback()
waiting.set()
time.sleep(4)
session.start_transaction()
result.execute()
session.commit()
waiting.clear()
client1 = threading.Thread(target=thread_a, args=(locking, waiting,))
client2 = threading.Thread(target=thread_b, args=(locking, waiting,))
client1.start()
client2.start()
client1.join()
client2.join()
self.schema.drop_collection(collection_name)
@unittest.skipIf(tests.MYSQL_VERSION < (8, 0, 5),
"Lock contention unavailable.")
def test_lock_shared_with_nowait(self):
self._test_lock_contention("S", "S", mysqlx.LockContention.NOWAIT)
self._test_lock_contention("S", "X", mysqlx.LockContention.NOWAIT)
@unittest.skipIf(tests.MYSQL_VERSION < (8, 0, 5),
"Lock contention unavailable.")
def test_lock_exclusive_with_nowait(self):
self._test_lock_contention("X", "X", mysqlx.LockContention.NOWAIT)
self._test_lock_contention("X", "S", mysqlx.LockContention.NOWAIT)
@unittest.skipIf(tests.MYSQL_VERSION < (8, 0, 5),
"Lock contention unavailable.")
def test_lock_shared_with_skip_locked(self):
self._test_lock_contention("S", "S", mysqlx.LockContention.SKIP_LOCKED)
self._test_lock_contention("S", "X", mysqlx.LockContention.SKIP_LOCKED)
@unittest.skipIf(tests.MYSQL_VERSION < (8, 0, 5),
"Lock contention unavailable.")
    def test_lock_exclusive_with_skip_locked(self):
self._test_lock_contention("X", "X", mysqlx.LockContention.SKIP_LOCKED)
self._test_lock_contention("X", "S", mysqlx.LockContention.SKIP_LOCKED)
def test_add(self):
collection_name = "collection_test"
collection = self.schema.create_collection(collection_name)
result = collection.add(
{"_id": 1, "name": "Fred", "age": 21}
).execute()
self.assertEqual(result.get_affected_items_count(), 1)
self.assertEqual(1, collection.count())
# Adding multiple dictionaries at once
result = collection.add(
{"_id": 2, "name": "Wilma", "age": 33},
{"_id": 3, "name": "Barney", "age": 42}
).execute()
self.assertEqual(result.get_affected_items_count(), 2)
self.assertEqual(3, collection.count())
# Adding JSON strings
result = collection.add(
'{"_id": 4, "name": "Bambam", "age": 8}',
'{"_id": 5, "name": "Pebbles", "age": 8}'
).execute()
self.assertEqual(result.get_affected_items_count(), 2)
self.assertEqual(5, collection.count())
        # All strings are treated as literals; for expressions, the
        # mysqlx.expr() function must be used
collection.add(
{"_id": "6", "status": "Approved",
"email": "Fred (fred@example.com)"},
{"_id": "7", "status": "Rejected\n(ORA:Pending)",
"email": "Barney (barney@example.com)"},
).execute()
result = collection.find().execute()
self.assertEqual(7, len(result.fetch_all()))
# test unicode
result = collection.add({"_id": "8", "age": 1, "name": u"😀"}).execute()
self.assertEqual(result.get_affected_items_count(), 1)
self.assertEqual(8, collection.count())
if tests.MYSQL_VERSION > (8, 0, 4):
            # The following tests are only possible on servers with id generation.
# Ensure _id is created at the server side
persons = [{"name": "Wilma", "age": 33},
{"name": "Barney", "age": 42}]
result = collection.add(persons).execute()
for person in persons:
# Ensure no '_id' field was added locally.
if tests.PY2:
self.assertFalse(person.has_key("_id"))
else:
self.assertFalse("_id" in person)
self.assertEqual(2, result.get_affected_items_count(),
"Not all documents were inserted")
            # Allow user-supplied _id values alongside server-side generation
persons = [{"_id": "12345678901234567890123456789012",
"name": "Dyno", "desc": "dog dinosaur"},
{"_id": "12345678901234567890123456789013",
"name": "Puss", "desc": "saber-toothed cat"},
# following doc does not have id field and must be
# generated at the server side
{"name": "hoppy", "desc": "hoppy kangaroo/dinosaur"}]
result = collection.add(persons).execute()
self.assertEqual(3, result.get_affected_items_count(),
"Not all documents were inserted")
# Only 1 `_id` was generated, 2 were given by us.
            self.assertEqual(1, len(result.get_generated_ids()),
                             "Unexpected number of _ids were generated.")
result = collection.find().execute()
for row in result.fetch_all():
self.assertTrue(hasattr(row, "_id"),
"`_id` field could not be found in doc")
self.schema.drop_collection(collection_name)
@unittest.skipIf(tests.MYSQL_VERSION < (8, 0, 2),
"CONT_IN operator unavailable")
def test_cont_in_operator(self):
collection_name = "{0}.test".format(self.schema_name)
collection = self.schema.create_collection(collection_name)
collection.add({
"_id": "a6f4b93e1a264a108393524f29546a8c",
"title": "AFRICAN EGG",
"description": "A Fast-Paced Documentary of a Pastry Chef And a "
"Dentist who must Pursue a Forensic Psychologist in "
"The Gulf of Mexico",
"releaseyear": 2006,
"language": "English",
"duration": 130,
"rating": "G",
"genre": "Science fiction",
"actors": [{
"name": "MILLA PECK",
"country": "Mexico",
"birthdate": "12 Jan 1984"
}, {
"name": "VAL BOLGER",
"country": "Botswana",
"birthdate": "26 Jul 1975"
}, {
"name": "SCARLETT BENING",
"country": "Syria",
"birthdate": "16 Mar 1978"
}],
"additionalinfo": {
"director": "Sharice Legaspi",
"writers": ["Rusty Couturier", "Angelic Orduno", "Carin Postell"],
"productioncompanies": ["Qvodrill", "Indigoholdings"]
}
}).execute()
if tests.MYSQL_VERSION >= (8, 0, 17):
# To comply with the SQL standard, IN returns NULL not only if the
# expression on the left hand side is NULL, but also if no match
# is found in the list and one of the expressions in the list is NULL.
not_found_without_null = False
not_found_with_null = None
            # The match result for the value false changed in 8.0.17
value_false_match_everything = False
else:
not_found_without_null = None
not_found_with_null = True
value_false_match_everything = True
test_cases = [
("(1+5) in (1, 2, 3, 4, 5)", False),
("(1>5) in (true, false)", True),
("('a'>'b') in (true, false)", True),
("(1>5) in [true, false]", None),
("(1+5) in [1, 2, 3, 4, 5]", None),
("('a'>'b') in [true, false]", None),
("true IN [(1>5), !(false), (true || false), (false && true)]",
True),
("true IN ((1>5), !(false), (true || false), (false && true))",
True),
("{ 'name' : 'MILLA PECK' } IN actors", True),
("{\"field\":true} IN (\"mystring\", 124, myvar, othervar.jsonobj)",
not_found_without_null),
("actor.name IN ['a name', null, (1<5-4), myvar.jsonobj.name]",
None),
("!false && true IN [true]", True),
("1-5/2*2 > 3-2/1*2 IN [true, false]", None),
("true IN [1-5/2*2 > 3-2/1*2]", False),
("'African Egg' IN ('African Egg', 1, true, NULL, [0,1,2], "
"{ 'title' : 'Atomic Firefighter' })", True),
("1 IN ('African Egg', 1, true, NULL, [0,1,2], "
"{ 'title' : 'Atomic Firefighter' })", True),
("true IN ('African Egg', 1, false, NULL, [0,1,2], "
"{ 'title' : 'Atomic Firefighter' })", not_found_with_null),
("false IN ('African Egg', 1, true, NULL, [0,1,2], "
"{ 'title' : 'Atomic Firefighter' })", not_found_with_null),
("false IN ('African Egg', 1, true, 'No null', [0,1,2], "
"{ 'title' : 'Atomic Firefighter' })", value_false_match_everything),
("[0,1,2] IN ('African Egg', 1, true, NULL, [0,1,2], "
"{ 'title' : 'Atomic Firefighter' })", True),
("{ 'title' : 'Atomic Firefighter' } IN ('African Egg', 1, true, "
"NULL, [0,1,2], { 'title' : 'Atomic Firefighter' })", True),
("title IN ('African Egg', 'The Witcher', 'Jurassic Perk')", False),
("releaseyear IN (2006, 2010, 2017)", True),
("'African Egg' in movietitle", None),
("0 NOT IN [1,2,3]", True),
("1 NOT IN [1,2,3]", False),
("'' IN title", False),
("title IN ('', ' ')", False),
("title IN ['', ' ']", False),
("[\"Rusty Couturier\", \"Angelic Orduno\", \"Carin Postell\"] IN "
"additionalinfo.writers", True),
("{ \"name\" : \"MILLA PECK\", \"country\" : \"Mexico\", "
"\"birthdate\": \"12 Jan 1984\"} IN actors", True),
("releaseyear IN [2006, 2007, 2008]", True),
("true IN title", False),
("false IN genre", False),
("'Sharice Legaspi' IN additionalinfo.director", True),
("'Mexico' IN actors[*].country", True),
("'Angelic Orduno' IN additionalinfo.writers", True),
]
for test in test_cases:
try:
result = collection.find() \
.fields("{0} as res".format(test[0])) \
.execute().fetch_one()
except Exception:
self.assertEqual(None, test[1])
else:
self.assertEqual(result['res'], test[1], "For test case {} "
"result was {}".format(test, result))
self.schema.drop_collection(collection_name)
@unittest.skipIf(tests.MYSQL_VERSION < (8, 0, 17),
"OVERLAPS operator unavailable")
def test_overlaps_operator(self):
collection_name = "{0}.test".format(self.schema_name)
collection = self.schema.create_collection(collection_name)
collection.add({
"_id": "a6f4b93e1a264a108393524f29546a8c",
"title": "AFRICAN EGG",
"description": "A Fast-Paced Documentary of a Pastry Chef And a "
"Dentist who must Pursue a Forensic Psychologist in "
"The Gulf of Mexico",
"releaseyear": 2006,
"language": "English",
"duration": 130,
"rating": "G",
"genre": "Science fiction",
"actors": [{
"name": "MILLA PECK",
"country": "Mexico",
"birthdate": "12 Jan 1984"
}, {
"name": "VAL BOLGER",
"country": "Botswana",
"birthdate": "26 Jul 1975"
}, {
"name": "SCARLETT BENING",
"country": "Syria",
"birthdate": "16 Mar 1978"
}],
"additionalinfo": {
"director": "Sharice Legaspi",
"writers": ["Rusty Couturier", "Angelic Orduno", "Carin Postell"],
"productioncompanies": ["Qvodrill", "Indigoholdings"]
}
}).execute()
test_cases = [
("(1+5) overlaps (1, 2, 3, 4, 5)", None),
("(1>5) overlaps (true, false)", None),
("('a'>'b') overlaps (true, false)", None),
("(1>5) overlaps [true, false]", None),
("[1>5] overlaps [true, false]", True),
("[(1+5)] overlaps [1, 2, 3, 4, 5]", False),
("[(1+4)] overlaps [1, 2, 3, 4, 5]", True),
("('a'>'b') overlaps [true, false]", None),
("true overlaps [(1>5), !(false), (true || false), (false && true)]",
True),
("true overlaps ((1>5), !(false), (true || false), (false && true))",
None),
("{ 'name' : 'MILLA PECK' } overlaps actors", False),
("{\"field\":true} overlaps (\"mystring\", 124, myvar, othervar.jsonobj)",
None),
("actor.name overlaps ['a name', null, (1<5-4), myvar.jsonobj.name]",
None),
("!false && true overlaps [true]", True),
("1-5/2*2 > 3-2/1*2 overlaps [true, false]", None),
("true IN [1-5/2*2 > 3-2/1*2]", False),
("'African Egg' overlaps ('African Egg', 1, true, NULL, [0,1,2], "
"{ 'title' : 'Atomic Firefighter' })", None),
("1 overlaps ('African Egg', 1, true, NULL, [0,1,2], "
"{ 'title' : 'Atomic Firefighter' })", None),
("true overlaps ('African Egg', 1, false, NULL, [0,1,2], "
"{ 'title' : 'Atomic Firefighter' })", None),
("false overlaps ('African Egg', 1, true, NULL, [0,1,2], "
"{ 'title' : 'Atomic Firefighter' })", None),
("false overlaps ('African Egg', 1, true, 'No null', [0,1,2], "
"{ 'title' : 'Atomic Firefighter' })", None),
("[0,1,2] overlaps ('African Egg', 1, true, NULL, [0,1,2], "
"{ 'title' : 'Atomic Firefighter' })", None),
("{ 'title' : 'Atomic Firefighter' } overlaps ('African Egg', 1, true, "
"NULL, [0,1,2], { 'title' : 'Atomic Firefighter' })", None),
("title overlaps ('African Egg', 'The Witcher', 'Jurassic Perk')", None),
("releaseyear overlaps (2006, 2010, 2017)", None),
("'African overlaps' in movietitle", None),
("0 NOT overlaps [1,2,3]", True),
("1 NOT overlaps [1,2,3]", False),
("[0] NOT overlaps [1,2,3]", True),
("[1] NOT overlaps [1,2,3]", False),
("[!false && true] OVERLAPS [true]", True),
("[!false AND true] OVERLAPS [true]", True),
("[!false & true] OVERLAPS [true]", False),
("'' IN title", False),
("title overlaps ('', ' ')", None),
("title overlaps ['', ' ']", False),
("[\"Rusty Couturier\", \"Angelic Orduno\", \"Carin Postell\"] IN "
"additionalinfo.writers", True),
("{ \"name\" : \"MILLA PECK\", \"country\" : \"Mexico\", "
"\"birthdate\": \"12 Jan 1984\"} IN actors", True),
("releaseyear IN [2006, 2007, 2008]", True),
("true overlaps title", False),
("false overlaps genre", False),
("'Sharice Legaspi' overlaps additionalinfo.director", True),
("'Mexico' overlaps actors[*].country", True),
("'Angelic Orduno' overlaps additionalinfo.writers", True),
("[([1,2] overlaps [1,2])] overlaps [false] invalid [true]", None),
("[([1] overlaps [2])] overlaps [3] invalid [true] as res", None),
("[] []", None),
("[] TRUE as res", None)
]
for test in test_cases:
try:
result = collection.find() \
.fields("{0} as res".format(test[0])) \
.execute().fetch_one()
except Exception:
self.assertEqual(None, test[1], "For test case {} an "
"exception was not expected.".format(test))
else:
self.assertEqual(result['res'], test[1], "For test case {} "
"result was {}".format(test, result))
self.schema.drop_collection(collection_name)
def test_ilri_expressions(self):
collection_name = "{0}.test".format(self.schema_name)
collection = self.schema.create_collection(collection_name)
collection.add(
{"_id": "1", "name": "Fred", "age": 21},
{"_id": "2", "name": "Barney", "age": 28},
{"_id": "3", "name": "Wilma", "age": 42},
{"_id": "4", "name": "Betty", "age": 67},
).execute()
# is
result = collection.find("$.key is null").execute()
self.assertEqual(4, len(result.fetch_all()))
# is_not
result = collection.find("$.key is not null").execute()
self.assertEqual(0, len(result.fetch_all()))
# regexp
result = collection.find("$.name regexp 'F.*'").execute()
self.assertEqual(1, len(result.fetch_all()))
# not_regexp
result = collection.find("$.name not regexp 'F.*'").execute()
self.assertEqual(3, len(result.fetch_all()))
# like
result = collection.find("$.name like 'F%'").execute()
self.assertEqual(1, len(result.fetch_all()))
# not_like
result = collection.find("$.name not like 'F%'").execute()
self.assertEqual(3, len(result.fetch_all()))
# in
result = collection.find("$.age in (21, 28)").execute()
self.assertEqual(2, len(result.fetch_all()))
# not_in
result = collection.find("$.age not in (21, 28)").execute()
self.assertEqual(2, len(result.fetch_all()))
# between
result = collection.find("$.age between 20 and 29").execute()
self.assertEqual(2, len(result.fetch_all()))
# not_between
result = collection.find("$.age not between 20 and 29").execute()
self.assertEqual(2, len(result.fetch_all()))
self.schema.drop_collection(collection_name)
def test_unary_operators(self):
collection_name = "{0}.test".format(self.schema_name)
collection = self.schema.create_collection(collection_name)
collection.add(
{"_id": "1", "name": "Fred", "age": 21},
{"_id": "2", "name": "Barney", "age": 28},
{"_id": "3", "name": "Wilma", "age": 42},
{"_id": "4", "name": "Betty", "age": 67},
).execute()
# sign_plus
result = collection.find("$.age == 21") \
.fields("+($.age * -1) as test").execute()
self.assertEqual(-21, result.fetch_all()[0]["test"])
# sign_minus
result = collection.find("$.age == 21") \
.fields("-$.age as test").execute()
self.assertEqual(-21, result.fetch_all()[0]["test"])
# !
result = collection.find("$.age == 21") \
.fields("! ($.age == 21) as test").execute()
self.assertFalse(result.fetch_all()[0]["test"])
# not
result = collection.find("$.age == 21") \
.fields("not ($.age == 21) as test").execute()
self.assertFalse(result.fetch_all()[0]["test"])
# ~
result = collection.find("$.age == 21") \
.fields("5 & ~1 as test").execute()
self.assertEqual(4, result.fetch_all()[0]["test"])
self.schema.drop_collection(collection_name)
def test_interval_expressions(self):
collection_name = "{0}.test".format(self.schema_name)
collection = self.schema.create_collection(collection_name)
collection.add({"_id": "1", "adate": "2000-01-01",
"adatetime": "2000-01-01 12:00:01"}).execute()
result = collection.find().fields("$.adatetime + interval 1000000 "
"microsecond = '2000-01-01 12:00:02'"
" as test").execute()
self.assertTrue(result.fetch_all()[0]["test"])
result = collection.find().fields("$.adatetime + interval 1 second = "
"'2000-01-01 12:00:02' "
"as test").execute()
self.assertTrue(result.fetch_all()[0]["test"])
result = collection.find().fields("$.adatetime + interval 2 minute = "
"'2000-01-01 12:02:01' "
"as test").execute()
self.assertTrue(result.fetch_all()[0]["test"])
result = collection.find().fields("$.adatetime + interval 4 hour = "
"'2000-01-01 16:00:01' "
"as test").execute()
self.assertTrue(result.fetch_all()[0]["test"])
result = collection.find().fields("$.adate + interval 10 day = "
"'2000-01-11' as test").execute()
self.assertTrue(result.fetch_all()[0]["test"])
result = collection.find().fields("$.adate + interval 2 week = "
"'2000-01-15' as test").execute()
self.assertTrue(result.fetch_all()[0]["test"])
result = collection.find().fields("$.adate - interval 2 month = "
"'1999-11-01' as test").execute()
self.assertTrue(result.fetch_all()[0]["test"])
result = collection.find().fields("$.adate + interval 2 quarter = "
"'2000-07-01' as test").execute()
self.assertTrue(result.fetch_all()[0]["test"])
result = collection.find().fields("$.adate - interval 1 year = "
"'1999-01-01' as test").execute()
self.assertTrue(result.fetch_all()[0]["test"])
result = collection.find().fields("$.adatetime + interval '3.1000000' "
"second_microsecond = '2000-01-01 "
"12:00:05' as test").execute()
self.assertTrue(result.fetch_all()[0]["test"])
result = collection.find().fields("$.adatetime + interval '1:1.1' "
"minute_microsecond = "
"'2000-01-01 12:01:02.100000' "
"as test").execute()
self.assertTrue(result.fetch_all()[0]["test"])
result = collection.find().fields("$.adatetime + interval "
"'1:1' minute_second "
"= '2000-01-01 12:01:02'"
" as test").execute()
self.assertTrue(result.fetch_all()[0]["test"])
result = collection.find().fields("$.adatetime + interval '1:1:1.1' "
"hour_microsecond = "
"'2000-01-01 13:01:02.100000'"
" as test").execute()
self.assertTrue(result.fetch_all()[0]["test"])
result = collection.find().fields("$.adatetime + interval '1:1:1' "
"hour_second = '2000-01-01 13:01:02'"
" as test").execute()
self.assertTrue(result.fetch_all()[0]["test"])
result = collection.find().fields("$.adatetime + interval '1:1' "
"hour_minute = '2000-01-01 13:01:01'"
" as test").execute()
self.assertTrue(result.fetch_all()[0]["test"])
result = collection.find().fields("$.adatetime + interval "
"'2 3:4:5.600' day_microsecond = "
"'2000-01-03 15:04:06.600000'"
" as test").execute()
self.assertTrue(result.fetch_all()[0]["test"])
result = collection.find().fields("$.adatetime + interval '2 3:4:5' "
"day_second = '2000-01-03 15:04:06' "
"as test").execute()
self.assertTrue(result.fetch_all()[0]["test"])
result = collection.find().fields("$.adatetime + interval '2 3:4' "
"day_minute = '2000-01-03 15:04:01' "
"as test").execute()
self.assertTrue(result.fetch_all()[0]["test"])
result = collection.find().fields("$.adatetime + interval '2 3' "
"day_hour = '2000-01-03 15:00:01' "
"as test").execute()
self.assertTrue(result.fetch_all()[0]["test"])
result = collection.find().fields("$.adate + interval '2-3' "
"year_month = "
"'2002-04-01' as test").execute()
self.assertTrue(result.fetch_all()[0]["test"])
self.schema.drop_collection(collection_name)
def test_bitwise_operators(self):
collection_name = "{0}.test".format(self.schema_name)
collection = self.schema.create_collection(collection_name)
result = collection.add(
{"_id": "1", "name": "Fred", "age": 21},
{"_id": "2", "name": "Barney", "age": 28},
{"_id": "3", "name": "Wilma", "age": 42},
{"_id": "4", "name": "Betty", "age": 67},
).execute()
# &
result = collection.find("$.age = 21") \
.fields("$.age & 1 as test").execute()
self.assertEqual(1, result.fetch_all()[0]["test"])
# |
result = collection.find("$.age == 21") \
.fields("0 | 1 as test").execute()
self.assertEqual(1, result.fetch_all()[0]["test"])
# ^
result = collection.find("$.age = 21") \
.fields("$.age ^ 1 as test").execute()
self.assertEqual(20, result.fetch_all()[0]["test"])
# <<
result = collection.find("$.age == 21") \
.fields("1 << 2 as test").execute()
self.assertEqual(4, result.fetch_all()[0]["test"])
# >>
result = collection.find("$.age == 21") \
.fields("4 >> 2 as test").execute()
self.assertEqual(1, result.fetch_all()[0]["test"])
self.schema.drop_collection(collection_name)
def test_numeric_operators(self):
collection_name = "{0}.test".format(self.schema_name)
collection = self.schema.create_collection(collection_name)
collection.add(
{"_id": "1", "name": "Fred", "age": 21},
{"_id": "2", "name": "Barney", "age": 28},
{"_id": "3", "name": "Wilma", "age": 42},
{"_id": "4", "name": "Betty", "age": 67},
).execute()
# =
result = collection.find("$.age = 21").execute()
self.assertEqual(1, len(result.fetch_all()))
# ==
result = collection.find("$.age == 21").execute()
self.assertEqual(1, len(result.fetch_all()))
# &&
result = collection.find("$.age == 21 && $.name == 'Fred'").execute()
self.assertEqual(1, len(result.fetch_all()))
# and
result = collection.find("$.age == 21 and $.name == 'Fred'").execute()
self.assertEqual(1, len(result.fetch_all()))
# or
result = collection.find("$.age == 21 or $.age == 42").execute()
self.assertEqual(2, len(result.fetch_all()))
# ||
result = collection.find("$.age == 21 || $.age == 42").execute()
self.assertEqual(2, len(result.fetch_all()))
# xor
result = collection.find().fields("$.age xor 1 as test").execute()
docs = result.fetch_all()
self.assertTrue(all([i["test"] is False for i in docs]))
# !=
result = collection.find("$.age != 21").execute()
self.assertEqual(3, len(result.fetch_all()))
# <>
result = collection.find("$.age <> 21").execute()
self.assertEqual(3, len(result.fetch_all()))
# >
result = collection.find("$.age > 28").execute()
self.assertEqual(2, len(result.fetch_all()))
# >=
result = collection.find("$.age >= 28").execute()
self.assertEqual(3, len(result.fetch_all()))
# <
result = collection.find("$.age < 28").execute()
self.assertEqual(1, len(result.fetch_all()))
# <=
result = collection.find("$.age <= 28").execute()
self.assertEqual(2, len(result.fetch_all()))
# +
result = collection.find("$.age == 21") \
.fields("$.age + 10 as test").execute()
self.assertEqual(31, result.fetch_all()[0]["test"])
# -
result = collection.find("$.age == 21") \
.fields("$.age - 10 as test").execute()
self.assertEqual(11, result.fetch_all()[0]["test"])
# *
result = collection.find("$.age == 21") \
.fields("$.age * 10 as test").execute()
self.assertEqual(210, result.fetch_all()[0]["test"])
# /
result = collection.find("$.age == 21") \
.fields("$.age / 7 as test").execute()
self.assertEqual(3, result.fetch_all()[0]["test"])
# div
result = collection.find("$.age == 21") \
.fields("$.age div 7 as test").execute()
self.assertEqual(3, result.fetch_all()[0]["test"])
# %
result = collection.find("$.age == 21") \
.fields("$.age % 7 as test").execute()
self.assertEqual(0, result.fetch_all()[0]["test"])
self.schema.drop_collection(collection_name)
@unittest.skipIf(tests.MYSQL_VERSION < (8, 0, 5),
"_id field generation on the server side is required.")
def test_get_generated_ids(self):
collection_name = "collection_test"
collection = self.schema.create_collection(collection_name)
result = collection.add({"name": "Fred", "age": 21}).execute()
self.assertTrue(result.get_generated_ids() is not None)
result = collection.add(
{"name": "Fred", "age": 21},
{"name": "Barney", "age": 45}).execute()
self.assertEqual(2, len(result.get_generated_ids()))
self.schema.drop_collection(collection_name)
def test_remove(self):
collection_name = "collection_test"
collection = self.schema.create_collection(collection_name)
collection.add(
{"_id": "1", "name": "Fred", "age": 21},
{"_id": "2", "name": "Barney", "age": 45},
{"_id": "3", "name": "Wilma", "age": 42}
).execute()
self.assertEqual(3, collection.count())
result = collection.remove("age == 21").execute()
self.assertEqual(1, result.get_affected_items_count())
self.assertEqual(2, collection.count())
# Collection.remove() is not allowed without a condition
result = collection.remove(None)
self.assertRaises(mysqlx.ProgrammingError, result.execute)
result = collection.remove("")
self.assertRaises(mysqlx.ProgrammingError, result.execute)
self.assertRaises(mysqlx.ProgrammingError, collection.remove, " ")
self.schema.drop_collection(collection_name)
def _assert_flat_line(self, samples, tolerance):
for sample in range(1, len(samples)):
self.assertLessEqual(samples[sample] - tolerance,
samples[sample - 1], "For sample {} objects "
"{} exceed the tolerance ({}) from previous "
"sample {}".format(sample, samples[sample],
tolerance,
samples[sample - 1]))
def _collect_samples(self, sample_size, funct, param):
samples = [0] * sample_size
for num in range(sample_size * 10):
_ = funct(eval(param)).execute()
if num % 10 == 0:
samples[int(num / 10)] = len(gc.get_objects())
return samples
def test_memory_use_in_sequential_calls(self):
"Test that the number of live objects stays flat across sequential calls"
collection_name = "{0}.test".format(self.schema_name)
collection = self.schema.create_collection(collection_name)
sample_size = 100
param = '{"_id": "{}".format(num), "name": repr(num), "number": num}'
add_samples = self._collect_samples(sample_size, collection.add,
param)
param = '\'$.name == "{}"\'.format(num)'
find_samples = self._collect_samples(sample_size, collection.find,
param)
# The tolerance here is the number of new objects that can be created
# on each sequential method invocation without exceeding memory usage.
tolerance = 12
self._assert_flat_line(add_samples, tolerance)
self._assert_flat_line(find_samples, tolerance)
self.schema.drop_collection(collection_name)
def test_bind(self):
collection_name = "collection_test"
collection = self.schema.create_collection(collection_name)
collection.add(
{"_id": "1", "name": "Fred", "age": 21},
{"_id": "2", "name": "Barney", "age": 28},
{"_id": "3", "name": "Wilma", "age": 42},
{"_id": "4", "name": "Betty", "age": 67},
).execute()
# Empty bind should not be allowed
find = collection.find("$.age == :age")
self.assertRaises(mysqlx.ProgrammingError, find.bind)
# Invalid arguments to bind
find = collection.find("$.age == :age")
self.assertRaises(mysqlx.ProgrammingError, find.bind, 21, 28, 42)
# Bind with a dictionary
result = collection.find("$.age == :age").bind({"age": 67}).execute()
docs = result.fetch_all()
self.assertEqual(1, len(docs))
self.assertEqual("Betty", docs[0]["name"])
# Bind with a JSON string
result = collection.find("$.age == :age").bind('{"age": 42}').execute()
docs = result.fetch_all()
self.assertEqual(1, len(docs))
self.assertEqual("Wilma", docs[0]["name"])
result = collection.find("$.age == :age").bind("age", 28).execute()
docs = result.fetch_all()
self.assertEqual(1, len(docs))
self.assertEqual("Barney", docs[0]["name"])
self.schema.drop_collection(collection_name)
def test_find(self):
collection_name = "collection_test"
collection = self.schema.create_collection(collection_name)
collection.add(
{"_id": "1", "name": "Fred", "age": 21},
{"_id": "2", "name": "Barney", "age": 28},
{"_id": "3", "name": "Wilma", "age": 42},
{"_id": "4", "name": "Betty", "age": 67},
).execute()
result = collection.find("$.age == 67").execute()
docs = result.fetch_all()
self.assertEqual(1, len(docs))
self.assertEqual("Betty", docs[0]["name"])
result = \
collection.find("$.age > 28").sort("age DESC, name ASC").execute()
docs = result.fetch_all()
self.assertEqual(2, len(docs))
self.assertEqual(67, docs[0]["age"])
result = \
collection.find().fields("age").sort("age DESC").limit(2).execute()
docs = result.fetch_all()
self.assertEqual(2, len(docs))
self.assertEqual(42, docs[1]["age"])
self.assertEqual(1, len(docs[1].keys()))
# test flexible params
result = collection.find("$.age > 28")\
.sort(["age DESC", "name ASC"]).execute()
docs = result.fetch_all()
self.assertEqual(2, len(docs))
self.assertEqual(67, docs[0]["age"])
# test flexible params
result = collection.find().fields(["age"])\
.sort("age DESC").limit(2).execute()
docs = result.fetch_all()
self.assertEqual(2, len(docs))
self.assertEqual(42, docs[1]["age"])
self.assertEqual(1, len(docs[1].keys()))
# test like operator
result = collection.find("$.name like 'B%'").execute()
docs = result.fetch_all()
self.assertEqual(2, len(docs))
# test aggregation functions without alias
result = collection.find().fields("sum($.age)").execute()
docs = result.fetch_all()
self.assertTrue("sum($.age)" in docs[0].keys())
self.assertEqual(158, docs[0]["sum($.age)"])
# test operators without alias
result = collection.find().fields("$.age + 100").execute()
docs = result.fetch_all()
self.assertTrue("$.age + 100" in docs[0].keys())
# test comma-separated fields
result = collection.find("$.age = 21").fields("$.age, $.name").execute()
docs = result.fetch_all()
self.assertEqual("Fred", docs[0]["$.name"])
# test limit and offset
result = collection.find().fields("$.name").limit(2).offset(2).execute()
docs = result.fetch_all()
self.assertEqual(2, len(docs))
self.assertEqual("Wilma", docs[0]["$.name"])
self.assertEqual("Betty", docs[1]["$.name"])
self.assertRaises(ValueError, collection.find().limit, -1)
self.assertRaises(ValueError, collection.find().limit(1).offset, -1)
# test unread result found
find = collection.find()
find.execute()
find.execute()
result = find.execute()
docs = result.fetch_all()
self.assertEqual(4, len(docs))
self.schema.drop_collection(collection_name)
def test_modify(self):
collection_name = "collection_test"
collection = self.schema.create_collection(collection_name)
collection.add(
{"_id": "1", "name": "Fred", "age": 21},
{"_id": "2", "name": "Barney", "age": 28},
{"_id": "3", "name": "Wilma", "age": 42},
{"_id": "4", "name": "Betty", "age": 67},
).execute()
result = collection.modify("age < 67").set("young", True).execute()
self.assertEqual(3, result.get_affected_items_count())
doc = collection.find("name = 'Fred'").execute().fetch_all()[0]
self.assertEqual(True, doc.young)
result = \
collection.modify("age == 28").change("young", False).execute()
self.assertEqual(1, result.get_affected_items_count())
docs = collection.find("young = True").execute().fetch_all()
self.assertEqual(2, len(docs))
result = collection.modify("young == True").unset("young").execute()
self.assertEqual(2, result.get_affected_items_count())
docs = collection.find("young = True").execute().fetch_all()
self.assertEqual(0, len(docs))
# test flexible params
result = collection.modify("TRUE").unset(["young"]).execute()
self.assertEqual(1, result.get_affected_items_count())
# Collection.modify() is not allowed without a condition
result = collection.modify(None).unset(["young"])
self.assertRaises(mysqlx.ProgrammingError, result.execute)
result = collection.modify("").unset(["young"])
self.assertRaises(mysqlx.ProgrammingError, result.execute)
self.schema.drop_collection(collection_name)
@unittest.skipIf(tests.MYSQL_VERSION < (8, 0, 4), "Unavailable")
def test_modify_patch(self):
collection_name = "collection_GOT"
collection = self.schema.create_collection(collection_name)
collection.add(
{"_id": "1", "name": "Bran", "family_name": "Stark", "age": 18,
"actors_bio": {"bd": "1999 April 9", "rn": "Isaac Hempstead"},
"parents": ["Eddard Stark", "Catelyn Stark"]},
{"_id": "2", "name": "Sansa", "family_name": "Stark", "age": 21,
"actors_bio": {"bd": "1996 February 21",
"rn": "Sophie Turner"},
"parents": ["Eddard Stark", "Catelyn Stark"]},
{"_id": "3", "name": "Arya", "family_name": "Stark", "age": 20,
"actors_bio": {"bd": "1997 April 15",
"rn": "Maisie Williams"},
"parents": ["Eddard Stark", "Catelyn Stark"]},
{"_id": "4", "name": "Jon", "family_name": "Snow", "age": 30,
"actors_bio": {"bd": "1986 December 26",
"rn": "Kit Harington"}, },
{"_id": "5", "name": "Daenerys", "family_name": "Targaryen",
"age": 30, "actors_bio": {"bd": "1986 October 23",
"rn": "Emilia Clarke"}, },
{"_id": "6", "name": "Margaery", "family_name": "Tyrell",
"age": 35, "actors_bio": {"bd": "1982 February 11",
"rn": "Natalie Dormer"}, },
{"_id": "7", "name": "Cersei", "family_name": "Lannister",
"age": 44, "actors_bio": {"bd": "1973 October 3",
"rn": "Lena Headey"},
"parents": ["Tywin Lannister, Joanna Lannister"]},
{"_id": "8", "name": "Tyrion", "family_name": "Lannister",
"age": 48, "actors_bio": {"bd": "1969 June 11",
"rn": "Peter Dinklage"},
"parents": ["Tywin Lannister, Joanna Lannister"]},
).execute()
# test with empty document
result = collection.modify("TRUE").patch('{}').execute()
self.assertEqual(0, result.get_affected_items_count())
# Test addition of new attribute
result = collection.modify("age <= 21").patch(
'{"status": "young"}').execute()
self.assertEqual(3, result.get_affected_items_count())
doc = collection.find("name = 'Bran'").execute().fetch_all()[0]
self.assertEqual("young", doc.status)
doc = collection.find("name = 'Sansa'").execute().fetch_all()[0]
self.assertEqual("young", doc.status)
doc = collection.find("name = 'Arya'").execute().fetch_all()[0]
self.assertEqual("young", doc.status)
result = collection.modify("age > 21").patch(
'{"status": "older"}').execute()
self.assertEqual(5, result.get_affected_items_count())
doc = collection.find("name = 'Jon'").execute().fetch_all()[0]
self.assertEqual("older", doc.status)
doc = collection.find("name = 'Cersei'").execute().fetch_all()[0]
self.assertEqual("older", doc.status)
doc = collection.find("name = 'Tyrion'").execute().fetch_all()[0]
self.assertEqual("older", doc.status)
doc = collection.find("name = 'Daenerys'").execute().fetch_all()[0]
self.assertEqual("older", doc.status)
doc = collection.find("name = 'Margaery'").execute().fetch_all()[0]
self.assertEqual("older", doc.status)
# Test addition of new attribute with array value
result = collection.modify('family_name == "Tyrell"').patch(
{"parents": ["Mace Tyrell", "Alerie Tyrell"]}).execute()
self.assertEqual(1, result.get_affected_items_count())
doc = collection.find("name = 'Margaery'").execute().fetch_all()[0]
self.assertEqual(
["Mace Tyrell", "Alerie Tyrell"],
doc.parents)
result = collection.modify('name == "Jon"').patch(
'{"parents": ["Lyanna Stark and Rhaegar Targaryen"], '
'"bastard":null}').execute()
self.assertEqual(1, result.get_affected_items_count())
doc = collection.find("name = 'Jon'").execute().fetch_all()[0]
self.assertEqual(
["Lyanna Stark and Rhaegar Targaryen"],
doc.parents)
# Test update of attribute with array value
result = collection.modify('name == "Jon"').patch(
'{"parents": ["Lyanna Stark", "Rhaegar Targaryen"], '
'"bastard":null}').execute()
self.assertEqual(1, result.get_affected_items_count())
doc = collection.find("name = 'Jon'").execute().fetch_all()[0]
self.assertEqual(
["Lyanna Stark", "Rhaegar Targaryen"],
doc.parents)
# Test add and update of a nested attribute with doc value
result = collection.modify('name == "Daenerys"').patch('''
{"dragons":{"drogon": "black with red markings",
"Rhaegal": "green with bronze markings",
"Viserion": "creamy white, with gold markings"}}
''').execute()
self.assertEqual(1, result.get_affected_items_count())
doc = collection.find("name = 'Daenerys'").execute().fetch_all()[0]
self.assertEqual(
{"drogon": "black with red markings",
"Rhaegal": "green with bronze markings",
"Viserion": "creamy white, with gold markings"},
doc.dragons)
# Test removing an attribute by setting it to a null value
result = collection.modify("TRUE").patch('{"status": null}').execute()
self.assertEqual(8, result.get_affected_items_count())
# Test remove a nested attribute with doc value
result = collection.modify('name == "Daenerys"').patch(
{"dragons": {"drogon": "black with red markings",
"Rhaegal": "green with bronze markings",
"Viserion": None}}
).execute()
self.assertEqual(1, result.get_affected_items_count())
doc = collection.find("name = 'Daenerys'").execute().fetch_all()[0]
self.assertEqual(
{"drogon": "black with red markings",
"Rhaegal": "green with bronze markings"},
doc.dragons)
# Test add new attribute using expression
result = collection.modify('name == "Daenerys"').patch(mysqlx.expr(
'JSON_OBJECT("dragons", JSON_OBJECT("count", 3))'
)).execute()
self.assertEqual(1, result.get_affected_items_count())
doc = collection.find("name = 'Daenerys'").execute().fetch_all()[0]
self.assertEqual(
{"drogon": "black with red markings",
"Rhaegal": "green with bronze markings",
"count": 3},
doc.dragons)
# Test update attribute value using expression
result = collection.modify('name == "Daenerys"').patch(mysqlx.expr(
'JSON_OBJECT("dragons",'
' JSON_OBJECT("count", $.dragons.count - 1))')).execute()
self.assertEqual(1, result.get_affected_items_count())
doc = collection.find("name = 'Daenerys'").execute().fetch_all()[0]
self.assertEqual(
{"drogon": "black with red markings",
"Rhaegal": "green with bronze markings",
"count": 2},
doc.dragons)
# Test update attribute value using expression without JSON functions
result = collection.modify('TRUE').patch(mysqlx.expr(
'{"actors_bio": {"current": {"day_of_birth": CAST(SUBSTRING_INDEX('
' $.actors_bio.bd, " ", - 1) AS DECIMAL)}}}')).execute()
self.assertEqual(8, result.get_affected_items_count())
# Test update attribute value using mysqlx.expr
result = collection.modify('TRUE').patch(
{"actors_bio": {"current": {
"birth_age": mysqlx.expr(
'CAST(SUBSTRING_INDEX($.actors_bio.bd, " ", 1)'
' AS DECIMAL)')}}
}).execute()
self.assertEqual(8, result.get_affected_items_count())
doc = collection.find(
"actors_bio.rn = 'Maisie Williams'").execute().fetch_all()[0]
self.assertEqual(
{"bd": "1997 April 15",
"current": {'day_of_birth': 15, 'birth_age': 1997},
"rn": "Maisie Williams"},
doc.actors_bio)
# Test update attribute value using mysqlx.expr extended without '()'
result = collection.modify('TRUE').patch(
{"actors_bio": {"current": {
"age": mysqlx.expr(
'CAST(Year(CURDATE()) - '
'SUBSTRING_INDEX($.actors_bio.bd, " ", 1) AS DECIMAL)')}}
}).execute()
self.assertEqual(8, result.get_affected_items_count())
res = self.session.sql("select Year(CURDATE()) - 1997").execute()
age = res.fetch_all()[0]["Year(CURDATE()) - 1997"]
doc = collection.find(
"actors_bio.rn = 'Maisie Williams'").execute().fetch_all()[0]
self.assertEqual(
{"bd": "1997 April 15",
"current": {'age': age, 'day_of_birth': 15, 'birth_age': 1997},
"rn": "Maisie Williams"},
doc.actors_bio)
# Test use of the YEAR() function
result = collection.modify('TRUE').patch(mysqlx.expr(
'{"actors_bio": {"current": {"last_update": Year(CURDATE())}}}'
)).execute()
self.assertEqual(8, result.get_affected_items_count())
# Collection.modify() is not allowed without a condition
result = collection.modify(None).patch('{"status":"alive"}')
self.assertRaises(mysqlx.ProgrammingError, result.execute)
result = collection.modify("").patch('{"status":"alive"}')
self.assertRaises(mysqlx.ProgrammingError, result.execute)
# Collection.modify().patch() is not allowed without a document
result = collection.modify("TRUE").patch('')
self.assertRaises(mysqlx.OperationalError, result.execute)
result = collection.modify("TRUE").patch(None)
self.assertRaises(mysqlx.OperationalError, result.execute)
# Collection.modify().patch() must fail if the parameter is other
# than DbDoc, dict or str
self.assertRaises(mysqlx.ProgrammingError,
collection.modify("TRUE").patch, {"a_set"})
self.schema.drop_collection(collection_name)
@unittest.skipIf(tests.MYSQL_VERSION < (8, 0, 3),
"Root level updates not supported")
def test_replace_one(self):
collection_name = "collection_test"
collection = self.schema.create_collection(collection_name)
collection.add(
{"_id": "1", "name": "Fred", "age": 21},
{"_id": "2", "name": "Barney", "age": 28},
{"_id": "3", "name": "Wilma", "age": 42},
{"_id": "4", "name": "Betty", "age": 67},
).execute()
result = collection.find("age = 21").execute().fetch_one()
self.assertEqual("Fred", result["name"])
result['name'] = "George"
collection.replace_one(result["_id"], result)
result = collection.find("age = 21").execute().fetch_one()
self.assertEqual("George", result["name"])
self.schema.drop_collection(collection_name)
@unittest.skipIf(tests.MYSQL_VERSION < (8, 0, 2), "Upsert not supported")
def test_add_or_replace_one(self):
collection_name = "collection_test"
collection = self.schema.create_collection(collection_name)
collection.add(
{"_id": "1", "name": "Fred", "age": 21},
{"_id": "2", "name": "Barney", "age": 28},
{"_id": "3", "name": "Wilma", "age": 42},
{"_id": "4", "name": "Betty", "age": 67},
).execute()
result = collection.find("age = 21").execute().fetch_one()
self.assertEqual("Fred", result["name"])
result['name'] = "George"
collection.add_or_replace_one(result["_id"], result)
result = collection.find("age = 21").execute().fetch_one()
self.assertEqual("George", result["name"])
result = collection.find("_id = 'new_id'").execute().fetch_all()
self.assertEqual(0, len(result))
upsert = {"_id": "11", 'name': 'Melissandre', "age": 99999}
collection.add_or_replace_one("new_id", upsert)
result = collection.find("age = 99999").execute().fetch_one()
self.assertEqual("Melissandre", result["name"])
self.assertEqual("new_id", result["_id"])
self.schema.drop_collection(collection_name)
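The upsert behavior asserted above (the target id overrides any `_id` carried inside the supplied document) can be sketched with a plain dict standing in for the collection; `add_or_replace_one` below is an illustrative stand-in, not the connector's implementation:

```python
def add_or_replace_one(store, doc_id, doc):
    """Replace the document stored under doc_id, or insert it if absent.

    Mirrors Collection.add_or_replace_one(): the target id wins over
    any "_id" already present in the supplied document.
    """
    store[doc_id] = dict(doc, _id=doc_id)
    return store[doc_id]

store = {}
add_or_replace_one(store, "1", {"name": "Fred", "age": 21})
add_or_replace_one(store, "1", {"name": "George", "age": 21})  # replace
add_or_replace_one(store, "new_id", {"_id": "11", "name": "Melissandre"})
```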
def test_get_one(self):
collection_name = "collection_test"
collection = self.schema.create_collection(collection_name)
collection.add(
{"_id": "1", "name": "Fred", "age": 21},
{"_id": "2", "name": "Barney", "age": 28},
{"_id": "3", "name": "Wilma", "age": 42},
{"_id": "4", "name": "Betty", "age": 67},
).execute()
result = collection.find("name = 'Fred'").execute().fetch_one()
result = collection.get_one(result["_id"])
self.assertEqual("Fred", result["name"])
self.schema.drop_collection(collection_name)
def test_remove_one(self):
collection_name = "collection_test"
collection = self.schema.create_collection(collection_name)
collection.add(
{"_id": "1", "name": "Fred", "age": 21},
{"_id": "2", "name": "Barney", "age": 28},
{"_id": "3", "name": "Wilma", "age": 42},
{"_id": "4", "name": "Betty", "age": 67},
).execute()
result = collection.find("name = 'Fred'").execute().fetch_one()
result = collection.remove_one(result["_id"])
result = collection.find("name = 'Fred'").execute().fetch_all()
self.assertEqual(0, len(result))
self.schema.drop_collection(collection_name)
def test_results(self):
collection_name = "collection_test"
collection = self.schema.create_collection(collection_name)
collection.add(
{"_id": "1", "name": "Fred", "age": 21},
{"_id": "2", "name": "Barney", "age": 28},
{"_id": "3", "name": "Wilma", "age": 42},
{"_id": "4", "name": "Betty", "age": 67},
).execute()
result1 = collection.find().execute()
# Now do another collection find;
# the first result set will have to be transparently buffered
result2 = collection.find("age > 28").sort("age DESC").execute()
docs2 = result2.fetch_all()
self.assertEqual(2, len(docs2))
self.assertEqual("Betty", docs2[0]["name"])
docs1 = result1.fetch_all()
self.assertEqual(4, len(docs1))
result3 = collection.find("age > 28").sort("age DESC").execute()
self.assertEqual("Betty", result3.fetch_one()["name"])
self.assertEqual("Wilma", result3.fetch_one()["name"])
self.assertEqual(None, result3.fetch_one())
self.schema.drop_collection(collection_name)
@unittest.skipIf(tests.MYSQL_VERSION < (8, 0, 4), "Dev API change")
def test_create_index(self):
collection_name = "collection_test"
collection = self.schema.create_collection(collection_name)
# Create index with single field
index_name = "age_idx"
result = collection.create_index(index_name,
{"fields": [{"field": "$.age",
"type": "INT",
"required": True}],
"unique": True})
# Unique indexes are not supported
self.assertRaises(mysqlx.NotSupportedError, result.execute)
collection.create_index(index_name,
{"fields": [{"field": "$.age", "type": "INT",
"required": True}],
"unique": False}).execute()
result = self.session.sql(_SHOW_INDEXES_QUERY.format(
self.schema_name, collection_name, index_name)).execute()
rows = result.fetch_all()
self.assertEqual(1, len(rows))
# Create index with multiple fields
index_name = "streets_idx"
collection.create_index(index_name,
{"fields": [{"field": "$.street",
"type": "TEXT(15)",
"required": True},
{"field": "$.cross_street",
"type": "TEXT(15)",
"required": True}],
"unique": False}).execute()
result = self.session.sql(_SHOW_INDEXES_QUERY.format(
self.schema_name, collection_name, index_name)).execute()
rows = result.fetch_all()
self.assertEqual(2, len(rows))
# Create index using a geojson datatype
index_name = "geo_idx"
collection.create_index(index_name,
{"fields": [{"field": '$.myGeoJsonField',
"type": 'GEOJSON',
"required": True,
"options": 2,
"srid": 4326}],
"unique": False,
"type":'SPATIAL'}).execute()
result = self.session.sql(_SHOW_INDEXES_QUERY.format(
self.schema_name, collection_name, index_name)).execute()
rows = result.fetch_all()
self.assertEqual(1, len(rows))
# Create an index on document fields which contain arrays
index_name = "emails_idx"
index_desc = {"fields": [{"field": "$.emails", "type": "CHAR(128)",
"array": True}]}
collection.create_index(index_name, index_desc).execute()
result = self.session.sql(_SHOW_INDEXES_QUERY.format(
self.schema_name, collection_name, index_name)).execute()
rows = result.fetch_all()
self.assertEqual(1, len(rows))
# Error conditions
# Index name cannot be None
index_name = None
index_desc = {"fields": [{"field": "$.myField", "type": "TEXT(10)"}],
"unique": False, "type":"INDEX"}
create_index = collection.create_index(index_name, index_desc)
self.assertRaises(mysqlx.ProgrammingError, create_index.execute)
# Index name cannot be an invalid identifier
index_name = "!invalid"
create_index = collection.create_index(index_name, index_desc)
self.assertRaises(mysqlx.ProgrammingError, create_index.execute)
index_name = "invalid()"
create_index = collection.create_index(index_name, index_desc)
self.assertRaises(mysqlx.ProgrammingError, create_index.execute)
index_name = "01invalid"
create_index = collection.create_index(index_name, index_desc)
self.assertRaises(mysqlx.ProgrammingError, create_index.execute)
# index descriptor wrong format
# Required "fields" is missing
index_name = "myIndex"
index_desc = {"fields1": [{"field": "$.myField", "type": "TEXT(10)"}],
"unique": False, "type":"INDEX"}
create_index = collection.create_index(index_name, index_desc)
self.assertRaises(mysqlx.ProgrammingError, create_index.execute)
index_desc = {"field": [{"field": "$.myField", "type": "TEXT(10)"}],
"unique": False, "type":"INDEX"}
create_index = collection.create_index(index_name, index_desc)
self.assertRaises(mysqlx.ProgrammingError, create_index.execute)
# index descriptor with an invalid index type
index_desc = {"field": [{"field": "$.myField", "type": "TEXT(10)"}],
"unique": False, "type":"Invalid"}
create_index = collection.create_index(index_name, index_desc)
self.assertRaises(mysqlx.ProgrammingError, create_index.execute)
# index description contains additional fields
index_desc = {"field": [{"field": "$.myField", "type": "TEXT(10)"}],
"unique": False, "other":"value"}
create_index = collection.create_index(index_name, index_desc)
self.assertRaises(mysqlx.ProgrammingError, create_index.execute)
# Inner "field" value is not a list
index_desc = {"fields": "$.myField",
"unique": False, "type":"INDEX"}
create_index = collection.create_index(index_name, index_desc)
self.assertRaises(mysqlx.ProgrammingError, create_index.execute)
# Required inner "field" is missing
index_desc = {"fields": [{}],
"unique": False, "type":"INDEX"}
create_index = collection.create_index(index_name, index_desc)
self.assertRaises(mysqlx.ProgrammingError, create_index.execute)
# Required inner "field" is misstyped
index_desc = {"fields": [{"field1": "$.myField", "type": "TEXT(10)"}],
"unique": False, "type":"INDEX"}
create_index = collection.create_index(index_name, index_desc)
self.assertRaises(mysqlx.ProgrammingError, create_index.execute)
# Required inner "field" is misstyped
index_desc = {"fields": [{"01field1": "$.myField",
"type": "TEXT(10)"}],
"unique": False, "type":"INDEX"}
create_index = collection.create_index(index_name, index_desc)
self.assertRaises(mysqlx.ProgrammingError, create_index.execute)
# Required inner "field.type" is missing
index_desc = {"fields": [{"field": "$.myField"}], "unique": False,
"type":"INDEX"}
create_index = collection.create_index(index_name, index_desc)
self.assertRaises(mysqlx.ProgrammingError, create_index.execute)
# Required inner "field.type" is invalid
index_desc = {"fields": [{"field": "$.myField", "type": "invalid"}],
"unique": False, "type":"INDEX"}
create_index = collection.create_index(index_name, index_desc)
self.assertRaises(mysqlx.OperationalError, create_index.execute)
# Due to current server limitations, "unique" cannot be True
index_desc = {"fields": [{"field": "$.myField", "type": "TEXT(10)"}],
"unique": True, "type":"INDEX"}
create_index = collection.create_index(index_name, index_desc)
self.assertRaises(mysqlx.NotSupportedError, create_index.execute)
# index specifying the 'collation' option for a non-TEXT data type
index_desc = {"fields": [{"field": "$.myField", "type": "int",
"collation": "utf8_general_ci"}],
"type":"INDEX"}
create_index = collection.create_index(index_name, index_desc)
self.assertRaises(mysqlx.ProgrammingError, create_index.execute)
# member description contains additional fields
index_desc = {"fields": [{"field": "$.myField", "type": "int",
"additional": "field"}],
"type":"INDEX"}
create_index = collection.create_index(index_name, index_desc)
self.assertRaises(mysqlx.ProgrammingError, create_index.execute)
# index type SPATIAL requires inner required field to be True
index_name = "geotrap"
index_desc = {"fields": [{"field": "$.intField", "type": "INT",
"required": True},
{"field": "$.floatField", "type": "FLOAT",
"required": True},
{"field": "$.dateField", "type": "DATE"},
{"field": "$.geoField", "type": "GEOJSON",
"required": False, "options": 2,
"srid": 4326}], "type" : "SPATIAL"}
create_index = collection.create_index(index_name, index_desc)
self.assertRaises(mysqlx.ProgrammingError, create_index.execute)
# inner field type GEOJSON requires index type set to SPATIAL
index_desc = {"fields": [{"field": "$.intField", "type": "INT",
"required": True},
{"field": "$.floatField", "type": "FLOAT",
"required": True},
{"field": "$.dateField", "type": "DATE"},
{"field": "$.geoField", "type": "GEOJSON",
"required": False, "options": 2,
"srid": 4326}], "type" : "SPATIAL"}
create_index = collection.create_index(index_name, index_desc)
self.assertRaises(mysqlx.ProgrammingError, create_index.execute)
# "srid" fields can be present only if "type" is set to "GEOJSON"
index_desc = {"fields": [{"field": "$.NogeoField", "type": "int",
"required": True, "srid": 4326}],
"type" : "SPATIAL"}
create_index = collection.create_index(index_name, index_desc)
self.assertRaises(mysqlx.ProgrammingError, create_index.execute)
# "options" fields can be present only if "type" is set to "GEOJSON"
index_desc = {"fields": [{"field": "$.NogeoField", "type": "int",
"required": True, "options": 2}],
"type" : "SPATIAL"}
create_index = collection.create_index(index_name, index_desc)
self.assertRaises(mysqlx.ProgrammingError, create_index.execute)
# "required" fields must be Boolean
index_name = "age_idx"
index_desc = {"fields": [{"field": "$.age", "type": "INT",
"required": "True"}], "unique": False}
create_index = collection.create_index(index_name, index_desc)
self.assertRaises(TypeError, create_index.execute)
# "array" fields must be Boolean
index_name = "emails_idx"
index_desc = {"fields": [{"field": "$.emails", "type": "CHAR(128)",
"array": "True"}]}
create_index = collection.create_index(index_name, index_desc)
self.assertRaises(TypeError, create_index.execute)
self.schema.drop_collection(collection_name)
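The error cases above follow a handful of descriptor rules. A hedged pure-Python sketch of a validator (not the connector's implementation, and covering only the rules exercised here: a non-empty "fields" list, a valid identifier name, Boolean "required"/"array" flags, and GEOJSON-only "srid"/"options") could look like:

```python
import re

# Simplified identifier rule; the server's actual rules are broader.
_IDENT = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def validate_index(name, desc):
    """Reject the malformed index names/descriptors exercised above."""
    if not name or not _IDENT.match(name):
        raise ValueError("invalid index name")
    fields = desc.get("fields")
    if not isinstance(fields, list) or not fields:
        raise ValueError('a non-empty "fields" list is required')
    for member in fields:
        if "field" not in member or "type" not in member:
            raise ValueError('each member needs "field" and "type"')
        for flag in ("required", "array"):
            if flag in member and not isinstance(member[flag], bool):
                raise TypeError('"{0}" must be Boolean'.format(flag))
        if member["type"].upper() != "GEOJSON" and (
                "srid" in member or "options" in member):
            raise ValueError('"srid"/"options" are GEOJSON-only')
```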
@unittest.skipIf(tests.MYSQL_VERSION < (8, 0, 4), "Dev API change")
def test_drop_index(self):
collection_name = "collection_test"
collection = self.schema.create_collection(collection_name)
index_name = "age_idx"
collection.create_index(index_name,
{"fields": [{"field": "$.age", "type": "INT",
"required": True}],
"unique": False}).execute()
show_indexes_sql = (
"SHOW INDEXES FROM `{0}`.`{1}` WHERE Key_name='{2}'"
"".format(self.schema_name, collection_name, index_name)
)
result = self.session.sql(show_indexes_sql).execute()
rows = result.fetch_all()
self.assertEqual(1, len(rows))
collection.drop_index(index_name)
result = self.session.sql(show_indexes_sql).execute()
rows = result.fetch_all()
self.assertEqual(0, len(rows))
# dropping a non-existing index should succeed silently
collection.drop_index(index_name)
self.schema.drop_collection(collection_name)
def test_parameter_binding(self):
collection_name = "collection_test"
collection = self.schema.create_collection(collection_name)
collection.add(
{"_id": "1", "name": "Fred", "age": 21},
{"_id": "2", "name": "Barney", "age": 28},
{"_id": "3", "name": "Wilma", "age": 42},
{"_id": "4", "name": "Betty", "age": 67},
).execute()
result = collection.find("age == :age").bind("age", 67).execute()
docs = result.fetch_all()
self.assertEqual(1, len(docs))
self.assertEqual("Betty", docs[0]["name"])
result = collection.find("$.age = :age").bind('{"age": 42}') \
.sort("age DESC, name ASC").execute()
docs = result.fetch_all()
self.assertEqual(1, len(docs))
self.assertEqual("Wilma", docs[0]["name"])
# The number of bind parameters and placeholders do not match
self.assertRaises(mysqlx.ProgrammingError,
collection.find("$.age = ? and $.name = ?").bind, 42)
# Binding anonymous parameters is not allowed in CRUD operations
self.assertRaises(mysqlx.ProgrammingError,
collection.find("$.age = ?").bind, 42)
self.assertRaises(mysqlx.ProgrammingError,
collection.find("$.name = ?").bind, "Fred")
self.schema.drop_collection(collection_name)
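The binding rules tested above can be mimicked with a toy substitution helper (illustrative only; the real connector sends parameters over the protocol rather than splicing text): named `:name` placeholders bind by keyword, while anonymous `?` placeholders and unbound names are rejected.

```python
import re

def bind_named(expr, **params):
    """Substitute :name placeholders; reject anonymous '?' ones."""
    if "?" in expr:
        raise ValueError("anonymous placeholders are not allowed in CRUD")
    missing = set(re.findall(r":(\w+)", expr)) - set(params)
    if missing:
        raise ValueError("unbound parameters: {0}".format(sorted(missing)))
    return re.sub(r":(\w+)", lambda m: repr(params[m.group(1)]), expr)
```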
def test_unicode_parameter_binding(self):
collection_name = "collection_test"
collection = self.schema.create_collection(collection_name)
collection.add(
{"_id": "1", "name": u"José", "age": 21},
{"_id": "2", "name": u"João", "age": 28},
{"_id": "3", "name": u"Célia", "age": 42},
).execute()
result = collection.find("name == :name").bind("name", u"José") \
.execute()
docs = result.fetch_all()
self.assertEqual(1, len(docs))
self.assertEqual(u"José", docs[0]["name"])
result = collection.find("$.name = :name").bind(u'{"name": "João"}') \
.execute()
docs = result.fetch_all()
self.assertEqual(1, len(docs))
self.assertEqual(u"João", docs[0]["name"])
self.schema.drop_collection(collection_name)
def test_array_insert(self):
collection_name = "collection_test"
collection = self.schema.create_collection(collection_name)
collection.add(
{"_id": 1, "name": "Fred", "cards": []},
{"_id": 2, "name": "Barney", "cards": [1, 2, 4]},
{"_id": 3, "name": "Wilma", "cards": []},
{"_id": 4, "name": "Betty", "cards": []},
).execute()
collection.modify("$._id == 2").array_insert("$.cards[2]", 3).execute()
docs = collection.find("$._id == 2").execute().fetch_all()
self.assertEqual([1, 2, 3, 4], docs[0]["cards"])
# Test binding
modify = collection.modify("$._id == :id").array_insert("$.cards[0]", 0)
modify.bind("id", 1).execute()
doc = collection.get_one(1)
self.assertEqual([0], doc["cards"])
modify.bind("id", 2).execute()
doc = collection.get_one(2)
self.assertEqual([0, 1, 2, 3, 4], doc["cards"])
modify.bind("id", 3).execute()
doc = collection.get_one(3)
self.assertEqual([0], doc["cards"])
self.schema.drop_collection(collection_name)
def test_array_append(self):
collection_name = "collection_test"
collection = self.schema.create_collection(collection_name)
collection.add(
{"_id": 1, "name": "Fred", "cards": [1]},
{"_id": 2, "name": "Barney", "cards": [1, 2, 4]},
{"_id": 3, "name": "Wilma", "cards": [1, 2]},
{"_id": 4, "name": "Betty", "cards": []},
).execute()
collection.modify("$._id == 2").array_append("$.cards[1]", 3).execute()
docs = collection.find("$._id == 2").execute().fetch_all()
self.assertEqual([1, [2, 3], 4], docs[0]["cards"])
# Test binding
modify = collection.modify("$._id == :id").array_append("$.cards[0]", 5)
modify.bind("id", 1).execute()
doc = collection.get_one(1)
self.assertEqual([[1, 5]], doc["cards"])
modify.bind("id", 2).execute()
doc = collection.get_one(2)
self.assertEqual([[1, 5], [2, 3], 4], doc["cards"])
modify.bind("id", 3).execute()
doc = collection.get_one(3)
self.assertEqual([[1, 5], 2], doc["cards"])
self.schema.drop_collection(collection_name)
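The assertions above rely on JSON array-append semantics: appending at a path whose current value is a scalar autowraps that scalar into an array first (so the `2` at `$.cards[1]` becomes `[2, 3]`). A minimal Python sketch of that rule, for illustration only:

```python
def json_array_append(array, index, value):
    """Append value at array[index], autowrapping a scalar target."""
    target = array[index]
    if isinstance(target, list):
        target.append(value)
    else:
        array[index] = [target, value]

cards = [1, 2, 4]
json_array_append(cards, 1, 3)  # target 2 is a scalar: becomes [2, 3]
json_array_append(cards, 1, 5)  # target [2, 3] is a list: plain append
```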
def test_count(self):
collection_name = "collection_test"
collection = self.schema.create_collection(collection_name)
collection.add(
{"_id": "1", "name": "Fred", "age": 21},
{"_id": "2", "name": "Barney", "age": 28},
{"_id": "3", "name": "Wilma", "age": 42},
{"_id": "4", "name": "Betty", "age": 67},
).execute()
self.assertEqual(4, collection.count())
self.schema.drop_collection(collection_name)
self.assertRaises(mysqlx.OperationalError, collection.count)
@unittest.skipIf(tests.MYSQL_VERSION < (8, 0, 14),
"Prepared statements not supported")
def test_prepared_statements(self):
session = mysqlx.get_session(self.connect_kwargs)
schema = session.get_schema(self.schema_name)
expected_stmt_attrs = \
lambda stmt, changed, prepared, repeated, exec_counter: \
stmt.changed == changed and stmt.prepared == prepared and \
stmt.repeated == repeated and stmt.exec_counter == exec_counter
collection_name = "prepared_collection_test"
collection = schema.create_collection(collection_name)
collection.add(
{"_id": "1", "name": "Fred", "age": 21},
{"_id": "2", "name": "Barney", "age": 28},
{"_id": "3", "name": "Wilma", "age": 42},
{"_id": "4", "name": "Betty", "age": 67},
{"_id": "5", "name": "Bob", "age": 75},
).execute()
# FindStatement
find = collection.find("$.age == :age")
self.assertTrue(expected_stmt_attrs(find, True, False, False, 0))
# On the first call should: Crud::Find (without prepared statement)
find.bind("age", 21).execute().fetch_all()
self.assertTrue(expected_stmt_attrs(find, False, False, False, 1))
# On the second call should: Prepare::Prepare + Prepare::Execute
find.bind("age", 28).execute().fetch_all()
self.assertTrue(expected_stmt_attrs(find, False, True, True, 2))
# On subsequent calls should: Prepare::Execute
find.bind("age", 42).execute().fetch_all()
self.assertTrue(expected_stmt_attrs(find, False, True, True, 3))
row = session.sql(_PREP_STMT_QUERY).execute().fetch_all()[0]
expected_sql_text = ("SELECT doc FROM `{}`.`{}` "
"WHERE (JSON_EXTRACT(doc,'$.age') = ?)"
"".format(self.schema_name, collection_name))
self.assertEqual(row[0], expected_sql_text)
self.assertEqual(row[1], find.exec_counter - 1)
# Using sort() should deallocate the prepared statement:
# Prepare::Deallocate + Crud::Find
find.bind("age", 21).sort("age").execute().fetch_all()
self.assertTrue(expected_stmt_attrs(find, False, False, False, 1))
# On the second call should: Prepare::Prepare + Prepare::Execute
find.bind("age", 42).execute().fetch_all()
self.assertTrue(expected_stmt_attrs(find, False, True, True, 2))
# The previous statement should be closed since it had no limit/offset
# Prepare::Deallocate + Prepare::Prepare + Prepare::Execute
find.bind("age", 67).limit(1).offset(0).execute().fetch_all()
self.assertTrue(expected_stmt_attrs(find, False, True, False, 1))
# On the second call should: Prepare::Execute
find.bind("age", 75).limit(1).offset(0).execute().fetch_all()
self.assertTrue(expected_stmt_attrs(find, False, True, True, 2))
row = session.sql(_PREP_STMT_QUERY).execute().fetch_all()[0]
expected_sql_text = ("SELECT doc FROM `{}`.`{}` "
"WHERE (JSON_EXTRACT(doc,'$.age') = ?) "
"ORDER BY JSON_EXTRACT(doc,'$.age') LIMIT ?, ?"
"".format(self.schema_name, collection_name))
self.assertEqual(row[0], expected_sql_text)
self.assertEqual(row[1], find.exec_counter)
# ModifyStatement
modify = collection.modify("$._id == :id").set("age", 18)
self.assertTrue(expected_stmt_attrs(modify, True, False, False, 0))
# On the first call should: Crud::Modify (without prepared statement)
res = modify.bind("id", "1").execute()
self.assertTrue(expected_stmt_attrs(modify, False, False, False, 1))
# On the second call should: Prepare::Prepare + Prepare::Execute
res = modify.bind("id", "2").execute()
self.assertTrue(expected_stmt_attrs(modify, False, True, True, 2))
row = session.sql(_PREP_STMT_QUERY).execute().fetch_all()[1]
expected_sql_text = ("UPDATE `{}`.`{}` "
"SET doc=JSON_SET(JSON_SET(doc,'$.age',18),"
"'$._id',JSON_EXTRACT(`doc`,'$._id')) "
"WHERE (JSON_EXTRACT(doc,'$._id') = ?)"
"".format(self.schema_name, collection_name))
self.assertEqual(row[0], expected_sql_text)
self.assertEqual(row[1], modify.exec_counter - 1)
# Using set() should deallocate the prepared statement:
# Prepare::Deallocate + Crud::Modify
res = modify.bind("id", "3").set("age", 92).execute()
self.assertTrue(expected_stmt_attrs(modify, False, False, False, 1))
# On the second call should: Prepare::Prepare + Prepare::Execute
res = modify.bind("id", "4").execute()
self.assertTrue(expected_stmt_attrs(modify, False, True, True, 2))
row = session.sql(_PREP_STMT_QUERY).execute().fetch_all()[1]
expected_sql_text = ("UPDATE `{}`.`{}` "
"SET doc=JSON_SET(JSON_SET(doc,'$.age',92),'$._id'"
",JSON_EXTRACT(`doc`,'$._id')) "
"WHERE (JSON_EXTRACT(doc,'$._id') = ?)"
"".format(self.schema_name, collection_name))
self.assertEqual(row[0], expected_sql_text)
self.assertEqual(row[1], modify.exec_counter - 1)
# RemoveStatement
remove = collection.remove("$._id == :id").limit(2)
self.assertTrue(expected_stmt_attrs(remove, True, False, False, 0))
# On the first call should: Crud::Remove (without prepared statement)
remove.bind("id", "1").execute()
self.assertTrue(expected_stmt_attrs(remove, False, False, False, 1))
# On the second call should: Prepare::Prepare + Prepare::Execute
remove.bind("id", "2").execute()
self.assertTrue(expected_stmt_attrs(remove, False, True, True, 2))
# On subsequent calls should: Prepare::Execute
remove.bind("id", "3").execute()
self.assertTrue(expected_stmt_attrs(remove, False, True, True, 3))
row = session.sql(_PREP_STMT_QUERY).execute().fetch_all()[2]
expected_sql_text = ("DELETE FROM `{}`.`{}` "
"WHERE (JSON_EXTRACT(doc,'$._id') = ?) LIMIT ?"
"".format(self.schema_name, collection_name))
self.assertEqual(row[0], expected_sql_text)
self.assertEqual(row[1], remove.exec_counter - 1)
# Using sort() should deallocate the prepared statement:
# Prepare::Deallocate + Crud::Remove
remove.bind("id", "3").sort("_id ASC").execute()
self.assertTrue(expected_stmt_attrs(remove, False, False, False, 1))
# On the second call should: Prepare::Prepare + Prepare::Execute
remove.bind("id", "4").execute()
self.assertTrue(expected_stmt_attrs(remove, False, True, True, 2))
# On subsequent calls should: Prepare::Execute
remove.bind("id", "5").execute()
self.assertTrue(expected_stmt_attrs(remove, False, True, True, 3))
row = session.sql(_PREP_STMT_QUERY).execute().fetch_all()[2]
expected_sql_text = ("DELETE FROM `{}`.`{}` "
"WHERE (JSON_EXTRACT(doc,'$._id') = ?) "
"ORDER BY JSON_EXTRACT(doc,'$._id') LIMIT ?"
"".format(self.schema_name, collection_name))
self.assertEqual(row[0], expected_sql_text)
self.assertEqual(row[1], remove.exec_counter - 1)
schema.drop_collection(collection_name)
session.close()
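The changed/prepared/repeated/exec_counter assertions above follow a simple lifecycle: the first execution sends a plain Crud message, the second prepares the statement and executes it, later executions reuse it, and mutating the statement (sort(), set(), ...) deallocates it and resets the counters. A toy state machine capturing just that lifecycle (it deliberately ignores the limit/offset path, which re-prepares immediately):

```python
class StmtLifecycle:
    """Toy model of the prepared-statement flags asserted above."""

    def __init__(self):
        self.changed = True
        self.prepared = False
        self.repeated = False
        self.exec_counter = 0

    def change(self):
        # e.g. sort()/set(): Prepare::Deallocate, start over
        self.changed = True
        self.prepared = self.repeated = False
        self.exec_counter = 0

    def execute(self):
        if self.changed:
            self.changed = False      # plain Crud::* message
        elif not self.prepared:
            self.prepared = True      # Prepare::Prepare + Prepare::Execute
            self.repeated = True
        self.exec_counter += 1        # else: Prepare::Execute only

stmt = StmtLifecycle()
stmt.execute()  # first run: unprepared
stmt.execute()  # second run: prepared and repeated
```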
@unittest.skipIf(tests.MYSQL_VERSION < (5, 7, 14), "XPlugin not compatible")
class MySQLxTableTests(tests.MySQLxTests):
def setUp(self):
self.connect_kwargs = tests.get_mysqlx_config()
self.schema_name = self.connect_kwargs["schema"]
try:
self.session = mysqlx.get_session(self.connect_kwargs)
except mysqlx.Error as err:
self.fail("{0}".format(err))
self.schema = self.session.get_schema(self.schema_name)
def tearDown(self):
self.session.close()
def test_exists_in_database(self):
table_name = "table_test"
try:
sql = _CREATE_TEST_TABLE_QUERY.format(self.schema_name, table_name)
self.session.sql(sql).execute()
except mysqlx.Error as err:
LOGGER.info("{0}".format(err))
table = self.schema.get_table(table_name)
self.assertTrue(table.exists_in_database())
drop_table(self.schema, table_name)
def test_select(self):
table_name = "{0}.test".format(self.schema_name)
self.session.sql("CREATE TABLE {0}(age INT, name VARCHAR(50))"
"".format(table_name)).execute()
self.session.sql("INSERT INTO {0} VALUES (21, 'Fred')"
"".format(table_name)).execute()
self.session.sql("INSERT INTO {0} VALUES (28, 'Barney')"
"".format(table_name)).execute()
self.session.sql("INSERT INTO {0} VALUES (42, 'Wilma')"
"".format(table_name)).execute()
self.session.sql("INSERT INTO {0} VALUES (67, 'Betty')"
"".format(table_name)).execute()
table = self.schema.get_table("test")
result = table.select().order_by("age DESC").execute()
rows = result.fetch_all()
self.assertEqual(4, len(rows))
self.assertEqual(67, rows[0]["age"])
result = table.select("age").where("age = 42").execute()
self.assertEqual(1, len(result.columns))
rows = result.fetch_all()
self.assertEqual(1, len(rows))
# test flexible params
result = table.select(['age', 'name']).order_by("age DESC").execute()
rows = result.fetch_all()
self.assertEqual(4, len(rows))
# test like operator
result = table.select().where("name like 'B%'").execute()
rows = result.fetch_all()
self.assertEqual(2, len(rows))
# test aggregation functions
result = table.select("sum(age)").execute()
rows = result.fetch_all()
self.assertTrue("sum(age)" == result.columns[0].get_column_name())
self.assertEqual(158, rows[0]["sum(age)"])
# test operators without alias
result = table.select("age + 100").execute()
rows = result.fetch_all()
self.assertTrue("age + 100" == result.columns[0].get_column_name())
# test cast operators
result = table.select("cast(age as binary(10)) as test").execute()
self.assertEqual(result.columns[0].get_type(), mysqlx.ColumnType.BYTES)
result = table.select("cast('1994-12-11' as date) as test").execute()
self.assertEqual(result.columns[0].get_type(), mysqlx.ColumnType.DATE)
result = table.select("cast('1994-12-11:12:00:00' as datetime) as "
"test").execute()
self.assertEqual(result.columns[0].get_type(),
mysqlx.ColumnType.DATETIME)
result = table.select("cast(age as decimal(10, 7)) as test").execute()
self.assertEqual(result.columns[0].get_type(),
mysqlx.ColumnType.DECIMAL)
result = table.select("cast('{\"a\": 24}' as json) as test").execute()
self.assertEqual(result.columns[0].get_type(), mysqlx.ColumnType.JSON)
result = table.select("cast(age as signed) as test").execute()
self.assertEqual(result.columns[0].get_type(), mysqlx.ColumnType.INT)
result = table.select("cast(age as unsigned) as test").execute()
self.assertEqual(result.columns[0].get_type(),
mysqlx.ColumnType.BIGINT)
result = table.select("cast(age as signed integer) as test").execute()
self.assertEqual(result.columns[0].get_type(), mysqlx.ColumnType.INT)
result = table.select("cast(age as unsigned integer) as "
"test").execute()
self.assertEqual(result.columns[0].get_type(),
mysqlx.ColumnType.BIGINT)
result = table.select("cast('12:00:00' as time) as test").execute()
self.assertEqual(result.columns[0].get_type(), mysqlx.ColumnType.TIME)
drop_table(self.schema, "test")
coll = self.schema.create_collection("test")
coll.add(
{"_id": "1", "a": 21},
{"_id": "2", "a": 22},
{"_id": "3", "a": 23},
{"_id": "4", "a": 24}
).execute()
table = self.schema.get_collection_as_table("test")
result = table.select("doc->'$.a' as a").execute()
rows = result.fetch_all()
self.assertEqual("a", result.columns[0].get_column_name())
self.assertEqual(4, len(rows))
self.schema.drop_collection("test")
def test_having(self):
table_name = "{0}.test".format(self.schema_name)
self.session.sql("CREATE TABLE {0}(age INT, name VARCHAR(50), "
"gender CHAR(1))".format(table_name)).execute()
self.session.sql("INSERT INTO {0} VALUES (21, 'Fred', 'M')"
"".format(table_name)).execute()
self.session.sql("INSERT INTO {0} VALUES (28, 'Barney', 'M')"
"".format(table_name)).execute()
self.session.sql("INSERT INTO {0} VALUES (42, 'Wilma', 'F')"
"".format(table_name)).execute()
self.session.sql("INSERT INTO {0} VALUES (67, 'Betty', 'F')"
"".format(table_name)).execute()
table = self.schema.get_table("test")
result = table.select().group_by("gender").order_by("age ASC").execute()
rows = result.fetch_all()
self.assertEqual(2, len(rows))
self.assertEqual(21, rows[0]["age"])
self.assertEqual(42, rows[1]["age"])
result = table.select().group_by("gender").having("gender = 'F'") \
.order_by("age ASC").execute()
rows = result.fetch_all()
self.assertEqual(1, len(rows))
self.assertEqual(42, rows[0]["age"])
# test flexible params
result = table.select().group_by(["gender"]) \
.order_by(["name DESC", "age ASC"]).execute()
rows = result.fetch_all()
self.assertEqual(2, len(rows))
self.assertEqual(42, rows[0]["age"])
self.assertEqual(21, rows[1]["age"])
drop_table(self.schema, "test")
def test_insert(self):
self.session.sql("CREATE TABLE {0}.test(age INT, name "
"VARCHAR(50), gender CHAR(1))"
"".format(self.schema_name)).execute()
table = self.schema.get_table("test")
result = table.insert("age", "name") \
.values(21, 'Fred') \
.values(28, 'Barney') \
.values(42, 'Wilma') \
.values(67, 'Betty').execute()
result = table.select().execute()
rows = result.fetch_all()
self.assertEqual(4, len(rows))
# test flexible params
result = table.insert(["age", "name"]) \
.values([35, 'Eddard']) \
.values(9, 'Arya').execute()
result = table.select().execute()
rows = result.fetch_all()
self.assertEqual(6, len(rows))
# test unicode
table.insert("age", "name").values(1, u"😀").execute()
result = table.select().execute()
rows = result.fetch_all()
self.assertEqual(7, len(rows))
drop_table(self.schema, "test")
def test_update(self):
self.session.sql("CREATE TABLE {0}.test(age INT, name "
"VARCHAR(50), gender CHAR(1), `info` json DEFAULT NULL)"
"".format(self.schema_name)).execute()
table = self.schema.get_table("test")
result = table.insert("age", "name", "info") \
.values(21, 'Fred', {"married": True, "sons": 0}) \
.values(28, 'Barney', {"married": True, "sons": 1}) \
.values(42, 'Wilma', {"married": True, "sons": 0}) \
.values(67, 'Betty', {"married": True, "sons": 1}).execute()
result = table.update().set("age", 25).where("age == 21").execute()
self.assertEqual(1, result.get_affected_items_count())
# Table.update() is not allowed without a condition
result = table.update().set("age", 25)
self.assertRaises(mysqlx.ProgrammingError, result.execute)
# Update with a mysqlx expression
statement = table.update()
statement.set("info", mysqlx.expr("JSON_SET(info, '$.sons', $.sons * 2)"))
result = statement.where("name = 'Barney' or name = 'Betty'").execute()
self.assertEqual(2, result.get_affected_items_count())
statement = table.update()
statement.set("info", mysqlx.expr("JSON_REPLACE(info, '$.married', False)"))
result = statement.where("name = 'Fred' or name = 'Wilma'").execute()
self.assertEqual(2, result.get_affected_items_count())
drop_table(self.schema, "test")
def test_delete(self):
table_name = "table_test"
self.session.sql(_CREATE_TEST_TABLE_QUERY.format(
self.schema_name, table_name)).execute()
self.session.sql(_INSERT_TEST_TABLE_QUERY.format(
self.schema_name, table_name, "1")).execute()
self.session.sql(_INSERT_TEST_TABLE_QUERY.format(
self.schema_name, table_name, "2")).execute()
self.session.sql(_INSERT_TEST_TABLE_QUERY.format(
self.schema_name, table_name, "3")).execute()
table = self.schema.get_table(table_name)
self.assertTrue(table.exists_in_database())
self.assertEqual(table.count(), 3)
table.delete().where("id = 1").execute()
self.assertEqual(table.count(), 2)
# Table.delete() is not allowed without a condition
result = table.delete()
self.assertRaises(mysqlx.ProgrammingError, result.execute)
drop_table(self.schema, table_name)
def test_count(self):
table_name = "table_test"
self.session.sql(_CREATE_TEST_TABLE_QUERY.format(
self.schema_name, table_name)).execute()
self.session.sql(_INSERT_TEST_TABLE_QUERY.format(
self.schema_name, table_name, "1")).execute()
table = self.schema.get_table(table_name)
self.assertTrue(table.exists_in_database())
self.assertEqual(table.count(), 1)
drop_table(self.schema, table_name)
self.assertRaises(mysqlx.OperationalError, table.count)
    def test_results(self):
        table_name = "{0}.test".format(self.schema_name)
        self.session.sql("CREATE TABLE {0}(age INT, name VARCHAR(50))"
                         "".format(table_name)).execute()
        # Test if result has no data
        result = self.session.sql("SELECT age, name FROM {0}"
                                  "".format(table_name)).execute()
        self.assertFalse(result.has_data())
        rows = result.fetch_all()
        self.assertEqual(len(rows), 0)
        # Insert data
        self.session.sql("INSERT INTO {0} VALUES (21, 'Fred')"
                         "".format(table_name)).execute()
        self.session.sql("INSERT INTO {0} VALUES (28, 'Barney')"
                         "".format(table_name)).execute()
        # Test if result has data
        result = self.session.sql("SELECT age, name FROM {0}"
                                  "".format(table_name)).execute()
        self.assertTrue(result.has_data())
        rows = result.fetch_all()
        self.assertEqual(len(rows), 2)
        table = self.schema.get_table("test")
        result = table.select().execute()
        row = result.fetch_one()
        # Test access by column name and index
        self.assertEqual("Fred", row["name"])
        self.assertEqual("Fred", row[1])
        # Test if error is raised with negative indexes and out of bounds
        self.assertRaises(IndexError, row.__getitem__, -1)
        self.assertRaises(IndexError, row.__getitem__, -2)
        self.assertRaises(IndexError, row.__getitem__, -3)
        self.assertRaises(IndexError, row.__getitem__, 3)
        # Test if error is raised with an invalid column name
        self.assertRaises(ValueError, row.__getitem__, "last_name")
        row = result.fetch_one()
        self.assertEqual("Barney", row["name"])
        self.assertEqual("Barney", row[1])
        self.assertEqual(None, result.fetch_one())
        # Test result using column label
        table = self.schema.get_table("test")
        result = table.select("age AS the_age, name AS the_name") \
                      .where("age = 21").execute()
        row = result.fetch_one()
        self.assertEqual(21, row["the_age"])
        self.assertEqual("Fred", row["the_name"])
        drop_table(self.schema, "test")
    def test_multiple_resultsets(self):
        self.session.sql("CREATE PROCEDURE {0}.spProc() BEGIN SELECT 1; "
                         "SELECT 2; SELECT 'a'; END"
                         "".format(self.schema_name)).execute()
        result = self.session.sql("CALL {0}.spProc"
                                  "".format(self.schema_name)).execute()
        rows = result.fetch_all()
        self.assertEqual(1, len(rows))
        self.assertEqual(1, rows[0][0])
        self.assertEqual(True, result.next_result())
        rows = result.fetch_all()
        self.assertEqual(1, len(rows))
        self.assertEqual(2, rows[0][0])
        self.assertEqual(True, result.next_result())
        rows = result.fetch_all()
        self.assertEqual(1, len(rows))
        self.assertEqual("a", rows[0][0])
        self.assertEqual(False, result.next_result())
        self.session.sql("DROP PROCEDURE IF EXISTS {0}.spProc"
                         "".format(self.schema_name)).execute()
    def test_auto_inc_value(self):
        table_name = "{0}.test".format(self.schema_name)
        self.session.sql(
            "CREATE TABLE {0}(id INT KEY AUTO_INCREMENT, name VARCHAR(50))"
            "".format(table_name)).execute()
        result = self.session.sql("INSERT INTO {0} VALUES (NULL, 'Fred')"
                                  "".format(table_name)).execute()
        self.assertEqual(1, result.get_autoincrement_value())
        table = self.schema.get_table("test")
        result2 = table.insert("id", "name").values(None, "Boo").execute()
        self.assertEqual(2, result2.get_autoincrement_value())
        drop_table(self.schema, "test")
    def test_column_metadata(self):
        table_name = "{0}.test".format(self.schema_name)
        self.session.sql(
            "CREATE TABLE {0}(age INT, name VARCHAR(50), pic VARBINARY(100), "
            "config JSON, created DATE, updated DATETIME(6), ts TIMESTAMP(2), "
            "active BIT)".format(table_name)).execute()
        self.session.sql(
            "INSERT INTO {0} VALUES (21, 'Fred', NULL, NULL, '2008-07-26', "
            "'2019-01-19 03:14:07.999999', '2020-01-01 10:10:10+05:30', 0)"
            "".format(table_name)).execute()
        self.session.sql(
            "INSERT INTO {0} VALUES (28, 'Barney', NULL, NULL, '2012-03-12', "
            "'2019-01-19 03:14:07.999999', '2020-01-01 10:10:10+05:30', 0)"
            "".format(table_name)).execute()
        self.session.sql(
            "INSERT INTO {0} VALUES (42, 'Wilma', NULL, NULL, '1975-11-11', "
            "'2019-01-19 03:14:07.999999', '2020-01-01 10:10:10+05:30', 1)"
            "".format(table_name)).execute()
        self.session.sql(
            "INSERT INTO {0} VALUES (67, 'Betty', NULL, NULL, '2015-06-21', "
            "'2019-01-19 03:14:07.999999', '2020-01-01 10:10:10+05:30', 0)"
            "".format(table_name)).execute()
        table = self.schema.get_table("test")
        result = table.select().execute()
        result.fetch_all()
        col = result.columns[0]
        self.assertEqual("age", col.get_column_name())
        self.assertEqual("test", col.get_table_name())
        self.assertEqual(mysqlx.ColumnType.INT, col.get_type())
        col = result.columns[1]
        self.assertEqual("name", col.get_column_name())
        self.assertEqual("test", col.get_table_name())
        self.assertEqual(mysqlx.ColumnType.STRING, col.get_type())
        if tests.MYSQL_VERSION >= (8, 0, 1):
            self.assertEqual("utf8mb4_0900_ai_ci", col.get_collation_name())
            self.assertEqual("utf8mb4", col.get_character_set_name())
        col = result.columns[2]
        self.assertEqual("pic", col.get_column_name())
        self.assertEqual("test", col.get_table_name())
        self.assertEqual("binary", col.get_collation_name())
        self.assertEqual("binary", col.get_character_set_name())
        self.assertEqual(mysqlx.ColumnType.BYTES, col.get_type())
        col = result.columns[3]
        self.assertEqual("config", col.get_column_name())
        self.assertEqual("test", col.get_table_name())
        self.assertEqual(mysqlx.ColumnType.JSON, col.get_type())
        col = result.columns[4]
        self.assertEqual("created", col.get_column_name())
        self.assertEqual("test", col.get_table_name())
        self.assertEqual(mysqlx.ColumnType.DATE, col.get_type())
        col = result.columns[5]
        self.assertEqual("updated", col.get_column_name())
        self.assertEqual("test", col.get_table_name())
        self.assertEqual(mysqlx.ColumnType.DATETIME, col.get_type())
        col = result.columns[6]
        self.assertEqual("ts", col.get_column_name())
        self.assertEqual("test", col.get_table_name())
        self.assertEqual(mysqlx.ColumnType.TIMESTAMP, col.get_type())
        col = result.columns[7]
        self.assertEqual("active", col.get_column_name())
        self.assertEqual("test", col.get_table_name())
        self.assertEqual(mysqlx.ColumnType.BIT, col.get_type())
        self.assertEqual(result.columns, result.get_columns())
        drop_table(self.schema, "test")
    def test_is_view(self):
        table_name = "table_test"
        view_name = "view_test"
        self.session.sql(_CREATE_TEST_TABLE_QUERY.format(
            self.schema_name, table_name)).execute()
        self.session.sql(_INSERT_TEST_TABLE_QUERY.format(
            self.schema_name, table_name, "1")).execute()
        table = self.schema.get_table(table_name)
        self.assertFalse(table.is_view())
        self.session.sql(_CREATE_TEST_VIEW_QUERY.format(
            self.schema_name, view_name,
            self.schema_name, table_name)).execute()
        view = self.schema.get_table(view_name)
        self.assertTrue(view.is_view())
        drop_table(self.schema, table_name)
        drop_view(self.schema, view_name)
    @unittest.skipIf(tests.MYSQL_VERSION < (8, 0, 14),
                     "Prepared statements not supported")
    def test_prepared_statements(self):
        session = mysqlx.get_session(self.connect_kwargs)
        schema = session.get_schema(self.schema_name)
        expected_stmt_attrs = \
            lambda stmt, changed, prepared, repeated, exec_counter: \
            stmt.changed == changed and stmt.prepared == prepared and \
            stmt.repeated == repeated and stmt.exec_counter == exec_counter
        table_name = "prepared_table_test"
        session.sql("CREATE TABLE {}.{}(id INT KEY AUTO_INCREMENT, "
                    "name VARCHAR(50), age INT)"
                    "".format(self.schema_name, table_name)).execute()
        table = schema.get_table(table_name)
        table.insert("name", "age") \
             .values("Fred", 21) \
             .values("Barney", 28) \
             .values("Wilma", 42) \
             .values("Betty", 67) \
             .values("Bob", 75).execute()
        # SelectStatement
        select = table.select().where("age == :age")
        self.assertTrue(expected_stmt_attrs(select, True, False, False, 0))
        # On the first call should: Crud::Select (without prepared statement)
        select.bind("age", 21).execute().fetch_all()
        self.assertTrue(expected_stmt_attrs(select, False, False, False, 1))
        # On the second call should: Prepare::Prepare + Prepare::Execute
        select.bind("age", 28).execute().fetch_all()
        self.assertTrue(expected_stmt_attrs(select, False, True, True, 2))
        # On subsequent calls should: Prepare::Execute
        select.bind("age", 42).execute().fetch_all()
        self.assertTrue(expected_stmt_attrs(select, False, True, True, 3))
        row = session.sql(_PREP_STMT_QUERY).execute().fetch_all()[0]
        expected_sql_text = ("SELECT * FROM `{}`.`{}` "
                             "WHERE (`age` = ?)"
                             "".format(self.schema_name, table_name))
        self.assertEqual(row[0], expected_sql_text)
        self.assertEqual(row[1], select.exec_counter - 1)
        # Using sort() should deallocate the prepared statement:
        # Prepare::Deallocate + Crud::Select
        select.bind("age", 21).order_by("age").execute().fetch_all()
        self.assertTrue(expected_stmt_attrs(select, False, False, False, 1))
        # On the second call should: Prepare::Prepare + Prepare::Execute
        select.bind("age", 42).execute().fetch_all()
        self.assertTrue(expected_stmt_attrs(select, False, True, True, 2))
        # The previous statement should be closed since it had no limit/offset
        # Prepare::Deallocate + Crud::Find
        select.bind("age", 67).limit(1).offset(0).execute().fetch_all()
        self.assertTrue(expected_stmt_attrs(select, False, True, False, 1))
        # On the second call should: Prepare::Prepare + Prepare::Execute
        select.bind("age", 75).limit(1).offset(0).execute().fetch_all()
        self.assertTrue(expected_stmt_attrs(select, False, True, True, 2))
        row = session.sql(_PREP_STMT_QUERY).execute().fetch_all()[0]
        expected_sql_text = ("SELECT * FROM `{}`.`{}` "
                             "WHERE (`age` = ?) ORDER BY `age` LIMIT ?, ?"
                             "".format(self.schema_name, table_name))
        self.assertEqual(row[0], expected_sql_text)
        self.assertEqual(row[1], select.exec_counter)
        # UpdateStatement
        update = table.update().where("id == :id").set("age", 18)
        self.assertTrue(expected_stmt_attrs(update, True, False, False, 0))
        # On the first call should: Crud::Update (without prepared statement)
        update.bind("id", 1).execute()
        self.assertTrue(expected_stmt_attrs(update, False, False, False, 1))
        # On the second call should: Prepare::Prepare + Prepare::Execute
        update.bind("id", 2).execute()
        self.assertTrue(expected_stmt_attrs(update, False, True, True, 2))
        row = session.sql(_PREP_STMT_QUERY).execute().fetch_all()[1]
        expected_sql_text = ("UPDATE `{}`.`{}` SET `age`=18 "
                             "WHERE (`id` = ?)"
                             "".format(self.schema_name, table_name))
        self.assertEqual(row[0], expected_sql_text)
        self.assertEqual(row[1], update.exec_counter - 1)
        # Using set() should deallocate the prepared statement:
        # Prepare::Deallocate + Crud::Update
        update.bind("id", "3").set("age", 92).execute()
        self.assertTrue(expected_stmt_attrs(update, False, False, False, 1))
        # On the second call should: Prepare::Prepare + Prepare::Execute
        update.bind("id", "4").execute()
        self.assertTrue(expected_stmt_attrs(update, False, True, True, 2))
        row = session.sql(_PREP_STMT_QUERY).execute().fetch_all()[1]
        expected_sql_text = ("UPDATE `{}`.`{}` "
                             "SET `age`=92 WHERE (`id` = ?)"
                             "".format(self.schema_name, table_name))
        self.assertEqual(row[0], expected_sql_text)
        self.assertEqual(row[1], update.exec_counter - 1)
        # DeleteStatement
        delete = table.delete().where("id == :id")
        self.assertTrue(expected_stmt_attrs(delete, True, False, False, 0))
        # On the first call should: Crud::Delete (without prepared statement)
        delete.bind("id", 1).execute()
        self.assertTrue(expected_stmt_attrs(delete, False, False, False, 1))
        # On the second call should: Prepare::Prepare + Prepare::Execute
        delete.bind("id", 2).execute()
        self.assertTrue(expected_stmt_attrs(delete, False, True, True, 2))
        # On subsequent calls should: Prepare::Execute
        delete.bind("id", 3).execute()
        self.assertTrue(expected_stmt_attrs(delete, False, True, True, 3))
        row = session.sql(_PREP_STMT_QUERY).execute().fetch_all()[2]
        expected_sql_text = ("DELETE FROM `{}`.`{}` "
                             "WHERE (`id` = ?)"
                             "".format(self.schema_name, table_name))
        self.assertEqual(row[0], expected_sql_text)
        self.assertEqual(row[1], delete.exec_counter - 1)
        # Using sort() should deallocate the prepared statement:
        # Prepare::Deallocate + Crud::Delete
        delete.bind("id", 3).sort("age ASC").execute()
        self.assertTrue(expected_stmt_attrs(delete, False, False, False, 1))
        # On the second call should: Prepare::Prepare + Prepare::Execute
        delete.bind("id", 4).execute()
        self.assertTrue(expected_stmt_attrs(delete, False, True, True, 2))
        # On subsequent calls should: Prepare::Execute
        delete.bind("id", 5).execute()
        self.assertTrue(expected_stmt_attrs(delete, False, True, True, 3))
        row = session.sql(_PREP_STMT_QUERY).execute().fetch_all()[2]
        expected_sql_text = ("DELETE FROM `{}`.`{}` "
                             "WHERE (`id` = ?) ORDER BY `age`"
                             "".format(self.schema_name, table_name))
        self.assertEqual(row[0], expected_sql_text)
        self.assertEqual(row[1], delete.exec_counter - 1)
        drop_table(schema, table_name)
        session.close()
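The `expected_stmt_attrs` lambda in the test above packs four statement-bookkeeping checks into one expression. A standalone sketch of the same pattern, using a stand-in object since no MySQL server is assumed here:

```python
from types import SimpleNamespace

# Mirrors the helper from test_prepared_statements: compare the four
# attributes a statement object carries (changed/prepared/repeated/counter).
expected_stmt_attrs = \
    lambda stmt, changed, prepared, repeated, exec_counter: \
    stmt.changed == changed and stmt.prepared == prepared and \
    stmt.repeated == repeated and stmt.exec_counter == exec_counter

# Stand-in for a freshly built, never-executed statement.
stmt = SimpleNamespace(changed=True, prepared=False, repeated=False,
                       exec_counter=0)
print(expected_stmt_attrs(stmt, True, False, False, 0))   # True
print(expected_stmt_attrs(stmt, False, True, True, 2))    # False
```

The single boolean return lets every test line stay a one-line `assertTrue`, at the cost of not reporting which of the four attributes mismatched.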
@unittest.skipIf(tests.MYSQL_VERSION < (5, 7, 14), "XPlugin not compatible")
class MySQLxViewTests(tests.MySQLxTests):

    def setUp(self):
        self.connect_kwargs = tests.get_mysqlx_config()
        self.schema_name = self.connect_kwargs["schema"]
        self.table_name = "table_test"
        self.view_name = "view_test"
        try:
            self.session = mysqlx.get_session(self.connect_kwargs)
        except mysqlx.Error as err:
            self.fail("{0}".format(err))
        self.schema = self.session.get_schema(self.schema_name)

    def tearDown(self):
        drop_table(self.schema, self.table_name)
        drop_view(self.schema, self.view_name)
        self.session.close()

    def test_exists_in_database(self):
        view = self.schema.get_view(self.view_name)
        self.assertFalse(view.exists_in_database())
        self.session.sql(_CREATE_TEST_TABLE_QUERY.format(
            self.schema_name, self.table_name)).execute()
        defined_as = "SELECT id FROM {0}.{1}".format(self.schema_name,
                                                     self.table_name)
        view = create_view(self.schema, self.view_name, defined_as)
        self.assertTrue(view.exists_in_database())
    def test_select(self):
        table_name = "{0}.{1}".format(self.schema_name, self.table_name)
        self.session.sql("CREATE TABLE {0} (age INT, name VARCHAR(50))"
                         "".format(table_name)).execute()
        self.session.sql("INSERT INTO {0} VALUES (21, 'Fred')"
                         "".format(table_name)).execute()
        self.session.sql("INSERT INTO {0} VALUES (28, 'Barney')"
                         "".format(table_name)).execute()
        self.session.sql("INSERT INTO {0} VALUES (42, 'Wilma')"
                         "".format(table_name)).execute()
        self.session.sql("INSERT INTO {0} VALUES (67, 'Betty')"
                         "".format(table_name)).execute()
        defined_as = "SELECT age, name FROM {0}".format(table_name)
        view = create_view(self.schema, self.view_name, defined_as)
        result = view.select().order_by("age DESC").execute()
        rows = result.fetch_all()
        self.assertEqual(4, len(rows))
        self.assertEqual(67, rows[0]["age"])
        result = view.select("age").where("age = 42").execute()
        self.assertEqual(1, len(result.columns))
        rows = result.fetch_all()
        self.assertEqual(1, len(rows))
        # test flexible params
        result = view.select(['age', 'name']).order_by("age DESC").execute()
        rows = result.fetch_all()
        self.assertEqual(4, len(rows))
    def test_having(self):
        table_name = "{0}.{1}".format(self.schema_name, self.table_name)
        self.session.sql("CREATE TABLE {0} (age INT, name VARCHAR(50), "
                         "gender CHAR(1))".format(table_name)).execute()
        self.session.sql("INSERT INTO {0} VALUES (21, 'Fred', 'M')"
                         "".format(table_name)).execute()
        self.session.sql("INSERT INTO {0} VALUES (28, 'Barney', 'M')"
                         "".format(table_name)).execute()
        self.session.sql("INSERT INTO {0} VALUES (42, 'Wilma', 'F')"
                         "".format(table_name)).execute()
        self.session.sql("INSERT INTO {0} VALUES (67, 'Betty', 'F')"
                         "".format(table_name)).execute()
        defined_as = "SELECT age, name, gender FROM {0}".format(table_name)
        view = create_view(self.schema, self.view_name, defined_as)
        result = view.select().group_by("gender").order_by("age ASC").execute()
        rows = result.fetch_all()
        self.assertEqual(2, len(rows))
        self.assertEqual(21, rows[0]["age"])
        self.assertEqual(42, rows[1]["age"])
        result = view.select().group_by("gender").having("gender = 'F'") \
                     .order_by("age ASC").execute()
        rows = result.fetch_all()
        self.assertEqual(1, len(rows))
        self.assertEqual(42, rows[0]["age"])
        # test flexible params
        result = view.select().group_by(["gender"]) \
                     .order_by(["name DESC", "age ASC"]).execute()
        rows = result.fetch_all()
        self.assertEqual(2, len(rows))
        self.assertEqual(42, rows[0]["age"])
        self.assertEqual(21, rows[1]["age"])
    def test_insert(self):
        table_name = "{0}.{1}".format(self.schema_name, self.table_name)
        self.session.sql("CREATE TABLE {0} (age INT, name VARCHAR(50), "
                         "gender CHAR(1))".format(table_name)).execute()
        defined_as = "SELECT age, name, gender FROM {0}".format(table_name)
        view = create_view(self.schema, self.view_name, defined_as)
        result = view.insert("age", "name").values(21, 'Fred') \
                     .values(28, 'Barney') \
                     .values(42, 'Wilma') \
                     .values(67, 'Betty').execute()
        result = view.select().execute()
        rows = result.fetch_all()
        self.assertEqual(4, len(rows))
        # test flexible params
        result = view.insert(["age", "name"]).values([35, 'Eddard']) \
                     .values(9, 'Arya').execute()
        result = view.select().execute()
        rows = result.fetch_all()
        self.assertEqual(6, len(rows))
    def test_update(self):
        table_name = "{0}.{1}".format(self.schema_name, self.table_name)
        self.session.sql("CREATE TABLE {0} (age INT, name VARCHAR(50), "
                         "gender CHAR(1))".format(table_name)).execute()
        defined_as = "SELECT age, name, gender FROM {0}".format(table_name)
        view = create_view(self.schema, self.view_name, defined_as)
        result = view.insert("age", "name").values(21, 'Fred') \
                     .values(28, 'Barney') \
                     .values(42, 'Wilma') \
                     .values(67, 'Betty').execute()
        result = view.update().set("age", 25).where("age == 21").execute()
        self.assertEqual(1, result.get_affected_items_count())
        # Drop the table created by this test (the original dropped a
        # non-existent "test" table); tearDown() also cleans this up.
        drop_table(self.schema, self.table_name)
    def test_delete(self):
        self.session.sql(_CREATE_TEST_TABLE_QUERY.format(
            self.schema_name, self.table_name)).execute()
        self.session.sql(_INSERT_TEST_TABLE_QUERY.format(
            self.schema_name, self.table_name, "1")).execute()
        defined_as = "SELECT id FROM {0}.{1}".format(self.schema_name,
                                                     self.table_name)
        view = create_view(self.schema, self.view_name, defined_as)
        self.assertEqual(view.count(), 1)
        view.delete().where("id = 1").execute()
        self.assertEqual(view.count(), 0)
    def test_count(self):
        self.session.sql(_CREATE_TEST_TABLE_QUERY.format(
            self.schema_name, self.table_name)).execute()
        self.session.sql(_INSERT_TEST_TABLE_QUERY.format(
            self.schema_name, self.table_name, "1")).execute()
        defined_as = "SELECT id FROM {0}.{1}".format(self.schema_name,
                                                     self.table_name)
        view = create_view(self.schema, self.view_name, defined_as)
        self.assertEqual(view.count(), 1)
        drop_view(self.schema, self.view_name)
        self.assertRaises(mysqlx.OperationalError, view.count)
    def test_results(self):
        table_name = "{0}.{1}".format(self.schema_name, self.table_name)
        self.session.sql("CREATE TABLE {0} (age INT, name VARCHAR(50))"
                         "".format(table_name)).execute()
        self.session.sql("INSERT INTO {0} VALUES (21, 'Fred')"
                         "".format(table_name)).execute()
        self.session.sql("INSERT INTO {0} VALUES (28, 'Barney')"
                         "".format(table_name)).execute()
        defined_as = "SELECT age, name FROM {0}".format(table_name)
        view = create_view(self.schema, self.view_name, defined_as)
        result = view.select().execute()
        self.assertEqual("Fred", result.fetch_one()["name"])
        self.assertEqual("Barney", result.fetch_one()["name"])
        self.assertEqual(None, result.fetch_one())
    def test_auto_inc_value(self):
        table_name = "{0}.{1}".format(self.schema_name, self.table_name)
        self.session.sql("CREATE TABLE {0} (id INT KEY AUTO_INCREMENT, "
                         "name VARCHAR(50))".format(table_name)).execute()
        result = self.session.sql("INSERT INTO {0} VALUES (NULL, 'Fred')"
                                  "".format(table_name)).execute()
        self.assertEqual(1, result.get_autoincrement_value())
        defined_as = "SELECT id, name FROM {0}".format(table_name)
        view = create_view(self.schema, self.view_name, defined_as)
        result2 = view.insert("id", "name").values(None, "Boo").execute()
        self.assertEqual(2, result2.get_autoincrement_value())
    def test_column_metadata(self):
        table_name = "{0}.{1}".format(self.schema_name, self.table_name)
        self.session.sql(
            "CREATE TABLE {0}(age INT, name VARCHAR(50), pic VARBINARY(100), "
            "config JSON, created DATE, active BIT)"
            "".format(table_name)).execute()
        self.session.sql(
            "INSERT INTO {0} VALUES (21, 'Fred', NULL, NULL, '2008-07-26', 0)"
            "".format(table_name)).execute()
        self.session.sql(
            "INSERT INTO {0} VALUES (28, 'Barney', NULL, NULL, '2012-03-12'"
            ", 0)".format(table_name)).execute()
        self.session.sql(
            "INSERT INTO {0} VALUES (42, 'Wilma', NULL, NULL, '1975-11-11', 1)"
            "".format(table_name)).execute()
        self.session.sql(
            "INSERT INTO {0} VALUES (67, 'Betty', NULL, NULL, '2015-06-21', 0)"
            "".format(table_name)).execute()
        defined_as = ("SELECT age, name, pic, config, created, active FROM {0}"
                      "".format(table_name))
        view = create_view(self.schema, self.view_name, defined_as)
        result = view.select().execute()
        result.fetch_all()
        col = result.columns[0]
        self.assertEqual("age", col.get_column_name())
        self.assertEqual(self.view_name, col.get_table_name())
        self.assertEqual(mysqlx.ColumnType.INT, col.get_type())
        col = result.columns[1]
        self.assertEqual("name", col.get_column_name())
        self.assertEqual(self.view_name, col.get_table_name())
        self.assertEqual(mysqlx.ColumnType.STRING, col.get_type())
        col = result.columns[2]
        self.assertEqual("pic", col.get_column_name())
        self.assertEqual(self.view_name, col.get_table_name())
        self.assertEqual("binary", col.get_collation_name())
        self.assertEqual("binary", col.get_character_set_name())
        self.assertEqual(mysqlx.ColumnType.BYTES, col.get_type())
        col = result.columns[3]
        self.assertEqual("config", col.get_column_name())
        self.assertEqual(self.view_name, col.get_table_name())
        self.assertEqual(mysqlx.ColumnType.JSON, col.get_type())
        col = result.columns[5]
        self.assertEqual("active", col.get_column_name())
        self.assertEqual(self.view_name, col.get_table_name())
        self.assertEqual(mysqlx.ColumnType.BIT, col.get_type())
71a102343a0cc4f232f18dd91a4daef6f89c378a | 64 | py | Python | bayesian_changepoint_detection/cy_offline_changepoint_detection.py | ingle/bayesian_changepoint_detection | 5cab09e8f12f834716f074aef9d209b831787474 | ["MIT"] | 505 | 2015-01-13T16:17:58.000Z | 2022-03-29T09:47:38.000Z | bayesian_changepoint_detection/cy_offline_changepoint_detection.py | ingle/bayesian_changepoint_detection | 5cab09e8f12f834716f074aef9d209b831787474 | ["MIT"] | 31 | 2015-05-26T17:45:50.000Z | 2022-03-24T17:28:26.000Z | bayesian_changepoint_detection/cy_offline_changepoint_detection.py | ingle/bayesian_changepoint_detection | 5cab09e8f12f834716f074aef9d209b831787474 | ["MIT"] | 176 | 2015-02-26T17:34:38.000Z | 2022-03-24T04:36:34.000Z |
import pyximport
pyximport.install()
from cy_offline import *
71ae4081ddec8170e59aa423453ec25a2b335198 | 125 | py | Python | news/news/settings/__init__.py | Four-Velocity/developstoday-test | db9989d2955c0463116494f955f8534ff6e94c6d | ["Unlicense"] | null | null | null | news/news/settings/__init__.py | Four-Velocity/developstoday-test | db9989d2955c0463116494f955f8534ff6e94c6d | ["Unlicense"] | 6 | 2021-03-19T04:37:21.000Z | 2021-09-22T19:11:14.000Z | news/news/settings/__init__.py | Four-Velocity/developstoday-test | db9989d2955c0463116494f955f8534ff6e94c6d | ["Unlicense"] | null | null | null |
from .django import *  # All Django related settings
from .third_party import * # Django REST Framework & other 3rd parties
71dd091577ccadb72d81ae178c92ff37a38ea058 | 53 | py | Python | bmw/threaded_random_state_generator.py | qcware/bmw | 761e405587bffe5dc4ca9f79432a79df2c7fd8f8 | ["MIT"] | null | null | null | bmw/threaded_random_state_generator.py | qcware/bmw | 761e405587bffe5dc4ca9f79432a79df2c7fd8f8 | ["MIT"] | null | null | null | bmw/threaded_random_state_generator.py | qcware/bmw | 761e405587bffe5dc4ca9f79432a79df2c7fd8f8 | ["MIT"] | null | null | null |
from .bmw_plugin import ThreadedRandomStateGenerator
71def6cf1e90547725c205075e23dde1f1c07699 | 140 | py | Python | todo/commands/group/__init__.py | tomasdanjonsson/td-cli | 08abf22e991ef3b62c170af67fd77581fa1c1b21 | ["MIT"] | 154 | 2018-09-28T11:05:39.000Z | 2022-03-05T08:22:09.000Z | todo/commands/group/__init__.py | tomasdanjonsson/td-cli | 08abf22e991ef3b62c170af67fd77581fa1c1b21 | ["MIT"] | 18 | 2019-01-14T08:47:30.000Z | 2021-12-10T21:02:58.000Z | todo/commands/group/__init__.py | tomasdanjonsson/td-cli | 08abf22e991ef3b62c170af67fd77581fa1c1b21 | ["MIT"] | 11 | 2018-10-15T12:54:06.000Z | 2022-02-07T13:34:37.000Z |
# flake8: noqa: F401
from .add import Add
from .delete import Delete
from .get import Get
from .list import List
from .preset import Preset
e0854e07cae785649fea285e39f99ad946e2db27 | 143 | py | Python | cancel_token/__init__.py | gsalgado/asyncio-cancel-token | eb16cfbac57f4110c53ebdee248cbd0497b4a466 | ["MIT"] | null | null | null | cancel_token/__init__.py | gsalgado/asyncio-cancel-token | eb16cfbac57f4110c53ebdee248cbd0497b4a466 | ["MIT"] | null | null | null | cancel_token/__init__.py | gsalgado/asyncio-cancel-token | eb16cfbac57f4110c53ebdee248cbd0497b4a466 | ["MIT"] | null | null | null |
from .exceptions import (  # noqa: F401
    OperationCancelled,
    EventLoopMismatch,
)
from .token import (  # noqa: F401
    CancelToken,
)
e0b6cee19865fccafef122362896119f07a048eb | 82 | py | Python | chess_py/game/__init__.py | Aubhro/chess_py | 14bebc2f8c49ae25c59375cc83d0b38d8ff7281d | ["MIT"] | 14 | 2016-07-02T01:54:00.000Z | 2020-12-16T19:26:48.000Z | chess_py/game/__init__.py | Aubhro/chess_py | 14bebc2f8c49ae25c59375cc83d0b38d8ff7281d | ["MIT"] | 18 | 2016-09-01T04:27:49.000Z | 2019-03-29T04:52:03.000Z | chess_py/game/__init__.py | Aubhro/chess_py | 14bebc2f8c49ae25c59375cc83d0b38d8ff7281d | ["MIT"] | 7 | 2016-05-14T20:55:05.000Z | 2020-10-30T05:42:02.000Z |
from .game import Game
from . import game_state
__all__ = ['Game', 'game_state']
e0c03fb501959cebc8d30befaa4f7e2a4d1152d4 | 13,152 | py | Python | PandasVisual.py | cj1205/SimplePandasVisual | bb4c0f49e4d7aeefa532d3d205fb83db17b92da8 | ["Unlicense", "MIT"] | null | null | null | PandasVisual.py | cj1205/SimplePandasVisual | bb4c0f49e4d7aeefa532d3d205fb83db17b92da8 | ["Unlicense", "MIT"] | null | null | null | PandasVisual.py | cj1205/SimplePandasVisual | bb4c0f49e4d7aeefa532d3d205fb83db17b92da8 | ["Unlicense", "MIT"] | null | null | null |
import pandas as pd
import numpy as np
import re
import locale
import abc


class Formatter(object):
    __metaclass__ = abc.ABCMeta

    @abc.abstractmethod
    def __init__(self, *args, **kwargs):
        pass

    @abc.abstractmethod
    def toString(self, *args, **kwargs):
        pass


class FloatFormatter(Formatter):

    def __init__(self, formula):
        self.formula = formula

    def toString(self, value):
        try:
            value = float(value)
            dst = self.formula.format(value)
        except (TypeError, ValueError):
            # Fall back to the plain string when the value is not numeric
            # (a bare except here would also swallow unrelated errors).
            dst = '{0}'.format(str(value))
        return dst


class StringFormatter(Formatter):

    def __init__(self, formula):
        self.formula = formula

    def toString(self, value):
        return self.formula.format(value)
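The two concrete formatters can be exercised in isolation; a minimal sketch with self-contained copies of the classes above (trimmed of the abc scaffolding so it runs on its own):

```python
class FloatFormatter:
    """Formats values as floats, falling back to plain str() on failure."""
    def __init__(self, formula):
        self.formula = formula

    def toString(self, value):
        try:
            return self.formula.format(float(value))
        except (TypeError, ValueError):
            return '{0}'.format(str(value))


class StringFormatter:
    """Applies a plain str.format() formula to any value."""
    def __init__(self, formula):
        self.formula = formula

    def toString(self, value):
        return self.formula.format(value)


pct = FloatFormatter('{0:.2f}%')
print(pct.toString('12.3456'))   # 12.35%
print(pct.toString('n/a'))       # n/a  (float() fails, falls back)
print(StringFormatter('<b>{0}</b>').toString('total'))  # <b>total</b>
```

This is why `PandasVisual` can treat every per-column formatter uniformly through `toString()`, regardless of whether the underlying cell is numeric.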
class PandasVisual(object):
    def __init__(self, df, title):
        self.title = title
        self.html = ""
        self.df = df
        self.label_color = ['#FFED97', '#D3FF93', '#B3D9D9']
        self.sec_label_color = ['#4CAF50', '#938235', '#f24f6a', '#c8d32e',
                                '#c0d16e', '#e0d2ef', '#6ba377']
        self.header_bgd_color = ['#3ddb99', '#66ba9c', '#baa266', '#4cb57d',
                                 '#59b785', '#c19c78', '#3dbbdb', '#2ba1bf',
                                 '#c1ac78', '#118ead', '#66ba8d']
        self.category_bgd_color = ['#b54641', '#bc56b9', '#6575c6']
        self.default_format = StringFormatter('{0}')
        # Default to no per-column formatters so rendering works even when
        # setColumnFormat() is never called.
        self.col_format = {}

    def setColumnFormat(self, columns):
        self.col_format = columns

    def isSingleIndex(self, index):
        return not isinstance(index, pd.MultiIndex)
    def to_html_for_ExtraRow(self, extra_rows, extra_row_format):
        extra_row_html = ""
        if extra_rows and len(extra_rows) > 0:
            row_format = extra_row_format if extra_row_format else self.default_format
            for item in extra_rows:
                # Open the row explicitly; the original only emitted the
                # closing </tr>.
                extra_row_html += "<tr>"
                extra_row_html += """<th style="background-color: #841738;color: white;text-align: center;padding: 8px;">""" + str(item['HEADER']) + "</th>"
                for cell in item['DATA']:
                    # Formatter objects expose toString(), not format().
                    format_value = cell if isinstance(cell, str) else row_format.toString(cell)
                    extra_row_html += """<td style="text-align: left;padding: 8px;"><span style="color: red;font-weight:bold;">""" + str(format_value) + "</span></td>"
                extra_row_html += "</tr>"
        return extra_row_html
    def to_html_for_SingleIndex(self, df, extra_rows, color_dict, extra_row_format):
        html_output = ""
        indexs = df.index
        columns = df.columns
        dataset = df.values
        header = df.index.names + list(df)
        header_html = ""
        if self.isSingleIndex(self.df.columns):
            header_html = "<tr>"
            for item in header:
                header_html += """<th style="background-color: ORANGE;color: black;text-align: center;padding: 8px;">""" + str(item) + "</th>"
            header_html += "</tr>"
        else:
            level_count = len(df.columns.levels)
            header_html = "<tr>"
            for item in df.index.names:
                header_html += """<th style="background-color: ORANGE;color: black;text-align: center;padding: 8px;" rowspan="{0}">{1}</th>""".format(level_count, str(item))
            # NOTE: df.columns.labels is the pandas < 0.24 spelling of what
            # later versions call df.columns.codes.
            label_html = ""
            label_index_i = 0
            label_index_j = 0
            curr_index_count = 0
            first_line_label = df.columns.labels[label_index_i]
            label_count = len(df.columns.labels[label_index_i])
            while label_index_j < label_count:
                if (label_index_j + 1 < label_count) and first_line_label[label_index_j] == first_line_label[label_index_j + 1]:
                    curr_index_count += 1
                else:
                    label_html += """<th style="background-color: {2};color: black;text-align: center;padding: 8px;" colspan="{0}">{1}</th>""".format(curr_index_count + 1, str(df.columns.levels[label_index_i][first_line_label[label_index_j]]), self.category_bgd_color[first_line_label[label_index_j] % 3])
                    curr_index_count = 0
                label_index_j += 1
            header_html += label_html + "</tr>"
            label_index_i += 1
            while label_index_i < len(df.columns.labels):
                label_html = ""
                label_index_j = 0
                curr_index_count = 0
                line_label = df.columns.labels[label_index_i]
                label_count = len(line_label)
                while label_index_j < label_count:
                    if (label_index_j + 1 < label_count) and line_label[label_index_j] == line_label[label_index_j + 1]:
                        curr_index_count += 1
                    else:
                        label_html += """<th style="background-color: {2};color: black;text-align: center;padding: 8px;" colspan="{0}">{1}</th>""".format(curr_index_count + 1, str(df.columns.levels[label_index_i][line_label[label_index_j]]), self.header_bgd_color[line_label[label_index_j] % 11])
                        curr_index_count = 0
                    label_index_j += 1
                header_html += "<tr>" + label_html + "</tr>"
                label_index_i += 1
        body_html = ""
        for row_index in range(0, len(dataset)):
            first_color_index = ""
            body_html += "<tr>" if row_index % 2 == 0 else """<tr style="background-color: #f2f2f2">"""
            body_html += """<th style="background-color: {0};color: blue;text-align: center;padding: 8px;">""".format(self.label_color[row_index % 3]) + "{0}".format(indexs[row_index]) + "</th>"
            first_color_index = indexs[row_index]
            for value_index in range(0, len(dataset[row_index])):
                formatter = self.col_format[value_index] if value_index in self.col_format else self.default_format
                format_value = formatter.toString(dataset[row_index][value_index])
                second_color_index = header[value_index + 1]
                color_value = 'white'
                if color_dict and len(color_dict) > 0:
                    color_value = color_dict[first_color_index + '*' + second_color_index] if first_color_index + '*' + second_color_index in color_dict else 'white'
                body_html += """<td style="background-color: {0};text-align: left;padding: 8px;">""".format(color_value) + str(format_value) + "</td>"
            body_html += "</tr>"
        extra_row_html = ""
        if extra_rows and len(extra_rows) > 0:
            extra_row_html = self.to_html_for_ExtraRow(extra_rows, extra_row_format)
        rtl_html = """<h2>{0}</h2><table border="1" style="border-collapse: collapse;width: 100%;">""".format(self.title) + header_html + body_html + extra_row_html + "</table>"
        return rtl_html
    def to_html_for_MultipleIndex(self, df, extra_rows, color_dict, extra_row_format):
        indexs = df.index
        dataset = df.values
        header = list(df.index.names) + list(df)
        if self.isSingleIndex(df.columns):
            # Flat column index: a single header row.
            header_html = "<tr>"
            for item in header:
                header_html += """<th style="background-color: ORANGE;color: black;text-align: center;padding: 8px;">""" + str(item) + "</th>"
            header_html += "</tr>"
        else:
            # MultiIndex columns: one header row per level (see the note in
            # to_html_for_SingleIndex about the pre-0.24 .labels/.levels API).
            level_count = len(df.columns.levels)
            header_html = "<tr>"
            for item in df.index.names:
                header_html += """<th style="background-color: ORANGE;color: black;text-align: center;padding: 8px;" rowspan="{0}">{1}</th>""".format(level_count, str(item))
            # First level: merge runs of equal labels into colspan cells.
            label_html = ""
            label_index_i = 0
            label_index_j = 0
            curr_index_count = 0
            first_line_label = df.columns.labels[label_index_i]
            label_count = len(first_line_label)
            while label_index_j < label_count:
                if (label_index_j + 1 < label_count) and first_line_label[label_index_j] == first_line_label[label_index_j + 1]:
                    curr_index_count += 1
                else:
                    label_html += """<th style="background-color: {2};color: black;text-align: center;padding: 8px;" colspan="{0}">{1}</th>""".format(curr_index_count + 1, str(df.columns.levels[label_index_i][first_line_label[label_index_j]]), self.category_bgd_color[first_line_label[label_index_j] % 3])
                    curr_index_count = 0
                label_index_j += 1
            header_html += label_html + "</tr>"
            label_index_i += 1
            # Remaining levels use the same run-length merging.
            while label_index_i < len(df.columns.labels):
                label_html = ""
                curr_index_count = 0
                label_index_j = 0
                line_label = df.columns.labels[label_index_i]
                label_count = len(line_label)
                while label_index_j < label_count:
                    if (label_index_j + 1 < label_count) and line_label[label_index_j] == line_label[label_index_j + 1]:
                        curr_index_count += 1
                    else:
                        label_html += """<th style="background-color: {2};color: black;text-align: center;padding: 8px;" colspan="{0}">{1}</th>""".format(curr_index_count + 1, str(df.columns.levels[label_index_i][line_label[label_index_j]]), self.header_bgd_color[line_label[label_index_j] % 11])
                        curr_index_count = 0
                    label_index_j += 1
                header_html += "<tr>" + label_html + "</tr>"
                label_index_i += 1
        body_html = ""
        label_index_len = len(indexs.labels)
        # Row labels such as "01-January" carry a two-digit sort prefix; strip
        # it for display. Assumes `re` is imported at the top of the module.
        header_pattern = re.compile(r'[0-9]{2}-[\w\s]+', re.I)
        for row_index in range(len(dataset)):
            body_html += "<tr>" if row_index % 2 == 0 else """<tr style="background-color: #f2f2f2">"""
            body_html += """<th style="background-color: {0};color: blue;text-align: center;padding: 8px;">""".format(self.label_color[indexs.labels[0][row_index] % 3]) + "{0}".format(indexs.levels[0][indexs.labels[0][row_index]]) + "</th>"
            first_color_index = indexs.levels[0][indexs.labels[0][row_index]] + "*"
            for label_index in range(1, label_index_len):
                index_label_value = indexs.levels[label_index][indexs.labels[label_index][row_index]]
                if header_pattern.match(index_label_value):
                    index_label_value = index_label_value[3:]
                # Color by the current level's label (the original hard-coded
                # level 1 here, which only worked for two index levels).
                body_html += """<th style="background-color: {0};color: blue;text-align: center;padding: 8px;">""".format(self.sec_label_color[hash(indexs.levels[label_index][indexs.labels[label_index][row_index]]) % 7]) + index_label_value + "</th>"
                first_color_index += indexs.levels[label_index][indexs.labels[label_index][row_index]] + "*"
            for value_index in range(len(dataset[row_index])):
                formatter = self.col_format[value_index] if value_index in self.col_format else self.default_format
                format_value = formatter.toString(dataset[row_index][value_index])
                second_color_index = header[value_index + label_index_len]
                color_value = 'white'
                if color_dict:
                    # Keys are "<level 0>*<level 1>*...*<column name>".
                    color_value = color_dict.get(first_color_index + second_color_index, 'white')
                body_html += """<td style="background-color: {0};text-align: left;padding: 8px;">""".format(color_value) + str(format_value) + "</td>"
            body_html += "</tr>"
        extra_row_html = ""
        if extra_rows:
            extra_row_html = self.to_html_for_ExtraRow(extra_rows, extra_row_format)
        rtl_html = """<h2>{0}</h2><table border="1" style="border-collapse: collapse;width: 100%;">""".format(self.title) + header_html + body_html + extra_row_html + "</table>"
        return rtl_html
    def to_html(self, extra_rows=None, color_dict=None, extra_row_format='{:,.0f}'):
        # Avoid a mutable default argument, then dispatch on the index type.
        extra_rows = [] if extra_rows is None else extra_rows
        if self.isSingleIndex(self.df.index):
            self.html = self.to_html_for_SingleIndex(self.df, extra_rows, color_dict, extra_row_format)
        else:
            self.html = self.to_html_for_MultipleIndex(self.df, extra_rows, color_dict, extra_row_format)
        return self.html
if __name__ == '__main__':
    raw_data = {'first_name': ['Tina', 'Jake', 'Tina', 'Jake', 'Amy'],
                'last_name': ['Miller', 'Jacobson', 'Jacobson', 'Milner', 'Milner'],
                'age': [42, 52, 36, 24, 73],
                'preTestScore': [423163, 245778, 31345234, 57978, 6234512],
                'postTestScore': ["25,000", "94,000", 57, 62, 70]}
    df = pd.DataFrame(raw_data, columns=['first_name', 'last_name', 'age', 'preTestScore', 'postTestScore'])
    pv = PandasVisual(df, 'Test Pandas Visual')
    pv.setColumnFormat({0: StringFormatter('{0}'), 1: StringFormatter('{0}'),
                        2: FloatFormatter('{:.2%}'), 3: FloatFormatter('{:,.0f}'),
                        4: FloatFormatter('{:.2%}'), 5: FloatFormatter('{:.2%}')})
    # Open once in text mode: 'w' already truncates the file, and to_html()
    # returns a str (writing a str to a 'wb' handle raises TypeError).
    with open('test_pandas_visual.html', 'w') as file_object:
        file_object.write(pv.to_html()) | 56.689655 | 301 | 0.603634 | 1,695 | 13,152 | 4.39056 | 0.117404 | 0.07928 | 0.047299 | 0.040849 | 0.767804 | 0.746842 | 0.73018 | 0.715937 | 0.703306 | 0.703306 | 0 | 0.02946 | 0.254106 | 13,152 | 232 | 302 | 56.689655 | 0.729154 | 0 | 0 | 0.648402 | 0 | 0.077626 | 0.167186 | 0.047974 | 0 | 0 | 0 | 0 | 0 | 1 | 0.059361 | false | 0.009132 | 0.022831 | 0.009132 | 0.146119 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
e0e6ea6415bdfcc18b520cf677d8beeb293d82cb | 40 | py | Python | scrapli_netconf/transport/plugins/__init__.py | dmulyalin/scrapli_netconf | 7c9e5e74a1afac7955177db759e54d2211637d42 | [
"MIT"
] | 61 | 2020-05-17T19:57:25.000Z | 2022-03-30T01:10:32.000Z | scrapli_netconf/transport/plugins/__init__.py | dmulyalin/scrapli_netconf | 7c9e5e74a1afac7955177db759e54d2211637d42 | [
"MIT"
] | 79 | 2020-05-17T20:22:05.000Z | 2022-03-02T14:37:28.000Z | scrapli_netconf/transport/plugins/__init__.py | dmulyalin/scrapli_netconf | 7c9e5e74a1afac7955177db759e54d2211637d42 | [
"MIT"
] | 6 | 2021-01-07T16:45:28.000Z | 2022-02-11T19:31:49.000Z | """scrapli_netconf.transport.plugins"""
| 20 | 39 | 0.775 | 4 | 40 | 7.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025 | 40 | 1 | 40 | 40 | 0.769231 | 0.825 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
46102b48d554154ead46f29d26de6f371f8fb995 | 207 | py | Python | webserial/errors.py | buckley-w-david/webserial | fe8a47251fe422dd58a150ca991e015033eb2114 | [
"MIT"
] | null | null | null | webserial/errors.py | buckley-w-david/webserial | fe8a47251fe422dd58a150ca991e015033eb2114 | [
"MIT"
] | 1 | 2021-11-17T18:43:35.000Z | 2021-11-18T03:48:31.000Z | webserial/errors.py | buckley-w-david/webserial | fe8a47251fe422dd58a150ca991e015033eb2114 | [
"MIT"
] | null | null | null | class WebserialError(Exception):
    pass


class EqualChapterError(WebserialError):
    pass


class NoChaptersFoundError(WebserialError):
    pass


class LocalAheadOfRemoteError(WebserialError):
    pass
| 13.8 | 46 | 0.777778 | 16 | 207 | 10.0625 | 0.4375 | 0.167702 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.164251 | 207 | 14 | 47 | 14.785714 | 0.930636 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 0 | 1 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
1cb4ebfe7e43ec62765f6de96aaf518a91228d49 | 40 | py | Python | challenge/001-hello-world/python/main.py | oddballo/coding-challenges | e8c90209276f153dd0b9d581bbaf44bc7dbf9fa1 | [
"MIT"
] | 1 | 2020-06-23T03:34:04.000Z | 2020-06-23T03:34:04.000Z | challenge/001-hello-world/python/main.py | oddballo/coding-challenges | e8c90209276f153dd0b9d581bbaf44bc7dbf9fa1 | [
"MIT"
] | null | null | null | challenge/001-hello-world/python/main.py | oddballo/coding-challenges | e8c90209276f153dd0b9d581bbaf44bc7dbf9fa1 | [
"MIT"
] | null | null | null | #!/usr/bin/python
print("Hello World")
| 10 | 20 | 0.675 | 6 | 40 | 4.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 40 | 3 | 21 | 13.333333 | 0.75 | 0.4 | 0 | 0 | 0 | 0 | 0.478261 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
1cd340fb5c99521265a8f004402b60227f5e998a | 2,622 | py | Python | tests/test_teacher_views.py | Thommond/tsct-portal | 726cfcf86a15985093fd9002a2636478a2495e9e | [
"MIT"
] | 2 | 2020-04-16T00:44:44.000Z | 2020-04-21T19:14:30.000Z | tests/test_teacher_views.py | Thommond/tsct-portal | 726cfcf86a15985093fd9002a2636478a2495e9e | [
"MIT"
] | 16 | 2020-04-14T17:41:11.000Z | 2020-10-30T18:42:33.000Z | tests/test_teacher_views.py | Thommond/tsct-portal | 726cfcf86a15985093fd9002a2636478a2495e9e | [
"MIT"
] | 1 | 2020-04-07T18:08:54.000Z | 2020-04-07T18:08:54.000Z | import pytest
from portal.student_views import view_schedule
from .test_courses import login, logout
def test_all_grades(client):
    # Make sure anonymous users cannot access
    assert client.get('/course/216/session/1/all_grades').status_code == 302
    # Login
    rv = login(client, 'teacher@stevenscollege.edu', 'qwerty')
    assert b'Logged in' in rv.data
    # Make sure teacher can not access
    assert client.get('/course/216/session/1/all_grades').status_code == 403
    # Log out of teacher who should not have access
    rv = logout(client)
    assert b'TSCT Portal Login' in rv.data
    rv = login(client, 'teacher2@stevenscollege.edu', 'PASSWORD')
    assert b'Logged in' in rv.data
    # should be able to access
    assert client.get('/course/216/session/1/all_grades').status_code == 200
    assert client.get('/course/180/session/1/all_grades').status_code == 403
    assert client.get('/course/216/session/2/all_grades').status_code == 403
    response = client.get('/course/216/session/1/all_grades')
    # Making sure students grades are displaying
    assert b'bob phillp' in response.data
    assert b'<td>F</td>' in response.data
    rv = logout(client)
    assert b'TSCT Portal Login' in rv.data
def test_assignment_grades(client):
    # Make sure anonymous users cannot access
    assert client.get('/course/180/session/2/assignments/1/grades').status_code == 302
    # login
    rv = login(client, 'teacher2@stevenscollege.edu', 'PASSWORD')
    assert b'Logged in' in rv.data
    # Make sure other teachers don't have access
    assert client.get('/course/180/session/2/assignments/1/grades').status_code == 403
    # Logout of other teacher
    rv = logout(client)
    # A bare bytes literal is always truthy, so the original assertion here
    # could never fail; check the response body like the other tests do.
    assert b'TSCT Portal Login' in rv.data
    # Login as teacher that owns the course
    rv = login(client, 'teacher@stevenscollege.edu', 'qwerty')
    assert b'Logged in' in rv.data
    # Check that you can get the page
    assert client.get('/course/180/session/2/assignments/1/grades').status_code == 200
    assert client.get('/course/216/session/2/assignments/1/grades').status_code == 403
    assert client.get('/course/216/session/1/assignments/1/grades').status_code == 403
    assert client.get('/course/216/session/2/assignments/2/grades').status_code == 403
    response = client.get('/course/180/session/2/assignments/1/grades')
    # Making sure that students' grades and assignment points display on page
    assert b'bob phillp' in response.data
    assert b'<td>A</td>' in response.data
    assert b'<td>24/25</td>' in response.data
    # Log out
    rv = logout(client)
    assert b'TSCT Portal Login' in rv.data
| 43.7 | 86 | 0.708238 | 397 | 2,622 | 4.617128 | 0.219144 | 0.06383 | 0.106383 | 0.126023 | 0.77305 | 0.771413 | 0.757229 | 0.749045 | 0.632297 | 0.599564 | 0 | 0.04514 | 0.172006 | 2,622 | 59 | 87 | 44.440678 | 0.799171 | 0.173532 | 0 | 0.435897 | 0 | 0 | 0.361524 | 0.275093 | 0 | 0 | 0 | 0 | 0.615385 | 1 | 0.051282 | false | 0.051282 | 0.076923 | 0 | 0.128205 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
1cd46eb55d48a228bc349cb6c143598bc701664e | 9,891 | py | Python | python/latex2wolfram/lextab.py | rafaellc28/Latex2Wolfram | ac133872a7fb3a884e52df07c45db9bd9188adf7 | [
"MIT"
] | 2 | 2019-09-24T21:00:57.000Z | 2021-07-19T21:24:09.000Z | python/latex2wolfram/lextab.py | rafaellc28/Latex2Wolfram | ac133872a7fb3a884e52df07c45db9bd9188adf7 | [
"MIT"
] | 1 | 2019-09-24T20:54:08.000Z | 2019-09-24T20:54:08.000Z | python/latex2wolfram/lextab.py | rafaellc28/Latex2Wolfram | ac133872a7fb3a884e52df07c45db9bd9188adf7 | [
"MIT"
] | null | null | null | # lextab.py. This file automatically created by PLY (version 3.10). Don't edit!
_tabversion = '3.10'
_lextokens = set(('LFLOOR', 'INTEGERSETWITHTWOLIMITS', 'LESS', 'LBRACKET', 'SUBSTR', 'EMPTYSET', 'EXP', 'INTER', 'MINUS', 'RPAREN', 'MIN', 'PLUS', 'LCEIL', 'GT', 'RBRACE', 'RRBRACE', 'GE', 'SYMDIFF', 'WHERE', 'REALSETNEGATIVE', 'MAX', 'LOGICAL', 'LENGTH', 'OR', 'CARET', 'UNIFORM01', 'NORMAL', 'UNIFORM', 'DIFF', 'RCEIL', 'ROUND', 'SLASHES', 'IRAND224', 'MINIMIZE', 'NATURALSETWITHTWOLIMITS', 'SIN', 'COLON', 'TRUNC', 'DOTS', 'DIMEN', 'DIVIDE', 'FOR', 'UNION', 'INTEGERSETPOSITIVE', 'SYMBOLIC', 'AMPERSAND', 'EQ', 'AND', 'LBRACE', 'INTEGERSET', 'ARCTAN', 'NOT', 'NOTSUBSET', 'PROD', 'TIME2STR', 'MOD', 'COS', 'DEFAULT', 'SUM', 'CROSS', 'SETOF', 'REALSETWITHTWOLIMITS', 'LLBRACE', 'NEQ', 'REALSETWITHONELIMIT', 'SEMICOLON', 'QUESTION_MARK', 'VARIABLES', 'PIPE', 'SETS', 'BINARYSET', 'UNDERLINE', 'NEXISTS', 'GMTIME', 'FORALL', 'REALSETPOSITIVE', 'MAXIMIZE', 'BY', 'CARD', 'INTEGERSETWITHONELIMIT', 'STR2TIME', 'INTEGERSETNEGATIVE', 'SUBSET', 'LOG', 'NFORALL', 'NUMBER', 'RFLOOR', 'REALSET', 'LE', 'PARAMETERS', 'LN', 'LT', 'COMMA', 'QUOTIENT', 'NORMAL01', 'INFINITY', 'STRING', 'EXISTS', 'NATURALSETWITHONELIMIT', 'TIMES', 'NOTIN', 'LPAREN', 'IN', 'ID', 'SQRT', 'RBRACKET', 'NATURALSET'))
_lexreflags = 64
_lexliterals = ''
_lexstateinfo = {'INITIAL': 'inclusive'}
_lexstatere = {'INITIAL': [('(?P<t_STRING>"(?:[^\\\\]|\\\\.)*?(?:"|$)|\\\'(?:[^\\\\]|\\\\.)*?(?:\\\'|$))|(?P<t_DOTS>\\\\cdots|\\\\ldots|\\\\dots|\\.\\.\\.)|(?P<t_EMPTYSET>\\\\emptyset|\\\\varnothing)|(?P<t_SLASHES>//)|(?P<t_INFINITY>\\\\infty)|(?P<t_COMMENT>\\%[^\\n]*)|(?P<t_MOD>\\\\text\\{\\s*\\%\\s*\\}|\\\\mod|\\\\bmod)|(?P<t_BY>\\\\text\\{\\s*by\\s*\\})|(?P<t_QUOTIENT>\\\\big/|\\\\text\\{\\s*div\\s*\\})|(?P<t_TIMES>\\*|\\\\cdot|\\\\ast)|(?P<t_DIVIDE>/|\\\\div)|(?P<t_LESS>\\\\text\\{\\s*less\\s*\\})|(?P<t_FOR>\\\\text\\{\\s*[fF][oO][rR]\\s*\\})|(?P<t_WHERE>\\\\text\\{\\s*[wW][hH][eE][rR][eE]\\s*\\})|(?P<t_OR>\\\\lor|\\\\vee|\\\\text\\{\\s*or\\s*\\})|(?P<t_AND>\\\\land|\\\\wedge|\\\\text\\{\\s*and\\s*\\})|(?P<t_NOT>\\\\neg|!|\\\\text\\{\\s*not\\s*})|(?P<t_FORALL>\\\\forall)|(?P<t_NFORALL>\\\\not\\\\forall)|(?P<t_EXISTS>\\\\exists)|(?P<t_NEXISTS>\\\\nexists|\\\\not\\\\exists)|(?P<t_DEFAULT>\\\\text\\{\\s*default\\s*\\})|(?P<t_DIMEN>\\\\text\\{\\s*dimen\\s*\\})|(?P<t_SETOF>\\\\text\\{\\s*setof\\s*\\})|(?P<t_PARAMETERS>\\\\mathbb{P}|\\\\mathbb{Param}|\\\\mathbb{Params}|\\\\mathbb{Parameter}|\\\\mathbb{Parameters})|(?P<t_SETS>\\\\mathbb{Set}|\\\\mathbb{Sets})|(?P<t_VARIABLES>\\\\mathbb{V}|\\\\mathbb{Var}|\\\\mathbb{Variable}|\\\\mathbb{Vars}|\\\\mathbb{Variables})|(?P<t_BINARYSET>\\\\mathbb{B})|(?P<t_INTEGERSETWITHTWOLIMITS>\\\\mathbb{Z}[_\\^]{([><]|\\\\geq|\\\\leq)\\s*([-+]?[0-9]*\\.?[0-9]+)([eE][-+]?[0-9]+)?\\s*,\\s*([><]|\\\\geq|\\\\leq)\\s*([-+]?[0-9]*\\.?[0-9]+)([eE][-+]?[0-9]+)?})|(?P<t_INTEGERSETWITHONELIMIT>\\\\mathbb{Z}[_\\^]{([><]|\\\\geq|\\\\leq)\\s*([-+]?[0-9]*\\.?[0-9]+)([eE][-+]?[0-9]+)?})|(?P<t_INTEGERSETPOSITIVE>\\\\mathbb{Z}[_\\^]{\\+})|(?P<t_INTEGERSETNEGATIVE>\\\\mathbb{Z}[_\\^]{-})|(?P<t_INTEGERSET>\\\\mathbb{Z})|(?P<t_REALSETWITHTWOLIMITS>\\\\mathbb{R}[_\\^]{([><]|\\\\geq|\\\\leq)\\s*([-+]?[0-9]*\\.?[0-9]+)([eE][-+]?[0-9]+)?\\s*,\\s*([><]|\\\\geq|\\\\leq)\\s*([-+]?[0-9]*\\.?[0-9]+)([eE][-+]?[0-9]+)?})|(?P<t_REALSETWITHONELIMIT>\\\\mathbb{R}[_\\^]{(
[><]|\\\\geq|\\\\leq)\\s*([-+]?[0-9]*\\.?[0-9]+)([eE][-+]?[0-9]+)?})|(?P<t_REALSETPOSITIVE>\\\\mathbb{R}[_\\^]{\\+})|(?P<t_REALSETNEGATIVE>\\\\mathbb{R}[_\\^]{-})|(?P<t_REALSET>\\\\mathbb{R})|(?P<t_NATURALSETWITHTWOLIMITS>\\\\mathbb{N}[_\\^]{([><]|\\\\geq|\\\\leq)\\s*([-+]?[0-9]*\\.?[0-9]+)([eE][-+]?[0-9]+)?\\s*,\\s*([><]|\\\\geq|\\\\leq)\\s*([-+]?[0-9]*\\.?[0-9]+)([eE][-+]?[0-9]+)?})|(?P<t_NATURALSETWITHONELIMIT>\\\\mathbb{N}[_\\^]{([><]|\\\\geq|\\\\leq)\\s*([-+]?[0-9]*\\.?[0-9]+)([eE][-+]?[0-9]+)?})|(?P<t_NATURALSET>\\\\mathbb{N})|(?P<t_SYMBOLIC>\\\\mathbb{S})|(?P<t_LOGICAL>\\\\mathbb{L})|(?P<t_SUBSET>\\\\subseteq|\\\\subset)|(?P<t_NOTSUBSET>\\\\not\\\\subseteq|\\\\not\\\\subset)|(?P<t_MAXIMIZE>\\\\text\\{\\s*maximize:\\s*\\}|maximize\\:|\\\\text\\{\\s*maximize\\s*\\}|maximize)|(?P<t_MINIMIZE>\\\\text\\{\\s*minimize:\\s*\\}|minimize:|\\\\text\\{\\s*minimize\\s*\\}|minimize)|(?P<t_ignore_SUBJECTTO>\\\\text\\{\\s*subject\\sto:\\s*\\}|\\\\text\\{\\s*subj\\.to:\\s*\\}|\\\\text\\{\\s*s\\.t\\.:\\s*\\}|subject\\sto:\\s*|subj\\.to:\\s*|s\\.t\\.:\\s*|\\\\text\\{\\s*subject\\sto\\s*\\}|\\\\text\\{\\s*subj\\.to\\s*\\}|\\\\text\\{\\s*s\\.t\\.\\s*\\}|subject\\sto\\s*|subj\\.to\\s*|s\\.t\\.\\s*)|(?P<t_LLBRACE>\\\\\\{)|(?P<t_RRBRACE>\\\\\\})|(?P<t_LBRACE>\\{)|(?P<t_RBRACE>\\})|(?P<t_LBRACKET>\\[|\\\\\\[)|(?P<t_RBRACKET>\\]|\\\\\\])|(?P<t_PIPE>\\\\mid|\\\\vert|\\|)|(?P<t_ignore_LIMITS>\\\\limits)|(?P<t_ignore_BEGIN>\\\\begin\\{[a-zA-Z][a-zA-Z0-9]*[\\*]?\\}[\\{\\[][a-zA-Z0-9][a-zA-Z0-9]*[\\*]?[\\}\\]]|\\\\begin\\{[a-zA-Z][a-zA-Z0-9]*[\\*]?\\})', [None, ('t_STRING', 'STRING'), ('t_DOTS', 'DOTS'), ('t_EMPTYSET', 'EMPTYSET'), ('t_SLASHES', 'SLASHES'), ('t_INFINITY', 'INFINITY'), ('t_COMMENT', 'COMMENT'), ('t_MOD', 'MOD'), ('t_BY', 'BY'), ('t_QUOTIENT', 'QUOTIENT'), ('t_TIMES', 'TIMES'), ('t_DIVIDE', 'DIVIDE'), ('t_LESS', 'LESS'), ('t_FOR', 'FOR'), ('t_WHERE', 'WHERE'), ('t_OR', 'OR'), ('t_AND', 'AND'), ('t_NOT', 'NOT'), ('t_FORALL', 'FORALL'), ('t_NFORALL', 'NFORALL'), ('t_EXISTS', 
'EXISTS'), ('t_NEXISTS', 'NEXISTS'), ('t_DEFAULT', 'DEFAULT'), ('t_DIMEN', 'DIMEN'), ('t_SETOF', 'SETOF'), ('t_PARAMETERS', 'PARAMETERS'), ('t_SETS', 'SETS'), ('t_VARIABLES', 'VARIABLES'), ('t_BINARYSET', 'BINARYSET'), ('t_INTEGERSETWITHTWOLIMITS', 'INTEGERSETWITHTWOLIMITS'), None, None, None, None, None, None, ('t_INTEGERSETWITHONELIMIT', 'INTEGERSETWITHONELIMIT'), None, None, None, ('t_INTEGERSETPOSITIVE', 'INTEGERSETPOSITIVE'), ('t_INTEGERSETNEGATIVE', 'INTEGERSETNEGATIVE'), ('t_INTEGERSET', 'INTEGERSET'), ('t_REALSETWITHTWOLIMITS', 'REALSETWITHTWOLIMITS'), None, None, None, None, None, None, ('t_REALSETWITHONELIMIT', 'REALSETWITHONELIMIT'), None, None, None, ('t_REALSETPOSITIVE', 'REALSETPOSITIVE'), ('t_REALSETNEGATIVE', 'REALSETNEGATIVE'), ('t_REALSET', 'REALSET'), ('t_NATURALSETWITHTWOLIMITS', 'NATURALSETWITHTWOLIMITS'), None, None, None, None, None, None, ('t_NATURALSETWITHONELIMIT', 'NATURALSETWITHONELIMIT'), None, None, None, ('t_NATURALSET', 'NATURALSET'), ('t_SYMBOLIC', 'SYMBOLIC'), ('t_LOGICAL', 'LOGICAL'), ('t_SUBSET', 'SUBSET'), ('t_NOTSUBSET', 'NOTSUBSET'), ('t_MAXIMIZE', 'MAXIMIZE'), ('t_MINIMIZE', 'MINIMIZE'), ('t_ignore_SUBJECTTO', 'ignore_SUBJECTTO'), ('t_LLBRACE', 'LLBRACE'), ('t_RRBRACE', 'RRBRACE'), ('t_LBRACE', 'LBRACE'), ('t_RBRACE', 'RBRACE'), ('t_LBRACKET', 'LBRACKET'), ('t_RBRACKET', 'RBRACKET'), ('t_PIPE', 'PIPE'), ('t_ignore_LIMITS', 'ignore_LIMITS'), ('t_ignore_BEGIN', 'ignore_BEGIN')]), 
('(?P<t_ignore_END>\\\\end\\{[a-zA-Z][a-zA-Z0-9]*[\\*]?\\}[\\{\\[][a-zA-Z0-9][a-zA-Z0-9]*[\\*]?[\\}\\]]|\\\\end\\{[a-zA-Z][a-zA-Z0-9]*[\\*]?\\})|(?P<t_ignore_BEGIN_EQUATION>\\\\begin\\{equation\\})|(?P<t_ignore_END_EQUATION>\\\\end\\{equation\\})|(?P<t_ignore_BEGIN_SPLIT>\\\\begin\\{split\\})|(?P<t_ignore_END_SPLIT>\\\\end\\{split\\})|(?P<t_ignore_DISPLAYSTYLE>\\\\displaystyle)|(?P<t_ignore_QUAD>\\\\quad)|(?P<t_ignore_MATHCLAP>\\\\mathclap)|(?P<t_ignore_TEXT>\\\\text\\{\\s*\\}|\\\\text)|(?P<t_ignored_LEFT>\\\\left)|(?P<t_ignored_RIGHT>\\\\right)|(?P<t_COLON>:)|(?P<t_AMPERSAND>\\\\&)|(?P<t_ignore_AMP>&)|(?P<t_ignore_BACKSLASHES>\\\\\\\\)|(?P<t_ignore_N>\\n)|(?P<t_ignore_R>\\r)|(?P<t_DIFF>\\\\setminus)|(?P<t_SYMDIFF>\\\\triangle|\\\\ominus|\\\\oplus)|(?P<t_UNION>\\\\cup|\\\\bigcup)|(?P<t_INTER>\\\\cap|\\\\bigcap)|(?P<t_CROSS>\\\\times)|(?P<t_ID>(\\\\_)*[a-zA-Z]((\\\\_)*[a-zA-Z0-9]*)*)|(?P<t_NUMBER>[0-9]*\\.?[0-9]+([eE][-+]?[0-9]+)?)|(?P<t_newline>\\n+)|(?P<t_ARCTAN>\\\\tan\\^\\{-1\\}|\\\\arctan)|(?P<t_RFLOOR>\\\\rfloor)|(?P<t_LFLOOR>\\\\lfloor)|(?P<t_LCEIL>\\\\lceil)|(?P<t_NOTIN>\\\\notin)|(?P<t_RCEIL>\\\\rceil)|(?P<t_PROD>\\\\prod)|(?P<t_SQRT>\\\\sqrt)|(?P<t_LE>\\\\leq)|(?P<t_SUM>\\\\sum)|(?P<t_NEQ>\\\\neq)|(?P<t_COS>\\\\cos)|(?P<t_GE>\\\\geq)|(?P<t_SIN>\\\\sin)|(?P<t_EXP>\\\\exp)|(?P<t_LOG>\\\\log)|(?P<t_MIN>\\\\min)|(?P<t_MAX>\\\\max)|(?P<t_LN>\\\\ln)|(?P<t_IN>\\\\in)|(?P<t_QUESTION_MARK>\\?)|(?P<t_PLUS>\\+)|(?P<t_RPAREN>\\))|(?P<t_LPAREN>\\()|(?P<t_CARET>\\^)|(?P<t_UNDERLINE>_)|(?P<t_SEMICOLON>;)|(?P<t_LT><)|(?P<t_COMMA>,)|(?P<t_EQ>=)|(?P<t_MINUS>-)|(?P<t_GT>>)', [None, ('t_ignore_END', 'ignore_END'), ('t_ignore_BEGIN_EQUATION', 'ignore_BEGIN_EQUATION'), ('t_ignore_END_EQUATION', 'ignore_END_EQUATION'), ('t_ignore_BEGIN_SPLIT', 'ignore_BEGIN_SPLIT'), ('t_ignore_END_SPLIT', 'ignore_END_SPLIT'), ('t_ignore_DISPLAYSTYLE', 'ignore_DISPLAYSTYLE'), ('t_ignore_QUAD', 'ignore_QUAD'), ('t_ignore_MATHCLAP', 'ignore_MATHCLAP'), ('t_ignore_TEXT', 'ignore_TEXT'), 
('t_ignored_LEFT', 'ignored_LEFT'), ('t_ignored_RIGHT', 'ignored_RIGHT'), ('t_COLON', 'COLON'), ('t_AMPERSAND', 'AMPERSAND'), ('t_ignore_AMP', 'ignore_AMP'), ('t_ignore_BACKSLASHES', 'ignore_BACKSLASHES'), ('t_ignore_N', 'ignore_N'), ('t_ignore_R', 'ignore_R'), ('t_DIFF', 'DIFF'), ('t_SYMDIFF', 'SYMDIFF'), ('t_UNION', 'UNION'), ('t_INTER', 'INTER'), ('t_CROSS', 'CROSS'), ('t_ID', 'ID'), None, None, None, ('t_NUMBER', 'NUMBER'), None, ('t_newline', 'newline'), (None, 'ARCTAN'), (None, 'RFLOOR'), (None, 'LFLOOR'), (None, 'LCEIL'), (None, 'NOTIN'), (None, 'RCEIL'), (None, 'PROD'), (None, 'SQRT'), (None, 'LE'), (None, 'SUM'), (None, 'NEQ'), (None, 'COS'), (None, 'GE'), (None, 'SIN'), (None, 'EXP'), (None, 'LOG'), (None, 'MIN'), (None, 'MAX'), (None, 'LN'), (None, 'IN'), (None, 'QUESTION_MARK'), (None, 'PLUS'), (None, 'RPAREN'), (None, 'LPAREN'), (None, 'CARET'), (None, 'UNDERLINE'), (None, 'SEMICOLON'), (None, 'LT'), (None, 'COMMA'), (None, 'EQ'), (None, 'MINUS'), (None, 'GT')])]}
_lexstateignore = {'INITIAL': ' \t'}
_lexstateerrorf = {'INITIAL': 't_error'}
_lexstateeoff = {}
| 899.181818 | 8,420 | 0.560611 | 1,315 | 9,891 | 3.997719 | 0.161977 | 0.043371 | 0.024348 | 0.007609 | 0.112612 | 0.112612 | 0.092829 | 0.078562 | 0.073616 | 0.067149 | 0 | 0.010138 | 0.042665 | 9,891 | 10 | 8,421 | 989.1 | 0.545042 | 0.007785 | 0 | 0 | 0 | 0.222222 | 0.788626 | 0.559723 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
1cddd53fe0e268b350ce5f1caa9f02559033193e | 82 | py | Python | notes/__init__.py | NTsystems/NoTes-API | 310cd45f5d1fad25034a274bc497504259c69907 | [
"MIT"
] | null | null | null | notes/__init__.py | NTsystems/NoTes-API | 310cd45f5d1fad25034a274bc497504259c69907 | [
"MIT"
] | null | null | null | notes/__init__.py | NTsystems/NoTes-API | 310cd45f5d1fad25034a274bc497504259c69907 | [
"MIT"
] | null | null | null | from __future__ import absolute_import
from .celeryconf import app as celery_app | 20.5 | 41 | 0.853659 | 12 | 82 | 5.333333 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.134146 | 82 | 4 | 41 | 20.5 | 0.901408 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
1cf718aebed74dcc936e305deb1490ffa0834ec2 | 1,210 | py | Python | gslib/third_party/storage_apitools/credentials_lib.py | maxshine/gsutil | c81d67f2286402accfcdf79f0199844949bebefc | [
"Apache-2.0"
] | 2,151 | 2020-04-18T07:31:17.000Z | 2022-03-31T08:39:18.000Z | gslib/third_party/storage_apitools/credentials_lib.py | maxshine/gsutil | c81d67f2286402accfcdf79f0199844949bebefc | [
"Apache-2.0"
] | 4,640 | 2015-07-08T16:19:08.000Z | 2019-12-02T15:01:27.000Z | gslib/third_party/storage_apitools/credentials_lib.py | maxshine/gsutil | c81d67f2286402accfcdf79f0199844949bebefc | [
"Apache-2.0"
] | 698 | 2015-06-02T19:18:35.000Z | 2022-03-29T16:57:15.000Z | # -*- coding: utf-8 -*-
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Shim for backwards-compatibility for moving GCE credentials.
oauth2client loads credentials classes based on the module name where
they were created; this means that moving GceAssertionCredentials from
here to third_party requires a shim mapping the old name to the new
one. Once loaded, the credential will be re-serialized with the new
path, meaning that we can (at some point) consider removing this file.
"""
# TODO: Remove this module once this change has been around long
# enough that old credentials are likely to be rare.
from apitools.base.py import GceAssertionCredentials
| 43.214286 | 74 | 0.777686 | 187 | 1,210 | 5.026738 | 0.679144 | 0.06383 | 0.02766 | 0.034043 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009891 | 0.164463 | 1,210 | 27 | 75 | 44.814815 | 0.919881 | 0.922314 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.037037 | 1 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
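The shim pattern this module's docstring describes — re-exporting a class at its historical import path so that path-based deserialization still resolves — can be sketched as follows. All module and class names below (`new_creds`, `old_creds`, `MyCredentials`, `resolve`) are hypothetical stand-ins for illustration, not the real oauth2client/apitools names:

```python
import importlib
import sys
import types

# Hypothetical "new" home of the credentials class after the move.
new_mod = types.ModuleType("new_creds")

class MyCredentials:  # stand-in for a moved credentials class
    pass

new_mod.MyCredentials = MyCredentials
sys.modules["new_creds"] = new_mod

# Hypothetical shim at the old location: it only re-exports the class, just
# as this module re-exports GceAssertionCredentials from apitools.
old_mod = types.ModuleType("old_creds")
old_mod.MyCredentials = MyCredentials
sys.modules["old_creds"] = old_mod

def resolve(path):
    """Load a class from a serialized 'module.ClassName' path."""
    mod_name, cls_name = path.rsplit(".", 1)
    return getattr(importlib.import_module(mod_name), cls_name)

# A credential serialized under the old path still resolves to the same class.
assert resolve("old_creds.MyCredentials") is resolve("new_creds.MyCredentials")
```

Once a loaded credential is re-serialized, it records the new path, which is why such a shim can eventually be removed.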
1c14963dd18987712245a7db5fc92c626bbd5513 | 252 | py | Python | src/python/WMCore/WMBS/Oracle/Files/DeleteParentCheck.py | hufnagel/WMCore | b150cc725b68fc1cf8e6e0fa07c826226a4421fa | [
"Apache-2.0"
] | 21 | 2015-11-19T16:18:45.000Z | 2021-12-02T18:20:39.000Z | src/python/WMCore/WMBS/Oracle/Files/DeleteParentCheck.py | hufnagel/WMCore | b150cc725b68fc1cf8e6e0fa07c826226a4421fa | [
"Apache-2.0"
] | 5,671 | 2015-01-06T14:38:52.000Z | 2022-03-31T22:11:14.000Z | src/python/WMCore/WMBS/Oracle/Files/DeleteParentCheck.py | hufnagel/WMCore | b150cc725b68fc1cf8e6e0fa07c826226a4421fa | [
"Apache-2.0"
] | 67 | 2015-01-21T15:55:38.000Z | 2022-02-03T19:53:13.000Z | #!/usr/bin/env python
"""
_DeleteParentCheck_
Oracle implementation of DeleteParentCheck
"""
from WMCore.WMBS.MySQL.Files.DeleteParentCheck import DeleteParentCheck as MySQLDeleteParentCheck
class DeleteParentCheck(MySQLDeleteParentCheck):
    pass
| 21 | 97 | 0.829365 | 23 | 252 | 9 | 0.782609 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.099206 | 252 | 11 | 98 | 22.909091 | 0.911894 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 5 |
1c400f0eed5bedcf15ac83b8b0358c7c54ae6b43 | 21 | py | Python | salad/__init__.py | Work4Labs/salad | 176869a4437103d501feb3035beaf162c2507435 | [
"BSD-3-Clause"
] | null | null | null | salad/__init__.py | Work4Labs/salad | 176869a4437103d501feb3035beaf162c2507435 | [
"BSD-3-Clause"
] | null | null | null | salad/__init__.py | Work4Labs/salad | 176869a4437103d501feb3035beaf162c2507435 | [
"BSD-3-Clause"
] | null | null | null | VERSION = "0.4.14.2"
| 10.5 | 20 | 0.571429 | 5 | 21 | 2.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.277778 | 0.142857 | 21 | 1 | 21 | 21 | 0.388889 | 0 | 0 | 0 | 0 | 0 | 0.380952 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
1c4376331213d71be83752613cbfb6ce878134b3 | 143 | py | Python | problems/0025/compute.py | Dynortice/Project-Euler | 99a0201b5d5f147eab77fc52d9db8995045cded0 | [
"MIT"
] | null | null | null | problems/0025/compute.py | Dynortice/Project-Euler | 99a0201b5d5f147eab77fc52d9db8995045cded0 | [
"MIT"
] | null | null | null | problems/0025/compute.py | Dynortice/Project-Euler | 99a0201b5d5f147eab77fc52d9db8995045cded0 | [
"MIT"
] | null | null | null | from math import ceil, log, sqrt
def compute(n: int) -> int:
    return int(ceil((log(10) * (n - 1) + log(5) / 2) / log((1 + sqrt(5)) / 2)))
| 23.833333 | 79 | 0.538462 | 26 | 143 | 2.961538 | 0.576923 | 0.181818 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.072727 | 0.230769 | 143 | 5 | 80 | 28.6 | 0.627273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 5 |
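The one-liner above follows from Binet's approximation F(k) ≈ φ^k/√5: requiring φ^k/√5 ≥ 10^(n−1) and solving for k yields exactly the expression in `compute`. A sketch that cross-checks it against the sequence itself (the brute-force checker `first_fib_index_with_digits` is added here for illustration and is not part of the original solution):

```python
from math import ceil, log, sqrt

def compute(n: int) -> int:
    # Smallest k with F(k) >= 10**(n - 1), i.e. the index of the first
    # Fibonacci number with n decimal digits, via Binet's approximation.
    return int(ceil((log(10) * (n - 1) + log(5) / 2) / log((1 + sqrt(5)) / 2)))

def first_fib_index_with_digits(n: int) -> int:
    # Walk the sequence directly (1-indexed: F1 = F2 = 1) as a cross-check.
    a, b, k = 1, 1, 2
    while len(str(b)) < n:
        a, b, k = b, a + b, k + 1
    return k

assert all(compute(n) == first_fib_index_with_digits(n) for n in range(2, 40))
print(compute(1000))  # Project Euler 25: first term with 1000 digits -> 4782
```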
1c58fd21f2b6006a9997bee48a88075f5e1ed7af | 810 | py | Python | sure_tosca-client_python_stubs/sure_tosca_client/__init__.py | QCDIS/CONF | 6ddb37b691754bbba97c85228d266ac050c4baa4 | [
"Apache-2.0"
] | null | null | null | sure_tosca-client_python_stubs/sure_tosca_client/__init__.py | QCDIS/CONF | 6ddb37b691754bbba97c85228d266ac050c4baa4 | [
"Apache-2.0"
] | 41 | 2017-01-23T16:20:55.000Z | 2019-10-07T12:45:21.000Z | sure_tosca-client_python_stubs/sure_tosca_client/__init__.py | skoulouzis/CONF | 8c0596810f7ef5fec001148dd67192b25abbe3c8 | [
"Apache-2.0"
] | 2 | 2020-05-26T12:53:14.000Z | 2020-10-08T05:59:46.000Z | # coding: utf-8
# flake8: noqa
"""
tosca-sure
TOSCA Simple qUeRy sErvice (SURE). # noqa: E501
OpenAPI spec version: 1.0.0
Contact: S.Koulouzis@uva.nl
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
# import apis into sdk package
from sure_tosca_client.api.default_api import DefaultApi
# import ApiClient
from sure_tosca_client.api_client import ApiClient
from sure_tosca_client.configuration import Configuration
# import models into sdk package
from sure_tosca_client.models.node_template import NodeTemplate
from sure_tosca_client.models.node_template_map import NodeTemplateMap
from sure_tosca_client.models.topology_template import TopologyTemplate
from sure_tosca_client.models.tosca_template import ToscaTemplate
| 27.931034 | 71 | 0.812346 | 113 | 810 | 5.59292 | 0.460177 | 0.113924 | 0.143987 | 0.210443 | 0.387658 | 0.299051 | 0.191456 | 0 | 0 | 0 | 0 | 0.011315 | 0.12716 | 810 | 28 | 72 | 28.928571 | 0.882603 | 0.355556 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
1c66f4993b686c53350a2ac4c4951416ce227b90 | 387 | py | Python | accenv/lib/python3.4/site-packages/IPython/external/argparse/__init__.py | adamshamsudeen/clubdin-dj | eb48c67dab3a4ae7c4032544eb4d64e0b1d7e15a | [
"MIT"
] | null | null | null | accenv/lib/python3.4/site-packages/IPython/external/argparse/__init__.py | adamshamsudeen/clubdin-dj | eb48c67dab3a4ae7c4032544eb4d64e0b1d7e15a | [
"MIT"
] | null | null | null | accenv/lib/python3.4/site-packages/IPython/external/argparse/__init__.py | adamshamsudeen/clubdin-dj | eb48c67dab3a4ae7c4032544eb4d64e0b1d7e15a | [
"MIT"
] | null | null | null | from IPython.utils.version import check_version
try:
    import argparse
    # don't use system argparse if older than 1.1:
    if not check_version(argparse.__version__, '1.1'):
        raise ImportError
    else:
        from argparse import *
        from argparse import SUPPRESS
except (ImportError, AttributeError):
    from ._argparse import *
    from ._argparse import SUPPRESS
| 29.769231 | 54 | 0.705426 | 48 | 387 | 5.520833 | 0.479167 | 0.181132 | 0.271698 | 0.166038 | 0.332075 | 0.332075 | 0.332075 | 0 | 0 | 0 | 0 | 0.013468 | 0.232558 | 387 | 12 | 55 | 32.25 | 0.878788 | 0.113695 | 0 | 0 | 0 | 0 | 0.008798 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.727273 | 0 | 0.727273 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
1c7dcb61837baa2610e1238568449d096ef1e0e4 | 451 | py | Python | ISIC_dataset_downloader/config.py | wallali/AzSkinCancer | a241e61226c2dab9e60b532523031fec5435b51f | [
"MIT"
] | null | null | null | ISIC_dataset_downloader/config.py | wallali/AzSkinCancer | a241e61226c2dab9e60b532523031fec5435b51f | [
"MIT"
] | null | null | null | ISIC_dataset_downloader/config.py | wallali/AzSkinCancer | a241e61226c2dab9e60b532523031fec5435b51f | [
"MIT"
] | null | null | null | # URLS
dataSetUrl = "https://isic-archive.com:443/api/v1/dataset?limit=0&offset=0&sort=name&sortdir=1"
imageSetBaseUrl = "https://isic-archive.com:443/api/v1/image?limit=0&offset=0&sort=name&sortdir=1"
imageDownloadBaseUrl = "https://isic-archive.com:443/api/v1/image/"
imageDetailsDownloadBaseUrl = "https://isic-archive.com:443/api/v1/image/"
# Pickled Files
imageIdClassMapPkl = "imageIdClassMap.pkl"
datasetImageIdMapPkl = "dataSetImageIdMap.pkl" | 50.111111 | 98 | 0.780488 | 61 | 451 | 5.770492 | 0.459016 | 0.102273 | 0.181818 | 0.215909 | 0.514205 | 0.514205 | 0.514205 | 0.4375 | 0 | 0 | 0 | 0.051402 | 0.050998 | 451 | 9 | 99 | 50.111111 | 0.771028 | 0.039911 | 0 | 0 | 0 | 0.333333 | 0.654292 | 0.048724 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
1c8423305f2fc8d1316652e87d238077a4c8e3ab | 118 | py | Python | src/introducao/05_operador_is.py | SamuelPossamai/material_auxilio_conceitos_python | 44c15e72f7409441fe0db38288dac782f0cbc94d | [
"MIT"
] | 1 | 2022-02-08T23:39:11.000Z | 2022-02-08T23:39:11.000Z | src/introducao/05_operador_is.py | SamuelPossamai/material_auxilio_conceitos_python | 44c15e72f7409441fe0db38288dac782f0cbc94d | [
"MIT"
] | null | null | null | src/introducao/05_operador_is.py | SamuelPossamai/material_auxilio_conceitos_python | 44c15e72f7409441fe0db38288dac782f0cbc94d | [
"MIT"
] | null | null | null |
class Classe:
    pass
a = Classe()
b = Classe()
c = a
print(f'{a is b=}')
print(f'{a is c=}')
print(f'{b is c=}')
| 9.833333 | 19 | 0.516949 | 24 | 118 | 2.541667 | 0.375 | 0.295082 | 0.229508 | 0.295082 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.237288 | 118 | 11 | 20 | 10.727273 | 0.677778 | 0 | 0 | 0 | 0 | 0 | 0.230769 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.125 | 0 | 0 | 0.125 | 0.375 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
c763bff82589981414c636d6793b2676eb2e080a | 225 | py | Python | __test__/test_is_number.py | tetrascience/ts-lib-task-script-utils | 23213c526e7723b1365576cede45b9e9e8e6fd73 | [
"Apache-2.0"
] | null | null | null | __test__/test_is_number.py | tetrascience/ts-lib-task-script-utils | 23213c526e7723b1365576cede45b9e9e8e6fd73 | [
"Apache-2.0"
] | 1 | 2021-01-21T23:04:13.000Z | 2021-01-21T23:04:13.000Z | __test__/test_is_number.py | tetrascience/ts-lib-task-script-utils | 23213c526e7723b1365576cede45b9e9e8e6fd73 | [
"Apache-2.0"
] | null | null | null | from common.is_number import isnumber
import common
def test_is_number():
    assert isnumber(10)
    assert isnumber("10")
    assert isnumber("NaN")
    assert not common.isnumber(True)
    assert not isnumber("cheese")
| 18.75 | 37 | 0.715556 | 30 | 225 | 5.266667 | 0.466667 | 0.265823 | 0.202532 | 0.278481 | 0.291139 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021978 | 0.191111 | 225 | 11 | 38 | 20.454545 | 0.846154 | 0 | 0 | 0 | 0 | 0 | 0.048889 | 0 | 0 | 0 | 0 | 0 | 0.625 | 1 | 0.125 | true | 0 | 0.25 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
c77644665541db8be9c305c5def0da8b490f70b2 | 137 | py | Python | pystargazer/__init__.py | fossabot/pystargazer | 14f4755ec70d7a5e09ec46e95753e33d5c21b79a | [
"MIT"
] | 30 | 2020-04-08T11:42:43.000Z | 2022-01-02T20:01:00.000Z | pystargazer/__init__.py | fossabot/pystargazer | 14f4755ec70d7a5e09ec46e95753e33d5c21b79a | [
"MIT"
] | 8 | 2020-04-23T18:48:44.000Z | 2021-07-04T00:43:14.000Z | pystargazer/__init__.py | fossabot/pystargazer | 14f4755ec70d7a5e09ec46e95753e33d5c21b79a | [
"MIT"
] | 2 | 2020-04-25T06:55:55.000Z | 2020-08-25T04:18:49.000Z | import asyncio
try:
    import uvloop
    asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())
except ModuleNotFoundError:
    pass
| 15.222222 | 59 | 0.773723 | 15 | 137 | 6.866667 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.167883 | 137 | 8 | 60 | 17.125 | 0.903509 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.166667 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 5 |
c781e664513dfbd065d2a1b4d11ceca707869030 | 143 | py | Python | tools/leetcode.009.Palindrome Number/leetcode.009.Palindrome Number.submission0.py | tedye/leetcode | 975d7e3b8cb9b6be9e80e07febf4bcf6414acd46 | [
"MIT"
] | 4 | 2015-10-10T00:30:55.000Z | 2020-07-27T19:45:54.000Z | tools/leetcode.009.Palindrome Number/leetcode.009.Palindrome Number.submission0.py | tedye/leetcode | 975d7e3b8cb9b6be9e80e07febf4bcf6414acd46 | [
"MIT"
] | null | null | null | tools/leetcode.009.Palindrome Number/leetcode.009.Palindrome Number.submission0.py | tedye/leetcode | 975d7e3b8cb9b6be9e80e07febf4bcf6414acd46 | [
"MIT"
] | null | null | null | class Solution:
# @param {integer} x
# @return {boolean}
def isPalindrome(self, x):
t = str(x)
return t[::-1] == t | 143 | 143 | 0.51049 | 18 | 143 | 4.055556 | 0.722222 | 0.191781 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010417 | 0.328671 | 143 | 1 | 143 | 143 | 0.75 | 0.251748 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 5 |
c79dbfd282adebe98fa98fea406dc94c5ccbb369 | 37 | py | Python | baselines/shallow_ssad/__init__.py | ChengIC/ryan-sad | 09a93245ae6917911bd0f9d39d533d825c23c259 | [
"MIT"
] | null | null | null | baselines/shallow_ssad/__init__.py | ChengIC/ryan-sad | 09a93245ae6917911bd0f9d39d533d825c23c259 | [
"MIT"
] | null | null | null | baselines/shallow_ssad/__init__.py | ChengIC/ryan-sad | 09a93245ae6917911bd0f9d39d533d825c23c259 | [
"MIT"
] | null | null | null | from .ssad_convex import ConvexSSAD
| 18.5 | 36 | 0.837838 | 5 | 37 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.135135 | 37 | 1 | 37 | 37 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
c7ba03e00dd62b8f1a98414be6fd4d65cd391fec | 135 | py | Python | utils/__init__.py | wang-chen/lgl-feature-matching | 55bd17ee5e8699a06514bca09a6ef834808448a7 | [
"BSD-3-Clause"
] | 17 | 2022-03-06T16:23:44.000Z | 2022-03-14T08:50:11.000Z | utils/__init__.py | wang-chen/lgl-feature-matching | 55bd17ee5e8699a06514bca09a6ef834808448a7 | [
"BSD-3-Clause"
] | 2 | 2022-03-09T11:05:19.000Z | 2022-03-25T20:54:42.000Z | utils/__init__.py | wang-chen/lgl-feature-matching | 55bd17ee5e8699a06514bca09a6ef834808448a7 | [
"BSD-3-Clause"
] | 2 | 2022-03-07T01:18:33.000Z | 2022-03-07T08:28:56.000Z | #!/usr/bin/env python3
from .visualization import Visualizer
from .geometry import Projector
from .evaluation import MatchEvaluator
| 16.875 | 38 | 0.814815 | 16 | 135 | 6.875 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008475 | 0.125926 | 135 | 7 | 39 | 19.285714 | 0.923729 | 0.155556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
c7cc069d2390ef92139e73814ce3599d74ddd060 | 140 | py | Python | openunreid/core/metrics/__init__.py | zwzhang121/OpenUnReID | 4f399efca3d560c608fb4c9c2ed43f522b17596a | [
"Apache-2.0"
] | 344 | 2020-07-01T03:11:21.000Z | 2022-03-30T03:01:41.000Z | openunreid/core/metrics/__init__.py | zwzhang121/OpenUnReID | 4f399efca3d560c608fb4c9c2ed43f522b17596a | [
"Apache-2.0"
] | 43 | 2020-07-14T05:03:25.000Z | 2022-03-29T06:15:54.000Z | openunreid/core/metrics/__init__.py | zwzhang121/OpenUnReID | 4f399efca3d560c608fb4c9c2ed43f522b17596a | [
"Apache-2.0"
] | 71 | 2020-07-01T06:54:30.000Z | 2022-03-23T02:57:53.000Z | # Credit to https://github.com/KaiyangZhou/deep-person-reid
from .accuracy import accuracy # noqa
from .rank import evaluate_rank # noqa
| 28 | 59 | 0.771429 | 20 | 140 | 5.35 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.135714 | 140 | 4 | 60 | 35 | 0.884298 | 0.478571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
c7e99aaaf05affd2773a9cae9cb9e71013e5bada | 41 | py | Python | lib/__init__.py | DissSecurity/DissSec | 2244a87ec177402081ae47b161181afd1898d7a9 | [
"MIT"
] | null | null | null | lib/__init__.py | DissSecurity/DissSec | 2244a87ec177402081ae47b161181afd1898d7a9 | [
"MIT"
] | null | null | null | lib/__init__.py | DissSecurity/DissSec | 2244a87ec177402081ae47b161181afd1898d7a9 | [
"MIT"
] | null | null | null | # Date: 9/8/2019
# Author: Diss.Security
| 13.666667 | 23 | 0.682927 | 7 | 41 | 4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.171429 | 0.146341 | 41 | 2 | 24 | 20.5 | 0.628571 | 0.878049 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
c7f1682edfb09291adf28b102c826878d850cbaf | 185 | py | Python | alchemy_provider/utils/__init__.py | SherkhanSyzdykov/alchemy_provider | d964f8dcfb59f803e9d5f69316eef5199bf71529 | [
"MIT"
] | 1 | 2022-03-30T22:12:50.000Z | 2022-03-30T22:12:50.000Z | alchemy_provider/utils/__init__.py | SherkhanSyzdykov/alchemy_provider | d964f8dcfb59f803e9d5f69316eef5199bf71529 | [
"MIT"
] | null | null | null | alchemy_provider/utils/__init__.py | SherkhanSyzdykov/alchemy_provider | d964f8dcfb59f803e9d5f69316eef5199bf71529 | [
"MIT"
] | null | null | null | from .alchemy_orm import get_column, is_column, is_relationship
from .cls_or_ins import cls_or_ins
from .aliased_manager import AliasedManager
from .concurrency import run_concurrently
| 37 | 63 | 0.87027 | 28 | 185 | 5.392857 | 0.607143 | 0.10596 | 0.10596 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.097297 | 185 | 4 | 64 | 46.25 | 0.904192 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
1be7387aedfc6e0f2390c011be93b4fd25fbd64f | 81 | py | Python | pybtracker/__init__.py | elektito/pybtracker | cb9f4574f662e6b787d0a7c77391fa2317836232 | [
"MIT"
] | 42 | 2016-01-25T15:18:33.000Z | 2022-01-20T19:20:56.000Z | pybtracker/__init__.py | elektito/pybtracker | cb9f4574f662e6b787d0a7c77391fa2317836232 | [
"MIT"
] | 9 | 2016-10-18T14:10:36.000Z | 2022-01-14T17:43:08.000Z | pybtracker/__init__.py | elektito/pybtracker | cb9f4574f662e6b787d0a7c77391fa2317836232 | [
"MIT"
] | 16 | 2016-12-04T12:23:43.000Z | 2022-01-26T02:13:21.000Z | from .server import TrackerServer
from .client import TrackerClient, ServerError
| 27 | 46 | 0.851852 | 9 | 81 | 7.666667 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 81 | 2 | 47 | 40.5 | 0.958333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
1bedf70857139dcace75ce1f30fc9fd7765ac607 | 189 | py | Python | features/environment.py | zeegin/selenium_chrome_smoke | 94f0b5042793b98a18ca0a0d5567373bfc953393 | [
"MIT"
] | 10 | 2018-09-28T07:31:02.000Z | 2020-01-28T11:05:53.000Z | features/environment.py | khorevaa/selenium_chrome_smoke | 94f0b5042793b98a18ca0a0d5567373bfc953393 | [
"MIT"
] | null | null | null | features/environment.py | khorevaa/selenium_chrome_smoke | 94f0b5042793b98a18ca0a0d5567373bfc953393 | [
"MIT"
] | 1 | 2018-09-30T13:47:30.000Z | 2018-09-30T13:47:30.000Z | from selenium import webdriver
def before_all(context):
    context.driver = webdriver.Chrome()
    context.driver.implicitly_wait(10)

def after_all(context):
    context.driver.quit()
| 17.181818 | 39 | 0.746032 | 24 | 189 | 5.75 | 0.625 | 0.282609 | 0.246377 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0125 | 0.153439 | 189 | 10 | 40 | 18.9 | 0.85 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.166667 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
40042062a133c3d90f38fb0ed6720e1ef311c49e | 102 | py | Python | skills_ml/evaluation/__init__.py | bhagyaramgpo/skills-ml | be520fc2a2f88bff756d25e57c3378a465a1dcb2 | [
"MIT"
] | 147 | 2016-12-05T19:45:05.000Z | 2022-02-17T03:03:28.000Z | skills_ml/evaluation/__init__.py | bhagyaramgpo/skills-ml | be520fc2a2f88bff756d25e57c3378a465a1dcb2 | [
"MIT"
] | 390 | 2016-12-02T03:11:13.000Z | 2022-03-28T22:08:20.000Z | skills_ml/evaluation/__init__.py | bhagyaramgpo/skills-ml | be520fc2a2f88bff756d25e57c3378a465a1dcb2 | [
"MIT"
] | 66 | 2017-12-14T16:33:24.000Z | 2022-02-17T03:03:31.000Z | """Tools for evaluating the effectiveness of machine learning and NLP algorithms on workforce data"""
| 51 | 101 | 0.803922 | 14 | 102 | 5.857143 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137255 | 102 | 1 | 102 | 102 | 0.931818 | 0.931373 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
400ad8fc75a382d6eb670232fd8cf5011ca225a1 | 176 | py | Python | itdagene/core/notifications/admin.py | itdagene-ntnu/itdagene | b972cd3d803debccebbc33641397a39834b8d69a | [
"MIT"
] | 9 | 2018-10-17T20:58:09.000Z | 2021-12-16T16:16:45.000Z | itdagene/core/notifications/admin.py | itdagene-ntnu/itdagene | b972cd3d803debccebbc33641397a39834b8d69a | [
"MIT"
] | 177 | 2018-10-27T18:15:56.000Z | 2022-03-28T04:29:06.000Z | itdagene/core/notifications/admin.py | itdagene-ntnu/itdagene | b972cd3d803debccebbc33641397a39834b8d69a | [
"MIT"
] | null | null | null | from django.contrib import admin
from itdagene.core.notifications.models import Notification, Subscription
admin.site.register(Notification)
admin.site.register(Subscription)
| 29.333333 | 73 | 0.857955 | 21 | 176 | 7.190476 | 0.619048 | 0.119205 | 0.225166 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.068182 | 176 | 5 | 74 | 35.2 | 0.920732 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
40211c3830c482c6fbf191b637b7c8f0e20ab33a | 69 | py | Python | tests/runtime-trace-tests/cases/empty_array.py | jaydeetay/pxt | aad1beaf15edc46e1327806367298cbc942dcbc1 | [
"MIT"
] | 977 | 2019-05-06T23:12:55.000Z | 2022-03-29T19:11:44.000Z | tests/runtime-trace-tests/cases/empty_array.py | jaydeetay/pxt | aad1beaf15edc46e1327806367298cbc942dcbc1 | [
"MIT"
] | 3,980 | 2019-05-09T20:48:14.000Z | 2022-03-28T20:33:07.000Z | tests/runtime-trace-tests/cases/empty_array.py | jaydeetay/pxt | aad1beaf15edc46e1327806367298cbc942dcbc1 | [
"MIT"
] | 306 | 2016-04-09T05:28:07.000Z | 2019-05-02T14:23:29.000Z | def foo():
print(len(text_list))
text_list: List[str] = []
foo() | 13.8 | 25 | 0.608696 | 11 | 69 | 3.636364 | 0.636364 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173913 | 69 | 5 | 26 | 13.8 | 0.701754 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0 | 0 | 0.25 | 0.25 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
403be0a9498714a46b0315495ec66fa9b78eb1fd | 260 | py | Python | app/exceptions.py | LimaGuilherme/flask-boilerplate | 4053dd2d24937d405fedcd839e15d2cc45d87f01 | [
"MIT"
] | null | null | null | app/exceptions.py | LimaGuilherme/flask-boilerplate | 4053dd2d24937d405fedcd839e15d2cc45d87f01 | [
"MIT"
] | null | null | null | app/exceptions.py | LimaGuilherme/flask-boilerplate | 4053dd2d24937d405fedcd839e15d2cc45d87f01 | [
"MIT"
] | null | null | null | class BadParameter(Exception):
pass
class NotAuthorized(Exception):
pass
class UnexpectedError(Exception):
pass
class NotFound(Exception):
pass
class RepositoryError(Exception):
pass
class ConfigClassNotFound(Exception):
pass
| 11.304348 | 37 | 0.730769 | 24 | 260 | 7.916667 | 0.375 | 0.410526 | 0.473684 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 260 | 22 | 38 | 11.818182 | 0.913462 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
4067b13b7a86aa806269ca4aad2d569b0ec67c29 | 2,782 | py | Python | ytdw.py | anonik9900/YT-Downloader.py | 080e4280a40e1da5583686a7f2f2ea636fa1a2a3 | [
"BSD-2-Clause"
] | null | null | null | ytdw.py | anonik9900/YT-Downloader.py | 080e4280a40e1da5583686a7f2f2ea636fa1a2a3 | [
"BSD-2-Clause"
] | null | null | null | ytdw.py | anonik9900/YT-Downloader.py | 080e4280a40e1da5583686a7f2f2ea636fa1a2a3 | [
"BSD-2-Clause"
] | null | null | null | #!/usr/bin/env python3
import random
import string
import subprocess #Process commands
import socket #Process socket data
import pyfiglet
import sys
import os
import time
import pytube
from pytube import YouTube
ascii_banner = pyfiglet.figlet_format("YT-Downloader")
print(ascii_banner)
print ("{< Devolved By Anonik V 0.1>}")
print("")
def MenuShell():
    choice = input("Type...\n {1} to download video with audio \n {2} to only video \n {3} to only audio \n \n")
    if choice == str("1"):
        link = input("Enter the video link:")
        yt = YouTube(link)
        print("Title: ", yt.title)  # Title of video
        print("Number of views: ", yt.views)  # Number of views of video
        print("Length of video: ", yt.length, "seconds")  # Length of the video
        print("Description: ", yt.description)  # Description of video
        print("Ratings: ", yt.rating)  # Rating
        # print(yt.streams.filter(progressive=True))
        # ys = yt.streams.get_by_itag('137')
        time.sleep(0.2)
        ys = yt.streams.get_highest_resolution()
        ys.download()
        contatore = 0
        while contatore <= 99:
            contatore = contatore + 1
            print("Download Video: [" + str(contatore) + "%]")
            time.sleep(0.4)
        print("")
        print("Download of ", yt.title, " Completed")
        # print(yt.streams)
    if choice == str("2"):
        link = input("Enter the video link:")
        yt = YouTube(link)
        print("Title: ", yt.title)
        print("Number of views: ", yt.views)
        print("Length of video: ", yt.length, "seconds")
        print("Description: ", yt.description)
        print("Ratings: ", yt.rating)
        # print(yt.streams.filter(progressive=True))
        time.sleep(0.2)
        ys = yt.streams.get_by_itag('137')
        ys.download()
        contatore = 0
        while contatore <= 99:
            contatore = contatore + 1
            print("Download Video: [" + str(contatore) + "%]")
            time.sleep(0.3)
        print("")
        print("Download of ", yt.title, " Completed")
        # print(yt.streams)
    if choice == str("3"):
        link = input("Enter the video link:")
        yt = YouTube(link)
        print("Title: ", yt.title)
        print("Number of views: ", yt.views)
        print("Length of video: ", yt.length, "seconds")
        print("Description: ", yt.description)
        print("Ratings: ", yt.rating)
        time.sleep(0.2)
        ys = yt.streams.get_by_itag('140')
        ys.download()
        contatore = 0
        while contatore <= 99:
            contatore = contatore + 1
            print("Download Audio: [" + str(contatore) + "%]")
            time.sleep(0.4)
        print("")
        print("Download of ", yt.title, " Completed")
        # print(yt.streams)
    else:
        sys.exit()

MenuShell()
| 26.75 | 110 | 0.645579 | 390 | 2,782 | 4.576923 | 0.205128 | 0.047059 | 0.060504 | 0.030812 | 0.765266 | 0.765266 | 0.765266 | 0.7507 | 0.707003 | 0.707003 | 0 | 0.021486 | 0.196981 | 2,782 | 103 | 111 | 27.009709 | 0.777529 | 0.186556 | 0 | 0.630137 | 0 | 0.013699 | 0.251758 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.013699 | false | 0 | 0.136986 | 0 | 0.150685 | 0.383562 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
4095812510c90e3bade44fe7286bf5c85a73ab1e | 141 | py | Python | src/bitcaster/backends/__init__.py | bitcaster-io/bitcaster | 9f1bad96e00e3bc78a22451731e231d30662b166 | [
"BSD-3-Clause"
] | 4 | 2018-03-01T10:22:30.000Z | 2020-04-04T16:31:11.000Z | src/bitcaster/backends/__init__.py | bitcaster-io/bitcaster | 9f1bad96e00e3bc78a22451731e231d30662b166 | [
"BSD-3-Clause"
] | 60 | 2018-05-20T04:42:32.000Z | 2022-02-10T17:03:37.000Z | src/bitcaster/backends/__init__.py | bitcaster-io/bitcaster | 9f1bad96e00e3bc78a22451731e231d30662b166 | [
"BSD-3-Clause"
] | 1 | 2018-08-04T05:06:45.000Z | 2018-08-04T05:06:45.000Z | from ..security import ADMIN_PERMISSIONS, OWNER_PERMISSIONS
from .ldap import BitcasterLDAPBackend
from .permissions import BitcasterBackend
| 35.25 | 59 | 0.87234 | 15 | 141 | 8.066667 | 0.6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.092199 | 141 | 3 | 60 | 47 | 0.945313 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
40a6913c54a572fb9bc4bcc8fd89e87e4035ad8a | 70 | py | Python | alphabet/__init__.py | robertoreale/alphabet | 9b7bc0759be56c710dc4a1df95db21ee58c614a3 | [
"MIT"
] | 1 | 2020-11-10T19:44:53.000Z | 2020-11-10T19:44:53.000Z | alphabet/__init__.py | reale/alphabet | 9b7bc0759be56c710dc4a1df95db21ee58c614a3 | [
"MIT"
] | 2 | 2018-08-22T11:45:12.000Z | 2019-03-05T16:29:33.000Z | alphabet/__init__.py | robertoreale/alphabet | 9b7bc0759be56c710dc4a1df95db21ee58c614a3 | [
"MIT"
] | 1 | 2020-11-10T19:45:05.000Z | 2020-11-10T19:45:05.000Z | from __future__ import absolute_import
from .alphabet import alphabet
| 23.333333 | 38 | 0.871429 | 9 | 70 | 6.222222 | 0.555556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114286 | 70 | 2 | 39 | 35 | 0.903226 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
40c315e8d9686692912e81cf6a9a1845edeed26e | 381 | py | Python | seededrandom.py | matthiasamberg/TigerJython---The-fantastic-Elevator-Game | 9ede94676b5e0fcd034c049eb0f700eff041c09a | [
"Apache-2.0"
] | 1 | 2020-12-14T10:26:19.000Z | 2020-12-14T10:26:19.000Z | seededrandom.py | matthiasamberg/TigerJython---The-fantastic-Elevator-Game | 9ede94676b5e0fcd034c049eb0f700eff041c09a | [
"Apache-2.0"
] | null | null | null | seededrandom.py | matthiasamberg/TigerJython---The-fantastic-Elevator-Game | 9ede94676b5e0fcd034c049eb0f700eff041c09a | [
"Apache-2.0"
] | null | null | null | import random
class SeededRandom:
    def __init__(self, seed):
        self.nextSeed = seed

    def getRandom(self):
        random.seed(self.nextSeed)
        self.nextSeed = random.random()
        return random.random()

    def getRandInt(self, low, high):
        random.seed(self.nextSeed)
        self.nextSeed = random.random()
        return random.randint(low, high)
| 22.411765 | 40 | 0.629921 | 43 | 381 | 5.488372 | 0.348837 | 0.254237 | 0.20339 | 0.186441 | 0.491525 | 0.491525 | 0.491525 | 0.491525 | 0.491525 | 0.491525 | 0 | 0 | 0.265092 | 381 | 16 | 41 | 23.8125 | 0.842857 | 0 | 0 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.083333 | 0 | 0.583333 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 5 |
40e977a998fefa13eed3231839e5aba625501782 | 22 | py | Python | index.py | lexmin0412/autotest-platform | a41a7b7e0e3f96a0b2b86a5c3007c23057312953 | [
"MIT"
] | null | null | null | index.py | lexmin0412/autotest-platform | a41a7b7e0e3f96a0b2b86a5c3007c23057312953 | [
"MIT"
] | null | null | null | index.py | lexmin0412/autotest-platform | a41a7b7e0e3f96a0b2b86a5c3007c23057312953 | [
"MIT"
] | null | null | null | print("Hello VScode")
| 11 | 21 | 0.727273 | 3 | 22 | 5.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 22 | 1 | 22 | 22 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0.545455 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
904c8860b0b9dff449bc5d32663e26d89ff09a42 | 152 | py | Python | myeda/test/test_visualization.py | PhilippvK/python-myeda | e8501a24535d73997fed9445b45b08ca93cb4b0b | [
"MIT"
] | 3 | 2020-09-26T12:44:39.000Z | 2022-01-13T10:25:17.000Z | myeda/test/test_visualization.py | PhilippvK/python-myeda | e8501a24535d73997fed9445b45b08ca93cb4b0b | [
"MIT"
] | null | null | null | myeda/test/test_visualization.py | PhilippvK/python-myeda | e8501a24535d73997fed9445b45b08ca93cb4b0b | [
"MIT"
] | null | null | null | from pyeda.inter import *
from myeda.visualization import *
import nose
def test_todo():
    assert True

if __name__ == '__main__':
    nose.run()
| 15.2 | 33 | 0.690789 | 20 | 152 | 4.8 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.210526 | 152 | 9 | 34 | 16.888889 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0.052632 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 1 | 0.142857 | true | 0 | 0.428571 | 0 | 0.571429 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
907020a5c4f67e2a112034f1eb97866812205ac3 | 102 | py | Python | AoC20/day_02/b.py | a-recknagel/AoC20 | 7aa0013dc745bdc0ad357e1168b212bd065fd092 | [
"MIT"
] | null | null | null | AoC20/day_02/b.py | a-recknagel/AoC20 | 7aa0013dc745bdc0ad357e1168b212bd065fd092 | [
"MIT"
] | null | null | null | AoC20/day_02/b.py | a-recknagel/AoC20 | 7aa0013dc745bdc0ad357e1168b212bd065fd092 | [
"MIT"
] | null | null | null | from AoC20.day_2 import Rule, data
print(sum([Rule(line).check_2() for line in data.splitlines()]))
| 20.4 | 64 | 0.72549 | 18 | 102 | 4 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.044444 | 0.117647 | 102 | 4 | 65 | 25.5 | 0.755556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 5 |
90808834dc578fe20fbb45556415cbdfd2181a73 | 134 | py | Python | BB/lib/__init__.py | Morgenkroete/GOF2BountyBot | b4fe3d765b764ab169284ce0869a810825013389 | [
"MIT"
] | 6 | 2020-06-09T16:36:52.000Z | 2021-02-02T17:53:44.000Z | BB/lib/__init__.py | Morgenkroete/GOF2BountyBot | b4fe3d765b764ab169284ce0869a810825013389 | [
"MIT"
] | 138 | 2020-08-02T11:20:34.000Z | 2020-12-15T15:55:11.000Z | BB/lib/__init__.py | Morgenkroete/GOF2BountyBot | b4fe3d765b764ab169284ce0869a810825013389 | [
"MIT"
] | 6 | 2020-07-05T05:32:16.000Z | 2020-11-01T21:58:31.000Z | # Make all lib modules available on package import
from . import discordUtil, emojis, jsonHandler, pathfinding, stringTyping, timeUtil | 67 | 83 | 0.820896 | 16 | 134 | 6.875 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.126866 | 134 | 2 | 83 | 67 | 0.940171 | 0.358209 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
90b01e849e4ce05c670fa0e0812c775251a622ed | 82 | py | Python | supermamas/pamperings/viewmodels/__init__.py | oasalonen/supermamas | 3ab2b2370de903cea614ea9dfa10ce1c0504a715 | [
"Apache-2.0"
] | null | null | null | supermamas/pamperings/viewmodels/__init__.py | oasalonen/supermamas | 3ab2b2370de903cea614ea9dfa10ce1c0504a715 | [
"Apache-2.0"
] | null | null | null | supermamas/pamperings/viewmodels/__init__.py | oasalonen/supermamas | 3ab2b2370de903cea614ea9dfa10ce1c0504a715 | [
"Apache-2.0"
] | null | null | null | from supermamas.pamperings.viewmodels.pampering_list import PamperingListViewModel | 82 | 82 | 0.926829 | 8 | 82 | 9.375 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.036585 | 82 | 1 | 82 | 82 | 0.949367 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
29184978630d562622e6a57f85c8ff42be9d1160 | 153 | py | Python | orangetool/orangetool_test.py | Moduland/Orangetool | 4f85547c4393e6bf8635a185c8017cdef8b141c1 | [
"MIT"
] | 123 | 2017-02-01T15:40:34.000Z | 2022-02-12T17:16:21.000Z | orangetool/orangetool_test.py | Moduland/Orangetool | 4f85547c4393e6bf8635a185c8017cdef8b141c1 | [
"MIT"
] | 13 | 2019-07-01T20:32:52.000Z | 2021-06-25T15:23:52.000Z | orangetool/orangetool_test.py | Moduland/Orangetool | 4f85547c4393e6bf8635a185c8017cdef8b141c1 | [
"MIT"
] | 27 | 2017-02-01T15:40:37.000Z | 2021-06-13T10:11:08.000Z | # -*- coding: utf-8 -*-
"""Orangetool test."""
from orangetool import version, check_update
if __name__ == "__main__":
version()
check_update()
| 19.125 | 44 | 0.653595 | 17 | 153 | 5.294118 | 0.764706 | 0.266667 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007937 | 0.176471 | 153 | 7 | 45 | 21.857143 | 0.706349 | 0.254902 | 0 | 0 | 0 | 0 | 0.074074 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.25 | 0 | 0.25 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
293193d536a4253234ab7f18ed1e712627801f05 | 164 | py | Python | src/main/bin/python/python2.7/site-packages/liblzma.py | otrack/serverless-shell | f3fcd28353b775caddd45e2db8998537d8a6d8a4 | [
"Apache-2.0"
] | 132 | 2021-02-24T12:14:35.000Z | 2022-03-28T13:06:22.000Z | src/main/bin/python/python2.7/site-packages/liblzma.py | otrack/serverless-shell | f3fcd28353b775caddd45e2db8998537d8a6d8a4 | [
"Apache-2.0"
] | null | null | null | src/main/bin/python/python2.7/site-packages/liblzma.py | otrack/serverless-shell | f3fcd28353b775caddd45e2db8998537d8a6d8a4 | [
"Apache-2.0"
] | 3 | 2021-12-08T15:20:46.000Z | 2021-12-13T04:55:08.000Z | from lzma import *
from warnings import warn
warn("'liblzma' has been renamed to 'lzma'!\n Please update code to import 'lzma'!", DeprecationWarning, stacklevel=1)
| 41 | 118 | 0.762195 | 24 | 164 | 5.208333 | 0.708333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007042 | 0.134146 | 164 | 3 | 119 | 54.666667 | 0.873239 | 0 | 0 | 0 | 0 | 0 | 0.463415 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
2966c34f9dc30ea3fb91332ac0736cc99cc23032 | 227 | py | Python | Chapter02/Exercise2.17/exercise2.17.py | PacktPublishing/Web-Development-with-Django-Second-Edition | a9c3d8e46176af612e3b8fe7bc2a2a8effafe981 | [
"MIT"
] | 2 | 2022-01-03T22:17:21.000Z | 2022-03-04T13:32:36.000Z | Chapter02/Exercise2.17/exercise2.17.py | PacktPublishing/Web-Development-with-Django-Second-Edition | a9c3d8e46176af612e3b8fe7bc2a2a8effafe981 | [
"MIT"
] | null | null | null | Chapter02/Exercise2.17/exercise2.17.py | PacktPublishing/Web-Development-with-Django-Second-Edition | a9c3d8e46176af612e3b8fe7bc2a2a8effafe981 | [
"MIT"
] | 1 | 2022-02-25T13:53:37.000Z | 2022-02-25T13:53:37.000Z | #!/usr/bin/env python3
from reviews.models import Publisher
from django.db.models import Q
Publisher.objects.filter(Q(name__startswith="New") | Q(name__startswith="Idea"))
Publisher.objects.filter(Q(name__startswith="New") & Q(name__endswith="Publisher")) | 32.428571 | 83 | 0.779736 | 31 | 227 | 5.451613 | 0.548387 | 0.118343 | 0.266272 | 0.272189 | 0.532544 | 0.532544 | 0.532544 | 0.532544 | 0.532544 | 0 | 0 | 0.004695 | 0.061674 | 227 | 7 | 83 | 32.428571 | 0.788732 | 0.092511 | 0 | 0 | 0 | 0 | 0.092233 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
2983e2aa611e48ca8011f56059af74656c443cb7 | 73 | py | Python | separate_exercises/nested_sum.py | mightypanders/pythonlearning | ee2e403ed198f1ae5c8c5bac4cd1e380c1824153 | [
"MIT"
] | 1 | 2016-07-04T07:03:32.000Z | 2016-07-04T07:03:32.000Z | separate_exercises/nested_sum.py | mightypanders/pythonlearning | ee2e403ed198f1ae5c8c5bac4cd1e380c1824153 | [
"MIT"
] | null | null | null | separate_exercises/nested_sum.py | mightypanders/pythonlearning | ee2e403ed198f1ae5c8c5bac4cd1e380c1824153 | [
"MIT"
] | null | null | null | a = 0
for i in range(10):
a += 1
for j in range(10):
a += 1
print(a)
| 10.428571 | 20 | 0.520548 | 18 | 73 | 2.111111 | 0.555556 | 0.368421 | 0.473684 | 0.526316 | 0.578947 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137255 | 0.30137 | 73 | 6 | 21 | 12.166667 | 0.607843 | 0 | 0 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.166667 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
298652060105d830f6011dae86eba3586fe1f06d | 68 | py | Python | app/__main__.py | subesokun/vscode-python-demo | 7060268e67d65a2cf5793838a1f0936e5d8adf3c | [
"MIT"
] | null | null | null | app/__main__.py | subesokun/vscode-python-demo | 7060268e67d65a2cf5793838a1f0936e5d8adf3c | [
"MIT"
] | null | null | null | app/__main__.py | subesokun/vscode-python-demo | 7060268e67d65a2cf5793838a1f0936e5d8adf3c | [
"MIT"
] | null | null | null |
def run():
print("Hello from 🐍")
print(eval("1+1"))
run()
| 9.714286 | 25 | 0.5 | 11 | 68 | 3.181818 | 0.727273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.039216 | 0.25 | 68 | 6 | 26 | 11.333333 | 0.627451 | 0 | 0 | 0 | 0 | 0 | 0.223881 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0 | 0 | 0.25 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
46462700247a5cb24a315ec186f6ccb65dadb28b | 80 | py | Python | test_python.py | jmasvial/python_course | 4a1b86099dcca74ef65e7f04d29694bbcc64e391 | [
"Apache-2.0"
] | null | null | null | test_python.py | jmasvial/python_course | 4a1b86099dcca74ef65e7f04d29694bbcc64e391 | [
"Apache-2.0"
] | null | null | null | test_python.py | jmasvial/python_course | 4a1b86099dcca74ef65e7f04d29694bbcc64e391 | [
"Apache-2.0"
] | null | null | null | print "test environment"
ip_add = ['1','2','3','4']
for i in ip_add:
print i
| 11.428571 | 26 | 0.6 | 16 | 80 | 2.875 | 0.75 | 0.217391 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.061538 | 0.1875 | 80 | 6 | 27 | 13.333333 | 0.646154 | 0 | 0 | 0 | 0 | 0 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.5 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
4674a926f8de0a76e36d944cbdfcf52aeeda28e7 | 142 | py | Python | python/7kyu/credit_card_mask.py | Sigmanificient/codewars | b34df4bf55460d312b7ddf121b46a707b549387a | [
"MIT"
] | 3 | 2021-06-08T01:57:13.000Z | 2021-06-26T10:52:47.000Z | python/7kyu/credit_card_mask.py | Sigmanificient/codewars | b34df4bf55460d312b7ddf121b46a707b549387a | [
"MIT"
] | null | null | null | python/7kyu/credit_card_mask.py | Sigmanificient/codewars | b34df4bf55460d312b7ddf121b46a707b549387a | [
"MIT"
] | 2 | 2021-06-10T21:20:13.000Z | 2021-06-30T10:13:26.000Z | """Kata url: https://www.codewars.com/kata/5412509bd436bd33920011bc."""
def maskify(cc: str) -> str:
return cc[-4:].rjust(len(cc), '#')
| 23.666667 | 71 | 0.640845 | 19 | 142 | 4.789474 | 0.789474 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.152 | 0.119718 | 142 | 5 | 72 | 28.4 | 0.576 | 0.457746 | 0 | 0 | 0 | 0 | 0.014085 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
4677796152d7995ce1d31797285fd2dbba6d865f | 158 | py | Python | src/frr/tests/runtests.py | zhouhaifeng/vpe | 9c644ffd561988e5740021ed26e0f7739844353d | [
"Apache-2.0"
] | null | null | null | src/frr/tests/runtests.py | zhouhaifeng/vpe | 9c644ffd561988e5740021ed26e0f7739844353d | [
"Apache-2.0"
] | null | null | null | src/frr/tests/runtests.py | zhouhaifeng/vpe | 9c644ffd561988e5740021ed26e0f7739844353d | [
"Apache-2.0"
] | null | null | null | import pytest
import sys
import os
sys.path.append(os.path.join(os.path.dirname(__file__), "helpers", "python"))
raise SystemExit(pytest.main(sys.argv[1:]))
| 22.571429 | 77 | 0.753165 | 25 | 158 | 4.6 | 0.64 | 0.104348 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006849 | 0.075949 | 158 | 6 | 78 | 26.333333 | 0.780822 | 0 | 0 | 0 | 0 | 0 | 0.082278 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.6 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
46945fdd4ae692887bc54e2be033d939da9fa0fa | 34 | py | Python | Magnus/Problem sets/PS2/A4.py | NumEconCopenhagen/projects-2022-git-good | df457732b3da0d52c481b0adcb18e1cef63a5089 | [
"MIT"
] | null | null | null | Magnus/Problem sets/PS2/A4.py | NumEconCopenhagen/projects-2022-git-good | df457732b3da0d52c481b0adcb18e1cef63a5089 | [
"MIT"
] | null | null | null | Magnus/Problem sets/PS2/A4.py | NumEconCopenhagen/projects-2022-git-good | df457732b3da0d52c481b0adcb18e1cef63a5089 | [
"MIT"
] | 2 | 2022-02-14T17:00:01.000Z | 2022-03-09T07:20:32.000Z | import mymodule
mymodule.myfun(5)
| 11.333333 | 17 | 0.823529 | 5 | 34 | 5.6 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.032258 | 0.088235 | 34 | 2 | 18 | 17 | 0.870968 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
4697e932fabfda31167bdf81b3896aabbdb6af68 | 53 | py | Python | atlas/testing/acceptance/mixins/run_process.py | DeepLearnI/atlas | 8aca652d7e647b4e88530b93e265b536de7055ed | [
"Apache-2.0"
] | 296 | 2020-03-16T19:55:00.000Z | 2022-01-10T19:46:05.000Z | atlas/testing/acceptance/mixins/run_process.py | DeepLearnI/atlas | 8aca652d7e647b4e88530b93e265b536de7055ed | [
"Apache-2.0"
] | 57 | 2020-03-17T11:15:57.000Z | 2021-07-10T14:42:27.000Z | atlas/testing/acceptance/mixins/run_process.py | DeepLearnI/atlas | 8aca652d7e647b4e88530b93e265b536de7055ed | [
"Apache-2.0"
] | 38 | 2020-03-17T21:06:05.000Z | 2022-02-08T03:19:34.000Z |
from foundations_spec.extensions import run_process | 17.666667 | 51 | 0.886792 | 7 | 53 | 6.428571 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.09434 | 53 | 3 | 51 | 17.666667 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
d3cd8350a0ae5e2a1dd48f0984d353a52d6c22f8 | 76 | py | Python | paper_finder/__init__.py | Syhen/paper-finder | bb7aacf3101ff50a1f4cfe42b4a2bb7b5809faf2 | [
"MIT"
] | null | null | null | paper_finder/__init__.py | Syhen/paper-finder | bb7aacf3101ff50a1f4cfe42b4a2bb7b5809faf2 | [
"MIT"
] | null | null | null | paper_finder/__init__.py | Syhen/paper-finder | bb7aacf3101ff50a1f4cfe42b4a2bb7b5809faf2 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
create on 2019-04-21 1:26 AM
author @heyao
"""
| 9.5 | 27 | 0.552632 | 12 | 76 | 3.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.196721 | 0.197368 | 76 | 7 | 28 | 10.857143 | 0.491803 | 0.855263 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
d3edf7b107eb5d9426b88bed0908fb42b1319106 | 4,171 | py | Python | assignment2/assignment2.py | georgeboghez/RN_Assignment1 | d93c42b540da65cd619d15ec1f3675987d339c7c | [
"MIT"
] | null | null | null | assignment2/assignment2.py | georgeboghez/RN_Assignment1 | d93c42b540da65cd619d15ec1f3675987d339c7c | [
"MIT"
] | null | null | null | assignment2/assignment2.py | georgeboghez/RN_Assignment1 | d93c42b540da65cd619d15ec1f3675987d339c7c | [
"MIT"
] | null | null | null | import pickle, gzip
import random
import json
import copy
import sys
from percepton import Perceptron
def train(train_set):
d = []
for j in range(10):
bestAccuracy = 0
bestPerceptron = None
for i in range(20):
print(j, i)
p = Perceptron(j, [random.uniform(-0.5, 0.5)] * len(train_set[0][0]), random.uniform(-0.5, 0.5), 0.001)
accuracy = p.train_data(train_set, 30)
if accuracy > bestAccuracy:
bestAccuracy = accuracy
bestPerceptron = copy.deepcopy(p)
d.append(bestPerceptron.toJSON())
with open('perceptrons.json', 'w+') as outfile:
json.dump(d, outfile)
def train_mini(train_set):
d = []
for j in range(10):
bestAccuracy = 0
bestPerceptron = None
for i in range(3):
print(j, i)
p = Perceptron(j, [random.uniform(-0.5, 0.5)] * len(train_set[0][0]), random.uniform(-0.5, 0.5), 0.001)
accuracy = p.mini_batch(train_set, 30)
if accuracy > bestAccuracy:
bestAccuracy = accuracy
bestPerceptron = copy.deepcopy(p)
d.append(bestPerceptron.toJSON())
with open('perceptrons_mini_batch.json', 'w+') as outfile:
json.dump(d, outfile)
def check_set(_set):
with open("perceptrons.json", "r") as f:
perceptronList = json.loads(f.read())
for i in range(len(perceptronList)):
perceptronList[i] = json.loads(perceptronList[i])
p = Perceptron(perceptronList[i]["label"], perceptronList[i]["weight"], perceptronList[i]["bias"], perceptronList[i]["learning_rate"], perceptronList[i]["accuracy"])
perceptronList[i] = copy.deepcopy(p)
num = 0
for index, digit in enumerate(_set[0]):
bestResult = -10000
for perceptron in perceptronList:
result = perceptron.check_result(digit)
            if result > bestResult:
                bestResult = result
                bestPerceptron = perceptron.get_label()
if bestPerceptron == _set[1][index]:
num += 1
print(100 * num / len(_set[1]))
def check_set_mini(_set):
with open("perceptrons_mini_batch.json", "r") as f:
perceptronList = json.loads(f.read())
for i in range(len(perceptronList)):
perceptronList[i] = json.loads(perceptronList[i])
p = Perceptron(perceptronList[i]["label"], perceptronList[i]["weight"], perceptronList[i]["bias"], perceptronList[i]["learning_rate"], perceptronList[i]["accuracy"])
perceptronList[i] = copy.deepcopy(p)
num = 0
for index, digit in enumerate(_set[0]):
bestRes = -1000000
for perceptron in perceptronList:
result = perceptron.check_result(digit)
            if result > bestRes:
                bestRes = result
                bestPerceptron = perceptron.get_label()
if bestPerceptron == _set[1][index]:
num += 1
print(100 * num / len(_set[1]))
if __name__ == "__main__":
f = gzip.open('mnist.pkl.gz', 'rb')
train_set, valid_set, test_set = pickle.load(f, encoding='latin')
f.close()
if len(sys.argv) == 2:
if sys.argv[1] == "online":
train(train_set)
elif sys.argv[1] == "mini":
train_mini(train_set)
elif sys.argv[1] == "miniM":
p = Perceptron(0, [random.uniform(-0.5, 0.5)] * len(train_set[0][0]), random.uniform(-0.5, 0.5), 0.001)
p.train_matrix(train_set, 1)
elif sys.argv[1] == "validOnline":
check_set(valid_set)
elif sys.argv[1] == "testOnline":
check_set(test_set)
elif sys.argv[1] == "validMini":
check_set_mini(valid_set)
elif sys.argv[1] == "testMini":
check_set_mini(test_set)
else:
print("invalid argument. try running the script with one of the following arguments: online, mini, validOnline, testOnline, validMini, testMini")
else:
print("try running the script with one of the following arguments: online, mini, validOnline, testOnline, validMini, testMini") | 40.105769 | 177 | 0.580436 | 507 | 4,171 | 4.662722 | 0.205128 | 0.101523 | 0.011421 | 0.038071 | 0.780034 | 0.773689 | 0.724196 | 0.723773 | 0.723773 | 0.695854 | 0 | 0.031377 | 0.289379 | 4,171 | 104 | 178 | 40.105769 | 0.766194 | 0 | 0 | 0.531915 | 0 | 0.021277 | 0.119367 | 0.012943 | 0 | 0 | 0 | 0 | 0 | 1 | 0.042553 | false | 0 | 0.06383 | 0 | 0.106383 | 0.06383 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
310274ddd448a9b49ffadf19823bf613996a1768 | 2,100 | py | Python | tests/test_paginator.py | MatveiAleksandrovich/SocialNetwork | fe3964a4e6b0da208a2eba81b23b24fcb104978d | [
"MIT"
] | null | null | null | tests/test_paginator.py | MatveiAleksandrovich/SocialNetwork | fe3964a4e6b0da208a2eba81b23b24fcb104978d | [
"MIT"
] | null | null | null | tests/test_paginator.py | MatveiAleksandrovich/SocialNetwork | fe3964a4e6b0da208a2eba81b23b24fcb104978d | [
"MIT"
] | null | null | null | import pytest
from django.core.paginator import Paginator, Page
class TestGroupPaginatorView:
@pytest.mark.django_db(transaction=True)
def test_group_paginator_view_get(self, client, post_with_group):
try:
response = client.get(f'/group/{post_with_group.group.slug}')
except Exception as e:
            assert False, f'''The `/group/<slug>/` page is not working correctly. Error: `{e}`'''
if response.status_code in (301, 302):
response = client.get(f'/group/{post_with_group.group.slug}/')
        assert response.status_code != 404, 'Page `/group/<slug>/` not found; check this URL in *urls.py*'
        assert 'paginator' in response.context, \
            'Check that the `paginator` variable is passed into the context of the `/group/<slug>/` page'
        assert type(response.context['paginator']) == Paginator, \
            'Check that the `paginator` variable on the `/group/<slug>/` page is of type `Paginator`'
        assert 'page' in response.context, \
            'Check that the `page` variable is passed into the context of the `/group/<slug>/` page'
        assert type(response.context['page']) == Page, \
            'Check that the `page` variable on the `/group/<slug>/` page is of type `Page`'
@pytest.mark.django_db(transaction=True)
def test_index_paginator_view_get(self, client, post_with_group):
response = client.get(f'/')
        assert response.status_code != 404, 'Page `/` not found; check this URL in *urls.py*'
        assert 'paginator' in response.context, \
            'Check that the `paginator` variable is passed into the context of the `/` page'
        assert type(response.context['paginator']) == Paginator, \
            'Check that the `paginator` variable on the `/` page is of type `Paginator`'
        assert 'page' in response.context, \
            'Check that the `page` variable is passed into the context of the `/` page'
        assert type(response.context['page']) == Page, \
            'Check that the `page` variable on the `/` page is of type `Page`'
| 53.846154 | 117 | 0.640476 | 237 | 2,100 | 5.586498 | 0.265823 | 0.054381 | 0.039275 | 0.07855 | 0.823263 | 0.806647 | 0.753776 | 0.753776 | 0.634441 | 0.620846 | 0 | 0.007444 | 0.232381 | 2,100 | 38 | 118 | 55.263158 | 0.813896 | 0 | 0 | 0.3125 | 0 | 0 | 0.43356 | 0.034433 | 0 | 0 | 0 | 0 | 0.34375 | 1 | 0.0625 | false | 0 | 0.0625 | 0 | 0.15625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
3106814e2f32ef2afba4839e6536055d5c097147 | 137 | py | Python | Numython/Posts/calculando-integrales-utilizando-sympy/sympy_integrales.py | JorgeDeLosSantos/_blogs_ | 62791568bb99d4d21d2fc3648967ec96d306cae0 | [
"MIT"
] | null | null | null | Numython/Posts/calculando-integrales-utilizando-sympy/sympy_integrales.py | JorgeDeLosSantos/_blogs_ | 62791568bb99d4d21d2fc3648967ec96d306cae0 | [
"MIT"
] | null | null | null | Numython/Posts/calculando-integrales-utilizando-sympy/sympy_integrales.py | JorgeDeLosSantos/_blogs_ | 62791568bb99d4d21d2fc3648967ec96d306cae0 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from sympy import integrate, latex, cos
from sympy.abc import x,y,z
f = x+cos(x)
fi = integrate(f,x)
print(fi)
| 13.7 | 39 | 0.649635 | 26 | 137 | 3.423077 | 0.615385 | 0.202247 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009009 | 0.189781 | 137 | 9 | 40 | 15.222222 | 0.792793 | 0.153285 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.4 | null | null | 0.2 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
3118bd8419f61f7ee7328dca9ba3b6a4d4941a97 | 221 | py | Python | changelogs/admin.py | zhelyabuzhsky/changelogger | 8dbbfe978a86fa89a7eae23525c60c5f281fea6c | [
"MIT"
] | 1 | 2020-04-14T06:10:23.000Z | 2020-04-14T06:10:23.000Z | changelogs/admin.py | zhelyabuzhsky/changelogger | 8dbbfe978a86fa89a7eae23525c60c5f281fea6c | [
"MIT"
] | 43 | 2019-08-23T06:23:57.000Z | 2022-03-18T06:30:34.000Z | changelogs/admin.py | zhelyabuzhsky/changelogger | 8dbbfe978a86fa89a7eae23525c60c5f281fea6c | [
"MIT"
] | null | null | null | from django.contrib import admin
from django.contrib.auth.admin import UserAdmin
from .models import Project, User, Version
admin.site.register(Project)
admin.site.register(Version)
admin.site.register(User, UserAdmin)
| 24.555556 | 47 | 0.819005 | 31 | 221 | 5.83871 | 0.419355 | 0.149171 | 0.281768 | 0.265193 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090498 | 221 | 8 | 48 | 27.625 | 0.900498 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
3153dc17a63ece9717b2e332e2c886929e80a8f4 | 47 | py | Python | temppydemo/test/__init__.py | robertberrington/template-python-demo | 0d5ca06b5b61570ff56fda95cb98a05aebdf90d0 | [
"MIT"
] | null | null | null | temppydemo/test/__init__.py | robertberrington/template-python-demo | 0d5ca06b5b61570ff56fda95cb98a05aebdf90d0 | [
"MIT"
] | null | null | null | temppydemo/test/__init__.py | robertberrington/template-python-demo | 0d5ca06b5b61570ff56fda95cb98a05aebdf90d0 | [
"MIT"
] | null | null | null | """Unit tests for the `temppydemo` package."""
| 23.5 | 46 | 0.680851 | 6 | 47 | 5.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12766 | 47 | 1 | 47 | 47 | 0.780488 | 0.851064 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
316daaca26c86dd82d7e97e354ecf3c33c88b05f | 115 | py | Python | grascii.py | chanicpanic/grascii | 654d24529dd8373d9df35f07b06323bb17ed7ffb | [
"MIT"
] | 3 | 2020-10-02T11:45:47.000Z | 2021-06-27T01:16:08.000Z | grascii.py | chanicpanic/grascii | 654d24529dd8373d9df35f07b06323bb17ed7ffb | [
"MIT"
] | 2 | 2021-07-03T23:09:00.000Z | 2021-07-06T17:46:02.000Z | grascii.py | grascii/grascii | 654d24529dd8373d9df35f07b06323bb17ed7ffb | [
"MIT"
] | null | null | null | #!/usr/bin/env python
import sys
import grascii.__main__
if __name__ == "__main__":
grascii.__main__.main()
| 12.777778 | 27 | 0.713043 | 15 | 115 | 4.4 | 0.666667 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.156522 | 115 | 8 | 28 | 14.375 | 0.680412 | 0.173913 | 0 | 0 | 0 | 0 | 0.085106 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
317157e1dff4338f0212a6582b12aa0fc6752b02 | 35 | py | Python | cac/core/model/__init__.py | guoyongcs/CAC | ed80eab340f9ea146315c1b1a25dc66ab709f8e9 | [
"BSD-3-Clause"
] | 3 | 2019-11-04T07:57:06.000Z | 2021-07-16T08:47:02.000Z | cac/core/model/__init__.py | guoyongcs/CAC | ed80eab340f9ea146315c1b1a25dc66ab709f8e9 | [
"BSD-3-Clause"
] | null | null | null | cac/core/model/__init__.py | guoyongcs/CAC | ed80eab340f9ea146315c1b1a25dc66ab709f8e9 | [
"BSD-3-Clause"
] | null | null | null | from .resnet import cifar_resnet20
| 17.5 | 34 | 0.857143 | 5 | 35 | 5.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.064516 | 0.114286 | 35 | 1 | 35 | 35 | 0.870968 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
31df6f205292631cb117bdf4b4440e8854eb987e | 366 | py | Python | metanic/rest/authentication.py | LimpidTech/melody | a00b99f9b697864a078e2cb886be4d75c10458a9 | [
"BSD-3-Clause"
] | null | null | null | metanic/rest/authentication.py | LimpidTech/melody | a00b99f9b697864a078e2cb886be4d75c10458a9 | [
"BSD-3-Clause"
] | 1 | 2020-02-11T21:34:24.000Z | 2020-02-11T21:34:24.000Z | metanic/rest/authentication.py | LimpidTech/melody | a00b99f9b697864a078e2cb886be4d75c10458a9 | [
"BSD-3-Clause"
] | null | null | null | from rest_framework.authentication import BasicAuthentication
from rest_framework.authentication import SessionAuthentication
class RestSessionAuthentication(SessionAuthentication):
def enforce_csrf(self, request):
""" Override this to not verify CSRF since these APIs don't need it. """
# TODO: Don't allow SessionAuthentication in production | 45.75 | 80 | 0.79235 | 40 | 366 | 7.175 | 0.75 | 0.055749 | 0.118467 | 0.216028 | 0.25784 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153005 | 366 | 8 | 81 | 45.75 | 0.925806 | 0.327869 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 0 | 1 | 0.25 | false | 0 | 0.5 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
31e6b7c19d729c3b690be6113261cb5c4286d006 | 30 | py | Python | omegasl/omegasl_lang.py | omega-graphics/omega-gte-project | 9f833e253bafde30ff8a7a2246d308b07cd0b00a | [
"Apache-2.0"
] | 1 | 2021-07-07T17:36:49.000Z | 2021-07-07T17:36:49.000Z | omegasl/omegasl_lang.py | omega-graphics/omega-gte-project | 9f833e253bafde30ff8a7a2246d308b07cd0b00a | [
"Apache-2.0"
] | null | null | null | omegasl/omegasl_lang.py | omega-graphics/omega-gte-project | 9f833e253bafde30ff8a7a2246d308b07cd0b00a | [
"Apache-2.0"
] | null | null | null | import json
import plistlib
| 6 | 15 | 0.8 | 4 | 30 | 6 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 30 | 4 | 16 | 7.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
9eddf46aabaa0b8c6fe16d2cba7286ac37f8eb71 | 288 | py | Python | server-for-lambda/model/__init__.py | DIO0550/door-watcher | 2aff77d6c923ab9385a041aeaa0617b6193568dd | [
"MIT"
] | 1 | 2020-03-23T03:12:21.000Z | 2020-03-23T03:12:21.000Z | server-for-lambda/model/__init__.py | DIO0550/door-watcher | 2aff77d6c923ab9385a041aeaa0617b6193568dd | [
"MIT"
] | 10 | 2020-01-19T09:56:19.000Z | 2020-03-20T18:55:07.000Z | server-for-lambda/model/__init__.py | DIO0550/door-watcher | 2aff77d6c923ab9385a041aeaa0617b6193568dd | [
"MIT"
] | 1 | 2020-03-23T12:09:51.000Z | 2020-03-23T12:09:51.000Z | ###############################################################################
# Generates the shared metaclass used for migrations and OR mapping.
###############################################################################
from sqlalchemy.ext.declarative.api import declarative_base
Base = declarative_base()
| 48 | 79 | 0.375 | 12 | 288 | 8.833333 | 0.666667 | 0.283019 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.048611 | 288 | 5 | 80 | 57.6 | 0.386861 | 0.125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
732809a544a997134c6a8abd384e7246ffe63aa0 | 73 | py | Python | __main__.py | spisnigel/passata | 5dc387174f1cc5e6005365c21491d60a3917091a | [
"Apache-2.0"
] | 1 | 2017-06-27T13:39:49.000Z | 2017-06-27T13:39:49.000Z | __main__.py | spisnigel/passata | 5dc387174f1cc5e6005365c21491d60a3917091a | [
"Apache-2.0"
] | null | null | null | __main__.py | spisnigel/passata | 5dc387174f1cc5e6005365c21491d60a3917091a | [
"Apache-2.0"
] | null | null | null | from passata import PomodoroApp
pomodoro = PomodoroApp()
pomodoro.run()
| 14.6 | 31 | 0.794521 | 8 | 73 | 7.25 | 0.75 | 0.655172 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.123288 | 73 | 4 | 32 | 18.25 | 0.90625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.333333 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 5 |
7328ceebbea12db999ca72e4dbd590b8f2417c5b | 96 | py | Python | storylist/admin.py | akurtovic/STL-Crimelog | 958e3bddfc30b54676837dfbdb784e0e0d35578b | [
"MIT"
] | null | null | null | storylist/admin.py | akurtovic/STL-Crimelog | 958e3bddfc30b54676837dfbdb784e0e0d35578b | [
"MIT"
] | null | null | null | storylist/admin.py | akurtovic/STL-Crimelog | 958e3bddfc30b54676837dfbdb784e0e0d35578b | [
"MIT"
] | 1 | 2019-01-19T17:46:00.000Z | 2019-01-19T17:46:00.000Z | from django.contrib import admin
from storylist.models import Story
admin.site.register(Story)
| 19.2 | 34 | 0.833333 | 14 | 96 | 5.714286 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.104167 | 96 | 4 | 35 | 24 | 0.930233 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
733dc8837424d9cd14d6eb891c58a1f8a518770d | 137 | py | Python | bin/cubes/soma-screw.py | tiwo/puzzler | 7ad3d9a792f0635f7ec59ffa85fb46b54fd77a7e | [
"Intel"
] | null | null | null | bin/cubes/soma-screw.py | tiwo/puzzler | 7ad3d9a792f0635f7ec59ffa85fb46b54fd77a7e | [
"Intel"
] | null | null | null | bin/cubes/soma-screw.py | tiwo/puzzler | 7ad3d9a792f0635f7ec59ffa85fb46b54fd77a7e | [
"Intel"
] | 1 | 2022-01-02T16:54:14.000Z | 2022-01-02T16:54:14.000Z | #!/usr/bin/env python
# $Id$
"""14 solutions"""
import puzzler
from puzzler.puzzles.somacubes import SomaScrew
puzzler.run(SomaScrew)
| 13.7 | 47 | 0.744526 | 18 | 137 | 5.666667 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016529 | 0.116788 | 137 | 9 | 48 | 15.222222 | 0.826446 | 0.277372 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
73414715dbf1ae306efa4cce7831f8c70a8fc438 | 313 | py | Python | fv3gfs/util/units.py | ai2cm/fv3gfs-util | 56fd8e93cefe6951396717a49390c4020a0bc20c | [
"BSD-3-Clause"
] | 1 | 2021-01-05T20:55:01.000Z | 2021-01-05T20:55:01.000Z | fv3gfs/util/units.py | VulcanClimateModeling/fv3gfs-util | 1d7c302b836befe905d776b0a972f464bfd3a255 | [
"BSD-3-Clause"
] | 34 | 2020-11-10T18:06:18.000Z | 2021-07-20T22:46:31.000Z | fv3gfs/util/units.py | ai2cm/fv3gfs-util | 56fd8e93cefe6951396717a49390c4020a0bc20c | [
"BSD-3-Clause"
] | 1 | 2021-08-10T21:36:44.000Z | 2021-08-10T21:36:44.000Z | def ensure_equal_units(units1: str, units2: str) -> None:
if not units_are_equal(units1, units2):
raise UnitsError(f"incompatible units {units1} and {units2}")
def units_are_equal(units1: str, units2: str) -> bool:
return units1.strip() == units2.strip()
class UnitsError(Exception):
pass
| 26.083333 | 69 | 0.699681 | 42 | 313 | 5.071429 | 0.52381 | 0.103286 | 0.140845 | 0.169014 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.03876 | 0.175719 | 313 | 11 | 70 | 28.454545 | 0.786822 | 0 | 0 | 0 | 0 | 0 | 0.127796 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0.142857 | 0 | 0.142857 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 5 |
735585bd644757ff5155d75a317815414d03286e | 22 | py | Python | hello.py | boukhriss/PythonPrograms | 16920a5a337c121d14b9dfaf8a91c1c5aabd0d5a | [
"MIT"
] | null | null | null | hello.py | boukhriss/PythonPrograms | 16920a5a337c121d14b9dfaf8a91c1c5aabd0d5a | [
"MIT"
] | null | null | null | hello.py | boukhriss/PythonPrograms | 16920a5a337c121d14b9dfaf8a91c1c5aabd0d5a | [
"MIT"
] | null | null | null | print ("hello issam")
| 11 | 21 | 0.681818 | 3 | 22 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.136364 | 22 | 1 | 22 | 22 | 0.789474 | 0 | 0 | 0 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
73627902179aa6a872847fbe37bb257e1f24e847 | 77 | py | Python | hrl/hrl.py | guojm14/HRL | b011fa65a82a861a89979257ed63ed3341b01b24 | [
"MIT"
] | 5 | 2021-07-23T09:50:35.000Z | 2022-01-03T07:44:43.000Z | hrl/hrl.py | guojm14/HRL | b011fa65a82a861a89979257ed63ed3341b01b24 | [
"MIT"
] | null | null | null | hrl/hrl.py | guojm14/HRL | b011fa65a82a861a89979257ed63ed3341b01b24 | [
"MIT"
] | null | null | null | def create_hrl():
"""check the type of hrl and return the object"""
| 19.25 | 53 | 0.623377 | 12 | 77 | 3.916667 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.25974 | 77 | 3 | 54 | 25.666667 | 0.824561 | 0.558442 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | true | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 5 |
7df97383da1fb5859030c8afb292f472a23e2dc8 | 274 | py | Python | configs/deepim/ycbvPbrSO/FlowNet512_1.5AugCosyAAEGray_AggressiveV2_Flat_ycbvPbr_SO/FlowNet512_1.5AugCosyAAEGray_AggressiveV2_Flat_Pbr_09_10PottedMeatCan_bop_test.py | THU-DA-6D-Pose-Group/self6dpp | c267cfa55e440e212136a5e9940598720fa21d16 | [
"Apache-2.0"
] | 33 | 2021-12-15T07:11:47.000Z | 2022-03-29T08:58:32.000Z | configs/deepim/ycbvPbrSO/FlowNet512_1.5AugCosyAAEGray_AggressiveV2_Flat_ycbvPbr_SO/FlowNet512_1.5AugCosyAAEGray_AggressiveV2_Flat_Pbr_09_10PottedMeatCan_bop_test.py | THU-DA-6D-Pose-Group/self6dpp | c267cfa55e440e212136a5e9940598720fa21d16 | [
"Apache-2.0"
] | 3 | 2021-12-15T11:39:54.000Z | 2022-03-29T07:24:23.000Z | configs/deepim/ycbvPbrSO/FlowNet512_1.5AugCosyAAEGray_AggressiveV2_Flat_ycbvPbr_SO/FlowNet512_1.5AugCosyAAEGray_AggressiveV2_Flat_Pbr_09_10PottedMeatCan_bop_test.py | THU-DA-6D-Pose-Group/self6dpp | c267cfa55e440e212136a5e9940598720fa21d16 | [
"Apache-2.0"
] | null | null | null | _base_ = "./FlowNet512_1.5AugCosyAAEGray_AggressiveV2_Flat_Pbr_01_02MasterChefCan_bop_test.py"
OUTPUT_DIR = "output/deepim/ycbvPbrSO/FlowNet512_1.5AugCosyAAEGray_AggressiveV2_Flat_ycbvPbr_SO/09_10PottedMeatCan"
DATASETS = dict(TRAIN=("ycbv_010_potted_meat_can_train_pbr",))
| 68.5 | 115 | 0.879562 | 36 | 274 | 6.055556 | 0.777778 | 0.100917 | 0.238532 | 0.348624 | 0.385321 | 0 | 0 | 0 | 0 | 0 | 0 | 0.086792 | 0.032847 | 274 | 3 | 116 | 91.333333 | 0.735849 | 0 | 0 | 0 | 0 | 0 | 0.791971 | 0.791971 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
b41507f5258f94677dac19b7bb9866eae1776ebe | 72 | py | Python | OpenAttack/attack_assist/goal/__init__.py | e-tornike/OpenAttack | b19c53af2e01f096505f8ebb8f48a54388295003 | [
"MIT"
] | 444 | 2020-07-14T12:13:26.000Z | 2022-03-28T02:46:30.000Z | OpenAttack/attack_assist/goal/__init__.py | e-tornike/OpenAttack | b19c53af2e01f096505f8ebb8f48a54388295003 | [
"MIT"
] | 50 | 2020-07-15T01:34:42.000Z | 2022-01-24T12:19:19.000Z | OpenAttack/attack_assist/goal/__init__.py | e-tornike/OpenAttack | b19c53af2e01f096505f8ebb8f48a54388295003 | [
"MIT"
] | 86 | 2020-08-02T13:16:45.000Z | 2022-03-27T06:22:04.000Z | from .base import AttackGoal
from .classifier_goal import ClassifierGoal | 36 | 43 | 0.875 | 9 | 72 | 6.888889 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.097222 | 72 | 2 | 43 | 36 | 0.953846 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |