hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
7e531ed52b747cc6b76b2ae7e55a9d8857f606fc | 35 | py | Python | sepadd/__init__.py | CaptainConsternant/python-sepaxml | da33e3b8ccf02657f6bb351c4515f820d60eb847 | [
"MIT"
] | 53 | 2018-07-24T17:01:47.000Z | 2022-03-25T10:59:04.000Z | sepadd/__init__.py | CaptainConsternant/python-sepaxml | da33e3b8ccf02657f6bb351c4515f820d60eb847 | [
"MIT"
] | 30 | 2018-08-27T10:11:07.000Z | 2022-03-16T21:40:43.000Z | sepadd/__init__.py | CaptainConsternant/python-sepaxml | da33e3b8ccf02657f6bb351c4515f820d60eb847 | [
"MIT"
] | 29 | 2018-09-04T09:50:32.000Z | 2022-02-16T14:32:11.000Z | from sepaxml import SepaDD # noqa
| 17.5 | 34 | 0.771429 | 5 | 35 | 5.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 35 | 1 | 35 | 35 | 0.964286 | 0.114286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0e1b0df56d7290e72e86a2975236ca4f76dbfdfc | 17,897 | py | Python | scripts/models.py | BeckResearchLab/USP-inhibition | 9ade0cf5f19e92a5e4d9579cd2b3012590c1a365 | [
"BSD-3-Clause"
] | 1 | 2017-02-20T20:13:13.000Z | 2017-02-20T20:13:13.000Z | scripts/models.py | BeckResearchLab/USP-inhibition | 9ade0cf5f19e92a5e4d9579cd2b3012590c1a365 | [
"BSD-3-Clause"
] | 1 | 2017-04-20T16:41:16.000Z | 2017-04-20T16:57:39.000Z | scripts/models.py | BeckResearchLab/USP-inhibition | 9ade0cf5f19e92a5e4d9579cd2b3012590c1a365 | [
"BSD-3-Clause"
] | 4 | 2017-04-04T16:38:24.000Z | 2019-07-02T11:27:46.000Z | #!/usr/bin/env python
"""
Construct neural network, support vector, decision tree and other regression models from the data
"""
import pickle
import lasagne
import numpy as np
import sklearn.metrics
from lasagne.layers import DenseLayer
from lasagne.layers import InputLayer
from nolearn.lasagne import NeuralNet
from scipy.stats import randint as sp_randint
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import Ridge, BayesianRidge, Lasso
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RandomizedSearchCV
from sklearn.svm import LinearSVR
from sklearn.tree import DecisionTreeRegressor
__author__ = "Pearl Philip"
__credits__ = "David Beck"
__license__ = "BSD 3-Clause License"
__maintainer__ = "Pearl Philip"
__email__ = "pphilip@uw.edu"
__status__ = "Development"
def run_models(x_train, y_train, x_test, y_test, n_features):
"""
    Runs the machine learning model chosen by the user at the prompt.
    :param x_train: features dataframe for model training
    :param y_train: target dataframe for model training
    :param x_test: features dataframe for model testing
    :param y_test: target dataframe for model testing
    :param n_features: number of features, used to name the saved results file
    :return: None
"""
model_choice = int(input("Type your choice of model to be run:" + "\n" +
"1 for Linear Regression" + "\n" +
"2 for Neural Network" + "\n" +
"3 for Support Vector Machine" + "\n" +
"4 for Decision Tree" + "\n" +
"5 for Ridge Regression" + "\n" +
"6 for Bayesian Ridge Regression" + "\n" +
                             "7 for Lasso" + "\n" +
                             "8 for Random Forest Regressor" + "\n"
))
if model_choice == 1:
build_linear(x_train, y_train, x_test, y_test, n_features)
elif model_choice == 2:
build_nn(x_train, y_train, x_test, y_test, n_features)
elif model_choice == 3:
build_svm(x_train, y_train, x_test, y_test, n_features)
elif model_choice == 4:
build_tree(x_train, y_train, x_test, y_test, n_features)
elif model_choice == 5:
build_ridge(x_train, y_train, x_test, y_test, n_features)
elif model_choice == 6:
build_bayesian_rr(x_train, y_train, x_test, y_test, n_features)
elif model_choice == 7:
build_lasso(x_train, y_train, x_test, y_test, n_features)
elif model_choice == 8:
build_forest(x_train, y_train, x_test, y_test, n_features)
else:
print("Please choose from list of available models only")
return
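The if/elif ladder above maps each menu number to one builder function. A dictionary of builders expresses the same dispatch more compactly; the sketch below is a hypothetical refactoring, not part of the original script:

```python
def dispatch_model(choice, builders, *args):
    """Call the builder registered under `choice`; warn on an unknown key."""
    builder = builders.get(choice)
    if builder is None:
        print("Please choose from list of available models only")
        return None
    return builder(*args)

# usage sketch: builders = {1: build_linear, 2: build_nn, ...}
# dispatch_model(model_choice, builders, x_train, y_train, x_test, y_test, n_features)
```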
def build_linear(x_train, y_train, x_test, y_test, n_features):
"""
    Constructing a linear regression model from input dataframe
    :param x_train: features dataframe for model training
    :param y_train: target dataframe for model training
    :param x_test: features dataframe for model testing
    :param y_test: target dataframe for model testing
    :param n_features: number of features, used to name the saved results file
    :return: None
"""
clf = LinearRegression(n_jobs=-1)
clf.fit(x_train, y_train)
y_pred = clf.predict(x_test)
# Mean absolute error regression loss
mean_abs = sklearn.metrics.mean_absolute_error(y_test, y_pred)
# Mean squared error regression loss
mean_sq = sklearn.metrics.mean_squared_error(y_test, y_pred)
# Median absolute error regression loss
median_abs = sklearn.metrics.median_absolute_error(y_test, y_pred)
# R^2 (coefficient of determination) regression score function
r2 = sklearn.metrics.r2_score(y_test, y_pred)
# Explained variance regression score function
exp_var_score = sklearn.metrics.explained_variance_score(y_test, y_pred)
with open('../trained_networks/lr_%d_data.pkl' % n_features, 'wb') as results:
pickle.dump(clf, results, pickle.HIGHEST_PROTOCOL)
pickle.dump(mean_abs, results, pickle.HIGHEST_PROTOCOL)
pickle.dump(mean_sq, results, pickle.HIGHEST_PROTOCOL)
pickle.dump(median_abs, results, pickle.HIGHEST_PROTOCOL)
pickle.dump(r2, results, pickle.HIGHEST_PROTOCOL)
pickle.dump(exp_var_score, results, pickle.HIGHEST_PROTOCOL)
pickle.dump(y_pred, results, pickle.HIGHEST_PROTOCOL)
return
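`build_linear` above pickles seven objects back to back into a single file, so reading them back requires `pickle.load` calls in exactly the dump order. A small loader sketch (the function name and path handling are illustrative, not from the original):

```python
import pickle

def load_results(path):
    """Read back the objects dumped sequentially by build_linear."""
    with open(path, 'rb') as f:
        clf = pickle.load(f)
        mean_abs = pickle.load(f)
        mean_sq = pickle.load(f)
        median_abs = pickle.load(f)
        r2 = pickle.load(f)
        exp_var_score = pickle.load(f)
        y_pred = pickle.load(f)
    return clf, mean_abs, mean_sq, median_abs, r2, exp_var_score, y_pred
```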
def build_nn(x_train, y_train, x_test, y_test, n_features):
"""
Constructing a regression neural network model from input dataframe
:param x_train: features dataframe for model training
:param y_train: target dataframe for model training
:param x_test: features dataframe for model testing
:param y_test: target dataframe for model testing
:return: None
"""
net = NeuralNet(layers=[('input', InputLayer),
('hidden0', DenseLayer),
('hidden1', DenseLayer),
('output', DenseLayer)],
input_shape=(None, x_train.shape[1]), # Number of i/p nodes = number of columns in x
hidden0_num_units=15,
hidden0_nonlinearity=lasagne.nonlinearities.softmax,
hidden1_num_units=17,
hidden1_nonlinearity=lasagne.nonlinearities.softmax,
output_num_units=1, # Number of o/p nodes = number of columns in y
                    output_nonlinearity=lasagne.nonlinearities.linear,  # linear output for regression; softmax over one unit is constant 1
max_epochs=100,
update_learning_rate=0.01,
regression=True,
verbose=0)
# Finding the optimal set of params for each variable in the training of the neural network
    param_dist = {'hidden0_num_units': sp_randint(3, 30), 'hidden1_num_units': sp_randint(3, 30)}
clf = RandomizedSearchCV(estimator=net, param_distributions=param_dist,
n_iter=15, n_jobs=-1)
clf.fit(x_train, y_train)
y_pred = clf.predict(x_test)
# Mean absolute error regression loss
mean_abs = sklearn.metrics.mean_absolute_error(y_test, y_pred)
# Mean squared error regression loss
mean_sq = sklearn.metrics.mean_squared_error(y_test, y_pred)
# Median absolute error regression loss
median_abs = sklearn.metrics.median_absolute_error(y_test, y_pred)
# R^2 (coefficient of determination) regression score function
r2 = sklearn.metrics.r2_score(y_test, y_pred)
# Explained variance regression score function
exp_var_score = sklearn.metrics.explained_variance_score(y_test, y_pred)
with open('../trained_networks/nn_%d_data.pkl' % n_features, 'wb') as results:
pickle.dump(clf, results, pickle.HIGHEST_PROTOCOL)
pickle.dump(net, results, pickle.HIGHEST_PROTOCOL)
pickle.dump(mean_abs, results, pickle.HIGHEST_PROTOCOL)
pickle.dump(mean_sq, results, pickle.HIGHEST_PROTOCOL)
pickle.dump(median_abs, results, pickle.HIGHEST_PROTOCOL)
pickle.dump(r2, results, pickle.HIGHEST_PROTOCOL)
pickle.dump(exp_var_score, results, pickle.HIGHEST_PROTOCOL)
pickle.dump(y_pred, results, pickle.HIGHEST_PROTOCOL)
return
def build_svm(x_train, y_train, x_test, y_test, n_features):
"""
Constructing a support vector regression model from input dataframe
:param x_train: features dataframe for model training
:param y_train: target dataframe for model training
:param x_test: features dataframe for model testing
:param y_test: target dataframe for model testing
:return: None
"""
clf = LinearSVR(random_state=1, dual=False, epsilon=0,
loss='squared_epsilon_insensitive')
    # An integer random_state makes the fit deterministic and reproducible
clf.fit(x_train, y_train)
y_pred = clf.predict(x_test)
# Mean absolute error regression loss
mean_abs = sklearn.metrics.mean_absolute_error(y_test, y_pred)
# Mean squared error regression loss
mean_sq = sklearn.metrics.mean_squared_error(y_test, y_pred)
# Median absolute error regression loss
median_abs = sklearn.metrics.median_absolute_error(y_test, y_pred)
# R^2 (coefficient of determination) regression score function
r2 = sklearn.metrics.r2_score(y_test, y_pred)
# Explained variance regression score function
exp_var_score = sklearn.metrics.explained_variance_score(y_test, y_pred)
with open('../trained_networks/svm_%d_data.pkl' % n_features, 'wb') as results:
pickle.dump(clf, results, pickle.HIGHEST_PROTOCOL)
pickle.dump(mean_abs, results, pickle.HIGHEST_PROTOCOL)
pickle.dump(mean_sq, results, pickle.HIGHEST_PROTOCOL)
pickle.dump(median_abs, results, pickle.HIGHEST_PROTOCOL)
pickle.dump(r2, results, pickle.HIGHEST_PROTOCOL)
pickle.dump(exp_var_score, results, pickle.HIGHEST_PROTOCOL)
pickle.dump(y_pred, results, pickle.HIGHEST_PROTOCOL)
return
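The same five-metric block is repeated verbatim in every builder. A helper collapsing it into one call is sketched below with pure-Python stand-ins for the `sklearn.metrics` functions (explained variance omitted for brevity); this is an illustration, not the original code:

```python
def regression_metrics(y_true, y_pred):
    """Mean/median absolute error, mean squared error and R^2 for two sequences."""
    n = len(y_true)
    errors = [yt - yp for yt, yp in zip(y_true, y_pred)]
    abs_errors = sorted(abs(e) for e in errors)
    mean_abs = sum(abs_errors) / n
    mean_sq = sum(e * e for e in errors) / n
    mid = n // 2
    median_abs = abs_errors[mid] if n % 2 else (abs_errors[mid - 1] + abs_errors[mid]) / 2
    mean_true = sum(y_true) / n
    ss_tot = sum((yt - mean_true) ** 2 for yt in y_true)  # total sum of squares
    r2 = 1 - mean_sq * n / ss_tot if ss_tot else 0.0
    return mean_abs, mean_sq, median_abs, r2
```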
def build_tree(x_train, y_train, x_test, y_test, n_features):
"""
Constructing a decision trees regression model from input dataframe
:param x_train: features dataframe for model training
:param y_train: target dataframe for model training
:param x_test: features dataframe for model testing
:param y_test: target dataframe for model testing
:return: None
"""
model = DecisionTreeRegressor()
param_dist = {'max_depth': sp_randint(1, 15),
'min_samples_split': sp_randint(2, 15)}
clf = RandomizedSearchCV(estimator=model, param_distributions=param_dist,
n_iter=15, n_jobs=-1)
clf.fit(x_train, y_train)
y_pred = clf.predict(x_test)
print(clf.best_params_, clf.best_score_)
# Mean absolute error regression loss
mean_abs = sklearn.metrics.mean_absolute_error(y_test, y_pred)
# Mean squared error regression loss
mean_sq = sklearn.metrics.mean_squared_error(y_test, y_pred)
# Median absolute error regression loss
median_abs = sklearn.metrics.median_absolute_error(y_test, y_pred)
# R^2 (coefficient of determination) regression score function
r2 = sklearn.metrics.r2_score(y_test, y_pred)
# Explained variance regression score function
exp_var_score = sklearn.metrics.explained_variance_score(y_test, y_pred)
with open('../trained_networks/dt_%d_data.pkl' % n_features, 'wb') as results:
pickle.dump(clf, results, pickle.HIGHEST_PROTOCOL)
pickle.dump(mean_abs, results, pickle.HIGHEST_PROTOCOL)
pickle.dump(mean_sq, results, pickle.HIGHEST_PROTOCOL)
pickle.dump(median_abs, results, pickle.HIGHEST_PROTOCOL)
pickle.dump(r2, results, pickle.HIGHEST_PROTOCOL)
pickle.dump(exp_var_score, results, pickle.HIGHEST_PROTOCOL)
pickle.dump(y_pred, results, pickle.HIGHEST_PROTOCOL)
return
def build_ridge(x_train, y_train, x_test, y_test, n_features):
"""
Constructing a ridge regression model from input dataframe
:param x_train: features dataframe for model training
:param y_train: target dataframe for model training
:param x_test: features dataframe for model testing
:param y_test: target dataframe for model testing
:return: None
"""
clf = Ridge()
clf.fit(x_train, y_train)
y_pred = clf.predict(x_test)
# Mean absolute error regression loss
mean_abs = sklearn.metrics.mean_absolute_error(y_test, y_pred)
# Mean squared error regression loss
mean_sq = sklearn.metrics.mean_squared_error(y_test, y_pred)
# Median absolute error regression loss
median_abs = sklearn.metrics.median_absolute_error(y_test, y_pred)
# R^2 (coefficient of determination) regression score function
r2 = sklearn.metrics.r2_score(y_test, y_pred)
# Explained variance regression score function
exp_var_score = sklearn.metrics.explained_variance_score(y_test, y_pred)
    # Regularization strength used by the model; plain Ridge does not tune
    # alpha by cross-validation (RidgeCV would, and exposes it as alpha_)
    ridge_alpha = clf.alpha
with open('../trained_networks/rr_%d_data.pkl' % n_features, 'wb') as results:
pickle.dump(clf, results, pickle.HIGHEST_PROTOCOL)
pickle.dump(mean_abs, results, pickle.HIGHEST_PROTOCOL)
pickle.dump(mean_sq, results, pickle.HIGHEST_PROTOCOL)
pickle.dump(median_abs, results, pickle.HIGHEST_PROTOCOL)
pickle.dump(r2, results, pickle.HIGHEST_PROTOCOL)
pickle.dump(exp_var_score, results, pickle.HIGHEST_PROTOCOL)
pickle.dump(y_pred, results, pickle.HIGHEST_PROTOCOL)
return
def build_bayesian_rr(x_train, y_train, x_test, y_test, n_features):
"""
Constructing a Bayesian ridge regression model from input dataframe
:param x_train: features dataframe for model training
:param y_train: target dataframe for model training
:param x_test: features dataframe for model testing
:param y_test: target dataframe for model testing
:return: None
"""
clf = BayesianRidge()
clf.fit(x_train, y_train)
y_pred = clf.predict(x_test)
# Mean absolute error regression loss
mean_abs = sklearn.metrics.mean_absolute_error(y_test, y_pred)
# Mean squared error regression loss
mean_sq = sklearn.metrics.mean_squared_error(y_test, y_pred)
# Median absolute error regression loss
median_abs = sklearn.metrics.median_absolute_error(y_test, y_pred)
# R^2 (coefficient of determination) regression score function
r2 = sklearn.metrics.r2_score(y_test, y_pred)
# Explained variance regression score function
exp_var_score = sklearn.metrics.explained_variance_score(y_test, y_pred)
    # Estimated noise precision (alpha_) from the fitted Bayesian model
    ridge_alpha = clf.alpha_
with open('../trained_networks/brr_%d_data.pkl' % n_features, 'wb') as results:
pickle.dump(clf, results, pickle.HIGHEST_PROTOCOL)
pickle.dump(mean_abs, results, pickle.HIGHEST_PROTOCOL)
pickle.dump(mean_sq, results, pickle.HIGHEST_PROTOCOL)
pickle.dump(median_abs, results, pickle.HIGHEST_PROTOCOL)
pickle.dump(r2, results, pickle.HIGHEST_PROTOCOL)
pickle.dump(exp_var_score, results, pickle.HIGHEST_PROTOCOL)
pickle.dump(y_pred, results, pickle.HIGHEST_PROTOCOL)
return
def build_lasso(x_train, y_train, x_test, y_test, n_features):
"""
Constructing a Lasso linear model with cross validation from input dataframe
:param x_train: features dataframe for model training
:param y_train: target dataframe for model training
:param x_test: features dataframe for model testing
:param y_test: target dataframe for model testing
:return: None
"""
model = Lasso(random_state=1)
    # An integer random_state makes the fit deterministic and reproducible
    param_dist = {'alpha': np.arange(0.0001, 1, 0.001).tolist()}
clf = RandomizedSearchCV(estimator=model, param_distributions=param_dist,
n_iter=15, n_jobs=-1)
clf.fit(x_train, y_train)
y_pred = clf.predict(x_test)
print(clf.best_params_, clf.best_score_)
# Mean absolute error regression loss
mean_abs = sklearn.metrics.mean_absolute_error(y_test, y_pred)
# Mean squared error regression loss
mean_sq = sklearn.metrics.mean_squared_error(y_test, y_pred)
# Median absolute error regression loss
median_abs = sklearn.metrics.median_absolute_error(y_test, y_pred)
# R^2 (coefficient of determination) regression score function
r2 = sklearn.metrics.r2_score(y_test, y_pred)
# Explained variance regression score function
exp_var_score = sklearn.metrics.explained_variance_score(y_test, y_pred)
with open('../trained_networks/lasso_%d_data.pkl' % n_features, 'wb') as results:
pickle.dump(clf, results, pickle.HIGHEST_PROTOCOL)
pickle.dump(mean_abs, results, pickle.HIGHEST_PROTOCOL)
pickle.dump(mean_sq, results, pickle.HIGHEST_PROTOCOL)
pickle.dump(median_abs, results, pickle.HIGHEST_PROTOCOL)
pickle.dump(r2, results, pickle.HIGHEST_PROTOCOL)
pickle.dump(exp_var_score, results, pickle.HIGHEST_PROTOCOL)
pickle.dump(y_pred, results, pickle.HIGHEST_PROTOCOL)
return
def build_forest(x_train, y_train, x_test, y_test, n_features):
"""
Constructing a random forest regression model from input dataframe
:param x_train: features dataframe for model training
:param y_train: target dataframe for model training
:param x_test: features dataframe for model testing
:param y_test: target dataframe for model testing
:return: None
"""
model = RandomForestRegressor()
param_dist = {'max_depth': sp_randint(1, 15),
'min_samples_split': sp_randint(2, 15)}
clf = RandomizedSearchCV(estimator=model, param_distributions=param_dist,
n_iter=15, n_jobs=-1)
clf.fit(x_train, y_train)
y_pred = clf.predict(x_test)
# Mean absolute error regression loss
mean_abs = sklearn.metrics.mean_absolute_error(y_test, y_pred)
# Mean squared error regression loss
mean_sq = sklearn.metrics.mean_squared_error(y_test, y_pred)
# Median absolute error regression loss
median_abs = sklearn.metrics.median_absolute_error(y_test, y_pred)
# R^2 (coefficient of determination) regression score function
r2 = sklearn.metrics.r2_score(y_test, y_pred)
# Explained variance regression score function
exp_var_score = sklearn.metrics.explained_variance_score(y_test, y_pred)
with open('../trained_networks/rfr_%d_data.pkl' % n_features, 'wb') as results:
pickle.dump(clf, results, pickle.HIGHEST_PROTOCOL)
pickle.dump(mean_abs, results, pickle.HIGHEST_PROTOCOL)
pickle.dump(mean_sq, results, pickle.HIGHEST_PROTOCOL)
pickle.dump(median_abs, results, pickle.HIGHEST_PROTOCOL)
pickle.dump(r2, results, pickle.HIGHEST_PROTOCOL)
pickle.dump(exp_var_score, results, pickle.HIGHEST_PROTOCOL)
pickle.dump(y_pred, results, pickle.HIGHEST_PROTOCOL)
print(r2)
return
| 44.409429 | 105 | 0.708443 | 2,419 | 17,897 | 4.990079 | 0.090947 | 0.027338 | 0.094441 | 0.132218 | 0.853533 | 0.843758 | 0.836633 | 0.836633 | 0.836633 | 0.829674 | 0 | 0.008131 | 0.2097 | 17,897 | 402 | 106 | 44.519901 | 0.845305 | 0.268872 | 0 | 0.591304 | 0 | 0 | 0.063154 | 0.023987 | 0 | 0 | 0 | 0 | 0 | 1 | 0.03913 | false | 0 | 0.065217 | 0 | 0.143478 | 0.017391 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0e6cc5c869f970643e4ed3e6d6db2a1c51acaa15 | 87 | py | Python | veros_bgc/setup/bgc_global_4deg/__init__.py | AkasDutta/veros-bgc | 47acef00e53516e2394313282e21d403526f5f8b | [
"MIT"
] | 2 | 2020-06-08T09:20:36.000Z | 2022-01-03T07:02:12.000Z | veros_bgc/setup/bgc_global_4deg/__init__.py | AkasDutta/veros-bgc | 47acef00e53516e2394313282e21d403526f5f8b | [
"MIT"
] | null | null | null | veros_bgc/setup/bgc_global_4deg/__init__.py | AkasDutta/veros-bgc | 47acef00e53516e2394313282e21d403526f5f8b | [
"MIT"
] | 1 | 2021-11-23T17:20:11.000Z | 2021-11-23T17:20:11.000Z | from veros_bgc.setup.bgc_global_4deg.bgc_global_four_degree import GlobalFourDegreeBGC
| 43.5 | 86 | 0.91954 | 13 | 87 | 5.692308 | 0.769231 | 0.243243 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012048 | 0.045977 | 87 | 1 | 87 | 87 | 0.879518 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7eda2ca41cb82478006efc88697e52f1165eeb08 | 110 | py | Python | peacemakr/exception/missing_persister.py | peacemakr-io/peacemakr-python-sdk | 180bbc2e480ea855dddf0e28c2f27e83a17bfb84 | [
"Apache-2.0"
] | 3 | 2020-01-27T10:07:29.000Z | 2021-05-17T16:45:59.000Z | peacemakr/exception/missing_persister.py | peacemakr-io/peacemakr-python-sdk | 180bbc2e480ea855dddf0e28c2f27e83a17bfb84 | [
"Apache-2.0"
] | 7 | 2020-06-24T03:55:36.000Z | 2021-03-30T00:43:51.000Z | peacemakr/exception/missing_persister.py | peacemakr-io/peacemakr-python-sdk | 180bbc2e480ea855dddf0e28c2f27e83a17bfb84 | [
"Apache-2.0"
] | 1 | 2021-04-27T04:12:30.000Z | 2021-04-27T04:12:30.000Z | from peacemakr.exception.peacemakr import PeacemakrError
class MissingPersisterError(PeacemakrError):
pass
| 22 | 56 | 0.863636 | 10 | 110 | 9.5 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 110 | 4 | 57 | 27.5 | 0.95 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
7d3e18a86cb943302d2948e925215ea084b3bc78 | 43 | py | Python | TestPyqt/util/printHello.py | ppcrong/TestPyqt | b7294f88d99c0864abe7f9d2cb4a006c293a2ad7 | [
"Apache-2.0"
] | null | null | null | TestPyqt/util/printHello.py | ppcrong/TestPyqt | b7294f88d99c0864abe7f9d2cb4a006c293a2ad7 | [
"Apache-2.0"
] | null | null | null | TestPyqt/util/printHello.py | ppcrong/TestPyqt | b7294f88d99c0864abe7f9d2cb4a006c293a2ad7 | [
"Apache-2.0"
] | null | null | null | def runscript():
print('Hello World!')
| 14.333333 | 25 | 0.627907 | 5 | 43 | 5.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.186047 | 43 | 2 | 26 | 21.5 | 0.771429 | 0 | 0 | 0 | 0 | 0 | 0.27907 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
adb41daf81408ccc93317c9ea7ba6360a0ef5677 | 85 | py | Python | Lib/test/lazyimports/deferred_resolve_failure/utilities/type_from_ast.py | mananpal1997/cinder | a8804cc6e3a5861463ff959abcd09ad60a0763e5 | [
"CNRI-Python-GPL-Compatible"
] | 1,886 | 2021-05-03T23:58:43.000Z | 2022-03-31T19:15:58.000Z | Lib/test/lazyimports/deferred_resolve_failure/utilities/type_from_ast.py | mananpal1997/cinder | a8804cc6e3a5861463ff959abcd09ad60a0763e5 | [
"CNRI-Python-GPL-Compatible"
] | 70 | 2021-05-04T23:25:35.000Z | 2022-03-31T18:42:08.000Z | Lib/test/lazyimports/deferred_resolve_failure/utilities/type_from_ast.py | mananpal1997/cinder | a8804cc6e3a5861463ff959abcd09ad60a0763e5 | [
"CNRI-Python-GPL-Compatible"
] | 52 | 2021-05-04T21:26:03.000Z | 2022-03-08T18:02:56.000Z | from ..type import GraphQLSchema
def type_from_ast(schema: GraphQLSchema):
pass
| 17 | 41 | 0.776471 | 11 | 85 | 5.818182 | 0.727273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.152941 | 85 | 4 | 42 | 21.25 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
70d0e9b1a4f8db891688801fad5c31465eea9476 | 33 | py | Python | player/__init__.py | TvSeriesFans/CineMonster | 036a3223618afd536932d21b0e86d18d0fba3b28 | [
"Apache-2.0"
] | 15 | 2017-09-17T17:52:43.000Z | 2020-08-31T15:41:12.000Z | player/__init__.py | TvSeriesFans/CineMonster | 036a3223618afd536932d21b0e86d18d0fba3b28 | [
"Apache-2.0"
] | 13 | 2017-03-14T13:24:14.000Z | 2021-08-20T13:52:54.000Z | player/__init__.py | TvSeriesFans/CineMonster | 036a3223618afd536932d21b0e86d18d0fba3b28 | [
"Apache-2.0"
] | 27 | 2017-07-01T18:33:49.000Z | 2021-08-05T09:13:18.000Z | from player.Player import Player
| 16.5 | 32 | 0.848485 | 5 | 33 | 5.6 | 0.6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121212 | 33 | 1 | 33 | 33 | 0.965517 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
cb74690f8139b3ebd9e675d4ff5ac36b272ca8f6 | 1,026 | py | Python | unit_tests/core/cookies/test_views.py | code-review-doctor/lite-frontend-1 | cb3b885bb389ea33ef003c916bea7b03a36d86bb | [
"MIT"
] | 1 | 2021-08-16T09:19:32.000Z | 2021-08-16T09:19:32.000Z | unit_tests/core/cookies/test_views.py | code-review-doctor/lite-frontend-1 | cb3b885bb389ea33ef003c916bea7b03a36d86bb | [
"MIT"
] | 1 | 2021-09-24T10:58:08.000Z | 2021-09-24T13:32:30.000Z | export_support/cookies/tests/test_views.py | uktrade/export-support | 5f4f445ddb1836737484439f9f81f05d3fc1aaa9 | [
"MIT"
] | null | null | null | from django.urls import reverse
def test_prev_page_allowed_path(client):
url = reverse("cookies:cookies-preferences")
response = client.get(f"{url}?from=/next-url/")
assert response.status_code == 200
assert response.context["prev_page"] == "/next-url/"
url = reverse("cookies:cookies-preferences")
response = client.get(f"{url}?from=http://testserver/next-url/")
assert response.status_code == 200
assert response.context["prev_page"] == "http://testserver/next-url/"
def test_prev_page_different_host(client, settings):
url = reverse("cookies:cookies-preferences")
response = client.get(f"{url}?from=http://not-the-same.com/next/")
assert response.status_code == 200
assert response.context["prev_page"] == "/"
def test_prev_page_different_scheme(client, settings):
url = reverse("cookies:cookies-preferences")
response = client.get(f"{url}?from=https://test-server/next/")
assert response.status_code == 200
assert response.context["prev_page"] == "/"
| 36.642857 | 73 | 0.706628 | 134 | 1,026 | 5.261194 | 0.268657 | 0.158865 | 0.096454 | 0.13617 | 0.808511 | 0.740426 | 0.740426 | 0.740426 | 0.740426 | 0.740426 | 0 | 0.013529 | 0.135478 | 1,026 | 27 | 74 | 38 | 0.781285 | 0 | 0 | 0.5 | 0 | 0 | 0.309942 | 0.125731 | 0 | 0 | 0 | 0 | 0.4 | 1 | 0.15 | false | 0 | 0.05 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
cb8686a1dde192080fd9d92d57ea8101e52e61bd | 4,319 | py | Python | nitorch/tests/test_fft.py | balbasty/nitorch | d30c3125a8a66ea1434f2b39ed03338afd9724b4 | [
"MIT"
] | 46 | 2020-07-31T10:14:05.000Z | 2022-03-24T12:51:46.000Z | nitorch/tests/test_fft.py | balbasty/nitorch | d30c3125a8a66ea1434f2b39ed03338afd9724b4 | [
"MIT"
] | 36 | 2020-10-06T19:01:38.000Z | 2022-02-03T18:07:35.000Z | nitorch/tests/test_fft.py | balbasty/nitorch | d30c3125a8a66ea1434f2b39ed03338afd9724b4 | [
"MIT"
] | 6 | 2021-01-05T14:59:05.000Z | 2021-11-18T18:26:45.000Z | import pytest
import torch
from nitorch.core.optionals import try_import
from nitorch.core import fft as nifft
pyfft = try_import('torch.fft', _as=True)
_torch_has_old_fft = nifft._torch_has_old_fft
_torch_has_complex = nifft._torch_has_complex
_torch_has_fft_module = nifft._torch_has_fft_module
_torch_has_fftshift = nifft._torch_has_fftshift
norms = ('forward', 'backward', 'ortho')
ndims = (1, 2, 3, 4)
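The tests below compare torch.fft's native complex tensors against nifft's representation, in which a trailing dimension of size 2 holds (real, imag) pairs. The conversion convention, sketched here with plain Python complex numbers for illustration:

```python
def pairs_to_complex(pairs):
    """(real, imag) tuples -> complex numbers, mirroring torch.complex(x[..., 0], x[..., 1])."""
    return [complex(re, im) for re, im in pairs]

def complex_to_pairs(zs):
    """Complex numbers -> (real, imag) tuples, the layout nifft expects."""
    return [(z.real, z.imag) for z in zs]
```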
@pytest.mark.parametrize('norm', norms)
def test_fft(norm):
if not _torch_has_fft_module or not _torch_has_old_fft:
        pytest.skip('torch.fft module not available')
nifft._torch_has_complex = False
nifft._torch_has_fft_module = False
nifft._torch_has_fftshift = False
    x = torch.randn([16, 32, 2], dtype=torch.double)
f1 = pyfft.fft(torch.complex(x[..., 0], x[..., 1]), norm=norm)
f2 = nifft.fft(x, norm=norm)
f2 = torch.complex(f2[..., 0], f2[..., 1])
assert torch.allclose(f1, f2)
    x = torch.randn([16, 32, 2], dtype=torch.double)
f1 = pyfft.fft(torch.complex(x[..., 0], x[..., 1]), dim=0, norm=norm)
f2 = nifft.fft(x, dim=0, norm=norm)
f2 = torch.complex(f2[..., 0], f2[..., 1])
assert torch.allclose(f1, f2)
    x = torch.randn([16, 32], dtype=torch.double)
f1 = pyfft.fft(x, norm=norm)
f2 = nifft.fft(x, real=True, norm=norm)
f2 = torch.complex(f2[..., 0], f2[..., 1])
assert torch.allclose(f1, f2)
@pytest.mark.parametrize('norm', norms)
def test_ifft(norm):
    if not _torch_has_fft_module or not _torch_has_old_fft:
        pytest.skip('requires both the torch.fft module and the legacy fft API')
nifft._torch_has_complex = False
nifft._torch_has_fft_module = False
nifft._torch_has_fftshift = False
    x = torch.randn([16, 32, 2], dtype=torch.double)
f1 = pyfft.ifft(torch.complex(x[..., 0], x[..., 1]), norm=norm)
f2 = nifft.ifft(x, norm=norm)
f2 = torch.complex(f2[..., 0], f2[..., 1])
assert torch.allclose(f1, f2)
    x = torch.randn([16, 32, 2], dtype=torch.double)
f1 = pyfft.ifft(torch.complex(x[..., 0], x[..., 1]), dim=0, norm=norm)
f2 = nifft.ifft(x, dim=0, norm=norm)
f2 = torch.complex(f2[..., 0], f2[..., 1])
assert torch.allclose(f1, f2)
    x = torch.randn([16, 32], dtype=torch.double)
f1 = pyfft.ifft(x, norm=norm)
f2 = nifft.ifft(x, real=True, norm=norm)
f2 = torch.complex(f2[..., 0], f2[..., 1])
assert torch.allclose(f1, f2)
@pytest.mark.parametrize('norm', norms)
@pytest.mark.parametrize('ndim', ndims)
@pytest.mark.parametrize('shuffle', (True, False))
def test_fftn(norm, ndim, shuffle):
    if not _torch_has_fft_module or not _torch_has_old_fft:
        pytest.skip('requires both the torch.fft module and the legacy fft API')
nifft._torch_has_complex = False
nifft._torch_has_fft_module = False
nifft._torch_has_fftshift = False
dims = [0, 1, -2, 3]
if shuffle:
import random
random.shuffle(dims)
dims = dims[:ndim]
x = torch.randn([4, 9, 16, 33, 2], dtype=torch.double)
f1 = pyfft.fftn(torch.complex(x[..., 0], x[..., 1]), norm=norm, dim=dims)
f2 = nifft.fftn(x, norm=norm, dim=dims)
f2 = torch.complex(f2[..., 0], f2[..., 1])
assert torch.allclose(f1, f2)
x = torch.randn([4, 9, 16, 33], dtype=torch.double)
f1 = pyfft.fftn(x, norm=norm, dim=dims)
f2 = nifft.fftn(x, real=True, norm=norm, dim=dims)
f2 = torch.complex(f2[..., 0], f2[..., 1])
assert torch.allclose(f1, f2, atol=1e-5)
@pytest.mark.parametrize('norm', norms)
@pytest.mark.parametrize('ndim', ndims)
@pytest.mark.parametrize('shuffle', (True, False))
def test_ifftn(norm, ndim, shuffle):
    if not _torch_has_fft_module or not _torch_has_old_fft:
        pytest.skip('requires both the torch.fft module and the legacy fft API')
nifft._torch_has_complex = False
nifft._torch_has_fft_module = False
nifft._torch_has_fftshift = False
dims = [0, 1, -2, 3]
if shuffle:
import random
random.shuffle(dims)
dims = dims[:ndim]
x = torch.randn([4, 9, 16, 33, 2], dtype=torch.double)
f1 = pyfft.ifftn(torch.complex(x[..., 0], x[..., 1]), norm=norm, dim=dims)
f2 = nifft.ifftn(x, norm=norm, dim=dims)
f2 = torch.complex(f2[..., 0], f2[..., 1])
assert torch.allclose(f1, f2)
x = torch.randn([4, 9, 16, 33], dtype=torch.double)
f1 = pyfft.ifftn(x, norm=norm, dim=dims)
f2 = nifft.ifftn(x, real=True, norm=norm, dim=dims)
f2 = torch.complex(f2[..., 0], f2[..., 1])
assert torch.allclose(f1, f2, atol=1e-5)
| 32.719697 | 78 | 0.631628 | 680 | 4,319 | 3.854412 | 0.092647 | 0.085464 | 0.079359 | 0.064861 | 0.888211 | 0.879054 | 0.853491 | 0.832507 | 0.808852 | 0.808852 | 0 | 0.050488 | 0.192869 | 4,319 | 131 | 79 | 32.969466 | 0.701377 | 0 | 0 | 0.660194 | 0 | 0 | 0.015516 | 0 | 0 | 0 | 0 | 0 | 0.097087 | 1 | 0.038835 | false | 0 | 0.067961 | 0 | 0.145631 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
cb945213eb343aae33a206a4afc02487c8db444f | 176 | py | Python | emtf_nnet/keras/quantization/__init__.py | jiafulow/emtf-nnet | 70a6c747c221178f9db940197ea886bdb60bf3ba | [
"Apache-2.0"
] | null | null | null | emtf_nnet/keras/quantization/__init__.py | jiafulow/emtf-nnet | 70a6c747c221178f9db940197ea886bdb60bf3ba | [
"Apache-2.0"
] | null | null | null | emtf_nnet/keras/quantization/__init__.py | jiafulow/emtf-nnet | 70a6c747c221178f9db940197ea886bdb60bf3ba | [
"Apache-2.0"
] | null | null | null | from .quantize_model import quantize_annotate_model, quantize_model, quantize_scope
from tensorflow_model_optimization.python.core.quantization.keras import quantize_annotate
| 44 | 90 | 0.897727 | 22 | 176 | 6.818182 | 0.545455 | 0.173333 | 0.293333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0625 | 176 | 3 | 91 | 58.666667 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
cba7db3f25732397f118d3ec9c69484fd99af5aa | 35 | py | Python | src/CryptoPlus/Util/randpool.py | koujiandou-mhfg/python-cryptoplus | a5a1f8aecce4ddf476b2d80b586822d9e91eeb7d | [
"MIT"
] | 64 | 2015-01-15T10:41:41.000Z | 2022-01-10T23:51:42.000Z | src/CryptoPlus/Util/randpool.py | koujiandou-mhfg/python-cryptoplus | a5a1f8aecce4ddf476b2d80b586822d9e91eeb7d | [
"MIT"
] | 5 | 2016-01-05T17:48:22.000Z | 2018-02-22T04:32:17.000Z | src/CryptoPlus/Util/randpool.py | koujiandou-mhfg/python-cryptoplus | a5a1f8aecce4ddf476b2d80b586822d9e91eeb7d | [
"MIT"
] | 53 | 2015-04-14T00:17:02.000Z | 2022-03-12T05:32:05.000Z | from Crypto.Util.randpool import *
| 17.5 | 34 | 0.8 | 5 | 35 | 5.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114286 | 35 | 1 | 35 | 35 | 0.903226 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
cba7ec42248377443ecfec4c0c0072be2e6cb90d | 90 | py | Python | tests/spec_test_cases/test_containers.py | mkwiatkowski/pinocchio | 1281aa42a8f1f950c8987cde6df16570d353ee2f | [
"MIT"
] | 10 | 2015-01-23T18:53:23.000Z | 2022-03-26T06:09:27.000Z | tests/spec_test_cases/test_containers.py | mkwiatkowski/pinocchio | 1281aa42a8f1f950c8987cde6df16570d353ee2f | [
"MIT"
] | 7 | 2015-01-18T17:28:38.000Z | 2021-05-18T10:29:24.000Z | tests/spec_test_cases/test_containers.py | mkwiatkowski/pinocchio | 1281aa42a8f1f950c8987cde6df16570d353ee2f | [
"MIT"
] | 2 | 2015-01-23T18:58:54.000Z | 2015-09-14T10:09:51.000Z | def test_are_marked_as_deprecated():
pass
def test_doesnt_work_with_sets():
pass
| 15 | 36 | 0.766667 | 14 | 90 | 4.357143 | 0.785714 | 0.229508 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 90 | 5 | 37 | 18 | 0.813333 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
cbe32f16c66a15823f2e6cbe36a5cfc2916c536f | 93 | py | Python | spring_cloud/gateway/pathpattern/__init__.py | haribo0915/Spring-Cloud-in-Python | 0bcd7093869c797df14428bf2d1b0a779f96e573 | [
"Apache-2.0"
] | 5 | 2020-10-06T09:48:23.000Z | 2020-10-07T13:19:46.000Z | spring_cloud/gateway/pathpattern/__init__.py | haribo0915/Spring-Cloud-in-Python | 0bcd7093869c797df14428bf2d1b0a779f96e573 | [
"Apache-2.0"
] | 5 | 2020-10-05T09:57:01.000Z | 2020-10-12T19:52:48.000Z | spring_cloud/gateway/pathpattern/__init__.py | haribo0915/Spring-Cloud-in-Python | 0bcd7093869c797df14428bf2d1b0a779f96e573 | [
"Apache-2.0"
] | 8 | 2020-10-05T06:34:49.000Z | 2020-10-07T13:19:46.000Z | # -*- coding: utf-8 -*-
from .elements import *
from .parser import *
from .pattern import *
| 18.6 | 23 | 0.655914 | 12 | 93 | 5.083333 | 0.666667 | 0.327869 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013158 | 0.182796 | 93 | 4 | 24 | 23.25 | 0.789474 | 0.225806 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1db9fb3f09abf32caf22270167f9134d59878fb2 | 27 | py | Python | fastai/utils/__init__.py | bhoomit/fastai | 45c8e42bffd8d6cb4c9dc709136be566666f21b3 | [
"Apache-2.0"
] | 2 | 2019-03-06T23:19:16.000Z | 2020-08-12T23:44:31.000Z | fastai/utils/__init__.py | bhoomit/fastai | 45c8e42bffd8d6cb4c9dc709136be566666f21b3 | [
"Apache-2.0"
] | 3 | 2021-05-20T19:59:09.000Z | 2022-02-26T09:11:29.000Z | fastai/utils/__init__.py | bhoomit/fastai | 45c8e42bffd8d6cb4c9dc709136be566666f21b3 | [
"Apache-2.0"
] | 2 | 2018-09-19T09:35:09.000Z | 2018-10-03T09:08:12.000Z | from .collect_env import *
| 13.5 | 26 | 0.777778 | 4 | 27 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 27 | 1 | 27 | 27 | 0.869565 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3824889bf7f50fb993c7a301f9b378cd49cc8917 | 29 | py | Python | bin/test_mo519.py | maxozo/nf_example | c6746c075e88bcc09465bcfeb2c7cecae0e118ca | [
"MIT"
] | null | null | null | bin/test_mo519.py | maxozo/nf_example | c6746c075e88bcc09465bcfeb2c7cecae0e118ca | [
"MIT"
] | null | null | null | bin/test_mo519.py | maxozo/nf_example | c6746c075e88bcc09465bcfeb2c7cecae0e118ca | [
"MIT"
] | null | null | null |
print("Yes, I am running!") | 9.666667 | 27 | 0.62069 | 5 | 29 | 3.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.172414 | 29 | 3 | 27 | 9.666667 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0.642857 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
6987066e8d1415b6ed66059b2dc92722e8d20a07 | 5,972 | py | Python | sdk/python/tests/integration/registration/test_cli_apply_duplicated_featureview_names.py | danilopeixoto/feast | 57d134355364654a2275b477b3b82b149f0779ca | [
"Apache-2.0"
] | 2,258 | 2020-05-17T02:41:07.000Z | 2022-03-31T22:30:57.000Z | sdk/python/tests/integration/registration/test_cli_apply_duplicated_featureview_names.py | danilopeixoto/feast | 57d134355364654a2275b477b3b82b149f0779ca | [
"Apache-2.0"
] | 1,768 | 2020-05-16T05:37:28.000Z | 2022-03-31T23:30:05.000Z | sdk/python/tests/integration/registration/test_cli_apply_duplicated_featureview_names.py | danilopeixoto/feast | 57d134355364654a2275b477b3b82b149f0779ca | [
"Apache-2.0"
] | 415 | 2020-05-16T18:21:27.000Z | 2022-03-31T09:59:10.000Z | import tempfile
from pathlib import Path
from textwrap import dedent
from tests.utils.cli_utils import CliRunner, get_example_repo
def test_cli_apply_duplicated_featureview_names() -> None:
"""
Test apply feature views with duplicated names and single py file in a feature repo using CLI
"""
with tempfile.TemporaryDirectory() as repo_dir_name, tempfile.TemporaryDirectory() as data_dir_name:
runner = CliRunner()
# Construct an example repo in a temporary dir
repo_path = Path(repo_dir_name)
data_path = Path(data_dir_name)
repo_config = repo_path / "feature_store.yaml"
repo_config.write_text(
dedent(
f"""
project: foo
registry: {data_path / "registry.db"}
provider: local
online_store:
path: {data_path / "online_store.db"}
"""
)
)
repo_example = repo_path / "example.py"
repo_example.write_text(
get_example_repo(
"example_feature_repo_with_duplicated_featureview_names.py"
)
)
rc, output = runner.run_with_output(["apply"], cwd=repo_path)
assert (
rc != 0
and b"Please ensure that all feature view names are case-insensitively unique"
in output
)
def test_cli_apply_imported_featureview() -> None:
"""
Test apply feature views with duplicated names and single py file in a feature repo using CLI
"""
with tempfile.TemporaryDirectory() as repo_dir_name, tempfile.TemporaryDirectory() as data_dir_name:
runner = CliRunner()
# Construct an example repo in a temporary dir
repo_path = Path(repo_dir_name)
data_path = Path(data_dir_name)
repo_config = repo_path / "feature_store.yaml"
repo_config.write_text(
dedent(
f"""
project: foo
registry: {data_path / "registry.db"}
provider: local
online_store:
path: {data_path / "online_store.db"}
"""
)
)
repo_example = repo_path / "example.py"
repo_example.write_text(get_example_repo("example_feature_repo_2.py"))
repo_example_2 = repo_path / "example_2.py"
repo_example_2.write_text(
"from example import driver_hourly_stats_view\n"
"from feast import FeatureService\n"
"a_feature_service = FeatureService(\n"
" name='driver_locations_service',\n"
" features=[driver_hourly_stats_view],\n"
")\n"
)
rc, output = runner.run_with_output(["apply"], cwd=repo_path)
assert rc == 0
assert b"Created feature service driver_locations_service" in output
def test_cli_apply_imported_featureview_with_duplication() -> None:
"""
Test apply feature views with duplicated names and single py file in a feature repo using CLI
"""
with tempfile.TemporaryDirectory() as repo_dir_name, tempfile.TemporaryDirectory() as data_dir_name:
runner = CliRunner()
# Construct an example repo in a temporary dir
repo_path = Path(repo_dir_name)
data_path = Path(data_dir_name)
repo_config = repo_path / "feature_store.yaml"
repo_config.write_text(
dedent(
f"""
project: foo
registry: {data_path / "registry.db"}
provider: local
online_store:
path: {data_path / "online_store.db"}
"""
)
)
repo_example = repo_path / "example.py"
repo_example.write_text(get_example_repo("example_feature_repo_2.py"))
repo_example_2 = repo_path / "example_2.py"
repo_example_2.write_text(
"from datetime import timedelta\n"
"from example import driver_hourly_stats, driver_hourly_stats_view\n"
"from feast import FeatureService, FeatureView\n"
"a_feature_service = FeatureService(\n"
" name='driver_locations_service',\n"
" features=[driver_hourly_stats_view],\n"
")\n"
"driver_hourly_stats_view_2 = FeatureView(\n"
" name='driver_hourly_stats',\n"
" entities=['driver_id'],\n"
" ttl=timedelta(days=1),\n"
" online=True,\n"
" batch_source=driver_hourly_stats,\n"
" tags={'dummy': 'true'})\n"
)
rc, output = runner.run_with_output(["apply"], cwd=repo_path)
assert rc != 0
assert (
b"More than one feature view with name driver_hourly_stats found." in output
)
def test_cli_apply_duplicated_featureview_names_multiple_py_files() -> None:
"""
Test apply feature views with duplicated names from multiple py files in a feature repo using CLI
"""
with tempfile.TemporaryDirectory() as repo_dir_name, tempfile.TemporaryDirectory() as data_dir_name:
runner = CliRunner()
# Construct an example repo in a temporary dir
repo_path = Path(repo_dir_name)
data_path = Path(data_dir_name)
repo_config = repo_path / "feature_store.yaml"
repo_config.write_text(
dedent(
f"""
project: foo
registry: {data_path / "registry.db"}
provider: local
online_store:
path: {data_path / "online_store.db"}
"""
)
)
# Create multiple py files containing the same feature view name
for i in range(3):
repo_example = repo_path / f"example{i}.py"
repo_example.write_text(get_example_repo("example_feature_repo_2.py"))
rc, output = runner.run_with_output(["apply"], cwd=repo_path)
assert (
rc != 0
and b"Please ensure that all feature view names are case-insensitively unique"
in output
)
| 33.363128 | 104 | 0.610348 | 711 | 5,972 | 4.841069 | 0.151899 | 0.041836 | 0.044451 | 0.030506 | 0.838466 | 0.838466 | 0.822487 | 0.798664 | 0.7638 | 0.734166 | 0 | 0.003853 | 0.304588 | 5,972 | 178 | 105 | 33.550562 | 0.824946 | 0.10432 | 0 | 0.664122 | 0 | 0 | 0.360901 | 0.092922 | 0 | 0 | 0 | 0 | 0.045802 | 1 | 0.030534 | false | 0 | 0.083969 | 0 | 0.114504 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
69d7fd0f4ebc6397446ca038d9fe8a7ba2f84d55 | 37 | py | Python | tests/unit/cli/test_evaluate.py | flickerfly/anchore-cli | bdfd0252db8d159191a39b9f7dd360fc8de3d20a | [
"Apache-2.0"
] | null | null | null | tests/unit/cli/test_evaluate.py | flickerfly/anchore-cli | bdfd0252db8d159191a39b9f7dd360fc8de3d20a | [
"Apache-2.0"
] | 1 | 2021-01-13T22:22:41.000Z | 2021-01-13T22:23:26.000Z | tests/unit/cli/test_evaluate.py | flickerfly/anchore-cli | bdfd0252db8d159191a39b9f7dd360fc8de3d20a | [
"Apache-2.0"
] | null | null | null | from anchorecli.cli import evaluate
| 12.333333 | 35 | 0.837838 | 5 | 37 | 6.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.135135 | 37 | 2 | 36 | 18.5 | 0.96875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
385dabb70926ef3711b1127953c1117097608bd2 | 60 | py | Python | HW/import_monster/src/__init__.py | SaidMuratbekov/msai-python | fc694d9d6571af8dbdf162a35f98b6ffdd079396 | [
"MIT"
] | null | null | null | HW/import_monster/src/__init__.py | SaidMuratbekov/msai-python | fc694d9d6571af8dbdf162a35f98b6ffdd079396 | [
"MIT"
] | null | null | null | HW/import_monster/src/__init__.py | SaidMuratbekov/msai-python | fc694d9d6571af8dbdf162a35f98b6ffdd079396 | [
"MIT"
] | null | null | null | from . import methods_importer # noga
from . import * # noga | 30 | 37 | 0.733333 | 8 | 60 | 5.375 | 0.625 | 0.465116 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.183333 | 60 | 2 | 38 | 30 | 0.877551 | 0.15 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
38806e810e77010e7bd4578fd89fddcb59951a4a | 102 | py | Python | app/handlers/homework/__init__.py | vitaliy-ukiru/math-bot | 72c116b4f5a4aa6a5f8eaae67ecbbf3df821f9e9 | [
"MIT"
] | 1 | 2021-12-11T07:41:38.000Z | 2021-12-11T07:41:38.000Z | app/handlers/homework/__init__.py | vitaliy-ukiru/math-bot | 72c116b4f5a4aa6a5f8eaae67ecbbf3df821f9e9 | [
"MIT"
] | 8 | 2021-05-08T21:48:34.000Z | 2022-01-20T15:42:00.000Z | app/handlers/homework/__init__.py | vitaliy-ukiru/math-bot | 72c116b4f5a4aa6a5f8eaae67ecbbf3df821f9e9 | [
"MIT"
] | null | null | null | __all__ = ("dp",)
from .commands import dp
from .callback_mode import dp
from .inline_mode import dp
| 17 | 29 | 0.754902 | 16 | 102 | 4.4375 | 0.5 | 0.253521 | 0.338028 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.156863 | 102 | 5 | 30 | 20.4 | 0.825581 | 0 | 0 | 0 | 0 | 0 | 0.019608 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.75 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
38884279b7f04430315f5cb3aa3ddb2510ef320a | 37 | py | Python | bolsa/urls.py | IgorAlmeeida/pet-site | 265c1d622548093c4b58679efe5f8b8f3c7ebb84 | [
"MIT"
] | null | null | null | bolsa/urls.py | IgorAlmeeida/pet-site | 265c1d622548093c4b58679efe5f8b8f3c7ebb84 | [
"MIT"
] | null | null | null | bolsa/urls.py | IgorAlmeeida/pet-site | 265c1d622548093c4b58679efe5f8b8f3c7ebb84 | [
"MIT"
] | null | null | null | from django.urls import path, include | 37 | 37 | 0.837838 | 6 | 37 | 5.166667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108108 | 37 | 1 | 37 | 37 | 0.939394 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2a06af3fc074cd117eaa9c0c366a5228436c4df4 | 167 | py | Python | backend/trips/admin.py | repeating/PoputchikiInno | 54b60cfd3c40a25357667c4044fd477f3b6b9152 | [
"CC-BY-4.0"
] | 20 | 2021-09-23T16:33:34.000Z | 2022-01-08T08:56:10.000Z | backend/trips/admin.py | repeating/PoputchikiInno | 54b60cfd3c40a25357667c4044fd477f3b6b9152 | [
"CC-BY-4.0"
] | null | null | null | backend/trips/admin.py | repeating/PoputchikiInno | 54b60cfd3c40a25357667c4044fd477f3b6b9152 | [
"CC-BY-4.0"
] | 2 | 2021-09-23T16:31:39.000Z | 2021-12-17T01:02:01.000Z | from django.contrib import admin
from .models import CarTrip, Relation
admin.site.register(CarTrip)
from django.contrib import admin
admin.site.register(Relation)
| 16.7 | 37 | 0.814371 | 23 | 167 | 5.913043 | 0.434783 | 0.147059 | 0.25 | 0.338235 | 0.411765 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.113772 | 167 | 9 | 38 | 18.555556 | 0.918919 | 0 | 0 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.6 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
2a1d665db4aa7f8019dbeb2f16897614a25a48f2 | 8,167 | py | Python | pybargain_protocol/tests/test_bargaining_proposal.py | LaurentMT/pybargain_protocol | 3b4c6040ec3562ce6921f917c97a9931d5c6e5de | [
"MIT"
] | 1 | 2015-06-30T15:34:41.000Z | 2015-06-30T15:34:41.000Z | pybargain_protocol/tests/test_bargaining_proposal.py | LaurentMT/pybargain_protocol | 3b4c6040ec3562ce6921f917c97a9931d5c6e5de | [
"MIT"
] | null | null | null | pybargain_protocol/tests/test_bargaining_proposal.py | LaurentMT/pybargain_protocol | 3b4c6040ec3562ce6921f917c97a9931d5c6e5de | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
'''
Version: 0.0.1
Python library for the bargaining protocol
'''
import json
import unittest
from pybargain_protocol.constants import *
from pybargain_protocol.bargaining_message import BargainingMessage
from pybargain_protocol.tests.helpers import build_valid_single_tx, build_proposal_message, build_request_ack_message, build_unsigned_single_tx
from pybargain_protocol.tests.values import *
'''
SPECIFIC TEST VALUES
'''
VALID_MIN_SIG = 'G2hoKHDmsSkxGgBGCtG5i848dhIu7KC2k0G80UdMgYTQjGYtLsnM6eQixwcUp0OTDSenqEjbxm8x/GsW8bHDkNA='
VALID_FULL_SIG = 'G4tshZ7kLc8iZvxIRSzRsbDED/Suz8j+5tdl1o650URXD8LIOBXR88KDLS4UGZIZ1wIOThNLty/fUabMBX11vVA='
class TestBargainingProposal(unittest.TestCase):
def test_valid_min_unsigned(self):
'''
Tests validity of a minimal BargainingProposal message (unsigned)
'''
# Builds a minimal message
txs = build_valid_single_tx(VALID_BUYER_AMOUNT, 0, VALID_OUTPUTS1)
msg = build_proposal_message(time=VALID_TIME3, transactions=txs)
# Gets the serialized message
msg_ser = msg.pbuff
# Deserializes the message
msg_deser = BargainingMessage.deserialize(msg_ser)
# Checks
self.assertTrue(msg.check_msg_fmt(TESTNET))
self.assertTrue(msg_deser.check_msg_fmt(TESTNET))
self.assertEqual(msg_deser.msg_type, TYPE_BARGAIN_PROPOSAL)
self.assertEqual(msg_deser.details_version, PROTOCOL_VERSION)
self.assertEqual(msg_deser.sign_type, SIGN_NONE)
self.assertEqual(msg_deser.sign_data, '')
self.assertEqual(msg_deser.signature, '')
self.assertEqual(msg_deser.pbuff, msg_ser)
self.assertEqual(msg_deser.status, MSG_STATUS_OK)
self.assertEqual(msg_deser.details.time, VALID_TIME3)
self.assertEqual(msg_deser.details.buyer_data, '')
self.assertEqual(msg_deser.details.seller_data, '')
self.assertEqual(msg_deser.details.memo, '')
self.assertEqual(msg_deser.details.transactions, txs)
def test_valid_min_signed(self):
'''
Tests validity of a minimal BargainingProposal message (signed)
'''
# Build a previous RequestACK message
prev_msg = build_request_ack_message(time=VALID_TIME2, outputs=VALID_OUTPUTS1)
# Builds a minimal message
txs = build_valid_single_tx(VALID_BUYER_AMOUNT, 0, VALID_OUTPUTS1)
msg = build_proposal_message(time=VALID_TIME3, transactions=txs, sign_type=SIGN_ECDSA_SHA256, prev_msg=prev_msg)
# Gets the serialized message
msg_ser = msg.pbuff
# Deserializes the message
msg_deser = BargainingMessage.deserialize(msg_ser)
# Checks
self.assertEquals(msg.signature, VALID_MIN_SIG)
self.assertEquals(msg_deser.signature, VALID_MIN_SIG)
self.assertTrue(msg.check_signature(prev_msg))
self.assertTrue(msg_deser.check_signature(prev_msg))
def test_valid_full_unsigned(self):
'''
Tests validity of a full BargainingProposal message (unsigned)
'''
# Builds a minimal message
txs = build_valid_single_tx(VALID_BUYER_AMOUNT, 0, VALID_OUTPUTS1)
msg = build_proposal_message(VALID_TIME3, VALID_BUYER_DATA, VALID_SELLER_DATA, txs,
                                     VALID_BUYER_AMOUNT, 0, VALID_MEMO, VALID_REFUND_TO, SIGN_NONE)
# Gets the serialized message
msg_ser = msg.pbuff
# Deserializes the message
msg_deser = BargainingMessage.deserialize(msg_ser)
# Checks
self.assertTrue(msg.check_msg_fmt(TESTNET))
self.assertTrue(msg_deser.check_msg_fmt(TESTNET))
self.assertEqual(msg_deser.msg_type, TYPE_BARGAIN_PROPOSAL)
self.assertEqual(msg_deser.details_version, PROTOCOL_VERSION)
self.assertEqual(msg_deser.sign_type, SIGN_NONE)
self.assertEqual(msg_deser.sign_data, '')
self.assertEqual(msg_deser.signature, '')
self.assertEqual(msg_deser.pbuff, msg_ser)
self.assertEqual(msg_deser.status, MSG_STATUS_OK)
self.assertEqual(msg_deser.details.time, VALID_TIME3)
self.assertEqual(msg_deser.details.buyer_data, VALID_BUYER_DATA)
self.assertEqual(msg_deser.details.seller_data, VALID_SELLER_DATA)
self.assertEqual(msg_deser.details.memo, VALID_MEMO)
self.assertEqual(msg_deser.details.transactions, txs)
def test_valid_full_signed(self):
'''
Tests validity of a full BargainingProposal message (signed)
'''
# Build a previous RequestACK message
prev_msg = build_request_ack_message(time=VALID_TIME2, outputs=VALID_OUTPUTS1)
# Builds a minimal message
txs = build_valid_single_tx(VALID_BUYER_AMOUNT, 0, VALID_OUTPUTS1)
msg = build_proposal_message(VALID_TIME3, VALID_BUYER_DATA, VALID_SELLER_DATA, txs,
VALID_BUYER_AMOUNT, 0, VALID_MEMO, VALID_REFUND_TO,
SIGN_ECDSA_SHA256, prev_msg)
# Gets the serialized message
msg_ser = msg.pbuff
# Deserializes the message
msg_deser = BargainingMessage.deserialize(msg_ser)
# Checks
self.assertTrue(msg.check_signature(prev_msg))
self.assertTrue(msg_deser.check_signature(prev_msg))
self.assertEquals(msg.signature, VALID_FULL_SIG)
self.assertEquals(msg_deser.signature, VALID_FULL_SIG)
def test_invalid_memo(self):
'''
Tests invalidity of a message with an invalid memo
'''
txs = build_valid_single_tx(VALID_BUYER_AMOUNT, 0, VALID_OUTPUTS1)
msg = build_proposal_message(time=VALID_TIME3, transactions=txs, memo=INVALID_MEMO)
self.assertIsNone(msg)
def test_invalid_time(self):
'''
Tests invalidity of a message with erroneous time
'''
txs = build_valid_single_tx(VALID_BUYER_AMOUNT, 0, VALID_OUTPUTS1)
msg = build_proposal_message(transactions=txs)
self.assertIsNone(msg)
def test_invalid_sign(self):
'''
Tests invalidity of a message with an invalid signature
'''
# Build a previous RequestACK message
prev_msg = build_request_ack_message(time=VALID_TIME2, outputs=VALID_OUTPUTS1)
# Builds a minimal message
txs = build_valid_single_tx(VALID_BUYER_AMOUNT, 0, VALID_OUTPUTS1)
msg = build_proposal_message(time=VALID_TIME3, transactions=txs, sign_type=SIGN_ECDSA_SHA256, prev_msg=prev_msg)
# Overrides the signature with an invalid one
msg.signature = INVALID_SIG
msg.pbuff = msg.serialize()
# Gets the serialized message
msg_ser = msg.pbuff
# Deserializes the message
msg_deser = BargainingMessage.deserialize(msg_ser)
# Checks
self.assertFalse(msg.check_signature(prev_msg))
self.assertFalse(msg_deser.check_signature(prev_msg))
def test_invalid_refund_to(self):
'''
Tests invalidity of a message with an invalid refund_to
'''
txs = build_valid_single_tx(VALID_AMOUNT1, 0, VALID_OUTPUTS1)
msg = build_proposal_message(time=VALID_TIME3, transactions=txs, amount=VALID_AMOUNT1, fees=0, is_redeemable=True)
self.assertIsNone(msg)
def test_invalid_empty_txs(self):
'''
Tests invalidity of a message with an empty txs
'''
msg = build_proposal_message(time=VALID_TIME3, amount=VALID_AMOUNT1, fees=0)
self.assertIsNone(msg)
def test_invalid_txs(self):
'''
Tests invalidity of a message with an unsigned tx (unsigned tx)
'''
txs = build_unsigned_single_tx(VALID_AMOUNT1, 0, VALID_OUTPUTS1)
msg = build_proposal_message(time=VALID_TIME3, transactions=txs, amount=VALID_AMOUNT1, fees=0)
self.assertIsNone(msg)
if __name__ == '__main__':
unittest.main() | 41.247475 | 143 | 0.681401 | 958 | 8,167 | 5.496868 | 0.123173 | 0.05469 | 0.082036 | 0.104823 | 0.817888 | 0.802317 | 0.767755 | 0.738891 | 0.68572 | 0.63046 | 0 | 0.014233 | 0.242929 | 8,167 | 198 | 144 | 41.247475 | 0.837458 | 0.154157 | 0 | 0.544554 | 0 | 0 | 0.027803 | 0.026594 | 0 | 0 | 0 | 0 | 0.425743 | 1 | 0.09901 | false | 0 | 0.059406 | 0 | 0.168317 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
2a5d2f166480069ad31b757653d2a473e58eda46 | 71 | py | Python | ss_generator/ca_tracing/__init__.py | xingjiepan/alpha_helix_generator | 2b35691b790e6363d5c4897a72c3efa8556d0143 | [
"BSD-3-Clause"
] | null | null | null | ss_generator/ca_tracing/__init__.py | xingjiepan/alpha_helix_generator | 2b35691b790e6363d5c4897a72c3efa8556d0143 | [
"BSD-3-Clause"
] | null | null | null | ss_generator/ca_tracing/__init__.py | xingjiepan/alpha_helix_generator | 2b35691b790e6363d5c4897a72c3efa8556d0143 | [
"BSD-3-Clause"
] | null | null | null | from . import alpha_helix
from . import beta_sheet
from . import basic
| 17.75 | 25 | 0.788732 | 11 | 71 | 4.909091 | 0.636364 | 0.555556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.169014 | 71 | 3 | 26 | 23.666667 | 0.915254 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2a8d088e69e650c0af1093bde0cb6d68149eeefb | 7,530 | py | Python | utils/Category_Model.py | bamdart/Photo-Filter-Recommendation | 60032ebbc8891536b929e0aea17e98723ae346ad | [
"MIT"
] | null | null | null | utils/Category_Model.py | bamdart/Photo-Filter-Recommendation | 60032ebbc8891536b929e0aea17e98723ae346ad | [
"MIT"
] | null | null | null | utils/Category_Model.py | bamdart/Photo-Filter-Recommendation | 60032ebbc8891536b929e0aea17e98723ae346ad | [
"MIT"
] | null | null | null | import keras
from keras.models import Model
from keras.layers import *
from utils.shuffleNet_v2 import *
from utils.module import *
# output_filters = 128
# DenseNet-style dense block: each 3x3 conv takes the concatenation of all earlier outputs
def block(x, filters):
filters = filters // 4
x1 = Conv2D(filters = filters, kernel_size = (3, 3), strides = (1, 1), padding = 'same')(x)
x1 = BatchNormalization()(x1)
x1 = LeakyReLU(alpha=0.1)(x1)
x2 = Conv2D(filters = filters, kernel_size = (3, 3), strides = (1, 1), padding = 'same')(x1)
x2 = BatchNormalization()(x2)
x2 = LeakyReLU(alpha=0.1)(x2)
x3 = Concatenate(axis = -1)([x1,x2])
x3 = Conv2D(filters = filters, kernel_size = (3, 3), strides = (1, 1), padding = 'same')(x3)
x3 = BatchNormalization()(x3)
x3 = LeakyReLU(alpha=0.1)(x3)
x4 = Concatenate(axis = -1)([x1,x2, x3])
x4 = Conv2D(filters = filters, kernel_size = (3, 3), strides = (1, 1), padding = 'same')(x4)
x4 = BatchNormalization()(x4)
x4 = LeakyReLU(alpha=0.1)(x4)
x5 = Concatenate(axis = -1)([x1,x2, x3, x4])
return x5
# def CreatModel(input_shape, output_shape = 8, ouput_feature = False):
# # Define model input
# input_tensor = Input(input_shape)
# x = Conv2D(filters = 16, kernel_size = (3, 3), strides = (2, 2), padding = 'same')(input_tensor)
# x = BatchNormalization()(x)
# x = LeakyReLU(alpha=0.1)(x)
# x = Conv2D(filters = 32, kernel_size = (5, 5), strides = (1, 1), padding = 'same')(x)
# x = BatchNormalization()(x)
# x = LeakyReLU(alpha=0.1)(x)
# x = AveragePooling2D(pool_size = (3, 3), strides = (2, 2), padding = 'same')(x)
# x = block(x, filters = 64)
# x = AveragePooling2D(pool_size = (3, 3), strides = (2, 2), padding = 'same')(x)
# x = block(x, filters = 128)
# x = AveragePooling2D(pool_size = (3, 3), strides = (2, 2), padding = 'same')(x)
# x = block(x, filters = 256)
# x = Conv2D(filters = output_filters, kernel_size = (3, 3), strides = (1, 1), padding = 'same')(x)
# x = BatchNormalization()(x)
# x = LeakyReLU(alpha=0.1)(x)
# feature = GlobalAveragePooling2D()(x)
# # classify
# x = Dense(8)(feature)
# #x = BatchNormalization()(x)
# x = Activation('softmax')(x)
# if(ouput_feature):
# classify_model = Model(inputs = input_tensor, outputs = feature)
# else:
# classify_model = Model(inputs = input_tensor, outputs = x)
# return classify_model
# def CreatModel(input_shape = (28, 28, 1), output_shape = 10):
# # Define model input
# input_tensor = Input(input_shape)
# x = Conv2D(filters = 64, kernel_size = (3, 3), strides = (1, 1), padding = 'same')(input_tensor)
# x = LeakyReLU(alpha=0.2)(x)
# x = Conv2D(filters = 64, kernel_size = (3, 3), strides = (1, 1), padding = 'same')(x)
# x = LeakyReLU(alpha=0.2)(x)
# x = Conv2D(filters = 64, kernel_size = (3, 3), strides = (1, 1), padding = 'same')(x)
# x = LeakyReLU(alpha=0.2)(x)
# x = Conv2D(filters = 64, kernel_size = (3, 3), strides = (1, 1), padding = 'same')(x)
# x = LeakyReLU(alpha=0.2)(x)
# x = Conv2D(filters = 64, kernel_size = (3, 3), strides = (1, 1), padding = 'same')(x)
# x = BatchNormalization()(x)
# x = LeakyReLU(alpha=0.2)(x)
# x = AveragePooling2D(pool_size = (3, 3), strides = (3, 3), padding = 'same')(x)
# # x = Dropout(0.2)(x)
# x = Conv2D(filters = 64, kernel_size = (3, 3), strides = (1, 1), padding = 'same')(x)
# x = LeakyReLU(alpha=0.2)(x)
# x = Conv2D(filters = 64, kernel_size = (3, 3), strides = (1, 1), padding = 'same')(x)
# x = BatchNormalization()(x)
# x = LeakyReLU(alpha=0.2)(x)
# x = AveragePooling2D(pool_size = (3, 3), strides = (3, 3), padding = 'same')(x)
# x = Conv2D(filters = 64, kernel_size = (3, 3), strides = (1, 1), padding = 'same')(x)
# x = BatchNormalization()(x)
# x = LeakyReLU(alpha=0.2)(x)
# x = Conv2D(filters = 64, kernel_size = (3, 3), strides = (1, 1), padding = 'same')(x)
# x = BatchNormalization()(x)
# x = LeakyReLU(alpha=0.2)(x)
# x = AveragePooling2D(pool_size = (3, 3), strides = (3, 3), padding = 'same')(x)
# x = GlobalAveragePooling2D()(x)
# # Fully Connected Layer
# # x = Dense(4096)(x)
# x = Dense(128)(x)
# x = Dense(8)(x)
# # sigmoid activation function
# output_tensor = Activation('sigmoid')(x)
# model = Model(inputs = input_tensor, outputs = output_tensor)
# return model
def CreatModel(input_shape=(28, 28, 1), output_shape=10):
    # Define model input
    input_tensor = Input(input_shape)
    f = 64
    # x = Conv2D(filters=f, kernel_size=(3, 3), strides=(1, 1), padding='same')(input_tensor)
    # x = BatchNormalization()(x)
    # x = LeakyReLU(alpha=0.1)(x)
    # x = Conv2D(filters=f, kernel_size=(3, 3), strides=(1, 1), padding='same')(x)
    # x = BatchNormalization()(x)
    # x = LeakyReLU(alpha=0.1)(x)
    # x = MaxPool2D(pool_size=(2, 2), strides=(2, 2), padding='same')(x)
    # x = Xception_module(x, use_BN=True, filter_num=f)
    # x = Xception_reduce_module(x, filter_num=f*2)
    # x = Xception_module(x, filter_num=f*2)
    # x = Dropout(0.2)(x)
    # x = ShuffleNet_v2_block(x, output_channels=f * 2)
    # x = Dropout(0.1)(x)
    # x = block(x, f)
    # x = AveragePooling2D(pool_size=(2, 2), strides=(2, 2), padding='same')(x)
    # x = Dropout(0.1)(x)
    x_shortcut = Conv2D(filters=f, kernel_size=(3, 3), strides=(1, 1), padding='same')(input_tensor)
    x_shortcut = BatchNormalization()(x_shortcut)
    x = Conv2D(filters=f, kernel_size=(3, 3), strides=(1, 1), padding='same')(input_tensor)
    x = LeakyReLU(alpha=0.1)(x)
    x = Conv2D(filters=f, kernel_size=(3, 3), strides=(1, 1), padding='same')(x)
    x = LeakyReLU(alpha=0.1)(x)
    x = Conv2D(filters=f, kernel_size=(3, 3), strides=(1, 1), padding='same')(x)
    x = LeakyReLU(alpha=0.1)(x)
    x = Conv2D(filters=f, kernel_size=(3, 3), strides=(1, 1), padding='same')(x)
    x = LeakyReLU(alpha=0.1)(x)
    x = Conv2D(filters=f, kernel_size=(3, 3), strides=(1, 1), padding='same')(x)
    x = BatchNormalization()(x)
    x = Add()([x, x_shortcut])
    x = LeakyReLU(alpha=0.1)(x)
    x = MaxPool2D(pool_size=(3, 3), strides=(3, 3), padding='same')(x)
    # x = Dropout(0.1)(x)
    x_shortcut = Conv2D(filters=f, kernel_size=(3, 3), strides=(1, 1), padding='same')(x)
    x_shortcut = BatchNormalization()(x_shortcut)
    x = Conv2D(filters=f, kernel_size=(3, 3), strides=(1, 1), padding='same')(x)
    x = LeakyReLU(alpha=0.1)(x)
    x = Conv2D(filters=f, kernel_size=(3, 3), strides=(1, 1), padding='same')(x)
    x = BatchNormalization()(x)
    x = Add()([x, x_shortcut])
    x = LeakyReLU(alpha=0.1)(x)
    x = MaxPool2D(pool_size=(3, 3), strides=(3, 3), padding='same')(x)
    # x = Dropout(0.1)(x)
    x = Conv2D(filters=f, kernel_size=(3, 3), strides=(1, 1), padding='same')(x)
    x = BatchNormalization()(x)
    x = LeakyReLU(alpha=0.2)(x)
    x = Conv2D(filters=f, kernel_size=(3, 3), strides=(1, 1), padding='same')(x)
    x = BatchNormalization()(x)
    x = LeakyReLU(alpha=0.2)(x)
    # x = MaxPool2D(pool_size=(3, 3), strides=(3, 3), padding='same')(x)
    x = GlobalAveragePooling2D()(x)
    # Fully Connected Layer
    x = Dense(128)(x)
    x = Dense(8)(x)
    # sigmoid activation function
    output_tensor = Activation('sigmoid')(x)
    model = Model(inputs=input_tensor, outputs=output_tensor)
    return model
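With `padding='same'`, each stride-3 `MaxPool2D` shrinks the spatial dimensions to `ceil(n / 3)`, so the two pooling stages take a 28x28 input down to 10x10 and then 4x4 before `GlobalAveragePooling2D` collapses the spatial axes. A small check of that bookkeeping in plain Python (`same_pool` is an illustrative helper mirroring the Keras output-size rule, not part of the original file):

```python
import math

def same_pool(n, stride=3):
    # output size of a stride-3 pooling layer with 'same' padding
    return math.ceil(n / stride)

size = 28
size = same_pool(size)   # after the first MaxPool2D
assert size == 10
size = same_pool(size)   # after the second MaxPool2D
assert size == 4
```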
# source: websearch/SearchApp/views.py (AdaFactor/OpenMarathon-Websearch, MIT license)
from django.shortcuts import render
def index(request):
    return render(request, 'SearchApp/index.html')

def yourimages(request):
    return render(request, 'SearchApp/yourimages.html')

# source: test/integration/test_test.py (Loopring/hummingbot-deprecated, Apache-2.0 license)
import conf
import hummingbot
print("Done")

# source: virtual/lib/python3.6/site-packages/pylint/test/functional/old_division_manually.py (drewheathens/The-Moringa-Tribune, MIT license)
from __future__ import division
print 1 / 3
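This pylint functional test deliberately keeps Python 2 `print` syntax; the point of the `from __future__ import division` import is that `/` becomes true division while `//` stays floor division. The same distinction, shown in Python 3 where it is the default:

```python
# with true division, 1 / 3 is a float, not a truncated integer
assert 0 < 1 / 3 < 1
# floor division still truncates toward negative infinity
assert 1 // 3 == 0
```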
# source: src/stk/molecular/molecules/__init__.py (fiszczyp/stk, MIT license)
from .molecule import *
from .building_block import *
from .constructed_molecule import *
# source: flask_api/app/__init__.py (brennanhfredericks/network-monitor-server, MIT license)
from .app import app
from .common import db
# source: kvdroid/jclass/android/app.py (kvdroid/Kvdroid, MIT license)
from jnius import autoclass  # NOQA
from kvdroid.jclass import _class_call
def NotificationManager(*args, instantiate: bool = False):
    return _class_call(autoclass("android.app.NotificationManager"), args, instantiate)

def Notification(*args, instantiate: bool = False):
    return _class_call(autoclass("android.app.Notification"), args, instantiate)

if autoclass('android.os.Build$VERSION').SDK_INT >= 26:
    def NotificationChannel(*args, instantiate: bool = False):
        return _class_call(autoclass("android.app.NotificationChannel"), args, instantiate)

def WallpaperManager(*args, instantiate: bool = False):
    return _class_call(autoclass('android.app.WallpaperManager'), args, instantiate)

def Request(*args, instantiate: bool = False):
    return _class_call(autoclass("android.app.DownloadManager$Request"), args, instantiate)

def Activity(*args, instantiate: bool = False):
    return _class_call(autoclass('android.app.Activity'), args, instantiate)

def PendingIntent(*args, instantiate: bool = False):
    return _class_call(autoclass("android.app.PendingIntent"), args, instantiate)

def MemoryInfo(*args, instantiate: bool = False):
    return _class_call(autoclass('android.app.ActivityManager$MemoryInfo'), args, instantiate)

def ActivityManager(*args, instantiate: bool = False):
    return _class_call(autoclass("android.app.ActivityManager"), args, instantiate)

def ComponentName(*args, instantiate: bool = False):
    return _class_call(autoclass("android.content.ComponentName"), args, instantiate)

def ApplicationInfo(*args, instantiate: bool = False):
    return _class_call(autoclass("android.content.pm.ApplicationInfo"), args, instantiate)

def PackageManager(*args, instantiate: bool = False):
    return _class_call(autoclass("android.content.pm.PackageManager"), args, instantiate)

def Configuration(*args, instantiate: bool = False):
    return _class_call(autoclass("android.content.res.Configuration"), args, instantiate)
# source: reactiondataextractor/models/__init__.py (dmw51/reactiondataextractor, MIT license)
from .base import *
from .exceptions import *
# from .output import *
from .reaction import *
from .segments import *
from .utils import *
# source: HW/HW5/CW5.5.py (kolyasalubov/Lv-639.pythonCore, MIT license)
def count_sheeps(sheep):
    return sum(sheep)
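Because Python booleans are integers (`True == 1`, `False == 0`), `sum` counts the `True` entries directly, which is why the one-liner works on the boolean lists this kata passes in:

```python
def count_sheeps(sheep):
    return sum(sheep)

# True counts as 1, False as 0
assert count_sheeps([True, False, True, True]) == 3
assert count_sheeps([]) == 0
```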
# source: mamonsu/plugins/system/linux/scripts.py (sgrinko/mamonsu, BSD-3-Clause license)
class Scripts(object):
    Bash = {
        # name_of_script: 'script '
        'disk_sizes':
"""#!/bin/bash
PATH=/usr/local/bin:${PATH}
IFS_DEFAULT="${IFS}"
contains() {
[[ $1 =~ (^|${2})"${3}"($|${2}) ]] && echo "1" || echo "0"
}
#################################################################################
while getopts "s::a:sj:uphvt:" OPTION; do
case ${OPTION} in
j)
JSON=1
JSON_ATTR=(${OPTARG})
IFS="${IFS_DEFAULT}"
;;
esac
done
#################################################################################
rval=$(cat /proc/self/mountinfo)
exclude_list=("none" "unknown" "rootfs" "iso9660"
"squashfs" "udf" "romfs" "ramfs"
"debugfs" "cgroup" "cgroup_root"
"pstore" "devtmpfs" "autofs"
"cgroup" "configfs" "devpts"
"efivarfs" "fusectl" "fuse.gvfsd-fuse"
"hugetlbfs" "mqueue"
"nfsd" "proc" "pstore"
"rpc_pipefs" "securityfs" "sysfs"
"nsfs" "tmpfs" "tracefs")
list_str=$(IFS=","; echo "${exclude_list[*]}")
output=" "
if [[ ${JSON} -eq 1 ]]; then
echo '{'
echo ' "data":['
count=1
while read line; do
values=(${line})
if [ $(contains "${list_str}" "," "${values[8]}") -eq 0 ]; then
if [[ ${output} != " " ]]; then
echo " ${output}"
fi
output='{ '
output+='"'{#${JSON_ATTR[0]}}'"'
output+=':'
output+='"'${values[4]}'"'
output+=' }'
tmp="${output}"
output="${output},"
fi
let "count=count+1"
done <<< "${rval}"
echo " ${tmp}"
echo ' ]'
echo '}'
else
echo "${rval:-0}"
fi
exit ${rcode}
""",
        'disk_stats':
"""#!/bin/bash
IFS_DEFAULT="${IFS}"
#
#################################################################################
while getopts "s::a:sj:uphvt:" OPTION; do
case ${OPTION} in
j)
JSON=1
JSON_ATTR=(${OPTARG})
IFS="${IFS_DEFAULT}"
;;
esac
done
#################################################################################
output=" "
rval=`cat /proc/diskstats`
if [[ ${JSON} -eq 1 ]]; then
echo '{'
echo ' "data":['
count=1
value=0
while read line; do
if [[ ${line} != '' ]]; then
IFS="|" values=(${line})
if [[ $count == 1 ]]; then # for loop0 case
new_value2=`echo ${values[0]} | sed -n '/[0-9]/s/ \+/ /gp'`
new_value3=`echo "$new_value2" | cut -d " " -f 3`
read_op=`echo "$new_value2" | cut -d " " -f 4`
read_sc=`echo "$new_value2" | cut -d " " -f 6`
write_op=`echo "$new_value2" | cut -d " " -f 8`
write_sc=`echo "$new_value2" | cut -d " " -f 10`
ticks=`echo "$new_value2" | cut -d " " -f 13`
else
new_value2=`echo ${values[0]} | sed -n '/[0-9]/s/ \+/ /gp'`
new_value3=`echo "$new_value2" | cut -d " " -f 4`
read_op=`echo "$new_value2" | cut -d " " -f 5`
read_sc=`echo "$new_value2" | cut -d " " -f 7`
write_op=`echo "$new_value2" | cut -d " " -f 9`
write_sc=`echo "$new_value2" | cut -d " " -f 11`
ticks=`echo "$new_value2" | cut -d " " -f 14`
fi
if [[ $new_value3 != *"loop"* ]] && [[ $new_value3 != *"ram"* ]] && [[ $new_value3 != *[0-9]* ]]; then
if [[ ${output} != " " ]]; then
echo " ${output}"
fi
value=$(($read_op+$value))
output='{ '
output+='"'{#${JSON_ATTR[0]}}'"'
output+=':'
output+='"'$new_value3'"'
output+=' }'
tmp="${output}"
output="${output},"
fi
fi
let "count=count+1"
done <<< ${rval}
echo " ${tmp}"
echo ' ]'
echo '}'
else
echo "${rval:-0}"
fi
exit ${rcode}
""",
        'disk_stats_read_op':
"""#!/bin/bash
rval=`cat /proc/diskstats`
count=1
value=0
while read line; do
if [[ ${line} != '' ]]; then
IFS="|" values=(${line})
if [[ $count == 1 ]]; then # for loop0 case
new_value2=`echo ${values[0]} | sed -n '/[0-9]/s/ \+/ /gp'`
# echo $new_value2
new_value3=`echo "$new_value2" | cut -d " " -f 3`
read_op=`echo "$new_value2" | cut -d " " -f 4`
else
new_value2=`echo ${values[0]} | sed -n '/[0-9]/s/ \+/ /gp'`
new_value3=`echo "$new_value2" | cut -d " " -f 4`
read_op=`echo "$new_value2" | cut -d " " -f 5`
fi
re='^[0-9]+$'
has_digits='no'
if [[ "${new_value3: -1}" =~ $re ]]; then
has_digits='yes'
fi
if [[ $new_value3 != *"loop"* ]] && [[ $new_value3 != *"ram"* ]] && [[ $has_digits == 'no' ]]; then
value=$(($read_op+$value))
fi
fi
let "count=count+1"
done <<< ${rval}
echo $(($value))
""",
        'disk_stats_read_b':
"""#!/bin/bash
rval=`cat /proc/diskstats`
count=1
value=0
while read line; do
if [[ ${line} != '' ]]; then
IFS="|" values=(${line})
if [[ $count == 1 ]]; then # for loop0 case
new_value2=`echo ${values[0]} | sed -n '/[0-9]/s/ \+/ /gp'`
# echo $new_value2
new_value3=`echo "$new_value2" | cut -d " " -f 3`
read_sc=`echo "$new_value2" | cut -d " " -f 6`
else
new_value2=`echo ${values[0]} | sed -n '/[0-9]/s/ \+/ /gp'`
new_value3=`echo "$new_value2" | cut -d " " -f 4`
read_sc=`echo "$new_value2" | cut -d " " -f 7`
fi
re='^[0-9]+$'
has_digits='no'
if [[ "${new_value3: -1}" =~ $re ]]; then
has_digits='yes'
fi
if [[ $new_value3 != *"loop"* ]] && [[ $new_value3 != *"ram"* ]] && [[ $has_digits == 'no' ]]; then
value=$(($read_sc+$value))
fi
fi
let "count=count+1"
done <<< ${rval}
echo $(($value*512))
""",
        'disk_stats_write_op':
"""#!/bin/bash
rval=`cat /proc/diskstats`
count=1
value=0
while read line; do
if [[ ${line} != '' ]]; then
IFS="|" values=(${line})
if [[ $count == 1 ]]; then # for loop0 case
new_value2=`echo ${values[0]} | sed -n '/[0-9]/s/ \+/ /gp'`
new_value3=`echo "$new_value2" | cut -d " " -f 3`
write_op=`echo "$new_value2" | cut -d " " -f 8`
else
new_value2=`echo ${values[0]} | sed -n '/[0-9]/s/ \+/ /gp'`
new_value3=`echo "$new_value2" | cut -d " " -f 4`
write_op=`echo "$new_value2" | cut -d " " -f 9`
fi
re='^[0-9]+$'
has_digits='no'
if [[ "${new_value3: -1}" =~ $re ]]; then
has_digits='yes'
fi
if [[ $new_value3 != *"loop"* ]] && [[ $new_value3 != *"ram"* ]] && [[ $has_digits == 'no' ]];then
#echo $write_op
value=$(($write_op+$value))
fi
fi
let "count=count+1"
done <<< ${rval}
echo $(($value))
""",
        'disk_stats_write_b':
"""#!/bin/bash
rval=`cat /proc/diskstats`
count=1
value=0
while read line; do
if [[ ${line} != '' ]]; then
IFS="|" values=(${line})
if [[ $count == 1 ]]; then # for loop0 case
new_value2=`echo ${values[0]} | sed -n '/[0-9]/s/ \+/ /gp'`
new_value3=`echo "$new_value2" | cut -d " " -f 3`
write_sc=`echo "$new_value2" | cut -d " " -f 10`
else
new_value2=`echo ${values[0]} | sed -n '/[0-9]/s/ \+/ /gp'`
new_value3=`echo "$new_value2" | cut -d " " -f 4`
write_sc=`echo "$new_value2" | cut -d " " -f 11`
fi
re='^[0-9]+$'
has_digits='no'
if [[ "${new_value3: -1}" =~ $re ]]; then
has_digits='yes'
fi
#echo $values
if [[ $new_value3 != *"loop"* ]] && [[ $new_value3 != *"ram"* ]] && [[ $has_digits == 'no' ]]; then
#echo $write_sc
#echo $new_value3
value=$(($write_sc+$value))
fi
fi
let "count=count+1"
done <<< ${rval}
echo $(($value*512))
""",
        'net':
"""#!/bin/bash
#################################################################################
while getopts "s::a:sj:uphvt:" OPTION; do
case ${OPTION} in
j)
JSON=1
JSON_ATTR=(${OPTARG})
;;
esac
done
#################################################################################
rval=$(cat /proc/net/dev)
output=" "
if [[ ${JSON} -eq 1 ]]; then
echo '{'
echo ' "data":['
count=1
while read line; do
values=(${line})
if [[ "${values[0]}" != *"lo:"* ]] && [[ "${#values[@]}">1 ]]; then
if [[ ${output} != " " ]] && [[ $count > 4 ]]; then
echo " ${output}"
fi
output='{ '
output+='"'{#${JSON_ATTR[0]}}'"'
output+=':'
t="${values[0]}"
output+='"'${t%?}'"'
output+=' }'
tmp="${output}"
output="${output},"
fi
let "count=count+1"
done <<< "${rval}"
echo " ${tmp}"
echo ' ]'
echo '}'
else
echo "${rval:-0}"
fi
exit ${rcode}
""",
        'disk_stats_ticks':
"""#!/bin/bash
rval=`cat /proc/diskstats`
count=1
value=0
while read line; do
if [[ ${line} != '' ]]; then
IFS="|" values=(${line})
if [[ $count == 1 ]]; then # for loop0 case
new_value2=`echo ${values[0]} | sed -n '/[0-9]/s/ \+/ /gp'`
# echo $new_value2
new_value3=`echo "$new_value2" | cut -d " " -f 3`
ticks=`echo "$new_value2" | cut -d " " -f 13`
else
new_value2=`echo ${values[0]} | sed -n '/[0-9]/s/ \+/ /gp'`
new_value3=`echo "$new_value2" | cut -d " " -f 4`
ticks=`echo "$new_value2" | cut -d " " -f 14`
fi
if [[ $new_value3 != *"loop"* ]] && [[ $new_value3 != *"ram"* ]]; then
#echo $ticks
value=$(($ticks+$value))
fi
fi
let "count=count+1"
done <<< ${rval}
echo $(($value))
"""
    }
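The `disk_stats_read_op` script sums the reads-completed field from `/proc/diskstats`, skipping `loop`/`ram` pseudo-devices and partition rows (device names ending in a digit). The same aggregation expressed in Python over a made-up sample — the sample rows are illustrative, not real kernel output:

```python
SAMPLE_DISKSTATS = """\
   8       0 sda 1000 12 2000 30 500 7 800 40 0 90 70
   8       1 sda1 10 0 20 0 5 0 8 0 0 1 1
   7       0 loop0 3 0 6 0 0 0 0 0 0 0 0
"""

def total_read_ops(diskstats_text):
    total = 0
    for line in diskstats_text.splitlines():
        fields = line.split()
        name = fields[2]
        if name.startswith(("loop", "ram")) or name[-1].isdigit():
            continue  # pseudo-devices and partitions are excluded
        total += int(fields[3])  # field 4: reads completed
    return total

assert total_read_ops(SAMPLE_DISKSTATS) == 1000
```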
# source: intake_postgres/__init__.py (ContinuumIO/intake-postgres, BSD-2-Clause license)
from __future__ import absolute_import
from .intake_postgres import PostgresSource
from .intake_postgres import __version__
__all__ = ['PostgresSource', '__version__']
# source: tests/minimal/parent/test_parent.py (InscriptaLabs/BioCantor, MIT license)
import pytest
from inscripta.biocantor.exc import (
NoSuchAncestorException,
InvalidStrandException,
InvalidPositionException,
NullParentException,
MismatchedParentException,
ParentException,
LocationException,
)
from inscripta.biocantor.location.location_impl import (
SingleInterval,
CompoundInterval,
EmptyLocation,
)
from inscripta.biocantor.location.strand import Strand
from inscripta.biocantor.parent import Parent, make_parent
from inscripta.biocantor.sequence.alphabet import Alphabet
from inscripta.biocantor.sequence import Sequence
class TestParent:
@pytest.mark.parametrize(
"id,sequence_type,strand,location,sequence,expected",
[
(None, None, None, None, None, Parent()),
(
"id",
"seqtype",
Strand.MINUS,
SingleInterval(
0,
1,
Strand.MINUS,
Parent(
id="id",
sequence_type="seqtype",
strand=Strand.MINUS,
sequence=Sequence(
"AAA",
Alphabet.NT_STRICT,
id="id",
type="seqtype",
parent=Parent(
id="id2",
sequence=Sequence("CCC", Alphabet.NT_STRICT, id="id2"),
),
),
),
),
Sequence(
"AAA",
Alphabet.NT_STRICT,
id="id",
type="seqtype",
parent=Parent(id="id2", sequence=Sequence("CCC", Alphabet.NT_STRICT, id="id2")),
),
Parent(
id="id",
sequence_type="seqtype",
strand=Strand.MINUS,
location=SingleInterval(
0,
1,
Strand.MINUS,
Parent(
id="id",
sequence_type="seqtype",
strand=Strand.MINUS,
sequence=Sequence(
"AAA",
Alphabet.NT_STRICT,
id="id",
type="seqtype",
parent=Parent(
id="id2",
sequence=Sequence("CCC", Alphabet.NT_STRICT, id="id2"),
),
),
),
),
sequence=Sequence(
"AAA",
Alphabet.NT_STRICT,
id="id",
type="seqtype",
parent=Parent(
id="id2",
sequence=Sequence("CCC", Alphabet.NT_STRICT, id="id2"),
),
),
),
),
],
)
def test_init(self, id, sequence_type, strand, location, sequence, expected):
assert expected == Parent(
id=id,
sequence_type=sequence_type,
strand=strand,
location=location,
sequence=sequence,
)
@pytest.mark.parametrize(
"id,sequence_type,strand,location,sequence,parent,expected_exception",
[
("id1", None, None, SingleInterval(0, 5, Strand.PLUS, parent="id2"), None, None, ParentException),
("id1", None, None, None, Sequence("AAA", Alphabet.NT_STRICT, id="id2"), None, ParentException),
(
None,
None,
None,
SingleInterval(0, 5, Strand.PLUS, parent="id1"),
Sequence("AAC", Alphabet.NT_STRICT, id="id2"),
None,
ParentException,
),
(
None,
"seqtype",
None,
SingleInterval(0, 5, Strand.PLUS, parent=Parent(sequence_type="unknown")),
None,
None,
ParentException,
),
(None, "seqtype", None, None, Sequence("AAT", Alphabet.NT_STRICT, type="unknown"), None, ParentException),
(
None,
None,
None,
SingleInterval(0, 5, Strand.PLUS, parent=Parent(sequence_type="unknown")),
Sequence("AAG", Alphabet.NT_STRICT, type="seqtype"),
None,
ParentException,
),
(None, None, Strand.MINUS, SingleInterval(0, 5, Strand.PLUS), None, None, InvalidStrandException),
(
None,
None,
None,
SingleInterval(0, 10, Strand.PLUS),
Sequence("A", Alphabet.NT_STRICT),
None,
InvalidPositionException,
),
(
None,
None,
None,
None,
Sequence("AA", Alphabet.NT_STRICT),
Parent(sequence=Sequence("A", Alphabet.NT_STRICT)),
LocationException,
),
(None, None, Strand.PLUS, SingleInterval(5, 10, Strand.MINUS), None, None, InvalidStrandException),
(
None,
None,
None,
None,
Sequence("AA", Alphabet.NT_STRICT, parent="id1"),
Parent(id="id2"),
MismatchedParentException,
),
],
)
def test_init_error(self, id, sequence_type, strand, location, sequence, parent, expected_exception):
with pytest.raises(expected_exception):
Parent(
id=id,
sequence_type=sequence_type,
strand=strand,
location=location,
sequence=sequence,
parent=parent,
)
@pytest.mark.parametrize(
"obj,expected",
[
(
Sequence("AAA", Alphabet.NT_STRICT),
Parent(sequence=Sequence("AAA", Alphabet.NT_STRICT)),
),
("parent", Parent(id="parent")),
(
SingleInterval(5, 10, Strand.PLUS),
Parent(location=SingleInterval(5, 10, Strand.PLUS)),
),
(
CompoundInterval([5], [10], Strand.PLUS),
Parent(location=CompoundInterval([5], [10], Strand.PLUS)),
),
(EmptyLocation(), Parent(location=EmptyLocation())),
(Strand.MINUS, Parent(strand=Strand.MINUS)),
(
Parent(
id="parent",
sequence_type="chr",
strand=Strand.MINUS,
parent=Parent(id="grandparent"),
),
Parent(
id="parent",
sequence_type="chr",
strand=Strand.MINUS,
parent=Parent(id="grandparent"),
),
),
],
)
def test_make_parent(self, obj, expected):
assert make_parent(obj) == expected
@pytest.mark.parametrize(
"parent1,parent2,expected",
[
(Parent(), Parent(), True),
(Parent(), Parent(id=None, sequence_type=None), True),
(Parent(id="id1"), Parent(id="id2"), False),
(
Parent(sequence_type=None),
Parent(sequence_type="unknown"),
False,
),
(Parent(strand=Strand.UNSTRANDED), Parent(strand=Strand.MINUS), False),
(
Parent(location=SingleInterval(0, 5, Strand.PLUS, parent="id1")),
Parent(location=SingleInterval(0, 5, Strand.PLUS, parent="id2")),
False,
),
(
Parent(sequence=Sequence("A", Alphabet.NT_STRICT)),
Parent(sequence=Sequence("A", Alphabet.NT_STRICT, parent=Parent(id="parent"))),
False,
),
(
Parent(parent="parent1"),
Parent(parent="parent2"),
False,
),
],
)
def test_eq(self, parent1, parent2, expected):
assert (parent1 == parent2) is expected
@pytest.mark.parametrize(
"parent1,parent2,expected",
[
(Parent(), Parent(), True),
(Parent(id="id1"), Parent(id="id2"), False),
(
Parent(sequence_type=None),
Parent(sequence_type="unknown"),
False,
),
(Parent(strand=Strand.UNSTRANDED), Parent(strand=Strand.MINUS), True),
(
Parent(location=SingleInterval(0, 5, Strand.PLUS, parent="id1")),
Parent(location=SingleInterval(0, 5, Strand.PLUS, parent="id2")),
False,
),
(
Parent(sequence=Sequence("A", Alphabet.NT_STRICT)),
Parent(sequence=Sequence("A", Alphabet.NT_STRICT, parent="parent")),
False,
),
(
Parent(parent="parent1"),
Parent(parent="parent2"),
False,
),
],
)
def test_equals_except_location(self, parent1, parent2, expected):
assert parent1.equals_except_location(parent2) is expected

@pytest.mark.parametrize(
"id,location,sequence,expected",
[
("id", None, None, "id"),
(
None,
SingleInterval(0, 1, Strand.PLUS, parent="id"),
None,
"id",
),
(
None,
None,
Sequence("A", Alphabet.NT_STRICT, id="id", parent="id2"),
"id",
),
(
"id",
SingleInterval(0, 1, Strand.PLUS, parent="id"),
Sequence("A", Alphabet.NT_STRICT, id="id", parent="id2"),
"id",
),
],
)
def test_id(self, id, location, sequence, expected):
assert Parent(id=id, location=location, sequence=sequence).id == expected

@pytest.mark.parametrize(
"sequence_type,location,sequence,expected",
[
("seqtype", None, None, "seqtype"),
(
None,
SingleInterval(
0,
5,
Strand.PLUS,
parent=Parent(sequence_type="seqtype"),
),
None,
"seqtype",
),
(
None,
None,
Sequence("A", Alphabet.NT_STRICT, type="seqtype"),
"seqtype",
),
(
None,
None,
Sequence(
"A",
Alphabet.NT_STRICT,
type="seqtype",
parent=Parent(sequence_type="seqtype_2"),
),
"seqtype",
),
],
)
def test_sequence_type(self, sequence_type, location, sequence, expected):
assert Parent(sequence_type=sequence_type, location=location, sequence=sequence).sequence_type == expected

@pytest.mark.parametrize(
"strand,location,sequence,expected",
[
(Strand.PLUS, None, None, Strand.PLUS),
(None, SingleInterval(0, 5, Strand.MINUS), None, Strand.MINUS),
(
Strand.PLUS,
None,
Sequence("A", Alphabet.NT_STRICT, parent=Strand.MINUS),
Strand.PLUS,
),
],
)
def test_strand(self, strand, location, sequence, expected):
assert Parent(strand=strand, location=location, sequence=sequence).strand == expected

def test_location(self):
assert Parent(location=SingleInterval(0, 1, Strand.PLUS)).location == SingleInterval(0, 1, Strand.PLUS)

def test_sequence(self):
assert Parent(sequence=Sequence("A", Alphabet.NT_STRICT)).sequence == Sequence("A", Alphabet.NT_STRICT)

@pytest.mark.parametrize(
"parent,expected",
[
(Parent(parent="id"), Parent(id="id")),
(
Parent(
sequence=Sequence(
"AA",
Alphabet.NT_STRICT,
parent=Parent(sequence_type="chr"),
)
),
Parent(sequence_type="chr"),
),
],
)
def test_parent(self, parent, expected):
assert parent.parent == expected

@pytest.mark.parametrize(
"parent,expected",
[
(Parent(), Parent()),
(Parent(strand=Strand.PLUS), Parent()),
(
Parent(strand=Strand.PLUS, location=SingleInterval(5, 10, Strand.PLUS)),
Parent(),
),
(
Parent(
id="parent",
sequence_type="unknown",
strand=Strand.PLUS,
location=SingleInterval(0, 1, Strand.PLUS),
sequence=Sequence("AAA", Alphabet.NT_STRICT),
parent="grandparent",
),
Parent(
id="parent",
sequence_type="unknown",
sequence=Sequence("AAA", Alphabet.NT_STRICT),
parent="grandparent",
),
),
],
)
def test_strip_location_info(self, parent, expected):
assert parent.strip_location_info() == expected

@pytest.mark.parametrize(
"parent,sequence_type,include_self,expected",
[
(
Parent(
id="self",
sequence_type="seqtype",
parent=Parent(id="parent", sequence_type="seqtype"),
),
"seqtype",
True,
Parent(
id="self",
sequence_type="seqtype",
parent=Parent(id="parent", sequence_type="seqtype"),
),
),
(
Parent(
id="self",
sequence_type="seqtype",
parent=Parent(id="parent", sequence_type="seqtype"),
),
"seqtype",
False,
Parent(id="parent", sequence_type="seqtype"),
),
(
Parent(
id="self",
sequence_type="seqtype",
parent=Parent(
id="parent",
sequence_type="seqtype_2",
parent=Parent(id="grandparent", sequence_type="seqtype_2"),
),
),
"seqtype_2",
True,
Parent(
id="parent",
sequence_type="seqtype_2",
parent=Parent(id="grandparent", sequence_type="seqtype_2"),
),
),
],
)
def test_first_ancestor_of_type(self, parent, sequence_type, include_self, expected):
assert parent.first_ancestor_of_type(sequence_type, include_self=include_self) == expected

@pytest.mark.parametrize(
"parent,sequence_type,include_self",
[
(Parent(id="self"), "seqtype_2", True),
(
Parent(id="self", parent="parent"),
"seqtype_2",
True,
),
(
Parent(
id="self",
sequence_type="seqtype",
parent=Parent(
id="parent",
sequence_type="seqtype_2",
parent=Parent(id="grandparent", sequence_type="seqtype_2"),
),
),
"chr",
True,
),
],
)
def test_first_ancestor_of_type_error(self, parent, sequence_type, include_self):
with pytest.raises(NoSuchAncestorException):
parent.first_ancestor_of_type(sequence_type, include_self=include_self)

@pytest.mark.parametrize(
"parent,sequence_type,include_self,expected",
[
(
Parent(
id="self",
sequence_type="seqtype",
parent=Parent(id="parent", sequence_type="seqtype"),
),
"seqtype",
True,
True,
),
(
Parent(
id="self",
sequence_type="seqtype",
parent=Parent(id="parent", sequence_type="seqtype"),
),
"seqtype",
False,
True,
),
(
Parent(
id="self",
sequence_type="seqtype",
parent=Parent(
id="parent",
sequence_type="seqtype_2",
parent=Parent(id="grandparent", sequence_type="seqtype_2"),
),
),
"seqtype_2",
True,
True,
),
(
Parent(id="self"),
"seqtype_2",
True,
False,
),
(
Parent(id="self", parent="parent"),
"seqtype_2",
True,
False,
),
(
Parent(
id="self",
sequence_type="seqtype",
parent=Parent(
id="parent",
sequence_type="seqtype_2",
parent=Parent(id="grandparent", sequence_type="seqtype_2"),
),
),
"chr",
True,
False,
),
],
)
def test_has_ancestor_of_type(self, parent, sequence_type, include_self, expected):
assert parent.has_ancestor_of_type(sequence_type, include_self=include_self) is expected

@pytest.mark.parametrize(
"parent,expected",
[
(
Parent(
id="parent",
location=SingleInterval(3, 5, Strand.PLUS),
parent=Parent(id="grandparent", location=SingleInterval(10, 20, Strand.PLUS)),
),
SingleInterval(13, 15, Strand.PLUS, parent="grandparent"),
),
(
Parent(
id="parent",
location=SingleInterval(0, 5, Strand.PLUS),
sequence_type="unknown",
strand=Strand.PLUS,
parent=Parent(
id="grandparent",
location=SingleInterval(100, 200, Strand.MINUS),
),
),
SingleInterval(195, 200, Strand.MINUS, parent="grandparent"),
),
(
Parent(
id="parent",
location=SingleInterval(6, 9, Strand.MINUS),
parent=Parent(
id="grandparent",
location=SingleInterval(0, 10, Strand.PLUS),
sequence_type="chr",
strand=Strand.PLUS,
parent="great grandparent",
),
),
SingleInterval(
6,
9,
Strand.MINUS,
parent=Parent(
id="grandparent",
sequence_type="chr",
parent="great grandparent",
),
),
),
(
Parent(
id="parent",
sequence_type="chr",
strand=Strand.MINUS,
location=SingleInterval(6, 8, Strand.MINUS),
parent=Parent(
id="grandparent",
sequence_type="unknown",
strand=Strand.MINUS,
location=SingleInterval(5, 15, Strand.MINUS),
parent="great grandparent",
),
),
SingleInterval(
7,
9,
Strand.PLUS,
parent=Parent(
id="grandparent",
sequence_type="unknown",
parent="great grandparent",
),
),
),
(
Parent(
id="parent",
location=SingleInterval(3, 5, Strand.UNSTRANDED),
parent=Parent(id="grandparent", location=SingleInterval(10, 20, Strand.PLUS)),
),
SingleInterval(13, 15, Strand.UNSTRANDED, parent="grandparent"),
),
(
Parent(
id="parent",
location=SingleInterval(3, 5, Strand.UNSTRANDED),
parent=Parent(id="grandparent", location=SingleInterval(10, 20, Strand.MINUS)),
),
SingleInterval(15, 17, Strand.UNSTRANDED, parent="grandparent"),
),
],
)
def test_lift_child_location_contiguous_to_parent_single_interval(self, parent, expected):
assert parent.lift_child_location_to_parent() == expected

@pytest.mark.parametrize(
"parent,expected",
[
(
Parent(
id="parent",
location=CompoundInterval([3, 7], [5, 10], Strand.PLUS),
parent=Parent(id="grandparent", location=SingleInterval(10, 20, Strand.PLUS)),
),
CompoundInterval([13, 17], [15, 20], Strand.PLUS, parent="grandparent"),
),
(
Parent(
id="parent",
location=CompoundInterval([0, 10], [5, 15], Strand.PLUS),
sequence_type="unknown",
strand=Strand.PLUS,
parent=Parent(
id="grandparent",
location=SingleInterval(100, 200, Strand.MINUS),
),
),
CompoundInterval(
[185, 195],
[190, 200],
Strand.MINUS,
parent="grandparent",
),
),
(
Parent(
id="parent",
location=CompoundInterval([6], [9], Strand.MINUS),
parent=Parent(
id="grandparent",
location=SingleInterval(0, 10, Strand.PLUS),
sequence_type="chr",
strand=Strand.PLUS,
parent="great grandparent",
),
),
SingleInterval(
6,
9,
Strand.MINUS,
parent=Parent(
id="grandparent",
sequence_type="chr",
parent="great grandparent",
),
),
),
(
Parent(
id="parent",
sequence_type="chr",
strand=Strand.MINUS,
location=CompoundInterval([6], [8], Strand.MINUS),
parent=Parent(
id="grandparent",
sequence_type="unknown",
strand=Strand.MINUS,
location=SingleInterval(5, 15, Strand.MINUS),
parent="great grandparent",
),
),
SingleInterval(
7,
9,
Strand.PLUS,
parent=Parent(
id="grandparent",
sequence_type="unknown",
parent="great grandparent",
),
),
),
(
Parent(
id="parent",
location=CompoundInterval([3, 7], [5, 10], Strand.UNSTRANDED),
parent=Parent(id="grandparent", location=SingleInterval(10, 20, Strand.PLUS)),
),
CompoundInterval(
[13, 17],
[15, 20],
Strand.UNSTRANDED,
parent="grandparent",
),
),
(
Parent(
id="parent",
location=CompoundInterval([3], [5], Strand.UNSTRANDED),
parent=Parent(id="grandparent", location=SingleInterval(10, 20, Strand.MINUS)),
),
SingleInterval(15, 17, Strand.UNSTRANDED, parent="grandparent"),
),
],
)
def test_lift_child_location_discontiguous_to_parent_single_interval(self, parent, expected):
assert parent.lift_child_location_to_parent() == expected

@pytest.mark.parametrize(
"parent,expected_error",
[
# No location
(
Parent(parent=SingleInterval(5, 10, Strand.PLUS)),
NullParentException,
),
# Parent has no location
(
Parent(
location=SingleInterval(5, 10, Strand.PLUS),
parent="grandparent",
),
NullParentException,
),
# Location on parent can't be unstranded
(
Parent(
location=SingleInterval(5, 10, Strand.PLUS),
parent=Parent(
id="grandparent",
location=SingleInterval(0, 100, Strand.UNSTRANDED),
),
),
InvalidStrandException,
),
# Location must fit inside location on parent
(
Parent(
location=SingleInterval(5, 10, Strand.PLUS),
parent=Parent(id="grandparent", location=SingleInterval(30, 31, Strand.PLUS)),
),
ValueError,
),
],
)
def test_lift_child_location_to_parent_single_interval_error(self, parent, expected_error):
with pytest.raises(expected_error):
parent.lift_child_location_to_parent()

@pytest.mark.parametrize(
"parent,expected",
[
# Location takes up entire parent location
(
Parent(
id="parent",
location=SingleInterval(0, 10, Strand.PLUS),
parent=Parent(
id="grandparent",
location=CompoundInterval([5, 20], [10, 25], Strand.PLUS),
),
),
CompoundInterval([5, 20], [10, 25], Strand.PLUS, parent="grandparent"),
),
# Location (unstranded) takes up part of parent location (minus)
(
Parent(
id="parent",
location=SingleInterval(10, 20, Strand.UNSTRANDED),
parent=Parent(
id="grandparent",
location=CompoundInterval([10, 20, 30], [18, 28, 38], Strand.MINUS),
),
),
CompoundInterval(
[14, 20],
[18, 26],
Strand.UNSTRANDED,
parent="grandparent",
),
),
# Location (minus) takes up one block of parent location (plus); location is at end of sequence
(
Parent(
id="parent",
location=SingleInterval(5, 10, Strand.MINUS),
parent=Parent(
id="grandparent",
location=CompoundInterval([30, 40], [35, 45], Strand.PLUS),
),
),
SingleInterval(40, 45, Strand.MINUS, parent="grandparent"),
),
# Location (minus) takes up part of one block of parent location (minus)
(
Parent(
id="parent",
location=SingleInterval(0, 4, Strand.MINUS),
parent=Parent(
id="grandparent",
location=CompoundInterval([30, 40], [35, 45], Strand.MINUS),
),
),
SingleInterval(41, 45, Strand.PLUS, parent="grandparent"),
),
],
)
def test_lift_child_location_contiguous_to_parent_compound_interval(self, parent, expected):
assert parent.lift_child_location_to_parent() == expected

@pytest.mark.parametrize(
"parent,expected",
[
# Location takes up entire parent location
(
Parent(
id="parent",
location=CompoundInterval([0, 5], [5, 10], Strand.PLUS),
parent=Parent(
id="grandparent",
location=CompoundInterval([5, 20], [10, 25], Strand.PLUS),
),
),
CompoundInterval([5, 20], [10, 25], Strand.PLUS, parent="grandparent"),
),
# Location (unstranded) takes up part of parent location (minus)
(
Parent(
id="parent",
location=CompoundInterval([10, 22], [20, 23], Strand.UNSTRANDED),
parent=Parent(
id="grandparent",
location=CompoundInterval([10, 20, 30], [18, 28, 38], Strand.MINUS),
),
),
CompoundInterval(
[11, 14, 20],
[12, 18, 26],
Strand.UNSTRANDED,
parent="grandparent",
),
),
# Location (minus) takes up one block of parent location (plus); location is at end of sequence
(
Parent(
id="parent",
location=CompoundInterval([5], [10], Strand.MINUS),
parent=Parent(
id="grandparent",
location=CompoundInterval([30, 40], [35, 45], Strand.PLUS),
),
),
SingleInterval(40, 45, Strand.MINUS, parent="grandparent"),
),
# Location (minus) takes up part of one block of parent location (minus)
(
Parent(
id="parent",
location=CompoundInterval([0, 3], [1, 4], Strand.MINUS),
parent=Parent(
id="grandparent",
location=CompoundInterval([30, 40], [35, 45], Strand.MINUS),
),
),
CompoundInterval([41, 44], [42, 45], Strand.PLUS, parent="grandparent"),
),
],
)
def test_lift_child_location_discontiguous_to_parent_compound_interval(self, parent, expected):
assert parent.lift_child_location_to_parent() == expected

@pytest.mark.parametrize(
"parent,expected_error",
[
# Location must fit inside location on parent
(
Parent(
location=SingleInterval(5, 50, Strand.PLUS),
parent=Parent(
id="grandparent",
location=CompoundInterval([10, 20], [15, 25], Strand.PLUS),
),
),
InvalidPositionException,
),
],
)
def test_lift_child_location_to_parent_compound_interval_error(self, parent, expected_error):
with pytest.raises(expected_error):
parent.lift_child_location_to_parent()

@pytest.mark.parametrize(
"parent,location,expected",
[
(
Parent(),
SingleInterval(5, 10, Strand.PLUS),
Parent(location=SingleInterval(5, 10, Strand.PLUS), strand=Strand.PLUS),
),
(
Parent(
id="parent",
sequence_type="unknown",
strand=Strand.MINUS,
location=SingleInterval(0, 2, Strand.MINUS),
sequence=Sequence("AAA", Alphabet.NT_STRICT),
),
SingleInterval(2, 3, Strand.PLUS),
Parent(
id="parent",
sequence_type="unknown",
strand=Strand.PLUS,
location=SingleInterval(2, 3, Strand.PLUS),
sequence=Sequence("AAA", Alphabet.NT_STRICT),
),
),
(
Parent(
id="parent",
sequence_type="unknown",
strand=Strand.MINUS,
location=SingleInterval(0, 2, Strand.MINUS),
sequence=Sequence("AAA", Alphabet.NT_STRICT),
),
None,
Parent(
id="parent",
sequence_type="unknown",
sequence=Sequence("AAA", Alphabet.NT_STRICT),
),
),
],
)
def test_reset_location(self, parent, location, expected):
assert parent.reset_location(location) == expected

@pytest.mark.parametrize(
"parent,location,expected_exception",
[
(
Parent(sequence=Sequence("AAA", Alphabet.NT_STRICT)),
SingleInterval(0, 5, Strand.PLUS),
InvalidPositionException,
),
(
Parent(id="id1", sequence=Sequence("AAA", Alphabet.NT_STRICT)),
SingleInterval(
0,
1,
Strand.PLUS,
parent=Parent(id="id2", sequence=Sequence("AAA", Alphabet.NT_STRICT)),
),
ParentException,
),
],
)
def test_reset_location_error(self, parent, location, expected_exception):
with pytest.raises(expected_exception):
parent.reset_location(location)

@pytest.mark.parametrize(
"parent,sequence,include_self,expected",
[
(Parent(), Sequence("AA", Alphabet.NT_STRICT), True, False),
(Parent(), Sequence("AA", Alphabet.NT_STRICT), False, False),
(
Parent(sequence=Sequence("AA", Alphabet.NT_STRICT)),
Sequence("AA", Alphabet.NT_STRICT),
True,
True,
),
(
Parent(sequence=Sequence("AA", Alphabet.NT_STRICT)),
Sequence("AA", Alphabet.NT_STRICT),
False,
False,
),
(
Parent(
sequence=Sequence("AA", Alphabet.NT_STRICT),
parent=Sequence("AA", Alphabet.NT_STRICT),
),
Sequence("AA", Alphabet.NT_STRICT),
False,
True,
),
(
Parent(
sequence=Sequence("AA", Alphabet.NT_STRICT),
parent=Sequence("AAT", Alphabet.NT_STRICT),
),
Sequence("AAT", Alphabet.NT_STRICT),
False,
True,
),
(
Parent(
sequence=Sequence("AA", Alphabet.NT_STRICT),
parent=Sequence("AAT", Alphabet.NT_STRICT),
),
Sequence("AAT", Alphabet.NT_STRICT, id="id"),
True,
False,
),
(
Parent(
parent=Parent(parent=Parent(parent=Parent(sequence=Sequence("AAA", Alphabet.NT_STRICT, id="seq"))))
),
Sequence("AAA", Alphabet.NT_STRICT, id="seq"),
True,
True,
),
],
)
def test_has_ancestor_sequence(self, parent, sequence, include_self, expected):
        assert parent.has_ancestor_sequence(sequence, include_self) is expected

# --- mef3_processor/mef3_processor/__init__.py
# --- repo: Pennsieve/timeseries-processor @ 85766afa76182503fd66cec8382c22e757743f01 (Apache-2.0)
from .processor import MEF3Processor

# --- buildserver/sil_snomed_server/migrations/versions/initialize_basic_snomed_components_and__2016_08_17_89efb2ad498e.py
# --- repo: poppingtonic/terminology-server @ 788375e4f666b9344dc4f5faebee63fab58f1f57 (MIT)
"""initialize basic snomed components and refsets
Revision ID: 89efb2ad498e
Revises: None
Create Date: 2016-08-17 10:44:17.479387
"""
# revision identifiers, used by Alembic.
revision = '89efb2ad498e'
down_revision = None

from alembic import op
import sqlalchemy as sa
import sil_snomed_server


def upgrade():
### commands auto generated by Alembic - please adjust! ###
op.create_table('curr_annotationrefset_f',
sa.Column('id', sil_snomed_server.data_types.custom_types.sa.dialects.postgresql.UUID(), nullable=False),
sa.Column('effective_time', sa.Date(), nullable=False),
sa.Column('active', sa.Boolean(), nullable=False),
sa.Column('module_id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.Column('refset_id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.Column('referenced_component_id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.Column('annotation', sa.Text(), nullable=True),
sa.PrimaryKeyConstraint('id', 'effective_time', 'active', 'module_id', 'refset_id', 'referenced_component_id')
)
op.create_index('sct_curr_annotationrefset_f_index_effective_time', 'curr_annotationrefset_f', ['effective_time'], unique=False, postgresql_using='brin')
op.create_table('curr_associationrefset_f',
sa.Column('id', sil_snomed_server.data_types.custom_types.sa.dialects.postgresql.UUID(), nullable=False),
sa.Column('effective_time', sa.Date(), nullable=False),
sa.Column('active', sa.Boolean(), nullable=False),
sa.Column('module_id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.Column('refset_id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.Column('referenced_component_id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.Column('target_component_id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.PrimaryKeyConstraint('id', 'effective_time', 'active', 'module_id', 'refset_id', 'referenced_component_id', 'target_component_id')
)
op.create_index('sct_curr_associationrefset_f_index_effective_time', 'curr_associationrefset_f', ['effective_time'], unique=False, postgresql_using='brin')
op.create_table('curr_attributevaluerefset_f',
sa.Column('id', sil_snomed_server.data_types.custom_types.sa.dialects.postgresql.UUID(), nullable=False),
sa.Column('effective_time', sa.Date(), nullable=False),
sa.Column('active', sa.Boolean(), nullable=False),
sa.Column('module_id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.Column('refset_id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.Column('referenced_component_id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.Column('value_id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.PrimaryKeyConstraint('id', 'effective_time', 'active', 'module_id', 'refset_id', 'referenced_component_id', 'value_id')
)
op.create_index('sct_curr_attributevaluerefset_f_index_effective_time', 'curr_attributevaluerefset_f', ['effective_time'], unique=False, postgresql_using='brin')
op.create_table('curr_complexmaprefset_f',
sa.Column('id', sil_snomed_server.data_types.custom_types.sa.dialects.postgresql.UUID(), nullable=False),
sa.Column('effective_time', sa.Date(), nullable=False),
sa.Column('active', sa.Boolean(), nullable=False),
sa.Column('module_id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.Column('refset_id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.Column('referenced_component_id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.Column('map_group', sa.Integer(), autoincrement=False, nullable=False),
sa.Column('map_priority', sa.Integer(), autoincrement=False, nullable=False),
sa.Column('map_rule', sa.Unicode(), nullable=True),
sa.Column('map_target', sa.Unicode(), nullable=True),
sa.Column('correlation_id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.Column('map_advice', sa.Unicode(), nullable=True),
sa.Column('map_block', sa.Integer(), autoincrement=False, nullable=True),
sa.PrimaryKeyConstraint('id', 'effective_time', 'active', 'module_id', 'refset_id', 'referenced_component_id', 'map_group', 'map_priority', 'correlation_id')
)
op.create_index('sct_curr_complexmaprefset_f_index_effective_time', 'curr_complexmaprefset_f', ['effective_time'], unique=False, postgresql_using='brin')
op.create_table('curr_concept_f',
sa.Column('id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.Column('effective_time', sa.Date(), nullable=False),
sa.Column('active', sa.Boolean(), nullable=False),
sa.Column('module_id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.Column('definition_status_id', sa.BigInteger(), nullable=True),
sa.PrimaryKeyConstraint('id', 'effective_time', 'active', 'module_id')
)
op.create_index(op.f('ix_curr_concept_f_id'), 'curr_concept_f', ['id'], unique=False)
op.create_index('sct_curr_concept_f_index_effective_time', 'curr_concept_f', ['effective_time'], unique=False, postgresql_using='brin')
op.create_table('curr_description_f',
sa.Column('id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.Column('effective_time', sa.Date(), nullable=False),
sa.Column('active', sa.Boolean(), nullable=False),
sa.Column('module_id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.Column('concept_id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.Column('language_code', sa.Unicode(), nullable=True),
sa.Column('type_id', sa.BigInteger(), nullable=True),
sa.Column('term', sa.Text(), nullable=True),
sa.Column('case_significance_id', sa.BigInteger(), nullable=True),
sa.PrimaryKeyConstraint('id', 'effective_time', 'active', 'module_id', 'concept_id')
)
op.create_index(op.f('ix_curr_description_f_id'), 'curr_description_f', ['id'], unique=False)
op.create_index('sct_curr_description_f_index_effective_time', 'curr_description_f', ['effective_time'], unique=False, postgresql_using='brin')
op.create_table('curr_descriptionformatrefset_f',
sa.Column('id', sil_snomed_server.data_types.custom_types.sa.dialects.postgresql.UUID(), nullable=False),
sa.Column('effective_time', sa.Date(), nullable=False),
sa.Column('active', sa.Boolean(), nullable=False),
sa.Column('module_id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.Column('refset_id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.Column('referenced_component_id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.Column('description_format_id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.Column('description_length', sa.Integer(), autoincrement=False, nullable=False),
sa.PrimaryKeyConstraint('id', 'effective_time', 'active', 'module_id', 'refset_id', 'referenced_component_id', 'description_format_id', 'description_length')
)
op.create_index('sct_curr_descriptionformatrefset_f_index_effective_time', 'curr_descriptionformatrefset_f', ['effective_time'], unique=False, postgresql_using='brin')
op.create_table('curr_extendedmaprefset_f',
sa.Column('id', sil_snomed_server.data_types.custom_types.sa.dialects.postgresql.UUID(), nullable=False),
sa.Column('effective_time', sa.Date(), nullable=False),
sa.Column('active', sa.Boolean(), nullable=False),
sa.Column('module_id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.Column('refset_id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.Column('referenced_component_id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.Column('map_group', sa.Integer(), autoincrement=False, nullable=False),
sa.Column('map_priority', sa.Integer(), autoincrement=False, nullable=False),
sa.Column('map_rule', sa.Unicode(), nullable=True),
sa.Column('map_target', sa.Unicode(), nullable=True),
sa.Column('correlation_id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.Column('map_category_id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.Column('map_advice', sa.Unicode(), nullable=True),
sa.PrimaryKeyConstraint('id', 'effective_time', 'active', 'module_id', 'refset_id', 'referenced_component_id', 'map_group', 'map_priority', 'correlation_id', 'map_category_id')
)
op.create_index('sct_curr_extendedmaprefset_f_index_effective_time', 'curr_extendedmaprefset_f', ['effective_time'], unique=False, postgresql_using='brin')
op.create_table('curr_langrefset_f',
sa.Column('id', sil_snomed_server.data_types.custom_types.sa.dialects.postgresql.UUID(), nullable=False),
sa.Column('effective_time', sa.Date(), nullable=False),
sa.Column('active', sa.Boolean(), nullable=False),
sa.Column('module_id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.Column('refset_id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.Column('referenced_component_id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.Column('acceptability_id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.PrimaryKeyConstraint('id', 'effective_time', 'active', 'module_id', 'refset_id', 'referenced_component_id', 'acceptability_id')
)
op.create_index('sct_curr_langrefset_f_index_effective_time', 'curr_langrefset_f', ['effective_time'], unique=False, postgresql_using='brin')
op.create_table('curr_moduledependencyrefset_f',
sa.Column('id', sil_snomed_server.data_types.custom_types.sa.dialects.postgresql.UUID(), nullable=False),
sa.Column('effective_time', sa.Date(), nullable=False),
sa.Column('active', sa.Boolean(), nullable=False),
sa.Column('module_id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.Column('refset_id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.Column('referenced_component_id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.Column('source_effective_time', sa.Date(), nullable=False),
sa.Column('target_effective_time', sa.Date(), nullable=False),
sa.PrimaryKeyConstraint('id', 'effective_time', 'active', 'module_id', 'refset_id', 'referenced_component_id', 'source_effective_time', 'target_effective_time')
)
op.create_index('sct_curr_moduledependencyrefset_f_index_effective_time', 'curr_moduledependencyrefset_f', ['effective_time'], unique=False, postgresql_using='brin')
op.create_table('curr_orderedrefset_f',
sa.Column('id', sil_snomed_server.data_types.custom_types.sa.dialects.postgresql.UUID(), nullable=False),
sa.Column('effective_time', sa.Date(), nullable=False),
sa.Column('active', sa.Boolean(), nullable=False),
sa.Column('module_id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.Column('refset_id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.Column('referenced_component_id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.Column('order', sa.Integer(), autoincrement=False, nullable=False),
sa.Column('linked_to_id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.PrimaryKeyConstraint('id', 'effective_time', 'active', 'module_id', 'refset_id', 'referenced_component_id', 'order', 'linked_to_id')
)
op.create_index('sct_curr_orderedrefset_f_index_effective_time', 'curr_orderedrefset_f', ['effective_time'], unique=False, postgresql_using='brin')
op.create_table('curr_queryspecificationrefset_f',
sa.Column('id', sil_snomed_server.data_types.custom_types.sa.dialects.postgresql.UUID(), nullable=False),
sa.Column('effective_time', sa.Date(), nullable=False),
sa.Column('active', sa.Boolean(), nullable=False),
sa.Column('module_id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.Column('refset_id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.Column('referenced_component_id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.Column('query', sa.Unicode(), nullable=True),
sa.PrimaryKeyConstraint('id', 'effective_time', 'active', 'module_id', 'refset_id', 'referenced_component_id')
)
op.create_index('sct_curr_queryspecificationrefset_f_index_effective_time', 'curr_queryspecificationrefset_f', ['effective_time'], unique=False, postgresql_using='brin')
op.create_table('curr_referencesetdescriptorrefset_f',
sa.Column('id', sil_snomed_server.data_types.custom_types.sa.dialects.postgresql.UUID(), nullable=False),
sa.Column('effective_time', sa.Date(), nullable=False),
sa.Column('active', sa.Boolean(), nullable=False),
sa.Column('module_id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.Column('refset_id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.Column('referenced_component_id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.Column('attribute_description_id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.Column('attribute_type_id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.Column('attribute_order', sa.Integer(), autoincrement=False, nullable=False),
sa.PrimaryKeyConstraint('id', 'effective_time', 'active', 'module_id', 'refset_id', 'referenced_component_id', 'attribute_description_id', 'attribute_type_id', 'attribute_order')
)
op.create_index('sct_curr_referencesetdescriptorrefset_f_index_effective_time', 'curr_referencesetdescriptorrefset_f', ['effective_time'], unique=False, postgresql_using='brin')
op.create_table('curr_relationship_f',
sa.Column('id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.Column('effective_time', sa.Date(), nullable=False),
sa.Column('active', sa.Boolean(), nullable=False),
sa.Column('module_id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.Column('source_id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.Column('destination_id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.Column('relationship_group', sa.Integer(), autoincrement=False, nullable=False),
sa.Column('type_id', sa.BigInteger(), nullable=True),
sa.Column('characteristic_type_id', sa.BigInteger(), nullable=True),
sa.Column('modifier_id', sa.BigInteger(), nullable=True),
sa.PrimaryKeyConstraint('id', 'effective_time', 'active', 'module_id', 'source_id', 'destination_id', 'relationship_group')
)
op.create_index(op.f('ix_curr_relationship_f_id'), 'curr_relationship_f', ['id'], unique=False)
op.create_index('sct_curr_relationship_f_index_effective_time', 'curr_relationship_f', ['effective_time'], unique=False, postgresql_using='brin')
op.create_table('curr_simplemaprefset_f',
sa.Column('id', sil_snomed_server.data_types.custom_types.sa.dialects.postgresql.UUID(), nullable=False),
sa.Column('effective_time', sa.Date(), nullable=False),
sa.Column('active', sa.Boolean(), nullable=False),
sa.Column('module_id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.Column('refset_id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.Column('referenced_component_id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.Column('map_target', sa.Unicode(), nullable=True),
sa.PrimaryKeyConstraint('id', 'effective_time', 'active', 'module_id', 'refset_id', 'referenced_component_id')
)
op.create_index('sct_curr_simplemaprefset_f_index_effective_time', 'curr_simplemaprefset_f', ['effective_time'], unique=False, postgresql_using='brin')
op.create_table('curr_simplerefset_f',
sa.Column('id', sil_snomed_server.data_types.custom_types.sa.dialects.postgresql.UUID(), nullable=False),
sa.Column('effective_time', sa.Date(), nullable=False),
sa.Column('active', sa.Boolean(), nullable=False),
sa.Column('module_id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.Column('refset_id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.Column('referenced_component_id', sa.BigInteger(), autoincrement=False, nullable=False),
sa.PrimaryKeyConstraint('id', 'effective_time', 'active', 'module_id', 'refset_id', 'referenced_component_id')
)
op.create_index('sct_curr_simplerefset_f_index_effective_time', 'curr_simplerefset_f', ['effective_time'], unique=False, postgresql_using='brin')
# ### end Alembic commands ###

def downgrade():
    # ### commands auto generated by Alembic - please adjust! ###
    op.drop_index('sct_curr_simplerefset_f_index_effective_time', table_name='curr_simplerefset_f')
    op.drop_table('curr_simplerefset_f')
    op.drop_index('sct_curr_simplemaprefset_f_index_effective_time', table_name='curr_simplemaprefset_f')
    op.drop_table('curr_simplemaprefset_f')
    op.drop_index('sct_curr_relationship_f_index_effective_time', table_name='curr_relationship_f')
    op.drop_index(op.f('ix_curr_relationship_f_id'), table_name='curr_relationship_f')
    op.drop_table('curr_relationship_f')
    op.drop_index('sct_curr_referencesetdescriptorrefset_f_index_effective_time', table_name='curr_referencesetdescriptorrefset_f')
    op.drop_table('curr_referencesetdescriptorrefset_f')
    op.drop_index('sct_curr_queryspecificationrefset_f_index_effective_time', table_name='curr_queryspecificationrefset_f')
    op.drop_table('curr_queryspecificationrefset_f')
    op.drop_index('sct_curr_orderedrefset_f_index_effective_time', table_name='curr_orderedrefset_f')
    op.drop_table('curr_orderedrefset_f')
    op.drop_index('sct_curr_moduledependencyrefset_f_index_effective_time', table_name='curr_moduledependencyrefset_f')
    op.drop_table('curr_moduledependencyrefset_f')
    op.drop_index('sct_curr_langrefset_f_index_effective_time', table_name='curr_langrefset_f')
    op.drop_table('curr_langrefset_f')
    op.drop_index('sct_curr_extendedmaprefset_f_index_effective_time', table_name='curr_extendedmaprefset_f')
    op.drop_table('curr_extendedmaprefset_f')
    op.drop_index('sct_curr_descriptionformatrefset_f_index_effective_time', table_name='curr_descriptionformatrefset_f')
    op.drop_table('curr_descriptionformatrefset_f')
    op.drop_index('sct_curr_description_f_index_effective_time', table_name='curr_description_f')
    op.drop_index(op.f('ix_curr_description_f_id'), table_name='curr_description_f')
    op.drop_table('curr_description_f')
    op.drop_index('sct_curr_concept_f_index_effective_time', table_name='curr_concept_f')
    op.drop_index(op.f('ix_curr_concept_f_id'), table_name='curr_concept_f')
    op.drop_table('curr_concept_f')
    op.drop_index('sct_curr_complexmaprefset_f_index_effective_time', table_name='curr_complexmaprefset_f')
    op.drop_table('curr_complexmaprefset_f')
    op.drop_index('sct_curr_attributevaluerefset_f_index_effective_time', table_name='curr_attributevaluerefset_f')
    op.drop_table('curr_attributevaluerefset_f')
    op.drop_index('sct_curr_associationrefset_f_index_effective_time', table_name='curr_associationrefset_f')
    op.drop_table('curr_associationrefset_f')
    op.drop_index('sct_curr_annotationrefset_f_index_effective_time', table_name='curr_annotationrefset_f')
    op.drop_table('curr_annotationrefset_f')
    # ### end Alembic commands ###
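The downgrade above drops each index and table in exact reverse order of its creation in upgrade. A minimal, self-contained sketch of that mirroring rule (the helper and the tuple format are hypothetical illustrations, not Alembic API):

```python
def mirror_downgrade(creation_ops):
    """Emit drop steps for (kind, name, table) creation steps, in reverse order."""
    drops = []
    for kind, name, table in reversed(creation_ops):
        if kind == "index":
            drops.append(("drop_index", name, table))
        elif kind == "table":
            drops.append(("drop_table", table, None))
    return drops


# Creation order in upgrade: the table first, then its BRIN index on effective_time.
creation_log = [
    ("table", None, "curr_simplerefset_f"),
    ("index", "sct_curr_simplerefset_f_index_effective_time", "curr_simplerefset_f"),
]
drops = mirror_downgrade(creation_log)
# The index is dropped before its table, matching the downgrade() above.
```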
| 73.934615 | 182 | 0.755969 | 2,444 | 19,223 | 5.638298 | 0.050327 | 0.076052 | 0.123004 | 0.160015 | 0.904862 | 0.845791 | 0.830116 | 0.802322 | 0.664224 | 0.641582 | 0 | 0.001835 | 0.092597 | 19,223 | 259 | 183 | 74.220077 | 0.788167 | 0.016439 | 0 | 0.454167 | 0 | 0 | 0.332733 | 0.186182 | 0 | 0 | 0 | 0 | 0 | 1 | 0.008333 | false | 0 | 0.0125 | 0 | 0.020833 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fa7a71794cb0bc9a11be0bbedaf4e8205534a98b | 11,983 | py | Python | tests/test_wms.py | MelinaAb/geoengine-python | 838fa2fd5c2602284d8a3391f2e8cddeae07da33 | [
"Apache-2.0"
] | 3 | 2021-07-27T18:00:47.000Z | 2021-11-10T16:02:06.000Z | tests/test_wms.py | geo-engine/geoengine-python | f785e67b4a4ab2a39f93a7b875f41119d23a212f | [
"Apache-2.0"
] | 43 | 2021-06-09T10:34:27.000Z | 2022-03-29T09:05:11.000Z | tests/test_wms.py | MelinaAb/geoengine-python | 838fa2fd5c2602284d8a3391f2e8cddeae07da33 | [
"Apache-2.0"
] | 2 | 2021-08-13T13:16:11.000Z | 2022-03-29T08:03:37.000Z |
'''Tests for WMS calls'''
from datetime import datetime
import unittest
import textwrap
from PIL import Image
import requests_mock
from geoengine.types import QueryRectangle
import geoengine as ge
class WmsTests(unittest.TestCase):
    '''WMS test runner'''

    def setUp(self) -> None:
        ge.reset(False)
def test_ndvi_image(self):
with requests_mock.Mocker() as m,\
open("tests/responses/wms-ndvi.png", "rb") as ndvi_png,\
open("tests/responses/4326.gml", "rb") as epsg4326_gml:
m.post('http://mock-instance/anonymous', json={
"id": "c4983c3e-9b53-47ae-bda9-382223bd5081",
"project": None,
"view": None
})
m.post('http://mock-instance/workflow',
json={
"id": "5b9508a8-bd34-5a1c-acd6-75bb832d2d38"
},
request_headers={'Authorization': 'Bearer c4983c3e-9b53-47ae-bda9-382223bd5081'})
m.get('http://mock-instance/workflow/5b9508a8-bd34-5a1c-acd6-75bb832d2d38/metadata',
json={
"type": "raster",
"dataType": "U8",
"spatialReference": "EPSG:4326",
"measurement": {
"type": "unitless"
},
"noDataValue": 0.0
},
request_headers={'Authorization': 'Bearer c4983c3e-9b53-47ae-bda9-382223bd5081'})
m.get('http://epsg.io/4326.gml?download', body=epsg4326_gml)
# Unfortunately, we need a separate library to catch the request from the WMS call
with open("tests/responses/wms_capabilities.xml", "r", encoding="utf-8") as wms_capabilities:
m.get(
# pylint: disable=line-too-long
'http://mock-instance/wms/5b9508a8-bd34-5a1c-acd6-75bb832d2d38?service=WMS&request=GetCapabilities&version=1.3.0',
text=wms_capabilities.read())
m.get(
# pylint: disable=line-too-long
'http://mock-instance/wms/5b9508a8-bd34-5a1c-acd6-75bb832d2d38?service=WMS&version=1.3.0&request=GetMap&layers=5b9508a8-bd34-5a1c-acd6-75bb832d2d38&time=2014-04-01T12%3A00%3A00.000%2B00%3A00&crs=EPSG%3A4326&bbox=-90.0%2C-180.0%2C90.0%2C180.0&width=200&height=100&format=image%2Fpng&styles=custom%3A%7B%22type%22%3A+%22linearGradient%22%2C+%22breakpoints%22%3A+%5B%7B%22value%22%3A+0%2C+%22color%22%3A+%5B0%2C+0%2C+0%2C+255%5D%7D%2C+%7B%22value%22%3A+255%2C+%22color%22%3A+%5B255%2C+255%2C+255%2C+255%5D%7D%5D%2C+%22noDataColor%22%3A+%5B0%2C+0%2C+0%2C+0%5D%2C+%22defaultColor%22%3A+%5B0%2C+0%2C+0%2C+0%5D%7D',
body=ndvi_png,
)
ge.initialize("http://mock-instance")
workflow_definition = {
"type": "Raster",
"operator": {
"type": "GdalSource",
"params": {
"dataset": {
"type": "internal",
"datasetId": "36574dc3-560a-4b09-9d22-d5945f2b8093"
}
}
}
}
time = datetime.strptime(
'2014-04-01T12:00:00.000Z', "%Y-%m-%dT%H:%M:%S.%f%z")
workflow = ge.register_workflow(workflow_definition)
img = workflow.wms_get_map_as_image(
QueryRectangle(
[-180.0, -90.0, 180.0, 90.0],
[time, time],
resolution=(1.8, 1.8)
),
colorizer_min_max=(0, 255)
)
self.assertEqual(img, Image.open("tests/responses/wms-ndvi.png"))
def test_image_error(self):
with requests_mock.Mocker() as m,\
open("tests/responses/4326.gml", "rb") as epsg4326_gml:
m.post('http://mock-instance/anonymous', json={
"id": "c4983c3e-9b53-47ae-bda9-382223bd5081",
"project": None,
"view": None
})
m.post('http://mock-instance/workflow',
json={
"id": "5b9508a8-bd34-5a1c-acd6-75bb832d2d38"
},
request_headers={'Authorization': 'Bearer c4983c3e-9b53-47ae-bda9-382223bd5081'})
m.get('http://mock-instance/workflow/5b9508a8-bd34-5a1c-acd6-75bb832d2d38/metadata',
json={
"type": "raster",
"dataType": "U8",
"spatialReference": "EPSG:4326",
"measurement": {
"type": "unitless"
},
"noDataValue": 0.0
},
request_headers={'Authorization': 'Bearer c4983c3e-9b53-47ae-bda9-382223bd5081'})
m.get('http://epsg.io/4326.gml?download', body=epsg4326_gml)
# Unfortunately, we need a separate library to catch the request from the WMS call
m.get(
# pylint: disable=line-too-long
'http://mock-instance/wms/5b9508a8-bd34-5a1c-acd6-75bb832d2d38?service=WMS&version=1.3.0&request=GetMap&layers=5b9508a8-bd34-5a1c-acd6-75bb832d2d38&time=2004-04-01T12%3A00%3A00.000%2B00%3A00&crs=EPSG%3A4326&bbox=-90.0%2C-180.0%2C90.0%2C180.0&width=200&height=100&format=image%2Fpng&styles=custom%3A%7B%22type%22%3A+%22linearGradient%22%2C+%22breakpoints%22%3A+%5B%7B%22value%22%3A+0%2C+%22color%22%3A+%5B0%2C+0%2C+0%2C+255%5D%7D%2C+%7B%22value%22%3A+255%2C+%22color%22%3A+%5B255%2C+255%2C+255%2C+255%5D%7D%5D%2C+%22noDataColor%22%3A+%5B0%2C+0%2C+0%2C+0%5D%2C+%22defaultColor%22%3A+%5B0%2C+0%2C+0%2C+0%5D%7D',
json={
"error": "Operator",
"message": 'Operator: Could not open gdal dataset for file path '
'"test_data/raster/modis_ndvi/MOD13A2_M_NDVI_2004-04-01.TIFF"'
},
status_code=400,
)
ge.initialize("http://mock-instance")
workflow_definition = {
"type": "Raster",
"operator": {
"type": "GdalSource",
"params": {
"dataset": {
"type": "internal",
"datasetId": "36574dc3-560a-4b09-9d22-d5945f2b8093"
}
}
}
}
time = datetime.strptime(
'2004-04-01T12:00:00.000Z', "%Y-%m-%dT%H:%M:%S.%f%z")
workflow = ge.register_workflow(workflow_definition)
with self.assertRaises(ge.GeoEngineException) as ctx:
workflow.wms_get_map_as_image(
QueryRectangle(
[-180.0, -90.0, 180.0, 90.0],
[time, time],
resolution=(1.8, 1.8)
),
colorizer_min_max=(0, 255)
)
self.assertEqual(str(ctx.exception),
'Operator: Operator: Could not open gdal dataset for file path '
'"test_data/raster/modis_ndvi/MOD13A2_M_NDVI_2004-04-01.TIFF"')
def test_wms_url(self):
with requests_mock.Mocker() as m, open("tests/responses/4326.gml", "rb") as epsg4326_gml:
m.post('http://mock-instance/anonymous', json={
"id": "c4983c3e-9b53-47ae-bda9-382223bd5081",
"project": None,
"view": None
})
m.post('http://mock-instance/workflow',
json={
"id": "5b9508a8-bd34-5a1c-acd6-75bb832d2d38"
},
request_headers={'Authorization': 'Bearer c4983c3e-9b53-47ae-bda9-382223bd5081'})
m.get('http://mock-instance/workflow/5b9508a8-bd34-5a1c-acd6-75bb832d2d38/metadata',
json={
"type": "raster",
"dataType": "U8",
"spatialReference": "EPSG:4326",
"measurement": {
"type": "unitless"
},
"noDataValue": 0.0
},
request_headers={'Authorization': 'Bearer c4983c3e-9b53-47ae-bda9-382223bd5081'})
m.get('http://epsg.io/4326.gml?download', body=epsg4326_gml)
ge.initialize("http://mock-instance")
workflow_definition = {
"type": "Raster",
"operator": {
"type": "GdalSource",
"params": {
"dataset": {
"type": "internal",
"datasetId": "36574dc3-560a-4b09-9d22-d5945f2b8093"
}
}
}
}
time = datetime.strptime(
'2014-04-01T12:00:00.000Z', "%Y-%m-%dT%H:%M:%S.%f%z")
workflow = ge.register_workflow(workflow_definition)
wms_curl = workflow.wms_get_map_curl(QueryRectangle(
[-180.0, -90.0, 180.0, 90.0],
[time, time],
resolution=(1, 1),
))
self.assertEqual(
# pylint: disable=line-too-long
wms_curl,
"""curl -X GET -H "Authorization: Bearer c4983c3e-9b53-47ae-bda9-382223bd5081" 'http://mock-instance/wms/5b9508a8-bd34-5a1c-acd6-75bb832d2d38?service=WMS&version=1.3.0&request=GetMap&layers=5b9508a8-bd34-5a1c-acd6-75bb832d2d38&time=2014-04-01T12%3A00%3A00.000%2B00%3A00&crs=EPSG%3A4326&bbox=-90.0%2C-180.0%2C90.0%2C180.0&width=360&height=180&format=image%2Fpng&styles='"""
)
def test_result_descriptor(self):
with requests_mock.Mocker() as m:
m.post('http://mock-instance/anonymous', json={
"id": "c4983c3e-9b53-47ae-bda9-382223bd5081",
"project": None,
"view": None
})
m.get('http://mock-instance/workflow/5b9508a8-bd34-5a1c-acd6-75bb832d2d38/metadata',
json={
"type": "raster",
"dataType": "U8",
"spatialReference": "EPSG:4326",
"measurement": {
"type": "unitless"
},
"noDataValue": 0.0
},
request_headers={'Authorization': 'Bearer c4983c3e-9b53-47ae-bda9-382223bd5081'})
m.get('http://mock-instance/workflow/foo/metadata',
json={
'error': 'NotFound',
'message': 'Not Found',
},
request_headers={'Authorization': 'Bearer c4983c3e-9b53-47ae-bda9-382223bd5081'})
ge.initialize("http://mock-instance")
workflow = ge.workflow_by_id(
'5b9508a8-bd34-5a1c-acd6-75bb832d2d38')
result_descriptor = workflow.get_result_descriptor()
expected_repr = '''\
Data type: U8
Spatial Reference: EPSG:4326
Measurement: {'type': 'unitless'}
No Data Value: 0.0
'''
self.assertEqual(
repr(result_descriptor),
textwrap.dedent(expected_repr)
)
with self.assertRaises(ge.GeoEngineException) as exception:
workflow = ge.workflow_by_id('foo')
result_descriptor = workflow.get_result_descriptor()
self.assertEqual(str(exception.exception),
'NotFound: Not Found')
if __name__ == '__main__':
unittest.main()
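The GetMap URLs matched in these tests are ordinary percent-encoded query strings. A stdlib-only sketch of how such a query could be assembled (parameter names mirror the mocked URLs; this is an illustration, not the geoengine client API):

```python
from urllib.parse import urlencode


def wms_get_map_query(layer_id, time, bbox, width, height):
    """Build a WMS 1.3.0 GetMap query string for one layer and time slice."""
    # urlencode's default quote_plus encoding turns ':' into %3A, '+' into %2B,
    # ',' into %2C and '/' into %2F, as seen in the mocked request URLs above.
    params = {
        "service": "WMS",
        "version": "1.3.0",
        "request": "GetMap",
        "layers": layer_id,
        "time": time,
        "crs": "EPSG:4326",
        "bbox": ",".join(str(v) for v in bbox),
        "width": width,
        "height": height,
        "format": "image/png",
    }
    return urlencode(params)


query = wms_get_map_query(
    "5b9508a8-bd34-5a1c-acd6-75bb832d2d38",
    "2014-04-01T12:00:00.000+00:00",
    (-90.0, -180.0, 90.0, 180.0),
    200,
    100,
)
```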
| 41.898601 | 624 | 0.501127 | 1,233 | 11,983 | 4.797243 | 0.176805 | 0.02705 | 0.0541 | 0.050719 | 0.853593 | 0.838546 | 0.791547 | 0.778022 | 0.778022 | 0.767033 | 0 | 0.162693 | 0.362931 | 11,983 | 285 | 625 | 42.045614 | 0.61213 | 0.026538 | 0 | 0.602679 | 0 | 0.013393 | 0.380154 | 0.098076 | 0 | 0 | 0 | 0 | 0.03125 | 1 | 0.022321 | false | 0 | 0.03125 | 0 | 0.058036 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
faa98b768c6bf100656d84faf3d05bd6e7ae3db5 | 6,682 | py | Python | tests/test_undos/test_coreutils/test_cp.py | joshmeranda/undo | f54581223c0c157702dda6124691bb40fa2e2b31 | [
"MIT"
] | null | null | null | tests/test_undos/test_coreutils/test_cp.py | joshmeranda/undo | f54581223c0c157702dda6124691bb40fa2e2b31 | [
"MIT"
] | null | null | null | tests/test_undos/test_coreutils/test_cp.py | joshmeranda/undo | f54581223c0c157702dda6124691bb40fa2e2b31 | [
"MIT"
] | null | null | null |
import os
import shutil
import unittest
from undo import resolve, expand
from tests.test_undos.test_coreutils import common
class TestCp(unittest.TestCase):
    @classmethod
    def setUpClass(cls) -> None:
        if os.path.exists(common.COREUTILS_TEST_ENV_DIR):
            shutil.rmtree(common.COREUTILS_TEST_ENV_DIR)

        os.mkdir(common.COREUTILS_TEST_ENV_DIR)
        os.mkdir(os.path.join(
            common.COREUTILS_TEST_ENV_DIR,
            "DIR"
        ))

        cwd_bak = os.getcwd()
        os.chdir(common.COREUTILS_TEST_ENV_DIR)

        cls.addClassCleanup(shutil.rmtree, common.COREUTILS_TEST_ENV_DIR)
        cls.addClassCleanup(os.chdir, cwd_bak)
def test_copy_single(self):
command = "cp SRC DST"
expected = []
actual = [expand.expand(undo, env, ("%", "%"), "; ")
for env, undo in
resolve.resolve(command, [common.COREUTILS_UNDO_DIR], False, False, "sh")]
self.assertListEqual(expected, actual)
expected = ["rm DST"]
actual = [expand.expand(undo, env, ("%", "%"), "; ")
for env, undo in
resolve.resolve(command, [common.COREUTILS_UNDO_DIR], False, True, "sh")]
self.assertListEqual(expected, actual)
def test_copy_single_precise(self):
command = "cp --no-clobber SRC DST"
expected = ["rm DST"]
actual = [expand.expand(undo, env, ("%", "%"), "; ")
for env, undo in
resolve.resolve(command, [common.COREUTILS_UNDO_DIR], False, False, "sh")]
self.assertListEqual(expected, actual)
def test_copy_single_with_no_target_directory(self):
command = "cp -T SRC DST"
expected = []
actual = [expand.expand(undo, env, ("%", "%"), "; ")
for env, undo in
resolve.resolve(command, [common.COREUTILS_UNDO_DIR], False, False, "sh")]
self.assertListEqual(expected, actual)
expected = ["rm DST"]
actual = [expand.expand(undo, env, ("%", "%"), "; ")
for env, undo in
resolve.resolve(command, [common.COREUTILS_UNDO_DIR], False, True, "sh")]
self.assertListEqual(expected, actual)
def test_copy_single_with_no_target_directory_precise(self):
command = "cp -T --no-clobber SRC DST"
expected = ["rm DST"]
actual = [expand.expand(undo, env, ("%", "%"), "; ")
for env, undo in
resolve.resolve(command, [common.COREUTILS_UNDO_DIR], False, False, "sh")]
self.assertListEqual(expected, actual)
def test_copy_single_into_dir(self):
command = "cp A DIR"
expected = []
actual = [expand.expand(undo, env, ("%", "%"), "; ")
for env, undo in
resolve.resolve(command, [common.COREUTILS_UNDO_DIR], False, False, "sh")]
self.assertListEqual(expected, actual)
expected = ["rm DIR/A"]
actual = [expand.expand(undo, env, ("%", "%"), "; ")
for env, undo in
resolve.resolve(command, [common.COREUTILS_UNDO_DIR], False, True, "sh")]
self.assertListEqual(expected, actual)
def test_copy_single_into_dir_precise(self):
command = "cp --no-clobber A DIR"
expected = ["rm DIR/A"]
actual = [expand.expand(undo, env, ("%", "%"), "; ")
for env, undo in
resolve.resolve(command, [common.COREUTILS_UNDO_DIR], False, False, "sh")]
self.assertListEqual(expected, actual)
def test_copy_many_into_dir(self):
command = "cp A B C DIR"
expected = []
actual = [expand.expand(undo, env, ("%", "%"), "; ")
for env, undo in
resolve.resolve(command, [common.COREUTILS_UNDO_DIR], False, False, "sh")]
self.assertListEqual(expected, actual)
expected = ["rm DIR/A DIR/B DIR/C"]
actual = [expand.expand(undo, env, ("%", "%"), "; ")
for env, undo in
resolve.resolve(command, [common.COREUTILS_UNDO_DIR], False, True, "sh")]
self.assertListEqual(expected, actual)
def test_copy_many_into_dir_precise(self):
command = "cp --no-clobber A B C DIR"
expected = ["rm DIR/A DIR/B DIR/C"]
actual = [expand.expand(undo, env, ("%", "%"), "; ")
for env, undo in
resolve.resolve(command, [common.COREUTILS_UNDO_DIR], False, False, "sh")]
self.assertListEqual(expected, actual)
def test_copy_single_into_target_dir(self):
command = "cp --target-directory DIR A"
expected = []
actual = [expand.expand(undo, env, ("%", "%"), "; ")
for env, undo in
resolve.resolve(command, [common.COREUTILS_UNDO_DIR], False, False, "sh")]
self.assertListEqual(expected, actual)
expected = ["rm DIR/A"]
actual = [expand.expand(undo, env, ("%", "%"), "; ")
for env, undo in
resolve.resolve(command, [common.COREUTILS_UNDO_DIR], False, True, "sh")]
self.assertListEqual(expected, actual)
def test_copy_single_into_target_dir_precise(self):
command = "cp --no-clobber --target-directory DIR A"
expected = ["rm DIR/A"]
actual = [expand.expand(undo, env, ("%", "%"), "; ")
for env, undo in
resolve.resolve(command, [common.COREUTILS_UNDO_DIR], False, False, "sh")]
self.assertListEqual(expected, actual)
def test_copy_many_into_target_dir(self):
command = "cp --target-directory DIR A B C"
expected = []
actual = [expand.expand(undo, env, ("%", "%"), "; ")
for env, undo in
resolve.resolve(command, [common.COREUTILS_UNDO_DIR], False, False, "sh")]
self.assertListEqual(expected, actual)
expected = ["rm DIR/A DIR/B DIR/C"]
actual = [expand.expand(undo, env, ("%", "%"), "; ")
for env, undo in
resolve.resolve(command, [common.COREUTILS_UNDO_DIR], False, True, "sh")]
self.assertListEqual(expected, actual)
def test_copy_many_into_target_dir_precise(self):
command = "cp --no-clobber --target-directory DIR A B C"
expected = ["rm DIR/A DIR/B DIR/C"]
actual = [expand.expand(undo, env, ("%", "%"), "; ")
for env, undo in
resolve.resolve(command, [common.COREUTILS_UNDO_DIR], False, False, "sh")]
self.assertListEqual(expected, actual)
if __name__ == '__main__':
unittest.main()
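A rough sketch of the rule the expected values above encode: undoing `cp SRC... DIR` removes each copy that landed inside DIR. The helper below is hypothetical and only stands in for the real resolve/expand pipeline:

```python
import posixpath


def undo_cp_into_dir(sources, target_dir):
    """Build the shell command that undoes copying `sources` into `target_dir`."""
    copies = [posixpath.join(target_dir, posixpath.basename(src)) for src in sources]
    return "rm " + " ".join(copies)


# Mirrors the precise expectations in the tests above.
cmd = undo_cp_into_dir(["A", "B", "C"], "DIR")
```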
| 34.443299 | 92 | 0.5654 | 745 | 6,682 | 4.903356 | 0.085906 | 0.098549 | 0.088694 | 0.108404 | 0.915138 | 0.898713 | 0.885574 | 0.836846 | 0.836846 | 0.820422 | 0 | 0 | 0.292876 | 6,682 | 193 | 93 | 34.621762 | 0.773122 | 0 | 0 | 0.666667 | 0 | 0 | 0.080066 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 1 | 0.096296 | false | 0 | 0.037037 | 0 | 0.140741 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fade17006e6552887b4af145cf38458f2635eaba | 22 | py | Python | src/vsql/__init__.py | GlobalMaksimum/ipython-vsql | 856f2640346572582e7ce72690f6c11247aa735b | [
"MIT"
] | 2 | 2019-12-19T06:22:06.000Z | 2021-05-29T16:19:46.000Z | src/vsql/__init__.py | GlobalMaksimum/ipython-vsql | 856f2640346572582e7ce72690f6c11247aa735b | [
"MIT"
] | null | null | null | src/vsql/__init__.py | GlobalMaksimum/ipython-vsql | 856f2640346572582e7ce72690f6c11247aa735b | [
"MIT"
] | null | null | null | from .vmagic import *
| 11 | 21 | 0.727273 | 3 | 22 | 5.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 22 | 1 | 22 | 22 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fae8868e152d889375743c057d4728960aa93a21 | 123 | py | Python | agn_utils/batch_processing/__init__.py | avivajpeyi/agn_phenomenological_model | 94d2e39f43cb11986d0abcb33769ee1aa501ca85 | [
"MIT"
] | null | null | null | agn_utils/batch_processing/__init__.py | avivajpeyi/agn_phenomenological_model | 94d2e39f43cb11986d0abcb33769ee1aa501ca85 | [
"MIT"
] | null | null | null | agn_utils/batch_processing/__init__.py | avivajpeyi/agn_phenomenological_model | 94d2e39f43cb11986d0abcb33769ee1aa501ca85 | [
"MIT"
] | 1 | 2021-08-22T07:05:15.000Z | 2021-08-22T07:05:15.000Z | from .mutipool_jobs import run_function_with_multiprocess
from .python_script_dag_creator import create_python_script_jobs
| 41 | 64 | 0.918699 | 18 | 123 | 5.722222 | 0.722222 | 0.23301 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.065041 | 123 | 2 | 65 | 61.5 | 0.895652 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4f05fdfa7e5c7be00d1179c5fa51eb122d373027 | 59 | py | Python | carp_api/common/__init__.py | Drachenfels/carp-api | b7f280a645d5116d85dc7cd1d177b449704457de | [
"MIT"
] | null | null | null | carp_api/common/__init__.py | Drachenfels/carp-api | b7f280a645d5116d85dc7cd1d177b449704457de | [
"MIT"
] | null | null | null | carp_api/common/__init__.py | Drachenfels/carp-api | b7f280a645d5116d85dc7cd1d177b449704457de | [
"MIT"
] | null | null | null | from . import logic # NOQA
from . import endpoint # NOQA
| 19.666667 | 30 | 0.694915 | 8 | 59 | 5.125 | 0.625 | 0.487805 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.237288 | 59 | 2 | 31 | 29.5 | 0.911111 | 0.152542 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4f0d7a4a95078aff0e8ebfb00e2717ed7e6edf93 | 37,881 | py | Python | instances/passenger_demand/pas-20210421-2109-int14000000000000001e/61.py | LHcau/scheduling-shared-passenger-and-freight-transport-on-a-fixed-infrastructure | bba1e6af5bc8d9deaa2dc3b83f6fe9ddf15d2a11 | [
"BSD-3-Clause"
] | null | null | null | instances/passenger_demand/pas-20210421-2109-int14000000000000001e/61.py | LHcau/scheduling-shared-passenger-and-freight-transport-on-a-fixed-infrastructure | bba1e6af5bc8d9deaa2dc3b83f6fe9ddf15d2a11 | [
"BSD-3-Clause"
] | null | null | null | instances/passenger_demand/pas-20210421-2109-int14000000000000001e/61.py | LHcau/scheduling-shared-passenger-and-freight-transport-on-a-fixed-infrastructure | bba1e6af5bc8d9deaa2dc3b83f6fe9ddf15d2a11 | [
"BSD-3-Clause"
] | null | null | null |
"""
PASSENGERS
"""
numPassengers = 3176
passenger_arriving = (
(3, 3, 8, 3, 5, 0, 12, 7, 7, 7, 1, 0), # 0
(2, 8, 6, 7, 2, 0, 8, 10, 8, 5, 1, 0), # 1
(2, 5, 4, 4, 4, 0, 4, 13, 5, 3, 3, 0), # 2
(3, 7, 5, 6, 4, 0, 3, 11, 6, 0, 2, 0), # 3
(5, 7, 6, 1, 3, 0, 9, 10, 7, 7, 1, 0), # 4
(3, 10, 3, 3, 3, 0, 8, 7, 5, 12, 6, 0), # 5
(3, 7, 4, 3, 1, 0, 8, 11, 6, 3, 5, 0), # 6
(5, 3, 7, 2, 1, 0, 8, 7, 8, 1, 4, 0), # 7
(7, 12, 7, 7, 2, 0, 8, 6, 9, 6, 6, 0), # 8
(3, 4, 4, 2, 0, 0, 9, 11, 8, 1, 4, 0), # 9
(3, 8, 9, 3, 1, 0, 4, 10, 5, 1, 5, 0), # 10
(6, 6, 8, 2, 3, 0, 8, 6, 8, 2, 1, 0), # 11
(0, 8, 7, 1, 1, 0, 7, 16, 1, 6, 5, 0), # 12
(2, 10, 3, 4, 1, 0, 7, 9, 6, 8, 1, 0), # 13
(7, 6, 7, 3, 4, 0, 5, 7, 6, 3, 1, 0), # 14
(1, 8, 7, 3, 0, 0, 6, 9, 7, 2, 0, 0), # 15
(4, 11, 10, 2, 2, 0, 9, 8, 5, 7, 0, 0), # 16
(5, 12, 6, 8, 3, 0, 8, 6, 10, 8, 1, 0), # 17
(4, 9, 6, 5, 0, 0, 8, 7, 2, 3, 0, 0), # 18
(1, 14, 3, 4, 2, 0, 6, 10, 7, 4, 5, 0), # 19
(3, 10, 9, 5, 2, 0, 7, 3, 3, 5, 1, 0), # 20
(2, 15, 13, 8, 1, 0, 6, 12, 6, 3, 3, 0), # 21
(1, 11, 4, 5, 1, 0, 5, 4, 2, 9, 3, 0), # 22
(2, 9, 9, 2, 2, 0, 7, 14, 3, 5, 6, 0), # 23
(6, 8, 10, 2, 2, 0, 10, 11, 5, 6, 1, 0), # 24
(4, 8, 7, 2, 0, 0, 8, 7, 7, 3, 4, 0), # 25
(4, 10, 7, 1, 3, 0, 7, 6, 4, 4, 2, 0), # 26
(3, 10, 6, 4, 1, 0, 4, 15, 7, 5, 1, 0), # 27
(3, 9, 6, 2, 2, 0, 11, 8, 5, 2, 1, 0), # 28
(6, 5, 11, 4, 3, 0, 7, 5, 8, 3, 2, 0), # 29
(1, 14, 7, 5, 0, 0, 7, 10, 7, 7, 4, 0), # 30
(5, 8, 1, 1, 0, 0, 8, 11, 6, 5, 4, 0), # 31
(2, 8, 6, 2, 1, 0, 9, 6, 3, 5, 1, 0), # 32
(4, 14, 10, 6, 4, 0, 10, 12, 6, 3, 3, 0), # 33
(4, 10, 10, 1, 2, 0, 9, 15, 5, 6, 1, 0), # 34
(7, 15, 5, 4, 1, 0, 8, 14, 7, 3, 0, 0), # 35
(3, 8, 6, 4, 0, 0, 2, 12, 7, 6, 3, 0), # 36
(3, 13, 4, 4, 1, 0, 4, 13, 6, 3, 5, 0), # 37
(2, 9, 12, 6, 0, 0, 2, 9, 9, 4, 3, 0), # 38
(4, 8, 8, 3, 3, 0, 8, 13, 5, 5, 2, 0), # 39
(1, 9, 7, 5, 1, 0, 3, 8, 7, 6, 1, 0), # 40
(4, 14, 6, 9, 0, 0, 9, 9, 6, 7, 3, 0), # 41
(3, 9, 8, 1, 3, 0, 8, 9, 11, 5, 4, 0), # 42
(5, 14, 9, 6, 1, 0, 5, 4, 6, 3, 4, 0), # 43
(7, 15, 9, 2, 4, 0, 8, 4, 6, 6, 2, 0), # 44
(2, 9, 5, 2, 1, 0, 6, 13, 5, 7, 0, 0), # 45
(4, 7, 10, 3, 1, 0, 8, 6, 5, 6, 1, 0), # 46
(2, 11, 12, 4, 3, 0, 8, 5, 5, 8, 4, 0), # 47
(7, 9, 7, 3, 1, 0, 6, 6, 2, 7, 1, 0), # 48
(1, 9, 12, 4, 2, 0, 3, 4, 11, 1, 1, 0), # 49
(4, 8, 8, 2, 1, 0, 10, 10, 5, 3, 7, 0), # 50
(4, 7, 12, 4, 1, 0, 4, 7, 3, 6, 1, 0), # 51
(0, 15, 5, 1, 3, 0, 4, 4, 1, 2, 1, 0), # 52
(6, 7, 4, 3, 1, 0, 5, 9, 9, 3, 1, 0), # 53
(3, 8, 8, 9, 2, 0, 4, 8, 8, 5, 4, 0), # 54
(6, 3, 5, 8, 1, 0, 2, 8, 8, 3, 1, 0), # 55
(9, 8, 4, 7, 0, 0, 5, 6, 6, 5, 5, 0), # 56
(1, 10, 8, 2, 1, 0, 5, 12, 11, 4, 1, 0), # 57
(5, 7, 5, 6, 2, 0, 5, 5, 4, 2, 5, 0), # 58
(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), # 59
)
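Each inner tuple above is one time step and each position one station, so per-station totals are column sums. A small illustrative helper (hypothetical, not part of the instance file):

```python
def station_totals(arrivals):
    """Column sums: total passengers arriving at each station over all time steps."""
    return [sum(column) for column in zip(*arrivals)]


# A two-step, four-station sample in the same shape as passenger_arriving.
sample = (
    (3, 3, 8, 3),
    (2, 8, 6, 7),
)
totals = station_totals(sample)  # -> [5, 11, 14, 10]
```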
station_arriving_intensity = (
(3.7095121817383676, 9.515044981060607, 11.19193043059126, 8.87078804347826, 10.000240384615385, 6.659510869565219), # 0
(3.7443308140669203, 9.620858238197952, 11.252381752534994, 8.920190141908213, 10.075193108974359, 6.657240994867151), # 1
(3.7787518681104277, 9.725101964085297, 11.31139817195087, 8.968504830917876, 10.148564102564103, 6.654901690821256), # 2
(3.8127461259877085, 9.827663671875001, 11.368936576156813, 9.01569089673913, 10.22028605769231, 6.652493274456523), # 3
(3.8462843698175795, 9.928430874719417, 11.424953852470724, 9.061707125603865, 10.290291666666668, 6.6500160628019325), # 4
(3.879337381718857, 10.027291085770905, 11.479406888210512, 9.106512303743962, 10.358513621794872, 6.647470372886473), # 5
(3.9118759438103607, 10.12413181818182, 11.53225257069409, 9.150065217391306, 10.424884615384617, 6.644856521739131), # 6
(3.943870838210907, 10.218840585104518, 11.58344778723936, 9.19232465277778, 10.489337339743592, 6.64217482638889), # 7
(3.975292847039314, 10.311304899691358, 11.632949425164242, 9.233249396135266, 10.551804487179488, 6.639425603864735), # 8
(4.006112752414399, 10.401412275094698, 11.680714371786634, 9.272798233695653, 10.61221875, 6.636609171195653), # 9
(4.03630133645498, 10.489050224466892, 11.72669951442445, 9.310929951690824, 10.670512820512823, 6.633725845410628), # 10
(4.065829381279876, 10.5741062609603, 11.7708617403956, 9.347603336352659, 10.726619391025642, 6.630775943538648), # 11
(4.094667669007903, 10.656467897727273, 11.813157937017996, 9.382777173913043, 10.780471153846154, 6.627759782608695), # 12
(4.122786981757876, 10.736022647920176, 11.85354499160954, 9.416410250603866, 10.832000801282053, 6.624677679649759), # 13
(4.15015810164862, 10.81265802469136, 11.891979791488144, 9.448461352657004, 10.881141025641025, 6.621529951690821), # 14
(4.1767518107989465, 10.886261541193182, 11.928419223971721, 9.478889266304348, 10.92782451923077, 6.618316915760871), # 15
(4.202538891327675, 10.956720710578002, 11.96282017637818, 9.507652777777778, 10.971983974358976, 6.61503888888889), # 16
(4.227490125353625, 11.023923045998176, 11.995139536025421, 9.53471067330918, 11.013552083333336, 6.611696188103866), # 17
(4.25157629499561, 11.087756060606061, 12.025334190231364, 9.560021739130436, 11.052461538461543, 6.608289130434783), # 18
(4.274768182372451, 11.148107267554012, 12.053361026313912, 9.58354476147343, 11.088645032051284, 6.604818032910629), # 19
(4.297036569602966, 11.204864179994388, 12.079176931590974, 9.60523852657005, 11.122035256410259, 6.601283212560387), # 20
(4.318352238805971, 11.257914311079544, 12.102738793380466, 9.625061820652174, 11.152564903846153, 6.597684986413044), # 21
(4.338685972100283, 11.307145173961842, 12.124003499000287, 9.642973429951692, 11.180166666666667, 6.5940236714975855), # 22
(4.358008551604722, 11.352444281793632, 12.142927935768354, 9.658932140700484, 11.204773237179488, 6.590299584842997), # 23
(4.3762907594381035, 11.393699147727272, 12.159468991002571, 9.672896739130437, 11.226317307692307, 6.586513043478261), # 24
(4.393503377719247, 11.430797284915124, 12.173583552020853, 9.684826011473431, 11.244731570512819, 6.582664364432368), # 25
(4.409617188566969, 11.46362620650954, 12.185228506141103, 9.694678743961353, 11.259948717948719, 6.5787538647343), # 26
(4.424602974100088, 11.492073425662877, 12.194360740681233, 9.702413722826089, 11.271901442307694, 6.574781861413045), # 27
(4.438431516437421, 11.516026455527497, 12.200937142959157, 9.707989734299519, 11.280522435897437, 6.570748671497586), # 28
(4.4510735976977855, 11.535372809255753, 12.204914600292774, 9.711365564613528, 11.285744391025641, 6.566654612016909), # 29
(4.4625, 11.55, 12.20625, 9.7125, 11.287500000000001, 6.562500000000001), # 30
(4.47319183983376, 11.56215031960227, 12.205248928140096, 9.712295118464054, 11.286861125886526, 6.556726763701484), # 31
(4.4836528452685425, 11.574140056818184, 12.202274033816424, 9.711684477124184, 11.28495815602837, 6.547834661835751), # 32
(4.493887715792838, 11.585967720170455, 12.197367798913046, 9.710674080882354, 11.281811569148937, 6.535910757121439), # 33
(4.503901150895141, 11.597631818181819, 12.19057270531401, 9.709269934640524, 11.277441843971632, 6.521042112277196), # 34
(4.513697850063939, 11.609130859374998, 12.181931234903383, 9.707478043300654, 11.27186945921986, 6.503315790021656), # 35
(4.523282512787724, 11.62046335227273, 12.171485869565219, 9.705304411764708, 11.265114893617023, 6.482818853073463), # 36
(4.532659838554988, 11.631627805397729, 12.159279091183576, 9.70275504493464, 11.257198625886524, 6.4596383641512585), # 37
(4.5418345268542195, 11.642622727272729, 12.145353381642513, 9.699835947712419, 11.248141134751775, 6.433861385973679), # 38
(4.5508112771739135, 11.653446626420456, 12.129751222826087, 9.696553125000001, 11.23796289893617, 6.40557498125937), # 39
(4.559594789002558, 11.664098011363638, 12.11251509661836, 9.692912581699348, 11.22668439716312, 6.37486621272697), # 40
(4.568189761828645, 11.674575390625, 12.093687484903382, 9.68892032271242, 11.214326108156028, 6.34182214309512), # 41
(4.576600895140665, 11.684877272727276, 12.07331086956522, 9.684582352941177, 11.2009085106383, 6.3065298350824595), # 42
(4.584832888427111, 11.69500216619318, 12.051427732487923, 9.679904677287583, 11.186452083333334, 6.26907635140763), # 43
(4.592890441176471, 11.704948579545455, 12.028080555555556, 9.674893300653595, 11.17097730496454, 6.229548754789272), # 44
(4.600778252877237, 11.714715021306818, 12.003311820652177, 9.669554227941177, 11.15450465425532, 6.188034107946028), # 45
(4.6085010230179035, 11.724300000000003, 11.97716400966184, 9.663893464052288, 11.137054609929079, 6.144619473596536), # 46
(4.616063451086957, 11.733702024147728, 11.9496796044686, 9.65791701388889, 11.118647650709221, 6.099391914459438), # 47
(4.623470236572891, 11.742919602272728, 11.920901086956523, 9.651630882352942, 11.099304255319149, 6.052438493253375), # 48
(4.630726078964194, 11.751951242897727, 11.890870939009663, 9.645041074346407, 11.079044902482272, 6.003846272696985), # 49
(4.6378356777493615, 11.760795454545454, 11.85963164251208, 9.638153594771243, 11.057890070921987, 5.953702315508913), # 50
(4.6448037324168805, 11.769450745738636, 11.827225679347826, 9.630974448529413, 11.035860239361703, 5.902093684407797), # 51
(4.651634942455243, 11.777915625, 11.793695531400965, 9.623509640522876, 11.012975886524824, 5.849107442112278), # 52
(4.658334007352941, 11.786188600852274, 11.759083680555555, 9.615765175653596, 10.989257491134753, 5.794830651340996), # 53
(4.6649056265984665, 11.79426818181818, 11.723432608695653, 9.60774705882353, 10.964725531914894, 5.739350374812594), # 54
(4.671354499680307, 11.802152876420456, 11.686784797705313, 9.599461294934642, 10.939400487588653, 5.682753675245711), # 55
(4.677685326086957, 11.809841193181818, 11.649182729468599, 9.59091388888889, 10.913302836879433, 5.625127615358988), # 56
(4.683902805306906, 11.817331640625003, 11.610668885869565, 9.582110845588236, 10.886453058510638, 5.566559257871065), # 57
(4.690011636828645, 11.824622727272727, 11.57128574879227, 9.573058169934642, 10.858871631205675, 5.507135665500583), # 58
(0.0, 0.0, 0.0, 0.0, 0.0, 0.0), # 59
)
passenger_arriving_acc = (
(3, 3, 8, 3, 5, 0, 12, 7, 7, 7, 1, 0), # 0
(5, 11, 14, 10, 7, 0, 20, 17, 15, 12, 2, 0), # 1
(7, 16, 18, 14, 11, 0, 24, 30, 20, 15, 5, 0), # 2
(10, 23, 23, 20, 15, 0, 27, 41, 26, 15, 7, 0), # 3
(15, 30, 29, 21, 18, 0, 36, 51, 33, 22, 8, 0), # 4
(18, 40, 32, 24, 21, 0, 44, 58, 38, 34, 14, 0), # 5
(21, 47, 36, 27, 22, 0, 52, 69, 44, 37, 19, 0), # 6
(26, 50, 43, 29, 23, 0, 60, 76, 52, 38, 23, 0), # 7
(33, 62, 50, 36, 25, 0, 68, 82, 61, 44, 29, 0), # 8
(36, 66, 54, 38, 25, 0, 77, 93, 69, 45, 33, 0), # 9
(39, 74, 63, 41, 26, 0, 81, 103, 74, 46, 38, 0), # 10
(45, 80, 71, 43, 29, 0, 89, 109, 82, 48, 39, 0), # 11
(45, 88, 78, 44, 30, 0, 96, 125, 83, 54, 44, 0), # 12
(47, 98, 81, 48, 31, 0, 103, 134, 89, 62, 45, 0), # 13
(54, 104, 88, 51, 35, 0, 108, 141, 95, 65, 46, 0), # 14
(55, 112, 95, 54, 35, 0, 114, 150, 102, 67, 46, 0), # 15
(59, 123, 105, 56, 37, 0, 123, 158, 107, 74, 46, 0), # 16
(64, 135, 111, 64, 40, 0, 131, 164, 117, 82, 47, 0), # 17
(68, 144, 117, 69, 40, 0, 139, 171, 119, 85, 47, 0), # 18
(69, 158, 120, 73, 42, 0, 145, 181, 126, 89, 52, 0), # 19
(72, 168, 129, 78, 44, 0, 152, 184, 129, 94, 53, 0), # 20
(74, 183, 142, 86, 45, 0, 158, 196, 135, 97, 56, 0), # 21
(75, 194, 146, 91, 46, 0, 163, 200, 137, 106, 59, 0), # 22
(77, 203, 155, 93, 48, 0, 170, 214, 140, 111, 65, 0), # 23
(83, 211, 165, 95, 50, 0, 180, 225, 145, 117, 66, 0), # 24
(87, 219, 172, 97, 50, 0, 188, 232, 152, 120, 70, 0), # 25
(91, 229, 179, 98, 53, 0, 195, 238, 156, 124, 72, 0), # 26
(94, 239, 185, 102, 54, 0, 199, 253, 163, 129, 73, 0), # 27
(97, 248, 191, 104, 56, 0, 210, 261, 168, 131, 74, 0), # 28
(103, 253, 202, 108, 59, 0, 217, 266, 176, 134, 76, 0), # 29
(104, 267, 209, 113, 59, 0, 224, 276, 183, 141, 80, 0), # 30
(109, 275, 210, 114, 59, 0, 232, 287, 189, 146, 84, 0), # 31
(111, 283, 216, 116, 60, 0, 241, 293, 192, 151, 85, 0), # 32
(115, 297, 226, 122, 64, 0, 251, 305, 198, 154, 88, 0), # 33
(119, 307, 236, 123, 66, 0, 260, 320, 203, 160, 89, 0), # 34
(126, 322, 241, 127, 67, 0, 268, 334, 210, 163, 89, 0), # 35
(129, 330, 247, 131, 67, 0, 270, 346, 217, 169, 92, 0), # 36
(132, 343, 251, 135, 68, 0, 274, 359, 223, 172, 97, 0), # 37
(134, 352, 263, 141, 68, 0, 276, 368, 232, 176, 100, 0), # 38
(138, 360, 271, 144, 71, 0, 284, 381, 237, 181, 102, 0), # 39
(139, 369, 278, 149, 72, 0, 287, 389, 244, 187, 103, 0), # 40
(143, 383, 284, 158, 72, 0, 296, 398, 250, 194, 106, 0), # 41
(146, 392, 292, 159, 75, 0, 304, 407, 261, 199, 110, 0), # 42
(151, 406, 301, 165, 76, 0, 309, 411, 267, 202, 114, 0), # 43
(158, 421, 310, 167, 80, 0, 317, 415, 273, 208, 116, 0), # 44
(160, 430, 315, 169, 81, 0, 323, 428, 278, 215, 116, 0), # 45
(164, 437, 325, 172, 82, 0, 331, 434, 283, 221, 117, 0), # 46
(166, 448, 337, 176, 85, 0, 339, 439, 288, 229, 121, 0), # 47
(173, 457, 344, 179, 86, 0, 345, 445, 290, 236, 122, 0), # 48
(174, 466, 356, 183, 88, 0, 348, 449, 301, 237, 123, 0), # 49
(178, 474, 364, 185, 89, 0, 358, 459, 306, 240, 130, 0), # 50
(182, 481, 376, 189, 90, 0, 362, 466, 309, 246, 131, 0), # 51
(182, 496, 381, 190, 93, 0, 366, 470, 310, 248, 132, 0), # 52
(188, 503, 385, 193, 94, 0, 371, 479, 319, 251, 133, 0), # 53
(191, 511, 393, 202, 96, 0, 375, 487, 327, 256, 137, 0), # 54
(197, 514, 398, 210, 97, 0, 377, 495, 335, 259, 138, 0), # 55
(206, 522, 402, 217, 97, 0, 382, 501, 341, 264, 143, 0), # 56
(207, 532, 410, 219, 98, 0, 387, 513, 352, 268, 144, 0), # 57
(212, 539, 415, 225, 100, 0, 392, 518, 356, 270, 149, 0), # 58
(212, 539, 415, 225, 100, 0, 392, 518, 356, 270, 149, 0), # 59
)
passenger_arriving_rate = (
(3.7095121817383676, 7.612035984848484, 6.715158258354756, 3.5483152173913037, 2.000048076923077, 0.0, 6.659510869565219, 8.000192307692307, 5.322472826086956, 4.476772172236504, 1.903008996212121, 0.0), # 0
(3.7443308140669203, 7.696686590558361, 6.751429051520996, 3.5680760567632848, 2.0150386217948717, 0.0, 6.657240994867151, 8.060154487179487, 5.352114085144928, 4.500952701013997, 1.9241716476395903, 0.0), # 1
(3.7787518681104277, 7.780081571268237, 6.786838903170522, 3.58740193236715, 2.0297128205128203, 0.0, 6.654901690821256, 8.118851282051281, 5.381102898550726, 4.524559268780347, 1.9450203928170593, 0.0), # 2
(3.8127461259877085, 7.8621309375, 6.821361945694087, 3.6062763586956517, 2.044057211538462, 0.0, 6.652493274456523, 8.176228846153847, 5.409414538043478, 4.547574630462725, 1.965532734375, 0.0), # 3
(3.8462843698175795, 7.942744699775533, 6.854972311482434, 3.624682850241546, 2.0580583333333333, 0.0, 6.6500160628019325, 8.232233333333333, 5.437024275362319, 4.569981540988289, 1.9856861749438832, 0.0), # 4
(3.879337381718857, 8.021832868616723, 6.887644132926307, 3.6426049214975844, 2.0717027243589743, 0.0, 6.647470372886473, 8.286810897435897, 5.463907382246377, 4.591762755284204, 2.005458217154181, 0.0), # 5
(3.9118759438103607, 8.099305454545455, 6.919351542416455, 3.660026086956522, 2.084976923076923, 0.0, 6.644856521739131, 8.339907692307692, 5.490039130434783, 4.612901028277636, 2.0248263636363637, 0.0), # 6
(3.943870838210907, 8.175072468083613, 6.950068672343615, 3.6769298611111116, 2.0978674679487184, 0.0, 6.64217482638889, 8.391469871794873, 5.515394791666668, 4.633379114895743, 2.043768117020903, 0.0), # 7
(3.975292847039314, 8.249043919753085, 6.979769655098544, 3.693299758454106, 2.1103608974358976, 0.0, 6.639425603864735, 8.44144358974359, 5.5399496376811594, 4.653179770065696, 2.062260979938271, 0.0), # 8
(4.006112752414399, 8.321129820075758, 7.00842862307198, 3.709119293478261, 2.12244375, 0.0, 6.636609171195653, 8.489775, 5.563678940217391, 4.672285748714653, 2.0802824550189394, 0.0), # 9
(4.03630133645498, 8.391240179573513, 7.03601970865467, 3.724371980676329, 2.134102564102564, 0.0, 6.633725845410628, 8.536410256410257, 5.586557971014494, 4.690679805769779, 2.0978100448933783, 0.0), # 10
(4.065829381279876, 8.459285008768239, 7.06251704423736, 3.739041334541063, 2.145323878205128, 0.0, 6.630775943538648, 8.581295512820512, 5.608562001811595, 4.70834469615824, 2.1148212521920597, 0.0), # 11
(4.094667669007903, 8.525174318181818, 7.087894762210797, 3.7531108695652167, 2.156094230769231, 0.0, 6.627759782608695, 8.624376923076923, 5.6296663043478254, 4.725263174807198, 2.1312935795454546, 0.0), # 12
(4.122786981757876, 8.58881811833614, 7.112126994965724, 3.766564100241546, 2.1664001602564102, 0.0, 6.624677679649759, 8.665600641025641, 5.649846150362319, 4.741417996643816, 2.147204529584035, 0.0), # 13
(4.15015810164862, 8.650126419753088, 7.135187874892886, 3.779384541062801, 2.1762282051282047, 0.0, 6.621529951690821, 8.704912820512819, 5.669076811594202, 4.756791916595257, 2.162531604938272, 0.0), # 14
(4.1767518107989465, 8.709009232954545, 7.157051534383032, 3.7915557065217387, 2.1855649038461538, 0.0, 6.618316915760871, 8.742259615384615, 5.6873335597826085, 4.771367689588688, 2.177252308238636, 0.0), # 15
(4.202538891327675, 8.7653765684624, 7.177692105826908, 3.803061111111111, 2.194396794871795, 0.0, 6.61503888888889, 8.77758717948718, 5.7045916666666665, 4.785128070551272, 2.1913441421156, 0.0), # 16
(4.227490125353625, 8.81913843679854, 7.197083721615253, 3.8138842693236716, 2.202710416666667, 0.0, 6.611696188103866, 8.810841666666668, 5.720826403985508, 4.798055814410168, 2.204784609199635, 0.0), # 17
(4.25157629499561, 8.870204848484848, 7.215200514138818, 3.824008695652174, 2.2104923076923084, 0.0, 6.608289130434783, 8.841969230769234, 5.736013043478262, 4.810133676092545, 2.217551212121212, 0.0), # 18
(4.274768182372451, 8.918485814043208, 7.232016615788346, 3.8334179045893717, 2.2177290064102566, 0.0, 6.604818032910629, 8.870916025641026, 5.750126856884058, 4.8213444105255645, 2.229621453510802, 0.0), # 19
(4.297036569602966, 8.96389134399551, 7.247506158954584, 3.8420954106280196, 2.2244070512820517, 0.0, 6.601283212560387, 8.897628205128207, 5.76314311594203, 4.831670772636389, 2.2409728359988774, 0.0), # 20
(4.318352238805971, 9.006331448863634, 7.261643276028279, 3.8500247282608693, 2.2305129807692303, 0.0, 6.597684986413044, 8.922051923076921, 5.775037092391305, 4.841095517352186, 2.2515828622159084, 0.0), # 21
(4.338685972100283, 9.045716139169473, 7.274402099400172, 3.8571893719806765, 2.2360333333333333, 0.0, 6.5940236714975855, 8.944133333333333, 5.785784057971015, 4.849601399600115, 2.2614290347923682, 0.0), # 22
(4.358008551604722, 9.081955425434906, 7.285756761461012, 3.8635728562801934, 2.2409546474358972, 0.0, 6.590299584842997, 8.963818589743589, 5.79535928442029, 4.857171174307341, 2.2704888563587264, 0.0), # 23
(4.3762907594381035, 9.114959318181818, 7.295681394601543, 3.869158695652174, 2.2452634615384612, 0.0, 6.586513043478261, 8.981053846153845, 5.803738043478262, 4.863787596401028, 2.2787398295454544, 0.0), # 24
(4.393503377719247, 9.1446378279321, 7.304150131212511, 3.8739304045893723, 2.2489463141025636, 0.0, 6.582664364432368, 8.995785256410255, 5.810895606884059, 4.869433420808341, 2.286159456983025, 0.0), # 25
(4.409617188566969, 9.17090096520763, 7.311137103684661, 3.8778714975845405, 2.2519897435897436, 0.0, 6.5787538647343, 9.007958974358974, 5.816807246376811, 4.874091402456441, 2.2927252413019077, 0.0), # 26
(4.424602974100088, 9.193658740530301, 7.31661644440874, 3.880965489130435, 2.2543802884615385, 0.0, 6.574781861413045, 9.017521153846154, 5.821448233695653, 4.877744296272493, 2.2984146851325753, 0.0), # 27
(4.438431516437421, 9.212821164421996, 7.320562285775494, 3.8831958937198072, 2.256104487179487, 0.0, 6.570748671497586, 9.024417948717948, 5.824793840579711, 4.8803748571836625, 2.303205291105499, 0.0), # 28
(4.4510735976977855, 9.228298247404602, 7.322948760175664, 3.884546225845411, 2.257148878205128, 0.0, 6.566654612016909, 9.028595512820512, 5.826819338768117, 4.881965840117109, 2.3070745618511506, 0.0), # 29
(4.4625, 9.24, 7.32375, 3.885, 2.2575000000000003, 0.0, 6.562500000000001, 9.030000000000001, 5.8275, 4.8825, 2.31, 0.0), # 30
(4.47319183983376, 9.249720255681815, 7.323149356884057, 3.884918047385621, 2.257372225177305, 0.0, 6.556726763701484, 9.02948890070922, 5.827377071078432, 4.882099571256038, 2.312430063920454, 0.0), # 31
(4.4836528452685425, 9.259312045454546, 7.3213644202898545, 3.884673790849673, 2.2569916312056737, 0.0, 6.547834661835751, 9.027966524822695, 5.82701068627451, 4.880909613526569, 2.3148280113636366, 0.0), # 32
(4.493887715792838, 9.268774176136363, 7.3184206793478275, 3.8842696323529413, 2.2563623138297872, 0.0, 6.535910757121439, 9.025449255319149, 5.826404448529412, 4.878947119565218, 2.3171935440340907, 0.0), # 33
(4.503901150895141, 9.278105454545454, 7.314343623188405, 3.8837079738562093, 2.2554883687943263, 0.0, 6.521042112277196, 9.021953475177305, 5.825561960784314, 4.876229082125604, 2.3195263636363634, 0.0), # 34
(4.513697850063939, 9.287304687499997, 7.3091587409420296, 3.882991217320261, 2.2543738918439717, 0.0, 6.503315790021656, 9.017495567375887, 5.824486825980392, 4.872772493961353, 2.3218261718749993, 0.0), # 35
(4.523282512787724, 9.296370681818182, 7.302891521739131, 3.8821217647058828, 2.253022978723404, 0.0, 6.482818853073463, 9.012091914893617, 5.823182647058824, 4.868594347826087, 2.3240926704545455, 0.0), # 36
(4.532659838554988, 9.305302244318183, 7.295567454710145, 3.881102017973856, 2.2514397251773044, 0.0, 6.4596383641512585, 9.005758900709218, 5.821653026960784, 4.86371163647343, 2.3263255610795457, 0.0), # 37
(4.5418345268542195, 9.314098181818181, 7.287212028985508, 3.8799343790849674, 2.249628226950355, 0.0, 6.433861385973679, 8.99851290780142, 5.819901568627452, 4.858141352657005, 2.3285245454545453, 0.0), # 38
(4.5508112771739135, 9.322757301136363, 7.277850733695652, 3.87862125, 2.247592579787234, 0.0, 6.40557498125937, 8.990370319148935, 5.817931875, 4.8519004891304345, 2.330689325284091, 0.0), # 39
(4.559594789002558, 9.33127840909091, 7.267509057971015, 3.8771650326797387, 2.245336879432624, 0.0, 6.37486621272697, 8.981347517730496, 5.815747549019608, 4.845006038647344, 2.3328196022727274, 0.0), # 40
(4.568189761828645, 9.3396603125, 7.256212490942029, 3.8755681290849675, 2.2428652216312055, 0.0, 6.34182214309512, 8.971460886524822, 5.813352193627452, 4.837474993961353, 2.334915078125, 0.0), # 41
(4.576600895140665, 9.34790181818182, 7.2439865217391315, 3.8738329411764707, 2.2401817021276598, 0.0, 6.3065298350824595, 8.960726808510639, 5.810749411764706, 4.829324347826088, 2.336975454545455, 0.0), # 42
(4.584832888427111, 9.356001732954544, 7.230856639492753, 3.8719618709150327, 2.2372904166666667, 0.0, 6.26907635140763, 8.949161666666667, 5.80794280637255, 4.820571092995169, 2.339000433238636, 0.0), # 43
(4.592890441176471, 9.363958863636363, 7.216848333333333, 3.8699573202614377, 2.2341954609929076, 0.0, 6.229548754789272, 8.93678184397163, 5.804935980392157, 4.811232222222222, 2.3409897159090907, 0.0), # 44
(4.600778252877237, 9.371772017045453, 7.201987092391306, 3.8678216911764705, 2.230900930851064, 0.0, 6.188034107946028, 8.923603723404256, 5.801732536764706, 4.80132472826087, 2.3429430042613633, 0.0), # 45
(4.6085010230179035, 9.379440000000002, 7.186298405797103, 3.8655573856209147, 2.2274109219858156, 0.0, 6.144619473596536, 8.909643687943262, 5.798336078431372, 4.790865603864735, 2.3448600000000006, 0.0), # 46
(4.616063451086957, 9.386961619318182, 7.16980776268116, 3.8631668055555552, 2.223729530141844, 0.0, 6.099391914459438, 8.894918120567375, 5.794750208333333, 4.77987184178744, 2.3467404048295455, 0.0), # 47
(4.623470236572891, 9.394335681818182, 7.152540652173913, 3.8606523529411763, 2.21986085106383, 0.0, 6.052438493253375, 8.87944340425532, 5.790978529411765, 4.7683604347826085, 2.3485839204545456, 0.0), # 48
(4.630726078964194, 9.401560994318181, 7.134522563405797, 3.8580164297385626, 2.2158089804964543, 0.0, 6.003846272696985, 8.863235921985817, 5.787024644607844, 4.7563483756038645, 2.3503902485795454, 0.0), # 49
(4.6378356777493615, 9.408636363636361, 7.115778985507247, 3.8552614379084966, 2.211578014184397, 0.0, 5.953702315508913, 8.846312056737588, 5.782892156862745, 4.743852657004831, 2.3521590909090904, 0.0), # 50
(4.6448037324168805, 9.415560596590907, 7.096335407608696, 3.852389779411765, 2.2071720478723407, 0.0, 5.902093684407797, 8.828688191489363, 5.778584669117648, 4.73089027173913, 2.353890149147727, 0.0), # 51
(4.651634942455243, 9.4223325, 7.0762173188405795, 3.84940385620915, 2.2025951773049646, 0.0, 5.849107442112278, 8.810380709219858, 5.774105784313726, 4.717478212560386, 2.355583125, 0.0), # 52
(4.658334007352941, 9.428950880681818, 7.055450208333333, 3.8463060702614382, 2.1978514982269504, 0.0, 5.794830651340996, 8.791405992907801, 5.769459105392158, 4.703633472222222, 2.3572377201704544, 0.0), # 53
(4.6649056265984665, 9.435414545454544, 7.034059565217391, 3.843098823529412, 2.192945106382979, 0.0, 5.739350374812594, 8.771780425531915, 5.764648235294119, 4.689373043478261, 2.358853636363636, 0.0), # 54
(4.671354499680307, 9.441722301136364, 7.012070878623187, 3.8397845179738566, 2.1878800975177306, 0.0, 5.682753675245711, 8.751520390070922, 5.759676776960785, 4.674713919082125, 2.360430575284091, 0.0), # 55
(4.677685326086957, 9.447872954545453, 6.989509637681159, 3.8363655555555556, 2.1826605673758865, 0.0, 5.625127615358988, 8.730642269503546, 5.754548333333334, 4.65967309178744, 2.361968238636363, 0.0), # 56
(4.683902805306906, 9.453865312500001, 6.966401331521738, 3.832844338235294, 2.1772906117021273, 0.0, 5.566559257871065, 8.70916244680851, 5.749266507352941, 4.644267554347826, 2.3634663281250003, 0.0), # 57
(4.690011636828645, 9.459698181818181, 6.942771449275362, 3.8292232679738563, 2.1717743262411346, 0.0, 5.507135665500583, 8.687097304964539, 5.743834901960785, 4.628514299516908, 2.3649245454545453, 0.0), # 58
(0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0), # 59
)
passenger_allighting_rate = (
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 0
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 1
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 2
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 3
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 4
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 5
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 6
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 7
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 8
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 9
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 10
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 11
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 12
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 13
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 14
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 15
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 16
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 17
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 18
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 19
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 20
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 21
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 22
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 23
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 24
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 25
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 26
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 27
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 28
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 29
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 30
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 31
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 32
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 33
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 34
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 35
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 36
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 37
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 38
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 39
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 40
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 41
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 42
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 43
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 44
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 45
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 46
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 47
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 48
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 49
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 50
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 51
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 52
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 53
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 54
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 55
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 56
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 57
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 58
(0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1, 0, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 1), # 59
)
"""
Parameters for reproducibility. More information: https://numpy.org/doc/stable/reference/random/parallel.html
"""
# initial entropy
entropy = 258194110137029475889902652135037600173
# index for seed sequence child
child_seed_index = (
1, # 0
60, # 1
)
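The docstring above points at NumPy's parallel-random-number guidance, so one plausible reading of `entropy` and `child_seed_index` (an assumption for illustration — the file does not show how these values are consumed) is that a root `SeedSequence` is built from the stored entropy and the listed child indices select independent, reproducible streams:

```python
import numpy as np

# Hypothetical reconstruction of the reproducible streams; the names mirror
# the values stored above, but the exact wiring is an assumption.
entropy = 258194110137029475889902652135037600173
child_seed_index = (1, 60)

root = np.random.SeedSequence(entropy)
children = root.spawn(max(child_seed_index) + 1)  # children with spawn keys 0..60
rngs = [np.random.default_rng(children[i]) for i in child_seed_index]
```

Because `SeedSequence` is deterministic in its entropy and spawn keys, rebuilding `root` from the same integer always reproduces the same child streams.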
# All rights reserved.
# This source code is licensed under the BSD-style license found in the
# LICENSE file in the root directory of this source tree.
import unittest
import hypothesis.strategies as st
import numpy as np
import torch
from hypothesis import HealthCheck, given, assume, settings
try:
# pyre-ignore[21]
from fbgemm_gpu import open_source # noqa: F401
from test_utils import ( # pyre-ignore[21]
fused_rowwise_8bit_quantize_reference,
fused_rowwise_8bit_dequantize_reference,
fused_rowwise_nbit_quantize_reference,
fused_rowwise_nbit_quantize_dequantize_reference,
bytes_to_half_floats,
gpu_available,
)
except Exception:
torch.ops.load_library("//deeplearning/fbgemm/fbgemm_gpu:sparse_ops")
torch.ops.load_library("//deeplearning/fbgemm/fbgemm_gpu:sparse_ops_cpu")
from fbgemm_gpu.test.test_utils import (
fused_rowwise_8bit_quantize_reference,
fused_rowwise_8bit_dequantize_reference,
fused_rowwise_nbit_quantize_reference,
fused_rowwise_nbit_quantize_dequantize_reference,
bytes_to_half_floats,
gpu_available,
)


class TestFused8BitRowwiseQuantizationConversion(unittest.TestCase):
# pyre-fixme[56]: Pyre was not able to infer the type of argument
# `hypothesis.strategies.integers($parameter$min_value = 0, $parameter$max_value =
# 100)` to decorator factory `hypothesis.given`.
@given(
nrows=st.integers(min_value=0, max_value=100),
ncols=st.integers(min_value=0, max_value=100),
)
@settings(deadline=10000, suppress_health_check=[HealthCheck.filter_too_much])
def test_quantize_op(self, nrows: int, ncols: int) -> None:
input_data = torch.rand(nrows, ncols).float()
quantized_data = torch.ops.fbgemm.FloatToFused8BitRowwiseQuantized(input_data)
if nrows == 0 or ncols == 0:
assert quantized_data.numel() == nrows * ((ncols + 3) // 4 * 4 + 8)
return
reference = fused_rowwise_8bit_quantize_reference(input_data.numpy())
np.testing.assert_array_almost_equal(quantized_data.numpy(), reference)
if gpu_available:
input_data_gpu = input_data.cuda()
quantized_data_gpu = torch.ops.fbgemm.FloatToFused8BitRowwiseQuantized(
input_data_gpu
)
quantized_data_numpy = quantized_data_gpu.cpu().numpy()
ncols_aligned = (ncols + 4 - 1) // 4 * 4
# compare quantized data
np.testing.assert_allclose(
quantized_data_numpy[:, :ncols], reference[:, :ncols]
)
# compare scales
np.testing.assert_array_almost_equal(
quantized_data_numpy[:, ncols_aligned : ncols_aligned + 4],
reference[:, ncols : ncols + 4],
)
# compare zero points
np.testing.assert_array_equal(
quantized_data_numpy[:, ncols_aligned + 4 : ncols_aligned + 8],
reference[:, ncols + 4 : ncols + 8],
)
@settings(deadline=10000, suppress_health_check=[HealthCheck.filter_too_much])
def test_quantize_and_dequantize_op_cuda_large_nrows(self) -> None:
ncols = 256
nrows = 65540
input_data = torch.rand(nrows, ncols).float()
quantized_data = torch.ops.fbgemm.FloatToFused8BitRowwiseQuantized(input_data)
reference = torch.from_numpy(
fused_rowwise_8bit_dequantize_reference(quantized_data.numpy())
)
if gpu_available:
input_data_gpu = input_data.cuda()
quantized_data_gpu = torch.ops.fbgemm.FloatToFused8BitRowwiseQuantized(
input_data_gpu
)
dequantized_data_gpu = torch.ops.fbgemm.Fused8BitRowwiseQuantizedToFloat(
quantized_data_gpu
)
reference = torch.from_numpy(
fused_rowwise_8bit_dequantize_reference(
quantized_data_gpu.cpu().numpy()
)
)
        # compare dequantized data
torch.testing.assert_allclose(dequantized_data_gpu.cpu(), reference)


class TestFusedNBitRowwiseQuantizationConversion(unittest.TestCase):
# pyre-ignore [56]: Invalid decoration, was not able to infer the type of argument
@given(
nrows=st.integers(min_value=0, max_value=100),
ncols=st.integers(min_value=0, max_value=100),
bit_rate=st.sampled_from([2, 4]),
)
@settings(deadline=10000, suppress_health_check=[HealthCheck.filter_too_much])
def test_quantize_op(self, nrows: int, ncols: int, bit_rate: int) -> None:
assert 8 % bit_rate == 0
num_elem_per_byte = 8 // bit_rate
assume(ncols % (2 * num_elem_per_byte) == 0)
input_data = torch.rand(nrows, ncols).float()
quantized_data = torch.ops.fbgemm.FloatToFusedNBitRowwiseQuantizedSBHalf(
input_data, bit_rate
)
if nrows == 0 or ncols == 0:
assert quantized_data.numel() == nrows * (
(ncols + bit_rate - 1) // bit_rate + 4
)
return
quantized_data = quantized_data.numpy()
reference = fused_rowwise_nbit_quantize_reference(input_data.numpy(), bit_rate)
interleaved_dim = ncols // num_elem_per_byte
# compare quantized data
np.testing.assert_array_equal(
quantized_data[:, :interleaved_dim], reference[:, :interleaved_dim]
)
# compare scales
np.testing.assert_array_almost_equal(
bytes_to_half_floats(
quantized_data[:, interleaved_dim : interleaved_dim + 2]
),
bytes_to_half_floats(reference[:, interleaved_dim : interleaved_dim + 2]),
)
# compare zero points
np.testing.assert_array_equal(
quantized_data[:, interleaved_dim + 2], reference[:, interleaved_dim + 2]
)
if gpu_available:
input_data_gpu = input_data.cuda()
quantized_data_gpu = torch.ops.fbgemm.FloatToFusedNBitRowwiseQuantizedSBHalf(
input_data_gpu, bit_rate
)
quantized_data_numpy = quantized_data_gpu.cpu().numpy()
# compare quantized data
np.testing.assert_array_equal(
quantized_data_numpy[:, :ncols], reference[:, :ncols]
)
# pyre-ignore [56]: Invalid decoration, was not able to infer the type of argument
@given(
nrows=st.integers(min_value=0, max_value=100),
ncols=st.integers(min_value=0, max_value=100),
bit_rate=st.sampled_from([2, 4]),
)
@settings(deadline=10000, suppress_health_check=[HealthCheck.filter_too_much])
def test_quantize_and_dequantize_op(self, nrows: int, ncols: int, bit_rate: int) -> None:
assert 8 % bit_rate == 0
num_elem_per_byte = 8 // bit_rate
input_data = torch.rand(nrows, ncols).float()
assume(ncols % (2 * num_elem_per_byte) == 0)
quantized_data = torch.ops.fbgemm.FloatToFusedNBitRowwiseQuantizedSBHalf(
input_data, bit_rate
)
dequantized_data = torch.ops.fbgemm.FusedNBitRowwiseQuantizedSBHalfToFloat(
quantized_data, bit_rate
)
if nrows == 0 or ncols == 0:
assert dequantized_data.numel() == 0
return
reference = torch.from_numpy(
fused_rowwise_nbit_quantize_dequantize_reference(
input_data.numpy(), bit_rate
)
)
torch.testing.assert_allclose(dequantized_data, reference)
if gpu_available:
input_data_gpu = input_data.cuda()
quantized_data_gpu = torch.ops.fbgemm.FloatToFusedNBitRowwiseQuantizedSBHalf(
input_data_gpu, bit_rate
)
dequantized_data_gpu = torch.ops.fbgemm.FusedNBitRowwiseQuantizedSBHalfToFloat(
quantized_data_gpu, bit_rate
)
        # compare dequantized data
torch.testing.assert_allclose(dequantized_data_gpu.cpu(), reference)
@settings(deadline=10000, suppress_health_check=[HealthCheck.filter_too_much])
def test_quantize_and_dequantize_op_cuda_large_nrows(self) -> None:
ncols = 256
bit_rate = 4
nrows = 65540
num_elem_per_byte = 8 // bit_rate
input_data = torch.rand(nrows, ncols).float()
assume(ncols % (2 * num_elem_per_byte) == 0)
reference = torch.from_numpy(
fused_rowwise_nbit_quantize_dequantize_reference(
input_data.numpy(), bit_rate
)
)
if gpu_available:
input_data_gpu = input_data.cuda()
quantized_data_gpu = torch.ops.fbgemm.FloatToFusedNBitRowwiseQuantizedSBHalf(
input_data_gpu, bit_rate
)
dequantized_data_gpu = torch.ops.fbgemm.FusedNBitRowwiseQuantizedSBHalfToFloat(
quantized_data_gpu, bit_rate
)
        # compare dequantized data
torch.testing.assert_allclose(dequantized_data_gpu.cpu(), reference)


if __name__ == "__main__":
unittest.main()
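The reference helpers these tests compare against implement per-row affine quantization: each row stores its uint8 codes followed by a float32 scale and a float32 bias (the row minimum). A minimal NumPy sketch of that scheme (function names are illustrative, and the 4-byte column alignment the GPU kernels add is omitted here):

```python
import numpy as np

def rowwise_8bit_quantize(x):
    # Quantize each row to uint8 with its own scale/bias, then append
    # the float32 scale and bias as 4 raw bytes each (fused layout).
    x = np.asarray(x, dtype=np.float32)
    bias = x.min(axis=1, keepdims=True)
    scale = (x.max(axis=1, keepdims=True) - bias) / 255.0
    scale[scale == 0.0] = 1.0  # constant rows: avoid division by zero
    q = np.round((x - bias) / scale).astype(np.uint8)
    return np.concatenate([q, scale.view(np.uint8), bias.view(np.uint8)], axis=1)

def rowwise_8bit_dequantize(packed):
    # Peel the trailing 8 bytes of each row back into float32 scale and bias.
    q = packed[:, :-8].astype(np.float32)
    scale = np.ascontiguousarray(packed[:, -8:-4]).view(np.float32)
    bias = np.ascontiguousarray(packed[:, -4:]).view(np.float32)
    return q * scale + bias
```

Round-tripping a matrix through this pair reconstructs it to within half a quantization step per row, which is the tolerance the `assert_allclose` checks above rely on.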
| 39.4 | 93 | 0.64575 | 1,031 | 9,259 | 5.467507 | 0.150339 | 0.085329 | 0.032287 | 0.021288 | 0.804151 | 0.785524 | 0.73195 | 0.73195 | 0.700195 | 0.668263 | 0 | 0.020732 | 0.270656 | 9,259 | 234 | 94 | 39.568376 | 0.814009 | 0.086618 | 0 | 0.565217 | 0 | 0 | 0.01162 | 0.010671 | 0 | 0 | 0 | 0.004274 | 0.092391 | 1 | 0.027174 | false | 0 | 0.043478 | 0 | 0.097826 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e20407c76f671882d2d198a22dfe6f22e7096dec | 428 | py | Python | naturalnets/environments/__init__.py | bjuergens/NaturalNets | fd67f1b3c443761270adaf9877ed2a6358d830f0 | [
"MIT"
] | null | null | null | naturalnets/environments/__init__.py | bjuergens/NaturalNets | fd67f1b3c443761270adaf9877ed2a6358d830f0 | [
"MIT"
] | 2 | 2021-04-13T11:47:01.000Z | 2021-04-30T11:44:46.000Z | naturalnets/environments/__init__.py | bjuergens/NaturalNets | fd67f1b3c443761270adaf9877ed2a6358d830f0 | [
"MIT"
] | 1 | 2021-11-03T09:36:40.000Z | 2021-11-03T09:36:40.000Z | from naturalnets.environments.collect_points import CollectPoints
from naturalnets.environments.collect_points_rays import CollectPointsRays
from naturalnets.environments.gym_mujoco import GymMujoco
from naturalnets.environments.reacher_memory import ReacherMemory
from naturalnets.environments.gym_environments import GeneralGymEnvironment
from naturalnets.environments.challenger_neural_network import ChallengerNeuralNetwork
| 61.142857 | 86 | 0.915888 | 44 | 428 | 8.727273 | 0.454545 | 0.234375 | 0.421875 | 0.177083 | 0.208333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.056075 | 428 | 6 | 87 | 71.333333 | 0.950495 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
e20b1be97e7d5d963a180af4fc8a9096fe2355c1 | 10,003 | py | Python | tests/test_api_objects.py | nomis/pymonzo | b5c8d4f46dcb3a2f475797a8b8ef1c15f6493fb9 | [
"MIT"
] | 25 | 2017-01-21T01:03:17.000Z | 2021-12-13T20:41:12.000Z | tests/test_api_objects.py | nomis/pymonzo | b5c8d4f46dcb3a2f475797a8b8ef1c15f6493fb9 | [
"MIT"
] | 25 | 2017-07-30T10:57:30.000Z | 2022-03-16T08:33:01.000Z | tests/test_api_objects.py | nomis/pymonzo | b5c8d4f46dcb3a2f475797a8b8ef1c15f6493fb9 | [
"MIT"
] | 14 | 2017-02-05T01:45:49.000Z | 2022-02-14T20:46:02.000Z | # -*- coding: utf-8 -*-
"""
Test 'pymonzo.api_objects' file
"""
from __future__ import unicode_literals
from datetime import datetime
import pytest
from dateutil.parser import parse as parse_date
from pymonzo import api_objects
from pymonzo.utils import CommonMixin


class TestMonzoObject:
"""
Test `api_objects.MonzoObject` class
"""
klass = api_objects.MonzoObject
data = {
'foo': 'foo',
'bar': 'bar',
}
@pytest.fixture(scope='session')
def instance(self):
"""Simple fixture that returns initialize object"""
return self.klass(data=self.data)
def test_class_inheritance(self, instance):
"""Test class inheritance"""
assert isinstance(instance, api_objects.MonzoObject)
assert isinstance(instance, CommonMixin)
def test_class_properties(self, instance):
"""Test class properties"""
assert self.klass._required_keys == []
assert instance._required_keys == []
def test_class_initialization(self, instance):
"""Test class `__init__` method"""
assert instance._raw_data == self.data
assert instance.foo == 'foo'
assert instance.bar == 'bar'
def test_class_lack_of_required_keys(self, mocker):
"""Test class `__init__` method when data lack one of required keys"""
mocker.patch.multiple(self.klass, _required_keys='baz')
with pytest.raises(ValueError):
self.klass(data=self.data)


class TestMonzoAccount:
"""
Test `api_objects.MonzoAccount` class
"""
klass = api_objects.MonzoAccount
@pytest.fixture(scope='session')
def data(self, accounts_api_response):
"""Simple fixture that returns data used to initialize the object"""
return accounts_api_response['accounts'][0]
@pytest.fixture(scope='session')
def instance(self, data):
"""Simple fixture that returns initialize object"""
return self.klass(data)
def test_class_inheritance(self, instance):
"""Test class inheritance"""
assert isinstance(instance, api_objects.MonzoAccount)
assert isinstance(instance, api_objects.MonzoObject)
def test_class_properties(self, instance):
"""Test class properties"""
expected_keys = ['id', 'description', 'created']
assert self.klass._required_keys == expected_keys
assert instance._required_keys == expected_keys
def test_class_initialization(self, instance, data):
"""Test class `__init__` method"""
expected_data = data.copy()
assert instance._raw_data == data
del instance._raw_data
expected_data['created'] = parse_date(expected_data['created'])
assert vars(instance) == expected_data
assert isinstance(instance.created, datetime)
def test_class_lack_of_required_keys(self, mocker, data):
"""Test class `__init__` method when data lack one of required keys"""
mocker.patch.multiple(self.klass, _required_keys='baz')
with pytest.raises(ValueError):
self.klass(data=data)


class TestMonzoPot:
"""
Test `api_objects.MonzoPot` class
"""
klass = api_objects.MonzoPot
@pytest.fixture(scope='session')
def data(self, pots_api_response):
"""Simple fixture that returns data used to initialize the object"""
return pots_api_response['pots'][0]
@pytest.fixture(scope='session')
def instance(self, data):
"""Simple fixture that returns initialize object"""
return self.klass(data)
def test_class_inheritance(self, instance):
"""Test class inheritance"""
assert isinstance(instance, api_objects.MonzoPot)
assert isinstance(instance, api_objects.MonzoObject)
def test_class_properties(self, instance):
"""Test class properties"""
expected_keys = ['id', 'name', 'created']
assert self.klass._required_keys == expected_keys
assert instance._required_keys == expected_keys
def test_class_initialization(self, instance, data):
"""Test class `__init__` method"""
expected_data = data.copy()
assert instance._raw_data == data
del instance._raw_data
expected_data['created'] = parse_date(expected_data['created'])
assert vars(instance) == expected_data
assert isinstance(instance.created, datetime)
def test_class_lack_of_required_keys(self, mocker, data):
"""Test class `__init__` method when data lack one of required keys"""
mocker.patch.multiple(self.klass, _required_keys='baz')
with pytest.raises(ValueError):
self.klass(data=data)


class TestMonzoBalance:
"""
Test `api_objects.MonzoBalance` class
"""
klass = api_objects.MonzoBalance
@pytest.fixture(scope='session')
def data(self, balance_api_response):
"""Simple fixture that returns data used to initialize the object"""
return balance_api_response
@pytest.fixture(scope='session')
def instance(self, data):
"""Simple fixture that returns initialize object"""
return self.klass(data)
def test_class_inheritance(self, instance):
"""Test class inheritance"""
assert isinstance(instance, api_objects.MonzoBalance)
assert isinstance(instance, api_objects.MonzoObject)
def test_class_properties(self, instance):
"""Test class properties"""
expected_keys = ['balance', 'currency', 'spend_today']
assert self.klass._required_keys == expected_keys
assert instance._required_keys == expected_keys
def test_class_initialization(self, instance, data):
"""Test class `__init__` method"""
expected_data = data.copy()
assert instance._raw_data == expected_data
del instance._raw_data
assert vars(instance) == expected_data
def test_class_lack_of_required_keys(self, mocker, data):
"""Test class `__init__` method when data lack one of required keys"""
mocker.patch.multiple(self.klass, _required_keys='baz')
with pytest.raises(ValueError):
self.klass(data=data)


class TestMonzoTransaction:
"""
Test `api_objects.MonzoTransaction` class
"""
klass = api_objects.MonzoTransaction
@pytest.fixture(scope='session')
def data(self, transaction_api_response):
"""Simple fixture that returns data used to initialize the object"""
return transaction_api_response['transaction']
@pytest.fixture(scope='session')
def instance(self, data):
"""Simple fixture that returns initialize object"""
return self.klass(data)
def test_class_inheritance(self, instance):
"""Test class inheritance"""
assert isinstance(instance, api_objects.MonzoTransaction)
assert isinstance(instance, api_objects.MonzoObject)
def test_class_properties(self, instance):
"""Test class properties"""
expected_keys = [
'account_balance', 'amount', 'created', 'currency', 'description',
'id', 'merchant', 'metadata', 'notes', 'is_load',
]
assert self.klass._required_keys == expected_keys
assert instance._required_keys == expected_keys
def test_class_initialization(self, instance, data):
"""Test class `__init__` method"""
expected_data = data.copy()
assert instance._raw_data == expected_data
del instance._raw_data
expected_data['created'] = parse_date(expected_data['created'])
expected_data['settled'] = parse_date(expected_data['settled'])
expected_data['merchant'] = api_objects.MonzoMerchant(
data=expected_data['merchant']
)
assert vars(instance) == expected_data
assert isinstance(instance.created, datetime)
assert isinstance(instance.settled, datetime)
assert isinstance(instance.merchant, api_objects.MonzoMerchant)
def test_class_lack_of_required_keys(self, mocker, data):
"""Test class `__init__` method when data lack one of required keys"""
mocker.patch.multiple(self.klass, _required_keys='baz')
with pytest.raises(ValueError):
self.klass(data=data)


class TestMonzoMerchant:
"""
Test `api_objects.MonzoMerchant` class
"""
klass = api_objects.MonzoMerchant
@pytest.fixture(scope='session')
def data(self, transaction_api_response):
"""Simple fixture that returns data used to initialize the object"""
return transaction_api_response['transaction']['merchant']
@pytest.fixture(scope='session')
def instance(self, data):
"""Simple fixture that returns initialize object"""
return self.klass(data)
def test_class_inheritance(self, instance):
"""Test class inheritance"""
assert isinstance(instance, api_objects.MonzoMerchant)
assert isinstance(instance, api_objects.MonzoObject)
def test_class_properties(self, instance):
"""Test class properties"""
expected_keys = [
'address', 'created', 'group_id', 'id',
'logo', 'emoji', 'name', 'category',
]
assert self.klass._required_keys == expected_keys
assert instance._required_keys == expected_keys
def test_class_initialization(self, instance, data):
"""Test class `__init__` method"""
expected_data = data.copy()
assert instance._raw_data == expected_data
del instance._raw_data
expected_data['created'] = parse_date(expected_data['created'])
assert vars(instance) == expected_data
assert isinstance(instance.created, datetime)
def test_class_lack_of_required_keys(self, mocker, data):
"""Test class `__init__` method when data lack one of required keys"""
mocker.patch.multiple(self.klass, _required_keys='baz')
with pytest.raises(ValueError):
self.klass(data=data)
| 33.793919 | 78 | 0.669799 | 1,113 | 10,003 | 5.773585 | 0.09434 | 0.067227 | 0.044818 | 0.042484 | 0.795207 | 0.781046 | 0.779334 | 0.749767 | 0.749767 | 0.735294 | 0 | 0.000386 | 0.223533 | 10,003 | 295 | 79 | 33.908475 | 0.82696 | 0.170649 | 0 | 0.639053 | 0 | 0 | 0.052599 | 0 | 0 | 0 | 0 | 0 | 0.254438 | 1 | 0.207101 | false | 0 | 0.035503 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
355e92cfb405185b95ff722157702edac96710a7 | 145 | py | Python | backend/grant/utils/date.py | DSBUGAY2/zcash-grant-system | 729b9edda13bd1eeb3f445d889264230c6470d7e | [
"MIT"
] | 8 | 2019-06-03T16:29:49.000Z | 2021-05-11T20:38:36.000Z | backend/grant/utils/date.py | DSBUGAY2/zcash-grant-system | 729b9edda13bd1eeb3f445d889264230c6470d7e | [
"MIT"
] | 342 | 2019-01-15T19:13:58.000Z | 2020-03-24T16:38:13.000Z | backend/grant/utils/date.py | DSBUGAY2/zcash-grant-system | 729b9edda13bd1eeb3f445d889264230c6470d7e | [
"MIT"
] | 5 | 2019-02-15T09:06:47.000Z | 2022-01-24T21:38:41.000Z | import math


def get_quarter_formatted(date):
return "Q" + str(math.ceil(date.date_created.month / 3.)) + " " + str(date.date_created.year)
| 24.166667 | 97 | 0.696552 | 22 | 145 | 4.409091 | 0.681818 | 0.164948 | 0.309278 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008065 | 0.144828 | 145 | 5 | 98 | 29 | 0.774194 | 0 | 0 | 0 | 0 | 0 | 0.013793 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 6 |
3574725e346f96f0dc3b3b6899847e6d31214437 | 26 | py | Python | molecular_columns/__init__.py | jpinedaf/molecular_columns | 756f0ca447cd92b51e71589b21307864f6ac2ee6 | [
"MIT"
] | null | null | null | molecular_columns/__init__.py | jpinedaf/molecular_columns | 756f0ca447cd92b51e71589b21307864f6ac2ee6 | [
"MIT"
] | null | null | null | molecular_columns/__init__.py | jpinedaf/molecular_columns | 756f0ca447cd92b51e71589b21307864f6ac2ee6 | [
"MIT"
] | null | null | null | #
from . import col_nh2d
| 6.5 | 22 | 0.692308 | 4 | 26 | 4.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.05 | 0.230769 | 26 | 3 | 23 | 8.666667 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3578486d25007acc016e0c30d38823039696d049 | 481 | py | Python | Geometry/CaloEventSetup/python/CaloGeometryDBReader_cfi.py | ckamtsikis/cmssw | ea19fe642bb7537cbf58451dcf73aa5fd1b66250 | [
"Apache-2.0"
] | 852 | 2015-01-11T21:03:51.000Z | 2022-03-25T21:14:00.000Z | Geometry/CaloEventSetup/python/CaloGeometryDBReader_cfi.py | ckamtsikis/cmssw | ea19fe642bb7537cbf58451dcf73aa5fd1b66250 | [
"Apache-2.0"
] | 30,371 | 2015-01-02T00:14:40.000Z | 2022-03-31T23:26:05.000Z | Geometry/CaloEventSetup/python/CaloGeometryDBReader_cfi.py | ckamtsikis/cmssw | ea19fe642bb7537cbf58451dcf73aa5fd1b66250 | [
"Apache-2.0"
] | 3,240 | 2015-01-02T05:53:18.000Z | 2022-03-31T17:24:21.000Z | import FWCore.ParameterSet.Config as cms
from Geometry.CaloEventSetup.CaloGeometryBuilder_cfi import *
from Geometry.EcalAlgo.EcalGeometryDBReader_cfi import *
from Geometry.HcalEventSetup.HcalGeometryDBReader_cfi import *
from Geometry.HcalEventSetup.CaloTowerGeometryDBReader_cfi import *
from Geometry.HcalEventSetup.hcalTopologyIdeal_cfi import *
from Geometry.HcalEventSetup.CaloTowerTopology_cfi import *
from Geometry.ForwardGeometry.ForwardGeometryDBReader_cfi import *
| 40.083333 | 67 | 0.879418 | 48 | 481 | 8.666667 | 0.416667 | 0.201923 | 0.1875 | 0.302885 | 0.336538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.072765 | 481 | 11 | 68 | 43.727273 | 0.932735 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
35995995784e5bf9a00679457f6c00c1fb02da16 | 117 | py | Python | AMnet/__init__.py | THREDgroup/AMnet | 4f269c7a09d39051524e29e6e1f7a70169b7ff64 | [
"MIT"
] | 1 | 2019-03-24T10:49:23.000Z | 2019-03-24T10:49:23.000Z | AMnet/__init__.py | THREDgroup/AMnet | 4f269c7a09d39051524e29e6e1f7a70169b7ff64 | [
"MIT"
] | null | null | null | AMnet/__init__.py | THREDgroup/AMnet | 4f269c7a09d39051524e29e6e1f7a70169b7ff64 | [
"MIT"
] | null | null | null | import AMnet.training
import AMnet.showing
import AMnet.application
import AMnet.preprocessing
import AMnet.utilities | 23.4 | 26 | 0.880342 | 15 | 117 | 6.866667 | 0.466667 | 0.533981 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.076923 | 117 | 5 | 27 | 23.4 | 0.953704 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
35bfc99e479811aa7d3ac3f9705bc4aec9020f5b | 175 | py | Python | src/rbp/random/__init__.py | simecek/rbp | 6bae3d37c4800b89efbb16861c79bf5f6fcb40d9 | [
"MIT"
] | 2 | 2020-02-18T18:35:23.000Z | 2020-06-12T09:52:14.000Z | src/rbp/random/__init__.py | simecek/rbp | 6bae3d37c4800b89efbb16861c79bf5f6fcb40d9 | [
"MIT"
] | 13 | 2020-02-18T16:10:19.000Z | 2020-12-15T13:49:06.000Z | src/rbp/random/__init__.py | simecek/rbp | 6bae3d37c4800b89efbb16861c79bf5f6fcb40d9 | [
"MIT"
] | 2 | 2020-06-12T09:52:20.000Z | 2020-12-03T16:14:00.000Z | from .genomic_position import random_genomic_position, random_genomic_interval
from .seq_permutation import seq_permutation
from .random_intervals import gen_random_intervals
| 43.75 | 78 | 0.902857 | 23 | 175 | 6.434783 | 0.434783 | 0.202703 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.074286 | 175 | 3 | 79 | 58.333333 | 0.91358 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ea1ee2a3622a4fe32009c6b664aca077b7a040ef | 182 | py | Python | pyrunjvm/__init__.py | riag/pyrunjvm | cd5c588553b0fdea398fb775244c49b1b8231a3c | [
"Apache-2.0"
] | null | null | null | pyrunjvm/__init__.py | riag/pyrunjvm | cd5c588553b0fdea398fb775244c49b1b8231a3c | [
"Apache-2.0"
] | null | null | null | pyrunjvm/__init__.py | riag/pyrunjvm | cd5c588553b0fdea398fb775244c49b1b8231a3c | [
"Apache-2.0"
] | null | null | null |
import importlib.metadata as importlib_metadata
__version__ = importlib_metadata.version(__name__)
from .application import create_application
from .context import create_context
| 22.75 | 50 | 0.862637 | 21 | 182 | 6.904762 | 0.47619 | 0.351724 | 0.331034 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.098901 | 182 | 7 | 51 | 26 | 0.884146 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ea6ec78ae6c5dcd9bc77e77f1879be3c32ccb585 | 47 | py | Python | connectwise/time/__init__.py | punkrokk/connectwise-rest-api-python | f8b2b3c7668b407935e38b8af9d0b5d3b14d9fb2 | [
"Apache-2.0"
] | 7 | 2017-01-24T06:41:47.000Z | 2021-04-16T17:34:43.000Z | connectwise/time/__init__.py | punkrokk/connectwise-rest-api-python | f8b2b3c7668b407935e38b8af9d0b5d3b14d9fb2 | [
"Apache-2.0"
] | 2 | 2019-10-30T21:32:59.000Z | 2019-11-01T18:56:39.000Z | connectwise/time/__init__.py | punkrokk/connectwise-rest-api-python | f8b2b3c7668b407935e38b8af9d0b5d3b14d9fb2 | [
"Apache-2.0"
] | 7 | 2017-10-17T18:41:18.000Z | 2019-11-12T20:02:14.000Z | from connectwise.time import entries as entries | 47 | 47 | 0.87234 | 7 | 47 | 5.857143 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.106383 | 47 | 1 | 47 | 47 | 0.97619 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ea6f20319ef94dd60742fc25f1faac029e862777 | 127 | py | Python | money_legos/util.py | gokhanbaydar/py-money-legos | 3d71885e2befeb3e6415177ea2220339c6082088 | [
"MIT"
] | 3 | 2021-02-21T03:15:54.000Z | 2021-06-19T10:01:54.000Z | money_legos/util.py | gokhanbaydar/py-money-legos | 3d71885e2befeb3e6415177ea2220339c6082088 | [
"MIT"
] | null | null | null | money_legos/util.py | gokhanbaydar/py-money-legos | 3d71885e2befeb3e6415177ea2220339c6082088 | [
"MIT"
] | null | null | null | import json
import pkgutil


def read_json(file_location):
return json.loads(pkgutil.get_data(__package__, file_location))
| 18.142857 | 67 | 0.80315 | 18 | 127 | 5.222222 | 0.666667 | 0.255319 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.11811 | 127 | 6 | 68 | 21.166667 | 0.839286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.5 | 0.25 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 6 |
ea783dbf00949bc6ba66ba7423f2274a29d05a07 | 3,562 | py | Python | tests/lists/tests/insert_tests.py | kimgea/django-ordered-field | c3a79cd93b013d90bbe0d6b9c9ede872d16af949 | [
"MIT"
] | null | null | null | tests/lists/tests/insert_tests.py | kimgea/django-ordered-field | c3a79cd93b013d90bbe0d6b9c9ede872d16af949 | [
"MIT"
] | 1 | 2018-05-10T09:11:49.000Z | 2018-05-10T09:11:49.000Z | tests/lists/tests/insert_tests.py | kimgea/django-ordered-field | c3a79cd93b013d90bbe0d6b9c9ede872d16af949 | [
"MIT"
] | null | null | null | from django.test import TestCase
from tests.lists.models import List, Item
from tests.lists.tests.helper import set_up_helper


class ListInsertTest(TestCase):
def setUp(self):
set_up_helper()
def test_insert_first(self):
list1 = List.objects.filter(pk=1).first()
item_1_1 = Item(name="new", list=list1, order=0)
item_1_1.save()
result = list(Item.objects.filter(list=1).order_by("list", "order").
values_list("updated_by", "order_changed_count", "change_collection_count", "list", "order", "id"))
expected_result = [("", 0, 0, 1, 0, 7),
("a", 1, 0, 1, 1, 1),
("a", 1, 0, 1, 2, 2),
("a", 1, 0, 1, 3, 3),
("a", 1, 0, 1, 4, 4),
("a", 1, 0, 1, 5, 5)]
self.assertEqual(result, expected_result)
def test_insert_second(self):
list1 = List.objects.filter(pk=1).first()
item_1_1 = Item(name="new", list=list1, order=1)
item_1_1.save()
result = list(Item.objects.filter(list=1).order_by("list", "order").
values_list("updated_by", "order_changed_count", "list", "order", "id"))
expected_result = [("", 0, 1, 0, 1),
("", 0, 1, 1, 7),
("a", 1, 1, 2, 2),
("a", 1, 1, 3, 3),
("a", 1, 1, 4, 4),
("a", 1, 1, 5, 5)]
self.assertEqual(result, expected_result)
def test_insert_middle(self):
list1 = List.objects.filter(pk=1).first()
item_1_1 = Item(name="new", list=list1, order=2)
item_1_1.save()
result = list(Item.objects.filter(list=1).order_by("list", "order").
values_list("updated_by", "order_changed_count", "list", "order", "id"))
expected_result = [("", 0, 1, 0, 1),
("", 0, 1, 1, 2),
("", 0, 1, 2, 7),
("a", 1, 1, 3, 3),
("a", 1, 1, 4, 4),
("a", 1, 1, 5, 5)]
self.assertEqual(result, expected_result)
def test_insert_seccond_last(self):
list1 = List.objects.filter(pk=1).first()
item_1_1 = Item(name="new", list=list1, order=4)
item_1_1.save()
result = list(Item.objects.filter(list=1).order_by("list", "order").
values_list("updated_by", "order_changed_count", "list", "order", "id"))
expected_result = [("", 0, 1, 0, 1),
("", 0, 1, 1, 2),
("", 0, 1, 2, 3),
("", 0, 1, 3, 4),
("", 0, 1, 4, 7),
("a", 1, 1, 5, 5)]
self.assertEqual(result, expected_result)
def test_insert_last(self):
list1 = List.objects.filter(pk=1).first()
item_1_1 = Item(name="new", list=list1, order=5)
item_1_1.save()
result = list(Item.objects.filter(list=1).order_by("list", "order").
values_list("updated_by", "order_changed_count", "list", "order", "id"))
expected_result = [("", 0, 1, 0, 1),
("", 0, 1, 1, 2),
("", 0, 1, 2, 3),
("", 0, 1, 3, 4),
("", 0, 1, 4, 5),
("", 0, 1, 5, 7)]
self.assertEqual(result, expected_result)
| 41.905882 | 121 | 0.444413 | 443 | 3,562 | 3.410835 | 0.115124 | 0.034414 | 0.025811 | 0.021178 | 0.842488 | 0.807412 | 0.807412 | 0.786896 | 0.786896 | 0.786896 | 0 | 0.077833 | 0.383212 | 3,562 | 84 | 122 | 42.404762 | 0.609923 | 0 | 0 | 0.591549 | 0 | 0 | 0.083099 | 0.006457 | 0 | 0 | 0 | 0 | 0.070423 | 1 | 0.084507 | false | 0 | 0.042254 | 0 | 0.140845 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
577ffa1557a679cf439d0b9fb8fc667e3367a4f1 | 1,404 | py | Python | torch_metrics/classification/pr.py | Joffreybvn/torch-metrics | 587a4995730a7ec7b0370d6dd523d2c84129b255 | [
"MIT"
] | 95 | 2020-11-04T06:02:20.000Z | 2022-03-17T16:27:04.000Z | torch_metrics/classification/pr.py | Joffreybvn/torch-metrics | 587a4995730a7ec7b0370d6dd523d2c84129b255 | [
"MIT"
] | 22 | 2020-11-04T06:09:29.000Z | 2022-03-28T16:58:36.000Z | torch_metrics/classification/pr.py | Joffreybvn/torch-metrics | 587a4995730a7ec7b0370d6dd523d2c84129b255 | [
"MIT"
] | 17 | 2020-11-04T05:56:35.000Z | 2021-08-28T15:38:14.000Z | import torch


class Precision:
"""
Computes precision of the predictions with respect to the true labels.
Args:
y_true: Tensor of Ground truth values.
y_pred: Tensor of Predicted values.
epsilon: Fuzz factor to avoid division by zero. default: `1e-10`
Returns:
Tensor of precision score
"""
def __init__(self, epsilon=1e-10):
self.epsilon = epsilon
def __call__(self, y_pred, y_true):
true_positives = torch.sum(torch.round(torch.clip(y_pred * y_true, 0, 1)))
predicted_positives = torch.sum(torch.round(torch.clip(y_pred, 0, 1)))
precision = true_positives / (predicted_positives + self.epsilon)
return precision


class Recall:
"""
Computes recall of the predictions with respect to the true labels.
Args:
y_true: Tensor of Ground truth values.
y_pred: Tensor of Predicted values.
epsilon: Fuzz factor to avoid division by zero. default: `1e-10`
Returns:
Tensor of recall score
"""
def __init__(self, epsilon=1e-10):
self.epsilon = epsilon
def __call__(self, y_pred, y_true):
true_positives = torch.sum(torch.round(torch.clip(y_pred * y_true, 0, 1)))
actual_positives = torch.sum(torch.round(torch.clip(y_true, 0, 1)))
recall = true_positives / (actual_positives + self.epsilon)
return recall
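The arithmetic in `Precision` and `Recall` above reduces to counting true positives against predicted and actual positives, with `epsilon` guarding an empty denominator. A minimal plain-Python sketch of the same computation (no torch dependency, assuming binary 0/1 predictions and labels):

```python
# Plain-Python sketch of the precision/recall arithmetic above (no torch),
# assuming binary 0/1 predictions and labels.
EPSILON = 1e-10  # fuzz factor to avoid division by zero

def precision(y_pred, y_true, epsilon=EPSILON):
    # true positives: positions where both prediction and label are 1
    true_positives = sum(p * t for p, t in zip(y_pred, y_true))
    predicted_positives = sum(y_pred)
    return true_positives / (predicted_positives + epsilon)

def recall(y_pred, y_true, epsilon=EPSILON):
    true_positives = sum(p * t for p, t in zip(y_pred, y_true))
    actual_positives = sum(y_true)
    return true_positives / (actual_positives + epsilon)

y_pred = [1, 1, 0, 1]
y_true = [1, 0, 0, 1]
# 2 true positives, 3 predicted positives, 2 actual positives
print(round(precision(y_pred, y_true), 4))  # ~0.6667
print(round(recall(y_pred, y_true), 4))     # ~1.0
```

Note that this mirrors the torch version's behavior of rounding clipped products; with strictly binary inputs the `round`/`clip` steps are no-ops, which is why they are omitted here.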
| 29.25 | 82 | 0.652422 | 190 | 1,404 | 4.621053 | 0.242105 | 0.039863 | 0.027335 | 0.045558 | 0.740319 | 0.740319 | 0.740319 | 0.740319 | 0.698178 | 0.651481 | 0 | 0.019157 | 0.25641 | 1,404 | 47 | 83 | 29.87234 | 0.821839 | 0.37963 | 0 | 0.470588 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.235294 | false | 0 | 0.058824 | 0 | 0.529412 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
5794b15b6d81f4780ee4f72c90291a727f69e7fe | 251 | py | Python | crawlino/__init__.py | BBVA/crawlino | 685f57e6b3e9356484ead2681bb178f651d2f371 | [
"Apache-2.0"
] | 1 | 2018-11-11T21:07:54.000Z | 2018-11-11T21:07:54.000Z | crawlino/__init__.py | BBVA/crawlino | 685f57e6b3e9356484ead2681bb178f651d2f371 | [
"Apache-2.0"
] | null | null | null | crawlino/__init__.py | BBVA/crawlino | 685f57e6b3e9356484ead2681bb178f651d2f371 | [
"Apache-2.0"
] | null | null | null | from .exceptions import *
from .crawlino_flow import *
from .decorators import *
from .helpers import *
from .mini_lang import *
from .modules import *
from .crawlino_managers import *
from .current_config import *
from .models.plugins_models import * | 27.888889 | 36 | 0.784861 | 33 | 251 | 5.818182 | 0.454545 | 0.416667 | 0.1875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.139442 | 251 | 9 | 36 | 27.888889 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
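The `__init__.py` above aggregates the package's public API through star imports. With that pattern, each submodule controls what it exposes via `__all__`; a self-contained sketch (using a hypothetical `helpers_demo` module written to a temp directory, purely for illustration) shows the effect:

```python
# Sketch: how __all__ limits what `from module import *` re-exports.
# helpers_demo is a hypothetical module created on the fly for this demo.
import importlib
import os
import sys
import tempfile

d = tempfile.mkdtemp()
with open(os.path.join(d, 'helpers_demo.py'), 'w', encoding='utf-8') as f:
    f.write(
        "__all__ = ['public_helper']\n"
        "def public_helper():\n"
        "    return 'ok'\n"
        "def _private():\n"
        "    return 'hidden'\n"
    )

sys.path.insert(0, d)
importlib.import_module('helpers_demo')

# Star-import into a fresh namespace: only names in __all__ come across.
ns = {}
exec('from helpers_demo import *', ns)
print('public_helper' in ns)  # True
print('_private' in ns)       # False
```

Without `__all__`, a star import would still skip underscore-prefixed names, but would pull in every other top-level binding, including transitively imported modules, which is why explicit `__all__` lists are the safer companion to this aggregation style.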
579fef3fcec93be8106e0313bc31f27786b37d37 | 43,841 | py | Python | tests/test_ping_s.py | shaikustin/jc | b59e38cfd2c8a7f5868e05d5562557b1c27e5e56 | [
"MIT"
] | null | null | null | tests/test_ping_s.py | shaikustin/jc | b59e38cfd2c8a7f5868e05d5562557b1c27e5e56 | [
"MIT"
] | null | null | null | tests/test_ping_s.py | shaikustin/jc | b59e38cfd2c8a7f5868e05d5562557b1c27e5e56 | [
"MIT"
] | null | null | null | import os
import unittest
import json
from jc.exceptions import ParseError
import jc.parsers.ping_s
THIS_DIR = os.path.dirname(os.path.abspath(__file__))
# To create streaming output use:
# $ cat ping.out | jc --ping-s | jello -c > ping-streaming.json
class MyTests(unittest.TestCase):
def setUp(self):
# input
# centos
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/centos-7.7/ping-ip-O.out'), 'r', encoding='utf-8') as f:
self.centos_7_7_ping_ip_O = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/centos-7.7/ping-ip-O-D.out'), 'r', encoding='utf-8') as f:
self.centos_7_7_ping_ip_O_D = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/centos-7.7/ping-hostname-O.out'), 'r', encoding='utf-8') as f:
self.centos_7_7_ping_hostname_O = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/centos-7.7/ping-hostname-O-p.out'), 'r', encoding='utf-8') as f:
self.centos_7_7_ping_hostname_O_p = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/centos-7.7/ping-hostname-O-D-p-s.out'), 'r', encoding='utf-8') as f:
self.centos_7_7_ping_hostname_O_D_p_s = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/centos-7.7/ping6-ip-O-p.out'), 'r', encoding='utf-8') as f:
self.centos_7_7_ping6_ip_O_p = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/centos-7.7/ping6-ip-O-p-unparsable.out'), 'r', encoding='utf-8') as f:
self.centos_7_7_ping6_ip_O_p_unparsable = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/centos-7.7/ping6-ip-O-D-p.out'), 'r', encoding='utf-8') as f:
self.centos_7_7_ping6_ip_O_D_p = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/centos-7.7/ping6-hostname-O-p.out'), 'r', encoding='utf-8') as f:
self.centos_7_7_ping6_hostname_O_p = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/centos-7.7/ping6-hostname-O-D-p-s.out'), 'r', encoding='utf-8') as f:
self.centos_7_7_ping6_hostname_O_D_p_s = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/centos-7.7/ping-ip-dup.out'), 'r', encoding='utf-8') as f:
self.centos_7_7_ping_ip_dup = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/centos-7.7/ping6-ip-dup.out'), 'r', encoding='utf-8') as f:
self.centos_7_7_ping6_ip_dup = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/centos-7.7/ping-ip-O-unparsedlines.out'), 'r', encoding='utf-8') as f:
self.centos_7_7_ping_ip_O_unparsedlines = f.read()
# ubuntu
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/ubuntu-18.04/ping-ip-O.out'), 'r', encoding='utf-8') as f:
self.ubuntu_18_4_ping_ip_O = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/ubuntu-18.04/ping-ip-O-D.out'), 'r', encoding='utf-8') as f:
self.ubuntu_18_4_ping_ip_O_D = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/ubuntu-18.04/ping-hostname-O.out'), 'r', encoding='utf-8') as f:
self.ubuntu_18_4_ping_hostname_O = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/ubuntu-18.04/ping-hostname-O-p.out'), 'r', encoding='utf-8') as f:
self.ubuntu_18_4_ping_hostname_O_p = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/ubuntu-18.04/ping-hostname-O-D-p-s.out'), 'r', encoding='utf-8') as f:
self.ubuntu_18_4_ping_hostname_O_D_p_s = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/ubuntu-18.04/ping6-ip-O-p.out'), 'r', encoding='utf-8') as f:
self.ubuntu_18_4_ping6_ip_O_p = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/ubuntu-18.04/ping6-ip-O-D-p.out'), 'r', encoding='utf-8') as f:
self.ubuntu_18_4_ping6_ip_O_D_p = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/ubuntu-18.04/ping6-hostname-O-p.out'), 'r', encoding='utf-8') as f:
self.ubuntu_18_4_ping6_hostname_O_p = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/ubuntu-18.04/ping6-hostname-O-D-p-s.out'), 'r', encoding='utf-8') as f:
self.ubuntu_18_4_ping6_hostname_O_D_p_s = f.read()
# fedora
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/fedora32/ping-ip-O.out'), 'r', encoding='utf-8') as f:
self.fedora32_ping_ip_O = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/fedora32/ping-ip-O-D.out'), 'r', encoding='utf-8') as f:
self.fedora32_ping_ip_O_D = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/fedora32/ping-hostname-O.out'), 'r', encoding='utf-8') as f:
self.fedora32_ping_hostname_O = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/fedora32/ping-hostname-O-p.out'), 'r', encoding='utf-8') as f:
self.fedora32_ping_hostname_O_p = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/fedora32/ping-hostname-O-D-p-s.out'), 'r', encoding='utf-8') as f:
self.fedora32_ping_hostname_O_D_p_s = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/fedora32/ping6-ip-O-p.out'), 'r', encoding='utf-8') as f:
self.fedora32_ping6_ip_O_p = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/fedora32/ping6-ip-O-D-p.out'), 'r', encoding='utf-8') as f:
self.fedora32_ping6_ip_O_D_p = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/fedora32/ping6-hostname-O-p.out'), 'r', encoding='utf-8') as f:
self.fedora32_ping6_hostname_O_p = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/fedora32/ping6-hostname-O-D-p-s.out'), 'r', encoding='utf-8') as f:
self.fedora32_ping6_hostname_O_D_p_s = f.read()
# freebsd
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/freebsd12/ping-hostname-p.out'), 'r', encoding='utf-8') as f:
self.freebsd12_ping_hostname_p = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/freebsd12/ping-hostname-s.out'), 'r', encoding='utf-8') as f:
self.freebsd12_ping_hostname_s = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/freebsd12/ping-hostname.out'), 'r', encoding='utf-8') as f:
self.freebsd12_ping_hostname = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/freebsd12/ping-ip-p.out'), 'r', encoding='utf-8') as f:
self.freebsd12_ping_ip_p = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/freebsd12/ping-ip-s.out'), 'r', encoding='utf-8') as f:
self.freebsd12_ping_ip_s = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/freebsd12/ping-ip.out'), 'r', encoding='utf-8') as f:
self.freebsd12_ping_ip = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/freebsd12/ping6-hostname-p.out'), 'r', encoding='utf-8') as f:
self.freebsd12_ping6_hostname_p = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/freebsd12/ping6-hostname-s.out'), 'r', encoding='utf-8') as f:
self.freebsd12_ping6_hostname_s = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/freebsd12/ping6-hostname.out'), 'r', encoding='utf-8') as f:
self.freebsd12_ping6_hostname = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/freebsd12/ping6-ip-p.out'), 'r', encoding='utf-8') as f:
self.freebsd12_ping6_ip_p = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/freebsd12/ping6-ip-s.out'), 'r', encoding='utf-8') as f:
self.freebsd12_ping6_ip_s = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/freebsd12/ping6-ip.out'), 'r', encoding='utf-8') as f:
self.freebsd12_ping6_ip = f.read()
# osx
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/osx-10.14.6/ping-hostname-p.out'), 'r', encoding='utf-8') as f:
self.osx_10_14_6_ping_hostname_p = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/osx-10.14.6/ping-hostname-s.out'), 'r', encoding='utf-8') as f:
self.osx_10_14_6_ping_hostname_s = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/osx-10.14.6/ping-hostname.out'), 'r', encoding='utf-8') as f:
self.osx_10_14_6_ping_hostname = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/osx-10.14.6/ping-ip-p.out'), 'r', encoding='utf-8') as f:
self.osx_10_14_6_ping_ip_p = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/osx-10.14.6/ping-ip-s.out'), 'r', encoding='utf-8') as f:
self.osx_10_14_6_ping_ip_s = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/osx-10.14.6/ping-ip.out'), 'r', encoding='utf-8') as f:
self.osx_10_14_6_ping_ip = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/osx-10.14.6/ping-ip-unreachable.out'), 'r', encoding='utf-8') as f:
self.osx_10_14_6_ping_ip_unreachable = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/osx-10.14.6/ping-ip-unknown-errors.out'), 'r', encoding='utf-8') as f:
self.osx_10_14_6_ping_ip_unknown_errors = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/osx-10.14.6/ping6-hostname-p.out'), 'r', encoding='utf-8') as f:
self.osx_10_14_6_ping6_hostname_p = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/osx-10.14.6/ping6-hostname-s.out'), 'r', encoding='utf-8') as f:
self.osx_10_14_6_ping6_hostname_s = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/osx-10.14.6/ping6-hostname.out'), 'r', encoding='utf-8') as f:
self.osx_10_14_6_ping6_hostname = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/osx-10.14.6/ping6-ip-p.out'), 'r', encoding='utf-8') as f:
self.osx_10_14_6_ping6_ip_p = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/osx-10.14.6/ping6-ip-s.out'), 'r', encoding='utf-8') as f:
self.osx_10_14_6_ping6_ip_s = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/osx-10.14.6/ping6-ip.out'), 'r', encoding='utf-8') as f:
self.osx_10_14_6_ping6_ip = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/osx-10.14.6/ping6-ip-unparsable.out'), 'r', encoding='utf-8') as f:
self.osx_10_14_6_ping6_ip_unparsable = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/osx-10.14.6/ping-ip-dup.out'), 'r', encoding='utf-8') as f:
self.osx_10_14_6_ping_ip_dup = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/osx-10.14.6/ping6-ip-dup.out'), 'r', encoding='utf-8') as f:
self.osx_10_14_6_ping6_ip_dup = f.read()
# raspberry pi
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/pi/ping-ip-O.out'), 'r', encoding='utf-8') as f:
self.pi_ping_ip_O = f.read()
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/pi/ping-ip-O-D.out'), 'r', encoding='utf-8') as f:
self.pi_ping_ip_O_D = f.read()
# output
# centos
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/centos-7.7/ping-ip-O-streaming.json'), 'r', encoding='utf-8') as f:
self.centos_7_7_ping_ip_O_streaming_json = json.loads(f.read())
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/centos-7.7/ping-ip-O-streaming-ignore-exceptions.json'), 'r', encoding='utf-8') as f:
self.centos_7_7_ping_ip_O_streaming_ignore_exceptions_json = json.loads(f.read())
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/centos-7.7/ping-ip-O-D-streaming.json'), 'r', encoding='utf-8') as f:
self.centos_7_7_ping_ip_O_D_streaming_json = json.loads(f.read())
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/centos-7.7/ping-hostname-O-streaming.json'), 'r', encoding='utf-8') as f:
self.centos_7_7_ping_hostname_O_streaming_json = json.loads(f.read())
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/centos-7.7/ping-hostname-O-p-streaming.json'), 'r', encoding='utf-8') as f:
self.centos_7_7_ping_hostname_O_p_streaming_json = json.loads(f.read())
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/centos-7.7/ping-hostname-O-D-p-s-streaming.json'), 'r', encoding='utf-8') as f:
self.centos_7_7_ping_hostname_O_D_p_s_streaming_json = json.loads(f.read())
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/centos-7.7/ping6-ip-O-p-streaming.json'), 'r', encoding='utf-8') as f:
self.centos_7_7_ping6_ip_O_p_streaming_json = json.loads(f.read())
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/centos-7.7/ping6-ip-O-D-p-streaming.json'), 'r', encoding='utf-8') as f:
self.centos_7_7_ping6_ip_O_D_p_streaming_json = json.loads(f.read())
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/centos-7.7/ping6-hostname-O-p-streaming.json'), 'r', encoding='utf-8') as f:
self.centos_7_7_ping6_hostname_O_p_streaming_json = json.loads(f.read())
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/centos-7.7/ping6-hostname-O-D-p-s-streaming.json'), 'r', encoding='utf-8') as f:
self.centos_7_7_ping6_hostname_O_D_p_s_streaming_json = json.loads(f.read())
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/centos-7.7/ping-ip-dup-streaming.json'), 'r', encoding='utf-8') as f:
self.centos_7_7_ping_ip_dup_streaming_json = json.loads(f.read())
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/centos-7.7/ping6-ip-dup-streaming.json'), 'r', encoding='utf-8') as f:
self.centos_7_7_ping6_ip_dup_streaming_json = json.loads(f.read())
        # ubuntu
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/ubuntu-18.04/ping-ip-O-streaming.json'), 'r', encoding='utf-8') as f:
self.ubuntu_18_4_ping_ip_O_streaming_json = json.loads(f.read())
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/ubuntu-18.04/ping-ip-O-D-streaming.json'), 'r', encoding='utf-8') as f:
self.ubuntu_18_4_ping_ip_O_D_streaming_json = json.loads(f.read())
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/ubuntu-18.04/ping-hostname-O-streaming.json'), 'r', encoding='utf-8') as f:
self.ubuntu_18_4_ping_hostname_O_streaming_json = json.loads(f.read())
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/ubuntu-18.04/ping-hostname-O-p-streaming.json'), 'r', encoding='utf-8') as f:
self.ubuntu_18_4_ping_hostname_O_p_streaming_json = json.loads(f.read())
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/ubuntu-18.04/ping-hostname-O-D-p-s-streaming.json'), 'r', encoding='utf-8') as f:
self.ubuntu_18_4_ping_hostname_O_D_p_s_streaming_json = json.loads(f.read())
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/ubuntu-18.04/ping6-ip-O-p-streaming.json'), 'r', encoding='utf-8') as f:
self.ubuntu_18_4_ping6_ip_O_p_streaming_json = json.loads(f.read())
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/ubuntu-18.04/ping6-ip-O-D-p-streaming.json'), 'r', encoding='utf-8') as f:
self.ubuntu_18_4_ping6_ip_O_D_p_streaming_json = json.loads(f.read())
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/ubuntu-18.04/ping6-hostname-O-p-streaming.json'), 'r', encoding='utf-8') as f:
self.ubuntu_18_4_ping6_hostname_O_p_streaming_json = json.loads(f.read())
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/ubuntu-18.04/ping6-hostname-O-D-p-s-streaming.json'), 'r', encoding='utf-8') as f:
self.ubuntu_18_4_ping6_hostname_O_D_p_s_streaming_json = json.loads(f.read())
# fedora
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/fedora32/ping-ip-O-streaming.json'), 'r', encoding='utf-8') as f:
self.fedora32_ping_ip_O_streaming_json = json.loads(f.read())
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/fedora32/ping-ip-O-D-streaming.json'), 'r', encoding='utf-8') as f:
self.fedora32_ping_ip_O_D_streaming_json = json.loads(f.read())
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/fedora32/ping-hostname-O-streaming.json'), 'r', encoding='utf-8') as f:
self.fedora32_ping_hostname_O_streaming_json = json.loads(f.read())
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/fedora32/ping-hostname-O-p-streaming.json'), 'r', encoding='utf-8') as f:
self.fedora32_ping_hostname_O_p_streaming_json = json.loads(f.read())
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/fedora32/ping-hostname-O-D-p-s-streaming.json'), 'r', encoding='utf-8') as f:
self.fedora32_ping_hostname_O_D_p_s_streaming_json = json.loads(f.read())
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/fedora32/ping6-ip-O-p-streaming.json'), 'r', encoding='utf-8') as f:
self.fedora32_ping6_ip_O_p_streaming_json = json.loads(f.read())
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/fedora32/ping6-ip-O-D-p-streaming.json'), 'r', encoding='utf-8') as f:
self.fedora32_ping6_ip_O_D_p_streaming_json = json.loads(f.read())
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/fedora32/ping6-hostname-O-p-streaming.json'), 'r', encoding='utf-8') as f:
self.fedora32_ping6_hostname_O_p_streaming_json = json.loads(f.read())
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/fedora32/ping6-hostname-O-D-p-s-streaming.json'), 'r', encoding='utf-8') as f:
self.fedora32_ping6_hostname_O_D_p_s_streaming_json = json.loads(f.read())
# freebsd
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/freebsd12/ping-hostname-p-streaming.json'), 'r', encoding='utf-8') as f:
self.freebsd12_ping_hostname_p_streaming_json = json.loads(f.read())
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/freebsd12/ping-hostname-s-streaming.json'), 'r', encoding='utf-8') as f:
self.freebsd12_ping_hostname_s_streaming_json = json.loads(f.read())
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/freebsd12/ping-hostname-streaming.json'), 'r', encoding='utf-8') as f:
self.freebsd12_ping_hostname_streaming_json = json.loads(f.read())
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/freebsd12/ping-ip-p-streaming.json'), 'r', encoding='utf-8') as f:
self.freebsd12_ping_ip_p_streaming_json = json.loads(f.read())
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/freebsd12/ping-ip-s-streaming.json'), 'r', encoding='utf-8') as f:
self.freebsd12_ping_ip_s_streaming_json = json.loads(f.read())
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/freebsd12/ping-ip-streaming.json'), 'r', encoding='utf-8') as f:
self.freebsd12_ping_ip_streaming_json = json.loads(f.read())
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/freebsd12/ping6-hostname-p-streaming.json'), 'r', encoding='utf-8') as f:
self.freebsd12_ping6_hostname_p_streaming_json = json.loads(f.read())
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/freebsd12/ping6-hostname-s-streaming.json'), 'r', encoding='utf-8') as f:
self.freebsd12_ping6_hostname_s_streaming_json = json.loads(f.read())
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/freebsd12/ping6-hostname-streaming.json'), 'r', encoding='utf-8') as f:
self.freebsd12_ping6_hostname_streaming_json = json.loads(f.read())
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/freebsd12/ping6-ip-p-streaming.json'), 'r', encoding='utf-8') as f:
self.freebsd12_ping6_ip_p_streaming_json = json.loads(f.read())
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/freebsd12/ping6-ip-s-streaming.json'), 'r', encoding='utf-8') as f:
self.freebsd12_ping6_ip_s_streaming_json = json.loads(f.read())
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/freebsd12/ping6-ip-streaming.json'), 'r', encoding='utf-8') as f:
self.freebsd12_ping6_ip_streaming_json = json.loads(f.read())
        # osx
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/osx-10.14.6/ping-hostname-p-streaming.json'), 'r', encoding='utf-8') as f:
self.osx_10_14_6_ping_hostname_p_streaming_json = json.loads(f.read())
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/osx-10.14.6/ping-hostname-s-streaming.json'), 'r', encoding='utf-8') as f:
self.osx_10_14_6_ping_hostname_s_streaming_json = json.loads(f.read())
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/osx-10.14.6/ping-hostname-streaming.json'), 'r', encoding='utf-8') as f:
self.osx_10_14_6_ping_hostname_streaming_json = json.loads(f.read())
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/osx-10.14.6/ping-ip-p-streaming.json'), 'r', encoding='utf-8') as f:
self.osx_10_14_6_ping_ip_p_streaming_json = json.loads(f.read())
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/osx-10.14.6/ping-ip-s-streaming.json'), 'r', encoding='utf-8') as f:
self.osx_10_14_6_ping_ip_s_streaming_json = json.loads(f.read())
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/osx-10.14.6/ping-ip-streaming.json'), 'r', encoding='utf-8') as f:
self.osx_10_14_6_ping_ip_streaming_json = json.loads(f.read())
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/osx-10.14.6/ping-ip-unreachable-streaming.json'), 'r', encoding='utf-8') as f:
self.osx_10_14_6_ping_ip_unreachable_streaming_json = json.loads(f.read())
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/osx-10.14.6/ping6-hostname-p-streaming.json'), 'r', encoding='utf-8') as f:
self.osx_10_14_6_ping6_hostname_p_streaming_json = json.loads(f.read())
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/osx-10.14.6/ping6-hostname-s-streaming.json'), 'r', encoding='utf-8') as f:
self.osx_10_14_6_ping6_hostname_s_streaming_json = json.loads(f.read())
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/osx-10.14.6/ping6-hostname-streaming.json'), 'r', encoding='utf-8') as f:
self.osx_10_14_6_ping6_hostname_streaming_json = json.loads(f.read())
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/osx-10.14.6/ping6-ip-p-streaming.json'), 'r', encoding='utf-8') as f:
self.osx_10_14_6_ping6_ip_p_streaming_json = json.loads(f.read())
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/osx-10.14.6/ping6-ip-s-streaming.json'), 'r', encoding='utf-8') as f:
self.osx_10_14_6_ping6_ip_s_streaming_json = json.loads(f.read())
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/osx-10.14.6/ping6-ip-streaming.json'), 'r', encoding='utf-8') as f:
self.osx_10_14_6_ping6_ip_streaming_json = json.loads(f.read())
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/osx-10.14.6/ping-ip-dup-streaming.json'), 'r', encoding='utf-8') as f:
self.osx_10_14_6_ping_ip_dup_streaming_json = json.loads(f.read())
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/osx-10.14.6/ping6-ip-dup-streaming.json'), 'r', encoding='utf-8') as f:
self.osx_10_14_6_ping6_ip_dup_streaming_json = json.loads(f.read())
# raspberry pi
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/pi/ping-ip-O-streaming.json'), 'r', encoding='utf-8') as f:
self.pi_ping_ip_O_streaming_json = json.loads(f.read())
with open(os.path.join(THIS_DIR, os.pardir, 'tests/fixtures/pi/ping-ip-O-D-streaming.json'), 'r', encoding='utf-8') as f:
self.pi_ping_ip_O_D_streaming_json = json.loads(f.read())
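The `setUp` method above repeats the same `open`/`read` boilerplate for every fixture. A small helper along these lines (a hypothetical `load_fixtures`, not part of jc) could collapse the repetition, parsing `.json` fixtures and returning raw text for the rest:

```python
# Hypothetical helper to load test fixtures in one pass (not part of jc).
import json
import os
import tempfile

def load_fixtures(base_dir, names):
    """Read each fixture file under base_dir; parse .json files as JSON,
    return raw text for everything else, keyed by a sanitized filename."""
    fixtures = {}
    for name in names:
        path = os.path.join(base_dir, name)
        with open(path, 'r', encoding='utf-8') as f:
            text = f.read()
        key = name.replace('.', '_').replace('-', '_')
        fixtures[key] = json.loads(text) if name.endswith('.json') else text
    return fixtures

# Usage sketch against a throwaway fixture directory:
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, 'ping-ip-O.out'), 'w', encoding='utf-8') as f:
        f.write('PING 8.8.8.8 ...')
    with open(os.path.join(d, 'ping-ip-O-streaming.json'), 'w', encoding='utf-8') as f:
        f.write('[{"type": "reply"}]')
    fx = load_fixtures(d, ['ping-ip-O.out', 'ping-ip-O-streaming.json'])
    print(fx['ping_ip_O_out'].startswith('PING'))        # True
    print(fx['ping_ip_O_streaming_json'][0]['type'])     # reply
```

The per-assignment style used in the actual test file has one advantage this helper lacks: attribute names are explicit and greppable, which may be why the repetition was kept.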
def test_ping_s_nodata(self):
"""
Test 'ping' with no data
"""
self.assertEqual(list(jc.parsers.ping_s.parse('', quiet=True)), [])
def test_ping_s_unparsable(self):
data = 'unparsable data'
g = jc.parsers.ping_s.parse(data.splitlines(), quiet=True)
with self.assertRaises(ParseError):
list(g)
def test_ping_s_ignore_exceptions_success(self):
"""
Test 'ping' with -qq (ignore_exceptions) option
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.centos_7_7_ping_ip_O.splitlines(), quiet=True, ignore_exceptions=True)), self.centos_7_7_ping_ip_O_streaming_ignore_exceptions_json)
def test_ping_s_ignore_exceptions_error(self):
"""
        Test 'ping' with -qq (ignore_exceptions) option and error
"""
data_in = 'not ping'
expected = json.loads('[{"_jc_meta":{"success":false,"error":"ParseError: Could not detect ping OS","line":"not ping"}}]')
self.assertEqual(list(jc.parsers.ping_s.parse(data_in.splitlines(), quiet=True, ignore_exceptions=True)), expected)
def test_ping_s_ip_O_centos_7_7(self):
"""
Test 'ping <ip> -O' on Centos 7.7
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.centos_7_7_ping_ip_O.splitlines(), quiet=True)), self.centos_7_7_ping_ip_O_streaming_json)
def test_ping_s_ip_O_D_centos_7_7(self):
"""
Test 'ping <ip> -O -D' on Centos 7.7
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.centos_7_7_ping_ip_O_D.splitlines(), quiet=True)), self.centos_7_7_ping_ip_O_D_streaming_json)
def test_ping_s_hostname_O_centos_7_7(self):
"""
Test 'ping <hostname> -O' on Centos 7.7
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.centos_7_7_ping_hostname_O.splitlines(), quiet=True)), self.centos_7_7_ping_hostname_O_streaming_json)
def test_ping_s_hostname_O_p_centos_7_7(self):
"""
Test 'ping <hostname> -O -p' on Centos 7.7
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.centos_7_7_ping_hostname_O_p.splitlines(), quiet=True)), self.centos_7_7_ping_hostname_O_p_streaming_json)
def test_ping_s_hostname_O_D_p_s_centos_7_7(self):
"""
Test 'ping <hostname> -O -D -p -s' on Centos 7.7
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.centos_7_7_ping_hostname_O_D_p_s.splitlines(), quiet=True)), self.centos_7_7_ping_hostname_O_D_p_s_streaming_json)
def test_ping6_s_ip_O_p_centos_7_7(self):
"""
Test 'ping6 <ip> -O -p' on Centos 7.7
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.centos_7_7_ping6_ip_O_p.splitlines(), quiet=True)), self.centos_7_7_ping6_ip_O_p_streaming_json)
def test_ping6_s_ip_O_p_unparsable_centos_7_7(self):
"""
Test 'ping6 <ip> -O -p' with unparsable lines on Centos 7.7 (raises IndexError)
"""
g = jc.parsers.ping_s.parse(self.centos_7_7_ping6_ip_O_p_unparsable.splitlines(), quiet=True)
with self.assertRaises(IndexError):
list(g)
def test_ping6_s_ip_O_D_p_centos_7_7(self):
"""
Test 'ping6 <ip> -O -D -p' on Centos 7.7
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.centos_7_7_ping6_ip_O_D_p.splitlines(), quiet=True)), self.centos_7_7_ping6_ip_O_D_p_streaming_json)
def test_ping6_s_hostname_O_p_centos_7_7(self):
"""
Test 'ping6 <hostname> -O -p' on Centos 7.7
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.centos_7_7_ping6_hostname_O_p.splitlines(), quiet=True)), self.centos_7_7_ping6_hostname_O_p_streaming_json)
def test_ping6_s_hostname_O_D_p_s_centos_7_7(self):
"""
Test 'ping6 <hostname> -O -D -p -s' on Centos 7.7
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.centos_7_7_ping6_hostname_O_D_p_s.splitlines(), quiet=True)), self.centos_7_7_ping6_hostname_O_D_p_s_streaming_json)
def test_ping_s_ip_dup_centos_7_7(self):
"""
Test 'ping <ip>' to broadcast IP to get duplicate replies on Centos 7.7
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.centos_7_7_ping_ip_dup.splitlines(), quiet=True)), self.centos_7_7_ping_ip_dup_streaming_json)
def test_ping6_s_ip_dup_centos_7_7(self):
"""
Test 'ping6 <ip>' to broadcast IP to get duplicate replies on Centos 7.7
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.centos_7_7_ping6_ip_dup.splitlines(), quiet=True)), self.centos_7_7_ping6_ip_dup_streaming_json)
def test_ping_s_ip_O_unparsedlines_centos_7_7(self):
"""
Test 'ping <ip> -O' on Centos 7.7 with unparsable lines and error messages
"""
g = jc.parsers.ping_s.parse(self.centos_7_7_ping_ip_O_unparsedlines.splitlines(), quiet=True)
with self.assertRaises(IndexError):
list(g)
def test_ping_s_ip_O_ubuntu_18_4(self):
"""
Test 'ping <ip> -O' on Ubuntu 18.4
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.ubuntu_18_4_ping_ip_O.splitlines(), quiet=True)), self.ubuntu_18_4_ping_ip_O_streaming_json)
def test_ping_s_ip_O_D_ubuntu_18_4(self):
"""
Test 'ping <ip> -O -D' on Ubuntu 18.4
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.ubuntu_18_4_ping_ip_O_D.splitlines(), quiet=True)), self.ubuntu_18_4_ping_ip_O_D_streaming_json)
def test_ping_s_hostname_O_ubuntu_18_4(self):
"""
Test 'ping <hostname> -O' on Ubuntu 18.4
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.ubuntu_18_4_ping_hostname_O.splitlines(), quiet=True)), self.ubuntu_18_4_ping_hostname_O_streaming_json)
def test_ping_s_hostname_O_p_ubuntu_18_4(self):
"""
Test 'ping <hostname> -O -p' on Ubuntu 18.4
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.ubuntu_18_4_ping_hostname_O_p.splitlines(), quiet=True)), self.ubuntu_18_4_ping_hostname_O_p_streaming_json)
def test_ping_s_hostname_O_D_p_s_ubuntu_18_4(self):
"""
Test 'ping <hostname> -O -D -p -s' on Ubuntu 18.4
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.ubuntu_18_4_ping_hostname_O_D_p_s.splitlines(), quiet=True)), self.ubuntu_18_4_ping_hostname_O_D_p_s_streaming_json)
def test_ping6_s_ip_O_p_ubuntu_18_4(self):
"""
Test 'ping6 <ip> -O -p' on Ubuntu 18.4
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.ubuntu_18_4_ping6_ip_O_p.splitlines(), quiet=True)), self.ubuntu_18_4_ping6_ip_O_p_streaming_json)
def test_ping6_s_ip_O_D_p_ubuntu_18_4(self):
"""
Test 'ping6 <ip> -O -D -p' on Ubuntu 18.4
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.ubuntu_18_4_ping6_ip_O_D_p.splitlines(), quiet=True)), self.ubuntu_18_4_ping6_ip_O_D_p_streaming_json)
def test_ping6_s_hostname_O_p_ubuntu_18_4(self):
"""
Test 'ping6 <hostname> -O -p' on Ubuntu 18.4
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.ubuntu_18_4_ping6_hostname_O_p.splitlines(), quiet=True)), self.ubuntu_18_4_ping6_hostname_O_p_streaming_json)
def test_ping6_s_hostname_O_D_p_s_ubuntu_18_4(self):
"""
Test 'ping6 <hostname> -O -D -p -s' on Ubuntu 18.4
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.ubuntu_18_4_ping6_hostname_O_D_p_s.splitlines(), quiet=True)), self.ubuntu_18_4_ping6_hostname_O_D_p_s_streaming_json)
def test_ping_s_ip_O_fedora32(self):
"""
Test 'ping <ip> -O' on fedora32
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.fedora32_ping_ip_O.splitlines(), quiet=True)), self.fedora32_ping_ip_O_streaming_json)
def test_ping_s_ip_O_D_fedora32(self):
"""
Test 'ping <ip> -O -D' on fedora32
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.fedora32_ping_ip_O_D.splitlines(), quiet=True)), self.fedora32_ping_ip_O_D_streaming_json)
def test_ping_s_hostname_O_fedora32(self):
"""
Test 'ping <hostname> -O' on fedora32
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.fedora32_ping_hostname_O.splitlines(), quiet=True)), self.fedora32_ping_hostname_O_streaming_json)
def test_ping_s_hostname_O_p_fedora32(self):
"""
Test 'ping <hostname> -O -p' on fedora32
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.fedora32_ping_hostname_O_p.splitlines(), quiet=True)), self.fedora32_ping_hostname_O_p_streaming_json)
def test_ping_s_hostname_O_D_p_s_fedora32(self):
"""
Test 'ping <hostname> -O -D -p -s' on fedora32
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.fedora32_ping_hostname_O_D_p_s.splitlines(), quiet=True)), self.fedora32_ping_hostname_O_D_p_s_streaming_json)
def test_ping6_s_ip_O_p_fedora32(self):
"""
Test 'ping6 <ip> -O -p' on fedora32
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.fedora32_ping6_ip_O_p.splitlines(), quiet=True)), self.fedora32_ping6_ip_O_p_streaming_json)
def test_ping6_s_ip_O_D_p_fedora32(self):
"""
Test 'ping6 <ip> -O -D -p' on fedora32
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.fedora32_ping6_ip_O_D_p.splitlines(), quiet=True)), self.fedora32_ping6_ip_O_D_p_streaming_json)
def test_ping6_s_hostname_O_p_fedora32(self):
"""
Test 'ping6 <hostname> -O -p' on fedora32
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.fedora32_ping6_hostname_O_p.splitlines(), quiet=True)), self.fedora32_ping6_hostname_O_p_streaming_json)
def test_ping6_s_hostname_O_D_p_s_fedora32(self):
"""
Test 'ping6 <hostname> -O -D -p -s' on fedora32
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.fedora32_ping6_hostname_O_D_p_s.splitlines(), quiet=True)), self.fedora32_ping6_hostname_O_D_p_s_streaming_json)
def test_ping_s_hostname_p_freebsd12(self):
"""
Test 'ping <hostname> -p' on freebsd12
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.freebsd12_ping_hostname_p.splitlines(), quiet=True)), self.freebsd12_ping_hostname_p_streaming_json)
def test_ping_s_hostname_s_freebsd12(self):
"""
Test 'ping <hostname> -s' on freebsd12
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.freebsd12_ping_hostname_s.splitlines(), quiet=True)), self.freebsd12_ping_hostname_s_streaming_json)
def test_ping_s_hostname_freebsd12(self):
"""
Test 'ping <hostname>' on freebsd12
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.freebsd12_ping_hostname.splitlines(), quiet=True)), self.freebsd12_ping_hostname_streaming_json)
def test_ping_s_ip_p_freebsd12(self):
"""
Test 'ping <ip> -p' on freebsd12
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.freebsd12_ping_ip_p.splitlines(), quiet=True)), self.freebsd12_ping_ip_p_streaming_json)
def test_ping_s_ip_s_freebsd12(self):
"""
Test 'ping <ip> -s' on freebsd12
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.freebsd12_ping_ip_s.splitlines(), quiet=True)), self.freebsd12_ping_ip_s_streaming_json)
def test_ping_s_ip_freebsd12(self):
"""
Test 'ping <ip>' on freebsd12
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.freebsd12_ping_ip.splitlines(), quiet=True)), self.freebsd12_ping_ip_streaming_json)
def test_ping6_s_hostname_p_freebsd12(self):
"""
Test 'ping6 <hostname> -p' on freebsd12
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.freebsd12_ping6_hostname_p.splitlines(), quiet=True)), self.freebsd12_ping6_hostname_p_streaming_json)
def test_ping6_s_hostname_s_freebsd12(self):
"""
Test 'ping6 <hostname> -s' on freebsd12
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.freebsd12_ping6_hostname_s.splitlines(), quiet=True)), self.freebsd12_ping6_hostname_s_streaming_json)
def test_ping6_s_hostname_freebsd12(self):
"""
Test 'ping6 <hostname>' on freebsd12
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.freebsd12_ping6_hostname.splitlines(), quiet=True)), self.freebsd12_ping6_hostname_streaming_json)
def test_ping6_s_ip_p_freebsd12(self):
"""
Test 'ping6 <ip> -p' on freebsd12
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.freebsd12_ping6_ip_p.splitlines(), quiet=True)), self.freebsd12_ping6_ip_p_streaming_json)
def test_ping6_s_ip_s_freebsd12(self):
"""
Test 'ping6 <ip> -s' on freebsd12
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.freebsd12_ping6_ip_s.splitlines(), quiet=True)), self.freebsd12_ping6_ip_s_streaming_json)
def test_ping6_s_ip_freebsd12(self):
"""
Test 'ping6 <ip>' on freebsd12
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.freebsd12_ping6_ip.splitlines(), quiet=True)), self.freebsd12_ping6_ip_streaming_json)
def test_ping_s_hostname_p_osx_10_14_6(self):
"""
Test 'ping <hostname> -p' on osx 10.14.6
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.osx_10_14_6_ping_hostname_p.splitlines(), quiet=True)), self.osx_10_14_6_ping_hostname_p_streaming_json)
def test_ping_s_hostname_s_osx_10_14_6(self):
"""
Test 'ping <hostname> -s' on osx 10.14.6
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.osx_10_14_6_ping_hostname_s.splitlines(), quiet=True)), self.osx_10_14_6_ping_hostname_s_streaming_json)
def test_ping_s_hostname_osx_10_14_6(self):
"""
Test 'ping <hostname>' on osx 10.14.6
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.osx_10_14_6_ping_hostname.splitlines(), quiet=True)), self.osx_10_14_6_ping_hostname_streaming_json)
def test_ping_s_ip_p_osx_10_14_6(self):
"""
Test 'ping <ip> -p' on osx 10.14.6
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.osx_10_14_6_ping_ip_p.splitlines(), quiet=True)), self.osx_10_14_6_ping_ip_p_streaming_json)
def test_ping_s_ip_s_osx_10_14_6(self):
"""
Test 'ping <ip> -s' on osx 10.14.6
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.osx_10_14_6_ping_ip_s.splitlines(), quiet=True)), self.osx_10_14_6_ping_ip_s_streaming_json)
def test_ping_s_ip_osx_10_14_6(self):
"""
Test 'ping <ip>' on osx 10.14.6
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.osx_10_14_6_ping_ip.splitlines(), quiet=True)), self.osx_10_14_6_ping_ip_streaming_json)
def test_ping_s_ip_unreachable_osx_10_14_6(self):
"""
Test 'ping <ip>' with host unreachable error on osx 10.14.6
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.osx_10_14_6_ping_ip_unreachable.splitlines(), quiet=True)), self.osx_10_14_6_ping_ip_unreachable_streaming_json)
def test_ping_s_ip_unknown_errors_osx_10_14_6(self):
"""
Test 'ping <ip>' with unknown/unparsable errors on osx 10.14.6
"""
g = jc.parsers.ping_s.parse(self.osx_10_14_6_ping_ip_unknown_errors.splitlines(), quiet=True)
with self.assertRaises(IndexError):
list(g)
def test_ping6_s_hostname_p_osx_10_14_6(self):
"""
Test 'ping6 <hostname> -p' on osx 10.14.6
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.osx_10_14_6_ping6_hostname_p.splitlines(), quiet=True)), self.osx_10_14_6_ping6_hostname_p_streaming_json)
def test_ping6_s_hostname_s_osx_10_14_6(self):
"""
Test 'ping6 <hostname> -s' on osx 10.14.6
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.osx_10_14_6_ping6_hostname_s.splitlines(), quiet=True)), self.osx_10_14_6_ping6_hostname_s_streaming_json)
def test_ping6_s_hostname_osx_10_14_6(self):
"""
Test 'ping6 <hostname>' on osx 10.14.6
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.osx_10_14_6_ping6_hostname.splitlines(), quiet=True)), self.osx_10_14_6_ping6_hostname_streaming_json)
def test_ping6_s_ip_p_osx_10_14_6(self):
"""
Test 'ping6 <ip> -p' on osx 10.14.6
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.osx_10_14_6_ping6_ip_p.splitlines(), quiet=True)), self.osx_10_14_6_ping6_ip_p_streaming_json)
def test_ping6_s_ip_s_osx_10_14_6(self):
"""
Test 'ping6 <ip> -s' on osx 10.14.6
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.osx_10_14_6_ping6_ip_s.splitlines(), quiet=True)), self.osx_10_14_6_ping6_ip_s_streaming_json)
def test_ping6_s_ip_osx_10_14_6(self):
"""
Test 'ping6 <ip>' on osx 10.14.6
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.osx_10_14_6_ping6_ip.splitlines(), quiet=True)), self.osx_10_14_6_ping6_ip_streaming_json)
def test_ping6_s_ip_unparsable_osx_10_14_6(self):
"""
Test 'ping6 <ip>' with unparsable lines on osx 10.14.6
"""
g = jc.parsers.ping_s.parse(self.osx_10_14_6_ping6_ip_unparsable.splitlines(), quiet=True)
with self.assertRaises(IndexError):
list(g)
def test_ping_s_ip_dup_osx_10_14_6(self):
"""
Test 'ping <ip>' to broadcast IP to get duplicate replies on osx 10.14.6
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.osx_10_14_6_ping_ip_dup.splitlines(), quiet=True)), self.osx_10_14_6_ping_ip_dup_streaming_json)
def test_ping6_s_ip_dup_osx_10_14_6(self):
"""
Test 'ping6 <ip>' to broadcast IP to get duplicate replies on osx 10.14.6
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.osx_10_14_6_ping6_ip_dup.splitlines(), quiet=True)), self.osx_10_14_6_ping6_ip_dup_streaming_json)
def test_ping_s_ip_O_pi(self):
"""
Test 'ping <ip> -O' on raspberry pi
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.pi_ping_ip_O.splitlines(), quiet=True)), self.pi_ping_ip_O_streaming_json)
def test_ping_s_ip_O_D_pi(self):
"""
Test 'ping <ip> -O -D' on raspberry pi
"""
self.assertEqual(list(jc.parsers.ping_s.parse(self.pi_ping_ip_O_D.splitlines(), quiet=True)), self.pi_ping_ip_O_D_streaming_json)
if __name__ == '__main__':
unittest.main()
# don't delete this file
class Clause: pass
def whats_on_the_telly():
print("Nothing")
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
class Migration(migrations.Migration):
dependencies = [
('countries_plus', '0003_auto_20150611_1446'),
]
operations = [
migrations.AlterField(
model_name='country',
name='capital',
field=models.CharField(max_length=255, blank=True, null=True),
),
migrations.AlterField(
model_name='country',
name='currency_name',
field=models.CharField(max_length=255, blank=True, null=True),
),
migrations.AlterField(
model_name='country',
name='currency_symbol',
field=models.CharField(max_length=255, blank=True, null=True),
),
migrations.AlterField(
model_name='country',
name='languages',
field=models.CharField(max_length=255, blank=True, null=True),
),
migrations.AlterField(
model_name='country',
name='name',
field=models.CharField(max_length=255, unique=True),
),
migrations.AlterField(
model_name='country',
name='neighbours',
field=models.CharField(max_length=255, blank=True, null=True),
),
migrations.AlterField(
model_name='country',
name='phone',
field=models.CharField(max_length=255, blank=True, null=True),
),
migrations.AlterField(
model_name='country',
name='postal_code_format',
field=models.CharField(max_length=255, blank=True, null=True),
),
migrations.AlterField(
model_name='country',
name='postal_code_regex',
field=models.CharField(max_length=255, blank=True, null=True),
),
migrations.AlterField(
model_name='country',
name='tld',
field=models.CharField(max_length=255, blank=True, null=True),
),
]
import os
import rudolf.plotting as rp
from rudolf.paths import RESULTSDIR
PLOTDIR = os.path.join(RESULTSDIR, 'XYZvtang')
if not os.path.exists(PLOTDIR):
os.mkdir(PLOTDIR)
rp.plot_XYZvtang(PLOTDIR, show_1627=1, show_comovers=0, show_sun=1,
show_7368=0, show_allknown=1, show_rsg5=1, orientation='square')
rp.plot_XYZvtang(PLOTDIR, show_1627=1, show_comovers=0, show_sun=1,
show_7368=0, show_allknown=1, show_rsg5=1, orientation='portrait')
rp.plot_XYZvtang(PLOTDIR, show_1627=1, show_comovers=0, show_sun=1,
show_7368=0, show_allknown=1, orientation='portrait')
rp.plot_XYZvtang(PLOTDIR, show_1627=1, show_comovers=1, show_sun=1)
rp.plot_XYZvtang(PLOTDIR, show_1627=1, show_comovers=1, show_sun=1,
show_7368=1)
rp.plot_XYZvtang(PLOTDIR, show_1627=1, show_comovers=1, show_sun=1,
orientation='portrait')
rp.plot_XYZvtang(PLOTDIR, show_1627=1, show_comovers=1)
rp.plot_XYZvtang(PLOTDIR, show_1627=1)
rp.plot_XYZvtang(PLOTDIR)
# flake8: noqa
# isort:skip_file
from .encoders import *
from .classification import *
from .segmentation import *
from django.contrib import admin
from .models import *
admin.site.register(Learner)
admin.site.register(UserProfile)
admin.site.register(UserFollowing)
from charms.reactive import when_not
from charms.reactive import set_state
from charmhelpers.core import hookenv
@when_not('builder.ready')
def bootstrap():
hookenv.status_set('active', 'Ready')
set_state('builder.ready')
from plugins.intruder.intruder import Intruder
from plugins.dump.dump import Dump
from .menu import totalkb, vote_average
from _template.template import Template
class User(Template):
pass
"""Test suite for stat strain."""
from unittest import TestCase
import pandas as pd
from os.path import dirname, join, basename
from gimmebio.stat_strains import (
entropy_reduce_position_matrix,
entropy_reduce_postion_matrices,
parallel_entropy_reduce_postion_matrices,
fast_entropy_reduce_postion_matrices,
)
from gimmebio.stat_strains.metrics import scaling_manhattan
MATRIX_1_FILENAME = join(dirname(__file__), 'NC_006085.1_cds_WP_002515220.1_751.counts.txt.gz')
MATRIX_2_FILENAME = join(dirname(__file__), 'NC_006085.1_cds_WP_002516475.1_2279.counts.txt.gz')
MATRIX_3_FILENAME = join(dirname(__file__), 'NC_006085.1_cds_WP_002515415.1_1951.counts.txt.gz')
ONE_COL_FILENAME = join(dirname(__file__), 'count_matrix_with_one_col.csv')
NULL_FILENAME = join(dirname(__file__), 'null_count_matrix.csv')
def _parse_matrix(filehandle):
matrix = pd.read_csv(filehandle, index_col=0, header=0)
pref = basename(filehandle) + '__'
matrix.columns = [pref + el for el in matrix.columns]
return matrix
def trivial_metric(x, y):
return 0.
class TestStatStrain(TestCase):
"""Test suite for stat strain."""
def test_trivial_reduce_entropy_full_reduce(self):
"""Test that we only get 1 column."""
original = pd.read_csv(MATRIX_1_FILENAME, index_col=0, header=0)
full_reduced = entropy_reduce_position_matrix(
original,
1,
trivial_metric
)
self.assertEqual(full_reduced.shape[0], original.shape[0])
self.assertEqual(full_reduced.shape[1], 1)
def test_reduce_entropy_reduce(self):
"""Test that the reduced matrix has fewer columns than the original."""
original = pd.read_csv(MATRIX_1_FILENAME, index_col=0, header=0)
full_reduced = entropy_reduce_position_matrix(
original,
0.1,
'cosine'
)
self.assertEqual(full_reduced.shape[0], original.shape[0])
self.assertLess(full_reduced.shape[1], original.shape[1])
def test_reduce_entropy_reduce_2(self):
"""Test that the reduced matrix has fewer columns than the original."""
original = pd.read_csv(MATRIX_3_FILENAME, index_col=0, header=0)
full_reduced = entropy_reduce_position_matrix(
original,
0.1,
'cosine'
)
self.assertEqual(full_reduced.shape[0], original.shape[0])
self.assertLess(full_reduced.shape[1], original.shape[1])
def test_reduce_two_matrices_trivial_full_reduce(self):
def _parse_matrix(filehandle):
matrix = pd.read_csv(filehandle, index_col=0, header=0)
pref = basename(filehandle) + '__'
matrix.columns = [pref + el for el in matrix.columns]
return matrix
full_reduced = entropy_reduce_postion_matrices(
[MATRIX_1_FILENAME, MATRIX_2_FILENAME],
1,
trivial_metric,
matrix_parser=_parse_matrix
)
self.assertEqual(full_reduced.shape[1], 1)
def _test_parallel_reduce_two_matrices_trivial_full_reduce(self):
full_reduced = parallel_entropy_reduce_postion_matrices(
[MATRIX_1_FILENAME, MATRIX_2_FILENAME],
1,
trivial_metric,
matrix_parser=_parse_matrix,
threads=4,
)
self.assertEqual(full_reduced.shape[1], 1920)
def test_fast_reduce_two_matrices_trivial_full_reduce(self):
full_reduced = fast_entropy_reduce_postion_matrices(
[MATRIX_1_FILENAME, MATRIX_2_FILENAME],
1,
trivial_metric,
matrix_parser=_parse_matrix,
)
self.assertEqual(full_reduced.shape[1], 1920)
def test_reduce_null_matrix_is_empty(self):
"""Test that reducing a null matrix yields a matrix with zero columns."""
original = pd.read_csv(NULL_FILENAME, index_col=0, header=0)
full_reduced = entropy_reduce_position_matrix(
original,
1,
trivial_metric
)
self.assertEqual(full_reduced.shape[0], original.shape[0])
self.assertEqual(full_reduced.shape[1], 0)
def test_fast_reduce_first_matrix_one_col(self):
full_reduced = fast_entropy_reduce_postion_matrices(
[ONE_COL_FILENAME, MATRIX_1_FILENAME, MATRIX_2_FILENAME],
0.1,
'cosine',
matrix_parser=_parse_matrix,
)
self.assertGreater(full_reduced.shape[1], 1)
def test_fast_reduce_first_matrix_null(self):
full_reduced = fast_entropy_reduce_postion_matrices(
[NULL_FILENAME, ONE_COL_FILENAME, MATRIX_1_FILENAME, MATRIX_2_FILENAME],
0.1,
'cosine',
matrix_parser=_parse_matrix,
)
self.assertGreater(full_reduced.shape[1], 1)
def test_fast_reduce_with_matrix_null(self):
full_reduced = fast_entropy_reduce_postion_matrices(
[MATRIX_1_FILENAME, NULL_FILENAME, MATRIX_2_FILENAME],
0.1,
'cosine',
matrix_parser=_parse_matrix,
)
self.assertGreater(full_reduced.shape[1], 1)
"Apache-2.0"
] | 443 | 2015-01-03T16:28:39.000Z | 2021-04-26T16:39:46.000Z | migrations/versions/48a922151dd4_correct_test_schemas.py | vault-the/changes | 37e23c3141b75e4785cf398d015e3dbca41bdd56 | [
"Apache-2.0"
] | 12 | 2015-07-30T19:07:16.000Z | 2016-11-07T23:11:21.000Z | migrations/versions/48a922151dd4_correct_test_schemas.py | vault-the/changes | 37e23c3141b75e4785cf398d015e3dbca41bdd56 | [
"Apache-2.0"
] | 47 | 2015-01-09T10:04:00.000Z | 2020-11-18T17:58:19.000Z | """Correct Test schemas
Revision ID: 48a922151dd4
Revises: 2b71d67ef04d
Create Date: 2013-11-07 13:12:14.375337
"""
# revision identifiers, used by Alembic.
revision = '48a922151dd4'
down_revision = '2b71d67ef04d'
from alembic import op
import sqlalchemy as sa
from sqlalchemy import text
from sqlalchemy.dialects import postgresql
def upgrade():
### commands auto generated by Alembic - please adjust! ###
op.execute('UPDATE testgroup SET result = 0 WHERE result IS NULL')
op.execute('UPDATE test SET result = 0 WHERE result IS NULL')
op.execute('UPDATE testgroup SET num_failed = 0 WHERE num_failed IS NULL')
op.execute('UPDATE testgroup SET num_tests = 0 WHERE num_tests IS NULL')
op.alter_column('test', 'date_created',
existing_type=postgresql.TIMESTAMP(),
nullable=False)
op.alter_column('test', 'result',
existing_type=sa.INTEGER(),
server_default=text('0'),
nullable=False)
op.alter_column('testgroup', 'date_created',
existing_type=postgresql.TIMESTAMP(),
nullable=False)
op.alter_column('testgroup', 'name',
existing_type=sa.TEXT(),
nullable=False)
op.alter_column('testgroup', 'num_failed',
existing_type=sa.INTEGER(),
server_default=text('0'),
nullable=False)
op.alter_column('testgroup', 'num_tests',
existing_type=sa.INTEGER(),
server_default=text('0'),
nullable=False)
op.alter_column('testgroup', 'result',
existing_type=sa.INTEGER(),
server_default=text('0'),
nullable=False)
op.alter_column('testsuite', 'date_created',
existing_type=postgresql.TIMESTAMP(),
nullable=False)
op.alter_column('testsuite', 'name',
existing_type=sa.TEXT(),
nullable=False)
### end Alembic commands ###
def downgrade():
### commands auto generated by Alembic - please adjust! ###
op.alter_column('testsuite', 'name',
existing_type=sa.TEXT(),
nullable=True)
op.alter_column('testsuite', 'date_created',
existing_type=postgresql.TIMESTAMP(),
nullable=True)
op.alter_column('testgroup', 'result',
existing_type=sa.INTEGER(),
nullable=True)
op.alter_column('testgroup', 'num_tests',
existing_type=sa.INTEGER(),
nullable=True)
op.alter_column('testgroup', 'num_failed',
existing_type=sa.INTEGER(),
nullable=True)
op.alter_column('testgroup', 'name',
existing_type=sa.TEXT(),
nullable=True)
op.alter_column('testgroup', 'date_created',
existing_type=postgresql.TIMESTAMP(),
nullable=True)
op.alter_column('test', 'result',
existing_type=sa.INTEGER(),
nullable=True)
op.alter_column('test', 'date_created',
existing_type=postgresql.TIMESTAMP(),
nullable=True)
### end Alembic commands ###
| 35.455556 | 78 | 0.598872 | 340 | 3,191 | 5.458824 | 0.194118 | 0.067888 | 0.126078 | 0.118534 | 0.798491 | 0.786638 | 0.786638 | 0.78125 | 0.70528 | 0.696121 | 0 | 0.026235 | 0.283297 | 3,191 | 89 | 79 | 35.853933 | 0.785308 | 0.093388 | 0 | 0.828571 | 0 | 0 | 0.187762 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.028571 | false | 0 | 0.057143 | 0 | 0.085714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
529ff80e9a0efedce4ac5cdbd9fd617f7ef867cf | 152 | py | Python | mlpipe/utils.py | guanyilun/mlpipe | 7e739f6b4307db5350e9601a36f415ecf77a62c2 | [
"MIT"
] | 1 | 2020-10-27T09:48:47.000Z | 2020-10-27T09:48:47.000Z | mlpipe/utils.py | guanyilun/mlpipe | 7e739f6b4307db5350e9601a36f415ecf77a62c2 | [
"MIT"
] | 3 | 2018-12-01T02:20:18.000Z | 2021-11-15T17:48:01.000Z | mlpipe/utils.py | guanyilun/mlpipe | 7e739f6b4307db5350e9601a36f415ecf77a62c2 | [
"MIT"
] | 4 | 2018-11-27T22:11:04.000Z | 2019-10-01T16:15:34.000Z | import numpy as np
def to_categorical(y, num_classes):
""" 1-hot encodes a tensor """
return np.eye(num_classes, dtype='int')[y.astype('int')]
| 25.333333 | 60 | 0.671053 | 25 | 152 | 3.96 | 0.8 | 0.20202 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007874 | 0.164474 | 152 | 5 | 61 | 30.4 | 0.771654 | 0.144737 | 0 | 0 | 0 | 0 | 0.04918 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
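The `to_categorical` helper in `mlpipe/utils.py` above is small enough to exercise directly; a minimal sketch reproducing it (the sample labels are made up):

```python
import numpy as np


def to_categorical(y, num_classes):
    """1-hot encodes a tensor (same body as mlpipe/utils.py above)."""
    return np.eye(num_classes, dtype='int')[y.astype('int')]


# Each label index selects the matching row of an identity matrix.
labels = np.array([0, 2, 1])
encoded = to_categorical(labels, num_classes=3)
print(encoded.tolist())  # [[1, 0, 0], [0, 0, 1], [0, 1, 0]]
```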
52e005d6d28382b2a4238c10d88cfe8e67258195 | 20,478 | py | Python | test/hummingbot/connector/exchange/binance/test_binance_api_order_book_data_source.py | joedomino874/hummingbot | cb3ee5a30a2feb0a55ceca9d200c59662d7e3057 | [
"Apache-2.0"
] | 3,027 | 2019-04-04T18:52:17.000Z | 2022-03-30T09:38:34.000Z | test/hummingbot/connector/exchange/binance/test_binance_api_order_book_data_source.py | joedomino874/hummingbot | cb3ee5a30a2feb0a55ceca9d200c59662d7e3057 | [
"Apache-2.0"
] | 4,080 | 2019-04-04T19:51:11.000Z | 2022-03-31T23:45:21.000Z | test/hummingbot/connector/exchange/binance/test_binance_api_order_book_data_source.py | joedomino874/hummingbot | cb3ee5a30a2feb0a55ceca9d200c59662d7e3057 | [
"Apache-2.0"
] | 1,342 | 2019-04-04T20:50:53.000Z | 2022-03-31T15:22:36.000Z | import asyncio
import re
import ujson
import unittest
import hummingbot.connector.exchange.binance.binance_constants as CONSTANTS
import hummingbot.connector.exchange.binance.binance_utils as utils
from aioresponses.core import aioresponses
from typing import (
Any,
Dict,
List,
)
from unittest.mock import AsyncMock, patch
from hummingbot.core.api_throttler.async_throttler import AsyncThrottler
from hummingbot.core.data_type.order_book import OrderBook
from hummingbot.core.data_type.order_book_message import OrderBookMessage
from hummingbot.connector.exchange.binance.binance_api_order_book_data_source import BinanceAPIOrderBookDataSource
from test.hummingbot.connector.network_mocking_assistant import NetworkMockingAssistant
class BinanceAPIOrderBookDataSourceUnitTests(unittest.TestCase):
# logging.Level required to receive logs from the data source logger
level = 0
@classmethod
def setUpClass(cls) -> None:
super().setUpClass()
cls.ev_loop = asyncio.get_event_loop()
cls.base_asset = "COINALPHA"
cls.quote_asset = "HBOT"
cls.trading_pair = f"{cls.base_asset}-{cls.quote_asset}"
cls.ex_trading_pair = cls.base_asset + cls.quote_asset
cls.domain = "com"
def setUp(self) -> None:
super().setUp()
self.log_records = []
self.listening_task = None
self.mocking_assistant = NetworkMockingAssistant()
self.throttler = AsyncThrottler(rate_limits=CONSTANTS.RATE_LIMITS)
self.data_source = BinanceAPIOrderBookDataSource(trading_pairs=[self.trading_pair],
throttler=self.throttler,
domain=self.domain)
self.data_source.logger().setLevel(1)
self.data_source.logger().addHandler(self)
self.resume_test_event = asyncio.Event()
def tearDown(self) -> None:
self.listening_task and self.listening_task.cancel()
super().tearDown()
def handle(self, record):
self.log_records.append(record)
def _is_logged(self, log_level: str, message: str) -> bool:
return any(record.levelname == log_level and record.getMessage() == message
for record in self.log_records)
def _raise_exception(self, exception_class):
raise exception_class
def _create_exception_and_unlock_test_with_event(self, exception):
self.resume_test_event.set()
raise exception
def _successfully_subscribed_event(self):
resp = {
"result": None,
"id": 1
}
return resp
def _trade_update_event(self):
resp = {
"e": "trade",
"E": 123456789,
"s": self.ex_trading_pair,
"t": 12345,
"p": "0.001",
"q": "100",
"b": 88,
"a": 50,
"T": 123456785,
"m": True,
"M": True
}
return resp
def _order_diff_event(self):
resp = {
"e": "depthUpdate",
"E": 123456789,
"s": self.ex_trading_pair,
"U": 157,
"u": 160,
"b": [["0.0024", "10"]],
"a": [["0.0026", "100"]]
}
return resp
def _snapshot_response(self):
resp = {
"lastUpdateId": 1027024,
"bids": [
[
"4.00000000",
"431.00000000"
]
],
"asks": [
[
"4.00000200",
"12.00000000"
]
]
}
return resp
@aioresponses()
def test_get_last_trade_prices(self, mock_api):
url = utils.public_rest_url(path_url=CONSTANTS.TICKER_PRICE_CHANGE_PATH_URL, domain=self.domain)
regex_url = re.compile(f"^{url}".replace(".", r"\.").replace("?", r"\?"))
mock_response: Dict[str, Any] = {
# Truncated Response
"lastPrice": "100",
}
mock_api.get(regex_url, body=ujson.dumps(mock_response))
result: Dict[str, float] = self.ev_loop.run_until_complete(
self.data_source.get_last_traded_prices(trading_pairs=[self.trading_pair],
throttler=self.throttler)
)
self.assertEqual(1, len(result))
self.assertEqual(100, result[self.trading_pair])
@aioresponses()
@patch("hummingbot.connector.exchange.binance.binance_utils.convert_from_exchange_trading_pair")
def test_get_all_mid_prices(self, mock_api, mock_utils):
# Mocks binance_utils for BinanceOrderBook.diff_message_from_exchange()
mock_utils.return_value = self.trading_pair
url = utils.public_rest_url(path_url=CONSTANTS.TICKER_PRICE_CHANGE_PATH_URL, domain=self.domain)
regex_url = re.compile(f"^{url}".replace(".", r"\.").replace("?", r"\?"))
mock_response: List[Dict[str, Any]] = [{
# Truncated Response
"symbol": self.ex_trading_pair,
"bidPrice": "99",
"askPrice": "101",
}]
mock_api.get(regex_url, body=ujson.dumps(mock_response))
result: Dict[str, float] = self.ev_loop.run_until_complete(
self.data_source.get_all_mid_prices()
)
self.assertEqual(1, len(result))
self.assertEqual(100, result[self.trading_pair])
@aioresponses()
@patch("hummingbot.connector.exchange.binance.binance_utils.convert_from_exchange_trading_pair")
def test_fetch_trading_pairs(self, mock_api, mock_utils):
# Mocks binance_utils for BinanceOrderBook.diff_message_from_exchange()
mock_utils.return_value = self.trading_pair
url = utils.public_rest_url(path_url=CONSTANTS.EXCHANGE_INFO_PATH_URL, domain=self.domain)
regex_url = re.compile(f"^{url}".replace(".", r"\.").replace("?", r"\?"))
mock_response: Dict[str, Any] = {
# Truncated Response
"symbols":
[
{
"symbol": self.ex_trading_pair,
"status": "TRADING",
"baseAsset": self.base_asset,
"quoteAsset": self.quote_asset,
},
]
}
mock_api.get(regex_url, body=ujson.dumps(mock_response))
result: List[str] = self.ev_loop.run_until_complete(
self.data_source.fetch_trading_pairs()
)
self.assertEqual(1, len(result))
self.assertTrue(self.trading_pair in result)
@aioresponses()
@patch("hummingbot.connector.exchange.binance.binance_utils.convert_from_exchange_trading_pair")
def test_fetch_trading_pairs_exception_raised(self, mock_api, mock_utils):
# Mocks binance_utils for BinanceOrderBook.diff_message_from_exchange()
mock_utils.return_value = self.trading_pair
url = utils.public_rest_url(path_url=CONSTANTS.EXCHANGE_INFO_PATH_URL, domain=self.domain)
regex_url = re.compile(f"^{url}".replace(".", r"\.").replace("?", r"\?"))
mock_api.get(regex_url, exception=Exception)
result: List[str] = self.ev_loop.run_until_complete(
self.data_source.fetch_trading_pairs()
)
self.assertEqual(0, len(result))
def test_get_throttler_instance(self):
self.assertIsInstance(BinanceAPIOrderBookDataSource._get_throttler_instance(), AsyncThrottler)
@aioresponses()
def test_get_snapshot_successful(self, mock_api):
url = utils.public_rest_url(path_url=CONSTANTS.SNAPSHOT_PATH_URL, domain=self.domain)
regex_url = re.compile(f"^{url}".replace(".", r"\.").replace("?", r"\?"))
mock_api.get(regex_url, body=ujson.dumps(self._snapshot_response()))
result: Dict[str, Any] = self.ev_loop.run_until_complete(
self.data_source.get_snapshot(self.trading_pair)
)
self.assertEqual(self._snapshot_response(), result)
@aioresponses()
def test_get_snapshot_catch_exception(self, mock_api):
url = utils.public_rest_url(path_url=CONSTANTS.SNAPSHOT_PATH_URL, domain=self.domain)
regex_url = re.compile(f"^{url}".replace(".", r"\.").replace("?", r"\?"))
mock_api.get(regex_url, status=400)
with self.assertRaises(IOError):
self.ev_loop.run_until_complete(
self.data_source.get_snapshot(self.trading_pair)
)
@aioresponses()
def test_get_new_order_book(self, mock_api):
url = utils.public_rest_url(path_url=CONSTANTS.SNAPSHOT_PATH_URL, domain=self.domain)
regex_url = re.compile(f"^{url}".replace(".", r"\.").replace("?", r"\?"))
mock_response: Dict[str, Any] = {
"lastUpdateId": 1,
"bids": [
[
"4.00000000",
"431.00000000"
]
],
"asks": [
[
"4.00000200",
"12.00000000"
]
]
}
mock_api.get(regex_url, body=ujson.dumps(mock_response))
result: OrderBook = self.ev_loop.run_until_complete(
self.data_source.get_new_order_book(self.trading_pair)
)
self.assertEqual(1, result.snapshot_uid)
@patch("aiohttp.ClientSession.ws_connect")
def test_create_websocket_connection_cancelled_when_connecting(self, mock_ws):
mock_ws.side_effect = asyncio.CancelledError
with self.assertRaises(asyncio.CancelledError):
self.ev_loop.run_until_complete(
self.data_source._create_websocket_connection()
)
@patch("aiohttp.ClientSession.ws_connect")
def test_create_websocket_connection_exception_raised(self, mock_ws):
mock_ws.side_effect = Exception("TEST ERROR.")
with self.assertRaises(Exception):
self.ev_loop.run_until_complete(
self.data_source._create_websocket_connection()
)
self.assertTrue(self._is_logged("NETWORK",
"Unexpected error occured when connecting to WebSocket server. Error: TEST ERROR."))
@patch("hummingbot.core.data_type.order_book_tracker_data_source.OrderBookTrackerDataSource._sleep")
@patch("aiohttp.ClientSession.ws_connect")
def test_listen_for_trades_cancelled_when_connecting(self, mock_ws, _: AsyncMock):
msg_queue: asyncio.Queue = asyncio.Queue()
mock_ws.side_effect = asyncio.CancelledError
with self.assertRaises(asyncio.CancelledError):
self.listening_task = self.ev_loop.create_task(
self.data_source.listen_for_trades(self.ev_loop, msg_queue)
)
self.ev_loop.run_until_complete(self.listening_task)
@patch("aiohttp.ClientSession.ws_connect", new_callable=AsyncMock)
def test_listen_for_trades_exception_raised_when_connecting(self, mock_ws):
msg_queue: asyncio.Queue = asyncio.Queue()
mock_ws.side_effect = lambda **_: self._create_exception_and_unlock_test_with_event(Exception("TEST ERROR."))
self.listening_task = self.ev_loop.create_task(
self.data_source.listen_for_trades(self.ev_loop, msg_queue)
)
self.ev_loop.run_until_complete(self.resume_test_event.wait())
self.assertTrue(self._is_logged("NETWORK", "Unexpected error occured when connecting to WebSocket server. Error: TEST ERROR."))
self.assertTrue(self._is_logged("ERROR", "Unexpected error with WebSocket connection. Retrying after 30 seconds..."))
@patch("hummingbot.core.data_type.order_book_tracker_data_source.OrderBookTrackerDataSource._sleep")
@patch("aiohttp.ClientSession.ws_connect", new_callable=AsyncMock)
def test_listen_for_trades_cancelled_when_listening(self, mock_ws, _: AsyncMock):
msg_queue: asyncio.Queue = asyncio.Queue()
mock_ws.return_value = self.mocking_assistant.create_websocket_mock()
mock_ws.return_value.receive_json.side_effect = lambda: (
self._raise_exception(asyncio.CancelledError)
)
with self.assertRaises(asyncio.CancelledError):
self.listening_task = self.ev_loop.create_task(
self.data_source.listen_for_trades(self.ev_loop, msg_queue)
)
self.ev_loop.run_until_complete(self.listening_task)
@patch("aiohttp.ClientSession.ws_connect", new_callable=AsyncMock)
def test_listen_for_trades_logs_exception(self, mock_ws):
msg_queue: asyncio.Queue = asyncio.Queue()
mock_ws.return_value = self.mocking_assistant.create_websocket_mock()
mock_ws.close.return_value = None
incomplete_resp = {
"m": 1,
"i": 2,
}
self.mocking_assistant.add_websocket_json_message(mock_ws.return_value, incomplete_resp)
self.listening_task = self.ev_loop.create_task(
self.data_source.listen_for_trades(self.ev_loop, msg_queue)
)
with self.assertRaises(asyncio.TimeoutError):
self.ev_loop.run_until_complete(
asyncio.wait_for(self.listening_task, 1)
)
self.assertTrue(self._is_logged("ERROR", "Unexpected error with WebSocket connection. Retrying after 30 seconds..."))
@patch("aiohttp.ClientSession.ws_connect", new_callable=AsyncMock)
def test_listen_for_trades_iter_message_throws_exception(self, mock_ws):
msg_queue: asyncio.Queue = asyncio.Queue()
mock_ws.return_value = self.mocking_assistant.create_websocket_mock()
mock_ws.return_value.receive_json.side_effect = lambda: self._raise_exception(Exception("TEST ERROR"))
mock_ws.close.return_value = None
self.listening_task = self.ev_loop.create_task(
self.data_source.listen_for_trades(self.ev_loop, msg_queue)
)
with self.assertRaises(asyncio.TimeoutError):
self.ev_loop.run_until_complete(
asyncio.wait_for(self.listening_task, 1)
)
self.assertTrue(self._is_logged("NETWORK", "Unexpected error occured when parsing websocket payload. Error: TEST ERROR"))
self.assertTrue(self._is_logged("ERROR", "Unexpected error with WebSocket connection. Retrying after 30 seconds..."))
@patch("aiohttp.ClientSession.ws_connect", new_callable=AsyncMock)
def test_listen_for_trades_successful(self, mock_ws):
msg_queue: asyncio.Queue = asyncio.Queue()
mock_ws.return_value = self.mocking_assistant.create_websocket_mock()
mock_ws.close.return_value = None
self.mocking_assistant.add_websocket_json_message(mock_ws.return_value, self._successfully_subscribed_event())
self.mocking_assistant.add_websocket_json_message(mock_ws.return_value, self._trade_update_event())
self.listening_task = self.ev_loop.create_task(
self.data_source.listen_for_trades(self.ev_loop, msg_queue)
)
msg: OrderBookMessage = self.ev_loop.run_until_complete(msg_queue.get())
self.assertEqual(12345, msg.trade_id)
@patch("hummingbot.core.data_type.order_book_tracker_data_source.OrderBookTrackerDataSource._sleep")
@patch("aiohttp.ClientSession.ws_connect")
def test_listen_for_order_book_diffs_cancelled_when_connecting(self, mock_ws, _: AsyncMock):
msg_queue: asyncio.Queue = asyncio.Queue()
mock_ws.side_effect = asyncio.CancelledError
with self.assertRaises(asyncio.CancelledError):
self.listening_task = self.ev_loop.create_task(
self.data_source.listen_for_order_book_diffs(self.ev_loop, msg_queue)
)
self.ev_loop.run_until_complete(self.listening_task)
@patch("hummingbot.core.data_type.order_book_tracker_data_source.OrderBookTrackerDataSource._sleep")
@patch("aiohttp.ClientSession.ws_connect", new_callable=AsyncMock)
def test_listen_for_order_book_diffs_cancelled_when_listening(self, mock_ws, _: AsyncMock):
msg_queue: asyncio.Queue = asyncio.Queue()
mock_ws.return_value = self.mocking_assistant.create_websocket_mock()
mock_ws.return_value.receive_json.side_effect = lambda: (
self._raise_exception(asyncio.CancelledError)
)
with self.assertRaises(asyncio.CancelledError):
self.listening_task = self.ev_loop.create_task(
self.data_source.listen_for_order_book_diffs(self.ev_loop, msg_queue)
)
self.ev_loop.run_until_complete(self.listening_task)
@patch("aiohttp.ClientSession.ws_connect", new_callable=AsyncMock)
def test_listen_for_order_book_diffs_logs_exception(self, mock_ws):
msg_queue: asyncio.Queue = asyncio.Queue()
mock_ws.return_value = self.mocking_assistant.create_websocket_mock()
mock_ws.close.return_value = None
incomplete_resp = {
"m": 1,
"i": 2,
}
self.mocking_assistant.add_websocket_json_message(mock_ws.return_value, incomplete_resp)
self.listening_task = self.ev_loop.create_task(
self.data_source.listen_for_order_book_diffs(self.ev_loop, msg_queue)
)
with self.assertRaises(asyncio.TimeoutError):
self.ev_loop.run_until_complete(
asyncio.wait_for(self.listening_task, 1)
)
self.assertTrue(self._is_logged("ERROR", "Unexpected error with WebSocket connection. Retrying after 30 seconds..."))
@patch("aiohttp.ClientSession.ws_connect", new_callable=AsyncMock)
def test_listen_for_order_book_diffs_successful(self, mock_ws):
msg_queue: asyncio.Queue = asyncio.Queue()
mock_ws.return_value = self.mocking_assistant.create_websocket_mock()
mock_ws.close.return_value = None
self.mocking_assistant.add_websocket_json_message(mock_ws.return_value, self._successfully_subscribed_event())
self.mocking_assistant.add_websocket_json_message(mock_ws.return_value, self._order_diff_event())
self.listening_task = self.ev_loop.create_task(
self.data_source.listen_for_order_book_diffs(self.ev_loop, msg_queue)
)
msg: OrderBookMessage = self.ev_loop.run_until_complete(msg_queue.get())
self.assertEqual(160, msg.update_id)
@aioresponses()
def test_listen_for_order_book_snapshots_cancelled_when_fetching_snapshot(self, mock_api):
url = utils.public_rest_url(path_url=CONSTANTS.SNAPSHOT_PATH_URL, domain=self.domain)
regex_url = re.compile(f"^{url}".replace(".", r"\.").replace("?", r"\?"))
mock_api.get(regex_url, exception=asyncio.CancelledError)
with self.assertRaises(asyncio.CancelledError):
self.ev_loop.run_until_complete(
self.data_source.listen_for_order_book_snapshots(self.ev_loop, asyncio.Queue())
)
@aioresponses()
def test_listen_for_order_book_snapshots_log_exception(self, mock_api):
msg_queue: asyncio.Queue = asyncio.Queue()
url = utils.public_rest_url(path_url=CONSTANTS.SNAPSHOT_PATH_URL, domain=self.domain)
regex_url = re.compile(f"^{url}".replace(".", r"\.").replace("?", r"\?"))
mock_api.get(regex_url, exception=Exception)
self.listening_task = self.ev_loop.create_task(
self.data_source.listen_for_order_book_snapshots(self.ev_loop, msg_queue)
)
with self.assertRaises(asyncio.TimeoutError):
self.ev_loop.run_until_complete(
asyncio.wait_for(self.listening_task, 1)
)
self.assertTrue(self._is_logged("ERROR", f"Unexpected error fetching order book snapshot for {self.trading_pair}."))
@aioresponses()
def test_listen_for_order_book_snapshots_successful(self, mock_api):
msg_queue: asyncio.Queue = asyncio.Queue()
url = utils.public_rest_url(path_url=CONSTANTS.SNAPSHOT_PATH_URL, domain=self.domain)
regex_url = re.compile(f"^{url}".replace(".", r"\.").replace("?", r"\?"))
mock_api.get(regex_url, body=ujson.dumps(self._snapshot_response()))
self.listening_task = self.ev_loop.create_task(
self.data_source.listen_for_order_book_snapshots(self.ev_loop, msg_queue)
)
msg: OrderBookMessage = self.ev_loop.run_until_complete(msg_queue.get())
self.assertEqual(1027024, msg.update_id)
| 41.203219 | 135 | 0.662418 | 2,415 | 20,478 | 5.276605 | 0.101449 | 0.022601 | 0.036883 | 0.022444 | 0.810641 | 0.788119 | 0.774778 | 0.749353 | 0.730833 | 0.729734 | 0 | 0.01311 | 0.232689 | 20,478 | 496 | 136 | 41.28629 | 0.797874 | 0.016261 | 0 | 0.524173 | 0 | 0 | 0.105428 | 0.051448 | 0 | 0 | 0 | 0 | 0.083969 | 1 | 0.086514 | false | 0 | 0.035623 | 0.002545 | 0.139949 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
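The Binance data-source tests above repeatedly turn a REST URL into a regex via `.replace(".", r"\.").replace("?", r"\?")`; a standalone sketch of why that escaping is needed (the endpoint URL is a made-up placeholder):

```python
import re

# Made-up endpoint, standing in for utils.public_rest_url(...) in the tests above.
url = "https://api.example.com/api/v3/ticker"

# Escape the regex metacharacters '.' and '?' so they match literally, and
# anchor at the start so trailing query parameters are still accepted.
regex_url = re.compile(f"^{url}".replace(".", r"\.").replace("?", r"\?"))

# Matches the exact host plus any query string; a different host does not match
# because the escaped dots must be literal '.' characters.
print(bool(regex_url.match("https://api.example.com/api/v3/ticker?symbol=BTCUSDT")))  # True
```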
52fcc236ab1ec341a2c5ad81f2d559510db84e04 | 3,933 | py | Python | tests/test_hdr.py | knownboyofno/PySIPmath | 22521d0acaa74250fb7684b8eedbb3e6d044b4af | [
"MIT"
] | 2 | 2021-11-11T03:47:58.000Z | 2021-11-19T07:46:24.000Z | tests/test_hdr.py | knownboyofno/PySIPmath | 22521d0acaa74250fb7684b8eedbb3e6d044b4af | [
"MIT"
] | null | null | null | tests/test_hdr.py | knownboyofno/PySIPmath | 22521d0acaa74250fb7684b8eedbb3e6d044b4af | [
"MIT"
] | 3 | 2021-11-11T18:14:04.000Z | 2021-11-22T07:53:30.000Z | import unittest
from unittest.mock import ANY, patch
from pandas import DataFrame
import PySIP
from tests.fixtures.fit import fit_fixture
class HDRTestSuite(unittest.TestCase):
"""Basic test cases."""
@patch("metalog.metalog.fit")
@patch("json.dump")
@patch("builtins.open")
def test_Json_contains_hdr_data(self, mock_open, mock_dump, mock_fit):
mock_fit.return_value = fit_fixture()
fixture = DataFrame(data={
"Accounts": [10.24313638, 13.69812026, 12.62841292, 2.890162231, 7.60269451],
"Products": [7.00895936, 11.61220758, 5.07099725, 7.542072262, 13.37670202],
})
PySIP.Json(fixture, "foo.json", "bar")
self.assertDictEqual({
'arguments': {
'counter': 'PM_Index',
'entity': 1,
'seed3': 0,
'seed4': 0,
'varId': ANY
},
'function': 'HDR_2_0',
'name': 'hdr1'
}, mock_dump.call_args[0][0]["U01"]["rng"][0])
@patch("metalog.metalog.fit")
@patch("json.dump")
@patch("builtins.open")
def test_Json_calls_out_to_metalog_fit_properly(self, mock_open, mock_dump, mock_fit):
mock_fit.return_value = fit_fixture()
fixture = DataFrame(data={
"Accounts": [10.24313638, 13.69812026, 12.62841292, 2.890162231, 7.60269451],
"Products": [7.00895936, 11.61220758, 5.07099725, 7.542072262, 13.37670202],
})
PySIP.Json(fixture, "foo.json", "bar")
mock_fit.assert_called_with(
ANY, bounds=[0, 1], boundedness='u', term_limit=5, term_lower_bound=5
)
@patch("metalog.metalog.fit")
@patch("json.dump")
@patch("builtins.open")
def test_Json_contains_hdr_data_even_when_dependent(self, mock_open, mock_dump, mock_fit):
mock_fit.return_value = fit_fixture()
fixture = DataFrame(data={
"Accounts": [10.24313638, 13.69812026, 12.62841292, 2.890162231, 7.60269451],
"Products": [7.00895936, 11.61220758, 5.07099725, 7.542072262, 13.37670202],
})
PySIP.Json(fixture, "foo.json", "bar", dependence="dependent")
self.assertDictEqual({
'arguments': {
'counter': 'PM_Index',
'entity': 1,
'seed3': 0,
'seed4': 0,
'varId': ANY
},
'function': 'HDR_2_0',
'name': 'hdr1'
}, mock_dump.call_args[0][0]["U01"]["rng"][0])
@patch("metalog.metalog.fit")
@patch("json.dump")
@patch("builtins.open")
def test_Json_respects_provided_seeds(self, mock_open, mock_dump, mock_fit):
mock_fit.return_value = fit_fixture()
provided_seeds = [1, 2, 3, 4, 5]
fixture = DataFrame(data={
"Accounts": [10.24313638, 13.69812026, 12.62841292, 2.890162231, 7.60269451],
"Products": [7.00895936, 11.61220758, 5.07099725, 7.542072262, 13.37670202],
})
PySIP.Json(fixture, "foo.json", "bar", seeds=provided_seeds)
self.assertListEqual(provided_seeds, mock_dump.call_args[0][0]["U01"]["rng"])
@patch("builtins.print")
@patch("metalog.metalog.fit")
@patch("json.dump")
@patch("builtins.open")
def test_Json_reminds_minimum_number_of_provided_seeds(self, mock_open, mock_dump, mock_fit, mock_print):
mock_fit.return_value = fit_fixture()
provided_seeds = [1]
fixture = DataFrame(data={
"Accounts": [10.24313638, 13.69812026, 12.62841292, 2.890162231, 7.60269451],
"Products": [7.00895936, 11.61220758, 5.07099725, 7.542072262, 13.37670202],
})
PySIP.Json(fixture, "foo.json", "bar", seeds=provided_seeds)
mock_print.assert_called_with(
"RNG list length must be equal to or greater than the number of SIPs.")
if __name__ == '__main__':
unittest.main()
| 37.103774 | 109 | 0.59522 | 471 | 3,933 | 4.770701 | 0.235669 | 0.034268 | 0.042279 | 0.048954 | 0.784602 | 0.784602 | 0.784602 | 0.784602 | 0.773921 | 0.748999 | 0 | 0.179733 | 0.25731 | 3,933 | 105 | 110 | 37.457143 | 0.589524 | 0.004322 | 0 | 0.719101 | 0 | 0 | 0.151957 | 0 | 0 | 0 | 0 | 0 | 0.05618 | 1 | 0.05618 | false | 0 | 0.05618 | 0 | 0.123596 | 0.033708 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
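The PySIP tests above lean on `unittest.mock.ANY` to ignore unpredictable fields such as `varId` when asserting on the dumped JSON; a minimal standalone sketch of that pattern (the payload dict is invented):

```python
import json
from unittest.mock import ANY, patch

# Patch json.dump so the call is recorded instead of writing anywhere.
with patch("json.dump") as mock_dump:
    json.dump({"varId": 12345, "name": "hdr1"}, None)

# ANY compares equal to anything, so the unpredictable field is ignored
# while the rest of the payload is matched exactly.
mock_dump.assert_called_with({"varId": ANY, "name": "hdr1"}, None)
```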
eab163c4808338f3a52d44ece55b6a061c5a7e63 | 177 | py | Python | data_structures/tree_based/binary_search_tree/test.py | kwahome/data-structures-and-algos | 535b23c63bf384d63c1ebc08d1c32d3dd808297c | [
"Apache-2.0"
] | null | null | null | data_structures/tree_based/binary_search_tree/test.py | kwahome/data-structures-and-algos | 535b23c63bf384d63c1ebc08d1c32d3dd808297c | [
"Apache-2.0"
] | null | null | null | data_structures/tree_based/binary_search_tree/test.py | kwahome/data-structures-and-algos | 535b23c63bf384d63c1ebc08d1c32d3dd808297c | [
"Apache-2.0"
] | null | null | null | import unittest
class BinarySearchTreeTests(unittest.TestCase):
def setUp(self):
pass
def tearDown(self):
pass
def test_tree(self):
pass
| 13.615385 | 47 | 0.627119 | 19 | 177 | 5.789474 | 0.631579 | 0.218182 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.299435 | 177 | 12 | 48 | 14.75 | 0.887097 | 0 | 0 | 0.375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.375 | false | 0.375 | 0.125 | 0 | 0.625 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
dc4260cf8f20bea302a69e09b4b826fc26dd7426 | 49 | py | Python | mw2fcitx/utils/dedup.py | outloudvi/mw2fcitx | a4fbbcd5e8068ee1f08714f0e18b46c8b289a42c | [
"Unlicense"
] | 67 | 2020-08-13T13:58:03.000Z | 2022-03-29T11:33:51.000Z | mw2fcitx/utils/dedup.py | outloudvi/fcitx5-pinyin-moegirl | c62d3f7d049143a4d8726f408bdd345f53ff3347 | [
"Unlicense"
] | 5 | 2020-11-16T01:48:32.000Z | 2022-02-18T08:04:32.000Z | mw2fcitx/utils/dedup.py | outloudvi/fcitx5-pinyin-moegirl | c62d3f7d049143a4d8726f408bdd345f53ff3347 | [
"Unlicense"
] 3 | 2020-10-08T15:44:30.000Z | 2022-03-23T12:40:11.000Z | def dedup(arr: list) -> list:
# A set drops duplicates; note that input order is not preserved.
return list(set(arr))
| 16.333333 | 25 | 0.612245 | 8 | 49 | 3.75 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.183673 | 49 | 2 | 26 | 24.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
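Because `dedup` above round-trips through a `set`, it removes duplicates but does not preserve input order; a minimal sketch (the word list is made up):

```python
def dedup(arr):
    # Same approach as mw2fcitx/utils/dedup.py above: a set drops duplicates.
    return list(set(arr))


words = ["ni", "hao", "ni", "ma"]
unique = dedup(words)
# set() does not guarantee ordering, so compare the sorted result.
print(sorted(unique))  # ['hao', 'ma', 'ni']
```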
dc43ffaa60bfc3f0fb4727078290ad21f4802644 | 72 | py | Python | polaritymodel/__init__.py | tsmbland/Model | 40c66fa2f16821bd0e1eab2a1c09026782c1965d | [
"MIT"
] | null | null | null | polaritymodel/__init__.py | tsmbland/Model | 40c66fa2f16821bd0e1eab2a1c09026782c1965d | [
"MIT"
] | null | null | null | polaritymodel/__init__.py | tsmbland/Model | 40c66fa2f16821bd0e1eab2a1c09026782c1965d | [
"MIT"
] | null | null | null | from .interactive import *
from .paramspace import *
from .pde import *
| 18 | 26 | 0.75 | 9 | 72 | 6 | 0.555556 | 0.37037 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 72 | 3 | 27 | 24 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
dc6e3f770dec3610379cbe3d5b98d563d85e9c9a | 48 | py | Python | tools/debug/debug.py | DJNing/CenterPoint-KITTI | 44630bc7d60c469a192c7a29178d580196a7410d | [
"Apache-2.0"
] | null | null | null | tools/debug/debug.py | DJNing/CenterPoint-KITTI | 44630bc7d60c469a192c7a29178d580196a7410d | [
"Apache-2.0"
] | null | null | null | tools/debug/debug.py | DJNing/CenterPoint-KITTI | 44630bc7d60c469a192c7a29178d580196a7410d | [
"Apache-2.0"
] | null | null | null | import os
import numpy as np
ts = 164318121300 | 12 | 18 | 0.770833 | 8 | 48 | 4.625 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.315789 | 0.208333 | 48 | 4 | 19 | 12 | 0.657895 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f4b1d82f7fc300b1885ed291152676977a923bc6 | 124 | py | Python | tests/conftest.py | davidolrik/python-swt | 8ab72861daaaa5abc85c239182822893d5f56e8d | [
"MIT"
] | 4 | 2020-05-28T11:32:57.000Z | 2020-10-01T05:51:16.000Z | tests/conftest.py | davidolrik/python-swt | 8ab72861daaaa5abc85c239182822893d5f56e8d | [
"MIT"
] | null | null | null | tests/conftest.py | davidolrik/python-swt | 8ab72861daaaa5abc85c239182822893d5f56e8d | [
"MIT"
] | null | null | null | import pytest
from swt import __version__
def pytest_report_header(config):
return f"project version: {__version__}"
| 15.5 | 44 | 0.782258 | 16 | 124 | 5.4375 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153226 | 124 | 7 | 45 | 17.714286 | 0.828571 | 0 | 0 | 0 | 0 | 0 | 0.241935 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.5 | 0.25 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
f4f6ec9622d19691911989330ec277037663d8e8 | 46 | py | Python | pyimapsync/__init__.py | pyscioffice/pyimapsync | c21bdf4002930756224171b6ee43cb40fc750262 | [
"BSD-3-Clause"
] | null | null | null | pyimapsync/__init__.py | pyscioffice/pyimapsync | c21bdf4002930756224171b6ee43cb40fc750262 | [
"BSD-3-Clause"
] | null | null | null | pyimapsync/__init__.py | pyscioffice/pyimapsync | c21bdf4002930756224171b6ee43cb40fc750262 | [
"BSD-3-Clause"
] | 1 | 2022-01-22T17:37:38.000Z | 2022-01-22T17:37:38.000Z | from pyimapsync.shared import transfer_emails
| 23 | 45 | 0.891304 | 6 | 46 | 6.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.086957 | 46 | 1 | 46 | 46 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5215e51ac85d8b62ea480df4911bcd8cbb5f2da4 | 27 | py | Python | src/euler_python_package/euler_python/medium/p376.py | wilsonify/euler | 5214b776175e6d76a7c6d8915d0e062d189d9b79 | [
"MIT"
] | null | null | null | src/euler_python_package/euler_python/medium/p376.py | wilsonify/euler | 5214b776175e6d76a7c6d8915d0e062d189d9b79 | [
"MIT"
] | null | null | null | src/euler_python_package/euler_python/medium/p376.py | wilsonify/euler | 5214b776175e6d76a7c6d8915d0e062d189d9b79 | [
"MIT"
] | null | null | null | def problem376():
pass
| 9 | 17 | 0.62963 | 3 | 27 | 5.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15 | 0.259259 | 27 | 2 | 18 | 13.5 | 0.7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
52278dca6f63d5db48275638b83cca3aa4ac4095 | 121 | py | Python | demes/convert/__init__.py | noscode/demes | 5d300a8f5b5e260b828ae2cc7a78ab9ebf3bebba | [
"0BSD"
] | null | null | null | demes/convert/__init__.py | noscode/demes | 5d300a8f5b5e260b828ae2cc7a78ab9ebf3bebba | [
"0BSD"
] | 33 | 2021-07-08T20:01:40.000Z | 2022-03-01T21:11:10.000Z | demes/convert/__init__.py | apragsdale/demes | c31ee9aca6cd8cfe84622a2f1b12b80c5887ae86 | [
"0BSD"
] | null | null | null | # flake8: noqa: F401
from .msprime_ import to_msprime, from_msprime
from .stdpopsim_ import to_stdpopsim, from_stdpopsim
| 30.25 | 52 | 0.826446 | 17 | 121 | 5.529412 | 0.470588 | 0.234043 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.037383 | 0.115702 | 121 | 3 | 53 | 40.333333 | 0.841122 | 0.14876 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
87151a37414c37bf72f1a358578325b2e5e187d1 | 5,635 | py | Python | pyteal/ast/for_test.py | CiottiGiorgio/pyteal | 9646a1aa479786c1e80d6a3821d5db1e6c4a16e2 | [
"MIT"
] | null | null | null | pyteal/ast/for_test.py | CiottiGiorgio/pyteal | 9646a1aa479786c1e80d6a3821d5db1e6c4a16e2 | [
"MIT"
] | 1 | 2022-03-04T14:57:57.000Z | 2022-03-04T14:57:57.000Z | pyteal/ast/for_test.py | CiottiGiorgio/pyteal | 9646a1aa479786c1e80d6a3821d5db1e6c4a16e2 | [
"MIT"
] | null | null | null | import pytest
import pyteal as pt
options = pt.CompileOptions()
def test_for_compiles():
i = pt.ScratchVar()
expr = pt.For(i.store(pt.Int(0)), pt.Int(1), i.store(i.load() + pt.Int(1))).Do(
pt.App.globalPut(pt.Itob(pt.Int(0)), pt.Itob(pt.Int(2)))
)
assert expr.type_of() == pt.TealType.none
assert not expr.has_return()
expr.__teal__(options)
def test_nested_for_compiles():
i = pt.ScratchVar()
expr = pt.For(i.store(pt.Int(0)), pt.Int(1), i.store(i.load() + pt.Int(1))).Do(
pt.Seq(
[
pt.For(i.store(pt.Int(0)), pt.Int(1), i.store(i.load() + pt.Int(1))).Do(
pt.Seq([i.store(pt.Int(0))])
)
]
)
)
assert expr.type_of() == pt.TealType.none
assert not expr.has_return()
def test_continue_break():
i = pt.ScratchVar()
expr = pt.For(i.store(pt.Int(0)), pt.Int(1), i.store(i.load() + pt.Int(1))).Do(
pt.Seq([pt.If(pt.Int(1), pt.Break(), pt.Continue())])
)
assert expr.type_of() == pt.TealType.none
assert not expr.has_return()
expr.__teal__(options)
def test_for():
i = pt.ScratchVar()
items = [
(i.store(pt.Int(0))),
i.load() < pt.Int(10),
i.store(i.load() + pt.Int(1)),
pt.App.globalPut(pt.Itob(i.load()), i.load() * pt.Int(2)),
]
expr = pt.For(items[0], items[1], items[2]).Do(pt.Seq([items[3]]))
assert expr.type_of() == pt.TealType.none
assert not expr.has_return()
expected, varEnd = items[0].__teal__(options)
condStart, condEnd = items[1].__teal__(options)
stepStart, stepEnd = items[2].__teal__(options)
do, doEnd = pt.Seq([items[3]]).__teal__(options)
expectedBranch = pt.TealConditionalBlock([])
end = pt.TealSimpleBlock([])
varEnd.setNextBlock(condStart)
doEnd.setNextBlock(stepStart)
expectedBranch.setTrueBlock(do)
expectedBranch.setFalseBlock(end)
condEnd.setNextBlock(expectedBranch)
stepEnd.setNextBlock(condStart)
actual, _ = expr.__teal__(options)
assert actual == expected
def test_for_continue():
i = pt.ScratchVar()
items = [
(i.store(pt.Int(0))),
i.load() < pt.Int(10),
i.store(i.load() + pt.Int(1)),
pt.If(i.load() < pt.Int(4), pt.Continue()),
pt.App.globalPut(pt.Itob(i.load()), i.load() * pt.Int(2)),
]
expr = pt.For(items[0], items[1], items[2]).Do(pt.Seq([items[3], items[4]]))
assert expr.type_of() == pt.TealType.none
assert not expr.has_return()
options.enterLoop()
expected, varEnd = items[0].__teal__(options)
condStart, condEnd = items[1].__teal__(options)
stepStart, stepEnd = items[2].__teal__(options)
do, doEnd = pt.Seq([items[3], items[4]]).__teal__(options)
expectedBranch = pt.TealConditionalBlock([])
end = pt.TealSimpleBlock([])
doEnd.setNextBlock(stepStart)
stepEnd.setNextBlock(condStart)
expectedBranch.setTrueBlock(do)
expectedBranch.setFalseBlock(end)
condEnd.setNextBlock(expectedBranch)
varEnd.setNextBlock(condStart)
_, continueBlocks = options.exitLoop()
for block in continueBlocks:
block.setNextBlock(stepStart)
actual, _ = expr.__teal__(options)
assert actual == expected
def test_for_break():
i = pt.ScratchVar()
items = [
(i.store(pt.Int(0))),
i.load() < pt.Int(10),
i.store(i.load() + pt.Int(1)),
pt.If(i.load() == pt.Int(6), pt.Break()),
pt.App.globalPut(pt.Itob(i.load()), i.load() * pt.Int(2)),
]
expr = pt.For(items[0], items[1], items[2]).Do(pt.Seq([items[3], items[4]]))
assert expr.type_of() == pt.TealType.none
assert not expr.has_return()
options.enterLoop()
expected, varEnd = items[0].__teal__(options)
condStart, condEnd = items[1].__teal__(options)
stepStart, stepEnd = items[2].__teal__(options)
do, doEnd = pt.Seq([items[3], items[4]]).__teal__(options)
expectedBranch = pt.TealConditionalBlock([])
end = pt.TealSimpleBlock([])
doEnd.setNextBlock(stepStart)
stepEnd.setNextBlock(condStart)
expectedBranch.setTrueBlock(do)
expectedBranch.setFalseBlock(end)
condEnd.setNextBlock(expectedBranch)
varEnd.setNextBlock(condStart)
breakBlocks, _ = options.exitLoop()
for block in breakBlocks:
block.setNextBlock(end)
actual, _ = expr.__teal__(options)
assert actual == expected
def test_invalid_for():
with pytest.raises(TypeError):
expr = pt.For()
with pytest.raises(TypeError):
expr = pt.For(pt.Int(2))
with pytest.raises(TypeError):
expr = pt.For(pt.Int(1), pt.Int(2))
with pytest.raises(pt.TealTypeError):
i = pt.ScratchVar()
expr = pt.For(i.store(pt.Int(0)), pt.Int(1), pt.Int(2))
expr.__teal__(options)
with pytest.raises(pt.TealCompileError):
i = pt.ScratchVar()
expr = pt.For(i.store(pt.Int(0)), pt.Int(1), i.store(i.load() + pt.Int(1)))
expr.type_of()
with pytest.raises(pt.TealCompileError):
i = pt.ScratchVar()
expr = pt.For(i.store(pt.Int(0)), pt.Int(1), i.store(i.load() + pt.Int(1)))
expr.__str__()
with pytest.raises(pt.TealTypeError):
i = pt.ScratchVar()
expr = pt.For(i.store(pt.Int(0)), pt.Int(1), i.store(i.load() + pt.Int(1))).Do(
pt.Int(0)
)
with pytest.raises(pt.TealCompileError):
expr = (
pt.For(i.store(pt.Int(0)), pt.Int(1), i.store(i.load() + pt.Int(1)))
.Do(pt.Continue())
.Do(pt.Continue())
)
expr.__str__()
| 28.604061 | 88 | 0.60213 | 767 | 5,635 | 4.277705 | 0.092568 | 0.074672 | 0.040232 | 0.057909 | 0.877781 | 0.843036 | 0.838769 | 0.838769 | 0.807071 | 0.740933 | 0 | 0.018227 | 0.221118 | 5,635 | 196 | 89 | 28.75 | 0.729323 | 0 | 0 | 0.695946 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.101351 | 1 | 0.047297 | false | 0 | 0.013514 | 0 | 0.060811 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5e84eb2abe4f10ad91f3f954aaff7582a97eab1a | 81 | py | Python | src/my_util/__init__.py | rokkish/unsupervised_trajectory_segmentation | 4875bdc532349670d63cbd86a78032cd9cda9631 | [
"MIT"
] | null | null | null | src/my_util/__init__.py | rokkish/unsupervised_trajectory_segmentation | 4875bdc532349670d63cbd86a78032cd9cda9631 | [
"MIT"
] | 2 | 2020-10-29T02:29:36.000Z | 2021-04-18T12:42:49.000Z | src/my_util/__init__.py | rokkish/unsupervised_trajectory_segmentation | 4875bdc532349670d63cbd86a78032cd9cda9631 | [
"MIT"
] | null | null | null | from my_util.models.segnet_model import MyNet
from my_util.trainer import Trainer | 40.5 | 45 | 0.876543 | 14 | 81 | 4.857143 | 0.642857 | 0.176471 | 0.294118 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.08642 | 81 | 2 | 46 | 40.5 | 0.918919 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5ecfdd21138c7074e811149eeb76c5fab4ce835c | 307 | py | Python | altimeter/qj/db/base.py | elliotsegler/altimeter | c3924524938b4bae86b1acda2a4fc3f79ac523ff | [
"MIT"
] | 48 | 2019-11-06T03:20:53.000Z | 2022-02-22T21:10:45.000Z | altimeter/qj/db/base.py | elliotsegler/altimeter | c3924524938b4bae86b1acda2a4fc3f79ac523ff | [
"MIT"
] | 27 | 2020-01-07T23:48:30.000Z | 2022-02-26T00:24:04.000Z | altimeter/qj/db/base.py | elliotsegler/altimeter | c3924524938b4bae86b1acda2a4fc3f79ac523ff | [
"MIT"
] | 21 | 2019-12-20T03:06:35.000Z | 2021-12-15T23:26:00.000Z | """All modes are imported here such that Base has them before being used.
In general Base should be imported from here."""
# noqa # pylint: disable=unused-import
from altimeter.qj.db.base_class import BASE
from altimeter.qj.models.job import Job
from altimeter.qj.models.result_set import ResultSet, Result
| 43.857143 | 73 | 0.798046 | 50 | 307 | 4.86 | 0.64 | 0.160494 | 0.185185 | 0.17284 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.127036 | 307 | 6 | 74 | 51.166667 | 0.906716 | 0.498371 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0dc844ed16ac3a1ecf31f210f5e9430fbb72da37 | 39 | py | Python | pylexitext/engines/__init__.py | vicotrbb/Pylexitext | 9b1ed2abf8f8a76204a7db5d1673f2e6ccf62bd2 | [
"MIT"
] | 4 | 2021-02-18T01:08:30.000Z | 2021-07-08T03:02:53.000Z | pylexitext/engines/__init__.py | vicotrbb/Pylexitext | 9b1ed2abf8f8a76204a7db5d1673f2e6ccf62bd2 | [
"MIT"
] | null | null | null | pylexitext/engines/__init__.py | vicotrbb/Pylexitext | 9b1ed2abf8f8a76204a7db5d1673f2e6ccf62bd2 | [
"MIT"
] | null | null | null | from .search_engine import SearchEngine | 39 | 39 | 0.897436 | 5 | 39 | 6.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.076923 | 39 | 1 | 39 | 39 | 0.944444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
21cfd9564be43465cce543f52615eecc379acbed | 175 | py | Python | src/include/__init__.py | adisve/MFH-GruppProject | 78463cbf730137ed3dbbdb472aa99b79bc4363cc | [
"MIT"
] | 1 | 2022-03-10T19:53:42.000Z | 2022-03-10T19:53:42.000Z | src/include/__init__.py | adisve/MFH-GruppProject | 78463cbf730137ed3dbbdb472aa99b79bc4363cc | [
"MIT"
] | 11 | 2022-03-05T00:00:51.000Z | 2022-03-08T18:14:25.000Z | src/include/__init__.py | adisve/Sustainable-Programming | 78463cbf730137ed3dbbdb472aa99b79bc4363cc | [
"MIT"
] | 1 | 2022-02-20T16:09:12.000Z | 2022-02-20T16:09:12.000Z | """Package that exports game.py module and core game functionality.
Modules exported by this package are:
game.py
Subpackages exported by this package are:
utils
player
"""
| 17.5 | 67 | 0.782857 | 26 | 175 | 5.269231 | 0.653846 | 0.087591 | 0.20438 | 0.306569 | 0.350365 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.154286 | 175 | 9 | 68 | 19.444444 | 0.925676 | 0.954286 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
21edf996e8c948cd87dc68a30f006ec0fee91c9d | 125 | py | Python | kata/python/kyu_6/replace_with_alphabet_position/solution.py | ckarakoc/codewars | 2b1fce935f59d73a6d8309b98ca2c3b310623fe4 | [
"MIT"
] | 1 | 2021-06-04T23:24:50.000Z | 2021-06-04T23:24:50.000Z | kata/python/kyu_6/replace_with_alphabet_position/solution.py | ckarakoc/codewars | 2b1fce935f59d73a6d8309b98ca2c3b310623fe4 | [
"MIT"
] | null | null | null | kata/python/kyu_6/replace_with_alphabet_position/solution.py | ckarakoc/codewars | 2b1fce935f59d73a6d8309b98ca2c3b310623fe4 | [
"MIT"
] | null | null | null | import re
def alphabet_position(text):
return ' '.join(str(ord(c) - 96) for c in re.sub('[^a-zA-Z]', '', text.lower())) | 41.666667 | 84 | 0.6 | 22 | 125 | 3.363636 | 0.863636 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019048 | 0.16 | 125 | 3 | 84 | 41.666667 | 0.685714 | 0 | 0 | 0 | 0 | 0 | 0.080645 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
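A quick usage sketch for the kata solution above: the function lowercases the input, strips every non-letter with `re.sub`, and maps each remaining character to its 1-based alphabet position via `ord(c) - 96`. The sample input below is illustrative.

```python
import re

def alphabet_position(text):
    # Lowercase, drop non-letters, then map 'a'..'z' to "1".."26"
    return ' '.join(str(ord(c) - 96) for c in re.sub('[^a-zA-Z]', '', text.lower()))

print(alphabet_position("abc xyz!"))  # -> 1 2 3 24 25 26
```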
21f7b61d609cda79179324fbc453d0c5695e148f | 40,557 | py | Python | python_src/ianna.py | fjpena/sword-of-ianna-msx2 | f104f46a677e4b21f42fbed478307a0bb1d372f3 | [
"Apache-2.0"
] | 43 | 2017-10-21T23:01:25.000Z | 2022-02-21T17:45:11.000Z | python_src/ianna.py | fjpena/sword-of-ianna-msx2 | f104f46a677e4b21f42fbed478307a0bb1d372f3 | [
"Apache-2.0"
] | null | null | null | python_src/ianna.py | fjpena/sword-of-ianna-msx2 | f104f46a677e4b21f42fbed478307a0bb1d372f3 | [
"Apache-2.0"
] | 6 | 2017-10-23T05:48:50.000Z | 2022-01-06T03:11:49.000Z | #!/usr/bin/env python
import pygame
import pygame.locals
import tmxlib
import sys
import constants
import os
from tiled_map import *
from sev_sprite import *
from ianna_entity import *
from mapa import *
from timer import *
from ianna_score import *
import gc
class IannaGame():
def __init__ (self):
self.keyboard=[0,0,0,0,0,0] # 0 is not pressed, 1 is pressed
pygame.init()
self.screen = pygame.display.set_mode((256*3, 192*3))
pygame.display.set_caption('Ianna - Press F12 to set fullscreen')
self.screen.fill((0, 0, 0))
self.buffer = pygame.Surface((256,192))
self.fpsClock = pygame.time.Clock()
self.pause_image = pygame.image.load('artwork/pausa.png').convert()
def toggle_fullscreen(self):
tmp = self.screen.convert()
caption = pygame.display.get_caption()
cursor = pygame.mouse.get_cursor()
w,h = self.screen.get_width(),self.screen.get_height()
flags = self.screen.get_flags()
bits = self.screen.get_bitsize()
pygame.display.quit()
pygame.display.init()
print "Setting display with "+str(w)+"x"+str(h)+" with "+str(bits)+" bits and "+str(flags)+" as flags"
if flags:
self.screen = pygame.display.set_mode((w,h),0,bits)
else:
self.screen = pygame.display.set_mode((w,h),pygame.FULLSCREEN,bits)
self.screen.blit(tmp,(0,0))
pygame.display.set_caption(*caption)
pygame.key.set_mods(0)
pygame.mouse.set_cursor( *cursor )
def pause_menu(self):
self.buffer.set_clip(pygame.Rect(0,0,256,160)) # set clipping area for game, should then set clipping for score area
self.buffer.blit(self.pause_image,(60,60))
pygame.transform.scale(self.buffer,(256*3,192*3),self.screen)
pygame.display.flip()
done = False
while not done:
events = pygame.event.get()
for event in events:
if event.type == pygame.locals.QUIT:
sys.exit(0)
elif event.type == pygame.KEYDOWN:
if event.key == pygame.K_o:
current = self.game_entities[0].current_object
if current > 0:
current = current - 1
self.game_entities[0].current_object = current
self.scorearea.draw()
pygame.transform.scale(self.buffer,(256*3,192*3),self.screen)
pygame.display.flip()
elif event.key == pygame.K_p:
current = self.game_entities[0].current_object
if current < len(self.game_entities[0].inventory)-1:
current = current + 1
self.game_entities[0].current_object = current
self.scorearea.draw()
pygame.transform.scale(self.buffer,(256*3,192*3),self.screen)
pygame.display.flip()
elif event.key == pygame.K_a:
current = self.game_entities[0].weapon
while True:
current = current - 1
if current == 0:
current = 4
if self.player_weapons[current-1]:
# switch weapon in game
self.game_entities[0].weapon = current
break
self.game_entities[0].load_weapon()
self.scorearea.draw()
pygame.transform.scale(self.buffer,(256*3,192*3),self.screen)
pygame.display.flip()
elif event.key == pygame.K_SPACE:
# Consume object, if it is "consumable"
player = self.game_entities[0]
if player.current_object < len(player.inventory):
if player.inventory[player.current_object] == "OBJECT_HEALTH":
player.energy = player.get_entity_max_energy()
player.inventory.remove("OBJECT_HEALTH")
self.scorearea.draw()
pygame.transform.scale(self.buffer,(256*3,192*3),self.screen)
pygame.display.flip()
elif event.key == pygame.K_h:
done = True
self.waitforkeyrelease(pygame.K_h)
elif event.key == pygame.K_x:
self.export_menu()
elif event.key == pygame.K_d:
self.waitforkeyrelease(pygame.K_d)
self.dump_menu()
done = True
elif event.key == pygame.K_s:
self.export_sprites()
def dump_menu(self):
# Show object state in current level
print("Object state: ")
for i in range(0,128):
print("Obj %d: %d %d" % (i, self.mymap.objects[i*2], self.mymap.objects[i*2+1]))
# Dump entities in current screen
for entity in self.game_entities:
if entity:
entity.dump_entity()
def export_sprites(self):
if not os.path.exists('export_sprites'):
os.mkdir('export_sprites')
self.export_single_sprite(self.entity_player, 'export_sprites/sprite_barbaro_2.asm', player=True)
self.export_player_weapon(self.entity_player, 'export_sprites/sprite_barbaro_1.asm', constants.WEAPON_SWORD)
self.export_player_weapon(self.entity_player, 'export_sprites/sprite_barbaro_3.asm', constants.WEAPON_ECLIPSE)
self.export_player_weapon(self.entity_player, 'export_sprites/sprite_barbaro_4.asm', constants.WEAPON_AXE)
self.export_player_weapon(self.entity_player, 'export_sprites/sprite_barbaro_5.asm', constants.WEAPON_BLADE)
for enemy_type in ['OBJECT_ENEMY_SKELETON', 'OBJECT_ENEMY_ORC', 'OBJECT_ENEMY_MUMMY', 'OBJECT_ENEMY_TROLL',
'OBJECT_ENEMY_ROCK', 'OBJECT_ENEMY_KNIGHT', 'OBJECT_ENEMY_DALGURAK']:
entity = IannaCharacter(Map.enemy_spritedir[enemy_type], None, None, None, None, player=False)
self.export_single_sprite(entity, "export_sprites/sprite_"+Map.enemy_spritedir[enemy_type]+".asm")
for enemy_type in ('OBJECT_ENEMY_GOLEM','OBJECT_ENEMY_OGRE','OBJECT_ENEMY_MINOTAUR','OBJECT_ENEMY_DEMON'):
entity = IannaCharacter(Map.enemy_spritedir[enemy_type], None, None, None, None, player=False)
self.export_single_sprite(entity, "export_sprites/sprite_"+Map.enemy_spritedir_export[enemy_type]+".asm")
entity2 = IannaCharacter(Map.enemy_sprite2dir[enemy_type], None, None, None, None, player=False, secondary=True)
self.export_single_sprite(entity2, "export_sprites/sprite_"+Map.enemy_sprite2dir_export[enemy_type]+".asm")
def export_single_sprite(self, entity, filename, player=False):
if player is True:
entity.sp_ouch.export(filename)
entity.sp_jump_up.export(filename)
entity.sp_shortjump.export(filename)
entity.sp_longjump.export(filename)
entity.sp_run.export(filename)
entity.sp_brake.export(filename)
entity.sp_braketurn.export(filename)
entity.sp_switch.export(filename)
entity.sp_grab.export(filename)
else:
entity.sp_idle.export(filename)
entity.sp_turn.export(filename)
entity.sp_walk.export(filename)
entity.sp_fall.export(filename)
entity.sp_crouch.export(filename)
entity.sp_unsheathe.export(filename)
entity.sp_idle_sword.export(filename)
entity.sp_walk_sword.export(filename)
entity.sp_high_sword.export(filename)
entity.sp_forw_sword.export(filename)
entity.sp_combo1_sword.export(filename)
entity.sp_low_sword.export(filename)
entity.sp_back_sword.export(filename)
entity.sp_ouch_sword.export(filename)
entity.sp_die.export(filename)
def export_player_weapon(self, entity, filename, weapon):
entity.weapon = weapon
entity.load_weapon()
entity.sp_idle.export(filename)
entity.sp_turn.export(filename)
entity.sp_walk.export(filename)
entity.sp_fall.export(filename)
entity.sp_crouch.export(filename)
entity.sp_unsheathe.export(filename)
entity.sp_idle_sword.export(filename)
entity.sp_walk_sword.export(filename)
entity.sp_high_sword.export(filename)
entity.sp_forw_sword.export(filename)
entity.sp_combo1_sword.export(filename)
entity.sp_low_sword.export(filename)
entity.sp_back_sword.export(filename)
entity.sp_ouch_sword.export(filename)
entity.sp_die.export(filename)
def export_menu(self):
# get the tileset from the same directory, from a file called "levelname.asm" instead of tmx
tileset=IannaTileset(os.path.join(os.path.dirname(self.mapfile), self.mymap.tilefile), self.mymap.map.tilesets[0].image.height / 8,self.mymap.map.tilesets[0].image.width / 8 )
try:
os.mkdir("export")
except OSError as e:
print "Export directory already exists"
levelscripts=["ACTION_NONE","ACTION_PLAYER","ACTION_SECONDARY"]
levelstrings=[]
levelstrings_en=[]
# Scripts for pickable objects: should always be in the same place!!!
for kk, script in IannaScript.scripts_per_pickable_object.iteritems():
if script not in levelscripts:
levelscripts.append(script)
# Scripts for enemy attacks: should always be in the same place!!!
		for enemy in ("OBJECT_ENEMY_SKELETON", "OBJECT_ENEMY_ORC", "OBJECT_ENEMY_MUMMY",
				"OBJECT_ENEMY_TROLL", "OBJECT_ENEMY_ROCK", "OBJECT_ENEMY_KNIGHT",
				"OBJECT_ENEMY_GOLEM", "OBJECT_ENEMY_OGRE", "OBJECT_ENEMY_MINOTAUR",
				"OBJECT_ENEMY_DEMON", "OBJECT_ENEMY_DALGURAK"):
			for script in IannaCharacter.enemy_attack_patterns[enemy]:
				if script not in levelscripts:
					levelscripts.append(script)
# Scripts for player in specific screens
for y in range(0,self.mymap.map.height/10):
for x in range(0,self.mymap.map.width/16):
try:
scriptid = "script-"+str(x)+"_"+str(y)
script = self.mymap.map.properties[scriptid]
if script not in levelscripts:
levelscripts.append(script)
except KeyError:
pass
# Create the list of all scripts in the level
for y in range(0,self.mymap.map.height/10):
for x in range(0,self.mymap.map.width/16):
screen=IannaScreen(self.mymap.map.layers['Fondo'], self.mymap.map.layers['PrimerPlano'], self.mymap.objset, self.mymap.map.layers['Dureza'], self.mymap.map.width,self.mymap.map.height, x, y, properties=self.mymap.map.properties)
screenscripts = screen.get_scripts_from_screen()
for script in screenscripts:
if script not in levelscripts:
levelscripts.append(script)
print "We have",len(levelscripts),"different scripts in the level"
# Create the list of all strings in the level
counter = 0
while True:
stringid = "string-"+str(counter)
try:
string = self.mymap.map.properties[stringid]
levelstrings.append(string)
# Append the English string as well
string = self.mymap.map.properties[stringid+"-en"]
levelstrings_en.append(string)
counter = counter + 1
except KeyError:
break
print "We have",len(levelstrings),"different strings in the level"
initialcoords = self.mymap.map.properties["InitialCoords"].split(',')
mapposx=int(initialcoords[0])
mapposy=int(initialcoords[1])
initialscreen = self.mymap.map.properties["InitialScreen"].split(',')
mapscreenx = int(initialscreen[0])
mapscreeny = int(initialscreen[1])
mainfile = file("export/map.asm","w")
self.mymap.tilemap.print_mapinfo(mainfile,len(levelscripts),len(levelstrings),mapposx,mapposy,mapscreenx,mapscreeny)
tilefile = file("export/map_stiles.SR5","wb")
tileset.print_stiles(tilefile)
tilefile.close()
# Write strings and scripts to file
f = file("export/map_strings.asm","w")
f.write("org $C000\n")
f.write("map_strings: dw string_list\nmap_scripts: dw current_script_pointer\n\n")
f.write(";LEVEL STRING POINTERS\n\n")
f.write("string_list: dw ")
counter = 0
linecounter = 0
for string in levelstrings:
if counter == len(levelstrings) - 1:
f.write("string"+str(counter))
elif linecounter < 10:
f.write("string"+str(counter)+", ")
linecounter = linecounter + 1
else:
f.write("string"+str(counter)+"\n\t\tdw ")
linecounter = 0
counter = counter + 1
f.write("\n\n; STRINGS\n\n")
counter = 0
for string in levelstrings:
f.write("string"+str(counter)+":\t db \'"+string+"\',0\n")
counter = counter + 1
f.write("\n\n; Pickable object types\n")
f.write("OBJECT_KEY_GREEN EQU 11\n")
f.write("OBJECT_KEY_BLUE EQU 12\n")
f.write("OBJECT_KEY_YELLOW EQU 13\n")
f.write("OBJECT_BREAD EQU 14\n")
f.write("OBJECT_MEAT EQU 15\n")
f.write("OBJECT_HEALTH EQU 16\n")
f.write("OBJECT_KEY_RED EQU 17\n")
f.write("OBJECT_KEY_WHITE EQU 18\n")
f.write("OBJECT_KEY_PURPLE EQU 19\n")
f.write("; Flag descriptions, will be used as parameters to functions\n")
f.write("FLAG_PATROL_NONE: EQU 0\n")
f.write("FLAG_PATROL_NOFALL: EQU 1 ; do not jump blindly on platforms\n")
f.write("FLAG_FIGHT_NONE: EQU 0\n")
f.write("FLAG_FIGHT_NOFALL: EQU 1 ; do not jump blindly when fighting\n")
f.write("; script action definitions\n")
f.write("ACTION_NONE: EQU 0 ; do nothing, no parameters\n")
f.write("ACTION_JOYSTICK: EQU 1 ; control position/animation with joystick, no parameters\n")
f.write("ACTION_PLAYER: EQU 2 ; player control\n")
f.write("ACTION_PATROL: EQU 3 ; move left-right in the area, waiting until the player is in its view area\n")
f.write("ACTION_FIGHT: EQU 4 ; Fight\n")
f.write("ACTION_SECONDARY: EQU 5 ; script for secondary entities\n")
f.write("ACTION_STRING: EQU 6 ; print a string in the notification area, useful for cutscenes. One parameter (db): string id\n")
f.write("ACTION_WAIT: EQU 7 ; do nothing for a number of game frames. One parameter (db): number of frames\n")
f.write("ACTION_MOVE: EQU 8 ; move for a number of game frames. Two parameter (db): direction, number of frames\n")
f.write("ACTION_WAIT_SWITCH_ON: EQU 9 ; wait for a switch to be changed from 0 to 1 or 2. One parameter (db): object id\n")
f.write("ACTION_WAIT_DEAD: EQU 9 ; wair for an enemy to be dead (its parameter is 1). One parameter (db): object id\n")
f.write("ACTION_WAIT_DESTROYED: EQU 9 ; wair for an object to be destroyed (its parameter is 1). One parameter (db): object id\n")
f.write("ACTION_WAIT_SWITCH_OFF: EQU 10 ; wait for a switch to be changed from 1/2 to 0. One parameter (db): object id\n")
f.write("ACTION_TOGGLE_SWITCH_ON: EQU 11 ; toggle a switch. It will change the switch from 1 to 2, and also update the tiles. One parameter (db): object id\n")
f.write("ACTION_TOGGLE_SWITCH_OFF: EQU 12 ; toggle a switch. It will change the switch from 2 to 0, and also update the tiles. One parameter (db): object id\n")
f.write("ACTION_OPEN_DOOR: EQU 13 ; open a door. It will change the object value from 0 to 1, then to 2 when done, and update the tiles. One parameter (db): object id\n")
f.write("ACTION_CLOSE_DOOR: EQU 14 ; close a door. It will change the object value from 2 to 1, then to 0 when done, and update the tiles. One parameter (db): object id \n")
f.write("ACTION_REMOVE_BOXES: EQU 15 ; remove a group of boxes. One parameter (db): object id\n")
f.write("ACTION_RETURN_SUBSCRIPT: EQU 16 ; return from a subscript\n")
f.write("ACTION_RESTART_SCRIPT: EQU 17 ; restart the script\n")
f.write("ACTION_TELEPORT: EQU 18 ; teleport. 4 params (db): x,y of screen to go, x,y position in screen (in pixels)\n")
f.write("ACTION_KILL_PLAYER: EQU 19 ; immediately kill the player\n")
f.write("ACTION_ENERGY: EQU 20 ; add/reduce energy on entity touching it. 1 param (db): amount of energy to add/reduce\n")
f.write("ACTION_SET_TIMER: EQU 21 ; set global timer, which will be decremented on every frame. 1 param(db): value to set\n")
f.write("ACTION_WAIT_TIMER_SET: EQU 22 ; wait until global timer is != 0\n")
f.write("ACTION_WAIT_TIMER_GONE: EQU 23 ; wait until global timer is == 0\n")
f.write("ACTION_WAIT_CONTACT: EQU 24 ; wait until the player touches the entity\n")
f.write("ACTION_MOVE_STILE: EQU 25 ; move stile. 5 params(db): x,y for stile, deltax, deltay per frame, number of frames.\n")
f.write("ACTION_CHANGE_OBJECT: EQU 26 ; switch to other object. 1 param(db): id of new object\n")
f.write("ACTION_WAIT_PICKUP: EQU 27 ; used for objects, wait until picked up\n")
f.write("ACTION_IDLE: EQU 28 ; set the state to idle\n")
f.write("ACTION_ADD_INVENTORY: EQU 29 ; add object to inventory. 1 param(db): id of object to add to inventory\n")
f.write("ACTION_REMOVE_JAR: EQU 30 ; remove a jar. One parameter (db): object id\n")
f.write("ACTION_REMOVE_DOOR: EQU 31 ; remove a door. One parameter (db): object id\n")
f.write("ACTION_ADD_ENERGY: EQU 32 ; add energy. One parameter (db): amount of energy to add\n")
f.write("ACTION_CHECK_OBJECT_IN_INVENTORY: EQU 33 ; wait until an object is in the inventory. One parameter (db): object id\n")
f.write("ACTION_REMOVE_OBJECT_FROM_INVENTORY: EQU 34 ; remove object from inventory. One parameter (db): object id\n")
f.write("ACTION_CHECKPOINT: EQU 35 ; set checkpoint. No parameters\n")
f.write("ACTION_FINISH_LEVEL: EQU 36 ; end level. One parameter (db): 0 -> get back to main menu. 1 -> Go to next level.\n")
f.write("ACTION_ADD_WEAPON: EQU 37 ; add weapon to inventory. One parameter (db): 1-> eclipse, 2-> axe, 3-> blade\n")
f.write("ACTION_WAIT_CROSS_DOOR: EQU 38 ; wait until player is crossing our door\n")
f.write("ACTION_CHANGE_STILE: EQU 39 ; change stile. 3 parameters (db): x, y in stile coords, and stile number (0-255)\n")
f.write("ACTION_CHANGE_HARDNESS: EQU 40 ; change hardness for stile. 3 parameters (db): x, y in stile coords, hardness value (0-3)\n")
f.write("ACTION_SET_OBJECT_STATE: EQU 41 ; set object state. 2 parameters (db): object id, state value (0: normal, 1: transitioning, 2: dead/changed, 3-255: other)\n")
f.write("ACTION_WAIT_OBJECT_STATE: EQU 42 ; wait until the object state has a specific value. 2 parameters (db): object id, state value\n")
f.write("ACTION_NOP: EQU 43 ; no-op action\n")
f.write("ACTION_WAIT_CONTACT_EXT: EQU 44 ; wait for contact with area. 4 parameters (db): upper-left X in chars, upper-left Y in chars, width, height in chars\n")
f.write("ACTION_TELEPORT_EXT: EQU 45 ; teleport without waiting for contact. 4 params (db): x,y of screen to go, x,y position in screen (in pixels)\n")
f.write("ACTION_TELEPORT_ENEMY: EQU 46 ; teleport enemy to a different location in this screen. 2 params (db): x, y (in pixels)\n")
f.write("ACTION_MOVE_OBJECT: EQU 47 ; move object in screen. 4 params (db): objid, deltax, deltay per frame, number of frames\n")
f.write("ACTION_WAIT_PICKUP_INVENTORY: EQU 48 ; used for objects, wait until picked up and make sure there is space in the inventory\n")
f.write("ACTION_FX: EQU 49 ; play an FX. 1 param (db): effect\n")
f.write("current_script_pointer: dw ")
counter = 0
linecounter = 0
for script in levelscripts:
if counter == len(levelscripts) - 1:
f.write("script"+str(counter))
elif linecounter < 10:
f.write("script"+str(counter)+", ")
linecounter = linecounter + 1
else:
f.write("script"+str(counter)+"\n\t\tdw ")
linecounter = 0
counter = counter + 1
f.write("\n\n; SCRIPTS\n\n")
counter = 0
for script in levelscripts:
f.write("script"+str(counter)+":\t db "+script+"\n")
counter = counter + 1
f.close()
# Write strings in English and scripts to file
f = file("export/map_strings_en.asm","w")
f.write("org $C000\n")
f.write("map_strings: dw string_list\nmap_scripts: dw current_script_pointer\n\n")
f.write(";LEVEL STRING POINTERS\n\n")
f.write("string_list: dw ")
counter = 0
linecounter = 0
for string in levelstrings_en:
if counter == len(levelstrings_en) - 1:
f.write("string"+str(counter))
elif linecounter < 10:
f.write("string"+str(counter)+", ")
linecounter = linecounter + 1
else:
f.write("string"+str(counter)+"\n\t\tdw ")
linecounter = 0
counter = counter + 1
f.write("\n\n; STRINGS\n\n")
counter = 0
for string in levelstrings_en:
f.write("string"+str(counter)+":\t db \""+string+"\",0\n")
counter = counter + 1
f.write("\n\n; Pickable object types\n")
f.write("OBJECT_KEY_GREEN EQU 11\n")
f.write("OBJECT_KEY_BLUE EQU 12\n")
f.write("OBJECT_KEY_YELLOW EQU 13\n")
f.write("OBJECT_BREAD EQU 14\n")
f.write("OBJECT_MEAT EQU 15\n")
f.write("OBJECT_HEALTH EQU 16\n")
f.write("OBJECT_KEY_RED EQU 17\n")
        f.write("OBJECT_KEY_WHITE EQU 18\n")
        f.write("OBJECT_KEY_PURPLE EQU 19\n")
        f.write("; Flag descriptions, will be used as parameters to functions\n")
        f.write("FLAG_PATROL_NONE: EQU 0\n")
        f.write("FLAG_PATROL_NOFALL: EQU 1 ; do not jump blindly on platforms\n")
        f.write("FLAG_FIGHT_NONE: EQU 0\n")
        f.write("FLAG_FIGHT_NOFALL: EQU 1 ; do not jump blindly when fighting\n")
        f.write("; script action definitions\n")
        f.write("ACTION_NONE: EQU 0 ; do nothing, no parameters\n")
        f.write("ACTION_JOYSTICK: EQU 1 ; control position/animation with joystick, no parameters\n")
        f.write("ACTION_PLAYER: EQU 2 ; player control\n")
        f.write("ACTION_PATROL: EQU 3 ; move left-right in the area, waiting until the player is in its view area\n")
        f.write("ACTION_FIGHT: EQU 4 ; Fight\n")
        f.write("ACTION_SECONDARY: EQU 5 ; script for secondary entities\n")
        f.write("ACTION_STRING: EQU 6 ; print a string in the notification area, useful for cutscenes. One parameter (db): string id\n")
        f.write("ACTION_WAIT: EQU 7 ; do nothing for a number of game frames. One parameter (db): number of frames\n")
        f.write("ACTION_MOVE: EQU 8 ; move for a number of game frames. Two parameters (db): direction, number of frames\n")
        f.write("ACTION_WAIT_SWITCH_ON: EQU 9 ; wait for a switch to be changed from 0 to 1 or 2. One parameter (db): object id\n")
        f.write("ACTION_WAIT_DEAD: EQU 9 ; wait for an enemy to be dead (its parameter is 1). One parameter (db): object id\n")
        f.write("ACTION_WAIT_DESTROYED: EQU 9 ; wait for an object to be destroyed (its parameter is 1). One parameter (db): object id\n")
        f.write("ACTION_WAIT_SWITCH_OFF: EQU 10 ; wait for a switch to be changed from 1/2 to 0. One parameter (db): object id\n")
        f.write("ACTION_TOGGLE_SWITCH_ON: EQU 11 ; toggle a switch. It will change the switch from 1 to 2, and also update the tiles. One parameter (db): object id\n")
        f.write("ACTION_TOGGLE_SWITCH_OFF: EQU 12 ; toggle a switch. It will change the switch from 2 to 0, and also update the tiles. One parameter (db): object id\n")
        f.write("ACTION_OPEN_DOOR: EQU 13 ; open a door. It will change the object value from 0 to 1, then to 2 when done, and update the tiles. One parameter (db): object id\n")
        f.write("ACTION_CLOSE_DOOR: EQU 14 ; close a door. It will change the object value from 2 to 1, then to 0 when done, and update the tiles. One parameter (db): object id\n")
        f.write("ACTION_REMOVE_BOXES: EQU 15 ; remove a group of boxes. One parameter (db): object id\n")
        f.write("ACTION_RETURN_SUBSCRIPT: EQU 16 ; return from a subscript\n")
        f.write("ACTION_RESTART_SCRIPT: EQU 17 ; restart the script\n")
        f.write("ACTION_TELEPORT: EQU 18 ; teleport. 4 params (db): x,y of screen to go, x,y position in screen (in pixels)\n")
        f.write("ACTION_KILL_PLAYER: EQU 19 ; immediately kill the player\n")
        f.write("ACTION_ENERGY: EQU 20 ; add/reduce energy on entity touching it. 1 param (db): amount of energy to add/reduce\n")
        f.write("ACTION_SET_TIMER: EQU 21 ; set global timer, which will be decremented on every frame. 1 param(db): value to set\n")
        f.write("ACTION_WAIT_TIMER_SET: EQU 22 ; wait until global timer is != 0\n")
        f.write("ACTION_WAIT_TIMER_GONE: EQU 23 ; wait until global timer is == 0\n")
        f.write("ACTION_WAIT_CONTACT: EQU 24 ; wait until the player touches the entity\n")
        f.write("ACTION_MOVE_STILE: EQU 25 ; move stile. 5 params(db): x,y for stile, deltax, deltay per frame, number of frames.\n")
        f.write("ACTION_CHANGE_OBJECT: EQU 26 ; switch to other object. 1 param(db): id of new object\n")
        f.write("ACTION_WAIT_PICKUP: EQU 27 ; used for objects, wait until picked up\n")
        f.write("ACTION_IDLE: EQU 28 ; set the state to idle\n")
        f.write("ACTION_ADD_INVENTORY: EQU 29 ; add object to inventory. 1 param(db): id of object to add to inventory\n")
        f.write("ACTION_REMOVE_JAR: EQU 30 ; remove a jar. One parameter (db): object id\n")
        f.write("ACTION_REMOVE_DOOR: EQU 31 ; remove a door. One parameter (db): object id\n")
        f.write("ACTION_ADD_ENERGY: EQU 32 ; add energy. One parameter (db): amount of energy to add\n")
        f.write("ACTION_CHECK_OBJECT_IN_INVENTORY: EQU 33 ; wait until an object is in the inventory. One parameter (db): object id\n")
        f.write("ACTION_REMOVE_OBJECT_FROM_INVENTORY: EQU 34 ; remove object from inventory. One parameter (db): object id\n")
        f.write("ACTION_CHECKPOINT: EQU 35 ; set checkpoint. No parameters\n")
        f.write("ACTION_FINISH_LEVEL: EQU 36 ; end level. One parameter (db): 0 -> get back to main menu. 1 -> Go to next level.\n")
        f.write("ACTION_ADD_WEAPON: EQU 37 ; add weapon to inventory. One parameter (db): 1-> eclipse, 2-> axe, 3-> blade\n")
        f.write("ACTION_WAIT_CROSS_DOOR: EQU 38 ; wait until player is crossing our door\n")
        f.write("ACTION_CHANGE_STILE: EQU 39 ; change stile. 3 parameters (db): x, y in stile coords, and stile number (0-255)\n")
        f.write("ACTION_CHANGE_HARDNESS: EQU 40 ; change hardness for stile. 3 parameters (db): x, y in stile coords, hardness value (0-3)\n")
        f.write("ACTION_SET_OBJECT_STATE: EQU 41 ; set object state. 2 parameters (db): object id, state value (0: normal, 1: transitioning, 2: dead/changed, 3-255: other)\n")
        f.write("ACTION_WAIT_OBJECT_STATE: EQU 42 ; wait until the object state has a specific value. 2 parameters (db): object id, state value\n")
        f.write("ACTION_NOP: EQU 43 ; no-op action\n")
        f.write("ACTION_WAIT_CONTACT_EXT: EQU 44 ; wait for contact with area. 4 parameters (db): upper-left X in chars, upper-left Y in chars, width, height in chars\n")
        f.write("ACTION_TELEPORT_EXT: EQU 45 ; teleport without waiting for contact. 4 params (db): x,y of screen to go, x,y position in screen (in pixels)\n")
        f.write("ACTION_TELEPORT_ENEMY: EQU 46 ; teleport enemy to a different location in this screen. 2 params (db): x, y (in pixels)\n")
        f.write("ACTION_MOVE_OBJECT: EQU 47 ; move object in screen. 4 params (db): objid, deltax, deltay per frame, number of frames\n")
        f.write("ACTION_WAIT_PICKUP_INVENTORY: EQU 48 ; used for objects, wait until picked up and make sure there is space in the inventory\n")
        f.write("ACTION_FX: EQU 49 ; play an FX. 1 param (db): effect\n")
        f.write("current_script_pointer: dw ")
        counter = 0
        linecounter = 0
        for script in levelscripts:
            if counter == len(levelscripts) - 1:
                f.write("script"+str(counter))
            elif linecounter < 10:
                f.write("script"+str(counter)+", ")
                linecounter = linecounter + 1
            else:
                f.write("script"+str(counter)+"\n\t\tdw ")
                linecounter = 0
            counter = counter + 1
        f.write("\n\n; SCRIPTS\n\n")
        counter = 0
        for script in levelscripts:
            f.write("script"+str(counter)+":\t db "+script+"\n")
            counter = counter + 1
        f.close()
        # Now print all screens
        for y in range(0,self.mymap.map.height/10):
            for x in range(0,self.mymap.map.width/16):
                filenam = "export/screen_"+str(y)+"_"+str(x)+".asm"
                scfile = file(filenam,"w")
                screen=IannaScreen(self.mymap.map.layers['Fondo'], self.mymap.map.layers['PrimerPlano'], self.mymap.objset, self.mymap.map.layers['Dureza'], self.mymap.map.width,self.mymap.map.height, x, y, properties=self.mymap.map.properties)
                screen.print_screen(scfile, levelscripts)
                scfile.close()
        # Write pointers to all compressed screens in the main file
        for y in range(0,self.mymap.map.height/10):
            for x in range(0,self.mymap.map.width/16):
                scnam = "screen_"+str(y)+"_"+str(x)
                mainfile.write(scnam+"_addr: DW "+scnam+"\n")
        # And write the includes as well
        for y in range(0,self.mymap.map.height/10):
            for x in range(0,self.mymap.map.width/16):
                scnam = "screen_"+str(y)+"_"+str(x)
                filenam_cmp= "screen_"+str(y)+"_"+str(x)+".cmp"
                mainfile.write(scnam+" INCBIN "+"\""+filenam_cmp+"\"\n")
        # And includes for strings and scripts
        mainfile.write("level_strings: INCBIN \"map_strings.cmp\"\n")
        mainfile.write("level_strings_en: INCBIN \"map_strings_en.cmp\"\n")
        mainfile.close()
        # Finally, create makefile
        makefile = file("export/makefile","w")
        makefile.write("all: map.bin map_stiles.SR5.plet1\n\n")
        makefile.write("clean:\n")
        makefile.write("\trm *.bin\n")
        makefile.write("\trm *.cmp\n")
        screen_list=""
        for y in range(0,self.mymap.map.height/10):
            for x in range(0,self.mymap.map.width/16):
                filenam = " screen_"+str(y)+"_"+str(x)+".cmp"
                screen_list = screen_list + filenam
        makefile.write("map.bin: map.asm map_strings.cmp map_strings_en.cmp"+screen_list+"\n")
        makefile.write("\tpasmo map.asm map.bin map.sym\n\n")
        makefile.write("map_stiles.SR5.plet1: map_stiles.SR5\n")
        makefile.write("\tpletter.exe 1 map_stiles.SR5\n\n")
        makefile.write("map_strings.cmp: map_strings.asm\n")
        makefile.write("\tpasmo map_strings.asm map_strings.bin\n")
        makefile.write("\tapack map_strings.bin map_strings.cmp\n\n")
        makefile.write("map_strings_en.cmp: map_strings_en.asm\n")
        makefile.write("\tpasmo map_strings_en.asm map_strings_en.bin\n")
        makefile.write("\tapack map_strings_en.bin map_strings_en.cmp\n\n")
        for y in range(0,self.mymap.map.height/10):
            for x in range(0,self.mymap.map.width/16):
                filenam = "screen_"+str(y)+"_"+str(x)
                makefile.write(filenam+".cmp: "+filenam+".asm\n")
                makefile.write("\tpasmo "+filenam+".asm "+filenam+".bin\n")
                makefile.write("\tapack "+filenam+".bin "+filenam+".cmp\n\n")
        makefile.close()
        sys.exit(0)
    def waitforkeyrelease(self,key):
        while True:
            events = pygame.event.get()
            for event in events:
                if event.type == pygame.locals.QUIT:
                    sys.exit(0)
                elif event.type == pygame.KEYUP:
                    if event.key == key:
                        return

    # Keys will be OPQA, SPACE for fire and CAPS SHIFT for action
    def read_keyboard(self):
        events = pygame.event.get()
        for event in events:
            if event.type == pygame.locals.QUIT:
                sys.exit(0)
            elif event.type == pygame.KEYDOWN:
                if event.key == pygame.K_o:
                    self.keyboard[0] = 1
                elif event.key == pygame.K_p:
                    self.keyboard[1] = 1
                elif event.key == pygame.K_q:
                    self.keyboard[2] = 1
                elif event.key == pygame.K_a:
                    self.keyboard[3] = 1
                elif event.key == pygame.K_SPACE:
                    self.keyboard[4] = 1
                elif event.key == pygame.K_RSHIFT:
                    self.keyboard[5] = 1
                elif event.key == pygame.K_ESCAPE:
                    self.ingame = False
                elif event.key == pygame.K_h:
                    self.pause_menu()
                elif event.key == pygame.K_F12:
                    self.toggle_fullscreen()
            elif event.type == pygame.KEYUP:
                if event.key == pygame.K_o:
                    self.keyboard[0] = 0
                elif event.key == pygame.K_p:
                    self.keyboard[1] = 0
                elif event.key == pygame.K_q:
                    self.keyboard[2] = 0
                elif event.key == pygame.K_a:
                    self.keyboard[3] = 0
                elif event.key == pygame.K_SPACE:
                    self.keyboard[4] = 0
                elif event.key == pygame.K_RSHIFT:
                    self.keyboard[5] = 0
        return
    def CheckGravities(self):
        self.game_entities[0].apply_gravity()
        for entity in [self.game_entities[1],self.game_entities[2]]:
            if entity:
                entity.apply_gravity()

    def ProcessState(self):
        for entity in [self.game_entities[0],self.game_entities[1],self.game_entities[2]]:
            if entity:
                if entity.keyboard:
                    entity.process_state(entity.keyboard)
                    entity.keyboard=None
                else:
                    entity.process_state([0,0,0,0,0,0])

    def DrawSprites(self):
        for entity in [self.game_entities[0],self.game_entities[1],self.game_entities[2]]:
            if entity:
                try:
                    self.buffer.blit(entity.current_anim.frames[entity.anim_pos],(entity.posx,entity.posy))
                except AttributeError:
                    print "WARNING: ", entity.current_anim, "is a list, not a SevenuP object. Check!"
                    print "Entities: ",entity,self.game_entities[0],self.game_entities[1],self.game_entities[2]
                    print "posx:",entity.posx,"posy:",entity.posy,"anim_pos:",entity.anim_pos
                if entity.extra_sprite:
                    if entity.state & 1: # looking right
                        self.buffer.blit(entity.extra_sprite,(entity.posx+24,entity.posy))
                    else:
                        self.buffer.blit(entity.extra_sprite,(entity.posx-24,entity.posy))

    def RunScripts(self):
        for entity in self.game_entities:
            if entity:
                try:
                    if entity.state != constants.STATE_DEAD and entity.script:
                        entity.script.run_script(entity)
                except AttributeError:
                    if entity.script:
                        entity.script.run_script(entity)
    def CheckMapLimits(self):
        levelscripts=["ACTION_NONE","ACTION_PLAYER","ACTION_SECONDARY"]
        levelstrings=[]
        levelstrings_en=[]
        # Scripts for pickable objects: should always be in the same place!!!
        for kk, script in IannaScript.scripts_per_pickable_object.iteritems():
            if script not in levelscripts:
                levelscripts.append(script)
        # Scripts for enemy attacks: should always be in the same place!!!
        for enemy in ["OBJECT_ENEMY_SKELETON","OBJECT_ENEMY_ORC","OBJECT_ENEMY_MUMMY",
                      "OBJECT_ENEMY_TROLL","OBJECT_ENEMY_ROCK","OBJECT_ENEMY_KNIGHT",
                      "OBJECT_ENEMY_GOLEM","OBJECT_ENEMY_OGRE","OBJECT_ENEMY_MINOTAUR",
                      "OBJECT_ENEMY_DEMON","OBJECT_ENEMY_DALGURAK"]:
            for script in IannaCharacter.enemy_attack_patterns[enemy]:
                if script not in levelscripts:
                    levelscripts.append(script)
        # Scripts for player in specific screens
        for y in range(0,self.mymap.map.height/10):
            for x in range(0,self.mymap.map.width/16):
                try:
                    scriptid = "script-"+str(x)+"_"+str(y)
                    script = self.mymap.map.properties[scriptid]
                    if script not in levelscripts:
                        levelscripts.append(script)
                except KeyError:
                    pass
        # Create the list of all scripts in the level
        for y in range(0,self.mymap.map.height/10):
            for x in range(0,self.mymap.map.width/16):
                screen=IannaScreen(self.mymap.map.layers['Fondo'], self.mymap.map.layers['PrimerPlano'], self.mymap.objset, self.mymap.map.layers['Dureza'], self.mymap.map.width,self.mymap.map.height, x, y, properties=self.mymap.map.properties)
                screenscripts = screen.get_scripts_from_screen()
                for script in screenscripts:
                    if script not in levelscripts:
                        levelscripts.append(script)
        print "We have",len(levelscripts),"different scripts in the level, out of a maximum of 255"
        bytecount=0
        for script in levelscripts:
            bytecount = bytecount + len(script.split(','))
        bytecount = bytecount + 2*len(levelscripts)
        if bytecount > 1023:
            print "WARNING!!",
        print "Scripts use",bytecount,"bytes, out of a maximum of 1024"
        # Create the list of all strings in the level
        counter = 0
        while True:
            stringid = "string-"+str(counter)
            try:
                string = self.mymap.map.properties[stringid]
                levelstrings.append(string)
                # Append the English string as well
                string = self.mymap.map.properties[stringid+"-en"]
                levelstrings_en.append(string)
                counter = counter + 1
            except KeyError:
                break
        print "We have",len(levelstrings),"different strings in the level"
        bytecount=0
        for string in levelstrings:
            bytecount = bytecount + len(string)
        bytecount = bytecount + 3*len(levelstrings)
        if bytecount > 1023:
            print "WARNING!!",
        print "Strings use",bytecount,"bytes, out of a maximum of 1024"
        bytecount=0
        for string in levelstrings_en:
            bytecount = bytecount + len(string)
        bytecount = bytecount + 3*len(levelstrings_en)
        if bytecount > 1023:
            print "WARNING!!",
        print "Strings in English use",bytecount,"bytes, out of a maximum of 1024"
    # For now it is not the most beautiful menu ever... but it works :)
    def MainMenu(self):
        """Initial menu, displays list of maps to load.

        OUTPUT:
            - Filename of map to be loaded
        """
        self.screen.fill((0, 0, 0))
        listoftmx=[]
        for dirname, dirnames, filenames in os.walk('./maps/'):
            for filename in filenames:
                if '.tmx' in filename:
                    listoftmx.append(os.path.join(dirname, filename))
        myfont = pygame.font.SysFont("console",32)
        self.screen.blit(myfont.render("Select map with Q/A keys and SPACE:",False,(0,255,255)),(0,0))
        selected=0
        done=False
        while not done:
            y=32
            i=0
            for filename in listoftmx:
                if i == selected:
                    color=(255,255,255)
                else:
                    color=(0,255,255)
                self.screen.blit(myfont.render(filename,False,color),(64,y))
                y = y+32
                i = i + 1
            pygame.display.flip()
            self.read_keyboard()
            if self.keyboard[2] == 1: # UP
                if selected > 0:
                    selected = selected - 1
                while self.keyboard[2] == 1:
                    self.read_keyboard()
            elif self.keyboard[3] == 1: # DOWN
                if selected < len(listoftmx)-1:
                    selected = selected + 1
                while self.keyboard[3] == 1:
                    self.read_keyboard()
            elif self.keyboard[4] == 1: # SPACE
                while self.keyboard[4] == 1:
                    self.read_keyboard()
                done = True
        return listoftmx[selected]
    def mainloop(self):
        self.ingame = True
        animate_stiles=0
        while self.ingame:
            self.read_keyboard()
            # call AnimateSTiles
            animate_stiles = animate_stiles + 1
            if animate_stiles & 1:
                self.mymap.anim_stiles()
            self.entity_player.keyboard=self.keyboard
            self.RunScripts()
            self.ProcessState()
            self.CheckGravities()
            # Redraw screen: draw sprites
            self.buffer.set_clip(pygame.Rect(0,0,256,160)) # set clipping area for game, should then set clipping for score area
            self.mymap.draw_screen(self.buffer)
            self.DrawSprites()
            self.mymap.draw_foreground(self.buffer)
            self.scorearea.draw()
            pygame.transform.scale(self.buffer,(256*3,192*3),self.screen)
            pygame.display.flip()
            self.fpsClock.tick(10) # run at 10 fps
            global_timer.tick()

    def Play(self):
        while True:
            gc.collect() # Force garbage collection on every iteration
            self.mapfile = self.MainMenu()
            self.mymap = Map(self.mapfile)
            self.CheckMapLimits()
            self.game_entities = [None, None, None, None, None, None, None, None]
            # self.player_weapons = [True,False,False,False] # True if weapon is active, False if not
            self.player_weapons = [True, True, True, True] # True if weapon is active, False if not
            self.scorearea = IannaScore(self.buffer,self.screen,self.game_entities)
            self.entity_player = IannaCharacter("barbaro", self.mymap, self.game_entities, self.scorearea, self.player_weapons, player=True)
            initialcoords = self.mymap.map.properties["InitialCoords"].split(',')
            self.entity_player.posx=int(initialcoords[0])
            self.entity_player.posy=int(initialcoords[1])
            self.entity_player.energy=self.entity_player.get_entity_max_energy() # FIXME this is cheating!!
            self.game_entities[0] = self.entity_player
            self.mymap.load_tile_table(os.path.dirname(self.mapfile),16, 16)
            self.mymap.set_screen(self.mymap.current_x,self.mymap.current_y, self.game_entities, self.scorearea)
            self.mainloop()


if __name__ == '__main__':
    game = IannaGame()
    game.Play()
| 43.845405 | 232 | 0.709964 | 6,315 | 40,557 | 4.441172 | 0.089311 | 0.035513 | 0.035442 | 0.048207 | 0.781324 | 0.758147 | 0.736683 | 0.7175 | 0.710048 | 0.679526 | 0 | 0.022024 | 0.165939 | 40,557 | 924 | 233 | 43.892857 | 0.807077 | 0.040881 | 0 | 0.654242 | 0 | 0.077121 | 0.369933 | 0.043472 | 0 | 0 | 0 | 0.001082 | 0 | 0 | null | null | 0.002571 | 0.01671 | null | null | 0.028278 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1d01c50bed3e5afb7bc4a42f395c9759880c6fc4 | 105 | py | Python | model/components/base_component.py | MattJTrueblood/Allies_RL_Prototype | 1c7c4360156d0dc5ff53c49401d25026761862df | [
"Unlicense"
] | 1 | 2018-11-19T19:51:49.000Z | 2018-11-19T19:51:49.000Z | model/components/base_component.py | MattJTrueblood/Allies_RL_Prototype | 1c7c4360156d0dc5ff53c49401d25026761862df | [
"Unlicense"
] | null | null | null | model/components/base_component.py | MattJTrueblood/Allies_RL_Prototype | 1c7c4360156d0dc5ff53c49401d25026761862df | [
"Unlicense"
] | null | null | null |
class BaseComponent:
def __init__(self, parent_entity):
self.parent_entity = parent_entity
| 17.5 | 42 | 0.72381 | 12 | 105 | 5.75 | 0.583333 | 0.521739 | 0.463768 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.209524 | 105 | 5 | 43 | 21 | 0.831325 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
1d03e155b68e1756123681e66fc0484988077bc2 | 8,510 | py | Python | mstrio/api/cubes.py | r3dunlop/mstrio-py | 7754c4c35d711093a902b502fb16c98b6e317f4c | [
"Apache-2.0"
] | null | null | null | mstrio/api/cubes.py | r3dunlop/mstrio-py | 7754c4c35d711093a902b502fb16c98b6e317f4c | [
"Apache-2.0"
] | null | null | null | mstrio/api/cubes.py | r3dunlop/mstrio-py | 7754c4c35d711093a902b502fb16c98b6e317f4c | [
"Apache-2.0"
] | null | null | null | import requests
def cube(connection, cube_id, verbose=False):
"""
Get the definition of a specific cube, including attributes and metrics. The cube can be either an Intelligent Cube
or a Direct Data Access (DDA)/MDX cube. The in-memory cube definition provides information about all available
objects without actually running any data query/report. The results can be used by other requests to help filter
large datasets and retrieve values dynamically, helping with performance and scalability.
Args:
connection: MicroStrategy REST API connection object.
        cube_id (str): Unique ID of the cube you wish to extract information from.
verbose (bool): Verbosity of request response; defaults to False.
Returns:
Complete HTTP response object
"""
response = requests.get(url=connection.base_url + '/cubes/' + cube_id,
headers={'X-MSTR-AuthToken': connection.auth_token,
'X-MSTR-ProjectID': connection.project_id},
cookies=connection.cookies,
params={'id': cube_id},
verify=connection.ssl_verify)
if verbose:
print(response.url)
return response
def cube_info(connection, cube_id, verbose=False):
"""
Get information for specific cubes in a specific project. The cubes can be either Intelligent cubes or Direct
Data Access (DDA)/MDX cubes. This request returns the cube name, ID, size, status, path, last modification date,
and owner name and ID.
Args:
connection: MicroStrategy REST API connection object.
cube_id (str): Unique ID of the cube you wish to extract information from.
verbose (bool): Verbosity of request response; defaults to False.
Returns:
Complete HTTP response object.
"""
response = requests.get(url=connection.base_url + '/cubes/?id=' + cube_id,
headers={'X-MSTR-AuthToken': connection.auth_token,
'X-MSTR-ProjectID': connection.project_id},
cookies=connection.cookies,
verify=connection.ssl_verify)
if verbose:
print(response.url)
return response
def cube_single_attribute_elements(connection, cube_id, attribute_id, offset=0, limit=25000, verbose=False):
"""
Get elements of a specific attribute of a specific cube.
Args:
connection: MicroStrategy REST API connection object.
cube_id (str): Unique ID of the cube you wish to extract information from.
attribute_id (str): Unique ID of the attribute in the cube.
verbose (bool): Verbosity of request response; defaults to False.
Returns:
Complete HTTP response object.
"""
response = requests.get(url=connection.base_url + '/cubes/' + cube_id + '/attributes/' + attribute_id + '/elements',
headers={'X-MSTR-AuthToken': connection.auth_token,
'X-MSTR-ProjectID': connection.project_id},
cookies=connection.cookies,
params={'offset': offset,
'limit': limit},
verify=connection.ssl_verify)
if verbose:
print(response.url)
return response
def cube_instance(connection, cube_id, body={}, offset=0, limit=1000, verbose=False):
"""
Create a new instance of a specific cube. This in-memory instance can be used by other requests.
Args:
connection: MicroStrategy REST API connection object.
cube_id (str): Unique ID of the cube you wish to extract information from.
offset (int, optional): Starting point within the collection of returned results. Default is 0.
limit (int, optional): Used to control data extract behavior on datasets which have a large number of rows.
The default is 1000. As an example, if the dataset has 50,000 rows, this function will incrementally
extract all 50,000 rows in 1,000 row chunks. Depending on system resources, using a higher limit
setting (e.g. 10,000) may reduce the total time required to extract the entire dataset.
verbose (bool, optional): Verbosity of request response; defaults to False.
Returns:
Complete HTTP response object.
"""
response = requests.post(url=connection.base_url + '/cubes/' + cube_id + '/instances',
headers={'X-MSTR-AuthToken': connection.auth_token,
'X-MSTR-ProjectID': connection.project_id},
json=body,
cookies=connection.cookies,
params={'offset': offset,
'limit': limit},
verify=connection.ssl_verify)
if verbose:
print(response.url)
return response
def cube_instance_id(connection, cube_id, instance_id, offset=0, limit=1000, verbose=False):
"""
Get the results of a previously created instance for a specific cube, using the in-memory instance created by cube_instance().
Args:
connection: MicroStrategy REST API connection object.
cube_id (str): Unique ID of the cube you wish to extract information from.
instance_id (str): Unique ID of the in-memory instance of a published cube.
offset (int, optional): Starting point within the collection of returned results. Default is 0.
limit (int, optional): Used to control data extract behavior on datasets which have a large number of rows.
The default is 1000. As an example, if the dataset has 50,000 rows, this function will incrementally
extract all 50,000 rows in 1,000 row chunks. Depending on system resources, using a higher limit
setting (e.g. 10,000) may reduce the total time required to extract the entire dataset.
verbose (bool, optional): Verbosity of request response; defaults to False.
Returns:
Complete HTTP response object.
"""
response = requests.get(url=connection.base_url + '/cubes/' + cube_id + '/instances/' + instance_id,
headers={'X-MSTR-AuthToken': connection.auth_token,
'X-MSTR-ProjectID': connection.project_id},
cookies=connection.cookies,
params={'offset': offset,
'limit': limit},
verify=connection.ssl_verify)
if verbose:
print(response.url)
return response
def publish(connection, cube_id, verbose=False):
"""
Publish a specific cube in a specific project.
Args:
connection: MicroStrategy REST API connection object.
cube_id (str): Unique ID of the cube you wish to publish.
verbose (bool): Verbosity of request response; defaults to False.
Returns:
Complete HTTP response object.
"""
response = requests.post(url=connection.base_url + '/cubes/' + cube_id,
headers={'X-MSTR-AuthToken': connection.auth_token,
'X-MSTR-ProjectID': connection.project_id},
cookies=connection.cookies,
verify=connection.ssl_verify)
if verbose:
print(response.url)
return response
def status(connection, cube_id, verbose=False):
"""
Get the status of a specific cube in a specific project. The status is returned in
HEADER X-MSTR-CubeStatus with a value from EnumDSSCubeStates, which is a bit vector.
Args:
connection: MicroStrategy REST API connection object.
cube_id (str): Unique ID of the cube you wish to extract information from.
verbose (bool): Verbosity of request response; defaults to False.
Returns:
Complete HTTP response object.
"""
response = requests.head(url=connection.base_url + '/cubes/' + cube_id,
headers={'X-MSTR-AuthToken': connection.auth_token,
'X-MSTR-ProjectID': connection.project_id},
cookies=connection.cookies,
verify=connection.ssl_verify)
if verbose:
print(response.url)
return response
| 45.752688 | 130 | 0.618684 | 998 | 8,510 | 5.211423 | 0.178357 | 0.02538 | 0.01692 | 0.019996 | 0.792732 | 0.774274 | 0.741396 | 0.716401 | 0.71294 | 0.702173 | 0 | 0.010847 | 0.306698 | 8,510 | 185 | 131 | 46 | 0.870678 | 0.491304 | 0 | 0.763889 | 0 | 0 | 0.089079 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.097222 | false | 0 | 0.013889 | 0 | 0.208333 | 0.097222 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
df1fb6d47b126aa92c2af55eb49af994323127f1 | 42 | py | Python | Ezalt/__main__.py | LucasValentimB/Ezalt | fb4f17f61eff83da42b05c4f3a02334e876e0ef6 | [
"MIT"
] | null | null | null | Ezalt/__main__.py | LucasValentimB/Ezalt | fb4f17f61eff83da42b05c4f3a02334e876e0ef6 | [
"MIT"
] | null | null | null | Ezalt/__main__.py | LucasValentimB/Ezalt | fb4f17f61eff83da42b05c4f3a02334e876e0ef6 | [
"MIT"
] | null | null | null | from Ezalt import Ezalt
Ezalt.Ezalt()
| 10.5 | 24 | 0.714286 | 6 | 42 | 5 | 0.5 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.214286 | 42 | 3 | 25 | 14 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
df94a5e4878251a704754e7eccfac2c81e036202 | 32 | py | Python | main.py | zhiqiu1/reversi | 7e6cf9f19d24289d9b873f5b8b2e8d4e6c82fb3e | [
"MIT"
] | null | null | null | main.py | zhiqiu1/reversi | 7e6cf9f19d24289d9b873f5b8b2e8d4e6c82fb3e | [
"MIT"
] | 1 | 2021-01-28T08:40:55.000Z | 2021-01-28T08:41:04.000Z | main.py | zhiqiu1/reversi | 7e6cf9f19d24289d9b873f5b8b2e8d4e6c82fb3e | [
"MIT"
] | null | null | null | import game_ui
game_ui.UI()
| 8 | 15 | 0.6875 | 6 | 32 | 3.333333 | 0.5 | 0.6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.21875 | 32 | 3 | 16 | 10.666667 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
8000bbb503dcc0e30622761be7649f2f9bf18373 | 37 | py | Python | src/web/results/__init__.py | alex-dya/security_scanner | 7aeb6af863ccdbf6c066d52446318aaf898afe7b | [
"MIT"
] | null | null | null | src/web/results/__init__.py | alex-dya/security_scanner | 7aeb6af863ccdbf6c066d52446318aaf898afe7b | [
"MIT"
] | 7 | 2021-05-28T14:31:27.000Z | 2022-03-12T00:57:40.000Z | src/web/results/__init__.py | alex-dya/security_scanner | 7aeb6af863ccdbf6c066d52446318aaf898afe7b | [
"MIT"
] | 1 | 2021-05-15T12:18:49.000Z | 2021-05-15T12:18:49.000Z | from . import routes, exports, forms
| 18.5 | 36 | 0.756757 | 5 | 37 | 5.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.162162 | 37 | 1 | 37 | 37 | 0.903226 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8013c521f04a4ab2ff5a0bf341dcb6f5338627c1 | 42 | py | Python | sagenet/DHH_data/__init__.py | MarioniLab/sagenet | d18b381764c23c6c617b6fd520b5e3f371186228 | [
"MIT"
] | 9 | 2021-12-20T14:28:10.000Z | 2022-03-18T06:44:07.000Z | sagenet/DHH_data/__init__.py | MarioniLab/sagenet | d18b381764c23c6c617b6fd520b5e3f371186228 | [
"MIT"
] | null | null | null | sagenet/DHH_data/__init__.py | MarioniLab/sagenet | d18b381764c23c6c617b6fd520b5e3f371186228 | [
"MIT"
] | 1 | 2022-01-29T04:38:54.000Z | 2022-01-29T04:38:54.000Z | from sagenet.DHH_data._DHH_data import *
| 21 | 41 | 0.809524 | 7 | 42 | 4.428571 | 0.714286 | 0.451613 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.119048 | 42 | 1 | 42 | 42 | 0.837838 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
801476c435ab54ee6d034fd2cc66aad43cf542e8 | 16,429 | py | Python | models/tied_model_triple_encoder.py | rajat95/Deep-Deghosting-HDR | d56c213912083df0323f26e594ccc5ed7fa668ec | [
"MIT"
] | 33 | 2019-06-13T09:25:57.000Z | 2022-02-08T09:26:28.000Z | models/tied_model_triple_encoder.py | banananabear/Deep-Deghosting-HDR | d56c213912083df0323f26e594ccc5ed7fa668ec | [
"MIT"
] | 11 | 2019-11-01T08:31:24.000Z | 2021-12-23T03:05:09.000Z | models/tied_model_triple_encoder.py | banananabear/Deep-Deghosting-HDR | d56c213912083df0323f26e594ccc5ed7fa668ec | [
"MIT"
] | 10 | 2019-07-04T06:39:04.000Z | 2022-03-29T03:19:22.000Z | import numpy as np
import tensorflow as tf
layers = tf.layers
leaky_relu = tf.nn.leaky_relu


def deepfuse_triple_3encoder(inp1, inp2, inp3):
    layers = tf.layers
    leaky_relu = tf.nn.leaky_relu
    with tf.variable_scope('DeepFuse'):
        neg_prepooled_feats = [[], []]
        pos_prepooled_feats = [[], []]
        num_downsample_layers = 4
        pos_branch_enc = []
        neg_branch_enc = []
        global_feats_pos = []
        global_feats_neg = []
        ref_feats = []
        num_feats = [32, 64, 128, 256]
        for inp, reuse1, idx in zip([inp1, inp2], [None, True], [0, 1]):
            conv = inp
            for i in range(num_downsample_layers):
                conv = layers.conv2d(conv, num_feats[i], kernel_size=3, strides=1, padding='SAME', activation=leaky_relu,
                                     kernel_initializer=tf.contrib.layers.variance_scaling_initializer(), name='neg_enc_conv' + str(i), reuse=reuse1)
                neg_prepooled_feats[idx].append(conv)
                if inp is inp2:  # identity check; '==' on TF1 tensors relies on default object identity anyway
                    global_feats_neg.append(conv)
                conv = layers.conv2d(conv, num_feats[i], kernel_size=3, strides=2, padding='SAME', activation=leaky_relu,
                                     kernel_initializer=tf.contrib.layers.variance_scaling_initializer(), name='neg_downsample' + str(i), reuse=reuse1)
            neg_branch_enc.append(conv)
        for inp, reuse1, idx in zip([inp2, inp3], [None, True], [0, 1]):
            conv = inp
            for i in range(num_downsample_layers):
                conv = layers.conv2d(conv, num_feats[i], kernel_size=3, strides=1, padding='SAME', activation=leaky_relu,
                                     kernel_initializer=tf.contrib.layers.variance_scaling_initializer(), name='pos_enc_conv' + str(i), reuse=reuse1)
                pos_prepooled_feats[idx].append(conv)
                if inp is inp2:
                    global_feats_pos.append(conv)
                conv = layers.conv2d(conv, num_feats[i], kernel_size=3, strides=2, padding='SAME', activation=leaky_relu,
                                     kernel_initializer=tf.contrib.layers.variance_scaling_initializer(), name='pos_downsample' + str(i), reuse=reuse1)
            pos_branch_enc.append(conv)
        conv = inp2
        for i in range(num_downsample_layers):
            conv = layers.conv2d(conv, num_feats[i], kernel_size=3, strides=1, padding='SAME', activation=leaky_relu,
                                 kernel_initializer=tf.contrib.layers.variance_scaling_initializer(), name='ref_enc_conv' + str(i))
            ref_feats.append(conv)
            conv = layers.conv2d(conv, num_feats[i], kernel_size=3, strides=2, padding='SAME', activation=leaky_relu,
                                 kernel_initializer=tf.contrib.layers.variance_scaling_initializer(), name='ref_downsample' + str(i))
        ref_branch_enc = conv
        # Symmetric pooling of the two exposures at every encoder level: element-wise max ...
        pos_pooled_feats1 = []
        neg_pooled_feats1 = []
        pos_pooled_feats2 = []
        neg_pooled_feats2 = []
        for idx in range(num_downsample_layers):
            neg_pooled_feats1.append(tf.maximum(neg_prepooled_feats[0][idx], neg_prepooled_feats[1][idx]))
            pos_pooled_feats1.append(tf.maximum(pos_prepooled_feats[0][idx], pos_prepooled_feats[1][idx]))
        neg_merge1 = tf.maximum(neg_branch_enc[0], neg_branch_enc[1])
        pos_merge1 = tf.maximum(pos_branch_enc[0], pos_branch_enc[1])
        # ... and element-wise mean.
        for idx in range(num_downsample_layers):
            neg_pooled_feats2.append((neg_prepooled_feats[0][idx] + neg_prepooled_feats[1][idx]) / 2.0)
            pos_pooled_feats2.append((pos_prepooled_feats[0][idx] + pos_prepooled_feats[1][idx]) / 2.0)
        neg_merge2 = (neg_branch_enc[0] + neg_branch_enc[1]) / 2.0
        pos_merge2 = (pos_branch_enc[0] + pos_branch_enc[1]) / 2.0
        merge = tf.concat([neg_merge1, neg_merge2, pos_merge1, pos_merge2, ref_branch_enc], axis=-1)
        conv = layers.conv2d(merge, 256, kernel_size=3, strides=1, padding='SAME', activation=leaky_relu,
                             kernel_initializer=tf.contrib.layers.variance_scaling_initializer(), name='dec_conv1')
        conv = tf.keras.layers.UpSampling2D((2, 2))(conv)
        merge = tf.concat([conv, neg_pooled_feats1[-1], neg_pooled_feats2[-1], pos_pooled_feats1[-1], pos_pooled_feats2[-1], ref_feats[-1]], axis=-1)
        conv = layers.conv2d(merge, 128, kernel_size=3, strides=1, padding='SAME', activation=leaky_relu,
                             kernel_initializer=tf.contrib.layers.variance_scaling_initializer(), name='dec_conv2')
        conv = tf.keras.layers.UpSampling2D((2, 2))(conv)
        merge = tf.concat([conv, neg_pooled_feats1[-2], neg_pooled_feats2[-2], pos_pooled_feats1[-2], pos_pooled_feats2[-2], ref_feats[-2]], axis=-1)
        conv = layers.conv2d(merge, 64, kernel_size=3, strides=1, padding='SAME', activation=leaky_relu,
                             kernel_initializer=tf.contrib.layers.variance_scaling_initializer(), name='dec_conv3')
        conv = tf.keras.layers.UpSampling2D((2, 2))(conv)
        merge = tf.concat([conv, neg_pooled_feats1[-3], neg_pooled_feats2[-3], pos_pooled_feats1[-3], pos_pooled_feats2[-3], ref_feats[-3]], axis=-1)
        conv = layers.conv2d(merge, 32, kernel_size=3, strides=1, padding='SAME', activation=leaky_relu,
                             kernel_initializer=tf.contrib.layers.variance_scaling_initializer(), name='dec_conv4')
        conv = tf.keras.layers.UpSampling2D((2, 2))(conv)
        merge = tf.concat([conv, neg_pooled_feats1[-4], neg_pooled_feats2[-4], pos_pooled_feats1[-4], pos_pooled_feats2[-4], ref_feats[-4]], axis=-1)
        conv = layers.conv2d(merge, 3, kernel_size=3, strides=1, padding='SAME', activation=tf.nn.sigmoid,
                             kernel_initializer=tf.contrib.layers.variance_scaling_initializer(), name='fusion_output')
    return tf.clip_by_value(conv, 0.0, 1.0)
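A small NumPy sketch (my illustration, not part of the original file) of the decoder pattern above: `tf.keras.layers.UpSampling2D((2, 2))` nearest-neighbour-repeats each spatial cell, restoring the resolution of the matching encoder level so the upsampled tensor can be concatenated with the skip features along the channel axis. The shapes below are hypothetical stand-ins for one decoder step.

```python
import numpy as np

def upsample_2x(x):
    """Nearest-neighbour 2x upsampling over (N, H, W, C),
    mimicking tf.keras.layers.UpSampling2D((2, 2))."""
    return np.repeat(np.repeat(x, 2, axis=1), 2, axis=2)

decoder = np.ones((1, 4, 4, 256))  # coarse decoder activation
skip = np.ones((1, 8, 8, 32))      # encoder feature at the next-finer level

up = upsample_2x(decoder)
assert up.shape == (1, 8, 8, 256)

# Channel-wise concat, as in the tf.concat(..., axis=-1) calls above.
merged = np.concatenate([up, skip], axis=-1)
assert merged.shape == (1, 8, 8, 288)
```

Doubling the spatial dimensions before each concat is what lets the skip connections line up with the four stride-2 downsampling steps of the encoders.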
# Assumes all images in a batch belong to the same sequence; intended for validation only.
# The positive and negative sequences may have different lengths.
def deepfuse_gen_3encoder(neg_inp, pos_inp, ref):
    layers = tf.layers
    leaky_relu = tf.nn.leaky_relu
    with tf.variable_scope('DeepFuse', reuse=tf.AUTO_REUSE):
        neg_feats = []
        pos_feats = []
        ref_feats = []
        num_downsample_layers = 4
        pos_branch_enc = []
        neg_branch_enc = []
        num_feats = [32, 64, 128, 256]
        feats1 = layers.conv2d(neg_inp, 32, kernel_size=3, strides=1, padding='SAME', activation=leaky_relu,
                               kernel_initializer=tf.contrib.layers.variance_scaling_initializer(), name='neg_enc_conv0')
        neg_feats.append(feats1)
        feats1 = layers.conv2d(feats1, 32, kernel_size=3, strides=2, padding='SAME', activation=leaky_relu,
                               kernel_initializer=tf.contrib.layers.variance_scaling_initializer(), name='neg_downsample0')
        feats2 = layers.conv2d(feats1, 64, kernel_size=3, strides=1, padding='SAME', activation=leaky_relu,
                               kernel_initializer=tf.contrib.layers.variance_scaling_initializer(), name='neg_enc_conv1')
        neg_feats.append(feats2)
        feats2 = layers.conv2d(feats2, 64, kernel_size=3, strides=2, padding='SAME', activation=leaky_relu,
                               kernel_initializer=tf.contrib.layers.variance_scaling_initializer(), name='neg_downsample1')
        feats3 = layers.conv2d(feats2, 128, kernel_size=3, strides=1, padding='SAME', activation=leaky_relu,
                               kernel_initializer=tf.contrib.layers.variance_scaling_initializer(), name='neg_enc_conv2')
        neg_feats.append(feats3)
        feats3 = layers.conv2d(feats3, 128, kernel_size=3, strides=2, padding='SAME', activation=leaky_relu,
                               kernel_initializer=tf.contrib.layers.variance_scaling_initializer(), name='neg_downsample2')
        feats4 = layers.conv2d(feats3, 256, kernel_size=3, strides=1, padding='SAME', activation=leaky_relu,
                               kernel_initializer=tf.contrib.layers.variance_scaling_initializer(), name='neg_enc_conv3')
        neg_feats.append(feats4)
        feats4 = layers.conv2d(feats4, 256, kernel_size=3, strides=2, padding='SAME', activation=leaky_relu,
                               kernel_initializer=tf.contrib.layers.variance_scaling_initializer(), name='neg_downsample3')
        neg_encoding = feats4
        feats1 = layers.conv2d(pos_inp, 32, kernel_size=3, strides=1, padding='SAME', activation=leaky_relu,
                               kernel_initializer=tf.contrib.layers.variance_scaling_initializer(), name='pos_enc_conv0')
        pos_feats.append(feats1)
        feats1 = layers.conv2d(feats1, 32, kernel_size=3, strides=2, padding='SAME', activation=leaky_relu,
                               kernel_initializer=tf.contrib.layers.variance_scaling_initializer(), name='pos_downsample0')
        feats2 = layers.conv2d(feats1, 64, kernel_size=3, strides=1, padding='SAME', activation=leaky_relu,
                               kernel_initializer=tf.contrib.layers.variance_scaling_initializer(), name='pos_enc_conv1')
        pos_feats.append(feats2)
        feats2 = layers.conv2d(feats2, 64, kernel_size=3, strides=2, padding='SAME', activation=leaky_relu,
                               kernel_initializer=tf.contrib.layers.variance_scaling_initializer(), name='pos_downsample1')
        feats3 = layers.conv2d(feats2, 128, kernel_size=3, strides=1, padding='SAME', activation=leaky_relu,
                               kernel_initializer=tf.contrib.layers.variance_scaling_initializer(), name='pos_enc_conv2')
        pos_feats.append(feats3)
        feats3 = layers.conv2d(feats3, 128, kernel_size=3, strides=2, padding='SAME', activation=leaky_relu,
                               kernel_initializer=tf.contrib.layers.variance_scaling_initializer(), name='pos_downsample2')
        feats4 = layers.conv2d(feats3, 256, kernel_size=3, strides=1, padding='SAME', activation=leaky_relu,
                               kernel_initializer=tf.contrib.layers.variance_scaling_initializer(), name='pos_enc_conv3')
        pos_feats.append(feats4)
        feats4 = layers.conv2d(feats4, 256, kernel_size=3, strides=2, padding='SAME', activation=leaky_relu,
                               kernel_initializer=tf.contrib.layers.variance_scaling_initializer(), name='pos_downsample3')
        pos_encoding = feats4
        feats1 = layers.conv2d(ref, 32, kernel_size=3, strides=1, padding='SAME', activation=leaky_relu,
                               kernel_initializer=tf.contrib.layers.variance_scaling_initializer(), name='ref_enc_conv0')
        ref_feats.append(feats1)
        feats1 = layers.conv2d(feats1, 32, kernel_size=3, strides=2, padding='SAME', activation=leaky_relu,
                               kernel_initializer=tf.contrib.layers.variance_scaling_initializer(), name='ref_downsample0')
        feats2 = layers.conv2d(feats1, 64, kernel_size=3, strides=1, padding='SAME', activation=leaky_relu,
                               kernel_initializer=tf.contrib.layers.variance_scaling_initializer(), name='ref_enc_conv1')
        ref_feats.append(feats2)
        feats2 = layers.conv2d(feats2, 64, kernel_size=3, strides=2, padding='SAME', activation=leaky_relu,
                               kernel_initializer=tf.contrib.layers.variance_scaling_initializer(), name='ref_downsample1')
        feats3 = layers.conv2d(feats2, 128, kernel_size=3, strides=1, padding='SAME', activation=leaky_relu,
                               kernel_initializer=tf.contrib.layers.variance_scaling_initializer(), name='ref_enc_conv2')
        ref_feats.append(feats3)
        feats3 = layers.conv2d(feats3, 128, kernel_size=3, strides=2, padding='SAME', activation=leaky_relu,
                               kernel_initializer=tf.contrib.layers.variance_scaling_initializer(), name='ref_downsample2')
        feats4 = layers.conv2d(feats3, 256, kernel_size=3, strides=1, padding='SAME', activation=leaky_relu,
                               kernel_initializer=tf.contrib.layers.variance_scaling_initializer(), name='ref_enc_conv3')
        ref_feats.append(feats4)
        feats4 = layers.conv2d(feats4, 256, kernel_size=3, strides=2, padding='SAME', activation=leaky_relu,
                               kernel_initializer=tf.contrib.layers.variance_scaling_initializer(), name='ref_downsample3')
        ref_encoding = feats4
        # Collapse the variable-length batch of exposures into fixed-size
        # max and mean feature maps at every encoder level.
        for i in range(4):
            neg_feats[i] = [tf.reduce_max(neg_feats[i], axis=0, keepdims=True), tf.reduce_mean(neg_feats[i], axis=0, keepdims=True)]
            pos_feats[i] = [tf.reduce_max(pos_feats[i], axis=0, keepdims=True), tf.reduce_mean(pos_feats[i], axis=0, keepdims=True)]
        merged_encoding = tf.concat([tf.reduce_max(neg_encoding, axis=0, keepdims=True),
                                     tf.reduce_mean(neg_encoding, axis=0, keepdims=True),
                                     tf.reduce_max(pos_encoding, axis=0, keepdims=True),
                                     tf.reduce_mean(pos_encoding, axis=0, keepdims=True),
                                     ref_encoding], axis=-1)
        conv = layers.conv2d(merged_encoding, 256, kernel_size=3, strides=1, padding='SAME', activation=leaky_relu,
                             kernel_initializer=tf.contrib.layers.variance_scaling_initializer(), name='dec_conv1')
        conv = tf.keras.layers.UpSampling2D((2, 2))(conv)
        merge = tf.concat([conv, neg_feats[-1][0], neg_feats[-1][1], pos_feats[-1][0], pos_feats[-1][1], ref_feats[-1]], axis=-1)
        conv = layers.conv2d(merge, 128, kernel_size=3, strides=1, padding='SAME', activation=leaky_relu,
                             kernel_initializer=tf.contrib.layers.variance_scaling_initializer(), name='dec_conv2')
        conv = tf.keras.layers.UpSampling2D((2, 2))(conv)
        merge = tf.concat([conv, neg_feats[-2][0], neg_feats[-2][1], pos_feats[-2][0], pos_feats[-2][1], ref_feats[-2]], axis=-1)
        conv = layers.conv2d(merge, 64, kernel_size=3, strides=1, padding='SAME', activation=leaky_relu,
                             kernel_initializer=tf.contrib.layers.variance_scaling_initializer(), name='dec_conv3')
        conv = tf.keras.layers.UpSampling2D((2, 2))(conv)
        merge = tf.concat([conv, neg_feats[-3][0], neg_feats[-3][1], pos_feats[-3][0], pos_feats[-3][1], ref_feats[-3]], axis=-1)
        conv = layers.conv2d(merge, 32, kernel_size=3, strides=1, padding='SAME', activation=leaky_relu,
                             kernel_initializer=tf.contrib.layers.variance_scaling_initializer(), name='dec_conv4')
        conv = tf.keras.layers.UpSampling2D((2, 2))(conv)
        merge = tf.concat([conv, neg_feats[-4][0], neg_feats[-4][1], pos_feats[-4][0], pos_feats[-4][1], ref_feats[-4]], axis=-1)
        # conv = layers.conv2d(merge, 8, kernel_size=3, strides=1, padding='SAME', activation=leaky_relu,
        #                      kernel_initializer=tf.contrib.layers.variance_scaling_initializer(), name='dec_conv5')
        conv = layers.conv2d(merge, 3, kernel_size=3, strides=1, padding='SAME', activation=tf.nn.sigmoid,
                             kernel_initializer=tf.contrib.layers.variance_scaling_initializer(), name='fusion_output')
    return tf.clip_by_value(conv, 0.0, 1.0)
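A hedged NumPy sketch (my illustration, not from the original repo) of the order-invariant pooling used above: `tf.reduce_max` / `tf.reduce_mean` over the batch axis collapse a variable-length exposure stack into fixed-size max and mean maps, so the fused features do not depend on how many images the sequence has or in what order they arrive.

```python
import numpy as np

# Stack of N feature maps, shape (N, H, W, C) — a stand-in for one
# encoder level of neg_feats / pos_feats above.
rng = np.random.default_rng(0)
stack = rng.normal(size=(3, 4, 4, 8))

# Symmetric pooling over the sequence axis, mirroring
# tf.reduce_max / tf.reduce_mean with axis=0, keepdims=True.
pooled_max = stack.max(axis=0, keepdims=True)    # shape (1, 4, 4, 8)
pooled_mean = stack.mean(axis=0, keepdims=True)  # shape (1, 4, 4, 8)

# Permuting the sequence leaves both statistics unchanged, which is
# what lets the decoder accept any number of input exposures.
perm = stack[[2, 0, 1]]
assert np.allclose(perm.max(axis=0, keepdims=True), pooled_max)
assert np.allclose(perm.mean(axis=0, keepdims=True), pooled_mean)
```

Concatenating both the max and the mean maps, as the decoder does, preserves more of the per-pixel exposure statistics than either reduction alone.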
| 77.131455 | 185 | 0.612149 | 1,978 | 16,429 | 4.83822 | 0.063195 | 0.04232 | 0.047126 | 0.077116 | 0.870428 | 0.846082 | 0.828109 | 0.822675 | 0.79906 | 0.779415 | 0 | 0.042698 | 0.267271 | 16,429 | 212 | 186 | 77.495283 | 0.752284 | 0.02386 | 0 | 0.43956 | 0 | 0 | 0.042912 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.010989 | false | 0 | 0.010989 | 0 | 0.032967 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
338d51c0d7e8444d0e50902eaaf362b4183ae19a | 25 | py | Python | src/imdb/__init__.py | Jiang-Muyun/SqueezeSeg | d52f1149ed53038fc721a067e816e83a77fd29d3 | [
"BSD-2-Clause"
] | null | null | null | src/imdb/__init__.py | Jiang-Muyun/SqueezeSeg | d52f1149ed53038fc721a067e816e83a77fd29d3 | [
"BSD-2-Clause"
] | 10 | 2020-01-28T22:14:50.000Z | 2022-03-11T23:59:04.000Z | src/imdb/__init__.py | YaraAlnaggar/SqueezeSeg | 10343ca48ef384b27ddb8dd93551eeceb34962d6 | [
"BSD-2-Clause"
] | null | null | null | from .kitti import kitti
| 12.5 | 24 | 0.8 | 4 | 25 | 5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 25 | 1 | 25 | 25 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
33970c9324927872dbbb2ee18d05fd2fd8210691 | 3,660 | py | Python | evaluation_scripts/plotting_multiple_trajectories.py | julianxu/Rockets | af40b5fd17570bdd310eccffbb9fb6d8afbe8a3b | [
"MIT"
] | 275 | 2018-02-16T19:51:26.000Z | 2022-03-12T23:25:41.000Z | evaluation_scripts/plotting_multiple_trajectories.py | julianxu/Rockets | af40b5fd17570bdd310eccffbb9fb6d8afbe8a3b | [
"MIT"
] | 5 | 2018-02-14T20:55:36.000Z | 2021-04-20T08:17:59.000Z | evaluation_scripts/plotting_multiple_trajectories.py | julianxu/Rockets | af40b5fd17570bdd310eccffbb9fb6d8afbe8a3b | [
"MIT"
] | 49 | 2017-12-06T21:31:56.000Z | 2021-12-11T11:02:58.000Z | """
Author: Reuben Ferrante
Date: 10/05/2017
Description: General scripts for graphing instead of notebook.
"""
import numpy as np

from evaluation_scripts.plotting_trajectory import *


def plot_pid_and_ddpg_trajectories(res):
    # Low Disc
    # state_history = np.load('C://Users//REUBS_LEN//PycharmProjects//RocketLanding//rl_and_control//evaluation_scripts//'
    #                         'rl_q_learning//low_discretization//final_state_history.npy')
    # -------------------------------
    state_history = np.load('C://Users//REUBS_LEN//PycharmProjects//RocketLanding//control_and_ai//evaluation_scripts//'
                            'pid//final_state_history.npy')
    fig = res.create_figure()
    ax = res.add_subplot(fig, 111, "X-Position Displacement/metres", "Z-Altitude Displacement/metres", grid=False)
    # tests = [1, 7, 16, 22]
    tests = [1, 7, 16]
    for i in tests:
        convert_state_and_plot_trajectory_2(res, fig, ax, state_history[i])
    state_history = np.load('C://Users//REUBS_LEN//PycharmProjects//RocketLanding//control_and_ai//evaluation_scripts//'
                            'ddpg//model_2_unnormalized_longer_state//final_state_history.npy')
    for i in tests:
        convert_state_and_plot_trajectory_2(res, fig, ax, state_history[i])
    res.add_title('X-Z Trajectories for Multiple Tests - PID and DDPG')
    res.add_legend(np.append(['PID Test ' + str(i + 1) for i in tests], ['DDPG Test ' + str(i + 1) for i in tests]))
    res.show_plot()


def plot_ddpg1_and_ddpg2_trajectories(res):
    state_history = np.load('C://Users//REUBS_LEN//PycharmProjects//RocketLanding//control_and_ai//evaluation_scripts//'
                            'ddpg//model_1_normal_state//final_state_history.npy')
    fig = res.create_figure()
    ax = res.add_subplot(fig, 111, "X-Position Displacement/metres", "Z-Altitude Displacement/metres", grid=False)
    tests = [1, 7, 16, 22]
    for i in tests:
        convert_state_and_plot_trajectory_2(res, fig, ax, state_history[i])
    state_history = np.load('C://Users//REUBS_LEN//PycharmProjects//RocketLanding//control_and_ai//evaluation_scripts//'
                            'ddpg//model_2_unnormalized_longer_state//final_state_history.npy')
    for i in tests:
        convert_state_and_plot_trajectory_2(res, fig, ax, state_history[i])
    res.add_title('X-Z Trajectories for Multiple Tests - DDPG Models 1 and 2')
    res.add_legend(np.append(['M1 Test ' + str(i + 1) for i in tests], ['M2 Test ' + str(i + 1) for i in tests]))
    res.show_plot()


def plot_single_trajectories(res):
    # Low Disc
    state_history = np.load('C://Users//REUBS_LEN//PycharmProjects//RocketLanding//control_and_ai//evaluation_scripts//'
                            'rl_q_learning//low_discretization//final_state_history.npy')
    # -------------------------------
    # state_history = np.load('C://Users//REUBS_LEN//PycharmProjects//RocketLanding//rl_and_control//evaluation_scripts//'
    #                         'pid//final_state_history.npy')
    fig = res.create_figure()
    ax = res.add_subplot(fig, 111, "X-Position Displacement/metres", "Z-Altitude Displacement/metres", grid=False)
    test_history = np.matrix(state_history)  # currently unused
    tests = [1, 2, 7, 8, 9, 12]
    for i in tests:
        convert_state_and_plot_trajectory_2(res, fig, ax, state_history[i])
    res.add_title('X-Z Trajectories for Multiple Tests - Low Action Discretization')
    res.add_legend(['Test ' + str(i + 1) for i in tests])
    res.show_plot()
res = Graphing(plot_colors=plot_colors, fig_size=(8, 5))
plot_pid_and_ddpg_trajectories(res) | 46.923077 | 122 | 0.672951 | 502 | 3,660 | 4.641434 | 0.205179 | 0.103004 | 0.025751 | 0.04721 | 0.832618 | 0.813305 | 0.79485 | 0.79485 | 0.777682 | 0.777682 | 0 | 0.02032 | 0.179781 | 3,660 | 78 | 123 | 46.923077 | 0.755829 | 0.160383 | 0 | 0.565217 | 0 | 0 | 0.375409 | 0.233813 | 0 | 0 | 0 | 0 | 0 | 1 | 0.065217 | false | 0 | 0.021739 | 0 | 0.086957 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
339855d757cb9ab301e8ef41180bbf3b52548830 | 238 | py | Python | api/leaderboard/urls.py | firatbulut19/leaderboard | 6f37ed6d0b3fa9f63c955cfd5d6b5c94f481e6a9 | [
"MIT"
] | null | null | null | api/leaderboard/urls.py | firatbulut19/leaderboard | 6f37ed6d0b3fa9f63c955cfd5d6b5c94f481e6a9 | [
"MIT"
] | null | null | null | api/leaderboard/urls.py | firatbulut19/leaderboard | 6f37ed6d0b3fa9f63c955cfd5d6b5c94f481e6a9 | [
"MIT"
] | null | null | null | from django.urls import path
from . import views
urlpatterns = [
path('', views.global_leaderboard.as_view(), name="global_leaderboard"),
path('<str:country>/', views.country_leaderboard.as_view(), name="country_leaderboard"),
] | 29.75 | 92 | 0.731092 | 29 | 238 | 5.793103 | 0.482759 | 0.202381 | 0.202381 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.113445 | 238 | 8 | 93 | 29.75 | 0.796209 | 0 | 0 | 0 | 0 | 0 | 0.213389 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
33eddd9418211e82b60e6b3e819f96b20503c44a | 90 | py | Python | python2/string_concatenation.py | callmegibby/codecademy | f2b5507ca9382571cc044f44d6f857cbd4599121 | [
"MIT"
] | 1 | 2020-09-23T21:54:37.000Z | 2020-09-23T21:54:37.000Z | python2/string_concatenation.py | callmegibby/codecademy | f2b5507ca9382571cc044f44d6f857cbd4599121 | [
"MIT"
] | 2 | 2020-09-23T22:08:36.000Z | 2020-10-02T17:41:44.000Z | python2/string_concatenation.py | callmegibby/codecademy | f2b5507ca9382571cc044f44d6f857cbd4599121 | [
"MIT"
] | 1 | 2020-09-23T21:54:58.000Z | 2020-09-23T21:54:58.000Z | # Print the concatenation of "Spam and eggs" on line 3!
print ("Spam " + "and " + "eggs") | 30 | 55 | 0.633333 | 14 | 90 | 4.071429 | 0.714286 | 0.245614 | 0.385965 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014085 | 0.211111 | 90 | 3 | 56 | 30 | 0.788732 | 0.588889 | 0 | 0 | 0 | 0 | 0.361111 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
33fc6e018065d310de39be13c2da222c7376d10d | 120 | py | Python | main.py | evenharder/mandelbrot-set | b0b4449819dc0599df0572e9dd5eec07c9ed88c8 | [
"MIT"
] | null | null | null | main.py | evenharder/mandelbrot-set | b0b4449819dc0599df0572e9dd5eec07c9ed88c8 | [
"MIT"
] | null | null | null | main.py | evenharder/mandelbrot-set | b0b4449819dc0599df0572e9dd5eec07c9ed88c8 | [
"MIT"
] | null | null | null | from GUI import *
from tkinter import *
from tkinter.ttk import *
def main():
getcontext().prec=100
gui=GUI()
main() | 13.333333 | 25 | 0.7 | 18 | 120 | 4.666667 | 0.555556 | 0.238095 | 0.404762 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.03 | 0.166667 | 120 | 9 | 26 | 13.333333 | 0.81 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.428571 | 0 | 0.571429 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |