hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
20fc92794a278a08b94a1f84e232148ac5dbf932 | 8,695 | py | Python | src/ai/models/.ipynb_checkpoints/models_mse-checkpoint.py | carlov93/predictive_maintenance | eb00b82bde02668387d0308571296a82f78abef6 | [
"MIT"
] | 1 | 2020-02-11T07:50:33.000Z | 2020-02-11T07:50:33.000Z | src/ai/models/.ipynb_checkpoints/models_mse-checkpoint.py | carlov93/predictive_maintenance | eb00b82bde02668387d0308571296a82f78abef6 | [
"MIT"
] | 12 | 2020-03-24T18:16:51.000Z | 2022-03-12T00:15:55.000Z | src/ai/models/.ipynb_checkpoints/models_mse-checkpoint.py | carlov93/predictive_maintenance | eb00b82bde02668387d0308571296a82f78abef6 | [
"MIT"
] | null | null | null | import torch
import torch.nn as nn
import torch.nn.functional as F
import csv
class AnalysisLayer(nn.Module):
def __init__(self):
super(AnalysisLayer, self).__init__()
    def forward(self, x):
        # Stash a detached copy of the activations in a module-level global so
        # LstmMultiTaskLearning can read the latent space after a forward pass.
        global latent_space
        latent_space = x.detach()
        return x
class LstmMse(nn.Module):
def __init__(self, batch_size, input_dim, n_hidden_lstm, n_layers, dropout_rate, n_hidden_fc):
super(LstmMse, self).__init__()
# Attributes for LSTM Network
self.input_dim = input_dim
self.n_hidden_lstm = n_hidden_lstm
self.n_layers = n_layers
self.batch_size = batch_size
self.dropout_rate = dropout_rate
self.n_hidden_fc = n_hidden_fc
# Definition of NN layer
        # batch_first=True because the DataLoader yields batches with batch_size as the 0th dimension
self.lstm = nn.LSTM(input_size = self.input_dim,
hidden_size = self.n_hidden_lstm,
num_layers = self.n_layers,
batch_first = True,
dropout = self.dropout_rate)
self.fc1 = nn.Linear(self.n_hidden_lstm, self.n_hidden_fc)
self.dropout = nn.Dropout(p=self.dropout_rate)
self.fc2 = nn.Linear(self.n_hidden_fc, self.input_dim)
def forward(self, input_data, hidden):
# Forward propagate LSTM
        # PyTorch's LSTM returns two results: the output tensor and the
        # tuple (hidden_state, cell_state).
lstm_out, (hidden_state, cell_state) = self.lstm(input_data, hidden)
        # The LSTM output contains the hidden states for all timesteps of the
        # sequence, so we select the hidden state of the last timestep.
        # The length of the input sequences can vary.
        last_out = lstm_out[:, -1, :]
        # Forward pass through the subsequent fully connected network with tanh activation
out_y_hat = self.fc1(last_out)
out_y_hat = self.dropout(out_y_hat)
        out_y_hat = torch.tanh(out_y_hat)  # F.tanh is deprecated in favor of torch.tanh
out_y_hat = self.fc2(out_y_hat)
return out_y_hat
def init_hidden(self):
        # Initialize the hidden state and the cell state with zeros.
        # requires_grad=False keeps them detached from any previous sequence,
        # which prevents gradients from flowing across sequence boundaries.
        h0 = torch.zeros(self.n_layers, self.batch_size, self.n_hidden_lstm, requires_grad=False)
        c0 = torch.zeros(self.n_layers, self.batch_size, self.n_hidden_lstm, requires_grad=False)
        return [h0, c0]
class LstmMle(nn.Module):
def __init__(self, batch_size, input_dim, n_hidden_lstm, n_layers, dropout_rate, n_hidden_fc):
super(LstmMle, self).__init__()
# Attributes for LSTM Network
self.input_dim = input_dim
self.n_hidden_lstm = n_hidden_lstm
self.n_layers = n_layers
self.batch_size = batch_size
self.dropout_rate = dropout_rate
self.n_hidden_fc = n_hidden_fc
# Definition of NN layer
        # batch_first=True because the DataLoader yields batches with batch_size as the 0th dimension
self.lstm = nn.LSTM(input_size = self.input_dim,
hidden_size = self.n_hidden_lstm,
num_layers = self.n_layers,
batch_first = True,
dropout = self.dropout_rate)
self.fc1 = nn.Linear(self.n_hidden_lstm, self.n_hidden_fc)
self.dropout = nn.Dropout(p=self.dropout_rate)
self.fc_y_hat = nn.Linear(self.n_hidden_fc, self.input_dim)
self.fc_tau = nn.Linear(self.n_hidden_fc, self.input_dim)
def forward(self, input_data, hidden):
# Forward propagate LSTM
        # PyTorch's LSTM returns two results: the output tensor and the
        # tuple (hidden_state, cell_state).
lstm_out, (hidden_state, cell_state) = self.lstm(input_data, hidden)
        # The LSTM output contains the hidden states for all timesteps of the
        # sequence, so we select the hidden state of the last timestep.
        # The length of the input sequences can vary.
        last_out = lstm_out[:, -1, :]
        # Forward pass through the fully connected tanh network with two
        # output heads: the prediction y_hat and the uncertainty parameter tau
out = self.fc1(last_out)
out = self.dropout(out)
        out = torch.tanh(out)  # F.tanh is deprecated in favor of torch.tanh
y_hat = self.fc_y_hat(out)
tau = self.fc_tau(out)
return [y_hat, tau]
def init_hidden(self):
        # Initialize the hidden state and the cell state with zeros.
        # requires_grad=False keeps them detached from any previous sequence,
        # which prevents gradients from flowing across sequence boundaries.
        h0 = torch.zeros(self.n_layers, self.batch_size, self.n_hidden_lstm, requires_grad=False)
        c0 = torch.zeros(self.n_layers, self.batch_size, self.n_hidden_lstm, requires_grad=False)
        return [h0, c0]
class LstmMultiTaskLearning(nn.Module):
def __init__(self, batch_size, input_dim, n_hidden_lstm, n_layers,
dropout_rate, n_hidden_fc_prediction, n_hidden_fc_ls_analysis):
super(LstmMultiTaskLearning, self).__init__()
# Attributes for LSTM Network
self.input_dim = input_dim
self.n_hidden_lstm = n_hidden_lstm
self.n_layers = n_layers
self.batch_size = batch_size
self.dropout_rate = dropout_rate
self.n_hidden_fc_prediction = n_hidden_fc_prediction
self.n_hidden_fc_ls_analysis = n_hidden_fc_ls_analysis
self.current_latent_space = None
        # Define the structure of the model
self.sharedlayer = nn.LSTM(input_size = self.input_dim,
hidden_size = self.n_hidden_lstm,
num_layers = self.n_layers,
batch_first = True,
dropout = self.dropout_rate)
self.prediction_network = nn.Sequential(nn.Linear(self.n_hidden_lstm, self.n_hidden_fc_prediction),
nn.Dropout(p=self.dropout_rate),
nn.Tanh(),
nn.Linear(self.n_hidden_fc_prediction, self.input_dim)
)
self.latent_space_analyse_network = nn.Sequential(nn.Linear(self.n_hidden_lstm, self.n_hidden_fc_ls_analysis),
nn.Dropout(p=self.dropout_rate),
nn.Tanh(),
AnalysisLayer(),
nn.Linear(self.n_hidden_fc_ls_analysis, self.input_dim)
)
def forward(self, input_data, hidden):
# Forward propagate LSTM
        # PyTorch's LSTM returns two results: the output tensor and the
        # tuple (hidden_state, cell_state).
        lstm_out, (hidden_state, cell_state) = self.sharedlayer(input_data, hidden)
        # The LSTM output contains the hidden states for all timesteps of the
        # sequence, so we select the hidden state of the last timestep.
        # The length of the input sequences can vary.
        last_out = lstm_out[:, -1, :]
        # Forward pass through both sub-networks
        prediction = self.prediction_network(last_out)
        reconstruction = self.latent_space_analyse_network(last_out)
        # Save the latent space captured by AnalysisLayer during the forward pass
        self.current_latent_space = latent_space
        return prediction, reconstruction
def init_hidden(self):
        # Initialize the hidden state and the cell state with zeros.
        # requires_grad=False keeps them detached from any previous sequence,
        # which prevents gradients from flowing across sequence boundaries.
        h0 = torch.zeros(self.n_layers, self.batch_size, self.n_hidden_lstm, requires_grad=False)
        c0 = torch.zeros(self.n_layers, self.batch_size, self.n_hidden_lstm, requires_grad=False)
        return [h0, c0]
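# --- Illustrative shape check (an editor's sketch, not part of the original
# file; all hyperparameter values below are assumptions). With batch_first=True
# the LSTM input is (batch, seq_len, features) and the hidden/cell states are
# (num_layers, batch, hidden_size); slicing lstm_out[:, -1, :] picks the last
# timestep, as done in each forward() method above.
if __name__ == "__main__":
    demo_lstm = nn.LSTM(input_size=3, hidden_size=16, num_layers=2, batch_first=True)
    h0 = torch.zeros(2, 4, 16)  # (num_layers, batch, hidden_size)
    c0 = torch.zeros(2, 4, 16)
    out, (hn, cn) = demo_lstm(torch.randn(4, 10, 3), (h0, c0))
    print(out.shape)            # torch.Size([4, 10, 16]): every timestep
    print(out[:, -1, :].shape)  # torch.Size([4, 16]): last timestep only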
| 48.305556 | 118 | 0.613571 | 1,139 | 8,695 | 4.413521 | 0.120281 | 0.059877 | 0.063457 | 0.047742 | 0.872687 | 0.839865 | 0.812811 | 0.805252 | 0.792918 | 0.786354 | 0 | 0.004552 | 0.317769 | 8,695 | 180 | 119 | 48.305556 | 0.842886 | 0.255779 | 0 | 0.566372 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.097345 | false | 0 | 0.035398 | 0 | 0.230089 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1f171e1b51cc94a9d13da441d34facedd1b4de1e | 45 | py | Python | distributed_lock/__init__.py | maxpowel/python-distributed-lock | d3199dba4b4ff674f4ea4ed0bb4c19d38718c3d0 | [
"Apache-2.0"
] | null | null | null | distributed_lock/__init__.py | maxpowel/python-distributed-lock | d3199dba4b4ff674f4ea4ed0bb4c19d38718c3d0 | [
"Apache-2.0"
] | null | null | null | distributed_lock/__init__.py | maxpowel/python-distributed-lock | d3199dba4b4ff674f4ea4ed0bb4c19d38718c3d0 | [
"Apache-2.0"
] | null | null | null | from .distributed_lock import DistributedLock | 45 | 45 | 0.911111 | 5 | 45 | 8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066667 | 45 | 1 | 45 | 45 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1f1aa2d70369edb6f7c946cf47aec944faad9e09 | 187 | py | Python | splicemachine/features/__init__.py | myles-novick/pysplice | 96a848d4adda0a937002798865d32939f059f4d1 | [
"Apache-2.0"
] | null | null | null | splicemachine/features/__init__.py | myles-novick/pysplice | 96a848d4adda0a937002798865d32939f059f4d1 | [
"Apache-2.0"
] | null | null | null | splicemachine/features/__init__.py | myles-novick/pysplice | 96a848d4adda0a937002798865d32939f059f4d1 | [
"Apache-2.0"
] | null | null | null | from .feature import Feature
from .feature_set import FeatureSet
from .feature_store import FeatureStore
from .pipe import Pipe
from .constants import FeatureType, PipeType, PipeLanguage
| 31.166667 | 58 | 0.84492 | 24 | 187 | 6.5 | 0.5 | 0.211538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 187 | 5 | 59 | 37.4 | 0.945455 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1f922afe11b7c72391c7153f922ed1567756b0bd | 137 | py | Python | hazijavitorendszer/HW/fahrenheit/test.py | gaebor/hazi | 0907e8304aa690cae5752485ba237d782336b918 | [
"MIT"
] | null | null | null | hazijavitorendszer/HW/fahrenheit/test.py | gaebor/hazi | 0907e8304aa690cae5752485ba237d782336b918 | [
"MIT"
] | null | null | null | hazijavitorendszer/HW/fahrenheit/test.py | gaebor/hazi | 0907e8304aa690cae5752485ba237d782336b918 | [
"MIT"
] | null | null | null | def _eval(_input, _output, _expected, _exception, _expected_exception):
return abs(_expected - _output) < 0.001 and _exception is None
| 45.666667 | 71 | 0.781022 | 18 | 137 | 5.388889 | 0.722222 | 0.350515 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.033613 | 0.131387 | 137 | 2 | 72 | 68.5 | 0.781513 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
2f27a0892e35409ffb0e18a1fc3cc3738716b500 | 69 | py | Python | config.py | germainlefebvre4/ns-killer | ca082c19ceff6db94f2789d133d19c77300a5d16 | [
"Apache-2.0"
] | 21 | 2020-05-26T09:02:20.000Z | 2022-03-10T05:35:20.000Z | config.py | germainlefebvre4/ns-killer | ca082c19ceff6db94f2789d133d19c77300a5d16 | [
"Apache-2.0"
] | 15 | 2020-01-09T16:33:33.000Z | 2021-02-05T10:20:36.000Z | config.py | germainlefebvre4/ns-killer | ca082c19ceff6db94f2789d133d19c77300a5d16 | [
"Apache-2.0"
] | 5 | 2020-08-31T06:25:57.000Z | 2020-10-09T22:59:49.000Z | """
Dotenv
"""
import os
from dotenv import load_dotenv
load_dotenv() | 11.5 | 30 | 0.753623 | 10 | 69 | 5 | 0.5 | 0.48 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.130435 | 69 | 6 | 31 | 11.5 | 0.833333 | 0.086957 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2f5fa8189af5aed2c803e46dc69746a066ed3c51 | 103 | py | Python | ariadne/utils/__init__.py | t3hseus/ariadne | b4471a37741000e22281c4d6ff647d65ab9e1914 | [
"MIT"
] | 6 | 2020-08-28T22:44:07.000Z | 2022-01-24T20:53:00.000Z | ariadne/utils/__init__.py | t3hseus/ariadne | b4471a37741000e22281c4d6ff647d65ab9e1914 | [
"MIT"
] | 1 | 2021-02-20T09:38:46.000Z | 2021-02-20T09:38:46.000Z | ariadne/utils/__init__.py | t3hseus/ariadne | b4471a37741000e22281c4d6ff647d65ab9e1914 | [
"MIT"
] | 2 | 2021-10-04T09:25:06.000Z | 2022-02-09T09:09:09.000Z | from . import base
from . import model
from . import inference
from . import drawing
from . import data | 20.6 | 23 | 0.76699 | 15 | 103 | 5.266667 | 0.466667 | 0.632911 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.184466 | 103 | 5 | 24 | 20.6 | 0.940476 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2f83b1ccd73abfd0226bf238834966b2f41c4241 | 152 | py | Python | HackerRank/Python/Easy/E0055.py | Mohammed-Shoaib/HackerRank-Problems | ccfb9fc2f0d8dff454439d75ce519cf83bad7c3b | [
"MIT"
] | 54 | 2019-05-13T12:13:09.000Z | 2022-02-27T02:59:00.000Z | HackerRank/Python/Easy/E0055.py | Mohammed-Shoaib/HackerRank-Problems | ccfb9fc2f0d8dff454439d75ce519cf83bad7c3b | [
"MIT"
] | 2 | 2020-10-02T07:16:43.000Z | 2020-10-19T04:36:19.000Z | HackerRank/Python/Easy/E0055.py | Mohammed-Shoaib/HackerRank-Problems | ccfb9fc2f0d8dff454439d75ce519cf83bad7c3b | [
"MIT"
] | 20 | 2020-05-26T09:48:13.000Z | 2022-03-18T15:18:27.000Z | # Problem Statement: https://www.hackerrank.com/challenges/np-arrays/problem
import numpy
def arrays(arr):
return numpy.flip(numpy.array(arr, float)) | 25.333333 | 76 | 0.776316 | 22 | 152 | 5.363636 | 0.772727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085526 | 152 | 6 | 77 | 25.333333 | 0.848921 | 0.486842 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
85fab8a1ca458e8331165e29f299093f4035d61f | 60 | py | Python | __init__.py | scvannost/pyframework | fab1f74a1358dcb41b9ffc6bc7ebb4dad7ae22a0 | [
"MIT"
] | null | null | null | __init__.py | scvannost/pyframework | fab1f74a1358dcb41b9ffc6bc7ebb4dad7ae22a0 | [
"MIT"
] | null | null | null | __init__.py | scvannost/pyframework | fab1f74a1358dcb41b9ffc6bc7ebb4dad7ae22a0 | [
"MIT"
] | null | null | null | from pyframework import *
from pyframework.usermgr import *
| 20 | 33 | 0.816667 | 7 | 60 | 7 | 0.571429 | 0.612245 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 60 | 2 | 34 | 30 | 0.942308 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c81a07a699e6daba435dd9ed2abb8cbb8c5d4323 | 2,128 | py | Python | tests/test_kde.py | mblackgeo/spatial-kde | 06c5cc019ba0a59bc3bd8b70a7e21c00177573d2 | [
"MIT"
] | 1 | 2022-01-29T06:19:10.000Z | 2022-01-29T06:19:10.000Z | tests/test_kde.py | mblackgeo/spatial-kde | 06c5cc019ba0a59bc3bd8b70a7e21c00177573d2 | [
"MIT"
] | 2 | 2022-02-16T12:27:04.000Z | 2022-02-16T12:29:36.000Z | tests/test_kde.py | mblackgeo/spatial-kde | 06c5cc019ba0a59bc3bd8b70a7e21c00177573d2 | [
"MIT"
] | null | null | null | from pathlib import Path
import geopandas as gpd
import pytest
import rasterio
from spatial_kde import spatial_kernel_density
def test_spatial_kernel_density_no_weight_utm(data_dir, tmp_path):
gdf = gpd.read_file(str(data_dir / "points_epsg_32630.gpkg"))
out_file = str(tmp_path / "out.tif")
spatial_kernel_density(
points=gdf,
radius=100,
output_pixel_size=2,
output_path=out_file,
scaled=False,
)
assert Path(out_file).exists()
with rasterio.open(out_file) as src:
out_arr = src.read(1)
assert max(out_arr.flatten()) == pytest.approx(1.58, abs=1e-2)
def test_spatial_kernel_density_weighted_utm(data_dir, tmp_path):
gdf = gpd.read_file(str(data_dir / "points_epsg_32630.gpkg"))
out_file = str(tmp_path / "out.tif")
spatial_kernel_density(
points=gdf,
radius=100,
output_pixel_size=2,
output_path=out_file,
scaled=False,
weight_col="weight",
)
assert Path(out_file).exists()
with rasterio.open(out_file) as src:
out_arr = src.read(1)
assert max(out_arr.flatten()) == pytest.approx(6.29, abs=1e-2)
def test_spatial_kernel_density_no_weight_wgs(data_dir, tmp_path):
gdf = gpd.read_file(str(data_dir / "points.geojson"))
out_file = str(tmp_path / "out.tif")
spatial_kernel_density(
points=gdf,
radius=0.001,
output_pixel_size=0.00001,
output_path=out_file,
scaled=False,
)
assert Path(out_file).exists()
with rasterio.open(out_file) as src:
out_arr = src.read(1)
assert max(out_arr.flatten()) == pytest.approx(1.37363, abs=1e-2)
def test_spatial_kernel_density_missing_weight(data_dir, tmp_path):
gdf = gpd.read_file(str(data_dir / "points_epsg_32630.gpkg"))
out_file = str(tmp_path / "out.tif")
with pytest.raises(ValueError):
spatial_kernel_density(
points=gdf,
radius=100,
output_pixel_size=2,
output_path=out_file,
scaled=False,
weight_col="not_a_column",
)
| 26.271605 | 73 | 0.653665 | 304 | 2,128 | 4.263158 | 0.223684 | 0.075617 | 0.138889 | 0.061728 | 0.834105 | 0.834105 | 0.834105 | 0.800926 | 0.724537 | 0.724537 | 0 | 0.035847 | 0.239662 | 2,128 | 80 | 74 | 26.6 | 0.765142 | 0 | 0 | 0.633333 | 0 | 0 | 0.059211 | 0.031015 | 0 | 0 | 0 | 0 | 0.1 | 1 | 0.066667 | false | 0 | 0.083333 | 0 | 0.15 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c843c029c96398f7c151cd02f6c66b892abdd319 | 113 | py | Python | seamm_dashboard/routes/projects/__init__.py | paulsaxe/seamm_dashboard | 66049c8c58fd34af3bd143157d0138e8fb737f9b | [
"BSD-3-Clause"
] | 5 | 2020-04-17T16:34:13.000Z | 2021-12-09T17:24:01.000Z | seamm_dashboard/routes/projects/__init__.py | paulsaxe/seamm_dashboard | 66049c8c58fd34af3bd143157d0138e8fb737f9b | [
"BSD-3-Clause"
] | 55 | 2020-02-26T20:47:52.000Z | 2022-03-12T14:22:10.000Z | seamm_dashboard/routes/projects/__init__.py | paulsaxe/seamm_dashboard | 66049c8c58fd34af3bd143157d0138e8fb737f9b | [
"BSD-3-Clause"
] | 4 | 2019-10-15T18:34:14.000Z | 2022-01-04T20:50:43.000Z | from flask import Blueprint
projects = Blueprint("projects", __name__)
from . import views # noqa: F401, E402
| 18.833333 | 42 | 0.743363 | 14 | 113 | 5.714286 | 0.714286 | 0.425 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.06383 | 0.168142 | 113 | 5 | 43 | 22.6 | 0.787234 | 0.141593 | 0 | 0 | 0 | 0 | 0.084211 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0.666667 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
c864211a92482828b3dbfa80ec18f692565420f9 | 48 | py | Python | actinia_gdi/wsgi.py | anikaweinmann/actinia-gdi | 0a32212a9f1e89d7691b1cef1fb9cf9f30a6d2c9 | [
"Apache-2.0"
] | 4 | 2019-04-27T21:21:44.000Z | 2021-04-29T20:28:23.000Z | actinia_gdi/wsgi.py | anikaweinmann/actinia-gdi | 0a32212a9f1e89d7691b1cef1fb9cf9f30a6d2c9 | [
"Apache-2.0"
] | 29 | 2019-04-23T10:53:36.000Z | 2021-03-05T09:41:00.000Z | actinia_gdi/wsgi.py | anikaweinmann/actinia-gdi | 0a32212a9f1e89d7691b1cef1fb9cf9f30a6d2c9 | [
"Apache-2.0"
] | 3 | 2019-04-23T10:13:01.000Z | 2020-04-15T10:42:40.000Z | from actinia_gdi.main import app as application
| 24 | 47 | 0.854167 | 8 | 48 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 48 | 1 | 48 | 48 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c077db8ec195a87f0a8bc60644a07f00c37a0aba | 151 | py | Python | shutdown.py | greenkeytech/discovery-sdk | 3c0357ef98a723f2eaa3f190435d230917d82eea | [
"Apache-2.0"
] | 12 | 2019-08-13T14:08:17.000Z | 2022-02-11T16:56:05.000Z | shutdown.py | finos/greenkey-discovery-sdk | 3c0357ef98a723f2eaa3f190435d230917d82eea | [
"Apache-2.0"
] | 26 | 2019-08-01T14:06:21.000Z | 2021-03-11T17:10:57.000Z | shutdown.py | greenkeytech/discovery-sdk | 3c0357ef98a723f2eaa3f190435d230917d82eea | [
"Apache-2.0"
] | 5 | 2019-09-23T16:09:35.000Z | 2021-03-31T23:24:31.000Z | #!/usr/bin/env python3
from fire import Fire
from launch import teardown_docker_compose
if __name__ == "__main__":
Fire(teardown_docker_compose)
| 18.875 | 42 | 0.781457 | 21 | 151 | 5.047619 | 0.666667 | 0.264151 | 0.396226 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007692 | 0.139073 | 151 | 7 | 43 | 21.571429 | 0.807692 | 0.139073 | 0 | 0 | 0 | 0 | 0.062016 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
c08f149de5d2766d4735d6c431b3f5a95705c06b | 193 | py | Python | main.py | amano-honmono/KT_button | 114fffeaf533ff166afad5f514b5aaacea38aceb | [
"MIT"
] | null | null | null | main.py | amano-honmono/KT_button | 114fffeaf533ff166afad5f514b5aaacea38aceb | [
"MIT"
] | null | null | null | main.py | amano-honmono/KT_button | 114fffeaf533ff166afad5f514b5aaacea38aceb | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from bottle import route, run
import generator
@route('/')
def top():
return generator.top()
@route('/login')
def login():
pass
run(host='0.0.0.0', port=80)
| 12.866667 | 29 | 0.606218 | 29 | 193 | 4.034483 | 0.62069 | 0.051282 | 0.051282 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.044304 | 0.181347 | 193 | 14 | 30 | 13.785714 | 0.696203 | 0.108808 | 0 | 0 | 0 | 0 | 0.082353 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | true | 0.111111 | 0.222222 | 0.111111 | 0.555556 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 6 |
c0c27c54b56206ec26f7eb5a769b9ce0b9f5b625 | 42 | py | Python | esia_auth/models/__init__.py | sysols/django-esia-auth | 8311585f1942ba37588a823932af7a3fdf2b0f9e | [
"BSD-2-Clause"
] | 1 | 2021-09-06T08:25:39.000Z | 2021-09-06T08:25:39.000Z | esia_auth/models/__init__.py | sysols/django-esia-auth | 8311585f1942ba37588a823932af7a3fdf2b0f9e | [
"BSD-2-Clause"
] | null | null | null | esia_auth/models/__init__.py | sysols/django-esia-auth | 8311585f1942ba37588a823932af7a3fdf2b0f9e | [
"BSD-2-Clause"
] | null | null | null | from .esia_user import ESIACompatibleUser
| 21 | 41 | 0.880952 | 5 | 42 | 7.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.095238 | 42 | 1 | 42 | 42 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c0f9d8830880ce76148de6ea4e3176641a9d82b5 | 1,919 | py | Python | app/migrations/0002_auto_20180519_2133.py | callofdutyops/cnc-monitoring | cd18ce238d6f9a1435541159ea4b2e4d3dd94dd6 | [
"MIT"
] | null | null | null | app/migrations/0002_auto_20180519_2133.py | callofdutyops/cnc-monitoring | cd18ce238d6f9a1435541159ea4b2e4d3dd94dd6 | [
"MIT"
] | null | null | null | app/migrations/0002_auto_20180519_2133.py | callofdutyops/cnc-monitoring | cd18ce238d6f9a1435541159ea4b2e4d3dd94dd6 | [
"MIT"
] | null | null | null | # Generated by Django 2.0.5 on 2018-05-19 13:33
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('app', '0001_initial'),
]
operations = [
migrations.AddField(
model_name='cncalarm',
name='alarmName',
field=models.CharField(default='OverHot', max_length=255, unique=True),
preserve_default=False,
),
migrations.AlterField(
model_name='cnc',
name='reservedField_1',
field=models.CharField(blank=True, max_length=255, null=True),
),
migrations.AlterField(
model_name='cnc',
name='reservedField_2',
field=models.CharField(blank=True, max_length=255, null=True),
),
migrations.AlterField(
model_name='cnc',
name='reservedField_3',
field=models.CharField(blank=True, max_length=255, null=True),
),
migrations.AlterField(
model_name='cnc',
name='reservedField_4',
field=models.CharField(blank=True, max_length=255, null=True),
),
migrations.AlterField(
model_name='cnc',
name='reservedField_5',
field=models.CharField(blank=True, max_length=255, null=True),
),
migrations.AlterField(
model_name='cncalarm',
name='alarmAppearance',
field=models.TextField(blank=True, max_length=4000, null=True),
),
migrations.AlterField(
model_name='cncalarm',
name='alarmReason',
field=models.TextField(blank=True, max_length=4000, null=True),
),
migrations.AlterField(
model_name='cncalarm',
name='alarmSolution',
field=models.TextField(blank=True, max_length=4000, null=True),
),
]
| 31.983333 | 83 | 0.569567 | 188 | 1,919 | 5.680851 | 0.281915 | 0.075843 | 0.187266 | 0.217228 | 0.714419 | 0.714419 | 0.714419 | 0.668539 | 0.657303 | 0.657303 | 0 | 0.040878 | 0.311621 | 1,919 | 59 | 84 | 32.525424 | 0.7676 | 0.02345 | 0 | 0.641509 | 1 | 0 | 0.102564 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.018868 | 0 | 0.075472 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
23aebec1fa58e3f79bb627fd177ab7bee344cda6 | 3,290 | py | Python | tests/list/test_list_of_declaration.py | nikitanovosibirsk/district42 | 0c13248919fc96bde16b9634a8ea468e4882752a | [
"Apache-2.0"
] | 1 | 2016-09-16T04:09:19.000Z | 2016-09-16T04:09:19.000Z | tests/list/test_list_of_declaration.py | nikitanovosibirsk/district42 | 0c13248919fc96bde16b9634a8ea468e4882752a | [
"Apache-2.0"
] | 2 | 2021-06-14T05:53:49.000Z | 2022-02-01T14:26:31.000Z | tests/list/test_list_of_declaration.py | nikitanovosibirsk/district42 | 0c13248919fc96bde16b9634a8ea468e4882752a | [
"Apache-2.0"
] | null | null | null | from unittest.mock import sentinel
from baby_steps import given, then, when
from pytest import raises
from district42 import schema
from district42.errors import DeclarationError
from district42.types import ListSchema
def test_list_of_elements_declaration():
with when:
list_type = schema.int
sch = schema.list(list_type)
with then:
assert isinstance(sch, ListSchema)
assert sch.props.type == list_type
def test_list_of_invalid_value_type_declaration_error():
with when, raises(Exception) as exception:
schema.list(sentinel)
with then:
assert exception.type is DeclarationError
assert str(exception.value) == (
"`schema.list` value must be an instance of ('list', 'Schema'), "
"instance of '_Sentinel' given"
)
def test_list_of_len_declaration():
with given:
list_type = schema.int
length = 10
with when:
sch = schema.list(list_type).len(length)
with then:
assert sch.props.type == list_type
assert sch.props.len == length
def test_list_of_min_len_declaration():
with given:
list_type = schema.int
min_length = 10
with when:
sch = schema.list(list_type).len(min_length, ...)
with then:
assert sch.props.type == list_type
assert sch.props.min_len == min_length
def test_list_of_max_len_declaration():
with given:
list_type = schema.int
max_length = 10
with when:
sch = schema.list(list_type).len(..., max_length)
with then:
assert sch.props.type == list_type
assert sch.props.max_len == max_length
def test_list_of_min_max_len_declaration():
with given:
list_type = schema.int
min_length, max_length = 1, 10
with when:
sch = schema.list(list_type).len(min_length, max_length)
with then:
assert sch.props.type == list_type
assert sch.props.min_len == min_length
assert sch.props.max_len == max_length
def test_list_of_value_already_declared_len_declaration_error():
with when, raises(Exception) as exception:
schema.list.len(1)(schema.str)
with then:
assert exception.type is DeclarationError
assert str(exception.value) == "`schema.list.len(1)` is already declared"
def test_list_of_value_already_declared_min_len_declaration_error():
with when, raises(Exception) as exception:
schema.list.len(1, ...)(schema.str)
with then:
assert exception.type is DeclarationError
assert str(exception.value) == "`schema.list.len(1, ...)` is already declared"
def test_list_of_value_already_declared_max_len_declaration_error():
with when, raises(Exception) as exception:
schema.list.len(..., 1)(schema.str)
with then:
assert exception.type is DeclarationError
assert str(exception.value) == "`schema.list.len(..., 1)` is already declared"
def test_list_of_value_already_declared_min_max_len_declaration_error():
with when, raises(Exception) as exception:
schema.list.len(1, 2)(schema.str)
with then:
assert exception.type is DeclarationError
assert str(exception.value) == "`schema.list.len(1, 2)` is already declared"
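The tests above pin down a fluent declaration API: `schema.list(...)`, `.len(a, b)` with `...` for open bounds, and "already declared" errors when length is set before the element type. A toy sketch of that pattern, simplified and not district42's actual code:

```python
class ListSchema:
    # Toy sketch of the fluent declaration pattern exercised above;
    # not district42's real implementation.
    def __init__(self, item=None, min_len=None, max_len=None):
        self.item, self.min_len, self.max_len = item, min_len, max_len

    def len(self, a, b=None):
        if b is None:
            return ListSchema(self.item, a, a)  # exact length
        lo = None if a is ... else a
        hi = None if b is ... else b
        return ListSchema(self.item, lo, hi)

    def __call__(self, item):
        # Mirrors the "already declared" errors checked in the tests.
        if self.min_len is not None or self.max_len is not None:
            raise ValueError("length is already declared")
        return ListSchema(item, None, None)

sch = ListSchema().len(1, 10)
print(sch.min_len, sch.max_len)  # 1 10
```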
# --- play shield examples/tiny-snake/bitmaps.py (konimaru/tinypico-micropython, MIT) ---
#icons
icon_wifi = [
0x1f,0xe0
,0x70,0x38
,0xc7,0x8c
,0x1c,0xe0
,0x30,0x30
,0x07,0x80
,0x0c,0xc0
,0x00,0x00
,0x03,0x00
,0x03,0x00
]
icon_wifi_inv = [
0xe0,0x1c
,0x8f,0xc4
,0x38,0x70
,0xe3,0x1c
,0xcf,0xcc
,0xf8,0x7c
,0xf3,0x3c
,0xff,0xfc
,0xfc,0xfc
,0xfc,0xfc
]
icon_tinypico = [
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x3f, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x80,
0x3f, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xe0,
0x3f, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xf0,
0x3c, 0x00, 0x06, 0x1f, 0xff, 0xff, 0xff, 0xfc, 0x00, 0xfe, 0x1f, 0xf0, 0x1f, 0xfc, 0x07, 0xf8,
0x3c, 0x00, 0x06, 0x1f, 0xff, 0xff, 0xff, 0xfc, 0x00, 0x3e, 0x1f, 0xc0, 0x07, 0xf0, 0x01, 0xf8,
0x3c, 0x00, 0x06, 0x1f, 0xff, 0xff, 0xff, 0xfc, 0x00, 0x1e, 0x1f, 0x80, 0x07, 0xc0, 0x00, 0xfc,
0x3f, 0xf0, 0xff, 0xff, 0xff, 0xff, 0xff, 0xfc, 0x3e, 0x0e, 0x1f, 0x07, 0xe7, 0x83, 0xf0, 0x7c,
0x3f, 0xf0, 0xff, 0xff, 0xff, 0xff, 0xff, 0xfc, 0x3f, 0x0e, 0x1f, 0x0f, 0xff, 0x87, 0xf8, 0x7c,
0x3f, 0xf0, 0xfe, 0x1c, 0x70, 0x78, 0x7f, 0x0c, 0x3f, 0x8e, 0x1e, 0x1f, 0xff, 0x07, 0xfc, 0x3c,
0x3f, 0xf0, 0xfe, 0x1c, 0x40, 0x3c, 0x7f, 0x0c, 0x3f, 0x8e, 0x1e, 0x1f, 0xff, 0x0f, 0xfc, 0x3c,
0x3f, 0xf0, 0xfe, 0x1c, 0x00, 0x1c, 0x3f, 0x1c, 0x3f, 0x8e, 0x1e, 0x3f, 0xff, 0x0f, 0xfe, 0x3c,
0x3f, 0xf0, 0xfe, 0x1c, 0x1f, 0x1e, 0x3e, 0x1c, 0x3f, 0x0e, 0x1c, 0x3f, 0xff, 0x0f, 0xfe, 0x1c,
0x3f, 0xf0, 0xfe, 0x1c, 0x3f, 0x0e, 0x3e, 0x1c, 0x3e, 0x0e, 0x1c, 0x3f, 0xff, 0x1f, 0xfe, 0x1c,
0x3f, 0xf0, 0xfe, 0x1c, 0x3f, 0x0e, 0x1e, 0x3c, 0x00, 0x1e, 0x1c, 0x3f, 0xff, 0x1f, 0xfe, 0x1c,
0x3f, 0xf0, 0xfe, 0x1c, 0x3f, 0x0f, 0x1c, 0x3c, 0x00, 0x3e, 0x1c, 0x3f, 0xff, 0x1f, 0xfe, 0x1c,
0x3f, 0xf0, 0xfe, 0x1c, 0x3f, 0x0f, 0x0c, 0x7c, 0x23, 0xfe, 0x1c, 0x3f, 0xff, 0x0f, 0xfe, 0x1c,
0x3f, 0xf0, 0xfe, 0x1c, 0x3f, 0x0f, 0x8c, 0x7c, 0x3f, 0xfe, 0x1e, 0x3f, 0xff, 0x0f, 0xfc, 0x3c,
0x3f, 0xf0, 0xfe, 0x1c, 0x3f, 0x0f, 0x88, 0x7c, 0x3f, 0xfe, 0x1e, 0x1f, 0xff, 0x0f, 0xfc, 0x3c,
0x3f, 0xf0, 0xfe, 0x1c, 0x3f, 0x0f, 0x80, 0xfc, 0x3f, 0xfe, 0x1e, 0x1f, 0xff, 0x87, 0xfc, 0x3c,
0x3f, 0xf0, 0xfe, 0x1c, 0x3f, 0x0f, 0xc0, 0xfc, 0x3f, 0xfe, 0x1f, 0x0f, 0xf7, 0x83, 0xf8, 0x7c,
0x3f, 0xf0, 0xfe, 0x1c, 0x3f, 0x0f, 0xc0, 0xfc, 0x3f, 0xfe, 0x1f, 0x80, 0x07, 0xc0, 0x00, 0xfc,
0x3f, 0xf0, 0xfe, 0x1c, 0x3f, 0x0f, 0xe1, 0xfc, 0x3f, 0xfe, 0x1f, 0xc0, 0x07, 0xe0, 0x01, 0xfc,
0x1f, 0xf0, 0xfe, 0x1c, 0x3f, 0x0f, 0xe1, 0xfc, 0x3f, 0xfe, 0x1f, 0xe0, 0x0f, 0xf8, 0x03, 0xf8,
0x1f, 0xff, 0xff, 0xff, 0xff, 0xff, 0xe3, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xf8,
0x0f, 0xff, 0xff, 0xff, 0xff, 0xff, 0xe3, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xf0,
0x07, 0xff, 0xff, 0xff, 0xff, 0xff, 0xc3, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xe0,
0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0x07, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
]
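Each row of these icon tables packs a monochrome scanline as hex bytes, most significant bit first. A minimal sketch of turning such a row into ASCII art (the function name is mine, not from the repo):

```python
def render_row(row_bytes, on="#", off=" "):
    # Expand each byte MSB-first into on/off characters.
    cells = []
    for byte in row_bytes:
        for bit in range(7, -1, -1):
            cells.append(on if (byte >> bit) & 1 else off)
    return "".join(cells)

# First scanline of icon_wifi: 0x1f, 0xe0 -> 0001111111100000
print(render_row([0x1F, 0xE0]))  # "   ########     "
```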
# --- rng.py (mohdAkibUddin/DockerSimplePython, MIT) ---
import random
def generateRandomNumber(minimum, maximum):
    """Return a random integer N with minimum <= N <= maximum (both inclusive)."""
    result = random.randint(minimum, maximum)
    return result
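`generateRandomNumber` wraps `random.randint`, which includes both endpoints. A quick self-contained sketch demonstrating that; the helper is re-declared here (snake_case, my naming) so the snippet runs on its own, and the seed is optional:

```python
import random

def generate_random_number(minimum, maximum):
    # Mirrors rng.generateRandomNumber: inclusive on both bounds.
    return random.randint(minimum, maximum)

random.seed(0)  # seeding is optional; used here only for reproducibility
draws = [generate_random_number(1, 6) for _ in range(1000)]
assert all(1 <= d <= 6 for d in draws)
```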
# --- Try & Except & Finialy.py (sivacheetas/python_basic, MIT) ---
##def askint():
## try:
## val = int(input("Please enter an integer: "))
## except:
## print ("Looks like you did not enter an integer!")
##
## finally:
## print ("Finally, I executed!")
## print ("Given Input is ",val)
##
##askint()
def askint1():
try:
val = int(input("Please enter an integer: "))
    except ValueError:
print ("Looks like you did not enter an integer!")
val = int(input("Try again-Please enter an integer: "))
finally:
print ("Finally, I executed!")
print(val)
askint1()
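The `except` branch above only survives one bad input; if the retry also fails, the uncaught error reaches the `finally` block and `print(val)` raises a `NameError`. A loop-based variant (my sketch, not part of the original file) keeps asking until parsing succeeds, with an injectable reader so it can run without a console:

```python
def ask_int(prompt, read=input):
    # `read` is injectable so the loop is testable without a real console.
    while True:
        try:
            return int(read(prompt))
        except ValueError:
            print("Looks like you did not enter an integer!")

# Canned answers instead of keyboard input:
answers = iter(["abc", "", "42"])
value = ask_int("Please enter an integer: ", read=lambda _: next(answers))
print(value)  # 42
```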
# --- Python/preprocessing/circleselector/__init__.py (kokizzu/OmniPhotos, Apache-2.0) ---
import circleselector.cv_utils
import circleselector.datatypes
import circleselector.loader
import circleselector.metrics
import circleselector.plotting_utils
# --- pyjo/fields/__init__.py (marcopaz/pyjo, MIT) ---
from pyjo.fields.field import Field
from pyjo.fields.enumfield import EnumField
from pyjo.fields.rangefield import RangeField
from pyjo.fields.regexfield import RegexField
from pyjo.fields.datetimefield import DatetimeField
from pyjo.fields.listfield import ListField
from pyjo.fields.mapfield import MapField
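These imports suggest a declarative field API; a toy sketch of what a `RangeField`-style check might look like (purely illustrative, not pyjo's actual implementation):

```python
class RangeField:
    # Toy stand-in: accepts values inside [minimum, maximum].
    def __init__(self, minimum, maximum):
        self.minimum = minimum
        self.maximum = maximum

    def validate(self, value):
        return self.minimum <= value <= self.maximum

age = RangeField(0, 130)
print(age.validate(42), age.validate(200))  # True False
```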
# --- example_catalogs/directories/02/__init__.py (kmerenkov/dbup, MIT) ---
class Stage(object):
    def up(self, session):
        # Apply this migration step.
        session.execute("insert into test values (1);")

    def down(self, session):
        # Revert this migration step.
        session.execute("delete from test where col1=1;")
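A sketch of how a migration runner might drive this stage; the `FakeSession` recorder is my addition, and `Stage` is re-declared so the snippet runs standalone:

```python
class Stage(object):
    # Mirror of the stage above, re-declared so this snippet is self-contained.
    def up(self, session):
        session.execute("insert into test values (1);")
    def down(self, session):
        session.execute("delete from test where col1=1;")

class FakeSession:
    # Records SQL statements instead of touching a database.
    def __init__(self):
        self.statements = []
    def execute(self, sql):
        self.statements.append(sql)

session = FakeSession()
stage = Stage()
stage.up(session)
stage.down(session)
print(session.statements)
```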
# --- test_yuk.py (okken/pytest-yuk, MIT) ---
import pytest

@pytest.mark.yuk
def test_pass():
    assert 1 == 1


@pytest.mark.yuk
def test_fail():
    assert 1 == 2


def test_pass_unmarked():
    assert 1 == 1


def test_fail_unmarked():
    assert 1 == 2
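Recent pytest versions warn about unregistered marks like `yuk`. A hedged `conftest.py` sketch that registers it (the marker description text is my assumption):

```python
# conftest.py (sketch)
def pytest_configure(config):
    config.addinivalue_line("markers", "yuk: tests expected to be unpleasant")
```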
# --- moderation_module/punishment/commands/__init__.py (alentoghostflame/StupidAlentoBot, MIT) ---
from moderation_module.punishment.commands.mod_control import send_list_embed, add_role, remove_role, set_role
from moderation_module.punishment.commands.punish import warn_cmd, mute_cmd, delete_message_and_warn
from moderation_module.punishment.commands.word_ban_control import word_ban_control
# --- ransac/__init__.py (romack77/ransac, MIT) ---
from ransac.estimators.jlinkage import JLinkage
from ransac.estimators.ransac import Ransac
from ransac.estimators.ransac import RansacHypothesis
from ransac.estimators.ransac import calculate_ransac_iterations
from ransac.estimators.xransac import calculate_xransac_iterations
from ransac.estimators.xransac import XRansac
from ransac.estimators.xransac import MultiRansacResult
from ransac.models.exceptions import DegenerateModelException
from ransac.models.base import Model
from ransac.models.least_squares import LeastSquaresModel
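`calculate_ransac_iterations` presumably implements the standard RANSAC trial-count bound. A sketch of that textbook formula (this is the classic derivation, not necessarily this package's exact code):

```python
import math

def ransac_iterations(success_prob, inlier_ratio, sample_size):
    # N >= log(1 - p) / log(1 - w**s): number of random samples needed so
    # that, with probability p, at least one sample contains only inliers.
    all_inlier_prob = inlier_ratio ** sample_size
    return math.ceil(math.log(1.0 - success_prob) / math.log(1.0 - all_inlier_prob))

print(ransac_iterations(0.99, 0.5, 2))  # 17: line fit (2 points) with 50% inliers
```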
# --- package/tests/test_cp/test_openstack/test_domain/test_services/test_neutron/test_neutron_network_service.py (QualiSystems/OpenStack-Shell, ISC) ---
from cloudshell.cp.openstack.domain.services.neutron.neutron_network_service import NeutronNetworkService
import cloudshell.cp.openstack.domain.services.neutron.neutron_network_service as test_neutron_network_service
from unittest import TestCase
from mock import Mock
class TestNeutronNetworkService(TestCase):
def setUp(self):
self.network_service = NeutronNetworkService()
self.mock_logger = Mock()
self.openstack_session = Mock()
self.moc_cp_model = Mock()
def test_create_or_get_network_with_segmentation_id_no_conflict(self):
"""
Tests a successful operation of network creation with no NetCreateConflict error
:return:
"""
test_segmentation_id = '42'
mock_client = Mock()
test_neutron_network_service.neutron_client.Client = Mock(return_value=mock_client)
mock_client.create_network = Mock(return_value={'network':'test_network'})
result = self.network_service.create_or_get_network_with_segmentation_id(openstack_session=self.openstack_session,
segmentation_id=test_segmentation_id,
cp_resource_model=self.moc_cp_model,
logger=self.mock_logger)
self.assertEqual(result, 'test_network')
def test_create_or_get_network_with_segmentation_id_conflict(self):
test_segmentation_id = '42'
mock_client = Mock()
test_neutron_network_service.neutron_client.Client = Mock(return_value=mock_client)
mock_client.create_network = Mock(side_effect=test_neutron_network_service.NetCreateConflict)
mock_client.list_networks = Mock(return_value={'networks': ['test_network']})
result = self.network_service.create_or_get_network_with_segmentation_id(openstack_session=self.openstack_session,
segmentation_id=test_segmentation_id,
cp_resource_model=self.moc_cp_model,
logger=self.mock_logger)
self.assertEqual(result, 'test_network')
def test_get_network_with_segmentation_id_valid_network(self):
test_segmentation_id = '42'
mock_client = Mock()
test_neutron_network_service.neutron_client.Client = Mock(return_value=mock_client)
mock_client.list_networks = Mock(return_value={'networks': ['test_network']})
result = self.network_service.get_network_with_segmentation_id(openstack_session=self.openstack_session,
segmentation_id=test_segmentation_id,
logger=self.mock_logger)
self.assertEqual(result, 'test_network')
def test_get_network_with_segmentation_id_no_network(self):
test_segmentation_id = '42'
mock_client = Mock()
test_neutron_network_service.neutron_client.Client = Mock(return_value=mock_client)
mock_client.list_networks = Mock(return_value={'networks': []})
result = self.network_service.get_network_with_segmentation_id(openstack_session=self.openstack_session,
segmentation_id=test_segmentation_id,
logger=self.mock_logger)
self.assertEqual(result, None)
def test_valid_cidr_returned(self):
mock_client = Mock()
test_neutron_network_service.neutron_client.Client = Mock(return_value=mock_client)
mock_client.create_subnet = Mock(return_value={'subnet': 'subnet success'})
mock_return_subnets = {'subnets': [{'cidr': '10.0.0.0/24', 'id': 'test-id-1'},
{'cidr': '10.0.1.0/24', 'id': 'test-id-2'}]}
test_reserved_subnets = '172.0.0.0/8, 192.168.0.0/24'
mock_client.list_subnets = Mock(return_value=mock_return_subnets)
result = self.network_service._get_unused_cidr(client=mock_client,
cp_resvd_cidrs=test_reserved_subnets,
logger=self.mock_logger)
self.assertEqual(result, '10.0.2.0/24')
def test_none_cidr_returned(self):
mock_client = Mock()
test_neutron_network_service.neutron_client.Client = Mock(return_value=mock_client)
mock_client.create_subnet = Mock(return_value={'subnet': 'subnet success'})
mock_return_subnets = {'subnets': [{'cidr': '10.0.0.0/24', 'id': 'test-id-1'},
{'cidr': '10.0.1.0/24', 'id': 'test-id-2'}]}
test_reserved_subnets = '10.0.0.0/8, 172.16.0.0/12 , 192.168.0.0/16'
mock_client.list_subnets = Mock(return_value=mock_return_subnets)
result = self.network_service._get_unused_cidr(client=mock_client,
cp_resvd_cidrs=test_reserved_subnets,
logger=self.mock_logger)
self.assertEqual(result, None)
def test_empty_reserved_networks(self):
mock_client = Mock()
test_neutron_network_service.neutron_client.Client = Mock(return_value=mock_client)
mock_client.create_subnet = Mock(return_value={'subnet': 'subnet success'})
mock_return_subnets = {'subnets': [{'cidr': '10.0.0.0/24', 'id': 'test-id-1'},
{'cidr': '10.0.1.0/24', 'id': 'test-id-2'}]}
test_reserved_subnets = ''
mock_client.list_subnets = Mock(return_value=mock_return_subnets)
result = self.network_service._get_unused_cidr(client=mock_client,
cp_resvd_cidrs=test_reserved_subnets,
logger=self.mock_logger)
self.assertEqual(result, '10.0.2.0/24')
def test_reserved_networks_one_empty_entry(self):
mock_client = Mock()
test_neutron_network_service.neutron_client.Client = Mock(return_value=mock_client)
mock_client.create_subnet = Mock(return_value={'subnet': 'subnet success'})
mock_return_subnets = {'subnets': [{'cidr': '10.0.0.0/24', 'id': 'test-id-1'},
{'cidr': '10.0.1.0/24', 'id': 'test-id-2'}]}
test_reserved_subnets = '172.16.0.0/12,,192.168.0.0/16'
mock_client.list_subnets = Mock(return_value=mock_return_subnets)
result = self.network_service._get_unused_cidr(client=mock_client,
cp_resvd_cidrs=test_reserved_subnets,
logger=self.mock_logger)
self.assertEqual(result, '10.0.2.0/24')
def test_create_and_attach_subnet_to_net_success(self):
test_net_id = 'test-net-id'
mock_client = Mock()
test_neutron_network_service.neutron_client.Client = Mock(return_value=mock_client)
mock_client.create_subnet = Mock(return_value={'subnet':'subnet success'})
mock_return_subnets = {'subnets':[{'cidr': '192.168.1.0/24', 'id':'test-id-1'},
{'cidr': '192.168.1.0/24', 'id': 'test-id-2'}]}
test_reserved_subnets = '10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/24'
mock_client.list_subnets = Mock(return_value=mock_return_subnets)
cp_resource_model = Mock()
cp_resource_model.reserved_networks = test_reserved_subnets
# self.network_service._get_unused_cidr = Mock(return_value = '10.0.0.0/24')
result = self.network_service.create_and_attach_subnet_to_net(openstack_session=self.openstack_session,
cp_resource_model=cp_resource_model,
net_id=test_net_id,
logger=self.mock_logger)
self.assertEqual(result, 'subnet success')
def test_create_and_attach_subnet_to_net_return_none(self):
test_net_id = 'test-net-id'
mock_client = Mock()
test_neutron_network_service.neutron_client.Client = Mock(return_value=mock_client)
mock_client.create_subnet = Mock(side_effect=Exception)
self.network_service._get_unused_cidr = Mock(return_value = '10.0.0.0/24')
with self.assertRaises(Exception) as context:
result = self.network_service.create_and_attach_subnet_to_net(openstack_session=self.openstack_session,
cp_resource_model=Mock(),
net_id=test_net_id,
logger=self.mock_logger)
self.assertTrue(context)
def test_create_floating_ip_success(self):
mock_client = Mock()
test_neutron_network_service.neutron_client.Client = Mock(return_value=mock_client)
test_network_id = 'test_network_id'
test_subnet_id = 'test_subnet_id'
test_result_subnet_dict = {'subnets': [{'network_id':test_network_id}]}
mock_client.list_subnets = Mock(return_value=test_result_subnet_dict)
test_floating_ip = '1.2.3.4'
test_floating_ip_dict = {'floatingip':test_floating_ip}
mock_client.create_floatingip = Mock(return_value=test_floating_ip_dict)
result = self.network_service.create_floating_ip(openstack_session=self.openstack_session,
floating_ip_subnet_id=test_subnet_id,
logger=self.mock_logger)
floating_ip_call_dict = {'floatingip': {'floating_network_id':test_network_id, 'subnet_id':test_subnet_id}}
mock_client.create_floatingip.assert_called_with(floating_ip_call_dict)
self.assertEqual(result, test_floating_ip)
def test_create_floating_ip_returns_None(self):
mock_client = Mock()
test_neutron_network_service.neutron_client.Client = Mock(return_value=mock_client)
test_network_id = 'test_network_id'
test_subnet_id = 'test_subnet_id'
test_result_subnet_dict = {'subnets': [{'network_id':test_network_id}]}
mock_client.list_subnets = Mock(return_value=test_result_subnet_dict)
mock_client.create_floatingip = Mock(return_value={})
result = self.network_service.create_floating_ip(openstack_session=self.openstack_session,
floating_ip_subnet_id=test_subnet_id,
logger=self.mock_logger)
floating_ip_call_dict = {'floatingip': {'floating_network_id':test_network_id, 'subnet_id':test_subnet_id}}
mock_client.create_floatingip.assert_called_with(floating_ip_call_dict)
self.assertEqual(result, None)
def test_delete_floating_ip_success(self):
mock_client = Mock()
test_neutron_network_service.neutron_client.Client = Mock(return_value=mock_client)
test_floating_ip = '1.2.3.4'
test_floating_ip_id = 'test_floating_id'
mock_list_result_dict = {'floatingips': [{'id': test_floating_ip_id}]}
mock_client.list_floatingips = Mock(return_value=mock_list_result_dict)
mock_client.delete_floatingip = Mock()
result = self.network_service.delete_floating_ip(openstack_session=self.openstack_session,
floating_ip=test_floating_ip,
logger=self.mock_logger)
mock_client.delete_floatingip.assert_called_with(test_floating_ip_id)
self.assertTrue(result)
def test_delete_floating_ip_false(self):
mock_client = Mock()
test_neutron_network_service.neutron_client.Client = Mock(return_value=mock_client)
test_floating_ip = ''
mock_client.delete_floatingip = Mock()
result = self.network_service.delete_floating_ip(openstack_session=self.openstack_session,
floating_ip=test_floating_ip,
logger=self.mock_logger)
mock_client.delete_floatingip.assert_not_called()
        self.assertFalse(result)
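The `_get_unused_cidr` expectations above (the first free /24 inside 10.0.0.0/8 that avoids both existing subnets and the reserved ranges, with empty entries in the reserved list tolerated) can be sketched with the stdlib `ipaddress` module. Names and the 10.0.0.0/8 pool are my assumptions, not the plugin's code:

```python
import ipaddress

def first_unused_cidr(used_cidrs, reserved_cidrs, pool="10.0.0.0/8", prefix=24):
    # Collect every blocked network, skipping blank entries like "a,,b".
    blocked = [ipaddress.ip_network(c.strip())
               for c in used_cidrs + reserved_cidrs if c.strip()]
    for candidate in ipaddress.ip_network(pool).subnets(new_prefix=prefix):
        if not any(candidate.overlaps(net) for net in blocked):
            return str(candidate)
    return None

print(first_unused_cidr(["10.0.0.0/24", "10.0.1.0/24"],
                        ["172.0.0.0/8", "192.168.0.0/24"]))  # 10.0.2.0/24
```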
# --- src/app/service/models/__init__.py (serious-notreally/cappa, MIT) ---
from .menu import *
from .site import *
# --- splitp/__init__.py (js51/SplitsPy, MIT) ---
name = "splitp"
from splitp.nx_tree import *
from splitp.parsers import *
from splitp.tree_helper_functions import *
from splitp.squangles import *
from splitp.enums import *
from splitp.tree_reconstruction import *

# --- wrappers/python/tests/pool/test_refresh_pool_ledger.py (absltkaos/indy-sdk, Apache-2.0) ---
import pytest
from indy.pool import refresh_pool_ledger


@pytest.mark.asyncio
async def test_refresh_pool_ledger_works(pool_handle):
    await refresh_pool_ledger(pool_handle)
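`@pytest.mark.asyncio` (from the pytest-asyncio plugin) runs the coroutine in an event loop for you. Without the plugin, the equivalent manual harness looks roughly like this, with a hypothetical stand-in for the indy call:

```python
import asyncio

async def refresh_pool_ledger(pool_handle):
    # Hypothetical stand-in for indy.pool.refresh_pool_ledger.
    await asyncio.sleep(0)
    return pool_handle

result = asyncio.run(refresh_pool_ledger("pool-1"))
print(result)  # pool-1
```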
# --- Classification Model.py (itsnika/Carcinoma, MIT) ---
########################################################## CARCINOMA ##########################################################
################################### LIBRARIES AND MODULES TO BE USED THROUGH ALL THE MODELS ###################################
# Importing the libraries and the modules needed
import tensorflow as tf
import tensorflow.python.framework.dtypes
import keras
import keras.backend
from keras.models import Sequential
from tensorflow.keras import layers
from keras.layers import Dense
from keras.optimizers import Adam
from keras.callbacks import EarlyStopping
import numpy
import pandas as pd
import sklearn
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
import matplotlib
from matplotlib import pyplot as plt
from keras.utils import to_categorical
from keras.layers import BatchNormalization
from keras.layers import Dropout
from keras import regularizers
from sklearn.metrics import roc_curve
from sklearn.metrics import auc
################################################### LOGISTIC REGRESSION MODEL ##################################################
# Reading the two CSV files in the same folder into pandas DataFrames
features = pd.read_csv('matrix_of_features_x.csv')
labels = pd.read_csv('matrix_of_labels_y.csv')
# Standardizing the features to zero mean and unit variance
features = preprocessing.scale(features)
# Splitting features and labels into training and testing sets, holding out 20% of the data for testing
xtr, xts, ytr, yts = train_test_split(features, labels, test_size = 0.2)
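Note that `train_test_split` shuffles differently on every run unless a seed is fixed, so the metrics below will vary between runs. A minimal sketch of a reproducible 80/20 split with toy data (the `_toy` names are illustrative, not part of this script):

```python
import numpy as np
from sklearn.model_selection import train_test_split

X_toy = np.arange(20).reshape(10, 2)  # 10 samples, 2 features
y_toy = np.arange(10) % 2             # toy binary labels

# random_state pins the shuffle, so the same rows land in the test set every run
xtr_toy, xts_toy, ytr_toy, yts_toy = train_test_split(
    X_toy, y_toy, test_size=0.2, random_state=42
)
print(xts_toy.shape)  # (2, 2) -- 20% of 10 samples held out
```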
# Constructing the Logistic Regression model using a Neural Network
model = Sequential()
model.add(Dense(21, input_shape = (30, ), activation = 'relu'))
model.add(Dense(1, activation = 'sigmoid'))
model.compile(loss = 'binary_crossentropy', optimizer = Adam(lr = 0.001), metrics = ['accuracy'])
# Defining an early stopper that halts training once validation loss stops improving for 15 consecutive epochs
estop = EarlyStopping(monitor = 'val_loss', min_delta = 0, patience = 15, verbose = 1, mode = 'min')
fitted_model = model.fit(xtr, ytr, epochs = 2000, validation_split = 0.15, verbose = 0, callbacks = [estop])
history = fitted_model.history
print(fitted_model.history.keys())
# Plotting the loss of Training and Validation dataframes over the epochs
loss = history['loss']
val_loss = history['val_loss']
plt.figure()
plt.plot(loss, 'r', label = 'Training Loss')
plt.plot(val_loss, 'b', label = 'Validation Loss')
plt.legend()
plt.ylabel("Loss")
plt.xlabel("Epochs")
# Plotting the accuracy of Training and Validation dataframes over the epochs
acc = history['accuracy']
val_acc = history['val_accuracy']
plt.figure()
plt.plot(acc, 'r', label = 'Training Accuracy')
plt.plot(val_acc, 'b', label = 'Validation Accuracy')
plt.legend()
plt.ylabel("Accuracy")
plt.xlabel("Epochs")
# Evaluating loss and accuracy on the test data
loss, acc = model.evaluate(xts, yts)
print("Testing Data Loss: ", loss)
print("Testing Data Accuracy: ", acc)
# Calculating the AUC score of Testing data
yts_pred = model.predict_proba(xts)
fal_pos_rate, tru_pos_rate, thresh = roc_curve(yts, yts_pred)
auc_krs = auc(fal_pos_rate, tru_pos_rate)
print('Testing Data AUC: ', auc_krs)
# Plotting the ROC curve of Testing data
plt.figure(1)
plt.plot([0, 1], [0, 1], 'k--')
plt.plot(fal_pos_rate, tru_pos_rate, label = 'Keras (area = {:.3f})'.format(auc_krs))
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
plt.legend(loc = 'best')
plt.show()
# Calculating the AUC score of Training data
ytr_pred = model.predict_proba(xtr)
fal_pos_rate, tru_pos_rate, thresh = roc_curve(ytr, ytr_pred)
auc_krs = auc(fal_pos_rate, tru_pos_rate)
print('Training Data AUC: ', auc_krs)
# Plotting the ROC curve of Training data
plt.figure(1)
plt.plot([0, 1], [0, 1], 'k--')
plt.plot(fal_pos_rate, tru_pos_rate, label='Keras (area = {:.3f})'.format(auc_krs))
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
plt.legend(loc = 'best')
plt.show()
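As a sanity check on the trapezoidal `auc(fal_pos_rate, tru_pos_rate)` computation used above, sklearn's `roc_auc_score` returns the same value directly from labels and scores. A small sketch with made-up labels and scores (not the model's actual predictions):

```python
import numpy as np
from sklearn.metrics import roc_curve, auc, roc_auc_score

y_true = np.array([0, 0, 1, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8])

fpr, tpr, _ = roc_curve(y_true, y_score)
print(auc(fpr, tpr))                   # 0.75 -- trapezoidal area under the ROC curve
print(roc_auc_score(y_true, y_score))  # 0.75 -- same score computed directly
```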
############################################# SOFTMAX REGRESSION MODEL #########################################################
# Converting the matrix of labels y into one-hot (categorical) form
ytr_categ = to_categorical(ytr)
print(ytr)
print(ytr_categ)
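`to_categorical` simply one-hot encodes the integer class labels. A minimal numpy equivalent, as an illustrative sketch (`one_hot` here is a hypothetical helper, not a Keras function):

```python
import numpy as np

def one_hot(labels, num_classes=None):
    """One-hot encode a 1-D array of integer class labels, like keras.utils.to_categorical."""
    labels = np.asarray(labels, dtype=int)
    if num_classes is None:
        num_classes = labels.max() + 1
    encoded = np.zeros((labels.size, num_classes))
    encoded[np.arange(labels.size), labels] = 1.0
    return encoded

print(one_hot([0, 1, 1, 0]))
# [[1. 0.]
#  [0. 1.]
#  [0. 1.]
#  [1. 0.]]
```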
# Constructing the Softmax Regression Model using a Neural Network
model = Sequential()
model.add(Dense(21, input_shape = (30, ), activation = 'softmax'))
model.add(Dense(2, activation = 'softmax'))
model.compile(loss = 'categorical_crossentropy', optimizer = Adam(lr = 0.0001), metrics = ['accuracy'])
# Defining an early stopper that halts training once validation loss stops improving for 15 consecutive epochs
estop = EarlyStopping(monitor = 'val_loss', min_delta = 0, patience = 15, verbose = 1, mode = 'min')
fitted_model = model.fit(xtr, ytr_categ, epochs = 2000, validation_split = 0.15, verbose = 0, callbacks = [estop])
history = fitted_model.history
print(fitted_model.history.keys())
# Plotting the loss of Training and Validation dataframes over the epochs
loss = history['loss']
val_loss = history['val_loss']
plt.figure()
plt.plot(loss, 'r', label = 'Training Loss')
plt.plot(val_loss, 'b', label = 'Validation Loss')
plt.legend()
plt.ylabel("Loss")
plt.xlabel("Epochs")
# Plotting the accuracy of Training and Validation dataframes over the epochs
acc = history['accuracy']
val_acc = history['val_accuracy']
plt.figure()
plt.plot(val_acc, 'r', label = 'Validation Accuracy')
plt.plot(acc, 'b', label = 'Training Accuracy')
plt.legend()
plt.ylabel("Accuracy")
plt.xlabel("Epochs")
# Evaluating loss and accuracy on the test data
yts_cat = to_categorical(yts)
loss, acc = model.evaluate(xts, yts_cat)
print("Test Loss: ", loss)
print("Test Accuracy: ", acc)
# Calculating the AUC score of Testing data
yts_pred = model.predict_proba(xts)
fal_pos_rate, tru_pos_rate, thresh = roc_curve(yts, yts_pred[:,1])
auc_krs = auc(fal_pos_rate, tru_pos_rate)
print('Testing data AUC: ', auc_krs)
# Plotting the ROC curve of Testing data
plt.figure(1)
plt.plot([0, 1], [0, 1], 'k--')
plt.plot(fal_pos_rate, tru_pos_rate, label = 'Keras (area = {:.3f})'.format(auc_krs))
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
plt.legend(loc = 'best')
plt.show()
# Calculating the AUC score of Training data
ytr_pred = model.predict_proba(xtr)
fal_pos_rate, tru_pos_rate, thresh = roc_curve(ytr, ytr_pred[:,1])
auc_krs = auc(fal_pos_rate, tru_pos_rate)
print('Training data AUC: ', auc_krs)
# Plotting the ROC curve of Training data
plt.figure(1)
plt.plot([0, 1], [0, 1], 'k--')
plt.plot(fal_pos_rate, tru_pos_rate, label = 'Keras (area = {:.3f})'.format(auc_krs))
plt.title('ROC Curve')
plt.legend(loc = 'best')
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
###################################### DEEP LEARNING SOFTMAX REGRESSION MODEL ##################################################
# Constructing a Deep Learning Softmax Regression Model using a Neural Network
model = Sequential()
model.add(Dense(21, input_shape = (30, ), activation = 'softmax'))
model.add(Dense(21, activation = 'softmax'))
model.add(Dense(21, activation = 'softmax'))
model.add(Dense(21, activation = 'softmax'))
model.add(Dense(2, activation = 'softmax'))
model.compile(loss = 'categorical_crossentropy', optimizer = Adam(lr = 0.001), metrics = ['accuracy'])
# Defining an early stopper that halts training once validation loss stops improving for 20 consecutive epochs
estop = EarlyStopping(monitor = 'val_loss', min_delta = 0, patience = 20, verbose = 1, mode = 'min')
fitted_model = model.fit(xtr, ytr_categ, epochs = 3000, validation_split = 0.1, shuffle = True, verbose = 0, callbacks = [estop])
history = fitted_model.history
print(fitted_model.history.keys())
# Plotting the loss of Training and Validation dataframes over the epochs
loss = history['loss']
val_loss = history['val_loss']
plt.figure()
plt.plot(loss, 'r', label = 'Training Loss')
plt.plot(val_loss, 'b', label = 'Validation Loss')
plt.legend()
plt.ylabel("Loss")
plt.xlabel("Epochs")
# Plotting the accuracy of Training and Validation dataframes over the epochs
acc = history['accuracy']
val_acc = history['val_accuracy']
plt.figure()
plt.plot(val_acc, 'r', label = 'Validation Accuracy')
plt.plot(acc, 'b', label = 'Training Accuracy')
plt.legend()
plt.ylabel("Accuracy")
plt.xlabel("Epochs")
# Evaluating loss and accuracy on the test data
yts_cat = to_categorical(yts)
loss, acc = model.evaluate(xts, yts_cat)
print("Test Loss: ", loss)
print("Test Accuracy: ", acc)
# Calculating the AUC score of Testing data
yts_pred = model.predict_proba(xts)
fal_pos_rate, tru_pos_rate, thresh = roc_curve(yts, yts_pred[:,1])
auc_krs = auc(fal_pos_rate, tru_pos_rate)
print('Testing data AUC: ', auc_krs)
# Plotting the ROC curve of Testing data
plt.figure(1)
plt.plot([0, 1], [0, 1], 'k--')
plt.plot(fal_pos_rate, tru_pos_rate, label = 'Keras (area = {:.3f})'.format(auc_krs))
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
plt.legend(loc = 'best')
plt.show()
# Calculating the AUC score of Training data
ytr_pred = model.predict_proba(xtr)
fal_pos_rate, tru_pos_rate, thresh = roc_curve(ytr, ytr_pred[:,1])
auc_krs = auc(fal_pos_rate, tru_pos_rate)
print('Training data AUC: ', auc_krs)
# Plotting the ROC curve of Training data
plt.figure(1)
plt.plot([0, 1], [0, 1], 'k--')
plt.plot(fal_pos_rate, tru_pos_rate, label = 'Keras (area = {:.3f})'.format(auc_krs))
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
plt.legend(loc = 'best')
plt.show()
########################################################### THE END ########################################################### | 36.323529 | 129 | 0.698077 | 1,448 | 9,880 | 4.634669 | 0.133287 | 0.03755 | 0.026822 | 0.034868 | 0.79094 | 0.77902 | 0.767546 | 0.767546 | 0.767546 | 0.762778 | 0 | 0.014353 | 0.118522 | 9,880 | 272 | 130 | 36.323529 | 0.756229 | 0.2083 | 0 | 0.769231 | 0 | 0 | 0.171393 | 0.013066 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.123077 | 0 | 0.123077 | 0.087179 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
cdbd13f54687e28715dbac908e37660007e66fb9 | 20,342 | py | Python | tests/test_checks.py | jeremytiki/Tanjun | 9ca8c9412e7f938b01576c958392f38ff761392b | [
"BSD-3-Clause"
] | 87 | 2021-01-28T06:46:02.000Z | 2022-03-22T03:23:38.000Z | tests/test_checks.py | jeremytiki/Tanjun | 9ca8c9412e7f938b01576c958392f38ff761392b | [
"BSD-3-Clause"
] | 54 | 2020-11-23T12:54:21.000Z | 2022-03-31T10:47:24.000Z | tests/test_checks.py | jeremytiki/Tanjun | 9ca8c9412e7f938b01576c958392f38ff761392b | [
"BSD-3-Clause"
] | 16 | 2021-08-07T02:11:15.000Z | 2022-03-14T06:15:33.000Z | # -*- coding: utf-8 -*-
# cython: language_level=3
# BSD 3-Clause License
#
# Copyright (c) 2020-2021, Faster Speeding
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# * Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
# pyright: reportUnknownMemberType=none
# This leads to too many false-positives around mocks.
import typing
from unittest import mock
import hikari
import pytest
import tanjun
@pytest.fixture()
def command() -> tanjun.abc.ExecutableCommand[typing.Any]:
command_ = mock.MagicMock(tanjun.abc.ExecutableCommand)
command_.add_check.return_value = command_
return command_
@pytest.fixture()
def context() -> tanjun.abc.Context:
return mock.MagicMock(tanjun.abc.Context)
class TestInjectableCheck:
@pytest.mark.asyncio()
async def test(self):
mock_callback = mock.Mock()
mock_context = mock.Mock()
with mock.patch.object(
tanjun.injecting, "CallbackDescriptor", return_value=mock.AsyncMock()
) as callback_descriptor:
check = tanjun.checks.InjectableCheck(mock_callback)
callback_descriptor.assert_called_once_with(mock_callback)
result = await check(mock_context)
assert result is callback_descriptor.return_value.resolve_with_command_context.return_value
callback_descriptor.return_value.resolve_with_command_context.assert_awaited_once_with(
mock_context, mock_context
)
@pytest.mark.asyncio()
async def test_when_returns_false(self):
mock_callback = mock.Mock()
mock_context = mock.Mock()
mock_descriptor = mock.AsyncMock()
mock_descriptor.resolve_with_command_context.return_value = False
with mock.patch.object(
tanjun.injecting, "CallbackDescriptor", return_value=mock_descriptor
) as callback_descriptor:
check = tanjun.checks.InjectableCheck(mock_callback)
callback_descriptor.assert_called_once_with(mock_callback)
with pytest.raises(tanjun.errors.FailedCheck):
await check(mock_context)
mock_descriptor.resolve_with_command_context.assert_awaited_once_with(mock_context, mock_context)
class TestOwnerCheck:
@pytest.mark.asyncio()
async def test(self):
mock_dependency = mock.AsyncMock()
mock_dependency.check_ownership.return_value = True
mock_context = mock.Mock()
check = tanjun.checks.OwnerCheck(error_message=None, halt_execution=False)
result = await check(mock_context, mock_dependency)
assert result is True
mock_dependency.check_ownership.assert_awaited_once_with(mock_context.client, mock_context.author)
@pytest.mark.asyncio()
async def test_when_false(self):
mock_dependency = mock.AsyncMock()
mock_dependency.check_ownership.return_value = False
mock_context = mock.Mock()
check = tanjun.checks.OwnerCheck(error_message=None, halt_execution=False)
result = await check(mock_context, mock_dependency)
assert result is False
mock_dependency.check_ownership.assert_awaited_once_with(mock_context.client, mock_context.author)
@pytest.mark.asyncio()
async def test_when_false_and_error_message(self):
mock_dependency = mock.AsyncMock()
mock_dependency.check_ownership.return_value = False
mock_context = mock.Mock()
check = tanjun.checks.OwnerCheck(error_message="aye", halt_execution=False)
with pytest.raises(tanjun.errors.CommandError, match="aye"):
await check(mock_context, mock_dependency)
mock_dependency.check_ownership.assert_awaited_once_with(mock_context.client, mock_context.author)
@pytest.mark.asyncio()
async def test_when_false_and_halt_execution(self):
mock_dependency = mock.AsyncMock()
mock_dependency.check_ownership.return_value = False
mock_context = mock.Mock()
check = tanjun.checks.OwnerCheck(error_message=None, halt_execution=True)
with pytest.raises(tanjun.errors.HaltExecution):
await check(mock_context, mock_dependency)
mock_dependency.check_ownership.assert_awaited_once_with(mock_context.client, mock_context.author)
class TestNsfwCheck:
@pytest.mark.asyncio()
async def test(self):
mock_context = mock.Mock()
mock_context.cache.get_guild_channel.return_value.is_nsfw = True
check = tanjun.checks.NsfwCheck(error_message=None, halt_execution=False)
result = await check(mock_context)
assert result is True
mock_context.cache.get_guild_channel.assert_called_once_with(mock_context.channel_id)
mock_context.rest.fetch_channel.assert_not_called()
@pytest.mark.asyncio()
async def test_when_not_cache_bound(self):
mock_context = mock.Mock(cache=None, rest=mock.AsyncMock())
mock_context.rest.fetch_channel.return_value = mock.Mock(hikari.GuildChannel, is_nsfw=True)
check = tanjun.checks.NsfwCheck(error_message=None, halt_execution=False)
result = await check(mock_context)
assert result is True
mock_context.rest.fetch_channel.assert_awaited_once_with(mock_context.channel_id)
@pytest.mark.asyncio()
async def test_when_rest_returns_dm(self):
mock_context = mock.Mock(cache=None, rest=mock.AsyncMock())
mock_context.rest.fetch_channel.return_value = mock.Mock(hikari.DMChannel)
check = tanjun.checks.NsfwCheck(error_message=None, halt_execution=False)
result = await check(mock_context)
assert result is True
mock_context.rest.fetch_channel.assert_awaited_once_with(mock_context.channel_id)
@pytest.mark.asyncio()
async def test_when_not_cache_bound_when_not_found_in_cache(self):
mock_context = mock.Mock(rest=mock.AsyncMock())
mock_context.cache.get_guild_channel.return_value = None
mock_context.rest.fetch_channel.return_value = mock.Mock(hikari.GuildChannel, is_nsfw=True)
check = tanjun.checks.NsfwCheck(error_message=None, halt_execution=False)
result = await check(mock_context)
assert result is True
mock_context.cache.get_guild_channel.assert_called_once_with(mock_context.channel_id)
mock_context.rest.fetch_channel.assert_awaited_once_with(mock_context.channel_id)
@pytest.mark.asyncio()
async def test_when_false(self):
mock_context = mock.Mock()
mock_context.cache.get_guild_channel.return_value.is_nsfw = None
check = tanjun.checks.NsfwCheck(error_message=None, halt_execution=False)
result = await check(mock_context)
assert result is False
mock_context.cache.get_guild_channel.assert_called_once_with(mock_context.channel_id)
mock_context.rest.fetch_channel.assert_not_called()
@pytest.mark.asyncio()
async def test_when_false_and_error_message(self):
mock_context = mock.Mock()
mock_context.cache.get_guild_channel.return_value.is_nsfw = False
check = tanjun.checks.NsfwCheck(error_message="meow me", halt_execution=False)
with pytest.raises(tanjun.errors.CommandError, match="meow me"):
await check(mock_context)
mock_context.cache.get_guild_channel.assert_called_once_with(mock_context.channel_id)
mock_context.rest.fetch_channel.assert_not_called()
@pytest.mark.asyncio()
async def test_when_false_and_halt_execution(self):
mock_context = mock.Mock(rest=mock.AsyncMock())
mock_context.cache.get_guild_channel.return_value = None
mock_context.rest.fetch_channel.return_value = mock.Mock(hikari.GuildChannel, is_nsfw=False)
check = tanjun.checks.NsfwCheck(error_message=None, halt_execution=True)
with pytest.raises(tanjun.errors.HaltExecution):
await check(mock_context)
mock_context.cache.get_guild_channel.assert_called_once_with(mock_context.channel_id)
mock_context.rest.fetch_channel.assert_awaited_once_with(mock_context.channel_id)
class TestSfwCheck:
@pytest.mark.asyncio()
async def test(self):
mock_context = mock.Mock()
mock_context.cache.get_guild_channel.return_value.is_nsfw = False
check = tanjun.checks.SfwCheck(error_message=None, halt_execution=False)
result = await check(mock_context)
assert result is True
mock_context.cache.get_guild_channel.assert_called_once_with(mock_context.channel_id)
mock_context.rest.fetch_channel.assert_not_called()
@pytest.mark.asyncio()
async def test_when_not_cache_bound(self):
mock_context = mock.Mock(cache=None, rest=mock.AsyncMock())
mock_context.rest.fetch_channel.return_value = mock.Mock(hikari.GuildChannel, is_nsfw=False)
check = tanjun.checks.SfwCheck(error_message=None, halt_execution=False)
result = await check(mock_context)
assert result is True
mock_context.rest.fetch_channel.assert_awaited_once_with(mock_context.channel_id)
@pytest.mark.asyncio()
async def test_when_rest_returns_dm(self):
mock_context = mock.Mock(cache=None, rest=mock.AsyncMock())
mock_context.rest.fetch_channel.return_value = mock.Mock(hikari.DMChannel, is_nsfw=False)
check = tanjun.checks.SfwCheck(error_message=None, halt_execution=False)
result = await check(mock_context)
assert result is True
mock_context.rest.fetch_channel.assert_awaited_once_with(mock_context.channel_id)
@pytest.mark.asyncio()
async def test_when_not_cache_bound_when_not_found_in_cache(self):
mock_context = mock.Mock(rest=mock.AsyncMock())
mock_context.cache.get_guild_channel.return_value = None
mock_context.rest.fetch_channel.return_value = mock.Mock(hikari.GuildChannel, is_nsfw=None)
check = tanjun.checks.SfwCheck(error_message=None, halt_execution=False)
result = await check(mock_context)
assert result is True
mock_context.cache.get_guild_channel.assert_called_once_with(mock_context.channel_id)
mock_context.rest.fetch_channel.assert_awaited_once_with(mock_context.channel_id)
@pytest.mark.asyncio()
async def test_when_false(self):
mock_context = mock.Mock()
mock_context.cache.get_guild_channel.return_value.is_nsfw = True
check = tanjun.checks.SfwCheck(error_message=None, halt_execution=False)
result = await check(mock_context)
assert result is False
mock_context.cache.get_guild_channel.assert_called_once_with(mock_context.channel_id)
mock_context.rest.fetch_channel.assert_not_called()
@pytest.mark.asyncio()
async def test_when_false_and_error_message(self):
mock_context = mock.Mock()
mock_context.cache.get_guild_channel.return_value.is_nsfw = True
check = tanjun.checks.SfwCheck(error_message="meow me", halt_execution=False)
with pytest.raises(tanjun.errors.CommandError, match="meow me"):
await check(mock_context)
mock_context.cache.get_guild_channel.assert_called_once_with(mock_context.channel_id)
mock_context.rest.fetch_channel.assert_not_called()
@pytest.mark.asyncio()
async def test_when_false_and_halt_execution(self):
mock_context = mock.Mock(rest=mock.AsyncMock())
mock_context.cache.get_guild_channel.return_value = None
mock_context.rest.fetch_channel.return_value = mock.Mock(hikari.GuildChannel, is_nsfw=True)
check = tanjun.checks.SfwCheck(error_message=None, halt_execution=True)
with pytest.raises(tanjun.errors.HaltExecution):
await check(mock_context)
mock_context.cache.get_guild_channel.assert_called_once_with(mock_context.channel_id)
mock_context.rest.fetch_channel.assert_awaited_once_with(mock_context.channel_id)
class TestDmCheck:
def test_for_dm(self):
assert tanjun.checks.DmCheck()(mock.Mock(guild_id=None)) is True
def test_for_guild(self):
assert tanjun.checks.DmCheck(halt_execution=False, error_message=None)(mock.Mock(guild_id=3123)) is False
def test_for_guild_when_halt_execution(self):
with pytest.raises(tanjun.HaltExecution):
assert tanjun.checks.DmCheck(halt_execution=True, error_message=None)(mock.Mock(guild_id=3123))
def test_for_guild_when_error_message(self):
with pytest.raises(tanjun.CommandError, match="message"):
assert tanjun.checks.DmCheck(halt_execution=False, error_message="message")(mock.Mock(guild_id=3123))
class TestGuildCheck:
def test_for_guild(self):
assert tanjun.checks.GuildCheck()(mock.Mock(guild_id=123123)) is True
def test_for_dm(self):
assert tanjun.checks.GuildCheck(halt_execution=False, error_message=None)(mock.Mock(guild_id=None)) is False
def test_for_dm_when_halt_execution(self):
with pytest.raises(tanjun.HaltExecution):
tanjun.checks.GuildCheck(halt_execution=True, error_message=None)(mock.Mock(guild_id=None))
def test_for_dm_when_error_message(self):
with pytest.raises(tanjun.CommandError, match="hi"):
tanjun.checks.GuildCheck(halt_execution=False, error_message="hi")(mock.Mock(guild_id=None))
@pytest.mark.skip(reason="Not Implemented")
class TestAuthorPermissionCheck:
...
@pytest.mark.skip(reason="Not Implemented")
class TestOwnPermissionCheck:
...
def test_with_dm_check(command: mock.Mock):
with mock.patch.object(tanjun.checks, "DmCheck") as dm_check:
assert tanjun.checks.with_dm_check(command) is command
command.add_check.assert_called_once_with(dm_check.return_value)
dm_check.assert_called_once_with(halt_execution=False, error_message="Command can only be used in DMs")
def test_with_dm_check_with_keyword_arguments(command: mock.Mock):
with mock.patch.object(tanjun.checks, "DmCheck") as dm_check:
assert tanjun.checks.with_dm_check(halt_execution=True, error_message="message")(command) is command
command.add_check.assert_called_once_with(dm_check.return_value)
dm_check.assert_called_once_with(halt_execution=True, error_message="message")
def test_with_guild_check(command: mock.Mock):
with mock.patch.object(tanjun.checks, "GuildCheck") as guild_check:
assert tanjun.checks.with_guild_check(command) is command
command.add_check.assert_called_once_with(guild_check.return_value)
guild_check.assert_called_once_with(
halt_execution=False, error_message="Command can only be used in guild channels"
)
def test_with_guild_check_with_keyword_arguments(command: mock.Mock):
with mock.patch.object(tanjun.checks, "GuildCheck") as guild_check:
assert tanjun.checks.with_guild_check(halt_execution=True, error_message="eee")(command) is command
command.add_check.assert_called_once_with(guild_check.return_value)
guild_check.assert_called_once_with(halt_execution=True, error_message="eee")
def test_with_nsfw_check(command: mock.Mock):
with mock.patch.object(tanjun.checks, "NsfwCheck", return_value=mock.AsyncMock()) as nsfw_check:
assert tanjun.checks.with_nsfw_check(command) is command
command.add_check.assert_called_once_with(nsfw_check.return_value)
nsfw_check.assert_called_once_with(
halt_execution=False, error_message="Command can only be used in NSFW channels"
)
def test_with_nsfw_check_with_keyword_arguments(command: mock.Mock):
with mock.patch.object(tanjun.checks, "NsfwCheck", return_value=mock.AsyncMock()) as nsfw_check:
assert tanjun.checks.with_nsfw_check(halt_execution=True, error_message="banned!!!")(command) is command
command.add_check.assert_called_once_with(nsfw_check.return_value)
nsfw_check.assert_called_once_with(halt_execution=True, error_message="banned!!!")
def test_with_sfw_check(command: mock.Mock):
with mock.patch.object(tanjun.checks, "SfwCheck", return_value=mock.AsyncMock()) as sfw_check:
assert tanjun.checks.with_sfw_check(command) is command
command.add_check.assert_called_once_with(sfw_check.return_value)
sfw_check.assert_called_once_with(
halt_execution=False, error_message="Command can only be used in SFW channels"
)
def test_with_sfw_check_with_keyword_arguments(command: mock.Mock):
with mock.patch.object(tanjun.checks, "SfwCheck", return_value=mock.AsyncMock()) as sfw_check:
assert tanjun.checks.with_sfw_check(halt_execution=True, error_message="bango")(command) is command
command.add_check.assert_called_once_with(sfw_check.return_value)
sfw_check.assert_called_once_with(halt_execution=True, error_message="bango")
def test_with_owner_check(command: mock.Mock):
with mock.patch.object(tanjun.checks, "OwnerCheck") as owner_check:
assert tanjun.checks.with_owner_check(command) is command
command.add_check.assert_called_once_with(owner_check.return_value)
owner_check.assert_called_once_with(halt_execution=False, error_message="Only bot owners can use this command")
def test_with_owner_check_with_keyword_arguments(command: mock.Mock):
mock_check = object()
with mock.patch.object(tanjun.checks, "OwnerCheck", return_value=mock_check) as owner_check:
result = tanjun.checks.with_owner_check(
halt_execution=True,
error_message="dango",
)(command)
assert result is command
command.add_check.assert_called_once_with(owner_check.return_value)
owner_check.assert_called_once_with(halt_execution=True, error_message="dango")
def test_with_author_permission_check(command: mock.Mock):
with mock.patch.object(tanjun.checks, "AuthorPermissionCheck") as author_permission_check:
assert (
tanjun.checks.with_author_permission_check(435213, halt_execution=True, error_message="bye")(command)
is command
)
command.add_check.assert_called_once_with(author_permission_check.return_value)
author_permission_check.assert_called_once_with(435213, halt_execution=True, error_message="bye")
def test_with_own_permission_check(command: mock.Mock):
with mock.patch.object(tanjun.checks, "OwnPermissionCheck") as own_permission_check:
assert (
tanjun.checks.with_own_permission_check(5412312, halt_execution=True, error_message="hi")(command)
is command
)
command.add_check.assert_called_once_with(own_permission_check.return_value)
own_permission_check.assert_called_once_with(5412312, halt_execution=True, error_message="hi")
def test_with_check(command: mock.Mock):
mock_check = mock.Mock()
result = tanjun.checks.with_check(mock_check)(command)
assert result is command
command.add_check.assert_called_once_with(mock_check)
| 42.29106 | 119 | 0.747714 | 2,702 | 20,342 | 5.329016 | 0.094745 | 0.085562 | 0.041114 | 0.051392 | 0.855059 | 0.81964 | 0.786513 | 0.761928 | 0.735259 | 0.714702 | 0 | 0.003248 | 0.167486 | 20,342 | 480 | 120 | 42.379167 | 0.847003 | 0.081162 | 0 | 0.640379 | 0 | 0 | 0.02696 | 0.001126 | 0 | 0 | 0 | 0 | 0.280757 | 1 | 0.072555 | false | 0 | 0.015773 | 0.003155 | 0.119874 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
cdbfa1708fedfa8ca9d94cbb23c0e98f0a7f533f | 13,082 | py | Python | tests/test_with_shared_empty_database/test_task_sync_scraped_data.py | a-luna/vigorish | 6cede5ced76c7d2c9ad0aacdbd2b18c2f1ee4ee6 | [
"MIT"
] | 2 | 2021-07-15T13:53:33.000Z | 2021-07-25T17:03:29.000Z | tests/test_with_shared_empty_database/test_task_sync_scraped_data.py | a-luna/vigorish | 6cede5ced76c7d2c9ad0aacdbd2b18c2f1ee4ee6 | [
"MIT"
] | 650 | 2019-05-18T07:00:12.000Z | 2022-01-21T19:38:55.000Z | tests/test_with_shared_empty_database/test_task_sync_scraped_data.py | a-luna/vigorish | 6cede5ced76c7d2c9ad0aacdbd2b18c2f1ee4ee6 | [
"MIT"
] | 2 | 2020-03-28T21:01:31.000Z | 2022-01-06T05:16:11.000Z | from collections import namedtuple
from datetime import datetime, timedelta, timezone
from pathlib import Path
from unittest.mock import call, patch, PropertyMock
from click.testing import CliRunner
from tests.conftest import ROOT_FOLDER
from vigorish.cli.vig import cli
from vigorish.enums import DataSet, SyncDirection, VigFile
from vigorish.util.datetime_util import dtaware_fromtimestamp
MLB_SEASON = 2017
S3ObjectMock = namedtuple("S3ObjectMock", ["key", "size", "last_modified"])
def get_s3_key(vig_app, file_id, file_type, data_set):
s3_folder = vig_app.scraped_data.file_helper.get_s3_folderpath(file_type, data_set, year=MLB_SEASON)
return f"{s3_folder}/{file_id}.json"
def get_local_filepath(vig_app, file_id, file_type, data_set):
config_folder = vig_app.scraped_data.file_helper.get_local_folderpath(file_type, data_set, year=MLB_SEASON)
local_folder = config_folder if Path(config_folder).is_absolute() else ROOT_FOLDER.joinpath(config_folder)
return Path(local_folder).joinpath(f"{file_id}.json")
def create_s3_object_mock(vig_app, file_id, file_type, data_set, mod_size=0, mod_mtime=None):
s3_key = get_s3_key(vig_app, file_id, file_type, data_set)
local_file = get_local_filepath(vig_app, file_id, file_type, data_set)
size = local_file.stat().st_size
if mod_size:
size += mod_size
last_modified = dtaware_fromtimestamp(local_file.stat().st_mtime, use_tz=timezone.utc)
if mod_mtime:
last_modified += mod_mtime
return S3ObjectMock(s3_key, size, last_modified)
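The `mod_mtime` offsets passed to `create_s3_object_mock` below shift the mocked S3 timestamp relative to the local file's mtime: a negative `timedelta` makes the S3 copy appear older than the local file, a positive one makes it appear newer. A quick sketch of that arithmetic (the timestamp value here is made up for illustration):

```python
from datetime import datetime, timedelta, timezone

local_mtime = datetime(2017, 5, 26, 12, 0, tzinfo=timezone.utc)  # hypothetical local mtime
stale_s3 = local_mtime + timedelta(hours=-15)  # mocked S3 copy is older than the local file
fresh_s3 = local_mtime + timedelta(days=3)     # mocked S3 copy is newer than the local file
assert stale_s3 < local_mtime < fresh_s3
```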
def create_call_result(vig_app, file_id, sync_direction, file_type, data_set):
local_file = str(get_local_filepath(vig_app, file_id, file_type, data_set))
s3_key = get_s3_key(vig_app, file_id, file_type, data_set)
return call(sync_direction, local_file, s3_key)
def create_br_daily_objects_mock_data(vig_app):
FILE_TYPE = VigFile.PARSED_JSON
DATA_SET = DataSet.BBREF_GAMES_FOR_DATE
# For all three files, the S3 and local versions are identical
FILE_1_ID = "bbref_games_for_date_2017-05-26"
FILE_2_ID = "bbref_games_for_date_2017-05-27"
FILE_3_ID = "bbref_games_for_date_2017-09-15"
return [
create_s3_object_mock(vig_app, FILE_1_ID, FILE_TYPE, DATA_SET),
create_s3_object_mock(vig_app, FILE_2_ID, FILE_TYPE, DATA_SET),
create_s3_object_mock(vig_app, FILE_3_ID, FILE_TYPE, DATA_SET),
]
def create_br_daily_patch_list_objects_mock_data(vig_app):
FILE_TYPE = VigFile.PATCH_LIST
DATA_SET = DataSet.BBREF_GAMES_FOR_DATE
# File in local folder is newer than the version in S3
FILE_1_ID = "bbref_games_for_date_2017-09-15_PATCH_LIST"
FILE_1_MOD_SIZE = 125
FILE_1_MOD_MTIME = timedelta(hours=-15)
return [create_s3_object_mock(vig_app, FILE_1_ID, FILE_TYPE, DATA_SET, FILE_1_MOD_SIZE, FILE_1_MOD_MTIME)]
def create_bb_daily_objects_mock_data(vig_app):
FILE_TYPE = VigFile.PARSED_JSON
DATA_SET = DataSet.BROOKS_GAMES_FOR_DATE
# The S3 and local versions of the file are identical
FILE_1_ID = "brooks_games_for_date_2017-05-26"
return [create_s3_object_mock(vig_app, FILE_1_ID, FILE_TYPE, DATA_SET)]
def create_bb_daily_patch_list_objects_mock_data(vig_app):
FILE_TYPE = VigFile.PATCH_LIST
DATA_SET = DataSet.BROOKS_GAMES_FOR_DATE
# File in S3 is newer than the version in local folder
FILE_1_ID = "brooks_games_for_date_2017-05-26_PATCH_LIST"
FILE_1_MOD_SIZE = 250
FILE_1_MOD_MTIME = timedelta(days=3)
return [create_s3_object_mock(vig_app, FILE_1_ID, FILE_TYPE, DATA_SET, FILE_1_MOD_SIZE, FILE_1_MOD_MTIME)]
def create_br_box_objects_mock_data(vig_app):
FILE_TYPE = VigFile.PARSED_JSON
DATA_SET = DataSet.BBREF_BOXSCORES
# File in local folder is newer than the version in S3
FILE_1_ID = "CHA201705260"
FILE_1_MOD_SIZE = 72042
FILE_1_MOD_MTIME = timedelta(hours=-15)
# File in S3 is newer than the version in local folder
FILE_2_ID = "CHA201705272"
FILE_2_MOD_SIZE = 9423
FILE_2_MOD_MTIME = timedelta(days=1, hours=3)
return [
create_s3_object_mock(vig_app, FILE_1_ID, FILE_TYPE, DATA_SET, FILE_1_MOD_SIZE, FILE_1_MOD_MTIME),
create_s3_object_mock(vig_app, FILE_2_ID, FILE_TYPE, DATA_SET, FILE_2_MOD_SIZE, FILE_2_MOD_MTIME),
]
def create_bb_plog_objects_mock_data(vig_app):
FILE_TYPE = VigFile.PARSED_JSON
DATA_SET = DataSet.BROOKS_PITCH_LOGS
# Neither file exists in the local folder
FILE_1_ID = "gid_2017_05_26_oakmlb_nyamlb_1"
FILE_1_SIZE = 6496
FILE_1_MTIME = datetime(2021, 1, 4, 5, 22, 19, tzinfo=timezone.utc)
file_1_s3_key = get_s3_key(vig_app, FILE_1_ID, FILE_TYPE, DATA_SET)
FILE_2_ID = "gid_2017_09_15_sdnmlb_colmlb_1"
FILE_2_SIZE = 9499
FILE_2_MTIME = datetime(2021, 1, 2, 17, 56, 33, tzinfo=timezone.utc)
file_2_s3_key = get_s3_key(vig_app, FILE_2_ID, FILE_TYPE, DATA_SET)
return [
S3ObjectMock(file_1_s3_key, FILE_1_SIZE, FILE_1_MTIME),
S3ObjectMock(file_2_s3_key, FILE_2_SIZE, FILE_2_MTIME),
]
def create_bb_pfx_objects_mock_data(vig_app):
FILE_TYPE = VigFile.PARSED_JSON
DATA_SET = DataSet.BROOKS_PITCHFX
# Neither file exists in the local folder
FILE_1_ID = "HOU201705270_489119"
FILE_1_SIZE = 251036
FILE_1_MTIME = datetime(2021, 1, 12, 7, 57, 28, 695848, tzinfo=timezone.utc)
file_1_s3_key = get_s3_key(vig_app, FILE_1_ID, FILE_TYPE, DATA_SET)
FILE_2_ID = "COL201705270_572096"
FILE_2_SIZE = 83271
FILE_2_MTIME = datetime(2021, 1, 12, 7, 59, 13, 920012, tzinfo=timezone.utc)
file_2_s3_key = get_s3_key(vig_app, FILE_2_ID, FILE_TYPE, DATA_SET)
return [
S3ObjectMock(file_1_s3_key, FILE_1_SIZE, FILE_1_MTIME),
S3ObjectMock(file_2_s3_key, FILE_2_SIZE, FILE_2_MTIME),
]
def test_cli_sync_up_parsed_json(vig_app):
def send_file_side_effect(sync_direction, local_path, s3_key):
print(f"send_file called with {sync_direction}, {local_path}, {s3_key}")
ALL_S3_OBJECTS_MOCK_DATA = (
create_br_daily_objects_mock_data(vig_app)
+ create_br_daily_patch_list_objects_mock_data(vig_app)
+ create_bb_daily_objects_mock_data(vig_app)
+ create_bb_daily_patch_list_objects_mock_data(vig_app)
+ create_br_box_objects_mock_data(vig_app)
+ create_bb_plog_objects_mock_data(vig_app)
+ create_bb_pfx_objects_mock_data(vig_app)
)
with patch(
"vigorish.tasks.sync_scraped_data.SyncScrapedDataTask.all_s3_objects", new_callable=PropertyMock
) as all_s3_objects_mock:
with patch("vigorish.tasks.sync_scraped_data.SyncScrapedDataTask.send_file") as send_file_mock:
SYNC_DIRECTION = SyncDirection.UP_TO_S3
FILE_TYPE = VigFile.PARSED_JSON
FILE_1_ID = "brooks_games_for_date_2017-05-27"
FILE_1_DATA_SET = DataSet.BROOKS_GAMES_FOR_DATE
FILE_2_ID = "CHA201705260"
FILE_2_DATA_SET = DataSet.BBREF_BOXSCORES
all_s3_objects_mock.return_value = ALL_S3_OBJECTS_MOCK_DATA
send_file_mock.side_effect = send_file_side_effect
runner = CliRunner()
result = runner.invoke(cli, f"sync up 2017 --file-type={FILE_TYPE}")
assert result.exit_code == 0
expected_calls = [
create_call_result(vig_app, FILE_1_ID, SYNC_DIRECTION, FILE_TYPE, FILE_1_DATA_SET),
create_call_result(vig_app, FILE_2_ID, SYNC_DIRECTION, FILE_TYPE, FILE_2_DATA_SET),
]
assert send_file_mock.call_args_list == expected_calls
def test_cli_sync_down_parsed_json(vig_app):
def send_file_side_effect(sync_direction, local_path, s3_key):
print(f"send_file called with {sync_direction}, {local_path}, {s3_key}")
ALL_S3_OBJECTS_MOCK_DATA = (
create_br_daily_objects_mock_data(vig_app)
+ create_br_daily_patch_list_objects_mock_data(vig_app)
+ create_bb_daily_objects_mock_data(vig_app)
+ create_bb_daily_patch_list_objects_mock_data(vig_app)
+ create_br_box_objects_mock_data(vig_app)
+ create_bb_plog_objects_mock_data(vig_app)
+ create_bb_pfx_objects_mock_data(vig_app)
)
with patch(
"vigorish.tasks.sync_scraped_data.SyncScrapedDataTask.all_s3_objects", new_callable=PropertyMock
) as all_s3_objects_mock:
with patch("vigorish.tasks.sync_scraped_data.SyncScrapedDataTask.send_file") as send_file_mock:
SYNC_DIRECTION = SyncDirection.DOWN_TO_LOCAL
FILE_TYPE = VigFile.PARSED_JSON
FILE_1_ID = "CHA201705272"
FILE_1_DATA_SET = DataSet.BBREF_BOXSCORES
FILE_2_ID = "gid_2017_05_26_oakmlb_nyamlb_1"
FILE_2_DATA_SET = DataSet.BROOKS_PITCH_LOGS
FILE_3_ID = "gid_2017_09_15_sdnmlb_colmlb_1"
FILE_3_DATA_SET = DataSet.BROOKS_PITCH_LOGS
FILE_4_ID = "COL201705270_572096"
FILE_4_DATA_SET = DataSet.BROOKS_PITCHFX
FILE_5_ID = "HOU201705270_489119"
FILE_5_DATA_SET = DataSet.BROOKS_PITCHFX
all_s3_objects_mock.return_value = ALL_S3_OBJECTS_MOCK_DATA
send_file_mock.side_effect = send_file_side_effect
runner = CliRunner()
result = runner.invoke(cli, f"sync down 2017 --file-type={FILE_TYPE}")
assert result.exit_code == 0
expected_calls = [
create_call_result(vig_app, FILE_1_ID, SYNC_DIRECTION, FILE_TYPE, FILE_1_DATA_SET),
create_call_result(vig_app, FILE_2_ID, SYNC_DIRECTION, FILE_TYPE, FILE_2_DATA_SET),
create_call_result(vig_app, FILE_3_ID, SYNC_DIRECTION, FILE_TYPE, FILE_3_DATA_SET),
create_call_result(vig_app, FILE_4_ID, SYNC_DIRECTION, FILE_TYPE, FILE_4_DATA_SET),
create_call_result(vig_app, FILE_5_ID, SYNC_DIRECTION, FILE_TYPE, FILE_5_DATA_SET),
]
assert send_file_mock.call_args_list == expected_calls
def test_cli_sync_up_patch_list(vig_app):
def send_file_side_effect(sync_direction, local_path, s3_key):
print(f"send_file called with {sync_direction}, {local_path}, {s3_key}")
ALL_S3_OBJECTS_MOCK_DATA = (
create_br_daily_objects_mock_data(vig_app)
+ create_br_daily_patch_list_objects_mock_data(vig_app)
+ create_bb_daily_objects_mock_data(vig_app)
+ create_bb_daily_patch_list_objects_mock_data(vig_app)
+ create_br_box_objects_mock_data(vig_app)
+ create_bb_plog_objects_mock_data(vig_app)
+ create_bb_pfx_objects_mock_data(vig_app)
)
with patch(
"vigorish.tasks.sync_scraped_data.SyncScrapedDataTask.all_s3_objects", new_callable=PropertyMock
) as all_s3_objects_mock:
with patch("vigorish.tasks.sync_scraped_data.SyncScrapedDataTask.send_file") as send_file_mock:
SYNC_DIRECTION = SyncDirection.UP_TO_S3
FILE_TYPE = VigFile.PATCH_LIST
DATA_SET = DataSet.BBREF_GAMES_FOR_DATE
FILE_ID = "bbref_games_for_date_2017-09-15_PATCH_LIST"
all_s3_objects_mock.return_value = ALL_S3_OBJECTS_MOCK_DATA
send_file_mock.side_effect = send_file_side_effect
runner = CliRunner()
result = runner.invoke(cli, f"sync up 2017 --file-type={FILE_TYPE}")
assert result.exit_code == 0
expected_calls = [create_call_result(vig_app, FILE_ID, SYNC_DIRECTION, FILE_TYPE, DATA_SET)]
assert send_file_mock.call_args_list == expected_calls
def test_cli_sync_down_patch_list(vig_app):
def send_file_side_effect(sync_direction, local_path, s3_key):
print(f"send_file called with {sync_direction}, {local_path}, {s3_key}")
ALL_S3_OBJECTS_MOCK_DATA = (
create_br_daily_objects_mock_data(vig_app)
+ create_br_daily_patch_list_objects_mock_data(vig_app)
+ create_bb_daily_objects_mock_data(vig_app)
+ create_bb_daily_patch_list_objects_mock_data(vig_app)
+ create_br_box_objects_mock_data(vig_app)
+ create_bb_plog_objects_mock_data(vig_app)
+ create_bb_pfx_objects_mock_data(vig_app)
)
with patch(
"vigorish.tasks.sync_scraped_data.SyncScrapedDataTask.all_s3_objects", new_callable=PropertyMock
) as all_s3_objects_mock:
with patch("vigorish.tasks.sync_scraped_data.SyncScrapedDataTask.send_file") as send_file_mock:
SYNC_DIRECTION = SyncDirection.DOWN_TO_LOCAL
FILE_TYPE = VigFile.PATCH_LIST
DATA_SET = DataSet.BROOKS_GAMES_FOR_DATE
FILE_ID = "brooks_games_for_date_2017-05-26_PATCH_LIST"
all_s3_objects_mock.return_value = ALL_S3_OBJECTS_MOCK_DATA
send_file_mock.side_effect = send_file_side_effect
runner = CliRunner()
result = runner.invoke(cli, f"sync down 2017 --file-type={FILE_TYPE}")
assert result.exit_code == 0
expected_calls = [create_call_result(vig_app, FILE_ID, SYNC_DIRECTION, FILE_TYPE, DATA_SET)]
assert send_file_mock.call_args_list == expected_calls
| 44.496599 | 111 | 0.734597 | 2,025 | 13,082 | 4.22963 | 0.090864 | 0.049037 | 0.075306 | 0.073555 | 0.866783 | 0.8439 | 0.811325 | 0.780385 | 0.731232 | 0.713835 | 0 | 0.046794 | 0.193013 | 13,082 | 293 | 112 | 44.648464 | 0.764516 | 0.030041 | 0 | 0.547414 | 0 | 0 | 0.122634 | 0.085252 | 0 | 0 | 0 | 0 | 0.034483 | 1 | 0.081897 | false | 0 | 0.038793 | 0 | 0.168103 | 0.017241 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
cdef49ec10ca0936d16511485e5f07d3b23a3b0c | 366 | py | Python | src/core/compile.py | ravenSanstete/hako | fe72c76e9f319add1921a63dee711f90f4960873 | [
"MIT"
] | 1 | 2016-11-17T07:15:00.000Z | 2016-11-17T07:15:00.000Z | src/core/compile.py | ravenSanstete/hako | fe72c76e9f319add1921a63dee711f90f4960873 | [
"MIT"
] | 6 | 2016-11-17T10:27:38.000Z | 2016-11-18T13:20:05.000Z | src/core/compile.py | ravenSanstete/hako | fe72c76e9f319add1921a63dee711f90f4960873 | [
"MIT"
] | null | null | null | import compileall
import os
print()
print()
print('################################## Beginning of Syntax Check ################################')
print()
print()
compileall.compile_dir(os.path.join(os.getcwd(), 'core'), force=True)
print()
print()
print('################################## End of Syntax Check #####################################')
print()
print()
| 24.4 | 103 | 0.428962 | 33 | 366 | 4.727273 | 0.515152 | 0.384615 | 0.192308 | 0.230769 | 0.294872 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00295 | 0.07377 | 366 | 14 | 104 | 26.142857 | 0.457227 | 0 | 0 | 0.615385 | 0 | 0 | 0.519126 | 0.387978 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.153846 | 0 | 0.153846 | 0.769231 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
a82b7d7636ea8a60b06c2e24b52eee5d7df3b72e | 27 | py | Python | src/euler_python_package/euler_python/medium/p423.py | wilsonify/euler | 5214b776175e6d76a7c6d8915d0e062d189d9b79 | [
"MIT"
] | null | null | null | src/euler_python_package/euler_python/medium/p423.py | wilsonify/euler | 5214b776175e6d76a7c6d8915d0e062d189d9b79 | [
"MIT"
] | null | null | null | src/euler_python_package/euler_python/medium/p423.py | wilsonify/euler | 5214b776175e6d76a7c6d8915d0e062d189d9b79 | [
"MIT"
] | null | null | null | def problem423():
pass
| 9 | 17 | 0.62963 | 3 | 27 | 5.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15 | 0.259259 | 27 | 2 | 18 | 13.5 | 0.7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
b54b313cf2470a372e73d7bce2828ce0a4ee3f41 | 78 | py | Python | bitfinex/rest/__init__.py | iulian-moraru/bitfinex | 24ed88cc44ffcda5bda439c9265d77fe9db71804 | [
"MIT"
] | 63 | 2018-02-26T19:12:03.000Z | 2022-01-18T13:17:39.000Z | bitfinex/rest/__init__.py | iulian-moraru/bitfinex | 24ed88cc44ffcda5bda439c9265d77fe9db71804 | [
"MIT"
] | 36 | 2018-07-19T10:01:57.000Z | 2022-02-06T15:35:09.000Z | bitfinex/rest/__init__.py | iulian-moraru/bitfinex | 24ed88cc44ffcda5bda439c9265d77fe9db71804 | [
"MIT"
] | 47 | 2018-06-29T13:49:34.000Z | 2022-01-03T21:23:37.000Z | from .restv1 import Client as ClientV1
from .restv2 import Client as ClientV2
| 26 | 38 | 0.820513 | 12 | 78 | 5.333333 | 0.666667 | 0.375 | 0.4375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.060606 | 0.153846 | 78 | 2 | 39 | 39 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
b571e58ef8af7a65b2a563455e56e431b3fe1a0e | 45 | py | Python | src/pyfx/view/theme/__init__.py | cielong/pyfx | 8c7ad55854a4f8cc59efa9f770b07f64522187e6 | [
"MIT"
] | 9 | 2020-10-09T05:45:32.000Z | 2022-03-01T01:38:27.000Z | src/pyfx/view/theme/__init__.py | cielong/pyfx | 8c7ad55854a4f8cc59efa9f770b07f64522187e6 | [
"MIT"
] | 19 | 2020-12-22T00:08:50.000Z | 2022-03-12T00:16:06.000Z | src/pyfx/view/theme/__init__.py | cielong/pyfx | 8c7ad55854a4f8cc59efa9f770b07f64522187e6 | [
"MIT"
] | 1 | 2020-11-26T14:39:10.000Z | 2020-11-26T14:39:10.000Z | from .theme_config import ThemeConfiguration
| 22.5 | 44 | 0.888889 | 5 | 45 | 7.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088889 | 45 | 1 | 45 | 45 | 0.95122 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b572f657ce902a29b04cfe70b09c53869b173693 | 37 | py | Python | env/lib/python3.8/site-packages/plotly/graph_objs/layout/template/data/_sankey.py | acrucetta/Chicago_COVI_WebApp | a37c9f492a20dcd625f8647067394617988de913 | [
"MIT",
"Unlicense"
] | 11,750 | 2015-10-12T07:03:39.000Z | 2022-03-31T20:43:15.000Z | env/lib/python3.8/site-packages/plotly/graph_objs/layout/template/data/_sankey.py | acrucetta/Chicago_COVI_WebApp | a37c9f492a20dcd625f8647067394617988de913 | [
"MIT",
"Unlicense"
] | 2,951 | 2015-10-12T00:41:25.000Z | 2022-03-31T22:19:26.000Z | env/lib/python3.8/site-packages/plotly/graph_objs/layout/template/data/_sankey.py | acrucetta/Chicago_COVI_WebApp | a37c9f492a20dcd625f8647067394617988de913 | [
"MIT",
"Unlicense"
] | 2,623 | 2015-10-15T14:40:27.000Z | 2022-03-28T16:05:50.000Z | from plotly.graph_objs import Sankey
| 18.5 | 36 | 0.864865 | 6 | 37 | 5.166667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108108 | 37 | 1 | 37 | 37 | 0.939394 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b593b6f89ce3b98e0a8c3cb72b9ec6550301b586 | 11,746 | py | Python | tests/integration/operators_test/slice_test.py | gglin001/popart | 3225214343f6d98550b6620e809a3544e8bcbfc6 | [
"MIT"
] | 61 | 2020-07-06T17:11:46.000Z | 2022-03-12T14:42:51.000Z | tests/integration/operators_test/slice_test.py | gglin001/popart | 3225214343f6d98550b6620e809a3544e8bcbfc6 | [
"MIT"
] | 1 | 2021-02-25T01:30:29.000Z | 2021-11-09T11:13:14.000Z | tests/integration/operators_test/slice_test.py | gglin001/popart | 3225214343f6d98550b6620e809a3544e8bcbfc6 | [
"MIT"
] | 6 | 2020-07-15T12:33:13.000Z | 2021-11-07T06:55:00.000Z | # Copyright (c) 2019 Graphcore Ltd. All rights reserved.
import numpy as np
import popart
import torch
import pytest
from op_tester import op_tester
# `import test_util` requires adding to sys.path
import sys
from pathlib import Path
sys.path.append(Path(__file__).resolve().parent.parent)
import test_util as tu
def test_slice_opset9(op_tester):
d1 = np.array([[1., 2., 3., 4.], [5., 6., 7., 8.]]).astype(np.float32)
def init_builder(builder):
i1 = builder.addInputTensor(d1)
o = builder.aiOnnxOpset9.slice([i1],
axes=[0, 1],
starts=[1, 0],
ends=[2, 3])
builder.addOutputTensor(o)
return [o]
def reference(ref_data):
o = d1[1:2, 0:3]
return [o]
op_tester.run(init_builder, reference, 'infer')
def test_slice_opset10(op_tester):
d1 = np.array([[1., 2., 3., 4.], [5., 6., 7., 8.]]).astype(np.float32)
axesV = np.array([0, 1]).astype(np.int32)
startsV = np.array([1, 0]).astype(np.int32)
endsV = np.array([2, 3]).astype(np.int32)
def init_builder(builder):
i1 = builder.addInputTensor(d1)
axes = builder.addInitializedInputTensor(axesV)
starts = builder.addInitializedInputTensor(startsV)
ends = builder.addInitializedInputTensor(endsV)
o = builder.aiOnnxOpset10.slice([i1, starts, ends, axes])
builder.addOutputTensor(o)
return [o]
def reference(ref_data):
o = d1[1:2, 0:3]
return [o]
op_tester.run(init_builder, reference, 'infer')
def test_slice_default_axes(op_tester):
d1 = np.array([[1., 2., 3., 4.], [5., 6., 7., 8.]]).astype(np.float32)
startsV = np.array([1, 0]).astype(np.int32)
endsV = np.array([2, 3]).astype(np.int32)
def init_builder(builder):
i1 = builder.addInputTensor(d1)
starts = builder.addInitializedInputTensor(startsV)
ends = builder.addInitializedInputTensor(endsV)
o = builder.aiOnnx.slice([i1, starts, ends])
builder.addOutputTensor(o)
return [o]
def reference(ref_data):
o = d1[1:2, 0:3]
return [o]
op_tester.run(init_builder, reference, 'infer')
def test_slice_neg(op_tester):
d1 = np.array([1., 2., 3., 4., 5., 6., 7., 8.]).astype(np.float32)
axesV = np.array([0]).astype(np.int32)
startsV = np.array([-5]).astype(np.int32)
endsV = np.array([-3]).astype(np.int32)
def init_builder(builder):
i1 = builder.addInputTensor(d1)
axes = builder.addInitializedInputTensor(axesV)
starts = builder.addInitializedInputTensor(startsV)
ends = builder.addInitializedInputTensor(endsV)
o = builder.aiOnnx.slice([i1, starts, ends, axes])
builder.addOutputTensor(o)
return [o]
def reference(ref_data):
o = d1[-5:-3]
return [o]
op_tester.run(init_builder, reference, 'infer')
def test_slice_grad(op_tester):
d1 = np.array([[1., 2., 3., 4.], [5., 6., 7., 8.]]).astype(np.float32)
axesV = np.array([0, 1]).astype(np.int32)
startsV = np.array([1, 0]).astype(np.int32)
endsV = np.array([2, 3]).astype(np.int32)
def init_builder(builder):
i1 = builder.addInputTensor(d1)
axes = builder.aiOnnx.constant(axesV)
starts = builder.aiOnnx.constant(startsV)
ends = builder.aiOnnx.constant(endsV)
o = builder.aiOnnx.slice([i1, starts, ends, axes])
builder.addOutputTensor(o)
return [
o,
popart.reservedGradientPrefix() + i1,
popart.reservedGradientPrefix() + o
]
def reference(ref_data):
a = torch.tensor(d1, requires_grad=True)
o = a[1:2, 0:3]
d__o = ref_data.getOutputTensorGrad(0)
o.backward(torch.tensor(d__o))
return [o, a.grad, None]
op_tester.setPatterns(['PreUniRepl'], enableRuntimeAsserts=False)
op_tester.run(init_builder, reference, 'train')
def test_slice_error_start_input(op_tester):
d1 = np.array([[1., 2., 3., 4.], [5., 6., 7., 8.]]).astype(np.float32)
axesV = np.array([0, 1]).astype(np.int32)
startsV = np.array([1, 0]).astype(np.int32)
endsV = np.array([2, 3]).astype(np.int32)
def init_builder(builder):
i1 = builder.addInputTensor(d1)
starts = builder.addInputTensor(startsV)
ends = builder.addInputTensor(endsV)
o = builder.aiOnnx.slice([i1, starts, ends])
builder.addOutputTensor(o)
return [o]
def reference(ref_data):
return []
op_tester.setPatterns(['PreUniRepl'], enableRuntimeAsserts=False)
with pytest.raises(popart.popart_exception) as e_info:
op_tester.run(init_builder, reference, 'train')
assert (
e_info.value.args[0] ==
"Need the value of the ai.onnx.Slice:10 input 'starts' to determine the "
"output shape, but was unable because "
"[Tensor::getDataViaGraphTraversal] Could not work out tensor data for "
"input/1.")
def test_slice_start_out_of_bounds(op_tester):
"""
The slice bounds tests follow the behaviour asserted by the Onnx tests,
which follow the behaviour of numpy.
https://github.com/onnx/onnx/blob/master/onnx/backend/test/case/node/slice.py
For a dimension of size n, any slice index m > n is clamped to n. That is, a
slice 10:21 on a dimension of size 20 becomes 10:20.
Note further that an a:b slice is the half-open interval [a, b), so in the
above example, a slice of 10:20 is valid.
A slice of 20:20, though, is also valid in numpy; it becomes a dimension of
size 0 (but the other dimensions are not affected). The array will have zero
elements.
"""
d1 = np.random.randn(20, 10, 5).astype(np.float32)
# Will create a zero-dim slice, as 1000:1000 becomes 10:10.
axesV = np.array([1], dtype=np.int64)
startsV = np.array([1000], np.int64)
endsV = np.array([1000], np.int64)
def init_builder(builder):
i1 = builder.addInputTensor(d1)
axes = builder.aiOnnx.constant(axesV)
starts = builder.aiOnnx.constant(startsV)
ends = builder.aiOnnx.constant(endsV)
o = builder.aiOnnx.slice([i1, starts, ends, axes])
builder.addOutputTensor(o)
return [o, popart.reservedGradientPrefix() + i1]
def reference(ref_data):
o = d1[:, 1000:1000]
i1_grad = np.zeros(d1.shape, dtype=np.float32)
return [o, i1_grad]
op_tester.run(init_builder, reference, 'train')
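The docstring above describes numpy's slice-clamping behaviour; a minimal standalone sketch (illustrative only, not part of the original test suite) demonstrates both cases directly:

```python
import numpy as np

a = np.random.randn(20, 10, 5).astype(np.float32)

# End indices beyond the dimension are clamped: 1:1000 becomes 1:10.
assert a[:, 1:1000].shape == (20, 9, 5)

# A fully out-of-bounds slice is still valid; the axis simply becomes size 0.
assert a[:, 1000:1000].shape == (20, 0, 5)
assert a[:, 1000:1000].size == 0
```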
def test_slice_end_out_of_bounds(op_tester):
"""
The slice bounds tests follow the behaviour asserted by the Onnx tests,
which follow the behaviour of numpy.
https://github.com/onnx/onnx/blob/master/onnx/backend/test/case/node/slice.py
For a dimension of size n, any slice index m > n is clamped to n. That is, a
slice 10:21 on a dimension of size 20 becomes 10:20.
Note further that an a:b slice is the half-open interval [a, b), so in the
above example, a slice of 10:20 is valid.
A slice of 20:20, though, is also valid in numpy; it becomes a dimension of
size 0 (but the other dimensions are not affected). The array will have zero
elements.
"""
d1 = np.random.randn(20, 10, 5).astype(np.float32)
# Will create a (20, 9, 5)-dim slice, as 1:1000 becomes 1:10.
axesV = np.array([1], dtype=np.int64)
startsV = np.array([1], dtype=np.int64)
endsV = np.array([1000], dtype=np.int64)
def init_builder(builder):
i1 = builder.addInputTensor(d1)
axes = builder.aiOnnx.constant(axesV)
starts = builder.aiOnnx.constant(startsV)
ends = builder.aiOnnx.constant(endsV)
o = builder.aiOnnx.slice([i1, starts, ends, axes])
builder.addOutputTensor(o)
return [
o,
popart.reservedGradientPrefix() + i1,
popart.reservedGradientPrefix() + o
]
def reference(ref_data):
o = d1[:, 1:1000]
o_grad = np.ones(o.shape,
dtype=np.float32) * ref_data.getOutputTensorGrad(0)
i1_grad = np.pad(o_grad, [(0, 0), (1, 0), (0, 0)], constant_values=0.)
return [o, i1_grad, None]
op_tester.run(init_builder, reference, 'train')
def test_slice_neg_starts_and_ends(op_tester):
d1 = np.array([1., 2., 3., 4.]).astype(np.float32)
def init_builder(builder):
i1 = builder.addInputTensor(d1)
o = builder.aiOnnxOpset9.slice([i1], axes=[0], starts=[-5], ends=[-1])
builder.addOutputTensor(o)
return [o]
def reference(ref_data):
o = d1[-4:-1]
return [o]
op_tester.run(init_builder, reference, 'infer')
def test_slice_flip_1(op_tester):
d1 = np.array([1., 2., 3., 4.]).astype(np.float32)
axesV = np.array([0], dtype=np.int64)
startsV = np.array([3], dtype=np.int64)
endsV = np.array([1], dtype=np.int64)
stepsV = np.array([-1], dtype=np.int64)
def init_builder(builder):
i1 = builder.addInputTensor(d1)
axes = builder.aiOnnx.constant(axesV)
starts = builder.aiOnnx.constant(startsV)
ends = builder.aiOnnx.constant(endsV)
steps = builder.aiOnnx.constant(stepsV)
o = builder.aiOnnx.slice([i1, starts, ends, axes, steps])
builder.addOutputTensor(o)
return [o]
def reference(ref_data):
o = d1[3:1:-1]
return [o]
op_tester.run(init_builder, reference, 'infer')
def test_slice_flip_2(op_tester):
d1 = np.array([1., 2., 3., 4.]).astype(np.float32)
axesV = np.array([0], dtype=np.int64)
startsV = np.array([-1], dtype=np.int64)
endsV = np.array([-1000], dtype=np.int64)
stepsV = np.array([-1], dtype=np.int64)
def init_builder(builder):
i1 = builder.addInputTensor(d1)
axes = builder.aiOnnx.constant(axesV)
starts = builder.aiOnnx.constant(startsV)
ends = builder.aiOnnx.constant(endsV)
steps = builder.aiOnnx.constant(stepsV)
o = builder.aiOnnx.slice([i1, starts, ends, axes, steps])
builder.addOutputTensor(o)
return [o]
def reference(ref_data):
return [np.flip(d1)]
op_tester.run(init_builder, reference, 'infer')
def test_slice_flip_grad_1(op_tester):
d1 = np.array([1., 2., 3., 4., 5.]).astype(np.float32)
axesV = np.array([0], dtype=np.int64)
starts0V = np.array([4], dtype=np.int64)
ends0V = np.array([1], dtype=np.int64)
stepsV = np.array([-1], dtype=np.int64)
starts1V = np.array([1], dtype=np.int64)
ends1V = np.array([3], dtype=np.int64)
def init_builder(builder):
i1 = builder.addInputTensor(d1)
axes = builder.aiOnnx.constant(axesV)
starts = builder.aiOnnx.constant(starts0V)
ends = builder.aiOnnx.constant(ends0V)
steps = builder.aiOnnx.constant(stepsV)
o = builder.aiOnnx.slice([i1, starts, ends, axes, steps])
starts = builder.aiOnnx.constant(starts1V)
ends = builder.aiOnnx.constant(ends1V)
o = builder.aiOnnx.slice([o, starts, ends, axes])
builder.addOutputTensor(o)
return [
o,
popart.reservedGradientPrefix() + i1,
popart.reservedGradientPrefix() + o
]
def reference(ref_data):
a = torch.tensor(d1, requires_grad=True)
o = torch.flip(a[2:5], [0])
o = o[1:3]
d__o = ref_data.getOutputTensorGrad(0)
o.backward(torch.tensor(d__o))
print(o)
print(a.grad)
return [o, a.grad, None]
op_tester.setPatterns(['PreUniRepl'], enableRuntimeAsserts=False)
op_tester.run(init_builder, reference, 'train')
| 31.074074 | 81 | 0.620722 | 1,607 | 11,746 | 4.45364 | 0.118855 | 0.043035 | 0.026827 | 0.03521 | 0.847143 | 0.846584 | 0.819058 | 0.814028 | 0.813469 | 0.813469 | 0 | 0.049837 | 0.241529 | 11,746 | 377 | 82 | 31.156499 | 0.753508 | 0.121062 | 0 | 0.673469 | 0 | 0 | 0.027011 | 0.003327 | 0 | 0 | 0 | 0 | 0.016327 | 1 | 0.146939 | false | 0 | 0.032653 | 0.008163 | 0.277551 | 0.008163 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b5a8068d337ffb8c4f656c68a23745e6b0b833b8 | 40 | py | Python | openmlaslib/utils/__init__.py | openml/openml-aslib | 5b98f2e4658f17555c2d01a9b88571fe9dfa0027 | [
"BSD-3-Clause"
] | 1 | 2018-04-03T08:54:52.000Z | 2018-04-03T08:54:52.000Z | openmlaslib/utils/__init__.py | openml/openml-aslib | 5b98f2e4658f17555c2d01a9b88571fe9dfa0027 | [
"BSD-3-Clause"
] | null | null | null | openmlaslib/utils/__init__.py | openml/openml-aslib | 5b98f2e4658f17555c2d01a9b88571fe9dfa0027 | [
"BSD-3-Clause"
] | null | null | null | from .scenario import generate_scenario
| 20 | 39 | 0.875 | 5 | 40 | 6.8 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 40 | 1 | 40 | 40 | 0.944444 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
a912b49f26dc308a160ea08c0a7e83d663c88444 | 4,212 | py | Python | test_task/clients/migrations/0003_auto_20181011_1029.py | opqx/PythonDevTest | aec05b1cd3d92e496160efe87a03ae44360f6c83 | [
"MIT"
] | null | null | null | test_task/clients/migrations/0003_auto_20181011_1029.py | opqx/PythonDevTest | aec05b1cd3d92e496160efe87a03ae44360f6c83 | [
"MIT"
] | null | null | null | test_task/clients/migrations/0003_auto_20181011_1029.py | opqx/PythonDevTest | aec05b1cd3d92e496160efe87a03ae44360f6c83 | [
"MIT"
] | null | null | null | # Generated by Django 2.1.1 on 2018-10-11 07:29
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('clients', '0002_auto_20181004_2158'),
]
operations = [
migrations.CreateModel(
name='Building',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=50, verbose_name='Building')),
],
),
migrations.CreateModel(
name='City',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=50, verbose_name='City')),
],
),
migrations.CreateModel(
name='Companies',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('company', models.CharField(max_length=200)),
],
),
migrations.CreateModel(
name='Country',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=50, verbose_name='Country')),
('company', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='clients.Companies')),
],
),
migrations.CreateModel(
name='Office',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=50, verbose_name='Office')),
('building', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='clients.Building')),
('company', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='clients.Companies')),
],
),
migrations.CreateModel(
name='Region',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=50, verbose_name='Region')),
('company', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='clients.Companies')),
('country', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='clients.Country')),
],
),
migrations.CreateModel(
name='Street',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=50, verbose_name='Street')),
('city', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='clients.City')),
('company', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='clients.Companies')),
],
),
migrations.AlterField(
model_name='contact',
name='company',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='clients.Companies'),
),
migrations.AddField(
model_name='city',
name='company',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='clients.Companies'),
),
migrations.AddField(
model_name='city',
name='region',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='clients.Region'),
),
migrations.AddField(
model_name='building',
name='company',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='clients.Companies'),
),
migrations.AddField(
model_name='building',
name='street',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='clients.Street'),
),
]
| 43.875 | 116 | 0.582621 | 418 | 4,212 | 5.741627 | 0.143541 | 0.046667 | 0.075833 | 0.119167 | 0.788333 | 0.788333 | 0.76875 | 0.76875 | 0.76875 | 0.76875 | 0 | 0.015057 | 0.274691 | 4,212 | 95 | 117 | 44.336842 | 0.77054 | 0.010684 | 0 | 0.651685 | 1 | 0 | 0.113565 | 0.005522 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.022472 | 0 | 0.05618 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
# --- cicada2/shared/util.py (herzo175/cicada-2, BSD-3-Clause) ---
from datetime import datetime
def get_runtime_ms(start: datetime, end: datetime) -> int:
    # total_seconds() covers the whole duration; summing .seconds and
    # .microseconds separately would silently drop whole days.
    return int((end - start).total_seconds() * 1000)
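A small, self-contained usage sketch of `get_runtime_ms` (the helper is restated here, using `total_seconds()` so durations over a day are counted, to keep the snippet runnable on its own):

```python
from datetime import datetime, timedelta

def get_runtime_ms(start: datetime, end: datetime) -> int:
    # Millisecond duration between two timestamps.
    return int((end - start).total_seconds() * 1000)

start = datetime(2020, 1, 1, 12, 0, 0)
end = start + timedelta(seconds=2, milliseconds=500)
print(get_runtime_ms(start, end))  # → 2500
```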
# --- wepppy/topaz/__init__.py (hwbeeson/wepppy, BSD-3-Clause) ---
from .topaz import *
# --- xanthus/models/legacy/neural.py (markdouthwaite/xanthus, MIT) ---
"""
The MIT License
Copyright (c) 2018-2020 Mark Douthwaite
"""
from typing import Optional, Any, Tuple
from tensorflow.keras import Model
from tensorflow.keras.layers import Multiply, Dense, Concatenate
from tensorflow.keras.initializers import lecun_uniform
from tensorflow.keras.regularizers import l2
from xanthus.datasets import Dataset
from xanthus.models import utils
from xanthus.models.legacy import base
class MultiLayerPerceptronModel(base.NeuralRecommenderModel):
    """
    An implementation of a Multilayer Perceptron (MLP) model in Keras.

    Parameters
    ----------
    layers: tuple
        A tuple, where each element corresponds to the number of units in each of the
        layers of the MLP.
    activations: str
        The activation function to use for each of the layers in the MLP.
    l2_reg: float
        The L2 regularization to be applied to each of the layers in the MLP.

    References
    ----------
    [1] He et al. https://dl.acm.org/doi/10.1145/3038912.3052569

    See Also
    --------
    xanthus.models.base.NeuralRecommenderModel

    """

    def __init__(
        self,
        *args: Optional[Any],
        layers: Tuple[int, ...] = (64, 32, 16, 8),
        activations: str = "relu",
        l2_reg: float = 1e-3,
        **kwargs: Optional[Any]
    ):
        """Initialize a MultiLayerPerceptronModel."""
        super().__init__(*args, **kwargs)
        self._activations = activations
        self._layers = layers
        self._l2_reg = l2_reg

    def _build_model(
        self,
        dataset: Dataset,
        n_user_dim: int = 1,
        n_item_dim: int = 1,
        n_factors: int = 50,
        **kwargs: Optional[Any]
    ) -> Model:
        """
        Build a Keras model, in this case a MultiLayerPerceptronModel (MLP)
        model. See [1] for more info. The original code released with [1] can be
        found at [2].

        Parameters
        ----------
        dataset: Dataset
            The input dataset. This is used to specify the 'vocab' size of each of the
            'embedding blocks' (of which there are two in this architecture).
        n_user_dim: int
            The dimensionality of the user input vector. When using metadata, you should
            make sure to set this to the size of each of these vectors.
        n_item_dim: int
            The dimensionality of the item input vector. When using metadata, you should
            make sure to set this to the size of each of these vectors.
        n_factors: int
            The dimensionality of the latent feature space _for both users and items_
            for the GMF component of the architecture.

        Returns
        -------
        output: Model
            The 'complete' Keras Model object.

        References
        ----------
        [1] He et al. https://dl.acm.org/doi/10.1145/3038912.3052569
        [2] https://github.com/hexiangnan/neural_collaborative_filtering

        """
        n_user_vocab = dataset.all_users.shape[0]
        n_item_vocab = dataset.all_items.shape[0]

        if dataset.user_meta is not None:
            n_user_vocab += dataset.user_meta.shape[1]

        if dataset.item_meta is not None:
            n_item_vocab += dataset.item_meta.shape[1]

        # mlp block
        user_input, user_bias, user_factors = utils.get_embedding_block(
            n_user_vocab, n_user_dim, int(self._layers[0] / 2)
        )
        item_input, item_bias, item_factors = utils.get_embedding_block(
            n_item_vocab, n_item_dim, int(self._layers[0] / 2)
        )

        body = Concatenate()([user_factors, item_factors])
        for layer in self._layers:
            body = Dense(
                layer,
                activity_regularizer=l2(self._l2_reg),
                activation=self._activations,
            )(body)

        output = Dense(1, activation="sigmoid", kernel_initializer=lecun_uniform())(
            body
        )

        return Model(inputs=[user_input, item_input], outputs=output)
class NeuralMatrixFactorizationModel(base.NeuralRecommenderModel):
    """
    An implementation of a Neural Matrix Factorization (NeuMF) model in Keras.

    Parameters
    ----------
    layers: tuple
        A tuple, where each element corresponds to the number of units in each of the
        layers of the MLP.
    activations: str
        The activation function to use for each of the layers in the MLP.
    l2_reg: float
        The L2 regularization to be applied to each of the layers in the MLP.

    References
    ----------
    [1] He et al. https://dl.acm.org/doi/10.1145/3038912.3052569

    See Also
    --------
    xanthus.models.base.NeuralRecommenderModel

    """

    def __init__(
        self,
        *args: Optional[Any],
        layers: Tuple[int, ...] = (64, 32, 16, 8),
        activations: str = "relu",
        l2_reg: float = 1e-3,
        **kwargs: Optional[Any]
    ):
        """Initialize a NeuralMatrixFactorizationModel."""
        super().__init__(*args, **kwargs)
        self._activations = activations
        self._layers = layers
        self._l2_reg = l2_reg

    def _build_model(
        self,
        dataset: Dataset,
        n_user_dim: int = 1,
        n_item_dim: int = 1,
        n_factors: int = 50,
        **kwargs: Optional[Any]
    ) -> Model:
        """
        Build a Keras model, in this case a NeuralMatrixFactorizationModel (NeuMF)
        model. This is a recommender model with two input branches (one half the same
        architecture as in GeneralizedMatrixFactorizationModel, the other the same
        architecture as in MultiLayerPerceptronModel). See [1] for more info. The
        original code released with [1] can be found at [2].

        Parameters
        ----------
        dataset: Dataset
            The input dataset. This is used to specify the 'vocab' size of each of the
            'embedding blocks' (of which there are four in this architecture).
        n_user_dim: int
            The dimensionality of the user input vector. When using metadata, you should
            make sure to set this to the size of each of these vectors.
        n_item_dim: int
            The dimensionality of the item input vector. When using metadata, you should
            make sure to set this to the size of each of these vectors.
        n_factors: int
            The dimensionality of the latent feature space _for both users and items_
            for the GMF component of the architecture.

        Returns
        -------
        output: Model
            The 'complete' Keras Model object.

        References
        ----------
        [1] He et al. https://dl.acm.org/doi/10.1145/3038912.3052569
        [2] https://github.com/hexiangnan/neural_collaborative_filtering

        """
        n_user_vocab = dataset.all_users.shape[0]
        n_item_vocab = dataset.all_items.shape[0]

        if dataset.user_meta is not None:
            n_user_vocab += dataset.user_meta.shape[1]

        if dataset.item_meta is not None:
            n_item_vocab += dataset.item_meta.shape[1]

        # mlp block
        user_input, mlp_user_bias, mlp_user_factors = utils.get_embedding_block(
            n_user_vocab, n_user_dim, int(self._layers[0] / 2)
        )
        item_input, mlp_item_bias, mlp_item_factors = utils.get_embedding_block(
            n_item_vocab, n_item_dim, int(self._layers[0] / 2)
        )

        mlp_body = Concatenate()([mlp_user_factors, mlp_item_factors])
        for layer in self._layers:
            mlp_body = Dense(
                layer,
                activity_regularizer=l2(self._l2_reg),
                activation=self._activations,
            )(mlp_body)

        # mf block
        user_input, mf_user_bias, mf_user_factors = utils.get_embedding_block(
            n_user_vocab, n_user_dim, n_factors, inputs=user_input,
        )
        item_input, mf_item_bias, mf_item_factors = utils.get_embedding_block(
            n_item_vocab, n_item_dim, n_factors, inputs=item_input,
        )

        mf_body = Multiply()([mf_user_factors, mf_item_factors])

        body = Concatenate()([mf_body, mlp_body])
        output = Dense(1, activation="sigmoid", kernel_initializer=lecun_uniform())(
            body
        )

        return Model(inputs=[user_input, item_input], outputs=output)
class GeneralizedMatrixFactorizationModel(base.NeuralRecommenderModel):
    """
    An implementation of a Generalized Matrix Factorization (GMF) model in Keras.

    References
    ----------
    [1] He et al. https://dl.acm.org/doi/10.1145/3038912.3052569

    """

    def _build_model(
        self,
        dataset: Dataset,
        n_user_dim: int = 1,
        n_item_dim: int = 1,
        n_factors: int = 50,
        **kwargs: Optional[Any]
    ) -> Model:
        """
        Build a Keras model, in this case a GeneralizedMatrixFactorizationModel (GMF)
        model. See [1] for more info. The original code released with [1] can be
        found at [2].

        Parameters
        ----------
        dataset: Dataset
            The input dataset. This is used to specify the 'vocab' size of each of the
            'embedding blocks' (of which there are two in this architecture).
        n_user_dim: int
            The dimensionality of the user input vector. When using metadata, you should
            make sure to set this to the size of each of these vectors.
        n_item_dim: int
            The dimensionality of the item input vector. When using metadata, you should
            make sure to set this to the size of each of these vectors.
        n_factors: int
            The dimensionality of the latent feature space _for both users and items_
            for the GMF component of the architecture.

        Returns
        -------
        output: Model
            The 'complete' Keras Model object.

        References
        ----------
        [1] He et al. https://dl.acm.org/doi/10.1145/3038912.3052569
        [2] https://github.com/hexiangnan/neural_collaborative_filtering

        """
        n_user_vocab = dataset.all_users.shape[0]
        n_item_vocab = dataset.all_items.shape[0]

        if dataset.user_meta is not None:
            n_user_vocab += dataset.user_meta.shape[1]

        if dataset.item_meta is not None:
            n_item_vocab += dataset.item_meta.shape[1]

        user_input, user_bias, user_factors = utils.get_embedding_block(
            n_user_vocab, n_user_dim, n_factors, **kwargs
        )
        item_input, item_bias, item_factors = utils.get_embedding_block(
            n_item_vocab, n_item_dim, n_factors, **kwargs
        )

        body = Multiply()([user_factors, item_factors])
        output = Dense(1, activation="sigmoid", kernel_initializer=lecun_uniform())(
            body
        )

        return Model(inputs=[user_input, item_input], outputs=output)
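A toy, list-based sketch of the data flow these three models implement (illustrative only; the real models above use Keras layers and learned embeddings): GMF multiplies user and item factor vectors elementwise, the MLP branch concatenates them, and NeuMF fuses both branches before the final prediction layer.

```python
# Illustrative stand-ins for the learned embedding vectors; all names
# and values here are hypothetical, not part of the xanthus API.

def gmf_branch(user_vec, item_vec):
    # GMF: elementwise product of user and item factors (cf. Multiply()).
    return [u * i for u, i in zip(user_vec, item_vec)]

def mlp_branch(user_vec, item_vec):
    # MLP: concatenate the factors (cf. Concatenate()); the Dense stack
    # that follows in the real model is omitted here.
    return user_vec + item_vec

user = [0.1, 0.2]
item = [0.3, 0.4]

# NeuMF: fuse both branches into one vector before the sigmoid output.
fused = gmf_branch(user, item) + mlp_branch(user, item)
print(len(fused))  # → 6
```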
# --- examples/simple_example/simple_module/level_two/other.py (jakab922/pytest-automock, MIT) ---
from simple_module.one import func
def other_func(a, b):
    return b, func(a)
# --- iguanas/exceptions/tests/test_exceptions.py (Aditya-Kapadiya/Iguanas, Apache-2.0) ---
from iguanas.exceptions import DataFrameSizeError, NoRulesError
def test_exceptions():
    assert issubclass(DataFrameSizeError, Exception)
    assert issubclass(NoRulesError, Exception)
# --- tests/demoapp/demo/admin.py (saxix/drf-api-checker, MIT) ---
from django.contrib.admin import ModelAdmin, register
from .models import Master
@register(Master)
class MasterAdmin(ModelAdmin):
    pass
# --- joint_calling/__init__.py (Monia234/populationgenomics-joint-calling., MIT) ---
"""
Just defines `get_package_path` which returns the local install path of the package
"""
from os.path import dirname, abspath
def get_package_path():
    """
    :return: local install path of the package
    """
    return dirname(abspath(__file__))
# --- opytimizer/optimizers/population/__init__.py (anukaal/opytimizer, Apache-2.0) ---
"""An evolutionary package for all common opytimizer modules.
It contains implementations of population-based optimizers.
"""
from opytimizer.optimizers.population.aeo import AEO
from opytimizer.optimizers.population.ao import AO
from opytimizer.optimizers.population.coa import COA
from opytimizer.optimizers.population.epo import EPO
from opytimizer.optimizers.population.gco import GCO
from opytimizer.optimizers.population.gwo import GWO
from opytimizer.optimizers.population.hho import HHO
from opytimizer.optimizers.population.loa import LOA
from opytimizer.optimizers.population.osa import OSA
from opytimizer.optimizers.population.ppa import PPA
from opytimizer.optimizers.population.pvs import PVS
from opytimizer.optimizers.population.rfo import RFO
# --- microservices/policyApi/auth/auth.py (bcgov/OCWA, Apache-2.0) ---
from functools import wraps
from flask import request, abort, Response
from config import Config
from flask_jwt_simple import JWTManager
from flask_jwt_simple.view_decorators import _decode_jwt_from_headers
def jwt_config(app):
    config = Config()
    app.config['JWT_SECRET_KEY'] = config.data['jwtSecret']
    app.config['JWT_DECODE_AUDIENCE'] = config.data['jwtAudience']
    jwt = JWTManager(app)
    return jwt


def api_key(f):
    """
    @param f: flask function
    @return: decorator, return the wrapped function or abort json object.
    """
    @wraps(f)
    def decorated(*args, **kwargs):
        config = Config()
        if config.data['apiSecret'] == request.headers.get('x-api-key'):
            return f(*args, **kwargs)
        else:
            print("Unauthorized address trying to use API: " + request.remote_addr)
            abort(401)
    return decorated


def jwt_or_api_key(f):
    """
    @param f: flask function
    @return: decorator, return the wrapped function or abort json object.
    """
    @wraps(f)
    def decorated(*args, **kwargs):
        config = Config()
        if config.data['apiSecret'] == request.headers.get('x-api-key'):
            return f(*args, **kwargs)
        else:
            jwt = _decode_jwt_from_headers()
            if not (jwt == None):
                return f(*args, **kwargs)
            else:
                print("Unauthorized address trying to use API: " + request.remote_addr)
                abort(401)
    return decorated


def jwt(f):
    """
    @param f: flask function
    @return: decorator, return the wrapped function or abort json object.
    """
    @wraps(f)
    def decorated(*args, **kwargs):
        jwt = _decode_jwt_from_headers()
        if not (jwt == None):
            return f(*args, **kwargs)
        else:
            print("Unauthorized address trying to use API: " + request.remote_addr)
            abort(401)
    return decorated


def admin_jwt(f):
    """
    @param f: flask function
    @return: decorator, return the wrapped function or abort json object.
    """
    @wraps(f)
    def decorated(*args, **kwargs):
        config = Config()
        jwt = _decode_jwt_from_headers()
        if jwt == None:
            print("Unauthorized address trying to use API: " + request.remote_addr)
            abort(401)
        if config.data['jwt_access_group'] in jwt[config.data['jwt_group']]:
            return f(*args, **kwargs)
        print("Unauthorized address trying to use API: " + request.remote_addr)
        abort(401)
    return decorated
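The decorators above all follow the same `functools.wraps` guard pattern. A minimal, Flask-free sketch of that pattern follows; the `SECRET` constant and the header-dict argument are hypothetical stand-ins for `Config` and Flask's `request`, and `PermissionError` stands in for `abort(401)`:

```python
from functools import wraps

SECRET = "s3cret"  # hypothetical key; the real code reads it from Config

def api_key(f):
    # Reject calls whose 'x-api-key' header does not match the secret,
    # mirroring the wraps-based guard above without Flask.
    @wraps(f)
    def decorated(headers, *args, **kwargs):
        if headers.get("x-api-key") == SECRET:
            return f(*args, **kwargs)
        raise PermissionError("401 Unauthorized")
    return decorated

@api_key
def ping():
    return "pong"

print(ping({"x-api-key": "s3cret"}))  # → pong
```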
# --- packman/conditions/__init__.py (audoh/packager, MIT) ---
from . import either, exists  # noqa
# --- python/cinn/poly.py (edithgogo/CINN, Apache-2.0) ---
from .core_api.poly import create_stages
# --- recommender/contrib/__init__.py (stungkit/stock_trend_analysis, MIT) ---
from . import financialmodelingprep as fmp_api
# --- quara/objects/tester_typical.py (tknrsgym/quara, Apache-2.0) ---
from typing import List, Union
from itertools import product
# Quara
from quara.objects.composite_system import CompositeSystem
from quara.objects.operators import compose_qoperations
from quara.objects.state import State
from quara.objects.povm import Povm
from quara.objects.gate import (
    get_depolarizing_channel,
)
from quara.objects.operators import tensor_product
from quara.objects.state_typical import (
    get_state_names_1qubit,
    get_state_names_1qutrit,
)
from quara.objects.povm_typical import (
    get_povm_names_1qubit,
    get_povm_names_1qutrit,
)
from quara.objects.qoperation_typical import generate_qoperation
# States
def generate_tester_states_depolarized(
    c_sys: CompositeSystem,
    names: List[str],
    error_rates: Union[float, List[float]],
) -> List[State]:
    """returns a list of states corresponding to names of states on a common CompositeSystem affected by a depolarizing channel.

    Parameters
    ----------
    c_sys: CompositeSystem

    names: List[str]
        names of typical states

    error_rates: Union[float, List[float]]
        depolarizing error rate or list of error rates
        If it is float, all states are affected by a common depolarizing channel with the error rate.

    Returns
    -------
    List[State]
        list of states depolarized
    """
    if type(error_rates) is float:
        error_rate = error_rates
    elif type(error_rates) is list:
        assert len(names) == len(error_rates)
    else:
        raise ValueError(f"Type of error_rates is invalid.")

    states = generate_tester_states(c_sys=c_sys, names=names)
    states_depolarized = []
    for i, state in enumerate(states):
        if type(error_rates) is float:
            error_rate = error_rates
        elif type(error_rates) is list:
            error_rate = error_rates[i]
        dp = get_depolarizing_channel(p=error_rate, c_sys=c_sys)
        state_new = compose_qoperations(dp, state)
        states_depolarized.append(state_new)
    return states_depolarized
def generate_tester_states(c_sys: CompositeSystem, names: List[str]) -> List[State]:
    """returns a list of states corresponding to names of states on a common CompositeSystem.

    Parameters
    ----------
    c_sys: CompositeSystem

    names: List[str]
        names of typical states

    Returns
    -------
    List[State]
    """
    # c_sys
    num = c_sys.num_e_sys
    dims = []
    for i in range(num):
        dims.append(c_sys.dim_e_sys(i))
    if dims[0] == 2:
        mode_sys = "qubit"
    elif dims[0] == 3:
        mode_sys = "qutrit"
    else:
        raise ValueError(f"system size is invalid!")

    e_sys = c_sys._elemental_systems[0]
    c_sys_0 = CompositeSystem([e_sys])
    method = eval("generate_states_1" + mode_sys)
    states_0 = method(c_sys_0, names)

    states = states_0
    for i in range(1, num):
        e_sys = c_sys._elemental_systems[i]
        c_sys_i = CompositeSystem([e_sys])
        states_i = method(c_sys_i, names)
        l = []
        for p in product(states, states_i):
            stateA = p[0]
            stateB = p[1]
            state = tensor_product(stateA, stateB)
            l.append(state)
        states = l
    return states
def generate_states_1qubit(c_sys: CompositeSystem, names: List[str]) -> List[State]:
    """returns a list of states on a common 1-qubit system.

    Parameters
    ----------
    c_sys: CompositeSystem
        1-qubit system

    names: List[str]
        list of 1-qubit state names

    Returns
    -------
    List[State]
    """
    assert c_sys.num_e_sys == 1
    assert c_sys.dim == 2
    names_1qubit = get_state_names_1qubit()
    for name in names:
        assert name in names_1qubit

    mode_qo = "state"
    states = []
    for name in names:
        state = generate_qoperation(mode=mode_qo, name=name, c_sys=c_sys)
        states.append(state)
    return states
def generate_states_1qutrit(c_sys: CompositeSystem, names: List[str]) -> List[State]:
    """returns a list of states on a common 1-qutrit system.

    Parameters
    ----------
    c_sys: CompositeSystem
        1-qutrit system

    names: List[str]
        list of 1-qutrit state names

    Returns
    -------
    List[State]
    """
    assert c_sys.num_e_sys == 1
    assert c_sys.dim == 3
    names_1qutrit = get_state_names_1qutrit()
    for name in names:
        assert name in names_1qutrit

    mode_qo = "state"
    states = []
    for name in names:
        state = generate_qoperation(mode=mode_qo, name=name, c_sys=c_sys)
        states.append(state)
    return states
# POVMs
def generate_tester_povms_depolarized(
    c_sys: CompositeSystem,
    names: List[str],
    error_rates: Union[float, List[float]],
) -> List[Povm]:
    """returns a list of POVMs corresponding to names of POVMs on a common CompositeSystem affected by a depolarizing channel.

    Parameters
    ----------
    c_sys: CompositeSystem

    names: List[str]
        names of typical povms

    error_rates: Union[float, List[float]]
        depolarizing error rate or list of error rates
        If it is float, all POVMs are affected by a common depolarizing channel with the error rate.

    Returns
    -------
    List[Povm]
        list of POVMs depolarized
    """
    if type(error_rates) is float:
        error_rate = error_rates
    elif type(error_rates) is list:
        assert len(names) == len(error_rates)
    else:
        raise ValueError(f"Type of error_rates is invalid.")

    povms = generate_tester_povms(c_sys=c_sys, names=names)
    povms_depolarized = []
    for i, povm in enumerate(povms):
        if type(error_rates) is float:
            error_rate = error_rates
        elif type(error_rates) is list:
            error_rate = error_rates[i]
        dp = get_depolarizing_channel(p=error_rate, c_sys=c_sys)
        povm_new = compose_qoperations(povm, dp)
        povms_depolarized.append(povm_new)
    return povms_depolarized
def generate_tester_povms(c_sys: CompositeSystem, names: List[str]) -> List[Povm]:
    """returns a list of POVMs corresponding to names of POVMs on a common CompositeSystem.

    Parameters
    ----------
    c_sys: CompositeSystem

    names: List[str]
        names of typical POVMs

    Returns
    -------
    List[Povm]
    """
    # c_sys
    num = c_sys.num_e_sys
    dims = []
    for i in range(num):
        dims.append(c_sys.dim_e_sys(i))
    if dims[0] == 2:
        mode_sys = "qubit"
    elif dims[0] == 3:
        mode_sys = "qutrit"
    else:
        raise ValueError(f"system size is invalid!")

    e_sys = c_sys._elemental_systems[0]
    c_sys_0 = CompositeSystem([e_sys])
    method = eval("generate_povms_1" + mode_sys)
    povms_0 = method(c_sys_0, names)

    povms = povms_0
    for i in range(1, num):
        e_sys = c_sys._elemental_systems[i]
        c_sys_i = CompositeSystem([e_sys])
        povms_i = method(c_sys_i, names)
        l = []
        for p in product(povms, povms_i):
            povmA = p[0]
            povmB = p[1]
            povm = tensor_product(povmA, povmB)
            l.append(povm)
        povms = l
    return povms
def generate_povms_1qubit(c_sys: CompositeSystem, names: List[str]) -> List[Povm]:
    """returns a list of POVMs on a common 1-qubit system.

    Parameters
    ----------
    c_sys: CompositeSystem
        1-qubit system

    names: List[str]
        list of 1-qubit POVM names

    Returns
    -------
    List[Povm]
    """
    assert c_sys.num_e_sys == 1
    assert c_sys.dim == 2
    names_1qubit = get_povm_names_1qubit()
    for name in names:
        assert name in names_1qubit

    mode_qo = "povm"
    povms = []
    for name in names:
        povm = generate_qoperation(mode=mode_qo, name=name, c_sys=c_sys)
        povms.append(povm)
    return povms
def generate_povms_1qutrit(c_sys: CompositeSystem, names: List[str]) -> List[Povm]:
    """returns a list of POVMs on a common 1-qutrit system.

    Parameters
    ----------
    c_sys: CompositeSystem
        1-qutrit system

    names: List[str]
        list of 1-qutrit POVM names

    Returns
    -------
    List[Povm]
    """
    assert c_sys.num_e_sys == 1
    assert c_sys.dim == 3
    names_1qutrit = get_povm_names_1qutrit()
    for name in names:
        assert name in names_1qutrit

    mode_qo = "povm"
    povms = []
    for name in names:
        povm = generate_qoperation(mode=mode_qo, name=name, c_sys=c_sys)
        povms.append(povm)
    return povms
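The pairwise `itertools.product` loop used in `generate_tester_states` and `generate_tester_povms` to extend single-system testers to multi-qubit systems can be sketched with plain strings (quara itself is not assumed here; string joining stands in for `tensor_product`):

```python
from itertools import product

def combine(labels_a, labels_b):
    # Mirror the loop above: pair every element of the running list with
    # every single-system element; "*" stands in for the tensor product.
    return [a + "*" + b for a, b in product(labels_a, labels_b)]

names = ["x0", "y0", "z0", "z1"]   # typical 1-qubit tester state names
states = names
for _ in range(1, 2):              # extend to a 2-qubit system (num = 2)
    states = combine(states, names)

print(len(states))  # → 16
```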
# Gate
# tasks/processing/entity_parser/__init__.py (repo: getdumont/dumont)
from .url import *
# QuickSound/__init__.py (repo: Pouple/QuickSound)
from .Sound import Sound
# tests/test_gui/test_ui_layout_box.py (repo: Mr-Coxall/arcade)
import pytest
import arcade
from arcade import SpriteSolidColor
from arcade.gui.layouts.box import UIBoxLayout
from . import t, dummy_element
@pytest.fixture()
def v_layout():
    return UIBoxLayout()


def test_vertical(v_layout):
    v_layout.top = 200
    v_layout.left = 100
    element_1 = dummy_element()
    element_2 = dummy_element()
    v_layout.pack(element_1)
    v_layout.pack(element_2)
    v_layout.do_layout()

    assert element_1.top == 200
    assert element_1.bottom == 150
    assert element_1.left == 100
    assert element_2.top == element_1.bottom
    assert element_2.left == 100


def test_vertical_with_spacing(v_layout):
    v_layout.top = 200
    v_layout.left = 100
    element_1 = dummy_element()
    element_2 = dummy_element()
    v_layout.pack(element_1)
    v_layout.pack(element_2, space=10)
    v_layout.do_layout()

    assert element_1.bottom == 150
    assert element_2.top == 140
@pytest.fixture()
def h_layout():
    return UIBoxLayout(vertical=False)


def test_horizontal(h_layout):
    h_layout.top = 200
    h_layout.left = 100
    element_1 = dummy_element()
    element_2 = dummy_element()
    h_layout.pack(element_1)
    h_layout.pack(element_2)
    h_layout.size = h_layout.min_size
    h_layout.do_layout()

    assert element_1.top == 200
    assert element_1.left == 100
    assert element_2.top == 200
    assert element_2.left == 200


def test_horizontal_with_spacing(h_layout):
    h_layout.top = 200
    h_layout.left = 100
    element_1 = dummy_element()
    element_2 = dummy_element()
    h_layout.pack(element_1)
    h_layout.pack(element_2, space=10)
    h_layout.size = h_layout.min_size
    h_layout.do_layout()

    assert element_1.right == 200
    assert element_2.left == 210
def test_box_layout_updates_width_and_height(v_layout: UIBoxLayout):
    v_layout.pack(dummy_element(100, 50))
    v_layout.size = v_layout.min_size
    v_layout.do_layout()

    assert v_layout.width == 100
    assert v_layout.height == 50

    v_layout.pack(dummy_element(150, 50), space=10)
    v_layout.size = v_layout.min_size
    v_layout.do_layout()

    assert v_layout.width == 150
    assert v_layout.height == 110


def test_v_box_align_items_center():
    box = UIBoxLayout(vertical=False, align="center")
    element = dummy_element()
    box.pack(element)
    box.width = 400
    box.do_layout()

    assert element.center_x == 200


def test_v_box_align_items_left():
    box = UIBoxLayout(vertical=False, align="left")
    element = dummy_element()
    box.pack(element)
    box.width = 400
    box.do_layout()

    assert element.left == 0
@pytest.mark.parametrize(
    ["vertical", "align", "center_x", "center_y"],
    [
        t("vertical top", True, "top", 50, 475),
        t("vertical center", True, "center", 50, 250),
        t("vertical bottom", True, "bottom", 50, 25),
        t("horizontal left", False, "left", 50, 25),
        t("horizontal center", False, "center", 200, 25),
        t("horizontal right", False, "right", 350, 25),
        # use synonyms
        t("vertical start", True, "start", 50, 475),
        t("vertical end", True, "end", 50, 25),
        t("vertical left", True, "left", 50, 475),
        t("vertical right", True, "right", 50, 25),
        t("horizontal start", False, "start", 50, 25),
        t("horizontal end", False, "end", 350, 25),
        t("horizontal top", False, "top", 50, 25),
        t("horizontal bottom", False, "bottom", 350, 25),
    ],
)
def test_box_alignment(vertical, align, center_x, center_y):
    box = UIBoxLayout(vertical=vertical, align=align)
    element_1 = dummy_element(width=100, height=50)
    box.pack(element_1)
    box.height = 500
    box.width = 400
    box.left = 0
    box.bottom = 0
    box.do_layout()

    assert (element_1.center_x, element_1.center_y) == (center_x, center_y)
@pytest.mark.parametrize(
    ["vertical", "align", "center_x", "center_y"],
    [
        t("vertical top", True, "top", 50, 475),
        t("vertical center", True, "center", 50, 250),
        t("vertical bottom", True, "bottom", 50, 25),
        t("horizontal left", False, "left", 50, 25),
        t("horizontal center", False, "center", 200, 25),
        t("horizontal right", False, "right", 350, 25),
        # use synonyms
        t("vertical start", True, "start", 50, 475),
        t("vertical end", True, "end", 50, 25),
        t("vertical left", True, "left", 50, 475),
        t("vertical right", True, "right", 50, 25),
        t("horizontal start", False, "start", 50, 25),
        t("horizontal end", False, "end", 350, 25),
        t("horizontal top", False, "top", 50, 25),
        t("horizontal bottom", False, "bottom", 350, 25),
    ],
)
def test_box_alignment_for_sprites(vertical, align, center_x, center_y):
    box = UIBoxLayout(vertical=vertical, align=align)
    element_1 = SpriteSolidColor(width=100, height=50, color=arcade.color.RED)
    box.pack(element_1)
    box.height = 500
    box.width = 400
    box.left = 0
    box.bottom = 0
    box.do_layout()

    assert (element_1.center_x, element_1.center_y) == (center_x, center_y)
def test_min_size_vertical():
    box = UIBoxLayout(vertical=True)
    box.pack(dummy_element(width=100, height=50))
    box.pack(dummy_element(width=100, height=50), space=20)
    box.do_layout()

    assert box.min_size == (100, 120)


def test_min_size_horizontal():
    box = UIBoxLayout(vertical=False)
    box.pack(dummy_element(width=100, height=50))
    box.pack(dummy_element(width=100, height=50), space=20)
    box.do_layout()

    assert box.min_size == (220, 50)
def test_vertical_children_size_hint_mix():
    box = UIBoxLayout(vertical=True)
    box.top = 100
    dummy1 = dummy_element(width=100, height=50)
    dummy1.size_hint = None
    box.pack(dummy1)
    dummy2 = dummy_element(width=100, height=50)
    dummy2.size_hint = (0, 0)
    box.pack(dummy2)
    box.do_layout()

    assert dummy1.top == 100
    assert dummy2.top == 50


def test_horizontal_children_size_hint_mix():
    box = UIBoxLayout(vertical=False)
    box.left = 0
    dummy1 = dummy_element(width=100, height=50)
    dummy1.size_hint = None
    box.pack(dummy1)
    dummy2 = dummy_element(width=100, height=50)
    dummy2.size_hint = (0, 0)
    box.pack(dummy2)
    box.do_layout()

    assert dummy1.left == 0
    assert dummy2.left == 100
def test_horizontal_nested_layout():
    nested = UIBoxLayout(vertical=False)
    nested.pack(dummy_element(width=100, height=50))

    box = UIBoxLayout(vertical=False)
    box.pack(nested)

    print(nested.min_size)
    print(box.min_size)
    box.size = box.min_size
    box.do_layout()

    assert box.min_size == (100, 50)


def test_vertical_nested_layout():
    nested = UIBoxLayout(vertical=True)
    nested.pack(dummy_element(width=100, height=50))

    box = UIBoxLayout(vertical=False)
    box.pack(nested)

    print(nested.min_size)
    print(box.min_size)
    box.size = box.min_size
    box.do_layout()

    assert box.min_size == (100, 50)
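The vertical packing arithmetic these tests exercise can be sketched without arcade; `layout_vertical` below is an illustrative model of the expected positions, not `UIBoxLayout`'s actual implementation:

```python
def layout_vertical(top, left, elements, spaces):
    """Place (width, height) elements top-down from `top`.
    spaces[i] is the gap inserted before element i (the `space=` pack argument).
    Returns a dict of top/bottom/left per element, mirroring the assertions above."""
    positions = []
    cursor = top
    for (width, height), space in zip(elements, spaces):
        cursor -= space                      # gap before the element
        positions.append({"top": cursor, "bottom": cursor - height, "left": left})
        cursor -= height                     # advance past the element
    return positions


pos = layout_vertical(200, 100, [(100, 50), (100, 50)], [0, 10])
print(pos[0])  # {'top': 200, 'bottom': 150, 'left': 100}
print(pos[1])  # {'top': 140, 'bottom': 90, 'left': 100}
```

The second element's `top == 140` matches `test_vertical_with_spacing`: the first element's bottom (150) minus the 10-pixel gap.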
# upload/views.py (repo: and-sm/testgr)
import magic
import json
from rest_framework.response import Response
from rest_framework.views import APIView
from rest_framework import status
from rest_framework.permissions import IsAuthenticated
from rest_framework.authentication import TokenAuthentication
from django.http import JsonResponse
from loader.models import Files
from loader.models import TestJobs, Tests
from django.core.exceptions import ObjectDoesNotExist
from django.core.files.base import ContentFile
from django.conf import settings
class UploadForJobView(APIView):
    authentication_classes = [TokenAuthentication]
    permission_classes = (IsAuthenticated,)

    def post(self, request, uuid):
        try:
            json_body = json.loads(request.body.decode("utf-8"))
            test = Tests.objects.get(uuid=uuid)
            file = ContentFile(str(json_body), name="file.json")
            instance = Files(file=file)
            instance.test = test
            instance.save()
            return Response(status=status.HTTP_201_CREATED)
        except Exception:  # body is not JSON: fall back to a multipart file upload
            if 'file' in request.data:
                try:
                    job = TestJobs.objects.get(uuid=uuid)
                    file_obj = request.data['file']
                    # Get the MIME type by reading the header of the file
                    initial_pos = file_obj.tell()
                    file_obj.seek(0)
                    mime_type = magic.from_buffer(file_obj.read(1024), mime=True)
                    file_obj.seek(initial_pos)
                    if mime_type not in settings.UPLOAD_MIME_TYPES:
                        return JsonResponse({"detail": "Incorrect file type"}, status=400)
                    instance = Files(file=file_obj)
                    instance.job = job
                    instance.save()
                    return Response(status=status.HTTP_201_CREATED)
                except ObjectDoesNotExist:
                    return JsonResponse({"detail": "Incorrect file type"}, status=400)
            return JsonResponse({"detail": "Incorrect file content"}, status=400)


class UploadForTestView(APIView):
    authentication_classes = [TokenAuthentication]
    permission_classes = (IsAuthenticated,)

    def post(self, request, uuid):
        try:
            json_body = json.loads(request.body.decode("utf-8"))
            test = Tests.objects.get(uuid=uuid)
            file = ContentFile(str(json_body), name="file.json")
            instance = Files(file=file)
            instance.test = test
            instance.save()
            return Response(status=status.HTTP_201_CREATED)
        except Exception:  # body is not JSON: fall back to a multipart file upload
            if 'file' in request.data:
                try:
                    test = Tests.objects.get(uuid=uuid)
                    file_obj = request.data['file']
                    # Get the MIME type by reading the header of the file
                    initial_pos = file_obj.tell()
                    file_obj.seek(0)
                    mime_type = magic.from_buffer(file_obj.read(1024), mime=True)
                    file_obj.seek(initial_pos)
                    if mime_type not in settings.UPLOAD_MIME_TYPES:
                        return JsonResponse({"detail": "Incorrect file type"}, status=400)
                    instance = Files(file=file_obj)
                    instance.test = test
                    instance.save()
                    return Response(status=status.HTTP_201_CREATED)
                except ObjectDoesNotExist:
                    return JsonResponse({"detail": "Incorrect file type"}, status=400)
            # Mirror UploadForJobView's fallback for non-file, non-JSON bodies
            return JsonResponse({"detail": "Incorrect file content"}, status=400)
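Both views sniff the MIME type from the file's first bytes via `magic.from_buffer` rather than trusting the filename. The same idea can be sketched with a hand-rolled signature table; the signatures and helper below are illustrative, not python-magic's implementation:

```python
import io

# A few well-known magic-byte prefixes (far from exhaustive)
SIGNATURES = {
    b"\x89PNG\r\n\x1a\n": "image/png",
    b"%PDF-": "application/pdf",
    b"\xff\xd8\xff": "image/jpeg",
}


def sniff_mime(file_obj, default="application/octet-stream"):
    """Read the file header, match known magic bytes, and restore the position,
    just as the views above save and restore file_obj.tell()."""
    initial_pos = file_obj.tell()
    file_obj.seek(0)
    header = file_obj.read(16)
    file_obj.seek(initial_pos)
    for magic_bytes, mime in SIGNATURES.items():
        if header.startswith(magic_bytes):
            return mime
    return default


print(sniff_mime(io.BytesIO(b"%PDF-1.7 ...")))  # application/pdf
```

Restoring the read position matters because Django will stream the same file object into storage after validation.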
# tests/test_local_harness.py (repo: darpa-sail-on/sail-on-client)
"""Tests for PAR Interface."""
import os
import pytest
TEST_ID_NAME = "test_ids.csv"
def _initialize_session(
    local_interface, protocol_name, domain="image_classification", hints=()
):
    """
    Private function to initialize a session.

    Args:
        local_interface (LocalInterface): An instance of LocalInterface
        protocol_name (str): Name of the protocol
        domain (str): Name of the domain
        hints (list[str]): Hints used in session request

    Return:
        session id
    """
    test_id_path = os.path.join(
        os.path.dirname(__file__),
        "data",
        f"{protocol_name}",
        f"{domain}",
        TEST_ID_NAME,
    )
    test_ids = list(map(str.strip, open(test_id_path, "r").readlines()))
    # Testing if session was successfully initialized
    session_id = local_interface.session_request(
        test_ids, f"{protocol_name}", f"{domain}", "0.1.1", list(hints), 0.5
    )
    return session_id


def _read_image_ids(image_ids_path):
    """
    Private function to read image ids from a csv file.

    Args:
        image_ids_path (str): Path to a file containing image ids

    Return:
        list of image ids
    """
    return list(map(str.strip, open(image_ids_path, "r").readlines()))
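`_read_image_ids` reads one id per line with a `map`/`readlines` one-liner that leaves the file handle to the garbage collector. An equivalent context-managed sketch (which additionally skips blank lines, a slight behavioral difference from the helper above):

```python
import os
import tempfile


def read_ids(path):
    """Read one id per line, stripping whitespace and skipping blank lines."""
    with open(path, "r") as fh:
        return [line.strip() for line in fh if line.strip()]


# Demo on a temporary file with a trailing blank line
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "w") as fh:
    fh.write("OND.1.1.1234\nOND.54011215.0000.1236\n\n")
print(read_ids(path))  # ['OND.1.1.1234', 'OND.54011215.0000.1236']
os.remove(path)
```

The `with` block guarantees the handle is closed even if reading fails, which the bare `open(...).readlines()` form does not.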
def test_initialize(get_local_harness_params):
    """
    Test local harness initialization.

    Args:
        get_local_harness_params (tuple): Tuple to configure local harness

    Return:
        None
    """
    from sail_on_client.harness.local_harness import LocalHarness

    data_dir, result_dir, gt_dir, gt_config = get_local_harness_params
    LocalHarness(data_dir, result_dir, gt_dir, gt_config)


def test_test_ids_request(get_local_harness_params):
    """
    Test request for test ids.

    Args:
        get_local_harness_params (tuple): Tuple to configure local harness

    Return:
        None
    """
    from sail_on_client.harness.local_harness import LocalHarness

    data_dir, result_dir, gt_dir, gt_config = get_local_harness_params
    local_interface = LocalHarness(data_dir, result_dir, gt_dir, gt_config)
    test_dir = os.path.dirname(__file__)
    assumptions_path = os.path.join(test_dir, "assumptions.json")
    filename = local_interface.test_ids_request(
        "OND", "image_classification", "5678", assumptions_path
    )
    expected = os.path.join(
        test_dir, "data", "OND", "image_classification", TEST_ID_NAME
    )
    assert os.stat(expected).st_size > 5
    assert expected == filename


def test_session_request(get_local_harness_params):
    """
    Test session request.

    Args:
        get_local_harness_params (tuple): Tuple to configure local harness

    Return:
        None
    """
    from sail_on_client.harness.local_harness import LocalHarness

    data_dir, result_dir, gt_dir, gt_config = get_local_harness_params
    local_interface = LocalHarness(data_dir, result_dir, gt_dir, gt_config)
    test_dir = os.path.dirname(__file__)
    test_id_path = os.path.join(
        test_dir, "data", "OND", "image_classification", TEST_ID_NAME
    )
    test_ids = list(map(str.strip, open(test_id_path, "r").readlines()))
    # Testing if session was successfully initialized
    local_interface.session_request(
        test_ids, "OND", "image_classification", "0.1.1", [], 0.5
    )
    # Testing with hints
    local_interface.session_request(
        test_ids, "OND", "image_classification", "0.1.1", ["red_light"], 0.5
    )
def test_resume_session(get_local_harness_params):
    """
    Test resume session.

    Args:
        get_local_harness_params (tuple): Tuple to configure local harness

    Return:
        None
    """
    from sail_on_client.harness.local_harness import LocalHarness

    data_dir, result_dir, gt_dir, gt_config = get_local_harness_params
    local_interface = LocalHarness(data_dir, result_dir, gt_dir, gt_config)
    session_id = local_interface.session_request(
        ["OND.54011215.0000.1236"], "OND", "image_classification", "0.1.1", [], 0.5
    )
    local_interface.complete_test(session_id, "OND.54011215.0000.1236")
    finished_test = local_interface.resume_session(session_id)
    assert finished_test == ["OND.54011215.0000.1236"]

    # Testing with hints
    session_id = local_interface.session_request(
        ["OND.54011215.0000.1236"],
        "OND",
        "image_classification",
        "0.1.1",
        ["red_light"],
        0.4,
    )
    local_interface.complete_test(session_id, "OND.54011215.0000.1236")
    finished_test = local_interface.resume_session(session_id)
    assert finished_test == ["OND.54011215.0000.1236"]


def test_dataset_request(get_local_harness_params):
    """
    Tests for dataset request.

    Args:
        get_local_harness_params (tuple): Tuple to configure local harness

    Return:
        None
    """
    from sail_on_client.harness.local_harness import LocalHarness

    data_dir, result_dir, gt_dir, gt_config = get_local_harness_params
    local_interface = LocalHarness(data_dir, result_dir, gt_dir, gt_config)
    session_id = _initialize_session(local_interface, "OND")
    # Test a correct dataset request
    filename = local_interface.dataset_request("OND.1.1.1234", 0, session_id)
    expected = os.path.join(
        local_interface.temp_dir_name, f"{session_id}.OND.1.1.1234.0.csv"
    )
    assert expected == filename
    expected_image_ids = _read_image_ids(expected)
    assert expected_image_ids == ["n01484850_18013.JPEG", "n01484850_24624.JPEG"]
@pytest.mark.parametrize(
    "protocol_constant", ["detection", "classification", "characterization"]
)
@pytest.mark.parametrize("protocol_name", ["OND", "CONDDA"])
def test_post_results(get_local_harness_params, protocol_constant, protocol_name):
    """
    Tests for post results.

    Args:
        get_local_harness_params (tuple): Tuple to configure local interface
        protocol_constant (str): Constant used by the server to identify results
        protocol_name (str): Name of the protocol (options: OND and CONDDA)

    Return:
        None
    """
    from sail_on_client.harness.local_harness import LocalHarness

    data_dir, result_dir, gt_dir, gt_config = get_local_harness_params
    local_interface = LocalHarness(data_dir, result_dir, gt_dir, gt_config)
    session_id = _initialize_session(local_interface, protocol_name)
    result_files = {
        protocol_constant: os.path.join(
            os.path.dirname(__file__), f"test_results_{protocol_name}.1.1.1234.csv"
        )
    }
    local_interface.post_results(
        result_files, f"{protocol_name}.1.1.1234", 0, session_id
    )


@pytest.mark.parametrize(
    "feedback_mapping",
    (
        ("classification", ("detection", "classification")),
        ("score", ("detection", "classification")),
    ),
)
@pytest.mark.parametrize("protocol_name", ["OND", "CONDDA"])
def test_feedback_request(get_local_harness_params, feedback_mapping, protocol_name):
    """
    Tests for feedback request.

    Args:
        get_local_harness_params (tuple): Tuple to configure local interface
        feedback_mapping (dict): Dict with mapping for feedback
        protocol_name (str): Name of the protocol (options: OND and CONDDA)

    Return:
        None
    """
    from sail_on_client.harness.local_harness import LocalHarness

    data_dir, result_dir, gt_dir, gt_config = get_local_harness_params
    local_interface = LocalHarness(data_dir, result_dir, gt_dir, gt_config)
    session_id = _initialize_session(local_interface, protocol_name)

    # Post results before requesting feedback
    result_files = {}
    protocol_constant = feedback_mapping[0]
    required_files = feedback_mapping[1]
    for required_file in required_files:
        result_files[required_file] = os.path.join(
            os.path.dirname(__file__), f"test_results_{protocol_name}.1.1.1234.csv"
        )
    local_interface.post_results(
        result_files, f"{protocol_name}.1.1.1234", 0, session_id
    )

    # Get feedback for detection
    response = local_interface.get_feedback_request(
        ["n01484850_18013.JPEG", "n01484850_24624.JPEG"],
        protocol_constant,
        f"{protocol_name}.1.1.1234",
        0,
        session_id,
    )
    expected = os.path.join(
        local_interface.temp_dir_name,
        "feedback",
        f"{session_id}.{protocol_name}.1.1.1234.0_{protocol_constant}.csv",
    )
    assert expected == response
def test_image_classification_evaluate(get_local_harness_params):
    """
    Test evaluate with rounds.

    Args:
        get_local_harness_params (tuple): Tuple to configure local interface

    Return:
        None
    """
    from sail_on_client.harness.local_harness import LocalHarness

    data_dir, result_dir, gt_dir, gt_config = get_local_harness_params
    local_interface = LocalHarness(data_dir, result_dir, gt_dir, gt_config)
    session_id = _initialize_session(local_interface, "OND", "image_classification")
    baseline_session_id = _initialize_session(
        local_interface, "OND", "image_classification"
    )
    result_folder = os.path.join(
        os.path.dirname(__file__), "mock_results", "image_classification"
    )
    detection_file_id = os.path.join(
        result_folder, "OND.54011215.0000.1236_PreComputedDetector_detection.csv"
    )
    classification_file_id = os.path.join(
        result_folder, "OND.54011215.0000.1236_PreComputedDetector_classification.csv"
    )
    baseline_classification_file_id = os.path.join(
        result_folder,
        "OND.54011215.0000.1236_BaselinePreComputedDetector_classification.csv",
    )
    results = {
        "detection": detection_file_id,
        "classification": classification_file_id,
    }
    baseline_result = {
        "classification": baseline_classification_file_id,
    }
    local_interface.post_results(results, "OND.54011215.0000.1236", 0, session_id)
    local_interface.post_results(
        baseline_result, "OND.54011215.0000.1236", 0, baseline_session_id
    )
    local_interface.evaluate("OND.54011215.0000.1236", 0, session_id)
    local_interface.evaluate(
        "OND.54011215.0000.1236", 0, session_id, baseline_session_id
    )


def test_activity_recognition_evaluate(get_ar_local_harness_params):
    """
    Test evaluate for activity recognition.

    Args:
        get_ar_local_harness_params (tuple): Tuple to configure local interface

    Return:
        None
    """
    from sail_on_client.harness.local_harness import LocalHarness

    data_dir, result_dir, gt_dir, gt_config = get_ar_local_harness_params
    local_interface = LocalHarness(data_dir, result_dir, gt_dir, gt_config)
    session_id = _initialize_session(local_interface, "OND", "activity_recognition")
    baseline_session_id = _initialize_session(
        local_interface, "OND", "activity_recognition"
    )
    result_folder = os.path.join(
        os.path.dirname(__file__), "mock_results", "activity_recognition"
    )
    detection_file_id = os.path.join(
        result_folder, "OND.10.90001.2100554_PreComputedONDAgent_detection.csv"
    )
    classification_file_id = os.path.join(
        result_folder, "OND.10.90001.2100554_PreComputedONDAgent_classification.csv"
    )
    characterization_file_id = os.path.join(
        result_folder, "OND.10.90001.2100554_PreComputedONDAgent_characterization.csv"
    )
    results = {
        "detection": detection_file_id,
        "classification": classification_file_id,
        "characterization": characterization_file_id,
    }
    baseline_classification_file_id = os.path.join(
        result_folder,
        "OND.10.90001.2100554_BaselinePreComputedONDAgent_classification.csv",
    )
    baseline_result = {
        "classification": baseline_classification_file_id,
    }
    local_interface.post_results(results, "OND.10.90001.2100554", 0, session_id)
    local_interface.post_results(
        baseline_result, "OND.10.90001.2100554", 0, baseline_session_id
    )
    local_interface.evaluate("OND.10.90001.2100554", 0, session_id)
    local_interface.evaluate("OND.10.90001.2100554", 0, session_id, baseline_session_id)
def test_transcripts_evaluate(get_dt_local_harness_params):
    """
    Test evaluate for transcripts.

    Args:
        get_dt_local_harness_params (tuple): Tuple to configure local interface

    Return:
        None
    """
    from sail_on_client.harness.local_harness import LocalHarness

    data_dir, result_dir, gt_dir, gt_config = get_dt_local_harness_params
    local_interface = LocalHarness(data_dir, result_dir, gt_dir, gt_config)
    session_id = _initialize_session(local_interface, "OND", "transcripts")
    result_folder = os.path.join(
        os.path.dirname(__file__), "mock_results", "transcripts"
    )
    detection_file_id = os.path.join(
        result_folder, "OND.0.90001.8714062_PreComputedDetector_detection.csv"
    )
    classification_file_id = os.path.join(
        result_folder, "OND.0.90001.8714062_PreComputedDetector_classification.csv"
    )
    characterization_file_id = os.path.join(
        result_folder, "OND.0.90001.8714062_PreComputedDetector_characterization.csv"
    )
    results = {
        "detection": detection_file_id,
        "classification": classification_file_id,
        "characterization": characterization_file_id,
    }
    baseline_session_id = _initialize_session(local_interface, "OND", "transcripts")
    local_interface.post_results(results, "OND.0.90001.8714062", 0, session_id)
    local_interface.evaluate("OND.0.90001.8714062", 0, session_id)
    baseline_classification_file_id = os.path.join(
        result_folder,
        "OND.0.90001.8714062_BaselinePreComputedDetector_classification.csv",
    )
    baseline_result = {
        "classification": baseline_classification_file_id,
    }
    local_interface.post_results(
        baseline_result, "OND.0.90001.8714062", 0, baseline_session_id
    )
    local_interface.evaluate("OND.0.90001.8714062", 0, session_id, baseline_session_id)


def test_image_classification_evaluate_roundwise(get_local_harness_params):
    """
    Test round-wise evaluate.

    Args:
        get_local_harness_params (tuple): Tuple to configure local interface

    Return:
        None
    """
    from sail_on_client.harness.local_harness import LocalHarness

    data_dir, result_dir, gt_dir, gt_config = get_local_harness_params
    local_interface = LocalHarness(data_dir, result_dir, gt_dir, gt_config)
    session_id = _initialize_session(local_interface, "OND", "image_classification")
    result_folder = os.path.join(
        os.path.dirname(__file__), "mock_results", "image_classification"
    )
    detection_file_id = os.path.join(
        result_folder, "OND.54011215.0000.1236_PreComputedDetector_detection.csv"
    )
    classification_file_id = os.path.join(
        result_folder, "OND.54011215.0000.1236_PreComputedDetector_classification.csv"
    )
    results = {
        "detection": detection_file_id,
        "classification": classification_file_id,
    }
    local_interface.post_results(results, "OND.54011215.0000.1236", 0, session_id)
    local_interface.evaluate_round_wise("OND.54011215.0000.1236", 0, session_id)
def test_complete_test(get_local_harness_params):
    """
    Test complete test request.

    Args:
        get_local_harness_params (tuple): Tuple to configure local interface

    Return:
        None
    """
    from sail_on_client.harness.local_harness import LocalHarness

    data_dir, result_dir, gt_dir, gt_config = get_local_harness_params
    local_interface = LocalHarness(data_dir, result_dir, gt_dir, gt_config)
    session_id = _initialize_session(local_interface, "OND")
    local_interface.complete_test(session_id, "OND.10.90001.2100554")


def test_terminate_session(get_local_harness_params):
    """
    Test terminate session request.

    Args:
        get_local_harness_params (tuple): Tuple to configure local interface

    Return:
        None
    """
    from sail_on_client.harness.local_harness import LocalHarness

    data_dir, result_dir, gt_dir, gt_config = get_local_harness_params
    local_interface = LocalHarness(data_dir, result_dir, gt_dir, gt_config)
    session_id = _initialize_session(local_interface, "OND")
    local_interface.terminate_session(session_id)


def test_get_metadata(get_local_harness_params):
    """
    Test get metadata.

    Args:
        get_local_harness_params (tuple): Tuple to configure local interface

    Return:
        None
    """
    from sail_on_client.harness.local_harness import LocalHarness

    data_dir, result_dir, gt_dir, gt_config = get_local_harness_params
    local_interface = LocalHarness(data_dir, result_dir, gt_dir, gt_config)
    session_id = _initialize_session(local_interface, "OND")
    metadata = local_interface.get_test_metadata(session_id, "OND.1.1.1234")
    assert "OND" == metadata["protocol"]
    assert 3 == metadata["known_classes"]

    session_id = _initialize_session(local_interface, "OND", hints=["red_light"])
    metadata = local_interface.get_test_metadata(session_id, "OND.1.1.1234")
    assert "n01484850_4515.JPEG" == metadata["red_light"]
# piqs/__init__.py (repo: nathanshammah/pim)
from piqs.dicke import *
from piqs.cy.dicke import jmm1_dictionary
from piqs.about import *
from piqs.cite import *
__version__ = '1.0'
| 19.571429 | 41 | 0.766423 | 22 | 137 | 4.545455 | 0.545455 | 0.32 | 0.28 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025641 | 0.145985 | 137 | 6 | 42 | 22.833333 | 0.82906 | 0 | 0 | 0 | 0 | 0 | 0.021898 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.8 | 0 | 0.8 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
# tests/test_api.py (repo: yakky/microservice-talk)
from urllib.parse import urlencode
import httpx
import pytest
from book_search.main import app
@pytest.mark.asyncio
async def test_search_basic(load_books):
    async with httpx.AsyncClient(app=app, base_url="http://testserver") as client:
        url = app.url_path_for("search")
        params = urlencode({"q": "Susan Collins"})
        response = await client.get(f"{url}?{params}")
        assert response.status_code == 200
        data = response.json()
        assert data["results"]
        assert data["count"] == 3
        assert [row["book_id"] for row in data["results"]] == [1, 17, 20]
        for row in data["results"]:
            assert row["title"]
            assert row["isbn13"]


@pytest.mark.asyncio
async def test_search_year(load_books):
    async with httpx.AsyncClient(app=app, base_url="http://testserver") as client:
        url = app.url_path_for("search")
        params = urlencode({"year": 2008})
        response = await client.get(f"{url}?{params}")
        assert response.status_code == 200
        data = response.json()
        assert data["results"]
        assert data["count"] == 4
        assert [row["book_id"] for row in data["results"]] == [1, 56, 73, 88]
        for row in data["results"]:
            assert row["title"]
            assert row["isbn13"]


@pytest.mark.asyncio
async def test_search_tags(load_books):
    async with httpx.AsyncClient(app=app, base_url="http://testserver") as client:
        url = app.url_path_for("search")
        params = urlencode({"tags": ["between-film", "address-year"]})
        response = await client.get(f"{url}?{params}")
        assert response.status_code == 200
        data = response.json()
        assert data["results"]
        assert data["count"] == 4
        assert [row["book_id"] for row in data["results"]] == [1, 2, 67, 90]
        for row in data["results"]:
            assert row["title"]
            assert row["isbn13"]


@pytest.mark.asyncio
async def test_ping():
    async with httpx.AsyncClient(app=app, base_url="http://testserver") as client:
        url = app.url_path_for("ping")
        response = await client.get(url)
        assert response.status_code == 200
        data = response.json()
        assert data["message"] == "Ping"
| 34.8 | 82 | 0.611406 | 294 | 2,262 | 4.602041 | 0.234694 | 0.073171 | 0.075388 | 0.053215 | 0.85218 | 0.85218 | 0.85218 | 0.826312 | 0.826312 | 0.826312 | 0 | 0.025176 | 0.244916 | 2,262 | 64 | 83 | 35.34375 | 0.766979 | 0 | 0 | 0.666667 | 0 | 0 | 0.14191 | 0 | 0 | 0 | 0 | 0 | 0.37037 | 1 | 0 | false | 0 | 0.074074 | 0 | 0.074074 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
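One detail in `test_search_tags` above is worth flagging: `urlencode` receives a list value without `doseq=True`, so the list is serialized as its Python `repr` rather than as repeated `tags=` parameters. A minimal stdlib sketch of the difference (variable names here are illustrative, not from the original file):

```python
from urllib.parse import urlencode

# Without doseq, the whole list is stringified into a single value.
single = urlencode({"tags": ["a", "b"]})

# With doseq=True, each element becomes its own key=value pair,
# which is what most query-string parsers expect for multi-valued params.
multi = urlencode({"tags": ["a", "b"]}, doseq=True)  # → "tags=a&tags=b"
```

Whether the flattened single-value form is what the `book_search` service expects depends on its query parsing; the assertions in the test suite presumably match whichever form the server actually accepts.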
fdfe5b83096c46d389e087a436535f3ebea88631 | 173 | py | Python | operations/query_session.py | tranaj2/Crockpot | 435a3c89fffeb94dbab24845fac11a75c795444a | [
"MIT"
] | null | null | null | operations/query_session.py | tranaj2/Crockpot | 435a3c89fffeb94dbab24845fac11a75c795444a | [
"MIT"
] | 5 | 2018-02-21T03:40:48.000Z | 2018-04-17T06:38:48.000Z | operations/query_session.py | tranaj2/CrockPot | 435a3c89fffeb94dbab24845fac11a75c795444a | [
"MIT"
] | null | null | null | """ Module used to get a handle to the DB session """
from config import Config


def get_session():
    """Get a handle to the DB session"""
    return Config.SA_SESSION()
| 21.625 | 53 | 0.682081 | 28 | 173 | 4.142857 | 0.535714 | 0.068966 | 0.172414 | 0.206897 | 0.413793 | 0.413793 | 0.413793 | 0 | 0 | 0 | 0 | 0 | 0.213873 | 173 | 7 | 54 | 24.714286 | 0.852941 | 0.439306 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
e321e388a6626fd4006e15daddeaab00143d3e62 | 49 | py | Python | Beat-Ai/BeatSaber-AI/BeatSaber-AI/phase2/creatingWalls.py | Codingmace/BeatSaber-AI | 1978c68ac983320996eb9161b603ab12be868d0c | [
"MIT"
] | null | null | null | Beat-Ai/BeatSaber-AI/BeatSaber-AI/phase2/creatingWalls.py | Codingmace/BeatSaber-AI | 1978c68ac983320996eb9161b603ab12be868d0c | [
"MIT"
] | null | null | null | Beat-Ai/BeatSaber-AI/BeatSaber-AI/phase2/creatingWalls.py | Codingmace/BeatSaber-AI | 1978c68ac983320996eb9161b603ab12be868d0c | [
"MIT"
] | null | null | null | # TODO
# Feature to add later if get around to it | 24.5 | 42 | 0.734694 | 10 | 49 | 3.6 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.22449 | 49 | 2 | 42 | 24.5 | 0.947368 | 0.918367 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0.5 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8b53eb9931550d6004f00345b51705db23a7af49 | 26 | py | Python | crawler/rpc/__init__.py | ly0/pycrawler | 3be0879b2c342297aa42e642552a988a8295a0eb | [
"MIT"
] | 2 | 2016-10-20T01:40:46.000Z | 2017-03-31T08:27:35.000Z | crawler/rpc/__init__.py | ly0/pycrawler | 3be0879b2c342297aa42e642552a988a8295a0eb | [
"MIT"
] | null | null | null | crawler/rpc/__init__.py | ly0/pycrawler | 3be0879b2c342297aa42e642552a988a8295a0eb | [
"MIT"
] | null | null | null | from .tornadorpc import *
| 13 | 25 | 0.769231 | 3 | 26 | 6.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 26 | 1 | 26 | 26 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8b6ef8cb4ea81cd172549ddeafb3209426d937f5 | 38 | py | Python | brainframe_qt/ui/main_window/video_expanded_view/video_large/stream_overlay/titlebar/__init__.py | aotuai/brainframe-qt | 082cfd0694e569122ff7c63e56dd0ec4b62d5bac | [
"BSD-3-Clause"
] | 17 | 2021-02-11T18:19:22.000Z | 2022-02-08T06:12:50.000Z | brainframe_qt/ui/main_window/video_expanded_view/video_large/stream_overlay/titlebar/__init__.py | aotuai/brainframe-qt | 082cfd0694e569122ff7c63e56dd0ec4b62d5bac | [
"BSD-3-Clause"
] | 80 | 2021-02-11T08:27:31.000Z | 2021-10-13T21:33:22.000Z | brainframe_qt/ui/main_window/video_expanded_view/video_large/stream_overlay/titlebar/__init__.py | aotuai/brainframe-qt | 082cfd0694e569122ff7c63e56dd0ec4b62d5bac | [
"BSD-3-Clause"
] | 5 | 2021-02-12T09:51:34.000Z | 2022-02-08T09:25:15.000Z | from .titlebar import OverlayTitlebar
| 19 | 37 | 0.868421 | 4 | 38 | 8.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 38 | 1 | 38 | 38 | 0.970588 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8b94da8026e16e75c82b0f4bfa9f60215675b895 | 9,107 | py | Python | unified_focal_loss_pytorch.py | oikosohn/compound-loss-pytorch | f53491f498434565c07761db99cea8b7079c14fe | [
"Apache-2.0"
] | 4 | 2021-12-29T13:55:11.000Z | 2022-03-08T11:17:28.000Z | unified_focal_loss_pytorch.py | oikosohn/compound-loss-pytorch | f53491f498434565c07761db99cea8b7079c14fe | [
"Apache-2.0"
] | 1 | 2022-03-30T05:45:16.000Z | 2022-03-30T05:45:16.000Z | unified_focal_loss_pytorch.py | oikosohn/compound-loss-pytorch | f53491f498434565c07761db99cea8b7079c14fe | [
"Apache-2.0"
] | 2 | 2021-12-29T13:55:10.000Z | 2022-03-06T14:43:17.000Z | import torch
import torch.nn as nn


# Helper function to enable the loss functions to be used flexibly for
# both 2D and 3D image segmentation - source: https://github.com/frankkramer-lab/MIScnn
def identify_axis(shape):
    # Three dimensional
    if len(shape) == 5:
        return [1, 2, 3]
    # Two dimensional
    elif len(shape) == 4:
        return [1, 2]
    # Exception - Unknown
    else:
        raise ValueError('Metric: Shape of tensor is neither 2D nor 3D.')


class SymmetricFocalLoss(nn.Module):
    """
    Parameters
    ----------
    delta : float, optional
        controls weight given to false positives and false negatives, by default 0.7
    gamma : float, optional
        Focal Tversky loss' focal parameter controls degree of down-weighting of easy examples, by default 2.0
    epsilon : float, optional
        clip values to prevent division by zero errors
    """

    def __init__(self, delta=0.7, gamma=2., epsilon=1e-07):
        super(SymmetricFocalLoss, self).__init__()
        self.delta = delta
        self.gamma = gamma
        self.epsilon = epsilon

    def forward(self, y_pred, y_true):
        axis = identify_axis(y_true.size())
        y_pred = torch.clamp(y_pred, self.epsilon, 1. - self.epsilon)
        cross_entropy = -y_true * torch.log(y_pred)

        # Calculate losses separately for each class
        back_ce = torch.pow(1 - y_pred[:, :, :, 0], self.gamma) * cross_entropy[:, :, :, 0]
        back_ce = (1 - self.delta) * back_ce

        fore_ce = torch.pow(1 - y_pred[:, :, :, 1], self.gamma) * cross_entropy[:, :, :, 1]
        fore_ce = self.delta * fore_ce

        loss = torch.mean(torch.sum(torch.stack([back_ce, fore_ce], axis=-1), axis=-1))
        return loss


class AsymmetricFocalLoss(nn.Module):
    """For imbalanced datasets

    Parameters
    ----------
    delta : float, optional
        controls weight given to false positives and false negatives, by default 0.25
    gamma : float, optional
        Focal Tversky loss' focal parameter controls degree of down-weighting of easy examples, by default 2.0
    epsilon : float, optional
        clip values to prevent division by zero errors
    """

    def __init__(self, delta=0.25, gamma=2., epsilon=1e-07):
        super(AsymmetricFocalLoss, self).__init__()
        self.delta = delta
        self.gamma = gamma
        self.epsilon = epsilon

    def forward(self, y_pred, y_true):
        axis = identify_axis(y_true.size())
        y_pred = torch.clamp(y_pred, self.epsilon, 1. - self.epsilon)
        cross_entropy = -y_true * torch.log(y_pred)

        # Calculate losses separately for each class, only suppressing the background class
        back_ce = torch.pow(1 - y_pred[:, :, :, 0], self.gamma) * cross_entropy[:, :, :, 0]
        back_ce = (1 - self.delta) * back_ce

        fore_ce = cross_entropy[:, :, :, 1]
        fore_ce = self.delta * fore_ce

        loss = torch.mean(torch.sum(torch.stack([back_ce, fore_ce], axis=-1), axis=-1))
        return loss


class SymmetricFocalTverskyLoss(nn.Module):
    """This is the implementation for binary segmentation.

    Parameters
    ----------
    delta : float, optional
        controls weight given to false positives and false negatives, by default 0.7
    gamma : float, optional
        focal parameter controls degree of down-weighting of easy examples, by default 0.75
    smooth : float, optional
        smoothing constant to prevent division by 0 errors, by default 0.000001
    epsilon : float, optional
        clip values to prevent division by zero errors
    """

    def __init__(self, delta=0.7, gamma=0.75, epsilon=1e-07):
        super(SymmetricFocalTverskyLoss, self).__init__()
        self.delta = delta
        self.gamma = gamma
        self.epsilon = epsilon

    def forward(self, y_pred, y_true):
        y_pred = torch.clamp(y_pred, self.epsilon, 1. - self.epsilon)
        axis = identify_axis(y_true.size())

        # Calculate true positives (tp), false negatives (fn) and false positives (fp)
        tp = torch.sum(y_true * y_pred, axis=axis)
        fn = torch.sum(y_true * (1 - y_pred), axis=axis)
        fp = torch.sum((1 - y_true) * y_pred, axis=axis)
        dice_class = (tp + self.epsilon) / (tp + self.delta * fn + (1 - self.delta) * fp + self.epsilon)

        # Calculate losses separately for each class, enhancing both classes
        back_dice = (1 - dice_class[:, 0]) * torch.pow(1 - dice_class[:, 0], -self.gamma)
        fore_dice = (1 - dice_class[:, 1]) * torch.pow(1 - dice_class[:, 1], -self.gamma)

        # Average class scores
        loss = torch.mean(torch.stack([back_dice, fore_dice], axis=-1))
        return loss


class AsymmetricFocalTverskyLoss(nn.Module):
    """This is the implementation for binary segmentation.

    Parameters
    ----------
    delta : float, optional
        controls weight given to false positives and false negatives, by default 0.7
    gamma : float, optional
        focal parameter controls degree of down-weighting of easy examples, by default 0.75
    smooth : float, optional
        smoothing constant to prevent division by 0 errors, by default 0.000001
    epsilon : float, optional
        clip values to prevent division by zero errors
    """

    def __init__(self, delta=0.7, gamma=0.75, epsilon=1e-07):
        super(AsymmetricFocalTverskyLoss, self).__init__()
        self.delta = delta
        self.gamma = gamma
        self.epsilon = epsilon

    def forward(self, y_pred, y_true):
        # Clip values to prevent division by zero errors
        y_pred = torch.clamp(y_pred, self.epsilon, 1. - self.epsilon)
        axis = identify_axis(y_true.size())

        # Calculate true positives (tp), false negatives (fn) and false positives (fp)
        tp = torch.sum(y_true * y_pred, axis=axis)
        fn = torch.sum(y_true * (1 - y_pred), axis=axis)
        fp = torch.sum((1 - y_true) * y_pred, axis=axis)
        dice_class = (tp + self.epsilon) / (tp + self.delta * fn + (1 - self.delta) * fp + self.epsilon)

        # Calculate losses separately for each class, only enhancing the foreground class
        back_dice = (1 - dice_class[:, 0])
        fore_dice = (1 - dice_class[:, 1]) * torch.pow(1 - dice_class[:, 1], -self.gamma)

        # Average class scores
        loss = torch.mean(torch.stack([back_dice, fore_dice], axis=-1))
        return loss


class SymmetricUnifiedFocalLoss(nn.Module):
    """The Unified Focal loss is a new compound loss function that unifies Dice-based and cross entropy-based loss functions into a single framework.

    Parameters
    ----------
    weight : float, optional
        represents lambda parameter and controls weight given to symmetric Focal Tversky loss and symmetric Focal loss, by default 0.5
    delta : float, optional
        controls weight given to each class, by default 0.6
    gamma : float, optional
        focal parameter controls the degree of background suppression and foreground enhancement, by default 0.5
    epsilon : float, optional
        clip values to prevent division by zero errors
    """

    def __init__(self, weight=0.5, delta=0.6, gamma=0.5):
        super(SymmetricUnifiedFocalLoss, self).__init__()
        self.weight = weight
        self.delta = delta
        self.gamma = gamma

    def forward(self, y_pred, y_true):
        symmetric_ftl = SymmetricFocalTverskyLoss(delta=self.delta, gamma=self.gamma)(y_pred, y_true)
        symmetric_fl = SymmetricFocalLoss(delta=self.delta, gamma=self.gamma)(y_pred, y_true)
        if self.weight is not None:
            return (self.weight * symmetric_ftl) + ((1 - self.weight) * symmetric_fl)
        else:
            return symmetric_ftl + symmetric_fl


class AsymmetricUnifiedFocalLoss(nn.Module):
    """The Unified Focal loss is a new compound loss function that unifies Dice-based and cross entropy-based loss functions into a single framework.

    Parameters
    ----------
    weight : float, optional
        represents lambda parameter and controls weight given to asymmetric Focal Tversky loss and asymmetric Focal loss, by default 0.5
    delta : float, optional
        controls weight given to each class, by default 0.6
    gamma : float, optional
        focal parameter controls the degree of background suppression and foreground enhancement, by default 0.5
    epsilon : float, optional
        clip values to prevent division by zero errors
    """

    def __init__(self, weight=0.5, delta=0.6, gamma=0.2):
        super(AsymmetricUnifiedFocalLoss, self).__init__()
        self.weight = weight
        self.delta = delta
        self.gamma = gamma

    def forward(self, y_pred, y_true):
        # Obtain Asymmetric Focal Tversky loss
        asymmetric_ftl = AsymmetricFocalTverskyLoss(delta=self.delta, gamma=self.gamma)(y_pred, y_true)
        # Obtain Asymmetric Focal loss
        asymmetric_fl = AsymmetricFocalLoss(delta=self.delta, gamma=self.gamma)(y_pred, y_true)
        # Return weighted sum of Asymmetric Focal loss and Asymmetric Focal Tversky loss
        if self.weight is not None:
            return (self.weight * asymmetric_ftl) + ((1 - self.weight) * asymmetric_fl)
        else:
            return asymmetric_ftl + asymmetric_fl
| 40.475556 | 149 | 0.659932 | 1,233 | 9,107 | 4.748581 | 0.135442 | 0.024765 | 0.023911 | 0.017079 | 0.803587 | 0.797438 | 0.781896 | 0.773356 | 0.766866 | 0.754227 | 0 | 0.020602 | 0.237839 | 9,107 | 224 | 150 | 40.65625 | 0.822936 | 0.408038 | 0 | 0.653061 | 0 | 0 | 0.008646 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.132653 | false | 0 | 0.020408 | 0 | 0.295918 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
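The Tversky index at the heart of the `dice_class` computation above, tp / (tp + delta*fn + (1-delta)*fp), can be sanity-checked without PyTorch. The NumPy sketch below mirrors that formula for a channel-last one-hot tensor; the function and variable names are illustrative and not part of the original module:

```python
import numpy as np

def tversky_index(y_true, y_pred, delta=0.7, epsilon=1e-7):
    # Per-class index: tp / (tp + delta*fn + (1-delta)*fp), summed over the
    # spatial axes, matching dice_class in the Focal Tversky losses above.
    tp = np.sum(y_true * y_pred, axis=(0, 1))
    fn = np.sum(y_true * (1 - y_pred), axis=(0, 1))
    fp = np.sum((1 - y_true) * y_pred, axis=(0, 1))
    return (tp + epsilon) / (tp + delta * fn + (1 - delta) * fp + epsilon)

# One-hot background/foreground map, shape (4, 4, 2); a perfect prediction
# yields an index of 1 for both classes, so the loss term (1 - index) vanishes.
y = np.stack([np.eye(4), 1 - np.eye(4)], axis=-1)
idx = tversky_index(y, y)  # → approximately [1.0, 1.0]
```

With delta above 0.5 the index penalizes false negatives more than false positives, which is why the asymmetric variants pair it with background suppression for imbalanced segmentation.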
8be95b97597714bfa42bc5fa7d9964706a7ce82d | 285 | py | Python | securityheaders/checkers/xframeoptions/checker.py | th3cyb3rc0p/securityheaders | 941264be581dc01afe28f6416f2d7bed79aecfb3 | [
"Apache-2.0"
] | 151 | 2018-07-29T22:34:43.000Z | 2022-03-22T05:08:27.000Z | securityheaders/checkers/xframeoptions/checker.py | th3cyb3rc0p/securityheaders | 941264be581dc01afe28f6416f2d7bed79aecfb3 | [
"Apache-2.0"
] | 5 | 2019-04-24T07:31:36.000Z | 2021-04-15T14:31:23.000Z | securityheaders/checkers/xframeoptions/checker.py | th3cyb3rc0p/securityheaders | 941264be581dc01afe28f6416f2d7bed79aecfb3 | [
"Apache-2.0"
] | 42 | 2018-07-31T08:18:59.000Z | 2022-03-28T08:18:32.000Z | from securityheaders.models.xframeoptions import XFrameOptions
from securityheaders.checkers import Checker


class XFrameOptionsChecker(Checker):
    def __init__(self):
        pass

    def getxframeoptions(self, headers):
        return self.extractheader(headers, XFrameOptions)
| 28.5 | 62 | 0.775439 | 27 | 285 | 8.037037 | 0.62963 | 0.175115 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.164912 | 285 | 9 | 63 | 31.666667 | 0.911765 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0.142857 | 0.285714 | 0.142857 | 0.857143 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 6 |
4764a0f03ab836d1bba6f2e125db20c044f6b854 | 46,316 | py | Python | tests/AdagucTests/TestWMS.py | ernstdevreede/adaguc-server | 3516bf1a2ea6abb4f2e85e72944589dfcc990f7c | [
"Apache-2.0"
] | 1 | 2019-08-21T11:03:09.000Z | 2019-08-21T11:03:09.000Z | tests/AdagucTests/TestWMS.py | ernstdevreede/adaguc-server | 3516bf1a2ea6abb4f2e85e72944589dfcc990f7c | [
"Apache-2.0"
] | null | null | null | tests/AdagucTests/TestWMS.py | ernstdevreede/adaguc-server | 3516bf1a2ea6abb4f2e85e72944589dfcc990f7c | [
"Apache-2.0"
] | null | null | null | import os
import os.path
from io import BytesIO
import unittest
import shutil
import subprocess
import json
from lxml import etree
from lxml import objectify
import re
from .AdagucTestTools import AdagucTestTools
ADAGUC_PATH = os.environ['ADAGUC_PATH']
class TestWMS(unittest.TestCase):
testresultspath = "testresults/TestWMS/"
expectedoutputsspath = "expectedoutputs/TestWMS/"
env = {'ADAGUC_CONFIG': ADAGUC_PATH +
"/data/config/adaguc.autoresource.xml"}
AdagucTestTools().mkdir_p(testresultspath)
def compareXML(self, xml, expectedxml):
obj1 = objectify.fromstring(
re.sub(' xmlns="[^"]+"', '', expectedxml, count=1))
obj2 = objectify.fromstring(re.sub(' xmlns="[^"]+"', '', xml, count=1))
# Remove ADAGUC build date and version from keywordlists
for child in obj1.findall("Service/KeywordList")[0]:
child.getparent().remove(child)
for child in obj2.findall("Service/KeywordList")[0]:
child.getparent().remove(child)
# Boundingbox extent values are too varying by different Proj libraries
def removeBBOX(root):
if (root.tag.title() == "Boundingbox"):
# root.getparent().remove(root)
try:
del root.attrib["minx"]
del root.attrib["miny"]
del root.attrib["maxx"]
del root.attrib["maxy"]
except:
pass
for elem in root.getchildren():
removeBBOX(elem)
removeBBOX(obj1)
removeBBOX(obj2)
result = etree.tostring(obj1)
expect = etree.tostring(obj2)
self.assertEquals(expect, result)
def checkReport(self, reportFilename="", expectedReportFilename=""):
self.assertTrue(os.path.exists(reportFilename))
self.assertEqual(AdagucTestTools().readfromfile(reportFilename),
AdagucTestTools().readfromfile(self.expectedoutputsspath + expectedReportFilename))
os.remove(reportFilename)
def test_WMSGetCapabilities_testdatanc(self):
AdagucTestTools().cleanTempDir()
filename = "test_WMSGetCapabilities_testdatanc"
status, data, headers = AdagucTestTools().runADAGUCServer(
"source=testdata.nc&SERVICE=WMS&request=getcapabilities", env=self.env)
AdagucTestTools().writetofile(self.testresultspath + filename, data.getvalue())
self.assertEqual(status, 0)
self.assertTrue(AdagucTestTools().compareGetCapabilitiesXML(
self.testresultspath + filename, self.expectedoutputsspath + filename))
def test_WMSGetMap_testdatanc(self):
AdagucTestTools().cleanTempDir()
filename = "test_WMSGetMap_testdatanc"
status, data, headers = AdagucTestTools().runADAGUCServer("source=testdata.nc&SERVICE=WMS&SERVICE=WMS&VERSION=1.3.0&REQUEST=GetMap&LAYERS=testdata&WIDTH=256&HEIGHT=256&CRS=EPSG%3A4326&BBOX=30,-30,75,30&STYLES=testdata%2Fnearest&FORMAT=image/png&TRANSPARENT=FALSE&",
env=self.env, args=["--report"])
AdagucTestTools().writetofile(self.testresultspath + filename, data.getvalue())
self.assertEqual(status, 0)
self.assertEqual(data.getvalue(), AdagucTestTools(
).readfromfile(self.expectedoutputsspath + filename))
# self.checkReport(reportFilename="checker_report.txt",
# expectedReportFilename="checker_report_WMSGetMap_testdatanc.txt")
def test_WMSGetMap_Report_env(self):
AdagucTestTools().cleanTempDir()
filename = "test_WMSGetMap_Report_env"
reportfilename = "./env_checker_report.txt"
self.env['ADAGUC_CHECKER_FILE'] = reportfilename
status, data, headers = AdagucTestTools().runADAGUCServer("source=testdata.nc&SERVICE=WMS&SERVICE=WMS&VERSION=1.3.0&REQUEST=GetMap&LAYERS=testdata&WIDTH=256&HEIGHT=256&CRS=EPSG%3A4326&BBOX=30,-30,75,30&STYLES=testdata%2Fnearest&FORMAT=image/png&TRANSPARENT=FALSE&",
env=self.env, args=["--report"])
AdagucTestTools().writetofile(self.testresultspath + filename, data.getvalue())
self.assertEqual(status, 0)
self.assertTrue(os.path.exists(reportfilename))
self.env.pop('ADAGUC_CHECKER_FILE', None)
if(os.path.exists(reportfilename)):
os.remove(reportfilename)
def test_WMSGetMap_testdatanc_customprojectionstring(self):
AdagucTestTools().cleanTempDir()
# https://geoservices.knmi.nl/cgi-bin/RADNL_OPER_R___25PCPRR_L3.cgi?SERVICE=WMS&REQUEST=GETMAP&VERSION=1.1.1&SRS%3DPROJ4%3A%2Bproj%3Dstere%20%2Bx_0%3D0%20%2By_0%3D0%20%2Blat_ts%3D60%20%2Blon_0%3D0%20%2Blat_0%3D90%20%2Ba%3D6378140%20%2Bb%3D6356750%20%2Bunits%3Dm&FORMAT=image/png&TRANSPARENT=true&WIDTH=750&HEIGHT=660&BBOX=100000,-4250000,600000,-3810000&LAYERS=RADNL_OPER_R___25PCPRR_L3_KNMI&TIME=2018-03-12T12:40:00
filename = "test_WMSGetMap_testdatanc_customprojectionstring.png"
status, data, headers = AdagucTestTools().runADAGUCServer("source=testdata.nc&SERVICE=WMS&SERVICE=WMS&VERSION=1.3.0&REQUEST=GetMap&LAYERS=testdata&WIDTH=256&HEIGHT=256&CRS=%2Bproj%3Dstere%20%2Bx_0%3D0%20%2By_0%3D0%20%2Blat_ts%3D60%20%2Blon_0%3D0%20%2Blat_0%3D90%20%2Ba%3D6378140%20%2Bb%3D6356750%20%2Bunits%3Dm&BBOX=100000,-4250000,600000,-3810000&STYLES=testdata%2Fnearest&FORMAT=image/png&TRANSPARENT=FALSE&", env=self.env)
AdagucTestTools().writetofile(self.testresultspath + filename, data.getvalue())
self.assertEqual(status, 0)
self.assertEqual(data.getvalue(), AdagucTestTools(
).readfromfile(self.expectedoutputsspath + filename))
def test_WMSGetMap_testdatanc_customprojectionstring_proj4namespace(self):
AdagucTestTools().cleanTempDir()
filename = "test_WMSGetMap_testdatanc_customprojectionstring_proj4namespace.png"
status, data, headers = AdagucTestTools().runADAGUCServer("source=testdata.nc&SERVICE=WMS&SERVICE=WMS&VERSION=1.3.0&REQUEST=GetMap&LAYERS=testdata&WIDTH=256&HEIGHT=256&CRS=PROJ4%3A%2Bproj%3Dstere%20%2Bx_0%3D0%20%2By_0%3D0%20%2Blat_ts%3D60%20%2Blon_0%3D0%20%2Blat_0%3D90%20%2Ba%3D6378140%20%2Bb%3D6356750%20%2Bunits%3Dm&BBOX=100000,-4250000,600000,-3810000&STYLES=testdata%2Fnearest&FORMAT=image/png&TRANSPARENT=FALSE&", env=self.env)
AdagucTestTools().writetofile(self.testresultspath + filename, data.getvalue())
self.assertEqual(status, 0)
self.assertEqual(data.getvalue(), AdagucTestTools(
).readfromfile(self.expectedoutputsspath + filename))
def test_WMSGetCapabilitiesGetMap_testdatanc(self):
AdagucTestTools().cleanTempDir()
filename = "test_WMSGetCapabilities_testdatanc"
status, data, headers = AdagucTestTools().runADAGUCServer(
"source=testdata.nc&SERVICE=WMS&request=getcapabilities", env=self.env)
AdagucTestTools().writetofile(self.testresultspath + filename, data.getvalue())
self.assertEqual(status, 0)
self.assertTrue(AdagucTestTools().compareGetCapabilitiesXML(
self.testresultspath + filename, self.expectedoutputsspath + filename))
filename = "test_WMSGetMap_testdatanc"
status, data, headers = AdagucTestTools().runADAGUCServer(
"source=testdata.nc&SERVICE=WMS&SERVICE=WMS&VERSION=1.3.0&REQUEST=GetMap&LAYERS=testdata&WIDTH=256&HEIGHT=256&CRS=EPSG%3A4326&BBOX=30,-30,75,30&STYLES=testdata%2Fnearest&FORMAT=image/png&TRANSPARENT=FALSE&", env=self.env)
AdagucTestTools().writetofile(self.testresultspath + filename, data.getvalue())
self.assertEqual(status, 0)
self.assertEqual(data.getvalue(), AdagucTestTools(
).readfromfile(self.expectedoutputsspath + filename))
def test_WMSGetMapGetCapabilities_testdatanc(self):
AdagucTestTools().cleanTempDir()
filename = "test_WMSGetMap_testdatanc"
status, data, headers = AdagucTestTools().runADAGUCServer(
"source=testdata.nc&SERVICE=WMS&SERVICE=WMS&VERSION=1.3.0&REQUEST=GetMap&LAYERS=testdata&WIDTH=256&HEIGHT=256&CRS=EPSG%3A4326&BBOX=30,-30,75,30&STYLES=testdata%2Fnearest&FORMAT=image/png&TRANSPARENT=FALSE&", env=self.env)
AdagucTestTools().writetofile(self.testresultspath + filename, data.getvalue())
self.assertEqual(status, 0)
self.assertEqual(data.getvalue(), AdagucTestTools(
).readfromfile(self.expectedoutputsspath + filename))
filename = "test_WMSGetCapabilities_testdatanc"
status, data, headers = AdagucTestTools().runADAGUCServer(
"source=testdata.nc&SERVICE=WMS&request=getcapabilities", env=self.env)
AdagucTestTools().writetofile(self.testresultspath + filename, data.getvalue())
self.assertEqual(status, 0)
self.assertTrue(AdagucTestTools().compareGetCapabilitiesXML(
self.testresultspath + filename, self.expectedoutputsspath + filename))
def test_WMSGetMap_getmap_3dims_singlefile(self):
dims = {
'time': {
'vartype': 'd',
'units': "seconds since 1970-01-01 00:00:00",
'standard_name': 'time',
'values': ["2017-01-01T00:00:00Z", "2017-01-01T00:05:00Z", "2017-01-01T00:10:00Z"],
'wmsname': 'time'
},
'elevation': {
'vartype': 'd',
'units': "meters",
'standard_name': 'height',
'values': [7000, 8000, 9000],
'wmsname': 'elevation'
},
'member': {
'vartype': str,
'units': "member number",
'standard_name': 'member',
'values': ['member5', 'member4'],
'wmsname': 'DIM_member'
}
}
AdagucTestTools().cleanTempDir()
def Recurse(dims, number, l):
for value in range(len(dims[list(dims.keys())[number-1]]['values'])):
l[number-1] = value
if number > 1:
Recurse(dims, number - 1, l)
else:
kvps = ""
for i in reversed(range(len(l))):
key = (dims[list(dims)[i]]['wmsname'])
value = (dims[list(dims)[i]]['values'])[l[i]]
kvps += "&" + key + '=' + str(value)
# print("Checking dims" + kvps)
filename = "test_WMSGetMap_getmap_3dims_"+kvps+".png"
filename = filename.replace("&", "_").replace(
":", "_").replace("=", "_")
# print filename
url = "source=netcdf_5dims%2Fnetcdf_5dims_seq1%2Fnc_5D_20170101000000-20170101001000.nc&SERVICE=WMS&VERSION=1.3.0&REQUEST=GetMap&LAYERS=data&WIDTH=360&HEIGHT=180&CRS=EPSG%3A4326&BBOX=-90,-180,90,180&STYLES=auto%2Fnearest&FORMAT=image/png&TRANSPARENT=TRUE&COLORSCALERANGE=0,1&"
url += kvps
status, data, headers = AdagucTestTools().runADAGUCServer(url, env=self.env)
AdagucTestTools().writetofile(self.testresultspath + filename, data.getvalue())
self.assertEqual(status, 0)
self.assertEqual(data.getvalue(), AdagucTestTools(
).readfromfile(self.expectedoutputsspath + filename))
l = []
for i in range(len(dims)):
l.append(0)
Recurse(dims, len(dims), l)
def test_WMSCMDUpdateDBNoConfig(self):
AdagucTestTools().cleanTempDir()
status, data, headers = AdagucTestTools().runADAGUCServer(
args=['--updatedb'], env=self.env, isCGI=False, showLogOnError=False)
self.assertEqual(status, 1)
def test_WMSCMDUpdateDB(self):
AdagucTestTools().cleanTempDir()
ADAGUC_PATH = os.environ['ADAGUC_PATH']
status, data, headers = AdagucTestTools().runADAGUCServer(
args=['--updatedb', '--config', ADAGUC_PATH + '/data/config/adaguc.timeseries.xml'], isCGI=False, showLogOnError=False)
self.assertEqual(status, 0)
filename = "test_WMSGetCapabilities_timeseries_twofiles"
status, data, headers = AdagucTestTools().runADAGUCServer("SERVICE=WMS&request=getcapabilities",
{'ADAGUC_CONFIG': ADAGUC_PATH + '/data/config/adaguc.timeseries.xml'})
AdagucTestTools().writetofile(self.testresultspath + filename, data.getvalue())
self.assertEqual(status, 0)
self.assertTrue(AdagucTestTools().compareGetCapabilitiesXML(
self.testresultspath + filename, self.expectedoutputsspath + filename))
def test_WMSCMDUpdateDBTailPath(self):
AdagucTestTools().cleanTempDir()
ADAGUC_PATH = os.environ['ADAGUC_PATH']
status, data, headers = AdagucTestTools().runADAGUCServer(
args=['--updatedb', '--config', ADAGUC_PATH + '/data/config/adaguc.timeseries.xml', '--tailpath', 'netcdf_5dims_seq1'], isCGI=False, showLogOnError=False)
self.assertEqual(status, 0)
filename = "test_WMSGetCapabilities_timeseries_tailpath_netcdf_5dims_seq1"
status, data, headers = AdagucTestTools().runADAGUCServer("SERVICE=WMS&request=getcapabilities",
{'ADAGUC_CONFIG': ADAGUC_PATH + '/data/config/adaguc.timeseries.xml'})
AdagucTestTools().writetofile(self.testresultspath + filename, data.getvalue())
self.assertEqual(status, 0)
self.assertTrue(AdagucTestTools().compareGetCapabilitiesXML(
self.testresultspath + filename, self.expectedoutputsspath + filename))
status, data, headers = AdagucTestTools().runADAGUCServer(
args=['--updatedb', '--config', ADAGUC_PATH + '/data/config/adaguc.timeseries.xml', '--tailpath', 'netcdf_5dims_seq2'], isCGI=False, showLogOnError=False)
self.assertEqual(status, 0)
filename = "test_WMSGetCapabilities_timeseries_tailpath_netcdf_5dims_seq1_and_seq2"
status, data, headers = AdagucTestTools().runADAGUCServer("SERVICE=WMS&request=getcapabilities",
{'ADAGUC_CONFIG': ADAGUC_PATH + '/data/config/adaguc.timeseries.xml'})
AdagucTestTools().writetofile(self.testresultspath + filename, data.getvalue())
self.assertEqual(status, 0)
self.assertTrue(AdagucTestTools().compareGetCapabilitiesXML(
self.testresultspath + filename, self.expectedoutputsspath + filename))
def test_WMSCMDUpdateDBPath(self):
AdagucTestTools().cleanTempDir()
ADAGUC_PATH = os.environ['ADAGUC_PATH']
status, data, headers = AdagucTestTools().runADAGUCServer(
args=['--updatedb', '--config',
ADAGUC_PATH + '/data/config/adaguc.timeseries.xml',
'--path', ADAGUC_PATH + '/data/datasets/netcdf_5dims/netcdf_5dims_seq1/nc_5D_20170101000000-20170101001000.nc'],
isCGI=False,
showLogOnError=False,
showLog=False)
self.assertEqual(status, 0)
filename = "test_WMSGetCapabilities_timeseries_path_netcdf_5dims_seq1"
status, data, headers = AdagucTestTools().runADAGUCServer("SERVICE=WMS&request=getcapabilities",
{'ADAGUC_CONFIG': ADAGUC_PATH + '/data/config/adaguc.timeseries.xml'})
AdagucTestTools().writetofile(self.testresultspath + filename, data.getvalue())
self.assertEqual(status, 0)
self.assertTrue(AdagucTestTools().compareGetCapabilitiesXML(
self.testresultspath + filename, self.expectedoutputsspath + filename))
status, data, headers = AdagucTestTools().runADAGUCServer(
args=['--updatedb',
'--config', ADAGUC_PATH + '/data/config/adaguc.timeseries.xml',
'--path', ADAGUC_PATH + '/data/datasets/netcdf_5dims/netcdf_5dims_seq2/nc_5D_20170101001500-20170101002500.nc'],
isCGI=False,
showLogOnError=False)
self.assertEqual(status, 0)
filename = "test_WMSGetCapabilities_timeseries_path_netcdf_5dims_seq2"
status, data, headers = AdagucTestTools().runADAGUCServer("SERVICE=WMS&request=getcapabilities",
{'ADAGUC_CONFIG': ADAGUC_PATH + '/data/config/adaguc.timeseries.xml'})
AdagucTestTools().writetofile(self.testresultspath + filename, data.getvalue())
self.assertEqual(status, 0)
self.assertTrue(AdagucTestTools().compareGetCapabilitiesXML(
self.testresultspath + filename, self.expectedoutputsspath + filename))
def test_WMSGetFeatureInfo_forecastreferencetime_texthtml(self):
AdagucTestTools().cleanTempDir()
        filename = "test_WMSGetFeatureInfo_forecastreferencetime.html"
        status, data, headers = AdagucTestTools().runADAGUCServer(
            "source=forecast_reference_time%2FHARM_N25_20171215090000_dimx16_dimy16_dimtime49_dimforecastreferencetime1_varairtemperatureat2m.nc&SERVICE=WMS&REQUEST=GetFeatureInfo&VERSION=1.3.0&LAYERS=air_temperature__at_2m&QUERY_LAYERS=air_temperature__at_2m&CRS=EPSG%3A4326&BBOX=49.55171074378079,1.4162628389784275,54.80328142582087,9.526486675156528&WIDTH=1515&HEIGHT=981&I=832&J=484&FORMAT=image/gif&INFO_FORMAT=text/html&STYLES=&&time=2017-12-17T09%3A00%3A00Z&DIM_reference_time=2017-12-15T09%3A00%3A00Z",
            env=self.env)
        AdagucTestTools().writetofile(self.testresultspath + filename, data.getvalue())
        self.assertEqual(status, 0)
        self.assertEqual(data.getvalue(), AdagucTestTools().readfromfile(
            self.expectedoutputsspath + filename))
        filename = "test_WMSGetCapabilities_testdatanc"
        status, data, headers = AdagucTestTools().runADAGUCServer(
            "source=testdata.nc&SERVICE=WMS&request=getcapabilities", env=self.env)
        AdagucTestTools().writetofile(self.testresultspath + filename, data.getvalue())
        self.assertEqual(status, 0)
        self.assertTrue(AdagucTestTools().compareGetCapabilitiesXML(
            self.testresultspath + filename, self.expectedoutputsspath + filename))

    def test_WMSGetFeatureInfo_timeseries_forecastreferencetime_json(self):
        AdagucTestTools().cleanTempDir()
        filename = "test_WMSGetFeatureInfo_timeseries_forecastreferencetime.json"
        status, data, headers = AdagucTestTools().runADAGUCServer(
            "source=forecast_reference_time%2FHARM_N25_20171215090000_dimx16_dimy16_dimtime49_dimforecastreferencetime1_varairtemperatureat2m.nc&service=WMS&request=GetFeatureInfo&version=1.3.0&layers=air_temperature__at_2m&query_layers=air_temperature__at_2m&crs=EPSG%3A4326&bbox=47.80599631376197%2C1.4162628389784275%2C56.548995855839685%2C9.526486675156528&width=910&height=981&i=502&j=481&format=image%2Fgif&info_format=application%2Fjson&time=1000-01-01T00%3A00%3A00Z%2F3000-01-01T00%3A00%3A00Z&dim_reference_time=2017-12-15T09%3A00%3A00Z",
            env=self.env)
        AdagucTestTools().writetofile(self.testresultspath + filename, data.getvalue())
        self.assertEqual(status, 0)
        self.assertEqual(data.getvalue(), AdagucTestTools().readfromfile(
            self.expectedoutputsspath + filename))
        filename = "test_WMSGetCapabilities_testdatanc"
        status, data, headers = AdagucTestTools().runADAGUCServer(
            "source=testdata.nc&SERVICE=WMS&request=getcapabilities", env=self.env)
        AdagucTestTools().writetofile(self.testresultspath + filename, data.getvalue())
        self.assertEqual(status, 0)
        self.assertTrue(AdagucTestTools().compareGetCapabilitiesXML(
            self.testresultspath + filename, self.expectedoutputsspath + filename))

    def test_WMSGetMap_Report_nounits(self):
        AdagucTestTools().cleanTempDir()
        if os.path.exists(os.environ["ADAGUC_LOGFILE"]):
            os.remove(os.environ["ADAGUC_LOGFILE"])
        filename = "test_WMSGetMap_Report_nounits"
        reportfilename = "./nounits_checker_report.txt"
        status, data, headers = AdagucTestTools().runADAGUCServer(
            "source=test/testdata_report_nounits.nc&service=WMS&request=GetMap&version=1.3.0&layers=sow_a1&crs=EPSG%3A4326&bbox=47.80599631376197%2C1.4162628389784275%2C56.548995855839685%2C9.526486675156528&width=863&height=981&format=image%2Fpng&info_format=application%2Fjson&time=1000-01-01T00%3A00%3A00Z%2F3000-01-01T00%3A00%3A00Z&dim_reference_time=2017-12-15T09%3A00%3A00Z",
            env=self.env, args=["--report=%s" % reportfilename], isCGI=False, showLogOnError=False)
        AdagucTestTools().writetofile(self.testresultspath + filename, data.getvalue())
        self.assertEqual(status, 1)
        self.assertTrue(os.path.exists(reportfilename))
        self.assertTrue(os.path.exists(os.environ["ADAGUC_LOGFILE"]))
        # Use a context manager so the report file is closed even if json.load raises.
        with open(reportfilename, "r") as reportfile:
            report = json.load(reportfile)
        os.remove(reportfilename)
        self.assertTrue("messages" in report)
        # Add more errors to this list if we expect more.
        expectedErrors = ["No time units found for variable time"]
        foundErrors = []
        # self.assertIsNone("TODO: test if error messages end up in the normal log file as well as in the report.")
        for message in report["messages"]:
            self.assertTrue("category" in message)
            self.assertTrue("documentationLink" in message)
            self.assertTrue("message" in message)
            self.assertTrue("severity" in message)
            if message["severity"] == "ERROR":
                foundErrors.append(message["message"])
                self.assertIn(message["message"], expectedErrors)
        self.assertEqual(len(expectedErrors), len(foundErrors))
        expectedErrors.append("WMS GetMap Request failed")
        foundErrors = []
        with open(os.environ["ADAGUC_LOGFILE"]) as logfile:
            for line in logfile.readlines():
                if "E:" in line:
                    for error in expectedErrors:
                        if error in line:
                            foundErrors.append(error)
        self.assertEqual(len(expectedErrors), len(foundErrors))

    def test_WMSGetMap_NoReport_nounits(self):
        AdagucTestTools().cleanTempDir()
        if os.path.exists(os.environ["ADAGUC_LOGFILE"]):
            os.remove(os.environ["ADAGUC_LOGFILE"])
        filename = "test_WMSGetMap_NoReport_nounits"
        reportfilename = "./checker_report.txt"
        status, data, headers = AdagucTestTools().runADAGUCServer(
            "source=test/testdata_report_nounits.nc&service=WMS&request=GetMap&version=1.3.0&layers=sow_a1&crs=EPSG%3A4326&bbox=47.80599631376197%2C1.4162628389784275%2C56.548995855839685%2C9.526486675156528&width=863&height=981&format=image%2Fpng&info_format=application%2Fjson&time=1000-01-01T00%3A00%3A00Z%2F3000-01-01T00%3A00%3A00Z&dim_reference_time=2017-12-15T09%3A00%3A00Z",
            env=self.env, isCGI=False, showLogOnError=False)
        AdagucTestTools().writetofile(self.testresultspath + filename, data.getvalue())
        self.assertEqual(status, 1)
        self.assertTrue(os.path.exists(os.environ["ADAGUC_LOGFILE"]))
        expectedErrors = ["No time units found for variable time",
                          "Exception in DBLoopFiles",
                          "Invalid dimensions values: No data available for layer sow_a1",
                          "WMS GetMap Request failed"]
        foundErrors = []
        with open(os.environ["ADAGUC_LOGFILE"]) as logfile:
            for line in logfile.readlines():
                if "E:" in line:
                    for error in expectedErrors:
                        if error in line:
                            foundErrors.append(error)
        self.assertEqual(len(expectedErrors), len(foundErrors))

    def test_WMSGetMap_worldmap_latlon_PNGFile_withoutinfofile(self):
        AdagucTestTools().cleanTempDir()
        filename = "test_WMSGetMap_worldmap_latlon_PNGFile_withoutinfofile.png"
        status, data, headers = AdagucTestTools().runADAGUCServer(
            "source=worldmap_latlon.png&SERVICE=WMS&SERVICE=WMS&VERSION=1.3.0&REQUEST=GetMap&LAYERS=pngdata&WIDTH=256&HEIGHT=256&CRS=EPSG%3A4326&BBOX=30,-30,75,30&STYLES=rgba%2Fnearest&FORMAT=image/png&TRANSPARENT=FALSE&",
            env=self.env)
        AdagucTestTools().writetofile(self.testresultspath + filename, data.getvalue())
        self.assertEqual(status, 0)
        self.assertEqual(data.getvalue(), AdagucTestTools().readfromfile(
            self.expectedoutputsspath + filename))

    def test_WMSGetMap_worldmap_mercator_PNGFile_withinfofile(self):
        AdagucTestTools().cleanTempDir()
        filename = "test_WMSGetMap_worldmap_mercator_PNGFile_withinfofile.png"
        status, data, headers = AdagucTestTools().runADAGUCServer(
            "source=worldmap_mercator.png&SERVICE=WMS&SERVICE=WMS&VERSION=1.3.0&REQUEST=GetMap&LAYERS=pngdata&WIDTH=256&HEIGHT=256&CRS=EPSG%3A4326&BBOX=30,-30,75,30&STYLES=rgba%2Fnearest&FORMAT=image/png&TRANSPARENT=FALSE&",
            env=self.env)
        AdagucTestTools().writetofile(self.testresultspath + filename, data.getvalue())
        self.assertEqual(status, 0)
        self.assertEqual(data.getvalue(), AdagucTestTools().readfromfile(
            self.expectedoutputsspath + filename))

    def test_WMSGetCapabilities_testdatanc_autostyle(self):
        AdagucTestTools().cleanTempDir()
        filename = "test_WMSGetCapabilities_testdatanc_autostyle.xml"
        status, data, headers = AdagucTestTools().runADAGUCServer(
            "source=testdata.nc&SERVICE=WMS&request=getcapabilities",
            {'ADAGUC_CONFIG': ADAGUC_PATH + '/data/config/adaguc.tests.autostyle.xml'})
        AdagucTestTools().writetofile(self.testresultspath + filename, data.getvalue())
        self.assertEqual(status, 0)
        self.assertTrue(AdagucTestTools().compareGetCapabilitiesXML(
            self.testresultspath + filename, self.expectedoutputsspath + filename))

    def test_WMSGetCapabilities_multidimnc_autostyle(self):
        AdagucTestTools().cleanTempDir()
        filename = "test_WMSGetCapabilities_multidimnc_autostyle.xml"
        status, data, headers = AdagucTestTools().runADAGUCServer(
            "source=netcdf_5dims/netcdf_5dims_seq1/nc_5D_20170101000000-20170101001000.nc&SERVICE=WMS&request=getcapabilities",
            {'ADAGUC_CONFIG': ADAGUC_PATH + '/data/config/adaguc.tests.autostyle.xml'})
        AdagucTestTools().writetofile(self.testresultspath + filename, data.getvalue())
        self.assertEqual(status, 0)
        self.assertTrue(AdagucTestTools().compareGetCapabilitiesXML(
            self.testresultspath + filename, self.expectedoutputsspath + filename))

    def test_WMSGetCapabilities_multidimncdataset_autostyle(self):
        AdagucTestTools().cleanTempDir()
        ADAGUC_PATH = os.environ['ADAGUC_PATH']
        config = ADAGUC_PATH + '/data/config/adaguc.tests.dataset.xml,' + \
            ADAGUC_PATH + '/data/config/datasets/adaguc.testmultidimautostyle.xml'
        status, data, headers = AdagucTestTools().runADAGUCServer(
            args=['--updatedb', '--config', config], env=self.env, isCGI=False)
        self.assertEqual(status, 0)
        filename = "test_WMSGetCapabilities_multidimncdataset_autostyle.xml"
        status, data, headers = AdagucTestTools().runADAGUCServer(
            "dataset=adaguc.testmultidimautostyle&SERVICE=WMS&request=getcapabilities",
            {'ADAGUC_CONFIG': ADAGUC_PATH + '/data/config/adaguc.tests.dataset.xml'})
        AdagucTestTools().writetofile(self.testresultspath + filename, data.getvalue())
        self.assertEqual(status, 0)
        self.assertTrue(AdagucTestTools().compareGetCapabilitiesXML(
            self.testresultspath + filename, self.expectedoutputsspath + filename))

    def test_WMSGetMapWithShowLegendTrue_testdatanc(self):
        AdagucTestTools().cleanTempDir()
        filename = "test_WMSGetMapWithShowLegendTrue_testdatanc.png"
        status, data, headers = AdagucTestTools().runADAGUCServer(
            args=['--updatedb', '--config', ADAGUC_PATH + '/data/config/adaguc.tests.autostyle.xml'], env=self.env, isCGI=False)
        self.assertEqual(status, 0)
        status, data, headers = AdagucTestTools().runADAGUCServer(
            "source=testdata.nc&SERVICE=WMS&SERVICE=WMS&VERSION=1.3.0&REQUEST=GetMap&LAYERS=geojsonbaselayer,testdata&WIDTH=256&HEIGHT=256&CRS=EPSG%3A4326&BBOX=30,-30,75,30&STYLES=testdata_style_2/shadedcontour&FORMAT=image/png&TRANSPARENT=FALSE&showlegend=true",
            {'ADAGUC_CONFIG': ADAGUC_PATH + '/data/config/adaguc.tests.autostyle.xml'})
        AdagucTestTools().writetofile(self.testresultspath + filename, data.getvalue())
        self.assertEqual(status, 0)
        self.assertEqual(data.getvalue(), AdagucTestTools().readfromfile(
            self.expectedoutputsspath + filename))

    def test_WMSGetMapWithManyContourDefinitions_testdatanc(self):
        AdagucTestTools().cleanTempDir()
        filename = "test_WMSGetMapWithManyContourDefinitions_testdatanc.png"
        status, data, headers = AdagucTestTools().runADAGUCServer(
            args=['--updatedb', '--config', ADAGUC_PATH + '/data/config/adaguc.tests.manycontours.xml'], env=self.env, isCGI=False)
        self.assertEqual(status, 0)
        status, data, headers = AdagucTestTools().runADAGUCServer(
            "source=testdata.nc&SERVICE=WMS&SERVICE=WMS&VERSION=1.3.0&REQUEST=GetMap&LAYERS=testdata&WIDTH=256&HEIGHT=256&CRS=EPSG%3A4326&BBOX=30,-30,75,30&STYLES=testdata_style_manycontours/contour&FORMAT=image/png&TRANSPARENT=FALSE&",
            {'ADAGUC_CONFIG': ADAGUC_PATH + '/data/config/adaguc.tests.manycontours.xml'})
        AdagucTestTools().writetofile(self.testresultspath + filename, data.getvalue())
        self.assertEqual(status, 0)
        self.assertEqual(data.getvalue(), AdagucTestTools().readfromfile(
            self.expectedoutputsspath + filename))

    def test_WMSGetMapWithShowLegendFalse_testdatanc(self):
        AdagucTestTools().cleanTempDir()
        filename = "test_WMSGetMapWithShowLegendFalse_testdatanc.png"
        status, data, headers = AdagucTestTools().runADAGUCServer(
            args=['--updatedb', '--config', ADAGUC_PATH + '/data/config/adaguc.tests.autostyle.xml'], env=self.env, isCGI=False)
        self.assertEqual(status, 0)
        status, data, headers = AdagucTestTools().runADAGUCServer(
            "source=testdata.nc&SERVICE=WMS&SERVICE=WMS&VERSION=1.3.0&REQUEST=GetMap&LAYERS=geojsonbaselayer,testdata&WIDTH=256&HEIGHT=256&CRS=EPSG%3A4326&BBOX=30,-30,75,30&STYLES=testdata_style_2/shadedcontour&FORMAT=image/png&TRANSPARENT=FALSE&showlegend=false",
            {'ADAGUC_CONFIG': ADAGUC_PATH + '/data/config/adaguc.tests.autostyle.xml'})
        AdagucTestTools().writetofile(self.testresultspath + filename, data.getvalue())
        self.assertEqual(status, 0)
        self.assertEqual(data.getvalue(), AdagucTestTools().readfromfile(
            self.expectedoutputsspath + filename))

    def test_WMSGetMapWithShowLegendNothing_testdatanc(self):
        AdagucTestTools().cleanTempDir()
        filename = "test_WMSGetMapWithShowLegendNothing_testdatanc.png"
        status, data, headers = AdagucTestTools().runADAGUCServer(
            args=['--updatedb', '--config', ADAGUC_PATH + '/data/config/adaguc.tests.autostyle.xml'], env=self.env, isCGI=False)
        self.assertEqual(status, 0)
        status, data, headers = AdagucTestTools().runADAGUCServer(
            "source=testdata.nc&SERVICE=WMS&SERVICE=WMS&VERSION=1.3.0&REQUEST=GetMap&LAYERS=geojsonbaselayer,testdata&WIDTH=256&HEIGHT=256&CRS=EPSG%3A4326&BBOX=30,-30,75,30&STYLES=testdata_style_2/shadedcontour&FORMAT=image/png&TRANSPARENT=FALSE&showlegend=",
            {'ADAGUC_CONFIG': ADAGUC_PATH + '/data/config/adaguc.tests.autostyle.xml'})
        AdagucTestTools().writetofile(self.testresultspath + filename, data.getvalue())
        self.assertEqual(status, 0)
        self.assertEqual(data.getvalue(), AdagucTestTools().readfromfile(
            self.expectedoutputsspath + filename))

    def test_WMSGetMapWithShowLegendSecondLayer_testdatanc(self):
        AdagucTestTools().cleanTempDir()
        filename = "test_WMSGetMapWithShowLegendSecondLayer_testdatanc.png"
        status, data, headers = AdagucTestTools().runADAGUCServer(
            args=['--updatedb', '--config', ADAGUC_PATH + '/data/config/adaguc.tests.autostyle.xml'], env=self.env, isCGI=False)
        self.assertEqual(status, 0)
        status, data, headers = AdagucTestTools().runADAGUCServer(
            "source=testdata.nc&SERVICE=WMS&SERVICE=WMS&VERSION=1.3.0&REQUEST=GetMap&LAYERS=geojsonbaselayer,testdata&WIDTH=256&HEIGHT=256&CRS=EPSG%3A4326&BBOX=30,-30,75,30&STYLES=testdata_style_2/shadedcontour&FORMAT=image/png&TRANSPARENT=FALSE&showlegend=testdata",
            {'ADAGUC_CONFIG': ADAGUC_PATH + '/data/config/adaguc.tests.autostyle.xml'})
        AdagucTestTools().writetofile(self.testresultspath + filename, data.getvalue())
        self.assertEqual(status, 0)
        self.assertEqual(data.getvalue(), AdagucTestTools().readfromfile(
            self.expectedoutputsspath + filename))

    def test_WMSGetMapWithShowLegendAllLayers_testdatanc(self):
        AdagucTestTools().cleanTempDir()
        filename = "test_WMSGetMapWithShowLegendAllLayers_testdatanc.png"
        status, data, headers = AdagucTestTools().runADAGUCServer(
            args=['--updatedb', '--config', ADAGUC_PATH + '/data/config/adaguc.tests.autostyle.xml'], env=self.env, isCGI=False)
        self.assertEqual(status, 0)
        status, data, headers = AdagucTestTools().runADAGUCServer(
            "source=testdata.nc&SERVICE=WMS&SERVICE=WMS&VERSION=1.3.0&REQUEST=GetMap&LAYERS=geojsonbaselayer,testdata&WIDTH=256&HEIGHT=256&CRS=EPSG%3A4326&BBOX=30,-30,75,30&STYLES=testdata_style_2/shadedcontour&FORMAT=image/png&TRANSPARENT=FALSE&showlegend=geojsonbaselayer,testdata",
            {'ADAGUC_CONFIG': ADAGUC_PATH + '/data/config/adaguc.tests.autostyle.xml'})
        AdagucTestTools().writetofile(self.testresultspath + filename, data.getvalue())
        self.assertEqual(status, 0)
        self.assertEqual(data.getvalue(), AdagucTestTools().readfromfile(
            self.expectedoutputsspath + filename))

    def test_WMSGetMapRobinsonProjection_sample_tas_cmip6_ssp585_preIndustrial_warming2_year(self):
        AdagucTestTools().cleanTempDir()
        filename = "test_WMSGetMapRobinsonProjection_sample_tas_cmip6_ssp585_preIndustrial_warming2_year.png"
        status, data, headers = AdagucTestTools().runADAGUCServer(
            args=['--updatedb', '--config', ADAGUC_PATH + '/data/config/adaguc.tests.autostyle.xml'], env=self.env, isCGI=False)
        self.assertEqual(status, 0)
        status, data, headers = AdagucTestTools().runADAGUCServer(
            "source=test/sample_tas_cmip6_ssp585_preIndustrial_warming2_year.nc&SERVICE=WMS&SERVICE=WMS&VERSION=1.3.0&REQUEST=GetMap&LAYERS=tas&WIDTH=600&HEIGHT=300&CRS=EPSG%3A54030&BBOX=-17002000,-8700000,17002000,8700000&STYLES=auto/nearest&FORMAT=image/png32&TRANSPARENT=FALSE",
            {'ADAGUC_CONFIG': ADAGUC_PATH + '/data/config/adaguc.tests.autostyle.xml'})
        AdagucTestTools().writetofile(self.testresultspath + filename, data.getvalue())
        self.assertEqual(status, 0)
        self.assertEqual(data.getvalue(), AdagucTestTools().readfromfile(
            self.expectedoutputsspath + filename))

    def test_WMSGetMapCustomCRSEPSG3412Projection_sample_tas_cmip6_ssp585_preIndustrial_warming2_year(self):
        AdagucTestTools().cleanTempDir()
        filename = "test_WMSGetMapCustomCRSEPSG3412Projection_sample_tas_cmip6_ssp585_preIndustrial_warming2_year.png"
        status, data, headers = AdagucTestTools().runADAGUCServer(
            args=['--updatedb', '--config', ADAGUC_PATH + '/data/config/adaguc.tests.autostyle.xml'], env=self.env, isCGI=False)
        self.assertEqual(status, 0)
        status, data, headers = AdagucTestTools().runADAGUCServer(
            "source=test/sample_tas_cmip6_ssp585_preIndustrial_warming2_year.nc&SERVICE=WMS&SERVICE=WMS&VERSION=1.3.0&REQUEST=GetMap&LAYERS=tas,geojsonoverlay&&format=image%2Fpng32&crs=%2Bproj%3Dstere+%2Blat_0%3D-90+%2Blat_ts%3D-70+%2Blon_0%3D0+%2Bk%3D1+%2Bx_0%3D0+%2By_0%3D0+%2Ba%3D6378273+%2Bb%3D6356889.449+%2Bunits%3Dm+%2Bno_defs&width=800&height=600&BBOX=-4630165.372231959,-4523993.082972504,5384973.558397711,4717659.691530302&",
            {'ADAGUC_CONFIG': ADAGUC_PATH + '/data/config/adaguc.tests.autostyle.xml'})
        AdagucTestTools().writetofile(self.testresultspath + filename, data.getvalue())
        self.assertEqual(status, 0)
        self.assertEqual(data.getvalue(), AdagucTestTools().readfromfile(
            self.expectedoutputsspath + filename))

    def test_WMSGetMapCustomCRSEPSG3413Projection_sample_tas_cmip6_ssp585_preIndustrial_warming2_year(self):
        AdagucTestTools().cleanTempDir()
        filename = "test_WMSGetMapCustomCRSEPSG3413Projection_sample_tas_cmip6_ssp585_preIndustrial_warming2_year.png"
        status, data, headers = AdagucTestTools().runADAGUCServer(
            args=['--updatedb', '--config', ADAGUC_PATH + '/data/config/adaguc.tests.autostyle.xml'], env=self.env, isCGI=False)
        self.assertEqual(status, 0)
        status, data, headers = AdagucTestTools().runADAGUCServer(
            "source=test/sample_tas_cmip6_ssp585_preIndustrial_warming2_year.nc&SERVICE=WMS&SERVICE=WMS&VERSION=1.3.0&REQUEST=GetMap&LAYERS=tas,geojsonoverlay&&format=image%2Fpng32&crs=%2Bproj%3Dstere%20%2Blat_0%3D90%20%2Blat_ts%3D70%20%2Blon_0%3D-45%20%2Bk%3D1%20%2Bx_0%3D0%20%2By_0%3D0%20%2Bdatum%3DWGS84%20%2Bunits%3Dm%20%2Bno_defs&width=800&height=600&BBOX=-4630165.372231959,-4523993.082972504,5384973.558397711,4717659.691530302&",
            {'ADAGUC_CONFIG': ADAGUC_PATH + '/data/config/adaguc.tests.autostyle.xml'})
        AdagucTestTools().writetofile(self.testresultspath + filename, data.getvalue())
        self.assertEqual(status, 0)
        self.assertEqual(data.getvalue(), AdagucTestTools().readfromfile(
            self.expectedoutputsspath + filename))

    def test_WMSGetMapRobinsonProjection_ipcc_cmip5_tas_historical_subset_nc(self):
        AdagucTestTools().cleanTempDir()
        filename = "test_WMSGetMapRobinsonProjection_ipcc_cmip5_tas_historical_subset.nc.png"
        status, data, headers = AdagucTestTools().runADAGUCServer(
            args=['--updatedb', '--config', ADAGUC_PATH + '/data/config/adaguc.tests.autostyle.xml'], env=self.env, isCGI=False)
        self.assertEqual(status, 0)
        status, data, headers = AdagucTestTools().runADAGUCServer(
            "source=test/ipcc_cmip5_tas_historical_subset.nc&SERVICE=WMS&SERVICE=WMS&VERSION=1.3.0&REQUEST=GetMap&LAYERS=tas&WIDTH=600&HEIGHT=300&CRS=EPSG%3A54030&BBOX=-17002000,-8700000,17002000,8700000&STYLES=auto/nearest&FORMAT=image/png32&TRANSPARENT=FALSE",
            {'ADAGUC_CONFIG': ADAGUC_PATH + '/data/config/adaguc.tests.autostyle.xml'})
        AdagucTestTools().writetofile(self.testresultspath + filename, data.getvalue())
        self.assertEqual(status, 0)
        self.assertEqual(data.getvalue(), AdagucTestTools().readfromfile(
            self.expectedoutputsspath + filename))

    def test_WMSGetMapCustomCRSEPSG3412Projection_ipcc_cmip5_tas_historical_subset_nc(self):
        AdagucTestTools().cleanTempDir()
        filename = "test_WMSGetMapCustomCRSEPSG3412Projection_ipcc_cmip5_tas_historical_subset.nc.png"
        status, data, headers = AdagucTestTools().runADAGUCServer(
            args=['--updatedb', '--config', ADAGUC_PATH + '/data/config/adaguc.tests.autostyle.xml'], env=self.env, isCGI=False)
        self.assertEqual(status, 0)
        status, data, headers = AdagucTestTools().runADAGUCServer(
            "source=test/ipcc_cmip5_tas_historical_subset.nc&SERVICE=WMS&SERVICE=WMS&VERSION=1.3.0&REQUEST=GetMap&LAYERS=tas,geojsonoverlay&&format=image%2Fpng32&crs=%2Bproj%3Dstere+%2Blat_0%3D-90+%2Blat_ts%3D-70+%2Blon_0%3D0+%2Bk%3D1+%2Bx_0%3D0+%2By_0%3D0+%2Ba%3D6378273+%2Bb%3D6356889.449+%2Bunits%3Dm+%2Bno_defs&width=800&height=600&BBOX=-4630165.372231959,-4523993.082972504,5384973.558397711,4717659.691530302&",
            {'ADAGUC_CONFIG': ADAGUC_PATH + '/data/config/adaguc.tests.autostyle.xml'})
        AdagucTestTools().writetofile(self.testresultspath + filename, data.getvalue())
        self.assertEqual(status, 0)
        self.assertEqual(data.getvalue(), AdagucTestTools().readfromfile(
            self.expectedoutputsspath + filename))

    def test_WMSGetMapCustomCRSEPSG3413Projection_ipcc_cmip5_tas_historical_subset_nc(self):
        AdagucTestTools().cleanTempDir()
        filename = "test_WMSGetMapCustomCRSEPSG3413Projection_ipcc_cmip5_tas_historical_subset.nc.png"
        status, data, headers = AdagucTestTools().runADAGUCServer(
            args=['--updatedb', '--config', ADAGUC_PATH + '/data/config/adaguc.tests.autostyle.xml'], env=self.env, isCGI=False)
        self.assertEqual(status, 0)
        status, data, headers = AdagucTestTools().runADAGUCServer(
            "source=test/ipcc_cmip5_tas_historical_subset.nc&SERVICE=WMS&SERVICE=WMS&VERSION=1.3.0&REQUEST=GetMap&LAYERS=tas,geojsonoverlay&&format=image%2Fpng32&crs=%2Bproj%3Dstere%20%2Blat_0%3D90%20%2Blat_ts%3D70%20%2Blon_0%3D-45%20%2Bk%3D1%20%2Bx_0%3D0%20%2By_0%3D0%20%2Bdatum%3DWGS84%20%2Bunits%3Dm%20%2Bno_defs&width=800&height=600&BBOX=-4630165.372231959,-4523993.082972504,5384973.558397711,4717659.691530302&",
            {'ADAGUC_CONFIG': ADAGUC_PATH + '/data/config/adaguc.tests.autostyle.xml'})
        AdagucTestTools().writetofile(self.testresultspath + filename, data.getvalue())
        self.assertEqual(status, 0)
        self.assertEqual(data.getvalue(), AdagucTestTools().readfromfile(
            self.expectedoutputsspath + filename))

    # def test_WMSGetMapCustomCRSClippedRobinsonProjection_ipcc_cmip5_tas_historical_subset_nc(self):
    #     AdagucTestTools().cleanTempDir()
    #     filename = "test_WMSGetMapCustomCRSClippedRobinsonProjection_ipcc_cmip5_tas_historical_subset_nc.nc.png"
    #     status, data, headers = AdagucTestTools().runADAGUCServer(
    #         args=['--updatedb', '--config', ADAGUC_PATH + '/data/config/adaguc.tests.autostyle.xml'], env=self.env, isCGI=False)
    #     self.assertEqual(status, 0)
    #     status, data, headers = AdagucTestTools().runADAGUCServer(
    #         "source=test/ipcc_cmip5_tas_historical_subset.nc&SERVICE=WMS&SERVICE=WMS&VERSION=1.3.0&REQUEST=GetMap&LAYERS=tas&format=image%2Fpng32&crs=%2Bproj%3Drobin+%2Blon_0%3D-150+%2Bx_0%3D0+%2By_0%3D0+%2Bellps%3DWGS84+%2Bdatum%3DWGS84+%2Bunits%3Dm+%2Bno_defs&width=800&height=600&BBOX=-17002000,-8700000,17002000,8700000",
    #         {'ADAGUC_CONFIG': ADAGUC_PATH + '/data/config/adaguc.tests.autostyle.xml'})
    #     AdagucTestTools().writetofile(self.testresultspath + filename, data.getvalue())
    #     self.assertEqual(status, 0)
    #     self.assertEqual(data.getvalue(), AdagucTestTools().readfromfile(self.expectedoutputsspath + filename))

    def test_WMSGetFeatureInfo_timeseries_KNMIHDF5_json(self):
        AdagucTestTools().cleanTempDir()
        ADAGUC_PATH = os.environ['ADAGUC_PATH']
        env = {'ADAGUC_CONFIG': ADAGUC_PATH + "/data/config/adaguc.tests.dataset.xml," +
               ADAGUC_PATH + "/data/config/datasets/adaguc.KNMIHDF5.test.xml"}
        config = ADAGUC_PATH + '/data/config/adaguc.tests.dataset.xml,' + \
            ADAGUC_PATH + '/data/config/datasets/adaguc.KNMIHDF5.test.xml'
        status, data, headers = AdagucTestTools().runADAGUCServer(
            args=['--updatedb', '--config', config], env=self.env, isCGI=False)
        self.assertEqual(status, 0)
        filename = "test_WMSGetFeatureInfo_timeseries_KNMIHDF5_json.json"
        status, data, headers = AdagucTestTools().runADAGUCServer(
            "dataset=adaguc.KNMIHDF5.test&service=WMS&request=GetFeatureInfo&version=1.3.0&layers=RAD_NL25_PCP_CM&query_layers=RAD_NL25_PCP_CM&crs=EPSG%3A3857&bbox=467411.5837657447%2C5796421.971094566%2C889884.3758374067%2C7834481.671540775&width=199&height=960&i=103&j=501&format=image%2Fgif&info_format=application%2Fjson&time=1000-01-01T00%3A00%3A00Z%2F3000-01-01T00%3A00%3A00Z&",
            env=env)
        AdagucTestTools().writetofile(self.testresultspath + filename, data.getvalue())
        self.assertEqual(status, 0)
        self.assertEqual(data.getvalue(), AdagucTestTools().readfromfile(
            self.expectedoutputsspath + filename))

    def test_WMSGetMapWithBilinearRendering(self):
        AdagucTestTools().cleanTempDir()
        filename = "test_WMSGetMapWithBilinearRendering_gsie-klimaatatlas2020-ev4-resampled.nc.png"
        status, data, headers = AdagucTestTools().runADAGUCServer(
            "source=gsie-klimaatatlas2020-ev4-resampled.nc&SERVICE=WMS&&SERVICE=WMS&VERSION=1.3.0&REQUEST=GetMap&LAYERS=interpolatedObs&WIDTH=350&HEIGHT=400&CRS=EPSG%3A3857&BBOX=310273.981651517,6517666.437519898,896694.2006277166,7153301.592131215&STYLES=auto%2Fbilinear&FORMAT=image/png&TRANSPARENT=TRUE&&time=2020-01-01T00%3A00%3A00Z",
            {'ADAGUC_CONFIG': ADAGUC_PATH + '/data/config/adaguc.tests.autostyle.xml'})
        AdagucTestTools().writetofile(self.testresultspath + filename, data.getvalue())
        self.assertEqual(status, 0)
        self.assertEqual(data.getvalue(), AdagucTestTools().readfromfile(
            self.expectedoutputsspath + filename))
"MIT"
] | 28 | 2021-01-15T21:31:40.000Z | 2022-03-30T21:06:54.000Z | careless/models/merging/__init__.py | JBGreisman/careless | 8f6c0859973757d11b26b65d9dc51d443030aa70 | [
"MIT"
] | 5 | 2021-02-12T18:43:58.000Z | 2022-02-02T21:38:56.000Z | from . import variational
| 13 | 25 | 0.807692 | 3 | 26 | 7 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 26 | 1 | 26 | 26 | 0.954545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ef94cda9910c5f11f1f187eeb0d9c2dc3bc5e644 | 90 | py | Python | iris_pipeline/readout/__init__.py | zonca/iris_pipeline | a4c20a362037a94f66427521bb5cd5da1c918dd7 | [
"BSD-3-Clause"
] | null | null | null | iris_pipeline/readout/__init__.py | zonca/iris_pipeline | a4c20a362037a94f66427521bb5cd5da1c918dd7 | [
"BSD-3-Clause"
] | 38 | 2019-03-07T01:25:03.000Z | 2022-03-01T13:02:29.000Z | iris_pipeline/readout/__init__.py | zonca/iris_pipeline | a4c20a362037a94f66427521bb5cd5da1c918dd7 | [
"BSD-3-Clause"
] | 1 | 2019-02-28T02:39:06.000Z | 2019-02-28T02:39:06.000Z | from .readoutsamp_step import ReadoutsampStep
from .nonlincorr_step import NonlincorrStep
| 30 | 45 | 0.888889 | 10 | 90 | 7.8 | 0.7 | 0.25641 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088889 | 90 | 2 | 46 | 45 | 0.95122 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ef9dd96e424d2f8f59e24cbfbba86d6e11640044 | 148 | py | Python | lambda.py | theAshokSharma/pytipstricks | 0073debe9d4ae09bc7a91eea54b257877c81cb42 | [
"Unlicense"
] | null | null | null | lambda.py | theAshokSharma/pytipstricks | 0073debe9d4ae09bc7a91eea54b257877c81cb42 | [
"Unlicense"
] | null | null | null | lambda.py | theAshokSharma/pytipstricks | 0073debe9d4ae09bc7a91eea54b257877c81cb42 | [
"Unlicense"
] | null | null | null | import dis
def func(x):
return lambda y: (x + y + 1)
def func1(x):
    return lambda y: (func(x)(x) + y + 1)
print(func1(10)(2))
dis.dis(func1) | 13.454545 | 38 | 0.587838 | 29 | 148 | 3 | 0.448276 | 0.114943 | 0.298851 | 0.321839 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.068966 | 0.216216 | 148 | 11 | 39 | 13.454545 | 0.681034 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0 | 0.142857 | 0.285714 | 0.714286 | 0.142857 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
efd15e6265d58607841a325e55447ee6009262b4 | 229 | py | Python | graphzoo/__init__.py | oom-debugger/GraphZoo-1 | 7ef1184c0016090597e56b8706a87539a3f62fd6 | [
"MIT"
] | 2 | 2022-03-30T01:11:39.000Z | 2022-03-30T11:08:12.000Z | graphzoo/__init__.py | oom-debugger/GraphZoo-1 | 7ef1184c0016090597e56b8706a87539a3f62fd6 | [
"MIT"
] | null | null | null | graphzoo/__init__.py | oom-debugger/GraphZoo-1 | 7ef1184c0016090597e56b8706a87539a3f62fd6 | [
"MIT"
] | 2 | 2022-01-27T21:03:40.000Z | 2022-03-15T20:20:12.000Z | from __future__ import print_function
from __future__ import division
from . import dataloader
from . import layers
from . import manifolds
from . import trainers
from . import optimizers
from . import utils
from . import models
| 22.9 | 37 | 0.812227 | 30 | 229 | 5.9 | 0.433333 | 0.39548 | 0.180791 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.157205 | 229 | 9 | 38 | 25.444444 | 0.917098 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0.111111 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ef346b6f72fc45b2b0b21aedbb1f50ecbccf64f8 | 6,230 | py | Python | feature_engineering/extract_examples.py | go-jugo/ml_event_prediction_trainer | 0d644b737afdef078ad5b6fc2b7e2549b964b56f | [
"Apache-2.0"
] | null | null | null | feature_engineering/extract_examples.py | go-jugo/ml_event_prediction_trainer | 0d644b737afdef078ad5b6fc2b7e2549b964b56f | [
"Apache-2.0"
] | null | null | null | feature_engineering/extract_examples.py | go-jugo/ml_event_prediction_trainer | 0d644b737afdef078ad5b6fc2b7e2549b964b56f | [
"Apache-2.0"
] | null | null | null | import datetime
import pandas as pd
import dask.dataframe as dd
from sklearn.utils import shuffle
import dask
import glob
from tsfresh import extract_features
from tsfresh.feature_extraction import EfficientFCParameters, MinimalFCParameters
from ..monitoring.time_it import timing
from math import ceil
import copy
import random
from .extract_windows_and_engineer_features_with_tsfresh import get_processed_timestamp_list
from .extract_windows_and_engineer_features_with_tsfresh import get_clean_errorcode_column_to_process
from .extract_windows_and_engineer_features_with_tsfresh import calculate_window
@timing
def extract_examples(df, error_code_series, errorcode_col, errorcode, pw_rw_list , minimal_features, iterations=1,
extract_examples=True):
if extract_examples:
for conf in pw_rw_list:
print(conf)
processed_timestamp_list_neg = get_processed_timestamp_list(errorcode, window_length=conf[1],
window_end=conf[0], negative_examples=True)
df_process_neg = get_clean_errorcode_column_to_process(error_code_series, errorcode_col, errorcode,
window_end=conf[0], window_length=conf[1], negative_examples=True)
df_process_neg = df_process_neg.drop(index=processed_timestamp_list_neg, errors='ignore')
for i in range(iterations):
df_process_neg = df_process_neg.drop(index=processed_timestamp_list_neg, errors='ignore')
print('Number of possible examples to process: ' + str(len(df_process_neg)))
if len(df_process_neg) >= 500:
df_loop = df_process_neg.sample(n=500)
else:
df_loop = df_process_neg
df_loop = df_loop.squeeze('columns')
print('Number of examples to process this iteration: ' + str(len(df_loop)))
process_list = list(zip(df_loop.index, df_loop))
lazy_results = []
for element in process_list:
window_start_date = element[0] - datetime.timedelta(seconds=(conf[1] + conf[0]))
window_end_date = element[0] - datetime.timedelta(seconds=(conf[0]))
lazy_result = dask.delayed(calculate_window)(df, window_start_date, window_end_date, element,
minimal_features, window_length=conf[1],
errorcode_col=errorcode_col, extract_negative_examples=True)
lazy_results.append(lazy_result)
lazy_results = dask.compute(*lazy_results)
df_tsfresh = pd.concat(lazy_results)
processed_timestamp_list_neg.extend(df_tsfresh['global_timestamp'].to_list())
df_tsfresh = df_tsfresh.dropna(axis=0, how='any')
df_tsfresh = df_tsfresh.reset_index(drop=True)
file_counter = len(glob.glob('../data/Extracted_Examples_ts_fresh/errorcode_' + str(errorcode) +
'_PW_' + str(conf[0]) + '_RW_' + str(conf[1]) + '_' + 'neg*.gzip'))
df_tsfresh.to_parquet('../data/Extracted_Examples_ts_fresh/errorcode_' + str(errorcode) + '_PW_' +
str(conf[0]) + '_RW_' + str(conf[1]) + '_' + 'neg' + '_' +
str(file_counter) + str('.parquet.gzip'))
print('parquet created')
processed_timestamp_list_pos = get_processed_timestamp_list(errorcode, window_length=conf[1], window_end=conf[0], negative_examples=False)
df_process_pos = get_clean_errorcode_column_to_process(error_code_series, errorcode_col, errorcode,
window_end=conf[0], window_length=conf[1],
negative_examples=False)
df_process_pos = df_process_pos.drop(index=processed_timestamp_list_pos, errors='ignore')
print('Number of possible examples to process: ' + str(len(df_process_pos)))
if len(df_process_pos) >= 500:
df_loop = df_process_pos.sample(n=500)
else:
df_loop = df_process_pos
df_loop = df_loop.squeeze('columns')
print('Number of examples to process this iteration: ' + str(len(df_loop)))
process_list = list(zip(df_loop.index, df_loop))
lazy_results = []
for element in process_list:
window_start_date = element[0] - datetime.timedelta(seconds=(conf[1] + conf[0]))
window_end_date = element[0] - datetime.timedelta(seconds=(conf[0]))
lazy_result = dask.delayed(calculate_window)(df, window_start_date, window_end_date, element,
minimal_features, window_length=conf[1],
errorcode_col=errorcode_col, extract_negative_examples=False)
lazy_results.append(lazy_result)
lazy_results = dask.compute(*lazy_results)
df_tsfresh = pd.concat(lazy_results)
processed_timestamp_list_pos.extend(df_tsfresh['global_timestamp'].to_list())
df_tsfresh = df_tsfresh.dropna(axis=0, how='any')
df_tsfresh = df_tsfresh.reset_index(drop=True)
file_counter = len(glob.glob('../data/Extracted_Examples_ts_fresh/errorcode_' + str(errorcode) +
'_PW_' + str(conf[0]) + '_RW_' + str(conf[1]) + '_' + 'pos*.gzip'))
df_tsfresh.to_parquet('../data/Extracted_Examples_ts_fresh/errorcode_' + str(errorcode) +
'_PW_' + str(conf[0]) + '_RW_' + str(conf[1]) + '_' + 'pos' + '_' +
str(file_counter) + str('.parquet.gzip'))
print('parquet created')
return df
| 62.3 | 151 | 0.583949 | 686 | 6,230 | 4.932945 | 0.172012 | 0.042553 | 0.065012 | 0.030142 | 0.826241 | 0.802896 | 0.767731 | 0.767731 | 0.750591 | 0.706856 | 0 | 0.01025 | 0.326645 | 6,230 | 99 | 152 | 62.929293 | 0.796424 | 0 | 0 | 0.431818 | 0 | 0 | 0.088758 | 0.030021 | 0 | 0 | 0 | 0 | 0 | 1 | 0.011364 | false | 0 | 0.170455 | 0 | 0.193182 | 0.079545 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
32358868c0130ba305dcb20a55d465611bee9163 | 88 | py | Python | telegram_bot/commands/menu/__init__.py | alenworld/django_telegram_bot | aa9a3570787feaaf474086a8cee66155f749983e | [
"MIT"
] | 3 | 2021-07-07T02:30:56.000Z | 2021-12-19T07:48:35.000Z | telegram_bot/commands/menu/__init__.py | alenworld/django_telegram_bot | aa9a3570787feaaf474086a8cee66155f749983e | [
"MIT"
] | null | null | null | telegram_bot/commands/menu/__init__.py | alenworld/django_telegram_bot | aa9a3570787feaaf474086a8cee66155f749983e | [
"MIT"
] | 1 | 2021-07-07T02:42:23.000Z | 2021-07-07T02:42:23.000Z | from .auth import *
from .chat_support import *
from .faq import *
from .claim import *
| 17.6 | 27 | 0.727273 | 13 | 88 | 4.846154 | 0.538462 | 0.47619 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 88 | 4 | 28 | 22 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
32422395c2a8cf051d54895b6cb06db78db0a414 | 554 | py | Python | plugins/test_utils.py | StijnZanders/serverless-etl | ed67d4fead6de87a9fa161cd1732601b81eb99f8 | [
"Apache-2.0"
] | null | null | null | plugins/test_utils.py | StijnZanders/serverless-etl | ed67d4fead6de87a9fa161cd1732601b81eb99f8 | [
"Apache-2.0"
] | 22 | 2020-11-27T22:21:01.000Z | 2021-11-08T18:39:46.000Z | plugins/test_utils.py | StijnZanders/limber | ed67d4fead6de87a9fa161cd1732601b81eb99f8 | [
"Apache-2.0"
] | null | null | null | def test(arg):
import pandas as pd
from datetime import datetime
df = pd.DataFrame([[datetime.now(), arg]], columns=["current_timestamp", "message"])
df.to_gbq("test_dataset.test_table", if_exists="append")
def test_multiple_outputs(arg):
return ["test1", "test2"]
def test_with_context(arg, context):
import pandas as pd
from datetime import datetime
print(context)
df = pd.DataFrame([[datetime.now(), arg]], columns=["current_timestamp", "message"])
df.to_gbq("test_dataset.test_table", if_exists="append")
| 29.157895 | 88 | 0.694946 | 75 | 554 | 4.946667 | 0.413333 | 0.056604 | 0.075472 | 0.086253 | 0.754717 | 0.754717 | 0.754717 | 0.754717 | 0.528302 | 0.528302 | 0 | 0.004292 | 0.158845 | 554 | 18 | 89 | 30.777778 | 0.791845 | 0 | 0 | 0.615385 | 0 | 0 | 0.209386 | 0.083032 | 0 | 0 | 0 | 0 | 0 | 1 | 0.230769 | false | 0 | 0.307692 | 0.076923 | 0.615385 | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3ead3e1723ccde06bb90dfe12867b1f3fa548ca8 | 15,916 | py | Python | src/kgtests/src/extraction/test_load_openie_extractions.py | HermannKroll/KGExtractionToolbox | c17a55dd1fa098f5033b7765ed0f80d3abb44cb7 | [
"MIT"
] | 6 | 2021-09-17T09:49:59.000Z | 2021-12-06T10:07:01.000Z | src/kgtests/src/extraction/test_load_openie_extractions.py | HermannKroll/KGExtractionToolbox | c17a55dd1fa098f5033b7765ed0f80d3abb44cb7 | [
"MIT"
] | null | null | null | src/kgtests/src/extraction/test_load_openie_extractions.py | HermannKroll/KGExtractionToolbox | c17a55dd1fa098f5033b7765ed0f80d3abb44cb7 | [
"MIT"
] | 1 | 2021-09-18T17:56:12.000Z | 2021-09-18T17:56:12.000Z | from unittest import TestCase
from sqlalchemy import delete
from kgextractiontoolbox.extraction.loading.load_extractions import PRED
from kgextractiontoolbox.extraction.loading.load_openie_extractions import load_openie_tuples, OpenIEEntityFilterMode, \
get_subject_and_object_entities, clean_tuple_predicate_based
from kgextractiontoolbox.backend.database import Session
from kgextractiontoolbox.backend.models import Predication
from kgextractiontoolbox.document.load_document import document_bulk_load
from kgtests import util
class LoadExtractionsTestCase(TestCase):
def setUp(self) -> None:
documents_file = util.get_test_resource_filepath("extraction/documents_1.pubtator")
test_mapping = {"Chemical": ("Chemical", "1.0"), "Disease": ("Diseasetagger", "1.0")}
document_bulk_load(documents_file, "Test_Load_OpenIE_1", tagger_mapping=test_mapping, ignore_tags=False)
def test_detect_subjects_and_objects(self):
doc_tags = [("E1", "this", "ThisType"),
("E1", "test", "TestType")]
s, o = get_subject_and_object_entities(doc_tags, "this", "test",
entity_filter=OpenIEEntityFilterMode.EXACT_ENTITY_FILTER)
self.assertEqual(('this', 'E1', 'ThisType'), s[0])
self.assertEqual(('test', 'E1', 'TestType'), o[0])
s, o = get_subject_and_object_entities(doc_tags, "This", "Test",
entity_filter=OpenIEEntityFilterMode.EXACT_ENTITY_FILTER)
self.assertEqual(('this', 'E1', 'ThisType'), s[0])
self.assertEqual(('test', 'E1', 'TestType'), o[0])
s, o = get_subject_and_object_entities(doc_tags, "this", "test",
entity_filter=OpenIEEntityFilterMode.PARTIAL_ENTITY_FILTER)
self.assertEqual(('this', 'E1', 'ThisType'), s[0])
self.assertEqual(('test', 'E1', 'TestType'), o[0])
s, o = get_subject_and_object_entities(doc_tags, "This", "Test",
entity_filter=OpenIEEntityFilterMode.PARTIAL_ENTITY_FILTER)
self.assertEqual(('this', 'E1', 'ThisType'), s[0])
self.assertEqual(('test', 'E1', 'TestType'), o[0])
s, o = get_subject_and_object_entities(doc_tags, "this is", "a test",
entity_filter=OpenIEEntityFilterMode.PARTIAL_ENTITY_FILTER)
self.assertEqual(('this', 'E1', 'ThisType'), s[0])
self.assertEqual(('test', 'E1', 'TestType'), o[0])
s, o = get_subject_and_object_entities(doc_tags, "This is", "A Test",
entity_filter=OpenIEEntityFilterMode.PARTIAL_ENTITY_FILTER)
self.assertEqual(('this', 'E1', 'ThisType'), s[0])
self.assertEqual(('test', 'E1', 'TestType'), o[0])
s, o = get_subject_and_object_entities(doc_tags, "this is", "a test",
entity_filter=OpenIEEntityFilterMode.NO_ENTITY_FILTER)
self.assertEqual(('this is', 'this is', 'Unknown'), s[0])
self.assertEqual(('a test', 'a test', 'Unknown'), o[0])
    def test_load_openie_extractions_no_entity_filter(self):
session = Session.get()
session.execute(delete(Predication).where(Predication.document_collection == 'Test_Load_OpenIE_1'))
session.commit()
openie_file = util.get_test_resource_filepath("extraction/openie_extractions_1.tsv")
load_openie_tuples(openie_file, document_collection="Test_Load_OpenIE_1",
entity_filter=OpenIEEntityFilterMode.NO_ENTITY_FILTER,
filter_predicate_str=True,
swap_passive_voice=True,
keep_be_and_have=False)
self.assertEqual(8, session.query(Predication).filter(
Predication.document_collection == "Test_Load_OpenIE_1").count())
tuples = set()
for q in Predication.iterate_predications_joined_sentences(session, document_collection="Test_Load_OpenIE_1"):
tuples.add((q.Predication.document_id, q.Predication.document_collection,
q.Predication.subject_id, q.Predication.subject_type, q.Predication.subject_str,
q.Predication.predicate, q.Predication.relation,
q.Predication.object_id, q.Predication.object_type, q.Predication.object_str,
q.Predication.extraction_type, q.Sentence.text))
self.assertIn((22836123, 'Test_Load_OpenIE_1',
'tacrolimus', 'Unknown', 'tacrolimus',
'induce', None,
'onset scleroderma crisis', 'Unknown', 'onset scleroderma crisis', 'OpenIE',
'Late - onset scleroderma renal crisis induced by tacrolimus and prednisolone : a case report .'),
tuples)
self.assertIn((22836123, 'Test_Load_OpenIE_1',
'tacrolimus', 'Unknown', 'tacrolimus',
'induce', None,
'onset scleroderma renal crisis', 'Unknown', 'onset scleroderma renal crisis', 'OpenIE',
'Late - onset scleroderma renal crisis induced by tacrolimus and prednisolone : a case report .'),
tuples)
self.assertIn((22836123, 'Test_Load_OpenIE_1',
'major risk factor', 'Unknown', 'major risk factor',
'recognize', None,
'moderate', 'Unknown', 'moderate', 'OpenIE',
'Moderate to high dose corticosteroid use is recognized as a major risk factor for SRC .'),
tuples)
self.assertIn((22836123, 'Test_Load_OpenIE_1',
'risk factor for src', 'Unknown', 'risk factor for src',
'recognize', None,
'moderate', 'Unknown', 'moderate', 'OpenIE',
'Moderate to high dose corticosteroid use is recognized as a major risk factor for SRC .'),
tuples)
self.assertIn((22836123, 'Test_Load_OpenIE_1',
'major risk factor for src', 'Unknown', 'major risk factor for src',
'recognize', None,
'moderate', 'Unknown', 'moderate', 'OpenIE',
'Moderate to high dose corticosteroid use is recognized as a major risk factor for SRC .'),
tuples)
self.assertIn((22836123, 'Test_Load_OpenIE_1',
'risk factor', 'Unknown', 'risk factor',
'recognize', None,
'moderate', 'Unknown', 'moderate', 'OpenIE',
'Moderate to high dose corticosteroid use is recognized as a major risk factor for SRC .'),
tuples)
self.assertIn((22836123, 'Test_Load_OpenIE_1',
'cyclosporine patients', 'Unknown', 'cyclosporine patients',
'precipitate', None,
'have reports', 'Unknown', 'have reports', 'OpenIE',
'Furthermore , there have been reports of thrombotic microangiopathy precipitated by cyclosporine in patients with SSc .'),
tuples)
self.assertIn((22836123, 'Test_Load_OpenIE_1',
'cyclosporine patients ssc', 'Unknown', 'cyclosporine patients ssc',
'precipitate', None,
'have reports', 'Unknown', 'have reports', 'OpenIE',
'Furthermore , there have been reports of thrombotic microangiopathy precipitated by cyclosporine in patients with SSc .'),
tuples)
    def test_load_openie_extractions_partial_entity_filter(self):
session = Session.get()
session.execute(delete(Predication).where(Predication.document_collection == 'Test_Load_OpenIE_1'))
session.commit()
openie_file = util.get_test_resource_filepath("extraction/openie_extractions_1.tsv")
load_openie_tuples(openie_file, document_collection="Test_Load_OpenIE_1",
filter_predicate_str=True,
swap_passive_voice=True,
entity_filter=OpenIEEntityFilterMode.PARTIAL_ENTITY_FILTER)
self.assertEqual(1, session.query(Predication).filter(
Predication.document_collection == "Test_Load_OpenIE_1").count())
tuples = set()
for q in Predication.iterate_predications_joined_sentences(session, document_collection="Test_Load_OpenIE_1"):
tuples.add((q.Predication.document_id, q.Predication.document_collection,
q.Predication.subject_id, q.Predication.subject_type, q.Predication.subject_str,
q.Predication.predicate, q.Predication.relation,
q.Predication.object_id, q.Predication.object_type, q.Predication.object_str,
q.Predication.extraction_type, q.Sentence.text))
self.assertIn((22836123, 'Test_Load_OpenIE_1',
'D016559', 'Chemical', 'tacrolimus',
'induce', None,
'D007674', 'Disease', 'scleroderma renal crisis', 'OpenIE',
'Late - onset scleroderma renal crisis induced by tacrolimus and prednisolone : a case report .'),
tuples)
    def test_load_openie_extractions_exact_entity_filter(self):
session = Session.get()
session.execute(delete(Predication).where(Predication.document_collection == 'Test_Load_OpenIE_1'))
session.commit()
openie_file = util.get_test_resource_filepath("extraction/openie_extractions_1.tsv")
load_openie_tuples(openie_file, document_collection="Test_Load_OpenIE_1",
filter_predicate_str=True,
swap_passive_voice=True,
entity_filter=OpenIEEntityFilterMode.EXACT_ENTITY_FILTER)
self.assertEqual(0, session.query(Predication).filter(
Predication.document_collection == "Test_Load_OpenIE_1").count())
def test_clean_tuple_predicate_based_not(self):
example1 = PRED(1, "USA", "will not tolerate", "be not tolerate", "UDSSR", 0.0, "USA will not tolerate UDSSR.",
"USA", "USA", "State", "UDSSR", "UDSSR", "State")
cleaned = clean_tuple_predicate_based(example1)
self.assertEqual(cleaned, example1)
example2 = PRED(1, "USA", "will tolerate", "be tolerate", "UDSSR", 0.0, "USA will not tolerate UDSSR.",
"USA", "USA", "State", "UDSSR", "UDSSR", "State")
cleaned2 = clean_tuple_predicate_based(example2)
self.assertEqual(cleaned2, example2)
def test_clean_tuple_predicate_based_ignore_be(self):
example1 = PRED(1, "USA", "will not tolerate", "be not tolerate", "UDSSR", 0.0, "USA will not tolerate UDSSR.",
"USA", "USA", "State", "UDSSR", "UDSSR", "State")
cleaned = clean_tuple_predicate_based(example1, keep_be_and_have=False, filter_predicate_str=True)
self.assertNotEqual(cleaned, example1)
correct1 = PRED(1, "USA", "will not tolerate", "not tolerate", "UDSSR", 0.0, "USA will not tolerate UDSSR.",
"USA", "USA", "State", "UDSSR", "UDSSR", "State")
self.assertEqual(cleaned, correct1)
example2 = PRED(1, "USA", "will tolerate", "be tolerate", "UDSSR", 0.0, "USA will not tolerate UDSSR.",
"USA", "USA", "State", "UDSSR", "UDSSR", "State")
cleaned2 = clean_tuple_predicate_based(example2, keep_be_and_have=False, filter_predicate_str=True)
self.assertNotEqual(cleaned2, example2)
correct2 = PRED(1, "USA", "will tolerate", "tolerate", "UDSSR", 0.0, "USA will not tolerate UDSSR.",
"USA", "USA", "State", "UDSSR", "UDSSR", "State")
self.assertEqual(cleaned2, correct2)
def test_clean_tuple_predicate_based_passive_voice(self):
# this triple should be flipped (passive voice)
example3 = PRED(1, "USA", "be tolerated by", "be tolerate by", "UDSSR", 0.0, "USA will not tolerate UDSSR.",
"USA", "USA", "State", "UDSSR", "UDSSR", "State")
correct3 = PRED(1, "UDSSR", "be tolerated by", "tolerate", "USA", 0.0, "USA will not tolerate UDSSR.",
"UDSSR", "UDSSR", "State", "USA", "USA", "State")
cleaned3 = clean_tuple_predicate_based(example3, swap_passive_voice=True)
self.assertNotEqual(cleaned3, example3)
self.assertEqual(cleaned3, correct3)
def test_clean_tuple_predicate_based_no_passive_voice_swap(self):
# this triple should be flipped (passive voice)
example3 = PRED(1, "USA", "be tolerated by", "be tolerate by", "UDSSR", 0.0, "USA will not tolerate UDSSR.",
"USA", "USA", "State", "UDSSR", "UDSSR", "State")
correct3 = PRED(1, "UDSSR", "be tolerated by", "be tolerate by", "USA", 0.0, "USA will not tolerate UDSSR.",
"UDSSR", "UDSSR", "State", "USA", "USA", "State")
cleaned3 = clean_tuple_predicate_based(example3, swap_passive_voice=False)
self.assertNotEqual(cleaned3, correct3)
self.assertEqual(cleaned3, example3)
def test_clean_tuple_predicate_based_fails_to(self):
example = PRED(1, "USA", "fails to offer", "fail to offer", "UDSSR", 0.0, "USA fails to offer the UDSSR.",
"USA", "USA", "State", "UDSSR", "UDSSR", "State")
cleaned = clean_tuple_predicate_based(example, filter_predicate_str=True)
self.assertNotEqual(cleaned, example)
correct = PRED(1, "USA", "fails to offer", "fail offer", "UDSSR", 0.0, "USA fails to offer the UDSSR.",
"USA", "USA", "State", "UDSSR", "UDSSR", "State")
self.assertEqual(cleaned, correct)
def test_clean_tuple_predicate_based_mate(self):
# Example extraction:
# 995 Henry A. Wallace, is mate of from be mate of from Franklin D. Roosevelt 0.16
# Letter from Govenor Herbert H. Lehman to William Wallace Farley, October 22, 1940 inviting Mr.
# Farley to a supper party in honor of Henry A. Wallace, Vice Presidential Candidate and running mate of
# Franklin D. Roosevelt in the 1940 U.S. Presidential Election..
example = PRED(1, "Henry A. Wallace", "is mate of from", "be mate of from", "Franklin D. Roosevelt", 0.0,
".",
"Henry A. Wallace", "Henry A. Wallace", "Person",
"Franklin D. Roosevelt", "Franklin D. Roosevelt", "Person")
cleaned = clean_tuple_predicate_based(example, filter_predicate_str=True)
self.assertNotEqual(cleaned, example)
correct = PRED(1, "Henry A. Wallace", "is mate of from", "be mate", "Franklin D. Roosevelt", 0.0,
".",
"Henry A. Wallace", "Henry A. Wallace", "Person",
"Franklin D. Roosevelt", "Franklin D. Roosevelt", "Person")
self.assertEqual(cleaned, correct)
def test_clean_tuple_keep_original_predicate(self):
example = PRED(1, "USA", "fails to offer", "fail to offer", "UDSSR", 0.0, "USA fails to offer the UDSSR.",
"USA", "USA", "State", "UDSSR", "UDSSR", "State")
correct = PRED(1, "USA", "fails to offer", "fails to offer", "UDSSR", 0.0, "USA fails to offer the UDSSR.",
"USA", "USA", "State", "UDSSR", "UDSSR", "State")
cleaned = clean_tuple_predicate_based(example, keep_original_predicate=True)
self.assertNotEqual(example, cleaned)
self.assertEqual(correct, cleaned)
| 60.060377 | 146 | 0.6048 | 1,726 | 15,916 | 5.378331 | 0.118192 | 0.03124 | 0.036195 | 0.033933 | 0.830766 | 0.805989 | 0.771518 | 0.761392 | 0.742217 | 0.742217 | 0 | 0.021364 | 0.279467 | 15,916 | 264 | 147 | 60.287879 | 0.788106 | 0.030221 | 0 | 0.660465 | 0 | 0 | 0.264601 | 0.008816 | 0 | 0 | 0 | 0 | 0.195349 | 1 | 0.055814 | false | 0.032558 | 0.037209 | 0 | 0.097674 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
41165c1e2720fe747fd40f7e94b8c1c70db67240 | 37 | py | Python | processors/postprocessors/__init__.py | Zvezdin/blockchain-predictor | df6f939037471dd50b7b9c96673d89b04b646ef2 | [
"MIT"
] | 35 | 2017-10-25T17:10:35.000Z | 2022-03-20T18:12:06.000Z | processors/postprocessors/__init__.py | Zvezdin/blockchain-predictor | df6f939037471dd50b7b9c96673d89b04b646ef2 | [
"MIT"
] | 2 | 2017-09-20T17:39:15.000Z | 2018-04-01T17:20:29.000Z | processors/postprocessors/__init__.py | Zvezdin/blockchain-predictor | df6f939037471dd50b7b9c96673d89b04b646ef2 | [
"MIT"
] | 10 | 2017-12-01T13:47:04.000Z | 2021-12-16T06:53:17.000Z | from .postprocessors_imports import * | 37 | 37 | 0.864865 | 4 | 37 | 7.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081081 | 37 | 1 | 37 | 37 | 0.911765 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4124706fc15ef3a1d33fbe04965cb6ba6b886989 | 64 | py | Python | 25/02/id.py | pylangstudy/201707 | c1cc72667f1e0b6e8eef4ee85067d7fa4ca500b6 | [
"CC0-1.0"
] | null | null | null | 25/02/id.py | pylangstudy/201707 | c1cc72667f1e0b6e8eef4ee85067d7fa4ca500b6 | [
"CC0-1.0"
] | 46 | 2017-06-30T22:19:07.000Z | 2017-07-31T22:51:31.000Z | 25/02/id.py | pylangstudy/201707 | c1cc72667f1e0b6e8eef4ee85067d7fa4ca500b6 | [
"CC0-1.0"
] | null | null | null | #id(object)
print(id(1))
print(id('a'))
a = 'abc'
print(id(a))
| 10.666667 | 14 | 0.578125 | 14 | 64 | 2.642857 | 0.5 | 0.567568 | 0.432432 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017544 | 0.109375 | 64 | 5 | 15 | 12.8 | 0.631579 | 0.15625 | 0 | 0 | 0 | 0 | 0.075472 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.75 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
f5ca05d43f8f06b2799f48e4c8e656b4054408bf | 197 | py | Python | src/AuShadha/medication_list/admin.py | GosthMan/AuShadha | 3ab48825a0dba19bf880b6ac6141ab7a6adf1f3e | [
"PostgreSQL"
] | 46 | 2015-03-04T14:19:47.000Z | 2021-12-09T02:58:46.000Z | src/AuShadha/medication_list/admin.py | aytida23/AuShadha | 3ab48825a0dba19bf880b6ac6141ab7a6adf1f3e | [
"PostgreSQL"
] | 2 | 2015-06-05T10:29:04.000Z | 2015-12-06T16:54:10.000Z | src/AuShadha/medication_list/admin.py | aytida23/AuShadha | 3ab48825a0dba19bf880b6ac6141ab7a6adf1f3e | [
"PostgreSQL"
] | 24 | 2015-03-23T01:38:11.000Z | 2022-01-24T16:23:42.000Z | from django.contrib import admin
from medication_list.models import MedicationList
class MedicationListAdmin(admin.ModelAdmin):
pass
admin.site.register(MedicationList, MedicationListAdmin)
| 21.888889 | 56 | 0.84264 | 21 | 197 | 7.857143 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.101523 | 197 | 8 | 57 | 24.625 | 0.932203 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.2 | 0.4 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
f5f785968c7852d7e5037b6a519e44cd02ff304a | 133 | py | Python | genepi/tools/__init__.py | dn070017/GenEpi | e6ee35e0b024408b80b75c25dd0b63c77a6e0339 | [
"MIT"
] | 21 | 2018-08-06T07:09:12.000Z | 2021-11-25T18:03:10.000Z | genepi/tools/__init__.py | dn070017/GenEpi | e6ee35e0b024408b80b75c25dd0b63c77a6e0339 | [
"MIT"
] | 7 | 2019-03-25T14:40:28.000Z | 2022-02-20T01:54:49.000Z | genepi/tools/__init__.py | dn070017/GenEpi | e6ee35e0b024408b80b75c25dd0b63c77a6e0339 | [
"MIT"
] | 10 | 2018-08-06T07:09:14.000Z | 2021-11-28T03:09:48.000Z | # -*- coding: utf-8 -*-
"""
Created on Jul 2019
@author: Chester (Yu-Chuan Chang)
"""
from . import six
from . import randomized_l1 | 14.777778 | 33 | 0.654135 | 19 | 133 | 4.526316 | 0.894737 | 0.232558 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.055046 | 0.180451 | 133 | 9 | 34 | 14.777778 | 0.733945 | 0.578947 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
eb0eb2fa307529896d9b8dda053f2b2804ea18be | 96 | py | Python | venv/lib/python3.8/site-packages/requests_toolbelt/adapters/x509.py | GiulianaPola/select_repeats | 17a0d053d4f874e42cf654dd142168c2ec8fbd11 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/requests_toolbelt/adapters/x509.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/requests_toolbelt/adapters/x509.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/f7/2a/62/70d0af1887cb7423f31319baae2f96ccce0cc67880181338301408af4a | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.4375 | 0 | 96 | 1 | 96 | 96 | 0.458333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
de2c2eff068039c852bc52774c8f9df54e38b45b | 150 | py | Python | python/oddvoices/utils.py | oddvoices/oddvoices | 824592478f4b805afff4d6da2728de5aa93d0575 | [
"Apache-2.0"
] | 25 | 2021-03-11T17:31:31.000Z | 2022-03-23T07:24:34.000Z | python/oddvoices/utils.py | oddvoices/oddvoices | 824592478f4b805afff4d6da2728de5aa93d0575 | [
"Apache-2.0"
] | 60 | 2021-03-04T03:16:05.000Z | 2022-01-21T05:36:46.000Z | python/oddvoices/utils.py | oddvoices/oddvoices | 824592478f4b805afff4d6da2728de5aa93d0575 | [
"Apache-2.0"
] | null | null | null | import pathlib
BASE_DIR = pathlib.Path(__file__).resolve().parent
def midi_note_to_hertz(midi_note):
return 440 * 2 ** ((midi_note - 69) / 12)
| 18.75 | 50 | 0.706667 | 23 | 150 | 4.173913 | 0.782609 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.063492 | 0.16 | 150 | 7 | 51 | 21.428571 | 0.698413 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0.25 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
de5428cf63a6c4fd011208011e920e6122a98d7c | 44 | py | Python | tools/Polygraphy/polygraphy/backend/pyt/__init__.py | leo0519/TensorRT | 498dcb009fe4c2dedbe9c61044d3de4f3c04a41b | [
"Apache-2.0"
] | 5,249 | 2019-06-17T17:20:34.000Z | 2022-03-31T17:56:05.000Z | tools/Polygraphy/polygraphy/backend/pyt/__init__.py | leo0519/TensorRT | 498dcb009fe4c2dedbe9c61044d3de4f3c04a41b | [
"Apache-2.0"
] | 1,721 | 2019-06-17T18:13:29.000Z | 2022-03-31T16:09:53.000Z | tools/Polygraphy/polygraphy/backend/pyt/__init__.py | leo0519/TensorRT | 498dcb009fe4c2dedbe9c61044d3de4f3c04a41b | [
"Apache-2.0"
] | 1,414 | 2019-06-18T04:01:17.000Z | 2022-03-31T09:16:53.000Z | from polygraphy.backend.pyt.runner import *
| 22 | 43 | 0.818182 | 6 | 44 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 44 | 1 | 44 | 44 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
de78e6b951699b7f15c5e9826b5edf8ef7128b74 | 33 | py | Python | colorization/baseline/__init__.py | soumik12345/colorization-using-optimization | 85a38e19810092b3bb630c3485f040a1a39a647d | [
"MIT"
] | 10 | 2021-08-17T04:33:32.000Z | 2022-03-18T20:07:35.000Z | synthtext/colorizer/__init__.py | mileistone/synthtext | 9ed751ace78b2d44a9dea191dec7277b7d5c607c | [
"Apache-2.0"
] | null | null | null | synthtext/colorizer/__init__.py | mileistone/synthtext | 9ed751ace78b2d44a9dea191dec7277b7d5c607c | [
"Apache-2.0"
] | 3 | 2020-04-01T03:00:00.000Z | 2021-02-09T14:48:23.000Z | from .colorizer import Colorizer
| 16.5 | 32 | 0.848485 | 4 | 33 | 7 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121212 | 33 | 1 | 33 | 33 | 0.965517 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
dec6c84afb090da7b4bbc0b6cbaa9b77321d1c38 | 26 | py | Python | predictit_data/__init__.py | jjordanbaird/predictit-data | 53f01172d9e4f39abfa5e9b085ecd1912e46b481 | [
"MIT"
] | null | null | null | predictit_data/__init__.py | jjordanbaird/predictit-data | 53f01172d9e4f39abfa5e9b085ecd1912e46b481 | [
"MIT"
] | null | null | null | predictit_data/__init__.py | jjordanbaird/predictit-data | 53f01172d9e4f39abfa5e9b085ecd1912e46b481 | [
"MIT"
] | null | null | null | from .market import Market | 26 | 26 | 0.846154 | 4 | 26 | 5.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115385 | 26 | 1 | 26 | 26 | 0.956522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
722d66c9e214f67564bf625da5ee3085703ae16f | 205 | py | Python | pytorch_datasets/datasets/__init__.py | mpeven/Pytorch_Datasets | 6a1709bfb59739b5e7ce299c70350b0080209c82 | [
"Apache-2.0"
] | 3 | 2019-01-22T19:19:49.000Z | 2020-12-16T01:29:56.000Z | pytorch_datasets/datasets/__init__.py | mpeven/Pytorch_Datasets | 6a1709bfb59739b5e7ce299c70350b0080209c82 | [
"Apache-2.0"
] | null | null | null | pytorch_datasets/datasets/__init__.py | mpeven/Pytorch_Datasets | 6a1709bfb59739b5e7ce299c70350b0080209c82 | [
"Apache-2.0"
] | 2 | 2019-01-22T19:20:01.000Z | 2020-12-06T05:50:14.000Z | from .epfl import *
from .intuitive_simulated import *
from .jigsaws import *
from .mistic import *
from .object_net_3d import *
from .wcvp import *
from .needlemaster import *
from .needleframes import *
| 22.777778 | 34 | 0.765854 | 27 | 205 | 5.703704 | 0.481481 | 0.454545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00578 | 0.156098 | 205 | 8 | 35 | 25.625 | 0.884393 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7239f5999be3d6e527ee3ee474edb8ab821adb25 | 805 | py | Python | Shark_Training/pyimagesearch/preprocessing/applicationpreprocessor.py | crpurcell/MQ_DPI_Release | 97444513e8b8d48ec91ff8a43b9dfaed0da029f9 | [
"MIT"
] | null | null | null | Shark_Training/pyimagesearch/preprocessing/applicationpreprocessor.py | crpurcell/MQ_DPI_Release | 97444513e8b8d48ec91ff8a43b9dfaed0da029f9 | [
"MIT"
] | null | null | null | Shark_Training/pyimagesearch/preprocessing/applicationpreprocessor.py | crpurcell/MQ_DPI_Release | 97444513e8b8d48ec91ff8a43b9dfaed0da029f9 | [
"MIT"
] | null | null | null | #=============================================================================#
# #
# MODIFIED: 30-Dec-2018 by C. Purcell #
# #
#=============================================================================#
#-----------------------------------------------------------------------------#
class ApplicationPreprocessor:
"""
Wrapper class to allow use of Keras application preprocessor with the
HDF5 generator.
"""
def __init__(self, preprocess_function):
self.preprocess_function = preprocess_function
def preprocess(self, image):
return self.preprocess_function(image)
| 40.25 | 79 | 0.321739 | 41 | 805 | 6.121951 | 0.682927 | 0.286853 | 0.262948 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013283 | 0.345342 | 805 | 19 | 80 | 42.368421 | 0.462998 | 0.680745 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0 | 0.2 | 0.8 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
a0df32a8b3bbd8443be6571399838051c63153aa | 31 | py | Python | jetavator_azure_storage/jetavator_azure_storage/__init__.py | jetavator/jetavator_databricks | 719c934b6391f6f41ca34b4d4df8c697c1a25283 | [
"Apache-2.0"
] | null | null | null | jetavator_azure_storage/jetavator_azure_storage/__init__.py | jetavator/jetavator_databricks | 719c934b6391f6f41ca34b4d4df8c697c1a25283 | [
"Apache-2.0"
] | null | null | null | jetavator_azure_storage/jetavator_azure_storage/__init__.py | jetavator/jetavator_databricks | 719c934b6391f6f41ca34b4d4df8c697c1a25283 | [
"Apache-2.0"
] | null | null | null | from . import config, services
| 15.5 | 30 | 0.774194 | 4 | 31 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16129 | 31 | 1 | 31 | 31 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9d0425f96828d041d7fc155484f37bb07c93561b | 254 | py | Python | tests/test_fib.py | angelus169/dp-fibonacci | 743e6be175b75fe7e8c1232ab400f1f427d68652 | [
"Apache-2.0"
] | null | null | null | tests/test_fib.py | angelus169/dp-fibonacci | 743e6be175b75fe7e8c1232ab400f1f427d68652 | [
"Apache-2.0"
] | null | null | null | tests/test_fib.py | angelus169/dp-fibonacci | 743e6be175b75fe7e8c1232ab400f1f427d68652 | [
"Apache-2.0"
] | null | null | null | from src.fib import fib
def test_fib_1():
assert fib(1) == 1
def test_fib_5():
assert fib(5) == 5
def test_fib_10():
assert fib(10) == 55
def test_fib_20():
assert fib(20) == 6765
def test_fib_30():
assert fib(30) == 832040
| 11.545455 | 28 | 0.606299 | 45 | 254 | 3.2 | 0.333333 | 0.243056 | 0.347222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.157895 | 0.251969 | 254 | 21 | 29 | 12.095238 | 0.6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.454545 | 1 | 0.454545 | true | 0 | 0.090909 | 0 | 0.545455 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
9d1df0f15b6360aa49f5c1974e6bb7312b23d1a8 | 494 | py | Python | delivery/delivery/ext/auth/__init__.py | luxu/curso-flask | 978ba41b41b29d7ffd19ade5bef2765086e1f8f3 | [
"Unlicense"
] | 1 | 2020-06-13T13:26:03.000Z | 2020-06-13T13:26:03.000Z | delivery/delivery/ext/auth/__init__.py | luxu/curso-flask | 978ba41b41b29d7ffd19ade5bef2765086e1f8f3 | [
"Unlicense"
] | null | null | null | delivery/delivery/ext/auth/__init__.py | luxu/curso-flask | 978ba41b41b29d7ffd19ade5bef2765086e1f8f3 | [
"Unlicense"
] | null | null | null | from delivery.ext.auth import models # noqa
from delivery.ext.auth.commands import list_users, add_user
# from delivery.ext.auth.models import User
from delivery.ext.db import db
from delivery.ext.auth.admin import UserAdmin
from delivery.ext.admin import admin
from delivery.ext.auth.models import User
def init_app(app):
"""TODO: inicializar Flask Simple Login + JWT"""
app.cli.command()(list_users)
app.cli.command()(add_user)
admin.add_view(UserAdmin(User, db.session))
| 29.058824 | 59 | 0.765182 | 77 | 494 | 4.831169 | 0.363636 | 0.225806 | 0.282258 | 0.255376 | 0.188172 | 0.188172 | 0.188172 | 0 | 0 | 0 | 0 | 0 | 0.131579 | 494 | 16 | 60 | 30.875 | 0.867133 | 0.182186 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0625 | 0 | 1 | 0.1 | false | 0 | 0.6 | 0 | 0.7 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9d46188b15973316aafb346072f455a1c239d8c9 | 94 | py | Python | tests/unimplemented/argument_named_print.py | mayl8822/onelinerizer | bad341f261d35e56872b4c22297a44dc6d5cfab3 | [
"MIT"
] | 1,062 | 2015-11-18T01:04:33.000Z | 2022-03-29T07:13:30.000Z | tests/unimplemented/argument_named_print.py | CoDeRgAnEsh/1line | 507ef35b0006fc2998463dee92c2fdae53fe0694 | [
"MIT"
] | 26 | 2015-11-17T06:58:07.000Z | 2022-01-15T18:11:16.000Z | tests/unimplemented/argument_named_print.py | CoDeRgAnEsh/1line | 507ef35b0006fc2998463dee92c2fdae53fe0694 | [
"MIT"
] | 100 | 2015-11-17T09:01:22.000Z | 2021-09-12T13:58:28.000Z | from __future__ import print_function
def f(print):
return print
print(f(**{'print': 1}))
| 18.8 | 37 | 0.702128 | 14 | 94 | 4.357143 | 0.642857 | 0.196721 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0125 | 0.148936 | 94 | 4 | 38 | 23.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0.053191 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0.25 | 0.75 | 1 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 6 |
19f350cae3938a5270fe02dbcac1901aaa69e2a4 | 78 | py | Python | app/routes/index.py | riancu27/Food-Coalition-Project | b2fb81d11c706cdb544e326a060d902fd2d7fb76 | [
"MIT"
] | null | null | null | app/routes/index.py | riancu27/Food-Coalition-Project | b2fb81d11c706cdb544e326a060d902fd2d7fb76 | [
"MIT"
] | null | null | null | app/routes/index.py | riancu27/Food-Coalition-Project | b2fb81d11c706cdb544e326a060d902fd2d7fb76 | [
"MIT"
] | 1 | 2021-01-12T02:02:47.000Z | 2021-01-12T02:02:47.000Z | from . import routes
@routes.route('/')
def index():
return 'Hello World' | 15.6 | 24 | 0.653846 | 10 | 78 | 5.1 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.179487 | 78 | 5 | 24 | 15.6 | 0.796875 | 0 | 0 | 0 | 0 | 0 | 0.151899 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0.25 | 0.25 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |