hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
83189393482fe15dfcca69ffb0ad7e709ae83a34 | 1,682 | py | Python | test/test_autoorder_api.py | gstingy/uc_python_api | 9a0bd3f6e63f616586681518e44fe37c6bae2bba | [
"Apache-2.0"
] | null | null | null | test/test_autoorder_api.py | gstingy/uc_python_api | 9a0bd3f6e63f616586681518e44fe37c6bae2bba | [
"Apache-2.0"
] | null | null | null | test/test_autoorder_api.py | gstingy/uc_python_api | 9a0bd3f6e63f616586681518e44fe37c6bae2bba | [
"Apache-2.0"
] | null | null | null | # coding: utf-8
"""
UltraCart Rest API V2
This is the next generation UltraCart REST API...
OpenAPI spec version: 2.0.0
Contact: support@ultracart.com
Generated by: https://github.com/swagger-api/swagger-codegen.git
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from __future__ import absolute_import
import os
import sys
import unittest
import ultracart
from ultracart.rest import ApiException
from ultracart.apis.autoorder_api import AutoorderApi
class TestAutoorderApi(unittest.TestCase):
    """AutoorderApi unit test stubs"""

    def setUp(self):
        self.api = ultracart.apis.autoorder_api.AutoorderApi()

    def tearDown(self):
        pass

    def test_get_auto_order(self):
        """
        Test case for get_auto_order

        Retrieve an auto order
        """
        pass

    def test_get_auto_orders(self):
        """
        Test case for get_auto_orders

        Retrieve auto orders
        """
        pass

    def test_update_auto_order(self):
        """
        Test case for update_auto_order

        Update an auto order
        """
        pass


if __name__ == '__main__':
    unittest.main()
| 23.361111 | 76 | 0.678359 | 223 | 1,682 | 4.982063 | 0.497758 | 0.054005 | 0.029703 | 0.040504 | 0.09811 | 0.069307 | 0 | 0 | 0 | 0 | 0 | 0.007177 | 0.254459 | 1,682 | 71 | 77 | 23.690141 | 0.878788 | 0.550535 | 0 | 0.2 | 1 | 0 | 0.013491 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0.2 | 0.35 | 0 | 0.65 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 5 |
8350055065ec1e576e684cdf927102f5b4a7fc86 | 36 | py | Python | Exeplore/visits/views.py | Pierre-siddall/exeplore | 2a27f2ec6bf763efbb9748b1bc9b3bbe23030eec | [
"MIT"
] | null | null | null | Exeplore/visits/views.py | Pierre-siddall/exeplore | 2a27f2ec6bf763efbb9748b1bc9b3bbe23030eec | [
"MIT"
] | 3 | 2022-03-17T13:05:58.000Z | 2022-03-19T21:55:21.000Z | Exeplore/visits/views.py | Pierre-siddall/exeplore | 2a27f2ec6bf763efbb9748b1bc9b3bbe23030eec | [
"MIT"
] | null | null | null | #if more views are needed, add here
| 18 | 35 | 0.75 | 7 | 36 | 3.857143 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.194444 | 36 | 1 | 36 | 36 | 0.931034 | 0.944444 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
55d95d240fad070d2f2b6c1422314ac8dcd4bbec | 6,779 | py | Python | pyzoo/test/zoo/orca/learn/spark/test_estimator_for_spark.py | Wesley-Du/analytics-zoo | e4ca11b219a43bceec99aba39cf30c8aa368e8b3 | [
"Apache-2.0"
] | null | null | null | pyzoo/test/zoo/orca/learn/spark/test_estimator_for_spark.py | Wesley-Du/analytics-zoo | e4ca11b219a43bceec99aba39cf30c8aa368e8b3 | [
"Apache-2.0"
] | null | null | null | pyzoo/test/zoo/orca/learn/spark/test_estimator_for_spark.py | Wesley-Du/analytics-zoo | e4ca11b219a43bceec99aba39cf30c8aa368e8b3 | [
"Apache-2.0"
] | 1 | 2021-01-29T08:04:43.000Z | 2021-01-29T08:04:43.000Z | #
# Copyright 2018 Analytics Zoo Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import os
import tensorflow as tf
from zoo.orca.data.tf.data import Dataset
from zoo.orca.learn.tf.estimator import Estimator
import zoo.orca.data.pandas
resource_path = os.path.join(os.path.split(__file__)[0], "../../../resources")
class SimpleModel(object):

    def __init__(self):
        self.user = tf.placeholder(dtype=tf.int32, shape=(None,))
        self.item = tf.placeholder(dtype=tf.int32, shape=(None,))
        self.label = tf.placeholder(dtype=tf.int32, shape=(None,))

        feat = tf.stack([self.user, self.item], axis=1)
        self.logits = tf.layers.dense(tf.to_float(feat), 2)
        self.loss = tf.reduce_mean(tf.losses.sparse_softmax_cross_entropy(logits=self.logits,
                                                                          labels=self.label))


def test_estimator_graph(estimator_for_spark_fixture):
    import zoo.orca.data.pandas

    sc = estimator_for_spark_fixture
    tf.reset_default_graph()
    model = SimpleModel()

    file_path = os.path.join(resource_path, "orca/learn/ncf.csv")
    data_shard = zoo.orca.data.pandas.read_csv(file_path, sc)

    def transform(df):
        result = {
            "x": (df['user'].to_numpy(), df['item'].to_numpy()),
            "y": df['label'].to_numpy()
        }
        return result

    data_shard = data_shard.transform_shard(transform)

    est = Estimator.from_graph(
        inputs=[model.user, model.item],
        labels=[model.label],
        outputs=[model.logits],
        loss=model.loss,
        optimizer=tf.train.AdamOptimizer(),
        metrics={"loss": model.loss})
    est.fit(data=data_shard,
            batch_size=8,
            steps=10,
            validation_data=data_shard)

    data_shard = zoo.orca.data.pandas.read_csv(file_path, sc)

    def transform(df):
        result = {
            "x": (df['user'].to_numpy(), df['item'].to_numpy()),
        }
        return result

    data_shard = data_shard.transform_shard(transform)
    predictions = est.predict(data_shard).collect()
    print(predictions)


def test_estimator_graph_fit(estimator_for_spark_fixture):
    import zoo.orca.data.pandas

    tf.reset_default_graph()
    model = SimpleModel()
    sc = estimator_for_spark_fixture

    file_path = os.path.join(resource_path, "orca/learn/ncf.csv")
    data_shard = zoo.orca.data.pandas.read_csv(file_path, sc)

    def transform(df):
        result = {
            "x": (df['user'].to_numpy(), df['item'].to_numpy()),
            "y": df['label'].to_numpy()
        }
        return result

    data_shard = data_shard.transform_shard(transform)

    est = Estimator.from_graph(
        inputs=[model.user, model.item],
        labels=[model.label],
        loss=model.loss,
        optimizer=tf.train.AdamOptimizer(),
        metrics={"loss": model.loss})
    est.fit(data=data_shard,
            batch_size=8,
            steps=10,
            validation_data=data_shard)


def test_estimator_graph_evaluate(estimator_for_spark_fixture):
    import zoo.orca.data.pandas

    tf.reset_default_graph()
    model = SimpleModel()
    sc = estimator_for_spark_fixture

    file_path = os.path.join(resource_path, "orca/learn/ncf.csv")
    data_shard = zoo.orca.data.pandas.read_csv(file_path, sc)

    def transform(df):
        result = {
            "x": (df['user'].to_numpy(), df['item'].to_numpy()),
            "y": df['label'].to_numpy()
        }
        return result

    data_shard = data_shard.transform_shard(transform)

    est = Estimator.from_graph(
        inputs=[model.user, model.item],
        labels=[model.label],
        loss=model.loss,
        optimizer=tf.train.AdamOptimizer(),
        metrics={"loss": model.loss})
    result = est.evaluate(data_shard)
    assert "loss" in result
    print(result)


def test_estimator_graph_predict(estimator_for_spark_fixture):
    import zoo.orca.data.pandas

    tf.reset_default_graph()
    sc = estimator_for_spark_fixture
    model = SimpleModel()

    file_path = os.path.join(resource_path, "orca/learn/ncf.csv")
    data_shard = zoo.orca.data.pandas.read_csv(file_path, sc)

    est = Estimator.from_graph(
        inputs=[model.user, model.item],
        outputs=[model.logits])

    def transform(df):
        result = {
            "x": (df['user'].to_numpy(), df['item'].to_numpy()),
        }
        return result

    data_shard = data_shard.transform_shard(transform)
    predictions = est.predict(data_shard).collect()
    print(predictions)


def test_estimator_graph_fit_dataset(estimator_for_spark_fixture):
    import zoo.orca.data.pandas

    tf.reset_default_graph()
    model = SimpleModel()
    sc = estimator_for_spark_fixture

    file_path = os.path.join(resource_path, "orca/learn/ncf.csv")
    data_shard = zoo.orca.data.pandas.read_csv(file_path, sc)

    def transform(df):
        result = {
            "x": (df['user'].to_numpy(), df['item'].to_numpy()),
            "y": df['label'].to_numpy()
        }
        return result

    data_shard = data_shard.transform_shard(transform)
    dataset = Dataset.from_tensor_slices(data_shard)

    est = Estimator.from_graph(
        inputs=[model.user, model.item],
        labels=[model.label],
        loss=model.loss,
        optimizer=tf.train.AdamOptimizer(),
        metrics={"loss": model.loss})
    est.fit(data=dataset,
            batch_size=8,
            steps=10,
            validation_data=dataset)


def test_estimator_graph_predict_dataset(estimator_for_spark_fixture):
    sc = estimator_for_spark_fixture
    tf.reset_default_graph()
    model = SimpleModel()

    file_path = os.path.join(resource_path, "orca/learn/ncf.csv")
    data_shard = zoo.orca.data.pandas.read_csv(file_path, sc)

    est = Estimator.from_graph(
        inputs=[model.user, model.item],
        outputs=[model.logits])

    def transform(df):
        result = {
            "x": (df['user'].to_numpy(), df['item'].to_numpy()),
        }
        return result

    data_shard = data_shard.transform_shard(transform)
    dataset = Dataset.from_tensor_slices(data_shard)
    predictions = est.predict(dataset).collect()
    print(predictions)


if __name__ == "__main__":
    import pytest
    pytest.main([__file__])
| 29.863436 | 93 | 0.646703 | 876 | 6,779 | 4.787671 | 0.178082 | 0.064378 | 0.036719 | 0.052694 | 0.755365 | 0.721268 | 0.721268 | 0.70577 | 0.687649 | 0.676443 | 0 | 0.004967 | 0.227762 | 6,779 | 226 | 94 | 29.995575 | 0.79618 | 0.082461 | 0 | 0.76875 | 0 | 0 | 0.038846 | 0 | 0 | 0 | 0 | 0 | 0.00625 | 1 | 0.0875 | false | 0 | 0.06875 | 0 | 0.20625 | 0.025 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
55e32b68203de9300a9b57bec88d6119cbdcbea3 | 436 | py | Python | user43_bQcIDCqug0_0.py | KuanZhasulan/Python-Games | b26f12cc5f052844c056a3922be3371acd114bc5 | [
"Apache-2.0"
] | 8 | 2018-10-01T17:35:57.000Z | 2022-02-01T08:12:12.000Z | user43_bQcIDCqug0_0.py | KuanZhasulan/Python-Games | b26f12cc5f052844c056a3922be3371acd114bc5 | [
"Apache-2.0"
] | null | null | null | user43_bQcIDCqug0_0.py | KuanZhasulan/Python-Games | b26f12cc5f052844c056a3922be3371acd114bc5 | [
"Apache-2.0"
] | 6 | 2018-07-22T19:15:21.000Z | 2022-02-05T07:54:58.000Z | def greet(friend, money):
    if friend and (money > 20):
        print "Hi!"
        money = money - 20
    elif friend:
        print "Hello"
    else:
        print "Ha ha"
        money = money + 10
    return money
money = 15
money = greet(True, money)
print "Money:", money
print ""
money = greet(False, money)
print "Money:", money
print ""
money = greet(True, money)
print "Money:", money
print ""
| 16.769231 | 32 | 0.543578 | 53 | 436 | 4.471698 | 0.339623 | 0.253165 | 0.316456 | 0.253165 | 0.476793 | 0.476793 | 0.476793 | 0.329114 | 0 | 0 | 0 | 0.027682 | 0.337156 | 436 | 25 | 33 | 17.44 | 0.792388 | 0 | 0 | 0.4 | 0 | 0 | 0.075426 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.45 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
55e7030da087de98cacc3d680046279c0b6fc0b3 | 5,561 | py | Python | models.py | msamogh/schema_attention_model | 01bf62625032a317f75f0d17f3e43f07e19ebaa9 | [
"MIT"
] | null | null | null | models.py | msamogh/schema_attention_model | 01bf62625032a317f75f0d17f3e43f07e19ebaa9 | [
"MIT"
] | null | null | null | models.py | msamogh/schema_attention_model | 01bf62625032a317f75f0d17f3e43f07e19ebaa9 | [
"MIT"
] | null | null | null | import numpy as np
import torch
import torch.nn.functional as F
from collections import defaultdict
from torch import nn
from torch.nn import CrossEntropyLoss, NLLLoss
from torch.nn import Dropout
from transformers import BertConfig, BertModel, BertForMaskedLM
from typing import Any
class ActionBertModel(torch.nn.Module):
    def __init__(self,
                 model_name_or_path,
                 dropout,
                 num_action_labels):
        super(ActionBertModel, self).__init__()
        self.bert_model = BertModel.from_pretrained(model_name_or_path)
        self.dropout = Dropout(dropout)
        self.num_action_labels = num_action_labels
        self.action_classifier = nn.Linear(self.bert_model.config.hidden_size, num_action_labels)

    def forward(self,
                input_ids,
                attention_mask,
                token_type_ids,
                action_label=None):
        pooled_output = self.bert_model(input_ids=input_ids,
                                        attention_mask=attention_mask,
                                        token_type_ids=token_type_ids,
                                        return_dict=False)[1]
        action_logits = self.action_classifier(self.dropout(pooled_output))

        # Compute losses if labels provided
        if action_label is not None:
            loss_fct = CrossEntropyLoss()
            loss = loss_fct(action_logits.view(-1, self.num_action_labels), action_label.type(torch.long))
        else:
            loss = torch.tensor(0)

        return action_logits, loss


class SchemaActionBertModel(torch.nn.Module):
    def __init__(self,
                 model_name_or_path,
                 dropout,
                 num_action_labels):
        super(SchemaActionBertModel, self).__init__()
        self.bert_model = BertModel.from_pretrained(model_name_or_path)
        self.dropout = Dropout(dropout)
        self.num_action_labels = num_action_labels
        self.action_classifier = nn.Linear(self.bert_model.config.hidden_size, num_action_labels)
        self.p_schema = nn.Linear(self.bert_model.config.hidden_size, 1)

    def forward(self,
                input_ids,
                attention_mask,
                token_type_ids,
                tasks,
                action_label,
                sc_input_ids,
                sc_attention_mask,
                sc_token_type_ids,
                sc_tasks,
                sc_action_label):
        all_output, pooled_output = self.bert_model(input_ids=input_ids,
                                                    attention_mask=attention_mask,
                                                    token_type_ids=token_type_ids,
                                                    return_dict=False)
        print(f"pooled_output: {pooled_output}")
        action_logits = self.action_classifier(self.dropout(pooled_output))

        sc_all_output, sc_pooled_output = self.bert_model(input_ids=sc_input_ids,
                                                          attention_mask=sc_attention_mask,
                                                          token_type_ids=sc_token_type_ids,
                                                          return_dict=False)

        all_output_flat = all_output.view(-1, all_output.size(-1))
        i_probs = F.softmax(all_output_flat.mm(sc_all_output.view(-1, 768).t()), dim=-1).view(all_output_flat.size(0), -1, sc_input_ids.size(-1)).sum(dim=-1)
        probs = i_probs.view(input_ids.size(0), -1, i_probs.size(-1)).mean(dim=1)
        action_probs = torch.zeros(probs.size(0), self.num_action_labels).cuda().scatter_add(-1, sc_action_label.unsqueeze(0).repeat(probs.size(0), 1), probs)
        sc_prob = F.sigmoid(self.p_schema(pooled_output))
        action_lps = torch.log(action_probs + 1e-10)

        # Compute losses if labels provided
        if action_label is not None:
            loss_fct = NLLLoss()
            loss = loss_fct(action_lps.view(-1, self.num_action_labels), action_label.type(torch.long))
        else:
            loss = torch.tensor(0)

        return action_lps, loss

    def predict(self,
                input_ids,
                attention_mask,
                token_type_ids,
                tasks,
                sc_all_output,
                sc_pooled_output,
                sc_tasks,
                sc_action_label):
        all_output, pooled_output = self.bert_model(input_ids=input_ids,
                                                    attention_mask=attention_mask,
                                                    token_type_ids=token_type_ids,
                                                    return_dict=False)
        action_logits = self.action_classifier(self.dropout(pooled_output))

        all_output_flat = all_output.view(-1, all_output.size(-1))
        i_probs = F.softmax(all_output_flat.mm(sc_all_output.view(-1, 768).t()), dim=-1).view(all_output_flat.size(0), -1, sc_all_output.size(-2)).sum(dim=-1)
        probs = i_probs.view(input_ids.size(0), -1, i_probs.size(-1)).mean(dim=1)

        # Zero out any attention across different tasks
        for i in range(probs.size(0)):
            for j in range(probs.size(1)):
                if tasks[i] != sc_tasks[j]:
                    probs[i, j] = 0

        action_probs = torch.zeros(probs.size(0), self.num_action_labels).cuda().scatter_add(-1, sc_action_label.unsqueeze(0).repeat(probs.size(0), 1), probs)
        sc_prob = F.sigmoid(self.p_schema(pooled_output))
        action_lps = torch.log(action_probs * sc_prob)
        return action_lps, 0
| 40.591241 | 158 | 0.586225 | 676 | 5,561 | 4.491124 | 0.161243 | 0.050395 | 0.059289 | 0.048419 | 0.766798 | 0.758564 | 0.737154 | 0.726285 | 0.714098 | 0.644269 | 0 | 0.01416 | 0.32692 | 5,561 | 136 | 159 | 40.889706 | 0.796954 | 0.02032 | 0 | 0.586538 | 0 | 0 | 0.005511 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.048077 | false | 0 | 0.086538 | 0 | 0.182692 | 0.009615 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
55f267a94d6dc79983105388eb92c3794f2edb71 | 278 | py | Python | pythonlibs/thread.py | gkjohnson/python-pint-in-javascript | 56842b9fd7901de92bc60222ef1e92bf6693e7da | [
"MIT"
] | null | null | null | pythonlibs/thread.py | gkjohnson/python-pint-in-javascript | 56842b9fd7901de92bc60222ef1e92bf6693e7da | [
"MIT"
] | null | null | null | pythonlibs/thread.py | gkjohnson/python-pint-in-javascript | 56842b9fd7901de92bc60222ef1e92bf6693e7da | [
"MIT"
] | null | null | null | def get_ident():
    return 0


def currentThread():
    return 0


class RLock:
    def acquire(val = 1):
        return

    def release(val = 1):
        return

    def __enter__(val = 1):
        return

    def __exit__(val = 1, val1 = 2, val2 = 3, val3 = 4):
        return | 15.444444 | 56 | 0.539568 | 37 | 278 | 3.810811 | 0.540541 | 0.113475 | 0.212766 | 0.276596 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.067039 | 0.356115 | 278 | 18 | 57 | 15.444444 | 0.72067 | 0 | 0 | 0.461538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.461538 | false | 0 | 0 | 0.461538 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5
55fb5c4b3071a45fa2f2a7264a74a6c6031e1dd8 | 119 | py | Python | ipynb/config.py | sdbzs/landsat578-water | 50fcbdb38566741faa18a368595d2553e2db4c45 | [
"MIT"
] | null | null | null | ipynb/config.py | sdbzs/landsat578-water | 50fcbdb38566741faa18a368595d2553e2db4c45 | [
"MIT"
] | null | null | null | ipynb/config.py | sdbzs/landsat578-water | 50fcbdb38566741faa18a368595d2553e2db4c45 | [
"MIT"
] | null | null | null | # root = '/content/drive/My Drive/landsat578_water'
root = '/Users/luo/OneDrive/Open-source-project/landsat578-water'
| 29.75 | 65 | 0.764706 | 16 | 119 | 5.625 | 0.75 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.054545 | 0.07563 | 119 | 3 | 66 | 39.666667 | 0.763636 | 0.411765 | 0 | 0 | 0 | 0 | 0.835821 | 0.835821 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
361259d78cc5882ca89e697dd3ad69caf0d1c56b | 61 | py | Python | tests/formatters/test_utils.py | byaka/sublime_docblockr_python | bcfdb58edc9ed26d0d8fec38bf1d2649647f4a4a | [
"MIT"
] | 61 | 2015-12-21T11:58:40.000Z | 2021-07-09T03:45:15.000Z | tests/formatters/test_utils.py | byaka/sublime_docblockr_python | bcfdb58edc9ed26d0d8fec38bf1d2649647f4a4a | [
"MIT"
] | 28 | 2015-12-15T08:50:59.000Z | 2021-07-14T10:59:34.000Z | tests/formatters/test_utils.py | byaka/sublime_docblockr_python | bcfdb58edc9ed26d0d8fec38bf1d2649647f4a4a | [
"MIT"
] | 15 | 2016-01-19T14:22:39.000Z | 2021-08-25T15:11:46.000Z | def test_exists(formatter_utils):
    assert formatter_utils
| 20.333333 | 33 | 0.819672 | 8 | 61 | 5.875 | 0.75 | 0.595745 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.131148 | 61 | 2 | 34 | 30.5 | 0.886792 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0.5 | false | 0 | 0 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
36274454ba0ad6a7ff5fe3ea901e3975cf1a04dd | 150 | py | Python | videgrenier/admin.py | caracole-io/videgrenier | 70e96d918c9bef9ddb05e2c372e3082e58d04bb8 | [
"BSD-2-Clause"
] | null | null | null | videgrenier/admin.py | caracole-io/videgrenier | 70e96d918c9bef9ddb05e2c372e3082e58d04bb8 | [
"BSD-2-Clause"
] | 16 | 2018-03-24T20:55:07.000Z | 2021-07-20T18:25:50.000Z | videgrenier/admin.py | caracole-io/videgrenier | 70e96d918c9bef9ddb05e2c372e3082e58d04bb8 | [
"BSD-2-Clause"
] | null | null | null | """Add Vide Grenier models to admin interface."""
from django.contrib import admin
from .models import Reservation
admin.site.register(Reservation)
| 21.428571 | 49 | 0.793333 | 20 | 150 | 5.95 | 0.7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12 | 150 | 6 | 50 | 25 | 0.901515 | 0.286667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
362d3399da50efa6880ffc54d389a4613a59cf4c | 168 | py | Python | osp-stibbons-master/contact/admin.py | deepakavattotte2191/FISBO-Real-esate--website | 08390a69013e78673607d4243a4f9f2d91531905 | [
"Apache-2.0"
] | null | null | null | osp-stibbons-master/contact/admin.py | deepakavattotte2191/FISBO-Real-esate--website | 08390a69013e78673607d4243a4f9f2d91531905 | [
"Apache-2.0"
] | null | null | null | osp-stibbons-master/contact/admin.py | deepakavattotte2191/FISBO-Real-esate--website | 08390a69013e78673607d4243a4f9f2d91531905 | [
"Apache-2.0"
] | null | null | null | from django.contrib import admin
from .models import Question, QuestionTag
# Register your models here.
admin.site.register(Question)
admin.site.register(QuestionTag) | 24 | 41 | 0.821429 | 22 | 168 | 6.272727 | 0.545455 | 0.130435 | 0.246377 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.10119 | 168 | 7 | 42 | 24 | 0.913907 | 0.154762 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
365e3e65607e242afbbbfb929055fdbe4574e29c | 180 | py | Python | app/main/exceptions.py | pullao/Farspeaker | 998a037d537f04e5297191f11c0bcd269b76ca31 | [
"MIT"
] | null | null | null | app/main/exceptions.py | pullao/Farspeaker | 998a037d537f04e5297191f11c0bcd269b76ca31 | [
"MIT"
] | 9 | 2016-10-17T06:28:28.000Z | 2016-12-09T02:29:19.000Z | app/main/exceptions.py | pullao/Farspeaker | 998a037d537f04e5297191f11c0bcd269b76ca31 | [
"MIT"
] | null | null | null | class Error(Exception):
"""Base class for exceptions in this module."""
pass
class DiceRollError(Error):
"""Exception raised for errors in the dice rool input.""" | 30 | 61 | 0.677778 | 23 | 180 | 5.304348 | 0.73913 | 0.229508 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.211111 | 180 | 6 | 61 | 30 | 0.859155 | 0.516667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
36a94b14ecb8d76b44c2e0c9d5b97c5cf53326ef | 208 | py | Python | src/gsf/core/types/__init__.py | RolandoAndrade/general-simulation-framework | 2fe2a981d365a7f482f6a7d4797a5f711b2dd502 | [
"MIT"
] | 1 | 2021-06-02T12:37:56.000Z | 2021-06-02T12:37:56.000Z | src/gsf/core/types/__init__.py | RolandoAndrade/general-simulation-framework | 2fe2a981d365a7f482f6a7d4797a5f711b2dd502 | [
"MIT"
] | null | null | null | src/gsf/core/types/__init__.py | RolandoAndrade/general-simulation-framework | 2fe2a981d365a7f482f6a7d4797a5f711b2dd502 | [
"MIT"
] | null | null | null | """Types module
=============================
This module contains the definitions of types and aliases used in the framework.
"""
from .dynamic_system_input import DynamicSystemInput
from .time import Time
| 26 | 80 | 0.692308 | 25 | 208 | 5.68 | 0.76 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 208 | 7 | 81 | 29.714286 | 0.78022 | 0.591346 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
36b1df20e27c07f1d9c07d4f4dcc37cbafc45af5 | 340 | py | Python | GITcourse/c/c2/reward.py | CristianTeodorNita/GITcourse | 0aa418b5f8700e243bff61ad030350a39a31568c | [
"MIT"
] | null | null | null | GITcourse/c/c2/reward.py | CristianTeodorNita/GITcourse | 0aa418b5f8700e243bff61ad030350a39a31568c | [
"MIT"
] | null | null | null | GITcourse/c/c2/reward.py | CristianTeodorNita/GITcourse | 0aa418b5f8700e243bff61ad030350a39a31568c | [
"MIT"
] | null | null | null | def get_reward(times):
    if times <= 10:
        print("Congratulations, you won a car!")
    elif 10 < times <= 25:
        print("Congratulations, you won a trip!")
    elif 25 < times <= 50:
        print("Congratulations, you've won a consolation prize!")
    else:
        print("Unfortunately, maybe you will be lucky next time...") | 37.777778 | 68 | 0.611765 | 45 | 340 | 4.6 | 0.6 | 0.289855 | 0.333333 | 0.251208 | 0.26087 | 0 | 0 | 0 | 0 | 0 | 0 | 0.04 | 0.264706 | 340 | 9 | 68 | 37.777778 | 0.788 | 0 | 0 | 0 | 0 | 0 | 0.475073 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0 | 0 | 0.111111 | 0.444444 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 5
7fcdaab15e45a6ac73cf453719a3e40a50bddfa5 | 193 | py | Python | app/utils.py | zeshuaro/covid-19-dashboard-api | 486243171dffe2eb68dfe13d1ff3fe728ca84482 | [
"MIT"
] | 2 | 2020-04-29T04:03:45.000Z | 2021-02-14T23:24:15.000Z | app/utils.py | zeshuaro/covid-19-dashboard-api | 486243171dffe2eb68dfe13d1ff3fe728ca84482 | [
"MIT"
] | null | null | null | app/utils.py | zeshuaro/covid-19-dashboard-api | 486243171dffe2eb68dfe13d1ff3fe728ca84482 | [
"MIT"
] | null | null | null | import datetime as dt
from app import const
def parse_date(date):
    return dt.datetime.strptime(date, const.API_DATE_FMT)


def format_date(date):
    return date.strftime(const.DATE_FMT)
| 16.083333 | 57 | 0.761658 | 31 | 193 | 4.580645 | 0.516129 | 0.112676 | 0.197183 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15544 | 193 | 11 | 58 | 17.545455 | 0.871166 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 5 |
from django.shortcuts import render, get_object_or_404
from django.core.paginator import Paginator, EmptyPage, PageNotAnInteger
from django.http import HttpResponse, HttpResponseRedirect
from django.db.models import Q

from .models import Car, Order, PrivateMsg
from .forms import CarForm, OrderForm, MessageForm


def home(request):
    context = {
        "title": "Car Rental"
    }
    return render(request, 'home.html', context)
def car_list(request):
    car = Car.objects.all()

    query = request.GET.get('q')
    if query:
        car = car.filter(
            Q(car_name__icontains=query) |
            Q(company_name__icontains=query) |
            Q(num_of_seats__icontains=query) |
            Q(cost_par_day__icontains=query)
        )

    # pagination
    paginator = Paginator(car, 12)  # Show 12 cars per page
    page = request.GET.get('page')
    try:
        car = paginator.page(page)
    except PageNotAnInteger:
        # If page is not an integer, deliver first page.
        car = paginator.page(1)
    except EmptyPage:
        # If page is out of range (e.g. 9999), deliver last page of results.
        car = paginator.page(paginator.num_pages)

    context = {
        'car': car,
    }
    return render(request, 'car_list.html', context)
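The same search-then-paginate block recurs in several views below. Outside Django, the clamping behaviour of the two `except` branches can be sketched in plain Python (a standalone illustration of the pattern, not part of the app):

```python
def safe_page(items, per_page, page):
    """Clamp `page` to a valid page number and return that slice.

    Mirrors the PageNotAnInteger / EmptyPage handling used in the views:
    a non-integer page falls back to page 1, an out-of-range page is
    clamped to the last page.
    """
    num_pages = max(1, -(-len(items) // per_page))  # ceiling division
    try:
        page = int(page)
    except (TypeError, ValueError):
        page = 1  # non-integer page -> first page
    page = min(max(page, 1), num_pages)  # out of range -> clamp
    start = (page - 1) * per_page
    return items[start:start + per_page]

print(safe_page(list(range(30)), 12, "abc"))  # first page: items 0..11
print(safe_page(list(range(30)), 12, 9999))  # last page: items 24..29
```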
def car_detail(request, id=None):
    detail = get_object_or_404(Car, id=id)
    context = {
        "detail": detail
    }
    return render(request, 'car_detail.html', context)


def car_created(request):
    form = CarForm(request.POST or None, request.FILES or None)
    if form.is_valid():
        instance = form.save(commit=False)
        instance.save()
        return HttpResponseRedirect("/")
    context = {
        "form": form,
        "title": "Create Car"
    }
    return render(request, 'car_create.html', context)
def car_update(request, id=None):
    detail = get_object_or_404(Car, id=id)
    form = CarForm(request.POST or None, instance=detail)
    if form.is_valid():
        instance = form.save(commit=False)
        instance.save()
        return HttpResponseRedirect(instance.get_absolute_url())
    context = {
        "form": form,
        "title": "Update Car"
    }
    return render(request, 'car_create.html', context)


def car_delete(request, id=None):
    query = get_object_or_404(Car, id=id)
    query.delete()
    car = Car.objects.all()
    context = {
        'car': car,
    }
    return render(request, 'admin_index.html', context)
# order
def order_list(request):
    order = Order.objects.all()

    query = request.GET.get('q')
    if query:
        order = order.filter(
            Q(movie_name__icontains=query) |
            Q(employee_name__icontains=query)
        )

    # pagination
    paginator = Paginator(order, 4)  # Show 4 orders per page
    page = request.GET.get('page')
    try:
        order = paginator.page(page)
    except PageNotAnInteger:
        # If page is not an integer, deliver first page.
        order = paginator.page(1)
    except EmptyPage:
        # If page is out of range (e.g. 9999), deliver last page of results.
        order = paginator.page(paginator.num_pages)

    context = {
        'order': order,
    }
    return render(request, 'order_list.html', context)


def order_detail(request, id=None):
    detail = get_object_or_404(Order, id=id)
    context = {
        "detail": detail,
    }
    return render(request, 'order_detail.html', context)
def order_created(request):
    form = OrderForm(request.POST or None)
    if form.is_valid():
        instance = form.save(commit=False)
        instance.save()
        return HttpResponseRedirect(instance.get_absolute_url())
    context = {
        "form": form,
        "title": "Create Order"
    }
    return render(request, 'order_create.html', context)


def order_update(request, id=None):
    detail = get_object_or_404(Order, id=id)
    form = OrderForm(request.POST or None, instance=detail)
    if form.is_valid():
        instance = form.save(commit=False)
        instance.save()
        return HttpResponseRedirect(instance.get_absolute_url())
    context = {
        "form": form,
        "title": "Update Order"
    }
    return render(request, 'order_create.html', context)


def order_delete(request, id=None):
    query = get_object_or_404(Order, id=id)
    query.delete()
    return HttpResponseRedirect("/listOrder/")
def newcar(request):
    new = Car.objects.order_by('-id')

    # search
    query = request.GET.get('q')
    if query:
        new = new.filter(
            Q(car_name__icontains=query) |
            Q(company_name__icontains=query) |
            Q(num_of_seats__icontains=query) |
            Q(cost_par_day__icontains=query)
        )

    # pagination
    paginator = Paginator(new, 12)  # Show 12 cars per page
    page = request.GET.get('page')
    try:
        new = paginator.page(page)
    except PageNotAnInteger:
        # If page is not an integer, deliver first page.
        new = paginator.page(1)
    except EmptyPage:
        # If page is out of range (e.g. 9999), deliver last page of results.
        new = paginator.page(paginator.num_pages)

    context = {
        'car': new,
    }
    return render(request, 'new_car.html', context)


def like_update(request, id=None):
    new = Car.objects.order_by('-id')
    like_count = get_object_or_404(Car, id=id)
    like_count.like += 1
    like_count.save()
    context = {
        'car': new,
    }
    return render(request, 'new_car.html', context)
def popular_car(request):
    new = Car.objects.order_by('-like')

    # search
    query = request.GET.get('q')
    if query:
        new = new.filter(
            Q(car_name__icontains=query) |
            Q(company_name__icontains=query) |
            Q(num_of_seats__icontains=query) |
            Q(cost_par_day__icontains=query)
        )

    # pagination
    paginator = Paginator(new, 12)  # Show 12 cars per page
    page = request.GET.get('page')
    try:
        new = paginator.page(page)
    except PageNotAnInteger:
        # If page is not an integer, deliver first page.
        new = paginator.page(1)
    except EmptyPage:
        # If page is out of range (e.g. 9999), deliver last page of results.
        new = paginator.page(paginator.num_pages)

    context = {
        'car': new,
    }
    return render(request, 'new_car.html', context)


def contact(request):
    form = MessageForm(request.POST or None)
    if form.is_valid():
        instance = form.save(commit=False)
        instance.save()
        return HttpResponseRedirect("/car/newcar/")
    context = {
        "form": form,
        "title": "Contact With Us",
    }
    return render(request, 'contact.html', context)
# -----------------Admin Section-----------------
def admin_car_list(request):
    car = Car.objects.order_by('-id')

    query = request.GET.get('q')
    if query:
        car = car.filter(
            Q(car_name__icontains=query) |
            Q(company_name__icontains=query) |
            Q(num_of_seats__icontains=query) |
            Q(cost_par_day__icontains=query)
        )

    # pagination
    paginator = Paginator(car, 12)  # Show 12 cars per page
    page = request.GET.get('page')
    try:
        car = paginator.page(page)
    except PageNotAnInteger:
        # If page is not an integer, deliver first page.
        car = paginator.page(1)
    except EmptyPage:
        # If page is out of range (e.g. 9999), deliver last page of results.
        car = paginator.page(paginator.num_pages)

    context = {
        'car': car,
    }
    return render(request, 'admin_index.html', context)


def admin_msg(request):
    msg = PrivateMsg.objects.order_by('-id')
    context = {
        "car": msg,
    }
    return render(request, 'admin_msg.html', context)


def msg_delete(request, id=None):
    query = get_object_or_404(PrivateMsg, id=id)
    query.delete()
    return HttpResponseRedirect("/message/")
import numpy as np

from ..base import SingleAnnotStreamBasedQueryStrategy
from .budget_manager import FixedThresholdBudget
from ..utils import call_func
class RandomSampler(SingleAnnotStreamBasedQueryStrategy):
    """The RandomSampler samples instances completely randomly. The
    probability to sample an instance depends on the budget specified in the
    budget_manager. Given a budget of 10%, the utility exceeds 0.9 (1-0.1)
    with a probability of 10%. Instances are queried regardless of their
    position in the feature space. As this query strategy disregards any
    information about the instance, it should only be used as a baseline
    strategy.

    Parameters
    ----------
    budget_manager : BudgetManager
        The BudgetManager which models the budgeting constraint used in
        the stream-based active learning setting. The budget attribute set
        for the budget_manager will be used to determine the probability to
        sample instances.
    random_state : int, RandomState instance, default=None
        Controls the randomness of the estimator.
    """

    def __init__(self, budget_manager=FixedThresholdBudget(),
                 random_state=None):
        super().__init__(
            budget_manager=budget_manager, random_state=random_state
        )
    def query(self, X_cand, return_utilities=False):
        """Ask the query strategy which instances in X_cand to acquire.

        Please note that, when the decisions from this function may differ
        from the final sampling, simulate=True can be set, so that the query
        strategy can be updated later with update(...) with the final
        sampling. This is especially helpful when developing wrapper query
        strategies.

        Parameters
        ----------
        X_cand : {array-like, sparse matrix} of shape (n_samples, n_features)
            The instances which may be queried. Sparse matrices are accepted
            only if they are supported by the base query strategy.
        return_utilities : bool, optional
            If true, also return the utilities based on the query strategy.
            The default is False.

        Returns
        -------
        queried_indices : ndarray of shape (n_queried_instances,)
            The indices of instances in X_cand which should be queried, with
            0 <= n_queried_instances <= n_samples.
        utilities : ndarray of shape (n_samples,), optional
            The utilities based on the query strategy. Only provided if
            return_utilities is True.
        """
        X_cand, return_utilities = self._validate_data(
            X_cand, return_utilities
        )

        # copy random state in case of simulating the query
        prior_random_state_state = self.random_state_.get_state()
        utilities = self.random_state_.random_sample(len(X_cand))
        self.random_state_.set_state(prior_random_state_state)

        queried_indices = self.budget_manager_.query_by_utility(utilities)

        if return_utilities:
            return queried_indices, utilities
        else:
            return queried_indices
    def update(self, X_cand, queried_indices, budget_manager_param_dict=None):
        """Updates the budget manager and the count for seen and queried
        instances.

        Parameters
        ----------
        X_cand : {array-like, sparse matrix} of shape (n_samples, n_features)
            The instances which could be queried. Sparse matrices are
            accepted only if they are supported by the base query strategy.
        queried_indices : array-like of shape (n_samples,)
            Indicates which instances from X_cand have been queried.
        budget_manager_param_dict : kwargs
            Optional kwargs for budget_manager.

        Returns
        -------
        self : RandomSampler
            The RandomSampler returns itself, after it is updated.
        """
        # check if a random state is set
        self._validate_random_state()
        # check if a budget_manager is set
        self._validate_budget_manager()
        budget_manager_param_dict = ({} if budget_manager_param_dict is None
                                     else budget_manager_param_dict)
        # update the random state, assuming that query(..., simulate=True)
        # was used
        self.random_state_.random_sample(len(X_cand))
        call_func(
            self.budget_manager_.update,
            X_cand=X_cand,
            queried_indices=queried_indices,
            **budget_manager_param_dict
        )
        return self
    def _validate_data(
        self, X_cand, return_utilities, reset=True, **check_X_cand_params
    ):
        """Validate input data and set or check the `n_features_in_`
        attribute.

        Parameters
        ----------
        X_cand : array-like of shape (n_candidates, n_features)
            The instances which could be queried. Sparse matrices are
            accepted only if they are supported by the base query strategy.
        return_utilities : bool
            If true, also return the utilities based on the query strategy.
        reset : bool, default=True
            Whether to reset the `n_features_in_` attribute.
            If False, the input will be checked for consistency with data
            provided when reset was last True.
        **check_X_cand_params : kwargs
            Parameters passed to :func:`sklearn.utils.check_array`.

        Returns
        -------
        X_cand : np.ndarray of shape (n_candidates, n_features)
            Checked candidate samples.
        return_utilities : bool
            Checked boolean value of `return_utilities`.
        """
        X_cand, return_utilities = super()._validate_data(
            X_cand, return_utilities, reset=reset, **check_X_cand_params
        )
        self._validate_random_state()
        return X_cand, return_utilities
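The thresholding idea from the RandomSampler docstring can be checked with a few lines of plain NumPy (an illustration of the concept only, not a call into the library's API):

```python
import numpy as np

# With a 10% budget, an instance is queried when its uniform-random
# utility exceeds 1 - budget = 0.9, which happens ~10% of the time.
rng = np.random.RandomState(0)
budget = 0.1
utilities = rng.random_sample(10000)
queried = np.flatnonzero(utilities >= 1 - budget)
print(len(queried) / len(utilities))  # close to 0.1
```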
class PeriodicSampler(SingleAnnotStreamBasedQueryStrategy):
    """The PeriodicSampler samples instances periodically. The length of
    that period is determined by the budget specified in the budget_manager.
    For instance, a budget of 25% would result in the PeriodicSampler
    sampling every fourth instance. The main idea behind this query strategy
    is to exhaust a given budget as soon as it is available. Instances are
    queried regardless of their position in the feature space. As this query
    strategy disregards any information about the instance, it should only
    be used as a baseline strategy.

    Parameters
    ----------
    budget_manager : BudgetManager
        The BudgetManager which models the budgeting constraint used in
        the stream-based active learning setting. The budget attribute set
        for the budget_manager will be used to determine the interval
        between sampling instances.
    random_state : int, RandomState instance, default=None
        Controls the randomness of the estimator.
    """

    def __init__(self, budget_manager=FixedThresholdBudget(),
                 random_state=None):
        super().__init__(
            budget_manager=budget_manager, random_state=random_state
        )
    def query(self, X_cand, return_utilities=False):
        """Ask the query strategy which instances in X_cand to acquire.

        This query strategy only evaluates the time each instance arrives
        at. The utilities returned, when return_utilities is set to True,
        are either 0 (the instance is not queried) or 1 (the instance is
        queried).

        Please note that, when the decisions from this function may differ
        from the final sampling, simulate=True can be set, so that the query
        strategy can be updated later with update(...) with the final
        sampling. This is especially helpful when developing wrapper query
        strategies.

        Parameters
        ----------
        X_cand : {array-like, sparse matrix} of shape (n_samples, n_features)
            The instances which may be queried. Sparse matrices are accepted
            only if they are supported by the base query strategy.
        return_utilities : bool, optional
            If true, also return the utilities based on the query strategy.
            The default is False.

        Returns
        -------
        queried_indices : ndarray of shape (n_queried_instances,)
            The indices of instances in X_cand which should be queried, with
            0 <= n_queried_instances <= n_samples.
        utilities : ndarray of shape (n_samples,), optional
            The utilities based on the query strategy. Only provided if
            return_utilities is True.
        """
        X_cand, return_utilities = self._validate_data(
            X_cand,
            return_utilities
        )

        utilities = np.zeros(X_cand.shape[0])
        budget = getattr(self.budget_manager_, "budget_", 0)
        tmp_observed_instances = self.observed_instances_
        tmp_queried_instances = self.queried_instances_
        for i, x in enumerate(X_cand):
            tmp_observed_instances += 1
            remaining_budget = (
                tmp_observed_instances * budget - tmp_queried_instances
            )
            if remaining_budget >= 1:
                utilities[i] = 1
                tmp_queried_instances += 1
            else:
                utilities[i] = 0

        queried_indices = self.budget_manager_.query_by_utility(utilities)

        if return_utilities:
            return queried_indices, utilities
        else:
            return queried_indices
    def update(self, X_cand, queried_indices, budget_manager_param_dict=None):
        """Updates the budget manager and the count for seen and queried
        instances.

        Parameters
        ----------
        X_cand : {array-like, sparse matrix} of shape (n_samples, n_features)
            The instances which could be queried. Sparse matrices are
            accepted only if they are supported by the base query strategy.
        queried_indices : array-like of shape (n_samples,)
            Indicates which instances from X_cand have been queried.
        budget_manager_param_dict : kwargs
            Optional kwargs for budget_manager.

        Returns
        -------
        self : PeriodicSampler
            The PeriodicSampler returns itself, after it is updated.
        """
        # check if a budget_manager is set
        self._validate_data(np.array([[0]]), False)
        budget_manager_param_dict = ({} if budget_manager_param_dict is None
                                     else budget_manager_param_dict)
        call_func(
            self.budget_manager_.update,
            X_cand=X_cand,
            queried_indices=queried_indices,
            **budget_manager_param_dict
        )

        queried = np.zeros(len(X_cand))
        queried[queried_indices] = 1
        self.observed_instances_ += len(queried)
        self.queried_instances_ += np.sum(queried)
        return self
    def _validate_data(
        self, X_cand, return_utilities, reset=True, **check_X_cand_params
    ):
        """Validate input data and set or check the `n_features_in_`
        attribute.

        Parameters
        ----------
        X_cand : array-like of shape (n_candidates, n_features)
            The instances which could be queried. Sparse matrices are
            accepted only if they are supported by the base query strategy.
        return_utilities : bool
            If true, also return the utilities based on the query strategy.
        reset : bool, default=True
            Whether to reset the `n_features_in_` attribute.
            If False, the input will be checked for consistency with data
            provided when reset was last True.
        **check_X_cand_params : kwargs
            Parameters passed to :func:`sklearn.utils.check_array`.

        Returns
        -------
        X_cand : np.ndarray of shape (n_candidates, n_features)
            Checked candidate samples.
        return_utilities : bool
            Checked boolean value of `return_utilities`.
        """
        X_cand, return_utilities = super()._validate_data(
            X_cand, return_utilities, reset=reset, **check_X_cand_params
        )
        self._validate_random_state()
        # check if counting of instances has begun
        if not hasattr(self, "observed_instances_"):
            self.observed_instances_ = 0
        if not hasattr(self, "queried_instances_"):
            self.queried_instances_ = 0
        return X_cand, return_utilities
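The counter arithmetic inside PeriodicSampler.query can be traced with a standalone loop (a sketch of the bookkeeping only, not the library API): with a 25% budget, the remaining-budget term first reaches 1 on the fourth observed instance.

```python
budget = 0.25
observed = queried = 0
picks = []
for i in range(12):
    observed += 1
    # query as soon as the accumulated budget covers one more instance
    if observed * budget - queried >= 1:
        picks.append(i)
        queried += 1
print(picks)  # [3, 7, 11] -> every fourth instance
```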
"""SpikeTools: analysis tools for single-unit data."""

from .version import __version__
# Loading the required modules
import psycopg2
from DB_connection_parameters import user, password, host, port, database3
import numpy as np
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt
from kmean import kmeans
try:
    connection = psycopg2.connect(user=user, password=password, host=host, port=port, database=database3)
    cursor = connection.cursor()

    n = 1000  # number of points
    k = 5  # number of classes
    no_of_iterations = 15  # number of iterations

    cursor.execute(
# f'SELECT id, geom, ST_AsText(ST_PointN(geom,1)), ST_AsText(geom) FROM public."500m_g" where id in (1330304, 4802327, 1430412, 4747224, 2430236, 1250323, 1520984, 764678, 1018929, 1382920, 289954, 2337235, 627569, 1249795, 4725844, 1810691, 805463, 1419216, 877282, 1112898, 2289451, 1857488, 280363, 286176, 2052348, 4630445, 1810661, 629570, 928357, 1383820, 1521302, 1600495, 1338729, 614495, 1810693, 805404, 282444, 4795348, 1069210, 2246751, 1472045, 295622, 1390117, 1660990, 2337313, 2096112, 4802384, 4666237, 4755778, 1120552, 4785518, 2195727, 1763193, 634963, 631515, 1606844, 1068188, 1703700, 4665436, 1160809, 295123, 764674, 1858012, 1159705, 1294036, 806137, 288816, 577032, 1160216, 293037, 292515, 1471506, 1603123, 2016570, 1430418, 1606346, 1332904, 1337346, 4666723, 1297472, 556360, 4785585, 1660410, 1337345, 4795329, 4666360, 4666032, 821529, 4635887, 1710060, 646224, 1471510, 4977836, 1249793, 764675, 1205049, 1206402, 631555, 928630, 1810657, 4743118, 4756569, 1068482, 4663212, 4756075, 1294013, 2052345, 1159555, 1332906, 1609932, 632226, 1709346, 1743375, 300842, 972731, 1294420, 2289405, 645725, 1383602, 1382924, 2147335, 970317, 1612374, 1205689, 1858313, 1433382, 1590971, 1384976, 1598806, 1520497, 1251369, 764955, 1519425, 1858303, 2337317, 1600491, 809687, 813187, 1709843, 1609927, 1284232, 970064, 1204772, 289932, 1068270, 4756145, 576420, 292513, 1598663, 4666722, 1426569, 809011, 4666637, 627304, 1810663, 1161170, 284165, 2337609, 1120553, 1430595, 1250208, 1068190, 281410, 1612373, 902280, 971595, 1810666, 1067212, 1709952, 1251345, 2016089, 1612189, 556362, 876343, 627304, 1109765, 631557, 299665, 709296, 1419696, 805466, 1338541, 1609348, 1703485, 614538, 764683, 1599179, 1120607, 765170, 809689, 2289200, 614884, 2052048, 1609936, 1109762, 4756235, 1250074, 577034, 4749049, 969353, 1858192, 1068965, 1921955, 4660007, 4743168, 282435, 765172, 1590972, 1250085, 1068939, 902276, 928729, 1420338, 1810493, 1069238, 817802, 2051682, 1160213, 
812757, 2195677, 4756546, 289947, 1284297, 1332924, 822632, 4756213, 282435, 1291176, 2016824, 1918572, 4974774, 968897, 970710, 4968814, 1383348, 1519388, 2196012, 4810816, 2148174, 764678, 295621, 1660410, 972974, 1426422, 902808, 1205049, 4646886, 1430417, 1105566, 1250206, 1250197, 281406, 1019095, 1710060, 1471720, 4807075, 4769490, 296216, 1382261, 1520486, 1250325, 1426835, 2088777, 1206356, 1612183, 4817006, 1206358, 1606445, 1519145, 764678, 4743118, 1120606, 1603580, 1338728, 1159787, 1383618, 805064, 1900206, 1109726, 4971850, 805406, 1382895, 299371, 628723, 1105641, 1019669, 4966477, 4725845, 2374500, 1337341, 2430233, 1068437, 2016822, 287238, 614823, 971124, 824881, 928628, 631856, 1205230, 1433196, 1332906, 1609926, 1068437, 1606496, 1332926, 653841, 1250912, 4745693, 4756145, 652726, 1423786, 1598742, 282434, 4971851, 2148169, 289952, 1600218, 1206244, 810219, 2051673, 2016572, 1120551, 812757, 902542, 1599975, 1383820, 4782025, 764675, 637362, 630927, 1068414, 293039, 2147334, 1105513, 1019574, 1427385, 646224, 4663584, 4663211, 635652, 1205604, 4769609, 1751809, 1660700, 647237, 280821, 1383110, 630929, 4756213, 1705160, 1294026, 1204933, 1205485, 1382259, 1600141, 1606839, 631515, 282434, 1609933, 1159700, 2195565, 282433, 4170605, 1334457, 877404, 1743560, 4794102, 1068567, 1810660, 1068940, 636901, 1419217, 299366, 1382919, 970065, 765169, 4653271, 199745, 971124, 876463, 1337348, 1660699, 287728, 1471320, 298097, 1419217, 4977836, 970181, 295117, 809485, 1612374, 1068792, 4666723, 1743570, 2195795, 1520144, 1660991, 1068558, 4630478, 615043, 614543, 1395026, 1519148, 1338542, 4743198, 4782062, 902812, 1520899, 1383795, 764949, 1332908, 1205049, 627313, 1338543, 614889, 806139, 1384976, 1205048, 825322, 1743570, 1159436, 4743201, 1250045, 1105643, 1704498, 300840, 4769862, 4980293, 1160603, 1067107, 876654, 1204932, 1600218, 631517, 1709952, 1019008, 970180, 1160807, 636898, 813304, 1250348, 1609347, 282440, 1068438, 817292, 1383107, 1423648, 
1337519, 2016084, 876467, 4666360, 1426943, 1973333, 636901, 1419273, 970180, 1160212, 765610, 279759, 2052053, 813303, 1743572, 289931, 1068566, 4963498, 1710057, 970341, 808811, 1973700, 4170612, 1337916, 805064, 4755889, 1284361, 4974797, 615044, 1383607, 631907, 928634, 1430415, 1609352, 4665984, 820519, 1385453, 1471511, 1161105, 1250322, 576430, 4743198, 1660246, 805800, 1383616, 1520898, 1112730, 4966477, 1924598, 4807100, 1429865, 1205046)'
# f'SELECT id,the_geom AS geom, ST_AsText(the_geom) AS geomDD FROM public."lineEdges_noded_vertices_pgr" ORDER BY random() limit {n}'
# f'SELECT id, geom, ST_AsText(ST_PointN(geom,1)), ST_AsText(geom) FROM public."500m_g" ORDER BY random() limit {n}'
f'SELECT id, geom AS geom, ST_AsText(geom) AS geomText FROM public."budynki_wawa_centroidy" WHERE id in '
f'(107928, 715, 71239, 3208, 11886, 112065, 67538, 15797, 87341, 147743, 87155, 37643, 137208, 18530, 135400, 28711, 137367, 95230, 89859, 125530, 42806, 77479, 19067, 82170, 36567, 77064, 124159, 42722, 63825, 105184, 42158, 113438, 131625, 105316, 9211, 67100, 54973, 39689, 139736, 100104, 136069, 63594, 7431, 108783, 50423, 119633, 75855, 16307, 13292, 138946, 47980, 4388, 61097, 10492, 50892, 77293, 46653, 69850, 57813, 52506, 62145, 90210, 99424, 34805, 77713, 27719, 147222, 106266, 146770, 29427, 86169, 316, 115027, 106259, 97220, 35069, 23531, 38824, 42425, 135415, 64775, 10088, 68579, 63944, 20882, 48954, 68586, 102326, 95552, 84685, 33028, 79006, 10609, 27195, 142178, 53718, 51565, 65124, 91054, 80358, 67863, 147299, 46036, 25555, 31826, 147968, 45628, 113315, 104177, 81325, 126551, 105623, 126575, 129739, 96789, 119912, 33070, 38608, 91564, 134292, 8815, 73649, 18157, 7637, 19384, 83123, 31265, 88223, 60590, 110657, 5134, 12037, 3757, 68888, 6992, 87670, 106724, 86545, 16298, 101034, 40599, 91521, 104483, 119843, 3116, 124863, 2097, 61904, 29011, 109681, 37990, 186, 59256, 17699, 3758, 142361, 34536, 145098, 73165, 77333, 43298, 45837, 141959, 56363, 44138, 27475, 92870, 99413, 27582, 64700, 132987, 41320, 38522, 136415, 69086, 52959, 85119, 54593, 130010, 3377, 141228, 130591, 24534, 67867, 12587, 101123, 146602, 75313, 146843, 76762, 1781, 106624, 21465, 43094, 73603, 46881, 137103, 78889, 131771, 114754, 133205, 129922, 70504, 115008, 124971, 65766, 30377, 31732, 145042, 43134, 118945, 107873, 141899, 144874, 68102, 118665, 116281, 17781, 23675, 21766, 66841, 11280, 71235, 35180, 84597, 74085, 66009, 46269, 66093, 91810, 15890, 34612, 96807, 36364, 138932, 137404, 27289, 70736, 10770, 76396, 21972, 16104, 91554, 11604, 80444, 105518, 118701, 15716, 109837, 85360, 58202, 36793, 39424, 39724, 91195, 20270, 73186, 61705, 84886, 20510, 54176, 89713, 2305, 83812, 109341, 23704, 135778, 146409, 138864, 61991, 114064, 25838, 39346, 73447, 115358, 110089, 
92346, 126775, 104381, 128435, 95310, 6554, 67701, 143193, 28441, 145026, 75099, 62113, 136894, 39339, 117918, 64455, 102128, 98028, 124421, 62803, 82102, 94893, 3282, 130716, 146217, 99769, 85658, 84173, 139806, 20430, 90407, 131150, 64052, 133803, 81173, 35514, 48739, 41539, 67198, 55174, 100727, 47923, 113366, 100871, 111181, 76015, 56534, 92201, 132062, 34890, 8652, 65831, 104957, 117138, 140604, 89567, 143244, 76286, 21421, 63613, 49476, 31512, 109989, 83711, 111210, 59911, 19262, 90200, 94368, 3137, 126993, 124265, 128244, 114167, 4514, 38484, 73444, 131507, 49573, 115646, 102824, 29370, 18968, 69193, 46957, 79803, 130358, 104018, 39018, 28563, 137592, 44755, 134103, 59721, 101648, 6603, 122496, 693, 25893, 111614, 13895, 137187, 52512, 34721, 121328, 38239, 17281, 85099, 50367, 21685, 139213, 110942, 36262, 29239, 57898, 115762, 49890, 91751, 11748, 94906, 96489, 109201, 93916, 117279, 92304, 8456, 100878, 8971, 90972, 92630, 146831, 118531, 117793, 25767, 43447, 121857, 103436, 80986, 27343, 27566, 93732, 14053, 9265, 1778, 115823, 25420, 20008, 81792, 71009, 29455, 27275, 112039, 42648, 131956, 121018, 111935, 5876, 54749, 91266, 142998, 147985, 67496, 19780, 99662, 7787, 93816, 1642, 122287, 123666, 112100, 136696, 7211, 53779, 71957, 117564, 63046, 106363, 29943, 48432, 34439, 48506, 127353, 54695, 72450, 131360, 145512, 110525, 11622, 121347, 75953, 41637, 142330, 51671, 97034, 120902, 21217, 133046, 23904, 98237, 136007, 138157, 140685, 131958, 46361, 113788, 83199, 126029, 55091, 89182, 3639, 12655, 4991, 114650, 128162, 57087, 91702, 137842, 115115, 4546, 17636, 56957, 139837, 91659, 12364, 102216, 82917, 70840, 24659, 65922, 81878, 67258, 75975, 51149, 17840, 136108, 132706, 47399, 74153, 78546, 141412, 59122, 116521, 68667, 110256, 116830, 120216, 97862, 90057, 144696, 79173, 12517, 57284, 46366, 126517, 139292, 129826, 32331, 132371, 56593, 99719, 9622, 37295, 96449, 6114, 141665, 104775, 17484, 41310, 2453, 85681, 15476, 20190, 107189, 61029, 
86815, 24671, 68374, 85793, 47577, 55573, 100363, 10156, 105724, 628, 85237, 77080, 71560, 122689, 117619, 69335, 39850, 32439, 79955, 48815, 15660, 32716, 82399, 125410, 142010, 1732, 148715, 116992, 41886, 102507, 21424, 146285, 20215, 122228, 95859, 128748, 33889, 38338, 98461, 112426, 136635, 5821, 66251, 121022, 117689, 6316, 27722, 54386, 51701, 88846, 2636, 101704, 62264, 42686, 140731, 130842, 87806, 52413, 24148, 47513, 24794, 74211, 114356, 80487, 55955, 45228, 124484, 56228, 78159, 112780, 3443, 129295, 86749, 146777, 39786, 98803, 49108, 40023, 140103, 35470, 31029, 34979, 42179, 6993, 128788, 145436, 132417, 69602, 66504, 129656, 59125, 91575, 117139, 26477, 132109, 2738, 5820, 19487, 98875, 99150, 27347, 79976, 54911, 135670, 130870, 9252, 68832, 101682, 145417, 36149, 27890, 15651, 116625, 81959, 67306, 108769, 119120, 135362, 70478, 137185, 30246, 61067, 136587, 40867, 89205, 84431, 82974, 52085, 5388, 59293, 67955, 53327, 128126, 23073, 130659, 59526, 111069, 72922, 16084, 82655, 109399, 106130, 129138, 113595, 45387, 145062, 85495, 51353, 129768, 10316, 83558, 42836, 100156, 56606, 15971, 31310, 100547, 100003, 96452, 96041, 74411, 122631, 107778, 22262, 34527, 24936, 19860, 28216, 116295, 44424, 140999, 24786, 89354, 45989, 145753, 110381, 111621, 119447, 44216, 46781, 104270, 59461, 2231, 46941, 3356, 69057, 44615, 6443, 91821, 49715, 136249, 16457, 56684, 143364, 69823, 99202, 66462, 116544, 44211, 87193, 102130, 102628, 26918, 130355, 142646, 52603, 101024, 140732, 146810, 90001, 85212, 82179, 71953, 25257, 45302, 140193, 38837, 21438, 72533, 51448, 28355, 77454, 104431, 66409, 72135, 12680, 21246, 45545, 6483, 125818, 14118, 6538, 2127, 130760, 106644, 250, 26827, 17940, 45107, 57308, 139635, 118727, 83450, 44422, 89225, 145581, 19872, 117668, 25655, 140464, 52684, 33982, 67911, 118895, 88400, 19547, 572, 55324, 46740, 106085, 83608, 113078, 21778, 89257, 60423, 66819, 98488, 76206, 74986, 68962, 68656, 8457, 12128, 81347, 36062, 6929, 
105093, 50934, 134946, 2011, 133258, 129946, 53651, 64594, 34677, 57267, 90517, 8569, 27354, 103194, 82481, 24714, 119430, 79602, 54879, 46675, 3504, 25450, 6303, 31515, 16915, 85952, 80867, 112833, 24119, 123718, 109551, 12591, 132630, 8999, 114429, 62261, 53908, 2283, 95698, 25594, 34079, 35429, 148739, 50562, 74987, 43373, 94026, 72413, 4429, 87587, 94434, 135229, 86875, 78909, 33735, 72427, 148655, 35224, 110410, 68058, 59881, 133709, 69969, 2445, 87346, 118351, 122943, 66906, 123106, 101452, 9126, 132097, 565, 76707, 81650, 7752, 22631, 114812, 105928, 124948, 45521, 39983, 144349, 66552, 144851, 15713, 41657, 120145, 31970, 51111, 50915, 4857, 109772, 89141, 101227, 138234, 108307, 134465, 100825, 18269, 50333, 123109, 74379, 8991, 112549, 27278, 11419, 78732, 50882, 16529, 44674, 110218, 138975, 52795, 73004, 122047, 9755, 62234, 28374, 27121, 39565, 143883, 118799, 147526, 98066, 77114, 14126, 128432, 58787, 43565, 111199, 99093, 75442, 75128, 131722, 23771, 102135, 56566, 104983, 43958, 38756, 104365, 78979, 24495, 76857, 21836, 32118, 120805, 22812, 127367, 22987, 144369, 20141, 82964, 64053, 18686, 57934, 62840, 1706, 18131, 98026, 125853, 115205, 95707, 118435, 10887, 40668, 109214, 85178, 111501, 22784, 69100, 140580, 90076) '
f'limit {n}'
)
    dane = cursor.fetchall()
    # print('\n raw rows from the database \n', dane)
    # unpack the fetched rows
    x = [item[0] for item in dane]  # point indices
    coords = [item[1] for item in dane]  # point coordinates
    coordsDD = [item[2] for item in dane]  # point coordinates (text form)
    coordsDX = [[float(part) for part in item[7:-1].split()[:2]] for item in coordsDD]
    coordsY = [float(item[6:-1].split()[0]) for item in coordsDD]
    coordsX = [float(item[7:-1].split()[1]) for item in coordsDD]
    print('Coordinate lists', coordsX, '\n', coordsY)
    tablica_dane = np.column_stack([coordsX, coordsY])
    print('>>>>>>>>>>>>>>>>>>>>> data array', tablica_dane)
    pca = PCA(2)
    df = pca.fit_transform(tablica_dane)
    print('>>>>>>>>>>>>>>>>>>>>> df\n', df)
    label = kmeans(tablica_dane, 5, 500)
    print('label ===================\n', label)
    u_labels = np.unique(label)
    for i in u_labels:
        plt.scatter(tablica_dane[label == i, 1], tablica_dane[label == i, 0], label=i)
    plt.legend()
    plt.grid()
    plt.show()
    print('This is df\n', df)
except (Exception, psycopg2.Error) as error:
    print("Connection attempt failed:", error)
finally:
    # close the established connection
    if connection:
        cursor.close()
        connection.close()
        print("Connection closed")
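The `kmeans(tablica_dane, 5, 500)` call above relies on a project-specific helper whose definition is not shown here. Purely as an illustrative sketch (not the author's implementation), a minimal stdlib-only k-means with the same `(points, k, iterations)` call shape could look like:

```python
import math
import random

def kmeans(points, k, iterations, seed=0):
    """Minimal k-means sketch: returns one cluster label per point."""
    rng = random.Random(seed)
    points = [tuple(p) for p in points]
    centers = [list(p) for p in rng.sample(points, k)]
    labels = [0] * len(points)
    for _ in range(iterations):
        # assignment step: nearest center by Euclidean distance
        for i, p in enumerate(points):
            labels[i] = min(range(k), key=lambda c: math.dist(p, centers[c]))
        # update step: move each center to the mean of its assigned points
        for c in range(k):
            members = [p for i, p in enumerate(points) if labels[i] == c]
            if members:
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels

# two well-separated blobs should end up in different clusters
pts = [(0.0, 0.0), (0.1, 0.2), (10.0, 10.0), (10.2, 9.9)]
labels = kmeans(pts, 2, 20)
print(labels)
```

A real implementation would also check for convergence and reseed empty clusters; this sketch only shows the assign/update loop.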
| 208.985075 | 7,251 | 0.724754 | 1,838 | 14,002 | 5.504353 | 0.867247 | 0.004744 | 0.005338 | 0.003855 | 0.034694 | 0.02995 | 0.023821 | 0.023821 | 0.023821 | 0.01878 | 0 | 0.718544 | 0.142337 | 14,002 | 66 | 7,252 | 212.151515 | 0.123907 | 0.350521 | 0 | 0 | 0 | 0.043478 | 0.830837 | 0.010352 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.043478 | 0.130435 | 0 | 0.130435 | 0.152174 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
18474422524533559626b64247b4fea030e660b6 | 805 | py | Python | pydy/tests/test_utils.py | jcrist/pydy | ec139f0dcbeffba8242636b727b3be02091792b0 | [
"BSD-3-Clause"
] | null | null | null | pydy/tests/test_utils.py | jcrist/pydy | ec139f0dcbeffba8242636b727b3be02091792b0 | [
"BSD-3-Clause"
] | null | null | null | pydy/tests/test_utils.py | jcrist/pydy | ec139f0dcbeffba8242636b727b3be02091792b0 | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/env python
from pkg_resources import parse_version
from setuptools import __version__ as SETUPTOOLS_VERSION
from nose.tools import assert_raises
from ..utils import sympy_equal_to_or_newer_than
def test_sympy_equal_to_or_newer_than():
    # sympy_equal_to_or_newer_than(version, installed_version)
    assert sympy_equal_to_or_newer_than('0.7.6.dev', '0.7.6.dev')
    assert not sympy_equal_to_or_newer_than('0.7.6', '0.7.6.dev')
    assert sympy_equal_to_or_newer_than('0.7.5', '0.7.6.dev')
    assert sympy_equal_to_or_newer_than('0.6.5', '0.7.6.dev')
    assert not sympy_equal_to_or_newer_than('0.7.7', '0.7.6.dev')
    if parse_version(SETUPTOOLS_VERSION) >= parse_version('8.0'):
        with assert_raises(ValueError):
            sympy_equal_to_or_newer_than('0.7.7', '0.7.6-git')
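The tests above exercise `sympy_equal_to_or_newer_than`, which compares dotted version strings where a trailing `.dev` tag marks a pre-release of the following version. Purely for illustration (this is not pydy's actual implementation), a naive stdlib-only comparison with the same semantics might look like:

```python
def version_key(version):
    """Turn '0.7.6.dev' into a sortable tuple; '.dev' sorts before the release."""
    parts = version.split(".")
    dev = parts[-1] == "dev"
    if dev:
        parts = parts[:-1]
    # append 0 for dev pre-releases, 1 for final releases
    return tuple(int(p) for p in parts) + ((0,) if dev else (1,))

def equal_to_or_newer_than(version, installed_version):
    """True when installed_version is at least as new as version."""
    return version_key(installed_version) >= version_key(version)

print(equal_to_or_newer_than("0.7.6", "0.7.6.dev"))  # → False
```

This naive key cannot parse tags like `0.7.6-git`, which mirrors why the real helper raises `ValueError` on such input under newer setuptools.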
| 38.333333 | 65 | 0.746584 | 148 | 805 | 3.662162 | 0.25 | 0.04059 | 0.199262 | 0.232472 | 0.535055 | 0.535055 | 0.404059 | 0.404059 | 0.404059 | 0.341328 | 0 | 0.054131 | 0.12795 | 805 | 20 | 66 | 40.25 | 0.717949 | 0.095652 | 0 | 0 | 0 | 0 | 0.125344 | 0 | 0 | 0 | 0 | 0 | 0.538462 | 1 | 0.076923 | true | 0 | 0.307692 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
18477951eef41fbf1ccf52e4923db113d8a49ff2 | 970 | py | Python | docs/conf.py | nerexysesa/django-bootstrap-v5 | 1a9a7e9fa20791ad65f1e5a9695067e8ef685a9e | [
"BSD-3-Clause"
] | 46 | 2020-12-15T14:14:24.000Z | 2022-03-21T16:14:31.000Z | docs/conf.py | nerexysesa/django-bootstrap-v5 | 1a9a7e9fa20791ad65f1e5a9695067e8ef685a9e | [
"BSD-3-Clause"
] | 15 | 2020-12-12T05:38:34.000Z | 2022-03-25T16:50:55.000Z | docs/conf.py | nerexysesa/django-bootstrap-v5 | 1a9a7e9fa20791ad65f1e5a9695067e8ef685a9e | [
"BSD-3-Clause"
] | 35 | 2021-01-20T00:22:29.000Z | 2022-03-11T02:10:51.000Z | import os
#try:
# from importlib.metadata import metadata
#except ImportError:
# from importlib_metadata import metadata
#
#PROJECT_NAME = "django-bootstrap-v5"
#
#on_rtd = os.environ.get("READTHEDOCS", None) == "True"
#project_metadata = metadata(PROJECT_NAME)
#print(project_metadata)
#
#project = project_metadata["name"]
#author = project_metadata["author"]
#copyright = f"2020, {author}"
#
## The full version, including alpha/beta/rc tags, in x.y.z.misc format
#release = project_metadata["version"]
## The short X.Y version.
#version = ".".join(release.split(".")[:2])
#
#extensions = ["sphinx.ext.autodoc", "sphinx.ext.viewcode", "m2r2"]
#source_suffix = [".rst", ".md"]
#pygments_style = "sphinx"
#htmlhelp_basename = f"{PROJECT_NAME}-doc"
#
#if not on_rtd: # only import and set the theme if we're building docs locally
# import sphinx_rtd_theme
#
# html_theme = "sphinx_rtd_theme"
# html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
| 29.393939 | 79 | 0.718557 | 132 | 970 | 5.083333 | 0.545455 | 0.111773 | 0.062593 | 0.080477 | 0.172876 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009456 | 0.127835 | 970 | 32 | 80 | 30.3125 | 0.783688 | 0.923711 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.03125 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
18746ca6fe4907bbf00e0502cf84f904bcabb5de | 72 | py | Python | IA/Python/4/4.2/5.py | worthl3ss/random-small | ffb60781f57eb865acbd81aaa07056046bad32fe | [
"MIT"
] | 1 | 2022-02-23T12:47:00.000Z | 2022-02-23T12:47:00.000Z | IA/Python/4/4.2/5.py | worthl3ss/random-small | ffb60781f57eb865acbd81aaa07056046bad32fe | [
"MIT"
] | null | null | null | IA/Python/4/4.2/5.py | worthl3ss/random-small | ffb60781f57eb865acbd81aaa07056046bad32fe | [
"MIT"
] | null | null | null | import re
print(" ".join(re.compile("[a-zA-Z]+[.]*").findall(input())))
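The one-liner above extracts alphabetic runs (with any trailing dots) from stdin and joins them with spaces. The same pattern can be exercised on a literal string:

```python
import re

# [a-zA-Z]+ matches a run of letters, [.]* greedily takes any dots right after it
pattern = re.compile(r"[a-zA-Z]+[.]*")
tokens = pattern.findall("Hello, world... 123 abc.def")
print(" ".join(tokens))  # → Hello world... abc. def
```

Note that digits and punctuation other than trailing dots are dropped, and `abc.def` splits into `abc.` and `def` because the dot is only consumed after a letter run.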
| 24 | 61 | 0.583333 | 11 | 72 | 3.818182 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.055556 | 72 | 2 | 62 | 36 | 0.617647 | 0 | 0 | 0 | 0 | 0 | 0.194444 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 5 |
a107216ad81ed7bb37b238b3b27e770845bee31f | 130 | py | Python | setup.py | kotarohara/python-cicd | f267c6d71e19978a0aa49450c991b12e285a1e66 | [
"MIT"
] | null | null | null | setup.py | kotarohara/python-cicd | f267c6d71e19978a0aa49450c991b12e285a1e66 | [
"MIT"
] | null | null | null | setup.py | kotarohara/python-cicd | f267c6d71e19978a0aa49450c991b12e285a1e66 | [
"MIT"
] | null | null | null | from setuptools import setup, find_packages
setup(name="calc", packages=find_packages()) # Change the name based on your project
| 32.5 | 84 | 0.792308 | 19 | 130 | 5.315789 | 0.736842 | 0.237624 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.123077 | 130 | 3 | 85 | 43.333333 | 0.885965 | 0.284615 | 0 | 0 | 0 | 0 | 0.043956 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
a1307f3ddf1e21bd554b823d3af2b59badb95891 | 279 | py | Python | src/models/user.py | YukinoKoh/multiuser_blog_template | 8d42f0ea7a905d0ae1602e12c569d15e48aaa062 | [
"MIT"
] | 1 | 2019-04-18T23:36:14.000Z | 2019-04-18T23:36:14.000Z | src/models/user.py | YukinoKoh/multiuser_blog_template | 8d42f0ea7a905d0ae1602e12c569d15e48aaa062 | [
"MIT"
] | null | null | null | src/models/user.py | YukinoKoh/multiuser_blog_template | 8d42f0ea7a905d0ae1602e12c569d15e48aaa062 | [
"MIT"
] | null | null | null | from google.appengine.ext import db
# Database
# User db
def user_key(name='default'):
    return db.Key.from_path('users', name)
class User(db.Model):
    name = db.StringProperty(required=True)
    pw_hash = db.StringProperty(required=True)
    email = db.StringProperty()
| 19.928571 | 46 | 0.713262 | 39 | 279 | 5.025641 | 0.589744 | 0.244898 | 0.244898 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16129 | 279 | 13 | 47 | 21.461538 | 0.837607 | 0.057348 | 0 | 0 | 0 | 0 | 0.046154 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.142857 | 0.142857 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
a13e830f444f9d088cf7a466ae12ca9c8ef931ea | 23,928 | py | Python | tests/system/test_read_gbq.py | renovate-bot/pandas-gbq | 891a00c8f202aa476ffb22b2fb92c01ffa84889a | [
"BSD-3-Clause"
] | 32 | 2021-07-16T19:33:35.000Z | 2022-03-28T16:42:22.000Z | tests/system/test_read_gbq.py | renovate-bot/pandas-gbq | 891a00c8f202aa476ffb22b2fb92c01ffa84889a | [
"BSD-3-Clause"
] | 117 | 2021-07-19T14:55:31.000Z | 2022-03-28T22:07:22.000Z | tests/system/test_read_gbq.py | renovate-bot/pandas-gbq | 891a00c8f202aa476ffb22b2fb92c01ffa84889a | [
"BSD-3-Clause"
] | 6 | 2021-08-01T06:00:07.000Z | 2022-03-04T01:30:45.000Z | # Copyright (c) 2021 pandas-gbq Authors All rights reserved.
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file.
import collections
import datetime
import decimal
import db_dtypes
import pandas
import pandas.testing
import pytest
from pandas_gbq.features import FEATURES
QueryTestCase = collections.namedtuple(
"QueryTestCase",
["query", "expected", "use_bqstorage_apis"],
defaults=[None, None, {True, False}],
)
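`QueryTestCase` uses the `defaults` parameter of `collections.namedtuple` (Python 3.7+): defaults are matched to the rightmost fields, so a case can be built from just a `query`. A quick illustration of the same mechanism:

```python
from collections import namedtuple

# defaults align to the rightmost fields: query, then expected, then use_bqstorage_apis
Case = namedtuple(
    "Case",
    ["query", "expected", "use_bqstorage_apis"],
    defaults=[None, None, {True, False}],
)

case = Case(query="SELECT 1")
print(case)
```

Omitted fields fall back to their defaults, which is why the test cases below only override `use_bqstorage_apis` when a case is limited to one API path.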
@pytest.mark.parametrize(["use_bqstorage_api"], [(True,), (False,)])
@pytest.mark.parametrize(
["query", "expected", "use_bqstorage_apis"],
[
pytest.param(
*QueryTestCase(
query="""
SELECT
bools.row_num AS row_num,
bool_col,
bytes_col,
date_col,
datetime_col,
float_col,
int64_col,
numeric_col,
string_col,
time_col,
timestamp_col
FROM
UNNEST([
STRUCT(1 AS row_num, TRUE AS bool_col),
STRUCT(2 AS row_num, FALSE AS bool_col),
STRUCT(3 AS row_num, TRUE AS bool_col) ]) AS `bools`
INNER JOIN
UNNEST([
STRUCT(1 AS row_num, CAST('C00010FF' AS BYTES FORMAT 'HEX') AS bytes_col),
STRUCT(2 AS row_num, CAST('F1AC' AS BYTES FORMAT 'HEX') AS bytes_col),
STRUCT(3 AS row_num, CAST('FFBADD11' AS BYTES FORMAT 'HEX') AS bytes_col) ]) AS `bytes`
INNER JOIN
UNNEST([
STRUCT(1 AS row_num, DATE(1998, 9, 4) AS date_col),
STRUCT(2 AS row_num, DATE(2011, 10, 1) AS date_col),
STRUCT(3 AS row_num, DATE(2018, 4, 11) AS date_col) ]) AS `dates`
INNER JOIN
UNNEST([
STRUCT(1 AS row_num, DATETIME('1998-09-04 12:34:56.789101') AS datetime_col),
STRUCT(2 AS row_num, DATETIME('2011-10-01 00:01:02.345678') AS datetime_col),
STRUCT(3 AS row_num, DATETIME('2018-04-11 23:59:59.999999') AS datetime_col) ]) AS `datetimes`
INNER JOIN
UNNEST([
STRUCT(1 AS row_num, 1.125 AS float_col),
STRUCT(2 AS row_num, -2.375 AS float_col),
STRUCT(3 AS row_num, 0.0 AS float_col) ]) AS `floats`
INNER JOIN
UNNEST([
-- 2 ^ 63 - 1, but in hex to avoid intermediate overflow.
STRUCT(1 AS row_num, 0x7fffffffffffffff AS int64_col),
STRUCT(2 AS row_num, -1 AS int64_col),
-- -2 ^ 63, but in hex to avoid intermediate overflow.
STRUCT(3 AS row_num, -0x8000000000000000 AS int64_col) ]) AS `ints`
INNER JOIN
UNNEST([
STRUCT(1 AS row_num, CAST('123.456789' AS NUMERIC) AS numeric_col),
STRUCT(2 AS row_num, CAST('-123.456789' AS NUMERIC) AS numeric_col),
STRUCT(3 AS row_num, CAST('999.999999' AS NUMERIC) AS numeric_col) ]) AS `numerics`
INNER JOIN
UNNEST([
STRUCT(1 AS row_num, 'abcdefghijklmnopqrstuvwxyz' AS string_col),
STRUCT(2 AS row_num, 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' AS string_col),
STRUCT(3 AS row_num, 'こんにちは' AS string_col) ]) AS `strings`
INNER JOIN
UNNEST([
STRUCT(1 AS row_num, CAST('00:00:00.000000' AS TIME) AS time_col),
STRUCT(2 AS row_num, CAST('09:08:07.654321' AS TIME) AS time_col),
STRUCT(3 AS row_num, CAST('23:59:59.999999' AS TIME) AS time_col) ]) AS `times`
INNER JOIN
UNNEST([
STRUCT(1 AS row_num, TIMESTAMP('1998-09-04 12:34:56.789101') AS timestamp_col),
STRUCT(2 AS row_num, TIMESTAMP('2011-10-01 00:01:02.345678') AS timestamp_col),
STRUCT(3 AS row_num, TIMESTAMP('2018-04-11 23:59:59.999999') AS timestamp_col) ]) AS `timestamps`
WHERE
`bools`.row_num = `dates`.row_num
AND `bools`.row_num = `bytes`.row_num
AND `bools`.row_num = `datetimes`.row_num
AND `bools`.row_num = `floats`.row_num
AND `bools`.row_num = `ints`.row_num
AND `bools`.row_num = `numerics`.row_num
AND `bools`.row_num = `strings`.row_num
AND `bools`.row_num = `times`.row_num
AND `bools`.row_num = `timestamps`.row_num
ORDER BY row_num ASC
""",
expected=pandas.DataFrame(
{
"row_num": pandas.Series([1, 2, 3], dtype="Int64"),
"bool_col": pandas.Series(
[True, False, True],
dtype="boolean"
if FEATURES.pandas_has_boolean_dtype
else "bool",
),
"bytes_col": [
bytes.fromhex("C00010FF"),
bytes.fromhex("F1AC"),
bytes.fromhex("FFBADD11"),
],
"date_col": pandas.Series(
[
datetime.date(1998, 9, 4),
datetime.date(2011, 10, 1),
datetime.date(2018, 4, 11),
],
dtype=db_dtypes.DateDtype(),
),
"datetime_col": pandas.Series(
[
"1998-09-04 12:34:56.789101",
"2011-10-01 00:01:02.345678",
"2018-04-11 23:59:59.999999",
],
dtype="datetime64[ns]",
),
"float_col": [1.125, -2.375, 0.0],
"int64_col": pandas.Series(
[(2 ** 63) - 1, -1, -(2 ** 63)], dtype="Int64"
),
"numeric_col": [
decimal.Decimal("123.456789"),
decimal.Decimal("-123.456789"),
decimal.Decimal("999.999999"),
],
"string_col": [
"abcdefghijklmnopqrstuvwxyz",
"ABCDEFGHIJKLMNOPQRSTUVWXYZ",
"こんにちは",
],
"time_col": pandas.Series(
["00:00:00.000000", "09:08:07.654321", "23:59:59.999999"],
dtype=db_dtypes.TimeDtype(),
),
"timestamp_col": pandas.Series(
[
"1998-09-04 12:34:56.789101",
"2011-10-01 00:01:02.345678",
"2018-04-11 23:59:59.999999",
],
dtype="datetime64[ns]",
).dt.tz_localize(datetime.timezone.utc),
}
),
),
id="scalar-types-nonnull-normal-range",
),
pytest.param(
*QueryTestCase(
query="""
SELECT
bools.row_num AS row_num,
bool_col,
bytes_col,
date_col,
datetime_col,
float_col,
int64_col,
numeric_col,
string_col,
time_col,
timestamp_col
FROM
UNNEST([
STRUCT(1 AS row_num, TRUE AS bool_col),
STRUCT(2 AS row_num, FALSE AS bool_col),
STRUCT(3 AS row_num, NULL AS bool_col) ]) AS `bools`
INNER JOIN
UNNEST([
STRUCT(1 AS row_num, NULL AS bytes_col),
STRUCT(2 AS row_num, CAST('F1AC' AS BYTES FORMAT 'HEX') AS bytes_col),
STRUCT(3 AS row_num, CAST('' AS BYTES FORMAT 'HEX') AS bytes_col) ]) AS `bytes`
INNER JOIN
UNNEST([
STRUCT(1 AS row_num, DATE(1970, 1, 1) AS date_col),
STRUCT(2 AS row_num, NULL AS date_col),
STRUCT(3 AS row_num, DATE(2018, 4, 11) AS date_col) ]) AS `dates`
INNER JOIN
UNNEST([
STRUCT(1 AS row_num, DATETIME('1970-01-01 00:00:00.000000') AS datetime_col),
STRUCT(2 AS row_num, DATETIME('2011-10-01 00:01:02.345678') AS datetime_col),
STRUCT(3 AS row_num, NULL AS datetime_col) ]) AS `datetimes`
INNER JOIN
UNNEST([
STRUCT(1 AS row_num, NULL AS float_col),
STRUCT(2 AS row_num, -2.375 AS float_col),
STRUCT(3 AS row_num, 0.0 AS float_col) ]) AS `floats`
INNER JOIN
UNNEST([
STRUCT(1 AS row_num, -1 AS int64_col),
STRUCT(2 AS row_num, NULL AS int64_col),
STRUCT(3 AS row_num, 0 AS int64_col) ]) AS `int64s`
INNER JOIN
UNNEST([
STRUCT(1 AS row_num, CAST('123.456789' AS NUMERIC) AS numeric_col),
STRUCT(2 AS row_num, NULL AS numeric_col),
STRUCT(3 AS row_num, CAST('999.999999' AS NUMERIC) AS numeric_col) ]) AS `numerics`
INNER JOIN
UNNEST([
STRUCT(1 AS row_num, '' AS string_col),
STRUCT(2 AS row_num, 'こんにちは' AS string_col),
STRUCT(3 AS row_num, NULL AS string_col) ]) AS `strings`
INNER JOIN
UNNEST([
STRUCT(1 AS row_num, NULL AS time_col),
STRUCT(2 AS row_num, CAST('00:00:00.000000' AS TIME) AS time_col),
STRUCT(3 AS row_num, CAST('23:59:59.999999' AS TIME) AS time_col) ]) AS `times`
INNER JOIN
UNNEST([
STRUCT(1 AS row_num, TIMESTAMP('1970-01-01 00:00:00.000000') AS timestamp_col),
STRUCT(2 AS row_num, NULL AS timestamp_col),
STRUCT(3 AS row_num, TIMESTAMP('2018-04-11 23:59:59.999999') AS timestamp_col) ]) AS `timestamps`
WHERE
`bools`.row_num = `dates`.row_num
AND `bools`.row_num = `bytes`.row_num
AND `bools`.row_num = `datetimes`.row_num
AND `bools`.row_num = `floats`.row_num
AND `bools`.row_num = `int64s`.row_num
AND `bools`.row_num = `numerics`.row_num
AND `bools`.row_num = `strings`.row_num
AND `bools`.row_num = `times`.row_num
AND `bools`.row_num = `timestamps`.row_num
ORDER BY row_num ASC
""",
expected=pandas.DataFrame(
{
"row_num": pandas.Series([1, 2, 3], dtype="Int64"),
"bool_col": pandas.Series(
[True, False, None],
dtype="boolean"
if FEATURES.pandas_has_boolean_dtype
else "object",
),
"bytes_col": [None, bytes.fromhex("F1AC"), b""],
"date_col": pandas.Series(
[
datetime.date(1970, 1, 1),
None,
datetime.date(2018, 4, 11),
],
dtype=db_dtypes.DateDtype(),
),
"datetime_col": pandas.Series(
[
"1970-01-01 00:00:00.000000",
"2011-10-01 00:01:02.345678",
None,
],
dtype="datetime64[ns]",
),
"float_col": [None, -2.375, 0.0],
"int64_col": pandas.Series([-1, None, 0], dtype="Int64"),
"numeric_col": [
decimal.Decimal("123.456789"),
None,
decimal.Decimal("999.999999"),
],
"string_col": ["", "こんにちは", None],
"time_col": pandas.Series(
[None, "00:00:00", "23:59:59.999999"],
dtype=db_dtypes.TimeDtype(),
),
"timestamp_col": pandas.Series(
[
"1970-01-01 00:00:00.000000",
None,
"2018-04-11 23:59:59.999999",
],
dtype="datetime64[ns]",
).dt.tz_localize(datetime.timezone.utc),
}
),
),
id="scalar-types-nullable-normal-range",
),
pytest.param(
*QueryTestCase(
query="""
SELECT
bools.row_num AS row_num,
bool_col,
bytes_col,
date_col,
datetime_col,
float_col,
int64_col,
numeric_col,
string_col,
time_col,
timestamp_col
FROM
UNNEST([
STRUCT(1 AS row_num, CAST(NULL AS BOOL) AS bool_col) ]) AS `bools`
INNER JOIN
UNNEST([
STRUCT(1 AS row_num, CAST(NULL AS BYTES) AS bytes_col) ]) AS `bytes`
INNER JOIN
UNNEST([
STRUCT(1 AS row_num, CAST(NULL AS DATE) AS date_col) ]) AS `dates`
INNER JOIN
UNNEST([
STRUCT(1 AS row_num, CAST(NULL AS DATETIME) AS datetime_col) ]) AS `datetimes`
INNER JOIN
UNNEST([
STRUCT(1 AS row_num, CAST(NULL AS FLOAT64) AS float_col) ]) AS `floats`
INNER JOIN
UNNEST([
STRUCT(1 AS row_num, CAST(NULL AS INT64) AS int64_col) ]) AS `int64s`
INNER JOIN
UNNEST([
STRUCT(1 AS row_num, CAST(NULL AS NUMERIC) AS numeric_col) ]) AS `numerics`
INNER JOIN
UNNEST([
STRUCT(1 AS row_num, CAST(NULL AS STRING) AS string_col) ]) AS `strings`
INNER JOIN
UNNEST([
STRUCT(1 AS row_num, CAST(NULL AS TIME) AS time_col) ]) AS `times`
INNER JOIN
UNNEST([
STRUCT(1 AS row_num, CAST(NULL AS TIMESTAMP) AS timestamp_col) ]) AS `timestamps`
WHERE
`bools`.row_num = `dates`.row_num
AND `bools`.row_num = `bytes`.row_num
AND `bools`.row_num = `datetimes`.row_num
AND `bools`.row_num = `floats`.row_num
AND `bools`.row_num = `int64s`.row_num
AND `bools`.row_num = `numerics`.row_num
AND `bools`.row_num = `strings`.row_num
AND `bools`.row_num = `times`.row_num
AND `bools`.row_num = `timestamps`.row_num
ORDER BY row_num ASC
""",
expected=pandas.DataFrame(
{
"row_num": pandas.Series([1], dtype="Int64"),
"bool_col": pandas.Series(
[None],
dtype="boolean"
if FEATURES.pandas_has_boolean_dtype
else "object",
),
"bytes_col": [None],
"date_col": pandas.Series([None], dtype=db_dtypes.DateDtype(),),
"datetime_col": pandas.Series([None], dtype="datetime64[ns]",),
"float_col": pandas.Series([None], dtype="float64"),
"int64_col": pandas.Series([None], dtype="Int64"),
"numeric_col": [None],
"string_col": [None],
"time_col": pandas.Series([None], dtype=db_dtypes.TimeDtype(),),
"timestamp_col": pandas.Series(
[None], dtype="datetime64[ns]",
).dt.tz_localize(datetime.timezone.utc),
}
),
),
id="scalar-types-null",
),
pytest.param(
*QueryTestCase(
query="""
SELECT
bignumerics.row_num AS row_num,
bignumeric_col,
nullable_col,
null_col
FROM
UNNEST([
STRUCT(1 AS row_num, CAST('123456789.123456789' AS BIGNUMERIC) AS bignumeric_col),
STRUCT(2 AS row_num, CAST('-123456789.123456789' AS BIGNUMERIC) AS bignumeric_col),
STRUCT(3 AS row_num, CAST('987654321.987654321' AS BIGNUMERIC) AS bignumeric_col) ]) AS `bignumerics`
INNER JOIN
UNNEST([
STRUCT(1 AS row_num, CAST('123456789.123456789' AS BIGNUMERIC) AS nullable_col),
STRUCT(2 AS row_num, NULL AS nullable_col),
STRUCT(3 AS row_num, CAST('987654321.987654321' AS BIGNUMERIC) AS nullable_col) ]) AS `nullables`
INNER JOIN
UNNEST([
STRUCT(1 AS row_num, CAST(NULL AS BIGNUMERIC) AS null_col),
STRUCT(2 AS row_num, CAST(NULL AS BIGNUMERIC) AS null_col),
STRUCT(3 AS row_num, CAST(NULL AS BIGNUMERIC) AS null_col) ]) AS `nulls`
WHERE
`bignumerics`.row_num = `nullables`.row_num
AND `bignumerics`.row_num = `nulls`.row_num
ORDER BY row_num ASC
""",
expected=pandas.DataFrame(
{
"row_num": pandas.Series([1, 2, 3], dtype="Int64"),
# TODO: Support a special (nullable) dtype for decimal data.
# https://github.com/googleapis/python-db-dtypes-pandas/issues/49
"bignumeric_col": [
decimal.Decimal("123456789.123456789"),
decimal.Decimal("-123456789.123456789"),
decimal.Decimal("987654321.987654321"),
],
"nullable_col": [
decimal.Decimal("123456789.123456789"),
None,
decimal.Decimal("987654321.987654321"),
],
"null_col": [None, None, None],
}
),
),
id="bignumeric-normal-range",
marks=pytest.mark.skipif(
not FEATURES.bigquery_has_bignumeric,
reason="BIGNUMERIC not supported in this version of google-cloud-bigquery",
),
),
pytest.param(
*QueryTestCase(
query="""
SELECT
dates.row_num AS row_num,
date_col,
datetime_col,
timestamp_col
FROM
UNNEST([
STRUCT(1 AS row_num, DATE(1, 1, 1) AS date_col),
STRUCT(2 AS row_num, DATE(9999, 12, 31) AS date_col),
STRUCT(3 AS row_num, DATE(2262, 4, 12) AS date_col) ]) AS `dates`
INNER JOIN
UNNEST([
STRUCT(1 AS row_num, DATETIME('0001-01-01 00:00:00.000000') AS datetime_col),
STRUCT(2 AS row_num, DATETIME('9999-12-31 23:59:59.999999') AS datetime_col),
STRUCT(3 AS row_num, DATETIME('2262-04-11 23:47:16.854776') AS datetime_col) ]) AS `datetimes`
INNER JOIN
UNNEST([
STRUCT(1 AS row_num, TIMESTAMP('0001-01-01 00:00:00.000000') AS timestamp_col),
STRUCT(2 AS row_num, TIMESTAMP('9999-12-31 23:59:59.999999') AS timestamp_col),
STRUCT(3 AS row_num, TIMESTAMP('2262-04-11 23:47:16.854776') AS timestamp_col) ]) AS `timestamps`
WHERE
`dates`.row_num = `datetimes`.row_num
AND `dates`.row_num = `timestamps`.row_num
ORDER BY row_num ASC
""",
expected=pandas.DataFrame(
{
"row_num": pandas.Series([1, 2, 3], dtype="Int64"),
"date_col": pandas.Series(
[
datetime.date(1, 1, 1),
datetime.date(9999, 12, 31),
datetime.date(2262, 4, 12),
],
dtype="object",
),
"datetime_col": pandas.Series(
[
datetime.datetime(1, 1, 1, 0, 0, 0, 0),
datetime.datetime(9999, 12, 31, 23, 59, 59, 999999),
# One microsecond more than pandas.Timestamp.max.
datetime.datetime(2262, 4, 11, 23, 47, 16, 854776),
],
dtype="object",
),
"timestamp_col": pandas.Series(
[
datetime.datetime(
1, 1, 1, 0, 0, 0, 0, tzinfo=datetime.timezone.utc
),
datetime.datetime(
9999,
12,
31,
23,
59,
59,
999999,
tzinfo=datetime.timezone.utc,
),
# One microsecond more than pandas.Timestamp.max.
datetime.datetime(
2262,
4,
11,
23,
47,
16,
854776,
tzinfo=datetime.timezone.utc,
),
],
dtype="object",
),
}
),
use_bqstorage_apis={True, False}
if FEATURES.bigquery_has_accurate_timestamp
else {True},
),
id="issue365-extreme-datetimes",
),
],
)
def test_default_dtypes(
read_gbq, query, expected, use_bqstorage_apis, use_bqstorage_api
):
if use_bqstorage_api not in use_bqstorage_apis:
pytest.skip(f"use_bqstorage_api={use_bqstorage_api} not supported.")
result = read_gbq(query, use_bqstorage_api=use_bqstorage_api)
pandas.testing.assert_frame_equal(result, expected)
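The `issue365-extreme-datetimes` case above expects plain `datetime` objects (dtype `object`) because values like `2262-04-11 23:47:16.854776` lie one microsecond past `pandas.Timestamp.max`, the limit imposed by pandas' nanosecond-precision `datetime64[ns]`. The stdlib `datetime` type has no such limit and covers the full year 1–9999 range:

```python
import datetime

# one microsecond past pandas.Timestamp.max, but fine for stdlib datetime
dt = datetime.datetime(2262, 4, 11, 23, 47, 16, 854776,
                       tzinfo=datetime.timezone.utc)
print(dt.isoformat())                        # → 2262-04-11T23:47:16.854776+00:00
print(datetime.date.min, datetime.date.max)  # → 0001-01-01 9999-12-31
```

This is why the expected DataFrames for that case build their date, datetime, and timestamp columns as `object` Series of stdlib values rather than `datetime64[ns]`.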
@pytest.mark.parametrize(["use_bqstorage_api"], [(True,), (False,)])
def test_empty_dataframe(read_gbq, use_bqstorage_api):
# Bug fix for https://github.com/pandas-dev/pandas/issues/10273 and
# https://github.com/googleapis/python-bigquery-pandas/issues/299
query = """
SELECT
bools.row_num AS row_num,
bool_col,
bytes_col,
date_col,
datetime_col,
float_col,
int64_col,
numeric_col,
string_col,
time_col,
timestamp_col
FROM
UNNEST([
STRUCT(1 AS row_num, TRUE AS bool_col) ]) AS `bools`
INNER JOIN
UNNEST([
STRUCT(1 AS row_num, CAST('F1AC' AS BYTES FORMAT 'HEX') AS bytes_col) ]) AS `bytes`
INNER JOIN
UNNEST([
STRUCT(1 AS row_num, DATE(2018, 4, 11) AS date_col) ]) AS `dates`
INNER JOIN
UNNEST([
STRUCT(1 AS row_num, DATETIME('2011-10-01 00:01:02.345678') AS datetime_col) ]) AS `datetimes`
INNER JOIN
UNNEST([
STRUCT(1 AS row_num, -2.375 AS float_col) ]) AS `floats`
INNER JOIN
UNNEST([
STRUCT(1 AS row_num, 1234 AS int64_col) ]) AS `int64s`
INNER JOIN
UNNEST([
STRUCT(1 AS row_num, CAST('123.456789' AS NUMERIC) AS numeric_col) ]) AS `numerics`
INNER JOIN
UNNEST([
STRUCT(1 AS row_num, 'abcdefghijklmnopqrstuvwxyz' AS string_col) ]) AS `strings`
INNER JOIN
UNNEST([
STRUCT(1 AS row_num, CAST('09:08:07.654321' AS TIME) AS time_col) ]) AS `times`
INNER JOIN
UNNEST([
STRUCT(1 AS row_num, TIMESTAMP('1998-09-04 12:34:56.789101') AS timestamp_col) ]) AS `timestamps`
WHERE
`bools`.row_num = `dates`.row_num
AND `bools`.row_num = `bytes`.row_num
AND `bools`.row_num = `datetimes`.row_num
AND `bools`.row_num = `floats`.row_num
AND `bools`.row_num = `int64s`.row_num
AND `bools`.row_num = `numerics`.row_num
AND `bools`.row_num = `strings`.row_num
AND `bools`.row_num = `times`.row_num
AND `bools`.row_num = `timestamps`.row_num
AND `bools`.row_num = -1
ORDER BY row_num ASC
"""
expected = pandas.DataFrame(
{
"row_num": pandas.Series([], dtype="Int64"),
"bool_col": pandas.Series(
[], dtype="boolean" if FEATURES.pandas_has_boolean_dtype else "bool",
),
"bytes_col": pandas.Series([], dtype="object"),
"date_col": pandas.Series([], dtype=db_dtypes.DateDtype(),),
"datetime_col": pandas.Series([], dtype="datetime64[ns]",),
"float_col": pandas.Series([], dtype="float64"),
"int64_col": pandas.Series([], dtype="Int64"),
"numeric_col": pandas.Series([], dtype="object"),
"string_col": pandas.Series([], dtype="object"),
"time_col": pandas.Series([], dtype=db_dtypes.TimeDtype(),),
"timestamp_col": pandas.Series([], dtype="datetime64[ns]",).dt.tz_localize(
datetime.timezone.utc
),
}
)
result = read_gbq(query, use_bqstorage_api=use_bqstorage_api)
pandas.testing.assert_frame_equal(result, expected, check_index_type=False)
| 39.355263 | 107 | 0.514669 | 2,848 | 23,928 | 4.159059 | 0.079705 | 0.102828 | 0.070241 | 0.046602 | 0.847362 | 0.808274 | 0.760068 | 0.745209 | 0.669818 | 0.640523 | 0 | 0.098157 | 0.369442 | 23,928 | 607 | 108 | 39.420099 | 0.686904 | 0.021105 | 0 | 0.636672 | 0 | 0.061121 | 0.541152 | 0.03434 | 0 | 0 | 0.001538 | 0.001647 | 0.003396 | 1 | 0.003396 | false | 0 | 0.013582 | 0 | 0.016978 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
a15272362b6c152cb7ea1378381b8c9bbea849e8 | 325 | py | Python | src/pyrin/task_queues/celery/audit/tasks.py | wilsonGmn/pyrin | 25dbe3ce17e80a43eee7cfc7140b4c268a6948e0 | [
"BSD-3-Clause"
] | null | null | null | src/pyrin/task_queues/celery/audit/tasks.py | wilsonGmn/pyrin | 25dbe3ce17e80a43eee7cfc7140b4c268a6948e0 | [
"BSD-3-Clause"
] | null | null | null | src/pyrin/task_queues/celery/audit/tasks.py | wilsonGmn/pyrin | 25dbe3ce17e80a43eee7cfc7140b4c268a6948e0 | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
"""
celery audit tasks module.
"""
import pyrin.task_queues.celery.audit.services as celery_audit_services
from pyrin.task_queues.celery.decorators import task
@task
def audit_task():
    """
    this method will be used to check status of celery.
    """
    celery_audit_services.perform_job()
| 18.055556 | 71 | 0.713846 | 45 | 325 | 4.977778 | 0.6 | 0.196429 | 0.254464 | 0.1875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003717 | 0.172308 | 325 | 17 | 72 | 19.117647 | 0.828996 | 0.310769 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | true | 0 | 0.4 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
a1a3112ca71a042de55c975e2c246dc4dc78b5fc | 112 | py | Python | ga_for_fs/__init__.py | MerelyMax/constrained-ga-for-fs | 2d889908ae78f1e072ebdbfd22e712909f9de498 | [
"MIT"
] | null | null | null | ga_for_fs/__init__.py | MerelyMax/constrained-ga-for-fs | 2d889908ae78f1e072ebdbfd22e712909f9de498 | [
"MIT"
] | null | null | null | ga_for_fs/__init__.py | MerelyMax/constrained-ga-for-fs | 2d889908ae78f1e072ebdbfd22e712909f9de498 | [
"MIT"
] | null | null | null | from .ga_for_fs import GeneticAlgorithm
from . import _version
__version__ = _version.get_versions()['version']
| 28 | 48 | 0.8125 | 14 | 112 | 5.857143 | 0.642857 | 0.341463 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.098214 | 112 | 3 | 49 | 37.333333 | 0.811881 | 0 | 0 | 0 | 0 | 0 | 0.0625 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
b81ed6a304ac8e8741429807682ef6c41f27250d | 335 | py | Python | modules/dbnd-airflow-monitor/src/airflow_monitor/_plugin.py | busunkim96/dbnd | 0191fdcd4c4fbd35006f1026d1a55b2abab9097b | [
"Apache-2.0"
] | 224 | 2020-01-02T10:46:37.000Z | 2022-03-02T13:54:08.000Z | modules/dbnd-airflow-monitor/src/airflow_monitor/_plugin.py | busunkim96/dbnd | 0191fdcd4c4fbd35006f1026d1a55b2abab9097b | [
"Apache-2.0"
] | 16 | 2020-03-11T09:37:58.000Z | 2022-01-26T10:22:08.000Z | modules/dbnd-airflow-monitor/src/airflow_monitor/_plugin.py | busunkim96/dbnd | 0191fdcd4c4fbd35006f1026d1a55b2abab9097b | [
"Apache-2.0"
] | 24 | 2020-03-24T13:53:50.000Z | 2022-03-22T11:55:18.000Z | import logging
import dbnd
logger = logging.getLogger(__name__)
@dbnd.hookimpl
def dbnd_get_commands():
    from airflow_monitor.multiserver.cmd_multiserver import airflow_monitor_v2
    from airflow_monitor.multiserver.cmd_liveness_probe import airflow_monitor_v2_alive
    return [airflow_monitor_v2, airflow_monitor_v2_alive]
| 22.333333 | 87 | 0.835821 | 44 | 335 | 5.886364 | 0.454545 | 0.324324 | 0.247104 | 0.223938 | 0.247104 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013514 | 0.116418 | 335 | 14 | 88 | 23.928571 | 0.861486 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.5 | 0 | 0.75 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
1a10e20ec66b4e52389fe342fb10be44665de3b1 | 99 | py | Python | leetcode/bulb-switcher.py | hg-pyun/algorithm | cf92483c399f05e488b6febc79c80620f115fadf | [
"MIT"
] | 7 | 2018-09-15T13:57:37.000Z | 2022-03-13T10:01:56.000Z | leetcode/bulb-switcher.py | hg-pyun/algorithm | cf92483c399f05e488b6febc79c80620f115fadf | [
"MIT"
] | 1 | 2019-04-26T07:02:28.000Z | 2019-04-26T07:02:28.000Z | leetcode/bulb-switcher.py | hg-pyun/algorithm | cf92483c399f05e488b6febc79c80620f115fadf | [
"MIT"
] | 1 | 2020-05-03T23:43:38.000Z | 2020-05-03T23:43:38.000Z | class Solution:
    def bulbSwitch(self, n: int) -> int:
        return math.floor(math.sqrt(n))
| 19.8 | 40 | 0.575758 | 13 | 99 | 4.384615 | 0.846154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.292929 | 99 | 4 | 41 | 24.75 | 0.814286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
1a2bbe59ec30bb594315020bbbd5c3db31ea31a1 | 320 | py | Python | textile/textile/doctype/expenses/expenses.py | venkat102/Textile-billing | 024aa93b314de8ce3f5d6ce6262b802c423f9275 | [
"MIT"
] | null | null | null | textile/textile/doctype/expenses/expenses.py | venkat102/Textile-billing | 024aa93b314de8ce3f5d6ce6262b802c423f9275 | [
"MIT"
] | null | null | null | textile/textile/doctype/expenses/expenses.py | venkat102/Textile-billing | 024aa93b314de8ce3f5d6ce6262b802c423f9275 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Copyright (c) 2021, Venkatesh and contributors
# For license information, please see license.txt
from __future__ import unicode_literals
import frappe
from frappe.model.document import Document
@frappe.whitelist()
def get_date():
    return frappe.utils.today()

class Expenses(Document):
    pass
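`@frappe.whitelist()` marks `get_date` as safe to call from the client over HTTP/RPC. A minimal stand-in for that registration pattern (hypothetical names, not Frappe's actual implementation) looks like:

```python
WHITELISTED = {}

def whitelist():
    # Decorator factory: record the function under its name so a
    # dispatcher can expose only registered functions to HTTP clients.
    def register(fn):
        WHITELISTED[fn.__name__] = fn
        return fn
    return register

@whitelist()
def get_date():
    return "2021-01-01"

# A dispatcher would refuse any name not present in WHITELISTED.
assert WHITELISTED["get_date"]() == "2021-01-01"
```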
| 21.333333 | 49 | 0.76875 | 42 | 320 | 5.714286 | 0.785714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017986 | 0.13125 | 320 | 14 | 50 | 22.857143 | 0.845324 | 0.3625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | true | 0.125 | 0.375 | 0.125 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 5 |
a7df5f0558a0acd5402091e03b703d28a20d78c5 | 59 | py | Python | src/python/src/rmq/exceptions/__init__.py | halimov-oa/scrapy-boilerplate | fe3c552fed26bedb0618c245ab923aa34a89ac9d | [
"MIT"
] | 34 | 2019-12-13T10:31:39.000Z | 2022-03-09T15:59:07.000Z | src/python/src/rmq/exceptions/__init__.py | halimov-oa/scrapy-boilerplate | fe3c552fed26bedb0618c245ab923aa34a89ac9d | [
"MIT"
] | 49 | 2020-02-25T19:41:09.000Z | 2022-02-27T12:05:25.000Z | src/python/src/rmq/exceptions/__init__.py | halimov-oa/scrapy-boilerplate | fe3c552fed26bedb0618c245ab923aa34a89ac9d | [
"MIT"
] | 23 | 2019-12-23T15:19:42.000Z | 2022-03-09T16:00:15.000Z | from .consumed_data_corrupted import ConsumedDataCorrupted
| 29.5 | 58 | 0.915254 | 6 | 59 | 8.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.067797 | 59 | 1 | 59 | 59 | 0.945455 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
a7e041217fff3c0c79df400d2b46d6600fba782b | 1,793 | py | Python | tests/test_styles.py | tombulled/case | befa226bdee6ca673fa2795ede85f6e27865adc1 | [
"MIT"
] | null | null | null | tests/test_styles.py | tombulled/case | befa226bdee6ca673fa2795ede85f6e27865adc1 | [
"MIT"
] | 1 | 2022-01-07T22:58:17.000Z | 2022-03-24T23:13:33.000Z | tests/test_styles.py | tombulled/case | befa226bdee6ca673fa2795ede85f6e27865adc1 | [
"MIT"
] | null | null | null | import case
import pytest
@pytest.fixture
def string() -> str:
    return "MY __mask__ --ofSanityIS.slowly#Slipping"

def test_lower(string) -> None:
    assert case.lower(string) == "my mask of sanity is slowly slipping"

def test_upper(string) -> None:
    assert case.upper(string) == "MY MASK OF SANITY IS SLOWLY SLIPPING"

def test_title(string) -> None:
    assert case.title(string) == "My Mask Of Sanity Is Slowly Slipping"

def test_sentence(string) -> None:
    assert case.sentence(string) == "My mask of sanity is slowly slipping"

def test_snake(string) -> None:
    assert case.snake(string) == "my_mask_of_sanity_is_slowly_slipping"

def test_helter(string) -> None:
    assert case.helter(string) == "My_Mask_Of_Sanity_Is_Slowly_Slipping"

def test_macro(string) -> None:
    assert case.macro(string) == "MY_MASK_OF_SANITY_IS_SLOWLY_SLIPPING"

def test_kebab(string) -> None:
    assert case.kebab(string) == "my-mask-of-sanity-is-slowly-slipping"

def test_train(string) -> None:
    assert case.train(string) == "My-Mask-Of-Sanity-Is-Slowly-Slipping"

def test_cobol(string) -> None:
    assert case.cobol(string) == "MY-MASK-OF-SANITY-IS-SLOWLY-SLIPPING"

def test_flat(string) -> None:
    assert case.flat(string) == "mymaskofsanityisslowlyslipping"

def test_flush(string) -> None:
    assert case.flush(string) == "MYMASKOFSANITYISSLOWLYSLIPPING"

def test_pascal(string) -> None:
    assert case.pascal(string) == "MyMaskOfSanityIsSlowlySlipping"

def test_camel(string) -> None:
    assert case.camel(string) == "myMaskOfSanityIsSlowlySlipping"

def test_dot(string) -> None:
    assert case.dot(string) == "my.mask.of.sanity.is.slowly.slipping"

def test_path(string) -> None:
    assert case.path(string) == "my/mask/of/sanity/is/slowly/slipping"
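The fixture string mixes underscores, dashes, dots, and camelCase, so a library like `case` must first normalize everything into word tokens before joining with the target separator. A rough sketch of that tokenize-then-join approach (an assumption about the technique, not `case`'s actual code):

```python
import re

def words(s: str) -> list:
    # Split camelCase boundaries first, then drop every run of
    # non-alphanumeric characters (separators, masks, punctuation).
    s = re.sub(r'(?<=[a-z])(?=[A-Z])', ' ', s)
    return [w.lower() for w in re.split(r'[^0-9A-Za-z]+', s) if w]

def snake(s: str) -> str:
    return '_'.join(words(s))

assert snake("MY __mask__ --ofSanityIS.slowly#Slipping") == \
    "my_mask_of_sanity_is_slowly_slipping"
```

The same `words` tokenizer would back every other style — kebab joins with `-`, flat with `''`, pascal capitalizes each token.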
| 24.902778 | 74 | 0.720022 | 247 | 1,793 | 5.072874 | 0.145749 | 0.089385 | 0.20431 | 0.255387 | 0.406225 | 0.406225 | 0.406225 | 0.406225 | 0.377494 | 0.377494 | 0 | 0 | 0.148912 | 1,793 | 71 | 75 | 25.253521 | 0.821101 | 0 | 0 | 0 | 0 | 0 | 0.330173 | 0.243168 | 0 | 0 | 0 | 0 | 0.432432 | 1 | 0.459459 | false | 0 | 0.054054 | 0.027027 | 0.540541 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 5 |
a7fed40782ecc3e05b51e3cde411761961cedaeb | 150 | py | Python | vispy/scene/systems/__init__.py | mssurajkaiga/vispy-experiments | 0f3a19e0f4ac46608da792cbd36ebe59b036bce7 | [
"BSD-3-Clause"
] | 1 | 2017-06-12T16:24:11.000Z | 2017-06-12T16:24:11.000Z | vispy/scene/systems/__init__.py | mssurajkaiga/vispy-experiments | 0f3a19e0f4ac46608da792cbd36ebe59b036bce7 | [
"BSD-3-Clause"
] | null | null | null | vispy/scene/systems/__init__.py | mssurajkaiga/vispy-experiments | 0f3a19e0f4ac46608da792cbd36ebe59b036bce7 | [
"BSD-3-Clause"
] | null | null | null | """
A collection of systems.
"""
from __future__ import division
from .drawingsystem import DrawingSystem # noqa
from ..base import System # noqa
| 16.666667 | 48 | 0.746667 | 18 | 150 | 6 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173333 | 150 | 8 | 49 | 18.75 | 0.870968 | 0.233333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
c5120604a9fee0ede440988849451c6aab9418c7 | 429 | py | Python | chainercmd/template/__init__.py | mitmul/chainercmd | 1925d7a8c27bdbc8946483090688c45a292e37f7 | [
"MIT"
] | 10 | 2017-07-18T12:29:05.000Z | 2018-07-18T17:49:24.000Z | chainercmd/template/__init__.py | mitmul/chainercmd | 1925d7a8c27bdbc8946483090688c45a292e37f7 | [
"MIT"
] | null | null | null | chainercmd/template/__init__.py | mitmul/chainercmd | 1925d7a8c27bdbc8946483090688c45a292e37f7 | [
"MIT"
] | 2 | 2021-01-08T01:17:04.000Z | 2021-01-23T09:28:16.000Z | import yaml
import os
from chainercmd.template import custom_extension # NOQA
from chainercmd.template import evaluator_creator # NOQA
from chainercmd.template import dataset # NOQA
from chainercmd.template import loss # NOQA
from chainercmd.template import model # NOQA
from chainercmd.template import updater_creator # NOQA
dname = os.path.dirname(__file__)
config_base = yaml.load(open('{}/config.yml'.format(dname)))
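Note that bare `yaml.load` (as used above) is deprecated in PyYAML ≥ 5.1 and warns unless a `Loader` is passed, and the `open(...)` handle here is never closed. A sketch of the safer equivalent idiom (not a change to the original file):

```python
import yaml

def load_config(path: str):
    # safe_load refuses arbitrary Python object construction, which
    # plain yaml.load historically allowed; the context manager
    # guarantees the file handle is closed.
    with open(path) as f:
        return yaml.safe_load(f)
```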
| 33 | 60 | 0.799534 | 57 | 429 | 5.877193 | 0.438596 | 0.250746 | 0.39403 | 0.501493 | 0.477612 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125874 | 429 | 12 | 61 | 35.75 | 0.893333 | 0.067599 | 0 | 0 | 0 | 0 | 0.033079 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.8 | 0 | 0.8 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
c51e2574e31908b1b268f1ad83f64bdf81ecb682 | 224 | py | Python | core/src/zeit/vgwort/tests/test_doctest.py | rickdg/vivi | 16134ac954bf8425646d4ad47bdd1f372e089355 | [
"BSD-3-Clause"
] | 5 | 2019-05-16T09:51:29.000Z | 2021-05-31T09:30:03.000Z | core/src/zeit/vgwort/tests/test_doctest.py | rickdg/vivi | 16134ac954bf8425646d4ad47bdd1f372e089355 | [
"BSD-3-Clause"
] | 107 | 2019-05-24T12:19:02.000Z | 2022-03-23T15:05:56.000Z | core/src/zeit/vgwort/tests/test_doctest.py | rickdg/vivi | 16134ac954bf8425646d4ad47bdd1f372e089355 | [
"BSD-3-Clause"
] | 3 | 2020-08-14T11:01:17.000Z | 2022-01-08T17:32:19.000Z | import zeit.cms.testing
import zeit.vgwort.testing
def test_suite():
    return zeit.cms.testing.FunctionalDocFileSuite(
        'README.txt',
        layer=zeit.vgwort.testing.XMLRPC_LAYER,
        package='zeit.vgwort')
| 22.4 | 51 | 0.705357 | 27 | 224 | 5.777778 | 0.555556 | 0.192308 | 0.179487 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.183036 | 224 | 9 | 52 | 24.888889 | 0.852459 | 0 | 0 | 0 | 0 | 0 | 0.09375 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | true | 0 | 0.285714 | 0.142857 | 0.571429 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
c52ec475876b6fc6d52087a15ec265f4ecc92d23 | 1,807 | py | Python | pyopenproject/business/services/relation_service_impl.py | webu/pyopenproject | 40b2cb9fe0fa3f89bc0fe2a3be323422d9ecf966 | [
"MIT"
] | 5 | 2021-02-25T15:54:28.000Z | 2021-04-22T15:43:36.000Z | pyopenproject/business/services/relation_service_impl.py | webu/pyopenproject | 40b2cb9fe0fa3f89bc0fe2a3be323422d9ecf966 | [
"MIT"
] | 7 | 2021-03-15T16:26:23.000Z | 2022-03-16T13:45:18.000Z | pyopenproject/business/services/relation_service_impl.py | webu/pyopenproject | 40b2cb9fe0fa3f89bc0fe2a3be323422d9ecf966 | [
"MIT"
] | 6 | 2021-06-18T18:59:11.000Z | 2022-03-27T04:58:52.000Z | from pyopenproject.business.relation_service import RelationService
from pyopenproject.business.services.command.relation.delete import Delete
from pyopenproject.business.services.command.relation.find import Find
from pyopenproject.business.services.command.relation.find_all import FindAll
from pyopenproject.business.services.command.relation.find_by_context import FindByContext
from pyopenproject.business.services.command.relation.find_schema import FindSchema
from pyopenproject.business.services.command.relation.find_schema_by_type import FindSchemaByType
from pyopenproject.business.services.command.relation.update import Update
from pyopenproject.business.services.command.relation.update_form import UpdateForm
class RelationServiceImpl(RelationService):

    def __init__(self, connection):
        """Constructor for class RelationServiceImpl, from RelationService

        :param connection: The connection data
        """
        super().__init__(connection)

    def find(self, relation):
        return Find(self.connection, relation).execute()

    def update(self, relation):
        return Update(self.connection, relation).execute()

    def delete(self, relation):
        return Delete(self.connection, relation).execute()

    def find_schema(self):
        return FindSchema(self.connection).execute()

    def find_schema_by_type(self, relation_type):
        return FindSchemaByType(self.connection, relation_type).execute()

    def find_all(self, filters=None, sort_by=None):
        return list(FindAll(self.connection, filters, sort_by).execute())

    def update_form(self, relation, form):
        return UpdateForm(self.connection, relation, form=form).execute()

    def find_by_context(self, context):
        return FindByContext(self.connection, context).execute()
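Every service method above delegates to a small command object (`Find`, `Update`, `Delete`, ...) constructed with the connection and executed immediately. The underlying command pattern, reduced to a generic sketch with hypothetical names:

```python
class Command:
    # Base command: capture the connection and arguments at
    # construction time, defer the work to execute().
    def __init__(self, connection, *args):
        self.connection = connection
        self.args = args

    def execute(self):
        raise NotImplementedError

class Find(Command):
    # A real command would issue an HTTP request via self.connection;
    # this one just echoes the lookup to show the shape of the pattern.
    def execute(self):
        return {"found": self.args[0], "via": self.connection}

class Service:
    def __init__(self, connection):
        self.connection = connection

    def find(self, item):
        return Find(self.connection, item).execute()

assert Service("conn").find("rel-1") == {"found": "rel-1", "via": "conn"}
```

Keeping each API call in its own class keeps the service facade thin and makes individual operations easy to test in isolation.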
| 42.023256 | 97 | 0.775872 | 206 | 1,807 | 6.665049 | 0.199029 | 0.111435 | 0.163875 | 0.19228 | 0.381646 | 0.311726 | 0.276766 | 0.084487 | 0 | 0 | 0 | 0 | 0.136691 | 1,807 | 42 | 98 | 43.02381 | 0.880128 | 0.056447 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.321429 | false | 0 | 0.321429 | 0.285714 | 0.964286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 5 |
c54d20cb71d4bc6ba824130ab548f08ca7af2f40 | 19 | py | Python | posthog/version.py | thinhnguyenuit/posthog | 4758e66790485587d29a617174158d07341342f8 | [
"MIT"
] | null | null | null | posthog/version.py | thinhnguyenuit/posthog | 4758e66790485587d29a617174158d07341342f8 | [
"MIT"
] | 1 | 2022-02-15T00:47:35.000Z | 2022-02-15T00:47:35.000Z | posthog/version.py | thinhnguyenuit/posthog | 4758e66790485587d29a617174158d07341342f8 | [
"MIT"
] | 1 | 2021-09-08T19:45:03.000Z | 2021-09-08T19:45:03.000Z | VERSION = "1.27.0"
| 9.5 | 18 | 0.578947 | 4 | 19 | 2.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.25 | 0.157895 | 19 | 1 | 19 | 19 | 0.4375 | 0 | 0 | 0 | 0 | 0 | 0.315789 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
3d91da8f7dae1a95f7aea4cb4b5b90273095529e | 41 | py | Python | open_workshop.py | stefanw/carpenter | 698b5bc81473772ed212ea6794d1d2998143f52a | [
"BSD-2-Clause"
] | 4 | 2017-10-10T06:15:10.000Z | 2019-03-11T07:29:20.000Z | open_workshop.py | rufuspollock/carpenter | 698b5bc81473772ed212ea6794d1d2998143f52a | [
"BSD-2-Clause"
] | null | null | null | open_workshop.py | rufuspollock/carpenter | 698b5bc81473772ed212ea6794d1d2998143f52a | [
"BSD-2-Clause"
] | 5 | 2016-07-19T04:59:45.000Z | 2019-03-04T08:59:10.000Z | from carpenter.web import app
app.run()
| 10.25 | 29 | 0.756098 | 7 | 41 | 4.428571 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.146341 | 41 | 3 | 30 | 13.666667 | 0.885714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
3db46f5c0f711cc4b967fba22e62232ba1893ceb | 91 | py | Python | elastipy/dump/__init__.py | defgsus/elastipy | c1144ab39fa70571ba0e02ccf41d380a8a1bd730 | [
"Apache-2.0"
] | 1 | 2021-02-17T17:50:28.000Z | 2021-02-17T17:50:28.000Z | elastipy/dump/__init__.py | defgsus/elastipy | c1144ab39fa70571ba0e02ccf41d380a8a1bd730 | [
"Apache-2.0"
] | 2 | 2021-03-29T02:09:41.000Z | 2022-03-01T20:09:48.000Z | elastipy/dump/__init__.py | netzkolchose/elastipy | c1144ab39fa70571ba0e02ccf41d380a8a1bd730 | [
"Apache-2.0"
] | null | null | null | from .heatmap import Heatmap
from .table import Table
from .textplotter import TextPlotter
| 22.75 | 36 | 0.835165 | 12 | 91 | 6.333333 | 0.416667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.131868 | 91 | 3 | 37 | 30.333333 | 0.962025 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
3db626d0b18ffa0971184916775b6c28a042bc61 | 224 | py | Python | core/management/commands/rebuild_all_cache.py | uktrade/directory-cms | 8c8d13ce29ea74ddce7a40f3dd29c8847145d549 | [
"MIT"
] | 6 | 2018-03-20T11:19:07.000Z | 2021-10-05T07:53:11.000Z | core/management/commands/rebuild_all_cache.py | uktrade/directory-cms | 8c8d13ce29ea74ddce7a40f3dd29c8847145d549 | [
"MIT"
] | 802 | 2018-02-05T14:16:13.000Z | 2022-02-10T10:59:21.000Z | core/management/commands/rebuild_all_cache.py | uktrade/directory-cms | 8c8d13ce29ea74ddce7a40f3dd29c8847145d549 | [
"MIT"
] | 6 | 2019-01-22T13:19:37.000Z | 2019-07-01T10:35:26.000Z | from django.core.management import BaseCommand
from core.cache import rebuild_all_cache
class Command(BaseCommand):
    help = 'Rebuild the redis cache'

    def handle(self, *args, **options):
        rebuild_all_cache()
| 22.4 | 46 | 0.736607 | 29 | 224 | 5.551724 | 0.655172 | 0.124224 | 0.186335 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.178571 | 224 | 9 | 47 | 24.888889 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0.102679 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.333333 | 0 | 0.833333 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
3db9a701ba7c198dcdf30a194cc3839bc3a18b90 | 110 | py | Python | info/modules/user/__init__.py | MINDONMARS/information | b42e50e4adf29fb974581713266f967b12b812da | [
"MIT"
] | null | null | null | info/modules/user/__init__.py | MINDONMARS/information | b42e50e4adf29fb974581713266f967b12b812da | [
"MIT"
] | null | null | null | info/modules/user/__init__.py | MINDONMARS/information | b42e50e4adf29fb974581713266f967b12b812da | [
"MIT"
] | null | null | null | from flask import Blueprint
user_blue = Blueprint('user', __name__, url_prefix='/user')
from .views import * | 22 | 59 | 0.754545 | 15 | 110 | 5.133333 | 0.666667 | 0.337662 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.127273 | 110 | 5 | 60 | 22 | 0.802083 | 0 | 0 | 0 | 0 | 0 | 0.081081 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0.666667 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 5 |
3dbde1f4349765ae595dc3094e1f93130a929ea3 | 1,612 | py | Python | beacon/swagger_server/models/__init__.py | NCATS-Tangerine/rhea-beacon | ccf6e790dc4c26eb4853b1bcb78382b84fbfe238 | [
"MIT"
] | 1 | 2019-07-29T08:17:37.000Z | 2019-07-29T08:17:37.000Z | beacon/swagger_server/models/__init__.py | NCATS-Tangerine/rhea-beacon | ccf6e790dc4c26eb4853b1bcb78382b84fbfe238 | [
"MIT"
] | 10 | 2018-08-18T03:13:08.000Z | 2019-02-05T20:04:15.000Z | beacon/swagger_server/models/__init__.py | NCATS-Tangerine/tkg-beacon | a1738a9a852554427deccd3e6b7a910354af9fbb | [
"MIT"
] | null | null | null | # coding: utf-8
# flake8: noqa
from __future__ import absolute_import
# import models into model package
from swagger_server.models.beacon_concept import BeaconConcept
from swagger_server.models.beacon_concept_category import BeaconConceptCategory
from swagger_server.models.beacon_concept_detail import BeaconConceptDetail
from swagger_server.models.beacon_concept_with_details import BeaconConceptWithDetails
from swagger_server.models.beacon_knowledge_map_object import BeaconKnowledgeMapObject
from swagger_server.models.beacon_knowledge_map_predicate import BeaconKnowledgeMapPredicate
from swagger_server.models.beacon_knowledge_map_statement import BeaconKnowledgeMapStatement
from swagger_server.models.beacon_knowledge_map_subject import BeaconKnowledgeMapSubject
from swagger_server.models.beacon_predicate import BeaconPredicate
from swagger_server.models.beacon_statement import BeaconStatement
from swagger_server.models.beacon_statement_annotation import BeaconStatementAnnotation
from swagger_server.models.beacon_statement_citation import BeaconStatementCitation
from swagger_server.models.beacon_statement_object import BeaconStatementObject
from swagger_server.models.beacon_statement_predicate import BeaconStatementPredicate
from swagger_server.models.beacon_statement_subject import BeaconStatementSubject
from swagger_server.models.beacon_statement_with_details import BeaconStatementWithDetails
from swagger_server.models.exact_match_response import ExactMatchResponse
from swagger_server.models.local_namespace import LocalNamespace
from swagger_server.models.namespace import Namespace
| 64.48 | 92 | 0.915012 | 186 | 1,612 | 7.596774 | 0.27957 | 0.147912 | 0.228592 | 0.309271 | 0.426752 | 0.406228 | 0.116065 | 0 | 0 | 0 | 0 | 0.001317 | 0.057692 | 1,612 | 24 | 93 | 67.166667 | 0.928901 | 0.036601 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
3dbfc63e71fff8f8ea0c7ad3c93b5afacfef1fe6 | 45,898 | py | Python | stage/test_kafka_origin_cluster.py | anubandhan/datacollector-tests | 301c024c66d68353735256b262b681dd05ba16cc | [
"Apache-2.0"
] | null | null | null | stage/test_kafka_origin_cluster.py | anubandhan/datacollector-tests | 301c024c66d68353735256b262b681dd05ba16cc | [
"Apache-2.0"
] | 1 | 2019-04-24T11:06:38.000Z | 2019-04-24T11:06:38.000Z | stage/test_kafka_origin_cluster.py | anubandhan/datacollector-tests | 301c024c66d68353735256b262b681dd05ba16cc | [
"Apache-2.0"
] | 2 | 2019-05-24T06:34:37.000Z | 2020-03-30T11:48:18.000Z | # Copyright 2018 StreamSets Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import base64
import io
import json
import logging
import string
import random
import avro
import pytest
from avro.datafile import DataFileWriter
from streamsets.sdk.utils import Version
from streamsets.testframework.environments.cloudera import ClouderaManagerCluster
from streamsets.testframework.markers import cluster
from streamsets.testframework.utils import get_random_string
from stage.utils.utils_xml import get_xml_output_field
logger = logging.getLogger(__name__)
# Specify a port for SDC RPC stages to use.
SDC_RPC_PORT = 20000
SNAPSHOT_TIMEOUT_SEC = 150
MAX_BATCH_WAIT_TIME = 30
MIN_SDC_VERSION_WITH_SPARK_2_LIB = Version('3.3.0')
SCHEMA = {
    'namespace': 'example.avro',
    'type': 'record',
    'name': 'Employee',
    'fields': [
        {'name': 'name', 'type': 'string'},
        {'name': 'age', 'type': 'int'},
        {'name': 'emails', 'type': {'type': 'array', 'items': 'string'}},
        {'name': 'boss', 'type': ['Employee', 'null']}
    ]
}
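The `boss` field's union type `['Employee', 'null']` makes the schema recursive: an employee may nest another employee, and the `'null'` branch terminates the chain. A plain-dict record matching SCHEMA (a sketch; an actual test would serialize it through the `DataFileWriter` imported above):

```python
# A record conforming to the recursive Employee schema above.
employee = {
    'name': 'Alex',
    'age': 30,
    'emails': ['alex@example.com'],
    'boss': {
        'name': 'Xavi',
        'age': 45,
        'emails': [],
        'boss': None,  # the 'null' union branch ends the recursion
    },
}

assert employee['boss']['boss'] is None
```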
@pytest.fixture(scope='function')
def port():
    return random.randrange(20000, 25000)
@pytest.fixture(autouse=True)
def kafka_check(cluster):
    if isinstance(cluster, ClouderaManagerCluster) and not hasattr(cluster, 'kafka'):
        pytest.skip('Kafka tests require Kafka to be installed on the cluster')
@pytest.fixture(autouse=True)
def spark2_check(cluster):
    """
    CDH 5 doesn't have Spark 2 installed by default; it's a separate parcel that might or might not be present. Luckily
    CDH 6 doesn't have the same problem, as Spark 2 is the default version shipped there. We have depended on Spark 2 for a
    while now, so unless we're sure we have all the services we need, we skip the test.
    """
    if isinstance(cluster, ClouderaManagerCluster) and cluster.get_cluster_version().startswith("5.") and not hasattr(cluster, 'spark2_on_yarn'):
        pytest.skip('Kafka tests require Spark 2 to be installed on the cluster')
@cluster('cdh')
def test_kafka_origin_cluster(sdc_builder, sdc_executor, cluster, port):
    """Write simple text messages into Kafka and confirm that Kafka successfully reads them.
    Because cluster mode pipelines don't support snapshots, we do this verification using a
    second standalone pipeline whose origin is an SDC RPC written to by the Kafka Consumer pipeline.
    Specifically, this would look like:

    Kafka Consumer Origin pipeline with cluster mode:
        kafka_consumer >> sdc_rpc_destination

    Snapshot pipeline:
        sdc_rpc_origin >> trash
    """
    message = 'Hello World from SDC & DPM!'
    expected = '{\'text\': Hello World from SDC & DPM!}'

    if (Version(sdc_builder.version) < MIN_SDC_VERSION_WITH_SPARK_2_LIB and
            ('kafka' in cluster.kerberized_services or cluster.kafka.is_ssl_enabled)):
        pytest.skip('Kafka cluster mode test only '
                    f'runs against cluster with the non-secured Kafka for SDC version {sdc_builder.version}.')

    # Build the Kafka consumer pipeline.
    builder = sdc_builder.get_pipeline_builder()
    kafka_consumer = get_kafka_consumer_stage(sdc_builder.version, builder, cluster)
    sdc_rpc_destination = get_rpc_destination(builder, sdc_executor, port)
    kafka_consumer >> sdc_rpc_destination
    kafka_consumer_pipeline = builder.build(title='Cluster kafka String pipeline').configure_for_environment(cluster)
    kafka_consumer_pipeline.configuration['executionMode'] = 'CLUSTER_YARN_STREAMING'
    kafka_consumer_pipeline.configuration['shouldRetry'] = False

    # Build the Snapshot pipeline.
    builder = sdc_builder.get_pipeline_builder()
    builder.add_error_stage('Discard')
    sdc_rpc_origin = get_rpc_origin(builder, sdc_rpc_destination, port)
    trash = builder.add_stage(label='Trash')
    sdc_rpc_origin >> trash
    snapshot_pipeline = builder.build(title='Cluster kafka String Snapshot pipeline')

    sdc_executor.add_pipeline(kafka_consumer_pipeline, snapshot_pipeline)

    try:
        # Publish messages to Kafka and verify using snapshot if the same messages are received.
        produce_kafka_messages(kafka_consumer.topic, cluster, message.encode(), 'TEXT')
        verify_kafka_origin_results(kafka_consumer_pipeline, snapshot_pipeline, sdc_executor, expected, 'TEXT')
    finally:
        sdc_executor.stop_pipeline(kafka_consumer_pipeline)
        sdc_executor.stop_pipeline(snapshot_pipeline)
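Because cluster-mode pipelines can't be snapshot directly, the test routes records through an RPC hop into a second standalone pipeline and inspects them there. The produce-then-verify shape, reduced to plain queues (a schematic of the test pattern, not the SDC API):

```python
import queue

def produce(q, messages):
    # Stands in for produce_kafka_messages: push test input upstream.
    for m in messages:
        q.put(m)

def verify(q, expected, timeout=1.0):
    # Stands in for the snapshot pipeline: drain the side channel and
    # compare against what was produced.
    got = [q.get(timeout=timeout) for _ in expected]
    assert got == expected

rpc_hop = queue.Queue()  # plays the SDC RPC destination/origin pair
produce(rpc_hop, ['Hello World from SDC & DPM!'])
verify(rpc_hop, ['Hello World from SDC & DPM!'])
```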
@cluster('cdh')
def test_produce_string_records_multiple_partitions(sdc_builder, sdc_executor, cluster, port):
    """Write simple text messages into multiple Kafka partitions and confirm that Kafka successfully reads them.
    Because cluster mode pipelines don't support snapshots, we do this verification using a
    second standalone pipeline whose origin is an SDC RPC written to by the Kafka Consumer pipeline.
    Specifically, this would look like:

    Kafka Consumer Origin pipeline with cluster mode:
        kafka_consumer >> sdc_rpc_destination

    Snapshot pipeline:
        sdc_rpc_origin >> trash
    """
    message = 'Hello World from SDC & DPM!'
    expected = '{\'text\': Hello World from SDC & DPM!}'

    if (Version(sdc_builder.version) < MIN_SDC_VERSION_WITH_SPARK_2_LIB and
            ('kafka' in cluster.kerberized_services or cluster.kafka.is_ssl_enabled)):
        pytest.skip('Kafka cluster mode test only '
                    f'runs against cluster with the non-secured Kafka for SDC version {sdc_builder.version}.')

    # Build the Kafka consumer pipeline.
    builder = sdc_builder.get_pipeline_builder()
    kafka_consumer = get_kafka_consumer_stage(sdc_builder.version, builder, cluster)
    sdc_rpc_destination = get_rpc_destination(builder, sdc_executor, port)
    kafka_consumer >> sdc_rpc_destination
    kafka_consumer_pipeline = builder.build(title='Cluster partitions pipeline').configure_for_environment(cluster)
    kafka_consumer_pipeline.configuration['executionMode'] = 'CLUSTER_YARN_STREAMING'
    kafka_consumer_pipeline.configuration['shouldRetry'] = False

    # Build the Snapshot pipeline.
    builder = sdc_builder.get_pipeline_builder()
    builder.add_error_stage('Discard')
    sdc_rpc_origin = get_rpc_origin(builder, sdc_rpc_destination, port)
    trash = builder.add_stage(label='Trash')
    sdc_rpc_origin >> trash
    snapshot_pipeline = builder.build(title='Cluster Snapshot pipeline')

    sdc_executor.add_pipeline(kafka_consumer_pipeline, snapshot_pipeline)

    try:
        # Publish messages to Kafka and verify using snapshot if the same messages are received.
        produce_kafka_messages(kafka_consumer.topic, cluster, message.encode(), 'WITH_KEY')
        verify_kafka_origin_results(kafka_consumer_pipeline, snapshot_pipeline, sdc_executor, expected, 'TEXT')
    finally:
        sdc_executor.stop_pipeline(kafka_consumer_pipeline)
        sdc_executor.stop_pipeline(snapshot_pipeline)
@cluster('cdh')
def test_kafka_origin_multiple_json_objects_single_record_cluster(sdc_builder, sdc_executor, cluster, port):
    """Write JSON object messages into Kafka and confirm that Kafka successfully reads them.

    Kafka Consumer Origin pipeline with cluster mode:
        kafka_consumer >> sdc_rpc_destination

    Snapshot pipeline:
        sdc_rpc_origin >> trash
    """
    message = {'Alex': 'Developer', 'Xavi': 'Developer'}
    expected = '{\'Alex\': Developer, \'Xavi\': Developer}'

    json_test(sdc_builder, sdc_executor, cluster, message, expected, port)
@cluster('cdh')
def test_kafka_origin_multiple_json_objects_multiple_records_cluster(sdc_builder, sdc_executor, cluster, port):
    """Write JSON object messages into Kafka and confirm that Kafka successfully reads them.

    Kafka Consumer Origin pipeline with cluster mode:
        kafka_consumer >> sdc_rpc_destination

    Snapshot pipeline:
        sdc_rpc_origin >> trash
    """
    message = [{'Alex': 'Developer'}, {'Xavi': 'Developer'}]
    expected = '[{\'Alex\': Developer}, {\'Xavi\': Developer}]'

    json_test(sdc_builder, sdc_executor, cluster, message, expected, port)
@cluster('cdh')
def test_kafka_origin_json_array_cluster(sdc_builder, sdc_executor, cluster, port):
    """Write JSON array messages into Kafka and confirm that Kafka successfully reads them.

    Kafka Consumer Origin pipeline with cluster mode:
        kafka_consumer >> sdc_rpc_destination

    Snapshot pipeline:
        sdc_rpc_origin >> trash
    """
    message = ['Alex', 'Xavi']
    expected = '[Alex, Xavi]'

    json_test(sdc_builder, sdc_executor, cluster, message, expected, port)
@cluster('cdh')
def test_kafka_xml_record_cluster(sdc_builder, sdc_executor, cluster, port):
    """Write simple XML messages into Kafka and confirm that Kafka successfully reads them.

    Kafka Consumer Origin pipeline with cluster mode:
        kafka_consumer >> sdc_rpc_destination

    Snapshot pipeline:
        sdc_rpc_origin >> trash
    """
    message = '<developers><developer>Alex</developer><developer>Xavi</developer></developers>'
    expected = '{\'developer\': [{\'value\': Alex}, {\'value\': Xavi}]}'

    if (Version(sdc_builder.version) < MIN_SDC_VERSION_WITH_SPARK_2_LIB and
            ('kafka' in cluster.kerberized_services or cluster.kafka.is_ssl_enabled)):
        pytest.skip('Kafka cluster mode test only '
                    f'runs against cluster with the non-secured Kafka for SDC version {sdc_builder.version}.')

    # Build the Kafka consumer pipeline.
    builder = sdc_builder.get_pipeline_builder()
    kafka_consumer = get_kafka_consumer_stage(sdc_builder.version, builder, cluster)
    kafka_consumer.set_attributes(data_format='XML')
    sdc_rpc_destination = get_rpc_destination(builder, sdc_executor, port)
    kafka_consumer >> sdc_rpc_destination
    kafka_consumer_pipeline = builder.build(title='Cluster kafka XML pipeline').configure_for_environment(cluster)
    kafka_consumer_pipeline.configuration['executionMode'] = 'CLUSTER_YARN_STREAMING'
    kafka_consumer_pipeline.configuration['shouldRetry'] = False

    # Build the Snapshot pipeline.
    builder = sdc_builder.get_pipeline_builder()
    builder.add_error_stage('Discard')
    sdc_rpc_origin = get_rpc_origin(builder, sdc_rpc_destination, port)
    trash = builder.add_stage(label='Trash')
    sdc_rpc_origin >> trash
    snapshot_pipeline = builder.build(title='Cluster kafka XML Snapshot pipeline')

    sdc_executor.add_pipeline(kafka_consumer_pipeline, snapshot_pipeline)

    try:
        # Publish messages to Kafka and verify using snapshot if the same messages are received.
        produce_kafka_messages(kafka_consumer.topic, cluster, message.encode(), 'XML')
        verify_kafka_origin_results(kafka_consumer_pipeline, snapshot_pipeline, sdc_executor, expected, 'XML')
    finally:
        sdc_executor.stop_pipeline(kafka_consumer_pipeline)
        sdc_executor.stop_pipeline(snapshot_pipeline)
@cluster('cdh')
def test_kafka_xml_record_delimiter_element_cluster(sdc_builder, sdc_executor, cluster, port):
    """Write simple XML messages into Kafka and confirm that Kafka successfully reads them.

    Kafka Consumer Origin pipeline with cluster mode:
        kafka_consumer >> sdc_rpc_destination

    Snapshot pipeline:
        sdc_rpc_origin >> trash
    """
    message = '<developers><developer>Alex</developer><developer>Xavi</developer></developers>'
    expected = ['{\'developer\': {\'value\': Alex}}', '{\'developer\': {\'value\': Xavi}}']

    if (Version(sdc_builder.version) < MIN_SDC_VERSION_WITH_SPARK_2_LIB and
            ('kafka' in cluster.kerberized_services or cluster.kafka.is_ssl_enabled)):
        pytest.skip('Kafka cluster mode test only '
                    f'runs against cluster with the non-secured Kafka for SDC version {sdc_builder.version}.')

    # Build the Kafka consumer pipeline.
    builder = sdc_builder.get_pipeline_builder()
    kafka_consumer = get_kafka_consumer_stage(sdc_builder.version, builder, cluster)
    kafka_consumer.set_attributes(data_format='XML', delimiter_element="developer")
    sdc_rpc_destination = get_rpc_destination(builder, sdc_executor, port)
    kafka_consumer >> sdc_rpc_destination
    kafka_consumer_pipeline = builder.build(title='Cluster kafka XML pipeline').configure_for_environment(cluster)
    kafka_consumer_pipeline.configuration['executionMode'] = 'CLUSTER_YARN_STREAMING'
    kafka_consumer_pipeline.configuration['shouldRetry'] = False

    # Build the Snapshot pipeline.
    builder = sdc_builder.get_pipeline_builder()
    builder.add_error_stage('Discard')
    sdc_rpc_origin = get_rpc_origin(builder, sdc_rpc_destination, port)
    trash = builder.add_stage(label='Trash')
    sdc_rpc_origin >> trash
    snapshot_pipeline = builder.build(title='Cluster kafka XML Snapshot pipeline')

    sdc_executor.add_pipeline(kafka_consumer_pipeline, snapshot_pipeline)

    try:
        # Publish messages to Kafka and verify using snapshot if the same messages are received.
        produce_kafka_messages(kafka_consumer.topic, cluster, message.encode(), 'XML')
        verify_kafka_origin_results(kafka_consumer_pipeline, snapshot_pipeline, sdc_executor, expected,
                                    'XML_MULTI_ELEMENT')
    finally:
        sdc_executor.stop_pipeline(kafka_consumer_pipeline)
        sdc_executor.stop_pipeline(snapshot_pipeline)


@cluster('cdh')
def test_kafka_csv_record_cluster(sdc_builder, sdc_executor, cluster, port):
    """Write simple CSV messages into Kafka and confirm that the Kafka Consumer successfully reads them.

    Kafka Consumer Origin pipeline with cluster mode:
        kafka_consumer >> sdc_rpc_destination

    Snapshot pipeline:
        sdc_rpc_origin >> trash
    """
    message = 'Alex,Xavi,Tucu,Martin'
    expected = 'OrderedDict([(\'0\', Alex), (\'1\', Xavi), (\'2\', Tucu), (\'3\', Martin)])'

    if (Version(sdc_builder.version) < MIN_SDC_VERSION_WITH_SPARK_2_LIB and
            ('kafka' in cluster.kerberized_services or cluster.kafka.is_ssl_enabled)):
        pytest.skip('Kafka cluster mode tests only run against clusters '
                    f'with non-secured Kafka for SDC version {sdc_builder.version}.')

    # Build the Kafka consumer pipeline.
    builder = sdc_builder.get_pipeline_builder()
    kafka_consumer = get_kafka_consumer_stage(sdc_builder.version, builder, cluster)
    kafka_consumer.set_attributes(data_format='DELIMITED')
    sdc_rpc_destination = get_rpc_destination(builder, sdc_executor, port)

    kafka_consumer >> sdc_rpc_destination
    kafka_consumer_pipeline = builder.build(title='Cluster kafka CSV pipeline').configure_for_environment(cluster)
    kafka_consumer_pipeline.configuration['executionMode'] = 'CLUSTER_YARN_STREAMING'
    kafka_consumer_pipeline.configuration['shouldRetry'] = False

    # Build the Snapshot pipeline.
    builder = sdc_builder.get_pipeline_builder()
    builder.add_error_stage('Discard')
    sdc_rpc_origin = get_rpc_origin(builder, sdc_rpc_destination, port)
    trash = builder.add_stage(label='Trash')

    sdc_rpc_origin >> trash
    snapshot_pipeline = builder.build(title='Cluster kafka CSV Snapshot pipeline')

    sdc_executor.add_pipeline(kafka_consumer_pipeline, snapshot_pipeline)

    try:
        # Publish messages to Kafka and verify using a snapshot that the same messages are received.
        produce_kafka_messages(kafka_consumer.topic, cluster, message.encode(), 'CSV')
        verify_kafka_origin_results(kafka_consumer_pipeline, snapshot_pipeline, sdc_executor, expected, 'CSV')
    finally:
        sdc_executor.stop_pipeline(kafka_consumer_pipeline)
        sdc_executor.stop_pipeline(snapshot_pipeline)


@cluster('cdh')
def test_kafka_binary_record_cluster(sdc_builder, sdc_executor, cluster, port):
    """Write simple binary messages into Kafka and confirm that the Kafka Consumer successfully reads them.

    Kafka Consumer Origin pipeline with cluster mode:
        kafka_consumer >> sdc_rpc_destination

    Snapshot pipeline:
        sdc_rpc_origin >> trash
    """
    message = 'Binary Text Example'
    expected = message.encode()

    if (Version(sdc_builder.version) < MIN_SDC_VERSION_WITH_SPARK_2_LIB and
            ('kafka' in cluster.kerberized_services or cluster.kafka.is_ssl_enabled)):
        pytest.skip('Kafka cluster mode tests only run against clusters '
                    f'with non-secured Kafka for SDC version {sdc_builder.version}.')

    # Build the Kafka consumer pipeline.
    builder = sdc_builder.get_pipeline_builder()
    kafka_consumer = get_kafka_consumer_stage(sdc_builder.version, builder, cluster)
    kafka_consumer.set_attributes(data_format='BINARY')
    sdc_rpc_destination = get_rpc_destination(builder, sdc_executor, port)

    kafka_consumer >> sdc_rpc_destination
    kafka_consumer_pipeline = builder.build(title='Cluster kafka BINARY pipeline').configure_for_environment(cluster)
    kafka_consumer_pipeline.configuration['executionMode'] = 'CLUSTER_YARN_STREAMING'
    kafka_consumer_pipeline.configuration['shouldRetry'] = False

    # Build the Snapshot pipeline.
    builder = sdc_builder.get_pipeline_builder()
    builder.add_error_stage('Discard')
    sdc_rpc_origin = get_rpc_origin(builder, sdc_rpc_destination, port)
    trash = builder.add_stage(label='Trash')

    sdc_rpc_origin >> trash
    snapshot_pipeline = builder.build(title='Cluster kafka BINARY snapshot')

    sdc_executor.add_pipeline(kafka_consumer_pipeline, snapshot_pipeline)

    try:
        # Publish messages to Kafka and verify using a snapshot that the same messages are received.
        produce_kafka_messages(kafka_consumer.topic, cluster, message.encode(), 'BINARY')
        verify_kafka_origin_results(kafka_consumer_pipeline, snapshot_pipeline, sdc_executor, expected, 'BINARY')
    finally:
        sdc_executor.stop_pipeline(kafka_consumer_pipeline)
        sdc_executor.stop_pipeline(snapshot_pipeline)


@cluster('cdh')
def test_produce_avro_records_with_schema(sdc_builder, sdc_executor, cluster, port):
    """Write Avro messages into multiple Kafka partitions and confirm that the Kafka Consumer successfully reads them.

    Because cluster mode pipelines don't support snapshots, we do this verification using a
    second standalone pipeline whose origin is an SDC RPC written to by the Kafka Consumer pipeline.
    Specifically, this would look like:

    Kafka Consumer Origin pipeline with cluster mode:
        kafka_consumer >> sdc_rpc_destination

    Snapshot pipeline:
        sdc_rpc_origin >> trash
    """
    msg = {'name': 'boss', 'age': 60, 'emails': ['boss@company.com', 'boss2@company.com'], 'boss': None}
    expected = ('OrderedDict([(\'name\', boss), (\'age\', 60), (\'emails\', [boss@company.com, boss2@company.com]),'
                ' (\'boss\', None)])')

    if (Version(sdc_builder.version) < MIN_SDC_VERSION_WITH_SPARK_2_LIB and
            ('kafka' in cluster.kerberized_services or cluster.kafka.is_ssl_enabled)):
        pytest.skip('Kafka cluster mode tests only run against clusters '
                    f'with non-secured Kafka for SDC version {sdc_builder.version}.')

    # Build the Kafka consumer pipeline.
    builder = sdc_builder.get_pipeline_builder()
    kafka_consumer = get_kafka_consumer_stage(sdc_builder.version, builder, cluster)
    kafka_consumer.set_attributes(data_format='AVRO', avro_schema_location='INLINE', avro_schema=json.dumps(SCHEMA))
    sdc_rpc_destination = get_rpc_destination(builder, sdc_executor, port)

    kafka_consumer >> sdc_rpc_destination
    kafka_consumer_pipeline = builder.build(title='Cluster kafka AVRO pipeline').configure_for_environment(cluster)
    kafka_consumer_pipeline.configuration['executionMode'] = 'CLUSTER_YARN_STREAMING'
    kafka_consumer_pipeline.configuration['shouldRetry'] = False

    # Build the Snapshot pipeline.
    builder = sdc_builder.get_pipeline_builder()
    builder.add_error_stage('Discard')
    sdc_rpc_origin = get_rpc_origin(builder, sdc_rpc_destination, port)
    trash = builder.add_stage(label='Trash')

    sdc_rpc_origin >> trash
    snapshot_pipeline = builder.build(title='Cluster Snapshot pipeline')

    sdc_executor.add_pipeline(kafka_consumer_pipeline, snapshot_pipeline)

    try:
        # Publish messages to Kafka and verify using a snapshot that the same messages are received.
        produce_kafka_messages(kafka_consumer.topic, cluster, msg, 'AVRO')
        verify_kafka_origin_results(kafka_consumer_pipeline, snapshot_pipeline, sdc_executor, expected, 'AVRO')
    finally:
        sdc_executor.stop_pipeline(kafka_consumer_pipeline)
        sdc_executor.stop_pipeline(snapshot_pipeline)


@cluster('cdh')
def test_produce_avro_records_without_schema(sdc_builder, sdc_executor, cluster, port):
    """Write Avro messages into multiple Kafka partitions with the schema embedded in the records
    and confirm that the Kafka Consumer successfully reads them.

    Because cluster mode pipelines don't support snapshots, we do this verification using a
    second standalone pipeline whose origin is an SDC RPC written to by the Kafka Consumer pipeline.
    Specifically, this would look like:

    Kafka Consumer Origin pipeline with cluster mode:
        kafka_consumer >> sdc_rpc_destination

    Snapshot pipeline:
        sdc_rpc_origin >> trash
    """
    msg = {'name': 'boss', 'age': 60, 'emails': ['boss@company.com', 'boss2@company.com'], 'boss': None}
    expected = ('OrderedDict([(\'name\', boss), (\'age\', 60), (\'emails\', [boss@company.com, boss2@company.com]),'
                ' (\'boss\', None)])')

    if (Version(sdc_builder.version) < MIN_SDC_VERSION_WITH_SPARK_2_LIB and
            ('kafka' in cluster.kerberized_services or cluster.kafka.is_ssl_enabled)):
        pytest.skip('Kafka cluster mode tests only run against clusters '
                    f'with non-secured Kafka for SDC version {sdc_builder.version}.')

    # Build the Kafka consumer pipeline.
    builder = sdc_builder.get_pipeline_builder()
    kafka_consumer = get_kafka_consumer_stage(sdc_builder.version, builder, cluster)
    kafka_consumer.set_attributes(data_format='AVRO', avro_schema_location='SOURCE')
    sdc_rpc_destination = get_rpc_destination(builder, sdc_executor, port)

    kafka_consumer >> sdc_rpc_destination
    kafka_consumer_pipeline = builder.build(title='Cluster kafka AVRO pipeline').configure_for_environment(cluster)
    kafka_consumer_pipeline.configuration['executionMode'] = 'CLUSTER_YARN_STREAMING'
    kafka_consumer_pipeline.configuration['shouldRetry'] = False

    # Build the Snapshot pipeline.
    builder = sdc_builder.get_pipeline_builder()
    builder.add_error_stage('Discard')
    sdc_rpc_origin = get_rpc_origin(builder, sdc_rpc_destination, port)
    trash = builder.add_stage(label='Trash')

    sdc_rpc_origin >> trash
    snapshot_pipeline = builder.build(title='Cluster Snapshot pipeline')

    sdc_executor.add_pipeline(kafka_consumer_pipeline, snapshot_pipeline)

    try:
        # Publish messages to Kafka and verify using a snapshot that the same messages are received.
        produce_kafka_messages(kafka_consumer.topic, cluster, msg, 'AVRO_WITHOUT_SCHEMA')
        verify_kafka_origin_results(kafka_consumer_pipeline, snapshot_pipeline, sdc_executor, expected,
                                    'AVRO_WITHOUT_SCHEMA')
    finally:
        sdc_executor.stop_pipeline(kafka_consumer_pipeline)
        sdc_executor.stop_pipeline(snapshot_pipeline)


@cluster('cdh')
def test_kafka_origin_syslog_message(sdc_builder, sdc_executor, cluster, port):
    """Write a message using UDP datagram mode SYSLOG into multiple Kafka partitions
    and confirm that the Kafka Consumer successfully reads it.

    Because cluster mode pipelines don't support snapshots, we do this verification using a
    second standalone pipeline whose origin is an SDC RPC written to by the Kafka Consumer pipeline.
    Specifically, this would look like:

    Kafka Consumer Origin pipeline with cluster mode:
        kafka_consumer >> sdc_rpc_destination

    Snapshot pipeline:
        sdc_rpc_origin >> trash
    """
    msg64packet = ("rO0ABXeOAAAAAQAAAAEAAAAAAAAAAQAJMTI3LjAuMC4xAAALuAAJMTI3LjAuMC4xAAAH0AAAAFw8MzQ+MSAyMDEz"
"LTA2LTI4VDA2OjE0OjU2LjAwMCswMjowMCBteW1hY2hpbmUgc3U6ICdzdSByb290JyBmYWlsZWQgZm9yIGxvbnZpY"
"2sgb24gL2Rldi9wdHMvOA==")
    expected = (
'{\'severity\': 2, \'senderPort\': 3000, \'receiverAddr\': 127.0.0.1:2000, \'host\': mymachine, \'raw\': '
'<34>1 2013-06-28T06:14:56.000+02:00 mymachine su: \'su root\' failed for lonvick on /dev/pts/8, '
'\'senderAddr\': 127.0.0.1:3000, \'priority\': 34, \'facility\': 4, \'version\': 1, \'receiverPort\': 2000, '
'\'remaining\': su: \'su root\' failed for lonvick on /dev/pts/8, \'timestamp\': 1372392896000}')
    if (Version(sdc_builder.version) < MIN_SDC_VERSION_WITH_SPARK_2_LIB and
            ('kafka' in cluster.kerberized_services or cluster.kafka.is_ssl_enabled)):
        pytest.skip('Kafka cluster mode tests only run against clusters '
                    f'with non-secured Kafka for SDC version {sdc_builder.version}.')

    # Build the Kafka consumer pipeline.
    builder = sdc_builder.get_pipeline_builder()
    kafka_consumer = get_kafka_consumer_stage(sdc_builder.version, builder, cluster)
    # Override default configuration.
    kafka_consumer.set_attributes(data_format='DATAGRAM', datagram_packet_format='SYSLOG')
    sdc_rpc_destination = get_rpc_destination(builder, sdc_executor, port)

    kafka_consumer >> sdc_rpc_destination
    kafka_consumer_pipeline = builder.build(title='Cluster kafka SYSLOG pipeline').configure_for_environment(cluster)
    kafka_consumer_pipeline.configuration['executionMode'] = 'CLUSTER_YARN_STREAMING'
    kafka_consumer_pipeline.configuration['shouldRetry'] = False

    # Build the Snapshot pipeline.
    builder = sdc_builder.get_pipeline_builder()
    builder.add_error_stage('Discard')
    sdc_rpc_origin = get_rpc_origin(builder, sdc_rpc_destination, port)
    trash = builder.add_stage(label='Trash')

    sdc_rpc_origin >> trash
    snapshot_pipeline = builder.build(title='Cluster Snapshot pipeline')

    sdc_executor.add_pipeline(kafka_consumer_pipeline, snapshot_pipeline)

    try:
        # Publish messages to Kafka and verify using a snapshot that the same messages are received.
        produce_kafka_messages(kafka_consumer.topic, cluster, base64.b64decode(msg64packet), 'SYSLOG')
        verify_kafka_origin_results(kafka_consumer_pipeline, snapshot_pipeline, sdc_executor, expected, 'SYSLOG')
    finally:
        sdc_executor.stop_pipeline(kafka_consumer_pipeline)
        sdc_executor.stop_pipeline(snapshot_pipeline)


@cluster('cdh')
def test_kafka_origin_netflow_message(sdc_builder, sdc_executor, cluster, port):
    """Write a message using UDP datagram mode NETFLOW into multiple Kafka partitions
    and confirm that the Kafka Consumer successfully reads it.

    Because cluster mode pipelines don't support snapshots, we do this verification using a
    second standalone pipeline whose origin is an SDC RPC written to by the Kafka Consumer pipeline.
    Specifically, this would look like:

    Kafka Consumer Origin pipeline with cluster mode:
        kafka_consumer >> sdc_rpc_destination

    Snapshot pipeline:
        sdc_rpc_origin >> trash
    """
    msg64packet = ('rO0ABXoAAAIqAAAAAQAAAAIAAAAAAAAAAQAJMTI3LjAuMC4xAAALuAAJMTI3LjAuMC4xAAAH0AAAAfgABQAKAAAAAFVFcOIBWL'
'IwAAAAAAAAAAD3waSb49Wa8QAAAAAAAAAAAAAAAQAAAFlnyqItZ8qiLQA1JA8AABEAAAAAAAAAAAD3waSb49Wa8QAAAAAAAAAA'
'AAAAAQAAAFlnyqItZ8qiLQA1+ioAABEAAAAAAAAAAAD3waSb49Wa8QAAAAAAAAAAAAAAAQAAAFlnyqItZ8qiLQA1SWAAABEAAA'
'AAAAAAAAD55boV49Wa8QAAAAAAAAAAAAAAAQAAAFlnyqIvZ8qiLwA1q94AABEAAAAAAAAAAAB/472549Wa8QAAAAAAAAAAAAAA'
'AQAAAFlnyqIvZ8qiLwA1IlYAABEAAAAAAAAAAAB/472549Wa8QAAAAAAAAAAAAAAAQAAAFlnyqIvZ8qiLwA1l5sAABEAAAAAAA'
'AAAAB/472549Wa8QAAAAAAAAAAAAAAAQAAAFlnyqIvZ8qiLwA1u4EAABEAAAAAAAAAAAD55boV49Wa8QAAAAAAAAAAAAAAAQAA'
'AFlnyqIvZ8qiLwA14OQAABEAAAAAAAAAAAAtZyl349Wa8QAAAAAAAAAAAAAAAQAAArhnyqIxZ8qiMQA11FQAABEAAAAAAAAAAA'
'B5SzUv49Wa8QAAAAAAAAAAAAAAAQAAAfhnyqIyZ8qiMgA1FbUAABEAAAAAAAAAAAA=')
    expected = ['\'srcaddr\': -138304357', '\'first\': 1432355575064']

    if (Version(sdc_builder.version) < MIN_SDC_VERSION_WITH_SPARK_2_LIB and
            ('kafka' in cluster.kerberized_services or cluster.kafka.is_ssl_enabled)):
        pytest.skip('Kafka cluster mode tests only run against clusters '
                    f'with non-secured Kafka for SDC version {sdc_builder.version}.')

    # Build the Kafka consumer pipeline.
    builder = sdc_builder.get_pipeline_builder()
    kafka_consumer = get_kafka_consumer_stage(sdc_builder.version, builder, cluster)
    # Override default configuration.
    kafka_consumer.set_attributes(data_format='DATAGRAM', datagram_data_format='NETFLOW')
    sdc_rpc_destination = get_rpc_destination(builder, sdc_executor, port)

    kafka_consumer >> sdc_rpc_destination
    kafka_consumer_pipeline = builder.build(title='Cluster kafka NETFLOW pipeline').configure_for_environment(cluster)
    kafka_consumer_pipeline.configuration['executionMode'] = 'CLUSTER_YARN_STREAMING'
    kafka_consumer_pipeline.configuration['shouldRetry'] = False

    # Build the Snapshot pipeline.
    builder = sdc_builder.get_pipeline_builder()
    builder.add_error_stage('Discard')
    sdc_rpc_origin = get_rpc_origin(builder, sdc_rpc_destination, port)
    trash = builder.add_stage(label='Trash')

    sdc_rpc_origin >> trash
    snapshot_pipeline = builder.build(title='Cluster Snapshot pipeline')

    sdc_executor.add_pipeline(kafka_consumer_pipeline, snapshot_pipeline)

    try:
        # Publish messages to Kafka and verify using a snapshot that the same messages are received.
        produce_kafka_messages(kafka_consumer.topic, cluster, base64.b64decode(msg64packet), 'NETFLOW')
        verify_kafka_origin_results(kafka_consumer_pipeline, snapshot_pipeline, sdc_executor, expected, 'NETFLOW')
    finally:
        sdc_executor.stop_pipeline(kafka_consumer_pipeline)
        sdc_executor.stop_pipeline(snapshot_pipeline)


@cluster('cdh')
def test_kafka_origin_collectd_message(sdc_builder, sdc_executor, cluster, port):
    """Write a message using UDP datagram mode COLLECTD into multiple Kafka partitions
    and confirm that the Kafka Consumer successfully reads it.

    Because cluster mode pipelines don't support snapshots, we do this verification using a
    second standalone pipeline whose origin is an SDC RPC written to by the Kafka Consumer pipeline.
    Specifically, this would look like:

    Kafka Consumer Origin pipeline with cluster mode:
        kafka_consumer >> sdc_rpc_destination

    Snapshot pipeline:
        sdc_rpc_origin >> trash
    """
    msg64packet = (
'rO0ABXoAAAQAAAAAAQAAAAMAAAAAAAAAAQAJMTI3LjAuMC4xAAALuAAJMTI3LjAuMC4xAAAH0AAABVkCAAAoLmo9Of+LakZDcogiJUJa2iIO1'
'+Fl9GzuT86v9yB0HXN1c2VyAAAAMWlwLTE5Mi0xNjgtNDItMjM4LnVzLXdlc3QtMi5jb21wdXRlLmludGVybmFsAAAIAAwVa65L6bcTJwAJAA'
'wAAAACgAAAAAACAA5pbnRlcmZhY2UAAAMACGxvMAAABAAOaWZfZXJyb3JzAAAGABgAAgICAAAAAAAAAAAAAAAAAAAAAAAIAAwVa65L6bZ8KAA'
'CAAlsb2FkAAADAAUAAAQACWxvYWQAAAYAIQADAQEBAAAAAAA2BkAAAAAAAMcOQAAAAAAALA5AAAgADBVrrkvptwrDAAIADmludGVyZmFjZQAA'
'AwAIbG8wAAAEAA9pZl9wYWNrZXRzAAAGABgAAgICAAAAAAAR1/AAAAAAABHX8AAIAAwVa65L6bb5/AAEAA5pZl9vY3RldHMAAAYAGAACAgIAA'
'AAAISMkFAAAAAAhIyQUAAgADBVrrkvptzCDAAMACWdpZjAAAAYAGAACAgIAAAAAAAAAAAAAAAAAAAAAAAgADBVrrkvptwaRAAIAC21lbW9yeQ'
'AAAwAFAAAEAAttZW1vcnkAAAUACndpcmVkAAAGAA8AAQEAAAAABA7yQQAIAAwVa65L6bfHggACAA5pbnRlcmZhY2UAAAMACWdpZjAAAAQAD2l'
'mX3BhY2tldHMAAAUABQAABgAYAAICAgAAAAAAAAAAAAAAAAAAAAAACAAMFWuuS+m3BpEAAgALbWVtb3J5AAADAAUAAAQAC21lbW9yeQAABQAN'
'aW5hY3RpdmUAAAYADwABAQAAAADW3OlBAAUAC2FjdGl2ZQAABgAPAAEBAAAAAPI17kEACAAMFWuuS+m4Cp0AAgAOaW50ZXJmYWNlAAADAAlna'
'WYwAAAEAA5pZl9lcnJvcnMAAAUABQAABgAYAAICAgAAAAAAAAAAAAAAAAAAAAAACAAMFWuuS+m3BpEAAgALbWVtb3J5AAADAAUAAAQAC21lbW'
'9yeQAABQAJZnJlZQAABgAPAAEBAAAAAECHnUEACAAMFWuuS+m4kNUAAgAOaW50ZXJmYWNlAAADAAlzdGYwAAAEAA5pZl9vY3RldHMAAAUABQA'
'ABgAYAAICAgAAAAAAAAAAAAAAAAAAAAAACAAMFWuuS+m4mTkABAAOaWZfZXJyb3JzAAAGABgAAgICAAAAAAAAAAAAAAAAAAAAAAAIAAwVa65L'
'6bidagADAAhlbjAAAAQADmlmX29jdGV0cwAABgAYAAICAgAAAABFC4cKAAAAAAhjPdIACHoAAAGLAAwVa65L6biVBwADAAlzdGYwAAAEAA9pZ'
'l9wYWNrZXRzAAAGABgAAgICAAAAAAAAAAAAAAAAAAAAAAAIAAwVa65L6bi2lQADAAhlbjAAAAYAGAACAgIAAAAAABJhDgAAAAAADMIoAAgADB'
'VrrkvpuLrHAAQADmlmX2Vycm9ycwAABgAYAAICAgAAAAAAAAAAAAAAAAAAAAAACAAMFWuuS+m4vvgAAwAIZW4xAAAEAA5pZl9vY3RldHMAAAY'
'AGAACAgIAAAAAAAAAAAAAAAAAAAAAAAQAD2lmX3BhY2tldHMAAAYAGAACAgIAAAAAAAAAAAAAAAAAAAAAAAgADBVrrkvpuMMqAAQADmlmX2Vy'
'cm9ycwAABgAYAAICAgAAAAAAAAAAAAAAAAAAAAAAAwAIZW4yAAAEAA5pZl9vY3RldHMAAAYAGAACAgIAAAAAAAAAAAAAAAAAAAAAAAgADBVrr'
'kvpuMdcAAQADmlmX2Vycm9ycwAABgAYAAICAgAAAAAAAAAAAAAAAAAAAAA=')
    expected = (
'{\'plugin_instance\': lo0, \'plugin\': interface, \'tx\': 0, \'rx\': 0, \'host\': ip-192-168-42-238.us-west-2.'
'compute.internal, \'time_hires\': 1543518938371396391, \'type\': if_errors}')
    if (Version(sdc_builder.version) < MIN_SDC_VERSION_WITH_SPARK_2_LIB and
            ('kafka' in cluster.kerberized_services or cluster.kafka.is_ssl_enabled)):
        pytest.skip('Kafka cluster mode tests only run against clusters '
                    f'with non-secured Kafka for SDC version {sdc_builder.version}.')

    # Build the Kafka consumer pipeline.
    builder = sdc_builder.get_pipeline_builder()
    kafka_consumer = get_kafka_consumer_stage(sdc_builder.version, builder, cluster)
    # Override default configuration.
    kafka_consumer.set_attributes(data_format='DATAGRAM', datagram_data_format='COLLECTD')
    sdc_rpc_destination = get_rpc_destination(builder, sdc_executor, port)

    kafka_consumer >> sdc_rpc_destination
    kafka_consumer_pipeline = builder.build(title='Cluster kafka COLLECTD pipeline').configure_for_environment(cluster)
    kafka_consumer_pipeline.configuration['executionMode'] = 'CLUSTER_YARN_STREAMING'
    kafka_consumer_pipeline.configuration['shouldRetry'] = False

    # Build the Snapshot pipeline.
    builder = sdc_builder.get_pipeline_builder()
    builder.add_error_stage('Discard')
    sdc_rpc_origin = get_rpc_origin(builder, sdc_rpc_destination, port)
    trash = builder.add_stage(label='Trash')

    sdc_rpc_origin >> trash
    snapshot_pipeline = builder.build(title='Cluster Snapshot pipeline')

    sdc_executor.add_pipeline(kafka_consumer_pipeline, snapshot_pipeline)

    try:
        # Publish messages to Kafka and verify using a snapshot that the same messages are received.
        produce_kafka_messages(kafka_consumer.topic, cluster, base64.b64decode(msg64packet), 'COLLECTD')
        verify_kafka_origin_results(kafka_consumer_pipeline, snapshot_pipeline, sdc_executor, expected, 'COLLECTD')
    finally:
        sdc_executor.stop_pipeline(kafka_consumer_pipeline)
        sdc_executor.stop_pipeline(snapshot_pipeline)


@cluster('cdh')
def test_kafka_log_record_cluster(sdc_builder, sdc_executor, cluster, port):
    """Write simple log messages into Kafka and confirm that the Kafka Consumer successfully reads them.

    Kafka Consumer Origin pipeline with cluster mode:
        kafka_consumer >> sdc_rpc_destination

    Snapshot pipeline:
        sdc_rpc_origin >> trash
    """
    message = ('+20150320 [15:53:31,161] DEBUG PipelineConfigurationValidator - Pipeline \'test:preview\' validation. '
               'valid=true, canPreview=true, issuesCount=0 - ')

    if (Version(sdc_builder.version) < MIN_SDC_VERSION_WITH_SPARK_2_LIB and
            ('kafka' in cluster.kerberized_services or cluster.kafka.is_ssl_enabled)):
        pytest.skip('Kafka cluster mode tests only run against clusters '
                    f'with non-secured Kafka for SDC version {sdc_builder.version}.')

    # Build the Kafka consumer pipeline.
    builder = sdc_builder.get_pipeline_builder()
    kafka_consumer = get_kafka_consumer_stage(sdc_builder.version, builder, cluster)
    # Override default configuration.
    kafka_consumer.set_attributes(data_format='LOG',
                                  log_format='LOG4J',
                                  retain_original_line=True,
                                  on_parse_error='INCLUDE_AS_STACK_TRACE')
    sdc_rpc_destination = get_rpc_destination(builder, sdc_executor, port)

    kafka_consumer >> sdc_rpc_destination
    kafka_consumer_pipeline = builder.build(title='Cluster kafka LOG pipeline').configure_for_environment(cluster)
    kafka_consumer_pipeline.configuration['executionMode'] = 'CLUSTER_YARN_STREAMING'
    kafka_consumer_pipeline.configuration['shouldRetry'] = False

    # Build the Snapshot pipeline.
    builder = sdc_builder.get_pipeline_builder()
    builder.add_error_stage('Discard')
    sdc_rpc_origin = get_rpc_origin(builder, sdc_rpc_destination, port)
    trash = builder.add_stage(label='Trash')

    sdc_rpc_origin >> trash
    snapshot_pipeline = builder.build(title='Cluster kafka LOG snapshot')

    sdc_executor.add_pipeline(kafka_consumer_pipeline, snapshot_pipeline)

    try:
        # Publish messages to Kafka and verify using a snapshot that the same messages are received.
        produce_kafka_messages(kafka_consumer.topic, cluster, message.encode(), 'LOG')
        verify_kafka_origin_results(kafka_consumer_pipeline, snapshot_pipeline, sdc_executor, message, 'LOG')
    finally:
        sdc_executor.stop_pipeline(kafka_consumer_pipeline)
        sdc_executor.stop_pipeline(snapshot_pipeline)


def get_kafka_consumer_stage(sdc_version, pipeline_builder, cluster):
    """Create and return a Kafka origin stage depending on execution mode for the pipeline."""
    pipeline_builder.add_error_stage('Discard')

    if Version(sdc_version) < MIN_SDC_VERSION_WITH_SPARK_2_LIB:
        kafka_cluster_stage_lib = cluster.kafka.cluster_stage_lib_spark1
    else:
        kafka_cluster_stage_lib = cluster.kafka.cluster_stage_lib_spark2

    kafka_consumer = pipeline_builder.add_stage('Kafka Consumer',
                                                type='origin',
                                                library=kafka_cluster_stage_lib)
    kafka_consumer.set_attributes(data_format='TEXT',
                                  batch_wait_time_in_ms=20000,
                                  max_batch_size_in_records=10,
                                  rate_limit_per_partition_in_kafka_messages=10,
                                  topic=get_random_string(string.ascii_letters, 10),
                                  kafka_configuration=[{'key': 'auto.offset.reset', 'value': 'earliest'}])
    return kafka_consumer
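
The `get_random_string` helper used above for topic names is imported from the StreamSets test framework rather than defined in this module. As a rough, hypothetical stand-in (the real helper's behavior may differ), a stdlib-only version could look like:

```python
import random
import string


def get_random_string(charset=string.ascii_letters, length=10):
    """Return a random name drawn from charset, e.g. for unique Kafka topic names."""
    return ''.join(random.choice(charset) for _ in range(length))


topic = get_random_string(string.ascii_letters, 10)
```

Generating a fresh topic per test keeps runs isolated from records left over in previously used topics.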


def get_rpc_origin(builder, sdc_rpc_destination, port):
    """Create and return an SDC RPC origin stage with basic configuration."""
    sdc_rpc_origin = builder.add_stage(name='com_streamsets_pipeline_stage_origin_sdcipc_SdcIpcDSource')
    sdc_rpc_origin.sdc_rpc_listening_port = port
    sdc_rpc_origin.sdc_rpc_id = sdc_rpc_destination.sdc_rpc_id

    # Since YARN jobs take a while to get going, set RPC origin batch wait time to MAX_BATCH_WAIT_TIME (30s).
    sdc_rpc_origin.batch_wait_time_in_secs = MAX_BATCH_WAIT_TIME

    return sdc_rpc_origin


def get_rpc_destination(builder, sdc_executor, port):
    """Create and return an SDC RPC destination stage with basic configuration."""
    sdc_rpc_destination = builder.add_stage(name='com_streamsets_pipeline_stage_destination_sdcipc_SdcIpcDTarget')
    sdc_rpc_destination.sdc_rpc_connection.append('{}:{}'.format(sdc_executor.server_host, port))
    sdc_rpc_destination.sdc_rpc_id = get_random_string(string.ascii_letters, 10)
    return sdc_rpc_destination


def produce_kafka_messages(topic, cluster, message, data_format):
    """Send basic messages to Kafka."""
    producer = cluster.kafka.producer()
    basic_data_formats = ['XML', 'CSV', 'SYSLOG', 'NETFLOW', 'COLLECTD', 'BINARY', 'LOG', 'PROTOBUF', 'TEXT', 'JSON']

    # Write records into Kafka depending on the data_format.
    if data_format in basic_data_formats:
        producer.send(topic, message)
    elif data_format == 'WITH_KEY':
        producer.send(topic, message, key=get_random_string(string.ascii_letters, 10).encode())
    elif data_format == 'AVRO':
        writer = avro.io.DatumWriter(avro.schema.Parse(json.dumps(SCHEMA)))
        bytes_writer = io.BytesIO()
        encoder = avro.io.BinaryEncoder(bytes_writer)
        writer.write(message, encoder)
        raw_bytes = bytes_writer.getvalue()
        producer.send(topic, raw_bytes)
    elif data_format == 'AVRO_WITHOUT_SCHEMA':
        bytes_writer = io.BytesIO()
        datum_writer = avro.io.DatumWriter(avro.schema.Parse(json.dumps(SCHEMA)))
        data_file_writer = DataFileWriter(writer=bytes_writer, datum_writer=datum_writer,
                                          writer_schema=avro.schema.Parse(json.dumps(SCHEMA)))
        data_file_writer.append(message)
        data_file_writer.flush()
        raw_bytes = bytes_writer.getvalue()
        data_file_writer.close()
        producer.send(topic, raw_bytes)

    producer.flush()
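
For the DELIMITED test, SDC parses each CSV line into a record whose fields are keyed by zero-based column index. As a rough stdlib-only approximation of that shape (note: SDC's field repr prints values without quotes, so `str()` of this dict is not byte-identical to the test's `expected` string):

```python
import csv
import io
from collections import OrderedDict


def parse_delimited(message):
    """Approximate SDC's DELIMITED parsing: one record per CSV line,
    with fields keyed by their zero-based column index as strings."""
    reader = csv.reader(io.StringIO(message))
    return [OrderedDict((str(i), value) for i, value in enumerate(row)) for row in reader]


records = parse_delimited('Alex,Xavi,Tucu,Martin')
```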


def verify_kafka_origin_results(kafka_consumer_pipeline, snapshot_pipeline, sdc_executor, message, data_format):
    """Start both pipelines and verify the results using a snapshot."""
    # Start the pipelines.
    snapshot_pipeline_command = sdc_executor.capture_snapshot(snapshot_pipeline, start_pipeline=True, wait=False)
    sdc_executor.start_pipeline(kafka_consumer_pipeline)

    logger.debug('Finish the snapshot and verify')
    snapshot_command = snapshot_pipeline_command.wait_for_finished(timeout_sec=SNAPSHOT_TIMEOUT_SEC)
    snapshot = snapshot_command.snapshot

    basic_data_formats = ['CSV', 'SYSLOG', 'COLLECTD', 'PROTOBUF', 'TEXT', 'JSON', 'AVRO', 'AVRO_WITHOUT_SCHEMA']

    # Verify snapshot data.
    if data_format in basic_data_formats:
        record_field = [record.field for record in snapshot[snapshot_pipeline[0].instance_name].output]
        assert message == str(record_field[0])
    elif data_format == 'XML':
        output_data = [record.field for record in snapshot[snapshot_pipeline[0].instance_name].output][0]
        record_field = get_xml_output_field(kafka_consumer_pipeline[0], output_data, 'developers')
        assert message == str(record_field)
    elif data_format == 'BINARY':
        record_field = [record.field for record in snapshot[snapshot_pipeline[0].instance_name].output]
        assert message == record_field[0]
    elif data_format == 'LOG':
        stage = snapshot[snapshot_pipeline[0].instance_name]
        assert 0 == len(stage.error_records)
        record_field = [record.field for record in snapshot[snapshot_pipeline[0].instance_name].output]
        assert message == str(record_field[0]['originalLine'])
    elif data_format == 'XML_MULTI_ELEMENT':
        record_field = [record.field for record in snapshot[snapshot_pipeline[0].instance_name].output]
        assert message[0] == str(record_field[0])
        assert message[1] == str(record_field[1])
    elif data_format == 'NETFLOW':
        record_field = [record.field for record in snapshot[snapshot_pipeline[0].instance_name].output]
        assert message[0] in str(record_field)
        assert message[1] in str(record_field)
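
The XML_MULTI_ELEMENT branch compares against strings like `{'developer': {'value': Alex}}`: the XML test's record delimiter element splits one document into one record per matching element. As a rough, hypothetical stdlib-only sketch of that splitting (not SDC's actual parser, and SDC's field repr omits the quotes around values):

```python
import xml.etree.ElementTree as ET


def split_by_delimiter_element(xml_text, delimiter_element):
    """Very rough stand-in for SDC's XML 'record delimiter element' parsing:
    each element matching the delimiter becomes its own record dict."""
    root = ET.fromstring(xml_text)
    return [{elem.tag: {'value': elem.text}} for elem in root.iter(delimiter_element)]


records = split_by_delimiter_element(
    '<developers><developer>Alex</developer><developer>Xavi</developer></developers>',
    'developer')
```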


def json_test(sdc_builder, sdc_executor, cluster, message, expected, port):
    """Generic method for tests that use the JSON data format."""
    if (Version(sdc_builder.version) < MIN_SDC_VERSION_WITH_SPARK_2_LIB and
            ('kafka' in cluster.kerberized_services or cluster.kafka.is_ssl_enabled)):
        pytest.skip('Kafka cluster mode tests only run against clusters '
                    f'with non-secured Kafka for SDC version {sdc_builder.version}.')

    # Build the Kafka consumer pipeline.
    builder = sdc_builder.get_pipeline_builder()
    kafka_consumer = get_kafka_consumer_stage(sdc_builder.version, builder, cluster)
    kafka_consumer.set_attributes(data_format='JSON')
    sdc_rpc_destination = get_rpc_destination(builder, sdc_executor, port)

    kafka_consumer >> sdc_rpc_destination
    kafka_consumer_pipeline = builder.build(title='Cluster kafka JSON pipeline').configure_for_environment(cluster)
    kafka_consumer_pipeline.configuration['executionMode'] = 'CLUSTER_YARN_STREAMING'
    kafka_consumer_pipeline.configuration['shouldRetry'] = False

    # Build the Snapshot pipeline.
    builder = sdc_builder.get_pipeline_builder()
    builder.add_error_stage('Discard')
    sdc_rpc_origin = get_rpc_origin(builder, sdc_rpc_destination, port)
    trash = builder.add_stage(label='Trash')

    sdc_rpc_origin >> trash
    snapshot_pipeline = builder.build(title='Cluster kafka JSON Snapshot pipeline')

    sdc_executor.add_pipeline(kafka_consumer_pipeline, snapshot_pipeline)

    try:
        # Publish messages to Kafka and verify using a snapshot that the same messages are received.
        produce_kafka_messages(kafka_consumer.topic, cluster, json.dumps(message).encode(), 'JSON')
        verify_kafka_origin_results(kafka_consumer_pipeline, snapshot_pipeline, sdc_executor, expected, 'JSON')
    finally:
        sdc_executor.stop_pipeline(kafka_consumer_pipeline)
        sdc_executor.stop_pipeline(snapshot_pipeline)


# --- holobot/extensions/hentai/commands/__init__.py (from rexor12/holobot, MIT) ---
from .link_hentai_command import LinkHentaiCommand


# --- forecast/__init__.py (from ADGEfficiency/forecast, MIT) ---
] | 6 | 2019-01-24T08:59:40.000Z | 2021-04-08T14:34:18.000Z | from forecast.utils import *
from forecast.models.register import make_model


# --- 3-Python-Advanced (May 2021)/04-Comprehensions/02_Exercises/06-Matrix-of-Palindromes.py ---
# --- (from karolinanikolova/SoftUni-Software-Engineering, MIT) ---
] | null | null | null | # 6. Matrix of Palindromes
# Write a program to generate the following matrix of palindromes of 3 letters with r rows and c columns
# like the one in the examples below.
# • Rows define the first and the last letter: row 0 'a', row 1 'b', row 2 'c', …
# • Columns + rows define the middle letter:
# o column 0, row 0 'a', column 1, row 0 'b', column 2, row 0 'c', …
# o column 0, row 1 'b', column 1, row 1 'c', column 2, row 1 'd', …
# Option 1 - without saving to matrix and directly printing
rows, cols = [int(el) for el in input().split()]
[print(*[chr(97+row) + chr(97+row+col) + chr(97+row) for col in range(cols)]) for row in range(rows)]
# # Option 2 - with saving to matrix
# rows, cols = [int(el) for el in input().split()]
#
# matrix = [[] for row in range(rows)]
#
# [[matrix[row].append(chr(97+row) + chr(97+row+col) + chr(97+row)) for col in range(cols)] for row in range(rows)]
#
# [print(*matrix[row]) for row in range(rows)]
# # Option 3 - without comprehension
#
# rows, cols = [int(el) for el in input().split()]
#
# matrix = []
#
# for row in range(rows):
# matrix.append([])
#
# for col in range(cols):
# string = chr(97+row) + chr(97+row+col) + chr(97+row)
# matrix[row].append(string)
#
# for row in range(rows):
# print(*matrix[row]) | 35.459459 | 115 | 0.617378 | 240 | 1,312 | 3.458333 | 0.25 | 0.054217 | 0.086747 | 0.093976 | 0.46747 | 0.446988 | 0.412048 | 0.412048 | 0.357831 | 0.321687 | 0 | 0.036574 | 0.208079 | 1,312 | 37 | 116 | 35.459459 | 0.743022 | 0.829268 | 0 | 0 | 1 | 0.5 | 0.235602 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0.5 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 5 |
9ae6983c0581e14b2e91bb3b576f2ff1c1d9d995 | 39 | py | Python | Snake game version 2/main/main.py | yashasvibmishra/Arcade-snake-game-versions | 7905deb17783fc6dc575c3dc3e219f03f23d02c8 | [
"MIT"
] | null | null | null | Snake game version 2/main/main.py | yashasvibmishra/Arcade-snake-game-versions | 7905deb17783fc6dc575c3dc3e219f03f23d02c8 | [
"MIT"
] | null | null | null | Snake game version 2/main/main.py | yashasvibmishra/Arcade-snake-game-versions | 7905deb17783fc6dc575c3dc3e219f03f23d02c8 | [
"MIT"
] | null | null | null | import game
g = game.Game()
g.run() | 9.75 | 16 | 0.589744 | 7 | 39 | 3.285714 | 0.571429 | 0.434783 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.230769 | 39 | 4 | 17 | 9.75 | 0.766667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
9af53e876345c1b6048bf89d1fac61fa2ba82532 | 71 | py | Python | pepe/test/__init__.py | Jfeatherstone/pepe | 4d28cab830ff2a94d3cfc06c680bde05d92b2cdb | [
"MIT"
] | null | null | null | pepe/test/__init__.py | Jfeatherstone/pepe | 4d28cab830ff2a94d3cfc06c680bde05d92b2cdb | [
"MIT"
] | null | null | null | pepe/test/__init__.py | Jfeatherstone/pepe | 4d28cab830ff2a94d3cfc06c680bde05d92b2cdb | [
"MIT"
] | null | null | null | """
Unit tests.
"""
from .test_utils import *
from .test_auto import *
| 11.833333 | 25 | 0.676056 | 10 | 71 | 4.6 | 0.7 | 0.347826 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.169014 | 71 | 5 | 26 | 14.2 | 0.779661 | 0.15493 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
b103f5e1ee96d129b975015ae3f17ea17defc2c3 | 442 | py | Python | imagepreprocessing/imagepreprocessing.py | pawelplodzpl/imagepreprocessing | c802a61d047d85b014c3fae738114c1227f11fc1 | [
"MIT"
] | null | null | null | imagepreprocessing/imagepreprocessing.py | pawelplodzpl/imagepreprocessing | c802a61d047d85b014c3fae738114c1227f11fc1 | [
"MIT"
] | null | null | null | imagepreprocessing/imagepreprocessing.py | pawelplodzpl/imagepreprocessing | c802a61d047d85b014c3fae738114c1227f11fc1 | [
"MIT"
] | 1 | 2020-12-07T23:57:28.000Z | 2020-12-07T23:57:28.000Z | from imagepreprocessing.keras_functions import create_training_data_keras, make_prediction_from_directory_keras, make_prediction_from_array_keras
from imagepreprocessing.darknet_functions import create_training_data_yolo, yolo_annotation_tool, draw_bounding_boxes, create_cfg_file_yolo, make_prediction_from_directory_yolo, auto_annotation_by_random_points
from imagepreprocessing.utilities import create_confusion_matrix, train_test_split | 147.333333 | 211 | 0.925339 | 58 | 442 | 6.448276 | 0.534483 | 0.176471 | 0.144385 | 0.15508 | 0.176471 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.049774 | 442 | 3 | 212 | 147.333333 | 0.890476 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
b1164bfa00c99fd7f2e852504c7c43d111bd6d12 | 307 | py | Python | orcinus/workspace/__init__.py | orcinus-lang/orcinus-bootstrap | 3a4766f05a21ca5d4cd6384d1857ec1ffaa09518 | [
"MIT"
] | null | null | null | orcinus/workspace/__init__.py | orcinus-lang/orcinus-bootstrap | 3a4766f05a21ca5d4cd6384d1857ec1ffaa09518 | [
"MIT"
] | null | null | null | orcinus/workspace/__init__.py | orcinus-lang/orcinus-bootstrap | 3a4766f05a21ca5d4cd6384d1857ec1ffaa09518 | [
"MIT"
] | null | null | null | # Copyright (C) 2019 Vasiliy Sheredeko
#
# This software may be modified and distributed under the terms
# of the MIT license. See the LICENSE file for details.
from orcinus.workspace.workspace import Workspace
from orcinus.workspace.document import Document
from orcinus.workspace.package import Package
| 34.111111 | 63 | 0.814332 | 43 | 307 | 5.813953 | 0.651163 | 0.132 | 0.24 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015152 | 0.140065 | 307 | 8 | 64 | 38.375 | 0.931818 | 0.498371 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
b1175f699cd26b21661a5f6e5b6447c172a44fd8 | 1,137 | py | Python | terrascript/resource/launchdarkly/launchdarkly.py | mjuenema/python-terrascript | 6d8bb0273a14bfeb8ff8e950fe36f97f7c6e7b1d | [
"BSD-2-Clause"
] | 507 | 2017-07-26T02:58:38.000Z | 2022-01-21T12:35:13.000Z | terrascript/resource/launchdarkly/launchdarkly.py | mjuenema/python-terrascript | 6d8bb0273a14bfeb8ff8e950fe36f97f7c6e7b1d | [
"BSD-2-Clause"
] | 135 | 2017-07-20T12:01:59.000Z | 2021-10-04T22:25:40.000Z | terrascript/resource/launchdarkly/launchdarkly.py | mjuenema/python-terrascript | 6d8bb0273a14bfeb8ff8e950fe36f97f7c6e7b1d | [
"BSD-2-Clause"
] | 81 | 2018-02-20T17:55:28.000Z | 2022-01-31T07:08:40.000Z | # terrascript/resource/launchdarkly/launchdarkly.py
# Automatically generated by tools/makecode.py (24-Sep-2021 15:20:56 UTC)
import terrascript
class launchdarkly_access_token(terrascript.Resource):
pass
class launchdarkly_custom_role(terrascript.Resource):
pass
class launchdarkly_destination(terrascript.Resource):
pass
class launchdarkly_environment(terrascript.Resource):
pass
class launchdarkly_feature_flag(terrascript.Resource):
pass
class launchdarkly_feature_flag_environment(terrascript.Resource):
pass
class launchdarkly_project(terrascript.Resource):
pass
class launchdarkly_segment(terrascript.Resource):
pass
class launchdarkly_team_member(terrascript.Resource):
pass
class launchdarkly_webhook(terrascript.Resource):
pass
__all__ = [
"launchdarkly_access_token",
"launchdarkly_custom_role",
"launchdarkly_destination",
"launchdarkly_environment",
"launchdarkly_feature_flag",
"launchdarkly_feature_flag_environment",
"launchdarkly_project",
"launchdarkly_segment",
"launchdarkly_team_member",
"launchdarkly_webhook",
]
| 19.603448 | 73 | 0.784521 | 113 | 1,137 | 7.575221 | 0.292035 | 0.244159 | 0.268692 | 0.294393 | 0.471963 | 0.191589 | 0.119159 | 0 | 0 | 0 | 0 | 0.012257 | 0.138962 | 1,137 | 57 | 74 | 19.947368 | 0.862104 | 0.10642 | 0 | 0.30303 | 1 | 0 | 0.239882 | 0.180652 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.30303 | 0.030303 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
b17fdbd382fa24da92e37b18d1b0e7c720739a4d | 54 | py | Python | src/brouwers/general/signals.py | modelbrouwers/modelbrouwers | e0ba4819bf726d6144c0a648fdd4731cdc098a52 | [
"MIT"
] | 6 | 2015-03-03T13:23:07.000Z | 2021-12-19T18:12:41.000Z | src/brouwers/general/signals.py | modelbrouwers/modelbrouwers | e0ba4819bf726d6144c0a648fdd4731cdc098a52 | [
"MIT"
] | 95 | 2015-02-07T00:55:39.000Z | 2022-02-08T20:22:05.000Z | src/brouwers/general/signals.py | modelbrouwers/modelbrouwers | e0ba4819bf726d6144c0a648fdd4731cdc098a52 | [
"MIT"
] | 2 | 2016-03-22T16:53:26.000Z | 2019-02-09T22:46:04.000Z | # TODO: signal to create UserProfile on User creation
| 27 | 53 | 0.796296 | 8 | 54 | 5.375 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 54 | 1 | 54 | 54 | 0.955556 | 0.944444 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 1 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
493c5478943a96b141961394ab7ed5645e686c0b | 129 | py | Python | nappy/utils/__init__.py | ahurka/nappy | 153f2f86a40801619d91f2c87abb55d1e97ada87 | [
"BSD-3-Clause"
] | null | null | null | nappy/utils/__init__.py | ahurka/nappy | 153f2f86a40801619d91f2c87abb55d1e97ada87 | [
"BSD-3-Clause"
] | null | null | null | nappy/utils/__init__.py | ahurka/nappy | 153f2f86a40801619d91f2c87abb55d1e97ada87 | [
"BSD-3-Clause"
] | null | null | null | from .parse_config import getConfigDict, getLocalAttributesConfigDict
from .common_utils import getDebug, getVersion, getDefault
| 43 | 69 | 0.875969 | 13 | 129 | 8.538462 | 0.846154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085271 | 129 | 2 | 70 | 64.5 | 0.940678 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
49983e4d0a03be35b955b496ae78a0bacfcbf7e5 | 137 | py | Python | chapter6/webscraping/crosswords.py | cs50sacramento/source-code-16-17 | 8e8276de39ba3106b67549108f6ee4cf71836025 | [
"MIT"
] | null | null | null | chapter6/webscraping/crosswords.py | cs50sacramento/source-code-16-17 | 8e8276de39ba3106b67549108f6ee4cf71836025 | [
"MIT"
] | null | null | null | chapter6/webscraping/crosswords.py | cs50sacramento/source-code-16-17 | 8e8276de39ba3106b67549108f6ee4cf71836025 | [
"MIT"
] | null | null | null | from flask import Flask, render_template
app = Flask(__name__)
@app.route("/")
def main():
return render_template("crosswords.html") | 22.833333 | 45 | 0.737226 | 18 | 137 | 5.277778 | 0.722222 | 0.294737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.124088 | 137 | 6 | 45 | 22.833333 | 0.791667 | 0 | 0 | 0 | 0 | 0 | 0.115942 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0.2 | 0.6 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
77374a964a8ac9291639cc0509e1b6db8467b6ce | 120 | py | Python | mainhome/admin.py | VisheshJain112/dj_app | f7aa286d56ab5726e6cc3a20bcc808a859980ddd | [
"MIT"
] | null | null | null | mainhome/admin.py | VisheshJain112/dj_app | f7aa286d56ab5726e6cc3a20bcc808a859980ddd | [
"MIT"
] | null | null | null | mainhome/admin.py | VisheshJain112/dj_app | f7aa286d56ab5726e6cc3a20bcc808a859980ddd | [
"MIT"
] | null | null | null | from django.contrib import admin
# Register your models here.
from .models import content
admin.site.register(content) | 20 | 32 | 0.808333 | 17 | 120 | 5.705882 | 0.647059 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 120 | 6 | 33 | 20 | 0.92381 | 0.216667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
774ffd8d44c786c1034a0976ded9c00564c20210 | 2,267 | py | Python | src/pcb/interconnection.py | paulo-raca/PyPcb | cf768e90a643996fcf9ef1ce013d65aa7e4475a4 | [
"Unlicense"
] | 2 | 2021-08-25T10:50:01.000Z | 2021-12-24T03:09:29.000Z | src/pcb/interconnection.py | paulo-raca/PyPcb | cf768e90a643996fcf9ef1ce013d65aa7e4475a4 | [
"Unlicense"
] | null | null | null | src/pcb/interconnection.py | paulo-raca/PyPcb | cf768e90a643996fcf9ef1ce013d65aa7e4475a4 | [
"Unlicense"
] | null | null | null | from .common import PcbObject
class Interconnection(PcbObject):
pass
class Wire(Interconnection):
def __init__(self, name=None, parent=None):
super().__init__(name, parent)
self._wire_root = self
self._wire_connections = [self]
def root(self):
if self._wire_root is self:
return self
else:
self._wire_root = self._wire_root.root()
return self._wire_root
def all_wires(self):
yield from self.root()._wire_connections
def all_connections(self):
for wire in self.all_wires():
if wire._parent is not None:
yield wire._parent
def __add__(self, other):
my_root = self.root()
other_root = other.root()
if my_root is not other_root:
my_root._wire_connections += other_root._wire_connections
del other_root._wire_connections
other_root._wire_root = my_root
return self
def __eq__(self, other):
return type(other) is type(self) and self.root() is other.root()
def __hash__(self):
return id(self.root())
def __str__(self):
return "Interconnection: %s" % str([PcbObject.__str__(x) for x in self.all_wires()])
class Bus(Interconnection):
"""
Array of Wires
It should support slicing assignment. E.g.:
Bus[0] = vcc
Bus[1] = gnd
bus[2:10] = data
"""
pass #TODO
class NamedBus(Interconnection):
"""
Set of named Interconnections.
E.g.:
Serial: VCC, GND, RX, TX
I2C>: VCC, GND, SDA, SCL
CHARLCD: VCC, GND, RS, RW, EN, DATA[8], Backlight
"""
pass #TODO
#------------------------------------------------------------------------------
class InterconnectionAttribute:
def create(self, name, parent):
raise NotImplementedError
class WireAttribute(InterconnectionAttribute):
def create(self, name, parent):
return Wire(name, parent)
class BusAttribute(InterconnectionAttribute):
def create(self, name, parent):
return Bus(name, parent)
class NamedBusAttribute(InterconnectionAttribute):
def create(self, name, parent):
return NamedBus(name, parent)
| 26.360465 | 92 | 0.592854 | 261 | 2,267 | 4.900383 | 0.306513 | 0.062549 | 0.046912 | 0.115715 | 0.211102 | 0.211102 | 0.124316 | 0 | 0 | 0 | 0 | 0.004284 | 0.279224 | 2,267 | 85 | 93 | 26.670588 | 0.778458 | 0.155712 | 0 | 0.18 | 0 | 0 | 0.01032 | 0 | 0 | 0 | 0 | 0.023529 | 0 | 1 | 0.24 | false | 0.06 | 0.02 | 0.12 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 5 |
775778d74360516c2408ee2c2cd08433b010a980 | 5,022 | py | Python | src/lib/Potentials.py | amit112amit/oriented-particles-python | b6a9adc40c7e7129e074260778655dc175853fef | [
"MIT"
] | null | null | null | src/lib/Potentials.py | amit112amit/oriented-particles-python | b6a9adc40c7e7129e074260778655dc175853fef | [
"MIT"
] | null | null | null | src/lib/Potentials.py | amit112amit/oriented-particles-python | b6a9adc40c7e7129e074260778655dc175853fef | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Sat Jun 24 18:29:22 2017
Oriented Particle System potentials NUMERICAL functions
@author: amit
"""
from numba import jit
from numpy import exp, sqrt, cos, sin
# Morse potential
@jit(cache=True,nopython=True)
def morse(xi, xj, epsilon, l0, a):
a1,b1,c1 = xi
a2,b2,c2 = xj
r = sqrt((-a1 + a2)**2 + (-b1 + b2)**2 + (-c1 + c2)**2)
out = epsilon*(-2*exp(-a*(-l0 + r)) + exp(-2*a*(-l0 + r)))
return out
# The kernel function
@jit(cache=True,nopython=True)
def psi(vi, xi, xj, K, a, b):
u0,u1,u2 = vi
x0,x1,x2 = xi
y0,y1,y2 = xj
vi_mag_sqr = u0**2 + u1**2 + u2**2
vi_mag = sqrt(vi_mag_sqr)
sin_alpha_i = sin( 0.5*vi_mag )
cos_alpha_i = cos( 0.5*vi_mag )
psi0 = K*exp(-(cos_alpha_i**2*(-x2 + y2) +\
cos_alpha_i*sin_alpha_i*u0*(-2*x1 + 2*y1)/vi_mag +\
cos_alpha_i*sin_alpha_i*u1*(2*x0 - 2*y0)/vi_mag -\
sin_alpha_i**2*u0**2*(-x2 + y2)/vi_mag**2 + sin_alpha_i**2*u0*u2*(-\
2*x0 + 2*y0)/vi_mag**2 - sin_alpha_i**2*u1**2*(-x2 + y2)/vi_mag**2 +\
sin_alpha_i**2*u1*u2*(-2*x1 + 2*y1)/vi_mag**2 +\
sin_alpha_i**2*u2**2*(-x2 + y2)/vi_mag**2)**2/(2*b**2) + (-\
(cos_alpha_i**2*(-x0 + y0) + cos_alpha_i*sin_alpha_i*u1*(-2*x2 +\
2*y2)/vi_mag - cos_alpha_i*sin_alpha_i*u2*(-2*x1 + 2*y1)/vi_mag +\
sin_alpha_i**2*u0**2*(-x0 + y0)/vi_mag**2 + sin_alpha_i**2*u0*u1*(-\
2*x1 + 2*y1)/vi_mag**2 + sin_alpha_i**2*u0*u2*(-2*x2 +\
2*y2)/vi_mag**2 - sin_alpha_i**2*u1**2*(-x0 + y0)/vi_mag**2 -\
sin_alpha_i**2*u2**2*(-x0 + y0)/vi_mag**2)**2 - (cos_alpha_i**2*(-\
x1 + y1) - cos_alpha_i*sin_alpha_i*u0*(-2*x2 + 2*y2)/vi_mag +\
cos_alpha_i*sin_alpha_i*u2*(-2*x0 + 2*y0)/vi_mag -\
sin_alpha_i**2*u0**2*(-x1 + y1)/vi_mag**2 + sin_alpha_i**2*u0*u1*(-\
2*x0 + 2*y0)/vi_mag**2 + sin_alpha_i**2*u1**2*(-x1 + y1)/vi_mag**2 +\
sin_alpha_i**2*u1*u2*(-2*x2 + 2*y2)/vi_mag**2 -\
sin_alpha_i**2*u2**2*(-x1 + y1)/vi_mag**2)**2)/(2*a**2))
return psi0
# The co-planarity potential
@jit(cache=True,nopython=True)
def phi_p(vi, xi, xj):
u0,u1,u2 = vi
x0,x1,x2 = xi
y0,y1,y2 = xj
vi_mag_sqr = u0**2 + u1**2 + u2**2
vi_mag = sqrt(vi_mag_sqr)
sin_alpha_i = sin( 0.5*vi_mag )
cos_alpha_i = cos( 0.5*vi_mag )
phi_p0 = ((-x0 +\
y0)*(2*cos_alpha_i*sin_alpha_i*u1/vi_mag +\
2*sin_alpha_i**2*u0*u2/vi_mag**2) + (-x1 + y1)*(-\
2*cos_alpha_i*sin_alpha_i*u0/vi_mag +\
2*sin_alpha_i**2*u1*u2/vi_mag**2) + (-x2 + y2)*(cos_alpha_i**2 -\
sin_alpha_i**2*u0**2/vi_mag**2 - sin_alpha_i**2*u1**2/vi_mag**2 +\
sin_alpha_i**2*u2**2/vi_mag**2))**2
return phi_p0
# The co-normality potential
@jit(cache=True,nopython=True)
def phi_n(vi, vj):
u0,u1,u2 = vi
v0,v1,v2 = vj
vi_mag_sqr = u0**2 + u1**2 + u2**2
vj_mag_sqr = v0**2 + v1**2 + v2**2
vi_mag = sqrt(vi_mag_sqr)
vj_mag = sqrt(vj_mag_sqr)
sin_alpha_i = sin( 0.5*vi_mag )
sin_alpha_j = sin( 0.5*vj_mag )
cos_alpha_i = cos( 0.5*vi_mag )
cos_alpha_j = cos( 0.5*vj_mag )
phi_n0 = (-\
2*cos_alpha_i*sin_alpha_i*u0/vi_mag +\
2*cos_alpha_j*sin_alpha_j*v0/vj_mag +\
2*sin_alpha_i**2*u1*u2/vi_mag**2 -\
2*sin_alpha_j**2*v1*v2/vj_mag**2)**2 +\
(2*cos_alpha_i*sin_alpha_i*u1/vi_mag -\
2*cos_alpha_j*sin_alpha_j*v1/vj_mag +\
2*sin_alpha_i**2*u0*u2/vi_mag**2 -\
2*sin_alpha_j**2*v0*v2/vj_mag**2)**2 + (cos_alpha_i**2 -\
cos_alpha_j**2 - sin_alpha_i**2*u0**2/vi_mag**2 -\
sin_alpha_i**2*u1**2/vi_mag**2 + sin_alpha_i**2*u2**2/vi_mag**2 +\
sin_alpha_j**2*v0**2/vj_mag**2 + sin_alpha_j**2*v1**2/vj_mag**2 -\
sin_alpha_j**2*v2**2/vj_mag**2)**2
return phi_n0
# The co-circularity potential
@jit(cache=True,nopython=True)
def phi_c(vi, vj, xi, xj):
u0,u1,u2 = vi
v0,v1,v2 = vj
x0,x1,x2 = xi
y0,y1,y2 = xj
vi_mag_sqr = u0**2 + u1**2 + u2**2
vj_mag_sqr = v0**2 + v1**2 + v2**2
vi_mag = sqrt(vi_mag_sqr)
vj_mag = sqrt(vj_mag_sqr)
sin_alpha_i = sin( 0.5*vi_mag )
sin_alpha_j = sin( 0.5*vj_mag )
cos_alpha_i = cos( 0.5*vi_mag )
cos_alpha_j = cos( 0.5*vj_mag )
phi_c0 = ((-x0 +\
y0)*(2*cos_alpha_i*sin_alpha_i*u1/vi_mag +\
2*cos_alpha_j*sin_alpha_j*v1/vj_mag +\
2*sin_alpha_i**2*u0*u2/vi_mag**2 +\
2*sin_alpha_j**2*v0*v2/vj_mag**2) + (-x1 + y1)*(-\
2*cos_alpha_i*sin_alpha_i*u0/vi_mag -\
2*cos_alpha_j*sin_alpha_j*v0/vj_mag +\
2*sin_alpha_i**2*u1*u2/vi_mag**2 +\
2*sin_alpha_j**2*v1*v2/vj_mag**2) + (-x2 + y2)*(cos_alpha_i**2 +\
cos_alpha_j**2 - sin_alpha_i**2*u0**2/vi_mag**2 -\
sin_alpha_i**2*u1**2/vi_mag**2 + sin_alpha_i**2*u2**2/vi_mag**2 -\
sin_alpha_j**2*v0**2/vj_mag**2 - sin_alpha_j**2*v1**2/vj_mag**2 +\
sin_alpha_j**2*v2**2/vj_mag**2))**2
return phi_c0
| 35.871429 | 77 | 0.566906 | 1,051 | 5,022 | 2.43197 | 0.088487 | 0.159624 | 0.161972 | 0.117371 | 0.846244 | 0.836072 | 0.808294 | 0.785994 | 0.680751 | 0.676448 | 0 | 0.11225 | 0.223019 | 5,022 | 139 | 78 | 36.129496 | 0.542799 | 0.054162 | 0 | 0.46789 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045872 | false | 0 | 0.018349 | 0 | 0.110092 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
775a910f76df4b5024db931756b3979c6a474462 | 6,385 | py | Python | tests/timers/test_element_multiple_timer.py | dougPhilips/python-seleniumpm | 4ddff760cd4486bfd48efdb77e33fb4574dc0e5d | [
"Apache-2.0"
] | 2 | 2020-01-13T14:41:08.000Z | 2020-01-29T10:21:04.000Z | tests/timers/test_element_multiple_timer.py | dougPhilips/python-seleniumpm | 4ddff760cd4486bfd48efdb77e33fb4574dc0e5d | [
"Apache-2.0"
] | 1 | 2018-05-29T14:47:55.000Z | 2018-05-29T14:47:55.000Z | tests/timers/test_element_multiple_timer.py | dougPhilips/python-seleniumpm | 4ddff760cd4486bfd48efdb77e33fb4574dc0e5d | [
"Apache-2.0"
] | null | null | null | import time
from tests.uitestwrapper import UiTestWrapper
from seleniumpm.webelements.element import Element
class TestElementMultipleTimer(UiTestWrapper):
def timer_assertions(self, element_start_time, element_end_time, duration):
assert hasattr(self.driver, "element_start_time")
assert hasattr(self.driver, "element_end_time")
assert hasattr(self.driver, "element_duration_time")
assert isinstance(element_start_time, float), "Expecting returned element_start_time to be a float"
assert isinstance(element_end_time, float), "Expecting returned element_end_time to be a float"
assert isinstance(duration, float), "Expecting returned duration to be a float"
assert isinstance(self.driver.element_start_time, float), "Expecting element_start_time to be a float"
assert isinstance(self.driver.element_end_time, float), "Expecting element_end_time to be a float"
assert isinstance(self.driver.element_duration_time, float), "Expecting element_duration_time to be a float"
assert str(self.driver.element_start_time) == str(element_start_time), "Expecting the element_start_time = {} - actual: {}".format(
element_start_time, self.driver.element_start_time)
assert str(self.driver.element_end_time) == str(element_end_time), "Expecting element_end_time actual: {} - expected: {}".format(
self.driver.element_end_time, element_end_time)
actual_duration = self.driver.element_end_time - self.driver.element_start_time
assert str(actual_duration) == str(duration), "Expecting duration actual: {} - expected: {}".format(
actual_duration, duration)
def teardown_method(self, test_method):
if hasattr(self.driver, "element_start_time"):
delattr(self.driver, "element_start_time")
if hasattr(self.driver, "element_end_time"):
delattr(self.driver, "element_end_time")
if hasattr(self.driver, "element_duration_time"):
delattr(self.driver, "element_duration_time")
def test_element_start_timer(self):
element = Element(self.driver, None)
element_start_time = element.start_timer(type="element")
assert hasattr(self.driver, "element_start_time")
assert self.driver.element_start_time == element_start_time, "Expecting the element_start_time = {} - actual: {}".format(
element_start_time, self.driver.element_start_time)
def test_get_split_timer(self):
element = Element(self.driver, None)
split_time = element.get_split_time(type="element")
assert split_time == 0, "Expecting the split time to be 0 if I haven't started a timer"
def test_get_split_after_reset_timer(self):
element = Element(self.driver, None)
element_start_time = element.start_timer(type="element")
assert hasattr(self.driver, "element_start_time")
assert self.driver.element_start_time == element_start_time, "Expecting the element_start_time = {} - actual: {}".format(
element_start_time, self.driver.element_start_time)
element.reset_timer(type="element")
split_time = element.get_split_time(type="element")
assert split_time == 0, "Expecting the split time to be 0 if I haven't started a timer"
def test_start_stop_timer(self):
element = Element(self.driver, None)
element_start_time = element.start_timer(type="element")
time.sleep(0.5)
element_end_time = element.stop_timer(type="element")
duration = element.get_duration(type="element")
self.timer_assertions(element_start_time, element_end_time, duration)
def test_start_stop_multiple_times_timer(self):
element = Element(self.driver, None)
element_start_time = element.start_timer(type="element")
time.sleep(0.5)
element.stop_timer(type="element")
time.sleep(0.5)
element_end_time = element.stop_timer(type="element")
duration = element.get_duration(type="element")
self.timer_assertions(element_start_time, element_end_time, duration)
def test_start_stop_duration_multiple_times_timer(self):
element = Element(self.driver, None)
element_start_time = element.start_timer(type="element")
time.sleep(0.5)
element_end_time = element.stop_timer(type="element")
time.sleep(0.5)
element.get_duration(type="element")
time.sleep(0.5)
element.get_duration(type="element")
duration = element.get_duration(type="element")
self.timer_assertions(element_start_time, element_end_time, duration)
def test_start_get_duration_multiple_times_timer(self):
element = Element(self.driver, None)
element_start_time = element.start_timer(type="element")
time.sleep(0.5)
duration = element.get_duration(type="element")
element_end_time = self.driver.element_end_time
time.sleep(0.5)
actual_duration = element.get_duration(type="element")
assert str(actual_duration) == str(duration), "Expecting duration to be the same - actual: {} - expected: {}".format(
actual_duration, duration)
time.sleep(0.5)
actual_duration = element.get_duration(type="element")
assert str(actual_duration) == str(duration), "Expecting duration to be the same - actual: {} - expected: {}".format(
actual_duration, duration)
self.timer_assertions(element_start_time, element_end_time, duration)
def test_start_stop_split_timer(self):
element = Element(self.driver, None)
element_start_time = element.start_timer(type="element")
time.sleep(0.5)
split_time = element.get_split_time(type="element")
assert split_time > 0
assert not hasattr(self.driver, "element_end_time") or self.driver.element_end_time == 0
time.sleep(0.5)
split_time = element.get_split_time(type="element")
assert not hasattr(self.driver, "element_end_time") or self.driver.element_end_time == 0
assert split_time > 0
time.sleep(0.5)
element_end_time = element.stop_timer(type="element")
duration = element.get_duration(type="element")
assert duration > split_time
self.timer_assertions(element_start_time, element_end_time, duration)
| 54.57265 | 139 | 0.705403 | 825 | 6,385 | 5.174545 | 0.067879 | 0.129304 | 0.142422 | 0.086203 | 0.889201 | 0.814711 | 0.73577 | 0.725463 | 0.68283 | 0.648395 | 0 | 0.006226 | 0.194988 | 6,385 | 116 | 140 | 55.043103 | 0.824319 | 0 | 0 | 0.647619 | 0 | 0 | 0.183712 | 0.013156 | 0 | 0 | 0 | 0 | 0.295238 | 1 | 0.095238 | false | 0 | 0.028571 | 0 | 0.133333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
622ade8f618eb8e4b97409e6c36da24ab21a6748 | 28 | py | Python | gluon/packages/yatl/tests/__init__.py | AustinKellar/web2py_installation | b6b8bb890762f875871d11a934b5ed7aea33563c | [
"BSD-3-Clause"
] | 2 | 2020-09-19T04:22:52.000Z | 2020-09-23T14:04:17.000Z | gluon/packages/yatl/tests/__init__.py | AustinKellar/web2py_installation | b6b8bb890762f875871d11a934b5ed7aea33563c | [
"BSD-3-Clause"
] | 14 | 2018-03-04T22:56:41.000Z | 2020-12-10T19:49:43.000Z | gluon/packages/yatl/tests/__init__.py | AustinKellar/web2py_installation | b6b8bb890762f875871d11a934b5ed7aea33563c | [
"BSD-3-Clause"
] | 2 | 2020-09-18T15:12:26.000Z | 2020-11-10T22:09:59.000Z | from .test_template import * | 28 | 28 | 0.821429 | 4 | 28 | 5.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.107143 | 28 | 1 | 28 | 28 | 0.88 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
624f42ad8c3d9d4b1ed7c3dd8135c47bf0a8d3f1 | 13,168 | py | Python | code/code_version1.0/draw_geom.py | godisreal/CrowdEgress | 4e284f0108e9a6ed09c07d8738fb17421b82039b | [
"Apache-2.0"
] | null | null | null | code/code_version1.0/draw_geom.py | godisreal/CrowdEgress | 4e284f0108e9a6ed09c07d8738fb17421b82039b | [
"Apache-2.0"
] | 3 | 2020-04-24T06:04:53.000Z | 2022-02-02T14:32:23.000Z | code/code_version1.0/draw_geom.py | godisreal/CrowdEgress | 4e284f0108e9a6ed09c07d8738fb17421b82039b | [
"Apache-2.0"
] | 1 | 2021-08-13T01:32:45.000Z | 2021-08-13T01:32:45.000Z |
import pygame
import pygame.draw
import numpy as np
#from math_func import *
########################
##### Color Info as below ###
########################
red=255,0,0
green=0,255,0
blue=0,0,255
white=255,255,255
yellow=255,255,0
IndianRed=205,92,92
tan = 210,180,140
skyblue = 135,206,235
orange = 255,128,0
khaki = 240,230,140
black = 0,0,0
purple = 160, 32, 240
magenta = 255, 0, 255
lightpink =255, 174, 185
lightblue =178, 223, 238
Cyan = 0, 255, 255
LightCyan = 224, 255, 255
lightgreen = 193, 255, 193
####################
# Drawing the walls
####################
def drawWall(screen, walls, ZOOMFACTOR=10.0, SHOWDATA=False, xSpace=0.0, ySpace=0.0):
xyShift = np.array([xSpace, ySpace])
for wall in walls:
if wall.inComp == 0:
continue
if wall.mode=='line':
startPos = np.array([wall.params[0],wall.params[1]]) #+xyShift
endPos = np.array([wall.params[2],wall.params[3]]) #+xyShift
startPx = startPos*ZOOMFACTOR #+np.array([xSpace, ySpace])
endPx = endPos*ZOOMFACTOR #+np.array([xSpace, ySpace])
pygame.draw.line(screen, red, startPx+xyShift, endPx+xyShift, 2)
if SHOWDATA:
myfont=pygame.font.SysFont("arial",14)
text_surface=myfont.render(str(startPos), True, (255,0,0), (255,255,255))
screen.blit(text_surface, startPos*ZOOMFACTOR +xyShift)
text_surface=myfont.render(str(endPos), True, (255,0,0), (255,255,255))
screen.blit(text_surface, endPos*ZOOMFACTOR +xyShift)
elif wall.mode=='rect':
x= ZOOMFACTOR*wall.params[0]
y= ZOOMFACTOR*wall.params[1]
w= ZOOMFACTOR*(wall.params[2] - wall.params[0])
h= ZOOMFACTOR*(wall.params[3] - wall.params[1])
pygame.draw.rect(screen, red, [x+xSpace, y+ySpace, w, h], 2)
if SHOWDATA:
pass
startPos = np.array([wall.params[0],wall.params[1]])
endPos = np.array([wall.params[2],wall.params[3]])
myfont=pygame.font.SysFont("arial",10)
#text_surface=myfont.render(str(startPos), True, red, (255,255,255))
#screen.blit(text_surface, startPos*ZOOMFACTOR+xyShift)
#text_surface=myfont.render(str(endPos), True, red, (255,255,255))
#screen.blit(text_surface, endPos*ZOOMFACTOR+xyShift)
def drawSingleWall(screen, wall, ZOOMFACTOR=10.0, SHOWDATA=False, xSpace=0.0, ySpace=0.0, lw=2):
xyShift = np.array([xSpace, ySpace])
if wall.inComp == 0:
print('Error: Draw a wall that is not in Computation!\n')
return
if wall.mode=='line':
startPos = np.array([wall.params[0],wall.params[1]]) #+xyShift
endPos = np.array([wall.params[2],wall.params[3]]) #+xyShift
startPx = startPos*ZOOMFACTOR #+np.array([xSpace, ySpace])
endPx = endPos*ZOOMFACTOR #+np.array([xSpace, ySpace])
pygame.draw.line(screen, red, startPx+xyShift, endPx+xyShift, lw)
if SHOWDATA:
myfont=pygame.font.SysFont("arial",14)
text_surface=myfont.render(str(startPos), True, (255,0,0), (255,255,255))
screen.blit(text_surface, startPos*ZOOMFACTOR +xyShift)
text_surface=myfont.render(str(endPos), True, (255,0,0), (255,255,255))
screen.blit(text_surface, endPos*ZOOMFACTOR +xyShift)
elif wall.mode=='rect':
x= ZOOMFACTOR*wall.params[0]
y= ZOOMFACTOR*wall.params[1]
w= ZOOMFACTOR*(wall.params[2] - wall.params[0])
h= ZOOMFACTOR*(wall.params[3] - wall.params[1])
pygame.draw.rect(screen, red, [x+xSpace, y+ySpace, w, h], lw)
if SHOWDATA:
pass
startPos = np.array([wall.params[0],wall.params[1]])
endPos = np.array([wall.params[2],wall.params[3]])
myfont=pygame.font.SysFont("arial",10)
#text_surface=myfont.render(str(startPos), True, red, (255,255,255))
#screen.blit(text_surface, startPos*ZOOMFACTOR+xyShift)
#text_surface=myfont.render(str(endPos), True, red, (255,255,255))
#screen.blit(text_surface, endPos*ZOOMFACTOR+xyShift)
####################
# Drawing the doors
####################
def drawDoor(screen, doors, ZOOMFACTOR=10.0, SHOWDATA=False, xSpace=0.0, ySpace=0.0):
    xyShift = np.array([xSpace, ySpace])
    for door in doors:
        if door.inComp == 0:
            continue
        startPos = np.array([door.params[0], door.params[1]])
        x = ZOOMFACTOR*door.params[0]
        y = ZOOMFACTOR*door.params[1]
        w = ZOOMFACTOR*(door.params[2] - door.params[0])
        h = ZOOMFACTOR*(door.params[3] - door.params[1])
        pygame.draw.rect(screen, green, [x + xSpace, y + ySpace, w, h], 2)
        if SHOWDATA:
            myfont = pygame.font.SysFont("arial", 10)
            text_surface = myfont.render(str(startPos), True, blue, (255, 255, 255))
            screen.blit(text_surface, startPos*ZOOMFACTOR + xyShift)
            myfont = pygame.font.SysFont("arial", 13)
            text_surface = myfont.render('ID' + str(door.id) + '/' + str(door.arrow), True, blue, (255, 255, 255))
            screen.blit(text_surface, door.pos*ZOOMFACTOR + xyShift)
def drawSingleDoor(screen, door, ZOOMFACTOR=10.0, SHOWDATA=False, xSpace=0.0, ySpace=0.0, lw=2):
    xyShift = np.array([xSpace, ySpace])
    if door.inComp == 0:
        print('Error: drawing a door that is not in the computation!')
        return
    startPos = np.array([door.params[0], door.params[1]])
    x = ZOOMFACTOR*door.params[0]
    y = ZOOMFACTOR*door.params[1]
    w = ZOOMFACTOR*(door.params[2] - door.params[0])
    h = ZOOMFACTOR*(door.params[3] - door.params[1])
    pygame.draw.rect(screen, green, [x + xSpace, y + ySpace, w, h], lw)
    if SHOWDATA:
        myfont = pygame.font.SysFont("arial", 10)
        text_surface = myfont.render(str(startPos), True, blue, (255, 255, 255))
        screen.blit(text_surface, startPos*ZOOMFACTOR + xyShift)
        myfont = pygame.font.SysFont("arial", 13)
        text_surface = myfont.render('ID' + str(door.id) + '/' + str(door.arrow), True, blue, (255, 255, 255))
        screen.blit(text_surface, door.pos*ZOOMFACTOR + xyShift)
####################
# Drawing the exits
####################
def drawExit(screen, exits, ZOOMFACTOR=10.0, SHOWDATA=False, xSpace=0.0, ySpace=0.0):
    xyShift = np.array([xSpace, ySpace])
    for exit in exits:
        if exit.inComp == 0:
            continue
        startPos = np.array([exit.params[0], exit.params[1]])
        x = ZOOMFACTOR*exit.params[0]
        y = ZOOMFACTOR*exit.params[1]
        w = ZOOMFACTOR*(exit.params[2] - exit.params[0])
        h = ZOOMFACTOR*(exit.params[3] - exit.params[1])
        pygame.draw.rect(screen, orange, [x + xSpace, y + ySpace, w, h], 2)
        if SHOWDATA:
            myfont = pygame.font.SysFont("arial", 10)
            text_surface = myfont.render(str(startPos), True, blue, (255, 255, 255))
            screen.blit(text_surface, startPos*ZOOMFACTOR + xyShift)
            myfont = pygame.font.SysFont("arial", 13)
            text_surface = myfont.render('ID' + str(exit.id) + '/' + str(exit.arrow), True, blue, (255, 255, 255))
            screen.blit(text_surface, exit.pos*ZOOMFACTOR + xyShift)
def drawSingleExit(screen, exit, ZOOMFACTOR=10.0, SHOWDATA=False, xSpace=0.0, ySpace=0.0, lw=2):
    xyShift = np.array([xSpace, ySpace])
    if exit.inComp == 0:
        print('Error: drawing an exit that is not in the computation!')
        return
    startPos = np.array([exit.params[0], exit.params[1]])
    x = ZOOMFACTOR*exit.params[0]
    y = ZOOMFACTOR*exit.params[1]
    w = ZOOMFACTOR*(exit.params[2] - exit.params[0])
    h = ZOOMFACTOR*(exit.params[3] - exit.params[1])
    pygame.draw.rect(screen, orange, [x + xSpace, y + ySpace, w, h], lw)
    if SHOWDATA:
        myfont = pygame.font.SysFont("arial", 10)
        text_surface = myfont.render(str(startPos), True, blue, (255, 255, 255))
        screen.blit(text_surface, startPos*ZOOMFACTOR + xyShift)
        myfont = pygame.font.SysFont("arial", 13)
        text_surface = myfont.render('ID' + str(exit.id) + '/' + str(exit.arrow), True, blue, (255, 255, 255))
        screen.blit(text_surface, exit.pos*ZOOMFACTOR + xyShift)
def drawDirection(screen, door, arrow, ZOOMFACTOR=10.0, xSpace=0.0, ySpace=0.0):
    xyShift = np.array([xSpace, ySpace])
    # arrow codes: +/-1 -> +/-x direction, +/-2 -> +/-y direction, 0 -> none
    if arrow == 1:
        direction = np.array([1.0, 0.0])
    elif arrow == -1:
        direction = np.array([-1.0, 0.0])
    elif arrow == 2:
        direction = np.array([0.0, 1.0])
    elif arrow == -2:
        direction = np.array([0.0, -1.0])
    else:
        direction = np.array([0.0, 0.0])
    startPx = door.pos
    endPx = door.pos + direction
    pygame.draw.line(screen, red, startPx*ZOOMFACTOR + xyShift, endPx*ZOOMFACTOR + xyShift, 2)
    dir = endPx - startPx
    # perpendicular of (dx, dy) is (-dy, dx), used to spread the arrowhead
    dir2 = np.array([-dir[1], dir[0]])
    arrowPx = endPx - dir*0.2
    arrowPx1 = arrowPx + 0.6*dir2
    arrowPx2 = arrowPx - 0.6*dir2
    pygame.draw.line(screen, red, endPx*ZOOMFACTOR + xyShift, arrowPx1*ZOOMFACTOR + xyShift, 2)
    pygame.draw.line(screen, red, endPx*ZOOMFACTOR + xyShift, arrowPx2*ZOOMFACTOR + xyShift, 2)
if __name__ == "__main__":
    from obst import *

    # initialize obstacles from the CSV data
    obstFeatures = readCSV("obstData2018.csv", "string")
    walls = []
    for obstFeature in obstFeatures:
        wall = obst()
        wall.params[0] = float(obstFeature[0])
        wall.params[1] = float(obstFeature[1])
        wall.params[2] = float(obstFeature[2])
        wall.params[3] = float(obstFeature[3])
        wall.mode = obstFeature[4]
        wall.id = int(obstFeature[5])
        wall.arrow = int(obstFeature[6])
        wall.inComp = int(obstFeature[7])
        walls.append(wall)

    pygame.init()
    screen = pygame.display.set_mode(SCREENSIZE)
    pygame.display.set_caption('Test of This Package')
    clock = pygame.time.Clock()

    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
            elif event.type == pygame.MOUSEBUTTONDOWN:
                (mouseX, mouseY) = pygame.mouse.get_pos()

        ####################################
        # Drawing the geometries: walls, doors, exits
        ####################################
        drawWall(screen, walls)
        #drawDoor(screen, doors)
        #drawExit(screen, exits)
        pygame.display.flip()
        clock.tick(20)
# markb/__init__.py (amireldor/markb, MIT)
from ._version import get_versions
__version__ = get_versions()['version']
del get_versions

from .markb import main

__all__ = ["main", "__version__"]
# gaia_tools/select/__init__.py (npricejones/gaia_tools, MIT)
from gaia_tools.select.tgasSelect import *
# bazel/node_binding_cc.bzl (chokobole/node-binding, BSD-3-Clause)
def node_binding_copts():
    return select({
        "@node_binding//:windows": [
            "/std:c++14",
        ],
        "//conditions:default": [
            "-std=c++14",
        ],
    })
# hcec/edwards/testdata/encodeint.py (duwu/hcd, ISC)
import sys
from ed25519 import *

encodeinthex(int(sys.argv[1]))
#!/usr/bin/python
# -*- coding: utf-8 -*-
# app/modules/addresses/constants.py (systemaker/Flask-Easy-Template, Apache-2.0)

# ------- IMPORT DEPENDENCIES -------
# ------- IMPORT LOCAL DEPENDENCIES -------
# from app import app

GOOGLE_MAP_API_KEY = ''
# app.config['GOOGLE_MAP_API_KEY'] = GOOGLE_MAP_API_KEY
# test_day_10.py (bastoche/adventofcode2017, MIT)
from day_10 import part_one, invert_sublist, swap, part_two, to_ascii_codes, to_dense_hash, to_hexadecimal_string


def test_part_one():
    assert part_one('3,4,1,5', size=5) == 12


def test_invert_sublist():
    assert invert_sublist([0, 1, 2, 3, 4], 0, 3) == [2, 1, 0, 3, 4]
    assert invert_sublist([2, 1, 0, 3, 4], 3, 4) == [4, 3, 0, 1, 2]
    assert invert_sublist([4, 3, 0, 1, 2], 3, 1) == [4, 3, 0, 1, 2]
    assert invert_sublist([4, 3, 0, 1, 2], 1, 5) == [3, 4, 2, 1, 0]


def test_swap():
    assert swap([0, 1, 2, 3, 4], 0, 2) == [2, 1, 0, 3, 4]


def test_part_two():
    assert part_two('') == 'a2582a3a0e66e6e86e3812dcb672a272'
    assert part_two('AoC 2017') == '33efeb34ea91902bb2f59c9920caa6cd'
    assert part_two('1,2,3') == '3efbe78a8d82f29979031a4aa0b16a9d'
    assert part_two('1,2,4') == '63960835bcdc130f0b66d7ff4f6a5a8e'


def test_to_ascii_codes():
    assert to_ascii_codes('1,2,3') == [49, 44, 50, 44, 51]


def test_to_dense_hash():
    assert to_dense_hash([65, 27, 9, 1, 4, 3, 40, 50, 91, 7, 6, 0, 2, 5, 68, 22]) == [64]
    assert to_dense_hash([0] * 256) == [0] * 16
    assert to_dense_hash([65, 27, 9, 1, 4, 3, 40, 50, 91, 7, 6, 0, 2, 5, 68, 22] * 16) == [64] * 16


def test_to_hexadecimal_string():
    assert to_hexadecimal_string([64, 7, 255]) == '4007ff'
# koopman_core/systems/__init__.py (Cafolkes/koopman_learning_and_control, MIT)
from .one_dim_drone import OneDimDrone
from .planar_quadrotor_force_input import PlanarQuadrotorForceInput
from .aut_koop_sys import AutKoopSys
from .koop_sys_ctrl import KoopSysCtrl
# src/pytorch_yard/utils/__init__.py (karolpiczak/pytorch-yard, MIT)
# utils.* modules are accessed directly
# floodsystem/analysis.py (jwll3/Flood-warning-system-group-48, MIT)
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
def polyfit(dates, levels, p):
    """Fit a degree-p polynomial to water levels against time.

    Returns poly, the best-fit polynomial, and d0, the shift applied to the
    time axis to avoid poorly conditioned fit warnings.
    """
    x = matplotlib.dates.date2num(dates)
    y = levels
    # Using shifted x values, find the coefficients of the best-fit
    # polynomial f(x) of degree p
    p_coeff = np.polyfit(x - x[0], y, p)
    # Convert the coefficients into a polynomial that can be evaluated,
    # e.g. poly(0.3)
    poly = np.poly1d(p_coeff)
    d0 = x[0]
    return poly, d0
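Callers must subtract the same shift `d0` before evaluating the returned polynomial. A self-contained usage sketch with synthetic data, where plain floats stand in for the large day numbers `matplotlib.dates.date2num` produces (so only NumPy is needed):

```python
import numpy as np

# time axis in days, like matplotlib.dates.date2num output (hourly samples)
x = np.array([736000.0 + i / 24 for i in range(10)])
levels = np.array([0.1 * i ** 2 for i in range(10)])  # synthetic quadratic rise

# shifting by x[0] keeps the fit well conditioned despite the large date numbers
p_coeff = np.polyfit(x - x[0], levels, 2)
poly = np.poly1d(p_coeff)
d0 = x[0]

# to evaluate the fit at any time t, subtract the same shift first
t = 736000.0 + 12 / 24  # twelve hours after the first sample
predicted = poly(t - d0)  # close to 0.1 * 12**2 == 14.4
```

Fitting `np.polyfit` directly on the raw `x` values would trigger a `RankWarning`, which is exactly what the `d0` shift avoids.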
"""
x = matplotlib.dates.date2num(dates)
y = levels
# Using shifted x values, find coefficient of best-fit
# polynomial f(x) of degree 4
p_coeff = np.polyfit(x - x[0], y, p)
# Convert coefficient into a polynomial that can be evaluated
# e.g. poly(0.3)
poly = np.poly1d(p_coeff)
d0 = x[0]
# Plot original data points
#plt.plot(x, y, '.')
# Plot polynomial fit at 30 points along interval (note that polynomial
# is evaluated using the shift x)
x1 = np.linspace(x[0], x[-1], 30)
#plt.plot(x1, poly(x1 - x[0]))
return poly, d0
"""
"""
def polyfit(dates,levels,p):
x = matplotlib.dates.date2num(dates)
# Find coefficients of best-fit polynomial f(x) of degree 4
p_coeff = np.polyfit(x-x[0], levels, p)
# Convert coefficient into a polynomial that can be evaluated,
# e.g. poly(0.3)
poly = np.poly1d(p_coeff)
# Plot original data points
plt.plot(dates, levels, '.')
# Plot polynomial fit at 30 points along interval (note that polynomial
# is evaluated using the shift x)
x1 = np.linspace(x[0], x[-1], 30)
#plt.plot(x1, poly(x1 - dates[0]))
return poly, x1
"""
"""
# Create set of 10 data points on interval (0, 2)
x = np.linspace(0, 2, 10)
y = [0.1, 0.09, 0.23, 0.34, 0.78, 0.74, 0.43, 0.31, 0.01, -0.05]
# Find coefficients of best-fit polynomial f(x) of degree 4
p_coeff = np.polyfit(x, y, 4)
# Convert coefficient into a polynomial that can be evaluated,
# e.g. poly(0.3)
poly = np.poly1d(p_coeff)
# Plot original data points
plt.plot(x, y, '.')
# Plot polynomial fit at 30 points along interval
x1 = np.linspace(x[0], x[-1], 30)
plt.plot(x1, poly(x1))
# Display plot
plt.show()
"""
# src/main.py (fossabot/pytorch-aarch64, MIT)
from index import gen_index

if __name__ == '__main__':
    gen_index()
# api/views_group.py (yaroshyk/todo, MIT)
from api import forms
from api.forms import TodoForm
from api.models import Todo
# jobbr/security.py (nielslerche/autorisationsdemo, MIT)
from jobbr import app
from flask_bcrypt import Bcrypt

bcrypt = Bcrypt(app)
# src/marking/marking_tests.py (hawesie/python-canvas-api, MIT)
import unittest

import marking_actions


class TestMarkingFunctions(unittest.TestCase):

    def test_usernames(self):
        # only student usernames are accepted
        self.assertFalse(marking_actions.is_username('nah'))
        self.assertFalse(marking_actions.is_username('hawesna'))
        # lower-case usernames like these
        self.assertTrue(marking_actions.is_username('abc123'))
        self.assertTrue(marking_actions.is_username('nia411'))
        self.assertTrue(marking_actions.is_username('hbe173'))
        self.assertTrue(marking_actions.is_username('hbe1734'))
        # and upper-case ones like these
        self.assertTrue(marking_actions.is_username('ABD123'))
        self.assertTrue(marking_actions.is_username('ASD411'))
        self.assertTrue(marking_actions.is_username('XXX173'))
        self.assertTrue(marking_actions.is_username('XXX1739'))
        # but not malformed ones
        self.assertFalse(marking_actions.is_username('a123'))
        self.assertFalse(marking_actions.is_username('123abc'))
        self.assertFalse(marking_actions.is_username('abc1'))
        self.assertFalse(marking_actions.is_username('abc12345'))


if __name__ == '__main__':
    unittest.main()
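The assertions above pin down the accepted shape: three letters followed by three or four digits. A minimal sketch of an `is_username` consistent with these tests (the real `marking_actions` implementation may differ):

```python
import re

# three ASCII letters followed by three or four digits, e.g. 'abc123' or 'XXX1739'
_USERNAME_RE = re.compile(r'[A-Za-z]{3}[0-9]{3,4}')

def is_username(candidate):
    """Return True if candidate matches the student username pattern."""
    return _USERNAME_RE.fullmatch(candidate) is not None
```

`fullmatch` requires the whole string to match, which is what rejects strings like `'abc12345'` that merely contain a valid prefix.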
# tests/base.py (ibalance2005/ocr_server, Apache-2.0)
import unittest as ut


class BT(ut.TestCase):

    def setUp(self):
        pass
# api/admin.py (pearlpandz/storyseller-api, MIT)
from django.contrib import admin

from .models import Story

# Register your models here.
admin.site.register(Story)
# lockss_metadata/__init__.py (lockss/lockss-metadata-python, BSD-3-Clause)
# coding: utf-8
# flake8: noqa
"""
LOCKSS Metadata Service REST API
API of the LOCKSS Metadata REST Service # noqa: E501
OpenAPI spec version: 1.0.0
Contact: lockss-support@lockss.org
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
# import apis into sdk package
from lockss_metadata.api.metadata_api import MetadataApi
from lockss_metadata.api.status_api import StatusApi
from lockss_metadata.api.urls_api import UrlsApi
# import ApiClient
from lockss_metadata.api_client import ApiClient
from lockss_metadata.configuration import Configuration
# import models into sdk package
from lockss_metadata.models.api_status import ApiStatus
from lockss_metadata.models.au_metadata_page_info import AuMetadataPageInfo
from lockss_metadata.models.item_metadata import ItemMetadata
from lockss_metadata.models.page_info import PageInfo
from lockss_metadata.models.url_info import UrlInfo
# report_xlsx_helper/models/__init__.py (NextERP-Romania/addons_extern, Apache-2.0)
from . import ir_actions_report
# botnet/modules/__init__.py (admdev8/botnet-2, MIT)
from .base import BaseModule
from .baseresponder import BaseResponder
from .mixins import ConfigMixin, BaseMessageDispatcherMixin, \
    StandardMessageDispatcherMixin, AdminMessageDispatcherMixin
a27ba47b0b3246c24fe75c830a9151ec2beb4b1d | 156 | py | Python | strategy/strategy.py | amitkc00/design_patterns | 86262200d8ab106e9e8b32abc0d87e7a8e1cce2a | [
"MIT"
] | null | null | null | strategy/strategy.py | amitkc00/design_patterns | 86262200d8ab106e9e8b32abc0d87e7a8e1cce2a | [
"MIT"
] | null | null | null | strategy/strategy.py | amitkc00/design_patterns | 86262200d8ab106e9e8b32abc0d87e7a8e1cce2a | [
"MIT"
] | null | null | null | from abc import ABC, abstractmethod, abstractproperty, ABCMeta
class istrategy(ABC):
@abstractmethod
def buildmaps(self, start, end):
pass | 26 | 62 | 0.717949 | 17 | 156 | 6.588235 | 0.823529 | 0.303571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.205128 | 156 | 6 | 63 | 26 | 0.903226 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0.2 | 0.2 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 5 |
a28b1e1540c4b326aea6ce62ab2ce001b3e67bca | 216 | py | Python | ztm/ztm_app/admin.py | julklos/BD2-Tickets | 8f94ea52f48d23c17f9e9603e19897cd2c287164 | [
"MIT"
] | null | null | null | ztm/ztm_app/admin.py | julklos/BD2-Tickets | 8f94ea52f48d23c17f9e9603e19897cd2c287164 | [
"MIT"
] | 13 | 2020-06-03T16:37:02.000Z | 2021-09-22T19:06:03.000Z | ztm/ztm_app/admin.py | julklos/BD2-Tickets | 8f94ea52f48d23c17f9e9603e19897cd2c287164 | [
"MIT"
] | null | null | null | from django.contrib import admin
from .models import *
# Register your models here.
# @admin.register(TypyBiletow)
# class TypyBiletowAdmin(admin.ModelAdmin):
# list_display = ("cena", "strefa", "czas_waznosci") | 30.857143 | 56 | 0.74537 | 25 | 216 | 6.36 | 0.76 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12963 | 216 | 7 | 56 | 30.857143 | 0.845745 | 0.703704 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
a29700140553d0a248e5024d264e2150e696d7a1 | 11,758 | py | Python | bot-marcus-python/cogs/konfiguracje.py | OLEK4640/Marcus-bot | 5b979f385bcab9f0fedf169ea177496caf71182e | [
"CC0-1.0"
] | null | null | null | bot-marcus-python/cogs/konfiguracje.py | OLEK4640/Marcus-bot | 5b979f385bcab9f0fedf169ea177496caf71182e | [
"CC0-1.0"
] | null | null | null | bot-marcus-python/cogs/konfiguracje.py | OLEK4640/Marcus-bot | 5b979f385bcab9f0fedf169ea177496caf71182e | [
"CC0-1.0"
] | 1 | 2021-04-07T09:01:21.000Z | 2021-04-07T09:01:21.000Z | import json
import discord
from discord.ext import commands
import asyncio
class Configi(commands.Cog):
def __init__(self, bot):
self.bot = bot
        print('Komendy konfiguracyjne załadowane pomyślnie!')
@commands.Cog.listener()
async def on_guild_join(self, guild):
with open('configi.json', 'r') as f:
configi = json.load(f)
configi[str(guild.id)] = {
"joinmsgchannel" : "null",
"joinmsgcolor" : "null",
"joinmsgtitle" : "null",
"joinmsgdescription" : "null",
"leavemsgchannel" : "null",
"leavemshcolor" : "null",
"leavemsgtitle" : "null",
"joinmsgdesc" : "null",
"kanalpropozycje" : "null",
"muterolename" : "null"
}
with open ('configi.json', 'w') as f:
json.dump(configi, f, indent=4)
@commands.Cog.listener()
async def on_guild_remove(self, guild):
with open('configi.json', 'r') as f:
configi = json.load(f)
configi.pop(str(guild.id))
with open ('configi.json', 'w') as f:
json.dump(configi, f, indent=4)
@commands.command(pass_context=True)
async def configure(self, ctx):
with open('configi.json', 'r') as f:
            configi = json.load(f)
        ceteiksguildajdi = ctx.guild.id  # guild id, used as the config key in every branch below
embd=discord.Embed(title="Wybierz co chcesz konfigurować:", description="• Wiadomość powitalna \n • Wiadomość porzegnalna \n • Kanał do propozycji \n • Nazwa Mute-role")
embd.set_footer(text="Użyj cancel aby anulować!")
await ctx.send(embed=embd)
try:
message=await self.bot.wait_for("message", check=lambda m: m.author == ctx.author and m.channel == ctx.channel, timeout=20.0)
except asyncio.TimeoutError:
await ctx.send("Czas na odpowiedź minął")
else:
if message.content.lower() == "wiadomość powitalna":
f=discord.Embed(title="Co chcesz konfigurować:", description="• Kanał \n • Wiadomość")
await ctx.send(embed=f)
try:
message=await self.bot.wait_for("message", check=lambda m: m.author == ctx.author and m.channel == ctx.channel, timeout=20.0)
except asyncio.TimeoutError:
await ctx.send("Czas na odpowiedź minął")
else:
if message.content.lower() == "kanał":
dembed=discord.Embed(title="Oznacz kanał na którym mają być wiadomości!")
await ctx.send(embed=dembed)
try:
message=await self.bot.wait_for("message", check=lambda m: m.author == ctx.author and m.channel == ctx.channel, timeout=20.0)
                            msg = message.content[2:-1]
except asyncio.TimeoutError:
await ctx.send("Czas na odpowiedź minął")
else:
await ctx.send('Zmieniono kanał!')
ceteiksguildajdi = ctx.guild.id
configi[str(ceteiksguildajdi)]["joinmsgchannel"] = msg
elif message.content.lower() == "wiadomość":
dwsss=discord.Embed(title="Co chcesz konfigurować:", description="• Tytuł \n • Kolor \n • Opis")
await ctx.send(embed=dwsss)
try:
message=await self.bot.wait_for("message", check=lambda m: m.author == ctx.author and m.channel == ctx.channel, timeout=20.0)
except asyncio.TimeoutError:
await ctx.send("Czas na odpowiedź minął")
else:
if message.content.lower() == "tytuł":
try:
message=await self.bot.wait_for("message", check=lambda m: m.author == ctx.author and m.channel == ctx.channel, timeout=20.0)
except asyncio.TimeoutError:
await ctx.send("Czas na odpowiedź minął")
else:
await ctx.send('Zmieniono tytuł!')
                                    configi[str(ceteiksguildajdi)]["joinmsgtitle"] = message.content
elif message.content.lower() == "kolor":
try:
message=await self.bot.wait_for("message", check=lambda m: m.author == ctx.author and m.channel == ctx.channel, timeout=20.0)
                                    msg = message.content[1:]
dede = "0x"+msg
except asyncio.TimeoutError:
await ctx.send("Czas na odpowiedź minął")
else:
await ctx.send('Zmieniono kanał!')
configi[str(ceteiksguildajdi)]["joinmsgcolor"] = dede
elif message.content.lower() == "opis":
try:
message=await self.bot.wait_for("message", check=lambda m: m.author == ctx.author and m.channel == ctx.channel, timeout=20.0)
except asyncio.TimeoutError:
await ctx.send("Czas na odpowiedź minął")
else:
await ctx.send('Zmieniono opis!')
                                    configi[str(ceteiksguildajdi)]["joinmsgdescription"] = message.content
elif message.content.lower() == "wiadomość porzegnalna":
ff=discord.Embed(title="Co chcesz konfigurować:", description="• Kanał \n • Wiadomość")
await ctx.send(embed=ff)
try:
message=await self.bot.wait_for("message", check=lambda m: m.author == ctx.author and m.channel == ctx.channel, timeout=20.0)
except asyncio.TimeoutError:
await ctx.send("Czas na odpowiedź minął")
else:
if message.content.lower() == "kanał":
dembedf=discord.Embed(title="Oznacz kanał na którym mają być wiadomości!")
await ctx.send(embed=dembedf)
try:
message=await self.bot.wait_for("message", check=lambda m: m.author == ctx.author and m.channel == ctx.channel, timeout=20.0)
                            msg = message.content[2:-1]
except asyncio.TimeoutError:
await ctx.send("Czas na odpowiedź minął")
else:
await ctx.send('Zmieniono kanał!')
configi[str(ceteiksguildajdi)]["leavemsgchannel"] = msg
elif message.content.lower() == "wiadomość":
dwsss=discord.Embed(title="Co chcesz konfigurować:", description="• Tytuł \n • Kolor \n • Opis")
await ctx.send(embed=dwsss)
try:
message=await self.bot.wait_for("message", check=lambda m: m.author == ctx.author and m.channel == ctx.channel, timeout=20.0)
except asyncio.TimeoutError:
await ctx.send("Czas na odpowiedź minął")
else:
if message.content.lower() == "tytuł":
try:
message=await self.bot.wait_for("message", check=lambda m: m.author == ctx.author and m.channel == ctx.channel, timeout=20.0)
except asyncio.TimeoutError:
await ctx.send("Czas na odpowiedź minął")
else:
await ctx.send('Zmieniono tytuł!')
                                    configi[str(ceteiksguildajdi)]["leavemsgtitle"] = message.content
elif message.content.lower() == "kolor":
try:
message=await self.bot.wait_for("message", check=lambda m: m.author == ctx.author and m.channel == ctx.channel, timeout=20.0)
except asyncio.TimeoutError:
await ctx.send("Czas na odpowiedź minął")
else:
await ctx.send('Zmieniono kolor!')
                                    configi[str(ceteiksguildajdi)]["leavemshcolor"] = message.content
elif message.content.lower() == "opis":
try:
message=await self.bot.wait_for("message", check=lambda m: m.author == ctx.author and m.channel == ctx.channel, timeout=20.0)
except asyncio.TimeoutError:
await ctx.send("Czas na odpowiedź minął")
else:
await ctx.send('Zmieniono opis!')
                                    configi[str(ceteiksguildajdi)]["joinmsgdesc"] = message.content
elif message.content.lower() == "kanał do propozycji":
fdf=discord.Embed(title="Oznacz kanał")
await ctx.send(embed=fdf)
try:
message=await self.bot.wait_for("message", check=lambda m: m.author == ctx.author and m.channel == ctx.channel, timeout=20.0)
                    msg = message.content[2:-1]
except asyncio.TimeoutError:
await ctx.send("Czas na odpowiedź minął")
else:
await ctx.send('Zmieniono kanał!')
configi[str(ceteiksguildajdi)]["kanalpropozycje"] = msg
elif message.content.lower() == "nazwa mute-role":
fdf=discord.Embed(title="Oznacz rolę")
await ctx.send(embed=fdf)
try:
message=await self.bot.wait_for("message", check=lambda m: m.author == ctx.author and m.channel == ctx.channel, timeout=20.0)
except asyncio.TimeoutError:
await ctx.send("Czas na odpowiedź minął")
else:
await ctx.send('Zmieniono rolę!')
try:
configi[str(ctx.guild.id)]["muterolename"] = message.content
with open ('configi.json', 'w') as f:
json.dump(configi, f, indent=4)
except Exception as e:
await ctx.send("Wystąpił błąd: \n {}: {}\n".format(type(e).__name__, e))
elif message.content.lower() == "cancel":
await ctx.send('Anulowałeś konfigurację!')
return
            else:
                await ctx.send('Musisz wybrać jedno z powyższych!')
                return
            with open('configi.json', 'w') as f:
                json.dump(configi, f, indent=4)
def setup(bot):
bot.add_cog(Configi(bot)) | 48.188525 | 181 | 0.457476 | 1,085 | 11,758 | 4.942857 | 0.133641 | 0.055193 | 0.082789 | 0.053142 | 0.787059 | 0.751445 | 0.751445 | 0.740257 | 0.740257 | 0.740257 | 0 | 0.008556 | 0.443358 | 11,758 | 244 | 182 | 48.188525 | 0.808709 | 0 | 0 | 0.644809 | 0 | 0.005464 | 0.145591 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.010929 | false | 0.005464 | 0.021858 | 0 | 0.043716 | 0.005464 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
0c17d739bdee1d97f13beef05e2489488fb1f3f5 | 96 | py | Python | cct/core2/devices/detector/__init__.py | awacha/cct | be1adbed2533df15c778051f3f4f9da0749c873a | [
"BSD-3-Clause"
] | 1 | 2015-11-04T16:37:39.000Z | 2015-11-04T16:37:39.000Z | cct/core2/devices/detector/__init__.py | awacha/cct | be1adbed2533df15c778051f3f4f9da0749c873a | [
"BSD-3-Clause"
] | null | null | null | cct/core2/devices/detector/__init__.py | awacha/cct | be1adbed2533df15c778051f3f4f9da0749c873a | [
"BSD-3-Clause"
] | 1 | 2020-03-05T02:50:43.000Z | 2020-03-05T02:50:43.000Z | from . import pilatus
from .pilatus.frontend import PilatusBackend, PilatusGain, PilatusDetector | 48 | 74 | 0.854167 | 10 | 96 | 8.2 | 0.7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.09375 | 96 | 2 | 74 | 48 | 0.942529 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
0c287155b09b35fd3f2c25425d8b5d025cda5167 | 3,023 | py | Python | tests/test_pr.py | trofi/nixpkgs-review | f79d28dca30015bdb3a937522fd089fe877dbeab | [
"MIT"
] | 57 | 2018-03-17T17:32:37.000Z | 2019-12-04T18:22:07.000Z | tests/test_pr.py | trofi/nixpkgs-review | f79d28dca30015bdb3a937522fd089fe877dbeab | [
"MIT"
] | 33 | 2018-04-22T01:26:30.000Z | 2019-12-05T15:51:28.000Z | tests/test_pr.py | trofi/nixpkgs-review | f79d28dca30015bdb3a937522fd089fe877dbeab | [
"MIT"
] | 11 | 2018-05-28T10:35:19.000Z | 2019-11-04T10:29:05.000Z | #!/usr/bin/env python3
import pytest
import shutil
import subprocess
from nixpkgs_review.cli import main
from .conftest import Helpers
from unittest.mock import MagicMock, mock_open, patch
def test_pr_local_eval(helpers: Helpers) -> None:
with helpers.nixpkgs() as nixpkgs:
with open(nixpkgs.path.joinpath("pkg1.txt"), "w") as f:
f.write("foo")
subprocess.run(["git", "add", "."])
subprocess.run(["git", "commit", "-m", "example-change"])
subprocess.run(["git", "checkout", "-b", "pull/1/head"])
subprocess.run(["git", "push", str(nixpkgs.remote), "pull/1/head"])
path = main(
"nixpkgs-review",
[
"pr",
"--remote",
str(nixpkgs.remote),
"--run",
"exit 0",
"1",
],
)
report = helpers.load_report(path)
assert report["built"] == ["pkg1"]
@pytest.mark.skipif(not shutil.which("bwrap"), reason="`bwrap` not found in PATH")
def test_pr_local_eval_with_sandbox(helpers: Helpers) -> None:
with helpers.nixpkgs() as nixpkgs:
with open(nixpkgs.path.joinpath("pkg1.txt"), "w") as f:
f.write("foo")
subprocess.run(["git", "add", "."])
subprocess.run(["git", "commit", "-m", "example-change"])
subprocess.run(["git", "checkout", "-b", "pull/1/head"])
subprocess.run(["git", "push", str(nixpkgs.remote), "pull/1/head"])
path = main(
"nixpkgs-review",
[
"pr",
"--sandbox",
"--remote",
str(nixpkgs.remote),
"--run",
"exit 0",
"1",
],
)
report = helpers.load_report(path)
assert report["built"] == ["pkg1"]
@patch("urllib.request.urlopen")
def test_pr_ofborg_eval(mock_urlopen: MagicMock, helpers: Helpers) -> None:
with helpers.nixpkgs() as nixpkgs:
with open(nixpkgs.path.joinpath("pkg1.txt"), "w") as f:
f.write("foo")
subprocess.run(["git", "add", "."])
subprocess.run(["git", "commit", "-m", "example-change"])
subprocess.run(["git", "checkout", "-b", "pull/37200/head"])
subprocess.run(["git", "push", str(nixpkgs.remote), "pull/37200/head"])
mock_urlopen.side_effect = [
mock_open(read_data=helpers.read_asset("github-pull-37200.json"))(),
mock_open(
read_data=helpers.read_asset("github-pull-37200-statuses.json")
)(),
helpers.read_asset("gist-37200.txt").encode("utf-8").split(b"\n"),
]
path = main(
"nixpkgs-review",
[
"pr",
"--remote",
str(nixpkgs.remote),
"--run",
"exit 0",
"37200",
],
)
report = helpers.load_report(path)
assert report["built"] == ["pkg1"]
| 32.159574 | 82 | 0.506451 | 321 | 3,023 | 4.688474 | 0.264798 | 0.103654 | 0.127575 | 0.043854 | 0.735548 | 0.711628 | 0.711628 | 0.711628 | 0.711628 | 0.650498 | 0 | 0.022827 | 0.318889 | 3,023 | 93 | 83 | 32.505376 | 0.708111 | 0.006947 | 0 | 0.64557 | 0 | 0 | 0.17994 | 0.024992 | 0 | 0 | 0 | 0 | 0.037975 | 1 | 0.037975 | false | 0 | 0.075949 | 0 | 0.113924 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
0c2ac7749eecb40c6a500f171f78cb8bab9bf9e1 | 37 | py | Python | pyJunosManager/version.py | JNPRAutomate/pyJunosManager | cfbe87bb55488f44bad0b383771a88be7b2ccf2a | [
"Apache-2.0"
] | 4 | 2018-07-06T04:07:58.000Z | 2021-06-24T00:59:16.000Z | pyJunosManager/version.py | JNPRAutomate/pyJunosManager | cfbe87bb55488f44bad0b383771a88be7b2ccf2a | [
"Apache-2.0"
] | 4 | 2021-03-25T21:48:07.000Z | 2022-03-29T21:54:52.000Z | pyJunosManager/version.py | JNPRAutomate/pyJunosManager | cfbe87bb55488f44bad0b383771a88be7b2ccf2a | [
"Apache-2.0"
] | 5 | 2015-05-04T23:40:08.000Z | 2018-03-05T17:08:17.000Z | VERSION = "0.6"
DATE = "2015-Jan-08"
| 12.333333 | 20 | 0.594595 | 7 | 37 | 3.142857 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.258065 | 0.162162 | 37 | 2 | 21 | 18.5 | 0.451613 | 0 | 0 | 0 | 0 | 0 | 0.378378 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
0c2b4df7c8ef82242be65bc18aee57434bc05870 | 269 | py | Python | fwl-automation-decisions/infrastructure/src/infrastructure/flsql/model/__init__.py | aherculano/fwl-project | 6d4c4d40393b76d45cf13b572b5aabc0696e9285 | [
"MIT"
] | null | null | null | fwl-automation-decisions/infrastructure/src/infrastructure/flsql/model/__init__.py | aherculano/fwl-project | 6d4c4d40393b76d45cf13b572b5aabc0696e9285 | [
"MIT"
] | null | null | null | fwl-automation-decisions/infrastructure/src/infrastructure/flsql/model/__init__.py | aherculano/fwl-project | 6d4c4d40393b76d45cf13b572b5aabc0696e9285 | [
"MIT"
] | null | null | null | from .EnvironmentModel import EnvironmentModel
from .FirewallModel import FirewallModel
from .ZoneModel import ZoneModel
from .Mappings import flask_sqlalchemy_mappings
from .AllowedPortModel import AllowedPortModel
from .ZoneConnectionModel import ZoneConnectionModel
| 38.428571 | 52 | 0.888476 | 26 | 269 | 9.115385 | 0.384615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.089219 | 269 | 6 | 53 | 44.833333 | 0.967347 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
a749a12397807c60bfe01e2a447da847cd854c8f | 228 | py | Python | projects/Yolov3/yolov3/__init__.py | shellhue/detectron2 | a027f2fe0dc21eedd201727515c4e963cd007ec0 | [
"Apache-2.0"
] | 3 | 2019-12-18T09:04:21.000Z | 2020-04-21T08:31:26.000Z | projects/Yolov3/yolov3/__init__.py | shellhue/detectron2 | a027f2fe0dc21eedd201727515c4e963cd007ec0 | [
"Apache-2.0"
] | 4 | 2021-06-08T20:51:59.000Z | 2022-03-12T00:12:46.000Z | projects/Yolov3/yolov3/__init__.py | shellhue/detectron2 | a027f2fe0dc21eedd201727515c4e963cd007ec0 | [
"Apache-2.0"
] | 1 | 2020-03-14T05:39:43.000Z | 2020-03-14T05:39:43.000Z | from .config import add_yolov3_config
from .yolov3 import Yolov3
from .darknet_fpn import build_darknet_fpn_backbone
from .anchor_generator import YoloAnchorGenerator
from .detection_checkpoint import CustomDetectionCheckpointer | 45.6 | 61 | 0.894737 | 28 | 228 | 7 | 0.535714 | 0.102041 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014354 | 0.083333 | 228 | 5 | 61 | 45.6 | 0.923445 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
a769d22cb354c436652502efcd00a138e0bdfcc6 | 191 | py | Python | Livid_Code_M4L/__init__.py | thomasf/LiveRemoteScripts | 866330653e1561a140e076c9a7ae64dd486e5692 | [
"MIT"
] | 25 | 2015-02-02T21:41:51.000Z | 2022-02-19T13:08:53.000Z | Livid_Code_M4L/__init__.py | thomasf/LiveRemoteScripts | 866330653e1561a140e076c9a7ae64dd486e5692 | [
"MIT"
] | null | null | null | Livid_Code_M4L/__init__.py | thomasf/LiveRemoteScripts | 866330653e1561a140e076c9a7ae64dd486e5692 | [
"MIT"
] | 13 | 2015-10-25T04:44:09.000Z | 2020-03-01T18:02:27.000Z | # http://julienbayle.net
from LividCodeM4L import LividCodeM4L
def create_instance(c_instance):
""" Creates and returns the LividCodeM4L script """
return LividCodeM4L(c_instance)
| 21.222222 | 55 | 0.764398 | 22 | 191 | 6.5 | 0.727273 | 0.125874 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.024691 | 0.151832 | 191 | 8 | 56 | 23.875 | 0.858025 | 0.356021 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
a77bc2c339e66e11a1a9e3f7f7cb608d3e141fb4 | 3,118 | py | Python | models/Unet_2D/read_in_CT.py | LongxiZhou/DLPE-method | ed20abc91e27423c7ff677a009cfd99314730217 | [
"BSD-3-Clause"
] | null | null | null | models/Unet_2D/read_in_CT.py | LongxiZhou/DLPE-method | ed20abc91e27423c7ff677a009cfd99314730217 | [
"BSD-3-Clause"
] | null | null | null | models/Unet_2D/read_in_CT.py | LongxiZhou/DLPE-method | ed20abc91e27423c7ff677a009cfd99314730217 | [
"BSD-3-Clause"
] | 1 | 2021-08-22T14:29:58.000Z | 2021-08-22T14:29:58.000Z | import os
import bintrees
import numpy as np
import pydicom
import SimpleITK as sitk
def load_dicom(path, show=False):
# return a numpy array of the dicom file, and the slice number
if show:
content = pydicom.read_file(path)
print(content)
ds = sitk.ReadImage(path)
img_array = sitk.GetArrayFromImage(ds)
# frame_num, width, height = img_array.shape
return img_array[0, :, :], pydicom.read_file(path)['InstanceNumber'].value
def stack_dcm_files(dic):
# the dictionary like '/home/zhoul0a/CT_slices_for_patient_alice/'
# return a 3D np array with shape [Rows, Columns, Num_Slices], and the resolution of each axis: (0.625, 0.625, 0.9)
dcm_file_names = os.listdir(dic)
num_slices = len(dcm_file_names)
first_slice = load_dicom(dic+dcm_file_names[0])[0]
first_content = pydicom.read_file(dic+dcm_file_names[0])
resolutions = first_content.PixelSpacing
resolutions.append(first_content.SliceThickness)
print('the resolution for x, y, z in mm:', resolutions)
rows, columns = first_slice.shape
tree_instance = bintrees.AVLTree()
array_3d = np.zeros([rows, columns, num_slices], 'int32')
for file in dcm_file_names:
data_array, slice_id = load_dicom(dic+file)
assert not tree_instance.__contains__(slice_id)
tree_instance.insert(slice_id, slice_id)
array_3d[:, :, num_slices - slice_id] = data_array
print('the array corresponds to a volume of:', rows*resolutions[0], columns*resolutions[1], num_slices*resolutions[2])
return array_3d, resolutions
def stack_dcm_files_by_file_name(dic):
# the dictionary like '/home/zhoul0a/CT_slices_for_patient_alice/'
# return a 3D np array with shape [Rows, Columns, Num_Slices], and the resolution of each axis: (0.625, 0.625, 0.9)
dcm_file_names = os.listdir(dic)
num_slices = len(dcm_file_names)
first_slice = load_dicom(dic+dcm_file_names[0])[0]
first_content = pydicom.read_file(dic+dcm_file_names[0])
resolutions = first_content.PixelSpacing
resolutions.append(first_content.SliceThickness)
print('the resolution for x, y, z in mm:', resolutions)
rows, columns = first_slice.shape
tree_instance = bintrees.AVLTree()
array_3d = np.zeros([rows, columns, num_slices], 'int32')
for file in dcm_file_names:
data_array, slice_id = load_dicom(dic+file)
slice_id = int(file[-5]) + 10 * int(file[-6]) + 100 * int(file[-7]) - 1
print(file, slice_id)
if tree_instance.__contains__(slice_id):
continue
tree_instance.insert(slice_id, slice_id)
array_3d[:, :, num_slices - slice_id] = data_array
print('the array corresponds to a volume of:', rows*resolutions[0], columns*resolutions[1], num_slices*resolutions[2])
return array_3d, resolutions
def get_information_for_dcm(path):
array_dcm = load_dicom(path)[0]
rows, columns = array_dcm.shape
first_content = pydicom.read_file(path)
resolutions = first_content.PixelSpacing
resolutions.append(first_content.SliceThickness)
return rows, columns, resolutions
| 40.493506 | 122 | 0.712316 | 454 | 3,118 | 4.640969 | 0.22467 | 0.039867 | 0.056953 | 0.041766 | 0.759848 | 0.707167 | 0.707167 | 0.707167 | 0.707167 | 0.670147 | 0 | 0.022397 | 0.183772 | 3,118 | 76 | 123 | 41.026316 | 0.805501 | 0.148172 | 0 | 0.586207 | 0 | 0 | 0.06191 | 0 | 0 | 0 | 0 | 0 | 0.017241 | 1 | 0.068966 | false | 0 | 0.086207 | 0 | 0.224138 | 0.103448 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
a7ac066e73d6fbef83f4fabfa6d7d5c68743f141 | 90 | py | Python | ladim_plugins/sedimentation/__init__.py | pnsaevik/ladim_plugins | 2097a451346e2517e50f735be8b31862f24e64e2 | [
"MIT"
] | null | null | null | ladim_plugins/sedimentation/__init__.py | pnsaevik/ladim_plugins | 2097a451346e2517e50f735be8b31862f24e64e2 | [
"MIT"
] | null | null | null | ladim_plugins/sedimentation/__init__.py | pnsaevik/ladim_plugins | 2097a451346e2517e50f735be8b31862f24e64e2 | [
"MIT"
] | 1 | 2020-07-09T08:18:36.000Z | 2020-07-09T08:18:36.000Z | from .ibm import IBM, sinkvel, get_settled_particles
from .gridforce import Grid, Forcing
| 30 | 52 | 0.822222 | 13 | 90 | 5.538462 | 0.769231 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.122222 | 90 | 2 | 53 | 45 | 0.911392 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
a7cae09bcf539e1ee4cf8e33b8533d837a0107b6 | 29 | py | Python | src/sort/quick_sort.py | ChrisKalahiki/python-algorithms | 53f206ca18de63941ad22e6249ec651ecf598062 | [
"MIT"
] | null | null | null | src/sort/quick_sort.py | ChrisKalahiki/python-algorithms | 53f206ca18de63941ad22e6249ec651ecf598062 | [
"MIT"
] | null | null | null | src/sort/quick_sort.py | ChrisKalahiki/python-algorithms | 53f206ca18de63941ad22e6249ec651ecf598062 | [
"MIT"
] | null | null | null | def quick_sort(arr):
pass | 14.5 | 20 | 0.689655 | 5 | 29 | 3.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.206897 | 29 | 2 | 21 | 14.5 | 0.826087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
a7d8b7ac044e9cf844c47bcd47e553f648565c30 | 94 | py | Python | src/RobotFrameworkCore/org.robotframework.ide.core-functions/src/test/python/scripts/res_test_red_variables/vars_with_argument.py | alex729/RED | 128bf203cf035892c02805aabd0c915f96006bb0 | [
"Apache-2.0"
] | 375 | 2015-11-02T19:15:30.000Z | 2022-03-19T03:32:10.000Z | src/RobotFrameworkCore/org.robotframework.ide.core-functions/src/test/python/scripts/res_test_red_variables/vars_with_argument.py | alex729/RED | 128bf203cf035892c02805aabd0c915f96006bb0 | [
"Apache-2.0"
] | 433 | 2015-11-03T13:24:40.000Z | 2022-03-30T11:20:14.000Z | src/RobotFrameworkCore/org.robotframework.ide.core-functions/src/test/python/scripts/res_test_red_variables/vars_with_argument.py | alex729/RED | 128bf203cf035892c02805aabd0c915f96006bb0 | [
"Apache-2.0"
] | 133 | 2016-05-02T02:20:06.000Z | 2022-01-06T06:01:28.000Z | def get_variables(arg=None):
return {'a' : '1' + arg, 'b' : '2' + arg, 'c' : '3' + arg}
| 31.333333 | 63 | 0.468085 | 15 | 94 | 2.866667 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.042857 | 0.255319 | 94 | 2 | 64 | 47 | 0.571429 | 0 | 0 | 0 | 0 | 0 | 0.065217 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
a7d907f855c1ea1b84a80bfc15b0a24610f735fc | 28,468 | py | Python | reversi/strategies/coordinator/scorer.py | y-tetsu/reversi | 65d566359eac97e456eb9ddb63d0754aaae2c98a | [
"MIT"
] | 10 | 2020-07-24T22:04:51.000Z | 2022-03-25T06:09:48.000Z | reversi/strategies/coordinator/scorer.py | y-tetsu/reversi | 65d566359eac97e456eb9ddb63d0754aaae2c98a | [
"MIT"
] | 12 | 2021-04-30T09:53:18.000Z | 2022-02-25T04:16:02.000Z | reversi/strategies/coordinator/scorer.py | y-tetsu/reversi | 65d566359eac97e456eb9ddb63d0754aaae2c98a | [
"MIT"
] | 1 | 2021-11-25T13:12:32.000Z | 2021-11-25T13:12:32.000Z | """Scorer
"""
from reversi.strategies.common import AbstractScorer
from reversi.strategies.table import Table
import reversi.strategies.coordinator.ScorerMethods as ScorerMethods
from reversi.board import PyBitBoard
from reversi.BitBoardMethods import CythonBitBoard
class TableScorer(AbstractScorer):
"""
    Compute the board evaluation score using a Table
"""
def __init__(self, size=8, corner=50, c=-20, a1=0, a2=-1, b1=-1, b2=-1, b3=-1, x=-25, o1=-5, o2=-5):
        self.table = Table(size, corner, c, a1, a2, b1, b2, b3, x, o1, o2)  # use the Table strategy
def get_score(self, color, board, possibility_b, possibility_w):
"""
        Compute the evaluation score
        """
        if self.table.size != board.size:  # adjust the table size to match the board
            self.table.set_table(board.size)
        return self.table.get_score(board)  # positive favors black, negative favors white
class PossibilityScorer(AbstractScorer):
"""
    Score based on the number of available moves
"""
def __init__(self, w=5):
self._W = w
def get_score(self, color, board, possibility_b, possibility_w):
"""
        Compute the evaluation score
"""
return (possibility_b - possibility_w) * self._W
class OpeningScorer(AbstractScorer):
"""
    Score based on the openness (degree of opening)
"""
def __init__(self, w=-0.75):
self._W = w
def get_score(self, color, board, possibility_b, possibility_w):
"""
        Compute the evaluation score
"""
size, board_info, opening = board.size, board.get_board_info(), 0
directions = [
(-1, 1), (0, 1), (1, 1),
(-1, 0), (1, 0),
(-1, -1), (0, -1), (1, -1),
]
        # get the positions of the discs flipped by the last move
if isinstance(board, PyBitBoard) or isinstance(board, CythonBitBoard):
flippable_discs = board._flippable_discs_num
discs = []
mask = 1 << ((size * size) - 1)
for y in range(size):
for x in range(size):
if mask & flippable_discs:
discs.append([x, y])
mask >>= 1
else:
discs = board.prev[-1]['flippable_discs']
        # check the squares around each flipped disc
for disc_x, disc_y in discs:
for dx, dy in directions:
x, y = disc_x + dx, disc_y + dy
if 0 <= x < size and 0 <= y < size:
if board_info[y][x] == 0:
                        opening += 1  # count squares with no disc on them
return opening * self._W
class WinLoseScorer(AbstractScorer):
"""
    Score based on win or loss
"""
def __init__(self, w=10000):
self._W = w
def get_score(self, color, board, possibility_b, possibility_w):
"""
        Compute the evaluation score
        """
        # when the game is over
        ret = None
        if not possibility_b and not possibility_w:
            ret = board._black_score - board._white_score
            if ret > 0:    # black won
                ret += self._W
            elif ret < 0:  # white won
ret -= self._W
return ret
class NumberScorer(AbstractScorer):
"""
    Score based on the number of discs
"""
def get_score(self, color, board, possibility_b, possibility_w):
"""
        Compute the evaluation score
"""
return board._black_score - board._white_score
class EdgeScorer(AbstractScorer):
"""
    Score based on edge patterns
"""
def __init__(self, w=100):
self._W = w
        # stable discs (edge patterns that can no longer be flipped)
# ◎◎―――――― ◎◎◎――――― ◎◎◎◎―――― ◎◎◎◎◎――― ◎◎◎◎◎◎―― ◎◎◎◎◎◎◎―
# □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□
# □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□
# □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□
# □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□
# □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□
# □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□
# □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□
#
# ――――――◎◎ ―――――◎◎◎ ――――◎◎◎◎ ―――◎◎◎◎◎ ――◎◎◎◎◎◎ ―◎◎◎◎◎◎◎
# □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□
# □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□
# □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□
# □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□
# □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□
# □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□
# □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□
#
# ◎◎◎◎◎◎◎◎
# □□□□□□□□
# □□□□□□□□
# □□□□□□□□
# □□□□□□□□
# □□□□□□□□
# □□□□□□□□
# □□□□□□□□
self._get_table()
def _get_table(self):
self.edge_table8 = [0x00 for _ in range(0x100)]
left = 0x80
right = 0x01
for row in range(0x100):
score = 0
l_r = row & left
r_l = row & right
if l_r or r_l:
for _ in range(6):
                    # from the left corner: scan rightward
l_r >>= 1
l_r &= row
if l_r:
score += self._W
                    # from the right corner: scan leftward
r_l <<= 1
r_l &= row
if r_l:
score += self._W
if row == 0xFF:
score += self._W
self.edge_table8[row] = score
def get_score(self, color, board, possibility_b, possibility_w):
"""
        Compute the evaluation score
"""
size = board.size
weight = self._W
score = 0
b_bitboard, w_bitboard = board.get_bitboard_info()
all_bitboard = b_bitboard | w_bitboard
bit_pos = 1 << (size * size - 1)
        lt = bit_pos
        rt = bit_pos >> (size - 1)
        lb = bit_pos >> (size * (size - 1))
        rb = bit_pos >> (size * size - 1)
        # If any of the four corners is occupied
if (lt | rt | lb | rb) & all_bitboard:
if size == 8:
                # top edge
b_t = (0xFF00000000000000 & b_bitboard) >> 56
w_t = (0xFF00000000000000 & w_bitboard) >> 56
                # bottom edge
b_b = 0x00000000000000FF & b_bitboard
w_b = 0x00000000000000FF & w_bitboard
                # left edge
b_l = 0
if b_bitboard & 0x8000000000000000:
b_l += 0x0000000000000080
if b_bitboard & 0x0080000000000000:
b_l += 0x0000000000000040
if b_bitboard & 0x0000800000000000:
b_l += 0x0000000000000020
if b_bitboard & 0x0000008000000000:
b_l += 0x0000000000000010
if b_bitboard & 0x0000000080000000:
b_l += 0x0000000000000008
if b_bitboard & 0x0000000000800000:
b_l += 0x0000000000000004
if b_bitboard & 0x0000000000008000:
b_l += 0x0000000000000002
if b_bitboard & 0x0000000000000080:
b_l += 0x0000000000000001
w_l = 0
if w_bitboard & 0x8000000000000000:
w_l += 0x0000000000000080
if w_bitboard & 0x0080000000000000:
w_l += 0x0000000000000040
if w_bitboard & 0x0000800000000000:
w_l += 0x0000000000000020
if w_bitboard & 0x0000008000000000:
w_l += 0x0000000000000010
if w_bitboard & 0x0000000080000000:
w_l += 0x0000000000000008
if w_bitboard & 0x0000000000800000:
w_l += 0x0000000000000004
if w_bitboard & 0x0000000000008000:
w_l += 0x0000000000000002
if w_bitboard & 0x0000000000000080:
w_l += 0x0000000000000001
                # right edge
b_r = 0
if b_bitboard & 0x0100000000000000:
b_r += 0x0000000000000080
if b_bitboard & 0x0001000000000000:
b_r += 0x0000000000000040
if b_bitboard & 0x0000010000000000:
b_r += 0x0000000000000020
if b_bitboard & 0x0000000100000000:
b_r += 0x0000000000000010
if b_bitboard & 0x0000000001000000:
b_r += 0x0000000000000008
if b_bitboard & 0x0000000000010000:
b_r += 0x0000000000000004
if b_bitboard & 0x0000000000000100:
b_r += 0x0000000000000002
if b_bitboard & 0x0000000000000001:
b_r += 0x0000000000000001
w_r = 0
if w_bitboard & 0x0100000000000000:
w_r += 0x0000000000000080
if w_bitboard & 0x0001000000000000:
w_r += 0x0000000000000040
if w_bitboard & 0x0000010000000000:
w_r += 0x0000000000000020
if w_bitboard & 0x0000000100000000:
w_r += 0x0000000000000010
if w_bitboard & 0x0000000001000000:
w_r += 0x0000000000000008
if w_bitboard & 0x0000000000010000:
w_r += 0x0000000000000004
if w_bitboard & 0x0000000000000100:
w_r += 0x0000000000000002
if w_bitboard & 0x0000000000000001:
w_r += 0x0000000000000001
return (self.edge_table8[b_t] - self.edge_table8[w_t]) + (self.edge_table8[b_b] - self.edge_table8[w_b]) + (self.edge_table8[b_l] - self.edge_table8[w_l]) + (self.edge_table8[b_r] - self.edge_table8[w_r]) # noqa: E501
            # top-left
lt_board = b_bitboard
lt_sign = 1
if lt & w_bitboard:
lt_board = w_bitboard
lt_sign = -1
lt_r, lt_b = lt & lt_board, lt & lt_board
            # top-right
rt_board = b_bitboard
rt_sign = 1
if rt & w_bitboard:
rt_board = w_bitboard
rt_sign = -1
rt_l, rt_b = rt & rt_board, rt & rt_board
            # bottom-left
lb_board = b_bitboard
lb_sign = 1
if lb & w_bitboard:
lb_board = w_bitboard
lb_sign = -1
lb_r, lb_t = lb & lb_board, lb & lb_board
            # bottom-right
rb_board = b_bitboard
rb_sign = 1
if rb & w_bitboard:
rb_board = w_bitboard
rb_sign = -1
rb_l, rb_t = rb & rb_board, rb & rb_board
            # Count runs of stable discs (from 2 up to 7)
for i in range(size-2):
                # top-left: rightward
lt_r >>= 1
lt_r &= lt_board
if lt_r & lt_board:
score += weight * lt_sign
                # top-left: downward
lt_b >>= size
lt_b &= lt_board
if lt_b & lt_board:
score += weight * lt_sign
                # top-right: leftward
rt_l <<= 1
rt_l &= rt_board
if rt_l & rt_board:
score += weight * rt_sign
                # top-right: downward
rt_b >>= size
rt_b &= rt_board
if rt_b & rt_board:
score += weight * rt_sign
                # bottom-left: rightward
lb_r >>= 1
lb_r &= lb_board
if lb_r & lb_board:
score += weight * lb_sign
                # bottom-left: upward
lb_t <<= size
lb_t &= lb_board
if lb_t & lb_board:
score += weight * lb_sign
                # bottom-right: leftward
rb_l <<= 1
rb_l &= rb_board
if rb_l & rb_board:
score += weight * rb_sign
                # bottom-right: upward
rb_t <<= size
rb_t &= rb_board
if rb_t & rb_board:
score += weight * rb_sign
            # Add extra weight when an entire edge is filled with the same color
top = int(''.join(['1'] * size + ['0'] * (size*(size-1))), 2)
if lt_board & top == top:
score += weight * lt_sign
left = int(''.join((['1'] + ['0'] * (size-1)) * size), 2)
if lt_board & left == left:
score += weight * lt_sign
right = int(''.join((['0'] * (size-1) + ['1']) * size), 2)
if rb_board & right == right:
score += weight * rb_sign
bottom = int(''.join(['0'] * (size*(size-1)) + ['1'] * size), 2)
if rb_board & bottom == bottom:
score += weight * rb_sign
return score
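EdgeScorer._get_table precomputes a 256-entry table mapping an 8-bit edge occupancy row to a stable-disc score: a run anchored at either end of the edge earns the weight per extra disc, and a completely filled edge earns one more bonus. The rule can be exercised standalone; a sketch assuming the default weight of 100:

```python
def edge_score(row, w=100):
    # Score one 8-bit edge row the same way EdgeScorer._get_table does.
    score = 0
    l_r, r_l = row & 0x80, row & 0x01
    if l_r or r_l:
        for _ in range(6):
            l_r = (l_r >> 1) & row  # extend the run anchored at the left end
            if l_r:
                score += w
            r_l = (r_l << 1) & row  # extend the run anchored at the right end
            if r_l:
                score += w
    if row == 0xFF:
        score += w                  # bonus for a fully occupied edge
    return score

assert edge_score(0x00) == 0      # empty edge
assert edge_score(0xC0) == 100    # two discs anchored at the left corner
assert edge_score(0x81) == 0      # both corners occupied, but no run of 2+
assert edge_score(0xFF) == 1300   # full edge: two runs of 6 plus the bonus
```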
class CornerScorer(AbstractScorer):
    """
    Score based on corner patterns
    """
def __init__(self, w=100):
self._W = w
        # Stable (confirmed) discs
# Level1
# 1 1 1
# □□□□□□□□ □□□□□□□□ □□□□□□□□
# □□□□□□□□ □□□□□□□□ □□□□□□□□
# □□□□□□□□ □□□□□□□□ □□□□□□□□
# □□□□□□□□ □□□□□□□□ □□□□□□□□
# ■■■■□□□□ ■■■■□□□□ ■■■■□□□□
# ●■■■□□□□ ●■■■□□□□ ■■■■□□□□
# ●◎■■□□□□ ●◎■■□□□□ ●◎■■□□□□
# ●●●■□□□□ ●●■■□□□□ ●●●■□□□□
self.level1_maskvalue = [
            # bottom-left
[
0x000000000080C0E0,
0x000000000080C0C0,
0x000000000000C0E0,
],
            # top-left
[
0xE0C0800000000000,
0xE0C0000000000000,
0xC0C0800000000000,
],
            # top-right
[
0x0703010000000000,
0x0303010000000000,
0x0703000000000000,
],
            # bottom-right
[
0x0000000000010307,
0x0000000000000307,
0x0000000000010303,
],
]
self.level1_weight = [
1, 1, 1
]
# Level2
# 3 3 3 2 2
# □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□
# □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□
# □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□
# □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□
# ●■■■□□□□ ●■■■□□□□ ■■■■□□□□ ●■■■□□□□ ■■■■□□□□
# ●◎■■□□□□ ●◎■■□□□□ ●◎■■□□□□ ●◎■■□□□□ ■■■■□□□□
# ●◎◎■□□□□ ●◎◎■□□□□ ●◎◎■□□□□ ●◎■■□□□□ ●◎◎■□□□□
# ●●●●□□□□ ●●●■□□□□ ●●●●□□□□ ●●■■□□□□ ●●●●□□□□
self.level2_maskvalue = [
            # bottom-left
[
0x0000000080C0E0F0,
0x0000000080C0E0E0,
0x0000000000C0E0F0,
0x0000000080C0C0C0,
0x000000000000E0F0,
],
            # top-left
[
0xF0E0C08000000000,
0xF0E0C00000000000,
0xE0E0C08000000000,
0xF0E0000000000000,
0xC0C0C08000000000,
],
            # top-right
[
0x0F07030100000000,
0x0707030100000000,
0x0F07030000000000,
0x0303030100000000,
0x0F07000000000000,
],
            # bottom-right
[
0x000000000103070F,
0x000000000003070F,
0x0000000001030707,
0x000000000000070F,
0x0000000001030303,
],
]
self.level2_weight = [
3, 3, 3, 2, 2
]
# Level3
# 6 6 6 5 5
# □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□
# □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□
# □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□
# ●□□□□□□□ ●□□□□□□□ □□□□□□□□ ●□□□□□□□ □□□□□□□□
# ●◎■■□□□□ ●◎■■□□□□ ●◎■■□□□□ ●◎■■□□□□ ■■■■□□□□
# ●◎◎■□□□□ ●◎◎■□□□□ ●◎◎■□□□□ ●◎◎■□□□□ ●◎◎■□□□□
# ●◎◎◎□□□□ ●◎◎◎□□□□ ●◎◎◎□□□□ ●◎◎■□□□□ ●◎◎◎□□□□
# ●●●●●□□□ ●●●●□□□□ ●●●●●□□□ ●●●■□□□□ ●●●●●□□□
# 4 4 3 3
# □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□
# □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□
# □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□
# ●□□□□□□□ □□□□□□□□ ●□□□□□□□ □□□□□□□□
# ●◎■■□□□□ ■■■■□□□□ ●◎■■□□□□ ■■■■□□□□
# ●◎■■□□□□ ●◎■■□□□□ ●◎■■□□□□ ■■■■□□□□
# ●◎◎■□□□□ ●◎◎◎□□□□ ●◎■■□□□□ ●◎◎◎□□□□
# ●●●■□□□□ ●●●●●□□□ ●●■■□□□□ ●●●●●□□□
self.level3_maskvalue = [
            # bottom-left
[
0x00000080C0E0F0F8,
0x00000080C0E0F0F0,
0x00000000C0E0F0F8,
0x00000080C0E0E0E0,
0x0000000000E0F0F8,
0x00000080C0C0E0E0,
0x0000000000C0F0F8,
0x00000080C0C0C0C0,
0x000000000000F0F8,
],
            # top-left
[
0xF8F0E0C080000000,
0xF8F0E0C000000000,
0xF0F0E0C080000000,
0xF8F0E00000000000,
0xE0E0E0C080000000,
0xF8F0C00000000000,
0xE0E0C0C080000000,
0xF8F0000000000000,
0xC0C0C0C080000000,
],
            # top-right
[
0x1F0F070301000000,
0x0F0F070301000000,
0x1F0F070300000000,
0x0707070301000000,
0x1F0F070000000000,
0x0707030301000000,
0x1F0F030000000000,
0x0303030301000000,
0x1F0F000000000000,
],
            # bottom-right
[
0x0000000103070F1F,
0x0000000003070F1F,
0x0000000103070F0F,
0x0000000000070F1F,
0x0000000103070707,
0x0000000000030F1F,
0x0000000103030707,
0x0000000000000F1F,
0x0000000103030303,
],
]
self.level3_weight = [
6, 6, 6, 5, 5, 4, 4, 3, 3
]
# Level4
# 8 8 8 7 7
# □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□
# □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□
# ●□□□□□□□ ●□□□□□□□ □□□□□□□□ ●□□□□□□□ □□□□□□□□
# ●●□□□□□□ ●●□□□□□□ □□□□□□□□ ●●□□□□□□ □□□□□□□□
# ●◎◎■□□□□ ●◎◎■□□□□ ●◎◎■□□□□ ●◎◎■□□□□ ●◎■■□□□□
# ●◎◎◎□□□□ ●◎◎◎□□□□ ●◎◎◎□□□□ ●◎◎■□□□□ ●◎◎◎□□□□
# ●◎◎◎●□□□ ●◎◎◎□□□□ ●◎◎◎●□□□ ●◎◎◎□□□□ ●◎◎◎●□□□
# ●●●●●●□□ ●●●●□□□□ ●●●●●●□□ ●●●●□□□□ ●●●●●●□□
# 6 6 6 6 5
# □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□
# □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□
# ●□□□□□□□ □□□□□□□□ ●□□□□□□□ □□□□□□□□ ●□□□□□□□
# ●●□□□□□□ □□□□□□□□ ●●□□□□□□ □□□□□□□□ ●●□□□□□□
# ●◎■■□□□□ ●◎■■□□□□ ●◎◎■□□□□ ■■■■□□□□ ●◎■■□□□□
# ●◎◎■□□□□ ●◎◎■□□□□ ●◎◎■□□□□ ●◎◎◎□□□□ ●◎◎■□□□□
# ●◎◎◎□□□□ ●◎◎◎●□□□ ●◎◎■□□□□ ●◎◎◎●□□□ ●◎◎■□□□□
# ●●●●□□□□ ●●●●●●□□ ●●●■□□□□ ●●●●●●□□ ●●●■□□□□
# 5 4 4 3 3
# □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□
# □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□ □□□□□□□□
# □□□□□□□□ ●□□□□□□□ □□□□□□□□ ●□□□□□□□ □□□□□□□□
# □□□□□□□□ ●●□□□□□□ □□□□□□□□ ●●□□□□□□ □□□□□□□□
# ■■■■□□□□ ●◎■■□□□□ ■■■■□□□□ ●◎■■□□□□ ■■■■□□□□
# ●◎◎■□□□□ ●◎■■□□□□ ●◎■■□□□□ ●◎■■□□□□ ■■■■□□□□
# ●◎◎◎●□□□ ●◎◎■□□□□ ●◎◎◎●□□□ ●◎■■□□□□ ●◎◎◎●□□□
# ●●●●●●□□ ●●●■□□□□ ●●●●●●□□ ●●■■□□□□ ●●●●●●□□
self.level4_maskvalue = [
            # bottom-left
[
0x000080C0E0F0F8FC,
0x000080C0E0F0F0F0,
0x00000000E0F0F8FC,
0x000080C0E0E0F0F0,
0x00000000C0F0F8FC,
0x000080C0C0E0F0F0,
0x00000000C0E0F8FC,
0x000080C0E0E0E0E0,
0x0000000000F0F8FC,
0x000080C0C0E0E0E0,
0x0000000000E0F8FC,
0x000080C0C0C0E0E0,
0x0000000000C0F8FC,
0x000080C0C0C0C0C0,
0x000000000000F8FC,
],
            # top-left
[
0xFCF8F0E0C0800000,
0xFCF8F0E000000000,
0xF0F0F0E0C0800000,
0xFCF8F0C000000000,
0xF0F0E0E0C0800000,
0xFCF8E0C000000000,
0xF0F0E0C0C0800000,
0xFCF8F00000000000,
0xE0E0E0E0C0800000,
0xFCF8E00000000000,
0xE0E0E0C0C0800000,
0xFCF8C00000000000,
0xE0E0C0C0C0800000,
0xFCF8000000000000,
0xC0C0C0C0C0800000,
],
            # top-right
[
0x3F1F0F0703010000,
0x0F0F0F0703010000,
0x3F1F0F0700000000,
0x0F0F070703010000,
0x3F1F0F0300000000,
0x0F0F070303010000,
0x3F1F070300000000,
0x0707070703010000,
0x3F1F0F0000000000,
0x0707070303010000,
0x3F1F070000000000,
0x0707030303010000,
0x3F1F030000000000,
0x0303030303010000,
0x3F1F000000000000,
],
            # bottom-right
[
0x00000103070F1F3F,
0x00000000070F1F3F,
0x00000103070F0F0F,
0x00000000030F1F3F,
0x0000010307070F0F,
0x0000000003071F3F,
0x0000010303070F0F,
0x00000000000F1F3F,
0x0000010307070707,
0x0000000000071F3F,
0x0000010303070707,
0x0000000000031F3F,
0x0000010303030707,
0x0000000000001F3F,
0x0000010303030303,
],
]
self.level4_weight = [
8, 8, 8, 7, 7, 6, 6, 6, 6, 5, 5, 4, 4, 3, 3
]
# Level5
# 9 9 9
# □□□□□□□□ □□□□□□□□ □□□□□□□□
# ●□□□□□□□ ●□□□□□□□ □□□□□□□□
# ●●□□□□□□ ●●□□□□□□ □□□□□□□□
# ●●●□□□□□ ●●●□□□□□ □□□□□□□□
# ●◎◎◎□□□□ ●◎◎◎□□□□ ●◎◎◎□□□□
# ●◎◎◎●□□□ ●◎◎◎□□□□ ●◎◎◎●□□□
# ●◎◎◎●●□□ ●◎◎◎□□□□ ●◎◎◎●●□□
# ●●●●●●●□ ●●●●□□□□ ●●●●●●●□
self.level5_maskvalue = [
            # bottom-left
[
0x0080C0E0F0F8FCFE,
0x0080C0E0F0F0F0F0,
0x00000000F0F8FCFE,
],
            # top-left
[
0xFEFCF8F0E0C08000,
0xFEFCF8F000000000,
0xF0F0F0F0E0C08000,
],
            # top-right
[
0x7F3F1F0F07030100,
0x0F0F0F0F07030100,
0x7F3F1F0F00000000,
],
            # bottom-right
[
0x000103070F1F3F7F,
0x000000000F1F3F7F,
0x000103070F0F0F0F,
],
]
self.level5_weight = [
9, 9, 9
]
def get_score(self, color, board, possibility_b, possibility_w):
        """
        Calculate the evaluation value
        """
score = 0
b_bitboard, w_bitboard = board.get_bitboard_info()
        # Only board size 8 is supported
if board.size != 8:
return score
        # bottom-left → top-left → top-right → bottom-right
for index in range(4):
corner_score = 0
# Level1
maskvalues = self.level1_maskvalue[index]
for w_index, maskvalue in enumerate(maskvalues):
corner_score = self._get_mask_value(b_bitboard, w_bitboard, maskvalue, self.level1_weight[w_index])
if corner_score:
break
if corner_score:
# Level5
maskvalues = self.level5_maskvalue[index]
for w_index, maskvalue in enumerate(maskvalues):
tmp_score = self._get_mask_value(b_bitboard, w_bitboard, maskvalue, self.level5_weight[w_index])
if tmp_score:
corner_score = tmp_score
break
if not tmp_score:
# Level4
maskvalues = self.level4_maskvalue[index]
for w_index, maskvalue in enumerate(maskvalues):
tmp_score = self._get_mask_value(b_bitboard, w_bitboard, maskvalue, self.level4_weight[w_index])
if tmp_score:
corner_score = tmp_score
break
if not tmp_score:
# Level3
maskvalues = self.level3_maskvalue[index]
for w_index, maskvalue in enumerate(maskvalues):
tmp_score = self._get_mask_value(b_bitboard, w_bitboard, maskvalue, self.level3_weight[w_index])
if tmp_score:
corner_score = tmp_score
break
if not tmp_score:
# Level2
maskvalues = self.level2_maskvalue[index]
for w_index, maskvalue in enumerate(maskvalues):
tmp_score = self._get_mask_value(b_bitboard, w_bitboard, maskvalue, self.level2_weight[w_index])
if tmp_score:
corner_score = tmp_score
break
score += corner_score
return score
def _get_mask_value(self, b_bitboard, w_bitboard, maskvalue, weight):
        """
        Return the score for a masked pattern
        """
score_b = weight * self._W if (b_bitboard & maskvalue) == maskvalue else 0
score_w = weight * self._W if (w_bitboard & maskvalue) == maskvalue else 0
return score_b - score_w
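CornerScorer._get_mask_value awards `weight * self._W` when one side's bitboard fully covers a corner-pattern mask, and subtracts the same amount when the other side covers it. A minimal standalone sketch of that containment test, assuming the default base weight of 100:

```python
def mask_value(b_bits, w_bits, mask, weight, w=100):
    # weight * w if the side's bitboard fully covers the mask, else 0
    score_b = weight * w if (b_bits & mask) == mask else 0
    score_w = weight * w if (w_bits & mask) == mask else 0
    return score_b - score_w

MASK = 0x000000000080C0E0  # one of the Level1 bottom-left patterns above

assert mask_value(MASK, 0, MASK, 1) == 100     # black covers the pattern
assert mask_value(0, MASK, MASK, 3) == -300    # white covers it instead
assert mask_value(MASK >> 1, 0, MASK, 1) == 0  # a partial cover scores nothing
```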
class BlankScorer(AbstractScorer):
    """
    Score based on blank-square patterns
    """
def __init__(self, w1=-1, w2=-4, w3=-2):
self._W1 = w1
self._W2 = w2
self._W3 = w3
def get_score(self, color, board, possibility_b, possibility_w):
        """
        Calculate the evaluation value
        """
return ScorerMethods.get_blank_score(board, self._W1, self._W2, self._W3)
class EdgeCornerScorer(AbstractScorer):
    """
    Score based on edge and corner patterns
    """
def __init__(self, w1=1, w2=8):
self._W1 = w1
self._W2 = w2
def get_score(self, color, board, possibility_b, possibility_w):
        """
        Calculate the evaluation value
        """
size = board.size
black_bitboard = board._black_bitboard
white_bitboard = board._white_bitboard
all_bitboard = black_bitboard | white_bitboard
bit_pos = 1 << (size * size - 1)
        corners = [0, size - 1, size * (size - 1), size * size - 1]  # was size*size-8, which assumed size == 8
score = 0
for index1, corner1 in enumerate(corners):
for index2, corner2 in enumerate(corners):
                if index1 + index2 == 3 or index1 >= index2:  # skip diagonals; visit each of the 4 edges once
continue
                d = (corner2 - corner1) // (size - 1)  # step between adjacent squares along this edge (was // 7, size 8 only)
is_edge_full = True
edge = 0
blank_check = bit_pos >> corner1
                # Stable discs along the edge
                for k in range(size):  # walk along the edge
                    if not (blank_check & all_bitboard):  # found an empty square
is_edge_full = False
break
                    if blank_check & black_bitboard:
                        edge += self._W1  # black disc
                    else:
                        edge -= self._W1  # white disc
blank_check >>= d
                # Corner pattern
corner = 0
corner_check1 = bit_pos >> corner1
corner_check2 = bit_pos >> corner2
if corner_check1 & black_bitboard:
corner += 1
if corner_check2 & black_bitboard:
corner += 1
if corner_check1 & white_bitboard:
corner -= 1
if corner_check2 & white_bitboard:
corner -= 1
                # Scoring
                if is_edge_full:
                    score += edge  # the whole edge is occupied
                elif corner > 0:
                    score += self._W2  # black holds more of this edge's corners
                elif corner < 0:
                    score -= self._W2  # white holds more of this edge's corners
return score
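EdgeCornerScorer pairs the four corner indices and walks each edge between them; with squares numbered from the most significant bit, the per-square step works out to 1 along the top/bottom edges and `size` along the left/right edges. A standalone sketch checking that pairing (the class divides by a hardcoded 7, which equals size-1 on the 8x8 board it targets):

```python
size = 8
corners = [0, size - 1, size * (size - 1), size * size - 1]  # tl, tr, bl, br square indices
steps = []
for i, c1 in enumerate(corners):
    for j, c2 in enumerate(corners):
        if i + j == 3 or i >= j:  # skip diagonals and duplicate pairs
            continue
        steps.append((c2 - c1) // (size - 1))

# Two horizontal edges step by 1; two vertical edges step by the board size.
assert sorted(steps) == [1, 1, 8, 8]
```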
# --- File: cozmo_repl/cozmo_prompt.py (repo: cozmo-polite/cozmo-repl, license: Apache-2.0) ---
class CozmoPrompt(Prompts):
def in_prompt_tokens(self, cli=None):
return [(Token.Prompt, '>>> ')]
# TODO: find a cozmo status!
def out_prompt_tokens(self):
return [(Token.Prompt, '<<< ')]
# --- File: string_utils.py (repo: ekotysh/TF_Network, license: MIT) ---
def is_blank(s):
return not (s and s.strip())
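A quick usage sketch for is_blank (the one-line definition is repeated so the example runs standalone): it treats None, empty, and whitespace-only strings alike.

```python
def is_blank(s):
    # True for None, "", and strings containing only whitespace
    return not (s and s.strip())

assert is_blank(None)
assert is_blank("")
assert is_blank("   \t\n")
assert not is_blank("cozmo")
```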
# --- File: fn_urlscanio/fn_urlscanio/util/customize.py (repo: devsuds/resilient-community-apps, license: MIT) ---
"""Generate the Resilient customizations required for fn_urlscanio"""
from __future__ import print_function
from resilient_circuits.util import *
def customization_data(client=None):
"""Produce any customization definitions (types, fields, message destinations, etc)
that should be installed by `resilient-circuits customize`
"""
# This import data contains:
# Function inputs:
# urlscanio_public
# urlscanio_referer
# urlscanio_url
# urlscanio_useragent
# Message Destinations:
# urlscanio
# Functions:
# urlscanio
# Workflows:
# example_urlscanio
# Rules:
# Example: urlscan.io
yield ImportDefinition(u"""
eyJ0YXNrX29yZGVyIjogW10sICJ3b3JrZmxvd3MiOiBbeyJ1dWlkIjogIjIxYjg0MWJiLWYzZjMt
NDFiNy05MmExLWM0NGYwMjdkMTRhMSIsICJkZXNjcmlwdGlvbiI6ICIiLCAib2JqZWN0X3R5cGUi
OiAiYXJ0aWZhY3QiLCAiZXhwb3J0X2tleSI6ICJleGFtcGxlX3VybHNjYW5pbyIsICJ3b3JrZmxv
d19pZCI6IDE2OCwgImxhc3RfbW9kaWZpZWRfYnkiOiAiaHB5bGVAcmVzaWxpZW50c3lzdGVtcy5j
b20iLCAiY29udGVudCI6IHsieG1sIjogIjw/eG1sIHZlcnNpb249XCIxLjBcIiBlbmNvZGluZz1c
IlVURi04XCI/PjxkZWZpbml0aW9ucyB4bWxucz1cImh0dHA6Ly93d3cub21nLm9yZy9zcGVjL0JQ
TU4vMjAxMDA1MjQvTU9ERUxcIiB4bWxuczpicG1uZGk9XCJodHRwOi8vd3d3Lm9tZy5vcmcvc3Bl
Yy9CUE1OLzIwMTAwNTI0L0RJXCIgeG1sbnM6b21nZGM9XCJodHRwOi8vd3d3Lm9tZy5vcmcvc3Bl
Yy9ERC8yMDEwMDUyNC9EQ1wiIHhtbG5zOm9tZ2RpPVwiaHR0cDovL3d3dy5vbWcub3JnL3NwZWMv
REQvMjAxMDA1MjQvRElcIiB4bWxuczpyZXNpbGllbnQ9XCJodHRwOi8vcmVzaWxpZW50LmlibS5j
b20vYnBtblwiIHhtbG5zOnhzZD1cImh0dHA6Ly93d3cudzMub3JnLzIwMDEvWE1MU2NoZW1hXCIg
eG1sbnM6eHNpPVwiaHR0cDovL3d3dy53My5vcmcvMjAwMS9YTUxTY2hlbWEtaW5zdGFuY2VcIiB0
YXJnZXROYW1lc3BhY2U9XCJodHRwOi8vd3d3LmNhbXVuZGEub3JnL3Rlc3RcIj48cHJvY2VzcyBp
ZD1cImV4YW1wbGVfdXJsc2NhbmlvXCIgaXNFeGVjdXRhYmxlPVwidHJ1ZVwiIG5hbWU9XCJFeGFt
cGxlOiB1cmxzY2FuLmlvXCI+PGRvY3VtZW50YXRpb24vPjxzdGFydEV2ZW50IGlkPVwiU3RhcnRF
dmVudF8xNTVhc3htXCI+PG91dGdvaW5nPlNlcXVlbmNlRmxvd18xN2RvNDZ0PC9vdXRnb2luZz48
L3N0YXJ0RXZlbnQ+PHNlcnZpY2VUYXNrIGlkPVwiU2VydmljZVRhc2tfMGMzZXN1blwiIG5hbWU9
XCJTY2FuIHdpdGggdXJsc2Nhbi5pb1wiIHJlc2lsaWVudDp0eXBlPVwiZnVuY3Rpb25cIj48ZXh0
ZW5zaW9uRWxlbWVudHM+PHJlc2lsaWVudDpmdW5jdGlvbiB1dWlkPVwiZDE5YzFmMDAtYjRmMS00
NDgwLWI4YTMtN2JkZDE5MTQzMDQxXCI+e1wiaW5wdXRzXCI6e1wiZWU5MjYzYzEtNDMyYS00MWE4
LThiYmQtNjkxN2QzNWZiM2FjXCI6e1wiaW5wdXRfdHlwZVwiOlwic3RhdGljXCIsXCJzdGF0aWNf
aW5wdXRcIjp7XCJib29sZWFuX3ZhbHVlXCI6ZmFsc2UsXCJtdWx0aXNlbGVjdF92YWx1ZVwiOltd
fX19LFwicHJlX3Byb2Nlc3Npbmdfc2NyaXB0XCI6XCIjIFRoaXMgaXMgYW4gYXJ0aWZhY3Qgd29y
a2Zsb3c7IFxcbiMgVGhlIFVSTCB0byBzY2FuIGlzIHRoZSBhcnRpZmFjdCB2YWx1ZVxcbmlucHV0
cy51cmxzY2FuaW9fdXJsID0gYXJ0aWZhY3QudmFsdWVcIixcInJlc3VsdF9uYW1lXCI6XCJ1cmxz
Y2FuaW9cIixcInBvc3RfcHJvY2Vzc2luZ19zY3JpcHRcIjpcIiMgVGhlIHJlc3VsdCBjb250YWlu
cyxcXG4jIHtcXG4jICAgXFxcInBuZ191cmxcXFwiOiB0aGUgVVJMIG9mIHRoZSBzY3JlZW5zaG90
IGltYWdlXFxuIyAgIFxcXCJwbmdfYmFzZTY0Y29udGVudFxcXCI6IHRoZSBiYXNlNjQtZW5jb2Rl
ZCBzY3JlZW5zaG90IChQTkcpXFxuIyAgIFxcXCJyZXBvcnRfdXJsXFxcIjogdGhlIFVSTCBvZiB0
aGUgSlNPTiByZXBvcnRfdXJsXFxuIyAgIFxcXCJyZXBvcnRcXFwiOiB0aGUgSlNPTiByZXBvcnQs
IHdoaWNoIHdpbGwgY29udGFpbiBsb3RzIG9mIGRldGFpbCBvZiB0aGUgcGFnZSBhbmFseXNpcyAo
c2VlIHVybHNjYW4uaW8gZm9yIGRldGFpbHMpLlxcbiMgfVxcbiNcXG4jIEluIHRoaXMgY2FzZSwg
dGhlIGZpbGUgYXR0YWNobWVudCBjb250ZW50IGlzIHVzZWQgbGF0ZXIgaW4gdGhlIHdvcmtmbG93
LiAgTm90aGluZyB0byBkbyBoZXJlLlwifTwvcmVzaWxpZW50OmZ1bmN0aW9uPjwvZXh0ZW5zaW9u
RWxlbWVudHM+PGluY29taW5nPlNlcXVlbmNlRmxvd18xN2RvNDZ0PC9pbmNvbWluZz48b3V0Z29p
bmc+U2VxdWVuY2VGbG93XzBqZHI3aDc8L291dGdvaW5nPjwvc2VydmljZVRhc2s+PHNlcXVlbmNl
RmxvdyBpZD1cIlNlcXVlbmNlRmxvd18xN2RvNDZ0XCIgc291cmNlUmVmPVwiU3RhcnRFdmVudF8x
NTVhc3htXCIgdGFyZ2V0UmVmPVwiU2VydmljZVRhc2tfMGMzZXN1blwiLz48c2VydmljZVRhc2sg
aWQ9XCJTZXJ2aWNlVGFza18wcjV2N3k4XCIgbmFtZT1cIlV0aWxpdGllczogQmFzZTY0IHRvIEF0
dGFjaG1lbnRcIiByZXNpbGllbnQ6dHlwZT1cImZ1bmN0aW9uXCI+PGV4dGVuc2lvbkVsZW1lbnRz
PjxyZXNpbGllbnQ6ZnVuY3Rpb24gdXVpZD1cIjExMzQ5MTU5LTE1M2UtNDliNy05YTliLWUyMjY3
NmMwMzY4N1wiPntcImlucHV0c1wiOnt9LFwicHJlX3Byb2Nlc3Npbmdfc2NyaXB0XCI6XCIjIFRo
ZSBmaWxlIHdpbGwgYmUgYXR0YWNoZWQgdG8gdGhpcyBpbmNpZGVudFxcbmlucHV0cy5pbmNpZGVu
dF9pZCA9IGluY2lkZW50LmlkXFxuXFxuIyBUaGUgZmlsZSBjb250ZW50IGlzIGJhc2U2NC1lbmNv
ZGVkIGRhdGFcXG5pbnB1dHMuYmFzZTY0Y29udGVudCA9IHdvcmtmbG93LnByb3BlcnRpZXMudXJs
c2NhbmlvLnBuZ19iYXNlNjRjb250ZW50XFxuXFxuIyBOYW1lIHRoZSBmaWxlIGF0dGFjaG1lbnQg
ZnJvbSB0aGUgYXJ0aWZhY3RcXG5pbnB1dHMuZmlsZV9uYW1lID0gXFxcInVybHNjYW5pb19zY3Jl
ZW5zaG90X3t9LnBuZ1xcXCIuZm9ybWF0KGFydGlmYWN0LnZhbHVlLnJlcGxhY2UoXFxcIjpcXFwi
LCBcXFwiX1xcXCIpLnJlcGxhY2UoXFxcIi9cXFwiLCBcXFwiX1xcXCIpKVxcblwifTwvcmVzaWxp
ZW50OmZ1bmN0aW9uPjwvZXh0ZW5zaW9uRWxlbWVudHM+PGluY29taW5nPlNlcXVlbmNlRmxvd18w
amRyN2g3PC9pbmNvbWluZz48b3V0Z29pbmc+U2VxdWVuY2VGbG93XzBnM24zZHA8L291dGdvaW5n
Pjwvc2VydmljZVRhc2s+PHNlcXVlbmNlRmxvdyBpZD1cIlNlcXVlbmNlRmxvd18wamRyN2g3XCIg
c291cmNlUmVmPVwiU2VydmljZVRhc2tfMGMzZXN1blwiIHRhcmdldFJlZj1cIlNlcnZpY2VUYXNr
XzByNXY3eThcIi8+PGVuZEV2ZW50IGlkPVwiRW5kRXZlbnRfMDh0OG00NFwiPjxpbmNvbWluZz5T
ZXF1ZW5jZUZsb3dfMGczbjNkcDwvaW5jb21pbmc+PC9lbmRFdmVudD48c2VxdWVuY2VGbG93IGlk
PVwiU2VxdWVuY2VGbG93XzBnM24zZHBcIiBzb3VyY2VSZWY9XCJTZXJ2aWNlVGFza18wcjV2N3k4
XCIgdGFyZ2V0UmVmPVwiRW5kRXZlbnRfMDh0OG00NFwiLz48dGV4dEFubm90YXRpb24gaWQ9XCJU
ZXh0QW5ub3RhdGlvbl8xa3h4aXl0XCI+PHRleHQ+UnVuIGZvciBhIFVSTCBhcnRpZmFjdDwvdGV4
dD48L3RleHRBbm5vdGF0aW9uPjxhc3NvY2lhdGlvbiBpZD1cIkFzc29jaWF0aW9uXzFzZXVqNDhc
IiBzb3VyY2VSZWY9XCJTdGFydEV2ZW50XzE1NWFzeG1cIiB0YXJnZXRSZWY9XCJUZXh0QW5ub3Rh
dGlvbl8xa3h4aXl0XCIvPjx0ZXh0QW5ub3RhdGlvbiBpZD1cIlRleHRBbm5vdGF0aW9uXzFpOGg5
c2hcIj48dGV4dD5TY2FuIHRoZSBVUkw8L3RleHQ+PC90ZXh0QW5ub3RhdGlvbj48YXNzb2NpYXRp
b24gaWQ9XCJBc3NvY2lhdGlvbl8wZGthdG9yXCIgc291cmNlUmVmPVwiU2VydmljZVRhc2tfMGMz
ZXN1blwiIHRhcmdldFJlZj1cIlRleHRBbm5vdGF0aW9uXzFpOGg5c2hcIi8+PHRleHRBbm5vdGF0
aW9uIGlkPVwiVGV4dEFubm90YXRpb25fMHlycWtqM1wiPjx0ZXh0PkF0dGFjaCB0aGUgc2NyZWVu
c2hvdCB0byB0aGUgaW5jaWRlbnQ8L3RleHQ+PC90ZXh0QW5ub3RhdGlvbj48YXNzb2NpYXRpb24g
aWQ9XCJBc3NvY2lhdGlvbl8wOHc0Z3YzXCIgc291cmNlUmVmPVwiU2VydmljZVRhc2tfMHI1djd5
OFwiIHRhcmdldFJlZj1cIlRleHRBbm5vdGF0aW9uXzB5cnFrajNcIi8+PC9wcm9jZXNzPjxicG1u
ZGk6QlBNTkRpYWdyYW0gaWQ9XCJCUE1ORGlhZ3JhbV8xXCI+PGJwbW5kaTpCUE1OUGxhbmUgYnBt
bkVsZW1lbnQ9XCJ1bmRlZmluZWRcIiBpZD1cIkJQTU5QbGFuZV8xXCI+PGJwbW5kaTpCUE1OU2hh
cGUgYnBtbkVsZW1lbnQ9XCJTdGFydEV2ZW50XzE1NWFzeG1cIiBpZD1cIlN0YXJ0RXZlbnRfMTU1
YXN4bV9kaVwiPjxvbWdkYzpCb3VuZHMgaGVpZ2h0PVwiMzZcIiB3aWR0aD1cIjM2XCIgeD1cIjE2
MlwiIHk9XCIxODhcIi8+PGJwbW5kaTpCUE1OTGFiZWw+PG9tZ2RjOkJvdW5kcyBoZWlnaHQ9XCIw
XCIgd2lkdGg9XCI5MFwiIHg9XCIxNTdcIiB5PVwiMjIzXCIvPjwvYnBtbmRpOkJQTU5MYWJlbD48
L2JwbW5kaTpCUE1OU2hhcGU+PGJwbW5kaTpCUE1OU2hhcGUgYnBtbkVsZW1lbnQ9XCJUZXh0QW5u
b3RhdGlvbl8xa3h4aXl0XCIgaWQ9XCJUZXh0QW5ub3RhdGlvbl8xa3h4aXl0X2RpXCI+PG9tZ2Rj
OkJvdW5kcyBoZWlnaHQ9XCIzNVwiIHdpZHRoPVwiMTAwXCIgeD1cIjE3NFwiIHk9XCI2OVwiLz48
L2JwbW5kaTpCUE1OU2hhcGU+PGJwbW5kaTpCUE1ORWRnZSBicG1uRWxlbWVudD1cIkFzc29jaWF0
aW9uXzFzZXVqNDhcIiBpZD1cIkFzc29jaWF0aW9uXzFzZXVqNDhfZGlcIj48b21nZGk6d2F5cG9p
bnQgeD1cIjE4M1wiIHhzaTp0eXBlPVwib21nZGM6UG9pbnRcIiB5PVwiMTg5XCIvPjxvbWdkaTp3
YXlwb2ludCB4PVwiMjE1XCIgeHNpOnR5cGU9XCJvbWdkYzpQb2ludFwiIHk9XCIxMDRcIi8+PC9i
cG1uZGk6QlBNTkVkZ2U+PGJwbW5kaTpCUE1OU2hhcGUgYnBtbkVsZW1lbnQ9XCJTZXJ2aWNlVGFz
a18wYzNlc3VuXCIgaWQ9XCJTZXJ2aWNlVGFza18wYzNlc3VuX2RpXCI+PG9tZ2RjOkJvdW5kcyBo
ZWlnaHQ9XCI4MFwiIHdpZHRoPVwiMTAwXCIgeD1cIjI5N1wiIHk9XCIxNjZcIi8+PC9icG1uZGk6
QlBNTlNoYXBlPjxicG1uZGk6QlBNTkVkZ2UgYnBtbkVsZW1lbnQ9XCJTZXF1ZW5jZUZsb3dfMTdk
bzQ2dFwiIGlkPVwiU2VxdWVuY2VGbG93XzE3ZG80NnRfZGlcIj48b21nZGk6d2F5cG9pbnQgeD1c
IjE5OFwiIHhzaTp0eXBlPVwib21nZGM6UG9pbnRcIiB5PVwiMjA2XCIvPjxvbWdkaTp3YXlwb2lu
dCB4PVwiMjk3XCIgeHNpOnR5cGU9XCJvbWdkYzpQb2ludFwiIHk9XCIyMDZcIi8+PGJwbW5kaTpC
UE1OTGFiZWw+PG9tZ2RjOkJvdW5kcyBoZWlnaHQ9XCIxM1wiIHdpZHRoPVwiMFwiIHg9XCIyNDcu
NVwiIHk9XCIxODRcIi8+PC9icG1uZGk6QlBNTkxhYmVsPjwvYnBtbmRpOkJQTU5FZGdlPjxicG1u
ZGk6QlBNTlNoYXBlIGJwbW5FbGVtZW50PVwiU2VydmljZVRhc2tfMHI1djd5OFwiIGlkPVwiU2Vy
dmljZVRhc2tfMHI1djd5OF9kaVwiPjxvbWdkYzpCb3VuZHMgaGVpZ2h0PVwiODBcIiB3aWR0aD1c
IjEwMFwiIHg9XCI1MDZcIiB5PVwiMTY2XCIvPjwvYnBtbmRpOkJQTU5TaGFwZT48YnBtbmRpOkJQ
TU5FZGdlIGJwbW5FbGVtZW50PVwiU2VxdWVuY2VGbG93XzBqZHI3aDdcIiBpZD1cIlNlcXVlbmNl
Rmxvd18wamRyN2g3X2RpXCI+PG9tZ2RpOndheXBvaW50IHg9XCIzOTdcIiB4c2k6dHlwZT1cIm9t
Z2RjOlBvaW50XCIgeT1cIjIwNlwiLz48b21nZGk6d2F5cG9pbnQgeD1cIjUwNlwiIHhzaTp0eXBl
PVwib21nZGM6UG9pbnRcIiB5PVwiMjA2XCIvPjxicG1uZGk6QlBNTkxhYmVsPjxvbWdkYzpCb3Vu
ZHMgaGVpZ2h0PVwiMTNcIiB3aWR0aD1cIjBcIiB4PVwiNDUxLjVcIiB5PVwiMTg0XCIvPjwvYnBt
bmRpOkJQTU5MYWJlbD48L2JwbW5kaTpCUE1ORWRnZT48YnBtbmRpOkJQTU5TaGFwZSBicG1uRWxl
bWVudD1cIkVuZEV2ZW50XzA4dDhtNDRcIiBpZD1cIkVuZEV2ZW50XzA4dDhtNDRfZGlcIj48b21n
ZGM6Qm91bmRzIGhlaWdodD1cIjM2XCIgd2lkdGg9XCIzNlwiIHg9XCI3MTRcIiB5PVwiMTg4XCIv
PjxicG1uZGk6QlBNTkxhYmVsPjxvbWdkYzpCb3VuZHMgaGVpZ2h0PVwiMTNcIiB3aWR0aD1cIjBc
IiB4PVwiNzMyXCIgeT1cIjIyN1wiLz48L2JwbW5kaTpCUE1OTGFiZWw+PC9icG1uZGk6QlBNTlNo
YXBlPjxicG1uZGk6QlBNTkVkZ2UgYnBtbkVsZW1lbnQ9XCJTZXF1ZW5jZUZsb3dfMGczbjNkcFwi
IGlkPVwiU2VxdWVuY2VGbG93XzBnM24zZHBfZGlcIj48b21nZGk6d2F5cG9pbnQgeD1cIjYwNlwi
IHhzaTp0eXBlPVwib21nZGM6UG9pbnRcIiB5PVwiMjA2XCIvPjxvbWdkaTp3YXlwb2ludCB4PVwi
NzE0XCIgeHNpOnR5cGU9XCJvbWdkYzpQb2ludFwiIHk9XCIyMDZcIi8+PGJwbW5kaTpCUE1OTGFi
ZWw+PG9tZ2RjOkJvdW5kcyBoZWlnaHQ9XCIxM1wiIHdpZHRoPVwiMFwiIHg9XCI2NjBcIiB5PVwi
MTg0XCIvPjwvYnBtbmRpOkJQTU5MYWJlbD48L2JwbW5kaTpCUE1ORWRnZT48YnBtbmRpOkJQTU5T
aGFwZSBicG1uRWxlbWVudD1cIlRleHRBbm5vdGF0aW9uXzFpOGg5c2hcIiBpZD1cIlRleHRBbm5v
dGF0aW9uXzFpOGg5c2hfZGlcIj48b21nZGM6Qm91bmRzIGhlaWdodD1cIjMwXCIgd2lkdGg9XCIx
MDBcIiB4PVwiMzQ4XCIgeT1cIjY5XCIvPjwvYnBtbmRpOkJQTU5TaGFwZT48YnBtbmRpOkJQTU5F
ZGdlIGJwbW5FbGVtZW50PVwiQXNzb2NpYXRpb25fMGRrYXRvclwiIGlkPVwiQXNzb2NpYXRpb25f
MGRrYXRvcl9kaVwiPjxvbWdkaTp3YXlwb2ludCB4PVwiMzY0XCIgeHNpOnR5cGU9XCJvbWdkYzpQ
b2ludFwiIHk9XCIxNjZcIi8+PG9tZ2RpOndheXBvaW50IHg9XCIzOTJcIiB4c2k6dHlwZT1cIm9t
Z2RjOlBvaW50XCIgeT1cIjk5XCIvPjwvYnBtbmRpOkJQTU5FZGdlPjxicG1uZGk6QlBNTlNoYXBl
IGJwbW5FbGVtZW50PVwiVGV4dEFubm90YXRpb25fMHlycWtqM1wiIGlkPVwiVGV4dEFubm90YXRp
b25fMHlycWtqM19kaVwiPjxvbWdkYzpCb3VuZHMgaGVpZ2h0PVwiNDZcIiB3aWR0aD1cIjEwMFwi
IHg9XCI1NTRcIiB5PVwiNjFcIi8+PC9icG1uZGk6QlBNTlNoYXBlPjxicG1uZGk6QlBNTkVkZ2Ug
YnBtbkVsZW1lbnQ9XCJBc3NvY2lhdGlvbl8wOHc0Z3YzXCIgaWQ9XCJBc3NvY2lhdGlvbl8wOHc0
Z3YzX2RpXCI+PG9tZ2RpOndheXBvaW50IHg9XCI1NzJcIiB4c2k6dHlwZT1cIm9tZ2RjOlBvaW50
XCIgeT1cIjE2NlwiLz48b21nZGk6d2F5cG9pbnQgeD1cIjU5NVwiIHhzaTp0eXBlPVwib21nZGM6
UG9pbnRcIiB5PVwiMTA3XCIvPjwvYnBtbmRpOkJQTU5FZGdlPjwvYnBtbmRpOkJQTU5QbGFuZT48
L2JwbW5kaTpCUE1ORGlhZ3JhbT48L2RlZmluaXRpb25zPiIsICJ3b3JrZmxvd19pZCI6ICJleGFt
cGxlX3VybHNjYW5pbyIsICJ2ZXJzaW9uIjogMn0sICJsYXN0X21vZGlmaWVkX3RpbWUiOiAxNTMx
MzYyNDAyODg4LCAiY3JlYXRvcl9pZCI6ICJocHlsZUByZXNpbGllbnRzeXN0ZW1zLmNvbSIsICJh
Y3Rpb25zIjogW10sICJwcm9ncmFtbWF0aWNfbmFtZSI6ICJleGFtcGxlX3VybHNjYW5pbyIsICJu
YW1lIjogIkV4YW1wbGU6IHVybHNjYW4uaW8ifV0sICJhY3Rpb25zIjogW3sibG9naWNfdHlwZSI6
ICJhbGwiLCAibmFtZSI6ICJFeGFtcGxlOiB1cmxzY2FuLmlvIiwgInZpZXdfaXRlbXMiOiBbXSwg
InR5cGUiOiAxLCAid29ya2Zsb3dzIjogWyJleGFtcGxlX3VybHNjYW5pbyJdLCAib2JqZWN0X3R5
cGUiOiAiYXJ0aWZhY3QiLCAidGltZW91dF9zZWNvbmRzIjogODY0MDAsICJ1dWlkIjogImJiYjkw
NzJhLTI2M2MtNDMxNi04ZGEzLWRhODg1YzA0ZjgxNCIsICJhdXRvbWF0aW9ucyI6IFtdLCAiZXhw
b3J0X2tleSI6ICJFeGFtcGxlOiB1cmxzY2FuLmlvIiwgImNvbmRpdGlvbnMiOiBbeyJ0eXBlIjog
bnVsbCwgImV2YWx1YXRpb25faWQiOiBudWxsLCAiZmllbGRfbmFtZSI6ICJhcnRpZmFjdC50eXBl
IiwgIm1ldGhvZCI6ICJpbiIsICJ2YWx1ZSI6IFsiVVJMIiwgIlVSTCBSZWZlcmVyIl19XSwgImlk
IjogNTY1LCAibWVzc2FnZV9kZXN0aW5hdGlvbnMiOiBbXX1dLCAibGF5b3V0cyI6IFtdLCAiZXhw
b3J0X2Zvcm1hdF92ZXJzaW9uIjogMiwgImlkIjogMzYsICJpbmR1c3RyaWVzIjogbnVsbCwgInBo
YXNlcyI6IFtdLCAiYWN0aW9uX29yZGVyIjogW10sICJnZW9zIjogbnVsbCwgInNlcnZlcl92ZXJz
aW9uIjogeyJtYWpvciI6IDMwLCAidmVyc2lvbiI6ICIzMC4xLjI1IiwgImJ1aWxkX251bWJlciI6
IDI1LCAibWlub3IiOiAxfSwgInRpbWVmcmFtZXMiOiBudWxsLCAid29ya3NwYWNlcyI6IFtdLCAi
YXV0b21hdGljX3Rhc2tzIjogW10sICJmdW5jdGlvbnMiOiBbeyJkaXNwbGF5X25hbWUiOiAiU2Nh
biB3aXRoIHVybHNjYW4uaW8iLCAiZGVzY3JpcHRpb24iOiB7ImNvbnRlbnQiOiAiQW5hbHl6ZSBh
IFVSTCB3aXRoIHVybHNjYW4uaW8iLCAiZm9ybWF0IjogInRleHQifSwgImNyZWF0b3IiOiB7ImRp
c3BsYXlfbmFtZSI6ICJSZXNpbGllbnQgU3lzYWRtaW4iLCAidHlwZSI6ICJ1c2VyIiwgImlkIjog
NywgIm5hbWUiOiAiYXBpQGV4YW1wbGUuY29tIn0sICJ2aWV3X2l0ZW1zIjogW3sic2hvd19pZiI6
IG51bGwsICJmaWVsZF90eXBlIjogIl9fZnVuY3Rpb24iLCAic2hvd19saW5rX2hlYWRlciI6IGZh
bHNlLCAiZWxlbWVudCI6ICJmaWVsZF91dWlkIiwgImNvbnRlbnQiOiAiNjJmOTVlZTktYTExMi00
ZDFhLWFhNjUtYTZkMGUxZWY3MzQ4IiwgInN0ZXBfbGFiZWwiOiBudWxsfSwgeyJzaG93X2lmIjog
bnVsbCwgImZpZWxkX3R5cGUiOiAiX19mdW5jdGlvbiIsICJzaG93X2xpbmtfaGVhZGVyIjogZmFs
c2UsICJlbGVtZW50IjogImZpZWxkX3V1aWQiLCAiY29udGVudCI6ICJlZTkyNjNjMS00MzJhLTQx
YTgtOGJiZC02OTE3ZDM1ZmIzYWMiLCAic3RlcF9sYWJlbCI6IG51bGx9LCB7InNob3dfaWYiOiBu
dWxsLCAiZmllbGRfdHlwZSI6ICJfX2Z1bmN0aW9uIiwgInNob3dfbGlua19oZWFkZXIiOiBmYWxz
ZSwgImVsZW1lbnQiOiAiZmllbGRfdXVpZCIsICJjb250ZW50IjogIjczODE4NzVjLTdhMzItNDM5
ZC04ZjU1LTc2MjM5YTRjNzJiNyIsICJzdGVwX2xhYmVsIjogbnVsbH0sIHsic2hvd19pZiI6IG51
bGwsICJmaWVsZF90eXBlIjogIl9fZnVuY3Rpb24iLCAic2hvd19saW5rX2hlYWRlciI6IGZhbHNl
LCAiZWxlbWVudCI6ICJmaWVsZF91dWlkIiwgImNvbnRlbnQiOiAiYTJlYmRiNWItM2Q1YS00MjVj
LWEwYzctZmEzNTkwNGZhNWM0IiwgInN0ZXBfbGFiZWwiOiBudWxsfV0sICJleHBvcnRfa2V5Ijog
InVybHNjYW5pbyIsICJ1dWlkIjogImQxOWMxZjAwLWI0ZjEtNDQ4MC1iOGEzLTdiZGQxOTE0MzA0
MSIsICJsYXN0X21vZGlmaWVkX2J5IjogeyJkaXNwbGF5X25hbWUiOiAiXHVmZWZmXHUyMDYzIEh1
Z2giLCAidHlwZSI6ICJ1c2VyIiwgImlkIjogNCwgIm5hbWUiOiAiaHB5bGVAcmVzaWxpZW50c3lz
dGVtcy5jb20ifSwgInZlcnNpb24iOiAzLCAid29ya2Zsb3dzIjogW3siZGVzY3JpcHRpb24iOiBu
dWxsLCAib2JqZWN0X3R5cGUiOiAiYXJ0aWZhY3QiLCAiYWN0aW9ucyI6IFtdLCAibmFtZSI6ICJF
eGFtcGxlOiB1cmxzY2FuLmlvIiwgIndvcmtmbG93X2lkIjogMTY4LCAicHJvZ3JhbW1hdGljX25h
bWUiOiAiZXhhbXBsZV91cmxzY2FuaW8iLCAidXVpZCI6IG51bGx9XSwgImxhc3RfbW9kaWZpZWRf
dGltZSI6IDE1MzEzNTQ1ODYyOTUsICJkZXN0aW5hdGlvbl9oYW5kbGUiOiAidXJsc2NhbmlvIiwg
ImlkIjogMjU3LCAibmFtZSI6ICJ1cmxzY2FuaW8ifV0sICJub3RpZmljYXRpb25zIjogbnVsbCwg
InJlZ3VsYXRvcnMiOiBudWxsLCAiaW5jaWRlbnRfdHlwZXMiOiBbeyJjcmVhdGVfZGF0ZSI6IDE1
MzEzNjI2NDc2MzEsICJkZXNjcmlwdGlvbiI6ICJDdXN0b21pemF0aW9uIFBhY2thZ2VzIChpbnRl
cm5hbCkiLCAiZXhwb3J0X2tleSI6ICJDdXN0b21pemF0aW9uIFBhY2thZ2VzIChpbnRlcm5hbCki
LCAiaWQiOiAwLCAibmFtZSI6ICJDdXN0b21pemF0aW9uIFBhY2thZ2VzIChpbnRlcm5hbCkiLCAi
dXBkYXRlX2RhdGUiOiAxNTMxMzYyNjQ3NjMxLCAidXVpZCI6ICJiZmVlYzJkNC0zNzcwLTExZTgt
YWQzOS00YTAwMDQwNDRhYTAiLCAiZW5hYmxlZCI6IGZhbHNlLCAic3lzdGVtIjogZmFsc2UsICJw
YXJlbnRfaWQiOiBudWxsLCAiaGlkZGVuIjogZmFsc2V9XSwgInNjcmlwdHMiOiBbXSwgInR5cGVz
IjogW10sICJtZXNzYWdlX2Rlc3RpbmF0aW9ucyI6IFt7InV1aWQiOiAiOWM0ZTAxNDMtZDg0Yi00
ZmQyLWJlNDMtZTUyZWViMWUxZTA4IiwgImV4cG9ydF9rZXkiOiAidXJsc2NhbmlvIiwgIm5hbWUi
OiAidXJsc2Nhbi5pbyIsICJkZXN0aW5hdGlvbl90eXBlIjogMCwgInByb2dyYW1tYXRpY19uYW1l
IjogInVybHNjYW5pbyIsICJleHBlY3RfYWNrIjogdHJ1ZSwgInVzZXJzIjogWyJhcGlAZXhhbXBs
ZS5jb20iXX1dLCAiaW5jaWRlbnRfYXJ0aWZhY3RfdHlwZXMiOiBbXSwgInJvbGVzIjogW10sICJm
aWVsZHMiOiBbeyJvcGVyYXRpb25zIjogW10sICJyZWFkX29ubHkiOiB0cnVlLCAibmFtZSI6ICJp
bmNfdHJhaW5pbmciLCAidGVtcGxhdGVzIjogW10sICJ0eXBlX2lkIjogMCwgImNob3NlbiI6IGZh
bHNlLCAidGV4dCI6ICJTaW11bGF0aW9uIiwgImRlZmF1bHRfY2hvc2VuX2J5X3NlcnZlciI6IGZh
bHNlLCAiZXhwb3J0X2tleSI6ICJpbmNpZGVudC9pbmNfdHJhaW5pbmciLCAidG9vbHRpcCI6ICJX
aGV0aGVyIHRoZSBpbmNpZGVudCBpcyBhIHNpbXVsYXRpb24gb3IgYSByZWd1bGFyIGluY2lkZW50
LiAgVGhpcyBmaWVsZCBpcyByZWFkLW9ubHkuIiwgInJpY2hfdGV4dCI6IGZhbHNlLCAib3BlcmF0
aW9uX3Blcm1zIjoge30sICJwcmVmaXgiOiBudWxsLCAiaW50ZXJuYWwiOiBmYWxzZSwgInZhbHVl
cyI6IFtdLCAiYmxhbmtfb3B0aW9uIjogZmFsc2UsICJpbnB1dF90eXBlIjogImJvb2xlYW4iLCAi
Y2hhbmdlYWJsZSI6IHRydWUsICJoaWRlX25vdGlmaWNhdGlvbiI6IGZhbHNlLCAiaWQiOiAxMTcy
LCAidXVpZCI6ICJjM2YwZTNlZC0yMWUxLTRkNTMtYWZmYi1mZTVjYTMzMDhjY2EifSwgeyJvcGVy
YXRpb25zIjogW10sICJ0eXBlX2lkIjogMTEsICJvcGVyYXRpb25fcGVybXMiOiB7fSwgInRleHQi
OiAidXJsc2NhbmlvX3B1YmxpYyIsICJibGFua19vcHRpb24iOiBmYWxzZSwgInByZWZpeCI6IG51
bGwsICJjaGFuZ2VhYmxlIjogdHJ1ZSwgImlkIjogMzIwNywgInJlYWRfb25seSI6IGZhbHNlLCAi
dXVpZCI6ICJlZTkyNjNjMS00MzJhLTQxYTgtOGJiZC02OTE3ZDM1ZmIzYWMiLCAiY2hvc2VuIjog
ZmFsc2UsICJpbnB1dF90eXBlIjogImJvb2xlYW4iLCAidG9vbHRpcCI6ICJTaG91bGQgdGhlIHNj
YW4gYmUgcG9zdGVkIGFzIHB1YmxpYz8iLCAiaW50ZXJuYWwiOiBmYWxzZSwgInJpY2hfdGV4dCI6
IGZhbHNlLCAidGVtcGxhdGVzIjogW10sICJleHBvcnRfa2V5IjogIl9fZnVuY3Rpb24vdXJsc2Nh
bmlvX3B1YmxpYyIsICJoaWRlX25vdGlmaWNhdGlvbiI6IGZhbHNlLCAicGxhY2Vob2xkZXIiOiAi
IiwgIm5hbWUiOiAidXJsc2NhbmlvX3B1YmxpYyIsICJkZWZhdWx0X2Nob3Nlbl9ieV9zZXJ2ZXIi
OiBmYWxzZSwgInZhbHVlcyI6IFtdfSwgeyJvcGVyYXRpb25zIjogW10sICJ0eXBlX2lkIjogMTEs
ICJvcGVyYXRpb25fcGVybXMiOiB7fSwgInRleHQiOiAidXJsc2NhbmlvX3JlZmVyZXIiLCAiYmxh
bmtfb3B0aW9uIjogZmFsc2UsICJwcmVmaXgiOiBudWxsLCAiY2hhbmdlYWJsZSI6IHRydWUsICJp
ZCI6IDMyMDksICJyZWFkX29ubHkiOiBmYWxzZSwgInV1aWQiOiAiYTJlYmRiNWItM2Q1YS00MjVj
LWEwYzctZmEzNTkwNGZhNWM0IiwgImNob3NlbiI6IGZhbHNlLCAiaW5wdXRfdHlwZSI6ICJ0ZXh0
IiwgInRvb2x0aXAiOiAiQ3VzdG9tIHJlZmVyZXIgVVJMIGZvciB0aGlzIHNjYW4iLCAiaW50ZXJu
YWwiOiBmYWxzZSwgInJpY2hfdGV4dCI6IGZhbHNlLCAidGVtcGxhdGVzIjogW10sICJleHBvcnRf
a2V5IjogIl9fZnVuY3Rpb24vdXJsc2NhbmlvX3JlZmVyZXIiLCAiaGlkZV9ub3RpZmljYXRpb24i
OiBmYWxzZSwgInBsYWNlaG9sZGVyIjogIiIsICJuYW1lIjogInVybHNjYW5pb19yZWZlcmVyIiwg
ImRlZmF1bHRfY2hvc2VuX2J5X3NlcnZlciI6IGZhbHNlLCAidmFsdWVzIjogW119LCB7Im9wZXJh
dGlvbnMiOiBbXSwgInR5cGVfaWQiOiAxMSwgIm9wZXJhdGlvbl9wZXJtcyI6IHt9LCAidGV4dCI6
ICJ1cmxzY2FuaW9fdXJsIiwgImJsYW5rX29wdGlvbiI6IGZhbHNlLCAicHJlZml4IjogbnVsbCwg
ImNoYW5nZWFibGUiOiB0cnVlLCAiaWQiOiAzMjA2LCAicmVhZF9vbmx5IjogZmFsc2UsICJ1dWlk
IjogIjYyZjk1ZWU5LWExMTItNGQxYS1hYTY1LWE2ZDBlMWVmNzM0OCIsICJjaG9zZW4iOiBmYWxz
ZSwgImlucHV0X3R5cGUiOiAidGV4dCIsICJ0b29sdGlwIjogIiIsICJpbnRlcm5hbCI6IGZhbHNl
LCAicmljaF90ZXh0IjogZmFsc2UsICJ0ZW1wbGF0ZXMiOiBbXSwgImV4cG9ydF9rZXkiOiAiX19m
dW5jdGlvbi91cmxzY2FuaW9fdXJsIiwgImhpZGVfbm90aWZpY2F0aW9uIjogZmFsc2UsICJwbGFj
ZWhvbGRlciI6ICIiLCAibmFtZSI6ICJ1cmxzY2FuaW9fdXJsIiwgImRlZmF1bHRfY2hvc2VuX2J5
X3NlcnZlciI6IGZhbHNlLCAidmFsdWVzIjogW119LCB7Im9wZXJhdGlvbnMiOiBbXSwgInR5cGVf
aWQiOiAxMSwgIm9wZXJhdGlvbl9wZXJtcyI6IHt9LCAidGV4dCI6ICJ1cmxzY2FuaW9fdXNlcmFn
ZW50IiwgImJsYW5rX29wdGlvbiI6IGZhbHNlLCAicHJlZml4IjogbnVsbCwgImNoYW5nZWFibGUi
OiB0cnVlLCAiaWQiOiAzMjA4LCAicmVhZF9vbmx5IjogZmFsc2UsICJ1dWlkIjogIjczODE4NzVj
LTdhMzItNDM5ZC04ZjU1LTc2MjM5YTRjNzJiNyIsICJjaG9zZW4iOiBmYWxzZSwgImlucHV0X3R5
cGUiOiAidGV4dCIsICJ0b29sdGlwIjogIk92ZXJyaWRlIFVzZXItQWdlbnQgZm9yIHRoaXMgc2Nh
biIsICJpbnRlcm5hbCI6IGZhbHNlLCAicmljaF90ZXh0IjogZmFsc2UsICJ0ZW1wbGF0ZXMiOiBb
XSwgImV4cG9ydF9rZXkiOiAiX19mdW5jdGlvbi91cmxzY2FuaW9fdXNlcmFnZW50IiwgImhpZGVf
bm90aWZpY2F0aW9uIjogZmFsc2UsICJwbGFjZWhvbGRlciI6ICIiLCAibmFtZSI6ICJ1cmxzY2Fu
aW9fdXNlcmFnZW50IiwgImRlZmF1bHRfY2hvc2VuX2J5X3NlcnZlciI6IGZhbHNlLCAidmFsdWVz
IjogW119XSwgIm92ZXJyaWRlcyI6IFtdLCAiZXhwb3J0X2RhdGUiOiAxNTMxMzYyNjIyNzk3fQ==
"""
)
| 70.414729 | 87 | 0.970276 | 341 | 18,167 | 51.653959 | 0.941349 | 0.00193 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.123358 | 0.023669 | 18,167 | 257 | 88 | 70.688716 | 0.869707 | 0.026972 | 0 | 0 | 1 | 0 | 0.986565 | 0.973696 | 0 | 1 | 0 | 0 | 0 | 1 | 0.00431 | false | 0 | 0.012931 | 0 | 0.017241 | 0.00431 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
ac258f132e2c12a705eeeb7ca737fd171a772b0b | 7,071 | py | Python | generators/blockstates/blockstate_generator.py | Cheeseborgers/shelve-it | 170c05762a00300d5f645397991c64a8f96638e1 | [
"MIT"
] | null | null | null | generators/blockstates/blockstate_generator.py | Cheeseborgers/shelve-it | 170c05762a00300d5f645397991c64a8f96638e1 | [
"MIT"
] | null | null | null | generators/blockstates/blockstate_generator.py | Cheeseborgers/shelve-it | 170c05762a00300d5f645397991c64a8f96638e1 | [
"MIT"
] | null | null | null | import json
types = ["concrete", "terracotta", "wool"]
colors = ["white", "black", "blue", "brown", "cyan", "gray", "green", "light_blue",
          "light_gray", "lime", "magenta", "orange", "pink", "purple", "red", "yellow"]
singular_block_types = ["terracotta"]

def process_data():
    print("Beginning data processing...")
    # Shelves that come in multiple types and colors (i.e. wool, concrete, terracotta).
    for color in colors:
        for block_type in types:
            block_state_data = createBlockStateData(block_type, color)
            block_path = f"{color}_{block_type}"
            saveBlockStatesToJson(block_path, block_state_data)
    # Terracotta also has a singular, uncolored variant named plain 'terracotta',
    # so single-block variants are handled here.
    for block_type in singular_block_types:
        block_state_data = createBlockStateData(block_type, "")
        saveBlockStatesToJson(block_type, block_state_data)
    print("Data processing finished.")

def saveBlockStatesToJson(block_path, model_data):
    filename = f"{block_path}_bookshelf.json"
    print(f"Saving {block_path} to: {filename}")
    with open(filename, "w", encoding="utf-8") as write_json:
        json.dump(model_data, write_json, ensure_ascii=False, indent=4)
    print(f"{filename} was saved")

def createBlockStateData(block_type, color):
    # Bookshelves hold 0-27 books; each count maps to its own block model.
    # Colored variants are prefixed "{color}_{block_type}"; singular variants
    # (e.g. plain terracotta) use the bare block type.
    prefix = f"{color}_{block_type}" if color != "" else block_type
    return {
        "variants": {
            f"number_of_books={n}": {"model": f"shelveit:block/{prefix}_bookshelf_{n}"}
            for n in range(28)
        }
    }

def main():
    print("Starting block state file creation...")
    process_data()
    print("Done.")
if __name__ == "__main__":
main() | 60.435897 | 116 | 0.591854 | 840 | 7,071 | 4.636905 | 0.142857 | 0.115019 | 0.186906 | 0.273171 | 0.793838 | 0.782028 | 0.756611 | 0.692683 | 0.692683 | 0.692683 | 0 | 0.034695 | 0.241833 | 7,071 | 117 | 117 | 60.435897 | 0.691849 | 0.026022 | 0 | 0.060606 | 0 | 0 | 0.544517 | 0.314016 | 0 | 0 | 0 | 0 | 0 | 1 | 0.040404 | false | 0 | 0.020202 | 0 | 0.080808 | 0.060606 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
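For reference, each blockstate file produced by the generator above maps every book count (0-27) to its own model reference. A minimal round-trip sketch of that shape (the `make_blockstate` helper and the `white_wool` prefix here are illustrative, not from the repo):

```python
import json

def make_blockstate(prefix):
    # One model reference per book count, namespaced under "shelveit".
    return {
        "variants": {
            f"number_of_books={n}": {"model": f"shelveit:block/{prefix}_bookshelf_{n}"}
            for n in range(28)
        }
    }

# Round-trip through JSON the same way the generator serializes it.
serialized = json.dumps(make_blockstate("white_wool"), ensure_ascii=False, indent=4)
data = json.loads(serialized)
assert len(data["variants"]) == 28
assert data["variants"]["number_of_books=27"]["model"] == "shelveit:block/white_wool_bookshelf_27"
```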
ac3f3260e43932e1f0e16367bfa9b58de8c70f42 | 101 | py | Python | builders/gallery/util.py | GLorieul/zachaire | a1ee733d407b88a11bc1a21cab69531a95bef525 | [
"MIT"
] | null | null | null | builders/gallery/util.py | GLorieul/zachaire | a1ee733d407b88a11bc1a21cab69531a95bef525 | [
"MIT"
] | null | null | null | builders/gallery/util.py | GLorieul/zachaire | a1ee733d407b88a11bc1a21cab69531a95bef525 | [
"MIT"
] | null | null | null |
import os
def getThumbnailName(fileName):
return os.path.splitext(fileName)[0] + '_thumb.jpg'
| 14.428571 | 55 | 0.722772 | 13 | 101 | 5.538462 | 0.846154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011628 | 0.148515 | 101 | 6 | 56 | 16.833333 | 0.825581 | 0 | 0 | 0 | 0 | 0 | 0.10101 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 5 |
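The helper above derives a thumbnail name by stripping the extension and appending a suffix. A quick usage sketch (the sample filenames are made up):

```python
import os

def getThumbnailName(fileName):
    # Strip the extension and append the thumbnail suffix.
    return os.path.splitext(fileName)[0] + '_thumb.jpg'

print(getThumbnailName('beach.png'))       # → beach_thumb.jpg
print(getThumbnailName('dir/photo.jpeg'))  # → dir/photo_thumb.jpg
```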
ac6d6212150af25e0e5e508185bb9ee4d6db138f | 40 | py | Python | tests/__init__.py | SatelliteApplicationsCatapult/workfinder | d7e214e7133bb2efdd3947be3183203c4170b220 | [
"Apache-2.0"
] | null | null | null | tests/__init__.py | SatelliteApplicationsCatapult/workfinder | d7e214e7133bb2efdd3947be3183203c4170b220 | [
"Apache-2.0"
] | null | null | null | tests/__init__.py | SatelliteApplicationsCatapult/workfinder | d7e214e7133bb2efdd3947be3183203c4170b220 | [
"Apache-2.0"
] | null | null | null | """Unit test package for workfinder."""
| 20 | 39 | 0.7 | 5 | 40 | 5.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 40 | 1 | 40 | 40 | 0.8 | 0.825 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
ac6d84f6a0815721bca6ea06d6510ab004ceb2ee | 96 | py | Python | tests/functional/login.py | chibisov/cli-bdd | 579e2d9a07f9985b268aa9aaba42dee33021e163 | [
"MIT"
] | 8 | 2016-05-17T21:32:28.000Z | 2022-02-12T08:59:59.000Z | tests/functional/login.py | chibisov/cli-bdd | 579e2d9a07f9985b268aa9aaba42dee33021e163 | [
"MIT"
] | 7 | 2016-04-24T07:54:07.000Z | 2020-06-16T15:38:52.000Z | tests/functional/login.py | chibisov/cli-bdd | 579e2d9a07f9985b268aa9aaba42dee33021e163 | [
"MIT"
] | 4 | 2018-02-21T11:19:24.000Z | 2019-06-10T17:53:29.000Z | login = raw_input('Login:')
password = raw_input('Password:')
print '%s %s' % (login, password)
| 24 | 33 | 0.666667 | 13 | 96 | 4.769231 | 0.461538 | 0.258065 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 96 | 3 | 34 | 32 | 0.738095 | 0 | 0 | 0 | 0 | 0 | 0.208333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.666667 | 0 | null | null | 0.333333 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 5 |
ac753bf28946ae0cca5eed3a6bf144f55c68b861 | 153 | py | Python | gammapy/analysis/__init__.py | Rishank2610/gammapy | 3cd64fdb2c53c8e5c697a9b85ef8d0486bff0b76 | [
"BSD-3-Clause"
] | 155 | 2015-02-25T12:38:02.000Z | 2022-03-13T17:54:30.000Z | gammapy/analysis/__init__.py | Rishank2610/gammapy | 3cd64fdb2c53c8e5c697a9b85ef8d0486bff0b76 | [
"BSD-3-Clause"
] | 3,131 | 2015-01-06T15:36:23.000Z | 2022-03-31T17:30:57.000Z | gammapy/analysis/__init__.py | Rishank2610/gammapy | 3cd64fdb2c53c8e5c697a9b85ef8d0486bff0b76 | [
"BSD-3-Clause"
] | 158 | 2015-03-16T20:36:44.000Z | 2022-03-30T16:05:37.000Z | # Licensed under a 3-clause BSD style license - see LICENSE.rst
"""Gammapy high level interface (analysis)."""
from .config import *
from .core import *
| 30.6 | 63 | 0.732026 | 22 | 153 | 5.090909 | 0.863636 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007752 | 0.156863 | 153 | 4 | 64 | 38.25 | 0.860465 | 0.673203 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 5 |
ac7ccc2f5a9cf92e9e29420e6d93f9fe7b720796 | 31 | py | Python | Modulo_1/semana4/Modulos_Paquetes/Modulo/main-about-module.py | rubens233/cocid_python | 492ebdf21817e693e5eb330ee006397272f2e0cc | [
"MIT"
] | null | null | null | Modulo_1/semana4/Modulos_Paquetes/Modulo/main-about-module.py | rubens233/cocid_python | 492ebdf21817e693e5eb330ee006397272f2e0cc | [
"MIT"
] | null | null | null | Modulo_1/semana4/Modulos_Paquetes/Modulo/main-about-module.py | rubens233/cocid_python | 492ebdf21817e693e5eb330ee006397272f2e0cc | [
"MIT"
] | 1 | 2022-03-04T00:57:18.000Z | 2022-03-04T00:57:18.000Z |
from fibo import fib
fib(500)
| 7.75 | 20 | 0.741935 | 6 | 31 | 3.833333 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12 | 0.193548 | 31 | 3 | 21 | 10.333333 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |