# ---- train_classifier.py | SteveCruz/icpr-syn2real | MIT ----
##############################################################################################################################################################
"""
Training scripts for classification models presented in our paper.
Replace or modify the config file in the following part of the code to make changes to train different models.
# load the config file
config = toml.load("cfg/pretrained_classifier.toml")
"""
##############################################################################################################################################################
##############################################################################################################################################################
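# Illustrative shape of cfg/pretrained_classifier.toml, inferred from the keys read in this
# script (values are placeholders, not the settings used in the paper):
#
#   [model]
#   type = "classification"
#
#   [dataset]
#   name = "..."
#   factor = "..."
#
#   [training]
#   gpu = "0"
#   augment = true
#   batch_size = 64
#   optimizer = "SGD"
#   learning_rate = 0.001
#   weight_decay = 0.0001
#   epochs = 100
#   frequency = 10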
import os
import sys
import toml
import torch
import random
import numpy as np
from torch import optim
from torch.cuda.amp import autocast, GradScaler
import torchvision.transforms.functional as TF
import train_ae
import utils as utils
from pretrained_model import PretrainedClassifier, classification_loss
from dataset import create_dataset
##############################################################################################################################################################
##############################################################################################################################################################
def model_setup(config):
if config["model"]["type"] != "classification":
raise ValueError("Your config is for an autoencoder model, but this script is for classification models. Please use train_ae.py instead.")
# load data
train_loader = create_dataset(
which_dataset=config["dataset"]["name"],
which_factor=config["dataset"]["factor"],
use_triplet=False,
should_augment=config["training"]["augment"],
make_scene_impossible=False,
make_instance_impossible=False,
batch_size=config["training"]["batch_size"],
shuffle=True,
get_all=False
)
# number of classification classes
nbr_classes = len(list(train_loader.dataset.string_labels_to_integer_dict.keys()))
# define model
model = PretrainedClassifier(config["model"], nbr_classes=nbr_classes).to(config["device"])
# get the optimizer defined in the config file
# load it from the torch module
optim_def = getattr(optim, config["training"]["optimizer"])
# create the optimizer
optimizer = dict()
if config["training"]["optimizer"] == "SGD":
optimizer["method"] = optim_def(model.parameters(), lr=config["training"]["learning_rate"], weight_decay=config["training"]["weight_decay"], momentum=0.9, nesterov=True)
else:
optimizer["method"] = optim_def(model.parameters(), lr=config["training"]["learning_rate"], weight_decay=config["training"]["weight_decay"])
print('=' * 73)
print(optimizer["method"])
print('=' * 73)
return model, optimizer, train_loader
##############################################################################################################################################################
def train_one_epoch(model, optimizer, scaler, train_loader, config, nbr_epoch):
# make sure we are training
model.train()
# placeholder
total_loss = 0
# for each batch
for batch_images in train_loader:
# set gradients to zero
optimizer["method"].zero_grad()
# push to gpu
input_images = batch_images["image"].to(config["device"])
labels_left = batch_images["gt_left"]
labels_middle = batch_images["gt_middle"]
labels_right = batch_images["gt_right"]
labels = torch.tensor([train_loader.dataset.string_labels_to_integer_dict[str(x.item())+"_"+str(y.item())+"_"+str(z.item())] for x,y,z in zip(labels_left, labels_middle, labels_right)]).to(config["device"])
# inference
with autocast():
model_output = model(input_images)
# classification error
batch_loss = classification_loss(model_output, labels)
# Scales loss. Calls backward() on scaled loss to create scaled gradients.
# Backward passes under autocast are not recommended.
# Backward ops run in the same dtype autocast chose for corresponding forward ops.
scaler.scale(batch_loss).backward()
# scaler.step() first unscales the gradients of the optimizer's assigned params.
# If these gradients do not contain infs or NaNs, optimizer.step() is then called,
# otherwise, optimizer.step() is skipped.
scaler.step(optimizer["method"])
# Updates the scale for next iteration.
scaler.update()
# update total loss
total_loss += batch_loss.item()
print(f"[Training] \tEpoch: {nbr_epoch+1} Total Loss: {total_loss:.4f}")
return model
##############################################################################################################################################################
def evaluate(model, train_loader, loader_dict, config, save_folder, nbr_epoch):
# make sure we are evaluating
model.eval()
# we do not need to keep track of gradients
with torch.no_grad():
performances = dict()
# for the loader of each test vehicle
for vehicle, loader in loader_dict.items():
correct = 0
total = 0
# for each batch
for batch_images in loader:
# push to gpu
input_images = batch_images["image"].to(config["device"])
labels_left = batch_images["gt_left"]
labels_middle = batch_images["gt_middle"]
labels_right = batch_images["gt_right"]
labels = torch.tensor([train_loader.dataset.string_labels_to_integer_dict[str(x.item())+"_"+str(y.item())+"_"+str(z.item())] for x,y,z in zip(labels_left, labels_middle, labels_right)]).to(config["device"])
# we input the distorted input image
classif_output = model(input_images)
_, predictions = torch.max(classif_output, 1)
correct += (predictions == labels).sum().item()
total += labels.size(0)
flipped_input_images = torch.stack([TF.hflip(x) for x in input_images])
flipped_labels = torch.tensor([train_loader.dataset.string_labels_to_integer_dict[str(z.item())+"_"+str(y.item())+"_"+str(x.item())] for x,y,z in zip(labels_left, labels_middle, labels_right)]).to(config["device"])
flipped_classif_output = model(flipped_input_images)
_, flipped_predictions = torch.max(flipped_classif_output, 1)
correct += (flipped_predictions == flipped_labels).sum().item()
total += flipped_labels.size(0)
# compute the epoch accuracy
accuracy = correct / total
performances[vehicle] = dict()
performances[vehicle]["accuracy"] = accuracy
print(f"[Testing] \tEpoch: {nbr_epoch+1}, Vehicle: {vehicle}, Accuracy: {100*accuracy:.2f}% ({correct}/{total})")
if vehicle.lower() == "ticam":
utils.append_accuracy(save_folder, accuracy)
performances["epoch"] = nbr_epoch
return performances
##############################################################################################################################################################
def train(config):
#########################################################
# GPU
#########################################################
# specify which gpu should be visible
os.environ["CUDA_VISIBLE_DEVICES"] = config["training"]["gpu"]
# save the gpu settings
config["device"] = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
# gradscaler to improve speed performance with mixed precision training
scaler = GradScaler()
#########################################################
# Setup
#########################################################
# create the folders for saving
save_folder = train_ae.folder_setup(config)
# create the model, optimizer and data loader
model, optimizer, train_loader = model_setup(config)
# get also a test loader for evaluation on unseen dataset
test_loader = train_ae.get_test_loader(config, real_only=True)
#########################################################
# Training
#########################################################
# keep track of time
timer = utils.TrainingTimer()
# best performance so far
best_performance = {"ticam": {"accuracy" : 0}}
# for each epoch
for nbr_epoch in range(config["training"]["epochs"]):
# train a single epoch
model = train_one_epoch(model, optimizer, scaler, train_loader, config, nbr_epoch)
# evaluate a single epoch
if (nbr_epoch+1) % config["training"]["frequency"] == 0 or nbr_epoch == 1:
performances = evaluate(model, train_loader, test_loader, config, save_folder, nbr_epoch)
# save the best model
if performances["ticam"]["accuracy"] > best_performance["ticam"]["accuracy"]:
best_performance = performances
torch.save(model.state_dict(), save_folder["checkpoints"] / "best_model.pth")
#########################################################
# Aftermath
#########################################################
# save the last model
torch.save(model.state_dict(), save_folder["checkpoints"] / "last_model.pth")
# save the transformation from string to integer labels
np.save(save_folder["checkpoints"] / 'label_dict.npy', train_loader.dataset.string_labels_to_integer_dict)
print("=" * 37)
timer.print_end_time()
print("=" * 37)
print("Best performance:")
for key, value in best_performance.items():
if key != "epoch":
print(f"{key}:{100*value['accuracy']:.2f}")
else:
print(f"{key}:{value}")
print("=" * 37)
# reset the stdout with the original one
# this is necessary when the train function is called several times
# by another script
sys.stdout = sys.stdout.end()
##############################################################################################################################################################
##############################################################################################################################################################
if __name__ == "__main__":
# reproducibility
seed = 42
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
np.random.seed(seed)
random.seed(seed)
# load the config file
config = toml.load("cfg/pretrained_classifier.toml")
# start the training using the config file
    train(config)
# ---- main.py | felixenzogarofalo/PrectionModel | MIT ----
from model.utils.data_engineering import DataEngineering
from model.prediction_model.regression import Regression
# Create an instance for DataEngineering and load data from CSV
csv_path = "data/area_01.csv"
data_e = DataEngineering()
data_e.load_data(csv_path)
data_e.clean_data()
# Create new features
# "age" feature
max_date = data_e.get_data()["año"].max()
age = max_date - data_e.get_data()["año"]
data_e.add_column("age", age)
# "flow" feature
flow_data = data_e.get_data()["E_FLUJO"].copy().astype("category").cat.codes
data_e.add_column("flow", flow_data)
# Set features and label
features = ["flow",
"NU_COORD_UTM ESTE",
"NU_COORD_UTM NORTE",
"°API",
"age"]
label = "BBPD"
data_e.set_features(features)
data_e.set_label(label)
# Split Train-Test data
data_e.split_data()
# Create a Model
model = Regression(data_e)
# Train and test the model
model.train()
print(f"------------------------------\nMean score: {model.score()}")
# Make a prediction
model.predict(data_e.x_test.iloc[0], data_e.y_test.iloc[0])
# ---- Web/flask/simple2.py | kasztp/python-lessons | MIT ----
from random import choice
from flask import Flask, render_template
app = Flask(__name__)
@app.route("/")
def index():
    fortune = choice([
        'You will have good health',
        'You will not have good health'
    ])
return render_template('simple2.html', fortune=fortune)
app.run(host='127.0.0.1', debug=True)
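# With the settings above, Flask serves the page at http://127.0.0.1:5000/ (5000 is Flask's default port).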
# ---- contacts/testing/test_integration/test_audit.py | rossm6/accounts | MIT ----
from accountancy.helpers import bulk_delete_with_history
from accountancy.signals import audit_post_delete
from contacts.models import Contact
from django.test import TestCase
from simple_history.models import HistoricalRecords
from django.db import models
class ContactAuditTests(TestCase):
"""
    Check that the app is set up correctly, i.e. the right signals are set up
    and it is registered with the simple history package.
"""
def test_simple_history_post_delete_receiver_is_removed(self):
"""
The ready method of the AppConfig calls simple_history_custom_set_up
on the AuditMixin class which disconnects this receiver.
"""
live_receivers = models.signals.post_delete._live_receivers(Contact)
for receiver in live_receivers:
if receiver.__self__.__class__.__name__ == HistoricalRecords.__name__:
self.fail(
"""
Historical Records receiver not disconnected.
It should be because we are using our own custom signal
which is fired when we delete."""
)
def test_audit_post_delete_signal_is_added(self):
"""
After registering the model and disconnecting the receiver from
the post delete signal we add our receiver to a custom signal
"""
live_receivers = audit_post_delete._live_receivers(Contact)
found = False
for receiver in live_receivers:
if str(receiver) == "<bound method AuditMixin.post_delete of <class 'contacts.models.Contact'>>":
found = True
break
if not found:
self.fail("Failed to find the post_delete method of the AuditMixin class")
def test_instance_deleted(self):
c = Contact(
code="1",
name="contact1",
email="doris@hotmail.com"
)
c.save()
c.delete()
self.assertEqual(
len(
Contact.history.all()
),
2 # created + deleted audits
)
def test_queryset_deleted(self):
c = Contact(
code="1",
name="contact1",
email="doris@hotmail.com"
)
c.save()
Contact.objects.all().delete()
self.assertEqual(
len(
Contact.history.all()
),
1 # created audit only
# deleted audit is not created
# use bulk_delete_with_history for deleted audits
        )
# ---- home_user/dj_iot/users/views.py | IoTree/IoTree42 | MIT ----
"""
//Iotree42 sensor Network
//purpose: to handle requests and to process the web pages
//used software: python3, django, time, datetime, rest_framework
//for hardware: Debian-Server
//design by Sebastian Stadler
//on behalf of the university of munich.
//NO WARRANTY AND NO LIABILITY
//use of the code at your own risk.
"""
from django.shortcuts import render, redirect
from django.contrib import messages
from django.contrib.auth.decorators import login_required
from django.utils.decorators import method_decorator
from .forms import UserUpdateForm, ProfileUpdateForm, UserRegisterForm, TreePostForm, InputPostForm
from django.core.paginator import Paginator
from .mqttcon import InitMqttClient
from django.http import HttpResponse, Http404
from django.conf import settings
from django.contrib.auth.models import User
from datetime import timezone
from .fluxcon import InitInfluxUser, DelInfluxData
from .fluxdatacon import FluxDataCon
from .grafanacon import InitGrafaUser
from .pahocon import PahoSend
import time
import json
from rest_framework.decorators import api_view, permission_classes
from rest_framework.response import Response
from rest_framework.permissions import IsAuthenticated
from revproxy.views import ProxyView
import datetime
with open('/etc/iotree/config.json', encoding='utf-8') as config_file:
config = json.load(config_file)
def zip_download(request, version):
import os
if int(version) == 1:
file_name = 'IoTree_Gateway_V_2.0.zip'
# file_name = version
file_path = os.path.join(settings.MEDIA_ROOT, 'downloadfiles/'+file_name)
if os.path.exists(file_path):
with open(file_path, 'rb') as fh:
response = HttpResponse(fh.read(), content_type="application/force-download")
response['Content-Disposition'] = 'attachment; filename='+ file_name
return response
raise Http404
# func. for the register site
def register(request):
if request.method == 'POST':
form = UserRegisterForm(request.POST)
if form.is_valid():
user_name = form.cleaned_data.get('username')
user_email = form.cleaned_data.get('email')
password1 = form.cleaned_data.get('password1')
user = form.save(commit=False)
init_flux_client = InitInfluxUser(user_name, password1)
init_flux_client.run()
init_mqtt_client = InitMqttClient(user_name, password1)
init_mqtt_client.run()
init_grafa_client = InitGrafaUser(user_name, password1, user_email)
init_grafa_client.run()
user.first_name = "Same as the login PW from this site."
user.last_name = "Same as the login PW from this site."
user.save()
messages.success(request, str(user_name)+': account has been created! You are now able to log in!')
del init_mqtt_client
del init_grafa_client
del init_flux_client
return redirect('login')
else:
form = UserRegisterForm()
return render(request, 'users/register.html', {'form': form})
# func. for deleting user and process the deleting site
@login_required
def delete_user(request):
if request.method == 'POST':
confirm = request.POST.get('confirm')
cancel = request.POST.get('cancel')
if confirm == 'confirm':
user = User.objects.get(username=request.user.username)
user.delete()
            messages.success(request, str(request.user.username) + ' account and all related data have been deleted!')
return redirect('logout')
if cancel == 'cancel':
return redirect('profile')
else:
return render(request, 'users/delete_user.html')
# func. for the profile site
@login_required
def profile(request):
if request.method == 'POST':
u_form = UserUpdateForm(request.POST, instance=request.user)
p_form = ProfileUpdateForm(request.POST, request.FILES, instance=request.user.profile)
delete = request.POST.get('delete', None)
if u_form.is_valid() and p_form.is_valid():
u_form.save()
p_form.save()
messages.success(request, 'Your account has been updated!')
if delete:
try:
return redirect('delete-user')
except User.DoesNotExist:
messages.success(request, ' account does not exist!')
except Exception as e:
messages.success(request, str(e.message))
else:
u_form = UserUpdateForm(instance=request.user)
p_form = ProfileUpdateForm(instance=request.user.profile)
print(p_form)
context = {
'u_form': u_form,
'p_form': p_form
}
return render(request, 'users/profile.html', context)
# func. for the iotree site
@login_required
def treeview(request):
if request.method == 'POST':
form = TreePostForm(request.POST)
if form.is_valid():
time_start = form.cleaned_data.get('time_start')
time_end = form.cleaned_data.get('time_end')
time_start = time_start.replace(tzinfo=timezone.utc).timestamp()
time_end = time_end.replace(tzinfo=timezone.utc).timestamp()
tree = request.POST.get('tree', None)
action = form.cleaned_data.get('action')
time_start = int(time_start*1000)
time_end = int(time_end*1000)
treee = tree.replace("/", "_")
if action == 'table':
return redirect('iotree-show', str(treee), time_start, time_end)
if action == 'download':
return redirect('iotree-download', str(treee), time_start, time_end)
if action == 'delete':
flux_client = FluxDataCon(request.user.username)
tags = flux_client.get_raw_tags(str(tree))
del flux_client
if tags == "":
messages.info(request, "No Date Found! for delete")
return redirect('treeview')
else:
flux_del = DelInfluxData(request.user.username, tags)
response = flux_del.run()
                    messages.info(request, 'Measurements dropped: '+str(tags)+'. Database response: '+str(response))
del flux_del
return redirect('treeview')
else:
form = TreePostForm(initial={'time_end':datetime.datetime.now()})
flux_client = FluxDataCon(request.user.username)
context = flux_client.get_tag_tree()
if str(context) == "[]":
            messages.info(request, 'No data yet!')
del flux_client
return render(request, 'users/treeview.html', {'context':context, 'form':form})
# func. for display all gateways and do actions
@login_required
def gatewaylist(request):
if request.method == 'POST':
messages.info(request, "No Post -> get")
else:
# connect to db
flux_client = FluxDataCon(request.user.username)
# get all tags
tags = flux_client.get_tag_tree()
# filter only the gateway ids
dicttags = json.loads(tags)
gatewaylist = []
for m in dicttags:
gatewaylist.append(m["text"])
        # getting last 5 min entries of ping for each gateway
flux_client.start_time((int(time.time())-300)*1000)
flux_client.end_time(int(time.time())*1000)
tree = [s + "/SYSTEMcontrol/ping" for s in gatewaylist]
lastseen = flux_client.find(",".join(tree))
del flux_client
# check if gateway is online the last 5 min or not
lastseenlist = []
for n in lastseen:
tag = n["posts_tree"]
tag = tag.split("/")
lastseenlist.append(tag[0])
# map data for render page
context = []
for b in gatewaylist:
element = {}
element["id"] = b
if b in lastseenlist:
element["status"] = "online"
element["color"] = "green"
else:
element["status"] = "offline"
element["color"] = "red"
context.append(element)
if str(context) == "[]":
            messages.info(request, 'No gateway connected yet!')
return render(request, 'users/gateway_list.html', {'context':context})
# func. for the gateway input site
@login_required
def input(request, gateway, task):
if request.method == 'POST':
form = InputPostForm(request.POST)
if form.is_valid():
if request.POST.get("send"):
textbox = form.cleaned_data.get('textbox')
if "jsonfile" in task:
topic = "SYSTEMcontrolDONOTSAVE/syncfile"
pahosend = PahoSend(request.user.username, gateway, topic)
jsonstring = pahosend.checkjson(textbox)
if jsonstring:
io = pahosend.send(jsonstring)
if io:
messages.info(request, "MQTT message has been send!")
else:
messages.error(request, "Somthing went wrong when sending please try again.")
else:
messages.error(request, "Sorry this might be not proper json!")
elif "commandsend" in task:
topic = "SYSTEMcontrolDONOTSAVE/bashCOMMAND"
pahosend = PahoSend(request.user.username, gateway, topic)
io = pahosend.send(textbox)
if io:
messages.info(request, "MQTT message has been send!")
else:
messages.error(request, "Somthing went wrong when sending please try again.")
elif "linkgateway" in task:
topic = "SYSTEMcontrolDONOTSAVE/linkgateway"
pahosend = PahoSend(request.user.username, gateway, topic)
jsonstring = pahosend.checkjson(textbox)
if jsonstring:
io = pahosend.send(jsonstring)
if io:
messages.info(request, "MQTT message has been send!")
else:
messages.error(request, "Somthing went wrong when sending please try again.")
else:
messages.error(request, "Sorry this might be not proper json!")
else:
messages.error(request, "Somthing went wrong: task ist not clear. Please try again")
return redirect('input', gateway, task)
elif request.POST.get("update"):
return redirect('input', gateway, task)
elif request.POST.get("cancel"):
return redirect('gatewaylist')
else:
flux_client = FluxDataCon(request.user.username)
flux_client.last = True
# label the textbox
if "jsonfile" in task:
label = "Json File:"
# pre fill with saved data in db
lastentry = flux_client.find(gateway+"/SYSTEMcontrolSAVEJSON/syncfile")
if lastentry:
jsonstring = lastentry[0]["posts_body"][0][1]
form = InputPostForm(initial={"textbox": jsonstring})
else:
form = InputPostForm(initial={"textbox": "{}"})
elif "commandsend" in task:
label = "Send a command to your Gateway (default options: reboot, update, upgrade):"
form = InputPostForm(initial={"textbox": "update"})
elif "linkgateway" in task:
label = 'Listen to other Gateways (example: {"gatewayID":"topic1/topic2/#"}):'
# pre fill with saved data in db
lastentry = flux_client.find(gateway+"/SYSTEMcontrolSAVEJSON/linkgateway")
if lastentry:
jsonstring = lastentry[0]["posts_body"][0][1]
form = InputPostForm(initial={"textbox": jsonstring})
else:
form = InputPostForm(initial={"textbox": "{}"})
context = {
'gateway': "Gateway: "+gateway,
'label': label
}
return render(request, 'users/input.html', {'context':context, 'form':form})
# func. for the setup_rpi site
@login_required
def setup_rpi(request):
if request.method == 'POST':
request.POST.get('download', None)
version = 1
return redirect('zip-download', version)
else:
context = {
'file': '1'
}
return render(request, 'users/setup_rpi.html', context)
# func. for the manual site
@login_required
def manual(request):
return render(request, 'users/manual.html')
# func. for redirect to a grafana iframe via modif.
@login_required
def tografana(request):
# return render(request, 'users/dashboard.html')
    # something not working properly with iframe; needs more investigation.
# work around no iframe via redirect to grafana proxy address
return redirect(config['GRAFA_ADDRESS'])
# modified page for render Grafana in iframe
@login_required
def iframedash(request):
return redirect(config['GRAFA_ADDRESS'])
# method for reverse proxy to grafana with auto login and user validation
@method_decorator(login_required, name='dispatch')
class GrafanaProxyView(ProxyView):
upstream = 'http://localhost:3000/'
def get_proxy_request_headers(self, request):
headers = super(GrafanaProxyView, self).get_proxy_request_headers(request)
headers['X-WEBAUTH-USER'] = request.user.username
return headers
# func. for the iotree_show site, for displaying tables
@login_required
def iotree_show(request, tags, time_start, time_end):
tags = tags.replace("_", "/")
time_start = int(time_start)
time_end = int(time_end)
flux_client = FluxDataCon(request.user.username)
flux_client.start_time(time_start)
flux_client.end_time(time_end)
contexts = flux_client.find(tags)
del flux_client
if len(contexts) == 0:
messages.error(request, 'No Data Found! Data response: '+str(contexts)+'. Given Nodes: '+str(tags) )
return redirect('treeview')
else:
paginator = Paginator(contexts, 1)
page = request.GET.get('page')
context = paginator.get_page(page)
return render(request, 'users/iotree_show.html', {'contexts': context})
# CSV download return
@login_required
def iotree_download(request, tags, time_start, time_end):
import csv
import datetime
tags = tags.replace("_", "/")
time_start = int(time_start)
time_end = int(time_end)
flux_client = FluxDataCon(request.user.username)
flux_client.start_time(time_start)
flux_client.end_time(time_end)
context = flux_client.find(tags)
del flux_client
if len(context) == 0:
messages.error(request, 'No Data Found! Data response: '+str(context)+'. Given Nodes: '+str(tags) )
return redirect('treeview')
else:
# starting a csv file
response = HttpResponse(content_type='text/csv')
        response['Content-Disposition'] = 'attachment; filename="{}"'.format('IoTree42_' + str(datetime.datetime.now()) + '.csv')
writer = csv.writer(response, delimiter=';', dialect='excel')
for z in context:
# add info about data to csv
            writer.writerow(['tree branches: ', z['posts_tree']])
            # headings for csv from data mongo
            writer.writerow(z['posts_head'])
            # values for csv from data mongo
writer.writerows(z['posts_body'])
# for n in reversed(z['posts_body']):
# writer.writerow(n)
writer.writerow(['------', '------', '------', '------', '------', '------', '------'])
return response
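# Illustrative request body for the POST branch of iotree_api below (field names match the
# dict lookups in the view; the tree value is a made-up example):
#   {"tree": "gateway01/sensors/temperature", "time_start": 1577836800000, "time_end": "now"}
# "now" is replaced with the current epoch time in milliseconds; otherwise both bounds are
# read as millisecond timestamps.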
@api_view(['GET', 'POST'])
@permission_classes([IsAuthenticated])
def iotree_api(request):
if request.method == 'POST':
try:
data = dict(request.data)
print(data)
tree = data['tree']
time_start = data['time_start']
time_end = data['time_end']
if time_end == 'now':
time_end = int(time.time()*1000)
else:
time_end = int(time_end)
time_start = int(time_start)
flux_client = FluxDataCon(request.user.username)
flux_client.start_time(time_start)
flux_client.end_time(time_end)
context = flux_client.find(tree)
del flux_client
if str(context) == "[]":
context = {"error":"No Data found! or Timeout!", "Info":"Hint: Max rows 200000!"}
return Response(context)
except:
return Response({"status":404,"Info":"Something went wrong when the query", "Hint":"Max rows 200000!"})
else:
flux_client = FluxDataCon(request.user.username)
iotree = flux_client.get_tag_tree()
leafs = flux_client.get_leafs()
context = {
"listofleafs": leafs,
"iotree": json.loads(iotree)
}
del flux_client
if str(context) == "false":
context = {"error":"false"}
if str(context) == "[]":
context = {"error":"No Data jet!"}
return Response(context)
# ---- art/classifiers/cnn.py | yalechang/adversarial-robustness-toolbox | MIT ----
# MIT License
#
# Copyright (C) IBM Corporation 2018
#
# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
# documentation files (the "Software"), to deal in the Software without restriction, including without limitation the
# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the
# Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
# TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
from __future__ import absolute_import, division, print_function
from keras.models import Sequential, model_from_json
from keras.layers import Dense, Activation, Flatten, Conv2D, MaxPooling2D, Dropout
from keras.layers.normalization import BatchNormalization
from art.classifiers.classifier import Classifier
def mnist_layers(input_shape, nb_filters):
layers = [Conv2D(nb_filters, (8, 8), strides=(2, 2), padding="same", input_shape=input_shape),
"activation",
Conv2D((nb_filters * 2), (6, 6), strides=(2, 2), padding="valid"),
"activation",
Conv2D((nb_filters * 2), (5, 5), strides=(1, 1), padding="valid"),
"activation",
Flatten()]
return layers
def cifar10_layers(input_shape, nb_filters):
layers = [Conv2D(nb_filters // 2, (3, 3), padding="same", input_shape=input_shape),
"activation",
MaxPooling2D(pool_size=(2, 2)),
Dropout(0.5),
Conv2D(nb_filters, (3, 3), padding="valid"),
"activation",
MaxPooling2D(pool_size=(2, 2)),
Dropout(0.5),
Flatten(),
Dense(500),
"activation",
Dropout(0.5)]
return layers
class CNN(Classifier):
"""
Implementation of a convolutional neural network using Keras sequential model
"""
def __init__(self, input_shape=None, include_end=True, act='relu', bnorm=False, input_ph=None, nb_filters=64,
nb_classes=10, act_params={}, model=None, defences=None, preproc=None, dataset="mnist"):
"""Instantiates a ConvolutionalNeuralNetwork model using Keras sequential model
:param tuple input_shape: shape of the input images
:param bool include_end: whether to include a softmax layer at the end or not
:param str act: type of the intermediate activation functions
:param bool bnorm: whether to apply batch normalization after each layer or not
:param input_ph: The TensorFlow tensor for the input
(needed if returning logits)
("ph" stands for placeholder but it need not actually be a
placeholder)
:param int nb_filters: number of convolutional filters per layer
:param int nb_classes: the number of output classes
:param dict act_params: dict of params for activation layers
:rtype: keras.model object
"""
if model is None:
model = Sequential(name='cnn')
layers = []
if "mnist" in dataset:
layers = mnist_layers(input_shape, nb_filters)
elif "cifar10" in dataset:
layers = cifar10_layers(input_shape, nb_filters)
elif "stl10" in dataset:
raise NotImplementedError("No CNN architecture is defined for dataset '{0}'.".format(dataset))
for layer in layers:
if layer == "activation":
model.add(self.get_activation(act, **act_params))
if bnorm:
model.add(BatchNormalization())
else:
model.add(layer)
model.add(Dense(nb_classes))
if include_end:
model.add(Activation('softmax'))
super(CNN, self).__init__(model, defences, preproc)
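# Minimal usage sketch (illustrative argument values; the parent Classifier class is defined
# in art.classifiers.classifier):
# cnn = CNN(input_shape=(28, 28, 1), act='relu', nb_filters=64, nb_classes=10, dataset='mnist')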
# ---- promoterz/sequence/standard_loop.py | caux/japonicus | MIT ----
#!/bin/python
from deap import tools
from copy import deepcopy
import random
from deap import algorithms
import promoterz
import statistics
from .. import evolutionHooks
def checkPopulation(population, message):
if not (len(population)):
print(message)
def standard_loop(World, locale):
    # --assertions are mostly for debugging purposes; they should not trigger
assert (len(locale.population))
locale.extraStats = {}
# --validate individuals;
locale.population = promoterz.validation.validatePopulation(
World.tools.constructPhenotype, World.TargetParameters, locale.population
)
    # --remove equal citizens before evaluation for efficiency
nonevaluated = [ind for ind in locale.population if not ind.fitness.valid]
Lu = len(nonevaluated)
print("first unevaluated: %i" % len(nonevaluated))
remains = locale.extratools.populationPD(nonevaluated, 1.0)
Lr = len(remains)
print("%i individues removed due to equality" % (Lu - Lr))
locale.population = [
ind for ind in locale.population if ind.fitness.valid
] + remains
# --evaluate individuals;
locale.extraStats['nb_evaluated'], locale.extraStats[
'avgTrades'
] = World.parallel.evaluatePopulation(
locale
)
locale.extraStats['avgExposure'] = sum([I.averageExposure for I in locale.population])/len(locale.population)
    # --send best individual to HallOfFame;
if not locale.EPOCH % 15:
BestSetting = tools.selBest(locale.population, 1)[0]
locale.HallOfFame.insert(BestSetting)
assert (sum([x.fitness.valid for x in locale.population]) == len(locale.population))
# --compile stats;
statistics.compileStats(locale)
# --population ages
qpop = len(locale.population)
locale.population = locale.extratools.populationAges(
locale.population, locale.EvolutionStatistics[locale.EPOCH]
)
wpop = len(locale.population)
locale.extraStats['nbElderDies'] = qpop - wpop
# INDIVIDUE FITNESS ATTRIBUTES FILTERS;
# --remove very inapt citizens
if World.genconf.minimumProfitFilter is not None:
locale.extratools.filterThreshold(World.genconf.minimumProfitFilter,
World.genconf._lambda)
checkPopulation(locale.population, "Population dead after profit filter.")
# --remove individuals below tradecount
if World.genconf.TradeNumberFilterRange is not None:
locale.extratools.filterTrades(World.genconf.TradeNumberFilterRange,
World.genconf._lambda)
checkPopulation(locale.population, "Population dead after trading number filter.")
    # --remove individuals based on average roundtrip exposure time;
if World.genconf.averageExposureLengthFilterRange is not None:
locale.extratools.filterExposure(
World.genconf.averageExposureLengthFilterRange,
World.genconf._lambda
)
checkPopulation(locale.population, "Population dead after roundtrip exposure filter.")
if not locale.population:
locale.population = World.tools.population(World.genconf.POP_SIZE)
print("Repopulating... Aborting epoch.")
# --show stats;
statistics.showStatistics(locale)
# --calculate new population size;
if locale.EPOCH:
PRoFIGA = promoterz.supplement.PRoFIGA.calculatePRoFIGA(
World.genconf.PRoFIGA_beta,
locale.EPOCH,
World.genconf.NBEPOCH,
locale.EvolutionStatistics[locale.EPOCH - 1],
locale.EvolutionStatistics[locale.EPOCH],
)
locale.POP_SIZE += locale.POP_SIZE * PRoFIGA
minps, maxps = World.genconf.POP_SIZE // 2, World.genconf.POP_SIZE * 3
try:
locale.POP_SIZE = int(round(max(min(locale.POP_SIZE, maxps), minps)))
except:
locale.POP_SIZE = 30
M = "POP_SIZE PROFIGA ERROR;"
print(M)
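    # e.g. with World.genconf.POP_SIZE = 30 (an illustrative value) the resized population
    # is clamped to the range [15, 90], i.e. [POP_SIZE // 2, POP_SIZE * 3].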
# --filter best inds;
locale.population[:] = evolutionHooks.selBest(locale.population, locale.POP_SIZE)
checkPopulation(locale.population, "Population dead after selection of score filter.")
assert (None not in locale.population)
# print(EvolutionStatistics)
#FinalBestScores.append(Stats['max'])
    # --select best individuals to procreate
LAMBDA = max(World.genconf._lambda, locale.POP_SIZE - len(locale.population))
TournamentSize = max(2 * LAMBDA, len(locale.population))
offspring = evolutionHooks.Tournament(locale.population, LAMBDA, TournamentSize)
offspring = [deepcopy(x) for x in offspring] # is deepcopy necessary?
# --modify and integrate offspring;
offspring = algorithms.varAnd(
offspring, World.tools, World.genconf.cxpb, World.genconf.mutpb
)
locale.extratools.ageZero(offspring)
locale.population += offspring
    # --NOW DOESN'T MATTER IF SOME INDIVIDUAL LACKS FITNESS VALUES;
assert (None not in locale.population)
# --immigrate individual from HallOfFame;
if random.random() < 0.2:
locale.population = locale.extratools.ImmigrateHoF(locale.population)
    # --immigrate random number of random individuals;
if random.random() < 0.5:
locale.population = locale.extratools.ImmigrateRandom((2, 7), locale.population)
assert (len(locale.population))
assert (None not in locale.population)
# ---- docs/source/conf.py | vforgione/logging2 | MIT ----
#!/usr/bin/env python3
import codecs
import os
import re
import sys
sys.path.insert(0, os.path.abspath("../../logging2"))
# -- General configuration ------------------------------------------------
extensions = [
"sphinx.ext.autodoc",
"sphinx_autodoc_annotation",
]
templates_path = ["_templates"]
source_suffix = ".rst"
master_doc = "index"
project = "logging2"
copyright = "2017, Vince Forgione"
author = "Vince Forgione"
_setuppy = codecs.open(os.path.abspath("../../setup.py"), encoding="utf8").read()
_version = re.search("^VERSION = [\"']([^\"']+)[\"']", _setuppy, re.MULTILINE).group(1)
version = release = _version
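# The regex above extracts the version from setup.py; it would match a line such as
# VERSION = "1.2.3" (illustrative version number).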
language = None
exclude_patterns = []
pygments_style = "sphinx"
todo_include_todos = False
# -- Options for HTML output ----------------------------------------------
html_theme = "alabaster"
# html_theme_options = {}
html_static_path = ["_static"]
# -- Options for HTMLHelp output ------------------------------------------
htmlhelp_basename = "logging2doc"
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {}
latex_documents = [
(master_doc, "logging2.tex", "logging2 Documentation", "Vince Forgione", "manual"),
]
# -- Options for manual page output ---------------------------------------
man_pages = [(master_doc, "logging2", "logging2 Documentation", [author], 1)]
# -- Options for Texinfo output -------------------------------------------
texinfo_documents = [
(
master_doc,
"logging2",
"logging2 Documentation",
author,
"logging2",
"One line description of project.",
"Miscellaneous",
),
]
# ---- tox_add_factor/hooks.py | jayvdb/tox-add-factor | MIT ----
"""Tox hook implementations."""
from __future__ import print_function
import os
import tox
try:
from tox.reporter import warning
except ImportError:
warning = lambda s: None
from .envlist import add_factors, AFTER, BEFORE
@tox.hookimpl
def tox_addoption(parser):
"""Add arguments."""
parser.add_argument("--append-factor", type=str, nargs="+", help="Append a factor.")
parser.add_argument(
"--prepend-factor", type=str, nargs="+", help="Prepend a factor."
)
parser.add_argument(
"--prepend-archraw-factor",
action="store_true",
help="Prepend raw CPU arch arch to factors, such as ia32, armv8_a, aarch64.",
)
parser.add_argument(
"--prepend-cpuarch-factor",
action="store_true",
help="Prepend CPU arch to factors, such as x86_32, x86_64, arm_7, arm_8.",
)
parser.add_argument(
"--prepend-ostype-factor",
action="store_true",
help="Prepend OS type to factors, such as linux, macos, windows.",
)
parser.add_argument(
"--prepend-username-factor",
action="store_true",
help="Prepend username to factors.",
)
parser.add_argument(
"--add-ci-factor",
action="store_true",
help="Add CI factors if environment variable is set, such as appveyor, travis or fallback ci.",
)
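    # Illustrative invocation using the flags defined above (the resulting env names depend
    # on add_factors in envlist.py):
    #   tox --prepend-ostype-factor --add-ci-factor --append-factor local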
@tox.hookimpl(trylast=True)
def tox_configure(config):
"""Check for the presence of the added options."""
if config.option.prepend_archraw_factor:
from cpuinfo.cpuinfo import DataSource # noqa
archraw_factor_name = DataSource.arch_string_raw.replace("-", "_").lower()
if not config.option.prepend_factor:
config.option.prepend_factor = [archraw_factor_name]
else:
config.option.prepend_factor.insert(0, archraw_factor_name)
if config.option.prepend_cpuarch_factor:
from cpuinfo.cpuinfo import _parse_arch, DataSource # noqa
try:
arch, _ = _parse_arch(DataSource.arch_string_raw)
arch = arch.lower()
if not config.option.prepend_factor:
config.option.prepend_factor = [arch]
else:
config.option.prepend_factor.insert(0, arch)
except Exception:
archraw_factor_name = DataSource.arch_string_raw.replace("-", "_").lower()
warning(
'cpuarch not available for archraw "{}"'.format(archraw_factor_name)
)
if config.option.prepend_ostype_factor:
from osinfo.osinfo import _get_os_type # noqa
if not config.option.prepend_factor:
config.option.prepend_factor = [_get_os_type().lower()]
else:
config.option.prepend_factor.insert(0, _get_os_type().lower())
if config.option.add_ci_factor and "CI" in os.environ:
extra_factor = None
if "APPVEYOR" in os.environ or "TRAVIS" in os.environ:
config.option.prepend_username_factor = True
elif "CIRRUS_CI" in os.environ:
extra_factor = "cirrusci"
else:
extra_factor = "ci"
if extra_factor:
if not config.option.append_factor:
config.option.append_factor = [extra_factor]
else:
config.option.append_factor.insert(0, extra_factor)
if config.option.prepend_username_factor:
import getpass # noqa
username = getpass.getuser()
if username:
username = username.lower()
if not config.option.prepend_factor:
config.option.prepend_factor = [username]
else:
config.option.prepend_factor.insert(0, username)
if config.option.prepend_factor:
add_factors(config, config.option.prepend_factor, position=BEFORE)
if config.option.append_factor:
add_factors(config, config.option.append_factor, position=AFTER)
# ---- EAC-code/Commitment.py | kvakil/BulletProofLib | MIT ----
from petlib.ec import EcGroup,EcPt
from petlib.ec import _FFI,_C
from petlib.bn import Bn
from hashlib import sha256
#from memory_profiler import profile
def commitment_key_gen(n, trap=None,nid=714): #713 std, 415/714
    '''Generates a key for a Pedersen-like multicommitment.
    It outputs an EcGroup specified by nid and n+1 points on the curve.
    If trap is set, the discrete logs of all the points are also returned.'''
G = EcGroup(nid)
commitment_key=[]
trapdoor=[]
for i in xrange(n+1):
#priv = G.order().random()
#pub = priv * G.generator()
#commitment_key+=[pub]
#trapdoor+=[priv]
trapdoor+=[G.order().random()]
commitment_key+=[trapdoor[-1]*G.generator()]
if trap!=None:
return (G,commitment_key,tuple(trapdoor))
return (G,commitment_key)
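# Illustrative use: G, ck = commitment_key_gen(5) returns the group and 6 points (n+1);
# commitment_key_gen(5, trap=True) additionally returns the 6 discrete logs as a tuple.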
def mult_prod(G,key,elements):
#G,key=ck
bvec=_FFI.new("EC_POINT * []",len(elements))
for i in xrange(len(elements)): bvec[i]=key[i].pt
evec=_FFI.new("BIGNUM * []",len(elements))
for i in xrange(len(elements)):
try:
evec[i]=elements[i].bn
except AttributeError:
#does this even work properly?
evec[i]=Bn(elements[i]).bn
comm = EcPt(G)
_C.EC_POINTs_mul(G.ecg, comm.pt, _FFI.NULL,len(elements), bvec, evec, _FFI.NULL)
return comm
def mult_prod_str(G,key,elements):#not actually used in commit_str, but could be potentially useful. Be careful that it was potentially causing segmentation fault.
#G,key=ck
bvec=_FFI.new("EC_POINT * []",len(elements))
for i in xrange(len(elements)): bvec[i]=key[i].pt
evec=_FFI.new("BIGNUM * []",len(elements))
for i in xrange(len(elements)): evec[i]=Bn.from_decimal(str(elements[i])).bn
comm = EcPt(G)
_C.EC_POINTs_mul(G.ecg, comm.pt, _FFI.NULL,len(elements), bvec, evec, _FFI.NULL)
return comm
def commit(ck,elements, rand=None):
'''Computes vector commitment to elements using ck
(and optionally using a given randomness).
Outputs a point on the curve and the randomness used (if not given as input)'''
G,key=ck
if len(elements)>=len(key):
raise Exception('Too many elements!Longer key required')
#term=(elements[i]*key[i] for i in xrange(len(elements)))
#term=[elements[i]*key[i] for i in xrange(len(elements))]
#bvec=_FFI.new("EC_POINT * []",len(elements))
#for i in xrange(len(elements)): bvec[i]=key[i].pt
#evec=_FFI.new("BIGNUM * []",len(elements))
#for i in xrange(len(elements)):
# try:
# evec[i]=elements[i].bn
# except AttributeError:
# evec[i]=Bn(elements[i]).bn
#comm = EcPt(G)
#_C.EC_POINTs_mul(G.ecg, comm.pt, _FFI.NULL,len(elements), bvec, evec, _FFI.NULL)
#comm=mult_prod(ck,elements)
#comm=reduce(lambda x, y : x + y,term) #apparently Reduce is more efficient than For loop
if rand==None:
rand=G.order().random()
#print elements
#print elements
#random_point=rand*key[-1]
#comm=comm+random_point
elements=list(elements)+[rand]
comm=mult_prod(G,key[:len(elements)-1]+[key[-1]],elements)
return comm,rand
#random_point=rand*key[-1]
#comm=comm+random_point
#print elements
elements=list(elements)+[rand]
#print elements
comm=mult_prod(G,key[:len(elements)-1]+[key[-1]],elements)
return comm
def commit_str(ck,elements_str, rand=None):
'''Computes vector commitment to elements using ck
(and optionally using a given randomness).
Outputs a point on the curve and the randomness used (if not given as input)'''
G,key=ck
#print 'test', len(key),len(elements)
if len(elements_str)>=len(key):
raise Exception('Too many elements!Longer key required')
#term=(Bn.from_decimal(str(elements[i]))*key[i] for i in xrange(len(elements)))
#bvec=_FFI.new("EC_POINT * []",len(elements))
#for i in xrange(len(elements)): bvec[i]=key[i].pt
#evec=_FFI.new("BIGNUM * []",len(elements))
#for i in xrange(len(elements)): evec[i]=Bn.from_decimal(str(elements[i])).bn
#comm = EcPt(G)
#_C.EC_POINTs_mul(G.ecg, comm.pt, _FFI.NULL,len(elements), bvec, evec, _FFI.NULL)
#comm=reduce(lambda x, y : x + y,term)
#comm=mult_prod_str(ck,elements)
if rand==None:
rand=G.order().random()
#random_point=rand*key[-1]
#comm=comm+random_point
#elements_str=list(elements_str)+[rand]
elements=[Bn.from_decimal(str(int(x))) for x in elements_str]+[Bn.from_decimal(str(rand))]
comm=mult_prod(G,key[:len(elements)-1]+[key[-1]],elements)
return comm,long(rand)
#random_point=Bn.from_decimal(str(rand))*key[-1]
#comm=comm+random_point
#elements_str=list(elements_str)+[rand]
elements=[Bn.from_decimal(str(x)) for x in elements_str]+[Bn.from_decimal(str(rand))]
comm=mult_prod(G,key[:len(elements)-1]+[key[-1]],elements)
return comm
def check_open_commit(ck,comm,elements,rand):
#Verifies that (element,rand) is an opening to comm
#G,key=ck
commitment=commit(ck,elements,rand)
return comm==commitment
def check_open_commit_str(ck,comm,elements,rand):
#Verifies that (element,rand) is an opening to comm
#G,key=ck
commitment=commit_str(ck,elements,rand)
return comm==commitment
def challenge(elements):
"""Packages a challenge in a bijective way"""
elem = [len(elements)] + elements
elem_str = map(str, elem)
elem_len = map(lambda x: "%s||%s" % (len(x) , x), elem_str)
state = "|".join(elem_len)
H = sha256()
H.update(state.encode("utf8"))
return H.digest()
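# Illustrative note (added by the editor, not in the original module): the
# length prefixes are what make the packing unambiguous. For example,
# challenge(["ab", "c"]) hashes the state "1||2|2||ab|1||c" while
# challenge(["a", "bc"]) hashes "1||2|1||a|2||bc", so simple concatenation
# ambiguities cannot produce the same digest.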
def pok_open_comm_prove(public_key,A,opening,rand):
#ZKProof of knowledge of opening (opening,rand) to a commitment A
G,commitment_key=public_key
assert check_open_commit(public_key,A,opening,rand)
p = G.order()
blinder =[p.random() for i in xrange(len(opening))]
B,B_rand=commit(public_key,blinder)
state = ['Opening', G.nid(),list(commitment_key),A, B]#add a optional message
hash_x = challenge(state)
x = Bn.from_binary(hash_x) % p
f = [(blinder[i] - x*opening[i]) % p for i in xrange(len(opening)) ]
z = B_rand - x*rand % p
return (x, f, z)
def pok_open_comm_verify(public_key, A, proof):
#Verifies the ZKproof of knowledge of opening to a commitment A
G,commitment_key=public_key
x,f,z = proof
C=commit(public_key,f,z)+x*A
p = G.order()
state = ['Opening', G.nid(),list(commitment_key),A, C]
hash_x = challenge(state)
y = Bn.from_binary(hash_x) % p
return x == y
| 35.659574 | 163 | 0.641706 | 1,030 | 6,704 | 4.065049 | 0.16699 | 0.081443 | 0.020062 | 0.040124 | 0.638405 | 0.611655 | 0.588727 | 0.558634 | 0.531884 | 0.505135 | 0 | 0.006242 | 0.211366 | 6,704 | 187 | 164 | 35.850267 | 0.785701 | 0.381116 | 0 | 0.387097 | 0 | 0 | 0.03635 | 0 | 0 | 0 | 0 | 0 | 0.010753 | 1 | 0.107527 | false | 0 | 0.043011 | 0 | 0.290323 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
15311f6820c17879300a6f94c1498f7a667b022d | 692 | py | Python | saleor/graphql/core/enums.py | acabezasg/urpi-master | 7c9cd0fbe6d89dad70652482712ca38b21ba6f84 | [
"BSD-3-Clause"
] | 1 | 2019-05-02T17:24:05.000Z | 2019-05-02T17:24:05.000Z | saleor/graphql/core/enums.py | acabezasg/urpi-master | 7c9cd0fbe6d89dad70652482712ca38b21ba6f84 | [
"BSD-3-Clause"
] | 5 | 2021-03-09T16:22:37.000Z | 2022-02-10T19:10:03.000Z | saleor/graphql/core/enums.py | acabezasg/urpi-master | 7c9cd0fbe6d89dad70652482712ca38b21ba6f84 | [
"BSD-3-Clause"
] | 1 | 2020-12-26T10:25:37.000Z | 2020-12-26T10:25:37.000Z | import graphene
from ...core import TaxRateType as CoreTaxRateType
from ...core.permissions import MODELS_PERMISSIONS
from ...core.weight import WeightUnits
from .utils import str_to_enum
class ReportingPeriod(graphene.Enum):
TODAY = 'TODAY'
THIS_MONTH = 'THIS_MONTH'
TaxRateType = graphene.Enum(
'TaxRateType',
[(str_to_enum(rate[0]), rate[0]) for rate in CoreTaxRateType.CHOICES])
PermissionEnum = graphene.Enum(
'PermissionEnum', [
(str_to_enum(codename.split('.')[1]), codename)
for codename in MODELS_PERMISSIONS])
WeightUnitsEnum = graphene.Enum(
'WeightUnitsEnum',
[(str_to_enum(unit[0]), unit[0]) for unit in WeightUnits.CHOICES])
| 24.714286 | 74 | 0.719653 | 83 | 692 | 5.855422 | 0.373494 | 0.041152 | 0.074074 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008621 | 0.16185 | 692 | 27 | 75 | 25.62963 | 0.82931 | 0 | 0 | 0 | 0 | 0 | 0.080925 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.277778 | 0 | 0.444444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1535835675e5bdbdd9ebedbf81f869ade75e556f | 2,152 | py | Python | src/mutation_waterfall/plot.py | dohlee/python-mutation-waterfall | 642f13884986df6df82419edbbb0cb2ff396adc7 | [
"MIT"
] | 1 | 2019-05-02T01:59:35.000Z | 2019-05-02T01:59:35.000Z | src/mutation_waterfall/plot.py | dohlee/python-mutation-waterfall | 642f13884986df6df82419edbbb0cb2ff396adc7 | [
"MIT"
] | null | null | null | src/mutation_waterfall/plot.py | dohlee/python-mutation-waterfall | 642f13884986df6df82419edbbb0cb2ff396adc7 | [
"MIT"
] | 3 | 2020-08-19T19:46:28.000Z | 2021-10-17T21:03:24.000Z | import matplotlib
matplotlib.use('agg')
import matplotlib.pyplot as plt
import numpy as np
import mutation_waterfall.preprocess as preprocess
def plot(mutation_list_file, n_genes=30, ax=None, file=None):
"""Generates a waterfall plot describing mutational landscape of samples.
Args:
mutation_list_file: Path to mutation list.
n_genes: Number of genes to be plotted (default: 30).
ax: Matplotlib axis to draw the plot.
file: If not None, resulting plot will be saved as an image file.
Returns:
ax: Axis containing the plot.
"""
binary_matrix, genes, samples = preprocess.make_binary_matrix(mutation_list_file)
if ax is None:
fig = plt.figure()
ax = fig.add_subplot(111)
plt.sca(ax)
waterfall(binary_matrix, genes, n_genes, ax)
plt.tight_layout()
if file:
plt.savefig(file, dpi=150)
else:
plt.show()
def waterfall(binary_matrix, genes, num_gene, ax):
"""Sort binary matrix and plot it.
Args:
binary_matrix: Binary matrix containing mutation status.
genes: List of genes.
num_gene: Number of genes to be plotted.
ax: Matplotlib axis to draw the plot.
Returns:
ax: Axis containing the plot.
"""
row_order = binary_matrix.sum(axis=1).argsort()[::-1]
temp = binary_matrix[row_order]
column_order = np.array([''.join([str(x) for j, x in enumerate(temp[:, i])]) for i in range(temp.shape[1])]).argsort()[::-1]
temp = temp[:, column_order]
# Y-axis tick labels
ax.set_yticks(np.arange(num_gene))
percentages = binary_matrix.sum(axis=1) / binary_matrix.shape[1] * 100
yticklabels = ['$%s$ (%.1f%%)' % (genes[ix], percentages[ix]) for ix in row_order[:num_gene]]
plt.yticks(np.arange(num_gene), yticklabels)
ax.set_xticks(np.arange(-.5, temp.shape[1], 1), minor=True)
ax.set_yticks(np.arange(-.5, num_gene, 1), minor=True)
ax.grid(which='minor', color='grey', linestyle='-', alpha=0.33, linewidth=1,)
plt.xticks([])
ax.imshow(temp[:num_gene, :], interpolation='none', aspect='auto', cmap=plt.cm.gray_r)
return ax
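# Example usage (editor's sketch; the input path is hypothetical and must be
# in whatever format mutation_waterfall.preprocess.make_binary_matrix expects):
#
#   from mutation_waterfall.plot import plot
#   plot("mutation_list.tsv", n_genes=20, file="waterfall.png")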
| 31.647059 | 128 | 0.657528 | 315 | 2,152 | 4.371429 | 0.387302 | 0.095861 | 0.034858 | 0.021786 | 0.207698 | 0.130719 | 0.087146 | 0.045025 | 0 | 0 | 0 | 0.017119 | 0.212825 | 2,152 | 67 | 129 | 32.119403 | 0.79575 | 0.286245 | 0 | 0 | 0 | 0 | 0.023304 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.125 | 0 | 0.21875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1536ddab0c3e440ac09a7120c95386059693e88a | 1,730 | py | Python | leetcode/381.Insert-Delete-GetRandom-Duplicates-allowed.py | sogapalag/problems | 0ea7d65448e1177f8b3f81124a82d187980d659c | [
"MIT"
] | 1 | 2020-04-04T14:56:12.000Z | 2020-04-04T14:56:12.000Z | leetcode/381.Insert-Delete-GetRandom-Duplicates-allowed.py | sogapalag/problems | 0ea7d65448e1177f8b3f81124a82d187980d659c | [
"MIT"
] | null | null | null | leetcode/381.Insert-Delete-GetRandom-Duplicates-allowed.py | sogapalag/problems | 0ea7d65448e1177f8b3f81124a82d187980d659c | [
"MIT"
] | null | null | null | import random
from collections import defaultdict
class RandomizedCollection(object):
def __init__(self):
"""
Initialize your data structure here.
"""
self.collection = defaultdict(set)
self.element = []
def insert(self, val):
"""
Inserts a value to the collection. Returns true if the collection did not already contain the specified element.
:type val: int
:rtype: bool
"""
ok = not self.collection[val]
n = len(self.element)
self.element += val,
self.collection[val].add(n)
return ok
def remove(self, val):
"""
Removes a value from the collection. Returns true if the collection contained the specified element.
:type val: int
:rtype: bool
"""
if not self.collection[val]:
return False
index = self.collection[val].pop()
if index != len(self.element) - 1:
self.element[index] = self.element[-1]
self.collection[self.element[-1]].add(index)
self.collection[self.element[-1]].remove(len(self.element) - 1)
self.element.pop()
return True
def getRandom(self):
"""
Get a random element from the collection.
:rtype: int
"""
if len(self.element) == 0:
return -1
# OJ is Python 2; use random.randint.
return self.element[random.randint(0, len(self.element)-1)]
# Your RandomizedCollection object will be instantiated and called as such:
# obj = RandomizedCollection()
# param_1 = obj.insert(val)
# param_2 = obj.remove(val)
# param_3 = obj.getRandom()
| 30.350877 | 121 | 0.571676 | 198 | 1,730 | 4.959596 | 0.343434 | 0.145621 | 0.07332 | 0.045825 | 0.262729 | 0.209776 | 0.156823 | 0.077393 | 0 | 0 | 0 | 0.01113 | 0.324855 | 1,730 | 56 | 122 | 30.892857 | 0.829623 | 0.327168 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.153846 | false | 0 | 0.076923 | 0 | 0.461538 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
15370089ce5fcb552e4d40b96066b973e9871c2f | 553 | py | Python | Python-code-snippets-201-300/287-pywebview-display-text-and-image.py | abartoha/python-snippets-ref | 04e4feada96077f0e849b277204c012194e8fbcd | [
"Unlicense"
] | null | null | null | Python-code-snippets-201-300/287-pywebview-display-text-and-image.py | abartoha/python-snippets-ref | 04e4feada96077f0e849b277204c012194e8fbcd | [
"Unlicense"
] | null | null | null | Python-code-snippets-201-300/287-pywebview-display-text-and-image.py | abartoha/python-snippets-ref | 04e4feada96077f0e849b277204c012194e8fbcd | [
"Unlicense"
] | null | null | null | """Code snippets vol-58
287-pywebview display text and image.
Requires:
pip3 install pywebview
display_txt_img.html
and model.jpg both in cwd.
Origin:
https://github.com/r0x0r/pywebview/tree/master/examples
"""
import webview
if __name__ == '__main__':
master_window = webview.create_window('Pywebview-text and image example',
url='display_txt_img.html',
width=665, height=575,
confirm_close=True,)
webview.start()
| 25.136364 | 77 | 0.59132 | 62 | 553 | 5.032258 | 0.741935 | 0.102564 | 0.076923 | 0.108974 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.037135 | 0.318264 | 553 | 21 | 78 | 26.333333 | 0.790451 | 0.377939 | 0 | 0 | 0 | 0 | 0.178042 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
15385c2c32a27dc2532eed862036074f470c9bb8 | 6,861 | py | Python | nombrar_numero.py | roberpot/python-nombrar-numero | 135df1822acd0480f7bbe3602d85c1615c509c71 | [
"Apache-2.0"
] | null | null | null | nombrar_numero.py | roberpot/python-nombrar-numero | 135df1822acd0480f7bbe3602d85c1615c509c71 | [
"Apache-2.0"
] | null | null | null | nombrar_numero.py | roberpot/python-nombrar-numero | 135df1822acd0480f7bbe3602d85c1615c509c71 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
# Copyright (C) 2016 Roberto García Carvajal
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Module for spelling out non-negative integers (in Spanish).
Usage: nombrar_numero(non_negative_integer).
"""
__all__ = ['nombrar_numero']
__author__ = u'Roberto García Carvajal'
def _desempaquetar_segmento(x):
"""
Extracts the hundreds, tens and units from the string, separately.
Keep in mind that the string may have a length of less than 3.
:param x: str of at most 3 characters.
:return tuple: tuple of 3 integers representing hundreds, tens and
units.
"""
index = 0
c = 0
d = 0
u = 0
l = len(x)
if l > 3:
raise ValueError(u"El segmento debe ser como mucho de longitud 3.")
if l > 2:
c = int(x[index])
index += 1
if l > 1:
d = int(x[index])
index += 1
if l > 0:
u = int(x[index])
return c, d, u
def _nombrar_segmento(x, unidad_larga=False):
"""
Spells out a given segment. Does not include punctuation.
:param x: str of at most 3 numeric characters.
:param unidad_larga: bool. Indicates whether the unit is written as "uno" or
"un".
:return str with the transcription of the number.
"""
c, d, u = _desempaquetar_segmento(x)
# Map for the hundreds. Careful with '1', which becomes 'cien' if tens and
# units are 0.
c_dict = {
0: u"",
1: ((d + u) > 0 and u"ciento" or u"cien"),
2: u"doscientos",
3: u"trescientos",
4: u"cuatrocientos",
5: u"quinientos",
6: u"seiscientos",
7: u"setecientos",
8: u"ochocientos",
9: u"novecientos",
}
# Map for the tens, taking care of the case where the units are 0.
d_dict = {
0: u"",
1: (u and u"dieci" or u"diez"),
2: (u and u"veinti" or u"veinte"),
3: (u and u"treinta y " or u"treinta"),
4: (u and u"cuarenta y " or u"cuarenta"),
5: (u and u"cincuenta y " or u"cincuenta"),
6: (u and u"sesenta y " or u"sesenta"),
7: (u and u"setenta y " or u"setenta"),
8: (u and u"ochenta y " or u"ochenta"),
9: (u and u"noventa y " or u"noventa"),
}
# Map for the units, taking unidad_larga into account.
# Also, if the tens digit is 2, some numbers carry accents.
u_dict = {
0: u"",
1: (unidad_larga and u"uno") or (d == 2 and u"ún") or u"un",
2: (d == 2 and u"dós") or u"dos",
3: (d == 2 and u"trés") or u"tres",
4: u"cuatro",
5: u"cinco",
6: (d in (1, 2) and u"séis") or u"seis",
7: u"siete",
8: u"ocho",
9: u"nueve",
}
c_res = c_dict[c]
d_u_res = d_dict[d] + u_dict[u]
# Special case for the numbers between 11 and 15.
if d == 1 and 0 < u < 6:
d_u_res = {
11: u"once",
12: u"doce",
13: u"trece",
14: u"catorce",
15: u"quince",
}[10 + u]
# Only include a separator if both parts of the segment have values.
separator = u""
if c_res and d_u_res:
separator = u" "
return c_res + separator + d_u_res
def nombrar_numero(x):
"""
Converts a number to its written-out form. Only accepts non-negative
integers.
:param x: int, non-negative integer to convert to written form.
:return unicode: returns the number in alphabetic form.
"""
# Type check.
if not isinstance(x, int):
raise ValueError(u"Tipo incorrecto. Se esperaba int, encontrado %s" %
x.__class__.__name__)
# Sign check.
if x < 0:
raise ValueError(u"Se esperaba un entero no negativo.")
if x == 0:
return u"cero"
# Now split the number into groups of 3, starting from the
# right, and keep track of how many segments the number has.
# The segments are stored in a list of strings in reverse order.
# Examples:
# 1 -> ["1"]
# 4234 -> ["234", "4"]
# 10001 -> ["001", "10"]
xx = u"%s" % x
xx = xx[::-1]
l = len(xx) / 3 + {False: 0, True: 1}[(len(xx) % 3) > 0]
vx = []
for i in range(0, l):
vx.append(xx[3 * i:3 * (i + 1)][::-1])
# vx = vx[::-1]
resultado = u""
mapa_sufijos_singular = {
0: u"",
1: u"mil",
2: u"millón",
3: u"mil",
4: u"billón",
5: u"mil",
6: u"trillón",
}
mapa_sufijos_plural = {
0: u"",
1: u"mil",
2: u"millones",
3: u"mil",
4: u"billones",
5: u"mil",
6: u"trillones",
}
# Walk through the segments. Remember that the number is spelled out in
# groups of three from the right, adding the scale-word suffix where it
# applies.
for index, v in enumerate(vx):
resultado_segmento = _nombrar_segmento(v, unidad_larga=(index == 0))
# If the segment is a thousands group and the result has a value, or the
# segment is a millions group, the suffix is added. Examples:
# - For 1000001, segments '1', '000', '001'. If we are at segment
# '000', we should not emit 'mil'.
# - For 1000020001, segments '1', '000', '020', '001'. If we are at
# segment '000', we should emit "millones".
if (resultado_segmento or (index % 2) == 0) and index > 0:
resultado_segmento += u" "
# Distinguish between singular and plural.
if resultado_segmento == u"un ":
# If the result is a 1 and we are naming thousands, leave it
# out. That is: for 1000 we do not say 'un mil', just 'mil'.
if (index % 2) == 1:
resultado_segmento = mapa_sufijos_singular[index]
else:
resultado_segmento += mapa_sufijos_singular[index]
else:
resultado_segmento += mapa_sufijos_plural[index]
if resultado_segmento:
resultado = u" " + resultado_segmento + resultado
# There are probably leading spaces left (at least 1).
# Remove them.
return resultado.lstrip()
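# Illustrative examples (added by the editor, not part of the original module):
#   nombrar_numero(0)    -> u"cero"
#   nombrar_numero(1234) -> u"mil doscientos treinta y cuatro"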
| 34.134328 | 80 | 0.56741 | 984 | 6,861 | 3.885163 | 0.319106 | 0.015694 | 0.011771 | 0.005493 | 0.082658 | 0.075334 | 0.075334 | 0.047083 | 0.030866 | 0.030866 | 0 | 0.039649 | 0.319924 | 6,861 | 200 | 81 | 34.305 | 0.779683 | 0.417869 | 0 | 0.144068 | 0 | 0 | 0.14852 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.025424 | false | 0 | 0 | 0 | 0.059322 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
15388b76c9881d845bcff748e013ef5c592637d8 | 141,183 | py | Python | openzgy/test/black.py | equinor/pyzgy | 94cd3d9050c3027d042a83b98779da9182041137 | [
"Apache-2.0"
] | null | null | null | openzgy/test/black.py | equinor/pyzgy | 94cd3d9050c3027d042a83b98779da9182041137 | [
"Apache-2.0"
] | null | null | null | openzgy/test/black.py | equinor/pyzgy | 94cd3d9050c3027d042a83b98779da9182041137 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python3
"""
Black-box end to end tests for OpenZGY.
If the old ZGY-Public Python module is available, several of the
tests will be run both on the old and the new implementation to
verify that they match. This is a bit messy, because there are some
known bugs in the old code and some deliberate changes in the new.
If the Seismic Store plug-in and/or the ZFP compression plug-in
are available then this functionality is tested as well.
* Tested in checkReadingDeadArea(), checkContents, etc.
On read, bricks written as explicit all zero and bricks that were
never written should be treated identically. This is not quite the
case with the legacy reader: A read request covering both existing
and non-existing bricks works as described. A read request where
all the corresponding bricks are missing will return an error. This
is unfortunate as it makes the caller more aware of the file's
layout. The test should try reading a small rectangle fully inside
a partially written brick but outside the written area, and one
fully inside a never-written brick, and one that overlaps both
types of brick.
* Tested in checkContents() and checkRawContents().
On read of integral data, requesting data as float should give the
same result as requesting the data as storage values and doing the
conversion afterwards. Make sure this holds both for regular data,
data from all-constant bricks, and values in missing bricks. In
particular, make sure that "absent" data doesn't get returned as
storage-zero when reading as integer and converted-zero when
reading as float. To test for this, make sure the coding range is
not symmetrical. I.e. storage zero must not map to converted zero.
* Tested in checkContents() and checkRawContents().
When doing a partial write of a brick that did not exist, the
missing values should be "zero after conversion to float", or as
close to zero as possible. Make sure they are not garbage and not
"zero before conversion to float" instead. See
Accessor::WriteDataExT
* Tested in checkStatistics() and checkHistogram().
Statistics and histogram information stored in the ZGY file should
have the same values as if the entire survey was read and statistics
and histogram was computed from those values. In other words,
statistics should count all samples inside the survey boundary
regardless of whether they come from regular, all-constant, or
never written bricks. Samples from the padding area outside the
survey boundary must not be counted.
This is trivial if statistics and histogram is computed in a
separate pass. Less so if the information is collected during write.
NOTE: The old accessor will not count never-written bricks.
* Tested in checkStatistics() and checkHistogram().
The above rule holds true even when the coding range is not zero centric
(cannot precisely represent zero after conversion) or does not contain
zero so zero cannot be represented at all. In these cases, even
never-written bricks will affect "sum of all samples" et cetera.
To test this, two additional copies of the test data is needed
with "non zero centric" coding and "only positive" coding.
NOTE: The statistics will be pretty useless in this case, so we might
not really care either way.
NOTE: The old accessor will not count never-written bricks.
* Tested in checkStatistics() and checkHistogram().
Statistics and histogram should handle overwritten data correctly.
This is trivial if statistics and histogram is computed in a
separate pass. Less so if the information is collected during write.
In the test data, the intersection of "A" and "B" is overwritten.
* The histogram range should be wide enough for all samples to fit.
It is allowed to be wider. Specifically, for an 8-bit file the only
thing that makes sense is to have a 1:1 correspondence between
storage values and histogram bins. So, histogram range equals
coding range. For a 16-bit file which makes use of most of the
available storage values (which is a reasonable assumption) one
could also set histogram range equals coding range, assigning 256
storage values to each histogram bin. Not explicitly tested yet.
* If alpha information is written, and this is done before writing
the bricks, then histogram and statistics should only include actually
live traces. This test is N/A if we completely deprecate alpha support.
* If alpha information is written to change a trace to dead after
bulk has been written for that trace, the effect on the statistics
is unspecified. Technically it would make sense to immediately
correct the statistics once the alpha changes. This is not
implemented even in the old accessor. And probably never will be.
This is N/A for testing in any case since the result is unspecified.
* Just in case we are forced to keep the old behavior that treats all-zero
bricks slightly differently from never-written bricks, it is recommended
that applications that don't need this odd behavior explicitly fill
all newly created files with zeros before writing real data.
This is N/A for testing.
* Tested in testFancyReadConstant().
Applications that do want to distinguish between never-written,
all-constant, and regular bricks should only do so for performance
reasons. A separate api function will be provided to query the
brick status. This new api function obviously needs to be tested.
* Not tested. Mostly of historical interest.
The curorig and cursize members are deprecated but we might add a
unit test just to document the current behavior. They were supposed
to give the bounding box of data actually written to the file. Or
possibly the bounding box including padding to the nearest brick
boundary. Or maybe somebody just gave up and set them equal to the
survey. I suspect the origin will always be included, see
OnBrickWritten. I also suspect that DataInfo::SetExtent is always
called with setcur=true which means the range will always match the
full size.
"""
#print('Running' if __name__ == '__main__' else 'Importing', __file__)
import numpy as np
import os
import sys
import io
import math
import json
import base64
import time
from contextlib import suppress, ExitStack, contextmanager
from enum import Enum
from collections import namedtuple
try:
from .. import zgypublic as oldzgy
print("Also testing the old ZGY-Public API.")
except Exception as ex:
print("Old ZGY-Public is not available:", ex)
class FakeAPI:
zgy = None
ZgyReader = object()
ZgyWriter = object()
oldzgy = FakeAPI()
from .. import api as newzgy
from ..api import SampleDataType, UnitDimension, ProgressWithDots, ZgyCompressFactory, ZgyKnownCompressors, ZgyKnownDecompressors
from ..impl.lodalgo import DecimationType # TODO-Low encapsulation?
from ..test.utils import SDCredentials, TempFileAutoDelete, LocalFileAutoDelete, CloudFileAutoDelete, HasSeismicStore, HasZFPCompression, SDTestData, SDTestSink
from ..impl.enum import UpdateMode
from ..exception import *
def HasOldZgy():
return oldzgy.zgy is not None
def showZgy(*args):
msg = ""
for a in args:
if a is None: pass
elif a is newzgy.ZgyReader: msg += " and new reader"
elif a is newzgy.ZgyWriter: msg += " and new writer"
elif a is oldzgy.ZgyReader: msg += " and old reader"
elif a is oldzgy.ZgyWriter: msg += " and old writer"
else: msg += " and " + a.__module__ + "." + a.__name__
return msg[5:] if msg else ""
# ----- Called by test code; not runnable by themselves. ----- #
@contextmanager
def TimeMe(name):
#start = time.perf_counter()
yield None
#elapsed = time.perf_counter() - start
#print("TIMED: %-20.20s %7.3f" % (name+":", elapsed), flush=True)
class TraceCallsToSD:
"""
Suitable for use as a _debug_trace callback.
"""
_entry = namedtuple("io", "what nbytes padded parts")
def __init__(self, *, verbose = False):
self.calls = []
self._verbose = verbose
def __call__(self, what, nbytes, padded, parts):
self.calls.append(self._entry(what, nbytes, padded, parts))
if self._verbose:
print(" {0:9s} size {1:10s} padded {2:10s} parts {3:1d}".format(
what, self._pretty(nbytes), self._pretty(padded), parts))
@staticmethod
def _pretty(n):
if (n < 1024) or (n % (1024) != 0):
return "{0:4d} bytes".format(n)
elif (n < 1024*1024) or (n % (1024*1024) != 0):
return "{0:7d} KB".format(n//1024)
else:
return "{0:7d} MB".format(n//(1024*1024))
def reset(self):
self.calls = []
class MustThrow:
"""
Check that we get the expected exception.
"""
def __init__(self, message = None, extypes = None):
self._extypes = extypes
self._message = message
if isinstance(extypes, type) and issubclass(extypes, Exception):
self._extypes = (extypes,)
self._exnames = tuple([e.__name__ for e in self._extypes]) if self._extypes else "Exception"
def __enter__(self):
return self
def __exit__(self, type, value, traceback):
if type is None:
problem = 'Expected {0}, got no exception'.format(self._exnames)
elif self._extypes and type not in self._extypes:
problem = 'Expected {0} got {1} "{2}"'.format(self._exnames, type.__name__, str(value))
elif self._message and str(value).find(self._message) < 0:
problem = 'Expected "{0}" got "{1}"'.format(self._message, str(value))
else:
problem = None
#print('Ok: Expected {0} "{1}" got {2} "{3}"'.format(self._exnames, self._message or "", type.__name__, str(value)))
if problem:
raise AssertionError(problem) from None
return True # suppress the exception.
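# Typical usage of MustThrow (editor's note, mirroring calls made further
# down in this file); both the message substring and the exception type(s)
# are optional:
#
#   with MustThrow("outside the valid range"):
#       reader.read((0, 0, 10000), tmp)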
def pretty(n):
"""
Format a number, assumed to be a size in bytes, as a human readable string.
"""
if type(n) != type(42):
return str(n)
if n >= (1024*1024*1024) and (n % (1024*1024*1024)) == 0:
return str(n//(1024*1024*1024)) + " GB"
if n >= (1024*1024) and (n % (1024*1024)) == 0:
return str(n//(1024*1024)) + " MB"
if n >= (512*1024) and (n % (256*1024)) == 0:
return str(n//(256*1024)) + "*256 KB"
if n >= (1024) and (n % (1024)) == 0:
return str(n//(1024)) + " KB"
return str(n) + " bytes"
def savePNG(data, outfile):
from PIL import Image
def normalize(a):
a = a.astype(np.float32)
dead = np.isnan(a)
amin, amax = (np.nanmin(a), np.nanmax(a))
a[dead] = amin
if amin == amax:
a *= 0
else:
a = (a - amin) / (amax - amin)
a = (a * 255).astype(np.uint8)
return a, dead
data = np.squeeze(data)
data = np.transpose(data)
data = np.flip(data, 1)
data, dead = normalize(data)
tmp = np.zeros((data.shape[0], data.shape[1], 3), dtype=np.uint8)
r = tmp[...,0]
g = tmp[...,1]
b = tmp[...,2]
r += data
g += data
b += data
r[dead] = 255
g[dead] = 255
b[dead] = 0
im = Image.fromarray(tmp, mode="RGB")
im.save(outfile, format="PNG")
def isMutable(obj, *, verbose = False, seen = set()):
"""
Recursive check for whether an object is mutable.
The idea was to check that all members of e.g. ZgyReader are
immutable so the user cannot (a) shoot himself in the foot by
directly modifying a data member, or (b) even worse, change
some cached value by modifying a mutable member of a container.
Unfortunately this was a lot harder than I thought.
- A callable might or might not be const. Need to check the code.
- A property and a data member look rather similar.
- A readonly property may have a stub __set__ that will throw.
- A __setattr__, if present, can make any attribute mutable.
- Python has no frozendict (yet) unless I want to add a
rather pointless dependency, so I copy dicts before
returning them. This is safe, but the code here cannot know.
I might make my own dict-like wrapper but this is getting
way too complicated.
Looks like I just have to rely on dump() followed by eyeballing
the source code.
"""
# Known types
if isinstance(obj, (type(None), type, str, int, bool, float, tuple, bytes, Enum, np.dtype)):
if verbose: print("Immutable type", type(obj).__name__)
return False
elif isinstance(obj, (list, set, dict, bytearray, np.ndarray)):
if verbose: print("MUTABLE type", type(obj).__name__)
return True
elif callable(obj):
if verbose: print("CALLABLE type", type(obj).__name__)
return False
# Recursive checks
if id(obj) in seen:
if verbose: print("skipping cycle of", type(obj).__name__)
return False
print("Adding", id(obj), "to seen")
seen |= set((id(obj),))
if isinstance(obj, dict):
obj = obj.items()
if isinstance(obj, tuple):
if verbose: print("recursively checking", type(obj).__name__)
return any([isMutable(e, verbose=verbose, seen=seen) for e in obj])
if verbose: print("unknown type, assuming mutable", type(obj).__name__)
return True
def hasMutableMembers(obj, *, safe = set(), verbose = False):
"""
Try to detect whether obj (which is some kind of instance variable)
has any plain data members or any properties that contain data that
in turn looks like it is mutable. Note that this turned out to be
a lot harder than I first thought. The tests are by no means complete.
"""
if obj is not None:
for x in sorted(dir(obj)):
if x[0] != '_' and not x in safe:
is_prop = isinstance(getattr(type(obj), x, None), property)
is_call = callable(getattr(obj, x))
if not is_prop and not is_call:
if verbose: print(type(obj).__name__ + "." + x,
"looks like a DATA member")
return True
if isMutable(getattr(obj, x), verbose=False, seen=set()):
if verbose: print(type(obj).__name__ + "." + x,
"is of a MUTABLE type")
return True
return False
def dump(message, obj, verbose = False):
if message: print(message)
class Dummy:
"""(no doc)"""
for x in sorted(dir(obj)):
if x[0] != '_':
value = getattr(obj, x)
if isinstance(getattr(type(obj), x, None), property):
vt = "prop "
elif callable(value):
vt = "call "
else:
vt = "DATA "
if isMutable(value, seen=set()):
vt = "MUTABLE " + vt
if verbose:
doc = '\n' + str(getattr(obj.__class__, x, Dummy).__doc__)
doc = doc.replace('\n', '\n\t\t')
print('\t' + vt + x, "=", value, doc)
else:
if not callable(value):
print('\t' + vt + x, "=", value)
else:
print('\t' + vt + x + "()")
def createFancyBuffer(defaultvalue, unwrittenvalue):
"""
Create test data as described elsewhere. This version saves the
data in an in-memory numpy array making the code quite trivial.
There is no point in writing the data in multiple operations
because we aren't testing numpy.
The caller needs to specify the default value that will be
assigned to samples that were never written. Separate defaults
may be given for unwritten samples inside a brick vs. bricks
never written to at all. If these two differ this is arguably
a bug in the implementation.
"""
data = np.full((112, 64, 176), defaultvalue, dtype=np.float32)
data[16:16+40, 16:16+41, 16:16+42] = 31
data[48:48+72, 20:20+10, 24:24+16] = 97
data[:,:,128:176] = unwrittenvalue
return data
def createFancyFile(filename, datatype, datarange, zgyWriterFactory, *, single_write = False, kwargs = dict()):
"""
The layout of this test data is described in detail in doc/testdata.png
The figure also explains how to compute the expected statistics by hand.
As for computing the expected sample values, this is done by
createFancyBuffer().
* Create a ZGY file with size (112, 64, 176) which gives it a bricksize
of 2x1x3. Other parameters vary.
* Write an oddly sized rectangle "A" inside the first brick.
* Write an oddly sized rectangle "B" covering two cubes and partly
intersecting the first write, and also runs slightly into the
padding area.
* Write an all-zero region "C" that completely covers one brick and
also covers a second brick completely apart from padding area
outside the survey.
Additional arguments such as "snr" can be passed as kwargs={"snr": 99}.
Note that I have not declared the parameter as **kwargs, so the dict
must be created by hand; this makes it more explicit what the extras are.
Accounting for existing bugs:
Several of the tests have arguments (defaultvalue,unwrittenvalue,countdead).
- defaultvalue should be the value closest to 0 that can be represented.
- unwrittenvalue ought to have been the same as defaultvalue, but with the
old reader it might be 0 for float access and 0 converted to float for
raw.
- countdead should be True meaning unwritten samples are included in the
statistics and the histogram, but if the file was created by the old
writer then it needs to be set False.
Future: If implementing alpha support (currently not the case) we will
also need a file with alpha tiles set to the horizontal extent of the
actual stored data. In this data set there will still be unwritten
data at the tail end of each trace. Production code rarely does
this though; the assumption is that all traces have the same length
and that traces are written fully or not at all.
Note that currently, neither the old ZGY-Public nor the new OpenZGY
API can write alpha tiles. Only ZGY-Internal can do that. That API
does not have any Python wrapper.
"""
with zgyWriterFactory(filename,
iocontext = SDCredentials(),
size = (112, 64, 176),
datatype = datatype,
datarange = datarange,
zunitdim = UnitDimension.time,
zunitname = "ms",
zunitfactor = 0.001,
hunitdim = UnitDimension.length,
hunitname = "ft",
hunitfactor = 0.3048,
zstart = 2500,
zinc = 4.125,
annotstart = (1234, 5678),
annotinc = (5, 2),
corners = ((1000, 1000),
(3775, 1000),
(1000, 2890),
(3775, 2890)),
**kwargs
) as writer:
expect_datarange_1 = datarange
if datatype == SampleDataType.float and zgyWriterFactory != oldzgy.ZgyWriter:
# The value is unspecified. It could be NaN if the file was never
# flushed, or (0,0) if it was flushed before writing anything.
# Or it could be the (likely not calculated yet) statistical
# range if the code in api.ZgyMeta.datarange chooses to return
# the statistical range instead.
expect_datarange_1 = (0, 0)
#dump(filename, writer)
checkmeta(writer, datatype, expect_datarange_1)
if single_write:
# Read/modify/write is not allowed when writing compressed data,
# or at least not recommended since noise will accumulate.
writer.write((0, 0, 0), createFancyBuffer(0, 0))
else:
writer.write((16,16,16), np.full((40,41,42), 31, dtype=np.float32))
writer.write((48,20,24), np.full((72,10,16), 97, dtype=np.float32))
writer.write((0,0,64), np.full((112,64,64), 0, dtype=np.float32))
# Statistics haven't been computed yet, so datarange for float cubes
# should still be returned as empty.
checkmeta(writer, datatype, expect_datarange_1)
with newzgy.ZgyReader(filename, iocontext = SDCredentials()) as reader:
expect_datarange_2 = datarange
if datatype == SampleDataType.float:
if True or zgyWriterFactory != oldzgy.ZgyWriter:
# The value has been explicitly set to the statistical range
# if written by the new writer. If api.ZgyMeta.datarange chooses
# to return the statistical range instead, then this happens
# also for files written by the old accessor. The second
# conditional should be disabled in that case.
expect_datarange_2 = (reader.statistics.min, reader.statistics.max)
checkmeta(reader, datatype, expect_datarange_2)
def checkmeta(meta, datatype = None, datarange = None):
"""
Verify round trip of metadata. This can be used both by a writer
(ensure the data we set is still available as properties) and a
reader (ensure the roundtrip to a stored file and back worked).
"""
assert(meta.size == (112, 64, 176))
assert(datatype is None or meta.datatype == datatype)
assert(datarange is None or meta.datarange == datarange)
assert(meta.raw_datarange == meta.datarange)
assert(meta.zunitdim == UnitDimension.time)
assert(meta.zunitname == "ms")
assert(abs(meta.zunitfactor - 0.001) < 1.0e-5)
assert(meta.hunitdim == UnitDimension.length)
assert(meta.hunitname == "ft")
assert(abs(meta.hunitfactor - 0.3048) < 0.0001)
assert(meta.zstart == 2500)
assert(abs(meta.zinc - 4.125) < 0.0001)
assert(meta.annotstart == (1234, 5678))
assert(meta.annotinc == (5, 2))
assert np.sum(np.abs(np.array(meta.corners) -
np.array(((1000, 1000),
(3775, 1000),
(1000, 2890),
(3775, 2890))))) < 0.0001
def explaincontents(expect, actual, delta):
"""
Detailed checking of a small part of the standard test cube.
A single trace that covers many special cases. Show an explanation
of what is being tested as well as expected vs. actual results.
See doc/testdata.png. This method is meant to be used to understand
why a particular test has failed.
"""
table = [( 0, 16, "default(r/m/w)"),
( 16, 24, "written once "),
( 24, 40, "written twice "),
( 40, 58, "written once "),
( 58, 64, "default(r/m/w)"),
( 64, 128, "constant-zero "),
(128, 176, "default(empty)")]
print("Displaying the trace at [50,22,:]")
for beg, end, text in table:
ex = expect[50,22,beg:end]
ac = actual[50,22,beg:end]
if np.amin(ex) == np.amax(ex) and np.amin(ac) == np.amax(ac):
print(" ", text, "expect", ex[0], "actual", ac[1])
else:
print(" ", text, "expect", ex, "actual", ac)
print(" largest error in entire cube:", delta)
def checkContents(filename, zgyReaderFactory, defaultvalue, unwrittenvalue, *, maxdelta = 0.001):
"""
Read back the entire survey from one of the files created by
createFancyFile() and compare with the expected results.
Also check the metadata.
"""
if zgyReaderFactory == oldzgy.ZgyReader and not HasOldZgy(): return
expect = createFancyBuffer(defaultvalue, unwrittenvalue)
with zgyReaderFactory(filename, iocontext = SDCredentials()) as reader, io.StringIO() as bitbucket:
# Improve coverage by exercising the debug log statements
verbose = lambda *args, **kwargs: print(*args, file=bitbucket, **kwargs)
checkmeta(reader)
actual = np.zeros((112, 64, 176), dtype=np.float32)
reader.read((0,0,0), actual, verbose = verbose)
delta = np.amax(np.abs(expect - actual))
if not delta <= maxdelta:
explaincontents(expect, actual, delta)
assert delta <= maxdelta
def compareArrays(expect, actual, value_epsilon = 0.02, count_epsilon = 0.01, *, verbose = False):
value_range = np.amax(expect) - np.amin(expect)
count_total = len(expect.flat)
# Error in each sample, relative to the total expected value range.
# Can technically be greater than 1 if "actual" has wild values.
# A value of e.g. <= 0.01 might be considered close enough.
value_delta = np.abs(expect - actual) / (value_range if value_range else 1)
count_bad = np.count_nonzero(value_delta > value_epsilon)
# In addition to the test for not exactly equal, allow a certain
# fraction of samples to differ by any amount. Typically this
# might be needed due to edge effects in lowres data.
relative_bad = count_bad / count_total
ok = relative_bad <= count_epsilon
if verbose:
print("{5}: {0:6d} of {1:7d} samples ({2:.2f}%) differ > {3:.2f}%. Allowed {4:.2f}%.".format(
count_bad, count_total, 100.0 * count_bad / count_total,
100.0 * value_epsilon, 100.0 * count_epsilon,
"pass" if ok else "FAIL"))
return ok
def showdecimation(lod0, lod1):
"""
Input 4 hires traces (2,2,n) and a corresponding decimated
trace (n//2) and display those to manually inspect the result.
"""
print(" decimated from these input samples")
for ii in range(0, lod0.shape[2], 2):
print("{0:10.5g} {1}".format(lod1[ii//2], list(lod0[:,:,ii:ii+2].flat)))
def checkLodContents(filename, zgyReaderFactory, defaultvalue, unwrittenvalue):
"""
As checkContents, but caller specifies which LOD to read and we
allow some slop in the result since the "expect" array uses trivial
decimation while the zgy writer uses something fancier.
NOTE: Due to bugs in the old writer, no checks are done for samples
where the fullres data has never been written. I have given up on
figuring out the current behavior; I just know that it is wrong.
"""
if zgyReaderFactory == oldzgy.ZgyReader and not HasOldZgy(): return
with zgyReaderFactory(filename, iocontext = SDCredentials()) as reader:
nlods = 1
size = np.array(reader.size, dtype=np.int64)
while np.any(size > reader.bricksize):
nlods += 1
size = (size + 1) // 2
assert nlods == reader.nlods
for lod in range(0, nlods):
step = 1<<lod
expect = createFancyBuffer(defaultvalue, unwrittenvalue)
expect = expect[:,:,:128] # Hard coded edge of written data.
expect = expect[::step,::step,::step]
size = (np.array(reader.size, dtype=np.int64) + (step-1)) // step
size[2] = 128//step
actual = np.zeros(size, dtype=np.float32)
reader.read((0,0,0), actual, lod = lod)
ok = compareArrays(expect, actual,
value_epsilon = 0.02 if lod < 2 else 0.04,
count_epsilon = 0.01 if lod < 2 else 0.03)
if not ok:
deltas = np.abs(expect - actual).astype(np.float64)
# A single 2d section in the "interesting" part of the survey.
actual_2d = actual[:,22//step,:]
expect_2d = expect[:,22//step,:]
deltas_2d = deltas[:,22//step,:]
# A single trace in the "interesting" part of the survey.
expect_1d = expect_2d[50//step,:]
actual_1d = actual_2d[50//step,:]
deltas_1d = deltas_2d[50//step,:]
# Now visualize these for debugging
savePNG(actual[:,22//step,:], "actual-" + str(lod) + ".png")
savePNG(expect[:,22//step,:], "expect-" + str(lod) + ".png")
savePNG(deltas[:,22//step,:], "deltas-" + str(lod) + ".png")
print("\n{0} LOD {1} check: {2}".format(
filename, lod, ("pass" if ok else "FAIL")))
print("Default", defaultvalue, "unwritten", unwrittenvalue)
print("first sample expect {0} actual {1}".format(
expect[0,0,0], actual[0,0,0]))
print("last sample expect {0} actual {1}".format(
expect[-1,-1,-1], actual[-1,-1,-1]))
print("interesting trace expect", expect_1d,
"interesting trace actual", actual_1d,
"delta", deltas_1d,
sep="\n")
assert ok
def checkRawContents(filename, zgyReaderFactory, defaultvalue, unwrittenvalue, *, maxdelta = 0.001):
"""
As checkContents, but do the value conversion ourselves.
There may be issues with never written bricks.
"""
if zgyReaderFactory == oldzgy.ZgyReader and not HasOldZgy(): return
expect = createFancyBuffer(defaultvalue, unwrittenvalue)
with zgyReaderFactory(filename, iocontext = SDCredentials()) as reader:
dtype = {SampleDataType.int8: np.int8,
SampleDataType.int16: np.int16,
SampleDataType.float: np.float32 }[reader.datatype]
checkmeta(reader)
actual = np.zeros((112, 64, 176), dtype=dtype)
reader.read((0,0,0), actual)
#print("raw...", actual[50,22,:])
if np.issubdtype(dtype, np.integer):
iinfo = np.iinfo(dtype)
actual = actual.astype(np.float32)
a = (reader.datarange[1]-reader.datarange[0])/(iinfo.max-iinfo.min)
b = reader.datarange[0] - a * iinfo.min
actual *= a
actual += b
delta = np.amax(np.abs(expect - actual))
if not delta <= maxdelta:
# A single trace in the "interesting" part of the survey.
print("expect", expect[50,22,:])
print("actual", actual[50,22,:])
print("delta", delta)
assert delta <= maxdelta
def computeStatisticsByRead(filename, zgyReaderFactory):
"""
Read back the entire survey from one of the files created by
createFancyFile() and compute statistics from the bulk data.
Concentrate on sum of samples and count of samples.
Also check the metadata.
"""
with zgyReaderFactory(filename, iocontext = SDCredentials()) as reader:
checkmeta(reader)
data = np.zeros((112, 64, 176), dtype=np.float32)
reader.read((0,0,0), data)
theSum = np.sum(data.flat, dtype=np.float64)
theCount = len(data.flat)
#print("Read sum {0}, sample count {1}".format(theSum, theCount))
#cnt = 0
#for x in (0, 1, 31, 97):
# c = np.count_nonzero(data == x)
# print(x, c)
# cnt += c
#print("?", theCount - cnt) # unaccounted for
return theSum, theCount
def readStatisticsStoredInFile(filename, zgyReaderFactory):
"""
Open the ZGY file and retrieve only the stored statistics information.
This is only supported in the new API.
"""
with zgyReaderFactory(filename, iocontext = SDCredentials()) as reader:
stats = reader.statistics
#print(stats)
return (stats.sum, stats.cnt)
def computeStatisticsByHand(defaultvalue, unwrittenvalue):
S = 112 * 64 * 176 # Total samples in survey, excluding padding.
P = 128 * 64 * 192 - S # Padding samples to align with 64^3 bricks.
A = 40 * 41 * 42 # Rect A beg (16,16,16) end (56,57,58) value 31.
B = 72 * 10 * 16 # rect B beg (48,20,24) end (120,30,40) value 97.
C = 112 * 64 * 64 # rect C beg (0,0,64) end (112,64,128) value 0.
D = 8 * 10 * 16 # overlap A/B, begin at (48,20,24).
E = 8 * 10 * 16 # part of B outside the survey: begins at (112,20,24).
Z = 112 * 64 * 48 # Samples inside survey in never-written bricks.
nSample_31 = A - D
nSample_97 = B - E
nSample_unwritten = Z
nSample_default = S - nSample_31 - nSample_97 - nSample_unwritten
theSum = (31 * nSample_31 +
97 * nSample_97 +
defaultvalue * nSample_default +
(unwrittenvalue or 0) * nSample_unwritten)
theCount = S if unwrittenvalue is not None else S - Z
#print("Expected sum {0} * {1} + {2} * {3} + {4} * {5} + {6} * {7} = {8}, sample count {9}".format(31, nSample_31, 97, nSample_97, defaultvalue, nSample_default, unwrittenvalue, nSample_unwritten, theSum, theCount))
if unwrittenvalue is None:
theHist = { 31: nSample_31, 97: nSample_97,
defaultvalue: nSample_default }
elif defaultvalue == unwrittenvalue:
theHist = { 31: nSample_31, 97: nSample_97,
defaultvalue: nSample_default + nSample_unwritten }
else:
theHist = { 31: nSample_31, 97: nSample_97,
defaultvalue: nSample_default,
unwrittenvalue: nSample_unwritten }
return theSum, theCount, theHist
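# Worked example (added note): with the survey size used by these tests,
# S = 112*64*176 = 1,261,568 samples. When unwrittenvalue is not None the
# expected count is S; with unwrittenvalue=None (never-written bricks not
# counted) it is S - Z = 1,261,568 - 112*64*48 = 917,504.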
def checkStatistics(filename, zgyReaderFactory, defaultvalue, unwrittenvalue, countdead, *, maxdelta = 0.001):
if zgyReaderFactory == oldzgy.ZgyReader and not HasOldZgy(): return
byhand = computeStatisticsByHand(defaultvalue, unwrittenvalue)
byread = computeStatisticsByRead(filename, zgyReaderFactory)
if not (abs(byhand[0]-byread[0]) < maxdelta and byhand[1] == byread[1]):
print("stat sum: byhand: {0}, byread {1}, maxdelta {2}, count byhand: {3} byread {4}".format(byhand[0], byread[0], maxdelta, byhand[1], byread[1]))
assert(abs(byhand[0]-byread[0]) < maxdelta and byhand[1] == byread[1])
if zgyReaderFactory is not oldzgy.ZgyReader:
byhand = computeStatisticsByHand(defaultvalue, unwrittenvalue if countdead else None)
byload = readStatisticsStoredInFile(filename, zgyReaderFactory)
assert(abs(byhand[0]-byload[0]) < maxdelta and byhand[1] == byload[1])
def findHistogramSlot(value, histrange):
"""
Which slot this value belongs to in a 256-bin histogram.
The result is guaranteed to be in the range [0..255].
Values outside range are clipped to 0 or 255. This is not
how the actual histogram computation is done, but for the
tests it should not make any difference.
"""
value = 255 * (value - histrange[0]) / (histrange[1] - histrange[0])
return int(np.rint(np.clip(value, 0, 255)))
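# Quick illustration (added note, not used by the tests): with a histogram
# range of (-2, +2), findHistogramSlot(0, (-2, 2)) gives slot 128, and any
# value below -2 or above +2 clips to slot 0 or 255 respectively.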
def checkHistogram(filename, zgyReaderFactory, defaultvalue, unwrittenvalue, countdead):
if zgyReaderFactory == oldzgy.ZgyReader and not HasOldZgy(): return
if zgyReaderFactory is not oldzgy.ZgyReader:
with zgyReaderFactory(filename, iocontext = SDCredentials()) as reader:
stat = (reader.statistics.min, reader.statistics.max)
hist = (reader.histogram.min, reader.histogram.max)
data = (reader.datarange[0], reader.datarange[1])
if False:
print("checkHistogram:",
"stat", stat, "hist", hist, "data", data,
"type", reader.datatype.name)
if reader.datatype == SampleDataType.float:
# For float data the old writer currently builds
# the histogram on the fly and may end up with a too wide
# range. The new reader doesn't do this now but it might do
# so in the future. Note that data == stat for float zgy.
assert hist[0] <= data[0] and hist[1] >= data[1]
else:
assert math.isclose(hist[0],data[0]) and math.isclose(hist[1],data[1])
assert reader.histogram.cnt == reader.statistics.cnt
hist = reader.histogram
#print(hist)
_, _, byhand = computeStatisticsByHand(defaultvalue, unwrittenvalue if countdead else None)
#print(byhand)
expect_hist = np.zeros(256, dtype=np.int64)
for value, expect in byhand.items():
slot = findHistogramSlot(value, (hist.min, hist.max))
expect_hist[slot] += expect
for slot in range(256):
actual = hist.bin[slot]
expect = expect_hist[slot]
if actual != expect:
print("histogram value", value, "slot", slot,
"expect", expect, "actual", actual)
#print("actual", hist)
#print("expect", expect_hist)
assert actual == expect
def isReaderOpen(reader):
"""
Return True if the zgy file is open for read.
There isn't a property for that in the API because
typically this is only needed when testing.
"""
tmp = np.zeros((1, 1, 1), dtype=np.float32)
try:
reader.read((0,0,0), tmp)
except (RuntimeError, newzgy.ZgyUserError) as ex:
assert "ot open for" in str(ex)
return False
return True
def checkReadingDeadArea(filename, pos, zgyReaderFactory, expected):
if zgyReaderFactory == oldzgy.ZgyReader and not HasOldZgy(): return
with zgyReaderFactory(filename, iocontext = SDCredentials()) as reader:
tmp = np.full((2, 2, 2), 42, dtype=np.float32)
reader.read(pos, tmp)
#print(list(tmp.flat), "expected", expected)
assert np.all(np.abs(tmp - expected) < 0.001)
def checkReadingOutsideRange(filename, zgyReaderFactory):
if zgyReaderFactory == oldzgy.ZgyReader and not HasOldZgy(): return
with zgyReaderFactory(filename, iocontext = SDCredentials()) as reader:
tmp = np.full((2, 2, 2), 42, dtype=np.float32)
with MustThrow("outside the valid range"):
reader.read((0, 0, 10000), tmp)
with MustThrow("outside the valid range"):
reader.read((0, 0, -9999), tmp)
with MustThrow("outside the valid range"):
reader.readconst((0, 0, 10000), (2, 2, 2))
with MustThrow("outside the valid range"):
reader.readconst((0, 0, -9999), (2, 2, 2))
#with MustThrow("outside the valid range"):
# reader.readconst((0, 0, 0), (1000000, 1000000, 1000000))
def checkReadingOutsideLod(filename, zgyReaderFactory):
if zgyReaderFactory == oldzgy.ZgyReader and not HasOldZgy(): return
with zgyReaderFactory(filename, iocontext = SDCredentials()) as reader:
tmp = np.full((2, 2, 2), 42, dtype=np.float32)
with MustThrow("outside the valid range"):
reader.read((0, 0, 0), tmp, lod=-1)
with MustThrow("outside the valid range"):
reader.read((0, 0, 0), tmp, lod=9)
with MustThrow("outside the valid range"):
reader.readconst((0, 0, 0), (2, 2, 2), lod=-1)
with MustThrow("outside the valid range"):
reader.readconst((0, 0, 0), (2, 2, 2), lod=9)
def checkReadingToWrongValueType(filename, zgyReaderFactory):
"""
This was supposed to cover a test in readToExistingBuffer()
but now the error is caught already in the API layer,
which is already tested in testBadArgumentsOnReadWrite.
Keeping the test here in case this changes back later.
"""
if zgyReaderFactory == oldzgy.ZgyReader and not HasOldZgy(): return
with zgyReaderFactory(filename, iocontext = SDCredentials()) as reader:
tmp = np.full((2, 2, 2), 42, dtype=np.int16)
#with MustThrow("conversion only supported"):
with MustThrow("array of np.float32 or np.int8"):
reader.read((0, 0, 0), tmp)
def hasSAuthToken():
try:
jwt = json.loads(base64.urlsafe_b64decode(SDCredentials().sdtoken.split(".")[1] + "====").decode("ascii"))
print(json.dumps(jwt, indent=2, sort_keys=True))
timeleft = jwt["exp"] - int(time.time())
print("SAuth token has", timeleft // 60, "minutes to expiry")
return timeleft > 0
except IOError:
# Missing or malformed token, including "FILE:" tokens.
# Unfortunately, impersonation tokens that are still
# good to refresh will also fail here.
return True # optimist.
# ----- Separate tests, but needs testFancy() to create the test files. ----- #
def runCloseOnException(filename, zgyReaderFactory):
"""
Test that the "with" guard is working properly.
On leaving the scope the reader should be closed.
Even if we left via an exception.
"""
class DummyException(Exception):
pass
try:
# If the reader raises an exception in __init__ then "reader"
# remains unassigned. While if we raise an exception ourselves
# it gets caught at the same level but now with "reader" known.
# No big deal as long as we *only* catch the dummy exception,
with zgyReaderFactory(filename, iocontext = SDCredentials()) as reader:
assert isReaderOpen(reader)
raise DummyException("testing...")
except DummyException:
pass
assert not isReaderOpen(reader)
def runErrorOnClose(filename, ZgyReaderFactory):
"""
Only relevant for openzgy. Verify correct behavior when we exit
the context manager due to an exception. For the old zgy wrapper
there is no easy way of forcing an error to be thrown on close,
so while I would like to have tested that one as well, I won't.
"""
# Exception was thrown from inside the block only.
# Make sure the reader was closed. This peeks at internal data.
try:
message = ""
with newzgy.ZgyReader(filename, iocontext = SDCredentials()) as reader:
raise RuntimeError("xyzzy")
except Exception as ex:
message = str(ex)
assert message == "xyzzy"
assert reader._fd is None
# Exception was thrown from the reader's close() method only.
try:
message = ""
with newzgy.ZgyReader(filename, iocontext = SDCredentials()) as reader:
reader._fd.xx_close()
reader._fd = "oops"
except Exception as ex:
message = str(ex)
assert message.find("object has no attribute") >= 0
# Exception was thrown from inside the block, then when handling
# that exception another exception was thrown inside close().
try:
message1 = ""
message2 = ""
with newzgy.ZgyReader(filename, iocontext = SDCredentials()) as reader:
reader._fd.xx_close()
reader._fd = "oops"
raise RuntimeError("xyzzy")
except Exception as ex:
message1 = str(ex)
message2 = str(ex.__cause__ or ex.__context__)
assert message1.find("object has no attribute") >= 0
assert message2 == "xyzzy"
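# The assertions above rely on standard Python exception chaining: when a new
# exception is raised while another one is being handled, the original ends up
# in __context__ (or in __cause__ if "raise ... from ..." was used). A minimal
# self-contained sketch of that behavior, independent of ZGY:
def _exception_chaining_sketch():
    try:
        try:
            raise RuntimeError("xyzzy")
        except RuntimeError:
            raise AttributeError("oops")   # implicitly chained to the RuntimeError
    except AttributeError as ex:
        assert str(ex) == "oops"
        assert str(ex.__cause__ or ex.__context__) == "xyzzy"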
def runConversions(filename, zgyReaderFactory):
"""
Verify that coordinate conversion between index, annot, and world works.
"""
with zgyReaderFactory(filename, iocontext = SDCredentials()) as demo:
#dump("", demo, True)
a = demo.indexToAnnot((3, 7))
i = demo.annotToIndex(a)
#print(a, i)
assert(a == (1249, 5692) and i == (3, 7))
w = demo.indexToWorld((0, 0))
i = demo.worldToIndex(w)
#print(w, i)
assert(w == (1000, 1000) and i == (0, 0))
w = demo.indexToWorld((1, 0))
i = demo.worldToIndex(w)
#print(w, i)
assert(w == (1025, 1000) and i == (1, 0))
w = demo.indexToWorld((0, 1))
i = demo.worldToIndex(w)
#print(w, i)
assert(w == (1000, 1030) and i == (0, 1))
w = demo.indexToWorld((3, 7))
i = demo.worldToIndex(w)
#print(w, i)
assert(w == (1000 + 3*25, 1000 + 7*30) and i == (3, 7))
w = demo.annotToWorld(a)
a = demo.worldToAnnot(w)
#print(w, a)
assert(w == (1000 + 3*25, 1000 + 7*30) and a == (1249, 5692))
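# The asserts above assume a simple affine mapping in each lateral dimension:
#   annot = annotstart + index * annotinc
#   world = world_origin + index * spacing
# A minimal arithmetic sketch using the numbers implied by the asserts.
# The constants (annotstart (1234, 5678), annotinc (5, 2), world origin
# (1000, 1000), spacing (25, 30)) are assumptions about createFancyFile,
# not values read back from the API:
def _conversion_arithmetic_sketch():
    annotstart, annotinc = (1234, 5678), (5, 2)
    origin, spacing = (1000, 1000), (25, 30)
    index = (3, 7)
    annot = tuple(annotstart[d] + index[d] * annotinc[d] for d in range(2))
    world = tuple(origin[d] + index[d] * spacing[d] for d in range(2))
    assert annot == (1249, 5692)
    assert world == (1075, 1210)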
def runErrorIfNotOpenForRead(filename, zgyReaderFactory):
size = (1, 1, 1)
tmp = np.zeros(size, dtype=np.float32)
pos = (0, 0, 0)
with zgyReaderFactory(filename, iocontext = SDCredentials()) as reader:
reader.close()
with MustThrow("ot open for read"):
reader.read(pos, tmp)
if zgyReaderFactory is not oldzgy.ZgyReader:
with MustThrow("ot open for read"):
reader.readconst(pos, size)
def runDumpToDevNull(filename, zgyReaderFactory):
with zgyReaderFactory(filename, iocontext = SDCredentials()) as reader, io.StringIO() as stream:
reader._meta.dumpRaw(file=stream)
# No test on the result, only see that it doesn't crash.
assert len(stream.getvalue()) > 0
def runClone(filename, templatename):
with newzgy.ZgyWriter(filename, iocontext = SDCredentials(), templatename=templatename) as writer:
checkmeta(writer, SampleDataType.int8, (-28,+227))
with newzgy.ZgyReader(filename, iocontext = SDCredentials()) as reader:
checkmeta(reader, SampleDataType.int8, (-28,+227))
def runUpdate(filename):
with newzgy.ZgyWriter(filename, iocontext = SDCredentials(), templatename=filename) as writer:
checkmeta(writer, SampleDataType.int8, (-28,+227))
with newzgy.ZgyReader(filename, iocontext = SDCredentials()) as reader:
checkmeta(reader, SampleDataType.int8, (-28,+227))
def runDumpMembers(filename, templatename):
with newzgy.ZgyWriter(filename, iocontext = SDCredentials(), templatename=templatename) as writer:
#dump("\nZgyWriter contents:", writer, verbose=False)
assert not hasMutableMembers(writer, safe=set(("meta",)), verbose=True)
with newzgy.ZgyReader(filename, iocontext = SDCredentials()) as reader:
#dump("\nZgyReader contents:", reader, verbose=True)
assert not hasMutableMembers(reader, safe=set(("meta",)), verbose=True)
# ----- Separately runnable tests, might need caller to clean up files. ----- #
def testRegisteredCompressors():
#print("Known compressors", ",".join(ZgyKnownCompressors()),
# "decompressors", ",".join(ZgyKnownDecompressors()))
assert "ZFP" in ZgyKnownCompressors()
assert "ZFP" in ZgyKnownDecompressors()
with MustThrow('"XYZZY" not recognized. Must be one of', ZgyMissingFeature):
lossy = ZgyCompressFactory("XYZZY", snr=30)
def testProgressWithDots():
with io.StringIO() as line:
p = ProgressWithDots(length=51, outfile=line)
assert line.getvalue() == ""
p(0, 1000)
assert line.getvalue() == "."
p(1, 1000)
assert line.getvalue() == "."
p(500, 1000)
assert line.getvalue() == "." * 26
p(999, 1000)
assert line.getvalue() == "." * 50
p(1000, 1000)
assert line.getvalue() == "." * 51 + "\n"
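# The expected dot counts above follow from a simple mapping which, judging by
# the asserts, appears to be roughly: dots = 1 + done * (length - 1) // total,
# with a trailing newline once done == total. This is an inference from the
# test, not a statement about the ProgressWithDots implementation.
def _progress_dots_sketch(done, total, length=51):
    return min(length, 1 + done * (length - 1) // total)
# e.g. _progress_dots_sketch(0, 1000) == 1, (500, 1000) == 26,
# (999, 1000) == 50, (1000, 1000) == 51, matching the asserts above.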
def testBadArgumentsOnCreate():
fname = "should-not-exist.zgy"
try:
os.remove(fname)
except FileNotFoundError:
pass
with MustThrow("size must be specified", newzgy.ZgyUserError):
with newzgy.ZgyWriter(fname):
pass
with MustThrow("size must be at least 1", newzgy.ZgyUserError):
with newzgy.ZgyWriter(fname, size=(10,0,20)):
pass
with MustThrow("bricksize must be specified in 3 dimensions", newzgy.ZgyUserError):
with newzgy.ZgyWriter(fname, size=(10,15,20), bricksize=(64,64)):
pass
with MustThrow("bricksize must be >= 4 and a power of 2", newzgy.ZgyUserError):
with newzgy.ZgyWriter(fname, size=(10,15,20), bricksize=(64,64,48)):
pass
with MustThrow("datarange must be specified for integral types", newzgy.ZgyUserError):
with newzgy.ZgyWriter(fname, size=(10,15,20), datatype=SampleDataType.int8):
pass
with MustThrow("datarange must have min < max", newzgy.ZgyUserError):
with newzgy.ZgyWriter(fname, size=(10,15,20), datatype=SampleDataType.int8, datarange=(3,2)):
pass
with MustThrow("datarange must have min < max", newzgy.ZgyUserError):
with newzgy.ZgyWriter(fname, size=(10,15,20), datatype=SampleDataType.int8, datarange=(3,3)):
pass
with MustThrow("datarange must be finite", newzgy.ZgyUserError):
with newzgy.ZgyWriter(fname, size=(10,15,20), datatype=SampleDataType.int8, datarange=(np.nan,np.nan)):
pass
# The consistency checks should be done before actually creating the file.
# Which means that the next call should fail.
with MustThrow(None, FileNotFoundError):
os.remove(fname)
def testBadArgumentsOnReadWrite(filename):
origin = (0, 0, 0)
expect = "Expected a 3d numpy array of np.float32 or np.float32"
with newzgy.ZgyWriter(filename, size=(10,15,20)) as w:
with MustThrow(expect): # no data
w.write(origin, None)
with MustThrow(expect): # not numpy data
w.write(origin, [[[1,1,1]]])
with MustThrow(expect): # wrong data type
w.write(origin, np.array([[[1,1,1]]], dtype=np.int8))
with MustThrow(expect): # wrong number of dimensions
w.write(origin, np.array([1,1,1], dtype=np.float32))
expect = "Expected a writeable 3d numpy array of np.float32 or np.float32"
with newzgy.ZgyReader(filename) as r:
with MustThrow(expect): # no data
r.read(origin, None)
with MustThrow(expect): # not numpy data
r.read(origin, [[[1,1,1]]])
with MustThrow(expect): # wrong data type
r.read(origin, np.array([[[1,1,1]]], dtype=np.int8))
with MustThrow(expect): # wrong number of dimensions
r.read(origin, np.array([1,1,1], dtype=np.float32))
with MustThrow(expect): # buffer not writeable
a = np.array([[[1,1,1]]], dtype=np.float32)
a.setflags(write=False)
r.read(origin, a)
def testAutoDelete():
# It is an error if the expected file is missing.
with MustThrow("", FileNotFoundError):
with LocalFileAutoDelete("xyzzy", silent=True) as fn:
pass
# As above, but if some other error occurred, that error takes precedence.
with MustThrow("", IndexError):
with LocalFileAutoDelete("xyzzy", silent=True) as fn:
foo = [][1]
# No attempt is made to remove, if we explicitly disarmed.
with LocalFileAutoDelete("xyzzy") as fn:
assert "/tmp-" in fn.name or "\\tmp-" in fn.name or fn.name[:4] == "tmp-"
fn.disarm()
# Actually try creating the file. Auto cleanup happens.
with LocalFileAutoDelete("xyzzy") as fn:
assert "/tmp-" in fn.name or "\\tmp-" in fn.name or fn.name[:4] == "tmp-"
myname = fn.name
with open(fn.name, "w"):
pass
assert os.path.exists(myname)
assert not os.path.exists(myname)
myname = [None, None]
with ExitStack() as cleanup:
fn1 = LocalFileAutoDelete("one")
myname[0] = fn1.name
cleanup.enter_context(fn1)
with open(fn1.name, "w"):
pass
fn2 = LocalFileAutoDelete("two")
myname[1] = fn2.name
cleanup.enter_context(fn2)
with open(fn2.name, "w"):
pass
assert os.path.exists(myname[0])
assert os.path.exists(myname[1])
assert not os.path.exists(myname[0])
assert not os.path.exists(myname[1])
myname = [None, None]
with MustThrow("", FileNotFoundError):
with ExitStack() as cleanup:
fn1 = LocalFileAutoDelete("one")
myname[0] = fn1.name
cleanup.enter_context(fn1)
with open(fn1.name, "w"):
pass
fn2 = LocalFileAutoDelete("two", silent=True)
myname[1] = fn2.name
cleanup.enter_context(fn2)
# I did not get around to creating the second file.
# This means the fn2 context will raise an exception.
# fn1 should still have been deleted though.
assert not os.path.exists(myname[0])
def testHistogramRangeIsCenterNotEdge(filename):
"""
When the histogram gets generated by the ZGY writer, the range gives
the center value of bin 0 and the center value of bin 255, NOT the
lowest value that maps to bin 0 and the highest value that maps to
bin 255 (which would arguably also make sense). Verify that behavior.
"""
with oldzgy.ZgyWriter(filename, iocontext = SDCredentials(),
size = (64, 64, 64),
datatype = SampleDataType.float,
datarange =(0, 255),
zstart = 0, zinc = 4,
annotstart = (1, 1), annotinc = (1, 1),
corners = ((1000, 1000), (1630, 1000),
(1000, 1630), (1630, 1630))
) as writer:
# With the 0..255 histogram range interpreted as the center of the
# first and last bin, we have the following:
# slot 0 is -0.5..+0.5, slot 2 is 1.5..2.5, slot 5 is 4.5..5.5
# If we instead had a 0..256 histogram range interpreted as the
# extreme edges of the first and last bin, we have this:
# slot 0 is 0..1, slot 2 is 2..3, slot 5 is 5..6, slot 255: 255..256
# That would still be approximately correct at least for the first
# few bins when setting the histogram range to 0..255 instead of
# 0..256. So if the histogram algorithm chooses to use the range
# as the extreme limits (which it is NOT supposed to do),
# 1.8 and 2.2 would end up in different slots. And 4.3 and 4.7
# would end up in the same slot. It should be the other way around.
#
writer.write((0, 0, 0), np.full((1, 10, 10), 1.8, dtype=np.float32))
writer.write((1, 0, 0), np.full((1, 1, 1), 2.2, dtype=np.float32))
writer.write((2, 0, 0), np.full((1, 10, 5), 4.3, dtype=np.float32))
writer.write((3, 0, 0), np.full((1, 1, 2), 4.7, dtype=np.float32))
with newzgy.ZgyReader(filename, iocontext = SDCredentials()) as reader:
#print(reader.histogram)
assert math.isclose(reader.histogram.min, 0.0)
assert math.isclose(reader.histogram.max, 255.0)
assert reader.histogram.bin[2] == 101
assert reader.histogram.bin[4] == 50
assert reader.histogram.bin[5] == 2
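# With the histogram range giving bin centers, the bin width for a 256-bin
# histogram is (max - min) / 255 and a value maps to round((value - min) /
# binwidth). A small arithmetic sketch of the expectations asserted above
# (range 0..255, so the bin width is exactly 1):
def _histogram_bin_sketch(value, hmin=0.0, hmax=255.0, nbins=256):
    binwidth = (hmax - hmin) / (nbins - 1)
    return int(round((value - hmin) / binwidth))
# 1.8 and 2.2 both land in bin 2, while 4.3 lands in bin 4 and 4.7 in bin 5,
# which is what the asserts above check.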
def testEmptyFile(filename, zgyWriterFactory = newzgy.ZgyWriter, zgyReaderFactory = newzgy.ZgyReader):
"""
Create a file without writing bulk data to it; make sure it is
well behaved both on write and on read back. Ideally test both
on-prem and cloud, and test all 9 combinations of ZGY, OpenZGY/C++,
and OpenZGY/Python readers and writers. With the current test
framework it gets a bit tricky to test the two OpenZGY/C++ vs.
OpenZGY/Python cases. Can I make a test that imports all three?
"""
#print('testEmptyFile("{0}")'.format(filename))
#print(' -> Using ' + showZgy(zgyWriterFactory, zgyReaderFactory))
with zgyWriterFactory(filename,
iocontext = SDCredentials(),
size = (100, 200, 300),
datatype = SampleDataType.float,
datarange = (-1, 1),
zunitdim = UnitDimension.time,
zunitname = "ms",
zunitfactor = 0.001,
hunitdim = UnitDimension.length,
hunitname = "ft",
hunitfactor = 0.3048,
zstart = 2500,
zinc = 4.125,
annotstart = (1234, 5678),
annotinc = (5, 2),
corners = ((1000, 1000),
(1005, 1000),
(1000, 1002),
(1005, 1002))
) as writer:
pass
with zgyReaderFactory(filename, iocontext = SDCredentials()) as reader:
slurp = np.ones(reader.size, dtype=np.float32)
reader.read((0,0,0), slurp)
assert np.count_nonzero(slurp) == 0
if zgyReaderFactory == newzgy.ZgyReader:
assert reader.readconst((0,0,0), reader.size) == 0
def testEmptyExistingFile(filename, zgyReaderFactory = newzgy.ZgyReader):
"""
Access a file that has already been created by the old ZGY accessor
with no written bricks and an invalid coding range.
To create, use the old ZGY-Public Python wrapper:
with zgy.ZgyWriter("OldEmpty2.zgy", size=(512, 640, 1000),
datarange=(101,101), datatype="int16") as w: pass
Can leave the file locally, or upload with ZGY, or with sdutil.
Currently the latter is the most interesting case to test.
"""
#print('testEmptyExistingFile("{0}")'.format(filename))
#print(' -> Using ' + showZgy(zgyReaderFactory))
with zgyReaderFactory(filename, iocontext = SDCredentials()) as reader:
if zgyReaderFactory == oldzgy.ZgyReader:
slurp = np.ones(reader.size, dtype=np.float32)
reader.read((0,0,0), slurp)
value = slurp[0,0,0] if np.all(slurp.flat == slurp[0,0,0]) else None
else:
value = reader.readconst((0,0,0), reader.size, as_float=True)
#print(" -> VALUE", value, "RANGE", reader.datarange)
# In spite of the 101..101 coding range, the file will contain
# all zeros. In the new accessor the coding range is rejected
# as bad, no conversion is done, so empty bricks read as zero.
# In the old accessor there is a "feature" that causes empty
# bricks to read as zero regardless of whether caller wants conversion.
assert value == 0
def testRmwFile(filename, zgyWriterFactory = newzgy.ZgyWriter):
"""
The layout of this test data is described in detail in doc/testdata-rmw.png.
"""
rmwsize = (((0,0,0), (304,64,384)), # Survey size.
((0,0,192), (304,64,384)), # Half the survey set to constant "1".
((28,0,84), (144,64,304)), # Touches 12 bricks.
((40,0,100), (160,64,288)), # Touches 12 bricks.
((204,0,0), (216,64,384)), # Tall, thin, to fill up this segment.
((52,0,120), (176,64,272)), # Touches 12 bricks.
((256,0,0), (304,64,352)), # Constant-value at survey edge.
((0,0,256), (64,64,320))) # Normal brick changed to constant.
surveysize = rmwsize[0][1]
expect = np.zeros(surveysize, dtype=np.float32)
partnum = 0
for part in rmwsize[1:]:
partnum += 1
beg, end = part
#print("part", part, "beg", beg, "end", end)
expect[beg[0]:end[0],beg[1]:end[1],beg[2]:end[2]] = partnum
with zgyWriterFactory(filename,
iocontext = SDCredentials(segsize=11/4),
size = surveysize,
datatype = SampleDataType.int8,
datarange = (-28,+227),
zunitdim = UnitDimension.time,
zunitname = "ms",
zunitfactor = 0.001,
hunitdim = UnitDimension.length,
hunitname = "ft",
hunitfactor = 0.3048,
zstart = 2500,
zinc = 4.125,
annotstart = (1234, 5678),
annotinc = (5, 2),
corners = ((1000, 1000),
(1005, 1000),
(1000, 1002),
(1005, 1002))
) as writer:
partnum = 0
sizes = [(0,)]
for part in rmwsize[1:]:
partnum += 1
beg, end = part
size = (end[0]-beg[0], end[1]-beg[1], end[2]-beg[2])
#print("part", part, "beg", beg, "end", end, "size", size)
if partnum == 1:
# Just doing this to exercise both the write functions.
data = np.full(size, partnum, dtype=np.float32)
writer.write(beg, data)
else:
data = np.float32(partnum)
writer.writeconst(beg, data, size=size, is_storage=False)
if filename[:5] == "sd://":
closed_sizes = tuple(writer._fd._relay._sizes)
opened_sizes = tuple([len(writer._fd._open_segment)])
sizes.append(closed_sizes + opened_sizes)
else:
sizes.append((writer._fd.xx_eof,))
#print(sizes)
sizes_in_bricks = []
for e in sizes:
for bytecount in e:
assert all([(bytecount % 64) == 0 for bytecount in e])
sizes_in_bricks.append(tuple(np.array(e, dtype=np.int64) // (256*1024)))
# The expected results have been computed by hand.
# See testdata-rmw.svg for a detailed explanation with figures.
#print(sizes_in_bricks)
local = filename[:5] != "sd://"
assert sizes_in_bricks[1] == (( 1,) if local else (1, 0))
assert sizes_in_bricks[2] == ((11,) if local else (1, 10))
assert sizes_in_bricks[3] == ((11,) if local else (1, 10))
assert sizes_in_bricks[4] == ((17,) if local else (1, 11, 5))
assert sizes_in_bricks[5] == ((17,) if local else (1, 11, 11, 4))
assert sizes_in_bricks[6] == ((18,) if local else (1, 11, 11, 5))
assert sizes_in_bricks[7] == ((18,) if local else (1, 11, 11, 6))
with newzgy.ZgyReader(filename, iocontext = SDCredentials()) as reader:
# Read the entire survey, excluding padding bytes, in a single
# operation. Compare with the survey built in memory.
slurp = np.zeros(reader.size, dtype=np.float32)
reader.read((0,0,0), slurp)
assert np.all(slurp == expect)
# Check each brick for whether it takes up space in the file or
# is flagged as constant value. The expected result is explained
# in the textual and image description of the test data.
is_const = np.zeros((5, 6), dtype=np.float32)
for ii in range(0, 320, 64):
for kk in range(0, 384, 64):
c = reader.readconst((ii, 0, kk), (64, 64, 64))
is_const[ii//64, kk//64] = -1 if c is None else c
expect_const = np.array([[0, -1, -1, -1, -1, 1],
[0, -1, 5, 5, -1, 1],
[0, -1, -1, -1, -1, 1],
[-1, -1, -1, -1, -1, -1],
[6, 6, 6, 6, 6, -1]], dtype=np.float32)
assert np.all(is_const == expect_const)
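# The segment sizes above are converted from bytes to bricks using the fact
# that one int8 brick is 64*64*64 samples at 1 byte each, i.e. 256 KiB, and
# that the header area is padded to a whole number of bricks. A one-line
# sanity check of that arithmetic:
assert 64 * 64 * 64 * 1 == 256 * 1024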
def testNoRmwInCompressedFile(filename):
lossy = ZgyCompressFactory("ZFP", snr=30)
with newzgy.ZgyWriter(filename, iocontext = SDCredentials(), size=(100, 64, 64), compressor=lossy) as w:
# Writing a constant value should not prevent overwriting later.
w.writeconst((0,0,0), value=42, size=w.size, is_storage=False)
# Write part of a brick for the first time.
data = np.arange(50*64*64, dtype=np.float32).reshape((50, 64, 64))
w.write((0,0,0), data)
# Write needing to update the first brick.
with MustThrow("Updating a local BrickStatus.Compressed brick with Compressed data is illegal"):
w.write((50,0,0), data)
# The above error might have set the global _is_bad flag, in spite of
# this being a recoverable user error. But it probably doesn't
# matter much either way.
w.errorflag = False
# Write entire survey. This is an update, but no read/modify/write.
# The old brick will be leaked if the new one compresses larger.
data = np.arange(100*64*64, dtype=np.float32).reshape((100, 64, 64))
with MustThrow("Updating a local BrickStatus.Compressed brick with Compressed data is illegal"):
w.write((0,0,0), data)
w.errorflag = False
# This should actually have been set when we opened the file, but
# that feature isn't implemented yet. Besides, for the purpose
# of this test I need to change it while the file is in use.
w._accessor._update_mode = UpdateMode.Pedantic
w.write((0,0,0), data)
def testFatalErrorFlag(filename):
class BogusFile:
def close(self): pass
with newzgy.ZgyWriter(filename, iocontext = SDCredentials(), size=(100, 64, 64)) as w:
data = np.arange(64*64*64, dtype=np.float32).reshape(64, 64, 64)
w.write((0,0,0), data)
w.write((0,0,0), data)
hack = w._accessor._file._file
w._accessor._file._file = BogusFile()
with MustThrow("BogusFile", AttributeError):
w.write((0,0,0), data)
w._accessor._file._file = hack
# File is now usable again, but the global error flag is set.
with MustThrow("previous errors"):
w.write((0,0,0), data)
# Explicitly reset it and we should be good.
w.errorflag = False
w.write((0,0,0), data)
# Another bad write
w._accessor._file._file = BogusFile()
with MustThrow("BogusFile", AttributeError):
w.write((0,0,0), data)
# Verify that lod generation and meta flush are either
# turned off or ignore errors. The final close()
# of the python file descriptor will not throw because
# BogusFile wraps close().
w.close()
hack.close()
def testLargeSparseFile(filename, zgyWriterFactory, zgyReaderFactory):
size = (5000, 6000, 1000)
wbeg = (1000, 9000)
wend = (wbeg[0] + 10 * (size[0]-1), wbeg[1] + 10 * (size[1]-1))
if zgyWriterFactory:
with zgyWriterFactory(filename,
iocontext = SDCredentials(),
size = size,
datatype = SampleDataType.int8,
datarange = (-28,+227),
zunitdim = UnitDimension.time,
zunitname = "ms",
zunitfactor = 0.001,
hunitdim = UnitDimension.length,
hunitname = "ft",
hunitfactor = 0.3048,
zstart = 2500,
zinc = 4.125,
annotstart = (1234, 5678),
annotinc = (5, 2),
corners = ((wbeg[0], wbeg[1]),
(wend[0], wbeg[1]),
(wbeg[0], wend[1]),
(wend[0], wend[1]))) as writer:
writer.write((size[0]-1, size[1]-1, 0), np.array([[[42, 10, 10]]], dtype=np.int8))
writer.finalize(progress=ProgressWithDots(), decimation=[DecimationType.Maximum])
if zgyReaderFactory:
with zgyReaderFactory(filename, iocontext = SDCredentials()) as reader:
assert reader.size == size
data = np.zeros((1,1,4), dtype=np.int8)
pos = np.array((size[0]-1, size[1]-1, 0), dtype=np.int64)
reader.read(pos, data, lod=0)
assert tuple(data.flat) == (42, 10, 10, -100)
reader.read(pos//2, data, lod=1)
assert tuple(data.flat) == (42, 10, -100, -100)
for lod in range(2,8):
reader.read(pos//(1<<lod), data, lod=lod)
assert tuple(data.flat) == (42, -100, -100, -100)
def testNaan(filename, snr = -1):
compressor = ZgyCompressFactory("ZFP", snr = snr) if snr > 0 else None
with newzgy.ZgyWriter(filename,
compressor = compressor,
iocontext = SDCredentials(),
size = (256, 128, 128),
datatype = SampleDataType.float) as writer:
data = np.zeros((64, 64, 64), dtype=np.float32)
count_nan = 0
count_inf = 0
counts = np.zeros(256, dtype=np.int32)
# Some NaN, a few other different values, mostly zero.
data.fill(0)
data[0,0,:3] = np.nan
data[0,0,3] = 2
data[0,0,4] = 3
writer.write((0,0,0), data)
count_nan += 3
counts[2] += 1
counts[3] += 1
# Some NaN, only one other value (42)
data.fill(42)
data[0,0,:5] = np.nan
writer.write((64,0,0), data)
count_nan += 5
counts[42] += (64*64*64) - 5
# NaN only
data.fill(np.nan)
writer.write((128,0,0), data)
count_nan += (64*64*64)
# NaN explicitly written as constant value
writer.writeconst((192, 0, 0), np.nan, (64, 64, 64), is_storage=False)
count_nan += (64*64*64)
# Now repeat for +/- inf
# Some Inf, a few other different values. Mostly zero.
data.fill(0)
data[0,0,0] = np.inf
data[0,0,1] = -np.inf
data[0,0,2] = np.inf
data[0,0,3] = 3
data[0,0,4] = 4
writer.write((0,64,0), data)
count_inf += 3
counts[3] += 1
counts[4] += 1
# Some Inf, only one other value (255).
data.fill(255)
data[0,0,:13] = np.inf
data[0,1,:10] = -np.inf
writer.write((64,64,0), data)
count_inf += 23
counts[255] = (64*64*64) - 23
# +Inf only
data.fill(np.inf) # 64^3 Inf
writer.write((128,64,0), data)
count_inf += (64*64*64)
# -Inf explicitly written as constant value
writer.writeconst((192, 64, 0), -np.inf, (64, 64, 64), is_storage=False)
count_inf += (64*64*64)
counts[0] = 256*128*128 - np.sum(counts[1:]) - count_nan - count_inf
writer.finalize(decimation = [DecimationType.Average])
# Exercise logging & debug code in the compression module.
# Discard the output. Yes, this is a shameless trick to
# increase coverage. But in Python a test that only checks
# that a function is callable is in fact somewhat useful.
if compressor is not None:
with io.StringIO() as devnull:
compressor.dump(msg=None, outfile=devnull,
text=True, csv=True, reset=True)
with newzgy.ZgyReader(filename, iocontext = SDCredentials()) as reader:
# --- statistics and histogram ---
#print(reader.statistics)
#print(reader.histogram)
#print(list(counts))
#print("Expect total size", 256*128*128,
# "nan", count_nan,
# "inf", count_inf,
# "valid", 256*128*128 - count_nan - count_inf)
#print("Got valid",
# "stats", reader.statistics.cnt,
# "histo", reader.histogram.cnt,
# "sampl", np.sum(reader.histogram.bin))
# Limits are set automatically to the value range. I carefully
# chose 0..255 since the histogram then has one bin per sample value.
assert reader.histogram.min == 0 and reader.histogram.max == 255
h = reader.histogram.bin
for i in range(256):
if counts[i] != h[i]:
print("Histogram bin", i, "expected", counts[i], "actual", h[i])
assert reader.statistics.cnt == 256*128*128 - count_nan - count_inf
assert reader.histogram.cnt == 256*128*128 - count_nan - count_inf
assert np.all(np.array(reader.histogram.bin) == counts)
#assert reader.statistics.inf == count_nan + count_inf # not in api
# --- bricks stored as all-constant or not ---
BRICK = (64, 64, 64)
assert reader.readconst((0,0,0), BRICK) is None
assert reader.readconst((64,0,0), BRICK) is None
assert np.isnan(reader.readconst((128,0,0), BRICK))
assert np.isnan(reader.readconst((192,0,0), BRICK))
assert reader.readconst((0,64,0), BRICK) is None
assert reader.readconst((64,64,0), BRICK) is None
assert reader.readconst((128,64,0), BRICK) == np.inf
assert reader.readconst((192,64,0), BRICK) == -np.inf
# -- read back samples ---
reader.read((0,0,0), data)
assert np.all(np.isnan(data[0,0,:3]))
assert data[0,0,3] == 2
assert data[0,0,4] == 3
assert np.count_nonzero(data) == 5
reader.read((64,0,0), data)
assert np.all(np.isnan(data[0,0,:5]))
assert np.count_nonzero(data == 42) == 64*64*64 - 5
reader.read((0,64,0), data)
assert data[0,0,0] == np.inf
assert data[0,0,1] == -np.inf
assert data[0,0,2] == np.inf
assert data[0,0,3] == 3
assert data[0,0,4] == 4
assert np.count_nonzero(data) == 5
reader.read((64,64,0), data)
assert np.all(data[0,0,:13] == np.inf)
assert np.all(data[0,1,:10] == -np.inf)
assert np.count_nonzero(data == 255) == 64*64*64 - 13 - 10
# --- read back low resolution ---
# LOD1 should be sufficient to test.
# Note that this only tests a single decimation algorithm
# and the functions that call it. There needs to be separate
# unit tests to verify that all decimation algorithms have a
# reasonable behavior for nan and inf.
fullres = np.zeros((128, 128, 128), dtype=np.float32)
reader.read((0,0,0), fullres, lod=0)
reader.read((0,0,0), data, lod=1)
# Input first trace: nan, nan, nan, 2, 3
# An extra slop factor is needed because the calculation is done in float32.
assert math.isclose(data[0,0,0], 0, rel_tol=1.0e-5) # 2 NaN (skipped), the rest zero.
assert math.isclose(data[0,0,1], 2/7, rel_tol=1.0e-5) # 1 NaN (skipped), 1 "2", rest "0"
assert math.isclose(data[0,0,2], 3/8, rel_tol=1.0e-5) # one "3", rest default to zero
# Input trace: 5*nan, rest is 42. With "Average" decimation
# each output sample found at least one finite value.
assert np.all(data[32:64, 0:32, 0:32] == 42)
# Input trace: +inf, -inf, +inf, 3, 4. All others 0.
# Note: The C++ code skips +/- inf. Numpy includes them unless
# told otherwise, and the average of +inf and -inf is NaN.
# These rules are pretty obscure and it is probably easier to
# TODO-Low adopt the C++ strategy in both places.
#showdecimation(fullres[0:2,64:66,0:20], data[0,32,0:10])
assert np.isnan(data[0,32,0])
assert data[0,32,1] == np.inf
assert math.isclose(data[0,32,2], 4/8, rel_tol=1.0e-5) # one "4", rest default to zero
# Input trace: 13 * +inf in one trace, 10 * -inf in another.
# So the first 5 samples have average(-inf,+inf) => nan
# the next 2 samples have average(255,+inf) => +inf
# Everything else should be 255.
# UPDATE: In the C++ version (and soon also Python)
# +/- inf is ignored so all decimated samples are 255.
#showdecimation(fullres[64:66,64:66,0:20], data[32,32,0:10])
assert np.all(np.isnan(data[32,32,:5]))
assert data[32,32,5] == np.inf
assert data[32,32,6] == np.inf
assert data[32,32,7] == 255
# Now read the brick built from all-constant input.
reader.read((64,0,0), data, lod=1)
d1 = data[:32,:32,:32] # from data written at (128,0,0)
d2 = data[32:,:32,:32] # from data written at (192,0,0)
d3 = data[:32,32:,:32] # from data written at (128,64,0)
d4 = data[32:,32:,:32] # from data written at (192,64,0)
assert np.all(np.isnan(d1))
assert np.all(np.isnan(d2))
assert np.all(d3 == np.inf)
assert np.all(d4 == -np.inf)
def testWriteNaanToIntegerStorage(filename):
with newzgy.ZgyWriter(filename,
size = (256, 128, 128),
iocontext = SDCredentials(),
datatype = SampleDataType.int8,
datarange = (-128,+127)
) as writer:
data = np.zeros((64, 64, 64), dtype=np.float32)
data[0,0,42] = np.nan
writer.write((0,0,0), data)
def testZeroCentric(filename):
"""
Specific test for the zero-centric property. When the hard coded
(in this test) datarange is zero-centric then the rounding makes
an equal number of small positive and small negative numbers
end up being returned as zero after a roundtrip.
"""
data = np.array([[[
-1.4, -1.2, -1.0, -0.8, -0.6,
-0.4, -0.2, +0.0, +0.2, +0.4,
+0.6, +0.8, +1.0, +1.2, +1.4,
100.0, 200.0]]], dtype=np.float32)
expect = np.array([[[
-1, -1, -1, -1, -1,
0, 0, 0, 0, 0,
1, 1, 1, 1, 1,
100, 200]]], dtype=np.float32)
with newzgy.ZgyWriter(filename,
iocontext = SDCredentials(),
size = (64, 64, 64),
datatype = SampleDataType.int8,
datarange = (-28,+227),
) as writer:
writer.write((0,0,0), data)
with newzgy.ZgyReader(filename, iocontext = SDCredentials()) as reader:
actual = np.zeros((1, 1, expect.size), dtype=np.float32)
reader.read((0,0,0), actual)
assert np.all(np.isclose(expect, actual))
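# Why (-28, +227) is zero-centric for int8: the slope is
# (227 - (-28)) / 255 = 1.0, and user value 0 maps to storage
# (0 - (-28)) / 1.0 - 128 = -100, an exact integer. Hence 0 survives a round
# trip and values in (-0.5, +0.5) all collapse to it, which is what the
# "expect" array above encodes. A short sketch of that arithmetic; the storage
# mapping formula is an assumption for illustration, not taken from the API:
def _zero_centric_sketch(lo=-28.0, hi=+227.0, bits=8):
    steps = 2**bits - 1
    slope = (hi - lo) / steps                        # 1.0 for this range
    storage_of_zero = (0.0 - lo) / slope - 2**(bits - 1)
    return slope, storage_of_zero                    # (1.0, -100.0): zero representable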
def testFinalizeProgress(filename, abort = False):
"""
Check the progress callback that can be installed while generating
low resolution bricks. Optionally check that the callback can be
used to abort the generation.
"""
class Progress:
def __init__(self, abort = False):
self._abort = abort
self._complete = False
self._waszero = False
def __call__(self, done, total):
self._complete = bool(done == total)
self._waszero = self._waszero or done == 0
#print("done {0}/{1}".format(done, total))
return not abort or done < total//4
with newzgy.ZgyWriter(filename,
iocontext = SDCredentials(),
size = (112+640, 64+320, 176),
) as writer:
writer.write((16,16,16), np.full((40,41,42), 31, dtype=np.float32))
writer.write((48,20,24), np.full((72,10,16), 97, dtype=np.float32))
writer.write((0,0,64), np.full((112,64,64), 0, dtype=np.float32))
writer.write((512,0,0), np.full((128,128,64), 42, dtype=np.float32))
progress = Progress(abort)
if abort:
# The progress callback will return False on 25% done.
with MustThrow(extypes = newzgy.ZgyAborted):
writer.finalize(progress=progress)
assert progress._waszero
assert not progress._complete
else:
writer.finalize(progress=progress)
assert progress._waszero
assert progress._complete
def testHugeFile(filename):
"""
Create a very sparse file where the declared size is large enough
to make the header area > 1 MB. This can trigger some issues.
Number of bricks:
Lod 0: 64*64*32 bricks = 131072
Lod 1: 32*32*16 bricks = 16384
Lod 2: 16*16*8 bricks = 2048
Lod 3: 8*8*4 bricks = 256
Lod 4: 4*4*2 bricks = 32
Lod 5: 2*2*1 bricks = 4
Lod 6: 1*1*1 brick = 1
SUM: 149797 bricks, 1.14 MB of brick lookup tables
Rounded up to brick size there will be 1.25 MB of headers.
Non-constant bricks: Only one per layer. 1.75 MB total
Total file size: 3 MB.
"""
with newzgy.ZgyWriter(filename,
iocontext = SDCredentials(),
datatype = SampleDataType.int8,
datarange = (-128,+127),
size = (64*64, 64*64, 32*64),
) as writer:
writer.write((640,512,0), np.full((64,64,65), 42, dtype=np.float32))
#writer.finalize(progress=ProgressWithDots())
with newzgy.ZgyReader(filename, iocontext = SDCredentials()) as reader:
assert reader.nlods == 7
c1 = reader.readconst((640,512,0), (64,64,64))
c2 = reader.readconst((640,512,64), (64,64,64))
c3 = reader.readconst((640,512,129), (64,64,64))
assert c1 == 42 # writer detected it was constant
assert c2 is None # partly written
assert c3 == 0 # never written
assert os.stat(filename).st_size == 3 * 1024 * 1024
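# The brick counts in the docstring above follow from halving (and rounding
# up) the brick count in each dimension per LOD level until a single brick
# remains. A short sketch reproducing those numbers:
def _lod_brick_count_sketch(size=(64*64, 64*64, 32*64), bricksize=64):
    bricks = [-(-s // bricksize) for s in size]      # ceil division
    counts = []
    while True:
        counts.append(bricks[0] * bricks[1] * bricks[2])
        if max(bricks) == 1:
            break
        bricks = [-(-b // 2) for b in bricks]
    return counts
# sum(_lod_brick_count_sketch()) == 149797, i.e. about 1.14 MB of lookup
# tables at 8 bytes per entry, as noted in the docstring.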
def testDecimateOddSize(filename):
"""
At the survey edge, the decimation that normally has 8 samples input
might only have 4, 2, or 1. Make sure the code doesn't include
the padding in its computation.
"""
with newzgy.ZgyWriter(filename, iocontext = SDCredentials(),
size = (7, 13, 64+17)
) as writer:
data = np.full(writer.size, 200, dtype=np.float32)
data[0::2,:,:] = 100
data[:,0::2,:] = 50
assert np.all(data[:,:,:] == data[:,:,0:1])
writer.write((0,0,0), data)
writer.finalize(decimation = [DecimationType.Average])
with newzgy.ZgyReader(filename, iocontext = SDCredentials()) as reader:
assert reader.nlods == 2
data = np.zeros((4, 7, 32+9), dtype=np.float32)
reader.read((0,0,0), data, lod=1)
# Within each trace all samples should be the same, also
# the last one, since this is true also for the input.
assert np.all(data[:,:,:] == data[:,:,0:1])
# Most output values will be avg(200, 100, 50, 50) = 100.
# At the edges in i/j it should be average(50, 100) or (50,50).
# At the corner expect average(50) i.e. 50.
# If the implementation erroneously tried to read the
# padding (which ought to be zero) the numbers will be lower.
# Currently in OpenZGY/C++ the samples not based on 8 neighbors
# might be set to 0.
assert np.all(data[:3, :6, :] == 100)
assert np.all(data[:3, 6, :] == 50)
assert np.all(data[3, :6, :] == 75)
assert np.all(data[3, 6, :] == 50)
def testDecimateWeightedAverage(filename):
"""
As test.lodalgo.testSpecial but very simplified, just to make sure
the default lod2 algorithm is in fact WeightedAverage. The lod1
default is LowPass; to avoid this getting in the way I will
make all traces constant-value. This makes LowPass behave as
Decimate (or Average, or Median, etc.)
"""
with newzgy.ZgyWriter(filename, iocontext = SDCredentials(),
size = (64, 256, 512)
) as writer:
data = np.zeros((64, 64, 512), dtype=np.float32)
# 1/4 brick of 300, 3/4 brick of 100, 3 bricks of unwritten 0.
data[:16,:,:] = 300
data[16:,:,:] = 100
tiny = np.array([[300, 300, 0, 0],
[300, 300, 0, 0],
[0, 0, 100, 100],
[0, 0, 100, 100]], dtype=np.float32)
# In lod 1 this will be just 300, 0, 0, 100
tiny = tiny.reshape((4,4,1))
data[:4,:4,:] = tiny
assert np.all(data[:,:,:] == data[:,:,0:1])
writer.write((0,0,0), data)
#writer.finalize(decimation = [DecimationType.Average])
with newzgy.ZgyReader(filename, iocontext = SDCredentials()) as reader:
assert reader.nlods >= 3
# Checking the lowpass output, including the fact that it is
# supposed to have zero DC bias.
data = np.zeros((2, 2, 256), dtype=np.float32)
reader.read((0,0,0), data, lod=1)
#print(data[:,:,0])
assert np.all(np.isclose(data[0,0,:], 300))
assert np.all(np.isclose(data[0,1,:], 0))
assert np.all(np.isclose(data[1,0,:], 0))
assert np.all(np.isclose(data[1,1,:], 100))
data = np.zeros((1, 1, 1), dtype=np.float32)
reader.read((0,0,0), data, lod=2)
# average(300, 0, 0, 100) is 100 but we expect something closer to
# 300 since this value is relatively more scarce.
#print(data)
assert data.flat[0] > 200
def testMixingUserAndStorage(filename):
"""
When the file has an integer type both reading and writing can be done
either in float user sample values or in integral storage values.
Try all 4 combinations.
"""
with newzgy.ZgyWriter(filename, iocontext = SDCredentials(),
datatype = SampleDataType.int8, datarange = (-2,+763),
size = (64, 64, 512)
) as writer:
# user = 3*storage + 382
# storage = (user - 382) / 3
# user 3 -> storage -126.33 -> -126 -> user 4
# user 12 -> storage -123.33 -> -123 -> user 13
# user 40 -> storage -114
# user 71 -> storage -103.66 -> -104 -> user 70
w1 = np.zeros((64, 64, 64), dtype=np.float32)
w2 = np.zeros((64, 64, 64), dtype=np.float32)
w3 = np.zeros((64, 64, 64), dtype=np.int8)
w4 = np.zeros((64, 64, 64), dtype=np.int8)
w1[0,0,0] = 3.0 # user 4 <-> storage -126
w2[0,0,0] = 12.0 # user 13 <-> storage -123
w3[0,0,0] = -114 # user 40 <-> storage -114
w4[0,0,0] = -104 # user 70 <-> storage -104
writer.write((0,0,0), w1)
writer.write((0,0,64), w2)
writer.write((0,0,128), w3)
writer.write((0,0,192), w4)
with newzgy.ZgyReader(filename, iocontext = SDCredentials()) as reader:
r1 = np.zeros((1, 1, 1), dtype=np.float32)
r2 = np.zeros((1, 1, 1), dtype=np.int8)
r3 = np.zeros((1, 1, 1), dtype=np.float32)
r4 = np.zeros((1, 1, 1), dtype=np.int8)
reader.read((0,0,0), r1)
reader.read((0,0,64), r2)
reader.read((0,0,128), r3)
reader.read((0,0,192), r4)
#print("expect", 4.0, -123, 40.0, -114)
#print("actual", r1.flat[0], r2.flat[0], r3.flat[0], r4.flat[0])
assert np.isclose(r1.flat[0], 4.0)
assert r2.flat[0] == -123
assert np.isclose(r3.flat[0], 40.0)
assert r4.flat[0] == -104
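# The user<->storage conversions in the comments above come from the linear
# transform implied by datarange (-2, +763) and int8 storage (-128..127):
#   slope  = (763 - (-2)) / 255 = 3
#   user   = slope * storage + offset, with offset chosen so storage -128
#            maps to user -2, i.e. offset = -2 + 128 * 3 = 382.
# A small sketch of that round trip; the rounding rule is an assumption:
def _user_storage_sketch(user, lo=-2.0, hi=+763.0):
    slope = (hi - lo) / 255.0                        # 3.0 for this range
    offset = lo + 128.0 * slope                      # 382.0
    storage = int(round((user - offset) / slope))
    return storage, slope * storage + offset
# _user_storage_sketch(3.0) == (-126, 4.0) and _user_storage_sketch(12.0)
# == (-123, 13.0), matching the comments above.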
def testSmallConstArea(filename):
"""
Check what happens when writeconst() is called with a region
smaller than one brick. Application code might well specify
a region that doesn't align with the bricks. Actually writing
less than a brick in total would be odd, but the corner cases
that need to be handled are similar.
"""
with newzgy.ZgyWriter(filename, iocontext = SDCredentials(),
datatype = SampleDataType.int8, datarange = (-128,+127),
size = (64, 64, 256)
) as writer:
writer.writeconst((0,0,128), 42, size=(64,64,128), is_storage=True)
# unwritten brick, value matches defaultvalue -> mark as const
# unwritten brick, value does not match default -> inflate
# const brick, value matches previous brick -> no-op
# const brick, value differs -> inflate
writer.writeconst((1,2,3+0), 0, size=(11,12,13), is_storage=True)
writer.writeconst((1,2,3+64), 15, size=(11,12,13), is_storage=True)
writer.writeconst((1,2,3+128), 42, size=(11,12,13), is_storage=True)
writer.writeconst((1,2,3+192), 67, size=(11,12,13), is_storage=True)
with newzgy.ZgyReader(filename, iocontext = SDCredentials()) as reader:
BRICK = (64,64,64)
r1 = reader.readconst((0,0,0), BRICK, as_float = False)
r2 = reader.readconst((0,0,64), BRICK, as_float = False)
r3 = reader.readconst((0,0,128), BRICK, as_float = False)
r4 = reader.readconst((0,0,192), BRICK, as_float = False)
#print("testSmallConstArea:", r1, r2, r3, r4)
assert r1 == 0 # Was converted from "unwritten" to "const zero"
assert r2 is None # Brick now contains a mix of 0 and 15.
assert r3 == 42 # No-op; the brick already contained const 42.
assert r4 is None # Brick now contains a mix of 42 and 67.
onevalue_t = namedtuple("result", "range stats histo stats_count histo_count bins")
def testHistoOneValue(filename, dtype, value, fill, *, datarange = None, verbose = False):
if verbose:
print("Test dtype", dtype, "value", value,
("only" if fill else "and unwritten bricks"))
center = value if np.isfinite(value) else -0.25
with newzgy.ZgyWriter(filename, iocontext = SDCredentials(),
size = (64, 64, 3*64),
datatype = dtype,
datarange = datarange or (center-1, center+1)
) as writer:
if np.isfinite(value):
writer.writeconst((0, 0, 0), value,
size=(64, 64, 64), is_storage=False)
if fill:
writer.writeconst((0, 0, 64), value,
size=(64, 64, 128), is_storage=False)
writer.finalize(force=True)
with newzgy.ZgyReader(filename, iocontext = SDCredentials()) as reader:
if verbose:
print("Data range", reader.datarange)
print("Statistics", reader.statistics)
print("Histogram ", (reader.histogram.min, reader.histogram.max))
return onevalue_t((reader.datarange[0], reader.datarange[1]),
(reader.statistics.min, reader.statistics.max),
(reader.histogram.min, reader.histogram.max),
reader.statistics.cnt,
np.sum(reader.histogram.bin),
reader.histogram.bin)
def testHistoCornercaseFloat(filename):
# Float: datarange with zero size is valid on input,
# in fact the data range isn't specified by the user.
# Reading back data gives the statistical range
# which for float may include defaultvalue.
# The histogram will use the fuzzy algorithm.
# The numbers in brackets correspond to the ones in
# GenLodImpl::suggestHistogramRange().
# [3] nothing written.
# Note that the writer might need to pass force=True to finalize()
# to get the histogram- and statistics information written out even
# when no actual data has been written. I am unsure about how the
# principle of least surprise applies here. As of Oct 2020 the force
# is required. See the ZgyWriter constructor setting _dirty(False).
BRICK = 64*64*64
r = testHistoOneValue(filename, SampleDataType.float, np.nan, False)
assert r.range == r.stats
assert r.histo_count == r.stats_count
assert r.stats == (0, 0)
assert r.histo == (-128, +127)
assert r.stats_count == 3*BRICK # Assuming finalize with force=True
assert r.bins[128] == r.histo_count
# [4] one all zero brick, two never written.
# Expected result same as for nothing written.
r = testHistoOneValue(filename, SampleDataType.float, 0, False)
assert r.range == r.stats
assert r.histo_count == r.stats_count
assert r.stats == (0, 0)
assert r.histo == (-128, +127)
assert r.stats_count == 3*BRICK
assert r.bins[128] == r.histo_count
# [4] three all zero bricks.
# Expected result same as for nothing written.
r = testHistoOneValue(filename, SampleDataType.float, 0, True)
assert r.range == r.stats
assert r.histo_count == r.stats_count
assert r.stats == (0, 0)
assert r.histo == (-128, +127)
assert r.stats_count == 3*BRICK
assert r.bins[128] == r.histo_count
# [6] single negative value, plus two never written bricks.
# The statistics and histogram include the never-written
# samples as if they were zero.
# Note: I won't be testing the "some never written" scenario
# for every remaining case; it is hopefully enough to
# confirm once that never-written is treated as written-zero.
r = testHistoOneValue(filename, SampleDataType.float, -42, False)
assert r.range == r.stats
assert r.histo_count == r.stats_count
assert r.stats == (-42, 0)
assert r.histo == (-42, 0)
assert r.stats_count == 3*BRICK
assert r.bins[0] == BRICK
assert r.bins[255] == 2*BRICK
# [6] single negative value in all three bricks.
# The value range and the statistics should have the true
# range, i.e. low==high, and the histogram range should be wider.
r = testHistoOneValue(filename, SampleDataType.float, -42, True)
assert r.range == r.stats
assert r.histo_count == r.stats_count
assert r.stats == (-42, -42)
assert r.histo == (-42, 0)
assert r.stats_count == 3*BRICK
assert r.bins[0] == 3*BRICK
assert r.bins[255] == 0
# [6] single positive value in all three bricks.
# Result similar to the above but the ranges are swapped.
r = testHistoOneValue(filename, SampleDataType.float, +42, True)
assert r.range == r.stats
assert r.histo_count == r.stats_count
assert r.stats == (42, 42)
assert r.histo == (0, 42)
assert r.stats_count == 3*BRICK
assert r.bins[0] == 0
assert r.bins[255] == 3*BRICK
def testHistoCornercaseInt(filename):
# Integral data.
# Histogram range should always match the user provided range,
# which for never-written is -1.25 to +0.75 and for the
# remaining cases value +/- 1. This means that value won't be
# exactly representable as an integer (it maps to -0.5) and
# this will be noticeable in the statistics. The 0.5 factor
# may also lead to numerical instability. The samples end up
# either in bin 127 or bin 128.
# Also, range might be wider than statistics (unlike the float
# case) if not all possible storage values have been used.
BRICK = 64*64*64
r = testHistoOneValue(filename, SampleDataType.int8, np.nan, False)
# Invariants for the integer case
assert r.range[0] <= r.stats[0] and r.range[1] >= r.stats[1]
assert r.histo == r.range
assert r.histo_count == r.stats_count
# Data dependent
assert r.stats[0] == r.stats[1]
assert abs(r.stats[0] - 0) < 0.25
assert abs(r.stats[0] - 0) > 0.001 # 0.0 not representable.
assert r.histo[0] == -1.25 and r.histo[1] == 0.75 # user choice exactly.
assert r.stats_count == 3*BRICK # Assuming finalize with force=True
# I don't really care where the "0" samples end up. It won't be the center.
assert r.bins[127] + r.bins[128] == 0
r = testHistoOneValue(filename, SampleDataType.int8, 0, True)
# Invariants for the integer case
assert r.range[0] <= r.stats[0] and r.range[1] >= r.stats[1]
assert r.histo == r.range
assert r.histo_count == r.stats_count
# Data dependent
assert r.stats[0] == r.stats[1]
assert abs(r.stats[0] - 0) < 0.25
assert abs(r.stats[0] - 0) > 0.001 # 0.0 not representable.
assert r.histo[0] == 0-1 and r.histo[1] == 0+1 # user choice exactly.
assert r.stats_count == 3*BRICK
assert r.bins[127] + r.bins[128] == 3*BRICK
r = testHistoOneValue(filename, SampleDataType.int8, -42, True)
# Invariants for the integer case
assert r.range[0] <= r.stats[0] and r.range[1] >= r.stats[1]
assert r.histo == r.range
assert r.histo_count == r.stats_count
# Data dependent
assert r.stats[0] == r.stats[1]
assert abs(r.stats[0] + 42) < 0.25
assert abs(r.stats[0] + 42) > 0.001 # 42.0 not representable.
assert r.histo[0] == -42-1 and r.histo[1] == -42+1 # user choice exactly.
assert r.stats_count == 3*BRICK
assert r.bins[127] + r.bins[128] == 3*BRICK
r = testHistoOneValue(filename, SampleDataType.int8, +42, True)
# Invariants for the integer case
assert r.range[0] <= r.stats[0] and r.range[1] >= r.stats[1]
assert r.histo == r.range
assert r.histo_count == r.stats_count
# Data dependent
assert r.stats[0] == r.stats[1]
assert abs(r.stats[0] - 42) < 0.25
assert abs(r.stats[0] - 42) > 0.001 # 42.0 not representable.
assert r.histo[0] == 42-1 and r.histo[1] == 42+1 # user choice exactly.
assert r.stats_count == 3*BRICK
assert r.bins[127] + r.bins[128] == 3*BRICK
# 16 bit not much different from 8 bit, but the statistics will be
# closer to the supplied value because the quantization error is smaller.
r = testHistoOneValue(filename, SampleDataType.int16, np.nan, False)
# Invariants for the integer case
assert r.range[0] <= r.stats[0] and r.range[1] >= r.stats[1]
assert r.histo == r.range
assert r.histo_count == r.stats_count
# Data dependent
assert r.stats[0] == r.stats[1]
assert abs(r.stats[0] - 0) < 0.25/256
assert abs(r.stats[0] - 0) > 0.001/256 # 0.0 not representable.
assert r.histo[0] == -1.25 and r.histo[1] == 0.75 # user choice exactly.
assert r.stats_count == 3*BRICK
# I don't really care where the "0" samples end up. It won't be the center.
assert r.bins[127] + r.bins[128] == 0
r = testHistoOneValue(filename, SampleDataType.int16, 0, True)
# Invariants for the integer case
assert r.range[0] <= r.stats[0] and r.range[1] >= r.stats[1]
assert r.histo == r.range
assert r.histo_count == r.stats_count
# Data dependent
assert r.stats[0] == r.stats[1]
assert abs(r.stats[0] - 0) < 0.25/256
assert abs(r.stats[0] - 0) > 0.001/256 # 0.0 not representable.
assert r.histo[0] == 0-1 and r.histo[1] == 0+1 # user choice exactly.
assert r.stats_count == 3*BRICK
assert r.bins[127] + r.bins[128] == 3*BRICK
r = testHistoOneValue(filename, SampleDataType.int16, -42, True)
# Invariants for the integer case
assert r.range[0] <= r.stats[0] and r.range[1] >= r.stats[1]
assert r.histo == r.range
assert r.histo_count == r.stats_count
# Data dependent
assert r.stats[0] == r.stats[1]
assert abs(r.stats[0] + 42) < 0.25/256
assert abs(r.stats[0] + 42) > 0.001/256 # 42.0 not representable.
assert r.histo[0] == -42-1 and r.histo[1] == -42+1 # user choice exactly.
assert r.stats_count == 3*BRICK
assert r.bins[127] + r.bins[128] == 3*BRICK
r = testHistoOneValue(filename, SampleDataType.int16, +42, True)
# Invariants for the integer case
assert r.range[0] <= r.stats[0] and r.range[1] >= r.stats[1]
assert r.histo == r.range
assert r.histo_count == r.stats_count
# Data dependent
assert r.stats[0] == r.stats[1]
assert abs(r.stats[0] - 42) < 0.25/256
assert abs(r.stats[0] - 42) > 0.001/256 # 42.0 not representable.
assert r.histo[0] == 42-1 and r.histo[1] == 42+1 # user choice exactly.
assert r.stats_count == 3*BRICK
assert r.bins[127] + r.bins[128] == 3*BRICK
# Behavior when all explicitly written values get clipped.
# Expect both the histogram and the statistics to only reflect
# the clipped value (-5) as if that value and not -42 had been
# written.
r = testHistoOneValue(filename, SampleDataType.int8, -42, True,
datarange = (-5, +760))
# Invariants for the integer case
assert r.range[0] <= r.stats[0] and r.range[1] >= r.stats[1]
assert r.histo == r.range
assert r.histo_count == r.stats_count
# Data dependent
assert r.stats == (-5, -5)
assert r.histo == (-5, +760)
assert r.stats_count == 3*BRICK
assert r.bins[0] == 3*BRICK
# As above, all explicitly written values get clipped but now
# there are a few unwritten bricks. Expect both the histogram
# and the statistics to only reflect the clipped value (-5) as
# if that value and not -42 had been written.
# Defaultvalue is +1 because the range does not give a zero
# centric histogram. The statistics should also reflect that.
# I.e. expect +1 to be part of the range.
r = testHistoOneValue(filename, SampleDataType.int8, -42, False,
datarange = (-5, +760))
# Invariants for the integer case
assert r.range[0] <= r.stats[0] and r.range[1] >= r.stats[1]
assert r.histo == r.range
assert r.histo_count == r.stats_count
# Data dependent
assert r.stats == (-5, +1)
assert r.histo == (-5, +760)
assert r.stats_count == 3*BRICK
assert r.bins[0] == BRICK
assert r.bins[2] == 2*BRICK
# Similar to the above but no values written at all.
# Defaultvalue is still 1 due to the missing zero-centric property,
# so this is what should be reflected in the statistics.
r = testHistoOneValue(filename, SampleDataType.int8, np.nan, False,
datarange = (-5, +760))
# Invariants for the integer case
assert r.range[0] <= r.stats[0] and r.range[1] >= r.stats[1]
assert r.histo == r.range
assert r.histo_count == r.stats_count
# Data dependent
assert r.stats == (+1, +1)
assert r.histo == (-5, +760)
assert r.stats_count == 3*BRICK
assert r.bins[2] == 3*BRICK
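# The 0.25 and 0.001 bounds used above reflect the quantization error of the
# integral storage types: with a datarange of (value-1, value+1) the step is
# 2/255 for int8 and 2/65535 for int16, and the worst-case round-trip error is
# half a step. A sketch of that arithmetic; the mapping is an assumption,
# chosen only to illustrate the magnitudes asserted above:
def _quantization_error_sketch(value=42.0, bits=8):
    lo, hi = value - 1.0, value + 1.0
    step = (hi - lo) / (2**bits - 1)
    storage = round((value - lo) / step)             # 127.5 rounds to an integer
    return abs((lo + storage * step) - value)        # at most step / 2
# For bits=8 this is about 0.0039 (inside 0.001..0.25); for bits=16 it is
# about 1.5e-5 (inside 0.001/256..0.25/256), matching the asserts above.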
def testFancyDefaultValue():
"""
Part of the test suite using the same test data stored in different ways.
Check what happens when reading samples that were never written.
The rectangles used are:
a) Dead area of partly written brick
b) Part dead area, part all-constant brick
c) all-constant brick
d) part all-constant brick, part unwritten brick
e) unwritten brick.
In the new reader all should return the default value.
In the old reader the last one might throw a missing brick exception;
it does in the C++ ZGY-Public API, but the Python wrapper catches it.
And the penultimate one might read zero from the unwritten area
while still seeing the default (1 in this case) elsewhere.
Also check reading completely outside range. The new accessor should
raise exceptions; the old one does whatever it feels like doing.
"""
with LocalFileAutoDelete("fancy-2.zgy") as fn:
createFancyFile(fn.name, SampleDataType.int8, (-2,+763),
newzgy.ZgyWriter)
checkReadingDeadArea(fn.name, (5, 22, 1), oldzgy.ZgyReader, 1)
checkReadingDeadArea(fn.name, (5, 22, 63), oldzgy.ZgyReader, 1)
checkReadingDeadArea(fn.name, (5, 22, 65), oldzgy.ZgyReader, 1)
checkReadingDeadArea(fn.name, (5, 22, 127), oldzgy.ZgyReader,
np.array([[[1, 0],[1, 0]],[[1, 0],[1, 0]]]))
checkReadingDeadArea(fn.name, (5, 22, 129), oldzgy.ZgyReader, 0)
#checkReadingOutsideRange(fn.name, oldzgy.ZgyReader)
#checkReadingOutsideLod(fn.name, oldzgy.ZgyReader)
#checkReadingToWrongValueType(fn.name, oldzgy.ZgyReader)
checkReadingDeadArea(fn.name, (5, 22, 1), newzgy.ZgyReader, 1)
checkReadingDeadArea(fn.name, (5, 22, 63), newzgy.ZgyReader, 1)
checkReadingDeadArea(fn.name, (5, 22, 65), newzgy.ZgyReader, 1)
checkReadingDeadArea(fn.name, (5, 22, 127), newzgy.ZgyReader, 1)
checkReadingDeadArea(fn.name, (5, 22, 129), newzgy.ZgyReader, 1)
checkReadingOutsideRange(fn.name, newzgy.ZgyReader)
checkReadingOutsideLod(fn.name, newzgy.ZgyReader)
checkReadingToWrongValueType(fn.name, newzgy.ZgyReader)
def testFancyReadConstant():
"""
Test the new API in openzgy to return brick status.
"""
with LocalFileAutoDelete("fancy-2.zgy") as fn:
createFancyFile(fn.name, SampleDataType.int8, (-2,+763),
newzgy.ZgyWriter)
with newzgy.ZgyReader(fn.name, iocontext = SDCredentials()) as reader, io.StringIO() as bitbucket:
verbose = lambda *args, **kwargs: print(*args, file=bitbucket, **kwargs)
# While the data inside this small rectangle is indeed constant,
# the whole brick is not. So, it won't be flagged as const val.
a = reader.readconst((17,17,17), (2,2,2), as_float = True, verbose=verbose)
b = reader.readconst((17,17,17), (2,2,2), as_float = False)
assert(a is None)
assert(b is None)
# In this case the enclosing brick was explicitly written with
# constant value 0, which will be read back as 1 because
# the range is not zero centric.
a = reader.readconst((1,2,67), (4,5,6), as_float = True)
b = reader.readconst((1,2,67), (4,5,6), as_float = False)
assert math.isclose(a, 1.0)
assert math.isclose(b, -127)
# Brick written as constant value 0 but only the region inside
# the survey. Whether this registers as "constant" may be
# considered an implementation detail. But ideally it ought to.
a = reader.readconst((65,2,67), (4,5,6), as_float = True)
b = reader.readconst((65,2,67), (4,5,6), as_float = False)
assert math.isclose(a, 1.0)
assert math.isclose(b, -127)
# Two bricks never written, two with constant value 0.
a = reader.readconst((0,0,64), (128,64,128), as_float = True)
b = reader.readconst((0,0,64), (128,64,128), as_float = False)
assert math.isclose(a, 1.0)
assert math.isclose(b, -127)
def testFancyMisc():
"""
Part of the test suite using the same test data stored in different ways.
"""
with LocalFileAutoDelete("fancy-1.zgy") as fn:
createFancyFile(fn.name, SampleDataType.int8, (-28,+227),
newzgy.ZgyWriter)
# Doesn't really belong here but doesn't bother to create a test file.
runCloseOnException(fn.name, newzgy.ZgyReader)
runErrorOnClose(fn.name, newzgy.ZgyReader)
runConversions(fn.name, newzgy.ZgyReader)
runErrorIfNotOpenForRead(fn.name, newzgy.ZgyReader)
runDumpToDevNull(fn.name, newzgy.ZgyReader)
if HasOldZgy():
runCloseOnException(fn.name, oldzgy.ZgyReader)
runConversions(fn.name, oldzgy.ZgyReader)
runErrorIfNotOpenForRead(fn.name, oldzgy.ZgyReader)
with LocalFileAutoDelete("fancy-1-clone.zgy") as cloned:
runClone(cloned.name, fn.name)
runUpdate(cloned.name)
runDumpMembers(cloned.name, fn.name)
def testFancy1():
"""
Part of the test suite using the same test data stored in different ways.
OpenZGY writer, both OpenZGY and ZGY-Public reader, local file, int8.
The coding range is asymmetric but zero centric.
"""
with LocalFileAutoDelete("fancy-1.zgy") as fn:
createFancyFile(fn.name, SampleDataType.int8, (-28,+227),
newzgy.ZgyWriter)
checkContents(fn.name, oldzgy.ZgyReader, 0, 0)
checkContents(fn.name, newzgy.ZgyReader, 0, 0)
checkLodContents(fn.name, oldzgy.ZgyReader, 0, 0)
checkLodContents(fn.name, newzgy.ZgyReader, 0, 0)
# The next line reveals a bug in ZGY-Public.
checkRawContents(fn.name, oldzgy.ZgyReader, 0, 100)
checkRawContents(fn.name, newzgy.ZgyReader, 0, 0)
checkStatistics(fn.name, oldzgy.ZgyReader, 0, 0, True)
checkStatistics(fn.name, newzgy.ZgyReader, 0, 0, True)
checkHistogram(fn.name, oldzgy.ZgyReader, 0, 0, True)
checkHistogram(fn.name, newzgy.ZgyReader, 0, 0, True)
def testFancy2():
"""
Part of the test suite using the same test data stored in different ways.
OpenZGY writer, both OpenZGY and ZGY-Public reader, local file, int8.
Unlike #1 the coding range is not zero centric. So 0 cannot be represented.
What can be stored is -2, +1, +4, ..., +763, i.e. only values of the form 3*n+1.
So my sample data values 31 and 301 are representable, but zero is not.
"""
with LocalFileAutoDelete("fancy-2.zgy") as fn:
createFancyFile(fn.name, SampleDataType.int8, (-2,+763),
newzgy.ZgyWriter)
checkContents(fn.name, oldzgy.ZgyReader, 1, 0)
checkContents(fn.name, newzgy.ZgyReader, 1, 1)
checkLodContents(fn.name, oldzgy.ZgyReader, 1, 0)
checkLodContents(fn.name, newzgy.ZgyReader, 1, 1)
# The next line reveals a bug in ZGY-Public.
checkRawContents(fn.name, oldzgy.ZgyReader, 1, 382)
checkRawContents(fn.name, newzgy.ZgyReader, 1, 1)
checkStatistics(fn.name, oldzgy.ZgyReader, 1, 0, True)
checkStatistics(fn.name, newzgy.ZgyReader, 1, 1, True)
checkHistogram(fn.name, oldzgy.ZgyReader, 1, 0, True)
checkHistogram(fn.name, newzgy.ZgyReader, 1, 0, True)
def testFancy3():
"""
Part of the test suite using the same test data stored in different ways.
OpenZGY writer, both OpenZGY and ZGY-Public reader, local file, int16.
Unlike #1 and #2 zero is not included in the coding range.
The closest representable value to zero is +20.
The valuetype is now int16 instead of int8, for variation.
"""
with LocalFileAutoDelete("fancy-3.zgy") as fn:
createFancyFile(fn.name, SampleDataType.int16, (+20,+16403.75),
newzgy.ZgyWriter)
checkContents(fn.name, oldzgy.ZgyReader, 20, 0)
checkContents(fn.name, newzgy.ZgyReader, 20, 20)
checkLodContents(fn.name, oldzgy.ZgyReader, 20, 0)
checkLodContents(fn.name, newzgy.ZgyReader, 20, 20)
checkRawContents(fn.name, oldzgy.ZgyReader, 20, 8212)
checkRawContents(fn.name, newzgy.ZgyReader, 20, 20)
checkStatistics(fn.name, oldzgy.ZgyReader, 20, 0, True)
checkStatistics(fn.name, newzgy.ZgyReader, 20, 20, True)
checkHistogram(fn.name, oldzgy.ZgyReader, 20, 0, True)
checkHistogram(fn.name, newzgy.ZgyReader, 20, 20, True)
def testFancy4():
"""
Part of the test suite using the same test data stored in different ways.
OpenZGY writer, both OpenZGY and ZGY-Public reader, local file, float32.
Bad coding range hint.
The coding range for float cubes is just a hint that might be used
for the histogram range. Or it might be completely ignored
if the histogram is written during a separate pass where the exact
range is already known.
"""
with LocalFileAutoDelete("fancy-4.zgy") as fn:
createFancyFile(fn.name, SampleDataType.float, (-1,+1),
newzgy.ZgyWriter)
checkContents(fn.name, oldzgy.ZgyReader, 0, 0)
checkContents(fn.name, newzgy.ZgyReader, 0, 0)
checkLodContents(fn.name, oldzgy.ZgyReader, 0, 0)
checkLodContents(fn.name, newzgy.ZgyReader, 0, 0)
checkRawContents(fn.name, oldzgy.ZgyReader, 0, 0)
checkRawContents(fn.name, newzgy.ZgyReader, 0, 0)
checkStatistics(fn.name, oldzgy.ZgyReader, 0, 0, True)
checkStatistics(fn.name, newzgy.ZgyReader, 0, 0, True)
checkHistogram(fn.name, oldzgy.ZgyReader, 0, 0, True)
checkHistogram(fn.name, newzgy.ZgyReader, 0, 0, True)
def testFancy5():
"""
Part of the test suite using the same test data stored in different ways.
Unlike 1..4, this uses the old ZGY-Public writer, to help verify that
the old and new code produce the same result. The test uses both OpenZGY
and ZGY-Public reader, local file, int8.
"""
with LocalFileAutoDelete("fancy-5.zgy") as fn:
createFancyFile(fn.name, SampleDataType.int8, (-28,+227),
oldzgy.ZgyWriter)
checkContents(fn.name, oldzgy.ZgyReader, 0, 0)
checkContents(fn.name, newzgy.ZgyReader, 0, 0)
checkLodContents(fn.name, oldzgy.ZgyReader, 0, 0)
checkLodContents(fn.name, newzgy.ZgyReader, 0, 0)
# The next line reveals a bug in ZGY-Public.
checkRawContents(fn.name, oldzgy.ZgyReader, 0, 100)
checkRawContents(fn.name, newzgy.ZgyReader, 0, 0)
checkStatistics(fn.name, oldzgy.ZgyReader, 0, 0, False)
checkStatistics(fn.name, newzgy.ZgyReader, 0, 0, False)
checkHistogram(fn.name, oldzgy.ZgyReader, 0, 0, False)
checkHistogram(fn.name, newzgy.ZgyReader, 0, 0, False)
def testFancy6():
"""
Part of the test suite using the same test data stored in different ways.
OpenZGY Python writer, both OpenZGY and ZGY-Public reader, local file, float.
    Compared to the old writer, the user-specified codingrange
    will now be ignored and the statistical range used instead.
Note that if api.ZgyMeta.datarange chooses to enforce this
then only the old reader will be able to verify what was written.
"""
with LocalFileAutoDelete("fancy-6.zgy") as fn:
createFancyFile(fn.name, SampleDataType.float, (-1,+42),
newzgy.ZgyWriter)
checkContents(fn.name, oldzgy.ZgyReader, 0, 0)
checkContents(fn.name, newzgy.ZgyReader, 0, 0)
checkLodContents(fn.name, oldzgy.ZgyReader, 0, 0)
checkLodContents(fn.name, newzgy.ZgyReader, 0, 0)
checkRawContents(fn.name, oldzgy.ZgyReader, 0, 0)
checkRawContents(fn.name, newzgy.ZgyReader, 0, 0)
checkStatistics(fn.name, oldzgy.ZgyReader, 0, 0, True)
checkStatistics(fn.name, newzgy.ZgyReader, 0, 0, True)
checkHistogram(fn.name, oldzgy.ZgyReader, 0, 0, True)
checkHistogram(fn.name, newzgy.ZgyReader, 0, 0, True)
def testFancy7():
"""
Part of the test suite using the same test data stored in different ways.
OpenZGY Python writer, int8 with lossless compression.
Currently this is explicitly forbidden by a test in the api.
See comments in the doc and in the ZgyWriter source code for why. Also,
fewer checks because the old reader cannot handle the new compression.
"""
lossless = ZgyCompressFactory("ZFP", snr = 99)
with LocalFileAutoDelete("fancy-7.zgy") as fn:
with MustThrow("need to be stored as float", newzgy.ZgyUserError):
createFancyFile(fn.name, SampleDataType.int8, (-28,+227),
newzgy.ZgyWriter, single_write=True,
kwargs={"compressor": lossless})
#checkContents(fn.name, newzgy.ZgyReader, 0, 0)
#checkLodContents(fn.name, newzgy.ZgyReader, 0, 0)
#checkRawContents(fn.name, newzgy.ZgyReader, 0, 0)
#checkStatistics(fn.name, newzgy.ZgyReader, 0, 0, True)
#checkHistogram(fn.name, newzgy.ZgyReader, 0, 0, True)
fn.disarm()
def testFancy8():
"""
Part of the test suite using the same test data stored in different ways.
    OpenZGY Python writer, float32 with lossless compression.
"""
lossless = ZgyCompressFactory("ZFP", snr = 99)
with LocalFileAutoDelete("fancy-8.zgy") as fn:
createFancyFile(fn.name, SampleDataType.float, (-1,+42),
newzgy.ZgyWriter, single_write=True,
kwargs={"compressor": lossless})
checkContents(fn.name, newzgy.ZgyReader, 0, 0)
checkLodContents(fn.name, newzgy.ZgyReader, 0, 0)
checkRawContents(fn.name, newzgy.ZgyReader, 0, 0)
checkStatistics(fn.name, newzgy.ZgyReader, 0, 0, True)
checkHistogram(fn.name, newzgy.ZgyReader, 0, 0, True)
def testFancy9():
"""
Part of the test suite using the same test data stored in different ways.
OpenZGY Python writer, int8 with lossy compression.
Currently this is explicitly forbidden by a test in the api.
See comments in the doc and in the ZgyWriter source code for why. Also,
fewer checks because the old reader cannot handle the new compression.
"""
lossy = ZgyCompressFactory("ZFP", snr = 30)
with LocalFileAutoDelete("fancy-9.zgy") as fn:
with MustThrow("need to be stored as float", newzgy.ZgyUserError):
createFancyFile(fn.name, SampleDataType.int8, (-28,+227),
newzgy.ZgyWriter, single_write=True,
kwargs={"compressor": lossy})
#checkContents(fn.name, newzgy.ZgyReader, 0, 0, maxdelta=1.5)
#checkLodContents(fn.name, newzgy.ZgyReader, 0, 0)
#checkRawContents(fn.name, newzgy.ZgyReader, 0, 0, maxdelta=2.5)
#checkStatistics(fn.name, newzgy.ZgyReader, 0, 0, True, maxdelta=8000)
#checkHistogram(fn.name, newzgy.ZgyReader, 0, 0, True)
fn.disarm()
def testFancy10():
"""
Part of the test suite using the same test data stored in different ways.
OpenZGY Python writer, float32 with lossy compression.
"""
lossy = ZgyCompressFactory("ZFP", snr = 30)
with LocalFileAutoDelete("fancy-10.zgy") as fn:
createFancyFile(fn.name, SampleDataType.float, (-1,+42),
newzgy.ZgyWriter, single_write=True,
kwargs={"compressor": lossy})
checkContents(fn.name, newzgy.ZgyReader, 0, 0, maxdelta=2.0)
checkLodContents(fn.name, newzgy.ZgyReader, 0, 0)
checkRawContents(fn.name, newzgy.ZgyReader, 0, 0, maxdelta=2.0)
checkStatistics(fn.name, newzgy.ZgyReader, 0, 0, True, maxdelta=5000)
#checkHistogram(fn.name, newzgy.ZgyReader, 0, 0, True)
def testFancy11():
"""
Part of the test suite using the same test data stored in different ways.
New code only, small bricksize, no compression.
"""
with LocalFileAutoDelete("fancy-11.zgy") as fn:
createFancyFile(fn.name, SampleDataType.float, (-28,+227),
newzgy.ZgyWriter,
kwargs={"bricksize": (32,32,32)})
checkContents(fn.name, newzgy.ZgyReader, 0, 0)
checkLodContents(fn.name, newzgy.ZgyReader, 0, 0)
checkRawContents(fn.name, newzgy.ZgyReader, 0, 0)
checkStatistics(fn.name, newzgy.ZgyReader, 0, 0, True)
checkHistogram(fn.name, newzgy.ZgyReader, 0, 0, True)
def testFancy12():
"""
Part of the test suite using the same test data stored in different ways.
New code only, large bricksize, no compression.
"""
with LocalFileAutoDelete("fancy-12.zgy") as fn:
createFancyFile(fn.name, SampleDataType.float, (-28,+227),
newzgy.ZgyWriter,
kwargs={"bricksize": (128,128,128)})
checkContents(fn.name, newzgy.ZgyReader, 0, 0)
checkLodContents(fn.name, newzgy.ZgyReader, 0, 0)
checkRawContents(fn.name, newzgy.ZgyReader, 0, 0)
checkStatistics(fn.name, newzgy.ZgyReader, 0, 0, True)
checkHistogram(fn.name, newzgy.ZgyReader, 0, 0, True)
def testFancy13():
"""
Part of the test suite using the same test data stored in different ways.
New code only, non-rectangular bricks, no compression.
    Need single_write=True because with the very small
    bricksize my test code ends up writing more than
    one brick past the end of the survey.
"""
with LocalFileAutoDelete("fancy-13.zgy") as fn:
createFancyFile(fn.name, SampleDataType.float, (-28,+227),
newzgy.ZgyWriter, single_write=True,
kwargs={"bricksize": (16,32,128)})
checkContents(fn.name, newzgy.ZgyReader, 0, 0, maxdelta=2.0)
checkLodContents(fn.name, newzgy.ZgyReader, 0, 0)
checkRawContents(fn.name, newzgy.ZgyReader, 0, 0, maxdelta=2.0)
checkStatistics(fn.name, newzgy.ZgyReader, 0, 0, True, maxdelta=5000)
checkHistogram(fn.name, newzgy.ZgyReader, 0, 0, True)
def testFancy14():
"""
Part of the test suite using the same test data stored in different ways.
New code only, non-rectangular bricks, with compression.
"""
lossy = ZgyCompressFactory("ZFP", snr = 30)
with LocalFileAutoDelete("fancy-14.zgy") as fn:
createFancyFile(fn.name, SampleDataType.float, (-28,+227),
newzgy.ZgyWriter, single_write=True,
kwargs={"bricksize": (16,32,128), "compressor": lossy})
checkContents(fn.name, newzgy.ZgyReader, 0, 0, maxdelta=2.0)
checkLodContents(fn.name, newzgy.ZgyReader, 0, 0)
checkRawContents(fn.name, newzgy.ZgyReader, 0, 0, maxdelta=2.0)
checkStatistics(fn.name, newzgy.ZgyReader, 0, 0, True, maxdelta=5000)
#FAILS checkHistogram(fn.name, newzgy.ZgyReader, 0, 0, True)
def testCloudAutoDelete():
with CloudFileAutoDelete("xyzzy", None) as fn:
assert fn.name[:5] == "sd://"
fn.disarm()
# Seismic drive, missing credentials.
with MustThrow("service URL has not been defined", RuntimeError):
with CloudFileAutoDelete("xyzzy", None, silent=True) as fn:
assert fn.name[:5] == "sd://"
# Seismic drive, file not found.
# As of 2021-02-12 it is no longer an error to delete a non-existing file.
#with MustThrow("does not exist", RuntimeError):
with CloudFileAutoDelete("xyzzy", SDCredentials(), silent=True) as fn:
assert fn.name[:5] == "sd://"
def testReadFromCloud(filename):
with newzgy.ZgyReader(filename, iocontext=SDCredentials()) as reader, io.StringIO() as bitbucket:
verbose = lambda *args, **kwargs: print(*args, file=bitbucket, **kwargs)
assert reader.size == (181, 241, 169)
tmp = np.zeros((100, 50, 30), dtype=np.int8)
reader.read((42, 70, 50), tmp, verbose=verbose)
#print(tuple(tmp[0,0,:5]), tuple(tmp[0,0,-5:]))
assert tuple(tmp[0,0,:5]) == (57, 48, 38, 28, 17)
assert tuple(tmp[0,0,-5:]) == (-101, -91, -79, -65, -51)
def testCloudWriter(filename):
"""
    File written by the new code to seismic store.
I haven't hooked up the old API to seismic store, so do the read
checks only with newzgy.
"""
with TimeMe(" createFancyFile"):
createFancyFile(filename, SampleDataType.int8, (-28,+227), newzgy.ZgyWriter)
with TimeMe(" checkContents"):
checkContents(filename, newzgy.ZgyReader, 0, 0)
with TimeMe(" checkLodContents"):
checkLodContents(filename, newzgy.ZgyReader, 0, 0)
with TimeMe(" checkRawContents"):
checkRawContents(filename, newzgy.ZgyReader, 0, 0)
with TimeMe(" checkStatistics"):
checkStatistics(filename, newzgy.ZgyReader, 0, 0, True)
with TimeMe(" checkHistogram"):
checkHistogram(filename, newzgy.ZgyReader, 0, 0, True)
with TimeMe(" delete #1"):
newzgy.ZgyUtils(SDCredentials()).delete(filename)
with TimeMe(" delete #2"):
newzgy.ZgyUtils(SDCredentials()).delete(filename)
def testLegalTag(filename):
meta = {"foo": "bar", "foocount": 42}
meta = {"kind": "slb:openzgy:test:1.0.0", "data": meta}
iocontext = SDCredentials(legaltag="slb-synthetic-seismic",
writeid="test-my-write", seismicmeta=meta)
with newzgy.ZgyWriter(filename,
iocontext = iocontext,
size = (64, 64, 64),
datatype = SampleDataType.float) as writer:
data = np.zeros((64, 64, 64), dtype=np.float32)
writer.write((0, 0, 0), data)
writer.finalize()
#os.system("sdutil stat " + SDTestSink("legaltag.zgy") + " --detailed")
# TODO-Test, read back metadata and confirm it was stored correctly.
# Not possible yet.
# TODO-Question, there is both a {get,set}MetaData and a {get,set}SeismicMeta().
# I suspect the former only sets the "data" portion of SeismicMeta
# but the two might also be completely unrelated.
# TODO-Question, when (and only when) I specify seismicmeta I see that
# sdutil stat --detailed will show me the seismicmeta and this
# includes the legaltag. Is the legaltag in the seismicmeta
# different from the "old" legaltag? Can it be changed, since we
# do have a setSeismicMeta?
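# Illustration only, not part of the original test suite: a simplified sketch
# of the request-consolidation idea exercised by testCloudConsolidateBricks
# below. Touching (offset, size) pairs are merged into one larger request; the
# real code also tolerates gaps up to "maxhole" and applies alignment, which
# this sketch deliberately leaves out. The helper is never called here.
def _sketchConsolidateRequests(requests):
    """Merge touching (offset, size) pairs and return the consolidated list."""
    result = []
    for offset, size in sorted(requests):
        if result and result[-1][0] + result[-1][1] == offset:
            result[-1] = (result[-1][0], result[-1][1] + size)
        else:
            result.append((offset, size))
    return result
# Example: bricks at 0, 1 MB and 2 MB merge into a single 3 MB read, while a
# brick at 4 MB remains a separate part of the scatter/gather request:
#   _sketchConsolidateRequests([(0, 2**20), (2**20, 2**20),
#                               (2*2**20, 2**20), (4*2**20, 2**20)])
#   returns [(0, 3145728), (4194304, 1048576)]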
def testCloudConsolidateBricks(filename, *, verbose = False):
"""
When reading from seismic store, bricks that are contiguous in memory
should be read in a single operation because larger brick size is
faster (up to a point). When not contiguous the reads should still
make just a single call to seismic store with a scatter/gather array
    so the lower level code might do multi-threading.
This test also enables the single-block caching which will cause
all the headers to be read in a single operation. It can also speed
    up regular brick access. Note that this cache is extremely simplistic:
it only remembers the previous result and it only returns a match
if the next request is exactly identical.
TODO-Low consider splitting this into multiple tests.
"""
vprint = ((lambda *args, **kwargs: print(*args, **kwargs)) if verbose
else (lambda *args, **kwargs: None))
trace = TraceCallsToSD(verbose = verbose)
iocontext = SDCredentials(aligned=1, maxsize=64, maxhole=1, threads=1,
_debug_trace = trace
)
bricksize = np.array((64, 64, 64), dtype=np.int64)
brick = np.product(bricksize) * np.dtype(np.float32).itemsize
size = np.array((181, 241, 169), dtype=np.int64)
numbricks = (size + bricksize - 1) // bricksize
vprint("Creating. Expect header written twice, then bulk data once.")
with newzgy.ZgyWriter(filename, iocontext=iocontext,
bricksize = tuple(bricksize),
size = tuple(size)) as writer:
data = np.arange(np.product(size), dtype=np.float32).reshape(size)
writer.write((0,0,0), data)
# lod 0 bricks: 3 * 4 * 3 = 36
# lod 1 bricks: 2 * 2 * 2 = 8
# lod 2 bricks: 1
# sum bricks on file: 45
# Writing the final header is the penultimate and not the last write.
# This is due to how SeismicStoreFileDelayedWrite works. See also
# comments in ZgyWriter.close().
assert len(trace.calls) == 3
assert trace.calls[0] == ("append", brick, brick, 1)
assert trace.calls[1] == ("write", brick, brick, 1)
assert trace.calls[2] == ("append", 45 * brick, 45*brick, 1)
trace.reset()
vprint("Opening. Expect all headers read in just one real access.")
with newzgy.ZgyReader(filename, iocontext = iocontext) as reader:
assert len(trace.calls) >= 1
assert trace.calls[0].what in ("read", "readv", "cachemiss")
assert all([t.what == "cachehit" for t in trace.calls[1:]])
trace.reset()
# The size in bricks, il/xl/slice, is (3, 4, 3).
# Reading a single inline should require just a single access.
# Reading a single crossline should read one brick-column (3 bricks)
        # at a time, so it will need 3 reads. Each brick is 1 MB (64*64*64 float32 samples).
ildata = np.zeros((1, size[1], size[2]), dtype=np.float32)
xldata = np.zeros((size[0], 1, size[2]), dtype=np.float32)
vprint("read one il,", numbricks[1] * numbricks[2], "bricks")
reader.read((0,0,0), ildata)
assert len(trace.calls) == 1
assert trace.calls[0] == ("readv",
brick*numbricks[1]*numbricks[2],
brick*numbricks[1]*numbricks[2], 1)
trace.reset()
vprint("read one xl,", numbricks[0], "*", numbricks[2], "bricks")
reader.read((0,0,0), xldata)
# Not contiguous, but a single scatter/gather read.
assert len(trace.calls) == 1
assert trace.calls[0] == ("readv",
brick*numbricks[0]*numbricks[2],
brick*numbricks[0]*numbricks[2], 3)
trace.reset()
sample = np.zeros((1,1,1), dtype=np.float32)
vprint("read one sample. Should require just one brick.")
reader.read((100,100,100), sample)
assert len(trace.calls) == 1
assert trace.calls[0].nbytes == brick
trace.reset()
vprint("read another sample in the same brick. Should be cached.")
reader.read((101,102,103), sample)
assert len(trace.calls) == 1
assert trace.calls[0] == ("cachehit", brick, brick, 1)
trace.reset()
vprint("Opening with 64 MB buffers. Everything ought to be cached.")
# Note that the entire file is smaller than the requested blocking,
    # it is important to verify that this doesn't cause problems when
# hitting EOF. The "simple cache" and the "scatter/gather" cases
# need to be tested separately.
iocontext = SDCredentials(aligned=64, maxsize=64, maxhole=1, threads=1,
_debug_trace = trace
)
with newzgy.ZgyReader(filename, iocontext = iocontext) as reader:
# As with the previous case there should just be a single read.
assert len(trace.calls) >= 1
assert trace.calls[0].what in ("read", "readv", "cachemiss")
assert all([t.what == "cachehit" for t in trace.calls[1:]])
trace.reset()
# This will currently not be very performant. The requested
# padding will be applied but the simplistic cache won't use it.
        # Not that big a deal since the padding in real cases should
        # probably be just 4 MB or so, small enough that the wasted
        # bytes don't actually cost anything.
# The test is important though. The padding to align reads
# is still applied, but in a different place in the code.
vprint("read one il,", numbricks[1] * numbricks[2], "bricks")
ildata = np.zeros((1, size[1], size[2]), dtype=np.float32)
reader.read((0,0,0), ildata)
assert len(trace.calls) == 1
# See FileAdt._consolidate_requests._groupsize()
        # The header segment is not aligned to our oversized "align"
# parameter. This causes some needless data access because
# the padding will cross a segment boundary. Segment 0 (headers)
# will be read again even though we don't need it.
# The asserts below reflect the current implementation.
#assert trace.calls[0] == ("readv", 12*brick, 45*brick, 2)
assert trace.calls[0] == ("readv", 12*brick, 46*brick, 2)
trace.reset()
vprint("read one xl,", numbricks[0], "*", numbricks[2], "bricks")
xldata = np.zeros((size[0], 1, size[2]), dtype=np.float32)
reader.read((0,0,0), xldata)
# Consolidate and split causes this to end up as 3 separate
        # non-contiguous reads. Applying "align" is done too late
# which causes each of these 3 reads to cover the exact same
# area. And those areas in turn consist of two reads since
# we are reading the header also. The naive cache doesn't
# help us here. Fortunately this is a very contrived case.
assert len(trace.calls) == 1
#assert trace.calls[0] == ("readv", 9*brick, 45*brick, 1)
assert trace.calls[0] == ("readv", 9*brick, 3*46*brick, 6)
trace.reset()
# This should trigger the naive cache, tailored specifically
# to how Petrel reads data from ZGY.
vprint("read one il, one brick at a time")
ildata = np.zeros((1, 64, 64), dtype=np.float32)
for xl in range(0, size[1], 64):
for zz in range(0, size[2], 64):
reader.read((0, xl, zz), ildata)
assert len(trace.calls) >= 1
# The cache was cleared after readv, so expect one and just one
# read request to fill it.
assert trace.calls[0].what in ("read", "readv", "cachemiss")
assert all([t.what == "cachehit" for t in trace.calls[1:]])
trace.reset()
vprint("read one xl, one brick at a time")
xldata = np.zeros((64, 1, 64), dtype=np.float32)
for il in range(0, size[0], 64):
for zz in range(0, size[2], 64):
                reader.read((il, 0, zz), xldata)
assert len(trace.calls) >= 1
assert all([t.what == "cachehit" for t in trace.calls[0:]])
trace.reset()
# Re-create the file with 7 MB segment size, to stress some more code.
iocontext = SDCredentials(aligned=1, maxsize=64, maxhole=1, threads=1,
segsize=7, _debug_trace = trace
)
bricksize = np.array((64, 64, 64), dtype=np.int64)
brick = np.product(bricksize) * np.dtype(np.float32).itemsize
size = np.array((181, 241, 169), dtype=np.int64)
numbricks = (size + bricksize - 1) // bricksize
vprint("Creating. Expect header written twice and bulk data in 7 parts.")
with newzgy.ZgyWriter(filename, iocontext=iocontext,
bricksize = tuple(bricksize),
size = tuple(size)) as writer:
data = np.arange(np.product(size), dtype=np.float32).reshape(size)
writer.write((0,0,0), data)
# There may be several reads needed to generate lod 1 bricks
# from data already flushed. Ignore those.
calls = list([ e for e in trace.calls
if e.what not in ("readv", "cachehit", "cachemiss")])
assert len(calls) == 9
assert calls[0] == ("append", brick, brick, 1) # empty header
assert calls[1] == ("append", 7 * brick, 7 * brick, 1)
assert calls[2] == ("append", 7 * brick, 7 * brick, 1)
assert calls[3] == ("append", 7 * brick, 7 * brick, 1)
assert calls[4] == ("append", 7 * brick, 7 * brick, 1)
assert calls[5] == ("append", 7 * brick, 7 * brick, 1)
assert calls[6] == ("append", 7 * brick, 7 * brick, 1)
assert calls[7] == ("write", brick, brick, 1) # actual header
assert calls[8] == ("append", 3 * brick, 3 * brick, 1) # mop up.
trace.reset()
iocontext = SDCredentials(aligned=1, maxsize=64, maxhole=1, threads=1,
_debug_trace = trace
)
with newzgy.ZgyReader(filename, iocontext = iocontext) as reader:
assert len(trace.calls) >= 1
assert trace.calls[0].what in ("read", "readv", "cachemiss")
assert all([t.what == "cachehit" for t in trace.calls[1:]])
trace.reset()
vprint("read one il,", numbricks[1] * numbricks[2], "bricks")
ildata = np.zeros((1, size[1], size[2]), dtype=np.float32)
reader.read((0,0,0), ildata)
        # There will be two reads since it crossed a segment boundary.
assert len(trace.calls) == 1
assert trace.calls[0] == ("readv", 12*brick, 12*brick, 2)
trace.reset()
vprint("read one xl,", numbricks[0], "*", numbricks[2], "bricks")
xldata = np.zeros((size[0], 1, size[2]), dtype=np.float32)
reader.read((0,0,0), xldata)
# Not contiguous, but a single scatter/gather read.
        # More than 3 parts due to crossing segment boundaries.
assert len(trace.calls) == 1
assert trace.calls[0] == ("readv", 9*brick, 9*brick, 4)
trace.reset()
vprint("done.")
def Main():
np.seterr(all='raise')
with TimeMe("ProgressWithDots"):
testProgressWithDots()
with TimeMe("BadArgumentsOnCreate"):
testBadArgumentsOnCreate()
with TimeMe("BadArgumentsOnReadWrite"):
with LocalFileAutoDelete("somefile.zgy") as fn:
testBadArgumentsOnReadWrite(fn.name)
with TimeMe("AutoDelete"):
testAutoDelete()
if HasOldZgy():
with TimeMe("HistogramRangeIsCenterNotEdge"):
with LocalFileAutoDelete("histo.zgy") as fn:
testHistogramRangeIsCenterNotEdge(fn.name)
with TimeMe("EmptyFile_NN"):
with LocalFileAutoDelete("emptyfile.zgy") as fn:
testEmptyFile(fn.name, newzgy.ZgyWriter, newzgy.ZgyReader)
if HasOldZgy():
with TimeMe("EmptyFile_ON"):
with LocalFileAutoDelete("emptyfile.zgy") as fn:
testEmptyFile(fn.name, oldzgy.ZgyWriter, newzgy.ZgyReader)
with TimeMe("EmptyFile_NO"):
with LocalFileAutoDelete("emptyfile.zgy") as fn:
testEmptyFile(fn.name, newzgy.ZgyWriter, oldzgy.ZgyReader)
with TimeMe("EmptyFile_OO"):
with LocalFileAutoDelete("emptyfile.zgy") as fn:
testEmptyFile(fn.name, oldzgy.ZgyWriter, oldzgy.ZgyReader)
with LocalFileAutoDelete("rmwfile.zgy") as fn:
testRmwFile(fn.name, newzgy.ZgyWriter)
with LocalFileAutoDelete("fatal-error.zgy") as fn:
testFatalErrorFlag(fn.name)
if False: # Disabled because it takes too long.
with TimeMe("LargeSparseFile"):
with LocalFileAutoDelete("largesparse.zgy") as fn:
testLargeSparseFile(fn.name, newzgy.ZgyWriter, newzgy.ZgyReader)
with TimeMe("Naan"):
with LocalFileAutoDelete("naan.zgy") as fn:
testNaan(fn.name)
with TimeMe("WriteNaanToIntegerStorage"):
with LocalFileAutoDelete("intnaan.zgy") as fn:
testWriteNaanToIntegerStorage(fn.name)
with TimeMe("ZeroCentric"):
with LocalFileAutoDelete("zerocentric.zgy") as fn:
testZeroCentric(fn.name)
with TimeMe("FinalizeProgress"):
with LocalFileAutoDelete("finalize.zgy") as fn:
testFinalizeProgress(fn.name, abort = False)
with TimeMe("FinalizeProgress"):
with LocalFileAutoDelete("finalize.zgy") as fn:
testFinalizeProgress(fn.name, abort = True)
with TimeMe("HugeFile"):
with LocalFileAutoDelete("huge.zgy") as fn:
testHugeFile(fn.name)
with LocalFileAutoDelete("oddsize.zgy") as fn:
testDecimateOddSize(fn.name)
with TimeMe("DecimateWeightedAverage"):
with LocalFileAutoDelete("weighted.zgy") as fn:
testDecimateWeightedAverage(fn.name)
with TimeMe("MixingUserAndStorage"):
with LocalFileAutoDelete("mixuserstorage.zgy") as fn:
testMixingUserAndStorage(fn.name)
with TimeMe("SmallConstArea"):
with LocalFileAutoDelete("smallconstarea.zgy") as fn:
testSmallConstArea(fn.name)
with LocalFileAutoDelete("testhisto_f.zgy") as fn:
testHistoCornercaseFloat(fn.name)
with LocalFileAutoDelete("testhisto_i.zgy") as fn:
testHistoCornercaseInt(fn.name)
with TimeMe("FancyDefaultValue"):
testFancyDefaultValue()
with TimeMe("FancyReadConstant"):
testFancyReadConstant()
with TimeMe("FancyMisc"):
testFancyMisc()
with TimeMe("TestFancy1"):
testFancy1()
with TimeMe("TestFancy2"):
testFancy2()
with TimeMe("TestFancy3"):
testFancy3()
with TimeMe("TestFancy4"):
testFancy4()
if HasOldZgy():
with TimeMe("TestFancy5"):
testFancy5()
with TimeMe("TestFancy6"):
testFancy6()
with TimeMe("TestFancy11"):
testFancy11()
with TimeMe("TestFancy12"):
testFancy12()
with TimeMe("TestFancy13"):
testFancy13()
# ZFP COMPRESSION
if HasZFPCompression():
with TimeMe("RegisteredCompressors"):
testRegisteredCompressors()
with TimeMe("TestFancy7"):
testFancy7()
with TimeMe("TestFancy8"):
testFancy8()
with TimeMe("TestFancy9"):
testFancy9()
with TimeMe("TestFancy10"):
testFancy10()
with TimeMe("TestFancy14"):
testFancy14()
with TimeMe("NoRmwInCompressedFile"):
with LocalFileAutoDelete("no-rmw.zgy") as fn:
testNoRmwInCompressedFile(fn.name)
with TimeMe("Naan"):
with LocalFileAutoDelete("naan.zgy") as fn:
testNaan(fn.name, 70)
# SEISMIC STORE
if not HasSeismicStore():
print("SKIPPING seismic store tests")
return
with TimeMe("testCloudAutoDelete"):
testCloudAutoDelete()
with TimeMe("testReadFromCloud"):
testReadFromCloud(SDTestData("Synt2.zgy"))
with TimeMe("testCloudWriter"):
with CloudFileAutoDelete("openzgy-rules.zgy", SDCredentials()) as cad:
testCloudWriter(cad.name)
cad.disarm() # The test function cleans up itself, unless it throws.
with TimeMe("EmptyFile"):
with CloudFileAutoDelete("emptyfile.zgy", SDCredentials()) as fn:
testEmptyFile(fn.name)
# oldzgy probably doesn't have zgycloud set up in this test.
if HasOldZgy() and False:
with TimeMe("EmptyFile_ON"):
with CloudFileAutoDelete("emptyfile.zgy", SDCredentials()) as fn:
testEmptyFile(fn.name, oldzgy.ZgyWriter, newzgy.ZgyReader)
with TimeMe("EmptyFile_NO"):
with CloudFileAutoDelete("emptyfile.zgy", SDCredentials()) as fn:
testEmptyFile(fn.name, newzgy.ZgyWriter, oldzgy.ZgyReader)
with TimeMe("EmptyFile_OO"):
with CloudFileAutoDelete("emptyfile.zgy", SDCredentials()) as fn:
testEmptyFile(fn.name, oldzgy.ZgyWriter, oldzgy.ZgyReader)
with TimeMe("EmptyExistingFile"):
testEmptyExistingFile("sd://sntc/testdata/OldEmpty.zgy")
with TimeMe("testRmwFile"):
with CloudFileAutoDelete("rmwfile.zgy", SDCredentials()) as fn:
testRmwFile(fn.name, newzgy.ZgyWriter)
with TimeMe("testLegalTag"):
with CloudFileAutoDelete("legaltag.zgy", SDCredentials()) as fn:
testLegalTag(fn.name)
with CloudFileAutoDelete("consolidate.zgy", SDCredentials()) as fn:
with TimeMe("ConsolidateBricks"):
testCloudConsolidateBricks(fn.name, verbose = False)
if __name__ == "__main__":
Main()
# Copyright 2017-2021, Schlumberger
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
| 45.048819 | 219 | 0.613289 | 18,747 | 141,183 | 4.593802 | 0.092015 | 0.007757 | 0.011705 | 0.01902 | 0.443556 | 0.4049 | 0.365769 | 0.334638 | 0.297503 | 0.279726 | 0 | 0.053968 | 0.272242 | 141,183 | 3,133 | 220 | 45.063198 | 0.784208 | 0.305462 | 0 | 0.384931 | 0 | 0.001538 | 0.052946 | 0.002709 | 0 | 0 | 0 | 0.001277 | 0.17837 | 1 | 0.048693 | false | 0.010764 | 0.009739 | 0.001025 | 0.082522 | 0.031266 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1539f874694fa29d5e5b0f1fdffa08f018772b5b | 3,041 | py | Python | D/D.py | budes/Exploracao-Tkinter | 594c13e8c6765dc3bc673360c423a4b47c91f197 | [
"MIT"
] | null | null | null | D/D.py | budes/Exploracao-Tkinter | 594c13e8c6765dc3bc673360c423a4b47c91f197 | [
"MIT"
] | null | null | null | D/D.py | budes/Exploracao-Tkinter | 594c13e8c6765dc3bc673360c423a4b47c91f197 | [
"MIT"
] | null | null | null | from tkinter import *
from functools import partial
from PIL import Image, ImageTk
class Calculadora():
def __init__(self):
        # Set up the Tk instance and the background
self.inst = Tk()
self.inst.geometry('720x1200')
self.inst['background'] = 'white'
        # Fonts and images used
fonte = ('Verdana', 12, 'bold')
fontea = ('Verdana', 12)
fonte2 = ('Verdana', 18, 'bold')
        # The calculations will be shown in this Label
self.calculo = Label(self.inst, text='', font=fonte2,
bg='white', height=5)
self.calculo.pack()
        # Done this way because it is quicker and simpler
Frames = [Frame(self.inst, bg='white', padx=0, pady=0) for cria in range(5)]
        # Pack the Frames
for empacotar in Frames: empacotar.pack()
self.texto = (
('C', '<×', '^', '/'),
('7', '8', '9', 'x'),
('4', '5', '6', '+'),
('1', '2', '3', '-'),
('.', '0', '()', '=')
)
        # Create the buttons on the screen
self.botoes = []
for i in range(5):
frame = Frames[i]
for simbol in self.texto[i]:
but = Button(frame, text=simbol, font=fontea,
height=2, width=3, relief=GROOVE, bg='white',
command=partial(self.InterpretaBotoes, simbol)
)
but.pack(side=LEFT)
self.botoes.append(but)
                # Change the button color and font according to its type
if simbol in ('C', '<×', '^', '/', 'x', '+', '-'):
but['bg'] = 'lightgray'
but['fg'] = 'darkcyan'
but['font'] = fonte
elif simbol == '=':
but['bg'] = 'green'
but['fg'] = 'white'
but['font'] = fonte
        # Start the main loop
self.inst.mainloop()
    # Executes the command of the pressed button
def InterpretaBotoes(self, valor):
if valor == 'C':
self.calculo['text'] = ''
elif valor == '<×':
self.calculo['text'] = self.calculo['text'][:len(self.calculo['text'])-1]
elif valor == '=':
self.Calcula()
elif valor == '()':
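            # Insert '(' after an operator or at the start, ')' after a digit or ')'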
texto = self.calculo['text']
try:
if texto[len(texto)-1] in '+-/^x' or len(texto) == 0:
self.calculo['text'] += '('
elif texto[len(texto)-1] in '1234567890)':
self.calculo['text'] += ')'
except:
self.calculo['text'] += '('
else:
self.calculo['text'] += valor
if len(self.calculo['text']) % 15 == 0:
self.calculo['text'] += '\n'
def Calcula(self):
        # If one of the operators is present, try to evaluate the expression
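        # e.g. '2x3^2' is translated to '2*3**2' below before being passed to eval()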
if any([op in self.calculo['text'] for op in ['+', '-', '/', '^', 'x']]):
calculo = ''
for elemento in self.calculo['text']:
for e in self.texto:
if elemento == 'x':
calculo += '*'
break
elif elemento == '^':
calculo += '**'
break
elif elemento in '()':
calculo += elemento
break
elif elemento in e:
calculo += elemento
break
self.calculo['text'] = ''
resultado = str(eval(calculo))
auxiliar = ''
for i in range(len(resultado)):
if (i+1) % 15 == 0:
auxiliar += '\n'
auxiliar += resultado[i]
self.calculo['text'] = auxiliar
Calculadora() | 23.037879 | 78 | 0.541927 | 385 | 3,041 | 4.277922 | 0.361039 | 0.11354 | 0.136612 | 0.013358 | 0.043716 | 0 | 0 | 0 | 0 | 0 | 0 | 0.023777 | 0.267017 | 3,041 | 132 | 79 | 23.037879 | 0.713773 | 0.11345 | 0 | 0.136364 | 0 | 0 | 0.090097 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.034091 | false | 0 | 0.034091 | 0 | 0.079545 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
153c2c1e71181268aa8af58caed4e86170ecfd56 | 8,970 | py | Python | forum/views.py | RAGNAROSaa/- | 833688d556ecc70570a9b464160271ace07380d9 | [
"Apache-2.0"
] | 5 | 2016-09-25T02:59:13.000Z | 2018-07-18T05:20:58.000Z | forum/views.py | RAGNAROSaa/- | 833688d556ecc70570a9b464160271ace07380d9 | [
"Apache-2.0"
] | 1 | 2016-12-01T01:11:53.000Z | 2016-12-01T01:11:53.000Z | forum/views.py | RAGNAROSaa/- | 833688d556ecc70570a9b464160271ace07380d9 | [
"Apache-2.0"
] | 6 | 2016-09-24T02:42:57.000Z | 2016-11-10T13:35:13.000Z | from django.core.urlresolvers import reverse_lazy, reverse
from forum.models import Category, Question, Answer
from utils.mixin import AjaxableResponseMixin
from django.views.generic.edit import CreateView, UpdateView, DeleteView
from django.views.generic.list import ListView
from django.views.generic.detail import DetailView
from django.http import JsonResponse, Http404
from website.mixin import FrontMixin
from django.contrib.auth.mixins import LoginRequiredMixin, UserPassesTestMixin
from django.shortcuts import get_object_or_404
from authentication.models import MyUser
class CategoryCreateView(UserPassesTestMixin, AjaxableResponseMixin, CreateView):
login_url = reverse_lazy('user-login')
model = Category
fields = ['name']
template_name_suffix = '_create_form'
success_url = reverse_lazy('category-list')
def test_func(self):
return self.request.user.is_staff
def get_context_data(self, *args, **kwargs):
context = super(CategoryCreateView, self).get_context_data(**kwargs)
context['active_page'] = 'category-add'
return context
class CategoryListView(UserPassesTestMixin, ListView):
login_url = reverse_lazy('user-login')
model = Category
context_object_name = 'category_list'
def test_func(self):
return self.request.user.is_staff
def get_context_data(self, *args, **kwargs):
context = super(CategoryListView, self).get_context_data(**kwargs)
context['active_page'] = 'category-list'
return context
class CategoryUpdateView(UserPassesTestMixin, AjaxableResponseMixin, UpdateView):
login_url = reverse_lazy('user-login')
model = Category
context_object_name = 'category'
template_name_suffix = '_update_form'
success_url = reverse_lazy('category-list')
fields = ['name']
def test_func(self):
return self.request.user.is_staff
def get_context_data(self, *args, **kwargs):
context = super(CategoryUpdateView, self).get_context_data(**kwargs)
context['active_page'] = 'category-update'
return context
class CategoryDeleteView(UserPassesTestMixin, AjaxableResponseMixin, DeleteView):
login_url = reverse_lazy('user-login')
model = Category
success_url = reverse_lazy('category-list')
def test_func(self):
return self.request.user.is_staff
def post(self, request, *args, **kwargs):
super(CategoryDeleteView, self).post(request, *args, **kwargs)
return JsonResponse({'state': 'success'})
class QuestionCreateView(LoginRequiredMixin, FrontMixin, CreateView):
login_url = reverse_lazy('user-login')
model = Question
template_name_suffix = '_create_form'
fields = ['title', 'content', 'category', 'inviting_person']
def get_context_data(self, *args, **kwargs):
context = super(QuestionCreateView, self).get_context_data(**kwargs)
context['category_list'] = Category.objects.all()
context['teacher_list'] = MyUser.objects.filter(identity='T').order_by('nickname')
return context
def form_valid(self, form):
form.instance.author = self.request.user
form.instance.show_times = 0
return super(QuestionCreateView, self).form_valid(form)
def get_success_url(self):
questions_list = Question.objects.filter(author=self.request.user).order_by('-publish_time')
least_question = questions_list[0]
return reverse('question-detail', kwargs={'pk': least_question.id})
class CategoryQuestionListView(FrontMixin, ListView):
template_name = 'website/frontend/homepage.html'
model = Question
paginate_by = 10
context_object_name = 'question_list'
def get_queryset(self):
category = get_object_or_404(Category, pk=self.kwargs['pk'])
return Question.objects.filter(category=category)
class QuestionDetailView(FrontMixin, ListView):
model = Answer
template_name = 'forum/question_detail.html'
paginate_by = 10
context_object_name = 'answer_list'
def get_queryset(self):
question = Question.objects.get(pk=self.kwargs['pk'])
question.show_times += 1
question.save()
return Answer.objects.filter(question=question)
def get_context_data(self, *args, **kwargs):
context = super(QuestionDetailView, self).get_context_data(*args, **kwargs)
context['question'] = Question.objects.get(pk=self.kwargs['pk'])
return context
class AnswerCreateView(LoginRequiredMixin, FrontMixin, CreateView):
model = Answer
template_name = 'forum/answer_create_form.html'
fields = ['content']
login_url = reverse_lazy('user-login')
def get_context_data(self, *args, **kwargs):
context = super(AnswerCreateView, self).get_context_data(*args, **kwargs)
context['question'] = Question.objects.get(pk=self.kwargs['pk'])
return context
def get_success_url(self):
return reverse('question-detail', kwargs={'pk': self.kwargs['pk']})
def form_valid(self, form):
form.instance.author = self.request.user
form.instance.question = Question.objects.get(pk=self.kwargs['pk'])
return super(AnswerCreateView, self).form_valid(form)
class ReplyCreateView(LoginRequiredMixin, FrontMixin, CreateView):
model = Answer
template_name = 'forum/reply_create_form.html'
fields = ['content']
login_url = reverse_lazy('user-login')
def get_context_data(self, *args, **kwargs):
context = super(ReplyCreateView, self).get_context_data(*args, **kwargs)
        answer = Answer.objects.get(pk=self.kwargs['pk'])
        context['answer'] = answer
return context
def get_success_url(self):
        answer = Answer.objects.get(pk=self.kwargs['pk'])
return reverse('question-detail', kwargs={'pk': answer.question.pk})
def form_valid(self, form):
        answer = Answer.objects.get(pk=self.kwargs['pk'])
        question = answer.question
        form.instance.author = self.request.user
        form.instance.question = question
        form.instance.reply_author = answer.author.myuser
return super(ReplyCreateView, self).form_valid(form)
class QuestionListView(UserPassesTestMixin, ListView):
model = Question
login_url = reverse_lazy('user-login')
context_object_name = 'question_list'
template_name = 'forum/question_list.html'
def test_func(self):
return self.request.user.is_staff
class QuestionDeleteView(UserPassesTestMixin, AjaxableResponseMixin, DeleteView):
login_url = reverse_lazy('user-login')
model = Question
success_url = reverse_lazy('question-list')
def test_func(self):
return self.request.user.is_staff
def post(self, request, *args, **kwargs):
super(QuestionDeleteView, self).post(request, *args, **kwargs)
return JsonResponse({'state': 'success'})
class PersonalQuestionListView(FrontMixin, ListView):
paginate_by = 10
template_name = 'forum/question_weight2.html'
context_object_name = 'question_list'
def get_queryset(self):
return Question.objects.filter(author_id=self.kwargs['pk'])
def get_context_data(self, *args, **kwargs):
context = super(PersonalQuestionListView, self).get_context_data(**kwargs)
        context['theuser'] = MyUser.objects.get(pk=self.kwargs['pk'])
return context
class PersonalAnswerListView(FrontMixin, ListView):
paginate_by = 10
template_name = 'forum/answer_weight.html'
context_object_name = 'question_asked_list'
def get_queryset(self):
answers = Answer.objects.filter(author_id=self.kwargs['pk'])
question_asked_list = list(set([item.question for item in answers]))
question_asked_list.reverse()
return question_asked_list
def get_context_data(self, *args, **kwargs):
context = super(PersonalAnswerListView, self).get_context_data(**kwargs)
        context['theuser'] = MyUser.objects.get(pk=self.kwargs['pk'])
return context
class QuestionSearchView(FrontMixin, ListView):
paginate_by = 10
template_name = 'website/frontend/homepage.html'
context_object_name = 'question_list'
def get_queryset(self):
return Question.objects.filter(title__contains=self.request.GET.get('keyword', ''))
class PersonalInvitingListView(FrontMixin, ListView):
paginate_by = 10
template_name = 'website/frontend/homepage.html'
context_object_name = 'question_list'
def get_queryset(self):
return Question.objects.filter(inviting_person=MyUser.objects.get(pk=self.kwargs['pk']))
class PersonalReplyListView(FrontMixin, ListView):
paginate_by = 10
template_name = 'forum/reply_weight.html'
context_object_name = 'reply_list'
def get_queryset(self):
return Answer.objects.filter(reply_author=MyUser.objects.get(pk=self.kwargs['pk'])).order_by('-publish_time')
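# Hypothetical wiring sketch, not part of this module: the class-based views
# above would typically be mapped in a urls.py roughly as below. The URL names
# such as 'question-detail' match the reverse()/reverse_lazy() calls used here,
# but the patterns themselves are illustrative assumptions.
#
#   from django.conf.urls import url
#   from forum import views
#
#   urlpatterns = [
#       url(r'^question/add/$', views.QuestionCreateView.as_view(), name='question-add'),
#       url(r'^question/(?P<pk>\d+)/$', views.QuestionDetailView.as_view(), name='question-detail'),
#       url(r'^question/(?P<pk>\d+)/answer/$', views.AnswerCreateView.as_view(), name='answer-add'),
#       url(r'^answer/(?P<pk>\d+)/reply/$', views.ReplyCreateView.as_view(), name='reply-add'),
#   ]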
| 36.024096 | 117 | 0.711037 | 1,044 | 8,970 | 5.91954 | 0.128352 | 0.018447 | 0.040777 | 0.031715 | 0.631715 | 0.579773 | 0.528317 | 0.500809 | 0.419417 | 0.345793 | 0 | 0.003653 | 0.176031 | 8,970 | 248 | 118 | 36.169355 | 0.832499 | 0 | 0 | 0.540107 | 0 | 0 | 0.102007 | 0.030212 | 0 | 0 | 0 | 0 | 0 | 1 | 0.160428 | false | 0.037433 | 0.058824 | 0.058824 | 0.780749 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
15407f3fe8f1c7f69d1779e6d91d10fdf27f7862 | 3,779 | py | Python | subprojects/sno-snapshot/src/common.py | openshift-psap/gpu-burst | cae1c71737c72caf0e980aa6361aeb0c145b4df3 | [
"Apache-2.0"
] | 1 | 2020-07-09T22:30:56.000Z | 2020-07-09T22:30:56.000Z | subprojects/sno-snapshot/src/common.py | openshift-psap/gpu-burst | cae1c71737c72caf0e980aa6361aeb0c145b4df3 | [
"Apache-2.0"
] | null | null | null | subprojects/sno-snapshot/src/common.py | openshift-psap/gpu-burst | cae1c71737c72caf0e980aa6361aeb0c145b4df3 | [
"Apache-2.0"
] | 1 | 2020-07-09T22:30:59.000Z | 2020-07-09T22:30:59.000Z | import time, datetime
import urllib3
print("Importing OpenShift/Kubernetes packages ...")
import kubernetes
import ocp_resources
import openshift
import ocp_resources.node
import ocp_resources.machine
import openshift.dynamic
print("Importing AWS boto3 ...")
import boto3
import botocore
client_k8s = None
client_ec2 = None
resource_ec2 = None
def configure():
#
# K8s
#
global client_k8s
try:
client_k8s = openshift.dynamic.DynamicClient(client=kubernetes.config.new_client_from_config())
except Exception as e:
print("WARNING: kubernetes not available:", e)
#
# AWS
#
machines = [m for m in ocp_resources.machine.Machine.get(dyn_client=client_k8s)]
if not machines:
raise RuntimeError("No machine available ...")
cluster_region = machines[0].instance.spec.providerSpec.value.placement.region
global client_ec2, resource_ec2
cfg = botocore.config.Config(region_name=cluster_region)
# https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/ec2.html
client_ec2 = boto3.client('ec2', config=cfg)
resource_ec2 = boto3.resource('ec2', config=cfg)
print("Ready.")
def wait_openshift():
first = True
print("Waiting for OpenShift cluster to be ready ...")
while True:
try:
global client_k8s
client_k8s = DynamicClient(client=kubernetes.config.new_client_from_config())
nodes = [m for m in ocp_resources.node.Node.get(dyn_client=client_k8s)]
if len(nodes) != 0:
print(f"Found {len(nodes)} node, OpenShift Cluster is ready!")
break
except urllib3.exceptions.MaxRetryError: pass
except kubernetes.client.exceptions.ApiException: pass
time.sleep(10)
def get_machine_props():
if not client_k8s:
return None, None
machines = [m for m in ocp_resources.machine.Machine.get(dyn_client=client_k8s)]
if len(machines) != 1:
raise RuntimeError("Should be only one machine ...")
machine = machines[0]
cluster_name = machine.cluster_name
print(f"Cluster name: {cluster_name}")
instance = resource_ec2.Instance(machine.instance.status.providerStatus.instanceId)
instance.load()
print(f"Instance Id: {instance.id}")
zone = machine.instance.spec.providerSpec.value.placement.availabilityZone
print(f"Availability zone: {zone}")
return cluster_name, instance, zone
def get_instance_root_volume(instance):
volumes = [v for v in instance.volumes.all()]
if len(volumes) > 1:
print("WARNING: more than 1 volume found ...")
return volumes[0]
def get_cluster_snapshot(cluster_name, instance, zone):
resp = client_ec2.describe_snapshots(
Filters=[{
'Name': f'tag:kubernetes.io/cluster/{cluster_name}',
'Values': ['owned']
}])
snapshots = resp["Snapshots"]
if len(snapshots) == 0:
return None
if len(snapshots) > 1:
print("WARNING: more than 1 snapshot found ... taking the first one.")
snapshot = resource_ec2.Snapshot(snapshots[0]['SnapshotId'])
snapshot.load()
return snapshot
def await_snapshot(snapshot):
prev = ""
if snapshot.progress == "100%":
print(f"Snapshot {snapshot.id} is ready.")
while not snapshot.progress == "100%":
if prev == "":
print(f"Awaiting for the completion of snapshot {snapshot.id} ...")
print(snapshot.progress)
prev = snapshot.progress
time.sleep(10)
snapshot.reload()
if prev != snapshot.progress:
prev = snapshot.progress
print(snapshot.progress)
def human_ts():
return datetime.datetime.now().strftime("%Y-%m-%dT%H:%M")
| 27.583942 | 103 | 0.663668 | 459 | 3,779 | 5.348584 | 0.294118 | 0.032994 | 0.021996 | 0.008554 | 0.19389 | 0.133605 | 0.107943 | 0.09613 | 0.052138 | 0.052138 | 0 | 0.017406 | 0.224663 | 3,779 | 136 | 104 | 27.786765 | 0.820478 | 0.02408 | 0 | 0.12766 | 0 | 0 | 0.169837 | 0.01087 | 0 | 0 | 0 | 0 | 0 | 1 | 0.074468 | false | 0.021277 | 0.12766 | 0.010638 | 0.265957 | 0.159574 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1545108723e1d95162092c28d39a13b5a922862e | 2,053 | py | Python | docs/examples/code/serialization.py | pombredanne/mogwai-2 | 536650c8abf5befac4269410fdfac5626be02a23 | [
"Apache-2.0"
] | 43 | 2015-09-17T14:13:17.000Z | 2017-06-18T17:45:40.000Z | docs/examples/code/serialization.py | pombredanne/mogwai-2 | 536650c8abf5befac4269410fdfac5626be02a23 | [
"Apache-2.0"
] | 20 | 2015-09-17T17:25:16.000Z | 2020-04-20T15:17:43.000Z | docs/examples/code/serialization.py | pombredanne/mogwai-2 | 536650c8abf5befac4269410fdfac5626be02a23 | [
"Apache-2.0"
] | 13 | 2015-10-12T09:35:43.000Z | 2022-01-10T23:55:12.000Z | from mogwai.connection import setup
from mogwai.models import Vertex, Edge
from mogwai import properties
from mogwai import relationships
from mogwai._compat import print_
import datetime
from pytz import utc
from functools import partial
import pickle
setup('127.0.0.1')
class OwnsObject(Edge):
label = 'owns_object' # this is optional, will default to the class name
since = properties.DateTime(required=True,
default=partial(datetime.datetime.now, tz=utc),
description='Owned object since')
class Trinket(Vertex):
element_type = 'gadget'
name = properties.String(required=True, max_length=1024)
class Person(Vertex):
element_type = 'person' # this is optional, will default to the class name
name = properties.String(required=True, max_length=512)
email = properties.Email(required=True)
# Define a shortcut relationship method
belongings = relationships.Relationship(OwnsObject, Trinket)
## Creation
# Create a trinket
trinket = Trinket.create(name='Clock')
# Create a Person
bob = Person.create(name='Bob Smith', email='bob@bob.net')
# Create the Ownership Relationship
relationship = OwnsObject.create(outV=bob, inV=trinket)
bob_serialized = pickle.dumps(bob)
print_("Bob Serialized: {}".format(bob_serialized))
deserialized_bob = pickle.loads(bob_serialized)
print_("Bob Deserialized: {}".format(deserialized_bob))
assert bob == deserialized_bob
relationship_serialized = pickle.dumps(relationship)
print_("Relationship Serialized: {}".format(relationship_serialized))
deserialized_relationship = pickle.loads(relationship_serialized)
print_("Relationship Deserialized: {}".format(deserialized_relationship))
assert relationship == deserialized_relationship
trinket_serialized = pickle.dumps(trinket)
print_("Trinket Serialized: {}".format(trinket_serialized))
deserialized_trinket = pickle.loads(trinket_serialized)
print_("Trinket Deserialized: {}".format(deserialized_trinket))
assert trinket == deserialized_trinket
| 28.915493 | 79 | 0.758402 | 238 | 2,053 | 6.411765 | 0.306723 | 0.032765 | 0.041284 | 0.023591 | 0.104849 | 0.104849 | 0.104849 | 0.051114 | 0.051114 | 0 | 0 | 0.007412 | 0.145641 | 2,053 | 70 | 80 | 29.328571 | 0.8626 | 0.102776 | 0 | 0 | 0 | 0 | 0.117294 | 0 | 0 | 0 | 0 | 0 | 0.073171 | 1 | 0 | false | 0 | 0.219512 | 0 | 0.487805 | 0.170732 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1545bae10e175ce363c87f5e99b847280194257d | 1,877 | py | Python | Practice_Itertools/Permutations_Combinations.py | SugaanthMohan/Python_Tools | 7772d4832fb6aa80befa7bb7961ee9a5987f7b48 | [
"Unlicense"
] | 2 | 2020-10-05T14:06:51.000Z | 2021-04-02T04:39:48.000Z | Practice_Itertools/Permutations_Combinations.py | SugaanthMohan/Python_Tools | 7772d4832fb6aa80befa7bb7961ee9a5987f7b48 | [
"Unlicense"
] | null | null | null | Practice_Itertools/Permutations_Combinations.py | SugaanthMohan/Python_Tools | 7772d4832fb6aa80befa7bb7961ee9a5987f7b48 | [
"Unlicense"
] | 1 | 2019-08-20T11:01:37.000Z | 2019-08-20T11:01:37.000Z | import itertools
def test_product_combinator(container_, repeat_times):
"""
cartesian product, equivalent to a nested for-loop
product('ABCD', repeat=2)
AA AB AC AD BA BB BC BD CA CB CC CD DA DB DC DD
"""
return list(itertools.product(container_, repeat=repeat_times))
def test_permutation_combinator(container_, pairs_):
"""
r-length tuples, all possible orderings, no repeated elements
permutations('ABCD', 2)
AB AC AD BA BC BD CA CB CD DA DB DC
"""
return list(
itertools.permutations(
container_,
pairs_
)
)
def test_combination_combinator(container, pairs_):
"""
r-length tuples, in sorted order, no repeated elements
combinations('ABCD', 2)
AB AC AD BC BD CD
"""
return list(
itertools.combinations(container, pairs_)
)
def test_combinations_with_replacement_combinator(container, pairs_):
"""
r-length tuples, in sorted order, no repeated elements
combinations('ABCD', 2)
AB AC AD BC BD CD
"""
return list(
itertools.combinations_with_replacement(container, pairs_)
)
if __name__ == '__main__':
count = 10
items_list = list(
map(
lambda x: str(x[0]) + "_" + str(x[1]),
list(
zip(
['TYPE'] * count,
list(range(1, count+1))
)
)
)
)
print(items_list)
print("PRODUCT :=>", test_product_combinator(container_=items_list, repeat_times=2))
print("PERMUTATIONS :=>", test_permutation_combinator(container_=items_list, pairs_=2))
print("COMBINATIONS :=>", test_combination_combinator(container=items_list, pairs_=2))
print("COMBINATIONS WITH REPLACEMENT :=>", test_combinations_with_replacement_combinator(container=items_list, pairs_=2))
| 24.376623 | 125 | 0.626532 | 219 | 1,877 | 5.118721 | 0.324201 | 0.135593 | 0.021409 | 0.099911 | 0.447814 | 0.438002 | 0.319358 | 0.319358 | 0.228368 | 0.228368 | 0 | 0.010234 | 0.271177 | 1,877 | 76 | 126 | 24.697368 | 0.809211 | 0.236548 | 0 | 0.083333 | 0 | 0 | 0.066468 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.027778 | 0 | 0.25 | 0.138889 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
154643bad32b0b2a24af7c2c4e6381e45fcf7cf0 | 1,922 | py | Python | test/test_cram.py | Indy2222/mbg-codon-usage | d415076a8150cd712010c0389c71ef22ba9ad850 | [
"MIT"
] | null | null | null | test/test_cram.py | Indy2222/mbg-codon-usage | d415076a8150cd712010c0389c71ef22ba9ad850 | [
"MIT"
] | null | null | null | test/test_cram.py | Indy2222/mbg-codon-usage | d415076a8150cd712010c0389c71ef22ba9ad850 | [
"MIT"
] | null | null | null | from cdnu.ccds import CdsPos, load_ccds
from cdnu.cram import load_cds_list
def test_load_cds_list():
cds = load_cds_list('./test/cramExample.cram',
[CdsPos('first', [(33036411, 33036588)], '21')])
    assert len(cds) == 1
single_cds = cds[0]
assert single_cds is not None
    assert len(single_cds) % 3 == 0
    assert len(single_cds) == (33036588 - 33036411)
assert single_cds.startswith('ATG')
assert single_cds[-3:] in ('TAG', 'TAA', 'TGA')
def test_load_cds_list_small():
cds = load_cds_list('./test/cramExample.cram',
[CdsPos('first', [(33036660, 33036670)], '21')])
    assert len(cds) == 1
assert cds[0] is None
def test_load_cds_list_some():
ccds = [
CdsPos('first', [(925941, 926012)], 'chr1'),
CdsPos('second', [(966531, 966613)], 'chr1'),
CdsPos('third', [(7784877, 7785004)], 'chr1')
]
address = ('ftp://ftp.ncbi.nlm.nih.gov/1000genomes/ftp/'
'1000G_2504_high_coverage/data/ERR3239281/NA07051.final.cram')
address = '/Users/peta/School/mbg/mbg-codon-usage/huge/NA07051.final.cram'
cds_list = load_cds_list(address, ccds)
    assert len(cds_list) == 3
assert cds_list[0] is None
assert cds_list[1] is None
assert cds_list[2] is None
def test_load_cds_list_huge():
ccds = load_ccds()
address = ('ftp://ftp.ncbi.nlm.nih.gov/1000genomes/ftp/'
'1000G_2504_high_coverage/data/ERR3239281/NA07051.final.cram')
address = '/Users/peta/School/mbg/mbg-codon-usage/huge/NA07051.final.cram'
cds_list = load_cds_list(address, ccds[:100])
for cds in cds_list:
if cds is not None:
print('{}.. {:4} ..{}'.format(cds[:3], len(cds), cds[-3:]))
            assert len(cds) % 3 == 0
assert cds.startswith('ATG')
assert cds[-3:] in ('TAG', 'TAA', 'TGA')
else:
print('None')
| 33.137931 | 78 | 0.612383 | 270 | 1,922 | 4.188889 | 0.274074 | 0.099027 | 0.087533 | 0.049514 | 0.580018 | 0.4916 | 0.435013 | 0.392573 | 0.392573 | 0.314766 | 0 | 0.116011 | 0.233091 | 1,922 | 57 | 79 | 33.719298 | 0.651289 | 0 | 0 | 0.222222 | 0 | 0.044444 | 0.238293 | 0.194589 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.088889 | false | 0 | 0.044444 | 0 | 0.133333 | 0.044444 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
15496d9beace3067196af6dad11d3e5b49c379cc | 1,033 | py | Python | setup.py | OvalMoney/horus | 90d839e9465f5089fa2632dad9f28190db3a829b | [
"MIT"
] | 2 | 2020-07-17T07:43:53.000Z | 2020-12-03T11:14:59.000Z | setup.py | OvalMoney/horus | 90d839e9465f5089fa2632dad9f28190db3a829b | [
"MIT"
] | 1 | 2020-01-27T15:49:33.000Z | 2020-01-27T15:49:33.000Z | setup.py | OvalMoney/horus | 90d839e9465f5089fa2632dad9f28190db3a829b | [
"MIT"
] | 2 | 2020-07-17T07:44:04.000Z | 2020-12-01T11:10:00.000Z | import io
from setuptools import setup, find_packages
def make_long_description():
with io.open("README.md", encoding="utf-8") as fp:
long_description = fp.read()
return long_description
setup(
name="nephthys",
description="Advanced Python Logger",
long_description=make_long_description(),
long_description_content_type="text/markdown",
version="1.0.2",
author="Fabio Todaro",
license="MIT",
author_email="ft@ovalmoney.com",
url="https://github.com/OvalMoney/Nephthys",
python_requires=">=3.6",
classifiers=[
"Development Status :: 3 - Alpha",
"Environment :: Console",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3 :: Only",
"Operating System :: OS Independent",
],
packages=find_packages(exclude=["tests", "requirements"]),
install_requires=["webob"],
extras_require={"JSON": ["python-rapidjson"], "requests": ["requests"]},
)
| 29.514286 | 76 | 0.64666 | 113 | 1,033 | 5.769912 | 0.663717 | 0.138037 | 0.058282 | 0.079755 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012107 | 0.200387 | 1,033 | 34 | 77 | 30.382353 | 0.77724 | 0 | 0 | 0 | 0 | 0 | 0.385286 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.034483 | false | 0 | 0.068966 | 0 | 0.137931 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
154c465ae5c00585f053431c72e2c927627c992c | 1,420 | py | Python | inlineplz/linters/rflint.py | CtrlZvi/inline-plz | 208195372a8138dce78a165dd8410a8ce15aea80 | [
"0BSD"
] | 30 | 2016-01-11T18:43:38.000Z | 2022-01-29T19:09:53.000Z | inlineplz/linters/rflint.py | CtrlZvi/inline-plz | 208195372a8138dce78a165dd8410a8ce15aea80 | [
"0BSD"
] | 237 | 2016-01-09T23:01:19.000Z | 2022-03-01T16:12:10.000Z | inlineplz/linters/rflint.py | CtrlZvi/inline-plz | 208195372a8138dce78a165dd8410a8ce15aea80 | [
"0BSD"
] | 14 | 2016-01-19T00:51:52.000Z | 2022-01-12T20:49:31.000Z | # -*- coding: utf-8 -*-
import sys
from ..decorators import linter
from ..parsers.base import ParserBase
@linter(
name="robotframework-lint",
install=[[sys.executable, "-m", "pip", "install", "-U", "robotframework-lint"]],
help_cmd=["rflint", "--help"],
run=["rflint"],
rundefault=["rflint", "-A", "{config_dir}/.rflint"],
dotfiles=[".rflint"],
language="robotframework",
autorun=True,
run_per_file=True,
)
class RobotFrameworkLintParser(ParserBase):
"""Parse rflint output."""
def parse(self, lint_data):
messages = set()
current_file = None
for _, output in lint_data:
for line in output.split("\n"):
try:
if not line.strip():
continue
if line.startswith("+"):
current_file = line[2:]
continue
else:
_, position, message = line.split(":")
line_number, _ = position.split(",")
messages.add(
(current_file.strip(), int(line_number), message.strip())
)
except (ValueError, IndexError):
print(
"({0}) Invalid message: {1}".format(type(self).__name__, line)
)
return messages
| 30.212766 | 86 | 0.48169 | 124 | 1,420 | 5.370968 | 0.580645 | 0.04955 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004556 | 0.38169 | 1,420 | 46 | 87 | 30.869565 | 0.753986 | 0.030282 | 0 | 0.054054 | 0 | 0 | 0.109409 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.027027 | false | 0 | 0.081081 | 0 | 0.162162 | 0.027027 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
154e0a5afabdd063dcf184a1b9ec15c5e991fb02 | 6,676 | py | Python | catnip/client.py | google/catnip | 0b8c8c5f93e32931c97a561180c70c9480462717 | [
"Apache-2.0"
] | 24 | 2015-01-15T02:31:52.000Z | 2021-08-10T08:46:03.000Z | catnip/client.py | google/catnip | 0b8c8c5f93e32931c97a561180c70c9480462717 | [
"Apache-2.0"
] | 1 | 2018-01-10T08:37:02.000Z | 2018-01-10T08:45:45.000Z | catnip/client.py | google/catnip | 0b8c8c5f93e32931c97a561180c70c9480462717 | [
"Apache-2.0"
] | 11 | 2015-03-05T09:22:27.000Z | 2021-09-05T09:31:01.000Z | # Copyright 2014 Google Inc. All rights reserved
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import signal
import subprocess
import sys
from catnip import protocol
from catnip import sandbox
class ClientError(Exception):
pass
class CatnipClient(object):
def __init__(self):
self._hostname = None
self._port = 22
self._username = 'catnip'
self._identity_file = None
self._disk_image_stream = None
self._check_ssh_host_key = False
self._multiplex = False
self._debug = False
##############################################################################
## Setters
def SetHost(self, hostname, port=22):
if not isinstance(hostname, str):
raise TypeError('hostname must be a string')
if not (isinstance(port, int) and 1 <= port < 65536):
raise TypeError('invalid port')
self._hostname = hostname
self._port = port
def SetUser(self, username):
if not isinstance(username, str):
raise TypeError('username must be a string')
self._username = username
def SetIdentityFile(self, identity_file):
self._identity_file = identity_file
def SetDiskImageStream(self, disk_image_stream):
if not hasattr(disk_image_stream, 'read'):
raise TypeError('disk_image_stream must be a stream')
self._disk_image_stream = disk_image_stream
def SetCheckSSHHostKey(self, check_ssh_host_key):
if not isinstance(check_ssh_host_key, bool):
raise TypeError('check_ssh_host_key must be a boolean')
self._check_ssh_host_key = check_ssh_host_key
def SetMultiplex(self, multiplex):
if not isinstance(multiplex, bool):
raise TypeError('multiplex must be a boolean')
self._multiplex = multiplex
def SetDebug(self, debug):
if not isinstance(debug, bool):
raise TypeError('debug must be a boolean')
self._debug = debug
##############################################################################
## Actions
def Run(self, params, requests, extra_files,
output_filename=None, callback=None):
if ((not output_filename and not callback) or
(output_filename and callback)):
raise ValueError('One of output_filename or callback should be passed.')
if not params.Validate():
raise ValueError('Insufficient SandboxParams')
for request in requests:
if not request.Validate():
raise ValueError('Insufficient RunRequest')
self._CheckSettingsBeforeRun()
args = self._BuildSSHArgs() + self._BuildRunArgs()
if params.debug:
print(' '.join(args), file=sys.stderr)
proc = None
try:
with open(os.devnull, 'w') as null:
stderr = None if self._debug else null
if output_filename:
with open(output_filename, 'w') as stdout:
proc = subprocess.Popen(args,
close_fds=True,
stdin=subprocess.PIPE,
stdout=stdout,
stderr=stderr)
else:
proc = subprocess.Popen(args,
close_fds=True,
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=stderr)
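# Stream the sandbox params, run requests, extra files and the optional
# disk image to the remote catnip-run process over the SSH child's stdin.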
writer = protocol.RequestWriter(proc.stdin)
writer.WriteParams(params)
for request in requests:
writer.WriteRunRequest(request)
for extra_file in extra_files:
writer.WriteExtraFile(os.path.basename(extra_file), extra_file)
if self._disk_image_stream:
writer.WriteDiskImage(self._disk_image_stream)
writer.Finish()
proc.stdin.close()
if callback:
reader = protocol.ResponseReader(proc.stdout)
reader.Read(callback)
proc.wait()
if proc.returncode != 0:
raise ClientError('Failed to start a remote program')
proc = None
except KeyboardInterrupt:
if params.debug:
print('Interrupted during execution.', file=sys.stderr)
finally:
if proc and proc.poll() is None:
os.kill(proc.pid, signal.SIGINT)
def GetStatus(self):
self._CheckSettingsBeforeRun()
args = self._BuildSSHArgs() + self._BuildGetStatusArgs()
with open(os.devnull, 'w') as null:
stderr = None if self._debug else null
proc = subprocess.Popen(args,
close_fds=True,
stdin=null,
stdout=subprocess.PIPE,
stderr=stderr)
status = proc.communicate(None)[0]
if proc.returncode != 0:
raise ClientError('Failed to start a remote program')
return status
def EndMultiplex(self):
self._CheckSettingsBeforeRun()
args = self._BuildSSHArgs() + self._BuildEndMultiplexArgs()
with open(os.devnull, 'w') as null:
stderr = None if self._debug else null
proc = subprocess.Popen(args,
close_fds=True,
stdin=null,
stdout=null,
stderr=stderr)
proc.communicate(None)
##############################################################################
## Bits
def _CheckSettingsBeforeRun(self):
if not self._hostname:
raise ValueError('hostname is not set')
def _BuildSSHArgs(self):
args = ['ssh', '-T', '-p', '%d' % self._port]
if self._identity_file:
args.extend(['-i', self._identity_file])
if not self._check_ssh_host_key:
args.extend(['-o', 'StrictHostKeyChecking=no',
'-o', 'UserKnownHostsFile=/dev/null'])
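# With multiplexing enabled, OpenSSH keeps a master connection alive and
# reuses it for later calls; EndMultiplex() tears it down via '-O exit'.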
if self._multiplex:
args.extend(['-o', 'ControlMaster=auto',
'-o', 'ControlPath=/tmp/catnip-ssh.%u.%r@%h:%p',
'-o', 'ControlPersist=yes'])
args.append('%s@%s' % (self._username, self._hostname))
return args
def _BuildRunArgs(self):
return ['sudo', 'catnip-run']
def _BuildGetStatusArgs(self):
return ['sudo', 'catnip-status']
def _BuildEndMultiplexArgs(self):
return ['-o', 'ControlPath=/tmp/catnip-ssh.%u.%r@%h:%p',
'-O', 'exit']
| 34.590674 | 80 | 0.603805 | 748 | 6,676 | 5.248663 | 0.295455 | 0.015283 | 0.030565 | 0.026745 | 0.253948 | 0.195874 | 0.169384 | 0.141875 | 0.141875 | 0.141875 | 0 | 0.004294 | 0.267525 | 6,676 | 192 | 81 | 34.770833 | 0.798569 | 0.088826 | 0 | 0.248322 | 0 | 0 | 0.109967 | 0.022302 | 0 | 0 | 0 | 0 | 0 | 1 | 0.107383 | false | 0.013423 | 0.040268 | 0.020134 | 0.194631 | 0.013423 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
155087d21e98b773714728f3ab09117912df0457 | 1,455 | py | Python | app/models/project.py | lguobin/KB_API | f7180cf430cb8de2eac8fa78e3937666da950c7a | [
"Apache-2.0"
] | null | null | null | app/models/project.py | lguobin/KB_API | f7180cf430cb8de2eac8fa78e3937666da950c7a | [
"Apache-2.0"
] | null | null | null | app/models/project.py | lguobin/KB_API | f7180cf430cb8de2eac8fa78e3937666da950c7a | [
"Apache-2.0"
] | null | null | null | # !/usr/bin/env python
# -*- coding: utf-8 -*-
# @Date : 2021/08/09 11:32:25
# @File : project.py
# @Author : K.B.Lam
# @Version : 1.0
from .base import _BaseModel
from app.extensions import db
from app.models.tools import get_username
class Project(_BaseModel):
__tablename__ = "Project"
__bind_key__ = "default"
REQUIRE_ITEMS = _BaseModel.REQUIRE_ITEMS + ["name", "projectTestType", "version",
"uid", "description"]
OPTIONAL_ITEMS = _BaseModel.OPTIONAL_ITEMS
name = db.Column('name', db.String(128), nullable=False, comment="Project name")
projectTestType = db.Column('projectTestType', db.String(64), nullable=False, comment="Project test type")
version = db.Column('version', db.String(32), nullable=False, comment="Project version")
uid = db.Column('uid', db.String(32), nullable=False, comment="Creator")
description = db.Column('description', db.String(256), nullable=True)
def get_json(self):
return {
"object_id": self.object_id,
"name": self.name,
"uid_name": self.uid,
"uid": get_username("UID", self.uid),
"projectTestType": self.projectTestType,
"version": self.version,
"create_at": self.created_at,
"updated_at": self.updated_at,
"description": self.description
}
@staticmethod
def get_type():
res = []
return {"res": res}
| 33.068182 | 99 | 0.601375 | 166 | 1,455 | 5.10241 | 0.439759 | 0.047226 | 0.094451 | 0.042503 | 0.070838 | 0.070838 | 0 | 0 | 0 | 0 | 0 | 0.026728 | 0.254296 | 1,455 | 43 | 100 | 33.837209 | 0.753917 | 0.091409 | 0 | 0 | 0 | 0 | 0.146768 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0 | 0.1 | 0.033333 | 0.566667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1550c30a3bf09658744df4a415c92773c8c8951d | 5,542 | py | Python | labellab-flask/api/controllers/classificationscontroller.py | AkMo3/LabelLab | 1f16905bba1a332035d082cfc6337b8551478e05 | [
"Apache-2.0"
] | 70 | 2019-01-25T19:16:00.000Z | 2022-03-23T14:37:28.000Z | labellab-flask/api/controllers/classificationscontroller.py | AkMo3/LabelLab | 1f16905bba1a332035d082cfc6337b8551478e05 | [
"Apache-2.0"
] | 350 | 2019-01-30T10:50:34.000Z | 2022-03-31T19:58:44.000Z | labellab-flask/api/controllers/classificationscontroller.py | AkMo3/LabelLab | 1f16905bba1a332035d082cfc6337b8551478e05 | [
"Apache-2.0"
] | 140 | 2019-01-30T08:53:35.000Z | 2022-03-25T15:37:12.000Z | from flask.views import MethodView
from flask import request, make_response, jsonify, current_app
from flask_jwt_extended import (jwt_required, get_jwt_identity, get_raw_jwt)
from datetime import datetime
from api.helpers.classification import (
get_classified_data, find_by_id, find_all_by_id, save_image, save_to_db, delete_by_id)
from api.models.Classification import Classification
class ClassifyImage(MethodView):
# This class handles image upload and return classification data
@jwt_required
def post(self):
current_user = get_jwt_identity()
try:
image = request.files.getlist("image")[0]
except Exception as ex:
response = {
"success": False,
"msg": "Missing image"
}
return make_response(jsonify(response)), 400
try:
image_name = image.filename.split('.')[0]
ext = image.filename.split('.')[1]
timestamp = datetime.timestamp(datetime.now())
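# Store under a unique name (<user>_<original name>_<timestamp>.<ext>) so repeated uploads never collide.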
image_url = f"{current_user}_{image_name}_{timestamp}.{ext}"
save_image(current_user, image, image_url)
# Mock classification data
_current = get_classified_data()
to_save = Classification(
image_name=image_name,
image_url=image_url,
label=_current[0],
confidence=_current[1],
user_id=current_user
)
classification_schema = save_to_db(to_save)
response = {
"success": True,
"msg": "Image labelled successfully",
"body": classification_schema
}
return make_response(jsonify(response)), 200
except Exception as err:
response = {
"success": False,
"msg": "Error while saving image"
}
return make_response(jsonify(response)), 500
class ClassificationInfo(MethodView):
# This class returns a single classification data
@jwt_required
def get(self, classification_id):
try:
if not classification_id:
response = {
"success": False,
"msg": "Classification id not found"
}
return make_response(jsonify(response)), 400
classification = find_by_id(classification_id)
response = {
"success": True,
"msg": "Classification found",
"body": classification
}
return make_response(jsonify(response)), 200
except Exception as err:
response = {
"success": False,
"msg": "Error while fetching classification"
}
return make_response(jsonify(response)), 500
class GetAllClassifications(MethodView):
# This class returns all classifications by user id
@jwt_required
def get(self):
current_user = get_jwt_identity()
try:
classifications = find_all_by_id(current_user)
response = {
"success": True,
"msg": "Classifications found",
"body": classifications
}
return make_response(jsonify(response)), 200
except Exception as err:
response = {
"success": False,
"msg": "Error while fetching classifications"
}
return make_response(jsonify(response)), 500
class DeleteClassification(MethodView):
# This class deletes a classification using its id
@jwt_required
def delete(self, classification_id):
if not classification_id:
response = {
"success": False,
"msg": "Classification id not found"
}
return make_response(jsonify(response)), 400
current_user_id = get_jwt_identity()
classification = find_by_id(classification_id)
# Check if a classification with that id exists or not
if not classification:
response = {
"success": False,
"msg": "Classification not found"
}
return make_response(jsonify(response)), 404
if classification["user_id"] != current_user_id:
# The user making this request is not the one who created this classification
# So we cannot let him delete it
response = {
"success": False,
"msg": "Can only delete classifications you created"
}
return make_response(jsonify(response)), 401
try:
delete_by_id(classification_id)
response = {
"success":True,
"msg": "Classification deleted successfully"
}
return make_response(jsonify(response)), 200
except Exception as err:
response = {
"success": False,
"msg": "Error deleting classification"
}
return make_response(jsonify(response)), 500
classificationController = {
"classify_image": ClassifyImage.as_view("classify_image"),
"get_classification": ClassificationInfo.as_view("get_classification"),
"get_all_classifications": GetAllClassifications.as_view("get_all_classifications"),
"delete_classification": DeleteClassification.as_view("delete_classification")
}
| 32.034682 | 90 | 0.576326 | 526 | 5,542 | 5.86692 | 0.230038 | 0.054439 | 0.086196 | 0.105314 | 0.415749 | 0.36455 | 0.317239 | 0.222618 | 0.222618 | 0.186325 | 0 | 0.012101 | 0.343919 | 5,542 | 172 | 91 | 32.22093 | 0.836634 | 0.071093 | 0 | 0.449612 | 0 | 0 | 0.138938 | 0.025881 | 0 | 0 | 0 | 0 | 0 | 1 | 0.031008 | false | 0 | 0.046512 | 0 | 0.209302 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1551ac1a6a79309991708839cd5a99e957a025cb | 6,263 | py | Python | pycdr/pycdr.py | wlchin/pycdr | 96e64a05f1b84fd01fbb003d3256e297d6492df4 | [
"MIT"
] | null | null | null | pycdr/pycdr.py | wlchin/pycdr | 96e64a05f1b84fd01fbb003d3256e297d6492df4 | [
"MIT"
] | null | null | null | pycdr/pycdr.py | wlchin/pycdr | 96e64a05f1b84fd01fbb003d3256e297d6492df4 | [
"MIT"
] | null | null | null |
import numpy as np
import scipy.sparse as ss
import logging
import time
import warnings
from .feature_selection import get_significant_genes
from .feature_selection import calculate_minmax
warnings.simplefilter("ignore")
logging.basicConfig(format='%(process)d - %(levelname)s : %(asctime)s - %(message)s', level=logging.DEBUG)
logger = logging.getLogger(__name__)
def run_CDR_analysis(data, phenotype, capvar = 0.95, pernum = 2000, thres = 0.05):
"""Main CDR-g analysis function
The key step in CDR-g is an SVD-decomposition on gene co-expression matrices.
Depending on the sequencing platform, this SVD step can produce thousands of
factor loadings. By default, CDR-g selects the number of factor loadings that
captures 95% of the variance in the dataset.
Args:
data (anndata): anndata object of interest
phenotype (str): condition of interest
capvar (float, optional): specifies the number of factor loadings to examine. Defaults to 0.95.
pernum (int, optional): number of permutations to determine importance score. Defaults to 2000.
thres (float, optional): cut-off for permutation importance to select genes. Defaults to 0.05.
"""
start = time.time()
gene_num = data.X.shape[0]
cell_num = data.X.shape[1]
logger.info('processing dataset of %s genes X %s cells', cell_num, gene_num)
logger.info('target class label:: %s', phenotype)
logger.info("SVD and threshold selection")
res = pvalgenerator(data, phenotype, capvar)
logger.info("completed SVD and varimax")
logger.info("permutation testing for gene sets:: perms:: %s threshold :: %s", pernum, thres)
npheno= data.uns["n_pheno"]
#get_significant_genes_perms(data, npheno, permnum = pernum, thres = thres)
get_significant_genes(data, npheno, permnum = pernum, thres = thres)
logger.info("computed thresholds for gene selection")
end = time.time()
timediff = end - start
numfact = data.uns["selected_loading"]
logger.info('N factor loadings:: %s', numfact)
logger.info('wall clock time in seconds:: %s', timediff)
def dask_ver(matrixlist, capvar):
"""provides svd and concatenation with dask"""
import dask.array as da
from dask_ml.decomposition import TruncatedSVD
if ss.issparse(matrixlist[0]):
list_of_mats_as_dask_arrays = [da.from_array(np.array(d.todense())) for d in matrixlist]
else:
list_of_mats_as_dask_arrays = [da.from_array(d) for d in matrixlist]
list_of_corr_mats = [da.corrcoef(d) for d in list_of_mats_as_dask_arrays]
X = da.concatenate(list_of_corr_mats, axis=1)
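# corrcoef yields NaN for genes with zero variance within a condition; zero them out before the SVD.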
X[da.isnan(X)] = 0.0
_, y, Ek, Ss = get_optimal_threshold(X, capvar)
#Ek = svd.components_
#Ss = svd.singular_values_
return Ek, Ss, X, y
def process_svd_to_factors(Ek, Ss, N_k):
"""function for rotation and flips"""
Ek = Ek.T
ind = np.argsort(Ss)[::-1]
Ss = Ss[ind]
Ek = Ek[:, ind]
Lk = Ss**2 # singular values to eigenvalues
Fk = (Lk[:N_k]**0.5)*Ek[:,:N_k] # factor loadings
# Varimax rotation of the factor loadings
ROT = classic_orthomax(Fk, gamma=1) # finding rotation (gamma=1 implies the CLASSIC varimax)
Fs = np.dot(Fk,ROT) # rotated factor loadings
Ls = np.diag(ROT.T@np.diag(Lk[:N_k])@ROT) # rotated eigenvalues
ind = np.argsort(Ls)[::-1]
Ls = Ls[ind]
Fs = Fs[:, ind]
Fs = flip_Ek(Fs)
return Fs, Ls, Fk, Lk
### aux functions for matrix extraction
def get_numbers_of_pheno(ad, pheno):
"""return list of nums"""
vals = ad.obs[pheno].value_counts().tolist()
return vals
def get_bools_of_pheno(ad, pheno):
"""return list of booleans"""
phenotypes = ad.obs[pheno].unique()
bool_list = [ad.obs[pheno] == i for i in phenotypes]
return bool_list
def extract_matrix_from_anndata(ad, pheno_column):
ind = get_bools_of_pheno(ad, pheno_column)
rands = [ad[i,:].X.T for i in ind]
return rands, len(rands)
#### functions for generating pvals and integrating whole varimax
def _full_Fs(ad, pheno, capvar):
matlist, numpheno = extract_matrix_from_anndata(ad, pheno)
Ee, Ss, _, N = dask_ver(matlist, capvar) # specify algorithm
Fs, Ls, Fk, Lk = process_svd_to_factors(Ee, Ss, N)
ad.uns["selected_loading"] = N
ad.uns["Fs"] = Fs
ad.uns["Ls"] = Ls
ad.uns["Fk"] = Fk
ad.uns["Lk"] = Lk
ad.uns["n_pheno"] = numpheno
Fs_diff = calculate_minmax(Fs, numpheno)
return Fs_diff
def pvalgenerator(ad, pheno, capvar):
Fs_diff = _full_Fs(ad, pheno, capvar)
ad.uns["Fs_diff"] = Fs_diff
return Fs_diff
# leos' aux functions
def classic_orthomax(Phi, gamma = 1, q = 20, tol = 1e-6):
"""Returns the orthomax rotation"""
from numpy import eye, asarray, dot, sum, diag
from numpy.linalg import svd
p,k = Phi.shape
R = eye(k)
d=0
for i in range(q):
d_old = d
Lambda = dot(Phi, R)
u,s,vh = svd(dot(Phi.T,asarray(Lambda)**3 - (gamma/p) * dot(Lambda, diag(diag(dot(Lambda.T,Lambda))))))
R = dot(u,vh)
d = sum(s)
if d_old!=0 and d/d_old < 1 + tol: break
return R
def flip_Ek(Ek):
"""That functions guaranties that the eigenvectors will "point up".
"""
n, m = Ek.shape
e_k_to_flip = abs(Ek.min(axis=0)) > Ek.max(axis=0)
flip = np.ones(m)
flip[e_k_to_flip] *= -1
Ek *= flip
return Ek
### aux functions for detecting factors.
def get_optimal_threshold(num, thres, ncomp = 2000):
"""
selects number of factors for truncated SVD
"""
from dask_ml.decomposition import TruncatedSVD
import dask.array as da
nrows = num.shape[0] # number of cells; needed to rechunk before the SVD
numgenes = num.shape[1] # used to cap n_components when there are fewer than ncomp genes
if numgenes < ncomp:
ncomp = numgenes - 1
print(ncomp)
numm = num.rechunk((nrows, 10))
svd = TruncatedSVD(n_components=ncomp, n_iter=5, random_state=42)
svd.fit(numm)
x = np.cumsum(svd.explained_variance_ratio_)
y = np.argmax(x>thres)
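# np.argmax returns 0 when no cumulative ratio exceeds the threshold, so fall back to all components.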
if y == 0:
y = ncomp
X = svd.components_[0:y]
v = svd.singular_values_[0:y]
return x, y, X, v
| 31.315 | 111 | 0.653201 | 943 | 6,263 | 4.207847 | 0.299046 | 0.020161 | 0.014365 | 0.009073 | 0.114919 | 0.095766 | 0.029738 | 0.016633 | 0.016633 | 0 | 0 | 0.01367 | 0.229123 | 6,263 | 199 | 112 | 31.472362 | 0.808202 | 0.244452 | 0 | 0.049587 | 0 | 0 | 0.084889 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.107438 | 0 | 0.280992 | 0.008264 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
15530d060380d749b116a3b7a27afff3e009ff61 | 1,925 | py | Python | commands.py | fivunlm/sbb8 | 9493cfb9d799b57ae4b3bb8a44672fa92736881e | [
"MIT"
] | null | null | null | commands.py | fivunlm/sbb8 | 9493cfb9d799b57ae4b3bb8a44672fa92736881e | [
"MIT"
] | null | null | null | commands.py | fivunlm/sbb8 | 9493cfb9d799b57ae4b3bb8a44672fa92736881e | [
"MIT"
] | null | null | null | import re
from feed import get_builds
def _filter_old_builds(builds):
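"""Keep only the newest build of each artifact within the same version prefix (everything before the last dot)."""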
to_remove = []
for build in builds:
for b in builds:
if b['artifact'] == build['artifact']:
partial_b_version = b['version'][:b['version'].rfind('.')]
partial_build_version = build['version'][:build['version'].rfind('.')]
if partial_b_version == partial_build_version and b['timestamp'] > build['timestamp']:
to_remove.append(build)
break
return [b for b in builds if b not in to_remove]
def command_check_builds(reg_ex, command):
builds = get_builds()
response = ''
clean_builds = _filter_old_builds(builds)
clean_builds.sort(key=lambda b: b['version'])
for build in clean_builds:
response += '%s %s *#%s* _%s_\n' % (
':heavy_check_mark:' if 'successful' in build['status'] else ':bangbang:', build['artifact'],
build['version'], 'successful' if 'successful' in build['status'] else 'failed')
return response
def command_check_specific_build(reg_ex, command):
build_name = reg_ex.match(command).group(1)
builds = get_builds()
response = ''
clean_builds = _filter_old_builds(builds)
clean_builds.sort(key=lambda b: b['version'])
clean_builds = filter(lambda b: build_name in b['version'], clean_builds)
for build in clean_builds:
response += '%s %s *#%s* _%s_\n' % (
':heavy_check_mark:' if 'successful' in build['status'] else ':bangbang:', build['artifact'],
build['version'], 'successful' if 'successful' in build['status'] else 'failed')
return response
COMMANDS = [
{
'regex': re.compile(r'\.do check builds(\s*)'),
'command': command_check_builds
},
{
'regex': re.compile(r'\.do check build ([\w.-]+)'),
'command': command_check_specific_build
}
]
| 32.627119 | 105 | 0.607792 | 239 | 1,925 | 4.669456 | 0.230126 | 0.078853 | 0.010753 | 0.0681 | 0.526882 | 0.526882 | 0.460573 | 0.460573 | 0.460573 | 0.460573 | 0 | 0.000689 | 0.245714 | 1,925 | 58 | 106 | 33.189655 | 0.767906 | 0 | 0 | 0.4 | 0 | 0 | 0.194805 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0 | 0.044444 | 0 | 0.177778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
15532dfb542002fe8f2372645ba607827e292a39 | 2,124 | py | Python | src/NLP/NLP.py | dinispeixoto/Kaydara | 5a22be3f9e931a00f3f3c9bcd1dbda8e1cce0b4d | [
"MIT"
] | null | null | null | src/NLP/NLP.py | dinispeixoto/Kaydara | 5a22be3f9e931a00f3f3c9bcd1dbda8e1cce0b4d | [
"MIT"
] | 3 | 2021-02-08T20:22:41.000Z | 2022-03-25T14:38:24.000Z | src/NLP/NLP.py | dinispeixoto/Kaydara | 5a22be3f9e931a00f3f3c9bcd1dbda8e1cce0b4d | [
"MIT"
] | null | null | null | from src.APIs import IBMWatsonAPI, FacebookAPI
from src.NLP import ReminderNLP, WeatherNLP, NewsNLP, GmailNLP, CalendarNLP
from src.MsgBuilder import NLPMB
from src.Models import Client
import json
# processes the client message and select the related API
def process_message(client_id, msg):
cli = Client.get_client(client_id)
if cli is None:
cli = Client.insert_client(client_id, None)
results = IBMWatsonAPI.send_message(msg, cli.context)
print(results)
__selectAPI(results, cli)
# process a received quick_reply
def process_quick_reply(client_id, quick_reply):
cli = Client.get_client(client_id)
if cli is None:
cli = Client.insert_client(client_id, None)
results = IBMWatsonAPI.send_message(quick_reply, cli.context)
__selectAPI(results, cli)
def send_info(results, cli):
print('INFO REQUEST')
(context, output, _) = results
for m in output:
if m != '':
FacebookAPI.send_message(cli.id, m)
Client.update_client_context(cli.id, None)
cli.context = None
# method select the correct API and deals with invalid request
def __selectAPI(results, cli):
(newContext, _, _) = results
switch_request = {
'Info': send_info,
'WeatherRequest': WeatherNLP.process_message,
'NewsRequest': NewsNLP.process_message,
'EmailRequest': GmailNLP.process_message,
'ReminderRequest': ReminderNLP.process_message,
'CalendarRequest': CalendarNLP.process_message,
}
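# The dialog node returned in the Watson context picks the handler; unknown nodes fall through to __invalid_request.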
try:
node = newContext['node']
print(node)
except KeyError:
node = 'AnythingElse'
switch_request.get(node, __invalid_request)(results, cli)
# method deals with invalid request
def __invalid_request(results, cli):
print('ANYTHING ELSE')
(newContext,output,_) = results
json_newContext = json.dumps(newContext, indent=2)
Client.update_client_context(cli.id, json_newContext)
for m in output:
FacebookAPI.send_message(cli.id, m)
FacebookAPI.send_quick_replies(cli.id, NLPMB.quick_reply_features(), "Here's all the features, enjoy!")
| 30.782609 | 107 | 0.702919 | 261 | 2,124 | 5.509579 | 0.298851 | 0.058414 | 0.038943 | 0.025035 | 0.255911 | 0.21975 | 0.139082 | 0.139082 | 0.139082 | 0.139082 | 0 | 0.000592 | 0.204802 | 2,124 | 68 | 108 | 31.235294 | 0.850799 | 0.085217 | 0 | 0.24 | 0 | 0 | 0.073787 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.1 | 0 | 0.2 | 0.08 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1555ea7d41c4bf7bec602435933ff7dd2f16b76a | 1,043 | py | Python | src/tale/runtime/objects.py | tale-lang/tale | 1779f94aa13545e58a1d5a8819b85ad02ada4144 | [
"MIT"
] | 17 | 2020-02-11T10:38:19.000Z | 2020-09-22T16:36:25.000Z | src/tale/runtime/objects.py | tale-lang/tale | 1779f94aa13545e58a1d5a8819b85ad02ada4144 | [
"MIT"
] | 18 | 2020-02-14T20:36:25.000Z | 2020-05-26T21:52:46.000Z | src/tale/runtime/objects.py | tale-lang/tale | 1779f94aa13545e58a1d5a8819b85ad02ada4144 | [
"MIT"
] | 1 | 2020-02-16T12:04:07.000Z | 2020-02-16T12:04:07.000Z | from typing import Any
class TaleObject:
"""A basic block of Tale's object model.
All values in Tale exist only as instances of this class.
For example, in the following expression:
x = 1
`1` is an instance of `TaleObject`.
Attributes:
type: An instance of `TaleObject` that represents the type of the
object.
name: A name of the object.
py_instance: An instance of the object in Python memory.
"""
def __init__(self, type: 'TaleObject', py_instance: Any, name = None):
self.type = type
self.py_instance = py_instance
self.name = name
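# Bootstrap the object model: TaleType is its own type, and names (strings wrapped in
# TaleObject) can only be attached once TaleString exists, hence the ordering below.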
TaleType = TaleObject(None, None)
TaleType.type = TaleType
TaleString = TaleObject(TaleType, str)
TaleString.name = TaleObject(TaleString, 'String')
TaleType.name = TaleObject(TaleString, 'Type')
TaleNone = TaleObject(None, None, TaleObject(TaleString, 'None'))
TaleInt = TaleObject(TaleType, int, TaleObject(TaleString, 'Int'))
TaleTuple = TaleObject(TaleType, None, TaleObject(TaleString, 'Tuple'))
| 28.189189 | 74 | 0.685523 | 131 | 1,043 | 5.396947 | 0.40458 | 0.141443 | 0.050919 | 0.062235 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00246 | 0.220518 | 1,043 | 36 | 75 | 28.972222 | 0.867159 | 0.361457 | 0 | 0 | 0 | 0 | 0.051696 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.071429 | 0 | 0.214286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
15584c787333f66306e7fcabad2c6a63a9103c83 | 666 | py | Python | src/genderumrevelio/misc/jsonfilter.py | Conan88/GenderumRevelio | 6e047dba990ff6c7ae05fd8ee8388487c6d32ea9 | [
"MIT"
] | null | null | null | src/genderumrevelio/misc/jsonfilter.py | Conan88/GenderumRevelio | 6e047dba990ff6c7ae05fd8ee8388487c6d32ea9 | [
"MIT"
] | null | null | null | src/genderumrevelio/misc/jsonfilter.py | Conan88/GenderumRevelio | 6e047dba990ff6c7ae05fd8ee8388487c6d32ea9 | [
"MIT"
] | null | null | null | import json
def twitterfilter(filename, outputpath="filteredtweets/", inputpath="unfilteredtweets/"):
with open(inputpath + filename + ".json", "r") as f:
data = json.load(f)
indexs = []
for i in range(0, len(data)):
if data[i]["user"].lower() != filename.lower():
indexs.append(i)
print("Total Tweets: " + str(len(data)))
print("Deleted tweets: " + str(len(indexs)))
for ind in reversed(indexs):
del data[ind]
print("New Total Tweets: " + str(len(data)))
with open(outputpath + "new" + filename + ".json", "w") as nf:
json.dump(data, nf)
| 28.956522 | 89 | 0.548048 | 79 | 666 | 4.620253 | 0.493671 | 0.057534 | 0.09863 | 0.093151 | 0.115068 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002114 | 0.28979 | 666 | 22 | 90 | 30.272727 | 0.769556 | 0 | 0 | 0 | 0 | 0 | 0.148649 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0 | 0.066667 | 0 | 0.133333 | 0.2 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
15590c98b9bcc040e125c855d59afa23bb2ae6d3 | 1,038 | py | Python | linear_search/test_linear_search.py | agconti/searches | 930aa4e8c7315d5db039f633b11b337aaf9920d4 | [
"MIT"
] | null | null | null | linear_search/test_linear_search.py | agconti/searches | 930aa4e8c7315d5db039f633b11b337aaf9920d4 | [
"MIT"
] | null | null | null | linear_search/test_linear_search.py | agconti/searches | 930aa4e8c7315d5db039f633b11b337aaf9920d4 | [
"MIT"
] | null | null | null | import unittest
import random
from linear_search import linear_search
class TestLinearSearch(unittest.TestCase):
def setUp(self):
self.items_length = 20
self.unsorted_values = [random.random() for i in range(self.items_length)]
self.target = random.choice(self.unsorted_values)
def test_linear_search_returns_an_index(self):
position = linear_search(self.unsorted_values, self.target)
assert position in range(self.items_length)
assert isinstance(position, int)
def test_linear_search_raises_an_error_with_an_invalid_target(self):
target = 1  # random.random() is always < 1, so 1 can never be in the list
with self.assertRaises(ValueError) as context:
linear_search(self.unsorted_values, target)
self.assertEqual(str(context.exception), "{} was not found in the list".format(target))
def test_linear_search_finds_target(self):
position = linear_search(self.unsorted_values, self.target)
self.assertEqual(self.unsorted_values[position], self.target)
if __name__ == '__main__':
unittest.main()
| 32.4375 | 86 | 0.712909 | 132 | 1,038 | 5.30303 | 0.401515 | 0.137143 | 0.154286 | 0.081429 | 0.254286 | 0.148571 | 0.148571 | 0.148571 | 0.148571 | 0 | 0 | 0.003628 | 0.203276 | 1,038 | 31 | 87 | 33.483871 | 0.842805 | 0 | 0 | 0.086957 | 0 | 0 | 0.034682 | 0 | 0 | 0 | 0 | 0 | 0.173913 | 1 | 0.173913 | false | 0 | 0.130435 | 0 | 0.347826 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
155a6708533a37b4522eab82627523cc40c0ea99 | 1,793 | py | Python | whowlong/management/commands/agspGetTrajetsAround.py | guillaut5/ttpu | 9d47f762a1064000f4443eaf12c3f843570a441e | [
"Apache-2.0"
] | null | null | null | whowlong/management/commands/agspGetTrajetsAround.py | guillaut5/ttpu | 9d47f762a1064000f4443eaf12c3f843570a441e | [
"Apache-2.0"
] | null | null | null | whowlong/management/commands/agspGetTrajetsAround.py | guillaut5/ttpu | 9d47f762a1064000f4443eaf12c3f843570a441e | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
'''
Created on 3 mars 2017
@author: guillaume
'''
from django.core.management.base import BaseCommand, CommandError
import time
import logging
from django.test.client import Client
# Create your tests here.
from whowlong.models import Place,Trajet
from whowlong.computingtools.calculator import RouteComputer
from django.contrib.auth.models import User
logger = logging.getLogger(__name__)
from django.http import JsonResponse
class Command(BaseCommand):
help = u'create trajets around an address'
#def add_arguments(self, parser):
#parser.add_argument('poll_id', nargs='+', type=int)
# voir https://stackoverflow.com/questions/27611468/django-management-command-argument
def add_arguments(self, parser):
parser.add_argument('--adress', type=str)
parser.add_argument('--arroundInMeter', type=int)
def handle(self, *args, **options):
start_time = time.time()
logger.info(u'Start get trajet arround')
adress = options['adress']
logger.info(u'adress:%s '%(adress))
t0 = time.time()
arround = options['arroundInMeter']
logger.info(u'arround:%sm '%(arround))
routeComputer = RouteComputer()
user = User.objects.get(username='guillaume')
# Use the parsed --adress option rather than a hard-coded address
request = routeComputer.initRequest(user, adress, arround, label=u'CHEZMOIS')
routeComputer.buildTrajetsList(request)
logger.debug("routeComputer.buildTrajetsList(request) [%06dms] " % (1000 * (time.time() - t0)))
self.stdout.write(self.style.SUCCESS(u'END'))
routeComputer.computeTrajetLength(request)
| 30.389831 | 132 | 0.654769 | 197 | 1,793 | 5.903553 | 0.532995 | 0.034394 | 0.043852 | 0.032674 | 0.072227 | 0.072227 | 0.072227 | 0.072227 | 0 | 0 | 0 | 0.020999 | 0.229782 | 1,793 | 59 | 133 | 30.389831 | 0.821144 | 0.144451 | 0 | 0 | 0 | 0 | 0.166886 | 0.03088 | 0 | 0 | 0 | 0 | 0 | 1 | 0.068966 | false | 0 | 0.275862 | 0 | 0.413793 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
155c37c9b2d90ac1125a6333fdc53a42a8125eeb | 2,759 | py | Python | ocdsmerge/util.py | open-contracting/ocds-merge | 80c7cb380d191c75f88feefd34b607bc0de13ee1 | [
"BSD-3-Clause"
] | 4 | 2017-09-06T06:14:09.000Z | 2019-07-05T13:11:40.000Z | ocdsmerge/util.py | open-contracting/ocds-merge | 80c7cb380d191c75f88feefd34b607bc0de13ee1 | [
"BSD-3-Clause"
] | 20 | 2017-11-25T02:29:41.000Z | 2020-01-08T18:45:49.000Z | ocdsmerge/util.py | open-contracting/ocds-merge | 80c7cb380d191c75f88feefd34b607bc0de13ee1 | [
"BSD-3-Clause"
] | 1 | 2018-11-07T14:05:12.000Z | 2018-11-07T14:05:12.000Z | import re
from functools import lru_cache
import requests
from ocdsmerge.exceptions import (MissingDateKeyError, NonObjectReleaseError, NonStringDateValueError,
NullDateValueError)
@lru_cache()
def get_tags():
"""
Returns the tags of all versions of OCDS in alphabetical order.
"""
return re.findall(r'"(\d+__\d+__\d+)/', requests.get('https://standard.open-contracting.org/schema/').text)
def get_release_schema_url(tag):
"""
Returns the URL of the release schema in the given version of OCDS.
"""
return f'https://standard.open-contracting.org/schema/{tag}/release-schema.json'
# If we need a method to get dates from releases, see https://github.com/open-contracting/ocds-merge/issues/25
def sorted_releases(releases):
"""
Sorts a list of releases by date.
"""
# Avoids an error if sorting a single compiled release.
if isinstance(releases, list) and len(releases) == 1 and isinstance(releases[0], dict):
return releases
try:
return sorted(releases, key=lambda release: release['date'])
except KeyError:
raise MissingDateKeyError('date', 'The `date` field of at least one release is missing.')
except TypeError as e:
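# Inspect the TypeError's message to report which kind of non-dict release broke the sort.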
if ' not supported between instances of ' in e.args[0]:
if 'NoneType' in e.args[0]:
raise NullDateValueError('The `date` field of at least one release is null.')
else:
raise NonStringDateValueError('The `date` field of at least one release is not a string.')
elif e.args[0] in ('string indices must be integers',
'string index indices must be integers or slices, not str'):
raise NonObjectReleaseError('At least one release is a string, not a dict. Use `json.loads` to parse the '
'string as JSON.')
elif e.args[0] == 'byte indices must be integers or slices, not str':
raise NonObjectReleaseError('At least one release is a byte-string, not a dict. Use `json.loads` to parse '
'the byte-string as JSON.')
elif e.args[0] == 'list indices must be integers or slices, not str':
raise NonObjectReleaseError('At least one release is a list, not a dict.')
elif e.args[0] == 'tuple indices must be integers or slices, not str':
raise NonObjectReleaseError('At least one release is a tuple, not a dict.')
elif e.args[0] in ("'set' object is not subscriptable",
"'set' object is not subscriptable (key 'date')"):
raise NonObjectReleaseError('At least one release is a set, not a dict.')
else:
raise
| 46.762712 | 119 | 0.631388 | 364 | 2,759 | 4.755495 | 0.315934 | 0.032351 | 0.046216 | 0.078567 | 0.440786 | 0.401502 | 0.358752 | 0.312536 | 0.285962 | 0.22877 | 0 | 0.005486 | 0.273287 | 2,759 | 58 | 120 | 47.568966 | 0.857855 | 0.119246 | 0 | 0.051282 | 0 | 0.051282 | 0.408728 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.102564 | 0 | 0.282051 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
155c3cff7ec88893e5e3afc0e765ecc9f4bcd784 | 2,566 | py | Python | geometry_analysis/tests/test_geometry_analysis.py | ErikVazquezM/geometry_analysis | f12cdd04f1d9dbb1547297aa05095aa390bd5ee0 | [
"BSD-3-Clause"
] | null | null | null | geometry_analysis/tests/test_geometry_analysis.py | ErikVazquezM/geometry_analysis | f12cdd04f1d9dbb1547297aa05095aa390bd5ee0 | [
"BSD-3-Clause"
] | null | null | null | geometry_analysis/tests/test_geometry_analysis.py | ErikVazquezM/geometry_analysis | f12cdd04f1d9dbb1547297aa05095aa390bd5ee0 | [
"BSD-3-Clause"
] | null | null | null | """
Unit and regression test for the geometry_analysis package.
"""
# Import package, test suite, and other packages as needed
import geometry_analysis
import pytest
import sys
import numpy as np
import math
@pytest.fixture()
def water_molecule():
name = "water"
symbols = ["H", "O", "H"]
coordinates = np.array([[2,0,0], [0,0,0], [-2,0,0]])
water = geometry_analysis.Molecule(name, symbols, coordinates)
return water
def test_create_failure():
name = 25
symbols = ['H', 'O', 'H']
coordinates = np.zeros([3,3])
with pytest.raises(TypeError):
water = geometry_analysis.Molecule(name, symbols, coordinates)
def test_molecules_set_coordinates(water_molecule):
"""Test bond list is rebuilt when we reset coordinates """
num_bonds = len(water_molecule.bonds)
assert num_bonds == 2
new_coordinates = np.array([[5,0,0], [0,0,0], [-2,0,0]])
water_molecule.coordinates = new_coordinates
new_bonds = len(water_molecule.bonds)
assert new_bonds == 1
assert np.array_equal(new_coordinates, water_molecule.coordinates)
def test_geometry_analysis_imported():
"""Sample test, will always pass so long as import statement worked"""
assert "geometry_analysis" in sys.modules
def test_calculate_distance():
"""Test the calculate_distance function"""
r1 = np.array([0,0,-1])
r2 = np.array([0,1,0])
expected_distance = np.sqrt(2)
calculated_distance = geometry_analysis.calculate_distance(r1, r2)
assert expected_distance == calculated_distance
def test_calculate_angle_180():
"""Test the calculate_distance function"""
r1 = np.array([-1,0,0])
r2 = np.array([0,0,0])
r3 = np.array([1,0,0])
expected_theta = math.pi
calculated_theta = geometry_analysis.calculate_angle(r1, r2, r3)
assert expected_theta == calculated_theta
def test_calculate_angle_90():
"""Test the calculate_distance function"""
r1 = np.array([1,0,0])
r2 = np.array([0,0,0])
r3 = np.array([0,1,0])
expected_theta = (math.pi) / 2
calculated_theta = geometry_analysis.calculate_angle(r1, r2, r3)
assert expected_theta == calculated_theta
@pytest.mark.parametrize('p1, p2, p3, expected_angle', [
(np.array([-1,0,0]), np.array([0,0,0]), np.array([1,0,0]), 180),
(np.array([1,0,0]), np.array([0,0,0]), np.array([0,1,0]), 90),
])
def test_calculate_angle(p1, p2, p3, expected_angle):
# Assumes calculate_angle accepts a degrees flag, since the expected values (180, 90) are in degrees;
# passing expected_angle as the fourth argument was a bug.
calculated_theta = geometry_analysis.calculate_angle(p1, p2, p3, degrees=True)
assert expected_angle == calculated_theta
| 26.729167 | 84 | 0.681216 | 371 | 2,566 | 4.533693 | 0.237197 | 0.029727 | 0.017836 | 0.032105 | 0.482759 | 0.448276 | 0.347206 | 0.250297 | 0.225922 | 0.210464 | 0 | 0.048815 | 0.177709 | 2,566 | 95 | 85 | 27.010526 | 0.748341 | 0.13484 | 0 | 0.145455 | 0 | 0 | 0.024703 | 0 | 0 | 0 | 0 | 0 | 0.145455 | 1 | 0.145455 | false | 0 | 0.109091 | 0 | 0.272727 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
155d76276a8be0807befc12dbb2b607bf3e8e928 | 1,656 | py | Python | journal_venv/lib/python3.9/site-packages/cartopy/examples/regridding_arrows.py | ushham/JournalTool | f0ab9b6711b733f3c68a8a94bbb9773ffd3a95fe | [
"MIT"
] | 2 | 2020-07-29T13:23:42.000Z | 2020-10-24T08:48:13.000Z | journal_venv/lib/python3.9/site-packages/cartopy/examples/regridding_arrows.py | ushham/JournalTool | f0ab9b6711b733f3c68a8a94bbb9773ffd3a95fe | [
"MIT"
] | null | null | null | journal_venv/lib/python3.9/site-packages/cartopy/examples/regridding_arrows.py | ushham/JournalTool | f0ab9b6711b733f3c68a8a94bbb9773ffd3a95fe | [
"MIT"
] | 1 | 2022-03-10T16:12:09.000Z | 2022-03-10T16:12:09.000Z | """
Regridding vectors with quiver
------------------------------
This example demonstrates the regridding functionality in quiver (there exists
equivalent functionality in :meth:`cartopy.mpl.geoaxes.GeoAxes.barbs`).
Regridding can be an effective way of visualising a vector field, particularly
if the data is dense or warped.
"""
__tags__ = ['Vector data']
import matplotlib.pyplot as plt
import numpy as np
import cartopy.crs as ccrs
def sample_data(shape=(20, 30)):
"""
Return ``(x, y, u, v, crs)`` of some vector data
computed mathematically. The returned CRS will be a North Polar
Stereographic projection, meaning that the vectors will be unevenly
spaced in a PlateCarree projection.
"""
crs = ccrs.NorthPolarStereo()
scale = 1e7
x = np.linspace(-scale, scale, shape[1])
y = np.linspace(-scale, scale, shape[0])
x2d, y2d = np.meshgrid(x, y)
u = 10 * np.cos(2 * x2d / scale + 3 * y2d / scale)
v = 20 * np.cos(6 * x2d / scale)
return x, y, u, v, crs
def main():
fig = plt.figure(figsize=(8, 10))
x, y, u, v, vector_crs = sample_data(shape=(50, 50))
ax1 = fig.add_subplot(2, 1, 1, projection=ccrs.PlateCarree())
ax1.coastlines('50m')
ax1.set_extent([-45, 55, 20, 80], ccrs.PlateCarree())
ax1.quiver(x, y, u, v, transform=vector_crs)
ax2 = fig.add_subplot(2, 1, 2, projection=ccrs.PlateCarree())
ax2.set_title('The same vector field regridded')
ax2.coastlines('50m')
ax2.set_extent([-45, 55, 20, 80], ccrs.PlateCarree())
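# regrid_shape interpolates the vectors onto a regular grid in the target projection
# (about 20 points along the shorter axis) before drawing.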
ax2.quiver(x, y, u, v, transform=vector_crs, regrid_shape=20)
plt.show()
if __name__ == '__main__':
main()
| 27.6 | 78 | 0.652174 | 245 | 1,656 | 4.314286 | 0.436735 | 0.011353 | 0.017029 | 0.018921 | 0.213813 | 0.138127 | 0.113529 | 0.113529 | 0 | 0 | 0 | 0.048302 | 0.199879 | 1,656 | 59 | 79 | 28.067797 | 0.749434 | 0.327295 | 0 | 0 | 0 | 0 | 0.051996 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.107143 | 0 | 0.214286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
155f265672b80abeaf4e072085fcf28eb806c6db | 1,543 | py | Python | letsencrypt/client/tests/log_test.py | rivy/lets-encrypt-preview | 759e233aaa36f33c413918e8ebbf1b949af1ce5d | [
"Apache-2.0"
] | null | null | null | letsencrypt/client/tests/log_test.py | rivy/lets-encrypt-preview | 759e233aaa36f33c413918e8ebbf1b949af1ce5d | [
"Apache-2.0"
] | null | null | null | letsencrypt/client/tests/log_test.py | rivy/lets-encrypt-preview | 759e233aaa36f33c413918e8ebbf1b949af1ce5d | [
"Apache-2.0"
] | 1 | 2020-07-20T00:19:40.000Z | 2020-07-20T00:19:40.000Z | """Tests for letsencrypt.client.log."""
import unittest
import mock
class DialogHandlerTest(unittest.TestCase):
def setUp(self):
self.d = mock.MagicMock() # pylint: disable=invalid-name
from letsencrypt.client.log import DialogHandler
self.handler = DialogHandler(height=2, width=6, d=self.d)
self.handler.PADDING_HEIGHT = 2
self.handler.PADDING_WIDTH = 4
def test_adds_padding(self):
self.handler.emit(mock.MagicMock())
self.d.infobox.assert_called_once_with(mock.ANY, 4, 10)
def test_args_in_msg_get_replaced(self):
assert len('123456') <= self.handler.width
self.handler.emit(mock.MagicMock(msg='123%s', args=(456,)))
self.d.infobox.assert_called_once_with('123456', mock.ANY, mock.ANY)
def test_wraps_nospace_is_greedy(self):
assert len('1234567') > self.handler.width
self.handler.emit(mock.MagicMock(msg='1234567'))
self.d.infobox.assert_called_once_with('123456\n7', mock.ANY, mock.ANY)
def test_wraps_at_whitespace(self):
assert len('123 567') > self.handler.width
self.handler.emit(mock.MagicMock(msg='123 567'))
self.d.infobox.assert_called_once_with('123\n567', mock.ANY, mock.ANY)
def test_only_last_lines_are_printed(self):
assert len('a\nb\nc'.split()) > self.handler.height
self.handler.emit(mock.MagicMock(msg='a\n\nb\nc'))
self.d.infobox.assert_called_once_with('b\nc', mock.ANY, mock.ANY)
if __name__ == '__main__':
unittest.main()
| 35.068182 | 79 | 0.682437 | 219 | 1,543 | 4.607306 | 0.328767 | 0.130823 | 0.074331 | 0.094153 | 0.446977 | 0.419227 | 0.367691 | 0.221011 | 0.145689 | 0.099108 | 0 | 0.050633 | 0.180817 | 1,543 | 43 | 80 | 35.883721 | 0.747627 | 0.04083 | 0 | 0 | 0 | 0 | 0.061058 | 0 | 0 | 0 | 0 | 0 | 0.3 | 1 | 0.2 | false | 0 | 0.1 | 0 | 0.333333 | 0.033333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
156048287df26174870e86440b151e9aefd7c0af | 19,312 | py | Python | tests/twitter/test_fetch_savers.py | garrettc/django-ditto | fcf15beb8f9b4d61634efd4a88064df12ee16a6f | [
"MIT"
] | 54 | 2016-08-15T17:32:41.000Z | 2022-02-27T03:32:05.000Z | tests/twitter/test_fetch_savers.py | garrettc/django-ditto | fcf15beb8f9b4d61634efd4a88064df12ee16a6f | [
"MIT"
] | 229 | 2015-07-23T12:50:47.000Z | 2022-03-24T10:33:20.000Z | tests/twitter/test_fetch_savers.py | garrettc/django-ditto | fcf15beb8f9b4d61634efd4a88064df12ee16a6f | [
"MIT"
] | 8 | 2015-09-10T17:10:35.000Z | 2022-03-25T13:05:01.000Z | import datetime
from decimal import Decimal
import json
import os
import pytz
import tempfile
from unittest.mock import patch
from freezegun import freeze_time
from ditto.core.utils import datetime_now
from ditto.core.utils.downloader import DownloadException, filedownloader
from django.test import override_settings
from .test_fetch import FetchTwitterTestCase
from ditto.twitter.fetch.savers import TweetSaver, UserSaver
from ditto.twitter.models import Media, Tweet, User
class TweetSaverTestCase(FetchTwitterTestCase):
"""Testing the TweetSaver class"""
# Note that we've changed the id and id_str of each Tweet in this
# fixture to something much shorter, and easier to test with.
api_fixture = "tweets.json"
def make_tweet(self, is_private=False):
self.fetch_time = datetime_now()
# Get the JSON for a single tweet.
tweets_data = json.loads(self.make_response_body())
tweet_data = tweets_data[0]
if is_private:
tweet_data["user"]["protected"] = True
# Send the JSON, and our new User object, to try and save the tweet:
TweetSaver().save_tweet(tweet_data, self.fetch_time)
# Load that saved tweet from the DB:
return Tweet.objects.get(twitter_id=300)
def test_saves_correct_tweet_data(self):
tweet = self.make_tweet()
# And check it's all there:
self.assertEqual(
tweet.title,
"@flaneur ooh, very exciting, thank you! Both my ears owe you a drink.",
)
self.assertEqual(
tweet.summary,
"@flaneur ooh, very exciting, thank you! Both my ears owe you a drink.",
)
self.assertEqual(
tweet.text,
"@flaneur ooh, very exciting, thank you!\n\nBoth my ears owe you a drink.",
)
self.assertEqual(tweet.latitude, Decimal("40.057016"))
self.assertEqual(tweet.longitude, Decimal("-75.143103"))
self.assertFalse(tweet.is_private)
self.assertEqual(tweet.fetch_time, self.fetch_time)
self.assertEqual(tweet.permalink, "https://twitter.com/philgyford/status/300")
tweets = json.loads(self.make_response_body())
self.assertEqual(tweet.raw, json.dumps(tweets[0]))
self.assertEqual(tweet.user.screen_name, "philgyford")
self.assertEqual(tweet.twitter_id, 300)
self.assertEqual(
tweet.post_time,
datetime.datetime.strptime(
"2015-08-06 19:42:59", "%Y-%m-%d %H:%M:%S"
).replace(tzinfo=pytz.utc),
)
self.assertEqual(tweet.favorite_count, 2)
self.assertEqual(tweet.retweet_count, 1)
self.assertEqual(tweet.media_count, 0)
self.assertEqual(tweet.in_reply_to_screen_name, "flaneur")
self.assertEqual(tweet.in_reply_to_status_id, 629375876216528896)
self.assertEqual(tweet.in_reply_to_user_id, 1859981)
self.assertEqual(tweet.language, "en")
self.assertEqual(tweet.place_attribute_street_address, "795 Folsom St")
self.assertEqual(tweet.place_full_name, "Twitter HQ, San Francisco")
self.assertEqual(tweet.place_country, "United States")
self.assertEqual(
tweet.source,
(
u'<a href="http://tapbots.com/tweetbot" rel="nofollow">Tweetbot '
'for iΟS</a>'
)
)
def test_saves_private_tweets_correctly(self):
"""If the user is protected, their tweets should be marked private."""
tweet = self.make_tweet(is_private=True)
self.assertTrue(tweet.is_private)
def test_saves_280_character_tweets_correctly(self):
"It should save the full text but truncate title and summary to 255 characters."
self.maxDiff = 3000
self.api_fixture = "tweets_280_characters.json"
tweet = self.make_tweet()
self.assertEqual(
tweet.text,
(
"@BarclaysUKHelp Thanks Jonny. I tried online chat at the time and "
"they said the form doesn’t work on iOS Safari. It’d be nice if it "
"said that on the form, rather than it returning to the start "
"half-way through :) So I set up an account at @TSB instead - their "
"form worked."
),
)
self.assertEqual(
tweet.text_html,
(
'<span class="twython-tweet-prefix"><a href="'
'https://twitter.com/BarclaysUKHelp" rel="external">@BarclaysUKHelp'
"</a> </span>Thanks Jonny. I tried online chat at the time and they "
"said the form doesn’t work on iOS Safari. It’d be nice if it said "
"that on the form, rather than it returning to the start half-way "
"through :) So I set up an account at "
'<a href="https://twitter.com/TSB" rel="external">@TSB</a> '
"instead - their form worked."
),
)
self.assertEqual(
tweet.title,
(
"Thanks Jonny. I tried online chat at the time and they said the "
"form doesn’t work on iOS Safari. It’d be nice if it said that on "
"the form, rather than it returning to the start half-way through "
":) So I set up an account at @TSB instead - their form…"
),
)
self.assertEqual(
tweet.summary,
(
"Thanks Jonny. I tried online chat at the time and they said the "
"form doesn’t work on iOS Safari. It’d be nice if it said that on "
"the form, rather than it returning to the start half-way through "
":) So I set up an account at @TSB instead - their form…"
),
)
def test_saves_user(self):
"Saving a Tweet should also save its user."
tweet = self.make_tweet()
self.assertEqual(tweet.user.twitter_id, 12552)
self.assertEqual(tweet.user.fetch_time, self.fetch_time)
def test_saves_quoted_tweets(self):
"Saving a Tweet that quotes another Tweet should save the quoted Tweet."
self.api_fixture = "tweets_with_quoted_tweet.json"
tweet = self.make_tweet()
self.assertEqual(
tweet.text,
(
"Quoting a couple of tweets: https://t.co/HSaYtiWAbg and "
"https://t.co/hpX1aGkWsv"
),
)
self.assertEqual(tweet.quoted_status_id, 663744897778872321)
quoted_tweet = Tweet.objects.get(twitter_id=663744897778872321)
self.assertEqual(
quoted_tweet.text,
"Very quiet in the basement of #Innovate2015 come say hi and talk #iot",
)
self.assertEqual(quoted_tweet.user.screen_name, "iotwatch")
def test_saves_double_quoted_tweets(self):
"""Saving Tweet 1 that quotes Tweet 2 that quotes Tweet 3 should save
Tweet 2, and cope with Tweet 3 not being savable."""
self.api_fixture = "tweets_with_double_quoted_tweet.json"
tweet1 = self.make_tweet()
self.assertEqual(
tweet1.text,
(
"Anyone fancy meeting sometime today/tomorrow to see "
"@genmon\u2019s book vending machine at Google Campus, "
"EC2? https://t.co/1ScaCLOUxb"
),
)
# ie, tweet2's ID:
self.assertEqual(tweet1.quoted_status_id, 714528026650869760)
tweet2 = Tweet.objects.get(twitter_id=714528026650869760)
self.assertEqual(
tweet2.text,
"Ludicrous hobby is ludicrous. But here we go https://t.co/DqYZB2gtQv",
)
self.assertEqual(tweet2.user.screen_name, "genmon")
# ie, tweet3's ID:
self.assertEqual(tweet2.quoted_status_id, 714527559946473474)
def test_saves_retweeted_tweets(self):
"Saving a Tweet that is a retweet should save the retweeted Tweet."
self.api_fixture = "tweets_with_retweeted_tweet.json"
tweet = self.make_tweet()
self.assertEqual(
tweet.text,
(
"RT @stefiorazi: Twitter help: Looking for early Barbican "
"Estate residents to interview. mail@modernistestates RTs "
"appreciated https://t.co/\u2026"
),
)
self.assertEqual(tweet.retweeted_status_id, 735555565724827649)
retweeted_tweet = Tweet.objects.get(twitter_id=735555565724827649)
self.assertEqual(
retweeted_tweet.text,
(
"Twitter help: Looking for early Barbican Estate residents to "
"interview. mail@modernistestates RTs appreciated "
"https://t.co/IFSZIh9DHm"
),
)
self.assertEqual(retweeted_tweet.user.screen_name, "stefiorazi")
def test_extended_2016_tweets(self):
"""Saves correctly from the new (2016) tweet format.
https://dev.twitter.com/overview/api/upcoming-changes-to-tweets
"""
self.api_fixture = "tweets_extended_format_2016.json"
tweet = self.make_tweet()
self.assertEqual(
tweet.text,
(
"@philgyford Here\u2019s a test tweet that goes on as much as "
"possible and includes an image. Hi to my fans in testland! "
"https://t.co/tzhyk2QWSr"
),
)
self.assertEqual(
tweet.summary,
(
"Here\u2019s a test tweet that goes on as much as possible and "
"includes an image. Hi to my fans in testland!"
),
)
self.assertEqual(
tweet.title,
(
"Here\u2019s a test tweet that goes on as much as possible and "
"includes an image. Hi to my fans in testland!"
),
)
class TweetSaverMediaTestCase(FetchTwitterTestCase):
"Parent class for testing the save_media() method of the TweetSaver class."
# Child classes should have an api_fixture property.
def setUp(self):
"Save a tweet using the api_fixture's data."
fetch_time = datetime_now()
tweet_data = json.loads(self.make_response_body())
# Send the JSON, and our new User object, to try and save the tweet:
TweetSaver().save_tweet(tweet_data, fetch_time)
# Load that saved tweet from the DB:
self.tweet = Tweet.objects.get(twitter_id=9876543210)
class TweetSaverPhotosTestCase(TweetSaverMediaTestCase):
"Testing that photos are saved correctly."
api_fixture = "tweet_with_photos.json"
def test_saves_photos(self):
self.assertEqual(self.tweet.media_count, 3)
photos = Media.objects.filter(tweets__pk=self.tweet.pk)
self.assertEqual(len(photos), 3)
photo = photos[1]
self.assertEqual(photo.media_type, "photo")
self.assertEqual(photo.twitter_id, 1234567890)
self.assertEqual(
photo.image_url, "https://pbs.twimg.com/media/CSaWsSkWsAA-yXb.jpg"
)
self.assertEqual(photo.large_w, 935)
self.assertEqual(photo.large_h, 397)
self.assertEqual(photo.medium_w, 600)
self.assertEqual(photo.medium_h, 254)
self.assertEqual(photo.small_w, 340)
self.assertEqual(photo.small_h, 144)
self.assertEqual(photo.thumb_w, 150)
self.assertEqual(photo.thumb_h, 150)
self.assertIn(self.tweet, photo.tweets.all())
class TweetSaverVideosTestCase(TweetSaverMediaTestCase):
"Testing that videos are saved correctly."
api_fixture = "tweet_with_video.json"
def test_saves_videos(self):
self.assertEqual(self.tweet.media_count, 1)
videos = Media.objects.filter(tweets__pk=self.tweet.pk)
self.assertEqual(len(videos), 1)
video = videos[0]
self.assertEqual(video.media_type, "video")
self.assertEqual(video.twitter_id, 1234567890)
self.assertEqual(
video.image_url,
"https://pbs.twimg.com/ext_tw_video_thumb/661601811007188992/pu/img/gcxHGl7EA08a-Gps.jpg", # noqa: E501
)
self.assertEqual(video.large_w, 640)
self.assertEqual(video.large_h, 360)
self.assertEqual(video.medium_w, 600)
self.assertEqual(video.medium_h, 338)
self.assertEqual(video.small_w, 340)
self.assertEqual(video.small_h, 191)
self.assertEqual(video.thumb_w, 150)
self.assertEqual(video.thumb_h, 150)
self.assertIn(self.tweet, video.tweets.all())
self.assertEqual(video.aspect_ratio, "16:9")
self.assertEqual(
video.dash_url,
"https://video.twimg.com/ext_tw_video/661601811007188992/pu/pl/K0pVjBgnc5BI_4e5.mpd", # noqa: E501
)
self.assertEqual(
video.xmpeg_url,
"https://video.twimg.com/ext_tw_video/661601811007188992/pu/pl/K0pVjBgnc5BI_4e5.m3u8", # noqa: E501
)
class TweetSaverAnimatedGifTestCase(TweetSaverMediaTestCase):
"Testing that animated GIFs are saved correctly."
api_fixture = "tweet_with_animated_gif.json"
def test_saves_gifs(self):
self.assertEqual(self.tweet.media_count, 1)
media = Media.objects.filter(tweets__pk=self.tweet.pk)
self.assertEqual(len(media), 1)
gif = media[0]
self.assertEqual(gif.media_type, "animated_gif")
self.assertEqual(gif.twitter_id, 726396540303073281)
self.assertEqual(
gif.image_url, "https://pbs.twimg.com/tweet_video_thumb/ChStzgbWYAErHLi.jpg"
)
self.assertEqual(gif.large_w, 320)
self.assertEqual(gif.large_h, 232)
self.assertEqual(gif.medium_w, 320)
self.assertEqual(gif.medium_h, 232)
self.assertEqual(gif.small_w, 320)
self.assertEqual(gif.small_h, 232)
self.assertEqual(gif.thumb_w, 150)
self.assertEqual(gif.thumb_h, 150)
self.assertIn(self.tweet, gif.tweets.all())
self.assertEqual(gif.aspect_ratio, "40:29")
self.assertEqual(
gif.mp4_url, "https://pbs.twimg.com/tweet_video/ChStzgbWYAErHLi.mp4"
)
class UserSaverTestCase(FetchTwitterTestCase):
api_fixture = "verify_credentials.json"
def make_user_data(self, custom={}):
"""Get the JSON for a single user.
custom is a dict of attributes to override on the default data.
eg, {'protected': True}
"""
raw_json = self.make_response_body()
user_data = json.loads(raw_json)
for key, value in custom.items():
user_data[key] = value
return user_data
@patch.object(filedownloader, "download")
def make_user_object(self, user_data, download):
""""Creates/updates a User from API data, then fetches that User from
the DB and returns it.
"""
# Quietly prevents avatar files being fetched:
download.side_effect = DownloadException("Oops")
UserSaver().save_user(user_data, datetime_now())
return User.objects.get(twitter_id=12552)
@freeze_time("2015-08-14 12:00:00", tz_offset=-8)
def test_saves_correct_user_data(self):
user_data = self.make_user_data()
user = self.make_user_object(user_data)
self.assertEqual(user.fetch_time, datetime_now())
self.assertEqual(user.raw, json.dumps(user_data))
self.assertEqual(user.screen_name, "philgyford")
self.assertEqual(user.url, "http://www.gyford.com/")
self.assertFalse(user.is_private)
self.assertFalse(user.is_verified)
self.assertEqual(
user.created_at,
datetime.datetime.strptime(
"2006-11-15 16:55:59", "%Y-%m-%d %H:%M:%S"
).replace(tzinfo=pytz.utc),
)
self.assertEqual(user.description, "Good. Good to Firm in places.")
self.assertEqual(user.location, "London, UK")
self.assertEqual(user.time_zone, "London")
self.assertEqual(user.favourites_count, 1389)
self.assertEqual(user.followers_count, 2435)
self.assertEqual(user.friends_count, 309)
self.assertEqual(user.listed_count, 138)
self.assertEqual(user.statuses_count, 16428)
def test_saves_alternate_data(self):
"""Check some different data to in the main user test."""
user_data = self.make_user_data({"protected": True, "verified": True})
user = self.make_user_object(user_data)
self.assertTrue(user.is_private)
self.assertTrue(user.is_verified)
def test_handles_missing_expanded_url(self):
"""Test fix for when expanded_url is None, as here:
{'indices': [0, 28],
'url': 'http://www.benhammersley.com',
'expanded_url': None
        }
"""
entities = {
"url": {
"urls": [
{
"indices": [0, 22],
"url": "http://t.co/UEs0CCkdrl",
"expanded_url": None,
}
]
}
}
user_data = self.make_user_data({"entities": entities})
user = self.make_user_object(user_data)
self.assertEqual(user.url, "http://t.co/UEs0CCkdrl")
@patch.object(filedownloader, "download")
@patch.object(UserSaver, "_fetch_and_save_avatar")
def test_calls_fetch_and_save_avatar(self, fetch_avatar, download):
"_fetch_and_save_avatar should be called with the User object."
# Quietly prevents avatar files being fetched:
download.side_effect = DownloadException("Oops")
# Just make the mocked method return the User that's passed in:
fetch_avatar.side_effect = lambda value: value
user_data = self.make_user_data()
saved_user = UserSaver().save_user(user_data, datetime_now())
fetch_avatar.assert_called_once_with(saved_user)
@override_settings(MEDIA_ROOT=tempfile.gettempdir())
@patch.object(filedownloader, "download")
def test_downloads_and_saves_avatar(self, download):
"Should call download() and save avatar."
# Make a temporary file, like download() would make:
jpg = tempfile.NamedTemporaryFile()
temp_filepath = jpg.name
download.return_value = temp_filepath
user_data = self.make_user_data()
saved_user = UserSaver().save_user(user_data, datetime_now())
download.assert_called_once_with(
saved_user.profile_image_url_https,
["image/jpeg", "image/jpg", "image/png", "image/gif"],
)
self.assertEqual(
saved_user.avatar,
"twitter/avatars/25/52/12552/%s" % os.path.basename(temp_filepath),
)
@patch.object(filedownloader, "download")
@patch.object(os.path, "exists")
def test_does_not_download_and_save_avatar(self, exists, download):
"If we already have the user's avatar, don't download it."
# Fake that the file we look for exists:
exists.return_value = True
user_data = self.make_user_data()
UserSaver().save_user(user_data, datetime_now())
assert not download.called
| 37.71875 | 116 | 0.624948 | 2,356 | 19,312 | 4.974533 | 0.212224 | 0.131826 | 0.059727 | 0.010751 | 0.405119 | 0.339761 | 0.275256 | 0.229352 | 0.219795 | 0.20256 | 0 | 0.039822 | 0.27703 | 19,312 | 511 | 117 | 37.792564 | 0.799169 | 0.110398 | 0 | 0.234536 | 0 | 0.002577 | 0.248627 | 0.026726 | 0 | 0 | 0 | 0 | 0.296392 | 1 | 0.054124 | false | 0 | 0.036082 | 0 | 0.126289 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1566077a68f63bcb1bb1604faa0f1cbc023913fa | 4,335 | py | Python | cellrank/tl/_transition_matrix.py | WeilerP/cellrank | c8c2b9f6bd2448861fb414435aee7620ca5a0bad | [
"BSD-3-Clause"
] | 172 | 2020-03-19T19:50:53.000Z | 2022-03-28T09:36:04.000Z | cellrank/tl/_transition_matrix.py | WeilerP/cellrank | c8c2b9f6bd2448861fb414435aee7620ca5a0bad | [
"BSD-3-Clause"
] | 702 | 2020-03-19T08:09:04.000Z | 2022-03-30T09:55:14.000Z | cellrank/tl/_transition_matrix.py | WeilerP/cellrank | c8c2b9f6bd2448861fb414435aee7620ca5a0bad | [
"BSD-3-Clause"
] | 17 | 2020-04-07T03:11:02.000Z | 2022-02-02T20:39:16.000Z | from typing import Union, Callable, Iterable, Optional
from typing_extensions import Literal
from anndata import AnnData
from cellrank import logging as logg
from cellrank.ul._docs import d, inject_docs
from cellrank.tl._utils import _deprecate
from cellrank.tl.kernels import VelocityKernel, ConnectivityKernel
from cellrank.tl.kernels._base_kernel import KernelExpression
from cellrank.tl.kernels._velocity_kernel import BackwardMode, VelocityMode
from cellrank.tl.kernels._velocity_schemes import Scheme
@_deprecate(version="2.0")
@inject_docs(m=VelocityMode, b=BackwardMode, s=Scheme) # don't swap the order
@d.dedent
def transition_matrix(
adata: AnnData,
backward: bool = False,
vkey: str = "velocity",
xkey: str = "Ms",
conn_key: str = "connectivities",
gene_subset: Optional[Iterable] = None,
mode: Literal[
"deterministic", "stochastic", "sampling", "monte_carlo"
] = VelocityMode.DETERMINISTIC,
backward_mode: Literal["transpose", "negate"] = BackwardMode.TRANSPOSE,
scheme: Union[
Literal["dot_product", "cosine", "correlation"], Callable
] = Scheme.CORRELATION,
softmax_scale: Optional[float] = None,
weight_connectivities: float = 0.2,
density_normalize: bool = True,
key: Optional[str] = None,
**kwargs,
) -> KernelExpression:
"""
Compute a transition matrix based on a combination of RNA Velocity and transcriptomic or spatial similarity.
To learn more about the way in which the transition matrices are computed, see
:class:`cellrank.tl.kernels.VelocityKernel` for the velocity-based transition matrix and
:class:`cellrank.tl.kernels.ConnectivityKernel` for the similarity-based transition matrix.
Parameters
----------
%(adata)s
%(backward)s
vkey
Key from ``adata.layers`` to access the velocities.
xkey
Key in ``adata.layers`` where expected gene expression counts are stored.
conn_key
Key in :attr:`anndata.AnnData.obsp` to obtain the connectivity matrix, describing cell-cell similarity.
gene_subset
List of genes to be used to compute transition probabilities.
By default, genes from ``adata.var['velocity_genes']`` are used.
%(velocity_mode)s
%(velocity_backward_mode_high_lvl)s
%(velocity_scheme)s
%(softmax_scale)s
weight_connectivities
Weight given to similarities as opposed to velocities. Must be in `[0, 1]`.
density_normalize
Whether to use density correction when computing the transition probabilities based on similarities.
Density correction is done as by :cite:`haghverdi:16`.
%(write_to_adata.parameters)s
kwargs
Keyword arguments for :meth:`cellrank.tl.kernels.VelocityKernel.compute_transition_matrix`.
Returns
-------
A kernel expression object containing the computed transition matrix.
%(write_to_adata)s
"""
def compute_velocity_kernel() -> VelocityKernel:
return VelocityKernel(
adata,
backward=backward,
vkey=vkey,
xkey=xkey,
gene_subset=gene_subset,
conn_key=conn_key,
).compute_transition_matrix(
softmax_scale=softmax_scale,
mode=mode,
backward_mode=backward_mode,
scheme=scheme,
**kwargs,
)
if 0 < weight_connectivities < 1:
vk = compute_velocity_kernel()
logg.info(f"Using a connectivity kernel with weight `{weight_connectivities}`")
ck = ConnectivityKernel(
adata, backward=backward, conn_key=conn_key
).compute_transition_matrix(density_normalize=density_normalize)
final = (
(1 - weight_connectivities) * vk + weight_connectivities * ck
).compute_transition_matrix()
elif weight_connectivities == 0:
final = compute_velocity_kernel()
elif weight_connectivities == 1:
final = ConnectivityKernel(
adata,
backward=backward,
conn_key=conn_key,
).compute_transition_matrix(density_normalize=density_normalize)
else:
raise ValueError(
f"Parameter `weight_connectivities` must be in range `[0, 1]`, found `{weight_connectivities}`."
)
final.write_to_adata(key=key)
return final
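# Example usage (a minimal sketch, not part of the module). It assumes `adata` is an
# AnnData object that has already been preprocessed with scVelo, so that the "Ms"
# layer, the "velocity" layer and the KNN connectivities are present:
#
#   import cellrank as cr
#
#   kernel = cr.tl.transition_matrix(
#       adata,
#       weight_connectivities=0.2,   # 80% velocity kernel + 20% connectivity kernel
#       mode="deterministic",
#       softmax_scale=4,
#   )
#   T = kernel.transition_matrix     # row-stochastic (n_obs x n_obs) matrix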
| 36.737288 | 112 | 0.687659 | 496 | 4,335 | 5.84879 | 0.340726 | 0.055153 | 0.04102 | 0.028956 | 0.107204 | 0.087211 | 0.087211 | 0.074457 | 0.074457 | 0.074457 | 0 | 0.004474 | 0.226528 | 4,335 | 117 | 113 | 37.051282 | 0.860722 | 0.337024 | 0 | 0.140845 | 0 | 0 | 0.098901 | 0.027106 | 0 | 0 | 0 | 0 | 0 | 1 | 0.028169 | false | 0 | 0.140845 | 0.014085 | 0.197183 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
15673650b27d37a68700c313fcef5b17abf771b6 | 516 | py | Python | tests/users/api/test_user.py | arkhn/fhir-river | a12179c34fad131d16dedc20c61297ed83d805e6 | [
"Apache-2.0"
] | 42 | 2020-03-25T16:47:30.000Z | 2022-01-31T21:26:38.000Z | tests/users/api/test_user.py | arkhn/fhir-river | a12179c34fad131d16dedc20c61297ed83d805e6 | [
"Apache-2.0"
] | 367 | 2020-04-08T12:46:34.000Z | 2022-02-16T01:15:32.000Z | tests/users/api/test_user.py | arkhn/fhir-river | a12179c34fad131d16dedc20c61297ed83d805e6 | [
"Apache-2.0"
] | 3 | 2020-05-14T08:24:46.000Z | 2021-08-04T05:00:16.000Z | import pytest
from django.urls import reverse
pytestmark = pytest.mark.django_db
def test_retrieve_unauthenticated_user(api_client):
url = reverse("auth-user-detail")
response = api_client.get(url)
assert response.status_code == 403, response.data
@pytest.mark.as_user
def test_retrieve_authenticated_user(api_client, user):
url = reverse("auth-user-detail")
response = api_client.get(url)
assert response.status_code == 200, response.data
assert response.data["id"] == user.id
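# These tests rely on fixtures and a marker that live outside this file: `api_client`,
# `user`, and the `as_user` marker that authenticates the client as that user.
# A rough sketch of what a conftest.py providing them could look like — the fixture
# names match the tests above, but the implementation details here are assumptions,
# not the project's actual code:
#
#   import pytest
#   from rest_framework.test import APIClient
#
#   @pytest.fixture
#   def api_client():
#       return APIClient()
#
#   @pytest.fixture
#   def user(django_user_model):
#       return django_user_model.objects.create_user("someuser", password="pw")
#
# The `as_user` marker would then apply api_client.force_authenticate(user=user) to
# the marked tests, e.g. via an autouse fixture that inspects request.node.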
| 21.5 | 55 | 0.742248 | 71 | 516 | 5.197183 | 0.422535 | 0.097561 | 0.081301 | 0.097561 | 0.384824 | 0.384824 | 0.384824 | 0.384824 | 0.384824 | 0.384824 | 0 | 0.01373 | 0.153101 | 516 | 23 | 56 | 22.434783 | 0.830664 | 0 | 0 | 0.307692 | 0 | 0 | 0.065891 | 0 | 0 | 0 | 0 | 0 | 0.230769 | 1 | 0.153846 | false | 0 | 0.153846 | 0 | 0.307692 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1567d9ad382f0a244898929137268b5c664dab10 | 13,753 | py | Python | ngshare_exchange/course_management.py | lauri3k/shhhh | 92d985e5dbe4f39ab9bf8103c9ba5376e93c8af8 | [
"BSD-3-Clause"
] | 1 | 2021-03-23T04:31:38.000Z | 2021-03-23T04:31:38.000Z | ngshare_exchange/course_management.py | lauri3k/shhhh | 92d985e5dbe4f39ab9bf8103c9ba5376e93c8af8 | [
"BSD-3-Clause"
] | 6 | 2020-06-08T21:38:24.000Z | 2021-03-26T05:20:35.000Z | ngshare_exchange/course_management.py | lauri3k/shhhh | 92d985e5dbe4f39ab9bf8103c9ba5376e93c8af8 | [
"BSD-3-Clause"
] | 1 | 2021-03-23T03:17:33.000Z | 2021-03-23T03:17:33.000Z | import os
import sys
import requests
import csv
import subprocess
import json
import argparse
from urllib.parse import quote
# https://www.geeksforgeeks.org/print-colors-python-terminal/
def prRed(skk, exit=True):
print('\033[91m {}\033[00m'.format(skk))
if exit:
sys.exit(-1)
def prGreen(skk):
print('\033[92m {}\033[00m'.format(skk))
def prYellow(skk):
print("\033[93m {}\033[00m".format(skk))
class User:
def __init__(self, id, first_name, last_name, email):
self.id = id
self.first_name = '' if first_name is None else first_name
self.last_name = '' if last_name is None else last_name
self.email = '' if email is None else email
def get_username():
if 'JUPYTERHUB_USER' in os.environ:
return os.environ['JUPYTERHUB_USER']
else:
return os.environ['USER']
def ngshare_url():
global _ngshare_url
try:
return _ngshare_url
except NameError:
try:
from nbgrader.apps import NbGrader
nbgrader = NbGrader()
nbgrader.load_config_file()
exchange = nbgrader.config.ExchangeFactory.exchange()
_ngshare_url = exchange.ngshare_url
return _ngshare_url
except Exception as e:
prRed(
'Cannot determine ngshare URL. Please check your nbgrader_config.py!',
False,
)
prRed(e)
def get_header():
if 'JUPYTERHUB_API_TOKEN' in os.environ:
return {'Authorization': 'token ' + os.environ['JUPYTERHUB_API_TOKEN']}
else:
return None
def check_status_code(response):
if response.status_code != requests.codes.ok:
prRed(
'ngshare returned an invalid status code {}'.format(
response.status_code
),
False,
)
if response.status_code >= 500:
prRed(
'ngshare encountered an error. Please contact the maintainers'
)
check_message(response)
def check_message(response):
response = response.json()
if not response['success']:
prRed(response['message'])
return response
def encode_url(url):
return quote(url, safe='/', encoding=None, errors=None)
def post(url, data):
header = get_header()
encoded_url = encode_url(url)
try:
response = requests.post(
ngshare_url() + encoded_url, data=data, headers=header
)
response.raise_for_status()
except requests.exceptions.ConnectionError:
prRed('Could not establish connection to ngshare server')
except Exception:
check_status_code(response)
return check_message(response)
def delete(url, data):
header = get_header()
encoded_url = encode_url(url)
try:
response = requests.delete(
ngshare_url() + encoded_url, data=data, headers=header
)
response.raise_for_status()
except requests.exceptions.ConnectionError:
prRed('Could not establish connection to ngshare server')
except Exception:
check_status_code(response)
return check_message(response)
def check_username_warning(users):
invalid_usernames = [n for n in users if n != n.lower()]
if invalid_usernames:
prYellow(
'The following usernames have upper-case letters. Normally JupyterHub forces usernames to be lowercase. If the user has trouble accessing the course, you should add their lowercase username to ngshare instead.',
)
for user in invalid_usernames:
prYellow(user)
def create_course(args):
instructors = args.instructors or []
check_username_warning(instructors)
url = '/course/{}'.format(args.course_id)
data = {'user': get_username(), 'instructors': json.dumps(instructors)}
response = post(url, data)
prGreen('Successfully created {}'.format(args.course_id))
def add_student(args):
# add student to ngshare
check_username_warning([args.student_id])
student = User(args.student_id, args.first_name, args.last_name, args.email)
url = '/student/{}/{}'.format(args.course_id, student.id)
data = {
'user': get_username(),
'first_name': student.first_name,
'last_name': student.last_name,
'email': student.email,
}
response = post(url, data)
prGreen(
'Successfully added/updated {} on {}'.format(student.id, args.course_id)
)
if not args.no_gb:
add_jh_student(student)
def add_jh_student(student: User):
# add student to nbgrader gradebook
command = ['nbgrader', 'db', 'student', 'add']
if len(student.first_name) > 0:
command.append('--first-name')
command.append(student.first_name)
if len(student.last_name) > 0:
command.append('--last-name')
command.append(student.last_name)
if len(student.email) > 0:
command.append('--email')
command.append(student.email)
command.append(student.id)
subprocess.run(command)
def add_students(args):
students = []
if not os.path.exists(args.csv_file):
prRed(
'The csv file you entered does not exist. Please enter a valid path!'
)
with open(args.csv_file, 'r') as f:
csv_reader = csv.reader(f, delimiter=',')
rows = list(csv_reader)
if len(rows) == 0:
prRed('The csv file you entered is empty')
header = rows[0]
required_cols = ['student_id', 'first_name', 'last_name', 'email']
cols_dict = dict()
for i, col in enumerate(header):
cols_dict[col] = i
for col in required_cols:
if col not in cols_dict:
prRed('Missing column {} in {}.'.format(col, args.csv_file))
for i, row in enumerate(rows[1:]):
student_dict = {}
student_id = row[cols_dict['student_id']]
if len(student_id.replace(' ', '')) == 0:
prRed(
'Student ID cannot be empty (row {})'.format(i + 1), False
)
continue
first_name = row[cols_dict['first_name']]
last_name = row[cols_dict['last_name']]
email = row[cols_dict['email']]
student_dict['username'] = student_id
student_dict['first_name'] = first_name
student_dict['last_name'] = last_name
student_dict['email'] = email
students.append(student_dict)
check_username_warning([student['username'] for student in students])
url = '/students/{}'.format(args.course_id)
data = {'user': get_username(), 'students': json.dumps(students)}
response = post(url, data)
if response['success']:
for i, s in enumerate(response['status']):
user = s['username']
if s['success']:
prGreen(
'{} was successfully added to {}'.format(
user, args.course_id
)
)
student = User(
user,
students[i]['first_name'],
students[i]['last_name'],
students[i]['email'],
)
if not args.no_gb:
add_jh_student(student)
else:
prRed(
'There was an error adding {} to {}: {}'.format(
user, args.course_id, s['message']
),
False,
)
def remove_jh_student(student_id, force):
# remove a student from nbgrader gradebook
command = 'nbgrader db student remove {} '.format(student_id)
if force:
command += '--force'
os.system(command)
def remove_students(args):
for student in args.students:
if not args.no_gb:
remove_jh_student(student, args.force)
url = '/student/{}/{}'.format(args.course_id, student)
data = {'user': get_username()}
response = delete(url, data)
prGreen(
'Successfully deleted {} from {}'.format(student, args.course_id)
)
def add_instructor(args):
check_username_warning([args.instructor_id])
url = '/instructor/{}/{}'.format(args.course_id, args.instructor_id)
data = {
'user': get_username(),
'first_name': args.first_name,
'last_name': args.last_name,
'email': args.email,
}
print(data)
response = post(url, data)
prGreen(
'Successfully added {} as an instructor to {}'.format(
args.instructor_id, args.course_id
)
)
def remove_instructor(args):
url = '/instructor/{}/{}'.format(args.course_id, args.instructor_id)
data = {'user': get_username()}
response = delete(url, data)
prGreen(
'Successfully deleted instructor {} from {}'.format(
args.instructor_id, args.course_id
)
)
def parse_args(argv):
parser = argparse.ArgumentParser(description='ngshare Course Management')
subparsers = parser.add_subparsers()
create_course_parser = subparsers.add_parser(
'create_course', help='Create a course'
)
create_course_parser.add_argument(
'course_id', metavar='COURSE_ID', help='ID of the course'
)
create_course_parser.add_argument(
'instructors',
metavar='INSTRUCTOR',
nargs='*',
default=None,
help='List of instructors assigned to the course',
)
create_course_parser.set_defaults(func=create_course)
add_instructor_parser = subparsers.add_parser(
'add_instructor', help='Add/update one instructor for a course'
)
add_instructor_parser.add_argument(
'course_id', metavar='COURSE_ID', help='ID of the course'
)
add_instructor_parser.add_argument(
'instructor_id',
metavar='INSTRUCTOR_ID',
help='Username of the added/modified instructor',
)
add_instructor_parser.add_argument(
'-f',
'--first_name',
default=None,
help='First name of the instructor',
)
add_instructor_parser.add_argument(
'-l',
'--last_name',
default=None,
help='Last name of the instructor',
)
add_instructor_parser.add_argument(
'-e',
'--email',
default=None,
help='Email of the instructor',
)
add_instructor_parser.set_defaults(func=add_instructor)
remove_instructor_parser = subparsers.add_parser(
'remove_instructor', help='Remove one instructor from a course'
)
remove_instructor_parser.add_argument(
'course_id', metavar='COURSE_ID', help='ID of the course'
)
remove_instructor_parser.add_argument(
'instructor_id',
metavar='INSTRUCTOR_ID',
help='Username of the instructor to remove',
)
remove_instructor_parser.set_defaults(func=remove_instructor)
add_student_parser = subparsers.add_parser(
'add_student', help='Add/update one student for a course'
)
add_student_parser.add_argument(
'course_id', metavar='COURSE_ID', help='ID of the course'
)
add_student_parser.add_argument(
'student_id',
metavar='STUDENT_ID',
help='Username of the added/modified student',
)
add_student_parser.add_argument(
'-f',
'--first_name',
default=None,
help='First name of the student',
)
add_student_parser.add_argument(
'-l',
'--last_name',
default=None,
help='Last name of the student',
)
add_student_parser.add_argument(
'-e',
'--email',
default=None,
help='Email of the student',
)
add_student_parser.add_argument(
'--no-gb',
action='store_true',
help='Do not add student to local nbgrader gradebook',
)
add_student_parser.set_defaults(func=add_student)
add_students_parser = subparsers.add_parser(
'add_students',
help='Add/update multiple students in a course using a CSV file',
)
add_students_parser.add_argument(
'course_id', metavar='COURSE_ID', help='ID of the course'
)
add_students_parser.add_argument(
'csv_file',
metavar='CSV_FILE',
help='A CSV file with four fields: student_id,first_name,last_name,email',
)
add_students_parser.add_argument(
'--no-gb',
action='store_true',
help='Do not add students to local nbgrader gradebook',
)
add_students_parser.set_defaults(func=add_students)
remove_students_parser = subparsers.add_parser(
'remove_students', help='Remove one or more students from a course'
)
remove_students_parser.add_argument(
'course_id', metavar='COURSE_ID', help='ID of the course'
)
remove_students_parser.add_argument(
'students',
metavar='STUDENT',
nargs='+',
help='List of student IDs to remove',
)
remove_students_parser.add_argument(
'--no-gb',
action='store_true',
help='Do not remove student from local nbgrader gradebook',
)
remove_students_parser.add_argument(
'--force',
action='store_true',
help='Force student removal from local nbgrader gradebook, even if this deletes their grades',
)
remove_students_parser.set_defaults(func=remove_students)
parser.set_defaults(func=lambda x: parser.print_help())
args = parser.parse_args(argv)
return args
def main(argv=None):
argv = argv or sys.argv[1:]
args = parse_args(argv)
args.func(args)
if __name__ == '__main__':
sys.exit(main())
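# Example invocations (illustrative only — course, user and file names are made up):
#
#   python -m ngshare_exchange.course_management create_course course101 instructor1
#   python -m ngshare_exchange.course_management add_student course101 student1 \
#       -f Jane -l Doe -e jane@example.com
#   python -m ngshare_exchange.course_management add_students course101 students.csv
#   python -m ngshare_exchange.course_management remove_students course101 student1 --force
#
# Each subcommand dispatches to the function registered with set_defaults() above.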
| 29.137712 | 223 | 0.609612 | 1,611 | 13,753 | 4.998759 | 0.145872 | 0.029057 | 0.046442 | 0.015646 | 0.432882 | 0.367813 | 0.302496 | 0.264994 | 0.244505 | 0.226624 | 0 | 0.004343 | 0.280157 | 13,753 | 471 | 224 | 29.199575 | 0.809091 | 0.011416 | 0 | 0.315245 | 0 | 0.002584 | 0.213965 | 0.002722 | 0 | 0 | 0 | 0 | 0 | 1 | 0.059432 | false | 0 | 0.023256 | 0.002584 | 0.113695 | 0.01292 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
156946b470a7a457642d3b02e6f6bc4fecddde64 | 1,968 | py | Python | misc/visualizer.py | jacobandreas/nmn2 | 7e42dd98420f9580fd34185ba670490a5d86fb04 | [
"Apache-2.0"
] | 427 | 2016-01-27T01:08:58.000Z | 2022-03-22T17:55:34.000Z | misc/visualizer.py | jacobandreas/nmn2 | 7e42dd98420f9580fd34185ba670490a5d86fb04 | [
"Apache-2.0"
] | 22 | 2016-06-01T15:48:36.000Z | 2017-07-27T02:07:46.000Z | misc/visualizer.py | jacobandreas/nmn2 | 7e42dd98420f9580fd34185ba670490a5d86fb04 | [
"Apache-2.0"
] | 107 | 2016-02-14T02:11:42.000Z | 2022-03-25T06:27:29.000Z | #!/usr/bin/env python2
import numpy as np
import os
import scipy
VIS_DIR = "vis"
class Visualizer:
def __init__(self):
self.active = False
def begin(self, dest, max_entries):
self.lines = []
self.active = True
self.max_entries = max_entries
self.next_entry = 0
self.dest_dir = os.path.join(VIS_DIR, dest)
if not os.path.exists(self.dest_dir):
os.mkdir(self.dest_dir)
def reset(self):
self.next_entry = 0
self.active = True
def end(self):
self.active = False
with open(os.path.join(self.dest_dir, "index.html"), "w") as vis_file:
#print >>vis_file, "<html><head><link rel='stylesheet' href='style.css'></head><body><table>"
print >>vis_file, "<html><head>"
print >>vis_file, "<link rel='stylesheet' href='../style.css' />"
print >>vis_file, "</head><body><table>"
for line in self.lines:
print >>vis_file, " <tr>"
for field in line:
print >>vis_file, " <td>",
print >>vis_file, field,
print >>vis_file, "</td>"
print >>vis_file, " </tr>"
print >>vis_file, "</table></body></html>"
def show(self, data):
if not self.active:
return
table_data = []
for i_field, field in enumerate(data):
if isinstance(field, np.ndarray):
filename = "%d_%d.jpg" % (self.next_entry, i_field)
filepath = os.path.join(self.dest_dir, filename)
scipy.misc.imsave(filepath, field)
table_data.append("<img src='%s' />" % filename)
else:
table_data.append(str(field))
self.lines.append(table_data)
self.next_entry += 1
if self.next_entry >= self.max_entries:
self.active = False
visualizer = Visualizer()
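# Example usage (a sketch; the field values and import path are assumptions). Each
# call to show() adds one row to the HTML table, saving numpy arrays as JPEG images
# and rendering everything else as text:
#
#   from misc.visualizer import visualizer
#
#   visualizer.begin("my_run", max_entries=100)
#   visualizer.show(["what is in the image?", image_array, "predicted: cat"])
#   visualizer.end()   # writes vis/my_run/index.html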
| 31.238095 | 105 | 0.533028 | 243 | 1,968 | 4.160494 | 0.320988 | 0.076162 | 0.118694 | 0.037587 | 0.225519 | 0.150346 | 0.051434 | 0 | 0 | 0 | 0 | 0.00304 | 0.331301 | 1,968 | 62 | 106 | 31.741935 | 0.765198 | 0.057419 | 0 | 0.142857 | 0 | 0 | 0.088505 | 0.011873 | 0 | 0 | 0 | 0 | 0 | 1 | 0.102041 | false | 0 | 0.061224 | 0 | 0.204082 | 0.183673 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
156a1cbfdf040c739583016c1ecadb31b207e37f | 6,509 | py | Python | dnachisel/builtin_specifications/AvoidChanges.py | rfuisz/DnaChisel | 1e29a18af88ae249169743746dd4c73a4e027b19 | [
"MIT"
] | 124 | 2017-11-14T14:42:25.000Z | 2022-03-31T08:02:07.000Z | dnachisel/builtin_specifications/AvoidChanges.py | rfuisz/DnaChisel | 1e29a18af88ae249169743746dd4c73a4e027b19 | [
"MIT"
] | 65 | 2017-11-15T07:25:38.000Z | 2022-01-31T10:38:45.000Z | dnachisel/builtin_specifications/AvoidChanges.py | rfuisz/DnaChisel | 1e29a18af88ae249169743746dd4c73a4e027b19 | [
"MIT"
] | 31 | 2018-10-18T12:59:47.000Z | 2022-02-11T16:54:43.000Z | """Implementation of AvoidChanges."""
import numpy as np
from ..Specification import Specification, SpecEvaluation
# from .VoidSpecification import VoidSpecification
from ..biotools import (
sequences_differences_array,
group_nearby_indices,
)
from ..Location import Location
class AvoidChanges(Specification):
"""Specify that some locations of the sequence should not be changed.
Shorthand for annotations: "change".
Parameters
----------
location
Location object indicating the position of the segment that must be
left unchanged. Alternatively,
indices can be provided. If neither is provided, the assumed location
is the whole sequence.
indices
List of indices that must be left unchanged.
target_sequence
At the moment, this is rather an internal variable. Do not use unless
you're not afraid of side effects.
"""
localization_interval_length = 6 # used when optimizing the minimize_diffs
best_possible_score = 0
enforced_by_nucleotide_restrictions = True
shorthand_name = "keep"
priority = -1000
def __init__(
self,
max_edits=0,
max_edits_percent=None,
location=None,
indices=None,
target_sequence=None,
boost=1.0,
):
"""Initialize."""
if location is None and (indices is not None):
location = (min(indices), max(indices) + 1)
self.location = Location.from_data(location)
if (self.location is not None) and self.location.strand == -1:
self.location.strand = 1
self.indices = np.array(indices) if (indices is not None) else None
self.target_sequence = target_sequence
self.max_edits = max_edits
self.max_edits_percent = max_edits_percent
self.boost = boost
def extract_subsequence(self, sequence):
"""Extract a subsequence from the location or indices.
Used to initialize the function when the sequence is provided.
"""
if (self.location is None) and (self.indices is None):
return sequence
elif self.indices is not None:
return "".join(np.array(list(sequence))[self.indices])
else: # self.location is not None:
return self.location.extract_sequence(sequence)
def initialized_on_problem(self, problem, role=None):
"""Find out what sequence it is that we are supposed to conserve."""
result = self._copy_with_full_span_if_no_location(problem)
L = len(result.location if result.indices is None else result.indices)
if result.max_edits_percent is not None:
result.max_edits = np.floor(result.max_edits_percent * L / 100.0)
result.enforced_by_nucleotide_restrictions = result.max_edits == 0
# Initialize the "target_sequence" in two cases:
# - Always at the very beginning
# - When the new sequence is bigger than the previous one
# (used in CircularDnaOptimizationProblem)
if result.target_sequence is None or (
len(result.target_sequence) < len(self.location)
):
result = result.copy_with_changes()
result.target_sequence = self.extract_subsequence(problem.sequence)
return result
def evaluate(self, problem):
"""Return a score equal to -number_of modifications.
Locations are "binned" modifications regions. Each bin has a length
in nucleotides equal to ``localization_interval_length`.`
"""
target = self.target_sequence
sequence = self.extract_subsequence(problem.sequence)
differing_indices = np.nonzero(
sequences_differences_array(sequence, target)
)[0]
if self.indices is not None:
differing_indices = self.indices[differing_indices]
elif self.location is not None:
if self.location.strand == -1:
differing_indices = self.location.end - differing_indices
else:
differing_indices = differing_indices + self.location.start
intervals = [
(r[0], r[-1] + 1)
for r in group_nearby_indices(
differing_indices,
max_group_spread=self.localization_interval_length,
)
]
locations = [Location(start, end, 1) for start, end in intervals]
score = self.max_edits - len(differing_indices)
return SpecEvaluation(self, problem, score=score, locations=locations)
def localized(self, location, problem=None, with_righthand=False):
"""Localize the spec to the overlap of its location and the new.
"""
if self.max_edits != 0:
return self
start, end = location.start, location.end
if self.indices is not None:
pos = ((start <= self.indices) & (self.indices < end)).nonzero()[0]
new_indices = self.indices[pos]
new_target = "".join(np.array(list(self.target_sequence))[pos])
return self.copy_with_changes(
indices=new_indices, target_sequence=new_target
)
else:
new_location = self.location.overlap_region(location)
if new_location is None:
return None
else:
new_constraint = self.copy_with_changes(location=new_location)
relative_location = new_location + (-self.location.start)
new_constraint.target_sequence = relative_location.extract_sequence(
self.target_sequence
)
return new_constraint
def restrict_nucleotides(self, sequence, location=None):
"""When localizing, forbid any nucleotide but the one already there."""
if self.max_edits or self.max_edits_percent:
return []
if location is not None:
start = max(location.start, self.location.start)
end = min(location.end, self.location.end)
else:
start, end = self.location.start, self.location.end
if self.indices is not None:
return [
((i, i + 1), set([sequence[i : i + 1]]))
for i in self.indices
if start <= i < end
]
else:
return [((start, end), set([sequence[start:end]]))]
def short_label(self):
return "keep"
def breach_label(self):
return "edits"
| 35.961326 | 84 | 0.626671 | 762 | 6,509 | 5.206037 | 0.246719 | 0.057474 | 0.024956 | 0.0242 | 0.091001 | 0.052937 | 0.016637 | 0.016637 | 0 | 0 | 0 | 0.005863 | 0.292518 | 6,509 | 180 | 85 | 36.161111 | 0.855592 | 0.207252 | 0 | 0.094828 | 0 | 0 | 0.002596 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.068966 | false | 0 | 0.034483 | 0.017241 | 0.275862 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
156a46a2364e4b4f08406ce869ce396572c9d10c | 2,795 | py | Python | users/views.py | samsonosiomwan/Hotels-Management-System | d37bc13faafd2cfc2f0ad4cbe56bc83b64eded36 | [
"MIT"
] | null | null | null | users/views.py | samsonosiomwan/Hotels-Management-System | d37bc13faafd2cfc2f0ad4cbe56bc83b64eded36 | [
"MIT"
] | null | null | null | users/views.py | samsonosiomwan/Hotels-Management-System | d37bc13faafd2cfc2f0ad4cbe56bc83b64eded36 | [
"MIT"
] | null | null | null | from django.contrib.auth import get_user_model
from ekohms.models import User
from django.contrib.auth.tokens import default_token_generator
from django.contrib.sites.shortcuts import get_current_site
from django.core.mail import EmailMessage
from django.http import HttpResponse
from django.shortcuts import render,redirect
from django.template.loader import render_to_string
from django.utils.encoding import force_bytes
from django.utils.http import urlsafe_base64_encode, urlsafe_base64_decode
from .forms import UserRegisterForm
from django.contrib import messages
UserModel = get_user_model()
def register(request):
# if request.method == 'GET':
# return render(request, 'users/register.html')
if request.method == 'POST':
form = UserRegisterForm(request.POST)
if form.is_valid():
user = form.save(commit=False)
user.email_verified = False
user.save()
current_site = get_current_site(request)
mail_subject = 'Activate your account.'
message = render_to_string('users/acc_active_email.html', {
'user': user,
'domain': current_site.domain,
'uid': urlsafe_base64_encode(force_bytes(user.pk)),
'token': default_token_generator.make_token(user),
})
to_email = form.cleaned_data.get('email')
email = EmailMessage(
mail_subject, message, to=[to_email]
)
email.send(fail_silently=False)
messages.success(request, 'Account successfully created')
messages.success(request, 'Please confirm your email address to complete the registration')
return redirect('login')
# return HttpResponse('Please confirm your email address to complete the registration')
else:
form = UserRegisterForm()
return render(request, 'users/register.html', {'form': form})
def activate(request, uidb64, token):
try:
uid = urlsafe_base64_decode(uidb64).decode()
user = UserModel._default_manager.get(pk=uid)
except(TypeError, ValueError, OverflowError, User.DoesNotExist):
user = None
if user is not None and default_token_generator.check_token(user, token):
user.email_verified = True
user.save()
#return HttpResponse('Thank you for your email confirmation. Now you can login your account.')
messages.success(request, 'Thank you for your email confirmation. Now you can login your account.')
return redirect('login')
else:
return HttpResponse('Activation link is invalid!')
# form = UserRegisterForm()
# return render(request, 'users/register.html',{'form':form} )
# Create your views here.
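# For these views to be reachable, the project's urls.py would need routes along these
# lines (a sketch; the paths are assumptions — the only hard requirements from the code
# above are a URL pattern named 'login' to redirect to, and an activation route that
# receives uidb64 and token):
#
#   from django.urls import path
#   from users import views
#
#   urlpatterns = [
#       path('register/', views.register, name='register'),
#       path('activate/<uidb64>/<token>/', views.activate, name='activate'),
#   ]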
| 35.379747 | 107 | 0.675134 | 326 | 2,795 | 5.650307 | 0.337423 | 0.054289 | 0.036916 | 0.039088 | 0.209555 | 0.209555 | 0.190011 | 0.190011 | 0.190011 | 0.131379 | 0 | 0.005621 | 0.236136 | 2,795 | 78 | 108 | 35.833333 | 0.857143 | 0.131306 | 0 | 0.115385 | 0 | 0 | 0.122365 | 0.011162 | 0 | 0 | 0 | 0 | 0 | 1 | 0.038462 | false | 0 | 0.230769 | 0 | 0.346154 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
156b887af38fc2be248562a6c65c5110758d7475 | 277 | py | Python | url.py | beboy01/nan-project | 0e43e2b3e3a4e34b47b8a4815312ec137572aa87 | [
"MIT"
] | null | null | null | url.py | beboy01/nan-project | 0e43e2b3e3a4e34b47b8a4815312ec137572aa87 | [
"MIT"
] | null | null | null | url.py | beboy01/nan-project | 0e43e2b3e3a4e34b47b8a4815312ec137572aa87 | [
"MIT"
] | null | null | null | ############# exo ###############
# Ne modifiez pas les variables ci-dessous
protocole = "https://"
nom_du_site = "docstring"
extension = "fr"
page = "glossaire"
# Modify the code from here onwards
URL = f"{protocole+nom_du_site+'.'+extension+'/'+page}"
print(URL) | 23.083333 | 56 | 0.595668 | 35 | 277 | 4.6 | 0.771429 | 0.062112 | 0.111801 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.169675 | 277 | 12 | 57 | 23.083333 | 0.7 | 0.281588 | 0 | 0 | 0 | 0 | 0.468354 | 0.291139 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.166667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
156f0f7798172ab04ff11fbc879e1ba5277dba3b | 1,079 | py | Python | setup.py | pltoledo/py-publicbr | 84e4cf8b71754217c4814d26c4b76a397fef84fc | [
"MIT"
] | null | null | null | setup.py | pltoledo/py-publicbr | 84e4cf8b71754217c4814d26c4b76a397fef84fc | [
"MIT"
] | null | null | null | setup.py | pltoledo/py-publicbr | 84e4cf8b71754217c4814d26c4b76a397fef84fc | [
"MIT"
] | null | null | null | from setuptools import setup
with open('README.md', encoding='utf-8') as f:
long_description = f.read()
setup(
name = 'py-publicbr',
packages = ['publicbr', 'publicbr.cnpj'],
version = '0.1',
license='MIT',
description = 'Extract and consolidate Brazilian public datasets',
long_description=long_description,
long_description_content_type='text/markdown',
author = 'Pedro Toledo',
author_email = 'pedroltoledo@gmail.com',
url = 'https://github.com/pltoledo/py-publicbr',
download_url = 'https://github.com/pltoledo/py-publicbr/archive/v_01.tar.gz',
keywords = ['public data', 'brazil', 'data', 'public', 'etl'],
install_requires=[
'tqdm',
'beautifulsoup4',
'requests',
'Unidecode',
'pyspark',
'geocoder'
],
classifiers=[
'Development Status :: 3 - Alpha',
'Topic :: Office/Business',
'Topic :: Sociology',
'License :: OSI Approved :: MIT License',
'Programming Language :: Python :: 3.8',
],
python_requires='>=3.8'
) | 30.828571 | 81 | 0.608897 | 114 | 1,079 | 5.666667 | 0.692982 | 0.092879 | 0.058824 | 0.092879 | 0.108359 | 0.108359 | 0.108359 | 0 | 0 | 0 | 0 | 0.013317 | 0.234476 | 1,079 | 35 | 82 | 30.828571 | 0.768765 | 0 | 0 | 0.060606 | 0 | 0 | 0.443519 | 0.02037 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.030303 | 0 | 0.030303 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1577ce3ebf29865269a0cbf508c8f0ae892e5915 | 2,545 | py | Python | src/yass/visual/soft_assignment.py | jaib1/yass | 9899c7d63c522a26b160ac7a223c794dfd3e23c6 | [
"Apache-2.0"
] | 59 | 2017-10-29T02:21:17.000Z | 2022-03-23T01:12:27.000Z | src/yass/visual/soft_assignment.py | jaib1/yass | 9899c7d63c522a26b160ac7a223c794dfd3e23c6 | [
"Apache-2.0"
] | 257 | 2017-10-25T17:11:36.000Z | 2021-10-21T19:12:00.000Z | src/yass/visual/soft_assignment.py | jaib1/yass | 9899c7d63c522a26b160ac7a223c794dfd3e23c6 | [
"Apache-2.0"
] | 24 | 2017-10-28T19:59:44.000Z | 2021-07-14T09:56:45.000Z | """Providing a class for dealing with soft assignments of spikes at the end."""
import copy as copy
import numpy as np
from tqdm import tqdm
from yass.template import WaveForms
from yass.merge.merge import template_dist_linear_align, template_spike_dist_linear_align
def get_soft_assignments(templates, templates_upsampled, spike_train,
spike_train_upsampled, filename_residual, n_similar_units=2):
"""Given templates and spikes determines collision templates.
params:
-------
templates: np.ndarray
Has shape (# units, # time samples, # channels).
n_similar_units: int
Number of similar units that the spikes should be compare against.
"""
def softmax(x):
"""Sape must be (N, d)"""
e = np.exp(x)
return e / e.sum(axis=1)[:, None]
templates = np.transpose(templates, [2, 0, 1])
templates_upsampled = np.transpose(templates_upsampled, [2, 0, 1])
affinity_matrix = template_dist_linear_align(templates)
n_spikes = spike_train.shape[0]
temp = WaveForms(templates.transpose([0, 2, 1]))
pdist = temp.pair_dist()
soft_assignments = np.zeros([n_spikes, n_similar_units])
# By default assign each spike to its own cluster
soft_assignments[:, 0] = 1
sim_unit_map = np.zeros([temp.n_unit, n_similar_units]).astype(np.int)
for unit in tqdm(range(temp.n_unit), "Computing soft assignments"):
spt_idx = np.where(spike_train[:, 1] == unit)[0]
spt = spike_train[spt_idx, 0]
# Get all upsampled ids
units = spike_train_upsampled[spt_idx, 1]
n_unit_spikes = len(spt)
spikes, skipped_idx = read_spikes(
filename=filename_residual,
spikes=spt,
n_channels=temp.n_channel,
spike_size=temp.n_time,
units=units,
templates=templates_upsampled,
residual_flag=True)
sim_units = pdist[unit].argsort()[:n_similar_units]
sim_unit_map[unit] = sim_units
# Get distances of spikes to both similar units.
dist_features = template_spike_dist_linear_align(
templates=templates[sim_units],
spikes=spikes)
# Note that we are actually doing soft-min by using negative distance.
assignments = softmax(- dist_features.T)
success_idx = np.setdiff1d(
np.arange(n_unit_spikes), np.array(skipped_idx))
soft_assignments[spt_idx[success_idx], :] = assignments
return soft_assignments, sim_unit_map
| 34.863014 | 89 | 0.663261 | 337 | 2,545 | 4.774481 | 0.362018 | 0.065258 | 0.040398 | 0.028589 | 0.034804 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00987 | 0.243615 | 2,545 | 72 | 90 | 35.347222 | 0.825974 | 0.205501 | 0 | 0 | 0 | 0 | 0.013171 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.047619 | false | 0 | 0.119048 | 0 | 0.214286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1577cf0ad58074e0901998c62043957a5ad89b97 | 3,473 | py | Python | wikiWebCrawler/dumpWikiExtraction.py | chrhenning/image_text_relation | 8d09483b48babe9bf90ca15a8cd67e389a0bd00c | [
"Apache-2.0"
] | null | null | null | wikiWebCrawler/dumpWikiExtraction.py | chrhenning/image_text_relation | 8d09483b48babe9bf90ca15a8cd67e389a0bd00c | [
"Apache-2.0"
] | null | null | null | wikiWebCrawler/dumpWikiExtraction.py | chrhenning/image_text_relation | 8d09483b48babe9bf90ca15a8cd67e389a0bd00c | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python3
# Tool that dumps all the information we need from the WikiExtraction parser [1] to create our own dataset into a file, such that it is later easily accessible
# [1]: https://github.com/attardi/wikiextractor
# Important: not efficient code, but it only has to run once on simple wiki, which is small
# This tool relies on the output of the bash script 'extractDataWithWikiExtractor'
# The dumped dictionary will have the following form
# It will map from an article id to an article dictionary
# Each article dictionary will have the keys: meta, plain, links, lists
import argparse
import os
import pickle
from lxml import etree
# the dumped dictionary
articles = {}
xmlParser = etree.XMLParser(recover=True)
def removeXMLMarkups(article):
article = article.strip();
splits = article.split('\n', 1)
assert(len(splits) == 2)
assert(splits[0].startswith('<doc'))
article = splits[1]
splits = article.rsplit('\n', 1)
assert(len(splits) == 2)
assert(splits[1].startswith('</doc>'))
article = splits[0]
return article
def addArticleKeys(key, files):
for fn in files:
f = open(fn)
currArticle = ''
for line in f:
currArticle += line
if line.strip() == '</doc>':
xmlArticle = etree.fromstring(currArticle, parser=xmlParser)
assert(xmlArticle.attrib.keys() == ['id','url','title'])
currid = int(xmlArticle.attrib['id'])
assert(currid in articles)
articles[currid][key] = removeXMLMarkups(currArticle)
currArticle = ''
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--input', type=str, help="Directory, where the script 'extractDataWithWikiExtractor.sh' was run." + \
"\nThus, folder that contains the directories: simplewiki-plain, simplewiki-lists, simplewiki-links", default='.')
parser.add_argument('--output', type=str, help="Name and directory of outputfile.", default='wikiextraction_dump.pickle')
args = parser.parse_args()
inputDir = args.input
output = args.output
plainDir = os.path.join(inputDir, 'simplewiki-plain')
linksDir = os.path.join(inputDir, 'simplewiki-links')
listsDir = os.path.join(inputDir, 'simplewiki-lists')
if not os.path.isdir(plainDir) or \
not os.path.isdir(linksDir) or \
not os.path.isdir(listsDir):
raise(Exception('At least one of the following folders is not present in directory \'' + inputDir + \
'\': simplewiki-plain, simplewiki-lists, simplewiki-links'))
getAllFilesInFolder = lambda folder : [os.path.join(root, name) \
for root, dirs, files in os.walk(folder) \
for name in files \
if name.startswith(("wiki_"))]
plainFiles = getAllFilesInFolder(plainDir)
linksFiles = getAllFilesInFolder(linksDir)
listsFiles = getAllFilesInFolder(listsDir)
for fn in plainFiles:
f = open(fn)
currArticle = ''
for line in f:
currArticle += line
if line.strip() == '</doc>':
xmlArticle = etree.fromstring(currArticle, parser=xmlParser)
assert(xmlArticle.attrib.keys() == ['id','url','title'])
currid = int(xmlArticle.attrib['id'])
assert(currid not in articles)
articles[currid] = {}
articles[currid]['meta'] = dict(xmlArticle.attrib)
articles[currid]['plain'] = removeXMLMarkups(currArticle)
currArticle = ''
addArticleKeys('links', linksFiles)
addArticleKeys('lists', listsFiles)
pickle.dump(articles, open(output, "wb"))
print('articles successfully dumped into ' + output)
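# Example invocation and downstream use (the paths are made up):
#
#   python3 dumpWikiExtraction.py --input ./extracted --output wikiextraction_dump.pickle
#
# and later:
#
#   import pickle
#   articles = pickle.load(open('wikiextraction_dump.pickle', 'rb'))
#   article = articles[some_id]
#   article['meta']    # dict with 'id', 'url', 'title'
#   article['plain']   # plain-text article body
#   article['links']   # body from the extraction run that kept links
#   article['lists']   # body from the extraction run that kept lists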
| 30.734513 | 159 | 0.701699 | 433 | 3,473 | 5.598152 | 0.381062 | 0.017327 | 0.016502 | 0.022277 | 0.260726 | 0.212871 | 0.175743 | 0.175743 | 0.15099 | 0.15099 | 0 | 0.003813 | 0.169306 | 3,473 | 112 | 160 | 31.008929 | 0.836395 | 0.17161 | 0 | 0.28169 | 0 | 0 | 0.166376 | 0.020579 | 0 | 0 | 0 | 0 | 0.112676 | 1 | 0.028169 | false | 0 | 0.056338 | 0 | 0.098592 | 0.014085 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1578c4686ca3835c9e673bb51e7120e98cb9af4a | 1,127 | py | Python | mkdat.py | pachenya/MeiroPy | c11f4e1a35a1d31127fb9cb0109faae6c88dcb21 | [
"MIT"
] | null | null | null | mkdat.py | pachenya/MeiroPy | c11f4e1a35a1d31127fb9cb0109faae6c88dcb21 | [
"MIT"
] | null | null | null | mkdat.py | pachenya/MeiroPy | c11f4e1a35a1d31127fb9cb0109faae6c88dcb21 | [
"MIT"
] | null | null | null | # mkdat.py
#
# To make the list of monsters data.
#
import string
class enuming:
def __init__(self, n, namekey, ch='?'):
self.n = int(n)
self.namekey = str(namekey)
self.ch = ch = str(ch)
def DEBUGP(self):
print(self.n, self.namekey, self.ch)
class enuml:
def __init__(self, filename, debugmode=False):
f = open(filename, 'r')
l = f.readlines()
f.close()
self.dat = []
self.elist = []
n = 0
for i in l:
stch = i[0]
if stch == '#' or stch == '' or stch == '\n':
continue
n += 1
i = i[:-1]
stmp = i.split(':')
if debugmode:
for s in stmp:
print(s, end=',')
print(' : len ->' + str(len(stmp)))
ch = '?'
if len(stmp) >= 2:
ch = stmp[1]
else:
ch = '?'
d = enuming(n,stmp[0], ch)
self.elist.append(d)
self.dat.append(stmp)
def get_elist(self):
return self.elist
def get_dat(self):
return self.dat
def test():
a = enuml('data/dat.txt')
for i in a.elist:
i.DEBUGP()
def test2():
a = enuml('data/dat.txt')
test()
test2()
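# The parser above expects data/dat.txt to hold one entry per line in the form
# "name:character"; lines starting with '#' and blank lines are skipped, and the
# character defaults to '?' when the second field is missing. A made-up example file:
#
#   # monsters
#   slime:s
#   goblin:g
#   dragon:D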
| 18.47541 | 51 | 0.507542 | 163 | 1,127 | 3.447853 | 0.361963 | 0.02669 | 0.039146 | 0.046263 | 0.05694 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01178 | 0.322094 | 1,127 | 60 | 52 | 18.783333 | 0.723822 | 0.038154 | 0 | 0.085106 | 0 | 0 | 0.038997 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.148936 | false | 0 | 0.021277 | 0.042553 | 0.255319 | 0.06383 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
157b334c31453897c1c551b1f1fa0ea95b15795f | 20,732 | py | Python | archive/nexus-api-v2/API/Git/Lab/Schemas/Repository.py | cloud-hybrid/delta | 402b00ed5aaa32ccef628361e9635879b7ace44f | [
"BSD-3-Clause"
] | null | null | null | archive/nexus-api-v2/API/Git/Lab/Schemas/Repository.py | cloud-hybrid/delta | 402b00ed5aaa32ccef628361e9635879b7ace44f | [
"BSD-3-Clause"
] | null | null | null | archive/nexus-api-v2/API/Git/Lab/Schemas/Repository.py | cloud-hybrid/delta | 402b00ed5aaa32ccef628361e9635879b7ace44f | [
"BSD-3-Clause"
] | 1 | 2022-01-03T05:33:15.000Z | 2022-01-03T05:33:15.000Z | from . import *
from . import Base as Scheme
__module__ = __name__
Schema = Scheme.Model
class Base(Schema):
"""
...
"""
class Config(Schema.Configuration): title = "Gitlab" + "-" + "{0}".format(__module__.split(".").pop())
class Query(Base):
"""
API Search-Query Schema
"""
archived: Optional[Boolean] = Field(
alias = "Archived",
title = "archived",
description = "Limit by archived status"
)
id_after: Integer = Field(
alias = "ID-Greater-Than",
title = "id_after",
description = "Limit results to projects with IDs greater than the specified ID"
)
id_before: Integer = Field(
alias = "ID-Less-Than",
title = "id_before",
description = "Limit results to projects with IDs less than the specified ID"
)
last_activity_after: Date = Field(
alias = "Last-Activity-After",
title = "last_activity_after",
description = "Limit results to projects with last_activity after specified time. Format: ISO 8601 YYYY-MM-DDTHH:MM:SSZ"
)
last_activity_before: Date = Field(
alias = "Last-Activity-Before",
title = "last_activity_before",
description = "Limit results to projects with last_activity before specified time. Format: ISO 8601 YYYY-MM-DDTHH:MM:SSZ"
)
membership: Optional[Boolean] = Field(
alias = "Membership",
title = "membership",
description = "Limit by projects that the current user is a member of"
)
min_access_level: Integer = Field(
alias = "Minimum-Access-Level",
title = "min_access_level",
description = "Limit by current user minimal access level"
)
order_by: String = Field(
alias = "Order-By",
title = "order_by",
description = "Return projects ordered by id, name, path, created_at, updated_at, or last_activity_at fields. repository_size, storage_size, packages_size or wiki_size fields are only allowed for admins. Default is created_at"
)
owned: Optional[Boolean] = Field(
alias = "Owned",
title = "owned",
description = "Limit by projects explicitly owned by the current user"
)
repository_checksum_failed: Optional[Union[String, Boolean]] = Field("Premium-Required",
alias = "Checksum-Failure",
title = "repository_checksum_failed",
description = "Limit projects where the repository checksum calculation has failed (Introduced in GitLab Premium 11.2)"
)
repository_storage: String = Field(
alias = "Storage",
title = "repository_storage",
description = "Limit results to projects stored on repository_storage. (admins only)"
)
search_namespaces: Optional[Boolean] = Field(
alias = "Search-Namespace",
title = "search_namespaces",
description = "Include ancestor namespaces when matching search criteria. Default is false"
)
search: String = Field(
alias = "Search",
title = "search",
description = "Return list of projects matching the search criteria"
)
simple: Optional[Boolean] = Field(
alias = "Simple",
title = "simple",
description = "Return only limited fields for each project. This is a no-op without authentication as then only simple fields are returned"
)
sort: String = Field(
alias = "Sortable-Function",
title = "sort",
description = "Return projects sorted in asc or desc order. Default is desc"
)
starred: Optional[Boolean] = Field(
alias = "Starred",
title = "starred",
description = "Limit by projects starred by the current user"
)
statistics: Optional[Boolean] = Field(
alias = "Statistics",
title = "statistics",
description = "Include project statistics"
)
visibility: String = Field(
alias = "Visibility",
title = "visibility",
description = "Limit by visibility public, internal, or private"
)
wiki_checksum_failed: Optional[Union[String,Boolean]] = Field("Premium-Required",
alias = "Wiki-Checksum-Failure",
title = "wiki_checksum_failed",
description = "Limit projects where the wiki checksum calculation has failed (Introduced in GitLab Premium 11.2)"
)
with_custom_attributes: Optional[Boolean] = Field(
alias = "Custom-Attributes",
title = "with_custom_attributes",
description = "Include custom attributes in response. (admins only)"
)
with_issues_enabled: Optional[Boolean] = Field(
alias = "Issues-Enabled",
title = "with_issues_enabled",
description = "Limit by enabled issues feature"
)
with_merge_requests_enabled: Optional[Boolean] = Field(
alias = "MR-Enabled",
title = "with_merge_requests_enabled",
description = "Limit by enabled merge requests feature"
)
with_programming_language: String = Field(
alias = "Programming-Language",
title = "with_programming_language",
description = "Limit by projects which use the given programming language"
)
class Namespace(Base):
id: Integer = Field(alias = "id", title = "id", description = "id")
name: String = Field(alias = "name", title = "name", description = "name")
path: String = Field(alias = "path", title = "path", description = "path")
kind: String = Field(alias = "kind", title = "kind", description = "kind")
full_path: String = Field(alias = "full_path", title = "full_path", description = "full_path")
parent_id: Optional[Integer] = Field(None, alias = "parent_id", title = "parent_id", description = "parent_id")
avatar_url: Optional[String] = Field(None, alias = "avatar_url", title = "avatar_url", description = "avatar_url")
web_url: String = Field(alias = "web_url", title = "web_url", description = "web_url")
class Config(Base.Config): title = Base.Config.title + "-" + "Namespace"
class Statistics(Base):
commit_count: Integer = Field(0,
alias = "Total-Commits",
title = "commit_count",
description = ""
)
storage_size: Integer = Field(0,
alias = "Storage-Size",
title = "storage_size",
description = ""
)
repository_size: Integer = Field(0,
alias = "Programming-Language",
title = "Repository-Size",
description = ""
)
wiki_size: Integer = Field(0,
alias = "Programming-Language",
title = "wiki_size",
description = ""
)
lfs_objects_size: Integer = Field(0,
alias = "LFS-Objects-Size",
title = "lfs_objects_size",
description = ""
)
job_artifacts_size: Integer = Field(0,
alias = "Artifacts-Size",
title = "job_artifacts_size",
description = ""
)
packages_size: Integer = Field(0,
alias = "Packages-Size",
title = "packages_size",
description = ""
)
snippets_size: Integer = Field(0,
alias = "Snippets-Size",
title = "snippets_size",
description = ""
)
class Config(Base.Config): title = Base.Config.title + "-" + "Statistics"
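# A small usage sketch — Statistics is convenient to instantiate because every field
# defaults to 0. Fields that declare an alias can always be populated by that alias;
# whether the python attribute names also work depends on the shared Base/Schema
# configuration, which is defined elsewhere in this package:
#
#   stats = Statistics(**{"Total-Commits": 42, "Storage-Size": 1024})
#   stats.commit_count    # 42
#   stats.packages_size   # 0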
class Links(Base):
self: String = Field(
alias = "self",
title = "self",
description = "self"
)
issues: String = Field(
alias = "issues",
title = "issues",
description = "issues"
)
merge_requests: String = Field(
alias = "merge_requests",
title = "merge_requests",
description = "merge_requests"
)
repo_branches: String = Field(
alias = "repo_branches",
title = "repo_branches",
description = "repo_branches"
)
labels: String = Field(
alias = "labels",
title = "labels",
description = "labels"
)
events: String = Field(
alias = "events",
title = "events",
description = "events"
)
members: String = Field(
alias = "members",
title = "members",
description = "members"
)
class Config(Base.Config): title = Base.Config.title + "-" + "Links"
class Access(Base):
notification_level: Optional[Integer] = None
access_level: Optional[Integer] = None
class Config(Base.Config): title = Base.Config.title + "-" + "Access"
class Permissions(Base):
project_access: Optional[Access] = None
group_access: Optional[Access] = None
class Config(Base.Config): title = Base.Config.title + "-" + "Permissions"
class Owner(Base):
id: String = Field(
alias = "id",
title="id",
description = "id")
name: String = Field(
alias = "name",
title="name",
description = "name")
created_at: Optional[String] = Field(
alias = "created_at",
title="created_at",
description = "created_at")
class Config(Base.Config): title = Base.Config.title + "-" + "Owner"
class Project(Base):
"""
[...]
"""
id: Integer = Field(
alias = "id",
title = "id",
description = "id"
)
created_at: Date = Field(
alias = "created_at",
title = "created_at",
description = "created_at"
)
forks_count: Integer = Field(
alias = "forks_count",
title = "forks_count",
description = "forks_count"
)
star_count: Integer = Field(
alias = "star_count",
title = "star_count",
description = "star_count"
)
description: Optional[String] = Field(
alias = "description",
title = "description",
description = "description"
)
default_branch: String = Field(
alias = "default_branch",
title = "default_branch",
description = "default_branch"
)
visibility: Optional[String] = Field(
alias = "visibility",
title = "visibility",
description = "visibility"
)
ssh_url_to_repo: String = Field(
alias = "ssh_url_to_repo",
title = "ssh_url_to_repo",
description = "ssh_url_to_repo"
)
http_url_to_repo: String = Field(
alias = "http_url_to_repo",
title = "http_url_to_repo",
description = "http_url_to_repo"
)
web_url: String = Field(
alias = "web_url",
title = "web_url",
description = "web_url"
)
readme_url: Optional[String] = Field(
alias = "readme_url",
title = "readme_url",
description = "readme_url"
)
tag_list: Optional[List] = Field(
alias = "tag_list",
title = "tag_list",
description = "tag_list"
)
owner: Optional[Owner] = Field(
alias = "owner",
title = "owner",
description = "owner"
)
name: String = Field(
alias = "name",
title = "name",
description = "name"
)
name_with_namespace: String = Field(
alias = "name_with_namespace",
title = "name_with_namespace",
description = "name_with_namespace"
)
path: String = Field(
alias = "path",
title = "path",
description = "path"
)
path_with_namespace: String = Field(
alias = "path_with_namespace",
title = "path_with_namespace",
description = "path_with_namespace"
)
issues_enabled: Optional[Boolean] = Field(
alias = "issues_enabled",
title = "issues_enabled",
description = "issues_enabled"
)
open_issues_count: Optional[Integer] = Field(
alias = "open_issues_count",
title = "open_issues_count",
description = "open_issues_count"
)
merge_requests_enabled: Optional[Boolean] = Field(
alias = "merge_requests_enabled",
title = "merge_requests_enabled",
description = "merge_requests_enabled"
)
jobs_enabled: Optional[Boolean] = Field(
alias = "jobs_enabled",
title = "jobs_enabled",
description = "jobs_enabled"
)
wiki_enabled: Optional[Boolean] = Field(
alias = "wiki_enabled",
title = "wiki_enabled",
description = "wiki_enabled"
)
snippets_enabled: Optional[Boolean] = Field(
alias = "snippets_enabled",
title = "snippets_enabled",
description = "snippets_enabled"
)
can_create_merge_request_in: Optional[Boolean] = Field(
alias = "can_create_merge_request_in",
title = "can_create_merge_request_in",
description = "can_create_merge_request_in"
)
resolve_outdated_diff_discussions: Optional[Boolean] = Field(
alias = "resolve_outdated_diff_discussions",
title = "resolve_outdated_diff_discussions",
description = "resolve_outdated_diff_discussions"
)
container_registry_enabled: Optional[Boolean] = Field(
alias = "container_registry_enabled",
title = "container_registry_enabled",
description = "container_registry_enabled"
)
last_activity_at: Optional[Date] = Field(
alias = "last_activity_at",
title = "last_activity_at",
description = "last_activity_at"
)
creator_id: Optional[Integer] = Field(
alias = "creator_id",
title = "creator_id",
description = "creator_id"
)
namespace: Namespace = Field(
alias = "namespace",
title = "namespace",
description = "namespace"
)
import_status: Optional[String] = Field(
alias = "import_status",
title = "import_status",
description = "import_status"
)
import_error: Optional[String] = Field(
alias = "import_error",
title = "import_error",
description = "import_error"
)
permissions: Optional[Permissions] = Field(
alias = "permissions",
title = "permissions",
description = "permissions"
)
archived: Optional[Boolean] = Field(
alias = "archived",
title = "archived",
description = "archived"
)
avatar_url: Optional[String] = Field(
alias = "avatar_url",
title = "avatar_url",
description = "avatar_url"
)
shared_runners_enabled: Optional[Integer] = Field(
alias = "shared_runners_enabled",
title = "shared_runners_enabled",
description = "shared_runners_enabled"
)
runners_token: Optional[String] = Field(
alias = "runners_token",
title = "runners_token",
description = "runners_token"
)
ci_default_git_depth: Optional[Integer] = Field(
alias = "ci_default_git_depth",
title = "ci_default_git_depth",
description = "ci_default_git_depth"
)
ci_forward_deployment_enabled: Optional[Boolean] = Field(
alias = "ci_forward_deployment_enabled",
title = "ci_forward_deployment_enabled",
description = "ci_forward_deployment_enabled"
)
public_jobs: Optional[Boolean] = Field(
alias = "public_jobs",
title = "public_jobs",
description = "public_jobs"
)
shared_with_groups: Optional[List] = Field(
alias = "shared_with_groups",
title = "shared_with_groups",
description = "shared_with_groups"
)
only_allow_merge_if_pipeline_succeeds: Optional[Boolean] = Field(
alias = "only_allow_merge_if_pipeline_succeeds",
title = "only_allow_merge_if_pipeline_succeeds",
description = "only_allow_merge_if_pipeline_succeeds"
)
allow_merge_on_skipped_pipeline: Optional[Boolean] = Field(
alias = "allow_merge_on_skipped_pipeline",
title = "allow_merge_on_skipped_pipeline",
description = "allow_merge_on_skipped_pipeline"
)
restrict_user_defined_variables: Optional[Boolean] = Field(
alias = "restrict_user_defined_variables",
title = "restrict_user_defined_variables",
description = "restrict_user_defined_variables"
)
only_allow_merge_if_all_discussions_are_resolved: Optional[Boolean] = Field(
alias = "only_allow_merge_if_all_discussions_are_resolved",
title = "only_allow_merge_if_all_discussions_are_resolved",
description = "only_allow_merge_if_all_discussions_are_resolved"
)
remove_source_branch_after_merge: Optional[Boolean] = Field(
alias = "remove_source_branch_after_merge",
title = "remove_source_branch_after_merge",
description = "remove_source_branch_after_merge"
)
request_access_enabled: Optional[Boolean] = Field(
alias = "request_access_enabled",
title = "request_access_enabled",
description = "request_access_enabled"
)
merge_method: Optional[String] = Field(
alias = "merge_method",
title = "merge_method",
description = "merge_method"
)
auto_devops_enabled: Optional[Boolean] = Field(
alias = "auto_devops_enabled",
title = "auto_devops_enabled",
description = "auto_devops_enabled"
)
auto_devops_deploy_strategy: Optional[String] = Field(
alias = "auto_devops_deploy_strategy",
title = "auto_devops_deploy_strategy",
description = "auto_devops_deploy_strategy"
)
repository_storage: Optional[String] = Field(
alias = "repository_storage",
title = "repository_storage",
description = "repository_storage"
)
approvals_before_merge: Optional[Integer] = Field(
alias = "approvals_before_merge",
title = "approvals_before_merge",
description = "approvals_before_merge"
)
mirror: Optional[Boolean] = Field(
alias = "mirror",
title = "mirror",
description = "mirror"
)
mirror_user_id: Optional[Integer] = Field(
alias = "mirror_user_id",
title = "mirror_user_id",
description = "mirror_user_id"
)
mirror_trigger_builds: Optional[Boolean] = Field(
alias = "mirror_trigger_builds",
title = "mirror_trigger_builds",
description = "mirror_trigger_builds"
)
only_mirror_protected_branches: Optional[Boolean] = Field(
alias = "only_mirror_protected_branches",
title = "only_mirror_protected_branches",
description = "only_mirror_protected_branches"
)
mirror_overwrites_diverged_branches: Optional[Boolean] = Field(
alias = "mirror_overwrites_diverged_branches",
title = "mirror_overwrites_diverged_branches",
description = "mirror_overwrites_diverged_branches"
)
external_authorization_classification_label: Optional[String] = Field(
alias = "external_authorization_classification_label",
title = "external_authorization_classification_label",
description = "external_authorization_classification_label"
)
packages_enabled: Optional[Boolean] = Field(
alias = "packages_enabled",
title = "packages_enabled",
description = "packages_enabled"
)
service_desk_enabled: Optional[Boolean] = Field(
alias = "service_desk_enabled",
title = "service_desk_enabled",
description = "service_desk_enabled"
)
service_desk_address: Optional[String] = Field(
alias = "service_desk_address",
title = "service_desk_address",
description = "service_desk_address"
)
autoclose_referenced_issues: Optional[Boolean] = Field(
alias = "autoclose_referenced_issues",
title = "autoclose_referenced_issues",
description = "autoclose_referenced_issues"
)
suggestion_commit_message: Optional[String] = Field(
alias = "suggestion_commit_message",
title = "suggestion_commit_message",
description = "suggestion_commit_message"
)
statistics: Optional[Statistics] = Field(
alias = "statistics",
title = "statistics",
description = "statistics"
)
container_registry_image_prefix: Optional[String] = Field(
alias = "container_registry_image_prefix",
title = "container_registry_image_prefix",
description = "container_registry_image_prefix"
)
_links: Optional[Links] = Field(
alias = "_links",
title = "_links",
description = "_links"
)
class Config(Base.Config): title = Base.Config.title + "-" + "Project"
class Projects(Base):
Response: List[Project]
class Config(Base.Config): title = Base.Config.title + "-" + "Projects"
class Pages(Base):
Response: Dictionary[Integer, List[Project]]
class Config(Base.Config): title = Base.Config.title + "-" + "Pages"
| 29.532764 | 234 | 0.625265 | 2,077 | 20,732 | 5.966298 | 0.11844 | 0.082311 | 0.05552 | 0.07061 | 0.329567 | 0.218286 | 0.197789 | 0.166478 | 0.116043 | 0.069884 | 0 | 0.001519 | 0.269487 | 20,732 | 701 | 235 | 29.574893 | 0.816705 | 0.001592 | 0 | 0.061837 | 0 | 0.007067 | 0.309737 | 0.096064 | 0.001767 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.017668 | 0 | 0.266784 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
157f7bac2f471946f836b8c5ee08c6cfd3a2fe43 | 2,324 | py | Python | server/TenantManagementService/tenant-provisioning.py | snetty/aws-saas-factory-ref-solution-serverless-saas | 34403ac7ad74847106cc7318d54afb51932d3711 | [
"Apache-2.0",
"MIT-0"
] | 1 | 2021-07-10T22:07:16.000Z | 2021-07-10T22:07:16.000Z | server/TenantManagementService/tenant-provisioning.py | snetty/aws-saas-factory-ref-solution-serverless-saas | 34403ac7ad74847106cc7318d54afb51932d3711 | [
"Apache-2.0",
"MIT-0"
] | null | null | null | server/TenantManagementService/tenant-provisioning.py | snetty/aws-saas-factory-ref-solution-serverless-saas | 34403ac7ad74847106cc7318d54afb51932d3711 | [
"Apache-2.0",
"MIT-0"
] | null | null | null | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: MIT-0
import json
import boto3
import utils
from botocore.exceptions import ClientError
import logger
import os
from aws_lambda_powertools import Tracer
tracer = Tracer()
tenant_stack_mapping_table_name = os.environ['TENANT_STACK_MAPPING_TABLE_NAME']
dynamodb = boto3.resource('dynamodb')
codepipeline = boto3.client('codepipeline')
cloudformation = boto3.client('cloudformation')
table_tenant_stack_mapping = dynamodb.Table(tenant_stack_mapping_table_name)
stack_name = 'stack-{0}'
@tracer.capture_lambda_handler
def provision_tenant(event, context):
    logger.info(event)
    tenant_details = json.loads(event['body'])

    try:
        response_ddb = table_tenant_stack_mapping.put_item(
            Item={
                'tenantId': tenant_details['tenantId'],
                'stackName': stack_name.format(tenant_details['tenantId']),
                'applyLatestRelease': True,
                'codeCommitId': ''
            }
        )
        logger.info(response_ddb)

        response_codepipeline = codepipeline.start_pipeline_execution(
            name='serverless-saas-pipeline'
        )
        logger.info(response_codepipeline)
    except Exception as e:
        raise
    else:
        return utils.create_success_response("Tenant Provisioning Started")
@tracer.capture_lambda_handler
# This method uses IAM authorization and is protected by a resource policy. It is also invoked asynchronously.
def deprovision_tenant(event, context):
    logger.info("Request received to deprovision a tenant")
    logger.info(event)
    tenantid_to_deprovision = event['tenantId']

    try:
        response_ddb = table_tenant_stack_mapping.delete_item(
            Key={
                'tenantId': tenantid_to_deprovision
            }
        )
        logger.info(response_ddb)

        response_cloudformation = cloudformation.delete_stack(
            StackName=stack_name.format(tenantid_to_deprovision)
        )
        logger.info(response_cloudformation)
    except Exception as e:
        raise
    else:
        return utils.create_success_response("Tenant Deprovisioning Started")
| 29.794872 | 108 | 0.660069 | 241 | 2,324 | 6.124481 | 0.406639 | 0.047425 | 0.073171 | 0.062331 | 0.310976 | 0.191057 | 0.138211 | 0.088076 | 0.088076 | 0.088076 | 0 | 0.003507 | 0.263769 | 2,324 | 77 | 109 | 30.181818 | 0.859147 | 0.08778 | 0 | 0.267857 | 0 | 0 | 0.130907 | 0.025992 | 0 | 0 | 0 | 0 | 0 | 1 | 0.035714 | false | 0 | 0.125 | 0 | 0.196429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
157fa5c40b8382a262791d36d2b4de815c319516 | 643 | py | Python | P25034-zhaojie/week-07/homework1.py | xiaohh2016/python-25 | 8981ba89bfb32754c3f9c881ee8fcaf13332ce51 | [
"Apache-2.0"
] | 1 | 2019-09-11T23:24:58.000Z | 2019-09-11T23:24:58.000Z | P25034-zhaojie/week-07/homework1.py | xiaohh2016/python-25 | 8981ba89bfb32754c3f9c881ee8fcaf13332ce51 | [
"Apache-2.0"
] | null | null | null | P25034-zhaojie/week-07/homework1.py | xiaohh2016/python-25 | 8981ba89bfb32754c3f9c881ee8fcaf13332ce51 | [
"Apache-2.0"
] | 5 | 2019-09-11T06:33:34.000Z | 2020-02-17T12:52:31.000Z | #!/usr/bin/env python
# encoding:utf-8
# file: homework1.py
# 1. Convert the string "1,2,3" into the list ["1","2","3"]

# Method 1
a = '1,2,3'
print(a.split(','))

# Method 2: iterate over the characters
a = '1,2,3'
sep = ','
result = []
for i in range(len(a)):
    if a[i] == sep:
        continue
    result.append(a[i])
print(result)

"""
The first method is straightforward, so we skip the details.
The second method's idea of iterating is right, but it only works for the sample
string given in this exercise. What if the string is changed to something like:
b = '1,23,45,678'
c = '1, 23, 4, 5'
How would you iterate then? Try working through it.
"""

# Optimised version: handles spaces and similar separators with a single loop
a = ',1,23, 456 , ab c,,,'
sep = '\t ,'
tmp_str = ''
result = []
for i in a:
    if i in sep:
        if tmp_str:
            result.append(tmp_str)
            tmp_str = ''
        continue
    tmp_str += i
print(result)
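# Quick check of the loop above (illustrative): for the sample string
# a = ',1,23, 456 , ab c,,,' the printed result is ['1', '23', '456', 'ab', 'c'],
# since every tab, space and comma acts as a separator and empty pieces are skipped.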
| 15.309524 | 47 | 0.541213 | 107 | 643 | 3.205607 | 0.504673 | 0.087464 | 0.034985 | 0.023324 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.074534 | 0.248834 | 643 | 41 | 48 | 15.682927 | 0.635611 | 0.178849 | 0 | 0.454545 | 0 | 0 | 0.097744 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.136364 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
158093f827c44fb31f9d2b1f817125c9d5ad8bef | 575 | py | Python | code/selection_sort.py | Rustam-Z/data-structures-and-algorithms | 0ed253c433198fb6fa6d609a806f4ae7e820af06 | [
"MIT"
] | 6 | 2021-09-19T11:01:27.000Z | 2021-11-11T08:53:31.000Z | code/selection_sort.py | Rustam-Z/data-structures-and-algorithms | 0ed253c433198fb6fa6d609a806f4ae7e820af06 | [
"MIT"
] | null | null | null | code/selection_sort.py | Rustam-Z/data-structures-and-algorithms | 0ed253c433198fb6fa6d609a806f4ae7e820af06 | [
"MIT"
] | 1 | 2021-12-20T13:25:12.000Z | 2021-12-20T13:25:12.000Z | """
Selection sort algorithm implementation.
Time Complexity:
Best O(n^2)
Worst O(n^2)
Average O(n^2)
Space Complexity: O(1)
"""
def selection_sort(array):
    for i in range(len(array)):
        min_index = i
        for j in range(i + 1, len(array)):
            if array[j] < array[min_index]:
                min_index = j
        array[i], array[min_index] = array[min_index], array[i]
    return array


if __name__ == '__main__':
    unsorted_array = [5, 3, 6, 2, 10, -23, 0]
    selected_array = selection_sort(unsorted_array)
    print(selected_array)
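# Illustrative output for the sample list above: [-23, 0, 2, 3, 5, 6, 10]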
| 23 | 63 | 0.606957 | 85 | 575 | 3.882353 | 0.435294 | 0.121212 | 0.157576 | 0.109091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.033097 | 0.264348 | 575 | 24 | 64 | 23.958333 | 0.747045 | 0.229565 | 0 | 0 | 0 | 0 | 0.018391 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0 | 0 | 0.166667 | 0.083333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
158547efcb33f290119505b7dad81583331db6a6 | 885 | py | Python | pycritty/cli/install.py | binRick/pycritty | ae27e61fe597c22e6830d62533e11d64bf06a3ae | [
"MIT"
] | 159 | 2020-12-13T19:38:32.000Z | 2022-03-30T23:12:49.000Z | pycritty/cli/install.py | binRick/pycritty | ae27e61fe597c22e6830d62533e11d64bf06a3ae | [
"MIT"
] | 4 | 2021-01-10T16:31:10.000Z | 2022-02-15T17:38:51.000Z | pycritty/cli/install.py | binRick/pycritty | ae27e61fe597c22e6830d62533e11d64bf06a3ae | [
"MIT"
] | 50 | 2020-12-13T22:35:34.000Z | 2022-03-30T01:29:28.000Z | import argparse
from .pycritty import subparsers, formatter
install_parser = subparsers.add_parser(
'install',
formatter_class=formatter(),
help="Install a config file or theme from a url",
argument_default=argparse.SUPPRESS,
)
install_parser.add_argument(
'url',
help='URL where the config is located',
)
install_parser.add_argument(
'-n', '--name',
metavar='NAME',
default='',
help='Name of the config/theme once installed',
)
install_parser.add_argument(
'-o', '--override',
action='store_true',
help='Override existing config',
)
group = install_parser.add_mutually_exclusive_group()
group.add_argument(
'-t', '--theme',
action='store_true',
help='Install as theme',
)
group.add_argument(
'-c', '--config',
action='store_true',
help='Install as a config file in your saves directory (default)',
)
| 20.581395 | 70 | 0.680226 | 110 | 885 | 5.3 | 0.418182 | 0.111492 | 0.109777 | 0.123499 | 0.096055 | 0.096055 | 0 | 0 | 0 | 0 | 0 | 0 | 0.183051 | 885 | 42 | 71 | 21.071429 | 0.806362 | 0 | 0 | 0.235294 | 0 | 0 | 0.329944 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.058824 | 0 | 0.058824 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
158778c93cc9dac58b6a4fb948f20e547d063f46 | 442 | py | Python | manejo-de-excepciones.py | OscarPalominoC/PensamientoComputacionalPython | b2797475d25452c69467c7e24720671777ad3ed9 | [
"MIT"
] | null | null | null | manejo-de-excepciones.py | OscarPalominoC/PensamientoComputacionalPython | b2797475d25452c69467c7e24720671777ad3ed9 | [
"MIT"
] | null | null | null | manejo-de-excepciones.py | OscarPalominoC/PensamientoComputacionalPython | b2797475d25452c69467c7e24720671777ad3ed9 | [
"MIT"
] | null | null | null | def divide_elementos_en_lista(lista, divisor):
# Utilizando la programación defensiva para evitar que el usuario ingrese un 0 como divisor.
try:
return [i / divisor for i in lista]
except ZeroDivisionError as e:
print(e)
return 'No se puede dividir entre 0'
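# Illustrative usage (the values are hypothetical, not part of the exercise):
# divide_elementos_en_lista([2, 4, 6], 2) -> [1.0, 2.0, 3.0]
# divide_elementos_en_lista([2, 4, 6], 0) -> prints the ZeroDivisionError and
# returns the 'No se puede dividir entre 0' message instead of raising.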
lista = range(10)
if __name__ == "__main__":
divisor = int(input('Escribe el divisor: '))
print(divide_elementos_en_lista(lista, divisor)) | 34 | 96 | 0.69457 | 61 | 442 | 4.803279 | 0.688525 | 0.102389 | 0.116041 | 0.150171 | 0.232082 | 0.232082 | 0 | 0 | 0 | 0 | 0 | 0.011662 | 0.223982 | 442 | 13 | 97 | 34 | 0.842566 | 0.20362 | 0 | 0 | 0 | 0 | 0.156695 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0 | 0 | 0.3 | 0.2 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1588ca5cc544c21212b83dbab38deb4721020810 | 5,259 | py | Python | openomics_web/layouts/datatable_view.py | JonnyTran/open-omics | ef5db2dc2fdf486ee5e9fa4e0cf5be61b4531232 | [
"MIT"
] | 12 | 2021-01-14T19:33:48.000Z | 2022-01-06T16:13:03.000Z | openomics_web/layouts/datatable_view.py | JonnyTran/open-omics | ef5db2dc2fdf486ee5e9fa4e0cf5be61b4531232 | [
"MIT"
] | 13 | 2020-12-31T20:38:11.000Z | 2021-11-24T06:21:12.000Z | openomics_web/layouts/datatable_view.py | JonnyTran/open-omics | ef5db2dc2fdf486ee5e9fa4e0cf5be61b4531232 | [
"MIT"
] | 7 | 2021-02-08T13:42:01.000Z | 2021-10-21T21:37:14.000Z | import dash_core_components as dcc
import dash_html_components as html
import dash_table as dt
from openomics_web.utils.str_utils import longest_common_prefix
def DataTableColumnSelect(columns):
"""
Args:
columns:
"""
longest_common_prefixes = longest_common_prefix(columns)
return html.Div([
html.Div(['Select the gene id/name column to index by:']),
dcc.Dropdown(
id='data-table-genes-col-name',
options=[{'label': col, 'value': col} for col in columns],
style={
'width': '100%',
},
value=columns[0],
),
html.Div(['Select the column prefixes to import:']),
dcc.Dropdown(
id='data-table-columns-select',
options=[{'label': col, 'value': col} for col in longest_common_prefixes],
style={
'width': '100%',
},
multi=True,
)
])
def ExpressionDataTable(df):
"""
Args:
df:
"""
return html.Div(
className="row",
children=[
html.Div(
dt.DataTable(
id='expression-datatable',
columns=[{"name": i, "id": i} for i in df.columns],
page_current=0,
page_size=20,
page_action='custom',
filter_action='custom',
filter_query='',
sort_action='custom',
sort_mode='multi',
sort_by=[],
style_as_list_view=True,
style_cell={
'overflow': 'hidden',
'textOverflow': 'clip',
'whiteSpace': 'normal'
},
style_data={'width': '30px'},
style_data_conditional=[
{'if': {'row_index': 'odd'},
'backgroundColor': 'rgb(248, 248, 248)'
},
],
style_table={"maxHeight": '800px',
'width': '800px',
'marginTop': '5px',
'marginBottom': '10px',
'overflowX': 'scroll'
},
style_header={
'backgroundColor': 'white',
'fontWeight': 'bold'
},
row_selectable="multi",
selected_rows=[],
# virtualization=True,
),
style={'height': 750, 'overflowY': 'scroll'},
className='six columns'
),
html.Div(
id='table-paging-with-graph-container',
className="five columns"
)
]
)
operators = [['ge ', '>='],
['le ', '<='],
['lt ', '<'],
['gt ', '>'],
['ne ', '!='],
['eq ', '='],
['contains '],
['datestartswith ']]
def split_filter_part(filter_part):
    """
    Args:
        filter_part:
    """
    for operator_type in operators:
        for operator in operator_type:
            if operator in filter_part:
                name_part, value_part = filter_part.split(operator, 1)
                name = name_part[name_part.find('{') + 1: name_part.rfind('}')]

                value_part = value_part.strip()
                v0 = value_part[0]
                if (v0 == value_part[-1] and v0 in ("'", '"', '`')):
                    value = value_part[1: -1].replace('\\' + v0, v0)
                else:
                    try:
                        value = float(value_part)
                    except ValueError:
                        value = value_part

                # word operators need spaces after them in the filter string,
                # but we don't want these later
                return name, operator_type[0].strip(), value

    return [None] * 3
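# Illustrative behaviour (the filter strings are hypothetical, not from the original file):
# split_filter_part('{price} ge 10.5') -> ('price', 'ge', 10.5)
# split_filter_part("{name} contains 'abc'") -> ('name', 'contains', 'abc')
# split_filter_part('no operator here') -> [None, None, None]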
def expression_data_view():
return html.Div(id='table-container', children=[dt.DataTable(
id="data-table",
row_selectable='multi',
# sorting=True,
# filtering=True,
css=[{
"selector": ".dash-cell div.dash-cell-value",
"rule": "display: inline; "
"white-space: inherit; "
"overflow: auto; "
"text-overflow: inherit;"
}],
style_cell={
"whiteSpace": "no-wrap",
"overflow": "hidden",
"textOverflow": "ellipsis",
"maxWidth": 100,
'fontWeight': 100,
'fontSize': '11pt',
'fontFamily': 'Courier New',
'backgroundColor': '#1F2132'
},
style_header={
'backgroundColor': '#1F2132',
'textAlign': 'center'
},
style_table={
"maxHeight": "310px",
'width': '320px',
'marginTop': '5px',
'marginBottom': '10px',
},
# n_fixed_rows=1,
# n_fixed_columns=1
)])
| 30.398844 | 86 | 0.424415 | 431 | 5,259 | 5.025522 | 0.403712 | 0.033241 | 0.018006 | 0.014774 | 0.048938 | 0.028624 | 0.028624 | 0.028624 | 0 | 0 | 0 | 0.025782 | 0.446853 | 5,259 | 172 | 87 | 30.575581 | 0.718804 | 0.043925 | 0 | 0.142857 | 0 | 0 | 0.186543 | 0.01672 | 0 | 0 | 0 | 0 | 0 | 1 | 0.030075 | false | 0 | 0.037594 | 0.007519 | 0.105263 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
158cad16a3a956c3d72457100289dace28f8f3ee | 994 | py | Python | start.py | reddelexc/cryptoping-trader | c2b0fe1626fa985e8a1bfb84e26019a94af27c42 | [
"MIT"
] | 12 | 2019-04-12T07:13:56.000Z | 2022-03-07T06:21:18.000Z | start.py | reddelexc/cryptoping-trader | c2b0fe1626fa985e8a1bfb84e26019a94af27c42 | [
"MIT"
] | null | null | null | start.py | reddelexc/cryptoping-trader | c2b0fe1626fa985e8a1bfb84e26019a94af27c42 | [
"MIT"
] | 4 | 2021-01-29T19:28:04.000Z | 2021-12-09T01:52:12.000Z | import sys
from src import Bot, Client, Collector, \
Predictor, PredictorLearnThread, Scribe, \
Trader, TraderThreadCleaner, GarbageCleanerThread
if __name__ == '__main__':
    use_proxy = len(sys.argv) > 1 and sys.argv[1] == '-p'

    pool = {
        'client': Client(use_proxy),
        'bot': Bot(use_proxy),
        'collector': Collector(),
        'predictor': Predictor(),
        'scribe': Scribe(),
        'trader': Trader(),
    }

    for _, entity in pool.items():
        entity.set_pool(pool)

    garbage_cleaning_thread = GarbageCleanerThread(pool['bot'])
    garbage_cleaning_thread.setDaemon(True)
    garbage_cleaning_thread.start()

    predictor_learn_thread = PredictorLearnThread(pool['predictor'], pool['client'], pool['bot'])
    predictor_learn_thread.setDaemon(True)
    predictor_learn_thread.start()

    trader_thread_cleaner = TraderThreadCleaner(pool['trader'], pool['bot'])
    trader_thread_cleaner.setDaemon(True)
    trader_thread_cleaner.start()
| 31.0625 | 97 | 0.678068 | 105 | 994 | 6.12381 | 0.352381 | 0.037325 | 0.097978 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002491 | 0.192153 | 994 | 31 | 98 | 32.064516 | 0.798257 | 0 | 0 | 0 | 0 | 0 | 0.079477 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.08 | 0 | 0.08 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
158d3ba98e41c3cfa0005f23668ccbdb8759e04a | 5,733 | py | Python | bot/users/basedGuild.py | Trimatix-indie/SuperDeckBreaker | 6c5f0a6593df5e7f6807b1e2b09aff65dcf8a6fc | [
"MIT"
] | null | null | null | bot/users/basedGuild.py | Trimatix-indie/SuperDeckBreaker | 6c5f0a6593df5e7f6807b1e2b09aff65dcf8a6fc | [
"MIT"
] | 34 | 2021-03-20T22:42:16.000Z | 2021-09-29T15:50:31.000Z | bot/users/basedGuild.py | Trimatix-indie/SuperDeckBreaker | 6c5f0a6593df5e7f6807b1e2b09aff65dcf8a6fc | [
"MIT"
] | null | null | null | from __future__ import annotations
from typing import Dict, Union
from discord import Guild, TextChannel
import traceback
from .. import botState, lib
from ..baseClasses import serializable
from ..cfg import cfg
from ..game import sdbGame, sdbDeck
from ..reactionMenus import SDBSignupMenu
class BasedGuild(serializable.Serializable):
"""A class representing a guild in discord, and storing extra bot-specific information about it.
:var id: The ID of the guild, directly corresponding to a discord guild's ID.
:vartype id: int
:var dcGuild: This guild's corresponding discord.Guild object
:vartype dcGuild: discord.Guild
"""
def __init__(self, id : int, dcGuild: Guild, commandPrefix : str = cfg.defaultCommandPrefix,
runningGames: Dict[TextChannel, Union[sdbGame.SDBGame, sdbGame.GameChannelReservation]] = {}, decks: Dict[str, dict] = {}, modRoleID = -1,
scrapbookChannelId: int = -1, scrapbookMinCookies: int = 1):
"""
:param int id: The ID of the guild, directly corresponding to a discord guild's ID.
:param discord.Guild guild: This guild's corresponding discord.Guild object
"""
if not isinstance(dcGuild, Guild):
raise lib.exceptions.NoneDCGuildObj("Given dcGuild of type '" + dcGuild.__class__.__name__ + \
"', expecting discord.Guild")
self.id = id
self.dcGuild = dcGuild
if not commandPrefix:
raise ValueError("Empty command prefix provided")
self.commandPrefix = commandPrefix
self.runningGames = runningGames
self.decks = decks
self.activeDecks = {}
self.modRoleID = modRoleID
self.modRole = None
self.scrapbookChannelId = scrapbookChannelId
self.scrapbookMinCookies = scrapbookMinCookies
async def startGameSignups(self, owner, channel, deckName, expansionNames, rounds):
if deckName not in self.decks:
raise NameError("Unknown deck name: " + deckName)
if channel.guild.id != self.id:
raise RuntimeError("Attempted to start a game in a channel not owned by this guild: " + channel.name + "#" + str(channel.id))
if channel in self.runningGames:
raise ValueError("Attempted to start a game in a channel which aleady contains a running game: " + channel.name + "#" + str(channel.id))
if deckName in self.activeDecks:
gameDeck = self.activeDecks[deckName]
else:
try:
gameDeck = sdbDeck.SDBDeck(self.decks[deckName]["meta_path"])
except RuntimeError as e:
gameDeck = None
await channel.send("An unexpected error occurred when building the deck, the error has been logged.\nPlease try playing with a different deck!")
botState.logger.log("BasedGuild", "startGameSignups",
"Exception occured when trying to build a deck before starting a game",
eventType=type(e).__name__, trace=traceback.format_exception(type(e), e, e.__traceback__))
if gameDeck is not None:
self.runningGames[channel] = sdbGame.SDBGame(owner, gameDeck, expansionNames, channel, rounds, self)
signupMsg = await channel.send("")
signupMenu = SDBSignupMenu.SDBSignupMenu(signupMsg, self.runningGames[channel], lib.timeUtil.timeDeltaFromDict(cfg.timeouts.gameJoinMenu))
botState.reactionMenusDB[signupMsg.id] = signupMenu
await signupMenu.updateMessage()
self.decks[deckName]["plays"] += 1
def toDict(self, **kwargs) -> dict:
"""Serialize this BasedGuild into dictionary format to be saved to file.
:return: A dictionary containing all information needed to reconstruct this BasedGuild
:rtype: dict
"""
return {"commandPrefix" : self.commandPrefix, "decks": self.decks, "modRoleID": self.modRole.id if self.modRole is not None else -1,
"scrapbookChannelId": self.scrapbookChannelId, "scrapbookMinCookies": self.scrapbookMinCookies}
@classmethod
def fromDict(cls, guildDict: dict, **kwargs) -> BasedGuild:
"""Factory function constructing a new BasedGuild object from the information
in the provided guildDict - the opposite of BasedGuild.toDict
:param int id: The discord ID of the guild
:param dict guildDict: A dictionary containing all information required to build the BasedGuild object
:return: A BasedGuild according to the information in guildDict
:rtype: BasedGuild
"""
if "id" not in kwargs:
raise NameError("Required kwarg missing: id")
guildID = kwargs["id"]
dcGuild = botState.client.get_guild(guildID)
if not isinstance(dcGuild, Guild):
raise lib.exceptions.NoneDCGuildObj("Could not get guild object for id " + str(guildID))
if "commandPrefix" in guildDict:
return BasedGuild(guildID, dcGuild, commandPrefix=guildDict["commandPrefix"], decks=guildDict["decks"] if "decks" in guildDict else {}, modRoleID=guildDict["modRoleID"] if "modRoleID" in guildDict else -1,
scrapbookChannelId=guildDict.get("scrapbookChannelId", -1), scrapbookMinCookies=guildDict.get("scrapbookMinCookies", 1))
return BasedGuild(guildID, dcGuild, decks=guildDict["decks"] if "decks" in guildDict else {}, modRoleID=guildDict["modRoleID"] if "modRoleID" in guildDict else -1,
scrapbookChannelId=guildDict.get("scrapbookChannelId", -1), scrapbookMinCookies=guildDict.get("scrapbookMinCookies", 1))
| 50.289474 | 217 | 0.664922 | 621 | 5,733 | 6.095008 | 0.280193 | 0.022193 | 0.015852 | 0.009511 | 0.233554 | 0.215059 | 0.201849 | 0.180185 | 0.163804 | 0.132629 | 0 | 0.002549 | 0.24734 | 5,733 | 113 | 218 | 50.734513 | 0.874623 | 0.171115 | 0 | 0.057971 | 0 | 0.014493 | 0.163774 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.043478 | false | 0 | 0.130435 | 0 | 0.231884 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
158daa9a2fa7c45f9dc4c8cd8863b5ada982780f | 8,720 | py | Python | dataspot/config/builders/network_configurator_builder.py | patrickdehoon/dataspot | f06a4606837fa3e0a8f9f679026d01a0b2bd3e37 | [
"MIT"
] | 3 | 2019-09-19T15:46:49.000Z | 2019-09-30T18:09:57.000Z | dataspot/config/builders/network_configurator_builder.py | patrickdehoon/dataspot | f06a4606837fa3e0a8f9f679026d01a0b2bd3e37 | [
"MIT"
] | null | null | null | dataspot/config/builders/network_configurator_builder.py | patrickdehoon/dataspot | f06a4606837fa3e0a8f9f679026d01a0b2bd3e37 | [
"MIT"
] | 3 | 2019-09-19T15:52:07.000Z | 2019-10-08T08:15:32.000Z | from dataspot.config.configurators.network_configurators.plot_height_configurator import PlotHeightConfigurator
from dataspot.config.configurators.network_configurators.plot_width_configurator import PlotWidthConfigurator
from dataspot.config.configurators.network_configurators.xrange_configurator import XRangeConfigurator
from dataspot.config.configurators.network_configurators.yrange_configurator import YRangeConfigurator
from dataspot.config.configurators.node_configurators.node_size_configurator import NodeSizeConfigurator
from dataspot.config.configurators.node_configurators.golden_sources_configurator import GoldenSourcesConfigurator
class NetworkConfiguratorBuilder(object):
"""
The NetworkConfiguratorBuilder builds all of the items needed to set the basic conditions of the configurators.
The following variables will be set:
[*] Plot width
[*] Plot height
[*] X-range
[*] Y-range
[*] Node-size-config (Interval based configuration setting the possible sizes a node can take, score-based)
[*] Golden Sources (Golden Sources are the absolute root of your configurators analysis. These objects are often the main
starting points of conducting your analysis.)
"""
def __init__(self, config):
"""
:param config: The config parameter is a dictionary containing all of the Dataspot basic configurations. An
example of the basic structure can be found in examples/dataspot_config_example.json
:type config: dict
"""
if not isinstance(config, dict):
raise TypeError("The configuration that has been provided is not of a dictionary type")
self.__network_config = config
self.__plot_width = None
self.__plot_height = None
self.__x_range = None
self.__y_range = None
self.__node_size_config = None
self.__golden_sources = None
def set_network_config(self, config):
"""
:param config: The config parameter is a dictionary containing all of the Dataspot basic configurations. An
example of the basic structure can be found in examples/dataspot_config_example.json
:type config: dict
"""
if not isinstance(config, dict):
raise TypeError("The configuration that has been provided is not of a dictionary type")
self.__network_config = config
def get_network_config(self):
"""
:return: The Dataspot config is a dictionary containing all of the Dataspot basic configurations. An
example of the basic structure can be found in examples/dataspot_config_example.json
:rtype: dict
"""
return self.__network_config
def set_plot_width(self, config):
"""
:param config: The config parameter is a dictionary containing all of the Dataspot basic configurations. An
example of the basic structure can be found in examples/dataspot_config_example.json
:type config: dict
"""
if not isinstance(config, dict):
raise TypeError("The configuration that has been provided is not of a dictionary type")
plot_width_configurator = PlotWidthConfigurator(config=config)
plot_width_configurator.build()
plot_width = plot_width_configurator.get_plot_width_config()
self.__plot_width = plot_width
def get_plot_width(self):
"""
:return: Returns the integer value for the width of the plot the configurators analysis will be placed in.
:rtype: int
"""
return self.__plot_width
def set_plot_height(self, config):
"""
:param config: The config parameter is a dictionary containing all of the Dataspot basic configurations. An
example of the basic structure can be found in examples/dataspot_config_example.json
:type config: dict
"""
if not isinstance(config, dict):
raise TypeError("The configuration that has been provided is not of a dictionary type")
plot_height_configurator = PlotHeightConfigurator(config=config)
plot_height_configurator.build()
plot_height = plot_height_configurator.get_plot_height_config()
self.__plot_height = plot_height
def get_plot_height(self):
"""
:return: Returns the integer value for the height of the plot the configurators analysis will be placed in.
:rtype: int
"""
return self.__plot_height
def set_x_range(self, config):
"""
:param config: The config parameter is a dictionary containing all of the Dataspot basic configurations. An
example of the basic structure can be found in examples/dataspot_config_example.json
:type config: dict
"""
if not isinstance(config, dict):
raise TypeError("The configuration that has been provided is not of a dictionary type")
x_range_configurator = XRangeConfigurator(config=config)
x_range_configurator.build()
x_range = x_range_configurator.get_x_range_config()
self.__x_range = x_range
def get_x_range(self):
"""
:return: Returns a list containing the two extremes (int) for the x-axis for the configurators graph.
:rtype: list
"""
return self.__x_range
def set_y_range(self, config):
"""
:param config: The config parameter is a dictionary containing all of the Dataspot basic configurations. An
example of the basic structure can be found in examples/dataspot_config_example.json
:type config: dict
"""
if not isinstance(config, dict):
raise TypeError("The configuration that has been provided is not of a dictionary type")
y_range_configurator = YRangeConfigurator(config=config)
y_range_configurator.build()
y_range = y_range_configurator.get_y_range_config()
self.__y_range = y_range
def get_y_range(self):
"""
:return: Returns a list containing the two extremes (int) for the y-axis for the configurators graph.
:rtype: list
"""
return self.__y_range
def set_node_size_config(self, config):
"""
:param config: The config parameter is a dictionary containing all of the Dataspot basic configurations. An
example of the basic structure can be found in examples/dataspot_config_example.json
:type config: dict
"""
if not isinstance(config, dict):
raise TypeError("The configuration that has been provided is not of a dictionary type")
node_size_configurator = NodeSizeConfigurator(config=config)
node_size_configurator.build()
node_size_config = node_size_configurator.get_node_size_config()
self.__node_size_config = node_size_config
def get_node_size_config(self):
"""
:return: A dictionairy containing an interval-based configuration, on which the node sizes are determined.
Dataspot takes the calculated root score and matches this with one of the interval levels in this
configuration.
:rtype: dict
"""
return self.__node_size_config
def set_golden_sources(self, config):
"""
:param config: The config parameter is a dictionary containing all of the Dataspot basic configurations. An
example of the basic structure can be found in examples/dataspot_config_example.json
:type config: dict
"""
if not isinstance(config, dict):
raise TypeError("The configuration that has been provided is not of a dictionary type")
golden_sources_configurator = GoldenSourcesConfigurator(config=config)
golden_sources_configurator.build()
golden_sources = golden_sources_configurator.get_golden_sources_config()
self.__golden_sources = golden_sources
def get_golden_sources(self):
"""
:return: A list containing all of the golden sources of the configurators graph.
:rtype: list
"""
return self.__golden_sources
def build(self):
"""
The build function prepares all of the configurators configuration components at once.
"""
config = self.get_network_config()
self.set_plot_width(config=config)
self.set_plot_height(config=config)
self.set_x_range(config=config)
self.set_y_range(config=config)
self.set_node_size_config(config=config)
self.set_golden_sources(config=config)
| 43.819095 | 125 | 0.686927 | 1,061 | 8,720 | 5.441093 | 0.116871 | 0.022519 | 0.016629 | 0.03118 | 0.5815 | 0.558289 | 0.523991 | 0.498008 | 0.484843 | 0.468561 | 0 | 0 | 0.254243 | 8,720 | 198 | 126 | 44.040404 | 0.887744 | 0.39094 | 0 | 0.211765 | 0 | 0 | 0.11609 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.188235 | false | 0 | 0.070588 | 0 | 0.352941 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1590abe2fcf4885ac2f64c9cfcad25ae46f3b4fb | 3,842 | py | Python | subdomain_takeover_tools/confirm_takeover.py | martinvw/subdomain-takeover-tools | a3899a259c88e4324e3e44a345575072dfbd4966 | [
"MIT"
] | 2 | 2022-01-19T22:24:29.000Z | 2022-01-28T07:50:49.000Z | subdomain_takeover_tools/confirm_takeover.py | martinvw/subdomain-takeover-tools | a3899a259c88e4324e3e44a345575072dfbd4966 | [
"MIT"
] | null | null | null | subdomain_takeover_tools/confirm_takeover.py | martinvw/subdomain-takeover-tools | a3899a259c88e4324e3e44a345575072dfbd4966 | [
"MIT"
] | null | null | null | import sys
from subdomain_takeover_tools.confirm_agile_crm import is_valid as agile_crm_is_valid
from subdomain_takeover_tools.confirm_azure_app_service import is_valid as azure_app_service_is_valid
from subdomain_takeover_tools.confirm_azure_edge_cdn import is_valid as azure_edge_cdn_is_valid
from subdomain_takeover_tools.confirm_azure_traffic_manager import is_valid as azure_traffic_manager_is_valid
from subdomain_takeover_tools.confirm_bigcartel import is_valid as bigcartel_is_valid
from subdomain_takeover_tools.confirm_cargo import is_valid as cargo_is_valid
from subdomain_takeover_tools.confirm_elb import is_valid as elb_is_valid
from subdomain_takeover_tools.confirm_fastly import is_valid as fastly_is_valid
from subdomain_takeover_tools.confirm_github import is_valid as github_is_valid
from subdomain_takeover_tools.confirm_pantheon import is_valid as pantheon_is_valid
from subdomain_takeover_tools.confirm_s3 import is_valid as s3_is_valid
from subdomain_takeover_tools.confirm_shopify import is_valid as shopify_is_valid
from subdomain_takeover_tools.confirm_surge import is_valid as surge_is_valid
from subdomain_takeover_tools.confirm_tumblr import is_valid as tumblr_is_valid
from subdomain_takeover_tools.confirm_unclaimed import is_valid as unclaimed_is_valid
def main():
    inverse = '--inverse' in sys.argv
    strict = '--strict' in sys.argv

    data = sys.stdin.read()
    lines = data.strip().split('\n')
    for line in lines:
        if not line.strip():
            continue
        elif ']\t\t' not in line:
            raise IOError("Unexpected input received, currently only subtake output is supported")

        (service, target, domain) = _process_line(line)
        _process_subtake_output(service, target, domain, inverse, strict)
def _process_line(line):
    (parts, domain) = line.split('\t\t')
    if ': ]' in parts:
        service = parts[1:-3]
        target = ''
    else:
        (service, target) = parts[1:-2].split(': ')
    return service, target, domain
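# Illustrative parse, assuming subtake emits lines of the form '[service: target ]\t\tdomain'
# (with a space before the closing bracket, which the slicing above relies on):
# _process_line('[github: example.github.io ]\t\texample.com')
#   -> ('github', 'example.github.io', 'example.com')
# _process_line('[unclaimed: ]\t\texample.com') -> ('unclaimed', '', 'example.com')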
def _process_subtake_output(service, target, domain, inverse, strict):
    result = _perform_check(service, target, domain)
    if result is None:
        return

    # xor
    if inverse != result:
        print(domain)
def _perform_check(service, target, domain):
    if service == 'agilecrm':
        return agile_crm_is_valid(domain, target)
    elif service == 'azure':
        if target.endswith('azurewebsites.net'):
            return azure_app_service_is_valid(domain, target)
        elif target.endswith('azureedge.net'):
            return azure_edge_cdn_is_valid(domain, target)
        elif target.endswith('trafficmanager.net'):
            return azure_traffic_manager_is_valid(domain, target)
        else:
            # other Azure services are not yet supported
            return None
    elif service == 'bigcartel':
        return bigcartel_is_valid(domain, target)
    elif service == 'cargo':
        return cargo_is_valid(domain, target)
    elif service == 'elasticbeanstalk':
        return elb_is_valid(domain, target)
    elif service == 'fastly':
        return fastly_is_valid(domain, target)
    elif service == 'github':
        return github_is_valid(domain, target)
    elif service == 'pantheon':
        return pantheon_is_valid(domain, target)
    elif service == 's3 bucket':
        return s3_is_valid(domain, target)
    elif service == 'shopify':
        return shopify_is_valid(domain, target)
    elif service == 'surge':
        return surge_is_valid(domain, target)
    elif service == 'tumblr':
        return tumblr_is_valid(domain, target)
    elif service == 'unclaimed':
        return unclaimed_is_valid(domain, target)
    else:
        return None
if __name__ == "__main__":
main()
| 37.300971 | 109 | 0.726184 | 511 | 3,842 | 5.136986 | 0.172211 | 0.122667 | 0.079238 | 0.11581 | 0.545143 | 0.462857 | 0.334857 | 0.139048 | 0.048 | 0.048 | 0 | 0.002597 | 0.198074 | 3,842 | 102 | 110 | 37.666667 | 0.8494 | 0.011973 | 0 | 0.109756 | 0 | 0 | 0.069338 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04878 | false | 0 | 0.195122 | 0 | 0.487805 | 0.012195 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
15975c5c1d59b3336b3cbfe96936dc3ab25cc893 | 5,606 | py | Python | custom_nn.py | penpy/neural-network | 5bc71577c6c13d937c036f3c7babb31f24e1cd6d | [
"MIT"
] | null | null | null | custom_nn.py | penpy/neural-network | 5bc71577c6c13d937c036f3c7babb31f24e1cd6d | [
"MIT"
] | null | null | null | custom_nn.py | penpy/neural-network | 5bc71577c6c13d937c036f3c7babb31f24e1cd6d | [
"MIT"
] | null | null | null | import numpy as np
def softmax(x):
    exp_x = np.exp(x)
    return exp_x / np.sum(exp_x, axis=0)


def sigmoid(x):
    return 1 / (1 + np.exp(-x))


def deriv_sigmoid(s):
    """Derivative of the sigmoid given the output s of the sigmoid"""
    return s * (1 - s)


def relu(x):
    return x * (x > 0)


def heaviside_step(x):
    return np.array(x > 0, dtype=float)
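# Quick sanity checks (illustrative): softmax(np.array([0., 0.])) gives [0.5, 0.5]
# and sigmoid(0) gives 0.5; heaviside_step is the derivative paired with relu
# (1 where the input is positive, 0 elsewhere).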
class MultilayerPerceptronNN():
def __init__(self, dim=(784, 64, 10), activ='sigmoid'):
self.dim = dim
self.w = []
for i in range(len(dim)-1):
lim = 1 / np.sqrt(dim[i])
self.w.append(np.random.uniform(-lim, lim, (dim[i+1], dim[i]+1)))
self.best_w = self.w
if activ == 'sigmoid':
self.activ = sigmoid
self.d_activ = deriv_sigmoid
elif activ == 'relu':
self.activ = relu
self.d_activ = heaviside_step
else:
raise Exception(f"Activation function '{activ}' not supported")
def forward(self, x):
_, batch_size = x.shape
ones_row = np.ones((1, batch_size))
hidden = []
for i in range(len(self.dim) - 2):
x = np.concatenate((x, ones_row))
hidden.append(x)
x = self.w[i] @ x
hidden.append(x)
x = self.activ(x)
x = np.concatenate((x, ones_row))
hidden.append(x)
x = self.w[-1] @ x
y = softmax(x)
return y, hidden
def backprop(self, hidden, y, target):
delta = y - target
dw = []
for i in range(len(self.dim) - 1):
deriv = delta @ hidden[2*len(self.dim)-4-2*i].T
dw.append(deriv)
if i != len(self.dim) - 2:
d_activ = self.d_activ(hidden[2*len(self.dim)-4-2*i])
delta = self.w[len(self.dim)-2-i].T @ delta * d_activ
delta = delta[:-1]
dw.reverse()
return dw
def one_hot(labels):
    """Build one-hot vectors of the labels"""
    t = np.zeros((10, len(labels)))
    t[labels, range(len(labels))] = 1
    return t
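# Illustrative: one_hot([1, 3]) returns a (10, 2) array with a single 1 per column,
# in row 1 for the first label and row 3 for the second (one column per example).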
def accuracy(y, labels):
    """Proportion of outputs y which match the labels"""
    guess = np.argmax(y, axis=0)
    nb_correct = np.sum(guess == labels)
    return nb_correct / len(labels)


def ce_loss(y, target):
    """Cross-entropy loss"""
    return -np.mean(np.sum(np.log(y)*target, axis=0))
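# Because target is one-hot, the inner sum reduces to the log-probability the model
# assigns to the true class, so this is the usual mean negative log-likelihood.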
def train(model, data, n_epoch, batch_size, lr0, decay_rate=0):
"""Train Neural Net model"""
inp, labels, inp_val, labels_val = data
n_examples = len(labels)
assert batch_size <= n_examples
n_itr = int(np.ceil(n_examples*n_epoch/batch_size))
print_itr_step = 100
print('Total iterarions:', n_itr)
idx_permut = np.concatenate([np.random.permutation(n_examples)
for _ in range(n_epoch+2)])
idx_permut = idx_permut[:(n_itr+1)*batch_size].reshape((n_itr+1, -1))
labels_one_hot = one_hot(labels)
labels_val_one_hot = one_hot(labels_val)
y, hidden = model.forward(inp[:, idx_permut[0]])
y_val, _ = model.forward(inp_val)
# Loss and accuracy of the training batch and validation set
log = {'loss': [ce_loss(y, labels_one_hot[:, idx_permut[0]])],
'acc': [accuracy(y, labels[idx_permut[0]])],
'vloss': [ce_loss(y_val, labels_val_one_hot)],
'vacc': [accuracy(y_val, labels_val)],}
best_vloss = log['vloss'][0]
for itr in range(n_itr):
epoch = int(itr*batch_size/n_examples)
dw = model.backprop(hidden, y, labels_one_hot[:, idx_permut[itr]])
lr = lr0 / (1 + decay_rate*epoch)
for i in range(len(model.dim) - 1):
model.w[i] -= lr * dw[i]
y, hidden = model.forward(inp[:, idx_permut[itr+1]])
y_val, _ = model.forward(inp_val)
log['loss'].append(ce_loss(y, labels_one_hot[:, idx_permut[itr+1]]))
log['acc'].append(accuracy(y, labels[idx_permut[itr+1]]))
log['vloss'].append(ce_loss(y_val, labels_val_one_hot))
log['vacc'].append(accuracy(y_val, labels_val))
# Store the weights yielding the best validation loss
if log['vloss'][-1] < best_vloss:
for i in range(len(model.dim) - 1):
model.best_w[i] = model.w[i].copy()
# Keep track of the loss
if itr%print_itr_step == 0 or itr == n_itr-1:
info = f"Iteration {itr}/{n_itr} (epoch {epoch})"
info += f" ; loss={log['loss'][itr]} ; vloss={log['vloss'][itr]}"
print(info)
for i in range(len(model.dim) - 1):
model.w[i] = model.best_w[i].copy()
return log
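# Illustrative call (the hyperparameter values are placeholders, not tuned settings):
# log = train(model, (inputs_train, labels_train, inputs_valid, labels_valid),
#             n_epoch=5, batch_size=64, lr0=0.01, decay_rate=0.1)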
def normalize(data):
    """Min-Max normalization: rescale to [0,1]"""
    data_min = data.min(axis=1).reshape((-1, 1))
    data_max = data.max(axis=1).reshape((-1, 1))
    data_range = (data_max - data_min) + (data_max == data_min)
    return (data - data_min) / data_range
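# Illustrative: normalize(np.array([[0., 5., 10.]])) -> [[0., 0.5, 1.]]; the
# (data_max == data_min) term only guards constant rows against division by zero.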
def prepare(images, labels, p_validation=10):
"""Normalize and split train/validation sets"""
n_examples = len(images)
inputs = images.reshape((n_examples, -1))
normalized_inputs = normalize(inputs)
permutations = np.random.permutation(n_examples)
n_validation = round(p_validation * n_examples)
validation_ids = permutations[:n_validation]
train_ids = permutations[n_validation:]
inputs_valid = normalized_inputs[validation_ids]
labels_valid = labels[validation_ids]
inputs_train = normalized_inputs[train_ids]
labels_train = labels[train_ids]
return inputs_train.T, labels_train, inputs_valid.T, labels_valid | 33.171598 | 77 | 0.587585 | 823 | 5,606 | 3.831106 | 0.18955 | 0.020932 | 0.011418 | 0.020932 | 0.23882 | 0.16746 | 0.142087 | 0.100222 | 0.053917 | 0.045036 | 0 | 0.017048 | 0.26757 | 5,606 | 169 | 78 | 33.171598 | 0.750852 | 0.071531 | 0 | 0.080645 | 0 | 0 | 0.041215 | 0.009288 | 0 | 0 | 0 | 0 | 0.008065 | 1 | 0.112903 | false | 0 | 0.008065 | 0.024194 | 0.233871 | 0.032258 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1598b7dee19781329e09f02bed28f00c924a7fb7 | 850 | py | Python | Max/Max_0075_20200318.py | Morek999/OMSCS_Taiwan_Leetcode | 8ec18e08e9313bc3326846ca6ef6e569380a133f | [
"MIT"
] | 1 | 2020-01-08T14:10:24.000Z | 2020-01-08T14:10:24.000Z | Max/Max_0075_20200318.py | Morek999/OMSCS_Taiwan_Leetcode | 8ec18e08e9313bc3326846ca6ef6e569380a133f | [
"MIT"
] | null | null | null | Max/Max_0075_20200318.py | Morek999/OMSCS_Taiwan_Leetcode | 8ec18e08e9313bc3326846ca6ef6e569380a133f | [
"MIT"
] | null | null | null | """
75. Sort Colors
https://leetcode.com/problems/sort-colors/
Time complexity: O()
Space complexity: O()
"""
from typing import List
class Solution:
def sortColors(self, nums: List[int]) -> None:
"""
Do not return anything, modify nums in-place instead.
"""
p0 = curr = 0
p2 = len(nums) - 1
while p2 >= curr:
if nums[curr] == 0:
nums[p0], nums[curr] = nums[curr], nums[p0]
p0 += 1
curr += 1
elif nums[curr] == 1:
curr += 1
else:
nums[p2], nums[curr] = nums[curr], nums[p2]
p2 -= 1
# nums.sort()
ans = [
[2,0,2,1,1,0] # [0,0,1,1,2,2]
]
for trails in ans:
print(Solution().maxProduct(trails))
| 24.285714 | 62 | 0.445882 | 103 | 850 | 3.679612 | 0.456311 | 0.126649 | 0.126649 | 0.084433 | 0.105541 | 0 | 0 | 0 | 0 | 0 | 0 | 0.062124 | 0.412941 | 850 | 34 | 63 | 25 | 0.697395 | 0.216471 | 0 | 0.1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0 | 0.05 | 0 | 0.15 | 0.05 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
159b88e9ded1306f1bbc757eafe9af1af8292c32 | 5,590 | py | Python | dane/document.py | CLARIAH/DANE-util | 8a3edec69be18ac3bdee476b65059409af05c1bb | [
"Apache-2.0"
] | null | null | null | dane/document.py | CLARIAH/DANE-util | 8a3edec69be18ac3bdee476b65059409af05c1bb | [
"Apache-2.0"
] | 1 | 2019-12-11T19:46:20.000Z | 2019-12-11T21:30:38.000Z | dane/document.py | CLARIAH/DANE-util | 8a3edec69be18ac3bdee476b65059409af05c1bb | [
"Apache-2.0"
] | null | null | null | # Copyright 2020-present, Netherlands Institute for Sound and Vision (Nanne van Noord)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##############################################################################
import json
import sys
from abc import ABC, abstractmethod
from dane.errors import APIRegistrationError, MissingEndpointError
from requests.utils import requote_uri
class Document():
"""This is a class representation of a document in DANE, it holds both data
and some logic.
:param target: Dict containing `id`, `url`, and `type` keys to described
the target document.
:type target: dict
:param creator: Dict containing `id`, and `type` keys to describe the
document owner/creator.
:type creator: dict
:param api: Reference to a class:`base_classes.base_handler` which is
used to communicate with the server.
:type api: :class:`base_classes.base_handler`, optional
:param _id: ID of the document, assigned by DANE-server
:type _id: int, optional
:param created_at: Creation date
:param updated_at: Last modified date
"""
VALID_TYPES = ["Dataset", "Image", "Video", "Sound", "Text"]
VALID_AGENTS = ["Organization", "Human", "Software"]
def __init__(self, target, creator, api=None, _id=None,
created_at=None, updated_at=None):
if not {"id", "url", "type"} <= target.keys() and len(target['id']) > 2:
raise KeyError("Target object must contains at least the `id`," + \
"url, and type properties")
if target['type'] not in self.VALID_TYPES:
raise ValueError("Invalid target type. Valid types are: {}".format(
", ".join(self.VALID_TYPES)))
self.target = target
self.target['url'] = requote_uri(str(self.target['url']).strip())
if not {"id", "type"} <= creator.keys():
raise KeyError("Creator object must contains at least the `id` " + \
"and type properties")
if creator['type'] not in self.VALID_AGENTS:
raise ValueError("Invalid creator type. Valid types are: {}".format(
", ".join(self.VALID_AGENTS)))
self.creator = creator
self.created_at = created_at
self.updated_at = updated_at
self.api = api
self._id = _id
def __str__(self):
return self.to_json()
def to_json(self, indent=None):
"""Returns this document serialised as JSON, excluding the API reference.
:return: JSON string of the document
:rtype: str
"""
out = {}
for kw in vars(self):
if kw == 'api':
continue
elif kw == '_id' and self._id is None:
continue
else:
out[kw] = getattr(self, kw)
return json.dumps(out, indent=indent)
@staticmethod
def from_json(json_str):
"""Constructs a :class:`dane.Document` instance from a JSON string
:param json_str: Serialised :class:`dane.Document`
:type json_str: str or dict
:return: JSON string of the document
:rtype: :class:`dane.Document`
"""
if isinstance(json_str, str):
json_str = json.loads(json_str)
return Document(**json_str)
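# Illustrative round trip (the ids and url below are hypothetical, not from the source):
# doc = Document({"id": "doc0001", "url": "http://example.com/a.mp4", "type": "Video"},
#                {"id": "archive01", "type": "Organization"})
# doc2 = Document.from_json(doc.to_json())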
def set_api(self, api):
"""Set the API for the document
:param api: Reference to a :class:`base_classes.base_handler` which is
used to communicate with the database, and queueing system.
:type api: :class:`base_classes.base_handler`, optional
:return: self
"""
self.api = api
return self
def register(self):
"""Register this document in DANE, this will assign an _id to the
document. Requires an API to be set.
:return: self
"""
if self._id is not None:
raise APIRegistrationError('Document already registered')
elif self.api is None:
raise MissingEndpointError('No endpoint found to'\
'register document')
self._id = self.api.registerDocument(document=self)
return self
def delete(self):
"""Delete this document. Requires an API to be set.
"""
if self.api is None:
raise MissingEndpointError('No API found')
return self.api.deleteDocument(document=self)
def getAssignedTasks(self, task_key = None):
"""Retrieve tasks assigned to this document. Accepts an optional
task_key to filter for a specific type of tasks. Requires an
API to be set.
:param task_key: Key of task type to filter for
:type task_key: string, optional
:return: list of dicts with task keys and ids."""
if self._id is None:
raise APIRegistrationError('Document needs to be registered')
elif self.api is None:
raise MissingEndpointError('No endpoint found to'\
'query tasks')
return self.api.getAssignedTasks(self._id, task_key)
| 34.9375 | 86 | 0.615742 | 705 | 5,590 | 4.797163 | 0.283688 | 0.018628 | 0.018924 | 0.023655 | 0.215849 | 0.205204 | 0.19929 | 0.133057 | 0.086931 | 0.086931 | 0 | 0.002232 | 0.278712 | 5,590 | 159 | 87 | 35.157233 | 0.836558 | 0.404651 | 0 | 0.147059 | 0 | 0 | 0.150556 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.117647 | false | 0 | 0.073529 | 0.014706 | 0.338235 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
159d44799a9c56a805e0bf7bdc7f5ca0be5ac255 | 3,551 | py | Python | pterygium_inference.py | SERI-EPI-DS/pterygium_detection | 5d56896cec4f154bd5e10ddf4db079b1e8d8564d | [
"MIT"
] | null | null | null | pterygium_inference.py | SERI-EPI-DS/pterygium_detection | 5d56896cec4f154bd5e10ddf4db079b1e8d8564d | [
"MIT"
] | null | null | null | pterygium_inference.py | SERI-EPI-DS/pterygium_detection | 5d56896cec4f154bd5e10ddf4db079b1e8d8564d | [
"MIT"
] | null | null | null | import argparse
import numpy as np
import pandas as pd
from tqdm import tqdm
from PIL import Image
import torch, torchvision
from torchvision import transforms
from torch.nn import functional as F
from torchvision.models.utils import load_state_dict_from_url
SEED = 123
torch.manual_seed(SEED)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
np.random.seed(SEED)
def get_model(task = 'any_pterygium'):
task_types = ['any_pterygium', 'referable_pterygium']
assert task in task_types, f"Pick from {task_types}"
state_dict = load_state_dict_from_url(f'https://github.com/SERI-EPI-DS/pterygium_detection/releases/download/v1.0/{task}.pth')
# Binary pterygium
model=torchvision.models.vgg16_bn(num_classes = 2)
model.load_state_dict(state_dict)
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
model.to(device);
model.eval();
return model, device
def get_data_loader(root_dir, num_workers, batch_size):
image_transformations = transforms.Compose([
transforms.Resize((224,224)),
transforms.ToTensor(),
transforms.Normalize(0.5, 0.5)
])
def test_valid_file(path):
try:
_ = Image.open(path)
except:
return False
return True
asp_dataset = torchvision.datasets.ImageFolder(root_dir,
is_valid_file = test_valid_file,
transform= image_transformations)
data_loader = torch.utils.data.DataLoader(asp_dataset,
batch_size=batch_size,
shuffle=False,
num_workers=num_workers)
return data_loader
def get_predictions(model, data_loader, device):
predictions = []
with torch.no_grad():
for data in tqdm(data_loader):
inputs= data[0].to(device)
preds = model(inputs)
predictions.append(F.softmax(preds.detach(), dim=1).cpu().numpy())
predictions=np.concatenate(predictions)
return predictions
def main(args):
model, device = get_model(args.task_type)
dataloader = get_data_loader(args.folder_path, args.workers, args.batch_size)
predictions = get_predictions(model, dataloader, device)
files = [i[0].replace(args.folder_path, '') for i in dataloader.dataset.imgs]
df = pd.DataFrame({'files':files, 'prediction_probability': predictions[:,1]})
df.to_csv(args.df_save_path, index=False)
return
if __name__ == '__main__':
args = argparse.ArgumentParser(description='Store model predictions as a csv')
args.add_argument('task_type',
default='any_pterygium',
const='any_pterygium',
nargs='?',
choices=['any_pterygium', 'referable_pterygium'],
help='which model to load: (default: %(default)s)')
args.add_argument('folder_path', type=str,
help='path to folder with images')
args.add_argument('-w', '--workers', default=6, type=int,
help='number of cores to use in parallel')
args.add_argument('-b', '--batch_size', default=64, type=int,
help='batch size')
args.add_argument('-s', '--df_save_path', default='./predictions.csv', type=str,
help='path to save predictions')
main(args.parse_args())
| 30.350427 | 130 | 0.62405 | 424 | 3,551 | 5.023585 | 0.367925 | 0.028169 | 0.035211 | 0.015962 | 0.034742 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009619 | 0.268093 | 3,551 | 116 | 131 | 30.612069 | 0.809927 | 0.004506 | 0 | 0 | 0 | 0.012821 | 0.140957 | 0.006227 | 0 | 0 | 0 | 0 | 0.012821 | 1 | 0.064103 | false | 0 | 0.115385 | 0 | 0.25641 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
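A sketch of driving the pipeline above programmatically instead of via the command line, assuming the released weights are reachable; the folder path and batch settings are placeholder values.
import argparse

args = argparse.Namespace(task_type='any_pterygium',
                          folder_path='./images/',
                          workers=2,
                          batch_size=16,
                          df_save_path='./predictions.csv')
# main(args)   # downloads the released VGG16 weights, scores ./images/, writes predictions.csv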
15a23a52ff14d809f1cafc15020c2e60fa07129d | 6,578 | py | Python | db/executions_tab.py | Jaleyhd/Krama | 5f29096828fe659eccf8525ff31bc4d7a273c049 | [
"MIT"
] | null | null | null | db/executions_tab.py | Jaleyhd/Krama | 5f29096828fe659eccf8525ff31bc4d7a273c049 | [
"MIT"
] | null | null | null | db/executions_tab.py | Jaleyhd/Krama | 5f29096828fe659eccf8525ff31bc4d7a273c049 | [
"MIT"
] | null | null | null | from __future__ import absolute_import
from . import db_util
from ..proto import krama_pb2
from ..conf import common
import MySQLdb
import os
from google.protobuf import text_format
import simplejson
from ..protobufjson.protobuf_json import *
# CREATE TABLE executions_tab
# (
# exec_id INT NOT NULL AUTO_INCREMENT,
# job_name VARCHAR(512),
# project_name VARCHAR(512),
# depends_on VARCHAR(2048),
# status INT NOT NULL,
# start_time BIGINT DEFAULT -1,
# end_time BIGINT DEFAULT -1,
# retry INT,
# pid INT,
# completion_percentage FLOAT DEFAULT -1.0,
# PRIMARY KEY(exec _id,job_name)
# )
# {
#     "exec_id": {"value": "", "data_type": "float"}
# }
class Executions_tab:
def __init__(self):
self.db=db_util.DbUtil()
def insert_job(self,job_proto,project_name,exec_id,project_path):
arg_dict=self.proto_to_arg_dict(job_proto=job_proto,
project_name=project_name,exec_id=exec_id
,project_path=project_path)
self.insert_dict(arg_dict)
def update_job(self,job_proto,project_name,exec_id):
arg_dict=self.proto_to_arg_dict(job_proto=job_proto,
project_name=project_name,exec_id=exec_id)
self.update_dict(arg_dict)
def update_row(self,row):
self.update_proto(row)
arg_dict=self.row_to_arg_dict(db_job=row)
self.update_dict(arg_dict=arg_dict)
def update_proto(self,row):
"""
Updates the execution prototxt
Args:
row:
Returns:
"""
current_exec_path=row["project_path"]+'/.executions/exec_'+str(row["exec_id"])
current_exec_proto_path=current_exec_path+'/main.prototxt'
current_exec_json_path=current_exec_path+'/main.json'
if os.path.exists(current_exec_path) and os.path.exists(current_exec_proto_path):
schedule_graph=krama_pb2.ScheduleGraph()
text_format.Merge(text=open(current_exec_proto_path).read(),message=schedule_graph)
for idx,schedule_job in enumerate(schedule_graph.schedule_job):
if str(schedule_job.name) == row["job_name"]:
schedule_graph.schedule_job[idx].status=common.EXECUTION_STATUS_DICT[int(str(row["status"]))]
open(current_exec_proto_path,'w').write(str(schedule_graph))
open(current_exec_json_path, 'w').write(simplejson.dumps(pb2json(schedule_graph)))
#@staticmethod
def row_to_arg_dict(self,db_job):
arg_dict={}
#1 exec_id
arg_dict['exec_id']=str(db_job['exec_id'])
#2 project_name
arg_dict['project_name']="'"+str(db_job['project_name'])+"'"
#3 project_name
arg_dict['project_path']="'"+str(db_job['project_path'])+"'"
#4 job_name
arg_dict['job_name']="'"+str(db_job['job_name'])+"'"
#5 depends_on
arg_dict['depends_on']="'"+str(db_job['depends_on'])+"'"
#6 status
arg_dict['status']=str(db_job['status'])
#7 pid
arg_dict['pid']=str(db_job['pid'])
#8 start_time
arg_dict['start_time']=str(db_job['start_time'])
#9 end_time
arg_dict['end_time']=str(db_job['end_time'])
#10 retry
arg_dict['retry']=str(db_job['retry'])
#11 completion_percentage
arg_dict['completion_percentage']=str(db_job['completion_percentage'])
return arg_dict
#@staticmethod
def proto_to_arg_dict(self,job_proto,project_name,exec_id,project_path):
arg_dict={}
#1 exec_id
arg_dict['exec_id']=str(exec_id)
#2 project_name
arg_dict['project_name']="'"+str(project_name)+"'"
#3 project_path
arg_dict['project_path']="'"+str(project_path)+"'"
#4 job_name
if job_proto.HasField('name') and len(str(job_proto.name))>0:
arg_dict['job_name']="'"+str(job_proto.name)+"'"
#5 depends_on
arg_dict['depends_on']="'"+str(','.join(job_proto.depends_on))+"'"
#6 status
if job_proto.HasField('status') and len(str(job_proto.status))>0:
arg_dict['status']=str(job_proto.status)
else:
arg_dict['status']=str(common.EXECUTION_STATUS_UNKNOWN)
#7 pid
if job_proto.HasField('pid') and len(str(job_proto.pid))>0:
arg_dict['pid']=str(job_proto.pid)
#8 start_time
if job_proto.HasField('start_time') and len(str(job_proto.start_time))>0:
arg_dict['start_time']=str(job_proto.start_time)
#9 end_time
if job_proto.HasField('end_time') and len(str(job_proto.end_time))>0:
arg_dict['end_time']=str(job_proto.end_time)
#10 retry
if job_proto.HasField('retry') and len(str(job_proto.retry))>0:
arg_dict['retry']=str(job_proto.retry)
else:
arg_dict['retry']=str(common.DEFAULT_EXECUTION_RETRY)
#11 completion_percentage
if job_proto.HasField('completion_percentage') and len(str(job_proto.completion_percentage))>0:
arg_dict['completion_percentage']=str(job_proto.completion_percentage)
else:
arg_dict['completion_percentage']=str(common.DEFAULT_EXECUTION_COMPLETION_PERC)
return arg_dict
def insert_dict(self,arg_dict):
statement="INSERT INTO executions_tab ("+str(",".join(arg_dict.keys()))+\
") VALUES ("+str(','.join(arg_dict.values()))+");"
self.db.execute(statement=statement)
def update_dict(self,arg_dict):
statement=""
statement="UPDATE executions_tab SET "+str(", ".join([k+'='+v for (k,v) in arg_dict.items() ]))+\
" WHERE "+str(' and '.join([k+'='+v for (k,v) in arg_dict.items()
if k in common.EXECUTIONS_TAB_PRIMARY_KEYS]))+";"
self.db.execute(statement=statement)
open(common.EXECUTION_UPDATE_TRIGGER_PATH,
'w').write(arg_dict["project_path"].replace("'","")+
'/.executions/exec_'+arg_dict['exec_id']+'/main.json')
def get_all_jobs_executions_tab(self,project_name,exec_id):
statement="SELECT * FROM executions_tab where project_name='"+str(project_name)\
+"' and exec_id="+str(exec_id)+";"
return self.db.fetch_dict(statement)
def get_all_executions_tab(self):
statement="SELECT * FROM executions_tab";
return self.db.fetch_dict(statement)
def close(self):
self.db.close()
if __name__=="__main__":
e=Executions_tab() | 37.375 | 113 | 0.633475 | 888 | 6,578 | 4.35473 | 0.149775 | 0.088699 | 0.039824 | 0.032583 | 0.416861 | 0.162141 | 0.15128 | 0.134213 | 0.125162 | 0.108094 | 0 | 0.009881 | 0.230769 | 6,578 | 176 | 114 | 37.375 | 0.754348 | 0.09927 | 0 | 0.12381 | 0 | 0 | 0.128231 | 0.017976 | 0 | 0 | 0 | 0 | 0 | 1 | 0.114286 | false | 0 | 0.085714 | 0 | 0.247619 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
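A self-contained illustration of how update_dict above assembles its UPDATE statement, assuming common.EXECUTIONS_TAB_PRIMARY_KEYS matches the (exec_id, job_name) primary key declared in the CREATE TABLE comment; the values are examples only.
arg_dict = {'exec_id': '7', 'job_name': "'train'", 'status': '2'}
primary_keys = ('exec_id', 'job_name')   # assumed value of EXECUTIONS_TAB_PRIMARY_KEYS
set_clause = ', '.join(k + '=' + v for k, v in arg_dict.items())
where_clause = ' and '.join(k + '=' + v for k, v in arg_dict.items() if k in primary_keys)
print('UPDATE executions_tab SET ' + set_clause + ' WHERE ' + where_clause + ';')
# -> UPDATE executions_tab SET exec_id=7, job_name='train', status=2 WHERE exec_id=7 and job_name='train';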
15a5974f19858186082ef01ae0c5530964542d3d | 3,533 | py | Python | Downloader.py | PriyanshuG13/AnimeDownloader | cb5893339c8789fbedcebc73e8c1d91e8632846d | [
"MIT"
] | null | null | null | Downloader.py | PriyanshuG13/AnimeDownloader | cb5893339c8789fbedcebc73e8c1d91e8632846d | [
"MIT"
] | null | null | null | Downloader.py | PriyanshuG13/AnimeDownloader | cb5893339c8789fbedcebc73e8c1d91e8632846d | [
"MIT"
] | null | null | null | import os
import subprocess
import webbrowser
import clipboard
import requests
from bs4 import BeautifulSoup
from pyfiglet import Figlet
from Database.DatabaseManager import DatabaseManager as Animedb
class Downloader(Animedb):
def __init__(self, delay=60):
self._COLS = os.get_terminal_size().columns
self.showHeader()
super().__init__()
self.__URL = "https://nyaa.iss.one/?f=0&c=0_0&q="
self.__delay = delay
def showHeader(self, font=None):
os.system('clear||cls')
cols = self._COLS
self.drawline(cols)
if font is not None:
self.fancyPrint("Anime Downloader Script", font)
else:
if cols <= 125:
self.fancyPrint("Anime Downloader Script", 'small')
else:
self.fancyPrint("Anime Downloader Script", 'isometric3')
self.drawline(cols)
def downloadFromDB(self, n):
url = self.__URL
row = list(self.animedb['Downloader'][n].values())
ep = self.__incrementEP(row[3])
for j in range(6):
if row[j] == 'N/A':
continue
elif j == 3:
url += ep + "+"
else:
url += row[j] + "+"
self.fancyPrint(f'{row[1]}\nEP-{row[3]} -> {ep}', 'digital')
try:
self.__downloader(url)
self.fancyPrint("COPIED TO CLIPBOARD", 'digital')
self.fancyPrint("UPDATED EPISODE IN DATABASE", 'straight')
self.update(n, "EP", ep)
return self.__delay
except:
self.fancyPrint("NOT YET AVAILABLE", 'short')
return 1
def downloadFromInput(self, name, ep):
url = self.__URL
ep = self.__incrementEP(ep)
url += f"{name} {ep}"
try:
self.__downloader(url)
self.fancyPrint(f"DOWNLOADING EP-{ep}", 'digital')
return self.__delay
except:
self.fancyPrint("NOT YET AVAILABLE", 'short')
return 1
def __downloader(self, url):
page = requests.get(url)
soup = BeautifulSoup(page.content, 'html.parser')
results = soup.find_all('td', class_='text-center')
link = results[0].find_all('a')
self.__openClient(link)
clipboard.copy(link[1]["href"])
return link
def __openClient(self, link):
try:
subprocess.run(f'open -a "Free Download Manager" {link[1]["href"]}', shell=True, check=True)
except:
self.fancyPrint("Try Installing Free Download Manager", 'mini')
print("It also Backs Up as a Torrent Client")
download = "https://nyaa.iss.one/" + link[0]["href"]
webbrowser.open(download, new=2)
def __incrementEP(self, ep):
ep = str(int(ep) + 1)
if int(ep) < 10:
ep = "0" + ep
return ep
def drawline(self, cols):
print(end='\nX')
for col in range(cols - 2):
print(end='~')
print('X\n')
def commitToDb(self):
self.normalPrint("Commit?? : ", end='\c')
if input().upper() == 'Y':
self.commit()
self.fancyPrint("COMMITED UPDATES", 'short')
self.drawline(self._COLS)
def fancyPrint(self, text, font='digital'):
try:
subprocess.run(f'figlet -w $(tput cols) -c -f {font} "{text}" | lolcat', shell=True, check=True)
except:
f = Figlet(font=font)
print(f.renderText(text))
| 31.544643 | 108 | 0.546844 | 407 | 3,533 | 4.636364 | 0.346437 | 0.081611 | 0.030207 | 0.046105 | 0.18601 | 0.104928 | 0.068892 | 0.068892 | 0.068892 | 0.068892 | 0 | 0.011208 | 0.318143 | 3,533 | 111 | 109 | 31.828829 | 0.772105 | 0 | 0 | 0.237113 | 0 | 0 | 0.171243 | 0.005944 | 0 | 0 | 0 | 0 | 0 | 1 | 0.103093 | false | 0 | 0.082474 | 0 | 0.257732 | 0.051546 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
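A standalone sketch of the episode-increment and search-URL logic the class above relies on; the title and episode number are arbitrary examples.
def increment_ep(ep: str) -> str:
    ep = str(int(ep) + 1)
    return ep.zfill(2)                      # same padding as the "0" + ep branch above

name, ep = 'One Piece', '08'
url = 'https://nyaa.iss.one/?f=0&c=0_0&q=' + f'{name} {increment_ep(ep)}'
print(url)                                  # -> https://nyaa.iss.one/?f=0&c=0_0&q=One Piece 09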
15a7b6d2d29c51ea6c81fd90e9bae8cfd595d7fd | 924 | py | Python | figures/scripts/studentst.py | mattpitkin/GraWIToNStatisticsLectures | 09175a3a8cb3c9f0f15535d64deaef1275eac870 | [
"MIT"
] | 1 | 2018-02-09T21:01:54.000Z | 2018-02-09T21:01:54.000Z | figures/scripts/studentst.py | mattpitkin/GraWIToNStatisticsLectures | 09175a3a8cb3c9f0f15535d64deaef1275eac870 | [
"MIT"
] | null | null | null | figures/scripts/studentst.py | mattpitkin/GraWIToNStatisticsLectures | 09175a3a8cb3c9f0f15535d64deaef1275eac870 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
"""
Make plots of the Student's t-distribution for different degrees of freedom
"""
import matplotlib.pyplot as pl
from scipy.stats import norm
from scipy.stats import t
import numpy as np
mu = 0. # the mean, mu
nus = [1., 2., 5, 10, 100] # degrees of freedom, nu
markers = ['b-', 'r-', 'm-', 'c-', 'g-']
x = np.linspace(-6, 6, 1000) # x
# set plot to render labels using latex
pl.rc('text', usetex=True)
pl.rc('font', family='serif')
pl.rc('font', size=14)
fig = pl.figure(figsize=(6,5), dpi=100)
# plot pdfs
for i, nu in enumerate(nus):
pl.plot(x, t.pdf(x, nu), markers[i], label='$\\nu=%d$'%nu)
# plot a Gaussian for comparison
pl.plot(x, norm.pdf(x, mu, 1.), 'k--', label='$N(0,1)$')
ax = pl.gca()
ax.set_xlabel('$t$', fontsize=14)
ax.set_ylabel('$p(t)$', fontsize=14)
ax.legend(loc='best', frameon=False)
fig.subplots_adjust(bottom=0.15)
pl.savefig('../studentst.pdf')
pl.show()
| 22 | 75 | 0.645022 | 166 | 924 | 3.572289 | 0.60241 | 0.020236 | 0.047218 | 0.067454 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.040404 | 0.142857 | 924 | 41 | 76 | 22.536585 | 0.708333 | 0.234848 | 0 | 0 | 0 | 0 | 0.109827 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.181818 | 0 | 0.181818 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
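A short numeric check of what the plot above shows: the Student's t pdf approaches the standard normal as the degrees of freedom grow.
import numpy as np
from scipy.stats import norm, t

x = np.linspace(-6, 6, 1000)
for nu in (1., 5., 100.):
    print(nu, np.max(np.abs(t.pdf(x, nu) - norm.pdf(x))))
# the maximum pointwise difference shrinks towards zero as nu increases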
15a91f68e5ff17be773436d601a24af7798d38aa | 9,248 | py | Python | usr/local/lib/python3.6/dist-packages/html2text/cli.py | threefoldtech/threebot_prebuilt | 1f0e1c65c14cef079cd80f73927d7c8318755c48 | [
"Apache-2.0"
] | null | null | null | usr/local/lib/python3.6/dist-packages/html2text/cli.py | threefoldtech/threebot_prebuilt | 1f0e1c65c14cef079cd80f73927d7c8318755c48 | [
"Apache-2.0"
] | null | null | null | usr/local/lib/python3.6/dist-packages/html2text/cli.py | threefoldtech/threebot_prebuilt | 1f0e1c65c14cef079cd80f73927d7c8318755c48 | [
"Apache-2.0"
] | null | null | null | import argparse
import sys
from html2text import HTML2Text, __version__, config
def main():
baseurl = ""
class bcolors:
HEADER = "\033[95m"
OKBLUE = "\033[94m"
OKGREEN = "\033[92m"
WARNING = "\033[93m"
FAIL = "\033[91m"
ENDC = "\033[0m"
BOLD = "\033[1m"
UNDERLINE = "\033[4m"
p = argparse.ArgumentParser()
p.add_argument(
"--default-image-alt",
dest="default_image_alt",
default=config.DEFAULT_IMAGE_ALT,
help="The default alt string for images with missing ones",
)
p.add_argument(
"--pad-tables",
dest="pad_tables",
action="store_true",
default=config.PAD_TABLES,
help="pad the cells to equal column width in tables",
)
p.add_argument(
"--no-wrap-links",
dest="wrap_links",
action="store_false",
default=config.WRAP_LINKS,
help="don't wrap links during conversion",
)
p.add_argument(
"--wrap-list-items",
dest="wrap_list_items",
action="store_true",
default=config.WRAP_LIST_ITEMS,
help="wrap list items during conversion",
)
p.add_argument(
"--ignore-emphasis",
dest="ignore_emphasis",
action="store_true",
default=config.IGNORE_EMPHASIS,
help="don't include any formatting for emphasis",
)
p.add_argument(
"--reference-links",
dest="inline_links",
action="store_false",
default=config.INLINE_LINKS,
help="use reference style links instead of inline links",
)
p.add_argument(
"--ignore-links",
dest="ignore_links",
action="store_true",
default=config.IGNORE_ANCHORS,
help="don't include any formatting for links",
)
p.add_argument(
"--protect-links",
dest="protect_links",
action="store_true",
default=config.PROTECT_LINKS,
help="protect links from line breaks surrounding them with angle brackets",
)
p.add_argument(
"--ignore-images",
dest="ignore_images",
action="store_true",
default=config.IGNORE_IMAGES,
help="don't include any formatting for images",
)
p.add_argument(
"--images-as-html",
dest="images_as_html",
action="store_true",
default=config.IMAGES_AS_HTML,
help=(
"Always write image tags as raw html; preserves `height`, `width` and "
"`alt` if possible."
),
)
p.add_argument(
"--images-to-alt",
dest="images_to_alt",
action="store_true",
default=config.IMAGES_TO_ALT,
help="Discard image data, only keep alt text",
)
p.add_argument(
"--images-with-size",
dest="images_with_size",
action="store_true",
default=config.IMAGES_WITH_SIZE,
help=(
"Write image tags with height and width attrs as raw html to retain "
"dimensions"
),
)
p.add_argument(
"-g",
"--google-doc",
action="store_true",
dest="google_doc",
default=False,
help="convert an html-exported Google Document",
)
p.add_argument(
"-d",
"--dash-unordered-list",
action="store_true",
dest="ul_style_dash",
default=False,
help="use a dash rather than a star for unordered list items",
)
p.add_argument(
"-e",
"--asterisk-emphasis",
action="store_true",
dest="em_style_asterisk",
default=False,
help="use an asterisk rather than an underscore for emphasized text",
)
p.add_argument(
"-b",
"--body-width",
dest="body_width",
type=int,
default=config.BODY_WIDTH,
help="number of characters per output line, 0 for no wrap",
)
p.add_argument(
"-i",
"--google-list-indent",
dest="list_indent",
type=int,
default=config.GOOGLE_LIST_INDENT,
help="number of pixels Google indents nested lists",
)
p.add_argument(
"-s",
"--hide-strikethrough",
action="store_true",
dest="hide_strikethrough",
default=False,
help="hide strike-through text. only relevant when -g is " "specified as well",
)
p.add_argument(
"--escape-all",
action="store_true",
dest="escape_snob",
default=False,
help=(
"Escape all special characters. Output is less readable, but avoids "
"corner case formatting issues."
),
)
p.add_argument(
"--bypass-tables",
action="store_true",
dest="bypass_tables",
default=config.BYPASS_TABLES,
help="Format tables in HTML rather than Markdown syntax.",
)
p.add_argument(
"--ignore-tables",
action="store_true",
dest="ignore_tables",
default=config.IGNORE_TABLES,
help="Ignore table-related tags (table, th, td, tr) " "while keeping rows.",
)
p.add_argument(
"--single-line-break",
action="store_true",
dest="single_line_break",
default=config.SINGLE_LINE_BREAK,
help=(
"Use a single line break after a block element rather than two line "
"breaks. NOTE: Requires --body-width=0"
),
)
p.add_argument(
"--unicode-snob",
action="store_true",
dest="unicode_snob",
default=config.UNICODE_SNOB,
help="Use unicode throughout document",
)
p.add_argument(
"--no-automatic-links",
action="store_false",
dest="use_automatic_links",
default=config.USE_AUTOMATIC_LINKS,
help="Do not use automatic links wherever applicable",
)
p.add_argument(
"--no-skip-internal-links",
action="store_false",
dest="skip_internal_links",
default=config.SKIP_INTERNAL_LINKS,
help="Do not skip internal links",
)
p.add_argument(
"--links-after-para",
action="store_true",
dest="links_each_paragraph",
default=config.LINKS_EACH_PARAGRAPH,
help="Put links after each paragraph instead of document",
)
p.add_argument(
"--mark-code",
action="store_true",
dest="mark_code",
default=config.MARK_CODE,
help="Mark program code blocks with [code]...[/code]",
)
p.add_argument(
"--decode-errors",
dest="decode_errors",
default=config.DECODE_ERRORS,
help=(
"What to do in case of decode errors.'ignore', 'strict' and 'replace' are "
"acceptable values"
),
)
p.add_argument(
"--open-quote",
dest="open_quote",
default=config.OPEN_QUOTE,
help="The character used to open quotes",
)
p.add_argument(
"--close-quote",
dest="close_quote",
default=config.CLOSE_QUOTE,
help="The character used to close quotes",
)
p.add_argument(
"--version", action="version", version=".".join(map(str, __version__))
)
p.add_argument("filename", nargs="?")
p.add_argument("encoding", nargs="?", default="utf-8")
args = p.parse_args()
if args.filename and args.filename != "-":
with open(args.filename, "rb") as fp:
data = fp.read()
else:
data = sys.stdin.buffer.read()
try:
data = data.decode(args.encoding, args.decode_errors)
except UnicodeDecodeError as err:
warning = bcolors.WARNING + "Warning:" + bcolors.ENDC
warning += " Use the " + bcolors.OKGREEN
warning += "--decode-errors=ignore" + bcolors.ENDC + " flag."
print(warning)
raise err
h = HTML2Text(baseurl=baseurl)
# handle options
if args.ul_style_dash:
h.ul_item_mark = "-"
if args.em_style_asterisk:
h.emphasis_mark = "*"
h.strong_mark = "__"
h.body_width = args.body_width
h.google_list_indent = args.list_indent
h.ignore_emphasis = args.ignore_emphasis
h.ignore_links = args.ignore_links
h.protect_links = args.protect_links
h.ignore_images = args.ignore_images
h.images_as_html = args.images_as_html
h.images_to_alt = args.images_to_alt
h.images_with_size = args.images_with_size
h.google_doc = args.google_doc
h.hide_strikethrough = args.hide_strikethrough
h.escape_snob = args.escape_snob
h.bypass_tables = args.bypass_tables
h.ignore_tables = args.ignore_tables
h.single_line_break = args.single_line_break
h.inline_links = args.inline_links
h.unicode_snob = args.unicode_snob
h.use_automatic_links = args.use_automatic_links
h.skip_internal_links = args.skip_internal_links
h.links_each_paragraph = args.links_each_paragraph
h.mark_code = args.mark_code
h.wrap_links = args.wrap_links
h.wrap_list_items = args.wrap_list_items
h.pad_tables = args.pad_tables
h.default_image_alt = args.default_image_alt
h.open_quote = args.open_quote
h.close_quote = args.close_quote
sys.stdout.write(h.handle(data))
| 30.123779 | 87 | 0.596453 | 1,105 | 9,248 | 4.78733 | 0.2181 | 0.024953 | 0.074858 | 0.039509 | 0.126465 | 0.086389 | 0.01758 | 0 | 0 | 0 | 0 | 0.006511 | 0.2859 | 9,248 | 306 | 88 | 30.222222 | 0.794518 | 0.001514 | 0 | 0.244068 | 0 | 0 | 0.312717 | 0.007257 | 0 | 0 | 0 | 0 | 0 | 1 | 0.00339 | false | 0.013559 | 0.010169 | 0 | 0.044068 | 0.00339 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
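A sketch of using the html2text API directly, mirroring a few of the CLI flags handled above; the HTML string is a toy example.
from html2text import HTML2Text

h = HTML2Text(baseurl='')
h.body_width = 0          # equivalent of --body-width=0 (no wrapping)
h.inline_links = False    # equivalent of --reference-links
print(h.handle("<p>See <a href='https://example.com'>the docs</a>.</p>"))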
15aa3d879d1c430956af982477b50e91fe1a8bdc | 2,848 | py | Python | handybeam/solver.py | ultraleap/HandyBeam | 9f80b97742cde4b75d3478d554dc9bc2cd9dfd96 | [
"ECL-2.0",
"Apache-2.0"
] | 2 | 2019-10-20T09:15:46.000Z | 2020-12-03T00:31:23.000Z | handybeam/solver.py | ultraleap/HandyBeam | 9f80b97742cde4b75d3478d554dc9bc2cd9dfd96 | [
"ECL-2.0",
"Apache-2.0"
] | 3 | 2020-04-04T18:36:54.000Z | 2021-10-12T22:57:34.000Z | handybeam/solver.py | ultraleap/HandyBeam | 9f80b97742cde4b75d3478d554dc9bc2cd9dfd96 | [
"ECL-2.0",
"Apache-2.0"
] | 5 | 2019-11-29T16:05:26.000Z | 2021-07-01T22:56:39.000Z | """
.. the following is a link to enable linking to this file:
.. solver_:
Contains excitation solvers.
A basic single-point-focus excitation solver is :meth:`handybeam.solver.Solver.single_focus_solver`
"""
# Imports
import warnings
warnings.warn('solver.py is obsolete - use beamformer.py instead')
import handybeam.opencl_wrappers.solver_wrappers as solver_wrappers
# Class
class Solver:
"""" Contains the OpenCL subsystem for single focus solver.
This class calls the OpenCL wrapper for the single focus solver.
"""
def __init__(self, parent=None):
""" Initializes an instance of class Solver.
Parameters
----------
parent : handybeam.world.World()
This is an instance of the handybeam world class.
"""
self.parent = parent
self.solver = solver_wrappers.Solver(parent=self.parent)
def single_focus_solver(self, x_focus, y_focus, z_focus, local_work_size=(1, 1, 1), print_performance_feedback=False):
""" Solve excitation coefficients for a single focal point
This method calls the OpenCL wrapper mixin class single_focus_solver which determines
the set of activation coefficients required to produce a single focal point a given point in space.
Parameters
----------
x_focus : numpy float
This is the x-coordinate of the requested focal point position.
y_focus : numpy float
This is the y-coordinate of the requested focal point position.
z_focus : numpy float
This is the z-coordinate of the requested focal point position.
local_work_size : tuple
Tuple containing the local work sizes for the GPU.
print_performance_feedback : boolean
Boolean value determining whether or not to output the GPU performance statistics.
"""
kernel_output = self.solver.single_focus_solver(
self.parent.tx_array,
x_focus, y_focus, z_focus,
local_work_size=local_work_size,
print_performance_feedback=print_performance_feedback
)
self.parent.tx_array.tx_array_element_descriptor = kernel_output
def set_parent(self, new_parent):
""" changes the parent of an instance of the class Solver.
Parameters
----------
new_parent : handybeam.world.World()
This is an instance of the handybeam world class.
"""
self.parent = new_parent
| 33.505882 | 122 | 0.589537 | 318 | 2,848 | 5.113208 | 0.305031 | 0.04059 | 0.062731 | 0.027675 | 0.252153 | 0.252153 | 0.207872 | 0.130381 | 0.130381 | 0.092251 | 0 | 0.001621 | 0.35007 | 2,848 | 84 | 123 | 33.904762 | 0.876823 | 0.505618 | 0 | 0 | 0 | 0 | 0.043401 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.176471 | false | 0 | 0.117647 | 0 | 0.352941 | 0.117647 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
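A hedged usage sketch for the wrapper above; the world and transducer-array setup calls are assumed from handybeam's examples and the focal coordinates are placeholders, so the lines are left commented.
# import handybeam
# world = handybeam.world.World()                       # assumed setup, see handybeam examples
# world.tx_array = handybeam.tx_array_library.rectilinear(parent=world)  # hypothetical array builder
# solver = Solver(parent=world)
# solver.single_focus_solver(x_focus=0.0, y_focus=0.0, z_focus=0.1)
# # world.tx_array.tx_array_element_descriptor now holds the solved coefficients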
15aaa43978d8eaa0042bd4ae9f02f92ad679604f | 36,550 | py | Python | rest-service/manager_rest/deployment_update/manager.py | cloudify-cosmo/cloudify-manager | 4a3f44ceb49d449bc5ebc8766b1c7b9c174ff972 | [
"Apache-2.0"
] | 124 | 2015-01-22T22:28:37.000Z | 2022-02-26T23:12:06.000Z | rest-service/manager_rest/deployment_update/manager.py | cloudify-cosmo/cloudify-manager | 4a3f44ceb49d449bc5ebc8766b1c7b9c174ff972 | [
"Apache-2.0"
] | 345 | 2015-01-08T15:49:40.000Z | 2022-03-29T08:33:00.000Z | rest-service/manager_rest/deployment_update/manager.py | cloudify-cosmo/cloudify-manager | 4a3f44ceb49d449bc5ebc8766b1c7b9c174ff972 | [
"Apache-2.0"
] | 77 | 2015-01-07T14:04:35.000Z | 2022-03-07T22:46:00.000Z | ########
# Copyright (c) 2017-2019 Cloudify Platform Ltd. All rights reserved
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# * See the License for the specific language governing permissions and
# * limitations under the License.
import copy
import uuid
from datetime import datetime
from flask import current_app
from cloudify.models_states import ExecutionState
from cloudify.utils import extract_and_merge_plugins
from dsl_parser import constants, tasks
from manager_rest import manager_exceptions, workflow_executor
from manager_rest.resource_manager import get_resource_manager
from manager_rest.deployment_update import step_extractor
from manager_rest.deployment_update.utils import extract_ids
from manager_rest.deployment_update.validator import StepValidator
from manager_rest.storage import (get_storage_manager,
models,
get_read_only_storage_manager,
db)
from manager_rest.deployment_update.constants import (
STATES,
ENTITY_TYPES,
NODE_MOD_TYPES,
DEFAULT_DEPLOYMENT_UPDATE_WORKFLOW
)
from manager_rest.deployment_update.handlers import (
DeploymentDependencies,
DeploymentUpdateNodeHandler,
DeploymentUpdateDeploymentHandler,
DeploymentUpdateNodeInstanceHandler)
from manager_rest.utils import get_formatted_timestamp
from manager_rest.rest.rest_utils import (
get_deployment_plan,
get_labels_from_plan,
get_parsed_deployment,
RecursiveDeploymentDependencies,
RecursiveDeploymentLabelsDependencies,
verify_blueprint_uploaded_state,
)
from manager_rest.execution_token import current_execution
class DeploymentUpdateManager(object):
def __init__(self, sm):
self.sm = sm
self._node_handler = DeploymentUpdateNodeHandler(sm)
self._node_instance_handler = DeploymentUpdateNodeInstanceHandler(sm)
self._deployment_handler = DeploymentUpdateDeploymentHandler(sm)
self._deployment_dependency_handler = DeploymentDependencies(sm)
self._step_validator = StepValidator(sm)
def get_deployment_update(self, deployment_update_id, include=None):
return self.sm.get(
models.DeploymentUpdate, deployment_update_id, include=include)
def list_deployment_updates(self,
include=None,
filters=None,
pagination=None,
sort=None,
substr_filters=None):
return self.sm.list(models.DeploymentUpdate,
include=include,
filters=filters,
pagination=pagination,
substr_filters=substr_filters,
sort=sort)
def stage_deployment_update(self,
deployment_id,
app_dir,
app_blueprint,
additional_inputs,
new_blueprint_id=None,
preview=False,
runtime_only_evaluation=False,
auto_correct_types=False,
reevaluate_active_statuses=False):
# validate no active updates are running for a deployment_id
if reevaluate_active_statuses:
self.reevaluate_updates_statuses_per_deployment(deployment_id)
self.validate_no_active_updates_per_deployment(deployment_id)
# enables reverting to original blueprint resources
deployment = self.sm.get(models.Deployment, deployment_id)
old_blueprint = deployment.blueprint
runtime_only_evaluation = (runtime_only_evaluation or
deployment.runtime_only_evaluation)
parsed_deployment = get_parsed_deployment(old_blueprint,
app_dir,
app_blueprint)
# Updating the new inputs with the deployment inputs
# (overriding old values and adding new ones)
old_inputs = copy.deepcopy(deployment.inputs)
new_inputs = {k: old_inputs[k]
for k in parsed_deployment.inputs if k in old_inputs}
new_inputs.update(additional_inputs)
# applying intrinsic functions
plan = get_deployment_plan(parsed_deployment, new_inputs,
runtime_only_evaluation,
auto_correct_types)
deployment_update_id = '{0}-{1}'.format(deployment.id, uuid.uuid4())
deployment_update = models.DeploymentUpdate(
id=deployment_update_id,
deployment_plan=plan,
runtime_only_evaluation=runtime_only_evaluation,
created_at=get_formatted_timestamp()
)
deployment_update.set_deployment(deployment)
deployment_update.preview = preview
deployment_update.old_inputs = old_inputs
deployment_update.new_inputs = new_inputs
if new_blueprint_id:
new_blueprint = self.sm.get(models.Blueprint, new_blueprint_id)
verify_blueprint_uploaded_state(new_blueprint)
deployment_update.old_blueprint = old_blueprint
deployment_update.new_blueprint = new_blueprint
self.sm.put(deployment_update)
return deployment_update
def reevaluate_updates_statuses_per_deployment(self, deployment_id: str):
for active_update in self.list_deployment_updates(
filters={'deployment_id': deployment_id,
'state': [STATES.UPDATING,
STATES.EXECUTING_WORKFLOW,
STATES.FINALIZING]}):
reevaluated_state = _map_execution_to_deployment_update_status(
active_update.execution.status)
if reevaluated_state and active_update.state != reevaluated_state:
current_app.logger.info("Deployment update %s status "
"reevaluation: `%s` -> `%s`",
active_update.id,
active_update.state,
reevaluated_state)
active_update.state = reevaluated_state
self.sm.update(active_update)
def create_deployment_update_step(self,
deployment_update,
action,
entity_type,
entity_id,
topology_order):
step = models.DeploymentUpdateStep(id=str(uuid.uuid4()),
action=action,
entity_type=entity_type,
entity_id=entity_id,
topology_order=topology_order)
step.set_deployment_update(deployment_update)
return self.sm.put(step)
def extract_steps_from_deployment_update(self, deployment_update):
nodes = [node.to_dict() for node in deployment_update.deployment.nodes]
supported_steps, unsupported_steps = step_extractor.extract_steps(
nodes,
deployment_update.deployment,
deployment_update.deployment_plan)
if unsupported_steps:
deployment_update.state = STATES.FAILED
self.sm.update(deployment_update)
unsupported_entity_ids = [step.entity_id
for step in unsupported_steps]
raise manager_exceptions.UnsupportedChangeInDeploymentUpdate(
'The blueprint you provided for the deployment update '
'contains changes currently unsupported by the deployment '
'update mechanism.\n'
'Unsupported changes: {0}'.format('\n'.join(
unsupported_entity_ids)))
for step in supported_steps:
self.create_deployment_update_step(deployment_update,
step.action,
step.entity_type,
step.entity_id,
step.topology_order)
def commit_deployment_update(self,
dep_update,
skip_install=False,
skip_uninstall=False,
skip_reinstall=False,
workflow_id=None,
ignore_failure=False,
install_first=False,
reinstall_list=None,
update_plugins=True,
force=False):
# Mark deployment update as committing
rm = get_resource_manager()
dep_update.keep_old_deployment_dependencies = skip_uninstall
dep_update.state = STATES.UPDATING
self.sm.update(dep_update)
# Handle any deployment related changes. i.e. workflows and deployments
modified_deployment_entities, raw_updated_deployment = \
self._deployment_handler.handle(dep_update)
# Retrieve previous_nodes
previous_nodes = [node.to_dict() for node in self.sm.list(
models.Node, filters={'deployment_id': dep_update.deployment_id},
get_all_results=True
)]
# Update the nodes on the storage
modified_entity_ids, depup_nodes = self._node_handler.handle(
dep_update)
# Extract changes from raw nodes
node_instance_changes = self._extract_changes(dep_update,
depup_nodes,
previous_nodes)
# Create (and update for adding step type) node instances
# according to the changes in raw_nodes
depup_node_instances = self._node_instance_handler.handle(
dep_update, node_instance_changes)
# Calculate which plugins to install and which to uninstall
central_plugins_to_install, central_plugins_to_uninstall = \
self._extract_plugins_changes(dep_update, update_plugins)
# Calculate which deployment schedules need to be added or deleted
schedules_to_create, schedules_to_delete = \
self._extract_schedules_changes(dep_update)
# Saving the needed changes back to the storage manager for future use
# (removing entities).
dep_update.deployment_update_deployment = raw_updated_deployment
dep_update.deployment_update_nodes = depup_nodes
dep_update.deployment_update_node_instances = depup_node_instances
dep_update.modified_entity_ids = modified_entity_ids.to_dict(
include_rel_order=True)
dep_update.central_plugins_to_install = central_plugins_to_install
dep_update.central_plugins_to_uninstall = central_plugins_to_uninstall
deployment = self.sm.get(models.Deployment, dep_update.deployment_id)
labels_to_create = self._get_deployment_labels_to_create(dep_update)
parents_labels = []
if labels_to_create:
parents_labels = rm.get_deployment_parents_from_labels(
labels_to_create
)
dep_graph = RecursiveDeploymentLabelsDependencies(self.sm)
dep_graph.create_dependencies_graph()
rm.verify_attaching_deployment_to_parents(
dep_graph,
parents_labels,
deployment.id
)
self.sm.update(dep_update)
# If this is a preview, no need to run workflow and update DB
if dep_update.preview:
dep_update.state = STATES.PREVIEW
dep_update.id = None
# retrieving recursive dependencies for the updated deployment
dep_graph = RecursiveDeploymentDependencies(self.sm)
dep_graph.create_dependencies_graph()
deployment_dependencies = dep_graph.retrieve_dependent_deployments(
dep_update.deployment_id)
dep_update.set_recursive_dependencies(deployment_dependencies)
dep_update.schedules_to_create = \
self.list_schedules(schedules_to_create)
dep_update.schedules_to_delete = schedules_to_delete
dep_update.labels_to_create = [{'key': label[0], 'value': label[1]}
for label in labels_to_create]
return dep_update
# Handle inter-deployment dependencies changes
self._deployment_dependency_handler.handle(dep_update)
# Update deployment attributes in the storage manager
deployment.inputs = dep_update.new_inputs
deployment.runtime_only_evaluation = dep_update.runtime_only_evaluation
if dep_update.new_blueprint:
deployment.blueprint = dep_update.new_blueprint
deployment.capabilities = \
dep_update.deployment_plan.get('capabilities', {})
self.sm.update(deployment)
# Execute the default 'update' workflow or a custom workflow using
# added and related instances. Any workflow executed should call
# finalize_update, since removing entities should be done after the
# executions.
# The raw_node_instances are being used only for their ids, thus
# they should really hold the finished version for the node instance.
execution = self._execute_update_workflow(
dep_update,
depup_node_instances,
modified_entity_ids.to_dict(),
skip_install=skip_install,
skip_uninstall=skip_uninstall,
skip_reinstall=skip_reinstall,
workflow_id=workflow_id,
ignore_failure=ignore_failure,
install_first=install_first,
reinstall_list=reinstall_list,
central_plugins_to_install=central_plugins_to_install,
central_plugins_to_uninstall=central_plugins_to_uninstall,
update_plugins=update_plugins,
force=force
)
# Update deployment update attributes in the storage manager
dep_update.execution = execution
dep_update.state = STATES.EXECUTING_WORKFLOW
self.sm.update(dep_update)
# First, delete old deployment schedules
for schedule_id in schedules_to_delete:
schedule = self.sm.get(
models.ExecutionSchedule,
None,
filters={'id': schedule_id, 'deployment_id': deployment.id})
self.sm.delete(schedule)
# Then, create new deployment schedules
deployment_creation_time = datetime.strptime(
deployment.created_at.split('.')[0], '%Y-%m-%dT%H:%M:%S'
).replace(second=0)
rm.create_deployment_schedules_from_dict(
schedules_to_create, deployment, deployment_creation_time)
rm.create_resource_labels(
models.DeploymentLabel,
deployment,
labels_to_create
)
if parents_labels:
for parent in parents_labels:
rm.add_deployment_to_labels_graph(
dep_graph,
deployment,
parent
)
return self.get_deployment_update(dep_update.id)
def validate_no_active_updates_per_deployment(self, deployment_id):
existing_updates = self.list_deployment_updates(
filters={'deployment_id': deployment_id}).items
active_updates = [u for u in existing_updates
if u.state not in (STATES.SUCCESSFUL, STATES.FAILED)]
if not active_updates:
return
raise manager_exceptions.ConflictError(
'there are deployment updates still active; update IDs: {0}'
.format(', '.join([u.id for u in active_updates])))
@staticmethod
def list_schedules(schedules_dict):
schedules_list = []
for k, v in schedules_dict.items():
list_item = v
list_item['id'] = k
schedules_list.append(list_item)
return schedules_list
def _extract_changes(self,
dep_update,
raw_nodes,
previous_nodes):
"""Extracts the changes between the current node_instances and
the raw_nodes specified
:param dep_update: deployment update object
:param raw_nodes: node objects from deployment update
:return: a dictionary of modification type and node instanced modified
"""
deployment = self.sm.get(models.Deployment, dep_update.deployment_id)
deployment_id_filter = {'deployment_id': deployment.id}
# By this point the node_instances aren't updated yet
previous_node_instances = [instance.to_dict() for instance in
self.sm.list(models.NodeInstance,
filters=deployment_id_filter,
get_all_results=True)]
# extract all the None relationships from the deployment update nodes
# in order to use in the extract changes
no_none_relationships_nodes = copy.deepcopy(raw_nodes)
for node in no_none_relationships_nodes:
node['relationships'] = [r for r in node['relationships'] if r]
# project changes in deployment
changes = tasks.modify_deployment(
nodes=no_none_relationships_nodes,
previous_nodes=previous_nodes,
previous_node_instances=previous_node_instances,
scaling_groups=deployment.scaling_groups,
modified_nodes=()
)
self._patch_changes_with_relationship_index(
changes[NODE_MOD_TYPES.EXTENDED_AND_RELATED], raw_nodes)
return changes
@staticmethod
def _patch_changes_with_relationship_index(raw_node_instances, raw_nodes):
for raw_node_instance in (i for i in raw_node_instances
if 'modification' in i):
raw_node = next(n for n in raw_nodes
if n['id'] == raw_node_instance['node_id'])
for relationship in raw_node_instance['relationships']:
target_node_id = relationship['target_name']
rel_index = next(i for i, d
in enumerate(raw_node['relationships'])
if d['target_id'] == target_node_id)
relationship['rel_index'] = rel_index
def _validate_reinstall_list(self,
reinstall,
add,
remove,
dep_update):
"""validate node-instances explicitly supplied to reinstall list exist
and are not about to be installed or uninstalled in this update"""
node_instances = self.sm.list(
models.NodeInstance,
filters={'deployment_id': dep_update.deployment_id},
get_all_results=True
)
node_instances_ids = [n.id for n in node_instances]
add_conflict = [n for n in reinstall if n in add]
remove_conflict = [n for n in reinstall if n in remove]
not_existing = [n for n in reinstall if n not in node_instances_ids]
msg = 'Invalid reinstall list supplied.'
if not_existing:
msg += '\nFollowing node instances do not exist in this ' \
'deployment: ' + ', '.join(not_existing)
if add_conflict:
msg += '\nFollowing node instances are just being added in the ' \
'update: ' + ', '.join(add_conflict)
if remove_conflict:
msg += '\nFollowing node instances are just being removed in ' \
'the update: ' + ', '.join(remove_conflict)
if any([not_existing, add_conflict, remove_conflict]):
dep_update.state = STATES.FAILED
self.sm.update(dep_update)
raise manager_exceptions.BadParametersError(msg)
def _update_reinstall_list(self,
reinstall_list,
add_list,
remove_list,
modified_entity_ids,
dep_update,
skip_reinstall):
"""Add nodes that their properties have been updated to the list of
node instances to reinstall, unless skip_reinstall is true"""
reinstall_list = reinstall_list or []
self._validate_reinstall_list(reinstall_list,
add_list,
remove_list,
dep_update)
if skip_reinstall:
return reinstall_list
# get all entities with modifications in properties or operations
for change_type in (ENTITY_TYPES.PROPERTY, ENTITY_TYPES.OPERATION):
for modified in modified_entity_ids[change_type]:
modified = modified.split(':')
# pick only entities that are part of nodes
if modified[0].lower() != 'nodes':
continue
# list instances of each node
node_instances = self.sm.list(
models.NodeInstance,
filters={'deployment_id': dep_update.deployment_id,
'node_id': modified[1]},
get_all_results=True
)
# add instances ids to the reinstall list, if they are not in
# the install/uninstall list
reinstall_list += [e.id for e in node_instances.items
if e.id not in add_list
and e.id not in remove_list]
return reinstall_list
def _execute_update_workflow(self,
dep_update,
node_instances,
modified_entity_ids,
skip_install=False,
skip_uninstall=False,
skip_reinstall=False,
workflow_id=None,
ignore_failure=False,
install_first=False,
reinstall_list=None,
central_plugins_to_install=None,
central_plugins_to_uninstall=None,
update_plugins=True,
force=False):
"""Executed the update workflow or a custom workflow
:param dep_update: deployment update object
:param node_instances: a dictionary of modification type and
add_node.modification instances
:param modified_entity_ids: the entire add_node.modification entities
list (by id)
:param skip_install: if to skip installation of node instances.
:param skip_uninstall: if to skip uninstallation of node instances.
:param skip_reinstall: if to skip reinstallation of node instances.
:param workflow_id: the update workflow id
:param ignore_failure: if to ignore failures.
:param install_first: if to install the node instances before
uninstalling them.
:param reinstall_list: list of node instances to reinstall.
:param central_plugins_to_install: plugins to install that have the
central_deployment_agent as the executor.
:param central_plugins_to_uninstall: plugins to uninstall that have the
central_deployment_agent as the executor.
:param update_plugins: whether or not to perform plugin updates.
:param force: force update (i.e. even if the blueprint is used to
create components).
:return: an Execution object.
"""
added_instances = node_instances[NODE_MOD_TYPES.ADDED_AND_RELATED]
extended_instances = \
node_instances[NODE_MOD_TYPES.EXTENDED_AND_RELATED]
reduced_instances = node_instances[NODE_MOD_TYPES.REDUCED_AND_RELATED]
removed_instances = node_instances[NODE_MOD_TYPES.REMOVED_AND_RELATED]
added_instance_ids = extract_ids(
added_instances.get(NODE_MOD_TYPES.AFFECTED))
removed_instance_ids = extract_ids(
removed_instances.get(NODE_MOD_TYPES.AFFECTED))
reinstall_list = self._update_reinstall_list(reinstall_list,
added_instance_ids,
removed_instance_ids,
modified_entity_ids,
dep_update,
skip_reinstall)
parameters = {
# needed in order to finalize the commit
'update_id': dep_update.id,
# For any added node instance
'added_instance_ids': added_instance_ids,
'added_target_instances_ids':
extract_ids(added_instances.get(NODE_MOD_TYPES.RELATED)),
# encapsulated all the change entity_ids (in a dictionary with
# 'node' and 'relationship' keys.
'modified_entity_ids': modified_entity_ids,
# Any nodes which were extended (positive modification)
'extended_instance_ids':
extract_ids(extended_instances.get(NODE_MOD_TYPES.AFFECTED)),
'extend_target_instance_ids':
extract_ids(extended_instances.get(NODE_MOD_TYPES.RELATED)),
# Any nodes which were reduced (negative modification)
'reduced_instance_ids':
extract_ids(reduced_instances.get(NODE_MOD_TYPES.AFFECTED)),
'reduce_target_instance_ids':
extract_ids(reduced_instances.get(NODE_MOD_TYPES.RELATED)),
# Any nodes which were removed as a whole
'removed_instance_ids': removed_instance_ids,
'remove_target_instance_ids':
extract_ids(removed_instances.get(NODE_MOD_TYPES.RELATED)),
# Whether or not execute install/uninstall/reinstall,
# order of execution, behavior in failure while uninstalling, and
# whether or not to update the plugins.
'skip_install': skip_install,
'skip_uninstall': skip_uninstall,
'ignore_failure': ignore_failure,
'install_first': install_first,
'update_plugins': update_plugins,
# Plugins that are executed by the central deployment agent and
# need to be un/installed
'central_plugins_to_install': central_plugins_to_install,
'central_plugins_to_uninstall': central_plugins_to_uninstall,
# List of node-instances to reinstall
'node_instances_to_reinstall': reinstall_list
}
execution = models.Execution(
workflow_id=workflow_id or DEFAULT_DEPLOYMENT_UPDATE_WORKFLOW,
deployment=dep_update.deployment,
allow_custom_parameters=True,
blueprint_id=dep_update.new_blueprint_id,
parameters=parameters,
status=ExecutionState.PENDING,
)
self.sm.put(execution)
if current_execution and \
current_execution.workflow_id == 'csys_update_deployment':
# if we're created from a update_deployment workflow, join its
# exec-groups, for easy tracking
for exec_group in current_execution.execution_groups:
exec_group.executions.append(execution)
db.session.commit()
messages = get_resource_manager().prepare_executions(
[execution],
allow_overlapping_running_wf=True,
force=force,
)
workflow_executor.execute_workflow(messages)
return execution
def finalize_commit(self, deployment_update_id):
""" finalizes the update process by removing any removed
node/node-instances and updating any reduced node
"""
# mark deployment update as finalizing
dep_update = self.get_deployment_update(deployment_update_id)
dep_update.state = STATES.FINALIZING
self.sm.update(dep_update)
# The order of these matter
self._deployment_handler.finalize(dep_update)
self._node_instance_handler.finalize(dep_update)
self._node_handler.finalize(dep_update)
self._deployment_dependency_handler.finalize(dep_update)
# mark deployment update as successful
dep_update.state = STATES.SUCCESSFUL
self.sm.update(dep_update)
return dep_update
def _extract_plugins_changes(self, dep_update, update_plugins):
"""Extracts plugins that need to be installed or uninstalled.
:param dep_update: a DeploymentUpdate object.
:param update_plugins: whether to update the plugins or not.
:return: plugins that need installation and uninstallation (a tuple).
"""
def get_plugins_to_install(plan, is_old_plan):
return extract_and_merge_plugins(
plan[constants.DEPLOYMENT_PLUGINS_TO_INSTALL],
plan[constants.WORKFLOW_PLUGINS_TO_INSTALL],
filter_func=is_centrally_deployed,
with_repetition=is_old_plan)
def is_centrally_deployed(plugin):
return (plugin[constants.PLUGIN_EXECUTOR_KEY]
== constants.CENTRAL_DEPLOYMENT_AGENT)
def extend_list_from_dict(source_dict, filter_out_dict, target_list):
target_list.extend(
source_dict[k]
for k in source_dict if k not in filter_out_dict)
if not update_plugins:
return [], []
deployment = self.sm.get(models.Deployment, dep_update.deployment_id)
old_plan = deployment.blueprint.plan
new_plan = dep_update.deployment_plan
plugins_to_install_old = get_plugins_to_install(old_plan, True)
plugins_to_install_new = get_plugins_to_install(new_plan, False)
# Convert to plugin_name->plugin dict
new_plugins = {p[constants.PLUGIN_NAME_KEY]: p
for p in plugins_to_install_new}
old_plugins = {p[constants.PLUGIN_NAME_KEY]: p
for p in plugins_to_install_old}
central_plugins_to_install, central_plugins_to_uninstall = [], []
extend_list_from_dict(source_dict=new_plugins,
filter_out_dict=old_plugins,
target_list=central_plugins_to_install)
extend_list_from_dict(source_dict=old_plugins,
filter_out_dict=new_plugins,
target_list=central_plugins_to_uninstall)
# Deal with the intersection between the old and new plugins
intersection = (k for k in new_plugins if k in old_plugins)
for plugin_name in intersection:
old_plugin = old_plugins[plugin_name]
new_plugin = new_plugins[plugin_name]
if new_plugin == old_plugin:
continue
central_plugins_to_install.append(new_plugin)
central_plugins_to_uninstall.append(old_plugin)
return central_plugins_to_install, central_plugins_to_uninstall
def _extract_schedules_changes(self, dep_update):
deployment = self.sm.get(models.Deployment, dep_update.deployment_id)
old_settings = deployment.blueprint.plan.get('deployment_settings')
new_settings = dep_update.deployment_plan.get('deployment_settings')
schedules_to_delete = []
schedules_to_create = {}
if old_settings:
for schedule_id in old_settings.get('default_schedules', {}):
try:
schedule = self.sm.get(
models.ExecutionSchedule,
None,
filters={'id': schedule_id,
'deployment_id': deployment.id})
if schedule.deployment_id == deployment.id:
schedules_to_delete.append(schedule_id)
except manager_exceptions.NotFoundError:
continue
if new_settings:
name_conflict_error_msg = \
'The Blueprint used for the deployment update contains a ' \
'default schedule `{0}`, but a deployment schedule `{0}` ' \
'already exists for the deployment `{1}` . Please either ' \
'delete the existing schedule or fix the blueprint.'
schedules_to_create = new_settings.get('default_schedules', {})
for schedule_id in schedules_to_create:
try:
self.sm.get(models.ExecutionSchedule,
None,
filters={'id': schedule_id,
'deployment_id': deployment.id})
if schedule_id not in schedules_to_delete:
raise manager_exceptions.InvalidBlueprintError(
name_conflict_error_msg.format(schedule_id,
deployment.id))
except manager_exceptions.NotFoundError:
continue
return schedules_to_create, schedules_to_delete
def _get_deployment_labels_to_create(self, dep_update):
deployment = self.sm.get(models.Deployment, dep_update.deployment_id)
new_labels = get_labels_from_plan(dep_update.deployment_plan,
constants.LABELS)
return get_resource_manager().get_labels_to_create(deployment,
new_labels)
def _delete_single_label_from_deployment(self,
label_key,
label_value,
deployment):
dep_label = self.sm.get(
models.DeploymentLabel,
None,
filters={
'_labeled_model_fk': deployment._storage_id,
'key': label_key,
'value': label_value
}
)
self.sm.delete(dep_label)
# What we need to access this manager in Flask
def get_deployment_updates_manager(preview=False):
"""
Get the current app's deployment updates manager, create if necessary
"""
if preview:
return current_app.config.setdefault(
'deployment_updates_preview_manager',
DeploymentUpdateManager(get_read_only_storage_manager())
)
return current_app.config.setdefault(
'deployment_updates_manager',
DeploymentUpdateManager(get_storage_manager())
)
def _map_execution_to_deployment_update_status(execution_status: str) -> str:
if execution_status == ExecutionState.TERMINATED:
return STATES.SUCCESSFUL
if execution_status in [ExecutionState.FAILED,
ExecutionState.CANCELLED,
ExecutionState.CANCELLING,
ExecutionState.FORCE_CANCELLING,
ExecutionState.KILL_CANCELLING]:
return STATES.FAILED
| 46.090794 | 79 | 0.601532 | 3,728 | 36,550 | 5.580204 | 0.125 | 0.03288 | 0.019997 | 0.014373 | 0.281209 | 0.2142 | 0.153199 | 0.127289 | 0.099649 | 0.087439 | 0 | 0.001124 | 0.342627 | 36,550 | 792 | 80 | 46.14899 | 0.864694 | 0.147305 | 0 | 0.153716 | 0 | 0 | 0.050104 | 0.009364 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04223 | false | 0 | 0.030405 | 0.006757 | 0.113176 | 0.033784 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
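A self-contained sketch of the plugin-diff idea used by _extract_plugins_changes above: plugins only in the new plan are installed, plugins only in the old plan are uninstalled, and changed plugins appear in both lists; the dictionaries below are toy stand-ins for real plugin descriptors.
old_plugins = {'aws': {'package_version': '1.0'}, 'utils': {'package_version': '2.0'}}
new_plugins = {'aws': {'package_version': '1.1'}, 'ansible': {'package_version': '3.0'}}
to_install = [new_plugins[k] for k in new_plugins if k not in old_plugins]
to_uninstall = [old_plugins[k] for k in old_plugins if k not in new_plugins]
for name in (k for k in new_plugins if k in old_plugins):
    if new_plugins[name] != old_plugins[name]:           # changed plugin: reinstall it
        to_install.append(new_plugins[name])
        to_uninstall.append(old_plugins[name])
print(to_install)    # -> [{'package_version': '3.0'}, {'package_version': '1.1'}]
print(to_uninstall)  # -> [{'package_version': '2.0'}, {'package_version': '1.0'}]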
15ab520350fbf5d2df3da6de772dd557ce6bea6a | 595 | py | Python | kmeans.py | Tilmanator/Stock-Predictor | c67efa24d8c9ab65aba33280e7fc56a12ded7261 | [
"MIT"
] | null | null | null | kmeans.py | Tilmanator/Stock-Predictor | c67efa24d8c9ab65aba33280e7fc56a12ded7261 | [
"MIT"
] | null | null | null | kmeans.py | Tilmanator/Stock-Predictor | c67efa24d8c9ab65aba33280e7fc56a12ded7261 | [
"MIT"
] | null | null | null | from sklearn.cluster import KMeans
import numpy as np
import matplotlib.pyplot as plt
x = np.array([1,2,3,7,8,9])
y = np.array([7,6,7,3,2,2])
plt.scatter(x, y)
plt.show()
X = np.array(list(zip(x, y)))  # list() is needed on Python 3; zip alone gives a 0-d object array
kmeans = KMeans(n_clusters=2)
kmeans.fit(X)
centroids = kmeans.cluster_centers_
labels = kmeans.labels_
colours = ['g.', 'r.', 'c.', 'y.', 'm.', 'k.' ,'w.', 'go']
for i in range(len(X)):
colour = colours[labels[i]%len(colours)]
plt.plot(X[i][0], X[i][1], colour, markersize=10)
# Display centroids as well
plt.scatter(centroids[:,0], centroids[:,1], marker="x", s = 150, linewidths=4)
plt.show() | 22.037037 | 78 | 0.635294 | 108 | 595 | 3.462963 | 0.5 | 0.05615 | 0.042781 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.044834 | 0.137815 | 595 | 27 | 79 | 22.037037 | 0.684211 | 0.042017 | 0 | 0.111111 | 0 | 0 | 0.033392 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
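A follow-on sketch showing that a fitted estimator can also label unseen points; this standalone version refits the same toy data and calls predict on two new coordinates.
import numpy as np
from sklearn.cluster import KMeans

X = np.column_stack(([1, 2, 3, 7, 8, 9], [7, 6, 7, 3, 2, 2]))
km = KMeans(n_clusters=2, n_init=10).fit(X)
print(km.predict(np.array([[2.0, 6.5], [8.0, 2.5]])))   # one cluster label per query point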
15b0920b82c1636984d52cd2a1a47d2d9c032561 | 14,288 | py | Python | dataproxy/implementations.py | peerplays-network/bos-dataproxy | ff19ce97981a10d8ff8d6ad3ed6afe7b4cdd42fc | [
"MIT"
] | 6 | 2019-12-05T18:37:33.000Z | 2019-12-20T17:58:32.000Z | dataproxy/implementations.py | peerplays-network/bos-dataproxy | ff19ce97981a10d8ff8d6ad3ed6afe7b4cdd42fc | [
"MIT"
] | 2 | 2019-08-06T10:40:45.000Z | 2020-02-21T14:14:12.000Z | dataproxy/implementations.py | peerplays-network/bos-dataproxy | ff19ce97981a10d8ff8d6ad3ed6afe7b4cdd42fc | [
"MIT"
] | 1 | 2019-07-01T13:25:15.000Z | 2019-07-01T13:25:15.000Z | import logging
from .processors import JsonProcessor
from . import Config
from .stores import IncidentFileStore, RawStore, ProcessedFileStore
from .routes.push import PushReceiver
import json
from bos_incidents.exceptions import DuplicateIncidentException
import threading
import time
from strict_rfc3339 import InvalidRFC3339Error
from . import utils
from .utils import slugify
from datetime import timedelta
from bos_incidents.format import string_to_incident
from bos_incidents import factory
from dataproxy.utils import CommonFormat
incidents_storage = factory.get_incident_storage()
def _send_to_witness(processor, incident, targets=None):
try:
initial_delay = Config.get("subscriptions",
"delay_before_initial_sending_in_seconds",
incident["call"],
0)
if initial_delay > 0:
logging.getLogger(__name__).info("Incident " + incident["unique_string"] + ": Waiting before sending " + incident["call"])
time.sleep(initial_delay)
logging.getLogger(__name__).info("Incident " + incident["unique_string"] + ": Sending result now")
PushReceiver.subscribed_witnesses_status = processor.send_to_witness(
incident,
targets=targets
)
received_witnesses = len([key for key, value in PushReceiver.subscribed_witnesses_status.items() if value == "ok"])
logging.getLogger(__name__).debug("Incident " + incident["unique_string"] + ": Successfully sent to " + str(received_witnesses) + " witnesses")
return received_witnesses
except Exception as e:
logging.getLogger(__name__).info("Incident " + incident["unique_string"] + ": PUSH to witness failed, continueing anyways, exception below")
logging.getLogger(__name__).exception(e)
def _send_list_to_witness(processor, incident_list, targets=None, async_queue=True):
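    # Pushes every incident in the list to the given witnesses; with async_queue each send runs in its own thread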
for incident in incident_list:
logging.getLogger(__name__).info("Trigger sending " + incident["unique_string"])
if async_queue:
# send to witnesses
thr = threading.Thread(target=_send_to_witness,
args=(processor, incident, targets,))
thr.start() # we dont care when it finishes
else:
_send_to_witness(processor, incident, targets=targets)
def process_content(provider_name,
processor,
processed_store,
incident_store,
file_content,
file_ending,
restrict_witness_group=None, # deprecated
async_queue=True,
target=None):
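    # Store the payload if it is of interest, turn it into incidents, persist the new ones and push them to the subscribed witnesses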
file_name = None
if restrict_witness_group is None and target is not None:
restrict_witness_group = target
# before storing, check if its worth processing
is_interesting = file_content is not None
if is_interesting and processor:
is_interesting = processor.source_of_interest(
file_content
)
incidents = []
do_not_send_to_witness = True
if is_interesting:
# store found file again
file_name = processed_store.save(
provider_name,
file_content,
file_ext=file_ending)
try:
# process content (should be asynchronous)
if processor:
for incident in processor.process(file_content):
logging.getLogger(__name__ + "_" + provider_name).debug("Postprocessing " + incident["unique_string"])
incident["provider_info"]["source_file"] = file_name
incidents.append(incident)
# only send if its a new incident
logging.getLogger(__name__ + "_" + provider_name).debug(" ... exists")
do_not_send_to_witness = incident_store.exists(
provider_name,
file_ext=".json",
file_name=incident["unique_string"])
if not do_not_send_to_witness:
logging.getLogger(__name__ + "_" + provider_name).debug(" ... save in incidents folder")
# save locally
incident_file = incident_store.save(
provider_name,
json.dumps(incident),
file_ext=".json",
file_name=incident["unique_string"])
try:
logging.getLogger(__name__ + "_" + provider_name).debug(" ... save in incidents database")
incidents_storage.insert_incident(incident)
except DuplicateIncidentException:
pass
except Exception as e:
logging.getLogger(__name__ + "_" + provider_name).info(provider_name + ": INSERT INTO stats failed, continueing anyways, incident file is " + incident_file + ", exception below")
logging.getLogger(__name__ + "_" + provider_name).exception(e)
incident.pop("_id", None)
try:
logging.getLogger(__name__ + "_" + provider_name).debug(" ... sending to witnesses (" + str(restrict_witness_group) + ", async_queue=" + str(async_queue) + ")")
if async_queue:
# send to witnesses
thr = threading.Thread(target=_send_to_witness,
args=(processor, incident, _find_targets(restrict_witness_group),))
thr.start() # we dont care when it finishes
else:
_send_to_witness(processor, incident, targets=_find_targets(restrict_witness_group))
except Exception as e:
logging.getLogger(__name__ + "_" + provider_name).info(provider_name + ": PUSH to witness failed, continueing anyways, incident file is " + incident_file + ", exception below")
logging.getLogger(__name__ + "_" + provider_name).exception(e)
except Exception as e:
logging.getLogger(__name__ + "_" + provider_name).info(provider_name + ": Processing failed, continueing anyways. Source file is " + file_name + ", exception below")
logging.getLogger(__name__ + "_" + provider_name).exception(e)
return {
"file_name": file_name,
"amount_incidents": len(incidents),
"incidents": incidents,
"do_not_send_to_witness": do_not_send_to_witness,
"is_interesting": is_interesting
}
def _find_targets(target):
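    # Match witnesses from the subscriptions config by group, URL or name; with no target every configured witness is returned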
matched = []
for witness in Config.get("subscriptions", "witnesses"):
if target is not None:
if target == witness.get("group", None):
matched.append(witness)
elif target == witness["url"] or target == witness.get("name", None):
matched.append(witness)
else:
matched.append(witness)
return matched
def replay(restrict_witness_group=None,
providers=None,
received=None,
processor=None,
name_filter=None,
incidents=None,
async_execution=None,
async_queue=None,
only_report=None,
target=None):
if name_filter is None and (incidents is None or incidents == []):
report = {"name_filter": "Name filter must not be empty"}
return report
if async_execution is None:
async_execution = False
if only_report is None:
only_report = False
logging.getLogger(__name__).info("Replay: Collecting configuration ...")
replay_stats = {}
replay_stats["async_execution"] = async_execution
replay_stats["target"] = target
if providers is None:
providers = list(Config.get("providers").keys())
if type(providers) == str:
providers = [providers]
replay_stats["providers"] = providers
if processor is None:
processor = JsonProcessor()
replay_stats["processor"] = processor.__class__.__name__
if restrict_witness_group is not None:
target = restrict_witness_group
matched_targets = _find_targets(target)
replay_stats["matched_targets"] = len(matched_targets)
if len(matched_targets) == 0:
logging.getLogger(__name__).info("Replay: No matched witnesses found for target " + target)
return replay_stats
if incidents is None:
incidents = []
if type(name_filter) == str:
name_filter = name_filter.split(",")
if name_filter is not None:
offset_left = 3
offset_right = 3
match_date = None
for tmp in name_filter:
tmp = slugify(tmp)
try:
match_date = utils.string_to_date(tmp[0:20])
break
except InvalidRFC3339Error:
pass
try:
match_date = utils.string_to_date(tmp[0:8])
break
except InvalidRFC3339Error:
pass
try:
match_date = utils.string_to_date(tmp[0:10])
break
except InvalidRFC3339Error:
pass
if "create" in name_filter:
offset_left = 28
if match_date and received is None:
received = []
for i in range(-offset_left, offset_right):
_date = utils.date_to_string(match_date + timedelta(days=i))
received.append(_date[0:4] + _date[5:7] + _date[8:10])
folder_filter = []
for provider in providers:
folder_filter.append(provider)
if received is None:
received = ["20181", "2019"]
if type(received) == str:
received = [received]
for tmp in received:
folder_filter.append(tmp)
replay_stats["folder_filter"] = folder_filter
replay_stats["name_filter"] = name_filter
logging.getLogger(__name__).info("Replay: Finding all incidents in file dump with configuration " + str(replay_stats))
for incident in processor.process_generic(
folder="dump/d_incidents",
folder_filter=folder_filter,
name_filter=name_filter):
incidents.append(incident)
if len(received) == 2:
logging.getLogger(__name__).info("Replay: Querying local database for incidents")
regex_filter = ".*".join(name_filter) + ".*"
# Only prepend ".*" if there expected to be anything beforehand
if not regex_filter.startswith("201"):
# cover all years 2010-2029
regex_filter = ".*" + regex_filter
try:
#if len(received) == 1:
# if len(received[0]) == 8:
# _from = datetime(received[0][0:4], received[0][4:6], received[0][6:8], 0, 0, tzinfo=tzutc())
# _till = datetime(received[0][0:4], received[0][4:6], received[0][6:8], 23, 59, tzinfo=tzutc())
# elif len(received[0]) == 6:
# _from = datetime(received[0][0:4], received[0][4:6], 1, 0, 0, tzinfo=tzutc())
# _till = datetime(received[0][0:4], received[0][4:6], 28, 23, 59, tzinfo=tzutc())
#else:
# _from = None
for incident in incidents_storage.get_incidents(
dict(
unique_string={"$regex": regex_filter, "$options": "i"}#,
#timestamp={"$lt": float(_till.timestamp()), "$gt": float(_from.timestamp())}
)
):
# don't add duplicates
if incident["provider_info"]["name"] + "-" + incident["unique_string"] not in [x["provider_info"]["name"] + "-" + x["unique_string"] for x in incidents]:
incidents.append(incident)
except Exception as e:
logging.getLogger(__name__).warning("MongoDB not reachable, continueing anyways" + str(e))
pass
else:
if type(incidents) == str:
incidents = [incidents]
if type(incidents) == list and len(incidents) > 0 and type(incidents[0]) == str:
manufactured = []
for item in incidents:
for provider in providers:
manufactured.append(string_to_incident(item, provider_info=provider))
incidents = manufactured
replay_stats["amount_incidents"] = len(incidents)
incident_ids = []
for incident in incidents:
incident_ids.append(incident["unique_string"])
if replay_stats["amount_incidents"] == 1:
replay_stats["incidents"] = incidents
else:
replay_stats["incidents"] = incident_ids
logging.getLogger(__name__).info("Found " + str(len(incident_ids)) + " incidents.")
if not only_report:
sorted_list = sorted(incidents, key=lambda k: k['provider_info']['pushed'])
logging.getLogger(__name__).info("Replay: Sorted " + str(len(sorted_list)) + " incidents ...")
if async_execution:
# send to witnesses
thr = threading.Thread(target=_send_list_to_witness,
args=(processor, sorted_list, matched_targets, async_queue))
thr.start() # we dont care when it finishes
replay_stats["incidents_sent"] = True
else:
number = _send_list_to_witness(processor,
sorted_list,
targets=matched_targets,
async_queue=async_queue)
replay_stats["incidents_sent"] = number
else:
replay_stats["incidents_sent"] = False
return replay_stats
| 42.778443 | 206 | 0.569219 | 1,439 | 14,288 | 5.366921 | 0.166782 | 0.049722 | 0.062152 | 0.039881 | 0.300401 | 0.240839 | 0.213 | 0.200181 | 0.160171 | 0.135181 | 0 | 0.012278 | 0.338746 | 14,288 | 333 | 207 | 42.906907 | 0.805144 | 0.068239 | 0 | 0.241509 | 0 | 0 | 0.114557 | 0.004591 | 0 | 0 | 0 | 0 | 0 | 1 | 0.018868 | false | 0.018868 | 0.060377 | 0 | 0.101887 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
15b670262fd9d1d5bb5bbf2c9ec8c0581b614760 | 4,418 | py | Python | main.py | kantegory/accreditation_system | d5595b265788cf9b63431650d2911695ad737038 | [
"MIT"
] | null | null | null | main.py | kantegory/accreditation_system | d5595b265788cf9b63431650d2911695ad737038 | [
"MIT"
] | 4 | 2020-04-16T17:52:37.000Z | 2021-12-13T20:35:51.000Z | main.py | kantegory/accreditation_system | d5595b265788cf9b63431650d2911695ad737038 | [
"MIT"
] | 1 | 2020-04-07T06:18:40.000Z | 2020-04-07T06:18:40.000Z | import bottle
from bottle import request, route, template, auth_basic, redirect
from utils.db_helper import create_blank
from utils.db_manage import get_all_blanks, get_all_questions_by_token, \
get_blank_id_by_token, add_new_users_answers, \
get_blank_info_by_token, get_report_by_token, get_all_standards_by_token, \
get_all_users_by_token, mark_competences_as_used, get_competences_by_token, \
change_blank_state_by_token, get_user_answers_by_user_id, \
change_user_state_by_user_id, get_user_state_by_user_id
import json
from utils.config import *
from utils.notify import send_email
from utils.analysis import get_analysis_by_blank_id
import pathlib
def check(user, password):
return user == CONFIG["ADMIN_LOGIN"] and password == CONFIG["ADMIN_PASSWORD"]
@route('/admin')
@auth_basic(check)
def get_admin_page():
blanks = get_all_blanks()
moderating_blanks = get_all_blanks('moderating')
tokens = [blank['token'] for blank in blanks]
standards = [
{
'standards': get_all_standards_by_token(token),
'token': token
}
for token in tokens
]
return template('{}/assets/admin.tpl'.format(_path), blanks=blanks, moderatingBlanks=moderating_blanks, standards=standards)
@route('/admin/new_blank', method="POST")
def create_new_blank():
data = {
'forms': request.forms,
'files': request.files
}
token = create_blank(data)
redirect('/admin/blank/{}'.format(token))
@route('/admin/blank/<token>')
@auth_basic(check)
def get_admin_blank_page(token):
blank = get_blank_info_by_token(token)
questions = get_all_questions_by_token(token)
standards = get_all_standards_by_token(token)
hostname = CONFIG["HOSTNAME"]
port = CONFIG["PORT"]
return template('{}/assets/blank.tpl'.format(_path), questions=questions, blank=blank, token=token, standards=standards, hostname=hostname, port=port)
@route('/admin/save_blank/<token>', method="POST")
@auth_basic(check)
def save_blank(token):
questions = request.body.read().decode('utf-8')
questions = json.loads(questions)
questions = [json.loads(question) for question in questions]
mark_competences_as_used(questions, token)
@route('/admin/send_email/<token>')
@auth_basic(check)
def send_notification(token):
recievers = get_all_users_by_token(token)
send_email(recievers, token)
change_blank_state_by_token("sent", token)
redirect('/admin/blank/{}'.format(token))
@route('/admin/report/<token>')
@auth_basic(check)
def get_admin_report_page(token):
reports = get_report_by_token(token)
blank = get_blank_info_by_token(token)
blank_id = blank["id"]
users = get_all_users_by_token(token)
questions_amount = len(get_competences_by_token(token))
user_stat = [
{
'user_id': user['user_id'],
'user_email': user['user_email'],
'stat': int(len(get_user_answers_by_user_id(user['user_id'])) / questions_amount * 100)
}
for user in users
]
analysis = get_analysis_by_blank_id(blank_id)
return template('{}/assets/report.tpl'.format(_path), reports=reports, blank=blank, users=users, user_stat=user_stat, analysis=analysis)
@route('/quiz/<token>/<user_id>')
def get_quiz_page(token, user_id):
questions = get_competences_by_token(token)
blank_info = get_blank_info_by_token(token)
state = get_user_state_by_user_id(user_id)
return template('{}/assets/quiz.tpl'.format(_path), questions=questions, token=token, user_id=user_id, blank=blank_info, state=state)
@route('/quiz/<token>/<user_id>', method='POST')
def write_new_user_answers(token, user_id):
blank_id = get_blank_id_by_token(token)
answers = request.body.read().decode('utf-8')
answers = json.loads(answers)
answers = [json.loads(answer) for answer in answers]
add_new_users_answers(answers, blank_id, user_id)
questions_amount = len(get_competences_by_token(token))
user_answers_amount = len(get_user_answers_by_user_id(user_id))
stat = user_answers_amount // questions_amount
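    # Integer division yields 1 only once the user has answered every question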
if stat == 1:
change_user_state_by_user_id("finished", user_id)
def main(_host="localhost"):
app = bottle.app()
bottle.run(app=app, host=_host, port=CONFIG["PORT"])
if __name__ == "__main__":
_path = pathlib.Path().absolute()
main(CONFIG["HOSTNAME"])
| 29.851351 | 154 | 0.717972 | 613 | 4,418 | 4.805873 | 0.158238 | 0.052274 | 0.052953 | 0.028853 | 0.391718 | 0.251527 | 0.14664 | 0.075356 | 0.032587 | 0 | 0 | 0.001617 | 0.160254 | 4,418 | 147 | 155 | 30.054422 | 0.792453 | 0 | 0 | 0.106796 | 0 | 0 | 0.102082 | 0.026483 | 0 | 0 | 0 | 0 | 0 | 1 | 0.097087 | false | 0.019417 | 0.087379 | 0.009709 | 0.23301 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
15b7a264d607ff095bc9d427c0c9c5d6ae9bc122 | 1,063 | py | Python | ImageProcessing-Python/blog12-warpAffine/blog12-image03.py | Songner/image_classfication | c1f15b2b96544e859e14a92373eb57c6a2644a93 | [
"MIT"
] | null | null | null | ImageProcessing-Python/blog12-warpAffine/blog12-image03.py | Songner/image_classfication | c1f15b2b96544e859e14a92373eb57c6a2644a93 | [
"MIT"
] | null | null | null | ImageProcessing-Python/blog12-warpAffine/blog12-image03.py | Songner/image_classfication | c1f15b2b96544e859e14a92373eb57c6a2644a93 | [
"MIT"
] | null | null | null | #encoding:utf-8
import cv2
import numpy as np
import matplotlib.pyplot as plt
# Read the image
src = cv2.imread('test01.jpg')
# Get the image size
rows, cols = src.shape[:2]
# Apply Gaussian blur to the source image
img = cv2.GaussianBlur(src, (3,3), 0)
# Convert to grayscale
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# Edge detection (extract the image's edge information)
edges = cv2.Canny(gray,50,250,apertureSize = 3)
cv2.imwrite("canny.jpg", edges)
# Obtain the edges of the A4 sheet of paper via the Hough transform
lines = cv2.HoughLinesP(edges,1,np.pi/180,50,minLineLength=90,maxLineGap=10)
# The four points printed below are the four vertices
for x1, y1, x2, y2 in lines[0]:
    print((x1, y1), (x2, y2))
for x1, y1, x2, y2 in lines[1]:
    print((x1, y1), (x2, y2))
# Draw the edge lines
for x1, y1, x2, y2 in lines[0]:
    cv2.line(gray, (x1, y1), (x2, y2), (0, 0, 255), 1)
# Set up the image perspective transform matrix from the four vertices
pos1 = np.float32([[114, 82], [287, 156], [8, 322], [216, 333]])
pos2 = np.float32([[0, 0], [188, 0], [0, 262], [188, 262]])
M = cv2.getPerspectiveTransform(pos1, pos2)
# Apply the perspective transform to the image
result = cv2.warpPerspective(src, M, (190, 272))
# Display the images
cv2.imshow("original", src)
cv2.imshow("result", result)
# Wait for a key press
cv2.waitKey(0)
cv2.destroyAllWindows()
| 21.693878 | 77 | 0.650988 | 169 | 1,063 | 4.088757 | 0.508876 | 0.034732 | 0.052098 | 0.069465 | 0.118669 | 0.081042 | 0.081042 | 0.054993 | 0 | 0 | 0 | 0.140607 | 0.163688 | 1,063 | 48 | 78 | 22.145833 | 0.63667 | 0.110066 | 0 | 0.166667 | 0 | 0 | 0.037288 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.125 | 0 | 0.125 | 0.083333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
15b7b434f546b16436f33e7ab0369ff24dbf0a48 | 846 | py | Python | v2e/run_v2e.py | ibugueno/UOH-EventCameras | 1941e90499955e199be33c706547b25ba6856eb4 | [
"MIT"
] | 1 | 2021-07-06T08:53:26.000Z | 2021-07-06T08:53:26.000Z | v2e/run_v2e.py | ibugueno/UOH-EventCameras | 1941e90499955e199be33c706547b25ba6856eb4 | [
"MIT"
] | null | null | null | v2e/run_v2e.py | ibugueno/UOH-EventCameras | 1941e90499955e199be33c706547b25ba6856eb4 | [
"MIT"
] | 1 | 2021-07-06T08:53:26.000Z | 2021-07-06T08:53:26.000Z | from os import listdir
from os.path import isfile, join
from videoprops import get_video_properties
import math
import subprocess
import time
input_path = 'input/Test_Set/'
output_path = 'output/Test_Set/'
files = [f for f in listdir(input_path) if isfile(join(input_path, f))]
for file in files:
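    # Probe each video's resolution, rescale it to a width of 346 px (the DAVIS346 sensor width) while preserving aspect ratio, then invoke run_v2e.sh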
    input_full_path = input_path + file
    output_full_path = output_path + file[:-4]
    props = get_video_properties(input_full_path)
    print(f'''Resolution: {props['width']}×{props['height']}''')
    new_w = 346
    new_h = math.ceil((new_w * props['height'])/props['width'])
    print(f'''New resolution: {new_w}×{new_h}''')
    start_time = time.time()
    subprocess.call('./run_v2e.sh ' + input_full_path + ' ' + output_full_path + ' ' + str(new_w) + ' ' + str(new_h), shell=True)
    print("--- %s seconds ---" % (time.time() - start_time))
    print('\n')
| 24.171429 | 126 | 0.689125 | 132 | 846 | 4.189394 | 0.363636 | 0.072333 | 0.070524 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006906 | 0.144208 | 846 | 34 | 127 | 24.882353 | 0.754144 | 0 | 0 | 0 | 0 | 0 | 0.183432 | 0.040237 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.285714 | 0 | 0.285714 | 0.190476 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
15b9e21e51e269288bc47d66331e512cd62e6c38 | 610 | py | Python | simulated_experiments/create_config.py | elerson/NetworkedRobotsProject | d2b9d0a121c5421a40e806a93e6a8f1315f24441 | [
"MIT"
] | null | null | null | simulated_experiments/create_config.py | elerson/NetworkedRobotsProject | d2b9d0a121c5421a40e806a93e6a8f1315f24441 | [
"MIT"
] | null | null | null | simulated_experiments/create_config.py | elerson/NetworkedRobotsProject | d2b9d0a121c5421a40e806a93e6a8f1315f24441 | [
"MIT"
] | null | null | null | #!/usr/bin/python
config_exp = '''
robots:
1:
id: 1
ip: 192.168.0.1
macaddress: ec:08:6b:0d:68:ef
cable_ip: 150.164.212.43
configs:
map: DIR_/map/ambiente.png
treefile: DIR_/steinerData1.dat
resolution: 0.05
exit: 1
simulation: 1
broadcast_address: 127.255.255.255
algorithm_port: 39988
configuration_port: 46544
'''
import sys
import os
import math
class Create():
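    # Writes config_sim.yaml with the DIR_ placeholder replaced by the given experiment directory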
    def create(self, dir_):
        with open("config_sim.yaml", 'w') as f:
            f.write(config_exp.replace('DIR_', dir_))
if __name__ == "__main__":
    dir_ = sys.argv[1]
    exp = Create()
    exp.create(dir_)
| 16.052632 | 47 | 0.659016 | 95 | 610 | 4 | 0.694737 | 0.047368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114519 | 0.198361 | 610 | 37 | 48 | 16.486486 | 0.662577 | 0.02623 | 0 | 0 | 0 | 0 | 0.576729 | 0.070826 | 0 | 0 | 0 | 0 | 0 | 1 | 0.035714 | false | 0 | 0.107143 | 0 | 0.178571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
15bb3cbfcfa07f95701ecd2783316238c64ed5b5 | 3,960 | py | Python | firebirdsql/tests/test_proc.py | dand-oss/pyfirebirdsql | 1b8148f8937929cdd74774fef2611dd55ea6a757 | [
"BSD-2-Clause"
] | 31 | 2015-03-28T09:43:53.000Z | 2022-02-27T18:20:06.000Z | firebirdsql/tests/test_proc.py | dand-oss/pyfirebirdsql | 1b8148f8937929cdd74774fef2611dd55ea6a757 | [
"BSD-2-Clause"
] | 24 | 2015-01-16T03:00:33.000Z | 2022-02-08T00:06:05.000Z | firebirdsql/tests/test_proc.py | dand-oss/pyfirebirdsql | 1b8148f8937929cdd74774fef2611dd55ea6a757 | [
"BSD-2-Clause"
] | 21 | 2015-01-15T23:00:26.000Z | 2020-11-04T08:30:13.000Z | from __future__ import with_statement
import datetime
import firebirdsql
from firebirdsql.tests.base import * # noqa
from firebirdsql.consts import * # noqa
class TestProc(TestBase):
def setUp(self):
TestBase.setUp(self)
cur = self.connection.cursor()
cur.execute('''
CREATE TABLE foo_table (
a INTEGER NOT NULL,
b VARCHAR(30) NOT NULL UNIQUE,
c VARCHAR(1024),
d DECIMAL(16,3) DEFAULT -0.123,
e DATE DEFAULT '1967-08-11',
f TIMESTAMP DEFAULT '1967-08-11 23:45:01',
g TIME DEFAULT '23:45:01',
h BLOB SUB_TYPE 1,
i DOUBLE PRECISION DEFAULT 0.0,
j FLOAT DEFAULT 0.0,
PRIMARY KEY (a),
CONSTRAINT CHECK_A CHECK (a <> 0)
)
''')
cur.execute('''
CREATE PROCEDURE foo_proc
RETURNS (out1 INTEGER, out2 VARCHAR(30))
AS
BEGIN
out1 = 1;
out2 = 'ABC';
END
''')
cur.execute('''
CREATE PROCEDURE bar_proc (param_a INTEGER, param_b VARCHAR(30))
RETURNS (out1 INTEGER, out2 VARCHAR(30))
AS
BEGIN
out1 = param_a;
out2 = param_b;
END
''')
cur.execute('''
CREATE PROCEDURE baz_proc(param_a INTEGER)
RETURNS (out1 INTEGER, out2 VARCHAR(30))
AS
BEGIN
SELECT a, b FROM foo_table
WHERE a= :param_a
INTO :out1, :out2;
SUSPEND;
END
''')
self.connection.commit()
# 3 records insert
cur.execute("""
insert into foo_table(a, b, c,h)
values (1, 'a', 'b','This is a memo')""")
cur.execute("""
insert into foo_table(a, b, c, e, g, i, j)
values (2, 'A', 'B', '1999-01-25', '00:00:01', 0.1, 0.1)""")
cur.execute("""
insert into foo_table(a, b, c, e, g, i, j)
values (3, 'X', 'Y', '2001-07-05', '00:01:02', 0.2, 0.2)""")
self.connection.commit()
def test_call_proc(self):
cur = self.connection.cursor()
r = cur.callproc("foo_proc")
self.assertEqual(cur.fetchone(), r)
cur.close()
cur = self.connection.cursor()
try:
rs = cur.execute("select out1, out2 from foo_proc")
if rs is None:
# foo_proc not selectable with Firebird 1.5
pass
else:
pass
except firebirdsql.OperationalError:
# foo_proc not selectable with Firebird 2.x
pass
finally:
cur.close()
cur = self.connection.cursor()
cur.callproc("bar_proc", (1, "ABC"))
rs = cur.fetchallmap()
self.assertEqual(len(rs), 1)
self.assertEqual(rs[0]['OUT1'], 1)
self.assertEqual(rs[0]['OUT2'], 'ABC')
cur.close()
cur = self.connection.cursor()
cur.execute("select out1, out2 from baz_proc(?)", (1, ))
rs = cur.fetchall()
self.assertEqual(len(rs), 1)
self.assertEqual((1, 'a'), rs[0])
cur.close()
def test_insert_returning(self):
cur = self.connection.cursor()
cur.execute("insert into foo_table(a, b) values (4, 'b') returning e")
self.assertEqual(cur.rowcount, 1)
self.assertEqual(cur.fetchone()[0], datetime.date(1967, 8, 11))
cur.close()
def test_prep_insert_returning(self):
cur = self.connection.cursor()
prep = cur.prep("insert into foo_table(a, b) values (?, 'b') returning e")
cur.execute(prep, (5, ))
self.assertEqual(cur.fetchone()[0], datetime.date(1967, 8, 11))
cur.close()
| 33.277311 | 82 | 0.49798 | 469 | 3,960 | 4.127932 | 0.270789 | 0.056818 | 0.061467 | 0.083161 | 0.47469 | 0.420455 | 0.344008 | 0.192665 | 0.157541 | 0.098141 | 0 | 0.059015 | 0.379545 | 3,960 | 118 | 83 | 33.559322 | 0.728938 | 0.027778 | 0 | 0.443396 | 0 | 0.018868 | 0.50026 | 0 | 0 | 0 | 0 | 0 | 0.084906 | 1 | 0.037736 | false | 0.028302 | 0.04717 | 0 | 0.09434 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
15bcb236ed8a4476a796d56e5b8eb69e73ae26a3 | 2,630 | py | Python | backend/assignment/views/student.py | skku-npc/SKKU_Coding_Platform | 1d972e8922484cf94f6735fd08b2565e5d3517d0 | [
"MIT"
] | 1 | 2022-03-30T14:03:23.000Z | 2022-03-30T14:03:23.000Z | backend/assignment/views/student.py | skku-npc/SKKU_Coding_Platform | 1d972e8922484cf94f6735fd08b2565e5d3517d0 | [
"MIT"
] | 56 | 2022-02-19T08:13:48.000Z | 2022-03-25T10:17:07.000Z | backend/assignment/views/student.py | skku-npc/SKKU_Coding_Platform | 1d972e8922484cf94f6735fd08b2565e5d3517d0 | [
"MIT"
] | 1 | 2022-03-25T15:02:46.000Z | 2022-03-25T15:02:46.000Z | from utils.api import APIView
from utils.decorators import login_required
from course.models import Course, Registration
from ..models import Assignment
from ..serializers import AssignmentSerializer
from drf_yasg.utils import swagger_auto_schema
from drf_yasg import openapi
class AssignmentAPI(APIView):
@swagger_auto_schema(
manual_parameters=[
openapi.Parameter(
name="course_id",
in_=openapi.IN_QUERY,
description="Unique ID of a course",
required=True,
type=openapi.TYPE_INTEGER,
),
openapi.Parameter(
name="assignment_id",
in_=openapi.IN_QUERY,
description="Unique ID of a assignment",
type=openapi.TYPE_INTEGER,
),
openapi.Parameter(
name="limit",
in_=openapi.IN_QUERY,
description="Number of assignments to show",
type=openapi.TYPE_STRING,
default=10,
),
openapi.Parameter(
name="offset",
in_=openapi.IN_QUERY,
description="ID of the first assignment of list",
type=openapi.TYPE_STRING,
default=0,
),
],
operation_description="Get assignment list of the course",
responses={200: AssignmentSerializer},
)
@login_required
def get(self, request):
assignment_id = request.GET.get("assignment_id")
course_id = request.GET.get("course_id")
if not course_id:
return self.error("Invalid parameter, course_id is required")
try:
Course.objects.get(id=course_id)
Registration.objects.get(user_id=request.user.id, course_id=course_id)
except Course.DoesNotExist:
return self.error("Course does not exist")
except Registration.DoesNotExist:
return self.error("Invalid access, not registered user")
context = {"request": request}
if assignment_id:
try:
assignment = Assignment.objects.get(id=assignment_id, course_id=course_id, visible=True)
return self.success(AssignmentSerializer(assignment, context=context).data)
except Assignment.DoesNotExist:
return self.error("Assignment does not exists")
assignments = Assignment.objects.filter(course_id=course_id, visible=True)
return self.success(self.paginate_data(request, assignments, AssignmentSerializer, context))
| 36.027397 | 104 | 0.605323 | 270 | 2,630 | 5.744444 | 0.288889 | 0.061896 | 0.045132 | 0.041264 | 0.246293 | 0.162476 | 0.162476 | 0.108317 | 0.108317 | 0.05158 | 0 | 0.003326 | 0.314068 | 2,630 | 72 | 105 | 36.527778 | 0.85643 | 0 | 0 | 0.285714 | 0 | 0 | 0.123954 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.015873 | false | 0 | 0.111111 | 0 | 0.238095 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
15beb740c74dbae31445d5c0271c8943a849e3b1 | 913 | py | Python | task_1_su.py | cicadina1/python_edu | 7d5ce26050b16b8dca09ec54f3544abf02f46eac | [
"MIT"
] | null | null | null | task_1_su.py | cicadina1/python_edu | 7d5ce26050b16b8dca09ec54f3544abf02f46eac | [
"MIT"
] | null | null | null | task_1_su.py | cicadina1/python_edu | 7d5ce26050b16b8dca09ec54f3544abf02f46eac | [
"MIT"
] | null | null | null | seq = "AACTGAGAC"
def split(sequence):
    # Split the sequence string into a list of single characters
    return [char for char in sequence]

seqlist = split(seq)
print("Task 1: split sequence\n", seqlist)
def revetring(sequence):
    # Reverse the sequence, keeping it as a list of characters
    reversed_seq = list(sequence)[::-1]
    #reversed_seq = ''.join(reversed_seq)
    return reversed_seq

print("Task 2: reversed sequence\n", revetring(seq))
def complimentation(sequence):
    # Build the complementary sequence: A<->T, C<->G
    compsequence = []
    for x in range(len(sequence)):
        if sequence[x] == "A": compsequence.append("T")
        elif sequence[x] == "T": compsequence.append("A")
        elif sequence[x] == "C": compsequence.append("G")
        else: compsequence.append("C")
    #compsequence = ''.join(compsequence)
    return compsequence

print("Task 3: complementary sequence\n", complimentation(seq))
compl_rev_seq=revetring(seq)
compl_rev_seq=complimentation(compl_rev_seq)
print("Task 4: комплементарна обратна секвенция\n",compl_rev_seq)
| 30.433333 | 65 | 0.696605 | 116 | 913 | 5.37931 | 0.344828 | 0.057692 | 0.070513 | 0.044872 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006527 | 0.161008 | 913 | 29 | 66 | 31.482759 | 0.808094 | 0.079956 | 0 | 0 | 0 | 0 | 0.186158 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.136364 | false | 0 | 0 | 0.045455 | 0.272727 | 0.181818 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
15c0b4a7aa18f44f069e90d242770b8b30f3462c | 751 | py | Python | isee/pip_utils.py | i2mint/isee | cb38a734420af1ab4489b9938cea48f443e66b13 | [
"MIT"
] | null | null | null | isee/pip_utils.py | i2mint/isee | cb38a734420af1ab4489b9938cea48f443e66b13 | [
"MIT"
] | 15 | 2021-02-01T20:13:28.000Z | 2021-12-15T20:38:18.000Z | isee/pip_utils.py | i2mint/isee | cb38a734420af1ab4489b9938cea48f443e66b13 | [
"MIT"
] | null | null | null | import configparser
import pip
from isee.common import get_env_var, get_file_path
def install_requires(project_dir=None):
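    # Installs the packages listed under [options] install_requires in the project's setup.cfg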
    if not project_dir:
        project_dir = get_env_var('GITHUB_WORKSPACE')
    path = get_file_path('setup.cfg', project_dir)
    config = configparser.ConfigParser()
    config.read(path)
    pkgs = [x for x in config['options']['install_requires'].split('\n') if x]
    pip.main(['install'] + pkgs)
def build_dependency_wheels(repository_dir, wheelhouse, requirements_filepath=None):
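    # Builds wheels for the local repository and, optionally, a requirements file into the shared wheelhouse directory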
    args = ['wheel', '--wheel-dir', wheelhouse, '--find-links', wheelhouse]
    if requirements_filepath:
        args.extend(['--requirement', requirements_filepath])
    args.extend(['--editable', repository_dir])
    pip.main(args)
| 32.652174 | 84 | 0.712383 | 96 | 751 | 5.34375 | 0.489583 | 0.077973 | 0.035088 | 0.116959 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.154461 | 751 | 22 | 85 | 34.136364 | 0.807874 | 0 | 0 | 0 | 0 | 0 | 0.143808 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.117647 | false | 0 | 0.176471 | 0 | 0.294118 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ec5c5a3367b8409735b414d000ae6d691a2e9b7a | 829 | py | Python | debug.py | FrancoisData/BigGAN-pytorch | 5fee762b83ff1de24876547d1cb1b2d9e99834e3 | [
"Apache-2.0"
] | 458 | 2018-11-15T00:20:38.000Z | 2020-04-11T11:05:17.000Z | debug.py | FrancoisData/BigGAN-pytorch | 5fee762b83ff1de24876547d1cb1b2d9e99834e3 | [
"Apache-2.0"
] | 18 | 2018-11-27T06:34:05.000Z | 2020-03-26T05:33:28.000Z | debug.py | FrancoisData/BigGAN-pytorch | 5fee762b83ff1de24876547d1cb1b2d9e99834e3 | [
"Apache-2.0"
] | 84 | 2018-11-17T07:36:50.000Z | 2020-04-20T02:57:51.000Z | from model_resnet import *
from demo import *
from utils import *
dim_z = 120
vocab_size = 1000
num_samples = 12 #@param {type:"slider", min:1, max:20, step:1}
truncation = 0.32 #@param {type:"slider", min:0.02, max:1, step:0.02}
noise_seed = 0 #@param {type:"slider", min:0, max:100, step:1}
category = "951"
z = truncated_z_sample(num_samples, truncation, noise_seed)
y = int(951)
# print(z)
feed_dict = sample(z, y, truncation=truncation)
# print(feed_dict['input_y'].shape)
model = Generator(code_dim=120, n_class=1000, chn=6, debug=True)
# inputs = torch.from_numpy(feed_dict['input_z']).float()
# labels = torch.from_numpy(feed_dict['input_y']).float()
# out = model(inputs,labels)
# print(out.size())
# model.apply(weights_init)
print('0,1,2,3'.split(','))
# torch.save(model.state_dict(),'test_model.pth')
| 21.815789 | 69 | 0.694813 | 138 | 829 | 4.007246 | 0.471014 | 0.057866 | 0.081374 | 0.097649 | 0.166365 | 0.097649 | 0 | 0 | 0 | 0 | 0 | 0.064649 | 0.12304 | 829 | 37 | 70 | 22.405405 | 0.696011 | 0.500603 | 0 | 0 | 0 | 0 | 0.0275 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.214286 | 0 | 0.214286 | 0.071429 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ec5db73a5e8c1e84b7771da0358c010b16de6f7f | 3,350 | py | Python | venv/lib/python3.8/site-packages/vsts/member_entitlement_management/v4_1/models/extension_summary_data.py | amcclead7336/Enterprise_Data_Science_Final | ccdc0aa08d4726bf82d71c11a1cc0c63eb301a28 | [
"Unlicense",
"MIT"
] | null | null | null | venv/lib/python3.8/site-packages/vsts/member_entitlement_management/v4_1/models/extension_summary_data.py | amcclead7336/Enterprise_Data_Science_Final | ccdc0aa08d4726bf82d71c11a1cc0c63eb301a28 | [
"Unlicense",
"MIT"
] | null | null | null | venv/lib/python3.8/site-packages/vsts/member_entitlement_management/v4_1/models/extension_summary_data.py | amcclead7336/Enterprise_Data_Science_Final | ccdc0aa08d4726bf82d71c11a1cc0c63eb301a28 | [
"Unlicense",
"MIT"
] | 2 | 2021-05-23T16:46:31.000Z | 2021-05-26T23:51:09.000Z | # --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
# Generated file, DO NOT EDIT
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------------------------
from .summary_data import SummaryData
class ExtensionSummaryData(SummaryData):
"""ExtensionSummaryData.
:param assigned: Count of Licenses already assigned.
:type assigned: int
:param available: Available Count.
:type available: int
:param included_quantity: Quantity
:type included_quantity: int
:param total: Total Count.
:type total: int
:param assigned_through_subscription: Count of Extension Licenses assigned to users through msdn.
:type assigned_through_subscription: int
:param extension_id: Gallery Id of the Extension
:type extension_id: str
:param extension_name: Friendly name of this extension
:type extension_name: str
:param is_trial_version: Whether its a Trial Version.
:type is_trial_version: bool
:param minimum_license_required: Minimum License Required for the Extension.
:type minimum_license_required: object
:param remaining_trial_days: Days remaining for the Trial to expire.
:type remaining_trial_days: int
:param trial_expiry_date: Date on which the Trial expires.
:type trial_expiry_date: datetime
"""
_attribute_map = {
'assigned': {'key': 'assigned', 'type': 'int'},
'available': {'key': 'available', 'type': 'int'},
'included_quantity': {'key': 'includedQuantity', 'type': 'int'},
'total': {'key': 'total', 'type': 'int'},
'assigned_through_subscription': {'key': 'assignedThroughSubscription', 'type': 'int'},
'extension_id': {'key': 'extensionId', 'type': 'str'},
'extension_name': {'key': 'extensionName', 'type': 'str'},
'is_trial_version': {'key': 'isTrialVersion', 'type': 'bool'},
'minimum_license_required': {'key': 'minimumLicenseRequired', 'type': 'object'},
'remaining_trial_days': {'key': 'remainingTrialDays', 'type': 'int'},
'trial_expiry_date': {'key': 'trialExpiryDate', 'type': 'iso-8601'}
}
def __init__(self, assigned=None, available=None, included_quantity=None, total=None, assigned_through_subscription=None, extension_id=None, extension_name=None, is_trial_version=None, minimum_license_required=None, remaining_trial_days=None, trial_expiry_date=None):
super(ExtensionSummaryData, self).__init__(assigned=assigned, available=available, included_quantity=included_quantity, total=total)
self.assigned_through_subscription = assigned_through_subscription
self.extension_id = extension_id
self.extension_name = extension_name
self.is_trial_version = is_trial_version
self.minimum_license_required = minimum_license_required
self.remaining_trial_days = remaining_trial_days
self.trial_expiry_date = trial_expiry_date
| 54.032258 | 272 | 0.645373 | 352 | 3,350 | 5.889205 | 0.295455 | 0.040521 | 0.074288 | 0.027979 | 0.042451 | 0.042451 | 0 | 0 | 0 | 0 | 0 | 0.00145 | 0.176418 | 3,350 | 61 | 273 | 54.918033 | 0.749909 | 0.443582 | 0 | 0 | 0 | 0 | 0.26384 | 0.060071 | 0 | 0 | 0 | 0 | 0 | 1 | 0.041667 | false | 0 | 0.041667 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ec5fb3f3115f92782ad4a5cdee350a82801348f1 | 3,816 | py | Python | src/CreeDictionary/CreeDictionary/urls.py | aarppe/cree-intelligent-dictionary | 79717a08f95827a024061a5a27000bbd3d684363 | [
"Apache-2.0"
] | null | null | null | src/CreeDictionary/CreeDictionary/urls.py | aarppe/cree-intelligent-dictionary | 79717a08f95827a024061a5a27000bbd3d684363 | [
"Apache-2.0"
] | null | null | null | src/CreeDictionary/CreeDictionary/urls.py | aarppe/cree-intelligent-dictionary | 79717a08f95827a024061a5a27000bbd3d684363 | [
"Apache-2.0"
] | null | null | null | """
Definition of urls for CreeDictionary.
"""
from django.conf import settings
from django.contrib import admin
from django.contrib.sitemaps.views import sitemap
from django.contrib.staticfiles.urls import staticfiles_urlpatterns
from django.urls import include, path
from django_js_reverse.views import urls_js
import CreeDictionary.API.views as api_views
from CreeDictionary.CreeDictionary import views
from CreeDictionary.CreeDictionary.sitemaps import sitemaps
# TODO: use URL namespaces:
# e.g., cree-dictionary:index instead of cree-dictionary-index
# See: https://docs.djangoproject.com/en/2.2/topics/http/urls/#url-namespaces
urlpatterns = [
################################# Primary URLs #################################
path("", views.index, name="cree-dictionary-index"),
path("search", views.index, name="cree-dictionary-search"),
# "word" is a user-friendly alternative for the linguistic term "lemma"
path(
"word/<str:lemma_text>/",
views.entry_details,
name="cree-dictionary-index-with-lemma",
),
path("about", views.about, name="cree-dictionary-about"),
path("contact-us", views.contact_us, name="cree-dictionary-contact-us"),
path("query-help", views.query_help, name="cree-dictionary-query-help"),
path("admin/fst-tool", views.fst_tool, name="cree-dictionary-fst-tool"),
################################# Internal API #################################
# internal use to render boxes of search results
path(
"_search_results/<str:query_string>/",
views.search_results,
name="cree-dictionary-search-results",
),
# internal use to render paradigm and only the paradigm
path(
"_paradigm_details/",
views.paradigm_internal,
name="cree-dictionary-paradigm-detail",
),
# POST to this URL to change the display mode:
path(
"_change_display_mode",
views.ChangeDisplayMode.as_view(),
name="cree-dictionary-change-display-mode",
),
# POST to this URL to change the display mode:
path(
"_change_paradigm_label",
views.ChangeParadigmLabelPreference.as_view(),
name="cree-dictionary-change-paradigm-label",
),
################################ Click in text #################################
# cree word translation for click-in-text
path(
"click-in-text/",
api_views.click_in_text,
name="cree-dictionary-word-click-in-text-api",
),
path(
"click-in-text-embedded-test/",
api_views.click_in_text_embedded_test,
name="cree-dictionary-click-in-text-embedded-test",
),
############################## Other applications ##############################
path("admin/", admin.site.urls),
path("search-quality/", include("CreeDictionary.search_quality.urls")),
path("", include("CreeDictionary.morphodict.urls")),
path(
"sitemap.xml",
sitemap,
{"sitemaps": sitemaps},
name="django.contrib.sitemaps.views.sitemap",
),
################################# Special URLS #################################
# Reverse URLs in JavaScript: https://github.com/ierror/django-js-reverse
path("jsreverse", urls_js, name="js_reverse"),
]
if hasattr(settings, "GOOGLE_SITE_VERIFICATION"):
urlpatterns.append(
path(
f"google{settings.GOOGLE_SITE_VERIFICATION}.html",
views.google_site_verification,
)
)
if settings.DEBUG:
# saves the need to `manage.py collectstatic` in development
urlpatterns += staticfiles_urlpatterns()
if settings.DEBUG and settings.ENABLE_DJANGO_DEBUG_TOOLBAR:
import debug_toolbar
# necessary for debug_toolbar to work
urlpatterns.append(path("__debug__/", include(debug_toolbar.urls)))
| 37.048544 | 84 | 0.628407 | 425 | 3,816 | 5.517647 | 0.282353 | 0.089552 | 0.099787 | 0.024307 | 0.128785 | 0.063966 | 0.03838 | 0.03838 | 0.03838 | 0.03838 | 0 | 0.000638 | 0.178197 | 3,816 | 102 | 85 | 37.411765 | 0.74713 | 0.195755 | 0 | 0.22973 | 0 | 0 | 0.304475 | 0.24358 | 0 | 0 | 0 | 0.009804 | 0 | 1 | 0 | false | 0 | 0.135135 | 0 | 0.135135 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ec61d6c1fe6b3a50c2ba90b0d04d3cc374029153 | 5,406 | py | Python | setup.py | reecechimento/python-project-template | fc4e34343cc6cbee9b887dda1961c2d5d1fa3c3a | [
"MIT"
] | null | null | null | setup.py | reecechimento/python-project-template | fc4e34343cc6cbee9b887dda1961c2d5d1fa3c3a | [
"MIT"
] | null | null | null | setup.py | reecechimento/python-project-template | fc4e34343cc6cbee9b887dda1961c2d5d1fa3c3a | [
"MIT"
] | null | null | null | """A setuptools based setup module.
See:
https://github.com/reecechimento/python-acelerate
"""
# Always prefer setuptools over distutils
from setuptools import setup, find_packages
import pathlib
here = pathlib.Path(__file__).parent.resolve()
# Get the long description from the README file
long_description = (here / 'README.md').read_text(encoding='utf-8')
setup(
name='ENTER_PKGNAME', # Required
version='ENTER_VERSION', # Required
description='ENTER_DESCRIPTION', # NOTE: Optional
long_description=long_description, # NOTE: Optional
long_description_content_type='text/markdown', # NOTE: 'text/plain' | 'text/rst' | 'text/markdown'
url='ENTER_GITURL', # NOTE: Optional
author='Chimento, Reece',
author_email='reecechimento@gmail.com',
classifiers=[
# How mature is this project? | 3 - Alpha | 4 - Beta | 5 Production/Stable
'Development Status :: 3 - Alpha',
'License :: OSI Approved :: MIT License',
'Programming Language :: Python :: 3.10 :: Only', # NOTE: Specifies the python versions you support.
],
# NOTE: This field adds keywords for your project which will appear on
# the project page. What does your project relate to?
keywords='engineering,electrical,energy-storage,test-engineering',
# NOTE: When your source code is in a subdirectory under the project
    # root, e.g. `src/`, it is necessary to specify the `package_dir`
# argument.
package_dir={'': 'src'}, # Optional
# You can just specify your package directories manually here if your
# project is simple.
# NOTE: OTHERWISE you can use find_packages().
# NOTE: Alternatively, if you just want to distribute a single Python
# file, use the `py_modules` argument instead as follows, which will
    # expect a file called `my_module.py` to exist:
# py_modules=["my_module"],
packages=find_packages(where='src'), # WARN: Required
# WARN: Specify which Python versions you support. `pip install` will check this
python_requires='>=3.10, <4',
# WARN: `install_requires` specifies what a project *minimally* needs to
# run correctly.
# NOTE: This is the specification that is used to install its
# dependencies.
install_requires=[
'aiohttp',
'ruamel.yaml'
],
# List of additional groups of dependencies (e.g. development dependencies)
# $ pip install python-acelerate[dev]
extras_require={
'dev': ['check-manifest'],
'test': ['coverage'],
},
# If there are any data files included in your packages that need to be
# installed, specify them here.
package_data={
'config': ['init.yml'],
},
    # Although 'package_data' is the preferred approach, in some cases you may
# need to place data files outside of your packages. See:
# https://docs.python.org/distutils/setupscript.html # installing-additional-files
#
    # In this case, 'data_file' will be installed into '<sys.prefix>/my_data'
# data_files=[('my_data', ['data/data_file'])], # Optional
# To provide executable scripts, use entry points in preference to the
# "scripts" keyword. Entry points provide cross-platform support and
# allow `pip` to create the appropriate form of executable for the
# target platform
#
#
# For example, the following would provide a command called `acelerate` which
# executes the function `main` from this package when invoked:
entry_points={ # Optional
'console_scripts': [
'main = ENTER_PKGNAME:__INIT__.PY_FUNCTION', # NOTE: hook the __init__.py method in main()
],
},
# List additional URLs that are relevant to your project as a dict.
#
# This field corresponds to the "Project-URL" metadata fields:
# https://packaging.python.org/specifications/core-metadata/ # project-url-multiple-use
#
# Examples listed include a pattern for specifying where the package tracks
# issues, where the source is hosted, where to say thanks to the package
# maintainers, and where to support the project financially. The key is
# what's used to render the link text on PyPI.
project_urls={ # Optional
'Bug Reports': 'ENTER_GITHUB_BUGREPORTS',
'Source': 'ENTER_GITHUB_URL',
},
)
| 53 | 155 | 0.544395 | 554 | 5,406 | 5.216607 | 0.458484 | 0.025952 | 0.015917 | 0.018685 | 0.026298 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003555 | 0.375509 | 5,406 | 101 | 156 | 53.524752 | 0.852488 | 0.508694 | 0 | 0.069767 | 0 | 0 | 0.184699 | 0.051777 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.046512 | 0 | 0.046512 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ec62eab66396e7e93ed3e02a393356b10591dc52 | 2,348 | py | Python | pmtg-synthetic-control/utils/loader.py | codyly/pmtg | 2b37200b3d6e38c8d70f25ec6fef19ed18b81496 | [
"MIT"
] | 1 | 2022-02-15T13:40:09.000Z | 2022-02-15T13:40:09.000Z | pmtg-synthetic-control/utils/loader.py | codyly/pmtg | 2b37200b3d6e38c8d70f25ec6fef19ed18b81496 | [
"MIT"
] | null | null | null | pmtg-synthetic-control/utils/loader.py | codyly/pmtg | 2b37200b3d6e38c8d70f25ec6fef19ed18b81496 | [
"MIT"
] | null | null | null | import numpy as np
from utils.visualize import view_trajectory
DEFAULT_SCALE = 1.0
def load_trajectory(file_name: str, num_steps: int, save_results: bool = True) -> np.ndarray:
"""Load paited trajectory and pre-processing
Args:
file_name (string): [description]
num_steps (int): [description]
Returns:
trajectory with coordinates in shape (num_steps, 2)
"""
trajectory = np.load(file_name).astype(np.float32)
# 1. normalization
w, h = (trajectory.max(axis=0) - trajectory.min(axis=0)).tolist()
ratio = w / h
if ratio >= 1:
trajectory = (trajectory - trajectory.min(axis=0)) / w
scales = np.array([[DEFAULT_SCALE, h / w]])
else:
trajectory = (trajectory - trajectory.min(axis=0)) / h
scales = np.array([[w / h, DEFAULT_SCALE]])
    # [-DEFAULT_SCALE, DEFAULT_SCALE], ratio kept
trajectory = (trajectory - scales / 2) * 2 * DEFAULT_SCALE
if save_results:
view_trajectory(trajectory, title="original_trajectory")
# 2. resampling / interpolation
    # Humans hardly ever draw the curve at a constant speed, so a rough resampling (linear interpolation)
    # is used for pre-processing
margin_right = sorted(np.where(trajectory[:, 0] == scales[0, 0])[0])
margin_left = sorted(np.where(trajectory[:, 0] == -scales[0, 0])[0])
assert len(margin_right) >= 1 and len(margin_left) >= 1
ts = np.linspace(0, 1, num_steps)
xs = scales[0, 0] * np.sin(2 * np.pi * ts)
ys = np.zeros_like(xs)
# from left to right
ids = np.arange(num_steps // 2 + 1) - num_steps // 4
ids_traj = np.arange(trajectory.shape[0] - margin_left[0] + margin_right[-1]) - (
trajectory.shape[0] - margin_left[0]
)
ys[ids] = np.interp(xs[ids], trajectory[ids_traj, 0], trajectory[ids_traj, 1])
# from right to left
ys[-num_steps // 4 - 1 : num_steps // 4 : -1] = np.interp(
xs[-num_steps // 4 - 1 : num_steps // 4 : -1],
trajectory[margin_left[-1] : margin_right[0] : -1, 0],
trajectory[margin_left[-1] : margin_right[0] : -1, 1],
)
trajectory_interp = np.zeros([num_steps, 2])
trajectory_interp[:, 0], trajectory_interp[:, 1] = xs, ys
if save_results:
view_trajectory(trajectory_interp, title="interp_trajectory")
return trajectory_interp
| 33.542857 | 101 | 0.633731 | 326 | 2,348 | 4.41411 | 0.291411 | 0.061154 | 0.031272 | 0.027797 | 0.262682 | 0.262682 | 0.120917 | 0.120917 | 0.045865 | 0 | 0 | 0.031956 | 0.227002 | 2,348 | 69 | 102 | 34.028986 | 0.760882 | 0.192078 | 0 | 0.054054 | 0 | 0 | 0.019355 | 0 | 0 | 0 | 0 | 0 | 0.027027 | 1 | 0.027027 | false | 0 | 0.054054 | 0 | 0.108108 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ec644d6a76b9ebec8b12e7c2b679fb791dbf5fcc | 1,110 | py | Python | problems/115.py | mengshun/Leetcode | 8bb676f2fff093e1417a4bed13d9ad708149be78 | [
"MIT"
] | null | null | null | problems/115.py | mengshun/Leetcode | 8bb676f2fff093e1417a4bed13d9ad708149be78 | [
"MIT"
] | null | null | null | problems/115.py | mengshun/Leetcode | 8bb676f2fff093e1417a4bed13d9ad708149be78 | [
"MIT"
] | null | null | null | """
115. 不同的子序列
https://leetcode-cn.com/problems/distinct-subsequences/
"""
def numDistinct(s: str, t: str):
m, n = len(s), len(t)
if n > m:
return 0
dp = [[0] * (n+1) for _ in range(m+1)]
for i in range(m+1):
dp[i][n] = 1
for i in range(m-1, -1, -1):
for j in range(n-1, -1, -1):
if s[i] == t[j]:
dp[i][j] = dp[i+1][j+1] + dp[i+1][j]
else:
dp[i][j] = dp[i+1][j]
return dp[0][0]
def numDistinctOther(s: str, t: str):
m, n = len(s), len(t)
if n > m:
return 0
dp = [[0] * (n+1) for _ in range(m+1)]
for i in range(m+1):
dp[i][0] = 1
for i in range(1, n+1):
for j in range(i, m+1):
if t[i-1] == s[j-1]:
dp[j][i] = dp[j-1][i-1] + dp[j-1][i]
else:
dp[j][i] = dp[j-1][i]
return dp[m][n]
"""
babgbag
bag
"""
print("numDistinct: ", numDistinctOther("rabbbit", "rabbit")) # 3
print("numDistinct: ", numDistinctOther("babgbag", "bag")) # 5
print("numDistinct: ", numDistinctOther("babb", "bbb")) # 1 | 21.346154 | 65 | 0.454955 | 191 | 1,110 | 2.633508 | 0.198953 | 0.063618 | 0.079523 | 0.089463 | 0.417495 | 0.345924 | 0.345924 | 0.246521 | 0.246521 | 0.246521 | 0 | 0.053548 | 0.327027 | 1,110 | 52 | 66 | 21.346154 | 0.619813 | 0.066667 | 0 | 0.387097 | 0 | 0 | 0.068588 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.064516 | false | 0 | 0 | 0 | 0.193548 | 0.096774 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ec6513c05b81af621f85dc42a34ad6180c6b7cdd | 3,902 | py | Python | scripts/update_dreqs/update_dreqs_0183.py | jonseddon/primavera-dmt | 1239044e37f070b925a3d06db68351f285df780c | [
"BSD-3-Clause"
] | null | null | null | scripts/update_dreqs/update_dreqs_0183.py | jonseddon/primavera-dmt | 1239044e37f070b925a3d06db68351f285df780c | [
"BSD-3-Clause"
] | 49 | 2018-11-14T17:00:03.000Z | 2021-12-20T11:04:22.000Z | scripts/update_dreqs/update_dreqs_0183.py | jonseddon/primavera-dmt | 1239044e37f070b925a3d06db68351f285df780c | [
"BSD-3-Clause"
] | 2 | 2018-07-04T10:58:43.000Z | 2018-09-29T14:55:08.000Z | #!/usr/bin/env python
"""
update_dreqs_0183.py
ESGF AttributeUpdate
Called from a Rose suite to update the checksums in MPI AMIP files that have
already been updated with the first version that didn't preserve checksums.
"""
from __future__ import (unicode_literals, division, absolute_import,
print_function)
import argparse
import logging.config
import os
import sys
import django
django.setup()
from pdata_app.models import Checksum, DataRequest, TapeChecksum
from pdata_app.utils.common import adler32
__version__ = '0.1.0b1'
DEFAULT_LOG_LEVEL = logging.WARNING
DEFAULT_LOG_FORMAT = '%(levelname)s: %(message)s'
logger = logging.getLogger(__name__)
def parse_args():
"""
Parse command-line arguments
"""
parser = argparse.ArgumentParser(description='Add additional data requests')
parser.add_argument('-l', '--log-level', help='set logging level to one of '
'debug, info, warn (the default), or error')
parser.add_argument('request_id', help='to request id to update')
parser.add_argument('--version', action='version',
version='%(prog)s {}'.format(__version__))
args = parser.parse_args()
return args
def main(args):
"""
Main entry point
"""
model, expt, var_lab, table, var = args.request_id.split('_')
if model == 'MPIESM-1-2-HR':
new_model = 'MPI-ESM1-2-HR'
elif model == 'MPIESM-1-2-XR':
new_model = 'MPI-ESM1-2-XR'
else:
raise ValueError('Unknown source_id {}'.format(model))
dreq = DataRequest.objects.get(
climate_model__short_name=new_model,
experiment__short_name=expt,
rip_code=var_lab,
variable_request__table_name=table,
variable_request__cmor_name=var
)
logger.debug('DataRequest is {}'.format(dreq))
for data_file in dreq.datafile_set.order_by('name'):
logger.debug('Processing {}'.format(data_file.name))
file_path = os.path.join(data_file.directory, data_file.name)
cs = data_file.checksum_set.first()
if not cs:
logger.error('No checksum for {}'.format(data_file.name))
else:
TapeChecksum.objects.create(
data_file=data_file,
checksum_value=cs.checksum_value,
checksum_type=cs.checksum_type
)
# Remove the original checksum now that the tape checksum's
# been created
cs.delete()
Checksum.objects.create(
data_file=data_file,
checksum_type='ADLER32',
checksum_value=adler32(file_path)
)
# Update the file's size
data_file.tape_size = data_file.size
data_file.size = os.path.getsize(file_path)
# Save all of the changes
data_file.save()
if __name__ == "__main__":
cmd_args = parse_args()
# determine the log level
if cmd_args.log_level:
try:
log_level = getattr(logging, cmd_args.log_level.upper())
except AttributeError:
logger.setLevel(logging.WARNING)
logger.error('log-level must be one of: debug, info, warn or error')
sys.exit(1)
else:
log_level = DEFAULT_LOG_LEVEL
# configure the logger
logging.config.dictConfig({
'version': 1,
'disable_existing_loggers': False,
'formatters': {
'standard': {
'format': DEFAULT_LOG_FORMAT,
},
},
'handlers': {
'default': {
'level': log_level,
'class': 'logging.StreamHandler',
'formatter': 'standard'
},
},
'loggers': {
'': {
'handlers': ['default'],
'level': log_level,
'propagate': True
}
}
})
# run the code
main(cmd_args)
| 28.481752 | 80 | 0.604049 | 457 | 3,902 | 4.919037 | 0.404814 | 0.049822 | 0.022687 | 0.012456 | 0.088078 | 0.032918 | 0.032918 | 0 | 0 | 0 | 0 | 0.008636 | 0.287801 | 3,902 | 136 | 81 | 28.691176 | 0.800288 | 0.11225 | 0 | 0.073684 | 0 | 0 | 0.160573 | 0.013162 | 0 | 0 | 0 | 0 | 0 | 1 | 0.021053 | false | 0 | 0.084211 | 0 | 0.115789 | 0.010526 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ec66e4ff2212655f50190f485778b9d0bb5f89f4 | 2,149 | py | Python | main.py | 0x437968/overexposure-correction-dise | 08bedbc1b418a23b1ab6e849c3c7d4fdef04e472 | [
"MIT"
] | 4 | 2020-09-30T03:00:57.000Z | 2021-06-28T04:51:47.000Z | main.py | 0x437968/overexposure-correction-dise | 08bedbc1b418a23b1ab6e849c3c7d4fdef04e472 | [
"MIT"
] | 1 | 2021-11-05T00:50:56.000Z | 2022-03-03T13:17:17.000Z | main.py | 0x437968/overexposure-correction-dise | 08bedbc1b418a23b1ab6e849c3c7d4fdef04e472 | [
"MIT"
] | 1 | 2022-03-03T13:15:07.000Z | 2022-03-03T13:15:07.000Z | from data_cfg import project_dataset
from tensorboardX import SummaryWriter
from utils import init_folder
from options import ProjectOptions
from model import create_model
import time
import os
import torch
if __name__ == '__main__':
    opt=ProjectOptions().get_opt()
    ProjectOptions.print_options(opt)
    nets_path=os.path.join(opt.checkpoints_dir,opt.model+'_'+opt.name)
    init_folder(nets_path,opt.im_save_dir)
    data_set = project_dataset(opt)
    data_loader = torch.utils.data.DataLoader(data_set, batch_size=opt.batch_size, shuffle=True)
    length=len(data_loader)
    model=create_model(opt)
    model.load_networks()
    if opt.print_net:
        model.print_networks()
    if opt.phase=='train':
        log_dir=os.path.join('./log',opt.model+'_'+opt.name)
        init_folder( './log')
        print('Start training....')
        writer = SummaryWriter(log_dir)
        for e in range(opt.epochs):
            epoch=e+1
            model.clear_sumloss()
            for i,data in enumerate(data_loader,0):
                model.set_input(data)
                model.train()
                model.write(writer,e*length+i+1)
                if (i+1)%opt.print_freq==0:
                    model.print_loss(opt.name,epoch,i,length)
                if (e*length+i+1)%opt.im_save_freq==0:
                    print('save img')
                    model.save_results()
            if epoch%opt.net_save_freq==0:
                model.save_networks()
        print('Training over')
    if opt.phase=='test':
        print('Current: ', time.asctime(time.localtime(time.time())))
        print('Start testing')
        t=0.0
        l=0.0
        for i,data in enumerate(data_loader,0):
            l+=data['test'].size()[0]
            model.set_input(data)
            f1t=time.time()
            model.forward()
            f2t=time.time()
            t+=(f2t-f1t)
            model.save_results()
        print('Testing over')
        print('Current: ', time.asctime(time.localtime(time.time())))
        print('Average inference latency for one frame: %.4fs'%(t/l))
| 35.229508 | 97 | 0.579339 | 273 | 2,149 | 4.377289 | 0.307692 | 0.033473 | 0.016736 | 0.025105 | 0.203347 | 0.174059 | 0.132218 | 0.132218 | 0.082008 | 0 | 0 | 0.012574 | 0.296882 | 2,149 | 60 | 98 | 35.816667 | 0.778293 | 0 | 0 | 0.142857 | 0 | 0 | 0.077144 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.142857 | 0.232143 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ec680a486a8b041372298819a8d2017978d1f0dd | 752 | py | Python | mult.py | jonathanbell/using-python-with-excel | 2a3355d060216e9911159bed9b3c4f9f11b944ae | [
"WTFPL"
] | null | null | null | mult.py | jonathanbell/using-python-with-excel | 2a3355d060216e9911159bed9b3c4f9f11b944ae | [
"WTFPL"
] | null | null | null | mult.py | jonathanbell/using-python-with-excel | 2a3355d060216e9911159bed9b3c4f9f11b944ae | [
"WTFPL"
] | null | null | null | import pandas
from openpyxl import load_workbook
from openpyxl.styles import Font
df1 = pandas.read_excel('data/shifts.xlsx', sheet_name='Sheet')
df2 = pandas.read_excel('data/shifts.xlsx', sheet_name='Sheet1')
df3 = pandas.read_excel('data/shift_3.xlsx')
df_all = pandas.concat([df1, df2, df3], sort=False)
to_excel = df_all.to_excel('output/allshifts.xlsx', index=None)
wb = load_workbook('output/allshifts.xlsx')
ws = wb.active
total_col = ws['G1']
total_col.font = Font(bold=True)
total_col.value = 'Total'
e_col, f_col = ['E', 'F']
for row in range(2, 300):
    result_cell = 'G{}'.format(row)
    e_value = ws[e_col + str(row)].value
    f_value = ws[f_col + str(row)].value
    ws[result_cell] = e_value * f_value
wb.save('output/totalled.xlsx')
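
# Note added for clarity (not part of the original script): range(2, 300) assumes at most ~298 data rows.
# For rows past the end of the concatenated data the E/F cells are empty (None), and multiplying them
# would raise a TypeError, so a guard such as `if e_value is not None and f_value is not None:` may be
# needed for shorter sheets.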
| 26.857143 | 64 | 0.718085 | 128 | 752 | 4.023438 | 0.429688 | 0.058252 | 0.087379 | 0.11068 | 0.147573 | 0.147573 | 0.147573 | 0.147573 | 0 | 0 | 0 | 0.019637 | 0.119681 | 752 | 27 | 65 | 27.851852 | 0.758308 | 0 | 0 | 0 | 0 | 0 | 0.178191 | 0.055851 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.15 | 0 | 0.15 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ec69ff59cf02e6fc2fd5b3c7379eb1801389f065 | 6,141 | py | Python | functions.py | traian-d/fractals | a8666c9b6eb446beef0fe71d6d1e6ccfa9f2108e | [
"MIT"
] | null | null | null | functions.py | traian-d/fractals | a8666c9b6eb446beef0fe71d6d1e6ccfa9f2108e | [
"MIT"
] | null | null | null | functions.py | traian-d/fractals | a8666c9b6eb446beef0fe71d6d1e6ccfa9f2108e | [
"MIT"
] | null | null | null | import fractal
class Mandelbrot(fractal.Fractal):
    __slots__ = ('__w', '__h', '__max_iter', '__grid')

    def __init__(self, re_start, re_end, im_start, im_end, max_iter=100, w=600, h=400):
        self.__max_iter = max_iter
        self.__w = w
        self.__h = h
        self.__grid = self.make_grid(re_start, re_end, im_start, im_end)

    def __compute(self, c):
        z = 0
        n = 0
        while abs(z) <= 4 and n < self.__max_iter:
            z = z*z + c
            n += 1
        return n

    def evaluate(self):
        import numpy as np
        compute_v = np.vectorize(self.__compute)
        return compute_v(self.__grid)

    def get_color(self, pt):
        # Smooth coloring scheme, others exist.
        hue = int(255 * pt / self.__max_iter)
        saturation = 255
        value = 255 if pt < self.__max_iter else 0
        return hue, saturation, value

    @property
    def height(self):
        return self.__h

    @property
    def width(self):
        return self.__w

    def make_image(self, evaluated):
        from PIL import Image, ImageDraw
        im = Image.new('HSV', (self.__w, self.__h), (0, 0, 0))
        draw = ImageDraw.Draw(im)
        for i in range(self.__w):
            for j in range(self.__h):
                img_pt = evaluated[i, j]
                color = self.get_color(img_pt)
                draw.point([i, j], color)
        return im.convert('RGB')


class Newton(fractal.Fractal):
    __slots__ = ('__re_start', '__re_end', '__im_start', '__im_end', '__w', '__h',
                 '__max_err', '__max_iter', '__decimals', '__grid', '__palette', '__color_dict')

    def __init__(self, re_start, re_end, im_start, im_end, palette, w=600, h=400,
                 max_err=1e-5, max_iter=1e4, decimals=8):
        self.__re_start = re_start
        self.__re_end = re_end
        self.__im_start = im_start
        self.__im_end = im_end
        self.__w = w
        self.__h = h
        self.__max_err = max_err
        self.__max_iter = max_iter
        self.__decimals = decimals
        self.__grid = self.make_grid(re_start, re_end, im_start, im_end)
        self.__palette = palette
        self.__color_dict = {}

    def __compute(self, c, func, func_der):
        f_c = func(c)
        count = 0
        output = c
        while abs(f_c.real) >= self.__max_err or abs(f_c.imag) >= self.__max_err:
            f_prime_c = func_der(c)
            if f_prime_c == 0:
                output = c
                break
            c -= f_c / f_prime_c
            f_c = func(c)
            count += 1
            if count >= self.__max_iter:
                # Algorithm did not converge, input default val which should be outside of normal evaluation ranges.
                output = -1e5
                break
            output = c
        return complex(round(output, self.__decimals), 0) if isinstance(output, float) else \
            complex(round(output.real, self.__decimals), round(output.imag, self.__decimals))

    def evaluate(self, func, func_der):
        import warnings
        import numpy as np
        compute_v = np.vectorize(self.__compute)
        computed = compute_v(self.__grid, func, func_der)
        roots = np.unique(computed.flatten())
        roots_len = len(roots)
        palette_len = len(self.__palette)
        if palette_len < roots_len:
            print(roots)
            pad_len = roots_len - palette_len
            self.__palette += ['#000000'] * pad_len
            warnings.warn(f'Palette provided had length {palette_len}, but there were {roots_len} roots. ' +
                          f'Palette was padded with {pad_len} times black.')
        # If the algorithm didn't converge the point will be colored black
        self.__color_dict = {roots[i]: '#000000' if roots[i].real == -1e5 else self.__palette[i] for i in range(roots_len)}
        return computed

    def get_color(self, pt):
        return self.__color_dict[pt]

    def get_root_adjacent_pts(self, nr_pts=5):
        """
        Method will return the pixel coordinates of the points nearest to each of the computed roots from __color_dict
        :param nr_pts: Pixel window around the pixel nearest to the root.
        :return: A list of lists, each representing a pixel coordinate.
        """
        re_step = (self.__re_end - self.__re_start) / self.__w
        im_step = (self.__im_end - self.__im_start) / self.__h
        iter_range = range(- nr_pts//2, 1 + nr_pts//2)
        out = []
        for root in self.__color_dict:
            out += self.__nearest_pts(root, re_step, im_step, iter_range)
        return out

    def __nearest_pts(self, root, re_step, im_step, iter_range):
        re_nearest = (root.real - self.__re_start) // re_step
        im_nearest = (root.imag - self.__im_start) // im_step
        return [[re_nearest + i, im_nearest + j] for i in iter_range for j in iter_range]

    @property
    def height(self):
        return self.__h

    @property
    def width(self):
        return self.__w


def func(x):
    # return x ** 5 - 3j * x**3 - (5 + 2j) + x
    # return x ** 5 - 3j * x**3 - (5 + 2j) * x ** 2 + 3*x + 1
    return x**3 - 2*x + 2
    # return x**3 - 1


def func_der(x):
    # return 5 * x ** 4 - 9j * x**2 + 1
    # return 5 * x ** 4 - 9j * x**2 - 10 * x - 4j * x + 3
    return 3 * x**2 - 2
    # return 3 * x**2
if __name__ == '__main__':
    # Blue hues: ['#023E8A', '#0077B6', '#90E0EF', '#CAF0F8', '#03045E']
    from PIL import Image, ImageDraw, ImageColor

    # mdb = Mandelbrot(-2, 1, -1, 1, w=int(1024 * 3/2), h=1024)
    # evaluated = mdb.evaluate()
    # im = mdb.make_image(evaluated)
    # im.save('images/mdb.jpg', 'JPEG')

    newt = Newton(-4, 4, -4, 4, w=600, h=600, max_err=1e-10, max_iter=1e2, decimals=9,
                  palette=['#023E8A', '#0077B6', '#90E0EF', '#CAF0F8', '#03045E'])
    evaluated = newt.evaluate(func, func_der)
    near_roots = newt.get_root_adjacent_pts()
    im = newt.make_image(evaluated)
    draw = ImageDraw.Draw(im)
    for pt in near_roots:
        draw.point(pt, ImageColor.getrgb("#FF0000"))
    im.save('images/cubic_w_roots.jpg', 'JPEG')
| 34.5 | 123 | 0.579873 | 876 | 6,141 | 3.718037 | 0.216895 | 0.02794 | 0.019343 | 0.018422 | 0.244704 | 0.178385 | 0.166104 | 0.134173 | 0.126804 | 0.126804 | 0 | 0.039348 | 0.300603 | 6,141 | 177 | 124 | 34.694915 | 0.718976 | 0.141996 | 0 | 0.277778 | 0 | 0 | 0.065439 | 0.004606 | 0 | 0 | 0 | 0 | 0 | 1 | 0.134921 | false | 0 | 0.047619 | 0.055556 | 0.333333 | 0.007937 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ec6a0fc301cf85afc5c914d15d835f6315fb62d6 | 2,204 | py | Python | 101-150/142_linked_list_cycle_ii.py | ChenhaoJiang/LeetCode-Solution | b2119ad938c18f75948d0f5cf4ae3773820dcc93 | [
"MIT"
] | 16 | 2019-03-30T07:25:27.000Z | 2020-07-28T15:34:53.000Z | 101-150/142_linked_list_cycle_ii.py | ChenhaoJiang/LeetCode-Solution | b2119ad938c18f75948d0f5cf4ae3773820dcc93 | [
"MIT"
] | null | null | null | 101-150/142_linked_list_cycle_ii.py | ChenhaoJiang/LeetCode-Solution | b2119ad938c18f75948d0f5cf4ae3773820dcc93 | [
"MIT"
] | 2 | 2020-06-26T13:02:14.000Z | 2020-07-28T04:59:15.000Z | """
Given a linked list, return the node where the cycle begins. If there is no cycle, return null.
To represent a cycle in the given linked list, we use an integer pos which represents the position (0-indexed) in the linked list where tail connects to.
If pos is -1, then there is no cycle in the linked list.
Note: Do not modify the linked list.
Example 1:
Input: head = [3,2,0,-4], pos = 1
Output: tail connects to node index 1
Explanation: There is a cycle in the linked list, where tail connects to the second node.
Example 2:
Input: head = [1,2], pos = 0
Output: tail connects to node index 0
Explanation: There is a cycle in the linked list, where tail connects to the first node.
Example 3:
Input: head = [1], pos = -1
Output: no cycle
Explanation: There is no cycle in the linked list.
Follow-up:
Can you solve it without using extra space?
"""
# Definition for singly-linked list.
# class ListNode(object):
#     def __init__(self, x):
#         self.val = x
#         self.next = None


class Solution(object):
    def detectCycle(self, head):
        """
        :type head: ListNode
        :rtype: ListNode
        """
        # Use a slow pointer and a fast pointer
        slow, fast = head, head
        while True:
            # If the fast pointer reaches the end, the list has no cycle
            if not fast or not fast.next:
                return None
            # The slow pointer advances one step at a time
            slow = slow.next
            # The fast pointer advances two steps at a time
            fast = fast.next.next
            # If the fast pointer catches up with the slow pointer, the list has a cycle and
            # fast - slow = n*b (where b is the cycle length). Since fast = 2 * slow, the slow
            # pointer has walked n*b steps at this point, for some positive integer n.
            if slow == fast:
                break
        # Restart fast from head
        fast = head
        # When fast and slow meet again, fast has walked a steps to reach the start of the cycle and
        # slow has walked a + n*b steps to reach the same node, so the node where they meet is the
        # answer (a is the distance from the head node to the start of the cycle).
        while slow != fast:
            slow = slow.next
            fast = fast.next
        return fast


"""
Approach: fast/slow pointers. When the two pointers first meet, the list must contain a cycle and
fast - slow = n*b (b is the cycle length). Since fast = 2 * slow, the slow pointer has walked n*b steps
at that point, for some positive integer n. Restart fast from head and move it one step at a time; when
fast and slow meet again, fast has walked a steps to the cycle entrance and slow has walked a + n*b
steps to the same node, so the meeting node is the answer (a is the distance from the head node to the
cycle entrance). Time complexity is O(n): in the second phase the slow pointer walks a < a + b steps,
and in the first phase it walks a + b - x < a + b steps, where x is the distance from the meeting point
to the cycle entrance, so the total is linear. Space complexity is O(1) because the two pointers use
only constant extra space.
"""
| 35.548387 | 154 | 0.651543 | 296 | 2,204 | 4.841216 | 0.402027 | 0.062805 | 0.054431 | 0.052338 | 0.365666 | 0.276343 | 0.235869 | 0.235869 | 0.171668 | 0.171668 | 0 | 0.012262 | 0.259982 | 2,204 | 61 | 155 | 36.131148 | 0.865727 | 0.57441 | 0 | 0.133333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0 | 0 | 0 | 0.266667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ec6c02a550bff95ca8a0b04da83c54cb712ce0cf | 1,161 | py | Python | tests/test_settings.py | martimarkov/django-ajax-datatable | d132504a199cb2afe2cfd74a2e6d5d5f2969c4a4 | [
"MIT"
] | 1 | 2021-11-19T13:36:30.000Z | 2021-11-19T13:36:30.000Z | tests/test_settings.py | martimarkov/django-ajax-datatable | d132504a199cb2afe2cfd74a2e6d5d5f2969c4a4 | [
"MIT"
] | null | null | null | tests/test_settings.py | martimarkov/django-ajax-datatable | d132504a199cb2afe2cfd74a2e6d5d5f2969c4a4 | [
"MIT"
] | null | null | null | # -*- coding: utf-8
from __future__ import unicode_literals, absolute_import
DEBUG = True
USE_TZ = True
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = "77777777777777777777777777777777777777777777777777"
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        # Add 'postgresql_psycopg2', 'postgresql', 'mysql', 'sqlite3' or 'oracle'.
        'NAME': 'postgres', # Or path to database file if using sqlite3.
        'USER': 'postgres', # Not used with sqlite3.
        'PASSWORD': 'postgres', # Not used with sqlite3.
        'HOST': 'localhost', # Set to empty string for localhost. Not used with sqlite3.
        'PORT': '5432', # Set to empty string for default. Not used with sqlite3.
    }
}

# ROOT_URLCONF = "tests.urls"

INSTALLED_APPS = [
    "user",
    "django.contrib.auth",
    "django.contrib.contenttypes",
    "django.contrib.sites",
    "ajax_datatable",
]

AUTH_USER_MODEL = "user.TestUser"

SITE_ID = 1

MIDDLEWARE = ()

TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [],
        'APP_DIRS': True,
    },
]
| 25.23913 | 89 | 0.643411 | 127 | 1,161 | 5.755906 | 0.629921 | 0.038304 | 0.060192 | 0.098495 | 0.123119 | 0 | 0 | 0 | 0 | 0 | 0 | 0.070078 | 0.225668 | 1,161 | 45 | 90 | 25.8 | 0.743048 | 0.332472 | 0 | 0 | 0 | 0 | 0.413072 | 0.2 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.032258 | 0.032258 | 0 | 0.032258 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ec6de7bbb85c5d6dd0a3ddc51579d47f873de627 | 2,213 | py | Python | generate_areacode_lookup_data.py | EricSchles/investigator | 88b4215667b24483efbd9aba690e7c3a231539f6 | [
"MIT"
] | 17 | 2016-06-17T23:42:42.000Z | 2020-06-29T19:17:09.000Z | generate_areacode_lookup_data.py | EricSchles/investigator | 88b4215667b24483efbd9aba690e7c3a231539f6 | [
"MIT"
] | 6 | 2016-06-16T18:46:45.000Z | 2017-07-02T16:41:45.000Z | generate_areacode_lookup_data.py | EricSchles/investigator | 88b4215667b24483efbd9aba690e7c3a231539f6 | [
"MIT"
] | 11 | 2016-06-21T21:17:46.000Z | 2020-06-30T05:25:36.000Z | from selenium import webdriver
from selenium.webdriver.common.by import By
from app.models import AreaCodeLookup
from pyzipcode import ZipCodeDatabase
from app import db
import us
from geopy.geocoders import Nominatim
from easydict import EasyDict as edict
print("starting webdriver")
driver = webdriver.Firefox()
print("getting webpage")
driver.get("https://www.allareacodes.com/")
result = driver.find_elements(By.XPATH, "//select[@style='width: 100%; margin-right: 2px']")
area_code_and_place = result[0].text.split("\n")
prefixes = [
"New", "Los", "San", "Baton", "Fort",
"Bowling", "Lake", "Grand", "Saint",
"Charlotte"
]
zcdb = ZipCodeDatabase()
geolocator = Nominatim()
for area_code in area_code_and_place:
    state = area_code.split("-")[1].split("(")[0].strip()
    if "DC" in state:
        state = us.states.lookup("DC").abbr
    else:
        state = us.states.lookup(state).abbr
    city = area_code.split("-")[1].split("(")[1].rstrip(")")
    city = city.strip()
    if "," in city:
        city = city.split(",")[0]
    if " " in city:
        if [prefix for prefix in prefixes if prefix in city] == []:
            city = city.split(" ")[0]
    if isinstance(zcdb.find_zip(city=city,state=state),list):
        zip_code = zcdb.find_zip(city=city,state=state)[0]
    else:
        zip_code = zcdb.find_zip(city=city,state=state)
    if zip_code is None:
        try:
            zip_code = zcdb.find_zip(state=state)[0]
        except:
            if state == "MP":
                zip_code = edict({
                    "latitude":15.200755,
                    "longitude":145.756952
                })
            elif state == "GU":
                zip_code = edict({
                    "latitude":13.463345,
                    "longitude":144.733168
                })
            else:
                import code
                code.interact(local=locals())
    area_code = AreaCodeLookup(
        area_code.split("-")[0].strip(),
        city,
        state,
        zip_code.latitude,
        zip_code.longitude
    )
    db.session.add(area_code)
    db.session.commit()
| 32.072464 | 92 | 0.553999 | 257 | 2,213 | 4.673152 | 0.396887 | 0.053289 | 0.036636 | 0.037469 | 0.167361 | 0.120733 | 0.120733 | 0.05995 | 0.05995 | 0 | 0 | 0.031373 | 0.308631 | 2,213 | 68 | 93 | 32.544118 | 0.753595 | 0 | 0 | 0.109375 | 0 | 0 | 0.096249 | 0.010393 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.140625 | 0 | 0.140625 | 0.03125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ec71f4c0538be0f19f562102f9375c395809b744 | 3,471 | py | Python | pybargain_protocol/bargaining_cancellation.py | LaurentMT/pybargain_protocol | 3b4c6040ec3562ce6921f917c97a9931d5c6e5de | [
"MIT"
] | 1 | 2015-06-30T15:34:41.000Z | 2015-06-30T15:34:41.000Z | pybargain_protocol/bargaining_cancellation.py | LaurentMT/pybargain_protocol | 3b4c6040ec3562ce6921f917c97a9931d5c6e5de | [
"MIT"
] | null | null | null | pybargain_protocol/bargaining_cancellation.py | LaurentMT/pybargain_protocol | 3b4c6040ec3562ce6921f917c97a9931d5c6e5de | [
"MIT"
] | null | null | null | #!/usr/bin/env python
'''
Version: 0.0.1
Python library for the bargaining protocol
'''
from pybargain_protocol import bargaining_pb2
from pybargain_protocol.constants import MAINNET
from pybargain_protocol.exceptions import SerializationError, DeserializationError
from pybargain_protocol.protocol_rules import check_time, check_memo
class BargainingCancellationDetails(object):
    '''
    Details of a BargainingCancellation message
    '''

    '''
    ATTRIBUTES
    buyer_data = arbitrary data that may be used by the buyer
    memo = utf-8 encoded, plain-text (no formatting) note that should be displayed to the receiver (part of the negotiation)
    seller_data = arbitrary data that may be used by the seller
    time = unix timestamp associated to the message
    '''

    '''
    CONSTRUCTOR
    '''
    def __init__(self,
                 time = 0,
                 buyer_data = '',
                 seller_data = '',
                 memo = ''):
        '''
        Constructor

        Parameters:
            time = unix timestamp associated to the message
            buyer_data = arbitrary data that may be used by the buyer
            seller_data = arbitrary data that may be used by the seller
            memo = utf-8 encoded, plain-text (no formatting) note that should be displayed to the receiver (part of the negotiation)
        '''
        self.time = time
        self.buyer_data = buyer_data
        self.seller_data = seller_data
        self.memo = memo

    '''
    SERIALIZATION
    '''
    def serialize(self):
        '''
        Serializes the message (protobuff)
        '''
        try:
            pbcd = bargaining_pb2.BargainingCancellationDetails()
            pbcd.time = self.time
            if self.buyer_data : pbcd.buyer_data = self.buyer_data
            if self.seller_data : pbcd.seller_data = self.seller_data
            if self.memo : pbcd.memo = self.memo
            return pbcd.SerializeToString()
        except:
            raise SerializationError('A problem occurred while serializing the BargainingCancellationDetails with Protocol Buffers')

    def deserialize(pbuff):
        '''
        Deserializes a protobuff message as a BargainingCancellationDetails

        Parameters:
            pbuff = protobuff message
        '''
        if not pbuff: raise DeserializationError('Protocol Buffer message is empty')
        try:
            pbcd = bargaining_pb2.BargainingCancellationDetails()
            pbcd.ParseFromString(pbuff)
        except:
            raise DeserializationError('A problem occurred while deserializing the Protocol Buffers message associated to a BargainingCancellationDetails')
        time = pbcd.time
        bdata = pbcd.buyer_data
        sdata = pbcd.seller_data
        memo = pbcd.memo
        return BargainingCancellationDetails(time, bdata, sdata, memo)

    deserialize = staticmethod(deserialize)

    '''
    VALIDATIONS
    '''
    def check_msg_fmt(self, network = MAINNET):
        '''
        Checks if message format is valid
        Returns True if message is valid, False otherwise

        Parameters:
            network = network used for the negotiation
        '''
        return check_time(self) and check_memo(self)
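
# A minimal round-trip sketch (not part of the original module); the field values below are illustrative only.
# details = BargainingCancellationDetails(time=1400000000, memo='Cancelling this negotiation')
# raw = details.serialize()
# restored = BargainingCancellationDetails.deserialize(raw)
# assert restored.time == details.time and restored.memo == details.memo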
| 32.745283 | 164 | 0.60098 | 347 | 3,471 | 5.907781 | 0.308357 | 0.039512 | 0.040976 | 0.040976 | 0.27122 | 0.27122 | 0.219512 | 0.181463 | 0.181463 | 0.181463 | 0 | 0.003918 | 0.338231 | 3,471 | 106 | 165 | 32.745283 | 0.88855 | 0.221262 | 0 | 0.153846 | 0 | 0 | 0.115497 | 0.028265 | 0 | 0 | 0 | 0 | 0 | 1 | 0.102564 | false | 0 | 0.102564 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ec7214eeeb62991a6453d69509efc08658687c8c | 37,375 | py | Python | geodesic.py | aconz2/blender-addon-geodesic | 8b5d48fcaaa42524cba43d2b0d9c328974e9ccd7 | [
"Unlicense"
] | null | null | null | geodesic.py | aconz2/blender-addon-geodesic | 8b5d48fcaaa42524cba43d2b0d9c328974e9ccd7 | [
"Unlicense"
] | null | null | null | geodesic.py | aconz2/blender-addon-geodesic | 8b5d48fcaaa42524cba43d2b0d9c328974e9ccd7 | [
"Unlicense"
] | null | null | null |
bl_info = {
'name': 'Geodesic',
'description': 'Geodesic like things; weighted shortest path and walking along a mesh',
'blender': (2, 92, 0),
'category': 'Object',
}
import itertools
import random
from functools import partial
from itertools import starmap
import math
import logging
import networkx as nx
import numpy as np
import mathutils
import bmesh
import bpy
from mathutils import Matrix, Vector
# import sys
# os.system(f'{sys.executable} -m ensurepip')
# os.system(f'{sys.executable} -m pip install networkx')
logger = logging.getLogger('geodesic')
TOL = 1e-4
VERT_TOL = 1e-2
AXES = {
'X': Vector((1, 0, 0)),
'Y': Vector((0, 1, 0)),
'Z': Vector((0, 0, 1)),
}
class RandomPairsWithReplacement:
def __init__(self, xs):
self.xs = xs
self.i = 0
random.shuffle(self.xs)
def commit(self): pass
def reject(self): pass
def draw(self):
if len(self.xs) - self.i < 2:
self.i = 0
random.shuffle(self.xs)
a = self.xs[self.i]
b = self.xs[self.i + 1]
self.i += 2
return a, b
class RandomPairsWithoutReplacement:
def __init__(self, xs):
# idk is it better to use a single deque and a marker of when the first element comes back around?
self.primary = xs
self.secondary = []
self.i = 0
self.a = None
self.b = None
self.done = False
self.reload()
def reload(self):
self.primary.extend(self.secondary)
random.shuffle(self.primary)
self.secondary.clear()
if len(self.primary) < 2:
self.done = True
def draw(self):
if self.done:
return None
if len(self.primary) < 2:
self.reload()
if self.done:
return None
self.a = self.primary.pop()
self.b = self.primary.pop()
return self.a, self.b
def commit(self):
assert self.a is not None and self.b is not None
self.a = None
self.b = None
def reject(self):
assert self.a is not None and self.b is not None
self.secondary.append(self.a)
self.secondary.append(self.b)
self.a = None
self.b = None
def const(n):
return n
def const_n(x, n):
return [x] * n
def rotated(v, rot):
v.rotate(rot)
return v
def uniform_n(low, hi, n):
return [random.uniform(low, hi) for _ in range(n)]
def vector_rejection(a, b):
return a - a.project(b)
def one_mesh_one_curve(objects):
if len(objects) != 2:
return None
a, b = objects
if a.type == 'MESH' and b.type == 'CURVE':
return a, b
elif b.type == 'MESH' and a.type == 'CURVE':
return b, a
else:
return None
def get_bmesh(obj, use_modifiers=False, context=None):
if use_modifiers:
if context is None:
context = bpy.context
dg = context.evaluated_depsgraph_get()
obj = obj.evaluated_get(dg)
ret = bmesh.new()
ret.from_mesh(obj.data)
return ret
def rotate_about_axis(axis, theta):
"""
rodrigues formula
Return the rotation matrix associated with counterclockwise rotation about
the given axis by theta radians.
"""
axis = axis.normalized()
a = math.cos(theta / 2.0)
b, c, d = -axis * math.sin(theta / 2.0)
aa, bb, cc, dd = a * a, b * b, c * c, d * d
bc, ad, ac, ab, bd, cd = b * c, a * d, a * c, a * b, b * d, c * d
return Matrix([[aa + bb - cc - dd, 2 * (bc + ad), 2 * (bd - ac)],
[2 * (bc - ad), aa + cc - bb - dd, 2 * (cd + ab)],
[2 * (bd + ac), 2 * (cd - ab), aa + dd - bb - cc]])
def build_graph_vert_pairs(G, it, vertex_group=None, min_weight=0.1):
for u, v in it:
if G.has_edge(u.index, v.index):
continue
d = (u.co - v.co).length
if vertex_group is not None:
# TODO the "right" thing to do is take the sum of a weighted average of weights encountered across the path for some sample amount
try:
half_weight = (vertex_group.weight(u.index) + vertex_group.weight(v.index)) / 2
except Exception:
continue
# stretch weight range from [0, 1] to [min_weight, 2]
multiplier = max(half_weight, min_weight / 2) * 2
d *= multiplier
G.add_edge(u.index, v.index, weight=d)
def build_graph(mesh, vertex_group=None, min_weight=0.1, cross_faces=False):
G = nx.Graph()
if cross_faces:
it = itertools.chain.from_iterable(itertools.combinations(f.verts, 2) for f in mesh.faces)
else:
it = (e.verts for e in mesh.edges)
build_graph_vert_pairs(G, it, vertex_group, min_weight)
mesh.verts.ensure_lookup_table()
for node in G:
G.add_node(node, vert=mesh.verts[node].co)
return G, mesh.verts
def remove_path(G, nodes):
for i in range(len(nodes) - 1):
G.remove_edge(nodes[i], nodes[i + 1])
def make_empty_curve(name='Curve'):
curve = bpy.data.objects.new(name, bpy.data.curves.new(name, 'CURVE'))
bpy.context.collection.objects.link(curve)
curve.data.dimensions = '3D'
return curve
def make_spline(curve, points, name='Spline', type='BEZIER', handle_type='AUTO'):
spline = curve.data.splines.new(type)
spline_points = spline.bezier_points if type == 'BEZIER' else spline.points
spline_points.add(len(points) - 1)
assert len(points) == len(spline_points)
for sp, p in zip(spline_points, points):
if isinstance(p, bmesh.types.BMVert):
p = p.co
sp.co = p
if type == 'BEZIER':
sp.handle_left_type = handle_type
sp.handle_right_type = handle_type
return spline
def make_curve(points, name='Curve', type='BEZIER', handle_type='AUTO'):
curve = make_empty_curve(name=name)
make_spline(curve, points, type=type, handle_type=handle_type)
return curve
def set_spline_handles(spline, handle_type):
if not spline.type == 'BEZIER':
return
for p in spline.bezier_points:
p.handle_left_type = handle_type
p.handle_right_type = handle_type
def set_curve_handles(curve, handle_type):
for spline in curve.data.splines:
set_spline_handles(spline, handle_type)
def _vert_or_index(v):
return getattr(v, 'index', v)
def try_shortest_path(G, a, b):
a = _vert_or_index(a)
b = _vert_or_index(b)
try:
return nx.algorithms.shortest_path(G, a, b, weight='weight')
except nx.exception.NodeNotFound:
logger.debug(f'NodeNotFound, {a} -> {b} vertex must have not been in the vertex group')
except nx.exception.NetworkXNoPath:
logger.debug(f'No such path {a} -> {b}')
return None
def get_path_points(G, path):
return [G.nodes[i]['vert'] for i in path]
def path_weight(G, path):
ret = 0
for i in range(len(path) - 1):
ret += G[path[i]][path[i + 1]]['weight']
return ret
def closest_vertex_on_face(mesh, face_index, point):
mesh.faces.ensure_lookup_table()
return min(mesh.faces[face_index].verts, key=lambda v: (v.co - point).length_squared)
def snap_curve_splines_shortest_path(G, obj, mesh, curve, vertex_group=None, cross_faces=False, closest_vert=True):
remove = []
splines = list(curve.data.splines)
for spline in splines:
points = spline.bezier_points if spline.type == 'BEZIER' else spline.points
if len(points) < 2:
continue
start = points[0].co
end = points[-1].co
succ1, loc1, normal1, face_index1 = obj.closest_point_on_mesh(start)
succ2, loc2, normal2, face_index2 = obj.closest_point_on_mesh(end)
if not succ1 or not succ2:
continue
if closest_vert:
a = closest_vertex_on_face(mesh, face_index1, obj.matrix_world @ loc1).index
b = closest_vertex_on_face(mesh, face_index2, obj.matrix_world @ loc2).index
path = try_shortest_path(G, a, b)
if path is None:
continue
points = get_path_points(G, path)
else:
# the start and end are on a face, try each path from each vert to each other vert and take the one with least total path length
p1 = loc1
p2 = loc2
mesh.faces.ensure_lookup_table()
paths = filter(None, starmap(partial(try_shortest_path, G), itertools.product(mesh.faces[face_index1].verts, mesh.faces[face_index2].verts)))
def score(path):
return (
path_weight(G, path) +
(p1 - G.nodes[path[0]]['vert']).length +
(p2 - G.nodes[path[-1]]['vert']).length
)
path = min(paths, key=score, default=None)
if path is None:
continue
points = [p1] + get_path_points(G, path) + [p2]
make_spline(curve, points, type=spline.type)
remove.append(spline)
for x in remove:
curve.data.splines.remove(x)
# options will be added that make it hard to tell without exhaustive checks whether
# paths that fit criteria will be found in reasonable time. maxtries_multiplier bounds our efforts
def generate_multiple_paths(G, n, maxtries_multiplier=10, with_replacement=True, min_length=2):
ret = []
pairs = (RandomPairsWithReplacement if with_replacement else RandomPairsWithoutReplacement)(list(G))
for _ in range(n * maxtries_multiplier):
pair = pairs.draw()
if pair is None:
break
a, b = pair
# TODO future things might reject this path
path = try_shortest_path(G, a, b)
# TODO if this is frequently caused by disconnected components, it would be smarter to partition
# the pairs up front and only try pairs with a connected component
if path is None or len(path) < min_length:
pairs.reject()
continue
ret.append(get_path_points(G, path))
pairs.commit()
if len(ret) == n:
break
return ret
# expected to be called with only edges containing 1 or 2 faces
def next_face(face, edge):
for f in edge.link_faces:
if f != face:
return f
return None
def line_line_intersection(a, b, c, d):
"""3space segment intersection"""
ret = mathutils.geometry.intersect_line_line(a, b, c, d)
if ret is None:
return None
x, y = ret
if ((x - y).length > TOL or # lines dont intersect
not point_on_line(x, a, b) or # intersection not on line 1
not point_on_line(y, c, d) # intersection not on line 2
):
return None
return x
def point_on_line(pt, line1, line2):
intersection, pct = mathutils.geometry.intersect_point_line(pt, line1, line2)
return (
(pt - intersection).length_squared < TOL and # closest point on line is this point
0 <= pct <= 1 # and it exists between the endpoints
)
# I don't know of a nicer way to do this
def make_face_face_rotation_matrix(face1, face2, axis):
dihedral = face1.normal.angle(face2.normal)
m = rotate_about_axis(axis, dihedral)
if math.isclose((m @ face1.normal).dot(face2.normal), 1, rel_tol=TOL):
return m
m = rotate_about_axis(axis, -dihedral)
assert math.isclose((m @ face1.normal).dot(face2.normal), 1, rel_tol=TOL)
return m
def closest_point_on_mesh(obj, point):
succ, loc, normal, face_index = obj.closest_point_on_mesh(point)
if not succ:
raise ValueError('failed to get closest_point_on_mesh')
return loc, normal, face_index
def walk_along_mesh(obj, mesh, start, heading):
"""
Expects heading to be along the face its starting on already, otherwise we project it onto the face
Returns Tuple[
list of N points including start of the walk along a mesh in direction of heading with length of heading,
list of N-1 face indices where the line from points[i] to points[i + 1] lies on face[i]
]
"""
loc, normal, face_index = closest_point_on_mesh(obj, start)
mesh.faces.ensure_lookup_table()
points = [loc]
face = mesh.faces[face_index]
faces = []
# TODO if we are given start at exactly a vert, the face is ambiguous, but maybe we should be nice and try each face
# and take the one with the smallest dot since the heading may imply which face was "intended"
# of course if you have coplanar faces the dot won't tell you enough, you'd then want to check which one the heading
# actually produces a path that doesn't end right away
# getting -0 issues
if not math.isclose(abs(heading.dot(face.normal)), 0, rel_tol=1e-3):
# if abs(heading.dot(face.normal)) > 1e-3:
logger.debug('reprojection heading onto face because dot is {:.6f} {} {}'.format(heading.dot(face.normal), heading, face.normal))
l = heading.length
heading = vector_rejection(heading, face.normal)
heading.normalize()
heading *= l
while heading.length_squared:
a = points[-1]
b = a + heading
# find first edge that intersects our heading
intersection = None
for edge in face.edges:
v1 = edge.verts[0].co
v2 = edge.verts[1].co
if point_on_line(a, v1, v2):
continue
intersection = line_line_intersection(a, b, v1, v2)
if intersection is not None:
break
# end of the road
if intersection is None:
# logger.debug('INTERSECTION IS NONE')
points.append(b)
faces.append(face.index)
assert len(points) - 1 == len(faces)
return points, faces
# back to start
# TODO this won't always be useful if the start is off an edge, we would have to check that an existing segment.dot(new_segment) == 0
if (intersection - points[0]).length < TOL:
# logger.debug('BACK TO START')
assert len(points) - 1 == len(faces)
return points, faces
# hit a vert
if (intersection - v1).length < VERT_TOL or (intersection - v2).length < VERT_TOL:
# logger.debug('HIT A VERT')
points.append(intersection)
faces.append(face.index)
assert len(points) - 1 == len(faces)
return points, faces
points.append(intersection)
new_face = next_face(face, edge)
if new_face is None:
# logger.debug('NEWFACE IS NONE')
faces.append(face.index)
assert len(points) - 1 == len(faces)
return points, faces
# assert (heading.length) >= (intersection - a).length
heading -= (intersection - a) # subtract off the amount we have
heading = make_face_face_rotation_matrix(face, new_face, v2 - v1) @ heading
faces.append(face.index)
face = new_face
assert len(points) - 1 == len(faces)
return points, faces
assert False
def snap_curve_splines_walk(obj, mesh, curve):
remove = []
mat = obj.matrix_world.copy()
mat.invert()
splines = list(curve.data.splines)
for spline in splines:
points = spline.bezier_points if spline.type == 'BEZIER' else spline.points
if len(points) < 2:
continue
# TODO does this need to be done anywhere else?
start = mat @ curve.matrix_world @ points[0].co
end = mat @ curve.matrix_world @ points[-1].co
points, faces = walk_along_mesh(obj, mesh, start, end - start)
make_spline(curve, points, type=spline.type)
remove.append(spline)
for x in remove:
curve.data.splines.remove(x)
def generate_walks(obj, mesh, curve, starts, gen_n_spokes, gen_angles, gen_lengths):
mesh.faces.ensure_lookup_table()
for i, start in enumerate(starts):
spokes = gen_n_spokes()
angles = gen_angles(spokes)
lengths = gen_lengths(spokes)
if isinstance(start, tuple):
start, heading = start
heading.normalize()
loc, normal, face_index = closest_point_on_mesh(obj, start)
else:
loc, normal, face_index = closest_point_on_mesh(obj, start)
# choose arb heading
heading = rotate_about_axis(normal, random.uniform(-math.pi, math.pi)) @ normal.orthogonal()
for length, angle in zip(lengths, angles):
h = (rotate_about_axis(normal, angle) @ heading) * length
points, faces = walk_along_mesh(obj, mesh, start, h)
make_spline(curve, points)
# not ideal but the graph isn't guaranteed to have real edges
def edges_from_verts(mesh, verts):
mesh.verts.ensure_lookup_table()
for i in range(len(verts) - 1):
a = mesh.verts[verts[i]]
b = mesh.verts[verts[i + 1]]
for e in a.link_edges:
if b == e.other_vert(a):
yield e.index
break
def dev():
C = bpy.context
D = bpy.data
to_remove = [x for x in D.objects if x.name.startswith('Curve')]
for x in to_remove:
D.objects.remove(x, do_unlink=True)
obj = D.objects['Dodec']
m = bmesh.new()
m.from_mesh(obj.data)
# m.transform(obj.matrix_world)
# shortest path test
G, verts = build_graph(m, vertex_group=obj.vertex_groups['Group'], cross_faces=True)
bc = D.objects['BezierCurve']
snap_curve_splines_shortest_path(G, obj, m, bc, closest_vert=False)
bc.matrix_world = obj.matrix_world
obj = D.objects['Plane']
m = bmesh.new()
m.from_mesh(obj.data)
# path runs into edge which has no other face
points, faces = walk_along_mesh(obj, m, Vector((-.99, -.99, 1)), Vector((1, 1.56, 0)).normalized() * 3)
curve = make_curve(points, handle_type='VECTOR')
curve.data.bevel_depth = 0.01
curve.matrix_world = obj.matrix_world
obj = D.objects['Cube.000']
m = bmesh.new()
m.from_mesh(obj.data)
# ends on first face
points, faces = walk_along_mesh(obj, m, Vector((-.99, -.99, 1)), Vector((1, 1.56, 0)).normalized() * 1)
curve = make_curve(points, handle_type='VECTOR')
curve.data.bevel_depth = 0.01
curve.matrix_world = obj.matrix_world
# goes to second face
points, faces = walk_along_mesh(obj, m, Vector((-0.99, -0.99, 1)), Vector((1, .76, 0)).normalized() * 3)
curve = make_curve(points, handle_type='VECTOR')
curve.data.bevel_depth = 0.01
curve.matrix_world = obj.matrix_world
# hits a vert
points, faces = walk_along_mesh(obj, m, Vector((-0.99, -0.99, 1)), Vector((1, 1, 0)).normalized() * 150)
curve = make_curve(points, handle_type='VECTOR')
curve.data.bevel_depth = 0.01
curve.matrix_world = obj.matrix_world
# initial heading is not along face
points, faces = walk_along_mesh(obj, m, Vector((-0.99, -0.99, 1)), Vector((1, .33, 0.1)).normalized() * 3)
curve = make_curve(points, handle_type='VECTOR')
curve.data.bevel_depth = 0.01
curve.matrix_world = obj.matrix_world
# TODO we could have early stopping when we wrap around, but as in the case here, the starting point is off the edge
# so just checking the intersection isn't sufficient, need to check the dot with all existing segments
points, faces = walk_along_mesh(obj, m, Vector((-1, 0, -.99)), Vector((0, 0, 1)).normalized() * 30)
curve = make_curve(points, handle_type='VECTOR')
curve.data.bevel_depth = 0.01
curve.matrix_world = obj.matrix_world
obj = D.objects['Cube.001']
m = bmesh.new()
m.from_mesh(obj.data)
points, faces = walk_along_mesh(obj, m, Vector((-0.99, -0.99, 1)), Vector((1, .56, 0)).normalized() * 100)
curve = make_curve(points, handle_type='VECTOR')
curve.data.bevel_depth = 0.01
curve.matrix_world = obj.matrix_world
obj = D.objects['Cube.002']
m = bmesh.new()
m.from_mesh(obj.data)
curve = make_empty_curve()
generate_walks(
obj,
m,
curve,
[f.calc_center_median() for f in m.faces],
partial(random.randint, 3, 5),
partial(np.linspace, 0, np.pi * 2, endpoint=False),
lambda n: [random.uniform(3, 5) for _ in range(n)],
)
curve.data.bevel_depth = 0.01
curve.matrix_world = obj.matrix_world
set_curve_handles(curve, 'VECTOR')
obj = D.objects['Cube.Particles']
m = bmesh.new()
m.from_mesh(obj.data)
depsg = C.evaluated_depsgraph_get()
particles = obj.evaluated_get(depsg).particle_systems[0].particles
curve = make_empty_curve('Curve.Particles')
mat = obj.matrix_world.copy()
mat.invert()
# for p in particles[:10]:
# v = rotated(Vector((0, 1, 0)), p.rotation)
# p1 = p.location
# p2 = p1 + v * 2
# make_spline(curve, [mat @ p1, mat @ p2])
# the mat stuff is to bring the particle location into object local space
generate_walks(
obj,
m,
curve,
[(mat @ p.location, rotated(Vector((0, 1, 0)), p.rotation)) for p in particles],
partial(const, 3),
partial(np.linspace, 0, np.pi * 2, endpoint=False),
partial(uniform_n, 1, 2),
)
curve.data.bevel_depth = 0.01
curve.matrix_world = obj.matrix_world
set_curve_handles(curve, 'VECTOR')
class GeodesicWeightedShortestPath(bpy.types.Operator):
"""Select shortest path between two vertices on a mesh using vertex weights"""
bl_idname = 'mesh.geodesic_select_shortest_weighted_path'
bl_label = 'Geodesic Select Shortest Weighted Path'
bl_options = {'REGISTER', 'UNDO'}
cross_faces: bpy.props.BoolProperty(
name='Cross Faces',
default=False,
description='Allow crossing faces in n-gons even if no edge connects the verts',
)
vertex_group: bpy.props.StringProperty(name='Vertex Group', default='')
@classmethod
def poll(cls, context):
return context.mode == 'EDIT_MESH'
def draw(self, context):
obj = context.object
self.layout.prop_search(self, 'vertex_group', obj, 'vertex_groups', text='Vertex Group')
self.layout.prop(self, 'cross_faces')
def execute(self, context):
obj = context.object
if len(obj.vertex_groups) == 0:
self.report({'WARNING'}, 'This mesh has no vertex groups, use the builtin select shortest path')
return {'CANCELLED'}
if obj.data.total_vert_sel != 2:
self.report({'WARNING'}, f'Select only 2 vertices, got {obj.data.total_vert_sel}')
return {'CANCELLED'}
if self.vertex_group == '':
self.vertex_group = obj.vertex_groups[obj.vertex_groups.active_index].name
m = bmesh.from_edit_mesh(obj.data)
selected_verts = [x for x in m.verts if x.select]
assert len(selected_verts) == 2
G, verts = build_graph(m, vertex_group=obj.vertex_groups[self.vertex_group], cross_faces=self.cross_faces)
path = try_shortest_path(G, selected_verts[0].index, selected_verts[1].index)
if path is None:
self.report({'WARNING'}, f'No path exists between the selected vertices {selected_verts[0].index} {selected_verts[1].index}')
# we use FINISHED here to allow selecting another vertex group that might have a path
return {'FINISHED'}
m.verts.ensure_lookup_table()
for p in path:
m.verts[p].select = True
m.edges.ensure_lookup_table()
for e in edges_from_verts(m, path):
m.edges[e].select = True
# TODO is the same thing useful for faces?
bmesh.update_edit_mesh(obj.data, False, False)
return {'FINISHED'}
class GeodesicSnapCurveToMeshShortestPath(bpy.types.Operator):
"""Snap each spline's in a curve to a mesh's face by optionally weighted shortest path"""
bl_idname = 'object.geodesic_snap_curve_to_mesh_shortest_path'
bl_label = 'Geodesic Snap Curve to Mesh Shortest Path'
bl_options = {'REGISTER', 'UNDO'}
vertex_group: bpy.props.StringProperty(name='Vertex Group', default='')
cross_faces: bpy.props.BoolProperty(
name='Cross Faces',
default=False,
description='Allow crossing faces in n-gons even if no edge connects the verts',
)
closest_vert: bpy.props.BoolProperty(
name='Closest Vert',
default=False,
description='Snap the start and end to the nearest vert',
)
@classmethod
def poll(cls, context):
return one_mesh_one_curve(context.selected_objects) is not None
def draw(self, context):
self.layout.prop_search(self, 'vertex_group', context.object, 'vertex_groups', text='Vertex Group')
self.layout.row().prop(self, 'cross_faces')
self.layout.row().prop(self, 'closest_vert')
def execute(self, context):
mesh_curve = one_mesh_one_curve(context.selected_objects)
if mesh_curve is None:
self.report({'ERROR'}, 'You need to select one mesh and one curve object')
return {'CANCELLED'}
obj, curve = mesh_curve
m = bmesh.new()
m.from_mesh(obj.data)
vertex_group = None if self.vertex_group == '' else obj.vertex_groups[self.vertex_group]
G, verts = build_graph(m, vertex_group=vertex_group, cross_faces=self.cross_faces)
snap_curve_splines_shortest_path(G, obj, m, curve, closest_vert=self.closest_vert)
curve.matrix_world = obj.matrix_world
return {'FINISHED'}
class GeodesicSnapCurveToMeshWalk(bpy.types.Operator):
"""Snap each spline's in a curve to the surface of mesh, using the splines start and endpoint as the heading"""
bl_idname = 'object.geodesic_snap_curve_to_mesh_walk'
bl_label = 'Geodesic Snap Curve to Mesh Walk'
bl_options = {'REGISTER', 'UNDO'}
handle_type: bpy.props.EnumProperty(
name='Handle Type',
items=[
('VECTOR', 'Vector', 'Vector'),
('AUTO', 'Auto', 'Auto'),
],
)
# TODO other things probably need to access the modified geometry too
use_modifiers: bpy.props.BoolProperty(name='Use Modifiers', default=True)
@classmethod
def poll(cls, context):
return one_mesh_one_curve(context.selected_objects) is not None
def execute(self, context):
mesh_curve = one_mesh_one_curve(context.selected_objects)
if mesh_curve is None:
self.report({'ERROR'}, 'You need to select one mesh and one curve object')
return {'CANCELLED'}
obj, curve = mesh_curve
m = get_bmesh(obj, use_modifiers=self.use_modifiers, context=context)
snap_curve_splines_walk(obj, m, curve)
set_curve_handles(curve, self.handle_type)
curve.matrix_world = obj.matrix_world
return {'FINISHED'}
class GeodesicGenerateShortestPaths(bpy.types.Operator):
"""Generate shortest paths between random vertex pairs"""
bl_idname = 'object.geodesic_generate_shortest_paths'
bl_label = 'Geodesic Generate Shortest Paths'
bl_options = {'REGISTER', 'UNDO'}
n_paths: bpy.props.IntProperty(name='Number of Paths', min=1, default=1)
with_replacement: bpy.props.BoolProperty(name='With Replacement', description='Re-use vertices if true', default=True)
vertex_group: bpy.props.StringProperty(name='Vertex Group', default='')
cross_faces: bpy.props.BoolProperty(
name='Cross Faces',
default=False,
description='Allow crossing faces in n-gons even if no edge connects the verts',
)
handle_type: bpy.props.EnumProperty(
name='Handle Type',
items=[
('VECTOR', 'Vector', 'Vector'),
('AUTO', 'Auto', 'Auto'),
],
)
bevel_depth: bpy.props.FloatProperty(name='Bevel Depth', default=0, min=0, precision=3, step=1)
seed: bpy.props.IntProperty(name='Seed', default=0)
min_length: bpy.props.IntProperty(name='Min Length', default=2, description='Don\'t accept paths with fewer than this many vertices')
def draw(self, context):
self.layout.prop(self, 'n_paths')
self.layout.prop_search(self, 'vertex_group', context.object, 'vertex_groups', text='Vertex Group')
self.layout.prop(self, 'with_replacement')
self.layout.prop(self, 'cross_faces')
self.layout.prop(self, 'min_length')
self.layout.prop(self, 'handle_type')
self.layout.prop(self, 'bevel_depth')
self.layout.prop(self, 'seed')
@classmethod
def poll(cls, context):
return context.object is not None and context.object.type == 'MESH'
def execute(self, context):
random.seed(self.seed)
obj = context.object
m = bmesh.new()
m.from_mesh(obj.data)
vertex_group = None if self.vertex_group == '' else obj.vertex_groups[self.vertex_group]
G, verts = build_graph(m, vertex_group=vertex_group, cross_faces=self.cross_faces)
curve = make_empty_curve()
pointss = generate_multiple_paths(G, self.n_paths, with_replacement=self.with_replacement, min_length=self.min_length)
for points in pointss:
make_spline(curve, points, type='BEZIER', handle_type=self.handle_type)
if len(pointss) < self.n_paths:
self.report({'WARNING'}, f'Only generated {len(pointss)} curves')
curve.matrix_world = obj.matrix_world
curve.data.bevel_depth = self.bevel_depth
return {'FINISHED'}
def constant_or_random_enum(name):
return bpy.props.EnumProperty(
name=name,
items=[
('CONSTANT', 'Constant', 'Constant'),
('RANDOM_UNIFORM', 'Uniform Random', 'Uniform Random'),
]
)
class GeodesicGenerateWalks(bpy.types.Operator):
"""Generate walks on the surface of a mesh"""
bl_idname = 'object.geodesic_generate_walks'
bl_label = 'Geodesic Generate Walks'
bl_options = {'REGISTER', 'UNDO'}
n_spokes_type: constant_or_random_enum('Number of Spokes type')
n_spokes: bpy.props.IntProperty(name='Number of Spokes', min=1, default=1)
n_spokes_random_uniform_min: bpy.props.IntProperty(name='Uniform Random Min', min=0, default=1)
n_spokes_random_uniform_max: bpy.props.IntProperty(name='Uniform Random Max', min=0, default=1)
subset: bpy.props.IntProperty(name='Number of Sources to use', min=0, default=0, description='Only use this many sources (0 for all)')
source: bpy.props.EnumProperty(
name='Source',
items=[
('FACE_CENTERS', 'Face Centers', 'Face Centers'),
('PARTICLES', 'Particles', 'Particles'),
],
)
particle_system: bpy.props.StringProperty(name='Particle System', default='')
particle_axis: bpy.props.EnumProperty(
name='Particle Axis',
items=[
('X', 'X', 'X'),
('Y', 'Y', 'Y'),
('Z', 'Z', 'Z'),
]
)
path_length_type: constant_or_random_enum('Length of Paths Type')
path_length: bpy.props.FloatProperty(name='Length of Paths', min=0.001, default=1)
path_length_random_uniform_min: bpy.props.FloatProperty(name='Uniform Random Min', min=0.001, default=1)
path_length_random_uniform_max: bpy.props.FloatProperty(name='Uniform Random Max', min=0.001, default=1)
spoke_angle_type: bpy.props.EnumProperty(
name='Spoke Angle Type',
items=[
('EQUAL', 'Equally Spaced', 'Equally Spaced'),
('RANDOM', 'Randomly Spaced', 'Randomly Spaced'),
]
)
handle_type: bpy.props.EnumProperty(
name='Handle Type',
items=[
('VECTOR', 'Vector', 'Vector'),
('AUTO', 'Auto', 'Auto'),
],
)
bevel_depth: bpy.props.FloatProperty(name='Bevel Depth', default=0, min=0, precision=3, step=1)
seed: bpy.props.IntProperty(name='Seed', default=0)
def draw(self, context):
self.layout.row(heading='Source').prop(self, 'source', expand=True)
if self.source == 'PARTICLES':
self.layout.prop_search(self, 'particle_system', context.object, 'particle_systems', text='Particle System')
self.layout.row(heading='Axis').prop(self, 'particle_axis', expand=True)
self.layout.prop(self, 'subset')
self.layout.row(heading='Num Spokes Type').prop(self, 'n_spokes_type', expand=True)
if self.n_spokes_type == 'CONSTANT':
self.layout.prop(self, 'n_spokes')
else:
row = self.layout.row()
row.prop(self, 'n_spokes_random_uniform_min', text='Min')
row.prop(self, 'n_spokes_random_uniform_max', text='Max')
self.layout.row(heading='Path Length Type').prop(self, 'path_length_type', expand=True)
if self.path_length_type == 'CONSTANT':
self.layout.prop(self, 'path_length')
else:
row = self.layout.row()
row.prop(self, 'path_length_random_uniform_min', text='Min')
row.prop(self, 'path_length_random_uniform_max', text='Max')
self.layout.row(heading='Spoke Angle Type').prop(self, 'spoke_angle_type', expand=True)
self.layout.row(heading='Handle Type').prop(self, 'handle_type', expand=True)
self.layout.prop(self, 'bevel_depth')
self.layout.prop(self, 'seed')
@classmethod
def poll(cls, context):
return context.object is not None and context.object.type == 'MESH'
def execute(self, context):
random.seed(self.seed)
obj = context.object
if self.source == 'PARTICLES':
if len(obj.particle_systems) == 0:
self.report({'ERROR'}, 'Object has no particle system')
self.source = 'FACE_CENTERS'
elif self.particle_system == '':
self.particle_system = obj.particle_systems[obj.particle_systems.active_index].name
m = bmesh.new()
m.from_mesh(obj.data)
curve = make_empty_curve()
if self.source == 'FACE_CENTERS':
source = [f.calc_center_median() for f in m.faces]
else:
depsg = context.evaluated_depsgraph_get()
particles = obj.evaluated_get(depsg).particle_systems[self.particle_system].particles
mat = obj.matrix_world.copy()
mat.invert()
source = [(mat @ p.location, rotated(AXES[self.particle_axis].copy(), p.rotation)) for p in particles]
if self.subset > 0:
random.shuffle(source)
source = source[:self.subset]
if self.n_spokes_type == 'CONSTANT':
spokes = partial(const, self.n_spokes)
else:
self.n_spokes_random_uniform_max = max(self.n_spokes_random_uniform_min, self.n_spokes_random_uniform_max)
spokes = partial(random.randint, self.n_spokes_random_uniform_min, self.n_spokes_random_uniform_max)
if self.path_length_type == 'CONSTANT':
lengths = partial(const_n, self.path_length)
else:
self.path_length_random_uniform_max = max(self.path_length_random_uniform_min, self.path_length_random_uniform_max)
lengths = partial(uniform_n, self.path_length_random_uniform_min, self.path_length_random_uniform_max)
if self.spoke_angle_type == 'EQUAL':
angles = partial(np.linspace, 0, np.pi * 2, endpoint=False)
else:
angles = partial(uniform_n, -np.pi, np.pi)
generate_walks(
obj=obj,
mesh=m,
curve=curve,
starts=source,
gen_n_spokes=spokes,
gen_angles=angles,
gen_lengths=lengths,
)
curve.matrix_world = obj.matrix_world
curve.data.bevel_depth = self.bevel_depth
set_curve_handles(curve, self.handle_type)
return {'FINISHED'}
classes = [
GeodesicWeightedShortestPath,
GeodesicSnapCurveToMeshShortestPath,
GeodesicSnapCurveToMeshWalk,
GeodesicGenerateShortestPaths,
GeodesicGenerateWalks,
]
# TODO figure out the right place for all the menu items
class GeodesicMenu(bpy.types.Menu):
bl_label = 'Geodesic'
bl_idname = 'OBJECT_MT_geodesic'
def draw(self, context):
for klass in classes:
self.layout.operator(klass.bl_idname)
def menu_func(self, context):
self.layout.menu(GeodesicMenu.bl_idname)
def register():
bpy.utils.register_class(GeodesicMenu)
for klass in classes:
bpy.utils.register_class(klass)
bpy.types.VIEW3D_MT_object.append(menu_func)
def unregister():
bpy.utils.unregister_class(GeodesicMenu)
for klass in classes:
bpy.utils.unregister_class(klass)
bpy.types.VIEW3D_MT_object.remove(menu_func)
if __name__ == '__dev__':
# I have a script in a testing blendfile with the following two lines in it to run this script
# filename = "/path/to/origami.py"
# exec(compile(open(filename).read(), filename, 'exec'), {'__name__': '__dev__'})
try:
unregister()
except Exception:
pass
register()
logging.basicConfig(level=logging.DEBUG)
# dev()
logger.debug('-' * 80)
elif __name__ == '__main__':
register()
# FUTURE detect interesections for either path discard or early stopping or some type of weaving (ie add intersection to each involved spline and shuffle their z height)
| 34.415285 | 169 | 0.632241 | 5,136 | 37,375 | 4.454634 | 0.12169 | 0.017308 | 0.012238 | 0.012238 | 0.443201 | 0.374536 | 0.319026 | 0.290441 | 0.251453 | 0.226321 | 0 | 0.012166 | 0.254448 | 37,375 | 1,085 | 170 | 34.447005 | 0.8089 | 0.112615 | 0 | 0.401022 | 0 | 0.003831 | 0.097735 | 0.012387 | 0 | 0 | 0 | 0.001843 | 0.014049 | 1 | 0.079183 | false | 0.003831 | 0.015326 | 0.016603 | 0.240102 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ec74193abc892521102cb21f200e6fc43face9bf | 937 | py | Python | birch/paths.py | shaypal5/birch | f3d1b7b9701f8a13fb62c9d606aa0a99697ae5b4 | [
"MIT"
] | 14 | 2018-02-11T13:56:02.000Z | 2022-02-17T14:29:45.000Z | birch/paths.py | shaypal5/birch | f3d1b7b9701f8a13fb62c9d606aa0a99697ae5b4 | [
"MIT"
] | 2 | 2018-04-19T14:35:48.000Z | 2020-07-22T14:58:11.000Z | birch/paths.py | shaypal5/birch | f3d1b7b9701f8a13fb62c9d606aa0a99697ae5b4 | [
"MIT"
] | 11 | 2018-04-10T19:41:22.000Z | 2022-02-17T13:18:32.000Z | """Path-related functions for birch."""
import os
def _legacy_cfg_dpath(namespace):
    return os.path.join(
        os.path.expanduser('~'),
        '.{}'.format(namespace),
    )


XDG_CONFIG_HOME_VARNAME = 'XDG_CONFIG_HOME'


def _xdg_cfg_dpath(namespace):
    if XDG_CONFIG_HOME_VARNAME in os.environ: # pragma: no cover
        return os.path.join(
            os.environ[XDG_CONFIG_HOME_VARNAME],
            namespace,
        )
    return os.path.join( # pragma: no cover
        os.path.expanduser('~'),
        '.config',
        namespace,
    )


XDG_CACHE_HOME_VARNAME = 'XDG_CACHE_HOME'


def _xdg_cache_dpath(namespace):
    if XDG_CACHE_HOME_VARNAME in os.environ: # pragma: no cover
        return os.path.join(
            os.environ[XDG_CACHE_HOME_VARNAME],
            namespace,
        )
    return os.path.join( # pragma: no cover
        os.path.expanduser('~'),
        '.cache',
        namespace,
    )
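
# A small usage sketch (not part of the original module); 'birch' here is just an example namespace.
# import os
# os.environ.pop('XDG_CONFIG_HOME', None)
# _xdg_cfg_dpath('birch')    # -> ~/.config/birch
# _xdg_cache_dpath('birch')  # -> ~/.cache/birch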
| 21.790698 | 65 | 0.607257 | 113 | 937 | 4.761062 | 0.238938 | 0.089219 | 0.111524 | 0.148699 | 0.526022 | 0.475836 | 0.475836 | 0.475836 | 0.475836 | 0.475836 | 0 | 0 | 0.277481 | 937 | 42 | 66 | 22.309524 | 0.794682 | 0.108858 | 0 | 0.4 | 0 | 0 | 0.058111 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.033333 | 0.033333 | 0.3 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ec75d8f9432b77a66901b060a53c5ffc3d0850ee | 1,085 | py | Python | 2015/day18.py | kyz/adventofcode | b3dd544624a8fc313ca1fad0d2f02f53bd79ce3d | [
"MIT"
] | null | null | null | 2015/day18.py | kyz/adventofcode | b3dd544624a8fc313ca1fad0d2f02f53bd79ce3d | [
"MIT"
] | null | null | null | 2015/day18.py | kyz/adventofcode | b3dd544624a8fc313ca1fad0d2f02f53bd79ce3d | [
"MIT"
] | null | null | null | def read_state(lines):
    out = dict()
    for y, row in enumerate(lines):
        for x, char in enumerate(row):
            out[x,y] = 1 if char == '#' else 0
    return out, len(lines[0]), len(lines)


def simulate(state, w, h, corners):
    neighbours = {(x,y): [(i,j) for i in range(x-1,x+2) for j in range(y-1,y+2)
                          if i >= 0 and i < w and j >= 0 and j < h and (x,y) != (i,j)]
                  for x,y in state}
    if corners:
        state[0,0] = state[w-1,0] = state[0,h-1] = state[w-1,h-1] = 1
    for c in range(100):
        newstate = dict()
        for s in state:
            n = sum([state[k] for k in neighbours[s]])
            newstate[s] = 1 if (n == 3 or (n == 2 and state[s])) else 0
        state = newstate
        if corners:
            state[0,0] = state[w-1,0] = state[0,h-1] = state[w-1,h-1] = 1
    return sum(state.values())


with open("day18.txt") as fh:
    state, w, h = read_state([l.strip() for l in fh.readlines()])

print("2015 day 18 part 1: %d" % simulate(state, w, h, False))
print("2015 day 18 part 2: %d" % simulate(state, w, h, True))
| 38.75 | 79 | 0.527189 | 202 | 1,085 | 2.821782 | 0.277228 | 0.084211 | 0.049123 | 0.078947 | 0.291228 | 0.147368 | 0.147368 | 0.147368 | 0.147368 | 0.147368 | 0 | 0.065274 | 0.294009 | 1,085 | 27 | 80 | 40.185185 | 0.678851 | 0 | 0 | 0.16 | 0 | 0 | 0.04977 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.08 | false | 0 | 0 | 0 | 0.16 | 0.08 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ec76a5ad52a10a7333ca973d639bf92204a67540 | 1,149 | py | Python | schedule/ScheduleData.py | unkSonert/Vk-Bot-Uni-Dubna | e229a0200d2033693442fa22d02dcedd171e5510 | [
"MIT"
] | null | null | null | schedule/ScheduleData.py | unkSonert/Vk-Bot-Uni-Dubna | e229a0200d2033693442fa22d02dcedd171e5510 | [
"MIT"
] | null | null | null | schedule/ScheduleData.py | unkSonert/Vk-Bot-Uni-Dubna | e229a0200d2033693442fa22d02dcedd171e5510 | [
"MIT"
] | 2 | 2019-03-20T19:04:20.000Z | 2019-03-20T21:32:19.000Z | import pickle
from os.path import isfile
from schedule.Week import Week
from schedule.default import list_of_week_days_names, list_of_lesson_names
class ScheduleData(object):
    homework = None
    week = None

    @staticmethod
    def get_day_number(name):
        return next(i for i, x in enumerate(list_of_week_days_names) if name.lower() in x)

    @staticmethod
    def get_lesson_number(name):
        return next(i for i, x in enumerate(list_of_lesson_names) if name.lower() in x)

    @staticmethod
    def dump():
        return pickle.dumps((ScheduleData.week, ScheduleData.homework))

    @staticmethod
    def load(schedule_data_bytes):
        tup = pickle.loads(schedule_data_bytes)
        ScheduleData.week = tup[0]
        ScheduleData.homework = tup[1]

    @staticmethod
    def init():
        if isfile("save_schedule/schedule"):
            with open("save_schedule/schedule", "rb") as f:
                ScheduleData.load(f.read())
        else:
            ScheduleData.week = Week.generate_current_week()
            ScheduleData.homework = [None, None, None, None, None, None, None, None, None]
ScheduleData.init()
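
# A small usage sketch (not part of the original module); the homework text is just an example value.
# ScheduleData.homework[0] = 'Read chapter 3'
# snapshot = ScheduleData.dump()   # bytes containing (week, homework)
# ScheduleData.load(snapshot)      # restores both class attributes from the snapshot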
| 28.02439 | 90 | 0.671889 | 149 | 1,149 | 5.006711 | 0.362416 | 0.085791 | 0.112601 | 0.128686 | 0.290885 | 0.254692 | 0.254692 | 0.254692 | 0.115282 | 0.115282 | 0 | 0.002275 | 0.234987 | 1,149 | 40 | 91 | 28.725 | 0.846416 | 0 | 0 | 0.166667 | 0 | 0 | 0.040035 | 0.038294 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.133333 | 0.1 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ec77c8bf9c27a03f5c191007f8bb35cba9a7948b | 1,373 | py | Python | examples/quick_start/nested.py | timdavis3991/do-py | 921d3b3bdeb108f3e6379dcacab6ed6ffaaa0776 | [
"MIT"
] | 7 | 2020-07-07T02:53:44.000Z | 2022-03-28T00:56:36.000Z | examples/quick_start/nested.py | timdavis3991/do-py | 921d3b3bdeb108f3e6379dcacab6ed6ffaaa0776 | [
"MIT"
] | 31 | 2020-03-24T17:55:05.000Z | 2022-03-31T04:27:14.000Z | examples/quick_start/nested.py | timdavis3991/do-py | 921d3b3bdeb108f3e6379dcacab6ed6ffaaa0776 | [
"MIT"
] | null | null | null | """
Nest a DataObject in another DataObject.
"""
from do_py import DataObject, R
class Contact(DataObject):
    _restrictions = {
        'phone_number': R.STR
    }


class Author(DataObject):
    """
    This DataObject is nested under `VideoGame` and nests `Contact`.
    :restriction id:
    :restriction name:
    :restriction contact: Nested DataObject that represents contact information for this author.
    """
    _restrictions = {
        'id': R.INT,
        'name': R.STR,
        'contact': Contact
    }


class VideoGame(DataObject):
    """
    This DataObject nests `Author`.
    :restriction id:
    :restriction name:
    :restriction author: Nested DataObject that represents author information for this video game.
    """
    _restrictions = {
        'id': R.INT,
        'name': R.NULL_STR,
        'author': Author
    }


# Data objects must be instantiated at their __init__ with a dictionary and strict True (default) or False.
instance = VideoGame({
    'id': 1985,
    'name': 'The Game',
    'author': {
        'id': 3,
        'name': 'You Lose',
        'contact': {
            'phone_number': '555-555-5555'
        }
    }
}, strict=False)
print(instance)
# output: VideoGame{"author": {"contact": {"phone_number": "555-555-5555"}, "id": 3, "name": "You Lose"}, "id": 1985, "name": "The Game"}
| 24.963636 | 137 | 0.593591 | 151 | 1,373 | 5.344371 | 0.397351 | 0.040892 | 0.05948 | 0.064436 | 0.391574 | 0.218092 | 0 | 0 | 0 | 0 | 0 | 0.03 | 0.271668 | 1,373 | 54 | 138 | 25.425926 | 0.777 | 0.477058 | 0 | 0.172414 | 0 | 0 | 0.155725 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.034483 | 0 | 0.241379 | 0.034483 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |