hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | 
qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
31319d47ec8ad06ca44bd80af1576e1016d0086b | 837 | py | Python | leetcode/0015_3Sum/result.py | theck17/notes | f32f0f4b8f821b1ed38d173ef0913efddd094b91 | [
"MIT"
] | null | null | null | leetcode/0015_3Sum/result.py | theck17/notes | f32f0f4b8f821b1ed38d173ef0913efddd094b91 | [
"MIT"
] | null | null | null | leetcode/0015_3Sum/result.py | theck17/notes | f32f0f4b8f821b1ed38d173ef0913efddd094b91 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# Author: C.K
# Email: theck17@163.com
# DateTime:2021-03-15 00:07:14
# Description:
from typing import List


class Solution:
    def threeSum(self, nums: List[int]) -> List[List[int]]:
        result = set()
        for i in range(0, len(nums) - 1):
            # Reduce the problem to two-sum with target -nums[i]
            two_sum = -nums[i]
            cache = set()
            for num in nums[i + 1:]:
                remaining = two_sum - num
                if remaining in cache:
                    # sorting creates a canonical ordering for each triplet
                    triplet = tuple(sorted([nums[i], remaining, num]))
                    # storing tuples in a set eliminates duplicate combinations
                    result.add(triplet)
                else:
                    cache.add(num)
        # convert to the declared List[List[int]] return type
        return [list(t) for t in result]


if __name__ == "__main__":
    pass
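    # Illustrative smoke test (not part of the original file); expected
    # triplets for this classic input: [-1, -1, 2] and [-1, 0, 1].
    print(Solution().threeSum([-1, 0, 1, 2, -1, -4]))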
| 28.862069 | 81 | 0.51135 | 101 | 837 | 4.138614 | 0.643564 | 0.043062 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.046602 | 0.384707 | 837 | 28 | 82 | 29.892857 | 0.765049 | 0.265233 | 0 | 0 | 0 | 0 | 0.013201 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0.0625 | 0 | 0 | 0.1875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
313582b593f74c9cfe2f0d1c30d9930aec3b40a3 | 12,957 | py | Python | src/robustness.py | mathigatti/sota-music-tagging-models | b4331b07fe45902af96830f2821926ab86e17d42 | [
"MIT"
] | null | null | null | src/robustness.py | mathigatti/sota-music-tagging-models | b4331b07fe45902af96830f2821926ab86e17d42 | [
"MIT"
] | null | null | null | src/robustness.py | mathigatti/sota-music-tagging-models | b4331b07fe45902af96830f2821926ab86e17d42 | [
"MIT"
] | null | null | null | # coding: utf-8
'''
Deformation codes are borrowed from MUDA
McFee et al., A software framework for musical data augmentation, 2015
https://github.com/bmcfee/muda
'''
import os
import time
import subprocess
import tempfile
import numpy as np
import pandas as pd
import datetime
import tqdm
import csv
import fire
import argparse
import pickle
from sklearn import metrics
import librosa
import soundfile as psf
import torch
import torch.nn as nn
from torch.autograd import Variable
from solver import skip_files
from sklearn.preprocessing import LabelBinarizer
import model as Model
TAGS = ['genre---downtempo', 'genre---ambient', 'genre---rock', 'instrument---synthesizer', 'genre---atmospheric', 'genre---indie', 'instrument---electricpiano', 'genre---newage', 'instrument---strings', 'instrument---drums', 'instrument---drummachine', 'genre---techno', 'instrument---guitar', 'genre---alternative', 'genre---easylistening', 'genre---instrumentalpop', 'genre---chillout', 'genre---metal', 'mood/theme---happy', 'genre---lounge', 'genre---reggae', 'genre---popfolk', 'genre---orchestral', 'instrument---acousticguitar', 'genre---poprock', 'instrument---piano', 'genre---trance', 'genre---dance', 'instrument---electricguitar', 'genre---soundtrack', 'genre---house', 'genre---hiphop', 'genre---classical', 'mood/theme---energetic', 'genre---electronic', 'genre---world', 'genre---experimental', 'instrument---violin', 'genre---folk', 'mood/theme---emotional', 'instrument---voice', 'instrument---keyboard', 'genre---pop', 'instrument---bass', 'instrument---computer', 'mood/theme---film', 'genre---triphop', 'genre---jazz', 'genre---funk', 'mood/theme---relaxing']
def read_file(tsv_file):
    tracks = {}
    with open(tsv_file) as fp:
        reader = csv.reader(fp, delimiter='\t')
        next(reader, None)  # skip header
        for row in reader:
            track_id = row[0]
            tracks[track_id] = {
                'path': row[3].replace('.mp3', '.npy'),
                'tags': row[5:],
            }
    return tracks
class Predict(object):
    def __init__(self, config):
        self.model_type = config.model_type
        self.model_load_path = config.model_load_path
        self.dataset = config.dataset
        self.data_path = config.data_path
        self.batch_size = config.batch_size
        self.is_cuda = torch.cuda.is_available()
        self.build_model()
        self.get_dataset()
        self.mod = config.mod
        self.rate = config.rate
        self.PRESETS = {
            "radio": ["0.01,1", "-90,-90,-70,-70,-60,-20,0,0", "-5"],
            "film standard": ["0.1,0.3", "-90,-90,-70,-64,-43,-37,-31,-31,-21,-21,0,-20", "0", "0", "0.1"],
            "film light": ["0.1,0.3", "-90,-90,-70,-64,-53,-47,-41,-41,-21,-21,0,-20", "0", "0", "0.1"],
            "music standard": ["0.1,0.3", "-90,-90,-70,-58,-55,-43,-31,-31,-21,-21,0,-20", "0", "0", "0.1"],
            "music light": ["0.1,0.3", "-90,-90,-70,-58,-65,-53,-41,-41,-21,-21,0,-11", "0", "0", "0.1"],
            "speech": ["0.1,0.3", "-90,-90,-70,-55,-50,-35,-31,-31,-21,-21,0,-20", "0", "0", "0.1"]
        }
        self.preset_dict = {1: "radio",
                            2: "film standard",
                            3: "film light",
                            4: "music standard",
                            5: "music light",
                            6: "speech"}
    def get_model(self):
        if self.model_type == 'fcn':
            self.input_length = 29 * 16000
            return Model.FCN()
        elif self.model_type == 'musicnn':
            self.input_length = 3 * 16000
            return Model.Musicnn(dataset=self.dataset)
        elif self.model_type == 'crnn':
            self.input_length = 29 * 16000
            return Model.CRNN()
        elif self.model_type == 'sample':
            self.input_length = 59049
            return Model.SampleCNN()
        elif self.model_type == 'se':
            self.input_length = 59049
            return Model.SampleCNNSE()
        elif self.model_type == 'short':
            self.input_length = 59049
            return Model.ShortChunkCNN()
        elif self.model_type == 'short_res':
            self.input_length = 59049
            return Model.ShortChunkCNN_Res()
        elif self.model_type == 'attention':
            self.input_length = 15 * 16000
            return Model.CNNSA()
        elif self.model_type == 'hcnn':
            self.input_length = 5 * 16000
            return Model.HarmonicCNN()
        else:
            print('model_type has to be one of [fcn, musicnn, crnn, sample, se, short, short_res, attention, hcnn]')
    def build_model(self):
        self.model = self.get_model()

        # cuda
        if self.is_cuda:
            self.model.cuda()

        # load model
        self.load(self.model_load_path)

    def get_dataset(self):
        if self.dataset == 'mtat':
            self.test_list = np.load('./../split/mtat/test.npy')
            self.binary = np.load('./../split/mtat/binary.npy')
        if self.dataset == 'msd':
            test_file = os.path.join('./../split/msd', 'filtered_list_test.cP')
            test_list = pickle.load(open(test_file, 'rb'), encoding='bytes')
            self.test_list = [value for value in test_list if value.decode() not in skip_files]
            id2tag_file = os.path.join('./../split/msd', 'msd_id_to_tag_vector.cP')
            self.id2tag = pickle.load(open(id2tag_file, 'rb'), encoding='bytes')
        if self.dataset == 'jamendo':
            test_file = os.path.join('./../split/mtg-jamendo', 'autotagging_top50tags-test.tsv')
            self.file_dict = read_file(test_file)
            self.test_list = list(self.file_dict.keys())
            self.mlb = LabelBinarizer().fit(TAGS)
        if self.dataset == 'jamendo-mood':
            test_file = os.path.join('./../split/mtg-jamendo-mood', 'autotagging_moodtheme-test.tsv')
            self.file_dict = read_file(test_file)
            self.test_list = list(self.file_dict.keys())
            self.mlb = LabelBinarizer().fit(TAGS)

    def load(self, filename):
        S = torch.load(filename)
        self.model.load_state_dict(S)

    def to_var(self, x):
        if torch.cuda.is_available():
            x = x.cuda()
        return Variable(x)
    def get_tensor(self, fn):
        # load audio
        if self.dataset == 'mtat':
            npy_path = os.path.join(self.data_path, 'mtat', 'npy', fn.split('/')[1][:-3]) + 'npy'
        elif self.dataset == 'msd':
            msid = fn.decode()
            filename = '{}/{}/{}/{}.npy'.format(msid[2], msid[3], msid[4], msid)
            npy_path = os.path.join(self.data_path, filename)
        elif self.dataset == 'jamendo':
            filename = self.file_dict[fn]['path']
            npy_path = os.path.join(self.data_path, filename)
        elif self.dataset == 'jamendo-mood':
            filename = self.file_dict[fn]['path']
            npy_path = os.path.join(self.data_path, filename)
        raw = np.load(npy_path, mmap_mode='r')
        raw = self.modify(raw, self.rate, self.mod)

        # split chunk
        length = len(raw)
        hop = (length - self.input_length) // self.batch_size
        x = torch.zeros(self.batch_size, self.input_length)
        for i in range(self.batch_size):
            x[i] = torch.Tensor(raw[i*hop:i*hop+self.input_length]).unsqueeze(0)
        return x
    def modify(self, x, mod_rate, mod_type):
        if mod_type == 'time_stretch':
            return self.time_stretch(x, mod_rate)
        elif mod_type == 'pitch_shift':
            return self.pitch_shift(x, mod_rate)
        elif mod_type == 'dynamic_range':
            return self.dynamic_range_compression(x, mod_rate)
        elif mod_type == 'white_noise':
            return self.white_noise(x, mod_rate)
        else:
            print('choose from [time_stretch, pitch_shift, dynamic_range, white_noise]')

    def time_stretch(self, x, rate):
        '''
        [2 ** (-.5), 2 ** (.5)]
        '''
        return librosa.effects.time_stretch(x, rate)

    def pitch_shift(self, x, rate):
        '''
        [-1, 1]
        '''
        return librosa.effects.pitch_shift(x, 16000, rate)

    def dynamic_range_compression(self, x, rate):
        '''
        [4, 6]
        Music standard & Speech
        '''
        return self.sox(x, 16000, "compand", *self.PRESETS[self.preset_dict[rate]])
    @staticmethod
    def sox(x, fs, *args):
        assert fs > 0

        fdesc, infile = tempfile.mkstemp(suffix=".wav")
        os.close(fdesc)
        fdesc, outfile = tempfile.mkstemp(suffix=".wav")
        os.close(fdesc)

        psf.write(infile, x, fs)
        try:
            arguments = ["sox", infile, outfile, "-q"]
            arguments.extend(args)
            subprocess.check_call(arguments)
            x_out, fs = psf.read(outfile)
            x_out = x_out.T
            if x.ndim == 1:
                x_out = librosa.to_mono(x_out)
        finally:
            os.unlink(infile)
            os.unlink(outfile)
        return x_out
    def white_noise(self, x, rate):
        '''
        [0.1, 0.4]
        '''
        n_frames = len(x)
        noise_white = np.random.RandomState().randn(n_frames)
        noise_fft = np.fft.rfft(noise_white)
        # exponent 0 keeps the spectral filter flat, i.e. the noise stays white
        colored_filter = np.linspace(1, n_frames / 2 + 1, n_frames // 2 + 1) ** 0
        noise_filtered = noise_fft * colored_filter
        noise = librosa.util.normalize(np.fft.irfft(noise_filtered)) * (x.max())
        if len(noise) < len(x):
            x = x[:len(noise)]
        return (1 - rate) * x + (noise * rate)
    def get_auc(self, est_array, gt_array):
        roc_aucs = metrics.roc_auc_score(gt_array, est_array, average='macro')
        pr_aucs = metrics.average_precision_score(gt_array, est_array, average='macro')
        return roc_aucs, pr_aucs

    def test(self):
        roc_auc, pr_auc, loss = self.get_test_score()
        print('loss: %.4f' % loss)
        print('roc_auc: %.4f' % roc_auc)
        print('pr_auc: %.4f' % pr_auc)

    def get_test_score(self):
        self.model = self.model.eval()
        est_array = []
        gt_array = []
        losses = []
        reconst_loss = nn.BCELoss()
        for line in tqdm.tqdm(self.test_list):
            if self.dataset == 'mtat':
                ix, fn = line.split('\t')
            elif self.dataset == 'msd':
                fn = line
                if fn.decode() in skip_files:
                    continue
            elif self.dataset == 'jamendo':
                fn = line
            elif self.dataset == 'jamendo-mood':
                fn = line

            # load and split
            x = self.get_tensor(fn)

            # ground truth
            if self.dataset == 'mtat':
                ground_truth = self.binary[int(ix)]
            elif self.dataset == 'msd':
                ground_truth = self.id2tag[fn].flatten()
            elif self.dataset == 'jamendo':
                ground_truth = np.sum(self.mlb.transform(self.file_dict[fn]['tags']), axis=0)
            elif self.dataset == 'jamendo-mood':
                ground_truth = np.sum(self.mlb.transform(self.file_dict[fn]['tags']), axis=0)

            # forward
            x = self.to_var(x)
            y = torch.tensor([ground_truth.astype('float32') for _ in range(self.batch_size)])
            if self.is_cuda:  # guard the device move so CPU-only runs do not crash
                y = y.cuda()
            out = self.model(x)
            loss = reconst_loss(out, y)
            losses.append(float(loss.data))
            out = out.detach().cpu()

            # estimate
            estimated = np.array(out).mean(axis=0)
            est_array.append(estimated)
            gt_array.append(ground_truth)

        est_array, gt_array = np.array(est_array), np.array(gt_array)
        loss = np.mean(losses)
        roc_auc, pr_auc = self.get_auc(est_array, gt_array)
        return roc_auc, pr_auc, loss
if __name__ == '__main__':
    parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)
    parser.add_argument('--num_workers', type=int, default=0)
    parser.add_argument('--dataset', type=str, default='mtat', choices=['mtat', 'msd', 'jamendo', 'jamendo-mood'])
    parser.add_argument('--model_type', type=str, default='fcn',
                        choices=['fcn', 'musicnn', 'crnn', 'sample', 'se', 'short', 'short_res', 'attention', 'hcnn'])
    parser.add_argument('--batch_size', type=int, default=16)
    parser.add_argument('--model_load_path', type=str, default='.')
    parser.add_argument('--data_path', type=str, default='./data')
    parser.add_argument('--mod', type=str, default='time_stretch')
    parser.add_argument('--rate', type=float, default=0)

    config = parser.parse_args()

    p = Predict(config)
    p.test()
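    # Example invocation (illustrative arguments; paths depend on your setup):
    #   python robustness.py --dataset mtat --model_type fcn --mod white_noise --rate 0.1 \
    #       --model_load_path ./models/best_model.pth --data_path ./data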
| 39.027108 | 1,080 | 0.56317 | 1,635 | 12,957 | 4.316208 | 0.223853 | 0.022956 | 0.025507 | 0.019272 | 0.225167 | 0.190591 | 0.166926 | 0.117614 | 0.093666 | 0.07822 | 0 | 0.032525 | 0.281006 | 12,957 | 331 | 1,081 | 39.145015 | 0.724989 | 0.025237 | 0 | 0.161417 | 0 | 0.023622 | 0.17918 | 0.058687 | 0 | 0 | 0 | 0 | 0.003937 | 1 | 0.066929 | false | 0 | 0.086614 | 0 | 0.248032 | 0.019685 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3143e4df394889222436d2c1bdb781765f3da6bd | 223 | py | Python | example_bot/bot.py | JakeCover/Flare-DiscordPy | 24cc2541a6ef548583e46d58ae18abe72da5f37f | [
"MIT"
] | 1 | 2021-04-02T20:16:03.000Z | 2021-04-02T20:16:03.000Z | example_bot/bot.py | JakeCover/Flare-DiscordPy | 24cc2541a6ef548583e46d58ae18abe72da5f37f | [
"MIT"
] | null | null | null | example_bot/bot.py | JakeCover/Flare-DiscordPy | 24cc2541a6ef548583e46d58ae18abe72da5f37f | [
"MIT"
] | null | null | null | import os
from discord.ext.commands import Bot
from Flare import Flare
bot = Bot("~~")
bot.add_cog(Flare(bot))
@bot.command("ping")
async def ping_pong(ctx):
ctx.send("pong")
bot.run(os.environ.get("BOT_TOKEN"))
| 13.117647 | 36 | 0.695067 | 37 | 223 | 4.108108 | 0.567568 | 0.118421 | 0.144737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.139013 | 223 | 16 | 37 | 13.9375 | 0.791667 | 0 | 0 | 0 | 0 | 0 | 0.085202 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
3146c14380ad5914b64e35f3048435f94f9e6ee7 | 22,089 | py | Python | catalog/client/services/catalog.py | eoss-cloud/madxxx_catalog_api | ef37374a36129de4f0a6fe5dd46b5bc2e2f01d1d | [
"MIT"
] | null | null | null | catalog/client/services/catalog.py | eoss-cloud/madxxx_catalog_api | ef37374a36129de4f0a6fe5dd46b5bc2e2f01d1d | [
"MIT"
] | null | null | null | catalog/client/services/catalog.py | eoss-cloud/madxxx_catalog_api | ef37374a36129de4f0a6fe5dd46b5bc2e2f01d1d | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
""" EOSS catalog system
functionality for the catalog endpoint
"""
from utilities.web_utils import remote_file_exists
__author__ = "Thilo Wehrmann, Steffen Gebhardt"
__copyright__ = "Copyright 2016, EOSS GmbH"
__credits__ = ["Thilo Wehrmann", "Steffen Gebhardt"]
__license__ = "GPL"
__version__ = "1.0.0"
__maintainer__ = "Thilo Wehrmann"
__email__ = "twehrmann@eoss.cloud"
__status__ = "Production"
import datetime
import ujson
import time
import dateparser
import falcon
try:
    import cStringIO as StringIO
except ImportError:
    import StringIO
import csv
from xlsxwriter import Workbook
from dateutil.parser import parse
import numpy
from sqlalchemy import and_
import logging
from collections import defaultdict
from model.orm import Catalog_Dataset, Spatial_Reference
from api import General_Structure
from .db_calls import Persistance
from . import getKeysFromDict
from .tools import get_base_url, can_zip_response, compress_body, serialize, make_GeoJson
from api_logging import logger
def date_handler(obj):
    if hasattr(obj, 'isoformat'):
        return obj.isoformat()
    else:
        raise TypeError


GRID_SYSTEMS = {'Sentinel - 2A': 10,
                'LANDSAT_ETM': 11,
                'LANDSAT_ETM_SLC_OFF': 11,
                'OLI_TIRS': 11,
                'TIRS': 11}
class Catalog(object):
    """
    EOSS catalog class from web API
    """

    def __init__(self):
        self.logger = logging.getLogger('eoss.' + __name__)
        self.aggregations = defaultdict(list)
        for agg in Persistance().get_all_sensor_aggregations():
            self.aggregations[agg.aggregation_name.lower()].append(agg)
    def _query_(self, areas, dates, sensors, clouds):
        sensors_filter = list()
        grid_list = defaultdict(set)

        for sensor_grid in set(GRID_SYSTEMS.values()):
            if 'ref_group' in areas[0].keys():
                ref_type_id, ref_id = areas[0]['ref_group'], areas[0]['ref_id']
                spatial_query = Persistance().get_reference_by_sensorgrid(ref_id, ref_type_id, sensor_grid)
            elif 'aoi' in areas[0].keys():
                aoi = areas[0]['aoi']
                spatial_query = Persistance().get_referencebyaoi(aoi, sensor_grid)

            for grid in spatial_query.all():
                grid_list[sensor_grid].add(grid)

        if len(grid_list) == 0:
            # use the raw area spec in the message; ref_type_id/ref_id are not
            # bound when the request came in as an 'aoi' area
            description = 'Please specify valid reference object for data: %s' % str(areas[0])
            raise falcon.HTTPBadRequest('SensorGrid', description,
                                        href='http://docs.example.com/auth')

        joint_gridset = grid_list[10] | grid_list[11]  # TODO: better grid system handling from extra table?

        for item in sensors:
            sensor, level = item['sensor_name'], item['level']
            if len(sensor) > 0 and len(level) > 0:
                sensors_filter.append(and_(Catalog_Dataset.level == level, Catalog_Dataset.sensor == sensor))
            elif len(sensor) == 0 and len(level) > 0:
                sensors_filter.append(Catalog_Dataset.level == level)
            elif len(sensor) > 0 and len(level) == 0:
                sensors_filter.append(Catalog_Dataset.sensor == sensor)

        dates_filter = list()
        for item in dates:
            # ExtJS POST requests provide a unicode body
            if type(item["start_date"]) is unicode:
                item["start_date"] = parse(item["start_date"])
            if type(item["end_date"]) is unicode:
                item["end_date"] = parse(item["end_date"])
            dates_filter.append(
                and_(Catalog_Dataset.acq_time >= item["start_date"].isoformat(), Catalog_Dataset.acq_time <= item["end_date"].isoformat()))

        query = Persistance().find_dataset(dates_filter, sensors_filter, grid_list, joint_gridset, clouds)

        return query
    def _get_datasets(self, query):
        query_result = list()
        for ds in query:
            values = dict()
            types = dict()
            for k, v in ds.__dict__.iteritems():
                if '_' != k[0]:
                    values[k] = v
                    types[k] = type(v)
            x = General_Structure(values, types)
            x.__class__.__name__ = 'Catalog_Dataset'
            query_result.append(serialize(x, as_json=False)['data'])
        return query_result

    # TODO: tiles list as input - only first will be returned or exception thrown!
    def _query_tile_geom(self, tiles):
        tile_objs = Persistance().get_tile_geom(tiles)
        return tile_objs.all()
    def _export_query(self, found_dataset):
        row_keys = ['tile_identifier', 'entity_id', 'acq_time', 'clouds']
        resources = [('resources', 'metadata'), ('resources', 'quicklook')]
        row = list()
        rows = list()
        for k in row_keys:
            row.append(k)
        for k in resources:
            row.append(' '.join(k))
        row.append('data')
        rows.append(row)

        for ds in found_dataset:
            row = list()
            for k in row_keys:
                row.append(ds.get(k))
            for k in resources:
                row.append(getKeysFromDict(ds, k))
            if ds.get('sensor') in ['LANDSAT_TM', 'LANDSAT_ETM', 'LANDSAT_ETM_SLC_OFF']:
                if 'google' in ds.get('resources').keys():
                    row.append(getKeysFromDict(ds, ('resources', 'google', 'link')))
                elif 'usgs' in ds.get('resources').keys():
                    row.append(getKeysFromDict(ds, ('resources', 'usgs', 'link')))
                else:
                    row.append('?')
            elif ds.get('sensor') in ['OLI_TIRS', 'OLI', 'TIRS']:
                if 's3public' in ds.get('resources').keys():
                    row.append(getKeysFromDict(ds, ('resources', 's3public', 'zip')))
                elif 'google' in ds.get('resources').keys():
                    row.append(getKeysFromDict(ds, ('resources', 'google', 'link')))
            elif ds.get('sensor') in ['Sentinel-2A']:
                if 's3public' in ds.get('resources').keys():
                    if getKeysFromDict(ds, ('resources', 's3public')) != None:
                        row.append(getKeysFromDict(ds, ('resources', 's3public', 'zip')))
                    else:
                        row.append('?')
                else:
                    row.append('?')
            rows.append(row)

        return rows
class CatalogApi(Catalog):
    def __init__(self, my_router):
        Catalog.__init__(self)
        self.router = my_router
    def on_get(self, req, resp, format, check_resources=False):
        """Handles GET requests

        http://localhost:8000/catalog/search/result.json?from_date=2016-05-01&to_date=2016-06-02&sensor=sentinel2&ref_group=9&ref_id=73&clouds=50
        """
        BASE_URL = get_base_url(req.url)
        start_time = time.time()
        query_filter = req.params

        results = dict()
        results['action'] = 'catalog search'
        results['action-time'] = str(datetime.datetime.now())
        results.update({'query': query_filter})

        dates = list()
        sensor_list = list()
        try:
            for date_string in ['from_date', 'to_date']:
                date = dateparser.parse(req.params[date_string])
                if date is None:
                    description = 'Please format date properly, used %s:%s.' % (date_string, date)
                    raise falcon.HTTPBadRequest('DateFormat', description,
                                                href='http://docs.example.com/auth')
                else:
                    dates.append(date)
            if dates[0] == dates[1]:
                description = 'Given dates do not cover a date range. Please correct the date span. (%s-%s)' \
                              % (req.params['from_date'], req.params['to_date'])
                raise falcon.HTTPBadRequest('DateFormat', description,
                                            href='http://docs.example.com/auth')
            elif dates[0] > dates[1]:
                description = 'Given end date is before start date. Please reverse dates. (%s-%s)' \
                              % (req.params['from_date'], req.params['to_date'])
                raise falcon.HTTPBadRequest('DateFormat', description,
                                            href='http://docs.example.com/auth')

            if not req.params['sensor'].lower() in self.aggregations.keys():
                description = 'Sensor label is unknown in aggregation table, use %s' % str(map(str, self.aggregations.keys()))
                raise falcon.HTTPBadRequest('DateFormat', description,
                                            href='http://docs.example.com/auth')
            for agg in self.aggregations[req.params['sensor'].lower()]:
                sensor_list.append({"sensor_name": agg.sensor, "level": agg.level})

            ref_group, ref_id, clouds = int(req.params['ref_group']), int(req.params['ref_id']), int(req.params['clouds'])
        except KeyError, e:
            description = 'Search key: %s missing in query.' % e
            raise falcon.HTTPBadRequest('KeyError', description,
                                        href='http://docs.example.com/auth')
        except ValueError, e:
            description = 'Given parameters contain bad values: %s' % str(e)
            raise falcon.HTTPBadRequest('KeyError', description,
                                        href='http://docs.example.com/auth')

        query = self._query_([{"ref_group": ref_group, "ref_id": ref_id}],
                             [{"start_date": dates[0], "end_date": dates[1]}],
                             sensor_list, clouds)
        query_struct = {'area': [{"ref_group": ref_group, "ref_id": ref_id}],
                        'dates': [{"start_date": dates[0], "end_date": dates[1]}],
                        'sensors': sensor_list, 'clouds': clouds
                        }
        found_dataset = self._get_datasets(query)
        logger.info('[GET] /catalog/search/result.%s' % format, extra={x: str(y) for x, y in query_struct.iteritems()})

        if check_resources:
            for ds in found_dataset:
                if 's3public' in ds['resources'].keys():
                    if 'zip' in ds['resources']['s3public'].keys():
                        if not remote_file_exists(ds['resources']['s3public']['zip']):
                            print '%s missing' % ds['resources']['s3public']['zip']
        if format.lower() == 'json':
            if 'search/count' in req.url:
                results['count'] = query.count()
            else:
                results['count'] = query.count()
                results['found_dataset'] = found_dataset
                results['found_tiles'] = sorted(list(set([x['tile_identifier'] for x in found_dataset])))
                results['found_resources'] = [BASE_URL + self.router.reverse('dataset_entity', entity_id=x['entity_id'])
                                              for x in results['found_dataset']]
            results['processing_time'] = time.time() - start_time
        elif format.lower() == 'geojson':
            tilegrids = defaultdict(lambda: defaultdict(list))
            geoms, attrs = list(), list()
            for x in found_dataset:
                tilegrids[x['tile_identifier']]['acq_time'].append(x['acq_time'])
                # tilegrids[x['tile_identifier']]['acq_time_js'].append(
                #     int(time.mktime(dateparser.parse(x['acq_time']).timetuple())) * 1000)
                tilegrids[x['tile_identifier']]['tile_identifier'].append(x['tile_identifier'])
                tilegrids[x['tile_identifier']]['clouds'].append(x['clouds'])
            for tile_id in tilegrids.keys():
                tilegrids[tile_id]['count'] = len(tilegrids[tile_id]['clouds'])
                tilegrids[tile_id]['tile_identifier'] = tilegrids[tile_id]['tile_identifier'][0]

            tiles_dict = dict()
            if len(tilegrids.keys()) > 0:
                for ref_name, geom in self._query_tile_geom(tilegrids.keys()):
                    tiles_dict[ref_name] = geom
                for tile_id in tilegrids.keys():
                    geoms.append(ujson.loads(tiles_dict[tile_id]))
                    attrs.append(tilegrids[tile_id])
            results = make_GeoJson(geoms, attrs)
        elif format.lower() == 'csv':
            rows = self._export_query(found_dataset)
            si = StringIO.StringIO()
            cw = csv.writer(si, delimiter='\t')
            for row in rows:
                cw.writerow(row)
            results = si.getvalue().strip('\r\n')
        elif format.lower() == 'xlsx':
            rows = self._export_query(found_dataset)
            strIO = StringIO.StringIO()
            workbook = Workbook(strIO, {'in_memory': True, 'constant_memory': True})
            bold = workbook.add_format({'bold': True})
            big_bold = workbook.add_format({'bold': True, 'size': 20})
            italic = workbook.add_format({'italic': True})
            worksheet = workbook.add_worksheet(name='EOSS analysis')
            worksheet.write(0, 0, 'EOSS data analysis', big_bold)

            ref_obj = Persistance().get_reference(query_filter.get('ref_id'), query_filter.get('ref_group')).one()
            query_filter['reference_name'] = ref_obj.ref_name
            query_filter['reference_type'] = ref_obj.referencetype.name
            # {'clouds': '60', 'ref_id': '5502', 'from_date': '09/07/2016', 'to_date': '10/07/2016', 'ref_group': '12', 'sensor': 'Sentinel2'}
            r = 3
            worksheet.write(r - 1, 0, 'query filter:', big_bold)
            for c, k in enumerate(['sensor', 'from_date', 'to_date', 'clouds', 'reference_name', 'reference_type']):
                worksheet.write(r + c, 0, k, bold)
                worksheet.write(r + c, 1, query_filter[k])

            r = 13
            worksheet.write(r - 2, 0, 'query set:', big_bold)
            for c, k in enumerate(rows[0]):
                worksheet.write(r - 1, c, k, bold)
            for values in rows[1:]:
                for c, v in enumerate(values):
                    worksheet.write(r, c, v)
                r += 1
            workbook.close()
            strIO.seek(0)
            results = strIO.read()
        elif format.lower() == 'hist':
            found_tiles = sorted(list(set([x['tile_identifier'] for x in found_dataset])))
            result_list = []
            first = dict()
            first['tile_identifier'] = 'percentagelabel'
            first['span'] = 100
            result_list.append(first)

            data = numpy.zeros((len(found_dataset)))
            tileslist = []
            i = 0
            for x in found_dataset:
                tileslist.append(x['tile_identifier'])
                data[i] = float(x['clouds'])
                i = i + 1

            for t in found_tiles:
                ix = numpy.array(tileslist) == t
                subset_clouds = data[ix]
                num_scenes = sum(ix)
                hist_abs = numpy.histogram(subset_clouds, bins=[-1] + range(0, 120, 20))
                hist_rel = hist_abs[0] * 1.0 / num_scenes
                hist_struct = dict()
                hist_struct['tile_identifier'] = t
                hist_struct['span'] = 100
                hist_struct['scenes_perc_-1'] = hist_rel[0]
                hist_struct['scenes_perc_20'] = hist_rel[1]
                hist_struct['scenes_perc_40'] = hist_rel[2]
                hist_struct['scenes_perc_60'] = hist_rel[3]
                hist_struct['scenes_perc_80'] = hist_rel[4]
                hist_struct['scenes_perc_100'] = hist_rel[5]
                hist_struct['scenes_abs_-1'] = hist_abs[0][0]
                hist_struct['scenes_abs_20'] = hist_abs[0][1]
                hist_struct['scenes_abs_40'] = hist_abs[0][2]
                hist_struct['scenes_abs_60'] = hist_abs[0][3]
                hist_struct['scenes_abs_80'] = hist_abs[0][4]
                hist_struct['scenes_abs_100'] = hist_abs[0][5]
                result_list.append(hist_struct)
            results['found_tiles'] = result_list
        resp.status = falcon.HTTP_200

        if can_zip_response(req.headers):
            if format.lower() in ['hist', 'json', 'geojson']:
                resp.set_header('Content-Type', 'application/json')
                resp.set_header('Content-Encoding', 'gzip')
                resp.body = compress_body(ujson.dumps(results))
            elif format.lower() == 'csv':
                resp.set_header('Content-Type', 'text/csv')
                resp.set_header('Content-disposition', 'attachment;filename=%s;' % self.create_output_name('csv'))
                resp.set_header('Content-Encoding', 'gzip')
                resp.body = compress_body(results)
            elif format.lower() == 'xlsx':
                resp.set_header('Content-Type', 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet')
                resp.set_header('Content-disposition', 'attachment;filename=%s;' % self.create_output_name('xlsx'))
                resp.set_header('Content-Encoding', 'gzip')
                resp.body = compress_body(results)
        else:
            if format.lower() in ['hist', 'json', 'geojson']:
                resp.set_header('Content-Type', 'application/json')
                resp.body = ujson.dumps(results)
            elif format.lower() == 'csv':
                resp.set_header('Content-Type', 'text/csv')
                resp.set_header('Content-disposition', 'attachment;filename=%s;' % self.create_output_name('csv'))
                resp.body = results
            elif format.lower() == 'xlsx':
                resp.set_header('Content-Type', 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet')
                resp.set_header('Content-disposition', 'attachment;filename=%s;' % self.create_output_name('xlsx'))
                resp.body = results
    def create_output_name(self, extension):
        return 'EOSS_analysis_%s.%s' % (datetime.datetime.now().isoformat(), extension)
    def on_post(self, req, resp, format):
        """Handles POST requests

        {
            "daterange": [{
                "start_date": "05/31/2000",
                "end_date": "07/02/2003"
            }],
            "clouds": 1,
            "sensors": [
                {"sensor_name": "LANDSAT_ETM", "level": ""}],
            "areas": [{
                "ref_group": 12,
                "ref_id": 6208
            }]
        }

        {"clouds":20,"daterange":[{"start_date":"09/02/2015","end_date":"09/14/2016"}],
         "sensors":[{"name":"landsat"}],
         "areas":[{"ref_id":362,"ref_group":"9"}]}
        """
        # TODO: loop over areas
        sensor_list = list()
        results = dict()
        start_time = time.time()

        output = StringIO.StringIO()
        while True:
            chunk = req.stream.read(4096)
            if not chunk:
                break
            output.write(chunk)
        body = output.getvalue()
        output.close()
        try:
            struct = ujson.loads(body.decode('utf-8'))
        except ValueError, e:
            # try to decode x-www-form-urlencoded
            query_str = falcon.util.uri.decode(body.decode('utf-8'))
            query_str = query_str[query_str.find('{'):query_str.rfind('}') + 1]
            try:
                struct = ujson.loads(query_str)
            except ValueError, e:
                description = 'Given request is neither valid JSON nor a urlencoded POST body.'
                raise falcon.HTTPUnsupportedMediaType(description,
                                                      href='http://docs.example.com/auth')
        try:
            for s in struct['sensors']:
                if 'sensor_name' in s.keys() and 'level' in s.keys():
                    sensor_list.append(s)
                elif 'name' in s.keys():
                    if not s['name'].lower() in self.aggregations.keys():
                        description = 'Sensor label is unknown in aggregation table'
                        raise falcon.HTTPBadRequest('Catalog', description,
                                                    href='http://docs.example.com/auth')
                    for agg in self.aggregations[s['name'].lower()]:
                        sensor_list.append({"sensor_name": agg.sensor, "level": agg.level})
                else:
                    description = 'Sensor is not specified in query'
                    raise falcon.HTTPBadRequest('Catalog', description,
                                                href='http://docs.example.com/auth')

            query = self._query_(struct['areas'], struct['daterange'], sensor_list, struct['clouds'])
            query_struct = {'area': struct['areas'],
                            'dates': struct['daterange'],
                            'sensors': sensor_list, 'clouds': struct['clouds']
                            }
            logger.info('[POST] /catalog/search/result.%s' % format, extra={x: str(y) for x, y in query_struct.iteritems()})
        except KeyError, e:
            description = 'Search key: %s missing in query.' % e
            raise falcon.HTTPBadRequest('KeyError', description,
                                        href='http://docs.example.com/auth')

        results['count'] = query.count()
        found_dataset = self._get_datasets(query)
        results['found_dataset'] = found_dataset
        results['found_tiles'] = sorted(list(set([x['tile_identifier'] for x in found_dataset])))
        # results.update({'query': struct})
        resp.body = ujson.dumps(results)
        resp.status = falcon.HTTP_200
        results['processing_time'] = time.time() - start_time

        if can_zip_response(req.headers):
            resp.set_header('Content-Type', 'application/json')
            resp.set_header('Content-Encoding', 'gzip')
            resp.body = compress_body(ujson.dumps(results))
        else:
            resp.set_header('Content-Type', 'application/json')
            resp.body = ujson.dumps(results)
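
# Example requests against this endpoint (illustrative; host/port depend on the
# deployment, parameters taken from the docstrings above):
#   curl 'http://localhost:8000/catalog/search/result.json?from_date=2016-05-01&to_date=2016-06-02&sensor=sentinel2&ref_group=9&ref_id=73&clouds=50'
#   curl -X POST 'http://localhost:8000/catalog/search/result.json' \
#        -d '{"clouds":20,"daterange":[{"start_date":"09/02/2015","end_date":"09/14/2016"}],"sensors":[{"name":"landsat"}],"areas":[{"ref_id":362,"ref_group":"9"}]}'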
| 43.740594 | 145 | 0.552175 | 2,467 | 22,089 | 4.754763 | 0.160924 | 0.018414 | 0.017732 | 0.02728 | 0.369565 | 0.340324 | 0.306053 | 0.276812 | 0.263171 | 0.263171 | 0 | 0.016337 | 0.315542 | 22,089 | 504 | 146 | 43.827381 | 0.759508 | 0.024673 | 0 | 0.315522 | 0 | 0 | 0.171239 | 0.013069 | 0 | 0 | 0 | 0.003968 | 0 | 0 | null | null | 0 | 0.05598 | null | null | 0.002545 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
314a3567674f4832f50804842163798ba6755e31 | 1,322 | py | Python | 3rdParty/boost/1.71.0/libs/python/test/iterator.py | rajeev02101987/arangodb | 817e6c04cb82777d266f3b444494140676da98e2 | [
"Apache-2.0"
] | 12,278 | 2015-01-29T17:11:33.000Z | 2022-03-31T21:12:00.000Z | 3rdParty/boost/1.71.0/libs/python/test/iterator.py | rajeev02101987/arangodb | 817e6c04cb82777d266f3b444494140676da98e2 | [
"Apache-2.0"
] | 9,469 | 2015-01-30T05:33:07.000Z | 2022-03-31T16:17:21.000Z | 3rdParty/boost/1.71.0/libs/python/test/iterator.py | rajeev02101987/arangodb | 817e6c04cb82777d266f3b444494140676da98e2 | [
"Apache-2.0"
] | 892 | 2015-01-29T16:26:19.000Z | 2022-03-20T07:44:30.000Z | # Copyright David Abrahams 2004. Distributed under the Boost
# Software License, Version 1.0. (See accompanying
# file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
from __future__ import print_function
'''
>>> from iterator_ext import *
>>> from input_iterator import *
>>> x = list_int()
>>> x.push_back(1)
>>> x.back()
1
>>> x.push_back(3)
>>> x.push_back(5)
>>> for y in x:
... print(y)
1
3
5
>>> z = range(x)
>>> for y in z:
... print(y)
1
3
5
Range2 wraps a transform_iterator which doubles the elements it
traverses. This proves we can wrap input iterators
>>> z2 = range2(x)
>>> for y in z2:
... print(y)
2
6
10
>>> l2 = two_lists()
>>> for y in l2.primes:
... print(y)
2
3
5
7
11
13
>>> for y in l2.evens:
... print(y)
2
4
6
8
10
12
>>> ll = list_list()
>>> ll.push_back(x)
>>> x.push_back(7)
>>> ll.push_back(x)
>>> for a in ll: #doctest: +NORMALIZE_WHITESPACE
... for b in a:
... print(b, end='')
... print('')
...
1 3 5
1 3 5 7
'''
def run(args=None):
    import sys
    import doctest

    if args is not None:
        sys.argv = args
    return doctest.testmod(sys.modules.get(__name__))

if __name__ == '__main__':
    print("running...")
    import sys
    status = run()[0]
    if (status == 0): print("Done.")
    sys.exit(status)
| 16.734177 | 71 | 0.599849 | 220 | 1,322 | 3.45 | 0.445455 | 0.063241 | 0.039526 | 0.031621 | 0.023715 | 0 | 0 | 0 | 0 | 0 | 0 | 0.055992 | 0.229955 | 1,322 | 78 | 72 | 16.948718 | 0.689587 | 0.133888 | 0 | 0.153846 | 0 | 0 | 0.065156 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.307692 | 0 | 0.461538 | 0.230769 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
314c834bc34744a5153b8449b8d6ede84e3fa535 | 1,614 | py | Python | scripts/markov_rulesets.py | takuyakanbr/covfefe | 8d6a88c838945fc8c8b8c88d19b775ec48a998b7 | [
"BSD-3-Clause"
] | null | null | null | scripts/markov_rulesets.py | takuyakanbr/covfefe | 8d6a88c838945fc8c8b8c88d19b775ec48a998b7 | [
"BSD-3-Clause"
] | 4 | 2019-12-02T17:39:27.000Z | 2019-12-02T17:43:49.000Z | scripts/markov_rulesets.py | takuyakanbr/covfefe | 8d6a88c838945fc8c8b8c88d19b775ec48a998b7 | [
"BSD-3-Clause"
] | null | null | null | # Script to generate the necessary grammar rules for the
# markov generator output type
# Dataset:
# http://www.drmaciver.com/2009/12/i-want-one-meelyun-sentences/
import re
ALPHA = ' abcdefghijklmnopqrstuvwxyz'
# read data from file
with open('sentences', 'r', encoding="utf8") as f:
content = f.read().splitlines()
n = len(content)
freq = {}
# process sentences
for i in range(n):
    content[i] = re.sub('[^a-z]+', ' ', content[i].lower())
    for word in content[i].split(' '):
        if len(word) < 1: continue
        word = ' ' + word + ' '

        # sum up next-letter frequencies
        pc = ''
        for j in range(len(word) - 1):
            c = word[j]
            if pc != ' ': c = pc + c
            nc = word[j+1]
            if c not in freq:
                freq[c] = {}
                for a in ALPHA:
                    freq[c][a] = 0
            freq[c][nc] += 1
            pc = word[j]
# normalize frequencies
for c, d in freq.items():
    sum_ = sum(d.values())
    for nc in d:
        d[nc] /= sum_
# helper functions for printing rulesets
def make_name(c):
    if c == ' ': return '@mstart'
    return '@m' + c

def make_option(pc, c, nc):
    if nc == ' ': return pc + c + '|'
    if c == ' ': return '@m' + nc + '|'
    if len(pc) == 0: return '@m' + c + nc + '|'
    return pc + ',@m' + c + nc + '|'
# print rulesets
for c, d in freq.items():
    rule = make_name(c) + '='
    pc = c[:-1]
    c = c[-1]
    for nc in d:
        if d[nc] <= 0.0055: continue
        mult = max(1, int(d[nc] / 0.01))
        rule += make_option(pc, c, nc) * mult
    print(rule[:-1])
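
# Illustrative output shape (hypothetical frequencies, not actual results): for
# the bigram "th" the emitted line looks like "@mth=t,@mhe|t,@mhe|t,@mha|...",
# i.e. each next-letter option repeats roughly once per 1% of probability mass
# (see mult above).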
| 24.454545 | 64 | 0.502478 | 235 | 1,614 | 3.425532 | 0.378723 | 0.02236 | 0.018634 | 0.017391 | 0.077019 | 0.039752 | 0 | 0 | 0 | 0 | 0 | 0.022999 | 0.326518 | 1,614 | 65 | 65 | 24.830769 | 0.717571 | 0.185874 | 0 | 0.093023 | 1 | 0 | 0.059094 | 0.019954 | 0 | 0 | 0 | 0 | 0 | 1 | 0.046512 | false | 0 | 0.023256 | 0 | 0.116279 | 0.023256 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
315044a27a790f45a0932ccbc6e97fab229aec69 | 667 | py | Python | mk42/apps/users/migrations/0003_auto_20170614_0038.py | vint21h/mk42 | 1574d1143ea829212203f2be0b11b44de1e7c722 | [
"WTFPL"
] | 5 | 2017-06-18T17:04:49.000Z | 2017-11-02T11:44:36.000Z | mk42/apps/users/migrations/0003_auto_20170614_0038.py | vint21h/mk42 | 1574d1143ea829212203f2be0b11b44de1e7c722 | [
"WTFPL"
] | 13 | 2017-07-05T06:35:42.000Z | 2017-09-06T02:04:04.000Z | mk42/apps/users/migrations/0003_auto_20170614_0038.py | vint21h/mk42 | 1574d1143ea829212203f2be0b11b44de1e7c722 | [
"WTFPL"
] | 10 | 2017-06-29T05:31:52.000Z | 2017-10-27T09:31:32.000Z | # -*- coding: utf-8 -*-
# mk42
# mk42/apps/users/migrations/0003_auto_20170614_0038.py
# Generated by Django 1.11.2 on 2017-06-14 00:38
from __future__ import unicode_literals
from django.db import (
    migrations,
    models,
)
class Migration(migrations.Migration):

    dependencies = [
        ("users", "0002_auto_20170613_2124"),
    ]

    operations = [
        migrations.AlterField(
            model_name="user",
            name="language",
            field=models.CharField(choices=[("en", "English"), ("uk", "\u0423\u043a\u0440\u0430\u0457\u043d\u0441\u044c\u043a\u0430")], default="en", max_length=5, verbose_name="language"),
        ),
    ]
| 23 | 189 | 0.635682 | 79 | 667 | 5.189873 | 0.78481 | 0.058537 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.171756 | 0.214393 | 667 | 28 | 190 | 23.821429 | 0.610687 | 0.190405 | 0 | 0 | 1 | 0 | 0.226168 | 0.15514 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.125 | 0 | 0.3125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
31594d1cfb6364ea7147487913cd57829792bf34 | 2,530 | py | Python | scrapers/scrapsfbos.py | ndd365/showup | fae0cdc52d306e6bbb538e8f66afe1d7a51006b8 | [
"MIT"
] | 48 | 2016-12-12T13:59:30.000Z | 2021-01-22T01:34:39.000Z | scrapers/scrapsfbos.py | ndd365/showup | fae0cdc52d306e6bbb538e8f66afe1d7a51006b8 | [
"MIT"
] | null | null | null | scrapers/scrapsfbos.py | ndd365/showup | fae0cdc52d306e6bbb538e8f66afe1d7a51006b8 | [
"MIT"
] | 4 | 2017-02-02T16:59:47.000Z | 2017-08-23T11:05:47.000Z | import feedparser
from bs4 import BeautifulSoup
from dateutil.parser import parse
from datetime import timedelta
import pytz
from apiclient.discovery import build
from httplib2 import Http
from oauth2client import file, client, tools
from oauth2client.service_account import ServiceAccountCredentials
scopes = 'https://www.googleapis.com/auth/calendar'
credentials = ServiceAccountCredentials.from_json_keyfile_name(
    'client_secret.json', scopes)
http_auth = credentials.authorize(Http())
CAL = build('calendar', 'v3', http=credentials.authorize(Http()))
class Event(object):
    def __init__(self, name, start_date, end_date):
        self.name = name
        self.start_date = start_date
        self.end_date = end_date

    def __repr__(self):
        return self.name
def get_calendar_data():
    events = []
    url = "http://sfbos.org/events/feed"
    feed = feedparser.parse(url)
    for item in feed["items"]:
        event_name = item["title"]
        event_details = item["summary_detail"]["value"]
        soup = BeautifulSoup(event_details, 'html.parser')
        start_date_unaware = parse(soup.span.string)
        start_date = start_date_unaware.replace(tzinfo=pytz.UTC)
        end_date = start_date + timedelta(hours=1)
        event = Event(event_name, start_date, end_date)
        print event
        events.append(event)
    return events
def sync_to_google_calendar(events):
    for event in events:
        GMT_OFF = '-07:00'  # PDT/MST/GMT-7 (currently unused below)
        start_date = event.start_date.isoformat()
        end_date = event.end_date.isoformat()
        gcal_event = {
            'summary': event.name,
            'start': {'dateTime': start_date},
            'end': {'dateTime': end_date},
            'attendees': [
                # {'email': 'friend1@example.com'},
                # {'email': 'friend2@example.com'},
            ],
        }
        print gcal_event
        e = CAL.events().insert(calendarId='tn9cl12g4s7l978r0iqk3ieppk@group.calendar.google.com',
                                sendNotifications=True, body=gcal_event).execute()
        print e
def print_calendars():
    page_token = None
    while True:
        calendar_list = CAL.calendarList().list(pageToken=page_token).execute()
        for calendar_list_entry in calendar_list['items']:
            print calendar_list_entry
        page_token = calendar_list.get('nextPageToken')
        if not page_token:
            break
events = get_calendar_data()
sync_to_google_calendar(events)
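# print_calendars()  # optional helper (defined above) to list accessible calendars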
| 24.326923 | 98 | 0.650988 | 294 | 2,530 | 5.377551 | 0.394558 | 0.062619 | 0.02277 | 0.02024 | 0.058191 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012042 | 0.245059 | 2,530 | 103 | 99 | 24.563107 | 0.815707 | 0.032016 | 0 | 0 | 0 | 0 | 0.103152 | 0.021285 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.142857 | null | null | 0.079365 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
315f8e1d96aaa1b3755c0089f370b7d2dae3e33b | 646 | py | Python | setup.py | goofmint/qualityforward-py | 299c11cb4769fb8c42bfd2d553a5c1c1f042d2de | [
"MIT"
] | null | null | null | setup.py | goofmint/qualityforward-py | 299c11cb4769fb8c42bfd2d553a5c1c1f042d2de | [
"MIT"
] | null | null | null | setup.py | goofmint/qualityforward-py | 299c11cb4769fb8c42bfd2d553a5c1c1f042d2de | [
"MIT"
] | null | null | null | import setuptools
setuptools.setup(
    name="qualityforward",
    version="1.1",
    author="Atsushi Nakatsugawa",
    author_email="atsushi@moongift.jp",
    description="Python library for QualityForward API",
    long_description="This is a Python library for the QualityForward API. QualityForward is a cloud-based test management service.",
    long_description_content_type="text/markdown",
    url="https://cloud.veriserve.co.jp/",
    packages=setuptools.find_packages(),
    classifiers=[
        "Programming Language :: Python :: 3.7",
        "License :: OSI Approved :: MIT License",
        "Operating System :: OS Independent",
    ]
)
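
# Standard build/install workflow for this package (illustrative):
#   python setup.py sdist bdist_wheel
#   pip install dist/qualityforward-1.1.tar.gz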
| 34 | 125 | 0.695046 | 70 | 646 | 6.328571 | 0.7 | 0.058691 | 0.072235 | 0.13544 | 0.148984 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007619 | 0.187307 | 646 | 18 | 126 | 35.888889 | 0.83619 | 0 | 0 | 0 | 0 | 0 | 0.534056 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.058824 | 0 | 0.058824 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
31615431c1549d43bb6f82b1e2019a1850b2c3a4 | 1,945 | py | Python | backend/trips/models.py | repeating/PoputchikiInno | 54b60cfd3c40a25357667c4044fd477f3b6b9152 | [
"CC-BY-4.0"
] | 20 | 2021-09-23T16:33:34.000Z | 2022-01-08T08:56:10.000Z | backend/trips/models.py | repeating/PoputchikiInno | 54b60cfd3c40a25357667c4044fd477f3b6b9152 | [
"CC-BY-4.0"
] | null | null | null | backend/trips/models.py | repeating/PoputchikiInno | 54b60cfd3c40a25357667c4044fd477f3b6b9152 | [
"CC-BY-4.0"
] | 2 | 2021-09-23T16:31:39.000Z | 2021-12-17T01:02:01.000Z | from django.db import models
from django.contrib.auth.models import AbstractBaseUser, PermissionsMixin, AbstractUser
from django.utils import timezone
from django.utils.translation import gettext as _
from django import forms
from django.contrib.auth.hashers import make_password
from django.contrib.auth import get_user_model
from django.contrib.auth.models import User
from phonenumber_field.modelfields import PhoneNumberField
from datetime import datetime, timedelta
class CarTrip(models.Model):
    class Meta:
        verbose_name = _('carTrip')
        verbose_name_plural = _('cartrips')

    def __str__(self):
        return f'{self.driver_name} Car Trip'

    driver_name = models.CharField(max_length=200)
    destination = models.CharField(max_length=200)
    number_of_seats = models.IntegerField('number of seats')
    trip_date = models.CharField(max_length=200)
    pub_date = models.DateTimeField('date published')

    @classmethod
    def create(cls, driver_name, destination, number_of_seats, trip_date):
        trip = cls(driver_name=driver_name,
                   destination=destination,
                   number_of_seats=number_of_seats,
                   trip_date=trip_date,
                   pub_date=datetime.now()
                   )
        return trip

    def was_published_recently(self):
        now = timezone.now()
        # datetime.timedelta would fail here because only the datetime class is
        # imported; use the timedelta imported above instead
        return now - timedelta(days=1) <= self.pub_date <= now
class Relation(models.Model):
    class Meta:
        verbose_name = _('relation')
        verbose_name_plural = _('relation')

    trip_number = models.IntegerField('trip_number')
    hiker_name = models.CharField(max_length=200)

    def __str__(self):
        return f'{self.hiker_name} going on trip id = {self.trip_number}'

    @classmethod
    def create(cls, trip_number, hiker_name):
        rel = cls(trip_number=trip_number,
                  hiker_name=hiker_name,
                  )
        return rel
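
# Example usage (illustrative; note that create() only instantiates, so the
# caller still has to persist the row):
#   trip = CarTrip.create("Alice", "Innopolis", 3, "2021-10-01")
#   trip.save()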
| 32.966102 | 87 | 0.684833 | 235 | 1,945 | 5.412766 | 0.297872 | 0.062893 | 0.051101 | 0.066038 | 0.28066 | 0.221698 | 0 | 0 | 0 | 0 | 0 | 0.008719 | 0.233419 | 1,945 | 58 | 88 | 33.534483 | 0.8444 | 0 | 0 | 0.12766 | 0 | 0 | 0.078663 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.106383 | false | 0.021277 | 0.212766 | 0.042553 | 0.659574 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
31617ded12576d7c4174e3e0dd3d96d3659501e5 | 6,957 | py | Python | jcs/jcs_main.py | orenmel/lexsub | 4197fccf489670fdc4e5510971f4288b9d0b1625 | [
"Apache-2.0"
] | 26 | 2016-07-28T03:05:07.000Z | 2021-05-14T05:02:38.000Z | jcs/jcs_main.py | afcarl/lexsub | 4197fccf489670fdc4e5510971f4288b9d0b1625 | [
"Apache-2.0"
] | 11 | 2018-09-14T12:20:25.000Z | 2021-05-03T18:54:42.000Z | jcs/jcs_main.py | afcarl/lexsub | 4197fccf489670fdc4e5510971f4288b9d0b1625 | [
"Apache-2.0"
] | 18 | 2016-11-15T14:15:23.000Z | 2022-03-15T23:41:57.000Z | '''
Run lexical substitution experiments
'''
import sys
import time
import argparse
import re
import numpy
from jcs.jcs_io import extract_word_weight
from jcs.data.context_instance import ContextInstance
from jcs.jcs_io import vec_to_str
from jcs.jcs_io import vec_to_str_generated
from jcs.cs_embedding_inferrer import CsEmbeddingInferrer
from jcs.context2vec_inferrer import Context2vecInferrer
target_re = re.compile(".*__(.*)__.*")
def read_candidates(candidates_file):
    target2candidates = {}
    # finally.r::eventually;ultimately
    with open(candidates_file, 'r') as f:
        for line in f:
            segments = line.split('::')
            target = segments[0]
            candidates = set(segments[1].strip().split(';'))
            target2candidates[target] = candidates
    return target2candidates
def run_test(inferrer):
    if args.candidatesfile is not None:
        target2candidates = read_candidates(args.candidatesfile)
    else:
        # note: the ranked-output path below indexes target2candidates, so a
        # candidates file is effectively required for RANKED results
        target2candidates = None

    tfi = open(args.testfile, 'r')
    tfo = open(args.resultsfile, 'w')
    tfo_ranked = open(args.resultsfile+'.ranked', 'w')
    tfo_generated_oot = open(args.resultsfile+'.generated.oot', 'w')
    tfo_generated_best = open(args.resultsfile+'.generated.best', 'w')

    lines = 0
    while True:
        context_line = tfi.readline()
        if not context_line:
            break

        lst_instance = ContextInstance(context_line, args.no_pos)
        lines += 1
        if (args.debug == True):
            tfo.write("\nTest context:\n")
            tfo.write("***************\n")
            tfo.write(lst_instance.decorate_context())

        result_vec = inferrer.find_inferred(lst_instance, tfo)
        generated_results = inferrer.generate_inferred(result_vec, lst_instance.target, lst_instance.target_lemma, lst_instance.pos)

        tfo.write("\nGenerated lemmatized results\n")
        tfo.write("***************\n")
        tfo.write("GENERATED\t" + ' '.join([lst_instance.full_target_key, lst_instance.target_id]) + " ::: " + vec_to_str_generated(generated_results.iteritems(), args.topgenerated)+"\n")
        tfo_generated_oot.write(' '.join([lst_instance.full_target_key, lst_instance.target_id]) + " ::: " + vec_to_str_generated(generated_results.iteritems(), args.topgenerated)+"\n")
        tfo_generated_best.write(' '.join([lst_instance.full_target_key, lst_instance.target_id]) + " :: " + vec_to_str_generated(generated_results.iteritems(), 1)+"\n")

        filtered_results = inferrer.filter_inferred(result_vec, target2candidates[lst_instance.target_key], lst_instance.pos)
        tfo.write("\nFiltered results\n")
        tfo.write("***************\n")
        tfo.write("RANKED\t" + ' '.join([lst_instance.full_target_key, lst_instance.target_id]) + "\t" + vec_to_str(filtered_results.iteritems(), len(filtered_results))+"\n")
        tfo_ranked.write("RANKED\t" + ' '.join([lst_instance.full_target_key, lst_instance.target_id]) + "\t" + vec_to_str(filtered_results.iteritems(), len(filtered_results))+"\n")
        # print "end %f" % time.time()
        if lines % 10 == 0:
            print "Read %d lines" % lines

    print "Read %d lines in total" % lines
    print "Time per word: %f msec" % inferrer.msec_per_word()

    tfi.close()
    tfo.close()
    tfo_ranked.close()
    tfo_generated_oot.close()
    tfo_generated_best.close()
def run(args):
    print time.asctime(time.localtime(time.time()))

    if args.inferrer == 'emb':
        inferrer = CsEmbeddingInferrer(args.vocabfile, args.ignoretarget, args.contextmath, args.embeddingpath, args.embeddingpathc, args.testfileconll, args.bow_size, 10)
        print "Using CsEmbeddingInferrer"
    elif args.inferrer == 'lstm':
        inferrer = Context2vecInferrer(args.lstm_config, args.ignoretarget, args.contextmath, 10)
        print "Using Context2vecInferrer"
    else:
        raise Exception("Unknown inferrer type: " + args.inferrer)

    print time.asctime(time.localtime(time.time()))
    run_test(inferrer)
    print "Finished"
    print time.asctime(time.localtime(time.time()))

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='JCS utility')
    parser.add_argument('--inferrer', choices=['lstm', 'emb'],
                        default='lstm',
                        help='context type ("lstm", "emb")')

    # Only for Context2vecInferrer
    parser.add_argument('-lstm_config', action="store", dest="lstm_config", default=None, help="config file of lstm context model and respective word embeddings")

    # Only for CsEmbeddingInferrer
    parser.add_argument('-embeddingpath', action="store", dest="embeddingpath", default=None, help="prefix to files containing word embeddings")
    parser.add_argument('-embeddingpathc', action="store", dest="embeddingpathc", default=None, help="prefix to files containing context word embeddings")
    parser.add_argument('-vocabfile', action="store", dest="vocabfile")
    parser.add_argument('-bow', action='store', dest='bow_size', default=-1, type=int, help="context bag-of-words window size. 0 means entire sentence. -1 means syntactic dependency contexts.")

    # Common
    parser.add_argument('-targetsfile', action="store", dest="targetsfile", default=None)
    parser.add_argument('-testfile', action="store", dest="testfile")
    parser.add_argument('-testfileconll', action="store", dest="testfileconll", default=None, help="test file with sentences parsed in conll format")
    parser.add_argument('-candidatesfile', action="store", dest="candidatesfile", default=None)
    parser.add_argument('-resultsfile', action="store", dest="resultsfile")
    parser.add_argument('-contextmath', action="store", dest="contextmath", default=None, help="arithmetics used to consider context [add|mult|geomean|none]")
    parser.add_argument('--ignoretarget', action="store_true", dest="ignoretarget", default=False, help="ignore lhs target. compute only context compatibility.")
    parser.add_argument('--nopos', action='store_true', dest='no_pos', default=False, help="ignore part-of-speech of target word")
    parser.add_argument('-topgenerated', action="store", dest="topgenerated", type=int, default=10, help="top entries to print in generated parvecs")
    parser.add_argument('--debug', action='store_true', dest='debug')

    args = parser.parse_args(sys.argv[1:])

    config_file_name = args.resultsfile + ".CONFIG"
    cf = open(config_file_name, 'w')
    cf.write(' '.join(sys.argv) + '\n')
    cf.close()

    numpy.seterr(all='raise', divide='raise', over='raise', under='raise', invalid='raise')

    run(args)
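
# Example invocation (sketch: the script and data file names below are
# hypothetical; the flags are the ones defined by the parser above):
#
#   python jcs_main.py --inferrer lstm -lstm_config lstm.config \
#       -testfile lst.test -resultsfile results.txt -contextmath mult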

# ---------------------------------------------------------------------------
# File: examples/Api/channels.py | Repo: asheshambasta/csound-expression | License: BSD-3-Clause
# ---------------------------------------------------------------------------

import csnd6

class Control:
    def __init__(self, volume, frequency):
        engine = csnd6.Csound()
        engine.SetOption("-odac")
        engine.Compile("osc.csd")

        thread = csnd6.CsoundPerformanceThread(engine)
        thread.Play()

        self.engine = engine
        self.thread = thread
        self.set_volume(volume)
        self.set_frequency(frequency)

    def set_volume(self, volume):
        self.engine.SetChannel("volume", volume)

    def set_frequency(self, frequency):
        self.engine.SetChannel("frequency", frequency)

    def close(self):
        self.thread.Stop()
        self.thread.Join()
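
# Hedged usage sketch (not part of the original example). It assumes the
# accompanying "osc.csd" orchestra reads the "volume" and "frequency"
# control channels that Control sets above.
if __name__ == "__main__":
    ctrl = Control(0.5, 440.0)   # start playing: half volume, 440 Hz
    ctrl.set_frequency(880.0)    # jump up an octave
    ctrl.close()                 # stop and join the performance thread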

# ---------------------------------------------------------------------------
# File: src/GridCal/Gui/TowerBuilder/gui.py | Repo: SanPen/GridCal | License: BSD-3-Clause
# ---------------------------------------------------------------------------

# -*- coding: utf-8 -*-
################################################################################
## Form generated from reading UI file 'gui.ui'
##
## Created by: Qt User Interface Compiler version 5.15.2
##
## WARNING! All changes made in this file will be lost when recompiling UI file!
################################################################################
from PySide2.QtCore import *
from PySide2.QtGui import *
from PySide2.QtWidgets import *
from .matplotlibwidget import MatplotlibWidget
from .icons_rc import *
class Ui_Dialog(object):
    def setupUi(self, Dialog):
        if not Dialog.objectName():
            Dialog.setObjectName(u"Dialog")
        Dialog.resize(1183, 675)
        self.gridLayout = QGridLayout(Dialog)
        self.gridLayout.setObjectName(u"gridLayout")
        self.gridLayout.setContentsMargins(1, 1, 1, 1)
        self.tabWidget = QTabWidget(Dialog)
        self.tabWidget.setObjectName(u"tabWidget")
        self.tab_2 = QWidget()
        self.tab_2.setObjectName(u"tab_2")
        self.verticalLayout_6 = QVBoxLayout(self.tab_2)
        self.verticalLayout_6.setObjectName(u"verticalLayout_6")
        self.verticalLayout_6.setContentsMargins(0, 0, 0, 0)
        self.main_splitter = QSplitter(self.tab_2)
        self.main_splitter.setObjectName(u"main_splitter")
        self.main_splitter.setOrientation(Qt.Horizontal)
        self.frame_8 = QFrame(self.main_splitter)
        self.frame_8.setObjectName(u"frame_8")
        self.frame_8.setFrameShape(QFrame.NoFrame)
        self.frame_8.setFrameShadow(QFrame.Raised)
        self.verticalLayout_5 = QVBoxLayout(self.frame_8)
        self.verticalLayout_5.setObjectName(u"verticalLayout_5")
        self.verticalLayout_5.setContentsMargins(0, 0, 0, 0)
        self.frame_5 = QFrame(self.frame_8)
        self.frame_5.setObjectName(u"frame_5")
        sizePolicy = QSizePolicy(QSizePolicy.Minimum, QSizePolicy.Minimum)
        sizePolicy.setHorizontalStretch(0)
        sizePolicy.setVerticalStretch(0)
        sizePolicy.setHeightForWidth(self.frame_5.sizePolicy().hasHeightForWidth())
        self.frame_5.setSizePolicy(sizePolicy)
        self.frame_5.setFrameShape(QFrame.NoFrame)
        self.frame_5.setFrameShadow(QFrame.Raised)
        self.horizontalLayout = QHBoxLayout(self.frame_5)
        self.horizontalLayout.setObjectName(u"horizontalLayout")
        self.label_9 = QLabel(self.frame_5)
        self.label_9.setObjectName(u"label_9")
        self.horizontalLayout.addWidget(self.label_9)
        self.name_lineEdit = QLineEdit(self.frame_5)
        self.name_lineEdit.setObjectName(u"name_lineEdit")
        self.horizontalLayout.addWidget(self.name_lineEdit)
        self.verticalLayout_5.addWidget(self.frame_5)
        self.frame_6 = QFrame(self.frame_8)
        self.frame_6.setObjectName(u"frame_6")
        sizePolicy.setHeightForWidth(self.frame_6.sizePolicy().hasHeightForWidth())
        self.frame_6.setSizePolicy(sizePolicy)
        self.frame_6.setFrameShape(QFrame.NoFrame)
        self.frame_6.setFrameShadow(QFrame.Raised)
        self.horizontalLayout_3 = QHBoxLayout(self.frame_6)
        self.horizontalLayout_3.setObjectName(u"horizontalLayout_3")
        self.horizontalSpacer_2 = QSpacerItem(40, 20, QSizePolicy.Expanding, QSizePolicy.Minimum)
        self.horizontalLayout_3.addItem(self.horizontalSpacer_2)
        self.label_8 = QLabel(self.frame_6)
        self.label_8.setObjectName(u"label_8")
        self.horizontalLayout_3.addWidget(self.label_8)
        self.frequency_doubleSpinBox = QDoubleSpinBox(self.frame_6)
        self.frequency_doubleSpinBox.setObjectName(u"frequency_doubleSpinBox")
        self.frequency_doubleSpinBox.setDecimals(0)
        self.frequency_doubleSpinBox.setValue(50.000000000000000)
        self.horizontalLayout_3.addWidget(self.frequency_doubleSpinBox)
        self.label_11 = QLabel(self.frame_6)
        self.label_11.setObjectName(u"label_11")
        self.horizontalLayout_3.addWidget(self.label_11)
        self.rho_doubleSpinBox = QDoubleSpinBox(self.frame_6)
        self.rho_doubleSpinBox.setObjectName(u"rho_doubleSpinBox")
        self.rho_doubleSpinBox.setMaximum(9999999.000000000000000)
        self.rho_doubleSpinBox.setValue(100.000000000000000)
        self.horizontalLayout_3.addWidget(self.rho_doubleSpinBox)
        self.verticalLayout_5.addWidget(self.frame_6)
        self.splitter = QSplitter(self.frame_8)
        self.splitter.setObjectName(u"splitter")
        self.splitter.setMaximumSize(QSize(16777215, 16777215))
        self.splitter.setOrientation(Qt.Vertical)
        self.frame_3 = QFrame(self.splitter)
        self.frame_3.setObjectName(u"frame_3")
        self.frame_3.setFrameShape(QFrame.NoFrame)
        self.frame_3.setFrameShadow(QFrame.Raised)
        self.verticalLayout_8 = QVBoxLayout(self.frame_3)
        self.verticalLayout_8.setObjectName(u"verticalLayout_8")
        self.label_12 = QLabel(self.frame_3)
        self.label_12.setObjectName(u"label_12")
        self.verticalLayout_8.addWidget(self.label_12)
        self.wires_tableView = QTableView(self.frame_3)
        self.wires_tableView.setObjectName(u"wires_tableView")
        self.verticalLayout_8.addWidget(self.wires_tableView)
        self.frame_7 = QFrame(self.frame_3)
        self.frame_7.setObjectName(u"frame_7")
        self.frame_7.setFrameShape(QFrame.StyledPanel)
        self.frame_7.setFrameShadow(QFrame.Raised)
        self.horizontalLayout_4 = QHBoxLayout(self.frame_7)
        self.horizontalLayout_4.setObjectName(u"horizontalLayout_4")
        self.horizontalLayout_4.setContentsMargins(0, 0, 0, 0)
        self.add_to_tower_pushButton = QPushButton(self.frame_7)
        self.add_to_tower_pushButton.setObjectName(u"add_to_tower_pushButton")
        icon = QIcon()
        icon.addFile(u":/Icons/icons/plus.svg", QSize(), QIcon.Normal, QIcon.Off)
        self.add_to_tower_pushButton.setIcon(icon)
        self.horizontalLayout_4.addWidget(self.add_to_tower_pushButton)
        self.horizontalSpacer_3 = QSpacerItem(990, 20, QSizePolicy.Expanding, QSizePolicy.Minimum)
        self.horizontalLayout_4.addItem(self.horizontalSpacer_3)
        self.verticalLayout_8.addWidget(self.frame_7)
        self.splitter.addWidget(self.frame_3)
        self.frame_4 = QFrame(self.splitter)
        self.frame_4.setObjectName(u"frame_4")
        self.frame_4.setFrameShape(QFrame.NoFrame)
        self.frame_4.setFrameShadow(QFrame.Raised)
        self.verticalLayout_4 = QVBoxLayout(self.frame_4)
        self.verticalLayout_4.setObjectName(u"verticalLayout_4")
        self.verticalLayout_4.setContentsMargins(9, 9, 9, 9)
        self.label_10 = QLabel(self.frame_4)
        self.label_10.setObjectName(u"label_10")
        self.verticalLayout_4.addWidget(self.label_10)
        self.tower_tableView = QTableView(self.frame_4)
        self.tower_tableView.setObjectName(u"tower_tableView")
        self.verticalLayout_4.addWidget(self.tower_tableView)
        self.frame = QFrame(self.frame_4)
        self.frame.setObjectName(u"frame")
        self.frame.setFrameShape(QFrame.NoFrame)
        self.frame.setFrameShadow(QFrame.Raised)
        self.horizontalLayout_2 = QHBoxLayout(self.frame)
        self.horizontalLayout_2.setObjectName(u"horizontalLayout_2")
        self.horizontalLayout_2.setContentsMargins(0, 0, 0, 0)
        self.delete_from_tower_pushButton = QPushButton(self.frame)
        self.delete_from_tower_pushButton.setObjectName(u"delete_from_tower_pushButton")
        icon1 = QIcon()
        icon1.addFile(u":/Icons/icons/minus.svg", QSize(), QIcon.Normal, QIcon.Off)
        self.delete_from_tower_pushButton.setIcon(icon1)
        self.horizontalLayout_2.addWidget(self.delete_from_tower_pushButton)
        self.horizontalSpacer = QSpacerItem(40, 20, QSizePolicy.Expanding, QSizePolicy.Minimum)
        self.horizontalLayout_2.addItem(self.horizontalSpacer)
        self.compute_pushButton = QPushButton(self.frame)
        self.compute_pushButton.setObjectName(u"compute_pushButton")
        icon2 = QIcon()
        icon2.addFile(u":/Icons/icons/calc.svg", QSize(), QIcon.Normal, QIcon.Off)
        self.compute_pushButton.setIcon(icon2)
        self.compute_pushButton.setIconSize(QSize(16, 16))
        self.horizontalLayout_2.addWidget(self.compute_pushButton)
        self.verticalLayout_4.addWidget(self.frame)
        self.splitter.addWidget(self.frame_4)
        self.verticalLayout_5.addWidget(self.splitter)
        self.main_splitter.addWidget(self.frame_8)
        self.PlotFrame = QFrame(self.main_splitter)
        self.PlotFrame.setObjectName(u"PlotFrame")
        self.PlotFrame.setFrameShape(QFrame.NoFrame)
        self.PlotFrame.setFrameShadow(QFrame.Raised)
        self.verticalLayout_7 = QVBoxLayout(self.PlotFrame)
        self.verticalLayout_7.setObjectName(u"verticalLayout_7")
        self.verticalLayout_7.setContentsMargins(9, 9, 9, 9)
        self.label_4 = QLabel(self.PlotFrame)
        self.label_4.setObjectName(u"label_4")
        self.verticalLayout_7.addWidget(self.label_4)
        self.plotwidget = MatplotlibWidget(self.PlotFrame)
        self.plotwidget.setObjectName(u"plotwidget")
        self.verticalLayout_7.addWidget(self.plotwidget)
        self.frame_9 = QFrame(self.PlotFrame)
        self.frame_9.setObjectName(u"frame_9")
        self.frame_9.setMaximumSize(QSize(16777215, 24))
        self.frame_9.setFrameShape(QFrame.StyledPanel)
        self.frame_9.setFrameShadow(QFrame.Raised)
        self.horizontalLayout_5 = QHBoxLayout(self.frame_9)
        self.horizontalLayout_5.setObjectName(u"horizontalLayout_5")
        self.horizontalLayout_5.setContentsMargins(0, 0, 0, 0)
        self.horizontalSpacer_4 = QSpacerItem(19, 19, QSizePolicy.Expanding, QSizePolicy.Minimum)
        self.horizontalLayout_5.addItem(self.horizontalSpacer_4)
        self.acceptButton = QPushButton(self.frame_9)
        self.acceptButton.setObjectName(u"acceptButton")
        self.horizontalLayout_5.addWidget(self.acceptButton)
        self.verticalLayout_7.addWidget(self.frame_9)
        self.main_splitter.addWidget(self.PlotFrame)
        self.verticalLayout_6.addWidget(self.main_splitter)
self.tabWidget.addTab(self.tab_2, "")
self.tab = QWidget()
self.tab.setObjectName(u"tab")
self.verticalLayout_3 = QVBoxLayout(self.tab)
self.verticalLayout_3.setObjectName(u"verticalLayout_3")
self.frame_10 = QFrame(self.tab)
self.frame_10.setObjectName(u"frame_10")
self.frame_10.setFrameShape(QFrame.StyledPanel)
self.frame_10.setFrameShadow(QFrame.Raised)
self.gridLayout_2 = QGridLayout(self.frame_10)
self.gridLayout_2.setObjectName(u"gridLayout_2")
self.label_2 = QLabel(self.frame_10)
self.label_2.setObjectName(u"label_2")
self.gridLayout_2.addWidget(self.label_2, 0, 1, 1, 1)
self.label_6 = QLabel(self.frame_10)
self.label_6.setObjectName(u"label_6")
self.gridLayout_2.addWidget(self.label_6, 2, 0, 1, 1)
self.z_tableView_abcn = QTableView(self.frame_10)
self.z_tableView_abcn.setObjectName(u"z_tableView_abcn")
self.gridLayout_2.addWidget(self.z_tableView_abcn, 1, 0, 1, 1)
self.y_tableView_abcn = QTableView(self.frame_10)
self.y_tableView_abcn.setObjectName(u"y_tableView_abcn")
self.gridLayout_2.addWidget(self.y_tableView_abcn, 1, 1, 1, 1)
self.label_7 = QLabel(self.frame_10)
self.label_7.setObjectName(u"label_7")
self.gridLayout_2.addWidget(self.label_7, 4, 0, 1, 1)
self.z_tableView_abc = QTableView(self.frame_10)
self.z_tableView_abc.setObjectName(u"z_tableView_abc")
self.gridLayout_2.addWidget(self.z_tableView_abc, 3, 0, 1, 1)
self.label = QLabel(self.frame_10)
self.label.setObjectName(u"label")
self.gridLayout_2.addWidget(self.label, 0, 0, 1, 1)
self.z_tableView_seq = QTableView(self.frame_10)
self.z_tableView_seq.setObjectName(u"z_tableView_seq")
self.gridLayout_2.addWidget(self.z_tableView_seq, 5, 0, 1, 1)
self.label_3 = QLabel(self.frame_10)
self.label_3.setObjectName(u"label_3")
self.gridLayout_2.addWidget(self.label_3, 2, 1, 1, 1)
self.y_tableView_abc = QTableView(self.frame_10)
self.y_tableView_abc.setObjectName(u"y_tableView_abc")
self.gridLayout_2.addWidget(self.y_tableView_abc, 3, 1, 1, 1)
self.label_5 = QLabel(self.frame_10)
self.label_5.setObjectName(u"label_5")
self.gridLayout_2.addWidget(self.label_5, 4, 1, 1, 1)
self.y_tableView_seq = QTableView(self.frame_10)
self.y_tableView_seq.setObjectName(u"y_tableView_seq")
self.gridLayout_2.addWidget(self.y_tableView_seq, 5, 1, 1, 1)
self.verticalLayout_3.addWidget(self.frame_10)
self.tabWidget.addTab(self.tab, "")
self.gridLayout.addWidget(self.tabWidget, 4, 0, 1, 1)
self.retranslateUi(Dialog)
self.tabWidget.setCurrentIndex(0)
QMetaObject.connectSlotsByName(Dialog)
# setupUi
    def retranslateUi(self, Dialog):
        Dialog.setWindowTitle(QCoreApplication.translate("Dialog", u"Tower creation", None))
        self.label_9.setText(QCoreApplication.translate("Dialog", u"Name", None))
        self.label_8.setText(QCoreApplication.translate("Dialog", u"Frequency (Hz)", None))
        self.label_11.setText(QCoreApplication.translate("Dialog", u"Earth resistivity (Ohm/m^3)", None))
        self.label_12.setText(QCoreApplication.translate("Dialog", u"Wire catalogue", None))
#if QT_CONFIG(tooltip)
        self.add_to_tower_pushButton.setToolTip(QCoreApplication.translate("Dialog", u"Add wire", None))
#endif // QT_CONFIG(tooltip)
        self.add_to_tower_pushButton.setText("")
        self.label_10.setText(QCoreApplication.translate("Dialog", u"Wire compisition", None))
#if QT_CONFIG(tooltip)
        self.delete_from_tower_pushButton.setToolTip(QCoreApplication.translate("Dialog", u"Delete wire", None))
#endif // QT_CONFIG(tooltip)
        self.delete_from_tower_pushButton.setText("")
#if QT_CONFIG(tooltip)
        self.compute_pushButton.setToolTip(QCoreApplication.translate("Dialog", u"Compute matrices", None))
#endif // QT_CONFIG(tooltip)
        self.compute_pushButton.setText("")
        self.label_4.setText(QCoreApplication.translate("Dialog", u"Tower", None))
        self.acceptButton.setText(QCoreApplication.translate("Dialog", u"Accept", None))
        self.tabWidget.setTabText(self.tabWidget.indexOf(self.tab_2), QCoreApplication.translate("Dialog", u"Tower designer", None))
        self.label_2.setText(QCoreApplication.translate("Dialog", u" Y shunt (uS / km) for ABCN", None))
        self.label_6.setText(QCoreApplication.translate("Dialog", u" Z series (Ohm / km) for ABC", None))
        self.label_7.setText(QCoreApplication.translate("Dialog", u" Z series (Ohm / km) in sequence components", None))
        self.label.setText(QCoreApplication.translate("Dialog", u" Z series (Ohm / km) for ABCN", None))
        self.label_3.setText(QCoreApplication.translate("Dialog", u" Y shunt (uS / km) for ABC", None))
        self.label_5.setText(QCoreApplication.translate("Dialog", u" Y shunt (uS / km) for the sequence components", None))
        self.tabWidget.setTabText(self.tabWidget.indexOf(self.tab), QCoreApplication.translate("Dialog", u"Impedance matrices", None))
    # retranslateUi

# ---------------------------------------------------------------------------
# File: util/mccLog.py | Repo: ccchooko/webControlClient | License: Apache-2.0
# ---------------------------------------------------------------------------

# -*- coding: utf8 -*-
import logging
from datetime import datetime

class mccLog(object):
    def __init__(self):
        logging.basicConfig(level=logging.DEBUG,
                            format='%(asctime)s %(levelname)s %(message)s',
                            datefmt='%Y-%m-%d %H:%M:%S',
                            filename=datetime.now().strftime("%Y%m%d%H%M%S") + '.log',
                            filemode='a')

    def mccWriteLog(self, logContent):
        logging.info(logContent)

    def mccError(self, errorContent):
        logging.error(errorContent)
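
# Hedged usage sketch (illustrative, not in the original module):
if __name__ == '__main__':
    log = mccLog()                  # configures a timestamped file, e.g. 20210624181200.log
    log.mccWriteLog('job started')  # INFO-level entry
    log.mccError('job failed')      # ERROR-level entry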

# ---------------------------------------------------------------------------
# File: sleekxmpp/plugins/xep_0027/stanza.py | Repo: elrond79/SleekXMPP | License: BSD-3-Clause
# ---------------------------------------------------------------------------

"""
    SleekXMPP: The Sleek XMPP Library
    Copyright (C) 2012 Nathanael C. Fritz, Lance J.T. Stout
    This file is part of SleekXMPP.

    See the file LICENSE for copying permission.
"""
from sleekxmpp.xmlstream import ElementBase

class Signed(ElementBase):
    name = 'x'
    namespace = 'jabber:x:signed'
    plugin_attrib = 'signed'
    interfaces = set(['signed'])
    is_extension = True

    def set_signed(self, value):
        parent = self.parent()
        xmpp = parent.stream
        data = xmpp['xep_0027'].sign(value, parent['from'])
        if data:
            self.xml.text = data
        else:
            del parent['signed']

    def get_signed(self):
        return self.xml.text


class Encrypted(ElementBase):
    name = 'x'
    namespace = 'jabber:x:encrypted'
    plugin_attrib = 'encrypted'
    interfaces = set(['encrypted'])
    is_extension = True

    def set_encrypted(self, value):
        parent = self.parent()
        xmpp = parent.stream
        data = xmpp['xep_0027'].encrypt(value, parent['to'].bare)
        if data:
            self.xml.text = data
        else:
            del parent['encrypted']

    def get_encrypted(self):
        parent = self.parent()
        xmpp = parent.stream
        if self.xml.text:
            return xmpp['xep_0027'].decrypt(self.xml.text, parent['to'])
        return None
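
# Hedged wiring sketch (the actual registration lives in the XEP-0027 plugin
# module, which is not part of this file):
#
#   from sleekxmpp.stanza import Message, Presence
#   from sleekxmpp.xmlstream import register_stanza_plugin
#
#   register_stanza_plugin(Message, Encrypted)
#   register_stanza_plugin(Presence, Signed)
#
# Once registered, assigning msg['encrypted'] = 'secret' routes through
# set_encrypted() above (encrypting via the xep_0027 plugin), and reading
# the same interface decrypts the stored <x/> payload.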

# ---------------------------------------------------------------------------
# File: tcptofpc.py | Repo: catenacyber/fuzzpcap | License: MIT
# ---------------------------------------------------------------------------

# tshark -r input.pcap -qz "follow,tcp,raw,0"
import struct
import sys
import binascii
import subprocess

result = subprocess.Popen(["tshark", "-r", sys.argv[1], "-qz", "follow,tcp,raw,0"],
                          stdout=subprocess.PIPE)
sys.stdout.buffer.write(b"FPC\x80")              # stream magic
for i in range(4):
    result.stdout.readline()                     # skip tshark's follow-output header
dp = result.stdout.readline().split(b":")[2]     # port from the first "Node N: addr:port" line
sp = result.stdout.readline().split(b":")[2]     # port from the second "Node N: addr:port" line
sys.stdout.buffer.write(struct.pack('>H', int(sp)))
sys.stdout.buffer.write(struct.pack('>H', int(dp)))
for l in result.stdout.readlines():
    s2c = 0
    if l[0] == 9:                                # 0x09: tshark tab-indents one direction
        l = l[1:]
        s2c = 1
    try:
        r = binascii.unhexlify(l[:-1])           # payload lines are hex-encoded
    except:
        continue                                 # skip the non-hex trailer/banner lines
    sys.stdout.buffer.write(struct.pack('>B', int(s2c)))
    sys.stdout.buffer.write(r)
sys.stdout.buffer.write(b"FPC0")                 # stream trailer

# ---------------------------------------------------------------------------
# File: tools/isolate/data/isolate/with_flag.py | Repo: Scopetta197/chromium | License: BSD-3-Clause
# ---------------------------------------------------------------------------

#!/usr/bin/env python
# Copyright (c) 2012 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
import os
import sys

def main():
    print 'with_flag: Verify the test data files were mapped properly'
    assert len(sys.argv) == 2
    mode = sys.argv[1]
    assert mode in ('run', 'trace')

    files = sorted(os.listdir('files1'))
    tree = {
        'test_file1.txt': 'Foo\n',
        'test_file2.txt': 'Bar\n',
    }
    # Ignore .svn directory which happens to be there with --mode=trace
    # from a svn checkout. The file shouldn't be there when --mode=run is used.
    if mode == 'trace' and '.svn' in files:
        files.remove('.svn')
    if files != sorted(tree):
        print '%s != %s' % (files, sorted(tree))
        return 2
    for k, v in tree.iteritems():
        content = open(os.path.join('files1', k), 'rb').read()
        if v != content:
            print '%s: %r != %r' % (k, v, content)
            return 3

    root_dir = os.path.dirname(os.path.abspath(__file__))
    parent_dir, base = os.path.split(root_dir)
    if mode == 'trace':
        # Verify the parent directory.
        parent_dir, base2 = os.path.split(parent_dir)
        if base != 'isolate' or base2 != 'data':
            print 'mode trace: Invalid root dir %s' % root_dir
            return 4
    else:
        # Verify that we are not inside a checkout.
        if base == 'data':
            print 'mode run: Invalid root dir %s' % root_dir
            return 5

    return 0


if __name__ == '__main__':
    sys.exit(main())

# ---------------------------------------------------------------------------
# File: app.py | Repo: rhedgeco/test_plaid_webapp | License: MIT
# ---------------------------------------------------------------------------

from plaid import Client
from backend.link_token import LinkToken
from general_falcon_webserver import WebApp

client = Client(client_id='5e2e3527dd6924001167e8e8', secret='0b89f518880456b6f60020f481b3d7', environment='sandbox')

app = WebApp()
app.add_route('link', LinkToken(client))
app.launch_webserver()
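
# Hedged usage note (not in the original file): with the route registered
# above, requesting the webserver's /link endpoint should return a Plaid
# Link token built by LinkToken using the sandbox client. The exact host and
# port depend on general_falcon_webserver's defaults, which are not shown here.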

# ---------------------------------------------------------------------------
# File: kickeststats/exceptions.py | Repo: antimaLinux/kickscarper | License: MIT
# ---------------------------------------------------------------------------

"""Exception utilities."""

class ParsingException(Exception):
    pass


class EnvVariableNotSet(Exception):
    def __init__(self, varname: str) -> None:
        super(EnvVariableNotSet, self).__init__(f"Env variable [{varname}] not set.")


class InvalidLineUp(Exception):
    pass


class UnsupportedLineUp(Exception):
    def __init__(self, line_up_name: str) -> None:
        super(UnsupportedLineUp, self).__init__(
            f"Line-up [{line_up_name}] is not supported."
        )


class InvalidTeamLineup(Exception):
    pass
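
# Hedged usage sketch (illustrative; the environment variable name below is
# hypothetical, not something the package defines):
if __name__ == "__main__":
    import os

    try:
        if "KICKEST_API_KEY" not in os.environ:  # hypothetical variable name
            raise EnvVariableNotSet("KICKEST_API_KEY")
    except EnvVariableNotSet as error:
        print(error)  # -> Env variable [KICKEST_API_KEY] not set.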

# ---------------------------------------------------------------------------
# File: setup.py | Repo: holoyan/python-data-validation | License: MIT
# ---------------------------------------------------------------------------

from setuptools import setup, find_packages
# read the contents of the README file for the long description
from os import path

this_directory = path.abspath(path.dirname(__file__))
with open(path.join(this_directory, 'README.md'), encoding='utf-8') as f:
    long_description = f.read()

setup(
    name='pyva',
    packages=find_packages(),
    version='0.4.1',
    license='MIT',
    description='Simple and flexible python data validation library',
    long_description=long_description,
    long_description_content_type='text/markdown',
    author='Artak',
    author_email='artaksafaryanc@gmail.com',
    url='https://github.com/holoyan/python-data-validation',
    keywords=['data', 'validation', 'validator', 'data validator'],
    install_requires=[  # runtime dependencies
        'python-dateutil',
    ],
    classifiers=[
        'Development Status :: 3 - Alpha',
        'Intended Audience :: Developers',
        'Topic :: Software Development :: Build Tools',
        'License :: OSI Approved :: MIT License',
        'Programming Language :: Python :: 3',
        'Programming Language :: Python :: 3.5',
        'Programming Language :: Python :: 3.6',
        'Programming Language :: Python :: 3.7',
        'Programming Language :: Python :: 3.8',
    ],
)
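
# Hedged usage note: from a checkout of this repository the package can be
# installed or packaged with the usual setuptools workflow, e.g.:
#
#   pip install .
#   python setup.py sdist bdist_wheel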

# ---------------------------------------------------------------------------
# File: bus_system/apps/trip/migrations/0007_auto_20210624_1812.py | Repo: pygabo/bus_system | License: MIT
# ---------------------------------------------------------------------------

# Generated by Django 3.1.12 on 2021-06-24 18:12
from django.db import migrations, models
import django.db.models.deletion


class Migration(migrations.Migration):

    dependencies = [
        ('trip', '0006_remove_travelmodel_driver'),
    ]

    operations = [
        migrations.AddField(
            model_name='tripmodel',
            name='tickets_sold',
            field=models.PositiveSmallIntegerField(default=0),
        ),
        migrations.AlterField(
            model_name='travelmodel',
            name='trip',
            field=models.ForeignKey(on_delete=django.db.models.deletion.PROTECT, related_name='travel_trip_set', related_query_name='travel_trip_set', to='trip.tripmodel'),
        ),
    ]
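
# Hedged usage note: Django applies this migration with its standard
# management command, run from the project root (layout assumed from the
# file path above):
#
#   python manage.py migrate trip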

# ---------------------------------------------------------------------------
# File: subversion/tests/cmdline/lock_tests.py | Repo: centic9/subversion-ppa | License: Apache-2.0
# ---------------------------------------------------------------------------

#!/usr/bin/env python
# encoding=utf-8
#
# lock_tests.py: testing versioned properties
#
# Subversion is a tool for revision control.
# See http://subversion.apache.org for more information.
#
# ====================================================================
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
######################################################################
# General modules
import re, os, stat, logging
logger = logging.getLogger()
# Our testing module
import svntest
# (abbreviation)
Skip = svntest.testcase.Skip_deco
SkipUnless = svntest.testcase.SkipUnless_deco
XFail = svntest.testcase.XFail_deco
Issues = svntest.testcase.Issues_deco
Issue = svntest.testcase.Issue_deco
Wimp = svntest.testcase.Wimp_deco
Item = svntest.wc.StateItem
######################################################################
# Helpers
def check_writability(path, writable):
    bits = stat.S_IWGRP | stat.S_IWOTH | stat.S_IWRITE
    mode = os.stat(path)[0]
    if bool(mode & bits) != writable:
        raise svntest.Failure("path '%s' is unexpectedly %s (mode %o)"
                              % (path, ["writable", "read-only"][writable], mode))

def is_writable(path):
    "Raise if PATH is not writable."
    check_writability(path, True)

def is_readonly(path):
    "Raise if PATH is not readonly."
    check_writability(path, False)
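
# For example (sketch, mirroring the os.stat checks used in enforce_lock
# below), a file carrying svn:needs-lock should flip between these states:
#
#   is_readonly(sbox.ospath('iota'))   # before 'svn lock' / after 'svn unlock'
#   is_writable(sbox.ospath('iota'))   # while the lock is held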
######################################################################
# Tests
#----------------------------------------------------------------------
# Each test refers to a section in
# notes/locking/locking-functional-spec.txt
# II.A.2, II.C.2.a: Lock a file in wc A as user FOO and make sure we
# have a representation of it. Checkout wc B as user BAR. Verify
# that user BAR cannot commit changes to the file nor its properties.
def lock_file(sbox):
    "lock a file and verify that it's locked"

    sbox.build()
    wc_dir = sbox.wc_dir

    # Make a second copy of the working copy
    wc_b = sbox.add_wc_path('_b')
    svntest.actions.duplicate_dir(wc_dir, wc_b)

    # lock a file as wc_author
    file_path = sbox.ospath('iota')
    file_path_b = sbox.ospath('iota', wc_dir=wc_b)

    svntest.main.file_append(file_path, "This represents a binary file\n")
    svntest.main.run_svn(None, 'commit',
                         '-m', '', file_path)
    svntest.actions.run_and_verify_svn(None, ".*locked by user", [], 'lock',
                                       '-m', '', file_path)

    # --- Meanwhile, in our other working copy... ---
    err_re = "(svn\: E195022\: File '.*iota' is locked in another)|" + \
             "(svn\: E160039: User '?jconstant'? does not own lock on path.*iota')"

    svntest.main.run_svn(None, 'update', wc_b)

    # -- Try to change a file --
    # change the locked file
    svntest.main.file_append(file_path_b, "Covert tweak\n")

    # attempt (and fail) to commit as user Sally
    svntest.actions.run_and_verify_commit(wc_b, None, None, err_re,
                                          '--username',
                                          svntest.main.wc_author2,
                                          '-m', '', file_path_b)

    # Revert our change that we failed to commit
    svntest.main.run_svn(None, 'revert', file_path_b)

    # -- Try to change a property --
    # change the locked file's properties
    svntest.main.run_svn(None, 'propset', 'sneakyuser', 'Sally', file_path_b)

    err_re = "(svn\: E195022\: File '.*iota' is locked in another)|" + \
             "(svn\: E160039\: User '?jconstant'? does not own lock on path)"

    # attempt (and fail) to commit as user Sally
    svntest.actions.run_and_verify_commit(wc_b, None, None, err_re,
                                          '--username',
                                          svntest.main.wc_author2,
                                          '-m', '', file_path_b)
#----------------------------------------------------------------------
# II.C.2.b.[12]: Lock a file and commit using the lock. Make sure the
# lock is released. Repeat, but request that the lock not be
# released. Make sure the lock is retained.
def commit_file_keep_lock(sbox):
    "commit a file and keep lock"

    sbox.build()
    wc_dir = sbox.wc_dir

    # lock 'A/mu' as wc_author
    svntest.actions.run_and_verify_svn(None, ".*locked by user", [], 'lock',
                                       '-m', 'some lock comment',
                                       sbox.ospath('A/mu'))

    # make a change and commit it, holding lock
    sbox.simple_append('A/mu', 'Tweak!\n')
    svntest.main.run_svn(None, 'commit', '-m', '', '--no-unlock',
                         sbox.ospath('A/mu'))

    expected_status = svntest.actions.get_virginal_state(wc_dir, 1)
    expected_status.tweak('A/mu', wc_rev=2, writelocked='K')

    # Make sure the file is still locked
    svntest.actions.run_and_verify_status(wc_dir, expected_status)

def commit_file_unlock(sbox):
    "commit a file and release lock"

    sbox.build()
    wc_dir = sbox.wc_dir

    # lock A/mu and iota as wc_author
    svntest.actions.run_and_verify_svn(None, ".*locked by user", [], 'lock',
                                       '-m', 'some lock comment',
                                       sbox.ospath('A/mu'),
                                       sbox.ospath('iota'))

    # make a change and commit it, allowing lock to be released
    sbox.simple_append('A/mu', 'Tweak!\n')
    sbox.simple_commit()

    expected_status = svntest.actions.get_virginal_state(wc_dir, 1)
    expected_status.tweak('A/mu', wc_rev=2)
    expected_status.tweak('iota', wc_rev=2)

    # Make sure the file is unlocked
    svntest.actions.run_and_verify_status(wc_dir, expected_status)
#----------------------------------------------------------------------
def commit_propchange(sbox):
    "commit a locked file with a prop change"

    sbox.build()
    wc_dir = sbox.wc_dir

    # lock A/mu as wc_author
    svntest.actions.run_and_verify_svn(None, ".*locked by user", [], 'lock',
                                       '-m', 'some lock comment',
                                       sbox.ospath('A/mu'))

    # make a property change and commit it, allowing lock to be released
    sbox.simple_propset('blue', 'azul', 'A/mu')
    sbox.simple_commit('A/mu')

    expected_status = svntest.actions.get_virginal_state(wc_dir, 1)
    expected_status.tweak('A/mu', wc_rev=2)

    # Make sure the file is unlocked
    svntest.actions.run_and_verify_status(wc_dir, expected_status)
#----------------------------------------------------------------------
# II.C.2.c: Lock a file in wc A as user FOO. Attempt to unlock same
# file in same wc as user BAR. Should fail.
#
# Attempt again with --force. Should succeed.
#
# II.C.2.c: Lock a file in wc A as user FOO. Attempt to unlock same
# file in wc B as user FOO. Should fail.
#
# Attempt again with --force. Should succeed.
def break_lock(sbox):
    "lock a file and verify lock breaking behavior"

    sbox.build()
    wc_dir = sbox.wc_dir

    # Make a second copy of the working copy
    wc_b = sbox.add_wc_path('_b')
    svntest.actions.duplicate_dir(wc_dir, wc_b)

    # lock a file as wc_author
    file_path = sbox.ospath('iota')
    file_path_b = sbox.ospath('iota', wc_dir=wc_b)

    svntest.actions.run_and_verify_svn(None, ".*locked by user", [], 'lock',
                                       '-m', '', file_path)

    # --- Meanwhile, in our other working copy... ---
    svntest.main.run_svn(None, 'update', wc_b)

    # attempt (and fail) to unlock file
    # This should give a "iota' is not locked in this working copy" error
    svntest.actions.run_and_verify_svn(None, None, ".*not locked",
                                       'unlock',
                                       file_path_b)

    svntest.actions.run_and_verify_svn(None, ".*unlocked", [],
                                       'unlock', '--force',
                                       file_path_b)
#----------------------------------------------------------------------
# II.C.2.d: Lock a file in wc A as user FOO. Attempt to lock same
# file in wc B as user BAR. Should fail.
#
# Attempt again with --force. Should succeed.
#
# II.C.2.d: Lock a file in wc A as user FOO. Attempt to lock same
# file in wc B as user FOO. Should fail.
#
# Attempt again with --force. Should succeed.
def steal_lock(sbox):
    "lock a file and verify lock stealing behavior"

    sbox.build()
    wc_dir = sbox.wc_dir

    # Make a second copy of the working copy
    wc_b = sbox.add_wc_path('_b')
    svntest.actions.duplicate_dir(wc_dir, wc_b)

    # lock a file as wc_author
    file_path = sbox.ospath('iota')
    file_path_b = sbox.ospath('iota', wc_dir=wc_b)

    svntest.actions.run_and_verify_svn(None, ".*locked by user", [], 'lock',
                                       '-m', '', file_path)

    # --- Meanwhile, in our other working copy... ---
    svntest.main.run_svn(None, 'update', wc_b)

    # attempt (and fail) to lock file
    # This should give a "iota' is already locked... error, but exits 0.
    svntest.actions.run_and_verify_svn2(None, None,
                                        ".*already locked", 0,
                                        'lock',
                                        '-m', 'trying to break', file_path_b)

    svntest.actions.run_and_verify_svn(None, ".*locked by user", [],
                                       'lock', '--force',
                                       '-m', 'trying to break', file_path_b)
#----------------------------------------------------------------------
# II.B.2, II.C.2.e: Lock a file in wc A. Query wc for the
# lock and verify that all lock fields are present and correct.
def examine_lock(sbox):
    "examine the fields of a lockfile for correctness"

    sbox.build()

    # lock a file as wc_author
    svntest.actions.run_and_validate_lock(sbox.ospath('iota'),
                                          svntest.main.wc_author)
#----------------------------------------------------------------------
# II.C.1: Lock a file in wc A. Check out wc B. Break the lock in wc
# B. Verify that wc A gracefully cleans up the lock via update as
# well as via commit.
def handle_defunct_lock(sbox):
    "verify behavior when a lock in a wc is defunct"

    sbox.build()
    wc_dir = sbox.wc_dir

    # set up our expected status
    expected_status = svntest.actions.get_virginal_state(wc_dir, 1)

    # lock the file
    svntest.actions.run_and_verify_svn(None, ".*locked by user", [], 'lock',
                                       '-m', '', sbox.ospath('iota'))

    # Make a second copy of the working copy
    wc_b = sbox.add_wc_path('_b')
    svntest.actions.duplicate_dir(wc_dir, wc_b)
    file_path_b = sbox.ospath('iota', wc_dir=wc_b)

    # --- Meanwhile, in our other working copy... ---

    # Try unlocking the file in the second wc.
    svntest.actions.run_and_verify_svn(None, ".*unlocked", [], 'unlock',
                                       file_path_b)

    # update the 1st wc, which should clear the lock there
    sbox.simple_update()

    # Make sure the file is unlocked
    svntest.actions.run_and_verify_status(wc_dir, expected_status)
#----------------------------------------------------------------------
# II.B.1: Set "svn:needs-lock" property on file in wc A. Checkout wc
# B and verify that that file is set as read-only.
#
# Tests propset, propdel, lock, and unlock
def enforce_lock(sbox):
    "verify svn:needs-lock read-only behavior"

    sbox.build()
    wc_dir = sbox.wc_dir

    iota_path = sbox.ospath('iota')
    lambda_path = sbox.ospath('A/B/lambda')
    mu_path = sbox.ospath('A/mu')

    # svn:needs-lock value should be forced to a '*'
    svntest.actions.set_prop('svn:needs-lock', 'foo', iota_path)
    svntest.actions.set_prop('svn:needs-lock', '*', lambda_path)
    expected_err = ".*svn: warning: W125005: To turn off the svn:needs-lock property,.*"
    svntest.actions.set_prop('svn:needs-lock', ' ', mu_path, expected_err)

    # Check svn:needs-lock
    svntest.actions.check_prop('svn:needs-lock', iota_path, ['*'])
    svntest.actions.check_prop('svn:needs-lock', lambda_path, ['*'])
    svntest.actions.check_prop('svn:needs-lock', mu_path, ['*'])

    svntest.main.run_svn(None, 'commit',
                         '-m', '', iota_path, lambda_path, mu_path)

    # Now make sure that the perms were flipped on all files
    if os.name == 'posix':
        mode = stat.S_IWGRP | stat.S_IWOTH | stat.S_IWRITE
        if ((os.stat(iota_path)[0] & mode)
                or (os.stat(lambda_path)[0] & mode)
                or (os.stat(mu_path)[0] & mode)):
            logger.warn("Setting 'svn:needs-lock' property on a file failed to set")
            logger.warn("file mode to read-only.")
            raise svntest.Failure

        # obtain a lock on one of these files...
        svntest.actions.run_and_verify_svn(None, ".*locked by user", [], 'lock',
                                           '-m', '', iota_path)

        # ...and verify that the write bit gets set...
        if not (os.stat(iota_path)[0] & mode):
            logger.warn("Locking a file with 'svn:needs-lock' failed to set write bit.")
            raise svntest.Failure

        # ...and unlock it...
        svntest.actions.run_and_verify_svn(None, ".*unlocked", [], 'unlock',
                                           iota_path)

        # ...and verify that the write bit gets unset
        if (os.stat(iota_path)[0] & mode):
            logger.warn("Unlocking a file with 'svn:needs-lock' failed to unset write bit.")
            raise svntest.Failure

        # Verify that removing the property restores the file to read-write
        svntest.main.run_svn(None, 'propdel', 'svn:needs-lock', iota_path)
        if not (os.stat(iota_path)[0] & mode):
            logger.warn("Deleting 'svn:needs-lock' failed to set write bit.")
            raise svntest.Failure
#----------------------------------------------------------------------
# Test that updating a file with the "svn:needs-lock" property works,
# especially on Windows, where renaming A to B fails if B already
# exists and has its read-only bit set. See also issue #2278.
@Issue(2278)
def update_while_needing_lock(sbox):
    "update handles svn:needs-lock correctly"

    sbox.build()

    sbox.simple_propset('svn:needs-lock', 'foo', 'iota')
    sbox.simple_commit('iota')
    sbox.simple_update()

    # Lock, modify, commit, unlock, to create r3.
    svntest.actions.run_and_verify_svn(None, ".*locked by user", [], 'lock',
                                       '-m', '', sbox.ospath('iota'))
    sbox.simple_append('iota', 'This line added in r2.\n')
    sbox.simple_commit('iota')  # auto-unlocks

    # Backdate to r2.
    sbox.simple_update(revision=2)

    # Try updating forward to r3 again. This is where the bug happened.
    sbox.simple_update(revision=3)
#----------------------------------------------------------------------
# Tests update / checkout with changing props
def defunct_lock(sbox):
    "verify svn:needs-lock behavior with defunct lock"

    sbox.build()
    wc_dir = sbox.wc_dir

    # Make a second copy of the working copy
    wc_b = sbox.add_wc_path('_b')
    svntest.actions.duplicate_dir(wc_dir, wc_b)

    iota_path = sbox.ospath('iota')
    iota_path_b = sbox.ospath('iota', wc_dir=wc_b)

    mode = stat.S_IWGRP | stat.S_IWOTH | stat.S_IWRITE

    # Set the prop in wc a
    sbox.simple_propset('svn:needs-lock', 'foo', 'iota')

    # commit r2
    sbox.simple_commit('iota')

    # update wc_b
    svntest.main.run_svn(None, 'update', wc_b)

    # lock iota in wc_b
    svntest.actions.run_and_verify_svn(None, ".*locked by user", [], 'lock',
                                       '-m', '', iota_path_b)

    # break the lock iota in wc a
    svntest.actions.run_and_verify_svn(None, ".*locked by user", [], 'lock', '--force',
                                       '-m', '', iota_path)
    # update wc_b
    svntest.main.run_svn(None, 'update', wc_b)

    # make sure that iota got set to read-only
    if (os.stat(iota_path_b)[0] & mode):
        logger.warn("Upon removal of a defunct lock, a file with 'svn:needs-lock'")
        logger.warn("was not set back to read-only")
        raise svntest.Failure
#----------------------------------------------------------------------
# Tests dealing with a lock on a deleted path
def deleted_path_lock(sbox):
    "verify lock removal on a deleted path"

    sbox.build()
    wc_dir = sbox.wc_dir

    iota_path = sbox.ospath('iota')
    iota_url = sbox.repo_url + '/iota'

    svntest.actions.run_and_verify_svn(None, ".*locked by user", [], 'lock',
                                       '-m', '', iota_path)

    sbox.simple_rm('iota')

    svntest.actions.run_and_verify_svn(None, None, [], 'commit',
                                       '--no-unlock',
                                       '-m', '', iota_path)

    # Now make sure that we can delete the lock from iota via a URL
    svntest.actions.run_and_verify_svn(None, ".*unlocked", [], 'unlock',
                                       iota_url)
#----------------------------------------------------------------------
# Tests dealing with locking and unlocking
def lock_unlock(sbox):
    "lock and unlock some files"

    sbox.build()
    wc_dir = sbox.wc_dir

    pi_path = sbox.ospath('A/D/G/pi')
    rho_path = sbox.ospath('A/D/G/rho')
    tau_path = sbox.ospath('A/D/G/tau')

    expected_status = svntest.actions.get_virginal_state(wc_dir, 1)
    expected_status.tweak('A/D/G/pi', 'A/D/G/rho', 'A/D/G/tau', writelocked='K')

    svntest.actions.run_and_verify_svn(None, ".*locked by user", [], 'lock',
                                       '-m', '', pi_path, rho_path, tau_path)

    svntest.actions.run_and_verify_status(wc_dir, expected_status)

    expected_status.tweak('A/D/G/pi', 'A/D/G/rho', 'A/D/G/tau', writelocked=None)

    svntest.actions.run_and_verify_svn(None, ".*unlocked", [], 'unlock',
                                       pi_path, rho_path, tau_path)

    svntest.actions.run_and_verify_status(wc_dir, expected_status)
#----------------------------------------------------------------------
# Tests dealing with directory deletion and locks
def deleted_dir_lock(sbox):
    "verify removal of a directory with locks inside"

    sbox.build()
    wc_dir = sbox.wc_dir

    pi_path = sbox.ospath('A/D/G/pi')
    rho_path = sbox.ospath('A/D/G/rho')
    tau_path = sbox.ospath('A/D/G/tau')

    svntest.actions.run_and_verify_svn(None, ".*locked by user", [], 'lock',
                                       '-m', '', pi_path, rho_path, tau_path)

    sbox.simple_rm('A/D/G')  # the parent directory

    svntest.actions.run_and_verify_svn(None, None, [], 'commit',
                                       '--no-unlock',
                                       '-m', '', sbox.ospath('A/D/G'))
#----------------------------------------------------------------------
# III.c : Lock a file and check the output of 'svn stat' from the same
# working copy and another.
def lock_status(sbox):
    "verify status of lock in working copy"

    sbox.build()
    wc_dir = sbox.wc_dir

    # Make a second copy of the working copy
    wc_b = sbox.add_wc_path('_b')
    svntest.actions.duplicate_dir(wc_dir, wc_b)

    # lock a file as wc_author
    fname = 'iota'
    file_path = os.path.join(sbox.wc_dir, fname)

    sbox.simple_append('iota', "This is a spreadsheet\n")
    sbox.simple_commit('iota')

    svntest.main.run_svn(None, 'lock', '-m', '', sbox.ospath('iota'))

    expected_status = svntest.actions.get_virginal_state(wc_dir, 1)
    expected_status.tweak('iota', wc_rev=2, writelocked='K')

    svntest.actions.run_and_verify_status(wc_dir, expected_status)

    # Verify status again after modifying the file
    sbox.simple_append('iota', 'check stat output after mod')

    expected_status.tweak('iota', status='M ')

    svntest.actions.run_and_verify_status(wc_dir, expected_status)

    # Verify status of lock from another working copy
    svntest.main.run_svn(None, 'update', wc_b)
    expected_status = svntest.actions.get_virginal_state(wc_b, 2)
    expected_status.tweak('iota', writelocked='O')

    svntest.actions.run_and_verify_status(wc_b, expected_status)
#----------------------------------------------------------------------
# III.c : Steal lock on a file from another working copy with 'svn lock
# --force', and check the status of lock in the repository from the
# working copy in which the file was initially locked.
def stolen_lock_status(sbox):
    "verify status of stolen lock"

    sbox.build()
    wc_dir = sbox.wc_dir

    # Make a second copy of the working copy
    wc_b = sbox.add_wc_path('_b')
    svntest.actions.duplicate_dir(wc_dir, wc_b)

    # lock a file as wc_author
    fname = 'iota'
    file_path = os.path.join(sbox.wc_dir, fname)
    file_path_b = os.path.join(wc_b, fname)

    svntest.main.file_append(file_path, "This is a spreadsheet\n")
    svntest.main.run_svn(None, 'commit',
                         '-m', '', file_path)

    svntest.main.run_svn(None, 'lock',
                         '-m', '', file_path)

    expected_status = svntest.actions.get_virginal_state(wc_dir, 1)
    expected_status.tweak(fname, wc_rev=2)
    expected_status.tweak(fname, writelocked='K')
    svntest.actions.run_and_verify_status(wc_dir, expected_status)

    # Forcibly lock same file (steal lock) from another working copy
    svntest.main.run_svn(None, 'update', wc_b)
    svntest.main.run_svn(None, 'lock',
                         '-m', '', '--force', file_path_b)

    # Verify status from working copy where file was initially locked
    expected_status.tweak(fname, writelocked='T')
    svntest.actions.run_and_verify_status(wc_dir, expected_status)
#----------------------------------------------------------------------
# III.c : Break lock from another working copy with 'svn unlock --force'
# and verify the status of the lock in the repository with 'svn stat -u'
# from the working copy in the file was initially locked
def broken_lock_status(sbox):
    "verify status of broken lock"

    sbox.build()
    wc_dir = sbox.wc_dir

    # Make a second copy of the working copy
    wc_b = sbox.add_wc_path('_b')
    svntest.actions.duplicate_dir(wc_dir, wc_b)

    # lock a file as wc_author
    fname = 'iota'
    file_path = os.path.join(sbox.wc_dir, fname)
    file_path_b = os.path.join(wc_b, fname)

    svntest.main.file_append(file_path, "This is a spreadsheet\n")
    svntest.main.run_svn(None, 'commit',
                         '-m', '', file_path)

    svntest.main.run_svn(None, 'lock',
                         '-m', '', file_path)

    expected_status = svntest.actions.get_virginal_state(wc_dir, 1)
    expected_status.tweak(fname, wc_rev=2)
    expected_status.tweak(fname, writelocked='K')
    svntest.actions.run_and_verify_status(wc_dir, expected_status)

    # Forcibly unlock the same file (break lock) from another working copy
    svntest.main.run_svn(None, 'update', wc_b)
    svntest.main.run_svn(None, 'unlock',
                         '--force', file_path_b)

    # Verify status from working copy where file was initially locked
    expected_status.tweak(fname, writelocked='B')
    svntest.actions.run_and_verify_status(wc_dir, expected_status)
#----------------------------------------------------------------------
# Invalid input test - lock non-existent file
def lock_non_existent_file(sbox):
    "verify error on locking non-existent file"

    sbox.build()
    fname = 'A/foo'
    file_path = os.path.join(sbox.wc_dir, fname)

    exit_code, output, error = svntest.main.run_svn(1, 'lock',
                                                    '-m', '', file_path)

    error_msg = "The node '%s' was not found." % os.path.abspath(file_path)
    for line in error:
        if line.find(error_msg) != -1:
            break
    else:
        logger.warn("Error: %s : not found in: %s" % (error_msg, error))
        raise svntest.Failure
#----------------------------------------------------------------------
# Check that locking an out-of-date file fails.
def out_of_date(sbox):
    "lock an out-of-date file and ensure failure"

    sbox.build()
    wc_dir = sbox.wc_dir

    # Make a second copy of the working copy
    wc_b = sbox.add_wc_path('_b')
    svntest.actions.duplicate_dir(wc_dir, wc_b)

    fname = 'iota'
    file_path = os.path.join(sbox.wc_dir, fname)
    file_path_b = os.path.join(wc_b, fname)

    # Make a new revision of the file in the first WC.
    svntest.main.file_append(file_path, "This represents a binary file\n")
    svntest.main.run_svn(None, 'commit',
                         '-m', '', file_path)

    # --- Meanwhile, in our other working copy... ---
    svntest.actions.run_and_verify_svn2(None, None,
                                        ".*newer version of '/iota' exists", 0,
                                        'lock',
                                        '--username', svntest.main.wc_author2,
                                        '-m', '', file_path_b)
#----------------------------------------------------------------------
# Tests reverting a svn:needs-lock file
def revert_lock(sbox):
    "verify svn:needs-lock behavior with revert"

    sbox.build()
    wc_dir = sbox.wc_dir

    iota_path = sbox.ospath('iota')

    mode = stat.S_IWGRP | stat.S_IWOTH | stat.S_IWRITE

    # set the prop in wc
    svntest.actions.run_and_verify_svn(None, None, [], 'propset',
                                       'svn:needs-lock', 'foo', iota_path)

    # commit r2
    svntest.actions.run_and_verify_svn(None, None, [], 'commit',
                                       '-m', '', iota_path)

    # make sure that iota got set to read-only
    if (os.stat(iota_path)[0] & mode):
        logger.warn("Committing a file with 'svn:needs-lock'")
        logger.warn("did not set the file to read-only")
        raise svntest.Failure

    # verify status is as we expect
    expected_status = svntest.actions.get_virginal_state(wc_dir, 1)
    expected_status.tweak('iota', wc_rev=2)
    svntest.actions.run_and_verify_status(wc_dir, expected_status)

    # remove read-only-ness
    svntest.actions.run_and_verify_svn(None, None, [], 'propdel',
                                       'svn:needs-lock', iota_path)

    # make sure that iota got read-only-ness removed
    if (os.stat(iota_path)[0] & mode == 0):
        logger.warn("Deleting the 'svn:needs-lock' property ")
        logger.warn("did not remove read-only-ness")
        raise svntest.Failure

    # revert the change
    svntest.actions.run_and_verify_svn(None, None, [], 'revert', iota_path)

    # make sure that iota got set back to read-only
    if (os.stat(iota_path)[0] & mode):
        logger.warn("Reverting a file with 'svn:needs-lock'")
        logger.warn("did not set the file back to read-only")
        raise svntest.Failure

    # try propdel and revert from a different directory so
    # full filenames are used
    extra_name = 'xx'

    # now lock the file
    svntest.actions.run_and_verify_svn(None, ".*locked by user", [], 'lock',
                                       '-m', '', iota_path)

    # modify it
    svntest.main.file_append(iota_path, "This line added\n")

    expected_status.tweak(wc_rev=1)
    expected_status.tweak('iota', wc_rev=2)
    expected_status.tweak('iota', status='M ', writelocked='K')
    svntest.actions.run_and_verify_status(wc_dir, expected_status)

    # revert it
    svntest.actions.run_and_verify_svn(None, None, [], 'revert', iota_path)

    # make sure it is still writable since we have the lock
    if (os.stat(iota_path)[0] & mode == 0):
        logger.warn("Reverting a 'svn:needs-lock' file (with lock in wc) ")
        logger.warn("did not leave the file writable")
        raise svntest.Failure
#----------------------------------------------------------------------
def examine_lock_via_url(sbox):
    "examine the fields of a lock from a URL"

    sbox.build()
    wc_dir = sbox.wc_dir

    fname = 'iota'
    comment = 'This is a lock test.'
    file_path = os.path.join(sbox.wc_dir, fname)
    file_url = sbox.repo_url + '/' + fname

    # lock the file url and check the contents of lock
    svntest.actions.run_and_validate_lock(file_url,
                                          svntest.main.wc_author2)
#----------------------------------------------------------------------
def lock_several_files(sbox):
    "lock/unlock several files in one go"

    sbox.build()
    wc_dir = sbox.wc_dir

    # Deliberately have no direct child of A as a target
    iota_path = os.path.join(sbox.wc_dir, 'iota')
    lambda_path = os.path.join(sbox.wc_dir, 'A', 'B', 'lambda')
    alpha_path = os.path.join(sbox.wc_dir, 'A', 'B', 'E', 'alpha')

    svntest.actions.run_and_verify_svn(None, ".*locked by user", [], 'lock',
                                       '--username', svntest.main.wc_author2,
                                       '-m', 'lock several',
                                       iota_path, lambda_path, alpha_path)

    expected_status = svntest.actions.get_virginal_state(wc_dir, 1)
    expected_status.tweak('iota', 'A/B/lambda', 'A/B/E/alpha', writelocked='K')
    svntest.actions.run_and_verify_status(wc_dir, expected_status)

    svntest.actions.run_and_verify_svn(None, ".*unlocked", [], 'unlock',
                                       '--username', svntest.main.wc_author2,
                                       iota_path, lambda_path, alpha_path)

    expected_status.tweak('iota', 'A/B/lambda', 'A/B/E/alpha', writelocked=None)
    svntest.actions.run_and_verify_status(wc_dir, expected_status)
#----------------------------------------------------------------------
def lock_switched_files(sbox):
"lock/unlock switched files"
sbox.build()
wc_dir = sbox.wc_dir
gamma_path = sbox.ospath('A/D/gamma')
lambda_path = sbox.ospath('A/B/lambda')
iota_URL = sbox.repo_url + '/iota'
alpha_URL = sbox.repo_url + '/A/B/E/alpha'
svntest.actions.run_and_verify_svn(None, None, [], 'switch',
iota_URL, gamma_path,
'--ignore-ancestry')
svntest.actions.run_and_verify_svn(None, None, [], 'switch',
alpha_URL, lambda_path,
'--ignore-ancestry')
expected_status = svntest.actions.get_virginal_state(wc_dir, 1)
expected_status.tweak('A/D/gamma', 'A/B/lambda', switched='S')
svntest.actions.run_and_verify_status(wc_dir, expected_status)
svntest.actions.run_and_verify_svn(None, ".*locked by user", [], 'lock',
'-m', 'lock several',
gamma_path, lambda_path)
expected_status.tweak('A/D/gamma', 'A/B/lambda', writelocked='K')
# In WC-NG locks are kept per working copy, not per file
expected_status.tweak('A/B/E/alpha', 'iota', writelocked='K')
svntest.actions.run_and_verify_status(wc_dir, expected_status)
svntest.actions.run_and_verify_svn(None, ".*unlocked", [], 'unlock',
gamma_path, lambda_path)
expected_status.tweak('A/D/gamma', 'A/B/lambda', writelocked=None)
expected_status.tweak('A/B/E/alpha', 'iota', writelocked=None)
svntest.actions.run_and_verify_status(wc_dir, expected_status)
def lock_uri_encoded(sbox):
"lock and unlock a file with an URI-unsafe name"
sbox.build()
wc_dir = sbox.wc_dir
# lock a file as wc_author
fname = 'amazing space'
file_path = sbox.ospath(fname)
svntest.main.file_append(file_path, "This represents a binary file\n")
svntest.actions.run_and_verify_svn(None, None, [], "add", file_path)
expected_output = svntest.wc.State(wc_dir, {
fname : Item(verb='Adding'),
})
expected_status = svntest.actions.get_virginal_state(wc_dir, 1)
expected_status.add({ fname: Item(wc_rev=2, status=' ') })
# Commit the file.
svntest.actions.run_and_verify_commit(wc_dir,
expected_output,
expected_status,
None,
file_path)
svntest.actions.run_and_verify_svn(None, ".*locked by user", [], 'lock',
'-m', '', file_path)
# Make sure that the file was locked.
expected_status.tweak(fname, writelocked='K')
svntest.actions.run_and_verify_status(wc_dir, expected_status)
svntest.actions.run_and_verify_svn(None, ".*unlocked", [], 'unlock',
file_path)
# Make sure it was successfully unlocked again.
expected_status.tweak(fname, writelocked=None)
svntest.actions.run_and_verify_status(wc_dir, expected_status)
# And now the URL case.
file_url = sbox.repo_url + '/' + fname
svntest.actions.run_and_verify_svn(None, ".*locked by user", [], 'lock',
'-m', '', file_url)
# Make sure the file shows as locked; the lock was taken via URL, so this
# WC holds no token and status reports 'O'.
expected_status.tweak(fname, writelocked='O')
svntest.actions.run_and_verify_status(wc_dir, expected_status)
svntest.actions.run_and_verify_svn(None, ".*unlocked", [], 'unlock',
file_url)
# Make sure it was successfully unlocked again.
expected_status.tweak(fname, writelocked=None)
svntest.actions.run_and_verify_status(wc_dir, expected_status)
#----------------------------------------------------------------------
# A regression test for a bug when svn:needs-lock and svn:executable
# interact badly. The bug was fixed in trunk @ r854933.
@SkipUnless(svntest.main.is_posix_os)
def lock_and_exebit1(sbox):
"svn:needs-lock and svn:executable, part I"
mode_w = stat.S_IWUSR
mode_x = stat.S_IXUSR
mode_r = stat.S_IRUSR
sbox.build()
wc_dir = sbox.wc_dir
gamma_path = sbox.ospath('A/D/gamma')
expected_err = ".*svn: warning: W125005: To turn off the svn:needs-lock property,.*"
svntest.actions.run_and_verify_svn2(None, None, expected_err, 0,
'ps', 'svn:needs-lock', ' ', gamma_path)
expected_err = ".*svn: warning: W125005: To turn off the svn:executable property,.*"
svntest.actions.run_and_verify_svn2(None, None, expected_err, 0,
'ps', 'svn:executable', ' ', gamma_path)
# commit
svntest.actions.run_and_verify_svn(None, None, [], 'commit',
'-m', '', gamma_path)
# mode should be +r, -w, +x
gamma_stat = os.stat(gamma_path)[0]
if (not gamma_stat & mode_r
or gamma_stat & mode_w
or not gamma_stat & mode_x):
logger.warn("Committing a file with 'svn:needs-lock, svn:executable'")
logger.warn("did not set the file to read-only, executable")
raise svntest.Failure
# lock
svntest.actions.run_and_verify_svn(None, ".*locked by user", [], 'lock',
'-m', '', gamma_path)
# mode should be +r, +w, +x
gamma_stat = os.stat(gamma_path)[0]
if (not gamma_stat & mode_r
or not gamma_stat & mode_w
or not gamma_stat & mode_x):
logger.warn("Locking a file with 'svn:needs-lock, svn:executable'")
logger.warn("did not set the file to read-write, executable")
raise svntest.Failure
# modify
svntest.main.file_append(gamma_path, "check stat output after mod & unlock")
# unlock
svntest.actions.run_and_verify_svn(None, ".*unlocked", [], 'unlock',
gamma_path)
# Mode should be +r, -w, +x
gamma_stat = os.stat(gamma_path)[0]
if (not gamma_stat & mode_r
or gamma_stat & mode_w
or not gamma_stat & mode_x):
logger.warn("Unlocking a file with 'svn:needs-lock, svn:executable'")
logger.warn("did not set the file to read-only, executable")
raise svntest.Failure
# ci
svntest.actions.run_and_verify_svn(None, None, [], 'commit',
'-m', '', gamma_path)
# Mode should be still +r, -w, +x
gamma_stat = os.stat(gamma_path)[0]
if (not gamma_stat & mode_r
or gamma_stat & mode_w
or not gamma_stat & mode_x):
logger.warn("Commiting a file with 'svn:needs-lock, svn:executable'")
logger.warn("after unlocking modified file's permissions")
raise svntest.Failure
#----------------------------------------------------------------------
# A variant of lock_and_exebit1: same test without unlock
@SkipUnless(svntest.main.is_posix_os)
def lock_and_exebit2(sbox):
"svn:needs-lock and svn:executable, part II"
mode_w = stat.S_IWUSR
mode_x = stat.S_IXUSR
mode_r = stat.S_IRUSR
sbox.build()
wc_dir = sbox.wc_dir
gamma_path = sbox.ospath('A/D/gamma')
expected_err = ".*svn: warning: W125005: To turn off the svn:needs-lock property,.*"
svntest.actions.run_and_verify_svn2(None, None, expected_err, 0,
'ps', 'svn:needs-lock', ' ', gamma_path)
expected_err = ".*svn: warning: W125005: To turn off the svn:executable property,.*"
svntest.actions.run_and_verify_svn2(None, None, expected_err, 0,
'ps', 'svn:executable', ' ', gamma_path)
# commit
svntest.actions.run_and_verify_svn(None, None, [], 'commit',
'-m', '', gamma_path)
# mode should be +r, -w, +x
gamma_stat = os.stat(gamma_path)[0]
if (not gamma_stat & mode_r
or gamma_stat & mode_w
or not gamma_stat & mode_x):
logger.warn("Committing a file with 'svn:needs-lock, svn:executable'")
logger.warn("did not set the file to read-only, executable")
raise svntest.Failure
# lock
svntest.actions.run_and_verify_svn(None, ".*locked by user", [], 'lock',
'-m', '', gamma_path)
# mode should be +r, +w, +x
gamma_stat = os.stat(gamma_path)[0]
if (not gamma_stat & mode_r
or not gamma_stat & mode_w
or not gamma_stat & mode_x):
logger.warn("Locking a file with 'svn:needs-lock, svn:executable'")
logger.warn("did not set the file to read-write, executable")
raise svntest.Failure
# modify
svntest.main.file_append(gamma_path, "check stat output after mod & unlock")
# commit
svntest.actions.run_and_verify_svn(None, None, [], 'commit',
'-m', '', gamma_path)
# Mode should be +r, -w, +x
gamma_stat = os.stat(gamma_path)[0]
if (not gamma_stat & mode_r
or gamma_stat & mode_w
or not gamma_stat & mode_x):
logger.warn("Commiting a file with 'svn:needs-lock, svn:executable'")
logger.warn("did not set the file to read-only, executable")
raise svntest.Failure
def commit_xml_unsafe_file_unlock(sbox):
"commit file with xml-unsafe name and release lock"
sbox.build()
wc_dir = sbox.wc_dir
fname = 'foo & bar'
file_path = os.path.join(sbox.wc_dir, fname)
svntest.main.file_append(file_path, "Initial data.\n")
svntest.main.run_svn(None, 'add', file_path)
svntest.main.run_svn(None,
'commit', '-m', '', file_path)
# lock fname as wc_author
svntest.actions.run_and_verify_svn(None, ".*locked by user", [], 'lock',
'-m', 'some lock comment', file_path)
# make a change and commit it, allowing lock to be released
svntest.main.file_append(file_path, "Followup data.\n")
svntest.main.run_svn(None,
'commit', '-m', '', file_path)
expected_status = svntest.actions.get_virginal_state(wc_dir, 1)
expected_status.add({ fname : Item(status=' ', wc_rev=3), })
# Make sure the file is unlocked
svntest.actions.run_and_verify_status(wc_dir, expected_status)
#----------------------------------------------------------------------
def repos_lock_with_info(sbox):
"verify info path@X or path -rY return repos lock"
sbox.build()
wc_dir = sbox.wc_dir
fname = 'iota'
comment = 'This is a lock test.'
file_path = os.path.join(sbox.wc_dir, fname)
file_url = sbox.repo_url + '/' + fname
# lock wc file
svntest.actions.run_and_verify_svn(None, ".*locked by user", [], 'lock',
'--username', svntest.main.wc_author2,
'-m', comment, file_path)
expected_status = svntest.actions.get_virginal_state(wc_dir, 1)
expected_status.tweak(fname, writelocked='K')
svntest.actions.run_and_verify_status(wc_dir, expected_status)
# Steal lock on wc file
svntest.actions.run_and_verify_svn(None, ".*locked by user", [], 'lock',
'--username', svntest.main.wc_author2,
'--force',
'-m', comment, file_url)
expected_status.tweak(fname, writelocked='T')
svntest.actions.run_and_verify_status(wc_dir, expected_status)
# Get repository lock token
repos_lock_token \
= svntest.actions.run_and_parse_info(file_url)[0]['Lock Token']
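# run_and_parse_info returns one dict of info fields per target; take the
# repository's current token from our single target.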
# info with revision option
expected_infos = [
{ 'Lock Token' : repos_lock_token },
]
svntest.actions.run_and_verify_info(expected_infos, file_path, '-r1')
# info with peg revision
svntest.actions.run_and_verify_info(expected_infos, file_path + '@1')
#----------------------------------------------------------------------
@Issue(4126)
def unlock_already_unlocked_files(sbox):
"(un)lock set of files, one already (un)locked"
sbox.build()
wc_dir = sbox.wc_dir
# Deliberately have no direct child of A as a target
iota_path = sbox.ospath('iota')
lambda_path = sbox.ospath('A/B/lambda')
alpha_path = sbox.ospath('A/B/E/alpha')
gamma_path = sbox.ospath('A/D/gamma')
svntest.actions.run_and_verify_svn(None, ".*locked by user", [], 'lock',
'--username', svntest.main.wc_author2,
'-m', 'lock several',
iota_path, lambda_path, alpha_path)
expected_status = svntest.actions.get_virginal_state(wc_dir, 1)
expected_status.tweak('iota', 'A/B/lambda', 'A/B/E/alpha', writelocked='K')
svntest.actions.run_and_verify_status(wc_dir, expected_status)
error_msg = ".*Path '/A/B/E/alpha' is already locked by user '" + \
svntest.main.wc_author2 + "'.*"
svntest.actions.run_and_verify_svn2(None, None, error_msg, 0,
'lock',
'--username', svntest.main.wc_author2,
alpha_path, gamma_path)
expected_status.tweak('A/D/gamma', writelocked='K')
svntest.actions.run_and_verify_status(wc_dir, expected_status)
svntest.actions.run_and_verify_svn(None, ".*unlocked", [], 'unlock',
'--username', svntest.main.wc_author2,
lambda_path)
expected_status.tweak('A/B/lambda', writelocked=None)
svntest.actions.run_and_verify_status(wc_dir, expected_status)
error_msg = "(.*No lock on path '/A/B/lambda'.*)" + \
"|(.*'A/B/lambda' is not locked.*)"
svntest.actions.run_and_verify_svn2(None, None, error_msg, 0,
'unlock',
'--username', svntest.main.wc_author2,
'--force',
iota_path, lambda_path, alpha_path)
expected_status.tweak('iota', 'A/B/E/alpha', writelocked=None)
svntest.actions.run_and_verify_status(wc_dir, expected_status)
#----------------------------------------------------------------------
def info_moved_path(sbox):
"show correct lock info on moved path"
sbox.build()
wc_dir = sbox.wc_dir
fname = sbox.ospath("iota")
fname2 = sbox.ospath("iota2")
# Move iota, creating r2.
svntest.actions.run_and_verify_svn(None, None, [],
"mv", fname, fname2)
expected_output = svntest.wc.State(wc_dir, {
'iota2' : Item(verb='Adding'),
'iota' : Item(verb='Deleting'),
})
expected_status = svntest.actions.get_virginal_state(wc_dir, 1)
expected_status.add({
"iota2" : Item(status=' ', wc_rev=2)
})
expected_status.remove("iota")
svntest.actions.run_and_verify_commit(wc_dir,
expected_output,
expected_status,
None,
wc_dir)
# Create a new, unrelated iota, creating r3.
svntest.main.file_append(fname, "Another iota")
svntest.actions.run_and_verify_svn(None, None, [],
"add", fname)
expected_output = svntest.wc.State(wc_dir, {
'iota' : Item(verb='Adding'),
})
expected_status.add({
"iota" : Item(status=' ', wc_rev=3)
})
svntest.actions.run_and_verify_commit(wc_dir,
expected_output,
expected_status,
None,
wc_dir)
# Lock the new iota.
svntest.actions.run_and_verify_svn(None, ".*locked by user", [],
"lock", fname)
expected_status.tweak("iota", writelocked="K")
svntest.actions.run_and_verify_status(wc_dir, expected_status)
# Get info for old iota at r1. This shouldn't give us any lock info.
expected_infos = [
{ 'URL' : '.*' ,
'Lock Token' : None },
]
svntest.actions.run_and_verify_info(expected_infos, fname2, '-r1')
#----------------------------------------------------------------------
def ls_url_encoded(sbox):
"ls locked path needing URL encoding"
sbox.build()
wc_dir = sbox.wc_dir
dirname = sbox.ospath("space dir")
fname = os.path.join(dirname, "f")
# Create a dir with a space in its name and a file therein.
svntest.actions.run_and_verify_svn(None, None, [],
"mkdir", dirname)
svntest.main.file_append(fname, "someone was here")
svntest.actions.run_and_verify_svn(None, None, [],
"add", fname)
expected_output = svntest.wc.State(wc_dir, {
'space dir' : Item(verb='Adding'),
'space dir/f' : Item(verb='Adding'),
})
expected_status = svntest.actions.get_virginal_state(wc_dir, 1)
expected_status.add({
"space dir" : Item(status=' ', wc_rev=2),
"space dir/f" : Item(status=' ', wc_rev=2),
})
svntest.actions.run_and_verify_commit(wc_dir,
expected_output,
expected_status,
None,
wc_dir)
# Lock the file.
svntest.actions.run_and_verify_svn("Lock space dir/f", ".*locked by user",
[], "lock", fname)
# Make sure ls shows it being locked.
expected_output = " +2 " + re.escape(svntest.main.wc_author) + " +O .+f|" \
" +2 " + re.escape(svntest.main.wc_author) + " .+\./"
svntest.actions.run_and_verify_svn("List space dir",
expected_output, [],
"list", "-v", dirname)
#----------------------------------------------------------------------
# Make sure unlocking a path with the wrong lock token fails.
@Issue(3794)
def unlock_wrong_token(sbox):
"verify unlocking with wrong lock token"
sbox.build()
wc_dir = sbox.wc_dir
# lock a file as wc_author
fname = 'iota'
file_path = os.path.join(sbox.wc_dir, fname)
file_url = sbox.repo_url + "/iota"
svntest.actions.run_and_verify_svn(None, ".*locked by user", [], 'lock',
file_path)
# Steal the lock as the same author, but using a URL to keep the old token
# in the WC.
svntest.actions.run_and_verify_svn(None, ".*locked by user", [], 'lock',
"--force", file_url)
# Then, unlocking the WC path should fail.
### The error message actually returned varies by RA layer (hence the
### alternation in the regex below); let's worry about unifying it another day...
svntest.actions.run_and_verify_svn2(
None, None, ".*((No lock on path)|(400 Bad Request))", 0,
'unlock', file_path)
#----------------------------------------------------------------------
# Verify that info shows lock info for locked files with URI-unsafe names
# when run in recursive mode.
def examine_lock_encoded_recurse(sbox):
"verify recursive info shows lock info"
sbox.build()
wc_dir = sbox.wc_dir
fname = 'A/B/F/one iota'
file_path = os.path.join(sbox.wc_dir, fname)
svntest.main.file_append(file_path, "This represents a binary file\n")
svntest.actions.run_and_verify_svn(None, None, [], "add", file_path)
expected_output = svntest.wc.State(wc_dir, {
fname : Item(verb='Adding'),
})
expected_status = svntest.actions.get_virginal_state(wc_dir, 1)
expected_status.add({ fname: Item(wc_rev=2, status=' ') })
# Commit the file.
svntest.actions.run_and_verify_commit(wc_dir,
expected_output,
expected_status,
None,
file_path)
# lock the file and validate the contents
svntest.actions.run_and_validate_lock(file_path,
svntest.main.wc_author)
# Trying to unlock someone else's lock with --force should fail.
@Issue(3801)
def unlocked_lock_of_other_user(sbox):
"unlock file locked by other user"
sbox.build()
wc_dir = sbox.wc_dir
# lock a file with user jrandom
pi_path = sbox.ospath('A/D/G/pi')
expected_status = svntest.actions.get_virginal_state(wc_dir, 1)
expected_status.tweak('A/D/G/pi', writelocked='K')
svntest.actions.run_and_verify_svn(None, ".*locked by user", [], 'lock',
'-m', '', pi_path)
svntest.actions.run_and_verify_status(wc_dir, expected_status)
# now try to unlock with user jconstant, should fail but exit 0.
if sbox.repo_url.startswith("http"):
expected_err = ".*403 Forbidden.*"
else:
expected_err = "svn: warning: W160039: User '%s' is trying to use a lock owned by "\
"'%s'.*" % (svntest.main.wc_author2, svntest.main.wc_author)
svntest.actions.run_and_verify_svn2(None, [], expected_err, 0,
'unlock',
'--username', svntest.main.wc_author2,
pi_path)
svntest.actions.run_and_verify_status(wc_dir, expected_status)
#----------------------------------------------------------------------
def lock_funky_comment_chars(sbox):
"lock a file using a comment with xml special chars"
sbox.build()
wc_dir = sbox.wc_dir
# lock a file as wc_author
fname = 'iota'
file_path = os.path.join(sbox.wc_dir, fname)
svntest.main.file_append(file_path, "This represents a binary file\n")
svntest.main.run_svn(None, 'commit',
'-m', '', file_path)
svntest.actions.run_and_verify_svn(None, ".*locked by user", [], 'lock',
'-m', 'lock & load', file_path)
#----------------------------------------------------------------------
# Check that the svn:needs-lock usage applies to a specific location
# in a working copy, not to the working copy overall.
def lock_twice_in_one_wc(sbox):
"try to lock a file twice in one working copy"
sbox.build()
wc_dir = sbox.wc_dir
mu_path = sbox.ospath('A/mu')
mu2_path = sbox.ospath('A/B/mu')
# Create a needs-lock file
svntest.actions.set_prop('svn:needs-lock', '*', mu_path)
svntest.actions.run_and_verify_svn(None, None, [],
'commit', wc_dir, '-m', '')
# Mark the file readonly
svntest.actions.run_and_verify_svn(None, None, [],
'update', wc_dir)
# Switch a second location for the same file in the same working copy
svntest.actions.run_and_verify_svn(None, None, [],
'switch', sbox.repo_url + '/A',
sbox.ospath('A/B'),
'--ignore-ancestry')
# Lock location 1
svntest.actions.run_and_verify_svn(None, None, [],
'lock', mu_path, '-m', 'Locked here')
# Locking in location 2 should fail ### Currently returns exitcode 0
svntest.actions.run_and_verify_svn2(None, None, ".*is already locked.*", 0,
'lock', '-m', '', mu2_path)
# Change the file anyway
os.chmod(mu2_path, 0o700)
svntest.main.file_append(mu2_path, "Updated text")
# Commit will just succeed as the DB owns the lock. It's a user decision
# to commit the other target instead of the one originally locked
svntest.actions.run_and_verify_svn(None, None, [],
'commit', mu2_path, '-m', '')
#----------------------------------------------------------------------
# Test for issue #3524 'Locking path via ra_serf which doesn't exist in
# HEAD triggers assert'
@Issue(3524)
def lock_path_not_in_head(sbox):
"lock path that does not exist in HEAD"
sbox.build()
wc_dir = sbox.wc_dir
D_path = sbox.ospath('A/D')
lambda_path = sbox.ospath('A/B/lambda')
# Commit deletion of A/D and A/B/lambda as r2, then update the WC
# back to r1. Then attempt to lock some paths that no longer exist
# in HEAD. These should fail gracefully.
svntest.actions.run_and_verify_svn(None, None, [],
'delete', lambda_path, D_path)
svntest.actions.run_and_verify_svn(None, None, [], 'commit',
'-m', 'Some deletions', wc_dir)
svntest.actions.run_and_verify_svn(None, None, [], 'up', '-r1', wc_dir)
expected_lock_fail_err_re = "svn: warning: W160042: " \
"((Path .* doesn't exist in HEAD revision)" \
"|(L(ock|OCK) request (on '.*' )?failed: 405 Method Not Allowed))"
# Issue #3524: these lock attempts were triggering an assert over ra_serf:
#
# working_copies\lock_tests-37>svn lock A\D
# ..\..\..\subversion\libsvn_client\ra.c:275: (apr_err=235000)
# svn: In file '..\..\..\subversion\libsvn_ra_serf\util.c' line 1120:
# assertion failed (ctx->status_code)
#
# working_copies\lock_tests-37>svn lock A\B\lambda
# ..\..\..\subversion\libsvn_client\ra.c:275: (apr_err=235000)
# svn: In file '..\..\..\subversion\libsvn_ra_serf\util.c' line 1120:
# assertion failed (ctx->status_code)
svntest.actions.run_and_verify_svn2(None, None, expected_lock_fail_err_re,
0, 'lock', lambda_path)
expected_err = 'svn: E155008: The node \'.*D\' is not a file'
svntest.actions.run_and_verify_svn(None, None, expected_err,
'lock', D_path)
#----------------------------------------------------------------------
def verify_path_escaping(sbox):
"verify escaping of lock paths"
sbox.build()
wc_dir = sbox.wc_dir
# Add test paths using two characters that need escaping in a url, but
# are within the normal ascii range
file1 = sbox.ospath('file #1')
file2 = sbox.ospath('file #2')
file3 = sbox.ospath('file #3')
svntest.main.file_write(file1, 'File 1')
svntest.main.file_write(file2, 'File 2')
svntest.main.file_write(file3, 'File 3')
svntest.main.run_svn(None, 'add', file1, file2, file3)
svntest.main.run_svn(None, 'ci', '-m', 'commit', wc_dir)
svntest.main.run_svn(None, 'lock', '-m', 'lock 1', file1)
svntest.main.run_svn(None, 'lock', '-m', 'lock 2', sbox.repo_url + '/file%20%232')
svntest.main.run_svn(None, 'lock', '-m', 'lock 3', file3)
svntest.main.run_svn(None, 'unlock', sbox.repo_url + '/file%20%233')
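# Expected lock letters: 'K' = locked here (token in this WC), 'O' = locked
# in the repository but no token here (file #2 was locked via URL), 'B' = our
# stored token is stale because the lock was broken via URL (file #3).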
expected_status = svntest.actions.get_virginal_state(wc_dir, 1)
expected_status.add(
{
'file #1' : Item(status=' ', writelocked='K', wc_rev='2'),
'file #2' : Item(status=' ', writelocked='O', wc_rev='2'),
'file #3' : Item(status=' ', writelocked='B', wc_rev='2')
})
# Make sure the file locking is reported correctly
svntest.actions.run_and_verify_status(wc_dir, expected_status)
#----------------------------------------------------------------------
# Issue #3674: Replace + propset of locked file fails over DAV
@Issue(3674)
def replace_and_propset_locked_path(sbox):
"test replace + propset of locked file"
sbox.build()
wc_dir = sbox.wc_dir
mu_path = sbox.ospath('A/mu')
G_path = sbox.ospath('A/D/G')
rho_path = sbox.ospath('A/D/G/rho')
# Lock mu and A/D/G/rho.
svntest.actions.run_and_verify_svn(None, None, [],
'lock', mu_path, rho_path,
'-m', 'Locked')
# Now replace and propset on mu.
svntest.actions.run_and_verify_svn(None, None, [],
'rm', '--keep-local', mu_path)
svntest.actions.run_and_verify_svn(None, None, [],
'add', mu_path)
svntest.actions.run_and_verify_svn(None, None, [],
'propset', 'foo', 'bar', mu_path)
# Commit mu.
svntest.actions.run_and_verify_svn(None, None, [],
'commit', '-m', '', mu_path)
# Let's try this again where directories are involved, shall we?
# Replace A/D/G and A/D/G/rho, propset on A/D/G/rho.
svntest.actions.run_and_verify_svn(None, None, [],
'rm', G_path)
svntest.actions.run_and_verify_svn(None, None, [],
'mkdir', G_path)
svntest.main.file_append(rho_path, "This is the new file 'rho'.\n")
svntest.actions.run_and_verify_svn(None, None, [],
'add', rho_path)
svntest.actions.run_and_verify_svn(None, None, [],
'propset', 'foo', 'bar', rho_path)
# And commit G.
svntest.actions.run_and_verify_svn(None, None, [],
'commit', '-m', '', G_path)
#----------------------------------------------------------------------
def cp_isnt_ro(sbox):
"uncommitted svn:needs-lock add/cp not read-only"
sbox.build()
wc_dir = sbox.wc_dir
mu_URL = sbox.repo_url + '/A/mu'
mu_path = sbox.ospath('A/mu')
mu2_path = sbox.ospath('A/mu2')
mu3_path = sbox.ospath('A/mu3')
kappa_path = sbox.ospath('kappa')
open(kappa_path, 'w').write("This is the file 'kappa'.\n")
## added file
sbox.simple_add('kappa')
svntest.actions.set_prop('svn:needs-lock', 'yes', kappa_path)
is_writable(kappa_path)
sbox.simple_commit('kappa')
is_readonly(kappa_path)
## versioned file
svntest.actions.set_prop('svn:needs-lock', 'yes', mu_path)
is_writable(mu_path)
sbox.simple_commit('A/mu')
is_readonly(mu_path)
# At this point, mu has 'svn:needs-lock' set
## wc->wc copied file
svntest.main.run_svn(None, 'copy', mu_path, mu2_path)
is_writable(mu2_path)
sbox.simple_commit('A/mu2')
is_readonly(mu2_path)
## URL->wc copied file
svntest.main.run_svn(None, 'copy', mu_URL, mu3_path)
is_writable(mu3_path)
sbox.simple_commit('A/mu3')
is_readonly(mu3_path)
#----------------------------------------------------------------------
# Issue #3525: Locked file which is scheduled for delete causes tree
# conflict
@Issue(3525)
def update_locked_deleted(sbox):
"updating locked scheduled-for-delete file"
sbox.build()
wc_dir = sbox.wc_dir
iota_path = sbox.ospath('iota')
mu_path = sbox.ospath('A/mu')
alpha_path = sbox.ospath('A/B/E/alpha')
svntest.main.run_svn(None, 'lock', '-m', 'locked', mu_path, iota_path,
alpha_path)
sbox.simple_rm('iota')
sbox.simple_rm('A/mu')
sbox.simple_rm('A/B/E')
# Create expected output tree for an update.
expected_output = svntest.wc.State(wc_dir, {
})
# Create expected status tree for the update.
expected_status = svntest.actions.get_virginal_state(wc_dir, 1)
expected_status.tweak('A/B/E', status='D ')
expected_status.tweak('iota', 'A/mu', 'A/B/E/alpha',
status='D ', writelocked='K')
expected_status.tweak('A/B/E/beta', status='D ')
svntest.actions.run_and_verify_update(wc_dir, expected_output,
None, expected_status)
# Now we steal the locks on iota, A/mu and A/B/E/alpha via URL and retry
svntest.main.run_svn(None, 'lock', '-m', 'locked', sbox.repo_url + '/iota',
'--force', sbox.repo_url + '/A/mu',
sbox.repo_url + '/A/B/E/alpha')
expected_status.tweak('iota', 'A/mu', 'A/B/E/alpha',
status='D ', writelocked='O')
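# Stealing via URL leaves the repository locked ('O' above), while the update
# itself reports 'B' for each file whose stored token was broken.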
expected_output = svntest.wc.State(wc_dir, {
'A/mu' : Item(status='B '),
'A/B/E/alpha' : Item(status='B '),
'iota' : Item(status='B '),
})
svntest.actions.run_and_verify_update(wc_dir, expected_output,
None, expected_status)
#----------------------------------------------------------------------
def block_unlock_if_pre_unlock_hook_fails(sbox):
"block unlock operation if pre-unlock hook fails"
sbox.build()
wc_dir = sbox.wc_dir
repo_dir = sbox.repo_dir
svntest.actions.create_failing_hook(repo_dir, "pre-unlock", "error text")
# lock a file.
pi_path = sbox.ospath('A/D/G/pi')
expected_status = svntest.actions.get_virginal_state(wc_dir, 1)
expected_status.tweak('A/D/G/pi', writelocked='K')
svntest.actions.run_and_verify_svn(None, ".*locked by user", [], 'lock',
'-m', '', pi_path)
svntest.actions.run_and_verify_status(wc_dir, expected_status)
# Make sure the unlock operation fails as pre-unlock hook blocks it.
expected_unlock_fail_err_re = ".*error text|.*500 Internal Server Error"
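# (Over ra_local/svnserve the hook's stderr appears directly; over DAV the
# failure surfaces as a 500 Internal Server Error.)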
svntest.actions.run_and_verify_svn2(None, None, expected_unlock_fail_err_re,
1, 'unlock', pi_path)
svntest.actions.run_and_verify_status(wc_dir, expected_status)
#----------------------------------------------------------------------
def lock_invalid_token(sbox):
"verify pre-lock hook returning invalid token"
sbox.build()
hook_path = os.path.join(sbox.repo_dir, 'hooks', 'pre-lock')
svntest.main.create_python_hook_script(hook_path,
'# encoding=utf-8\n'
'import sys\n'
'sys.stdout.write("тест")\n'
'sys.exit(0)\n')
fname = 'iota'
file_path = os.path.join(sbox.wc_dir, fname)
svntest.actions.run_and_verify_svn2(None, None,
"svn: warning: W160037: " \
".*scheme.*'opaquelocktoken'", 0,
'lock', '-m', '', file_path)
@Issue(3105)
def lock_multi_wc(sbox):
"obtain locks in multiple working copies in one go"
sbox.build()
sbox2 = sbox.clone_dependent(copy_wc=True)
wc_name = os.path.basename(sbox.wc_dir)
wc2_name = os.path.basename(sbox2.wc_dir)
expected_output = svntest.verify.UnorderedOutput([
'\'%s\' locked by user \'jrandom\'.\n' % os.path.join(wc_name, 'iota'),
'\'%s\' locked by user \'jrandom\'.\n' % os.path.join(wc2_name, 'A', 'mu'),
])
svntest.actions.run_and_verify_svn(None, expected_output, [],
'lock', sbox.ospath('iota'),
sbox2.ospath('A/mu'))
expected_output = svntest.verify.UnorderedOutput([
'\'%s\' unlocked.\n' % os.path.join(wc_name, 'iota'),
'\'%s\' unlocked.\n' % os.path.join(wc2_name, 'A', 'mu'),
])
svntest.actions.run_and_verify_svn(None, expected_output, [],
'unlock', sbox.ospath('iota'),
sbox2.ospath('A/mu'))
@Issue(3378)
def locks_stick_over_switch(sbox):
"locks are kept alive over switching"
sbox.build()
wc_dir = sbox.wc_dir
repo_url = sbox.repo_url
svntest.actions.run_and_verify_svn(None, None, [],
'cp', sbox.ospath('A'), repo_url + '/AA',
'-m', '')
expected_output = svntest.verify.UnorderedOutput([
'\'iota\' locked by user \'jrandom\'.\n',
'\'%s\' locked by user \'jrandom\'.\n' % os.path.join('A', 'D', 'H', 'chi'),
'\'%s\' locked by user \'jrandom\'.\n' % os.path.join('A', 'mu'),
])
svntest.actions.run_and_verify_svn(None, expected_output, [],
'lock', sbox.ospath('A/D/H/chi'),
sbox.ospath('A/mu'),
sbox.ospath('iota'))
expected_status = svntest.actions.get_virginal_state(wc_dir, 1)
expected_status.tweak('A/D/H/chi', 'A/mu', 'iota', writelocked='K')
# Make sure the file is still locked
svntest.actions.run_and_verify_status(wc_dir, expected_status)
expected_output = svntest.wc.State(wc_dir, {
})
expected_status.tweak(wc_rev=2)
expected_status.tweak('', wc_rev=1)
expected_status.tweak('iota', writelocked='K', wc_rev=1)
switched_status = expected_status.copy()
switched_status.tweak(writelocked=None)
switched_status.tweak('iota', writelocked='K')
switched_status.tweak('A', switched='S')
svntest.actions.run_and_verify_switch(wc_dir, sbox.ospath('A'),
repo_url + '/AA',
expected_output, None, switched_status)
# And now switch back to verify that the locks reappear
expected_output = svntest.wc.State(wc_dir, {
})
svntest.actions.run_and_verify_switch(wc_dir, sbox.ospath('A'),
repo_url + '/A',
expected_output, None, expected_status)
@Issue(4304)
def lock_unlock_deleted(sbox):
"lock/unlock a deleted file"
sbox.build()
wc_dir = sbox.wc_dir
svntest.actions.run_and_verify_svn(None, None, [],
'rm', sbox.ospath('A/mu'))
expected_status = svntest.actions.get_virginal_state(wc_dir, 1)
expected_status.tweak('A/mu', status='D ')
svntest.actions.run_and_verify_status(wc_dir, expected_status)
expected_output = '\'mu\' locked by user \'jrandom\'.'
svntest.actions.run_and_verify_svn(None, expected_output, [],
'lock', sbox.ospath('A/mu'))
expected_status.tweak('A/mu', writelocked='K')
svntest.actions.run_and_verify_status(wc_dir, expected_status)
expected_output = '\'mu\' unlocked.'
svntest.actions.run_and_verify_svn(None, expected_output, [],
'unlock', sbox.ospath('A/mu'))
expected_status.tweak('A/mu', writelocked=None)
svntest.actions.run_and_verify_status(wc_dir, expected_status)
@Issue(4369)
def commit_stolen_lock(sbox):
"commit with a stolen lock"
sbox.build()
wc_dir = sbox.wc_dir
sbox.simple_append('A/mu', 'zig-zag')
sbox.simple_lock('A/mu')
expected_output = '\'mu\' locked by user \'jrandom\'.'
svntest.actions.run_and_verify_svn(None, expected_output, [],
'lock', '--force',
sbox.repo_url + '/A/mu')
expected_status = svntest.actions.get_virginal_state(wc_dir, 1)
expected_status.tweak('A/mu', status='M ', writelocked='T')
err_re = "(.*E160037: Cannot verify lock on path '/A/mu')|" + \
"(.*E160038: '/.*/A/mu': no lock token available)"
svntest.actions.run_and_verify_commit(wc_dir,
[],
expected_status,
err_re,
wc_dir)
# When removing directories, the locks of contained files were not
# correctly removed from the working copy database, so they later
# magically reappeared when new files or directories with the same
# paths were added.
@Issue(4364)
def drop_locks_on_parent_deletion(sbox):
"drop locks when the parent is deleted"
sbox.build()
wc_dir = sbox.wc_dir
# lock some files, and remove them.
sbox.simple_lock('A/B/lambda')
sbox.simple_lock('A/B/E/alpha')
sbox.simple_lock('A/B/E/beta')
sbox.simple_rm('A/B')
expected_status = svntest.actions.get_virginal_state(wc_dir, 1)
expected_status.remove_subtree('A/B')
svntest.actions.run_and_verify_commit(wc_dir,
[],
expected_status,
None,
wc_dir)
# now re-add entities at the deleted paths.
sbox.simple_mkdir('A/B')
sbox.simple_add_text('new file replacing old file', 'A/B/lambda')
sbox.simple_add_text('file replacing former dir', 'A/B/F')
# The bug also resurrected locks on directories when their path
# matched a former file.
sbox.simple_mkdir('A/B/E', 'A/B/E/alpha')
expected_status = svntest.actions.get_virginal_state(wc_dir, 1)
expected_status.tweak('A/B',
'A/B/E',
'A/B/E/alpha',
'A/B/F',
'A/B/lambda',
wc_rev='3')
expected_status.remove('A/B/E/beta')
svntest.actions.run_and_verify_commit(wc_dir,
[],
expected_status,
None,
wc_dir)
@SkipUnless(svntest.main.is_ra_type_dav)
def dav_lock_timeout(sbox):
"unlock a lock with timeout"
import httplib
from urlparse import urlparse
import base64
sbox.build()
loc = urlparse(sbox.repo_url)
if loc.scheme == 'http':
h = httplib.HTTPConnection(loc.hostname, loc.port)
else:
h = httplib.HTTPSConnection(loc.hostname, loc.port)
lock_body = '<?xml version="1.0" encoding="utf-8" ?>' \
'<D:lockinfo xmlns:D="DAV:">' \
' <D:lockscope><D:exclusive/></D:lockscope>' \
' <D:locktype><D:write/></D:locktype>' \
' <D:owner>' \
' <D:href>http://a/test</D:href>' \
' </D:owner>' \
'</D:lockinfo>'
lock_headers = {
'Authorization': 'Basic ' + base64.b64encode('jconstant:rayjandom'),
'Timeout': 'Second-86400'
}
# Enabling the following line makes this test easier to debug
# h.set_debuglevel(9)
h.request('LOCK', sbox.repo_url + '/iota', lock_body, lock_headers)
r = h.getresponse()
# Verify that there is a lock, by trying to obtain one
svntest.actions.run_and_verify_svn2(None, None, ".*locked by user", 0,
'lock', '-m', '', sbox.ospath('iota'))
# Before this patch, this used to fail with a parse error on the timeout
svntest.actions.run_and_verify_svn2(None, None, ".*W160039.*Unlock.*403", 0,
'unlock', sbox.repo_url + '/iota')
svntest.actions.run_and_verify_svn(None, None, [],
'unlock', sbox.ospath('iota'), '--force')
def non_root_locks(sbox):
"locks for working copies not at repos root"
sbox.build()
wc_dir = sbox.wc_dir
svntest.actions.run_and_verify_svn(None, None, [],
'cp', sbox.repo_url, sbox.repo_url + '/X',
'-m', 'copy greek tree')
sbox.simple_switch(sbox.repo_url + '/X')
expected_status = svntest.actions.get_virginal_state(wc_dir, 2)
svntest.actions.run_and_verify_status(wc_dir, expected_status)
# Lock a file
svntest.actions.run_and_verify_svn(None, ".*locked by user", [],
'lock', sbox.ospath('A/D/G/pi'),
'-m', '')
expected_status.tweak('A/D/G/pi', writelocked='K')
svntest.actions.run_and_verify_status(wc_dir, expected_status)
# Updates don't break the lock
sbox.simple_update('A/D')
svntest.actions.run_and_verify_status(wc_dir, expected_status)
sbox.simple_update('')
svntest.actions.run_and_verify_status(wc_dir, expected_status)
# Break the lock
svntest.actions.run_and_verify_svn(None, None, [],
'unlock', sbox.repo_url + '/X/A/D/G/pi')
# Subdir update reports the break
sbox.simple_update('A/D')
expected_status.tweak('A/D/G/pi', writelocked=None)
svntest.actions.run_and_verify_status(wc_dir, expected_status)
# Relock and break
svntest.actions.run_and_verify_svn(None, ".*locked by user", [],
'lock', sbox.ospath('A/D/G/pi'),
'-m', '')
expected_status.tweak('A/D/G/pi', writelocked='K')
svntest.actions.run_and_verify_status(wc_dir, expected_status)
svntest.actions.run_and_verify_svn(None, None, [],
'unlock', sbox.repo_url + '/X/A/D/G/pi')
# Root update reports the break
sbox.simple_update('')
expected_status.tweak('A/D/G/pi', writelocked=None)
svntest.actions.run_and_verify_status(wc_dir, expected_status)
@Issue(3515)
@SkipUnless(svntest.main.is_ra_type_dav)
def dav_lock_refresh(sbox):
"refresh timeout of DAV lock"
import httplib
from urlparse import urlparse
import base64
sbox.build(create_wc = False)
# Acquire lock on 'iota'
svntest.actions.run_and_verify_svn(None, ".*locked by user", [], 'lock',
sbox.repo_url + '/iota')
# Try to refresh lock using 'If' header
loc = urlparse(sbox.repo_url)
if loc.scheme == 'http':
h = httplib.HTTPConnection(loc.hostname, loc.port)
else:
h = httplib.HTTPSConnection(loc.hostname, loc.port)
lock_token = svntest.actions.run_and_parse_info(sbox.repo_url + '/iota')[0]['Lock Token']
lock_headers = {
'Authorization': 'Basic ' + base64.b64encode('jrandom:rayjandom'),
'If': '(<' + lock_token + '>)',
'Timeout': 'Second-7200'
}
# Enabling the following line makes this test easier to debug
# h.set_debuglevel(9)
h.request('LOCK', sbox.repo_url + '/iota', '', lock_headers)
# XFAIL Refreshing of DAV lock fails with error '412 Precondition Failed'
r = h.getresponse()
if r.status != httplib.OK:
raise svntest.Failure('Lock refresh failed: %d %s' % (r.status, r.reason))
@SkipUnless(svntest.main.is_ra_type_dav)
def delete_locked_file_with_percent(sbox):
"lock and delete a file called 'a %( ) .txt'"
sbox.build()
locked_filename = 'a %( ) .txt'
locked_path = sbox.ospath(locked_filename)
svntest.main.file_write(locked_path, "content\n")
sbox.simple_add(locked_filename)
sbox.simple_commit()
sbox.simple_lock(locked_filename)
sbox.simple_rm(locked_filename)
# XFAIL: With a 1.8.x client, this commit fails with:
# svn: E175002: Unexpected HTTP status 400 'Bad Request' on '/svn-test-work/repositories/lock_tests-52/!svn/txr/2-2/a%20%25(%20)%20.txt'
# and the following error in the httpd error log:
# Invalid percent encoded URI in tagged If-header [400, #104]
sbox.simple_commit()
@Issue(4557)
@XFail(svntest.main.is_ra_type_dav)
def delete_dir_with_lots_of_locked_files(sbox):
"delete a directory containing lots of locked files"
sbox.build()
wc_dir = sbox.wc_dir
# A lot of paths.
nfiles = 75 # NOTE: test XPASSES with 50 files!!!
locked_paths = []
for i in range(nfiles):
locked_paths.append(sbox.ospath("A/locked_files/file-%i" % i))
# Create files at these paths
os.mkdir(sbox.ospath("A/locked_files"))
for file_path in locked_paths:
svntest.main.file_write(file_path, "This is '%s'.\n" % (file_path,))
sbox.simple_add("A/locked_files")
sbox.simple_commit()
sbox.simple_update()
# lock all the files
svntest.actions.run_and_verify_svn(None, None, [], 'lock',
'-m', 'All locks',
*locked_paths)
# Locally delete A (regression against earlier versions, which
# always used a special non-standard request)
sbox.simple_rm("A")
# But a further replacement never worked
sbox.simple_mkdir("A")
# And an additional propset didn't work either
# (but doesn't require all lock tokens recursively)
sbox.simple_propset("k", "v", "A")
# Commit the deletion
# XFAIL: As of 1.8.10, this commit fails with:
# svn: E175002: Unexpected HTTP status 400 'Bad Request' on '<path>'
# and the following error in the httpd error log:
# request failed: error reading the headers
# This problem was introduced on the 1.8.x branch in r1606976.
sbox.simple_commit()
########################################################################
# Run the tests
# list all tests here, starting with None:
test_list = [ None,
lock_file,
commit_file_keep_lock,
commit_file_unlock,
commit_propchange,
break_lock,
steal_lock,
examine_lock,
handle_defunct_lock,
enforce_lock,
defunct_lock,
deleted_path_lock,
lock_unlock,
deleted_dir_lock,
lock_status,
stolen_lock_status,
broken_lock_status,
lock_non_existent_file,
out_of_date,
update_while_needing_lock,
revert_lock,
examine_lock_via_url,
lock_several_files,
lock_switched_files,
lock_uri_encoded,
lock_and_exebit1,
lock_and_exebit2,
commit_xml_unsafe_file_unlock,
repos_lock_with_info,
unlock_already_unlocked_files,
info_moved_path,
ls_url_encoded,
unlock_wrong_token,
examine_lock_encoded_recurse,
unlocked_lock_of_other_user,
lock_funky_comment_chars,
lock_twice_in_one_wc,
lock_path_not_in_head,
verify_path_escaping,
replace_and_propset_locked_path,
cp_isnt_ro,
update_locked_deleted,
block_unlock_if_pre_unlock_hook_fails,
lock_invalid_token,
lock_multi_wc,
locks_stick_over_switch,
lock_unlock_deleted,
commit_stolen_lock,
drop_locks_on_parent_deletion,
dav_lock_timeout,
non_root_locks,
dav_lock_refresh,
delete_locked_file_with_percent,
delete_dir_with_lots_of_locked_files,
]
if __name__ == '__main__':
svntest.main.run_tests(test_list)
# NOTREACHED
### End of file.
| 36.240407 | 139 | 0.594708 | 10,479 | 78,388 | 4.247543 | 0.073957 | 0.073601 | 0.07104 | 0.083577 | 0.687104 | 0.640598 | 0.60847 | 0.574343 | 0.536374 | 0.495709 | 0 | 0.009304 | 0.247232 | 78,388 | 2,162 | 140 | 36.257169 | 0.745001 | 0.198232 | 0 | 0.556146 | 0 | 0 | 0.16723 | 0.002782 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.006829 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
318a952b81c7d9540e9926622426293ecbdc84ee | 1,572 | py | Python | src/Application/PythonScriptModule/pymodules_old/apitest/rotate.py | antont/tundra | 5c9b0a3957071f08ab425dff701cdbb34f9e1868 | [
"Apache-2.0"
] | null | null | null | src/Application/PythonScriptModule/pymodules_old/apitest/rotate.py | antont/tundra | 5c9b0a3957071f08ab425dff701cdbb34f9e1868 | [
"Apache-2.0"
] | null | null | null | src/Application/PythonScriptModule/pymodules_old/apitest/rotate.py | antont/tundra | 5c9b0a3957071f08ab425dff701cdbb34f9e1868 | [
"Apache-2.0"
] | 1 | 2021-09-04T12:37:34.000Z | 2021-09-04T12:37:34.000Z | import circuits
from PythonQt.QtGui import QQuaternion as Quat
from PythonQt.QtGui import QVector3D as Vec
import naali
COMPNAME = "rotation"
class RotationHandler(circuits.BaseComponent):
    def __init__(self, entity=None, comp=None, changetype=None):
        circuits.BaseComponent.__init__(self)
        self.entity = entity
        self.comp = comp
        if self.comp is not None: #normal run, check for nonEC run now
            # Todo: OnChanged() is deprecated
            comp.connect("OnChanged()", self.onChanged)
        self.rot = Quat.fromAxisAndAngle(Vec(0, 1, 0), 1)

    def onChanged(self):
        y = self.comp.GetAttribute('y')
        self.rot = Quat.fromAxisAndAngle(Vec(0, y, 0), 1)
        #print self.rot, y

    @circuits.handler("update")
    def update(self, frametime):
        if self.entity is not None:
            p = self.entity.placeable
            ort = p.Orientation
            ort *= self.rot
            p.Orientation = ort
        # else: #testing without EC, as an autoloaded module
        #     entid = 2088826547
        #     try:
        #         self.entity = naali.getEntity(entid)
        #     except:
        #         pass #not there (yet)
        #     else:
        #         self.entity.createComponent("EC_DynamicComponent")
        #         oldent = r.getEntity(ent.id)
        #         self.comp = oldent.dynamic

    @circuits.handler("on_logout")
    def on_logout(self, evid):
        self.entity = None #XXX figure out proper unregistering, preferably in componenthandler.py / EC_Script biz
| 34.933333 | 115 | 0.600509 | 181 | 1,572 | 5.149171 | 0.475138 | 0.075107 | 0.036481 | 0.049356 | 0.066524 | 0.066524 | 0 | 0 | 0 | 0 | 0 | 0.016408 | 0.302163 | 1,572 | 44 | 116 | 35.727273 | 0.833181 | 0.304707 | 0 | 0 | 0 | 0 | 0.032498 | 0 | 0 | 0 | 0 | 0.022727 | 0 | 1 | 0.153846 | false | 0 | 0.153846 | 0 | 0.346154 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
318fbfd55bdcd7ac71d0dc2747eb31643026f551 | 3,021 | py | Python | bin/analysis/ipa/constraints/split.py | ncbray/pystream | 70bba5646d6512adb6803564c22268d3424c66d8 | [
"Apache-2.0"
] | 6 | 2015-09-19T18:22:33.000Z | 2020-11-29T15:21:17.000Z | bin/analysis/ipa/constraints/split.py | ncbray/pystream | 70bba5646d6512adb6803564c22268d3424c66d8 | [
"Apache-2.0"
] | 1 | 2015-08-04T08:03:46.000Z | 2015-08-04T08:03:46.000Z | bin/analysis/ipa/constraints/split.py | ncbray/pystream | 70bba5646d6512adb6803564c22268d3424c66d8 | [
"Apache-2.0"
] | 1 | 2019-12-09T08:27:09.000Z | 2019-12-09T08:27:09.000Z | # Copyright 2011 Nicholas Bray
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from language.python import ast
from . base import Constraint
from .. calling import cpa
class Splitter(Constraint):
    def __init__(self, src):
        assert src.isNode(), src
        self.src = src
        self.dst = []
        self.callbacks = []

    def addSplitCallback(self, callback):
        self.callbacks.append(callback)
        if self.objects: callback()

    def attach(self):
        self.src.addNext(self)

    def localName(self):
        return 'split_temp'

    def makeTarget(self, context):
        lcl = context.local(ast.Local(self.localName()))
        lcl.addPrev(self)
        self.dst.append(lcl)
        return lcl

    def makeConsistent(self, context):
        # Make constraint consistent
        if self.src.values:
            self.changed(context, self.src, self.src.values)
        if self.src.critical.values:
            self.criticalChanged(context, self.src, self.src.critical.values)

    def criticalChanged(self, context, node, diff):
        for dst in self.dst:
            dst.critical.updateValues(context, dst, diff)

    def doNotify(self):
        for callback in self.callbacks:
            callback()

    def isSplit(self):
        return True
class TypeSplitConstraint(Splitter):
    def __init__(self, src):
        Splitter.__init__(self, src)
        self.objects = {}
        self.megamorphic = False

    def localName(self):
        return 'type_split_temp'

    def types(self):
        return self.objects.keys()
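    # A fifth distinct concrete type collapses the split to a single any-type
    # target, keeping highly polymorphic sites cheap.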
    def makeMegamorphic(self):
        assert not self.megamorphic
        self.megamorphic = True
        self.objects.clear()
        self.objects[cpa.anyType] = self.src
        self.doNotify()

    def changed(self, context, node, diff):
        if self.megamorphic: return
        changed = False
        for obj in diff:
            cpaType = obj.cpaType()
            if cpaType not in self.objects:
                if len(self.objects) >= 4:
                    self.makeMegamorphic()
                    break
                else:
                    temp = self.makeTarget(context)
                    self.objects[cpaType] = temp
                    changed = True
            else:
                temp = self.objects[cpaType]
            temp.updateSingleValue(obj)
        else:
            if changed: self.doNotify()
# TODO prevent over splitting? All objects with the same qualifier should be grouped?
class ExactSplitConstraint(Splitter):
    def __init__(self, src):
        Splitter.__init__(self, src)
        self.objects = {}

    def localName(self):
        return 'exact_split_temp'

    def changed(self, context, node, diff):
        changed = False
        for obj in diff:
            if obj not in self.objects:
                temp = self.makeTarget(context)
                self.objects[obj] = temp
                changed = True
            else:
                temp = self.objects[obj]
            temp.updateSingleValue(obj)
        if changed: self.doNotify()
| 23.97619 | 86 | 0.716319 | 411 | 3,021 | 5.20438 | 0.335766 | 0.045816 | 0.025713 | 0.019635 | 0.183263 | 0.163628 | 0.080411 | 0.048621 | 0.048621 | 0.048621 | 0 | 0.003638 | 0.181066 | 3,021 | 125 | 87 | 24.168 | 0.860954 | 0.219133 | 0 | 0.333333 | 0 | 0 | 0.017499 | 0 | 0 | 0 | 0 | 0.008 | 0.02381 | 1 | 0.202381 | false | 0 | 0.035714 | 0.059524 | 0.345238 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3191404cbd9e515326e447e2206e0f73b067c5bc | 5,866 | py | Python | test/worker/net.py | ameserole/Naumachia | dc13c33c5fcf053c74dfce8351a696d28857fd9d | [
"MIT"
] | null | null | null | test/worker/net.py | ameserole/Naumachia | dc13c33c5fcf053c74dfce8351a696d28857fd9d | [
"MIT"
] | null | null | null | test/worker/net.py | ameserole/Naumachia | dc13c33c5fcf053c74dfce8351a696d28857fd9d | [
"MIT"
] | null | null | null | import fcntl
import os
import socket
import struct
import warnings
import subprocess
import logging
import base64
logger = logging.getLogger(__name__)
# Dummy socket used for fcntl functions
_socket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
class AddrMeta(type):
    @property
    def maxvalue(cls):
        return (0x1 << (cls.bytelen * 8)) - 1
class Addr(metaclass=AddrMeta):
    bytelen = 0

    def __init__(self, addr):
        self._str = None
        self._int = None
        self._bytes = None

        if isinstance(addr, type(self)):
            self._str = addr._str
            self._bytes = addr._bytes
            self._int = addr._int
        elif isinstance(addr, str):
            self._str = addr
        elif isinstance(addr, int):
            self._int = addr
        elif isinstance(addr, bytes):
            if len(addr) == self.bytelen:
                self._bytes = addr
            else:
                self._str = addr.decode('utf-8')
        else:
            raise ValueError('Cannot create {!s} from {!s}'.format(type(self), type(addr)))

    # Operations
    def __and__(self, other):
        return type(self)(int(self) & int(other))

    def __or__(self, other):
        return type(self)(int(self) | int(other))

    def __xor__(self, other):
        return type(self)(int(self) ^ int(other))

    def __invert__(self):
        return type(self)(int(self) ^ self.maxvalue)

    # Conversions
    def __str__(self):
        if self._str is None:
            self._str = self.bytes_to_str(bytes(self))
        return self._str

    def __int__(self):
        return int.from_bytes(bytes(self), byteorder='big')

    def __bytes__(self):
        if self._bytes is None:
            if self._str is not None:
                self._bytes = self.str_to_bytes(self._str)
            elif self._int is not None:
                self._bytes = self._int.to_bytes(self.bytelen, byteorder='big')
        return self._bytes

    def __repr__(self):
        return '<{0}.{1} {2!s}>'.format(__name__, type(self).__name__, self)
class Ip(Addr):
    bytelen = 4

    @staticmethod
    def bytes_to_str(b):
        return socket.inet_ntoa(b)

    @staticmethod
    def str_to_bytes(s):
        return socket.inet_aton(s)

    def slash(self):
        x, i = int(self), 0
        while x & 0x1 == 0:
            x >>= 1
            i += 1
        return 32 - i
class Mac(Addr):
    bytelen = 6

    @staticmethod
    def bytes_to_str(b):
        return ':'.join('%02x' % byte for byte in b)

    @staticmethod
    def str_to_bytes(s):
        return bytes.fromhex(s.replace(':', ''))
def _ifctl(ifname, code):
    if isinstance(ifname, str):
        ifname = ifname.encode('utf-8')
    return fcntl.ioctl(
        _socket.fileno(),
        code,
        struct.pack('256s', ifname[:15])
    )
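# _ifctl's 256-byte buffer stands in for a struct ifreq; kernel interface
# names are limited to 15 characters (IFNAMSIZ - 1), hence the truncation.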
def ifaddr(ifname):
    return Ip(_ifctl(ifname, 0x8915)[20:24])  # SIOCGIFADDR

def ifmask(ifname):
    return Ip(_ifctl(ifname, 0x891b)[20:24])  # SIOCGIFNETMASK

def ifhwaddr(ifname):
    return Mac(_ifctl(ifname, 0x8927)[18:24])  # SIOCGIFHWADDR

def cidr(ip, mask):
    return "{!s}/{:d}".format(ip, mask.slash())

def parsecidr(ipnet):
    ipstr, maskstr = ipnet.split('/')
    ip = Ip(ipstr)
    mask = Ip(0xffffffff ^ ((0x00000001 << (32-int(maskstr)))-1))
    return ip, mask

def ifcidr(ifname):
    return cidr(ifaddr(ifname), ifmask(ifname))
class OpenVpnError(Exception):
    def __init__(self, instance, msg):
        self.instance = instance
        super().__init__(msg)
class OpenVpn:
    exe = 'openvpn'
    initmsg = b'Initialization Sequence Completed'
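    # initmsg is the banner OpenVPN prints on stdout once the tunnel is fully
    # up; waitforinit() scans for it.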
    def __init__(self, **kwargs):
        if 'daemonize' in kwargs:
            warnings.warn("This class will not be able to close a daemonized tunnel", RuntimeWarning)
        self.options = kwargs
        self.initialized = False
        self._process = None
    def args(self):
        result = []
        for name, value in self.options.items():
            result.append('--{!s}'.format(name))
            # None is special to indicate the option has no value
            if value is not None:
                result.append(str(value))
        return result

    def check(self):
        if self._process is not None:
            self._process.poll()
            code = self._process.returncode
            if code is not None and code != 0:
                raise OpenVpnError(self, "`openvpn {:s}` exited with error code: {:d}".format(" ".join(self.args()), code))

    def running(self):
        return self._process is not None and self._process.poll() is None

    @staticmethod
    def maketun():
        os.makedirs('/dev/net', exist_ok=True)
        subprocess.run(['mknod', '/dev/net/tun', 'c', '10', '200'], check=True)

    def connect(self):
        if not os.path.exists('/dev/net/tun'):
            self.maketun()

        if not self.running():
            self.initialized = False
            self._process = subprocess.Popen(
                [self.exe] + self.args(),
                stdin=subprocess.PIPE,
                stdout=subprocess.PIPE,
                stderr=subprocess.PIPE
            )
            self.check()

    def disconnect(self):
        if self.running():
            self._process.terminate()
            os.waitpid(self._process.pid, 0)
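    # Scan stdout until OpenVPN prints the init banner; the for/else raises if
    # the process ends without ever printing it.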
    def waitforinit(self):
        if not self.initialized:
            for line in self._process.stdout:
                logger.debug("openvpn: %s", line.decode('utf-8').strip())
                if self.initmsg in line:
                    self.initialized = True
                    break
            else:
                self.check()
                raise OpenVpnError(self, "OpenVPN exited with code 0, but did not display init msg")

    def __enter__(self):
        self.connect()
        return self

    def __exit__(self, *args, **kwargs):
        self.disconnect()
| 27.283721 | 123 | 0.574668 | 717 | 5,866 | 4.524407 | 0.27894 | 0.025894 | 0.016646 | 0.020962 | 0.144883 | 0.091554 | 0.07799 | 0.058261 | 0.037916 | 0.037916 | 0 | 0.018836 | 0.303103 | 5,866 | 214 | 124 | 27.411215 | 0.774706 | 0.026253 | 0 | 0.096386 | 0 | 0 | 0.061185 | 0 | 0 | 0 | 0.007714 | 0 | 0 | 1 | 0.198795 | false | 0 | 0.048193 | 0.10241 | 0.457831 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
319239aac557dc3d968ccc908a828a9cd5002f12 | 2,161 | py | Python | kunrei.py | kosugi/alfred.romanizer | d2a3b4a9883f15101893e385f14e6dca115c1d7d | [
"BSD-2-Clause"
] | null | null | null | kunrei.py | kosugi/alfred.romanizer | d2a3b4a9883f15101893e385f14e6dca115c1d7d | [
"BSD-2-Clause"
] | null | null | null | kunrei.py | kosugi/alfred.romanizer | d2a3b4a9883f15101893e385f14e6dca115c1d7d | [
"BSD-2-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
basic_table = dict(map(lambda s: s.split(), u'''
あ a
い i
う u
え e
お o
か ka
き ki
く ku
け ke
こ ko
さ sa
し si
す su
せ se
そ so
た ta
ち ti
つ tu
て te
と to
な na
に ni
ぬ nu
ね ne
の no
は ha
ひ hi
ふ hu
へ he
ほ ho
ま ma
み mi
む mu
め me
も mo
や ya
ゆ yu
よ yo
ら ra
り ri
る ru
れ re
ろ ro
わ wa
を wo
ぁ a
ぃ i
ぅ u
ぇ e
ぉ o
が ga
ぎ gi
ぐ gu
げ ge
ご go
ざ za
じ zi
ず zu
ぜ ze
ぞ zo
だ da
ぢ di
づ du
で de
ど do
ば ba
び bi
ぶ bu
べ be
ぼ bo
ぱ pa
ぴ pi
ぷ pu
ぺ pe
ぽ po
きゃ kya
きゅ kyu
きょ kyo
しゃ sya
しゅ syu
しょ syo
ちゃ tya
ちゅ tyu
ちょ tyo
にゃ nya
にゅ nyu
にょ nyo
ひゃ hya
ひゅ hyu
ひょ hyo
みゃ mya
みゅ myu
みょ myo
りゃ rya
りゅ ryu
りょ ryo
ぎゃ gya
ぎゅ gyu
ぎょ gyo
じゃ zya
じゅ zyu
じょ zyo
でゃ dya
でゅ dyu
でょ dyo
びゃ bya
びゅ byu
びょ byo
ぴゃ pya
ぴゅ pyu
ぴょ pyo
クヮ kwa
グヮ gwa
ア a
イ i
ウ u
エ e
オ o
カ ka
キ ki
ク ku
ケ ke
コ ko
サ sa
シ si
ス su
セ se
ソ so
タ ta
チ ti
ツ tu
テ te
ト to
ナ na
ニ ni
ヌ nu
ネ ne
ノ no
ハ ha
ヒ hi
フ hu
ヘ he
ホ ho
マ ma
ミ mi
ム mu
メ me
モ mo
ヤ ya
ユ yu
ヨ yo
ラ ra
リ ri
ル ru
レ re
ロ ro
ワ wa
ヲ wo
ァ a
ィ i
ゥ u
ェ e
ォ o
ガ ga
ギ gi
グ gu
ゲ ge
ゴ go
ザ za
ジ zi
ズ zu
ゼ ze
ゾ zo
ダ da
ヂ di
ヅ du
デ de
ド do
バ ba
ビ bi
ブ bu
ベ be
ボ bo
パ pa
ピ pi
プ pu
ペ pe
ポ po
キャ kya
キュ kyu
キョ kyo
シャ sya
シュ syu
ショ syo
チャ tya
チュ tyu
チョ tyo
ニャ nya
ニュ nyu
ニョ nyo
ヒャ hya
ヒュ hyu
ヒョ hyo
ミャ mya
ミュ myu
ミョ myo
リャ rya
リュ ryu
リョ ryo
ギャ gya
ギュ gyu
ギョ gyo
ジャ zya
ジュ zyu
ジョ zyo
デャ dya
デュ dyu
デョ dyo
ビャ bya
ビュ byu
ビョ byo
ピャ pya
ピュ pyu
ピョ pyo
くゎ kwa
ぐゎ gwa
'''.strip(u'\n').split(u'\n')))
long_sound_table = dict(u'aâ iî uû eê oô'.split())
long_sounds = u'aa ii uu ee oo ou'.split()
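# normalize() post-processes a romanized string: 'ー' and doubled vowels
# (including 'ou') become circumflexed long vowels, ん/ン becomes n (with an
# apostrophe before vowels and y), and っ/ッ doubles the following consonant.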
def normalize(s):
    roman = u''
    l = len(s)
    n = 0
    while n < l:
        c1 = s[n]
        c2 = s[n:n+2]
        c3 = s[n+1:n+2]
        if roman and c1 == u'ー':
            c1 = u''
            if roman[-1] in u'aiueo':
                roman = roman[:-1] + long_sound_table[roman[-1]]
        elif c2 in long_sounds:
            c1 = long_sound_table[c1]
            n += 1
        elif c1 in u'んン':
            c1 = u'n'
            if c3 and c3 in u'aiueoy':
                c1 += u"'"
        elif c1 in u'っッ':
            if c3 in u'bcdfghjklmnpqrstvwxyz':
                c1 = c3
            else:
                c1 = u''
        roman += c1
        n += 1
    return roman
| 8.248092 | 64 | 0.553447 | 592 | 2,161 | 2.005068 | 0.650338 | 0.012637 | 0.035383 | 0.015164 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021707 | 0.381768 | 2,161 | 261 | 65 | 8.279693 | 0.866766 | 0.009718 | 0 | 0.015564 | 0 | 0 | 0.626286 | 0.009822 | 0 | 0 | 0 | 0 | 0 | 1 | 0.003891 | false | 0 | 0 | 0 | 0.007782 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3194a2997150c6f647d46dc4f5cbb7a6cd7d252d | 559 | py | Python | regtests/webclgl/call_external_method.py | bpmbank/PythonJS | 591a80afd8233fb715493591db2b68f1748558d9 | [
"BSD-3-Clause"
] | 319 | 2015-01-02T11:34:16.000Z | 2022-03-25T00:43:33.000Z | regtests/webclgl/call_external_method.py | bpmbank/PythonJS | 591a80afd8233fb715493591db2b68f1748558d9 | [
"BSD-3-Clause"
] | 10 | 2015-02-03T02:33:09.000Z | 2021-11-09T21:41:00.000Z | regtests/webclgl/call_external_method.py | bpmbank/PythonJS | 591a80afd8233fb715493591db2b68f1748558d9 | [
"BSD-3-Clause"
] | 61 | 2015-01-02T12:01:56.000Z | 2021-12-08T07:16:16.000Z | """external method"""
class myclass:
def __init__(self, i):
self.index = i
def get_index(self):
return self.index
def run(self, n):
self.intarray = new(Int16Array(n))
self.intarray[ self.index ] = 99
@returns( array=n )
@gpu.main
def gpufunc():
int* A = self.intarray
## GLSL compile error: `Index expression must be constant`
#int idx = self.get_index()
#return float( A[idx] )
return float( A[self.get_index()] )
return gpufunc()
def main():
m = myclass(10)
r = m.run(64)
print(r)
TestError( int(r[10])==99 ) | 18.032258 | 61 | 0.631485 | 84 | 559 | 4.119048 | 0.464286 | 0.078035 | 0.075145 | 0.104046 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.027211 | 0.211091 | 559 | 31 | 62 | 18.032258 | 0.75737 | 0.184258 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.052632 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3197d22a066fe34f613aab3ff51fd1a605e176ab | 2,895 | py | Python | 18.part2.py | elp2/advent_of_code_2018 | 0d359422dd04b0849481796005e97d05c30e9eb4 | [
"Apache-2.0"
] | 1 | 2021-12-02T15:19:36.000Z | 2021-12-02T15:19:36.000Z | 18.part2.py | elp2/advent_of_code_2018 | 0d359422dd04b0849481796005e97d05c30e9eb4 | [
"Apache-2.0"
] | null | null | null | 18.part2.py | elp2/advent_of_code_2018 | 0d359422dd04b0849481796005e97d05c30e9eb4 | [
"Apache-2.0"
] | null | null | null | from collections import defaultdict
def return_default():
return 0
REAL=open("18.txt").readlines()
SAMPLE=open("18.sample").readlines()
OPEN="."
TREE="|"
LUMBERYARD="#"
import copy
def safe_grid_get(grid, x, y, missing=None):
if x < 0 or y < 0:
return missing
if y >= len(grid):
return missing
if x >= len(grid[y]):
return missing
return grid[y][x]
def parse_lines(lines):
return list(map(lambda l: list(l.strip()), lines))
def next_sq(grid, x, y):
around = defaultdict(return_default)
for dy in [-1, 0, 1]:
for dx in [-1, 0, 1]:
if dx == 0 and dy == 0:
continue
a = safe_grid_get(grid, x + dx, y + dy)
if a is not None:
around[a] += 1
here = grid[y][x]
if here == OPEN:
if around[TREE] >= 3:
return TREE
else:
return OPEN
elif here == TREE:
if around[LUMBERYARD] >= 3:
return LUMBERYARD
else:
return TREE
else:
assert here == LUMBERYARD
if around[LUMBERYARD] >= 1 and around[TREE] >= 1:
return LUMBERYARD
else:
return OPEN
def resource_value(board):
lands = defaultdict(return_default)
for y in range(len(board)):
for x in range(len(board[0])):
lands[board[y][x]] += 1
return lands[TREE] * lands[LUMBERYARD]
def solve(lines, minutes):
cache = {}
old_board = parse_lines(lines)
for minute in range(minutes):
board = copy.deepcopy(old_board)
for y in range(len(board)):
for x in range(len(board[0])):
board[y][x] = next_sq(old_board, x, y)
old_board = board
key = "\n".join(map(lambda r: "".join(r), board))
# print(key)
if key in cache:
print(minute, cache[key])
else:
cache[key] = (minute, resource_value(board))
return resource_value(board)
sample = solve(SAMPLE, 10)
assert sample == 1147
print("*** SAMPLE PASSED ***")
# print(solve(REAL, 10000))
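# Editor's note (sketch of the reasoning): once a board state repeats, the
# state at any huge minute m can be read off the cycle instead of simulated:
#     state(m) == state(cycle_start + (m - cycle_start) % period)
# The table below records resource values the author logged; they repeat from
# minute 570 with period 28 (nmod), which the arithmetic after the table
# applies to m = 1000000000.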
loop = """598 570 191420
599 571 189168
600 572 185082
601 573 185227
602 574 185320
603 575 185790
604 576 186120
605 577 189956
606 578 190068
607 579 191080
608 580 190405 # too low
609 581 193795
610 582 190950
611 583 193569
612 584 194350
613 585 196308
614 586 195364
615 587 197911
616 588 199755
617 589 201144
618 590 201607
619 591 203580
620 592 201260
621 593 201950
622 594 200675 # TOO HIGH
623 595 202208
624 596 200151
625 597 198948
626 570 191420
627 571 189168
628 572 185082
629 573 185227
630 574 185320
631 575 185790
632 576 186120
633 577 189956
634 578 190068
635 579 191080
636 580 190405
637 581 193795"""
num = 1000000000
nmod = 28
for num in range(570, 638):
print(num, (num - 570) % nmod + 570)
num = 1000000000 - 1
print(num, (num - 570) % nmod + 570 + nmod) | 21.444444 | 57 | 0.601382 | 433 | 2,895 | 3.979215 | 0.408776 | 0.024376 | 0.023215 | 0.034823 | 0.088218 | 0.069646 | 0.04527 | 0.04527 | 0.04527 | 0.04527 | 0 | 0.270864 | 0.292228 | 2,895 | 135 | 58 | 21.444444 | 0.570034 | 0.012435 | 0 | 0.156522 | 0 | 0 | 0.231362 | 0 | 0 | 0 | 0 | 0 | 0.017391 | 1 | 0.052174 | false | 0.008696 | 0.017391 | 0.017391 | 0.191304 | 0.034783 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
319925dc3819c9097723899fe8aef60117e396cb | 817 | py | Python | src/validate_model.py | mareklinka/esk-form-scanner-model | 30af9e1c5d652b3310222bc55f92e964bc524f2e | [
"MIT"
] | null | null | null | src/validate_model.py | mareklinka/esk-form-scanner-model | 30af9e1c5d652b3310222bc55f92e964bc524f2e | [
"MIT"
] | null | null | null | src/validate_model.py | mareklinka/esk-form-scanner-model | 30af9e1c5d652b3310222bc55f92e964bc524f2e | [
"MIT"
] | null | null | null |
import data_providers as gen
import model_storage as storage
import numpy as np
import data_visualizer
import time
def evaluate(model_name):
"""
Evaluates the model stored in the specified file.
Parameters
----------
model_name : string
The name of the file to read the model from
"""
model = storage.load_model(model_name)
model.summary()
start = time.perf_counter()  # time.clock() was removed in Python 3.8
score = model.evaluate_generator(gen.finite_generator("data\\validation"), steps=30)
end = time.perf_counter()
print("Time per image: {} ".format((end-start)/300))
print (model.metrics_names)
print (score)
predictions = model.predict_generator(gen.finite_generator("data\\validation"), steps=30)
data_visualizer.draw_bounding_boxes("data\\validation", predictions, "data\\results") | 24.757576 | 93 | 0.69645 | 105 | 817 | 5.27619 | 0.47619 | 0.048736 | 0.064982 | 0.097473 | 0.173285 | 0.173285 | 0.173285 | 0.173285 | 0 | 0 | 0 | 0.010558 | 0.188494 | 817 | 33 | 94 | 24.757576 | 0.825038 | 0.172583 | 0 | 0 | 0 | 0 | 0.124224 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.3125 | 0 | 0.375 | 0.1875 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
31998f7e8bdabc90d6fe3933e2b885a9ef1b8e16 | 4,154 | py | Python | sdk/formrecognizer/azure-ai-formrecognizer/azure/ai/formrecognizer/_generated/v3_0_preview_1/models/_form_recognizer_client_enums.py | rsdoherty/azure-sdk-for-python | 6bba5326677468e6660845a703686327178bb7b1 | [
"MIT"
] | null | null | null | sdk/formrecognizer/azure-ai-formrecognizer/azure/ai/formrecognizer/_generated/v3_0_preview_1/models/_form_recognizer_client_enums.py | rsdoherty/azure-sdk-for-python | 6bba5326677468e6660845a703686327178bb7b1 | [
"MIT"
] | null | null | null | sdk/formrecognizer/azure-ai-formrecognizer/azure/ai/formrecognizer/_generated/v3_0_preview_1/models/_form_recognizer_client_enums.py | rsdoherty/azure-sdk-for-python | 6bba5326677468e6660845a703686327178bb7b1 | [
"MIT"
] | null | null | null | # coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------
from enum import Enum, EnumMeta
from six import with_metaclass
class _CaseInsensitiveEnumMeta(EnumMeta):
def __getitem__(self, name):
return super().__getitem__(name.upper())
def __getattr__(cls, name):
"""Return the enum member matching `name`
We use __getattr__ instead of descriptors or inserting into the enum
class' __dict__ in order to support `name` and `value` being both
properties for enum members (which live in the class' __dict__) and
enum members themselves.
"""
try:
return cls._member_map_[name.upper()]
except KeyError:
raise AttributeError(name)
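# Editor's note (illustrative, not generated code): the metaclass makes member
# lookup case-insensitive, so for the enums below e.g.
#   OperationStatus['running'] and OperationStatus.running
# both resolve to OperationStatus.RUNNING.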
class AnalyzeResultOperationStatus(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
"""Operation status.
"""
NOT_STARTED = "notStarted"
RUNNING = "running"
FAILED = "failed"
SUCCEEDED = "succeeded"
class ApiVersion(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
"""API version.
"""
TWO_THOUSAND_TWENTY_ONE09_30_PREVIEW = "2021-09-30-preview"
class ContentType(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
"""Content type for upload
"""
#: Content Type 'application/octet-stream'.
APPLICATION_OCTET_STREAM = "application/octet-stream"
#: Content Type 'application/pdf'.
APPLICATION_PDF = "application/pdf"
#: Content Type 'image/bmp'.
IMAGE_BMP = "image/bmp"
#: Content Type 'image/jpeg'.
IMAGE_JPEG = "image/jpeg"
#: Content Type 'image/png'.
IMAGE_PNG = "image/png"
#: Content Type 'image/tiff'.
IMAGE_TIFF = "image/tiff"
class DocumentFieldType(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
"""Semantic data type of the field value.
"""
STRING = "string"
DATE = "date"
TIME = "time"
PHONE_NUMBER = "phoneNumber"
NUMBER = "number"
INTEGER = "integer"
SELECTION_MARK = "selectionMark"
COUNTRY_REGION = "countryRegion"
CURRENCY = "currency"
SIGNATURE = "signature"
ARRAY = "array"
OBJECT = "object"
class DocumentSignatureType(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
"""Presence of signature.
"""
SIGNED = "signed"
UNSIGNED = "unsigned"
class DocumentTableCellKind(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
"""Table cell kind.
"""
CONTENT = "content"
ROW_HEADER = "rowHeader"
COLUMN_HEADER = "columnHeader"
STUB_HEAD = "stubHead"
DESCRIPTION = "description"
class LengthUnit(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
"""The unit used by the width, height, and boundingBox properties. For images, the unit is
"pixel". For PDF, the unit is "inch".
"""
PIXEL = "pixel"
INCH = "inch"
class OperationKind(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
"""Type of operation.
"""
DOCUMENT_MODEL_BUILD = "documentModelBuild"
DOCUMENT_MODEL_COMPOSE = "documentModelCompose"
DOCUMENT_MODEL_COPY_TO = "documentModelCopyTo"
class OperationStatus(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
"""Operation status.
"""
NOT_STARTED = "notStarted"
RUNNING = "running"
FAILED = "failed"
SUCCEEDED = "succeeded"
CANCELED = "canceled"
class SelectionMarkState(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
"""State of the selection mark.
"""
SELECTED = "selected"
UNSELECTED = "unselected"
class StringIndexType(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
"""Method used to compute string offset and length.
"""
TEXT_ELEMENTS = "textElements"
UNICODE_CODE_POINT = "unicodeCodePoint"
UTF16_CODE_UNIT = "utf16CodeUnit"
| 30.77037 | 94 | 0.667068 | 422 | 4,154 | 6.369668 | 0.462085 | 0.058036 | 0.147321 | 0.159598 | 0.259301 | 0.115327 | 0.090774 | 0.090774 | 0.090774 | 0.090774 | 0 | 0.005104 | 0.198122 | 4,154 | 134 | 95 | 31 | 0.801861 | 0.320173 | 0 | 0.121212 | 0 | 0 | 0.164754 | 0.008886 | 0 | 0 | 0 | 0 | 0 | 1 | 0.030303 | false | 0 | 0.030303 | 0.015152 | 0.954545 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
319ec31c5bec95f71fc86ec8dcab8ee33a9ec4c6 | 412 | py | Python | CeV - Gustavo Guanabara/exerc033.py | us19861229c/Meu-aprendizado-Python | 575c0714ac5377ff3122f4cb57952969e07ba89b | [
"Unlicense"
] | 1 | 2021-12-11T19:53:41.000Z | 2021-12-11T19:53:41.000Z | CeV - Gustavo Guanabara/exerc033.py | us19861229c/Meu-aprendizado-Python | 575c0714ac5377ff3122f4cb57952969e07ba89b | [
"Unlicense"
] | null | null | null | CeV - Gustavo Guanabara/exerc033.py | us19861229c/Meu-aprendizado-Python | 575c0714ac5377ff3122f4cb57952969e07ba89b | [
"Unlicense"
] | null | null | null | # 033: read three numbers and say which is the largest and which is the smallest:
print("Digite 3 numeros:")
n = int(input("Numero 1: "))
maiorn = n  # initialize both from the first number...
menorn = n  # ...so menorn is defined even when the first input is <= 0
n = int(input("Numero 2: "))
if n > maiorn:
maiorn = n
if n < menorn:
menorn = n
n = int(input("Numero 3: "))
if n > maiorn:
maiorn = n
if n < menorn:
menorn = n
print(f"o maior numero foi {maiorn} e o menor foi {menorn}")
| 20.6 | 60 | 0.601942 | 73 | 412 | 3.39726 | 0.342466 | 0.060484 | 0.108871 | 0.181452 | 0.471774 | 0.407258 | 0.258065 | 0.258065 | 0.258065 | 0.258065 | 0 | 0.02623 | 0.259709 | 412 | 19 | 61 | 21.684211 | 0.786885 | 0.140777 | 0 | 0.647059 | 0 | 0 | 0.274788 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.117647 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
31a75a5de6817edf26be7b64cce143ae2a37bc84 | 2,101 | py | Python | lib/autoconnect/example/test_server.py | simotek/autoconnect | 7d956e5bef0bcfe22b7f06061f8024df62b004ab | [
"FTL"
] | null | null | null | lib/autoconnect/example/test_server.py | simotek/autoconnect | 7d956e5bef0bcfe22b7f06061f8024df62b004ab | [
"FTL"
] | null | null | null | lib/autoconnect/example/test_server.py | simotek/autoconnect | 7d956e5bef0bcfe22b7f06061f8024df62b004ab | [
"FTL"
] | null | null | null | #
# test_server.py
#
# Copyright (C) 2001-2007 Oisin Mulvihill.
# Email: oisin.mulvihill@gmail.com
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library (see the file LICENSE.TXT); if not,
# write to the Free Software Foundation, Inc., 59 Temple Place,
# Suite 330, Boston, MA 02111-1307 USA.
#
# Date: 2001/12/06 15:54:30
#
import sys
import socket
import xmlrpclib
import autoconnect
from SimpleXMLRPCServer import SimpleXMLRPCServer
class Person:
def greet(self, name=''):
msg = "Hello, nice to meet you"
if name:
msg = "%s %s" % (msg, name)
return msg
class Server:
"""This server runs a simple XML-RPC server and its clients
automatically find it. It's magic ;)
"""
def __init__(self):
self.server = None
self.broadcaster = None
def main(self):
print "Starting XML-RPC server http://localhost:8000"
self.server = SimpleXMLRPCServer(("localhost", 8000))
self.server.register_instance(Person())
# Start the beacon to tell clients the server's XML-RPC URI:
print "Homing beacon running. Press Ctrl-C to exit."
self.broadcaster = autoconnect.beacon("http://localhost:8000")
try:
self.server.serve_forever()
except KeyboardInterrupt,e:
pass
self.server.server_close()
if __name__ == '__main__':
server = Server()
server.main()
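# Editor's sketch (hypothetical client side; the autoconnect discovery call is
# an assumption, not a documented API of this library):
#   uri = autoconnect.find()        # listen for the beacon broadcast
#   print xmlrpclib.ServerProxy(uri).greet("client")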
| 30.449275 | 79 | 0.643027 | 273 | 2,101 | 4.89011 | 0.564103 | 0.037453 | 0.026966 | 0.042697 | 0.074906 | 0.074906 | 0.050936 | 0 | 0 | 0 | 0 | 0.032916 | 0.277011 | 2,101 | 68 | 80 | 30.897059 | 0.845951 | 0.439791 | 0 | 0 | 0 | 0 | 0.159794 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.034483 | 0.172414 | null | null | 0.068966 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
31aa23acdb0243f1a7dd745198a7dc1050b82ef5 | 1,726 | py | Python | shongololo/Imet_serial.py | swyngaard/shongololo | 0d11378fb0e61cae5da0e09c9eed10fd9195f20d | [
"Apache-2.0"
] | null | null | null | shongololo/Imet_serial.py | swyngaard/shongololo | 0d11378fb0e61cae5da0e09c9eed10fd9195f20d | [
"Apache-2.0"
] | null | null | null | shongololo/Imet_serial.py | swyngaard/shongololo | 0d11378fb0e61cae5da0e09c9eed10fd9195f20d | [
"Apache-2.0"
] | null | null | null | import serial , time , os
import serial.tools.list_ports as port
import logging
sho_logger = logging.getLogger("shongololo_logger")
def open_imets(devices):
"""Tries to open as many imet device serial ports as there are
:return:
a list of socket handles
"""
imet_sockets = []
for d in range(len(devices)): # Create list of imet open ports
port = str(devices["Imet" + str(d)])
try:
ser = serial.Serial(port, baudrate=57600, parity=serial.PARITY_NONE, bytesize=serial.EIGHTBITS,stopbits=serial.STOPBITS_ONE, timeout=3.0, xonxoff=False)
imet_sockets.append(ser)
sho_logger.info("\n Successfully opened Imet device on port {}".format(devices["Imet" + str(d)]))
except serial.SerialException as e:
sho_logger.error(e)
sho_logger.critical("\nFailed to open imet on port {}".format(devices["Imet" + str(d)]))
return imet_sockets
def find_imets():
"""
Finds available imet serial ports and determines which device is attached to which /dev/ path
:rtype: object
:return:
A dictionary of devices labled as" imet<number starting from 0>
"""
device_dict = {}
imets = 0
portlist = list(port.comports())
for p in portlist:
sp = str(p)
if "FT230" in sp:
path = sp.split('-')[0]
device_dict["Imet" + str(imets)] = path[:-1]
imets = imets + 1
sho_logger.info("Found an Imet device on port: %s",path)
status=0
else:
pass
if imets==0:
sho_logger.error("No Imet devices found.")
else:
sho_logger.info("Found {} Imet devices".format(imets))
return device_dict
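# Editor's usage sketch (chains the two helpers defined above):
#   if __name__ == "__main__":
#       imets = open_imets(find_imets())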
| 31.381818 | 164 | 0.618192 | 231 | 1,726 | 4.536797 | 0.428571 | 0.060115 | 0.040076 | 0.042939 | 0.051527 | 0.051527 | 0.051527 | 0 | 0 | 0 | 0 | 0.013492 | 0.269988 | 1,726 | 54 | 165 | 31.962963 | 0.818254 | 0 | 0 | 0.057143 | 0 | 0 | 0.139213 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.028571 | 0.085714 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
31ad8e6cefd31380ff5fa1bdef5437fd290e10f2 | 380 | py | Python | rental_property/migrations/0011_alter_rentalunit_options.py | shumwe/rental-house-management-system | f97f22afa8bc2740ed08baa387c74b93e02fac0c | [
"MIT"
] | 1 | 2022-03-16T13:29:30.000Z | 2022-03-16T13:29:30.000Z | rental_property/migrations/0011_alter_rentalunit_options.py | shumwe/rental-house-management-system | f97f22afa8bc2740ed08baa387c74b93e02fac0c | [
"MIT"
] | null | null | null | rental_property/migrations/0011_alter_rentalunit_options.py | shumwe/rental-house-management-system | f97f22afa8bc2740ed08baa387c74b93e02fac0c | [
"MIT"
] | null | null | null | # Generated by Django 4.0.2 on 2022-03-15 22:43
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('rental_property', '0010_alter_rentalunit_status'),
]
operations = [
migrations.AlterModelOptions(
name='rentalunit',
options={'verbose_name_plural': 'Rental Houses'},
),
]
| 21.111111 | 61 | 0.628947 | 39 | 380 | 5.974359 | 0.820513 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.067616 | 0.260526 | 380 | 17 | 62 | 22.352941 | 0.761566 | 0.118421 | 0 | 0 | 1 | 0 | 0.255255 | 0.084084 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.090909 | 0 | 0.363636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
31ade7fa4d1318ceab82ad2826fc1a70514e9372 | 951 | py | Python | AxesFrame.py | Toyuri453/RSSP-Python-demo | 0adf92ad765b5a9334d7e2830611b98c8c4eb26d | [
"MIT"
] | 1 | 2021-05-22T18:06:49.000Z | 2021-05-22T18:06:49.000Z | AxesFrame.py | Toyuri453/RSSP-Python-demo | 0adf92ad765b5a9334d7e2830611b98c8c4eb26d | [
"MIT"
] | null | null | null | AxesFrame.py | Toyuri453/RSSP-Python-demo | 0adf92ad765b5a9334d7e2830611b98c8c4eb26d | [
"MIT"
] | null | null | null | import Terminal
class Axes():
def __init__(self, weak_terminal : 'Terminal.CartesianPoint'):
# self._initiator_x = weak_terminal._x
# self._initiator_y = weak_terminal._y
self._initiator = Terminal.CartesianPoint(0.0, 0.0, "UWB", "initiator")
self._weak_terminal = weak_terminal
self._terminal_set = {self._initiator._terminal_name : self._initiator, self._weak_terminal._terminal_name : self._weak_terminal}
self._terminal_measuring_point_set = {'Set' : {}} #Fill Later
print(self._terminal_set)
def add_terminal(self, terminal : 'Terminal.CartesianPoint'):
print("[DATA] Add Terminal {0} ".format(terminal))
self._terminal_set[terminal._terminal_name] = terminal
def show_terminal_names(self):
for key in self._terminal_set:
print("[DATA] Terminal Name: {0}, Color: {1}".format(key, self._terminal_set[key]._terminal_color)) | 50.052632 | 138 | 0.681388 | 114 | 951 | 5.263158 | 0.27193 | 0.14 | 0.125 | 0.08 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009223 | 0.201893 | 951 | 19 | 139 | 50.052632 | 0.781291 | 0.087277 | 0 | 0 | 0 | 0 | 0.144038 | 0.054309 | 0 | 0 | 0 | 0 | 0 | 1 | 0.214286 | false | 0 | 0.071429 | 0 | 0.357143 | 0.214286 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
31aed2d6bc8b935fd6033025428a672731040be9 | 1,898 | py | Python | course_app/api/views.py | maks-nurgazy/diploma-project | 66889488ffaa0269e1be2df6f6c76a3ca68a3cfb | [
"MIT"
] | null | null | null | course_app/api/views.py | maks-nurgazy/diploma-project | 66889488ffaa0269e1be2df6f6c76a3ca68a3cfb | [
"MIT"
] | null | null | null | course_app/api/views.py | maks-nurgazy/diploma-project | 66889488ffaa0269e1be2df6f6c76a3ca68a3cfb | [
"MIT"
] | null | null | null | import json
from rest_framework.generics import ListAPIView, get_object_or_404
from rest_framework.response import Response
from rest_framework.views import APIView
from rest_framework.viewsets import ModelViewSet
from course_app.api.serializers import CourseSerializer
from course_app.models import Course, Enrolled
from users.api.serializers import StudentSerializer
from users.models import Student
class CourseViewSet(ModelViewSet):
queryset = Course.objects.all()
serializer_class = CourseSerializer
class StudentCourseView(ListAPIView):
serializer_class = CourseSerializer
def get_queryset(self):
user = self.request.user
enrolls = user.enrolls
courses = []
for enroll in list(enrolls.all()):
courses.append(enroll.course)
return courses
class TeacherCourseView(ListAPIView):
serializer_class = CourseSerializer
def get_queryset(self):
teacher = self.request.user
return teacher.course_list
class CourseStudentsView(ListAPIView):
serializer_class = StudentSerializer
def get_queryset(self):
course_id = self.kwargs['course_id']
course = get_object_or_404(Course, id=course_id)
students = course.students
return students
class EnrollmentView(APIView):
def get(self, request, *args, **kwargs):
student = request.user
courses = Course.objects.filter(co_class=student.profile.st_class)
response = CourseSerializer(courses, many=True).data
return Response(response)
def post(self, request, *args, **kwargs):
courses = json.loads(request.body)['courses']
student = request.user
for course_id in courses:
Enrolled.objects.create(student=student, course_id=course_id)
return Response({"detail": "Enrolled"})
def put(self, request, *args, **kwargs):
pass
| 28.757576 | 74 | 0.714436 | 214 | 1,898 | 6.205607 | 0.313084 | 0.042169 | 0.051205 | 0.040663 | 0.090361 | 0.090361 | 0.090361 | 0.090361 | 0 | 0 | 0 | 0.003976 | 0.204953 | 1,898 | 65 | 75 | 29.2 | 0.876077 | 0 | 0 | 0.170213 | 0 | 0 | 0.015806 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.12766 | false | 0.021277 | 0.191489 | 0 | 0.638298 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
31b0a81b7e41eaa16ffc9d2a726e4978e07e1575 | 9,005 | py | Python | service/repository/repository_controller.py | yutiansut/cilantro | 3fa579999e7d5a6d6041ccc7e309c667fc7eac90 | [
"Apache-2.0"
] | 3 | 2019-09-04T12:40:33.000Z | 2021-12-28T16:33:27.000Z | service/repository/repository_controller.py | yutiansut/cilantro | 3fa579999e7d5a6d6041ccc7e309c667fc7eac90 | [
"Apache-2.0"
] | 97 | 2018-05-29T13:27:04.000Z | 2021-11-02T11:03:33.000Z | service/repository/repository_controller.py | yutiansut/cilantro | 3fa579999e7d5a6d6041ccc7e309c667fc7eac90 | [
"Apache-2.0"
] | 16 | 2018-04-25T11:39:21.000Z | 2019-12-16T14:37:39.000Z | import os
import json
import logging
import yaml
from flask import Blueprint, jsonify, send_file, request, redirect
from service.errors import ApiError
from utils.repository import generate_repository_path, \
list_objects_in_repository
from utils.list_dir import list_dir
repository_controller = Blueprint('repository', __name__)
repository_dir = os.environ['REPOSITORY_DIR']
metadata_file = 'meta.json'
representation_dir = 'data'
sub_object_dir = 'parts'
viewers_config = os.path.join(os.environ['CONFIG_DIR'], "viewers.yml")
with open(viewers_config, 'r', encoding="utf-8") as viewers_file:
viewers = yaml.safe_load(viewers_file)
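# Editor's note (assumed config shape): viewers.yml maps file extensions to
# viewer URL prefixes that handle_file_request() below concatenates with the
# repository path, e.g. (hypothetical value):
#   tif: "https://viewer.example.org/?file="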
@repository_controller.route('', methods=['GET'], strict_slashes=False)
def list_repository():
"""
List the ids of all cilantro objects in the repository.
Returns a list of the object_ids
.. :quickref: Repository Controller; List IDs of objects in the repository
**Example request**:
.. sourcecode:: http
GET /repository/ HTTP/1.1
**Example response**:
.. sourcecode:: http
HTTP/1.1 200 OK
["foo", "bar"]
:reqheader Accept: application/json
:resheader Content-Type: application/json
:status 200: OK
:return: JSON array containing the ids of all cilantro objects in the
repository
"""
return jsonify(list_objects_in_repository())
@repository_controller.route('/object/<path:object_id>', methods=['GET'],
strict_slashes=False)
def get_object(object_id):
"""
Retrieve a cilantro (sub)object in the repository folder.
Returns a JSON object containing metadata, representations and sub_objects
of the cilantro object. This can be a subobject as well.
.. :quickref: Repository Controller; Retrieve (sub)object in the repository
**Example request**:
.. sourcecode:: http
GET /repository/object/<object_id> HTTP/1.1
**Example response**:
.. sourcecode:: http
HTTP/1.1 200 OK
{
"metadata": {
"description": "[PDFs teilweise verfugbar]",
"identification": "year",
"number": "",
"ojs_id": "issue-test-188",
"volume": "",
"year": 2018
},
"representations": [
"origin"
],
"sub_objects": [
"part_0001",
"part_0002"
]
}
**Example response ERROR**:
.. sourcecode:: http
HTTP/1.1 404 NOT FOUND
{
"error": {
"code": "object_not_found",
"message": "No object with id test_object was found"
},
"success": false
}
:reqheader Accept: application/json
:param str object_id: The id of the object
:resheader Content-Type: application/json
:status 200: OK
:status 404: cilantro object was not found
:return: JSON object containing metadata, representations and sub_objects
of the cilantro (sub)object
"""
path = os.path.join(repository_dir, generate_repository_path(object_id))
if os.path.isdir(path):
with open(os.path.join(path, metadata_file)) as json_data:
metadata = json.load(json_data)
representations = list_dir(os.path.join(path, representation_dir),
sorted=True, ignore_not_found=True)
sub_objects = list_dir(os.path.join(path, sub_object_dir), sorted=True,
ignore_not_found=True)
return jsonify({
'metadata': metadata,
'representations': representations,
'sub_objects': sub_objects})
else:
raise ApiError("object_not_found",
f"No object with id {object_id} was found", 404)
@repository_controller.route('/representation/<path:object_id>/<rep_name>',
methods=['GET'], strict_slashes=False)
def get_representation(object_id, rep_name):
"""
Retrieve a representation of a cilantro (sub)object.
Returns a JSON array containing all files of the representation.
.. :quickref: Repository Controller; Retrieve a (sub)object representation
**Example request**:
.. sourcecode:: http
GET /repository/representation/<object_id>/<rep_name> HTTP/1.1
**Example response**:
.. sourcecode:: http
HTTP/1.1 200 OK
[
"merged.pdf"
]
**Example response ERROR**:
.. sourcecode:: http
HTTP/1.1 404 NOT FOUND
{
"error": {
"code": "representation_not_found",
"message": "No representation jpg for object with id
test_object was found"
},
"success": false
}
:reqheader Accept: application/json
:param str object_id: The id of the (sub) object
:param str rep_name: The name of the representation
:resheader Content-Type: application/json
:status 200: OK
:status 404: representation was not found
:return: JSON array containing all files of the representation
"""
path = os.path.join(repository_dir, generate_repository_path(object_id),
representation_dir, rep_name)
if os.path.isdir(path):
files = list_dir(path, sorted=True, ignore_not_found=True)
return jsonify(files)
else:
raise ApiError("representation_not_found",
f"No representation {rep_name} for object with "
f"id {object_id} was found", 404)
@repository_controller.route(
'/file/<path:object_id>/data/<path:rep_name>/<file>', methods=['GET'],
strict_slashes=False)
def get_file(object_id, rep_name, file):
"""
Retrieve a file from a representation of a cilantro (sub)object.
Returns the file's content
.. :quickref: Repository Controller; Retrieve a file from a representation
**Example request**:
.. sourcecode:: http
GET /repository/file/<object_id>/data/<rep_name>/<file> HTTP/1.1
Note that for sub-objects the 'object_id' looks like:
"<parent-object_id>/part_0001"
**Example response ERROR**:
.. sourcecode:: http
HTTP/1.1 404 NOT FOUND
{
"error": {
"code": "file_not_found",
"message": "No file test_file.jpg was found in representation
jpg of object test_object"
},
"success": false
}
:reqheader Accept: *
:param str object_id: The id of the object
:param str rep_name: The name of the representation
:param str file: The name of the file
:resheader Content-Type: *
:status 200: OK
:status 404: file was not found
:return: Downloadable file
"""
path = os.path.join(repository_dir, generate_repository_path(object_id),
representation_dir, rep_name, file)
if os.path.isfile(path):
return handle_file_request(path)
else:
raise ApiError("file_not_found",
f"No file {file} was found in representation {rep_name}"
f" of object {object_id}", 404)
@repository_controller.route('/file/<path:object_id>/<file>',
methods=['GET'], strict_slashes=False)
def get_meta_file(object_id, file):
"""
Retrieve a file from the root of a cilantro (sub)object.
Returns the file's content. Files on root level are normally metadata files.
.. :quickref: Repository Controller; Retrieve metadatafile of (sub)object
**Example request**:
.. sourcecode:: http
GET /repository/file/<object_id>/<file> HTTP/1.1
**Example response ERROR**:
.. sourcecode:: http
HTTP/1.1 404 NOT FOUND
{
"error": {
"code": "file_not_found",
"message": "No file test_file.jpg was found in object
test_object"
},
"success": false
}
:reqheader Accept: application/json
:param str object_id: The id of the object
:param str file: Name of the file
:resheader Content-Type: application/json
:status 200: OK
:status 404: file was not found
:return: Downloadable file
"""
path = os.path.join(repository_dir, generate_repository_path(object_id),
file)
if os.path.isfile(path):
return send_file(path)
else:
raise ApiError("file_not_found",
f"No file {file} was found in object {object_id}", 404)
def handle_file_request(path):
if request.headers.get('Accept') == '*/*':
return send_file(path)
elif request.accept_mimetypes.accept_html:
ext = os.path.splitext(path)[1][1:]
if ext in viewers:
url = viewers[ext] + path[len(repository_dir):]
return redirect(url, code=303)
return send_file(path)
| 28.769968 | 79 | 0.608329 | 1,063 | 9,005 | 5.008467 | 0.149577 | 0.039068 | 0.013524 | 0.024981 | 0.608377 | 0.560481 | 0.540195 | 0.477836 | 0.400263 | 0.326634 | 0 | 0.017034 | 0.289395 | 9,005 | 312 | 80 | 28.862179 | 0.814971 | 0.51327 | 0 | 0.246914 | 1 | 0 | 0.151314 | 0.045129 | 0 | 0 | 0 | 0 | 0 | 1 | 0.074074 | false | 0 | 0.098765 | 0 | 0.271605 | 0.024691 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
31b3246b48b5cc2ea21a0461162a64666ab485f1 | 4,676 | py | Python | genshin/models/genshin/chronicle/notes.py | thesadru/genshin.py | 806b8d0dd059a06605e66dead917fdf550a552bc | [
"MIT"
] | 63 | 2021-10-04T19:53:54.000Z | 2022-03-30T07:21:03.000Z | genshin/models/genshin/chronicle/notes.py | thesadru/genshin.py | 806b8d0dd059a06605e66dead917fdf550a552bc | [
"MIT"
] | 17 | 2021-11-16T20:42:52.000Z | 2022-03-31T10:11:52.000Z | genshin/models/genshin/chronicle/notes.py | thesadru/genshin.py | 806b8d0dd059a06605e66dead917fdf550a552bc | [
"MIT"
] | 10 | 2021-10-16T22:41:41.000Z | 2022-02-19T17:55:23.000Z | """Genshin chronicle notes."""
import datetime
import typing
import pydantic
from genshin.models.genshin import character
from genshin.models.model import Aliased, APIModel
__all__ = ["Expedition", "ExpeditionCharacter", "Notes"]
def _process_timedelta(time: typing.Union[int, datetime.timedelta, datetime.datetime]) -> datetime.datetime:
if isinstance(time, int):
time = datetime.datetime.fromtimestamp(time).astimezone()
if isinstance(time, datetime.timedelta):
time = datetime.datetime.now().astimezone() + time
if time < datetime.datetime(2000, 1, 1).astimezone():
delta = datetime.timedelta(seconds=int(time.timestamp()))
time = datetime.datetime.now().astimezone() + delta
time = time.replace(second=0, microsecond=0)
return time
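# Editor's sketch: _process_timedelta normalizes ints (unix timestamps or raw
# second counts), timedeltas, and datetimes into an aware datetime with the
# seconds/microseconds zeroed, e.g.
#   _process_timedelta(datetime.timedelta(hours=8))  # ~ now + 8 hours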
class ExpeditionCharacter(character.BaseCharacter):
"""Expedition character."""
class Expedition(APIModel):
"""Real-Time note expedition."""
character: ExpeditionCharacter = Aliased("avatar_side_icon")
status: typing.Literal["Ongoing", "Finished"]
remaining_time: datetime.timedelta = Aliased("remained_time")
@property
def finished(self) -> bool:
"""Whether the expedition has finished."""
return self.remaining_time <= datetime.timedelta(0)
@property
def completion_time(self) -> datetime.datetime:
return datetime.datetime.now().astimezone() + self.remaining_time
@pydantic.validator("character", pre=True)
def __complete_character(cls, v: typing.Any) -> ExpeditionCharacter:
if isinstance(v, str):
return ExpeditionCharacter(icon=v) # type: ignore
return v
class TransformerTimedelta(datetime.timedelta):
"""Transformer recovery time."""
@property
def timedata(self) -> typing.Tuple[int, int, int, int]:
seconds: int = super().seconds
days: int = super().days
hour, second = divmod(seconds, 3600)
minute, second = divmod(second, 60)
return days, hour, minute, second
@property
def hours(self) -> int:
return self.timedata[1]
@property
def minutes(self) -> int:
return self.timedata[2]
@property
def seconds(self) -> int:
return self.timedata[3]
class Notes(APIModel):
"""Real-Time notes."""
current_resin: int
max_resin: int
remaining_resin_recovery_time: datetime.timedelta = Aliased("resin_recovery_time")
current_realm_currency: int = Aliased("current_home_coin")
max_realm_currency: int = Aliased("max_home_coin")
remaining_realm_currency_recovery_time: datetime.timedelta = Aliased("home_coin_recovery_time")
completed_commissions: int = Aliased("finished_task_num")
max_commissions: int = Aliased("total_task_num")
claimed_commission_reward: bool = Aliased("is_extra_task_reward_received")
remaining_resin_discounts: int = Aliased("remain_resin_discount_num")
max_resin_discounts: int = Aliased("resin_discount_num_limit")
remaining_transformer_recovery_time: typing.Optional[TransformerTimedelta]
expeditions: typing.Sequence[Expedition]
max_expeditions: int = Aliased("max_expedition_num")
@property
def resin_recovery_time(self) -> datetime.datetime:
"""The remaining time until resin recovery in seconds."""
return datetime.datetime.now().astimezone() + self.remaining_resin_recovery_time
@property
def realm_currency_recovery_time(self) -> datetime.datetime:
"""The remaining time until realm currency recovery in seconds."""
return datetime.datetime.now().astimezone() + self.remaining_realm_currency_recovery_time
@property
def transformer_recovery_time(self) -> typing.Optional[datetime.datetime]:
"""The remaining time until realm currency recovery in seconds."""
if self.remaining_transformer_recovery_time is None:
return None
remaining = datetime.datetime.now().astimezone() + self.remaining_transformer_recovery_time
return remaining
@pydantic.root_validator(pre=True)
def __flatten_transformer(cls, values: typing.Dict[str, typing.Any]) -> typing.Dict[str, typing.Any]:
if "transformer_recovery_time" in values:
return values
if values.get("transformer") and values["transformer"]["obtained"]:
t = values["transformer"]["recovery_time"]
delta = TransformerTimedelta(days=t["Day"], hours=t["Hour"], minutes=t["Minute"], seconds=t["Second"])
values["remaining_transformer_recovery_time"] = delta
else:
values["remaining_transformer_recovery_time"] = None
return values
| 34.131387 | 114 | 0.700385 | 521 | 4,676 | 6.095969 | 0.232246 | 0.064232 | 0.065176 | 0.054786 | 0.266058 | 0.127834 | 0.11461 | 0.099496 | 0.099496 | 0.077771 | 0 | 0.004744 | 0.188623 | 4,676 | 136 | 115 | 34.382353 | 0.832367 | 0.073139 | 0 | 0.126437 | 0 | 0 | 0.105877 | 0.045709 | 0 | 0 | 0 | 0 | 0 | 1 | 0.137931 | false | 0 | 0.057471 | 0.045977 | 0.609195 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
31ba54fbc1b1ed1f7e053b99d91ae0c4606e4d0f | 314 | py | Python | pydashlite/arrays/sum_by.py | glowlex/pydashlite | cbc96478fa610aeae95b5584b406aa0c35b89db1 | [
"MIT"
] | null | null | null | pydashlite/arrays/sum_by.py | glowlex/pydashlite | cbc96478fa610aeae95b5584b406aa0c35b89db1 | [
"MIT"
] | null | null | null | pydashlite/arrays/sum_by.py | glowlex/pydashlite | cbc96478fa610aeae95b5584b406aa0c35b89db1 | [
"MIT"
] | null | null | null | from typing import Callable, Iterable, Optional, TypeVar
T = TypeVar('T')
Num = TypeVar('Num', int, float)
def sumBy(array: Iterable[T], iteratee: Optional[Callable[[T], Num]] = None, start: Num = 0) -> Num:
if iteratee is None:
return sum([y for y in array], start)
return sum([iteratee(y) for y in array], start)
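# Editor's usage sketch:
#   sumBy([1, 2, 3])                  # -> 6
#   sumBy(["a", "bb"], iteratee=len)  # -> 3
#   sumBy([1.5, 2.5], start=1.0)      # -> 5.0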
| 28.545455 | 90 | 0.646497 | 49 | 314 | 4.142857 | 0.489796 | 0.078818 | 0.049261 | 0.068966 | 0.167488 | 0.167488 | 0 | 0 | 0 | 0 | 0 | 0.004016 | 0.207006 | 314 | 10 | 91 | 31.4 | 0.811245 | 0 | 0 | 0 | 0 | 0 | 0.012739 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.142857 | 0 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
31ba5edab7671efdaef9d530b3fadbb3b92a5249 | 344 | py | Python | Ten_Most_Common_Words.py | mcjohnchristopher/Python_Samples | 738f3b7d9baa7f4e396647f380118eba66ea645c | [
"CC0-1.0"
] | null | null | null | Ten_Most_Common_Words.py | mcjohnchristopher/Python_Samples | 738f3b7d9baa7f4e396647f380118eba66ea645c | [
"CC0-1.0"
] | null | null | null | Ten_Most_Common_Words.py | mcjohnchristopher/Python_Samples | 738f3b7d9baa7f4e396647f380118eba66ea645c | [
"CC0-1.0"
] | null | null | null | # Count word frequencies in a file and print the ten most common words.
fhand = open("romeo.txt")
counts = dict()
for line in fhand:
    words = line.split()
    for word in words:
        counts[word] = counts.get(word, 0) + 1
lst = list()
for key, val in counts.items():
    lst.append((val, key))  # flip (word, count) so sorting orders by count
lst.sort(reverse=True)
for val, key in lst[:10]:
    print(key, val)
# Using the sorted function instead:
# print(sorted(((v, k) for k, v in counts.items()), reverse=True)[:10]) | 20.235294 | 38 | 0.622093 | 60 | 344 | 3.566667 | 0.55 | 0.056075 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014925 | 0.22093 | 344 | 17 | 39 | 20.235294 | 0.783582 | 0.061047 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1
31bc57d9152ec85878460b40dfe42e1115dfd96e | 615 | py | Python | src/grpc_client.py | thealphadollar/py-grpcio-pg | aed6de047e4843f3bdf86184a0a2c5a1ecd6beb1 | [
"MIT"
] | null | null | null | src/grpc_client.py | thealphadollar/py-grpcio-pg | aed6de047e4843f3bdf86184a0a2c5a1ecd6beb1 | [
"MIT"
] | null | null | null | src/grpc_client.py | thealphadollar/py-grpcio-pg | aed6de047e4843f3bdf86184a0a2c5a1ecd6beb1 | [
"MIT"
] | null | null | null | import grpc
from consts import PORT, SERVER_CERT
from grpc_generated_files import api_pb2, api_pb2_grpc
def main(stub):
request = api_pb2.ApiRequest(
name="Shivam",
message="Hey there!"
)
response = stub.ApiEndpoint(request)
print(response)
if __name__ == "__main__":
with open(SERVER_CERT, 'rb') as f:
server_cert = f.read()
creds = grpc.ssl_channel_credentials(server_cert)
# the server IP should be in the common name of the certificate
channel = grpc.secure_channel(f'localhost:{PORT}', creds)
stub = api_pb2_grpc.ApiStub(channel)
main(stub)
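# Editor's sketch (not part of the original client): the matching secure server
# would pair SERVER_CERT with its private key; SERVER_KEY and MyApiServicer are
# hypothetical names, and the add_ApiServicer_to_server registrar is assumed
# from the ApiStub naming above.
#   from concurrent import futures
#   server = grpc.server(futures.ThreadPoolExecutor(max_workers=4))
#   api_pb2_grpc.add_ApiServicer_to_server(MyApiServicer(), server)
#   creds = grpc.ssl_server_credentials(
#       [(open(SERVER_KEY, 'rb').read(), open(SERVER_CERT, 'rb').read())])
#   server.add_secure_port(f'[::]:{PORT}', creds)
#   server.start(); server.wait_for_termination()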
| 25.625 | 67 | 0.692683 | 85 | 615 | 4.741176 | 0.541176 | 0.099256 | 0.049628 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008247 | 0.211382 | 615 | 23 | 68 | 26.73913 | 0.82268 | 0.099187 | 0 | 0 | 1 | 0 | 0.076087 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0.176471 | 0 | 0.235294 | 0.058824 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
31be5bcba5067c3d0f88dba211c9dc9337d0bf13 | 2,560 | py | Python | src/Cogs/InfoCog.py | kodyVS/Discord-Bot-Development | 389bf69871adbe289f162ddbeeaf681023ca1f02 | [
"MIT"
] | 5 | 2020-05-27T20:03:45.000Z | 2020-06-24T11:27:26.000Z | src/Cogs/InfoCog.py | kodyVS/Discord-Bot-Development | 389bf69871adbe289f162ddbeeaf681023ca1f02 | [
"MIT"
] | 11 | 2020-05-28T10:56:26.000Z | 2020-07-02T13:38:02.000Z | src/Cogs/InfoCog.py | kodyVS/Discord-Bot-Development | 389bf69871adbe289f162ddbeeaf681023ca1f02 | [
"MIT"
] | 3 | 2020-05-28T20:31:02.000Z | 2020-06-17T23:51:51.000Z | from discord.ext import commands
import discord
import requests
from bs4 import BeautifulSoup
# work in progress! more languages welcome!
class InfoCog(commands.Cog):
def __init__(self, bot):
self.bot = bot
@commands.command(name = 'docs', brief = 'programming language documentation', description = 'documentation for languages, access by calling `.docs <language> <query>`', aliases = ['documentation', 'info'])
async def docs(self, ctx, language: str, query):
# access docs based on language
if language == 'python' or language == 'python3':
full_link = 'https://docs.python.org/3/genindex-all.html'
page = requests.get(full_link).content
soup = BeautifulSoup(page, 'html.parser')
link_descriptions = []
for link in soup.findAll('a'):
if query in link.contents[0]:
link_descriptions.append(f"[{link.contents[0]}](https://docs.python.org/3/{link['href']})")
link_descriptions = list(dict.fromkeys(link_descriptions))
link_descriptions = link_descriptions[:10]
### TODO: multi-lingual docs support (devdocs.io?)
### TODO: faster searching (current 4-5 secs)
### TODO: filter results -> currently only pick top ten, and there are some odd results as well
embed = discord.Embed(title="Python 3 Docs", color = 0x00ff00)
embed.add_field(name=f'{len(link_descriptions)} results found for `{query}` :', value='\n'.join(
link_descriptions), inline=False)
embed.set_thumbnail(url=
'https://upload.wikimedia.org/wikipedia/commons/thumb/c/c3/Python-logo-notext.svg/240px-Python-logo-notext.svg.png')
await ctx.send(embed=embed)
@commands.command(name='github', brief = 'view top 10 daily github repos', description = 'see the names and descriptions of the top x github repos today with `.github x` (default 10)', aliases=['gh'])
async def github(self, ctx, amount: int = 10):
'''Gets the GitHub first < amount > repositories without embeds'''
page = requests.get(
'https://github-trending-api.now.sh/repositories?q=sort=stars&order=desc&since=daily')
response = [
f"{entry['description']}: {'<' + entry['url'] + '>'}\n" for entry in page.json()[:amount]]
embed = discord.Embed(
title=f"**GitHub's top {str(amount)} today**", description='\n'.join(response), color=0x00ff00)
await ctx.send(embed=embed)
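# Editor's usage sketch (invocations implied by the command definitions above):
#   .docs python zip   -> links matching 'zip' in the Python 3 docs index
#   .github 5          -> top 5 trending GitHub repositories today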
| 49.230769 | 210 | 0.632031 | 316 | 2,560 | 5.06962 | 0.506329 | 0.0799 | 0.02372 | 0.022472 | 0.051186 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01576 | 0.231641 | 2,560 | 51 | 211 | 50.196078 | 0.798678 | 0.098438 | 0 | 0.058824 | 0 | 0.117647 | 0.333184 | 0.021076 | 0 | 0 | 0.007175 | 0.019608 | 0 | 1 | 0.029412 | false | 0 | 0.117647 | 0 | 0.176471 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
31c6e6ace01eea05877a86d1f6316d5a911da292 | 588 | py | Python | test/show-cifar10.py | tom01h/deep-learning-from-scratch | acb3c31976cd736b4abd21c3e8ab81c3bf0eb9bb | [
"MIT"
] | 3 | 2018-10-11T16:19:18.000Z | 2022-01-16T07:48:06.000Z | test/show-cifar10.py | tom01h/deep-learning-from-scratch | acb3c31976cd736b4abd21c3e8ab81c3bf0eb9bb | [
"MIT"
] | null | null | null | test/show-cifar10.py | tom01h/deep-learning-from-scratch | acb3c31976cd736b4abd21c3e8ab81c3bf0eb9bb | [
"MIT"
] | null | null | null | # coding: utf-8
import sys, os
sys.path.append(os.pardir)  # setting so files in the parent directory can be imported
import numpy as np
from dataset.cifar10 import load_cifar10
from PIL import Image
np.set_printoptions(threshold=100)
(x_train, t_train), (x_test, t_test) = load_cifar10(flatten=False)
sample_image = x_test[0:100].reshape((10, 10, 3, 32, 32)).transpose((0, 3, 1, 4, 2)).reshape((320, 320, 3)) # tile the first 100 images into a 10x10 grid
Image.fromarray(np.uint8(sample_image*255)).save('sample.png')
print(t_test[0:100].reshape(10,10))
#pil_img = Image.fromarray(np.uint8(sample_image*255))
#pil_img.show()
| 34.588235 | 128 | 0.727891 | 96 | 588 | 4.3125 | 0.510417 | 0.07971 | 0.038647 | 0.072464 | 0.26087 | 0.26087 | 0.169082 | 0 | 0 | 0 | 0 | 0.104046 | 0.117347 | 588 | 16 | 129 | 36.75 | 0.693642 | 0.210884 | 0 | 0 | 0 | 0 | 0.022624 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.4 | 0.2 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
31c7910d7253d24e22e70937e36be79e678386eb | 10,533 | py | Python | PWWS/fool.py | ForeverZyh/ASCC | 2d76d679889953501c469221a37d486e7ee42ded | [
"MIT"
] | 21 | 2021-03-22T07:14:29.000Z | 2022-03-24T02:05:25.000Z | PWWS/fool.py | ForeverZyh/ASCC | 2d76d679889953501c469221a37d486e7ee42ded | [
"MIT"
] | 2 | 2021-04-07T11:31:01.000Z | 2022-01-10T03:41:10.000Z | PWWS/fool.py | ForeverZyh/ASCC | 2d76d679889953501c469221a37d486e7ee42ded | [
"MIT"
] | 4 | 2021-05-05T18:44:13.000Z | 2021-07-29T03:09:50.000Z | # coding: utf-8
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import sys
import argparse
import os
import numpy as np
from read_files import split_imdb_files, split_yahoo_files, split_agnews_files
from word_level_process import word_process, get_tokenizer
from char_level_process import char_process
from neural_networks import word_cnn, char_cnn, bd_lstm, lstm
from adversarial_tools import ForwardGradWrapper, ForwardGradWrapper_pytorch, adversarial_paraphrase  # ForwardGradWrapper_pytorch is called below; assumed to live in adversarial_tools
import tensorflow as tf
from keras import backend as K
import time
from unbuffered import Unbuffered
sys.stdout = Unbuffered(sys.stdout)
config = tf.ConfigProto(allow_soft_placement=True)
config.gpu_options.allow_growth = True
K.set_session(tf.Session(config=config))
# os.environ["CUDA_VISIBLE_DEVICES"] = "1"
parser = argparse.ArgumentParser(
description='Craft adversarial examples for a text classifier.')
parser.add_argument('--clean_samples_cap',
help='Amount of clean(test) samples to fool',
type=int, default=1000)
parser.add_argument('-m', '--model',
help='The model of text classifier',
choices=['word_cnn', 'char_cnn', 'word_lstm', 'word_bdlstm'],
default='word_cnn')
parser.add_argument('-d', '--dataset',
help='Data set',
choices=['imdb', 'agnews', 'yahoo'],
default='imdb')
parser.add_argument('-l', '--level',
help='The level of process dataset',
choices=['word', 'char'],
default='word')
def write_origin_input_texts(origin_input_texts_path, test_texts, test_samples_cap=None):
if test_samples_cap is None:
test_samples_cap = len(test_texts)
with open(origin_input_texts_path, 'a') as f:
for i in range(test_samples_cap):
f.write(test_texts[i] + '\n')
def fool_text_classifier():
clean_samples_cap = args.clean_samples_cap # 1000
print('clean_samples_cap:', clean_samples_cap)
# get tokenizer
dataset = args.dataset
tokenizer = get_tokenizer(opt)
# Read data set
x_test = y_test = None
test_texts = None
if dataset == 'imdb':
train_texts, train_labels, dev_texts, dev_labels, test_texts, test_labels = split_imdb_files(opt)
if args.level == 'word':
x_train, y_train, x_test, y_test = word_process(train_texts, train_labels, test_texts, test_labels, dataset)
elif args.level == 'char':
x_train, y_train, x_test, y_test = char_process(train_texts, train_labels, test_texts, test_labels, dataset)
elif dataset == 'agnews':
train_texts, train_labels, test_texts, test_labels = split_agnews_files()
if args.level == 'word':
x_train, y_train, x_test, y_test = word_process(train_texts, train_labels, test_texts, test_labels, dataset)
elif args.level == 'char':
x_train, y_train, x_test, y_test = char_process(train_texts, train_labels, test_texts, test_labels, dataset)
elif dataset == 'yahoo':
train_texts, train_labels, test_texts, test_labels = split_yahoo_files()
if args.level == 'word':
x_train, y_train, x_test, y_test = word_process(train_texts, train_labels, test_texts, test_labels, dataset)
elif args.level == 'char':
x_train, y_train, x_test, y_test = char_process(train_texts, train_labels, test_texts, test_labels, dataset)
# Write clean examples into a txt file
clean_texts_path = r'./fool_result/{}/clean_{}.txt'.format(dataset, str(clean_samples_cap))
if not os.path.isfile(clean_texts_path):
write_origin_input_texts(clean_texts_path, test_texts)
# Select the model and load the trained weights
assert args.model[:4] == args.level
model = None
if args.model == "word_cnn":
model = word_cnn(dataset)
elif args.model == "word_bdlstm":
model = bd_lstm(dataset)
elif args.model == "char_cnn":
model = char_cnn(dataset)
elif args.model == "word_lstm":
model = lstm(dataset)
model_path = r'./runs/{}/{}.dat'.format(dataset, args.model)
model.load_weights(model_path)
print('model path:', model_path)
# evaluate classification accuracy of model on clean samples
scores_origin = model.evaluate(x_test[:clean_samples_cap], y_test[:clean_samples_cap])
print('clean samples origin test_loss: %f, accuracy: %f' % (scores_origin[0], scores_origin[1]))
all_scores_origin = model.evaluate(x_test, y_test)
print('all origin test_loss: %f, accuracy: %f' % (all_scores_origin[0], all_scores_origin[1]))
grad_guide = ForwardGradWrapper(model)
classes_prediction = grad_guide.predict_classes(x_test[: clean_samples_cap])
print('Crafting adversarial examples...')
successful_perturbations = 0
failed_perturbations = 0
sub_rate_list = []
NE_rate_list = []
start_cpu = time.process_time()  # time.clock() was removed in Python 3.8; process_time() keeps the CPU-time meaning
adv_text_path = r'./fool_result/{}/{}/adv_{}.txt'.format(dataset, args.model, str(clean_samples_cap))
change_tuple_path = r'./fool_result/{}/{}/change_tuple_{}.txt'.format(dataset, args.model, str(clean_samples_cap))
file_1 = open(adv_text_path, "a")
file_2 = open(change_tuple_path, "a")
for index, text in enumerate(test_texts[: clean_samples_cap]):
sub_rate = 0
NE_rate = 0
if np.argmax(y_test[index]) == classes_prediction[index]:
# If the ground_true label is the same as the predicted label
adv_doc, adv_y, sub_rate, NE_rate, change_tuple_list = adversarial_paraphrase(input_text=text,
true_y=np.argmax(y_test[index]),
grad_guide=grad_guide,
tokenizer=tokenizer,
dataset=dataset,
level=args.level)
if adv_y != np.argmax(y_test[index]):
successful_perturbations += 1
print('{}. Successful example crafted.'.format(index))
else:
failed_perturbations += 1
print('{}. Failure.'.format(index))
text = adv_doc
sub_rate_list.append(sub_rate)
NE_rate_list.append(NE_rate)
file_2.write(str(index) + str(change_tuple_list) + '\n')
file_1.write(text + " sub_rate: " + str(sub_rate) + "; NE_rate: " + str(NE_rate) + "\n")
end_cpu = time.process_time()
print('CPU second:', end_cpu - start_cpu)
mean_sub_rate = sum(sub_rate_list) / len(sub_rate_list)
mean_NE_rate = sum(NE_rate_list) / len(NE_rate_list)
print('mean substitution rate:', mean_sub_rate)
print('mean NE rate:', mean_NE_rate)
file_1.close()
file_2.close()
def fool_text_classifier_pytorch(model, dataset='imdb'):
clean_samples_cap = 100
print('clean_samples_cap:', clean_samples_cap)
# get tokenizer
tokenizer = get_tokenizer(opt)
# Read data set
x_test = y_test = None
test_texts = None
if dataset == 'imdb':
train_texts, train_labels, dev_texts, dev_labels, test_texts, test_labels = split_imdb_files(opt)
x_train, y_train, x_test, y_test = word_process(train_texts, train_labels, test_texts, test_labels, dataset)
elif dataset == 'agnews':
train_texts, train_labels, test_texts, test_labels = split_agnews_files()
x_train, y_train, x_test, y_test = word_process(train_texts, train_labels, test_texts, test_labels, dataset)
elif dataset == 'yahoo':
train_texts, train_labels, test_texts, test_labels = split_yahoo_files()
x_train, y_train, x_test, y_test = word_process(train_texts, train_labels, test_texts, test_labels, dataset)
grad_guide = ForwardGradWrapper_pytorch(model)
classes_prediction = grad_guide.predict_classes(x_test[: clean_samples_cap])
print('Crafting adversarial examples...')
successful_perturbations = 0
failed_perturbations = 0
sub_rate_list = []
NE_rate_list = []
start_cpu = time.process_time()
adv_text_path = r'./fool_result/{}/adv_{}.txt'.format(dataset, str(clean_samples_cap))
change_tuple_path = r'./fool_result/{}/change_tuple_{}.txt'.format(dataset, str(clean_samples_cap))
file_1 = open(adv_text_path, "a")
file_2 = open(change_tuple_path, "a")
for index, text in enumerate(test_texts[: clean_samples_cap]):
sub_rate = 0
NE_rate = 0
if np.argmax(y_test[index]) == classes_prediction[index]:
# If the ground_true label is the same as the predicted label
adv_doc, adv_y, sub_rate, NE_rate, change_tuple_list = adversarial_paraphrase(input_text=text,
true_y=np.argmax(y_test[index]),
grad_guide=grad_guide,
tokenizer=tokenizer,
dataset=dataset,
level='word')
if adv_y != np.argmax(y_test[index]):
successful_perturbations += 1
print('{}. Successful example crafted.'.format(index))
else:
failed_perturbations += 1
print('{}. Failure.'.format(index))
text = adv_doc
sub_rate_list.append(sub_rate)
NE_rate_list.append(NE_rate)
file_2.write(str(index) + str(change_tuple_list) + '\n')
file_1.write(text + " sub_rate: " + str(sub_rate) + "; NE_rate: " + str(NE_rate) + "\n")
end_cpu = time.process_time()
print('CPU second:', end_cpu - start_cpu)
mean_sub_rate = sum(sub_rate_list) / len(sub_rate_list)
mean_NE_rate = sum(NE_rate_list) / len(NE_rate_list)
print('mean substitution rate:', mean_sub_rate)
print('mean NE rate:', mean_NE_rate)
file_1.close()
file_2.close()
if __name__ == '__main__':
args = parser.parse_args()
fool_text_classifier()
| 46.606195 | 122 | 0.619102 | 1,334 | 10,533 | 4.541979 | 0.142429 | 0.023766 | 0.047037 | 0.051989 | 0.66199 | 0.655554 | 0.628817 | 0.623205 | 0.623205 | 0.60472 | 0 | 0.005531 | 0.279123 | 10,533 | 225 | 123 | 46.813333 | 0.79244 | 0.035792 | 0 | 0.579787 | 0 | 0 | 0.097801 | 0.015873 | 0 | 0 | 0 | 0 | 0.005319 | 1 | 0.015957 | false | 0 | 0.085106 | 0 | 0.101064 | 0.095745 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
31d09d41c952173a6ae2b73dccad4ea1fbc25f01 | 722 | py | Python | compress.py | willemwouters/PhotoboothPi | 7ef65d1411af15ea51e23ea8ddbd598affd2680d | [
"Beerware"
] | null | null | null | compress.py | willemwouters/PhotoboothPi | 7ef65d1411af15ea51e23ea8ddbd598affd2680d | [
"Beerware"
] | null | null | null | compress.py | willemwouters/PhotoboothPi | 7ef65d1411af15ea51e23ea8ddbd598affd2680d | [
"Beerware"
] | null | null | null | import os
import time
import sys
if len(sys.argv) == 1:  # '==' compares values; 'is 1' relied on int identity
path="/home/pi/storage/"
else:
path=sys.argv[1]
try:
arr=[]
for filename in os.listdir(path):
if("2018-09" in filename):
arr.append(filename)
for f in arr:
filen = os.path.splitext(f)[0]
if(("%s.h264" % filen) in arr) and (("%s.mp3" % filen) in arr and ("%s.mp4" % filen) not in arr):
if(("%s.h264" % filen) == f):
time.sleep(1)
os.system("ffmpeg -i %s -i %s -c:v copy -c:a aac -strict experimental %s" % (path + f, path + filen + ".mp3", path + filen + ".mp4"))
os.system("rm %s %s" % (path + filen + ".mp3", path + f))
except:
print "d" | 30.083333 | 149 | 0.50831 | 111 | 722 | 3.306306 | 0.432432 | 0.054496 | 0.038147 | 0.065395 | 0.076294 | 0 | 0 | 0 | 0 | 0 | 0 | 0.041833 | 0.304709 | 722 | 24 | 150 | 30.083333 | 0.689243 | 0 | 0 | 0 | 0 | 0.047619 | 0.182573 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.142857 | null | null | 0.047619 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
31d79e6d0a59cc3302d9155c1c4c15215d0a9e1b | 1,387 | py | Python | pygromos/tests/test_submission/test_hpc_queuing_submission_scheduling.py | pultar/PyGromosTools | 3c104c560c2e654972a036e2060b120ade96f655 | [
"MIT"
] | 13 | 2021-03-17T09:29:37.000Z | 2022-01-14T20:42:16.000Z | pygromos/tests/test_submission/test_hpc_queuing_submission_scheduling.py | pultar/PyGromosTools | 3c104c560c2e654972a036e2060b120ade96f655 | [
"MIT"
] | 185 | 2021-03-03T14:24:55.000Z | 2022-03-31T18:39:29.000Z | pygromos/tests/test_submission/test_hpc_queuing_submission_scheduling.py | pultar/PyGromosTools | 3c104c560c2e654972a036e2060b120ade96f655 | [
"MIT"
] | 13 | 2021-03-03T14:18:06.000Z | 2022-02-17T09:48:55.000Z | import unittest, tempfile
from pygromos.simulations.hpc_queuing.job_scheduling.schedulers import simulation_scheduler
from pygromos.data.simulation_parameters_templates import template_md
from pygromos.data.topology_templates import blank_topo_template
from pygromos.simulations.hpc_queuing.submission_systems import DUMMY
from pygromos.files.gromos_system.gromos_system import Gromos_System
from pygromos.tests.in_testfiles import in_test_file_path
from pygromos.tests.test_files import out_test_root_dir
class test_MD_scheduler(unittest.TestCase):
submissionSystem = DUMMY
def setUp(self) -> None:
self.tmp_test_dir = tempfile.mkdtemp(dir=out_test_root_dir, prefix="scheduling_Dummy_")
def test_do(self):
in_cnf = in_test_file_path+"/cnf/in_cnf1.cnf"
out_dir_path = self.tmp_test_dir
in_simSystem = Gromos_System(system_name="test_do", work_folder=out_dir_path,
in_top_path=blank_topo_template, in_cnf_path=in_cnf, in_imd_path=template_md,
in_gromosXX_bin_dir=None, in_gromosPP_bin_dir=None)
submission_system = self.submissionSystem()
simulation_scheduler.do(in_simSystem=in_simSystem, out_dir_path=out_dir_path,
submission_system=submission_system,
simulation_run_num=2, verbose=True)
| 46.233333 | 114 | 0.746215 | 184 | 1,387 | 5.211957 | 0.342391 | 0.087591 | 0.04171 | 0.054223 | 0.068822 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001786 | 0.192502 | 1,387 | 29 | 115 | 47.827586 | 0.854464 | 0 | 0 | 0 | 0 | 0 | 0.028839 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.363636 | 0 | 0.545455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
31dbeeeb585ae91b3ec528faf0591108ed8cc73b | 848 | py | Python | hear_me_django_app/accounts/management/commands/initial_users.py | kamil1marczak/hear_me_django_app | 2a567c15acddbf6bf183c6c637a3785c2a9c9c5c | [
"MIT"
] | null | null | null | hear_me_django_app/accounts/management/commands/initial_users.py | kamil1marczak/hear_me_django_app | 2a567c15acddbf6bf183c6c637a3785c2a9c9c5c | [
"MIT"
] | null | null | null | hear_me_django_app/accounts/management/commands/initial_users.py | kamil1marczak/hear_me_django_app | 2a567c15acddbf6bf183c6c637a3785c2a9c9c5c | [
"MIT"
] | null | null | null | from django.contrib.auth import get_user_model
from django.contrib.auth.hashers import make_password
from django.core.management.base import BaseCommand
from ._private import populate_user
User = get_user_model()
class Command(BaseCommand):
help = 'admin deployment'
def add_arguments(self, parser):
parser.add_argument('total', type=int, help='Indicates the number of users to be created')
def handle(self, *args, **kwargs):
total = kwargs['total']
populate_user(number=total)
obj, created = User.objects.get_or_create(name="root", defaults={"password": make_password('Kamil100!'), "is_superuser": True})
message = "Successfully populated database with initial users"
if created:
message += f" Superuser {obj.name} ha been created"
self.stdout.write(self.style.SUCCESS(message))
| 36.869565 | 118 | 0.714623 | 110 | 848 | 5.381818 | 0.6 | 0.050676 | 0.057432 | 0.070946 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004335 | 0.183962 | 848 | 22 | 119 | 38.545455 | 0.851156 | 0 | 0 | 0 | 0 | 0 | 0.199292 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.117647 | false | 0.117647 | 0.235294 | 0 | 0.470588 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
31ea1b716a1b8a3e2fc957132ac8497e9ccd0dcb | 10,826 | py | Python | 2015/day7/2015-day7-part2.py | matt-the-ogre/advent-of-code | 7188089d4db4a99fa09ef8366137fe28d1c28205 | [
"MIT"
] | 1 | 2021-12-03T18:17:54.000Z | 2021-12-03T18:17:54.000Z | 2015/day7/2015-day7-part2.py | matt-the-ogre/advent-of-code | 7188089d4db4a99fa09ef8366137fe28d1c28205 | [
"MIT"
] | null | null | null | 2015/day7/2015-day7-part2.py | matt-the-ogre/advent-of-code | 7188089d4db4a99fa09ef8366137fe28d1c28205 | [
"MIT"
] | null | null | null | # Advent of Code - 2015 - Day 7
# --- Day 7: Some Assembly Required ---
# This year, Santa brought little Bobby Tables a set of wires and bitwise logic gates! Unfortunately, little Bobby is a little under the recommended age range, and he needs help assembling the circuit.
# Each wire has an identifier (some lowercase letters) and can carry a 16-bit signal (a number from 0 to 65535). A signal is provided to each wire by a gate, another wire, or some specific value. Each wire can only get a signal from one source, but can provide its signal to multiple destinations. A gate provides no signal until all of its inputs have a signal.
# The included instructions booklet describes how to connect the parts together: x AND y -> z means to connect wires x and y to an AND gate, and then connect its output to wire z.
# For example:
# 123 -> x means that the signal 123 is provided to wire x.
# x AND y -> z means that the bitwise AND of wire x and wire y is provided to wire z.
# p LSHIFT 2 -> q means that the value from wire p is left-shifted by 2 and then provided to wire q.
# NOT e -> f means that the bitwise complement of the value from wire e is provided to wire f.
# Other possible gates include OR (bitwise OR) and RSHIFT (right-shift). If, for some reason, you'd like to emulate the circuit instead, almost all programming languages (for example, C, JavaScript, or Python) provide operators for these gates.
# For example, here is a simple circuit:
# 123 -> x
# 456 -> y
# x AND y -> d
# x OR y -> e
# x LSHIFT 2 -> f
# y RSHIFT 2 -> g
# NOT x -> h
# NOT y -> i
# After it is run, these are the signals on the wires:
# d: 72
# e: 507
# f: 492
# g: 114
# h: 65412
# i: 65079
# x: 123
# y: 456
# In little Bobby's kit's instructions booklet (provided as your puzzle input), what signal is ultimately provided to wire a?
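# Worked example of the 16-bit complement used below: NOT 123 is
# (~123) & 0xFFFF == 65412 and NOT 456 is (~456) & 0xFFFF == 65079,
# matching the h and i wires above.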
import time, math
def createCircuitDict():
global circuitStrings
global circuitDict
# this function takes the string as input (circuitStrings) and converts them (parses them) into a dictionary (circuitDict)
for circuitLine in circuitStrings:
# the string "->" is the delimeter (sp?) between the left side (input) and the wire name (dictionary key)
leftSide = circuitLine[0 : circuitLine.find("->") - 1]
# if debug:
# print("leftSide:", leftSide)
rightSide = circuitLine[circuitLine.find("->") + 3 : ]
# if debug:
# print("rightSide:", rightSide)
# we set the outputValue to nan (not a number) as a way of checking if we have successfully evaluated the wires inputs or not: default = nan, not evaluated
outputValue = math.nan
# check for numeric input string -- this is easy, just make it the output
if leftSide.isnumeric():
leftSide = int(leftSide)
outputValue = leftSide # simple -- the input to this wire is also it's output
# check for duplicate wire names (dictionary keys) in the input string
if circuitDict.get(rightSide) != None:
print("Weird... dictionary key ", rightSide, "already exists. This shouldn't happen.")
circuitDict[rightSide] = {"input" : leftSide, "output" : outputValue}
def evaluateInput(circuit, operator):
global circuitDict
# if debug:
# print(circuit, operator)
# check left argument for circuit name or number
inputWire1 = circuitDict[circuit]["input"][: circuitDict[circuit]["input"].find(operator) - 1]
inputWire2 = circuitDict[circuit]["input"][circuitDict[circuit]["input"].find(operator) + len(operator) + 1 : ]
# if debug:
# print(circuit, "=", inputWire1, operator, inputWire2)
# look up the output of the inputWire
if inputWire1.isnumeric():
input1 = int(inputWire1)
else:
input1 = circuitDict[inputWire1]["output"]
if inputWire2.isnumeric():
input2 = int(inputWire2)
else:
input2 = circuitDict[inputWire2]["output"]
if math.isnan(input1):
# print("input wire 1 isn't calculated yet")
pass
elif math.isnan(input2):
# print("input wire 2 isn't calculated yet")
pass
else:
# do the bitwise complement on the input number and assign it to the output of this wire
if operator == "AND":
circuitDict[circuit]["output"] = input1 & input2
elif operator == "OR":
circuitDict[circuit]["output"] = input1 | input2
elif operator == "LSHIFT":
circuitDict[circuit]["output"] = input1 << input2
elif operator == "RSHIFT":
circuitDict[circuit]["output"] = input1 >> input2
else:
print("Unknown operator", operator)
# check for rollunder 0
# this occurs because we are using a signed integer for what should be an unsigned 16-bit integer
# TODO figure out if Python has an unsigned 16-bit integer type
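# note: plain Python ints are arbitrary precision, so there is no built-in
# unsigned 16-bit type; masking with & 0xFFFF (or using numpy.uint16) would
# give the same wrap-around behaviour as this manual fix-up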
if circuitDict[circuit]["output"] < 0:
# if debug:
# print("result under zero, fix it")
circuitDict[circuit]["output"] = 65535 + circuitDict[circuit]["output"]
def doConnection():
global circuitDict
unfinishedCount = len(circuitDict)
lowCount = unfinishedCount
while unfinishedCount:
unfinishedCount = len(circuitDict)
if debug:
print("lowCount", lowCount)
for circuit in circuitDict:
# if the output is not a number, evaluate the input
if math.isnan(circuitDict[circuit]["output"]):
# parse the left side
# we can have NOT, AND, OR, LSHIFT, and RSHIFT as possible commands
if "NOT" in circuitDict[circuit]["input"]:
# operation is logical NOT, invert the input line to be the output
inputWire1 = circuitDict[circuit]["input"][circuitDict[circuit]["input"].find("NOT")+4 : ]
# if debug:
# print(circuit, "= NOT", inputWire1)
# look up the output of the inputWire
if inputWire1.isnumeric():
input1 = int(inputWire1)
else:
input1 = circuitDict[inputWire1]["output"]
if math.isnan(input1):
# print("input wire isn't calculated yet")
pass
else:
# do the bitwise complement on the input number and assign it to the output of this wire
circuitDict[circuit]["output"] = ~input1
# check for rollunder 0
if circuitDict[circuit]["output"] < 0:
# if debug:
# print("result under zero, fix it")
circuitDict[circuit]["output"] = 65536 + circuitDict[circuit]["output"]
elif "AND" in circuitDict[circuit]["input"]:
evaluateInput(circuit, "AND")
elif "OR" in circuitDict[circuit]["input"]:
evaluateInput(circuit, "OR")
elif "LSHIFT" in circuitDict[circuit]["input"]:
evaluateInput(circuit, "LSHIFT")
elif "RSHIFT" in circuitDict[circuit]["input"]:
evaluateInput(circuit, "RSHIFT")
else:
# simplest case -- one input only!
# copy the input wire
# this could be improved by doing it only if the inputWire is resolved
inputWire1 = circuitDict[circuit]["input"]
if debug:
print("simplest case circuit", circuit, " inputWire", inputWire1)
circuitDict[circuit]["output"] = circuitDict[inputWire1]["output"]
else:
# this circuit is done, move on
# if debug:
# print("circuit",circuit,"is done with output ", circuitDict[circuit]["output"], "Break.")
pass
if math.isnan(circuitDict[circuit]["output"]) is False:
# this output is calculated, decrement the unfinished counter
unfinishedCount -= 1
if unfinishedCount < lowCount:
lowCount = unfinishedCount
# if debug:
# print("unfinishedCount", unfinishedCount)
startTime = time.perf_counter() # time in seconds (float)
debug = False
timing = True
unitTesting = False
# maybe a dictionary again?
# circuitStrings = {"a" : {"input" : 1, "output" : NaN}}
# parse the input text file to set up the circuitStrings inputs, then just roll through the dictionary to calculate the outputs
# how will I be sure that the output has been calculated to be the input for the next circuitStrings?
# can I assume the input file is "in order"? Probably not.
# does this mean some sort of recursion algorithm?
# maybe if I populate the outputs with 'NaN' (or Python equivalent) then check that it's not that before using it's output
# I can make it recurse through the inputs, calculating any that have fully realized inputs?
circuitStrings = []
circuitDict = {}
# unit tests, kind of
if unitTesting:
print("Unit Testing")
circuitStrings = ["123 -> x","456 -> y", "x AND y -> d", "x OR y -> e", "x LSHIFT 2 -> f", "y RSHIFT 2 -> g", "NOT x -> h", "NOT y -> i"]
else:
# read the input text file into a variable called presents
with open("2015/day7/input-part2.txt","r") as inputString:
circuitStrings = inputString.readlines()
# remove newlines
for i in range(0, len(circuitStrings)):
circuitStrings[i] = circuitStrings[i].rstrip()
# parse the input to create the dictionary
createCircuitDict()
doConnection()
# show the circuits
if debug:
for circuit in circuitDict:
print(circuit,":",circuitDict[circuit])
if unitTesting:
testPass = False
testPassOutput = {"d": {"output" : 72}, "e": {"output" : 507}, "f": {"output" : 492}, "g": {"output" : 114}, "h": {"output" : 65412}, "i": {"output" : 65079}, "x": {"output" : 123}, "y": {"output" : 456}}
for wire in testPassOutput:
testPassWire = testPassOutput[wire]["output"]
circuitWire = circuitDict[wire]["output"]
if debug:
print("wire", wire, "test:", testPassWire, "calc:", circuitWire)
testPass = testPassWire == circuitWire
if testPass is False:
break
print("testPass:", testPass)
else:
print(circuitDict["a"]["output"])
# this answer for my input is 46065 (part 1), 14134 (part 2)
endTime = time.perf_counter() # time in seconds (float)
if timing:
print("Execution took ", endTime - startTime, " seconds.")
| 42.289063 | 362 | 0.608627 | 1,321 | 10,826 | 4.986374 | 0.258138 | 0.076514 | 0.054653 | 0.022772 | 0.227873 | 0.2092 | 0.171246 | 0.139365 | 0.101108 | 0.101108 | 0 | 0.022882 | 0.293553 | 10,826 | 255 | 363 | 42.454902 | 0.838389 | 0.440052 | 0 | 0.325 | 0 | 0 | 0.101692 | 0.004188 | 0 | 0 | 0 | 0.003922 | 0 | 1 | 0.025 | false | 0.1 | 0.008333 | 0 | 0.033333 | 0.083333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
31ee781effe2a319a7f8d1c8b7b12faf33878337 | 1,846 | py | Python | tests/dgds_functions_test.py | openearth/hydro-engine-service | 8e7eea489ee241dad2d6d8152d1c30af8a09a8d1 | [
"MIT"
] | 4 | 2019-02-15T13:53:01.000Z | 2021-12-13T09:53:02.000Z | tests/dgds_functions_test.py | openearth/hydro-engine-service | 8e7eea489ee241dad2d6d8152d1c30af8a09a8d1 | [
"MIT"
] | 12 | 2018-12-19T08:30:29.000Z | 2021-04-21T12:59:59.000Z | tests/dgds_functions_test.py | openearth/hydro-engine-service | 8e7eea489ee241dad2d6d8152d1c30af8a09a8d1 | [
"MIT"
] | 4 | 2018-10-17T23:48:21.000Z | 2020-08-05T18:36:14.000Z | import logging
import pytest
from . import auth
from hydroengine_service import dgds_functions
logger = logging.getLogger(__name__)
class TestDGDSFunctions:
@pytest.mark.parametrize('source, start_date, end_date, limit',
[
('projects/dgds-gee/bathymetry/gebco/2019', None, None, 10),
('projects/dgds-gee/glossis/currents', None, None, None),
('projects/dgds-gee/glossis/waterlevel', '2020-11-01', '2020-12-01', None),
('projects/dgds-gee/glossis/wind', '2020-11-01', '2020-11-10', 10),
('projects/dgds-gee/glossis/waveheight', None, None, None),
('projects/dgds-gee/gloffis/weather', None, None, 5),
('projects/dgds-gee/gloffis/hydro', None, None, 5),
('projects/dgds-gee/metocean/waves/percentiles', None, None, 5),
('projects/dgds-gee/chasm/waves', None, None, None),
('projects/dgds-gee/chasm/wind', None, None, None),
('projects/dgds-gee/crucial/evaporation_deficit', None, None, None),
('projects/dgds-gee/crucial/groundwater_declining_trend', None, None, None),
('projects/dgds-gee/msfd/chlorophyll', None, None, None)
])
def test_get_image_collection_info(self, source, start_date, end_date, limit):
image_date_list = dgds_functions.get_image_collection_info(source, start_date, end_date, limit)
assert len(image_date_list) >= 1
assert "imageId" in image_date_list[0]
assert "date" in image_date_list[0]
| 51.277778 | 109 | 0.538462 | 193 | 1,846 | 4.989637 | 0.34715 | 0.149533 | 0.202492 | 0.13811 | 0.458982 | 0.341641 | 0.070613 | 0 | 0 | 0 | 0 | 0.037954 | 0.343445 | 1,846 | 35 | 110 | 52.742857 | 0.756601 | 0 | 0 | 0 | 0 | 0 | 0.302275 | 0.255688 | 0 | 0 | 0 | 0 | 0.111111 | 1 | 0.037037 | false | 0 | 0.148148 | 0 | 0.222222 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
9ec42ebdeb8c357fae82c9abfd68ebde784ec5ba | 1,280 | py | Python | TeamClassificationUtils.py | Neerajj9/Computer-Vision-based-Offside-Detection-in-soccer | 744bfc636463f24c4f78f25684864c2ce4abb43f | [
"MIT"
] | 8 | 2020-10-17T14:54:53.000Z | 2022-02-09T11:03:01.000Z | TeamClassificationUtils.py | Neerajj9/Computer-Vision-based-Offside-Detection-in-soccer | 744bfc636463f24c4f78f25684864c2ce4abb43f | [
"MIT"
] | 4 | 2021-01-03T16:02:29.000Z | 2021-11-23T03:26:01.000Z | TeamClassificationUtils.py | Neerajj9/Computer-Vision-based-Offside-Detection-in-soccer | 744bfc636463f24c4f78f25684864c2ce4abb43f | [
"MIT"
] | 2 | 2021-04-10T07:05:55.000Z | 2021-09-19T23:22:18.000Z | import numpy as np
# TODO : add code for referee
def get_team_classifications(teamColor1, teamColor2, refColor, keeper1Color, keeper2Color, pose_estimations):
for pose in pose_estimations:
if len(pose[1]) < 2:
pose.append('color not found')
continue
colorDiffs = {}
colorList = np.array(pose[1][0]) + np.array(pose[1][1])
colorList = np.divide(colorList, 2)
colorList = colorList.tolist()
diffTeam1 = list(abs(np.array(teamColor1) - np.array(colorList)))
colorDiffs['team1'] = diffTeam1
diffTeam2 = list(abs(np.array(teamColor2) - np.array(colorList)))
colorDiffs['team2'] = diffTeam2
diffRef = list(abs(np.array(refColor) - np.array(colorList)))
colorDiffs['ref'] = diffRef
diffKeep1 = list(abs(np.array(keeper1Color) - np.array(colorList)))
colorDiffs['keep1'] = diffKeep1
diffKeep2 = list(abs(np.array(keeper2Color) - np.array(colorList)))
colorDiffs['keep2'] = diffKeep2
for key in colorDiffs.keys():
colorDiffs[key] = sum(colorDiffs[key]) / len(colorDiffs[key])
colorDiffs = {k: v for k, v in sorted(colorDiffs.items(), key=lambda item: item[1])}
# colorDiffs is now ordered by ascending mean difference, so the first key names the closest colour match
pose.append(next(iter(colorDiffs)))
return pose_estimations | 33.684211 | 109 | 0.651563 | 156 | 1,280 | 5.314103 | 0.378205 | 0.101327 | 0.054282 | 0.084439 | 0.226779 | 0.173703 | 0.173703 | 0.173703 | 0.173703 | 0 | 0 | 0.025819 | 0.213281 | 1,280 | 38 | 110 | 33.684211 | 0.797418 | 0.021094 | 0 | 0.074074 | 0 | 0 | 0.030351 | 0 | 0 | 0 | 0 | 0.026316 | 0 | 1 | 0.037037 | false | 0 | 0.037037 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
9eca8cb06280c8af6786e7a410286dc58b44dac0 | 5,734 | py | Python | src/gt4sd/algorithms/generation/polymer_blocks/core.py | hhhsu0825/gt4sd-core | 4a1fe9da58d2f33bba2fba64604427e037ad7a46 | [
"MIT"
] | null | null | null | src/gt4sd/algorithms/generation/polymer_blocks/core.py | hhhsu0825/gt4sd-core | 4a1fe9da58d2f33bba2fba64604427e037ad7a46 | [
"MIT"
] | null | null | null | src/gt4sd/algorithms/generation/polymer_blocks/core.py | hhhsu0825/gt4sd-core | 4a1fe9da58d2f33bba2fba64604427e037ad7a46 | [
"MIT"
] | null | null | null | """PaccMann vanilla generator trained on polymer building blocks (catalysts/monomers)."""
import logging
import os
from dataclasses import field
from typing import ClassVar, Dict, Optional, TypeVar
from ....domains.materials import SmallMolecule, validate_molecules
from ....exceptions import InvalidItem
from ....training_pipelines.core import TrainingPipelineArguments
from ....training_pipelines.paccmann.core import PaccMannSavingArguments
from ...core import AlgorithmConfiguration, GeneratorAlgorithm, Untargeted
from ...registry import ApplicationsRegistry
from .implementation import Generator
logger = logging.getLogger(__name__)
logger.addHandler(logging.NullHandler())
T = type(None)
S = TypeVar("S", bound=SmallMolecule)
class PolymerBlocks(GeneratorAlgorithm[S, T]):
def __init__(
self, configuration: AlgorithmConfiguration, target: Optional[T] = None
):
"""Polymer blocks generation.
Args:
configuration: domain and application
specification, defining types and validations.
target: unused since it is not a conditional generator.
Example:
An example for generating small molecules (SMILES) that resembles
monomers/catalysts for polymer synthesis::
configuration = PolymerBlocksGenerator()
polymer_blocks = PolymerBlocks(configuration=configuration)
items = list(polymer_blocks.sample(10))
print(items)
"""
configuration = self.validate_configuration(configuration)
# TODO there might also be a validation/check on the target input
super().__init__(
configuration=configuration,
target=None, # type:ignore
)
def get_generator(
self,
configuration: AlgorithmConfiguration[S, T],
target: Optional[T],
) -> Untargeted:
"""Get the function to sample batches via the Generator.
Args:
configuration: helps to set up the application.
target: context or condition for the generation. Unused in the algorithm.
Returns:
callable generating a batch of items.
"""
logger.info("ensure artifacts for the application are present.")
self.local_artifacts = configuration.ensure_artifacts()
implementation: Generator = configuration.get_conditional_generator( # type: ignore
self.local_artifacts
)
return implementation.sample
def validate_configuration(
self, configuration: AlgorithmConfiguration
) -> AlgorithmConfiguration:
# TODO raise InvalidAlgorithmConfiguration
assert isinstance(configuration, AlgorithmConfiguration)
return configuration
@ApplicationsRegistry.register_algorithm_application(PolymerBlocks)
class PolymerBlocksGenerator(AlgorithmConfiguration[SmallMolecule, None]):
"""Configuration to generate subunits of polymers."""
algorithm_type: ClassVar[str] = "generation"
domain: ClassVar[str] = "materials"
algorithm_version: str = "v0"
batch_size: int = field(
default=32,
metadata=dict(description="Batch size used for the generative model sampling."),
)
generated_length: int = field(
default=100,
metadata=dict(
description="Maximum length in tokens of the generated molcules (relates to the SMILES length)."
),
)
def get_target_description(self) -> Optional[Dict[str, str]]:
"""Get description of the target for generation.
Returns:
target description, returns None in case no target is used.
"""
return None
def get_conditional_generator(self, resources_path: str) -> Generator:
return Generator(
resources_path=resources_path,
generated_length=self.generated_length,
batch_size=self.batch_size,
)
def validate_item(self, item: str) -> SmallMolecule:
(
molecules,
_,
) = validate_molecules([item])
if molecules[0] is None:
raise InvalidItem(
title="InvalidSMILES",
detail=f'rdkit.Chem.MolFromSmiles returned None for "{item}"',
)
return SmallMolecule(item)
@classmethod
def get_filepath_mappings_for_training_pipeline_arguments(
cls, training_pipeline_arguments: TrainingPipelineArguments
) -> Dict[str, str]:
"""Ger filepath mappings for the given training pipeline arguments.
Args:
training_pipeline_arguments: training pipeline arguments.
Returns:
a mapping between artifacts' files and training pipeline's output files.
"""
if isinstance(training_pipeline_arguments, PaccMannSavingArguments):
return {
"smiles_language.pkl": os.path.join(
training_pipeline_arguments.model_path,
f"{training_pipeline_arguments.training_name}.lang",
),
"params.json": os.path.join(
training_pipeline_arguments.model_path,
training_pipeline_arguments.training_name,
"model_params.json",
),
"weights.pt": os.path.join(
training_pipeline_arguments.model_path,
training_pipeline_arguments.training_name,
"weights",
"best_rec.pt",
),
}
else:
return super().get_filepath_mappings_for_training_pipeline_arguments(
training_pipeline_arguments
)
| 35.614907 | 108 | 0.646495 | 536 | 5,734 | 6.755597 | 0.350746 | 0.06628 | 0.096658 | 0.045568 | 0.113781 | 0.103563 | 0.08285 | 0.05689 | 0.044739 | 0.044739 | 0 | 0.002181 | 0.280258 | 5,734 | 160 | 109 | 35.8375 | 0.875212 | 0.235612 | 0 | 0.09 | 0 | 0 | 0.094271 | 0.017404 | 0 | 0 | 0 | 0.0125 | 0.01 | 1 | 0.07 | false | 0 | 0.11 | 0.01 | 0.32 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
9ecd99d19c3e1460adaaef7fa6dcf5ae53718429 | 2,551 | py | Python | python-trunk/sfapi2/sflib/ZSI/wstools/XMLname.py | raychorn/svn_molten-magma | 8aa2ff2340707eecae6514943e86f5afba9cd54a | [
"CC0-1.0"
] | null | null | null | python-trunk/sfapi2/sflib/ZSI/wstools/XMLname.py | raychorn/svn_molten-magma | 8aa2ff2340707eecae6514943e86f5afba9cd54a | [
"CC0-1.0"
] | null | null | null | python-trunk/sfapi2/sflib/ZSI/wstools/XMLname.py | raychorn/svn_molten-magma | 8aa2ff2340707eecae6514943e86f5afba9cd54a | [
"CC0-1.0"
] | null | null | null | """Translate strings to and from SOAP 1.2 XML name encoding
Implements rules for mapping application defined name to XML names
specified by the w3 SOAP working group for SOAP version 1.2 in
Appendix A of "SOAP Version 1.2 Part 2: Adjuncts", W3C Working Draft
17, December 2001, <http://www.w3.org/TR/soap12-part2/#namemap>
Also see <http://www.w3.org/2000/xp/Group/xmlp-issues>.
Author: Gregory R. Warnes <gregory_r_warnes@groton.pfizer.com>
Date:: 2002-04-25
Version 0.9.0
"""
ident = "$Id: XMLname.py 25 2006-05-24 18:12:14Z misha $"
from re import *
def _NCNameChar(x):
return x.isalpha() or x.isdigit() or x=="." or x=='-' or x=="_"
def _NCNameStartChar(x):
return x.isalpha() or x=="_"
def _toUnicodeHex(x):
hexval = hex(ord(x[0]))[2:]
hexlen = len(hexval)
# Make hexval have either 4 or 8 digits by prepending 0's
if (hexlen==1): hexval = "000" + hexval
elif (hexlen==2): hexval = "00" + hexval
elif (hexlen==3): hexval = "0" + hexval
elif (hexlen==4): hexval = "" + hexval
elif (hexlen==5): hexval = "000" + hexval
elif (hexlen==6): hexval = "00" + hexval
elif (hexlen==7): hexval = "0" + hexval
elif (hexlen==8): hexval = "" + hexval
else: raise Exception, "Illegal Value returned from hex(ord(x))"
return "_x"+ hexval + "_"
def _fromUnicodeHex(x):
return eval( r'u"\u'+x[2:-1]+'"' )
def toXMLname(string):
"""Convert string to a XML name."""
if string.find(':') != -1 :
(prefix, localname) = string.split(':',1)
else:
prefix = None
localname = string
T = unicode(localname)
N = len(localname)
X = []
for i in range(N) :
if i< N-1 and T[i]==u'_' and T[i+1]==u'x':
X.append(u'_x005F_')
elif i==0 and N >= 3 and \
( T[0]==u'x' or T[0]==u'X' ) and \
( T[1]==u'm' or T[1]==u'M' ) and \
( T[2]==u'l' or T[2]==u'L' ):
X.append(u'_xFFFF_' + T[0])
elif (not _NCNameChar(T[i])) or (i==0 and not _NCNameStartChar(T[i])):
X.append(_toUnicodeHex(T[i]))
else:
X.append(T[i])
return u''.join(X)
def fromXMLname(string):
"""Convert XML name to unicode string."""
retval = sub(r'_xFFFF_','', string )
def fun( matchobj ):
return _fromUnicodeHex( matchobj.group(0) )
retval = sub(r'_x[0-9A-Za-z]+_', fun, retval )
return retval
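# Round-trip example: toXMLname("123abc") returns u"_x0031_23abc" because a
# leading digit is not a valid NCName start character, and
# fromXMLname(u"_x0031_23abc") recovers u"123abc".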
| 28.662921 | 79 | 0.547236 | 375 | 2,551 | 3.661333 | 0.378667 | 0.050983 | 0.081573 | 0.018937 | 0.1311 | 0.02622 | 0 | 0 | 0 | 0 | 0 | 0.053994 | 0.288514 | 2,551 | 88 | 80 | 28.988636 | 0.702479 | 0.02156 | 0 | 0.040816 | 0 | 0 | 0.084416 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.020408 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
9ed032bb75772e44674a7c37bb30bc62c636bc41 | 3,695 | py | Python | step2.py | mosheliv/tfcollab1 | 50da5683fb40a50cb957aeca2d28bc9f72440813 | [
"MIT"
] | null | null | null | step2.py | mosheliv/tfcollab1 | 50da5683fb40a50cb957aeca2d28bc9f72440813 | [
"MIT"
] | null | null | null | step2.py | mosheliv/tfcollab1 | 50da5683fb40a50cb957aeca2d28bc9f72440813 | [
"MIT"
] | null | null | null | """
Usage:
# From tensorflow/models/
# Create train data:
python generate_tfrecord.py --csv_input=data/train_labels.csv --output_path=train.record
# Create test data:
python generate_tfrecord.py --csv_input=data/test_labels.csv --output_path=test.record
"""
from __future__ import division
from __future__ import print_function
from __future__ import absolute_import
import os
import io
import pandas as pd
import tensorflow as tf
from PIL import Image
from collections import namedtuple, OrderedDict
def _int64_feature(value):
return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))
def _bytes_feature(value):
return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
def _bytes_list_feature(value):
return tf.train.Feature(bytes_list=tf.train.BytesList(value=value))
def _float_list_feature(value):
return tf.train.Feature(float_list=tf.train.FloatList(value=value))
def _int64_list_feature(value):
return tf.train.Feature(int64_list=tf.train.Int64List(value=value))
flags = tf.app.flags
flags.DEFINE_string('image_dir', '', 'Path to the image directory')
flags.DEFINE_string('csv_input', '', 'Path to the CSV input')
flags.DEFINE_string('output_path', '', 'Path to output TFRecord')
FLAGS = flags.FLAGS
# TO-DO replace this with label map
def class_text_to_int(row_label):
if row_label == 'Blackbird':
return 1
else:
return None
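# with more classes this would typically become a lookup table, e.g. (class
# names here are hypothetical): {'Blackbird': 1, 'Robin': 2}.get(row_label)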
def split(df, group):
data = namedtuple('data', ['filename', 'object'])
gb = df.groupby(group)
return [data(filename, gb.get_group(x)) for filename, x in zip(gb.groups.keys(), gb.groups)]
def create_tf_example(group, path):
with tf.gfile.GFile(os.path.join(path, '{}'.format(group.filename)), 'rb') as fid:
encoded_jpg = fid.read()
encoded_jpg_io = io.BytesIO(encoded_jpg)
image = Image.open(encoded_jpg_io)
width, height = image.size
filename = group.filename.encode('utf8')
image_format = b'jpg'
xmins = []
xmaxs = []
ymins = []
ymaxs = []
classes_text = []
classes = []
for index, row in group.object.iterrows():
xmins.append(row['xmin'])
xmaxs.append(row['xmax'])
ymins.append(row['ymin'])
ymaxs.append(row['ymax'])
classes_text.append(row['class'].encode('utf8'))
classes.append(class_text_to_int(row['class']))
tf_example = tf.train.Example(features=tf.train.Features(feature={
'image/height': _int64_feature(height),
'image/width': _int64_feature(width),
'image/filename': _bytes_feature(filename),
'image/source_id': _bytes_feature(filename),
'image/encoded': _bytes_feature(encoded_jpg),
'image/format': _bytes_feature(image_format),
'image/object/bbox/xmin': _float_list_feature(xmins),
'image/object/bbox/xmax': _float_list_feature(xmaxs),
'image/object/bbox/ymin': _float_list_feature(ymins),
'image/object/bbox/ymax': _float_list_feature(ymaxs),
'image/object/class/text': _bytes_list_feature(classes_text),
'image/object/class/label': _int64_list_feature(classes),
}))
return tf_example
def main(_):
writer = tf.python_io.TFRecordWriter(FLAGS.output_path)
path = FLAGS.image_dir
examples = pd.read_csv(FLAGS.csv_input)
print(examples.columns.values)
grouped = split(examples, 'filename')
for group in grouped:
tf_example = create_tf_example(group, path)
writer.write(tf_example.SerializeToString())
writer.close()
output_path = os.path.join(os.getcwd(), FLAGS.output_path)
print('Successfully created the TFRecords: {}'.format(output_path))
if __name__ == '__main__':
tf.app.run()
| 32.991071 | 96 | 0.700677 | 500 | 3,695 | 4.932 | 0.264 | 0.034063 | 0.036496 | 0.040552 | 0.194647 | 0.161395 | 0.161395 | 0.143552 | 0.111111 | 0.111111 | 0 | 0.006816 | 0.166171 | 3,695 | 111 | 97 | 33.288288 | 0.793574 | 0.080379 | 0 | 0 | 1 | 0 | 0.128024 | 0.039823 | 0 | 0 | 0 | 0 | 0 | 1 | 0.108434 | false | 0 | 0.108434 | 0.060241 | 0.313253 | 0.036145 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
9ed2a743eca4dbe121cf458e5a0377ba7b5dca61 | 385 | py | Python | algorithms/python/118.py | viing937/leetcode | e21ca52c98bddf59e43522c0aace5e8cf84350eb | [
"MIT"
] | 3 | 2016-10-01T10:15:09.000Z | 2017-07-09T02:53:36.000Z | algorithms/python/118.py | viing937/leetcode | e21ca52c98bddf59e43522c0aace5e8cf84350eb | [
"MIT"
] | null | null | null | algorithms/python/118.py | viing937/leetcode | e21ca52c98bddf59e43522c0aace5e8cf84350eb | [
"MIT"
] | null | null | null | class Solution:
def generate(self, numRows):
"""
:type numRows: int
:rtype: List[List[int]]
"""
if numRows == 0: return []
rls = [[1]]
for i in range(2, numRows+1):
row = [1] * i
for j in range(1, i-1):
row[j] = rls[-1][j-1] + rls[-1][j]
rls.append(row)
return rls
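# e.g. Solution().generate(4) returns [[1], [1, 1], [1, 2, 1], [1, 3, 3, 1]]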
| 25.666667 | 50 | 0.415584 | 50 | 385 | 3.2 | 0.46 | 0.075 | 0.0625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.045249 | 0.425974 | 385 | 14 | 51 | 27.5 | 0.678733 | 0.109091 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0 | 0 | 0.3 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
9ed556610d4e386e3f7c1552b11e15722ee31053 | 1,125 | py | Python | DynamicProgramming/longestIncreasingSubsequence.py | suyash248/data_structures | 41a732cebf791ed63edbce10329251f03b763ccf | [
"Apache-2.0"
] | 7 | 2017-12-13T05:54:29.000Z | 2022-03-25T09:10:59.000Z | DynamicProgramming/longestIncreasingSubsequence.py | suyash248/data_structures | 41a732cebf791ed63edbce10329251f03b763ccf | [
"Apache-2.0"
] | null | null | null | DynamicProgramming/longestIncreasingSubsequence.py | suyash248/data_structures | 41a732cebf791ed63edbce10329251f03b763ccf | [
"Apache-2.0"
] | 4 | 2019-05-22T02:51:56.000Z | 2021-05-23T10:49:57.000Z | from Array import empty_1d_array
"""
input array : [10, 22, 9, 33, 21, 50, 41, 60]
# Element at each index `i` is representing length of longest LIS from index 0 to i in input array.
output array: [1, 2, 1, 3, 2, 4, 4, 5]
"""
# Time complexity: O(n^2)
# Space complexity: O(n)
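# (An O(n log n) variant using binary search over subsequence tails exists;
# this file implements the classic O(n^2) dynamic-programming solution.)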
def lis_dp(arr):
# Length of LIS at each index is at least 1 (element itself).
n = len(arr)
lis_arr = empty_1d_array(n, 1)
for i in xrange(1, n): # for i=1; i<n; i++
for j in xrange(0, i): # for j=0; j<i; j++
if arr[i] > arr[j] : # and lis_arr[i] < lis_arr[j]+1:
prev_lis_till_i = lis_arr[i]
curr_lis_till_i = lis_arr[j] + 1
if curr_lis_till_i > prev_lis_till_i:
# Update lis_till_i
lis_arr[i] = curr_lis_till_i
# print lis_arr
return max(lis_arr)
if __name__ == '__main__':
arr = [10, 22, 9, 33, 21, 50, 41, 60]
max_lis = lis_dp(arr)
print "Length of longest increasing sub-sequence for given array is {}".format(max_lis) | 36.290323 | 99 | 0.543111 | 190 | 1,125 | 3 | 0.336842 | 0.084211 | 0.084211 | 0.057895 | 0.177193 | 0.147368 | 0.147368 | 0.147368 | 0.094737 | 0.094737 | 0 | 0.067385 | 0.340444 | 1,125 | 31 | 100 | 36.290323 | 0.700809 | 0.182222 | 0 | 0 | 0 | 0 | 0.100424 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.0625 | null | null | 0.0625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
9edfa90d3388411fff4970296751427f8a1b76b6 | 257 | py | Python | 2_UNIXCommands/Exercise11.py | takeyoshinitta/NLP-100-Exercise | e77fb385fbbf50c8a8bdc47442db1421739ea5b6 | [
"MIT"
] | 3 | 2022-01-04T19:02:22.000Z | 2022-02-21T08:52:18.000Z | 2_UNIXCommands/Exercise11.py | takeyoshinitta/NLP-100-Exercise | e77fb385fbbf50c8a8bdc47442db1421739ea5b6 | [
"MIT"
] | null | null | null | 2_UNIXCommands/Exercise11.py | takeyoshinitta/NLP-100-Exercise | e77fb385fbbf50c8a8bdc47442db1421739ea5b6 | [
"MIT"
] | null | null | null | # 11. Replace tabs into spaces
# Replace every occurrence of a tab character into a space. Confirm the result by using sed, tr, or expand command.
with open('popular-names.txt') as f:
for line in f:
print(line.strip().replace("\t", " "))
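# shell checks for the same transformation, e.g.:
# tr '\t' ' ' < popular-names.txt, expand -t 1 popular-names.txt,
# or (GNU) sed 's/\t/ /g' popular-names.txt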
| 36.714286 | 116 | 0.66537 | 41 | 257 | 4.170732 | 0.853659 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01 | 0.22179 | 257 | 6 | 117 | 42.833333 | 0.845 | 0.552529 | 0 | 0 | 0 | 0 | 0.188679 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.333333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
9ee57d6363120b9d54a9902e2243f9122d20af71 | 4,810 | py | Python | src/core/serializers.py | pradipta/back-end | 05895b051afc4c8e0cb17db708063d80102e9de5 | [
"MIT"
] | 17 | 2019-05-11T22:15:34.000Z | 2022-03-26T22:45:33.000Z | src/core/serializers.py | pradipta/back-end | 05895b051afc4c8e0cb17db708063d80102e9de5 | [
"MIT"
] | 390 | 2019-05-23T10:48:57.000Z | 2021-12-17T21:01:43.000Z | src/core/serializers.py | pradipta/back-end | 05895b051afc4c8e0cb17db708063d80102e9de5 | [
"MIT"
] | 40 | 2019-05-21T14:41:57.000Z | 2021-01-30T13:39:38.000Z | from django.contrib.auth import get_user_model
from rest_auth.registration.serializers import (
RegisterSerializer as BaseRegisterSerializer,
)
from rest_auth.registration.serializers import (
SocialLoginSerializer as BaseSocialLoginSerializer,
)
from rest_auth.serializers import LoginSerializer as BaseLoginSerializer
from rest_auth.serializers import (
PasswordResetConfirmSerializer as BasePasswordResetConfirmSerializer,
)
from rest_auth.serializers import UserDetailsSerializer as BaseUserDetailsSerializer
from rest_framework import serializers
from rest_framework.exceptions import ValidationError
from core.models import Profile
# noinspection PyAbstractClass
class LoginSerializer(BaseLoginSerializer):
"""
Extends the default LoginSerializer in order to return
custom error messages
"""
def validate(self, attrs):
try:
return super().validate(attrs)
except serializers.ValidationError as ex:
ex.detail = "The email or password you entered is incorrect!"
raise ex
# noinspection PyAbstractClass
class PasswordResetConfirmSerializer(BasePasswordResetConfirmSerializer):
"""
Extends the default PasswordResetConfirmSerializer in order to return
custom error messages
"""
def validate(self, attrs):
try:
return super().validate(attrs)
except serializers.ValidationError as ex:
if "new_password2" in ex.detail:
ex.detail = ex.detail["new_password2"][0]
else:
ex.detail = "Could not reset password. Reset token expired or invalid."
raise ex
# noinspection PyAbstractClass
class CustomSocialLoginSerializer(BaseSocialLoginSerializer):
"""
Extends default SocialLoginSerializer to add additional details to some
failed login attempts
"""
def validate(self, attrs):
try:
res = super().validate(attrs)
return res
except ValidationError as ex:
if "User is already registered with this e-mail address." in ex.detail:
ex.detail[0] = (
"User is already registered with this e-mail address. "
"Please login using the form above."
)
raise ex
# noinspection PyAbstractClass
class RegisterSerializer(BaseRegisterSerializer):
email = serializers.EmailField(required=True)
password = serializers.CharField(write_only=True)
first_name = serializers.CharField(write_only=True)
last_name = serializers.CharField(write_only=True)
# legacy compat
zip = serializers.CharField(write_only=True, required=False)
zipcode = serializers.CharField(write_only=True, required=False)
# Overrides the default required password fields
password1 = None
password2 = None
def get_cleaned_data(self):
return {
"username": self.validated_data.get("email", ""),
"email": self.validated_data.get("email", ""),
# allauth uses password1 internally for creation
"password1": self.validated_data.get("password", ""),
"first_name": self.validated_data.get("first_name", ""),
"last_name": self.validated_data.get("last_name", ""),
"zipcode": self.validated_data.get("zipcode", ""),
}
def validate(self, data):
return data
UserModel = get_user_model()
class ProfileSerializer(serializers.ModelSerializer):
class Meta:
model = Profile
fields = "__all__"
class UserDetailsSerializer(BaseUserDetailsSerializer):
profile = ProfileSerializer()
class Meta:
model = UserModel
fields = ("username", "email", "first_name", "last_name", "profile")
read_only_fields = ("email",)
def to_representation(self, instance: UserModel) -> dict:
"""Move fields from Profile to user representation."""
representation = super().to_representation(instance)
profile = representation.pop("profile")
representation["zipcode"] = profile["zipcode"]
representation["is_mentor"] = profile["is_mentor"]
return representation
class UserSerializer(BaseUserDetailsSerializer):
profile = ProfileSerializer()
class Meta:
model = UserModel
fields = ("username", "email", "first_name", "last_name", "profile")
read_only_fields = ("email",)
def to_representation(self, instance: UserModel) -> dict:
"""Move fields from Profile to user representation."""
representation = super().to_representation(instance)
profile = representation.pop("profile")
profile.pop("user")
for key, val in profile.items():
representation[key] = val
return representation
| 33.172414 | 88 | 0.677755 | 474 | 4,810 | 6.772152 | 0.278481 | 0.017445 | 0.031776 | 0.037383 | 0.482243 | 0.359502 | 0.310903 | 0.282243 | 0.282243 | 0.255452 | 0 | 0.002176 | 0.235551 | 4,810 | 144 | 89 | 33.402778 | 0.870819 | 0.121622 | 0 | 0.361702 | 0 | 0 | 0.122139 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.074468 | false | 0.106383 | 0.095745 | 0.021277 | 0.457447 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
9ee68cd6efba5b094a83a85c60acb1031a826384 | 2,050 | py | Python | tools/docs/generate_api_rst.py | dcillera/envoy | cb54ba8eec26f768f8c1ae412113b07bacde7321 | [
"Apache-2.0"
] | 17,703 | 2017-09-14T18:23:43.000Z | 2022-03-31T22:04:17.000Z | tools/docs/generate_api_rst.py | dcillera/envoy | cb54ba8eec26f768f8c1ae412113b07bacde7321 | [
"Apache-2.0"
] | 15,957 | 2017-09-14T16:38:22.000Z | 2022-03-31T23:56:30.000Z | tools/docs/generate_api_rst.py | dcillera/envoy | cb54ba8eec26f768f8c1ae412113b07bacde7321 | [
"Apache-2.0"
] | 3,780 | 2017-09-14T18:58:47.000Z | 2022-03-31T17:10:47.000Z | import os
import shutil
import sys
import tarfile
def include_package(envoy_api_protos, rst_file_path, prefix):
# `envoy_api_rst_files` is a list of file paths for .proto.rst files
# generated by protodoc
#
# we are only interested in the proto files generated for envoy protos,
# not for non-envoy dependencies
if ("pkg/" + prefix) not in rst_file_path:
return None
# derive the "canonical" path from the filepath
canonical = f"{rst_file_path.split('pkg/' + prefix)[1]}"
# we are only interested in the actual v3 protos, not their dependencies
if (prefix + canonical) not in envoy_api_protos:
return None
return canonical
def main():
proto_srcs = sys.argv[1]
envoy_api_rst_files = sys.argv[1:-1]
output_filename = sys.argv[-1]
with open(proto_srcs) as f:
# the contents of `proto_srcs` are the result of a bazel genquery,
# containing bazel target rules, eg:
#
# @envoy_api//envoy/watchdog/v3:abort_action.proto
#
# this transforms them to a list with a "canonical" form of:
#
# envoy/watchdog/v3/abort_action.proto.rst
#
envoy_api_protos = [
f"{src.split('//')[1].replace(':', '/')}.rst" for src in f.read().split("\n") if src
]
for rst_file_path in envoy_api_rst_files:
canonical = include_package(envoy_api_protos, rst_file_path, "envoy/")
if canonical is None:
canonical = include_package(envoy_api_protos, rst_file_path, "contrib/envoy/")
if canonical is None:
continue
target = os.path.join("rst-out/api-v3", canonical)
if not os.path.exists(os.path.dirname(target)):
os.makedirs(os.path.dirname(target))
shutil.copy(rst_file_path, target)
# output the generated rst files to a tarfile for consumption
# by other bazel rules
with tarfile.open(output_filename, "w") as tar:
tar.add("rst-out", arcname=".")
if __name__ == "__main__":
main()
| 32.03125 | 96 | 0.642927 | 289 | 2,050 | 4.380623 | 0.318339 | 0.056872 | 0.060821 | 0.052133 | 0.228278 | 0.193523 | 0.106635 | 0.106635 | 0.075829 | 0 | 0 | 0.006545 | 0.254634 | 2,050 | 63 | 97 | 32.539683 | 0.82199 | 0.312195 | 0 | 0.121212 | 0 | 0 | 0.100647 | 0.042416 | 0 | 0 | 0 | 0.015873 | 0 | 1 | 0.060606 | false | 0 | 0.121212 | 0 | 0.272727 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
9ee7307b78f857465fe941638e5a41dd83ec835a | 15,792 | py | Python | src/wa_parser.py | ifly6/NS-WA-Authorboards | 57921457795306867844a29cdfce88bfcdd1c3f6 | [
"Apache-2.0"
] | null | null | null | src/wa_parser.py | ifly6/NS-WA-Authorboards | 57921457795306867844a29cdfce88bfcdd1c3f6 | [
"Apache-2.0"
] | null | null | null | src/wa_parser.py | ifly6/NS-WA-Authorboards | 57921457795306867844a29cdfce88bfcdd1c3f6 | [
"Apache-2.0"
] | null | null | null | # Copyright (c) 2020 ifly6
import html
import io
import re
from datetime import datetime
from functools import cache
from typing import Tuple
import numpy as np
import pandas as pd
import requests
from bs4 import BeautifulSoup
from lxml import etree
from pytz import timezone
from ratelimit import limits, sleep_and_retry
from helpers import ref
from src import wa_cacher
""" Imperium Anglorum:
This is adapted from proprietary InfoEurope code which in part does most of this already. Eg the proposal portions
which translate, the locality adjustments, API reading, etc. There is also code in beta (not-in-production)
which would have done this entirely, but I never got around to developing the VIEWS for that portion of the website.
It seems much easier just to commit something like this given that all the code is already present.
See ifly6.no-ip.org for more information. """
_headers = {
'User-Agent': 'WA parser (Auralia; Imperium Anglorum)'
}
class ApiError(Exception):
pass
@sleep_and_retry
@limits(calls=25, period=30) # 50 calls every 30 seconds they say but somehow this is fake news
def call_api(url) -> str:
response = requests.get(url, headers=_headers)
if response.status_code != 200:
raise ApiError('{} error at api url: {}'.format(response.status_code, str(url)))
return response.text
def clean_chamber_input(chamber):
""" Turns ambiguous chamber information into tuple (int, str) with chamber id and chamber name """
if type(chamber) == str:
if chamber == '1':
chamber = 1
elif chamber == '2':
chamber = 2
elif chamber == 'GA':
chamber = 1
elif chamber == 'SC':
chamber = 2
chamber_name = 'GA' if chamber == 1 else \
'SC' if chamber == 2 else ''
return chamber, chamber_name
def localised(dt: 'datetime', tz='US/Eastern'):
return timezone(tz).localize(dt)
@cache
def _category_map():
d = {'Advancement of Industry': 'Environmental Deregulation',
'Civil Rights': 'Mild',
'Human Rights': 'Mild',
'Education and Creativity': 'Artistic',
'Environmental': 'Automotive',
'Free Trade': 'Mild',
'Furtherment of Democracy': 'Mild',
'Global Disarmament': 'Mild',
'Health': 'Healthcare',
'International Security': 'Mild',
'Moral Decency': 'Mild',
'Political Stability': 'Mild',
'Regulation': 'Consumer Protection',
'Gun Control': 'Tighten',
'Social Justice': 'Mild'}
return {ref(k): v for k, v in d.items()} # force ref name for matching
# nb that this is identical to dict( ( ref(k), v ) for k, v in d.items() )
def _translate_category(category: str, s: str) -> Tuple[bool, str]:
if ref(category) in _category_map() and s == '0':
return True, _category_map()[ref(category)] # yield correct name from ref name of category
# if it isn't 0, then it doesn't apply, return given
# if not in the list, return given
return False, s
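# e.g. _translate_category('Health', '0') returns (True, 'Healthcare'),
# assuming helpers.ref produces the usual lower-cased reference name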
def capitalise(s):
s = s.replace('_', ' ').strip()
# exceptions
capitalisation_exceptions = wa_cacher.load_capitalisation_exceptions()
for i in capitalisation_exceptions:
if s.strip().lower() == i.strip().lower():
return i # replace with manual correction
# only capitalise words longer than 2 letters ('new') and always capitalise first
# unless the word is in given list
# > fanboys & the
s = " ".join(
w.capitalize()
if (len(w) > 2 and w not in ['for', 'and', 'nor', 'but', 'yet', 'the']) or (i == 0)
else w
for i, w in enumerate(s.split())
).strip() # avoid apostrophe capitalisations
# but capitalise st -> St
for exception in ['St']:
s = ' '.join((exception if w.lower() == exception.lower() else w)
for w in s.split())
# for split in ['-']:
# # as first should always be capitalised, not checking doesn't matter
# s = split.join(w[:1].upper() + w[1:] for i, w in enumerate(s.split(split))) # capitalise first letter only
# "Christian DeMocrats"
# python str.capitalize forces all other chars to lower
# don't use str.capitalize above
for numeral in ['ii', 'iii', 'iv', 'v', 'vi', 'vii', 'viii', 'ix', 'x']:
s = re.sub(r'(?<=\s){}$'.format(numeral), numeral.upper(), s) # matches only trailing numerals
# people used to use WA missions; capitalise these, they are separate words
s = re.sub(r'(?<=\s)(Wa|wa|wA)(?=\s)', 'WA', s) # if between two spaces
s = re.sub(r'^(Wa|wa|wA)(?=\s)', 'WA', s) # if at start (eg WA Mission of NERV-UN)
return s
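# e.g. capitalise('imperium_anglorum') returns 'Imperium Anglorum', assuming
# no manually cached capitalisation exception matches the input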
def _get_council(i):
if i == 'GA' or i == 1: return 'GA'
if i == 'SC' or i == 2: return 'SC'
if i == 'UN' or i == 0: return 'UN'
raise ValueError(f'provided council code {i} is invalid')
class WaPassedResolution:
def __init__(self, **kwargs):
# core vote information
self.resolution_num = None
self.title = None
self.implementation = None
# category and strength
self.chamber = None
self.category = None
self.strength = None
# handle repeals
self.is_repealed = None
self.repealed_by = None
self.is_repeal = None
self.repeals = None
# text
self.text = None
# ancillary information
self.author = None
self.coauthor0 = None
self.coauthor1 = None
self.coauthor2 = None
self.votes_for = None
self.votes_against = None
self.council = None
self.__dict__.update(kwargs) # django does this automatically, i'm not updating it; lazy
@staticmethod
def parse_ga(res_num, council=1):
from src.wa_cacher import Cacher
try:
cacher = Cacher.load()
except FileNotFoundError:
cacher = Cacher() # init new
api_url = 'https://www.nationstates.net/cgi-bin/api.cgi?wa={}&id={}&q=resolution'.format(council, res_num)
in_cacher = cacher.contains(api_url)
if not in_cacher:
this_response = call_api(api_url)
cacher.update(api_url, this_response)
else:
this_response = cacher.get(api_url)
xml = etree.parse(io.StringIO(this_response))
if not xml.xpath('/WA/RESOLUTION/NAME'):
raise ValueError(f'resolution number {res_num} is invalid; no such resolution exists')
resolution_is_repealed = xml.xpath('/WA/RESOLUTION/REPEALED_BY') != []
resolution_is_a_repeal = xml.xpath('/WA/RESOLUTION/REPEALS_COUNCILID') != []
resolution_text = html.unescape(xml.xpath('/WA/RESOLUTION/DESC')[0].text)
resolution_author = xml.xpath('/WA/RESOLUTION/PROPOSED_BY')[0].text
print('resolution author (raw):', repr(resolution_author))
if resolution_author is None or str(resolution_author).strip() == '':
raise RuntimeError('resolution author is empty')
author = capitalise(resolution_author)
resolution = WaPassedResolution(
council=_get_council(council),
resolution_num=res_num,
title=xml.xpath('/WA/RESOLUTION/NAME')[0].text,
implementation=localised(
datetime.utcfromtimestamp(int(xml.xpath('/WA/RESOLUTION/IMPLEMENTED')[0].text)),
'UTC'
).astimezone(timezone('US/Eastern')), # convert to eastern time
chamber=clean_chamber_input(xml.xpath('/WA/RESOLUTION/COUNCIL')[0].text)[1],
category=capitalise(xml.xpath('/WA/RESOLUTION/CATEGORY')[0].text),
strength=capitalise(
_translate_category(
xml.xpath('/WA/RESOLUTION/CATEGORY')[0].text, # category
xml.xpath('/WA/RESOLUTION/OPTION')[0].text # option
)[1] # get name
),
is_repealed=resolution_is_repealed,
repealed_by=int(xml.xpath('/WA/RESOLUTION/REPEALED_BY')[0].text) if resolution_is_repealed else None,
is_repeal=resolution_is_a_repeal,
repeals=int(xml.xpath('/WA/RESOLUTION/REPEALS_COUNCILID')[0].text) if resolution_is_a_repeal else None,
# text and author
text=resolution_text.strip(),
author=author.strip(),
# vote data
votes_for=int(xml.xpath('/WA/RESOLUTION/TOTAL_VOTES_FOR')[0].text),
votes_against=int(xml.xpath('/WA/RESOLUTION/TOTAL_VOTES_AGAINST')[0].text)
)
assert resolution.strength != '0', 'resolution {} has strength 0 with category {}'.format(
resolution.title, resolution.category
)
# overwrite category if repeal with the repeals field; NS API is broken sometimes for some reason
if resolution_is_a_repeal:
resolution.strength = str(int(resolution.repeals))  # normalise the repeal target to an integer string
# check for co-authors
coauth_list = xml.xpath('/WA/RESOLUTION/COAUTHOR/N')
if len(coauth_list) != 0:
print('received from API coauthors: {}'.format(
', '.join([capitalise(n.text) for n in coauth_list])
))
try:
resolution.coauthor0 = capitalise(coauth_list[0].text)
except IndexError:
pass
try:
resolution.coauthor1 = capitalise(coauth_list[1].text)
except IndexError:
pass
try:
resolution.coauthor2 = capitalise(coauth_list[2].text)
except IndexError:
pass
else:
cleaned_resolution_text = resolution_text \
.replace('[i]', '').replace('[/i]', '') \
.replace('[b]', '').replace('[/b]', '') \
.replace('[u]', '').replace('[/u]', '')
coauthor_matches = [s for s in cleaned_resolution_text.splitlines()
if re.search(
r'(Co-?((Author(ed)?:?)|written|writer) ?(by|with)? ?:?)|'
r'(This resolution includes significant contributions made by\s+)',
s, re.IGNORECASE
)]
if len(coauthor_matches) > 0:
coauthor_line = re.sub(r'Co-?((Author(ed)?:?)|written|writer) ?(by|with)? ?:? ', repl='',
string=coauthor_matches[0], flags=re.IGNORECASE)
print(f'\tidentified coauthor line: "{coauthor_line}"')
coauthor_line = coauthor_line \
.replace('[i]', '') \
.replace('[/i]', '') \
.replace('[b]', '') \
.replace('[/b]', '') \
.replace('[u]', '') \
.replace('[/u]', '')
if '[nation' in coauthor_line.lower(): # scion used the [Nation] tag instead of lower case once
amended_line = re.sub(r'(?<=\[nation)=(.*?)(?=\])', '', coauthor_line.lower()) # remove 'noflag' etc
coauthors = re.findall(r'(?<=\[nation\])(.*?)(?=\[/nation\])', amended_line.lower())
else:
# this will break with names like "Sch'tz and West Runk'land"
coauthors = re.split(r'(,? and )|(, )', coauthor_line, re.IGNORECASE)
coauthors = [i for i in coauthors if i is not None and i.strip() != 'and'] # post facto patching...
coauthors = [ref(s).replace('.', '') for s in coauthors] # cast to reference name
print(f'\tidentified coauthors as {coauthors}')
# pass each co-author in turn
'''
While it could be changed so that the original line's capitalisation is preserved, doing this might
introduce inconsistency in capitalisation of the same nation. Eg '[nation]imperium_anglorum[/nation]' would
be done under capitalisation rules while something provided as 'Imperium ANGLORUM' would be let through.
Because some authors use a ref'd name IN the nation tags, something like [nation]transilia[/nation] cannot
be disentangled from 'Transilia' if the former is proper and the latter is not. A proper-capitalisation
dictionary would be necessary and I am unwilling to download and parse all historical daily dumps for
something this minor.
'''
try:
resolution.coauthor0 = capitalise(coauthors[0])
except IndexError:
pass
try:
resolution.coauthor1 = capitalise(coauthors[1])
except IndexError:
pass
try:
resolution.coauthor2 = capitalise(coauthors[2])
except IndexError:
pass
cacher.save()
return resolution
def get_count() -> int:
soup = BeautifulSoup(call_api('http://forum.nationstates.net/viewtopic.php?f=9&t=30'), 'lxml')
resolution = soup.select('div#p310 div.content a')
return len(resolution)
def parse() -> 'pd.DataFrame':
# find the number of resolutions from Passed GA Resolutions
passed_res_max = get_count()
print(f'found {passed_res_max} resolutions')
# confirm that we have X resolutions
res_list = []
max_res = -1
for i in range(passed_res_max - 1, passed_res_max + 20): # passed resolutions should never be more than 20 behind
try:
print(f'getting GA {i + 1} of {passed_res_max} predicted resolutions')
d = WaPassedResolution.parse_ga(i + 1).__dict__ # note that 0 returns resolution at vote, need to 1-index
res_list.append(d)
except ValueError:
print('out of resolutions; data should be complete')
max_res = i
break
print(f'found {max_res} resolutions; getting historical')
# get API information for each resolution
for i in reversed(range(0, passed_res_max - 1)): # passed_res_max is already called above
print(f'got {max_res - passed_res_max + i} of {max_res} resolutions')
print(f'getting GA {i + 1}')
r = WaPassedResolution.parse_ga(i + 1) # note that 0 returns resolution at vote, need to 1-index
d = r.__dict__ # hacky cheating to get into dict
res_list.append(d)
# put it up in pandas
df = pd.DataFrame(res_list).replace({None: np.nan})
df.drop(columns=['text'], inplace=True)
df.rename(columns={
'council': 'Council', # Auralia used these names for columns
'resolution_num': 'Number',
'title': 'Title',
'category': 'Category',
'strength': 'Sub-category',
'votes_for': 'Votes For',
'votes_against': 'Votes Against',
'implementation': 'Date Implemented',
'author': 'Author'
}, inplace=True)
df.sort_values(by='Number', inplace=True)
def join_coauthors(coauthor_list, j=', '):
""" Removes empty/whitespace-only strings and then joins """
authors = [s for s in coauthor_list if s.strip() != '']
return j.join(authors)
df['Co-authors'] = df[['coauthor0', 'coauthor1', 'coauthor2']] \
.replace({np.nan: ''}) \
.agg(join_coauthors, axis=1)
assert all(df['Sub-category'] != '0'), 'resolutions {} have sub-category 0'.format(
df.loc[df['Sub-category'] == '0', 'Title'].values
)
return df[['Number', 'Title', 'Category', 'Sub-category', 'Author', 'Co-authors',
'Votes For', 'Votes Against', 'Date Implemented']].copy() # take only relevant vars
| 38.705882 | 123 | 0.590109 | 1,915 | 15,792 | 4.772846 | 0.269974 | 0.014004 | 0.017505 | 0.035011 | 0.128556 | 0.101313 | 0.080744 | 0.030635 | 0.02407 | 0.020131 | 0 | 0.008902 | 0.288627 | 15,792 | 407 | 124 | 38.800983 | 0.8047 | 0.138804 | 0 | 0.108303 | 0 | 0.00361 | 0.191829 | 0.0411 | 0 | 0 | 0 | 0 | 0.00722 | 1 | 0.043321 | false | 0.061372 | 0.057762 | 0.00361 | 0.151625 | 0.039711 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
9eec86a2c6579218afa159749612db5d5e43ce59 | 3,198 | py | Python | models/__init__.py | esentino/literate-doodle | 598533042602b989a4bdaa8778968c5f3ead3500 | [
"Apache-2.0"
] | null | null | null | models/__init__.py | esentino/literate-doodle | 598533042602b989a4bdaa8778968c5f3ead3500 | [
"Apache-2.0"
] | null | null | null | models/__init__.py | esentino/literate-doodle | 598533042602b989a4bdaa8778968c5f3ead3500 | [
"Apache-2.0"
] | 1 | 2019-09-11T21:27:37.000Z | 2019-09-11T21:27:37.000Z | # models/__init__.py
from clcrypto import password_hash
from psycopg2 import connect
def make_connection(db_name='w3'):
cnx = connect(user='postgres', password='coderslab', database=db_name, host='localhost')
cnx.autocommit = True
return cnx
class User:
__id = None
username = None
__hashed_password = None
email = None
def __init__(self):
self.__id = -1
self.username = ""
self.email = ""
self.__hashed_password = ""
@property
def id(self):
return self.__id
@property
def hashed_password(self):
return self.__hashed_password
def set_password(self, password, salt):
self.__hashed_password = password_hash(password, salt)
def save_to_db(self, cursor):
if self.__id == -1:
# saving new instance using prepared statements
sql = """INSERT INTO Users(username, email, hashed_password)
VALUES(%s, %s, %s) RETURNING id"""
values = (self.username, self.email, self.hashed_password)
cursor.execute(sql, values)
            self.__id = cursor.fetchone()[0]  # or cursor.fetchone()['id']
return True
else:
sql = """UPDATE Users SET username=%s, email=%s, hashed_password=%s
WHERE id=%s"""
values = (self.username, self.email, self.hashed_password, self.id)
cursor.execute(sql, values)
return True
@staticmethod
def load_user_by_id(cursor, user_id):
sql = "SELECT id, username, email, hashed_password FROM users WHERE id=%s"
        cursor.execute(sql, (user_id,))  # (user_id,) - a single-element tuple
data = cursor.fetchone()
if data:
loaded_user = User()
loaded_user.__id = data[0]
loaded_user.username = data[1]
loaded_user.email = data[2]
loaded_user.__hashed_password = data[3]
return loaded_user
else:
return None
@staticmethod
    def find_by_email(cursor, email):
        sql = "SELECT id, username, email, hashed_password FROM users WHERE email=%s"
        cursor.execute(sql, (email,))  # (email,) - a single-element tuple
data = cursor.fetchone()
if data:
loaded_user = User()
loaded_user.__id = data[0]
loaded_user.username = data[1]
loaded_user.email = data[2]
loaded_user.__hashed_password = data[3]
return loaded_user
else:
return None
@staticmethod
    def find_all(cursor):
sql = "SELECT id, username, email, hashed_password FROM Users"
ret = []
cursor.execute(sql)
for row in cursor.fetchall():
loaded_user = User()
loaded_user.__id = row[0]
loaded_user.username = row[1]
loaded_user.email = row[2]
loaded_user.__hashed_password = row[3]
ret.append(loaded_user)
return ret
def delete(self, cursor):
sql = "DELETE FROM Users WHERE id=%s"
cursor.execute(sql, (self.__id,))
self.__id = -1
return True
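# Minimal usage sketch (illustrative only; assumes a local PostgreSQL instance
# with the Users table above and the clcrypto helper on the path):
#   cursor = make_connection().cursor()
#   user = User()
#   user.username = 'alice'
#   user.email = 'alice@example.com'
#   user.set_password('s3cret', 'some-salt')  # salt semantics depend on clcrypto
#   user.save_to_db(cursor)
#   again = User.find_by_email(cursor, 'alice@example.com')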
| 31.663366 | 92 | 0.581614 | 374 | 3,198 | 4.724599 | 0.216578 | 0.101868 | 0.054329 | 0.061121 | 0.44086 | 0.426712 | 0.411998 | 0.389926 | 0.309564 | 0.282965 | 0 | 0.008261 | 0.318637 | 3,198 | 100 | 93 | 31.98 | 0.802662 | 0.049719 | 0 | 0.404762 | 0 | 0 | 0.147709 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.119048 | false | 0.214286 | 0.02381 | 0.02381 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
9eed09503a5541f18459a14cf6ef3617066817b6 | 4,124 | py | Python | crys3d/command_line/model_viewer.py | rimmartin/cctbx_project | 644090f9432d9afc22cfb542fc3ab78ca8e15e5d | [
"BSD-3-Clause-LBNL"
] | null | null | null | crys3d/command_line/model_viewer.py | rimmartin/cctbx_project | 644090f9432d9afc22cfb542fc3ab78ca8e15e5d | [
"BSD-3-Clause-LBNL"
] | null | null | null | crys3d/command_line/model_viewer.py | rimmartin/cctbx_project | 644090f9432d9afc22cfb542fc3ab78ca8e15e5d | [
"BSD-3-Clause-LBNL"
] | null | null | null | from __future__ import division
# LIBTBX_PRE_DISPATCHER_INCLUDE_SH export PHENIX_GUI_ENVIRONMENT=1
# LIBTBX_PRE_DISPATCHER_INCLUDE_SH export BOOST_ADAPTBX_FPE_DEFAULT=1
import cStringIO
from crys3d.wx_selection_editor import selection_editor_mixin
import wx
import libtbx.load_env
from libtbx.utils import Sorry  # needed for the Sorry raised below
import sys, os, time
########################################################################
# CLASSES AND METHODS FOR STANDALONE VIEWER
#
class App (wx.App) :
def __init__ (self, title="crys3d.wx_model_viewer", default_size=(800,600),
viewer_class=selection_editor_mixin) :
self.title = title
self.default_size = default_size
self.viewer_class = viewer_class
wx.App.__init__(self, 0)
def OnInit (self) :
self.frame = wx.Frame(None, -1, self.title, pos=wx.DefaultPosition,
size=self.default_size)
self.frame.CreateStatusBar()
box = wx.BoxSizer(wx.VERTICAL)
self.view_objects = self.viewer_class(self.frame, size=(800,600))
box.Add(self.view_objects, wx.EXPAND, wx.EXPAND)
self.frame.SetSizer(box)
box.SetSizeHints(self.frame)
return True
def run (args, viewer_class=selection_editor_mixin) :
pdb_files = []
cif_files = []
show_ss_restraints = False
fast_connectivity = True
for arg in args :
if os.path.isfile(arg) :
import iotbx.pdb
if iotbx.pdb.is_pdb_file(arg) :
pdb_files.append(os.path.abspath(arg))
elif arg.endswith(".cif") :
cif_files.append(os.path.abspath(arg))
elif arg == "--ss" :
show_ss_restraints = True
elif arg in ["--thorough", "--slow", "--use_monomer_library"] :
fast_connectivity = False
if len(pdb_files) == 0 :
print "Please specify a PDB file (and optional CIFs) on the command line."
return
a = App(viewer_class=viewer_class)
a.frame.Show()
out = sys.stdout
if not "--debug" in args :
out = cStringIO.StringIO()
for file_name in pdb_files :
print "Reading PDB file %s" % file_name
from iotbx import file_reader
from mmtbx.monomer_library import pdb_interpretation
from mmtbx import secondary_structure
t1 = time.time()
if fast_connectivity :
pdb_in = file_reader.any_file(file_name, force_type="pdb")
pdb_hierarchy = pdb_in.file_object.hierarchy
atomic_bonds = pdb_hierarchy.distance_based_simple_two_way_bond_sets()
acp_selection = None
else :
processed_pdb_file = pdb_interpretation.run(args=[file_name]+cif_files,
log=out)
pdb_hierarchy = processed_pdb_file.all_chain_proxies.pdb_hierarchy
pdb_hierarchy.atoms().reset_i_seq()
grm = processed_pdb_file.geometry_restraints_manager()
acp_selection = processed_pdb_file.all_chain_proxies.selection
if grm is None or grm.shell_sym_tables is None :
raise Sorry("Atomic bonds could not be calculated for this model. "+
"This is probably due to a missing CRYST1 record in the PDB file.")
atomic_bonds = grm.shell_sym_tables[0].full_simple_connectivity()
t2 = time.time()
print "%.2fs" % (t2-t1)
a.view_objects.add_model(file_name, pdb_hierarchy, atomic_bonds,
mmtbx_selection_function=acp_selection)
sec_str = secondary_structure.manager(
pdb_hierarchy=pdb_hierarchy,
xray_structure=None)
a.view_objects.set_sec_str(file_name, sec_str.selections_as_ints())
if show_ss_restraints and acp_selection is not None :
bonds_table = secondary_structure.process_structure(params=None,
processed_pdb_file=processed_pdb_file,
tmp_dir=os.getcwd(),
log=sys.stderr)
a.view_objects.set_noncovalent_bonds(file_name, bonds_table.bonds)
a.view_objects.flag_show_noncovalent_bonds = True
a.view_objects.set_model_base_color([1.0,1.0,1.0], file_name)
a.view_objects.set_color_mode("element")
a.view_objects.force_update(recenter=True)
a.MainLoop()
if __name__ == "__main__" :
if "--test" in sys.argv :
pdb_file = libtbx.env.find_in_repositories(
relative_path="phenix_regression/pdb/1ywf.pdb",
test=os.path.isfile)
run([pdb_file, "--ss"])
else :
run(sys.argv[1:])
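# Example invocation (hypothetical dispatcher and file names; --ss overlays
# secondary-structure restraints, --slow forces monomer-library connectivity):
#   crys3d.model_viewer model.pdb ligands.cif --ss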
| 38.185185 | 78 | 0.707081 | 585 | 4,124 | 4.671795 | 0.333333 | 0.030735 | 0.030735 | 0.021954 | 0.095134 | 0.072448 | 0.024881 | 0.024881 | 0 | 0 | 0 | 0.01005 | 0.17968 | 4,124 | 107 | 79 | 38.542056 | 0.797813 | 0.042192 | 0 | 0.041237 | 0 | 0 | 0.087529 | 0.018848 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.113402 | null | null | 0.030928 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
9eeee0e6163243e2bcb3f1fbe4bb62fbc1fef478 | 4,865 | py | Python | JIG.py | mmg1/JIG | bc36ed013b5ba48e549a16151b9135e271d55055 | [
"MIT"
] | 28 | 2017-12-04T02:03:25.000Z | 2021-09-13T04:37:21.000Z | JIG.py | mmg1/JIG | bc36ed013b5ba48e549a16151b9135e271d55055 | [
"MIT"
] | 1 | 2018-01-20T21:13:56.000Z | 2018-01-20T21:13:56.000Z | JIG.py | NetSPI/JIG | bc36ed013b5ba48e549a16151b9135e271d55055 | [
"MIT"
] | 18 | 2018-01-08T13:40:29.000Z | 2022-02-20T17:10:57.000Z | import re
import sys
from itertools import izip as zip
import argparse
import requests
# argparse definitions
parser = argparse.ArgumentParser(description='Jira attack script')
parser.add_argument('URL', type=str , help='the URL of the Jira instance... ex. https://jira.organization.com/')
parser.add_argument('-u' ,'--usernames', dest='names', action='store_const', const=True, help='Print discovered usernames')
parser.add_argument('-e' , '--emails', dest='emails',action='store_const', const=True, help='Print discovered email addresses')
parser.add_argument('-a' ,'--all', dest='all',action='store_const',const=True,help='Print discovered email addresses and usernames')
parser.add_argument('-eu' , dest='all',action='store_const',const=True,help=argparse.SUPPRESS)
parser.add_argument('-ue' , dest='all',action='store_const',const=True,help=argparse.SUPPRESS)
args = parser.parse_args()
url = args.URL
if args.URL[-1] != '/':
args.URL = args.URL + "/"
# Define URLs
pickerURL = args.URL + "secure/popups/UserPickerBrowser.jspa?max=9999"
filtersURL = args.URL + "secure/ManageFilters.jspa?filter=popular"
#dashboardURL = args.URL + "secure/Dashboard.jspa"
def extractPicker(response):
'''
Takes in the response body for UserBrowserPicker and returns a dictionary containing
usernames and email addresses.
'''
userList = re.compile(r"-name\">(.*)</td>").findall(response.text)
emailList = re.compile(r">(.*\@.*)</td>").findall(response.text)
dictionary = dict(zip(userList , emailList))
return dictionary
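# Illustrative sketch (hypothetical markup): on a picker row such as
#   <td class="user-name">jsmith</td> ... <td>jsmith@example.com</td>
# the two patterns above collect ['jsmith'] and ['jsmith@example.com'], which
# zip together into {'jsmith': 'jsmith@example.com'}.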
def extractFilters(response):
'''
Takes in the response body for the manage filters page and returns a list containing usernames.
'''
userList = re.compile(r"</span>.\((.*)\)").findall(response.text)
return list(set(userList))
def validateURL(url):
'''
Runs a stream of validation on a given URL and returns the response and a boolean value.
'''
try:
s = requests.Session()
validateresponse = s.get(url , allow_redirects=False,timeout=5)
except requests.exceptions.InvalidSchema:
print ""
print "[-] Invalid schema provided... Must follow format https://jira.organization.com/"
print ""
sys.exit(1)
except requests.exceptions.MissingSchema:
print ""
print "[-] A supported schema was not provided. Please use http:// or https://"
print ""
sys.exit(1)
except requests.exceptions.InvalidURL:
print "[-] Invalid base URL was supplied... Please try again."
sys.exit(1)
except requests.exceptions.ConnectionError:
print ""
print "[-] Connection failed... Please check the URL and try again."
print ""
sys.exit(1)
except requests.exceptions.RequestException:
print ""
print "[-] An unknown exception occurred... Please try again."
print ""
sys.exit(1)
if validateresponse.status_code == 200:
return validateresponse,True
else:
return "[-] The page is inaccessible",False
if __name__ == "__main__":
pickerResponse,pickerAccessible = validateURL(pickerURL)
filterResponse,filterAccessible = validateURL(filtersURL)
print ""
print ""
print "[+] Checking the User Picker page..."
if pickerAccessible == True:
users = extractPicker(pickerResponse)
print ""
print "[+] Success..."
print "[+] Users: "+str(len(users))
print "[+] Emails: " + str(len(users))
print ""
if (args.emails and args.names) or args.all:
print '{:<20}{:<20}'.format("---Username---", "---------Email---------")
for username, email in sorted(users.iteritems()):
print '{:<20}{:<20}'.format(username,email)
elif args.emails:
for username,email in sorted(users.iteritems()):
print email
elif args.names:
for username,email in sorted(users.iteritems()):
print username
print ""
elif pickerAccessible == False:
print pickerResponse
print ""
print ""
print "[+] Checking the Manage Filters page..."
if filterAccessible == True:
filterUsers = extractFilters(filterResponse)
if args.names or args.all:
if len(filterUsers) == 0:
print "[-] We could not find any anonymously accessible filters"
print ""
else:
print "[+] The Manage Filters page is accessible and contains data..."
print ""
for username in filterUsers:
print username
print ""
elif filterAccessible == False:
print filterResponse | 39.233871 | 133 | 0.615211 | 531 | 4,865 | 5.595104 | 0.335217 | 0.030293 | 0.034332 | 0.035342 | 0.261528 | 0.231908 | 0.194211 | 0.134635 | 0.074049 | 0.074049 | 0 | 0.00635 | 0.255498 | 4,865 | 124 | 134 | 39.233871 | 0.813915 | 0.01665 | 0 | 0.3 | 0 | 0 | 0.248838 | 0.025093 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.05 | null | null | 0.37 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
9ef839c4fcb13ab1bd28852911644c75dc9c3837 | 48,320 | py | Python | neon/backends/gpu.py | kashif/neon | d4d8ed498ee826b67f5fda1746d2d65c8ce613d2 | [
"Apache-2.0"
] | 1 | 2018-07-17T16:54:58.000Z | 2018-07-17T16:54:58.000Z | neon/backends/gpu.py | kashif/neon | d4d8ed498ee826b67f5fda1746d2d65c8ce613d2 | [
"Apache-2.0"
] | null | null | null | neon/backends/gpu.py | kashif/neon | d4d8ed498ee826b67f5fda1746d2d65c8ce613d2 | [
"Apache-2.0"
] | 2 | 2016-06-09T13:05:00.000Z | 2021-02-18T14:18:15.000Z | # ----------------------------------------------------------------------------
# Copyright 2014 Nervana Systems Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ----------------------------------------------------------------------------
"""
Neon backend wrapper for the NervanaGPU library. Most functions are thin
wrappers around functions from the NervanaGPU class; the GPUTensor is taken
directly from NervanaGPU as well.
NervanaGPU is available at `<https://github.com/NervanaSystems/nervanagpu>`
"""
import logging
from neon.backends.backend import Backend
from nervanagpu import NervanaGPU
from neon.diagnostics.timing_decorators import FlopsDecorator
import pycuda.driver as drv
import numpy as np
logger = logging.getLogger(__name__)
class GPU(Backend):
"""
Sets up a NervanaGPU based backend for matrix operations.
Note that some functions defined in the generic Backend class such as
cross-map pooling and normalization and are not implemented for
this backend.
"""
default_dtype = np.float32
def __init__(self, rng_seed, stochastic_round=False, device_id=0):
import pycuda.driver as drv
drv.init()
global ctx
ctx = drv.Device(device_id).make_context()
import atexit
atexit.register(ctx.pop)
self.ng = NervanaGPU(stochastic_round=stochastic_round)
logger.info("Initialized NervanaGPU with stochastic_round=%s",
stochastic_round)
self.rng_seed = rng_seed
self.rng_init()
self.device_id = device_id if device_id is not None else 0
def __getstate__(self):
"""
Defines what and how we go about serializing an instance of this class.
Returns:
self.__dict__: The full contents of the backend class instance,
except for the mem_pool which is on device and
cannot be serialized.
"""
if hasattr(self, 'mem_pool') and self.mem_pool is not None:
self.mem_pool_pickle = {'shape': self.mem_pool.shape,
'dtype': np.float32}
self.mem_pool = None
return self.__dict__
def __setstate__(self, state):
"""
Defines how we go about deserializing into an instance of this class.
Arguments:
self.__dict__: The full contents of the backend class instance,
except for the mem_pool which is on device and
cannot be serialized.
"""
self.__dict__.update(state)
self.mem_pool = self.ng.empty(self.mem_pool_pickle['shape'],
dtype=self.mem_pool_pickle['dtype'])
def init_mempool(self, shape, dtype=default_dtype):
"""
Allocates a memory pool for temporary storage
"""
self.mem_pool = self.ng.empty(shape, dtype=dtype)
def alloc_host_mem(self, shape, dtype=default_dtype):
return drv.pagelocked_empty(shape, dtype, order="C", mem_flags=0)
def create_stream(self):
return drv.Stream()
def synchronize(self):
pass
def async_copy(self, dest, src, stream=None):
drv.memcpy_htod_async(dest.gpudata, src, stream)
def rng_init(self):
"""
        Initialize and seed the pseudo random number generator. Random numbers
        are generated on the host using numpy, then transferred to device.
"""
seed = None
if 'rng_seed' in self.__dict__:
seed = self.rng_seed
logger.info("Seeding random number generator with: %s", str(seed))
np.random.seed(seed)
def flop_timing_init(self, decorate_fc, decorate_conv, decorate_ew):
"""
Initialize FLOP timing. Wraps the specified MOP calls via a decorator
to record elapsed time and number of operations.
Arguments:
decorate_fc (list): string giving the function names of fully
connected layer forward/backward/update calls
to time.
decorate_conv (list): string giving the function names of
convolutional layer forward/backward/update
calls to time.
decorate_ew (list): string giving the function names of element-wise
calls to time.
Notes:
Must be called prior to first flop_timing_start call
"""
self.start = drv.Event()
self.end = drv.Event()
self.flop_timer = FlopsDecorator(self)
self.flop_timer.decorate(decorate_fc=decorate_fc,
decorate_conv=decorate_conv,
decorate_ew=decorate_ew)
    def flop_timing_start(self):
"""
Start a new FLOP timer.
Returns:
None: dummy value (not used)
"""
return self.start.record()
def flop_timing_finish(self, start_time):
"""
Complete current FLOP timing.
Arguments:
start_time (unused): ignored.
Returns:
float: elapsed time in seconds since prior flop_timing_start call.
"""
self.end.record()
self.end.synchronize()
return self.end.time_since(self.start)
def uniform(self, low=0.0, high=1.0, size=1, dtype=default_dtype,
persist_values=True, name=None):
"""
        Generate uniform random numbers with numpy on the host and convert
        them to a GPUTensor. Calling this with dtype=None will likely fail.
"""
ary = np.random.uniform(low, high, size)
return self.ng.array(ary, dtype=dtype, name=name)
def normal(self, loc=0.0, scale=1.0, size=1, dtype=default_dtype,
persist_values=True, name=None):
"""
Gaussian/Normal random number sample generation
"""
ary = np.random.normal(loc, scale, size)
return self.ng.array(ary, dtype=dtype, name=name)
def fprop_fc(self, out, inputs, weights, layer=None):
"""
Forward propagate the inputs of a fully connected network layer to
produce output pre-activations (ready for transformation by an
activation function).
Arguments:
out (GPUTensor): Where to store the forward propagated results.
inputs (GPUTensor): Will be either the dataset input values (first
layer), or the outputs from the previous layer.
weights (GPUTensor): The weight coefficient values for this layer.
layer (Layer): The layer object.
"""
self.ng.dot(weights, inputs, out)
def bprop_fc(self, out, weights, deltas, layer=None):
"""
Backward propagate the error through a fully connected network layer.
Arguments:
out (GPUTensor): Where to store the backward propagated errors.
weights (GPUTensor): The weight coefficient values for this layer.
deltas (GPUTensor): The error values for this layer
layer (Layer): The layer object.
"""
self.ng.dot(weights.T, deltas, out)
def update_fc(self, out, inputs, deltas, layer=None):
"""
Compute the updated gradient for a fully connected network layer.
Arguments:
out (GPUTensor): Where to store the updated gradient value.
inputs (GPUTensor): Will be either the dataset input values (first
layer), or the outputs from the previous layer.
deltas (GPUTensor): The error values for this layer
layer (Layer): The layer object.
"""
self.ng.dot(deltas, inputs.T, out)
def update_fc_bias(self, err, out):
"""
Compute the updated bias gradient for a fully connected network layer.
Arguments:
out (GPUTensor): Where to store the updated gradient value.
err (GPUTensor): backpropagated error
"""
self.ng.sum(err, axis=1, out=out)
def add_fc_bias(self, inputs, bias):
"""
Add the bias for a fully connected network layer.
Arguments:
inputs (GPUTensor): the input to update.
bias (GPUTensor): the amount to increment
"""
self.ng.add(inputs, bias, out=inputs)
def fprop_conv(self, out, inputs, weights, ofmshape, ofmsize, ofmlocs,
ifmshape, links, nifm, padding, stride, ngroups, fpropbuf,
local=False):
"""
Forward propagate the inputs of a convolutional network layer to
produce output pre-activations (ready for transformation by an
activation function).
Arguments:
out (GPUTensor): Where to store the forward propagated results.
inputs (GPUTensor): Will be either the dataset input values (first
layer), or the outputs from the previous layer.
weights (GPUTensor): The weight coefficient values for this layer.
ofmshape (tuple): Dimensions of each output feature map (typically
number of height and width neurons).
ofmsize (int): Total size of each output feature map.
ofmlocs (GPUTensor): Indices giving the location of each element
in each output feature map stored in out.
ifmshape (tuple): Dimensions of each input feature map (typically
number of height and width neurons). For this
backend we expect these values to be square.
links (GPUTensor): Input receptive field indices.
nifm (int): Total number of input feature maps.
padding (int): Number of additional elements to include along each
dimension of each local receptive field during the
convolution operation.
stride (int): Number of neurons to shift the filter at each step.
ngroups (int): Number of groups.
fpropbuf (GPUTensor): Temporary storage buffer used to hold the
convolved outputs for a single receptive
field. Not used for this backend.
local (bool, optional): Whether to do local filtering (True) or
convolution (False, the default)
"""
'''
N: Number of images in mini-batch
C: Number of input feature maps
K: Number of output feature maps
D: Depth of input image
H: Height of input image
W: Width of input image
T: Depth of filter kernel
R: Height of filter kernel
S: Width of filter kernel
'''
self.ng.fprop_conv(layer=fpropbuf, I=inputs, F=weights, O=out,
alpha=1.0, repeat=1)
def bprop_conv(self, out, weights, deltas, ofmshape, ofmsize, ofmlocs,
ifmshape, links, padding, stride, nifm, ngroups, bpropbuf,
local=False):
"""
Backward propagate the error through a convolutional network layer.
Arguments:
out (GPUTensor): Where to store the backward propagated errors.
weights (GPUTensor): The weight coefficient values for this layer.
deltas (GPUTensor): The error values for this layer
ofmshape (tuple): Dimensions of each output feature map (typically
height and width).
ofmsize (int): Total size of each output feature map.
ofmlocs (GPUTensor): Indices giving the location of each element in
each output feature map stored in out.
ifmshape (tuple): Dimensions of each input feature map (typically
height and width).
links (GPUTensor): Input receptive field indices.
nifm (int): Total number of input feature maps.
padding (int): Number of additional elements to include along each
dimension of each local receptive field during the
convolution operation.
stride (int): Number of neurons to shift the filter at each step.
ngroups (int): Number of groups.
bpropbuf (GPUTensor): Temporary storage buffer used to hold the
backpropagated error for a single receptive
field
local (bool, optional): Whether to do local filtering (True) or
convolution (False, the default)
"""
self.ng.bprop_conv(layer=bpropbuf, F=weights, E=deltas, grad_I=out,
alpha=1.0, repeat=1)
def update_conv(self, out, inputs, weights, deltas, ofmshape, ofmsize,
ofmlocs, ifmshape, links, nifm, padding, stride, ngroups,
fwidth, updatebuf, local=False, layer=None):
"""
Compute the updated gradient for a convolutional network layer.
Arguments:
out (GPUTensor): Where to store the updated gradient value.
inputs (GPUTensor): Will be either the dataset input values (first
layer), or the outputs from the previous layer.
weights (GPUTensor): The weight coefficient values for this layer.
deltas (GPUTensor): The error values for this layer
ofmshape (tuple): Dimensions of each output feature map (typically
height and width).
ofmsize (int): Total size of each output feature map.
ofmlocs (GPUTensor): Indices giving the location of each element in
each output feature map stored in out.
ifmshape (tuple): Dimensions of each input feature map (typically
height and width).
links (GPUTensor): Input receptive field indices.
nifm (int): Total number of input feature maps.
padding (int): Number of additional elements to include along each
dimension of each local receptive field during the
convolution operation.
stride (int): Number of neurons to shift the filter at each step.
ngroups (int): Number of groups.
fwidth (int): Filter width.
updatebuf (GPUTensor): Temporary storage buffer used to hold the
updated gradient for a single receptive
field
local (bool, optional): Whether to do local filtering (True) or
convolution (False, the default)
layer (Layer): The layer object.
"""
self.ng.update_conv(layer=updatebuf, I=inputs, E=deltas, grad_F=out,
alpha=1.0, repeat=1)
def fprop_pool(self, out, inputs, op, ofmshape, ofmsize, ofmlocs, fshape,
ifmshape, links, nifm, padding, stride, fpropbuf):
"""
Forward propagate the inputs of a Pooling network layer to
produce output pre-activations (ready for transformation by an
activation function).
Arguments:
out (GPUTensor): Where to store the forward propagated results.
inputs (GPUTensor): Will be either the dataset input values (first
layer), or the outputs from the previous layer.
op (string): The type of pooling operation to apply. We support
"max", "avg", "l2" currently.
ofmshape (tuple): Dimensions of each output feature map (typically
number of height and width neurons).
ofmsize (int): Total size of each output feature map.
ofmlocs (GPUTensor): Indices giving the location of each element in
each output feature map stored in out.
fshape (tuple): Dimensions of each filter (typically height and
width).
ifmshape (tuple): Dimensions of each input feature map (typically
number of height and width neurons).
links (GPUTensor): Input receptive field indices.
nifm (int): Total number of input feature maps.
padding (int): Number of additional elements to include along each
dimension of each local receptive field during the
pooling operation.
stride (int): Number of neurons to shift the filter at each step.
fpropbuf (GPUTensor): Temporary storage buffer used to hold the
pooled outputs for a single receptive field.
"""
op = op.lower()
if op == "max":
self.ng.fprop_pool(layer=fpropbuf, I=inputs, O=out, repeat=1)
else:
raise AttributeError("unexpected pooling op type: %s", op)
def bprop_pool(self, out, fouts, inputs, deltas, op, ofmshape, ofmsize,
ofmlocs, fshape, fpsize, ifmshape, links, nifm, padding,
stride, bpropbuf):
"""
Backward propagate the error through a pooling network layer.
Arguments:
out (GPUTensor): Where to store the backward propagated errors.
fouts (GPUTensor): Forward propagated outputs from the previous
layer.
inputs (GPUTensor): Will be either the dataset input values (first
layer), or the outputs from the previous layer.
deltas (GPUTensor): The error values for this layer
op (string): The type of pooling operation to apply. We support
"max", "avg", "l2" currently.
ofmshape (tuple): Dimensions of each output feature map (typically
height and width).
ofmsize (int): Total size of each output feature map.
ofmlocs (GPUTensor): Indices giving the location of each element in
each output feature map stored in out.
fshape (tuple): Dimensions of each filter (typically height and
width).
fpsize (int): The size of each filter.
ifmshape (tuple): Dimensions of each input feature map (typically
height and width).
links (GPUTensor): Input receptive field indices.
nifm (int): Total number of input feature maps.
padding (int): Number of additional elements to include along each
dimension of each local receptive field during the
pooling operation.
stride (int): Number of neurons to shift the filter at each step.
bpropbuf (GPUTensor): Temporary storage buffer used to hold the
backpropagated error for a single receptive
field
"""
op = op.lower()
if op == "max":
self.ng.bprop_pool(layer=bpropbuf, I=inputs, E=deltas, grad_I=out,
repeat=1)
else:
raise AttributeError("unexpected pooling op type: %s", op)
def logistic(self, x, out):
"""
Logistic sigmoid nonlinearity, 1/(1+exp(-x))
Arguments:
x (GPUTensor): Input tensor
out (GPUTensor): Output tensor
"""
self.ng.sig(x, out=out)
return out
def transpose(self, untransposed, transposed):
transposed[:] = untransposed.T
def crossent(self, y, t, partial, out, epsilon, doscale, ismulti=False):
"""
Computes cross entropy cost.
Arguments:
y (GPUTensor): Model outputs
t (GPUTensor): Targets
partial (GPUTensor): temporary buffer used for 2D reduction
out (GPUTensor): Storage for the cross entropy output
epsilon (float): constant for numerical stability
doscale (boolean): If True, cross_entropy is scaled by batch size
ismulti (boolean): If True, compute multi class cross_entropy
"""
sumbuf = partial.reshape((partial.size, 1))[:partial.shape[0]]
if ismulti:
self.ng.sum(-t * self.ng.log(y + epsilon),
axis=None, partial=sumbuf, out=out)
else:
self.ng.sum((t - 1) * self.ng.log(1 - y + epsilon) -
t * self.ng.log(y + epsilon),
axis=None, partial=sumbuf, out=out)
if doscale:
out[:] = out / y.shape[1]
return out
def logistic_compound(self, inputs, outputs):
"""
Applies logistic function and its derivative to the dataset passed.
Arguments:
inputs (GPUTensor): Input data to be transformed. This also
acts as storage for the output of the
derivative function.
outputs (GPUTensor): Storage for the transformed output.
"""
# Apply the logistic function.
outputs[:] = self.ng.sig(inputs)
inputs[:] = (1.0 - outputs) * inputs
def rectlin(self, x, out):
"""
Rectified Linear nonlinearity
Arguments:
x (GPUTensor): Input tensor
out (GPUTensor): Output tensor
"""
self.ng.maximum(x, 0., out=out)
return out
def rectlin_derivative(self, x, out):
"""
Rectified linear nonlinearity derivative
Arguments:
x (GPUTensor): Input tensor
out (GPUTensor): Output tensor
"""
self.ng.greater(x, 0, out=out)
return out
def rectleaky(self, x, slope, out):
"""
Leaky rectified linear nonlinearity
Arguments:
x (GPUTensor): Input tensor
slope (float): amount of gradient to apply when unit is not active
out (GPUTensor): Output tensor
"""
out[:] = self.ng.maximum(x, x*slope)
def rectleaky_derivative(self, x, slope, out):
"""
Leaky rectified linear nonlinearity derivative
Arguments:
x (GPUTensor): Input tensor
slope (float): amount of gradient to apply when unit is not active
out (GPUTensor): Output tensor
"""
out[:] = self.ng.greater(x, 0) * (1.0 - slope) + slope
def sum(self, tsr, axes, out):
"""
Sum
Arguments:
tsr (GPUTensor): Input tensor
axes (int): Axis along which the reduction is performed. If axes
is None, the tensor is flattened and reduced over
both dimensions.
out (GPUTensor): Output tensor
"""
if axes is None:
sze = tsr.shape[0]*tsr.shape[1]
self.ng.sum(tsr.reshape(sze, 1), axis=0, out=out)
else:
self.ng.sum(tsr, axis=axes, out=out)
return out
def norm(self, tsr, order=None, axis=None, out=None):
"""
Calculates and returns the vector p-norms of the GPUTensor along the
specified axis. The p-norm is defined on a vector A as
        :math:`||A||_p = (\sum_i |A_i|^p)^{1/p}`.
Arguments:
tsr (GPUTensor): the GPUTensor on which to find the norms
order (int): The order or p upon which the norm is calculated.
Valid values include:
None, inf, -inf, 0, 1, -1, 2, -2, ...
axis (int): The axis along which to compute vector norms.
out (GPUTensor): where to write the results to. Must be
of the expected result shape.
Returns:
GPUTensor: p-norm of tsr along the specified axis.
Raises:
IndexError if invalid axis specified
AttributeError if invalid order specified
See Also:
`numpy.linalg.norm`
"""
        if not isinstance(axis, int) or axis < 0 or axis >= len(tsr.shape):
            raise IndexError("invalid axis value: %s" % axis)
        if not isinstance(order, (int, float)):
            raise AttributeError("invalid order value: %s" % order)
        if out is None:
            raise AttributeError("no output tensor specified")
if order == float('Inf'):
self.ng.max(self.fabs(tsr), axis, out)
elif order == float('-Inf'):
self.ng.min(self.fabs(tsr), axis, out)
elif order == 0:
tmp = self.zeros(tsr.shape)
self.ng.not_equal(tsr, tmp, tmp)
self.ng.sum(tmp, axis, out)
else:
tmp = self.empty(tsr.shape)
self.ng.power(self.fabs(tsr), order, tmp)
self.ng.sum(tmp, axis, out)
self.ng.power(out, (1.0 / order), out)
return out
def mean(self, tsr, axes, out):
"""
Calculates the arithmetic mean of the elements along the specified
axes.
Arguments:
tsr (GPUTensor): Input tensor
axes (int): Axis along which the reduction is performed. If axes
is None, the tensor is flattened and reduced over
both dimensions.
out (GPUTensor): Output tensor
"""
if axes is None:
sze = tsr.shape[0]*tsr.shape[1]
self.ng.mean(tsr.reshape(sze, 1), axis=0, out=out)
else:
self.ng.mean(tsr, axis=axes, out=out)
return out
def min(self, tsr, axes, out):
"""
Calculates the minimum of the elements along the specified
axes.
Arguments:
tsr (GPUTensor): Input tensor
axes (int): Axis along which the reduction is performed. If axes
is None, the tensor is flattened and reduced over
both dimensions.
out (GPUTensor): Output tensor
"""
if axes is None:
sze = tsr.shape[0]*tsr.shape[1]
self.ng.min(tsr.reshape(sze, 1), axis=0, out=out)
else:
self.ng.min(tsr, axis=axes, out=out)
return out
def max(self, tsr, axes, out):
"""
Calculates the maximum of the elements along the specified
axes.
Arguments:
tsr (GPUTensor): Input tensor
axes (int): Axis along which the reduction is performed. If axes
is None, the tensor is flattened and reduced over
both dimensions.
out (GPUTensor): Output tensor
"""
if axes is None:
sze = tsr.shape[0]*tsr.shape[1]
self.ng.max(tsr.reshape(sze, 1), axis=0, out=out)
else:
self.ng.max(tsr, axis=axes, out=out)
return out
def variance(self, tsr, axes, out, mean=None):
"""
Calculates the variance of the elements along the specified
axes.
Arguments:
tsr (GPUTensor): the tensor on which to compute the variance
            axes (int, list, optional): the dimension(s) along which to
                                        compute the variance. If set to None,
                                        the variance is taken over all
                                        dimensions.
out (GPUTensor): where the result will be stored.
mean (GPUTensor): the tensor containing mean of tsr
Returns:
GPUTensor: reference to out
"""
if mean is None:
logger.error("GPUTensor requires mean to be specified.")
raise ValueError("mean not specified")
self.ng.mean(self.ng.square(tsr-mean), axis=axes, out=out)
return out
def fabs(self, x, out):
"""
Calculates absolute value of the elements in a tensor
Arguments:
x (GPUTensor): Input tensor
out (GPUTensor): Output tensor
Returns:
GPUTensor: reference to out
"""
self.ng.fabs(x, out=out)
return out
def sqrt(self, x, out):
"""
Calculates square root of the elements in a tensor
Arguments:
x (GPUTensor): Input tensor
out (GPUTensor): Output tensor
Returns:
GPUTensor: reference to out
"""
self.ng.sqrt(x, out=out)
return out
def zeros(self, shape, dtype=default_dtype, persist_values=True):
"""
Allocate a new GPUTensor and fill it with zeros.
Arguments:
            shape (tuple): Shape of the desired GPUTensor
dtype (dtype): Optional datatype
persist_values (bool, optional): If set to True (the default), the
values assigned to this Tensor
will persist across multiple begin
and end calls. Setting to False
may provide a performance increase
if values do not need to be
maintained across such calls
Returns:
GPUTensor: output
"""
return self.ng.zeros(shape, dtype=dtype)
def ones(self, shape, dtype=default_dtype, persist_values=True):
"""
Allocate a new GPUTensor and fill it with ones.
Arguments:
            shape (tuple): Shape of the desired GPUTensor
dtype (dtype): Optional datatype
persist_values (bool, optional): If set to True (the default), the
values assigned to this Tensor
will persist across multiple begin
and end calls. Setting to False
may provide a performance increase
if values do not need to be
maintained across such calls
Returns:
GPUTensor: output
"""
return self.ng.ones(shape, dtype=dtype)
def zeros_like(self, ary, dtype=default_dtype, persist_values=True,
name=None):
"""
Instantiate a new instance of this backend's Tensor class, with the
shape taken from ary and populating each element with a value of 0.
Arguments:
ary (tensor object): Tensor to inherit the dimensions of.
dtype (data-type, optional): If present, specifies the underlying
type to employ for each element.
persist_values (bool, optional): If set to True (the default), the
values assigned to this Tensor
will persist across multiple begin
and end calls. Setting to False
may provide a performance increase
if values do not need to be
maintained across such calls
Returns:
Tensor: array object
Raises:
NotImplementedError: Can't be instantiated directly.
See Also:
:py:func:`~neon.backends.backend.Backend.empty`,
:py:func:`~neon.backends.backend.Backend.ones`,
:py:func:`~neon.backends.backend.Backend.array`
"""
return self.zeros(ary.shape, dtype=dtype,
persist_values=persist_values)
def empty_like(self, ary, dtype=default_dtype, persist_values=True,
name=None):
"""
Instantiate a new instance of this backend's Tensor class, with the
shape taken from ary.
Arguments:
ary (tensor object): Tensor to inherit the dimensions of.
dtype (data-type, optional): If present, specifies the underlying
type to employ for each element.
persist_values (bool, optional): If set to True (the default), the
values assigned to this Tensor
will persist across multiple begin
and end calls. Setting to False
may provide a performance increase
if values do not need to be
maintained across such calls
Returns:
Tensor: array object
Raises:
NotImplementedError: Can't be instantiated directly.
See Also:
:py:func:`~neon.backends.backend.Backend.empty`,
:py:func:`~neon.backends.backend.Backend.ones`,
:py:func:`~neon.backends.backend.Backend.array`
"""
return self.empty(ary.shape, dtype=dtype,
persist_values=persist_values, name=name)
def empty(self, shape, dtype=default_dtype, persist_values=True,
name=None):
"""
Allocate a new GPUTensor.
Arguments:
            shape (tuple): Shape of the desired GPUTensor
dtype (dtype): Optional datatype
persist_values (bool, optional): If set to True (the default), the
values assigned to this Tensor
will persist across multiple begin
and end calls. Setting to False
may provide a performance increase
if values do not need to be
maintained across such calls
Returns:
GPUTensor: output
"""
return self.ng.empty(shape, dtype=dtype)
def copy(self, ary):
"""
returns a copy of ary
"""
res = self.empty_like(ary)
res.copy(ary)
return res
def array(self, ary, dtype=default_dtype, persist_values=True, name=None,
allocator=drv.mem_alloc):
"""
Allocate a new GPUTensor and fill it with supplied numpy array.
Arguments:
ary (ndarray): Numpy array with source data
dtype (dtype, optional): Optional datatype
persist_values (bool, optional): If set to True (the default), the
values assigned to this Tensor
will persist across multiple begin
and end calls. Setting to False
may provide a performance increase
if values do not need to be
maintained across such calls
name (string): Name for the GPUTensor
allocator (pycuda): Pycuda memory allocator
Returns:
GPUTensor: output
"""
return self.ng.array(ary, dtype=dtype, name=name)
def add(self, left, right, out):
"""
Elementwise addition
Arguments:
left (GPUTensor, numeric): left-hand side operand.
right (GPUTensor, numeric): right-hand side operand.
out (GPUTensor): where the result will be stored.
Returns:
GPUTensor: reference to out
"""
self.ng.add(left, right, out=out)
return out
def subtract(self, left, right, out):
"""
Elementwise subtraction
Arguments:
left (GPUTensor, numeric): left-hand side operand.
right (GPUTensor, numeric): right-hand side operand.
out (GPUTensor): where the result will be stored.
Returns:
GPUTensor: reference to out
"""
self.ng.subtract(left, right, out=out)
return out
def multiply(self, left, right, out):
"""
Elementwise multiplication
Arguments:
left (GPUTensor, numeric): left-hand side operand.
right (GPUTensor, numeric): right-hand side operand.
out (GPUTensor): where the result will be stored.
Returns:
GPUTensor: reference to out
"""
self.ng.multiply(left, right, out=out)
return out
def divide(self, left, right, out):
"""
Elementwise division
Arguments:
left (GPUTensor, numeric): left-hand side operand.
right (GPUTensor, numeric): right-hand side operand.
out (GPUTensor): where the result will be stored.
Returns:
GPUTensor: reference to out
"""
self.ng.divide(left, right, out=out)
return out
def greater(self, left, right, out):
"""
Elementwise greater than testing
Arguments:
left (GPUTensor, numeric): left-hand side operand.
right (GPUTensor, numeric): right-hand side operand.
out (GPUTensor): where the result will be stored.
Returns:
GPUTensor: reference to out
"""
self.ng.greater(left, right, out=out)
return out
def equal(self, left, right, out):
"""
Performs element-wise equality testing on each element of left and
right, storing the result in out. Each operand is assumed to be the
same shape (or broadcastable as such).
Arguments:
left (GPUTensor, numeric): left-hand side operand.
right (GPUTensor, numeric): right-hand side operand.
out (GPUTensor): where the result will be stored.
Returns:
GPUTensor: reference to out
"""
self.ng.equal(left, right, out=out)
return out
def not_equal(self, left, right, out):
"""
Elementwise not equal testing
Arguments:
left (GPUTensor, numeric): left-hand side operand.
right (GPUTensor, numeric): right-hand side operand.
out (GPUTensor): where the result will be stored.
Returns:
GPUTensor: reference to out
"""
self.ng.not_equal(left, right, out=out)
return out
def clip(self, a, a_min, a_max, out):
"""
Elementwise clipping between a range of specified values
Arguments:
a (GPUTensor): input tensor.
a_min (float): floor value.
a_max (float): ceiling value.
out (GPUTensor): where the result will be stored.
Returns:
GPUTensor: reference to out
"""
self.ng.clip(a, a_min, a_max, out=out)
return out
def log(self, a, out):
"""
Elementwise base-e logarithm
Arguments:
a (GPUTensor): input tensor.
out (GPUTensor): where the result will be stored.
Returns:
GPUTensor: reference to out
"""
self.ng.log(a, out=out)
return out
def tanh(self, a, out):
"""
Elementwise tanh
Arguments:
a (GPUTensor): input tensor.
out (GPUTensor): where the result will be stored.
Returns:
GPUTensor: reference to out
"""
self.ng.tanh(a, out=out)
return out
def argmax(self, a, out, axis=0):
"""
Calculates the indices of the maximal element value along the specified
axis. If multiple elements contain the maximum, only the elements of
the first are returned.
Arguments:
            a (GPUTensor): The GPUTensor on which to find the maximum indices
            axis (int): The dimension along which to find the maximum. If set
                        to None, find the overall maximum index of a flattened
                        representation of a.
out (GPUTensor): Where to store the result. Should be of the
appropriate type and expected shape
Returns:
GPUTensor: reference to out
"""
self.ng.argmax(a, out=out, axis=axis)
return out
def softmax(self, x, out):
"""
Softmax nonlinearity. Computes exp(x-max(x)) / sum_i exp(x_i-max(x_i))
Arguments:
x (GPUTensor): input tensor.
out (GPUTensor): where the result will be stored.
Returns:
GPUTensor: reference to out
"""
out[:] = (self.ng.reciprocal(self.ng.sum(
self.ng.exp(x - self.ng.max(x, axis=0)), axis=0)) *
self.ng.exp(x - self.ng.max(x, axis=0)))
return out
def softmax_gradient(self, y, err, out):
"""
Gradient of the softmax nonlinearity.
Arguments:
y (GPUTensor): input tensor.
err (GPUTensor): backpropagated error.
out (GPUTensor): where the result will be stored.
Returns:
GPUTensor: reference to out
"""
raise NotImplementedError("Softmax gradient should use shortcut")
def make_binary_mask(self, tsr, keepthresh=0.5, dtype=default_dtype):
"""
Create a binary mask for dropout layers.
Arguments:
tsr (GPUTensor): Output tensor
keepthresh (float): fraction of ones
"""
self.ng.dropout(keep=keepthresh, out=tsr)
def gdm_compound(self, ps_item, us_item, vs_item, momentum_coef,
learning_rate, epoch):
"""
Perform gradient descent update with momentum.
Arguments:
ps_item (GPUTensor): parameter tensor (e.g. a weight matrix)
us_item (GPUTensor): update tensor, contains gradient wrt. weights
vs_item (GPUTensor): velocity tensor.
momentum_coef (float): momentum coefficient.
learning_rate (float): learning rate.
epoch (int): epoch (used in conjunction with diagnostics).
Outputs are written to vs_item (updated velocity)
and ps_item (updated weights)
"""
vs_item[:] = vs_item * momentum_coef - us_item * learning_rate
ps_item[:] = ps_item + vs_item
def gdmwd_compound(self, ps_item, us_item, vs_item, momentum_coef,
learning_rate, wd, epoch):
"""
Perform gradient descent update with momentum and weight decay.
Arguments:
ps_item (GPUTensor): parameter tensor (e.g. a weight matrix)
us_item (GPUTensor): update tensor, contains gradient wrt. weights
vs_item (GPUTensor): velocity tensor.
momentum_coef (float): momentum coefficient.
learning_rate (float): learning rate.
wd (float): weight decay parameter.
epoch (int): epoch (used in conjunction with diagnostics).
Outputs:
ps_item, the updated weights.
vs_item, the updated velocity.
us_item, used as a temp buffer.
"""
vs_item[:] = (vs_item * momentum_coef -
us_item * learning_rate -
ps_item * learning_rate * wd)
ps_item[:] = ps_item + vs_item
def exp_mavg(self, mavg, newval, rho):
"""
Calculate the exponential moving average
Arguments:
mavg: The running value of the moving average
newval: New sample to be added to the moving average
rho: Interpolation value
"""
mavg[:] = rho * mavg + (1.0 - rho) * newval
def ada_update(self, ps_item, us_item, gs_item, ds_item, ls_item, ss_item,
rho, epsilon):
"""
Update rule for AdaDelta (Zeiler, http://arxiv.org/abs/1212.5701)
Arguments:
ps_item: weight / parameter (will be updated)
us_item: update
gs_item: expected value of Gradient Squared (will be updated)
ds_item: expected value of Delta Squared (will be updated)
ls_item: learning rate (will be updated)
ss_item: Scratch Space
rho: decay constant (determines window size)
epsilon: small positive constant for numerical stability
"""
# Accumulate E[Grad^2]
gs_item[:] = gs_item * rho + (1.0 - rho) * us_item * us_item
# Calculate Updates
ls_item[:] = self.ng.sqrt((ds_item + epsilon) /
(gs_item + epsilon)) * (-1.0) * us_item
# Accumulate E[Delt^2]
ds_item[:] = ds_item * rho + (1.0 - rho) * ls_item * ls_item
# Final update to the params
ps_item[:] = ps_item + ls_item
def rms_update(self, params, updates, run_squares, velocity, scratch_space,
gamma, epsilon, learning_rate, momentum_coef):
# Update running squares
run_squares[:] = gamma * run_squares + (1. - gamma) * updates * updates
# Now scale the gradient by lr / rms(grad) (with a epsilon term for
# stability) and use it to update the params
if momentum_coef == 0:
params[:] = params - learning_rate * updates * self.ng.reciprocal(
self.ng.sqrt(run_squares) + epsilon)
else:
velocity[:] = velocity * momentum_coef - \
learning_rate * updates * \
self.ng.reciprocal(self.ng.sqrt(run_squares) + epsilon)
params[:] = params + velocity
def fprop_bn_compound(self, inputs, beta, gamma, eps, xhat,
xmean, xvar, gmean, gvar, rho, out):
"""
Batch normalization forward pass, compounded to run in 3 kernel calls.
Arguments:
inputs: input data to be normalized
beta: location parameter
gamma: scale parameter
eps: small constant for numerical stability
xvar: variance (updated)
xhat: normalized input (updated)
out: normalized and rescaled input (updated)
"""
xvar[:] = self.ng.var(inputs, axis=1)
xmean[:] = self.ng.mean(inputs, axis=1)
gmean[:] = gmean * rho + (1.0 - rho) * xmean
gvar[:] = gvar * rho + (1.0 - rho) * xvar
xvar[:] = self.ng.reciprocal(self.ng.sqrt(xvar + eps))
xhat[:] = xvar * (inputs - xmean)
out[:] = xhat * gamma + beta
return out
def bprop_bn_compound(self, xhat, error, xvar, gamma,
beta_updates, gamma_updates):
"""
Batch normalization backward pass, compounded to run with 4 kernel
calls.
Arguments:
xhat: normalized input data (updated)
error: backpropagated deltas (updated)
xvar: precomputed variance
gamma: scale parameter
beta_updates: gradient update for beta (updated)
gamma_updates: gradient update for gamma (updated)
"""
gamma_updates[:] = self.ng.sum(xhat * error, axis=1)
beta_updates[:] = self.ng.sum(error, axis=1)
xhat[:] = (xhat * gamma_updates + beta_updates) / float(xhat.shape[1])
error[:] = xvar * gamma * (error - xhat)
| 39.736842 | 79 | 0.556126 | 5,446 | 48,320 | 4.882666 | 0.125046 | 0.0176 | 0.011733 | 0.011846 | 0.618405 | 0.581475 | 0.560453 | 0.529803 | 0.504306 | 0.489263 | 0 | 0.003858 | 0.366991 | 48,320 | 1,215 | 80 | 39.769547 | 0.865498 | 0.572641 | 0 | 0.224359 | 0 | 0 | 0.024425 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.217949 | false | 0.003205 | 0.025641 | 0.00641 | 0.378205 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
9ef87644a467b7a43c75ac4ae95f1780dab19950 | 3,934 | py | Python | algopy/base_type.py | arthus701/algopy | 1e2430f803289bbaed6bbdff6c28f98d7767835c | [
"Unlicense"
] | 54 | 2015-03-05T13:38:08.000Z | 2021-11-29T11:54:48.000Z | algopy/base_type.py | arthus701/algopy | 1e2430f803289bbaed6bbdff6c28f98d7767835c | [
"Unlicense"
] | 7 | 2016-04-06T11:25:00.000Z | 2020-11-09T13:53:20.000Z | algopy/base_type.py | arthus701/algopy | 1e2430f803289bbaed6bbdff6c28f98d7767835c | [
"Unlicense"
] | 13 | 2015-01-17T17:05:56.000Z | 2021-08-05T01:13:16.000Z | """
This implements an abstract base class Ring.
Rationale:
Goal is to separate the datatype specification from the algorithms and containers for the following reasons:
1) It allows the algorithms to be used directly, *without* overhead. E.g. calling mul(z.data, x.data, y.data)
has much less overhead than z = x.__mul__(y). data is to be kept as close as possible to
machine primitives. E.g. data is array or tuple of arrays.
2) Potential reuse of an algorithm in several datatypes.
3) Relatively easy to connect high performance algorithms with a very highlevel abstract description.
For instance, most programming languages allow calling C-functions. Therefore, the algorithms
should be given as void fcn(int A, double B, ...)
For instance, the datatype could be a truncated Taylor polynomial in R[t]/<t^D>, implemented by a class Foo.
The underlying container is a simple array of doubles.
"""
import numpy
class Ring(object):
"""
An abstract base class in an attempt to follow the DRY principle.
It implements the algebraic class of a ring as defined on
http://en.wikipedia.org/wiki/Ring_%28mathematics%29
The idea is that the set is described in data and the operations +,* etc.
are implemented as functions that operate on the data.
E.g. the factor ring of natural numbers modulo 4, x.data = 3 y.data = 2
then z = add(x,y) is implemented as
def add(x,y):
            return self.__class__((x.data + y.data) % 4)
and one obtains z.data = 1
Warning:
Since this class is only of little value it may be deprecated in the future.
"""
data = NotImplementedError()
def totype(self, x):
"""
tries to convert x to an object of the class
works for : scalar x, numpy.ndarray x
Remark:
        at the moment, a scalar x is expanded to a Ring of the same degree as self.
        The reason is a missing implementation that works for graded rings of different degrees.
Once such implementations exist, this function should be adapted.
"""
if numpy.isscalar(x):
xdata = self.__class__.__zeros_like__(self.data)
self.__class__.__scalar_to_data__(xdata, x)
return self.__class__(xdata)
elif isinstance(x, numpy.ndarray):
raise NotImplementedError('sorry, not implemented just yet')
elif not isinstance(x, self.__class__):
            raise NotImplementedError('Cannot convert x\n type(x) = %s but expected type(x) = %s' % (str(type(x)), str(self.__class__)))
else:
return x
def __add__(self, rhs):
rhs = self.totype(rhs)
retval = self.__class__(self.__class__.__zeros_like__(self.data))
self.__class__.add(retval.data, self.data, rhs.data)
return retval
def __sub__(self, rhs):
rhs = self.totype(rhs)
retval = self.__class__(self.__class__.__zeros_like__(self.data))
self.__class__.sub(retval.data, self.data, rhs.data)
return retval
def __mul__(self,rhs):
rhs = self.totype(rhs)
retval = self.__class__(self.__class__.__zeros_like__(self.data))
self.__class__.mul(retval.data, self.data, rhs.data)
return retval
def __truediv__(self,rhs):
rhs = self.totype(rhs)
retval = self.__class__(self.__class__.__zeros_like__(self.data))
self.__class__.div(retval.data, self.data, rhs.data)
return retval
def __radd__(self, lhs):
return self + lhs
def __rmul__(self, lhs):
return self * lhs
def zeros_like(self):
return self.__class__(self.__class__.__zeros_like__(self.data))
def __str__(self):
return str(self.data)
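# Minimal concrete sketch of the interface above (illustrative only, not part
# of algopy): integers modulo 4, matching the example in the class docstring.
# A 0-d numpy array serves as the container so the in-place add/sub/mul
# signatures work; ZMod4 and its helpers are hypothetical names, and div is
# omitted for brevity.
class ZMod4(Ring):
    def __init__(self, data):
        self.data = data
    @staticmethod
    def __zeros_like__(data):
        return numpy.zeros_like(data)
    @staticmethod
    def __scalar_to_data__(xdata, x):
        xdata[...] = x % 4  # write the scalar into the container in place
    @staticmethod
    def add(z, x, y):
        z[...] = (x + y) % 4
    @staticmethod
    def sub(z, x, y):
        z[...] = (x - y) % 4
    @staticmethod
    def mul(z, x, y):
        z[...] = (x * y) % 4
# x = ZMod4(numpy.array(3)); y = ZMod4(numpy.array(2))
# (x + y).data == 1 and (x * y).data == 2, as in the docstring example.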
| 35.125 | 113 | 0.630147 | 530 | 3,934 | 4.401887 | 0.373585 | 0.073296 | 0.039006 | 0.046292 | 0.243463 | 0.243463 | 0.223746 | 0.223746 | 0.193742 | 0.125161 | 0 | 0.004296 | 0.290036 | 3,934 | 111 | 114 | 35.441441 | 0.831006 | 0.482206 | 0 | 0.285714 | 0 | 0.02381 | 0.047439 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.214286 | false | 0 | 0.02381 | 0.095238 | 0.52381 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
9ef9d0cb1ac73ebdbfd64d7d2d0514517d257322 | 734 | py | Python | src/python/director/builtin/plugins/measurement_tool/plugin.py | afdaniele/director | 845ba027f9009803fcf77f44874f2ab9d7ab72e3 | [
"BSD-3-Clause"
] | null | null | null | src/python/director/builtin/plugins/measurement_tool/plugin.py | afdaniele/director | 845ba027f9009803fcf77f44874f2ab9d7ab72e3 | [
"BSD-3-Clause"
] | null | null | null | src/python/director/builtin/plugins/measurement_tool/plugin.py | afdaniele/director | 845ba027f9009803fcf77f44874f2ab9d7ab72e3 | [
"BSD-3-Clause"
] | null | null | null | from director.devel.plugin import GenericPlugin
from director.fieldcontainer import FieldContainer
from .lib import measurementpanel
from PythonQt import QtCore
class Plugin(GenericPlugin):
ID = 'measurement_tool'
NAME = 'MeasurementTool'
DEPENDENCIES = ['MainWindow']
def __init__(self, app, view):
super(Plugin, self).__init__(app, view)
def init(self, fields):
measurementPanel = measurementpanel.MeasurementPanel(self.app, self.view)
measurementDock = self.app.addWidgetToDock(
measurementPanel.widget,
QtCore.Qt.RightDockWidgetArea,
visible=False
)
# ---
return FieldContainer(
measurementToolPanel=measurementPanel,
measurementToolDock=measurementDock
)
| 25.310345 | 77 | 0.741144 | 68 | 734 | 7.867647 | 0.529412 | 0.039252 | 0.041122 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.175749 | 734 | 28 | 78 | 26.214286 | 0.884298 | 0.004087 | 0 | 0 | 0 | 0 | 0.056241 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.095238 | false | 0 | 0.190476 | 0 | 0.52381 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |

# ---- Python/Back_solve_python/back_joon/StringArray/P10808.py  (repo: skyriv213/Studyriv, license: MIT) ----
s = input()
num = [0] * 26
for i in range(len(s)):
num[ord(s[i])-97] += 1
# print the 26 per-letter counts, separated by single spaces
print(" ".join(map(str, num)))
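
# Worked example: the input "baekjoon" prints
#   1 1 0 0 1 0 0 0 0 1 1 0 0 1 2 0 0 0 0 0 0 0 0 0 0 0
# (one occurrence count for each letter a through z).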

# ---- src/reliefcpp/utils.py  (repo: ferrocactus/reliefcpp, license: MIT) ----
from enum import Enum
class Metric(Enum):
EUCLIDEAN = 0
MANHATTAN = 1
HAMMING = 2
L2 = 3
L1 = 4
metric_names = [
"euclidean",
"manhattan",
"hamming",
"l2",
"l1"
]
def _validate_metric(metric_name):
if isinstance(metric_name, Metric):
return metric_name.value
elif isinstance(metric_name, str):
metric_name = metric_name.lower()
return metric_names.index(metric_name)
elif isinstance(metric_name, int):
return metric_name
else:
raise ValueError("Could not identify metric.")
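
# Usage sketch (illustrative):
#
#   _validate_metric(Metric.EUCLIDEAN)  # -> 0
#   _validate_metric("Manhattan")       # -> 1  (case-insensitive name lookup)
#   _validate_metric(3)                 # -> 3  (ints pass through unchanged)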

# ---- plotter.py  (repo: ZiegHailo/SMUVI, license: MIT) ----
__author__ = 'zieghailo'
import matplotlib.pyplot as plt
# plt.ion()
def show():
plt.show()
plt.get_current_fig_manager().full_screen_toggle()
def plot_graph(graph):
# plt.ion()
x = [p.x for p in graph.points]
y = [p.y for p in graph.points]
plt.plot(x, y, 'b*')
plt.draw()
def plot_arrows(graph):
for p in graph.points:
x = p.x
y = p.y
for c in p.connections:
cx = c.x
cy = c.y
# ax.arrow(x, y, cx-x, cy-y)
plt.plot([x, cx], [y, cy], 'k')
plt.draw()
def plot_visited(visited):
x = [p.x for p in visited]
y = [p.y for p in visited]
plt.plot(x, y, 'ro', ms=10)
plt.draw()
def plot_connection(start, end):
plt.plot([start.x, end.x], [start.y, end.y], 'g', linewidth=4)
def start_gui(graph):
fig = plt.figure(1)
ax = fig.add_subplot(111)
ax.set_title('click to build line segments')
ax.axis('equal')
line, = ax.plot([0, 100], [0, 100], 'b.') # empty line
pointbuilder = PointBuilder(line, ax, graph)
fig.waitforbuttonpress(0)
class PointBuilder:
def __init__(self, points, ax, graph):
self.points = points
self.ax = ax
self.graph = graph
self.cid = points.figure.canvas.mpl_connect('button_press_event', self)
self.kid = points.figure.canvas.mpl_connect('key_press_event', self)
def __call__(self, event):
        print('click', event)
        if event.inaxes != self.points.axes:
            return
self.graph.add_point(event.xdata, event.ydata)
x = [p.x for p in self.graph.points]
y = [p.y for p in self.graph.points]
plt.cla()
self.graph.build_graph()
plot_arrows(self.graph)
plot_graph(self.graph)
if event.key != 'x':
plt.waitforbuttonpress(0)
if __name__ == "__main__":
    # start_gui needs a graph object exposing .points and .add_point;
    # supply one from the calling code, e.g. start_gui(my_graph).
    raise SystemExit("usage: call start_gui(graph) with a graph instance")

# ---- fannypack/utils/_deprecation.py  (repo: brentyi/hfdsajk, license: MIT) ----
import warnings
from typing import Callable, Optional, TypeVar, cast
CallableType = TypeVar("CallableType", bound=Callable)
def deprecation_wrapper(message: str, function_or_class: CallableType) -> CallableType:
"""Creates a wrapper for a deprecated function or class. Prints a warning
the first time a function or class is called.
Args:
message (str): Warning message.
function_or_class (CallableType): Function or class to wrap.
Returns:
CallableType: Wrapped function/class.
"""
warned = False
def curried(*args, **kwargs): # pragma: no cover
nonlocal warned
if not warned:
warnings.warn(message, DeprecationWarning, stacklevel=2)
warned = True
return function_or_class(*args, **kwargs)
return cast(CallableType, curried)
def new_name_wrapper(
old_name: str, new_name: str, function_or_class: CallableType
) -> CallableType:
"""Creates a wrapper for a renamed function or class. Prints a warning the first
time a function or class is called with the old name.
Args:
old_name (str): Old name of function or class. Printed in warning.
new_name (str): New name of function or class. Printed in warning.
function_or_class (CallableType): Function or class to wrap.
Returns:
CallableType: Wrapped function/class.
"""
return deprecation_wrapper(
f"{old_name} is deprecated! Use {new_name} instead.", function_or_class
)
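
# Usage sketch (illustrative):
#
#   def new_fn():
#       pass
#
#   old_fn = new_name_wrapper("old_fn", "new_fn", new_fn)
#   old_fn()  # warns "old_fn is deprecated! Use new_fn instead." on first call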

# ---- practice/ai/machine-learning/digital-camera-day-or-night/digital-camera-day-or-night.py  (repo: zeyuanxy/HackerRank, license: MIT) ----
if __name__ == "__main__":
data = raw_input().strip(',\n').split(' ')
count = 0
total = 0
for pxl in data:
pxl = pxl.split(',')
mean = 0
for i in pxl:
mean += int(i)
mean /= 3
if mean < 70:
count += 1
total += 1
if float(count) / total > 0.4:
print 'night'
else:
print 'day'
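
# Heuristic: a pixel counts as "dark" when the mean of its three channels is
# below 70; if more than 40% of the sampled pixels are dark, the photo is
# classified as night.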

# ---- estradaspt_legacy/__init__.py  (repo: dpjrodrigues/home-assistant-custom-components, license: MIT) ----
import logging
import async_timeout
import urllib.request
import time
import re
from datetime import datetime, timedelta
import voluptuous as vol
import homeassistant.helpers.config_validation as cv
from homeassistant.components.sensor import PLATFORM_SCHEMA
from homeassistant.helpers.entity import Entity
from homeassistant.helpers.entity_component import EntityComponent
from homeassistant.util import Throttle
from homeassistant.helpers.aiohttp_client import async_get_clientsession
REQUIREMENTS = ['pyEstradasPT==1.0.2']
_LOGGER = logging.getLogger(__name__)
ATTRIBUTION = "Powered by estradas.pt"
CONF_CAMERA = 'camera'
SCAN_INTERVAL = timedelta(minutes=5)
DOMAIN = 'estradaspt'
PLATFORM_SCHEMA = vol.Schema({
DOMAIN: vol.Schema({
vol.Required(CONF_CAMERA): vol.All(cv.ensure_list, [cv.string])
})
}, extra=vol.ALLOW_EXTRA)
async def async_setup(hass, config):
"""Set up the Camera component"""
from pyEstradasPT import Cameras
websession = async_get_clientsession(hass)
with async_timeout.timeout(10, loop=hass.loop):
cameras = await Cameras.get(websession)
component = EntityComponent(_LOGGER, DOMAIN, hass)
entities = []
conf = config.get(DOMAIN)
for camera in conf[0].get(CONF_CAMERA):
url = await cameras.UrlByCameraName(camera)
        file_name = '/config/www/' + re.sub('[^A-Za-z0-9]+', '', camera) + '.3gp'
entities.append(CameraVideo(camera,file_name,url))
await store_cam_video(url, file_name)
await component.async_add_entities(entities)
return True
async def store_cam_video(url, file_name):
"""Save camera 3gp """
urllib.request.urlretrieve(url, file_name)
class CameraVideo(Entity):
"""Sensor that reads and stores the camera video."""
ICON = 'mdi:webcam'
def __init__(self, name, file_name, url):
"""Initialize the component."""
self._name = name
self._file_name = file_name
self._url = url
self._last_update = datetime.now()
@property
def name(self):
"""Return the name of the component."""
return self._name
@property
def file_name(self):
"""Return the file_name where camara was saved."""
return self._file_name
@property
def url(self):
"""Return the url of the camera."""
        return self._url
@property
def last_update(self):
"""Return the date when camera url refreshed."""
return self._last_update
@property
def icon(self):
"""Icon to use in the frontend, if any."""
return self.ICON
@property
def device_state_attributes(self):
"""Return other details about the sensor state."""
attrs = {}
attrs["name"] = self._name
attrs["last_update"] = self._last_update
attrs["file_name"] = self._file_name
attrs["url"] = self._url
return attrs
@Throttle(SCAN_INTERVAL)
async def async_update(self):
"""Update the cam."""
await store_cam_video(self._url, self._file_name)
self._last_update = datetime.now()
self.schedule_update_ha_state()
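
# Configuration sketch (illustrative; the camera name is a placeholder for a
# name known to estradas.pt):
#
#   # configuration.yaml
#   estradaspt:
#     camera:
#       - "Some Camera Name"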

# ---- python/terra_proto/terra/treasury/v1beta1/__init__.py  (repo: Vritra4/terra.proto, license: Apache-2.0) ----
# Generated by the protocol buffer compiler. DO NOT EDIT!
# sources: terra/treasury/v1beta1/genesis.proto, terra/treasury/v1beta1/query.proto, terra/treasury/v1beta1/treasury.proto
# plugin: python-betterproto
from dataclasses import dataclass
from typing import Dict, List
import betterproto
from betterproto.grpc.grpclib_server import ServiceBase
import grpclib
@dataclass(eq=False, repr=False)
class Params(betterproto.Message):
"""Params defines the parameters for the oracle module."""
tax_policy: "PolicyConstraints" = betterproto.message_field(1)
reward_policy: "PolicyConstraints" = betterproto.message_field(2)
seigniorage_burden_target: str = betterproto.string_field(3)
mining_increment: str = betterproto.string_field(4)
window_short: int = betterproto.uint64_field(5)
window_long: int = betterproto.uint64_field(6)
window_probation: int = betterproto.uint64_field(7)
@dataclass(eq=False, repr=False)
class PolicyConstraints(betterproto.Message):
"""
PolicyConstraints - defines policy constraints can be applied in tax &
reward policies
"""
rate_min: str = betterproto.string_field(1)
rate_max: str = betterproto.string_field(2)
cap: "___cosmos_base_v1_beta1__.Coin" = betterproto.message_field(3)
change_rate_max: str = betterproto.string_field(4)
@dataclass(eq=False, repr=False)
class EpochTaxProceeds(betterproto.Message):
"""
EpochTaxProceeds represents the tax amount collected at the current epoch
"""
tax_proceeds: List["___cosmos_base_v1_beta1__.Coin"] = betterproto.message_field(1)
@dataclass(eq=False, repr=False)
class EpochInitialIssuance(betterproto.Message):
"""
EpochInitialIssuance represents initial issuance of the currrent epoch
"""
issuance: List["___cosmos_base_v1_beta1__.Coin"] = betterproto.message_field(1)
@dataclass(eq=False, repr=False)
class QueryTaxRateRequest(betterproto.Message):
"""
QueryTaxRateRequest is the request type for the Query/TaxRate RPC method.
"""
pass
@dataclass(eq=False, repr=False)
class QueryTaxRateResponse(betterproto.Message):
"""
QueryTaxRateResponse is response type for the Query/TaxRate RPC method.
"""
tax_rate: str = betterproto.string_field(1)
@dataclass(eq=False, repr=False)
class QueryTaxCapRequest(betterproto.Message):
"""
QueryTaxCapRequest is the request type for the Query/TaxCap RPC method.
"""
# denom defines the denomination to query for.
denom: str = betterproto.string_field(1)
@dataclass(eq=False, repr=False)
class QueryTaxCapResponse(betterproto.Message):
"""
QueryTaxCapResponse is response type for the Query/TaxCap RPC method.
"""
tax_cap: str = betterproto.string_field(1)
@dataclass(eq=False, repr=False)
class QueryTaxCapsRequest(betterproto.Message):
"""
QueryTaxCapsRequest is the request type for the Query/TaxCaps RPC method.
"""
pass
@dataclass(eq=False, repr=False)
class QueryTaxCapsResponseItem(betterproto.Message):
"""
QueryTaxCapsResponseItem is response item type for the Query/TaxCaps RPC
method.
"""
denom: str = betterproto.string_field(1)
tax_cap: str = betterproto.string_field(2)
@dataclass(eq=False, repr=False)
class QueryTaxCapsResponse(betterproto.Message):
"""
QueryTaxCapsResponse is response type for the Query/TaxCaps RPC method.
"""
tax_caps: List["QueryTaxCapsResponseItem"] = betterproto.message_field(1)
@dataclass(eq=False, repr=False)
class QueryRewardWeightRequest(betterproto.Message):
"""
QueryRewardWeightRequest is the request type for the Query/RewardWeight RPC
method.
"""
pass
@dataclass(eq=False, repr=False)
class QueryRewardWeightResponse(betterproto.Message):
"""
QueryRewardWeightResponse is response type for the Query/RewardWeight RPC
method.
"""
reward_weight: str = betterproto.string_field(1)
@dataclass(eq=False, repr=False)
class QueryTaxProceedsRequest(betterproto.Message):
"""
QueryTaxProceedsRequest is the request type for the Query/TaxProceeds RPC
method.
"""
pass
@dataclass(eq=False, repr=False)
class QueryTaxProceedsResponse(betterproto.Message):
"""
QueryTaxProceedsResponse is response type for the Query/TaxProceeds RPC
method.
"""
tax_proceeds: List["___cosmos_base_v1_beta1__.Coin"] = betterproto.message_field(1)
@dataclass(eq=False, repr=False)
class QuerySeigniorageProceedsRequest(betterproto.Message):
"""
QuerySeigniorageProceedsRequest is the request type for the
Query/SeigniorageProceeds RPC method.
"""
pass
@dataclass(eq=False, repr=False)
class QuerySeigniorageProceedsResponse(betterproto.Message):
"""
QuerySeigniorageProceedsResponse is response type for the
Query/SeigniorageProceeds RPC method.
"""
seigniorage_proceeds: str = betterproto.string_field(1)
@dataclass(eq=False, repr=False)
class QueryIndicatorsRequest(betterproto.Message):
"""
QueryIndicatorsRequest is the request type for the Query/Indicators RPC
method.
"""
pass
@dataclass(eq=False, repr=False)
class QueryIndicatorsResponse(betterproto.Message):
"""
QueryIndicatorsResponse is response type for the Query/Indicators RPC
method.
"""
trl_year: str = betterproto.string_field(1)
trl_month: str = betterproto.string_field(2)
@dataclass(eq=False, repr=False)
class QueryParamsRequest(betterproto.Message):
"""
QueryParamsRequest is the request type for the Query/Params RPC method.
"""
pass
@dataclass(eq=False, repr=False)
class QueryParamsResponse(betterproto.Message):
"""
QueryParamsResponse is the response type for the Query/Params RPC method.
"""
# params defines the parameters of the module.
params: "Params" = betterproto.message_field(1)
@dataclass(eq=False, repr=False)
class GenesisState(betterproto.Message):
"""GenesisState defines the oracle module's genesis state."""
params: "Params" = betterproto.message_field(1)
tax_rate: str = betterproto.string_field(2)
reward_weight: str = betterproto.string_field(3)
tax_caps: List["TaxCap"] = betterproto.message_field(4)
tax_proceeds: List["___cosmos_base_v1_beta1__.Coin"] = betterproto.message_field(5)
epoch_initial_issuance: List[
"___cosmos_base_v1_beta1__.Coin"
] = betterproto.message_field(6)
epoch_states: List["EpochState"] = betterproto.message_field(7)
@dataclass(eq=False, repr=False)
class TaxCap(betterproto.Message):
"""TaxCap is the max tax amount can be charged for the given denom"""
denom: str = betterproto.string_field(1)
tax_cap: str = betterproto.string_field(2)
@dataclass(eq=False, repr=False)
class EpochState(betterproto.Message):
"""EpochState is the record for each epoch state"""
epoch: int = betterproto.uint64_field(1)
tax_reward: str = betterproto.string_field(2)
seigniorage_reward: str = betterproto.string_field(3)
total_staked_luna: str = betterproto.string_field(4)
class QueryStub(betterproto.ServiceStub):
async def tax_rate(self) -> "QueryTaxRateResponse":
request = QueryTaxRateRequest()
return await self._unary_unary(
"/terra.treasury.v1beta1.Query/TaxRate", request, QueryTaxRateResponse
)
async def tax_cap(self, *, denom: str = "") -> "QueryTaxCapResponse":
request = QueryTaxCapRequest()
request.denom = denom
return await self._unary_unary(
"/terra.treasury.v1beta1.Query/TaxCap", request, QueryTaxCapResponse
)
async def tax_caps(self) -> "QueryTaxCapsResponse":
request = QueryTaxCapsRequest()
return await self._unary_unary(
"/terra.treasury.v1beta1.Query/TaxCaps", request, QueryTaxCapsResponse
)
async def reward_weight(self) -> "QueryRewardWeightResponse":
request = QueryRewardWeightRequest()
return await self._unary_unary(
"/terra.treasury.v1beta1.Query/RewardWeight",
request,
QueryRewardWeightResponse,
)
async def seigniorage_proceeds(self) -> "QuerySeigniorageProceedsResponse":
request = QuerySeigniorageProceedsRequest()
return await self._unary_unary(
"/terra.treasury.v1beta1.Query/SeigniorageProceeds",
request,
QuerySeigniorageProceedsResponse,
)
async def tax_proceeds(self) -> "QueryTaxProceedsResponse":
request = QueryTaxProceedsRequest()
return await self._unary_unary(
"/terra.treasury.v1beta1.Query/TaxProceeds",
request,
QueryTaxProceedsResponse,
)
async def indicators(self) -> "QueryIndicatorsResponse":
request = QueryIndicatorsRequest()
return await self._unary_unary(
"/terra.treasury.v1beta1.Query/Indicators", request, QueryIndicatorsResponse
)
async def params(self) -> "QueryParamsResponse":
request = QueryParamsRequest()
return await self._unary_unary(
"/terra.treasury.v1beta1.Query/Params", request, QueryParamsResponse
)
class QueryBase(ServiceBase):
async def tax_rate(self) -> "QueryTaxRateResponse":
raise grpclib.GRPCError(grpclib.const.Status.UNIMPLEMENTED)
async def tax_cap(self, denom: str) -> "QueryTaxCapResponse":
raise grpclib.GRPCError(grpclib.const.Status.UNIMPLEMENTED)
async def tax_caps(self) -> "QueryTaxCapsResponse":
raise grpclib.GRPCError(grpclib.const.Status.UNIMPLEMENTED)
async def reward_weight(self) -> "QueryRewardWeightResponse":
raise grpclib.GRPCError(grpclib.const.Status.UNIMPLEMENTED)
async def seigniorage_proceeds(self) -> "QuerySeigniorageProceedsResponse":
raise grpclib.GRPCError(grpclib.const.Status.UNIMPLEMENTED)
async def tax_proceeds(self) -> "QueryTaxProceedsResponse":
raise grpclib.GRPCError(grpclib.const.Status.UNIMPLEMENTED)
async def indicators(self) -> "QueryIndicatorsResponse":
raise grpclib.GRPCError(grpclib.const.Status.UNIMPLEMENTED)
async def params(self) -> "QueryParamsResponse":
raise grpclib.GRPCError(grpclib.const.Status.UNIMPLEMENTED)
async def __rpc_tax_rate(self, stream: grpclib.server.Stream) -> None:
request = await stream.recv_message()
request_kwargs = {}
response = await self.tax_rate(**request_kwargs)
await stream.send_message(response)
async def __rpc_tax_cap(self, stream: grpclib.server.Stream) -> None:
request = await stream.recv_message()
request_kwargs = {
"denom": request.denom,
}
response = await self.tax_cap(**request_kwargs)
await stream.send_message(response)
async def __rpc_tax_caps(self, stream: grpclib.server.Stream) -> None:
request = await stream.recv_message()
request_kwargs = {}
response = await self.tax_caps(**request_kwargs)
await stream.send_message(response)
async def __rpc_reward_weight(self, stream: grpclib.server.Stream) -> None:
request = await stream.recv_message()
request_kwargs = {}
response = await self.reward_weight(**request_kwargs)
await stream.send_message(response)
async def __rpc_seigniorage_proceeds(self, stream: grpclib.server.Stream) -> None:
request = await stream.recv_message()
request_kwargs = {}
response = await self.seigniorage_proceeds(**request_kwargs)
await stream.send_message(response)
async def __rpc_tax_proceeds(self, stream: grpclib.server.Stream) -> None:
request = await stream.recv_message()
request_kwargs = {}
response = await self.tax_proceeds(**request_kwargs)
await stream.send_message(response)
async def __rpc_indicators(self, stream: grpclib.server.Stream) -> None:
request = await stream.recv_message()
request_kwargs = {}
response = await self.indicators(**request_kwargs)
await stream.send_message(response)
async def __rpc_params(self, stream: grpclib.server.Stream) -> None:
request = await stream.recv_message()
request_kwargs = {}
response = await self.params(**request_kwargs)
await stream.send_message(response)
def __mapping__(self) -> Dict[str, grpclib.const.Handler]:
return {
"/terra.treasury.v1beta1.Query/TaxRate": grpclib.const.Handler(
self.__rpc_tax_rate,
grpclib.const.Cardinality.UNARY_UNARY,
QueryTaxRateRequest,
QueryTaxRateResponse,
),
"/terra.treasury.v1beta1.Query/TaxCap": grpclib.const.Handler(
self.__rpc_tax_cap,
grpclib.const.Cardinality.UNARY_UNARY,
QueryTaxCapRequest,
QueryTaxCapResponse,
),
"/terra.treasury.v1beta1.Query/TaxCaps": grpclib.const.Handler(
self.__rpc_tax_caps,
grpclib.const.Cardinality.UNARY_UNARY,
QueryTaxCapsRequest,
QueryTaxCapsResponse,
),
"/terra.treasury.v1beta1.Query/RewardWeight": grpclib.const.Handler(
self.__rpc_reward_weight,
grpclib.const.Cardinality.UNARY_UNARY,
QueryRewardWeightRequest,
QueryRewardWeightResponse,
),
"/terra.treasury.v1beta1.Query/SeigniorageProceeds": grpclib.const.Handler(
self.__rpc_seigniorage_proceeds,
grpclib.const.Cardinality.UNARY_UNARY,
QuerySeigniorageProceedsRequest,
QuerySeigniorageProceedsResponse,
),
"/terra.treasury.v1beta1.Query/TaxProceeds": grpclib.const.Handler(
self.__rpc_tax_proceeds,
grpclib.const.Cardinality.UNARY_UNARY,
QueryTaxProceedsRequest,
QueryTaxProceedsResponse,
),
"/terra.treasury.v1beta1.Query/Indicators": grpclib.const.Handler(
self.__rpc_indicators,
grpclib.const.Cardinality.UNARY_UNARY,
QueryIndicatorsRequest,
QueryIndicatorsResponse,
),
"/terra.treasury.v1beta1.Query/Params": grpclib.const.Handler(
self.__rpc_params,
grpclib.const.Cardinality.UNARY_UNARY,
QueryParamsRequest,
QueryParamsResponse,
),
}
from ....cosmos.base import v1beta1 as ___cosmos_base_v1_beta1__
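
# Usage sketch (illustrative; host and port are assumptions for a local node):
#
#   from grpclib.client import Channel
#
#   async def show_tax_rate():
#       channel = Channel(host="localhost", port=9090)
#       resp = await QueryStub(channel).tax_rate()  # -> QueryTaxRateResponse
#       print(resp.tax_rate)
#       channel.close()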

# ---- pyzayo/svcinv_mixin.py  (repo: jeremyschulman/pyzayo, license: Apache-2.0) ----
"""
This file contains the Zayo Service Inventory related API endpoints.
References
----------
Docs
http://54.149.224.75/wp-content/uploads/2020/02/Service-Inventory-Wiki.pdf
"""
# -----------------------------------------------------------------------------
# System Imports
# -----------------------------------------------------------------------------
from typing import List, Dict
# -----------------------------------------------------------------------------
# Public Imports
# -----------------------------------------------------------------------------
from first import first
# -----------------------------------------------------------------------------
# Private Imports
# -----------------------------------------------------------------------------
from pyzayo.base_client import ZayoClientBase
from pyzayo.consts import ZAYO_SM_ROUTE_SERVICES
# -----------------------------------------------------------------------------
# Module Exports
# -----------------------------------------------------------------------------
__all__ = ["ZayoServiceInventoryMixin"]
class ZayoServiceInventoryMixin(ZayoClientBase):
""" Supports the Service-Inventory API endpoints """
def get_services(self, **params) -> List[Dict]:
"""
        Retrieve the service-inventory records matching the given `params`
        criteria, or all records when no criteria are given.
Other Parameters
----------------
key-value options as defined by the "existing-services" API endpoint.
The `filter` parameter, for example, supports the following
API record fields:
* status
* productGroup
* productCatagory
* product
* term
"""
return self.paginate_records(url=ZAYO_SM_ROUTE_SERVICES, **params)
def get_service_by_circuit_id(self, by_circuit_id: str, **params):
"""
        Locate the service associated with the given circuit ID.
Parameters
----------
by_circuit_id: str
The circuit ID string value
Other Parameters
----------------
Same as get_services() method, see for details.
Returns
-------
The service record in dict form from API.
"""
return first(
rec
for rec in self.paginate_records(url=ZAYO_SM_ROUTE_SERVICES, **params)
if rec["components"][0]["circuitId"] == by_circuit_id
)
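
# Usage sketch (illustrative; assumes a concrete client class that mixes this
# in and is configured with valid Zayo API credentials; the circuit ID is a
# placeholder):
#
#   services = client.get_services()
#   record = client.get_service_by_circuit_id("<circuit-id>")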

# ---- algoplex/api/order.py  (repo: dmitryaleks/algo-plex, license: BSD-2-Clause) ----
class Order():
def __init__(self, side, pair, size, price, stop_loss_price, id):
self.side = side
self.pair = pair
self.size = size
self.price = price
self.stop_loss_price = stop_loss_price
self.id = id
self.fills = []
def define_id(self, id):
self.id = id
def add_fill(self, execution):
self.fills.append(execution)
def get_fill_price(self):
        numerator = sum(map(lambda f: f.size * f.price, self.fills))
        fill_price = numerator / self.get_filled_quantity()
return fill_price
def get_filled_quantity(self):
return sum(map(lambda f: f.size, self.fills))
def get_fills(self):
return self.fills
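
# Usage sketch (illustrative; fill objects are assumed to expose .size and
# .price, which is how get_fill_price() consumes them):
#
#   order = Order(side="BUY", pair="BTC/USD", size=2.0, price=100.0,
#                 stop_loss_price=95.0, id=None)
#   order.add_fill(fill_a)   # e.g. size=1.0 at price=99.0
#   order.add_fill(fill_b)   # e.g. size=1.0 at price=101.0
#   order.get_fill_price()   # -> 100.0, the volume-weighted average fill price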

# ---- mapping/sandbox/graphslam/graphslam_pipeline.py  (repo: sameeptandon/sail-car-log, license: BSD-2-Clause) ----
import os
from os.path import join as pjoin
from subprocess import check_call
from ruffus import files, follows, pipeline_run, pipeline_printout, pipeline_printout_graph, jobs_limit
from graphslam_config import GRAPHSLAM_PATH,\
GRAPHSLAM_MATCH_DIR, GRAPHSLAM_OPT_POS_DIR, GRAPHSLAM_ALIGN_DIR,\
MATCHES_FILE, GPS_FILES, RSS_LIST, GRAPHSLAM_OUT_DIR, GRAPHSLAM_DIRS,\
GRAPHSLAM_MAPS_DIR, GRAPHSLAM_VIDEOS_DIR, GRAPHSLAM_EVAL_DIR
from pipeline_config import NUM_CPUS, SAIL_CAR_LOG_PATH
from pipeline_utils import print_and_call, touchf
@files(None, MATCHES_FILE)
def match_traces(dummy, output_file):
cmd = 'python %s/match_traces.py %s' % (GRAPHSLAM_PATH, GRAPHSLAM_MATCH_DIR)
print_and_call(cmd)
# NOTE Have to rerun this after match_traces is run
@follows('match_traces')
@files(zip(GPS_FILES, [pjoin(GRAPHSLAM_OPT_POS_DIR, '--'.join(rss) + '.npz') for rss in RSS_LIST], GPS_FILES))
def solve_qps(gps_src_file, output_file, gps_tgt_file):
cmd = 'python %s/solve_qp.py %s %s %s' % (GRAPHSLAM_PATH,
gps_src_file, gps_tgt_file, output_file)
print_and_call(cmd)
@follows('solve_qps')
@jobs_limit(1)
@files(MATCHES_FILE, '%s/run_pipeline_sentinel' % GRAPHSLAM_OUT_DIR)
def run_pipelines(dummy, sentinel):
for route, segment, split in RSS_LIST:
cmd = 'export SCL_ROUTE=%s; export SCL_SEGMENT=%s; export SCL_SPLIT=%s; python %s/mapping/pipeline/pipeline.py run estimate_normals' % (route, segment, split, SAIL_CAR_LOG_PATH)
print_and_call(cmd)
touchf('%s/run_pipeline_sentinel' % GRAPHSLAM_OUT_DIR)
def clean_pipelines():
for route, segment, split in RSS_LIST:
cmd = 'export SCL_ROUTE=%s; export SCL_SEGMENT=%s; export SCL_SPLIT=%s; python %s/mapping/pipeline/pipeline.py clean' % (route, segment, split, SAIL_CAR_LOG_PATH)
print_and_call(cmd)
@follows('run_pipelines')
@files('%s/run_pipeline_sentinel' % GRAPHSLAM_OUT_DIR, '%s/chunk_and_align_sentinel' % GRAPHSLAM_ALIGN_DIR)
def chunk_and_align(dummy, sentinel):
cmd = 'python %s/chunk_and_align.py' % GRAPHSLAM_PATH
print_and_call(cmd)
touchf('%s/chunk_and_align_sentinel' % GRAPHSLAM_ALIGN_DIR)
@follows('chunk_and_align')
@files('%s/chunk_and_align_sentinel' % GRAPHSLAM_ALIGN_DIR,
'%s/export_maps_sentinel' % GRAPHSLAM_MAPS_DIR)
def export_maps(dummy, sentinel):
cmd = 'python scripts/export_maps.py'
print_and_call(cmd)
touchf('%s/export_maps_sentinel' % GRAPHSLAM_MAPS_DIR)
@follows('export_maps')
@files('%s/export_maps_sentinel' % GRAPHSLAM_MAPS_DIR,
'%s/align_maps_sentinel' % GRAPHSLAM_MAPS_DIR)
def align_maps(dummy, sentinel):
cmd = 'python scripts/align_maps_all.py'
print_and_call(cmd)
touchf('%s/align_maps_sentinel' % GRAPHSLAM_MAPS_DIR)
@follows('align_maps')
@files('%s/align_maps_sentinel' % GRAPHSLAM_MAPS_DIR,
'%s/eval_maps_sentinel' % GRAPHSLAM_EVAL_DIR)
def eval_maps(dummy, sentinel):
cmd = 'python scripts/eval_maps.py'
print_and_call(cmd)
touchf('%s/eval_maps_sentinel' % GRAPHSLAM_EVAL_DIR)
@follows('eval_maps')
@files('%s/align_maps_sentinel' % GRAPHSLAM_MAPS_DIR,
'%s/generate_videos_sentinel' % GRAPHSLAM_VIDEOS_DIR)
def generate_videos(dummy, sentinel):
cmd = 'python scripts/generate_videos.py'
print_and_call(cmd)
touchf('%s/generate_videos_sentinel' % GRAPHSLAM_VIDEOS_DIR)
def clean():
for d in GRAPHSLAM_DIRS:
print 'deleting %s' % d
if os.path.exists(d):
check_call('rm -r %s' % d, shell=True)
if __name__ == '__main__':
import sys
if len(sys.argv) < 2:
print 'Usage: python graphslam_pipeline.py print,graph,run (task1,task2)'
sys.exit(1)
TORUN = [
]
if len(sys.argv) == 3:
TORUN = sys.argv[2].split(',')
CMDS = sys.argv[1].split(',')
tasks = {
'print': lambda: pipeline_printout(sys.stdout, TORUN,
forcedtorun_tasks=[], verbose=5),
'graph': lambda: pipeline_printout_graph('graph.jpg', 'jpg', TORUN,
forcedtorun_tasks=[],
no_key_legend=False),
'run': lambda: pipeline_run(TORUN,
multiprocess=NUM_CPUS,
one_second_per_job=False),
'force': lambda: pipeline_run([],
forcedtorun_tasks=TORUN,
multiprocess=NUM_CPUS,
one_second_per_job=False),
'printf': lambda: pipeline_printout(sys.stdout,
[],
forcedtorun_tasks=TORUN,
verbose=2),
'clean': clean,
'clean_pipelines': clean_pipelines
}
for key in tasks:
if key in CMDS:
tasks[key]()
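
# CLI sketch (mirrors the usage string above; task names are the function
# names defined in this file):
#
#   python graphslam_pipeline.py print match_traces,solve_qps
#   python graphslam_pipeline.py run run_pipelines,chunk_and_align
#   python graphslam_pipeline.py clean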

# ---- script/run_scribus.py  (repo: csneofreak/public-domain-season-songs, license: Unlicense) ----
#!/usr/bin/python
# -*- coding: utf-8 -*-
import time
import json
import os
import math
import scribus
import simplebin
import inspect
from collections import defaultdict
PWD = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
def pwd(path):
return os.path.join(PWD, path);
DATA_FILE = pwd("data.json")
CACHE_FILE = pwd("cache.json")
MANUEL_PROCESSING_FILE = pwd("manual_processing.json")
FILES = pwd("lily_output/")
FAST = False # use this to debug
SPACING_SONGS = 10
EFFECTIVE_PAGE_HEIGHT = 255 + SPACING_SONGS
SPACING_HEADLINE_SONG = 18
SPACING_SONG_TEXT = 5
PAGE_NUM_HEIGHT = 5
BASELINE_GRID = 5
def init():
scribus.openDoc(pwd("init.sla"))
scribus.saveDocAs("/tmp/{}.sla".format(time.time()))
scribus.setUnit(scribus.UNIT_MM)
def front_matter():
# load pages from other document
if not os.path.exists(pwd("front_matter.sla")):
print "not front matter, file not found!"
return
scribus.openDoc(pwd("front_matter.sla"))
pages = scribus.pageCount()
scribus.closeDoc()
scribus.importPage(
pwd("front_matter.sla"), # filename
tuple(range(1, pages+1)), # range of pages to import
1, # insert (1) or replace(0)
0, # where to insert
)
scribus.gotoPage(pages+1)
def fit_height(textbox):
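    # Strategy: grow the frame geometrically until the text no longer
    # overflows, then halve the step each iteration to converge on the
    # smallest non-overflowing height (a binary search on the frame height).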
# come to a state that the text box does not overflow:
width, height = scribus.getSize(textbox)
to_add = height + 1
while scribus.textOverflows(textbox):
scribus.sizeObject(width, height + to_add, textbox)
to_add = to_add * 2
# reduce height
step = height/2
overflows = False
counter = 0
while step > 0.05 or overflows:
counter += 1
width, old_height = scribus.getSize(textbox)
if scribus.textOverflows(textbox):
scribus.sizeObject(width, old_height + step, textbox)
else:
scribus.sizeObject(width, old_height - step, textbox)
step = step * 0.5
overflows = scribus.textOverflows(textbox)
def new_page():
scribus.newPage(-1)
scribus.gotoPage(scribus.pageCount())
add_page_number()
def add_page_number():
page_num = scribus.pageCount()
page_width, page_height, margin_top, margin_left, margin_right, margin_bottom = page_size_margin(page_num)
textbox = scribus.createText(margin_left, page_height-margin_bottom, page_width-margin_left-margin_right, PAGE_NUM_HEIGHT)
scribus.setStyle("pagenumber_{}".format(get_style_suffix()), textbox)
scribus.insertText(str(page_num), 0, textbox)
scribus.deselectAll()
def page_size_margin(page_num):
size = scribus.getPageNSize(page_num)
margin = scribus.getPageNMargins(page_num)
return size + margin
def get_style_suffix():
page_num = scribus.pageCount()
style_suffix = "r" # is this really the right way? is there no shortcut provided by scribus?
if page_num % 2 == 0:
style_suffix = "l"
return style_suffix
def load_song(data, offset, settings):
page_num = scribus.pageCount()
page_width, page_height, margin_top, margin_left, margin_right, margin_bottom = page_size_margin(page_num)
start_point = margin_top + offset
new_width = page_width - margin_left - margin_right
if not FAST:
scribus.placeEPS(os.path.join(FILES, data["filename"]), 0, 0)
eps = scribus.getSelectedObject()
eps_width, eps_height = scribus.getSize(eps)
#scribus.scaleGroup(new_width/eps_width) # slow on scribus 1.4; does something else on scribus 1.5
scribus.sizeObject(eps_width*0.86, eps_height*0.86, eps)
scribus.moveObjectAbs(margin_left, start_point+SPACING_HEADLINE_SONG, eps)
eps_width, eps_height = scribus.getSize(eps)
else:
eps_height = 0
scribus.deselectAll()
textbox = scribus.createText(margin_left, start_point, new_width, 20)
style_suffix = get_style_suffix()
if data["composer"]:
scribus.deselectAll()
scribus.insertText(u"{}\n".format(data["composer"]), 0, textbox)
scribus.selectText(0, 1, textbox)
scribus.setStyle("subline_{}".format(style_suffix), textbox)
if data["poet"]:
scribus.deselectAll()
scribus.insertText(u"{}\n".format(data["poet"]), 0, textbox)
scribus.selectText(0, 1, textbox)
scribus.setStyle("subline_{}".format(style_suffix), textbox)
scribus.deselectAll()
scribus.insertText(u"{}\n".format(data["name"]), 0, textbox)
scribus.selectText(0, 1, textbox)
scribus.setStyle("headline_{}".format(style_suffix), textbox)
text = data["text"]
text = [t.strip() for t in text if t.strip() != ""]
# TODO: exit if text == []
textbox = scribus.createText(margin_left, start_point + eps_height + SPACING_HEADLINE_SONG + SPACING_SONG_TEXT, new_width, 50)
scribus.setStyle("text", textbox)
# let's see how many digits are in there:
num_verses = len([l for l in text if l.isdigit()])
num_chars = 0
num_line_total = len(text)
num_line_actually = 0
no_new_line = False
verse_counter = 0
text_columns_height = 0 # TODO: should be None
for num_line, line in enumerate(text):
        if line.strip() == "":
continue
num_line_actually += 1
if line.isdigit():
print "#", verse_counter, math.ceil(num_verses * 0.5), num_verses, data["filename"]
if verse_counter == math.ceil(num_verses*0.5): # this is the first verse that should be in the new column, so let's see what's the height
print text_columns_height, num_line_actually
text_columns_height = BASELINE_GRID * (num_line_actually -1)
first_char = "\n"
if num_line == 0:
first_char = ""
no_new_line = True
line = u"{}{}.\t".format(first_char, line)
scribus.insertText(line, -1, textbox)
scribus.deselectAll()
scribus.selectText(num_chars, len(line), textbox)
#scribus.setStyle("num", textbox) # no character styles available
#scribus.setFontSize(5, textbox) # TODO: testing only # BUG?
scribus.setFont("Linux Libertine O Bold", textbox)
num_chars += len(line)
verse_counter += 1
else:
if no_new_line:
first_char = ""
else:
first_char = chr(28)
no_new_line = False
line = u"{}{}".format(first_char, line)
scribus.insertText(line, -1, textbox)
#scribus.deselectAll()
#scribus.selectText(num_chars, len(line), textbox)
#scribus.setStyle("text", textbox)
num_chars += len(line)
scribus.setColumnGap(5, textbox)
columns = settings.get("columns", 2)
scribus.setColumns(columns, textbox)
if columns != 2:
fit_height(textbox)
else:
scribus.sizeObject(new_width, text_columns_height, textbox)
l, t = scribus.getPosition(textbox)
scribus.moveObjectAbs(l, round(t/BASELINE_GRID)*BASELINE_GRID, textbox)
if scribus.textOverflows(textbox):
fit_height(textbox) # there are some cases,..
text_width, text_height = scribus.getSize(textbox)
text_left, text_top = scribus.getPosition(textbox)
return text_top + text_height - start_point + SPACING_SONGS, page_num
def create_toc(data):
if not scribus.objectExists("TOC"):
new_page()
page_width, page_height, margin_top, margin_left, margin_right, margin_bottom = page_size_margin(1)
toc = scribus.createText(margin_left, margin_top, page_width-margin_right-margin_left, page_height-margin_top-margin_bottom)
scribus.setNewName("TOC", toc)
scribus.insertText("provide a textframe with name 'TOC' in front_matter.sla and i will not create the toc at the end of the document", 0, "TOC")
text = "\n".join(("{}\t{}".format(title, pagenum) for (title, pagenum) in data))
scribus.insertText(text, -1, "TOC")
def add_songs(all_songs, songs_double_page, manual_processing, songs_data, cache):
# let's get the best sorting
songs_combined = simplebin.best_fit(all_songs, EFFECTIVE_PAGE_HEIGHT)
# sorting the songs alphabetic
songs_sorted = sorted(songs_combined, key=lambda x: x[0])
# make sure the double page will be added on the left side
page_num = scribus.pageCount()
for double_page in songs_double_page:
if not double_page in all_songs:
continue
offset = songs_sorted.index([double_page])
songs_sorted.insert(offset+1, None) # add a empty page after the song
if (page_num + offset) % 2 != 0: # song is on right side, empty side on the left side.
songs_sorted.insert(offset, songs_sorted.pop(offset+2)) # move next song before the double page
# TODO: what if double sided song is last song?
for songs in songs_sorted:
current_pos = 0
if songs == None: # we added this for a song that should be set on double page
new_page()
continue
for filename in songs:
if not manual_processing[filename].get("show", True):
continue
data = songs_data[filename]
height, page_num = load_song(data, current_pos, manual_processing[filename])
current_pos += math.ceil(height/BASELINE_GRID) * BASELINE_GRID
cache[filename]["height"] = round(height, 2)
cache[filename]["page"] = page_num
scribus.progressSet(1)
if current_pos != 0:
new_page()
def main():
cache = defaultdict(dict)
try:
with open(CACHE_FILE, "rb") as cache_file:
cache = defaultdict(dict, json.load(cache_file))
except:
pass
with open(DATA_FILE, "rb") as data_file:
songs_data = json.load(data_file)
with open(MANUEL_PROCESSING_FILE, "rb") as manual_file:
manual_processing = defaultdict(dict, json.load(manual_file))
scribus.statusMessage("Running script...")
scribus.progressReset()
scribus.progressTotal(len(songs_data))
init()
front_matter()
add_page_number()
# trying to get the best sorting
# setting all songs to the max height
all_songs = dict(zip(songs_data.keys(), [EFFECTIVE_PAGE_HEIGHT] * len(songs_data)))
# update according to cache
for song_name, data in cache.iteritems():
all_songs[song_name] = min(data.get("height", EFFECTIVE_PAGE_HEIGHT), EFFECTIVE_PAGE_HEIGHT)
# let's see which songs should be set on a double sided page:
songs_double_page = filter(lambda x: manual_processing[x].get("double_page", False), manual_processing)
for double_page in songs_double_page:
all_songs[double_page] = EFFECTIVE_PAGE_HEIGHT # all double page songs should get a whole page despite their height
appendix_filter = lambda a_s, boolean : {k:v for k,v in a_s.iteritems() if manual_processing[k].get("appendix", False) == boolean}
main_songs = appendix_filter(all_songs, False)
add_songs(main_songs, songs_double_page, manual_processing, songs_data, cache)
appendix_songs = appendix_filter(all_songs, True)
add_songs(appendix_songs, songs_double_page, manual_processing, songs_data, cache)
toc = []
for filename in filter(lambda s: manual_processing[s].get("show", True), all_songs.keys()):
toc.append((songs_data[filename]["name"], cache[filename].get("page", "XX")))
toc.sort(key=lambda (x,y): x)
create_toc(toc)
if scribus.haveDoc():
scribus.setRedraw(True)
scribus.statusMessage("")
scribus.progressReset()
with open(CACHE_FILE, "wb") as cache_file:
json.dump(cache, cache_file, indent=2)
if __name__ == "__main__":
main()

# ---- blog/migrations/0041_auto_20190504_0855.py  (repo: akindele214/181hub_2, license: MIT) ----
# Generated by Django 2.1.5 on 2019-05-04 07:55
import blog.formatChecker
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('blog', '0040_auto_20190504_0840'),
]
operations = [
migrations.AlterField(
model_name='videos',
name='video',
field=models.FileField(blank=True, null=True, upload_to='uploads/', validators=[blog.formatChecker.file_size]),
),
]

# ---- research/codec/codec_example.py  (repo: FXTD-ODYSSEY/QBinder, license: MIT) ----
# -*- coding: future_fstrings -*-
import codecs
import pdb
import string
# NOTE https://stackoverflow.com/questions/38777818/how-do-i-properly-create-custom-text-codecs
# prepare map from numbers to letters
_encode_table = {str(number): bytes(letter) for number, letter in enumerate(string.ascii_lowercase)}
# prepare inverse map
_decode_table = {v: k for k, v in _encode_table.items()}
def custom_encode(text):
# example encoder that converts ints to letters
print "custom_encode",text
# see https://docs.python.org/3/library/codecs.html#codecs.Codec.encode
return b''.join(_encode_table[x] for x in text), len(text)
def custom_decode(binary):
# example decoder that converts letters to ints
print "custom_decode",binary
# see https://docs.python.org/3/library/codecs.html#codecs.Codec.decode
return ''.join(_decode_table[x] for x in binary), len(binary)
def custom_search_function(encoding_name):
return codecs.CodecInfo(encode=custom_encode, decode=custom_decode, name='Reasons')
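
# Note: codecs.register() expects a search function that receives the
# requested encoding name; this example returns its CodecInfo regardless of
# the name asked for, so any otherwise-unknown encoding name will resolve to
# the 'Reasons' codec.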
def main():
# register your custom codec
# note that CodecInfo.name is used later
codecs.register(custom_search_function)
binary = 'abcdefg'
# decode letters to numbers
pdb.set_trace()
text = binary.decode('Reasons')
print(text)
# encode numbers to letters
binary2 = text.encode('Reasons')
print(binary2)
# fstring = 'f"hello {text}"'.decode('future-fstrings')
# print fstring
# encode(decode(...)) should be an identity function
assert binary == binary2
if __name__ == '__main__':
    main()

# ---- numba/tests/__init__.py  (repo: mawanda-jun/numba, licenses: BSD-2-Clause, Apache-2.0) ----
from numba import unittest_support as unittest
import gc
from os.path import dirname, join
import multiprocessing
import sys
import time
import warnings
from unittest.suite import TestSuite
from numba.testing import load_testsuite
from numba.testing import ddt # for backward compatibility
try:
import faulthandler
except ImportError:
faulthandler = None
else:
try:
# May fail in IPython Notebook with UnsupportedOperation
faulthandler.enable()
except Exception as e:
msg = "Failed to enable faulthandler due to:\n{err}"
warnings.warn(msg.format(err=e))
def load_tests(loader, tests, pattern):
suite = TestSuite()
suite.addTests(load_testsuite(loader, dirname(__file__)))
# Numba CUDA tests are located in a separate directory:
cuda_dir = join(dirname(dirname(__file__)), 'cuda/tests')
suite.addTests(loader.discover(cuda_dir))
# Numba ROC tests are located in a separate directory
roc_dir = join(dirname(dirname(__file__)), 'roc/tests')
suite.addTests(loader.discover(roc_dir))
return suite

# ---- tfx/orchestration/portable/execution_publish_utils.py  (repo: johnPertoft/tfx, license: Apache-2.0) ----
# Copyright 2020 Google LLC. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Portable library for registering and publishing executions."""
import copy
import os
from typing import List, Mapping, MutableMapping, Optional, Sequence, cast
from absl import logging
from tfx import types
from tfx.orchestration import metadata
from tfx.orchestration.portable.mlmd import execution_lib
from tfx.proto.orchestration import execution_result_pb2
from ml_metadata.proto import metadata_store_pb2
def _check_validity(new_artifact: metadata_store_pb2.Artifact,
original_artifact: types.Artifact,
has_multiple_artifacts: bool) -> None:
"""Check the validity of new artifact against the original artifact."""
if new_artifact.type_id != original_artifact.type_id:
raise RuntimeError('Executor output should not change artifact type.')
if has_multiple_artifacts:
# If there are multiple artifacts in the executor output, their URIs should
# be a direct sub-dir of the system generated URI.
if os.path.dirname(new_artifact.uri) != original_artifact.uri:
raise RuntimeError(
'When there are multiple artifacts to publish, their URIs '
'should be direct sub-directories of the URI of the system generated '
'artifact.')
else:
# If there is only one output artifact, its URI should not be changed
if new_artifact.uri != original_artifact.uri:
# TODO(b/175426744): Data Binder will modify the uri.
logging.warning(
'When there is one artifact to publish, the URI of it should be '
'identical to the URI of system generated artifact.')
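
# Example of the URI contract enforced above: given a system-generated
# artifact at /pipelines/run-1/foo/output (a hypothetical path), two
# executor-produced artifacts are valid at /pipelines/run-1/foo/output/0 and
# /pipelines/run-1/foo/output/1 (direct sub-directories), while a single
# artifact is expected to keep the original URI (a mismatch only logs a
# warning).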
def publish_cached_execution(
metadata_handler: metadata.Metadata,
contexts: Sequence[metadata_store_pb2.Context],
execution_id: int,
output_artifacts: Optional[MutableMapping[str,
Sequence[types.Artifact]]] = None,
) -> None:
"""Marks an existing execution as using cached outputs from a previous execution.
Args:
metadata_handler: A handler to access MLMD.
    contexts: MLMD contexts to associate with the execution.
execution_id: The id of the execution.
output_artifacts: Output artifacts of the execution. Each artifact will be
linked with the execution through an event with type OUTPUT.
"""
[execution] = metadata_handler.store.get_executions_by_id([execution_id])
execution.last_known_state = metadata_store_pb2.Execution.CACHED
execution_lib.put_execution(
metadata_handler,
execution,
contexts,
input_artifacts=None,
output_artifacts=output_artifacts)
def _set_execution_result_if_not_empty(
executor_output: Optional[execution_result_pb2.ExecutorOutput],
execution: metadata_store_pb2.Execution) -> bool:
"""Sets execution result as a custom property of the execution."""
if executor_output and (executor_output.execution_result.result_message or
executor_output.execution_result.metadata_details or
executor_output.execution_result.code):
# TODO(b/190001754): Consider either switching to base64 encoding or using
# a proto descriptor pool to circumvent TypeError which may be raised when
# converting embedded `Any` protos.
try:
execution_lib.set_execution_result(executor_output.execution_result,
execution)
except TypeError:
logging.exception(
'Skipped setting execution_result as custom property of the '
'execution due to error')
def publish_succeeded_execution(
metadata_handler: metadata.Metadata,
execution_id: int,
contexts: Sequence[metadata_store_pb2.Context],
output_artifacts: Optional[MutableMapping[str,
Sequence[types.Artifact]]] = None,
executor_output: Optional[execution_result_pb2.ExecutorOutput] = None
) -> Optional[MutableMapping[str, List[types.Artifact]]]:
"""Marks an existing execution as success.
  Also publishes the output artifacts produced by the execution. This method
  will also merge the executor-produced info into the system-generated output
  artifacts. The `last_known_state` of the execution will be changed to
  `COMPLETE` and the output artifacts will be marked as `LIVE`.
Args:
metadata_handler: A handler to access MLMD.
execution_id: The id of the execution to mark successful.
    contexts: MLMD contexts to associate with the execution.
output_artifacts: Output artifacts skeleton of the execution, generated by
the system. Each artifact will be linked with the execution through an
event with type OUTPUT.
executor_output: Executor outputs. `executor_output.output_artifacts` will
be used to update system-generated output artifacts passed in through
      `output_artifacts` arg. There are three constraints on the update: 1. The
      keys in `executor_output.output_artifacts` are expected to be a subset
      of the system-generated output artifacts dict. 2. An update to a certain
      key should contain all the artifacts under that key. 3. An update to an
      artifact should not change the type of the artifact.
Returns:
    The possibly updated output_artifacts. Note that only outputs whose keys
    are in executor_output will be updated; the others are left untouched, so
    the result may be only partially updated.
Raises:
    RuntimeError: if the executor output for an output channel is partial.
"""
output_artifacts = copy.deepcopy(output_artifacts) or {}
output_artifacts = cast(MutableMapping[str, List[types.Artifact]],
output_artifacts)
if executor_output:
if not set(executor_output.output_artifacts.keys()).issubset(
output_artifacts.keys()):
raise RuntimeError(
'Executor output %s contains more keys than output skeleton %s.' %
(executor_output, output_artifacts))
for key, artifact_list in output_artifacts.items():
if key not in executor_output.output_artifacts:
continue
updated_artifact_list = executor_output.output_artifacts[key].artifacts
# We assume the original output dict must include at least one output
# artifact and all artifacts in the list share the same type.
original_artifact = artifact_list[0]
# Update the artifact list with what's in the executor output
artifact_list.clear()
# TODO(b/175426744): revisit this:
    # 1) Whether multiple outputs are needed or not after TFX components
    #    are upgraded.
    # 2) If multiple outputs are needed and are a common practice, should we
    #    use the driver instead to create the list of output artifacts,
    #    rather than letting the executor create them.
for proto_artifact in updated_artifact_list:
_check_validity(proto_artifact, original_artifact,
len(updated_artifact_list) > 1)
python_artifact = types.Artifact(original_artifact.artifact_type)
python_artifact.set_mlmd_artifact(proto_artifact)
artifact_list.append(python_artifact)
# Marks output artifacts as LIVE.
for artifact_list in output_artifacts.values():
for artifact in artifact_list:
artifact.mlmd_artifact.state = metadata_store_pb2.Artifact.LIVE
[execution] = metadata_handler.store.get_executions_by_id([execution_id])
execution.last_known_state = metadata_store_pb2.Execution.COMPLETE
_set_execution_result_if_not_empty(executor_output, execution)
execution_lib.put_execution(
metadata_handler, execution, contexts, output_artifacts=output_artifacts)
return output_artifacts
def publish_failed_execution(
metadata_handler: metadata.Metadata,
contexts: Sequence[metadata_store_pb2.Context],
execution_id: int,
executor_output: Optional[execution_result_pb2.ExecutorOutput] = None
) -> None:
"""Marks an existing execution as failed.
Args:
metadata_handler: A handler to access MLMD.
    contexts: MLMD contexts to associate with the execution.
execution_id: The id of the execution.
executor_output: The output of executor.
"""
[execution] = metadata_handler.store.get_executions_by_id([execution_id])
execution.last_known_state = metadata_store_pb2.Execution.FAILED
_set_execution_result_if_not_empty(executor_output, execution)
execution_lib.put_execution(metadata_handler, execution, contexts)
def publish_internal_execution(
metadata_handler: metadata.Metadata,
contexts: Sequence[metadata_store_pb2.Context],
execution_id: int,
output_artifacts: Optional[MutableMapping[str,
Sequence[types.Artifact]]] = None
) -> None:
"""Marks an exeisting execution as as success and links its output to an INTERNAL_OUTPUT event.
Args:
metadata_handler: A handler to access MLMD.
    contexts: MLMD contexts to associate with the execution.
execution_id: The id of the execution.
output_artifacts: Output artifacts of the execution. Each artifact will be
linked with the execution through an event with type INTERNAL_OUTPUT.
"""
[execution] = metadata_handler.store.get_executions_by_id([execution_id])
execution.last_known_state = metadata_store_pb2.Execution.COMPLETE
execution_lib.put_execution(
metadata_handler,
execution,
contexts,
output_artifacts=output_artifacts,
output_event_type=metadata_store_pb2.Event.INTERNAL_OUTPUT)
def register_execution(
metadata_handler: metadata.Metadata,
execution_type: metadata_store_pb2.ExecutionType,
contexts: Sequence[metadata_store_pb2.Context],
input_artifacts: Optional[MutableMapping[str,
Sequence[types.Artifact]]] = None,
exec_properties: Optional[Mapping[str, types.Property]] = None,
) -> metadata_store_pb2.Execution:
"""Registers a new execution in MLMD.
Along with the execution:
- the input artifacts will be linked to the execution.
- the contexts will be linked to both the execution and its input artifacts.
Args:
metadata_handler: A handler to access MLMD.
execution_type: The type of the execution.
    contexts: MLMD contexts to associate with the execution.
input_artifacts: Input artifacts of the execution. Each artifact will be
linked with the execution through an event.
exec_properties: Execution properties. Will be attached to the execution.
Returns:
An MLMD execution that is registered in MLMD, with id populated.
"""
execution = execution_lib.prepare_execution(
metadata_handler, execution_type, metadata_store_pb2.Execution.RUNNING,
exec_properties)
return execution_lib.put_execution(
metadata_handler, execution, contexts, input_artifacts=input_artifacts)
| 43.734615 | 97 | 0.735731 | 1,459 | 11,371 | 5.559973 | 0.202193 | 0.068417 | 0.033531 | 0.021573 | 0.406065 | 0.359098 | 0.325567 | 0.312007 | 0.273422 | 0.253575 | 0 | 0.007182 | 0.204028 | 11,371 | 259 | 98 | 43.903475 | 0.889073 | 0.421775 | 0 | 0.341085 | 0 | 0 | 0.068609 | 0 | 0 | 0 | 0 | 0.011583 | 0 | 1 | 0.054264 | false | 0 | 0.069767 | 0 | 0.139535 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
735799fe024faf41da595642a3d8bdb3ba238a42 | 1,693 | py | Python | tools/SDKTool/src/ui/dialog/progress_bar_dialog.py | Passer-D/GameAISDK | a089330a30b7bfe1f6442258a12d8c0086240606 | [
"Apache-2.0"
] | 1,210 | 2020-08-18T07:57:36.000Z | 2022-03-31T15:06:05.000Z | tools/SDKTool/src/ui/dialog/progress_bar_dialog.py | guokaiSama/GameAISDK | a089330a30b7bfe1f6442258a12d8c0086240606 | [
"Apache-2.0"
] | 37 | 2020-08-24T02:48:38.000Z | 2022-01-30T06:41:52.000Z | tools/SDKTool/src/ui/dialog/progress_bar_dialog.py | guokaiSama/GameAISDK | a089330a30b7bfe1f6442258a12d8c0086240606 | [
"Apache-2.0"
] | 275 | 2020-08-18T08:35:16.000Z | 2022-03-31T15:06:07.000Z | # -*- coding: utf-8 -*-
"""
Tencent is pleased to support the open source community by making GameAISDK available.
This source code file is licensed under the GNU General Public License Version 3.
For full details, please refer to the file "LICENSE.txt" which is provided as part of this source code package.
Copyright (C) 2020 THL A29 Limited, a Tencent company. All rights reserved.
"""
from PyQt5.QtCore import Qt
from PyQt5.QtWidgets import QWidget, QProgressDialog
class ProgressBarDialog(QWidget):
def __init__(self, title='', label='', minValue=0, maxValue=100, parent=None):
super(ProgressBarDialog, self).__init__(parent)
self.process_bar = QProgressDialog(self)
self.set_bar_window_title(title)
self.set_label_text(label)
self.set_min_value(minValue)
self.set_max_value(maxValue)
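        # Window-modal: the dialog blocks input to its parent while it is open.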
self.process_bar.setWindowModality(Qt.WindowModal)
self.setGeometry(800, 300, 580, 570)
self.process_bar.canceled.connect(self.close_bar)
def set_bar_window_title(self, text):
self.process_bar.setWindowTitle(text)
self.setWindowTitle(text)
def set_label_text(self, text):
self.process_bar.setLabelText(text)
def set_min_value(self, minValue):
self.process_bar.setMinimum(minValue)
def set_max_value(self, maxvalue):
self.process_bar.setMaximum(maxvalue)
def set_value(self, value):
self.process_bar.setValue(value)
def close_bar(self):
self.process_bar.close()
def reset_bar(self):
self.process_bar = None
def show(self):
self.process_bar.show()
def is_valid(self):
return bool(self.process_bar)
| 31.351852 | 111 | 0.705848 | 228 | 1,693 | 5.052632 | 0.438596 | 0.114583 | 0.145833 | 0.046875 | 0.074653 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019231 | 0.201418 | 1,693 | 53 | 112 | 31.943396 | 0.83284 | 0.225635 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.3125 | false | 0 | 0.0625 | 0.03125 | 0.4375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
7358c21d44c9b2e4044c283c45da55bafa2452ee | 2,469 | py | Python | 9/main.py | misterwilliam/advent-of-code | b8ddcbc5efcf62c7e5e4244339029783ac9f90b6 | [
"MIT"
] | null | null | null | 9/main.py | misterwilliam/advent-of-code | b8ddcbc5efcf62c7e5e4244339029783ac9f90b6 | [
"MIT"
] | null | null | null | 9/main.py | misterwilliam/advent-of-code | b8ddcbc5efcf62c7e5e4244339029783ac9f90b6 | [
"MIT"
] | null | null | null | import itertools
import unittest
data = """Faerun to Norrath = 129
Faerun to Tristram = 58
Faerun to AlphaCentauri = 13
Faerun to Arbre = 24
Faerun to Snowdin = 60
Faerun to Tambi = 71
Faerun to Straylight = 67
Norrath to Tristram = 142
Norrath to AlphaCentauri = 15
Norrath to Arbre = 135
Norrath to Snowdin = 75
Norrath to Tambi = 82
Norrath to Straylight = 54
Tristram to AlphaCentauri = 118
Tristram to Arbre = 122
Tristram to Snowdin = 103
Tristram to Tambi = 49
Tristram to Straylight = 97
AlphaCentauri to Arbre = 116
AlphaCentauri to Snowdin = 12
AlphaCentauri to Tambi = 18
AlphaCentauri to Straylight = 91
Arbre to Snowdin = 129
Arbre to Tambi = 53
Arbre to Straylight = 40
Snowdin to Tambi = 15
Snowdin to Straylight = 99
Tambi to Straylight = 70"""
def GenPaths(cities):
for path in _GenPathsRec([], list(cities)):
yield path
def _GenPathsRec(stack, cities):
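    # Recursive permutation generator: at each step, branch on every remaining city.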
if len(cities) == 0:
yield stack
else:
for i in xrange(len(cities)):
for path in _GenPathsRec(stack + [cities[i]], cities[:i] + cities[i+1:]):
yield path
def CalcDistance(start, dest, distancePairs):
return distancePairs[frozenset((start, dest))]
def CalcPathLength(path, distance_pairs):
length = 0
for i in xrange(len(path) - 1):
length += CalcDistance(path[i], path[i+1], distance_pairs)
return length
def LoadData(data):
distance_pairs = {}
cities = set()
for line in data.split("\n"):
start, _, dest, _, distance = line.split()
cities.add(start)
cities.add(dest)
distance_pairs[frozenset([start, dest])] = int(distance)
return cities, distance_pairs
# ANSWER --------------------------------
cities, distance_pairs = LoadData(data)
longestLength = -1
for path in GenPaths(cities):
length = CalcPathLength(path, distance_pairs)
longestLength = max(longestLength, length)
print longestLength
# TESTS ---------------------------------
class GenPathsTests(unittest.TestCase):
def test_GenPaths(self):
self.assertEqual(
[path for path in GenPaths("abcd")],
[list(permutation) for permutation in itertools.permutations("abcd")])
class CalcPathLengthTests(unittest.TestCase):
def test_CalcPathLength(self):
distance_pairs = {
frozenset(["a", "b"]): 10,
frozenset(["b", "c"]): 20
}
self.assertEqual(CalcPathLength(["a", "b", "c"], distance_pairs), 30)
if __name__ == "__main__":
unittest.main() | 26.548387 | 85 | 0.665857 | 310 | 2,469 | 5.225806 | 0.309677 | 0.072222 | 0.022222 | 0.018519 | 0.050617 | 0 | 0 | 0 | 0 | 0 | 0 | 0.038716 | 0.204941 | 2,469 | 93 | 86 | 26.548387 | 0.786551 | 0.031997 | 0 | 0.026316 | 0 | 0 | 0.309464 | 0 | 0 | 0 | 0 | 0 | 0.026316 | 0 | null | null | 0 | 0.026316 | null | null | 0.013158 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
73612698a39e054c2b652bdecf1e853efdbc6d55 | 526 | py | Python | src/importer/importer.py | tiefenauer/ip7-python | 512105ba39110ec77d2ea0961dd7c2a42d4ec26d | [
"MIT"
] | null | null | null | src/importer/importer.py | tiefenauer/ip7-python | 512105ba39110ec77d2ea0961dd7c2a42d4ec26d | [
"MIT"
] | null | null | null | src/importer/importer.py | tiefenauer/ip7-python | 512105ba39110ec77d2ea0961dd7c2a42d4ec26d | [
"MIT"
] | null | null | null | import logging
from abc import ABC, abstractmethod
from pony.orm import db_session, commit
log = logging.getLogger(__name__)
class Importer(ABC):
def __init__(self, TargetEntity):
self.TargetEntity = TargetEntity
@db_session
def truncate(self):
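        # Bulk delete removes all rows in a single statement; commit makes it visible immediately.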
log.info('Truncating target tables...')
self.TargetEntity.select().delete(bulk=True)
commit()
log.info('...done!')
@abstractmethod
def __iter__(self):
"""iterate over items to be imported"""
return
| 21.04 | 52 | 0.653992 | 59 | 526 | 5.59322 | 0.627119 | 0.145455 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.237643 | 526 | 24 | 53 | 21.916667 | 0.822943 | 0.062738 | 0 | 0 | 0 | 0 | 0.071869 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1875 | false | 0 | 0.25 | 0 | 0.5625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
73623a0c8d94829ad21399f5bae6f22979a769e7 | 1,562 | py | Python | api/web/apps/auth/views.py | procool/itstructure | 6aa3a43e1a759f5509f130ddf911779645dc89d0 | [
"BSD-2-Clause"
] | null | null | null | api/web/apps/auth/views.py | procool/itstructure | 6aa3a43e1a759f5509f130ddf911779645dc89d0 | [
"BSD-2-Clause"
] | null | null | null | api/web/apps/auth/views.py | procool/itstructure | 6aa3a43e1a759f5509f130ddf911779645dc89d0 | [
"BSD-2-Clause"
] | null | null | null | from flask import url_for
from flaskcbv.view import View
from flaskcbv.conf import settings
from misc.mixins import HelperMixin
from misc.views import JSONView
class authView(JSONView):
def helper(self):
return """Authorizaion handler
Use "login" and "passwd" arguments by GET or POST to get session
"""
def get(self, *args, **kwargs):
return self.post(*args, **kwargs)
def post(self, *args, **kwargs):
try:
username = self.get_argument_smart('username')
passwd = self.get_argument_smart('password')
except Exception as err:
            self.abort_error(errno=-1, error="wrong_params", details="set arguments: 'username', 'password'")
r = settings._BB_CLIENT.login(username, passwd)
answ = r.as_dict
del answ["cmd"]
del answ["token"]
self.abort_error(**answ)
class sessionView(JSONView):
def helper(self):
return """Session check handler
Use "session" argument by GET or POST to check your session
"""
def get(self, *args, **kwargs):
return self.post(*args, **kwargs)
def post(self, *args, **kwargs):
try:
session = self.get_argument_smart('session')
except Exception as err:
self.abort_error(errno=-1, error="wrong_params", details="set argument: 'session'")
r = settings._BB_CLIENT.session(session)
answ = r.as_dict
del answ["cmd"]
del answ["token"]
self.abort_error(**answ)
| 26.474576 | 107 | 0.608195 | 191 | 1,562 | 4.874346 | 0.329843 | 0.064447 | 0.06015 | 0.064447 | 0.498389 | 0.41246 | 0.41246 | 0.41246 | 0.41246 | 0.41246 | 0 | 0.001773 | 0.277849 | 1,562 | 58 | 108 | 26.931034 | 0.823582 | 0 | 0 | 0.55 | 0 | 0 | 0.206675 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.15 | false | 0.1 | 0.125 | 0.1 | 0.425 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
7df75aa4524bb4f5a708857ab0d660fb8ccedfb8 | 603 | py | Python | math/0x04-convolutions_and_pooling/test/2-main.py | cbarros7/holbertonschool-machine_learning | 1edb4c253441f6319b86c9c590d1e7dd3fc32bf4 | [
"MIT"
] | 1 | 2022-03-09T19:12:22.000Z | 2022-03-09T19:12:22.000Z | math/0x04-convolutions_and_pooling/test/2-main.py | cbarros7/holbertonschool-machine_learning | 1edb4c253441f6319b86c9c590d1e7dd3fc32bf4 | [
"MIT"
] | null | null | null | math/0x04-convolutions_and_pooling/test/2-main.py | cbarros7/holbertonschool-machine_learning | 1edb4c253441f6319b86c9c590d1e7dd3fc32bf4 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
import matplotlib.pyplot as plt
import numpy as np
convolve_grayscale_padding = __import__(
'2-convolve_grayscale_padding').convolve_grayscale_padding
if __name__ == '__main__':
dataset = np.load('../../supervised_learning/data/MNIST.npz')
images = dataset['X_train']
print(images.shape)
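    # 3x3 kernel that highlights vertical edges (Prewitt-style).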
kernel = np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]])
images_conv = convolve_grayscale_padding(images, kernel, (2, 4))
print(images_conv.shape)
plt.imshow(images[0], cmap='gray')
plt.show()
plt.imshow(images_conv[0], cmap='gray')
plt.show()
| 27.409091 | 68 | 0.6733 | 84 | 603 | 4.535714 | 0.47619 | 0.178478 | 0.251969 | 0.020997 | 0.107612 | 0.023622 | 0 | 0 | 0 | 0 | 0 | 0.029703 | 0.162521 | 603 | 21 | 69 | 28.714286 | 0.724752 | 0.034826 | 0 | 0.133333 | 0 | 0 | 0.156627 | 0.11704 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0.133333 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
7dfda8cef5923a2a0d78158e8c874838389cfd46 | 3,678 | py | Python | src/oci/dns/models/external_master.py | Manny27nyc/oci-python-sdk | de60b04e07a99826254f7255e992f41772902df7 | [
"Apache-2.0",
"BSD-3-Clause"
] | 249 | 2017-09-11T22:06:05.000Z | 2022-03-04T17:09:29.000Z | src/oci/dns/models/external_master.py | Manny27nyc/oci-python-sdk | de60b04e07a99826254f7255e992f41772902df7 | [
"Apache-2.0",
"BSD-3-Clause"
] | 228 | 2017-09-11T23:07:26.000Z | 2022-03-23T10:58:50.000Z | src/oci/dns/models/external_master.py | Manny27nyc/oci-python-sdk | de60b04e07a99826254f7255e992f41772902df7 | [
"Apache-2.0",
"BSD-3-Clause"
] | 224 | 2017-09-27T07:32:43.000Z | 2022-03-25T16:55:42.000Z | # coding: utf-8
# Copyright (c) 2016, 2021, Oracle and/or its affiliates. All rights reserved.
# This software is dual-licensed to you under the Universal Permissive License (UPL) 1.0 as shown at https://oss.oracle.com/licenses/upl or Apache License 2.0 as shown at http://www.apache.org/licenses/LICENSE-2.0. You may choose either license.
from oci.util import formatted_flat_dict, NONE_SENTINEL, value_allowed_none_or_none_sentinel # noqa: F401
from oci.decorators import init_model_state_from_kwargs
@init_model_state_from_kwargs
class ExternalMaster(object):
"""
An external master name server used as the source of zone data.
"""
def __init__(self, **kwargs):
"""
Initializes a new ExternalMaster object with values from keyword arguments.
The following keyword arguments are supported (corresponding to the getters/setters of this class):
:param address:
The value to assign to the address property of this ExternalMaster.
:type address: str
:param port:
The value to assign to the port property of this ExternalMaster.
:type port: int
:param tsig_key_id:
The value to assign to the tsig_key_id property of this ExternalMaster.
:type tsig_key_id: str
"""
self.swagger_types = {
'address': 'str',
'port': 'int',
'tsig_key_id': 'str'
}
self.attribute_map = {
'address': 'address',
'port': 'port',
'tsig_key_id': 'tsigKeyId'
}
self._address = None
self._port = None
self._tsig_key_id = None
@property
def address(self):
"""
**[Required]** Gets the address of this ExternalMaster.
The server's IP address (IPv4 or IPv6).
:return: The address of this ExternalMaster.
:rtype: str
"""
return self._address
@address.setter
def address(self, address):
"""
Sets the address of this ExternalMaster.
The server's IP address (IPv4 or IPv6).
:param address: The address of this ExternalMaster.
:type: str
"""
self._address = address
@property
def port(self):
"""
Gets the port of this ExternalMaster.
The server's port. Port value must be a value of 53, otherwise omit
the port value.
:return: The port of this ExternalMaster.
:rtype: int
"""
return self._port
@port.setter
def port(self, port):
"""
Sets the port of this ExternalMaster.
The server's port. Port value must be a value of 53, otherwise omit
the port value.
:param port: The port of this ExternalMaster.
:type: int
"""
self._port = port
@property
def tsig_key_id(self):
"""
Gets the tsig_key_id of this ExternalMaster.
The OCID of the TSIG key.
:return: The tsig_key_id of this ExternalMaster.
:rtype: str
"""
return self._tsig_key_id
@tsig_key_id.setter
def tsig_key_id(self, tsig_key_id):
"""
Sets the tsig_key_id of this ExternalMaster.
The OCID of the TSIG key.
:param tsig_key_id: The tsig_key_id of this ExternalMaster.
:type: str
"""
self._tsig_key_id = tsig_key_id
def __repr__(self):
return formatted_flat_dict(self)
def __eq__(self, other):
if other is None:
return False
return self.__dict__ == other.__dict__
def __ne__(self, other):
return not self == other
| 27.244444 | 245 | 0.609027 | 474 | 3,678 | 4.537975 | 0.274262 | 0.065086 | 0.075314 | 0.066946 | 0.452348 | 0.308229 | 0.261274 | 0.18689 | 0.18689 | 0.18689 | 0 | 0.010289 | 0.312942 | 3,678 | 134 | 246 | 27.447761 | 0.840918 | 0.503535 | 0 | 0.068182 | 0 | 0 | 0.051957 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.227273 | false | 0 | 0.045455 | 0.045455 | 0.454545 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b4024d84d4513279dde8eeb7b78e3491e9770d6e | 1,038 | py | Python | app/api/v1/models/user_model.py | munniomer/Send-IT-Api-v1 | 17041c987638c7e47c7c2ebed29bf7e2b5156bed | [
"CNRI-Python",
"OML"
] | null | null | null | app/api/v1/models/user_model.py | munniomer/Send-IT-Api-v1 | 17041c987638c7e47c7c2ebed29bf7e2b5156bed | [
"CNRI-Python",
"OML"
] | null | null | null | app/api/v1/models/user_model.py | munniomer/Send-IT-Api-v1 | 17041c987638c7e47c7c2ebed29bf7e2b5156bed | [
"CNRI-Python",
"OML"
] | 1 | 2019-02-05T07:44:19.000Z | 2019-02-05T07:44:19.000Z | users = []
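# Module-level list used as a simple in-memory user store shared by all UserModel instances.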
class UserModel(object):
"""Class user models."""
def __init__(self):
self.db = users
def add_user(self, fname, lname, email, phone, password, confirm_password, city):
""" Method for saving user to the dictionary """
payload = {
"userId": len(self.db)+1,
"fname": fname,
"lname": lname,
"email": email,
"phone": phone,
"password": password,
"confirm_password": confirm_password,
"city": city,
}
self.db.append(payload)
return self.db
def check_email(self, email):
"""Method for checking if user email exist"""
user = [user for user in users if user['email'] == email]
if user:
return True
return False
def check_user(self, userId):
"""Method for checking if user exist"""
user = [user for user in users if user['userId'] == userId]
if user:
return True
return False
| 26.615385 | 85 | 0.531792 | 116 | 1,038 | 4.672414 | 0.318966 | 0.066421 | 0.127306 | 0.099631 | 0.306273 | 0.221402 | 0.121771 | 0.121771 | 0.121771 | 0 | 0 | 0.001493 | 0.354528 | 1,038 | 38 | 86 | 27.315789 | 0.807463 | 0.129094 | 0 | 0.222222 | 0 | 0 | 0.07378 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.148148 | false | 0.111111 | 0 | 0 | 0.37037 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
b403104a45ede1110a9c5cca95878c43993fc086 | 433 | py | Python | drip/migrations/0002_querysetrule_rule_type.py | RentFreeMedia/django-drip-campaigns | a71e5d3a3f242c04a6f7f921b85aa01daff467f8 | [
"MIT"
] | 46 | 2020-07-23T17:47:33.000Z | 2021-11-25T16:57:35.000Z | drip/migrations/0002_querysetrule_rule_type.py | RentFreeMedia/django-drip-campaigns | a71e5d3a3f242c04a6f7f921b85aa01daff467f8 | [
"MIT"
] | 54 | 2020-06-19T17:57:42.000Z | 2021-09-22T19:34:48.000Z | drip/migrations/0002_querysetrule_rule_type.py | kaozdl/django-drip | a71e5d3a3f242c04a6f7f921b85aa01daff467f8 | [
"MIT"
] | 19 | 2020-08-30T05:29:13.000Z | 2022-02-08T20:27:17.000Z | # Generated by Django 3.0.7 on 2020-11-25 13:13
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('drip', '0001_initial'),
]
operations = [
migrations.AddField(
model_name='querysetrule',
name='rule_type',
field=models.CharField(choices=[('or', 'Or'), ('and', 'And')], default='and', max_length=3),
),
]
| 22.789474 | 104 | 0.577367 | 48 | 433 | 5.125 | 0.770833 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.063091 | 0.267898 | 433 | 18 | 105 | 24.055556 | 0.712934 | 0.103926 | 0 | 0 | 1 | 0 | 0.129534 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b4073a213da55b416141036502c3d25e2d22ed63 | 3,552 | py | Python | pingpongskill/pingpongskill.py | Garvys/PingPongSkill | 71749a34772326dd83121bb0ab6fad52b7d8d694 | [
"MIT"
] | 1 | 2017-09-22T13:30:20.000Z | 2017-09-22T13:30:20.000Z | pingpongskill/pingpongskill.py | Garvys/PingPongSkill | 71749a34772326dd83121bb0ab6fad52b7d8d694 | [
"MIT"
] | null | null | null | pingpongskill/pingpongskill.py | Garvys/PingPongSkill | 71749a34772326dd83121bb0ab6fad52b7d8d694 | [
"MIT"
] | null | null | null | # -*-: coding utf-8 -*-
""" Skeleton Snips skill. """
import re
import json
import os
import datetime
from text2num import text2num
from collections import defaultdict
FORMAT = '%Y.%m.%dT%H:%M:%S'
class PingPongSkill(object):
""" Skeleton Snips skill. """
def __init__(self):
pass
def handle_loser(self):
db = JsonDB()
perfs = db.compute_perfs()
if len(perfs) == 0:
print "No match registred"
return
loser = sorted(perfs.iteritems(), key=lambda x: x[1])[0][0]
print "The one who lost the most matches is {}".format(loser)
def handle_winner(self):
db = JsonDB()
perfs = db.compute_perfs()
if len(perfs) == 0:
print "No match registred"
return
        winner = sorted(perfs.iteritems(), key=lambda x: -x[1])[0][0]
        print "The one who won the most matches is {}".format(winner)
def handle_terminate_game(self, winner, loser, score):
print "*** {} {} {}".format(winner, loser, score)
        try:
            score = parse_core(score)
        except ValueError, err:
            print err
            return
db = JsonDB()
timestamp = datetime.datetime.now().strftime(FORMAT)
db.add(winner, loser, score[0], score[1], timestamp)
print "I added the match {} versus {}: score: {}".format(winner,
loser,
score)
regex = re.compile('([\w\s]+)to([\w\s]+)')
def parse_core(score):
match = regex.search(score)
if not match or len(match.groups()) != 2:
raise ValueError("{} is an incorrect score".format(score))
score_1 = text2num(match.groups()[0].strip())
score_2 = text2num(match.groups()[1].strip())
if score_1 != 11 and score_2 != 11:
raise ValueError(
"{} is an incorrect score: one of the player needs to have "
"11".format(
score))
return sorted([score_1, score_2], reverse=True)
class JsonDB(object):
path = 'ping_pong_db.json'
def __init__(self):
if not os.path.exists(self.path):
self._results = []
else:
with open(self.path, 'r') as f:
results = json.load(f)
self._results = results
def add(self, player_1, player_2, score_player_1, score_player_2,
datetime_str):
self._results += [
(datetime_str, player_1, player_2, score_player_1, score_player_2)]
self.save_results()
def save_results(self):
with open(self.path, 'w') as f:
json.dump(self._results, f)
def compute_perfs(self):
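        # Win ratio per player: wins / (wins + losses).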
player_to_win = defaultdict(int)
player_to_lose = defaultdict(int)
for _, win, lose, _, _ in self._results:
player_to_win[win] += 1
player_to_lose[lose] += 1
player_to_proportion = {}
for player in set(player_to_win.keys() + player_to_lose.keys()):
proportion = float(player_to_win[player]) / (
player_to_win[player] + player_to_lose[player])
player_to_proportion[player] = proportion
return player_to_proportion
if __name__ == '__main__':
scores = [
'eleven to two',
'twenty to eleven'
]
for score in scores:
print parse_core(score)
PingPongSkill().handle_loser()
PingPongSkill().handle_terminate_game('thib', 'alex', 'eleven to two')
PingPongSkill().handle_loser()
| 30.62069 | 79 | 0.566441 | 434 | 3,552 | 4.437788 | 0.288018 | 0.049844 | 0.028557 | 0.017653 | 0.263759 | 0.263759 | 0.207684 | 0.207684 | 0.207684 | 0.207684 | 0 | 0.016327 | 0.310248 | 3,552 | 115 | 80 | 30.886957 | 0.769796 | 0.005912 | 0 | 0.188889 | 0 | 0 | 0.105187 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.011111 | 0.066667 | null | null | 0.088889 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b40aad26fdc784cc5dfaf249f1c167e4160e4887 | 2,279 | py | Python | Exemple.py | LVWolff/Python_Lesson_2 | ece186f988c94a1aaa1656a1e6e1093c3d5b6251 | [
"MIT"
] | null | null | null | Exemple.py | LVWolff/Python_Lesson_2 | ece186f988c94a1aaa1656a1e6e1093c3d5b6251 | [
"MIT"
] | null | null | null | Exemple.py | LVWolff/Python_Lesson_2 | ece186f988c94a1aaa1656a1e6e1093c3d5b6251 | [
"MIT"
] | null | null | null | #Задачи на циклы и оператор условия------
#----------------------------------------
'''
Task 1
Print five lines of zeros in a loop, with each line numbered.
'''
for i in range(1, 6):
print(i, '0000000000000000000000000000000000000000000')
'''
Task 2
The user enters 10 digits in a loop. Count how many 5s the user entered.
'''
count = 0
for i in range(10):
    user_data = int(input('Enter a number: '))
if user_data == 5:
count += 1
print(count)
'''
Task 3
Find the sum of the series of numbers from 1 to 100 and print the result.
'''
sum = 0
for i in range(1, 101):
sum += i
print(sum)
'''
Task 4
Find the product of the series of numbers from 1 to 10 and print the result.
'''
proiz = 1
for i in range(2, 11):
proiz *= i
print(proiz)
'''
Task 5
Print each digit of the number on its own line.
'''
integer_number = 123456
start_del = len(str(integer_number)) - 1
delitel = 10 ** start_del
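# delitel is now the place value of the leading digit (100000 for 123456), so integer division peels digits from the left.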
#print(integer_number % delitel, integer_number // delitel)
while integer_number > 0:
print(int(integer_number // delitel))
integer_number = integer_number % delitel
delitel /= 10
'''
Task 6
Find the sum of the digits of a number.
'''
integer_number = 123456
sum = 0
while integer_number > 0:
sum += integer_number % 10
integer_number = integer_number // 10
print(sum)
'''
Task 7
Find the product of the digits of a number.
'''
integer_number = 123456
proiz = 1
while integer_number > 0:
proiz *= integer_number % 10
integer_number = integer_number // 10
print(proiz)
'''
Task 8
Answer the question: does the number contain the digit 5?
'''
integer_number = 125254
while integer_number > 0:
if integer_number % 10 == 5:
print('Yes')
break
integer_number = integer_number // 10
else:
print('No')
'''
Task 9
Find the largest digit in the number.
'''
integer_number = 125278954
max_num = integer_number % 10
while integer_number > 0:
max_num = max(max_num, integer_number % 10)
integer_number = integer_number // 10
print(max_num)
'''
Task 10
Count the occurrences of the digit 5 in the number.
'''
integer_number = 125278954
count_num = 0
while integer_number > 0:
if integer_number % 10 == 5:
count_num += 1
integer_number = integer_number // 10
print(count_num)
| 18.087302 | 92 | 0.67749 | 321 | 2,279 | 4.669782 | 0.302181 | 0.294863 | 0.110073 | 0.076051 | 0.424283 | 0.167445 | 0.14543 | 0.14543 | 0.14543 | 0 | 0 | 0.089906 | 0.204476 | 2,279 | 125 | 93 | 18.232 | 0.7369 | 0.105309 | 0 | 0.464286 | 0 | 0 | 0.043994 | 0.030028 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.196429 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b40c71ed0a4ab0b122f61556dae6f792302c5678 | 776 | py | Python | lepiota/lepiota/urls.py | sgelias/lepiota | 4b30aa25ac5308229f6d41f1720e1af02557826e | [
"MIT"
] | null | null | null | lepiota/lepiota/urls.py | sgelias/lepiota | 4b30aa25ac5308229f6d41f1720e1af02557826e | [
"MIT"
] | null | null | null | lepiota/lepiota/urls.py | sgelias/lepiota | 4b30aa25ac5308229f6d41f1720e1af02557826e | [
"MIT"
] | null | null | null | from django.conf import settings
from django.conf.urls.static import static
from django.contrib import admin
from django.urls import path, re_path
from django.conf.urls import include
from django.views.generic import TemplateView, RedirectView
urlpatterns = [
# Administration
path('admin/', admin.site.urls),
# Accounts
path('account/', include('account.urls', namespace='account')),
# Oauth2
path('api/v1/o/', include('oauth.urls', namespace='oauth2_provider')),
# General purpose
path('welcome/', TemplateView.as_view(template_name="welcome.html")),
path('', RedirectView.as_view(url="/welcome/")),
re_path(r'^$', RedirectView.as_view(url="/welcome/")),
] + static(settings.STATIC_URL, document_root=settings.STATIC_ROOT)
| 29.846154 | 74 | 0.716495 | 97 | 776 | 5.628866 | 0.412371 | 0.10989 | 0.076923 | 0.065934 | 0.102564 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004484 | 0.137887 | 776 | 25 | 75 | 31.04 | 0.811659 | 0.059278 | 0 | 0 | 0 | 0 | 0.147586 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.428571 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
b41042e5988e8d27b58649ccaf22e396c4b031cb | 2,800 | py | Python | gitgoggles/utils.py | nowells/git-goggles | 022dc0cd6dfe8f1641ccb33e85ab05309dba7dbf | [
"MIT"
] | 13 | 2015-03-10T08:48:51.000Z | 2019-04-16T09:06:55.000Z | gitgoggles/utils.py | nowells/git-goggles | 022dc0cd6dfe8f1641ccb33e85ab05309dba7dbf | [
"MIT"
] | null | null | null | gitgoggles/utils.py | nowells/git-goggles | 022dc0cd6dfe8f1641ccb33e85ab05309dba7dbf | [
"MIT"
] | 3 | 2016-04-29T05:38:56.000Z | 2020-07-06T13:04:05.000Z | import copy
import subprocess
import sys
import unicodedata
def disable_colored_func(text, *args, **kwargs):
return text
try:
from termcolor import colored as colored_func
except ImportError:
print 'You should run "pip install termcolor" to fully utilize these utilities.'
colored_func = disable_colored_func
def supports_color():
"""
Returns True if the running system's terminal supports color, and False
otherwise.
"""
unsupported_platform = (sys.platform in ('win32', 'Pocket PC'))
# isatty is not always implemented, #6223.
is_a_tty = hasattr(sys.stdout, 'isatty') and sys.stdout.isatty()
if unsupported_platform or not is_a_tty:
return False
return True
if not supports_color():
colored_func = disable_colored_func
class Colored(object):
disabled = False
def __call__(self, *args, **kwargs):
if self.disabled:
return disable_colored_func(*args, **kwargs)
return colored_func(*args, **kwargs)
colored = Colored()
def force_unicode(obj, encoding='utf-8'):
if isinstance(obj, basestring):
if not isinstance(obj, unicode):
obj = unicode(obj, encoding)
# Normalize the unicode data to have characters that in NFKD format would be represented by 2 characters, instead of 1.
obj = unicodedata.normalize('NFKC', obj)
return obj
def force_str(obj, encoding='utf-8'):
if isinstance(obj, basestring):
if not isinstance(obj, str):
obj = obj.encode(encoding)
return obj
def console(obj):
sys.stdout.write(force_str(obj))
class AccumulatorDict(dict):
def __init__(self, default, *args, **kwargs):
self.__default = default
def __getitem__(self, key):
if key not in self:
self[key] = copy.copy(self.__default)
return super(AccumulatorDict, self).__getitem__(key)
def memoize(func):
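    # Cache results on the instance, keyed by a hashable (args, kwargs) tuple.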
def _(self, *args, **kwargs):
if not hasattr(self, '__memoize_cache'):
self.__memoize_cache = AccumulatorDict(AccumulatorDict({}))
key = tuple([ tuple(args), tuple([ tuple([x, y]) for x, y in kwargs.items() ]) ])
if key not in self.__memoize_cache[func]:
self.__memoize_cache[func][key] = func(self, *args, **kwargs)
return self.__memoize_cache[func][key]
return _
def terminal_dimensions():
try:
# This probably does not work on windows, but it should work just about
# everywhere else.
p = subprocess.Popen(['stty', 'size'], stdout=subprocess.PIPE)
(stdout, stderr) = p.communicate(None)
stdout = force_unicode(stdout)
stderr = force_unicode(stderr)
rows, columns = [ int(x) for x in stdout.split() ]
except:
rows, columns = 40, 79
return rows, columns
| 32.183908 | 127 | 0.658571 | 356 | 2,800 | 5.002809 | 0.376404 | 0.04941 | 0.044919 | 0.033689 | 0.139248 | 0.065132 | 0.065132 | 0.065132 | 0.065132 | 0.065132 | 0 | 0.006542 | 0.235714 | 2,800 | 86 | 128 | 32.55814 | 0.825701 | 0.087143 | 0 | 0.123077 | 0 | 0 | 0.052696 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.092308 | null | null | 0.015385 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b4130d04b43c706ebb56a9d6ede2201a268db5d7 | 7,913 | py | Python | tensorflow/contrib/training/python/training/hparam_test.py | DEVESHTARASIA/tensorflow | d3edb8c60ed4fd831d62833ed22f5c23486c561c | [
"Apache-2.0"
] | 384 | 2017-02-21T18:38:04.000Z | 2022-02-22T07:30:25.000Z | tensorflow/contrib/training/python/training/hparam_test.py | DEVESHTARASIA/tensorflow | d3edb8c60ed4fd831d62833ed22f5c23486c561c | [
"Apache-2.0"
] | 15 | 2017-03-01T20:18:43.000Z | 2020-05-07T10:33:51.000Z | tensorflow/contrib/training/python/training/hparam_test.py | DEVESHTARASIA/tensorflow | d3edb8c60ed4fd831d62833ed22f5c23486c561c | [
"Apache-2.0"
] | 81 | 2017-02-21T19:31:19.000Z | 2022-02-22T07:30:24.000Z | # Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for hparam."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import six
from tensorflow.contrib.training.python.training import hparam
from tensorflow.python.platform import test
class HParamsTest(test.TestCase):
def _assertDictEquals(self, d1, d2):
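    # Compare the dicts key by key so a failure reports the offending entry.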
self.assertEqual(len(d1), len(d2))
for k, v in six.iteritems(d1):
self.assertTrue(k in d2, k)
self.assertEquals(v, d2[k], d2[k])
def testEmpty(self):
hparams = hparam.HParams()
self._assertDictEquals({}, hparams.values())
hparams.parse('')
self._assertDictEquals({}, hparams.values())
with self.assertRaisesRegexp(ValueError, 'Unknown hyperparameter'):
hparams.parse('xyz=123')
def testSomeValues(self):
hparams = hparam.HParams(aaa=1, b=2.0, c_c='relu6')
self._assertDictEquals(
{'aaa': 1, 'b': 2.0, 'c_c': 'relu6'}, hparams.values())
expected_str = '[(\'aaa\', 1), (\'b\', 2.0), (\'c_c\', \'relu6\')]'
self.assertEquals(expected_str, str(hparams.__str__()))
self.assertEquals(expected_str, str(hparams))
self.assertEquals(1, hparams.aaa)
self.assertEquals(2.0, hparams.b)
self.assertEquals('relu6', hparams.c_c)
hparams.parse('aaa=12')
self._assertDictEquals(
{'aaa': 12, 'b': 2.0, 'c_c': 'relu6'}, hparams.values())
self.assertEquals(12, hparams.aaa)
self.assertEquals(2.0, hparams.b)
self.assertEquals('relu6', hparams.c_c)
hparams.parse('c_c=relu4,b=-2.0e10')
self._assertDictEquals({'aaa': 12, 'b': -2.0e10, 'c_c': 'relu4'},
hparams.values())
self.assertEquals(12, hparams.aaa)
self.assertEquals(-2.0e10, hparams.b)
self.assertEquals('relu4', hparams.c_c)
hparams.parse('c_c=,b=0,')
self._assertDictEquals({'aaa': 12, 'b': 0, 'c_c': ''}, hparams.values())
self.assertEquals(12, hparams.aaa)
self.assertEquals(0.0, hparams.b)
self.assertEquals('', hparams.c_c)
hparams.parse('c_c=2.3",b=+2,')
self.assertEquals(2.0, hparams.b)
self.assertEquals('2.3"', hparams.c_c)
with self.assertRaisesRegexp(ValueError, 'Unknown hyperparameter'):
hparams.parse('x=123')
with self.assertRaisesRegexp(ValueError, 'Could not parse'):
hparams.parse('aaa=poipoi')
with self.assertRaisesRegexp(ValueError, 'Could not parse'):
hparams.parse('aaa=1.0')
with self.assertRaisesRegexp(ValueError, 'Could not parse'):
hparams.parse('b=12x')
with self.assertRaisesRegexp(ValueError, 'Could not parse'):
hparams.parse('b=relu')
with self.assertRaisesRegexp(ValueError, 'Must not pass a list'):
hparams.parse('aaa=[123]')
self.assertEquals(12, hparams.aaa)
self.assertEquals(2.0, hparams.b)
self.assertEquals('2.3"', hparams.c_c)
# Exports to proto.
hparam_def = hparams.to_proto()
# Imports from proto.
hparams2 = hparam.HParams(hparam_def=hparam_def)
# Verifies that all hparams are restored.
self.assertEquals(12, hparams2.aaa)
self.assertEquals(2.0, hparams2.b)
self.assertEquals('2.3"', hparams2.c_c)
def testBoolParsing(self):
for value in 'true', 'false', 'True', 'False', '1', '0':
for initial in False, True:
hparams = hparam.HParams(use_gpu=initial)
hparams.parse('use_gpu=' + value)
self.assertEqual(hparams.use_gpu, value in ['True', 'true', '1'])
# Exports to proto.
hparam_def = hparams.to_proto()
# Imports from proto.
hparams2 = hparam.HParams(hparam_def=hparam_def)
self.assertEquals(hparams.use_gpu, hparams2.use_gpu)
# Check that hparams2.use_gpu is a bool rather than an int.
# The assertEquals() call above won't catch this, since
# (0 == False) and (1 == True) in Python.
self.assertEquals(bool, type(hparams2.use_gpu))
def testBoolParsingFail(self):
hparams = hparam.HParams(use_gpu=True)
with self.assertRaisesRegexp(ValueError, r'Could not parse.*use_gpu'):
hparams.parse('use_gpu=yep')
def testLists(self):
hparams = hparam.HParams(aaa=[1], b=[2.0, 3.0], c_c=['relu6'])
self._assertDictEquals({'aaa': [1], 'b': [2.0, 3.0], 'c_c': ['relu6']},
hparams.values())
self.assertEquals([1], hparams.aaa)
self.assertEquals([2.0, 3.0], hparams.b)
self.assertEquals(['relu6'], hparams.c_c)
hparams.parse('aaa=[12]')
self.assertEquals([12], hparams.aaa)
hparams.parse('aaa=[12,34,56]')
self.assertEquals([12, 34, 56], hparams.aaa)
hparams.parse('c_c=[relu4,relu12],b=[1.0]')
self.assertEquals(['relu4', 'relu12'], hparams.c_c)
self.assertEquals([1.0], hparams.b)
hparams.parse('c_c=[],aaa=[-34]')
self.assertEquals([-34], hparams.aaa)
self.assertEquals([], hparams.c_c)
hparams.parse('c_c=[_12,3\'4"],aaa=[+3]')
self.assertEquals([3], hparams.aaa)
self.assertEquals(['_12', '3\'4"'], hparams.c_c)
with self.assertRaisesRegexp(ValueError, 'Unknown hyperparameter'):
hparams.parse('x=[123]')
with self.assertRaisesRegexp(ValueError, 'Could not parse'):
hparams.parse('aaa=[poipoi]')
with self.assertRaisesRegexp(ValueError, 'Could not parse'):
hparams.parse('aaa=[1.0]')
with self.assertRaisesRegexp(ValueError, 'Could not parse'):
hparams.parse('b=[12x]')
with self.assertRaisesRegexp(ValueError, 'Could not parse'):
hparams.parse('b=[relu]')
with self.assertRaisesRegexp(ValueError, 'Must pass a list'):
hparams.parse('aaa=123')
# Exports to proto.
hparam_def = hparams.to_proto()
# Imports from proto.
hparams2 = hparam.HParams(hparam_def=hparam_def)
# Verifies that all hparams are restored.
self.assertEquals([3], hparams2.aaa)
self.assertEquals([1.0], hparams2.b)
self.assertEquals(['_12', '3\'4"'], hparams2.c_c)
def testJson(self):
hparams = hparam.HParams(aaa=1, b=2.0, c_c='relu6', d=True)
self._assertDictEquals(
{'aaa': 1, 'b': 2.0, 'c_c': 'relu6', 'd': True}, hparams.values())
self.assertEquals(1, hparams.aaa)
self.assertEquals(2.0, hparams.b)
self.assertEquals('relu6', hparams.c_c)
hparams.parse_json('{"aaa": 12, "b": 3.0, "c_c": "relu4", "d": false}')
self._assertDictEquals(
{'aaa': 12, 'b': 3.0, 'c_c': 'relu4', 'd': False}, hparams.values())
self.assertEquals(12, hparams.aaa)
self.assertEquals(3.0, hparams.b)
self.assertEquals('relu4', hparams.c_c)
json_str = hparams.to_json()
hparams2 = hparam.HParams(aaa=10, b=20.0, c_c='hello', d=False)
hparams2.parse_json(json_str)
self.assertEquals(12, hparams2.aaa)
self.assertEquals(3.0, hparams2.b)
self.assertEquals('relu4', hparams2.c_c)
self.assertEquals(False, hparams2.d)
def testNonProtoFails(self):
with self.assertRaisesRegexp(AssertionError, ''):
hparam.HParams(hparam_def=1)
with self.assertRaisesRegexp(AssertionError, ''):
hparam.HParams(hparam_def=1.0)
with self.assertRaisesRegexp(AssertionError, ''):
hparam.HParams(hparam_def='hello')
with self.assertRaisesRegexp(AssertionError, ''):
hparam.HParams(hparam_def=[1, 2, 3])
if __name__ == '__main__':
test.main()
| 40.372449 | 80 | 0.65841 | 1,052 | 7,913 | 4.85076 | 0.162548 | 0.153635 | 0.091711 | 0.098765 | 0.627474 | 0.576132 | 0.55242 | 0.524201 | 0.47913 | 0.396238 | 0 | 0.037315 | 0.173638 | 7,913 | 195 | 81 | 40.579487 | 0.74308 | 0.129534 | 0 | 0.331126 | 0 | 0.006623 | 0.112715 | 0.003791 | 0 | 0 | 0 | 0 | 0.529801 | 1 | 0.05298 | false | 0.013245 | 0.039735 | 0 | 0.099338 | 0.006623 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b415b852eb1504fe65a58d7db038c31b5386abda | 2,616 | py | Python | thelma/repositories/rdb/view.py | fogathmann/TheLMA | ac330a0005da4fea2f1387da9ff9938611ad1481 | [
"MIT"
] | 1 | 2020-07-12T22:47:58.000Z | 2020-07-12T22:47:58.000Z | thelma/repositories/rdb/view.py | papagr/TheLMA | d2dc7a478ee5d24ccf3cc680888e712d482321d0 | [
"MIT"
] | null | null | null | thelma/repositories/rdb/view.py | papagr/TheLMA | d2dc7a478ee5d24ccf3cc680888e712d482321d0 | [
"MIT"
] | 1 | 2020-07-12T22:40:36.000Z | 2020-07-12T22:40:36.000Z | """
This file is part of the TheLMA (THe Laboratory Management Application) project.
See LICENSE.txt for licensing, CONTRIBUTORS.txt for contributor information.
Utilities to create/drop views.
Based on a recipe published in:
http://www.sqlalchemy.org/trac/wiki/UsageRecipes/Views
"""
from sqlalchemy.sql import table
from sqlalchemy.ext import compiler
from sqlalchemy.schema import DDLElement
__docformat__ = 'reStructuredText en'
__all__ = ['CreateView',
'DropView',
'view_factory',
]
class CreateView(DDLElement):
def __init__(self, name, selectable): # pylint: disable=W0231
self.name = name
self.selectable = selectable
class DropView(DDLElement):
def __init__(self, name): # pylint: disable=W0231
self.name = name
@compiler.compiles(CreateView, 'postgresql')
def create_view_compile_postgresql(element, compiler, **kw): # pylint: disable=W0621,W0613
selection = compiler.sql_compiler.process(element.selectable)
stmt = "CREATE OR REPLACE VIEW %s AS %s" % (element.name, selection)
# FIXME: we should not combine the statement and params here.
# it is a SQLAlchemy bug... report it.
params = {}
for k, v in element.selectable.compile().params.iteritems():
params[k] = ("'%s'" % v) if isinstance(v, basestring) else v
return stmt % params
@compiler.compiles(CreateView, 'sqlite')
def create_view_compile_sqlite(element, compiler, **kw): # pylint: disable=W0621,W0613
# FIXME: duplicate code
# FIXME: it seems that there is a bug in SQLAlchemy and creating views
# this way emits an exception
selection = compiler.sql_compiler.process(element.selectable)
stmt = "CREATE VIEW %s AS %s" % (element.name, selection)
# FIXME: we should not combine the statement and params here.
# it is a SQLAlchemy bug... report it.
params = {}
for k, v in element.selectable.compile().params.iteritems():
params[k] = ("'%s'" % v) if isinstance(v, basestring) else v
return stmt % params
@compiler.compiles(DropView)
def drop_view_compile(element, compiler, **kw): # pylint: disable=W0621,W0613
return "DROP VIEW %s" % (element.name)
def view_factory(name, metadata, selectable):
if not hasattr(metadata, 'views'):
metadata.views = {}
metadata.views[name] = table(name)
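    # Expose each of the selectable's columns on the view's table object.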
for c in selectable.c:
c._make_proxy(metadata.views[name]) # pylint: disable=W0212
CreateView(name, selectable).execute_at('after-create', metadata)
DropView(name).execute_at('before-drop', metadata)
return metadata.views[name]
| 33.974026 | 90 | 0.69419 | 332 | 2,616 | 5.373494 | 0.35241 | 0.043722 | 0.028587 | 0.038677 | 0.44843 | 0.420404 | 0.386771 | 0.319507 | 0.319507 | 0.25 | 0 | 0.017086 | 0.194572 | 2,616 | 76 | 91 | 34.421053 | 0.829616 | 0.292049 | 0 | 0.27907 | 0 | 0 | 0.08952 | 0 | 0 | 0 | 0 | 0.013158 | 0 | 1 | 0.139535 | false | 0 | 0.069767 | 0.023256 | 0.348837 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b415cd56b8b968d2043025ce5a7780e981f5488b | 960 | py | Python | msblog/models.py | designermanjeets/mscreativepixel | 8fefa48296c97fc541bc6d4f9ad8fa7048d0e377 | [
"Apache-2.0"
] | null | null | null | msblog/models.py | designermanjeets/mscreativepixel | 8fefa48296c97fc541bc6d4f9ad8fa7048d0e377 | [
"Apache-2.0"
] | null | null | null | msblog/models.py | designermanjeets/mscreativepixel | 8fefa48296c97fc541bc6d4f9ad8fa7048d0e377 | [
"Apache-2.0"
] | null | null | null | from django.db import models
from datetime import datetime
import string, random
import uuid
# Create your models here.
class HeaderNavs(models.Model):
title = models.CharField(max_length = 50)
url = models.CharField(max_length = 50)
def __str__(self):
return self.title
class Meta:
verbose_name_plural = "HeaderNavs"
class Blogs(models.Model):
title = models.CharField(max_length = 50)
short_description = models.TextField(max_length = 100)
description = models.TextField()
created_at = models.DateTimeField(default=datetime.now, blank=True)
avatar = models.ImageField(upload_to = 'static/img/avatar/', default = 'static/img/avatar_1.jpg')
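    # uuid4 default keeps slugs unique when none is supplied.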
slug = models.CharField(max_length=40, blank=True, default=uuid.uuid4, unique=True)
def __str__(self):
return self.title
class Meta:
verbose_name_plural = "Blogs"
| 28.235294 | 114 | 0.660417 | 114 | 960 | 5.377193 | 0.464912 | 0.073409 | 0.117455 | 0.156607 | 0.34584 | 0.303426 | 0.303426 | 0.303426 | 0.166395 | 0.166395 | 0 | 0.017981 | 0.246875 | 960 | 33 | 115 | 29.090909 | 0.829876 | 0.025 | 0 | 0.363636 | 0 | 0 | 0.059957 | 0.024625 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.181818 | 0.090909 | 0.909091 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
b41a1df236c0501272e47ba309bb8f6eaa3a041a | 4,113 | py | Python | Approxilyzer/gem5/scripts/relyzer/run_gem5_gl.py | cornell-zhang/GLAIVE | 8e29ac621a95a25c19ccfeb5071a9d3595093ef7 | [
"BSD-3-Clause"
] | 10 | 2020-11-21T04:13:33.000Z | 2022-01-03T23:08:09.000Z | Approxilyzer/gem5/scripts/relyzer/run_gem5_gl.py | cornell-zhang/GLAIVE | 8e29ac621a95a25c19ccfeb5071a9d3595093ef7 | [
"BSD-3-Clause"
] | null | null | null | Approxilyzer/gem5/scripts/relyzer/run_gem5_gl.py | cornell-zhang/GLAIVE | 8e29ac621a95a25c19ccfeb5071a9d3595093ef7 | [
"BSD-3-Clause"
] | null | null | null | import os, sys
from argparse import ArgumentParser
from datetime import datetime as dt
from pprint import pprint as pp
import shutil, glob
#from pyfiglet import figlet_format, Figlet
import datetime
'''
python run_gem5_gl.py -a radix -l inst
python run_gem5_gl.py -a radix -l bit
'''
def app(args):
if not args:
return []
else:
return args.split(',')
parser = ArgumentParser()
parser.add_argument('-a', "--apps", help='Target application names seperated by comma', \
dest='targetapp', required=True)
parser.add_argument('-l', "--info_level", help='Target application architecture', \
dest='info_level', default='bit')
args = parser.parse_args()
apps = app(args.targetapp)
level = args.info_level
#num = args.num_progs
src_dir = os.environ.get('GRAPHLEARN')
gem5_dir= os.environ.get('APPROXGEM5') + '/gem5/scripts/relyzer/'
dest_dir = os.environ.get('APPROXGEM5') + '/workloads/x86/apps/'
for app in apps:
app1 = app + '_' + level
os.chdir(gem5_dir)
if level == 'bit':
# cp result from src to dest
gl_src_file = src_dir + 'sdc_output' +'/' + app1 + '_post.txt'
gl_dest_file = dest_dir + app +'/' + app1 + '_post.txt'
cmd = 'cp ' + gl_src_file + ' ' + gl_dest_file
status = os.system(cmd)
if status != 0:
print('cp data in gl failure ' + app1)
exit(-1)
bit_rf_src_file = src_dir + 'sdc_output_ml_bit' +'/' + app1 + '_post_rf.txt'
bit_rf_dest_file = dest_dir + app +'/' + app1 + '_post_rf.txt'
cmd = 'cp ' + bit_rf_src_file + ' ' + bit_rf_dest_file
status = os.system(cmd)
if status != 0:
            print('cp data in rf_bit failure ' + app1)
exit(-1)
bit_mlpc_src_file = src_dir + 'sdc_output_ml_bit' +'/' + app1 + '_post_mlpc.txt'
bit_mlpc_dest_file = dest_dir + app +'/' + app1 + '_post_mlpc.txt'
cmd = 'cp ' + bit_mlpc_src_file + ' ' + bit_mlpc_dest_file
status = os.system(cmd)
if status != 0:
print('cp data in mlpc_bit failure ' + app1)
exit(-1)
#call sdc_comp
print('this is for %s comp_sdc under graph learning ' % app)
cmd = 'python comp_sdc.py ' + app + ' ' + 'x86' + ' ' + 'gl'
status = os.system(cmd)
if status != 0:
print('sdc comp in gl_bit failure ' + app1)
exit(-1)
print('this is for %s comp_sdc under random forest learning ' % app)
cmd = 'python comp_sdc.py ' + app + ' ' + 'x86' + ' ' + 'rf'
status = os.system(cmd)
if status != 0:
print('sdc comp in rf_bit failure ' + app1)
exit(-1)
print('this is for %s comp_sdc under MLP learning ' % app)
cmd = 'python comp_sdc.py ' + app + ' ' + 'x86' + ' ' + 'mlpc'
status = os.system(cmd)
if status != 0:
print('sdc comp in mlpc_bit failure ' + app1)
exit(-1)
# call coverage_comp
log_file = src_dir + 'glog/' + app + '.log'
cmd = 'python sdc_coverage.py ' + app + ' ' + '5' + ' ' + '105' + ' > ' + log_file
status = os.system(cmd)
if status != 0:
print('coverage comp for all methods failure ' + app)
exit(-1)
elif level == 'inst':
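        # Instruction-level mode: copy the random-forest and SVM SDC lists into the workload tree.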
inst_rf_src_file = src_dir + 'sdc_output_classic' +'/' + app1 + '_rf.sdclist'
inst_rf_dest_file = dest_dir + app +'/' + app1 + '_rf.sdclist'
cmd = 'cp ' + inst_rf_src_file + ' ' + inst_rf_dest_file
status = os.system(cmd)
if status != 0:
print('cp data in inst_rf failure ' + app1)
exit(-1)
inst_svm_src_file = src_dir + 'sdc_output_classic' +'/' + app1 + '_svm.sdclist'
inst_svm_dest_file = dest_dir + app +'/' + app1 + '_svm.sdclist'
cmd = 'cp ' + inst_svm_src_file + ' ' + inst_svm_dest_file
status = os.system(cmd)
if status != 0:
print('cp data in inst_svm failure ' + app1)
exit(-1)
| 32.904 | 90 | 0.556042 | 550 | 4,113 | 3.925455 | 0.207273 | 0.032422 | 0.05836 | 0.070866 | 0.493747 | 0.470588 | 0.450208 | 0.400185 | 0.309866 | 0.245021 | 0 | 0.020134 | 0.311695 | 4,113 | 124 | 91 | 33.169355 | 0.742494 | 0.029419 | 0 | 0.313953 | 0 | 0 | 0.242758 | 0.00564 | 0.046512 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.069767 | null | null | 0.151163 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b41c9702fa909cdc15c31981b7aeb56a1df4c9bb | 534 | py | Python | src/commands/__init__.py | lysol/lvlss | ca068de516159be732d2cb8c4752dee4f4ef2e09 | [
"MIT"
] | null | null | null | src/commands/__init__.py | lysol/lvlss | ca068de516159be732d2cb8c4752dee4f4ef2e09 | [
"MIT"
] | null | null | null | src/commands/__init__.py | lysol/lvlss | ca068de516159be732d2cb8c4752dee4f4ef2e09 | [
"MIT"
] | null | null | null | from quit import Quit
from set_name import SetName
from who import Who
from say import Say
from look import Look
from go import Go
from take import Take
from inventory import Inventory
from drop import Drop
from make import Make
from landfill import Landfill
from item_info import ItemInfo
from script import SetScript, GetScript
from image_editing import ImageEditing
all_commands = (Quit, SetName, Who, Say, Look,
                Go, Take, Inventory, Drop, Make, Landfill,
                SetScript, GetScript, ItemInfo, ImageEditing)
| 28.105263 | 50 | 0.773408 | 77 | 534 | 5.311688 | 0.337662 | 0.08802 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.192884 | 534 | 18 | 51 | 29.666667 | 0.948956 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.823529 | 0 | 0.823529 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
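A registry tuple like all_commands is typically used to build a verb-to-class lookup for dispatching player input. The sketch below is hypothetical: the name attribute on each command class is an assumption for illustration and is not confirmed by this file.

# Hypothetical dispatch sketch; `cls.name` is an assumed attribute.
command_table = {cls.name: cls for cls in all_commands}

def dispatch(line):
    # Look up the command class for the first word of the input line.
    verb = line.split(' ', 1)[0]
    cls = command_table.get(verb)
    return cls() if cls is not None else None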
b41e6039b9544ca2bf93ee054b91393cabc444ec | 1,343 | py | Python | Wallpaper change.py | Arbazkhan4712/Wallpaper-Changer-using-Python | a221443bc7e7b5410f06653fa741b9d7af0fe10f | [
"MIT"
] | 4 | 2020-04-17T06:39:23.000Z | 2021-12-25T11:05:16.000Z | Wallpaper change.py | Arbazkhan4712/Wallpaper-Changer-using-Python | a221443bc7e7b5410f06653fa741b9d7af0fe10f | [
"MIT"
] | null | null | null | Wallpaper change.py | Arbazkhan4712/Wallpaper-Changer-using-Python | a221443bc7e7b5410f06653fa741b9d7af0fe10f | [
"MIT"
] | 3 | 2020-04-03T12:36:20.000Z | 2020-06-06T15:12:04.000Z | import ctypes
import os
import time
from pynput.keyboard import Key, Controller
import Bing

def closeTerminal():
    keyboard = Controller()
    keyboard.press(Key.alt)
    keyboard.press(Key.f4)
    keyboard.release(Key.alt)
    keyboard.release(Key.f4)

def changeWallpaper(image_path):
    start = time.time()
    end = time.time()
    while True:
        for dirname, dirnames, filenames in os.walk(image_path):
            for file_name in filenames:
                # re-fetch the Bing wallpaper of the day once more than
                # 6 hours have elapsed since the last fetch
                if (end - start) // 3600 > 6:
                    try:
                        Bing.wallpaper_of_the_day(image_path)
                        start = time.time()
                    except Exception:
                        pass
                if file_name.endswith('.png') or file_name.endswith('.jpg'):
                    # dirname yielded by os.walk already includes image_path
                    image = os.path.join(dirname, file_name)
                    SPI_SETDESKTOPWALLPAPER = 20
                    ctypes.windll.user32.SystemParametersInfoW(SPI_SETDESKTOPWALLPAPER, 0, image, 3)
                    time.sleep(30)
                    end = time.time()

def main():
    closeTerminal()
    # configure own folder
    image_path = r'D:\Wallpapers'
    try:
        os.makedirs(image_path)
    except OSError:
        pass
    try:
        Bing.wallpaper_of_the_day(image_path)
    except Exception:
        pass
    changeWallpaper(image_path)

if __name__ == '__main__':
    main()
| 26.86 | 97 | 0.581534 | 150 | 1,343 | 5.02 | 0.426667 | 0.095618 | 0.042497 | 0.047809 | 0.13413 | 0.087649 | 0.087649 | 0.087649 | 0 | 0 | 0 | 0.016556 | 0.325391 | 1,343 | 49 | 98 | 27.408163 | 0.81457 | 0.014892 | 0 | 0.348837 | 0 | 0 | 0.021936 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.069767 | false | 0.069767 | 0.116279 | 0 | 0.186047 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
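The magic numbers in the SystemParametersInfoW call above are standard Win32 values: 20 is SPI_SETDESKWALLPAPER and the final 3 is SPIF_UPDATEINIFILE | SPIF_SENDWININICHANGE. A named-constant sketch of the same call follows; the wrapper function itself is illustrative, not part of the original file.

import ctypes

SPI_SETDESKWALLPAPER = 20       # Win32 action code for setting the wallpaper
SPIF_UPDATEINIFILE = 0x01       # persist the change in the user profile
SPIF_SENDWININICHANGE = 0x02    # broadcast the change to running applications

def set_wallpaper(image_path):
    # Windows-only; returns nonzero on success, 0 on failure.
    return ctypes.windll.user32.SystemParametersInfoW(
        SPI_SETDESKWALLPAPER, 0, image_path,
        SPIF_UPDATEINIFILE | SPIF_SENDWININICHANGE)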
b41e78f19f2060ee9b4a3efdc51b5e3c612a3ca4 | 968 | py | Python | tests/sensitivity/sf2/sf2_test.py | vic-c137/mpi-boids-simulation | a822f20f5c1cd7cd2a6261a53adeb24e2c0115ec | [
"Apache-2.0"
] | null | null | null | tests/sensitivity/sf2/sf2_test.py | vic-c137/mpi-boids-simulation | a822f20f5c1cd7cd2a6261a53adeb24e2c0115ec | [
"Apache-2.0"
] | null | null | null | tests/sensitivity/sf2/sf2_test.py | vic-c137/mpi-boids-simulation | a822f20f5c1cd7cd2a6261a53adeb24e2c0115ec | [
"Apache-2.0"
] | null | null | null | # Import statements
import subprocess
from os import system

# Variable declarations
np = "10"
cexe = "./Boids"
nboids = "50"
nloops = "500"
k = "7"
maxv = "10"
acc = "1.25"
width = "1000"
height = "1000"
sf1 = "1"
sf2 = "32"
min = "50"  # note: shadows the built-in min()
sf3 = "8"
sf4 = "10"
dataPath = "./data/"
jexe = "BoidModelTest"
bdata = "boid_data.boid"

# Test calls
collection = [0.125, 0.25, 0.5, 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768, 65536, 131072, 1048576]
for i in collection:
    print("Running test %s" % str(i))
    boidData = "run" + str(i) + ".boid"
    gif = "run" + str(i) + ".gif"
    sf2 = str(i)
    subprocess.call("mpirun -np " + np + " " + cexe + " " + nboids + " " + nloops + " " + k + " " + maxv + " " + acc + " " + width + " " + height + " " + sf1 + " " + sf2 + " " + min + " " + sf3 + " " + sf4 + " > " + dataPath + boidData, shell=True)
    subprocess.call("java " + jexe + " " + gif + " " + boidData, shell=True)
system('gnuplot ./data/boid_script.gp') | 31.225806 | 220 | 0.558884 | 134 | 968 | 4.022388 | 0.58209 | 0.029685 | 0.025974 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.146277 | 0.223141 | 968 | 31 | 221 | 31.225806 | 0.570479 | 0.051653 | 0 | 0 | 0 | 0 | 0.185311 | 0.023729 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.071429 | null | null | 0.035714 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
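Every parameter above is spliced into a single shell string, so any value containing spaces would break the quoting. As an alternative sketch (not the original code), the same mpirun invocation can be built as an argument list, with stdout= replacing the '>' redirection:

import subprocess

# Illustrative alternative to the shell-string call inside the loop above.
args = ["mpirun", "-np", np, cexe, nboids, nloops, k, maxv, acc,
        width, height, sf1, sf2, min, sf3, sf4]
with open(dataPath + boidData, "w") as out:
    subprocess.call(args, stdout=out)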
b41e7a6675758027f59252fdd90ad0a28c111058 | 976 | py | Python | flask_start/flask_start/public/email.py | kostekci/flask_start | fa279fc8907aff9868e2596f4ed9c4d9428d2f75 | [
"MIT"
] | null | null | null | flask_start/flask_start/public/email.py | kostekci/flask_start | fa279fc8907aff9868e2596f4ed9c4d9428d2f75 | [
"MIT"
] | 95 | 2021-09-13T21:23:12.000Z | 2022-03-31T21:22:32.000Z | flask_start/flask_start/public/email.py | kostekci/flask_start | fa279fc8907aff9868e2596f4ed9c4d9428d2f75 | [
"MIT"
] | null | null | null | from flask_mail import Message
from flask import render_template
from flask_start.extensions import mail

'''
from threading import Thread

def send_async_email(app, msg):
    with app.app_context():
        mail.send(msg)
'''

def send_email(subject, sender, recipients, text_body, html_body):
    msg = Message(subject, sender=sender, recipients=recipients)
    msg.body = text_body
    msg.html = html_body
    mail.send(msg)
    # Thread(target=send_async_email, args=(app, msg)).start()

def send_password_reset_email(user):
    token = user.get_reset_password_token()
    send_email('Reset Your Password',
               sender='admin@test.test',
               recipients=[user.email],
               text_body=render_template('public/reset_password_mail.txt',
                                         user=user, token=token),
               html_body=render_template('public/reset_password_mail.html',
                                         user=user, token=token))
| 32.533333 | 75 | 0.646516 | 120 | 976 | 5.025 | 0.308333 | 0.044776 | 0.046434 | 0.079602 | 0.135987 | 0.135987 | 0.135987 | 0 | 0 | 0 | 0 | 0 | 0.254098 | 976 | 29 | 76 | 33.655172 | 0.828297 | 0.057377 | 0 | 0 | 0 | 0 | 0.118899 | 0.076345 | 0 | 0 | 0 | 0 | 0 | 1 | 0.117647 | false | 0.294118 | 0.176471 | 0 | 0.294118 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
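A typical caller of send_password_reset_email is a password-reset request view. The sketch below is hypothetical: the route, the form field, the User model, and the login endpoint are assumptions, not part of the file above.

from flask import flash, redirect, request, url_for

# `app` and `User` are assumed to be defined elsewhere in the application.
@app.route('/reset_password_request', methods=['POST'])
def reset_password_request():
    user = User.query.filter_by(email=request.form['email']).first()
    if user is not None:
        send_password_reset_email(user)
    flash('Check your email for instructions to reset your password')
    return redirect(url_for('login'))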